Jan 20 13:01:02 localhost kernel: Linux version 5.14.0-661.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-69.el9) #1 SMP PREEMPT_DYNAMIC Fri Jan 16 09:19:22 UTC 2026
Jan 20 13:01:02 localhost kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Jan 20 13:01:02 localhost kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64 root=UUID=22ac9141-3960-4912-b20e-19fc8a328d40 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Jan 20 13:01:02 localhost kernel: BIOS-provided physical RAM map:
Jan 20 13:01:02 localhost kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 20 13:01:02 localhost kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 20 13:01:02 localhost kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 20 13:01:02 localhost kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Jan 20 13:01:02 localhost kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Jan 20 13:01:02 localhost kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 20 13:01:02 localhost kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 20 13:01:02 localhost kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Jan 20 13:01:02 localhost kernel: NX (Execute Disable) protection: active
Jan 20 13:01:02 localhost kernel: APIC: Static calls initialized
Jan 20 13:01:02 localhost kernel: SMBIOS 2.8 present.
Jan 20 13:01:02 localhost kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Jan 20 13:01:02 localhost kernel: Hypervisor detected: KVM
Jan 20 13:01:02 localhost kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 20 13:01:02 localhost kernel: kvm-clock: using sched offset of 3306231928 cycles
Jan 20 13:01:02 localhost kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 20 13:01:02 localhost kernel: tsc: Detected 2799.998 MHz processor
Jan 20 13:01:02 localhost kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 20 13:01:02 localhost kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 20 13:01:02 localhost kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Jan 20 13:01:02 localhost kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 20 13:01:02 localhost kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Jan 20 13:01:02 localhost kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Jan 20 13:01:02 localhost kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Jan 20 13:01:02 localhost kernel: Using GB pages for direct mapping
Jan 20 13:01:02 localhost kernel: RAMDISK: [mem 0x2d426000-0x32a0afff]
Jan 20 13:01:02 localhost kernel: ACPI: Early table checksum verification disabled
Jan 20 13:01:02 localhost kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Jan 20 13:01:02 localhost kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 20 13:01:02 localhost kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 20 13:01:02 localhost kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 20 13:01:02 localhost kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Jan 20 13:01:02 localhost kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 20 13:01:02 localhost kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 20 13:01:02 localhost kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Jan 20 13:01:02 localhost kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Jan 20 13:01:02 localhost kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Jan 20 13:01:02 localhost kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Jan 20 13:01:02 localhost kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Jan 20 13:01:02 localhost kernel: No NUMA configuration found
Jan 20 13:01:02 localhost kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Jan 20 13:01:02 localhost kernel: NODE_DATA(0) allocated [mem 0x23ffd3000-0x23fffdfff]
Jan 20 13:01:02 localhost kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Jan 20 13:01:02 localhost kernel: Zone ranges:
Jan 20 13:01:02 localhost kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Jan 20 13:01:02 localhost kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Jan 20 13:01:02 localhost kernel:   Normal   [mem 0x0000000100000000-0x000000023fffffff]
Jan 20 13:01:02 localhost kernel:   Device   empty
Jan 20 13:01:02 localhost kernel: Movable zone start for each node
Jan 20 13:01:02 localhost kernel: Early memory node ranges
Jan 20 13:01:02 localhost kernel:   node   0: [mem 0x0000000000001000-0x000000000009efff]
Jan 20 13:01:02 localhost kernel:   node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Jan 20 13:01:02 localhost kernel:   node   0: [mem 0x0000000100000000-0x000000023fffffff]
Jan 20 13:01:02 localhost kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Jan 20 13:01:02 localhost kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 20 13:01:02 localhost kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 20 13:01:02 localhost kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Jan 20 13:01:02 localhost kernel: ACPI: PM-Timer IO Port: 0x608
Jan 20 13:01:02 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 20 13:01:02 localhost kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 20 13:01:02 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 20 13:01:02 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 20 13:01:02 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 20 13:01:02 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 20 13:01:02 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 20 13:01:02 localhost kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 20 13:01:02 localhost kernel: TSC deadline timer available
Jan 20 13:01:02 localhost kernel: CPU topo: Max. logical packages:   8
Jan 20 13:01:02 localhost kernel: CPU topo: Max. logical dies:       8
Jan 20 13:01:02 localhost kernel: CPU topo: Max. dies per package:   1
Jan 20 13:01:02 localhost kernel: CPU topo: Max. threads per core:   1
Jan 20 13:01:02 localhost kernel: CPU topo: Num. cores per package:     1
Jan 20 13:01:02 localhost kernel: CPU topo: Num. threads per package:   1
Jan 20 13:01:02 localhost kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Jan 20 13:01:02 localhost kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 20 13:01:02 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Jan 20 13:01:02 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Jan 20 13:01:02 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Jan 20 13:01:02 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Jan 20 13:01:02 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Jan 20 13:01:02 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Jan 20 13:01:02 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Jan 20 13:01:02 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Jan 20 13:01:02 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Jan 20 13:01:02 localhost kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Jan 20 13:01:02 localhost kernel: Booting paravirtualized kernel on KVM
Jan 20 13:01:02 localhost kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 20 13:01:02 localhost kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Jan 20 13:01:02 localhost kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Jan 20 13:01:02 localhost kernel: pcpu-alloc: s225280 r8192 d28672 u262144 alloc=1*2097152
Jan 20 13:01:02 localhost kernel: pcpu-alloc: [0] 0 1 2 3 4 5 6 7 
Jan 20 13:01:02 localhost kernel: kvm-guest: PV spinlocks disabled, no host support
Jan 20 13:01:02 localhost kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64 root=UUID=22ac9141-3960-4912-b20e-19fc8a328d40 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Jan 20 13:01:02 localhost kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64", will be passed to user space.
Jan 20 13:01:02 localhost kernel: random: crng init done
Jan 20 13:01:02 localhost kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Jan 20 13:01:02 localhost kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 20 13:01:02 localhost kernel: Fallback order for Node 0: 0 
Jan 20 13:01:02 localhost kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Jan 20 13:01:02 localhost kernel: Policy zone: Normal
Jan 20 13:01:02 localhost kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 20 13:01:02 localhost kernel: software IO TLB: area num 8.
Jan 20 13:01:02 localhost kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Jan 20 13:01:02 localhost kernel: ftrace: allocating 49417 entries in 194 pages
Jan 20 13:01:02 localhost kernel: ftrace: allocated 194 pages with 3 groups
Jan 20 13:01:02 localhost kernel: Dynamic Preempt: voluntary
Jan 20 13:01:02 localhost kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 20 13:01:02 localhost kernel: rcu:         RCU event tracing is enabled.
Jan 20 13:01:02 localhost kernel: rcu:         RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Jan 20 13:01:02 localhost kernel:         Trampoline variant of Tasks RCU enabled.
Jan 20 13:01:02 localhost kernel:         Rude variant of Tasks RCU enabled.
Jan 20 13:01:02 localhost kernel:         Tracing variant of Tasks RCU enabled.
Jan 20 13:01:02 localhost kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 20 13:01:02 localhost kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Jan 20 13:01:02 localhost kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Jan 20 13:01:02 localhost kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Jan 20 13:01:02 localhost kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Jan 20 13:01:02 localhost kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Jan 20 13:01:02 localhost kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 20 13:01:02 localhost kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Jan 20 13:01:02 localhost kernel: Console: colour VGA+ 80x25
Jan 20 13:01:02 localhost kernel: printk: console [ttyS0] enabled
Jan 20 13:01:02 localhost kernel: ACPI: Core revision 20230331
Jan 20 13:01:02 localhost kernel: APIC: Switch to symmetric I/O mode setup
Jan 20 13:01:02 localhost kernel: x2apic enabled
Jan 20 13:01:02 localhost kernel: APIC: Switched APIC routing to: physical x2apic
Jan 20 13:01:02 localhost kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 20 13:01:02 localhost kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998)
Jan 20 13:01:02 localhost kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 20 13:01:02 localhost kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 20 13:01:02 localhost kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 20 13:01:02 localhost kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 20 13:01:02 localhost kernel: Spectre V2 : Mitigation: Retpolines
Jan 20 13:01:02 localhost kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 20 13:01:02 localhost kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jan 20 13:01:02 localhost kernel: RETBleed: Mitigation: untrained return thunk
Jan 20 13:01:02 localhost kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 20 13:01:02 localhost kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 20 13:01:02 localhost kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 20 13:01:02 localhost kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 20 13:01:02 localhost kernel: x86/bugs: return thunk changed
Jan 20 13:01:02 localhost kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 20 13:01:02 localhost kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 20 13:01:02 localhost kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 20 13:01:02 localhost kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 20 13:01:02 localhost kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Jan 20 13:01:02 localhost kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 20 13:01:02 localhost kernel: Freeing SMP alternatives memory: 40K
Jan 20 13:01:02 localhost kernel: pid_max: default: 32768 minimum: 301
Jan 20 13:01:02 localhost kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Jan 20 13:01:02 localhost kernel: landlock: Up and running.
Jan 20 13:01:02 localhost kernel: Yama: becoming mindful.
Jan 20 13:01:02 localhost kernel: SELinux:  Initializing.
Jan 20 13:01:02 localhost kernel: LSM support for eBPF active
Jan 20 13:01:02 localhost kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 20 13:01:02 localhost kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 20 13:01:02 localhost kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jan 20 13:01:02 localhost kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jan 20 13:01:02 localhost kernel: ... version:                0
Jan 20 13:01:02 localhost kernel: ... bit width:              48
Jan 20 13:01:02 localhost kernel: ... generic registers:      6
Jan 20 13:01:02 localhost kernel: ... value mask:             0000ffffffffffff
Jan 20 13:01:02 localhost kernel: ... max period:             00007fffffffffff
Jan 20 13:01:02 localhost kernel: ... fixed-purpose events:   0
Jan 20 13:01:02 localhost kernel: ... event mask:             000000000000003f
Jan 20 13:01:02 localhost kernel: signal: max sigframe size: 1776
Jan 20 13:01:02 localhost kernel: rcu: Hierarchical SRCU implementation.
Jan 20 13:01:02 localhost kernel: rcu:         Max phase no-delay instances is 400.
Jan 20 13:01:02 localhost kernel: smp: Bringing up secondary CPUs ...
Jan 20 13:01:02 localhost kernel: smpboot: x86: Booting SMP configuration:
Jan 20 13:01:02 localhost kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Jan 20 13:01:02 localhost kernel: smp: Brought up 1 node, 8 CPUs
Jan 20 13:01:02 localhost kernel: smpboot: Total of 8 processors activated (44799.96 BogoMIPS)
Jan 20 13:01:02 localhost kernel: node 0 deferred pages initialised in 10ms
Jan 20 13:01:02 localhost kernel: Memory: 7763736K/8388068K available (16384K kernel code, 5797K rwdata, 13916K rodata, 4200K init, 7192K bss, 618364K reserved, 0K cma-reserved)
Jan 20 13:01:02 localhost kernel: devtmpfs: initialized
Jan 20 13:01:02 localhost kernel: x86/mm: Memory block size: 128MB
Jan 20 13:01:02 localhost kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 20 13:01:02 localhost kernel: futex hash table entries: 2048 (131072 bytes on 1 NUMA nodes, total 128 KiB, linear).
Jan 20 13:01:02 localhost kernel: pinctrl core: initialized pinctrl subsystem
Jan 20 13:01:02 localhost kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 20 13:01:02 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Jan 20 13:01:02 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 20 13:01:02 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 20 13:01:02 localhost kernel: audit: initializing netlink subsys (disabled)
Jan 20 13:01:02 localhost kernel: audit: type=2000 audit(1768914059.891:1): state=initialized audit_enabled=0 res=1
Jan 20 13:01:02 localhost kernel: thermal_sys: Registered thermal governor 'fair_share'
Jan 20 13:01:02 localhost kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 20 13:01:02 localhost kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 20 13:01:02 localhost kernel: cpuidle: using governor menu
Jan 20 13:01:02 localhost kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 20 13:01:02 localhost kernel: PCI: Using configuration type 1 for base access
Jan 20 13:01:02 localhost kernel: PCI: Using configuration type 1 for extended access
Jan 20 13:01:02 localhost kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 20 13:01:02 localhost kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 20 13:01:02 localhost kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 20 13:01:02 localhost kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 20 13:01:02 localhost kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 20 13:01:02 localhost kernel: Demotion targets for Node 0: null
Jan 20 13:01:02 localhost kernel: cryptd: max_cpu_qlen set to 1000
Jan 20 13:01:02 localhost kernel: ACPI: Added _OSI(Module Device)
Jan 20 13:01:02 localhost kernel: ACPI: Added _OSI(Processor Device)
Jan 20 13:01:02 localhost kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 20 13:01:02 localhost kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 20 13:01:02 localhost kernel: ACPI: Interpreter enabled
Jan 20 13:01:02 localhost kernel: ACPI: PM: (supports S0 S3 S4 S5)
Jan 20 13:01:02 localhost kernel: ACPI: Using IOAPIC for interrupt routing
Jan 20 13:01:02 localhost kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 20 13:01:02 localhost kernel: PCI: Using E820 reservations for host bridge windows
Jan 20 13:01:02 localhost kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jan 20 13:01:02 localhost kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 20 13:01:02 localhost kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Jan 20 13:01:02 localhost kernel: acpiphp: Slot [3] registered
Jan 20 13:01:02 localhost kernel: acpiphp: Slot [4] registered
Jan 20 13:01:02 localhost kernel: acpiphp: Slot [5] registered
Jan 20 13:01:02 localhost kernel: acpiphp: Slot [6] registered
Jan 20 13:01:02 localhost kernel: acpiphp: Slot [7] registered
Jan 20 13:01:02 localhost kernel: acpiphp: Slot [8] registered
Jan 20 13:01:02 localhost kernel: acpiphp: Slot [9] registered
Jan 20 13:01:02 localhost kernel: acpiphp: Slot [10] registered
Jan 20 13:01:02 localhost kernel: acpiphp: Slot [11] registered
Jan 20 13:01:02 localhost kernel: acpiphp: Slot [12] registered
Jan 20 13:01:02 localhost kernel: acpiphp: Slot [13] registered
Jan 20 13:01:02 localhost kernel: acpiphp: Slot [14] registered
Jan 20 13:01:02 localhost kernel: acpiphp: Slot [15] registered
Jan 20 13:01:02 localhost kernel: acpiphp: Slot [16] registered
Jan 20 13:01:02 localhost kernel: acpiphp: Slot [17] registered
Jan 20 13:01:02 localhost kernel: acpiphp: Slot [18] registered
Jan 20 13:01:02 localhost kernel: acpiphp: Slot [19] registered
Jan 20 13:01:02 localhost kernel: acpiphp: Slot [20] registered
Jan 20 13:01:02 localhost kernel: acpiphp: Slot [21] registered
Jan 20 13:01:02 localhost kernel: acpiphp: Slot [22] registered
Jan 20 13:01:02 localhost kernel: acpiphp: Slot [23] registered
Jan 20 13:01:02 localhost kernel: acpiphp: Slot [24] registered
Jan 20 13:01:02 localhost kernel: acpiphp: Slot [25] registered
Jan 20 13:01:02 localhost kernel: acpiphp: Slot [26] registered
Jan 20 13:01:02 localhost kernel: acpiphp: Slot [27] registered
Jan 20 13:01:02 localhost kernel: acpiphp: Slot [28] registered
Jan 20 13:01:02 localhost kernel: acpiphp: Slot [29] registered
Jan 20 13:01:02 localhost kernel: acpiphp: Slot [30] registered
Jan 20 13:01:02 localhost kernel: acpiphp: Slot [31] registered
Jan 20 13:01:02 localhost kernel: PCI host bridge to bus 0000:00
Jan 20 13:01:02 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Jan 20 13:01:02 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Jan 20 13:01:02 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 20 13:01:02 localhost kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 20 13:01:02 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Jan 20 13:01:02 localhost kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 20 13:01:02 localhost kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Jan 20 13:01:02 localhost kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Jan 20 13:01:02 localhost kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Jan 20 13:01:02 localhost kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Jan 20 13:01:02 localhost kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Jan 20 13:01:02 localhost kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Jan 20 13:01:02 localhost kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Jan 20 13:01:02 localhost kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Jan 20 13:01:02 localhost kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Jan 20 13:01:02 localhost kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Jan 20 13:01:02 localhost kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Jan 20 13:01:02 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Jan 20 13:01:02 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Jan 20 13:01:02 localhost kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Jan 20 13:01:02 localhost kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Jan 20 13:01:02 localhost kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Jan 20 13:01:02 localhost kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Jan 20 13:01:02 localhost kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Jan 20 13:01:02 localhost kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 20 13:01:02 localhost kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 20 13:01:02 localhost kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Jan 20 13:01:02 localhost kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Jan 20 13:01:02 localhost kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Jan 20 13:01:02 localhost kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Jan 20 13:01:02 localhost kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jan 20 13:01:02 localhost kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Jan 20 13:01:02 localhost kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Jan 20 13:01:02 localhost kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Jan 20 13:01:02 localhost kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Jan 20 13:01:02 localhost kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Jan 20 13:01:02 localhost kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Jan 20 13:01:02 localhost kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jan 20 13:01:02 localhost kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Jan 20 13:01:02 localhost kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Jan 20 13:01:02 localhost kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 20 13:01:02 localhost kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 20 13:01:02 localhost kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 20 13:01:02 localhost kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 20 13:01:02 localhost kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jan 20 13:01:02 localhost kernel: iommu: Default domain type: Translated
Jan 20 13:01:02 localhost kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 20 13:01:02 localhost kernel: SCSI subsystem initialized
Jan 20 13:01:02 localhost kernel: ACPI: bus type USB registered
Jan 20 13:01:02 localhost kernel: usbcore: registered new interface driver usbfs
Jan 20 13:01:02 localhost kernel: usbcore: registered new interface driver hub
Jan 20 13:01:02 localhost kernel: usbcore: registered new device driver usb
Jan 20 13:01:02 localhost kernel: pps_core: LinuxPPS API ver. 1 registered
Jan 20 13:01:02 localhost kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Jan 20 13:01:02 localhost kernel: PTP clock support registered
Jan 20 13:01:02 localhost kernel: EDAC MC: Ver: 3.0.0
Jan 20 13:01:02 localhost kernel: NetLabel: Initializing
Jan 20 13:01:02 localhost kernel: NetLabel:  domain hash size = 128
Jan 20 13:01:02 localhost kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Jan 20 13:01:02 localhost kernel: NetLabel:  unlabeled traffic allowed by default
Jan 20 13:01:02 localhost kernel: PCI: Using ACPI for IRQ routing
Jan 20 13:01:02 localhost kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 20 13:01:02 localhost kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 20 13:01:02 localhost kernel: e820: reserve RAM buffer [mem 0xbffdb000-0xbfffffff]
Jan 20 13:01:02 localhost kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Jan 20 13:01:02 localhost kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Jan 20 13:01:02 localhost kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 20 13:01:02 localhost kernel: vgaarb: loaded
Jan 20 13:01:02 localhost kernel: clocksource: Switched to clocksource kvm-clock
Jan 20 13:01:02 localhost kernel: VFS: Disk quotas dquot_6.6.0
Jan 20 13:01:02 localhost kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 20 13:01:02 localhost kernel: pnp: PnP ACPI init
Jan 20 13:01:02 localhost kernel: pnp 00:03: [dma 2]
Jan 20 13:01:02 localhost kernel: pnp: PnP ACPI: found 5 devices
Jan 20 13:01:02 localhost kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 20 13:01:02 localhost kernel: NET: Registered PF_INET protocol family
Jan 20 13:01:02 localhost kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 20 13:01:02 localhost kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Jan 20 13:01:02 localhost kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 20 13:01:02 localhost kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 20 13:01:02 localhost kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Jan 20 13:01:02 localhost kernel: TCP: Hash tables configured (established 65536 bind 65536)
Jan 20 13:01:02 localhost kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Jan 20 13:01:02 localhost kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 20 13:01:02 localhost kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 20 13:01:02 localhost kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 20 13:01:02 localhost kernel: NET: Registered PF_XDP protocol family
Jan 20 13:01:02 localhost kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Jan 20 13:01:02 localhost kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Jan 20 13:01:02 localhost kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 20 13:01:02 localhost kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Jan 20 13:01:02 localhost kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Jan 20 13:01:02 localhost kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Jan 20 13:01:02 localhost kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jan 20 13:01:02 localhost kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jan 20 13:01:02 localhost kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 70890 usecs
Jan 20 13:01:02 localhost kernel: PCI: CLS 0 bytes, default 64
Jan 20 13:01:02 localhost kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 20 13:01:02 localhost kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Jan 20 13:01:02 localhost kernel: ACPI: bus type thunderbolt registered
Jan 20 13:01:02 localhost kernel: Trying to unpack rootfs image as initramfs...
Jan 20 13:01:02 localhost kernel: Initialise system trusted keyrings
Jan 20 13:01:02 localhost kernel: Key type blacklist registered
Jan 20 13:01:02 localhost kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Jan 20 13:01:02 localhost kernel: zbud: loaded
Jan 20 13:01:02 localhost kernel: integrity: Platform Keyring initialized
Jan 20 13:01:02 localhost kernel: integrity: Machine keyring initialized
Jan 20 13:01:02 localhost kernel: Freeing initrd memory: 87956K
Jan 20 13:01:02 localhost kernel: NET: Registered PF_ALG protocol family
Jan 20 13:01:02 localhost kernel: xor: automatically using best checksumming function   avx       
Jan 20 13:01:02 localhost kernel: Key type asymmetric registered
Jan 20 13:01:02 localhost kernel: Asymmetric key parser 'x509' registered
Jan 20 13:01:02 localhost kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Jan 20 13:01:02 localhost kernel: io scheduler mq-deadline registered
Jan 20 13:01:02 localhost kernel: io scheduler kyber registered
Jan 20 13:01:02 localhost kernel: io scheduler bfq registered
Jan 20 13:01:02 localhost kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Jan 20 13:01:02 localhost kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Jan 20 13:01:02 localhost kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Jan 20 13:01:02 localhost kernel: ACPI: button: Power Button [PWRF]
Jan 20 13:01:02 localhost kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Jan 20 13:01:02 localhost kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jan 20 13:01:02 localhost kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jan 20 13:01:02 localhost kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 20 13:01:02 localhost kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 20 13:01:02 localhost kernel: Non-volatile memory driver v1.3
Jan 20 13:01:02 localhost kernel: rdac: device handler registered
Jan 20 13:01:02 localhost kernel: hp_sw: device handler registered
Jan 20 13:01:02 localhost kernel: emc: device handler registered
Jan 20 13:01:02 localhost kernel: alua: device handler registered
Jan 20 13:01:02 localhost kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Jan 20 13:01:02 localhost kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Jan 20 13:01:02 localhost kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Jan 20 13:01:02 localhost kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Jan 20 13:01:02 localhost kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Jan 20 13:01:02 localhost kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Jan 20 13:01:02 localhost kernel: usb usb1: Product: UHCI Host Controller
Jan 20 13:01:02 localhost kernel: usb usb1: Manufacturer: Linux 5.14.0-661.el9.x86_64 uhci_hcd
Jan 20 13:01:02 localhost kernel: usb usb1: SerialNumber: 0000:00:01.2
Jan 20 13:01:02 localhost kernel: hub 1-0:1.0: USB hub found
Jan 20 13:01:02 localhost kernel: hub 1-0:1.0: 2 ports detected
Jan 20 13:01:02 localhost kernel: usbcore: registered new interface driver usbserial_generic
Jan 20 13:01:02 localhost kernel: usbserial: USB Serial support registered for generic
Jan 20 13:01:02 localhost kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 20 13:01:02 localhost kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 20 13:01:02 localhost kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 20 13:01:02 localhost kernel: mousedev: PS/2 mouse device common for all mice
Jan 20 13:01:02 localhost kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 20 13:01:02 localhost kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Jan 20 13:01:02 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Jan 20 13:01:02 localhost kernel: rtc_cmos 00:04: registered as rtc0
Jan 20 13:01:02 localhost kernel: rtc_cmos 00:04: setting system clock to 2026-01-20T13:01:01 UTC (1768914061)
Jan 20 13:01:02 localhost kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Jan 20 13:01:02 localhost kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 20 13:01:02 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Jan 20 13:01:02 localhost kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 20 13:01:02 localhost kernel: usbcore: registered new interface driver usbhid
Jan 20 13:01:02 localhost kernel: usbhid: USB HID core driver
Jan 20 13:01:02 localhost kernel: drop_monitor: Initializing network drop monitor service
Jan 20 13:01:02 localhost kernel: Initializing XFRM netlink socket
Jan 20 13:01:02 localhost kernel: NET: Registered PF_INET6 protocol family
Jan 20 13:01:02 localhost kernel: Segment Routing with IPv6
Jan 20 13:01:02 localhost kernel: NET: Registered PF_PACKET protocol family
Jan 20 13:01:02 localhost kernel: mpls_gso: MPLS GSO support
Jan 20 13:01:02 localhost kernel: IPI shorthand broadcast: enabled
Jan 20 13:01:02 localhost kernel: AVX2 version of gcm_enc/dec engaged.
Jan 20 13:01:02 localhost kernel: AES CTR mode by8 optimization enabled
Jan 20 13:01:02 localhost kernel: sched_clock: Marking stable (1146001907, 149189416)->(1415794952, -120603629)
Jan 20 13:01:02 localhost kernel: registered taskstats version 1
Jan 20 13:01:02 localhost kernel: Loading compiled-in X.509 certificates
Jan 20 13:01:02 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 04453f216699002fd63185eeab832de990bee6d7'
Jan 20 13:01:02 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Jan 20 13:01:02 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Jan 20 13:01:02 localhost kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Jan 20 13:01:02 localhost kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Jan 20 13:01:02 localhost kernel: Demotion targets for Node 0: null
Jan 20 13:01:02 localhost kernel: page_owner is disabled
Jan 20 13:01:02 localhost kernel: Key type .fscrypt registered
Jan 20 13:01:02 localhost kernel: Key type fscrypt-provisioning registered
Jan 20 13:01:02 localhost kernel: Key type big_key registered
Jan 20 13:01:02 localhost kernel: Key type encrypted registered
Jan 20 13:01:02 localhost kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 20 13:01:02 localhost kernel: Loading compiled-in module X.509 certificates
Jan 20 13:01:02 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 04453f216699002fd63185eeab832de990bee6d7'
Jan 20 13:01:02 localhost kernel: ima: Allocated hash algorithm: sha256
Jan 20 13:01:02 localhost kernel: ima: No architecture policies found
Jan 20 13:01:02 localhost kernel: evm: Initialising EVM extended attributes:
Jan 20 13:01:02 localhost kernel: evm: security.selinux
Jan 20 13:01:02 localhost kernel: evm: security.SMACK64 (disabled)
Jan 20 13:01:02 localhost kernel: evm: security.SMACK64EXEC (disabled)
Jan 20 13:01:02 localhost kernel: evm: security.SMACK64TRANSMUTE (disabled)
Jan 20 13:01:02 localhost kernel: evm: security.SMACK64MMAP (disabled)
Jan 20 13:01:02 localhost kernel: evm: security.apparmor (disabled)
Jan 20 13:01:02 localhost kernel: evm: security.ima
Jan 20 13:01:02 localhost kernel: evm: security.capability
Jan 20 13:01:02 localhost kernel: evm: HMAC attrs: 0x1
Jan 20 13:01:02 localhost kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Jan 20 13:01:02 localhost kernel: Running certificate verification RSA selftest
Jan 20 13:01:02 localhost kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Jan 20 13:01:02 localhost kernel: Running certificate verification ECDSA selftest
Jan 20 13:01:02 localhost kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Jan 20 13:01:02 localhost kernel: clk: Disabling unused clocks
Jan 20 13:01:02 localhost kernel: Freeing unused decrypted memory: 2028K
Jan 20 13:01:02 localhost kernel: Freeing unused kernel image (initmem) memory: 4200K
Jan 20 13:01:02 localhost kernel: Write protecting the kernel read-only data: 30720k
Jan 20 13:01:02 localhost kernel: Freeing unused kernel image (rodata/data gap) memory: 420K
Jan 20 13:01:02 localhost kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Jan 20 13:01:02 localhost kernel: Run /init as init process
Jan 20 13:01:02 localhost kernel:   with arguments:
Jan 20 13:01:02 localhost kernel:     /init
Jan 20 13:01:02 localhost kernel:   with environment:
Jan 20 13:01:02 localhost kernel:     HOME=/
Jan 20 13:01:02 localhost kernel:     TERM=linux
Jan 20 13:01:02 localhost kernel:     BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64
Jan 20 13:01:02 localhost systemd[1]: systemd 252-64.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jan 20 13:01:02 localhost systemd[1]: Detected virtualization kvm.
Jan 20 13:01:02 localhost systemd[1]: Detected architecture x86-64.
Jan 20 13:01:02 localhost systemd[1]: Running in initrd.
Jan 20 13:01:02 localhost systemd[1]: No hostname configured, using default hostname.
Jan 20 13:01:02 localhost systemd[1]: Hostname set to <localhost>.
Jan 20 13:01:02 localhost systemd[1]: Initializing machine ID from VM UUID.
Jan 20 13:01:02 localhost kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Jan 20 13:01:02 localhost kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Jan 20 13:01:02 localhost kernel: usb 1-1: Product: QEMU USB Tablet
Jan 20 13:01:02 localhost kernel: usb 1-1: Manufacturer: QEMU
Jan 20 13:01:02 localhost kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Jan 20 13:01:02 localhost kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Jan 20 13:01:02 localhost kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Jan 20 13:01:02 localhost systemd[1]: Queued start job for default target Initrd Default Target.
Jan 20 13:01:02 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Jan 20 13:01:02 localhost systemd[1]: Reached target Local Encrypted Volumes.
Jan 20 13:01:02 localhost systemd[1]: Reached target Initrd /usr File System.
Jan 20 13:01:02 localhost systemd[1]: Reached target Local File Systems.
Jan 20 13:01:02 localhost systemd[1]: Reached target Path Units.
Jan 20 13:01:02 localhost systemd[1]: Reached target Slice Units.
Jan 20 13:01:02 localhost systemd[1]: Reached target Swaps.
Jan 20 13:01:02 localhost systemd[1]: Reached target Timer Units.
Jan 20 13:01:02 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Jan 20 13:01:02 localhost systemd[1]: Listening on Journal Socket (/dev/log).
Jan 20 13:01:02 localhost systemd[1]: Listening on Journal Socket.
Jan 20 13:01:02 localhost systemd[1]: Listening on udev Control Socket.
Jan 20 13:01:02 localhost systemd[1]: Listening on udev Kernel Socket.
Jan 20 13:01:02 localhost systemd[1]: Reached target Socket Units.
Jan 20 13:01:02 localhost systemd[1]: Starting Create List of Static Device Nodes...
Jan 20 13:01:02 localhost systemd[1]: Starting Journal Service...
Jan 20 13:01:02 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Jan 20 13:01:02 localhost systemd[1]: Starting Apply Kernel Variables...
Jan 20 13:01:02 localhost systemd[1]: Starting Create System Users...
Jan 20 13:01:02 localhost systemd[1]: Starting Setup Virtual Console...
Jan 20 13:01:02 localhost systemd[1]: Finished Create List of Static Device Nodes.
Jan 20 13:01:02 localhost systemd[1]: Finished Apply Kernel Variables.
Jan 20 13:01:02 localhost systemd-journald[306]: Journal started
Jan 20 13:01:02 localhost systemd-journald[306]: Runtime Journal (/run/log/journal/35085f331a2741e3805d02c7ac6a1d7f) is 8.0M, max 153.6M, 145.6M free.
Jan 20 13:01:02 localhost systemd[1]: Started Journal Service.
Jan 20 13:01:02 localhost systemd-sysusers[311]: Creating group 'users' with GID 100.
Jan 20 13:01:02 localhost systemd-sysusers[311]: Creating group 'dbus' with GID 81.
Jan 20 13:01:02 localhost systemd-sysusers[311]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Jan 20 13:01:02 localhost systemd[1]: Finished Create System Users.
Jan 20 13:01:02 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Jan 20 13:01:02 localhost systemd[1]: Starting Create Volatile Files and Directories...
Jan 20 13:01:02 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Jan 20 13:01:02 localhost systemd[1]: Finished Create Volatile Files and Directories.
Jan 20 13:01:02 localhost systemd[1]: Finished Setup Virtual Console.
Jan 20 13:01:02 localhost systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Jan 20 13:01:02 localhost systemd[1]: Starting dracut cmdline hook...
Jan 20 13:01:02 localhost dracut-cmdline[327]: dracut-9 dracut-057-102.git20250818.el9
Jan 20 13:01:02 localhost dracut-cmdline[327]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64 root=UUID=22ac9141-3960-4912-b20e-19fc8a328d40 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Jan 20 13:01:02 localhost systemd[1]: Finished dracut cmdline hook.
Jan 20 13:01:02 localhost systemd[1]: Starting dracut pre-udev hook...
Jan 20 13:01:02 localhost kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 20 13:01:02 localhost kernel: device-mapper: uevent: version 1.0.3
Jan 20 13:01:02 localhost kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Jan 20 13:01:02 localhost kernel: RPC: Registered named UNIX socket transport module.
Jan 20 13:01:02 localhost kernel: RPC: Registered udp transport module.
Jan 20 13:01:02 localhost kernel: RPC: Registered tcp transport module.
Jan 20 13:01:02 localhost kernel: RPC: Registered tcp-with-tls transport module.
Jan 20 13:01:02 localhost kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Jan 20 13:01:02 localhost rpc.statd[443]: Version 2.5.4 starting
Jan 20 13:01:02 localhost rpc.statd[443]: Initializing NSM state
Jan 20 13:01:02 localhost rpc.idmapd[448]: Setting log level to 0
Jan 20 13:01:02 localhost systemd[1]: Finished dracut pre-udev hook.
Jan 20 13:01:02 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Jan 20 13:01:02 localhost systemd-udevd[461]: Using default interface naming scheme 'rhel-9.0'.
Jan 20 13:01:02 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Jan 20 13:01:02 localhost systemd[1]: Starting dracut pre-trigger hook...
Jan 20 13:01:02 localhost systemd[1]: Finished dracut pre-trigger hook.
Jan 20 13:01:02 localhost systemd[1]: Starting Coldplug All udev Devices...
Jan 20 13:01:02 localhost systemd[1]: Created slice Slice /system/modprobe.
Jan 20 13:01:02 localhost systemd[1]: Starting Load Kernel Module configfs...
Jan 20 13:01:02 localhost systemd[1]: Finished Coldplug All udev Devices.
Jan 20 13:01:02 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 20 13:01:02 localhost systemd[1]: Finished Load Kernel Module configfs.
Jan 20 13:01:02 localhost systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Jan 20 13:01:02 localhost systemd[1]: Reached target Network.
Jan 20 13:01:02 localhost systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Jan 20 13:01:02 localhost systemd[1]: Starting dracut initqueue hook...
Jan 20 13:01:03 localhost kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Jan 20 13:01:03 localhost kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Jan 20 13:01:03 localhost kernel:  vda: vda1
Jan 20 13:01:03 localhost kernel: libata version 3.00 loaded.
Jan 20 13:01:03 localhost kernel: ata_piix 0000:00:01.1: version 2.13
Jan 20 13:01:03 localhost systemd[1]: Mounting Kernel Configuration File System...
Jan 20 13:01:03 localhost kernel: scsi host0: ata_piix
Jan 20 13:01:03 localhost systemd[1]: Mounted Kernel Configuration File System.
Jan 20 13:01:03 localhost kernel: scsi host1: ata_piix
Jan 20 13:01:03 localhost kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Jan 20 13:01:03 localhost kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Jan 20 13:01:03 localhost systemd[1]: Reached target System Initialization.
Jan 20 13:01:03 localhost systemd[1]: Reached target Basic System.
Jan 20 13:01:03 localhost systemd[1]: Found device /dev/disk/by-uuid/22ac9141-3960-4912-b20e-19fc8a328d40.
Jan 20 13:01:03 localhost systemd[1]: Reached target Initrd Root Device.
Jan 20 13:01:03 localhost kernel: ata1: found unknown device (class 0)
Jan 20 13:01:03 localhost kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jan 20 13:01:03 localhost kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Jan 20 13:01:03 localhost systemd-udevd[478]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 13:01:03 localhost kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Jan 20 13:01:03 localhost kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jan 20 13:01:03 localhost kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 20 13:01:03 localhost kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Jan 20 13:01:03 localhost systemd[1]: Finished dracut initqueue hook.
Jan 20 13:01:03 localhost systemd[1]: Reached target Preparation for Remote File Systems.
Jan 20 13:01:03 localhost systemd[1]: Reached target Remote Encrypted Volumes.
Jan 20 13:01:03 localhost systemd[1]: Reached target Remote File Systems.
Jan 20 13:01:03 localhost systemd[1]: Starting dracut pre-mount hook...
Jan 20 13:01:03 localhost systemd[1]: Finished dracut pre-mount hook.
Jan 20 13:01:03 localhost systemd[1]: Starting File System Check on /dev/disk/by-uuid/22ac9141-3960-4912-b20e-19fc8a328d40...
Jan 20 13:01:03 localhost systemd-fsck[554]: /usr/sbin/fsck.xfs: XFS file system.
Jan 20 13:01:03 localhost systemd[1]: Finished File System Check on /dev/disk/by-uuid/22ac9141-3960-4912-b20e-19fc8a328d40.
Jan 20 13:01:03 localhost systemd[1]: Mounting /sysroot...
Jan 20 13:01:03 localhost kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Jan 20 13:01:03 localhost kernel: XFS (vda1): Mounting V5 Filesystem 22ac9141-3960-4912-b20e-19fc8a328d40
Jan 20 13:01:04 localhost kernel: XFS (vda1): Ending clean mount
Jan 20 13:01:04 localhost systemd[1]: Mounted /sysroot.
Jan 20 13:01:04 localhost systemd[1]: Reached target Initrd Root File System.
Jan 20 13:01:04 localhost systemd[1]: Starting Mountpoints Configured in the Real Root...
Jan 20 13:01:04 localhost systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 20 13:01:04 localhost systemd[1]: Finished Mountpoints Configured in the Real Root.
Jan 20 13:01:04 localhost systemd[1]: Reached target Initrd File Systems.
Jan 20 13:01:04 localhost systemd[1]: Reached target Initrd Default Target.
Jan 20 13:01:04 localhost systemd[1]: Starting dracut mount hook...
Jan 20 13:01:04 localhost systemd[1]: Finished dracut mount hook.
Jan 20 13:01:04 localhost systemd[1]: Starting dracut pre-pivot and cleanup hook...
Jan 20 13:01:04 localhost rpc.idmapd[448]: exiting on signal 15
Jan 20 13:01:04 localhost systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Jan 20 13:01:04 localhost systemd[1]: Finished dracut pre-pivot and cleanup hook.
Jan 20 13:01:04 localhost systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Jan 20 13:01:04 localhost systemd[1]: Stopped target Network.
Jan 20 13:01:04 localhost systemd[1]: Stopped target Remote Encrypted Volumes.
Jan 20 13:01:04 localhost systemd[1]: Stopped target Timer Units.
Jan 20 13:01:04 localhost systemd[1]: dbus.socket: Deactivated successfully.
Jan 20 13:01:04 localhost systemd[1]: Closed D-Bus System Message Bus Socket.
Jan 20 13:01:04 localhost systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 20 13:01:04 localhost systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Jan 20 13:01:04 localhost systemd[1]: Stopped target Initrd Default Target.
Jan 20 13:01:04 localhost systemd[1]: Stopped target Basic System.
Jan 20 13:01:04 localhost systemd[1]: Stopped target Initrd Root Device.
Jan 20 13:01:04 localhost systemd[1]: Stopped target Initrd /usr File System.
Jan 20 13:01:04 localhost systemd[1]: Stopped target Path Units.
Jan 20 13:01:04 localhost systemd[1]: Stopped target Remote File Systems.
Jan 20 13:01:04 localhost systemd[1]: Stopped target Preparation for Remote File Systems.
Jan 20 13:01:04 localhost systemd[1]: Stopped target Slice Units.
Jan 20 13:01:04 localhost systemd[1]: Stopped target Socket Units.
Jan 20 13:01:04 localhost systemd[1]: Stopped target System Initialization.
Jan 20 13:01:04 localhost systemd[1]: Stopped target Local File Systems.
Jan 20 13:01:04 localhost systemd[1]: Stopped target Swaps.
Jan 20 13:01:04 localhost systemd[1]: dracut-mount.service: Deactivated successfully.
Jan 20 13:01:04 localhost systemd[1]: Stopped dracut mount hook.
Jan 20 13:01:04 localhost systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 20 13:01:04 localhost systemd[1]: Stopped dracut pre-mount hook.
Jan 20 13:01:04 localhost systemd[1]: Stopped target Local Encrypted Volumes.
Jan 20 13:01:04 localhost systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 20 13:01:04 localhost systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Jan 20 13:01:04 localhost systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 20 13:01:04 localhost systemd[1]: Stopped dracut initqueue hook.
Jan 20 13:01:04 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 20 13:01:04 localhost systemd[1]: Stopped Apply Kernel Variables.
Jan 20 13:01:04 localhost systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 20 13:01:04 localhost systemd[1]: Stopped Create Volatile Files and Directories.
Jan 20 13:01:04 localhost systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 20 13:01:04 localhost systemd[1]: Stopped Coldplug All udev Devices.
Jan 20 13:01:04 localhost systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 20 13:01:04 localhost systemd[1]: Stopped dracut pre-trigger hook.
Jan 20 13:01:04 localhost systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Jan 20 13:01:04 localhost systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 20 13:01:04 localhost systemd[1]: Stopped Setup Virtual Console.
Jan 20 13:01:04 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jan 20 13:01:04 localhost systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jan 20 13:01:04 localhost systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 20 13:01:04 localhost systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Jan 20 13:01:04 localhost systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 20 13:01:04 localhost systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Jan 20 13:01:04 localhost systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 20 13:01:04 localhost systemd[1]: Closed udev Control Socket.
Jan 20 13:01:04 localhost systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 20 13:01:04 localhost systemd[1]: Closed udev Kernel Socket.
Jan 20 13:01:04 localhost systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 20 13:01:04 localhost systemd[1]: Stopped dracut pre-udev hook.
Jan 20 13:01:04 localhost systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 20 13:01:04 localhost systemd[1]: Stopped dracut cmdline hook.
Jan 20 13:01:04 localhost systemd[1]: Starting Cleanup udev Database...
Jan 20 13:01:04 localhost systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 20 13:01:04 localhost systemd[1]: Stopped Create Static Device Nodes in /dev.
Jan 20 13:01:04 localhost systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 20 13:01:04 localhost systemd[1]: Stopped Create List of Static Device Nodes.
Jan 20 13:01:04 localhost systemd[1]: systemd-sysusers.service: Deactivated successfully.
Jan 20 13:01:04 localhost systemd[1]: Stopped Create System Users.
Jan 20 13:01:04 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jan 20 13:01:04 localhost systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Jan 20 13:01:04 localhost systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 20 13:01:04 localhost systemd[1]: Finished Cleanup udev Database.
Jan 20 13:01:04 localhost systemd[1]: Reached target Switch Root.
Jan 20 13:01:04 localhost systemd[1]: Starting Switch Root...
Jan 20 13:01:04 localhost systemd[1]: Switching root.
Jan 20 13:01:04 localhost systemd-journald[306]: Received SIGTERM from PID 1 (systemd).
Jan 20 13:01:04 localhost systemd-journald[306]: Journal stopped
Jan 20 13:01:05 localhost kernel: audit: type=1404 audit(1768914064.937:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Jan 20 13:01:05 localhost kernel: SELinux:  policy capability network_peer_controls=1
Jan 20 13:01:05 localhost kernel: SELinux:  policy capability open_perms=1
Jan 20 13:01:05 localhost kernel: SELinux:  policy capability extended_socket_class=1
Jan 20 13:01:05 localhost kernel: SELinux:  policy capability always_check_network=0
Jan 20 13:01:05 localhost kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 20 13:01:05 localhost kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 20 13:01:05 localhost kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 20 13:01:05 localhost kernel: audit: type=1403 audit(1768914065.061:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 20 13:01:05 localhost systemd[1]: Successfully loaded SELinux policy in 127.362ms.
Jan 20 13:01:05 localhost systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 27.635ms.
Jan 20 13:01:05 localhost systemd[1]: systemd 252-64.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jan 20 13:01:05 localhost systemd[1]: Detected virtualization kvm.
Jan 20 13:01:05 localhost systemd[1]: Detected architecture x86-64.
Jan 20 13:01:05 localhost systemd-rc-local-generator[636]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 13:01:05 localhost systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 20 13:01:05 localhost systemd[1]: Stopped Switch Root.
Jan 20 13:01:05 localhost systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 20 13:01:05 localhost systemd[1]: Created slice Slice /system/getty.
Jan 20 13:01:05 localhost systemd[1]: Created slice Slice /system/serial-getty.
Jan 20 13:01:05 localhost systemd[1]: Created slice Slice /system/sshd-keygen.
Jan 20 13:01:05 localhost systemd[1]: Created slice User and Session Slice.
Jan 20 13:01:05 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Jan 20 13:01:05 localhost systemd[1]: Started Forward Password Requests to Wall Directory Watch.
Jan 20 13:01:05 localhost systemd[1]: Set up automount Arbitrary Executable File Formats File System Automount Point.
Jan 20 13:01:05 localhost systemd[1]: Reached target Local Encrypted Volumes.
Jan 20 13:01:05 localhost systemd[1]: Stopped target Switch Root.
Jan 20 13:01:05 localhost systemd[1]: Stopped target Initrd File Systems.
Jan 20 13:01:05 localhost systemd[1]: Stopped target Initrd Root File System.
Jan 20 13:01:05 localhost systemd[1]: Reached target Local Integrity Protected Volumes.
Jan 20 13:01:05 localhost systemd[1]: Reached target Path Units.
Jan 20 13:01:05 localhost systemd[1]: Reached target rpc_pipefs.target.
Jan 20 13:01:05 localhost systemd[1]: Reached target Slice Units.
Jan 20 13:01:05 localhost systemd[1]: Reached target Swaps.
Jan 20 13:01:05 localhost systemd[1]: Reached target Local Verity Protected Volumes.
Jan 20 13:01:05 localhost systemd[1]: Listening on RPCbind Server Activation Socket.
Jan 20 13:01:05 localhost systemd[1]: Reached target RPC Port Mapper.
Jan 20 13:01:05 localhost systemd[1]: Listening on Process Core Dump Socket.
Jan 20 13:01:05 localhost systemd[1]: Listening on initctl Compatibility Named Pipe.
Jan 20 13:01:05 localhost systemd[1]: Listening on udev Control Socket.
Jan 20 13:01:05 localhost systemd[1]: Listening on udev Kernel Socket.
Jan 20 13:01:05 localhost systemd[1]: Mounting Huge Pages File System...
Jan 20 13:01:05 localhost systemd[1]: Mounting POSIX Message Queue File System...
Jan 20 13:01:05 localhost systemd[1]: Mounting Kernel Debug File System...
Jan 20 13:01:05 localhost systemd[1]: Mounting Kernel Trace File System...
Jan 20 13:01:05 localhost systemd[1]: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Jan 20 13:01:05 localhost systemd[1]: Starting Create List of Static Device Nodes...
Jan 20 13:01:05 localhost systemd[1]: Starting Load Kernel Module configfs...
Jan 20 13:01:05 localhost systemd[1]: Starting Load Kernel Module drm...
Jan 20 13:01:05 localhost systemd[1]: Starting Load Kernel Module efi_pstore...
Jan 20 13:01:05 localhost systemd[1]: Starting Load Kernel Module fuse...
Jan 20 13:01:05 localhost systemd[1]: Starting Read and set NIS domainname from /etc/sysconfig/network...
Jan 20 13:01:05 localhost systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 20 13:01:05 localhost systemd[1]: Stopped File System Check on Root Device.
Jan 20 13:01:05 localhost systemd[1]: Stopped Journal Service.
Jan 20 13:01:05 localhost systemd[1]: Starting Journal Service...
Jan 20 13:01:05 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Jan 20 13:01:05 localhost systemd[1]: Starting Generate network units from Kernel command line...
Jan 20 13:01:05 localhost systemd[1]: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jan 20 13:01:05 localhost systemd[1]: Starting Remount Root and Kernel File Systems...
Jan 20 13:01:05 localhost systemd[1]: Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 20 13:01:05 localhost systemd[1]: Starting Apply Kernel Variables...
Jan 20 13:01:05 localhost systemd[1]: Starting Coldplug All udev Devices...
Jan 20 13:01:05 localhost kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Jan 20 13:01:05 localhost systemd[1]: Mounted Huge Pages File System.
Jan 20 13:01:05 localhost systemd-journald[677]: Journal started
Jan 20 13:01:05 localhost systemd-journald[677]: Runtime Journal (/run/log/journal/85ac68c10a6e7ae08ceb898dbdca0cb5) is 8.0M, max 153.6M, 145.6M free.
Jan 20 13:01:05 localhost systemd[1]: Queued start job for default target Multi-User System.
Jan 20 13:01:05 localhost systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 20 13:01:05 localhost systemd[1]: Started Journal Service.
Jan 20 13:01:05 localhost systemd[1]: Mounted POSIX Message Queue File System.
Jan 20 13:01:05 localhost systemd[1]: Mounted Kernel Debug File System.
Jan 20 13:01:05 localhost systemd[1]: Mounted Kernel Trace File System.
Jan 20 13:01:05 localhost systemd[1]: Finished Create List of Static Device Nodes.
Jan 20 13:01:05 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 20 13:01:05 localhost systemd[1]: Finished Load Kernel Module configfs.
Jan 20 13:01:05 localhost systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 20 13:01:05 localhost systemd[1]: Finished Load Kernel Module efi_pstore.
Jan 20 13:01:05 localhost systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Jan 20 13:01:05 localhost systemd[1]: Finished Generate network units from Kernel command line.
Jan 20 13:01:05 localhost systemd[1]: Finished Remount Root and Kernel File Systems.
Jan 20 13:01:05 localhost kernel: ACPI: bus type drm_connector registered
Jan 20 13:01:05 localhost systemd[1]: Finished Apply Kernel Variables.
Jan 20 13:01:05 localhost kernel: fuse: init (API version 7.37)
Jan 20 13:01:05 localhost systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 20 13:01:05 localhost systemd[1]: Finished Load Kernel Module drm.
Jan 20 13:01:05 localhost systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 20 13:01:05 localhost systemd[1]: Finished Load Kernel Module fuse.
Jan 20 13:01:05 localhost systemd[1]: Mounting FUSE Control File System...
Jan 20 13:01:05 localhost systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Jan 20 13:01:05 localhost systemd[1]: Starting Rebuild Hardware Database...
Jan 20 13:01:05 localhost systemd[1]: Starting Flush Journal to Persistent Storage...
Jan 20 13:01:05 localhost systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 20 13:01:05 localhost systemd[1]: Starting Load/Save OS Random Seed...
Jan 20 13:01:05 localhost systemd[1]: Starting Create System Users...
Jan 20 13:01:05 localhost systemd[1]: Mounted FUSE Control File System.
Jan 20 13:01:05 localhost systemd-journald[677]: Runtime Journal (/run/log/journal/85ac68c10a6e7ae08ceb898dbdca0cb5) is 8.0M, max 153.6M, 145.6M free.
Jan 20 13:01:05 localhost systemd-journald[677]: Received client request to flush runtime journal.
Jan 20 13:01:05 localhost systemd[1]: Finished Flush Journal to Persistent Storage.
Jan 20 13:01:05 localhost systemd[1]: Finished Load/Save OS Random Seed.
Jan 20 13:01:05 localhost systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Jan 20 13:01:05 localhost systemd[1]: Finished Create System Users.
Jan 20 13:01:05 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Jan 20 13:01:05 localhost systemd[1]: Finished Coldplug All udev Devices.
Jan 20 13:01:05 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Jan 20 13:01:05 localhost systemd[1]: Reached target Preparation for Local File Systems.
Jan 20 13:01:05 localhost systemd[1]: Reached target Local File Systems.
Jan 20 13:01:05 localhost systemd[1]: Starting Rebuild Dynamic Linker Cache...
Jan 20 13:01:05 localhost systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Jan 20 13:01:05 localhost systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 20 13:01:05 localhost systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Jan 20 13:01:05 localhost systemd[1]: Starting Automatic Boot Loader Update...
Jan 20 13:01:05 localhost systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Jan 20 13:01:05 localhost systemd[1]: Starting Create Volatile Files and Directories...
Jan 20 13:01:05 localhost bootctl[695]: Couldn't find EFI system partition, skipping.
Jan 20 13:01:05 localhost systemd[1]: Finished Automatic Boot Loader Update.
Jan 20 13:01:05 localhost systemd[1]: Finished Create Volatile Files and Directories.
Jan 20 13:01:05 localhost systemd[1]: Starting Security Auditing Service...
Jan 20 13:01:05 localhost systemd[1]: Starting RPC Bind...
Jan 20 13:01:05 localhost systemd[1]: Starting Rebuild Journal Catalog...
Jan 20 13:01:05 localhost systemd[1]: Finished Rebuild Dynamic Linker Cache.
Jan 20 13:01:05 localhost auditd[701]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Jan 20 13:01:05 localhost auditd[701]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Jan 20 13:01:05 localhost systemd[1]: Finished Rebuild Journal Catalog.
Jan 20 13:01:05 localhost augenrules[706]: /sbin/augenrules: No change
Jan 20 13:01:06 localhost systemd[1]: Started RPC Bind.
Jan 20 13:01:06 localhost augenrules[721]: No rules
Jan 20 13:01:06 localhost augenrules[721]: enabled 1
Jan 20 13:01:06 localhost augenrules[721]: failure 1
Jan 20 13:01:06 localhost augenrules[721]: pid 701
Jan 20 13:01:06 localhost augenrules[721]: rate_limit 0
Jan 20 13:01:06 localhost augenrules[721]: backlog_limit 8192
Jan 20 13:01:06 localhost augenrules[721]: lost 0
Jan 20 13:01:06 localhost augenrules[721]: backlog 3
Jan 20 13:01:06 localhost augenrules[721]: backlog_wait_time 60000
Jan 20 13:01:06 localhost augenrules[721]: backlog_wait_time_actual 0
Jan 20 13:01:06 localhost augenrules[721]: enabled 1
Jan 20 13:01:06 localhost augenrules[721]: failure 1
Jan 20 13:01:06 localhost augenrules[721]: pid 701
Jan 20 13:01:06 localhost augenrules[721]: rate_limit 0
Jan 20 13:01:06 localhost augenrules[721]: backlog_limit 8192
Jan 20 13:01:06 localhost augenrules[721]: lost 0
Jan 20 13:01:06 localhost augenrules[721]: backlog 0
Jan 20 13:01:06 localhost augenrules[721]: backlog_wait_time 60000
Jan 20 13:01:06 localhost augenrules[721]: backlog_wait_time_actual 0
Jan 20 13:01:06 localhost augenrules[721]: enabled 1
Jan 20 13:01:06 localhost augenrules[721]: failure 1
Jan 20 13:01:06 localhost augenrules[721]: pid 701
Jan 20 13:01:06 localhost augenrules[721]: rate_limit 0
Jan 20 13:01:06 localhost augenrules[721]: backlog_limit 8192
Jan 20 13:01:06 localhost augenrules[721]: lost 0
Jan 20 13:01:06 localhost augenrules[721]: backlog 4
Jan 20 13:01:06 localhost augenrules[721]: backlog_wait_time 60000
Jan 20 13:01:06 localhost augenrules[721]: backlog_wait_time_actual 0
Jan 20 13:01:06 localhost systemd[1]: Started Security Auditing Service.
Jan 20 13:01:06 localhost systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Jan 20 13:01:06 localhost systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Jan 20 13:01:06 localhost systemd[1]: Finished Rebuild Hardware Database.
Jan 20 13:01:06 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Jan 20 13:01:06 localhost systemd[1]: Starting Update is Completed...
Jan 20 13:01:06 localhost systemd[1]: Finished Update is Completed.
Jan 20 13:01:06 localhost systemd-udevd[729]: Using default interface naming scheme 'rhel-9.0'.
Jan 20 13:01:06 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Jan 20 13:01:06 localhost systemd[1]: Reached target System Initialization.
Jan 20 13:01:06 localhost systemd[1]: Started dnf makecache --timer.
Jan 20 13:01:06 localhost systemd[1]: Started Daily rotation of log files.
Jan 20 13:01:06 localhost systemd[1]: Started Daily Cleanup of Temporary Directories.
Jan 20 13:01:06 localhost systemd[1]: Reached target Timer Units.
Jan 20 13:01:06 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Jan 20 13:01:06 localhost systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Jan 20 13:01:06 localhost systemd[1]: Reached target Socket Units.
Jan 20 13:01:06 localhost systemd-udevd[731]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 13:01:06 localhost systemd[1]: Starting D-Bus System Message Bus...
Jan 20 13:01:06 localhost systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jan 20 13:01:06 localhost systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Jan 20 13:01:06 localhost systemd[1]: Starting Load Kernel Module configfs...
Jan 20 13:01:06 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 20 13:01:06 localhost systemd[1]: Finished Load Kernel Module configfs.
Jan 20 13:01:06 localhost systemd[1]: Started D-Bus System Message Bus.
Jan 20 13:01:06 localhost systemd[1]: Reached target Basic System.
Jan 20 13:01:06 localhost dbus-broker-lau[768]: Ready
Jan 20 13:01:06 localhost systemd[1]: Starting NTP client/server...
Jan 20 13:01:06 localhost systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Jan 20 13:01:06 localhost systemd[1]: Starting Restore /run/initramfs on shutdown...
Jan 20 13:01:06 localhost systemd[1]: Starting IPv4 firewall with iptables...
Jan 20 13:01:06 localhost kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Jan 20 13:01:06 localhost kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Jan 20 13:01:06 localhost kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jan 20 13:01:06 localhost chronyd[791]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Jan 20 13:01:06 localhost chronyd[791]: Loaded 0 symmetric keys
Jan 20 13:01:06 localhost chronyd[791]: Using right/UTC timezone to obtain leap second data
Jan 20 13:01:06 localhost chronyd[791]: Loaded seccomp filter (level 2)
Jan 20 13:01:06 localhost kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Jan 20 13:01:06 localhost systemd[1]: Started irqbalance daemon.
Jan 20 13:01:06 localhost systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Jan 20 13:01:06 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 20 13:01:06 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 20 13:01:06 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 20 13:01:06 localhost systemd[1]: Reached target sshd-keygen.target.
Jan 20 13:01:06 localhost systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Jan 20 13:01:06 localhost systemd[1]: Reached target User and Group Name Lookups.
Jan 20 13:01:06 localhost systemd[1]: Starting User Login Management...
Jan 20 13:01:06 localhost kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Jan 20 13:01:06 localhost kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Jan 20 13:01:06 localhost kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Jan 20 13:01:06 localhost kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Jan 20 13:01:06 localhost kernel: Console: switching to colour dummy device 80x25
Jan 20 13:01:06 localhost kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Jan 20 13:01:06 localhost kernel: [drm] features: -context_init
Jan 20 13:01:06 localhost kernel: kvm_amd: TSC scaling supported
Jan 20 13:01:06 localhost kernel: kvm_amd: Nested Virtualization enabled
Jan 20 13:01:06 localhost kernel: kvm_amd: Nested Paging enabled
Jan 20 13:01:06 localhost kernel: kvm_amd: LBR virtualization supported
Jan 20 13:01:06 localhost systemd[1]: Started NTP client/server.
Jan 20 13:01:06 localhost systemd[1]: Finished Restore /run/initramfs on shutdown.
Jan 20 13:01:06 localhost kernel: [drm] number of scanouts: 1
Jan 20 13:01:06 localhost kernel: [drm] number of cap sets: 0
Jan 20 13:01:06 localhost kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Jan 20 13:01:06 localhost kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Jan 20 13:01:06 localhost kernel: Console: switching to colour frame buffer device 128x48
Jan 20 13:01:06 localhost kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Jan 20 13:01:06 localhost systemd-logind[796]: New seat seat0.
Jan 20 13:01:06 localhost systemd-logind[796]: Watching system buttons on /dev/input/event0 (Power Button)
Jan 20 13:01:06 localhost systemd-logind[796]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Jan 20 13:01:06 localhost systemd[1]: Started User Login Management.
Jan 20 13:01:06 localhost iptables.init[781]: iptables: Applying firewall rules: [  OK  ]
Jan 20 13:01:06 localhost systemd[1]: Finished IPv4 firewall with iptables.
Jan 20 13:01:06 localhost cloud-init[839]: Cloud-init v. 24.4-8.el9 running 'init-local' at Tue, 20 Jan 2026 13:01:06 +0000. Up 6.48 seconds.
Jan 20 13:01:07 localhost kernel: ISO 9660 Extensions: Microsoft Joliet Level 3
Jan 20 13:01:07 localhost kernel: ISO 9660 Extensions: RRIP_1991A
Jan 20 13:01:07 localhost systemd[1]: run-cloud\x2dinit-tmp-tmpmhlv5pbr.mount: Deactivated successfully.
Jan 20 13:01:07 localhost systemd[1]: Starting Hostname Service...
Jan 20 13:01:07 localhost systemd[1]: Started Hostname Service.
Jan 20 13:01:07 np0005588918.novalocal systemd-hostnamed[853]: Hostname set to <np0005588918.novalocal> (static)
Jan 20 13:01:07 np0005588918.novalocal systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Jan 20 13:01:07 np0005588918.novalocal systemd[1]: Reached target Preparation for Network.
Jan 20 13:01:07 np0005588918.novalocal systemd[1]: Starting Network Manager...
Jan 20 13:01:07 np0005588918.novalocal NetworkManager[857]: <info>  [1768914067.4157] NetworkManager (version 1.54.3-2.el9) is starting... (boot:961c158a-383a-4022-81b9-57a8f5012ec2)
Jan 20 13:01:07 np0005588918.novalocal NetworkManager[857]: <info>  [1768914067.4162] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Jan 20 13:01:07 np0005588918.novalocal NetworkManager[857]: <info>  [1768914067.4242] manager[0x563caa977000]: monitoring kernel firmware directory '/lib/firmware'.
Jan 20 13:01:07 np0005588918.novalocal NetworkManager[857]: <info>  [1768914067.4275] hostname: hostname: using hostnamed
Jan 20 13:01:07 np0005588918.novalocal NetworkManager[857]: <info>  [1768914067.4276] hostname: static hostname changed from (none) to "np0005588918.novalocal"
Jan 20 13:01:07 np0005588918.novalocal NetworkManager[857]: <info>  [1768914067.4281] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Jan 20 13:01:07 np0005588918.novalocal NetworkManager[857]: <info>  [1768914067.4366] manager[0x563caa977000]: rfkill: Wi-Fi hardware radio set enabled
Jan 20 13:01:07 np0005588918.novalocal NetworkManager[857]: <info>  [1768914067.4367] manager[0x563caa977000]: rfkill: WWAN hardware radio set enabled
Jan 20 13:01:07 np0005588918.novalocal systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Jan 20 13:01:07 np0005588918.novalocal NetworkManager[857]: <info>  [1768914067.4435] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Jan 20 13:01:07 np0005588918.novalocal NetworkManager[857]: <info>  [1768914067.4436] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Jan 20 13:01:07 np0005588918.novalocal NetworkManager[857]: <info>  [1768914067.4437] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Jan 20 13:01:07 np0005588918.novalocal NetworkManager[857]: <info>  [1768914067.4438] manager: Networking is enabled by state file
Jan 20 13:01:07 np0005588918.novalocal NetworkManager[857]: <info>  [1768914067.4440] settings: Loaded settings plugin: keyfile (internal)
Jan 20 13:01:07 np0005588918.novalocal NetworkManager[857]: <info>  [1768914067.4452] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Jan 20 13:01:07 np0005588918.novalocal NetworkManager[857]: <info>  [1768914067.4478] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Jan 20 13:01:07 np0005588918.novalocal NetworkManager[857]: <info>  [1768914067.4490] dhcp: init: Using DHCP client 'internal'
Jan 20 13:01:07 np0005588918.novalocal NetworkManager[857]: <info>  [1768914067.4493] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Jan 20 13:01:07 np0005588918.novalocal NetworkManager[857]: <info>  [1768914067.4507] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 13:01:07 np0005588918.novalocal NetworkManager[857]: <info>  [1768914067.4515] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Jan 20 13:01:07 np0005588918.novalocal NetworkManager[857]: <info>  [1768914067.4526] device (lo): Activation: starting connection 'lo' (852309f6-3ce4-4bbb-99b9-75bbfcd15836)
Jan 20 13:01:07 np0005588918.novalocal NetworkManager[857]: <info>  [1768914067.4538] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Jan 20 13:01:07 np0005588918.novalocal NetworkManager[857]: <info>  [1768914067.4542] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 20 13:01:07 np0005588918.novalocal NetworkManager[857]: <info>  [1768914067.4578] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Jan 20 13:01:07 np0005588918.novalocal NetworkManager[857]: <info>  [1768914067.4581] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Jan 20 13:01:07 np0005588918.novalocal NetworkManager[857]: <info>  [1768914067.4584] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Jan 20 13:01:07 np0005588918.novalocal NetworkManager[857]: <info>  [1768914067.4586] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Jan 20 13:01:07 np0005588918.novalocal NetworkManager[857]: <info>  [1768914067.4588] device (eth0): carrier: link connected
Jan 20 13:01:07 np0005588918.novalocal NetworkManager[857]: <info>  [1768914067.4590] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Jan 20 13:01:07 np0005588918.novalocal NetworkManager[857]: <info>  [1768914067.4598] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Jan 20 13:01:07 np0005588918.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 20 13:01:07 np0005588918.novalocal NetworkManager[857]: <info>  [1768914067.4626] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 20 13:01:07 np0005588918.novalocal NetworkManager[857]: <info>  [1768914067.4632] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 20 13:01:07 np0005588918.novalocal NetworkManager[857]: <info>  [1768914067.4632] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 20 13:01:07 np0005588918.novalocal NetworkManager[857]: <info>  [1768914067.4635] manager: NetworkManager state is now CONNECTING
Jan 20 13:01:07 np0005588918.novalocal NetworkManager[857]: <info>  [1768914067.4636] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 20 13:01:07 np0005588918.novalocal systemd[1]: Started Network Manager.
Jan 20 13:01:07 np0005588918.novalocal NetworkManager[857]: <info>  [1768914067.4652] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 20 13:01:07 np0005588918.novalocal NetworkManager[857]: <info>  [1768914067.4656] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 20 13:01:07 np0005588918.novalocal systemd[1]: Reached target Network.
Jan 20 13:01:07 np0005588918.novalocal systemd[1]: Starting Network Manager Wait Online...
Jan 20 13:01:07 np0005588918.novalocal NetworkManager[857]: <info>  [1768914067.4719] dhcp4 (eth0): state changed new lease, address=38.102.83.148
Jan 20 13:01:07 np0005588918.novalocal NetworkManager[857]: <info>  [1768914067.4726] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Jan 20 13:01:07 np0005588918.novalocal NetworkManager[857]: <info>  [1768914067.4741] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 20 13:01:07 np0005588918.novalocal NetworkManager[857]: <info>  [1768914067.4752] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Jan 20 13:01:07 np0005588918.novalocal systemd[1]: Starting GSSAPI Proxy Daemon...
Jan 20 13:01:07 np0005588918.novalocal NetworkManager[857]: <info>  [1768914067.4754] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Jan 20 13:01:07 np0005588918.novalocal NetworkManager[857]: <info>  [1768914067.4766] device (lo): Activation: successful, device activated.
Jan 20 13:01:07 np0005588918.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 20 13:01:07 np0005588918.novalocal NetworkManager[857]: <info>  [1768914067.4772] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 20 13:01:07 np0005588918.novalocal NetworkManager[857]: <info>  [1768914067.4786] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 20 13:01:07 np0005588918.novalocal NetworkManager[857]: <info>  [1768914067.4789] manager: NetworkManager state is now CONNECTED_SITE
Jan 20 13:01:07 np0005588918.novalocal NetworkManager[857]: <info>  [1768914067.4791] device (eth0): Activation: successful, device activated.
Jan 20 13:01:07 np0005588918.novalocal NetworkManager[857]: <info>  [1768914067.4796] manager: NetworkManager state is now CONNECTED_GLOBAL
Jan 20 13:01:07 np0005588918.novalocal NetworkManager[857]: <info>  [1768914067.4799] manager: startup complete
Jan 20 13:01:07 np0005588918.novalocal systemd[1]: Finished Network Manager Wait Online.
Jan 20 13:01:07 np0005588918.novalocal systemd[1]: Started GSSAPI Proxy Daemon.
Jan 20 13:01:07 np0005588918.novalocal systemd[1]: Starting Cloud-init: Network Stage...
Jan 20 13:01:07 np0005588918.novalocal systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Jan 20 13:01:07 np0005588918.novalocal systemd[1]: Reached target NFS client services.
Jan 20 13:01:07 np0005588918.novalocal systemd[1]: Reached target Preparation for Remote File Systems.
Jan 20 13:01:07 np0005588918.novalocal systemd[1]: Reached target Remote File Systems.
Jan 20 13:01:07 np0005588918.novalocal systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jan 20 13:01:07 np0005588918.novalocal cloud-init[920]: Cloud-init v. 24.4-8.el9 running 'init' at Tue, 20 Jan 2026 13:01:07 +0000. Up 7.39 seconds.
Jan 20 13:01:07 np0005588918.novalocal cloud-init[920]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Jan 20 13:01:07 np0005588918.novalocal cloud-init[920]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Jan 20 13:01:07 np0005588918.novalocal cloud-init[920]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Jan 20 13:01:07 np0005588918.novalocal cloud-init[920]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Jan 20 13:01:07 np0005588918.novalocal cloud-init[920]: ci-info: |  eth0  | True |        38.102.83.148         | 255.255.255.0 | global | fa:16:3e:a6:80:e9 |
Jan 20 13:01:07 np0005588918.novalocal cloud-init[920]: ci-info: |  eth0  | True | fe80::f816:3eff:fea6:80e9/64 |       .       |  link  | fa:16:3e:a6:80:e9 |
Jan 20 13:01:07 np0005588918.novalocal cloud-init[920]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Jan 20 13:01:07 np0005588918.novalocal cloud-init[920]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Jan 20 13:01:07 np0005588918.novalocal cloud-init[920]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Jan 20 13:01:07 np0005588918.novalocal cloud-init[920]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Jan 20 13:01:07 np0005588918.novalocal cloud-init[920]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Jan 20 13:01:07 np0005588918.novalocal cloud-init[920]: ci-info: | Route |   Destination   |    Gateway    |     Genmask     | Interface | Flags |
Jan 20 13:01:07 np0005588918.novalocal cloud-init[920]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Jan 20 13:01:07 np0005588918.novalocal cloud-init[920]: ci-info: |   0   |     0.0.0.0     |  38.102.83.1  |     0.0.0.0     |    eth0   |   UG  |
Jan 20 13:01:07 np0005588918.novalocal cloud-init[920]: ci-info: |   1   |   38.102.83.0   |    0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Jan 20 13:01:07 np0005588918.novalocal cloud-init[920]: ci-info: |   2   | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 |    eth0   |  UGH  |
Jan 20 13:01:07 np0005588918.novalocal cloud-init[920]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Jan 20 13:01:07 np0005588918.novalocal cloud-init[920]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Jan 20 13:01:07 np0005588918.novalocal cloud-init[920]: ci-info: +-------+-------------+---------+-----------+-------+
Jan 20 13:01:07 np0005588918.novalocal cloud-init[920]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Jan 20 13:01:07 np0005588918.novalocal cloud-init[920]: ci-info: +-------+-------------+---------+-----------+-------+
Jan 20 13:01:07 np0005588918.novalocal cloud-init[920]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Jan 20 13:01:07 np0005588918.novalocal cloud-init[920]: ci-info: |   3   |  multicast  |    ::   |    eth0   |   U   |
Jan 20 13:01:07 np0005588918.novalocal cloud-init[920]: ci-info: +-------+-------------+---------+-----------+-------+
Jan 20 13:01:09 np0005588918.novalocal useradd[986]: new group: name=cloud-user, GID=1001
Jan 20 13:01:09 np0005588918.novalocal useradd[986]: new user: name=cloud-user, UID=1001, GID=1001, home=/home/cloud-user, shell=/bin/bash, from=none
Jan 20 13:01:09 np0005588918.novalocal useradd[986]: add 'cloud-user' to group 'adm'
Jan 20 13:01:09 np0005588918.novalocal useradd[986]: add 'cloud-user' to group 'systemd-journal'
Jan 20 13:01:09 np0005588918.novalocal useradd[986]: add 'cloud-user' to shadow group 'adm'
Jan 20 13:01:09 np0005588918.novalocal useradd[986]: add 'cloud-user' to shadow group 'systemd-journal'
Jan 20 13:01:10 np0005588918.novalocal cloud-init[920]: Generating public/private rsa key pair.
Jan 20 13:01:10 np0005588918.novalocal cloud-init[920]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Jan 20 13:01:10 np0005588918.novalocal cloud-init[920]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Jan 20 13:01:10 np0005588918.novalocal cloud-init[920]: The key fingerprint is:
Jan 20 13:01:10 np0005588918.novalocal cloud-init[920]: SHA256:RTbqaIAIoyFJk1wJ32MbjbK19HMt3nuZJMJL3wKNBhA root@np0005588918.novalocal
Jan 20 13:01:10 np0005588918.novalocal cloud-init[920]: The key's randomart image is:
Jan 20 13:01:10 np0005588918.novalocal cloud-init[920]: +---[RSA 3072]----+
Jan 20 13:01:10 np0005588918.novalocal cloud-init[920]: |B=+..E.   +      |
Jan 20 13:01:10 np0005588918.novalocal cloud-init[920]: |=*oo..o  + .     |
Jan 20 13:01:10 np0005588918.novalocal cloud-init[920]: |o .o.O... .      |
Jan 20 13:01:10 np0005588918.novalocal cloud-init[920]: |    *.*+ ..      |
Jan 20 13:01:10 np0005588918.novalocal cloud-init[920]: |   . oooSoo.     |
Jan 20 13:01:10 np0005588918.novalocal cloud-init[920]: |     .  +Boo .   |
Jan 20 13:01:10 np0005588918.novalocal cloud-init[920]: |        o.=.+ o  |
Jan 20 13:01:10 np0005588918.novalocal cloud-init[920]: |         . o.=   |
Jan 20 13:01:10 np0005588918.novalocal cloud-init[920]: |           .o    |
Jan 20 13:01:10 np0005588918.novalocal cloud-init[920]: +----[SHA256]-----+
Jan 20 13:01:10 np0005588918.novalocal cloud-init[920]: Generating public/private ecdsa key pair.
Jan 20 13:01:10 np0005588918.novalocal cloud-init[920]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Jan 20 13:01:10 np0005588918.novalocal cloud-init[920]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Jan 20 13:01:10 np0005588918.novalocal cloud-init[920]: The key fingerprint is:
Jan 20 13:01:10 np0005588918.novalocal cloud-init[920]: SHA256:XhpOxGOhWlC0RQXISfTgSjX6zQN37KtqrTJFXwIc9ik root@np0005588918.novalocal
Jan 20 13:01:10 np0005588918.novalocal cloud-init[920]: The key's randomart image is:
Jan 20 13:01:10 np0005588918.novalocal cloud-init[920]: +---[ECDSA 256]---+
Jan 20 13:01:10 np0005588918.novalocal cloud-init[920]: |    .B@==o.      |
Jan 20 13:01:10 np0005588918.novalocal cloud-init[920]: |     **X +       |
Jan 20 13:01:10 np0005588918.novalocal cloud-init[920]: |    o E.X o      |
Jan 20 13:01:10 np0005588918.novalocal cloud-init[920]: |   . =.O.+.      |
Jan 20 13:01:10 np0005588918.novalocal cloud-init[920]: |    o...Soo      |
Jan 20 13:01:10 np0005588918.novalocal cloud-init[920]: |      .+.= .     |
Jan 20 13:01:10 np0005588918.novalocal cloud-init[920]: |     . .+ .      |
Jan 20 13:01:10 np0005588918.novalocal cloud-init[920]: |    o . ..       |
Jan 20 13:01:10 np0005588918.novalocal cloud-init[920]: |     +oo.        |
Jan 20 13:01:10 np0005588918.novalocal cloud-init[920]: +----[SHA256]-----+
Jan 20 13:01:10 np0005588918.novalocal cloud-init[920]: Generating public/private ed25519 key pair.
Jan 20 13:01:10 np0005588918.novalocal cloud-init[920]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Jan 20 13:01:10 np0005588918.novalocal cloud-init[920]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Jan 20 13:01:10 np0005588918.novalocal cloud-init[920]: The key fingerprint is:
Jan 20 13:01:10 np0005588918.novalocal cloud-init[920]: SHA256:zudUzxxfctO/6uC6o5ueFBOaCbjKQ1B9Zwa8xTeD76c root@np0005588918.novalocal
Jan 20 13:01:10 np0005588918.novalocal cloud-init[920]: The key's randomart image is:
Jan 20 13:01:10 np0005588918.novalocal cloud-init[920]: +--[ED25519 256]--+
Jan 20 13:01:10 np0005588918.novalocal cloud-init[920]: |  .. ..o .       |
Jan 20 13:01:10 np0005588918.novalocal cloud-init[920]: | .. . o B +      |
Jan 20 13:01:10 np0005588918.novalocal cloud-init[920]: |.. . . B o o     |
Jan 20 13:01:10 np0005588918.novalocal cloud-init[920]: |. . . = . .     .|
Jan 20 13:01:10 np0005588918.novalocal cloud-init[920]: | o   + oS.  . o.+|
Jan 20 13:01:10 np0005588918.novalocal cloud-init[920]: |+      oo ...+ =+|
Jan 20 13:01:10 np0005588918.novalocal cloud-init[920]: |.o     .o o+  + o|
Jan 20 13:01:10 np0005588918.novalocal cloud-init[920]: |  .   . o=E .   .|
Jan 20 13:01:10 np0005588918.novalocal cloud-init[920]: |      .*oo=..o.. |
Jan 20 13:01:10 np0005588918.novalocal cloud-init[920]: +----[SHA256]-----+
Jan 20 13:01:10 np0005588918.novalocal systemd[1]: Finished Cloud-init: Network Stage.
Jan 20 13:01:10 np0005588918.novalocal systemd[1]: Reached target Cloud-config availability.
Jan 20 13:01:10 np0005588918.novalocal systemd[1]: Reached target Network is Online.
Jan 20 13:01:10 np0005588918.novalocal systemd[1]: Starting Cloud-init: Config Stage...
Jan 20 13:01:10 np0005588918.novalocal systemd[1]: Starting Crash recovery kernel arming...
Jan 20 13:01:10 np0005588918.novalocal systemd[1]: Starting Notify NFS peers of a restart...
Jan 20 13:01:10 np0005588918.novalocal systemd[1]: Starting System Logging Service...
Jan 20 13:01:10 np0005588918.novalocal sm-notify[1002]: Version 2.5.4 starting
Jan 20 13:01:10 np0005588918.novalocal systemd[1]: Starting OpenSSH server daemon...
Jan 20 13:01:10 np0005588918.novalocal systemd[1]: Starting Permit User Sessions...
Jan 20 13:01:10 np0005588918.novalocal systemd[1]: Started Notify NFS peers of a restart.
Jan 20 13:01:10 np0005588918.novalocal systemd[1]: Finished Permit User Sessions.
Jan 20 13:01:10 np0005588918.novalocal sshd[1004]: Server listening on 0.0.0.0 port 22.
Jan 20 13:01:10 np0005588918.novalocal sshd[1004]: Server listening on :: port 22.
Jan 20 13:01:10 np0005588918.novalocal systemd[1]: Started OpenSSH server daemon.
Jan 20 13:01:10 np0005588918.novalocal systemd[1]: Started Command Scheduler.
Jan 20 13:01:10 np0005588918.novalocal systemd[1]: Started Getty on tty1.
Jan 20 13:01:10 np0005588918.novalocal crond[1009]: (CRON) STARTUP (1.5.7)
Jan 20 13:01:10 np0005588918.novalocal crond[1009]: (CRON) INFO (Syslog will be used instead of sendmail.)
Jan 20 13:01:10 np0005588918.novalocal crond[1009]: (CRON) INFO (RANDOM_DELAY will be scaled with factor 99% if used.)
Jan 20 13:01:10 np0005588918.novalocal crond[1009]: (CRON) INFO (running with inotify support)
Jan 20 13:01:10 np0005588918.novalocal rsyslogd[1003]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="1003" x-info="https://www.rsyslog.com"] start
Jan 20 13:01:10 np0005588918.novalocal systemd[1]: Started Serial Getty on ttyS0.
Jan 20 13:01:10 np0005588918.novalocal rsyslogd[1003]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Jan 20 13:01:10 np0005588918.novalocal systemd[1]: Reached target Login Prompts.
Jan 20 13:01:10 np0005588918.novalocal systemd[1]: Started System Logging Service.
Jan 20 13:01:10 np0005588918.novalocal systemd[1]: Reached target Multi-User System.
Jan 20 13:01:10 np0005588918.novalocal systemd[1]: Starting Record Runlevel Change in UTMP...
Jan 20 13:01:10 np0005588918.novalocal systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Jan 20 13:01:10 np0005588918.novalocal systemd[1]: Finished Record Runlevel Change in UTMP.
Jan 20 13:01:10 np0005588918.novalocal rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 20 13:01:10 np0005588918.novalocal kdumpctl[1018]: kdump: No kdump initial ramdisk found.
Jan 20 13:01:10 np0005588918.novalocal kdumpctl[1018]: kdump: Rebuilding /boot/initramfs-5.14.0-661.el9.x86_64kdump.img
Jan 20 13:01:10 np0005588918.novalocal cloud-init[1100]: Cloud-init v. 24.4-8.el9 running 'modules:config' at Tue, 20 Jan 2026 13:01:10 +0000. Up 10.47 seconds.
Jan 20 13:01:10 np0005588918.novalocal systemd[1]: Finished Cloud-init: Config Stage.
Jan 20 13:01:10 np0005588918.novalocal systemd[1]: Starting Cloud-init: Final Stage...
Jan 20 13:01:11 np0005588918.novalocal cloud-init[1264]: Cloud-init v. 24.4-8.el9 running 'modules:final' at Tue, 20 Jan 2026 13:01:11 +0000. Up 10.84 seconds.
Jan 20 13:01:11 np0005588918.novalocal dracut[1268]: dracut-057-102.git20250818.el9
Jan 20 13:01:11 np0005588918.novalocal cloud-init[1285]: #############################################################
Jan 20 13:01:11 np0005588918.novalocal cloud-init[1286]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Jan 20 13:01:11 np0005588918.novalocal cloud-init[1288]: 256 SHA256:XhpOxGOhWlC0RQXISfTgSjX6zQN37KtqrTJFXwIc9ik root@np0005588918.novalocal (ECDSA)
Jan 20 13:01:11 np0005588918.novalocal cloud-init[1290]: 256 SHA256:zudUzxxfctO/6uC6o5ueFBOaCbjKQ1B9Zwa8xTeD76c root@np0005588918.novalocal (ED25519)
Jan 20 13:01:11 np0005588918.novalocal cloud-init[1292]: 3072 SHA256:RTbqaIAIoyFJk1wJ32MbjbK19HMt3nuZJMJL3wKNBhA root@np0005588918.novalocal (RSA)
Jan 20 13:01:11 np0005588918.novalocal cloud-init[1293]: -----END SSH HOST KEY FINGERPRINTS-----
Jan 20 13:01:11 np0005588918.novalocal cloud-init[1294]: #############################################################
Jan 20 13:01:11 np0005588918.novalocal cloud-init[1264]: Cloud-init v. 24.4-8.el9 finished at Tue, 20 Jan 2026 13:01:11 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 11.00 seconds
Jan 20 13:01:11 np0005588918.novalocal systemd[1]: Finished Cloud-init: Final Stage.
Jan 20 13:01:11 np0005588918.novalocal dracut[1270]: Executing: /usr/bin/dracut --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics  --mount "/dev/disk/by-uuid/22ac9141-3960-4912-b20e-19fc8a328d40 /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device --add-confdir /lib/kdump/dracut.conf.d -f /boot/initramfs-5.14.0-661.el9.x86_64kdump.img 5.14.0-661.el9.x86_64
Jan 20 13:01:11 np0005588918.novalocal systemd[1]: Reached target Cloud-init target.
Jan 20 13:01:11 np0005588918.novalocal sshd-session[1350]: Connection reset by 38.102.83.114 port 41292 [preauth]
Jan 20 13:01:11 np0005588918.novalocal sshd-session[1352]: Unable to negotiate with 38.102.83.114 port 41304: no matching host key type found. Their offer: ssh-ed25519,ssh-ed25519-cert-v01@openssh.com [preauth]
Jan 20 13:01:11 np0005588918.novalocal sshd-session[1360]: Connection reset by 38.102.83.114 port 41318 [preauth]
Jan 20 13:01:11 np0005588918.novalocal sshd-session[1364]: Unable to negotiate with 38.102.83.114 port 41324: no matching host key type found. Their offer: ecdsa-sha2-nistp384,ecdsa-sha2-nistp384-cert-v01@openssh.com [preauth]
Jan 20 13:01:11 np0005588918.novalocal sshd-session[1368]: Unable to negotiate with 38.102.83.114 port 41328: no matching host key type found. Their offer: ecdsa-sha2-nistp521,ecdsa-sha2-nistp521-cert-v01@openssh.com [preauth]
Jan 20 13:01:11 np0005588918.novalocal sshd-session[1372]: Connection reset by 38.102.83.114 port 41332 [preauth]
Jan 20 13:01:11 np0005588918.novalocal sshd-session[1380]: Connection reset by 38.102.83.114 port 41344 [preauth]
Jan 20 13:01:11 np0005588918.novalocal sshd-session[1385]: Unable to negotiate with 38.102.83.114 port 41360: no matching host key type found. Their offer: ssh-rsa,ssh-rsa-cert-v01@openssh.com [preauth]
Jan 20 13:01:11 np0005588918.novalocal sshd-session[1387]: Unable to negotiate with 38.102.83.114 port 41362: no matching host key type found. Their offer: ssh-dss,ssh-dss-cert-v01@openssh.com [preauth]
Jan 20 13:01:12 np0005588918.novalocal dracut[1270]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Jan 20 13:01:12 np0005588918.novalocal dracut[1270]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Jan 20 13:01:12 np0005588918.novalocal dracut[1270]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Jan 20 13:01:12 np0005588918.novalocal dracut[1270]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Jan 20 13:01:12 np0005588918.novalocal dracut[1270]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Jan 20 13:01:12 np0005588918.novalocal dracut[1270]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Jan 20 13:01:12 np0005588918.novalocal dracut[1270]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Jan 20 13:01:12 np0005588918.novalocal dracut[1270]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Jan 20 13:01:12 np0005588918.novalocal dracut[1270]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Jan 20 13:01:12 np0005588918.novalocal dracut[1270]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Jan 20 13:01:12 np0005588918.novalocal dracut[1270]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Jan 20 13:01:12 np0005588918.novalocal dracut[1270]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Jan 20 13:01:12 np0005588918.novalocal dracut[1270]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Jan 20 13:01:12 np0005588918.novalocal dracut[1270]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Jan 20 13:01:12 np0005588918.novalocal dracut[1270]: Module 'ifcfg' will not be installed, because it's in the list to be omitted!
Jan 20 13:01:12 np0005588918.novalocal dracut[1270]: Module 'plymouth' will not be installed, because it's in the list to be omitted!
Jan 20 13:01:12 np0005588918.novalocal dracut[1270]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Jan 20 13:01:12 np0005588918.novalocal dracut[1270]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Jan 20 13:01:12 np0005588918.novalocal dracut[1270]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Jan 20 13:01:12 np0005588918.novalocal dracut[1270]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Jan 20 13:01:12 np0005588918.novalocal dracut[1270]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Jan 20 13:01:12 np0005588918.novalocal dracut[1270]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Jan 20 13:01:12 np0005588918.novalocal dracut[1270]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Jan 20 13:01:12 np0005588918.novalocal dracut[1270]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Jan 20 13:01:12 np0005588918.novalocal dracut[1270]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Jan 20 13:01:12 np0005588918.novalocal dracut[1270]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Jan 20 13:01:12 np0005588918.novalocal chronyd[791]: Selected source 167.160.187.12 (2.centos.pool.ntp.org)
Jan 20 13:01:12 np0005588918.novalocal chronyd[791]: System clock wrong by -1.168940 seconds
Jan 20 13:01:11 np0005588918.novalocal systemd-journald[677]: Time jumped backwards, rotating.
Jan 20 13:01:11 np0005588918.novalocal chronyd[791]: System clock was stepped by -1.168940 seconds
Jan 20 13:01:11 np0005588918.novalocal chronyd[791]: System clock TAI offset set to 37 seconds
Jan 20 13:01:11 np0005588918.novalocal rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 20 13:01:11 np0005588918.novalocal rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 20 13:01:11 np0005588918.novalocal dracut[1270]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Jan 20 13:01:11 np0005588918.novalocal dracut[1270]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Jan 20 13:01:11 np0005588918.novalocal dracut[1270]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Jan 20 13:01:11 np0005588918.novalocal dracut[1270]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Jan 20 13:01:11 np0005588918.novalocal dracut[1270]: Module 'resume' will not be installed, because it's in the list to be omitted!
Jan 20 13:01:11 np0005588918.novalocal dracut[1270]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Jan 20 13:01:11 np0005588918.novalocal dracut[1270]: Module 'earlykdump' will not be installed, because it's in the list to be omitted!
Jan 20 13:01:11 np0005588918.novalocal dracut[1270]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Jan 20 13:01:11 np0005588918.novalocal dracut[1270]: memstrack is not available
Jan 20 13:01:11 np0005588918.novalocal dracut[1270]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Jan 20 13:01:11 np0005588918.novalocal dracut[1270]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Jan 20 13:01:11 np0005588918.novalocal dracut[1270]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Jan 20 13:01:11 np0005588918.novalocal dracut[1270]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Jan 20 13:01:11 np0005588918.novalocal dracut[1270]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Jan 20 13:01:11 np0005588918.novalocal dracut[1270]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Jan 20 13:01:11 np0005588918.novalocal dracut[1270]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Jan 20 13:01:11 np0005588918.novalocal dracut[1270]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Jan 20 13:01:11 np0005588918.novalocal dracut[1270]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Jan 20 13:01:11 np0005588918.novalocal dracut[1270]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Jan 20 13:01:11 np0005588918.novalocal dracut[1270]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Jan 20 13:01:11 np0005588918.novalocal dracut[1270]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Jan 20 13:01:11 np0005588918.novalocal dracut[1270]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Jan 20 13:01:11 np0005588918.novalocal dracut[1270]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Jan 20 13:01:11 np0005588918.novalocal dracut[1270]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Jan 20 13:01:11 np0005588918.novalocal dracut[1270]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Jan 20 13:01:11 np0005588918.novalocal dracut[1270]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Jan 20 13:01:11 np0005588918.novalocal dracut[1270]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Jan 20 13:01:11 np0005588918.novalocal dracut[1270]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Jan 20 13:01:11 np0005588918.novalocal dracut[1270]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Jan 20 13:01:11 np0005588918.novalocal dracut[1270]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Jan 20 13:01:11 np0005588918.novalocal dracut[1270]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Jan 20 13:01:11 np0005588918.novalocal dracut[1270]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Jan 20 13:01:11 np0005588918.novalocal dracut[1270]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Jan 20 13:01:11 np0005588918.novalocal dracut[1270]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Jan 20 13:01:11 np0005588918.novalocal dracut[1270]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Jan 20 13:01:11 np0005588918.novalocal dracut[1270]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Jan 20 13:01:11 np0005588918.novalocal dracut[1270]: memstrack is not available
Jan 20 13:01:11 np0005588918.novalocal dracut[1270]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Jan 20 13:01:12 np0005588918.novalocal dracut[1270]: *** Including module: systemd ***
Jan 20 13:01:12 np0005588918.novalocal dracut[1270]: *** Including module: fips ***
Jan 20 13:01:14 np0005588918.novalocal dracut[1270]: *** Including module: systemd-initrd ***
Jan 20 13:01:14 np0005588918.novalocal dracut[1270]: *** Including module: i18n ***
Jan 20 13:01:14 np0005588918.novalocal dracut[1270]: *** Including module: drm ***
Jan 20 13:01:15 np0005588918.novalocal dracut[1270]: *** Including module: prefixdevname ***
Jan 20 13:01:15 np0005588918.novalocal dracut[1270]: *** Including module: kernel-modules ***
Jan 20 13:01:15 np0005588918.novalocal kernel: block vda: the capability attribute has been deprecated.
Jan 20 13:01:15 np0005588918.novalocal dracut[1270]: *** Including module: kernel-modules-extra ***
Jan 20 13:01:15 np0005588918.novalocal dracut[1270]:   kernel-modules-extra: configuration source "/run/depmod.d" does not exist
Jan 20 13:01:15 np0005588918.novalocal dracut[1270]:   kernel-modules-extra: configuration source "/lib/depmod.d" does not exist
Jan 20 13:01:15 np0005588918.novalocal dracut[1270]:   kernel-modules-extra: parsing configuration file "/etc/depmod.d/dist.conf"
Jan 20 13:01:15 np0005588918.novalocal dracut[1270]:   kernel-modules-extra: /etc/depmod.d/dist.conf: added "updates extra built-in weak-updates" to the list of search directories
Jan 20 13:01:15 np0005588918.novalocal dracut[1270]: *** Including module: qemu ***
Jan 20 13:01:15 np0005588918.novalocal dracut[1270]: *** Including module: fstab-sys ***
Jan 20 13:01:15 np0005588918.novalocal dracut[1270]: *** Including module: rootfs-block ***
Jan 20 13:01:15 np0005588918.novalocal dracut[1270]: *** Including module: terminfo ***
Jan 20 13:01:15 np0005588918.novalocal dracut[1270]: *** Including module: udev-rules ***
Jan 20 13:01:16 np0005588918.novalocal irqbalance[793]: Cannot change IRQ 25 affinity: Operation not permitted
Jan 20 13:01:16 np0005588918.novalocal irqbalance[793]: IRQ 25 affinity is now unmanaged
Jan 20 13:01:16 np0005588918.novalocal irqbalance[793]: Cannot change IRQ 31 affinity: Operation not permitted
Jan 20 13:01:16 np0005588918.novalocal irqbalance[793]: IRQ 31 affinity is now unmanaged
Jan 20 13:01:16 np0005588918.novalocal irqbalance[793]: Cannot change IRQ 28 affinity: Operation not permitted
Jan 20 13:01:16 np0005588918.novalocal irqbalance[793]: IRQ 28 affinity is now unmanaged
Jan 20 13:01:16 np0005588918.novalocal irqbalance[793]: Cannot change IRQ 32 affinity: Operation not permitted
Jan 20 13:01:16 np0005588918.novalocal irqbalance[793]: IRQ 32 affinity is now unmanaged
Jan 20 13:01:16 np0005588918.novalocal irqbalance[793]: Cannot change IRQ 30 affinity: Operation not permitted
Jan 20 13:01:16 np0005588918.novalocal irqbalance[793]: IRQ 30 affinity is now unmanaged
Jan 20 13:01:16 np0005588918.novalocal irqbalance[793]: Cannot change IRQ 29 affinity: Operation not permitted
Jan 20 13:01:16 np0005588918.novalocal irqbalance[793]: IRQ 29 affinity is now unmanaged
Jan 20 13:01:16 np0005588918.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 20 13:01:16 np0005588918.novalocal dracut[1270]: Skipping udev rule: 91-permissions.rules
Jan 20 13:01:16 np0005588918.novalocal dracut[1270]: Skipping udev rule: 80-drivers-modprobe.rules
Jan 20 13:01:16 np0005588918.novalocal dracut[1270]: *** Including module: virtiofs ***
Jan 20 13:01:16 np0005588918.novalocal dracut[1270]: *** Including module: dracut-systemd ***
Jan 20 13:01:16 np0005588918.novalocal dracut[1270]: *** Including module: usrmount ***
Jan 20 13:01:16 np0005588918.novalocal dracut[1270]: *** Including module: base ***
Jan 20 13:01:17 np0005588918.novalocal dracut[1270]: *** Including module: fs-lib ***
Jan 20 13:01:17 np0005588918.novalocal dracut[1270]: *** Including module: kdumpbase ***
Jan 20 13:01:17 np0005588918.novalocal dracut[1270]: *** Including module: microcode_ctl-fw_dir_override ***
Jan 20 13:01:17 np0005588918.novalocal dracut[1270]:   microcode_ctl module: mangling fw_dir
Jan 20 13:01:17 np0005588918.novalocal dracut[1270]:     microcode_ctl: reset fw_dir to "/lib/firmware/updates /lib/firmware"
Jan 20 13:01:17 np0005588918.novalocal dracut[1270]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Jan 20 13:01:17 np0005588918.novalocal dracut[1270]:     microcode_ctl: configuration "intel" is ignored
Jan 20 13:01:17 np0005588918.novalocal dracut[1270]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Jan 20 13:01:17 np0005588918.novalocal dracut[1270]:     microcode_ctl: configuration "intel-06-2d-07" is ignored
Jan 20 13:01:17 np0005588918.novalocal dracut[1270]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Jan 20 13:01:17 np0005588918.novalocal dracut[1270]:     microcode_ctl: configuration "intel-06-4e-03" is ignored
Jan 20 13:01:17 np0005588918.novalocal dracut[1270]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Jan 20 13:01:17 np0005588918.novalocal dracut[1270]:     microcode_ctl: configuration "intel-06-4f-01" is ignored
Jan 20 13:01:17 np0005588918.novalocal dracut[1270]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Jan 20 13:01:17 np0005588918.novalocal dracut[1270]:     microcode_ctl: configuration "intel-06-55-04" is ignored
Jan 20 13:01:17 np0005588918.novalocal dracut[1270]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Jan 20 13:01:17 np0005588918.novalocal dracut[1270]:     microcode_ctl: configuration "intel-06-5e-03" is ignored
Jan 20 13:01:17 np0005588918.novalocal dracut[1270]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Jan 20 13:01:17 np0005588918.novalocal dracut[1270]:     microcode_ctl: configuration "intel-06-8c-01" is ignored
Jan 20 13:01:17 np0005588918.novalocal dracut[1270]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Jan 20 13:01:18 np0005588918.novalocal dracut[1270]:     microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Jan 20 13:01:18 np0005588918.novalocal dracut[1270]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Jan 20 13:01:18 np0005588918.novalocal dracut[1270]:     microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Jan 20 13:01:18 np0005588918.novalocal dracut[1270]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8f-08"...
Jan 20 13:01:18 np0005588918.novalocal dracut[1270]:     microcode_ctl: configuration "intel-06-8f-08" is ignored
Jan 20 13:01:18 np0005588918.novalocal dracut[1270]:     microcode_ctl: final fw_dir: "/lib/firmware/updates /lib/firmware"
Jan 20 13:01:18 np0005588918.novalocal dracut[1270]: *** Including module: openssl ***
Jan 20 13:01:18 np0005588918.novalocal dracut[1270]: *** Including module: shutdown ***
Jan 20 13:01:18 np0005588918.novalocal dracut[1270]: *** Including module: squash ***
Jan 20 13:01:18 np0005588918.novalocal dracut[1270]: *** Including modules done ***
Jan 20 13:01:18 np0005588918.novalocal dracut[1270]: *** Installing kernel module dependencies ***
Jan 20 13:01:19 np0005588918.novalocal dracut[1270]: *** Installing kernel module dependencies done ***
Jan 20 13:01:19 np0005588918.novalocal dracut[1270]: *** Resolving executable dependencies ***
Jan 20 13:01:20 np0005588918.novalocal dracut[1270]: *** Resolving executable dependencies done ***
Jan 20 13:01:20 np0005588918.novalocal dracut[1270]: *** Generating early-microcode cpio image ***
Jan 20 13:01:20 np0005588918.novalocal dracut[1270]: *** Store current command line parameters ***
Jan 20 13:01:20 np0005588918.novalocal dracut[1270]: Stored kernel commandline:
Jan 20 13:01:20 np0005588918.novalocal dracut[1270]: No dracut internal kernel commandline stored in the initramfs
Jan 20 13:01:21 np0005588918.novalocal dracut[1270]: *** Install squash loader ***
Jan 20 13:01:21 np0005588918.novalocal dracut[1270]: *** Squashing the files inside the initramfs ***
Jan 20 13:01:23 np0005588918.novalocal dracut[1270]: *** Squashing the files inside the initramfs done ***
Jan 20 13:01:23 np0005588918.novalocal dracut[1270]: *** Creating image file '/boot/initramfs-5.14.0-661.el9.x86_64kdump.img' ***
Jan 20 13:01:23 np0005588918.novalocal dracut[1270]: *** Hardlinking files ***
Jan 20 13:01:23 np0005588918.novalocal dracut[1270]: Mode:           real
Jan 20 13:01:23 np0005588918.novalocal dracut[1270]: Files:          50
Jan 20 13:01:23 np0005588918.novalocal dracut[1270]: Linked:         0 files
Jan 20 13:01:23 np0005588918.novalocal dracut[1270]: Compared:       0 xattrs
Jan 20 13:01:23 np0005588918.novalocal dracut[1270]: Compared:       0 files
Jan 20 13:01:23 np0005588918.novalocal dracut[1270]: Saved:          0 B
Jan 20 13:01:23 np0005588918.novalocal dracut[1270]: Duration:       0.000564 seconds
Jan 20 13:01:23 np0005588918.novalocal dracut[1270]: *** Hardlinking files done ***
Jan 20 13:01:23 np0005588918.novalocal dracut[1270]: *** Creating initramfs image file '/boot/initramfs-5.14.0-661.el9.x86_64kdump.img' done ***
Jan 20 13:01:24 np0005588918.novalocal kdumpctl[1018]: kdump: kexec: loaded kdump kernel
Jan 20 13:01:24 np0005588918.novalocal kdumpctl[1018]: kdump: Starting kdump: [OK]
Jan 20 13:01:24 np0005588918.novalocal systemd[1]: Finished Crash recovery kernel arming.
Jan 20 13:01:24 np0005588918.novalocal systemd[1]: Startup finished in 1.452s (kernel) + 3.092s (initrd) + 20.295s (userspace) = 24.841s.
Jan 20 13:01:36 np0005588918.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 20 13:02:41 np0005588918.novalocal sshd-session[4305]: error: kex_exchange_identification: read: Connection reset by peer
Jan 20 13:02:41 np0005588918.novalocal sshd-session[4305]: Connection reset by 176.120.22.52 port 7883
Jan 20 13:08:47 np0005588918.novalocal chronyd[791]: Selected source 173.206.123.141 (2.centos.pool.ntp.org)
Jan 20 13:16:22 np0005588918.novalocal systemd[1]: Starting Cleanup of Temporary Directories...
Jan 20 13:16:22 np0005588918.novalocal systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Jan 20 13:16:22 np0005588918.novalocal systemd[1]: Finished Cleanup of Temporary Directories.
Jan 20 13:16:22 np0005588918.novalocal systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Jan 20 13:16:35 np0005588918.novalocal sshd-session[4315]: Accepted publickey for zuul from 38.102.83.114 port 46254 ssh2: RSA SHA256:zhs3MiW0JhxzckYcMHQES8SMYHj1iGcomnyzmbiwor8
Jan 20 13:16:35 np0005588918.novalocal systemd[1]: Created slice User Slice of UID 1000.
Jan 20 13:16:35 np0005588918.novalocal systemd[1]: Starting User Runtime Directory /run/user/1000...
Jan 20 13:16:35 np0005588918.novalocal systemd-logind[796]: New session 1 of user zuul.
Jan 20 13:16:35 np0005588918.novalocal systemd[1]: Finished User Runtime Directory /run/user/1000.
Jan 20 13:16:35 np0005588918.novalocal systemd[1]: Starting User Manager for UID 1000...
Jan 20 13:16:35 np0005588918.novalocal systemd[4319]: pam_unix(systemd-user:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 20 13:16:35 np0005588918.novalocal systemd[4319]: Queued start job for default target Main User Target.
Jan 20 13:16:35 np0005588918.novalocal systemd[4319]: Created slice User Application Slice.
Jan 20 13:16:35 np0005588918.novalocal systemd[4319]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 20 13:16:35 np0005588918.novalocal systemd[4319]: Started Daily Cleanup of User's Temporary Directories.
Jan 20 13:16:35 np0005588918.novalocal systemd[4319]: Reached target Paths.
Jan 20 13:16:35 np0005588918.novalocal systemd[4319]: Reached target Timers.
Jan 20 13:16:35 np0005588918.novalocal systemd[4319]: Starting D-Bus User Message Bus Socket...
Jan 20 13:16:35 np0005588918.novalocal systemd[4319]: Starting Create User's Volatile Files and Directories...
Jan 20 13:16:35 np0005588918.novalocal systemd[4319]: Finished Create User's Volatile Files and Directories.
Jan 20 13:16:35 np0005588918.novalocal systemd[4319]: Listening on D-Bus User Message Bus Socket.
Jan 20 13:16:35 np0005588918.novalocal systemd[4319]: Reached target Sockets.
Jan 20 13:16:35 np0005588918.novalocal systemd[4319]: Reached target Basic System.
Jan 20 13:16:35 np0005588918.novalocal systemd[4319]: Reached target Main User Target.
Jan 20 13:16:35 np0005588918.novalocal systemd[4319]: Startup finished in 110ms.
Jan 20 13:16:35 np0005588918.novalocal systemd[1]: Started User Manager for UID 1000.
Jan 20 13:16:35 np0005588918.novalocal systemd[1]: Started Session 1 of User zuul.
Jan 20 13:16:35 np0005588918.novalocal sshd-session[4315]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 20 13:16:35 np0005588918.novalocal python3[4401]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 13:16:39 np0005588918.novalocal python3[4429]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 13:16:47 np0005588918.novalocal python3[4487]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 13:16:48 np0005588918.novalocal python3[4527]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Jan 20 13:16:51 np0005588918.novalocal python3[4553]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC/GVaHfRonG9ohfZBeZsfGsPAY5Ua/gRcfFAsYYpV+pfGGgyLPk7GpRkk4pr+e8jNRtdcfblMAicASH+5mJlHBm4eUbFYKtcwEXZXv6pyuCU3Ecns8qj50vHni0ryqqxTyg09WqOLv2u9xctOgas5b8y8tPl7bs2/uwlGFud/NxTxRMamezw0jUgKB9f6nJj6TiaAzomayQwqBx0/0kk8Cc6o4JsrOc92YyIsAjs+grfO5gO6MLYaAFWaCv28+Yvj3G37RUIAILUpORm4vyFNvxLGV+iIKd8ZYqqV6cczJ2tM7MGlfjYz9lTXL7WHkY2Knel8HDycvHH85Ydujv3gyD8d/m+dy4VHhMoU3HR1Syxx5e1GxOjU6NV7ZtEMjYtqE6zUdCNY1zXUU4uGxxPK7dF2Zzx5ODWpS7ssrJVRsLzDPf1YiIyi/g3OHzO95EzucQchqJsVh3MJI8D/C2CjI432eipKKcQAYY9sD9/mpPwBqI0PKwfSGTpsps60NwhM= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 20 13:16:51 np0005588918.novalocal python3[4577]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:16:51 np0005588918.novalocal python3[4676]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 20 13:16:53 np0005588918.novalocal python3[4747]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1768915011.6700597-251-189086756204647/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=db0b5efc11684c95b5a6c3da9b48c4c5_id_rsa follow=False checksum=3ee7ffdf9f2bde9aa4c9d676d061c45199023a01 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:16:53 np0005588918.novalocal python3[4870]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 20 13:16:53 np0005588918.novalocal python3[4941]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1768915013.2317002-306-53879828004684/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=db0b5efc11684c95b5a6c3da9b48c4c5_id_rsa.pub follow=False checksum=c665db3a39036994c79fbfd6a268cbf34e365958 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:16:55 np0005588918.novalocal python3[4989]: ansible-ping Invoked with data=pong
Jan 20 13:16:56 np0005588918.novalocal python3[5013]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 13:16:58 np0005588918.novalocal python3[5071]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Jan 20 13:16:59 np0005588918.novalocal python3[5103]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:16:59 np0005588918.novalocal python3[5127]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:17:00 np0005588918.novalocal python3[5151]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:17:00 np0005588918.novalocal python3[5175]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:17:00 np0005588918.novalocal python3[5199]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:17:00 np0005588918.novalocal python3[5223]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:17:02 np0005588918.novalocal sudo[5247]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cspkasmyuwmyqbocigetkmwvnicsdeca ; /usr/bin/python3'
Jan 20 13:17:02 np0005588918.novalocal sudo[5247]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:17:02 np0005588918.novalocal python3[5249]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:17:02 np0005588918.novalocal sudo[5247]: pam_unix(sudo:session): session closed for user root
Jan 20 13:17:03 np0005588918.novalocal sudo[5325]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xwhmfrrtxxjpxihmnqsjszqtxertuaxj ; /usr/bin/python3'
Jan 20 13:17:03 np0005588918.novalocal sudo[5325]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:17:03 np0005588918.novalocal python3[5327]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 20 13:17:03 np0005588918.novalocal sudo[5325]: pam_unix(sudo:session): session closed for user root
Jan 20 13:17:03 np0005588918.novalocal sudo[5398]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ywjvjqwajsaptbyucetqfqnfdrkqkaso ; /usr/bin/python3'
Jan 20 13:17:03 np0005588918.novalocal sudo[5398]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:17:03 np0005588918.novalocal python3[5400]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1768915022.9064002-31-214664764342885/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:17:03 np0005588918.novalocal sudo[5398]: pam_unix(sudo:session): session closed for user root
Jan 20 13:17:04 np0005588918.novalocal python3[5448]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 20 13:17:04 np0005588918.novalocal python3[5472]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 20 13:17:05 np0005588918.novalocal python3[5496]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 20 13:17:05 np0005588918.novalocal python3[5520]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 20 13:17:05 np0005588918.novalocal python3[5544]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 20 13:17:05 np0005588918.novalocal python3[5568]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 20 13:17:06 np0005588918.novalocal python3[5592]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 20 13:17:06 np0005588918.novalocal irqbalance[793]: Cannot change IRQ 26 affinity: Operation not permitted
Jan 20 13:17:06 np0005588918.novalocal irqbalance[793]: IRQ 26 affinity is now unmanaged
Jan 20 13:17:06 np0005588918.novalocal python3[5616]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 20 13:17:06 np0005588918.novalocal python3[5640]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 20 13:17:07 np0005588918.novalocal python3[5664]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 20 13:17:07 np0005588918.novalocal python3[5688]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 20 13:17:07 np0005588918.novalocal python3[5712]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 20 13:17:07 np0005588918.novalocal python3[5736]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 20 13:17:08 np0005588918.novalocal python3[5760]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 20 13:17:08 np0005588918.novalocal python3[5784]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 20 13:17:08 np0005588918.novalocal python3[5808]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 20 13:17:08 np0005588918.novalocal python3[5832]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 20 13:17:09 np0005588918.novalocal python3[5856]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 20 13:17:09 np0005588918.novalocal python3[5880]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 20 13:17:09 np0005588918.novalocal python3[5904]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 20 13:17:09 np0005588918.novalocal python3[5928]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 20 13:17:10 np0005588918.novalocal python3[5952]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 20 13:17:10 np0005588918.novalocal python3[5976]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 20 13:17:10 np0005588918.novalocal python3[6000]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 20 13:17:11 np0005588918.novalocal python3[6024]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 20 13:17:11 np0005588918.novalocal python3[6048]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 20 13:17:14 np0005588918.novalocal sudo[6072]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jyhfowgtkgsjeoqiyrzmnuftuivfeexz ; /usr/bin/python3'
Jan 20 13:17:14 np0005588918.novalocal sudo[6072]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:17:14 np0005588918.novalocal python3[6074]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Jan 20 13:17:14 np0005588918.novalocal systemd[1]: Starting Time & Date Service...
Jan 20 13:17:14 np0005588918.novalocal systemd[1]: Started Time & Date Service.
Jan 20 13:17:14 np0005588918.novalocal systemd-timedated[6076]: Changed time zone to 'UTC' (UTC).
Jan 20 13:17:14 np0005588918.novalocal sudo[6072]: pam_unix(sudo:session): session closed for user root
Jan 20 13:17:14 np0005588918.novalocal sudo[6103]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ffeauiisdvdesurmxjzqxymxahjfdfzs ; /usr/bin/python3'
Jan 20 13:17:14 np0005588918.novalocal sudo[6103]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:17:14 np0005588918.novalocal python3[6105]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:17:14 np0005588918.novalocal sudo[6103]: pam_unix(sudo:session): session closed for user root
Jan 20 13:17:15 np0005588918.novalocal python3[6181]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 20 13:17:15 np0005588918.novalocal python3[6252]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1768915035.0572963-251-128488107716390/source _original_basename=tmp5pgikg6s follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:17:16 np0005588918.novalocal python3[6352]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 20 13:17:16 np0005588918.novalocal python3[6423]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1768915035.9604423-301-192412097917773/source _original_basename=tmpsndxpl60 follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:17:17 np0005588918.novalocal sudo[6523]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pimybqgsllszweruooclxynpixgfhidk ; /usr/bin/python3'
Jan 20 13:17:17 np0005588918.novalocal sudo[6523]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:17:17 np0005588918.novalocal python3[6525]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 20 13:17:17 np0005588918.novalocal sudo[6523]: pam_unix(sudo:session): session closed for user root
Jan 20 13:17:17 np0005588918.novalocal sudo[6596]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wgvgmazguaniudxvpxwxxjfihzptlcpf ; /usr/bin/python3'
Jan 20 13:17:17 np0005588918.novalocal sudo[6596]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:17:17 np0005588918.novalocal python3[6598]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1768915037.2021716-381-53705603591358/source _original_basename=tmp460unyg4 follow=False checksum=a6c024a6649a87ca7709e2430139c248a6eabb0e backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:17:17 np0005588918.novalocal sudo[6596]: pam_unix(sudo:session): session closed for user root
Jan 20 13:17:18 np0005588918.novalocal python3[6646]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 13:17:18 np0005588918.novalocal python3[6672]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 13:17:19 np0005588918.novalocal sudo[6750]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kekkvdpijdruhigxpplsmlnnrqwzvgiy ; /usr/bin/python3'
Jan 20 13:17:19 np0005588918.novalocal sudo[6750]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:17:19 np0005588918.novalocal python3[6752]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 20 13:17:19 np0005588918.novalocal sudo[6750]: pam_unix(sudo:session): session closed for user root
Jan 20 13:17:19 np0005588918.novalocal sudo[6823]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xetiirazuiusbspekmwglzdgmpvllylw ; /usr/bin/python3'
Jan 20 13:17:19 np0005588918.novalocal sudo[6823]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:17:19 np0005588918.novalocal python3[6825]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1768915038.940801-451-164256656792479/source _original_basename=tmpxkfaq9eq follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:17:19 np0005588918.novalocal sudo[6823]: pam_unix(sudo:session): session closed for user root
Jan 20 13:17:20 np0005588918.novalocal sudo[6874]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-faielauvszythcdwojppgvjtsmkwskzu ; /usr/bin/python3'
Jan 20 13:17:20 np0005588918.novalocal sudo[6874]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:17:20 np0005588918.novalocal python3[6876]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163efc-24cc-d383-642d-00000000001f-1-compute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 13:17:20 np0005588918.novalocal sudo[6874]: pam_unix(sudo:session): session closed for user root
Jan 20 13:17:20 np0005588918.novalocal python3[6904]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env
                                                       _uses_shell=True zuul_log_id=fa163efc-24cc-d383-642d-000000000020-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Jan 20 13:17:22 np0005588918.novalocal python3[6933]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:17:44 np0005588918.novalocal systemd[1]: systemd-timedated.service: Deactivated successfully.
Jan 20 13:17:48 np0005588918.novalocal sudo[6960]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wfxylwjtghxrurdwukswgssgbchenodn ; /usr/bin/python3'
Jan 20 13:17:48 np0005588918.novalocal sudo[6960]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:17:48 np0005588918.novalocal python3[6962]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:17:48 np0005588918.novalocal sudo[6960]: pam_unix(sudo:session): session closed for user root
Jan 20 13:18:31 np0005588918.novalocal kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 20 13:18:31 np0005588918.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Jan 20 13:18:31 np0005588918.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Jan 20 13:18:31 np0005588918.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Jan 20 13:18:31 np0005588918.novalocal kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Jan 20 13:18:31 np0005588918.novalocal kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Jan 20 13:18:31 np0005588918.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Jan 20 13:18:31 np0005588918.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Jan 20 13:18:31 np0005588918.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Jan 20 13:18:31 np0005588918.novalocal kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Jan 20 13:18:31 np0005588918.novalocal NetworkManager[857]: <info>  [1768915111.0814] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Jan 20 13:18:31 np0005588918.novalocal systemd-udevd[6963]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 13:18:31 np0005588918.novalocal NetworkManager[857]: <info>  [1768915111.0957] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 20 13:18:31 np0005588918.novalocal NetworkManager[857]: <info>  [1768915111.0977] settings: (eth1): created default wired connection 'Wired connection 1'
Jan 20 13:18:31 np0005588918.novalocal NetworkManager[857]: <info>  [1768915111.0979] device (eth1): carrier: link connected
Jan 20 13:18:31 np0005588918.novalocal NetworkManager[857]: <info>  [1768915111.0980] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Jan 20 13:18:31 np0005588918.novalocal NetworkManager[857]: <info>  [1768915111.0986] policy: auto-activating connection 'Wired connection 1' (f7cdda31-68ef-3d97-9512-d5118da8907c)
Jan 20 13:18:31 np0005588918.novalocal NetworkManager[857]: <info>  [1768915111.0989] device (eth1): Activation: starting connection 'Wired connection 1' (f7cdda31-68ef-3d97-9512-d5118da8907c)
Jan 20 13:18:31 np0005588918.novalocal NetworkManager[857]: <info>  [1768915111.0989] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 20 13:18:31 np0005588918.novalocal NetworkManager[857]: <info>  [1768915111.0991] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 20 13:18:31 np0005588918.novalocal NetworkManager[857]: <info>  [1768915111.0994] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 20 13:18:31 np0005588918.novalocal NetworkManager[857]: <info>  [1768915111.0997] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Jan 20 13:18:32 np0005588918.novalocal python3[6990]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163efc-24cc-d6d1-215a-000000000128-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 13:18:42 np0005588918.novalocal sudo[7068]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-doqjwukbxdgjjlubhwsoknfnngffyoee ; OS_CLOUD=vexxhost /usr/bin/python3'
Jan 20 13:18:42 np0005588918.novalocal sudo[7068]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:18:42 np0005588918.novalocal python3[7070]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 20 13:18:42 np0005588918.novalocal sudo[7068]: pam_unix(sudo:session): session closed for user root
Jan 20 13:18:42 np0005588918.novalocal sudo[7141]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gbdpqjxjdyldwkfddzutevgnunrgkflu ; OS_CLOUD=vexxhost /usr/bin/python3'
Jan 20 13:18:42 np0005588918.novalocal sudo[7141]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:18:42 np0005588918.novalocal python3[7143]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1768915121.869813-104-36206054936767/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=1f4a710241e899ba15b0e6b223029b008ba6c5e8 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:18:42 np0005588918.novalocal sudo[7141]: pam_unix(sudo:session): session closed for user root
Jan 20 13:18:43 np0005588918.novalocal sudo[7191]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cnadsxlidhastibrjuewahragafrioag ; OS_CLOUD=vexxhost /usr/bin/python3'
Jan 20 13:18:43 np0005588918.novalocal sudo[7191]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:18:43 np0005588918.novalocal python3[7193]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 20 13:18:43 np0005588918.novalocal systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Jan 20 13:18:43 np0005588918.novalocal systemd[1]: Stopped Network Manager Wait Online.
Jan 20 13:18:43 np0005588918.novalocal systemd[1]: Stopping Network Manager Wait Online...
Jan 20 13:18:43 np0005588918.novalocal systemd[1]: Stopping Network Manager...
Jan 20 13:18:43 np0005588918.novalocal NetworkManager[857]: <info>  [1768915123.3595] caught SIGTERM, shutting down normally.
Jan 20 13:18:43 np0005588918.novalocal NetworkManager[857]: <info>  [1768915123.3604] dhcp4 (eth0): canceled DHCP transaction
Jan 20 13:18:43 np0005588918.novalocal NetworkManager[857]: <info>  [1768915123.3604] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 20 13:18:43 np0005588918.novalocal NetworkManager[857]: <info>  [1768915123.3605] dhcp4 (eth0): state changed no lease
Jan 20 13:18:43 np0005588918.novalocal NetworkManager[857]: <info>  [1768915123.3607] manager: NetworkManager state is now CONNECTING
Jan 20 13:18:43 np0005588918.novalocal NetworkManager[857]: <info>  [1768915123.3794] dhcp4 (eth1): canceled DHCP transaction
Jan 20 13:18:43 np0005588918.novalocal NetworkManager[857]: <info>  [1768915123.3794] dhcp4 (eth1): state changed no lease
Jan 20 13:18:43 np0005588918.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 20 13:18:43 np0005588918.novalocal NetworkManager[857]: <info>  [1768915123.3856] exiting (success)
Jan 20 13:18:43 np0005588918.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 20 13:18:43 np0005588918.novalocal systemd[1]: NetworkManager.service: Deactivated successfully.
Jan 20 13:18:43 np0005588918.novalocal systemd[1]: Stopped Network Manager.
Jan 20 13:18:43 np0005588918.novalocal systemd[1]: NetworkManager.service: Consumed 5.869s CPU time, 10.0M memory peak.
Jan 20 13:18:43 np0005588918.novalocal systemd[1]: Starting Network Manager...
Jan 20 13:18:43 np0005588918.novalocal NetworkManager[7209]: <info>  [1768915123.4459] NetworkManager (version 1.54.3-2.el9) is starting... (after a restart, boot:961c158a-383a-4022-81b9-57a8f5012ec2)
Jan 20 13:18:43 np0005588918.novalocal NetworkManager[7209]: <info>  [1768915123.4462] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Jan 20 13:18:43 np0005588918.novalocal NetworkManager[7209]: <info>  [1768915123.4512] manager[0x55e3de503000]: monitoring kernel firmware directory '/lib/firmware'.
Jan 20 13:18:43 np0005588918.novalocal systemd[1]: Starting Hostname Service...
Jan 20 13:18:43 np0005588918.novalocal systemd[1]: Started Hostname Service.
Jan 20 13:18:43 np0005588918.novalocal NetworkManager[7209]: <info>  [1768915123.5274] hostname: hostname: using hostnamed
Jan 20 13:18:43 np0005588918.novalocal NetworkManager[7209]: <info>  [1768915123.5275] hostname: static hostname changed from (none) to "np0005588918.novalocal"
Jan 20 13:18:43 np0005588918.novalocal NetworkManager[7209]: <info>  [1768915123.5278] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Jan 20 13:18:43 np0005588918.novalocal NetworkManager[7209]: <info>  [1768915123.5283] manager[0x55e3de503000]: rfkill: Wi-Fi hardware radio set enabled
Jan 20 13:18:43 np0005588918.novalocal NetworkManager[7209]: <info>  [1768915123.5283] manager[0x55e3de503000]: rfkill: WWAN hardware radio set enabled
Jan 20 13:18:43 np0005588918.novalocal NetworkManager[7209]: <info>  [1768915123.5307] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Jan 20 13:18:43 np0005588918.novalocal NetworkManager[7209]: <info>  [1768915123.5307] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Jan 20 13:18:43 np0005588918.novalocal NetworkManager[7209]: <info>  [1768915123.5307] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Jan 20 13:18:43 np0005588918.novalocal NetworkManager[7209]: <info>  [1768915123.5308] manager: Networking is enabled by state file
Jan 20 13:18:43 np0005588918.novalocal NetworkManager[7209]: <info>  [1768915123.5310] settings: Loaded settings plugin: keyfile (internal)
Jan 20 13:18:43 np0005588918.novalocal NetworkManager[7209]: <info>  [1768915123.5313] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Jan 20 13:18:43 np0005588918.novalocal NetworkManager[7209]: <info>  [1768915123.5334] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Jan 20 13:18:43 np0005588918.novalocal NetworkManager[7209]: <info>  [1768915123.5341] dhcp: init: Using DHCP client 'internal'
Jan 20 13:18:43 np0005588918.novalocal NetworkManager[7209]: <info>  [1768915123.5343] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Jan 20 13:18:43 np0005588918.novalocal NetworkManager[7209]: <info>  [1768915123.5347] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 13:18:43 np0005588918.novalocal NetworkManager[7209]: <info>  [1768915123.5351] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Jan 20 13:18:43 np0005588918.novalocal NetworkManager[7209]: <info>  [1768915123.5359] device (lo): Activation: starting connection 'lo' (852309f6-3ce4-4bbb-99b9-75bbfcd15836)
Jan 20 13:18:43 np0005588918.novalocal NetworkManager[7209]: <info>  [1768915123.5364] device (eth0): carrier: link connected
Jan 20 13:18:43 np0005588918.novalocal NetworkManager[7209]: <info>  [1768915123.5367] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Jan 20 13:18:43 np0005588918.novalocal NetworkManager[7209]: <info>  [1768915123.5371] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Jan 20 13:18:43 np0005588918.novalocal NetworkManager[7209]: <info>  [1768915123.5371] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 20 13:18:43 np0005588918.novalocal NetworkManager[7209]: <info>  [1768915123.5377] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 20 13:18:43 np0005588918.novalocal NetworkManager[7209]: <info>  [1768915123.5382] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 20 13:18:43 np0005588918.novalocal NetworkManager[7209]: <info>  [1768915123.5387] device (eth1): carrier: link connected
Jan 20 13:18:43 np0005588918.novalocal NetworkManager[7209]: <info>  [1768915123.5390] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Jan 20 13:18:43 np0005588918.novalocal NetworkManager[7209]: <info>  [1768915123.5393] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (f7cdda31-68ef-3d97-9512-d5118da8907c) (indicated)
Jan 20 13:18:43 np0005588918.novalocal NetworkManager[7209]: <info>  [1768915123.5394] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 20 13:18:43 np0005588918.novalocal NetworkManager[7209]: <info>  [1768915123.5397] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 20 13:18:43 np0005588918.novalocal NetworkManager[7209]: <info>  [1768915123.5402] device (eth1): Activation: starting connection 'Wired connection 1' (f7cdda31-68ef-3d97-9512-d5118da8907c)
Jan 20 13:18:43 np0005588918.novalocal systemd[1]: Started Network Manager.
Jan 20 13:18:43 np0005588918.novalocal NetworkManager[7209]: <info>  [1768915123.5408] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Jan 20 13:18:43 np0005588918.novalocal NetworkManager[7209]: <info>  [1768915123.5412] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Jan 20 13:18:43 np0005588918.novalocal NetworkManager[7209]: <info>  [1768915123.5414] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Jan 20 13:18:43 np0005588918.novalocal NetworkManager[7209]: <info>  [1768915123.5416] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Jan 20 13:18:43 np0005588918.novalocal NetworkManager[7209]: <info>  [1768915123.5419] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 20 13:18:43 np0005588918.novalocal NetworkManager[7209]: <info>  [1768915123.5422] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 20 13:18:43 np0005588918.novalocal NetworkManager[7209]: <info>  [1768915123.5424] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 20 13:18:43 np0005588918.novalocal NetworkManager[7209]: <info>  [1768915123.5426] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 20 13:18:43 np0005588918.novalocal NetworkManager[7209]: <info>  [1768915123.5429] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Jan 20 13:18:43 np0005588918.novalocal NetworkManager[7209]: <info>  [1768915123.5434] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 20 13:18:43 np0005588918.novalocal NetworkManager[7209]: <info>  [1768915123.5436] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 20 13:18:43 np0005588918.novalocal NetworkManager[7209]: <info>  [1768915123.5442] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 20 13:18:43 np0005588918.novalocal NetworkManager[7209]: <info>  [1768915123.5445] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Jan 20 13:18:43 np0005588918.novalocal NetworkManager[7209]: <info>  [1768915123.5460] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Jan 20 13:18:43 np0005588918.novalocal NetworkManager[7209]: <info>  [1768915123.5461] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Jan 20 13:18:43 np0005588918.novalocal NetworkManager[7209]: <info>  [1768915123.5466] device (lo): Activation: successful, device activated.
Jan 20 13:18:43 np0005588918.novalocal NetworkManager[7209]: <info>  [1768915123.5471] dhcp4 (eth0): state changed new lease, address=38.102.83.148
Jan 20 13:18:43 np0005588918.novalocal NetworkManager[7209]: <info>  [1768915123.5477] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Jan 20 13:18:43 np0005588918.novalocal NetworkManager[7209]: <info>  [1768915123.5535] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 20 13:18:43 np0005588918.novalocal NetworkManager[7209]: <info>  [1768915123.5551] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 20 13:18:43 np0005588918.novalocal NetworkManager[7209]: <info>  [1768915123.5552] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 20 13:18:43 np0005588918.novalocal NetworkManager[7209]: <info>  [1768915123.5555] manager: NetworkManager state is now CONNECTED_SITE
Jan 20 13:18:43 np0005588918.novalocal NetworkManager[7209]: <info>  [1768915123.5557] device (eth0): Activation: successful, device activated.
Jan 20 13:18:43 np0005588918.novalocal NetworkManager[7209]: <info>  [1768915123.5560] manager: NetworkManager state is now CONNECTED_GLOBAL
Jan 20 13:18:43 np0005588918.novalocal systemd[1]: Starting Network Manager Wait Online...
Jan 20 13:18:43 np0005588918.novalocal sudo[7191]: pam_unix(sudo:session): session closed for user root
Jan 20 13:18:43 np0005588918.novalocal python3[7280]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163efc-24cc-d6d1-215a-0000000000bd-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 13:18:53 np0005588918.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 20 13:19:13 np0005588918.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 20 13:19:22 np0005588918.novalocal systemd[4319]: Starting Mark boot as successful...
Jan 20 13:19:22 np0005588918.novalocal systemd[4319]: Finished Mark boot as successful.
Jan 20 13:19:29 np0005588918.novalocal NetworkManager[7209]: <info>  [1768915169.2216] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 20 13:19:29 np0005588918.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 20 13:19:29 np0005588918.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 20 13:19:29 np0005588918.novalocal NetworkManager[7209]: <info>  [1768915169.2585] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 20 13:19:29 np0005588918.novalocal NetworkManager[7209]: <info>  [1768915169.2588] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 20 13:19:29 np0005588918.novalocal NetworkManager[7209]: <info>  [1768915169.2593] device (eth1): Activation: successful, device activated.
Jan 20 13:19:29 np0005588918.novalocal NetworkManager[7209]: <info>  [1768915169.2599] manager: startup complete
Jan 20 13:19:29 np0005588918.novalocal NetworkManager[7209]: <info>  [1768915169.2602] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Jan 20 13:19:29 np0005588918.novalocal NetworkManager[7209]: <warn>  [1768915169.2606] device (eth1): Activation: failed for connection 'Wired connection 1'
Jan 20 13:19:29 np0005588918.novalocal NetworkManager[7209]: <info>  [1768915169.2612] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Jan 20 13:19:29 np0005588918.novalocal systemd[1]: Finished Network Manager Wait Online.
Jan 20 13:19:29 np0005588918.novalocal NetworkManager[7209]: <info>  [1768915169.2794] dhcp4 (eth1): canceled DHCP transaction
Jan 20 13:19:29 np0005588918.novalocal NetworkManager[7209]: <info>  [1768915169.2795] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Jan 20 13:19:29 np0005588918.novalocal NetworkManager[7209]: <info>  [1768915169.2795] dhcp4 (eth1): state changed no lease
Jan 20 13:19:29 np0005588918.novalocal NetworkManager[7209]: <info>  [1768915169.2808] policy: auto-activating connection 'ci-private-network' (3c78a448-20f9-55c6-afc8-1adecda4fa01)
Jan 20 13:19:29 np0005588918.novalocal NetworkManager[7209]: <info>  [1768915169.2811] device (eth1): Activation: starting connection 'ci-private-network' (3c78a448-20f9-55c6-afc8-1adecda4fa01)
Jan 20 13:19:29 np0005588918.novalocal NetworkManager[7209]: <info>  [1768915169.2812] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 20 13:19:29 np0005588918.novalocal NetworkManager[7209]: <info>  [1768915169.2814] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 20 13:19:29 np0005588918.novalocal NetworkManager[7209]: <info>  [1768915169.2819] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 20 13:19:29 np0005588918.novalocal NetworkManager[7209]: <info>  [1768915169.2826] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 20 13:19:29 np0005588918.novalocal NetworkManager[7209]: <info>  [1768915169.4043] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 20 13:19:29 np0005588918.novalocal NetworkManager[7209]: <info>  [1768915169.4045] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 20 13:19:29 np0005588918.novalocal NetworkManager[7209]: <info>  [1768915169.4053] device (eth1): Activation: successful, device activated.
Jan 20 13:19:39 np0005588918.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 20 13:19:43 np0005588918.novalocal sshd-session[4328]: Received disconnect from 38.102.83.114 port 46254:11: disconnected by user
Jan 20 13:19:43 np0005588918.novalocal sshd-session[4328]: Disconnected from user zuul 38.102.83.114 port 46254
Jan 20 13:19:43 np0005588918.novalocal sshd-session[4315]: pam_unix(sshd:session): session closed for user zuul
Jan 20 13:19:43 np0005588918.novalocal systemd-logind[796]: Session 1 logged out. Waiting for processes to exit.
Jan 20 13:20:51 np0005588918.novalocal sshd-session[7309]: Accepted publickey for zuul from 38.102.83.114 port 45448 ssh2: RSA SHA256:r50QbT7bSKscUimrVpe816OyonJCbpigaVSUx3I8hI8
Jan 20 13:20:51 np0005588918.novalocal systemd-logind[796]: New session 3 of user zuul.
Jan 20 13:20:51 np0005588918.novalocal systemd[1]: Started Session 3 of User zuul.
Jan 20 13:20:51 np0005588918.novalocal sshd-session[7309]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 20 13:20:51 np0005588918.novalocal sudo[7388]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zodnnproepaubmuffgncqjyafcynqgkb ; OS_CLOUD=vexxhost /usr/bin/python3'
Jan 20 13:20:51 np0005588918.novalocal sudo[7388]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:20:51 np0005588918.novalocal python3[7390]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 20 13:20:51 np0005588918.novalocal sudo[7388]: pam_unix(sudo:session): session closed for user root
Jan 20 13:20:51 np0005588918.novalocal sudo[7461]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ueyjzxcvprahkchtotylpvsbogbeqvmj ; OS_CLOUD=vexxhost /usr/bin/python3'
Jan 20 13:20:51 np0005588918.novalocal sudo[7461]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:20:51 np0005588918.novalocal python3[7463]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1768915251.2891064-373-255198289673812/source _original_basename=tmpxzboiw6i follow=False checksum=a06d82404ae9ae38c6111e54a4021096121ff7ef backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:20:51 np0005588918.novalocal sudo[7461]: pam_unix(sudo:session): session closed for user root
Jan 20 13:20:56 np0005588918.novalocal sshd-session[7312]: Connection closed by 38.102.83.114 port 45448
Jan 20 13:20:56 np0005588918.novalocal sshd-session[7309]: pam_unix(sshd:session): session closed for user zuul
Jan 20 13:20:56 np0005588918.novalocal systemd[1]: session-3.scope: Deactivated successfully.
Jan 20 13:20:56 np0005588918.novalocal systemd-logind[796]: Session 3 logged out. Waiting for processes to exit.
Jan 20 13:20:56 np0005588918.novalocal systemd-logind[796]: Removed session 3.
Jan 20 13:22:22 np0005588918.novalocal systemd[4319]: Created slice User Background Tasks Slice.
Jan 20 13:22:22 np0005588918.novalocal systemd[4319]: Starting Cleanup of User's Temporary Files and Directories...
Jan 20 13:22:22 np0005588918.novalocal systemd[4319]: Finished Cleanup of User's Temporary Files and Directories.
Jan 20 13:26:22 np0005588918.novalocal sshd-session[7493]: Accepted publickey for zuul from 38.102.83.114 port 56384 ssh2: RSA SHA256:r50QbT7bSKscUimrVpe816OyonJCbpigaVSUx3I8hI8
Jan 20 13:26:22 np0005588918.novalocal systemd-logind[796]: New session 4 of user zuul.
Jan 20 13:26:22 np0005588918.novalocal systemd[1]: Started Session 4 of User zuul.
Jan 20 13:26:22 np0005588918.novalocal sshd-session[7493]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 20 13:26:22 np0005588918.novalocal sudo[7520]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qhfdbrkivmypfldfiqbaktwboevdrrbm ; /usr/bin/python3'
Jan 20 13:26:22 np0005588918.novalocal sudo[7520]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:26:22 np0005588918.novalocal python3[7522]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda _uses_shell=True zuul_log_id=fa163efc-24cc-67c0-97af-000000000ca4-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 13:26:22 np0005588918.novalocal sudo[7520]: pam_unix(sudo:session): session closed for user root
Jan 20 13:26:22 np0005588918.novalocal sudo[7548]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kkienynqsszutzlbjburmeoefvjcchqr ; /usr/bin/python3'
Jan 20 13:26:22 np0005588918.novalocal sudo[7548]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:26:23 np0005588918.novalocal python3[7550]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:26:23 np0005588918.novalocal sudo[7548]: pam_unix(sudo:session): session closed for user root
Jan 20 13:26:23 np0005588918.novalocal sudo[7575]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eqgowiqipibrawomzwbhqfrjykvaxbdy ; /usr/bin/python3'
Jan 20 13:26:23 np0005588918.novalocal sudo[7575]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:26:23 np0005588918.novalocal python3[7577]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:26:23 np0005588918.novalocal sudo[7575]: pam_unix(sudo:session): session closed for user root
Jan 20 13:26:23 np0005588918.novalocal sudo[7601]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-egwcvidexpcmajdtmxiyyvbttmqpuebu ; /usr/bin/python3'
Jan 20 13:26:23 np0005588918.novalocal sudo[7601]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:26:23 np0005588918.novalocal python3[7603]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:26:23 np0005588918.novalocal sudo[7601]: pam_unix(sudo:session): session closed for user root
Jan 20 13:26:23 np0005588918.novalocal sudo[7627]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xgpziyzkkdemhylryvldksbzjrpbpfbu ; /usr/bin/python3'
Jan 20 13:26:23 np0005588918.novalocal sudo[7627]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:26:23 np0005588918.novalocal python3[7629]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:26:23 np0005588918.novalocal sudo[7627]: pam_unix(sudo:session): session closed for user root
Jan 20 13:26:24 np0005588918.novalocal sudo[7653]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fhckmuzynirtjavorwtulzfnxeuoflme ; /usr/bin/python3'
Jan 20 13:26:24 np0005588918.novalocal sudo[7653]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:26:24 np0005588918.novalocal python3[7655]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:26:24 np0005588918.novalocal sudo[7653]: pam_unix(sudo:session): session closed for user root
Jan 20 13:26:24 np0005588918.novalocal sudo[7731]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zpkilgfujksyioogcvpxyyonivcpuqld ; /usr/bin/python3'
Jan 20 13:26:24 np0005588918.novalocal sudo[7731]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:26:24 np0005588918.novalocal python3[7733]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 20 13:26:24 np0005588918.novalocal sudo[7731]: pam_unix(sudo:session): session closed for user root
Jan 20 13:26:24 np0005588918.novalocal sudo[7804]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-anzevnoccmijiydfixtkheowobbpuagn ; /usr/bin/python3'
Jan 20 13:26:24 np0005588918.novalocal sudo[7804]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:26:25 np0005588918.novalocal python3[7806]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1768915584.5580597-364-28202673595285/source _original_basename=tmpoc8vywio follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:26:25 np0005588918.novalocal sudo[7804]: pam_unix(sudo:session): session closed for user root
Jan 20 13:26:26 np0005588918.novalocal sudo[7854]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ovrsexyndrwzyntxtfvmlehvswdtykfi ; /usr/bin/python3'
Jan 20 13:26:26 np0005588918.novalocal sudo[7854]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:26:26 np0005588918.novalocal python3[7856]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 20 13:26:26 np0005588918.novalocal systemd[1]: Reloading.
Jan 20 13:26:26 np0005588918.novalocal systemd-rc-local-generator[7875]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 13:26:26 np0005588918.novalocal sudo[7854]: pam_unix(sudo:session): session closed for user root
Jan 20 13:26:28 np0005588918.novalocal sudo[7910]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rjasyvsbvabhwebtrnkimmvfurzujimu ; /usr/bin/python3'
Jan 20 13:26:28 np0005588918.novalocal sudo[7910]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:26:28 np0005588918.novalocal python3[7912]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Jan 20 13:26:28 np0005588918.novalocal sudo[7910]: pam_unix(sudo:session): session closed for user root
Jan 20 13:26:28 np0005588918.novalocal sudo[7936]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ukqvivadqechqgyhdpfagvxbzwhcnpba ; /usr/bin/python3'
Jan 20 13:26:28 np0005588918.novalocal sudo[7936]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:26:28 np0005588918.novalocal python3[7938]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 13:26:28 np0005588918.novalocal sudo[7936]: pam_unix(sudo:session): session closed for user root
Jan 20 13:26:28 np0005588918.novalocal sudo[7964]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bykwzuaieuixglzmuwexsvskckjzrchj ; /usr/bin/python3'
Jan 20 13:26:28 np0005588918.novalocal sudo[7964]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:26:29 np0005588918.novalocal python3[7966]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 13:26:29 np0005588918.novalocal sudo[7964]: pam_unix(sudo:session): session closed for user root
Jan 20 13:26:29 np0005588918.novalocal sudo[7992]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-adyiskjchnpqsjjjpgvjfjoytewmxyoa ; /usr/bin/python3'
Jan 20 13:26:29 np0005588918.novalocal sudo[7992]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:26:29 np0005588918.novalocal python3[7994]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 13:26:29 np0005588918.novalocal sudo[7992]: pam_unix(sudo:session): session closed for user root
Jan 20 13:26:29 np0005588918.novalocal sudo[8020]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wrnnlfexrtgqdmujkwjxjdtsvjuhzhco ; /usr/bin/python3'
Jan 20 13:26:29 np0005588918.novalocal sudo[8020]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:26:29 np0005588918.novalocal python3[8022]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 13:26:29 np0005588918.novalocal sudo[8020]: pam_unix(sudo:session): session closed for user root
Jan 20 13:26:30 np0005588918.novalocal python3[8049]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;
                                                       _uses_shell=True zuul_log_id=fa163efc-24cc-67c0-97af-000000000cab-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 13:26:30 np0005588918.novalocal python3[8079]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 20 13:26:34 np0005588918.novalocal sshd-session[7496]: Connection closed by 38.102.83.114 port 56384
Jan 20 13:26:34 np0005588918.novalocal sshd-session[7493]: pam_unix(sshd:session): session closed for user zuul
Jan 20 13:26:34 np0005588918.novalocal systemd[1]: session-4.scope: Deactivated successfully.
Jan 20 13:26:34 np0005588918.novalocal systemd[1]: session-4.scope: Consumed 3.864s CPU time.
Jan 20 13:26:34 np0005588918.novalocal systemd-logind[796]: Session 4 logged out. Waiting for processes to exit.
Jan 20 13:26:34 np0005588918.novalocal systemd-logind[796]: Removed session 4.
Jan 20 13:26:35 np0005588918.novalocal sshd-session[8083]: Accepted publickey for zuul from 38.102.83.114 port 59748 ssh2: RSA SHA256:r50QbT7bSKscUimrVpe816OyonJCbpigaVSUx3I8hI8
Jan 20 13:26:36 np0005588918.novalocal systemd-logind[796]: New session 5 of user zuul.
Jan 20 13:26:36 np0005588918.novalocal systemd[1]: Started Session 5 of User zuul.
Jan 20 13:26:36 np0005588918.novalocal sshd-session[8083]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 20 13:26:36 np0005588918.novalocal sudo[8110]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yngiwjbwlqaphtbgxqyxdyqwlmjgipiq ; /usr/bin/python3'
Jan 20 13:26:36 np0005588918.novalocal sudo[8110]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:26:36 np0005588918.novalocal python3[8112]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 20 13:26:42 np0005588918.novalocal setsebool[8151]: The virt_use_nfs policy boolean was changed to 1 by root
Jan 20 13:26:42 np0005588918.novalocal setsebool[8151]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Jan 20 13:26:56 np0005588918.novalocal kernel: SELinux:  Converting 385 SID table entries...
Jan 20 13:26:56 np0005588918.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Jan 20 13:26:56 np0005588918.novalocal kernel: SELinux:  policy capability open_perms=1
Jan 20 13:26:56 np0005588918.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Jan 20 13:26:56 np0005588918.novalocal kernel: SELinux:  policy capability always_check_network=0
Jan 20 13:26:56 np0005588918.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 20 13:26:56 np0005588918.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 20 13:26:56 np0005588918.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 20 13:27:07 np0005588918.novalocal kernel: SELinux:  Converting 388 SID table entries...
Jan 20 13:27:07 np0005588918.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Jan 20 13:27:07 np0005588918.novalocal kernel: SELinux:  policy capability open_perms=1
Jan 20 13:27:07 np0005588918.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Jan 20 13:27:07 np0005588918.novalocal kernel: SELinux:  policy capability always_check_network=0
Jan 20 13:27:07 np0005588918.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 20 13:27:07 np0005588918.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 20 13:27:07 np0005588918.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 20 13:27:26 np0005588918.novalocal dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=4 res=1
Jan 20 13:27:26 np0005588918.novalocal systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 20 13:27:26 np0005588918.novalocal systemd[1]: Starting man-db-cache-update.service...
Jan 20 13:27:26 np0005588918.novalocal systemd[1]: Reloading.
Jan 20 13:27:27 np0005588918.novalocal systemd-rc-local-generator[8922]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 13:27:27 np0005588918.novalocal systemd[1]: Queuing reload/restart jobs for marked units…
Jan 20 13:27:28 np0005588918.novalocal sudo[8110]: pam_unix(sudo:session): session closed for user root
Jan 20 13:27:30 np0005588918.novalocal python3[12327]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"
                                                        _uses_shell=True zuul_log_id=fa163efc-24cc-45b7-a25c-00000000000c-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 13:27:31 np0005588918.novalocal kernel: evm: overlay not supported
Jan 20 13:27:31 np0005588918.novalocal systemd[4319]: Starting D-Bus User Message Bus...
Jan 20 13:27:31 np0005588918.novalocal dbus-broker-launch[13327]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Jan 20 13:27:31 np0005588918.novalocal dbus-broker-launch[13327]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Jan 20 13:27:31 np0005588918.novalocal systemd[4319]: Started D-Bus User Message Bus.
Jan 20 13:27:31 np0005588918.novalocal dbus-broker-lau[13327]: Ready
Jan 20 13:27:31 np0005588918.novalocal systemd[4319]: selinux: avc:  op=load_policy lsm=selinux seqno=4 res=1
Jan 20 13:27:31 np0005588918.novalocal systemd[4319]: Created slice Slice /user.
Jan 20 13:27:31 np0005588918.novalocal systemd[4319]: podman-13206.scope: unit configures an IP firewall, but not running as root.
Jan 20 13:27:31 np0005588918.novalocal systemd[4319]: (This warning is only shown for the first unit using IP firewalling.)
Jan 20 13:27:31 np0005588918.novalocal systemd[4319]: Started podman-13206.scope.
Jan 20 13:27:31 np0005588918.novalocal systemd[4319]: Started podman-pause-17592d73.scope.
Jan 20 13:27:32 np0005588918.novalocal sudo[13928]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-whehnokaxxclkimwzgyfbdrejecdlamg ; /usr/bin/python3'
Jan 20 13:27:32 np0005588918.novalocal sudo[13928]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:27:32 np0005588918.novalocal python3[13930]: ansible-ansible.builtin.blockinfile Invoked with state=present insertafter=EOF dest=/etc/containers/registries.conf content=[[registry]]
                                                       location = "38.102.83.233:5001"
                                                       insecure = true path=/etc/containers/registries.conf block=[[registry]]
                                                       location = "38.102.83.233:5001"
                                                       insecure = true marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:27:32 np0005588918.novalocal python3[13930]: ansible-ansible.builtin.blockinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
Jan 20 13:27:32 np0005588918.novalocal sudo[13928]: pam_unix(sudo:session): session closed for user root
Jan 20 13:27:33 np0005588918.novalocal sshd-session[8086]: Connection closed by 38.102.83.114 port 59748
Jan 20 13:27:33 np0005588918.novalocal sshd-session[8083]: pam_unix(sshd:session): session closed for user zuul
Jan 20 13:27:33 np0005588918.novalocal systemd[1]: session-5.scope: Deactivated successfully.
Jan 20 13:27:33 np0005588918.novalocal systemd[1]: session-5.scope: Consumed 48.719s CPU time.
Jan 20 13:27:33 np0005588918.novalocal systemd-logind[796]: Session 5 logged out. Waiting for processes to exit.
Jan 20 13:27:33 np0005588918.novalocal systemd-logind[796]: Removed session 5.
Jan 20 13:27:52 np0005588918.novalocal sshd-session[22012]: Unable to negotiate with 38.102.83.230 port 58004: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Jan 20 13:27:52 np0005588918.novalocal sshd-session[22008]: Connection closed by 38.102.83.230 port 57966 [preauth]
Jan 20 13:27:52 np0005588918.novalocal sshd-session[22014]: Connection closed by 38.102.83.230 port 57980 [preauth]
Jan 20 13:27:52 np0005588918.novalocal sshd-session[22009]: Unable to negotiate with 38.102.83.230 port 57998: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Jan 20 13:27:52 np0005588918.novalocal sshd-session[22015]: Unable to negotiate with 38.102.83.230 port 57986: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Jan 20 13:27:58 np0005588918.novalocal sshd-session[24043]: Accepted publickey for zuul from 38.102.83.114 port 57518 ssh2: RSA SHA256:r50QbT7bSKscUimrVpe816OyonJCbpigaVSUx3I8hI8
Jan 20 13:27:58 np0005588918.novalocal systemd-logind[796]: New session 6 of user zuul.
Jan 20 13:27:58 np0005588918.novalocal systemd[1]: Started Session 6 of User zuul.
Jan 20 13:27:58 np0005588918.novalocal sshd-session[24043]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 20 13:27:58 np0005588918.novalocal python3[24149]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCxDwznlRTnwVs4thRw4BwvCKpJwxh+kvQMz22TwtomAycQeWyoRQIrmHPtGJrxAVwBD4Z4rIf2j/gxUloZVamc= zuul@np0005588917.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 20 13:27:58 np0005588918.novalocal sudo[24337]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cmpomlqvazcpxjucbkaelpglitwnlrgm ; /usr/bin/python3'
Jan 20 13:27:58 np0005588918.novalocal sudo[24337]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:27:59 np0005588918.novalocal python3[24347]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCxDwznlRTnwVs4thRw4BwvCKpJwxh+kvQMz22TwtomAycQeWyoRQIrmHPtGJrxAVwBD4Z4rIf2j/gxUloZVamc= zuul@np0005588917.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 20 13:27:59 np0005588918.novalocal sudo[24337]: pam_unix(sudo:session): session closed for user root
Jan 20 13:27:59 np0005588918.novalocal sudo[24764]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mzcxhcheugshkpnytkerckvkxfstdshr ; /usr/bin/python3'
Jan 20 13:27:59 np0005588918.novalocal sudo[24764]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:28:00 np0005588918.novalocal python3[24771]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005588918.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Jan 20 13:28:00 np0005588918.novalocal useradd[24854]: new group: name=cloud-admin, GID=1002
Jan 20 13:28:00 np0005588918.novalocal useradd[24854]: new user: name=cloud-admin, UID=1002, GID=1002, home=/home/cloud-admin, shell=/bin/bash, from=none
Jan 20 13:28:00 np0005588918.novalocal sudo[24764]: pam_unix(sudo:session): session closed for user root
Jan 20 13:28:00 np0005588918.novalocal sudo[25101]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wtihcearefjnqbszkkjglpqjntlmvyho ; /usr/bin/python3'
Jan 20 13:28:00 np0005588918.novalocal sudo[25101]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:28:00 np0005588918.novalocal python3[25110]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCxDwznlRTnwVs4thRw4BwvCKpJwxh+kvQMz22TwtomAycQeWyoRQIrmHPtGJrxAVwBD4Z4rIf2j/gxUloZVamc= zuul@np0005588917.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 20 13:28:00 np0005588918.novalocal sudo[25101]: pam_unix(sudo:session): session closed for user root
Jan 20 13:28:01 np0005588918.novalocal sudo[25385]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bklnzlvpfunduwkzwhuklhempximdqjw ; /usr/bin/python3'
Jan 20 13:28:01 np0005588918.novalocal sudo[25385]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:28:01 np0005588918.novalocal python3[25394]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 20 13:28:01 np0005588918.novalocal sudo[25385]: pam_unix(sudo:session): session closed for user root
Jan 20 13:28:01 np0005588918.novalocal sudo[25676]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kdqvkydzstcxppncwvjfvkeksdcsrhfb ; /usr/bin/python3'
Jan 20 13:28:01 np0005588918.novalocal sudo[25676]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:28:01 np0005588918.novalocal python3[25683]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1768915681.0273058-167-111555983223123/source _original_basename=tmprlkmkncf follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:28:01 np0005588918.novalocal sudo[25676]: pam_unix(sudo:session): session closed for user root
Jan 20 13:28:02 np0005588918.novalocal sudo[26013]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uokjomqakrkqqygpggdrtxvopvdukrxi ; /usr/bin/python3'
Jan 20 13:28:02 np0005588918.novalocal sudo[26013]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:28:02 np0005588918.novalocal python3[26022]: ansible-ansible.builtin.hostname Invoked with name=compute-0 use=systemd
Jan 20 13:28:02 np0005588918.novalocal systemd[1]: Starting Hostname Service...
Jan 20 13:28:02 np0005588918.novalocal systemd[1]: Started Hostname Service.
Jan 20 13:28:02 np0005588918.novalocal systemd-hostnamed[26133]: Changed pretty hostname to 'compute-0'
Jan 20 13:28:02 compute-0 systemd-hostnamed[26133]: Hostname set to <compute-0> (static)
Jan 20 13:28:02 compute-0 NetworkManager[7209]: <info>  [1768915682.8818] hostname: static hostname changed from "np0005588918.novalocal" to "compute-0"
Jan 20 13:28:02 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 20 13:28:02 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 20 13:28:02 compute-0 sudo[26013]: pam_unix(sudo:session): session closed for user root
Jan 20 13:28:03 compute-0 sshd-session[24092]: Connection closed by 38.102.83.114 port 57518
Jan 20 13:28:03 compute-0 sshd-session[24043]: pam_unix(sshd:session): session closed for user zuul
Jan 20 13:28:03 compute-0 systemd[1]: session-6.scope: Deactivated successfully.
Jan 20 13:28:03 compute-0 systemd[1]: session-6.scope: Consumed 2.285s CPU time.
Jan 20 13:28:03 compute-0 systemd-logind[796]: Session 6 logged out. Waiting for processes to exit.
Jan 20 13:28:03 compute-0 systemd-logind[796]: Removed session 6.
Jan 20 13:28:12 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 20 13:28:13 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 20 13:28:13 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 20 13:28:13 compute-0 systemd[1]: man-db-cache-update.service: Consumed 55.991s CPU time.
Jan 20 13:28:13 compute-0 systemd[1]: run-r84c695fdae764bd6931204227bd02da8.service: Deactivated successfully.
Jan 20 13:28:16 compute-0 irqbalance[793]: Cannot change IRQ 27 affinity: Operation not permitted
Jan 20 13:28:16 compute-0 irqbalance[793]: IRQ 27 affinity is now unmanaged
Jan 20 13:28:32 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 20 13:32:16 compute-0 sshd-session[29934]: Connection closed by 159.223.5.14 port 36082
Jan 20 13:33:00 compute-0 sshd-session[29935]: Accepted publickey for zuul from 38.102.83.230 port 51488 ssh2: RSA SHA256:r50QbT7bSKscUimrVpe816OyonJCbpigaVSUx3I8hI8
Jan 20 13:33:00 compute-0 systemd-logind[796]: New session 7 of user zuul.
Jan 20 13:33:00 compute-0 systemd[1]: Started Session 7 of User zuul.
Jan 20 13:33:00 compute-0 sshd-session[29935]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 20 13:33:01 compute-0 python3[30011]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 13:33:02 compute-0 sudo[30125]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ouknszmxsbkbrinxgihedflfjjdomyrv ; /usr/bin/python3'
Jan 20 13:33:02 compute-0 sudo[30125]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:33:03 compute-0 python3[30127]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 20 13:33:03 compute-0 sudo[30125]: pam_unix(sudo:session): session closed for user root
Jan 20 13:33:03 compute-0 sudo[30198]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sndpddnxlcmarywmngnxerzuytgmempv ; /usr/bin/python3'
Jan 20 13:33:03 compute-0 sudo[30198]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:33:03 compute-0 python3[30200]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1768915982.826504-34065-80065892106219/source mode=0755 _original_basename=delorean.repo follow=False checksum=0f7c85cc67bf467c48edf98d5acc63e62d808324 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:33:03 compute-0 sudo[30198]: pam_unix(sudo:session): session closed for user root
Jan 20 13:33:03 compute-0 sudo[30224]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ihvltddrcdmopgadlikludzldbsgbqac ; /usr/bin/python3'
Jan 20 13:33:03 compute-0 sudo[30224]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:33:03 compute-0 python3[30226]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 20 13:33:03 compute-0 sudo[30224]: pam_unix(sudo:session): session closed for user root
Jan 20 13:33:04 compute-0 sudo[30297]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-epougikfadtxzwojkmzkupryobrfnqqc ; /usr/bin/python3'
Jan 20 13:33:04 compute-0 sudo[30297]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:33:04 compute-0 python3[30299]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1768915982.826504-34065-80065892106219/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=4ebc56dead962b5d40b8d420dad43b948b84d3fc backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:33:04 compute-0 sudo[30297]: pam_unix(sudo:session): session closed for user root
Jan 20 13:33:04 compute-0 sudo[30323]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-szdvaewgebqmweiqfnrwxjfnfonhkajj ; /usr/bin/python3'
Jan 20 13:33:04 compute-0 sudo[30323]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:33:04 compute-0 python3[30325]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 20 13:33:04 compute-0 sudo[30323]: pam_unix(sudo:session): session closed for user root
Jan 20 13:33:04 compute-0 sudo[30396]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kezmteczpwwshsxjwhdidtqyiabkkjgr ; /usr/bin/python3'
Jan 20 13:33:04 compute-0 sudo[30396]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:33:04 compute-0 python3[30398]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1768915982.826504-34065-80065892106219/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:33:04 compute-0 sudo[30396]: pam_unix(sudo:session): session closed for user root
Jan 20 13:33:04 compute-0 sudo[30422]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uwlqqkstslcxrhrmqfksdiiegeqaqujo ; /usr/bin/python3'
Jan 20 13:33:04 compute-0 sudo[30422]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:33:05 compute-0 python3[30424]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 20 13:33:05 compute-0 sudo[30422]: pam_unix(sudo:session): session closed for user root
Jan 20 13:33:05 compute-0 sudo[30495]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xfgdfinccaptmgajlwhuzxsfzfjrhndb ; /usr/bin/python3'
Jan 20 13:33:05 compute-0 sudo[30495]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:33:05 compute-0 python3[30497]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1768915982.826504-34065-80065892106219/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:33:05 compute-0 sudo[30495]: pam_unix(sudo:session): session closed for user root
Jan 20 13:33:05 compute-0 sudo[30521]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bhevpyvlvhvicbprarlmepbecgnntusx ; /usr/bin/python3'
Jan 20 13:33:05 compute-0 sudo[30521]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:33:05 compute-0 python3[30523]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 20 13:33:05 compute-0 sudo[30521]: pam_unix(sudo:session): session closed for user root
Jan 20 13:33:05 compute-0 sudo[30594]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-teknejnadtzlppfopmfqamlgvjcjaojm ; /usr/bin/python3'
Jan 20 13:33:05 compute-0 sudo[30594]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:33:05 compute-0 python3[30596]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1768915982.826504-34065-80065892106219/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:33:05 compute-0 sudo[30594]: pam_unix(sudo:session): session closed for user root
Jan 20 13:33:06 compute-0 sudo[30620]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zrpnegbgvmtvifswnjlzrrkrfdxlfeuj ; /usr/bin/python3'
Jan 20 13:33:06 compute-0 sudo[30620]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:33:06 compute-0 python3[30622]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 20 13:33:06 compute-0 sudo[30620]: pam_unix(sudo:session): session closed for user root
Jan 20 13:33:06 compute-0 sudo[30693]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gglxfldfnsyfpowshdxgixixeoykahgc ; /usr/bin/python3'
Jan 20 13:33:06 compute-0 sudo[30693]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:33:06 compute-0 python3[30695]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1768915982.826504-34065-80065892106219/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:33:06 compute-0 sudo[30693]: pam_unix(sudo:session): session closed for user root
Jan 20 13:33:06 compute-0 sudo[30719]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hgvvhwdxooiscxinspawjfqgorsukjjj ; /usr/bin/python3'
Jan 20 13:33:06 compute-0 sudo[30719]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:33:06 compute-0 python3[30721]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 20 13:33:06 compute-0 sudo[30719]: pam_unix(sudo:session): session closed for user root
Jan 20 13:33:06 compute-0 sudo[30792]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yodjbrbjwpgocatqnkcvtkdgyuhvcbne ; /usr/bin/python3'
Jan 20 13:33:06 compute-0 sudo[30792]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:33:07 compute-0 python3[30794]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1768915982.826504-34065-80065892106219/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=2583a70b3ee76a9837350b0837bc004a8e52405c backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:33:07 compute-0 sudo[30792]: pam_unix(sudo:session): session closed for user root
Jan 20 13:33:10 compute-0 sshd-session[30819]: Unable to negotiate with 192.168.122.11 port 43528: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Jan 20 13:33:10 compute-0 sshd-session[30821]: Connection closed by 192.168.122.11 port 43496 [preauth]
Jan 20 13:33:10 compute-0 sshd-session[30820]: Connection closed by 192.168.122.11 port 43508 [preauth]
Jan 20 13:33:10 compute-0 sshd-session[30822]: Unable to negotiate with 192.168.122.11 port 43518: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Jan 20 13:33:10 compute-0 sshd-session[30823]: Unable to negotiate with 192.168.122.11 port 43524: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Jan 20 13:33:20 compute-0 python3[30852]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 13:37:12 compute-0 sshd-session[30857]: Connection closed by authenticating user root 159.223.5.14 port 53372 [preauth]
Jan 20 13:38:20 compute-0 sshd-session[29938]: Received disconnect from 38.102.83.230 port 51488:11: disconnected by user
Jan 20 13:38:20 compute-0 sshd-session[29938]: Disconnected from user zuul 38.102.83.230 port 51488
Jan 20 13:38:20 compute-0 sshd-session[29935]: pam_unix(sshd:session): session closed for user zuul
Jan 20 13:38:20 compute-0 systemd[1]: session-7.scope: Deactivated successfully.
Jan 20 13:38:20 compute-0 systemd[1]: session-7.scope: Consumed 4.772s CPU time.
Jan 20 13:38:20 compute-0 systemd-logind[796]: Session 7 logged out. Waiting for processes to exit.
Jan 20 13:38:20 compute-0 systemd-logind[796]: Removed session 7.
Jan 20 13:38:33 compute-0 sshd-session[30859]: Connection closed by authenticating user root 159.223.5.14 port 33808 [preauth]
Jan 20 13:38:41 compute-0 sshd-session[30861]: Connection closed by 157.245.78.139 port 51984
Jan 20 13:39:49 compute-0 sshd-session[30863]: Connection closed by authenticating user root 159.223.5.14 port 50812 [preauth]
Jan 20 13:40:00 compute-0 sshd-session[30865]: Connection closed by authenticating user root 157.245.78.139 port 46124 [preauth]
Jan 20 13:40:53 compute-0 sshd-session[30867]: Connection closed by authenticating user root 157.245.78.139 port 54532 [preauth]
Jan 20 13:41:01 compute-0 sshd-session[30869]: Connection closed by authenticating user root 159.223.5.14 port 53922 [preauth]
Jan 20 13:41:46 compute-0 sshd-session[30872]: Connection closed by authenticating user root 157.245.78.139 port 36454 [preauth]
Jan 20 13:42:15 compute-0 sshd-session[30874]: Connection closed by authenticating user root 159.223.5.14 port 44730 [preauth]
Jan 20 13:42:37 compute-0 sshd-session[30876]: Connection closed by authenticating user root 157.245.78.139 port 40402 [preauth]
Jan 20 13:43:29 compute-0 sshd-session[30878]: Connection closed by authenticating user root 159.223.5.14 port 34200 [preauth]
Jan 20 13:43:30 compute-0 sshd-session[30880]: Connection closed by authenticating user root 157.245.78.139 port 46620 [preauth]
Jan 20 13:44:20 compute-0 sshd-session[30882]: Connection closed by authenticating user root 157.245.78.139 port 38018 [preauth]
Jan 20 13:44:39 compute-0 sshd-session[30884]: Connection closed by authenticating user root 159.223.5.14 port 35270 [preauth]
Jan 20 13:45:11 compute-0 sshd-session[30887]: Connection closed by authenticating user root 157.245.78.139 port 38960 [preauth]
Jan 20 13:45:43 compute-0 sshd-session[30890]: Accepted publickey for zuul from 192.168.122.30 port 38626 ssh2: ECDSA SHA256:Yw0kyD5N4lqNgr1J3b5cYIIxKFrTRY8zW6kk+n6imz4
Jan 20 13:45:43 compute-0 systemd-logind[796]: New session 8 of user zuul.
Jan 20 13:45:43 compute-0 systemd[1]: Started Session 8 of User zuul.
Jan 20 13:45:43 compute-0 sshd-session[30890]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 20 13:45:44 compute-0 python3.9[31043]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 13:45:46 compute-0 sudo[31224]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dliuradeypyixsanlnjumuyuskiseuul ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768916745.7579381-56-228944649526886/AnsiballZ_command.py'
Jan 20 13:45:46 compute-0 sudo[31224]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:45:46 compute-0 python3.9[31226]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                            pushd /var/tmp
                                            curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                            pushd repo-setup-main
                                            python3 -m venv ./venv
                                            PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                            ./venv/bin/repo-setup current-podified -b antelope
                                            popd
                                            rm -rf repo-setup-main
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 13:45:49 compute-0 sshd-session[31053]: Connection closed by authenticating user root 159.223.5.14 port 59950 [preauth]
Jan 20 13:45:54 compute-0 sudo[31224]: pam_unix(sudo:session): session closed for user root
Jan 20 13:45:54 compute-0 sshd-session[30893]: Connection closed by 192.168.122.30 port 38626
Jan 20 13:45:54 compute-0 sshd-session[30890]: pam_unix(sshd:session): session closed for user zuul
Jan 20 13:45:54 compute-0 systemd[1]: session-8.scope: Deactivated successfully.
Jan 20 13:45:54 compute-0 systemd[1]: session-8.scope: Consumed 7.683s CPU time.
Jan 20 13:45:54 compute-0 systemd-logind[796]: Session 8 logged out. Waiting for processes to exit.
Jan 20 13:45:54 compute-0 systemd-logind[796]: Removed session 8.
Jan 20 13:45:57 compute-0 sshd-session[31284]: Connection closed by authenticating user root 157.245.78.139 port 57828 [preauth]
Jan 20 13:46:10 compute-0 sshd-session[31286]: Accepted publickey for zuul from 192.168.122.30 port 55514 ssh2: ECDSA SHA256:Yw0kyD5N4lqNgr1J3b5cYIIxKFrTRY8zW6kk+n6imz4
Jan 20 13:46:10 compute-0 systemd-logind[796]: New session 9 of user zuul.
Jan 20 13:46:10 compute-0 systemd[1]: Started Session 9 of User zuul.
Jan 20 13:46:10 compute-0 sshd-session[31286]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 20 13:46:11 compute-0 python3.9[31439]: ansible-ansible.legacy.ping Invoked with data=pong
Jan 20 13:46:12 compute-0 python3.9[31613]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 13:46:13 compute-0 sudo[31763]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fjqqldwowkjnqfhfazarzbzddsbntavc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768916773.1945152-93-202155448645632/AnsiballZ_command.py'
Jan 20 13:46:13 compute-0 sudo[31763]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:46:13 compute-0 python3.9[31765]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 13:46:13 compute-0 sudo[31763]: pam_unix(sudo:session): session closed for user root
Jan 20 13:46:14 compute-0 sudo[31916]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ckqrjkomyjimrijrykkvgrwwksyodbog ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768916774.293563-129-33948782131261/AnsiballZ_stat.py'
Jan 20 13:46:14 compute-0 sudo[31916]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:46:14 compute-0 python3.9[31918]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 13:46:14 compute-0 sudo[31916]: pam_unix(sudo:session): session closed for user root
Jan 20 13:46:16 compute-0 sudo[32068]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yviulbmeqxxyblycvxgxsybkymkasbfp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768916775.1186988-153-14223416910825/AnsiballZ_file.py'
Jan 20 13:46:16 compute-0 sudo[32068]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:46:16 compute-0 python3.9[32070]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:46:16 compute-0 sudo[32068]: pam_unix(sudo:session): session closed for user root
Jan 20 13:46:16 compute-0 sudo[32220]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gfbhdbloezobxnwtdukjtjzvznwbpeex ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768916776.5209484-177-259228202745224/AnsiballZ_stat.py'
Jan 20 13:46:16 compute-0 sudo[32220]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:46:16 compute-0 python3.9[32222]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 13:46:16 compute-0 sudo[32220]: pam_unix(sudo:session): session closed for user root
Jan 20 13:46:17 compute-0 sudo[32343]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qramgnppyajoxunrkmapechmvpaimbkb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768916776.5209484-177-259228202745224/AnsiballZ_copy.py'
Jan 20 13:46:17 compute-0 sudo[32343]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:46:17 compute-0 python3.9[32345]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1768916776.5209484-177-259228202745224/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:46:17 compute-0 sudo[32343]: pam_unix(sudo:session): session closed for user root
Jan 20 13:46:18 compute-0 sudo[32495]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dowlufjvqaldoferxapgqglnrchdzjzj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768916778.0852904-222-248534895303488/AnsiballZ_setup.py'
Jan 20 13:46:18 compute-0 sudo[32495]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:46:18 compute-0 python3.9[32497]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 13:46:18 compute-0 sudo[32495]: pam_unix(sudo:session): session closed for user root
Jan 20 13:46:19 compute-0 sudo[32651]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bdoytutfdpzvhrodcdxzhwlsatvyghwy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768916779.2302544-246-27783518389297/AnsiballZ_file.py'
Jan 20 13:46:19 compute-0 sudo[32651]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:46:19 compute-0 python3.9[32653]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 13:46:19 compute-0 sudo[32651]: pam_unix(sudo:session): session closed for user root
Jan 20 13:46:20 compute-0 sudo[32803]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iyggfjsycmoexpgvmuliiabhbablhhww ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768916780.0564969-273-71363379951748/AnsiballZ_file.py'
Jan 20 13:46:20 compute-0 sudo[32803]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:46:20 compute-0 python3.9[32805]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 13:46:20 compute-0 sudo[32803]: pam_unix(sudo:session): session closed for user root
Jan 20 13:46:21 compute-0 python3.9[32955]: ansible-ansible.builtin.service_facts Invoked
Jan 20 13:46:27 compute-0 python3.9[33208]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:46:28 compute-0 python3.9[33358]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 13:46:30 compute-0 python3.9[33512]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 13:46:31 compute-0 sudo[33668]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-icaxrashcukuqqnmzjnqaikbyiasbnlo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768916790.7493289-417-27099603576227/AnsiballZ_setup.py'
Jan 20 13:46:31 compute-0 sudo[33668]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:46:31 compute-0 python3.9[33670]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 20 13:46:31 compute-0 sudo[33668]: pam_unix(sudo:session): session closed for user root
Jan 20 13:46:32 compute-0 sudo[33752]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sdwiwludapraasjgkyvqpleqehmfzavq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768916790.7493289-417-27099603576227/AnsiballZ_dnf.py'
Jan 20 13:46:32 compute-0 sudo[33752]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:46:32 compute-0 python3.9[33754]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 20 13:46:45 compute-0 sshd-session[33825]: Connection closed by authenticating user root 157.245.78.139 port 35948 [preauth]
Jan 20 13:46:57 compute-0 sshd-session[33872]: Connection closed by authenticating user root 159.223.5.14 port 42002 [preauth]
Jan 20 13:47:19 compute-0 systemd[1]: Reloading.
Jan 20 13:47:19 compute-0 systemd-rc-local-generator[33955]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 13:47:20 compute-0 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Jan 20 13:47:20 compute-0 systemd[1]: Reloading.
Jan 20 13:47:20 compute-0 systemd-rc-local-generator[34000]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 13:47:20 compute-0 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Jan 20 13:47:20 compute-0 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Jan 20 13:47:20 compute-0 systemd[1]: Reloading.
Jan 20 13:47:20 compute-0 systemd-rc-local-generator[34037]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 13:47:20 compute-0 systemd[1]: Listening on LVM2 poll daemon socket.
Jan 20 13:47:20 compute-0 dbus-broker-launch[768]: Noticed file-system modification, trigger reload.
Jan 20 13:47:21 compute-0 dbus-broker-launch[768]: Noticed file-system modification, trigger reload.
Jan 20 13:47:21 compute-0 dbus-broker-launch[768]: Noticed file-system modification, trigger reload.
Jan 20 13:47:32 compute-0 sshd-session[34088]: Connection closed by authenticating user root 157.245.78.139 port 33078 [preauth]
Jan 20 13:48:06 compute-0 sshd-session[34232]: Connection closed by authenticating user root 159.223.5.14 port 53638 [preauth]
Jan 20 13:48:18 compute-0 sshd-session[34267]: Connection closed by authenticating user root 157.245.78.139 port 52408 [preauth]
Jan 20 13:48:26 compute-0 kernel: SELinux:  Converting 2723 SID table entries...
Jan 20 13:48:26 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Jan 20 13:48:26 compute-0 kernel: SELinux:  policy capability open_perms=1
Jan 20 13:48:26 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Jan 20 13:48:26 compute-0 kernel: SELinux:  policy capability always_check_network=0
Jan 20 13:48:26 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 20 13:48:26 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 20 13:48:26 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 20 13:48:27 compute-0 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Jan 20 13:48:27 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 20 13:48:27 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 20 13:48:27 compute-0 systemd[1]: Reloading.
Jan 20 13:48:27 compute-0 systemd-rc-local-generator[34383]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 13:48:27 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 20 13:48:27 compute-0 sudo[33752]: pam_unix(sudo:session): session closed for user root
Jan 20 13:48:28 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 20 13:48:28 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 20 13:48:28 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1.115s CPU time.
Jan 20 13:48:28 compute-0 systemd[1]: run-r74e9de7a47434c9f8bf8a3f815567eee.service: Deactivated successfully.
Jan 20 13:48:38 compute-0 sudo[35295]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tbmodkuosdvnfnmpkwjdfezubyeeahae ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768916918.7044976-453-236382661708962/AnsiballZ_command.py'
Jan 20 13:48:38 compute-0 sudo[35295]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:48:39 compute-0 python3.9[35297]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 13:48:40 compute-0 sudo[35295]: pam_unix(sudo:session): session closed for user root
Jan 20 13:48:41 compute-0 sudo[35576]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ktbjbwvcxryfzyddbczsqmqcrnidcqoa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768916920.5057282-477-274911353632673/AnsiballZ_selinux.py'
Jan 20 13:48:41 compute-0 sudo[35576]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:48:41 compute-0 python3.9[35578]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Jan 20 13:48:41 compute-0 sudo[35576]: pam_unix(sudo:session): session closed for user root
Jan 20 13:48:42 compute-0 sudo[35728]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ofiqssdgbrhwnskabplzszbfvouqrivp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768916922.0191145-510-57097635678869/AnsiballZ_command.py'
Jan 20 13:48:42 compute-0 sudo[35728]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:48:42 compute-0 python3.9[35730]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Jan 20 13:48:43 compute-0 sudo[35728]: pam_unix(sudo:session): session closed for user root
Jan 20 13:48:44 compute-0 sudo[35881]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aukqhzslfzxvbphzrfevprkfsivddivf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768916924.0660577-534-54179072796034/AnsiballZ_file.py'
Jan 20 13:48:44 compute-0 sudo[35881]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:48:45 compute-0 python3.9[35883]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:48:45 compute-0 sudo[35881]: pam_unix(sudo:session): session closed for user root
Jan 20 13:48:47 compute-0 sudo[36033]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zbmxenkaehhukriueapdvxjpcvlzdnnu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768916926.611049-558-84719834065713/AnsiballZ_mount.py'
Jan 20 13:48:47 compute-0 sudo[36033]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:48:47 compute-0 python3.9[36035]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Jan 20 13:48:47 compute-0 sudo[36033]: pam_unix(sudo:session): session closed for user root
Jan 20 13:48:48 compute-0 sudo[36185]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dysmvnxtzbrrgccvlcgepqjclpdxlthx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768916928.395734-642-163233494433915/AnsiballZ_file.py'
Jan 20 13:48:48 compute-0 sudo[36185]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:48:48 compute-0 python3.9[36187]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 13:48:48 compute-0 sudo[36185]: pam_unix(sudo:session): session closed for user root
Jan 20 13:48:52 compute-0 sudo[36337]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iqtvlrjmecprkzsghwwafxccpwfacnbp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768916932.6473758-666-230259988401845/AnsiballZ_stat.py'
Jan 20 13:48:52 compute-0 sudo[36337]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:48:53 compute-0 python3.9[36339]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 13:48:53 compute-0 sudo[36337]: pam_unix(sudo:session): session closed for user root
Jan 20 13:48:53 compute-0 sudo[36460]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jxaksfgcfkgpyfamsnjfqkbekbzwcvcg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768916932.6473758-666-230259988401845/AnsiballZ_copy.py'
Jan 20 13:48:53 compute-0 sudo[36460]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:48:54 compute-0 python3.9[36462]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768916932.6473758-666-230259988401845/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=a9ac548cf1fa241f1d1335913ca73d2a10501b24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:48:54 compute-0 sudo[36460]: pam_unix(sudo:session): session closed for user root
Jan 20 13:48:57 compute-0 sudo[36612]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fxyhojoahpzyrxeyeursrscmrtuvfedn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768916937.6724386-738-126088849765131/AnsiballZ_stat.py'
Jan 20 13:48:57 compute-0 sudo[36612]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:48:58 compute-0 python3.9[36614]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 13:48:58 compute-0 sudo[36612]: pam_unix(sudo:session): session closed for user root
Jan 20 13:48:58 compute-0 sudo[36764]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qzwcpeoipsdayfstmugfoeusviotzotn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768916938.4642606-762-65872399720204/AnsiballZ_command.py'
Jan 20 13:48:58 compute-0 sudo[36764]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:48:58 compute-0 python3.9[36766]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/vgimportdevices --all _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 13:48:58 compute-0 sudo[36764]: pam_unix(sudo:session): session closed for user root
Jan 20 13:48:59 compute-0 sudo[36917]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ulvecmsbdycoqzmhifeboxwwfizcgaps ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768916939.311024-786-46873352791232/AnsiballZ_file.py'
Jan 20 13:48:59 compute-0 sudo[36917]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:48:59 compute-0 python3.9[36919]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/lvm/devices/system.devices state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:48:59 compute-0 sudo[36917]: pam_unix(sudo:session): session closed for user root
Jan 20 13:49:00 compute-0 sudo[37069]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-snyfesspdvnckvrtszkzchkpglfqdkwl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768916940.3938103-819-130596124930361/AnsiballZ_getent.py'
Jan 20 13:49:00 compute-0 sudo[37069]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:49:00 compute-0 python3.9[37071]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Jan 20 13:49:01 compute-0 sudo[37069]: pam_unix(sudo:session): session closed for user root
Jan 20 13:49:01 compute-0 sudo[37222]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eudzixdiqfdndiqhzxsoxcqvcmwrqhsr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768916941.3102565-843-190927174772557/AnsiballZ_group.py'
Jan 20 13:49:01 compute-0 sudo[37222]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:49:02 compute-0 python3.9[37224]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 20 13:49:02 compute-0 groupadd[37225]: group added to /etc/group: name=qemu, GID=107
Jan 20 13:49:02 compute-0 groupadd[37225]: group added to /etc/gshadow: name=qemu
Jan 20 13:49:02 compute-0 groupadd[37225]: new group: name=qemu, GID=107
Jan 20 13:49:02 compute-0 sudo[37222]: pam_unix(sudo:session): session closed for user root
Jan 20 13:49:02 compute-0 sudo[37380]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-segajwvurmglahrzvhxkenbgwpeaidin ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768916942.4072907-867-18263976768077/AnsiballZ_user.py'
Jan 20 13:49:02 compute-0 sudo[37380]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:49:03 compute-0 python3.9[37382]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 20 13:49:03 compute-0 useradd[37384]: new user: name=qemu, UID=107, GID=107, home=/home/qemu, shell=/sbin/nologin, from=/dev/pts/0
Jan 20 13:49:03 compute-0 sudo[37380]: pam_unix(sudo:session): session closed for user root
Jan 20 13:49:03 compute-0 sudo[37540]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jbpukdihsuwwhycqyuoykkuhaxfnzqlt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768916943.6880686-891-84422958437287/AnsiballZ_getent.py'
Jan 20 13:49:03 compute-0 sudo[37540]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:49:04 compute-0 python3.9[37542]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Jan 20 13:49:04 compute-0 sudo[37540]: pam_unix(sudo:session): session closed for user root
Jan 20 13:49:04 compute-0 sudo[37695]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rfndinarnwhmezwakvxtfpisnwfjfzvd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768916944.5401516-915-90964010386734/AnsiballZ_group.py'
Jan 20 13:49:04 compute-0 sudo[37695]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:49:04 compute-0 python3.9[37697]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 20 13:49:04 compute-0 groupadd[37698]: group added to /etc/group: name=hugetlbfs, GID=42477
Jan 20 13:49:04 compute-0 groupadd[37698]: group added to /etc/gshadow: name=hugetlbfs
Jan 20 13:49:04 compute-0 sshd-session[37598]: Connection closed by authenticating user root 157.245.78.139 port 42474 [preauth]
Jan 20 13:49:04 compute-0 groupadd[37698]: new group: name=hugetlbfs, GID=42477
Jan 20 13:49:05 compute-0 sudo[37695]: pam_unix(sudo:session): session closed for user root
Jan 20 13:49:05 compute-0 sudo[37853]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-razlqzxryblkcjkwnylxsaeclhocmwsi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768916945.487916-942-19856492788372/AnsiballZ_file.py'
Jan 20 13:49:05 compute-0 sudo[37853]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:49:05 compute-0 python3.9[37855]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Jan 20 13:49:05 compute-0 sudo[37853]: pam_unix(sudo:session): session closed for user root
Jan 20 13:49:06 compute-0 sudo[38005]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vrrhsthkrcpkknmjnnajclbmivtcmftd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768916946.5308793-975-22011209617404/AnsiballZ_dnf.py'
Jan 20 13:49:06 compute-0 sudo[38005]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:49:07 compute-0 python3.9[38007]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 20 13:49:09 compute-0 sudo[38005]: pam_unix(sudo:session): session closed for user root
Jan 20 13:49:09 compute-0 sudo[38158]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hcpgqlkzgfovntziupumbpbmtrllnyuh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768916949.1632123-999-84980151500291/AnsiballZ_file.py'
Jan 20 13:49:09 compute-0 sudo[38158]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:49:09 compute-0 python3.9[38160]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 13:49:09 compute-0 sudo[38158]: pam_unix(sudo:session): session closed for user root
Jan 20 13:49:10 compute-0 sudo[38310]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nhpkqyoiuodvnhyvrestuzhfatsrxvvu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768916950.0227413-1023-24594447521336/AnsiballZ_stat.py'
Jan 20 13:49:10 compute-0 sudo[38310]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:49:10 compute-0 python3.9[38312]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 13:49:10 compute-0 sudo[38310]: pam_unix(sudo:session): session closed for user root
Jan 20 13:49:10 compute-0 sudo[38433]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ypvenyzhaljiizcdcgvkknmpjiydjhvu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768916950.0227413-1023-24594447521336/AnsiballZ_copy.py'
Jan 20 13:49:10 compute-0 sudo[38433]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:49:11 compute-0 python3.9[38435]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1768916950.0227413-1023-24594447521336/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 20 13:49:11 compute-0 sudo[38433]: pam_unix(sudo:session): session closed for user root
Jan 20 13:49:12 compute-0 sudo[38585]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wttrgsnbychplxwkcbbdlvultmmaevxv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768916951.3840961-1068-156889879903229/AnsiballZ_systemd.py'
Jan 20 13:49:12 compute-0 sudo[38585]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:49:12 compute-0 python3.9[38587]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 20 13:49:12 compute-0 systemd[1]: Starting Load Kernel Modules...
Jan 20 13:49:12 compute-0 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 20 13:49:12 compute-0 kernel: Bridge firewalling registered
Jan 20 13:49:12 compute-0 systemd-modules-load[38592]: Inserted module 'br_netfilter'
Jan 20 13:49:12 compute-0 systemd[1]: Finished Load Kernel Modules.
Jan 20 13:49:12 compute-0 sudo[38585]: pam_unix(sudo:session): session closed for user root
Jan 20 13:49:13 compute-0 sudo[38747]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gonoasrneadyueswbqdanfcofynvfffb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768916952.7180965-1092-228169226829138/AnsiballZ_stat.py'
Jan 20 13:49:13 compute-0 sudo[38747]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:49:13 compute-0 python3.9[38749]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 13:49:13 compute-0 sudo[38747]: pam_unix(sudo:session): session closed for user root
Jan 20 13:49:14 compute-0 sudo[38871]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uryhyuujiugkjdbywyiolkmfvvscegly ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768916952.7180965-1092-228169226829138/AnsiballZ_copy.py'
Jan 20 13:49:14 compute-0 sudo[38871]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:49:14 compute-0 python3.9[38873]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1768916952.7180965-1092-228169226829138/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 20 13:49:14 compute-0 sudo[38871]: pam_unix(sudo:session): session closed for user root
Jan 20 13:49:16 compute-0 sudo[39023]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yfdrwahdfziihgxbxmftvvcrtdbbffxi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768916955.406289-1146-189618993813803/AnsiballZ_dnf.py'
Jan 20 13:49:16 compute-0 sudo[39023]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:49:16 compute-0 python3.9[39025]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 20 13:49:16 compute-0 sshd-session[38591]: Connection closed by authenticating user root 159.223.5.14 port 56486 [preauth]
Jan 20 13:49:19 compute-0 dbus-broker-launch[768]: Noticed file-system modification, trigger reload.
Jan 20 13:49:19 compute-0 dbus-broker-launch[768]: Noticed file-system modification, trigger reload.
Jan 20 13:49:20 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 20 13:49:20 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 20 13:49:20 compute-0 systemd[1]: Reloading.
Jan 20 13:49:20 compute-0 systemd-rc-local-generator[39085]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 13:49:20 compute-0 systemd[1]: Starting dnf makecache...
Jan 20 13:49:20 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 20 13:49:20 compute-0 dnf[39100]: Failed determining last makecache time.
Jan 20 13:49:20 compute-0 dnf[39100]: delorean-openstack-barbican-42b4c41831408a8e323 125 kB/s | 3.0 kB     00:00
Jan 20 13:49:20 compute-0 dnf[39100]: delorean-python-glean-10df0bd91b9bc5c9fd9cc02d7 163 kB/s | 3.0 kB     00:00
Jan 20 13:49:20 compute-0 dnf[39100]: delorean-openstack-cinder-1c00d6490d88e436f26ef 145 kB/s | 3.0 kB     00:00
Jan 20 13:49:20 compute-0 dnf[39100]: delorean-python-stevedore-c4acc5639fd2329372142 140 kB/s | 3.0 kB     00:00
Jan 20 13:49:20 compute-0 dnf[39100]: delorean-python-cloudkitty-tests-tempest-2c80f8 156 kB/s | 3.0 kB     00:00
Jan 20 13:49:20 compute-0 dnf[39100]: delorean-os-refresh-config-9bfc52b5049be2d8de61 152 kB/s | 3.0 kB     00:00
Jan 20 13:49:20 compute-0 dnf[39100]: delorean-openstack-nova-6f8decf0b4f1aa2e96292b6 174 kB/s | 3.0 kB     00:00
Jan 20 13:49:20 compute-0 dnf[39100]: delorean-python-designate-tests-tempest-347fdbc 177 kB/s | 3.0 kB     00:00
Jan 20 13:49:20 compute-0 dnf[39100]: delorean-openstack-glance-1fd12c29b339f30fe823e 171 kB/s | 3.0 kB     00:00
Jan 20 13:49:20 compute-0 dnf[39100]: delorean-openstack-keystone-e4b40af0ae3698fbbbb 172 kB/s | 3.0 kB     00:00
Jan 20 13:49:20 compute-0 sudo[39023]: pam_unix(sudo:session): session closed for user root
Jan 20 13:49:20 compute-0 dnf[39100]: delorean-openstack-manila-3c01b7181572c95dac462 182 kB/s | 3.0 kB     00:00
Jan 20 13:49:20 compute-0 dnf[39100]: delorean-python-whitebox-neutron-tests-tempest- 174 kB/s | 3.0 kB     00:00
Jan 20 13:49:20 compute-0 dnf[39100]: delorean-openstack-octavia-ba397f07a7331190208c 166 kB/s | 3.0 kB     00:00
Jan 20 13:49:20 compute-0 dnf[39100]: delorean-openstack-watcher-c014f81a8647287f6dcc 184 kB/s | 3.0 kB     00:00
Jan 20 13:49:20 compute-0 dnf[39100]: delorean-ansible-config_template-5ccaa22121a7ff 158 kB/s | 3.0 kB     00:00
Jan 20 13:49:21 compute-0 dnf[39100]: delorean-puppet-ceph-7352068d7b8c84ded636ab3158 187 kB/s | 3.0 kB     00:00
Jan 20 13:49:21 compute-0 dnf[39100]: delorean-openstack-swift-dc98a8463506ac520c469a 186 kB/s | 3.0 kB     00:00
Jan 20 13:49:21 compute-0 dnf[39100]: delorean-python-tempestconf-8515371b7cceebd4282 178 kB/s | 3.0 kB     00:00
Jan 20 13:49:21 compute-0 dnf[39100]: delorean-openstack-heat-ui-013accbfd179753bc3f0 181 kB/s | 3.0 kB     00:00
Jan 20 13:49:21 compute-0 dnf[39100]: CentOS Stream 9 - BaseOS                         28 kB/s | 6.4 kB     00:00
Jan 20 13:49:21 compute-0 dnf[39100]: CentOS Stream 9 - AppStream                      62 kB/s | 6.8 kB     00:00
Jan 20 13:49:21 compute-0 dnf[39100]: CentOS Stream 9 - CRB                            68 kB/s | 6.3 kB     00:00
Jan 20 13:49:21 compute-0 python3.9[40431]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 13:49:21 compute-0 dnf[39100]: CentOS Stream 9 - Extras packages                32 kB/s | 7.3 kB     00:00
Jan 20 13:49:21 compute-0 dnf[39100]: dlrn-antelope-testing                           136 kB/s | 3.0 kB     00:00
Jan 20 13:49:21 compute-0 dnf[39100]: dlrn-antelope-build-deps                        156 kB/s | 3.0 kB     00:00
Jan 20 13:49:22 compute-0 dnf[39100]: centos9-rabbitmq                                130 kB/s | 3.0 kB     00:00
Jan 20 13:49:22 compute-0 dnf[39100]: centos9-storage                                 149 kB/s | 3.0 kB     00:00
Jan 20 13:49:22 compute-0 dnf[39100]: centos9-opstools                                139 kB/s | 3.0 kB     00:00
Jan 20 13:49:22 compute-0 dnf[39100]: NFV SIG OpenvSwitch                             123 kB/s | 3.0 kB     00:00
Jan 20 13:49:22 compute-0 dnf[39100]: repo-setup-centos-appstream                     205 kB/s | 4.4 kB     00:00
Jan 20 13:49:22 compute-0 dnf[39100]: repo-setup-centos-baseos                        123 kB/s | 3.9 kB     00:00
Jan 20 13:49:22 compute-0 dnf[39100]: repo-setup-centos-highavailability               91 kB/s | 3.9 kB     00:00
Jan 20 13:49:22 compute-0 dnf[39100]: repo-setup-centos-powertools                    140 kB/s | 4.3 kB     00:00
Jan 20 13:49:22 compute-0 dnf[39100]: Extra Packages for Enterprise Linux 9 - x86_64  238 kB/s |  32 kB     00:00
Jan 20 13:49:22 compute-0 python3.9[41325]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Jan 20 13:49:23 compute-0 dnf[39100]: Metadata cache created.
Jan 20 13:49:23 compute-0 systemd[1]: dnf-makecache.service: Deactivated successfully.
Jan 20 13:49:23 compute-0 systemd[1]: Finished dnf makecache.
Jan 20 13:49:23 compute-0 systemd[1]: dnf-makecache.service: Consumed 1.920s CPU time.
Jan 20 13:49:23 compute-0 python3.9[42176]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 13:49:24 compute-0 sudo[43059]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qfhnrmvjxbfgsqbiadopckvmhlpvgied ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768916963.815031-1263-38360946790889/AnsiballZ_command.py'
Jan 20 13:49:24 compute-0 sudo[43059]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:49:24 compute-0 python3.9[43085]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 13:49:24 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Jan 20 13:49:24 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 20 13:49:24 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 20 13:49:24 compute-0 systemd[1]: man-db-cache-update.service: Consumed 5.547s CPU time.
Jan 20 13:49:24 compute-0 systemd[1]: run-rfb6f259ac57f4d73b9795c2b920326c6.service: Deactivated successfully.
Jan 20 13:49:24 compute-0 systemd[1]: Starting Authorization Manager...
Jan 20 13:49:24 compute-0 polkitd[43447]: Started polkitd version 0.117
Jan 20 13:49:24 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Jan 20 13:49:25 compute-0 polkitd[43447]: Loading rules from directory /etc/polkit-1/rules.d
Jan 20 13:49:25 compute-0 polkitd[43447]: Loading rules from directory /usr/share/polkit-1/rules.d
Jan 20 13:49:25 compute-0 polkitd[43447]: Finished loading, compiling and executing 2 rules
Jan 20 13:49:25 compute-0 systemd[1]: Started Authorization Manager.
Jan 20 13:49:25 compute-0 polkitd[43447]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Jan 20 13:49:25 compute-0 sudo[43059]: pam_unix(sudo:session): session closed for user root
Jan 20 13:49:25 compute-0 sudo[43615]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mzddqihwuktuevcexxjuqbcnbcjhheox ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768916965.5581918-1290-201062565354645/AnsiballZ_systemd.py'
Jan 20 13:49:25 compute-0 sudo[43615]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:49:26 compute-0 python3.9[43617]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 13:49:26 compute-0 systemd[1]: Stopping Dynamic System Tuning Daemon...
Jan 20 13:49:26 compute-0 systemd[1]: tuned.service: Deactivated successfully.
Jan 20 13:49:26 compute-0 systemd[1]: Stopped Dynamic System Tuning Daemon.
Jan 20 13:49:26 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Jan 20 13:49:26 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Jan 20 13:49:26 compute-0 sudo[43615]: pam_unix(sudo:session): session closed for user root
Jan 20 13:49:27 compute-0 python3.9[43778]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Jan 20 13:49:30 compute-0 sudo[43928]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xsgdaxilgbqorafkublkikxqouhzrjjt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768916970.493707-1461-25037050964679/AnsiballZ_systemd.py'
Jan 20 13:49:30 compute-0 sudo[43928]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:49:31 compute-0 python3.9[43930]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 13:49:31 compute-0 systemd[1]: Reloading.
Jan 20 13:49:31 compute-0 systemd-rc-local-generator[43958]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 13:49:31 compute-0 sudo[43928]: pam_unix(sudo:session): session closed for user root
Jan 20 13:49:31 compute-0 sudo[44116]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oizwupjixgnsorgcwwxezedpnctrjijv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768916971.564266-1461-45995560524794/AnsiballZ_systemd.py'
Jan 20 13:49:31 compute-0 sudo[44116]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:49:32 compute-0 python3.9[44118]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 13:49:32 compute-0 systemd[1]: Reloading.
Jan 20 13:49:32 compute-0 systemd-rc-local-generator[44146]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 13:49:32 compute-0 sudo[44116]: pam_unix(sudo:session): session closed for user root
Jan 20 13:49:33 compute-0 sudo[44305]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tuulpyinsewqycuyqjtqcinijkcosooi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768916972.7815917-1509-40005296892484/AnsiballZ_command.py'
Jan 20 13:49:33 compute-0 sudo[44305]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:49:33 compute-0 python3.9[44307]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 13:49:33 compute-0 sudo[44305]: pam_unix(sudo:session): session closed for user root
Jan 20 13:49:33 compute-0 sudo[44458]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sflbnguyevwtcnjsndtccpysktthcdgi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768916973.5690315-1533-259751297082848/AnsiballZ_command.py'
Jan 20 13:49:33 compute-0 sudo[44458]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:49:33 compute-0 python3.9[44460]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 13:49:34 compute-0 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Jan 20 13:49:34 compute-0 sudo[44458]: pam_unix(sudo:session): session closed for user root
Jan 20 13:49:34 compute-0 sudo[44611]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zpvfvkzpmwuxivrkyxgnaibwwrfwdwrs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768916974.3614001-1557-75178963009749/AnsiballZ_command.py'
Jan 20 13:49:34 compute-0 sudo[44611]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:49:34 compute-0 python3.9[44613]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 13:49:36 compute-0 sudo[44611]: pam_unix(sudo:session): session closed for user root
Jan 20 13:49:37 compute-0 sudo[44773]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-snvsxaqlwofbsxmuimaomrwaqppulgbq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768916976.7680702-1581-159833885887490/AnsiballZ_command.py'
Jan 20 13:49:37 compute-0 sudo[44773]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:49:37 compute-0 python3.9[44775]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 13:49:37 compute-0 sudo[44773]: pam_unix(sudo:session): session closed for user root
Jan 20 13:49:37 compute-0 sudo[44926]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-spchqaucwmmxxoweuzswkxutlkbtemkf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768916977.5967288-1605-168629017072474/AnsiballZ_systemd.py'
Jan 20 13:49:37 compute-0 sudo[44926]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:49:38 compute-0 python3.9[44928]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 20 13:49:38 compute-0 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 20 13:49:38 compute-0 systemd[1]: Stopped Apply Kernel Variables.
Jan 20 13:49:38 compute-0 systemd[1]: Stopping Apply Kernel Variables...
Jan 20 13:49:38 compute-0 systemd[1]: Starting Apply Kernel Variables...
Jan 20 13:49:38 compute-0 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jan 20 13:49:38 compute-0 systemd[1]: Finished Apply Kernel Variables.
Jan 20 13:49:38 compute-0 sudo[44926]: pam_unix(sudo:session): session closed for user root
Jan 20 13:49:38 compute-0 sshd-session[31289]: Connection closed by 192.168.122.30 port 55514
Jan 20 13:49:38 compute-0 sshd-session[31286]: pam_unix(sshd:session): session closed for user zuul
Jan 20 13:49:38 compute-0 systemd[1]: session-9.scope: Deactivated successfully.
Jan 20 13:49:38 compute-0 systemd[1]: session-9.scope: Consumed 2min 21.267s CPU time.
Jan 20 13:49:38 compute-0 systemd-logind[796]: Session 9 logged out. Waiting for processes to exit.
Jan 20 13:49:38 compute-0 systemd-logind[796]: Removed session 9.
Jan 20 13:49:44 compute-0 sshd-session[44959]: Accepted publickey for zuul from 192.168.122.30 port 48896 ssh2: ECDSA SHA256:Yw0kyD5N4lqNgr1J3b5cYIIxKFrTRY8zW6kk+n6imz4
Jan 20 13:49:44 compute-0 systemd-logind[796]: New session 10 of user zuul.
Jan 20 13:49:44 compute-0 systemd[1]: Started Session 10 of User zuul.
Jan 20 13:49:44 compute-0 sshd-session[44959]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 20 13:49:45 compute-0 python3.9[45112]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 13:49:46 compute-0 sudo[45266]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uiorpmtfruupywkhrnbophxrgjeglcgg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768916986.265008-68-248326164359958/AnsiballZ_getent.py'
Jan 20 13:49:46 compute-0 sudo[45266]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:49:46 compute-0 python3.9[45268]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Jan 20 13:49:46 compute-0 sudo[45266]: pam_unix(sudo:session): session closed for user root
Jan 20 13:49:47 compute-0 sudo[45419]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qqwjrkbukvjyxmyfscuqxlysduuittif ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768916987.1858616-92-179723548960315/AnsiballZ_group.py'
Jan 20 13:49:47 compute-0 sudo[45419]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:49:47 compute-0 python3.9[45421]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 20 13:49:47 compute-0 groupadd[45422]: group added to /etc/group: name=openvswitch, GID=42476
Jan 20 13:49:47 compute-0 groupadd[45422]: group added to /etc/gshadow: name=openvswitch
Jan 20 13:49:47 compute-0 groupadd[45422]: new group: name=openvswitch, GID=42476
Jan 20 13:49:47 compute-0 sudo[45419]: pam_unix(sudo:session): session closed for user root
Jan 20 13:49:48 compute-0 sudo[45577]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-txrlhkrjlmxsmtbulxduaayedfbojqzd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768916988.3689067-116-72178965285938/AnsiballZ_user.py'
Jan 20 13:49:48 compute-0 sudo[45577]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:49:49 compute-0 python3.9[45579]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 20 13:49:49 compute-0 useradd[45581]: new user: name=openvswitch, UID=42476, GID=42476, home=/home/openvswitch, shell=/sbin/nologin, from=/dev/pts/0
Jan 20 13:49:49 compute-0 useradd[45581]: add 'openvswitch' to group 'hugetlbfs'
Jan 20 13:49:49 compute-0 useradd[45581]: add 'openvswitch' to shadow group 'hugetlbfs'
Jan 20 13:49:49 compute-0 sudo[45577]: pam_unix(sudo:session): session closed for user root
Jan 20 13:49:49 compute-0 sudo[45737]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gzdbkklnuibpsbmoamgxnovgucloqnok ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768916989.5615623-146-30940093730801/AnsiballZ_setup.py'
Jan 20 13:49:49 compute-0 sudo[45737]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:49:50 compute-0 python3.9[45739]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 20 13:49:50 compute-0 sudo[45737]: pam_unix(sudo:session): session closed for user root
Jan 20 13:49:50 compute-0 sudo[45821]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kbhgotegbaduwpyacxwerexqvppxwvti ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768916989.5615623-146-30940093730801/AnsiballZ_dnf.py'
Jan 20 13:49:50 compute-0 sudo[45821]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:49:51 compute-0 python3.9[45823]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 20 13:49:52 compute-0 sshd-session[45825]: Connection closed by authenticating user root 157.245.78.139 port 38404 [preauth]
Jan 20 13:49:53 compute-0 sudo[45821]: pam_unix(sudo:session): session closed for user root
Jan 20 13:49:54 compute-0 sudo[45987]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-myvkdewclunzwxmzkybhghgvhrqteptl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768916993.9916887-188-258170261084127/AnsiballZ_dnf.py'
Jan 20 13:49:54 compute-0 sudo[45987]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:49:54 compute-0 python3.9[45989]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 20 13:50:08 compute-0 kernel: SELinux:  Converting 2736 SID table entries...
Jan 20 13:50:08 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Jan 20 13:50:08 compute-0 kernel: SELinux:  policy capability open_perms=1
Jan 20 13:50:08 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Jan 20 13:50:08 compute-0 kernel: SELinux:  policy capability always_check_network=0
Jan 20 13:50:08 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 20 13:50:08 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 20 13:50:08 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 20 13:50:08 compute-0 groupadd[46012]: group added to /etc/group: name=unbound, GID=994
Jan 20 13:50:08 compute-0 groupadd[46012]: group added to /etc/gshadow: name=unbound
Jan 20 13:50:08 compute-0 groupadd[46012]: new group: name=unbound, GID=994
Jan 20 13:50:08 compute-0 useradd[46019]: new user: name=unbound, UID=993, GID=994, home=/var/lib/unbound, shell=/sbin/nologin, from=none
Jan 20 13:50:08 compute-0 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=7 res=1
Jan 20 13:50:08 compute-0 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Jan 20 13:50:09 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 20 13:50:09 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 20 13:50:10 compute-0 systemd[1]: Reloading.
Jan 20 13:50:10 compute-0 systemd-rc-local-generator[46517]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 13:50:10 compute-0 systemd-sysv-generator[46521]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 13:50:10 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 20 13:50:10 compute-0 sudo[45987]: pam_unix(sudo:session): session closed for user root
Jan 20 13:50:10 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 20 13:50:10 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 20 13:50:10 compute-0 systemd[1]: run-rabc34ab8d05e42849f02171f0cd7ac16.service: Deactivated successfully.
Jan 20 13:50:11 compute-0 sudo[47084]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yctyyjjqwftrubiilhjmqsdfigjejfwo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917011.046595-212-272342343773530/AnsiballZ_systemd.py'
Jan 20 13:50:11 compute-0 sudo[47084]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:50:11 compute-0 python3.9[47086]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 20 13:50:11 compute-0 systemd[1]: Reloading.
Jan 20 13:50:12 compute-0 systemd-sysv-generator[47120]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 13:50:12 compute-0 systemd-rc-local-generator[47117]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 13:50:12 compute-0 systemd[1]: Starting Open vSwitch Database Unit...
Jan 20 13:50:12 compute-0 chown[47128]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Jan 20 13:50:12 compute-0 ovs-ctl[47133]: /etc/openvswitch/conf.db does not exist ... (warning).
Jan 20 13:50:12 compute-0 ovs-ctl[47133]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Jan 20 13:50:12 compute-0 ovs-ctl[47133]: Starting ovsdb-server [  OK  ]
Jan 20 13:50:12 compute-0 ovs-vsctl[47183]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Jan 20 13:50:12 compute-0 ovs-vsctl[47203]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"367c1a2c-b16a-4828-ab5a-626bb50023b4\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Jan 20 13:50:12 compute-0 ovs-ctl[47133]: Configuring Open vSwitch system IDs [  OK  ]
Jan 20 13:50:12 compute-0 ovs-vsctl[47209]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Jan 20 13:50:12 compute-0 ovs-ctl[47133]: Enabling remote OVSDB managers [  OK  ]
Jan 20 13:50:12 compute-0 systemd[1]: Started Open vSwitch Database Unit.
Jan 20 13:50:12 compute-0 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Jan 20 13:50:12 compute-0 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Jan 20 13:50:12 compute-0 systemd[1]: Starting Open vSwitch Forwarding Unit...
Jan 20 13:50:12 compute-0 kernel: openvswitch: Open vSwitch switching datapath
Jan 20 13:50:12 compute-0 ovs-ctl[47253]: Inserting openvswitch module [  OK  ]
Jan 20 13:50:12 compute-0 ovs-ctl[47222]: Starting ovs-vswitchd [  OK  ]
Jan 20 13:50:12 compute-0 ovs-vsctl[47270]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Jan 20 13:50:12 compute-0 ovs-ctl[47222]: Enabling remote OVSDB managers [  OK  ]
Jan 20 13:50:12 compute-0 systemd[1]: Started Open vSwitch Forwarding Unit.
Jan 20 13:50:12 compute-0 systemd[1]: Starting Open vSwitch...
Jan 20 13:50:12 compute-0 systemd[1]: Finished Open vSwitch.
Jan 20 13:50:12 compute-0 sudo[47084]: pam_unix(sudo:session): session closed for user root
Jan 20 13:50:14 compute-0 python3.9[47422]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 13:50:15 compute-0 sudo[47573]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nylomjivqqizrfkeohgjyyragdnwqdyk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917014.6353285-266-12950542884603/AnsiballZ_sefcontext.py'
Jan 20 13:50:15 compute-0 sudo[47573]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:50:15 compute-0 python3.9[47575]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Jan 20 13:50:16 compute-0 kernel: SELinux:  Converting 2750 SID table entries...
Jan 20 13:50:16 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Jan 20 13:50:16 compute-0 kernel: SELinux:  policy capability open_perms=1
Jan 20 13:50:16 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Jan 20 13:50:16 compute-0 kernel: SELinux:  policy capability always_check_network=0
Jan 20 13:50:16 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 20 13:50:16 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 20 13:50:16 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 20 13:50:16 compute-0 sudo[47573]: pam_unix(sudo:session): session closed for user root
Jan 20 13:50:17 compute-0 python3.9[47730]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 13:50:18 compute-0 sudo[47886]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bhwqxqjoqhzetvzratgmzsgvdzvhnrlm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917018.3735967-320-259470589167767/AnsiballZ_dnf.py'
Jan 20 13:50:18 compute-0 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Jan 20 13:50:18 compute-0 sudo[47886]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:50:18 compute-0 python3.9[47888]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 20 13:50:20 compute-0 sudo[47886]: pam_unix(sudo:session): session closed for user root
Jan 20 13:50:20 compute-0 sudo[48039]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iyguhpljkauvsxphehhgkntddmirplvi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917020.4773898-344-158376925895264/AnsiballZ_command.py'
Jan 20 13:50:20 compute-0 sudo[48039]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:50:21 compute-0 python3.9[48041]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 13:50:21 compute-0 sudo[48039]: pam_unix(sudo:session): session closed for user root
Jan 20 13:50:22 compute-0 sudo[48328]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rclnrhapwyzbeaxezgjjynbreetynxyf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917022.1683474-368-42070431018822/AnsiballZ_file.py'
Jan 20 13:50:22 compute-0 sudo[48328]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:50:22 compute-0 python3.9[48330]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None attributes=None
Jan 20 13:50:22 compute-0 sudo[48328]: pam_unix(sudo:session): session closed for user root
Jan 20 13:50:24 compute-0 python3.9[48480]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 13:50:24 compute-0 sudo[48632]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hiegprhjofipeyxwwafmabngjalpfzdb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917024.537985-416-272361885661529/AnsiballZ_dnf.py'
Jan 20 13:50:24 compute-0 sudo[48632]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:50:25 compute-0 python3.9[48634]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 20 13:50:25 compute-0 sshd-session[48048]: Connection closed by authenticating user root 159.223.5.14 port 40794 [preauth]
Jan 20 13:50:26 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 20 13:50:26 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 20 13:50:27 compute-0 systemd[1]: Reloading.
Jan 20 13:50:27 compute-0 systemd-rc-local-generator[48669]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 13:50:27 compute-0 systemd-sysv-generator[48672]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 13:50:27 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 20 13:50:27 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 20 13:50:27 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 20 13:50:27 compute-0 systemd[1]: run-r0c7f1d30fef849cf9f4e6bfebe09aaa5.service: Deactivated successfully.
Jan 20 13:50:27 compute-0 sudo[48632]: pam_unix(sudo:session): session closed for user root
Jan 20 13:50:28 compute-0 sudo[48949]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kbbhkvhnlicrlgagdlmlvqwywgqxmmye ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917027.7981932-440-161672214578639/AnsiballZ_systemd.py'
Jan 20 13:50:28 compute-0 sudo[48949]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:50:28 compute-0 python3.9[48951]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 20 13:50:28 compute-0 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Jan 20 13:50:28 compute-0 systemd[1]: Stopped Network Manager Wait Online.
Jan 20 13:50:28 compute-0 systemd[1]: Stopping Network Manager Wait Online...
Jan 20 13:50:28 compute-0 systemd[1]: Stopping Network Manager...
Jan 20 13:50:28 compute-0 NetworkManager[7209]: <info>  [1768917028.4584] caught SIGTERM, shutting down normally.
Jan 20 13:50:28 compute-0 NetworkManager[7209]: <info>  [1768917028.4607] dhcp4 (eth0): canceled DHCP transaction
Jan 20 13:50:28 compute-0 NetworkManager[7209]: <info>  [1768917028.4607] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 20 13:50:28 compute-0 NetworkManager[7209]: <info>  [1768917028.4607] dhcp4 (eth0): state changed no lease
Jan 20 13:50:28 compute-0 NetworkManager[7209]: <info>  [1768917028.4610] manager: NetworkManager state is now CONNECTED_SITE
Jan 20 13:50:28 compute-0 NetworkManager[7209]: <info>  [1768917028.4697] exiting (success)
Jan 20 13:50:28 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 20 13:50:28 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 20 13:50:28 compute-0 systemd[1]: NetworkManager.service: Deactivated successfully.
Jan 20 13:50:28 compute-0 systemd[1]: Stopped Network Manager.
Jan 20 13:50:28 compute-0 systemd[1]: NetworkManager.service: Consumed 11.035s CPU time, 4.1M memory peak, read 0B from disk, written 19.0K to disk.
Jan 20 13:50:28 compute-0 systemd[1]: Starting Network Manager...
Jan 20 13:50:28 compute-0 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 20 13:50:28 compute-0 NetworkManager[48960]: <info>  [1768917028.5583] NetworkManager (version 1.54.3-2.el9) is starting... (after a restart, boot:961c158a-383a-4022-81b9-57a8f5012ec2)
Jan 20 13:50:28 compute-0 NetworkManager[48960]: <info>  [1768917028.5590] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Jan 20 13:50:28 compute-0 NetworkManager[48960]: <info>  [1768917028.5680] manager[0x563c3eb1a000]: monitoring kernel firmware directory '/lib/firmware'.
Jan 20 13:50:28 compute-0 systemd[1]: Starting Hostname Service...
Jan 20 13:50:28 compute-0 systemd[1]: Started Hostname Service.
Jan 20 13:50:28 compute-0 NetworkManager[48960]: <info>  [1768917028.6535] hostname: hostname: using hostnamed
Jan 20 13:50:28 compute-0 NetworkManager[48960]: <info>  [1768917028.6537] hostname: static hostname changed from (none) to "compute-0"
Jan 20 13:50:28 compute-0 NetworkManager[48960]: <info>  [1768917028.6547] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Jan 20 13:50:28 compute-0 NetworkManager[48960]: <info>  [1768917028.6555] manager[0x563c3eb1a000]: rfkill: Wi-Fi hardware radio set enabled
Jan 20 13:50:28 compute-0 NetworkManager[48960]: <info>  [1768917028.6555] manager[0x563c3eb1a000]: rfkill: WWAN hardware radio set enabled
Jan 20 13:50:28 compute-0 NetworkManager[48960]: <info>  [1768917028.6596] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-ovs.so)
Jan 20 13:50:28 compute-0 NetworkManager[48960]: <info>  [1768917028.6612] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Jan 20 13:50:28 compute-0 NetworkManager[48960]: <info>  [1768917028.6613] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Jan 20 13:50:28 compute-0 NetworkManager[48960]: <info>  [1768917028.6614] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Jan 20 13:50:28 compute-0 NetworkManager[48960]: <info>  [1768917028.6615] manager: Networking is enabled by state file
Jan 20 13:50:28 compute-0 NetworkManager[48960]: <info>  [1768917028.6620] settings: Loaded settings plugin: keyfile (internal)
Jan 20 13:50:28 compute-0 NetworkManager[48960]: <info>  [1768917028.6626] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Jan 20 13:50:28 compute-0 NetworkManager[48960]: <info>  [1768917028.6668] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Jan 20 13:50:28 compute-0 NetworkManager[48960]: <info>  [1768917028.6687] dhcp: init: Using DHCP client 'internal'
Jan 20 13:50:28 compute-0 NetworkManager[48960]: <info>  [1768917028.6691] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Jan 20 13:50:28 compute-0 NetworkManager[48960]: <info>  [1768917028.6702] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 13:50:28 compute-0 NetworkManager[48960]: <info>  [1768917028.6711] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Jan 20 13:50:28 compute-0 NetworkManager[48960]: <info>  [1768917028.6725] device (lo): Activation: starting connection 'lo' (852309f6-3ce4-4bbb-99b9-75bbfcd15836)
Jan 20 13:50:28 compute-0 NetworkManager[48960]: <info>  [1768917028.6736] device (eth0): carrier: link connected
Jan 20 13:50:28 compute-0 NetworkManager[48960]: <info>  [1768917028.6743] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Jan 20 13:50:28 compute-0 NetworkManager[48960]: <info>  [1768917028.6755] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Jan 20 13:50:28 compute-0 NetworkManager[48960]: <info>  [1768917028.6756] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 20 13:50:28 compute-0 NetworkManager[48960]: <info>  [1768917028.6766] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 20 13:50:28 compute-0 NetworkManager[48960]: <info>  [1768917028.6776] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 20 13:50:28 compute-0 NetworkManager[48960]: <info>  [1768917028.6787] device (eth1): carrier: link connected
Jan 20 13:50:28 compute-0 NetworkManager[48960]: <info>  [1768917028.6793] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Jan 20 13:50:28 compute-0 NetworkManager[48960]: <info>  [1768917028.6801] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (3c78a448-20f9-55c6-afc8-1adecda4fa01) (indicated)
Jan 20 13:50:28 compute-0 NetworkManager[48960]: <info>  [1768917028.6802] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 20 13:50:28 compute-0 NetworkManager[48960]: <info>  [1768917028.6810] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 20 13:50:28 compute-0 NetworkManager[48960]: <info>  [1768917028.6821] device (eth1): Activation: starting connection 'ci-private-network' (3c78a448-20f9-55c6-afc8-1adecda4fa01)
Jan 20 13:50:28 compute-0 systemd[1]: Started Network Manager.
Jan 20 13:50:28 compute-0 NetworkManager[48960]: <info>  [1768917028.6831] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Jan 20 13:50:28 compute-0 NetworkManager[48960]: <info>  [1768917028.6848] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Jan 20 13:50:28 compute-0 NetworkManager[48960]: <info>  [1768917028.6853] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Jan 20 13:50:28 compute-0 NetworkManager[48960]: <info>  [1768917028.6858] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Jan 20 13:50:28 compute-0 NetworkManager[48960]: <info>  [1768917028.6862] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 20 13:50:28 compute-0 NetworkManager[48960]: <info>  [1768917028.6870] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 20 13:50:28 compute-0 NetworkManager[48960]: <info>  [1768917028.6874] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 20 13:50:28 compute-0 NetworkManager[48960]: <info>  [1768917028.6878] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 20 13:50:28 compute-0 NetworkManager[48960]: <info>  [1768917028.6882] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Jan 20 13:50:28 compute-0 NetworkManager[48960]: <info>  [1768917028.6895] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 20 13:50:28 compute-0 NetworkManager[48960]: <info>  [1768917028.6900] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 20 13:50:28 compute-0 NetworkManager[48960]: <info>  [1768917028.6920] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 20 13:50:28 compute-0 NetworkManager[48960]: <info>  [1768917028.6938] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 20 13:50:28 compute-0 NetworkManager[48960]: <info>  [1768917028.6947] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Jan 20 13:50:28 compute-0 NetworkManager[48960]: <info>  [1768917028.6949] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Jan 20 13:50:28 compute-0 NetworkManager[48960]: <info>  [1768917028.6955] device (lo): Activation: successful, device activated.
Jan 20 13:50:28 compute-0 NetworkManager[48960]: <info>  [1768917028.6962] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 20 13:50:28 compute-0 NetworkManager[48960]: <info>  [1768917028.6963] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 20 13:50:28 compute-0 NetworkManager[48960]: <info>  [1768917028.6966] manager: NetworkManager state is now CONNECTED_LOCAL
Jan 20 13:50:28 compute-0 NetworkManager[48960]: <info>  [1768917028.6969] device (eth1): Activation: successful, device activated.
Jan 20 13:50:28 compute-0 NetworkManager[48960]: <info>  [1768917028.7311] dhcp4 (eth0): state changed new lease, address=38.102.83.148
Jan 20 13:50:28 compute-0 NetworkManager[48960]: <info>  [1768917028.7318] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Jan 20 13:50:28 compute-0 systemd[1]: Starting Network Manager Wait Online...
Jan 20 13:50:28 compute-0 NetworkManager[48960]: <info>  [1768917028.7387] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 20 13:50:28 compute-0 NetworkManager[48960]: <info>  [1768917028.7408] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 20 13:50:28 compute-0 NetworkManager[48960]: <info>  [1768917028.7411] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 20 13:50:28 compute-0 NetworkManager[48960]: <info>  [1768917028.7415] manager: NetworkManager state is now CONNECTED_SITE
Jan 20 13:50:28 compute-0 NetworkManager[48960]: <info>  [1768917028.7419] device (eth0): Activation: successful, device activated.
Jan 20 13:50:28 compute-0 NetworkManager[48960]: <info>  [1768917028.7424] manager: NetworkManager state is now CONNECTED_GLOBAL
Jan 20 13:50:28 compute-0 NetworkManager[48960]: <info>  [1768917028.7429] manager: startup complete
Jan 20 13:50:28 compute-0 sudo[48949]: pam_unix(sudo:session): session closed for user root
Jan 20 13:50:28 compute-0 systemd[1]: Finished Network Manager Wait Online.
Jan 20 13:50:29 compute-0 sudo[49177]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tfrbkkzesjyrxwijdppgdvsljsdfeelw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917029.1235797-464-103171962436555/AnsiballZ_dnf.py'
Jan 20 13:50:29 compute-0 sudo[49177]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:50:29 compute-0 python3.9[49179]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 20 13:50:34 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 20 13:50:34 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 20 13:50:34 compute-0 systemd[1]: Reloading.
Jan 20 13:50:34 compute-0 systemd-rc-local-generator[49229]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 13:50:34 compute-0 systemd-sysv-generator[49234]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 13:50:34 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 20 13:50:35 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 20 13:50:35 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 20 13:50:35 compute-0 systemd[1]: run-r6b45ce7930e242c3aa625177348804c4.service: Deactivated successfully.
Jan 20 13:50:35 compute-0 sudo[49177]: pam_unix(sudo:session): session closed for user root
Jan 20 13:50:36 compute-0 sudo[49638]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gghvlsuqhlntoikvahtkjcijaaydddkd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917036.3497448-500-195105122347381/AnsiballZ_stat.py'
Jan 20 13:50:36 compute-0 sudo[49638]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:50:36 compute-0 python3.9[49640]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 13:50:36 compute-0 sudo[49638]: pam_unix(sudo:session): session closed for user root
Jan 20 13:50:37 compute-0 sudo[49790]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ympzgwyxxiwmfyonnriwedhpzsvxogbd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917037.2662702-527-42827704206884/AnsiballZ_ini_file.py'
Jan 20 13:50:37 compute-0 sudo[49790]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:50:37 compute-0 python3.9[49792]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:50:37 compute-0 sudo[49790]: pam_unix(sudo:session): session closed for user root
Jan 20 13:50:38 compute-0 sudo[49944]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yhsboffjmyxkavrhbriihmtlmamjgngm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917038.449696-557-74785269035330/AnsiballZ_ini_file.py'
Jan 20 13:50:38 compute-0 sudo[49944]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:50:38 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 20 13:50:38 compute-0 python3.9[49946]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:50:39 compute-0 sudo[49944]: pam_unix(sudo:session): session closed for user root
Jan 20 13:50:39 compute-0 sudo[50096]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cxykwnchhrnbqeahczhfscptusptjtnv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917039.1857188-557-64036725570052/AnsiballZ_ini_file.py'
Jan 20 13:50:39 compute-0 sudo[50096]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:50:39 compute-0 python3.9[50098]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:50:39 compute-0 sudo[50096]: pam_unix(sudo:session): session closed for user root
Jan 20 13:50:40 compute-0 sudo[50248]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yakivsvpamfvbnlgslkbhvijzbtbdrtl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917040.0663896-602-99026191038539/AnsiballZ_ini_file.py'
Jan 20 13:50:40 compute-0 sudo[50248]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:50:40 compute-0 python3.9[50250]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:50:40 compute-0 sudo[50248]: pam_unix(sudo:session): session closed for user root
Jan 20 13:50:41 compute-0 sudo[50402]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lalfqezbsljnkaljflgqbbkxytmaiaga ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917040.736712-602-249830384910155/AnsiballZ_ini_file.py'
Jan 20 13:50:41 compute-0 sudo[50402]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:50:41 compute-0 python3.9[50404]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:50:41 compute-0 sudo[50402]: pam_unix(sudo:session): session closed for user root
Jan 20 13:50:41 compute-0 sshd-session[50362]: Connection closed by authenticating user root 157.245.78.139 port 60250 [preauth]
Jan 20 13:50:41 compute-0 sudo[50554]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sjeniuybbnpjtyfdyfikfiikzayuxese ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917041.6297877-647-135896928655171/AnsiballZ_stat.py'
Jan 20 13:50:41 compute-0 sudo[50554]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:50:42 compute-0 python3.9[50556]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 13:50:42 compute-0 sudo[50554]: pam_unix(sudo:session): session closed for user root
Jan 20 13:50:42 compute-0 sudo[50677]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yqgzdncuynctczckrqczvntmbjfhuqpx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917041.6297877-647-135896928655171/AnsiballZ_copy.py'
Jan 20 13:50:42 compute-0 sudo[50677]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:50:42 compute-0 python3.9[50679]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1768917041.6297877-647-135896928655171/.source _original_basename=.psl4p87t follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:50:42 compute-0 sudo[50677]: pam_unix(sudo:session): session closed for user root
Jan 20 13:50:43 compute-0 sudo[50829]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ekyxtdvqugthymmexghyzqfftrlcurzk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917043.1285143-692-23701942909769/AnsiballZ_file.py'
Jan 20 13:50:43 compute-0 sudo[50829]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:50:43 compute-0 python3.9[50831]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:50:43 compute-0 sudo[50829]: pam_unix(sudo:session): session closed for user root
Jan 20 13:50:44 compute-0 sudo[50981]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yeweurxwrptqhvakuvunxguyckhmvbuj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917043.9558632-716-33373501909325/AnsiballZ_edpm_os_net_config_mappings.py'
Jan 20 13:50:44 compute-0 sudo[50981]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:50:44 compute-0 python3.9[50983]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Jan 20 13:50:44 compute-0 sudo[50981]: pam_unix(sudo:session): session closed for user root
Jan 20 13:50:45 compute-0 sudo[51133]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bzpromyxuomnakhjzckdzicxvkpirmse ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917044.8350265-743-55143404462650/AnsiballZ_file.py'
Jan 20 13:50:45 compute-0 sudo[51133]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:50:45 compute-0 python3.9[51135]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:50:45 compute-0 sudo[51133]: pam_unix(sudo:session): session closed for user root
Jan 20 13:50:46 compute-0 sudo[51285]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gzbjqtewvojtgfucgjfqgtubrhgemsic ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917045.761001-773-93137089562201/AnsiballZ_stat.py'
Jan 20 13:50:46 compute-0 sudo[51285]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:50:46 compute-0 sudo[51285]: pam_unix(sudo:session): session closed for user root
Jan 20 13:50:46 compute-0 sudo[51408]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yzpgekuwtbavnshzpclwclsznlzdhgyg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917045.761001-773-93137089562201/AnsiballZ_copy.py'
Jan 20 13:50:46 compute-0 sudo[51408]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:50:46 compute-0 sudo[51408]: pam_unix(sudo:session): session closed for user root
Jan 20 13:50:47 compute-0 sudo[51560]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-httiazsdgsmdmnsxkpjicalzwwnugkzz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917047.2226965-818-13863075104257/AnsiballZ_slurp.py'
Jan 20 13:50:47 compute-0 sudo[51560]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:50:47 compute-0 python3.9[51562]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Jan 20 13:50:47 compute-0 sudo[51560]: pam_unix(sudo:session): session closed for user root
Jan 20 13:50:48 compute-0 sudo[51735]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jvktffqaufjcaownsynrclvhjbixjxhq ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917048.195816-845-205789419550554/async_wrapper.py j241077263097 300 /home/zuul/.ansible/tmp/ansible-tmp-1768917048.195816-845-205789419550554/AnsiballZ_edpm_os_net_config.py _'
Jan 20 13:50:48 compute-0 sudo[51735]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:50:49 compute-0 ansible-async_wrapper.py[51737]: Invoked with j241077263097 300 /home/zuul/.ansible/tmp/ansible-tmp-1768917048.195816-845-205789419550554/AnsiballZ_edpm_os_net_config.py _
Jan 20 13:50:49 compute-0 ansible-async_wrapper.py[51740]: Starting module and watcher
Jan 20 13:50:49 compute-0 ansible-async_wrapper.py[51740]: Start watching 51741 (300)
Jan 20 13:50:49 compute-0 ansible-async_wrapper.py[51741]: Start module (51741)
Jan 20 13:50:49 compute-0 ansible-async_wrapper.py[51737]: Return async_wrapper task started.
Jan 20 13:50:49 compute-0 sudo[51735]: pam_unix(sudo:session): session closed for user root
Jan 20 13:50:49 compute-0 python3.9[51742]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
Jan 20 13:50:49 compute-0 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Jan 20 13:50:49 compute-0 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Jan 20 13:50:49 compute-0 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Jan 20 13:50:49 compute-0 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Jan 20 13:50:49 compute-0 kernel: cfg80211: failed to load regulatory.db
Jan 20 13:50:50 compute-0 NetworkManager[48960]: <info>  [1768917050.9077] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51743 uid=0 result="success"
Jan 20 13:50:50 compute-0 NetworkManager[48960]: <info>  [1768917050.9099] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51743 uid=0 result="success"
Jan 20 13:50:50 compute-0 NetworkManager[48960]: <info>  [1768917050.9531] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Jan 20 13:50:50 compute-0 NetworkManager[48960]: <info>  [1768917050.9533] audit: op="connection-add" uuid="84251e78-6763-4677-898b-2f2679837cba" name="br-ex-br" pid=51743 uid=0 result="success"
Jan 20 13:50:50 compute-0 NetworkManager[48960]: <info>  [1768917050.9547] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Jan 20 13:50:50 compute-0 NetworkManager[48960]: <info>  [1768917050.9548] audit: op="connection-add" uuid="ec513230-482d-4f88-9c79-1c5194d7bfea" name="br-ex-port" pid=51743 uid=0 result="success"
Jan 20 13:50:50 compute-0 NetworkManager[48960]: <info>  [1768917050.9559] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Jan 20 13:50:50 compute-0 NetworkManager[48960]: <info>  [1768917050.9561] audit: op="connection-add" uuid="c56de875-81fe-4d00-8e78-b724368a55bb" name="eth1-port" pid=51743 uid=0 result="success"
Jan 20 13:50:50 compute-0 NetworkManager[48960]: <info>  [1768917050.9572] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Jan 20 13:50:50 compute-0 NetworkManager[48960]: <info>  [1768917050.9573] audit: op="connection-add" uuid="2a009dec-c9ed-4db9-bd45-987c8ae2045e" name="vlan20-port" pid=51743 uid=0 result="success"
Jan 20 13:50:50 compute-0 NetworkManager[48960]: <info>  [1768917050.9584] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Jan 20 13:50:50 compute-0 NetworkManager[48960]: <info>  [1768917050.9585] audit: op="connection-add" uuid="2b3c9696-3180-4527-8e78-3bad5179c1dd" name="vlan21-port" pid=51743 uid=0 result="success"
Jan 20 13:50:50 compute-0 NetworkManager[48960]: <info>  [1768917050.9594] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Jan 20 13:50:50 compute-0 NetworkManager[48960]: <info>  [1768917050.9596] audit: op="connection-add" uuid="adbc3875-bcf0-4b1a-9488-9e2e888b8535" name="vlan22-port" pid=51743 uid=0 result="success"
Jan 20 13:50:50 compute-0 NetworkManager[48960]: <info>  [1768917050.9606] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Jan 20 13:50:50 compute-0 NetworkManager[48960]: <info>  [1768917050.9607] audit: op="connection-add" uuid="2775df49-5291-4c42-9afc-ca87091c7ceb" name="vlan23-port" pid=51743 uid=0 result="success"
Jan 20 13:50:50 compute-0 NetworkManager[48960]: <info>  [1768917050.9628] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="connection.autoconnect-priority,connection.timestamp,ipv6.addr-gen-mode,ipv6.dhcp-timeout,ipv6.method,802-3-ethernet.mtu,ipv4.dhcp-client-id,ipv4.dhcp-timeout" pid=51743 uid=0 result="success"
Jan 20 13:50:50 compute-0 NetworkManager[48960]: <info>  [1768917050.9643] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Jan 20 13:50:50 compute-0 NetworkManager[48960]: <info>  [1768917050.9645] audit: op="connection-add" uuid="20f00fd3-c884-4709-b401-38ecf4b2bb21" name="br-ex-if" pid=51743 uid=0 result="success"
Jan 20 13:50:50 compute-0 NetworkManager[48960]: <info>  [1768917050.9700] audit: op="connection-update" uuid="3c78a448-20f9-55c6-afc8-1adecda4fa01" name="ci-private-network" args="connection.controller,connection.master,connection.timestamp,connection.port-type,connection.slave-type,ovs-external-ids.data,ipv6.routes,ipv6.addr-gen-mode,ipv6.addresses,ipv6.dns,ipv6.method,ipv6.routing-rules,ovs-interface.type,ipv4.routes,ipv4.addresses,ipv4.dns,ipv4.never-default,ipv4.method,ipv4.routing-rules" pid=51743 uid=0 result="success"
Jan 20 13:50:50 compute-0 NetworkManager[48960]: <info>  [1768917050.9718] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Jan 20 13:50:50 compute-0 NetworkManager[48960]: <info>  [1768917050.9719] audit: op="connection-add" uuid="4c1a9b43-8322-47e6-b08e-385943b307c3" name="vlan20-if" pid=51743 uid=0 result="success"
Jan 20 13:50:50 compute-0 NetworkManager[48960]: <info>  [1768917050.9734] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Jan 20 13:50:50 compute-0 NetworkManager[48960]: <info>  [1768917050.9735] audit: op="connection-add" uuid="b98a6e0f-01b5-4fd7-8fa2-14e2bacacc56" name="vlan21-if" pid=51743 uid=0 result="success"
Jan 20 13:50:50 compute-0 NetworkManager[48960]: <info>  [1768917050.9748] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Jan 20 13:50:50 compute-0 NetworkManager[48960]: <info>  [1768917050.9749] audit: op="connection-add" uuid="49ec721f-5532-4d7b-a416-3efa7443c8a3" name="vlan22-if" pid=51743 uid=0 result="success"
Jan 20 13:50:50 compute-0 NetworkManager[48960]: <info>  [1768917050.9762] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/15)
Jan 20 13:50:50 compute-0 NetworkManager[48960]: <info>  [1768917050.9764] audit: op="connection-add" uuid="16f221ad-2207-4488-b001-45e807b40716" name="vlan23-if" pid=51743 uid=0 result="success"
Jan 20 13:50:50 compute-0 NetworkManager[48960]: <info>  [1768917050.9776] audit: op="connection-delete" uuid="f7cdda31-68ef-3d97-9512-d5118da8907c" name="Wired connection 1" pid=51743 uid=0 result="success"
Jan 20 13:50:50 compute-0 NetworkManager[48960]: <info>  [1768917050.9787] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 20 13:50:50 compute-0 NetworkManager[48960]: <warn>  [1768917050.9789] device (br-ex)[Open vSwitch Bridge]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 20 13:50:50 compute-0 NetworkManager[48960]: <info>  [1768917050.9794] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 20 13:50:50 compute-0 NetworkManager[48960]: <info>  [1768917050.9797] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (84251e78-6763-4677-898b-2f2679837cba)
Jan 20 13:50:50 compute-0 NetworkManager[48960]: <info>  [1768917050.9797] audit: op="connection-activate" uuid="84251e78-6763-4677-898b-2f2679837cba" name="br-ex-br" pid=51743 uid=0 result="success"
Jan 20 13:50:50 compute-0 NetworkManager[48960]: <info>  [1768917050.9798] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 20 13:50:50 compute-0 NetworkManager[48960]: <warn>  [1768917050.9799] device (br-ex)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 20 13:50:50 compute-0 NetworkManager[48960]: <info>  [1768917050.9802] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 20 13:50:50 compute-0 NetworkManager[48960]: <info>  [1768917050.9805] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (ec513230-482d-4f88-9c79-1c5194d7bfea)
Jan 20 13:50:50 compute-0 NetworkManager[48960]: <info>  [1768917050.9807] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 20 13:50:50 compute-0 NetworkManager[48960]: <warn>  [1768917050.9807] device (eth1)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 20 13:50:50 compute-0 NetworkManager[48960]: <info>  [1768917050.9810] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 20 13:50:50 compute-0 NetworkManager[48960]: <info>  [1768917050.9813] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (c56de875-81fe-4d00-8e78-b724368a55bb)
Jan 20 13:50:50 compute-0 NetworkManager[48960]: <info>  [1768917050.9814] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 20 13:50:50 compute-0 NetworkManager[48960]: <warn>  [1768917050.9815] device (vlan20)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 20 13:50:50 compute-0 NetworkManager[48960]: <info>  [1768917050.9819] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 20 13:50:50 compute-0 NetworkManager[48960]: <info>  [1768917050.9821] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (2a009dec-c9ed-4db9-bd45-987c8ae2045e)
Jan 20 13:50:50 compute-0 NetworkManager[48960]: <info>  [1768917050.9822] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 20 13:50:50 compute-0 NetworkManager[48960]: <warn>  [1768917050.9823] device (vlan21)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 20 13:50:50 compute-0 NetworkManager[48960]: <info>  [1768917050.9827] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 20 13:50:50 compute-0 NetworkManager[48960]: <info>  [1768917050.9829] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (2b3c9696-3180-4527-8e78-3bad5179c1dd)
Jan 20 13:50:50 compute-0 NetworkManager[48960]: <info>  [1768917050.9831] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 20 13:50:50 compute-0 NetworkManager[48960]: <warn>  [1768917050.9831] device (vlan22)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 20 13:50:50 compute-0 NetworkManager[48960]: <info>  [1768917050.9834] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 20 13:50:50 compute-0 NetworkManager[48960]: <info>  [1768917050.9837] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (adbc3875-bcf0-4b1a-9488-9e2e888b8535)
Jan 20 13:50:50 compute-0 NetworkManager[48960]: <info>  [1768917050.9838] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 20 13:50:50 compute-0 NetworkManager[48960]: <warn>  [1768917050.9839] device (vlan23)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 20 13:50:50 compute-0 NetworkManager[48960]: <info>  [1768917050.9842] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 20 13:50:50 compute-0 NetworkManager[48960]: <info>  [1768917050.9845] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (2775df49-5291-4c42-9afc-ca87091c7ceb)
Jan 20 13:50:50 compute-0 NetworkManager[48960]: <info>  [1768917050.9846] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 20 13:50:50 compute-0 NetworkManager[48960]: <info>  [1768917050.9847] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 20 13:50:50 compute-0 NetworkManager[48960]: <info>  [1768917050.9849] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 20 13:50:50 compute-0 NetworkManager[48960]: <info>  [1768917050.9853] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 20 13:50:50 compute-0 NetworkManager[48960]: <warn>  [1768917050.9854] device (br-ex)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 20 13:50:50 compute-0 NetworkManager[48960]: <info>  [1768917050.9856] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 20 13:50:50 compute-0 NetworkManager[48960]: <info>  [1768917050.9858] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (20f00fd3-c884-4709-b401-38ecf4b2bb21)
Jan 20 13:50:50 compute-0 NetworkManager[48960]: <info>  [1768917050.9859] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 20 13:50:50 compute-0 NetworkManager[48960]: <info>  [1768917050.9861] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 20 13:50:50 compute-0 NetworkManager[48960]: <info>  [1768917050.9862] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 20 13:50:50 compute-0 NetworkManager[48960]: <info>  [1768917050.9863] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Jan 20 13:50:50 compute-0 NetworkManager[48960]: <info>  [1768917050.9863] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Jan 20 13:50:50 compute-0 NetworkManager[48960]: <info>  [1768917050.9871] device (eth1): disconnecting for new activation request.
Jan 20 13:50:50 compute-0 NetworkManager[48960]: <info>  [1768917050.9871] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 20 13:50:50 compute-0 NetworkManager[48960]: <info>  [1768917050.9873] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 20 13:50:50 compute-0 NetworkManager[48960]: <info>  [1768917050.9875] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 20 13:50:50 compute-0 NetworkManager[48960]: <info>  [1768917050.9876] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Jan 20 13:50:50 compute-0 NetworkManager[48960]: <info>  [1768917050.9878] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 20 13:50:50 compute-0 NetworkManager[48960]: <warn>  [1768917050.9880] device (vlan20)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 20 13:50:50 compute-0 NetworkManager[48960]: <info>  [1768917050.9882] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 20 13:50:50 compute-0 NetworkManager[48960]: <info>  [1768917050.9885] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (4c1a9b43-8322-47e6-b08e-385943b307c3)
Jan 20 13:50:50 compute-0 NetworkManager[48960]: <info>  [1768917050.9885] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 20 13:50:50 compute-0 NetworkManager[48960]: <info>  [1768917050.9887] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 20 13:50:50 compute-0 NetworkManager[48960]: <info>  [1768917050.9889] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 20 13:50:50 compute-0 NetworkManager[48960]: <info>  [1768917050.9890] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Jan 20 13:50:50 compute-0 NetworkManager[48960]: <info>  [1768917050.9892] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 20 13:50:50 compute-0 NetworkManager[48960]: <warn>  [1768917050.9893] device (vlan21)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 20 13:50:50 compute-0 NetworkManager[48960]: <info>  [1768917050.9895] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 20 13:50:50 compute-0 NetworkManager[48960]: <info>  [1768917050.9898] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (b98a6e0f-01b5-4fd7-8fa2-14e2bacacc56)
Jan 20 13:50:50 compute-0 NetworkManager[48960]: <info>  [1768917050.9898] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 20 13:50:50 compute-0 NetworkManager[48960]: <info>  [1768917050.9901] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 20 13:50:50 compute-0 NetworkManager[48960]: <info>  [1768917050.9902] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 20 13:50:50 compute-0 NetworkManager[48960]: <info>  [1768917050.9903] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Jan 20 13:50:50 compute-0 NetworkManager[48960]: <info>  [1768917050.9905] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 20 13:50:50 compute-0 NetworkManager[48960]: <warn>  [1768917050.9906] device (vlan22)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 20 13:50:50 compute-0 NetworkManager[48960]: <info>  [1768917050.9908] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 20 13:50:50 compute-0 NetworkManager[48960]: <info>  [1768917050.9910] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (49ec721f-5532-4d7b-a416-3efa7443c8a3)
Jan 20 13:50:50 compute-0 NetworkManager[48960]: <info>  [1768917050.9911] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 20 13:50:50 compute-0 NetworkManager[48960]: <info>  [1768917050.9913] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 20 13:50:50 compute-0 NetworkManager[48960]: <info>  [1768917050.9914] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 20 13:50:50 compute-0 NetworkManager[48960]: <info>  [1768917050.9915] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Jan 20 13:50:50 compute-0 NetworkManager[48960]: <info>  [1768917050.9917] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 20 13:50:50 compute-0 NetworkManager[48960]: <warn>  [1768917050.9918] device (vlan23)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 20 13:50:50 compute-0 NetworkManager[48960]: <info>  [1768917050.9920] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 20 13:50:50 compute-0 NetworkManager[48960]: <info>  [1768917050.9923] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (16f221ad-2207-4488-b001-45e807b40716)
Jan 20 13:50:50 compute-0 NetworkManager[48960]: <info>  [1768917050.9923] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 20 13:50:50 compute-0 NetworkManager[48960]: <info>  [1768917050.9926] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 20 13:50:50 compute-0 NetworkManager[48960]: <info>  [1768917050.9927] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 20 13:50:50 compute-0 NetworkManager[48960]: <info>  [1768917050.9929] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Jan 20 13:50:50 compute-0 NetworkManager[48960]: <info>  [1768917050.9931] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 20 13:50:50 compute-0 NetworkManager[48960]: <info>  [1768917050.9942] audit: op="device-reapply" interface="eth0" ifindex=2 args="connection.autoconnect-priority,ipv6.addr-gen-mode,ipv6.method,802-3-ethernet.mtu,ipv4.dhcp-client-id,ipv4.dhcp-timeout" pid=51743 uid=0 result="success"
Jan 20 13:50:50 compute-0 NetworkManager[48960]: <info>  [1768917050.9944] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 20 13:50:50 compute-0 NetworkManager[48960]: <info>  [1768917050.9948] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 20 13:50:50 compute-0 NetworkManager[48960]: <info>  [1768917050.9950] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 20 13:50:50 compute-0 NetworkManager[48960]: <info>  [1768917050.9956] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 20 13:50:50 compute-0 NetworkManager[48960]: <info>  [1768917050.9958] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 20 13:50:50 compute-0 NetworkManager[48960]: <info>  [1768917050.9961] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 20 13:50:50 compute-0 NetworkManager[48960]: <info>  [1768917050.9963] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 20 13:50:50 compute-0 NetworkManager[48960]: <info>  [1768917050.9965] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 20 13:50:50 compute-0 NetworkManager[48960]: <info>  [1768917050.9968] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 20 13:50:50 compute-0 NetworkManager[48960]: <info>  [1768917050.9971] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 20 13:50:50 compute-0 NetworkManager[48960]: <info>  [1768917050.9973] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 20 13:50:50 compute-0 NetworkManager[48960]: <info>  [1768917050.9974] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 20 13:50:50 compute-0 NetworkManager[48960]: <info>  [1768917050.9978] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 20 13:50:50 compute-0 NetworkManager[48960]: <info>  [1768917050.9981] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 20 13:50:50 compute-0 NetworkManager[48960]: <info>  [1768917050.9983] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 20 13:50:50 compute-0 NetworkManager[48960]: <info>  [1768917050.9984] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 20 13:50:50 compute-0 NetworkManager[48960]: <info>  [1768917050.9988] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 20 13:50:50 compute-0 NetworkManager[48960]: <info>  [1768917050.9990] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 20 13:50:50 compute-0 NetworkManager[48960]: <info>  [1768917050.9993] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 20 13:50:50 compute-0 NetworkManager[48960]: <info>  [1768917050.9994] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 20 13:50:50 compute-0 NetworkManager[48960]: <info>  [1768917050.9998] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 20 13:50:51 compute-0 NetworkManager[48960]: <info>  [1768917051.0001] dhcp4 (eth0): canceled DHCP transaction
Jan 20 13:50:51 compute-0 NetworkManager[48960]: <info>  [1768917051.0001] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 20 13:50:51 compute-0 NetworkManager[48960]: <info>  [1768917051.0001] dhcp4 (eth0): state changed no lease
Jan 20 13:50:51 compute-0 NetworkManager[48960]: <info>  [1768917051.0002] dhcp4 (eth0): activation: beginning transaction (no timeout)
Jan 20 13:50:51 compute-0 NetworkManager[48960]: <info>  [1768917051.0012] audit: op="device-reapply" interface="eth1" ifindex=3 pid=51743 uid=0 result="fail" reason="Device is not activated"
Jan 20 13:50:51 compute-0 NetworkManager[48960]: <info>  [1768917051.0400] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Jan 20 13:50:51 compute-0 NetworkManager[48960]: <info>  [1768917051.0411] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Jan 20 13:50:51 compute-0 kernel: ovs-system: entered promiscuous mode
Jan 20 13:50:51 compute-0 NetworkManager[48960]: <info>  [1768917051.0424] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Jan 20 13:50:51 compute-0 NetworkManager[48960]: <info>  [1768917051.0431] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Jan 20 13:50:51 compute-0 systemd-udevd[51749]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 13:50:51 compute-0 kernel: Timeout policy base is empty
Jan 20 13:50:51 compute-0 NetworkManager[48960]: <info>  [1768917051.0438] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Jan 20 13:50:51 compute-0 NetworkManager[48960]: <info>  [1768917051.0452] device (eth1): disconnecting for new activation request.
Jan 20 13:50:51 compute-0 NetworkManager[48960]: <info>  [1768917051.0453] audit: op="connection-activate" uuid="3c78a448-20f9-55c6-afc8-1adecda4fa01" name="ci-private-network" pid=51743 uid=0 result="success"
Jan 20 13:50:51 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 20 13:50:51 compute-0 NetworkManager[48960]: <info>  [1768917051.0491] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51743 uid=0 result="success"
Jan 20 13:50:51 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 20 13:50:51 compute-0 NetworkManager[48960]: <info>  [1768917051.0567] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Jan 20 13:50:51 compute-0 NetworkManager[48960]: <info>  [1768917051.0661] device (eth1): Activation: starting connection 'ci-private-network' (3c78a448-20f9-55c6-afc8-1adecda4fa01)
Jan 20 13:50:51 compute-0 NetworkManager[48960]: <info>  [1768917051.0665] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 20 13:50:51 compute-0 NetworkManager[48960]: <info>  [1768917051.0673] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 20 13:50:51 compute-0 NetworkManager[48960]: <info>  [1768917051.0677] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 20 13:50:51 compute-0 NetworkManager[48960]: <info>  [1768917051.0682] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 20 13:50:51 compute-0 NetworkManager[48960]: <info>  [1768917051.0685] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Jan 20 13:50:51 compute-0 NetworkManager[48960]: <info>  [1768917051.0689] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 20 13:50:51 compute-0 NetworkManager[48960]: <info>  [1768917051.0693] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 20 13:50:51 compute-0 NetworkManager[48960]: <info>  [1768917051.0694] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 20 13:50:51 compute-0 NetworkManager[48960]: <info>  [1768917051.0696] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 20 13:50:51 compute-0 NetworkManager[48960]: <info>  [1768917051.0697] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 20 13:50:51 compute-0 NetworkManager[48960]: <info>  [1768917051.0698] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 20 13:50:51 compute-0 NetworkManager[48960]: <info>  [1768917051.0707] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 20 13:50:51 compute-0 NetworkManager[48960]: <info>  [1768917051.0713] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 20 13:50:51 compute-0 NetworkManager[48960]: <info>  [1768917051.0716] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Jan 20 13:50:51 compute-0 NetworkManager[48960]: <info>  [1768917051.0719] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 20 13:50:51 compute-0 NetworkManager[48960]: <info>  [1768917051.0721] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Jan 20 13:50:51 compute-0 NetworkManager[48960]: <info>  [1768917051.0724] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 20 13:50:51 compute-0 NetworkManager[48960]: <info>  [1768917051.0727] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Jan 20 13:50:51 compute-0 NetworkManager[48960]: <info>  [1768917051.0730] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 20 13:50:51 compute-0 NetworkManager[48960]: <info>  [1768917051.0733] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Jan 20 13:50:51 compute-0 NetworkManager[48960]: <info>  [1768917051.0737] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 20 13:50:51 compute-0 NetworkManager[48960]: <info>  [1768917051.0740] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Jan 20 13:50:51 compute-0 NetworkManager[48960]: <info>  [1768917051.0743] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 20 13:50:51 compute-0 NetworkManager[48960]: <info>  [1768917051.0746] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Jan 20 13:50:51 compute-0 NetworkManager[48960]: <info>  [1768917051.0750] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Jan 20 13:50:51 compute-0 NetworkManager[48960]: <info>  [1768917051.0756] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 20 13:50:51 compute-0 kernel: br-ex: entered promiscuous mode
Jan 20 13:50:51 compute-0 NetworkManager[48960]: <info>  [1768917051.0769] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 20 13:50:51 compute-0 NetworkManager[48960]: <info>  [1768917051.0770] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 20 13:50:51 compute-0 NetworkManager[48960]: <info>  [1768917051.0775] device (eth1): Activation: successful, device activated.
Jan 20 13:50:51 compute-0 kernel: vlan22: entered promiscuous mode
Jan 20 13:50:51 compute-0 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Jan 20 13:50:51 compute-0 systemd-udevd[51748]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 13:50:51 compute-0 NetworkManager[48960]: <info>  [1768917051.0884] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Jan 20 13:50:51 compute-0 NetworkManager[48960]: <info>  [1768917051.0897] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 20 13:50:51 compute-0 NetworkManager[48960]: <info>  [1768917051.0915] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 20 13:50:51 compute-0 NetworkManager[48960]: <info>  [1768917051.0918] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 20 13:50:51 compute-0 NetworkManager[48960]: <info>  [1768917051.0924] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 20 13:50:51 compute-0 kernel: vlan20: entered promiscuous mode
Jan 20 13:50:51 compute-0 NetworkManager[48960]: <info>  [1768917051.0955] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Jan 20 13:50:51 compute-0 NetworkManager[48960]: <info>  [1768917051.0970] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 20 13:50:51 compute-0 NetworkManager[48960]: <info>  [1768917051.0989] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 20 13:50:51 compute-0 NetworkManager[48960]: <info>  [1768917051.0990] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 20 13:50:51 compute-0 kernel: vlan23: entered promiscuous mode
Jan 20 13:50:51 compute-0 NetworkManager[48960]: <info>  [1768917051.0995] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 20 13:50:51 compute-0 kernel: vlan21: entered promiscuous mode
Jan 20 13:50:51 compute-0 systemd-udevd[51747]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 13:50:51 compute-0 NetworkManager[48960]: <info>  [1768917051.1097] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Jan 20 13:50:51 compute-0 NetworkManager[48960]: <info>  [1768917051.1109] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 20 13:50:51 compute-0 NetworkManager[48960]: <info>  [1768917051.1128] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 20 13:50:51 compute-0 NetworkManager[48960]: <info>  [1768917051.1131] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 20 13:50:51 compute-0 NetworkManager[48960]: <info>  [1768917051.1135] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 20 13:50:51 compute-0 NetworkManager[48960]: <info>  [1768917051.1152] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Jan 20 13:50:51 compute-0 NetworkManager[48960]: <info>  [1768917051.1155] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Jan 20 13:50:51 compute-0 NetworkManager[48960]: <info>  [1768917051.1179] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 20 13:50:51 compute-0 NetworkManager[48960]: <info>  [1768917051.1187] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 20 13:50:51 compute-0 NetworkManager[48960]: <info>  [1768917051.1224] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 20 13:50:51 compute-0 NetworkManager[48960]: <info>  [1768917051.1229] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 20 13:50:51 compute-0 NetworkManager[48960]: <info>  [1768917051.1230] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 20 13:50:51 compute-0 NetworkManager[48960]: <info>  [1768917051.1236] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 20 13:50:51 compute-0 NetworkManager[48960]: <info>  [1768917051.1241] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 20 13:50:51 compute-0 NetworkManager[48960]: <info>  [1768917051.1245] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 20 13:50:52 compute-0 NetworkManager[48960]: <info>  [1768917052.2329] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51743 uid=0 result="success"
Jan 20 13:50:52 compute-0 NetworkManager[48960]: <info>  [1768917052.3396] dhcp4 (eth0): state changed new lease, address=38.102.83.148
Jan 20 13:50:52 compute-0 NetworkManager[48960]: <info>  [1768917052.4708] checkpoint[0x563c3eaf0950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Jan 20 13:50:52 compute-0 NetworkManager[48960]: <info>  [1768917052.4711] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51743 uid=0 result="success"
Jan 20 13:50:52 compute-0 sudo[52101]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mxcrlqsupzfmnsylsqpkubkfvqjuuxsg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917052.2058594-845-227411349400969/AnsiballZ_async_status.py'
Jan 20 13:50:52 compute-0 sudo[52101]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:50:52 compute-0 NetworkManager[48960]: <info>  [1768917052.7448] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51743 uid=0 result="success"
Jan 20 13:50:52 compute-0 NetworkManager[48960]: <info>  [1768917052.7460] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51743 uid=0 result="success"
Jan 20 13:50:52 compute-0 python3.9[52103]: ansible-ansible.legacy.async_status Invoked with jid=j241077263097.51737 mode=status _async_dir=/root/.ansible_async
Jan 20 13:50:52 compute-0 sudo[52101]: pam_unix(sudo:session): session closed for user root
Jan 20 13:50:52 compute-0 NetworkManager[48960]: <info>  [1768917052.9290] audit: op="networking-control" arg="global-dns-configuration" pid=51743 uid=0 result="success"
Jan 20 13:50:52 compute-0 NetworkManager[48960]: <info>  [1768917052.9314] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Jan 20 13:50:52 compute-0 NetworkManager[48960]: <info>  [1768917052.9345] audit: op="networking-control" arg="global-dns-configuration" pid=51743 uid=0 result="success"
Jan 20 13:50:52 compute-0 NetworkManager[48960]: <info>  [1768917052.9364] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51743 uid=0 result="success"
Jan 20 13:50:53 compute-0 NetworkManager[48960]: <info>  [1768917053.0751] checkpoint[0x563c3eaf0a20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Jan 20 13:50:53 compute-0 NetworkManager[48960]: <info>  [1768917053.0755] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51743 uid=0 result="success"
Jan 20 13:50:53 compute-0 ansible-async_wrapper.py[51741]: Module complete (51741)
Jan 20 13:50:54 compute-0 ansible-async_wrapper.py[51740]: Done in kid B.
Jan 20 13:50:56 compute-0 sudo[52205]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqqxorxhqqxsxpgcctqoszjiutcazgiz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917052.2058594-845-227411349400969/AnsiballZ_async_status.py'
Jan 20 13:50:56 compute-0 sudo[52205]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:50:56 compute-0 python3.9[52207]: ansible-ansible.legacy.async_status Invoked with jid=j241077263097.51737 mode=status _async_dir=/root/.ansible_async
Jan 20 13:50:56 compute-0 sudo[52205]: pam_unix(sudo:session): session closed for user root
Jan 20 13:50:57 compute-0 sudo[52305]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nhwssdtadnykljkcmbaueliqsjuhschu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917052.2058594-845-227411349400969/AnsiballZ_async_status.py'
Jan 20 13:50:57 compute-0 sudo[52305]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:50:57 compute-0 python3.9[52307]: ansible-ansible.legacy.async_status Invoked with jid=j241077263097.51737 mode=cleanup _async_dir=/root/.ansible_async
Jan 20 13:50:57 compute-0 sudo[52305]: pam_unix(sudo:session): session closed for user root
Jan 20 13:50:58 compute-0 sudo[52457]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wrszvuhnitfmnmelbjlxzjocaoeljoqe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917058.3310459-926-120576220524812/AnsiballZ_stat.py'
Jan 20 13:50:58 compute-0 sudo[52457]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:50:58 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 20 13:50:58 compute-0 python3.9[52459]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 13:50:58 compute-0 sudo[52457]: pam_unix(sudo:session): session closed for user root
Jan 20 13:50:59 compute-0 sudo[52583]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hardcxqzxgpswpmzvpognyjpbeiuhxur ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917058.3310459-926-120576220524812/AnsiballZ_copy.py'
Jan 20 13:50:59 compute-0 sudo[52583]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:50:59 compute-0 python3.9[52585]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1768917058.3310459-926-120576220524812/.source.returncode _original_basename=.y36wtc7m follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:50:59 compute-0 sudo[52583]: pam_unix(sudo:session): session closed for user root
Jan 20 13:51:00 compute-0 sudo[52735]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wxethuafjkzpkscejfuqjpgfysazaqdi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917060.1762698-974-162396653171602/AnsiballZ_stat.py'
Jan 20 13:51:00 compute-0 sudo[52735]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:51:00 compute-0 python3.9[52737]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 13:51:00 compute-0 sudo[52735]: pam_unix(sudo:session): session closed for user root
Jan 20 13:51:01 compute-0 sudo[52858]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ugnaxznzliytzwrccladwgnzcnzcedwd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917060.1762698-974-162396653171602/AnsiballZ_copy.py'
Jan 20 13:51:01 compute-0 sudo[52858]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:51:01 compute-0 python3.9[52860]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1768917060.1762698-974-162396653171602/.source.cfg _original_basename=.3rbabsan follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:51:01 compute-0 sudo[52858]: pam_unix(sudo:session): session closed for user root
Jan 20 13:51:01 compute-0 sudo[53010]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ocmplagexydpcrbaphujnqqunjwhkxdw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917061.649992-1019-107570845744561/AnsiballZ_systemd.py'
Jan 20 13:51:01 compute-0 sudo[53010]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:51:02 compute-0 python3.9[53012]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 20 13:51:02 compute-0 systemd[1]: Reloading Network Manager...
Jan 20 13:51:02 compute-0 NetworkManager[48960]: <info>  [1768917062.3935] audit: op="reload" arg="0" pid=53016 uid=0 result="success"
Jan 20 13:51:02 compute-0 NetworkManager[48960]: <info>  [1768917062.3949] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Jan 20 13:51:02 compute-0 systemd[1]: Reloaded Network Manager.
Jan 20 13:51:02 compute-0 sudo[53010]: pam_unix(sudo:session): session closed for user root
Jan 20 13:51:02 compute-0 sshd-session[44962]: Connection closed by 192.168.122.30 port 48896
Jan 20 13:51:02 compute-0 sshd-session[44959]: pam_unix(sshd:session): session closed for user zuul
Jan 20 13:51:02 compute-0 systemd[1]: session-10.scope: Deactivated successfully.
Jan 20 13:51:02 compute-0 systemd[1]: session-10.scope: Consumed 51.704s CPU time.
Jan 20 13:51:02 compute-0 systemd-logind[796]: Session 10 logged out. Waiting for processes to exit.
Jan 20 13:51:02 compute-0 systemd-logind[796]: Removed session 10.
Jan 20 13:51:08 compute-0 sshd-session[53047]: Accepted publickey for zuul from 192.168.122.30 port 37698 ssh2: ECDSA SHA256:Yw0kyD5N4lqNgr1J3b5cYIIxKFrTRY8zW6kk+n6imz4
Jan 20 13:51:08 compute-0 systemd-logind[796]: New session 11 of user zuul.
Jan 20 13:51:08 compute-0 systemd[1]: Started Session 11 of User zuul.
Jan 20 13:51:08 compute-0 sshd-session[53047]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 20 13:51:09 compute-0 python3.9[53201]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 13:51:10 compute-0 python3.9[53355]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 20 13:51:12 compute-0 python3.9[53548]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 13:51:12 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 20 13:51:12 compute-0 sshd-session[53050]: Connection closed by 192.168.122.30 port 37698
Jan 20 13:51:12 compute-0 sshd-session[53047]: pam_unix(sshd:session): session closed for user zuul
Jan 20 13:51:12 compute-0 systemd[1]: session-11.scope: Deactivated successfully.
Jan 20 13:51:12 compute-0 systemd[1]: session-11.scope: Consumed 2.292s CPU time.
Jan 20 13:51:12 compute-0 systemd-logind[796]: Session 11 logged out. Waiting for processes to exit.
Jan 20 13:51:12 compute-0 systemd-logind[796]: Removed session 11.
Jan 20 13:51:18 compute-0 sshd-session[53577]: Accepted publickey for zuul from 192.168.122.30 port 60766 ssh2: ECDSA SHA256:Yw0kyD5N4lqNgr1J3b5cYIIxKFrTRY8zW6kk+n6imz4
Jan 20 13:51:18 compute-0 systemd-logind[796]: New session 12 of user zuul.
Jan 20 13:51:18 compute-0 systemd[1]: Started Session 12 of User zuul.
Jan 20 13:51:18 compute-0 sshd-session[53577]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 20 13:51:19 compute-0 python3.9[53731]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 13:51:20 compute-0 python3.9[53885]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 13:51:21 compute-0 sudo[54039]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yujuklmovrknpbxwcwxlavzekzdabrnj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917081.3749838-80-58756031910466/AnsiballZ_setup.py'
Jan 20 13:51:21 compute-0 sudo[54039]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:51:21 compute-0 python3.9[54041]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 20 13:51:22 compute-0 sudo[54039]: pam_unix(sudo:session): session closed for user root
Jan 20 13:51:22 compute-0 sudo[54124]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hutlkzdgccikfnwftqzqaiqmojlqxqad ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917081.3749838-80-58756031910466/AnsiballZ_dnf.py'
Jan 20 13:51:22 compute-0 sudo[54124]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:51:22 compute-0 python3.9[54126]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 20 13:51:23 compute-0 sudo[54124]: pam_unix(sudo:session): session closed for user root
Jan 20 13:51:24 compute-0 sudo[54277]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qjqplggwxddkgolxsdslqukcydchbqfd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917084.20658-116-100402752945826/AnsiballZ_setup.py'
Jan 20 13:51:24 compute-0 sudo[54277]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:51:24 compute-0 python3.9[54279]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 20 13:51:24 compute-0 sudo[54277]: pam_unix(sudo:session): session closed for user root
Jan 20 13:51:26 compute-0 sudo[54472]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-srembsjbnotmlnpalocwyfxdpakphqop ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917085.528316-149-92519991796061/AnsiballZ_file.py'
Jan 20 13:51:26 compute-0 sudo[54472]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:51:26 compute-0 python3.9[54474]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:51:26 compute-0 sudo[54472]: pam_unix(sudo:session): session closed for user root
Jan 20 13:51:26 compute-0 sudo[54624]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tanxhbhvdblklxxxrcinbafwzcamkgij ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917086.5064614-173-33047986800010/AnsiballZ_command.py'
Jan 20 13:51:26 compute-0 sudo[54624]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:51:27 compute-0 python3.9[54626]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 13:51:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-compat2604367748-merged.mount: Deactivated successfully.
Jan 20 13:51:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-metacopy\x2dcheck2245946283-merged.mount: Deactivated successfully.
Jan 20 13:51:27 compute-0 podman[54627]: 2026-01-20 13:51:27.19812306 +0000 UTC m=+0.051591426 system refresh
Jan 20 13:51:27 compute-0 sudo[54624]: pam_unix(sudo:session): session closed for user root
Jan 20 13:51:28 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 20 13:51:28 compute-0 sshd-session[54662]: Connection closed by authenticating user root 157.245.78.139 port 33176 [preauth]
Jan 20 13:51:28 compute-0 sudo[54789]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ujfxdwyyqvbmlujelmpivuxitafrnmtr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917088.4055955-197-79165080303026/AnsiballZ_stat.py'
Jan 20 13:51:28 compute-0 sudo[54789]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:51:29 compute-0 python3.9[54791]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 13:51:29 compute-0 sudo[54789]: pam_unix(sudo:session): session closed for user root
Jan 20 13:51:29 compute-0 sudo[54912]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-duiswcrhjpbosfvkexbvymjarvxooikn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917088.4055955-197-79165080303026/AnsiballZ_copy.py'
Jan 20 13:51:29 compute-0 sudo[54912]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:51:29 compute-0 python3.9[54914]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768917088.4055955-197-79165080303026/.source.json follow=False _original_basename=podman_network_config.j2 checksum=b25d86a9024c7cd9481634607454b3452d89facd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:51:29 compute-0 sudo[54912]: pam_unix(sudo:session): session closed for user root
Jan 20 13:51:30 compute-0 sudo[55064]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xiogdombezxaoifrobtgketrcsvplugc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917090.1408567-242-245473412842306/AnsiballZ_stat.py'
Jan 20 13:51:30 compute-0 sudo[55064]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:51:30 compute-0 python3.9[55066]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 13:51:30 compute-0 sudo[55064]: pam_unix(sudo:session): session closed for user root
Jan 20 13:51:30 compute-0 sudo[55187]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-robxncgimienmwvlztacnfwyqmhbiryu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917090.1408567-242-245473412842306/AnsiballZ_copy.py'
Jan 20 13:51:30 compute-0 sudo[55187]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:51:31 compute-0 python3.9[55189]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1768917090.1408567-242-245473412842306/.source.conf follow=False _original_basename=registries.conf.j2 checksum=88781afee5b5da15b4e5a77559a69fa53d49a457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 20 13:51:31 compute-0 sudo[55187]: pam_unix(sudo:session): session closed for user root
Jan 20 13:51:32 compute-0 sudo[55339]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dfsvsgcmhgyaggffztioerdqiyipwfsy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917091.7011056-290-55019540746193/AnsiballZ_ini_file.py'
Jan 20 13:51:32 compute-0 sudo[55339]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:51:32 compute-0 python3.9[55341]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 20 13:51:32 compute-0 sudo[55339]: pam_unix(sudo:session): session closed for user root
Jan 20 13:51:32 compute-0 sudo[55492]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-knkxmtunilzpwdhdcchbbdydxexoqads ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917092.4649098-290-76772695199834/AnsiballZ_ini_file.py'
Jan 20 13:51:32 compute-0 sudo[55492]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:51:32 compute-0 python3.9[55494]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 20 13:51:32 compute-0 sudo[55492]: pam_unix(sudo:session): session closed for user root
Jan 20 13:51:33 compute-0 sudo[55645]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-asshbxjboayiuqgynwvgidzzdajosrsf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917093.1367912-290-34595812320488/AnsiballZ_ini_file.py'
Jan 20 13:51:33 compute-0 sudo[55645]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:51:33 compute-0 python3.9[55647]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 20 13:51:33 compute-0 sudo[55645]: pam_unix(sudo:session): session closed for user root
Jan 20 13:51:33 compute-0 sudo[55797]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-euntjdmbjyxamollwrqemcqjfoxjycme ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917093.7102349-290-220112972976776/AnsiballZ_ini_file.py'
Jan 20 13:51:33 compute-0 sudo[55797]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:51:34 compute-0 python3.9[55799]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 20 13:51:34 compute-0 sudo[55797]: pam_unix(sudo:session): session closed for user root
Jan 20 13:51:35 compute-0 sudo[55949]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rqimnkcjeoqvshpnefobnxpnruczqidt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917095.0505307-383-49094437968565/AnsiballZ_dnf.py'
Jan 20 13:51:35 compute-0 sudo[55949]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:51:35 compute-0 python3.9[55951]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 20 13:51:36 compute-0 sudo[55949]: pam_unix(sudo:session): session closed for user root
Jan 20 13:51:37 compute-0 sudo[56102]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cihblyhoqmmnepucwdiddouxwbiepexf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917097.3845818-416-26305545276990/AnsiballZ_setup.py'
Jan 20 13:51:37 compute-0 sudo[56102]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:51:37 compute-0 python3.9[56104]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 13:51:37 compute-0 sudo[56102]: pam_unix(sudo:session): session closed for user root
Jan 20 13:51:38 compute-0 sudo[56256]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yafpjfkvsxadrdizptrtsqrnfbzkahjs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917098.293548-440-163156362301341/AnsiballZ_stat.py'
Jan 20 13:51:38 compute-0 sudo[56256]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:51:38 compute-0 python3.9[56258]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 13:51:38 compute-0 sudo[56256]: pam_unix(sudo:session): session closed for user root
Jan 20 13:51:39 compute-0 sudo[56408]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wylxddldqgnswwgpnelagjvkkfwgjiqn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917099.156911-467-102423921121404/AnsiballZ_stat.py'
Jan 20 13:51:39 compute-0 sudo[56408]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:51:39 compute-0 sshd-session[55446]: Connection closed by authenticating user root 159.223.5.14 port 59482 [preauth]
Jan 20 13:51:39 compute-0 python3.9[56410]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 13:51:39 compute-0 sudo[56408]: pam_unix(sudo:session): session closed for user root
Jan 20 13:51:40 compute-0 sudo[56560]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xvqyddwsrrjzxavyforhozofrjpfevqd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917100.2179377-497-244836313428035/AnsiballZ_command.py'
Jan 20 13:51:40 compute-0 sudo[56560]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:51:40 compute-0 python3.9[56562]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 13:51:40 compute-0 sudo[56560]: pam_unix(sudo:session): session closed for user root
Jan 20 13:51:41 compute-0 sudo[56713]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-djdmfsuujpurydrfnohhudbahvwbgcow ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917101.2263036-527-51266426938276/AnsiballZ_service_facts.py'
Jan 20 13:51:41 compute-0 sudo[56713]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:51:41 compute-0 python3.9[56715]: ansible-service_facts Invoked
Jan 20 13:51:41 compute-0 network[56732]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 20 13:51:41 compute-0 network[56733]: 'network-scripts' will be removed from distribution in near future.
Jan 20 13:51:41 compute-0 network[56734]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 20 13:51:44 compute-0 sudo[56713]: pam_unix(sudo:session): session closed for user root
Jan 20 13:51:46 compute-0 sudo[57017]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mekjuavrcsrusetvsfhrobaekgcctfap ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1768917106.636825-572-216626116771680/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1768917106.636825-572-216626116771680/args'
Jan 20 13:51:46 compute-0 sudo[57017]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:51:47 compute-0 sudo[57017]: pam_unix(sudo:session): session closed for user root
Jan 20 13:51:47 compute-0 sudo[57184]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rjwxuvjgavknqvknqjqqcmfisijotzep ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917107.4172401-605-232338339383972/AnsiballZ_dnf.py'
Jan 20 13:51:47 compute-0 sudo[57184]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:51:47 compute-0 python3.9[57186]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 20 13:51:49 compute-0 sudo[57184]: pam_unix(sudo:session): session closed for user root
Jan 20 13:51:50 compute-0 sudo[57337]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vpsnycvpchngfodkkmgweefjxjkmdzpb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917109.7981753-644-155664708528059/AnsiballZ_package_facts.py'
Jan 20 13:51:50 compute-0 sudo[57337]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:51:50 compute-0 python3.9[57339]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Jan 20 13:51:50 compute-0 sudo[57337]: pam_unix(sudo:session): session closed for user root
Jan 20 13:51:52 compute-0 sudo[57489]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-setaxpyiofduaxgbuojffxcksrndwodv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917111.7548072-674-233769110175193/AnsiballZ_stat.py'
Jan 20 13:51:52 compute-0 sudo[57489]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:51:52 compute-0 python3.9[57491]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 13:51:52 compute-0 sudo[57489]: pam_unix(sudo:session): session closed for user root
Jan 20 13:51:52 compute-0 sudo[57614]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-apdcqtjyzuxshddorxhgeluttgvpxbls ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917111.7548072-674-233769110175193/AnsiballZ_copy.py'
Jan 20 13:51:52 compute-0 sudo[57614]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:51:52 compute-0 python3.9[57616]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1768917111.7548072-674-233769110175193/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:51:52 compute-0 sudo[57614]: pam_unix(sudo:session): session closed for user root
Jan 20 13:51:53 compute-0 sudo[57768]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xlzlbcdcoatbqkcruerjnfebgopbfgsv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917113.220256-719-65898808418223/AnsiballZ_stat.py'
Jan 20 13:51:53 compute-0 sudo[57768]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:51:53 compute-0 python3.9[57770]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 13:51:53 compute-0 sudo[57768]: pam_unix(sudo:session): session closed for user root
Jan 20 13:51:54 compute-0 sudo[57893]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-luszrsaanmpkxetreeznpcyoizspckzf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917113.220256-719-65898808418223/AnsiballZ_copy.py'
Jan 20 13:51:54 compute-0 sudo[57893]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:51:54 compute-0 python3.9[57895]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1768917113.220256-719-65898808418223/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:51:54 compute-0 sudo[57893]: pam_unix(sudo:session): session closed for user root
Jan 20 13:51:56 compute-0 sudo[58047]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hompzxdrshlkbxdlzeumfmkavfiftghr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917115.5773954-782-190813745949253/AnsiballZ_lineinfile.py'
Jan 20 13:51:56 compute-0 sudo[58047]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:51:56 compute-0 python3.9[58049]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:51:56 compute-0 sudo[58047]: pam_unix(sudo:session): session closed for user root
Jan 20 13:51:57 compute-0 sudo[58201]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nlmvlszxjoydnuqppvmgvlcklnybluhg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917117.5159936-827-66828636972069/AnsiballZ_setup.py'
Jan 20 13:51:57 compute-0 sudo[58201]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:51:58 compute-0 python3.9[58203]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 20 13:51:58 compute-0 sudo[58201]: pam_unix(sudo:session): session closed for user root
Jan 20 13:51:59 compute-0 sudo[58285]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yjjprcnlwnrnysqtsiuuhewwsgteuemy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917117.5159936-827-66828636972069/AnsiballZ_systemd.py'
Jan 20 13:51:59 compute-0 sudo[58285]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:51:59 compute-0 python3.9[58287]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 13:51:59 compute-0 sudo[58285]: pam_unix(sudo:session): session closed for user root
Jan 20 13:52:00 compute-0 sudo[58439]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-irweymgsxiuevkahrrnnyokmppcqunfd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917120.3111765-875-23930397184917/AnsiballZ_setup.py'
Jan 20 13:52:00 compute-0 sudo[58439]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:52:00 compute-0 python3.9[58441]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 20 13:52:01 compute-0 sudo[58439]: pam_unix(sudo:session): session closed for user root
Jan 20 13:52:01 compute-0 sudo[58523]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mdwawzfqjnqpsicwyebxbgtvmbwgpimj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917120.3111765-875-23930397184917/AnsiballZ_systemd.py'
Jan 20 13:52:01 compute-0 sudo[58523]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:52:01 compute-0 python3.9[58525]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 20 13:52:01 compute-0 chronyd[791]: chronyd exiting
Jan 20 13:52:01 compute-0 systemd[1]: Stopping NTP client/server...
Jan 20 13:52:01 compute-0 systemd[1]: chronyd.service: Deactivated successfully.
Jan 20 13:52:01 compute-0 systemd[1]: Stopped NTP client/server.
Jan 20 13:52:01 compute-0 systemd[1]: Starting NTP client/server...
Jan 20 13:52:01 compute-0 chronyd[58534]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Jan 20 13:52:01 compute-0 chronyd[58534]: Frequency -26.376 +/- 0.190 ppm read from /var/lib/chrony/drift
Jan 20 13:52:01 compute-0 chronyd[58534]: Loaded seccomp filter (level 2)
Jan 20 13:52:01 compute-0 systemd[1]: Started NTP client/server.
Jan 20 13:52:01 compute-0 sudo[58523]: pam_unix(sudo:session): session closed for user root
Jan 20 13:52:02 compute-0 sshd-session[53581]: Connection closed by 192.168.122.30 port 60766
Jan 20 13:52:02 compute-0 sshd-session[53577]: pam_unix(sshd:session): session closed for user zuul
Jan 20 13:52:02 compute-0 systemd[1]: session-12.scope: Deactivated successfully.
Jan 20 13:52:02 compute-0 systemd[1]: session-12.scope: Consumed 23.678s CPU time.
Jan 20 13:52:02 compute-0 systemd-logind[796]: Session 12 logged out. Waiting for processes to exit.
Jan 20 13:52:02 compute-0 systemd-logind[796]: Removed session 12.
Jan 20 13:52:08 compute-0 sshd-session[58560]: Accepted publickey for zuul from 192.168.122.30 port 43036 ssh2: ECDSA SHA256:Yw0kyD5N4lqNgr1J3b5cYIIxKFrTRY8zW6kk+n6imz4
Jan 20 13:52:08 compute-0 systemd-logind[796]: New session 13 of user zuul.
Jan 20 13:52:08 compute-0 systemd[1]: Started Session 13 of User zuul.
Jan 20 13:52:08 compute-0 sshd-session[58560]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 20 13:52:08 compute-0 sudo[58713]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vnueznbsvyezqhrapxmxxsqniiasulou ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917128.4216967-26-28472229820089/AnsiballZ_file.py'
Jan 20 13:52:08 compute-0 sudo[58713]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:52:09 compute-0 python3.9[58715]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:52:09 compute-0 sudo[58713]: pam_unix(sudo:session): session closed for user root
Jan 20 13:52:10 compute-0 sudo[58865]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qrltoknfyurqopkzilmxzwdsibtatnbo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917129.512274-62-188329926809679/AnsiballZ_stat.py'
Jan 20 13:52:10 compute-0 sudo[58865]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:52:10 compute-0 python3.9[58867]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 13:52:10 compute-0 sudo[58865]: pam_unix(sudo:session): session closed for user root
Jan 20 13:52:10 compute-0 sudo[58988]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rwswbxndznhbslnpqrylwfzuhuizolya ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917129.512274-62-188329926809679/AnsiballZ_copy.py'
Jan 20 13:52:10 compute-0 sudo[58988]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:52:11 compute-0 python3.9[58990]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/ceph-networks.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1768917129.512274-62-188329926809679/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=729ea8396013e3343245d6e934e0dcef55029ad2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:52:11 compute-0 sudo[58988]: pam_unix(sudo:session): session closed for user root
Jan 20 13:52:11 compute-0 sshd-session[58563]: Connection closed by 192.168.122.30 port 43036
Jan 20 13:52:11 compute-0 sshd-session[58560]: pam_unix(sshd:session): session closed for user zuul
Jan 20 13:52:11 compute-0 systemd[1]: session-13.scope: Deactivated successfully.
Jan 20 13:52:11 compute-0 systemd[1]: session-13.scope: Consumed 1.808s CPU time.
Jan 20 13:52:11 compute-0 systemd-logind[796]: Session 13 logged out. Waiting for processes to exit.
Jan 20 13:52:11 compute-0 systemd-logind[796]: Removed session 13.
Jan 20 13:52:16 compute-0 sshd-session[59015]: Connection closed by authenticating user root 157.245.78.139 port 42966 [preauth]
Jan 20 13:52:17 compute-0 sshd-session[59017]: Accepted publickey for zuul from 192.168.122.30 port 40390 ssh2: ECDSA SHA256:Yw0kyD5N4lqNgr1J3b5cYIIxKFrTRY8zW6kk+n6imz4
Jan 20 13:52:17 compute-0 systemd-logind[796]: New session 14 of user zuul.
Jan 20 13:52:17 compute-0 systemd[1]: Started Session 14 of User zuul.
Jan 20 13:52:17 compute-0 sshd-session[59017]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 20 13:52:18 compute-0 python3.9[59170]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 13:52:20 compute-0 sudo[59324]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ktgrqqjtzbaouemclzcpthqkusnmxbgh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917139.6720638-59-162907176477906/AnsiballZ_file.py'
Jan 20 13:52:20 compute-0 sudo[59324]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:52:20 compute-0 python3.9[59326]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:52:20 compute-0 sudo[59324]: pam_unix(sudo:session): session closed for user root
Jan 20 13:52:21 compute-0 sudo[59499]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ltnnuumcfchaqkfiogfrkddjbauzrgwb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917141.1000361-83-245339438868333/AnsiballZ_stat.py'
Jan 20 13:52:21 compute-0 sudo[59499]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:52:21 compute-0 python3.9[59501]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 13:52:21 compute-0 sudo[59499]: pam_unix(sudo:session): session closed for user root
Jan 20 13:52:22 compute-0 sudo[59622]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zurlxixdbniesttpgcymqtmdhpzxmdtw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917141.1000361-83-245339438868333/AnsiballZ_copy.py'
Jan 20 13:52:22 compute-0 sudo[59622]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:52:22 compute-0 python3.9[59624]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1768917141.1000361-83-245339438868333/.source.json _original_basename=.4feafb0y follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:52:22 compute-0 sudo[59622]: pam_unix(sudo:session): session closed for user root
Jan 20 13:52:23 compute-0 sudo[59774]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lnmarhfbkiwejgrwuuynnntxemkucquc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917143.060092-152-148850595877291/AnsiballZ_stat.py'
Jan 20 13:52:23 compute-0 sudo[59774]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:52:23 compute-0 python3.9[59776]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 13:52:23 compute-0 sudo[59774]: pam_unix(sudo:session): session closed for user root
Jan 20 13:52:24 compute-0 sudo[59897]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tkpntjnvnfnaenkcviswnwzyisvzrrbv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917143.060092-152-148850595877291/AnsiballZ_copy.py'
Jan 20 13:52:24 compute-0 sudo[59897]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:52:24 compute-0 python3.9[59899]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1768917143.060092-152-148850595877291/.source _original_basename=.dwuphfgw follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:52:24 compute-0 sudo[59897]: pam_unix(sudo:session): session closed for user root
Jan 20 13:52:24 compute-0 sudo[60049]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ckjkfkudrpwduifvkhksgcvfbpgvcesq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917144.5758865-200-110553860918981/AnsiballZ_file.py'
Jan 20 13:52:24 compute-0 sudo[60049]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:52:24 compute-0 python3.9[60051]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 20 13:52:25 compute-0 sudo[60049]: pam_unix(sudo:session): session closed for user root
Jan 20 13:52:25 compute-0 sudo[60201]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ktoxbrwdacomrmwiwkbadyjzswlzdhwa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917145.4269645-224-219949213197847/AnsiballZ_stat.py'
Jan 20 13:52:25 compute-0 sudo[60201]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:52:25 compute-0 python3.9[60203]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 13:52:25 compute-0 sudo[60201]: pam_unix(sudo:session): session closed for user root
Jan 20 13:52:26 compute-0 sudo[60324]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-stjwyrhsjaketqrwcgjthvepxeknzghn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917145.4269645-224-219949213197847/AnsiballZ_copy.py'
Jan 20 13:52:26 compute-0 sudo[60324]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:52:26 compute-0 python3.9[60326]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1768917145.4269645-224-219949213197847/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 20 13:52:26 compute-0 sudo[60324]: pam_unix(sudo:session): session closed for user root
Jan 20 13:52:26 compute-0 sudo[60476]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ujtmjnravgffgtnnfpamvckkuyqpjoqi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917146.6765742-224-73573484624046/AnsiballZ_stat.py'
Jan 20 13:52:26 compute-0 sudo[60476]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:52:27 compute-0 python3.9[60478]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 13:52:27 compute-0 sudo[60476]: pam_unix(sudo:session): session closed for user root
Jan 20 13:52:27 compute-0 sudo[60599]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-drqinbhpmqjbgppmlqioonmqktstbzii ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917146.6765742-224-73573484624046/AnsiballZ_copy.py'
Jan 20 13:52:27 compute-0 sudo[60599]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:52:27 compute-0 python3.9[60601]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1768917146.6765742-224-73573484624046/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 20 13:52:27 compute-0 sudo[60599]: pam_unix(sudo:session): session closed for user root
Jan 20 13:52:28 compute-0 sudo[60751]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qcoyhqhjzakitbcgzhredwveyatnfkdh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917148.3452582-311-152776444216652/AnsiballZ_file.py'
Jan 20 13:52:28 compute-0 sudo[60751]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:52:28 compute-0 python3.9[60753]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:52:28 compute-0 sudo[60751]: pam_unix(sudo:session): session closed for user root
Jan 20 13:52:29 compute-0 sudo[60903]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mvtagkrhogcijiicovwurtjteawidcgk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917149.212506-335-164893014414053/AnsiballZ_stat.py'
Jan 20 13:52:29 compute-0 sudo[60903]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:52:29 compute-0 python3.9[60905]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 13:52:29 compute-0 sudo[60903]: pam_unix(sudo:session): session closed for user root
Jan 20 13:52:30 compute-0 sudo[61026]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-utedyeegqtlcihwweuffdwtvhrwulqfv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917149.212506-335-164893014414053/AnsiballZ_copy.py'
Jan 20 13:52:30 compute-0 sudo[61026]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:52:30 compute-0 python3.9[61028]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768917149.212506-335-164893014414053/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:52:30 compute-0 sudo[61026]: pam_unix(sudo:session): session closed for user root
Jan 20 13:52:30 compute-0 sudo[61178]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pvdqotsqoglmkxuevsekzwplrkrakprx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917150.6269135-380-11965199303016/AnsiballZ_stat.py'
Jan 20 13:52:30 compute-0 sudo[61178]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:52:31 compute-0 python3.9[61180]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 13:52:31 compute-0 sudo[61178]: pam_unix(sudo:session): session closed for user root
Jan 20 13:52:31 compute-0 sudo[61301]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uymckgdlrczghyjmndrafcbkzotnmezo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917150.6269135-380-11965199303016/AnsiballZ_copy.py'
Jan 20 13:52:31 compute-0 sudo[61301]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:52:31 compute-0 python3.9[61303]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768917150.6269135-380-11965199303016/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:52:31 compute-0 sudo[61301]: pam_unix(sudo:session): session closed for user root
Jan 20 13:52:32 compute-0 sudo[61453]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-asbgxajhxhcuuaggydbepbojgyfxycln ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917151.977163-425-125608130471071/AnsiballZ_systemd.py'
Jan 20 13:52:32 compute-0 sudo[61453]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:52:32 compute-0 python3.9[61455]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 13:52:32 compute-0 systemd[1]: Reloading.
Jan 20 13:52:32 compute-0 systemd-rc-local-generator[61481]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 13:52:32 compute-0 systemd-sysv-generator[61484]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 13:52:33 compute-0 systemd[1]: Reloading.
Jan 20 13:52:33 compute-0 systemd-rc-local-generator[61521]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 13:52:33 compute-0 systemd-sysv-generator[61525]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 13:52:33 compute-0 systemd[1]: Starting EDPM Container Shutdown...
Jan 20 13:52:33 compute-0 systemd[1]: Finished EDPM Container Shutdown.
Jan 20 13:52:33 compute-0 sudo[61453]: pam_unix(sudo:session): session closed for user root
Jan 20 13:52:34 compute-0 sudo[61680]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jfnbvdbshqbvbvkeosmkokrcapefsnhr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917153.7306366-449-274986787233881/AnsiballZ_stat.py'
Jan 20 13:52:34 compute-0 sudo[61680]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:52:34 compute-0 python3.9[61682]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 13:52:34 compute-0 sudo[61680]: pam_unix(sudo:session): session closed for user root
Jan 20 13:52:34 compute-0 sudo[61803]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yzdpqsuureatikjpwbcalpzgidwwjcja ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917153.7306366-449-274986787233881/AnsiballZ_copy.py'
Jan 20 13:52:34 compute-0 sudo[61803]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:52:34 compute-0 python3.9[61805]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768917153.7306366-449-274986787233881/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:52:34 compute-0 sudo[61803]: pam_unix(sudo:session): session closed for user root
Jan 20 13:52:35 compute-0 sudo[61955]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lhuumzclcuskmlvsdrrvvjkdxujxvojl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917155.2755036-494-228952948384200/AnsiballZ_stat.py'
Jan 20 13:52:35 compute-0 sudo[61955]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:52:35 compute-0 python3.9[61957]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 13:52:35 compute-0 sudo[61955]: pam_unix(sudo:session): session closed for user root
Jan 20 13:52:36 compute-0 sudo[62078]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qfnqcfalhrssdzrvvnzvmtnyopqiimfx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917155.2755036-494-228952948384200/AnsiballZ_copy.py'
Jan 20 13:52:36 compute-0 sudo[62078]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:52:36 compute-0 python3.9[62080]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768917155.2755036-494-228952948384200/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:52:36 compute-0 sudo[62078]: pam_unix(sudo:session): session closed for user root
Jan 20 13:52:37 compute-0 sudo[62230]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oqsldtrpznxzzgjzmnaqwchudrpkdzsh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917157.2160835-539-255869738022850/AnsiballZ_systemd.py'
Jan 20 13:52:37 compute-0 sudo[62230]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:52:37 compute-0 python3.9[62232]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 13:52:37 compute-0 systemd[1]: Reloading.
Jan 20 13:52:37 compute-0 systemd-rc-local-generator[62259]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 13:52:37 compute-0 systemd-sysv-generator[62262]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 13:52:38 compute-0 systemd[1]: Reloading.
Jan 20 13:52:38 compute-0 systemd-rc-local-generator[62299]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 13:52:38 compute-0 systemd-sysv-generator[62302]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 13:52:38 compute-0 systemd[1]: Starting Create netns directory...
Jan 20 13:52:38 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 20 13:52:38 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 20 13:52:38 compute-0 systemd[1]: Finished Create netns directory.
Jan 20 13:52:38 compute-0 sudo[62230]: pam_unix(sudo:session): session closed for user root
Jan 20 13:52:40 compute-0 python3.9[62457]: ansible-ansible.builtin.service_facts Invoked
Jan 20 13:52:40 compute-0 network[62474]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 20 13:52:40 compute-0 network[62475]: 'network-scripts' will be removed from distribution in near future.
Jan 20 13:52:40 compute-0 network[62476]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 20 13:52:45 compute-0 sudo[62738]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lgoztmnkqzsiazpwkswqrwjkbwvqsepb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917165.644958-587-232028370363310/AnsiballZ_systemd.py'
Jan 20 13:52:45 compute-0 sudo[62738]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:52:45 compute-0 sshd-session[62482]: Connection closed by authenticating user root 159.223.5.14 port 34502 [preauth]
Jan 20 13:52:46 compute-0 python3.9[62740]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 13:52:46 compute-0 systemd[1]: Reloading.
Jan 20 13:52:46 compute-0 systemd-rc-local-generator[62769]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 13:52:46 compute-0 systemd-sysv-generator[62772]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 13:52:46 compute-0 systemd[1]: Stopping IPv4 firewall with iptables...
Jan 20 13:52:46 compute-0 iptables.init[62781]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Jan 20 13:52:46 compute-0 iptables.init[62781]: iptables: Flushing firewall rules: [  OK  ]
Jan 20 13:52:46 compute-0 systemd[1]: iptables.service: Deactivated successfully.
Jan 20 13:52:46 compute-0 systemd[1]: Stopped IPv4 firewall with iptables.
Jan 20 13:52:46 compute-0 sudo[62738]: pam_unix(sudo:session): session closed for user root
Jan 20 13:52:47 compute-0 sudo[62975]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-apvyajvmpagrjsfuafkbopsddkssuexr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917167.0102315-587-112220812715658/AnsiballZ_systemd.py'
Jan 20 13:52:47 compute-0 sudo[62975]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:52:47 compute-0 python3.9[62977]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 13:52:47 compute-0 sudo[62975]: pam_unix(sudo:session): session closed for user root
Jan 20 13:52:48 compute-0 sudo[63129]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pvdzvqjsejflqsvjifixfygzjaiwmlcs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917168.3687255-635-23610949151469/AnsiballZ_systemd.py'
Jan 20 13:52:48 compute-0 sudo[63129]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:52:49 compute-0 python3.9[63131]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 13:52:50 compute-0 systemd[1]: Reloading.
Jan 20 13:52:50 compute-0 systemd-rc-local-generator[63161]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 13:52:50 compute-0 systemd-sysv-generator[63166]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 13:52:50 compute-0 systemd[1]: Starting Netfilter Tables...
Jan 20 13:52:50 compute-0 systemd[1]: Finished Netfilter Tables.
Jan 20 13:52:50 compute-0 sudo[63129]: pam_unix(sudo:session): session closed for user root
Jan 20 13:52:51 compute-0 sudo[63322]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fnoagakcnovqwnpgzeylhsbnkeablqwl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917170.564169-659-156423155617578/AnsiballZ_command.py'
Jan 20 13:52:51 compute-0 sudo[63322]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:52:51 compute-0 python3.9[63324]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 13:52:51 compute-0 sudo[63322]: pam_unix(sudo:session): session closed for user root
Jan 20 13:52:52 compute-0 sudo[63475]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hdevtkfzymoeetbrrwiqvebwupkqqvfc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917172.319037-701-134374277929210/AnsiballZ_stat.py'
Jan 20 13:52:52 compute-0 sudo[63475]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:52:52 compute-0 python3.9[63477]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 13:52:52 compute-0 sudo[63475]: pam_unix(sudo:session): session closed for user root
Jan 20 13:52:53 compute-0 sudo[63600]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wudsxjhqmrhzfygnmtoysxwmzbvgotwi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917172.319037-701-134374277929210/AnsiballZ_copy.py'
Jan 20 13:52:53 compute-0 sudo[63600]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:52:53 compute-0 python3.9[63602]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1768917172.319037-701-134374277929210/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:52:53 compute-0 sudo[63600]: pam_unix(sudo:session): session closed for user root
Jan 20 13:52:54 compute-0 sudo[63753]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oovuqaeedayyrfqogwunfmriuhmhkfxj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917173.7856076-746-203898275121193/AnsiballZ_systemd.py'
Jan 20 13:52:54 compute-0 sudo[63753]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:52:54 compute-0 python3.9[63755]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 20 13:52:54 compute-0 systemd[1]: Reloading OpenSSH server daemon...
Jan 20 13:52:54 compute-0 sshd[1004]: Received SIGHUP; restarting.
Jan 20 13:52:54 compute-0 sshd[1004]: Server listening on 0.0.0.0 port 22.
Jan 20 13:52:54 compute-0 sshd[1004]: Server listening on :: port 22.
Jan 20 13:52:54 compute-0 systemd[1]: Reloaded OpenSSH server daemon.
Jan 20 13:52:54 compute-0 sudo[63753]: pam_unix(sudo:session): session closed for user root
Jan 20 13:52:55 compute-0 sudo[63909]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lhndgkbyeirdmezziwbnanuvvfgmmsle ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917174.8451235-770-270895118674656/AnsiballZ_file.py'
Jan 20 13:52:55 compute-0 sudo[63909]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:52:55 compute-0 python3.9[63911]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:52:55 compute-0 sudo[63909]: pam_unix(sudo:session): session closed for user root
Jan 20 13:52:56 compute-0 sudo[64061]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yhojwbebnibkhimjumtserxstregbect ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917175.7128313-794-279952979929786/AnsiballZ_stat.py'
Jan 20 13:52:56 compute-0 sudo[64061]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:52:56 compute-0 python3.9[64063]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 13:52:56 compute-0 sudo[64061]: pam_unix(sudo:session): session closed for user root
Jan 20 13:52:56 compute-0 sudo[64184]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xgdeqlaxrbjubewzdqjcajyearplmbob ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917175.7128313-794-279952979929786/AnsiballZ_copy.py'
Jan 20 13:52:56 compute-0 sudo[64184]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:52:56 compute-0 python3.9[64186]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768917175.7128313-794-279952979929786/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:52:56 compute-0 sudo[64184]: pam_unix(sudo:session): session closed for user root
Jan 20 13:52:57 compute-0 sudo[64336]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fogjlpfbjhjksjdohpcevpnmrapnncjd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917177.4501781-848-39616279116847/AnsiballZ_timezone.py'
Jan 20 13:52:57 compute-0 sudo[64336]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:52:58 compute-0 python3.9[64338]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Jan 20 13:52:58 compute-0 systemd[1]: Starting Time & Date Service...
Jan 20 13:52:58 compute-0 systemd[1]: Started Time & Date Service.
Jan 20 13:52:58 compute-0 sudo[64336]: pam_unix(sudo:session): session closed for user root
Jan 20 13:52:58 compute-0 sudo[64492]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ghqccnwclpkxkdxhifcuvsznncefiabm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917178.5853155-875-269630236191878/AnsiballZ_file.py'
Jan 20 13:52:58 compute-0 sudo[64492]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:52:59 compute-0 python3.9[64494]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:52:59 compute-0 sudo[64492]: pam_unix(sudo:session): session closed for user root
Jan 20 13:52:59 compute-0 sudo[64644]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mlitavshsarmuzyfqcqbiwdkxyrojrox ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917179.3883295-899-69663577429106/AnsiballZ_stat.py'
Jan 20 13:52:59 compute-0 sudo[64644]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:52:59 compute-0 python3.9[64646]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 13:52:59 compute-0 sudo[64644]: pam_unix(sudo:session): session closed for user root
Jan 20 13:53:00 compute-0 sudo[64767]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mpbvgoijodslfuqeomjnpjpwlnlyunjt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917179.3883295-899-69663577429106/AnsiballZ_copy.py'
Jan 20 13:53:00 compute-0 sudo[64767]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:53:00 compute-0 python3.9[64769]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1768917179.3883295-899-69663577429106/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:53:00 compute-0 sudo[64767]: pam_unix(sudo:session): session closed for user root
Jan 20 13:53:01 compute-0 sudo[64919]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vchhjqastnqauxlsbsxuvrifyhrxfhlo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917180.8623564-944-25463735209171/AnsiballZ_stat.py'
Jan 20 13:53:01 compute-0 sudo[64919]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:53:01 compute-0 python3.9[64921]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 13:53:01 compute-0 sudo[64919]: pam_unix(sudo:session): session closed for user root
Jan 20 13:53:01 compute-0 sudo[65042]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ddzunskcjfdmothepeiejesqqrjipqjy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917180.8623564-944-25463735209171/AnsiballZ_copy.py'
Jan 20 13:53:01 compute-0 sudo[65042]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:53:01 compute-0 python3.9[65044]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1768917180.8623564-944-25463735209171/.source.yaml _original_basename=.v6513gx3 follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:53:01 compute-0 sudo[65042]: pam_unix(sudo:session): session closed for user root
Jan 20 13:53:02 compute-0 sshd-session[65069]: Connection closed by authenticating user root 157.245.78.139 port 52060 [preauth]
Jan 20 13:53:02 compute-0 sudo[65196]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uvquskxdisjmzoqmheejpssrjgshwqvj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917182.323603-989-42785387027749/AnsiballZ_stat.py'
Jan 20 13:53:02 compute-0 sudo[65196]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:53:02 compute-0 python3.9[65198]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 13:53:02 compute-0 sudo[65196]: pam_unix(sudo:session): session closed for user root
Jan 20 13:53:03 compute-0 sudo[65319]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jlwwkeqzxwzsdlhrieozepuoqevvbccx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917182.323603-989-42785387027749/AnsiballZ_copy.py'
Jan 20 13:53:03 compute-0 sudo[65319]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:53:03 compute-0 python3.9[65321]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768917182.323603-989-42785387027749/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:53:03 compute-0 sudo[65319]: pam_unix(sudo:session): session closed for user root
Jan 20 13:53:04 compute-0 sudo[65471]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qnkrgfnxpcfqsdhtinhgjfrxuazvcgmd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917183.7565904-1034-70662843840765/AnsiballZ_command.py'
Jan 20 13:53:04 compute-0 sudo[65471]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:53:04 compute-0 python3.9[65473]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 13:53:04 compute-0 sudo[65471]: pam_unix(sudo:session): session closed for user root
Jan 20 13:53:04 compute-0 sudo[65624]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-upvxlueozaexzzkkbnqfzubtjvqviuwb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917184.5815518-1058-200014415239019/AnsiballZ_command.py'
Jan 20 13:53:04 compute-0 sudo[65624]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:53:05 compute-0 python3.9[65626]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 13:53:05 compute-0 sudo[65624]: pam_unix(sudo:session): session closed for user root
Jan 20 13:53:05 compute-0 sudo[65777]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ioqksbdalqvlsvyvhlhqplpcbydmiylx ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1768917185.3855448-1082-231408617050625/AnsiballZ_edpm_nftables_from_files.py'
Jan 20 13:53:05 compute-0 sudo[65777]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:53:06 compute-0 python3[65779]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 20 13:53:06 compute-0 sudo[65777]: pam_unix(sudo:session): session closed for user root
Jan 20 13:53:06 compute-0 sudo[65929]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-htziicgmcrphaoofgzgusgcpoamozgkk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917186.3742664-1106-211749726379007/AnsiballZ_stat.py'
Jan 20 13:53:06 compute-0 sudo[65929]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:53:06 compute-0 python3.9[65931]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 13:53:06 compute-0 sudo[65929]: pam_unix(sudo:session): session closed for user root
Jan 20 13:53:07 compute-0 sudo[66052]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xobmnnbdpyjcskgybekclbmsnvavpdfu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917186.3742664-1106-211749726379007/AnsiballZ_copy.py'
Jan 20 13:53:07 compute-0 sudo[66052]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:53:07 compute-0 python3.9[66054]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768917186.3742664-1106-211749726379007/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:53:07 compute-0 sudo[66052]: pam_unix(sudo:session): session closed for user root
Jan 20 13:53:08 compute-0 sudo[66204]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yzkdqfleaixswiftppimkjhtpeebwrsx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917187.9679976-1151-214992131321665/AnsiballZ_stat.py'
Jan 20 13:53:08 compute-0 sudo[66204]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:53:08 compute-0 python3.9[66206]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 13:53:08 compute-0 sudo[66204]: pam_unix(sudo:session): session closed for user root
Jan 20 13:53:08 compute-0 sudo[66327]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-szbyumxredqsqnrhdtqcfmnhrfkulmcv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917187.9679976-1151-214992131321665/AnsiballZ_copy.py'
Jan 20 13:53:08 compute-0 sudo[66327]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:53:09 compute-0 python3.9[66329]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768917187.9679976-1151-214992131321665/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:53:09 compute-0 sudo[66327]: pam_unix(sudo:session): session closed for user root
Jan 20 13:53:09 compute-0 sudo[66479]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lzwhihuvneqzfjmtjgjopwhmfpdfxuep ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917189.4419982-1196-74695586541410/AnsiballZ_stat.py'
Jan 20 13:53:09 compute-0 sudo[66479]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:53:09 compute-0 python3.9[66481]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 13:53:09 compute-0 sudo[66479]: pam_unix(sudo:session): session closed for user root
Jan 20 13:53:10 compute-0 sudo[66602]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fzviyjmggtdyrgaambbnpxabgivcbyzn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917189.4419982-1196-74695586541410/AnsiballZ_copy.py'
Jan 20 13:53:10 compute-0 sudo[66602]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:53:10 compute-0 python3.9[66604]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768917189.4419982-1196-74695586541410/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:53:10 compute-0 sudo[66602]: pam_unix(sudo:session): session closed for user root
Jan 20 13:53:11 compute-0 sudo[66754]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qrowrpzlzzzjuzbifsysbloxlmtlbauq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917190.8599315-1241-248112608255950/AnsiballZ_stat.py'
Jan 20 13:53:11 compute-0 sudo[66754]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:53:11 compute-0 python3.9[66756]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 13:53:11 compute-0 sudo[66754]: pam_unix(sudo:session): session closed for user root
Jan 20 13:53:11 compute-0 sudo[66877]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-egbegknehfycntamzemirxmhwopvihfb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917190.8599315-1241-248112608255950/AnsiballZ_copy.py'
Jan 20 13:53:11 compute-0 sudo[66877]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:53:11 compute-0 python3.9[66879]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768917190.8599315-1241-248112608255950/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:53:11 compute-0 sudo[66877]: pam_unix(sudo:session): session closed for user root
Jan 20 13:53:12 compute-0 sudo[67029]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bzrwloppgcqwiiukklpkreoeolznoqgl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917192.371522-1286-226801814493748/AnsiballZ_stat.py'
Jan 20 13:53:12 compute-0 sudo[67029]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:53:12 compute-0 python3.9[67031]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 13:53:12 compute-0 sudo[67029]: pam_unix(sudo:session): session closed for user root
Jan 20 13:53:13 compute-0 sudo[67152]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aovhevwykyawdnqylsenmuydgxoxqxei ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917192.371522-1286-226801814493748/AnsiballZ_copy.py'
Jan 20 13:53:13 compute-0 sudo[67152]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:53:13 compute-0 python3.9[67154]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768917192.371522-1286-226801814493748/.source.nft follow=False _original_basename=ruleset.j2 checksum=693377dc03e5b6b24713cb537b18b88774724e35 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:53:13 compute-0 sudo[67152]: pam_unix(sudo:session): session closed for user root
Jan 20 13:53:14 compute-0 sudo[67304]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-avnpkfcjcetnjpbucdfusirdsucgxqat ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917193.8865423-1331-195185764799703/AnsiballZ_file.py'
Jan 20 13:53:14 compute-0 sudo[67304]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:53:14 compute-0 python3.9[67306]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:53:14 compute-0 sudo[67304]: pam_unix(sudo:session): session closed for user root
Jan 20 13:53:14 compute-0 sudo[67456]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dzxeqmtqyxnfbqsrsisnxdtrvrwhxzlj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917194.6726067-1355-265862228679138/AnsiballZ_command.py'
Jan 20 13:53:14 compute-0 sudo[67456]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:53:15 compute-0 python3.9[67458]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 13:53:15 compute-0 sudo[67456]: pam_unix(sudo:session): session closed for user root
Jan 20 13:53:16 compute-0 sudo[67615]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ufeukywwqwcbultqcsfiygtgygziqdpg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917195.605059-1379-36043060668796/AnsiballZ_blockinfile.py'
Jan 20 13:53:16 compute-0 sudo[67615]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:53:16 compute-0 python3.9[67617]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                            include "/etc/nftables/edpm-chains.nft"
                                            include "/etc/nftables/edpm-rules.nft"
                                            include "/etc/nftables/edpm-jumps.nft"
                                             path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:53:16 compute-0 sudo[67615]: pam_unix(sudo:session): session closed for user root
Jan 20 13:53:16 compute-0 sudo[67768]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mdnzresqhmgovvuvjyywkwmdnekzngfx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917196.6509273-1406-278352457947867/AnsiballZ_file.py'
Jan 20 13:53:16 compute-0 sudo[67768]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:53:17 compute-0 python3.9[67770]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:53:17 compute-0 sudo[67768]: pam_unix(sudo:session): session closed for user root
Jan 20 13:53:17 compute-0 sudo[67920]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-exvfxrcsdrntpztuhgqlunhvpyujktkf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917197.2994297-1406-14696025247836/AnsiballZ_file.py'
Jan 20 13:53:17 compute-0 sudo[67920]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:53:17 compute-0 python3.9[67922]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:53:17 compute-0 sudo[67920]: pam_unix(sudo:session): session closed for user root
Jan 20 13:53:18 compute-0 sudo[68072]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aaokzuxujturgwhngwxbkuphyhrsiyne ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917198.2658796-1451-199493935217610/AnsiballZ_mount.py'
Jan 20 13:53:18 compute-0 sudo[68072]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:53:19 compute-0 python3.9[68074]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 20 13:53:19 compute-0 sudo[68072]: pam_unix(sudo:session): session closed for user root
Jan 20 13:53:19 compute-0 sudo[68225]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ijzrneyaswvyjyrkvrhuoklhfiplofgq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917199.277184-1451-180459966234230/AnsiballZ_mount.py'
Jan 20 13:53:19 compute-0 sudo[68225]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:53:19 compute-0 python3.9[68227]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 20 13:53:19 compute-0 sudo[68225]: pam_unix(sudo:session): session closed for user root
Jan 20 13:53:20 compute-0 sshd-session[59020]: Connection closed by 192.168.122.30 port 40390
Jan 20 13:53:20 compute-0 sshd-session[59017]: pam_unix(sshd:session): session closed for user zuul
Jan 20 13:53:20 compute-0 systemd[1]: session-14.scope: Deactivated successfully.
Jan 20 13:53:20 compute-0 systemd[1]: session-14.scope: Consumed 33.154s CPU time.
Jan 20 13:53:20 compute-0 systemd-logind[796]: Session 14 logged out. Waiting for processes to exit.
Jan 20 13:53:20 compute-0 systemd-logind[796]: Removed session 14.
Jan 20 13:53:28 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Jan 20 13:53:32 compute-0 sshd-session[68255]: Accepted publickey for zuul from 192.168.122.30 port 51264 ssh2: ECDSA SHA256:Yw0kyD5N4lqNgr1J3b5cYIIxKFrTRY8zW6kk+n6imz4
Jan 20 13:53:32 compute-0 systemd-logind[796]: New session 15 of user zuul.
Jan 20 13:53:33 compute-0 systemd[1]: Started Session 15 of User zuul.
Jan 20 13:53:33 compute-0 sshd-session[68255]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 20 13:53:33 compute-0 sudo[68408]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-edxnvbiejqpmzbnhpcmmkuvgzpjvilll ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917213.103525-23-125655704021117/AnsiballZ_tempfile.py'
Jan 20 13:53:33 compute-0 sudo[68408]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:53:33 compute-0 python3.9[68410]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Jan 20 13:53:33 compute-0 sudo[68408]: pam_unix(sudo:session): session closed for user root
Jan 20 13:53:34 compute-0 sudo[68560]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xkewqaeqktwvbfcffukztkvdxdktefop ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917214.0713763-59-241344538031567/AnsiballZ_stat.py'
Jan 20 13:53:34 compute-0 sudo[68560]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:53:34 compute-0 python3.9[68562]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 13:53:34 compute-0 sudo[68560]: pam_unix(sudo:session): session closed for user root
Jan 20 13:53:35 compute-0 sudo[68712]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rqlzcecgysrzfitoidvsahdnfkajmsax ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917215.0441582-89-212700246848875/AnsiballZ_setup.py'
Jan 20 13:53:35 compute-0 sudo[68712]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:53:35 compute-0 python3.9[68714]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 13:53:35 compute-0 sudo[68712]: pam_unix(sudo:session): session closed for user root
Jan 20 13:53:37 compute-0 sudo[68864]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ddtleheidwzaxlfdhjocyvhdhkdwedxu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917216.5997472-114-277043317073680/AnsiballZ_blockinfile.py'
Jan 20 13:53:37 compute-0 sudo[68864]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:53:37 compute-0 python3.9[68866]: ansible-ansible.builtin.blockinfile Invoked with block=compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDrCUasX8PhlctvvIb2eE6+Z0hELmfczQ6UoBD+mPtCobptr/s786JmwJ3D8nIoKhlCLVSmhRfbqf1Pm45RUPTEtSuaa6HBDy40dZhTXU34X4KbGfKmur2bp9S/1w83ArKvI8inSqqk2qoMx1l7ECkEgeT+GbFwKfYLnbq5OV4Ms3tzl/uFUC/Xzxs2dbXlhozQiSamcO/a6EObErTvR8PrtaOoLFtTiD/I+oN+rkdBPkBc6r0qT4jS7nU1FOlT96meSZHE7Q1n8pxcy9PEc8w9hFdd1Zj8/WcGIdeEJsekuouK1Lut/sofQLZHyUMWJTcnBjx8BsjGx9NjUHPYUWIw+DZo7lT2QurAPNnaX4rp9ciGV2Bdm3ylNoOu3izNvM1JGTw3xRyYrmyxyWv3Euc35JXa0w07Xrqr+6Ckih0WTLU6q3Rlnrc/grpDC821sHrsljerHipJVOCbZB39LvV6wDDBlqfYZzfqID3dIqlVli4eL12J0K7jr7QAlPRhNf0=
                                            compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOG07miJwhzuA/nm0wvGIorydl2xbBiiDhE7PypnJ/jC
                                            compute-2.ctlplane.example.com,192.168.122.102,compute-2* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKiJpWtps/bRsuEHfak4zDuqPHKOWFLaEA2h86H7tPlrZHR8okAVZWCmY7keO3Ad1DFyffUtJPKv5OvTK91xGO8=
                                            compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC/3N9PJZXpat0uFh2x2RoV9B0Ih74HU9CPf+g/5HncM7gCVvCpW3CBde1qNDRU2iY9rzpOVPwzi4YzoAUcxB5KAiqZOI9ylmzfiD8JXQ+myLmIRLxHOdXFaEQ4mMp4W+X37hCZ6sdfm6Yqd6eqBuZrM/72ltYoewWBNCG/Hgqzu30L9WC4+BF+iADHT7Qnmvh/cc9U71WxB4h2ikBo1SdGoFCqoez7ajitqx+dw7VWaOtEPliS0LZuDtN3Zt/cBBgxhb/FaAEI3jRP2ej9X0NJW91YxzBygyxiVasslx92g/GmnDFOWVZb5ai/JJsNH6pLTjs25IzvnuWIf8/ZLgZ03zziR4mBLP12CIVF8g1CzaqK1IILDKkjS/dzDiTBefmiQ2+N0i5EEXOgmxchqOqTkFPQg/ar0+0uBPkwzAI0HDk99czhyYHFlO+PhnULVkL1z+XLwHBgOrbNNVQQcJCvady4Gadh66mu1UrLpryNYOgZiugZi67Biha4ZPzPHok=
                                            compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF41dx3BXAuEvQwQNtbUM7rIrbaOLr5CRvYNdDD+UMr9
                                            compute-1.ctlplane.example.com,192.168.122.101,compute-1* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBENFrTpm22/xEaEJMzd7C5WyJttJdK+HK5kxP8/NuvvAQSlLtEulBZnvD/OX5hk3/sDYhPQelj3YsNX1Plw5PJQ=
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC/dEYtIJ/delwiq9xMMctU8myoGU/TKMiFUM+i3BSaGKrC0rujad6qo1LAtjth5aYbBcgBhxy0UEX0oCruQQgc5qDpPmWHmJiAwdQJaDu6GxTRl3PlXF2u4rd0Rz72DAMuCxPSYedeHU91uL4vlrcD95xONWew2wa9lUuqQWdgj8DtqnB9T895BihDk9vFLXAaoGJcYZVGKJmXR8sOzNTFQxefqstVO0/dfbRUyFd0Ukp5v7rTmLxw0Np5WcGMOg9l/iRzWTopxnTRvXpBoGlFCmzNvTG2uH08dJ4FU5Wk9/iSxonuiVJu9DKs8Tp4EajaA4Y6cEuZiMhhqi7vw6zVCQuCmRBpny6Ub1Ag2CesMYgxwOVJO5cHsKh3BzuPFsh1gMgrrZK7v+qfm2r1rhHlPsCWrcnrtUIZa7gyzdFvHytTh/4uyGMgNpbwxkyCxgSN4PleQy2wvxy/DFW+JxCDzI4jK9LFH5aojzEhUtj+P3E7CXL/wRPxDJdfEU6PhTk=
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEa1zL0TUD00vr72wZq3y4rgtSnctWBvs+gME/0/EAsV
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBI2WwWe4rQW0CaFwcmci1J5n144T87fcxCH+Y2CVZd5XQ7Cvzlhh1cGNDX81Tng3KgxvKOuz3mdiSCLqx8noiD0=
                                             create=True mode=0644 path=/tmp/ansible.3wu16y_e state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:53:37 compute-0 sudo[68864]: pam_unix(sudo:session): session closed for user root
Jan 20 13:53:38 compute-0 sudo[69016]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-girnowslpzvryjnerwsixdszrgpoqdvo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917217.503886-138-80784156371269/AnsiballZ_command.py'
Jan 20 13:53:38 compute-0 sudo[69016]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:53:38 compute-0 python3.9[69018]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.3wu16y_e' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 13:53:38 compute-0 sudo[69016]: pam_unix(sudo:session): session closed for user root
Jan 20 13:53:39 compute-0 sudo[69170]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rjzdyctspbfwcrbryoeunxedycebzotf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917218.920368-162-107200204392791/AnsiballZ_file.py'
Jan 20 13:53:39 compute-0 sudo[69170]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:53:39 compute-0 python3.9[69172]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.3wu16y_e state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:53:39 compute-0 sudo[69170]: pam_unix(sudo:session): session closed for user root
Jan 20 13:53:40 compute-0 sshd-session[68258]: Connection closed by 192.168.122.30 port 51264
Jan 20 13:53:40 compute-0 sshd-session[68255]: pam_unix(sshd:session): session closed for user zuul
Jan 20 13:53:40 compute-0 systemd-logind[796]: Session 15 logged out. Waiting for processes to exit.
Jan 20 13:53:40 compute-0 systemd[1]: session-15.scope: Deactivated successfully.
Jan 20 13:53:40 compute-0 systemd[1]: session-15.scope: Consumed 3.074s CPU time.
Jan 20 13:53:40 compute-0 systemd-logind[796]: Removed session 15.
Jan 20 13:53:46 compute-0 sshd-session[69197]: Accepted publickey for zuul from 192.168.122.30 port 44946 ssh2: ECDSA SHA256:Yw0kyD5N4lqNgr1J3b5cYIIxKFrTRY8zW6kk+n6imz4
Jan 20 13:53:46 compute-0 systemd-logind[796]: New session 16 of user zuul.
Jan 20 13:53:46 compute-0 systemd[1]: Started Session 16 of User zuul.
Jan 20 13:53:46 compute-0 sshd-session[69197]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 20 13:53:47 compute-0 python3.9[69350]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 13:53:48 compute-0 sudo[69504]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ymqdqsnscrfnnzbgwcnzctnzhmhiwzdn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917228.186371-56-15305861186716/AnsiballZ_systemd.py'
Jan 20 13:53:48 compute-0 sudo[69504]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:53:49 compute-0 python3.9[69506]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Jan 20 13:53:49 compute-0 sudo[69504]: pam_unix(sudo:session): session closed for user root
Jan 20 13:53:49 compute-0 sudo[69660]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tyjoywrglqsdweswhfhuavvqqmevrpnc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917229.5163972-80-106532801343166/AnsiballZ_systemd.py'
Jan 20 13:53:49 compute-0 sudo[69660]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:53:50 compute-0 sshd-session[69608]: Connection closed by authenticating user root 157.245.78.139 port 59584 [preauth]
Jan 20 13:53:50 compute-0 python3.9[69662]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 20 13:53:50 compute-0 sudo[69660]: pam_unix(sudo:session): session closed for user root
Jan 20 13:53:51 compute-0 sudo[69814]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-opxgsnumilxzlgyacwixnofmsrdwayje ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917230.8474245-107-60501906388701/AnsiballZ_command.py'
Jan 20 13:53:51 compute-0 sudo[69814]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:53:51 compute-0 python3.9[69816]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 13:53:51 compute-0 sudo[69814]: pam_unix(sudo:session): session closed for user root
Jan 20 13:53:52 compute-0 sudo[69968]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-buvkwcdpttzzjammujewlvxxuxjiajdn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917232.2432828-131-60829048579820/AnsiballZ_stat.py'
Jan 20 13:53:52 compute-0 sudo[69968]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:53:53 compute-0 python3.9[69970]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 13:53:53 compute-0 sudo[69968]: pam_unix(sudo:session): session closed for user root
Jan 20 13:53:53 compute-0 sudo[70122]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aetwjnilgujhwdpjfiqjfzipeuijankp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917233.5118082-155-123998162819445/AnsiballZ_command.py'
Jan 20 13:53:53 compute-0 sudo[70122]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:53:53 compute-0 python3.9[70124]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 13:53:54 compute-0 sudo[70122]: pam_unix(sudo:session): session closed for user root
Jan 20 13:53:54 compute-0 sshd-session[69740]: Connection closed by authenticating user root 159.223.5.14 port 60258 [preauth]
Jan 20 13:53:54 compute-0 sudo[70277]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hwinxyycphiofkxwozwlidurboeogxsn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917234.3772304-179-107188749189604/AnsiballZ_file.py'
Jan 20 13:53:54 compute-0 sudo[70277]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:53:54 compute-0 python3.9[70279]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:53:55 compute-0 sudo[70277]: pam_unix(sudo:session): session closed for user root
Jan 20 13:53:55 compute-0 sshd-session[69200]: Connection closed by 192.168.122.30 port 44946
Jan 20 13:53:55 compute-0 sshd-session[69197]: pam_unix(sshd:session): session closed for user zuul
Jan 20 13:53:55 compute-0 systemd[1]: session-16.scope: Deactivated successfully.
Jan 20 13:53:55 compute-0 systemd[1]: session-16.scope: Consumed 4.376s CPU time.
Jan 20 13:53:55 compute-0 systemd-logind[796]: Session 16 logged out. Waiting for processes to exit.
Jan 20 13:53:55 compute-0 systemd-logind[796]: Removed session 16.
Jan 20 13:54:01 compute-0 sshd-session[70304]: Accepted publickey for zuul from 192.168.122.30 port 40112 ssh2: ECDSA SHA256:Yw0kyD5N4lqNgr1J3b5cYIIxKFrTRY8zW6kk+n6imz4
Jan 20 13:54:01 compute-0 systemd-logind[796]: New session 17 of user zuul.
Jan 20 13:54:01 compute-0 systemd[1]: Started Session 17 of User zuul.
Jan 20 13:54:01 compute-0 sshd-session[70304]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 20 13:54:02 compute-0 python3.9[70457]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 13:54:03 compute-0 sudo[70611]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hcsacvvapjfmowwwumuqjigvctrwgpjy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917243.094253-61-158805502247773/AnsiballZ_setup.py'
Jan 20 13:54:03 compute-0 sudo[70611]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:54:03 compute-0 python3.9[70613]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 20 13:54:03 compute-0 sudo[70611]: pam_unix(sudo:session): session closed for user root
Jan 20 13:54:04 compute-0 sudo[70695]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zttajsefblwagodbzqdblpulydazljrb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917243.094253-61-158805502247773/AnsiballZ_dnf.py'
Jan 20 13:54:04 compute-0 sudo[70695]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:54:04 compute-0 python3.9[70697]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 20 13:54:05 compute-0 sudo[70695]: pam_unix(sudo:session): session closed for user root
Jan 20 13:54:06 compute-0 python3.9[70848]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 13:54:08 compute-0 python3.9[70999]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 20 13:54:09 compute-0 python3.9[71149]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 13:54:09 compute-0 python3.9[71299]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 13:54:10 compute-0 sshd-session[70307]: Connection closed by 192.168.122.30 port 40112
Jan 20 13:54:10 compute-0 sshd-session[70304]: pam_unix(sshd:session): session closed for user zuul
Jan 20 13:54:10 compute-0 systemd[1]: session-17.scope: Deactivated successfully.
Jan 20 13:54:10 compute-0 systemd[1]: session-17.scope: Consumed 5.917s CPU time.
Jan 20 13:54:10 compute-0 systemd-logind[796]: Session 17 logged out. Waiting for processes to exit.
Jan 20 13:54:10 compute-0 systemd-logind[796]: Removed session 17.
Jan 20 13:54:11 compute-0 chronyd[58534]: Selected source 23.159.16.194 (pool.ntp.org)
Jan 20 13:54:19 compute-0 sshd-session[71325]: Accepted publickey for zuul from 38.102.83.230 port 48916 ssh2: RSA SHA256:r50QbT7bSKscUimrVpe816OyonJCbpigaVSUx3I8hI8
Jan 20 13:54:19 compute-0 systemd-logind[796]: New session 18 of user zuul.
Jan 20 13:54:19 compute-0 systemd[1]: Started Session 18 of User zuul.
Jan 20 13:54:19 compute-0 sshd-session[71325]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 20 13:54:19 compute-0 sudo[71401]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jsuzkatlrsiudfmrwbntdoxhnvytlnya ; /usr/bin/python3'
Jan 20 13:54:19 compute-0 sudo[71401]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:54:20 compute-0 useradd[71405]: new group: name=ceph-admin, GID=42478
Jan 20 13:54:20 compute-0 useradd[71405]: new user: name=ceph-admin, UID=42477, GID=42478, home=/home/ceph-admin, shell=/bin/bash, from=none
Jan 20 13:54:20 compute-0 sudo[71401]: pam_unix(sudo:session): session closed for user root
Jan 20 13:54:20 compute-0 sudo[71487]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rscjhhiqzkpzcftpdkhudfxixnvwirmz ; /usr/bin/python3'
Jan 20 13:54:20 compute-0 sudo[71487]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:54:20 compute-0 sudo[71487]: pam_unix(sudo:session): session closed for user root
Jan 20 13:54:21 compute-0 sudo[71560]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qtemwixcgzmutobrbajifpnvurtssrzg ; /usr/bin/python3'
Jan 20 13:54:21 compute-0 sudo[71560]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:54:21 compute-0 sudo[71560]: pam_unix(sudo:session): session closed for user root
Jan 20 13:54:21 compute-0 sudo[71610]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fbjhkbwaczyhpeuobexfbwuolllbhldx ; /usr/bin/python3'
Jan 20 13:54:21 compute-0 sudo[71610]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:54:21 compute-0 sudo[71610]: pam_unix(sudo:session): session closed for user root
Jan 20 13:54:22 compute-0 sudo[71636]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lgzsnwqhdgxyhpktzfwryazhpyllumvl ; /usr/bin/python3'
Jan 20 13:54:22 compute-0 sudo[71636]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:54:22 compute-0 sudo[71636]: pam_unix(sudo:session): session closed for user root
Jan 20 13:54:22 compute-0 sudo[71662]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-blkvuzwiyfgbprcphreprkgnvgygpujy ; /usr/bin/python3'
Jan 20 13:54:22 compute-0 sudo[71662]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:54:22 compute-0 sudo[71662]: pam_unix(sudo:session): session closed for user root
Jan 20 13:54:23 compute-0 sudo[71688]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-apkkljfmxbqhxlxfjyawajluucuibdxo ; /usr/bin/python3'
Jan 20 13:54:23 compute-0 sudo[71688]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:54:23 compute-0 sudo[71688]: pam_unix(sudo:session): session closed for user root
Jan 20 13:54:23 compute-0 sudo[71766]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dviyrxfoxonxtrrkaidzktjjauoblefh ; /usr/bin/python3'
Jan 20 13:54:23 compute-0 sudo[71766]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:54:23 compute-0 sudo[71766]: pam_unix(sudo:session): session closed for user root
Jan 20 13:54:23 compute-0 sudo[71839]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nwsrfnhngvcngdotbkjtsftpnwywbnpd ; /usr/bin/python3'
Jan 20 13:54:23 compute-0 sudo[71839]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:54:24 compute-0 sudo[71839]: pam_unix(sudo:session): session closed for user root
Jan 20 13:54:24 compute-0 sudo[71941]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ixtdaokixtyjvyikpjouoyuwizzwypbr ; /usr/bin/python3'
Jan 20 13:54:24 compute-0 sudo[71941]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:54:24 compute-0 sudo[71941]: pam_unix(sudo:session): session closed for user root
Jan 20 13:54:24 compute-0 sudo[72014]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-grvvicaxuoqglhoembmksqkwbalxorwd ; /usr/bin/python3'
Jan 20 13:54:24 compute-0 sudo[72014]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:54:25 compute-0 sudo[72014]: pam_unix(sudo:session): session closed for user root
Jan 20 13:54:25 compute-0 sudo[72064]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-trhekzsaukhowchlxowmmcfypflmrexp ; /usr/bin/python3'
Jan 20 13:54:25 compute-0 sudo[72064]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:54:26 compute-0 python3[72066]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 13:54:26 compute-0 sudo[72064]: pam_unix(sudo:session): session closed for user root
Jan 20 13:54:27 compute-0 sudo[72159]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uyingqjbjcvdqnhckvlsbsusphlgmunm ; /usr/bin/python3'
Jan 20 13:54:27 compute-0 sudo[72159]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:54:27 compute-0 python3[72161]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 20 13:54:29 compute-0 sudo[72159]: pam_unix(sudo:session): session closed for user root
Jan 20 13:54:29 compute-0 sudo[72186]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-slhlvusvvcfgqckbmtlvhuatnjojrevf ; /usr/bin/python3'
Jan 20 13:54:29 compute-0 sudo[72186]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:54:29 compute-0 python3[72188]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 20 13:54:29 compute-0 sudo[72186]: pam_unix(sudo:session): session closed for user root
Jan 20 13:54:29 compute-0 sudo[72212]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ogtcmfchuicpvsaczkqdqchlmafkfkum ; /usr/bin/python3'
Jan 20 13:54:29 compute-0 sudo[72212]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:54:29 compute-0 python3[72214]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=7G
                                          losetup /dev/loop3 /var/lib/ceph-osd-0.img
                                          lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 13:54:30 compute-0 kernel: loop: module loaded
Jan 20 13:54:30 compute-0 kernel: loop3: detected capacity change from 0 to 14680064
Jan 20 13:54:30 compute-0 sudo[72212]: pam_unix(sudo:session): session closed for user root
Jan 20 13:54:30 compute-0 sudo[72247]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tinjoeiifljfajmiaqweeadtpanampma ; /usr/bin/python3'
Jan 20 13:54:30 compute-0 sudo[72247]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:54:30 compute-0 python3[72249]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3
                                          vgcreate ceph_vg0 /dev/loop3
                                          lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0
                                          lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 13:54:30 compute-0 lvm[72252]: PV /dev/loop3 not used.
Jan 20 13:54:30 compute-0 lvm[72254]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 20 13:54:30 compute-0 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Jan 20 13:54:30 compute-0 lvm[72261]:   1 logical volume(s) in volume group "ceph_vg0" now active
Jan 20 13:54:30 compute-0 lvm[72264]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 20 13:54:30 compute-0 lvm[72264]: VG ceph_vg0 finished
Jan 20 13:54:30 compute-0 systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
Jan 20 13:54:30 compute-0 sudo[72247]: pam_unix(sudo:session): session closed for user root
Jan 20 13:54:31 compute-0 sudo[72340]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iuanewngpckpqqsumnylubzseokbrigr ; /usr/bin/python3'
Jan 20 13:54:31 compute-0 sudo[72340]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:54:31 compute-0 python3[72342]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 20 13:54:31 compute-0 sudo[72340]: pam_unix(sudo:session): session closed for user root
Jan 20 13:54:31 compute-0 sudo[72413]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-goxxcbpcgtulibvbsqkwoydlfxdfqhqo ; /usr/bin/python3'
Jan 20 13:54:31 compute-0 sudo[72413]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:54:31 compute-0 python3[72415]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1768917270.9730954-36957-257715627372866/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:54:31 compute-0 sudo[72413]: pam_unix(sudo:session): session closed for user root
Jan 20 13:54:32 compute-0 sudo[72463]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vetrlfsxlownrubakadqsiyydyunvqbl ; /usr/bin/python3'
Jan 20 13:54:32 compute-0 sudo[72463]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:54:32 compute-0 python3[72465]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 13:54:32 compute-0 systemd[1]: Reloading.
Jan 20 13:54:32 compute-0 systemd-sysv-generator[72497]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 13:54:32 compute-0 systemd-rc-local-generator[72490]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 13:54:32 compute-0 systemd[1]: Starting Ceph OSD losetup...
Jan 20 13:54:32 compute-0 bash[72505]: /dev/loop3: [64513]:4329676 (/var/lib/ceph-osd-0.img)
Jan 20 13:54:32 compute-0 systemd[1]: Finished Ceph OSD losetup.
Jan 20 13:54:33 compute-0 lvm[72506]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 20 13:54:33 compute-0 sudo[72463]: pam_unix(sudo:session): session closed for user root
Jan 20 13:54:33 compute-0 lvm[72506]: VG ceph_vg0 finished
Jan 20 13:54:35 compute-0 python3[72530]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 13:54:36 compute-0 sshd-session[72574]: Connection closed by authenticating user root 157.245.78.139 port 35884 [preauth]
Jan 20 13:54:38 compute-0 sudo[72623]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rshbvtwoahwqhmtlnodkaggriyteydco ; /usr/bin/python3'
Jan 20 13:54:38 compute-0 sudo[72623]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:54:38 compute-0 python3[72625]: ansible-ansible.legacy.dnf Invoked with name=['cephadm'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 20 13:54:39 compute-0 groupadd[72631]: group added to /etc/group: name=cephadm, GID=993
Jan 20 13:54:39 compute-0 groupadd[72631]: group added to /etc/gshadow: name=cephadm
Jan 20 13:54:39 compute-0 groupadd[72631]: new group: name=cephadm, GID=993
Jan 20 13:54:39 compute-0 useradd[72638]: new user: name=cephadm, UID=992, GID=993, home=/var/lib/cephadm, shell=/bin/bash, from=none
Jan 20 13:54:39 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 20 13:54:39 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 20 13:54:40 compute-0 sudo[72623]: pam_unix(sudo:session): session closed for user root
Jan 20 13:54:40 compute-0 sudo[72734]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vyegvpaefuzvgegeytmqwifturczwgal ; /usr/bin/python3'
Jan 20 13:54:40 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 20 13:54:40 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 20 13:54:40 compute-0 systemd[1]: run-rf3a038429af949d39476bf92c50f6afd.service: Deactivated successfully.
Jan 20 13:54:40 compute-0 sudo[72734]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:54:40 compute-0 python3[72737]: ansible-ansible.builtin.stat Invoked with path=/usr/sbin/cephadm follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 20 13:54:40 compute-0 sudo[72734]: pam_unix(sudo:session): session closed for user root
Jan 20 13:54:40 compute-0 sudo[72763]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-owmtstoyqthtqossmrxfhsbzzghziibb ; /usr/bin/python3'
Jan 20 13:54:40 compute-0 sudo[72763]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:54:40 compute-0 python3[72765]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm ls --no-detail _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 13:54:40 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 20 13:54:40 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 20 13:54:41 compute-0 sudo[72763]: pam_unix(sudo:session): session closed for user root
Jan 20 13:54:41 compute-0 sudo[72828]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jyxsdhrugxzajggsjfkzvcoenfemwdrs ; /usr/bin/python3'
Jan 20 13:54:41 compute-0 sudo[72828]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:54:41 compute-0 python3[72830]: ansible-ansible.builtin.file Invoked with path=/etc/ceph state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:54:41 compute-0 sudo[72828]: pam_unix(sudo:session): session closed for user root
Jan 20 13:54:41 compute-0 sudo[72854]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pbjxhqcrblakmtwhipedzprzjhcwbmrr ; /usr/bin/python3'
Jan 20 13:54:41 compute-0 sudo[72854]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:54:41 compute-0 python3[72856]: ansible-ansible.builtin.file Invoked with path=/home/ceph-admin/specs owner=ceph-admin group=ceph-admin mode=0755 state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:54:41 compute-0 sudo[72854]: pam_unix(sudo:session): session closed for user root
Jan 20 13:54:41 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 20 13:54:42 compute-0 sudo[72932]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rwyjkggiytvbhibhqpxvblwrytcbisvy ; /usr/bin/python3'
Jan 20 13:54:42 compute-0 sudo[72932]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:54:42 compute-0 python3[72934]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 20 13:54:42 compute-0 sudo[72932]: pam_unix(sudo:session): session closed for user root
Jan 20 13:54:42 compute-0 sudo[73005]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-brnhmcaymxwsfsntgszicohlpvqgwubs ; /usr/bin/python3'
Jan 20 13:54:42 compute-0 sudo[73005]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:54:42 compute-0 python3[73007]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1768917282.2726512-37148-281032490176136/source dest=/home/ceph-admin/specs/ceph_spec.yaml owner=ceph-admin group=ceph-admin mode=0644 _original_basename=ceph_spec.yml follow=False checksum=a2c84611a4e46cfce32a90c112eae0345cab6abb backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:54:42 compute-0 sudo[73005]: pam_unix(sudo:session): session closed for user root
Jan 20 13:54:43 compute-0 sudo[73107]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kvsfgkpexiowdbagmzwlpqeebkhhhjlv ; /usr/bin/python3'
Jan 20 13:54:43 compute-0 sudo[73107]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:54:43 compute-0 python3[73109]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 20 13:54:43 compute-0 sudo[73107]: pam_unix(sudo:session): session closed for user root
Jan 20 13:54:43 compute-0 sudo[73180]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sxpjnpjebtrfkwgwkkojhzmjiwndqdjm ; /usr/bin/python3'
Jan 20 13:54:43 compute-0 sudo[73180]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:54:43 compute-0 python3[73182]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1768917283.3596203-37166-38448512481702/source dest=/home/ceph-admin/assimilate_ceph.conf owner=ceph-admin group=ceph-admin mode=0644 _original_basename=initial_ceph.conf follow=False checksum=41828f7c2442fdf376911255e33c12863fc3b1b3 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:54:44 compute-0 sudo[73180]: pam_unix(sudo:session): session closed for user root
Jan 20 13:54:44 compute-0 sudo[73230]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rwjxrfckpikzkcbeumydctknwwkofhdk ; /usr/bin/python3'
Jan 20 13:54:44 compute-0 sudo[73230]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:54:44 compute-0 python3[73232]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 20 13:54:44 compute-0 sudo[73230]: pam_unix(sudo:session): session closed for user root
Jan 20 13:54:44 compute-0 sudo[73258]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jftxkiyhidyrhpnycrszjemlipninjzl ; /usr/bin/python3'
Jan 20 13:54:44 compute-0 sudo[73258]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:54:44 compute-0 python3[73260]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa.pub follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 20 13:54:44 compute-0 sudo[73258]: pam_unix(sudo:session): session closed for user root
Jan 20 13:54:44 compute-0 sudo[73286]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-krfjwohviknvpfsggiqyginraibofgpp ; /usr/bin/python3'
Jan 20 13:54:44 compute-0 sudo[73286]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:54:45 compute-0 python3[73288]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 20 13:54:45 compute-0 sudo[73286]: pam_unix(sudo:session): session closed for user root
Jan 20 13:54:45 compute-0 python3[73314]: ansible-ansible.builtin.stat Invoked with path=/tmp/cephadm_registry.json follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 20 13:54:45 compute-0 sudo[73338]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xfrsscnxrzcawvvqzqlwieokemiypgcb ; /usr/bin/python3'
Jan 20 13:54:45 compute-0 sudo[73338]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:54:45 compute-0 python3[73340]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm bootstrap --skip-firewalld --skip-prepare-host --ssh-private-key /home/ceph-admin/.ssh/id_rsa --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub --ssh-user ceph-admin --allow-fqdn-hostname --output-keyring /etc/ceph/ceph.client.admin.keyring --output-config /etc/ceph/ceph.conf --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 --config /home/ceph-admin/assimilate_ceph.conf --skip-monitoring-stack --skip-dashboard --mon-ip 192.168.122.100 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 13:54:45 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 20 13:54:45 compute-0 sshd-session[73357]: Accepted publickey for ceph-admin from 192.168.122.100 port 45966 ssh2: RSA SHA256:eqrJ6T+GYkPtbx0jSDommFb6YAfLVXAEDWraZDSNLSE
Jan 20 13:54:45 compute-0 systemd[1]: Created slice User Slice of UID 42477.
Jan 20 13:54:45 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42477...
Jan 20 13:54:45 compute-0 systemd-logind[796]: New session 19 of user ceph-admin.
Jan 20 13:54:45 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42477.
Jan 20 13:54:45 compute-0 systemd[1]: Starting User Manager for UID 42477...
Jan 20 13:54:45 compute-0 systemd[73361]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 20 13:54:46 compute-0 systemd[73361]: Queued start job for default target Main User Target.
Jan 20 13:54:46 compute-0 systemd[73361]: Created slice User Application Slice.
Jan 20 13:54:46 compute-0 systemd[73361]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 20 13:54:46 compute-0 systemd[73361]: Started Daily Cleanup of User's Temporary Directories.
Jan 20 13:54:46 compute-0 systemd[73361]: Reached target Paths.
Jan 20 13:54:46 compute-0 systemd[73361]: Reached target Timers.
Jan 20 13:54:46 compute-0 systemd[73361]: Starting D-Bus User Message Bus Socket...
Jan 20 13:54:46 compute-0 systemd[73361]: Starting Create User's Volatile Files and Directories...
Jan 20 13:54:46 compute-0 systemd[73361]: Listening on D-Bus User Message Bus Socket.
Jan 20 13:54:46 compute-0 systemd[73361]: Reached target Sockets.
Jan 20 13:54:46 compute-0 systemd[73361]: Finished Create User's Volatile Files and Directories.
Jan 20 13:54:46 compute-0 systemd[73361]: Reached target Basic System.
Jan 20 13:54:46 compute-0 systemd[73361]: Reached target Main User Target.
Jan 20 13:54:46 compute-0 systemd[73361]: Startup finished in 125ms.
Jan 20 13:54:46 compute-0 systemd[1]: Started User Manager for UID 42477.
Jan 20 13:54:46 compute-0 systemd[1]: Started Session 19 of User ceph-admin.
Jan 20 13:54:46 compute-0 sshd-session[73357]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 20 13:54:46 compute-0 sudo[73377]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/echo
Jan 20 13:54:46 compute-0 sudo[73377]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:54:46 compute-0 sudo[73377]: pam_unix(sudo:session): session closed for user root
Jan 20 13:54:46 compute-0 sshd-session[73376]: Received disconnect from 192.168.122.100 port 45966:11: disconnected by user
Jan 20 13:54:46 compute-0 sshd-session[73376]: Disconnected from user ceph-admin 192.168.122.100 port 45966
Jan 20 13:54:46 compute-0 sshd-session[73357]: pam_unix(sshd:session): session closed for user ceph-admin
Jan 20 13:54:46 compute-0 systemd[1]: session-19.scope: Deactivated successfully.
Jan 20 13:54:46 compute-0 systemd-logind[796]: Session 19 logged out. Waiting for processes to exit.
Jan 20 13:54:46 compute-0 systemd-logind[796]: Removed session 19.
Jan 20 13:54:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-compat3465192872-merged.mount: Deactivated successfully.
Jan 20 13:54:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-compat3465192872-lower\x2dmapped.mount: Deactivated successfully.
Jan 20 13:54:56 compute-0 systemd[1]: Stopping User Manager for UID 42477...
Jan 20 13:54:56 compute-0 systemd[73361]: Activating special unit Exit the Session...
Jan 20 13:54:56 compute-0 systemd[73361]: Stopped target Main User Target.
Jan 20 13:54:56 compute-0 systemd[73361]: Stopped target Basic System.
Jan 20 13:54:56 compute-0 systemd[73361]: Stopped target Paths.
Jan 20 13:54:56 compute-0 systemd[73361]: Stopped target Sockets.
Jan 20 13:54:56 compute-0 systemd[73361]: Stopped target Timers.
Jan 20 13:54:56 compute-0 systemd[73361]: Stopped Mark boot as successful after the user session has run 2 minutes.
Jan 20 13:54:56 compute-0 systemd[73361]: Stopped Daily Cleanup of User's Temporary Directories.
Jan 20 13:54:56 compute-0 systemd[73361]: Closed D-Bus User Message Bus Socket.
Jan 20 13:54:56 compute-0 systemd[73361]: Stopped Create User's Volatile Files and Directories.
Jan 20 13:54:56 compute-0 systemd[73361]: Removed slice User Application Slice.
Jan 20 13:54:56 compute-0 systemd[73361]: Reached target Shutdown.
Jan 20 13:54:56 compute-0 systemd[73361]: Finished Exit the Session.
Jan 20 13:54:56 compute-0 systemd[73361]: Reached target Exit the Session.
Jan 20 13:54:56 compute-0 systemd[1]: user@42477.service: Deactivated successfully.
Jan 20 13:54:56 compute-0 systemd[1]: Stopped User Manager for UID 42477.
Jan 20 13:54:56 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Jan 20 13:54:56 compute-0 systemd[1]: run-user-42477.mount: Deactivated successfully.
Jan 20 13:54:56 compute-0 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Jan 20 13:54:56 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Jan 20 13:54:56 compute-0 systemd[1]: Removed slice User Slice of UID 42477.
Jan 20 13:55:02 compute-0 podman[73414]: 2026-01-20 13:55:02.084169573 +0000 UTC m=+15.805636674 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 20 13:55:02 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 20 13:55:02 compute-0 podman[73474]: 2026-01-20 13:55:02.194984335 +0000 UTC m=+0.088502368 container create bd5262406054fee19ffc0d52962c41b8e2b29a1900bf0ea01437995bb8cd745e (image=quay.io/ceph/ceph:v18, name=amazing_villani, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 13:55:02 compute-0 podman[73474]: 2026-01-20 13:55:02.127460086 +0000 UTC m=+0.020978099 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 20 13:55:02 compute-0 systemd[1]: Created slice Virtual Machine and Container Slice.
Jan 20 13:55:02 compute-0 systemd[1]: Started libpod-conmon-bd5262406054fee19ffc0d52962c41b8e2b29a1900bf0ea01437995bb8cd745e.scope.
Jan 20 13:55:02 compute-0 systemd[1]: Started libcrun container.
Jan 20 13:55:02 compute-0 podman[73474]: 2026-01-20 13:55:02.286224107 +0000 UTC m=+0.179742140 container init bd5262406054fee19ffc0d52962c41b8e2b29a1900bf0ea01437995bb8cd745e (image=quay.io/ceph/ceph:v18, name=amazing_villani, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 20 13:55:02 compute-0 podman[73474]: 2026-01-20 13:55:02.292488887 +0000 UTC m=+0.186006890 container start bd5262406054fee19ffc0d52962c41b8e2b29a1900bf0ea01437995bb8cd745e (image=quay.io/ceph/ceph:v18, name=amazing_villani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 20 13:55:02 compute-0 podman[73474]: 2026-01-20 13:55:02.29628092 +0000 UTC m=+0.189798923 container attach bd5262406054fee19ffc0d52962c41b8e2b29a1900bf0ea01437995bb8cd745e (image=quay.io/ceph/ceph:v18, name=amazing_villani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 20 13:55:02 compute-0 amazing_villani[73491]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)
Jan 20 13:55:02 compute-0 systemd[1]: libpod-bd5262406054fee19ffc0d52962c41b8e2b29a1900bf0ea01437995bb8cd745e.scope: Deactivated successfully.
Jan 20 13:55:02 compute-0 podman[73496]: 2026-01-20 13:55:02.632635351 +0000 UTC m=+0.026294963 container died bd5262406054fee19ffc0d52962c41b8e2b29a1900bf0ea01437995bb8cd745e (image=quay.io/ceph/ceph:v18, name=amazing_villani, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 13:55:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-ccfe3cbb91c1d5db5d1ea9ba4e309548b9ec491ede4c4c582df40ce5be2813bd-merged.mount: Deactivated successfully.
Jan 20 13:55:02 compute-0 podman[73496]: 2026-01-20 13:55:02.788231816 +0000 UTC m=+0.181891408 container remove bd5262406054fee19ffc0d52962c41b8e2b29a1900bf0ea01437995bb8cd745e (image=quay.io/ceph/ceph:v18, name=amazing_villani, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 20 13:55:02 compute-0 systemd[1]: libpod-conmon-bd5262406054fee19ffc0d52962c41b8e2b29a1900bf0ea01437995bb8cd745e.scope: Deactivated successfully.
Jan 20 13:55:02 compute-0 podman[73510]: 2026-01-20 13:55:02.858010096 +0000 UTC m=+0.045078252 container create 90d2293eb44575bb03049e2f44bc107b543bd756c1d7c10da0c2b6948e882211 (image=quay.io/ceph/ceph:v18, name=clever_aryabhata, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 20 13:55:02 compute-0 systemd[1]: Started libpod-conmon-90d2293eb44575bb03049e2f44bc107b543bd756c1d7c10da0c2b6948e882211.scope.
Jan 20 13:55:02 compute-0 systemd[1]: Started libcrun container.
Jan 20 13:55:02 compute-0 podman[73510]: 2026-01-20 13:55:02.925915985 +0000 UTC m=+0.112984181 container init 90d2293eb44575bb03049e2f44bc107b543bd756c1d7c10da0c2b6948e882211 (image=quay.io/ceph/ceph:v18, name=clever_aryabhata, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 20 13:55:02 compute-0 podman[73510]: 2026-01-20 13:55:02.834327035 +0000 UTC m=+0.021395281 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 20 13:55:02 compute-0 podman[73510]: 2026-01-20 13:55:02.932582116 +0000 UTC m=+0.119650262 container start 90d2293eb44575bb03049e2f44bc107b543bd756c1d7c10da0c2b6948e882211 (image=quay.io/ceph/ceph:v18, name=clever_aryabhata, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 20 13:55:02 compute-0 podman[73510]: 2026-01-20 13:55:02.935063954 +0000 UTC m=+0.122132120 container attach 90d2293eb44575bb03049e2f44bc107b543bd756c1d7c10da0c2b6948e882211 (image=quay.io/ceph/ceph:v18, name=clever_aryabhata, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 20 13:55:02 compute-0 clever_aryabhata[73526]: 167 167
Jan 20 13:55:02 compute-0 systemd[1]: libpod-90d2293eb44575bb03049e2f44bc107b543bd756c1d7c10da0c2b6948e882211.scope: Deactivated successfully.
Jan 20 13:55:02 compute-0 podman[73510]: 2026-01-20 13:55:02.937698515 +0000 UTC m=+0.124766721 container died 90d2293eb44575bb03049e2f44bc107b543bd756c1d7c10da0c2b6948e882211 (image=quay.io/ceph/ceph:v18, name=clever_aryabhata, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 20 13:55:02 compute-0 podman[73510]: 2026-01-20 13:55:02.98255236 +0000 UTC m=+0.169620526 container remove 90d2293eb44575bb03049e2f44bc107b543bd756c1d7c10da0c2b6948e882211 (image=quay.io/ceph/ceph:v18, name=clever_aryabhata, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef)
Jan 20 13:55:02 compute-0 systemd[1]: libpod-conmon-90d2293eb44575bb03049e2f44bc107b543bd756c1d7c10da0c2b6948e882211.scope: Deactivated successfully.
Jan 20 13:55:03 compute-0 podman[73543]: 2026-01-20 13:55:03.052151976 +0000 UTC m=+0.047155079 container create 43f88857984318757b7730e0d878e33c05cf961f444ab8a806ee9753f4a905b7 (image=quay.io/ceph/ceph:v18, name=elated_wozniak, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 13:55:03 compute-0 systemd[1]: Started libpod-conmon-43f88857984318757b7730e0d878e33c05cf961f444ab8a806ee9753f4a905b7.scope.
Jan 20 13:55:03 compute-0 systemd[1]: Started libcrun container.
Jan 20 13:55:03 compute-0 podman[73543]: 2026-01-20 13:55:03.108133061 +0000 UTC m=+0.103136205 container init 43f88857984318757b7730e0d878e33c05cf961f444ab8a806ee9753f4a905b7 (image=quay.io/ceph/ceph:v18, name=elated_wozniak, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 13:55:03 compute-0 podman[73543]: 2026-01-20 13:55:03.114133675 +0000 UTC m=+0.109136788 container start 43f88857984318757b7730e0d878e33c05cf961f444ab8a806ee9753f4a905b7 (image=quay.io/ceph/ceph:v18, name=elated_wozniak, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 20 13:55:03 compute-0 podman[73543]: 2026-01-20 13:55:03.117440704 +0000 UTC m=+0.112443857 container attach 43f88857984318757b7730e0d878e33c05cf961f444ab8a806ee9753f4a905b7 (image=quay.io/ceph/ceph:v18, name=elated_wozniak, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 13:55:03 compute-0 podman[73543]: 2026-01-20 13:55:03.032219545 +0000 UTC m=+0.027222698 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 20 13:55:03 compute-0 elated_wozniak[73559]: AQA3iW9p/3s2CBAAMiRbCEkJbEfK3SyjxpfrQA==
Jan 20 13:55:03 compute-0 systemd[1]: libpod-43f88857984318757b7730e0d878e33c05cf961f444ab8a806ee9753f4a905b7.scope: Deactivated successfully.
Jan 20 13:55:03 compute-0 podman[73543]: 2026-01-20 13:55:03.141395843 +0000 UTC m=+0.136398936 container died 43f88857984318757b7730e0d878e33c05cf961f444ab8a806ee9753f4a905b7 (image=quay.io/ceph/ceph:v18, name=elated_wozniak, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 13:55:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-f0f45d52b2838d293b160eaf3c3352826dd2c1006ac36b411c1ba4839e005b86-merged.mount: Deactivated successfully.
Jan 20 13:55:03 compute-0 podman[73543]: 2026-01-20 13:55:03.176997808 +0000 UTC m=+0.172000921 container remove 43f88857984318757b7730e0d878e33c05cf961f444ab8a806ee9753f4a905b7 (image=quay.io/ceph/ceph:v18, name=elated_wozniak, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 20 13:55:03 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 20 13:55:03 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 20 13:55:03 compute-0 systemd[1]: libpod-conmon-43f88857984318757b7730e0d878e33c05cf961f444ab8a806ee9753f4a905b7.scope: Deactivated successfully.
Jan 20 13:55:03 compute-0 podman[73578]: 2026-01-20 13:55:03.233160009 +0000 UTC m=+0.036824539 container create 8c9652c946d63523cc695402ccb719ab6b80f52b811de8ea7d36dee4e88a05c6 (image=quay.io/ceph/ceph:v18, name=fervent_hypatia, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3)
Jan 20 13:55:03 compute-0 systemd[1]: Started libpod-conmon-8c9652c946d63523cc695402ccb719ab6b80f52b811de8ea7d36dee4e88a05c6.scope.
Jan 20 13:55:03 compute-0 systemd[1]: Started libcrun container.
Jan 20 13:55:03 compute-0 podman[73578]: 2026-01-20 13:55:03.289279809 +0000 UTC m=+0.092944379 container init 8c9652c946d63523cc695402ccb719ab6b80f52b811de8ea7d36dee4e88a05c6 (image=quay.io/ceph/ceph:v18, name=fervent_hypatia, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 13:55:03 compute-0 podman[73578]: 2026-01-20 13:55:03.293703359 +0000 UTC m=+0.097367899 container start 8c9652c946d63523cc695402ccb719ab6b80f52b811de8ea7d36dee4e88a05c6 (image=quay.io/ceph/ceph:v18, name=fervent_hypatia, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 13:55:03 compute-0 podman[73578]: 2026-01-20 13:55:03.29709526 +0000 UTC m=+0.100759800 container attach 8c9652c946d63523cc695402ccb719ab6b80f52b811de8ea7d36dee4e88a05c6 (image=quay.io/ceph/ceph:v18, name=fervent_hypatia, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 20 13:55:03 compute-0 podman[73578]: 2026-01-20 13:55:03.215803238 +0000 UTC m=+0.019467818 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 20 13:55:03 compute-0 fervent_hypatia[73594]: AQA3iW9piOCsEhAA0+ESec/Mftx4mqS0uh4R4w==
Jan 20 13:55:03 compute-0 systemd[1]: libpod-8c9652c946d63523cc695402ccb719ab6b80f52b811de8ea7d36dee4e88a05c6.scope: Deactivated successfully.
Jan 20 13:55:03 compute-0 podman[73578]: 2026-01-20 13:55:03.31627729 +0000 UTC m=+0.119941820 container died 8c9652c946d63523cc695402ccb719ab6b80f52b811de8ea7d36dee4e88a05c6 (image=quay.io/ceph/ceph:v18, name=fervent_hypatia, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 13:55:03 compute-0 podman[73578]: 2026-01-20 13:55:03.361541426 +0000 UTC m=+0.165205956 container remove 8c9652c946d63523cc695402ccb719ab6b80f52b811de8ea7d36dee4e88a05c6 (image=quay.io/ceph/ceph:v18, name=fervent_hypatia, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 20 13:55:03 compute-0 systemd[1]: libpod-conmon-8c9652c946d63523cc695402ccb719ab6b80f52b811de8ea7d36dee4e88a05c6.scope: Deactivated successfully.
Jan 20 13:55:03 compute-0 podman[73613]: 2026-01-20 13:55:03.412935128 +0000 UTC m=+0.033327914 container create 8feda4193ada0cdb93180e4adab6ca8b67beaae8dccc1c15e619fd2bc9240ca6 (image=quay.io/ceph/ceph:v18, name=tender_colden, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 13:55:03 compute-0 systemd[1]: Started libpod-conmon-8feda4193ada0cdb93180e4adab6ca8b67beaae8dccc1c15e619fd2bc9240ca6.scope.
Jan 20 13:55:03 compute-0 systemd[1]: Started libcrun container.
Jan 20 13:55:03 compute-0 podman[73613]: 2026-01-20 13:55:03.458808081 +0000 UTC m=+0.079200927 container init 8feda4193ada0cdb93180e4adab6ca8b67beaae8dccc1c15e619fd2bc9240ca6 (image=quay.io/ceph/ceph:v18, name=tender_colden, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 13:55:03 compute-0 podman[73613]: 2026-01-20 13:55:03.464889956 +0000 UTC m=+0.085282742 container start 8feda4193ada0cdb93180e4adab6ca8b67beaae8dccc1c15e619fd2bc9240ca6 (image=quay.io/ceph/ceph:v18, name=tender_colden, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 13:55:03 compute-0 podman[73613]: 2026-01-20 13:55:03.468048572 +0000 UTC m=+0.088441378 container attach 8feda4193ada0cdb93180e4adab6ca8b67beaae8dccc1c15e619fd2bc9240ca6 (image=quay.io/ceph/ceph:v18, name=tender_colden, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507)
Jan 20 13:55:03 compute-0 tender_colden[73629]: AQA3iW9pZBe+HBAAYk5amoGW4SLIjop+ftG99A==
Jan 20 13:55:03 compute-0 systemd[1]: libpod-8feda4193ada0cdb93180e4adab6ca8b67beaae8dccc1c15e619fd2bc9240ca6.scope: Deactivated successfully.
Jan 20 13:55:03 compute-0 podman[73613]: 2026-01-20 13:55:03.485011392 +0000 UTC m=+0.105404188 container died 8feda4193ada0cdb93180e4adab6ca8b67beaae8dccc1c15e619fd2bc9240ca6 (image=quay.io/ceph/ceph:v18, name=tender_colden, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 13:55:03 compute-0 podman[73613]: 2026-01-20 13:55:03.398883908 +0000 UTC m=+0.019276694 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 20 13:55:03 compute-0 podman[73613]: 2026-01-20 13:55:03.514275284 +0000 UTC m=+0.134668070 container remove 8feda4193ada0cdb93180e4adab6ca8b67beaae8dccc1c15e619fd2bc9240ca6 (image=quay.io/ceph/ceph:v18, name=tender_colden, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 20 13:55:03 compute-0 systemd[1]: libpod-conmon-8feda4193ada0cdb93180e4adab6ca8b67beaae8dccc1c15e619fd2bc9240ca6.scope: Deactivated successfully.
Jan 20 13:55:03 compute-0 podman[73648]: 2026-01-20 13:55:03.569321265 +0000 UTC m=+0.037339533 container create f1e8aead53cdf2aa7ea75bc583d051a487b7c3432ae85fea001199a4dcba6ccd (image=quay.io/ceph/ceph:v18, name=wizardly_agnesi, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 13:55:03 compute-0 systemd[1]: Started libpod-conmon-f1e8aead53cdf2aa7ea75bc583d051a487b7c3432ae85fea001199a4dcba6ccd.scope.
Jan 20 13:55:03 compute-0 systemd[1]: Started libcrun container.
Jan 20 13:55:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdd9946c3d1d0009fca16eb38532006e80a873b9bcf65bc95d93389edc8a7df3/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Jan 20 13:55:03 compute-0 podman[73648]: 2026-01-20 13:55:03.630251065 +0000 UTC m=+0.098269353 container init f1e8aead53cdf2aa7ea75bc583d051a487b7c3432ae85fea001199a4dcba6ccd (image=quay.io/ceph/ceph:v18, name=wizardly_agnesi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 20 13:55:03 compute-0 podman[73648]: 2026-01-20 13:55:03.635168039 +0000 UTC m=+0.103186307 container start f1e8aead53cdf2aa7ea75bc583d051a487b7c3432ae85fea001199a4dcba6ccd (image=quay.io/ceph/ceph:v18, name=wizardly_agnesi, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 13:55:03 compute-0 podman[73648]: 2026-01-20 13:55:03.638203571 +0000 UTC m=+0.106221859 container attach f1e8aead53cdf2aa7ea75bc583d051a487b7c3432ae85fea001199a4dcba6ccd (image=quay.io/ceph/ceph:v18, name=wizardly_agnesi, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 20 13:55:03 compute-0 podman[73648]: 2026-01-20 13:55:03.553218258 +0000 UTC m=+0.021236556 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 20 13:55:03 compute-0 wizardly_agnesi[73665]: /usr/bin/monmaptool: monmap file /tmp/monmap
Jan 20 13:55:03 compute-0 wizardly_agnesi[73665]: setting min_mon_release = pacific
Jan 20 13:55:03 compute-0 wizardly_agnesi[73665]: /usr/bin/monmaptool: set fsid to e399cf45-e6b6-5393-99f1-75c601d3f188
Jan 20 13:55:03 compute-0 wizardly_agnesi[73665]: /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
Jan 20 13:55:03 compute-0 systemd[1]: libpod-f1e8aead53cdf2aa7ea75bc583d051a487b7c3432ae85fea001199a4dcba6ccd.scope: Deactivated successfully.
Jan 20 13:55:03 compute-0 podman[73648]: 2026-01-20 13:55:03.673502697 +0000 UTC m=+0.141520965 container died f1e8aead53cdf2aa7ea75bc583d051a487b7c3432ae85fea001199a4dcba6ccd (image=quay.io/ceph/ceph:v18, name=wizardly_agnesi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 20 13:55:03 compute-0 podman[73648]: 2026-01-20 13:55:03.699251585 +0000 UTC m=+0.167269853 container remove f1e8aead53cdf2aa7ea75bc583d051a487b7c3432ae85fea001199a4dcba6ccd (image=quay.io/ceph/ceph:v18, name=wizardly_agnesi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 13:55:03 compute-0 systemd[1]: libpod-conmon-f1e8aead53cdf2aa7ea75bc583d051a487b7c3432ae85fea001199a4dcba6ccd.scope: Deactivated successfully.
Jan 20 13:55:03 compute-0 podman[73683]: 2026-01-20 13:55:03.759456946 +0000 UTC m=+0.038290269 container create 8eedfa3702aafeb415769878b416d207e0a41db40b2db425c27d9fb691654142 (image=quay.io/ceph/ceph:v18, name=sleepy_zhukovsky, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 20 13:55:03 compute-0 systemd[1]: Started libpod-conmon-8eedfa3702aafeb415769878b416d207e0a41db40b2db425c27d9fb691654142.scope.
Jan 20 13:55:03 compute-0 systemd[1]: Started libcrun container.
Jan 20 13:55:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46c14d5e0b49e862000ed7e5a0e5444400b519b3085e8504c42c7ff8fb255917/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 13:55:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46c14d5e0b49e862000ed7e5a0e5444400b519b3085e8504c42c7ff8fb255917/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Jan 20 13:55:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46c14d5e0b49e862000ed7e5a0e5444400b519b3085e8504c42c7ff8fb255917/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 13:55:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46c14d5e0b49e862000ed7e5a0e5444400b519b3085e8504c42c7ff8fb255917/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 20 13:55:03 compute-0 podman[73683]: 2026-01-20 13:55:03.812904534 +0000 UTC m=+0.091737857 container init 8eedfa3702aafeb415769878b416d207e0a41db40b2db425c27d9fb691654142 (image=quay.io/ceph/ceph:v18, name=sleepy_zhukovsky, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 20 13:55:03 compute-0 podman[73683]: 2026-01-20 13:55:03.818425313 +0000 UTC m=+0.097258636 container start 8eedfa3702aafeb415769878b416d207e0a41db40b2db425c27d9fb691654142 (image=quay.io/ceph/ceph:v18, name=sleepy_zhukovsky, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 20 13:55:03 compute-0 podman[73683]: 2026-01-20 13:55:03.82273946 +0000 UTC m=+0.101572773 container attach 8eedfa3702aafeb415769878b416d207e0a41db40b2db425c27d9fb691654142 (image=quay.io/ceph/ceph:v18, name=sleepy_zhukovsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 20 13:55:03 compute-0 podman[73683]: 2026-01-20 13:55:03.74521324 +0000 UTC m=+0.024046583 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 20 13:55:03 compute-0 systemd[1]: libpod-8eedfa3702aafeb415769878b416d207e0a41db40b2db425c27d9fb691654142.scope: Deactivated successfully.
Jan 20 13:55:03 compute-0 podman[73683]: 2026-01-20 13:55:03.880069883 +0000 UTC m=+0.158903196 container died 8eedfa3702aafeb415769878b416d207e0a41db40b2db425c27d9fb691654142 (image=quay.io/ceph/ceph:v18, name=sleepy_zhukovsky, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 13:55:03 compute-0 podman[73683]: 2026-01-20 13:55:03.909234934 +0000 UTC m=+0.188068257 container remove 8eedfa3702aafeb415769878b416d207e0a41db40b2db425c27d9fb691654142 (image=quay.io/ceph/ceph:v18, name=sleepy_zhukovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 13:55:03 compute-0 systemd[1]: libpod-conmon-8eedfa3702aafeb415769878b416d207e0a41db40b2db425c27d9fb691654142.scope: Deactivated successfully.
Jan 20 13:55:03 compute-0 systemd[1]: Reloading.
Jan 20 13:55:04 compute-0 systemd-rc-local-generator[73770]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 13:55:04 compute-0 systemd-sysv-generator[73773]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 13:55:04 compute-0 systemd[1]: Reloading.
Jan 20 13:55:04 compute-0 systemd-rc-local-generator[73806]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 13:55:04 compute-0 systemd-sysv-generator[73811]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 13:55:04 compute-0 systemd[1]: Reached target All Ceph clusters and services.
Jan 20 13:55:04 compute-0 systemd[1]: Reloading.
Jan 20 13:55:04 compute-0 systemd-rc-local-generator[73845]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 13:55:04 compute-0 systemd-sysv-generator[73849]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 13:55:04 compute-0 systemd[1]: Reached target Ceph cluster e399cf45-e6b6-5393-99f1-75c601d3f188.
Jan 20 13:55:04 compute-0 systemd[1]: Reloading.
Jan 20 13:55:04 compute-0 systemd-sysv-generator[73885]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 13:55:04 compute-0 systemd-rc-local-generator[73881]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 13:55:04 compute-0 systemd[1]: Reloading.
Jan 20 13:55:04 compute-0 sshd-session[73472]: Connection closed by authenticating user root 159.223.5.14 port 40628 [preauth]
Jan 20 13:55:04 compute-0 systemd-rc-local-generator[73925]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 13:55:04 compute-0 systemd-sysv-generator[73928]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 13:55:05 compute-0 systemd[1]: Created slice Slice /system/ceph-e399cf45-e6b6-5393-99f1-75c601d3f188.
Jan 20 13:55:05 compute-0 systemd[1]: Reached target System Time Set.
Jan 20 13:55:05 compute-0 systemd[1]: Reached target System Time Synchronized.
Jan 20 13:55:05 compute-0 systemd[1]: Starting Ceph mon.compute-0 for e399cf45-e6b6-5393-99f1-75c601d3f188...
Jan 20 13:55:05 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 20 13:55:05 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 20 13:55:05 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 20 13:55:05 compute-0 podman[73982]: 2026-01-20 13:55:05.225252534 +0000 UTC m=+0.035919485 container create c71e300066dcc364b75c6d61fa272e832d884f91e3c6c6f512339bab6daca4c3 (image=quay.io/ceph/ceph:v18, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mon-compute-0, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 13:55:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bba0836534ca822106374c98525e10bf02d841453a73f8d23c18d7baa90ff1e5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 13:55:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bba0836534ca822106374c98525e10bf02d841453a73f8d23c18d7baa90ff1e5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 13:55:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bba0836534ca822106374c98525e10bf02d841453a73f8d23c18d7baa90ff1e5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 13:55:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bba0836534ca822106374c98525e10bf02d841453a73f8d23c18d7baa90ff1e5/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 20 13:55:05 compute-0 podman[73982]: 2026-01-20 13:55:05.278963838 +0000 UTC m=+0.089630819 container init c71e300066dcc364b75c6d61fa272e832d884f91e3c6c6f512339bab6daca4c3 (image=quay.io/ceph/ceph:v18, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mon-compute-0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 20 13:55:05 compute-0 podman[73982]: 2026-01-20 13:55:05.286906534 +0000 UTC m=+0.097573485 container start c71e300066dcc364b75c6d61fa272e832d884f91e3c6c6f512339bab6daca4c3 (image=quay.io/ceph/ceph:v18, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 20 13:55:05 compute-0 bash[73982]: c71e300066dcc364b75c6d61fa272e832d884f91e3c6c6f512339bab6daca4c3
Jan 20 13:55:05 compute-0 podman[73982]: 2026-01-20 13:55:05.210007101 +0000 UTC m=+0.020674062 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 20 13:55:05 compute-0 systemd[1]: Started Ceph mon.compute-0 for e399cf45-e6b6-5393-99f1-75c601d3f188.
Jan 20 13:55:05 compute-0 ceph-mon[74002]: set uid:gid to 167:167 (ceph:ceph)
Jan 20 13:55:05 compute-0 ceph-mon[74002]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mon, pid 2
Jan 20 13:55:05 compute-0 ceph-mon[74002]: pidfile_write: ignore empty --pid-file
Jan 20 13:55:05 compute-0 ceph-mon[74002]: load: jerasure load: lrc 
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb: RocksDB version: 7.9.2
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb: Git sha 0
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb: Compile date 2025-05-06 23:30:25
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb: DB SUMMARY
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb: DB Session ID:  H94TODKW9DUZ4GQ200BS
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb: CURRENT file:  CURRENT
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb: IDENTITY file:  IDENTITY
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 0, files: 
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000004.log size: 807 ; 
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:                         Options.error_if_exists: 0
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:                       Options.create_if_missing: 0
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:                         Options.paranoid_checks: 1
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:                                     Options.env: 0x55e52de65c40
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:                                      Options.fs: PosixFileSystem
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:                                Options.info_log: 0x55e52f358ec0
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:                Options.max_file_opening_threads: 16
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:                              Options.statistics: (nil)
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:                               Options.use_fsync: 0
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:                       Options.max_log_file_size: 0
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:                         Options.allow_fallocate: 1
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:                        Options.use_direct_reads: 0
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:          Options.create_missing_column_families: 0
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:                              Options.db_log_dir: 
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:                                 Options.wal_dir: 
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:                   Options.advise_random_on_open: 1
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:                    Options.write_buffer_manager: 0x55e52f368b40
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:                            Options.rate_limiter: (nil)
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:                  Options.unordered_write: 0
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:                               Options.row_cache: None
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:                              Options.wal_filter: None
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:             Options.allow_ingest_behind: 0
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:             Options.two_write_queues: 0
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:             Options.manual_wal_flush: 0
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:             Options.wal_compression: 0
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:             Options.atomic_flush: 0
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:                 Options.log_readahead_size: 0
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:             Options.allow_data_in_errors: 0
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:             Options.db_host_id: __hostname__
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:             Options.max_background_jobs: 2
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:             Options.max_background_compactions: -1
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:             Options.max_subcompactions: 1
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:             Options.max_total_wal_size: 0
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:                          Options.max_open_files: -1
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:                          Options.bytes_per_sync: 0
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:       Options.compaction_readahead_size: 0
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:                  Options.max_background_flushes: -1
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb: Compression algorithms supported:
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:         kZSTD supported: 0
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:         kXpressCompression supported: 0
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:         kBZip2Compression supported: 0
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:         kZSTDNotFinalCompression supported: 0
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:         kLZ4Compression supported: 1
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:         kZlibCompression supported: 1
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:         kLZ4HCCompression supported: 1
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:         kSnappyCompression supported: 1
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:           Options.merge_operator: 
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:        Options.compaction_filter: None
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55e52f358aa0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55e52f3511f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:        Options.write_buffer_size: 33554432
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:  Options.max_write_buffer_number: 2
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:          Options.compression: NoCompression
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:             Options.num_levels: 7
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:                           Options.bloom_locality: 0
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:                               Options.ttl: 2592000
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:                       Options.enable_blob_files: false
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:                           Options.min_blob_size: 0
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 1688fb6f-f397-4aac-a0e8-874ff91d97a7
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768917305325542, "job": 1, "event": "recovery_started", "wal_files": [4]}
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768917305326953, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1944, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 819, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 696, "raw_average_value_size": 139, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768917305, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1688fb6f-f397-4aac-a0e8-874ff91d97a7", "db_session_id": "H94TODKW9DUZ4GQ200BS", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768917305327074, "job": 1, "event": "recovery_finished"}
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55e52f37ae00
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb: DB pointer 0x55e52f404000
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 20 13:55:05 compute-0 ceph-mon[74002]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.90 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.3      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      1/0    1.90 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.3      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.3      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.3      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.17 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.17 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e52f3511f0#2 capacity: 512.00 MB usage: 1.17 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(2,0.95 KB,0.000181794%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 20 13:55:05 compute-0 ceph-mon[74002]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid e399cf45-e6b6-5393-99f1-75c601d3f188
Jan 20 13:55:05 compute-0 ceph-mon[74002]: mon.compute-0@-1(???) e0 preinit fsid e399cf45-e6b6-5393-99f1-75c601d3f188
Jan 20 13:55:05 compute-0 ceph-mon[74002]: mon.compute-0@-1(probing) e0  my rank is now 0 (was -1)
Jan 20 13:55:05 compute-0 ceph-mon[74002]: mon.compute-0@0(probing) e0 win_standalone_election
Jan 20 13:55:05 compute-0 ceph-mon[74002]: paxos.0).electionLogic(0) init, first boot, initializing epoch at 1 
Jan 20 13:55:05 compute-0 ceph-mon[74002]: mon.compute-0@0(electing) e0 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 20 13:55:05 compute-0 ceph-mon[74002]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 20 13:55:05 compute-0 ceph-mon[74002]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Jan 20 13:55:05 compute-0 ceph-mon[74002]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Jan 20 13:55:05 compute-0 ceph-mon[74002]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Jan 20 13:55:05 compute-0 ceph-mon[74002]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Jan 20 13:55:05 compute-0 ceph-mon[74002]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 20 13:55:05 compute-0 ceph-mon[74002]: mon.compute-0@0(leader) e0 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Jan 20 13:55:05 compute-0 ceph-mon[74002]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Jan 20 13:55:05 compute-0 ceph-mon[74002]: mon.compute-0@0(probing) e1 win_standalone_election
Jan 20 13:55:05 compute-0 ceph-mon[74002]: paxos.0).electionLogic(2) init, last seen epoch 2
Jan 20 13:55:05 compute-0 ceph-mon[74002]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 20 13:55:05 compute-0 ceph-mon[74002]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 20 13:55:05 compute-0 ceph-mon[74002]: log_channel(cluster) log [DBG] : monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Jan 20 13:55:05 compute-0 ceph-mon[74002]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 20 13:55:05 compute-0 ceph-mon[74002]: mgrc update_daemon_metadata mon.compute-0 metadata {addrs=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),ceph_version_short=18.2.7,ceph_version_when_created=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=compute-0,container_image=quay.io/ceph/ceph:v18,cpu=AMD EPYC-Rome Processor,created_at=2026-01-20T13:55:03.848243Z,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=compute-0,kernel_description=#1 SMP PREEMPT_DYNAMIC Fri Jan 16 09:19:22 UTC 2026,kernel_version=5.14.0-661.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864308,os=Linux}
Jan 20 13:55:05 compute-0 ceph-mon[74002]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Jan 20 13:55:05 compute-0 ceph-mon[74002]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Jan 20 13:55:05 compute-0 ceph-mon[74002]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Jan 20 13:55:05 compute-0 ceph-mon[74002]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Jan 20 13:55:05 compute-0 ceph-mon[74002]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 20 13:55:05 compute-0 ceph-mon[74002]: mon.compute-0@0(leader) e1 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout}
Jan 20 13:55:05 compute-0 ceph-mon[74002]: mon.compute-0@0(leader).mds e1 new map
Jan 20 13:55:05 compute-0 ceph-mon[74002]: mon.compute-0@0(leader).mds e1 print_map
                                           e1
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: -1
                                            
                                           No filesystems configured
Jan 20 13:55:05 compute-0 ceph-mon[74002]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Jan 20 13:55:05 compute-0 ceph-mon[74002]: log_channel(cluster) log [DBG] : fsmap 
Jan 20 13:55:05 compute-0 ceph-mon[74002]: mon.compute-0@0(leader).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Jan 20 13:55:05 compute-0 ceph-mon[74002]: mon.compute-0@0(leader).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Jan 20 13:55:05 compute-0 ceph-mon[74002]: mon.compute-0@0(leader).osd e1 e1: 0 total, 0 up, 0 in
Jan 20 13:55:05 compute-0 ceph-mon[74002]: mon.compute-0@0(leader).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Jan 20 13:55:05 compute-0 ceph-mon[74002]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 20 13:55:05 compute-0 ceph-mon[74002]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 20 13:55:05 compute-0 ceph-mon[74002]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 20 13:55:05 compute-0 ceph-mon[74002]: mkfs e399cf45-e6b6-5393-99f1-75c601d3f188
Jan 20 13:55:05 compute-0 ceph-mon[74002]: mon.compute-0@0(leader).paxosservice(auth 1..1) refresh upgraded, format 0 -> 3
Jan 20 13:55:05 compute-0 ceph-mon[74002]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Jan 20 13:55:05 compute-0 podman[74003]: 2026-01-20 13:55:05.374530117 +0000 UTC m=+0.056618945 container create dfbde3e5fd861e424ba37c66fe7b1f4346313bcb4b422fd04df11f65c6ed7efb (image=quay.io/ceph/ceph:v18, name=pensive_allen, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Jan 20 13:55:05 compute-0 ceph-mon[74002]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Jan 20 13:55:05 compute-0 ceph-mon[74002]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 20 13:55:05 compute-0 systemd[1]: Started libpod-conmon-dfbde3e5fd861e424ba37c66fe7b1f4346313bcb4b422fd04df11f65c6ed7efb.scope.
Jan 20 13:55:05 compute-0 systemd[1]: Started libcrun container.
Jan 20 13:55:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf25befca59823e35f40e6682da398d31db52ac34e6d9be9af4e480340123620/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 13:55:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf25befca59823e35f40e6682da398d31db52ac34e6d9be9af4e480340123620/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 13:55:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf25befca59823e35f40e6682da398d31db52ac34e6d9be9af4e480340123620/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 20 13:55:05 compute-0 podman[74003]: 2026-01-20 13:55:05.351977846 +0000 UTC m=+0.034066754 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 20 13:55:05 compute-0 podman[74003]: 2026-01-20 13:55:05.45103742 +0000 UTC m=+0.133126258 container init dfbde3e5fd861e424ba37c66fe7b1f4346313bcb4b422fd04df11f65c6ed7efb (image=quay.io/ceph/ceph:v18, name=pensive_allen, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 13:55:05 compute-0 podman[74003]: 2026-01-20 13:55:05.457414552 +0000 UTC m=+0.139503380 container start dfbde3e5fd861e424ba37c66fe7b1f4346313bcb4b422fd04df11f65c6ed7efb (image=quay.io/ceph/ceph:v18, name=pensive_allen, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 13:55:05 compute-0 podman[74003]: 2026-01-20 13:55:05.46062738 +0000 UTC m=+0.142716228 container attach dfbde3e5fd861e424ba37c66fe7b1f4346313bcb4b422fd04df11f65c6ed7efb (image=quay.io/ceph/ceph:v18, name=pensive_allen, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 13:55:05 compute-0 ceph-mon[74002]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Jan 20 13:55:05 compute-0 ceph-mon[74002]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/835626863' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Jan 20 13:55:05 compute-0 pensive_allen[74057]:   cluster:
Jan 20 13:55:05 compute-0 pensive_allen[74057]:     id:     e399cf45-e6b6-5393-99f1-75c601d3f188
Jan 20 13:55:05 compute-0 pensive_allen[74057]:     health: HEALTH_OK
Jan 20 13:55:05 compute-0 pensive_allen[74057]:  
Jan 20 13:55:05 compute-0 pensive_allen[74057]:   services:
Jan 20 13:55:05 compute-0 pensive_allen[74057]:     mon: 1 daemons, quorum compute-0 (age 0.512801s)
Jan 20 13:55:05 compute-0 pensive_allen[74057]:     mgr: no daemons active
Jan 20 13:55:05 compute-0 pensive_allen[74057]:     osd: 0 osds: 0 up, 0 in
Jan 20 13:55:05 compute-0 pensive_allen[74057]:  
Jan 20 13:55:05 compute-0 pensive_allen[74057]:   data:
Jan 20 13:55:05 compute-0 pensive_allen[74057]:     pools:   0 pools, 0 pgs
Jan 20 13:55:05 compute-0 pensive_allen[74057]:     objects: 0 objects, 0 B
Jan 20 13:55:05 compute-0 pensive_allen[74057]:     usage:   0 B used, 0 B / 0 B avail
Jan 20 13:55:05 compute-0 pensive_allen[74057]:     pgs:     
Jan 20 13:55:05 compute-0 pensive_allen[74057]:  
Jan 20 13:55:05 compute-0 systemd[1]: libpod-dfbde3e5fd861e424ba37c66fe7b1f4346313bcb4b422fd04df11f65c6ed7efb.scope: Deactivated successfully.
Jan 20 13:55:05 compute-0 podman[74003]: 2026-01-20 13:55:05.882930489 +0000 UTC m=+0.565019317 container died dfbde3e5fd861e424ba37c66fe7b1f4346313bcb4b422fd04df11f65c6ed7efb (image=quay.io/ceph/ceph:v18, name=pensive_allen, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 13:55:05 compute-0 podman[74003]: 2026-01-20 13:55:05.921291469 +0000 UTC m=+0.603380297 container remove dfbde3e5fd861e424ba37c66fe7b1f4346313bcb4b422fd04df11f65c6ed7efb (image=quay.io/ceph/ceph:v18, name=pensive_allen, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 20 13:55:05 compute-0 systemd[1]: libpod-conmon-dfbde3e5fd861e424ba37c66fe7b1f4346313bcb4b422fd04df11f65c6ed7efb.scope: Deactivated successfully.
Jan 20 13:55:05 compute-0 podman[74096]: 2026-01-20 13:55:05.979808413 +0000 UTC m=+0.039505221 container create 8d9553f5daaa43f2bce1f91a31d81ec3bb390eafa78b9b535b643b5073bd404d (image=quay.io/ceph/ceph:v18, name=frosty_chatterjee, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 13:55:06 compute-0 systemd[1]: Started libpod-conmon-8d9553f5daaa43f2bce1f91a31d81ec3bb390eafa78b9b535b643b5073bd404d.scope.
Jan 20 13:55:06 compute-0 systemd[1]: Started libcrun container.
Jan 20 13:55:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42b702714f65f0c48b68d45aac4050ab5a51733aefc24cbf666dc37993148c94/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 13:55:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42b702714f65f0c48b68d45aac4050ab5a51733aefc24cbf666dc37993148c94/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 13:55:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42b702714f65f0c48b68d45aac4050ab5a51733aefc24cbf666dc37993148c94/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 13:55:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42b702714f65f0c48b68d45aac4050ab5a51733aefc24cbf666dc37993148c94/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 20 13:55:06 compute-0 podman[74096]: 2026-01-20 13:55:06.042651606 +0000 UTC m=+0.102348444 container init 8d9553f5daaa43f2bce1f91a31d81ec3bb390eafa78b9b535b643b5073bd404d (image=quay.io/ceph/ceph:v18, name=frosty_chatterjee, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True)
Jan 20 13:55:06 compute-0 podman[74096]: 2026-01-20 13:55:06.050294603 +0000 UTC m=+0.109991411 container start 8d9553f5daaa43f2bce1f91a31d81ec3bb390eafa78b9b535b643b5073bd404d (image=quay.io/ceph/ceph:v18, name=frosty_chatterjee, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 13:55:06 compute-0 podman[74096]: 2026-01-20 13:55:06.053661934 +0000 UTC m=+0.113358742 container attach 8d9553f5daaa43f2bce1f91a31d81ec3bb390eafa78b9b535b643b5073bd404d (image=quay.io/ceph/ceph:v18, name=frosty_chatterjee, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 13:55:06 compute-0 podman[74096]: 2026-01-20 13:55:05.964047296 +0000 UTC m=+0.023744114 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 20 13:55:06 compute-0 ceph-mon[74002]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 20 13:55:06 compute-0 ceph-mon[74002]: monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Jan 20 13:55:06 compute-0 ceph-mon[74002]: fsmap 
Jan 20 13:55:06 compute-0 ceph-mon[74002]: osdmap e1: 0 total, 0 up, 0 in
Jan 20 13:55:06 compute-0 ceph-mon[74002]: mgrmap e1: no daemons active
Jan 20 13:55:06 compute-0 ceph-mon[74002]: from='client.? 192.168.122.100:0/835626863' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Jan 20 13:55:06 compute-0 ceph-mon[74002]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Jan 20 13:55:06 compute-0 ceph-mon[74002]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/649563350' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Jan 20 13:55:06 compute-0 ceph-mon[74002]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/649563350' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Jan 20 13:55:06 compute-0 frosty_chatterjee[74112]: 
Jan 20 13:55:06 compute-0 frosty_chatterjee[74112]: [global]
Jan 20 13:55:06 compute-0 frosty_chatterjee[74112]:         fsid = e399cf45-e6b6-5393-99f1-75c601d3f188
Jan 20 13:55:06 compute-0 frosty_chatterjee[74112]:         mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Jan 20 13:55:06 compute-0 systemd[1]: libpod-8d9553f5daaa43f2bce1f91a31d81ec3bb390eafa78b9b535b643b5073bd404d.scope: Deactivated successfully.
Jan 20 13:55:06 compute-0 podman[74096]: 2026-01-20 13:55:06.426724201 +0000 UTC m=+0.486421009 container died 8d9553f5daaa43f2bce1f91a31d81ec3bb390eafa78b9b535b643b5073bd404d (image=quay.io/ceph/ceph:v18, name=frosty_chatterjee, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 13:55:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-42b702714f65f0c48b68d45aac4050ab5a51733aefc24cbf666dc37993148c94-merged.mount: Deactivated successfully.
Jan 20 13:55:06 compute-0 podman[74096]: 2026-01-20 13:55:06.466113428 +0000 UTC m=+0.525810236 container remove 8d9553f5daaa43f2bce1f91a31d81ec3bb390eafa78b9b535b643b5073bd404d (image=quay.io/ceph/ceph:v18, name=frosty_chatterjee, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 20 13:55:06 compute-0 systemd[1]: libpod-conmon-8d9553f5daaa43f2bce1f91a31d81ec3bb390eafa78b9b535b643b5073bd404d.scope: Deactivated successfully.
Jan 20 13:55:06 compute-0 podman[74152]: 2026-01-20 13:55:06.519282128 +0000 UTC m=+0.033674343 container create c46e892d36ed3e9619a4a92543656ae3e7912f142f62cda5169ab70d581b4af2 (image=quay.io/ceph/ceph:v18, name=intelligent_galileo, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 13:55:06 compute-0 systemd[1]: Started libpod-conmon-c46e892d36ed3e9619a4a92543656ae3e7912f142f62cda5169ab70d581b4af2.scope.
Jan 20 13:55:06 compute-0 systemd[1]: Started libcrun container.
Jan 20 13:55:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9dc60c3d6140b58047489e1ff6fa6c1ba85efc02217274ae2022f2590dd26e3e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 13:55:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9dc60c3d6140b58047489e1ff6fa6c1ba85efc02217274ae2022f2590dd26e3e/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 13:55:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9dc60c3d6140b58047489e1ff6fa6c1ba85efc02217274ae2022f2590dd26e3e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 13:55:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9dc60c3d6140b58047489e1ff6fa6c1ba85efc02217274ae2022f2590dd26e3e/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 20 13:55:06 compute-0 podman[74152]: 2026-01-20 13:55:06.578470981 +0000 UTC m=+0.092863206 container init c46e892d36ed3e9619a4a92543656ae3e7912f142f62cda5169ab70d581b4af2 (image=quay.io/ceph/ceph:v18, name=intelligent_galileo, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 13:55:06 compute-0 podman[74152]: 2026-01-20 13:55:06.582637994 +0000 UTC m=+0.097030209 container start c46e892d36ed3e9619a4a92543656ae3e7912f142f62cda5169ab70d581b4af2 (image=quay.io/ceph/ceph:v18, name=intelligent_galileo, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 13:55:06 compute-0 podman[74152]: 2026-01-20 13:55:06.585579773 +0000 UTC m=+0.099971988 container attach c46e892d36ed3e9619a4a92543656ae3e7912f142f62cda5169ab70d581b4af2 (image=quay.io/ceph/ceph:v18, name=intelligent_galileo, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 20 13:55:06 compute-0 podman[74152]: 2026-01-20 13:55:06.505161995 +0000 UTC m=+0.019554240 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 20 13:55:06 compute-0 ceph-mon[74002]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 13:55:06 compute-0 ceph-mon[74002]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2248058190' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 13:55:06 compute-0 systemd[1]: libpod-c46e892d36ed3e9619a4a92543656ae3e7912f142f62cda5169ab70d581b4af2.scope: Deactivated successfully.
Jan 20 13:55:06 compute-0 podman[74152]: 2026-01-20 13:55:06.948425853 +0000 UTC m=+0.462818078 container died c46e892d36ed3e9619a4a92543656ae3e7912f142f62cda5169ab70d581b4af2 (image=quay.io/ceph/ceph:v18, name=intelligent_galileo, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 20 13:55:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-9dc60c3d6140b58047489e1ff6fa6c1ba85efc02217274ae2022f2590dd26e3e-merged.mount: Deactivated successfully.
Jan 20 13:55:07 compute-0 podman[74152]: 2026-01-20 13:55:07.008033498 +0000 UTC m=+0.522425713 container remove c46e892d36ed3e9619a4a92543656ae3e7912f142f62cda5169ab70d581b4af2 (image=quay.io/ceph/ceph:v18, name=intelligent_galileo, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 13:55:07 compute-0 systemd[1]: libpod-conmon-c46e892d36ed3e9619a4a92543656ae3e7912f142f62cda5169ab70d581b4af2.scope: Deactivated successfully.
Jan 20 13:55:07 compute-0 systemd[1]: Stopping Ceph mon.compute-0 for e399cf45-e6b6-5393-99f1-75c601d3f188...
Jan 20 13:55:07 compute-0 ceph-mon[74002]: received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Jan 20 13:55:07 compute-0 ceph-mon[74002]: mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Jan 20 13:55:07 compute-0 ceph-mon[74002]: mon.compute-0@0(leader) e1 shutdown
Jan 20 13:55:07 compute-0 ceph-mon[74002]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Jan 20 13:55:07 compute-0 ceph-mon[74002]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Jan 20 13:55:07 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mon-compute-0[73998]: 2026-01-20T13:55:07.197+0000 7f5c569af640 -1 received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Jan 20 13:55:07 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mon-compute-0[73998]: 2026-01-20T13:55:07.197+0000 7f5c569af640 -1 mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Jan 20 13:55:07 compute-0 podman[74234]: 2026-01-20 13:55:07.40090236 +0000 UTC m=+0.262219854 container died c71e300066dcc364b75c6d61fa272e832d884f91e3c6c6f512339bab6daca4c3 (image=quay.io/ceph/ceph:v18, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mon-compute-0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 13:55:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-bba0836534ca822106374c98525e10bf02d841453a73f8d23c18d7baa90ff1e5-merged.mount: Deactivated successfully.
Jan 20 13:55:07 compute-0 podman[74234]: 2026-01-20 13:55:07.435630531 +0000 UTC m=+0.296948015 container remove c71e300066dcc364b75c6d61fa272e832d884f91e3c6c6f512339bab6daca4c3 (image=quay.io/ceph/ceph:v18, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 20 13:55:07 compute-0 bash[74234]: ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mon-compute-0
Jan 20 13:55:07 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 20 13:55:07 compute-0 systemd[1]: ceph-e399cf45-e6b6-5393-99f1-75c601d3f188@mon.compute-0.service: Deactivated successfully.
Jan 20 13:55:07 compute-0 systemd[1]: Stopped Ceph mon.compute-0 for e399cf45-e6b6-5393-99f1-75c601d3f188.
Jan 20 13:55:07 compute-0 systemd[1]: Starting Ceph mon.compute-0 for e399cf45-e6b6-5393-99f1-75c601d3f188...
Jan 20 13:55:07 compute-0 podman[74339]: 2026-01-20 13:55:07.716037577 +0000 UTC m=+0.036982633 container create a602f19ce9ef2d4922e558036fcbd51fac25abd19d28d7df857e5fbe8f959ba3 (image=quay.io/ceph/ceph:v18, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mon-compute-0, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 20 13:55:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3369a1b3283e884e3c28efd6d81c091d1fcd511034a60a5e68dc62eb3650b0d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 13:55:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3369a1b3283e884e3c28efd6d81c091d1fcd511034a60a5e68dc62eb3650b0d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 13:55:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3369a1b3283e884e3c28efd6d81c091d1fcd511034a60a5e68dc62eb3650b0d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 13:55:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3369a1b3283e884e3c28efd6d81c091d1fcd511034a60a5e68dc62eb3650b0d/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 20 13:55:07 compute-0 podman[74339]: 2026-01-20 13:55:07.768776195 +0000 UTC m=+0.089721271 container init a602f19ce9ef2d4922e558036fcbd51fac25abd19d28d7df857e5fbe8f959ba3 (image=quay.io/ceph/ceph:v18, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 20 13:55:07 compute-0 podman[74339]: 2026-01-20 13:55:07.775616641 +0000 UTC m=+0.096561697 container start a602f19ce9ef2d4922e558036fcbd51fac25abd19d28d7df857e5fbe8f959ba3 (image=quay.io/ceph/ceph:v18, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mon-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 13:55:07 compute-0 bash[74339]: a602f19ce9ef2d4922e558036fcbd51fac25abd19d28d7df857e5fbe8f959ba3
Jan 20 13:55:07 compute-0 podman[74339]: 2026-01-20 13:55:07.699615422 +0000 UTC m=+0.020560508 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 20 13:55:07 compute-0 systemd[1]: Started Ceph mon.compute-0 for e399cf45-e6b6-5393-99f1-75c601d3f188.
Jan 20 13:55:07 compute-0 ceph-mon[74360]: set uid:gid to 167:167 (ceph:ceph)
Jan 20 13:55:07 compute-0 ceph-mon[74360]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mon, pid 2
Jan 20 13:55:07 compute-0 ceph-mon[74360]: pidfile_write: ignore empty --pid-file
Jan 20 13:55:07 compute-0 ceph-mon[74360]: load: jerasure load: lrc 
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb: RocksDB version: 7.9.2
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb: Git sha 0
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb: Compile date 2025-05-06 23:30:25
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb: DB SUMMARY
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb: DB Session ID:  5V3N2TVXYZBCXP55EZHK
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb: CURRENT file:  CURRENT
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb: IDENTITY file:  IDENTITY
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb: MANIFEST file:  MANIFEST-000010 size: 179 Bytes
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 1, files: 000008.sst 
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000009.log size: 51604 ; 
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:                         Options.error_if_exists: 0
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:                       Options.create_if_missing: 0
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:                         Options.paranoid_checks: 1
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:                                     Options.env: 0x5576ae494c40
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:                                      Options.fs: PosixFileSystem
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:                                Options.info_log: 0x5576af64d040
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:                Options.max_file_opening_threads: 16
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:                              Options.statistics: (nil)
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:                               Options.use_fsync: 0
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:                       Options.max_log_file_size: 0
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:                         Options.allow_fallocate: 1
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:                        Options.use_direct_reads: 0
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:          Options.create_missing_column_families: 0
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:                              Options.db_log_dir: 
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:                                 Options.wal_dir: 
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:                   Options.advise_random_on_open: 1
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:                    Options.write_buffer_manager: 0x5576af65cb40
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:                            Options.rate_limiter: (nil)
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:                  Options.unordered_write: 0
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:                               Options.row_cache: None
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:                              Options.wal_filter: None
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:             Options.allow_ingest_behind: 0
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:             Options.two_write_queues: 0
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:             Options.manual_wal_flush: 0
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:             Options.wal_compression: 0
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:             Options.atomic_flush: 0
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:                 Options.log_readahead_size: 0
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:             Options.allow_data_in_errors: 0
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:             Options.db_host_id: __hostname__
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:             Options.max_background_jobs: 2
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:             Options.max_background_compactions: -1
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:             Options.max_subcompactions: 1
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:             Options.max_total_wal_size: 0
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:                          Options.max_open_files: -1
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:                          Options.bytes_per_sync: 0
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:       Options.compaction_readahead_size: 0
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:                  Options.max_background_flushes: -1
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb: Compression algorithms supported:
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:         kZSTD supported: 0
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:         kXpressCompression supported: 0
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:         kBZip2Compression supported: 0
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:         kZSTDNotFinalCompression supported: 0
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:         kLZ4Compression supported: 1
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:         kZlibCompression supported: 1
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:         kLZ4HCCompression supported: 1
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:         kSnappyCompression supported: 1
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:           Options.merge_operator: 
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:        Options.compaction_filter: None
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5576af64cc40)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5576af6451f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:        Options.write_buffer_size: 33554432
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:  Options.max_write_buffer_number: 2
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:          Options.compression: NoCompression
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:             Options.num_levels: 7
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:                           Options.bloom_locality: 0
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:                               Options.ttl: 2592000
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:                       Options.enable_blob_files: false
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:                           Options.min_blob_size: 0
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 5
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 1688fb6f-f397-4aac-a0e8-874ff91d97a7
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768917307812071, "job": 1, "event": "recovery_started", "wal_files": [9]}
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768917307816153, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 51378, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 127, "table_properties": {"data_size": 49931, "index_size": 153, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 261, "raw_key_size": 2823, "raw_average_key_size": 30, "raw_value_size": 47663, "raw_average_value_size": 507, "num_data_blocks": 7, "num_entries": 94, "num_filter_entries": 94, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768917307, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1688fb6f-f397-4aac-a0e8-874ff91d97a7", "db_session_id": "5V3N2TVXYZBCXP55EZHK", "orig_file_number": 13, "seqno_to_time_mapping": "N/A"}}
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768917307816249, "job": 1, "event": "recovery_finished"}
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb: [db/version_set.cc:5047] Creating manifest 15
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x5576af66ee00
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb: DB pointer 0x5576af776000
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 20 13:55:07 compute-0 ceph-mon[74360]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0   52.07 KB   0.5      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     14.7      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      2/0   52.07 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     14.7      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     14.7      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     14.7      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 3.74 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 3.74 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5576af6451f0#2 capacity: 512.00 MB usage: 0.77 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 1.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(2,0.42 KB,8.04663e-05%) IndexBlock(2,0.34 KB,6.55651e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 20 13:55:07 compute-0 ceph-mon[74360]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid e399cf45-e6b6-5393-99f1-75c601d3f188
Jan 20 13:55:07 compute-0 ceph-mon[74360]: mon.compute-0@-1(???) e1 preinit fsid e399cf45-e6b6-5393-99f1-75c601d3f188
Jan 20 13:55:07 compute-0 ceph-mon[74360]: mon.compute-0@-1(???).mds e1 new map
Jan 20 13:55:07 compute-0 ceph-mon[74360]: mon.compute-0@-1(???).mds e1 print_map
                                           e1
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: -1
                                            
                                           No filesystems configured
Jan 20 13:55:07 compute-0 ceph-mon[74360]: mon.compute-0@-1(???).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Jan 20 13:55:07 compute-0 ceph-mon[74360]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 20 13:55:07 compute-0 ceph-mon[74360]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 20 13:55:07 compute-0 ceph-mon[74360]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 20 13:55:07 compute-0 ceph-mon[74360]: mon.compute-0@-1(???).paxosservice(auth 1..2) refresh upgraded, format 0 -> 3
Jan 20 13:55:07 compute-0 ceph-mon[74360]: mon.compute-0@-1(probing) e1  my rank is now 0 (was -1)
Jan 20 13:55:07 compute-0 ceph-mon[74360]: mon.compute-0@0(probing) e1 win_standalone_election
Jan 20 13:55:07 compute-0 ceph-mon[74360]: paxos.0).electionLogic(3) init, last seen epoch 3, mid-election, bumping
Jan 20 13:55:07 compute-0 ceph-mon[74360]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 20 13:55:07 compute-0 ceph-mon[74360]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 20 13:55:07 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Jan 20 13:55:07 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 20 13:55:07 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : fsmap 
Jan 20 13:55:07 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Jan 20 13:55:07 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Jan 20 13:55:07 compute-0 podman[74361]: 2026-01-20 13:55:07.844051755 +0000 UTC m=+0.040692704 container create 39136a689d955a1c1a60fa1782e3896db08805ebb06104c17587dd4ec1bdbdaf (image=quay.io/ceph/ceph:v18, name=zealous_heisenberg, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 13:55:07 compute-0 systemd[1]: Started libpod-conmon-39136a689d955a1c1a60fa1782e3896db08805ebb06104c17587dd4ec1bdbdaf.scope.
Jan 20 13:55:07 compute-0 ceph-mon[74360]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 20 13:55:07 compute-0 ceph-mon[74360]: monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Jan 20 13:55:07 compute-0 ceph-mon[74360]: fsmap 
Jan 20 13:55:07 compute-0 ceph-mon[74360]: osdmap e1: 0 total, 0 up, 0 in
Jan 20 13:55:07 compute-0 ceph-mon[74360]: mgrmap e1: no daemons active
Jan 20 13:55:07 compute-0 systemd[1]: Started libcrun container.
Jan 20 13:55:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5174c45e4064fd31bde7b15b5b8e7e65c0b5178fa3439632420e45d0d0f1fad/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 13:55:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5174c45e4064fd31bde7b15b5b8e7e65c0b5178fa3439632420e45d0d0f1fad/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 13:55:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5174c45e4064fd31bde7b15b5b8e7e65c0b5178fa3439632420e45d0d0f1fad/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 13:55:07 compute-0 podman[74361]: 2026-01-20 13:55:07.914822441 +0000 UTC m=+0.111463400 container init 39136a689d955a1c1a60fa1782e3896db08805ebb06104c17587dd4ec1bdbdaf (image=quay.io/ceph/ceph:v18, name=zealous_heisenberg, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef)
Jan 20 13:55:07 compute-0 podman[74361]: 2026-01-20 13:55:07.92179223 +0000 UTC m=+0.118433189 container start 39136a689d955a1c1a60fa1782e3896db08805ebb06104c17587dd4ec1bdbdaf (image=quay.io/ceph/ceph:v18, name=zealous_heisenberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 20 13:55:07 compute-0 podman[74361]: 2026-01-20 13:55:07.829905202 +0000 UTC m=+0.026546171 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 20 13:55:07 compute-0 podman[74361]: 2026-01-20 13:55:07.924943936 +0000 UTC m=+0.121584905 container attach 39136a689d955a1c1a60fa1782e3896db08805ebb06104c17587dd4ec1bdbdaf (image=quay.io/ceph/ceph:v18, name=zealous_heisenberg, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 13:55:08 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=public_network}] v 0) v1
Jan 20 13:55:08 compute-0 systemd[1]: libpod-39136a689d955a1c1a60fa1782e3896db08805ebb06104c17587dd4ec1bdbdaf.scope: Deactivated successfully.
Jan 20 13:55:08 compute-0 podman[74361]: 2026-01-20 13:55:08.318925969 +0000 UTC m=+0.515566928 container died 39136a689d955a1c1a60fa1782e3896db08805ebb06104c17587dd4ec1bdbdaf (image=quay.io/ceph/ceph:v18, name=zealous_heisenberg, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 13:55:08 compute-0 podman[74361]: 2026-01-20 13:55:08.360159826 +0000 UTC m=+0.556800775 container remove 39136a689d955a1c1a60fa1782e3896db08805ebb06104c17587dd4ec1bdbdaf (image=quay.io/ceph/ceph:v18, name=zealous_heisenberg, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 20 13:55:08 compute-0 systemd[1]: libpod-conmon-39136a689d955a1c1a60fa1782e3896db08805ebb06104c17587dd4ec1bdbdaf.scope: Deactivated successfully.
Jan 20 13:55:08 compute-0 podman[74453]: 2026-01-20 13:55:08.41828267 +0000 UTC m=+0.036971152 container create a013278c23dc7f87377bbb41517a9a454a56dec199bcc3109fc9711798ffca8f (image=quay.io/ceph/ceph:v18, name=jovial_gates, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 13:55:08 compute-0 systemd[1]: Started libpod-conmon-a013278c23dc7f87377bbb41517a9a454a56dec199bcc3109fc9711798ffca8f.scope.
Jan 20 13:55:08 compute-0 systemd[1]: Started libcrun container.
Jan 20 13:55:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/706e916b047d501b233202fbae49157deb5946494f4fd33b53b42706f65e94c9/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 13:55:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/706e916b047d501b233202fbae49157deb5946494f4fd33b53b42706f65e94c9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 13:55:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/706e916b047d501b233202fbae49157deb5946494f4fd33b53b42706f65e94c9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 13:55:08 compute-0 podman[74453]: 2026-01-20 13:55:08.495134592 +0000 UTC m=+0.113823084 container init a013278c23dc7f87377bbb41517a9a454a56dec199bcc3109fc9711798ffca8f (image=quay.io/ceph/ceph:v18, name=jovial_gates, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 20 13:55:08 compute-0 podman[74453]: 2026-01-20 13:55:08.401713281 +0000 UTC m=+0.020401763 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 20 13:55:08 compute-0 podman[74453]: 2026-01-20 13:55:08.501434412 +0000 UTC m=+0.120122894 container start a013278c23dc7f87377bbb41517a9a454a56dec199bcc3109fc9711798ffca8f (image=quay.io/ceph/ceph:v18, name=jovial_gates, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 13:55:08 compute-0 podman[74453]: 2026-01-20 13:55:08.505277647 +0000 UTC m=+0.123966139 container attach a013278c23dc7f87377bbb41517a9a454a56dec199bcc3109fc9711798ffca8f (image=quay.io/ceph/ceph:v18, name=jovial_gates, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 20 13:55:08 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=cluster_network}] v 0) v1
Jan 20 13:55:08 compute-0 systemd[1]: libpod-a013278c23dc7f87377bbb41517a9a454a56dec199bcc3109fc9711798ffca8f.scope: Deactivated successfully.
Jan 20 13:55:08 compute-0 podman[74453]: 2026-01-20 13:55:08.918769628 +0000 UTC m=+0.537458110 container died a013278c23dc7f87377bbb41517a9a454a56dec199bcc3109fc9711798ffca8f (image=quay.io/ceph/ceph:v18, name=jovial_gates, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 20 13:55:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-706e916b047d501b233202fbae49157deb5946494f4fd33b53b42706f65e94c9-merged.mount: Deactivated successfully.
Jan 20 13:55:08 compute-0 podman[74453]: 2026-01-20 13:55:08.958003321 +0000 UTC m=+0.576691803 container remove a013278c23dc7f87377bbb41517a9a454a56dec199bcc3109fc9711798ffca8f (image=quay.io/ceph/ceph:v18, name=jovial_gates, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 13:55:08 compute-0 systemd[1]: libpod-conmon-a013278c23dc7f87377bbb41517a9a454a56dec199bcc3109fc9711798ffca8f.scope: Deactivated successfully.
Jan 20 13:55:09 compute-0 systemd[1]: Reloading.
Jan 20 13:55:09 compute-0 systemd-sysv-generator[74537]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 13:55:09 compute-0 systemd-rc-local-generator[74534]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 13:55:09 compute-0 systemd[1]: Reloading.
Jan 20 13:55:09 compute-0 systemd-sysv-generator[74572]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 13:55:09 compute-0 systemd-rc-local-generator[74569]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 13:55:09 compute-0 systemd[1]: Starting Ceph mgr.compute-0.wookjv for e399cf45-e6b6-5393-99f1-75c601d3f188...
Jan 20 13:55:09 compute-0 podman[74634]: 2026-01-20 13:55:09.721135524 +0000 UTC m=+0.038859674 container create ad330ec75efbdfaec081b942072a288ad7eb936b24cf88c6163a5db8b25f11eb (image=quay.io/ceph/ceph:v18, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 13:55:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/131882c24cc42f7aa1ac0d9fba1e541a98f0fa85e81b6250e667054d826ae37e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 13:55:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/131882c24cc42f7aa1ac0d9fba1e541a98f0fa85e81b6250e667054d826ae37e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 13:55:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/131882c24cc42f7aa1ac0d9fba1e541a98f0fa85e81b6250e667054d826ae37e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 13:55:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/131882c24cc42f7aa1ac0d9fba1e541a98f0fa85e81b6250e667054d826ae37e/merged/var/lib/ceph/mgr/ceph-compute-0.wookjv supports timestamps until 2038 (0x7fffffff)
Jan 20 13:55:09 compute-0 podman[74634]: 2026-01-20 13:55:09.780899822 +0000 UTC m=+0.098623972 container init ad330ec75efbdfaec081b942072a288ad7eb936b24cf88c6163a5db8b25f11eb (image=quay.io/ceph/ceph:v18, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 13:55:09 compute-0 podman[74634]: 2026-01-20 13:55:09.791366666 +0000 UTC m=+0.109090816 container start ad330ec75efbdfaec081b942072a288ad7eb936b24cf88c6163a5db8b25f11eb (image=quay.io/ceph/ceph:v18, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 20 13:55:09 compute-0 bash[74634]: ad330ec75efbdfaec081b942072a288ad7eb936b24cf88c6163a5db8b25f11eb
Jan 20 13:55:09 compute-0 podman[74634]: 2026-01-20 13:55:09.702712184 +0000 UTC m=+0.020436384 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 20 13:55:09 compute-0 systemd[1]: Started Ceph mgr.compute-0.wookjv for e399cf45-e6b6-5393-99f1-75c601d3f188.
Jan 20 13:55:09 compute-0 ceph-mgr[74653]: set uid:gid to 167:167 (ceph:ceph)
Jan 20 13:55:09 compute-0 ceph-mgr[74653]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Jan 20 13:55:09 compute-0 ceph-mgr[74653]: pidfile_write: ignore empty --pid-file
Jan 20 13:55:09 compute-0 podman[74654]: 2026-01-20 13:55:09.884342715 +0000 UTC m=+0.053475490 container create 1b05c03bc9783278f4fb55ca889cbadbac923509fe49f98c0c1a512ed8f60289 (image=quay.io/ceph/ceph:v18, name=sweet_hamilton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 13:55:09 compute-0 systemd[1]: Started libpod-conmon-1b05c03bc9783278f4fb55ca889cbadbac923509fe49f98c0c1a512ed8f60289.scope.
Jan 20 13:55:09 compute-0 ceph-mgr[74653]: mgr[py] Loading python module 'alerts'
Jan 20 13:55:09 compute-0 podman[74654]: 2026-01-20 13:55:09.85797233 +0000 UTC m=+0.027105155 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 20 13:55:09 compute-0 systemd[1]: Started libcrun container.
Jan 20 13:55:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6f9eaa47962255145f1d5a8b1668ccc84f53e45c0762734e384acf2a329e05c/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 13:55:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6f9eaa47962255145f1d5a8b1668ccc84f53e45c0762734e384acf2a329e05c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 13:55:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6f9eaa47962255145f1d5a8b1668ccc84f53e45c0762734e384acf2a329e05c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 13:55:09 compute-0 podman[74654]: 2026-01-20 13:55:09.986262656 +0000 UTC m=+0.155395421 container init 1b05c03bc9783278f4fb55ca889cbadbac923509fe49f98c0c1a512ed8f60289 (image=quay.io/ceph/ceph:v18, name=sweet_hamilton, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 20 13:55:09 compute-0 podman[74654]: 2026-01-20 13:55:09.998146927 +0000 UTC m=+0.167279672 container start 1b05c03bc9783278f4fb55ca889cbadbac923509fe49f98c0c1a512ed8f60289 (image=quay.io/ceph/ceph:v18, name=sweet_hamilton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Jan 20 13:55:10 compute-0 podman[74654]: 2026-01-20 13:55:10.008013834 +0000 UTC m=+0.177146609 container attach 1b05c03bc9783278f4fb55ca889cbadbac923509fe49f98c0c1a512ed8f60289 (image=quay.io/ceph/ceph:v18, name=sweet_hamilton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 13:55:10 compute-0 ceph-mgr[74653]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Jan 20 13:55:10 compute-0 ceph-mgr[74653]: mgr[py] Loading python module 'balancer'
Jan 20 13:55:10 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv[74649]: 2026-01-20T13:55:10.243+0000 7f3d8b306140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Jan 20 13:55:10 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Jan 20 13:55:10 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/50427810' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 20 13:55:10 compute-0 sweet_hamilton[74695]: 
Jan 20 13:55:10 compute-0 sweet_hamilton[74695]: {
Jan 20 13:55:10 compute-0 sweet_hamilton[74695]:     "fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 13:55:10 compute-0 sweet_hamilton[74695]:     "health": {
Jan 20 13:55:10 compute-0 sweet_hamilton[74695]:         "status": "HEALTH_OK",
Jan 20 13:55:10 compute-0 sweet_hamilton[74695]:         "checks": {},
Jan 20 13:55:10 compute-0 sweet_hamilton[74695]:         "mutes": []
Jan 20 13:55:10 compute-0 sweet_hamilton[74695]:     },
Jan 20 13:55:10 compute-0 sweet_hamilton[74695]:     "election_epoch": 5,
Jan 20 13:55:10 compute-0 sweet_hamilton[74695]:     "quorum": [
Jan 20 13:55:10 compute-0 sweet_hamilton[74695]:         0
Jan 20 13:55:10 compute-0 sweet_hamilton[74695]:     ],
Jan 20 13:55:10 compute-0 sweet_hamilton[74695]:     "quorum_names": [
Jan 20 13:55:10 compute-0 sweet_hamilton[74695]:         "compute-0"
Jan 20 13:55:10 compute-0 sweet_hamilton[74695]:     ],
Jan 20 13:55:10 compute-0 sweet_hamilton[74695]:     "quorum_age": 2,
Jan 20 13:55:10 compute-0 sweet_hamilton[74695]:     "monmap": {
Jan 20 13:55:10 compute-0 sweet_hamilton[74695]:         "epoch": 1,
Jan 20 13:55:10 compute-0 sweet_hamilton[74695]:         "min_mon_release_name": "reef",
Jan 20 13:55:10 compute-0 sweet_hamilton[74695]:         "num_mons": 1
Jan 20 13:55:10 compute-0 sweet_hamilton[74695]:     },
Jan 20 13:55:10 compute-0 sweet_hamilton[74695]:     "osdmap": {
Jan 20 13:55:10 compute-0 sweet_hamilton[74695]:         "epoch": 1,
Jan 20 13:55:10 compute-0 sweet_hamilton[74695]:         "num_osds": 0,
Jan 20 13:55:10 compute-0 sweet_hamilton[74695]:         "num_up_osds": 0,
Jan 20 13:55:10 compute-0 sweet_hamilton[74695]:         "osd_up_since": 0,
Jan 20 13:55:10 compute-0 sweet_hamilton[74695]:         "num_in_osds": 0,
Jan 20 13:55:10 compute-0 sweet_hamilton[74695]:         "osd_in_since": 0,
Jan 20 13:55:10 compute-0 sweet_hamilton[74695]:         "num_remapped_pgs": 0
Jan 20 13:55:10 compute-0 sweet_hamilton[74695]:     },
Jan 20 13:55:10 compute-0 sweet_hamilton[74695]:     "pgmap": {
Jan 20 13:55:10 compute-0 sweet_hamilton[74695]:         "pgs_by_state": [],
Jan 20 13:55:10 compute-0 sweet_hamilton[74695]:         "num_pgs": 0,
Jan 20 13:55:10 compute-0 sweet_hamilton[74695]:         "num_pools": 0,
Jan 20 13:55:10 compute-0 sweet_hamilton[74695]:         "num_objects": 0,
Jan 20 13:55:10 compute-0 sweet_hamilton[74695]:         "data_bytes": 0,
Jan 20 13:55:10 compute-0 sweet_hamilton[74695]:         "bytes_used": 0,
Jan 20 13:55:10 compute-0 sweet_hamilton[74695]:         "bytes_avail": 0,
Jan 20 13:55:10 compute-0 sweet_hamilton[74695]:         "bytes_total": 0
Jan 20 13:55:10 compute-0 sweet_hamilton[74695]:     },
Jan 20 13:55:10 compute-0 sweet_hamilton[74695]:     "fsmap": {
Jan 20 13:55:10 compute-0 sweet_hamilton[74695]:         "epoch": 1,
Jan 20 13:55:10 compute-0 sweet_hamilton[74695]:         "by_rank": [],
Jan 20 13:55:10 compute-0 sweet_hamilton[74695]:         "up:standby": 0
Jan 20 13:55:10 compute-0 sweet_hamilton[74695]:     },
Jan 20 13:55:10 compute-0 sweet_hamilton[74695]:     "mgrmap": {
Jan 20 13:55:10 compute-0 sweet_hamilton[74695]:         "available": false,
Jan 20 13:55:10 compute-0 sweet_hamilton[74695]:         "num_standbys": 0,
Jan 20 13:55:10 compute-0 sweet_hamilton[74695]:         "modules": [
Jan 20 13:55:10 compute-0 sweet_hamilton[74695]:             "iostat",
Jan 20 13:55:10 compute-0 sweet_hamilton[74695]:             "nfs",
Jan 20 13:55:10 compute-0 sweet_hamilton[74695]:             "restful"
Jan 20 13:55:10 compute-0 sweet_hamilton[74695]:         ],
Jan 20 13:55:10 compute-0 sweet_hamilton[74695]:         "services": {}
Jan 20 13:55:10 compute-0 sweet_hamilton[74695]:     },
Jan 20 13:55:10 compute-0 sweet_hamilton[74695]:     "servicemap": {
Jan 20 13:55:10 compute-0 sweet_hamilton[74695]:         "epoch": 1,
Jan 20 13:55:10 compute-0 sweet_hamilton[74695]:         "modified": "2026-01-20T13:55:05.359486+0000",
Jan 20 13:55:10 compute-0 sweet_hamilton[74695]:         "services": {}
Jan 20 13:55:10 compute-0 sweet_hamilton[74695]:     },
Jan 20 13:55:10 compute-0 sweet_hamilton[74695]:     "progress_events": {}
Jan 20 13:55:10 compute-0 sweet_hamilton[74695]: }
Jan 20 13:55:10 compute-0 systemd[1]: libpod-1b05c03bc9783278f4fb55ca889cbadbac923509fe49f98c0c1a512ed8f60289.scope: Deactivated successfully.
Jan 20 13:55:10 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/50427810' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 20 13:55:10 compute-0 podman[74721]: 2026-01-20 13:55:10.423178841 +0000 UTC m=+0.028162283 container died 1b05c03bc9783278f4fb55ca889cbadbac923509fe49f98c0c1a512ed8f60289 (image=quay.io/ceph/ceph:v18, name=sweet_hamilton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 20 13:55:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-e6f9eaa47962255145f1d5a8b1668ccc84f53e45c0762734e384acf2a329e05c-merged.mount: Deactivated successfully.
Jan 20 13:55:10 compute-0 podman[74721]: 2026-01-20 13:55:10.471279084 +0000 UTC m=+0.076262476 container remove 1b05c03bc9783278f4fb55ca889cbadbac923509fe49f98c0c1a512ed8f60289 (image=quay.io/ceph/ceph:v18, name=sweet_hamilton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 13:55:10 compute-0 systemd[1]: libpod-conmon-1b05c03bc9783278f4fb55ca889cbadbac923509fe49f98c0c1a512ed8f60289.scope: Deactivated successfully.
Jan 20 13:55:10 compute-0 ceph-mgr[74653]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Jan 20 13:55:10 compute-0 ceph-mgr[74653]: mgr[py] Loading python module 'cephadm'
Jan 20 13:55:10 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv[74649]: 2026-01-20T13:55:10.492+0000 7f3d8b306140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Jan 20 13:55:12 compute-0 ceph-mgr[74653]: mgr[py] Loading python module 'crash'
Jan 20 13:55:12 compute-0 podman[74747]: 2026-01-20 13:55:12.548081724 +0000 UTC m=+0.041951017 container create aa9328f747086b3d30c1a0fca16066921866af228c13487fc1a09b3881c8e0e7 (image=quay.io/ceph/ceph:v18, name=musing_keldysh, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 13:55:12 compute-0 systemd[1]: Started libpod-conmon-aa9328f747086b3d30c1a0fca16066921866af228c13487fc1a09b3881c8e0e7.scope.
Jan 20 13:55:12 compute-0 systemd[1]: Started libcrun container.
Jan 20 13:55:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e98ef9139d8adc53a4c8c09632ab58ef0b5ff1d832663c4abbecbc76f035c2e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 13:55:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e98ef9139d8adc53a4c8c09632ab58ef0b5ff1d832663c4abbecbc76f035c2e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 13:55:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e98ef9139d8adc53a4c8c09632ab58ef0b5ff1d832663c4abbecbc76f035c2e/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 13:55:12 compute-0 podman[74747]: 2026-01-20 13:55:12.601494201 +0000 UTC m=+0.095363514 container init aa9328f747086b3d30c1a0fca16066921866af228c13487fc1a09b3881c8e0e7 (image=quay.io/ceph/ceph:v18, name=musing_keldysh, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 20 13:55:12 compute-0 ceph-mgr[74653]: mgr[py] Module crash has missing NOTIFY_TYPES member
Jan 20 13:55:12 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv[74649]: 2026-01-20T13:55:12.604+0000 7f3d8b306140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Jan 20 13:55:12 compute-0 ceph-mgr[74653]: mgr[py] Loading python module 'dashboard'
Jan 20 13:55:12 compute-0 podman[74747]: 2026-01-20 13:55:12.605874729 +0000 UTC m=+0.099744022 container start aa9328f747086b3d30c1a0fca16066921866af228c13487fc1a09b3881c8e0e7 (image=quay.io/ceph/ceph:v18, name=musing_keldysh, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 13:55:12 compute-0 podman[74747]: 2026-01-20 13:55:12.609726984 +0000 UTC m=+0.103596287 container attach aa9328f747086b3d30c1a0fca16066921866af228c13487fc1a09b3881c8e0e7 (image=quay.io/ceph/ceph:v18, name=musing_keldysh, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 20 13:55:12 compute-0 podman[74747]: 2026-01-20 13:55:12.530163179 +0000 UTC m=+0.024032492 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 20 13:55:12 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Jan 20 13:55:12 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3879846585' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 20 13:55:12 compute-0 musing_keldysh[74764]: 
Jan 20 13:55:12 compute-0 musing_keldysh[74764]: {
Jan 20 13:55:12 compute-0 musing_keldysh[74764]:     "fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 13:55:12 compute-0 musing_keldysh[74764]:     "health": {
Jan 20 13:55:12 compute-0 musing_keldysh[74764]:         "status": "HEALTH_OK",
Jan 20 13:55:12 compute-0 musing_keldysh[74764]:         "checks": {},
Jan 20 13:55:12 compute-0 musing_keldysh[74764]:         "mutes": []
Jan 20 13:55:12 compute-0 musing_keldysh[74764]:     },
Jan 20 13:55:12 compute-0 musing_keldysh[74764]:     "election_epoch": 5,
Jan 20 13:55:12 compute-0 musing_keldysh[74764]:     "quorum": [
Jan 20 13:55:12 compute-0 musing_keldysh[74764]:         0
Jan 20 13:55:12 compute-0 musing_keldysh[74764]:     ],
Jan 20 13:55:12 compute-0 musing_keldysh[74764]:     "quorum_names": [
Jan 20 13:55:12 compute-0 musing_keldysh[74764]:         "compute-0"
Jan 20 13:55:12 compute-0 musing_keldysh[74764]:     ],
Jan 20 13:55:12 compute-0 musing_keldysh[74764]:     "quorum_age": 5,
Jan 20 13:55:12 compute-0 musing_keldysh[74764]:     "monmap": {
Jan 20 13:55:12 compute-0 musing_keldysh[74764]:         "epoch": 1,
Jan 20 13:55:12 compute-0 musing_keldysh[74764]:         "min_mon_release_name": "reef",
Jan 20 13:55:12 compute-0 musing_keldysh[74764]:         "num_mons": 1
Jan 20 13:55:12 compute-0 musing_keldysh[74764]:     },
Jan 20 13:55:12 compute-0 musing_keldysh[74764]:     "osdmap": {
Jan 20 13:55:12 compute-0 musing_keldysh[74764]:         "epoch": 1,
Jan 20 13:55:12 compute-0 musing_keldysh[74764]:         "num_osds": 0,
Jan 20 13:55:12 compute-0 musing_keldysh[74764]:         "num_up_osds": 0,
Jan 20 13:55:12 compute-0 musing_keldysh[74764]:         "osd_up_since": 0,
Jan 20 13:55:12 compute-0 musing_keldysh[74764]:         "num_in_osds": 0,
Jan 20 13:55:12 compute-0 musing_keldysh[74764]:         "osd_in_since": 0,
Jan 20 13:55:12 compute-0 musing_keldysh[74764]:         "num_remapped_pgs": 0
Jan 20 13:55:12 compute-0 musing_keldysh[74764]:     },
Jan 20 13:55:12 compute-0 musing_keldysh[74764]:     "pgmap": {
Jan 20 13:55:12 compute-0 musing_keldysh[74764]:         "pgs_by_state": [],
Jan 20 13:55:12 compute-0 musing_keldysh[74764]:         "num_pgs": 0,
Jan 20 13:55:12 compute-0 musing_keldysh[74764]:         "num_pools": 0,
Jan 20 13:55:12 compute-0 musing_keldysh[74764]:         "num_objects": 0,
Jan 20 13:55:12 compute-0 musing_keldysh[74764]:         "data_bytes": 0,
Jan 20 13:55:12 compute-0 musing_keldysh[74764]:         "bytes_used": 0,
Jan 20 13:55:12 compute-0 musing_keldysh[74764]:         "bytes_avail": 0,
Jan 20 13:55:12 compute-0 musing_keldysh[74764]:         "bytes_total": 0
Jan 20 13:55:12 compute-0 musing_keldysh[74764]:     },
Jan 20 13:55:12 compute-0 musing_keldysh[74764]:     "fsmap": {
Jan 20 13:55:12 compute-0 musing_keldysh[74764]:         "epoch": 1,
Jan 20 13:55:12 compute-0 musing_keldysh[74764]:         "by_rank": [],
Jan 20 13:55:12 compute-0 musing_keldysh[74764]:         "up:standby": 0
Jan 20 13:55:12 compute-0 musing_keldysh[74764]:     },
Jan 20 13:55:12 compute-0 musing_keldysh[74764]:     "mgrmap": {
Jan 20 13:55:12 compute-0 musing_keldysh[74764]:         "available": false,
Jan 20 13:55:12 compute-0 musing_keldysh[74764]:         "num_standbys": 0,
Jan 20 13:55:12 compute-0 musing_keldysh[74764]:         "modules": [
Jan 20 13:55:12 compute-0 musing_keldysh[74764]:             "iostat",
Jan 20 13:55:12 compute-0 musing_keldysh[74764]:             "nfs",
Jan 20 13:55:12 compute-0 musing_keldysh[74764]:             "restful"
Jan 20 13:55:12 compute-0 musing_keldysh[74764]:         ],
Jan 20 13:55:12 compute-0 musing_keldysh[74764]:         "services": {}
Jan 20 13:55:12 compute-0 musing_keldysh[74764]:     },
Jan 20 13:55:12 compute-0 musing_keldysh[74764]:     "servicemap": {
Jan 20 13:55:12 compute-0 musing_keldysh[74764]:         "epoch": 1,
Jan 20 13:55:12 compute-0 musing_keldysh[74764]:         "modified": "2026-01-20T13:55:05.359486+0000",
Jan 20 13:55:12 compute-0 musing_keldysh[74764]:         "services": {}
Jan 20 13:55:12 compute-0 musing_keldysh[74764]:     },
Jan 20 13:55:12 compute-0 musing_keldysh[74764]:     "progress_events": {}
Jan 20 13:55:12 compute-0 musing_keldysh[74764]: }
Jan 20 13:55:12 compute-0 systemd[1]: libpod-aa9328f747086b3d30c1a0fca16066921866af228c13487fc1a09b3881c8e0e7.scope: Deactivated successfully.
Jan 20 13:55:12 compute-0 podman[74747]: 2026-01-20 13:55:12.981224577 +0000 UTC m=+0.475093870 container died aa9328f747086b3d30c1a0fca16066921866af228c13487fc1a09b3881c8e0e7 (image=quay.io/ceph/ceph:v18, name=musing_keldysh, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 13:55:13 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3879846585' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 20 13:55:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-0e98ef9139d8adc53a4c8c09632ab58ef0b5ff1d832663c4abbecbc76f035c2e-merged.mount: Deactivated successfully.
Jan 20 13:55:13 compute-0 podman[74747]: 2026-01-20 13:55:13.05071309 +0000 UTC m=+0.544582373 container remove aa9328f747086b3d30c1a0fca16066921866af228c13487fc1a09b3881c8e0e7 (image=quay.io/ceph/ceph:v18, name=musing_keldysh, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 13:55:13 compute-0 systemd[1]: libpod-conmon-aa9328f747086b3d30c1a0fca16066921866af228c13487fc1a09b3881c8e0e7.scope: Deactivated successfully.
Jan 20 13:55:13 compute-0 ceph-mgr[74653]: mgr[py] Loading python module 'devicehealth'
Jan 20 13:55:14 compute-0 ceph-mgr[74653]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Jan 20 13:55:14 compute-0 ceph-mgr[74653]: mgr[py] Loading python module 'diskprediction_local'
Jan 20 13:55:14 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv[74649]: 2026-01-20T13:55:14.207+0000 7f3d8b306140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Jan 20 13:55:14 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv[74649]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Jan 20 13:55:14 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv[74649]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Jan 20 13:55:14 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv[74649]:   from numpy import show_config as show_numpy_config
Jan 20 13:55:14 compute-0 ceph-mgr[74653]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Jan 20 13:55:14 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv[74649]: 2026-01-20T13:55:14.731+0000 7f3d8b306140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Jan 20 13:55:14 compute-0 ceph-mgr[74653]: mgr[py] Loading python module 'influx'
Jan 20 13:55:14 compute-0 ceph-mgr[74653]: mgr[py] Module influx has missing NOTIFY_TYPES member
Jan 20 13:55:14 compute-0 ceph-mgr[74653]: mgr[py] Loading python module 'insights'
Jan 20 13:55:14 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv[74649]: 2026-01-20T13:55:14.964+0000 7f3d8b306140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Jan 20 13:55:15 compute-0 podman[74801]: 2026-01-20 13:55:15.142927237 +0000 UTC m=+0.054291522 container create 75631643bc6221a134802c5bd14475563775f458930bdebf056f221b241660b6 (image=quay.io/ceph/ceph:v18, name=hopeful_johnson, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 20 13:55:15 compute-0 systemd[1]: Started libpod-conmon-75631643bc6221a134802c5bd14475563775f458930bdebf056f221b241660b6.scope.
Jan 20 13:55:15 compute-0 systemd[1]: Started libcrun container.
Jan 20 13:55:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07eb69536ef41d1b9b494378ac0dc4a2158dc918b5f138a4dfc0518a93245dc8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 13:55:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07eb69536ef41d1b9b494378ac0dc4a2158dc918b5f138a4dfc0518a93245dc8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 13:55:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07eb69536ef41d1b9b494378ac0dc4a2158dc918b5f138a4dfc0518a93245dc8/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 13:55:15 compute-0 ceph-mgr[74653]: mgr[py] Loading python module 'iostat'
Jan 20 13:55:15 compute-0 podman[74801]: 2026-01-20 13:55:15.216087518 +0000 UTC m=+0.127451833 container init 75631643bc6221a134802c5bd14475563775f458930bdebf056f221b241660b6 (image=quay.io/ceph/ceph:v18, name=hopeful_johnson, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 13:55:15 compute-0 podman[74801]: 2026-01-20 13:55:15.123239833 +0000 UTC m=+0.034604138 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 20 13:55:15 compute-0 podman[74801]: 2026-01-20 13:55:15.220893109 +0000 UTC m=+0.132257414 container start 75631643bc6221a134802c5bd14475563775f458930bdebf056f221b241660b6 (image=quay.io/ceph/ceph:v18, name=hopeful_johnson, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 13:55:15 compute-0 podman[74801]: 2026-01-20 13:55:15.228432383 +0000 UTC m=+0.139796698 container attach 75631643bc6221a134802c5bd14475563775f458930bdebf056f221b241660b6 (image=quay.io/ceph/ceph:v18, name=hopeful_johnson, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 13:55:15 compute-0 ceph-mgr[74653]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Jan 20 13:55:15 compute-0 ceph-mgr[74653]: mgr[py] Loading python module 'k8sevents'
Jan 20 13:55:15 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv[74649]: 2026-01-20T13:55:15.433+0000 7f3d8b306140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Jan 20 13:55:15 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Jan 20 13:55:15 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1836633381' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 20 13:55:15 compute-0 hopeful_johnson[74817]: 
Jan 20 13:55:15 compute-0 hopeful_johnson[74817]: {
Jan 20 13:55:15 compute-0 hopeful_johnson[74817]:     "fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 13:55:15 compute-0 hopeful_johnson[74817]:     "health": {
Jan 20 13:55:15 compute-0 hopeful_johnson[74817]:         "status": "HEALTH_OK",
Jan 20 13:55:15 compute-0 hopeful_johnson[74817]:         "checks": {},
Jan 20 13:55:15 compute-0 hopeful_johnson[74817]:         "mutes": []
Jan 20 13:55:15 compute-0 hopeful_johnson[74817]:     },
Jan 20 13:55:15 compute-0 hopeful_johnson[74817]:     "election_epoch": 5,
Jan 20 13:55:15 compute-0 hopeful_johnson[74817]:     "quorum": [
Jan 20 13:55:15 compute-0 hopeful_johnson[74817]:         0
Jan 20 13:55:15 compute-0 hopeful_johnson[74817]:     ],
Jan 20 13:55:15 compute-0 hopeful_johnson[74817]:     "quorum_names": [
Jan 20 13:55:15 compute-0 hopeful_johnson[74817]:         "compute-0"
Jan 20 13:55:15 compute-0 hopeful_johnson[74817]:     ],
Jan 20 13:55:15 compute-0 hopeful_johnson[74817]:     "quorum_age": 7,
Jan 20 13:55:15 compute-0 hopeful_johnson[74817]:     "monmap": {
Jan 20 13:55:15 compute-0 hopeful_johnson[74817]:         "epoch": 1,
Jan 20 13:55:15 compute-0 hopeful_johnson[74817]:         "min_mon_release_name": "reef",
Jan 20 13:55:15 compute-0 hopeful_johnson[74817]:         "num_mons": 1
Jan 20 13:55:15 compute-0 hopeful_johnson[74817]:     },
Jan 20 13:55:15 compute-0 hopeful_johnson[74817]:     "osdmap": {
Jan 20 13:55:15 compute-0 hopeful_johnson[74817]:         "epoch": 1,
Jan 20 13:55:15 compute-0 hopeful_johnson[74817]:         "num_osds": 0,
Jan 20 13:55:15 compute-0 hopeful_johnson[74817]:         "num_up_osds": 0,
Jan 20 13:55:15 compute-0 hopeful_johnson[74817]:         "osd_up_since": 0,
Jan 20 13:55:15 compute-0 hopeful_johnson[74817]:         "num_in_osds": 0,
Jan 20 13:55:15 compute-0 hopeful_johnson[74817]:         "osd_in_since": 0,
Jan 20 13:55:15 compute-0 hopeful_johnson[74817]:         "num_remapped_pgs": 0
Jan 20 13:55:15 compute-0 hopeful_johnson[74817]:     },
Jan 20 13:55:15 compute-0 hopeful_johnson[74817]:     "pgmap": {
Jan 20 13:55:15 compute-0 hopeful_johnson[74817]:         "pgs_by_state": [],
Jan 20 13:55:15 compute-0 hopeful_johnson[74817]:         "num_pgs": 0,
Jan 20 13:55:15 compute-0 hopeful_johnson[74817]:         "num_pools": 0,
Jan 20 13:55:15 compute-0 hopeful_johnson[74817]:         "num_objects": 0,
Jan 20 13:55:15 compute-0 hopeful_johnson[74817]:         "data_bytes": 0,
Jan 20 13:55:15 compute-0 hopeful_johnson[74817]:         "bytes_used": 0,
Jan 20 13:55:15 compute-0 hopeful_johnson[74817]:         "bytes_avail": 0,
Jan 20 13:55:15 compute-0 hopeful_johnson[74817]:         "bytes_total": 0
Jan 20 13:55:15 compute-0 hopeful_johnson[74817]:     },
Jan 20 13:55:15 compute-0 hopeful_johnson[74817]:     "fsmap": {
Jan 20 13:55:15 compute-0 hopeful_johnson[74817]:         "epoch": 1,
Jan 20 13:55:15 compute-0 hopeful_johnson[74817]:         "by_rank": [],
Jan 20 13:55:15 compute-0 hopeful_johnson[74817]:         "up:standby": 0
Jan 20 13:55:15 compute-0 hopeful_johnson[74817]:     },
Jan 20 13:55:15 compute-0 hopeful_johnson[74817]:     "mgrmap": {
Jan 20 13:55:15 compute-0 hopeful_johnson[74817]:         "available": false,
Jan 20 13:55:15 compute-0 hopeful_johnson[74817]:         "num_standbys": 0,
Jan 20 13:55:15 compute-0 hopeful_johnson[74817]:         "modules": [
Jan 20 13:55:15 compute-0 hopeful_johnson[74817]:             "iostat",
Jan 20 13:55:15 compute-0 hopeful_johnson[74817]:             "nfs",
Jan 20 13:55:15 compute-0 hopeful_johnson[74817]:             "restful"
Jan 20 13:55:15 compute-0 hopeful_johnson[74817]:         ],
Jan 20 13:55:15 compute-0 hopeful_johnson[74817]:         "services": {}
Jan 20 13:55:15 compute-0 hopeful_johnson[74817]:     },
Jan 20 13:55:15 compute-0 hopeful_johnson[74817]:     "servicemap": {
Jan 20 13:55:15 compute-0 hopeful_johnson[74817]:         "epoch": 1,
Jan 20 13:55:15 compute-0 hopeful_johnson[74817]:         "modified": "2026-01-20T13:55:05.359486+0000",
Jan 20 13:55:15 compute-0 hopeful_johnson[74817]:         "services": {}
Jan 20 13:55:15 compute-0 hopeful_johnson[74817]:     },
Jan 20 13:55:15 compute-0 hopeful_johnson[74817]:     "progress_events": {}
Jan 20 13:55:15 compute-0 hopeful_johnson[74817]: }
Jan 20 13:55:15 compute-0 systemd[1]: libpod-75631643bc6221a134802c5bd14475563775f458930bdebf056f221b241660b6.scope: Deactivated successfully.
Jan 20 13:55:15 compute-0 podman[74801]: 2026-01-20 13:55:15.601564048 +0000 UTC m=+0.512928323 container died 75631643bc6221a134802c5bd14475563775f458930bdebf056f221b241660b6 (image=quay.io/ceph/ceph:v18, name=hopeful_johnson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 20 13:55:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-07eb69536ef41d1b9b494378ac0dc4a2158dc918b5f138a4dfc0518a93245dc8-merged.mount: Deactivated successfully.
Jan 20 13:55:15 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/1836633381' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 20 13:55:15 compute-0 podman[74801]: 2026-01-20 13:55:15.638936628 +0000 UTC m=+0.550300913 container remove 75631643bc6221a134802c5bd14475563775f458930bdebf056f221b241660b6 (image=quay.io/ceph/ceph:v18, name=hopeful_johnson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 20 13:55:15 compute-0 systemd[1]: libpod-conmon-75631643bc6221a134802c5bd14475563775f458930bdebf056f221b241660b6.scope: Deactivated successfully.
Jan 20 13:55:17 compute-0 ceph-mgr[74653]: mgr[py] Loading python module 'localpool'
Jan 20 13:55:17 compute-0 ceph-mgr[74653]: mgr[py] Loading python module 'mds_autoscaler'
Jan 20 13:55:17 compute-0 podman[74857]: 2026-01-20 13:55:17.704690909 +0000 UTC m=+0.042191344 container create 05a8d35c7c7b1b4199c0eef0299d5ef54f5ac5bce3b88645fface8d3dccff0aa (image=quay.io/ceph/ceph:v18, name=cool_rosalind, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 20 13:55:17 compute-0 systemd[1]: Started libpod-conmon-05a8d35c7c7b1b4199c0eef0299d5ef54f5ac5bce3b88645fface8d3dccff0aa.scope.
Jan 20 13:55:17 compute-0 systemd[1]: Started libcrun container.
Jan 20 13:55:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24f1275b3c2bc88c46849d4f019e48b298d8ff85241be77b7dd9fad34f422c20/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 13:55:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24f1275b3c2bc88c46849d4f019e48b298d8ff85241be77b7dd9fad34f422c20/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 13:55:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24f1275b3c2bc88c46849d4f019e48b298d8ff85241be77b7dd9fad34f422c20/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 13:55:17 compute-0 podman[74857]: 2026-01-20 13:55:17.688144757 +0000 UTC m=+0.025645202 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 20 13:55:17 compute-0 podman[74857]: 2026-01-20 13:55:17.78425515 +0000 UTC m=+0.121755605 container init 05a8d35c7c7b1b4199c0eef0299d5ef54f5ac5bce3b88645fface8d3dccff0aa (image=quay.io/ceph/ceph:v18, name=cool_rosalind, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 20 13:55:17 compute-0 podman[74857]: 2026-01-20 13:55:17.791026084 +0000 UTC m=+0.128526529 container start 05a8d35c7c7b1b4199c0eef0299d5ef54f5ac5bce3b88645fface8d3dccff0aa (image=quay.io/ceph/ceph:v18, name=cool_rosalind, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True)
Jan 20 13:55:17 compute-0 podman[74857]: 2026-01-20 13:55:17.794181958 +0000 UTC m=+0.131682393 container attach 05a8d35c7c7b1b4199c0eef0299d5ef54f5ac5bce3b88645fface8d3dccff0aa (image=quay.io/ceph/ceph:v18, name=cool_rosalind, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 20 13:55:17 compute-0 ceph-mgr[74653]: mgr[py] Loading python module 'mirroring'
Jan 20 13:55:18 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Jan 20 13:55:18 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3600355359' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 20 13:55:18 compute-0 cool_rosalind[74873]: 
Jan 20 13:55:18 compute-0 cool_rosalind[74873]: {
Jan 20 13:55:18 compute-0 cool_rosalind[74873]:     "fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 13:55:18 compute-0 cool_rosalind[74873]:     "health": {
Jan 20 13:55:18 compute-0 cool_rosalind[74873]:         "status": "HEALTH_OK",
Jan 20 13:55:18 compute-0 cool_rosalind[74873]:         "checks": {},
Jan 20 13:55:18 compute-0 cool_rosalind[74873]:         "mutes": []
Jan 20 13:55:18 compute-0 cool_rosalind[74873]:     },
Jan 20 13:55:18 compute-0 cool_rosalind[74873]:     "election_epoch": 5,
Jan 20 13:55:18 compute-0 cool_rosalind[74873]:     "quorum": [
Jan 20 13:55:18 compute-0 cool_rosalind[74873]:         0
Jan 20 13:55:18 compute-0 cool_rosalind[74873]:     ],
Jan 20 13:55:18 compute-0 cool_rosalind[74873]:     "quorum_names": [
Jan 20 13:55:18 compute-0 cool_rosalind[74873]:         "compute-0"
Jan 20 13:55:18 compute-0 cool_rosalind[74873]:     ],
Jan 20 13:55:18 compute-0 cool_rosalind[74873]:     "quorum_age": 10,
Jan 20 13:55:18 compute-0 cool_rosalind[74873]:     "monmap": {
Jan 20 13:55:18 compute-0 cool_rosalind[74873]:         "epoch": 1,
Jan 20 13:55:18 compute-0 cool_rosalind[74873]:         "min_mon_release_name": "reef",
Jan 20 13:55:18 compute-0 cool_rosalind[74873]:         "num_mons": 1
Jan 20 13:55:18 compute-0 cool_rosalind[74873]:     },
Jan 20 13:55:18 compute-0 cool_rosalind[74873]:     "osdmap": {
Jan 20 13:55:18 compute-0 cool_rosalind[74873]:         "epoch": 1,
Jan 20 13:55:18 compute-0 cool_rosalind[74873]:         "num_osds": 0,
Jan 20 13:55:18 compute-0 cool_rosalind[74873]:         "num_up_osds": 0,
Jan 20 13:55:18 compute-0 cool_rosalind[74873]:         "osd_up_since": 0,
Jan 20 13:55:18 compute-0 cool_rosalind[74873]:         "num_in_osds": 0,
Jan 20 13:55:18 compute-0 cool_rosalind[74873]:         "osd_in_since": 0,
Jan 20 13:55:18 compute-0 cool_rosalind[74873]:         "num_remapped_pgs": 0
Jan 20 13:55:18 compute-0 cool_rosalind[74873]:     },
Jan 20 13:55:18 compute-0 cool_rosalind[74873]:     "pgmap": {
Jan 20 13:55:18 compute-0 cool_rosalind[74873]:         "pgs_by_state": [],
Jan 20 13:55:18 compute-0 cool_rosalind[74873]:         "num_pgs": 0,
Jan 20 13:55:18 compute-0 cool_rosalind[74873]:         "num_pools": 0,
Jan 20 13:55:18 compute-0 cool_rosalind[74873]:         "num_objects": 0,
Jan 20 13:55:18 compute-0 cool_rosalind[74873]:         "data_bytes": 0,
Jan 20 13:55:18 compute-0 cool_rosalind[74873]:         "bytes_used": 0,
Jan 20 13:55:18 compute-0 cool_rosalind[74873]:         "bytes_avail": 0,
Jan 20 13:55:18 compute-0 cool_rosalind[74873]:         "bytes_total": 0
Jan 20 13:55:18 compute-0 cool_rosalind[74873]:     },
Jan 20 13:55:18 compute-0 cool_rosalind[74873]:     "fsmap": {
Jan 20 13:55:18 compute-0 cool_rosalind[74873]:         "epoch": 1,
Jan 20 13:55:18 compute-0 cool_rosalind[74873]:         "by_rank": [],
Jan 20 13:55:18 compute-0 cool_rosalind[74873]:         "up:standby": 0
Jan 20 13:55:18 compute-0 cool_rosalind[74873]:     },
Jan 20 13:55:18 compute-0 cool_rosalind[74873]:     "mgrmap": {
Jan 20 13:55:18 compute-0 cool_rosalind[74873]:         "available": false,
Jan 20 13:55:18 compute-0 cool_rosalind[74873]:         "num_standbys": 0,
Jan 20 13:55:18 compute-0 cool_rosalind[74873]:         "modules": [
Jan 20 13:55:18 compute-0 cool_rosalind[74873]:             "iostat",
Jan 20 13:55:18 compute-0 cool_rosalind[74873]:             "nfs",
Jan 20 13:55:18 compute-0 cool_rosalind[74873]:             "restful"
Jan 20 13:55:18 compute-0 cool_rosalind[74873]:         ],
Jan 20 13:55:18 compute-0 cool_rosalind[74873]:         "services": {}
Jan 20 13:55:18 compute-0 cool_rosalind[74873]:     },
Jan 20 13:55:18 compute-0 cool_rosalind[74873]:     "servicemap": {
Jan 20 13:55:18 compute-0 cool_rosalind[74873]:         "epoch": 1,
Jan 20 13:55:18 compute-0 cool_rosalind[74873]:         "modified": "2026-01-20T13:55:05.359486+0000",
Jan 20 13:55:18 compute-0 cool_rosalind[74873]:         "services": {}
Jan 20 13:55:18 compute-0 cool_rosalind[74873]:     },
Jan 20 13:55:18 compute-0 cool_rosalind[74873]:     "progress_events": {}
Jan 20 13:55:18 compute-0 cool_rosalind[74873]: }
Jan 20 13:55:18 compute-0 systemd[1]: libpod-05a8d35c7c7b1b4199c0eef0299d5ef54f5ac5bce3b88645fface8d3dccff0aa.scope: Deactivated successfully.
Jan 20 13:55:18 compute-0 podman[74857]: 2026-01-20 13:55:18.191734063 +0000 UTC m=+0.529234498 container died 05a8d35c7c7b1b4199c0eef0299d5ef54f5ac5bce3b88645fface8d3dccff0aa (image=quay.io/ceph/ceph:v18, name=cool_rosalind, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 20 13:55:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-24f1275b3c2bc88c46849d4f019e48b298d8ff85241be77b7dd9fad34f422c20-merged.mount: Deactivated successfully.
Jan 20 13:55:18 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3600355359' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 20 13:55:18 compute-0 podman[74857]: 2026-01-20 13:55:18.235871088 +0000 UTC m=+0.573371563 container remove 05a8d35c7c7b1b4199c0eef0299d5ef54f5ac5bce3b88645fface8d3dccff0aa (image=quay.io/ceph/ceph:v18, name=cool_rosalind, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 13:55:18 compute-0 systemd[1]: libpod-conmon-05a8d35c7c7b1b4199c0eef0299d5ef54f5ac5bce3b88645fface8d3dccff0aa.scope: Deactivated successfully.
Jan 20 13:55:18 compute-0 ceph-mgr[74653]: mgr[py] Loading python module 'nfs'
Jan 20 13:55:18 compute-0 ceph-mgr[74653]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Jan 20 13:55:18 compute-0 ceph-mgr[74653]: mgr[py] Loading python module 'orchestrator'
Jan 20 13:55:18 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv[74649]: 2026-01-20T13:55:18.942+0000 7f3d8b306140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Jan 20 13:55:19 compute-0 ceph-mgr[74653]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Jan 20 13:55:19 compute-0 ceph-mgr[74653]: mgr[py] Loading python module 'osd_perf_query'
Jan 20 13:55:19 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv[74649]: 2026-01-20T13:55:19.551+0000 7f3d8b306140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Jan 20 13:55:19 compute-0 ceph-mgr[74653]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Jan 20 13:55:19 compute-0 ceph-mgr[74653]: mgr[py] Loading python module 'osd_support'
Jan 20 13:55:19 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv[74649]: 2026-01-20T13:55:19.797+0000 7f3d8b306140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Jan 20 13:55:20 compute-0 ceph-mgr[74653]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Jan 20 13:55:20 compute-0 ceph-mgr[74653]: mgr[py] Loading python module 'pg_autoscaler'
Jan 20 13:55:20 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv[74649]: 2026-01-20T13:55:20.016+0000 7f3d8b306140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Jan 20 13:55:20 compute-0 ceph-mgr[74653]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Jan 20 13:55:20 compute-0 ceph-mgr[74653]: mgr[py] Loading python module 'progress'
Jan 20 13:55:20 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv[74649]: 2026-01-20T13:55:20.268+0000 7f3d8b306140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Jan 20 13:55:20 compute-0 podman[74911]: 2026-01-20 13:55:20.350865648 +0000 UTC m=+0.085370447 container create d784973c2416762e89f943d1ecbd57cf7bd6ce70ff61864b4d2974408c7d6846 (image=quay.io/ceph/ceph:v18, name=reverent_neumann, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 13:55:20 compute-0 systemd[1]: Started libpod-conmon-d784973c2416762e89f943d1ecbd57cf7bd6ce70ff61864b4d2974408c7d6846.scope.
Jan 20 13:55:20 compute-0 podman[74911]: 2026-01-20 13:55:20.305075796 +0000 UTC m=+0.039580595 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 20 13:55:20 compute-0 systemd[1]: Started libcrun container.
Jan 20 13:55:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca73ec679ca6f455317c7769c4d680767184cb430d9e2a7b654b3a92f6d2420e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 13:55:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca73ec679ca6f455317c7769c4d680767184cb430d9e2a7b654b3a92f6d2420e/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 13:55:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca73ec679ca6f455317c7769c4d680767184cb430d9e2a7b654b3a92f6d2420e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 13:55:20 compute-0 podman[74911]: 2026-01-20 13:55:20.459057583 +0000 UTC m=+0.193562372 container init d784973c2416762e89f943d1ecbd57cf7bd6ce70ff61864b4d2974408c7d6846 (image=quay.io/ceph/ceph:v18, name=reverent_neumann, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 20 13:55:20 compute-0 podman[74911]: 2026-01-20 13:55:20.468439045 +0000 UTC m=+0.202943804 container start d784973c2416762e89f943d1ecbd57cf7bd6ce70ff61864b4d2974408c7d6846 (image=quay.io/ceph/ceph:v18, name=reverent_neumann, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 13:55:20 compute-0 podman[74911]: 2026-01-20 13:55:20.471724381 +0000 UTC m=+0.206229150 container attach d784973c2416762e89f943d1ecbd57cf7bd6ce70ff61864b4d2974408c7d6846 (image=quay.io/ceph/ceph:v18, name=reverent_neumann, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 20 13:55:20 compute-0 ceph-mgr[74653]: mgr[py] Module progress has missing NOTIFY_TYPES member
Jan 20 13:55:20 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv[74649]: 2026-01-20T13:55:20.494+0000 7f3d8b306140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Jan 20 13:55:20 compute-0 ceph-mgr[74653]: mgr[py] Loading python module 'prometheus'
Jan 20 13:55:20 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Jan 20 13:55:20 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3266167382' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 20 13:55:20 compute-0 reverent_neumann[74927]: 
Jan 20 13:55:20 compute-0 reverent_neumann[74927]: {
Jan 20 13:55:20 compute-0 reverent_neumann[74927]:     "fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 13:55:20 compute-0 reverent_neumann[74927]:     "health": {
Jan 20 13:55:20 compute-0 reverent_neumann[74927]:         "status": "HEALTH_OK",
Jan 20 13:55:20 compute-0 reverent_neumann[74927]:         "checks": {},
Jan 20 13:55:20 compute-0 reverent_neumann[74927]:         "mutes": []
Jan 20 13:55:20 compute-0 reverent_neumann[74927]:     },
Jan 20 13:55:20 compute-0 reverent_neumann[74927]:     "election_epoch": 5,
Jan 20 13:55:20 compute-0 reverent_neumann[74927]:     "quorum": [
Jan 20 13:55:20 compute-0 reverent_neumann[74927]:         0
Jan 20 13:55:20 compute-0 reverent_neumann[74927]:     ],
Jan 20 13:55:20 compute-0 reverent_neumann[74927]:     "quorum_names": [
Jan 20 13:55:20 compute-0 reverent_neumann[74927]:         "compute-0"
Jan 20 13:55:20 compute-0 reverent_neumann[74927]:     ],
Jan 20 13:55:20 compute-0 reverent_neumann[74927]:     "quorum_age": 12,
Jan 20 13:55:20 compute-0 reverent_neumann[74927]:     "monmap": {
Jan 20 13:55:20 compute-0 reverent_neumann[74927]:         "epoch": 1,
Jan 20 13:55:20 compute-0 reverent_neumann[74927]:         "min_mon_release_name": "reef",
Jan 20 13:55:20 compute-0 reverent_neumann[74927]:         "num_mons": 1
Jan 20 13:55:20 compute-0 reverent_neumann[74927]:     },
Jan 20 13:55:20 compute-0 reverent_neumann[74927]:     "osdmap": {
Jan 20 13:55:20 compute-0 reverent_neumann[74927]:         "epoch": 1,
Jan 20 13:55:20 compute-0 reverent_neumann[74927]:         "num_osds": 0,
Jan 20 13:55:20 compute-0 reverent_neumann[74927]:         "num_up_osds": 0,
Jan 20 13:55:20 compute-0 reverent_neumann[74927]:         "osd_up_since": 0,
Jan 20 13:55:20 compute-0 reverent_neumann[74927]:         "num_in_osds": 0,
Jan 20 13:55:20 compute-0 reverent_neumann[74927]:         "osd_in_since": 0,
Jan 20 13:55:20 compute-0 reverent_neumann[74927]:         "num_remapped_pgs": 0
Jan 20 13:55:20 compute-0 reverent_neumann[74927]:     },
Jan 20 13:55:20 compute-0 reverent_neumann[74927]:     "pgmap": {
Jan 20 13:55:20 compute-0 reverent_neumann[74927]:         "pgs_by_state": [],
Jan 20 13:55:20 compute-0 reverent_neumann[74927]:         "num_pgs": 0,
Jan 20 13:55:20 compute-0 reverent_neumann[74927]:         "num_pools": 0,
Jan 20 13:55:20 compute-0 reverent_neumann[74927]:         "num_objects": 0,
Jan 20 13:55:20 compute-0 reverent_neumann[74927]:         "data_bytes": 0,
Jan 20 13:55:20 compute-0 reverent_neumann[74927]:         "bytes_used": 0,
Jan 20 13:55:20 compute-0 reverent_neumann[74927]:         "bytes_avail": 0,
Jan 20 13:55:20 compute-0 reverent_neumann[74927]:         "bytes_total": 0
Jan 20 13:55:20 compute-0 reverent_neumann[74927]:     },
Jan 20 13:55:20 compute-0 reverent_neumann[74927]:     "fsmap": {
Jan 20 13:55:20 compute-0 reverent_neumann[74927]:         "epoch": 1,
Jan 20 13:55:20 compute-0 reverent_neumann[74927]:         "by_rank": [],
Jan 20 13:55:20 compute-0 reverent_neumann[74927]:         "up:standby": 0
Jan 20 13:55:20 compute-0 reverent_neumann[74927]:     },
Jan 20 13:55:20 compute-0 reverent_neumann[74927]:     "mgrmap": {
Jan 20 13:55:20 compute-0 reverent_neumann[74927]:         "available": false,
Jan 20 13:55:20 compute-0 reverent_neumann[74927]:         "num_standbys": 0,
Jan 20 13:55:20 compute-0 reverent_neumann[74927]:         "modules": [
Jan 20 13:55:20 compute-0 reverent_neumann[74927]:             "iostat",
Jan 20 13:55:20 compute-0 reverent_neumann[74927]:             "nfs",
Jan 20 13:55:20 compute-0 reverent_neumann[74927]:             "restful"
Jan 20 13:55:20 compute-0 reverent_neumann[74927]:         ],
Jan 20 13:55:20 compute-0 reverent_neumann[74927]:         "services": {}
Jan 20 13:55:20 compute-0 reverent_neumann[74927]:     },
Jan 20 13:55:20 compute-0 reverent_neumann[74927]:     "servicemap": {
Jan 20 13:55:20 compute-0 reverent_neumann[74927]:         "epoch": 1,
Jan 20 13:55:20 compute-0 reverent_neumann[74927]:         "modified": "2026-01-20T13:55:05.359486+0000",
Jan 20 13:55:20 compute-0 reverent_neumann[74927]:         "services": {}
Jan 20 13:55:20 compute-0 reverent_neumann[74927]:     },
Jan 20 13:55:20 compute-0 reverent_neumann[74927]:     "progress_events": {}
Jan 20 13:55:20 compute-0 reverent_neumann[74927]: }
Jan 20 13:55:20 compute-0 systemd[1]: libpod-d784973c2416762e89f943d1ecbd57cf7bd6ce70ff61864b4d2974408c7d6846.scope: Deactivated successfully.
Jan 20 13:55:20 compute-0 podman[74911]: 2026-01-20 13:55:20.843087109 +0000 UTC m=+0.577591878 container died d784973c2416762e89f943d1ecbd57cf7bd6ce70ff61864b4d2974408c7d6846 (image=quay.io/ceph/ceph:v18, name=reverent_neumann, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 13:55:20 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3266167382' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 20 13:55:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-ca73ec679ca6f455317c7769c4d680767184cb430d9e2a7b654b3a92f6d2420e-merged.mount: Deactivated successfully.
Jan 20 13:55:20 compute-0 podman[74911]: 2026-01-20 13:55:20.899795701 +0000 UTC m=+0.634300460 container remove d784973c2416762e89f943d1ecbd57cf7bd6ce70ff61864b4d2974408c7d6846 (image=quay.io/ceph/ceph:v18, name=reverent_neumann, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 20 13:55:20 compute-0 systemd[1]: libpod-conmon-d784973c2416762e89f943d1ecbd57cf7bd6ce70ff61864b4d2974408c7d6846.scope: Deactivated successfully.
Jan 20 13:55:21 compute-0 ceph-mgr[74653]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Jan 20 13:55:21 compute-0 ceph-mgr[74653]: mgr[py] Loading python module 'rbd_support'
Jan 20 13:55:21 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv[74649]: 2026-01-20T13:55:21.419+0000 7f3d8b306140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Jan 20 13:55:21 compute-0 ceph-mgr[74653]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Jan 20 13:55:21 compute-0 ceph-mgr[74653]: mgr[py] Loading python module 'restful'
Jan 20 13:55:21 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv[74649]: 2026-01-20T13:55:21.696+0000 7f3d8b306140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Jan 20 13:55:22 compute-0 ceph-mgr[74653]: mgr[py] Loading python module 'rgw'
Jan 20 13:55:23 compute-0 ceph-mgr[74653]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Jan 20 13:55:23 compute-0 ceph-mgr[74653]: mgr[py] Loading python module 'rook'
Jan 20 13:55:23 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv[74649]: 2026-01-20T13:55:23.036+0000 7f3d8b306140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Jan 20 13:55:23 compute-0 podman[74968]: 2026-01-20 13:55:22.950925641 +0000 UTC m=+0.029111160 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 20 13:55:23 compute-0 podman[74968]: 2026-01-20 13:55:23.084983669 +0000 UTC m=+0.163169098 container create ba8c689ae60d712165fb86349d6844c3f80ae2013a047b1ac8fb483adebea05d (image=quay.io/ceph/ceph:v18, name=cranky_wilbur, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 13:55:23 compute-0 systemd[1]: Started libpod-conmon-ba8c689ae60d712165fb86349d6844c3f80ae2013a047b1ac8fb483adebea05d.scope.
Jan 20 13:55:23 compute-0 systemd[1]: Started libcrun container.
Jan 20 13:55:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6673eb5753c277d44d402f17547a9cabf00ca2702c81183110fb610875525bd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 13:55:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6673eb5753c277d44d402f17547a9cabf00ca2702c81183110fb610875525bd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 13:55:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6673eb5753c277d44d402f17547a9cabf00ca2702c81183110fb610875525bd/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 13:55:23 compute-0 podman[74968]: 2026-01-20 13:55:23.142008564 +0000 UTC m=+0.220193993 container init ba8c689ae60d712165fb86349d6844c3f80ae2013a047b1ac8fb483adebea05d (image=quay.io/ceph/ceph:v18, name=cranky_wilbur, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 13:55:23 compute-0 podman[74968]: 2026-01-20 13:55:23.150707319 +0000 UTC m=+0.228892748 container start ba8c689ae60d712165fb86349d6844c3f80ae2013a047b1ac8fb483adebea05d (image=quay.io/ceph/ceph:v18, name=cranky_wilbur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 20 13:55:23 compute-0 podman[74968]: 2026-01-20 13:55:23.15357264 +0000 UTC m=+0.231758089 container attach ba8c689ae60d712165fb86349d6844c3f80ae2013a047b1ac8fb483adebea05d (image=quay.io/ceph/ceph:v18, name=cranky_wilbur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 20 13:55:23 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Jan 20 13:55:23 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/282836932' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 20 13:55:23 compute-0 cranky_wilbur[74985]: 
Jan 20 13:55:23 compute-0 cranky_wilbur[74985]: {
Jan 20 13:55:23 compute-0 cranky_wilbur[74985]:     "fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 13:55:23 compute-0 cranky_wilbur[74985]:     "health": {
Jan 20 13:55:23 compute-0 cranky_wilbur[74985]:         "status": "HEALTH_OK",
Jan 20 13:55:23 compute-0 cranky_wilbur[74985]:         "checks": {},
Jan 20 13:55:23 compute-0 cranky_wilbur[74985]:         "mutes": []
Jan 20 13:55:23 compute-0 cranky_wilbur[74985]:     },
Jan 20 13:55:23 compute-0 cranky_wilbur[74985]:     "election_epoch": 5,
Jan 20 13:55:23 compute-0 cranky_wilbur[74985]:     "quorum": [
Jan 20 13:55:23 compute-0 cranky_wilbur[74985]:         0
Jan 20 13:55:23 compute-0 cranky_wilbur[74985]:     ],
Jan 20 13:55:23 compute-0 cranky_wilbur[74985]:     "quorum_names": [
Jan 20 13:55:23 compute-0 cranky_wilbur[74985]:         "compute-0"
Jan 20 13:55:23 compute-0 cranky_wilbur[74985]:     ],
Jan 20 13:55:23 compute-0 cranky_wilbur[74985]:     "quorum_age": 15,
Jan 20 13:55:23 compute-0 cranky_wilbur[74985]:     "monmap": {
Jan 20 13:55:23 compute-0 cranky_wilbur[74985]:         "epoch": 1,
Jan 20 13:55:23 compute-0 cranky_wilbur[74985]:         "min_mon_release_name": "reef",
Jan 20 13:55:23 compute-0 cranky_wilbur[74985]:         "num_mons": 1
Jan 20 13:55:23 compute-0 cranky_wilbur[74985]:     },
Jan 20 13:55:23 compute-0 cranky_wilbur[74985]:     "osdmap": {
Jan 20 13:55:23 compute-0 cranky_wilbur[74985]:         "epoch": 1,
Jan 20 13:55:23 compute-0 cranky_wilbur[74985]:         "num_osds": 0,
Jan 20 13:55:23 compute-0 cranky_wilbur[74985]:         "num_up_osds": 0,
Jan 20 13:55:23 compute-0 cranky_wilbur[74985]:         "osd_up_since": 0,
Jan 20 13:55:23 compute-0 cranky_wilbur[74985]:         "num_in_osds": 0,
Jan 20 13:55:23 compute-0 cranky_wilbur[74985]:         "osd_in_since": 0,
Jan 20 13:55:23 compute-0 cranky_wilbur[74985]:         "num_remapped_pgs": 0
Jan 20 13:55:23 compute-0 cranky_wilbur[74985]:     },
Jan 20 13:55:23 compute-0 cranky_wilbur[74985]:     "pgmap": {
Jan 20 13:55:23 compute-0 cranky_wilbur[74985]:         "pgs_by_state": [],
Jan 20 13:55:23 compute-0 cranky_wilbur[74985]:         "num_pgs": 0,
Jan 20 13:55:23 compute-0 cranky_wilbur[74985]:         "num_pools": 0,
Jan 20 13:55:23 compute-0 cranky_wilbur[74985]:         "num_objects": 0,
Jan 20 13:55:23 compute-0 cranky_wilbur[74985]:         "data_bytes": 0,
Jan 20 13:55:23 compute-0 cranky_wilbur[74985]:         "bytes_used": 0,
Jan 20 13:55:23 compute-0 cranky_wilbur[74985]:         "bytes_avail": 0,
Jan 20 13:55:23 compute-0 cranky_wilbur[74985]:         "bytes_total": 0
Jan 20 13:55:23 compute-0 cranky_wilbur[74985]:     },
Jan 20 13:55:23 compute-0 cranky_wilbur[74985]:     "fsmap": {
Jan 20 13:55:23 compute-0 cranky_wilbur[74985]:         "epoch": 1,
Jan 20 13:55:23 compute-0 cranky_wilbur[74985]:         "by_rank": [],
Jan 20 13:55:23 compute-0 cranky_wilbur[74985]:         "up:standby": 0
Jan 20 13:55:23 compute-0 cranky_wilbur[74985]:     },
Jan 20 13:55:23 compute-0 cranky_wilbur[74985]:     "mgrmap": {
Jan 20 13:55:23 compute-0 cranky_wilbur[74985]:         "available": false,
Jan 20 13:55:23 compute-0 cranky_wilbur[74985]:         "num_standbys": 0,
Jan 20 13:55:23 compute-0 cranky_wilbur[74985]:         "modules": [
Jan 20 13:55:23 compute-0 cranky_wilbur[74985]:             "iostat",
Jan 20 13:55:23 compute-0 cranky_wilbur[74985]:             "nfs",
Jan 20 13:55:23 compute-0 cranky_wilbur[74985]:             "restful"
Jan 20 13:55:23 compute-0 cranky_wilbur[74985]:         ],
Jan 20 13:55:23 compute-0 cranky_wilbur[74985]:         "services": {}
Jan 20 13:55:23 compute-0 cranky_wilbur[74985]:     },
Jan 20 13:55:23 compute-0 cranky_wilbur[74985]:     "servicemap": {
Jan 20 13:55:23 compute-0 cranky_wilbur[74985]:         "epoch": 1,
Jan 20 13:55:23 compute-0 cranky_wilbur[74985]:         "modified": "2026-01-20T13:55:05.359486+0000",
Jan 20 13:55:23 compute-0 cranky_wilbur[74985]:         "services": {}
Jan 20 13:55:23 compute-0 cranky_wilbur[74985]:     },
Jan 20 13:55:23 compute-0 cranky_wilbur[74985]:     "progress_events": {}
Jan 20 13:55:23 compute-0 cranky_wilbur[74985]: }
Jan 20 13:55:23 compute-0 systemd[1]: libpod-ba8c689ae60d712165fb86349d6844c3f80ae2013a047b1ac8fb483adebea05d.scope: Deactivated successfully.
Jan 20 13:55:23 compute-0 conmon[74985]: conmon ba8c689ae60d712165fb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ba8c689ae60d712165fb86349d6844c3f80ae2013a047b1ac8fb483adebea05d.scope/container/memory.events
Jan 20 13:55:23 compute-0 podman[75011]: 2026-01-20 13:55:23.537542038 +0000 UTC m=+0.022067913 container died ba8c689ae60d712165fb86349d6844c3f80ae2013a047b1ac8fb483adebea05d (image=quay.io/ceph/ceph:v18, name=cranky_wilbur, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Jan 20 13:55:23 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/282836932' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 20 13:55:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-e6673eb5753c277d44d402f17547a9cabf00ca2702c81183110fb610875525bd-merged.mount: Deactivated successfully.
Jan 20 13:55:23 compute-0 podman[75011]: 2026-01-20 13:55:23.573091677 +0000 UTC m=+0.057617552 container remove ba8c689ae60d712165fb86349d6844c3f80ae2013a047b1ac8fb483adebea05d (image=quay.io/ceph/ceph:v18, name=cranky_wilbur, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 20 13:55:23 compute-0 systemd[1]: libpod-conmon-ba8c689ae60d712165fb86349d6844c3f80ae2013a047b1ac8fb483adebea05d.scope: Deactivated successfully.
Jan 20 13:55:25 compute-0 ceph-mgr[74653]: mgr[py] Module rook has missing NOTIFY_TYPES member
Jan 20 13:55:25 compute-0 ceph-mgr[74653]: mgr[py] Loading python module 'selftest'
Jan 20 13:55:25 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv[74649]: 2026-01-20T13:55:25.054+0000 7f3d8b306140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Jan 20 13:55:25 compute-0 sshd-session[75027]: Connection closed by authenticating user root 157.245.78.139 port 37556 [preauth]
Jan 20 13:55:25 compute-0 ceph-mgr[74653]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Jan 20 13:55:25 compute-0 ceph-mgr[74653]: mgr[py] Loading python module 'snap_schedule'
Jan 20 13:55:25 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv[74649]: 2026-01-20T13:55:25.289+0000 7f3d8b306140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Jan 20 13:55:25 compute-0 ceph-mgr[74653]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Jan 20 13:55:25 compute-0 ceph-mgr[74653]: mgr[py] Loading python module 'stats'
Jan 20 13:55:25 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv[74649]: 2026-01-20T13:55:25.547+0000 7f3d8b306140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Jan 20 13:55:25 compute-0 podman[75029]: 2026-01-20 13:55:25.640539596 +0000 UTC m=+0.039762917 container create 05b664b1126e9b51e9e60c7abf513895c86f5ec5c3e873499f61af93e1a27a62 (image=quay.io/ceph/ceph:v18, name=interesting_black, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 20 13:55:25 compute-0 systemd[1]: Started libpod-conmon-05b664b1126e9b51e9e60c7abf513895c86f5ec5c3e873499f61af93e1a27a62.scope.
Jan 20 13:55:25 compute-0 systemd[1]: Started libcrun container.
Jan 20 13:55:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8513b8a71ae8495b78d62bc097a69d3d85d7c5c51ae049d815de99ac8727f8fe/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 13:55:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8513b8a71ae8495b78d62bc097a69d3d85d7c5c51ae049d815de99ac8727f8fe/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 13:55:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8513b8a71ae8495b78d62bc097a69d3d85d7c5c51ae049d815de99ac8727f8fe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 13:55:25 compute-0 podman[75029]: 2026-01-20 13:55:25.705239465 +0000 UTC m=+0.104462786 container init 05b664b1126e9b51e9e60c7abf513895c86f5ec5c3e873499f61af93e1a27a62 (image=quay.io/ceph/ceph:v18, name=interesting_black, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 20 13:55:25 compute-0 podman[75029]: 2026-01-20 13:55:25.713167111 +0000 UTC m=+0.112390422 container start 05b664b1126e9b51e9e60c7abf513895c86f5ec5c3e873499f61af93e1a27a62 (image=quay.io/ceph/ceph:v18, name=interesting_black, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 20 13:55:25 compute-0 podman[75029]: 2026-01-20 13:55:25.716559079 +0000 UTC m=+0.115782460 container attach 05b664b1126e9b51e9e60c7abf513895c86f5ec5c3e873499f61af93e1a27a62 (image=quay.io/ceph/ceph:v18, name=interesting_black, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True)
Jan 20 13:55:25 compute-0 podman[75029]: 2026-01-20 13:55:25.621171674 +0000 UTC m=+0.020395015 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 20 13:55:25 compute-0 ceph-mgr[74653]: mgr[py] Loading python module 'status'
Jan 20 13:55:26 compute-0 ceph-mgr[74653]: mgr[py] Module status has missing NOTIFY_TYPES member
Jan 20 13:55:26 compute-0 ceph-mgr[74653]: mgr[py] Loading python module 'telegraf'
Jan 20 13:55:26 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv[74649]: 2026-01-20T13:55:26.043+0000 7f3d8b306140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Jan 20 13:55:26 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Jan 20 13:55:26 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4262405375' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 20 13:55:26 compute-0 interesting_black[75045]: 
Jan 20 13:55:26 compute-0 interesting_black[75045]: {
Jan 20 13:55:26 compute-0 interesting_black[75045]:     "fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 13:55:26 compute-0 interesting_black[75045]:     "health": {
Jan 20 13:55:26 compute-0 interesting_black[75045]:         "status": "HEALTH_OK",
Jan 20 13:55:26 compute-0 interesting_black[75045]:         "checks": {},
Jan 20 13:55:26 compute-0 interesting_black[75045]:         "mutes": []
Jan 20 13:55:26 compute-0 interesting_black[75045]:     },
Jan 20 13:55:26 compute-0 interesting_black[75045]:     "election_epoch": 5,
Jan 20 13:55:26 compute-0 interesting_black[75045]:     "quorum": [
Jan 20 13:55:26 compute-0 interesting_black[75045]:         0
Jan 20 13:55:26 compute-0 interesting_black[75045]:     ],
Jan 20 13:55:26 compute-0 interesting_black[75045]:     "quorum_names": [
Jan 20 13:55:26 compute-0 interesting_black[75045]:         "compute-0"
Jan 20 13:55:26 compute-0 interesting_black[75045]:     ],
Jan 20 13:55:26 compute-0 interesting_black[75045]:     "quorum_age": 18,
Jan 20 13:55:26 compute-0 interesting_black[75045]:     "monmap": {
Jan 20 13:55:26 compute-0 interesting_black[75045]:         "epoch": 1,
Jan 20 13:55:26 compute-0 interesting_black[75045]:         "min_mon_release_name": "reef",
Jan 20 13:55:26 compute-0 interesting_black[75045]:         "num_mons": 1
Jan 20 13:55:26 compute-0 interesting_black[75045]:     },
Jan 20 13:55:26 compute-0 interesting_black[75045]:     "osdmap": {
Jan 20 13:55:26 compute-0 interesting_black[75045]:         "epoch": 1,
Jan 20 13:55:26 compute-0 interesting_black[75045]:         "num_osds": 0,
Jan 20 13:55:26 compute-0 interesting_black[75045]:         "num_up_osds": 0,
Jan 20 13:55:26 compute-0 interesting_black[75045]:         "osd_up_since": 0,
Jan 20 13:55:26 compute-0 interesting_black[75045]:         "num_in_osds": 0,
Jan 20 13:55:26 compute-0 interesting_black[75045]:         "osd_in_since": 0,
Jan 20 13:55:26 compute-0 interesting_black[75045]:         "num_remapped_pgs": 0
Jan 20 13:55:26 compute-0 interesting_black[75045]:     },
Jan 20 13:55:26 compute-0 interesting_black[75045]:     "pgmap": {
Jan 20 13:55:26 compute-0 interesting_black[75045]:         "pgs_by_state": [],
Jan 20 13:55:26 compute-0 interesting_black[75045]:         "num_pgs": 0,
Jan 20 13:55:26 compute-0 interesting_black[75045]:         "num_pools": 0,
Jan 20 13:55:26 compute-0 interesting_black[75045]:         "num_objects": 0,
Jan 20 13:55:26 compute-0 interesting_black[75045]:         "data_bytes": 0,
Jan 20 13:55:26 compute-0 interesting_black[75045]:         "bytes_used": 0,
Jan 20 13:55:26 compute-0 interesting_black[75045]:         "bytes_avail": 0,
Jan 20 13:55:26 compute-0 interesting_black[75045]:         "bytes_total": 0
Jan 20 13:55:26 compute-0 interesting_black[75045]:     },
Jan 20 13:55:26 compute-0 interesting_black[75045]:     "fsmap": {
Jan 20 13:55:26 compute-0 interesting_black[75045]:         "epoch": 1,
Jan 20 13:55:26 compute-0 interesting_black[75045]:         "by_rank": [],
Jan 20 13:55:26 compute-0 interesting_black[75045]:         "up:standby": 0
Jan 20 13:55:26 compute-0 interesting_black[75045]:     },
Jan 20 13:55:26 compute-0 interesting_black[75045]:     "mgrmap": {
Jan 20 13:55:26 compute-0 interesting_black[75045]:         "available": false,
Jan 20 13:55:26 compute-0 interesting_black[75045]:         "num_standbys": 0,
Jan 20 13:55:26 compute-0 interesting_black[75045]:         "modules": [
Jan 20 13:55:26 compute-0 interesting_black[75045]:             "iostat",
Jan 20 13:55:26 compute-0 interesting_black[75045]:             "nfs",
Jan 20 13:55:26 compute-0 interesting_black[75045]:             "restful"
Jan 20 13:55:26 compute-0 interesting_black[75045]:         ],
Jan 20 13:55:26 compute-0 interesting_black[75045]:         "services": {}
Jan 20 13:55:26 compute-0 interesting_black[75045]:     },
Jan 20 13:55:26 compute-0 interesting_black[75045]:     "servicemap": {
Jan 20 13:55:26 compute-0 interesting_black[75045]:         "epoch": 1,
Jan 20 13:55:26 compute-0 interesting_black[75045]:         "modified": "2026-01-20T13:55:05.359486+0000",
Jan 20 13:55:26 compute-0 interesting_black[75045]:         "services": {}
Jan 20 13:55:26 compute-0 interesting_black[75045]:     },
Jan 20 13:55:26 compute-0 interesting_black[75045]:     "progress_events": {}
Jan 20 13:55:26 compute-0 interesting_black[75045]: }
Jan 20 13:55:26 compute-0 systemd[1]: libpod-05b664b1126e9b51e9e60c7abf513895c86f5ec5c3e873499f61af93e1a27a62.scope: Deactivated successfully.
Jan 20 13:55:26 compute-0 podman[75029]: 2026-01-20 13:55:26.131272642 +0000 UTC m=+0.530495963 container died 05b664b1126e9b51e9e60c7abf513895c86f5ec5c3e873499f61af93e1a27a62 (image=quay.io/ceph/ceph:v18, name=interesting_black, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 13:55:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-8513b8a71ae8495b78d62bc097a69d3d85d7c5c51ae049d815de99ac8727f8fe-merged.mount: Deactivated successfully.
Jan 20 13:55:26 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/4262405375' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 20 13:55:26 compute-0 podman[75029]: 2026-01-20 13:55:26.17219293 +0000 UTC m=+0.571416241 container remove 05b664b1126e9b51e9e60c7abf513895c86f5ec5c3e873499f61af93e1a27a62 (image=quay.io/ceph/ceph:v18, name=interesting_black, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 20 13:55:26 compute-0 systemd[1]: libpod-conmon-05b664b1126e9b51e9e60c7abf513895c86f5ec5c3e873499f61af93e1a27a62.scope: Deactivated successfully.
Jan 20 13:55:26 compute-0 ceph-mgr[74653]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Jan 20 13:55:26 compute-0 ceph-mgr[74653]: mgr[py] Loading python module 'telemetry'
Jan 20 13:55:26 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv[74649]: 2026-01-20T13:55:26.286+0000 7f3d8b306140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Jan 20 13:55:26 compute-0 ceph-mgr[74653]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Jan 20 13:55:26 compute-0 ceph-mgr[74653]: mgr[py] Loading python module 'test_orchestrator'
Jan 20 13:55:26 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv[74649]: 2026-01-20T13:55:26.869+0000 7f3d8b306140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Jan 20 13:55:27 compute-0 ceph-mgr[74653]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Jan 20 13:55:27 compute-0 ceph-mgr[74653]: mgr[py] Loading python module 'volumes'
Jan 20 13:55:27 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv[74649]: 2026-01-20T13:55:27.518+0000 7f3d8b306140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Jan 20 13:55:28 compute-0 ceph-mgr[74653]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Jan 20 13:55:28 compute-0 ceph-mgr[74653]: mgr[py] Loading python module 'zabbix'
Jan 20 13:55:28 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv[74649]: 2026-01-20T13:55:28.209+0000 7f3d8b306140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Jan 20 13:55:28 compute-0 podman[75084]: 2026-01-20 13:55:28.22511217 +0000 UTC m=+0.034484728 container create 97654d4e7451406f158122fe6684a13134bab0c86a63c6013f77b02414edb41d (image=quay.io/ceph/ceph:v18, name=peaceful_proskuriakova, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 13:55:28 compute-0 systemd[1]: Started libpod-conmon-97654d4e7451406f158122fe6684a13134bab0c86a63c6013f77b02414edb41d.scope.
Jan 20 13:55:28 compute-0 systemd[1]: Started libcrun container.
Jan 20 13:55:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/312c75d902d2898a55b4d67a7ce0cca7f058d7d206120bf38c57e78d28b55fb9/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 13:55:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/312c75d902d2898a55b4d67a7ce0cca7f058d7d206120bf38c57e78d28b55fb9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 13:55:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/312c75d902d2898a55b4d67a7ce0cca7f058d7d206120bf38c57e78d28b55fb9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 13:55:28 compute-0 podman[75084]: 2026-01-20 13:55:28.287009249 +0000 UTC m=+0.096381827 container init 97654d4e7451406f158122fe6684a13134bab0c86a63c6013f77b02414edb41d (image=quay.io/ceph/ceph:v18, name=peaceful_proskuriakova, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 20 13:55:28 compute-0 podman[75084]: 2026-01-20 13:55:28.291492748 +0000 UTC m=+0.100865306 container start 97654d4e7451406f158122fe6684a13134bab0c86a63c6013f77b02414edb41d (image=quay.io/ceph/ceph:v18, name=peaceful_proskuriakova, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 20 13:55:28 compute-0 podman[75084]: 2026-01-20 13:55:28.294344689 +0000 UTC m=+0.103717247 container attach 97654d4e7451406f158122fe6684a13134bab0c86a63c6013f77b02414edb41d (image=quay.io/ceph/ceph:v18, name=peaceful_proskuriakova, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 20 13:55:28 compute-0 podman[75084]: 2026-01-20 13:55:28.210370109 +0000 UTC m=+0.019742697 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 20 13:55:28 compute-0 ceph-mgr[74653]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Jan 20 13:55:28 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv[74649]: 2026-01-20T13:55:28.448+0000 7f3d8b306140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Jan 20 13:55:28 compute-0 ceph-mgr[74653]: ms_deliver_dispatch: unhandled message 0x558df034cf20 mon_map magic: 0 v1 from mon.0 v2:192.168.122.100:3300/0
Jan 20 13:55:28 compute-0 ceph-mon[74360]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.wookjv
Jan 20 13:55:28 compute-0 ceph-mgr[74653]: mgr handle_mgr_map Activating!
Jan 20 13:55:28 compute-0 ceph-mgr[74653]: mgr handle_mgr_map I am now activating
Jan 20 13:55:28 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : mgrmap e2: compute-0.wookjv(active, starting, since 0.01298s)
Jan 20 13:55:28 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0) v1
Jan 20 13:55:28 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/2564049595' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "mds metadata"}]: dispatch
Jan 20 13:55:28 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).mds e1 all = 1
Jan 20 13:55:28 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Jan 20 13:55:28 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/2564049595' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd metadata"}]: dispatch
Jan 20 13:55:28 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0) v1
Jan 20 13:55:28 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/2564049595' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "mon metadata"}]: dispatch
Jan 20 13:55:28 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Jan 20 13:55:28 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/2564049595' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Jan 20 13:55:28 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.wookjv", "id": "compute-0.wookjv"} v 0) v1
Jan 20 13:55:28 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/2564049595' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "mgr metadata", "who": "compute-0.wookjv", "id": "compute-0.wookjv"}]: dispatch
Jan 20 13:55:28 compute-0 ceph-mgr[74653]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 20 13:55:28 compute-0 ceph-mgr[74653]: mgr load Constructed class from module: balancer
Jan 20 13:55:28 compute-0 ceph-mgr[74653]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 20 13:55:28 compute-0 ceph-mgr[74653]: mgr load Constructed class from module: crash
Jan 20 13:55:28 compute-0 ceph-mon[74360]: log_channel(cluster) log [INF] : Manager daemon compute-0.wookjv is now available
Jan 20 13:55:28 compute-0 ceph-mgr[74653]: [balancer INFO root] Starting
Jan 20 13:55:28 compute-0 ceph-mgr[74653]: [balancer INFO root] Optimize plan auto_2026-01-20_13:55:28
Jan 20 13:55:28 compute-0 ceph-mgr[74653]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 13:55:28 compute-0 ceph-mgr[74653]: [balancer INFO root] do_upmap
Jan 20 13:55:28 compute-0 ceph-mgr[74653]: [balancer INFO root] No pools available
Jan 20 13:55:28 compute-0 ceph-mgr[74653]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 20 13:55:28 compute-0 ceph-mgr[74653]: mgr load Constructed class from module: devicehealth
Jan 20 13:55:28 compute-0 ceph-mgr[74653]: [devicehealth INFO root] Starting
Jan 20 13:55:28 compute-0 ceph-mgr[74653]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 20 13:55:28 compute-0 ceph-mgr[74653]: mgr load Constructed class from module: iostat
Jan 20 13:55:28 compute-0 ceph-mgr[74653]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 20 13:55:28 compute-0 ceph-mgr[74653]: mgr load Constructed class from module: nfs
Jan 20 13:55:28 compute-0 ceph-mgr[74653]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 20 13:55:28 compute-0 ceph-mgr[74653]: mgr load Constructed class from module: orchestrator
Jan 20 13:55:28 compute-0 ceph-mgr[74653]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 20 13:55:28 compute-0 ceph-mgr[74653]: mgr load Constructed class from module: pg_autoscaler
Jan 20 13:55:28 compute-0 ceph-mgr[74653]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 20 13:55:28 compute-0 ceph-mgr[74653]: mgr load Constructed class from module: progress
Jan 20 13:55:28 compute-0 ceph-mgr[74653]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 20 13:55:28 compute-0 ceph-mgr[74653]: [progress INFO root] Loading...
Jan 20 13:55:28 compute-0 ceph-mgr[74653]: [progress INFO root] No stored events to load
Jan 20 13:55:28 compute-0 ceph-mgr[74653]: [progress INFO root] Loaded [] historic events
Jan 20 13:55:28 compute-0 ceph-mgr[74653]: [progress INFO root] Loaded OSDMap, ready.
Jan 20 13:55:28 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 13:55:28 compute-0 ceph-mgr[74653]: [rbd_support INFO root] recovery thread starting
Jan 20 13:55:28 compute-0 ceph-mgr[74653]: [rbd_support INFO root] starting setup
Jan 20 13:55:28 compute-0 ceph-mgr[74653]: mgr load Constructed class from module: rbd_support
Jan 20 13:55:28 compute-0 ceph-mgr[74653]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 20 13:55:28 compute-0 ceph-mgr[74653]: mgr load Constructed class from module: restful
Jan 20 13:55:28 compute-0 ceph-mgr[74653]: [restful INFO root] server_addr: :: server_port: 8003
Jan 20 13:55:28 compute-0 ceph-mgr[74653]: [restful WARNING root] server not running: no certificate configured
Jan 20 13:55:28 compute-0 ceph-mgr[74653]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 20 13:55:28 compute-0 ceph-mgr[74653]: mgr load Constructed class from module: status
Jan 20 13:55:28 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.wookjv/mirror_snapshot_schedule"} v 0) v1
Jan 20 13:55:28 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/2564049595' entity='mgr.compute-0.wookjv' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.wookjv/mirror_snapshot_schedule"}]: dispatch
Jan 20 13:55:28 compute-0 ceph-mgr[74653]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 20 13:55:28 compute-0 ceph-mgr[74653]: mgr load Constructed class from module: telemetry
Jan 20 13:55:28 compute-0 ceph-mgr[74653]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 13:55:28 compute-0 ceph-mgr[74653]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Jan 20 13:55:28 compute-0 ceph-mgr[74653]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 20 13:55:28 compute-0 ceph-mgr[74653]: [rbd_support INFO root] PerfHandler: starting
Jan 20 13:55:28 compute-0 ceph-mgr[74653]: [rbd_support INFO root] TaskHandler: starting
Jan 20 13:55:28 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/report_id}] v 0) v1
Jan 20 13:55:28 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.wookjv/trash_purge_schedule"} v 0) v1
Jan 20 13:55:28 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/2564049595' entity='mgr.compute-0.wookjv' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.wookjv/trash_purge_schedule"}]: dispatch
Jan 20 13:55:28 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/2564049595' entity='mgr.compute-0.wookjv' 
Jan 20 13:55:28 compute-0 ceph-mgr[74653]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 13:55:28 compute-0 ceph-mgr[74653]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Jan 20 13:55:28 compute-0 ceph-mgr[74653]: [rbd_support INFO root] setup complete
Jan 20 13:55:28 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/salt}] v 0) v1
Jan 20 13:55:28 compute-0 ceph-mgr[74653]: mgr load Constructed class from module: volumes
Jan 20 13:55:28 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/2564049595' entity='mgr.compute-0.wookjv' 
Jan 20 13:55:28 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/collection}] v 0) v1
Jan 20 13:55:28 compute-0 ceph-mon[74360]: Activating manager daemon compute-0.wookjv
Jan 20 13:55:28 compute-0 ceph-mon[74360]: mgrmap e2: compute-0.wookjv(active, starting, since 0.01298s)
Jan 20 13:55:28 compute-0 ceph-mon[74360]: from='mgr.14102 192.168.122.100:0/2564049595' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "mds metadata"}]: dispatch
Jan 20 13:55:28 compute-0 ceph-mon[74360]: from='mgr.14102 192.168.122.100:0/2564049595' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd metadata"}]: dispatch
Jan 20 13:55:28 compute-0 ceph-mon[74360]: from='mgr.14102 192.168.122.100:0/2564049595' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "mon metadata"}]: dispatch
Jan 20 13:55:28 compute-0 ceph-mon[74360]: from='mgr.14102 192.168.122.100:0/2564049595' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Jan 20 13:55:28 compute-0 ceph-mon[74360]: from='mgr.14102 192.168.122.100:0/2564049595' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "mgr metadata", "who": "compute-0.wookjv", "id": "compute-0.wookjv"}]: dispatch
Jan 20 13:55:28 compute-0 ceph-mon[74360]: Manager daemon compute-0.wookjv is now available
Jan 20 13:55:28 compute-0 ceph-mon[74360]: from='mgr.14102 192.168.122.100:0/2564049595' entity='mgr.compute-0.wookjv' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.wookjv/mirror_snapshot_schedule"}]: dispatch
Jan 20 13:55:28 compute-0 ceph-mon[74360]: from='mgr.14102 192.168.122.100:0/2564049595' entity='mgr.compute-0.wookjv' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.wookjv/trash_purge_schedule"}]: dispatch
Jan 20 13:55:28 compute-0 ceph-mon[74360]: from='mgr.14102 192.168.122.100:0/2564049595' entity='mgr.compute-0.wookjv' 
Jan 20 13:55:28 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/2564049595' entity='mgr.compute-0.wookjv' 
Jan 20 13:55:28 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Jan 20 13:55:28 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1360676413' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 20 13:55:28 compute-0 peaceful_proskuriakova[75100]: 
Jan 20 13:55:28 compute-0 peaceful_proskuriakova[75100]: {
Jan 20 13:55:28 compute-0 peaceful_proskuriakova[75100]:     "fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 13:55:28 compute-0 peaceful_proskuriakova[75100]:     "health": {
Jan 20 13:55:28 compute-0 peaceful_proskuriakova[75100]:         "status": "HEALTH_OK",
Jan 20 13:55:28 compute-0 peaceful_proskuriakova[75100]:         "checks": {},
Jan 20 13:55:28 compute-0 peaceful_proskuriakova[75100]:         "mutes": []
Jan 20 13:55:28 compute-0 peaceful_proskuriakova[75100]:     },
Jan 20 13:55:28 compute-0 peaceful_proskuriakova[75100]:     "election_epoch": 5,
Jan 20 13:55:28 compute-0 peaceful_proskuriakova[75100]:     "quorum": [
Jan 20 13:55:28 compute-0 peaceful_proskuriakova[75100]:         0
Jan 20 13:55:28 compute-0 peaceful_proskuriakova[75100]:     ],
Jan 20 13:55:28 compute-0 peaceful_proskuriakova[75100]:     "quorum_names": [
Jan 20 13:55:28 compute-0 peaceful_proskuriakova[75100]:         "compute-0"
Jan 20 13:55:28 compute-0 peaceful_proskuriakova[75100]:     ],
Jan 20 13:55:28 compute-0 peaceful_proskuriakova[75100]:     "quorum_age": 20,
Jan 20 13:55:28 compute-0 peaceful_proskuriakova[75100]:     "monmap": {
Jan 20 13:55:28 compute-0 peaceful_proskuriakova[75100]:         "epoch": 1,
Jan 20 13:55:28 compute-0 peaceful_proskuriakova[75100]:         "min_mon_release_name": "reef",
Jan 20 13:55:28 compute-0 peaceful_proskuriakova[75100]:         "num_mons": 1
Jan 20 13:55:28 compute-0 peaceful_proskuriakova[75100]:     },
Jan 20 13:55:28 compute-0 peaceful_proskuriakova[75100]:     "osdmap": {
Jan 20 13:55:28 compute-0 peaceful_proskuriakova[75100]:         "epoch": 1,
Jan 20 13:55:28 compute-0 peaceful_proskuriakova[75100]:         "num_osds": 0,
Jan 20 13:55:28 compute-0 peaceful_proskuriakova[75100]:         "num_up_osds": 0,
Jan 20 13:55:28 compute-0 peaceful_proskuriakova[75100]:         "osd_up_since": 0,
Jan 20 13:55:28 compute-0 peaceful_proskuriakova[75100]:         "num_in_osds": 0,
Jan 20 13:55:28 compute-0 peaceful_proskuriakova[75100]:         "osd_in_since": 0,
Jan 20 13:55:28 compute-0 peaceful_proskuriakova[75100]:         "num_remapped_pgs": 0
Jan 20 13:55:28 compute-0 peaceful_proskuriakova[75100]:     },
Jan 20 13:55:28 compute-0 peaceful_proskuriakova[75100]:     "pgmap": {
Jan 20 13:55:28 compute-0 peaceful_proskuriakova[75100]:         "pgs_by_state": [],
Jan 20 13:55:28 compute-0 peaceful_proskuriakova[75100]:         "num_pgs": 0,
Jan 20 13:55:28 compute-0 peaceful_proskuriakova[75100]:         "num_pools": 0,
Jan 20 13:55:28 compute-0 peaceful_proskuriakova[75100]:         "num_objects": 0,
Jan 20 13:55:28 compute-0 peaceful_proskuriakova[75100]:         "data_bytes": 0,
Jan 20 13:55:28 compute-0 peaceful_proskuriakova[75100]:         "bytes_used": 0,
Jan 20 13:55:28 compute-0 peaceful_proskuriakova[75100]:         "bytes_avail": 0,
Jan 20 13:55:28 compute-0 peaceful_proskuriakova[75100]:         "bytes_total": 0
Jan 20 13:55:28 compute-0 peaceful_proskuriakova[75100]:     },
Jan 20 13:55:28 compute-0 peaceful_proskuriakova[75100]:     "fsmap": {
Jan 20 13:55:28 compute-0 peaceful_proskuriakova[75100]:         "epoch": 1,
Jan 20 13:55:28 compute-0 peaceful_proskuriakova[75100]:         "by_rank": [],
Jan 20 13:55:28 compute-0 peaceful_proskuriakova[75100]:         "up:standby": 0
Jan 20 13:55:28 compute-0 peaceful_proskuriakova[75100]:     },
Jan 20 13:55:28 compute-0 peaceful_proskuriakova[75100]:     "mgrmap": {
Jan 20 13:55:28 compute-0 peaceful_proskuriakova[75100]:         "available": false,
Jan 20 13:55:28 compute-0 peaceful_proskuriakova[75100]:         "num_standbys": 0,
Jan 20 13:55:28 compute-0 peaceful_proskuriakova[75100]:         "modules": [
Jan 20 13:55:28 compute-0 peaceful_proskuriakova[75100]:             "iostat",
Jan 20 13:55:28 compute-0 peaceful_proskuriakova[75100]:             "nfs",
Jan 20 13:55:28 compute-0 peaceful_proskuriakova[75100]:             "restful"
Jan 20 13:55:28 compute-0 peaceful_proskuriakova[75100]:         ],
Jan 20 13:55:28 compute-0 peaceful_proskuriakova[75100]:         "services": {}
Jan 20 13:55:28 compute-0 peaceful_proskuriakova[75100]:     },
Jan 20 13:55:28 compute-0 peaceful_proskuriakova[75100]:     "servicemap": {
Jan 20 13:55:28 compute-0 peaceful_proskuriakova[75100]:         "epoch": 1,
Jan 20 13:55:28 compute-0 peaceful_proskuriakova[75100]:         "modified": "2026-01-20T13:55:05.359486+0000",
Jan 20 13:55:28 compute-0 peaceful_proskuriakova[75100]:         "services": {}
Jan 20 13:55:28 compute-0 peaceful_proskuriakova[75100]:     },
Jan 20 13:55:28 compute-0 peaceful_proskuriakova[75100]:     "progress_events": {}
Jan 20 13:55:28 compute-0 peaceful_proskuriakova[75100]: }
Jan 20 13:55:28 compute-0 systemd[1]: libpod-97654d4e7451406f158122fe6684a13134bab0c86a63c6013f77b02414edb41d.scope: Deactivated successfully.
Jan 20 13:55:28 compute-0 podman[75084]: 2026-01-20 13:55:28.669532639 +0000 UTC m=+0.478905197 container died 97654d4e7451406f158122fe6684a13134bab0c86a63c6013f77b02414edb41d (image=quay.io/ceph/ceph:v18, name=peaceful_proskuriakova, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 13:55:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-312c75d902d2898a55b4d67a7ce0cca7f058d7d206120bf38c57e78d28b55fb9-merged.mount: Deactivated successfully.
Jan 20 13:55:28 compute-0 podman[75084]: 2026-01-20 13:55:28.704761575 +0000 UTC m=+0.514134133 container remove 97654d4e7451406f158122fe6684a13134bab0c86a63c6013f77b02414edb41d (image=quay.io/ceph/ceph:v18, name=peaceful_proskuriakova, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 13:55:28 compute-0 systemd[1]: libpod-conmon-97654d4e7451406f158122fe6684a13134bab0c86a63c6013f77b02414edb41d.scope: Deactivated successfully.
Jan 20 13:55:30 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : mgrmap e3: compute-0.wookjv(active, since 1.86526s)
Jan 20 13:55:30 compute-0 ceph-mgr[74653]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 20 13:55:30 compute-0 ceph-mon[74360]: from='mgr.14102 192.168.122.100:0/2564049595' entity='mgr.compute-0.wookjv' 
Jan 20 13:55:30 compute-0 ceph-mon[74360]: from='mgr.14102 192.168.122.100:0/2564049595' entity='mgr.compute-0.wookjv' 
Jan 20 13:55:30 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/1360676413' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 20 13:55:30 compute-0 podman[75217]: 2026-01-20 13:55:30.748423963 +0000 UTC m=+0.022304705 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 20 13:55:30 compute-0 podman[75217]: 2026-01-20 13:55:30.936425793 +0000 UTC m=+0.210306505 container create 126ef180c6405423bc91d3fb35dccb9a62404c1b3988e74021ec7093846fc40d (image=quay.io/ceph/ceph:v18, name=confident_kilby, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 20 13:55:30 compute-0 systemd[1]: Started libpod-conmon-126ef180c6405423bc91d3fb35dccb9a62404c1b3988e74021ec7093846fc40d.scope.
Jan 20 13:55:30 compute-0 systemd[1]: Started libcrun container.
Jan 20 13:55:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bfd906075733d60c6c9d8b52e6c80486e7fc5482e0e5a3c3ce53027a9b92094/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 13:55:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bfd906075733d60c6c9d8b52e6c80486e7fc5482e0e5a3c3ce53027a9b92094/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 13:55:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bfd906075733d60c6c9d8b52e6c80486e7fc5482e0e5a3c3ce53027a9b92094/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 13:55:31 compute-0 podman[75217]: 2026-01-20 13:55:31.006580471 +0000 UTC m=+0.280461193 container init 126ef180c6405423bc91d3fb35dccb9a62404c1b3988e74021ec7093846fc40d (image=quay.io/ceph/ceph:v18, name=confident_kilby, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 20 13:55:31 compute-0 podman[75217]: 2026-01-20 13:55:31.011362203 +0000 UTC m=+0.285242925 container start 126ef180c6405423bc91d3fb35dccb9a62404c1b3988e74021ec7093846fc40d (image=quay.io/ceph/ceph:v18, name=confident_kilby, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 20 13:55:31 compute-0 podman[75217]: 2026-01-20 13:55:31.014508238 +0000 UTC m=+0.288388940 container attach 126ef180c6405423bc91d3fb35dccb9a62404c1b3988e74021ec7093846fc40d (image=quay.io/ceph/ceph:v18, name=confident_kilby, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 13:55:31 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : mgrmap e4: compute-0.wookjv(active, since 2s)
Jan 20 13:55:31 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Jan 20 13:55:31 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1549045892' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 20 13:55:31 compute-0 confident_kilby[75233]: 
Jan 20 13:55:31 compute-0 confident_kilby[75233]: {
Jan 20 13:55:31 compute-0 confident_kilby[75233]:     "fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 13:55:31 compute-0 confident_kilby[75233]:     "health": {
Jan 20 13:55:31 compute-0 confident_kilby[75233]:         "status": "HEALTH_OK",
Jan 20 13:55:31 compute-0 confident_kilby[75233]:         "checks": {},
Jan 20 13:55:31 compute-0 confident_kilby[75233]:         "mutes": []
Jan 20 13:55:31 compute-0 confident_kilby[75233]:     },
Jan 20 13:55:31 compute-0 confident_kilby[75233]:     "election_epoch": 5,
Jan 20 13:55:31 compute-0 confident_kilby[75233]:     "quorum": [
Jan 20 13:55:31 compute-0 confident_kilby[75233]:         0
Jan 20 13:55:31 compute-0 confident_kilby[75233]:     ],
Jan 20 13:55:31 compute-0 confident_kilby[75233]:     "quorum_names": [
Jan 20 13:55:31 compute-0 confident_kilby[75233]:         "compute-0"
Jan 20 13:55:31 compute-0 confident_kilby[75233]:     ],
Jan 20 13:55:31 compute-0 confident_kilby[75233]:     "quorum_age": 23,
Jan 20 13:55:31 compute-0 confident_kilby[75233]:     "monmap": {
Jan 20 13:55:31 compute-0 confident_kilby[75233]:         "epoch": 1,
Jan 20 13:55:31 compute-0 confident_kilby[75233]:         "min_mon_release_name": "reef",
Jan 20 13:55:31 compute-0 confident_kilby[75233]:         "num_mons": 1
Jan 20 13:55:31 compute-0 confident_kilby[75233]:     },
Jan 20 13:55:31 compute-0 confident_kilby[75233]:     "osdmap": {
Jan 20 13:55:31 compute-0 confident_kilby[75233]:         "epoch": 1,
Jan 20 13:55:31 compute-0 confident_kilby[75233]:         "num_osds": 0,
Jan 20 13:55:31 compute-0 confident_kilby[75233]:         "num_up_osds": 0,
Jan 20 13:55:31 compute-0 confident_kilby[75233]:         "osd_up_since": 0,
Jan 20 13:55:31 compute-0 confident_kilby[75233]:         "num_in_osds": 0,
Jan 20 13:55:31 compute-0 confident_kilby[75233]:         "osd_in_since": 0,
Jan 20 13:55:31 compute-0 confident_kilby[75233]:         "num_remapped_pgs": 0
Jan 20 13:55:31 compute-0 confident_kilby[75233]:     },
Jan 20 13:55:31 compute-0 confident_kilby[75233]:     "pgmap": {
Jan 20 13:55:31 compute-0 confident_kilby[75233]:         "pgs_by_state": [],
Jan 20 13:55:31 compute-0 confident_kilby[75233]:         "num_pgs": 0,
Jan 20 13:55:31 compute-0 confident_kilby[75233]:         "num_pools": 0,
Jan 20 13:55:31 compute-0 confident_kilby[75233]:         "num_objects": 0,
Jan 20 13:55:31 compute-0 confident_kilby[75233]:         "data_bytes": 0,
Jan 20 13:55:31 compute-0 confident_kilby[75233]:         "bytes_used": 0,
Jan 20 13:55:31 compute-0 confident_kilby[75233]:         "bytes_avail": 0,
Jan 20 13:55:31 compute-0 confident_kilby[75233]:         "bytes_total": 0
Jan 20 13:55:31 compute-0 confident_kilby[75233]:     },
Jan 20 13:55:31 compute-0 confident_kilby[75233]:     "fsmap": {
Jan 20 13:55:31 compute-0 confident_kilby[75233]:         "epoch": 1,
Jan 20 13:55:31 compute-0 confident_kilby[75233]:         "by_rank": [],
Jan 20 13:55:31 compute-0 confident_kilby[75233]:         "up:standby": 0
Jan 20 13:55:31 compute-0 confident_kilby[75233]:     },
Jan 20 13:55:31 compute-0 confident_kilby[75233]:     "mgrmap": {
Jan 20 13:55:31 compute-0 confident_kilby[75233]:         "available": true,
Jan 20 13:55:31 compute-0 confident_kilby[75233]:         "num_standbys": 0,
Jan 20 13:55:31 compute-0 confident_kilby[75233]:         "modules": [
Jan 20 13:55:31 compute-0 confident_kilby[75233]:             "iostat",
Jan 20 13:55:31 compute-0 confident_kilby[75233]:             "nfs",
Jan 20 13:55:31 compute-0 confident_kilby[75233]:             "restful"
Jan 20 13:55:31 compute-0 confident_kilby[75233]:         ],
Jan 20 13:55:31 compute-0 confident_kilby[75233]:         "services": {}
Jan 20 13:55:31 compute-0 confident_kilby[75233]:     },
Jan 20 13:55:31 compute-0 confident_kilby[75233]:     "servicemap": {
Jan 20 13:55:31 compute-0 confident_kilby[75233]:         "epoch": 1,
Jan 20 13:55:31 compute-0 confident_kilby[75233]:         "modified": "2026-01-20T13:55:05.359486+0000",
Jan 20 13:55:31 compute-0 confident_kilby[75233]:         "services": {}
Jan 20 13:55:31 compute-0 confident_kilby[75233]:     },
Jan 20 13:55:31 compute-0 confident_kilby[75233]:     "progress_events": {}
Jan 20 13:55:31 compute-0 confident_kilby[75233]: }
Jan 20 13:55:31 compute-0 systemd[1]: libpod-126ef180c6405423bc91d3fb35dccb9a62404c1b3988e74021ec7093846fc40d.scope: Deactivated successfully.
Jan 20 13:55:31 compute-0 podman[75217]: 2026-01-20 13:55:31.61075216 +0000 UTC m=+0.884632892 container died 126ef180c6405423bc91d3fb35dccb9a62404c1b3988e74021ec7093846fc40d (image=quay.io/ceph/ceph:v18, name=confident_kilby, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 13:55:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-5bfd906075733d60c6c9d8b52e6c80486e7fc5482e0e5a3c3ce53027a9b92094-merged.mount: Deactivated successfully.
Jan 20 13:55:31 compute-0 podman[75217]: 2026-01-20 13:55:31.655166376 +0000 UTC m=+0.929047098 container remove 126ef180c6405423bc91d3fb35dccb9a62404c1b3988e74021ec7093846fc40d (image=quay.io/ceph/ceph:v18, name=confident_kilby, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2)
Jan 20 13:55:31 compute-0 systemd[1]: libpod-conmon-126ef180c6405423bc91d3fb35dccb9a62404c1b3988e74021ec7093846fc40d.scope: Deactivated successfully.
Jan 20 13:55:31 compute-0 ceph-mon[74360]: mgrmap e3: compute-0.wookjv(active, since 1.86526s)
Jan 20 13:55:31 compute-0 ceph-mon[74360]: mgrmap e4: compute-0.wookjv(active, since 2s)
Jan 20 13:55:31 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/1549045892' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 20 13:55:31 compute-0 podman[75271]: 2026-01-20 13:55:31.727676021 +0000 UTC m=+0.042173793 container create 7d48773da147979387d604d965c13d66b08a74746f0762804648b7ac3cf7ac31 (image=quay.io/ceph/ceph:v18, name=flamboyant_thompson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 13:55:31 compute-0 systemd[1]: Started libpod-conmon-7d48773da147979387d604d965c13d66b08a74746f0762804648b7ac3cf7ac31.scope.
Jan 20 13:55:31 compute-0 systemd[1]: Started libcrun container.
Jan 20 13:55:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b46c52834dbbafe8bcff2699f2fe59e221b2f0ea86c7f14177491b6932f6497f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 13:55:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b46c52834dbbafe8bcff2699f2fe59e221b2f0ea86c7f14177491b6932f6497f/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 13:55:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b46c52834dbbafe8bcff2699f2fe59e221b2f0ea86c7f14177491b6932f6497f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 13:55:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b46c52834dbbafe8bcff2699f2fe59e221b2f0ea86c7f14177491b6932f6497f/merged/var/lib/ceph/user.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 13:55:31 compute-0 podman[75271]: 2026-01-20 13:55:31.796230942 +0000 UTC m=+0.110728734 container init 7d48773da147979387d604d965c13d66b08a74746f0762804648b7ac3cf7ac31 (image=quay.io/ceph/ceph:v18, name=flamboyant_thompson, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 20 13:55:31 compute-0 podman[75271]: 2026-01-20 13:55:31.803849336 +0000 UTC m=+0.118347108 container start 7d48773da147979387d604d965c13d66b08a74746f0762804648b7ac3cf7ac31 (image=quay.io/ceph/ceph:v18, name=flamboyant_thompson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 13:55:31 compute-0 podman[75271]: 2026-01-20 13:55:31.711750667 +0000 UTC m=+0.026248459 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 20 13:55:31 compute-0 podman[75271]: 2026-01-20 13:55:31.80702943 +0000 UTC m=+0.121527232 container attach 7d48773da147979387d604d965c13d66b08a74746f0762804648b7ac3cf7ac31 (image=quay.io/ceph/ceph:v18, name=flamboyant_thompson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 20 13:55:32 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Jan 20 13:55:32 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1201799019' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Jan 20 13:55:32 compute-0 systemd[1]: libpod-7d48773da147979387d604d965c13d66b08a74746f0762804648b7ac3cf7ac31.scope: Deactivated successfully.
Jan 20 13:55:32 compute-0 conmon[75288]: conmon 7d48773da147979387d6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7d48773da147979387d604d965c13d66b08a74746f0762804648b7ac3cf7ac31.scope/container/memory.events
Jan 20 13:55:32 compute-0 podman[75271]: 2026-01-20 13:55:32.349398072 +0000 UTC m=+0.663895864 container died 7d48773da147979387d604d965c13d66b08a74746f0762804648b7ac3cf7ac31 (image=quay.io/ceph/ceph:v18, name=flamboyant_thompson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507)
Jan 20 13:55:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-b46c52834dbbafe8bcff2699f2fe59e221b2f0ea86c7f14177491b6932f6497f-merged.mount: Deactivated successfully.
Jan 20 13:55:32 compute-0 podman[75271]: 2026-01-20 13:55:32.409356199 +0000 UTC m=+0.723853971 container remove 7d48773da147979387d604d965c13d66b08a74746f0762804648b7ac3cf7ac31 (image=quay.io/ceph/ceph:v18, name=flamboyant_thompson, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 20 13:55:32 compute-0 systemd[1]: libpod-conmon-7d48773da147979387d604d965c13d66b08a74746f0762804648b7ac3cf7ac31.scope: Deactivated successfully.
Jan 20 13:55:32 compute-0 ceph-mgr[74653]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 20 13:55:32 compute-0 podman[75329]: 2026-01-20 13:55:32.497358793 +0000 UTC m=+0.060737847 container create af424d3d3cf377cae62b3ee4344135e7299f31d39acd78dba4d0ee2efe9b9199 (image=quay.io/ceph/ceph:v18, name=musing_poitras, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 20 13:55:32 compute-0 systemd[1]: Started libpod-conmon-af424d3d3cf377cae62b3ee4344135e7299f31d39acd78dba4d0ee2efe9b9199.scope.
Jan 20 13:55:32 compute-0 systemd[1]: Started libcrun container.
Jan 20 13:55:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/349ed3f1225b6e5ebfd3d47c3050ea5267ea94db63c68716abd070b0a521e47f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 13:55:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/349ed3f1225b6e5ebfd3d47c3050ea5267ea94db63c68716abd070b0a521e47f/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 13:55:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/349ed3f1225b6e5ebfd3d47c3050ea5267ea94db63c68716abd070b0a521e47f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 13:55:32 compute-0 podman[75329]: 2026-01-20 13:55:32.558745786 +0000 UTC m=+0.122124930 container init af424d3d3cf377cae62b3ee4344135e7299f31d39acd78dba4d0ee2efe9b9199 (image=quay.io/ceph/ceph:v18, name=musing_poitras, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 13:55:32 compute-0 podman[75329]: 2026-01-20 13:55:32.564024223 +0000 UTC m=+0.127403287 container start af424d3d3cf377cae62b3ee4344135e7299f31d39acd78dba4d0ee2efe9b9199 (image=quay.io/ceph/ceph:v18, name=musing_poitras, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 20 13:55:32 compute-0 podman[75329]: 2026-01-20 13:55:32.471672511 +0000 UTC m=+0.035051655 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 20 13:55:32 compute-0 podman[75329]: 2026-01-20 13:55:32.567197168 +0000 UTC m=+0.130576262 container attach af424d3d3cf377cae62b3ee4344135e7299f31d39acd78dba4d0ee2efe9b9199 (image=quay.io/ceph/ceph:v18, name=musing_poitras, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Jan 20 13:55:32 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/1201799019' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Jan 20 13:55:33 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module enable", "module": "cephadm"} v 0) v1
Jan 20 13:55:33 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3171722498' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Jan 20 13:55:33 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3171722498' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Jan 20 13:55:33 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3171722498' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Jan 20 13:55:33 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : mgrmap e5: compute-0.wookjv(active, since 5s)
Jan 20 13:55:33 compute-0 ceph-mgr[74653]: mgr handle_mgr_map respawning because set of enabled modules changed!
Jan 20 13:55:33 compute-0 ceph-mgr[74653]: mgr respawn  e: '/usr/bin/ceph-mgr'
Jan 20 13:55:33 compute-0 ceph-mgr[74653]: mgr respawn  0: '/usr/bin/ceph-mgr'
Jan 20 13:55:33 compute-0 ceph-mgr[74653]: mgr respawn  1: '-n'
Jan 20 13:55:33 compute-0 ceph-mgr[74653]: mgr respawn  2: 'mgr.compute-0.wookjv'
Jan 20 13:55:33 compute-0 ceph-mgr[74653]: mgr respawn  3: '-f'
Jan 20 13:55:33 compute-0 ceph-mgr[74653]: mgr respawn  4: '--setuser'
Jan 20 13:55:33 compute-0 ceph-mgr[74653]: mgr respawn  5: 'ceph'
Jan 20 13:55:33 compute-0 ceph-mgr[74653]: mgr respawn  6: '--setgroup'
Jan 20 13:55:33 compute-0 ceph-mgr[74653]: mgr respawn  7: 'ceph'
Jan 20 13:55:33 compute-0 ceph-mgr[74653]: mgr respawn  8: '--default-log-to-file=false'
Jan 20 13:55:33 compute-0 ceph-mgr[74653]: mgr respawn  9: '--default-log-to-journald=true'
Jan 20 13:55:33 compute-0 ceph-mgr[74653]: mgr respawn  10: '--default-log-to-stderr=false'
Jan 20 13:55:33 compute-0 ceph-mgr[74653]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Jan 20 13:55:33 compute-0 ceph-mgr[74653]: mgr respawn  exe_path /proc/self/exe
Jan 20 13:55:33 compute-0 systemd[1]: libpod-af424d3d3cf377cae62b3ee4344135e7299f31d39acd78dba4d0ee2efe9b9199.scope: Deactivated successfully.
Jan 20 13:55:33 compute-0 conmon[75345]: conmon af424d3d3cf377cae62b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-af424d3d3cf377cae62b3ee4344135e7299f31d39acd78dba4d0ee2efe9b9199.scope/container/memory.events
Jan 20 13:55:33 compute-0 podman[75329]: 2026-01-20 13:55:33.719067666 +0000 UTC m=+1.282446740 container died af424d3d3cf377cae62b3ee4344135e7299f31d39acd78dba4d0ee2efe9b9199 (image=quay.io/ceph/ceph:v18, name=musing_poitras, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 13:55:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-349ed3f1225b6e5ebfd3d47c3050ea5267ea94db63c68716abd070b0a521e47f-merged.mount: Deactivated successfully.
Jan 20 13:55:33 compute-0 podman[75329]: 2026-01-20 13:55:33.766632518 +0000 UTC m=+1.330011592 container remove af424d3d3cf377cae62b3ee4344135e7299f31d39acd78dba4d0ee2efe9b9199 (image=quay.io/ceph/ceph:v18, name=musing_poitras, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 20 13:55:33 compute-0 systemd[1]: libpod-conmon-af424d3d3cf377cae62b3ee4344135e7299f31d39acd78dba4d0ee2efe9b9199.scope: Deactivated successfully.
Jan 20 13:55:33 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv[74649]: ignoring --setuser ceph since I am not root
Jan 20 13:55:33 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv[74649]: ignoring --setgroup ceph since I am not root
Jan 20 13:55:33 compute-0 ceph-mgr[74653]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Jan 20 13:55:33 compute-0 ceph-mgr[74653]: pidfile_write: ignore empty --pid-file
Jan 20 13:55:33 compute-0 podman[75382]: 2026-01-20 13:55:33.838631807 +0000 UTC m=+0.049966179 container create 79daa1ce1754681eca176ea110b528daf61e6ebe6cbb1b2f8c396fdcf1402dfe (image=quay.io/ceph/ceph:v18, name=heuristic_goldberg, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 13:55:33 compute-0 systemd[1]: Started libpod-conmon-79daa1ce1754681eca176ea110b528daf61e6ebe6cbb1b2f8c396fdcf1402dfe.scope.
Jan 20 13:55:33 compute-0 ceph-mgr[74653]: mgr[py] Loading python module 'alerts'
Jan 20 13:55:33 compute-0 systemd[1]: Started libcrun container.
Jan 20 13:55:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12803e1688272591b8e20bed61f34cdd2718d8df664a4addb9863b47f6c778c2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 13:55:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12803e1688272591b8e20bed61f34cdd2718d8df664a4addb9863b47f6c778c2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 13:55:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12803e1688272591b8e20bed61f34cdd2718d8df664a4addb9863b47f6c778c2/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 13:55:33 compute-0 podman[75382]: 2026-01-20 13:55:33.817633197 +0000 UTC m=+0.028967589 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 20 13:55:33 compute-0 podman[75382]: 2026-01-20 13:55:33.919787746 +0000 UTC m=+0.131122158 container init 79daa1ce1754681eca176ea110b528daf61e6ebe6cbb1b2f8c396fdcf1402dfe (image=quay.io/ceph/ceph:v18, name=heuristic_goldberg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 20 13:55:33 compute-0 podman[75382]: 2026-01-20 13:55:33.925283446 +0000 UTC m=+0.136617818 container start 79daa1ce1754681eca176ea110b528daf61e6ebe6cbb1b2f8c396fdcf1402dfe (image=quay.io/ceph/ceph:v18, name=heuristic_goldberg, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 20 13:55:33 compute-0 podman[75382]: 2026-01-20 13:55:33.930944188 +0000 UTC m=+0.142278590 container attach 79daa1ce1754681eca176ea110b528daf61e6ebe6cbb1b2f8c396fdcf1402dfe (image=quay.io/ceph/ceph:v18, name=heuristic_goldberg, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 20 13:55:34 compute-0 ceph-mgr[74653]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Jan 20 13:55:34 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv[74649]: 2026-01-20T13:55:34.216+0000 7f45c5196140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Jan 20 13:55:34 compute-0 ceph-mgr[74653]: mgr[py] Loading python module 'balancer'
Jan 20 13:55:34 compute-0 ceph-mgr[74653]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Jan 20 13:55:34 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv[74649]: 2026-01-20T13:55:34.464+0000 7f45c5196140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Jan 20 13:55:34 compute-0 ceph-mgr[74653]: mgr[py] Loading python module 'cephadm'
Jan 20 13:55:34 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Jan 20 13:55:34 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1524147913' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Jan 20 13:55:34 compute-0 heuristic_goldberg[75420]: {
Jan 20 13:55:34 compute-0 heuristic_goldberg[75420]:     "epoch": 5,
Jan 20 13:55:34 compute-0 heuristic_goldberg[75420]:     "available": true,
Jan 20 13:55:34 compute-0 heuristic_goldberg[75420]:     "active_name": "compute-0.wookjv",
Jan 20 13:55:34 compute-0 heuristic_goldberg[75420]:     "num_standby": 0
Jan 20 13:55:34 compute-0 heuristic_goldberg[75420]: }
Jan 20 13:55:34 compute-0 systemd[1]: libpod-79daa1ce1754681eca176ea110b528daf61e6ebe6cbb1b2f8c396fdcf1402dfe.scope: Deactivated successfully.
Jan 20 13:55:34 compute-0 podman[75382]: 2026-01-20 13:55:34.527315162 +0000 UTC m=+0.738649494 container died 79daa1ce1754681eca176ea110b528daf61e6ebe6cbb1b2f8c396fdcf1402dfe (image=quay.io/ceph/ceph:v18, name=heuristic_goldberg, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 13:55:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-12803e1688272591b8e20bed61f34cdd2718d8df664a4addb9863b47f6c778c2-merged.mount: Deactivated successfully.
Jan 20 13:55:34 compute-0 podman[75382]: 2026-01-20 13:55:34.571537426 +0000 UTC m=+0.782871778 container remove 79daa1ce1754681eca176ea110b528daf61e6ebe6cbb1b2f8c396fdcf1402dfe (image=quay.io/ceph/ceph:v18, name=heuristic_goldberg, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 20 13:55:34 compute-0 systemd[1]: libpod-conmon-79daa1ce1754681eca176ea110b528daf61e6ebe6cbb1b2f8c396fdcf1402dfe.scope: Deactivated successfully.
Jan 20 13:55:34 compute-0 podman[75459]: 2026-01-20 13:55:34.63398731 +0000 UTC m=+0.041214323 container create 2bd782dcb0e0c9e81405e0653c60ec5d231738a61773281f5b83f73bd427cc70 (image=quay.io/ceph/ceph:v18, name=nifty_sutherland, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 13:55:34 compute-0 systemd[1]: Started libpod-conmon-2bd782dcb0e0c9e81405e0653c60ec5d231738a61773281f5b83f73bd427cc70.scope.
Jan 20 13:55:34 compute-0 systemd[1]: Started libcrun container.
Jan 20 13:55:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94a379a5638f91c55878c268259a9cb4e6165f05b776c4fd5fa74403415611d7/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 13:55:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94a379a5638f91c55878c268259a9cb4e6165f05b776c4fd5fa74403415611d7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 13:55:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94a379a5638f91c55878c268259a9cb4e6165f05b776c4fd5fa74403415611d7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 13:55:34 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3171722498' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Jan 20 13:55:34 compute-0 ceph-mon[74360]: mgrmap e5: compute-0.wookjv(active, since 5s)
Jan 20 13:55:34 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/1524147913' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Jan 20 13:55:34 compute-0 podman[75459]: 2026-01-20 13:55:34.700018113 +0000 UTC m=+0.107245146 container init 2bd782dcb0e0c9e81405e0653c60ec5d231738a61773281f5b83f73bd427cc70 (image=quay.io/ceph/ceph:v18, name=nifty_sutherland, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 20 13:55:34 compute-0 podman[75459]: 2026-01-20 13:55:34.704725464 +0000 UTC m=+0.111952487 container start 2bd782dcb0e0c9e81405e0653c60ec5d231738a61773281f5b83f73bd427cc70 (image=quay.io/ceph/ceph:v18, name=nifty_sutherland, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 20 13:55:34 compute-0 podman[75459]: 2026-01-20 13:55:34.614163023 +0000 UTC m=+0.021390076 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 20 13:55:34 compute-0 podman[75459]: 2026-01-20 13:55:34.72531896 +0000 UTC m=+0.132546003 container attach 2bd782dcb0e0c9e81405e0653c60ec5d231738a61773281f5b83f73bd427cc70 (image=quay.io/ceph/ceph:v18, name=nifty_sutherland, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 20 13:55:36 compute-0 ceph-mgr[74653]: mgr[py] Loading python module 'crash'
Jan 20 13:55:36 compute-0 ceph-mgr[74653]: mgr[py] Module crash has missing NOTIFY_TYPES member
Jan 20 13:55:36 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv[74649]: 2026-01-20T13:55:36.552+0000 7f45c5196140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Jan 20 13:55:36 compute-0 ceph-mgr[74653]: mgr[py] Loading python module 'dashboard'
Jan 20 13:55:37 compute-0 ceph-mgr[74653]: mgr[py] Loading python module 'devicehealth'
Jan 20 13:55:38 compute-0 ceph-mgr[74653]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Jan 20 13:55:38 compute-0 ceph-mgr[74653]: mgr[py] Loading python module 'diskprediction_local'
Jan 20 13:55:38 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv[74649]: 2026-01-20T13:55:38.151+0000 7f45c5196140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Jan 20 13:55:38 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv[74649]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Jan 20 13:55:38 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv[74649]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Jan 20 13:55:38 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv[74649]:   from numpy import show_config as show_numpy_config
Jan 20 13:55:38 compute-0 ceph-mgr[74653]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Jan 20 13:55:38 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv[74649]: 2026-01-20T13:55:38.651+0000 7f45c5196140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Jan 20 13:55:38 compute-0 ceph-mgr[74653]: mgr[py] Loading python module 'influx'
Jan 20 13:55:38 compute-0 ceph-mgr[74653]: mgr[py] Module influx has missing NOTIFY_TYPES member
Jan 20 13:55:38 compute-0 ceph-mgr[74653]: mgr[py] Loading python module 'insights'
Jan 20 13:55:38 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv[74649]: 2026-01-20T13:55:38.879+0000 7f45c5196140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Jan 20 13:55:39 compute-0 ceph-mgr[74653]: mgr[py] Loading python module 'iostat'
Jan 20 13:55:39 compute-0 ceph-mgr[74653]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Jan 20 13:55:39 compute-0 ceph-mgr[74653]: mgr[py] Loading python module 'k8sevents'
Jan 20 13:55:39 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv[74649]: 2026-01-20T13:55:39.339+0000 7f45c5196140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Jan 20 13:55:41 compute-0 ceph-mgr[74653]: mgr[py] Loading python module 'localpool'
Jan 20 13:55:41 compute-0 ceph-mgr[74653]: mgr[py] Loading python module 'mds_autoscaler'
Jan 20 13:55:41 compute-0 ceph-mgr[74653]: mgr[py] Loading python module 'mirroring'
Jan 20 13:55:42 compute-0 ceph-mgr[74653]: mgr[py] Loading python module 'nfs'
Jan 20 13:55:42 compute-0 ceph-mgr[74653]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Jan 20 13:55:42 compute-0 ceph-mgr[74653]: mgr[py] Loading python module 'orchestrator'
Jan 20 13:55:42 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv[74649]: 2026-01-20T13:55:42.899+0000 7f45c5196140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Jan 20 13:55:43 compute-0 ceph-mgr[74653]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Jan 20 13:55:43 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv[74649]: 2026-01-20T13:55:43.534+0000 7f45c5196140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Jan 20 13:55:43 compute-0 ceph-mgr[74653]: mgr[py] Loading python module 'osd_perf_query'
Jan 20 13:55:43 compute-0 ceph-mgr[74653]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Jan 20 13:55:43 compute-0 ceph-mgr[74653]: mgr[py] Loading python module 'osd_support'
Jan 20 13:55:43 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv[74649]: 2026-01-20T13:55:43.788+0000 7f45c5196140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Jan 20 13:55:44 compute-0 ceph-mgr[74653]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Jan 20 13:55:44 compute-0 ceph-mgr[74653]: mgr[py] Loading python module 'pg_autoscaler'
Jan 20 13:55:44 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv[74649]: 2026-01-20T13:55:44.010+0000 7f45c5196140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Jan 20 13:55:44 compute-0 ceph-mgr[74653]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Jan 20 13:55:44 compute-0 ceph-mgr[74653]: mgr[py] Loading python module 'progress'
Jan 20 13:55:44 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv[74649]: 2026-01-20T13:55:44.259+0000 7f45c5196140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Jan 20 13:55:44 compute-0 ceph-mgr[74653]: mgr[py] Module progress has missing NOTIFY_TYPES member
Jan 20 13:55:44 compute-0 ceph-mgr[74653]: mgr[py] Loading python module 'prometheus'
Jan 20 13:55:44 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv[74649]: 2026-01-20T13:55:44.486+0000 7f45c5196140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Jan 20 13:55:45 compute-0 ceph-mgr[74653]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Jan 20 13:55:45 compute-0 ceph-mgr[74653]: mgr[py] Loading python module 'rbd_support'
Jan 20 13:55:45 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv[74649]: 2026-01-20T13:55:45.406+0000 7f45c5196140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Jan 20 13:55:45 compute-0 ceph-mgr[74653]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Jan 20 13:55:45 compute-0 ceph-mgr[74653]: mgr[py] Loading python module 'restful'
Jan 20 13:55:45 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv[74649]: 2026-01-20T13:55:45.680+0000 7f45c5196140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Jan 20 13:55:46 compute-0 ceph-mgr[74653]: mgr[py] Loading python module 'rgw'
Jan 20 13:55:46 compute-0 ceph-mgr[74653]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Jan 20 13:55:46 compute-0 ceph-mgr[74653]: mgr[py] Loading python module 'rook'
Jan 20 13:55:46 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv[74649]: 2026-01-20T13:55:46.971+0000 7f45c5196140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Jan 20 13:55:48 compute-0 ceph-mgr[74653]: mgr[py] Module rook has missing NOTIFY_TYPES member
Jan 20 13:55:48 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv[74649]: 2026-01-20T13:55:48.950+0000 7f45c5196140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Jan 20 13:55:48 compute-0 ceph-mgr[74653]: mgr[py] Loading python module 'selftest'
Jan 20 13:55:49 compute-0 ceph-mgr[74653]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Jan 20 13:55:49 compute-0 ceph-mgr[74653]: mgr[py] Loading python module 'snap_schedule'
Jan 20 13:55:49 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv[74649]: 2026-01-20T13:55:49.192+0000 7f45c5196140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Jan 20 13:55:49 compute-0 ceph-mgr[74653]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Jan 20 13:55:49 compute-0 ceph-mgr[74653]: mgr[py] Loading python module 'stats'
Jan 20 13:55:49 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv[74649]: 2026-01-20T13:55:49.443+0000 7f45c5196140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Jan 20 13:55:49 compute-0 ceph-mgr[74653]: mgr[py] Loading python module 'status'
Jan 20 13:55:49 compute-0 ceph-mgr[74653]: mgr[py] Module status has missing NOTIFY_TYPES member
Jan 20 13:55:49 compute-0 ceph-mgr[74653]: mgr[py] Loading python module 'telegraf'
Jan 20 13:55:49 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv[74649]: 2026-01-20T13:55:49.925+0000 7f45c5196140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Jan 20 13:55:50 compute-0 ceph-mgr[74653]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Jan 20 13:55:50 compute-0 ceph-mgr[74653]: mgr[py] Loading python module 'telemetry'
Jan 20 13:55:50 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv[74649]: 2026-01-20T13:55:50.160+0000 7f45c5196140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Jan 20 13:55:50 compute-0 ceph-mgr[74653]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Jan 20 13:55:50 compute-0 ceph-mgr[74653]: mgr[py] Loading python module 'test_orchestrator'
Jan 20 13:55:50 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv[74649]: 2026-01-20T13:55:50.735+0000 7f45c5196140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Jan 20 13:55:51 compute-0 ceph-mgr[74653]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Jan 20 13:55:51 compute-0 ceph-mgr[74653]: mgr[py] Loading python module 'volumes'
Jan 20 13:55:51 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv[74649]: 2026-01-20T13:55:51.431+0000 7f45c5196140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Jan 20 13:55:52 compute-0 ceph-mgr[74653]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Jan 20 13:55:52 compute-0 ceph-mgr[74653]: mgr[py] Loading python module 'zabbix'
Jan 20 13:55:52 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv[74649]: 2026-01-20T13:55:52.107+0000 7f45c5196140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Jan 20 13:55:52 compute-0 ceph-mgr[74653]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Jan 20 13:55:52 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv[74649]: 2026-01-20T13:55:52.335+0000 7f45c5196140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Jan 20 13:55:52 compute-0 ceph-mon[74360]: log_channel(cluster) log [INF] : Active manager daemon compute-0.wookjv restarted
Jan 20 13:55:52 compute-0 ceph-mgr[74653]: ms_deliver_dispatch: unhandled message 0x560fb975c420 mon_map magic: 0 v1 from mon.0 v2:192.168.122.100:3300/0
Jan 20 13:55:52 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e1 do_prune osdmap full prune enabled
Jan 20 13:55:52 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e1 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 20 13:55:52 compute-0 ceph-mon[74360]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.wookjv
Jan 20 13:55:52 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e1 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Jan 20 13:55:52 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e1 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Jan 20 13:55:52 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e2 e2: 0 total, 0 up, 0 in
Jan 20 13:55:52 compute-0 ceph-mgr[74653]: mgr handle_mgr_map Activating!
Jan 20 13:55:52 compute-0 ceph-mgr[74653]: mgr handle_mgr_map I am now activating
Jan 20 13:55:52 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e2: 0 total, 0 up, 0 in
Jan 20 13:55:52 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : mgrmap e6: compute-0.wookjv(active, starting, since 0.0285437s)
Jan 20 13:55:52 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Jan 20 13:55:52 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Jan 20 13:55:52 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.wookjv", "id": "compute-0.wookjv"} v 0) v1
Jan 20 13:55:52 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "mgr metadata", "who": "compute-0.wookjv", "id": "compute-0.wookjv"}]: dispatch
Jan 20 13:55:52 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0) v1
Jan 20 13:55:52 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "mds metadata"}]: dispatch
Jan 20 13:55:52 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).mds e1 all = 1
Jan 20 13:55:52 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Jan 20 13:55:52 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd metadata"}]: dispatch
Jan 20 13:55:52 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0) v1
Jan 20 13:55:52 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "mon metadata"}]: dispatch
Jan 20 13:55:52 compute-0 ceph-mgr[74653]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 20 13:55:52 compute-0 ceph-mgr[74653]: mgr load Constructed class from module: balancer
Jan 20 13:55:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Starting
Jan 20 13:55:52 compute-0 ceph-mgr[74653]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 20 13:55:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Optimize plan auto_2026-01-20_13:55:52
Jan 20 13:55:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 13:55:52 compute-0 ceph-mgr[74653]: [balancer INFO root] do_upmap
Jan 20 13:55:52 compute-0 ceph-mon[74360]: log_channel(cluster) log [INF] : Manager daemon compute-0.wookjv is now available
Jan 20 13:55:52 compute-0 ceph-mgr[74653]: [balancer INFO root] No pools available
Jan 20 13:55:52 compute-0 ceph-mgr[74653]: [cephadm INFO cephadm.migrations] Found migration_current of "None". Setting to last migration.
Jan 20 13:55:52 compute-0 ceph-mgr[74653]: log_channel(cephadm) log [INF] : Found migration_current of "None". Setting to last migration.
Jan 20 13:55:52 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/migration_current}] v 0) v1
Jan 20 13:55:52 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:55:52 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/config_checks}] v 0) v1
Jan 20 13:55:52 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:55:52 compute-0 ceph-mgr[74653]: mgr load Constructed class from module: cephadm
Jan 20 13:55:52 compute-0 ceph-mgr[74653]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 20 13:55:52 compute-0 ceph-mgr[74653]: mgr load Constructed class from module: crash
Jan 20 13:55:52 compute-0 ceph-mgr[74653]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 20 13:55:52 compute-0 ceph-mgr[74653]: mgr load Constructed class from module: devicehealth
Jan 20 13:55:52 compute-0 ceph-mgr[74653]: [devicehealth INFO root] Starting
Jan 20 13:55:52 compute-0 ceph-mgr[74653]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 20 13:55:52 compute-0 ceph-mgr[74653]: mgr load Constructed class from module: iostat
Jan 20 13:55:52 compute-0 ceph-mgr[74653]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 20 13:55:52 compute-0 ceph-mgr[74653]: mgr load Constructed class from module: nfs
Jan 20 13:55:52 compute-0 ceph-mgr[74653]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 20 13:55:52 compute-0 ceph-mgr[74653]: mgr load Constructed class from module: orchestrator
Jan 20 13:55:52 compute-0 ceph-mgr[74653]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 20 13:55:52 compute-0 ceph-mgr[74653]: mgr load Constructed class from module: pg_autoscaler
Jan 20 13:55:52 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 13:55:52 compute-0 ceph-mgr[74653]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 20 13:55:52 compute-0 ceph-mgr[74653]: mgr load Constructed class from module: progress
Jan 20 13:55:52 compute-0 ceph-mgr[74653]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 20 13:55:52 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Jan 20 13:55:52 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Jan 20 13:55:52 compute-0 ceph-mgr[74653]: [progress INFO root] Loading...
Jan 20 13:55:52 compute-0 ceph-mgr[74653]: [progress INFO root] No stored events to load
Jan 20 13:55:52 compute-0 ceph-mgr[74653]: [progress INFO root] Loaded [] historic events
Jan 20 13:55:52 compute-0 ceph-mgr[74653]: [progress INFO root] Loaded OSDMap, ready.
Jan 20 13:55:52 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Jan 20 13:55:52 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Jan 20 13:55:52 compute-0 ceph-mon[74360]: Active manager daemon compute-0.wookjv restarted
Jan 20 13:55:52 compute-0 ceph-mon[74360]: Activating manager daemon compute-0.wookjv
Jan 20 13:55:52 compute-0 ceph-mon[74360]: osdmap e2: 0 total, 0 up, 0 in
Jan 20 13:55:52 compute-0 ceph-mon[74360]: mgrmap e6: compute-0.wookjv(active, starting, since 0.0285437s)
Jan 20 13:55:52 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Jan 20 13:55:52 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "mgr metadata", "who": "compute-0.wookjv", "id": "compute-0.wookjv"}]: dispatch
Jan 20 13:55:52 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "mds metadata"}]: dispatch
Jan 20 13:55:52 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd metadata"}]: dispatch
Jan 20 13:55:52 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "mon metadata"}]: dispatch
Jan 20 13:55:52 compute-0 ceph-mon[74360]: Manager daemon compute-0.wookjv is now available
Jan 20 13:55:52 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:55:52 compute-0 ceph-mgr[74653]: [rbd_support INFO root] recovery thread starting
Jan 20 13:55:52 compute-0 ceph-mgr[74653]: [rbd_support INFO root] starting setup
Jan 20 13:55:52 compute-0 ceph-mgr[74653]: mgr load Constructed class from module: rbd_support
Jan 20 13:55:52 compute-0 ceph-mgr[74653]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 20 13:55:52 compute-0 ceph-mgr[74653]: mgr load Constructed class from module: restful
Jan 20 13:55:52 compute-0 ceph-mgr[74653]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 20 13:55:52 compute-0 ceph-mgr[74653]: mgr load Constructed class from module: status
Jan 20 13:55:52 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.wookjv/mirror_snapshot_schedule"} v 0) v1
Jan 20 13:55:52 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.wookjv/mirror_snapshot_schedule"}]: dispatch
Jan 20 13:55:52 compute-0 ceph-mgr[74653]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 20 13:55:52 compute-0 ceph-mgr[74653]: mgr load Constructed class from module: telemetry
Jan 20 13:55:52 compute-0 ceph-mgr[74653]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 13:55:52 compute-0 ceph-mgr[74653]: [restful INFO root] server_addr: :: server_port: 8003
Jan 20 13:55:52 compute-0 ceph-mgr[74653]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 20 13:55:52 compute-0 ceph-mgr[74653]: [restful WARNING root] server not running: no certificate configured
Jan 20 13:55:52 compute-0 ceph-mgr[74653]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Jan 20 13:55:52 compute-0 ceph-mgr[74653]: [rbd_support INFO root] PerfHandler: starting
Jan 20 13:55:52 compute-0 ceph-mgr[74653]: [rbd_support INFO root] TaskHandler: starting
Jan 20 13:55:52 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.wookjv/trash_purge_schedule"} v 0) v1
Jan 20 13:55:52 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.wookjv/trash_purge_schedule"}]: dispatch
Jan 20 13:55:52 compute-0 ceph-mgr[74653]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 13:55:52 compute-0 ceph-mgr[74653]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Jan 20 13:55:52 compute-0 ceph-mgr[74653]: [rbd_support INFO root] setup complete
Jan 20 13:55:52 compute-0 ceph-mgr[74653]: mgr load Constructed class from module: volumes
Jan 20 13:55:52 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cephadm_agent/root/cert}] v 0) v1
Jan 20 13:55:52 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:55:52 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cephadm_agent/root/key}] v 0) v1
Jan 20 13:55:52 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:55:52 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1019928417 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 13:55:53 compute-0 ceph-mgr[74653]: log_channel(audit) log [DBG] : from='client.14136 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Jan 20 13:55:53 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : mgrmap e7: compute-0.wookjv(active, since 1.04239s)
Jan 20 13:55:53 compute-0 ceph-mgr[74653]: log_channel(audit) log [DBG] : from='client.14136 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Jan 20 13:55:53 compute-0 nifty_sutherland[75475]: {
Jan 20 13:55:53 compute-0 nifty_sutherland[75475]:     "mgrmap_epoch": 7,
Jan 20 13:55:53 compute-0 nifty_sutherland[75475]:     "initialized": true
Jan 20 13:55:53 compute-0 nifty_sutherland[75475]: }
Jan 20 13:55:53 compute-0 systemd[1]: libpod-2bd782dcb0e0c9e81405e0653c60ec5d231738a61773281f5b83f73bd427cc70.scope: Deactivated successfully.
Jan 20 13:55:53 compute-0 podman[75459]: 2026-01-20 13:55:53.419300682 +0000 UTC m=+18.826527705 container died 2bd782dcb0e0c9e81405e0653c60ec5d231738a61773281f5b83f73bd427cc70 (image=quay.io/ceph/ceph:v18, name=nifty_sutherland, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 13:55:53 compute-0 ceph-mon[74360]: Found migration_current of "None". Setting to last migration.
Jan 20 13:55:53 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:55:53 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Jan 20 13:55:53 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Jan 20 13:55:53 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.wookjv/mirror_snapshot_schedule"}]: dispatch
Jan 20 13:55:53 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.wookjv/trash_purge_schedule"}]: dispatch
Jan 20 13:55:53 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:55:53 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:55:53 compute-0 ceph-mon[74360]: mgrmap e7: compute-0.wookjv(active, since 1.04239s)
Jan 20 13:55:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-94a379a5638f91c55878c268259a9cb4e6165f05b776c4fd5fa74403415611d7-merged.mount: Deactivated successfully.
Jan 20 13:55:53 compute-0 podman[75459]: 2026-01-20 13:55:53.472664558 +0000 UTC m=+18.879891581 container remove 2bd782dcb0e0c9e81405e0653c60ec5d231738a61773281f5b83f73bd427cc70 (image=quay.io/ceph/ceph:v18, name=nifty_sutherland, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 20 13:55:53 compute-0 systemd[1]: libpod-conmon-2bd782dcb0e0c9e81405e0653c60ec5d231738a61773281f5b83f73bd427cc70.scope: Deactivated successfully.
Jan 20 13:55:53 compute-0 podman[75636]: 2026-01-20 13:55:53.563810947 +0000 UTC m=+0.054644832 container create be08685eec2f53249e015c8fde9feebd85491efd3644607460f4070aebfe0da3 (image=quay.io/ceph/ceph:v18, name=optimistic_pare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 20 13:55:53 compute-0 systemd[1]: Started libpod-conmon-be08685eec2f53249e015c8fde9feebd85491efd3644607460f4070aebfe0da3.scope.
Jan 20 13:55:53 compute-0 systemd[1]: Started libcrun container.
Jan 20 13:55:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e99483d8e374c07e27900b62d27645a927eeddff026d09034e5b4c4321c39c90/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 13:55:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e99483d8e374c07e27900b62d27645a927eeddff026d09034e5b4c4321c39c90/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 13:55:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e99483d8e374c07e27900b62d27645a927eeddff026d09034e5b4c4321c39c90/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 13:55:53 compute-0 podman[75636]: 2026-01-20 13:55:53.545907801 +0000 UTC m=+0.036741736 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 20 13:55:53 compute-0 podman[75636]: 2026-01-20 13:55:53.650341771 +0000 UTC m=+0.141175726 container init be08685eec2f53249e015c8fde9feebd85491efd3644607460f4070aebfe0da3 (image=quay.io/ceph/ceph:v18, name=optimistic_pare, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 13:55:53 compute-0 podman[75636]: 2026-01-20 13:55:53.655970923 +0000 UTC m=+0.146804818 container start be08685eec2f53249e015c8fde9feebd85491efd3644607460f4070aebfe0da3 (image=quay.io/ceph/ceph:v18, name=optimistic_pare, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 13:55:53 compute-0 podman[75636]: 2026-01-20 13:55:53.659510369 +0000 UTC m=+0.150344354 container attach be08685eec2f53249e015c8fde9feebd85491efd3644607460f4070aebfe0da3 (image=quay.io/ceph/ceph:v18, name=optimistic_pare, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef)
Jan 20 13:55:53 compute-0 ceph-mgr[74653]: [cephadm INFO cherrypy.error] [20/Jan/2026:13:55:53] ENGINE Bus STARTING
Jan 20 13:55:53 compute-0 ceph-mgr[74653]: log_channel(cephadm) log [INF] : [20/Jan/2026:13:55:53] ENGINE Bus STARTING
Jan 20 13:55:54 compute-0 ceph-mgr[74653]: [cephadm INFO cherrypy.error] [20/Jan/2026:13:55:54] ENGINE Serving on https://192.168.122.100:7150
Jan 20 13:55:54 compute-0 ceph-mgr[74653]: log_channel(cephadm) log [INF] : [20/Jan/2026:13:55:54] ENGINE Serving on https://192.168.122.100:7150
Jan 20 13:55:54 compute-0 ceph-mgr[74653]: [cephadm INFO cherrypy.error] [20/Jan/2026:13:55:54] ENGINE Client ('192.168.122.100', 48528) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Jan 20 13:55:54 compute-0 ceph-mgr[74653]: log_channel(cephadm) log [INF] : [20/Jan/2026:13:55:54] ENGINE Client ('192.168.122.100', 48528) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Jan 20 13:55:54 compute-0 ceph-mgr[74653]: [cephadm INFO cherrypy.error] [20/Jan/2026:13:55:54] ENGINE Serving on http://192.168.122.100:8765
Jan 20 13:55:54 compute-0 ceph-mgr[74653]: log_channel(cephadm) log [INF] : [20/Jan/2026:13:55:54] ENGINE Serving on http://192.168.122.100:8765
Jan 20 13:55:54 compute-0 ceph-mgr[74653]: [cephadm INFO cherrypy.error] [20/Jan/2026:13:55:54] ENGINE Bus STARTED
Jan 20 13:55:54 compute-0 ceph-mgr[74653]: log_channel(cephadm) log [INF] : [20/Jan/2026:13:55:54] ENGINE Bus STARTED
Jan 20 13:55:54 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Jan 20 13:55:54 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Jan 20 13:55:54 compute-0 ceph-mgr[74653]: log_channel(audit) log [DBG] : from='client.14144 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 13:55:54 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/orchestrator/orchestrator}] v 0) v1
Jan 20 13:55:54 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:55:54 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Jan 20 13:55:54 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Jan 20 13:55:54 compute-0 systemd[1]: libpod-be08685eec2f53249e015c8fde9feebd85491efd3644607460f4070aebfe0da3.scope: Deactivated successfully.
Jan 20 13:55:54 compute-0 podman[75636]: 2026-01-20 13:55:54.22104905 +0000 UTC m=+0.711882945 container died be08685eec2f53249e015c8fde9feebd85491efd3644607460f4070aebfe0da3 (image=quay.io/ceph/ceph:v18, name=optimistic_pare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 13:55:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-e99483d8e374c07e27900b62d27645a927eeddff026d09034e5b4c4321c39c90-merged.mount: Deactivated successfully.
Jan 20 13:55:54 compute-0 podman[75636]: 2026-01-20 13:55:54.264088727 +0000 UTC m=+0.754922612 container remove be08685eec2f53249e015c8fde9feebd85491efd3644607460f4070aebfe0da3 (image=quay.io/ceph/ceph:v18, name=optimistic_pare, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 13:55:54 compute-0 systemd[1]: libpod-conmon-be08685eec2f53249e015c8fde9feebd85491efd3644607460f4070aebfe0da3.scope: Deactivated successfully.
Jan 20 13:55:54 compute-0 podman[75715]: 2026-01-20 13:55:54.326354894 +0000 UTC m=+0.043210232 container create d6067770ead391e0ea0332291968d3a2b50f5ce4858c44a9fe823877fd543ab3 (image=quay.io/ceph/ceph:v18, name=fervent_mayer, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 20 13:55:54 compute-0 systemd[1]: Started libpod-conmon-d6067770ead391e0ea0332291968d3a2b50f5ce4858c44a9fe823877fd543ab3.scope.
Jan 20 13:55:54 compute-0 ceph-mgr[74653]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 20 13:55:54 compute-0 systemd[1]: Started libcrun container.
Jan 20 13:55:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5506effb3c0ebaf46ff95f58a5dafab205696f19cf21421eb52aa383bba64cc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 13:55:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5506effb3c0ebaf46ff95f58a5dafab205696f19cf21421eb52aa383bba64cc/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 13:55:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5506effb3c0ebaf46ff95f58a5dafab205696f19cf21421eb52aa383bba64cc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 13:55:54 compute-0 podman[75715]: 2026-01-20 13:55:54.307749199 +0000 UTC m=+0.024604507 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 20 13:55:54 compute-0 podman[75715]: 2026-01-20 13:55:54.409785663 +0000 UTC m=+0.126641031 container init d6067770ead391e0ea0332291968d3a2b50f5ce4858c44a9fe823877fd543ab3 (image=quay.io/ceph/ceph:v18, name=fervent_mayer, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 20 13:55:54 compute-0 podman[75715]: 2026-01-20 13:55:54.418489709 +0000 UTC m=+0.135344997 container start d6067770ead391e0ea0332291968d3a2b50f5ce4858c44a9fe823877fd543ab3 (image=quay.io/ceph/ceph:v18, name=fervent_mayer, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 20 13:55:54 compute-0 podman[75715]: 2026-01-20 13:55:54.42146194 +0000 UTC m=+0.138317268 container attach d6067770ead391e0ea0332291968d3a2b50f5ce4858c44a9fe823877fd543ab3 (image=quay.io/ceph/ceph:v18, name=fervent_mayer, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 20 13:55:54 compute-0 ceph-mon[74360]: from='client.14136 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Jan 20 13:55:54 compute-0 ceph-mon[74360]: from='client.14136 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Jan 20 13:55:54 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Jan 20 13:55:54 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:55:54 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Jan 20 13:55:54 compute-0 ceph-mgr[74653]: log_channel(audit) log [DBG] : from='client.14146 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 13:55:54 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_user}] v 0) v1
Jan 20 13:55:54 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:55:54 compute-0 ceph-mgr[74653]: [cephadm INFO root] Set ssh ssh_user
Jan 20 13:55:54 compute-0 ceph-mgr[74653]: log_channel(cephadm) log [INF] : Set ssh ssh_user
Jan 20 13:55:54 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_config}] v 0) v1
Jan 20 13:55:54 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:55:54 compute-0 ceph-mgr[74653]: [cephadm INFO root] Set ssh ssh_config
Jan 20 13:55:54 compute-0 ceph-mgr[74653]: log_channel(cephadm) log [INF] : Set ssh ssh_config
Jan 20 13:55:54 compute-0 ceph-mgr[74653]: [cephadm INFO root] ssh user set to ceph-admin. sudo will be used
Jan 20 13:55:54 compute-0 ceph-mgr[74653]: log_channel(cephadm) log [INF] : ssh user set to ceph-admin. sudo will be used
Jan 20 13:55:54 compute-0 fervent_mayer[75732]: ssh user set to ceph-admin. sudo will be used
Jan 20 13:55:54 compute-0 systemd[1]: libpod-d6067770ead391e0ea0332291968d3a2b50f5ce4858c44a9fe823877fd543ab3.scope: Deactivated successfully.
Jan 20 13:55:54 compute-0 podman[75715]: 2026-01-20 13:55:54.994507583 +0000 UTC m=+0.711362921 container died d6067770ead391e0ea0332291968d3a2b50f5ce4858c44a9fe823877fd543ab3 (image=quay.io/ceph/ceph:v18, name=fervent_mayer, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 20 13:55:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-a5506effb3c0ebaf46ff95f58a5dafab205696f19cf21421eb52aa383bba64cc-merged.mount: Deactivated successfully.
Jan 20 13:55:55 compute-0 podman[75715]: 2026-01-20 13:55:55.041438164 +0000 UTC m=+0.758293502 container remove d6067770ead391e0ea0332291968d3a2b50f5ce4858c44a9fe823877fd543ab3 (image=quay.io/ceph/ceph:v18, name=fervent_mayer, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 20 13:55:55 compute-0 systemd[1]: libpod-conmon-d6067770ead391e0ea0332291968d3a2b50f5ce4858c44a9fe823877fd543ab3.scope: Deactivated successfully.
Jan 20 13:55:55 compute-0 podman[75772]: 2026-01-20 13:55:55.136550831 +0000 UTC m=+0.066856663 container create ae97410c8033f57858d338ec1ad6bb347d7148c6d51be1e6c199cae3d4241925 (image=quay.io/ceph/ceph:v18, name=ecstatic_boyd, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 13:55:55 compute-0 systemd[1]: Started libpod-conmon-ae97410c8033f57858d338ec1ad6bb347d7148c6d51be1e6c199cae3d4241925.scope.
Jan 20 13:55:55 compute-0 systemd[1]: Started libcrun container.
Jan 20 13:55:55 compute-0 podman[75772]: 2026-01-20 13:55:55.105585322 +0000 UTC m=+0.035891224 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 20 13:55:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6866ea76719cf506109b7a97f5533ccca6b771fd734326cd621599dbc3d1888a/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Jan 20 13:55:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6866ea76719cf506109b7a97f5533ccca6b771fd734326cd621599dbc3d1888a/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Jan 20 13:55:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6866ea76719cf506109b7a97f5533ccca6b771fd734326cd621599dbc3d1888a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 13:55:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6866ea76719cf506109b7a97f5533ccca6b771fd734326cd621599dbc3d1888a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 13:55:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6866ea76719cf506109b7a97f5533ccca6b771fd734326cd621599dbc3d1888a/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 13:55:55 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : mgrmap e8: compute-0.wookjv(active, since 2s)
Jan 20 13:55:55 compute-0 podman[75772]: 2026-01-20 13:55:55.222112848 +0000 UTC m=+0.152418750 container init ae97410c8033f57858d338ec1ad6bb347d7148c6d51be1e6c199cae3d4241925 (image=quay.io/ceph/ceph:v18, name=ecstatic_boyd, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 13:55:55 compute-0 podman[75772]: 2026-01-20 13:55:55.233995791 +0000 UTC m=+0.164301593 container start ae97410c8033f57858d338ec1ad6bb347d7148c6d51be1e6c199cae3d4241925 (image=quay.io/ceph/ceph:v18, name=ecstatic_boyd, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 20 13:55:55 compute-0 podman[75772]: 2026-01-20 13:55:55.237345281 +0000 UTC m=+0.167651163 container attach ae97410c8033f57858d338ec1ad6bb347d7148c6d51be1e6c199cae3d4241925 (image=quay.io/ceph/ceph:v18, name=ecstatic_boyd, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 20 13:55:55 compute-0 ceph-mon[74360]: [20/Jan/2026:13:55:53] ENGINE Bus STARTING
Jan 20 13:55:55 compute-0 ceph-mon[74360]: [20/Jan/2026:13:55:54] ENGINE Serving on https://192.168.122.100:7150
Jan 20 13:55:55 compute-0 ceph-mon[74360]: [20/Jan/2026:13:55:54] ENGINE Client ('192.168.122.100', 48528) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Jan 20 13:55:55 compute-0 ceph-mon[74360]: [20/Jan/2026:13:55:54] ENGINE Serving on http://192.168.122.100:8765
Jan 20 13:55:55 compute-0 ceph-mon[74360]: [20/Jan/2026:13:55:54] ENGINE Bus STARTED
Jan 20 13:55:55 compute-0 ceph-mon[74360]: from='client.14144 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 13:55:55 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:55:55 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:55:55 compute-0 ceph-mon[74360]: mgrmap e8: compute-0.wookjv(active, since 2s)
Jan 20 13:55:55 compute-0 ceph-mgr[74653]: log_channel(audit) log [DBG] : from='client.14148 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 13:55:55 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_key}] v 0) v1
Jan 20 13:55:55 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:55:55 compute-0 ceph-mgr[74653]: [cephadm INFO root] Set ssh ssh_identity_key
Jan 20 13:55:55 compute-0 ceph-mgr[74653]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_key
Jan 20 13:55:55 compute-0 ceph-mgr[74653]: [cephadm INFO root] Set ssh private key
Jan 20 13:55:55 compute-0 ceph-mgr[74653]: log_channel(cephadm) log [INF] : Set ssh private key
Jan 20 13:55:55 compute-0 systemd[1]: libpod-ae97410c8033f57858d338ec1ad6bb347d7148c6d51be1e6c199cae3d4241925.scope: Deactivated successfully.
Jan 20 13:55:55 compute-0 podman[75772]: 2026-01-20 13:55:55.781137013 +0000 UTC m=+0.711442815 container died ae97410c8033f57858d338ec1ad6bb347d7148c6d51be1e6c199cae3d4241925 (image=quay.io/ceph/ceph:v18, name=ecstatic_boyd, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 20 13:55:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-6866ea76719cf506109b7a97f5533ccca6b771fd734326cd621599dbc3d1888a-merged.mount: Deactivated successfully.
Jan 20 13:55:55 compute-0 podman[75772]: 2026-01-20 13:55:55.824162208 +0000 UTC m=+0.754468010 container remove ae97410c8033f57858d338ec1ad6bb347d7148c6d51be1e6c199cae3d4241925 (image=quay.io/ceph/ceph:v18, name=ecstatic_boyd, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 13:55:55 compute-0 systemd[1]: libpod-conmon-ae97410c8033f57858d338ec1ad6bb347d7148c6d51be1e6c199cae3d4241925.scope: Deactivated successfully.
Jan 20 13:55:55 compute-0 podman[75827]: 2026-01-20 13:55:55.911844133 +0000 UTC m=+0.057772496 container create e34b3ef7dbd83f06317aaac70efdff7dd7676bcd0edbc2db68cebdacea991d1f (image=quay.io/ceph/ceph:v18, name=peaceful_knuth, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 20 13:55:55 compute-0 systemd[1]: Started libpod-conmon-e34b3ef7dbd83f06317aaac70efdff7dd7676bcd0edbc2db68cebdacea991d1f.scope.
Jan 20 13:55:55 compute-0 systemd[1]: Started libcrun container.
Jan 20 13:55:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4da8462685bde91de8b077c142cfea8bd0e9616d824ea2638b5fb7938d5feac/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Jan 20 13:55:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4da8462685bde91de8b077c142cfea8bd0e9616d824ea2638b5fb7938d5feac/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Jan 20 13:55:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4da8462685bde91de8b077c142cfea8bd0e9616d824ea2638b5fb7938d5feac/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 13:55:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4da8462685bde91de8b077c142cfea8bd0e9616d824ea2638b5fb7938d5feac/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 13:55:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4da8462685bde91de8b077c142cfea8bd0e9616d824ea2638b5fb7938d5feac/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 13:55:55 compute-0 podman[75827]: 2026-01-20 13:55:55.892356055 +0000 UTC m=+0.038284418 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 20 13:55:55 compute-0 podman[75827]: 2026-01-20 13:55:55.992509859 +0000 UTC m=+0.138438242 container init e34b3ef7dbd83f06317aaac70efdff7dd7676bcd0edbc2db68cebdacea991d1f (image=quay.io/ceph/ceph:v18, name=peaceful_knuth, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 13:55:56 compute-0 podman[75827]: 2026-01-20 13:55:56.001680187 +0000 UTC m=+0.147608540 container start e34b3ef7dbd83f06317aaac70efdff7dd7676bcd0edbc2db68cebdacea991d1f (image=quay.io/ceph/ceph:v18, name=peaceful_knuth, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 13:55:56 compute-0 podman[75827]: 2026-01-20 13:55:56.004946775 +0000 UTC m=+0.150875138 container attach e34b3ef7dbd83f06317aaac70efdff7dd7676bcd0edbc2db68cebdacea991d1f (image=quay.io/ceph/ceph:v18, name=peaceful_knuth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 13:55:56 compute-0 ceph-mgr[74653]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 20 13:55:56 compute-0 ceph-mgr[74653]: log_channel(audit) log [DBG] : from='client.14150 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 13:55:56 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_pub}] v 0) v1
Jan 20 13:55:56 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:55:56 compute-0 ceph-mgr[74653]: [cephadm INFO root] Set ssh ssh_identity_pub
Jan 20 13:55:56 compute-0 ceph-mgr[74653]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_pub
Jan 20 13:55:56 compute-0 systemd[1]: libpod-e34b3ef7dbd83f06317aaac70efdff7dd7676bcd0edbc2db68cebdacea991d1f.scope: Deactivated successfully.
Jan 20 13:55:56 compute-0 conmon[75843]: conmon e34b3ef7dbd83f06317a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e34b3ef7dbd83f06317aaac70efdff7dd7676bcd0edbc2db68cebdacea991d1f.scope/container/memory.events
Jan 20 13:55:56 compute-0 podman[75827]: 2026-01-20 13:55:56.58726653 +0000 UTC m=+0.733194883 container died e34b3ef7dbd83f06317aaac70efdff7dd7676bcd0edbc2db68cebdacea991d1f (image=quay.io/ceph/ceph:v18, name=peaceful_knuth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 20 13:55:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-c4da8462685bde91de8b077c142cfea8bd0e9616d824ea2638b5fb7938d5feac-merged.mount: Deactivated successfully.
Jan 20 13:55:56 compute-0 podman[75827]: 2026-01-20 13:55:56.623263955 +0000 UTC m=+0.769192308 container remove e34b3ef7dbd83f06317aaac70efdff7dd7676bcd0edbc2db68cebdacea991d1f (image=quay.io/ceph/ceph:v18, name=peaceful_knuth, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True)
Jan 20 13:55:56 compute-0 systemd[1]: libpod-conmon-e34b3ef7dbd83f06317aaac70efdff7dd7676bcd0edbc2db68cebdacea991d1f.scope: Deactivated successfully.
Jan 20 13:55:56 compute-0 podman[75882]: 2026-01-20 13:55:56.663984368 +0000 UTC m=+0.022643824 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 20 13:55:56 compute-0 podman[75882]: 2026-01-20 13:55:56.813069277 +0000 UTC m=+0.171728703 container create e14f83a285a5e1ddd119865bf107942f47ab320232ba420f90d71055719449d6 (image=quay.io/ceph/ceph:v18, name=strange_allen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 13:55:56 compute-0 ceph-mon[74360]: from='client.14146 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 13:55:56 compute-0 ceph-mon[74360]: Set ssh ssh_user
Jan 20 13:55:56 compute-0 ceph-mon[74360]: Set ssh ssh_config
Jan 20 13:55:56 compute-0 ceph-mon[74360]: ssh user set to ceph-admin. sudo will be used
Jan 20 13:55:56 compute-0 ceph-mon[74360]: from='client.14148 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 13:55:56 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:55:56 compute-0 ceph-mon[74360]: Set ssh ssh_identity_key
Jan 20 13:55:56 compute-0 ceph-mon[74360]: Set ssh private key
Jan 20 13:55:56 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:55:56 compute-0 systemd[1]: Started libpod-conmon-e14f83a285a5e1ddd119865bf107942f47ab320232ba420f90d71055719449d6.scope.
Jan 20 13:55:56 compute-0 systemd[1]: Started libcrun container.
Jan 20 13:55:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e874d16abf3f9349d90fa367d01840016fd6970b56e430ae4b1a3c22319e7ded/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 13:55:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e874d16abf3f9349d90fa367d01840016fd6970b56e430ae4b1a3c22319e7ded/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 13:55:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e874d16abf3f9349d90fa367d01840016fd6970b56e430ae4b1a3c22319e7ded/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 13:55:56 compute-0 podman[75882]: 2026-01-20 13:55:56.876392352 +0000 UTC m=+0.235051798 container init e14f83a285a5e1ddd119865bf107942f47ab320232ba420f90d71055719449d6 (image=quay.io/ceph/ceph:v18, name=strange_allen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 20 13:55:56 compute-0 podman[75882]: 2026-01-20 13:55:56.888482289 +0000 UTC m=+0.247141715 container start e14f83a285a5e1ddd119865bf107942f47ab320232ba420f90d71055719449d6 (image=quay.io/ceph/ceph:v18, name=strange_allen, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 13:55:56 compute-0 podman[75882]: 2026-01-20 13:55:56.891318887 +0000 UTC m=+0.249978303 container attach e14f83a285a5e1ddd119865bf107942f47ab320232ba420f90d71055719449d6 (image=quay.io/ceph/ceph:v18, name=strange_allen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 20 13:55:57 compute-0 ceph-mgr[74653]: log_channel(audit) log [DBG] : from='client.14152 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 13:55:57 compute-0 strange_allen[75898]: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLcwlJHg97yVJWF+wAhMiVTFDHJVVgxiCRnBlfBKai2whh7CrPieiGTrvXQglvFMGFaD62l7YrANVxnr4fW8C9caiXy94AZlOQEME3+U4KJ3LxKcQWXBrpdUrZi4pAokQ73vJmlxoSVfhYv0qqs6QgH/sBzcB4TqYYudkWpKy9ovjd3kX7gmgxWR9iEat8h2yVwJffOoPn6V135HSo6upx45/TgLWCzRqEBj/AxVLU6E0q9WtsFJWWVkV0yYJXl73wM0WhqtVvv8qTJPtkLC1RvwqORCkVjaozhWLi5/nqgnpeXd+GxnlmlK4w+EQcf24oVOPpdcyDQwdakRQogtR0CRKz+kBo93f0N8vP7DhuaCOltkoy8sutmyrN7LXuK2pYIeYzLGmp228de8dyMF1pz02a+Y/RBiMDALxaSiA0p/Z6DKGXrrF0nMS2+k9LwqQ30xgBCTW1c+h7e+7rxXBXZdMyn0t+VnIIuvHzMnXQtMOHkCV4UWpljexYL0pssBc= zuul@controller
Jan 20 13:55:57 compute-0 systemd[1]: libpod-e14f83a285a5e1ddd119865bf107942f47ab320232ba420f90d71055719449d6.scope: Deactivated successfully.
Jan 20 13:55:57 compute-0 podman[75882]: 2026-01-20 13:55:57.418563069 +0000 UTC m=+0.777222515 container died e14f83a285a5e1ddd119865bf107942f47ab320232ba420f90d71055719449d6 (image=quay.io/ceph/ceph:v18, name=strange_allen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 13:55:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-e874d16abf3f9349d90fa367d01840016fd6970b56e430ae4b1a3c22319e7ded-merged.mount: Deactivated successfully.
Jan 20 13:55:57 compute-0 podman[75882]: 2026-01-20 13:55:57.468129681 +0000 UTC m=+0.826789117 container remove e14f83a285a5e1ddd119865bf107942f47ab320232ba420f90d71055719449d6 (image=quay.io/ceph/ceph:v18, name=strange_allen, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 13:55:57 compute-0 systemd[1]: libpod-conmon-e14f83a285a5e1ddd119865bf107942f47ab320232ba420f90d71055719449d6.scope: Deactivated successfully.
Jan 20 13:55:57 compute-0 podman[75936]: 2026-01-20 13:55:57.530269655 +0000 UTC m=+0.044386003 container create ca6bdc4ecc8cfa85f9efe10aafa23b5545d4ab72397d37b802706e874b26f6cf (image=quay.io/ceph/ceph:v18, name=angry_newton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True)
Jan 20 13:55:57 compute-0 systemd[1]: Started libpod-conmon-ca6bdc4ecc8cfa85f9efe10aafa23b5545d4ab72397d37b802706e874b26f6cf.scope.
Jan 20 13:55:57 compute-0 systemd[1]: Started libcrun container.
Jan 20 13:55:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c81c6312abf993002c87d8f875a28908eb9ee85d2e2545af55efe988704315f6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 13:55:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c81c6312abf993002c87d8f875a28908eb9ee85d2e2545af55efe988704315f6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 13:55:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c81c6312abf993002c87d8f875a28908eb9ee85d2e2545af55efe988704315f6/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 13:55:57 compute-0 podman[75936]: 2026-01-20 13:55:57.504938079 +0000 UTC m=+0.019054457 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 20 13:55:57 compute-0 podman[75936]: 2026-01-20 13:55:57.619863722 +0000 UTC m=+0.133980160 container init ca6bdc4ecc8cfa85f9efe10aafa23b5545d4ab72397d37b802706e874b26f6cf (image=quay.io/ceph/ceph:v18, name=angry_newton, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 13:55:57 compute-0 podman[75936]: 2026-01-20 13:55:57.629241507 +0000 UTC m=+0.143357905 container start ca6bdc4ecc8cfa85f9efe10aafa23b5545d4ab72397d37b802706e874b26f6cf (image=quay.io/ceph/ceph:v18, name=angry_newton, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 13:55:57 compute-0 podman[75936]: 2026-01-20 13:55:57.632886385 +0000 UTC m=+0.147002783 container attach ca6bdc4ecc8cfa85f9efe10aafa23b5545d4ab72397d37b802706e874b26f6cf (image=quay.io/ceph/ceph:v18, name=angry_newton, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 13:55:57 compute-0 ceph-mon[74360]: from='client.14150 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 13:55:57 compute-0 ceph-mon[74360]: Set ssh ssh_identity_pub
Jan 20 13:55:57 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020053114 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 13:55:58 compute-0 ceph-mgr[74653]: log_channel(audit) log [DBG] : from='client.14154 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 13:55:58 compute-0 sshd-session[75978]: Accepted publickey for ceph-admin from 192.168.122.100 port 47460 ssh2: RSA SHA256:eqrJ6T+GYkPtbx0jSDommFb6YAfLVXAEDWraZDSNLSE
Jan 20 13:55:58 compute-0 ceph-mgr[74653]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 20 13:55:58 compute-0 systemd-logind[796]: New session 21 of user ceph-admin.
Jan 20 13:55:58 compute-0 systemd[1]: Created slice User Slice of UID 42477.
Jan 20 13:55:58 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42477...
Jan 20 13:55:58 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42477.
Jan 20 13:55:58 compute-0 systemd[1]: Starting User Manager for UID 42477...
Jan 20 13:55:58 compute-0 systemd[75982]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 20 13:55:58 compute-0 systemd[75982]: Queued start job for default target Main User Target.
Jan 20 13:55:58 compute-0 systemd[75982]: Created slice User Application Slice.
Jan 20 13:55:58 compute-0 systemd[75982]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 20 13:55:58 compute-0 systemd[75982]: Started Daily Cleanup of User's Temporary Directories.
Jan 20 13:55:58 compute-0 systemd[75982]: Reached target Paths.
Jan 20 13:55:58 compute-0 systemd[75982]: Reached target Timers.
Jan 20 13:55:58 compute-0 systemd[75982]: Starting D-Bus User Message Bus Socket...
Jan 20 13:55:58 compute-0 sshd-session[75996]: Accepted publickey for ceph-admin from 192.168.122.100 port 47466 ssh2: RSA SHA256:eqrJ6T+GYkPtbx0jSDommFb6YAfLVXAEDWraZDSNLSE
Jan 20 13:55:58 compute-0 systemd[75982]: Starting Create User's Volatile Files and Directories...
Jan 20 13:55:58 compute-0 systemd-logind[796]: New session 23 of user ceph-admin.
Jan 20 13:55:58 compute-0 systemd[75982]: Listening on D-Bus User Message Bus Socket.
Jan 20 13:55:58 compute-0 systemd[75982]: Reached target Sockets.
Jan 20 13:55:58 compute-0 systemd[75982]: Finished Create User's Volatile Files and Directories.
Jan 20 13:55:58 compute-0 systemd[75982]: Reached target Basic System.
Jan 20 13:55:58 compute-0 systemd[75982]: Reached target Main User Target.
Jan 20 13:55:58 compute-0 systemd[75982]: Startup finished in 135ms.
Jan 20 13:55:58 compute-0 systemd[1]: Started User Manager for UID 42477.
Jan 20 13:55:58 compute-0 systemd[1]: Started Session 21 of User ceph-admin.
Jan 20 13:55:58 compute-0 systemd[1]: Started Session 23 of User ceph-admin.
Jan 20 13:55:58 compute-0 sshd-session[75978]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 20 13:55:58 compute-0 sshd-session[75996]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 20 13:55:58 compute-0 sudo[76003]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:55:58 compute-0 sudo[76003]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:55:58 compute-0 sudo[76003]: pam_unix(sudo:session): session closed for user root
Jan 20 13:55:58 compute-0 sudo[76028]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 13:55:58 compute-0 sudo[76028]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:55:58 compute-0 sudo[76028]: pam_unix(sudo:session): session closed for user root
Jan 20 13:55:58 compute-0 ceph-mon[74360]: from='client.14152 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 13:55:58 compute-0 sshd-session[76053]: Accepted publickey for ceph-admin from 192.168.122.100 port 47472 ssh2: RSA SHA256:eqrJ6T+GYkPtbx0jSDommFb6YAfLVXAEDWraZDSNLSE
Jan 20 13:55:58 compute-0 systemd-logind[796]: New session 24 of user ceph-admin.
Jan 20 13:55:59 compute-0 systemd[1]: Started Session 24 of User ceph-admin.
Jan 20 13:55:59 compute-0 sshd-session[76053]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 20 13:55:59 compute-0 sudo[76057]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:55:59 compute-0 sudo[76057]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:55:59 compute-0 sudo[76057]: pam_unix(sudo:session): session closed for user root
Jan 20 13:55:59 compute-0 sudo[76082]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host --expect-hostname compute-0
Jan 20 13:55:59 compute-0 sudo[76082]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:55:59 compute-0 sudo[76082]: pam_unix(sudo:session): session closed for user root
Jan 20 13:55:59 compute-0 sshd-session[76107]: Accepted publickey for ceph-admin from 192.168.122.100 port 47486 ssh2: RSA SHA256:eqrJ6T+GYkPtbx0jSDommFb6YAfLVXAEDWraZDSNLSE
Jan 20 13:55:59 compute-0 systemd-logind[796]: New session 25 of user ceph-admin.
Jan 20 13:55:59 compute-0 systemd[1]: Started Session 25 of User ceph-admin.
Jan 20 13:55:59 compute-0 sshd-session[76107]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 20 13:55:59 compute-0 sudo[76111]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:55:59 compute-0 sudo[76111]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:55:59 compute-0 sudo[76111]: pam_unix(sudo:session): session closed for user root
Jan 20 13:55:59 compute-0 sudo[76136]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d
Jan 20 13:55:59 compute-0 sudo[76136]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:55:59 compute-0 sudo[76136]: pam_unix(sudo:session): session closed for user root
Jan 20 13:55:59 compute-0 ceph-mgr[74653]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-0
Jan 20 13:55:59 compute-0 ceph-mgr[74653]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-0
Jan 20 13:55:59 compute-0 ceph-mon[74360]: from='client.14154 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 13:55:59 compute-0 sshd-session[76161]: Accepted publickey for ceph-admin from 192.168.122.100 port 47502 ssh2: RSA SHA256:eqrJ6T+GYkPtbx0jSDommFb6YAfLVXAEDWraZDSNLSE
Jan 20 13:55:59 compute-0 systemd-logind[796]: New session 26 of user ceph-admin.
Jan 20 13:55:59 compute-0 systemd[1]: Started Session 26 of User ceph-admin.
Jan 20 13:55:59 compute-0 sshd-session[76161]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 20 13:56:00 compute-0 sudo[76165]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:56:00 compute-0 sudo[76165]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:00 compute-0 sudo[76165]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:00 compute-0 sudo[76190]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188
Jan 20 13:56:00 compute-0 sudo[76190]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:00 compute-0 sudo[76190]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:00 compute-0 sshd-session[76215]: Accepted publickey for ceph-admin from 192.168.122.100 port 47510 ssh2: RSA SHA256:eqrJ6T+GYkPtbx0jSDommFb6YAfLVXAEDWraZDSNLSE
Jan 20 13:56:00 compute-0 systemd-logind[796]: New session 27 of user ceph-admin.
Jan 20 13:56:00 compute-0 systemd[1]: Started Session 27 of User ceph-admin.
Jan 20 13:56:00 compute-0 sshd-session[76215]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 20 13:56:00 compute-0 ceph-mgr[74653]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 20 13:56:00 compute-0 sudo[76219]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:56:00 compute-0 sudo[76219]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:00 compute-0 sudo[76219]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:00 compute-0 sudo[76244]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-e399cf45-e6b6-5393-99f1-75c601d3f188/var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188
Jan 20 13:56:00 compute-0 sudo[76244]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:00 compute-0 sudo[76244]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:00 compute-0 sshd-session[76269]: Accepted publickey for ceph-admin from 192.168.122.100 port 47518 ssh2: RSA SHA256:eqrJ6T+GYkPtbx0jSDommFb6YAfLVXAEDWraZDSNLSE
Jan 20 13:56:00 compute-0 systemd-logind[796]: New session 28 of user ceph-admin.
Jan 20 13:56:00 compute-0 systemd[1]: Started Session 28 of User ceph-admin.
Jan 20 13:56:00 compute-0 sshd-session[76269]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 20 13:56:00 compute-0 ceph-mon[74360]: Deploying cephadm binary to compute-0
Jan 20 13:56:00 compute-0 sudo[76273]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:56:00 compute-0 sudo[76273]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:00 compute-0 sudo[76273]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:00 compute-0 sudo[76298]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-e399cf45-e6b6-5393-99f1-75c601d3f188/var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d.new
Jan 20 13:56:00 compute-0 sudo[76298]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:00 compute-0 sudo[76298]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:01 compute-0 sshd-session[76323]: Accepted publickey for ceph-admin from 192.168.122.100 port 47522 ssh2: RSA SHA256:eqrJ6T+GYkPtbx0jSDommFb6YAfLVXAEDWraZDSNLSE
Jan 20 13:56:01 compute-0 systemd-logind[796]: New session 29 of user ceph-admin.
Jan 20 13:56:01 compute-0 systemd[1]: Started Session 29 of User ceph-admin.
Jan 20 13:56:01 compute-0 sshd-session[76323]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 20 13:56:01 compute-0 sudo[76327]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:56:01 compute-0 sudo[76327]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:01 compute-0 sudo[76327]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:01 compute-0 sudo[76352]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-e399cf45-e6b6-5393-99f1-75c601d3f188
Jan 20 13:56:01 compute-0 sudo[76352]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:01 compute-0 sudo[76352]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:01 compute-0 sshd-session[76377]: Accepted publickey for ceph-admin from 192.168.122.100 port 47526 ssh2: RSA SHA256:eqrJ6T+GYkPtbx0jSDommFb6YAfLVXAEDWraZDSNLSE
Jan 20 13:56:01 compute-0 systemd-logind[796]: New session 30 of user ceph-admin.
Jan 20 13:56:01 compute-0 systemd[1]: Started Session 30 of User ceph-admin.
Jan 20 13:56:01 compute-0 sshd-session[76377]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 20 13:56:01 compute-0 sudo[76381]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:56:01 compute-0 sudo[76381]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:01 compute-0 sudo[76381]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:01 compute-0 sudo[76406]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-e399cf45-e6b6-5393-99f1-75c601d3f188/var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d.new
Jan 20 13:56:01 compute-0 sudo[76406]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:01 compute-0 sudo[76406]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:01 compute-0 sshd-session[76431]: Accepted publickey for ceph-admin from 192.168.122.100 port 47534 ssh2: RSA SHA256:eqrJ6T+GYkPtbx0jSDommFb6YAfLVXAEDWraZDSNLSE
Jan 20 13:56:01 compute-0 systemd-logind[796]: New session 31 of user ceph-admin.
Jan 20 13:56:01 compute-0 systemd[1]: Started Session 31 of User ceph-admin.
Jan 20 13:56:01 compute-0 sshd-session[76431]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 20 13:56:02 compute-0 sshd-session[76458]: Accepted publickey for ceph-admin from 192.168.122.100 port 43610 ssh2: RSA SHA256:eqrJ6T+GYkPtbx0jSDommFb6YAfLVXAEDWraZDSNLSE
Jan 20 13:56:02 compute-0 ceph-mgr[74653]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 20 13:56:02 compute-0 systemd-logind[796]: New session 32 of user ceph-admin.
Jan 20 13:56:02 compute-0 systemd[1]: Started Session 32 of User ceph-admin.
Jan 20 13:56:02 compute-0 sshd-session[76458]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 20 13:56:02 compute-0 sudo[76462]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:56:02 compute-0 sudo[76462]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:02 compute-0 sudo[76462]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:02 compute-0 sudo[76487]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-e399cf45-e6b6-5393-99f1-75c601d3f188/var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d.new /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d
Jan 20 13:56:02 compute-0 sudo[76487]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:02 compute-0 sudo[76487]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:02 compute-0 sshd-session[76512]: Accepted publickey for ceph-admin from 192.168.122.100 port 43612 ssh2: RSA SHA256:eqrJ6T+GYkPtbx0jSDommFb6YAfLVXAEDWraZDSNLSE
Jan 20 13:56:02 compute-0 systemd-logind[796]: New session 33 of user ceph-admin.
Jan 20 13:56:02 compute-0 systemd[1]: Started Session 33 of User ceph-admin.
Jan 20 13:56:02 compute-0 sshd-session[76512]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 20 13:56:02 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054711 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 13:56:02 compute-0 sudo[76516]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:56:02 compute-0 sudo[76516]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:02 compute-0 sudo[76516]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:02 compute-0 sudo[76541]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host --expect-hostname compute-0
Jan 20 13:56:02 compute-0 sudo[76541]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:03 compute-0 sudo[76541]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:03 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Jan 20 13:56:03 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:56:03 compute-0 ceph-mgr[74653]: [cephadm INFO root] Added host compute-0
Jan 20 13:56:03 compute-0 ceph-mgr[74653]: log_channel(cephadm) log [INF] : Added host compute-0
Jan 20 13:56:03 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Jan 20 13:56:03 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Jan 20 13:56:03 compute-0 angry_newton[75952]: Added host 'compute-0' with addr '192.168.122.100'
Jan 20 13:56:03 compute-0 systemd[1]: libpod-ca6bdc4ecc8cfa85f9efe10aafa23b5545d4ab72397d37b802706e874b26f6cf.scope: Deactivated successfully.
Jan 20 13:56:03 compute-0 podman[75936]: 2026-01-20 13:56:03.324262021 +0000 UTC m=+5.838378419 container died ca6bdc4ecc8cfa85f9efe10aafa23b5545d4ab72397d37b802706e874b26f6cf (image=quay.io/ceph/ceph:v18, name=angry_newton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 20 13:56:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-c81c6312abf993002c87d8f875a28908eb9ee85d2e2545af55efe988704315f6-merged.mount: Deactivated successfully.
Jan 20 13:56:03 compute-0 podman[75936]: 2026-01-20 13:56:03.376016643 +0000 UTC m=+5.890133001 container remove ca6bdc4ecc8cfa85f9efe10aafa23b5545d4ab72397d37b802706e874b26f6cf (image=quay.io/ceph/ceph:v18, name=angry_newton, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 20 13:56:03 compute-0 systemd[1]: libpod-conmon-ca6bdc4ecc8cfa85f9efe10aafa23b5545d4ab72397d37b802706e874b26f6cf.scope: Deactivated successfully.
Jan 20 13:56:03 compute-0 sudo[76588]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:56:03 compute-0 sudo[76588]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:03 compute-0 sudo[76588]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:03 compute-0 podman[76623]: 2026-01-20 13:56:03.461402366 +0000 UTC m=+0.055514085 container create 00b9867e80703478b9c45d693554fc09a2d8de4795f988c2b5c4821c5860b832 (image=quay.io/ceph/ceph:v18, name=pedantic_goldstine, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 20 13:56:03 compute-0 sudo[76632]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 13:56:03 compute-0 sudo[76632]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:03 compute-0 sudo[76632]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:03 compute-0 systemd[1]: Started libpod-conmon-00b9867e80703478b9c45d693554fc09a2d8de4795f988c2b5c4821c5860b832.scope.
Jan 20 13:56:03 compute-0 podman[76623]: 2026-01-20 13:56:03.443631414 +0000 UTC m=+0.037743163 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 20 13:56:03 compute-0 systemd[1]: Started libcrun container.
Jan 20 13:56:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25b0e180ca9b3664939fffd49e69fa5149fc3e1e8c56a58539757de360addba6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 13:56:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25b0e180ca9b3664939fffd49e69fa5149fc3e1e8c56a58539757de360addba6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 13:56:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25b0e180ca9b3664939fffd49e69fa5149fc3e1e8c56a58539757de360addba6/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 13:56:03 compute-0 podman[76623]: 2026-01-20 13:56:03.571485148 +0000 UTC m=+0.165596957 container init 00b9867e80703478b9c45d693554fc09a2d8de4795f988c2b5c4821c5860b832 (image=quay.io/ceph/ceph:v18, name=pedantic_goldstine, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef)
Jan 20 13:56:03 compute-0 sudo[76666]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:56:03 compute-0 sudo[76666]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:03 compute-0 sudo[76666]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:03 compute-0 podman[76623]: 2026-01-20 13:56:03.582964089 +0000 UTC m=+0.177075808 container start 00b9867e80703478b9c45d693554fc09a2d8de4795f988c2b5c4821c5860b832 (image=quay.io/ceph/ceph:v18, name=pedantic_goldstine, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 20 13:56:03 compute-0 podman[76623]: 2026-01-20 13:56:03.586264488 +0000 UTC m=+0.180376247 container attach 00b9867e80703478b9c45d693554fc09a2d8de4795f988c2b5c4821c5860b832 (image=quay.io/ceph/ceph:v18, name=pedantic_goldstine, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 13:56:03 compute-0 sudo[76696]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph:v18 --timeout 895 inspect-image
Jan 20 13:56:03 compute-0 sudo[76696]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:03 compute-0 podman[76756]: 2026-01-20 13:56:03.971336439 +0000 UTC m=+0.050149549 container create 35d7ae06df7a4b53da12d3603c47dd7307a148cc9d46b29e7f754a11429e90ee (image=quay.io/ceph/ceph:v18, name=dazzling_meitner, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 13:56:04 compute-0 systemd[1]: Started libpod-conmon-35d7ae06df7a4b53da12d3603c47dd7307a148cc9d46b29e7f754a11429e90ee.scope.
Jan 20 13:56:04 compute-0 systemd[1]: Started libcrun container.
Jan 20 13:56:04 compute-0 podman[76756]: 2026-01-20 13:56:03.944837662 +0000 UTC m=+0.023650792 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 20 13:56:04 compute-0 podman[76756]: 2026-01-20 13:56:04.049763734 +0000 UTC m=+0.128576924 container init 35d7ae06df7a4b53da12d3603c47dd7307a148cc9d46b29e7f754a11429e90ee (image=quay.io/ceph/ceph:v18, name=dazzling_meitner, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 13:56:04 compute-0 podman[76756]: 2026-01-20 13:56:04.059353224 +0000 UTC m=+0.138166364 container start 35d7ae06df7a4b53da12d3603c47dd7307a148cc9d46b29e7f754a11429e90ee (image=quay.io/ceph/ceph:v18, name=dazzling_meitner, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 20 13:56:04 compute-0 podman[76756]: 2026-01-20 13:56:04.064335579 +0000 UTC m=+0.143148799 container attach 35d7ae06df7a4b53da12d3603c47dd7307a148cc9d46b29e7f754a11429e90ee (image=quay.io/ceph/ceph:v18, name=dazzling_meitner, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 20 13:56:04 compute-0 ceph-mgr[74653]: log_channel(audit) log [DBG] : from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 13:56:04 compute-0 ceph-mgr[74653]: [cephadm INFO root] Saving service mon spec with placement count:5
Jan 20 13:56:04 compute-0 ceph-mgr[74653]: log_channel(cephadm) log [INF] : Saving service mon spec with placement count:5
Jan 20 13:56:04 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Jan 20 13:56:04 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:56:04 compute-0 pedantic_goldstine[76668]: Scheduled mon update...
Jan 20 13:56:04 compute-0 systemd[1]: libpod-00b9867e80703478b9c45d693554fc09a2d8de4795f988c2b5c4821c5860b832.scope: Deactivated successfully.
Jan 20 13:56:04 compute-0 conmon[76668]: conmon 00b9867e80703478b9c4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-00b9867e80703478b9c45d693554fc09a2d8de4795f988c2b5c4821c5860b832.scope/container/memory.events
Jan 20 13:56:04 compute-0 podman[76623]: 2026-01-20 13:56:04.192912102 +0000 UTC m=+0.787023851 container died 00b9867e80703478b9c45d693554fc09a2d8de4795f988c2b5c4821c5860b832 (image=quay.io/ceph/ceph:v18, name=pedantic_goldstine, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 20 13:56:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-25b0e180ca9b3664939fffd49e69fa5149fc3e1e8c56a58539757de360addba6-merged.mount: Deactivated successfully.
Jan 20 13:56:04 compute-0 podman[76623]: 2026-01-20 13:56:04.259722312 +0000 UTC m=+0.853834021 container remove 00b9867e80703478b9c45d693554fc09a2d8de4795f988c2b5c4821c5860b832 (image=quay.io/ceph/ceph:v18, name=pedantic_goldstine, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 20 13:56:04 compute-0 systemd[1]: libpod-conmon-00b9867e80703478b9c45d693554fc09a2d8de4795f988c2b5c4821c5860b832.scope: Deactivated successfully.
Jan 20 13:56:04 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:56:04 compute-0 ceph-mon[74360]: Added host compute-0
Jan 20 13:56:04 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Jan 20 13:56:04 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:56:04 compute-0 podman[76799]: 2026-01-20 13:56:04.345721742 +0000 UTC m=+0.054752825 container create df2f17b159a4681971a876290d010a47a305318c6a78126a67c8137acae16f06 (image=quay.io/ceph/ceph:v18, name=great_mcclintock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 20 13:56:04 compute-0 dazzling_meitner[76781]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)
Jan 20 13:56:04 compute-0 podman[76756]: 2026-01-20 13:56:04.373128154 +0000 UTC m=+0.451941264 container died 35d7ae06df7a4b53da12d3603c47dd7307a148cc9d46b29e7f754a11429e90ee (image=quay.io/ceph/ceph:v18, name=dazzling_meitner, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 20 13:56:04 compute-0 ceph-mgr[74653]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 20 13:56:04 compute-0 systemd[1]: Started libpod-conmon-df2f17b159a4681971a876290d010a47a305318c6a78126a67c8137acae16f06.scope.
Jan 20 13:56:04 compute-0 systemd[1]: libpod-35d7ae06df7a4b53da12d3603c47dd7307a148cc9d46b29e7f754a11429e90ee.scope: Deactivated successfully.
Jan 20 13:56:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-2c6266c65ad31819c7a8b8c6e0a5e0d636ced2086b7dec6fc4b2cd190fb52e50-merged.mount: Deactivated successfully.
Jan 20 13:56:04 compute-0 systemd[1]: Started libcrun container.
Jan 20 13:56:04 compute-0 podman[76799]: 2026-01-20 13:56:04.327028645 +0000 UTC m=+0.036059748 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 20 13:56:04 compute-0 podman[76756]: 2026-01-20 13:56:04.424371972 +0000 UTC m=+0.503185122 container remove 35d7ae06df7a4b53da12d3603c47dd7307a148cc9d46b29e7f754a11429e90ee (image=quay.io/ceph/ceph:v18, name=dazzling_meitner, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 13:56:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0918c9ca2a31d8f8c214ebe9ad97e4f357299cebf1e038d2d014ccbfefc71cdd/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 13:56:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0918c9ca2a31d8f8c214ebe9ad97e4f357299cebf1e038d2d014ccbfefc71cdd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 13:56:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0918c9ca2a31d8f8c214ebe9ad97e4f357299cebf1e038d2d014ccbfefc71cdd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 13:56:04 compute-0 systemd[1]: libpod-conmon-35d7ae06df7a4b53da12d3603c47dd7307a148cc9d46b29e7f754a11429e90ee.scope: Deactivated successfully.
Jan 20 13:56:04 compute-0 podman[76799]: 2026-01-20 13:56:04.446410119 +0000 UTC m=+0.155441202 container init df2f17b159a4681971a876290d010a47a305318c6a78126a67c8137acae16f06 (image=quay.io/ceph/ceph:v18, name=great_mcclintock, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Jan 20 13:56:04 compute-0 podman[76799]: 2026-01-20 13:56:04.451504707 +0000 UTC m=+0.160535780 container start df2f17b159a4681971a876290d010a47a305318c6a78126a67c8137acae16f06 (image=quay.io/ceph/ceph:v18, name=great_mcclintock, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 20 13:56:04 compute-0 podman[76799]: 2026-01-20 13:56:04.454438206 +0000 UTC m=+0.163469319 container attach df2f17b159a4681971a876290d010a47a305318c6a78126a67c8137acae16f06 (image=quay.io/ceph/ceph:v18, name=great_mcclintock, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 13:56:04 compute-0 sudo[76696]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:04 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=container_image}] v 0) v1
Jan 20 13:56:04 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:56:04 compute-0 sudo[76835]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:56:04 compute-0 sudo[76835]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:04 compute-0 sudo[76835]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:04 compute-0 sudo[76860]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 13:56:04 compute-0 sudo[76860]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:04 compute-0 sudo[76860]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:04 compute-0 sudo[76885]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:56:04 compute-0 sudo[76885]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:04 compute-0 sudo[76885]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:04 compute-0 sudo[76911]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Jan 20 13:56:04 compute-0 sudo[76911]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:04 compute-0 ceph-mgr[74653]: log_channel(audit) log [DBG] : from='client.14158 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 13:56:05 compute-0 ceph-mgr[74653]: [cephadm INFO root] Saving service mgr spec with placement count:2
Jan 20 13:56:05 compute-0 ceph-mgr[74653]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement count:2
Jan 20 13:56:05 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Jan 20 13:56:05 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:56:05 compute-0 great_mcclintock[76823]: Scheduled mgr update...
Jan 20 13:56:05 compute-0 systemd[1]: libpod-df2f17b159a4681971a876290d010a47a305318c6a78126a67c8137acae16f06.scope: Deactivated successfully.
Jan 20 13:56:05 compute-0 podman[76799]: 2026-01-20 13:56:05.030648846 +0000 UTC m=+0.739679929 container died df2f17b159a4681971a876290d010a47a305318c6a78126a67c8137acae16f06 (image=quay.io/ceph/ceph:v18, name=great_mcclintock, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 20 13:56:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-0918c9ca2a31d8f8c214ebe9ad97e4f357299cebf1e038d2d014ccbfefc71cdd-merged.mount: Deactivated successfully.
Jan 20 13:56:05 compute-0 sudo[76911]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:05 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 13:56:05 compute-0 podman[76799]: 2026-01-20 13:56:05.073471415 +0000 UTC m=+0.782502498 container remove df2f17b159a4681971a876290d010a47a305318c6a78126a67c8137acae16f06 (image=quay.io/ceph/ceph:v18, name=great_mcclintock, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 13:56:05 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:56:05 compute-0 systemd[1]: libpod-conmon-df2f17b159a4681971a876290d010a47a305318c6a78126a67c8137acae16f06.scope: Deactivated successfully.
Jan 20 13:56:05 compute-0 sudo[76988]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:56:05 compute-0 sudo[76988]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:05 compute-0 sudo[76988]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:05 compute-0 podman[76989]: 2026-01-20 13:56:05.173127675 +0000 UTC m=+0.070057619 container create 1971166e3308e59b021ceaf2176d70ed8a4c5b19bf9105aa880f2ce5f76e59ae (image=quay.io/ceph/ceph:v18, name=brave_murdock, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 20 13:56:05 compute-0 systemd[1]: Started libpod-conmon-1971166e3308e59b021ceaf2176d70ed8a4c5b19bf9105aa880f2ce5f76e59ae.scope.
Jan 20 13:56:05 compute-0 sudo[77027]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 13:56:05 compute-0 sudo[77027]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:05 compute-0 podman[76989]: 2026-01-20 13:56:05.142676801 +0000 UTC m=+0.039606835 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 20 13:56:05 compute-0 sudo[77027]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:05 compute-0 systemd[1]: Started libcrun container.
Jan 20 13:56:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc7faa22aa4684f57ac2ef3f75a444ed81712dcc5437f5c451b94405caecc95a/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 13:56:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc7faa22aa4684f57ac2ef3f75a444ed81712dcc5437f5c451b94405caecc95a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 13:56:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc7faa22aa4684f57ac2ef3f75a444ed81712dcc5437f5c451b94405caecc95a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 13:56:05 compute-0 podman[76989]: 2026-01-20 13:56:05.296490578 +0000 UTC m=+0.193420562 container init 1971166e3308e59b021ceaf2176d70ed8a4c5b19bf9105aa880f2ce5f76e59ae (image=quay.io/ceph/ceph:v18, name=brave_murdock, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 13:56:05 compute-0 podman[76989]: 2026-01-20 13:56:05.305933783 +0000 UTC m=+0.202863747 container start 1971166e3308e59b021ceaf2176d70ed8a4c5b19bf9105aa880f2ce5f76e59ae (image=quay.io/ceph/ceph:v18, name=brave_murdock, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 20 13:56:05 compute-0 sudo[77057]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:56:05 compute-0 sudo[77057]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:05 compute-0 podman[76989]: 2026-01-20 13:56:05.311488673 +0000 UTC m=+0.208418617 container attach 1971166e3308e59b021ceaf2176d70ed8a4c5b19bf9105aa880f2ce5f76e59ae (image=quay.io/ceph/ceph:v18, name=brave_murdock, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 13:56:05 compute-0 sudo[77057]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:05 compute-0 sudo[77084]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Jan 20 13:56:05 compute-0 sudo[77084]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:05 compute-0 ceph-mon[74360]: from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 13:56:05 compute-0 ceph-mon[74360]: Saving service mon spec with placement count:5
Jan 20 13:56:05 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:56:05 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:56:05 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:56:05 compute-0 ceph-mgr[74653]: log_channel(audit) log [DBG] : from='client.14160 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 13:56:05 compute-0 ceph-mgr[74653]: [cephadm INFO root] Saving service crash spec with placement *
Jan 20 13:56:05 compute-0 ceph-mgr[74653]: log_channel(cephadm) log [INF] : Saving service crash spec with placement *
Jan 20 13:56:05 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Jan 20 13:56:05 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:56:05 compute-0 brave_murdock[77052]: Scheduled crash update...
Jan 20 13:56:05 compute-0 systemd[1]: libpod-1971166e3308e59b021ceaf2176d70ed8a4c5b19bf9105aa880f2ce5f76e59ae.scope: Deactivated successfully.
Jan 20 13:56:05 compute-0 podman[76989]: 2026-01-20 13:56:05.844593275 +0000 UTC m=+0.741523279 container died 1971166e3308e59b021ceaf2176d70ed8a4c5b19bf9105aa880f2ce5f76e59ae (image=quay.io/ceph/ceph:v18, name=brave_murdock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 20 13:56:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-bc7faa22aa4684f57ac2ef3f75a444ed81712dcc5437f5c451b94405caecc95a-merged.mount: Deactivated successfully.
Jan 20 13:56:05 compute-0 podman[76989]: 2026-01-20 13:56:05.892700338 +0000 UTC m=+0.789630312 container remove 1971166e3308e59b021ceaf2176d70ed8a4c5b19bf9105aa880f2ce5f76e59ae (image=quay.io/ceph/ceph:v18, name=brave_murdock, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 13:56:05 compute-0 systemd[1]: libpod-conmon-1971166e3308e59b021ceaf2176d70ed8a4c5b19bf9105aa880f2ce5f76e59ae.scope: Deactivated successfully.
Jan 20 13:56:05 compute-0 podman[77208]: 2026-01-20 13:56:05.955951992 +0000 UTC m=+0.070765128 container exec a602f19ce9ef2d4922e558036fcbd51fac25abd19d28d7df857e5fbe8f959ba3 (image=quay.io/ceph/ceph:v18, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mon-compute-0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 20 13:56:05 compute-0 podman[77221]: 2026-01-20 13:56:05.984948377 +0000 UTC m=+0.060229813 container create 2ee223af2789160470ae090c7fd4a844d462c4b93780bf64479e0898e4b79a23 (image=quay.io/ceph/ceph:v18, name=bold_keller, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 13:56:06 compute-0 systemd[1]: Started libpod-conmon-2ee223af2789160470ae090c7fd4a844d462c4b93780bf64479e0898e4b79a23.scope.
Jan 20 13:56:06 compute-0 systemd[1]: Started libcrun container.
Jan 20 13:56:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a7919cc61b0f904759872d0eb064e8f571307c31ddf178fba7e09296da80cdb/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 13:56:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a7919cc61b0f904759872d0eb064e8f571307c31ddf178fba7e09296da80cdb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 13:56:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a7919cc61b0f904759872d0eb064e8f571307c31ddf178fba7e09296da80cdb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 13:56:06 compute-0 podman[77221]: 2026-01-20 13:56:05.965584592 +0000 UTC m=+0.040866068 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 20 13:56:06 compute-0 podman[77221]: 2026-01-20 13:56:06.076909338 +0000 UTC m=+0.152190874 container init 2ee223af2789160470ae090c7fd4a844d462c4b93780bf64479e0898e4b79a23 (image=quay.io/ceph/ceph:v18, name=bold_keller, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 13:56:06 compute-0 podman[77221]: 2026-01-20 13:56:06.082739866 +0000 UTC m=+0.158021322 container start 2ee223af2789160470ae090c7fd4a844d462c4b93780bf64479e0898e4b79a23 (image=quay.io/ceph/ceph:v18, name=bold_keller, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 20 13:56:06 compute-0 podman[77221]: 2026-01-20 13:56:06.086356924 +0000 UTC m=+0.161638410 container attach 2ee223af2789160470ae090c7fd4a844d462c4b93780bf64479e0898e4b79a23 (image=quay.io/ceph/ceph:v18, name=bold_keller, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 13:56:06 compute-0 podman[77208]: 2026-01-20 13:56:06.361776565 +0000 UTC m=+0.476589691 container exec_died a602f19ce9ef2d4922e558036fcbd51fac25abd19d28d7df857e5fbe8f959ba3 (image=quay.io/ceph/ceph:v18, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 13:56:06 compute-0 ceph-mgr[74653]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 20 13:56:06 compute-0 sudo[77084]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:06 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 13:56:06 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:56:06 compute-0 sudo[77302]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:56:06 compute-0 sudo[77302]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:06 compute-0 sudo[77302]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:06 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/container_init}] v 0) v1
Jan 20 13:56:06 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3416590019' entity='client.admin' 
Jan 20 13:56:06 compute-0 systemd[1]: libpod-2ee223af2789160470ae090c7fd4a844d462c4b93780bf64479e0898e4b79a23.scope: Deactivated successfully.
Jan 20 13:56:06 compute-0 podman[77221]: 2026-01-20 13:56:06.626462785 +0000 UTC m=+0.701744261 container died 2ee223af2789160470ae090c7fd4a844d462c4b93780bf64479e0898e4b79a23 (image=quay.io/ceph/ceph:v18, name=bold_keller, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 13:56:06 compute-0 sudo[77327]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 13:56:06 compute-0 sudo[77327]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-3a7919cc61b0f904759872d0eb064e8f571307c31ddf178fba7e09296da80cdb-merged.mount: Deactivated successfully.
Jan 20 13:56:06 compute-0 sudo[77327]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:06 compute-0 podman[77221]: 2026-01-20 13:56:06.669161292 +0000 UTC m=+0.744442738 container remove 2ee223af2789160470ae090c7fd4a844d462c4b93780bf64479e0898e4b79a23 (image=quay.io/ceph/ceph:v18, name=bold_keller, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 20 13:56:06 compute-0 systemd[1]: libpod-conmon-2ee223af2789160470ae090c7fd4a844d462c4b93780bf64479e0898e4b79a23.scope: Deactivated successfully.
Jan 20 13:56:06 compute-0 sudo[77363]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:56:06 compute-0 sudo[77363]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:06 compute-0 sudo[77363]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:06 compute-0 podman[77375]: 2026-01-20 13:56:06.732920639 +0000 UTC m=+0.041501775 container create 657ab0fc76834bb4dabb2129fb2c926a996f3da0ca977755d4a833bb7bb4d2ea (image=quay.io/ceph/ceph:v18, name=angry_northcutt, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 20 13:56:06 compute-0 systemd[1]: Started libpod-conmon-657ab0fc76834bb4dabb2129fb2c926a996f3da0ca977755d4a833bb7bb4d2ea.scope.
Jan 20 13:56:06 compute-0 systemd[1]: Started libcrun container.
Jan 20 13:56:06 compute-0 sudo[77407]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 13:56:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65268a25a8c2491eba3cd2caad85a66a466138917285d8222d59bad1e54cfb28/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 13:56:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65268a25a8c2491eba3cd2caad85a66a466138917285d8222d59bad1e54cfb28/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 13:56:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65268a25a8c2491eba3cd2caad85a66a466138917285d8222d59bad1e54cfb28/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 13:56:06 compute-0 sudo[77407]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:06 compute-0 podman[77375]: 2026-01-20 13:56:06.801021634 +0000 UTC m=+0.109602840 container init 657ab0fc76834bb4dabb2129fb2c926a996f3da0ca977755d4a833bb7bb4d2ea (image=quay.io/ceph/ceph:v18, name=angry_northcutt, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 13:56:06 compute-0 podman[77375]: 2026-01-20 13:56:06.807824398 +0000 UTC m=+0.116405534 container start 657ab0fc76834bb4dabb2129fb2c926a996f3da0ca977755d4a833bb7bb4d2ea (image=quay.io/ceph/ceph:v18, name=angry_northcutt, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 20 13:56:06 compute-0 podman[77375]: 2026-01-20 13:56:06.811778605 +0000 UTC m=+0.120359771 container attach 657ab0fc76834bb4dabb2129fb2c926a996f3da0ca977755d4a833bb7bb4d2ea (image=quay.io/ceph/ceph:v18, name=angry_northcutt, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 20 13:56:06 compute-0 podman[77375]: 2026-01-20 13:56:06.715804275 +0000 UTC m=+0.024385441 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 20 13:56:06 compute-0 ceph-mon[74360]: from='client.14158 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 13:56:06 compute-0 ceph-mon[74360]: Saving service mgr spec with placement count:2
Jan 20 13:56:06 compute-0 ceph-mon[74360]: from='client.14160 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 13:56:06 compute-0 ceph-mon[74360]: Saving service crash spec with placement *
Jan 20 13:56:06 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:56:06 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:56:06 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3416590019' entity='client.admin' 
Jan 20 13:56:06 compute-0 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 77454 (sysctl)
Jan 20 13:56:06 compute-0 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Jan 20 13:56:07 compute-0 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Jan 20 13:56:07 compute-0 ceph-mgr[74653]: log_channel(audit) log [DBG] : from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 13:56:07 compute-0 sudo[77407]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:07 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/client_keyrings}] v 0) v1
Jan 20 13:56:07 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:56:07 compute-0 systemd[1]: libpod-657ab0fc76834bb4dabb2129fb2c926a996f3da0ca977755d4a833bb7bb4d2ea.scope: Deactivated successfully.
Jan 20 13:56:07 compute-0 podman[77375]: 2026-01-20 13:56:07.351725372 +0000 UTC m=+0.660306518 container died 657ab0fc76834bb4dabb2129fb2c926a996f3da0ca977755d4a833bb7bb4d2ea (image=quay.io/ceph/ceph:v18, name=angry_northcutt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 13:56:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-65268a25a8c2491eba3cd2caad85a66a466138917285d8222d59bad1e54cfb28-merged.mount: Deactivated successfully.
Jan 20 13:56:07 compute-0 podman[77375]: 2026-01-20 13:56:07.39484828 +0000 UTC m=+0.703429406 container remove 657ab0fc76834bb4dabb2129fb2c926a996f3da0ca977755d4a833bb7bb4d2ea (image=quay.io/ceph/ceph:v18, name=angry_northcutt, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 13:56:07 compute-0 systemd[1]: libpod-conmon-657ab0fc76834bb4dabb2129fb2c926a996f3da0ca977755d4a833bb7bb4d2ea.scope: Deactivated successfully.
Jan 20 13:56:07 compute-0 sudo[77498]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:56:07 compute-0 sudo[77498]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:07 compute-0 sudo[77498]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:07 compute-0 podman[77532]: 2026-01-20 13:56:07.464959639 +0000 UTC m=+0.047611611 container create 6660514fe1de3a0a2f99b0d3a633577767513fb77ac0b5e574a67e516e3c9a55 (image=quay.io/ceph/ceph:v18, name=strange_austin, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 20 13:56:07 compute-0 sudo[77535]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 13:56:07 compute-0 sudo[77535]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:07 compute-0 sudo[77535]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:07 compute-0 systemd[1]: Started libpod-conmon-6660514fe1de3a0a2f99b0d3a633577767513fb77ac0b5e574a67e516e3c9a55.scope.
Jan 20 13:56:07 compute-0 systemd[1]: Started libcrun container.
Jan 20 13:56:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e49e340dc4c07c5c122682c1cf3479172701620e5bae952dc3358c710e36a9c8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 13:56:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e49e340dc4c07c5c122682c1cf3479172701620e5bae952dc3358c710e36a9c8/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 13:56:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e49e340dc4c07c5c122682c1cf3479172701620e5bae952dc3358c710e36a9c8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 13:56:07 compute-0 podman[77532]: 2026-01-20 13:56:07.524738469 +0000 UTC m=+0.107390431 container init 6660514fe1de3a0a2f99b0d3a633577767513fb77ac0b5e574a67e516e3c9a55 (image=quay.io/ceph/ceph:v18, name=strange_austin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 13:56:07 compute-0 podman[77532]: 2026-01-20 13:56:07.530615818 +0000 UTC m=+0.113267760 container start 6660514fe1de3a0a2f99b0d3a633577767513fb77ac0b5e574a67e516e3c9a55 (image=quay.io/ceph/ceph:v18, name=strange_austin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 13:56:07 compute-0 sudo[77571]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:56:07 compute-0 sudo[77571]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:07 compute-0 podman[77532]: 2026-01-20 13:56:07.533601469 +0000 UTC m=+0.116253411 container attach 6660514fe1de3a0a2f99b0d3a633577767513fb77ac0b5e574a67e516e3c9a55 (image=quay.io/ceph/ceph:v18, name=strange_austin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 13:56:07 compute-0 sudo[77571]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:07 compute-0 podman[77532]: 2026-01-20 13:56:07.445264346 +0000 UTC m=+0.027916308 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 20 13:56:07 compute-0 sudo[77603]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 list-networks
Jan 20 13:56:07 compute-0 sudo[77603]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:07 compute-0 sudo[77603]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:07 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 13:56:07 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:56:07 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 13:56:07 compute-0 sudo[77646]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:56:07 compute-0 sudo[77646]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:07 compute-0 sudo[77646]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:07 compute-0 sudo[77690]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 13:56:07 compute-0 sudo[77690]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:07 compute-0 sudo[77690]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:08 compute-0 sudo[77715]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:56:08 compute-0 sudo[77715]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:08 compute-0 sudo[77715]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:08 compute-0 ceph-mgr[74653]: log_channel(audit) log [DBG] : from='client.14166 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 13:56:08 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Jan 20 13:56:08 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:56:08 compute-0 ceph-mgr[74653]: [cephadm INFO root] Added label _admin to host compute-0
Jan 20 13:56:08 compute-0 ceph-mgr[74653]: log_channel(cephadm) log [INF] : Added label _admin to host compute-0
Jan 20 13:56:08 compute-0 strange_austin[77591]: Added label _admin to host compute-0
Jan 20 13:56:08 compute-0 podman[77532]: 2026-01-20 13:56:08.082104927 +0000 UTC m=+0.664756869 container died 6660514fe1de3a0a2f99b0d3a633577767513fb77ac0b5e574a67e516e3c9a55 (image=quay.io/ceph/ceph:v18, name=strange_austin, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 20 13:56:08 compute-0 systemd[1]: libpod-6660514fe1de3a0a2f99b0d3a633577767513fb77ac0b5e574a67e516e3c9a55.scope: Deactivated successfully.
Jan 20 13:56:08 compute-0 sudo[77740]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- inventory --format=json-pretty --filter-for-batch
Jan 20 13:56:08 compute-0 sudo[77740]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-e49e340dc4c07c5c122682c1cf3479172701620e5bae952dc3358c710e36a9c8-merged.mount: Deactivated successfully.
Jan 20 13:56:08 compute-0 podman[77532]: 2026-01-20 13:56:08.121598987 +0000 UTC m=+0.704250929 container remove 6660514fe1de3a0a2f99b0d3a633577767513fb77ac0b5e574a67e516e3c9a55 (image=quay.io/ceph/ceph:v18, name=strange_austin, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 20 13:56:08 compute-0 systemd[1]: libpod-conmon-6660514fe1de3a0a2f99b0d3a633577767513fb77ac0b5e574a67e516e3c9a55.scope: Deactivated successfully.
Jan 20 13:56:08 compute-0 podman[77780]: 2026-01-20 13:56:08.196325852 +0000 UTC m=+0.052653778 container create b0698b1d37fe9e84421ed2b6b3e933e069ca01cda2191551dcc67a20823acc04 (image=quay.io/ceph/ceph:v18, name=vibrant_lamarr, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 20 13:56:08 compute-0 systemd[1]: Started libpod-conmon-b0698b1d37fe9e84421ed2b6b3e933e069ca01cda2191551dcc67a20823acc04.scope.
Jan 20 13:56:08 compute-0 systemd[1]: Started libcrun container.
Jan 20 13:56:08 compute-0 podman[77780]: 2026-01-20 13:56:08.167626014 +0000 UTC m=+0.023954030 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 20 13:56:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0edf7398eb1d8d8b36e47a4ac2ae1b66328caf65a523d821507381a96f904547/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 13:56:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0edf7398eb1d8d8b36e47a4ac2ae1b66328caf65a523d821507381a96f904547/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 13:56:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0edf7398eb1d8d8b36e47a4ac2ae1b66328caf65a523d821507381a96f904547/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 13:56:08 compute-0 podman[77780]: 2026-01-20 13:56:08.270483341 +0000 UTC m=+0.126811277 container init b0698b1d37fe9e84421ed2b6b3e933e069ca01cda2191551dcc67a20823acc04 (image=quay.io/ceph/ceph:v18, name=vibrant_lamarr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 20 13:56:08 compute-0 podman[77780]: 2026-01-20 13:56:08.277617744 +0000 UTC m=+0.133945660 container start b0698b1d37fe9e84421ed2b6b3e933e069ca01cda2191551dcc67a20823acc04 (image=quay.io/ceph/ceph:v18, name=vibrant_lamarr, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 20 13:56:08 compute-0 podman[77780]: 2026-01-20 13:56:08.28854631 +0000 UTC m=+0.144874236 container attach b0698b1d37fe9e84421ed2b6b3e933e069ca01cda2191551dcc67a20823acc04 (image=quay.io/ceph/ceph:v18, name=vibrant_lamarr, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 13:56:08 compute-0 ceph-mon[74360]: from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 13:56:08 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:56:08 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:56:08 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:56:08 compute-0 ceph-mgr[74653]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 20 13:56:08 compute-0 podman[77839]: 2026-01-20 13:56:08.445051149 +0000 UTC m=+0.055368760 container create 65d5a23ae36d06d9d1bb50d00f81ae4419e647be0ac7078ad309b6e3102cb47e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_black, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True)
Jan 20 13:56:08 compute-0 systemd[1]: Started libpod-conmon-65d5a23ae36d06d9d1bb50d00f81ae4419e647be0ac7078ad309b6e3102cb47e.scope.
Jan 20 13:56:08 compute-0 systemd[1]: Started libcrun container.
Jan 20 13:56:08 compute-0 podman[77839]: 2026-01-20 13:56:08.508647553 +0000 UTC m=+0.118965194 container init 65d5a23ae36d06d9d1bb50d00f81ae4419e647be0ac7078ad309b6e3102cb47e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_black, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 13:56:08 compute-0 podman[77839]: 2026-01-20 13:56:08.513287468 +0000 UTC m=+0.123605089 container start 65d5a23ae36d06d9d1bb50d00f81ae4419e647be0ac7078ad309b6e3102cb47e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_black, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Jan 20 13:56:08 compute-0 relaxed_black[77854]: 167 167
Jan 20 13:56:08 compute-0 podman[77839]: 2026-01-20 13:56:08.516299039 +0000 UTC m=+0.126616680 container attach 65d5a23ae36d06d9d1bb50d00f81ae4419e647be0ac7078ad309b6e3102cb47e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_black, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 20 13:56:08 compute-0 podman[77839]: 2026-01-20 13:56:08.516609178 +0000 UTC m=+0.126926799 container died 65d5a23ae36d06d9d1bb50d00f81ae4419e647be0ac7078ad309b6e3102cb47e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_black, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 13:56:08 compute-0 systemd[1]: libpod-65d5a23ae36d06d9d1bb50d00f81ae4419e647be0ac7078ad309b6e3102cb47e.scope: Deactivated successfully.
Jan 20 13:56:08 compute-0 podman[77839]: 2026-01-20 13:56:08.425988553 +0000 UTC m=+0.036306224 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 13:56:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-d33d087a9c9b40b09a3c598e5f4ee9bc0b4be78fd8567edc915baec17685eaa6-merged.mount: Deactivated successfully.
Jan 20 13:56:08 compute-0 podman[77839]: 2026-01-20 13:56:08.548306367 +0000 UTC m=+0.158623998 container remove 65d5a23ae36d06d9d1bb50d00f81ae4419e647be0ac7078ad309b6e3102cb47e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_black, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 20 13:56:08 compute-0 systemd[1]: libpod-conmon-65d5a23ae36d06d9d1bb50d00f81ae4419e647be0ac7078ad309b6e3102cb47e.scope: Deactivated successfully.
Jan 20 13:56:08 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target_autotune}] v 0) v1
Jan 20 13:56:08 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/471860250' entity='client.admin' 
Jan 20 13:56:08 compute-0 systemd[1]: libpod-b0698b1d37fe9e84421ed2b6b3e933e069ca01cda2191551dcc67a20823acc04.scope: Deactivated successfully.
Jan 20 13:56:08 compute-0 conmon[77808]: conmon b0698b1d37fe9e84421e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b0698b1d37fe9e84421ed2b6b3e933e069ca01cda2191551dcc67a20823acc04.scope/container/memory.events
Jan 20 13:56:08 compute-0 podman[77780]: 2026-01-20 13:56:08.809583425 +0000 UTC m=+0.665911341 container died b0698b1d37fe9e84421ed2b6b3e933e069ca01cda2191551dcc67a20823acc04 (image=quay.io/ceph/ceph:v18, name=vibrant_lamarr, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True)
Jan 20 13:56:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-0edf7398eb1d8d8b36e47a4ac2ae1b66328caf65a523d821507381a96f904547-merged.mount: Deactivated successfully.
Jan 20 13:56:08 compute-0 podman[77780]: 2026-01-20 13:56:08.938835116 +0000 UTC m=+0.795163062 container remove b0698b1d37fe9e84421ed2b6b3e933e069ca01cda2191551dcc67a20823acc04 (image=quay.io/ceph/ceph:v18, name=vibrant_lamarr, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 13:56:08 compute-0 systemd[1]: libpod-conmon-b0698b1d37fe9e84421ed2b6b3e933e069ca01cda2191551dcc67a20823acc04.scope: Deactivated successfully.
Jan 20 13:56:09 compute-0 podman[77903]: 2026-01-20 13:56:09.005503702 +0000 UTC m=+0.046061299 container create 6f4057485a2acc1f57b10a478ad2652799106e324419bb1ac01a46c48e357d01 (image=quay.io/ceph/ceph:v18, name=jovial_pare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 13:56:09 compute-0 systemd[1]: Started libpod-conmon-6f4057485a2acc1f57b10a478ad2652799106e324419bb1ac01a46c48e357d01.scope.
Jan 20 13:56:09 compute-0 systemd[1]: Started libcrun container.
Jan 20 13:56:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e46b3caa9f55463572eef2d6a7d397cc979971dcd717a1bc0e5201b53b16c6d0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 13:56:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e46b3caa9f55463572eef2d6a7d397cc979971dcd717a1bc0e5201b53b16c6d0/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 13:56:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e46b3caa9f55463572eef2d6a7d397cc979971dcd717a1bc0e5201b53b16c6d0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 13:56:09 compute-0 podman[77903]: 2026-01-20 13:56:08.985786588 +0000 UTC m=+0.026344215 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 20 13:56:09 compute-0 podman[77903]: 2026-01-20 13:56:09.086885767 +0000 UTC m=+0.127443394 container init 6f4057485a2acc1f57b10a478ad2652799106e324419bb1ac01a46c48e357d01 (image=quay.io/ceph/ceph:v18, name=jovial_pare, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 20 13:56:09 compute-0 podman[77903]: 2026-01-20 13:56:09.098137551 +0000 UTC m=+0.138695148 container start 6f4057485a2acc1f57b10a478ad2652799106e324419bb1ac01a46c48e357d01 (image=quay.io/ceph/ceph:v18, name=jovial_pare, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 20 13:56:09 compute-0 podman[77903]: 2026-01-20 13:56:09.101201464 +0000 UTC m=+0.141759061 container attach 6f4057485a2acc1f57b10a478ad2652799106e324419bb1ac01a46c48e357d01 (image=quay.io/ceph/ceph:v18, name=jovial_pare, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 13:56:09 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/dashboard/cluster/status}] v 0) v1
Jan 20 13:56:09 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/577453859' entity='client.admin' 
Jan 20 13:56:09 compute-0 jovial_pare[77920]: set mgr/dashboard/cluster/status
Jan 20 13:56:09 compute-0 systemd[1]: libpod-6f4057485a2acc1f57b10a478ad2652799106e324419bb1ac01a46c48e357d01.scope: Deactivated successfully.
Jan 20 13:56:09 compute-0 podman[77903]: 2026-01-20 13:56:09.779529109 +0000 UTC m=+0.820086726 container died 6f4057485a2acc1f57b10a478ad2652799106e324419bb1ac01a46c48e357d01 (image=quay.io/ceph/ceph:v18, name=jovial_pare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 13:56:09 compute-0 ceph-mon[74360]: from='client.14166 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 13:56:09 compute-0 ceph-mon[74360]: Added label _admin to host compute-0
Jan 20 13:56:09 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/471860250' entity='client.admin' 
Jan 20 13:56:09 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/577453859' entity='client.admin' 
Jan 20 13:56:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-e46b3caa9f55463572eef2d6a7d397cc979971dcd717a1bc0e5201b53b16c6d0-merged.mount: Deactivated successfully.
Jan 20 13:56:09 compute-0 podman[77903]: 2026-01-20 13:56:09.880897056 +0000 UTC m=+0.921454643 container remove 6f4057485a2acc1f57b10a478ad2652799106e324419bb1ac01a46c48e357d01 (image=quay.io/ceph/ceph:v18, name=jovial_pare, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 20 13:56:09 compute-0 systemd[1]: libpod-conmon-6f4057485a2acc1f57b10a478ad2652799106e324419bb1ac01a46c48e357d01.scope: Deactivated successfully.
Jan 20 13:56:09 compute-0 sudo[73338]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:10 compute-0 podman[77965]: 2026-01-20 13:56:10.067265434 +0000 UTC m=+0.062939186 container create de6a8ec9f918b7013e2981209006d7521e3898a2008ce22baca8569c84c41134 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_jang, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 20 13:56:10 compute-0 systemd[1]: Started libpod-conmon-de6a8ec9f918b7013e2981209006d7521e3898a2008ce22baca8569c84c41134.scope.
Jan 20 13:56:10 compute-0 podman[77965]: 2026-01-20 13:56:10.023176159 +0000 UTC m=+0.018849901 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 13:56:10 compute-0 systemd[1]: Started libcrun container.
Jan 20 13:56:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af58f19c2281c1a455a4c274a095b121756249638e50e2f99b8273b805656de7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 13:56:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af58f19c2281c1a455a4c274a095b121756249638e50e2f99b8273b805656de7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 13:56:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af58f19c2281c1a455a4c274a095b121756249638e50e2f99b8273b805656de7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 13:56:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af58f19c2281c1a455a4c274a095b121756249638e50e2f99b8273b805656de7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 13:56:10 compute-0 podman[77965]: 2026-01-20 13:56:10.189964409 +0000 UTC m=+0.185638151 container init de6a8ec9f918b7013e2981209006d7521e3898a2008ce22baca8569c84c41134 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_jang, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 20 13:56:10 compute-0 podman[77965]: 2026-01-20 13:56:10.200559445 +0000 UTC m=+0.196233187 container start de6a8ec9f918b7013e2981209006d7521e3898a2008ce22baca8569c84c41134 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_jang, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Jan 20 13:56:10 compute-0 podman[77965]: 2026-01-20 13:56:10.204178483 +0000 UTC m=+0.199852215 container attach de6a8ec9f918b7013e2981209006d7521e3898a2008ce22baca8569c84c41134 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_jang, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 20 13:56:10 compute-0 sudo[78009]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mzqpkyhkvfvewrzepdiqvhrrszaykkoq ; /usr/bin/python3'
Jan 20 13:56:10 compute-0 sudo[78009]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:56:10 compute-0 ceph-mgr[74653]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 20 13:56:10 compute-0 python3[78011]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/cephadm/use_repo_digest false
                                           _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 13:56:10 compute-0 podman[78012]: 2026-01-20 13:56:10.5569564 +0000 UTC m=+0.086938686 container create ce4cad0b3851a2a86a990c6a485d066050c100a4f385e24304886b8c47dfb947 (image=quay.io/ceph/ceph:v18, name=amazing_zhukovsky, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 20 13:56:10 compute-0 podman[78012]: 2026-01-20 13:56:10.491322952 +0000 UTC m=+0.021305258 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 20 13:56:10 compute-0 systemd[1]: Started libpod-conmon-ce4cad0b3851a2a86a990c6a485d066050c100a4f385e24304886b8c47dfb947.scope.
Jan 20 13:56:10 compute-0 systemd[1]: Started libcrun container.
Jan 20 13:56:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd7af8b5c32b48ddfc790f698bbaff0bc6e5377742a4b973f26dfeb67f844a3f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 13:56:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd7af8b5c32b48ddfc790f698bbaff0bc6e5377742a4b973f26dfeb67f844a3f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 13:56:10 compute-0 podman[78012]: 2026-01-20 13:56:10.652736454 +0000 UTC m=+0.182718830 container init ce4cad0b3851a2a86a990c6a485d066050c100a4f385e24304886b8c47dfb947 (image=quay.io/ceph/ceph:v18, name=amazing_zhukovsky, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 13:56:10 compute-0 podman[78012]: 2026-01-20 13:56:10.664985396 +0000 UTC m=+0.194967682 container start ce4cad0b3851a2a86a990c6a485d066050c100a4f385e24304886b8c47dfb947 (image=quay.io/ceph/ceph:v18, name=amazing_zhukovsky, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 20 13:56:10 compute-0 podman[78012]: 2026-01-20 13:56:10.668644605 +0000 UTC m=+0.198626931 container attach ce4cad0b3851a2a86a990c6a485d066050c100a4f385e24304886b8c47dfb947 (image=quay.io/ceph/ceph:v18, name=amazing_zhukovsky, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 20 13:56:11 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/use_repo_digest}] v 0) v1
Jan 20 13:56:11 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1045140599' entity='client.admin' 
Jan 20 13:56:11 compute-0 systemd[1]: libpod-ce4cad0b3851a2a86a990c6a485d066050c100a4f385e24304886b8c47dfb947.scope: Deactivated successfully.
Jan 20 13:56:11 compute-0 podman[78267]: 2026-01-20 13:56:11.227039432 +0000 UTC m=+0.021399791 container died ce4cad0b3851a2a86a990c6a485d066050c100a4f385e24304886b8c47dfb947 (image=quay.io/ceph/ceph:v18, name=amazing_zhukovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 20 13:56:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-cd7af8b5c32b48ddfc790f698bbaff0bc6e5377742a4b973f26dfeb67f844a3f-merged.mount: Deactivated successfully.
Jan 20 13:56:11 compute-0 podman[78267]: 2026-01-20 13:56:11.264935258 +0000 UTC m=+0.059295607 container remove ce4cad0b3851a2a86a990c6a485d066050c100a4f385e24304886b8c47dfb947 (image=quay.io/ceph/ceph:v18, name=amazing_zhukovsky, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 13:56:11 compute-0 systemd[1]: libpod-conmon-ce4cad0b3851a2a86a990c6a485d066050c100a4f385e24304886b8c47dfb947.scope: Deactivated successfully.
Jan 20 13:56:11 compute-0 sudo[78009]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:11 compute-0 charming_jang[77981]: [
Jan 20 13:56:11 compute-0 charming_jang[77981]:     {
Jan 20 13:56:11 compute-0 charming_jang[77981]:         "available": false,
Jan 20 13:56:11 compute-0 charming_jang[77981]:         "ceph_device": false,
Jan 20 13:56:11 compute-0 charming_jang[77981]:         "device_id": "QEMU_DVD-ROM_QM00001",
Jan 20 13:56:11 compute-0 charming_jang[77981]:         "lsm_data": {},
Jan 20 13:56:11 compute-0 charming_jang[77981]:         "lvs": [],
Jan 20 13:56:11 compute-0 charming_jang[77981]:         "path": "/dev/sr0",
Jan 20 13:56:11 compute-0 charming_jang[77981]:         "rejected_reasons": [
Jan 20 13:56:11 compute-0 charming_jang[77981]:             "Has a FileSystem",
Jan 20 13:56:11 compute-0 charming_jang[77981]:             "Insufficient space (<5GB)"
Jan 20 13:56:11 compute-0 charming_jang[77981]:         ],
Jan 20 13:56:11 compute-0 charming_jang[77981]:         "sys_api": {
Jan 20 13:56:11 compute-0 charming_jang[77981]:             "actuators": null,
Jan 20 13:56:11 compute-0 charming_jang[77981]:             "device_nodes": "sr0",
Jan 20 13:56:11 compute-0 charming_jang[77981]:             "devname": "sr0",
Jan 20 13:56:11 compute-0 charming_jang[77981]:             "human_readable_size": "482.00 KB",
Jan 20 13:56:11 compute-0 charming_jang[77981]:             "id_bus": "ata",
Jan 20 13:56:11 compute-0 charming_jang[77981]:             "model": "QEMU DVD-ROM",
Jan 20 13:56:11 compute-0 charming_jang[77981]:             "nr_requests": "2",
Jan 20 13:56:11 compute-0 charming_jang[77981]:             "parent": "/dev/sr0",
Jan 20 13:56:11 compute-0 charming_jang[77981]:             "partitions": {},
Jan 20 13:56:11 compute-0 charming_jang[77981]:             "path": "/dev/sr0",
Jan 20 13:56:11 compute-0 charming_jang[77981]:             "removable": "1",
Jan 20 13:56:11 compute-0 charming_jang[77981]:             "rev": "2.5+",
Jan 20 13:56:11 compute-0 charming_jang[77981]:             "ro": "0",
Jan 20 13:56:11 compute-0 charming_jang[77981]:             "rotational": "1",
Jan 20 13:56:11 compute-0 charming_jang[77981]:             "sas_address": "",
Jan 20 13:56:11 compute-0 charming_jang[77981]:             "sas_device_handle": "",
Jan 20 13:56:11 compute-0 charming_jang[77981]:             "scheduler_mode": "mq-deadline",
Jan 20 13:56:11 compute-0 charming_jang[77981]:             "sectors": 0,
Jan 20 13:56:11 compute-0 charming_jang[77981]:             "sectorsize": "2048",
Jan 20 13:56:11 compute-0 charming_jang[77981]:             "size": 493568.0,
Jan 20 13:56:11 compute-0 charming_jang[77981]:             "support_discard": "2048",
Jan 20 13:56:11 compute-0 charming_jang[77981]:             "type": "disk",
Jan 20 13:56:11 compute-0 charming_jang[77981]:             "vendor": "QEMU"
Jan 20 13:56:11 compute-0 charming_jang[77981]:         }
Jan 20 13:56:11 compute-0 charming_jang[77981]:     }
Jan 20 13:56:11 compute-0 charming_jang[77981]: ]
Jan 20 13:56:11 compute-0 systemd[1]: libpod-de6a8ec9f918b7013e2981209006d7521e3898a2008ce22baca8569c84c41134.scope: Deactivated successfully.
Jan 20 13:56:11 compute-0 podman[77965]: 2026-01-20 13:56:11.36353079 +0000 UTC m=+1.359204512 container died de6a8ec9f918b7013e2981209006d7521e3898a2008ce22baca8569c84c41134 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_jang, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 20 13:56:11 compute-0 systemd[1]: libpod-de6a8ec9f918b7013e2981209006d7521e3898a2008ce22baca8569c84c41134.scope: Consumed 1.129s CPU time.
Jan 20 13:56:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-af58f19c2281c1a455a4c274a095b121756249638e50e2f99b8273b805656de7-merged.mount: Deactivated successfully.
Jan 20 13:56:11 compute-0 podman[77965]: 2026-01-20 13:56:11.419180796 +0000 UTC m=+1.414854518 container remove de6a8ec9f918b7013e2981209006d7521e3898a2008ce22baca8569c84c41134 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_jang, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 20 13:56:11 compute-0 systemd[1]: libpod-conmon-de6a8ec9f918b7013e2981209006d7521e3898a2008ce22baca8569c84c41134.scope: Deactivated successfully.
Jan 20 13:56:11 compute-0 sudo[77740]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:11 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 13:56:11 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:56:11 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 13:56:11 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:56:11 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 13:56:11 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:56:11 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 13:56:11 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:56:11 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Jan 20 13:56:11 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 20 13:56:11 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 13:56:11 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 13:56:11 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 20 13:56:11 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 13:56:11 compute-0 ceph-mgr[74653]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Jan 20 13:56:11 compute-0 ceph-mgr[74653]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Jan 20 13:56:11 compute-0 sudo[79094]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:56:11 compute-0 sudo[79094]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:11 compute-0 sudo[79094]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:11 compute-0 sudo[79119]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Jan 20 13:56:11 compute-0 sudo[79119]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:11 compute-0 sudo[79119]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:11 compute-0 sudo[79144]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:56:11 compute-0 sudo[79144]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:11 compute-0 sudo[79144]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:11 compute-0 sudo[79169]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-e399cf45-e6b6-5393-99f1-75c601d3f188/etc/ceph
Jan 20 13:56:11 compute-0 sudo[79169]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:11 compute-0 sudo[79169]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:11 compute-0 sudo[79222]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:56:11 compute-0 sudo[79222]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:11 compute-0 sudo[79222]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:11 compute-0 sudo[79272]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-e399cf45-e6b6-5393-99f1-75c601d3f188/etc/ceph/ceph.conf.new
Jan 20 13:56:11 compute-0 sudo[79272]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:11 compute-0 sudo[79272]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:11 compute-0 sudo[79319]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:56:11 compute-0 sudo[79319]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:11 compute-0 sudo[79319]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:11 compute-0 sudo[79344]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-e399cf45-e6b6-5393-99f1-75c601d3f188
Jan 20 13:56:11 compute-0 sudo[79344]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:11 compute-0 sudo[79344]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:11 compute-0 sudo[79369]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:56:11 compute-0 sudo[79369]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:11 compute-0 sudo[79369]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:12 compute-0 sudo[79394]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-e399cf45-e6b6-5393-99f1-75c601d3f188/etc/ceph/ceph.conf.new
Jan 20 13:56:12 compute-0 sudo[79394]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:12 compute-0 sudo[79394]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:12 compute-0 sudo[79489]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:56:12 compute-0 sudo[79489]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:12 compute-0 sudo[79489]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:12 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/1045140599' entity='client.admin' 
Jan 20 13:56:12 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:56:12 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:56:12 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:56:12 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:56:12 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 20 13:56:12 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 13:56:12 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 13:56:12 compute-0 ceph-mon[74360]: Updating compute-0:/etc/ceph/ceph.conf
Jan 20 13:56:12 compute-0 sudo[79543]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fpfmdtpiapqyxotuljhaxkjukhmlrxeu ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1768917371.7017336-37216-50604205890628/async_wrapper.py j540994363849 30 /home/zuul/.ansible/tmp/ansible-tmp-1768917371.7017336-37216-50604205890628/AnsiballZ_command.py _'
Jan 20 13:56:12 compute-0 sudo[79543]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:56:12 compute-0 sudo[79537]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-e399cf45-e6b6-5393-99f1-75c601d3f188/etc/ceph/ceph.conf.new
Jan 20 13:56:12 compute-0 sudo[79537]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:12 compute-0 sudo[79537]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:12 compute-0 sudo[79567]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:56:12 compute-0 sudo[79567]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:12 compute-0 sudo[79567]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:12 compute-0 sudo[79592]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-e399cf45-e6b6-5393-99f1-75c601d3f188/etc/ceph/ceph.conf.new
Jan 20 13:56:12 compute-0 sudo[79592]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:12 compute-0 sudo[79592]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:12 compute-0 ansible-async_wrapper.py[79563]: Invoked with j540994363849 30 /home/zuul/.ansible/tmp/ansible-tmp-1768917371.7017336-37216-50604205890628/AnsiballZ_command.py _
Jan 20 13:56:12 compute-0 ansible-async_wrapper.py[79621]: Starting module and watcher
Jan 20 13:56:12 compute-0 ansible-async_wrapper.py[79621]: Start watching 79624 (30)
Jan 20 13:56:12 compute-0 ansible-async_wrapper.py[79624]: Start module (79624)
Jan 20 13:56:12 compute-0 ansible-async_wrapper.py[79563]: Return async_wrapper task started.
Jan 20 13:56:12 compute-0 sudo[79543]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:12 compute-0 ceph-mgr[74653]: mgr.server send_report Giving up on OSDs that haven't reported yet, sending potentially incomplete PG state to mon
Jan 20 13:56:12 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 13:56:12 compute-0 ceph-mon[74360]: log_channel(cluster) log [WRN] : Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Jan 20 13:56:12 compute-0 sudo[79618]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:56:12 compute-0 sudo[79618]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:12 compute-0 sudo[79618]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:12 compute-0 sudo[79647]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-e399cf45-e6b6-5393-99f1-75c601d3f188/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Jan 20 13:56:12 compute-0 sudo[79647]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:12 compute-0 sudo[79647]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:12 compute-0 ceph-mgr[74653]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/config/ceph.conf
Jan 20 13:56:12 compute-0 ceph-mgr[74653]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/config/ceph.conf
Jan 20 13:56:12 compute-0 sudo[79672]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:56:12 compute-0 sudo[79672]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:12 compute-0 sudo[79672]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:12 compute-0 python3[79627]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 13:56:12 compute-0 sudo[79699]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/config
Jan 20 13:56:12 compute-0 sudo[79699]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:12 compute-0 sudo[79699]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:12 compute-0 podman[79700]: 2026-01-20 13:56:12.556067174 +0000 UTC m=+0.036421068 container create 211461bbb182295ac56e6554ac9b3fcfc0b31535c683a74d96cda2316629c962 (image=quay.io/ceph/ceph:v18, name=adoring_curran, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 20 13:56:12 compute-0 systemd[1]: Started libpod-conmon-211461bbb182295ac56e6554ac9b3fcfc0b31535c683a74d96cda2316629c962.scope.
Jan 20 13:56:12 compute-0 systemd[1]: Started libcrun container.
Jan 20 13:56:12 compute-0 sudo[79737]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:56:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66c69dd5ccafcdc5073f25d773c4c33d2836794c177d9f81ddfe664fbfc32e81/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 13:56:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66c69dd5ccafcdc5073f25d773c4c33d2836794c177d9f81ddfe664fbfc32e81/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 13:56:12 compute-0 sudo[79737]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:12 compute-0 sudo[79737]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:12 compute-0 podman[79700]: 2026-01-20 13:56:12.633627315 +0000 UTC m=+0.113981229 container init 211461bbb182295ac56e6554ac9b3fcfc0b31535c683a74d96cda2316629c962 (image=quay.io/ceph/ceph:v18, name=adoring_curran, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 13:56:12 compute-0 podman[79700]: 2026-01-20 13:56:12.541651244 +0000 UTC m=+0.022005158 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 20 13:56:12 compute-0 podman[79700]: 2026-01-20 13:56:12.640378579 +0000 UTC m=+0.120732473 container start 211461bbb182295ac56e6554ac9b3fcfc0b31535c683a74d96cda2316629c962 (image=quay.io/ceph/ceph:v18, name=adoring_curran, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 20 13:56:12 compute-0 podman[79700]: 2026-01-20 13:56:12.643098842 +0000 UTC m=+0.123452736 container attach 211461bbb182295ac56e6554ac9b3fcfc0b31535c683a74d96cda2316629c962 (image=quay.io/ceph/ceph:v18, name=adoring_curran, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 13:56:12 compute-0 sudo[79768]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-e399cf45-e6b6-5393-99f1-75c601d3f188/var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/config
Jan 20 13:56:12 compute-0 sudo[79768]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:12 compute-0 sudo[79768]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:12 compute-0 sudo[79794]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:56:12 compute-0 sudo[79794]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:12 compute-0 sudo[79794]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:12 compute-0 sudo[79819]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-e399cf45-e6b6-5393-99f1-75c601d3f188/var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/config/ceph.conf.new
Jan 20 13:56:12 compute-0 sudo[79819]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:12 compute-0 sudo[79819]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:12 compute-0 sudo[79844]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:56:12 compute-0 sudo[79844]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:12 compute-0 sudo[79844]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:12 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 13:56:12 compute-0 sudo[79869]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-e399cf45-e6b6-5393-99f1-75c601d3f188
Jan 20 13:56:12 compute-0 sudo[79869]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:12 compute-0 sudo[79869]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:12 compute-0 sshd-session[79695]: Connection closed by authenticating user root 157.245.78.139 port 39716 [preauth]
Jan 20 13:56:12 compute-0 sudo[79894]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:56:12 compute-0 sudo[79894]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:12 compute-0 sudo[79894]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:12 compute-0 sudo[79938]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-e399cf45-e6b6-5393-99f1-75c601d3f188/var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/config/ceph.conf.new
Jan 20 13:56:12 compute-0 sudo[79938]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:12 compute-0 sudo[79938]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:13 compute-0 sudo[79986]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:56:13 compute-0 sudo[79986]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:13 compute-0 sudo[79986]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:13 compute-0 sudo[80011]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-e399cf45-e6b6-5393-99f1-75c601d3f188/var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/config/ceph.conf.new
Jan 20 13:56:13 compute-0 sudo[80011]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:13 compute-0 sudo[80011]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:13 compute-0 ceph-mgr[74653]: log_channel(audit) log [DBG] : from='client.14174 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 20 13:56:13 compute-0 adoring_curran[79763]: 
Jan 20 13:56:13 compute-0 adoring_curran[79763]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Jan 20 13:56:13 compute-0 ceph-mon[74360]: pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 13:56:13 compute-0 ceph-mon[74360]: Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Jan 20 13:56:13 compute-0 ceph-mon[74360]: Updating compute-0:/var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/config/ceph.conf
Jan 20 13:56:13 compute-0 systemd[1]: libpod-211461bbb182295ac56e6554ac9b3fcfc0b31535c683a74d96cda2316629c962.scope: Deactivated successfully.
Jan 20 13:56:13 compute-0 podman[79700]: 2026-01-20 13:56:13.181205789 +0000 UTC m=+0.661559683 container died 211461bbb182295ac56e6554ac9b3fcfc0b31535c683a74d96cda2316629c962 (image=quay.io/ceph/ceph:v18, name=adoring_curran, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 13:56:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-66c69dd5ccafcdc5073f25d773c4c33d2836794c177d9f81ddfe664fbfc32e81-merged.mount: Deactivated successfully.
Jan 20 13:56:13 compute-0 podman[79700]: 2026-01-20 13:56:13.227618026 +0000 UTC m=+0.707971950 container remove 211461bbb182295ac56e6554ac9b3fcfc0b31535c683a74d96cda2316629c962 (image=quay.io/ceph/ceph:v18, name=adoring_curran, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 20 13:56:13 compute-0 systemd[1]: libpod-conmon-211461bbb182295ac56e6554ac9b3fcfc0b31535c683a74d96cda2316629c962.scope: Deactivated successfully.
Jan 20 13:56:13 compute-0 sudo[80038]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:56:13 compute-0 sudo[80038]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:13 compute-0 sudo[80038]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:13 compute-0 ansible-async_wrapper.py[79624]: Module complete (79624)
Jan 20 13:56:13 compute-0 sudo[80074]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-e399cf45-e6b6-5393-99f1-75c601d3f188/var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/config/ceph.conf.new
Jan 20 13:56:13 compute-0 sudo[80074]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:13 compute-0 sudo[80074]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:13 compute-0 sudo[80099]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:56:13 compute-0 sudo[80099]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:13 compute-0 sudo[80099]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:13 compute-0 sudo[80124]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-e399cf45-e6b6-5393-99f1-75c601d3f188/var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/config/ceph.conf.new /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/config/ceph.conf
Jan 20 13:56:13 compute-0 sudo[80124]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:13 compute-0 sudo[80124]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:13 compute-0 ceph-mgr[74653]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Jan 20 13:56:13 compute-0 ceph-mgr[74653]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Jan 20 13:56:13 compute-0 sudo[80161]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:56:13 compute-0 sudo[80161]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:13 compute-0 sudo[80161]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:13 compute-0 sudo[80197]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Jan 20 13:56:13 compute-0 sudo[80197]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:13 compute-0 sudo[80197]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:13 compute-0 sudo[80222]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:56:13 compute-0 sudo[80222]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:13 compute-0 sudo[80222]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:13 compute-0 sudo[80247]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-e399cf45-e6b6-5393-99f1-75c601d3f188/etc/ceph
Jan 20 13:56:13 compute-0 sudo[80247]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:13 compute-0 sudo[80247]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:13 compute-0 sudo[80310]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cdaehddunwjhuuwmanrimckoljwudgcy ; /usr/bin/python3'
Jan 20 13:56:13 compute-0 sudo[80310]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:56:13 compute-0 sudo[80284]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:56:13 compute-0 sudo[80284]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:13 compute-0 sudo[80284]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:13 compute-0 sudo[80323]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-e399cf45-e6b6-5393-99f1-75c601d3f188/etc/ceph/ceph.client.admin.keyring.new
Jan 20 13:56:13 compute-0 sudo[80323]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:13 compute-0 sudo[80323]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:13 compute-0 sudo[80348]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:56:13 compute-0 sudo[80348]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:13 compute-0 sudo[80348]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:13 compute-0 python3[80320]: ansible-ansible.legacy.async_status Invoked with jid=j540994363849.79563 mode=status _async_dir=/root/.ansible_async
Jan 20 13:56:13 compute-0 sudo[80310]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:13 compute-0 sudo[80373]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-e399cf45-e6b6-5393-99f1-75c601d3f188
Jan 20 13:56:13 compute-0 sudo[80373]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:13 compute-0 sudo[80373]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:13 compute-0 sudo[80409]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:56:13 compute-0 sudo[80409]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:13 compute-0 sudo[80409]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:13 compute-0 sudo[80484]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mkasrmyusjladoiiryfinvnirbntazkp ; /usr/bin/python3'
Jan 20 13:56:13 compute-0 sudo[80484]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:56:13 compute-0 sudo[80455]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-e399cf45-e6b6-5393-99f1-75c601d3f188/etc/ceph/ceph.client.admin.keyring.new
Jan 20 13:56:13 compute-0 sudo[80455]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:13 compute-0 sudo[80455]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:14 compute-0 sudo[80520]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:56:14 compute-0 sudo[80520]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:14 compute-0 sudo[80520]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:14 compute-0 python3[80494]: ansible-ansible.legacy.async_status Invoked with jid=j540994363849.79563 mode=cleanup _async_dir=/root/.ansible_async
Jan 20 13:56:14 compute-0 sudo[80484]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:14 compute-0 sudo[80545]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-e399cf45-e6b6-5393-99f1-75c601d3f188/etc/ceph/ceph.client.admin.keyring.new
Jan 20 13:56:14 compute-0 sudo[80545]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:14 compute-0 sudo[80545]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:14 compute-0 sudo[80570]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:56:14 compute-0 sudo[80570]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:14 compute-0 sudo[80570]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:14 compute-0 ceph-mon[74360]: from='client.14174 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 20 13:56:14 compute-0 ceph-mon[74360]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Jan 20 13:56:14 compute-0 sudo[80595]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-e399cf45-e6b6-5393-99f1-75c601d3f188/etc/ceph/ceph.client.admin.keyring.new
Jan 20 13:56:14 compute-0 sudo[80595]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:14 compute-0 sudo[80595]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:14 compute-0 sudo[80620]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:56:14 compute-0 sudo[80620]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:14 compute-0 sudo[80620]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:14 compute-0 sudo[80645]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-e399cf45-e6b6-5393-99f1-75c601d3f188/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Jan 20 13:56:14 compute-0 sudo[80645]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:14 compute-0 sudo[80645]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:14 compute-0 ceph-mgr[74653]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/config/ceph.client.admin.keyring
Jan 20 13:56:14 compute-0 ceph-mgr[74653]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/config/ceph.client.admin.keyring
Jan 20 13:56:14 compute-0 sudo[80712]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ovjzfvekxcsomgigmcnxtmugwxiospfo ; /usr/bin/python3'
Jan 20 13:56:14 compute-0 sudo[80712]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:56:14 compute-0 sudo[80676]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:56:14 compute-0 sudo[80676]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:14 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 13:56:14 compute-0 sudo[80676]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:14 compute-0 sudo[80721]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/config
Jan 20 13:56:14 compute-0 sudo[80721]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:14 compute-0 sudo[80721]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:14 compute-0 sudo[80746]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:56:14 compute-0 sudo[80746]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:14 compute-0 sudo[80746]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:14 compute-0 python3[80718]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 20 13:56:14 compute-0 sudo[80712]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:14 compute-0 sudo[80771]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-e399cf45-e6b6-5393-99f1-75c601d3f188/var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/config
Jan 20 13:56:14 compute-0 sudo[80771]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:14 compute-0 sudo[80771]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:14 compute-0 sshd-session[78025]: Connection closed by authenticating user root 159.223.5.14 port 40254 [preauth]
Jan 20 13:56:14 compute-0 sudo[80798]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:56:14 compute-0 sudo[80798]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:14 compute-0 sudo[80798]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:14 compute-0 sudo[80823]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-e399cf45-e6b6-5393-99f1-75c601d3f188/var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/config/ceph.client.admin.keyring.new
Jan 20 13:56:14 compute-0 sudo[80823]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:14 compute-0 sudo[80823]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:14 compute-0 sudo[80848]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:56:14 compute-0 sudo[80848]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:14 compute-0 sudo[80848]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:14 compute-0 sudo[80873]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-e399cf45-e6b6-5393-99f1-75c601d3f188
Jan 20 13:56:14 compute-0 sudo[80873]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:14 compute-0 sudo[80873]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:14 compute-0 sudo[80898]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:56:14 compute-0 sudo[80898]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:14 compute-0 sudo[80898]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:14 compute-0 sudo[80969]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ktruzgxmdgiuyfpcvlqtrjybzqhridrq ; /usr/bin/python3'
Jan 20 13:56:14 compute-0 sudo[80923]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-e399cf45-e6b6-5393-99f1-75c601d3f188/var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/config/ceph.client.admin.keyring.new
Jan 20 13:56:14 compute-0 sudo[80969]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:56:14 compute-0 sudo[80923]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:14 compute-0 sudo[80923]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:14 compute-0 sudo[80997]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:56:14 compute-0 sudo[80997]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:14 compute-0 sudo[80997]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:14 compute-0 sudo[81022]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-e399cf45-e6b6-5393-99f1-75c601d3f188/var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/config/ceph.client.admin.keyring.new
Jan 20 13:56:14 compute-0 sudo[81022]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:14 compute-0 python3[80972]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 13:56:15 compute-0 sudo[81022]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:15 compute-0 sudo[81048]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:56:15 compute-0 sudo[81048]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:15 compute-0 sudo[81048]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:15 compute-0 podman[81047]: 2026-01-20 13:56:15.090447149 +0000 UTC m=+0.074578552 container create b5f73f9201ac3f582e50bfbb994ee72346881b363c2528c8508982f9cfdbdb08 (image=quay.io/ceph/ceph:v18, name=elastic_ride, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 20 13:56:15 compute-0 systemd[1]: Started libpod-conmon-b5f73f9201ac3f582e50bfbb994ee72346881b363c2528c8508982f9cfdbdb08.scope.
Jan 20 13:56:15 compute-0 sudo[81085]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-e399cf45-e6b6-5393-99f1-75c601d3f188/var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/config/ceph.client.admin.keyring.new
Jan 20 13:56:15 compute-0 sudo[81085]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:15 compute-0 sudo[81085]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:15 compute-0 podman[81047]: 2026-01-20 13:56:15.041932325 +0000 UTC m=+0.026063748 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 20 13:56:15 compute-0 systemd[1]: Started libcrun container.
Jan 20 13:56:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1a0d0aa41a42d3ade9afdc900fd8399a8b8fa84fcf9f75830675c0a8b163806/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 20 13:56:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1a0d0aa41a42d3ade9afdc900fd8399a8b8fa84fcf9f75830675c0a8b163806/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 13:56:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1a0d0aa41a42d3ade9afdc900fd8399a8b8fa84fcf9f75830675c0a8b163806/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 13:56:15 compute-0 sudo[81115]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:56:15 compute-0 sudo[81115]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:15 compute-0 sudo[81115]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:15 compute-0 sudo[81140]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-e399cf45-e6b6-5393-99f1-75c601d3f188/var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/config/ceph.client.admin.keyring.new /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/config/ceph.client.admin.keyring
Jan 20 13:56:15 compute-0 sudo[81140]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:15 compute-0 sudo[81140]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:15 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 13:56:15 compute-0 podman[81047]: 2026-01-20 13:56:15.268918064 +0000 UTC m=+0.253049517 container init b5f73f9201ac3f582e50bfbb994ee72346881b363c2528c8508982f9cfdbdb08 (image=quay.io/ceph/ceph:v18, name=elastic_ride, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 20 13:56:15 compute-0 podman[81047]: 2026-01-20 13:56:15.277557887 +0000 UTC m=+0.261689310 container start b5f73f9201ac3f582e50bfbb994ee72346881b363c2528c8508982f9cfdbdb08 (image=quay.io/ceph/ceph:v18, name=elastic_ride, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 13:56:15 compute-0 podman[81047]: 2026-01-20 13:56:15.380942078 +0000 UTC m=+0.365073521 container attach b5f73f9201ac3f582e50bfbb994ee72346881b363c2528c8508982f9cfdbdb08 (image=quay.io/ceph/ceph:v18, name=elastic_ride, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 20 13:56:15 compute-0 ceph-mon[74360]: Updating compute-0:/var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/config/ceph.client.admin.keyring
Jan 20 13:56:15 compute-0 ceph-mon[74360]: pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 13:56:15 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:56:15 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 13:56:15 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:56:15 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 20 13:56:15 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:56:15 compute-0 ceph-mgr[74653]: [progress INFO root] update: starting ev 16bd11c2-7fe8-410a-885b-00aca79ade69 (Updating crash deployment (+1 -> 1))
Jan 20 13:56:15 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) v1
Jan 20 13:56:15 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 20 13:56:15 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Jan 20 13:56:15 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 13:56:15 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 13:56:15 compute-0 ceph-mgr[74653]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-0 on compute-0
Jan 20 13:56:15 compute-0 ceph-mgr[74653]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-0 on compute-0
Jan 20 13:56:15 compute-0 sudo[81166]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:56:15 compute-0 sudo[81166]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:15 compute-0 sudo[81166]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:15 compute-0 sudo[81191]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 13:56:15 compute-0 sudo[81191]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:15 compute-0 sudo[81191]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:15 compute-0 sudo[81226]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:56:15 compute-0 sudo[81226]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:15 compute-0 sudo[81226]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:15 compute-0 sudo[81260]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid e399cf45-e6b6-5393-99f1-75c601d3f188
Jan 20 13:56:15 compute-0 sudo[81260]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:15 compute-0 ceph-mgr[74653]: log_channel(audit) log [DBG] : from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 20 13:56:15 compute-0 elastic_ride[81111]: 
Jan 20 13:56:15 compute-0 elastic_ride[81111]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Jan 20 13:56:15 compute-0 systemd[1]: libpod-b5f73f9201ac3f582e50bfbb994ee72346881b363c2528c8508982f9cfdbdb08.scope: Deactivated successfully.
Jan 20 13:56:15 compute-0 podman[81047]: 2026-01-20 13:56:15.848501474 +0000 UTC m=+0.832632897 container died b5f73f9201ac3f582e50bfbb994ee72346881b363c2528c8508982f9cfdbdb08 (image=quay.io/ceph/ceph:v18, name=elastic_ride, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 13:56:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-b1a0d0aa41a42d3ade9afdc900fd8399a8b8fa84fcf9f75830675c0a8b163806-merged.mount: Deactivated successfully.
Jan 20 13:56:16 compute-0 podman[81047]: 2026-01-20 13:56:16.006127895 +0000 UTC m=+0.990259298 container remove b5f73f9201ac3f582e50bfbb994ee72346881b363c2528c8508982f9cfdbdb08 (image=quay.io/ceph/ceph:v18, name=elastic_ride, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 20 13:56:16 compute-0 systemd[1]: libpod-conmon-b5f73f9201ac3f582e50bfbb994ee72346881b363c2528c8508982f9cfdbdb08.scope: Deactivated successfully.
Jan 20 13:56:16 compute-0 sudo[80969]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:16 compute-0 podman[81338]: 2026-01-20 13:56:16.10343881 +0000 UTC m=+0.036532810 container create 4d21c3c4b1529838a7393a34fc780e8a4fe81d9b019a4b444b254f31f3dfcbee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_edison, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 13:56:16 compute-0 systemd[1]: Started libpod-conmon-4d21c3c4b1529838a7393a34fc780e8a4fe81d9b019a4b444b254f31f3dfcbee.scope.
Jan 20 13:56:16 compute-0 systemd[1]: Started libcrun container.
Jan 20 13:56:16 compute-0 podman[81338]: 2026-01-20 13:56:16.16470281 +0000 UTC m=+0.097796880 container init 4d21c3c4b1529838a7393a34fc780e8a4fe81d9b019a4b444b254f31f3dfcbee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_edison, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 20 13:56:16 compute-0 podman[81338]: 2026-01-20 13:56:16.169867819 +0000 UTC m=+0.102961829 container start 4d21c3c4b1529838a7393a34fc780e8a4fe81d9b019a4b444b254f31f3dfcbee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_edison, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 20 13:56:16 compute-0 podman[81338]: 2026-01-20 13:56:16.172874212 +0000 UTC m=+0.105968222 container attach 4d21c3c4b1529838a7393a34fc780e8a4fe81d9b019a4b444b254f31f3dfcbee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_edison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 20 13:56:16 compute-0 sweet_edison[81354]: 167 167
Jan 20 13:56:16 compute-0 systemd[1]: libpod-4d21c3c4b1529838a7393a34fc780e8a4fe81d9b019a4b444b254f31f3dfcbee.scope: Deactivated successfully.
Jan 20 13:56:16 compute-0 podman[81338]: 2026-01-20 13:56:16.087503359 +0000 UTC m=+0.020597389 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 13:56:16 compute-0 podman[81359]: 2026-01-20 13:56:16.210013427 +0000 UTC m=+0.022423568 container died 4d21c3c4b1529838a7393a34fc780e8a4fe81d9b019a4b444b254f31f3dfcbee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_edison, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 13:56:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-5cc82d941d8987825c73e2bdea808a8ef8c0197785ae3d3648ac3237c74f7283-merged.mount: Deactivated successfully.
Jan 20 13:56:16 compute-0 podman[81359]: 2026-01-20 13:56:16.251137112 +0000 UTC m=+0.063547273 container remove 4d21c3c4b1529838a7393a34fc780e8a4fe81d9b019a4b444b254f31f3dfcbee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_edison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 20 13:56:16 compute-0 systemd[1]: libpod-conmon-4d21c3c4b1529838a7393a34fc780e8a4fe81d9b019a4b444b254f31f3dfcbee.scope: Deactivated successfully.
Jan 20 13:56:16 compute-0 sudo[81399]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-btinyupghnezlsjnqjjujfxwakquqelr ; /usr/bin/python3'
Jan 20 13:56:16 compute-0 sudo[81399]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:56:16 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 13:56:16 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:56:16 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:56:16 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:56:16 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 20 13:56:16 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Jan 20 13:56:16 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 13:56:16 compute-0 ceph-mon[74360]: Deploying daemon crash.compute-0 on compute-0
Jan 20 13:56:16 compute-0 ceph-mon[74360]: from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 20 13:56:16 compute-0 systemd[1]: Reloading.
Jan 20 13:56:16 compute-0 systemd-rc-local-generator[81427]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 13:56:16 compute-0 systemd-sysv-generator[81431]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 13:56:16 compute-0 python3[81401]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 13:56:16 compute-0 podman[81437]: 2026-01-20 13:56:16.928199582 +0000 UTC m=+0.037833635 container create 35105c5ccd523d72ac4d70783c6f030e80d4e337e3478c8c6cee6c45c7365662 (image=quay.io/ceph/ceph:v18, name=elegant_ishizaka, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 20 13:56:17 compute-0 podman[81437]: 2026-01-20 13:56:16.913187716 +0000 UTC m=+0.022821789 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 20 13:56:17 compute-0 systemd[1]: Started libpod-conmon-35105c5ccd523d72ac4d70783c6f030e80d4e337e3478c8c6cee6c45c7365662.scope.
Jan 20 13:56:17 compute-0 systemd[1]: Started libcrun container.
Jan 20 13:56:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bff659cafd3d632430da4f98ec6a849c656449a13edc478a2b8ae7d046d8cad0/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 13:56:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bff659cafd3d632430da4f98ec6a849c656449a13edc478a2b8ae7d046d8cad0/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 13:56:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bff659cafd3d632430da4f98ec6a849c656449a13edc478a2b8ae7d046d8cad0/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 20 13:56:17 compute-0 podman[81437]: 2026-01-20 13:56:17.069228653 +0000 UTC m=+0.178862716 container init 35105c5ccd523d72ac4d70783c6f030e80d4e337e3478c8c6cee6c45c7365662 (image=quay.io/ceph/ceph:v18, name=elegant_ishizaka, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 20 13:56:17 compute-0 systemd[1]: Reloading.
Jan 20 13:56:17 compute-0 podman[81437]: 2026-01-20 13:56:17.076688575 +0000 UTC m=+0.186322628 container start 35105c5ccd523d72ac4d70783c6f030e80d4e337e3478c8c6cee6c45c7365662 (image=quay.io/ceph/ceph:v18, name=elegant_ishizaka, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 13:56:17 compute-0 podman[81437]: 2026-01-20 13:56:17.07906922 +0000 UTC m=+0.188703293 container attach 35105c5ccd523d72ac4d70783c6f030e80d4e337e3478c8c6cee6c45c7365662 (image=quay.io/ceph/ceph:v18, name=elegant_ishizaka, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 20 13:56:17 compute-0 systemd-rc-local-generator[81486]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 13:56:17 compute-0 systemd-sysv-generator[81489]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 13:56:17 compute-0 systemd[1]: Starting Ceph crash.compute-0 for e399cf45-e6b6-5393-99f1-75c601d3f188...
Jan 20 13:56:17 compute-0 ansible-async_wrapper.py[79621]: Done in kid B.
Jan 20 13:56:17 compute-0 podman[81564]: 2026-01-20 13:56:17.50577727 +0000 UTC m=+0.039473512 container create 5c427c7f90ded53008d5402c1f0f8e229d849a21f0d69821b7cc5c963cbf6c33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-crash-compute-0, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 13:56:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e1d027998a0f79c9234f07c6cd2800bd37cfc72a8fdf9910aea515a36643e3a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 13:56:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e1d027998a0f79c9234f07c6cd2800bd37cfc72a8fdf9910aea515a36643e3a/merged/etc/ceph/ceph.client.crash.compute-0.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 13:56:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e1d027998a0f79c9234f07c6cd2800bd37cfc72a8fdf9910aea515a36643e3a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 13:56:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e1d027998a0f79c9234f07c6cd2800bd37cfc72a8fdf9910aea515a36643e3a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 13:56:17 compute-0 podman[81564]: 2026-01-20 13:56:17.573888144 +0000 UTC m=+0.107584406 container init 5c427c7f90ded53008d5402c1f0f8e229d849a21f0d69821b7cc5c963cbf6c33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-crash-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 20 13:56:17 compute-0 podman[81564]: 2026-01-20 13:56:17.582758525 +0000 UTC m=+0.116454767 container start 5c427c7f90ded53008d5402c1f0f8e229d849a21f0d69821b7cc5c963cbf6c33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-crash-compute-0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 20 13:56:17 compute-0 podman[81564]: 2026-01-20 13:56:17.486452976 +0000 UTC m=+0.020149248 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 13:56:17 compute-0 bash[81564]: 5c427c7f90ded53008d5402c1f0f8e229d849a21f0d69821b7cc5c963cbf6c33
Jan 20 13:56:17 compute-0 systemd[1]: Started Ceph crash.compute-0 for e399cf45-e6b6-5393-99f1-75c601d3f188.
Jan 20 13:56:17 compute-0 sudo[81260]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:17 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=log_to_file}] v 0) v1
Jan 20 13:56:17 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 13:56:17 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/645972064' entity='client.admin' 
Jan 20 13:56:17 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:56:17 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 13:56:17 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:56:17 compute-0 systemd[1]: libpod-35105c5ccd523d72ac4d70783c6f030e80d4e337e3478c8c6cee6c45c7365662.scope: Deactivated successfully.
Jan 20 13:56:17 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Jan 20 13:56:17 compute-0 conmon[81454]: conmon 35105c5ccd523d72ac4d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-35105c5ccd523d72ac4d70783c6f030e80d4e337e3478c8c6cee6c45c7365662.scope/container/memory.events
Jan 20 13:56:17 compute-0 podman[81437]: 2026-01-20 13:56:17.655248918 +0000 UTC m=+0.764882981 container died 35105c5ccd523d72ac4d70783c6f030e80d4e337e3478c8c6cee6c45c7365662 (image=quay.io/ceph/ceph:v18, name=elegant_ishizaka, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2)
Jan 20 13:56:17 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:56:17 compute-0 ceph-mgr[74653]: [progress INFO root] complete: finished ev 16bd11c2-7fe8-410a-885b-00aca79ade69 (Updating crash deployment (+1 -> 1))
Jan 20 13:56:17 compute-0 ceph-mgr[74653]: [progress INFO root] Completed event 16bd11c2-7fe8-410a-885b-00aca79ade69 (Updating crash deployment (+1 -> 1)) in 2 seconds
Jan 20 13:56:17 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Jan 20 13:56:17 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:56:17 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev a8f20aca-30b3-44f7-9721-726f8ab8d818 does not exist
Jan 20 13:56:17 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Jan 20 13:56:17 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:56:17 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev fa87c027-528b-43b9-8155-ef3de5b8914a does not exist
Jan 20 13:56:17 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Jan 20 13:56:17 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:56:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-bff659cafd3d632430da4f98ec6a849c656449a13edc478a2b8ae7d046d8cad0-merged.mount: Deactivated successfully.
Jan 20 13:56:17 compute-0 podman[81437]: 2026-01-20 13:56:17.717596897 +0000 UTC m=+0.827230980 container remove 35105c5ccd523d72ac4d70783c6f030e80d4e337e3478c8c6cee6c45c7365662 (image=quay.io/ceph/ceph:v18, name=elegant_ishizaka, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True)
Jan 20 13:56:17 compute-0 systemd[1]: libpod-conmon-35105c5ccd523d72ac4d70783c6f030e80d4e337e3478c8c6cee6c45c7365662.scope: Deactivated successfully.
Jan 20 13:56:17 compute-0 sudo[81399]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:17 compute-0 sudo[81594]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:56:17 compute-0 sudo[81594]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:17 compute-0 sudo[81594]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:17 compute-0 ceph-mon[74360]: pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 13:56:17 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/645972064' entity='client.admin' 
Jan 20 13:56:17 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:56:17 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:56:17 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:56:17 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:56:17 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:56:17 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:56:17 compute-0 sudo[81623]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 13:56:17 compute-0 sudo[81623]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:17 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-crash-compute-0[81580]: INFO:ceph-crash:pinging cluster to exercise our key
Jan 20 13:56:17 compute-0 sudo[81623]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:17 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 13:56:17 compute-0 sudo[81650]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:56:17 compute-0 sudo[81650]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:17 compute-0 sudo[81650]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:17 compute-0 sudo[81700]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xebijtaciyqxiyvtuavvqsjpxossjxgx ; /usr/bin/python3'
Jan 20 13:56:17 compute-0 sudo[81700]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:56:17 compute-0 sudo[81692]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 13:56:17 compute-0 sudo[81692]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:17 compute-0 sudo[81692]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:17 compute-0 sudo[81726]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:56:17 compute-0 sudo[81726]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:17 compute-0 sudo[81726]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:18 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-crash-compute-0[81580]: 2026-01-20T13:56:18.013+0000 7fa28af50640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Jan 20 13:56:18 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-crash-compute-0[81580]: 2026-01-20T13:56:18.013+0000 7fa28af50640 -1 AuthRegistry(0x7fa284066fe0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Jan 20 13:56:18 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-crash-compute-0[81580]: 2026-01-20T13:56:18.015+0000 7fa28af50640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Jan 20 13:56:18 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-crash-compute-0[81580]: 2026-01-20T13:56:18.015+0000 7fa28af50640 -1 AuthRegistry(0x7fa28af4f000) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Jan 20 13:56:18 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-crash-compute-0[81580]: 2026-01-20T13:56:18.018+0000 7fa288cc5640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Jan 20 13:56:18 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-crash-compute-0[81580]: 2026-01-20T13:56:18.018+0000 7fa28af50640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Jan 20 13:56:18 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-crash-compute-0[81580]: [errno 13] RADOS permission denied (error connecting to the cluster)
Jan 20 13:56:18 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-crash-compute-0[81580]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
Jan 20 13:56:18 compute-0 python3[81717]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global mon_cluster_log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 13:56:18 compute-0 sudo[81751]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Jan 20 13:56:18 compute-0 sudo[81751]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:18 compute-0 podman[81781]: 2026-01-20 13:56:18.093817059 +0000 UTC m=+0.045939045 container create 73a832e663b5db8eb6356ca14e150f682a9e2e8a282ab7db02c43ada1f972503 (image=quay.io/ceph/ceph:v18, name=sleepy_mayer, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 20 13:56:18 compute-0 systemd[1]: Started libpod-conmon-73a832e663b5db8eb6356ca14e150f682a9e2e8a282ab7db02c43ada1f972503.scope.
Jan 20 13:56:18 compute-0 systemd[1]: Started libcrun container.
Jan 20 13:56:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25c8acf0ec18340359327d90b431d6fa820677607f1f36bb00176a4681004394/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 13:56:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25c8acf0ec18340359327d90b431d6fa820677607f1f36bb00176a4681004394/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 13:56:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25c8acf0ec18340359327d90b431d6fa820677607f1f36bb00176a4681004394/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 20 13:56:18 compute-0 podman[81781]: 2026-01-20 13:56:18.077108547 +0000 UTC m=+0.029230553 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 20 13:56:18 compute-0 podman[81781]: 2026-01-20 13:56:18.17177056 +0000 UTC m=+0.123892566 container init 73a832e663b5db8eb6356ca14e150f682a9e2e8a282ab7db02c43ada1f972503 (image=quay.io/ceph/ceph:v18, name=sleepy_mayer, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 13:56:18 compute-0 podman[81781]: 2026-01-20 13:56:18.185093682 +0000 UTC m=+0.137215668 container start 73a832e663b5db8eb6356ca14e150f682a9e2e8a282ab7db02c43ada1f972503 (image=quay.io/ceph/ceph:v18, name=sleepy_mayer, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 13:56:18 compute-0 podman[81781]: 2026-01-20 13:56:18.195806701 +0000 UTC m=+0.147928717 container attach 73a832e663b5db8eb6356ca14e150f682a9e2e8a282ab7db02c43ada1f972503 (image=quay.io/ceph/ceph:v18, name=sleepy_mayer, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 13:56:18 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 13:56:18 compute-0 podman[81894]: 2026-01-20 13:56:18.605046408 +0000 UTC m=+0.062868304 container exec a602f19ce9ef2d4922e558036fcbd51fac25abd19d28d7df857e5fbe8f959ba3 (image=quay.io/ceph/ceph:v18, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mon-compute-0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 20 13:56:18 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mon_cluster_log_to_file}] v 0) v1
Jan 20 13:56:18 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/88407011' entity='client.admin' 
Jan 20 13:56:18 compute-0 podman[81894]: 2026-01-20 13:56:18.730895427 +0000 UTC m=+0.188717333 container exec_died a602f19ce9ef2d4922e558036fcbd51fac25abd19d28d7df857e5fbe8f959ba3 (image=quay.io/ceph/ceph:v18, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mon-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507)
Jan 20 13:56:18 compute-0 systemd[1]: libpod-73a832e663b5db8eb6356ca14e150f682a9e2e8a282ab7db02c43ada1f972503.scope: Deactivated successfully.
Jan 20 13:56:18 compute-0 podman[81781]: 2026-01-20 13:56:18.745665127 +0000 UTC m=+0.697787153 container died 73a832e663b5db8eb6356ca14e150f682a9e2e8a282ab7db02c43ada1f972503 (image=quay.io/ceph/ceph:v18, name=sleepy_mayer, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 13:56:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-25c8acf0ec18340359327d90b431d6fa820677607f1f36bb00176a4681004394-merged.mount: Deactivated successfully.
Jan 20 13:56:18 compute-0 podman[81781]: 2026-01-20 13:56:18.823087535 +0000 UTC m=+0.775209521 container remove 73a832e663b5db8eb6356ca14e150f682a9e2e8a282ab7db02c43ada1f972503 (image=quay.io/ceph/ceph:v18, name=sleepy_mayer, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 20 13:56:18 compute-0 systemd[1]: libpod-conmon-73a832e663b5db8eb6356ca14e150f682a9e2e8a282ab7db02c43ada1f972503.scope: Deactivated successfully.
Jan 20 13:56:18 compute-0 sudo[81700]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:18 compute-0 sudo[81751]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:18 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 13:56:18 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:56:18 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 13:56:18 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:56:18 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 13:56:18 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 13:56:18 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 20 13:56:18 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 13:56:18 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 20 13:56:18 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:56:18 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 5791b52f-1e4b-495a-b137-15948bf3f928 does not exist
Jan 20 13:56:18 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 2a5e4292-d6fb-4bdb-be65-8f00b03b2576 does not exist
Jan 20 13:56:18 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 7ff7f04b-7876-4996-93e0-a5fc2fa42d13 does not exist
Jan 20 13:56:19 compute-0 sudo[81976]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:56:19 compute-0 sudo[81976]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:19 compute-0 sudo[81976]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:19 compute-0 sudo[82001]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 13:56:19 compute-0 sudo[82001]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:19 compute-0 sudo[82001]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:19 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_user}] v 0) v1
Jan 20 13:56:19 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:56:19 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_password}] v 0) v1
Jan 20 13:56:19 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:56:19 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_user}] v 0) v1
Jan 20 13:56:19 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:56:19 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_password}] v 0) v1
Jan 20 13:56:19 compute-0 sudo[82049]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qpicxiaeccblnmiogcfnhhqnkkzkdezc ; /usr/bin/python3'
Jan 20 13:56:19 compute-0 sudo[82049]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:56:19 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:56:19 compute-0 ceph-mgr[74653]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (unknown last config time)...
Jan 20 13:56:19 compute-0 ceph-mgr[74653]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (unknown last config time)...
Jan 20 13:56:19 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
Jan 20 13:56:19 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 20 13:56:19 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) v1
Jan 20 13:56:19 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Jan 20 13:56:19 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 13:56:19 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 13:56:19 compute-0 ceph-mgr[74653]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Jan 20 13:56:19 compute-0 ceph-mgr[74653]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
Jan 20 13:56:19 compute-0 sudo[82052]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:56:19 compute-0 sudo[82052]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:19 compute-0 sudo[82052]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:19 compute-0 sudo[82077]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 13:56:19 compute-0 sudo[82077]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:19 compute-0 sudo[82077]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:19 compute-0 python3[82051]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd set-require-min-compat-client mimic
                                           _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 13:56:19 compute-0 sudo[82102]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:56:19 compute-0 sudo[82102]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:19 compute-0 sudo[82102]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:19 compute-0 podman[82125]: 2026-01-20 13:56:19.336236605 +0000 UTC m=+0.062617497 container create e8a3788ae807f0c2b8198cc586fc98215088cffb93fe17bfda48ddfce60f68c1 (image=quay.io/ceph/ceph:v18, name=pedantic_curran, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 20 13:56:19 compute-0 sudo[82136]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid e399cf45-e6b6-5393-99f1-75c601d3f188
Jan 20 13:56:19 compute-0 sudo[82136]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:19 compute-0 systemd[1]: Started libpod-conmon-e8a3788ae807f0c2b8198cc586fc98215088cffb93fe17bfda48ddfce60f68c1.scope.
Jan 20 13:56:19 compute-0 podman[82125]: 2026-01-20 13:56:19.317081246 +0000 UTC m=+0.043462158 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 20 13:56:19 compute-0 systemd[1]: Started libcrun container.
Jan 20 13:56:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7b81cea0326ad94f85be9417a25a2458ac1cdb44ebf2ee925ad078b06f62bb7/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 20 13:56:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7b81cea0326ad94f85be9417a25a2458ac1cdb44ebf2ee925ad078b06f62bb7/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 13:56:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7b81cea0326ad94f85be9417a25a2458ac1cdb44ebf2ee925ad078b06f62bb7/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 13:56:19 compute-0 podman[82125]: 2026-01-20 13:56:19.43055074 +0000 UTC m=+0.156931662 container init e8a3788ae807f0c2b8198cc586fc98215088cffb93fe17bfda48ddfce60f68c1 (image=quay.io/ceph/ceph:v18, name=pedantic_curran, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 20 13:56:19 compute-0 podman[82125]: 2026-01-20 13:56:19.43979693 +0000 UTC m=+0.166177822 container start e8a3788ae807f0c2b8198cc586fc98215088cffb93fe17bfda48ddfce60f68c1 (image=quay.io/ceph/ceph:v18, name=pedantic_curran, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 13:56:19 compute-0 podman[82125]: 2026-01-20 13:56:19.4427294 +0000 UTC m=+0.169110312 container attach e8a3788ae807f0c2b8198cc586fc98215088cffb93fe17bfda48ddfce60f68c1 (image=quay.io/ceph/ceph:v18, name=pedantic_curran, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 20 13:56:19 compute-0 podman[82186]: 2026-01-20 13:56:19.641045832 +0000 UTC m=+0.065217678 container create 5a73a7e2566fa7e8c2518eed97e80ff84773cddc1aa5e36130c56a1cc2a6daeb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_proskuriakova, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 13:56:19 compute-0 systemd[1]: Started libpod-conmon-5a73a7e2566fa7e8c2518eed97e80ff84773cddc1aa5e36130c56a1cc2a6daeb.scope.
Jan 20 13:56:19 compute-0 podman[82186]: 2026-01-20 13:56:19.612157279 +0000 UTC m=+0.036329155 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 13:56:19 compute-0 systemd[1]: Started libcrun container.
Jan 20 13:56:19 compute-0 ceph-mon[74360]: pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 13:56:19 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/88407011' entity='client.admin' 
Jan 20 13:56:19 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:56:19 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:56:19 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 13:56:19 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 13:56:19 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:56:19 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:56:19 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:56:19 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:56:19 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:56:19 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 20 13:56:19 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Jan 20 13:56:19 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 13:56:19 compute-0 podman[82186]: 2026-01-20 13:56:19.721168273 +0000 UTC m=+0.145340169 container init 5a73a7e2566fa7e8c2518eed97e80ff84773cddc1aa5e36130c56a1cc2a6daeb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_proskuriakova, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Jan 20 13:56:19 compute-0 podman[82186]: 2026-01-20 13:56:19.725765487 +0000 UTC m=+0.149937313 container start 5a73a7e2566fa7e8c2518eed97e80ff84773cddc1aa5e36130c56a1cc2a6daeb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_proskuriakova, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 20 13:56:19 compute-0 podman[82186]: 2026-01-20 13:56:19.728509101 +0000 UTC m=+0.152680917 container attach 5a73a7e2566fa7e8c2518eed97e80ff84773cddc1aa5e36130c56a1cc2a6daeb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_proskuriakova, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 13:56:19 compute-0 quizzical_proskuriakova[82204]: 167 167
Jan 20 13:56:19 compute-0 systemd[1]: libpod-5a73a7e2566fa7e8c2518eed97e80ff84773cddc1aa5e36130c56a1cc2a6daeb.scope: Deactivated successfully.
Jan 20 13:56:19 compute-0 podman[82186]: 2026-01-20 13:56:19.731105882 +0000 UTC m=+0.155277698 container died 5a73a7e2566fa7e8c2518eed97e80ff84773cddc1aa5e36130c56a1cc2a6daeb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_proskuriakova, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 20 13:56:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-34f93ec9429b65012f096dcf42ca336ed534bca350aa94ba172a83ab68e8b28d-merged.mount: Deactivated successfully.
Jan 20 13:56:19 compute-0 podman[82186]: 2026-01-20 13:56:19.765163024 +0000 UTC m=+0.189334850 container remove 5a73a7e2566fa7e8c2518eed97e80ff84773cddc1aa5e36130c56a1cc2a6daeb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_proskuriakova, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 13:56:19 compute-0 systemd[1]: libpod-conmon-5a73a7e2566fa7e8c2518eed97e80ff84773cddc1aa5e36130c56a1cc2a6daeb.scope: Deactivated successfully.
Jan 20 13:56:19 compute-0 sudo[82136]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:19 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 13:56:19 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:56:19 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 13:56:19 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:56:19 compute-0 ceph-mgr[74653]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.wookjv (unknown last config time)...
Jan 20 13:56:19 compute-0 ceph-mgr[74653]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.wookjv (unknown last config time)...
Jan 20 13:56:19 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.wookjv", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Jan 20 13:56:19 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.wookjv", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 20 13:56:19 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Jan 20 13:56:19 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 20 13:56:19 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 13:56:19 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 13:56:19 compute-0 ceph-mgr[74653]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.wookjv on compute-0
Jan 20 13:56:19 compute-0 ceph-mgr[74653]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.wookjv on compute-0
Jan 20 13:56:19 compute-0 sudo[82241]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:56:19 compute-0 sudo[82241]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:19 compute-0 sudo[82241]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:19 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd set-require-min-compat-client", "version": "mimic"} v 0) v1
Jan 20 13:56:19 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/298940417' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Jan 20 13:56:19 compute-0 sudo[82266]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 13:56:19 compute-0 sudo[82266]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:19 compute-0 sudo[82266]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:20 compute-0 sudo[82292]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:56:20 compute-0 sudo[82292]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:20 compute-0 sudo[82292]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:20 compute-0 sudo[82317]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid e399cf45-e6b6-5393-99f1-75c601d3f188
Jan 20 13:56:20 compute-0 sudo[82317]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:20 compute-0 podman[82358]: 2026-01-20 13:56:20.351355324 +0000 UTC m=+0.041819964 container create a9abda5c1e1eaca2db912a79792c1afb4e4c43abd5aecd35143db552a34e6998 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_heyrovsky, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 20 13:56:20 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 13:56:20 compute-0 systemd[1]: Started libpod-conmon-a9abda5c1e1eaca2db912a79792c1afb4e4c43abd5aecd35143db552a34e6998.scope.
Jan 20 13:56:20 compute-0 systemd[1]: Started libcrun container.
Jan 20 13:56:20 compute-0 podman[82358]: 2026-01-20 13:56:20.333684155 +0000 UTC m=+0.024148815 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 13:56:20 compute-0 podman[82358]: 2026-01-20 13:56:20.433686744 +0000 UTC m=+0.124151404 container init a9abda5c1e1eaca2db912a79792c1afb4e4c43abd5aecd35143db552a34e6998 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_heyrovsky, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 20 13:56:20 compute-0 podman[82358]: 2026-01-20 13:56:20.438359911 +0000 UTC m=+0.128824571 container start a9abda5c1e1eaca2db912a79792c1afb4e4c43abd5aecd35143db552a34e6998 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_heyrovsky, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 13:56:20 compute-0 podman[82358]: 2026-01-20 13:56:20.441988349 +0000 UTC m=+0.132452999 container attach a9abda5c1e1eaca2db912a79792c1afb4e4c43abd5aecd35143db552a34e6998 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_heyrovsky, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 13:56:20 compute-0 adoring_heyrovsky[82374]: 167 167
Jan 20 13:56:20 compute-0 systemd[1]: libpod-a9abda5c1e1eaca2db912a79792c1afb4e4c43abd5aecd35143db552a34e6998.scope: Deactivated successfully.
Jan 20 13:56:20 compute-0 podman[82358]: 2026-01-20 13:56:20.443359756 +0000 UTC m=+0.133824426 container died a9abda5c1e1eaca2db912a79792c1afb4e4c43abd5aecd35143db552a34e6998 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_heyrovsky, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 20 13:56:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-535f58eee6b7022acf4364524e202a5a119ca551c07702cfc4e92a988e6f2d4e-merged.mount: Deactivated successfully.
Jan 20 13:56:20 compute-0 podman[82358]: 2026-01-20 13:56:20.479140315 +0000 UTC m=+0.169604985 container remove a9abda5c1e1eaca2db912a79792c1afb4e4c43abd5aecd35143db552a34e6998 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_heyrovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 20 13:56:20 compute-0 systemd[1]: libpod-conmon-a9abda5c1e1eaca2db912a79792c1afb4e4c43abd5aecd35143db552a34e6998.scope: Deactivated successfully.
Jan 20 13:56:20 compute-0 sudo[82317]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:20 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 13:56:20 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:56:20 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 13:56:20 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:56:20 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 13:56:20 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 13:56:20 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 20 13:56:20 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 13:56:20 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 20 13:56:20 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:56:20 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev a9cd01e1-2d51-40e3-b466-6909207f9f26 does not exist
Jan 20 13:56:20 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev d3e7bced-19fc-4af0-8f47-b06f004da261 does not exist
Jan 20 13:56:20 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 89679103-5bb4-4c02-85b6-ba86045729b3 does not exist
Jan 20 13:56:20 compute-0 sudo[82393]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:56:20 compute-0 sudo[82393]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:20 compute-0 sudo[82393]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:20 compute-0 sudo[82418]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 13:56:20 compute-0 sudo[82418]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:20 compute-0 sudo[82418]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:20 compute-0 ceph-mon[74360]: Reconfiguring mon.compute-0 (unknown last config time)...
Jan 20 13:56:20 compute-0 ceph-mon[74360]: Reconfiguring daemon mon.compute-0 on compute-0
Jan 20 13:56:20 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:56:20 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:56:20 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.wookjv", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 20 13:56:20 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 20 13:56:20 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 13:56:20 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/298940417' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Jan 20 13:56:20 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:56:20 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:56:20 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 13:56:20 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 13:56:20 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:56:20 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e2 do_prune osdmap full prune enabled
Jan 20 13:56:20 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e2 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 20 13:56:20 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/298940417' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Jan 20 13:56:20 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e3 e3: 0 total, 0 up, 0 in
Jan 20 13:56:20 compute-0 pedantic_curran[82167]: set require_min_compat_client to mimic
Jan 20 13:56:20 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e3: 0 total, 0 up, 0 in
Jan 20 13:56:20 compute-0 systemd[1]: libpod-e8a3788ae807f0c2b8198cc586fc98215088cffb93fe17bfda48ddfce60f68c1.scope: Deactivated successfully.
Jan 20 13:56:20 compute-0 podman[82125]: 2026-01-20 13:56:20.86844438 +0000 UTC m=+1.594825282 container died e8a3788ae807f0c2b8198cc586fc98215088cffb93fe17bfda48ddfce60f68c1 (image=quay.io/ceph/ceph:v18, name=pedantic_curran, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True)
Jan 20 13:56:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-f7b81cea0326ad94f85be9417a25a2458ac1cdb44ebf2ee925ad078b06f62bb7-merged.mount: Deactivated successfully.
Jan 20 13:56:20 compute-0 podman[82125]: 2026-01-20 13:56:20.90445647 +0000 UTC m=+1.630837372 container remove e8a3788ae807f0c2b8198cc586fc98215088cffb93fe17bfda48ddfce60f68c1 (image=quay.io/ceph/ceph:v18, name=pedantic_curran, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 20 13:56:20 compute-0 systemd[1]: libpod-conmon-e8a3788ae807f0c2b8198cc586fc98215088cffb93fe17bfda48ddfce60f68c1.scope: Deactivated successfully.
Jan 20 13:56:20 compute-0 sudo[82049]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:21 compute-0 sudo[82480]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmxihhaylfrdkirgqyqvphfuftwnhrmc ; /usr/bin/python3'
Jan 20 13:56:21 compute-0 sudo[82480]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:56:21 compute-0 python3[82482]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 13:56:21 compute-0 podman[82483]: 2026-01-20 13:56:21.635067314 +0000 UTC m=+0.059857006 container create 5a3233d586a5f06a72f6261635dce37ac80a8ded019ae099a2f87f7a30f23147 (image=quay.io/ceph/ceph:v18, name=boring_dewdney, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 13:56:21 compute-0 systemd[1]: Started libpod-conmon-5a3233d586a5f06a72f6261635dce37ac80a8ded019ae099a2f87f7a30f23147.scope.
Jan 20 13:56:21 compute-0 podman[82483]: 2026-01-20 13:56:21.605062099 +0000 UTC m=+0.029851841 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 20 13:56:21 compute-0 systemd[1]: Started libcrun container.
Jan 20 13:56:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97b0948230afba7e08dfd3a9a8370538675bd49f1fa70d1e9a58446fdedf1244/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 13:56:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97b0948230afba7e08dfd3a9a8370538675bd49f1fa70d1e9a58446fdedf1244/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 13:56:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97b0948230afba7e08dfd3a9a8370538675bd49f1fa70d1e9a58446fdedf1244/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 20 13:56:21 compute-0 podman[82483]: 2026-01-20 13:56:21.711283161 +0000 UTC m=+0.136072843 container init 5a3233d586a5f06a72f6261635dce37ac80a8ded019ae099a2f87f7a30f23147 (image=quay.io/ceph/ceph:v18, name=boring_dewdney, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 20 13:56:21 compute-0 podman[82483]: 2026-01-20 13:56:21.716230889 +0000 UTC m=+0.141020551 container start 5a3233d586a5f06a72f6261635dce37ac80a8ded019ae099a2f87f7a30f23147 (image=quay.io/ceph/ceph:v18, name=boring_dewdney, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 13:56:21 compute-0 podman[82483]: 2026-01-20 13:56:21.719192925 +0000 UTC m=+0.143982617 container attach 5a3233d586a5f06a72f6261635dce37ac80a8ded019ae099a2f87f7a30f23147 (image=quay.io/ceph/ceph:v18, name=boring_dewdney, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 13:56:21 compute-0 ceph-mon[74360]: Reconfiguring mgr.compute-0.wookjv (unknown last config time)...
Jan 20 13:56:21 compute-0 ceph-mon[74360]: Reconfiguring daemon mgr.compute-0.wookjv on compute-0
Jan 20 13:56:21 compute-0 ceph-mon[74360]: pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 13:56:21 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/298940417' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Jan 20 13:56:21 compute-0 ceph-mon[74360]: osdmap e3: 0 total, 0 up, 0 in
Jan 20 13:56:22 compute-0 ceph-mgr[74653]: log_channel(audit) log [DBG] : from='client.14184 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 13:56:22 compute-0 sudo[82522]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:56:22 compute-0 sudo[82522]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:22 compute-0 sudo[82522]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:22 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 13:56:22 compute-0 ceph-mgr[74653]: [progress INFO root] Writing back 1 completed events
Jan 20 13:56:22 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Jan 20 13:56:22 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:56:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 13:56:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 13:56:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 13:56:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 13:56:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 13:56:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 13:56:22 compute-0 sudo[82547]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 13:56:22 compute-0 sudo[82547]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:22 compute-0 sudo[82547]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:22 compute-0 sudo[82572]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:56:22 compute-0 sudo[82572]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:22 compute-0 sudo[82572]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:22 compute-0 sudo[82597]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host --expect-hostname compute-0
Jan 20 13:56:22 compute-0 sudo[82597]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:22 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 13:56:22 compute-0 sudo[82597]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:22 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Jan 20 13:56:22 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:56:22 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Jan 20 13:56:22 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:56:22 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Jan 20 13:56:22 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:56:22 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Jan 20 13:56:22 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:56:22 compute-0 ceph-mgr[74653]: [cephadm INFO root] Added host compute-0
Jan 20 13:56:22 compute-0 ceph-mgr[74653]: log_channel(cephadm) log [INF] : Added host compute-0
Jan 20 13:56:22 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 13:56:22 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 13:56:22 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 20 13:56:22 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 13:56:22 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 20 13:56:22 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:56:22 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 6c70a3ad-1787-454d-b69a-1c5567a9c95c does not exist
Jan 20 13:56:22 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 5845f74d-3d26-47d5-9573-66cbcfd12fdb does not exist
Jan 20 13:56:22 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev a1f68301-2876-4b53-b8f3-11aa05e2a962 does not exist
Jan 20 13:56:23 compute-0 sudo[82642]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:56:23 compute-0 sudo[82642]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:23 compute-0 sudo[82642]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:23 compute-0 sudo[82667]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 13:56:23 compute-0 sudo[82667]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:56:23 compute-0 sudo[82667]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:23 compute-0 ceph-mon[74360]: from='client.14184 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 13:56:23 compute-0 ceph-mon[74360]: pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 13:56:23 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:56:23 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:56:23 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:56:23 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:56:23 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:56:23 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 13:56:23 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 13:56:23 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:56:24 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 13:56:24 compute-0 ceph-mon[74360]: Added host compute-0
Jan 20 13:56:24 compute-0 ceph-mgr[74653]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-1
Jan 20 13:56:24 compute-0 ceph-mgr[74653]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-1
Jan 20 13:56:25 compute-0 ceph-mon[74360]: pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 13:56:25 compute-0 ceph-mon[74360]: Deploying cephadm binary to compute-1
Jan 20 13:56:26 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 13:56:27 compute-0 ceph-mon[74360]: pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 13:56:27 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 13:56:28 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 13:56:28 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Jan 20 13:56:28 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:56:28 compute-0 ceph-mgr[74653]: [cephadm INFO root] Added host compute-1
Jan 20 13:56:28 compute-0 ceph-mgr[74653]: log_channel(cephadm) log [INF] : Added host compute-1
Jan 20 13:56:29 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 20 13:56:29 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:56:29 compute-0 ceph-mon[74360]: pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 13:56:29 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:56:29 compute-0 ceph-mon[74360]: Added host compute-1
Jan 20 13:56:29 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:56:30 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 20 13:56:30 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:56:30 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 13:56:30 compute-0 ceph-mgr[74653]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-2
Jan 20 13:56:30 compute-0 ceph-mgr[74653]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-2
Jan 20 13:56:31 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:56:31 compute-0 ceph-mon[74360]: pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 13:56:31 compute-0 ceph-mon[74360]: Deploying cephadm binary to compute-2
Jan 20 13:56:32 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 20 13:56:32 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:56:32 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 13:56:32 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 13:56:33 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:56:33 compute-0 ceph-mon[74360]: pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 13:56:34 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 13:56:35 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Jan 20 13:56:35 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:56:35 compute-0 ceph-mgr[74653]: [cephadm INFO root] Added host compute-2
Jan 20 13:56:35 compute-0 ceph-mgr[74653]: log_channel(cephadm) log [INF] : Added host compute-2
Jan 20 13:56:35 compute-0 ceph-mgr[74653]: [cephadm INFO root] Saving service mon spec with placement compute-0;compute-1;compute-2
Jan 20 13:56:35 compute-0 ceph-mgr[74653]: log_channel(cephadm) log [INF] : Saving service mon spec with placement compute-0;compute-1;compute-2
Jan 20 13:56:35 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Jan 20 13:56:35 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:56:35 compute-0 ceph-mgr[74653]: [cephadm INFO root] Saving service mgr spec with placement compute-0;compute-1;compute-2
Jan 20 13:56:35 compute-0 ceph-mgr[74653]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement compute-0;compute-1;compute-2
Jan 20 13:56:35 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Jan 20 13:56:35 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:56:35 compute-0 ceph-mgr[74653]: [cephadm INFO root] Marking host: compute-0 for OSDSpec preview refresh.
Jan 20 13:56:35 compute-0 ceph-mgr[74653]: log_channel(cephadm) log [INF] : Marking host: compute-0 for OSDSpec preview refresh.
Jan 20 13:56:35 compute-0 ceph-mgr[74653]: [cephadm INFO root] Marking host: compute-1 for OSDSpec preview refresh.
Jan 20 13:56:35 compute-0 ceph-mgr[74653]: log_channel(cephadm) log [INF] : Marking host: compute-1 for OSDSpec preview refresh.
Jan 20 13:56:35 compute-0 ceph-mgr[74653]: [cephadm INFO root] Saving service osd.default_drive_group spec with placement compute-0;compute-1;compute-2
Jan 20 13:56:35 compute-0 ceph-mgr[74653]: log_channel(cephadm) log [INF] : Saving service osd.default_drive_group spec with placement compute-0;compute-1;compute-2
Jan 20 13:56:35 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.osd.default_drive_group}] v 0) v1
Jan 20 13:56:35 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:56:35 compute-0 boring_dewdney[82498]: Added host 'compute-0' with addr '192.168.122.100'
Jan 20 13:56:35 compute-0 boring_dewdney[82498]: Added host 'compute-1' with addr '192.168.122.101'
Jan 20 13:56:35 compute-0 boring_dewdney[82498]: Added host 'compute-2' with addr '192.168.122.102'
Jan 20 13:56:35 compute-0 boring_dewdney[82498]: Scheduled mon update...
Jan 20 13:56:35 compute-0 boring_dewdney[82498]: Scheduled mgr update...
Jan 20 13:56:35 compute-0 boring_dewdney[82498]: Scheduled osd.default_drive_group update...
Jan 20 13:56:35 compute-0 ceph-mon[74360]: pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 13:56:35 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:56:35 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:56:35 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:56:35 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:56:35 compute-0 systemd[1]: libpod-5a3233d586a5f06a72f6261635dce37ac80a8ded019ae099a2f87f7a30f23147.scope: Deactivated successfully.
Jan 20 13:56:35 compute-0 podman[82483]: 2026-01-20 13:56:35.456324188 +0000 UTC m=+13.881113870 container died 5a3233d586a5f06a72f6261635dce37ac80a8ded019ae099a2f87f7a30f23147 (image=quay.io/ceph/ceph:v18, name=boring_dewdney, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 13:56:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-97b0948230afba7e08dfd3a9a8370538675bd49f1fa70d1e9a58446fdedf1244-merged.mount: Deactivated successfully.
Jan 20 13:56:35 compute-0 podman[82483]: 2026-01-20 13:56:35.524203121 +0000 UTC m=+13.948992793 container remove 5a3233d586a5f06a72f6261635dce37ac80a8ded019ae099a2f87f7a30f23147 (image=quay.io/ceph/ceph:v18, name=boring_dewdney, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 13:56:35 compute-0 systemd[1]: libpod-conmon-5a3233d586a5f06a72f6261635dce37ac80a8ded019ae099a2f87f7a30f23147.scope: Deactivated successfully.
Jan 20 13:56:35 compute-0 sudo[82480]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:35 compute-0 sudo[82726]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ubrdvhyrcqixpfwyxzardmjgfezvtcti ; /usr/bin/python3'
Jan 20 13:56:35 compute-0 sudo[82726]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:56:36 compute-0 python3[82728]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 13:56:36 compute-0 podman[82730]: 2026-01-20 13:56:36.095461703 +0000 UTC m=+0.044260193 container create 8658a14a269bb320adc185dad574b02fa811f6922bcfa7a867292931d491b661 (image=quay.io/ceph/ceph:v18, name=gracious_joliot, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 20 13:56:36 compute-0 systemd[1]: Started libpod-conmon-8658a14a269bb320adc185dad574b02fa811f6922bcfa7a867292931d491b661.scope.
Jan 20 13:56:36 compute-0 systemd[1]: Started libcrun container.
Jan 20 13:56:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/480f664ff64a4fed0b39eebc4c241a3c6d26ddcb092968b5a471e09f743fc5da/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 20 13:56:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/480f664ff64a4fed0b39eebc4c241a3c6d26ddcb092968b5a471e09f743fc5da/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 13:56:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/480f664ff64a4fed0b39eebc4c241a3c6d26ddcb092968b5a471e09f743fc5da/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 13:56:36 compute-0 podman[82730]: 2026-01-20 13:56:36.077000456 +0000 UTC m=+0.025798936 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 20 13:56:36 compute-0 podman[82730]: 2026-01-20 13:56:36.176886414 +0000 UTC m=+0.125684924 container init 8658a14a269bb320adc185dad574b02fa811f6922bcfa7a867292931d491b661 (image=quay.io/ceph/ceph:v18, name=gracious_joliot, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 20 13:56:36 compute-0 podman[82730]: 2026-01-20 13:56:36.18368238 +0000 UTC m=+0.132480840 container start 8658a14a269bb320adc185dad574b02fa811f6922bcfa7a867292931d491b661 (image=quay.io/ceph/ceph:v18, name=gracious_joliot, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 20 13:56:36 compute-0 podman[82730]: 2026-01-20 13:56:36.186497792 +0000 UTC m=+0.135296252 container attach 8658a14a269bb320adc185dad574b02fa811f6922bcfa7a867292931d491b661 (image=quay.io/ceph/ceph:v18, name=gracious_joliot, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 13:56:36 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 13:56:36 compute-0 ceph-mon[74360]: Added host compute-2
Jan 20 13:56:36 compute-0 ceph-mon[74360]: Saving service mon spec with placement compute-0;compute-1;compute-2
Jan 20 13:56:36 compute-0 ceph-mon[74360]: Saving service mgr spec with placement compute-0;compute-1;compute-2
Jan 20 13:56:36 compute-0 ceph-mon[74360]: Marking host: compute-0 for OSDSpec preview refresh.
Jan 20 13:56:36 compute-0 ceph-mon[74360]: Marking host: compute-1 for OSDSpec preview refresh.
Jan 20 13:56:36 compute-0 ceph-mon[74360]: Saving service osd.default_drive_group spec with placement compute-0;compute-1;compute-2
Jan 20 13:56:36 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Jan 20 13:56:36 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3815564575' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Jan 20 13:56:36 compute-0 gracious_joliot[82746]: 
Jan 20 13:56:36 compute-0 gracious_joliot[82746]: {"fsid":"e399cf45-e6b6-5393-99f1-75c601d3f188","health":{"status":"HEALTH_WARN","checks":{"TOO_FEW_OSDS":{"severity":"HEALTH_WARN","summary":{"message":"OSD count 0 < osd_pool_default_size 1","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":88,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":3,"num_osds":0,"num_up_osds":0,"osd_up_since":0,"num_in_osds":0,"osd_in_since":0,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":1,"modified":"2026-01-20T13:55:05.359486+0000","services":{}},"progress_events":{}}
Jan 20 13:56:36 compute-0 systemd[1]: libpod-8658a14a269bb320adc185dad574b02fa811f6922bcfa7a867292931d491b661.scope: Deactivated successfully.
Jan 20 13:56:36 compute-0 podman[82771]: 2026-01-20 13:56:36.889345981 +0000 UTC m=+0.043817973 container died 8658a14a269bb320adc185dad574b02fa811f6922bcfa7a867292931d491b661 (image=quay.io/ceph/ceph:v18, name=gracious_joliot, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Jan 20 13:56:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-480f664ff64a4fed0b39eebc4c241a3c6d26ddcb092968b5a471e09f743fc5da-merged.mount: Deactivated successfully.
Jan 20 13:56:36 compute-0 podman[82771]: 2026-01-20 13:56:36.924683512 +0000 UTC m=+0.079155484 container remove 8658a14a269bb320adc185dad574b02fa811f6922bcfa7a867292931d491b661 (image=quay.io/ceph/ceph:v18, name=gracious_joliot, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 20 13:56:36 compute-0 systemd[1]: libpod-conmon-8658a14a269bb320adc185dad574b02fa811f6922bcfa7a867292931d491b661.scope: Deactivated successfully.
Jan 20 13:56:36 compute-0 sudo[82726]: pam_unix(sudo:session): session closed for user root
Jan 20 13:56:37 compute-0 ceph-mon[74360]: pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 13:56:37 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3815564575' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Jan 20 13:56:37 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 13:56:38 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 13:56:39 compute-0 ceph-mon[74360]: pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 13:56:40 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 13:56:41 compute-0 ceph-mon[74360]: pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 13:56:42 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 13:56:42 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 13:56:43 compute-0 ceph-mon[74360]: pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 13:56:44 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 13:56:45 compute-0 ceph-mon[74360]: pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 13:56:46 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 13:56:47 compute-0 ceph-mon[74360]: pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 13:56:47 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 13:56:48 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 13:56:49 compute-0 ceph-mon[74360]: pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 13:56:50 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 13:56:51 compute-0 ceph-mon[74360]: pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 13:56:52 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 13:56:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Optimize plan auto_2026-01-20_13:56:52
Jan 20 13:56:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 13:56:52 compute-0 ceph-mgr[74653]: [balancer INFO root] do_upmap
Jan 20 13:56:52 compute-0 ceph-mgr[74653]: [balancer INFO root] No pools available
Jan 20 13:56:52 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 13:56:52 compute-0 ceph-mgr[74653]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 13:56:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 13:56:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 13:56:52 compute-0 ceph-mgr[74653]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 13:56:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 13:56:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 13:56:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 13:56:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 13:56:52 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 13:56:53 compute-0 ceph-mon[74360]: pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 13:56:54 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 13:56:54 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 20 13:56:54 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:56:54 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 20 13:56:54 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:56:54 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 20 13:56:54 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:56:54 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 20 13:56:54 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:56:54 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Jan 20 13:56:54 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 20 13:56:54 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 13:56:54 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 13:56:54 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 20 13:56:54 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 13:56:54 compute-0 ceph-mgr[74653]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Jan 20 13:56:54 compute-0 ceph-mgr[74653]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Jan 20 13:56:55 compute-0 ceph-mon[74360]: pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 13:56:55 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:56:55 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:56:55 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:56:55 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:56:55 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 20 13:56:55 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 13:56:55 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 13:56:55 compute-0 ceph-mon[74360]: Updating compute-1:/etc/ceph/ceph.conf
Jan 20 13:56:55 compute-0 ceph-mgr[74653]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/config/ceph.conf
Jan 20 13:56:55 compute-0 ceph-mgr[74653]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/config/ceph.conf
Jan 20 13:56:56 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 13:56:57 compute-0 ceph-mgr[74653]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Jan 20 13:56:57 compute-0 ceph-mgr[74653]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Jan 20 13:56:57 compute-0 ceph-mon[74360]: Updating compute-1:/var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/config/ceph.conf
Jan 20 13:56:57 compute-0 ceph-mon[74360]: pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 13:56:57 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 13:56:58 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 13:56:58 compute-0 ceph-mgr[74653]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/config/ceph.client.admin.keyring
Jan 20 13:56:58 compute-0 ceph-mgr[74653]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/config/ceph.client.admin.keyring
Jan 20 13:56:58 compute-0 ceph-mon[74360]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Jan 20 13:56:58 compute-0 sshd-session[82785]: Connection closed by authenticating user root 157.245.78.139 port 44242 [preauth]
Jan 20 13:56:59 compute-0 ceph-mon[74360]: pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 13:56:59 compute-0 ceph-mon[74360]: Updating compute-1:/var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/config/ceph.client.admin.keyring
Jan 20 13:56:59 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 20 13:56:59 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:56:59 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 20 13:56:59 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:56:59 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 20 13:56:59 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:56:59 compute-0 ceph-mgr[74653]: [cephadm ERROR cephadm.serve] Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon
                                           service_name: mon
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           ''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Jan 20 13:56:59 compute-0 ceph-mgr[74653]: log_channel(cephadm) log [ERR] : Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon
                                           service_name: mon
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           ''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Jan 20 13:56:59 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 13:56:59 compute-0 ceph-mgr[74653]: [cephadm ERROR cephadm.serve] Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr
                                           service_name: mgr
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           ''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Jan 20 13:56:59 compute-0 ceph-mgr[74653]: log_channel(cephadm) log [ERR] : Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr
                                           service_name: mgr
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           ''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Jan 20 13:56:59 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv[74649]: 2026-01-20T13:56:59.977+0000 7f45543a6640 -1 log_channel(cephadm) log [ERR] : Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon
Jan 20 13:56:59 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 13:56:59 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv[74649]: service_name: mon
Jan 20 13:56:59 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv[74649]: placement:
Jan 20 13:56:59 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv[74649]:   hosts:
Jan 20 13:56:59 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv[74649]:   - compute-0
Jan 20 13:56:59 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv[74649]:   - compute-1
Jan 20 13:56:59 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv[74649]:   - compute-2
Jan 20 13:56:59 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv[74649]: ''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Jan 20 13:56:59 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv[74649]: 2026-01-20T13:56:59.980+0000 7f45543a6640 -1 log_channel(cephadm) log [ERR] : Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr
Jan 20 13:56:59 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv[74649]: service_name: mgr
Jan 20 13:56:59 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv[74649]: placement:
Jan 20 13:56:59 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv[74649]:   hosts:
Jan 20 13:56:59 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv[74649]:   - compute-0
Jan 20 13:56:59 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv[74649]:   - compute-1
Jan 20 13:56:59 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv[74649]:   - compute-2
Jan 20 13:56:59 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv[74649]: ''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Jan 20 13:56:59 compute-0 ceph-mgr[74653]: [progress INFO root] update: starting ev 22252ba7-d5c8-4bd4-a3e9-4a88977e01f0 (Updating crash deployment (+1 -> 2))
Jan 20 13:56:59 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) v1
Jan 20 13:56:59 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 20 13:56:59 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Jan 20 13:56:59 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 13:56:59 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 13:56:59 compute-0 ceph-mgr[74653]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-1 on compute-1
Jan 20 13:56:59 compute-0 ceph-mgr[74653]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-1 on compute-1
Jan 20 13:57:00 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:57:00 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:57:00 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:57:00 compute-0 ceph-mon[74360]: Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon
                                           service_name: mon
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           ''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Jan 20 13:57:00 compute-0 ceph-mon[74360]: pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 13:57:00 compute-0 ceph-mon[74360]: Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr
                                           service_name: mgr
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           ''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Jan 20 13:57:00 compute-0 ceph-mon[74360]: pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 13:57:00 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 20 13:57:00 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Jan 20 13:57:00 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 13:57:00 compute-0 ceph-mon[74360]: Deploying daemon crash.compute-1 on compute-1
Jan 20 13:57:00 compute-0 ceph-mon[74360]: log_channel(cluster) log [WRN] : Health check failed: Failed to apply 2 service(s): mon,mgr (CEPHADM_APPLY_SPEC_FAIL)
Jan 20 13:57:01 compute-0 ceph-mon[74360]: Health check failed: Failed to apply 2 service(s): mon,mgr (CEPHADM_APPLY_SPEC_FAIL)
Jan 20 13:57:01 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 13:57:02 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 13:57:02 compute-0 ceph-mon[74360]: pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 13:57:03 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 20 13:57:03 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:57:03 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 20 13:57:03 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:57:03 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Jan 20 13:57:03 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:57:03 compute-0 ceph-mgr[74653]: [progress INFO root] complete: finished ev 22252ba7-d5c8-4bd4-a3e9-4a88977e01f0 (Updating crash deployment (+1 -> 2))
Jan 20 13:57:03 compute-0 ceph-mgr[74653]: [progress INFO root] Completed event 22252ba7-d5c8-4bd4-a3e9-4a88977e01f0 (Updating crash deployment (+1 -> 2)) in 3 seconds
Jan 20 13:57:03 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Jan 20 13:57:03 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:57:03 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 20 13:57:03 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 13:57:03 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 20 13:57:03 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 13:57:03 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 13:57:03 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 13:57:03 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 20 13:57:03 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 13:57:03 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 13:57:03 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 13:57:03 compute-0 sudo[82787]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:57:03 compute-0 sudo[82787]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:57:03 compute-0 sudo[82787]: pam_unix(sudo:session): session closed for user root
Jan 20 13:57:03 compute-0 sudo[82812]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 13:57:03 compute-0 sudo[82812]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:57:03 compute-0 sudo[82812]: pam_unix(sudo:session): session closed for user root
Jan 20 13:57:03 compute-0 sudo[82837]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:57:03 compute-0 sudo[82837]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:57:03 compute-0 sudo[82837]: pam_unix(sudo:session): session closed for user root
Jan 20 13:57:03 compute-0 sudo[82862]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 13:57:03 compute-0 sudo[82862]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:57:03 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 13:57:03 compute-0 podman[82926]: 2026-01-20 13:57:03.99525289 +0000 UTC m=+0.064894346 container create 953c0aa3f6b553610cebb5854297cde957a21b2bb684f9dea83f1c5213add30d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_keldysh, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2)
Jan 20 13:57:04 compute-0 systemd[1]: Started libpod-conmon-953c0aa3f6b553610cebb5854297cde957a21b2bb684f9dea83f1c5213add30d.scope.
Jan 20 13:57:04 compute-0 systemd[1]: Started libcrun container.
Jan 20 13:57:04 compute-0 podman[82926]: 2026-01-20 13:57:03.968936331 +0000 UTC m=+0.038577836 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 13:57:04 compute-0 podman[82926]: 2026-01-20 13:57:04.070323108 +0000 UTC m=+0.139964533 container init 953c0aa3f6b553610cebb5854297cde957a21b2bb684f9dea83f1c5213add30d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_keldysh, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 20 13:57:04 compute-0 podman[82926]: 2026-01-20 13:57:04.075683586 +0000 UTC m=+0.145325001 container start 953c0aa3f6b553610cebb5854297cde957a21b2bb684f9dea83f1c5213add30d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_keldysh, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 13:57:04 compute-0 podman[82926]: 2026-01-20 13:57:04.078112889 +0000 UTC m=+0.147754384 container attach 953c0aa3f6b553610cebb5854297cde957a21b2bb684f9dea83f1c5213add30d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_keldysh, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 13:57:04 compute-0 wizardly_keldysh[82943]: 167 167
Jan 20 13:57:04 compute-0 systemd[1]: libpod-953c0aa3f6b553610cebb5854297cde957a21b2bb684f9dea83f1c5213add30d.scope: Deactivated successfully.
Jan 20 13:57:04 compute-0 podman[82926]: 2026-01-20 13:57:04.085300664 +0000 UTC m=+0.154942109 container died 953c0aa3f6b553610cebb5854297cde957a21b2bb684f9dea83f1c5213add30d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_keldysh, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 13:57:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-4bbeb4f32828283f233cd750fd98f320a3d96740cb91aff99fbce04d5d9f9e42-merged.mount: Deactivated successfully.
Jan 20 13:57:04 compute-0 podman[82926]: 2026-01-20 13:57:04.132369089 +0000 UTC m=+0.202010544 container remove 953c0aa3f6b553610cebb5854297cde957a21b2bb684f9dea83f1c5213add30d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_keldysh, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 13:57:04 compute-0 systemd[1]: libpod-conmon-953c0aa3f6b553610cebb5854297cde957a21b2bb684f9dea83f1c5213add30d.scope: Deactivated successfully.
Jan 20 13:57:04 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:57:04 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:57:04 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:57:04 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:57:04 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 13:57:04 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 13:57:04 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 13:57:04 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 13:57:04 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 13:57:04 compute-0 podman[82967]: 2026-01-20 13:57:04.320593696 +0000 UTC m=+0.055534373 container create 04805eb0fe96c6d4d277e0b690df107c96c8cc7b83bd7e87127b7d42511c82e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_pascal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 20 13:57:04 compute-0 systemd[1]: Started libpod-conmon-04805eb0fe96c6d4d277e0b690df107c96c8cc7b83bd7e87127b7d42511c82e7.scope.
Jan 20 13:57:04 compute-0 podman[82967]: 2026-01-20 13:57:04.301554195 +0000 UTC m=+0.036494862 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 13:57:04 compute-0 systemd[1]: Started libcrun container.
Jan 20 13:57:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8c4278ce8c748f0206ab9e520fece094c654d8e5c92dba087a9b16d2e331ab6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 13:57:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8c4278ce8c748f0206ab9e520fece094c654d8e5c92dba087a9b16d2e331ab6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 13:57:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8c4278ce8c748f0206ab9e520fece094c654d8e5c92dba087a9b16d2e331ab6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 13:57:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8c4278ce8c748f0206ab9e520fece094c654d8e5c92dba087a9b16d2e331ab6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 13:57:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8c4278ce8c748f0206ab9e520fece094c654d8e5c92dba087a9b16d2e331ab6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 13:57:04 compute-0 podman[82967]: 2026-01-20 13:57:04.427232788 +0000 UTC m=+0.162173515 container init 04805eb0fe96c6d4d277e0b690df107c96c8cc7b83bd7e87127b7d42511c82e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_pascal, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 13:57:04 compute-0 podman[82967]: 2026-01-20 13:57:04.439452814 +0000 UTC m=+0.174393481 container start 04805eb0fe96c6d4d277e0b690df107c96c8cc7b83bd7e87127b7d42511c82e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_pascal, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 13:57:04 compute-0 podman[82967]: 2026-01-20 13:57:04.443010786 +0000 UTC m=+0.177951463 container attach 04805eb0fe96c6d4d277e0b690df107c96c8cc7b83bd7e87127b7d42511c82e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_pascal, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 20 13:57:05 compute-0 magical_pascal[82982]: --> passed data devices: 0 physical, 1 LVM
Jan 20 13:57:05 compute-0 magical_pascal[82982]: --> relative data size: 1.0
Jan 20 13:57:05 compute-0 magical_pascal[82982]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 20 13:57:05 compute-0 ceph-mon[74360]: pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 13:57:05 compute-0 magical_pascal[82982]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new afc3740a-ccee-46f8-b343-22648cc89376
Jan 20 13:57:05 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "afc3740a-ccee-46f8-b343-22648cc89376"} v 0) v1
Jan 20 13:57:05 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1058863016' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "afc3740a-ccee-46f8-b343-22648cc89376"}]: dispatch
Jan 20 13:57:05 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e3 do_prune osdmap full prune enabled
Jan 20 13:57:05 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e3 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 20 13:57:05 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1058863016' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "afc3740a-ccee-46f8-b343-22648cc89376"}]': finished
Jan 20 13:57:05 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e4 e4: 1 total, 0 up, 1 in
Jan 20 13:57:05 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e4: 1 total, 0 up, 1 in
Jan 20 13:57:05 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Jan 20 13:57:05 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 20 13:57:05 compute-0 ceph-mgr[74653]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 20 13:57:05 compute-0 magical_pascal[82982]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 20 13:57:05 compute-0 magical_pascal[82982]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
Jan 20 13:57:05 compute-0 magical_pascal[82982]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Jan 20 13:57:05 compute-0 lvm[83030]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 20 13:57:05 compute-0 lvm[83030]: VG ceph_vg0 finished
Jan 20 13:57:05 compute-0 magical_pascal[82982]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 20 13:57:05 compute-0 magical_pascal[82982]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Jan 20 13:57:05 compute-0 magical_pascal[82982]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
Jan 20 13:57:05 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "562c52e7-0678-4614-81fd-9a9eecf7d0f9"} v 0) v1
Jan 20 13:57:05 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/2881784555' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "562c52e7-0678-4614-81fd-9a9eecf7d0f9"}]: dispatch
Jan 20 13:57:05 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e4 do_prune osdmap full prune enabled
Jan 20 13:57:05 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e4 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 20 13:57:05 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/2881784555' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "562c52e7-0678-4614-81fd-9a9eecf7d0f9"}]': finished
Jan 20 13:57:05 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e5 e5: 2 total, 0 up, 2 in
Jan 20 13:57:05 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e5: 2 total, 0 up, 2 in
Jan 20 13:57:05 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Jan 20 13:57:05 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 20 13:57:05 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Jan 20 13:57:05 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 20 13:57:05 compute-0 ceph-mgr[74653]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 20 13:57:05 compute-0 ceph-mgr[74653]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 20 13:57:05 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v34: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 13:57:06 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/1058863016' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "afc3740a-ccee-46f8-b343-22648cc89376"}]: dispatch
Jan 20 13:57:06 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/1058863016' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "afc3740a-ccee-46f8-b343-22648cc89376"}]': finished
Jan 20 13:57:06 compute-0 ceph-mon[74360]: osdmap e4: 1 total, 0 up, 1 in
Jan 20 13:57:06 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 20 13:57:06 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/2881784555' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "562c52e7-0678-4614-81fd-9a9eecf7d0f9"}]: dispatch
Jan 20 13:57:06 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/2881784555' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "562c52e7-0678-4614-81fd-9a9eecf7d0f9"}]': finished
Jan 20 13:57:06 compute-0 ceph-mon[74360]: osdmap e5: 2 total, 0 up, 2 in
Jan 20 13:57:06 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 20 13:57:06 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 20 13:57:06 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Jan 20 13:57:06 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2918764992' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Jan 20 13:57:06 compute-0 magical_pascal[82982]:  stderr: got monmap epoch 1
Jan 20 13:57:06 compute-0 magical_pascal[82982]: --> Creating keyring file for osd.0
Jan 20 13:57:06 compute-0 magical_pascal[82982]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
Jan 20 13:57:06 compute-0 magical_pascal[82982]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
Jan 20 13:57:06 compute-0 magical_pascal[82982]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid afc3740a-ccee-46f8-b343-22648cc89376 --setuser ceph --setgroup ceph
Jan 20 13:57:06 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Jan 20 13:57:06 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3878784036' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Jan 20 13:57:06 compute-0 ceph-mon[74360]: log_channel(cluster) log [INF] : Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Jan 20 13:57:07 compute-0 sudo[83174]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fhthstrnjwniojnollnegwmjxxbuxrcw ; /usr/bin/python3'
Jan 20 13:57:07 compute-0 sudo[83174]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:57:07 compute-0 python3[83311]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 13:57:07 compute-0 ceph-mon[74360]: pgmap v34: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 13:57:07 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/2918764992' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Jan 20 13:57:07 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/3878784036' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Jan 20 13:57:07 compute-0 ceph-mon[74360]: Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Jan 20 13:57:07 compute-0 podman[83313]: 2026-01-20 13:57:07.356865354 +0000 UTC m=+0.064354762 container create 2bdc5129b329bf18bfafbafd5a67a9487edd37966836911ffd60b25ca587ba81 (image=quay.io/ceph/ceph:v18, name=gallant_herschel, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 20 13:57:07 compute-0 systemd[1]: Started libpod-conmon-2bdc5129b329bf18bfafbafd5a67a9487edd37966836911ffd60b25ca587ba81.scope.
Jan 20 13:57:07 compute-0 systemd[1]: Started libcrun container.
Jan 20 13:57:07 compute-0 podman[83313]: 2026-01-20 13:57:07.331099488 +0000 UTC m=+0.038588956 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 20 13:57:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9ff4f139aa5dea6f695104511263bfb1bcd853a594af313bcf711cf912a7074/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 20 13:57:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9ff4f139aa5dea6f695104511263bfb1bcd853a594af313bcf711cf912a7074/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 13:57:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9ff4f139aa5dea6f695104511263bfb1bcd853a594af313bcf711cf912a7074/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 13:57:07 compute-0 ceph-mgr[74653]: [progress INFO root] Writing back 2 completed events
Jan 20 13:57:07 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Jan 20 13:57:07 compute-0 podman[83313]: 2026-01-20 13:57:07.460625701 +0000 UTC m=+0.168115169 container init 2bdc5129b329bf18bfafbafd5a67a9487edd37966836911ffd60b25ca587ba81 (image=quay.io/ceph/ceph:v18, name=gallant_herschel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 20 13:57:07 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:57:07 compute-0 podman[83313]: 2026-01-20 13:57:07.473712989 +0000 UTC m=+0.181202397 container start 2bdc5129b329bf18bfafbafd5a67a9487edd37966836911ffd60b25ca587ba81 (image=quay.io/ceph/ceph:v18, name=gallant_herschel, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 13:57:07 compute-0 podman[83313]: 2026-01-20 13:57:07.477760323 +0000 UTC m=+0.185249781 container attach 2bdc5129b329bf18bfafbafd5a67a9487edd37966836911ffd60b25ca587ba81 (image=quay.io/ceph/ceph:v18, name=gallant_herschel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True)
Jan 20 13:57:07 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 13:57:07 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v35: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 13:57:08 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Jan 20 13:57:08 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/225268271' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Jan 20 13:57:08 compute-0 gallant_herschel[83512]: 
Jan 20 13:57:08 compute-0 gallant_herschel[83512]: {"fsid":"e399cf45-e6b6-5393-99f1-75c601d3f188","health":{"status":"HEALTH_WARN","checks":{"CEPHADM_APPLY_SPEC_FAIL":{"severity":"HEALTH_WARN","summary":{"message":"Failed to apply 2 service(s): mon,mgr","count":2},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":120,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":5,"num_osds":2,"num_up_osds":0,"osd_up_since":0,"num_in_osds":2,"osd_in_since":1768917425,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2026-01-20T13:56:54.382913+0000","services":{}},"progress_events":{}}
Jan 20 13:57:08 compute-0 systemd[1]: libpod-2bdc5129b329bf18bfafbafd5a67a9487edd37966836911ffd60b25ca587ba81.scope: Deactivated successfully.
Jan 20 13:57:08 compute-0 podman[83313]: 2026-01-20 13:57:08.063259743 +0000 UTC m=+0.770749141 container died 2bdc5129b329bf18bfafbafd5a67a9487edd37966836911ffd60b25ca587ba81 (image=quay.io/ceph/ceph:v18, name=gallant_herschel, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 20 13:57:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-e9ff4f139aa5dea6f695104511263bfb1bcd853a594af313bcf711cf912a7074-merged.mount: Deactivated successfully.
Jan 20 13:57:08 compute-0 podman[83313]: 2026-01-20 13:57:08.119176206 +0000 UTC m=+0.826665604 container remove 2bdc5129b329bf18bfafbafd5a67a9487edd37966836911ffd60b25ca587ba81 (image=quay.io/ceph/ceph:v18, name=gallant_herschel, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 13:57:08 compute-0 systemd[1]: libpod-conmon-2bdc5129b329bf18bfafbafd5a67a9487edd37966836911ffd60b25ca587ba81.scope: Deactivated successfully.
Jan 20 13:57:08 compute-0 sudo[83174]: pam_unix(sudo:session): session closed for user root
Jan 20 13:57:08 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:57:08 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/225268271' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Jan 20 13:57:08 compute-0 magical_pascal[82982]:  stderr: 2026-01-20T13:57:06.494+0000 7f1ac4ba8740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Jan 20 13:57:08 compute-0 magical_pascal[82982]:  stderr: 2026-01-20T13:57:06.494+0000 7f1ac4ba8740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Jan 20 13:57:08 compute-0 magical_pascal[82982]:  stderr: 2026-01-20T13:57:06.494+0000 7f1ac4ba8740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Jan 20 13:57:08 compute-0 magical_pascal[82982]:  stderr: 2026-01-20T13:57:06.495+0000 7f1ac4ba8740 -1 bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid
Jan 20 13:57:08 compute-0 magical_pascal[82982]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Jan 20 13:57:08 compute-0 magical_pascal[82982]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 20 13:57:08 compute-0 magical_pascal[82982]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Jan 20 13:57:08 compute-0 magical_pascal[82982]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Jan 20 13:57:08 compute-0 magical_pascal[82982]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Jan 20 13:57:08 compute-0 magical_pascal[82982]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 20 13:57:08 compute-0 magical_pascal[82982]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 20 13:57:08 compute-0 magical_pascal[82982]: --> ceph-volume lvm activate successful for osd ID: 0
Jan 20 13:57:08 compute-0 magical_pascal[82982]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
Jan 20 13:57:08 compute-0 systemd[1]: libpod-04805eb0fe96c6d4d277e0b690df107c96c8cc7b83bd7e87127b7d42511c82e7.scope: Deactivated successfully.
Jan 20 13:57:08 compute-0 systemd[1]: libpod-04805eb0fe96c6d4d277e0b690df107c96c8cc7b83bd7e87127b7d42511c82e7.scope: Consumed 2.660s CPU time.
Jan 20 13:57:08 compute-0 conmon[82982]: conmon 04805eb0fe96c6d4d277 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-04805eb0fe96c6d4d277e0b690df107c96c8cc7b83bd7e87127b7d42511c82e7.scope/container/memory.events
Jan 20 13:57:08 compute-0 podman[82967]: 2026-01-20 13:57:08.88846651 +0000 UTC m=+4.623407147 container died 04805eb0fe96c6d4d277e0b690df107c96c8cc7b83bd7e87127b7d42511c82e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_pascal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 13:57:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-a8c4278ce8c748f0206ab9e520fece094c654d8e5c92dba087a9b16d2e331ab6-merged.mount: Deactivated successfully.
Jan 20 13:57:08 compute-0 podman[82967]: 2026-01-20 13:57:08.979715004 +0000 UTC m=+4.714655641 container remove 04805eb0fe96c6d4d277e0b690df107c96c8cc7b83bd7e87127b7d42511c82e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_pascal, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 13:57:08 compute-0 systemd[1]: libpod-conmon-04805eb0fe96c6d4d277e0b690df107c96c8cc7b83bd7e87127b7d42511c82e7.scope: Deactivated successfully.
Jan 20 13:57:09 compute-0 sudo[82862]: pam_unix(sudo:session): session closed for user root
Jan 20 13:57:09 compute-0 sudo[84039]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:57:09 compute-0 sudo[84039]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:57:09 compute-0 sudo[84039]: pam_unix(sudo:session): session closed for user root
Jan 20 13:57:09 compute-0 sudo[84064]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 13:57:09 compute-0 sudo[84064]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:57:09 compute-0 sudo[84064]: pam_unix(sudo:session): session closed for user root
Jan 20 13:57:09 compute-0 sudo[84089]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:57:09 compute-0 sudo[84089]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:57:09 compute-0 sudo[84089]: pam_unix(sudo:session): session closed for user root
Jan 20 13:57:09 compute-0 sudo[84114]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- lvm list --format json
Jan 20 13:57:09 compute-0 sudo[84114]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:57:09 compute-0 ceph-mon[74360]: pgmap v35: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 13:57:09 compute-0 podman[84179]: 2026-01-20 13:57:09.547707133 +0000 UTC m=+0.042649242 container create 93806a1849494e62678840b48634344e6c0b028b9950486dc9228d02c7508e2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_cerf, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 20 13:57:09 compute-0 systemd[1]: Started libpod-conmon-93806a1849494e62678840b48634344e6c0b028b9950486dc9228d02c7508e2c.scope.
Jan 20 13:57:09 compute-0 systemd[1]: Started libcrun container.
Jan 20 13:57:09 compute-0 podman[84179]: 2026-01-20 13:57:09.528918467 +0000 UTC m=+0.023860596 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 13:57:09 compute-0 podman[84179]: 2026-01-20 13:57:09.628660721 +0000 UTC m=+0.123602880 container init 93806a1849494e62678840b48634344e6c0b028b9950486dc9228d02c7508e2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_cerf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 13:57:09 compute-0 podman[84179]: 2026-01-20 13:57:09.636627597 +0000 UTC m=+0.131569696 container start 93806a1849494e62678840b48634344e6c0b028b9950486dc9228d02c7508e2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_cerf, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 13:57:09 compute-0 podman[84179]: 2026-01-20 13:57:09.640442416 +0000 UTC m=+0.135384545 container attach 93806a1849494e62678840b48634344e6c0b028b9950486dc9228d02c7508e2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_cerf, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 13:57:09 compute-0 sad_cerf[84196]: 167 167
Jan 20 13:57:09 compute-0 systemd[1]: libpod-93806a1849494e62678840b48634344e6c0b028b9950486dc9228d02c7508e2c.scope: Deactivated successfully.
Jan 20 13:57:09 compute-0 podman[84179]: 2026-01-20 13:57:09.642331504 +0000 UTC m=+0.137273633 container died 93806a1849494e62678840b48634344e6c0b028b9950486dc9228d02c7508e2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_cerf, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 20 13:57:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-5792278e43f7e7884883189af419a7995c6d8407babeb820d458db36c01bcd03-merged.mount: Deactivated successfully.
Jan 20 13:57:09 compute-0 podman[84179]: 2026-01-20 13:57:09.676232969 +0000 UTC m=+0.171175058 container remove 93806a1849494e62678840b48634344e6c0b028b9950486dc9228d02c7508e2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_cerf, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 13:57:09 compute-0 systemd[1]: libpod-conmon-93806a1849494e62678840b48634344e6c0b028b9950486dc9228d02c7508e2c.scope: Deactivated successfully.
Jan 20 13:57:09 compute-0 podman[84219]: 2026-01-20 13:57:09.840587011 +0000 UTC m=+0.048705588 container create 9bc0363ab0e520edfa6289b4646c555e59bb03d9c51d1b940751d182c05fdff3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_curran, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 20 13:57:09 compute-0 systemd[1]: Started libpod-conmon-9bc0363ab0e520edfa6289b4646c555e59bb03d9c51d1b940751d182c05fdff3.scope.
Jan 20 13:57:09 compute-0 podman[84219]: 2026-01-20 13:57:09.816241652 +0000 UTC m=+0.024360279 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 13:57:09 compute-0 systemd[1]: Started libcrun container.
Jan 20 13:57:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08c5408fa18c521e188449bd19d518c6f078c9595814864f0c07c4060a1fadf9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 13:57:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08c5408fa18c521e188449bd19d518c6f078c9595814864f0c07c4060a1fadf9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 13:57:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08c5408fa18c521e188449bd19d518c6f078c9595814864f0c07c4060a1fadf9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 13:57:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08c5408fa18c521e188449bd19d518c6f078c9595814864f0c07c4060a1fadf9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 13:57:09 compute-0 podman[84219]: 2026-01-20 13:57:09.926255562 +0000 UTC m=+0.134374199 container init 9bc0363ab0e520edfa6289b4646c555e59bb03d9c51d1b940751d182c05fdff3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_curran, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 13:57:09 compute-0 podman[84219]: 2026-01-20 13:57:09.941909665 +0000 UTC m=+0.150028242 container start 9bc0363ab0e520edfa6289b4646c555e59bb03d9c51d1b940751d182c05fdff3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_curran, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 20 13:57:09 compute-0 podman[84219]: 2026-01-20 13:57:09.945419766 +0000 UTC m=+0.153538343 container attach 9bc0363ab0e520edfa6289b4646c555e59bb03d9c51d1b940751d182c05fdff3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_curran, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 20 13:57:09 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v36: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 13:57:10 compute-0 mystifying_curran[84235]: {
Jan 20 13:57:10 compute-0 mystifying_curran[84235]:     "0": [
Jan 20 13:57:10 compute-0 mystifying_curran[84235]:         {
Jan 20 13:57:10 compute-0 mystifying_curran[84235]:             "devices": [
Jan 20 13:57:10 compute-0 mystifying_curran[84235]:                 "/dev/loop3"
Jan 20 13:57:10 compute-0 mystifying_curran[84235]:             ],
Jan 20 13:57:10 compute-0 mystifying_curran[84235]:             "lv_name": "ceph_lv0",
Jan 20 13:57:10 compute-0 mystifying_curran[84235]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 13:57:10 compute-0 mystifying_curran[84235]:             "lv_size": "7511998464",
Jan 20 13:57:10 compute-0 mystifying_curran[84235]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e399cf45-e6b6-5393-99f1-75c601d3f188,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afc3740a-ccee-46f8-b343-22648cc89376,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 20 13:57:10 compute-0 mystifying_curran[84235]:             "lv_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 13:57:10 compute-0 mystifying_curran[84235]:             "name": "ceph_lv0",
Jan 20 13:57:10 compute-0 mystifying_curran[84235]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 13:57:10 compute-0 mystifying_curran[84235]:             "tags": {
Jan 20 13:57:10 compute-0 mystifying_curran[84235]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 13:57:10 compute-0 mystifying_curran[84235]:                 "ceph.block_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 13:57:10 compute-0 mystifying_curran[84235]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 13:57:10 compute-0 mystifying_curran[84235]:                 "ceph.cluster_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 13:57:10 compute-0 mystifying_curran[84235]:                 "ceph.cluster_name": "ceph",
Jan 20 13:57:10 compute-0 mystifying_curran[84235]:                 "ceph.crush_device_class": "",
Jan 20 13:57:10 compute-0 mystifying_curran[84235]:                 "ceph.encrypted": "0",
Jan 20 13:57:10 compute-0 mystifying_curran[84235]:                 "ceph.osd_fsid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 13:57:10 compute-0 mystifying_curran[84235]:                 "ceph.osd_id": "0",
Jan 20 13:57:10 compute-0 mystifying_curran[84235]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 13:57:10 compute-0 mystifying_curran[84235]:                 "ceph.type": "block",
Jan 20 13:57:10 compute-0 mystifying_curran[84235]:                 "ceph.vdo": "0"
Jan 20 13:57:10 compute-0 mystifying_curran[84235]:             },
Jan 20 13:57:10 compute-0 mystifying_curran[84235]:             "type": "block",
Jan 20 13:57:10 compute-0 mystifying_curran[84235]:             "vg_name": "ceph_vg0"
Jan 20 13:57:10 compute-0 mystifying_curran[84235]:         }
Jan 20 13:57:10 compute-0 mystifying_curran[84235]:     ]
Jan 20 13:57:10 compute-0 mystifying_curran[84235]: }
Jan 20 13:57:10 compute-0 systemd[1]: libpod-9bc0363ab0e520edfa6289b4646c555e59bb03d9c51d1b940751d182c05fdff3.scope: Deactivated successfully.
Jan 20 13:57:10 compute-0 podman[84219]: 2026-01-20 13:57:10.743523293 +0000 UTC m=+0.951641870 container died 9bc0363ab0e520edfa6289b4646c555e59bb03d9c51d1b940751d182c05fdff3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_curran, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 20 13:57:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-08c5408fa18c521e188449bd19d518c6f078c9595814864f0c07c4060a1fadf9-merged.mount: Deactivated successfully.
Jan 20 13:57:10 compute-0 podman[84219]: 2026-01-20 13:57:10.802571007 +0000 UTC m=+1.010689594 container remove 9bc0363ab0e520edfa6289b4646c555e59bb03d9c51d1b940751d182c05fdff3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_curran, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True)
Jan 20 13:57:10 compute-0 systemd[1]: libpod-conmon-9bc0363ab0e520edfa6289b4646c555e59bb03d9c51d1b940751d182c05fdff3.scope: Deactivated successfully.
Jan 20 13:57:10 compute-0 sudo[84114]: pam_unix(sudo:session): session closed for user root
Jan 20 13:57:10 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0) v1
Jan 20 13:57:10 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Jan 20 13:57:10 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 13:57:10 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 13:57:10 compute-0 ceph-mgr[74653]: [cephadm INFO cephadm.serve] Deploying daemon osd.0 on compute-0
Jan 20 13:57:10 compute-0 ceph-mgr[74653]: log_channel(cephadm) log [INF] : Deploying daemon osd.0 on compute-0
Jan 20 13:57:10 compute-0 sudo[84257]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:57:10 compute-0 sudo[84257]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:57:10 compute-0 sudo[84257]: pam_unix(sudo:session): session closed for user root
Jan 20 13:57:10 compute-0 sudo[84282]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 13:57:10 compute-0 sudo[84282]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:57:10 compute-0 sudo[84282]: pam_unix(sudo:session): session closed for user root
Jan 20 13:57:11 compute-0 sudo[84307]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:57:11 compute-0 sudo[84307]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:57:11 compute-0 sudo[84307]: pam_unix(sudo:session): session closed for user root
Jan 20 13:57:11 compute-0 sudo[84332]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid e399cf45-e6b6-5393-99f1-75c601d3f188
Jan 20 13:57:11 compute-0 sudo[84332]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:57:11 compute-0 podman[84398]: 2026-01-20 13:57:11.46375916 +0000 UTC m=+0.066115057 container create 38bca34e47bd0770843e32c9f9cb9627c939250f653a859409741941a6ec8d8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_austin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 13:57:11 compute-0 ceph-mon[74360]: pgmap v36: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 13:57:11 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Jan 20 13:57:11 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 13:57:11 compute-0 ceph-mon[74360]: Deploying daemon osd.0 on compute-0
Jan 20 13:57:11 compute-0 systemd[1]: Started libpod-conmon-38bca34e47bd0770843e32c9f9cb9627c939250f653a859409741941a6ec8d8d.scope.
Jan 20 13:57:11 compute-0 systemd[1]: Started libcrun container.
Jan 20 13:57:11 compute-0 podman[84398]: 2026-01-20 13:57:11.437274356 +0000 UTC m=+0.039630263 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 13:57:11 compute-0 podman[84398]: 2026-01-20 13:57:11.53467774 +0000 UTC m=+0.137033667 container init 38bca34e47bd0770843e32c9f9cb9627c939250f653a859409741941a6ec8d8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_austin, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 20 13:57:11 compute-0 podman[84398]: 2026-01-20 13:57:11.54127564 +0000 UTC m=+0.143631517 container start 38bca34e47bd0770843e32c9f9cb9627c939250f653a859409741941a6ec8d8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_austin, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 13:57:11 compute-0 podman[84398]: 2026-01-20 13:57:11.544252208 +0000 UTC m=+0.146608085 container attach 38bca34e47bd0770843e32c9f9cb9627c939250f653a859409741941a6ec8d8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_austin, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 20 13:57:11 compute-0 amazing_austin[84414]: 167 167
Jan 20 13:57:11 compute-0 systemd[1]: libpod-38bca34e47bd0770843e32c9f9cb9627c939250f653a859409741941a6ec8d8d.scope: Deactivated successfully.
Jan 20 13:57:11 compute-0 podman[84398]: 2026-01-20 13:57:11.547496791 +0000 UTC m=+0.149852658 container died 38bca34e47bd0770843e32c9f9cb9627c939250f653a859409741941a6ec8d8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_austin, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 20 13:57:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-f72e0c850a4c0bda8e512375961bd8d5c1cbba57ffc8bebee43569113cc55471-merged.mount: Deactivated successfully.
Jan 20 13:57:11 compute-0 podman[84398]: 2026-01-20 13:57:11.577856435 +0000 UTC m=+0.180212312 container remove 38bca34e47bd0770843e32c9f9cb9627c939250f653a859409741941a6ec8d8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_austin, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 13:57:11 compute-0 systemd[1]: libpod-conmon-38bca34e47bd0770843e32c9f9cb9627c939250f653a859409741941a6ec8d8d.scope: Deactivated successfully.
Jan 20 13:57:11 compute-0 podman[84444]: 2026-01-20 13:57:11.870464246 +0000 UTC m=+0.069371692 container create 56ae80ec5d374080f96a1771f6d1eb58e37a66f6908f4ccee0b4da783dec4010 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-osd-0-activate-test, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 20 13:57:11 compute-0 systemd[1]: Started libpod-conmon-56ae80ec5d374080f96a1771f6d1eb58e37a66f6908f4ccee0b4da783dec4010.scope.
Jan 20 13:57:11 compute-0 podman[84444]: 2026-01-20 13:57:11.842916365 +0000 UTC m=+0.041823861 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 13:57:11 compute-0 systemd[1]: Started libcrun container.
Jan 20 13:57:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90d01402231f869859b8486f602e01bac992eb4bff4e24483b7d62af7511229b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 13:57:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90d01402231f869859b8486f602e01bac992eb4bff4e24483b7d62af7511229b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 13:57:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90d01402231f869859b8486f602e01bac992eb4bff4e24483b7d62af7511229b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 13:57:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90d01402231f869859b8486f602e01bac992eb4bff4e24483b7d62af7511229b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 13:57:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90d01402231f869859b8486f602e01bac992eb4bff4e24483b7d62af7511229b/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Jan 20 13:57:11 compute-0 podman[84444]: 2026-01-20 13:57:11.971711379 +0000 UTC m=+0.170618855 container init 56ae80ec5d374080f96a1771f6d1eb58e37a66f6908f4ccee0b4da783dec4010 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-osd-0-activate-test, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 13:57:11 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v37: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 13:57:11 compute-0 podman[84444]: 2026-01-20 13:57:11.990585675 +0000 UTC m=+0.189493121 container start 56ae80ec5d374080f96a1771f6d1eb58e37a66f6908f4ccee0b4da783dec4010 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-osd-0-activate-test, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 20 13:57:11 compute-0 podman[84444]: 2026-01-20 13:57:11.994319982 +0000 UTC m=+0.193227458 container attach 56ae80ec5d374080f96a1771f6d1eb58e37a66f6908f4ccee0b4da783dec4010 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-osd-0-activate-test, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 20 13:57:12 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-osd-0-activate-test[84461]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Jan 20 13:57:12 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-osd-0-activate-test[84461]:                             [--no-systemd] [--no-tmpfs]
Jan 20 13:57:12 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-osd-0-activate-test[84461]: ceph-volume activate: error: unrecognized arguments: --bad-option
Jan 20 13:57:12 compute-0 systemd[1]: libpod-56ae80ec5d374080f96a1771f6d1eb58e37a66f6908f4ccee0b4da783dec4010.scope: Deactivated successfully.
Jan 20 13:57:12 compute-0 podman[84444]: 2026-01-20 13:57:12.694680556 +0000 UTC m=+0.893587952 container died 56ae80ec5d374080f96a1771f6d1eb58e37a66f6908f4ccee0b4da783dec4010 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-osd-0-activate-test, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 13:57:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-90d01402231f869859b8486f602e01bac992eb4bff4e24483b7d62af7511229b-merged.mount: Deactivated successfully.
Jan 20 13:57:12 compute-0 podman[84444]: 2026-01-20 13:57:12.749033479 +0000 UTC m=+0.947940925 container remove 56ae80ec5d374080f96a1771f6d1eb58e37a66f6908f4ccee0b4da783dec4010 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-osd-0-activate-test, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 13:57:12 compute-0 systemd[1]: libpod-conmon-56ae80ec5d374080f96a1771f6d1eb58e37a66f6908f4ccee0b4da783dec4010.scope: Deactivated successfully.
Jan 20 13:57:12 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 13:57:13 compute-0 systemd[1]: Reloading.
Jan 20 13:57:13 compute-0 systemd-sysv-generator[84522]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 13:57:13 compute-0 systemd-rc-local-generator[84519]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 13:57:13 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0) v1
Jan 20 13:57:13 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Jan 20 13:57:13 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 13:57:13 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 13:57:13 compute-0 ceph-mgr[74653]: [cephadm INFO cephadm.serve] Deploying daemon osd.1 on compute-1
Jan 20 13:57:13 compute-0 ceph-mgr[74653]: log_channel(cephadm) log [INF] : Deploying daemon osd.1 on compute-1
Jan 20 13:57:13 compute-0 systemd[1]: Reloading.
Jan 20 13:57:13 compute-0 systemd-rc-local-generator[84567]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 13:57:13 compute-0 systemd-sysv-generator[84571]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 13:57:13 compute-0 ceph-mon[74360]: pgmap v37: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 13:57:13 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Jan 20 13:57:13 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 13:57:13 compute-0 systemd[1]: Starting Ceph osd.0 for e399cf45-e6b6-5393-99f1-75c601d3f188...
Jan 20 13:57:13 compute-0 podman[84622]: 2026-01-20 13:57:13.983702742 +0000 UTC m=+0.085663062 container create d185aa09d05263245ad660be9ed0f5825ba76c8ed3e17d8342bae669531cb4e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-osd-0-activate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 13:57:13 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v38: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 13:57:14 compute-0 systemd[1]: Started libcrun container.
Jan 20 13:57:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4de66ac6aa66d37e84a1fa8aca16479284538e15ae15386dc1060ff502587ca/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 13:57:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4de66ac6aa66d37e84a1fa8aca16479284538e15ae15386dc1060ff502587ca/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 13:57:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4de66ac6aa66d37e84a1fa8aca16479284538e15ae15386dc1060ff502587ca/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 13:57:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4de66ac6aa66d37e84a1fa8aca16479284538e15ae15386dc1060ff502587ca/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 13:57:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4de66ac6aa66d37e84a1fa8aca16479284538e15ae15386dc1060ff502587ca/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Jan 20 13:57:14 compute-0 podman[84622]: 2026-01-20 13:57:13.957162637 +0000 UTC m=+0.059122987 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 13:57:14 compute-0 podman[84622]: 2026-01-20 13:57:14.060426282 +0000 UTC m=+0.162386612 container init d185aa09d05263245ad660be9ed0f5825ba76c8ed3e17d8342bae669531cb4e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-osd-0-activate, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 13:57:14 compute-0 podman[84622]: 2026-01-20 13:57:14.065203785 +0000 UTC m=+0.167164105 container start d185aa09d05263245ad660be9ed0f5825ba76c8ed3e17d8342bae669531cb4e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-osd-0-activate, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 20 13:57:14 compute-0 podman[84622]: 2026-01-20 13:57:14.068554331 +0000 UTC m=+0.170514661 container attach d185aa09d05263245ad660be9ed0f5825ba76c8ed3e17d8342bae669531cb4e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-osd-0-activate, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 13:57:14 compute-0 ceph-mon[74360]: Deploying daemon osd.1 on compute-1
Jan 20 13:57:14 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-osd-0-activate[84637]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 20 13:57:14 compute-0 bash[84622]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 20 13:57:14 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-osd-0-activate[84637]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-0 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Jan 20 13:57:14 compute-0 bash[84622]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-0 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Jan 20 13:57:14 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-osd-0-activate[84637]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Jan 20 13:57:14 compute-0 bash[84622]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Jan 20 13:57:14 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-osd-0-activate[84637]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 20 13:57:14 compute-0 bash[84622]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 20 13:57:14 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-osd-0-activate[84637]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Jan 20 13:57:14 compute-0 bash[84622]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Jan 20 13:57:14 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-osd-0-activate[84637]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 20 13:57:14 compute-0 bash[84622]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 20 13:57:14 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-osd-0-activate[84637]: --> ceph-volume raw activate successful for osd ID: 0
Jan 20 13:57:14 compute-0 bash[84622]: --> ceph-volume raw activate successful for osd ID: 0
Jan 20 13:57:15 compute-0 systemd[1]: libpod-d185aa09d05263245ad660be9ed0f5825ba76c8ed3e17d8342bae669531cb4e1.scope: Deactivated successfully.
Jan 20 13:57:15 compute-0 podman[84622]: 2026-01-20 13:57:15.011143947 +0000 UTC m=+1.113104277 container died d185aa09d05263245ad660be9ed0f5825ba76c8ed3e17d8342bae669531cb4e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-osd-0-activate, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 20 13:57:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-c4de66ac6aa66d37e84a1fa8aca16479284538e15ae15386dc1060ff502587ca-merged.mount: Deactivated successfully.
Jan 20 13:57:15 compute-0 podman[84622]: 2026-01-20 13:57:15.066209978 +0000 UTC m=+1.168170288 container remove d185aa09d05263245ad660be9ed0f5825ba76c8ed3e17d8342bae669531cb4e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-osd-0-activate, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 20 13:57:15 compute-0 podman[84796]: 2026-01-20 13:57:15.249999601 +0000 UTC m=+0.040119196 container create 1bb19f8e00aedfda790faeb9c03f4d14a333a73309dbf009dad3a0846420e858 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-osd-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 13:57:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3967a816edd088dce69a499fb353ca1fc04ef92e4e1333bf9dd90262cf7cef66/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 13:57:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3967a816edd088dce69a499fb353ca1fc04ef92e4e1333bf9dd90262cf7cef66/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 13:57:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3967a816edd088dce69a499fb353ca1fc04ef92e4e1333bf9dd90262cf7cef66/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 13:57:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3967a816edd088dce69a499fb353ca1fc04ef92e4e1333bf9dd90262cf7cef66/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 13:57:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3967a816edd088dce69a499fb353ca1fc04ef92e4e1333bf9dd90262cf7cef66/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Jan 20 13:57:15 compute-0 podman[84796]: 2026-01-20 13:57:15.305066883 +0000 UTC m=+0.095186508 container init 1bb19f8e00aedfda790faeb9c03f4d14a333a73309dbf009dad3a0846420e858 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-osd-0, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 20 13:57:15 compute-0 podman[84796]: 2026-01-20 13:57:15.315921272 +0000 UTC m=+0.106040877 container start 1bb19f8e00aedfda790faeb9c03f4d14a333a73309dbf009dad3a0846420e858 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-osd-0, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 20 13:57:15 compute-0 bash[84796]: 1bb19f8e00aedfda790faeb9c03f4d14a333a73309dbf009dad3a0846420e858
Jan 20 13:57:15 compute-0 podman[84796]: 2026-01-20 13:57:15.233888286 +0000 UTC m=+0.024007901 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 13:57:15 compute-0 systemd[1]: Started Ceph osd.0 for e399cf45-e6b6-5393-99f1-75c601d3f188.
Jan 20 13:57:15 compute-0 ceph-osd[84815]: set uid:gid to 167:167 (ceph:ceph)
Jan 20 13:57:15 compute-0 ceph-osd[84815]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-osd, pid 2
Jan 20 13:57:15 compute-0 ceph-osd[84815]: pidfile_write: ignore empty --pid-file
Jan 20 13:57:15 compute-0 ceph-osd[84815]: bdev(0x562bb85f3800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 20 13:57:15 compute-0 ceph-osd[84815]: bdev(0x562bb85f3800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 20 13:57:15 compute-0 ceph-osd[84815]: bdev(0x562bb85f3800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 20 13:57:15 compute-0 ceph-osd[84815]: bdev(0x562bb85f3800 /var/lib/ceph/osd/ceph-0/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 20 13:57:15 compute-0 ceph-osd[84815]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 20 13:57:15 compute-0 ceph-osd[84815]: bdev(0x562bb942b800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 20 13:57:15 compute-0 ceph-osd[84815]: bdev(0x562bb942b800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 20 13:57:15 compute-0 ceph-osd[84815]: bdev(0x562bb942b800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 20 13:57:15 compute-0 ceph-osd[84815]: bdev(0x562bb942b800 /var/lib/ceph/osd/ceph-0/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 20 13:57:15 compute-0 ceph-osd[84815]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 7.0 GiB
Jan 20 13:57:15 compute-0 ceph-osd[84815]: bdev(0x562bb942b800 /var/lib/ceph/osd/ceph-0/block) close
Jan 20 13:57:15 compute-0 sudo[84332]: pam_unix(sudo:session): session closed for user root
Jan 20 13:57:15 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 13:57:15 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:57:15 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 13:57:15 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:57:15 compute-0 sudo[84828]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:57:15 compute-0 sudo[84828]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:57:15 compute-0 sudo[84828]: pam_unix(sudo:session): session closed for user root
Jan 20 13:57:15 compute-0 ceph-mon[74360]: pgmap v38: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 13:57:15 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:57:15 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:57:15 compute-0 sudo[84853]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 13:57:15 compute-0 sudo[84853]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:57:15 compute-0 sudo[84853]: pam_unix(sudo:session): session closed for user root
Jan 20 13:57:15 compute-0 sudo[84878]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:57:15 compute-0 sudo[84878]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:57:15 compute-0 sudo[84878]: pam_unix(sudo:session): session closed for user root
Jan 20 13:57:15 compute-0 ceph-osd[84815]: bdev(0x562bb85f3800 /var/lib/ceph/osd/ceph-0/block) close
Jan 20 13:57:15 compute-0 sudo[84903]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- raw list --format json
Jan 20 13:57:15 compute-0 sudo[84903]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:57:15 compute-0 ceph-osd[84815]: starting osd.0 osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal
Jan 20 13:57:15 compute-0 ceph-osd[84815]: load: jerasure load: lrc 
Jan 20 13:57:15 compute-0 ceph-osd[84815]: bdev(0x562bb94acc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 20 13:57:15 compute-0 ceph-osd[84815]: bdev(0x562bb94acc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 20 13:57:15 compute-0 ceph-osd[84815]: bdev(0x562bb94acc00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 20 13:57:15 compute-0 ceph-osd[84815]: bdev(0x562bb94acc00 /var/lib/ceph/osd/ceph-0/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 20 13:57:15 compute-0 ceph-osd[84815]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 20 13:57:15 compute-0 ceph-osd[84815]: bdev(0x562bb94acc00 /var/lib/ceph/osd/ceph-0/block) close
Jan 20 13:57:15 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v39: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 13:57:16 compute-0 podman[84975]: 2026-01-20 13:57:16.023250907 +0000 UTC m=+0.043196876 container create 9a542e6b81bdfd380df8a95ab52ee59c0eb4db4912b452db11ebe74da16f7c94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_banzai, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 20 13:57:16 compute-0 systemd[1]: Started libpod-conmon-9a542e6b81bdfd380df8a95ab52ee59c0eb4db4912b452db11ebe74da16f7c94.scope.
Jan 20 13:57:16 compute-0 podman[84975]: 2026-01-20 13:57:16.003212879 +0000 UTC m=+0.023158888 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 13:57:16 compute-0 systemd[1]: Started libcrun container.
Jan 20 13:57:16 compute-0 podman[84975]: 2026-01-20 13:57:16.120156767 +0000 UTC m=+0.140102786 container init 9a542e6b81bdfd380df8a95ab52ee59c0eb4db4912b452db11ebe74da16f7c94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_banzai, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True)
Jan 20 13:57:16 compute-0 podman[84975]: 2026-01-20 13:57:16.132168627 +0000 UTC m=+0.152114596 container start 9a542e6b81bdfd380df8a95ab52ee59c0eb4db4912b452db11ebe74da16f7c94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_banzai, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 13:57:16 compute-0 podman[84975]: 2026-01-20 13:57:16.137992537 +0000 UTC m=+0.157938466 container attach 9a542e6b81bdfd380df8a95ab52ee59c0eb4db4912b452db11ebe74da16f7c94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_banzai, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 20 13:57:16 compute-0 musing_banzai[84991]: 167 167
Jan 20 13:57:16 compute-0 systemd[1]: libpod-9a542e6b81bdfd380df8a95ab52ee59c0eb4db4912b452db11ebe74da16f7c94.scope: Deactivated successfully.
Jan 20 13:57:16 compute-0 podman[84975]: 2026-01-20 13:57:16.141087558 +0000 UTC m=+0.161033497 container died 9a542e6b81bdfd380df8a95ab52ee59c0eb4db4912b452db11ebe74da16f7c94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_banzai, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 13:57:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-fe41f4ba49d156b12374133f30d1f28471ce6931f7e21a5dbd2d23bdc0a7fae8-merged.mount: Deactivated successfully.
Jan 20 13:57:16 compute-0 ceph-osd[84815]: bdev(0x562bb94acc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 20 13:57:16 compute-0 ceph-osd[84815]: bdev(0x562bb94acc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 20 13:57:16 compute-0 ceph-osd[84815]: bdev(0x562bb94acc00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 20 13:57:16 compute-0 ceph-osd[84815]: bdev(0x562bb94acc00 /var/lib/ceph/osd/ceph-0/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 20 13:57:16 compute-0 ceph-osd[84815]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 20 13:57:16 compute-0 ceph-osd[84815]: bdev(0x562bb94acc00 /var/lib/ceph/osd/ceph-0/block) close
Jan 20 13:57:16 compute-0 podman[84975]: 2026-01-20 13:57:16.187000852 +0000 UTC m=+0.206946781 container remove 9a542e6b81bdfd380df8a95ab52ee59c0eb4db4912b452db11ebe74da16f7c94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_banzai, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 13:57:16 compute-0 systemd[1]: libpod-conmon-9a542e6b81bdfd380df8a95ab52ee59c0eb4db4912b452db11ebe74da16f7c94.scope: Deactivated successfully.
Jan 20 13:57:16 compute-0 podman[85018]: 2026-01-20 13:57:16.365330034 +0000 UTC m=+0.020867939 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 13:57:16 compute-0 podman[85018]: 2026-01-20 13:57:16.552648639 +0000 UTC m=+0.208186494 container create 432ce0d29989f6599b85d4e3d40fdd2608ccd2710c2b6295cea9554c2034865a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_chatelet, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 20 13:57:16 compute-0 ceph-osd[84815]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Jan 20 13:57:16 compute-0 ceph-osd[84815]: osd.0:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Jan 20 13:57:16 compute-0 ceph-osd[84815]: bdev(0x562bb94acc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 20 13:57:16 compute-0 ceph-osd[84815]: bdev(0x562bb94acc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 20 13:57:16 compute-0 ceph-osd[84815]: bdev(0x562bb94acc00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 20 13:57:16 compute-0 ceph-osd[84815]: bdev(0x562bb94acc00 /var/lib/ceph/osd/ceph-0/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 20 13:57:16 compute-0 ceph-osd[84815]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 20 13:57:16 compute-0 ceph-osd[84815]: bdev(0x562bb94ad400 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 20 13:57:16 compute-0 ceph-osd[84815]: bdev(0x562bb94ad400 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 20 13:57:16 compute-0 ceph-osd[84815]: bdev(0x562bb94ad400 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 20 13:57:16 compute-0 ceph-osd[84815]: bdev(0x562bb94ad400 /var/lib/ceph/osd/ceph-0/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 20 13:57:16 compute-0 ceph-osd[84815]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 7.0 GiB
Jan 20 13:57:16 compute-0 ceph-osd[84815]: bluefs mount
Jan 20 13:57:16 compute-0 ceph-osd[84815]: bluefs _init_alloc shared, id 1, capacity 0x1bfc00000, block size 0x10000
Jan 20 13:57:16 compute-0 ceph-osd[84815]: bluefs mount shared_bdev_used = 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,7136398540 db.slow,7136398540
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: RocksDB version: 7.9.2
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Git sha 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Compile date 2025-05-06 23:30:25
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: DB SUMMARY
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: DB Session ID:  I2AYA41M6TZPHXD3PTJN
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: CURRENT file:  CURRENT
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: IDENTITY file:  IDENTITY
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                         Options.error_if_exists: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                       Options.create_if_missing: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                         Options.paranoid_checks: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                                     Options.env: 0x562bb947dc70
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                                      Options.fs: LegacyFileSystem
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                                Options.info_log: 0x562bb8670ba0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                Options.max_file_opening_threads: 16
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                              Options.statistics: (nil)
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                               Options.use_fsync: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                       Options.max_log_file_size: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                         Options.allow_fallocate: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                        Options.use_direct_reads: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:          Options.create_missing_column_families: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                              Options.db_log_dir: 
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                                 Options.wal_dir: db.wal
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                   Options.advise_random_on_open: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                    Options.write_buffer_manager: 0x562bb9586460
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 20 13:57:16 compute-0 systemd[1]: Started libpod-conmon-432ce0d29989f6599b85d4e3d40fdd2608ccd2710c2b6295cea9554c2034865a.scope.
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                            Options.rate_limiter: (nil)
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                  Options.unordered_write: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                               Options.row_cache: None
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                              Options.wal_filter: None
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:             Options.allow_ingest_behind: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:             Options.two_write_queues: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:             Options.manual_wal_flush: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:             Options.wal_compression: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:             Options.atomic_flush: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                 Options.log_readahead_size: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:             Options.allow_data_in_errors: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:             Options.db_host_id: __hostname__
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:             Options.max_background_jobs: 4
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:             Options.max_background_compactions: -1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:             Options.max_subcompactions: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:           Options.writable_file_max_buffer_size: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:             Options.max_total_wal_size: 1073741824
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                          Options.max_open_files: -1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                          Options.bytes_per_sync: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:       Options.compaction_readahead_size: 2097152
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                  Options.max_background_flushes: -1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Compression algorithms supported:
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         kZSTD supported: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         kXpressCompression supported: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         kBZip2Compression supported: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         kZSTDNotFinalCompression supported: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         kLZ4Compression supported: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         kZlibCompression supported: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         kLZ4HCCompression supported: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         kSnappyCompression supported: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:        Options.compaction_filter: None
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562bb8670600)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x562bb8666dd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:          Options.compression: LZ4
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:             Options.num_levels: 7
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                           Options.bloom_locality: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                               Options.ttl: 2592000
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                       Options.enable_blob_files: false
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                           Options.min_blob_size: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:           Options.merge_operator: None
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:        Options.compaction_filter: None
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562bb8670600)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x562bb8666dd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:          Options.compression: LZ4
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:             Options.num_levels: 7
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                           Options.bloom_locality: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                               Options.ttl: 2592000
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                       Options.enable_blob_files: false
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                           Options.min_blob_size: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:           Options.merge_operator: None
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:        Options.compaction_filter: None
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562bb8670600)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x562bb8666dd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:          Options.compression: LZ4
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:             Options.num_levels: 7
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                           Options.bloom_locality: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                               Options.ttl: 2592000
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                       Options.enable_blob_files: false
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                           Options.min_blob_size: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:           Options.merge_operator: None
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:        Options.compaction_filter: None
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562bb8670600)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x562bb8666dd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:          Options.compression: LZ4
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:             Options.num_levels: 7
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                           Options.bloom_locality: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                               Options.ttl: 2592000
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                       Options.enable_blob_files: false
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                           Options.min_blob_size: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:           Options.merge_operator: None
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:        Options.compaction_filter: None
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562bb8670600)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x562bb8666dd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:          Options.compression: LZ4
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:             Options.num_levels: 7
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                           Options.bloom_locality: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                               Options.ttl: 2592000
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                       Options.enable_blob_files: false
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                           Options.min_blob_size: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:           Options.merge_operator: None
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:        Options.compaction_filter: None
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562bb8670600)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x562bb8666dd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:          Options.compression: LZ4
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:             Options.num_levels: 7
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                           Options.bloom_locality: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                               Options.ttl: 2592000
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                       Options.enable_blob_files: false
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                           Options.min_blob_size: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:           Options.merge_operator: None
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:        Options.compaction_filter: None
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562bb8670600)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x562bb8666dd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:          Options.compression: LZ4
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:             Options.num_levels: 7
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                           Options.bloom_locality: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                               Options.ttl: 2592000
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                       Options.enable_blob_files: false
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                           Options.min_blob_size: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:           Options.merge_operator: None
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:        Options.compaction_filter: None
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562bb86705c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x562bb8666430
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:          Options.compression: LZ4
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:             Options.num_levels: 7
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                           Options.bloom_locality: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                               Options.ttl: 2592000
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                       Options.enable_blob_files: false
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                           Options.min_blob_size: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:           Options.merge_operator: None
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:        Options.compaction_filter: None
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562bb86705c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x562bb8666430
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:          Options.compression: LZ4
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:             Options.num_levels: 7
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                           Options.bloom_locality: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                               Options.ttl: 2592000
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                       Options.enable_blob_files: false
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                           Options.min_blob_size: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:           Options.merge_operator: None
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:        Options.compaction_filter: None
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562bb86705c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x562bb8666430
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:          Options.compression: LZ4
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:             Options.num_levels: 7
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                           Options.bloom_locality: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                               Options.ttl: 2592000
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                       Options.enable_blob_files: false
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                           Options.min_blob_size: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Jan 20 13:57:16 compute-0 systemd[1]: Started libcrun container.
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: b715084b-5c1e-423c-a54d-3a29be4010c7
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768917436611665, "job": 1, "event": "recovery_started", "wal_files": [31]}
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768917436611869, "job": 1, "event": "recovery_finished"}
Jan 20 13:57:16 compute-0 ceph-osd[84815]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old nid_max 1025
Jan 20 13:57:16 compute-0 ceph-osd[84815]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old blobid_max 10240
Jan 20 13:57:16 compute-0 ceph-osd[84815]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Jan 20 13:57:16 compute-0 ceph-osd[84815]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta min_alloc_size 0x1000
Jan 20 13:57:16 compute-0 ceph-osd[84815]: freelist init
Jan 20 13:57:16 compute-0 ceph-osd[84815]: freelist _read_cfg
Jan 20 13:57:16 compute-0 ceph-osd[84815]: bluestore(/var/lib/ceph/osd/ceph-0) _init_alloc loaded 7.0 GiB in 2 extents, allocator type hybrid, capacity 0x1bfc00000, block size 0x1000, free 0x1bfbfd000, fragmentation 5.5e-07
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Jan 20 13:57:16 compute-0 ceph-osd[84815]: bluefs umount
Jan 20 13:57:16 compute-0 ceph-osd[84815]: bdev(0x562bb94ad400 /var/lib/ceph/osd/ceph-0/block) close
Jan 20 13:57:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32404eb07b09c26eb0656f2b9f986703aa09c8d0b9a5e64765550aa80478d1a9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 13:57:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32404eb07b09c26eb0656f2b9f986703aa09c8d0b9a5e64765550aa80478d1a9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 13:57:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32404eb07b09c26eb0656f2b9f986703aa09c8d0b9a5e64765550aa80478d1a9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 13:57:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32404eb07b09c26eb0656f2b9f986703aa09c8d0b9a5e64765550aa80478d1a9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 13:57:16 compute-0 podman[85018]: 2026-01-20 13:57:16.622754148 +0000 UTC m=+0.278292033 container init 432ce0d29989f6599b85d4e3d40fdd2608ccd2710c2b6295cea9554c2034865a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_chatelet, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 20 13:57:16 compute-0 podman[85018]: 2026-01-20 13:57:16.638804362 +0000 UTC m=+0.294342227 container start 432ce0d29989f6599b85d4e3d40fdd2608ccd2710c2b6295cea9554c2034865a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_chatelet, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 13:57:16 compute-0 podman[85018]: 2026-01-20 13:57:16.642329273 +0000 UTC m=+0.297867128 container attach 432ce0d29989f6599b85d4e3d40fdd2608ccd2710c2b6295cea9554c2034865a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_chatelet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 20 13:57:16 compute-0 ceph-osd[84815]: bdev(0x562bb94ad400 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 20 13:57:16 compute-0 ceph-osd[84815]: bdev(0x562bb94ad400 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 20 13:57:16 compute-0 ceph-osd[84815]: bdev(0x562bb94ad400 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 20 13:57:16 compute-0 ceph-osd[84815]: bdev(0x562bb94ad400 /var/lib/ceph/osd/ceph-0/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 20 13:57:16 compute-0 ceph-osd[84815]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 7.0 GiB
Jan 20 13:57:16 compute-0 ceph-osd[84815]: bluefs mount
Jan 20 13:57:16 compute-0 ceph-osd[84815]: bluefs _init_alloc shared, id 1, capacity 0x1bfc00000, block size 0x10000
Jan 20 13:57:16 compute-0 ceph-osd[84815]: bluefs mount shared_bdev_used = 4718592
Jan 20 13:57:16 compute-0 ceph-osd[84815]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,7136398540 db.slow,7136398540
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: RocksDB version: 7.9.2
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Git sha 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Compile date 2025-05-06 23:30:25
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: DB SUMMARY
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: DB Session ID:  I2AYA41M6TZPHXD3PTJM
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: CURRENT file:  CURRENT
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: IDENTITY file:  IDENTITY
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                         Options.error_if_exists: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                       Options.create_if_missing: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                         Options.paranoid_checks: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                                     Options.env: 0x562bb86b25b0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                                      Options.fs: LegacyFileSystem
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                                Options.info_log: 0x562bb9479000
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                Options.max_file_opening_threads: 16
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                              Options.statistics: (nil)
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                               Options.use_fsync: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                       Options.max_log_file_size: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                         Options.allow_fallocate: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                        Options.use_direct_reads: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:          Options.create_missing_column_families: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                              Options.db_log_dir: 
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                                 Options.wal_dir: db.wal
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                   Options.advise_random_on_open: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                    Options.write_buffer_manager: 0x562bb9586960
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                            Options.rate_limiter: (nil)
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                  Options.unordered_write: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                               Options.row_cache: None
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                              Options.wal_filter: None
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:             Options.allow_ingest_behind: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:             Options.two_write_queues: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:             Options.manual_wal_flush: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:             Options.wal_compression: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:             Options.atomic_flush: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                 Options.log_readahead_size: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:             Options.allow_data_in_errors: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:             Options.db_host_id: __hostname__
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:             Options.max_background_jobs: 4
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:             Options.max_background_compactions: -1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:             Options.max_subcompactions: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:           Options.writable_file_max_buffer_size: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:             Options.max_total_wal_size: 1073741824
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                          Options.max_open_files: -1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                          Options.bytes_per_sync: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:       Options.compaction_readahead_size: 2097152
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                  Options.max_background_flushes: -1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Compression algorithms supported:
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         kZSTD supported: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         kXpressCompression supported: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         kBZip2Compression supported: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         kZSTDNotFinalCompression supported: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         kLZ4Compression supported: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         kZlibCompression supported: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         kLZ4HCCompression supported: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         kSnappyCompression supported: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:        Options.compaction_filter: None
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562bb867a6e0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x562bb86671f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:          Options.compression: LZ4
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:             Options.num_levels: 7
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                           Options.bloom_locality: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                               Options.ttl: 2592000
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                       Options.enable_blob_files: false
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                           Options.min_blob_size: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:           Options.merge_operator: None
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:        Options.compaction_filter: None
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562bb867a6e0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x562bb86671f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:          Options.compression: LZ4
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:             Options.num_levels: 7
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                           Options.bloom_locality: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                               Options.ttl: 2592000
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                       Options.enable_blob_files: false
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                           Options.min_blob_size: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:           Options.merge_operator: None
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:        Options.compaction_filter: None
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562bb867a6e0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x562bb86671f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:          Options.compression: LZ4
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:             Options.num_levels: 7
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                           Options.bloom_locality: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                               Options.ttl: 2592000
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                       Options.enable_blob_files: false
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                           Options.min_blob_size: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:           Options.merge_operator: None
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:        Options.compaction_filter: None
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562bb867a6e0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x562bb86671f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:          Options.compression: LZ4
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:             Options.num_levels: 7
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                           Options.bloom_locality: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                               Options.ttl: 2592000
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                       Options.enable_blob_files: false
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                           Options.min_blob_size: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:           Options.merge_operator: None
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:        Options.compaction_filter: None
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562bb867a6e0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x562bb86671f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:          Options.compression: LZ4
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:             Options.num_levels: 7
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                           Options.bloom_locality: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                               Options.ttl: 2592000
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                       Options.enable_blob_files: false
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                           Options.min_blob_size: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:           Options.merge_operator: None
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:        Options.compaction_filter: None
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562bb867a6e0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x562bb86671f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:          Options.compression: LZ4
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:             Options.num_levels: 7
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                           Options.bloom_locality: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                               Options.ttl: 2592000
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                       Options.enable_blob_files: false
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                           Options.min_blob_size: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:           Options.merge_operator: None
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:        Options.compaction_filter: None
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562bb867a6e0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x562bb86671f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:          Options.compression: LZ4
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:             Options.num_levels: 7
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                           Options.bloom_locality: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                               Options.ttl: 2592000
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                       Options.enable_blob_files: false
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                           Options.min_blob_size: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:           Options.merge_operator: None
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:        Options.compaction_filter: None
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562bb9479140)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x562bb8667610
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:          Options.compression: LZ4
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:             Options.num_levels: 7
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                           Options.bloom_locality: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                               Options.ttl: 2592000
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                       Options.enable_blob_files: false
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                           Options.min_blob_size: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:           Options.merge_operator: None
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:        Options.compaction_filter: None
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562bb9479140)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x562bb8667610
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:          Options.compression: LZ4
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:             Options.num_levels: 7
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                           Options.bloom_locality: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                               Options.ttl: 2592000
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                       Options.enable_blob_files: false
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                           Options.min_blob_size: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:           Options.merge_operator: None
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:        Options.compaction_filter: None
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562bb9479140)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x562bb8667610
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:          Options.compression: LZ4
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:             Options.num_levels: 7
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                           Options.bloom_locality: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                               Options.ttl: 2592000
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                       Options.enable_blob_files: false
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                           Options.min_blob_size: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: b715084b-5c1e-423c-a54d-3a29be4010c7
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768917436881790, "job": 1, "event": "recovery_started", "wal_files": [31]}
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768917436886056, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768917436, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b715084b-5c1e-423c-a54d-3a29be4010c7", "db_session_id": "I2AYA41M6TZPHXD3PTJM", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768917436888981, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768917436, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b715084b-5c1e-423c-a54d-3a29be4010c7", "db_session_id": "I2AYA41M6TZPHXD3PTJM", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768917436891054, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768917436, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b715084b-5c1e-423c-a54d-3a29be4010c7", "db_session_id": "I2AYA41M6TZPHXD3PTJM", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768917436892414, "job": 1, "event": "recovery_finished"}
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x562bb8725880
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: DB pointer 0x562bb956fa00
Jan 20 13:57:16 compute-0 ceph-osd[84815]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Jan 20 13:57:16 compute-0 ceph-osd[84815]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super from 4, latest 4
Jan 20 13:57:16 compute-0 ceph-osd[84815]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super done
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 20 13:57:16 compute-0 ceph-osd[84815]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562bb86671f0#2 capacity: 460.80 MB usage: 0.94 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562bb86671f0#2 capacity: 460.80 MB usage: 0.94 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562bb86671f0#2 capacity: 460.80 MB usage: 0.94 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562bb86671f0#2 capacity: 460.80 MB usage: 0.94 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562bb86671f0#2 capacity: 460.80 MB usage: 0.94 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562bb86671f0#2 capacity: 460.80 MB usage: 0.94 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562bb86671f0#2 capacity: 460.80 MB usage: 0.94 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562bb8667610#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562bb8667610#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562bb8667610#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562bb86671f0#2 capacity: 460.80 MB usage: 0.94 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562bb86671f0#2 capacity: 460.80 MB usage: 0.94 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 20 13:57:16 compute-0 ceph-osd[84815]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Jan 20 13:57:16 compute-0 ceph-osd[84815]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/hello/cls_hello.cc:316: loading cls_hello
Jan 20 13:57:16 compute-0 ceph-osd[84815]: _get_class not permitted to load lua
Jan 20 13:57:16 compute-0 ceph-osd[84815]: _get_class not permitted to load sdk
Jan 20 13:57:16 compute-0 ceph-osd[84815]: _get_class not permitted to load test_remote_reads
Jan 20 13:57:16 compute-0 ceph-osd[84815]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Jan 20 13:57:16 compute-0 ceph-osd[84815]: osd.0 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Jan 20 13:57:16 compute-0 ceph-osd[84815]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Jan 20 13:57:16 compute-0 ceph-osd[84815]: osd.0 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Jan 20 13:57:16 compute-0 ceph-osd[84815]: osd.0 0 load_pgs
Jan 20 13:57:16 compute-0 ceph-osd[84815]: osd.0 0 load_pgs opened 0 pgs
Jan 20 13:57:16 compute-0 ceph-osd[84815]: osd.0 0 log_to_monitors true
Jan 20 13:57:16 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-osd-0[84811]: 2026-01-20T13:57:16.916+0000 7f472e54b740 -1 osd.0 0 log_to_monitors true
Jan 20 13:57:16 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} v 0) v1
Jan 20 13:57:16 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/3314149516,v1:192.168.122.100:6803/3314149516]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Jan 20 13:57:17 compute-0 youthful_chatelet[85048]: {
Jan 20 13:57:17 compute-0 youthful_chatelet[85048]:     "afc3740a-ccee-46f8-b343-22648cc89376": {
Jan 20 13:57:17 compute-0 youthful_chatelet[85048]:         "ceph_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 13:57:17 compute-0 youthful_chatelet[85048]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 20 13:57:17 compute-0 youthful_chatelet[85048]:         "osd_id": 0,
Jan 20 13:57:17 compute-0 youthful_chatelet[85048]:         "osd_uuid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 13:57:17 compute-0 youthful_chatelet[85048]:         "type": "bluestore"
Jan 20 13:57:17 compute-0 youthful_chatelet[85048]:     }
Jan 20 13:57:17 compute-0 youthful_chatelet[85048]: }
Jan 20 13:57:17 compute-0 systemd[1]: libpod-432ce0d29989f6599b85d4e3d40fdd2608ccd2710c2b6295cea9554c2034865a.scope: Deactivated successfully.
Jan 20 13:57:17 compute-0 podman[85018]: 2026-01-20 13:57:17.527339413 +0000 UTC m=+1.182877338 container died 432ce0d29989f6599b85d4e3d40fdd2608ccd2710c2b6295cea9554c2034865a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_chatelet, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 20 13:57:17 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e5 do_prune osdmap full prune enabled
Jan 20 13:57:17 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e5 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 20 13:57:17 compute-0 ceph-mon[74360]: pgmap v39: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 13:57:17 compute-0 ceph-mon[74360]: from='osd.0 [v2:192.168.122.100:6802/3314149516,v1:192.168.122.100:6803/3314149516]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Jan 20 13:57:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-32404eb07b09c26eb0656f2b9f986703aa09c8d0b9a5e64765550aa80478d1a9-merged.mount: Deactivated successfully.
Jan 20 13:57:17 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/3314149516,v1:192.168.122.100:6803/3314149516]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Jan 20 13:57:17 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e6 e6: 2 total, 0 up, 2 in
Jan 20 13:57:17 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e6: 2 total, 0 up, 2 in
Jan 20 13:57:17 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 0, "weight":0.0068, "args": ["host=compute-0", "root=default"]} v 0) v1
Jan 20 13:57:17 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/3314149516,v1:192.168.122.100:6803/3314149516]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0068, "args": ["host=compute-0", "root=default"]}]: dispatch
Jan 20 13:57:17 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e6 create-or-move crush item name 'osd.0' initial_weight 0.0068 at location {host=compute-0,root=default}
Jan 20 13:57:17 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Jan 20 13:57:17 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 20 13:57:17 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Jan 20 13:57:17 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 20 13:57:17 compute-0 ceph-mgr[74653]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 20 13:57:17 compute-0 ceph-mgr[74653]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 20 13:57:17 compute-0 podman[85018]: 2026-01-20 13:57:17.600037289 +0000 UTC m=+1.255575174 container remove 432ce0d29989f6599b85d4e3d40fdd2608ccd2710c2b6295cea9554c2034865a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_chatelet, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default)
Jan 20 13:57:17 compute-0 systemd[1]: libpod-conmon-432ce0d29989f6599b85d4e3d40fdd2608ccd2710c2b6295cea9554c2034865a.scope: Deactivated successfully.
Jan 20 13:57:17 compute-0 sudo[84903]: pam_unix(sudo:session): session closed for user root
Jan 20 13:57:17 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 13:57:17 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:57:17 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 13:57:17 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:57:17 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 13:57:17 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Jan 20 13:57:17 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Jan 20 13:57:17 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v41: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 13:57:18 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e6 do_prune osdmap full prune enabled
Jan 20 13:57:18 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e6 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 20 13:57:18 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/3314149516,v1:192.168.122.100:6803/3314149516]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0068, "args": ["host=compute-0", "root=default"]}]': finished
Jan 20 13:57:18 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e7 e7: 2 total, 0 up, 2 in
Jan 20 13:57:18 compute-0 ceph-osd[84815]: osd.0 0 done with init, starting boot process
Jan 20 13:57:18 compute-0 ceph-osd[84815]: osd.0 0 start_boot
Jan 20 13:57:18 compute-0 ceph-osd[84815]: osd.0 0 maybe_override_options_for_qos osd_max_backfills set to 1
Jan 20 13:57:18 compute-0 ceph-osd[84815]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Jan 20 13:57:18 compute-0 ceph-osd[84815]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Jan 20 13:57:18 compute-0 ceph-osd[84815]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Jan 20 13:57:18 compute-0 ceph-osd[84815]: osd.0 0  bench count 12288000 bsize 4 KiB
Jan 20 13:57:18 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e7: 2 total, 0 up, 2 in
Jan 20 13:57:18 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Jan 20 13:57:18 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 20 13:57:18 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Jan 20 13:57:18 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 20 13:57:18 compute-0 ceph-mgr[74653]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 20 13:57:18 compute-0 ceph-mgr[74653]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 20 13:57:18 compute-0 ceph-mon[74360]: from='osd.0 [v2:192.168.122.100:6802/3314149516,v1:192.168.122.100:6803/3314149516]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Jan 20 13:57:18 compute-0 ceph-mon[74360]: osdmap e6: 2 total, 0 up, 2 in
Jan 20 13:57:18 compute-0 ceph-mon[74360]: from='osd.0 [v2:192.168.122.100:6802/3314149516,v1:192.168.122.100:6803/3314149516]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0068, "args": ["host=compute-0", "root=default"]}]: dispatch
Jan 20 13:57:18 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 20 13:57:18 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 20 13:57:18 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:57:18 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:57:18 compute-0 ceph-mgr[74653]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3314149516; not ready for session (expect reconnect)
Jan 20 13:57:18 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Jan 20 13:57:18 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 20 13:57:18 compute-0 ceph-mgr[74653]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 20 13:57:19 compute-0 ceph-mgr[74653]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3314149516; not ready for session (expect reconnect)
Jan 20 13:57:19 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Jan 20 13:57:19 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 20 13:57:19 compute-0 ceph-mgr[74653]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 20 13:57:19 compute-0 ceph-mon[74360]: pgmap v41: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 13:57:19 compute-0 ceph-mon[74360]: from='osd.0 [v2:192.168.122.100:6802/3314149516,v1:192.168.122.100:6803/3314149516]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0068, "args": ["host=compute-0", "root=default"]}]': finished
Jan 20 13:57:19 compute-0 ceph-mon[74360]: osdmap e7: 2 total, 0 up, 2 in
Jan 20 13:57:19 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 20 13:57:19 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 20 13:57:19 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 20 13:57:19 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v43: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 13:57:20 compute-0 ceph-mgr[74653]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3314149516; not ready for session (expect reconnect)
Jan 20 13:57:20 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Jan 20 13:57:20 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 20 13:57:20 compute-0 ceph-mgr[74653]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 20 13:57:20 compute-0 ceph-mon[74360]: purged_snaps scrub starts
Jan 20 13:57:20 compute-0 ceph-mon[74360]: purged_snaps scrub ok
Jan 20 13:57:20 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 20 13:57:20 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 20 13:57:20 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 20 13:57:20 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:57:20 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 20 13:57:20 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:57:21 compute-0 ceph-mgr[74653]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3314149516; not ready for session (expect reconnect)
Jan 20 13:57:21 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Jan 20 13:57:21 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 20 13:57:21 compute-0 ceph-mgr[74653]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 20 13:57:21 compute-0 ceph-mon[74360]: pgmap v43: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 13:57:21 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:57:21 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:57:21 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 20 13:57:21 compute-0 ceph-osd[84815]: osd.0 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 28.379 iops: 7265.084 elapsed_sec: 0.413
Jan 20 13:57:21 compute-0 ceph-osd[84815]: log_channel(cluster) log [WRN] : OSD bench result of 7265.084457 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 20 13:57:21 compute-0 ceph-osd[84815]: osd.0 0 waiting for initial osdmap
Jan 20 13:57:21 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-osd-0[84811]: 2026-01-20T13:57:21.893+0000 7f472a4cb640 -1 osd.0 0 waiting for initial osdmap
Jan 20 13:57:21 compute-0 ceph-osd[84815]: osd.0 7 crush map has features 288514050185494528, adjusting msgr requires for clients
Jan 20 13:57:21 compute-0 ceph-osd[84815]: osd.0 7 crush map has features 288514050185494528 was 288232575208792577, adjusting msgr requires for mons
Jan 20 13:57:21 compute-0 ceph-osd[84815]: osd.0 7 crush map has features 3314932999778484224, adjusting msgr requires for osds
Jan 20 13:57:21 compute-0 ceph-osd[84815]: osd.0 7 check_osdmap_features require_osd_release unknown -> reef
Jan 20 13:57:21 compute-0 ceph-osd[84815]: osd.0 7 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Jan 20 13:57:21 compute-0 ceph-osd[84815]: osd.0 7 set_numa_affinity not setting numa affinity
Jan 20 13:57:21 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-osd-0[84811]: 2026-01-20T13:57:21.915+0000 7f4725af3640 -1 osd.0 7 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Jan 20 13:57:21 compute-0 ceph-osd[84815]: osd.0 7 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial
Jan 20 13:57:21 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v44: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 13:57:22 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} v 0) v1
Jan 20 13:57:22 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.101:6800/1226142229,v1:192.168.122.101:6801/1226142229]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Jan 20 13:57:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 13:57:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 13:57:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 13:57:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 13:57:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 13:57:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 13:57:22 compute-0 ceph-mgr[74653]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3314149516; not ready for session (expect reconnect)
Jan 20 13:57:22 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Jan 20 13:57:22 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 20 13:57:22 compute-0 ceph-mgr[74653]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 20 13:57:22 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e7 do_prune osdmap full prune enabled
Jan 20 13:57:22 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e7 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 20 13:57:22 compute-0 ceph-mon[74360]: OSD bench result of 7265.084457 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 20 13:57:22 compute-0 ceph-mon[74360]: from='osd.1 [v2:192.168.122.101:6800/1226142229,v1:192.168.122.101:6801/1226142229]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Jan 20 13:57:22 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 20 13:57:22 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.101:6800/1226142229,v1:192.168.122.101:6801/1226142229]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Jan 20 13:57:22 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e8 e8: 2 total, 1 up, 2 in
Jan 20 13:57:22 compute-0 ceph-mon[74360]: log_channel(cluster) log [INF] : osd.0 [v2:192.168.122.100:6802/3314149516,v1:192.168.122.100:6803/3314149516] boot
Jan 20 13:57:22 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e8: 2 total, 1 up, 2 in
Jan 20 13:57:22 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Jan 20 13:57:22 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 20 13:57:22 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 1, "weight":0.0068, "args": ["host=compute-1", "root=default"]} v 0) v1
Jan 20 13:57:22 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.101:6800/1226142229,v1:192.168.122.101:6801/1226142229]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0068, "args": ["host=compute-1", "root=default"]}]: dispatch
Jan 20 13:57:22 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e8 create-or-move crush item name 'osd.1' initial_weight 0.0068 at location {host=compute-1,root=default}
Jan 20 13:57:22 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Jan 20 13:57:22 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 20 13:57:22 compute-0 ceph-mgr[74653]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 20 13:57:22 compute-0 ceph-osd[84815]: osd.0 8 state: booting -> active
Jan 20 13:57:22 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e8 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 13:57:23 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e8 do_prune osdmap full prune enabled
Jan 20 13:57:23 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e8 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 20 13:57:23 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.101:6800/1226142229,v1:192.168.122.101:6801/1226142229]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0068, "args": ["host=compute-1", "root=default"]}]': finished
Jan 20 13:57:23 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e9 e9: 2 total, 1 up, 2 in
Jan 20 13:57:23 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e9: 2 total, 1 up, 2 in
Jan 20 13:57:23 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Jan 20 13:57:23 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 20 13:57:23 compute-0 ceph-mgr[74653]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 20 13:57:23 compute-0 ceph-mon[74360]: pgmap v44: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 13:57:23 compute-0 ceph-mon[74360]: from='osd.1 [v2:192.168.122.101:6800/1226142229,v1:192.168.122.101:6801/1226142229]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Jan 20 13:57:23 compute-0 ceph-mon[74360]: osd.0 [v2:192.168.122.100:6802/3314149516,v1:192.168.122.100:6803/3314149516] boot
Jan 20 13:57:23 compute-0 ceph-mon[74360]: osdmap e8: 2 total, 1 up, 2 in
Jan 20 13:57:23 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 20 13:57:23 compute-0 ceph-mon[74360]: from='osd.1 [v2:192.168.122.101:6800/1226142229,v1:192.168.122.101:6801/1226142229]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0068, "args": ["host=compute-1", "root=default"]}]: dispatch
Jan 20 13:57:23 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 20 13:57:23 compute-0 ceph-mgr[74653]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/1226142229; not ready for session (expect reconnect)
Jan 20 13:57:23 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Jan 20 13:57:23 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 20 13:57:23 compute-0 ceph-mgr[74653]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 20 13:57:23 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 20 13:57:23 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:57:23 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 20 13:57:23 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:57:23 compute-0 sudo[85481]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:57:23 compute-0 sudo[85481]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:57:23 compute-0 sudo[85481]: pam_unix(sudo:session): session closed for user root
Jan 20 13:57:23 compute-0 sudo[85506]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 13:57:23 compute-0 sudo[85506]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:57:23 compute-0 sudo[85506]: pam_unix(sudo:session): session closed for user root
Jan 20 13:57:23 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v47: 0 pgs: ; 0 B data, 426 MiB used, 6.6 GiB / 7.0 GiB avail
Jan 20 13:57:24 compute-0 sudo[85531]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:57:24 compute-0 sudo[85531]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:57:24 compute-0 sudo[85531]: pam_unix(sudo:session): session closed for user root
Jan 20 13:57:24 compute-0 sudo[85556]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 13:57:24 compute-0 sudo[85556]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:57:24 compute-0 ceph-mgr[74653]: [devicehealth INFO root] creating mgr pool
Jan 20 13:57:24 compute-0 sudo[85556]: pam_unix(sudo:session): session closed for user root
Jan 20 13:57:24 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} v 0) v1
Jan 20 13:57:24 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Jan 20 13:57:24 compute-0 sudo[85583]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:57:24 compute-0 sudo[85583]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:57:24 compute-0 sudo[85583]: pam_unix(sudo:session): session closed for user root
Jan 20 13:57:24 compute-0 sudo[85608]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Jan 20 13:57:24 compute-0 sudo[85608]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:57:24 compute-0 ceph-mon[74360]: from='osd.1 [v2:192.168.122.101:6800/1226142229,v1:192.168.122.101:6801/1226142229]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0068, "args": ["host=compute-1", "root=default"]}]': finished
Jan 20 13:57:24 compute-0 ceph-mon[74360]: osdmap e9: 2 total, 1 up, 2 in
Jan 20 13:57:24 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 20 13:57:24 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 20 13:57:24 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:57:24 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:57:24 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Jan 20 13:57:24 compute-0 ceph-mgr[74653]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/1226142229; not ready for session (expect reconnect)
Jan 20 13:57:24 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Jan 20 13:57:24 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 20 13:57:24 compute-0 ceph-mgr[74653]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 20 13:57:24 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e9 do_prune osdmap full prune enabled
Jan 20 13:57:24 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e9 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 20 13:57:24 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Jan 20 13:57:24 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e10 e10: 2 total, 1 up, 2 in
Jan 20 13:57:24 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e10 crush map has features 3314933000852226048, adjusting msgr requires
Jan 20 13:57:24 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e10 crush map has features 288514051259236352, adjusting msgr requires
Jan 20 13:57:24 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e10 crush map has features 288514051259236352, adjusting msgr requires
Jan 20 13:57:24 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e10 crush map has features 288514051259236352, adjusting msgr requires
Jan 20 13:57:24 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e10: 2 total, 1 up, 2 in
Jan 20 13:57:24 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Jan 20 13:57:24 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 20 13:57:24 compute-0 ceph-mgr[74653]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 20 13:57:24 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} v 0) v1
Jan 20 13:57:24 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Jan 20 13:57:24 compute-0 ceph-osd[84815]: osd.0 10 crush map has features 288514051259236352, adjusting msgr requires for clients
Jan 20 13:57:24 compute-0 ceph-osd[84815]: osd.0 10 crush map has features 288514051259236352 was 288514050185503233, adjusting msgr requires for mons
Jan 20 13:57:24 compute-0 ceph-osd[84815]: osd.0 10 crush map has features 3314933000852226048, adjusting msgr requires for osds
Jan 20 13:57:24 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 20 13:57:24 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:57:25 compute-0 podman[85706]: 2026-01-20 13:57:25.148229886 +0000 UTC m=+0.062217507 container exec a602f19ce9ef2d4922e558036fcbd51fac25abd19d28d7df857e5fbe8f959ba3 (image=quay.io/ceph/ceph:v18, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 13:57:25 compute-0 podman[85706]: 2026-01-20 13:57:25.243271549 +0000 UTC m=+0.157259180 container exec_died a602f19ce9ef2d4922e558036fcbd51fac25abd19d28d7df857e5fbe8f959ba3 (image=quay.io/ceph/ceph:v18, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 20 13:57:25 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 20 13:57:25 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:57:25 compute-0 sudo[85608]: pam_unix(sudo:session): session closed for user root
Jan 20 13:57:25 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 13:57:25 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:57:25 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 13:57:25 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:57:25 compute-0 sudo[85792]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:57:25 compute-0 sudo[85792]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:57:25 compute-0 sudo[85792]: pam_unix(sudo:session): session closed for user root
Jan 20 13:57:25 compute-0 ceph-mgr[74653]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/1226142229; not ready for session (expect reconnect)
Jan 20 13:57:25 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Jan 20 13:57:25 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 20 13:57:25 compute-0 ceph-mgr[74653]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 20 13:57:25 compute-0 ceph-mon[74360]: purged_snaps scrub starts
Jan 20 13:57:25 compute-0 ceph-mon[74360]: purged_snaps scrub ok
Jan 20 13:57:25 compute-0 ceph-mon[74360]: pgmap v47: 0 pgs: ; 0 B data, 426 MiB used, 6.6 GiB / 7.0 GiB avail
Jan 20 13:57:25 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 20 13:57:25 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Jan 20 13:57:25 compute-0 ceph-mon[74360]: osdmap e10: 2 total, 1 up, 2 in
Jan 20 13:57:25 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 20 13:57:25 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Jan 20 13:57:25 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:57:25 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:57:25 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:57:25 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:57:25 compute-0 sudo[85817]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 13:57:25 compute-0 sudo[85817]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:57:25 compute-0 sudo[85817]: pam_unix(sudo:session): session closed for user root
Jan 20 13:57:25 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 20 13:57:25 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:57:25 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 20 13:57:25 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:57:25 compute-0 sudo[85842]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:57:25 compute-0 sudo[85842]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:57:25 compute-0 sudo[85842]: pam_unix(sudo:session): session closed for user root
Jan 20 13:57:25 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e10 do_prune osdmap full prune enabled
Jan 20 13:57:25 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Jan 20 13:57:25 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e11 e11: 2 total, 1 up, 2 in
Jan 20 13:57:25 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e11: 2 total, 1 up, 2 in
Jan 20 13:57:25 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Jan 20 13:57:25 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 20 13:57:25 compute-0 ceph-mgr[74653]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 20 13:57:25 compute-0 sudo[85867]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 13:57:25 compute-0 sudo[85867]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:57:25 compute-0 sshd-session[85477]: Connection closed by authenticating user root 159.223.5.14 port 48910 [preauth]
Jan 20 13:57:25 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v50: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 6.6 GiB / 7.0 GiB avail
Jan 20 13:57:26 compute-0 sudo[85867]: pam_unix(sudo:session): session closed for user root
Jan 20 13:57:26 compute-0 sudo[85924]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:57:26 compute-0 sudo[85924]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:57:26 compute-0 sudo[85924]: pam_unix(sudo:session): session closed for user root
Jan 20 13:57:26 compute-0 sudo[85949]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 13:57:26 compute-0 sudo[85949]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:57:26 compute-0 sudo[85949]: pam_unix(sudo:session): session closed for user root
Jan 20 13:57:26 compute-0 sudo[85974]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:57:26 compute-0 sudo[85974]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:57:26 compute-0 sudo[85974]: pam_unix(sudo:session): session closed for user root
Jan 20 13:57:26 compute-0 sudo[85999]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- inventory --format=json-pretty --filter-for-batch
Jan 20 13:57:26 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 20 13:57:26 compute-0 sudo[85999]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:57:26 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:57:26 compute-0 ceph-mgr[74653]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/1226142229; not ready for session (expect reconnect)
Jan 20 13:57:26 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Jan 20 13:57:26 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 20 13:57:26 compute-0 ceph-mgr[74653]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 20 13:57:26 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 20 13:57:26 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:57:26 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:57:26 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Jan 20 13:57:26 compute-0 ceph-mon[74360]: osdmap e11: 2 total, 1 up, 2 in
Jan 20 13:57:26 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 20 13:57:26 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:57:26 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 20 13:57:26 compute-0 podman[86065]: 2026-01-20 13:57:26.94850162 +0000 UTC m=+0.045630662 container create ae9515ab5af5a06d5068eaaa2bce4a00147c2c61abcdbdc3bc8808f2832db654 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_raman, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 20 13:57:26 compute-0 systemd[1]: Started libpod-conmon-ae9515ab5af5a06d5068eaaa2bce4a00147c2c61abcdbdc3bc8808f2832db654.scope.
Jan 20 13:57:27 compute-0 systemd[1]: Started libcrun container.
Jan 20 13:57:27 compute-0 podman[86065]: 2026-01-20 13:57:27.018987453 +0000 UTC m=+0.116116495 container init ae9515ab5af5a06d5068eaaa2bce4a00147c2c61abcdbdc3bc8808f2832db654 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_raman, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 20 13:57:27 compute-0 podman[86065]: 2026-01-20 13:57:26.929219019 +0000 UTC m=+0.026348101 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 13:57:27 compute-0 podman[86065]: 2026-01-20 13:57:27.026222725 +0000 UTC m=+0.123351767 container start ae9515ab5af5a06d5068eaaa2bce4a00147c2c61abcdbdc3bc8808f2832db654 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_raman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 20 13:57:27 compute-0 podman[86065]: 2026-01-20 13:57:27.02978838 +0000 UTC m=+0.126917442 container attach ae9515ab5af5a06d5068eaaa2bce4a00147c2c61abcdbdc3bc8808f2832db654 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_raman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True)
Jan 20 13:57:27 compute-0 focused_raman[86082]: 167 167
Jan 20 13:57:27 compute-0 systemd[1]: libpod-ae9515ab5af5a06d5068eaaa2bce4a00147c2c61abcdbdc3bc8808f2832db654.scope: Deactivated successfully.
Jan 20 13:57:27 compute-0 podman[86065]: 2026-01-20 13:57:27.032480091 +0000 UTC m=+0.129609133 container died ae9515ab5af5a06d5068eaaa2bce4a00147c2c61abcdbdc3bc8808f2832db654 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_raman, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 13:57:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-c586a9f4b57c3a4ad447518d10db66912db97ac758c2d613ad2dd80a7dffa4bd-merged.mount: Deactivated successfully.
Jan 20 13:57:27 compute-0 podman[86065]: 2026-01-20 13:57:27.082369386 +0000 UTC m=+0.179498428 container remove ae9515ab5af5a06d5068eaaa2bce4a00147c2c61abcdbdc3bc8808f2832db654 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_raman, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True)
Jan 20 13:57:27 compute-0 systemd[1]: libpod-conmon-ae9515ab5af5a06d5068eaaa2bce4a00147c2c61abcdbdc3bc8808f2832db654.scope: Deactivated successfully.
Jan 20 13:57:27 compute-0 podman[86108]: 2026-01-20 13:57:27.277690875 +0000 UTC m=+0.047339799 container create b5b39f14deb4002694b6f10f7613a882c4d807084c4912d64091c4868b9dba72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_black, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 13:57:27 compute-0 systemd[1]: Started libpod-conmon-b5b39f14deb4002694b6f10f7613a882c4d807084c4912d64091c4868b9dba72.scope.
Jan 20 13:57:27 compute-0 systemd[1]: Started libcrun container.
Jan 20 13:57:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb585cfb5a80039f75386221427dd6c36fd10440dbaf6a0422ae784804ed49f2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 13:57:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb585cfb5a80039f75386221427dd6c36fd10440dbaf6a0422ae784804ed49f2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 13:57:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb585cfb5a80039f75386221427dd6c36fd10440dbaf6a0422ae784804ed49f2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 13:57:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb585cfb5a80039f75386221427dd6c36fd10440dbaf6a0422ae784804ed49f2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 13:57:27 compute-0 podman[86108]: 2026-01-20 13:57:27.256285675 +0000 UTC m=+0.025934629 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 13:57:27 compute-0 podman[86108]: 2026-01-20 13:57:27.353827837 +0000 UTC m=+0.123476791 container init b5b39f14deb4002694b6f10f7613a882c4d807084c4912d64091c4868b9dba72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_black, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 13:57:27 compute-0 podman[86108]: 2026-01-20 13:57:27.365311601 +0000 UTC m=+0.134960555 container start b5b39f14deb4002694b6f10f7613a882c4d807084c4912d64091c4868b9dba72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_black, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Jan 20 13:57:27 compute-0 podman[86108]: 2026-01-20 13:57:27.369297087 +0000 UTC m=+0.138946041 container attach b5b39f14deb4002694b6f10f7613a882c4d807084c4912d64091c4868b9dba72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_black, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 20 13:57:27 compute-0 ceph-mgr[74653]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/1226142229; not ready for session (expect reconnect)
Jan 20 13:57:27 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Jan 20 13:57:27 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 20 13:57:27 compute-0 ceph-mgr[74653]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 20 13:57:27 compute-0 ceph-mon[74360]: pgmap v50: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 6.6 GiB / 7.0 GiB avail
Jan 20 13:57:27 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 20 13:57:27 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e11 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 13:57:27 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v51: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 6.6 GiB / 7.0 GiB avail
Jan 20 13:57:28 compute-0 tender_black[86124]: [
Jan 20 13:57:28 compute-0 tender_black[86124]:     {
Jan 20 13:57:28 compute-0 tender_black[86124]:         "available": false,
Jan 20 13:57:28 compute-0 tender_black[86124]:         "ceph_device": false,
Jan 20 13:57:28 compute-0 tender_black[86124]:         "device_id": "QEMU_DVD-ROM_QM00001",
Jan 20 13:57:28 compute-0 tender_black[86124]:         "lsm_data": {},
Jan 20 13:57:28 compute-0 tender_black[86124]:         "lvs": [],
Jan 20 13:57:28 compute-0 tender_black[86124]:         "path": "/dev/sr0",
Jan 20 13:57:28 compute-0 tender_black[86124]:         "rejected_reasons": [
Jan 20 13:57:28 compute-0 tender_black[86124]:             "Insufficient space (<5GB)",
Jan 20 13:57:28 compute-0 tender_black[86124]:             "Has a FileSystem"
Jan 20 13:57:28 compute-0 tender_black[86124]:         ],
Jan 20 13:57:28 compute-0 tender_black[86124]:         "sys_api": {
Jan 20 13:57:28 compute-0 tender_black[86124]:             "actuators": null,
Jan 20 13:57:28 compute-0 tender_black[86124]:             "device_nodes": "sr0",
Jan 20 13:57:28 compute-0 tender_black[86124]:             "devname": "sr0",
Jan 20 13:57:28 compute-0 tender_black[86124]:             "human_readable_size": "482.00 KB",
Jan 20 13:57:28 compute-0 tender_black[86124]:             "id_bus": "ata",
Jan 20 13:57:28 compute-0 tender_black[86124]:             "model": "QEMU DVD-ROM",
Jan 20 13:57:28 compute-0 tender_black[86124]:             "nr_requests": "2",
Jan 20 13:57:28 compute-0 tender_black[86124]:             "parent": "/dev/sr0",
Jan 20 13:57:28 compute-0 tender_black[86124]:             "partitions": {},
Jan 20 13:57:28 compute-0 tender_black[86124]:             "path": "/dev/sr0",
Jan 20 13:57:28 compute-0 tender_black[86124]:             "removable": "1",
Jan 20 13:57:28 compute-0 tender_black[86124]:             "rev": "2.5+",
Jan 20 13:57:28 compute-0 tender_black[86124]:             "ro": "0",
Jan 20 13:57:28 compute-0 tender_black[86124]:             "rotational": "1",
Jan 20 13:57:28 compute-0 tender_black[86124]:             "sas_address": "",
Jan 20 13:57:28 compute-0 tender_black[86124]:             "sas_device_handle": "",
Jan 20 13:57:28 compute-0 tender_black[86124]:             "scheduler_mode": "mq-deadline",
Jan 20 13:57:28 compute-0 tender_black[86124]:             "sectors": 0,
Jan 20 13:57:28 compute-0 tender_black[86124]:             "sectorsize": "2048",
Jan 20 13:57:28 compute-0 tender_black[86124]:             "size": 493568.0,
Jan 20 13:57:28 compute-0 tender_black[86124]:             "support_discard": "2048",
Jan 20 13:57:28 compute-0 tender_black[86124]:             "type": "disk",
Jan 20 13:57:28 compute-0 tender_black[86124]:             "vendor": "QEMU"
Jan 20 13:57:28 compute-0 tender_black[86124]:         }
Jan 20 13:57:28 compute-0 tender_black[86124]:     }
Jan 20 13:57:28 compute-0 tender_black[86124]: ]
Jan 20 13:57:28 compute-0 systemd[1]: libpod-b5b39f14deb4002694b6f10f7613a882c4d807084c4912d64091c4868b9dba72.scope: Deactivated successfully.
Jan 20 13:57:28 compute-0 podman[86108]: 2026-01-20 13:57:28.550614754 +0000 UTC m=+1.320263668 container died b5b39f14deb4002694b6f10f7613a882c4d807084c4912d64091c4868b9dba72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_black, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 20 13:57:28 compute-0 systemd[1]: libpod-b5b39f14deb4002694b6f10f7613a882c4d807084c4912d64091c4868b9dba72.scope: Consumed 1.206s CPU time.
Jan 20 13:57:28 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 20 13:57:28 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:57:28 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 20 13:57:28 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:57:28 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 20 13:57:28 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:57:28 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 20 13:57:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-eb585cfb5a80039f75386221427dd6c36fd10440dbaf6a0422ae784804ed49f2-merged.mount: Deactivated successfully.
Jan 20 13:57:28 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:57:28 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0) v1
Jan 20 13:57:28 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Jan 20 13:57:28 compute-0 ceph-mgr[74653]: [cephadm INFO root] Adjusting osd_memory_target on compute-1 to  5247M
Jan 20 13:57:28 compute-0 ceph-mgr[74653]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-1 to  5247M
Jan 20 13:57:28 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) v1
Jan 20 13:57:28 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:57:28 compute-0 podman[86108]: 2026-01-20 13:57:28.619349899 +0000 UTC m=+1.388998823 container remove b5b39f14deb4002694b6f10f7613a882c4d807084c4912d64091c4868b9dba72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_black, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 20 13:57:28 compute-0 systemd[1]: libpod-conmon-b5b39f14deb4002694b6f10f7613a882c4d807084c4912d64091c4868b9dba72.scope: Deactivated successfully.
Jan 20 13:57:28 compute-0 sudo[85999]: pam_unix(sudo:session): session closed for user root
Jan 20 13:57:28 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 13:57:28 compute-0 ceph-mgr[74653]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/1226142229; not ready for session (expect reconnect)
Jan 20 13:57:28 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:57:28 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Jan 20 13:57:28 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 20 13:57:28 compute-0 ceph-mgr[74653]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 20 13:57:28 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 13:57:28 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:57:28 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 13:57:28 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:57:28 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 13:57:28 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:57:28 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0) v1
Jan 20 13:57:28 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Jan 20 13:57:28 compute-0 ceph-mgr[74653]: [cephadm INFO root] Adjusting osd_memory_target on compute-0 to 127.9M
Jan 20 13:57:28 compute-0 ceph-mgr[74653]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-0 to 127.9M
Jan 20 13:57:28 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) v1
Jan 20 13:57:28 compute-0 ceph-mgr[74653]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-0 to 134209126: error parsing value: Value '134209126' is below minimum 939524096
Jan 20 13:57:28 compute-0 ceph-mgr[74653]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-0 to 134209126: error parsing value: Value '134209126' is below minimum 939524096
Jan 20 13:57:29 compute-0 ceph-mon[74360]: pgmap v51: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 6.6 GiB / 7.0 GiB avail
Jan 20 13:57:29 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:57:29 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:57:29 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:57:29 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:57:29 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Jan 20 13:57:29 compute-0 ceph-mon[74360]: Adjusting osd_memory_target on compute-1 to  5247M
Jan 20 13:57:29 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:57:29 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:57:29 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 20 13:57:29 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:57:29 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:57:29 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:57:29 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Jan 20 13:57:29 compute-0 ceph-mon[74360]: Adjusting osd_memory_target on compute-0 to 127.9M
Jan 20 13:57:29 compute-0 ceph-mon[74360]: Unable to set osd_memory_target on compute-0 to 134209126: error parsing value: Value '134209126' is below minimum 939524096
Jan 20 13:57:29 compute-0 ceph-mgr[74653]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/1226142229; not ready for session (expect reconnect)
Jan 20 13:57:29 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Jan 20 13:57:29 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 20 13:57:29 compute-0 ceph-mgr[74653]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 20 13:57:29 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v52: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 6.6 GiB / 7.0 GiB avail
Jan 20 13:57:30 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e11 do_prune osdmap full prune enabled
Jan 20 13:57:30 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e12 e12: 2 total, 2 up, 2 in
Jan 20 13:57:30 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 20 13:57:30 compute-0 ceph-mon[74360]: OSD bench result of 2720.011715 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 20 13:57:30 compute-0 ceph-mon[74360]: log_channel(cluster) log [INF] : osd.1 [v2:192.168.122.101:6800/1226142229,v1:192.168.122.101:6801/1226142229] boot
Jan 20 13:57:30 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e12: 2 total, 2 up, 2 in
Jan 20 13:57:30 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Jan 20 13:57:30 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 20 13:57:31 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e12 do_prune osdmap full prune enabled
Jan 20 13:57:31 compute-0 ceph-mon[74360]: pgmap v52: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 6.6 GiB / 7.0 GiB avail
Jan 20 13:57:31 compute-0 ceph-mon[74360]: osd.1 [v2:192.168.122.101:6800/1226142229,v1:192.168.122.101:6801/1226142229] boot
Jan 20 13:57:31 compute-0 ceph-mon[74360]: osdmap e12: 2 total, 2 up, 2 in
Jan 20 13:57:31 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 20 13:57:31 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e13 e13: 2 total, 2 up, 2 in
Jan 20 13:57:31 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e13: 2 total, 2 up, 2 in
Jan 20 13:57:31 compute-0 ceph-mgr[74653]: [devicehealth INFO root] creating main.db for devicehealth
Jan 20 13:57:31 compute-0 ceph-mgr[74653]: [devicehealth INFO root] Check health
Jan 20 13:57:31 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Jan 20 13:57:31 compute-0 sudo[87312]:     ceph : PWD=/ ; USER=root ; COMMAND=/usr/sbin/smartctl -x --json=o /dev/vda
Jan 20 13:57:31 compute-0 sudo[87312]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Jan 20 13:57:31 compute-0 sudo[87312]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=167)
Jan 20 13:57:31 compute-0 sudo[87312]: pam_unix(sudo:session): session closed for user root
Jan 20 13:57:31 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Jan 20 13:57:31 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Jan 20 13:57:31 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Jan 20 13:57:31 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v55: 1 pgs: 1 creating+peering; 0 B data, 853 MiB used, 13 GiB / 14 GiB avail
Jan 20 13:57:32 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e13 do_prune osdmap full prune enabled
Jan 20 13:57:32 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e14 e14: 2 total, 2 up, 2 in
Jan 20 13:57:32 compute-0 ceph-mon[74360]: osdmap e13: 2 total, 2 up, 2 in
Jan 20 13:57:32 compute-0 ceph-mon[74360]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Jan 20 13:57:32 compute-0 ceph-mon[74360]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Jan 20 13:57:32 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Jan 20 13:57:32 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e14: 2 total, 2 up, 2 in
Jan 20 13:57:32 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.wookjv(active, since 100s)
Jan 20 13:57:32 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e14 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 13:57:33 compute-0 ceph-mon[74360]: pgmap v55: 1 pgs: 1 creating+peering; 0 B data, 853 MiB used, 13 GiB / 14 GiB avail
Jan 20 13:57:33 compute-0 ceph-mon[74360]: osdmap e14: 2 total, 2 up, 2 in
Jan 20 13:57:33 compute-0 ceph-mon[74360]: mgrmap e9: compute-0.wookjv(active, since 100s)
Jan 20 13:57:33 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v57: 1 pgs: 1 creating+peering; 0 B data, 453 MiB used, 14 GiB / 14 GiB avail
Jan 20 13:57:35 compute-0 ceph-mon[74360]: pgmap v57: 1 pgs: 1 creating+peering; 0 B data, 453 MiB used, 14 GiB / 14 GiB avail
Jan 20 13:57:35 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v58: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 14 GiB / 14 GiB avail
Jan 20 13:57:37 compute-0 ceph-mon[74360]: pgmap v58: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 14 GiB / 14 GiB avail
Jan 20 13:57:37 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e14 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 13:57:37 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v59: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 14 GiB / 14 GiB avail
Jan 20 13:57:38 compute-0 sudo[87338]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zwhpnjgxyylzxkcbrbmniczecxpyahoe ; /usr/bin/python3'
Jan 20 13:57:38 compute-0 sudo[87338]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:57:38 compute-0 python3[87340]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 13:57:38 compute-0 podman[87342]: 2026-01-20 13:57:38.436297822 +0000 UTC m=+0.036102120 container create 5f4a864a287761f82eb44737f60b7c7d4a7b6e89fdf10c04eb4615c63a586e0f (image=quay.io/ceph/ceph:v18, name=kind_joliot, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 13:57:38 compute-0 systemd[1]: Started libpod-conmon-5f4a864a287761f82eb44737f60b7c7d4a7b6e89fdf10c04eb4615c63a586e0f.scope.
Jan 20 13:57:38 compute-0 systemd[1]: Started libcrun container.
Jan 20 13:57:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6cf8bb962ebd393137629315da35a8cbb5863bf13297ffe2dfee85f084245c7a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 13:57:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6cf8bb962ebd393137629315da35a8cbb5863bf13297ffe2dfee85f084245c7a/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 20 13:57:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6cf8bb962ebd393137629315da35a8cbb5863bf13297ffe2dfee85f084245c7a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 13:57:38 compute-0 podman[87342]: 2026-01-20 13:57:38.50961509 +0000 UTC m=+0.109419408 container init 5f4a864a287761f82eb44737f60b7c7d4a7b6e89fdf10c04eb4615c63a586e0f (image=quay.io/ceph/ceph:v18, name=kind_joliot, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 20 13:57:38 compute-0 podman[87342]: 2026-01-20 13:57:38.515456755 +0000 UTC m=+0.115261053 container start 5f4a864a287761f82eb44737f60b7c7d4a7b6e89fdf10c04eb4615c63a586e0f (image=quay.io/ceph/ceph:v18, name=kind_joliot, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 13:57:38 compute-0 podman[87342]: 2026-01-20 13:57:38.421203671 +0000 UTC m=+0.021007989 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 20 13:57:38 compute-0 podman[87342]: 2026-01-20 13:57:38.518872925 +0000 UTC m=+0.118677243 container attach 5f4a864a287761f82eb44737f60b7c7d4a7b6e89fdf10c04eb4615c63a586e0f (image=quay.io/ceph/ceph:v18, name=kind_joliot, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 13:57:39 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Jan 20 13:57:39 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2133122133' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Jan 20 13:57:39 compute-0 kind_joliot[87359]: 
Jan 20 13:57:39 compute-0 kind_joliot[87359]: {"fsid":"e399cf45-e6b6-5393-99f1-75c601d3f188","health":{"status":"HEALTH_WARN","checks":{"CEPHADM_APPLY_SPEC_FAIL":{"severity":"HEALTH_WARN","summary":{"message":"Failed to apply 2 service(s): mon,mgr","count":2},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":151,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":14,"num_osds":2,"num_up_osds":2,"osd_up_since":1768917450,"num_in_osds":2,"osd_in_since":1768917425,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":1}],"num_pgs":1,"num_pools":1,"num_objects":2,"data_bytes":459280,"bytes_used":475226112,"bytes_avail":14548770816,"bytes_total":15023996928},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2026-01-20T13:56:54.382913+0000","services":{}},"progress_events":{}}
Jan 20 13:57:39 compute-0 systemd[1]: libpod-5f4a864a287761f82eb44737f60b7c7d4a7b6e89fdf10c04eb4615c63a586e0f.scope: Deactivated successfully.
Jan 20 13:57:39 compute-0 podman[87342]: 2026-01-20 13:57:39.106869173 +0000 UTC m=+0.706673471 container died 5f4a864a287761f82eb44737f60b7c7d4a7b6e89fdf10c04eb4615c63a586e0f (image=quay.io/ceph/ceph:v18, name=kind_joliot, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Jan 20 13:57:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-6cf8bb962ebd393137629315da35a8cbb5863bf13297ffe2dfee85f084245c7a-merged.mount: Deactivated successfully.
Jan 20 13:57:39 compute-0 podman[87342]: 2026-01-20 13:57:39.142935621 +0000 UTC m=+0.742739949 container remove 5f4a864a287761f82eb44737f60b7c7d4a7b6e89fdf10c04eb4615c63a586e0f (image=quay.io/ceph/ceph:v18, name=kind_joliot, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Jan 20 13:57:39 compute-0 systemd[1]: libpod-conmon-5f4a864a287761f82eb44737f60b7c7d4a7b6e89fdf10c04eb4615c63a586e0f.scope: Deactivated successfully.
Jan 20 13:57:39 compute-0 sudo[87338]: pam_unix(sudo:session): session closed for user root
Jan 20 13:57:39 compute-0 sudo[87421]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uazzppjcyvgjkeencaiytyrejqpvectz ; /usr/bin/python3'
Jan 20 13:57:39 compute-0 sudo[87421]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:57:39 compute-0 python3[87423]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create vms  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 13:57:39 compute-0 ceph-mon[74360]: pgmap v59: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 14 GiB / 14 GiB avail
Jan 20 13:57:39 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/2133122133' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Jan 20 13:57:39 compute-0 podman[87424]: 2026-01-20 13:57:39.675648849 +0000 UTC m=+0.053280786 container create d574b6d7b9fb46c18aadbc9a851fc02fa4020c95ecace206da31c350b2786553 (image=quay.io/ceph/ceph:v18, name=interesting_lederberg, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 13:57:39 compute-0 systemd[1]: Started libpod-conmon-d574b6d7b9fb46c18aadbc9a851fc02fa4020c95ecace206da31c350b2786553.scope.
Jan 20 13:57:39 compute-0 systemd[1]: Started libcrun container.
Jan 20 13:57:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/800683647fdbb68f138c2b417f3f879cc6079f2f5aa2f1e4cb497b128a699b05/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 13:57:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/800683647fdbb68f138c2b417f3f879cc6079f2f5aa2f1e4cb497b128a699b05/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 13:57:39 compute-0 podman[87424]: 2026-01-20 13:57:39.657297302 +0000 UTC m=+0.034929219 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 20 13:57:39 compute-0 podman[87424]: 2026-01-20 13:57:39.758811709 +0000 UTC m=+0.136443636 container init d574b6d7b9fb46c18aadbc9a851fc02fa4020c95ecace206da31c350b2786553 (image=quay.io/ceph/ceph:v18, name=interesting_lederberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 20 13:57:39 compute-0 podman[87424]: 2026-01-20 13:57:39.766105182 +0000 UTC m=+0.143737079 container start d574b6d7b9fb46c18aadbc9a851fc02fa4020c95ecace206da31c350b2786553 (image=quay.io/ceph/ceph:v18, name=interesting_lederberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 20 13:57:39 compute-0 podman[87424]: 2026-01-20 13:57:39.769336378 +0000 UTC m=+0.146968275 container attach d574b6d7b9fb46c18aadbc9a851fc02fa4020c95ecace206da31c350b2786553 (image=quay.io/ceph/ceph:v18, name=interesting_lederberg, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 13:57:39 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v60: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 14 GiB / 14 GiB avail
Jan 20 13:57:40 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Jan 20 13:57:40 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1872028230' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 20 13:57:40 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e14 do_prune osdmap full prune enabled
Jan 20 13:57:40 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/1872028230' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 20 13:57:40 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1872028230' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 20 13:57:40 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e15 e15: 2 total, 2 up, 2 in
Jan 20 13:57:40 compute-0 interesting_lederberg[87440]: pool 'vms' created
Jan 20 13:57:40 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e15: 2 total, 2 up, 2 in
Jan 20 13:57:40 compute-0 systemd[1]: libpod-d574b6d7b9fb46c18aadbc9a851fc02fa4020c95ecace206da31c350b2786553.scope: Deactivated successfully.
Jan 20 13:57:40 compute-0 podman[87424]: 2026-01-20 13:57:40.692034705 +0000 UTC m=+1.069666612 container died d574b6d7b9fb46c18aadbc9a851fc02fa4020c95ecace206da31c350b2786553 (image=quay.io/ceph/ceph:v18, name=interesting_lederberg, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef)
Jan 20 13:57:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-800683647fdbb68f138c2b417f3f879cc6079f2f5aa2f1e4cb497b128a699b05-merged.mount: Deactivated successfully.
Jan 20 13:57:40 compute-0 podman[87424]: 2026-01-20 13:57:40.726426799 +0000 UTC m=+1.104058696 container remove d574b6d7b9fb46c18aadbc9a851fc02fa4020c95ecace206da31c350b2786553 (image=quay.io/ceph/ceph:v18, name=interesting_lederberg, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 13:57:40 compute-0 systemd[1]: libpod-conmon-d574b6d7b9fb46c18aadbc9a851fc02fa4020c95ecace206da31c350b2786553.scope: Deactivated successfully.
Jan 20 13:57:40 compute-0 sudo[87421]: pam_unix(sudo:session): session closed for user root
Jan 20 13:57:40 compute-0 sudo[87501]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-myzephbtngblknhudegcddrujkmujfnu ; /usr/bin/python3'
Jan 20 13:57:40 compute-0 sudo[87501]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:57:41 compute-0 python3[87503]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create volumes  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 13:57:41 compute-0 podman[87504]: 2026-01-20 13:57:41.09132037 +0000 UTC m=+0.045321764 container create ea4c26104f573190fb1dadd2b113fe26da08c68a5fdc45847b1bf8d2d2096729 (image=quay.io/ceph/ceph:v18, name=bold_fermat, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 13:57:41 compute-0 systemd[1]: Started libpod-conmon-ea4c26104f573190fb1dadd2b113fe26da08c68a5fdc45847b1bf8d2d2096729.scope.
Jan 20 13:57:41 compute-0 systemd[1]: Started libcrun container.
Jan 20 13:57:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/364ec49a98d5af4102120e9de106568595c590f55919841e0fd9eb8b13f4a8ba/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 13:57:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/364ec49a98d5af4102120e9de106568595c590f55919841e0fd9eb8b13f4a8ba/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 13:57:41 compute-0 podman[87504]: 2026-01-20 13:57:41.066147272 +0000 UTC m=+0.020148676 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 20 13:57:41 compute-0 podman[87504]: 2026-01-20 13:57:41.16964354 +0000 UTC m=+0.123644934 container init ea4c26104f573190fb1dadd2b113fe26da08c68a5fdc45847b1bf8d2d2096729 (image=quay.io/ceph/ceph:v18, name=bold_fermat, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 13:57:41 compute-0 podman[87504]: 2026-01-20 13:57:41.174580632 +0000 UTC m=+0.128582016 container start ea4c26104f573190fb1dadd2b113fe26da08c68a5fdc45847b1bf8d2d2096729 (image=quay.io/ceph/ceph:v18, name=bold_fermat, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True)
Jan 20 13:57:41 compute-0 podman[87504]: 2026-01-20 13:57:41.178389713 +0000 UTC m=+0.132391117 container attach ea4c26104f573190fb1dadd2b113fe26da08c68a5fdc45847b1bf8d2d2096729 (image=quay.io/ceph/ceph:v18, name=bold_fermat, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 13:57:41 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e15 do_prune osdmap full prune enabled
Jan 20 13:57:41 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Jan 20 13:57:41 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2247952115' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 20 13:57:41 compute-0 ceph-mon[74360]: pgmap v60: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 14 GiB / 14 GiB avail
Jan 20 13:57:41 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/1872028230' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 20 13:57:41 compute-0 ceph-mon[74360]: osdmap e15: 2 total, 2 up, 2 in
Jan 20 13:57:41 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e16 e16: 2 total, 2 up, 2 in
Jan 20 13:57:41 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e16: 2 total, 2 up, 2 in
Jan 20 13:57:41 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v63: 2 pgs: 1 unknown, 1 active+clean; 449 KiB data, 453 MiB used, 14 GiB / 14 GiB avail
Jan 20 13:57:42 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/2247952115' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 20 13:57:42 compute-0 ceph-mon[74360]: osdmap e16: 2 total, 2 up, 2 in
Jan 20 13:57:42 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e16 do_prune osdmap full prune enabled
Jan 20 13:57:42 compute-0 ceph-mon[74360]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 20 13:57:42 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2247952115' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 20 13:57:42 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e17 e17: 2 total, 2 up, 2 in
Jan 20 13:57:42 compute-0 bold_fermat[87518]: pool 'volumes' created
Jan 20 13:57:42 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e17: 2 total, 2 up, 2 in
Jan 20 13:57:42 compute-0 systemd[1]: libpod-ea4c26104f573190fb1dadd2b113fe26da08c68a5fdc45847b1bf8d2d2096729.scope: Deactivated successfully.
Jan 20 13:57:42 compute-0 podman[87504]: 2026-01-20 13:57:42.711745089 +0000 UTC m=+1.665746473 container died ea4c26104f573190fb1dadd2b113fe26da08c68a5fdc45847b1bf8d2d2096729 (image=quay.io/ceph/ceph:v18, name=bold_fermat, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 13:57:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-364ec49a98d5af4102120e9de106568595c590f55919841e0fd9eb8b13f4a8ba-merged.mount: Deactivated successfully.
Jan 20 13:57:42 compute-0 podman[87504]: 2026-01-20 13:57:42.753805277 +0000 UTC m=+1.707806671 container remove ea4c26104f573190fb1dadd2b113fe26da08c68a5fdc45847b1bf8d2d2096729 (image=quay.io/ceph/ceph:v18, name=bold_fermat, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 13:57:42 compute-0 systemd[1]: libpod-conmon-ea4c26104f573190fb1dadd2b113fe26da08c68a5fdc45847b1bf8d2d2096729.scope: Deactivated successfully.
Jan 20 13:57:42 compute-0 sudo[87501]: pam_unix(sudo:session): session closed for user root
Jan 20 13:57:42 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e17 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 13:57:42 compute-0 sudo[87582]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hsjaxzwqtblrxwebcawhuukogrxitpje ; /usr/bin/python3'
Jan 20 13:57:42 compute-0 sudo[87582]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:57:42 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 17 pg[3.0( empty local-lis/les=0/0 n=0 ec=17/17 lis/c=0/0 les/c/f=0/0/0 sis=17) [0] r=0 lpr=17 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:57:43 compute-0 python3[87584]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create backups  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 13:57:43 compute-0 podman[87585]: 2026-01-20 13:57:43.105911288 +0000 UTC m=+0.044866293 container create 8180795da02854fc7af02d35399ecaf64f1f11e07dde6431fae0df9bd65047de (image=quay.io/ceph/ceph:v18, name=practical_wozniak, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 13:57:43 compute-0 systemd[1]: Started libpod-conmon-8180795da02854fc7af02d35399ecaf64f1f11e07dde6431fae0df9bd65047de.scope.
Jan 20 13:57:43 compute-0 systemd[1]: Started libcrun container.
Jan 20 13:57:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/489b8c85637a10c69aa9d0012e0da117fad8e7e730221783e193e844805cffb5/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 13:57:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/489b8c85637a10c69aa9d0012e0da117fad8e7e730221783e193e844805cffb5/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 13:57:43 compute-0 podman[87585]: 2026-01-20 13:57:43.082781804 +0000 UTC m=+0.021736829 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 20 13:57:43 compute-0 podman[87585]: 2026-01-20 13:57:43.189766555 +0000 UTC m=+0.128721610 container init 8180795da02854fc7af02d35399ecaf64f1f11e07dde6431fae0df9bd65047de (image=quay.io/ceph/ceph:v18, name=practical_wozniak, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2)
Jan 20 13:57:43 compute-0 podman[87585]: 2026-01-20 13:57:43.200028709 +0000 UTC m=+0.138983694 container start 8180795da02854fc7af02d35399ecaf64f1f11e07dde6431fae0df9bd65047de (image=quay.io/ceph/ceph:v18, name=practical_wozniak, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507)
Jan 20 13:57:43 compute-0 podman[87585]: 2026-01-20 13:57:43.203296355 +0000 UTC m=+0.142251350 container attach 8180795da02854fc7af02d35399ecaf64f1f11e07dde6431fae0df9bd65047de (image=quay.io/ceph/ceph:v18, name=practical_wozniak, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 20 13:57:43 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Jan 20 13:57:43 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4271762588' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 20 13:57:43 compute-0 ceph-mon[74360]: pgmap v63: 2 pgs: 1 unknown, 1 active+clean; 449 KiB data, 453 MiB used, 14 GiB / 14 GiB avail
Jan 20 13:57:43 compute-0 ceph-mon[74360]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 20 13:57:43 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/2247952115' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 20 13:57:43 compute-0 ceph-mon[74360]: osdmap e17: 2 total, 2 up, 2 in
Jan 20 13:57:43 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/4271762588' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 20 13:57:43 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e17 do_prune osdmap full prune enabled
Jan 20 13:57:43 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4271762588' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 20 13:57:43 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e18 e18: 2 total, 2 up, 2 in
Jan 20 13:57:43 compute-0 practical_wozniak[87600]: pool 'backups' created
Jan 20 13:57:43 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e18: 2 total, 2 up, 2 in
Jan 20 13:57:43 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 18 pg[4.0( empty local-lis/les=0/0 n=0 ec=18/18 lis/c=0/0 les/c/f=0/0/0 sis=18) [0] r=0 lpr=18 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:57:43 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 18 pg[3.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=0/0 les/c/f=0/0/0 sis=17) [0] r=0 lpr=17 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:57:43 compute-0 systemd[1]: libpod-8180795da02854fc7af02d35399ecaf64f1f11e07dde6431fae0df9bd65047de.scope: Deactivated successfully.
Jan 20 13:57:43 compute-0 podman[87627]: 2026-01-20 13:57:43.768169888 +0000 UTC m=+0.028852707 container died 8180795da02854fc7af02d35399ecaf64f1f11e07dde6431fae0df9bd65047de (image=quay.io/ceph/ceph:v18, name=practical_wozniak, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3)
Jan 20 13:57:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-489b8c85637a10c69aa9d0012e0da117fad8e7e730221783e193e844805cffb5-merged.mount: Deactivated successfully.
Jan 20 13:57:43 compute-0 podman[87627]: 2026-01-20 13:57:43.807778311 +0000 UTC m=+0.068461100 container remove 8180795da02854fc7af02d35399ecaf64f1f11e07dde6431fae0df9bd65047de (image=quay.io/ceph/ceph:v18, name=practical_wozniak, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True)
Jan 20 13:57:43 compute-0 systemd[1]: libpod-conmon-8180795da02854fc7af02d35399ecaf64f1f11e07dde6431fae0df9bd65047de.scope: Deactivated successfully.
Jan 20 13:57:43 compute-0 sudo[87582]: pam_unix(sudo:session): session closed for user root
Jan 20 13:57:43 compute-0 sudo[87665]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gbbnduzonxlkfiptwgfbbcesawpximlk ; /usr/bin/python3'
Jan 20 13:57:43 compute-0 sudo[87665]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:57:43 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v66: 4 pgs: 1 creating+peering, 2 unknown, 1 active+clean; 449 KiB data, 453 MiB used, 14 GiB / 14 GiB avail
Jan 20 13:57:44 compute-0 python3[87667]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create images  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 13:57:44 compute-0 podman[87668]: 2026-01-20 13:57:44.152538297 +0000 UTC m=+0.085992145 container create 4e012412add53e48735e6838bb625dee566c7ebb3512eccd63d553dcc5768ea7 (image=quay.io/ceph/ceph:v18, name=distracted_einstein, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 20 13:57:44 compute-0 systemd[1]: Started libpod-conmon-4e012412add53e48735e6838bb625dee566c7ebb3512eccd63d553dcc5768ea7.scope.
Jan 20 13:57:44 compute-0 systemd[1]: Started libcrun container.
Jan 20 13:57:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d0f5f8c3b801615c29025bcac15b83276ade9b04314ec82577511d24e665f84/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 13:57:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d0f5f8c3b801615c29025bcac15b83276ade9b04314ec82577511d24e665f84/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 13:57:44 compute-0 podman[87668]: 2026-01-20 13:57:44.207649321 +0000 UTC m=+0.141103219 container init 4e012412add53e48735e6838bb625dee566c7ebb3512eccd63d553dcc5768ea7 (image=quay.io/ceph/ceph:v18, name=distracted_einstein, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True)
Jan 20 13:57:44 compute-0 podman[87668]: 2026-01-20 13:57:44.213293781 +0000 UTC m=+0.146747629 container start 4e012412add53e48735e6838bb625dee566c7ebb3512eccd63d553dcc5768ea7 (image=quay.io/ceph/ceph:v18, name=distracted_einstein, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 20 13:57:44 compute-0 podman[87668]: 2026-01-20 13:57:44.216704792 +0000 UTC m=+0.150158750 container attach 4e012412add53e48735e6838bb625dee566c7ebb3512eccd63d553dcc5768ea7 (image=quay.io/ceph/ceph:v18, name=distracted_einstein, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 20 13:57:44 compute-0 podman[87668]: 2026-01-20 13:57:44.131989781 +0000 UTC m=+0.065443669 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 20 13:57:44 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 20 13:57:44 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:57:44 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 20 13:57:44 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:57:44 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 20 13:57:44 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:57:44 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 20 13:57:44 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:57:44 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Jan 20 13:57:44 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 20 13:57:44 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 13:57:44 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 13:57:44 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 20 13:57:44 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 13:57:44 compute-0 ceph-mgr[74653]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Jan 20 13:57:44 compute-0 ceph-mgr[74653]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Jan 20 13:57:44 compute-0 sshd-session[87686]: Connection closed by authenticating user root 157.245.78.139 port 50034 [preauth]
Jan 20 13:57:44 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e18 do_prune osdmap full prune enabled
Jan 20 13:57:44 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e19 e19: 2 total, 2 up, 2 in
Jan 20 13:57:44 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e19: 2 total, 2 up, 2 in
Jan 20 13:57:44 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Jan 20 13:57:44 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3286712736' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 20 13:57:44 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/4271762588' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 20 13:57:44 compute-0 ceph-mon[74360]: osdmap e18: 2 total, 2 up, 2 in
Jan 20 13:57:44 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:57:44 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:57:44 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:57:44 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:57:44 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 20 13:57:44 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 13:57:44 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 13:57:44 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 19 pg[4.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=0/0 les/c/f=0/0/0 sis=18) [0] r=0 lpr=18 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:57:45 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e19 do_prune osdmap full prune enabled
Jan 20 13:57:45 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3286712736' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 20 13:57:45 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e20 e20: 2 total, 2 up, 2 in
Jan 20 13:57:45 compute-0 ceph-mgr[74653]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/config/ceph.conf
Jan 20 13:57:45 compute-0 ceph-mgr[74653]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/config/ceph.conf
Jan 20 13:57:45 compute-0 distracted_einstein[87683]: pool 'images' created
Jan 20 13:57:45 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e20: 2 total, 2 up, 2 in
Jan 20 13:57:45 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 20 pg[5.0( empty local-lis/les=0/0 n=0 ec=20/20 lis/c=0/0 les/c/f=0/0/0 sis=20) [0] r=0 lpr=20 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:57:45 compute-0 ceph-mon[74360]: pgmap v66: 4 pgs: 1 creating+peering, 2 unknown, 1 active+clean; 449 KiB data, 453 MiB used, 14 GiB / 14 GiB avail
Jan 20 13:57:45 compute-0 ceph-mon[74360]: Updating compute-2:/etc/ceph/ceph.conf
Jan 20 13:57:45 compute-0 ceph-mon[74360]: osdmap e19: 2 total, 2 up, 2 in
Jan 20 13:57:45 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3286712736' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 20 13:57:45 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3286712736' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 20 13:57:45 compute-0 ceph-mon[74360]: osdmap e20: 2 total, 2 up, 2 in
Jan 20 13:57:45 compute-0 systemd[1]: libpod-4e012412add53e48735e6838bb625dee566c7ebb3512eccd63d553dcc5768ea7.scope: Deactivated successfully.
Jan 20 13:57:45 compute-0 podman[87668]: 2026-01-20 13:57:45.758615325 +0000 UTC m=+1.692069203 container died 4e012412add53e48735e6838bb625dee566c7ebb3512eccd63d553dcc5768ea7 (image=quay.io/ceph/ceph:v18, name=distracted_einstein, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 13:57:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-5d0f5f8c3b801615c29025bcac15b83276ade9b04314ec82577511d24e665f84-merged.mount: Deactivated successfully.
Jan 20 13:57:45 compute-0 podman[87668]: 2026-01-20 13:57:45.819753259 +0000 UTC m=+1.753207137 container remove 4e012412add53e48735e6838bb625dee566c7ebb3512eccd63d553dcc5768ea7 (image=quay.io/ceph/ceph:v18, name=distracted_einstein, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 13:57:45 compute-0 systemd[1]: libpod-conmon-4e012412add53e48735e6838bb625dee566c7ebb3512eccd63d553dcc5768ea7.scope: Deactivated successfully.
Jan 20 13:57:45 compute-0 sudo[87665]: pam_unix(sudo:session): session closed for user root
Jan 20 13:57:45 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v69: 5 pgs: 2 active+clean, 1 creating+peering, 2 unknown; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 20 13:57:46 compute-0 sudo[87747]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-alwahujujjxgahfnrvnqwbwszynvsmka ; /usr/bin/python3'
Jan 20 13:57:46 compute-0 sudo[87747]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:57:46 compute-0 python3[87749]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.meta  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 13:57:46 compute-0 podman[87750]: 2026-01-20 13:57:46.246860443 +0000 UTC m=+0.044336819 container create 13e615d311d661e15e83ec5bded7013d965a787d167c3b7d21dbaa0645c8cdbf (image=quay.io/ceph/ceph:v18, name=festive_mendeleev, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 20 13:57:46 compute-0 systemd[1]: Started libpod-conmon-13e615d311d661e15e83ec5bded7013d965a787d167c3b7d21dbaa0645c8cdbf.scope.
Jan 20 13:57:46 compute-0 podman[87750]: 2026-01-20 13:57:46.2271811 +0000 UTC m=+0.024657496 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 20 13:57:46 compute-0 systemd[1]: Started libcrun container.
Jan 20 13:57:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8615d0978b5d2c974a90708330974e5eb739d0da85a6cfd1636d2e556be223f2/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 13:57:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8615d0978b5d2c974a90708330974e5eb739d0da85a6cfd1636d2e556be223f2/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 13:57:46 compute-0 podman[87750]: 2026-01-20 13:57:46.343854519 +0000 UTC m=+0.141330985 container init 13e615d311d661e15e83ec5bded7013d965a787d167c3b7d21dbaa0645c8cdbf (image=quay.io/ceph/ceph:v18, name=festive_mendeleev, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True)
Jan 20 13:57:46 compute-0 podman[87750]: 2026-01-20 13:57:46.350572627 +0000 UTC m=+0.148049013 container start 13e615d311d661e15e83ec5bded7013d965a787d167c3b7d21dbaa0645c8cdbf (image=quay.io/ceph/ceph:v18, name=festive_mendeleev, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 13:57:46 compute-0 podman[87750]: 2026-01-20 13:57:46.354438079 +0000 UTC m=+0.151914535 container attach 13e615d311d661e15e83ec5bded7013d965a787d167c3b7d21dbaa0645c8cdbf (image=quay.io/ceph/ceph:v18, name=festive_mendeleev, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 20 13:57:46 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e20 do_prune osdmap full prune enabled
Jan 20 13:57:46 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e21 e21: 2 total, 2 up, 2 in
Jan 20 13:57:46 compute-0 ceph-mon[74360]: Updating compute-2:/var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/config/ceph.conf
Jan 20 13:57:46 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e21: 2 total, 2 up, 2 in
Jan 20 13:57:46 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 21 pg[5.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=0/0 les/c/f=0/0/0 sis=20) [0] r=0 lpr=20 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:57:46 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Jan 20 13:57:46 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3530884063' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 20 13:57:46 compute-0 ceph-mgr[74653]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Jan 20 13:57:46 compute-0 ceph-mgr[74653]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Jan 20 13:57:47 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e21 do_prune osdmap full prune enabled
Jan 20 13:57:47 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3530884063' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 20 13:57:47 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e22 e22: 2 total, 2 up, 2 in
Jan 20 13:57:47 compute-0 festive_mendeleev[87765]: pool 'cephfs.cephfs.meta' created
Jan 20 13:57:47 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e22: 2 total, 2 up, 2 in
Jan 20 13:57:47 compute-0 ceph-mon[74360]: pgmap v69: 5 pgs: 2 active+clean, 1 creating+peering, 2 unknown; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 20 13:57:47 compute-0 ceph-mon[74360]: osdmap e21: 2 total, 2 up, 2 in
Jan 20 13:57:47 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3530884063' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 20 13:57:47 compute-0 systemd[1]: libpod-13e615d311d661e15e83ec5bded7013d965a787d167c3b7d21dbaa0645c8cdbf.scope: Deactivated successfully.
Jan 20 13:57:47 compute-0 podman[87792]: 2026-01-20 13:57:47.811480119 +0000 UTC m=+0.031399204 container died 13e615d311d661e15e83ec5bded7013d965a787d167c3b7d21dbaa0645c8cdbf (image=quay.io/ceph/ceph:v18, name=festive_mendeleev, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 20 13:57:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-8615d0978b5d2c974a90708330974e5eb739d0da85a6cfd1636d2e556be223f2-merged.mount: Deactivated successfully.
Jan 20 13:57:47 compute-0 podman[87792]: 2026-01-20 13:57:47.854205685 +0000 UTC m=+0.074124780 container remove 13e615d311d661e15e83ec5bded7013d965a787d167c3b7d21dbaa0645c8cdbf (image=quay.io/ceph/ceph:v18, name=festive_mendeleev, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 13:57:47 compute-0 ceph-mon[74360]: log_channel(cluster) log [WRN] : Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 20 13:57:47 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e22 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 13:57:47 compute-0 systemd[1]: libpod-conmon-13e615d311d661e15e83ec5bded7013d965a787d167c3b7d21dbaa0645c8cdbf.scope: Deactivated successfully.
Jan 20 13:57:47 compute-0 sudo[87747]: pam_unix(sudo:session): session closed for user root
Jan 20 13:57:47 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 22 pg[6.0( empty local-lis/les=0/0 n=0 ec=22/22 lis/c=0/0 les/c/f=0/0/0 sis=22) [0] r=0 lpr=22 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:57:47 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v72: 6 pgs: 2 active+clean, 1 creating+peering, 3 unknown; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 20 13:57:48 compute-0 sudo[87830]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hoaiyqvujxstbneiuvybtdgnhwurujaj ; /usr/bin/python3'
Jan 20 13:57:48 compute-0 sudo[87830]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:57:48 compute-0 ceph-mgr[74653]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/config/ceph.client.admin.keyring
Jan 20 13:57:48 compute-0 ceph-mgr[74653]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/config/ceph.client.admin.keyring
Jan 20 13:57:48 compute-0 python3[87832]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.data  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 13:57:48 compute-0 podman[87833]: 2026-01-20 13:57:48.260689201 +0000 UTC m=+0.040722243 container create 6e66d11a17827db8d0fb6ab0febaa29e7bb53ec30e5b9f73bfdc55d125b79fac (image=quay.io/ceph/ceph:v18, name=wonderful_heisenberg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 13:57:48 compute-0 systemd[1]: Started libpod-conmon-6e66d11a17827db8d0fb6ab0febaa29e7bb53ec30e5b9f73bfdc55d125b79fac.scope.
Jan 20 13:57:48 compute-0 systemd[1]: Started libcrun container.
Jan 20 13:57:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c14686f16e691de70459801ec0538f713d799457c6c280ef2dd5817b19aa836d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 13:57:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c14686f16e691de70459801ec0538f713d799457c6c280ef2dd5817b19aa836d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 13:57:48 compute-0 podman[87833]: 2026-01-20 13:57:48.322900962 +0000 UTC m=+0.102934024 container init 6e66d11a17827db8d0fb6ab0febaa29e7bb53ec30e5b9f73bfdc55d125b79fac (image=quay.io/ceph/ceph:v18, name=wonderful_heisenberg, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 20 13:57:48 compute-0 podman[87833]: 2026-01-20 13:57:48.328970364 +0000 UTC m=+0.109003396 container start 6e66d11a17827db8d0fb6ab0febaa29e7bb53ec30e5b9f73bfdc55d125b79fac (image=quay.io/ceph/ceph:v18, name=wonderful_heisenberg, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 13:57:48 compute-0 podman[87833]: 2026-01-20 13:57:48.331685817 +0000 UTC m=+0.111718849 container attach 6e66d11a17827db8d0fb6ab0febaa29e7bb53ec30e5b9f73bfdc55d125b79fac (image=quay.io/ceph/ceph:v18, name=wonderful_heisenberg, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 13:57:48 compute-0 podman[87833]: 2026-01-20 13:57:48.244652574 +0000 UTC m=+0.024685626 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 20 13:57:48 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e22 do_prune osdmap full prune enabled
Jan 20 13:57:48 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e23 e23: 2 total, 2 up, 2 in
Jan 20 13:57:48 compute-0 ceph-mon[74360]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Jan 20 13:57:48 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3530884063' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 20 13:57:48 compute-0 ceph-mon[74360]: osdmap e22: 2 total, 2 up, 2 in
Jan 20 13:57:48 compute-0 ceph-mon[74360]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 20 13:57:48 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e23: 2 total, 2 up, 2 in
Jan 20 13:57:48 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 23 pg[6.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=0/0 les/c/f=0/0/0 sis=22) [0] r=0 lpr=22 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:57:48 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Jan 20 13:57:48 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3880793223' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 20 13:57:49 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 20 13:57:49 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:57:49 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 20 13:57:49 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:57:49 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 20 13:57:49 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:57:49 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v74: 6 pgs: 1 creating+peering, 5 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 20 13:57:49 compute-0 ceph-mgr[74653]: [progress INFO root] update: starting ev ccd4f199-36ad-45b0-a11d-e43482175a3b (Updating mon deployment (+2 -> 3))
Jan 20 13:57:49 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
Jan 20 13:57:49 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 20 13:57:49 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) v1
Jan 20 13:57:49 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Jan 20 13:57:49 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 13:57:49 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 13:57:49 compute-0 ceph-mgr[74653]: [cephadm INFO cephadm.serve] Deploying daemon mon.compute-2 on compute-2
Jan 20 13:57:49 compute-0 ceph-mgr[74653]: log_channel(cephadm) log [INF] : Deploying daemon mon.compute-2 on compute-2
Jan 20 13:57:49 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e23 do_prune osdmap full prune enabled
Jan 20 13:57:49 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3880793223' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 20 13:57:49 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e24 e24: 2 total, 2 up, 2 in
Jan 20 13:57:49 compute-0 wonderful_heisenberg[87848]: pool 'cephfs.cephfs.data' created
Jan 20 13:57:49 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e24: 2 total, 2 up, 2 in
Jan 20 13:57:49 compute-0 ceph-mon[74360]: pgmap v72: 6 pgs: 2 active+clean, 1 creating+peering, 3 unknown; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 20 13:57:49 compute-0 ceph-mon[74360]: Updating compute-2:/var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/config/ceph.client.admin.keyring
Jan 20 13:57:49 compute-0 ceph-mon[74360]: osdmap e23: 2 total, 2 up, 2 in
Jan 20 13:57:49 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3880793223' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 20 13:57:49 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:57:49 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:57:49 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:57:49 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 20 13:57:49 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Jan 20 13:57:49 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 13:57:49 compute-0 systemd[1]: libpod-6e66d11a17827db8d0fb6ab0febaa29e7bb53ec30e5b9f73bfdc55d125b79fac.scope: Deactivated successfully.
Jan 20 13:57:49 compute-0 podman[87833]: 2026-01-20 13:57:49.793463152 +0000 UTC m=+1.573496204 container died 6e66d11a17827db8d0fb6ab0febaa29e7bb53ec30e5b9f73bfdc55d125b79fac (image=quay.io/ceph/ceph:v18, name=wonderful_heisenberg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 20 13:57:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-c14686f16e691de70459801ec0538f713d799457c6c280ef2dd5817b19aa836d-merged.mount: Deactivated successfully.
Jan 20 13:57:49 compute-0 podman[87833]: 2026-01-20 13:57:49.846939422 +0000 UTC m=+1.626972464 container remove 6e66d11a17827db8d0fb6ab0febaa29e7bb53ec30e5b9f73bfdc55d125b79fac (image=quay.io/ceph/ceph:v18, name=wonderful_heisenberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 20 13:57:49 compute-0 systemd[1]: libpod-conmon-6e66d11a17827db8d0fb6ab0febaa29e7bb53ec30e5b9f73bfdc55d125b79fac.scope: Deactivated successfully.
Jan 20 13:57:49 compute-0 sudo[87830]: pam_unix(sudo:session): session closed for user root
Jan 20 13:57:50 compute-0 sudo[87910]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-emavhmfmjuncifgurtjauzpkehknylif ; /usr/bin/python3'
Jan 20 13:57:50 compute-0 sudo[87910]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:57:50 compute-0 python3[87912]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable vms rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 13:57:50 compute-0 ceph-mon[74360]: log_channel(cluster) log [INF] : Health check cleared: CEPHADM_APPLY_SPEC_FAIL (was: Failed to apply 2 service(s): mon,mgr)
Jan 20 13:57:50 compute-0 podman[87913]: 2026-01-20 13:57:50.247998214 +0000 UTC m=+0.043152318 container create cd75e58bb5e46b7f99f2fd93afdcc0bed1d0ee1adb11d972d5ac73599053cc26 (image=quay.io/ceph/ceph:v18, name=condescending_montalcini, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 13:57:50 compute-0 systemd[1]: Started libpod-conmon-cd75e58bb5e46b7f99f2fd93afdcc0bed1d0ee1adb11d972d5ac73599053cc26.scope.
Jan 20 13:57:50 compute-0 systemd[1]: Started libcrun container.
Jan 20 13:57:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d003571b4c408568e03b86a2024503025a585ae5d4c0e88753d8777d4669c2f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 13:57:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d003571b4c408568e03b86a2024503025a585ae5d4c0e88753d8777d4669c2f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 13:57:50 compute-0 podman[87913]: 2026-01-20 13:57:50.318441035 +0000 UTC m=+0.113595159 container init cd75e58bb5e46b7f99f2fd93afdcc0bed1d0ee1adb11d972d5ac73599053cc26 (image=quay.io/ceph/ceph:v18, name=condescending_montalcini, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef)
Jan 20 13:57:50 compute-0 podman[87913]: 2026-01-20 13:57:50.325268766 +0000 UTC m=+0.120422890 container start cd75e58bb5e46b7f99f2fd93afdcc0bed1d0ee1adb11d972d5ac73599053cc26 (image=quay.io/ceph/ceph:v18, name=condescending_montalcini, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 20 13:57:50 compute-0 podman[87913]: 2026-01-20 13:57:50.231808834 +0000 UTC m=+0.026962958 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 20 13:57:50 compute-0 podman[87913]: 2026-01-20 13:57:50.328570644 +0000 UTC m=+0.123724778 container attach cd75e58bb5e46b7f99f2fd93afdcc0bed1d0ee1adb11d972d5ac73599053cc26 (image=quay.io/ceph/ceph:v18, name=condescending_montalcini, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 13:57:50 compute-0 ceph-mon[74360]: pgmap v74: 6 pgs: 1 creating+peering, 5 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 20 13:57:50 compute-0 ceph-mon[74360]: Deploying daemon mon.compute-2 on compute-2
Jan 20 13:57:50 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3880793223' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 20 13:57:50 compute-0 ceph-mon[74360]: osdmap e24: 2 total, 2 up, 2 in
Jan 20 13:57:50 compute-0 ceph-mon[74360]: Health check cleared: CEPHADM_APPLY_SPEC_FAIL (was: Failed to apply 2 service(s): mon,mgr)
Jan 20 13:57:50 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} v 0) v1
Jan 20 13:57:50 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3950308669' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Jan 20 13:57:51 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v76: 7 pgs: 2 creating+peering, 5 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 20 13:57:51 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e24 do_prune osdmap full prune enabled
Jan 20 13:57:51 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3950308669' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Jan 20 13:57:51 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e25 e25: 2 total, 2 up, 2 in
Jan 20 13:57:51 compute-0 condescending_montalcini[87928]: enabled application 'rbd' on pool 'vms'
Jan 20 13:57:51 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e25: 2 total, 2 up, 2 in
Jan 20 13:57:51 compute-0 systemd[1]: libpod-cd75e58bb5e46b7f99f2fd93afdcc0bed1d0ee1adb11d972d5ac73599053cc26.scope: Deactivated successfully.
Jan 20 13:57:51 compute-0 podman[87913]: 2026-01-20 13:57:51.232341468 +0000 UTC m=+1.027495582 container died cd75e58bb5e46b7f99f2fd93afdcc0bed1d0ee1adb11d972d5ac73599053cc26 (image=quay.io/ceph/ceph:v18, name=condescending_montalcini, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3)
Jan 20 13:57:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-8d003571b4c408568e03b86a2024503025a585ae5d4c0e88753d8777d4669c2f-merged.mount: Deactivated successfully.
Jan 20 13:57:51 compute-0 podman[87913]: 2026-01-20 13:57:51.279726157 +0000 UTC m=+1.074880261 container remove cd75e58bb5e46b7f99f2fd93afdcc0bed1d0ee1adb11d972d5ac73599053cc26 (image=quay.io/ceph/ceph:v18, name=condescending_montalcini, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 20 13:57:51 compute-0 systemd[1]: libpod-conmon-cd75e58bb5e46b7f99f2fd93afdcc0bed1d0ee1adb11d972d5ac73599053cc26.scope: Deactivated successfully.
Jan 20 13:57:51 compute-0 sudo[87910]: pam_unix(sudo:session): session closed for user root
Jan 20 13:57:51 compute-0 sudo[87988]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uqhuxgglxjhrfciurprwruxigxffqyfm ; /usr/bin/python3'
Jan 20 13:57:51 compute-0 sudo[87988]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:57:51 compute-0 python3[87990]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable volumes rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 13:57:51 compute-0 podman[87991]: 2026-01-20 13:57:51.642099022 +0000 UTC m=+0.051620572 container create f3a16836e2b3556bacaa8c005ae974dd3a1a7aca972ea73cbc84814be93fa134 (image=quay.io/ceph/ceph:v18, name=thirsty_darwin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 20 13:57:51 compute-0 systemd[1]: Started libpod-conmon-f3a16836e2b3556bacaa8c005ae974dd3a1a7aca972ea73cbc84814be93fa134.scope.
Jan 20 13:57:51 compute-0 systemd[1]: Started libcrun container.
Jan 20 13:57:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a71fe4fcc12c534ed1ee1b78d22bdf3431bba8630fbc611f0c9600be605a39b1/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 13:57:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a71fe4fcc12c534ed1ee1b78d22bdf3431bba8630fbc611f0c9600be605a39b1/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 13:57:51 compute-0 podman[87991]: 2026-01-20 13:57:51.624112434 +0000 UTC m=+0.033633954 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 20 13:57:51 compute-0 podman[87991]: 2026-01-20 13:57:51.722031925 +0000 UTC m=+0.131553475 container init f3a16836e2b3556bacaa8c005ae974dd3a1a7aca972ea73cbc84814be93fa134 (image=quay.io/ceph/ceph:v18, name=thirsty_darwin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 13:57:51 compute-0 podman[87991]: 2026-01-20 13:57:51.731861996 +0000 UTC m=+0.141383516 container start f3a16836e2b3556bacaa8c005ae974dd3a1a7aca972ea73cbc84814be93fa134 (image=quay.io/ceph/ceph:v18, name=thirsty_darwin, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 20 13:57:51 compute-0 podman[87991]: 2026-01-20 13:57:51.735573115 +0000 UTC m=+0.145094665 container attach f3a16836e2b3556bacaa8c005ae974dd3a1a7aca972ea73cbc84814be93fa134 (image=quay.io/ceph/ceph:v18, name=thirsty_darwin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Jan 20 13:57:51 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3950308669' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Jan 20 13:57:51 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3950308669' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Jan 20 13:57:51 compute-0 ceph-mon[74360]: osdmap e25: 2 total, 2 up, 2 in
Jan 20 13:57:52 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 20 13:57:52 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:57:52 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 20 13:57:52 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:57:52 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Jan 20 13:57:52 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:57:52 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1  adding peer [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to list of hints
Jan 20 13:57:52 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1  adding peer [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to list of hints
Jan 20 13:57:52 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
Jan 20 13:57:52 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 20 13:57:52 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) v1
Jan 20 13:57:52 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Jan 20 13:57:52 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 13:57:52 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 13:57:52 compute-0 ceph-mgr[74653]: [cephadm INFO cephadm.serve] Deploying daemon mon.compute-1 on compute-1
Jan 20 13:57:52 compute-0 ceph-mgr[74653]: log_channel(cephadm) log [INF] : Deploying daemon mon.compute-1 on compute-1
Jan 20 13:57:52 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1  adding peer [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to list of hints
Jan 20 13:57:52 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).monmap v1 adding/updating compute-2 at [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to monitor cluster
Jan 20 13:57:52 compute-0 ceph-mgr[74653]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/1484068279; not ready for session (expect reconnect)
Jan 20 13:57:52 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0) v1
Jan 20 13:57:52 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 20 13:57:52 compute-0 ceph-mgr[74653]: mgr finish mon failed to return metadata for mon.compute-2: (2) No such file or directory
Jan 20 13:57:52 compute-0 ceph-mon[74360]: mon.compute-0@0(probing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Jan 20 13:57:52 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Jan 20 13:57:52 compute-0 ceph-mon[74360]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Jan 20 13:57:52 compute-0 ceph-mon[74360]: paxos.0).electionLogic(5) init, last seen epoch 5, mid-election, bumping
Jan 20 13:57:52 compute-0 ceph-mon[74360]: mon.compute-0@0(electing) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 20 13:57:52 compute-0 ceph-mon[74360]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0) v1
Jan 20 13:57:52 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 20 13:57:52 compute-0 ceph-mgr[74653]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Jan 20 13:57:52 compute-0 ceph-mon[74360]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} v 0) v1
Jan 20 13:57:52 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3099254653' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Jan 20 13:57:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Optimize plan auto_2026-01-20_13:57:52
Jan 20 13:57:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 13:57:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Some PGs (0.285714) are inactive; try again later
Jan 20 13:57:52 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 13:57:52 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 15023996928
Jan 20 13:57:52 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156896 quantized to 1 (current 1)
Jan 20 13:57:52 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 15023996928
Jan 20 13:57:52 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 20 13:57:52 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 15023996928
Jan 20 13:57:52 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 20 13:57:52 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 15023996928
Jan 20 13:57:52 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 20 13:57:52 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 15023996928
Jan 20 13:57:52 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 20 13:57:52 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 15023996928
Jan 20 13:57:52 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 20 13:57:52 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 15023996928
Jan 20 13:57:52 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 20 13:57:52 compute-0 ceph-mon[74360]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} v 0) v1
Jan 20 13:57:52 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Jan 20 13:57:52 compute-0 ceph-mgr[74653]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 13:57:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 13:57:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 13:57:52 compute-0 ceph-mgr[74653]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 13:57:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 13:57:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 13:57:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 13:57:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 13:57:52 compute-0 ceph-mgr[74653]: [progress WARNING root] Starting Global Recovery Event,2 pgs not in active + clean state
Jan 20 13:57:53 compute-0 ceph-mgr[74653]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/1484068279; not ready for session (expect reconnect)
Jan 20 13:57:53 compute-0 ceph-mon[74360]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0) v1
Jan 20 13:57:53 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 20 13:57:53 compute-0 ceph-mgr[74653]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Jan 20 13:57:53 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v78: 7 pgs: 2 creating+peering, 5 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 20 13:57:54 compute-0 ceph-mgr[74653]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/1484068279; not ready for session (expect reconnect)
Jan 20 13:57:54 compute-0 ceph-mon[74360]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0) v1
Jan 20 13:57:54 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 20 13:57:54 compute-0 ceph-mgr[74653]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Jan 20 13:57:54 compute-0 ceph-mon[74360]: mon.compute-0@0(electing) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 20 13:57:54 compute-0 ceph-mon[74360]: mon.compute-0@0(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Jan 20 13:57:54 compute-0 ceph-mon[74360]: mon.compute-0@0(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Jan 20 13:57:54 compute-0 ceph-mon[74360]: mon.compute-0@0(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Jan 20 13:57:54 compute-0 ceph-mgr[74653]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/2066742569; not ready for session (expect reconnect)
Jan 20 13:57:54 compute-0 ceph-mon[74360]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Jan 20 13:57:54 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 20 13:57:54 compute-0 ceph-mgr[74653]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Jan 20 13:57:55 compute-0 ceph-mgr[74653]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/1484068279; not ready for session (expect reconnect)
Jan 20 13:57:55 compute-0 ceph-mon[74360]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0) v1
Jan 20 13:57:55 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 20 13:57:55 compute-0 ceph-mgr[74653]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Jan 20 13:57:55 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v79: 7 pgs: 1 creating+peering, 6 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 20 13:57:55 compute-0 ceph-mgr[74653]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/2066742569; not ready for session (expect reconnect)
Jan 20 13:57:55 compute-0 ceph-mon[74360]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Jan 20 13:57:55 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 20 13:57:55 compute-0 ceph-mgr[74653]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Jan 20 13:57:56 compute-0 ceph-mgr[74653]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/1484068279; not ready for session (expect reconnect)
Jan 20 13:57:56 compute-0 ceph-mon[74360]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0) v1
Jan 20 13:57:56 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 20 13:57:56 compute-0 ceph-mgr[74653]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Jan 20 13:57:56 compute-0 ceph-mon[74360]: mon.compute-0@0(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Jan 20 13:57:56 compute-0 ceph-mgr[74653]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/2066742569; not ready for session (expect reconnect)
Jan 20 13:57:56 compute-0 ceph-mon[74360]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Jan 20 13:57:56 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 20 13:57:56 compute-0 ceph-mgr[74653]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Jan 20 13:57:57 compute-0 ceph-mgr[74653]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/1484068279; not ready for session (expect reconnect)
Jan 20 13:57:57 compute-0 ceph-mon[74360]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0) v1
Jan 20 13:57:57 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 20 13:57:57 compute-0 ceph-mgr[74653]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Jan 20 13:57:57 compute-0 ceph-mon[74360]: paxos.0).electionLogic(7) init, last seen epoch 7, mid-election, bumping
Jan 20 13:57:57 compute-0 ceph-mon[74360]: mon.compute-0@0(electing) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 20 13:57:57 compute-0 ceph-mon[74360]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Jan 20 13:57:57 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : monmap e2: 2 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Jan 20 13:57:57 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 20 13:57:57 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : fsmap 
Jan 20 13:57:57 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e25: 2 total, 2 up, 2 in
Jan 20 13:57:57 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.wookjv(active, since 2m)
Jan 20 13:57:57 compute-0 ceph-mon[74360]: log_channel(cluster) log [WRN] : Health detail: HEALTH_WARN 6 pool(s) do not have an application enabled
Jan 20 13:57:57 compute-0 ceph-mon[74360]: log_channel(cluster) log [WRN] : [WRN] POOL_APP_NOT_ENABLED: 6 pool(s) do not have an application enabled
Jan 20 13:57:57 compute-0 ceph-mon[74360]: log_channel(cluster) log [WRN] :     application not enabled on pool 'vms'
Jan 20 13:57:57 compute-0 ceph-mon[74360]: log_channel(cluster) log [WRN] :     application not enabled on pool 'volumes'
Jan 20 13:57:57 compute-0 ceph-mon[74360]: log_channel(cluster) log [WRN] :     application not enabled on pool 'backups'
Jan 20 13:57:57 compute-0 ceph-mon[74360]: log_channel(cluster) log [WRN] :     application not enabled on pool 'images'
Jan 20 13:57:57 compute-0 ceph-mon[74360]: log_channel(cluster) log [WRN] :     application not enabled on pool 'cephfs.cephfs.meta'
Jan 20 13:57:57 compute-0 ceph-mon[74360]: log_channel(cluster) log [WRN] :     application not enabled on pool 'cephfs.cephfs.data'
Jan 20 13:57:57 compute-0 ceph-mon[74360]: log_channel(cluster) log [WRN] :     use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications.
Jan 20 13:57:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 13:57:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 13:57:57 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:57:57 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v80: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 20 13:57:57 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 20 13:57:57 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:57:57 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Jan 20 13:57:57 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:57:57 compute-0 ceph-mgr[74653]: [progress INFO root] complete: finished ev ccd4f199-36ad-45b0-a11d-e43482175a3b (Updating mon deployment (+2 -> 3))
Jan 20 13:57:57 compute-0 ceph-mgr[74653]: [progress INFO root] Completed event ccd4f199-36ad-45b0-a11d-e43482175a3b (Updating mon deployment (+2 -> 3)) in 8 seconds
Jan 20 13:57:57 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Jan 20 13:57:57 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:57:57 compute-0 ceph-mgr[74653]: [progress INFO root] update: starting ev 41022f99-f23f-49f2-a63d-83424f848bbb (Updating mgr deployment (+2 -> 3))
Jan 20 13:57:57 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e25 do_prune osdmap full prune enabled
Jan 20 13:57:57 compute-0 ceph-mon[74360]: log_channel(cluster) log [WRN] : Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 20 13:57:57 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-2.gunjko", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Jan 20 13:57:57 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.gunjko", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 20 13:57:57 compute-0 ceph-mon[74360]: Deploying daemon mon.compute-1 on compute-1
Jan 20 13:57:57 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Jan 20 13:57:57 compute-0 ceph-mon[74360]: mon.compute-0 calling monitor election
Jan 20 13:57:57 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 20 13:57:57 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3099254653' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Jan 20 13:57:57 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Jan 20 13:57:57 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 20 13:57:57 compute-0 ceph-mon[74360]: pgmap v78: 7 pgs: 2 creating+peering, 5 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 20 13:57:57 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 20 13:57:57 compute-0 ceph-mon[74360]: mon.compute-2 calling monitor election
Jan 20 13:57:57 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 20 13:57:57 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 20 13:57:57 compute-0 ceph-mon[74360]: pgmap v79: 7 pgs: 1 creating+peering, 6 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 20 13:57:57 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 20 13:57:57 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 20 13:57:57 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 20 13:57:57 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 20 13:57:57 compute-0 ceph-mon[74360]: mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Jan 20 13:57:57 compute-0 ceph-mon[74360]: monmap e2: 2 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Jan 20 13:57:57 compute-0 ceph-mon[74360]: fsmap 
Jan 20 13:57:57 compute-0 ceph-mon[74360]: osdmap e25: 2 total, 2 up, 2 in
Jan 20 13:57:57 compute-0 ceph-mon[74360]: mgrmap e9: compute-0.wookjv(active, since 2m)
Jan 20 13:57:57 compute-0 ceph-mon[74360]: Health detail: HEALTH_WARN 6 pool(s) do not have an application enabled
Jan 20 13:57:57 compute-0 ceph-mon[74360]: [WRN] POOL_APP_NOT_ENABLED: 6 pool(s) do not have an application enabled
Jan 20 13:57:57 compute-0 ceph-mon[74360]:     application not enabled on pool 'vms'
Jan 20 13:57:57 compute-0 ceph-mon[74360]:     application not enabled on pool 'volumes'
Jan 20 13:57:57 compute-0 ceph-mon[74360]:     application not enabled on pool 'backups'
Jan 20 13:57:57 compute-0 ceph-mon[74360]:     application not enabled on pool 'images'
Jan 20 13:57:57 compute-0 ceph-mon[74360]:     application not enabled on pool 'cephfs.cephfs.meta'
Jan 20 13:57:57 compute-0 ceph-mon[74360]:     application not enabled on pool 'cephfs.cephfs.data'
Jan 20 13:57:57 compute-0 ceph-mon[74360]:     use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications.
Jan 20 13:57:57 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:57:57 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:57:57 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:57:57 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3099254653' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Jan 20 13:57:57 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Jan 20 13:57:57 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e26 e26: 2 total, 2 up, 2 in
Jan 20 13:57:57 compute-0 thirsty_darwin[88006]: enabled application 'rbd' on pool 'volumes'
Jan 20 13:57:57 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.gunjko", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Jan 20 13:57:57 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e26: 2 total, 2 up, 2 in
Jan 20 13:57:57 compute-0 ceph-mgr[74653]: [progress INFO root] update: starting ev b0a89543-d124-45ca-a00b-91ecf15ac7eb (PG autoscaler increasing pool 2 PGs from 1 to 32)
Jan 20 13:57:57 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} v 0) v1
Jan 20 13:57:57 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Jan 20 13:57:57 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Jan 20 13:57:57 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 20 13:57:57 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 13:57:57 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 13:57:57 compute-0 ceph-mgr[74653]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-2.gunjko on compute-2
Jan 20 13:57:57 compute-0 ceph-mgr[74653]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-2.gunjko on compute-2
Jan 20 13:57:57 compute-0 systemd[1]: libpod-f3a16836e2b3556bacaa8c005ae974dd3a1a7aca972ea73cbc84814be93fa134.scope: Deactivated successfully.
Jan 20 13:57:57 compute-0 podman[87991]: 2026-01-20 13:57:57.29451013 +0000 UTC m=+5.704031680 container died f3a16836e2b3556bacaa8c005ae974dd3a1a7aca972ea73cbc84814be93fa134 (image=quay.io/ceph/ceph:v18, name=thirsty_darwin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 13:57:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-a71fe4fcc12c534ed1ee1b78d22bdf3431bba8630fbc611f0c9600be605a39b1-merged.mount: Deactivated successfully.
Jan 20 13:57:57 compute-0 podman[87991]: 2026-01-20 13:57:57.352141281 +0000 UTC m=+5.761662821 container remove f3a16836e2b3556bacaa8c005ae974dd3a1a7aca972ea73cbc84814be93fa134 (image=quay.io/ceph/ceph:v18, name=thirsty_darwin, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 13:57:57 compute-0 systemd[1]: libpod-conmon-f3a16836e2b3556bacaa8c005ae974dd3a1a7aca972ea73cbc84814be93fa134.scope: Deactivated successfully.
Jan 20 13:57:57 compute-0 sudo[87988]: pam_unix(sudo:session): session closed for user root
Jan 20 13:57:57 compute-0 ceph-mgr[74653]: [progress INFO root] Writing back 3 completed events
Jan 20 13:57:57 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Jan 20 13:57:57 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:57:57 compute-0 ceph-mgr[74653]: [progress INFO root] Completed event 2f5df5fb-6506-46aa-ac9e-331f0b6659de (Global Recovery Event) in 5 seconds
Jan 20 13:57:57 compute-0 sudo[88066]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tcktbrwgmpcxqnnsgukptpvsmeuwpjqq ; /usr/bin/python3'
Jan 20 13:57:57 compute-0 sudo[88066]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:57:57 compute-0 python3[88068]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable backups rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 13:57:57 compute-0 ceph-mgr[74653]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/2066742569; not ready for session (expect reconnect)
Jan 20 13:57:57 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Jan 20 13:57:57 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 20 13:57:57 compute-0 ceph-mgr[74653]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Jan 20 13:57:57 compute-0 podman[88069]: 2026-01-20 13:57:57.691117814 +0000 UTC m=+0.048433397 container create 6170634a7b1908a20dc9a97f1b92b84ac44a1fb61b0f4d324bf42d9288805ad7 (image=quay.io/ceph/ceph:v18, name=agitated_jennings, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 20 13:57:57 compute-0 systemd[1]: Started libpod-conmon-6170634a7b1908a20dc9a97f1b92b84ac44a1fb61b0f4d324bf42d9288805ad7.scope.
Jan 20 13:57:57 compute-0 systemd[1]: Started libcrun container.
Jan 20 13:57:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc278cef591c84943f5198ea2322be8ab9df032d5a492e8673ac0e87dac038db/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 13:57:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc278cef591c84943f5198ea2322be8ab9df032d5a492e8673ac0e87dac038db/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 13:57:57 compute-0 podman[88069]: 2026-01-20 13:57:57.762215663 +0000 UTC m=+0.119531246 container init 6170634a7b1908a20dc9a97f1b92b84ac44a1fb61b0f4d324bf42d9288805ad7 (image=quay.io/ceph/ceph:v18, name=agitated_jennings, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 13:57:57 compute-0 podman[88069]: 2026-01-20 13:57:57.767534824 +0000 UTC m=+0.124850407 container start 6170634a7b1908a20dc9a97f1b92b84ac44a1fb61b0f4d324bf42d9288805ad7 (image=quay.io/ceph/ceph:v18, name=agitated_jennings, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 20 13:57:57 compute-0 podman[88069]: 2026-01-20 13:57:57.675423278 +0000 UTC m=+0.032738881 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 20 13:57:57 compute-0 podman[88069]: 2026-01-20 13:57:57.770919604 +0000 UTC m=+0.128235177 container attach 6170634a7b1908a20dc9a97f1b92b84ac44a1fb61b0f4d324bf42d9288805ad7 (image=quay.io/ceph/ceph:v18, name=agitated_jennings, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 20 13:57:57 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e26 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 13:57:58 compute-0 ceph-mgr[74653]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/1484068279; not ready for session (expect reconnect)
Jan 20 13:57:58 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0) v1
Jan 20 13:57:58 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 20 13:57:58 compute-0 ceph-mon[74360]: pgmap v80: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 20 13:57:58 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:57:58 compute-0 ceph-mon[74360]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 20 13:57:58 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.gunjko", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 20 13:57:58 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3099254653' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Jan 20 13:57:58 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Jan 20 13:57:58 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.gunjko", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Jan 20 13:57:58 compute-0 ceph-mon[74360]: osdmap e26: 2 total, 2 up, 2 in
Jan 20 13:57:58 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Jan 20 13:57:58 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 20 13:57:58 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 13:57:58 compute-0 ceph-mon[74360]: Deploying daemon mgr.compute-2.gunjko on compute-2
Jan 20 13:57:58 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:57:58 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 20 13:57:58 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 20 13:57:58 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e26 do_prune osdmap full prune enabled
Jan 20 13:57:58 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Jan 20 13:57:58 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e27 e27: 2 total, 2 up, 2 in
Jan 20 13:57:58 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e27: 2 total, 2 up, 2 in
Jan 20 13:57:58 compute-0 ceph-mgr[74653]: [progress INFO root] update: starting ev 54d73c73-6d70-4f57-8984-408734761c13 (PG autoscaler increasing pool 3 PGs from 1 to 32)
Jan 20 13:57:58 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} v 0) v1
Jan 20 13:57:58 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Jan 20 13:57:58 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} v 0) v1
Jan 20 13:57:58 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1076842494' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Jan 20 13:57:58 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Jan 20 13:57:58 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).monmap v2 adding/updating compute-1 at [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to monitor cluster
Jan 20 13:57:58 compute-0 ceph-mgr[74653]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/2066742569; not ready for session (expect reconnect)
Jan 20 13:57:58 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Jan 20 13:57:58 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 20 13:57:58 compute-0 ceph-mgr[74653]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Jan 20 13:57:58 compute-0 ceph-mon[74360]: mon.compute-0@0(probing) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} v 0) v1
Jan 20 13:57:58 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Jan 20 13:57:58 compute-0 ceph-mon[74360]: mon.compute-0@0(probing) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} v 0) v1
Jan 20 13:57:58 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1076842494' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Jan 20 13:57:58 compute-0 ceph-mon[74360]: mon.compute-0@0(probing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Jan 20 13:57:58 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Jan 20 13:57:58 compute-0 ceph-mon[74360]: mon.compute-0@0(probing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Jan 20 13:57:58 compute-0 ceph-mgr[74653]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Jan 20 13:57:58 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 20 13:57:58 compute-0 ceph-mon[74360]: mon.compute-0@0(probing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0) v1
Jan 20 13:57:58 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 20 13:57:58 compute-0 ceph-mon[74360]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Jan 20 13:57:58 compute-0 ceph-mon[74360]: paxos.0).electionLogic(10) init, last seen epoch 10
Jan 20 13:57:58 compute-0 ceph-mon[74360]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 20 13:57:59 compute-0 ceph-mgr[74653]: mgr.server handle_report got status from non-daemon mon.compute-2
Jan 20 13:57:59 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv[74649]: 2026-01-20T13:57:59.129+0000 7f4562bc3640 -1 mgr.server handle_report got status from non-daemon mon.compute-2
Jan 20 13:57:59 compute-0 ceph-mon[74360]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 20 13:57:59 compute-0 ceph-mon[74360]: mon.compute-0@0(electing) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 20 13:57:59 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v83: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 20 13:57:59 compute-0 ceph-mon[74360]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} v 0) v1
Jan 20 13:57:59 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 20 13:57:59 compute-0 ceph-mon[74360]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} v 0) v1
Jan 20 13:57:59 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 20 13:57:59 compute-0 ceph-mon[74360]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 20 13:57:59 compute-0 ceph-mgr[74653]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/2066742569; not ready for session (expect reconnect)
Jan 20 13:57:59 compute-0 ceph-mon[74360]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Jan 20 13:57:59 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 20 13:57:59 compute-0 ceph-mgr[74653]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Jan 20 13:57:59 compute-0 ceph-mon[74360]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 20 13:58:00 compute-0 ceph-mon[74360]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 20 13:58:00 compute-0 ceph-mgr[74653]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/2066742569; not ready for session (expect reconnect)
Jan 20 13:58:00 compute-0 ceph-mon[74360]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Jan 20 13:58:00 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 20 13:58:00 compute-0 ceph-mgr[74653]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Jan 20 13:58:01 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v84: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 20 13:58:01 compute-0 ceph-mon[74360]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} v 0) v1
Jan 20 13:58:01 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 20 13:58:01 compute-0 ceph-mon[74360]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} v 0) v1
Jan 20 13:58:01 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 20 13:58:01 compute-0 ceph-mgr[74653]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/2066742569; not ready for session (expect reconnect)
Jan 20 13:58:01 compute-0 ceph-mon[74360]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Jan 20 13:58:01 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 20 13:58:01 compute-0 ceph-mgr[74653]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Jan 20 13:58:02 compute-0 ceph-mon[74360]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 20 13:58:02 compute-0 ceph-mon[74360]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 20 13:58:02 compute-0 ceph-mgr[74653]: [progress INFO root] Writing back 4 completed events
Jan 20 13:58:02 compute-0 ceph-mon[74360]: mon.compute-0@0(electing) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Jan 20 13:58:02 compute-0 ceph-mgr[74653]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/2066742569; not ready for session (expect reconnect)
Jan 20 13:58:02 compute-0 ceph-mon[74360]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Jan 20 13:58:02 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 20 13:58:02 compute-0 ceph-mgr[74653]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Jan 20 13:58:02 compute-0 ceph-mon[74360]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 20 13:58:03 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v85: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 20 13:58:03 compute-0 ceph-mon[74360]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} v 0) v1
Jan 20 13:58:03 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 20 13:58:03 compute-0 ceph-mon[74360]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} v 0) v1
Jan 20 13:58:03 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 20 13:58:03 compute-0 ceph-mon[74360]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 20 13:58:03 compute-0 ceph-mgr[74653]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/2066742569; not ready for session (expect reconnect)
Jan 20 13:58:03 compute-0 ceph-mon[74360]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Jan 20 13:58:03 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 20 13:58:03 compute-0 ceph-mgr[74653]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Jan 20 13:58:03 compute-0 ceph-mon[74360]: paxos.0).electionLogic(11) init, last seen epoch 11, mid-election, bumping
Jan 20 13:58:03 compute-0 ceph-mon[74360]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 20 13:58:03 compute-0 ceph-mon[74360]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Jan 20 13:58:03 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Jan 20 13:58:03 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 20 13:58:03 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : fsmap 
Jan 20 13:58:03 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e27: 2 total, 2 up, 2 in
Jan 20 13:58:03 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.wookjv(active, since 2m)
Jan 20 13:58:03 compute-0 ceph-mon[74360]: log_channel(cluster) log [WRN] : Health detail: HEALTH_WARN 5 pool(s) do not have an application enabled
Jan 20 13:58:03 compute-0 ceph-mon[74360]: log_channel(cluster) log [WRN] : [WRN] POOL_APP_NOT_ENABLED: 5 pool(s) do not have an application enabled
Jan 20 13:58:03 compute-0 ceph-mon[74360]: log_channel(cluster) log [WRN] :     application not enabled on pool 'volumes'
Jan 20 13:58:03 compute-0 ceph-mon[74360]: log_channel(cluster) log [WRN] :     application not enabled on pool 'backups'
Jan 20 13:58:03 compute-0 ceph-mon[74360]: log_channel(cluster) log [WRN] :     application not enabled on pool 'images'
Jan 20 13:58:03 compute-0 ceph-mon[74360]: log_channel(cluster) log [WRN] :     application not enabled on pool 'cephfs.cephfs.meta'
Jan 20 13:58:03 compute-0 ceph-mon[74360]: log_channel(cluster) log [WRN] :     application not enabled on pool 'cephfs.cephfs.data'
Jan 20 13:58:03 compute-0 ceph-mon[74360]: log_channel(cluster) log [WRN] :     use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications.
Jan 20 13:58:03 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:03 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 20 13:58:03 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:03 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:03 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e27 do_prune osdmap full prune enabled
Jan 20 13:58:03 compute-0 ceph-mon[74360]: log_channel(cluster) log [WRN] : Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 20 13:58:03 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Jan 20 13:58:03 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Jan 20 13:58:03 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1076842494' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Jan 20 13:58:03 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Jan 20 13:58:03 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Jan 20 13:58:03 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Jan 20 13:58:03 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Jan 20 13:58:03 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Jan 20 13:58:03 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Jan 20 13:58:03 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e28 e28: 2 total, 2 up, 2 in
Jan 20 13:58:03 compute-0 agitated_jennings[88084]: enabled application 'rbd' on pool 'backups'
Jan 20 13:58:03 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e28: 2 total, 2 up, 2 in
Jan 20 13:58:03 compute-0 ceph-mgr[74653]: [progress INFO root] update: starting ev dff59163-5836-44fe-ac33-93d7d943a1c9 (PG autoscaler increasing pool 4 PGs from 1 to 32)
Jan 20 13:58:03 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} v 0) v1
Jan 20 13:58:03 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Jan 20 13:58:03 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Jan 20 13:58:03 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/1076842494' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Jan 20 13:58:03 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Jan 20 13:58:03 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 20 13:58:03 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 20 13:58:03 compute-0 ceph-mon[74360]: mon.compute-0 calling monitor election
Jan 20 13:58:03 compute-0 ceph-mon[74360]: mon.compute-2 calling monitor election
Jan 20 13:58:03 compute-0 ceph-mon[74360]: pgmap v83: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 20 13:58:03 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 20 13:58:03 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 20 13:58:03 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 20 13:58:03 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 20 13:58:03 compute-0 ceph-mon[74360]: mon.compute-1 calling monitor election
Jan 20 13:58:03 compute-0 ceph-mon[74360]: pgmap v84: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 20 13:58:03 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 20 13:58:03 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 20 13:58:03 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 20 13:58:03 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 20 13:58:03 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 20 13:58:03 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 20 13:58:03 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 20 13:58:03 compute-0 ceph-mon[74360]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Jan 20 13:58:03 compute-0 ceph-mon[74360]: monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Jan 20 13:58:03 compute-0 ceph-mon[74360]: fsmap 
Jan 20 13:58:03 compute-0 ceph-mon[74360]: osdmap e27: 2 total, 2 up, 2 in
Jan 20 13:58:03 compute-0 ceph-mon[74360]: mgrmap e9: compute-0.wookjv(active, since 2m)
Jan 20 13:58:03 compute-0 ceph-mon[74360]: Health detail: HEALTH_WARN 5 pool(s) do not have an application enabled
Jan 20 13:58:03 compute-0 ceph-mon[74360]: [WRN] POOL_APP_NOT_ENABLED: 5 pool(s) do not have an application enabled
Jan 20 13:58:03 compute-0 ceph-mon[74360]:     application not enabled on pool 'volumes'
Jan 20 13:58:03 compute-0 ceph-mon[74360]:     application not enabled on pool 'backups'
Jan 20 13:58:03 compute-0 ceph-mon[74360]:     application not enabled on pool 'images'
Jan 20 13:58:03 compute-0 ceph-mon[74360]:     application not enabled on pool 'cephfs.cephfs.meta'
Jan 20 13:58:03 compute-0 ceph-mon[74360]:     application not enabled on pool 'cephfs.cephfs.data'
Jan 20 13:58:03 compute-0 ceph-mon[74360]:     use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications.
Jan 20 13:58:03 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:03 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:03 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:03 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:03 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-1.oweoeg", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Jan 20 13:58:03 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.oweoeg", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 20 13:58:03 compute-0 systemd[1]: libpod-6170634a7b1908a20dc9a97f1b92b84ac44a1fb61b0f4d324bf42d9288805ad7.scope: Deactivated successfully.
Jan 20 13:58:03 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.oweoeg", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Jan 20 13:58:03 compute-0 podman[88069]: 2026-01-20 13:58:03.778681302 +0000 UTC m=+6.135996945 container died 6170634a7b1908a20dc9a97f1b92b84ac44a1fb61b0f4d324bf42d9288805ad7 (image=quay.io/ceph/ceph:v18, name=agitated_jennings, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 20 13:58:03 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Jan 20 13:58:03 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 20 13:58:03 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 13:58:03 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 13:58:03 compute-0 ceph-mgr[74653]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-1.oweoeg on compute-1
Jan 20 13:58:03 compute-0 ceph-mgr[74653]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-1.oweoeg on compute-1
Jan 20 13:58:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-bc278cef591c84943f5198ea2322be8ab9df032d5a492e8673ac0e87dac038db-merged.mount: Deactivated successfully.
Jan 20 13:58:03 compute-0 systemd[75982]: Starting Mark boot as successful...
Jan 20 13:58:03 compute-0 systemd[75982]: Finished Mark boot as successful.
Jan 20 13:58:03 compute-0 podman[88069]: 2026-01-20 13:58:03.838659135 +0000 UTC m=+6.195974758 container remove 6170634a7b1908a20dc9a97f1b92b84ac44a1fb61b0f4d324bf42d9288805ad7 (image=quay.io/ceph/ceph:v18, name=agitated_jennings, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 13:58:03 compute-0 systemd[1]: libpod-conmon-6170634a7b1908a20dc9a97f1b92b84ac44a1fb61b0f4d324bf42d9288805ad7.scope: Deactivated successfully.
Jan 20 13:58:03 compute-0 sudo[88066]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:03 compute-0 sudo[88146]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vsimomzpoyjjhxunflufraoszjbcsxar ; /usr/bin/python3'
Jan 20 13:58:04 compute-0 sudo[88146]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:58:04 compute-0 python3[88148]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable images rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 13:58:04 compute-0 podman[88149]: 2026-01-20 13:58:04.21614606 +0000 UTC m=+0.063338903 container create f1a30e271d4d6012b827af7ed7fc5f029ddf5ee95c4370c345eb4184a72af628 (image=quay.io/ceph/ceph:v18, name=gifted_lalande, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 13:58:04 compute-0 systemd[1]: Started libpod-conmon-f1a30e271d4d6012b827af7ed7fc5f029ddf5ee95c4370c345eb4184a72af628.scope.
Jan 20 13:58:04 compute-0 systemd[1]: Started libcrun container.
Jan 20 13:58:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d75de1703469fee23ccadf6e367ccebfa3f4700fd6c2d87608d0304f767b6ae/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 13:58:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d75de1703469fee23ccadf6e367ccebfa3f4700fd6c2d87608d0304f767b6ae/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 13:58:04 compute-0 podman[88149]: 2026-01-20 13:58:04.282916844 +0000 UTC m=+0.130109737 container init f1a30e271d4d6012b827af7ed7fc5f029ddf5ee95c4370c345eb4184a72af628 (image=quay.io/ceph/ceph:v18, name=gifted_lalande, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 13:58:04 compute-0 podman[88149]: 2026-01-20 13:58:04.193264823 +0000 UTC m=+0.040457646 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 20 13:58:04 compute-0 podman[88149]: 2026-01-20 13:58:04.290469355 +0000 UTC m=+0.137662198 container start f1a30e271d4d6012b827af7ed7fc5f029ddf5ee95c4370c345eb4184a72af628 (image=quay.io/ceph/ceph:v18, name=gifted_lalande, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 13:58:04 compute-0 podman[88149]: 2026-01-20 13:58:04.294301976 +0000 UTC m=+0.141494829 container attach f1a30e271d4d6012b827af7ed7fc5f029ddf5ee95c4370c345eb4184a72af628 (image=quay.io/ceph/ceph:v18, name=gifted_lalande, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 20 13:58:04 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 28 pg[3.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=28 pruub=11.091842651s) [0] r=0 lpr=28 pi=[17,28)/1 crt=0'0 mlcod 0'0 active pruub 58.810642242s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:04 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 28 pg[3.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=28 pruub=11.091842651s) [0] r=0 lpr=28 pi=[17,28)/1 crt=0'0 mlcod 0'0 unknown pruub 58.810642242s@ mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:04 compute-0 ceph-mgr[74653]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/2066742569; not ready for session (expect reconnect)
Jan 20 13:58:04 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Jan 20 13:58:04 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 20 13:58:04 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e28 do_prune osdmap full prune enabled
Jan 20 13:58:04 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Jan 20 13:58:04 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e29 e29: 2 total, 2 up, 2 in
Jan 20 13:58:04 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e29: 2 total, 2 up, 2 in
Jan 20 13:58:04 compute-0 ceph-mgr[74653]: [progress INFO root] update: starting ev dd154ef0-413c-40dd-a356-43ebad298350 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Jan 20 13:58:04 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"} v 0) v1
Jan 20 13:58:04 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]: dispatch
Jan 20 13:58:04 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 29 pg[3.1f( empty local-lis/les=17/18 n=0 ec=28/17 lis/c=17/17 les/c/f=18/18/0 sis=28) [0] r=0 lpr=28 pi=[17,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:04 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 29 pg[3.1e( empty local-lis/les=17/18 n=0 ec=28/17 lis/c=17/17 les/c/f=18/18/0 sis=28) [0] r=0 lpr=28 pi=[17,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:04 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 29 pg[3.1d( empty local-lis/les=17/18 n=0 ec=28/17 lis/c=17/17 les/c/f=18/18/0 sis=28) [0] r=0 lpr=28 pi=[17,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:04 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 29 pg[3.1b( empty local-lis/les=17/18 n=0 ec=28/17 lis/c=17/17 les/c/f=18/18/0 sis=28) [0] r=0 lpr=28 pi=[17,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:04 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 29 pg[3.1a( empty local-lis/les=17/18 n=0 ec=28/17 lis/c=17/17 les/c/f=18/18/0 sis=28) [0] r=0 lpr=28 pi=[17,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:04 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 29 pg[3.9( empty local-lis/les=17/18 n=0 ec=28/17 lis/c=17/17 les/c/f=18/18/0 sis=28) [0] r=0 lpr=28 pi=[17,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:04 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 29 pg[3.8( empty local-lis/les=17/18 n=0 ec=28/17 lis/c=17/17 les/c/f=18/18/0 sis=28) [0] r=0 lpr=28 pi=[17,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:04 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 29 pg[3.3( empty local-lis/les=17/18 n=0 ec=28/17 lis/c=17/17 les/c/f=18/18/0 sis=28) [0] r=0 lpr=28 pi=[17,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:04 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 29 pg[3.4( empty local-lis/les=17/18 n=0 ec=28/17 lis/c=17/17 les/c/f=18/18/0 sis=28) [0] r=0 lpr=28 pi=[17,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:04 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 29 pg[3.2( empty local-lis/les=17/18 n=0 ec=28/17 lis/c=17/17 les/c/f=18/18/0 sis=28) [0] r=0 lpr=28 pi=[17,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:04 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 29 pg[3.1( empty local-lis/les=17/18 n=0 ec=28/17 lis/c=17/17 les/c/f=18/18/0 sis=28) [0] r=0 lpr=28 pi=[17,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:04 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 29 pg[3.5( empty local-lis/les=17/18 n=0 ec=28/17 lis/c=17/17 les/c/f=18/18/0 sis=28) [0] r=0 lpr=28 pi=[17,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:04 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 29 pg[3.6( empty local-lis/les=17/18 n=0 ec=28/17 lis/c=17/17 les/c/f=18/18/0 sis=28) [0] r=0 lpr=28 pi=[17,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:04 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 29 pg[3.7( empty local-lis/les=17/18 n=0 ec=28/17 lis/c=17/17 les/c/f=18/18/0 sis=28) [0] r=0 lpr=28 pi=[17,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:04 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 29 pg[3.a( empty local-lis/les=17/18 n=0 ec=28/17 lis/c=17/17 les/c/f=18/18/0 sis=28) [0] r=0 lpr=28 pi=[17,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:04 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 29 pg[3.b( empty local-lis/les=17/18 n=0 ec=28/17 lis/c=17/17 les/c/f=18/18/0 sis=28) [0] r=0 lpr=28 pi=[17,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:04 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 29 pg[3.c( empty local-lis/les=17/18 n=0 ec=28/17 lis/c=17/17 les/c/f=18/18/0 sis=28) [0] r=0 lpr=28 pi=[17,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:04 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 29 pg[3.d( empty local-lis/les=17/18 n=0 ec=28/17 lis/c=17/17 les/c/f=18/18/0 sis=28) [0] r=0 lpr=28 pi=[17,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:04 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 29 pg[3.e( empty local-lis/les=17/18 n=0 ec=28/17 lis/c=17/17 les/c/f=18/18/0 sis=28) [0] r=0 lpr=28 pi=[17,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:04 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 29 pg[3.f( empty local-lis/les=17/18 n=0 ec=28/17 lis/c=17/17 les/c/f=18/18/0 sis=28) [0] r=0 lpr=28 pi=[17,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:04 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 29 pg[3.10( empty local-lis/les=17/18 n=0 ec=28/17 lis/c=17/17 les/c/f=18/18/0 sis=28) [0] r=0 lpr=28 pi=[17,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:04 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 29 pg[3.1c( empty local-lis/les=17/18 n=0 ec=28/17 lis/c=17/17 les/c/f=18/18/0 sis=28) [0] r=0 lpr=28 pi=[17,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:04 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 29 pg[3.11( empty local-lis/les=17/18 n=0 ec=28/17 lis/c=17/17 les/c/f=18/18/0 sis=28) [0] r=0 lpr=28 pi=[17,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:04 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 29 pg[3.12( empty local-lis/les=17/18 n=0 ec=28/17 lis/c=17/17 les/c/f=18/18/0 sis=28) [0] r=0 lpr=28 pi=[17,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:04 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 29 pg[3.13( empty local-lis/les=17/18 n=0 ec=28/17 lis/c=17/17 les/c/f=18/18/0 sis=28) [0] r=0 lpr=28 pi=[17,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:04 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 29 pg[3.14( empty local-lis/les=17/18 n=0 ec=28/17 lis/c=17/17 les/c/f=18/18/0 sis=28) [0] r=0 lpr=28 pi=[17,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:04 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 29 pg[3.15( empty local-lis/les=17/18 n=0 ec=28/17 lis/c=17/17 les/c/f=18/18/0 sis=28) [0] r=0 lpr=28 pi=[17,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:04 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 29 pg[3.18( empty local-lis/les=17/18 n=0 ec=28/17 lis/c=17/17 les/c/f=18/18/0 sis=28) [0] r=0 lpr=28 pi=[17,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:04 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 29 pg[3.17( empty local-lis/les=17/18 n=0 ec=28/17 lis/c=17/17 les/c/f=18/18/0 sis=28) [0] r=0 lpr=28 pi=[17,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:04 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 29 pg[3.16( empty local-lis/les=17/18 n=0 ec=28/17 lis/c=17/17 les/c/f=18/18/0 sis=28) [0] r=0 lpr=28 pi=[17,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:04 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 29 pg[3.19( empty local-lis/les=17/18 n=0 ec=28/17 lis/c=17/17 les/c/f=18/18/0 sis=28) [0] r=0 lpr=28 pi=[17,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:04 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 29 pg[3.1f( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=17/17 les/c/f=18/18/0 sis=28) [0] r=0 lpr=28 pi=[17,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:04 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 29 pg[3.1e( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=17/17 les/c/f=18/18/0 sis=28) [0] r=0 lpr=28 pi=[17,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:04 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 29 pg[3.1b( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=17/17 les/c/f=18/18/0 sis=28) [0] r=0 lpr=28 pi=[17,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:04 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 29 pg[3.1a( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=17/17 les/c/f=18/18/0 sis=28) [0] r=0 lpr=28 pi=[17,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:04 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 29 pg[3.1d( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=17/17 les/c/f=18/18/0 sis=28) [0] r=0 lpr=28 pi=[17,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:04 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 29 pg[3.9( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=17/17 les/c/f=18/18/0 sis=28) [0] r=0 lpr=28 pi=[17,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:04 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 29 pg[3.8( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=17/17 les/c/f=18/18/0 sis=28) [0] r=0 lpr=28 pi=[17,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:04 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 29 pg[3.3( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=17/17 les/c/f=18/18/0 sis=28) [0] r=0 lpr=28 pi=[17,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:04 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 29 pg[3.4( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=17/17 les/c/f=18/18/0 sis=28) [0] r=0 lpr=28 pi=[17,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:04 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 29 pg[3.2( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=17/17 les/c/f=18/18/0 sis=28) [0] r=0 lpr=28 pi=[17,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:04 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 29 pg[3.5( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=17/17 les/c/f=18/18/0 sis=28) [0] r=0 lpr=28 pi=[17,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:04 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 29 pg[3.1( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=17/17 les/c/f=18/18/0 sis=28) [0] r=0 lpr=28 pi=[17,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:04 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 29 pg[3.7( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=17/17 les/c/f=18/18/0 sis=28) [0] r=0 lpr=28 pi=[17,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:04 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 29 pg[3.0( empty local-lis/les=28/29 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=28) [0] r=0 lpr=28 pi=[17,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:04 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 29 pg[3.6( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=17/17 les/c/f=18/18/0 sis=28) [0] r=0 lpr=28 pi=[17,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:04 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 29 pg[3.a( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=17/17 les/c/f=18/18/0 sis=28) [0] r=0 lpr=28 pi=[17,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:04 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 29 pg[3.b( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=17/17 les/c/f=18/18/0 sis=28) [0] r=0 lpr=28 pi=[17,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:04 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 29 pg[3.c( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=17/17 les/c/f=18/18/0 sis=28) [0] r=0 lpr=28 pi=[17,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:04 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 29 pg[3.d( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=17/17 les/c/f=18/18/0 sis=28) [0] r=0 lpr=28 pi=[17,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:04 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 29 pg[3.e( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=17/17 les/c/f=18/18/0 sis=28) [0] r=0 lpr=28 pi=[17,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:04 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 29 pg[3.10( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=17/17 les/c/f=18/18/0 sis=28) [0] r=0 lpr=28 pi=[17,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:04 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 29 pg[3.13( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=17/17 les/c/f=18/18/0 sis=28) [0] r=0 lpr=28 pi=[17,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:04 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 29 pg[3.12( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=17/17 les/c/f=18/18/0 sis=28) [0] r=0 lpr=28 pi=[17,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:04 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 29 pg[3.f( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=17/17 les/c/f=18/18/0 sis=28) [0] r=0 lpr=28 pi=[17,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:04 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 29 pg[3.18( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=17/17 les/c/f=18/18/0 sis=28) [0] r=0 lpr=28 pi=[17,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:04 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 29 pg[3.14( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=17/17 les/c/f=18/18/0 sis=28) [0] r=0 lpr=28 pi=[17,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:04 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 29 pg[3.15( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=17/17 les/c/f=18/18/0 sis=28) [0] r=0 lpr=28 pi=[17,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:04 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 29 pg[3.19( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=17/17 les/c/f=18/18/0 sis=28) [0] r=0 lpr=28 pi=[17,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:04 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 29 pg[3.1c( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=17/17 les/c/f=18/18/0 sis=28) [0] r=0 lpr=28 pi=[17,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:04 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 29 pg[3.17( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=17/17 les/c/f=18/18/0 sis=28) [0] r=0 lpr=28 pi=[17,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:04 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 29 pg[3.16( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=17/17 les/c/f=18/18/0 sis=28) [0] r=0 lpr=28 pi=[17,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:04 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 29 pg[3.11( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=17/17 les/c/f=18/18/0 sis=28) [0] r=0 lpr=28 pi=[17,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:04 compute-0 ceph-mon[74360]: pgmap v85: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 20 13:58:04 compute-0 ceph-mon[74360]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 20 13:58:04 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Jan 20 13:58:04 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/1076842494' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Jan 20 13:58:04 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Jan 20 13:58:04 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Jan 20 13:58:04 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Jan 20 13:58:04 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Jan 20 13:58:04 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Jan 20 13:58:04 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Jan 20 13:58:04 compute-0 ceph-mon[74360]: osdmap e28: 2 total, 2 up, 2 in
Jan 20 13:58:04 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Jan 20 13:58:04 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:04 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.oweoeg", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 20 13:58:04 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.oweoeg", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Jan 20 13:58:04 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 20 13:58:04 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 13:58:04 compute-0 ceph-mon[74360]: Deploying daemon mgr.compute-1.oweoeg on compute-1
Jan 20 13:58:04 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 20 13:58:04 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Jan 20 13:58:04 compute-0 ceph-mon[74360]: osdmap e29: 2 total, 2 up, 2 in
Jan 20 13:58:04 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]: dispatch
Jan 20 13:58:04 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} v 0) v1
Jan 20 13:58:04 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1913464166' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Jan 20 13:58:04 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 3.1 scrub starts
Jan 20 13:58:04 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 3.1 scrub ok
Jan 20 13:58:05 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v88: 69 pgs: 62 unknown, 7 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 20 13:58:05 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} v 0) v1
Jan 20 13:58:05 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 20 13:58:05 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} v 0) v1
Jan 20 13:58:05 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 20 13:58:05 compute-0 ceph-mgr[74653]: mgr.server handle_report got status from non-daemon mon.compute-1
Jan 20 13:58:05 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv[74649]: 2026-01-20T13:58:05.640+0000 7f4562bc3640 -1 mgr.server handle_report got status from non-daemon mon.compute-1
Jan 20 13:58:05 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e29 do_prune osdmap full prune enabled
Jan 20 13:58:05 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]': finished
Jan 20 13:58:05 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1913464166' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Jan 20 13:58:05 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Jan 20 13:58:05 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Jan 20 13:58:05 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e30 e30: 2 total, 2 up, 2 in
Jan 20 13:58:05 compute-0 gifted_lalande[88164]: enabled application 'rbd' on pool 'images'
Jan 20 13:58:05 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e30: 2 total, 2 up, 2 in
Jan 20 13:58:05 compute-0 ceph-mgr[74653]: [progress INFO root] update: starting ev 32102c21-a3f7-4510-b0cd-80391c50a8ad (PG autoscaler increasing pool 6 PGs from 1 to 32)
Jan 20 13:58:05 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} v 0) v1
Jan 20 13:58:05 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Jan 20 13:58:05 compute-0 systemd[1]: libpod-f1a30e271d4d6012b827af7ed7fc5f029ddf5ee95c4370c345eb4184a72af628.scope: Deactivated successfully.
Jan 20 13:58:05 compute-0 conmon[88164]: conmon f1a30e271d4d6012b827 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f1a30e271d4d6012b827af7ed7fc5f029ddf5ee95c4370c345eb4184a72af628.scope/container/memory.events
Jan 20 13:58:05 compute-0 podman[88149]: 2026-01-20 13:58:05.785924105 +0000 UTC m=+1.633116948 container died f1a30e271d4d6012b827af7ed7fc5f029ddf5ee95c4370c345eb4184a72af628 (image=quay.io/ceph/ceph:v18, name=gifted_lalande, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 13:58:05 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/1913464166' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Jan 20 13:58:05 compute-0 ceph-mon[74360]: 3.1 scrub starts
Jan 20 13:58:05 compute-0 ceph-mon[74360]: 3.1 scrub ok
Jan 20 13:58:05 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 20 13:58:05 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 20 13:58:05 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]': finished
Jan 20 13:58:05 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/1913464166' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Jan 20 13:58:05 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Jan 20 13:58:05 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Jan 20 13:58:05 compute-0 ceph-mon[74360]: osdmap e30: 2 total, 2 up, 2 in
Jan 20 13:58:05 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Jan 20 13:58:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-1d75de1703469fee23ccadf6e367ccebfa3f4700fd6c2d87608d0304f767b6ae-merged.mount: Deactivated successfully.
Jan 20 13:58:05 compute-0 podman[88149]: 2026-01-20 13:58:05.834598247 +0000 UTC m=+1.681791050 container remove f1a30e271d4d6012b827af7ed7fc5f029ddf5ee95c4370c345eb4184a72af628 (image=quay.io/ceph/ceph:v18, name=gifted_lalande, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 13:58:05 compute-0 systemd[1]: libpod-conmon-f1a30e271d4d6012b827af7ed7fc5f029ddf5ee95c4370c345eb4184a72af628.scope: Deactivated successfully.
Jan 20 13:58:05 compute-0 sudo[88146]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:05 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 20 13:58:05 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:05 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 20 13:58:05 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:05 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Jan 20 13:58:05 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:05 compute-0 ceph-mgr[74653]: [progress INFO root] complete: finished ev 41022f99-f23f-49f2-a63d-83424f848bbb (Updating mgr deployment (+2 -> 3))
Jan 20 13:58:05 compute-0 ceph-mgr[74653]: [progress INFO root] Completed event 41022f99-f23f-49f2-a63d-83424f848bbb (Updating mgr deployment (+2 -> 3)) in 9 seconds
Jan 20 13:58:05 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Jan 20 13:58:05 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:05 compute-0 ceph-mgr[74653]: [progress INFO root] update: starting ev 2f0c619d-ed8e-4e21-aa88-1be8b840c0f5 (Updating crash deployment (+1 -> 3))
Jan 20 13:58:05 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) v1
Jan 20 13:58:05 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 20 13:58:05 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Jan 20 13:58:05 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 13:58:05 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 13:58:05 compute-0 ceph-mgr[74653]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-2 on compute-2
Jan 20 13:58:05 compute-0 ceph-mgr[74653]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-2 on compute-2
Jan 20 13:58:05 compute-0 sudo[88225]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kqgswfltjsqlrsqxhemphjxcouzzdjyp ; /usr/bin/python3'
Jan 20 13:58:05 compute-0 sudo[88225]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:58:06 compute-0 python3[88227]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.meta cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 13:58:06 compute-0 podman[88228]: 2026-01-20 13:58:06.193112539 +0000 UTC m=+0.065275014 container create f29eb7030881d367f698e0d433c1c5cab4789dbb405cfbd215479f522cb05144 (image=quay.io/ceph/ceph:v18, name=musing_lewin, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 13:58:06 compute-0 systemd[1]: Started libpod-conmon-f29eb7030881d367f698e0d433c1c5cab4789dbb405cfbd215479f522cb05144.scope.
Jan 20 13:58:06 compute-0 podman[88228]: 2026-01-20 13:58:06.167934601 +0000 UTC m=+0.040097126 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 20 13:58:06 compute-0 systemd[1]: Started libcrun container.
Jan 20 13:58:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b050e984de3f8edb2ac47a140bfa900ae2de6077b4ede587d5e58422a3076f12/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 13:58:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b050e984de3f8edb2ac47a140bfa900ae2de6077b4ede587d5e58422a3076f12/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 13:58:06 compute-0 podman[88228]: 2026-01-20 13:58:06.296167857 +0000 UTC m=+0.168330392 container init f29eb7030881d367f698e0d433c1c5cab4789dbb405cfbd215479f522cb05144 (image=quay.io/ceph/ceph:v18, name=musing_lewin, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 20 13:58:06 compute-0 podman[88228]: 2026-01-20 13:58:06.305850964 +0000 UTC m=+0.178013439 container start f29eb7030881d367f698e0d433c1c5cab4789dbb405cfbd215479f522cb05144 (image=quay.io/ceph/ceph:v18, name=musing_lewin, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 13:58:06 compute-0 podman[88228]: 2026-01-20 13:58:06.310079707 +0000 UTC m=+0.182242232 container attach f29eb7030881d367f698e0d433c1c5cab4789dbb405cfbd215479f522cb05144 (image=quay.io/ceph/ceph:v18, name=musing_lewin, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 13:58:06 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 30 pg[4.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=30 pruub=10.395595551s) [0] r=0 lpr=30 pi=[18,30)/1 crt=0'0 mlcod 0'0 active pruub 59.815471649s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:06 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 30 pg[5.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=30 pruub=12.427693367s) [0] r=0 lpr=30 pi=[20,30)/1 crt=0'0 mlcod 0'0 active pruub 61.847671509s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:06 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 30 pg[5.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=30 pruub=12.427693367s) [0] r=0 lpr=30 pi=[20,30)/1 crt=0'0 mlcod 0'0 unknown pruub 61.847671509s@ mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:06 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 30 pg[4.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=30 pruub=10.395595551s) [0] r=0 lpr=30 pi=[18,30)/1 crt=0'0 mlcod 0'0 unknown pruub 59.815471649s@ mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:06 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e30 do_prune osdmap full prune enabled
Jan 20 13:58:06 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Jan 20 13:58:06 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e31 e31: 2 total, 2 up, 2 in
Jan 20 13:58:06 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e31: 2 total, 2 up, 2 in
Jan 20 13:58:06 compute-0 ceph-mgr[74653]: [progress INFO root] update: starting ev c45a9bb0-133b-472e-bb27-2f83521ab2fb (PG autoscaler increasing pool 7 PGs from 1 to 32)
Jan 20 13:58:06 compute-0 ceph-mgr[74653]: [progress INFO root] complete: finished ev b0a89543-d124-45ca-a00b-91ecf15ac7eb (PG autoscaler increasing pool 2 PGs from 1 to 32)
Jan 20 13:58:06 compute-0 ceph-mgr[74653]: [progress INFO root] Completed event b0a89543-d124-45ca-a00b-91ecf15ac7eb (PG autoscaler increasing pool 2 PGs from 1 to 32) in 10 seconds
Jan 20 13:58:06 compute-0 ceph-mgr[74653]: [progress INFO root] complete: finished ev 54d73c73-6d70-4f57-8984-408734761c13 (PG autoscaler increasing pool 3 PGs from 1 to 32)
Jan 20 13:58:06 compute-0 ceph-mgr[74653]: [progress INFO root] Completed event 54d73c73-6d70-4f57-8984-408734761c13 (PG autoscaler increasing pool 3 PGs from 1 to 32) in 9 seconds
Jan 20 13:58:06 compute-0 ceph-mgr[74653]: [progress INFO root] complete: finished ev dff59163-5836-44fe-ac33-93d7d943a1c9 (PG autoscaler increasing pool 4 PGs from 1 to 32)
Jan 20 13:58:06 compute-0 ceph-mgr[74653]: [progress INFO root] Completed event dff59163-5836-44fe-ac33-93d7d943a1c9 (PG autoscaler increasing pool 4 PGs from 1 to 32) in 3 seconds
Jan 20 13:58:06 compute-0 ceph-mgr[74653]: [progress INFO root] complete: finished ev dd154ef0-413c-40dd-a356-43ebad298350 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Jan 20 13:58:06 compute-0 ceph-mgr[74653]: [progress INFO root] Completed event dd154ef0-413c-40dd-a356-43ebad298350 (PG autoscaler increasing pool 5 PGs from 1 to 32) in 2 seconds
Jan 20 13:58:06 compute-0 ceph-mgr[74653]: [progress INFO root] complete: finished ev 32102c21-a3f7-4510-b0cd-80391c50a8ad (PG autoscaler increasing pool 6 PGs from 1 to 32)
Jan 20 13:58:06 compute-0 ceph-mgr[74653]: [progress INFO root] Completed event 32102c21-a3f7-4510-b0cd-80391c50a8ad (PG autoscaler increasing pool 6 PGs from 1 to 32) in 1 seconds
Jan 20 13:58:06 compute-0 ceph-mgr[74653]: [progress INFO root] complete: finished ev c45a9bb0-133b-472e-bb27-2f83521ab2fb (PG autoscaler increasing pool 7 PGs from 1 to 32)
Jan 20 13:58:06 compute-0 ceph-mgr[74653]: [progress INFO root] Completed event c45a9bb0-133b-472e-bb27-2f83521ab2fb (PG autoscaler increasing pool 7 PGs from 1 to 32) in 0 seconds
Jan 20 13:58:06 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 31 pg[4.1f( empty local-lis/les=18/19 n=0 ec=30/18 lis/c=18/18 les/c/f=19/19/0 sis=30) [0] r=0 lpr=30 pi=[18,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:06 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 31 pg[5.1f( empty local-lis/les=20/21 n=0 ec=30/20 lis/c=20/20 les/c/f=21/21/0 sis=30) [0] r=0 lpr=30 pi=[20,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:06 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 31 pg[4.1e( empty local-lis/les=18/19 n=0 ec=30/18 lis/c=18/18 les/c/f=19/19/0 sis=30) [0] r=0 lpr=30 pi=[18,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:06 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 31 pg[5.1e( empty local-lis/les=20/21 n=0 ec=30/20 lis/c=20/20 les/c/f=21/21/0 sis=30) [0] r=0 lpr=30 pi=[20,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:06 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 31 pg[4.10( empty local-lis/les=18/19 n=0 ec=30/18 lis/c=18/18 les/c/f=19/19/0 sis=30) [0] r=0 lpr=30 pi=[18,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:06 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 31 pg[5.11( empty local-lis/les=20/21 n=0 ec=30/20 lis/c=20/20 les/c/f=21/21/0 sis=30) [0] r=0 lpr=30 pi=[20,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:06 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 31 pg[5.10( empty local-lis/les=20/21 n=0 ec=30/20 lis/c=20/20 les/c/f=21/21/0 sis=30) [0] r=0 lpr=30 pi=[20,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:06 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 31 pg[4.11( empty local-lis/les=18/19 n=0 ec=30/18 lis/c=18/18 les/c/f=19/19/0 sis=30) [0] r=0 lpr=30 pi=[18,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:06 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 31 pg[5.13( empty local-lis/les=20/21 n=0 ec=30/20 lis/c=20/20 les/c/f=21/21/0 sis=30) [0] r=0 lpr=30 pi=[20,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:06 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 31 pg[4.12( empty local-lis/les=18/19 n=0 ec=30/18 lis/c=18/18 les/c/f=19/19/0 sis=30) [0] r=0 lpr=30 pi=[18,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:06 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 31 pg[5.12( empty local-lis/les=20/21 n=0 ec=30/20 lis/c=20/20 les/c/f=21/21/0 sis=30) [0] r=0 lpr=30 pi=[20,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:06 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 31 pg[4.13( empty local-lis/les=18/19 n=0 ec=30/18 lis/c=18/18 les/c/f=19/19/0 sis=30) [0] r=0 lpr=30 pi=[18,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:06 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 31 pg[5.15( empty local-lis/les=20/21 n=0 ec=30/20 lis/c=20/20 les/c/f=21/21/0 sis=30) [0] r=0 lpr=30 pi=[20,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:06 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 31 pg[4.14( empty local-lis/les=18/19 n=0 ec=30/18 lis/c=18/18 les/c/f=19/19/0 sis=30) [0] r=0 lpr=30 pi=[18,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:06 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 31 pg[5.14( empty local-lis/les=20/21 n=0 ec=30/20 lis/c=20/20 les/c/f=21/21/0 sis=30) [0] r=0 lpr=30 pi=[20,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:06 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 31 pg[4.15( empty local-lis/les=18/19 n=0 ec=30/18 lis/c=18/18 les/c/f=19/19/0 sis=30) [0] r=0 lpr=30 pi=[18,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:06 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 31 pg[5.17( empty local-lis/les=20/21 n=0 ec=30/20 lis/c=20/20 les/c/f=21/21/0 sis=30) [0] r=0 lpr=30 pi=[20,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:06 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 31 pg[4.16( empty local-lis/les=18/19 n=0 ec=30/18 lis/c=18/18 les/c/f=19/19/0 sis=30) [0] r=0 lpr=30 pi=[18,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:06 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 31 pg[5.16( empty local-lis/les=20/21 n=0 ec=30/20 lis/c=20/20 les/c/f=21/21/0 sis=30) [0] r=0 lpr=30 pi=[20,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:06 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 31 pg[4.17( empty local-lis/les=18/19 n=0 ec=30/18 lis/c=18/18 les/c/f=19/19/0 sis=30) [0] r=0 lpr=30 pi=[18,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:06 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 31 pg[5.9( empty local-lis/les=20/21 n=0 ec=30/20 lis/c=20/20 les/c/f=21/21/0 sis=30) [0] r=0 lpr=30 pi=[20,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:06 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 31 pg[5.8( empty local-lis/les=20/21 n=0 ec=30/20 lis/c=20/20 les/c/f=21/21/0 sis=30) [0] r=0 lpr=30 pi=[20,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:06 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 31 pg[4.8( empty local-lis/les=18/19 n=0 ec=30/18 lis/c=18/18 les/c/f=19/19/0 sis=30) [0] r=0 lpr=30 pi=[18,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:06 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 31 pg[4.9( empty local-lis/les=18/19 n=0 ec=30/18 lis/c=18/18 les/c/f=19/19/0 sis=30) [0] r=0 lpr=30 pi=[18,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:06 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 31 pg[5.b( empty local-lis/les=20/21 n=0 ec=30/20 lis/c=20/20 les/c/f=21/21/0 sis=30) [0] r=0 lpr=30 pi=[20,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:06 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 31 pg[4.a( empty local-lis/les=18/19 n=0 ec=30/18 lis/c=18/18 les/c/f=19/19/0 sis=30) [0] r=0 lpr=30 pi=[18,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:06 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 31 pg[5.a( empty local-lis/les=20/21 n=0 ec=30/20 lis/c=20/20 les/c/f=21/21/0 sis=30) [0] r=0 lpr=30 pi=[20,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:06 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 31 pg[5.d( empty local-lis/les=20/21 n=0 ec=30/20 lis/c=20/20 les/c/f=21/21/0 sis=30) [0] r=0 lpr=30 pi=[20,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:06 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 31 pg[4.b( empty local-lis/les=18/19 n=0 ec=30/18 lis/c=18/18 les/c/f=19/19/0 sis=30) [0] r=0 lpr=30 pi=[18,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:06 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 31 pg[4.c( empty local-lis/les=18/19 n=0 ec=30/18 lis/c=18/18 les/c/f=19/19/0 sis=30) [0] r=0 lpr=30 pi=[18,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:06 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 31 pg[5.c( empty local-lis/les=20/21 n=0 ec=30/20 lis/c=20/20 les/c/f=21/21/0 sis=30) [0] r=0 lpr=30 pi=[20,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:06 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 31 pg[4.d( empty local-lis/les=18/19 n=0 ec=30/18 lis/c=18/18 les/c/f=19/19/0 sis=30) [0] r=0 lpr=30 pi=[18,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:06 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 31 pg[5.6( empty local-lis/les=20/21 n=0 ec=30/20 lis/c=20/20 les/c/f=21/21/0 sis=30) [0] r=0 lpr=30 pi=[20,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:06 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 31 pg[4.7( empty local-lis/les=18/19 n=0 ec=30/18 lis/c=18/18 les/c/f=19/19/0 sis=30) [0] r=0 lpr=30 pi=[18,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:06 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 31 pg[5.1( empty local-lis/les=20/21 n=0 ec=30/20 lis/c=20/20 les/c/f=21/21/0 sis=30) [0] r=0 lpr=30 pi=[20,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:06 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 31 pg[4.1( empty local-lis/les=18/19 n=0 ec=30/18 lis/c=18/18 les/c/f=19/19/0 sis=30) [0] r=0 lpr=30 pi=[18,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:06 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 31 pg[5.3( empty local-lis/les=20/21 n=0 ec=30/20 lis/c=20/20 les/c/f=21/21/0 sis=30) [0] r=0 lpr=30 pi=[20,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:06 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 31 pg[4.2( empty local-lis/les=18/19 n=0 ec=30/18 lis/c=18/18 les/c/f=19/19/0 sis=30) [0] r=0 lpr=30 pi=[18,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:06 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 31 pg[5.7( empty local-lis/les=20/21 n=0 ec=30/20 lis/c=20/20 les/c/f=21/21/0 sis=30) [0] r=0 lpr=30 pi=[20,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:06 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 31 pg[5.4( empty local-lis/les=20/21 n=0 ec=30/20 lis/c=20/20 les/c/f=21/21/0 sis=30) [0] r=0 lpr=30 pi=[20,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:06 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 31 pg[4.5( empty local-lis/les=18/19 n=0 ec=30/18 lis/c=18/18 les/c/f=19/19/0 sis=30) [0] r=0 lpr=30 pi=[18,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:06 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 31 pg[5.5( empty local-lis/les=20/21 n=0 ec=30/20 lis/c=20/20 les/c/f=21/21/0 sis=30) [0] r=0 lpr=30 pi=[20,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:06 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 31 pg[4.4( empty local-lis/les=18/19 n=0 ec=30/18 lis/c=18/18 les/c/f=19/19/0 sis=30) [0] r=0 lpr=30 pi=[18,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:06 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 31 pg[5.2( empty local-lis/les=20/21 n=0 ec=30/20 lis/c=20/20 les/c/f=21/21/0 sis=30) [0] r=0 lpr=30 pi=[20,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:06 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 31 pg[4.3( empty local-lis/les=18/19 n=0 ec=30/18 lis/c=18/18 les/c/f=19/19/0 sis=30) [0] r=0 lpr=30 pi=[18,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:06 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 31 pg[5.e( empty local-lis/les=20/21 n=0 ec=30/20 lis/c=20/20 les/c/f=21/21/0 sis=30) [0] r=0 lpr=30 pi=[20,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:06 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 31 pg[4.6( empty local-lis/les=18/19 n=0 ec=30/18 lis/c=18/18 les/c/f=19/19/0 sis=30) [0] r=0 lpr=30 pi=[18,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:06 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 31 pg[4.f( empty local-lis/les=18/19 n=0 ec=30/18 lis/c=18/18 les/c/f=19/19/0 sis=30) [0] r=0 lpr=30 pi=[18,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:06 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 31 pg[5.f( empty local-lis/les=20/21 n=0 ec=30/20 lis/c=20/20 les/c/f=21/21/0 sis=30) [0] r=0 lpr=30 pi=[20,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:06 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 31 pg[5.1c( empty local-lis/les=20/21 n=0 ec=30/20 lis/c=20/20 les/c/f=21/21/0 sis=30) [0] r=0 lpr=30 pi=[20,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:06 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 31 pg[4.e( empty local-lis/les=18/19 n=0 ec=30/18 lis/c=18/18 les/c/f=19/19/0 sis=30) [0] r=0 lpr=30 pi=[18,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:06 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 31 pg[4.1d( empty local-lis/les=18/19 n=0 ec=30/18 lis/c=18/18 les/c/f=19/19/0 sis=30) [0] r=0 lpr=30 pi=[18,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:06 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 31 pg[4.1c( empty local-lis/les=18/19 n=0 ec=30/18 lis/c=18/18 les/c/f=19/19/0 sis=30) [0] r=0 lpr=30 pi=[18,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:06 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 31 pg[5.1a( empty local-lis/les=20/21 n=0 ec=30/20 lis/c=20/20 les/c/f=21/21/0 sis=30) [0] r=0 lpr=30 pi=[20,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:06 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 31 pg[5.1d( empty local-lis/les=20/21 n=0 ec=30/20 lis/c=20/20 les/c/f=21/21/0 sis=30) [0] r=0 lpr=30 pi=[20,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:06 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 31 pg[4.1b( empty local-lis/les=18/19 n=0 ec=30/18 lis/c=18/18 les/c/f=19/19/0 sis=30) [0] r=0 lpr=30 pi=[18,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:06 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 31 pg[4.1a( empty local-lis/les=18/19 n=0 ec=30/18 lis/c=18/18 les/c/f=19/19/0 sis=30) [0] r=0 lpr=30 pi=[18,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:06 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 31 pg[5.1b( empty local-lis/les=20/21 n=0 ec=30/20 lis/c=20/20 les/c/f=21/21/0 sis=30) [0] r=0 lpr=30 pi=[20,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:06 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 31 pg[5.18( empty local-lis/les=20/21 n=0 ec=30/20 lis/c=20/20 les/c/f=21/21/0 sis=30) [0] r=0 lpr=30 pi=[20,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:06 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 31 pg[5.19( empty local-lis/les=20/21 n=0 ec=30/20 lis/c=20/20 les/c/f=21/21/0 sis=30) [0] r=0 lpr=30 pi=[20,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:06 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 31 pg[4.19( empty local-lis/les=18/19 n=0 ec=30/18 lis/c=18/18 les/c/f=19/19/0 sis=30) [0] r=0 lpr=30 pi=[18,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:06 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 31 pg[4.18( empty local-lis/les=18/19 n=0 ec=30/18 lis/c=18/18 les/c/f=19/19/0 sis=30) [0] r=0 lpr=30 pi=[18,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:06 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 31 pg[4.1f( empty local-lis/les=30/31 n=0 ec=30/18 lis/c=18/18 les/c/f=19/19/0 sis=30) [0] r=0 lpr=30 pi=[18,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:06 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 31 pg[4.1e( empty local-lis/les=30/31 n=0 ec=30/18 lis/c=18/18 les/c/f=19/19/0 sis=30) [0] r=0 lpr=30 pi=[18,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:06 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 31 pg[5.11( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=20/20 les/c/f=21/21/0 sis=30) [0] r=0 lpr=30 pi=[20,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:06 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 31 pg[5.1f( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=20/20 les/c/f=21/21/0 sis=30) [0] r=0 lpr=30 pi=[20,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:06 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 31 pg[5.1e( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=20/20 les/c/f=21/21/0 sis=30) [0] r=0 lpr=30 pi=[20,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:06 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 31 pg[4.11( empty local-lis/les=30/31 n=0 ec=30/18 lis/c=18/18 les/c/f=19/19/0 sis=30) [0] r=0 lpr=30 pi=[18,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:06 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 31 pg[5.10( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=20/20 les/c/f=21/21/0 sis=30) [0] r=0 lpr=30 pi=[20,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:06 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 31 pg[4.10( empty local-lis/les=30/31 n=0 ec=30/18 lis/c=18/18 les/c/f=19/19/0 sis=30) [0] r=0 lpr=30 pi=[18,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:06 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 31 pg[5.13( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=20/20 les/c/f=21/21/0 sis=30) [0] r=0 lpr=30 pi=[20,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:06 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 31 pg[5.12( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=20/20 les/c/f=21/21/0 sis=30) [0] r=0 lpr=30 pi=[20,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:06 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 31 pg[4.13( empty local-lis/les=30/31 n=0 ec=30/18 lis/c=18/18 les/c/f=19/19/0 sis=30) [0] r=0 lpr=30 pi=[18,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:06 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 31 pg[4.12( empty local-lis/les=30/31 n=0 ec=30/18 lis/c=18/18 les/c/f=19/19/0 sis=30) [0] r=0 lpr=30 pi=[18,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:06 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 31 pg[4.15( empty local-lis/les=30/31 n=0 ec=30/18 lis/c=18/18 les/c/f=19/19/0 sis=30) [0] r=0 lpr=30 pi=[18,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:06 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 31 pg[5.14( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=20/20 les/c/f=21/21/0 sis=30) [0] r=0 lpr=30 pi=[20,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:06 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 31 pg[4.16( empty local-lis/les=30/31 n=0 ec=30/18 lis/c=18/18 les/c/f=19/19/0 sis=30) [0] r=0 lpr=30 pi=[18,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:06 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 31 pg[5.17( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=20/20 les/c/f=21/21/0 sis=30) [0] r=0 lpr=30 pi=[20,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:06 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 31 pg[4.17( empty local-lis/les=30/31 n=0 ec=30/18 lis/c=18/18 les/c/f=19/19/0 sis=30) [0] r=0 lpr=30 pi=[18,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:06 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 31 pg[5.8( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=20/20 les/c/f=21/21/0 sis=30) [0] r=0 lpr=30 pi=[20,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:06 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 31 pg[5.9( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=20/20 les/c/f=21/21/0 sis=30) [0] r=0 lpr=30 pi=[20,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:06 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 31 pg[5.16( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=20/20 les/c/f=21/21/0 sis=30) [0] r=0 lpr=30 pi=[20,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:06 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 31 pg[4.8( empty local-lis/les=30/31 n=0 ec=30/18 lis/c=18/18 les/c/f=19/19/0 sis=30) [0] r=0 lpr=30 pi=[18,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:06 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 31 pg[4.9( empty local-lis/les=30/31 n=0 ec=30/18 lis/c=18/18 les/c/f=19/19/0 sis=30) [0] r=0 lpr=30 pi=[18,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:06 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 31 pg[4.14( empty local-lis/les=30/31 n=0 ec=30/18 lis/c=18/18 les/c/f=19/19/0 sis=30) [0] r=0 lpr=30 pi=[18,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:06 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 31 pg[5.b( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=20/20 les/c/f=21/21/0 sis=30) [0] r=0 lpr=30 pi=[20,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:06 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 31 pg[5.a( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=20/20 les/c/f=21/21/0 sis=30) [0] r=0 lpr=30 pi=[20,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:06 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 31 pg[5.15( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=20/20 les/c/f=21/21/0 sis=30) [0] r=0 lpr=30 pi=[20,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:06 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 31 pg[4.b( empty local-lis/les=30/31 n=0 ec=30/18 lis/c=18/18 les/c/f=19/19/0 sis=30) [0] r=0 lpr=30 pi=[18,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:06 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 31 pg[4.a( empty local-lis/les=30/31 n=0 ec=30/18 lis/c=18/18 les/c/f=19/19/0 sis=30) [0] r=0 lpr=30 pi=[18,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:06 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 31 pg[5.d( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=20/20 les/c/f=21/21/0 sis=30) [0] r=0 lpr=30 pi=[20,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:06 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 31 pg[4.c( empty local-lis/les=30/31 n=0 ec=30/18 lis/c=18/18 les/c/f=19/19/0 sis=30) [0] r=0 lpr=30 pi=[18,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:06 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 31 pg[4.d( empty local-lis/les=30/31 n=0 ec=30/18 lis/c=18/18 les/c/f=19/19/0 sis=30) [0] r=0 lpr=30 pi=[18,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:06 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 31 pg[5.6( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=20/20 les/c/f=21/21/0 sis=30) [0] r=0 lpr=30 pi=[20,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:06 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 31 pg[4.7( empty local-lis/les=30/31 n=0 ec=30/18 lis/c=18/18 les/c/f=19/19/0 sis=30) [0] r=0 lpr=30 pi=[18,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:06 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 31 pg[5.1( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=20/20 les/c/f=21/21/0 sis=30) [0] r=0 lpr=30 pi=[20,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:06 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 31 pg[4.0( empty local-lis/les=30/31 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=30) [0] r=0 lpr=30 pi=[18,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:06 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 31 pg[5.3( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=20/20 les/c/f=21/21/0 sis=30) [0] r=0 lpr=30 pi=[20,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:06 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 31 pg[4.1( empty local-lis/les=30/31 n=0 ec=30/18 lis/c=18/18 les/c/f=19/19/0 sis=30) [0] r=0 lpr=30 pi=[18,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:06 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 31 pg[5.7( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=20/20 les/c/f=21/21/0 sis=30) [0] r=0 lpr=30 pi=[20,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:06 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 31 pg[5.0( empty local-lis/les=30/31 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=30) [0] r=0 lpr=30 pi=[20,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:06 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 31 pg[5.4( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=20/20 les/c/f=21/21/0 sis=30) [0] r=0 lpr=30 pi=[20,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:06 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 31 pg[5.5( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=20/20 les/c/f=21/21/0 sis=30) [0] r=0 lpr=30 pi=[20,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:06 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 31 pg[4.5( empty local-lis/les=30/31 n=0 ec=30/18 lis/c=18/18 les/c/f=19/19/0 sis=30) [0] r=0 lpr=30 pi=[18,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:06 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 31 pg[5.c( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=20/20 les/c/f=21/21/0 sis=30) [0] r=0 lpr=30 pi=[20,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:06 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 31 pg[4.4( empty local-lis/les=30/31 n=0 ec=30/18 lis/c=18/18 les/c/f=19/19/0 sis=30) [0] r=0 lpr=30 pi=[18,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:06 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 31 pg[4.2( empty local-lis/les=30/31 n=0 ec=30/18 lis/c=18/18 les/c/f=19/19/0 sis=30) [0] r=0 lpr=30 pi=[18,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:06 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 31 pg[5.2( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=20/20 les/c/f=21/21/0 sis=30) [0] r=0 lpr=30 pi=[20,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:06 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 31 pg[5.e( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=20/20 les/c/f=21/21/0 sis=30) [0] r=0 lpr=30 pi=[20,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:06 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 31 pg[4.3( empty local-lis/les=30/31 n=0 ec=30/18 lis/c=18/18 les/c/f=19/19/0 sis=30) [0] r=0 lpr=30 pi=[18,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:06 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 31 pg[4.f( empty local-lis/les=30/31 n=0 ec=30/18 lis/c=18/18 les/c/f=19/19/0 sis=30) [0] r=0 lpr=30 pi=[18,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:06 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 31 pg[4.6( empty local-lis/les=30/31 n=0 ec=30/18 lis/c=18/18 les/c/f=19/19/0 sis=30) [0] r=0 lpr=30 pi=[18,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:06 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 31 pg[5.f( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=20/20 les/c/f=21/21/0 sis=30) [0] r=0 lpr=30 pi=[20,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:06 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 31 pg[5.1c( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=20/20 les/c/f=21/21/0 sis=30) [0] r=0 lpr=30 pi=[20,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:06 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 31 pg[4.1d( empty local-lis/les=30/31 n=0 ec=30/18 lis/c=18/18 les/c/f=19/19/0 sis=30) [0] r=0 lpr=30 pi=[18,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:06 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 31 pg[5.1a( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=20/20 les/c/f=21/21/0 sis=30) [0] r=0 lpr=30 pi=[20,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:06 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 31 pg[5.1d( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=20/20 les/c/f=21/21/0 sis=30) [0] r=0 lpr=30 pi=[20,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:06 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 31 pg[4.1c( empty local-lis/les=30/31 n=0 ec=30/18 lis/c=18/18 les/c/f=19/19/0 sis=30) [0] r=0 lpr=30 pi=[18,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:06 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 31 pg[4.1b( empty local-lis/les=30/31 n=0 ec=30/18 lis/c=18/18 les/c/f=19/19/0 sis=30) [0] r=0 lpr=30 pi=[18,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:06 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 31 pg[4.e( empty local-lis/les=30/31 n=0 ec=30/18 lis/c=18/18 les/c/f=19/19/0 sis=30) [0] r=0 lpr=30 pi=[18,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:06 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 31 pg[4.1a( empty local-lis/les=30/31 n=0 ec=30/18 lis/c=18/18 les/c/f=19/19/0 sis=30) [0] r=0 lpr=30 pi=[18,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:06 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 31 pg[5.1b( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=20/20 les/c/f=21/21/0 sis=30) [0] r=0 lpr=30 pi=[20,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:06 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 31 pg[5.18( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=20/20 les/c/f=21/21/0 sis=30) [0] r=0 lpr=30 pi=[20,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:06 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 31 pg[4.18( empty local-lis/les=30/31 n=0 ec=30/18 lis/c=18/18 les/c/f=19/19/0 sis=30) [0] r=0 lpr=30 pi=[18,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:06 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 31 pg[4.19( empty local-lis/les=30/31 n=0 ec=30/18 lis/c=18/18 les/c/f=19/19/0 sis=30) [0] r=0 lpr=30 pi=[18,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:06 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 31 pg[5.19( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=20/20 les/c/f=21/21/0 sis=30) [0] r=0 lpr=30 pi=[20,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:06 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} v 0) v1
Jan 20 13:58:06 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4079761379' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Jan 20 13:58:06 compute-0 ceph-mon[74360]: pgmap v88: 69 pgs: 62 unknown, 7 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 20 13:58:06 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:06 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:06 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:06 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:06 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 20 13:58:06 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Jan 20 13:58:06 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 13:58:06 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Jan 20 13:58:06 compute-0 ceph-mon[74360]: osdmap e31: 2 total, 2 up, 2 in
Jan 20 13:58:07 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v91: 131 pgs: 93 unknown, 38 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 20 13:58:07 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} v 0) v1
Jan 20 13:58:07 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 20 13:58:07 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"} v 0) v1
Jan 20 13:58:07 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 20 13:58:07 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e31 do_prune osdmap full prune enabled
Jan 20 13:58:07 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4079761379' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Jan 20 13:58:07 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Jan 20 13:58:07 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]': finished
Jan 20 13:58:07 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e32 e32: 2 total, 2 up, 2 in
Jan 20 13:58:07 compute-0 musing_lewin[88243]: enabled application 'cephfs' on pool 'cephfs.cephfs.meta'
Jan 20 13:58:07 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e32: 2 total, 2 up, 2 in
Jan 20 13:58:07 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 32 pg[6.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=32 pruub=12.985056877s) [0] r=0 lpr=32 pi=[22,32)/1 crt=0'0 mlcod 0'0 active pruub 63.869476318s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:07 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 32 pg[6.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=32 pruub=12.985056877s) [0] r=0 lpr=32 pi=[22,32)/1 crt=0'0 mlcod 0'0 unknown pruub 63.869476318s@ mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:07 compute-0 systemd[1]: libpod-f29eb7030881d367f698e0d433c1c5cab4789dbb405cfbd215479f522cb05144.scope: Deactivated successfully.
Jan 20 13:58:07 compute-0 podman[88228]: 2026-01-20 13:58:07.821293864 +0000 UTC m=+1.693456349 container died f29eb7030881d367f698e0d433c1c5cab4789dbb405cfbd215479f522cb05144 (image=quay.io/ceph/ceph:v18, name=musing_lewin, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 13:58:07 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 3.2 scrub starts
Jan 20 13:58:07 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 3.2 scrub ok
Jan 20 13:58:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-b050e984de3f8edb2ac47a140bfa900ae2de6077b4ede587d5e58422a3076f12-merged.mount: Deactivated successfully.
Jan 20 13:58:07 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e32 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 13:58:07 compute-0 podman[88228]: 2026-01-20 13:58:07.868202221 +0000 UTC m=+1.740364666 container remove f29eb7030881d367f698e0d433c1c5cab4789dbb405cfbd215479f522cb05144 (image=quay.io/ceph/ceph:v18, name=musing_lewin, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 20 13:58:07 compute-0 systemd[1]: libpod-conmon-f29eb7030881d367f698e0d433c1c5cab4789dbb405cfbd215479f522cb05144.scope: Deactivated successfully.
Jan 20 13:58:07 compute-0 sudo[88225]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:08 compute-0 ceph-mon[74360]: Deploying daemon crash.compute-2 on compute-2
Jan 20 13:58:08 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/4079761379' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Jan 20 13:58:08 compute-0 ceph-mon[74360]: pgmap v91: 131 pgs: 93 unknown, 38 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 20 13:58:08 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 20 13:58:08 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 20 13:58:08 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/4079761379' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Jan 20 13:58:08 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Jan 20 13:58:08 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]': finished
Jan 20 13:58:08 compute-0 ceph-mon[74360]: osdmap e32: 2 total, 2 up, 2 in
Jan 20 13:58:08 compute-0 ceph-mon[74360]: 3.2 scrub starts
Jan 20 13:58:08 compute-0 ceph-mon[74360]: 3.2 scrub ok
Jan 20 13:58:08 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 20 13:58:08 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:08 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 20 13:58:08 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:08 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Jan 20 13:58:08 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:08 compute-0 ceph-mgr[74653]: [progress INFO root] complete: finished ev 2f0c619d-ed8e-4e21-aa88-1be8b840c0f5 (Updating crash deployment (+1 -> 3))
Jan 20 13:58:08 compute-0 ceph-mgr[74653]: [progress INFO root] Completed event 2f0c619d-ed8e-4e21-aa88-1be8b840c0f5 (Updating crash deployment (+1 -> 3)) in 2 seconds
Jan 20 13:58:08 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Jan 20 13:58:08 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:08 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 20 13:58:08 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 13:58:08 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 20 13:58:08 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 13:58:08 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 13:58:08 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 13:58:08 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 20 13:58:08 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 13:58:08 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 13:58:08 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 13:58:08 compute-0 sudo[88279]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:58:08 compute-0 sudo[88279]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:58:08 compute-0 sudo[88279]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:08 compute-0 sudo[88304]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 13:58:08 compute-0 sudo[88304]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:58:08 compute-0 sudo[88304]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:08 compute-0 sudo[88329]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:58:08 compute-0 sudo[88329]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:58:08 compute-0 sudo[88329]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:08 compute-0 sudo[88354]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 13:58:08 compute-0 sudo[88354]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:58:08 compute-0 ceph-mgr[74653]: [progress INFO root] Writing back 12 completed events
Jan 20 13:58:08 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Jan 20 13:58:08 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:08 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e32 do_prune osdmap full prune enabled
Jan 20 13:58:08 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e33 e33: 2 total, 2 up, 2 in
Jan 20 13:58:08 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e33: 2 total, 2 up, 2 in
Jan 20 13:58:08 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 33 pg[6.1a( empty local-lis/les=22/23 n=0 ec=32/22 lis/c=22/22 les/c/f=23/23/0 sis=32) [0] r=0 lpr=32 pi=[22,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:08 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 33 pg[6.1b( empty local-lis/les=22/23 n=0 ec=32/22 lis/c=22/22 les/c/f=23/23/0 sis=32) [0] r=0 lpr=32 pi=[22,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:08 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 33 pg[6.18( empty local-lis/les=22/23 n=0 ec=32/22 lis/c=22/22 les/c/f=23/23/0 sis=32) [0] r=0 lpr=32 pi=[22,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:08 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 33 pg[6.1f( empty local-lis/les=22/23 n=0 ec=32/22 lis/c=22/22 les/c/f=23/23/0 sis=32) [0] r=0 lpr=32 pi=[22,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:08 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 33 pg[6.19( empty local-lis/les=22/23 n=0 ec=32/22 lis/c=22/22 les/c/f=23/23/0 sis=32) [0] r=0 lpr=32 pi=[22,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:08 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 33 pg[6.c( empty local-lis/les=22/23 n=0 ec=32/22 lis/c=22/22 les/c/f=23/23/0 sis=32) [0] r=0 lpr=32 pi=[22,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:08 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 33 pg[6.d( empty local-lis/les=22/23 n=0 ec=32/22 lis/c=22/22 les/c/f=23/23/0 sis=32) [0] r=0 lpr=32 pi=[22,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:08 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 33 pg[6.1( empty local-lis/les=22/23 n=0 ec=32/22 lis/c=22/22 les/c/f=23/23/0 sis=32) [0] r=0 lpr=32 pi=[22,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:08 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 33 pg[6.6( empty local-lis/les=22/23 n=0 ec=32/22 lis/c=22/22 les/c/f=23/23/0 sis=32) [0] r=0 lpr=32 pi=[22,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:08 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 33 pg[6.7( empty local-lis/les=22/23 n=0 ec=32/22 lis/c=22/22 les/c/f=23/23/0 sis=32) [0] r=0 lpr=32 pi=[22,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:08 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 33 pg[6.4( empty local-lis/les=22/23 n=0 ec=32/22 lis/c=22/22 les/c/f=23/23/0 sis=32) [0] r=0 lpr=32 pi=[22,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:08 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 33 pg[6.1e( empty local-lis/les=22/23 n=0 ec=32/22 lis/c=22/22 les/c/f=23/23/0 sis=32) [0] r=0 lpr=32 pi=[22,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:08 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 33 pg[6.3( empty local-lis/les=22/23 n=0 ec=32/22 lis/c=22/22 les/c/f=23/23/0 sis=32) [0] r=0 lpr=32 pi=[22,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:08 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 33 pg[6.5( empty local-lis/les=22/23 n=0 ec=32/22 lis/c=22/22 les/c/f=23/23/0 sis=32) [0] r=0 lpr=32 pi=[22,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:08 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 33 pg[6.2( empty local-lis/les=22/23 n=0 ec=32/22 lis/c=22/22 les/c/f=23/23/0 sis=32) [0] r=0 lpr=32 pi=[22,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:08 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 33 pg[6.f( empty local-lis/les=22/23 n=0 ec=32/22 lis/c=22/22 les/c/f=23/23/0 sis=32) [0] r=0 lpr=32 pi=[22,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:08 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 33 pg[6.e( empty local-lis/les=22/23 n=0 ec=32/22 lis/c=22/22 les/c/f=23/23/0 sis=32) [0] r=0 lpr=32 pi=[22,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:08 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 33 pg[6.9( empty local-lis/les=22/23 n=0 ec=32/22 lis/c=22/22 les/c/f=23/23/0 sis=32) [0] r=0 lpr=32 pi=[22,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:08 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 33 pg[6.8( empty local-lis/les=22/23 n=0 ec=32/22 lis/c=22/22 les/c/f=23/23/0 sis=32) [0] r=0 lpr=32 pi=[22,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:08 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 33 pg[6.b( empty local-lis/les=22/23 n=0 ec=32/22 lis/c=22/22 les/c/f=23/23/0 sis=32) [0] r=0 lpr=32 pi=[22,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:08 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 33 pg[6.a( empty local-lis/les=22/23 n=0 ec=32/22 lis/c=22/22 les/c/f=23/23/0 sis=32) [0] r=0 lpr=32 pi=[22,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:08 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 33 pg[6.15( empty local-lis/les=22/23 n=0 ec=32/22 lis/c=22/22 les/c/f=23/23/0 sis=32) [0] r=0 lpr=32 pi=[22,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:08 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 33 pg[6.14( empty local-lis/les=22/23 n=0 ec=32/22 lis/c=22/22 les/c/f=23/23/0 sis=32) [0] r=0 lpr=32 pi=[22,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:08 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 33 pg[6.17( empty local-lis/les=22/23 n=0 ec=32/22 lis/c=22/22 les/c/f=23/23/0 sis=32) [0] r=0 lpr=32 pi=[22,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:08 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 33 pg[6.16( empty local-lis/les=22/23 n=0 ec=32/22 lis/c=22/22 les/c/f=23/23/0 sis=32) [0] r=0 lpr=32 pi=[22,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:08 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 33 pg[6.10( empty local-lis/les=22/23 n=0 ec=32/22 lis/c=22/22 les/c/f=23/23/0 sis=32) [0] r=0 lpr=32 pi=[22,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:08 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 33 pg[6.13( empty local-lis/les=22/23 n=0 ec=32/22 lis/c=22/22 les/c/f=23/23/0 sis=32) [0] r=0 lpr=32 pi=[22,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:08 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 33 pg[6.12( empty local-lis/les=22/23 n=0 ec=32/22 lis/c=22/22 les/c/f=23/23/0 sis=32) [0] r=0 lpr=32 pi=[22,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:08 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 33 pg[6.1d( empty local-lis/les=22/23 n=0 ec=32/22 lis/c=22/22 les/c/f=23/23/0 sis=32) [0] r=0 lpr=32 pi=[22,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:08 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 33 pg[6.1c( empty local-lis/les=22/23 n=0 ec=32/22 lis/c=22/22 les/c/f=23/23/0 sis=32) [0] r=0 lpr=32 pi=[22,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:08 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 33 pg[6.11( empty local-lis/les=22/23 n=0 ec=32/22 lis/c=22/22 les/c/f=23/23/0 sis=32) [0] r=0 lpr=32 pi=[22,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:08 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 33 pg[6.1a( empty local-lis/les=32/33 n=0 ec=32/22 lis/c=22/22 les/c/f=23/23/0 sis=32) [0] r=0 lpr=32 pi=[22,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:08 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 33 pg[6.18( empty local-lis/les=32/33 n=0 ec=32/22 lis/c=22/22 les/c/f=23/23/0 sis=32) [0] r=0 lpr=32 pi=[22,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:08 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 33 pg[6.19( empty local-lis/les=32/33 n=0 ec=32/22 lis/c=22/22 les/c/f=23/23/0 sis=32) [0] r=0 lpr=32 pi=[22,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:08 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 33 pg[6.1( empty local-lis/les=32/33 n=0 ec=32/22 lis/c=22/22 les/c/f=23/23/0 sis=32) [0] r=0 lpr=32 pi=[22,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:08 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 33 pg[6.1f( empty local-lis/les=32/33 n=0 ec=32/22 lis/c=22/22 les/c/f=23/23/0 sis=32) [0] r=0 lpr=32 pi=[22,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:08 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 33 pg[6.c( empty local-lis/les=32/33 n=0 ec=32/22 lis/c=22/22 les/c/f=23/23/0 sis=32) [0] r=0 lpr=32 pi=[22,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:08 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 33 pg[6.d( empty local-lis/les=32/33 n=0 ec=32/22 lis/c=22/22 les/c/f=23/23/0 sis=32) [0] r=0 lpr=32 pi=[22,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:08 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 33 pg[6.6( empty local-lis/les=32/33 n=0 ec=32/22 lis/c=22/22 les/c/f=23/23/0 sis=32) [0] r=0 lpr=32 pi=[22,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:08 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 33 pg[6.1b( empty local-lis/les=32/33 n=0 ec=32/22 lis/c=22/22 les/c/f=23/23/0 sis=32) [0] r=0 lpr=32 pi=[22,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:08 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 33 pg[6.0( empty local-lis/les=32/33 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=32) [0] r=0 lpr=32 pi=[22,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:08 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 33 pg[6.7( empty local-lis/les=32/33 n=0 ec=32/22 lis/c=22/22 les/c/f=23/23/0 sis=32) [0] r=0 lpr=32 pi=[22,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:08 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 33 pg[6.3( empty local-lis/les=32/33 n=0 ec=32/22 lis/c=22/22 les/c/f=23/23/0 sis=32) [0] r=0 lpr=32 pi=[22,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:08 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 33 pg[6.5( empty local-lis/les=32/33 n=0 ec=32/22 lis/c=22/22 les/c/f=23/23/0 sis=32) [0] r=0 lpr=32 pi=[22,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:08 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 33 pg[6.1e( empty local-lis/les=32/33 n=0 ec=32/22 lis/c=22/22 les/c/f=23/23/0 sis=32) [0] r=0 lpr=32 pi=[22,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:08 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 33 pg[6.f( empty local-lis/les=32/33 n=0 ec=32/22 lis/c=22/22 les/c/f=23/23/0 sis=32) [0] r=0 lpr=32 pi=[22,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:08 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 33 pg[6.4( empty local-lis/les=32/33 n=0 ec=32/22 lis/c=22/22 les/c/f=23/23/0 sis=32) [0] r=0 lpr=32 pi=[22,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:08 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 33 pg[6.e( empty local-lis/les=32/33 n=0 ec=32/22 lis/c=22/22 les/c/f=23/23/0 sis=32) [0] r=0 lpr=32 pi=[22,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:08 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 33 pg[6.2( empty local-lis/les=32/33 n=0 ec=32/22 lis/c=22/22 les/c/f=23/23/0 sis=32) [0] r=0 lpr=32 pi=[22,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:08 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 33 pg[6.9( empty local-lis/les=32/33 n=0 ec=32/22 lis/c=22/22 les/c/f=23/23/0 sis=32) [0] r=0 lpr=32 pi=[22,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:08 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 33 pg[6.b( empty local-lis/les=32/33 n=0 ec=32/22 lis/c=22/22 les/c/f=23/23/0 sis=32) [0] r=0 lpr=32 pi=[22,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:08 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 33 pg[6.15( empty local-lis/les=32/33 n=0 ec=32/22 lis/c=22/22 les/c/f=23/23/0 sis=32) [0] r=0 lpr=32 pi=[22,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:08 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 33 pg[6.a( empty local-lis/les=32/33 n=0 ec=32/22 lis/c=22/22 les/c/f=23/23/0 sis=32) [0] r=0 lpr=32 pi=[22,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:08 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 33 pg[6.17( empty local-lis/les=32/33 n=0 ec=32/22 lis/c=22/22 les/c/f=23/23/0 sis=32) [0] r=0 lpr=32 pi=[22,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:08 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 33 pg[6.14( empty local-lis/les=32/33 n=0 ec=32/22 lis/c=22/22 les/c/f=23/23/0 sis=32) [0] r=0 lpr=32 pi=[22,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:08 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 33 pg[6.16( empty local-lis/les=32/33 n=0 ec=32/22 lis/c=22/22 les/c/f=23/23/0 sis=32) [0] r=0 lpr=32 pi=[22,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:08 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 33 pg[6.13( empty local-lis/les=32/33 n=0 ec=32/22 lis/c=22/22 les/c/f=23/23/0 sis=32) [0] r=0 lpr=32 pi=[22,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:08 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 33 pg[6.8( empty local-lis/les=32/33 n=0 ec=32/22 lis/c=22/22 les/c/f=23/23/0 sis=32) [0] r=0 lpr=32 pi=[22,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:08 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 33 pg[6.1d( empty local-lis/les=32/33 n=0 ec=32/22 lis/c=22/22 les/c/f=23/23/0 sis=32) [0] r=0 lpr=32 pi=[22,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:08 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 33 pg[6.10( empty local-lis/les=32/33 n=0 ec=32/22 lis/c=22/22 les/c/f=23/23/0 sis=32) [0] r=0 lpr=32 pi=[22,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:08 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 33 pg[6.12( empty local-lis/les=32/33 n=0 ec=32/22 lis/c=22/22 les/c/f=23/23/0 sis=32) [0] r=0 lpr=32 pi=[22,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:08 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 33 pg[6.1c( empty local-lis/les=32/33 n=0 ec=32/22 lis/c=22/22 les/c/f=23/23/0 sis=32) [0] r=0 lpr=32 pi=[22,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:08 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 33 pg[6.11( empty local-lis/les=32/33 n=0 ec=32/22 lis/c=22/22 les/c/f=23/23/0 sis=32) [0] r=0 lpr=32 pi=[22,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:08 compute-0 sudo[88415]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-txmhgtdwsythwxpkgbuofemjkmebfhbk ; /usr/bin/python3'
Jan 20 13:58:08 compute-0 sudo[88415]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:58:09 compute-0 python3[88427]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.data cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 13:58:09 compute-0 podman[88444]: 2026-01-20 13:58:09.016447018 +0000 UTC m=+0.060459086 container create 33945383f2262efeaaf2dd6e08c57d9ef4f19a3630f17184c67a7ba3a0798c21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_pike, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 13:58:09 compute-0 systemd[1]: Started libpod-conmon-33945383f2262efeaaf2dd6e08c57d9ef4f19a3630f17184c67a7ba3a0798c21.scope.
Jan 20 13:58:09 compute-0 systemd[1]: Started libcrun container.
Jan 20 13:58:09 compute-0 podman[88444]: 2026-01-20 13:58:09.079162924 +0000 UTC m=+0.123175042 container init 33945383f2262efeaaf2dd6e08c57d9ef4f19a3630f17184c67a7ba3a0798c21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_pike, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 20 13:58:09 compute-0 podman[88458]: 2026-01-20 13:58:09.082945144 +0000 UTC m=+0.059768158 container create 2da78b355dcb43f60f03370959ef58dfc0172db39848f019716849f7ddb52ed5 (image=quay.io/ceph/ceph:v18, name=silly_montalcini, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3)
Jan 20 13:58:09 compute-0 podman[88444]: 2026-01-20 13:58:08.991129376 +0000 UTC m=+0.035141534 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 13:58:09 compute-0 podman[88444]: 2026-01-20 13:58:09.084928297 +0000 UTC m=+0.128940395 container start 33945383f2262efeaaf2dd6e08c57d9ef4f19a3630f17184c67a7ba3a0798c21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_pike, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 20 13:58:09 compute-0 podman[88444]: 2026-01-20 13:58:09.088496472 +0000 UTC m=+0.132508590 container attach 33945383f2262efeaaf2dd6e08c57d9ef4f19a3630f17184c67a7ba3a0798c21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_pike, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 13:58:09 compute-0 sharp_pike[88471]: 167 167
Jan 20 13:58:09 compute-0 systemd[1]: libpod-33945383f2262efeaaf2dd6e08c57d9ef4f19a3630f17184c67a7ba3a0798c21.scope: Deactivated successfully.
Jan 20 13:58:09 compute-0 podman[88444]: 2026-01-20 13:58:09.090609898 +0000 UTC m=+0.134621976 container died 33945383f2262efeaaf2dd6e08c57d9ef4f19a3630f17184c67a7ba3a0798c21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_pike, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 20 13:58:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-5c7863c0fe0b5acd8061e4b4b2410ca47892bd646e756e987ac711b9a95c1f21-merged.mount: Deactivated successfully.
Jan 20 13:58:09 compute-0 systemd[1]: Started libpod-conmon-2da78b355dcb43f60f03370959ef58dfc0172db39848f019716849f7ddb52ed5.scope.
Jan 20 13:58:09 compute-0 podman[88444]: 2026-01-20 13:58:09.134043222 +0000 UTC m=+0.178055300 container remove 33945383f2262efeaaf2dd6e08c57d9ef4f19a3630f17184c67a7ba3a0798c21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_pike, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 20 13:58:09 compute-0 systemd[1]: libpod-conmon-33945383f2262efeaaf2dd6e08c57d9ef4f19a3630f17184c67a7ba3a0798c21.scope: Deactivated successfully.
Jan 20 13:58:09 compute-0 podman[88458]: 2026-01-20 13:58:09.05453956 +0000 UTC m=+0.031362624 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 20 13:58:09 compute-0 systemd[1]: Started libcrun container.
Jan 20 13:58:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/317795257ecd7f3a0fd31afb0e9b2087d7a5d32d39363a2b065acf37f80d3f33/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 13:58:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/317795257ecd7f3a0fd31afb0e9b2087d7a5d32d39363a2b065acf37f80d3f33/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 13:58:09 compute-0 podman[88458]: 2026-01-20 13:58:09.176189821 +0000 UTC m=+0.153012805 container init 2da78b355dcb43f60f03370959ef58dfc0172db39848f019716849f7ddb52ed5 (image=quay.io/ceph/ceph:v18, name=silly_montalcini, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 20 13:58:09 compute-0 podman[88458]: 2026-01-20 13:58:09.18068223 +0000 UTC m=+0.157505234 container start 2da78b355dcb43f60f03370959ef58dfc0172db39848f019716849f7ddb52ed5 (image=quay.io/ceph/ceph:v18, name=silly_montalcini, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 20 13:58:09 compute-0 podman[88458]: 2026-01-20 13:58:09.18369974 +0000 UTC m=+0.160522754 container attach 2da78b355dcb43f60f03370959ef58dfc0172db39848f019716849f7ddb52ed5 (image=quay.io/ceph/ceph:v18, name=silly_montalcini, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True)
Jan 20 13:58:09 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v94: 193 pgs: 1 peering, 62 unknown, 130 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 20 13:58:09 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:09 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:09 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:09 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:09 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 13:58:09 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 13:58:09 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 13:58:09 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 13:58:09 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 13:58:09 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:09 compute-0 ceph-mon[74360]: osdmap e33: 2 total, 2 up, 2 in
Jan 20 13:58:09 compute-0 podman[88501]: 2026-01-20 13:58:09.316894059 +0000 UTC m=+0.053226276 container create c79cd041a049a1c6a9a0e8249e378400df6ac7f0095058432fcbba48a9cb3c82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_hoover, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 20 13:58:09 compute-0 systemd[1]: Started libpod-conmon-c79cd041a049a1c6a9a0e8249e378400df6ac7f0095058432fcbba48a9cb3c82.scope.
Jan 20 13:58:09 compute-0 podman[88501]: 2026-01-20 13:58:09.289039509 +0000 UTC m=+0.025371786 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 13:58:09 compute-0 systemd[1]: Started libcrun container.
Jan 20 13:58:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3565a4c9de005b2d0650666a71b0fc1c4d815d19fbc9a6276c763dac0431b887/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 13:58:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3565a4c9de005b2d0650666a71b0fc1c4d815d19fbc9a6276c763dac0431b887/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 13:58:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3565a4c9de005b2d0650666a71b0fc1c4d815d19fbc9a6276c763dac0431b887/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 13:58:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3565a4c9de005b2d0650666a71b0fc1c4d815d19fbc9a6276c763dac0431b887/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 13:58:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3565a4c9de005b2d0650666a71b0fc1c4d815d19fbc9a6276c763dac0431b887/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 13:58:09 compute-0 podman[88501]: 2026-01-20 13:58:09.412172129 +0000 UTC m=+0.148504406 container init c79cd041a049a1c6a9a0e8249e378400df6ac7f0095058432fcbba48a9cb3c82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_hoover, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 20 13:58:09 compute-0 podman[88501]: 2026-01-20 13:58:09.428757579 +0000 UTC m=+0.165089806 container start c79cd041a049a1c6a9a0e8249e378400df6ac7f0095058432fcbba48a9cb3c82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_hoover, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 13:58:09 compute-0 podman[88501]: 2026-01-20 13:58:09.432439987 +0000 UTC m=+0.168772204 container attach c79cd041a049a1c6a9a0e8249e378400df6ac7f0095058432fcbba48a9cb3c82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_hoover, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 13:58:09 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} v 0) v1
Jan 20 13:58:09 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3413961177' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Jan 20 13:58:09 compute-0 ceph-mon[74360]: log_channel(cluster) log [WRN] : Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 20 13:58:10 compute-0 modest_hoover[88517]: --> passed data devices: 0 physical, 1 LVM
Jan 20 13:58:10 compute-0 modest_hoover[88517]: --> relative data size: 1.0
Jan 20 13:58:10 compute-0 modest_hoover[88517]: --> All data devices are unavailable
Jan 20 13:58:10 compute-0 systemd[1]: libpod-c79cd041a049a1c6a9a0e8249e378400df6ac7f0095058432fcbba48a9cb3c82.scope: Deactivated successfully.
Jan 20 13:58:10 compute-0 podman[88501]: 2026-01-20 13:58:10.248854531 +0000 UTC m=+0.985186758 container died c79cd041a049a1c6a9a0e8249e378400df6ac7f0095058432fcbba48a9cb3c82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_hoover, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 13:58:10 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e33 do_prune osdmap full prune enabled
Jan 20 13:58:10 compute-0 ceph-mon[74360]: pgmap v94: 193 pgs: 1 peering, 62 unknown, 130 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 20 13:58:10 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3413961177' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Jan 20 13:58:10 compute-0 ceph-mon[74360]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 20 13:58:10 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3413961177' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Jan 20 13:58:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-3565a4c9de005b2d0650666a71b0fc1c4d815d19fbc9a6276c763dac0431b887-merged.mount: Deactivated successfully.
Jan 20 13:58:10 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e34 e34: 2 total, 2 up, 2 in
Jan 20 13:58:10 compute-0 silly_montalcini[88490]: enabled application 'cephfs' on pool 'cephfs.cephfs.data'
Jan 20 13:58:10 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e34: 2 total, 2 up, 2 in
Jan 20 13:58:10 compute-0 systemd[1]: libpod-2da78b355dcb43f60f03370959ef58dfc0172db39848f019716849f7ddb52ed5.scope: Deactivated successfully.
Jan 20 13:58:10 compute-0 podman[88458]: 2026-01-20 13:58:10.313255292 +0000 UTC m=+1.290078306 container died 2da78b355dcb43f60f03370959ef58dfc0172db39848f019716849f7ddb52ed5 (image=quay.io/ceph/ceph:v18, name=silly_montalcini, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 13:58:10 compute-0 podman[88501]: 2026-01-20 13:58:10.330740807 +0000 UTC m=+1.067073034 container remove c79cd041a049a1c6a9a0e8249e378400df6ac7f0095058432fcbba48a9cb3c82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_hoover, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 20 13:58:10 compute-0 systemd[1]: libpod-conmon-c79cd041a049a1c6a9a0e8249e378400df6ac7f0095058432fcbba48a9cb3c82.scope: Deactivated successfully.
Jan 20 13:58:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-317795257ecd7f3a0fd31afb0e9b2087d7a5d32d39363a2b065acf37f80d3f33-merged.mount: Deactivated successfully.
Jan 20 13:58:10 compute-0 podman[88458]: 2026-01-20 13:58:10.365327345 +0000 UTC m=+1.342150329 container remove 2da78b355dcb43f60f03370959ef58dfc0172db39848f019716849f7ddb52ed5 (image=quay.io/ceph/ceph:v18, name=silly_montalcini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 20 13:58:10 compute-0 sudo[88354]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:10 compute-0 systemd[1]: libpod-conmon-2da78b355dcb43f60f03370959ef58dfc0172db39848f019716849f7ddb52ed5.scope: Deactivated successfully.
Jan 20 13:58:10 compute-0 sudo[88415]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:10 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd new", "uuid": "36aff1f5-bc44-4633-b417-95c5b1ee6391"} v 0) v1
Jan 20 13:58:10 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "36aff1f5-bc44-4633-b417-95c5b1ee6391"}]: dispatch
Jan 20 13:58:10 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e34 do_prune osdmap full prune enabled
Jan 20 13:58:10 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "36aff1f5-bc44-4633-b417-95c5b1ee6391"}]': finished
Jan 20 13:58:10 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e35 e35: 3 total, 2 up, 3 in
Jan 20 13:58:10 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e35: 3 total, 2 up, 3 in
Jan 20 13:58:10 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 20 13:58:10 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 20 13:58:10 compute-0 ceph-mgr[74653]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 20 13:58:10 compute-0 sudo[88576]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:58:10 compute-0 sudo[88576]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:58:10 compute-0 sudo[88576]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:10 compute-0 sudo[88601]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 13:58:10 compute-0 sudo[88601]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:58:10 compute-0 sudo[88601]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:10 compute-0 sudo[88626]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:58:10 compute-0 sudo[88626]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:58:10 compute-0 sudo[88626]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:10 compute-0 sudo[88651]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- lvm list --format json
Jan 20 13:58:10 compute-0 sudo[88651]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:58:10 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 3.3 scrub starts
Jan 20 13:58:10 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 3.3 scrub ok
Jan 20 13:58:11 compute-0 podman[88715]: 2026-01-20 13:58:11.061534157 +0000 UTC m=+0.057025836 container create 89e0b66a4d2f5bb3df11f7027624ae2abb12f08dbb5379e9f50dd855141e7268 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_elbakyan, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 13:58:11 compute-0 systemd[1]: Started libpod-conmon-89e0b66a4d2f5bb3df11f7027624ae2abb12f08dbb5379e9f50dd855141e7268.scope.
Jan 20 13:58:11 compute-0 podman[88715]: 2026-01-20 13:58:11.042247824 +0000 UTC m=+0.037739483 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 13:58:11 compute-0 systemd[1]: Started libcrun container.
Jan 20 13:58:11 compute-0 podman[88715]: 2026-01-20 13:58:11.174202719 +0000 UTC m=+0.169694458 container init 89e0b66a4d2f5bb3df11f7027624ae2abb12f08dbb5379e9f50dd855141e7268 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_elbakyan, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 13:58:11 compute-0 podman[88715]: 2026-01-20 13:58:11.180100965 +0000 UTC m=+0.175592644 container start 89e0b66a4d2f5bb3df11f7027624ae2abb12f08dbb5379e9f50dd855141e7268 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_elbakyan, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 13:58:11 compute-0 youthful_elbakyan[88761]: 167 167
Jan 20 13:58:11 compute-0 systemd[1]: libpod-89e0b66a4d2f5bb3df11f7027624ae2abb12f08dbb5379e9f50dd855141e7268.scope: Deactivated successfully.
Jan 20 13:58:11 compute-0 podman[88715]: 2026-01-20 13:58:11.1874309 +0000 UTC m=+0.182922579 container attach 89e0b66a4d2f5bb3df11f7027624ae2abb12f08dbb5379e9f50dd855141e7268 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_elbakyan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 13:58:11 compute-0 podman[88715]: 2026-01-20 13:58:11.187872812 +0000 UTC m=+0.183364491 container died 89e0b66a4d2f5bb3df11f7027624ae2abb12f08dbb5379e9f50dd855141e7268 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_elbakyan, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef)
Jan 20 13:58:11 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v97: 193 pgs: 1 peering, 31 unknown, 161 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 20 13:58:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-dafcec21ce8cddc775e0d5c9103e156ef3f44feb9eff5752db73b1aa39cc3726-merged.mount: Deactivated successfully.
Jan 20 13:58:11 compute-0 podman[88715]: 2026-01-20 13:58:11.232513578 +0000 UTC m=+0.228005227 container remove 89e0b66a4d2f5bb3df11f7027624ae2abb12f08dbb5379e9f50dd855141e7268 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_elbakyan, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 20 13:58:11 compute-0 systemd[1]: libpod-conmon-89e0b66a4d2f5bb3df11f7027624ae2abb12f08dbb5379e9f50dd855141e7268.scope: Deactivated successfully.
Jan 20 13:58:11 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3413961177' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Jan 20 13:58:11 compute-0 ceph-mon[74360]: osdmap e34: 2 total, 2 up, 2 in
Jan 20 13:58:11 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/3257799028' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "36aff1f5-bc44-4633-b417-95c5b1ee6391"}]: dispatch
Jan 20 13:58:11 compute-0 ceph-mon[74360]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "36aff1f5-bc44-4633-b417-95c5b1ee6391"}]: dispatch
Jan 20 13:58:11 compute-0 ceph-mon[74360]: from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "36aff1f5-bc44-4633-b417-95c5b1ee6391"}]': finished
Jan 20 13:58:11 compute-0 ceph-mon[74360]: osdmap e35: 3 total, 2 up, 3 in
Jan 20 13:58:11 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 20 13:58:11 compute-0 ceph-mon[74360]: 3.3 scrub starts
Jan 20 13:58:11 compute-0 ceph-mon[74360]: 3.3 scrub ok
Jan 20 13:58:11 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/672023475' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Jan 20 13:58:11 compute-0 python3[88825]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_rgw.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 20 13:58:11 compute-0 ceph-mon[74360]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Jan 20 13:58:11 compute-0 ceph-mon[74360]: log_channel(cluster) log [INF] : Cluster is now healthy
Jan 20 13:58:11 compute-0 podman[88831]: 2026-01-20 13:58:11.435798977 +0000 UTC m=+0.054921469 container create 08fbda691feac2e08f76397b82477ea5ad68c61d840a12b4e337a85e2956c43b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_yonath, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 13:58:11 compute-0 systemd[1]: Started libpod-conmon-08fbda691feac2e08f76397b82477ea5ad68c61d840a12b4e337a85e2956c43b.scope.
Jan 20 13:58:11 compute-0 systemd[1]: Started libcrun container.
Jan 20 13:58:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c26d82bd65e21f6b2c1a967ebfdac1df881fdcd36c0c1355a7b1023112ee327/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 13:58:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c26d82bd65e21f6b2c1a967ebfdac1df881fdcd36c0c1355a7b1023112ee327/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 13:58:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c26d82bd65e21f6b2c1a967ebfdac1df881fdcd36c0c1355a7b1023112ee327/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 13:58:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c26d82bd65e21f6b2c1a967ebfdac1df881fdcd36c0c1355a7b1023112ee327/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 13:58:11 compute-0 podman[88831]: 2026-01-20 13:58:11.407581008 +0000 UTC m=+0.026703570 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 13:58:11 compute-0 podman[88831]: 2026-01-20 13:58:11.51761764 +0000 UTC m=+0.136740142 container init 08fbda691feac2e08f76397b82477ea5ad68c61d840a12b4e337a85e2956c43b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_yonath, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 20 13:58:11 compute-0 podman[88831]: 2026-01-20 13:58:11.523896967 +0000 UTC m=+0.143019489 container start 08fbda691feac2e08f76397b82477ea5ad68c61d840a12b4e337a85e2956c43b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_yonath, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 13:58:11 compute-0 podman[88831]: 2026-01-20 13:58:11.52777096 +0000 UTC m=+0.146893452 container attach 08fbda691feac2e08f76397b82477ea5ad68c61d840a12b4e337a85e2956c43b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_yonath, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 20 13:58:11 compute-0 python3[88922]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1768917491.088713-37336-119519879458344/source dest=/tmp/ceph_rgw.yml mode=0644 force=True follow=False _original_basename=ceph_rgw.yml.j2 checksum=ad866aa1f51f395809dd7ac5cb7a56d43c167b49 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:58:12 compute-0 boring_yonath[88868]: {
Jan 20 13:58:12 compute-0 boring_yonath[88868]:     "0": [
Jan 20 13:58:12 compute-0 boring_yonath[88868]:         {
Jan 20 13:58:12 compute-0 boring_yonath[88868]:             "devices": [
Jan 20 13:58:12 compute-0 boring_yonath[88868]:                 "/dev/loop3"
Jan 20 13:58:12 compute-0 boring_yonath[88868]:             ],
Jan 20 13:58:12 compute-0 boring_yonath[88868]:             "lv_name": "ceph_lv0",
Jan 20 13:58:12 compute-0 boring_yonath[88868]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 13:58:12 compute-0 boring_yonath[88868]:             "lv_size": "7511998464",
Jan 20 13:58:12 compute-0 boring_yonath[88868]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e399cf45-e6b6-5393-99f1-75c601d3f188,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afc3740a-ccee-46f8-b343-22648cc89376,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 20 13:58:12 compute-0 boring_yonath[88868]:             "lv_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 13:58:12 compute-0 boring_yonath[88868]:             "name": "ceph_lv0",
Jan 20 13:58:12 compute-0 boring_yonath[88868]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 13:58:12 compute-0 boring_yonath[88868]:             "tags": {
Jan 20 13:58:12 compute-0 boring_yonath[88868]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 13:58:12 compute-0 boring_yonath[88868]:                 "ceph.block_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 13:58:12 compute-0 sudo[89026]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kkucvscwprdugitiqmkdmbftjfkskazw ; /usr/bin/python3'
Jan 20 13:58:12 compute-0 boring_yonath[88868]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 13:58:12 compute-0 boring_yonath[88868]:                 "ceph.cluster_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 13:58:12 compute-0 boring_yonath[88868]:                 "ceph.cluster_name": "ceph",
Jan 20 13:58:12 compute-0 boring_yonath[88868]:                 "ceph.crush_device_class": "",
Jan 20 13:58:12 compute-0 boring_yonath[88868]:                 "ceph.encrypted": "0",
Jan 20 13:58:12 compute-0 boring_yonath[88868]:                 "ceph.osd_fsid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 13:58:12 compute-0 boring_yonath[88868]:                 "ceph.osd_id": "0",
Jan 20 13:58:12 compute-0 boring_yonath[88868]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 13:58:12 compute-0 boring_yonath[88868]:                 "ceph.type": "block",
Jan 20 13:58:12 compute-0 boring_yonath[88868]:                 "ceph.vdo": "0"
Jan 20 13:58:12 compute-0 boring_yonath[88868]:             },
Jan 20 13:58:12 compute-0 boring_yonath[88868]:             "type": "block",
Jan 20 13:58:12 compute-0 boring_yonath[88868]:             "vg_name": "ceph_vg0"
Jan 20 13:58:12 compute-0 boring_yonath[88868]:         }
Jan 20 13:58:12 compute-0 boring_yonath[88868]:     ]
Jan 20 13:58:12 compute-0 boring_yonath[88868]: }
Jan 20 13:58:12 compute-0 sudo[89026]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:58:12 compute-0 systemd[1]: libpod-08fbda691feac2e08f76397b82477ea5ad68c61d840a12b4e337a85e2956c43b.scope: Deactivated successfully.
Jan 20 13:58:12 compute-0 podman[88831]: 2026-01-20 13:58:12.288234848 +0000 UTC m=+0.907357380 container died 08fbda691feac2e08f76397b82477ea5ad68c61d840a12b4e337a85e2956c43b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_yonath, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 20 13:58:12 compute-0 ceph-mon[74360]: pgmap v97: 193 pgs: 1 peering, 31 unknown, 161 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 20 13:58:12 compute-0 ceph-mon[74360]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Jan 20 13:58:12 compute-0 ceph-mon[74360]: Cluster is now healthy
Jan 20 13:58:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-1c26d82bd65e21f6b2c1a967ebfdac1df881fdcd36c0c1355a7b1023112ee327-merged.mount: Deactivated successfully.
Jan 20 13:58:12 compute-0 podman[88831]: 2026-01-20 13:58:12.356206113 +0000 UTC m=+0.975328625 container remove 08fbda691feac2e08f76397b82477ea5ad68c61d840a12b4e337a85e2956c43b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_yonath, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 20 13:58:12 compute-0 systemd[1]: libpod-conmon-08fbda691feac2e08f76397b82477ea5ad68c61d840a12b4e337a85e2956c43b.scope: Deactivated successfully.
Jan 20 13:58:12 compute-0 sudo[88651]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:12 compute-0 sudo[89043]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:58:12 compute-0 python3[89028]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 20 13:58:12 compute-0 sudo[89043]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:58:12 compute-0 sudo[89043]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:12 compute-0 sudo[89026]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:12 compute-0 sudo[89070]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 13:58:12 compute-0 sudo[89070]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:58:12 compute-0 sudo[89070]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:12 compute-0 sudo[89116]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:58:12 compute-0 sudo[89116]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:58:12 compute-0 sudo[89116]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:12 compute-0 sudo[89161]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- raw list --format json
Jan 20 13:58:12 compute-0 sudo[89161]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:58:12 compute-0 sudo[89215]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ggcbuxobgqiojfmqfcpifgpfngomtzeg ; /usr/bin/python3'
Jan 20 13:58:12 compute-0 sudo[89215]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:58:12 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e35 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 13:58:12 compute-0 python3[89217]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1768917492.080575-37350-20921070281094/source dest=/home/ceph-admin/assimilate_ceph.conf owner=167 group=167 mode=0644 follow=False _original_basename=ceph_rgw.conf.j2 checksum=95189e8600669f9d289aebd9dff060e5ee33f69f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:58:12 compute-0 sudo[89215]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:13 compute-0 podman[89282]: 2026-01-20 13:58:13.093005772 +0000 UTC m=+0.065701605 container create ed0587c0a90cfaceef4a1a759b2066a1e5a94375d8df3977ae53ebc86d21d64c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_taussig, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 20 13:58:13 compute-0 systemd[1]: Started libpod-conmon-ed0587c0a90cfaceef4a1a759b2066a1e5a94375d8df3977ae53ebc86d21d64c.scope.
Jan 20 13:58:13 compute-0 sudo[89321]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xnlskxqjtixkrbvsnwwqxwzhwxgqwsie ; /usr/bin/python3'
Jan 20 13:58:13 compute-0 sudo[89321]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:58:13 compute-0 podman[89282]: 2026-01-20 13:58:13.067710211 +0000 UTC m=+0.040406074 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 13:58:13 compute-0 systemd[1]: Started libcrun container.
Jan 20 13:58:13 compute-0 podman[89282]: 2026-01-20 13:58:13.179738986 +0000 UTC m=+0.152434879 container init ed0587c0a90cfaceef4a1a759b2066a1e5a94375d8df3977ae53ebc86d21d64c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_taussig, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0)
Jan 20 13:58:13 compute-0 podman[89282]: 2026-01-20 13:58:13.19154049 +0000 UTC m=+0.164236343 container start ed0587c0a90cfaceef4a1a759b2066a1e5a94375d8df3977ae53ebc86d21d64c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_taussig, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 13:58:13 compute-0 podman[89282]: 2026-01-20 13:58:13.195070933 +0000 UTC m=+0.167766796 container attach ed0587c0a90cfaceef4a1a759b2066a1e5a94375d8df3977ae53ebc86d21d64c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_taussig, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 13:58:13 compute-0 crazy_taussig[89322]: 167 167
Jan 20 13:58:13 compute-0 podman[89282]: 2026-01-20 13:58:13.197349664 +0000 UTC m=+0.170045507 container died ed0587c0a90cfaceef4a1a759b2066a1e5a94375d8df3977ae53ebc86d21d64c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_taussig, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 13:58:13 compute-0 systemd[1]: libpod-ed0587c0a90cfaceef4a1a759b2066a1e5a94375d8df3977ae53ebc86d21d64c.scope: Deactivated successfully.
Jan 20 13:58:13 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v98: 193 pgs: 1 peering, 31 unknown, 161 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 20 13:58:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-582e0ceb9133ec88fe2b2ceadb9a96e9543f7dac532fe6b77f6ad566f7801afd-merged.mount: Deactivated successfully.
Jan 20 13:58:13 compute-0 podman[89282]: 2026-01-20 13:58:13.241006304 +0000 UTC m=+0.213702157 container remove ed0587c0a90cfaceef4a1a759b2066a1e5a94375d8df3977ae53ebc86d21d64c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_taussig, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507)
Jan 20 13:58:13 compute-0 systemd[1]: libpod-conmon-ed0587c0a90cfaceef4a1a759b2066a1e5a94375d8df3977ae53ebc86d21d64c.scope: Deactivated successfully.
Jan 20 13:58:13 compute-0 python3[89326]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config assimilate-conf -i /home/assimilate_ceph.conf
                                           _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 13:58:13 compute-0 podman[89343]: 2026-01-20 13:58:13.412900689 +0000 UTC m=+0.060077726 container create 37586776a0a29c8c0c308675c5c79f92d463ba49d61d0300470968749caf76be (image=quay.io/ceph/ceph:v18, name=magical_kapitsa, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 20 13:58:13 compute-0 systemd[1]: Started libpod-conmon-37586776a0a29c8c0c308675c5c79f92d463ba49d61d0300470968749caf76be.scope.
Jan 20 13:58:13 compute-0 podman[89359]: 2026-01-20 13:58:13.467164231 +0000 UTC m=+0.063422146 container create 445e9ebcb753a390f5213eaab0e819527d1c4bf075ce73f4b033a4a16d48ef7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_bose, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 13:58:13 compute-0 podman[89343]: 2026-01-20 13:58:13.382662156 +0000 UTC m=+0.029839233 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 20 13:58:13 compute-0 systemd[1]: Started libcrun container.
Jan 20 13:58:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3058b27ea9be6f3dc9aed3c3d7028212dfb512f929c8ef53211dfa34af9a76a1/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 20 13:58:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3058b27ea9be6f3dc9aed3c3d7028212dfb512f929c8ef53211dfa34af9a76a1/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 13:58:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3058b27ea9be6f3dc9aed3c3d7028212dfb512f929c8ef53211dfa34af9a76a1/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 13:58:13 compute-0 systemd[1]: Started libpod-conmon-445e9ebcb753a390f5213eaab0e819527d1c4bf075ce73f4b033a4a16d48ef7e.scope.
Jan 20 13:58:13 compute-0 podman[89343]: 2026-01-20 13:58:13.524203275 +0000 UTC m=+0.171380362 container init 37586776a0a29c8c0c308675c5c79f92d463ba49d61d0300470968749caf76be (image=quay.io/ceph/ceph:v18, name=magical_kapitsa, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 13:58:13 compute-0 podman[89359]: 2026-01-20 13:58:13.433805865 +0000 UTC m=+0.030063810 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 13:58:13 compute-0 podman[89343]: 2026-01-20 13:58:13.537493159 +0000 UTC m=+0.184670186 container start 37586776a0a29c8c0c308675c5c79f92d463ba49d61d0300470968749caf76be (image=quay.io/ceph/ceph:v18, name=magical_kapitsa, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 13:58:13 compute-0 podman[89343]: 2026-01-20 13:58:13.541714021 +0000 UTC m=+0.188891098 container attach 37586776a0a29c8c0c308675c5c79f92d463ba49d61d0300470968749caf76be (image=quay.io/ceph/ceph:v18, name=magical_kapitsa, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 20 13:58:13 compute-0 systemd[1]: Started libcrun container.
Jan 20 13:58:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/688639e123e17c7dec39851065864b1e69ff301eb5eed6ce67f551fa45816136/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 13:58:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/688639e123e17c7dec39851065864b1e69ff301eb5eed6ce67f551fa45816136/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 13:58:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/688639e123e17c7dec39851065864b1e69ff301eb5eed6ce67f551fa45816136/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 13:58:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/688639e123e17c7dec39851065864b1e69ff301eb5eed6ce67f551fa45816136/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 13:58:13 compute-0 podman[89359]: 2026-01-20 13:58:13.570940747 +0000 UTC m=+0.167198732 container init 445e9ebcb753a390f5213eaab0e819527d1c4bf075ce73f4b033a4a16d48ef7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_bose, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 20 13:58:13 compute-0 podman[89359]: 2026-01-20 13:58:13.58424654 +0000 UTC m=+0.180504485 container start 445e9ebcb753a390f5213eaab0e819527d1c4bf075ce73f4b033a4a16d48ef7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_bose, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 13:58:13 compute-0 podman[89359]: 2026-01-20 13:58:13.588954016 +0000 UTC m=+0.185211971 container attach 445e9ebcb753a390f5213eaab0e819527d1c4bf075ce73f4b033a4a16d48ef7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_bose, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 20 13:58:13 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 3.4 scrub starts
Jan 20 13:58:13 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 3.4 scrub ok
Jan 20 13:58:14 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Jan 20 13:58:14 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1467956015' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Jan 20 13:58:14 compute-0 ceph-mon[74360]: pgmap v98: 193 pgs: 1 peering, 31 unknown, 161 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 20 13:58:14 compute-0 ceph-mon[74360]: 3.4 scrub starts
Jan 20 13:58:14 compute-0 ceph-mon[74360]: 3.4 scrub ok
Jan 20 13:58:14 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1467956015' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Jan 20 13:58:14 compute-0 magical_kapitsa[89378]: 
Jan 20 13:58:14 compute-0 magical_kapitsa[89378]: [global]
Jan 20 13:58:14 compute-0 magical_kapitsa[89378]:         fsid = e399cf45-e6b6-5393-99f1-75c601d3f188
Jan 20 13:58:14 compute-0 magical_kapitsa[89378]:         mon_host = 192.168.122.100
Jan 20 13:58:14 compute-0 systemd[1]: libpod-37586776a0a29c8c0c308675c5c79f92d463ba49d61d0300470968749caf76be.scope: Deactivated successfully.
Jan 20 13:58:14 compute-0 podman[89343]: 2026-01-20 13:58:14.360873998 +0000 UTC m=+1.008050995 container died 37586776a0a29c8c0c308675c5c79f92d463ba49d61d0300470968749caf76be (image=quay.io/ceph/ceph:v18, name=magical_kapitsa, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 13:58:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-3058b27ea9be6f3dc9aed3c3d7028212dfb512f929c8ef53211dfa34af9a76a1-merged.mount: Deactivated successfully.
Jan 20 13:58:14 compute-0 podman[89343]: 2026-01-20 13:58:14.441954821 +0000 UTC m=+1.089131818 container remove 37586776a0a29c8c0c308675c5c79f92d463ba49d61d0300470968749caf76be (image=quay.io/ceph/ceph:v18, name=magical_kapitsa, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 20 13:58:14 compute-0 systemd[1]: libpod-conmon-37586776a0a29c8c0c308675c5c79f92d463ba49d61d0300470968749caf76be.scope: Deactivated successfully.
Jan 20 13:58:14 compute-0 sudo[89321]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:14 compute-0 strange_bose[89383]: {
Jan 20 13:58:14 compute-0 strange_bose[89383]:     "afc3740a-ccee-46f8-b343-22648cc89376": {
Jan 20 13:58:14 compute-0 strange_bose[89383]:         "ceph_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 13:58:14 compute-0 strange_bose[89383]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 20 13:58:14 compute-0 strange_bose[89383]:         "osd_id": 0,
Jan 20 13:58:14 compute-0 strange_bose[89383]:         "osd_uuid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 13:58:14 compute-0 strange_bose[89383]:         "type": "bluestore"
Jan 20 13:58:14 compute-0 strange_bose[89383]:     }
Jan 20 13:58:14 compute-0 strange_bose[89383]: }
Jan 20 13:58:14 compute-0 systemd[1]: libpod-445e9ebcb753a390f5213eaab0e819527d1c4bf075ce73f4b033a4a16d48ef7e.scope: Deactivated successfully.
Jan 20 13:58:14 compute-0 podman[89437]: 2026-01-20 13:58:14.536734489 +0000 UTC m=+0.026939307 container died 445e9ebcb753a390f5213eaab0e819527d1c4bf075ce73f4b033a4a16d48ef7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_bose, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 20 13:58:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-688639e123e17c7dec39851065864b1e69ff301eb5eed6ce67f551fa45816136-merged.mount: Deactivated successfully.
Jan 20 13:58:14 compute-0 podman[89437]: 2026-01-20 13:58:14.590364603 +0000 UTC m=+0.080569391 container remove 445e9ebcb753a390f5213eaab0e819527d1c4bf075ce73f4b033a4a16d48ef7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_bose, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 20 13:58:14 compute-0 systemd[1]: libpod-conmon-445e9ebcb753a390f5213eaab0e819527d1c4bf075ce73f4b033a4a16d48ef7e.scope: Deactivated successfully.
Jan 20 13:58:14 compute-0 sudo[89475]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uqykiqguosxihcxbmmvtmlwwxeuxuqyy ; /usr/bin/python3'
Jan 20 13:58:14 compute-0 sudo[89475]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:58:14 compute-0 sudo[89161]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:14 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 13:58:14 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:14 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 13:58:14 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:14 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 3.5 deep-scrub starts
Jan 20 13:58:14 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 3.5 deep-scrub ok
Jan 20 13:58:14 compute-0 python3[89477]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config-key set ssl_option no_sslv2:sslv3:no_tlsv1:no_tlsv1_1
                                           _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 13:58:14 compute-0 podman[89478]: 2026-01-20 13:58:14.814801074 +0000 UTC m=+0.037154687 container create 2f64d558e3dc46b1ab04868520d653c26335f3f22605b4687baf34f4637f6069 (image=quay.io/ceph/ceph:v18, name=jolly_wiles, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 20 13:58:14 compute-0 systemd[1]: Started libpod-conmon-2f64d558e3dc46b1ab04868520d653c26335f3f22605b4687baf34f4637f6069.scope.
Jan 20 13:58:14 compute-0 systemd[1]: Started libcrun container.
Jan 20 13:58:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2150538b41eff74c4d3c7d9415aa420a05ae1fcc5a700697e027d7a0619d4ee/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 13:58:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2150538b41eff74c4d3c7d9415aa420a05ae1fcc5a700697e027d7a0619d4ee/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 20 13:58:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2150538b41eff74c4d3c7d9415aa420a05ae1fcc5a700697e027d7a0619d4ee/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 13:58:14 compute-0 podman[89478]: 2026-01-20 13:58:14.887186307 +0000 UTC m=+0.109539940 container init 2f64d558e3dc46b1ab04868520d653c26335f3f22605b4687baf34f4637f6069 (image=quay.io/ceph/ceph:v18, name=jolly_wiles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 20 13:58:14 compute-0 podman[89478]: 2026-01-20 13:58:14.893633757 +0000 UTC m=+0.115987370 container start 2f64d558e3dc46b1ab04868520d653c26335f3f22605b4687baf34f4637f6069 (image=quay.io/ceph/ceph:v18, name=jolly_wiles, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 13:58:14 compute-0 podman[89478]: 2026-01-20 13:58:14.800568996 +0000 UTC m=+0.022922609 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 20 13:58:14 compute-0 podman[89478]: 2026-01-20 13:58:14.896391631 +0000 UTC m=+0.118745294 container attach 2f64d558e3dc46b1ab04868520d653c26335f3f22605b4687baf34f4637f6069 (image=quay.io/ceph/ceph:v18, name=jolly_wiles, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 13:58:15 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v99: 193 pgs: 193 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Jan 20 13:58:15 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} v 0) v1
Jan 20 13:58:15 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 20 13:58:15 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} v 0) v1
Jan 20 13:58:15 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 20 13:58:15 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"} v 0) v1
Jan 20 13:58:15 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 20 13:58:15 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} v 0) v1
Jan 20 13:58:15 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 20 13:58:15 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} v 0) v1
Jan 20 13:58:15 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 20 13:58:15 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} v 0) v1
Jan 20 13:58:15 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 20 13:58:15 compute-0 ceph-mon[74360]: 2.1 deep-scrub starts
Jan 20 13:58:15 compute-0 ceph-mon[74360]: 2.1 deep-scrub ok
Jan 20 13:58:15 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/1467956015' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Jan 20 13:58:15 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/1467956015' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Jan 20 13:58:15 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:15 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:15 compute-0 ceph-mon[74360]: 3.5 deep-scrub starts
Jan 20 13:58:15 compute-0 ceph-mon[74360]: 3.5 deep-scrub ok
Jan 20 13:58:15 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 20 13:58:15 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 20 13:58:15 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 20 13:58:15 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 20 13:58:15 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 20 13:58:15 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 20 13:58:15 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=ssl_option}] v 0) v1
Jan 20 13:58:15 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3060958510' entity='client.admin' 
Jan 20 13:58:15 compute-0 jolly_wiles[89492]: set ssl_option
Jan 20 13:58:15 compute-0 systemd[1]: libpod-2f64d558e3dc46b1ab04868520d653c26335f3f22605b4687baf34f4637f6069.scope: Deactivated successfully.
Jan 20 13:58:15 compute-0 conmon[89492]: conmon 2f64d558e3dc46b1ab04 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2f64d558e3dc46b1ab04868520d653c26335f3f22605b4687baf34f4637f6069.scope/container/memory.events
Jan 20 13:58:15 compute-0 podman[89478]: 2026-01-20 13:58:15.525680765 +0000 UTC m=+0.748034418 container died 2f64d558e3dc46b1ab04868520d653c26335f3f22605b4687baf34f4637f6069 (image=quay.io/ceph/ceph:v18, name=jolly_wiles, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 13:58:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-a2150538b41eff74c4d3c7d9415aa420a05ae1fcc5a700697e027d7a0619d4ee-merged.mount: Deactivated successfully.
Jan 20 13:58:15 compute-0 podman[89478]: 2026-01-20 13:58:15.582925526 +0000 UTC m=+0.805279169 container remove 2f64d558e3dc46b1ab04868520d653c26335f3f22605b4687baf34f4637f6069 (image=quay.io/ceph/ceph:v18, name=jolly_wiles, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Jan 20 13:58:15 compute-0 systemd[1]: libpod-conmon-2f64d558e3dc46b1ab04868520d653c26335f3f22605b4687baf34f4637f6069.scope: Deactivated successfully.
Jan 20 13:58:15 compute-0 sudo[89475]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:15 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e35 do_prune osdmap full prune enabled
Jan 20 13:58:15 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 20 13:58:15 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 20 13:58:15 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 20 13:58:15 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 20 13:58:15 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 20 13:58:15 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 20 13:58:15 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e36 e36: 3 total, 2 up, 3 in
Jan 20 13:58:15 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e36: 3 total, 2 up, 3 in
Jan 20 13:58:15 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 20 13:58:15 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 20 13:58:15 compute-0 ceph-mgr[74653]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[7.1d( empty local-lis/les=0/0 n=0 ec=32/24 lis/c=32/32 les/c/f=34/34/0 sis=36) [0] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[2.19( empty local-lis/les=0/0 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=36) [0] r=0 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[7.13( empty local-lis/les=0/0 n=0 ec=32/24 lis/c=32/32 les/c/f=34/34/0 sis=36) [0] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[7.10( empty local-lis/les=0/0 n=0 ec=32/24 lis/c=32/32 les/c/f=34/34/0 sis=36) [0] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[2.15( empty local-lis/les=0/0 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=36) [0] r=0 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[2.13( empty local-lis/les=0/0 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=36) [0] r=0 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[2.10( empty local-lis/les=0/0 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=36) [0] r=0 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[7.14( empty local-lis/les=0/0 n=0 ec=32/24 lis/c=32/32 les/c/f=34/34/0 sis=36) [0] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[2.e( empty local-lis/les=0/0 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=36) [0] r=0 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[7.b( empty local-lis/les=0/0 n=0 ec=32/24 lis/c=32/32 les/c/f=34/34/0 sis=36) [0] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[7.a( empty local-lis/les=0/0 n=0 ec=32/24 lis/c=32/32 les/c/f=34/34/0 sis=36) [0] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[2.c( empty local-lis/les=0/0 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=36) [0] r=0 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[7.9( empty local-lis/les=0/0 n=0 ec=32/24 lis/c=32/32 les/c/f=34/34/0 sis=36) [0] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[7.8( empty local-lis/les=0/0 n=0 ec=32/24 lis/c=32/32 les/c/f=34/34/0 sis=36) [0] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[2.d( empty local-lis/les=0/0 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=36) [0] r=0 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[2.a( empty local-lis/les=0/0 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=36) [0] r=0 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[7.f( empty local-lis/les=0/0 n=0 ec=32/24 lis/c=32/32 les/c/f=34/34/0 sis=36) [0] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[7.e( empty local-lis/les=0/0 n=0 ec=32/24 lis/c=32/32 les/c/f=34/34/0 sis=36) [0] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[2.1( empty local-lis/les=0/0 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=36) [0] r=0 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[7.4( empty local-lis/les=0/0 n=0 ec=32/24 lis/c=32/32 les/c/f=34/34/0 sis=36) [0] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[7.3( empty local-lis/les=0/0 n=0 ec=32/24 lis/c=32/32 les/c/f=34/34/0 sis=36) [0] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[2.6( empty local-lis/les=0/0 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=36) [0] r=0 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[7.2( empty local-lis/les=0/0 n=0 ec=32/24 lis/c=32/32 les/c/f=34/34/0 sis=36) [0] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[2.4( empty local-lis/les=0/0 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=36) [0] r=0 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[7.6( empty local-lis/les=0/0 n=0 ec=32/24 lis/c=32/32 les/c/f=34/34/0 sis=36) [0] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[2.9( empty local-lis/les=0/0 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=36) [0] r=0 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[7.1e( empty local-lis/les=0/0 n=0 ec=32/24 lis/c=32/32 les/c/f=34/34/0 sis=36) [0] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[2.1b( empty local-lis/les=0/0 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=36) [0] r=0 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[7.18( empty local-lis/les=0/0 n=0 ec=32/24 lis/c=32/32 les/c/f=34/34/0 sis=36) [0] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[2.1f( empty local-lis/les=0/0 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=36) [0] r=0 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[7.1b( empty local-lis/les=0/0 n=0 ec=32/24 lis/c=32/32 les/c/f=34/34/0 sis=36) [0] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[2.1e( empty local-lis/les=0/0 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=36) [0] r=0 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[6.1a( empty local-lis/les=32/33 n=0 ec=32/22 lis/c=32/32 les/c/f=33/33/0 sis=36 pruub=9.102704048s) [1] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active pruub 67.894523621s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[4.1a( empty local-lis/les=30/31 n=0 ec=30/18 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=15.096446991s) [1] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active pruub 73.888359070s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[6.1a( empty local-lis/les=32/33 n=0 ec=32/22 lis/c=32/32 les/c/f=33/33/0 sis=36 pruub=9.102655411s) [1] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.894523621s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[5.18( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=15.096515656s) [1] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active pruub 73.888404846s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[4.1a( empty local-lis/les=30/31 n=0 ec=30/18 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=15.096369743s) [1] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.888359070s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[5.18( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=15.096369743s) [1] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.888404846s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[3.1c( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=13.067071915s) [1] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 active pruub 71.859237671s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[5.1b( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=15.096200943s) [1] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active pruub 73.888397217s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[3.1c( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=13.067041397s) [1] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.859237671s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[5.1b( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=15.096161842s) [1] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.888397217s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[5.1a( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=15.095857620s) [1] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active pruub 73.888275146s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[4.1b( empty local-lis/les=30/31 n=0 ec=30/18 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=15.095942497s) [1] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active pruub 73.888343811s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[6.19( empty local-lis/les=32/33 n=0 ec=32/22 lis/c=32/32 les/c/f=33/33/0 sis=36 pruub=9.107170105s) [1] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active pruub 67.899612427s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[5.1a( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=15.095826149s) [1] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.888275146s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[6.19( empty local-lis/les=32/33 n=0 ec=32/22 lis/c=32/32 les/c/f=33/33/0 sis=36 pruub=9.107142448s) [1] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.899612427s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[4.1b( empty local-lis/les=30/31 n=0 ec=30/18 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=15.095887184s) [1] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.888343811s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[3.1a( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=13.065606117s) [1] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 active pruub 71.858367920s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[6.1e( empty local-lis/les=32/33 n=0 ec=32/22 lis/c=32/32 les/c/f=33/33/0 sis=36 pruub=9.107172966s) [1] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active pruub 67.899971008s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[4.18( empty local-lis/les=30/31 n=0 ec=30/18 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=15.095720291s) [1] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active pruub 73.888526917s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[3.1a( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=13.065556526s) [1] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.858367920s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[4.18( empty local-lis/les=30/31 n=0 ec=30/18 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=15.095688820s) [1] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.888526917s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[5.1c( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=15.095388412s) [1] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active pruub 73.888252258s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[6.1e( empty local-lis/les=32/33 n=0 ec=32/22 lis/c=32/32 les/c/f=33/33/0 sis=36 pruub=9.107133865s) [1] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.899971008s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[5.1c( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=15.095354080s) [1] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.888252258s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[4.e( empty local-lis/les=30/31 n=0 ec=30/18 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=15.095265388s) [1] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active pruub 73.888351440s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[3.9( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=13.065274239s) [1] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 active pruub 71.858375549s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[4.e( empty local-lis/les=30/31 n=0 ec=30/18 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=15.095226288s) [1] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.888351440s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[3.9( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=13.065240860s) [1] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.858375549s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[5.f( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=15.095000267s) [1] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active pruub 73.888145447s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[5.f( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=15.094950676s) [1] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.888145447s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[5.e( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=15.094631195s) [1] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active pruub 73.888023376s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[6.d( empty local-lis/les=32/33 n=0 ec=32/22 lis/c=32/32 les/c/f=33/33/0 sis=36 pruub=9.106460571s) [1] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active pruub 67.899848938s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[5.2( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=15.094767570s) [1] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active pruub 73.888206482s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[6.d( empty local-lis/les=32/33 n=0 ec=32/22 lis/c=32/32 les/c/f=33/33/0 sis=36 pruub=9.106357574s) [1] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.899848938s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[5.2( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=15.094692230s) [1] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.888206482s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[4.5( empty local-lis/les=30/31 n=0 ec=30/18 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=15.093915939s) [1] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active pruub 73.887710571s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[3.3( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=13.064615250s) [1] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 active pruub 71.858413696s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[4.5( empty local-lis/les=30/31 n=0 ec=30/18 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=15.093884468s) [1] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.887710571s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[3.3( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=13.064570427s) [1] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.858413696s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[5.4( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=15.093491554s) [1] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active pruub 73.887626648s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[5.4( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=15.093462944s) [1] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.887626648s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[6.7( empty local-lis/les=32/33 n=0 ec=32/22 lis/c=32/32 les/c/f=33/33/0 sis=36 pruub=9.105741501s) [1] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active pruub 67.899940491s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[5.7( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=15.093420982s) [1] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active pruub 73.887611389s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[6.7( empty local-lis/les=32/33 n=0 ec=32/22 lis/c=32/32 les/c/f=33/33/0 sis=36 pruub=9.105677605s) [1] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.899940491s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[5.7( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=15.093360901s) [1] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.887611389s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[5.e( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=15.094602585s) [1] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.888023376s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[3.5( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=13.064092636s) [1] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 active pruub 71.858573914s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[3.5( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=13.064049721s) [1] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.858573914s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[4.1( empty local-lis/les=30/31 n=0 ec=30/18 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=15.092958450s) [1] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active pruub 73.887588501s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[6.3( empty local-lis/les=32/33 n=0 ec=32/22 lis/c=32/32 les/c/f=33/33/0 sis=36 pruub=9.105309486s) [1] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active pruub 67.899963379s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[5.1( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=15.088374138s) [1] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active pruub 73.883087158s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[6.3( empty local-lis/les=32/33 n=0 ec=32/22 lis/c=32/32 les/c/f=33/33/0 sis=36 pruub=9.105271339s) [1] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.899963379s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[6.2( empty local-lis/les=32/33 n=0 ec=32/22 lis/c=32/32 les/c/f=33/33/0 sis=36 pruub=9.105324745s) [1] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active pruub 67.900039673s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[5.1( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=15.088342667s) [1] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.883087158s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[4.1( empty local-lis/les=30/31 n=0 ec=30/18 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=15.092909813s) [1] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.887588501s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[6.2( empty local-lis/les=32/33 n=0 ec=32/22 lis/c=32/32 les/c/f=33/33/0 sis=36 pruub=9.105254173s) [1] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.900039673s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[6.5( empty local-lis/les=32/33 n=0 ec=32/22 lis/c=32/32 les/c/f=33/33/0 sis=36 pruub=9.105005264s) [1] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active pruub 67.899971008s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[4.d( empty local-lis/les=30/31 n=0 ec=30/18 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=15.087938309s) [1] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active pruub 73.883018494s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[6.5( empty local-lis/les=32/33 n=0 ec=32/22 lis/c=32/32 les/c/f=33/33/0 sis=36 pruub=9.104970932s) [1] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.899971008s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[3.1d( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=13.063220978s) [1] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 active pruub 71.858345032s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[4.d( empty local-lis/les=30/31 n=0 ec=30/18 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=15.087889671s) [1] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.883018494s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[3.1d( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=13.063180923s) [1] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.858345032s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[4.c( empty local-lis/les=30/31 n=0 ec=30/18 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=15.087783813s) [1] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active pruub 73.882987976s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[4.c( empty local-lis/les=30/31 n=0 ec=30/18 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=15.087758064s) [1] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.882987976s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[6.e( empty local-lis/les=32/33 n=0 ec=32/22 lis/c=32/32 les/c/f=33/33/0 sis=36 pruub=9.104638100s) [1] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active pruub 67.900032043s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[6.e( empty local-lis/les=32/33 n=0 ec=32/22 lis/c=32/32 les/c/f=33/33/0 sis=36 pruub=9.104557991s) [1] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.900032043s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[3.a( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=13.063712120s) [1] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 active pruub 71.858741760s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[3.d( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=13.063401222s) [1] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 active pruub 71.858963013s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[3.a( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=13.063167572s) [1] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.858741760s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[3.d( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=13.063308716s) [1] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.858963013s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[6.8( empty local-lis/les=32/33 n=0 ec=32/22 lis/c=32/32 les/c/f=33/33/0 sis=36 pruub=9.104471207s) [1] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active pruub 67.900276184s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[3.c( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=13.063396454s) [1] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 active pruub 71.858879089s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[6.8( empty local-lis/les=32/33 n=0 ec=32/22 lis/c=32/32 les/c/f=33/33/0 sis=36 pruub=9.104436874s) [1] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.900276184s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[4.a( empty local-lis/les=30/31 n=0 ec=30/18 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=15.086939812s) [1] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active pruub 73.882835388s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[4.a( empty local-lis/les=30/31 n=0 ec=30/18 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=15.086853027s) [1] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.882835388s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[4.9( empty local-lis/les=30/31 n=0 ec=30/18 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=15.086621284s) [1] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active pruub 73.882682800s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[4.8( empty local-lis/les=30/31 n=0 ec=30/18 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=15.086504936s) [1] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active pruub 73.882675171s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[4.9( empty local-lis/les=30/31 n=0 ec=30/18 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=15.086570740s) [1] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.882682800s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[3.e( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=13.062780380s) [1] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 active pruub 71.858970642s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[3.f( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=13.062825203s) [1] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 active pruub 71.858978271s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[3.f( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=13.062743187s) [1] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.858978271s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[4.8( empty local-lis/les=30/31 n=0 ec=30/18 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=15.086460114s) [1] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.882675171s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[3.c( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=13.062976837s) [1] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.858879089s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[3.e( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=13.062722206s) [1] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.858970642s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[3.10( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=13.062440872s) [1] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 active pruub 71.858985901s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[6.a( empty local-lis/les=32/33 n=0 ec=32/22 lis/c=32/32 les/c/f=33/33/0 sis=36 pruub=9.103648186s) [1] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active pruub 67.900215149s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[5.9( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=15.086020470s) [1] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active pruub 73.882606506s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[3.10( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=13.062405586s) [1] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.858985901s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[6.a( empty local-lis/les=32/33 n=0 ec=32/22 lis/c=32/32 les/c/f=33/33/0 sis=36 pruub=9.103623390s) [1] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.900215149s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[5.9( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=15.085982323s) [1] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.882606506s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[6.15( empty local-lis/les=32/33 n=0 ec=32/22 lis/c=32/32 les/c/f=33/33/0 sis=36 pruub=9.103479385s) [1] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active pruub 67.900199890s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[3.11( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=13.062359810s) [1] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 active pruub 71.859130859s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[6.15( empty local-lis/les=32/33 n=0 ec=32/22 lis/c=32/32 les/c/f=33/33/0 sis=36 pruub=9.103430748s) [1] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.900199890s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[3.11( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=13.062319756s) [1] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.859130859s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[4.15( empty local-lis/les=30/31 n=0 ec=30/18 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=15.085485458s) [1] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active pruub 73.882347107s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[4.15( empty local-lis/les=30/31 n=0 ec=30/18 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=15.085452080s) [1] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.882347107s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[3.13( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=13.062056541s) [1] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 active pruub 71.859085083s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[3.13( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=13.062035561s) [1] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.859085083s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[5.15( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=15.085249901s) [1] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active pruub 73.882331848s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[4.13( empty local-lis/les=30/31 n=0 ec=30/18 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=15.085061073s) [1] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active pruub 73.882217407s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[3.14( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=13.062047958s) [1] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 active pruub 71.859214783s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[6.17( empty local-lis/les=32/33 n=0 ec=32/22 lis/c=32/32 les/c/f=33/33/0 sis=36 pruub=9.103346825s) [1] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active pruub 67.900222778s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[3.14( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=13.062027931s) [1] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.859214783s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[5.15( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=15.085217476s) [1] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.882331848s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[4.13( empty local-lis/les=30/31 n=0 ec=30/18 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=15.085031509s) [1] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.882217407s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[6.17( empty local-lis/les=32/33 n=0 ec=32/22 lis/c=32/32 les/c/f=33/33/0 sis=36 pruub=9.103001595s) [1] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.900222778s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[5.16( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=15.085144997s) [1] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active pruub 73.882614136s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[3.16( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=13.061716080s) [1] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 active pruub 71.859207153s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[5.10( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=15.084650040s) [1] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active pruub 73.882179260s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[5.16( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=15.085101128s) [1] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.882614136s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[3.16( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=13.061685562s) [1] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.859207153s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[5.10( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=15.084621429s) [1] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.882179260s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[3.15( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=13.061778069s) [1] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 active pruub 71.859191895s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[4.1f( empty local-lis/les=30/31 n=0 ec=30/18 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=15.083438873s) [1] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active pruub 73.881164551s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[3.15( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=36 pruub=13.061459541s) [1] r=-1 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.859191895s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[4.1f( empty local-lis/les=30/31 n=0 ec=30/18 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=15.083407402s) [1] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.881164551s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[5.11( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=15.084231377s) [1] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active pruub 73.881950378s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[5.11( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=15.084143639s) [1] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.881950378s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[6.1c( empty local-lis/les=32/33 n=0 ec=32/22 lis/c=32/32 les/c/f=33/33/0 sis=36 pruub=9.102427483s) [1] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active pruub 67.900352478s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[6.1c( empty local-lis/les=32/33 n=0 ec=32/22 lis/c=32/32 les/c/f=33/33/0 sis=36 pruub=9.102359772s) [1] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.900352478s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[6.12( empty local-lis/les=32/33 n=0 ec=32/22 lis/c=32/32 les/c/f=33/33/0 sis=36 pruub=9.102138519s) [1] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active pruub 67.900352478s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[5.1f( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=15.083829880s) [1] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 active pruub 73.882080078s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[6.12( empty local-lis/les=32/33 n=0 ec=32/22 lis/c=32/32 les/c/f=33/33/0 sis=36 pruub=9.102102280s) [1] r=-1 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.900352478s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 36 pg[5.1f( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=15.083790779s) [1] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.882080078s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:15 compute-0 sudo[89550]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wnlbtrcihtioaswhufiatpvquxnxlczh ; /usr/bin/python3'
Jan 20 13:58:15 compute-0 sudo[89550]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:58:15 compute-0 python3[89552]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 13:58:16 compute-0 podman[89553]: 2026-01-20 13:58:16.018358281 +0000 UTC m=+0.052459005 container create 3d1bd7826f4a0ff06846751deef822fddd2a8a321802550298766c207f57303c (image=quay.io/ceph/ceph:v18, name=silly_buck, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 13:58:16 compute-0 systemd[1]: Started libpod-conmon-3d1bd7826f4a0ff06846751deef822fddd2a8a321802550298766c207f57303c.scope.
Jan 20 13:58:16 compute-0 systemd[1]: Started libcrun container.
Jan 20 13:58:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/222f8827a7efdf9e33850a43ff9a278fef1026ad0e321519ccca3de3ebf60f85/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 20 13:58:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/222f8827a7efdf9e33850a43ff9a278fef1026ad0e321519ccca3de3ebf60f85/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 13:58:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/222f8827a7efdf9e33850a43ff9a278fef1026ad0e321519ccca3de3ebf60f85/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 13:58:16 compute-0 podman[89553]: 2026-01-20 13:58:16.00105039 +0000 UTC m=+0.035151084 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 20 13:58:16 compute-0 podman[89553]: 2026-01-20 13:58:16.107778865 +0000 UTC m=+0.141879579 container init 3d1bd7826f4a0ff06846751deef822fddd2a8a321802550298766c207f57303c (image=quay.io/ceph/ceph:v18, name=silly_buck, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 20 13:58:16 compute-0 podman[89553]: 2026-01-20 13:58:16.116984239 +0000 UTC m=+0.151084933 container start 3d1bd7826f4a0ff06846751deef822fddd2a8a321802550298766c207f57303c (image=quay.io/ceph/ceph:v18, name=silly_buck, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True)
Jan 20 13:58:16 compute-0 podman[89553]: 2026-01-20 13:58:16.120265807 +0000 UTC m=+0.154366501 container attach 3d1bd7826f4a0ff06846751deef822fddd2a8a321802550298766c207f57303c (image=quay.io/ceph/ceph:v18, name=silly_buck, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 20 13:58:16 compute-0 ceph-mon[74360]: pgmap v99: 193 pgs: 193 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Jan 20 13:58:16 compute-0 ceph-mon[74360]: 2.2 scrub starts
Jan 20 13:58:16 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3060958510' entity='client.admin' 
Jan 20 13:58:16 compute-0 ceph-mon[74360]: 2.2 scrub ok
Jan 20 13:58:16 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 20 13:58:16 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 20 13:58:16 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 20 13:58:16 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 20 13:58:16 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 20 13:58:16 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 20 13:58:16 compute-0 ceph-mon[74360]: osdmap e36: 3 total, 2 up, 3 in
Jan 20 13:58:16 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 20 13:58:16 compute-0 ceph-mgr[74653]: log_channel(audit) log [DBG] : from='client.14268 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 13:58:16 compute-0 ceph-mgr[74653]: [cephadm INFO root] Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Jan 20 13:58:16 compute-0 ceph-mgr[74653]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Jan 20 13:58:16 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Jan 20 13:58:16 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:16 compute-0 ceph-mgr[74653]: [cephadm INFO root] Saving service ingress.rgw.default spec with placement count:2
Jan 20 13:58:16 compute-0 ceph-mgr[74653]: log_channel(cephadm) log [INF] : Saving service ingress.rgw.default spec with placement count:2
Jan 20 13:58:16 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e36 do_prune osdmap full prune enabled
Jan 20 13:58:16 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0) v1
Jan 20 13:58:16 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 3.6 scrub starts
Jan 20 13:58:16 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e37 e37: 3 total, 2 up, 3 in
Jan 20 13:58:16 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e37: 3 total, 2 up, 3 in
Jan 20 13:58:16 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 20 13:58:16 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 20 13:58:16 compute-0 ceph-mgr[74653]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 20 13:58:16 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:16 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 3.6 scrub ok
Jan 20 13:58:16 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 37 pg[7.1b( empty local-lis/les=36/37 n=0 ec=32/24 lis/c=32/32 les/c/f=34/34/0 sis=36) [0] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:16 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 37 pg[2.1e( empty local-lis/les=36/37 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=36) [0] r=0 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:16 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 37 pg[7.18( empty local-lis/les=36/37 n=0 ec=32/24 lis/c=32/32 les/c/f=34/34/0 sis=36) [0] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:16 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 37 pg[2.1b( empty local-lis/les=36/37 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=36) [0] r=0 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:16 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 37 pg[7.1e( empty local-lis/les=36/37 n=0 ec=32/24 lis/c=32/32 les/c/f=34/34/0 sis=36) [0] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:16 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 37 pg[2.9( empty local-lis/les=36/37 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=36) [0] r=0 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:16 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 37 pg[7.6( empty local-lis/les=36/37 n=0 ec=32/24 lis/c=32/32 les/c/f=34/34/0 sis=36) [0] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:16 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 37 pg[7.13( empty local-lis/les=36/37 n=0 ec=32/24 lis/c=32/32 les/c/f=34/34/0 sis=36) [0] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:16 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 37 pg[2.19( empty local-lis/les=36/37 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=36) [0] r=0 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:16 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 37 pg[7.1d( empty local-lis/les=36/37 n=0 ec=32/24 lis/c=32/32 les/c/f=34/34/0 sis=36) [0] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:16 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 37 pg[2.15( empty local-lis/les=36/37 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=36) [0] r=0 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:16 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 37 pg[2.13( empty local-lis/les=36/37 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=36) [0] r=0 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:16 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 37 pg[7.10( empty local-lis/les=36/37 n=0 ec=32/24 lis/c=32/32 les/c/f=34/34/0 sis=36) [0] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:16 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 37 pg[7.a( empty local-lis/les=36/37 n=0 ec=32/24 lis/c=32/32 les/c/f=34/34/0 sis=36) [0] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:16 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 37 pg[2.10( empty local-lis/les=36/37 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=36) [0] r=0 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:16 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 37 pg[2.e( empty local-lis/les=36/37 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=36) [0] r=0 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:16 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 37 pg[2.4( empty local-lis/les=36/37 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=36) [0] r=0 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:16 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 37 pg[7.b( empty local-lis/les=36/37 n=0 ec=32/24 lis/c=32/32 les/c/f=34/34/0 sis=36) [0] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:16 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 37 pg[2.d( empty local-lis/les=36/37 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=36) [0] r=0 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:16 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 37 pg[7.8( empty local-lis/les=36/37 n=0 ec=32/24 lis/c=32/32 les/c/f=34/34/0 sis=36) [0] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:16 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 37 pg[2.c( empty local-lis/les=36/37 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=36) [0] r=0 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:16 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 37 pg[7.2( empty local-lis/les=36/37 n=0 ec=32/24 lis/c=32/32 les/c/f=34/34/0 sis=36) [0] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:16 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 37 pg[2.6( empty local-lis/les=36/37 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=36) [0] r=0 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:16 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 37 pg[7.3( empty local-lis/les=36/37 n=0 ec=32/24 lis/c=32/32 les/c/f=34/34/0 sis=36) [0] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:16 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 37 pg[2.1( empty local-lis/les=36/37 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=36) [0] r=0 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:16 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 37 pg[7.4( empty local-lis/les=36/37 n=0 ec=32/24 lis/c=32/32 les/c/f=34/34/0 sis=36) [0] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:16 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 37 pg[7.9( empty local-lis/les=36/37 n=0 ec=32/24 lis/c=32/32 les/c/f=34/34/0 sis=36) [0] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:16 compute-0 silly_buck[89568]: Scheduled rgw.rgw update...
Jan 20 13:58:16 compute-0 silly_buck[89568]: Scheduled ingress.rgw.default update...
Jan 20 13:58:16 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 37 pg[2.1f( empty local-lis/les=36/37 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=36) [0] r=0 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:16 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 37 pg[7.e( empty local-lis/les=36/37 n=0 ec=32/24 lis/c=32/32 les/c/f=34/34/0 sis=36) [0] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:16 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 37 pg[2.a( empty local-lis/les=36/37 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=36) [0] r=0 lpr=36 pi=[28,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:16 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 37 pg[7.f( empty local-lis/les=36/37 n=0 ec=32/24 lis/c=32/32 les/c/f=34/34/0 sis=36) [0] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:16 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 37 pg[7.14( empty local-lis/les=36/37 n=0 ec=32/24 lis/c=32/32 les/c/f=34/34/0 sis=36) [0] r=0 lpr=36 pi=[32,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:16 compute-0 systemd[1]: libpod-3d1bd7826f4a0ff06846751deef822fddd2a8a321802550298766c207f57303c.scope: Deactivated successfully.
Jan 20 13:58:16 compute-0 podman[89553]: 2026-01-20 13:58:16.719365169 +0000 UTC m=+0.753465913 container died 3d1bd7826f4a0ff06846751deef822fddd2a8a321802550298766c207f57303c (image=quay.io/ceph/ceph:v18, name=silly_buck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 13:58:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-222f8827a7efdf9e33850a43ff9a278fef1026ad0e321519ccca3de3ebf60f85-merged.mount: Deactivated successfully.
Jan 20 13:58:16 compute-0 podman[89553]: 2026-01-20 13:58:16.774600956 +0000 UTC m=+0.808701660 container remove 3d1bd7826f4a0ff06846751deef822fddd2a8a321802550298766c207f57303c (image=quay.io/ceph/ceph:v18, name=silly_buck, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 13:58:16 compute-0 systemd[1]: libpod-conmon-3d1bd7826f4a0ff06846751deef822fddd2a8a321802550298766c207f57303c.scope: Deactivated successfully.
Jan 20 13:58:16 compute-0 sudo[89550]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:17 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v102: 193 pgs: 193 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Jan 20 13:58:17 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0) v1
Jan 20 13:58:17 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Jan 20 13:58:17 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 13:58:17 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 13:58:17 compute-0 ceph-mgr[74653]: [cephadm INFO cephadm.serve] Deploying daemon osd.2 on compute-2
Jan 20 13:58:17 compute-0 ceph-mgr[74653]: log_channel(cephadm) log [INF] : Deploying daemon osd.2 on compute-2
Jan 20 13:58:17 compute-0 ceph-mon[74360]: from='client.14268 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 13:58:17 compute-0 ceph-mon[74360]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Jan 20 13:58:17 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:17 compute-0 ceph-mon[74360]: Saving service ingress.rgw.default spec with placement count:2
Jan 20 13:58:17 compute-0 ceph-mon[74360]: 3.6 scrub starts
Jan 20 13:58:17 compute-0 ceph-mon[74360]: osdmap e37: 3 total, 2 up, 3 in
Jan 20 13:58:17 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 20 13:58:17 compute-0 ceph-mon[74360]: 3.6 scrub ok
Jan 20 13:58:17 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:17 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Jan 20 13:58:17 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 13:58:17 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e37 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 13:58:17 compute-0 python3[89680]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_mds.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 20 13:58:18 compute-0 python3[89751]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1768917497.6112661-37391-226402380206729/source dest=/tmp/ceph_mds.yml mode=0644 force=True follow=False _original_basename=ceph_mds.yml.j2 checksum=b1f36629bdb347469f4890c95dfdef5abc68c3ae backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:58:18 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 3.7 scrub starts
Jan 20 13:58:18 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 3.7 scrub ok
Jan 20 13:58:18 compute-0 ceph-mon[74360]: pgmap v102: 193 pgs: 193 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Jan 20 13:58:18 compute-0 ceph-mon[74360]: Deploying daemon osd.2 on compute-2
Jan 20 13:58:18 compute-0 sudo[89799]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmawfizgncmzifbcswivlvwwtasvzxqa ; /usr/bin/python3'
Jan 20 13:58:18 compute-0 sudo[89799]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:58:18 compute-0 python3[89801]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   fs volume create cephfs '--placement=compute-0 compute-1 compute-2 ' _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 13:58:19 compute-0 podman[89802]: 2026-01-20 13:58:19.007527494 +0000 UTC m=+0.061511275 container create 9b74fa669af189bf93fc1df63bd6463707909c31e71adddfe3e21c00504e9d5a (image=quay.io/ceph/ceph:v18, name=charming_cori, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 20 13:58:19 compute-0 systemd[1]: Started libpod-conmon-9b74fa669af189bf93fc1df63bd6463707909c31e71adddfe3e21c00504e9d5a.scope.
Jan 20 13:58:19 compute-0 systemd[1]: Started libcrun container.
Jan 20 13:58:19 compute-0 podman[89802]: 2026-01-20 13:58:18.986480255 +0000 UTC m=+0.040464046 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 20 13:58:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4528f7ce1c3463285b52f420ae8a546d37307a6821d1880bdbfc16a2aeaee24/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 13:58:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4528f7ce1c3463285b52f420ae8a546d37307a6821d1880bdbfc16a2aeaee24/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 13:58:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4528f7ce1c3463285b52f420ae8a546d37307a6821d1880bdbfc16a2aeaee24/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 20 13:58:19 compute-0 podman[89802]: 2026-01-20 13:58:19.092038118 +0000 UTC m=+0.146021929 container init 9b74fa669af189bf93fc1df63bd6463707909c31e71adddfe3e21c00504e9d5a (image=quay.io/ceph/ceph:v18, name=charming_cori, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 13:58:19 compute-0 podman[89802]: 2026-01-20 13:58:19.098918421 +0000 UTC m=+0.152902172 container start 9b74fa669af189bf93fc1df63bd6463707909c31e71adddfe3e21c00504e9d5a (image=quay.io/ceph/ceph:v18, name=charming_cori, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 13:58:19 compute-0 podman[89802]: 2026-01-20 13:58:19.101676324 +0000 UTC m=+0.155660125 container attach 9b74fa669af189bf93fc1df63bd6463707909c31e71adddfe3e21c00504e9d5a (image=quay.io/ceph/ceph:v18, name=charming_cori, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 20 13:58:19 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v103: 193 pgs: 193 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Jan 20 13:58:19 compute-0 ceph-mgr[74653]: log_channel(audit) log [DBG] : from='client.14274 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 compute-1 compute-2 ", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 13:58:19 compute-0 ceph-mgr[74653]: [volumes INFO volumes.module] Starting _cmd_fs_volume_create(name:cephfs, placement:compute-0 compute-1 compute-2 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Jan 20 13:58:19 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} v 0) v1
Jan 20 13:58:19 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Jan 20 13:58:19 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} v 0) v1
Jan 20 13:58:19 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Jan 20 13:58:19 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} v 0) v1
Jan 20 13:58:19 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Jan 20 13:58:19 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e37 do_prune osdmap full prune enabled
Jan 20 13:58:19 compute-0 ceph-mon[74360]: log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Jan 20 13:58:19 compute-0 ceph-mon[74360]: log_channel(cluster) log [WRN] : Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Jan 20 13:58:19 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mon-compute-0[74356]: 2026-01-20T13:58:19.643+0000 7f8d2519f640 -1 log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Jan 20 13:58:19 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Jan 20 13:58:19 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).mds e2 new map
Jan 20 13:58:19 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).mds e2 print_map
                                           e2
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        2
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-01-20T13:58:19.644785+0000
                                           modified        2026-01-20T13:58:19.644841+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        
                                           up        {}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                            
                                            
Jan 20 13:58:19 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e38 e38: 3 total, 2 up, 3 in
Jan 20 13:58:19 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e38: 3 total, 2 up, 3 in
Jan 20 13:58:19 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : fsmap cephfs:0
Jan 20 13:58:19 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 20 13:58:19 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 20 13:58:19 compute-0 ceph-mgr[74653]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 20 13:58:19 compute-0 ceph-mgr[74653]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Jan 20 13:58:19 compute-0 ceph-mgr[74653]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Jan 20 13:58:19 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Jan 20 13:58:19 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:19 compute-0 ceph-mgr[74653]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_create(name:cephfs, placement:compute-0 compute-1 compute-2 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Jan 20 13:58:19 compute-0 systemd[1]: libpod-9b74fa669af189bf93fc1df63bd6463707909c31e71adddfe3e21c00504e9d5a.scope: Deactivated successfully.
Jan 20 13:58:19 compute-0 podman[89802]: 2026-01-20 13:58:19.702834271 +0000 UTC m=+0.756818022 container died 9b74fa669af189bf93fc1df63bd6463707909c31e71adddfe3e21c00504e9d5a (image=quay.io/ceph/ceph:v18, name=charming_cori, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True)
Jan 20 13:58:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-e4528f7ce1c3463285b52f420ae8a546d37307a6821d1880bdbfc16a2aeaee24-merged.mount: Deactivated successfully.
Jan 20 13:58:19 compute-0 ceph-mon[74360]: 3.7 scrub starts
Jan 20 13:58:19 compute-0 ceph-mon[74360]: 3.7 scrub ok
Jan 20 13:58:19 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Jan 20 13:58:19 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Jan 20 13:58:19 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Jan 20 13:58:19 compute-0 ceph-mon[74360]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Jan 20 13:58:19 compute-0 ceph-mon[74360]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Jan 20 13:58:19 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Jan 20 13:58:19 compute-0 ceph-mon[74360]: osdmap e38: 3 total, 2 up, 3 in
Jan 20 13:58:19 compute-0 ceph-mon[74360]: fsmap cephfs:0
Jan 20 13:58:19 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 20 13:58:19 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:19 compute-0 podman[89802]: 2026-01-20 13:58:19.742807003 +0000 UTC m=+0.796790774 container remove 9b74fa669af189bf93fc1df63bd6463707909c31e71adddfe3e21c00504e9d5a (image=quay.io/ceph/ceph:v18, name=charming_cori, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 13:58:19 compute-0 systemd[1]: libpod-conmon-9b74fa669af189bf93fc1df63bd6463707909c31e71adddfe3e21c00504e9d5a.scope: Deactivated successfully.
Jan 20 13:58:19 compute-0 sudo[89799]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:19 compute-0 sudo[89875]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zatizccpjraihqoeiqeioievxtevhhks ; /usr/bin/python3'
Jan 20 13:58:19 compute-0 sudo[89875]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:58:20 compute-0 python3[89877]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 13:58:20 compute-0 podman[89878]: 2026-01-20 13:58:20.085204016 +0000 UTC m=+0.038182144 container create 0b2395eaadbbce647780bb5ad065aed371d38f3850b95e8052e036951fac6f5a (image=quay.io/ceph/ceph:v18, name=affectionate_shamir, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 20 13:58:20 compute-0 systemd[1]: Started libpod-conmon-0b2395eaadbbce647780bb5ad065aed371d38f3850b95e8052e036951fac6f5a.scope.
Jan 20 13:58:20 compute-0 systemd[1]: Started libcrun container.
Jan 20 13:58:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/486edb88cde02ca01ed6ee4204e74e2c0976245495d10665ec1259999ce88af6/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 20 13:58:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/486edb88cde02ca01ed6ee4204e74e2c0976245495d10665ec1259999ce88af6/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 13:58:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/486edb88cde02ca01ed6ee4204e74e2c0976245495d10665ec1259999ce88af6/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 13:58:20 compute-0 podman[89878]: 2026-01-20 13:58:20.161802271 +0000 UTC m=+0.114780439 container init 0b2395eaadbbce647780bb5ad065aed371d38f3850b95e8052e036951fac6f5a (image=quay.io/ceph/ceph:v18, name=affectionate_shamir, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 13:58:20 compute-0 podman[89878]: 2026-01-20 13:58:20.06992082 +0000 UTC m=+0.022898958 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 20 13:58:20 compute-0 podman[89878]: 2026-01-20 13:58:20.167141773 +0000 UTC m=+0.120119891 container start 0b2395eaadbbce647780bb5ad065aed371d38f3850b95e8052e036951fac6f5a (image=quay.io/ceph/ceph:v18, name=affectionate_shamir, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 13:58:20 compute-0 podman[89878]: 2026-01-20 13:58:20.169834855 +0000 UTC m=+0.122812973 container attach 0b2395eaadbbce647780bb5ad065aed371d38f3850b95e8052e036951fac6f5a (image=quay.io/ceph/ceph:v18, name=affectionate_shamir, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 13:58:20 compute-0 ceph-mgr[74653]: log_channel(audit) log [DBG] : from='client.14280 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 13:58:20 compute-0 ceph-mgr[74653]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Jan 20 13:58:20 compute-0 ceph-mgr[74653]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Jan 20 13:58:20 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Jan 20 13:58:20 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:20 compute-0 affectionate_shamir[89893]: Scheduled mds.cephfs update...
Jan 20 13:58:20 compute-0 systemd[1]: libpod-0b2395eaadbbce647780bb5ad065aed371d38f3850b95e8052e036951fac6f5a.scope: Deactivated successfully.
Jan 20 13:58:20 compute-0 podman[89878]: 2026-01-20 13:58:20.710161166 +0000 UTC m=+0.663139284 container died 0b2395eaadbbce647780bb5ad065aed371d38f3850b95e8052e036951fac6f5a (image=quay.io/ceph/ceph:v18, name=affectionate_shamir, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 20 13:58:20 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 3.8 deep-scrub starts
Jan 20 13:58:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-486edb88cde02ca01ed6ee4204e74e2c0976245495d10665ec1259999ce88af6-merged.mount: Deactivated successfully.
Jan 20 13:58:20 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 3.8 deep-scrub ok
Jan 20 13:58:20 compute-0 ceph-mon[74360]: pgmap v103: 193 pgs: 193 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Jan 20 13:58:20 compute-0 ceph-mon[74360]: 2.3 deep-scrub starts
Jan 20 13:58:20 compute-0 ceph-mon[74360]: 2.3 deep-scrub ok
Jan 20 13:58:20 compute-0 ceph-mon[74360]: from='client.14274 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 compute-1 compute-2 ", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 13:58:20 compute-0 ceph-mon[74360]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Jan 20 13:58:20 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:20 compute-0 podman[89878]: 2026-01-20 13:58:20.745658428 +0000 UTC m=+0.698636546 container remove 0b2395eaadbbce647780bb5ad065aed371d38f3850b95e8052e036951fac6f5a (image=quay.io/ceph/ceph:v18, name=affectionate_shamir, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 20 13:58:20 compute-0 systemd[1]: libpod-conmon-0b2395eaadbbce647780bb5ad065aed371d38f3850b95e8052e036951fac6f5a.scope: Deactivated successfully.
Jan 20 13:58:20 compute-0 sudo[89875]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:21 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v105: 193 pgs: 193 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Jan 20 13:58:21 compute-0 ceph-mon[74360]: 2.5 scrub starts
Jan 20 13:58:21 compute-0 ceph-mon[74360]: 2.5 scrub ok
Jan 20 13:58:21 compute-0 ceph-mon[74360]: from='client.14280 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 13:58:21 compute-0 ceph-mon[74360]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Jan 20 13:58:21 compute-0 ceph-mon[74360]: 3.8 deep-scrub starts
Jan 20 13:58:21 compute-0 ceph-mon[74360]: 3.8 deep-scrub ok
Jan 20 13:58:21 compute-0 sudo[90005]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ntmfzhsooakvaiiulkkdsnbmmsijocrd ; /usr/bin/python3'
Jan 20 13:58:21 compute-0 sudo[90005]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:58:21 compute-0 python3[90007]: ansible-ansible.legacy.stat Invoked with path=/etc/ceph/ceph.client.openstack.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 20 13:58:21 compute-0 sudo[90005]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:22 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 20 13:58:22 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:22 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 20 13:58:22 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:22 compute-0 sudo[90078]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fndvizuhzyvdfnmogfgvljnpmtguukuh ; /usr/bin/python3'
Jan 20 13:58:22 compute-0 sudo[90078]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:58:22 compute-0 python3[90080]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1768917501.6370895-37443-184065733399812/source dest=/etc/ceph/ceph.client.openstack.keyring mode=0644 force=True owner=167 group=167 follow=False _original_basename=ceph_key.j2 checksum=ddae6cb53c02baaa87ed0e28941db377a2638775 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:58:22 compute-0 sudo[90078]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 13:58:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 13:58:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 13:58:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 13:58:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 13:58:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 13:58:22 compute-0 sudo[90128]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wfmydzkayscycnhfamnykbwkwpdjmufq ; /usr/bin/python3'
Jan 20 13:58:22 compute-0 sudo[90128]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:58:22 compute-0 ceph-mon[74360]: pgmap v105: 193 pgs: 193 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Jan 20 13:58:22 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:22 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:22 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 3.b scrub starts
Jan 20 13:58:22 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 3.b scrub ok
Jan 20 13:58:22 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e38 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 13:58:22 compute-0 python3[90130]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth import -i /etc/ceph/ceph.client.openstack.keyring _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 13:58:22 compute-0 podman[90131]: 2026-01-20 13:58:22.962762106 +0000 UTC m=+0.050552365 container create fc6f77920d19def4c47146a951a61ec2b14bc52bf415ad069a9c59b3b963c506 (image=quay.io/ceph/ceph:v18, name=epic_elgamal, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 13:58:22 compute-0 systemd[1]: Started libpod-conmon-fc6f77920d19def4c47146a951a61ec2b14bc52bf415ad069a9c59b3b963c506.scope.
Jan 20 13:58:23 compute-0 systemd[1]: Started libcrun container.
Jan 20 13:58:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e8e8ea8fe68bb3a3d59a1b0278ba6f531677d401d50b55e293d70720466e3c3/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 13:58:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e8e8ea8fe68bb3a3d59a1b0278ba6f531677d401d50b55e293d70720466e3c3/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 13:58:23 compute-0 podman[90131]: 2026-01-20 13:58:22.944639574 +0000 UTC m=+0.032429863 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 20 13:58:23 compute-0 podman[90131]: 2026-01-20 13:58:23.040617443 +0000 UTC m=+0.128407732 container init fc6f77920d19def4c47146a951a61ec2b14bc52bf415ad069a9c59b3b963c506 (image=quay.io/ceph/ceph:v18, name=epic_elgamal, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Jan 20 13:58:23 compute-0 podman[90131]: 2026-01-20 13:58:23.048455501 +0000 UTC m=+0.136245790 container start fc6f77920d19def4c47146a951a61ec2b14bc52bf415ad069a9c59b3b963c506 (image=quay.io/ceph/ceph:v18, name=epic_elgamal, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 20 13:58:23 compute-0 podman[90131]: 2026-01-20 13:58:23.051866802 +0000 UTC m=+0.139657061 container attach fc6f77920d19def4c47146a951a61ec2b14bc52bf415ad069a9c59b3b963c506 (image=quay.io/ceph/ceph:v18, name=epic_elgamal, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 20 13:58:23 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v106: 193 pgs: 193 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Jan 20 13:58:23 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} v 0) v1
Jan 20 13:58:23 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Jan 20 13:58:23 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth import"} v 0) v1
Jan 20 13:58:23 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2618177133' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Jan 20 13:58:23 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2618177133' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Jan 20 13:58:23 compute-0 systemd[1]: libpod-fc6f77920d19def4c47146a951a61ec2b14bc52bf415ad069a9c59b3b963c506.scope: Deactivated successfully.
Jan 20 13:58:23 compute-0 conmon[90146]: conmon fc6f77920d19def4c471 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fc6f77920d19def4c47146a951a61ec2b14bc52bf415ad069a9c59b3b963c506.scope/container/memory.events
Jan 20 13:58:23 compute-0 podman[90131]: 2026-01-20 13:58:23.668065228 +0000 UTC m=+0.755855487 container died fc6f77920d19def4c47146a951a61ec2b14bc52bf415ad069a9c59b3b963c506 (image=quay.io/ceph/ceph:v18, name=epic_elgamal, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS)
Jan 20 13:58:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-7e8e8ea8fe68bb3a3d59a1b0278ba6f531677d401d50b55e293d70720466e3c3-merged.mount: Deactivated successfully.
Jan 20 13:58:23 compute-0 podman[90131]: 2026-01-20 13:58:23.7106577 +0000 UTC m=+0.798447949 container remove fc6f77920d19def4c47146a951a61ec2b14bc52bf415ad069a9c59b3b963c506 (image=quay.io/ceph/ceph:v18, name=epic_elgamal, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2)
Jan 20 13:58:23 compute-0 systemd[1]: libpod-conmon-fc6f77920d19def4c47146a951a61ec2b14bc52bf415ad069a9c59b3b963c506.scope: Deactivated successfully.
Jan 20 13:58:23 compute-0 sudo[90128]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:23 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e38 do_prune osdmap full prune enabled
Jan 20 13:58:23 compute-0 ceph-mon[74360]: 2.7 scrub starts
Jan 20 13:58:23 compute-0 ceph-mon[74360]: 2.7 scrub ok
Jan 20 13:58:23 compute-0 ceph-mon[74360]: 3.b scrub starts
Jan 20 13:58:23 compute-0 ceph-mon[74360]: 3.b scrub ok
Jan 20 13:58:23 compute-0 ceph-mon[74360]: from='osd.2 [v2:192.168.122.102:6800/3188109873,v1:192.168.122.102:6801/3188109873]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Jan 20 13:58:23 compute-0 ceph-mon[74360]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Jan 20 13:58:23 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/2618177133' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Jan 20 13:58:23 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/2618177133' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Jan 20 13:58:23 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Jan 20 13:58:23 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e39 e39: 3 total, 2 up, 3 in
Jan 20 13:58:23 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e39: 3 total, 2 up, 3 in
Jan 20 13:58:23 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 20 13:58:23 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 20 13:58:23 compute-0 ceph-mgr[74653]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 20 13:58:23 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]} v 0) v1
Jan 20 13:58:23 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]}]: dispatch
Jan 20 13:58:23 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e39 create-or-move crush item name 'osd.2' initial_weight 0.0068 at location {host=compute-2,root=default}
Jan 20 13:58:24 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 20 13:58:24 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:24 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 20 13:58:24 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:24 compute-0 sudo[90184]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:58:24 compute-0 sudo[90184]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:58:24 compute-0 sudo[90184]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:24 compute-0 sudo[90209]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 13:58:24 compute-0 sudo[90209]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:58:24 compute-0 sudo[90209]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:24 compute-0 sudo[90257]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sliclvaohwtwjfodiwjftjpajwxygoja ; /usr/bin/python3'
Jan 20 13:58:24 compute-0 sudo[90257]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:58:24 compute-0 python3[90259]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .monmap.num_mons _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 13:58:24 compute-0 podman[90261]: 2026-01-20 13:58:24.513349489 +0000 UTC m=+0.051114959 container create 0a20d4931ecfefecc6414d15ed91dd9678b6a9b0889d493049bef5255f17337b (image=quay.io/ceph/ceph:v18, name=peaceful_visvesvaraya, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 20 13:58:24 compute-0 systemd[1]: Started libpod-conmon-0a20d4931ecfefecc6414d15ed91dd9678b6a9b0889d493049bef5255f17337b.scope.
Jan 20 13:58:24 compute-0 systemd[1]: Started libcrun container.
Jan 20 13:58:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/508a2e76e28e8962d5c14106e65f1ac078997ecbb7cfaa08d97705fef124aa82/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 13:58:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/508a2e76e28e8962d5c14106e65f1ac078997ecbb7cfaa08d97705fef124aa82/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 13:58:24 compute-0 podman[90261]: 2026-01-20 13:58:24.494581171 +0000 UTC m=+0.032346651 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 20 13:58:24 compute-0 podman[90261]: 2026-01-20 13:58:24.597899174 +0000 UTC m=+0.135664694 container init 0a20d4931ecfefecc6414d15ed91dd9678b6a9b0889d493049bef5255f17337b (image=quay.io/ceph/ceph:v18, name=peaceful_visvesvaraya, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 20 13:58:24 compute-0 podman[90261]: 2026-01-20 13:58:24.608060735 +0000 UTC m=+0.145826215 container start 0a20d4931ecfefecc6414d15ed91dd9678b6a9b0889d493049bef5255f17337b (image=quay.io/ceph/ceph:v18, name=peaceful_visvesvaraya, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 20 13:58:24 compute-0 podman[90261]: 2026-01-20 13:58:24.611861595 +0000 UTC m=+0.149627035 container attach 0a20d4931ecfefecc6414d15ed91dd9678b6a9b0889d493049bef5255f17337b (image=quay.io/ceph/ceph:v18, name=peaceful_visvesvaraya, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 20 13:58:24 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.gunjko started
Jan 20 13:58:24 compute-0 ceph-mgr[74653]: mgr.server handle_open ignoring open from mgr.compute-2.gunjko 192.168.122.102:0/1712381276; not ready for session (expect reconnect)
Jan 20 13:58:24 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e39 do_prune osdmap full prune enabled
Jan 20 13:58:24 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]}]': finished
Jan 20 13:58:24 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e40 e40: 3 total, 2 up, 3 in
Jan 20 13:58:24 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e40: 3 total, 2 up, 3 in
Jan 20 13:58:24 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 20 13:58:24 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 20 13:58:24 compute-0 ceph-mgr[74653]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 20 13:58:24 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 40 pg[4.19( empty local-lis/les=30/31 n=0 ec=30/18 lis/c=30/30 les/c/f=31/31/0 sis=40 pruub=13.990459442s) [] r=-1 lpr=40 pi=[30,40)/1 crt=0'0 mlcod 0'0 active pruub 81.888725281s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:24 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 40 pg[6.1b( empty local-lis/les=32/33 n=0 ec=32/22 lis/c=32/32 les/c/f=33/33/0 sis=40 pruub=8.001721382s) [] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 active pruub 75.899993896s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:24 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 40 pg[4.19( empty local-lis/les=30/31 n=0 ec=30/18 lis/c=30/30 les/c/f=31/31/0 sis=40 pruub=13.990459442s) [] r=-1 lpr=40 pi=[30,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.888725281s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:24 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 40 pg[6.1b( empty local-lis/les=32/33 n=0 ec=32/22 lis/c=32/32 les/c/f=33/33/0 sis=40 pruub=8.001721382s) [] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 75.899993896s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:24 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 40 pg[4.1c( empty local-lis/les=30/31 n=0 ec=30/18 lis/c=30/30 les/c/f=31/31/0 sis=40 pruub=13.990264893s) [] r=-1 lpr=40 pi=[30,40)/1 crt=0'0 mlcod 0'0 active pruub 81.888595581s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:24 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 40 pg[4.1c( empty local-lis/les=30/31 n=0 ec=30/18 lis/c=30/30 les/c/f=31/31/0 sis=40 pruub=13.990264893s) [] r=-1 lpr=40 pi=[30,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.888595581s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:24 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 40 pg[2.1b( empty local-lis/les=36/37 n=0 ec=28/15 lis/c=36/36 les/c/f=37/37/0 sis=40 pruub=15.887122154s) [] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active pruub 83.785575867s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:24 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 40 pg[2.1b( empty local-lis/les=36/37 n=0 ec=28/15 lis/c=36/36 les/c/f=37/37/0 sis=40 pruub=15.887122154s) [] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.785575867s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:24 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 40 pg[3.8( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=40 pruub=11.959957123s) [] r=-1 lpr=40 pi=[28,40)/1 crt=0'0 mlcod 0'0 active pruub 79.858459473s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:24 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 40 pg[4.1d( empty local-lis/les=30/31 n=0 ec=30/18 lis/c=30/30 les/c/f=31/31/0 sis=40 pruub=13.990021706s) [] r=-1 lpr=40 pi=[30,40)/1 crt=0'0 mlcod 0'0 active pruub 81.888519287s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:24 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 40 pg[3.8( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=40 pruub=11.959957123s) [] r=-1 lpr=40 pi=[28,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.858459473s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:24 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 40 pg[4.1d( empty local-lis/les=30/31 n=0 ec=30/18 lis/c=30/30 les/c/f=31/31/0 sis=40 pruub=13.990021706s) [] r=-1 lpr=40 pi=[30,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.888519287s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:24 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 40 pg[4.3( empty local-lis/les=30/31 n=0 ec=30/18 lis/c=30/30 les/c/f=31/31/0 sis=40 pruub=13.989562988s) [] r=-1 lpr=40 pi=[30,40)/1 crt=0'0 mlcod 0'0 active pruub 81.888114929s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:24 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 40 pg[4.3( empty local-lis/les=30/31 n=0 ec=30/18 lis/c=30/30 les/c/f=31/31/0 sis=40 pruub=13.989562988s) [] r=-1 lpr=40 pi=[30,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.888114929s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:24 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 40 pg[6.1( empty local-lis/les=32/33 n=0 ec=32/22 lis/c=32/32 les/c/f=33/33/0 sis=40 pruub=8.001141548s) [] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 active pruub 75.899734497s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:24 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 40 pg[6.1( empty local-lis/les=32/33 n=0 ec=32/22 lis/c=32/32 les/c/f=33/33/0 sis=40 pruub=8.001141548s) [] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 75.899734497s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:24 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 40 pg[4.6( empty local-lis/les=30/31 n=0 ec=30/18 lis/c=30/30 les/c/f=31/31/0 sis=40 pruub=13.989562035s) [] r=-1 lpr=40 pi=[30,40)/1 crt=0'0 mlcod 0'0 active pruub 81.888229370s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:24 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 40 pg[4.6( empty local-lis/les=30/31 n=0 ec=30/18 lis/c=30/30 les/c/f=31/31/0 sis=40 pruub=13.989562035s) [] r=-1 lpr=40 pi=[30,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.888229370s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:24 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 40 pg[4.2( empty local-lis/les=30/31 n=0 ec=30/18 lis/c=30/30 les/c/f=31/31/0 sis=40 pruub=13.989113808s) [] r=-1 lpr=40 pi=[30,40)/1 crt=0'0 mlcod 0'0 active pruub 81.887840271s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:24 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 40 pg[5.0( empty local-lis/les=30/31 n=0 ec=20/20 lis/c=30/30 les/c/f=31/31/0 sis=40 pruub=13.989061356s) [] r=-1 lpr=40 pi=[30,40)/1 crt=0'0 mlcod 0'0 active pruub 81.887794495s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:24 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 40 pg[5.0( empty local-lis/les=30/31 n=0 ec=20/20 lis/c=30/30 les/c/f=31/31/0 sis=40 pruub=13.989061356s) [] r=-1 lpr=40 pi=[30,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.887794495s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:24 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 40 pg[4.2( empty local-lis/les=30/31 n=0 ec=30/18 lis/c=30/30 les/c/f=31/31/0 sis=40 pruub=13.989113808s) [] r=-1 lpr=40 pi=[30,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.887840271s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:24 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 40 pg[3.0( empty local-lis/les=28/29 n=0 ec=17/17 lis/c=28/28 les/c/f=29/29/0 sis=40 pruub=11.959934235s) [] r=-1 lpr=40 pi=[28,40)/1 crt=0'0 mlcod 0'0 active pruub 79.858726501s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:24 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 40 pg[3.0( empty local-lis/les=28/29 n=0 ec=17/17 lis/c=28/28 les/c/f=29/29/0 sis=40 pruub=11.959934235s) [] r=-1 lpr=40 pi=[28,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.858726501s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:24 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 40 pg[5.d( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=40 pruub=13.984177589s) [] r=-1 lpr=40 pi=[30,40)/1 crt=0'0 mlcod 0'0 active pruub 81.883041382s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:24 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 40 pg[2.d( empty local-lis/les=36/37 n=0 ec=28/15 lis/c=36/36 les/c/f=37/37/0 sis=40 pruub=15.887120247s) [] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active pruub 83.785987854s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:24 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 40 pg[5.d( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=40 pruub=13.984177589s) [] r=-1 lpr=40 pi=[30,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.883041382s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:24 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 40 pg[2.a( empty local-lis/les=36/37 n=0 ec=28/15 lis/c=36/36 les/c/f=37/37/0 sis=40 pruub=15.887335777s) [] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active pruub 83.786224365s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:24 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 40 pg[2.d( empty local-lis/les=36/37 n=0 ec=28/15 lis/c=36/36 les/c/f=37/37/0 sis=40 pruub=15.887120247s) [] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.785987854s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:24 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 40 pg[2.a( empty local-lis/les=36/37 n=0 ec=28/15 lis/c=36/36 les/c/f=37/37/0 sis=40 pruub=15.887335777s) [] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.786224365s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:24 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 40 pg[2.c( empty local-lis/les=36/37 n=0 ec=28/15 lis/c=36/36 les/c/f=37/37/0 sis=40 pruub=15.887003899s) [] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active pruub 83.786026001s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:24 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 40 pg[5.b( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=40 pruub=13.983865738s) [] r=-1 lpr=40 pi=[30,40)/1 crt=0'0 mlcod 0'0 active pruub 81.882896423s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:24 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 40 pg[2.c( empty local-lis/les=36/37 n=0 ec=28/15 lis/c=36/36 les/c/f=37/37/0 sis=40 pruub=15.887003899s) [] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.786026001s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:24 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 40 pg[5.b( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=40 pruub=13.983865738s) [] r=-1 lpr=40 pi=[30,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.882896423s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:24 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 40 pg[7.a( empty local-lis/les=36/37 n=0 ec=32/24 lis/c=36/36 les/c/f=37/37/0 sis=40 pruub=15.886811256s) [] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active pruub 83.785881042s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:24 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 40 pg[7.a( empty local-lis/les=36/37 n=0 ec=32/24 lis/c=36/36 les/c/f=37/37/0 sis=40 pruub=15.886811256s) [] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.785881042s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:24 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 40 pg[5.8( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=40 pruub=13.983557701s) [] r=-1 lpr=40 pi=[30,40)/1 crt=0'0 mlcod 0'0 active pruub 81.882667542s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:24 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 40 pg[7.14( empty local-lis/les=36/37 n=0 ec=32/24 lis/c=36/36 les/c/f=37/37/0 sis=40 pruub=15.887690544s) [] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active pruub 83.786819458s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:24 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 40 pg[5.8( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=40 pruub=13.983557701s) [] r=-1 lpr=40 pi=[30,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.882667542s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:24 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 40 pg[7.14( empty local-lis/les=36/37 n=0 ec=32/24 lis/c=36/36 les/c/f=37/37/0 sis=40 pruub=15.887690544s) [] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.786819458s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:24 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 40 pg[2.10( empty local-lis/les=36/37 n=0 ec=28/15 lis/c=36/36 les/c/f=37/37/0 sis=40 pruub=15.886744499s) [] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active pruub 83.785942078s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:24 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 40 pg[2.10( empty local-lis/les=36/37 n=0 ec=28/15 lis/c=36/36 les/c/f=37/37/0 sis=40 pruub=15.886744499s) [] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.785942078s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:24 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 40 pg[4.14( empty local-lis/les=30/31 n=0 ec=30/18 lis/c=30/30 les/c/f=31/31/0 sis=40 pruub=13.983558655s) [] r=-1 lpr=40 pi=[30,40)/1 crt=0'0 mlcod 0'0 active pruub 81.882804871s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:24 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 40 pg[2.15( empty local-lis/les=36/37 n=0 ec=28/15 lis/c=36/36 les/c/f=37/37/0 sis=40 pruub=15.886503220s) [] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active pruub 83.785766602s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:24 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 40 pg[2.15( empty local-lis/les=36/37 n=0 ec=28/15 lis/c=36/36 les/c/f=37/37/0 sis=40 pruub=15.886503220s) [] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.785766602s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:24 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 40 pg[4.14( empty local-lis/les=30/31 n=0 ec=30/18 lis/c=30/30 les/c/f=31/31/0 sis=40 pruub=13.983558655s) [] r=-1 lpr=40 pi=[30,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.882804871s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:24 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 40 pg[5.13( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=40 pruub=13.983026505s) [] r=-1 lpr=40 pi=[30,40)/1 crt=0'0 mlcod 0'0 active pruub 81.882385254s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:24 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 40 pg[5.13( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=40 pruub=13.983026505s) [] r=-1 lpr=40 pi=[30,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.882385254s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:24 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 40 pg[5.12( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=40 pruub=13.982892036s) [] r=-1 lpr=40 pi=[30,40)/1 crt=0'0 mlcod 0'0 active pruub 81.882324219s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:24 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 40 pg[7.1d( empty local-lis/les=36/37 n=0 ec=32/24 lis/c=36/36 les/c/f=37/37/0 sis=40 pruub=15.886306763s) [] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active pruub 83.785766602s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:24 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 40 pg[5.12( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=40 pruub=13.982892036s) [] r=-1 lpr=40 pi=[30,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.882324219s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:24 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 40 pg[7.1d( empty local-lis/les=36/37 n=0 ec=32/24 lis/c=36/36 les/c/f=37/37/0 sis=40 pruub=15.886306763s) [] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.785766602s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:24 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 40 pg[2.13( empty local-lis/les=36/37 n=0 ec=28/15 lis/c=36/36 les/c/f=37/37/0 sis=40 pruub=15.886525154s) [] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active pruub 83.785774231s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:24 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 40 pg[2.13( empty local-lis/les=36/37 n=0 ec=28/15 lis/c=36/36 les/c/f=37/37/0 sis=40 pruub=15.886525154s) [] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.785774231s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:24 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 40 pg[3.1b( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=40 pruub=11.954574585s) [] r=-1 lpr=40 pi=[28,40)/1 crt=0'0 mlcod 0'0 active pruub 79.858352661s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:24 compute-0 ceph-mon[74360]: pgmap v106: 193 pgs: 193 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Jan 20 13:58:24 compute-0 ceph-mon[74360]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Jan 20 13:58:24 compute-0 ceph-mon[74360]: osdmap e39: 3 total, 2 up, 3 in
Jan 20 13:58:24 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 20 13:58:24 compute-0 ceph-mon[74360]: from='osd.2 [v2:192.168.122.102:6800/3188109873,v1:192.168.122.102:6801/3188109873]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]}]: dispatch
Jan 20 13:58:24 compute-0 ceph-mon[74360]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]}]: dispatch
Jan 20 13:58:24 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:24 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:24 compute-0 ceph-mon[74360]: Standby manager daemon compute-2.gunjko started
Jan 20 13:58:24 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 40 pg[3.1b( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=40 pruub=11.954574585s) [] r=-1 lpr=40 pi=[28,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.858352661s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:24 compute-0 ceph-mgr[74653]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/3188109873; not ready for session (expect reconnect)
Jan 20 13:58:24 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 20 13:58:24 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 20 13:58:24 compute-0 ceph-mgr[74653]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 20 13:58:25 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : mgrmap e10: compute-0.wookjv(active, since 2m), standbys: compute-2.gunjko
Jan 20 13:58:25 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-2.gunjko", "id": "compute-2.gunjko"} v 0) v1
Jan 20 13:58:25 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "mgr metadata", "who": "compute-2.gunjko", "id": "compute-2.gunjko"}]: dispatch
Jan 20 13:58:25 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Jan 20 13:58:25 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1505765802' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Jan 20 13:58:25 compute-0 peaceful_visvesvaraya[90277]: 
Jan 20 13:58:25 compute-0 peaceful_visvesvaraya[90277]: {"fsid":"e399cf45-e6b6-5393-99f1-75c601d3f188","health":{"status":"HEALTH_ERR","checks":{"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false}},"mutes":[]},"election_epoch":14,"quorum":[0,1,2],"quorum_names":["compute-0","compute-2","compute-1"],"quorum_age":21,"monmap":{"epoch":3,"min_mon_release_name":"reef","num_mons":3},"osdmap":{"epoch":40,"num_osds":3,"num_up_osds":2,"osd_up_since":1768917450,"num_in_osds":3,"osd_in_since":1768917490,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":193}],"num_pgs":193,"num_pools":7,"num_objects":2,"data_bytes":459280,"bytes_used":56172544,"bytes_avail":14967824384,"bytes_total":15023996928},"fsmap":{"epoch":2,"id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":1,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2026-01-20T13:56:54.382913+0000","services":{}},"progress_events":{}}
Jan 20 13:58:25 compute-0 systemd[1]: libpod-0a20d4931ecfefecc6414d15ed91dd9678b6a9b0889d493049bef5255f17337b.scope: Deactivated successfully.
Jan 20 13:58:25 compute-0 podman[90261]: 2026-01-20 13:58:25.203999943 +0000 UTC m=+0.741765433 container died 0a20d4931ecfefecc6414d15ed91dd9678b6a9b0889d493049bef5255f17337b (image=quay.io/ceph/ceph:v18, name=peaceful_visvesvaraya, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 13:58:25 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v109: 193 pgs: 193 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Jan 20 13:58:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-508a2e76e28e8962d5c14106e65f1ac078997ecbb7cfaa08d97705fef124aa82-merged.mount: Deactivated successfully.
Jan 20 13:58:25 compute-0 podman[90261]: 2026-01-20 13:58:25.254237967 +0000 UTC m=+0.792003427 container remove 0a20d4931ecfefecc6414d15ed91dd9678b6a9b0889d493049bef5255f17337b (image=quay.io/ceph/ceph:v18, name=peaceful_visvesvaraya, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 20 13:58:25 compute-0 systemd[1]: libpod-conmon-0a20d4931ecfefecc6414d15ed91dd9678b6a9b0889d493049bef5255f17337b.scope: Deactivated successfully.
Jan 20 13:58:25 compute-0 sudo[90257]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:25 compute-0 sudo[90337]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bkxdwnthpvilcyjliagwumowkdmcuhjk ; /usr/bin/python3'
Jan 20 13:58:25 compute-0 sudo[90337]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:58:25 compute-0 python3[90339]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mon dump --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 13:58:25 compute-0 podman[90340]: 2026-01-20 13:58:25.621952724 +0000 UTC m=+0.052285380 container create f84b936b20d0b22b2b7a8a3a34ecea911753db2b57c2f1abf436880e56ba4805 (image=quay.io/ceph/ceph:v18, name=hardcore_williamson, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 20 13:58:25 compute-0 systemd[1]: Started libpod-conmon-f84b936b20d0b22b2b7a8a3a34ecea911753db2b57c2f1abf436880e56ba4805.scope.
Jan 20 13:58:25 compute-0 systemd[1]: Started libcrun container.
Jan 20 13:58:25 compute-0 podman[90340]: 2026-01-20 13:58:25.596762835 +0000 UTC m=+0.027095531 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 20 13:58:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3aaa035cedb13c506e28968405f18f8387a47e1139b819d030c9082c272c04f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 13:58:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3aaa035cedb13c506e28968405f18f8387a47e1139b819d030c9082c272c04f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 13:58:25 compute-0 podman[90340]: 2026-01-20 13:58:25.70462597 +0000 UTC m=+0.134958596 container init f84b936b20d0b22b2b7a8a3a34ecea911753db2b57c2f1abf436880e56ba4805 (image=quay.io/ceph/ceph:v18, name=hardcore_williamson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 13:58:25 compute-0 podman[90340]: 2026-01-20 13:58:25.71068114 +0000 UTC m=+0.141013756 container start f84b936b20d0b22b2b7a8a3a34ecea911753db2b57c2f1abf436880e56ba4805 (image=quay.io/ceph/ceph:v18, name=hardcore_williamson, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 20 13:58:25 compute-0 podman[90340]: 2026-01-20 13:58:25.713981709 +0000 UTC m=+0.144314345 container attach f84b936b20d0b22b2b7a8a3a34ecea911753db2b57c2f1abf436880e56ba4805 (image=quay.io/ceph/ceph:v18, name=hardcore_williamson, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 20 13:58:25 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 3.12 scrub starts
Jan 20 13:58:25 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 3.12 scrub ok
Jan 20 13:58:25 compute-0 ceph-mgr[74653]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/3188109873; not ready for session (expect reconnect)
Jan 20 13:58:25 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 20 13:58:25 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 20 13:58:25 compute-0 ceph-mgr[74653]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 20 13:58:25 compute-0 ceph-mon[74360]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]}]': finished
Jan 20 13:58:25 compute-0 ceph-mon[74360]: osdmap e40: 3 total, 2 up, 3 in
Jan 20 13:58:25 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 20 13:58:25 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 20 13:58:25 compute-0 ceph-mon[74360]: mgrmap e10: compute-0.wookjv(active, since 2m), standbys: compute-2.gunjko
Jan 20 13:58:25 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "mgr metadata", "who": "compute-2.gunjko", "id": "compute-2.gunjko"}]: dispatch
Jan 20 13:58:25 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/1505765802' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Jan 20 13:58:25 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 20 13:58:25 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:25 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 20 13:58:25 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:26 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 20 13:58:26 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:26 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 20 13:58:26 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:26 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 13:58:26 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3562592918' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 13:58:26 compute-0 hardcore_williamson[90355]: 
Jan 20 13:58:26 compute-0 hardcore_williamson[90355]: {"epoch":3,"fsid":"e399cf45-e6b6-5393-99f1-75c601d3f188","modified":"2026-01-20T13:57:58.636725Z","created":"2026-01-20T13:55:03.669626Z","min_mon_release":18,"min_mon_release_name":"reef","election_strategy":1,"disallowed_leaders: ":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks: ":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef"],"optional":[]},"mons":[{"rank":0,"name":"compute-0","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.100:3300","nonce":0},{"type":"v1","addr":"192.168.122.100:6789","nonce":0}]},"addr":"192.168.122.100:6789/0","public_addr":"192.168.122.100:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":1,"name":"compute-2","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.102:3300","nonce":0},{"type":"v1","addr":"192.168.122.102:6789","nonce":0}]},"addr":"192.168.122.102:6789/0","public_addr":"192.168.122.102:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":2,"name":"compute-1","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.101:3300","nonce":0},{"type":"v1","addr":"192.168.122.101:6789","nonce":0}]},"addr":"192.168.122.101:6789/0","public_addr":"192.168.122.101:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0,1,2]}
Jan 20 13:58:26 compute-0 hardcore_williamson[90355]: dumped monmap epoch 3
Jan 20 13:58:26 compute-0 systemd[1]: libpod-f84b936b20d0b22b2b7a8a3a34ecea911753db2b57c2f1abf436880e56ba4805.scope: Deactivated successfully.
Jan 20 13:58:26 compute-0 podman[90340]: 2026-01-20 13:58:26.317824277 +0000 UTC m=+0.748156933 container died f84b936b20d0b22b2b7a8a3a34ecea911753db2b57c2f1abf436880e56ba4805 (image=quay.io/ceph/ceph:v18, name=hardcore_williamson, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 20 13:58:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-d3aaa035cedb13c506e28968405f18f8387a47e1139b819d030c9082c272c04f-merged.mount: Deactivated successfully.
Jan 20 13:58:26 compute-0 podman[90340]: 2026-01-20 13:58:26.370778873 +0000 UTC m=+0.801111529 container remove f84b936b20d0b22b2b7a8a3a34ecea911753db2b57c2f1abf436880e56ba4805 (image=quay.io/ceph/ceph:v18, name=hardcore_williamson, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 13:58:26 compute-0 systemd[1]: libpod-conmon-f84b936b20d0b22b2b7a8a3a34ecea911753db2b57c2f1abf436880e56ba4805.scope: Deactivated successfully.
Jan 20 13:58:26 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.oweoeg started
Jan 20 13:58:26 compute-0 ceph-mgr[74653]: mgr.server handle_open ignoring open from mgr.compute-1.oweoeg 192.168.122.101:0/425045238; not ready for session (expect reconnect)
Jan 20 13:58:26 compute-0 sudo[90337]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:26 compute-0 ceph-mgr[74653]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/3188109873; not ready for session (expect reconnect)
Jan 20 13:58:26 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 20 13:58:26 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 20 13:58:26 compute-0 ceph-mgr[74653]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 20 13:58:26 compute-0 ceph-mon[74360]: purged_snaps scrub starts
Jan 20 13:58:26 compute-0 ceph-mon[74360]: purged_snaps scrub ok
Jan 20 13:58:26 compute-0 ceph-mon[74360]: pgmap v109: 193 pgs: 193 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Jan 20 13:58:26 compute-0 ceph-mon[74360]: 3.12 scrub starts
Jan 20 13:58:26 compute-0 ceph-mon[74360]: 3.12 scrub ok
Jan 20 13:58:26 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 20 13:58:26 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:26 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:26 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:26 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:26 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3562592918' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 13:58:26 compute-0 ceph-mon[74360]: Standby manager daemon compute-1.oweoeg started
Jan 20 13:58:26 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 20 13:58:26 compute-0 sudo[90414]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-adteftqdjyursyuhgtimwijacdbyzrio ; /usr/bin/python3'
Jan 20 13:58:26 compute-0 sudo[90414]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:58:27 compute-0 python3[90416]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth get client.openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 13:58:27 compute-0 podman[90417]: 2026-01-20 13:58:27.090879779 +0000 UTC m=+0.047979895 container create a2e6db8deda2eec630bdb55ef4f0041624a75c714eda8dfcc78929a67b3d4256 (image=quay.io/ceph/ceph:v18, name=beautiful_mirzakhani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 13:58:27 compute-0 systemd[1]: Started libpod-conmon-a2e6db8deda2eec630bdb55ef4f0041624a75c714eda8dfcc78929a67b3d4256.scope.
Jan 20 13:58:27 compute-0 systemd[1]: Started libcrun container.
Jan 20 13:58:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a485dcf65d84d4adef5952712bed9971f99e2d66db5904dbe2ed6b419bc6d99c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 13:58:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a485dcf65d84d4adef5952712bed9971f99e2d66db5904dbe2ed6b419bc6d99c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 13:58:27 compute-0 podman[90417]: 2026-01-20 13:58:27.068792382 +0000 UTC m=+0.025892538 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 20 13:58:27 compute-0 podman[90417]: 2026-01-20 13:58:27.170533555 +0000 UTC m=+0.127633701 container init a2e6db8deda2eec630bdb55ef4f0041624a75c714eda8dfcc78929a67b3d4256 (image=quay.io/ceph/ceph:v18, name=beautiful_mirzakhani, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 20 13:58:27 compute-0 podman[90417]: 2026-01-20 13:58:27.17564678 +0000 UTC m=+0.132746906 container start a2e6db8deda2eec630bdb55ef4f0041624a75c714eda8dfcc78929a67b3d4256 (image=quay.io/ceph/ceph:v18, name=beautiful_mirzakhani, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 20 13:58:27 compute-0 podman[90417]: 2026-01-20 13:58:27.178314621 +0000 UTC m=+0.135414777 container attach a2e6db8deda2eec630bdb55ef4f0041624a75c714eda8dfcc78929a67b3d4256 (image=quay.io/ceph/ceph:v18, name=beautiful_mirzakhani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default)
Jan 20 13:58:27 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : mgrmap e11: compute-0.wookjv(active, since 2m), standbys: compute-2.gunjko, compute-1.oweoeg
Jan 20 13:58:27 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-1.oweoeg", "id": "compute-1.oweoeg"} v 0) v1
Jan 20 13:58:27 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "mgr metadata", "who": "compute-1.oweoeg", "id": "compute-1.oweoeg"}]: dispatch
Jan 20 13:58:27 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v110: 193 pgs: 193 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Jan 20 13:58:27 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.openstack"} v 0) v1
Jan 20 13:58:27 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/260003973' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Jan 20 13:58:27 compute-0 beautiful_mirzakhani[90432]: [client.openstack]
Jan 20 13:58:27 compute-0 beautiful_mirzakhani[90432]:         key = AQAciW9pAAAAABAAwJYC9p1PAwdI6pFMhbpXIA==
Jan 20 13:58:27 compute-0 beautiful_mirzakhani[90432]:         caps mgr = "allow *"
Jan 20 13:58:27 compute-0 beautiful_mirzakhani[90432]:         caps mon = "profile rbd"
Jan 20 13:58:27 compute-0 beautiful_mirzakhani[90432]:         caps osd = "profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=images, profile rbd pool=cephfs.cephfs.meta, profile rbd pool=cephfs.cephfs.data"
Jan 20 13:58:27 compute-0 systemd[1]: libpod-a2e6db8deda2eec630bdb55ef4f0041624a75c714eda8dfcc78929a67b3d4256.scope: Deactivated successfully.
Jan 20 13:58:27 compute-0 podman[90417]: 2026-01-20 13:58:27.797724983 +0000 UTC m=+0.754825139 container died a2e6db8deda2eec630bdb55ef4f0041624a75c714eda8dfcc78929a67b3d4256 (image=quay.io/ceph/ceph:v18, name=beautiful_mirzakhani, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 13:58:27 compute-0 ceph-mgr[74653]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/3188109873; not ready for session (expect reconnect)
Jan 20 13:58:27 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 20 13:58:27 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 20 13:58:27 compute-0 ceph-mgr[74653]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 20 13:58:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-a485dcf65d84d4adef5952712bed9971f99e2d66db5904dbe2ed6b419bc6d99c-merged.mount: Deactivated successfully.
Jan 20 13:58:27 compute-0 podman[90417]: 2026-01-20 13:58:27.841724272 +0000 UTC m=+0.798824378 container remove a2e6db8deda2eec630bdb55ef4f0041624a75c714eda8dfcc78929a67b3d4256 (image=quay.io/ceph/ceph:v18, name=beautiful_mirzakhani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 13:58:27 compute-0 systemd[1]: libpod-conmon-a2e6db8deda2eec630bdb55ef4f0041624a75c714eda8dfcc78929a67b3d4256.scope: Deactivated successfully.
Jan 20 13:58:27 compute-0 sudo[90414]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:27 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e40 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 13:58:28 compute-0 ceph-mon[74360]: mgrmap e11: compute-0.wookjv(active, since 2m), standbys: compute-2.gunjko, compute-1.oweoeg
Jan 20 13:58:28 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "mgr metadata", "who": "compute-1.oweoeg", "id": "compute-1.oweoeg"}]: dispatch
Jan 20 13:58:28 compute-0 ceph-mon[74360]: pgmap v110: 193 pgs: 193 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Jan 20 13:58:28 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/260003973' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Jan 20 13:58:28 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 20 13:58:28 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 20 13:58:28 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:28 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 20 13:58:28 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:28 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0) v1
Jan 20 13:58:28 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Jan 20 13:58:28 compute-0 ceph-mgr[74653]: [cephadm INFO root] Adjusting osd_memory_target on compute-2 to 127.9M
Jan 20 13:58:28 compute-0 ceph-mgr[74653]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-2 to 127.9M
Jan 20 13:58:28 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) v1
Jan 20 13:58:28 compute-0 ceph-mgr[74653]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-2 to 134209126: error parsing value: Value '134209126' is below minimum 939524096
Jan 20 13:58:28 compute-0 ceph-mgr[74653]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-2 to 134209126: error parsing value: Value '134209126' is below minimum 939524096
Jan 20 13:58:28 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 13:58:28 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 13:58:28 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 20 13:58:28 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 13:58:28 compute-0 ceph-mgr[74653]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Jan 20 13:58:28 compute-0 ceph-mgr[74653]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Jan 20 13:58:28 compute-0 ceph-mgr[74653]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Jan 20 13:58:28 compute-0 ceph-mgr[74653]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Jan 20 13:58:28 compute-0 ceph-mgr[74653]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Jan 20 13:58:28 compute-0 ceph-mgr[74653]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Jan 20 13:58:28 compute-0 sudo[90470]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:58:28 compute-0 sudo[90470]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:58:28 compute-0 sudo[90470]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:28 compute-0 sudo[90495]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Jan 20 13:58:28 compute-0 sudo[90495]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:58:28 compute-0 ceph-mgr[74653]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/3188109873; not ready for session (expect reconnect)
Jan 20 13:58:28 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 20 13:58:28 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 20 13:58:28 compute-0 ceph-mgr[74653]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 20 13:58:28 compute-0 sudo[90495]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:28 compute-0 sudo[90524]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:58:28 compute-0 sudo[90524]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:58:28 compute-0 sudo[90524]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:28 compute-0 sudo[90580]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-e399cf45-e6b6-5393-99f1-75c601d3f188/etc/ceph
Jan 20 13:58:28 compute-0 sudo[90580]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:58:28 compute-0 sudo[90580]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:29 compute-0 sudo[90630]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:58:29 compute-0 sudo[90630]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:58:29 compute-0 sudo[90630]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:29 compute-0 sudo[90671]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-e399cf45-e6b6-5393-99f1-75c601d3f188/etc/ceph/ceph.conf.new
Jan 20 13:58:29 compute-0 sudo[90671]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:58:29 compute-0 sudo[90671]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:29 compute-0 sudo[90718]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:58:29 compute-0 sudo[90718]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:58:29 compute-0 sudo[90718]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:29 compute-0 sudo[90767]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-e399cf45-e6b6-5393-99f1-75c601d3f188
Jan 20 13:58:29 compute-0 sudo[90767]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:58:29 compute-0 sudo[90767]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:29 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v111: 193 pgs: 193 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Jan 20 13:58:29 compute-0 sudo[90816]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rvwjwmochsyzvthofinnazdjscigterd ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1768917508.8692527-37515-245174742355900/async_wrapper.py j273501179630 30 /home/zuul/.ansible/tmp/ansible-tmp-1768917508.8692527-37515-245174742355900/AnsiballZ_command.py _'
Jan 20 13:58:29 compute-0 sudo[90816]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:58:29 compute-0 sudo[90819]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:58:29 compute-0 sudo[90819]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:58:29 compute-0 sudo[90819]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:29 compute-0 sudo[90845]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-e399cf45-e6b6-5393-99f1-75c601d3f188/etc/ceph/ceph.conf.new
Jan 20 13:58:29 compute-0 sudo[90845]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:58:29 compute-0 sudo[90845]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:29 compute-0 ansible-async_wrapper.py[90821]: Invoked with j273501179630 30 /home/zuul/.ansible/tmp/ansible-tmp-1768917508.8692527-37515-245174742355900/AnsiballZ_command.py _
Jan 20 13:58:29 compute-0 ansible-async_wrapper.py[90895]: Starting module and watcher
Jan 20 13:58:29 compute-0 ansible-async_wrapper.py[90895]: Start watching 90896 (30)
Jan 20 13:58:29 compute-0 ansible-async_wrapper.py[90896]: Start module (90896)
Jan 20 13:58:29 compute-0 ansible-async_wrapper.py[90821]: Return async_wrapper task started.
Jan 20 13:58:29 compute-0 sudo[90816]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:29 compute-0 sudo[90898]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:58:29 compute-0 sudo[90898]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:58:29 compute-0 sudo[90898]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:29 compute-0 sudo[90923]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-e399cf45-e6b6-5393-99f1-75c601d3f188/etc/ceph/ceph.conf.new
Jan 20 13:58:29 compute-0 sudo[90923]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:58:29 compute-0 sudo[90923]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:29 compute-0 python3[90897]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 13:58:29 compute-0 sudo[90948]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:58:29 compute-0 sudo[90948]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:58:29 compute-0 sudo[90948]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:29 compute-0 podman[90962]: 2026-01-20 13:58:29.593103419 +0000 UTC m=+0.044924424 container create 05f79776d3677d12c0b2212c47e92a90aeafe35b1664ecf672550df4407e7ec9 (image=quay.io/ceph/ceph:v18, name=interesting_kilby, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 20 13:58:29 compute-0 systemd[1]: Started libpod-conmon-05f79776d3677d12c0b2212c47e92a90aeafe35b1664ecf672550df4407e7ec9.scope.
Jan 20 13:58:29 compute-0 sudo[90986]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-e399cf45-e6b6-5393-99f1-75c601d3f188/etc/ceph/ceph.conf.new
Jan 20 13:58:29 compute-0 sudo[90986]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:58:29 compute-0 sudo[90986]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:29 compute-0 systemd[1]: Started libcrun container.
Jan 20 13:58:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f193a0e08ee6e6ff20fa57d2dd5ccd87187fd9372e36bbadc7c8317e6a441c34/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 13:58:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f193a0e08ee6e6ff20fa57d2dd5ccd87187fd9372e36bbadc7c8317e6a441c34/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 13:58:29 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:29 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:29 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Jan 20 13:58:29 compute-0 ceph-mon[74360]: Adjusting osd_memory_target on compute-2 to 127.9M
Jan 20 13:58:29 compute-0 ceph-mon[74360]: Unable to set osd_memory_target on compute-2 to 134209126: error parsing value: Value '134209126' is below minimum 939524096
Jan 20 13:58:29 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 13:58:29 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 13:58:29 compute-0 ceph-mon[74360]: Updating compute-0:/etc/ceph/ceph.conf
Jan 20 13:58:29 compute-0 ceph-mon[74360]: Updating compute-1:/etc/ceph/ceph.conf
Jan 20 13:58:29 compute-0 ceph-mon[74360]: Updating compute-2:/etc/ceph/ceph.conf
Jan 20 13:58:29 compute-0 ceph-mon[74360]: OSD bench result of 6249.450009 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 20 13:58:29 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 20 13:58:29 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e40 do_prune osdmap full prune enabled
Jan 20 13:58:29 compute-0 podman[90962]: 2026-01-20 13:58:29.662261225 +0000 UTC m=+0.114082340 container init 05f79776d3677d12c0b2212c47e92a90aeafe35b1664ecf672550df4407e7ec9 (image=quay.io/ceph/ceph:v18, name=interesting_kilby, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 13:58:29 compute-0 podman[90962]: 2026-01-20 13:58:29.572190543 +0000 UTC m=+0.024011608 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 20 13:58:29 compute-0 podman[90962]: 2026-01-20 13:58:29.671992514 +0000 UTC m=+0.123813539 container start 05f79776d3677d12c0b2212c47e92a90aeafe35b1664ecf672550df4407e7ec9 (image=quay.io/ceph/ceph:v18, name=interesting_kilby, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 20 13:58:29 compute-0 podman[90962]: 2026-01-20 13:58:29.675727633 +0000 UTC m=+0.127548678 container attach 05f79776d3677d12c0b2212c47e92a90aeafe35b1664ecf672550df4407e7ec9 (image=quay.io/ceph/ceph:v18, name=interesting_kilby, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 20 13:58:29 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e41 e41: 3 total, 3 up, 3 in
Jan 20 13:58:29 compute-0 ceph-mon[74360]: log_channel(cluster) log [INF] : osd.2 [v2:192.168.122.102:6800/3188109873,v1:192.168.122.102:6801/3188109873] boot
Jan 20 13:58:29 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 41 pg[4.19( empty local-lis/les=30/31 n=0 ec=30/18 lis/c=30/30 les/c/f=31/31/0 sis=41 pruub=9.117205620s) [2] r=-1 lpr=41 pi=[30,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.888725281s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:29 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 41 pg[6.1b( empty local-lis/les=32/33 n=0 ec=32/22 lis/c=32/32 les/c/f=33/33/0 sis=41 pruub=3.128457069s) [2] r=-1 lpr=41 pi=[32,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 75.899993896s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:29 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 41 pg[4.19( empty local-lis/les=30/31 n=0 ec=30/18 lis/c=30/30 les/c/f=31/31/0 sis=41 pruub=9.117161751s) [2] r=-1 lpr=41 pi=[30,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.888725281s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:29 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 41 pg[6.1b( empty local-lis/les=32/33 n=0 ec=32/22 lis/c=32/32 les/c/f=33/33/0 sis=41 pruub=3.128428221s) [2] r=-1 lpr=41 pi=[32,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 75.899993896s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:29 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e41: 3 total, 3 up, 3 in
Jan 20 13:58:29 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 20 13:58:29 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 20 13:58:29 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 41 pg[3.1b( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=41 pruub=7.086704731s) [2] r=-1 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.858352661s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:29 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 41 pg[2.1b( empty local-lis/les=36/37 n=0 ec=28/15 lis/c=36/36 les/c/f=37/37/0 sis=41 pruub=11.013877869s) [2] r=-1 lpr=41 pi=[36,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.785575867s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:29 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 41 pg[3.1b( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=41 pruub=7.086654663s) [2] r=-1 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.858352661s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:29 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 41 pg[2.1b( empty local-lis/les=36/37 n=0 ec=28/15 lis/c=36/36 les/c/f=37/37/0 sis=41 pruub=11.013824463s) [2] r=-1 lpr=41 pi=[36,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.785575867s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:29 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 41 pg[4.1c( empty local-lis/les=30/31 n=0 ec=30/18 lis/c=30/30 les/c/f=31/31/0 sis=41 pruub=9.116738319s) [2] r=-1 lpr=41 pi=[30,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.888595581s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:29 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 41 pg[4.1c( empty local-lis/les=30/31 n=0 ec=30/18 lis/c=30/30 les/c/f=31/31/0 sis=41 pruub=9.116719246s) [2] r=-1 lpr=41 pi=[30,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.888595581s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:29 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 41 pg[4.1d( empty local-lis/les=30/31 n=0 ec=30/18 lis/c=30/30 les/c/f=31/31/0 sis=41 pruub=9.116087914s) [2] r=-1 lpr=41 pi=[30,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.888519287s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:29 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 41 pg[4.1d( empty local-lis/les=30/31 n=0 ec=30/18 lis/c=30/30 les/c/f=31/31/0 sis=41 pruub=9.116068840s) [2] r=-1 lpr=41 pi=[30,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.888519287s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:29 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 41 pg[3.8( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=41 pruub=7.085939884s) [2] r=-1 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.858459473s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:29 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 41 pg[4.3( empty local-lis/les=30/31 n=0 ec=30/18 lis/c=30/30 les/c/f=31/31/0 sis=41 pruub=9.115583420s) [2] r=-1 lpr=41 pi=[30,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.888114929s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:29 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 41 pg[6.1( empty local-lis/les=32/33 n=0 ec=32/22 lis/c=32/32 les/c/f=33/33/0 sis=41 pruub=3.127189159s) [2] r=-1 lpr=41 pi=[32,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 75.899734497s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:29 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 41 pg[3.8( empty local-lis/les=28/29 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=41 pruub=7.085907936s) [2] r=-1 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.858459473s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:29 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 41 pg[6.1( empty local-lis/les=32/33 n=0 ec=32/22 lis/c=32/32 les/c/f=33/33/0 sis=41 pruub=3.127173662s) [2] r=-1 lpr=41 pi=[32,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 75.899734497s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:29 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 41 pg[4.3( empty local-lis/les=30/31 n=0 ec=30/18 lis/c=30/30 les/c/f=31/31/0 sis=41 pruub=9.115542412s) [2] r=-1 lpr=41 pi=[30,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.888114929s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:29 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 41 pg[4.6( empty local-lis/les=30/31 n=0 ec=30/18 lis/c=30/30 les/c/f=31/31/0 sis=41 pruub=9.115531921s) [2] r=-1 lpr=41 pi=[30,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.888229370s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:29 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 41 pg[4.6( empty local-lis/les=30/31 n=0 ec=30/18 lis/c=30/30 les/c/f=31/31/0 sis=41 pruub=9.115512848s) [2] r=-1 lpr=41 pi=[30,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.888229370s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:29 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 41 pg[4.2( empty local-lis/les=30/31 n=0 ec=30/18 lis/c=30/30 les/c/f=31/31/0 sis=41 pruub=9.115096092s) [2] r=-1 lpr=41 pi=[30,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.887840271s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:29 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 41 pg[4.2( empty local-lis/les=30/31 n=0 ec=30/18 lis/c=30/30 les/c/f=31/31/0 sis=41 pruub=9.115069389s) [2] r=-1 lpr=41 pi=[30,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.887840271s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:29 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 41 pg[5.0( empty local-lis/les=30/31 n=0 ec=20/20 lis/c=30/30 les/c/f=31/31/0 sis=41 pruub=9.114976883s) [2] r=-1 lpr=41 pi=[30,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.887794495s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:29 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 41 pg[5.0( empty local-lis/les=30/31 n=0 ec=20/20 lis/c=30/30 les/c/f=31/31/0 sis=41 pruub=9.114956856s) [2] r=-1 lpr=41 pi=[30,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.887794495s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:29 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 41 pg[3.0( empty local-lis/les=28/29 n=0 ec=17/17 lis/c=28/28 les/c/f=29/29/0 sis=41 pruub=7.085805893s) [2] r=-1 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.858726501s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:29 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 41 pg[3.0( empty local-lis/les=28/29 n=0 ec=17/17 lis/c=28/28 les/c/f=29/29/0 sis=41 pruub=7.085744858s) [2] r=-1 lpr=41 pi=[28,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.858726501s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:29 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 41 pg[2.a( empty local-lis/les=36/37 n=0 ec=28/15 lis/c=36/36 les/c/f=37/37/0 sis=41 pruub=11.013182640s) [2] r=-1 lpr=41 pi=[36,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.786224365s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:29 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 41 pg[2.a( empty local-lis/les=36/37 n=0 ec=28/15 lis/c=36/36 les/c/f=37/37/0 sis=41 pruub=11.013160706s) [2] r=-1 lpr=41 pi=[36,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.786224365s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:29 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 41 pg[5.d( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=41 pruub=9.109955788s) [2] r=-1 lpr=41 pi=[30,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.883041382s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:29 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 41 pg[2.d( empty local-lis/les=36/37 n=0 ec=28/15 lis/c=36/36 les/c/f=37/37/0 sis=41 pruub=11.012893677s) [2] r=-1 lpr=41 pi=[36,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.785987854s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:29 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 41 pg[5.d( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=41 pruub=9.109935760s) [2] r=-1 lpr=41 pi=[30,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.883041382s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:29 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 41 pg[2.d( empty local-lis/les=36/37 n=0 ec=28/15 lis/c=36/36 les/c/f=37/37/0 sis=41 pruub=11.012873650s) [2] r=-1 lpr=41 pi=[36,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.785987854s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:29 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 41 pg[2.c( empty local-lis/les=36/37 n=0 ec=28/15 lis/c=36/36 les/c/f=37/37/0 sis=41 pruub=11.012780190s) [2] r=-1 lpr=41 pi=[36,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.786026001s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:29 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 41 pg[5.b( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=41 pruub=9.109642982s) [2] r=-1 lpr=41 pi=[30,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.882896423s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:29 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 41 pg[7.a( empty local-lis/les=36/37 n=0 ec=32/24 lis/c=36/36 les/c/f=37/37/0 sis=41 pruub=11.012615204s) [2] r=-1 lpr=41 pi=[36,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.785881042s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:29 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 41 pg[2.c( empty local-lis/les=36/37 n=0 ec=28/15 lis/c=36/36 les/c/f=37/37/0 sis=41 pruub=11.012762070s) [2] r=-1 lpr=41 pi=[36,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.786026001s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:29 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 41 pg[5.b( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=41 pruub=9.109619141s) [2] r=-1 lpr=41 pi=[30,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.882896423s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:29 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 41 pg[7.a( empty local-lis/les=36/37 n=0 ec=32/24 lis/c=36/36 les/c/f=37/37/0 sis=41 pruub=11.012597084s) [2] r=-1 lpr=41 pi=[36,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.785881042s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:29 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 41 pg[5.8( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=41 pruub=9.109292030s) [2] r=-1 lpr=41 pi=[30,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.882667542s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:29 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 41 pg[5.8( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=41 pruub=9.109275818s) [2] r=-1 lpr=41 pi=[30,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.882667542s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:29 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 41 pg[7.14( empty local-lis/les=36/37 n=0 ec=32/24 lis/c=36/36 les/c/f=37/37/0 sis=41 pruub=11.013382912s) [2] r=-1 lpr=41 pi=[36,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.786819458s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:29 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 41 pg[7.14( empty local-lis/les=36/37 n=0 ec=32/24 lis/c=36/36 les/c/f=37/37/0 sis=41 pruub=11.013364792s) [2] r=-1 lpr=41 pi=[36,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.786819458s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:29 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 41 pg[2.10( empty local-lis/les=36/37 n=0 ec=28/15 lis/c=36/36 les/c/f=37/37/0 sis=41 pruub=11.012468338s) [2] r=-1 lpr=41 pi=[36,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.785942078s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:29 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 41 pg[2.13( empty local-lis/les=36/37 n=0 ec=28/15 lis/c=36/36 les/c/f=37/37/0 sis=41 pruub=11.012263298s) [2] r=-1 lpr=41 pi=[36,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.785774231s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:29 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 41 pg[2.10( empty local-lis/les=36/37 n=0 ec=28/15 lis/c=36/36 les/c/f=37/37/0 sis=41 pruub=11.012434006s) [2] r=-1 lpr=41 pi=[36,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.785942078s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:29 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 41 pg[2.13( empty local-lis/les=36/37 n=0 ec=28/15 lis/c=36/36 les/c/f=37/37/0 sis=41 pruub=11.012245178s) [2] r=-1 lpr=41 pi=[36,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.785774231s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:29 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 41 pg[4.14( empty local-lis/les=30/31 n=0 ec=30/18 lis/c=30/30 les/c/f=31/31/0 sis=41 pruub=9.109258652s) [2] r=-1 lpr=41 pi=[30,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.882804871s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:29 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 41 pg[4.14( empty local-lis/les=30/31 n=0 ec=30/18 lis/c=30/30 les/c/f=31/31/0 sis=41 pruub=9.109245300s) [2] r=-1 lpr=41 pi=[30,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.882804871s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:29 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 41 pg[2.15( empty local-lis/les=36/37 n=0 ec=28/15 lis/c=36/36 les/c/f=37/37/0 sis=41 pruub=11.012140274s) [2] r=-1 lpr=41 pi=[36,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.785766602s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:29 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 41 pg[2.15( empty local-lis/les=36/37 n=0 ec=28/15 lis/c=36/36 les/c/f=37/37/0 sis=41 pruub=11.012126923s) [2] r=-1 lpr=41 pi=[36,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.785766602s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:29 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 41 pg[5.12( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=41 pruub=9.108655930s) [2] r=-1 lpr=41 pi=[30,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.882324219s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:29 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 41 pg[5.12( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=41 pruub=9.108635902s) [2] r=-1 lpr=41 pi=[30,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.882324219s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:29 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 41 pg[5.13( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=41 pruub=9.108658791s) [2] r=-1 lpr=41 pi=[30,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.882385254s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:29 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 41 pg[5.13( empty local-lis/les=30/31 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=41 pruub=9.108644485s) [2] r=-1 lpr=41 pi=[30,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.882385254s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:29 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 41 pg[7.1d( empty local-lis/les=36/37 n=0 ec=32/24 lis/c=36/36 les/c/f=37/37/0 sis=41 pruub=11.011913300s) [2] r=-1 lpr=41 pi=[36,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.785766602s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:58:29 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 41 pg[7.1d( empty local-lis/les=36/37 n=0 ec=32/24 lis/c=36/36 les/c/f=37/37/0 sis=41 pruub=11.011893272s) [2] r=-1 lpr=41 pi=[36,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.785766602s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:58:29 compute-0 sudo[91016]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:58:29 compute-0 sudo[91016]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:58:29 compute-0 sudo[91016]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:29 compute-0 sudo[91042]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-e399cf45-e6b6-5393-99f1-75c601d3f188/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Jan 20 13:58:29 compute-0 sudo[91042]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:58:29 compute-0 ceph-mgr[74653]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/config/ceph.conf
Jan 20 13:58:29 compute-0 ceph-mgr[74653]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/config/ceph.conf
Jan 20 13:58:29 compute-0 sudo[91042]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:29 compute-0 ceph-mgr[74653]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/config/ceph.conf
Jan 20 13:58:29 compute-0 ceph-mgr[74653]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/config/ceph.conf
Jan 20 13:58:29 compute-0 ceph-mgr[74653]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/config/ceph.conf
Jan 20 13:58:29 compute-0 ceph-mgr[74653]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/config/ceph.conf
Jan 20 13:58:29 compute-0 sudo[91067]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:58:29 compute-0 sudo[91067]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:58:29 compute-0 sudo[91067]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:29 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 3.17 scrub starts
Jan 20 13:58:29 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 3.17 scrub ok
Jan 20 13:58:29 compute-0 sudo[91093]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/config
Jan 20 13:58:29 compute-0 sudo[91093]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:58:29 compute-0 sudo[91093]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:29 compute-0 sudo[91119]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:58:29 compute-0 sudo[91119]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:58:29 compute-0 sudo[91119]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:30 compute-0 sudo[91148]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-e399cf45-e6b6-5393-99f1-75c601d3f188/var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/config
Jan 20 13:58:30 compute-0 sudo[91148]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:58:30 compute-0 sudo[91148]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:30 compute-0 sudo[91188]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:58:30 compute-0 sudo[91188]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:58:30 compute-0 sudo[91188]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:30 compute-0 sudo[91213]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-e399cf45-e6b6-5393-99f1-75c601d3f188/var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/config/ceph.conf.new
Jan 20 13:58:30 compute-0 sudo[91213]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:58:30 compute-0 sudo[91213]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:30 compute-0 ceph-mgr[74653]: log_channel(audit) log [DBG] : from='client.14310 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 20 13:58:30 compute-0 interesting_kilby[91012]: 
Jan 20 13:58:30 compute-0 interesting_kilby[91012]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Jan 20 13:58:30 compute-0 sudo[91238]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:58:30 compute-0 sshd-session[91072]: Connection closed by authenticating user root 157.245.78.139 port 57518 [preauth]
Jan 20 13:58:30 compute-0 sudo[91238]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:58:30 compute-0 sudo[91238]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:30 compute-0 systemd[1]: libpod-05f79776d3677d12c0b2212c47e92a90aeafe35b1664ecf672550df4407e7ec9.scope: Deactivated successfully.
Jan 20 13:58:30 compute-0 podman[90962]: 2026-01-20 13:58:30.230962601 +0000 UTC m=+0.682783656 container died 05f79776d3677d12c0b2212c47e92a90aeafe35b1664ecf672550df4407e7ec9 (image=quay.io/ceph/ceph:v18, name=interesting_kilby, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 13:58:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-f193a0e08ee6e6ff20fa57d2dd5ccd87187fd9372e36bbadc7c8317e6a441c34-merged.mount: Deactivated successfully.
Jan 20 13:58:30 compute-0 podman[90962]: 2026-01-20 13:58:30.26782813 +0000 UTC m=+0.719649145 container remove 05f79776d3677d12c0b2212c47e92a90aeafe35b1664ecf672550df4407e7ec9 (image=quay.io/ceph/ceph:v18, name=interesting_kilby, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Jan 20 13:58:30 compute-0 systemd[1]: libpod-conmon-05f79776d3677d12c0b2212c47e92a90aeafe35b1664ecf672550df4407e7ec9.scope: Deactivated successfully.
Jan 20 13:58:30 compute-0 ansible-async_wrapper.py[90896]: Module complete (90896)
Jan 20 13:58:30 compute-0 sudo[91266]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-e399cf45-e6b6-5393-99f1-75c601d3f188
Jan 20 13:58:30 compute-0 sudo[91266]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:58:30 compute-0 sudo[91266]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:30 compute-0 sudo[91301]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:58:30 compute-0 sudo[91301]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:58:30 compute-0 sudo[91301]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:30 compute-0 sudo[91326]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-e399cf45-e6b6-5393-99f1-75c601d3f188/var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/config/ceph.conf.new
Jan 20 13:58:30 compute-0 sudo[91326]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:58:30 compute-0 sudo[91326]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:30 compute-0 sudo[91397]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:58:30 compute-0 sudo[91397]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:58:30 compute-0 sudo[91397]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:30 compute-0 sudo[91444]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hpcpyxusmcfphkfnwhqealmtvrlxhwgv ; /usr/bin/python3'
Jan 20 13:58:30 compute-0 sudo[91444]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:58:30 compute-0 sudo[91447]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-e399cf45-e6b6-5393-99f1-75c601d3f188/var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/config/ceph.conf.new
Jan 20 13:58:30 compute-0 sudo[91447]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:58:30 compute-0 sudo[91447]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:30 compute-0 sudo[91473]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:58:30 compute-0 sudo[91473]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:58:30 compute-0 sudo[91473]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:30 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e41 do_prune osdmap full prune enabled
Jan 20 13:58:30 compute-0 python3[91448]: ansible-ansible.legacy.async_status Invoked with jid=j273501179630.90821 mode=status _async_dir=/root/.ansible_async
Jan 20 13:58:30 compute-0 ceph-mon[74360]: pgmap v111: 193 pgs: 193 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Jan 20 13:58:30 compute-0 ceph-mon[74360]: 2.8 scrub starts
Jan 20 13:58:30 compute-0 ceph-mon[74360]: 2.8 scrub ok
Jan 20 13:58:30 compute-0 ceph-mon[74360]: osd.2 [v2:192.168.122.102:6800/3188109873,v1:192.168.122.102:6801/3188109873] boot
Jan 20 13:58:30 compute-0 ceph-mon[74360]: osdmap e41: 3 total, 3 up, 3 in
Jan 20 13:58:30 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 20 13:58:30 compute-0 ceph-mon[74360]: Updating compute-2:/var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/config/ceph.conf
Jan 20 13:58:30 compute-0 ceph-mon[74360]: Updating compute-0:/var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/config/ceph.conf
Jan 20 13:58:30 compute-0 ceph-mon[74360]: Updating compute-1:/var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/config/ceph.conf
Jan 20 13:58:30 compute-0 ceph-mon[74360]: 3.17 scrub starts
Jan 20 13:58:30 compute-0 ceph-mon[74360]: 3.17 scrub ok
Jan 20 13:58:30 compute-0 sudo[91444]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:30 compute-0 sudo[91498]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-e399cf45-e6b6-5393-99f1-75c601d3f188/var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/config/ceph.conf.new
Jan 20 13:58:30 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e42 e42: 3 total, 3 up, 3 in
Jan 20 13:58:30 compute-0 sudo[91498]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:58:30 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e42: 3 total, 3 up, 3 in
Jan 20 13:58:30 compute-0 sudo[91498]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:30 compute-0 sudo[91523]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:58:30 compute-0 sudo[91523]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:58:30 compute-0 sudo[91523]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:30 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 3.18 deep-scrub starts
Jan 20 13:58:30 compute-0 sudo[91571]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-e399cf45-e6b6-5393-99f1-75c601d3f188/var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/config/ceph.conf.new /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/config/ceph.conf
Jan 20 13:58:30 compute-0 sudo[91571]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:58:30 compute-0 sudo[91617]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ustxjscgvdgdnaxwrdposxwzuuyzspfu ; /usr/bin/python3'
Jan 20 13:58:30 compute-0 sudo[91571]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:30 compute-0 sudo[91617]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:58:30 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 3.18 deep-scrub ok
Jan 20 13:58:30 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 13:58:30 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:30 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 13:58:30 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 20 13:58:30 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:30 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:30 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 20 13:58:30 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:30 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 20 13:58:30 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:30 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 20 13:58:30 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:30 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 20 13:58:30 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:30 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev e39ba77b-9f3d-485a-8574-76752f01a291 does not exist
Jan 20 13:58:30 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 10e316b9-dbbc-4092-a2a6-351188e7bc19 does not exist
Jan 20 13:58:30 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev fa02b084-99ca-4e99-b456-70c8723a819e does not exist
Jan 20 13:58:30 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 20 13:58:30 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 13:58:30 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 20 13:58:30 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 13:58:30 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 13:58:30 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 13:58:30 compute-0 python3[91621]: ansible-ansible.legacy.async_status Invoked with jid=j273501179630.90821 mode=cleanup _async_dir=/root/.ansible_async
Jan 20 13:58:30 compute-0 sudo[91617]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:31 compute-0 sudo[91622]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:58:31 compute-0 sudo[91622]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:58:31 compute-0 sudo[91622]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:31 compute-0 sudo[91647]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 13:58:31 compute-0 sudo[91647]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:58:31 compute-0 sudo[91647]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:31 compute-0 sudo[91672]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:58:31 compute-0 sudo[91672]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:58:31 compute-0 sudo[91672]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:31 compute-0 sudo[91697]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 13:58:31 compute-0 sudo[91697]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:58:31 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v114: 193 pgs: 28 peering, 165 active+clean; 449 KiB data, 480 MiB used, 21 GiB / 21 GiB avail
Jan 20 13:58:31 compute-0 sudo[91773]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-srlatktclrarrdptbdqobrbnxabjqtoc ; /usr/bin/python3'
Jan 20 13:58:31 compute-0 sudo[91773]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:58:31 compute-0 podman[91786]: 2026-01-20 13:58:31.533718619 +0000 UTC m=+0.057456838 container create 6ee25cef5794493014c228f350bebb0cbf36c5370962a7f739c72b3d18adf632 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_raman, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 20 13:58:31 compute-0 systemd[1]: Started libpod-conmon-6ee25cef5794493014c228f350bebb0cbf36c5370962a7f739c72b3d18adf632.scope.
Jan 20 13:58:31 compute-0 systemd[1]: Started libcrun container.
Jan 20 13:58:31 compute-0 python3[91780]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 13:58:31 compute-0 podman[91786]: 2026-01-20 13:58:31.508822358 +0000 UTC m=+0.032560587 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 13:58:31 compute-0 podman[91786]: 2026-01-20 13:58:31.61367453 +0000 UTC m=+0.137412769 container init 6ee25cef5794493014c228f350bebb0cbf36c5370962a7f739c72b3d18adf632 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_raman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 13:58:31 compute-0 podman[91786]: 2026-01-20 13:58:31.625034486 +0000 UTC m=+0.148772705 container start 6ee25cef5794493014c228f350bebb0cbf36c5370962a7f739c72b3d18adf632 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_raman, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 13:58:31 compute-0 podman[91786]: 2026-01-20 13:58:31.628646483 +0000 UTC m=+0.152384702 container attach 6ee25cef5794493014c228f350bebb0cbf36c5370962a7f739c72b3d18adf632 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_raman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 13:58:31 compute-0 gallant_raman[91802]: 167 167
Jan 20 13:58:31 compute-0 systemd[1]: libpod-6ee25cef5794493014c228f350bebb0cbf36c5370962a7f739c72b3d18adf632.scope: Deactivated successfully.
Jan 20 13:58:31 compute-0 podman[91786]: 2026-01-20 13:58:31.630134523 +0000 UTC m=+0.153872722 container died 6ee25cef5794493014c228f350bebb0cbf36c5370962a7f739c72b3d18adf632 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_raman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 20 13:58:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-79f9606684a1ab20e90741e31e092229b439d7ab50af531e3947c271e3ed20df-merged.mount: Deactivated successfully.
Jan 20 13:58:31 compute-0 podman[91805]: 2026-01-20 13:58:31.656143322 +0000 UTC m=+0.046511622 container create 631a4181f1d7706a95e5904284bdfd0946c546a1ed274748fcdcd2c71a2520b7 (image=quay.io/ceph/ceph:v18, name=inspiring_volhard, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 13:58:31 compute-0 podman[91786]: 2026-01-20 13:58:31.672743119 +0000 UTC m=+0.196481328 container remove 6ee25cef5794493014c228f350bebb0cbf36c5370962a7f739c72b3d18adf632 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_raman, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 20 13:58:31 compute-0 systemd[1]: libpod-conmon-6ee25cef5794493014c228f350bebb0cbf36c5370962a7f739c72b3d18adf632.scope: Deactivated successfully.
Jan 20 13:58:31 compute-0 ceph-mon[74360]: from='client.14310 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 20 13:58:31 compute-0 ceph-mon[74360]: osdmap e42: 3 total, 3 up, 3 in
Jan 20 13:58:31 compute-0 ceph-mon[74360]: 3.18 deep-scrub starts
Jan 20 13:58:31 compute-0 ceph-mon[74360]: 3.18 deep-scrub ok
Jan 20 13:58:31 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:31 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:31 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:31 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:31 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:31 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:31 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:31 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 13:58:31 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 13:58:31 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 13:58:31 compute-0 systemd[1]: Started libpod-conmon-631a4181f1d7706a95e5904284bdfd0946c546a1ed274748fcdcd2c71a2520b7.scope.
Jan 20 13:58:31 compute-0 podman[91805]: 2026-01-20 13:58:31.631311765 +0000 UTC m=+0.021680145 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 20 13:58:31 compute-0 systemd[1]: Started libcrun container.
Jan 20 13:58:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd59fca0ba2fc6a7730d105dac531fa2e5e288c608ae72b906bd64a5aa7ffc51/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 13:58:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd59fca0ba2fc6a7730d105dac531fa2e5e288c608ae72b906bd64a5aa7ffc51/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 13:58:31 compute-0 podman[91805]: 2026-01-20 13:58:31.770520341 +0000 UTC m=+0.160888661 container init 631a4181f1d7706a95e5904284bdfd0946c546a1ed274748fcdcd2c71a2520b7 (image=quay.io/ceph/ceph:v18, name=inspiring_volhard, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 13:58:31 compute-0 podman[91805]: 2026-01-20 13:58:31.776605895 +0000 UTC m=+0.166974245 container start 631a4181f1d7706a95e5904284bdfd0946c546a1ed274748fcdcd2c71a2520b7 (image=quay.io/ceph/ceph:v18, name=inspiring_volhard, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 13:58:31 compute-0 podman[91805]: 2026-01-20 13:58:31.781309331 +0000 UTC m=+0.171677651 container attach 631a4181f1d7706a95e5904284bdfd0946c546a1ed274748fcdcd2c71a2520b7 (image=quay.io/ceph/ceph:v18, name=inspiring_volhard, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 20 13:58:31 compute-0 podman[91844]: 2026-01-20 13:58:31.884898179 +0000 UTC m=+0.036581716 container create d9e2574f0d05a5752bdefd300f11d99dd2c733b297a971d320a13874202d13c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_swartz, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 13:58:31 compute-0 systemd[1]: Started libpod-conmon-d9e2574f0d05a5752bdefd300f11d99dd2c733b297a971d320a13874202d13c1.scope.
Jan 20 13:58:31 compute-0 systemd[1]: Started libcrun container.
Jan 20 13:58:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e45b35ae7291b865655f5fd684097812ef66520ccd79adbbe77f7f08a7dbac7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 13:58:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e45b35ae7291b865655f5fd684097812ef66520ccd79adbbe77f7f08a7dbac7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 13:58:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e45b35ae7291b865655f5fd684097812ef66520ccd79adbbe77f7f08a7dbac7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 13:58:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e45b35ae7291b865655f5fd684097812ef66520ccd79adbbe77f7f08a7dbac7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 13:58:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e45b35ae7291b865655f5fd684097812ef66520ccd79adbbe77f7f08a7dbac7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 13:58:31 compute-0 podman[91844]: 2026-01-20 13:58:31.869445693 +0000 UTC m=+0.021129250 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 13:58:31 compute-0 podman[91844]: 2026-01-20 13:58:31.991209469 +0000 UTC m=+0.142893046 container init d9e2574f0d05a5752bdefd300f11d99dd2c733b297a971d320a13874202d13c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_swartz, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 13:58:32 compute-0 podman[91844]: 2026-01-20 13:58:32.003253563 +0000 UTC m=+0.154937110 container start d9e2574f0d05a5752bdefd300f11d99dd2c733b297a971d320a13874202d13c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_swartz, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 13:58:32 compute-0 podman[91844]: 2026-01-20 13:58:32.006916612 +0000 UTC m=+0.158600199 container attach d9e2574f0d05a5752bdefd300f11d99dd2c733b297a971d320a13874202d13c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_swartz, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 20 13:58:32 compute-0 ceph-mgr[74653]: log_channel(audit) log [DBG] : from='client.14316 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 20 13:58:32 compute-0 inspiring_volhard[91834]: 
Jan 20 13:58:32 compute-0 inspiring_volhard[91834]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Jan 20 13:58:32 compute-0 systemd[1]: libpod-631a4181f1d7706a95e5904284bdfd0946c546a1ed274748fcdcd2c71a2520b7.scope: Deactivated successfully.
Jan 20 13:58:32 compute-0 podman[91805]: 2026-01-20 13:58:32.344999999 +0000 UTC m=+0.735368349 container died 631a4181f1d7706a95e5904284bdfd0946c546a1ed274748fcdcd2c71a2520b7 (image=quay.io/ceph/ceph:v18, name=inspiring_volhard, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 13:58:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-cd59fca0ba2fc6a7730d105dac531fa2e5e288c608ae72b906bd64a5aa7ffc51-merged.mount: Deactivated successfully.
Jan 20 13:58:32 compute-0 podman[91805]: 2026-01-20 13:58:32.397680567 +0000 UTC m=+0.788048877 container remove 631a4181f1d7706a95e5904284bdfd0946c546a1ed274748fcdcd2c71a2520b7 (image=quay.io/ceph/ceph:v18, name=inspiring_volhard, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 20 13:58:32 compute-0 systemd[1]: libpod-conmon-631a4181f1d7706a95e5904284bdfd0946c546a1ed274748fcdcd2c71a2520b7.scope: Deactivated successfully.
Jan 20 13:58:32 compute-0 sudo[91773]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:32 compute-0 ceph-mon[74360]: pgmap v114: 193 pgs: 28 peering, 165 active+clean; 449 KiB data, 480 MiB used, 21 GiB / 21 GiB avail
Jan 20 13:58:32 compute-0 ceph-mon[74360]: 3.1b scrub starts
Jan 20 13:58:32 compute-0 ceph-mon[74360]: 3.1b scrub ok
Jan 20 13:58:32 compute-0 intelligent_swartz[91860]: --> passed data devices: 0 physical, 1 LVM
Jan 20 13:58:32 compute-0 intelligent_swartz[91860]: --> relative data size: 1.0
Jan 20 13:58:32 compute-0 intelligent_swartz[91860]: --> All data devices are unavailable
Jan 20 13:58:32 compute-0 systemd[1]: libpod-d9e2574f0d05a5752bdefd300f11d99dd2c733b297a971d320a13874202d13c1.scope: Deactivated successfully.
Jan 20 13:58:32 compute-0 podman[91844]: 2026-01-20 13:58:32.815692246 +0000 UTC m=+0.967375783 container died d9e2574f0d05a5752bdefd300f11d99dd2c733b297a971d320a13874202d13c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_swartz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 13:58:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-4e45b35ae7291b865655f5fd684097812ef66520ccd79adbbe77f7f08a7dbac7-merged.mount: Deactivated successfully.
Jan 20 13:58:32 compute-0 podman[91844]: 2026-01-20 13:58:32.868863877 +0000 UTC m=+1.020547414 container remove d9e2574f0d05a5752bdefd300f11d99dd2c733b297a971d320a13874202d13c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_swartz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 20 13:58:32 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e42 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 13:58:32 compute-0 systemd[1]: libpod-conmon-d9e2574f0d05a5752bdefd300f11d99dd2c733b297a971d320a13874202d13c1.scope: Deactivated successfully.
Jan 20 13:58:32 compute-0 sudo[91697]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:32 compute-0 sudo[91924]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:58:32 compute-0 sudo[91924]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:58:32 compute-0 sudo[91924]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:33 compute-0 sudo[91949]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 13:58:33 compute-0 sudo[91949]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:58:33 compute-0 sudo[91949]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:33 compute-0 sudo[91974]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:58:33 compute-0 sudo[91974]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:58:33 compute-0 sudo[91974]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:33 compute-0 sudo[92022]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bpmnznqtqztrfcqizkmdozeoltuiqkfz ; /usr/bin/python3'
Jan 20 13:58:33 compute-0 sudo[92022]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:58:33 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v115: 193 pgs: 28 peering, 165 active+clean; 449 KiB data, 480 MiB used, 21 GiB / 21 GiB avail
Jan 20 13:58:33 compute-0 sudo[92023]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- lvm list --format json
Jan 20 13:58:33 compute-0 sudo[92023]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:58:33 compute-0 python3[92036]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ls --export -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 13:58:33 compute-0 podman[92050]: 2026-01-20 13:58:33.431934648 +0000 UTC m=+0.060126589 container create 9eeef334a8e25db9d287237c209b8cc2153f5d14814375ed677e4f908ecaa6c4 (image=quay.io/ceph/ceph:v18, name=friendly_napier, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 20 13:58:33 compute-0 systemd[1]: Started libpod-conmon-9eeef334a8e25db9d287237c209b8cc2153f5d14814375ed677e4f908ecaa6c4.scope.
Jan 20 13:58:33 compute-0 systemd[1]: Started libcrun container.
Jan 20 13:58:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12febee0d79235ed8f281dfdc84c296113103e220ca7b138a5dca76bcc72ad58/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 13:58:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12febee0d79235ed8f281dfdc84c296113103e220ca7b138a5dca76bcc72ad58/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 13:58:33 compute-0 podman[92050]: 2026-01-20 13:58:33.409135414 +0000 UTC m=+0.037327365 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 20 13:58:33 compute-0 podman[92050]: 2026-01-20 13:58:33.514630283 +0000 UTC m=+0.142822284 container init 9eeef334a8e25db9d287237c209b8cc2153f5d14814375ed677e4f908ecaa6c4 (image=quay.io/ceph/ceph:v18, name=friendly_napier, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 13:58:33 compute-0 podman[92050]: 2026-01-20 13:58:33.525484556 +0000 UTC m=+0.153676487 container start 9eeef334a8e25db9d287237c209b8cc2153f5d14814375ed677e4f908ecaa6c4 (image=quay.io/ceph/ceph:v18, name=friendly_napier, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default)
Jan 20 13:58:33 compute-0 podman[92050]: 2026-01-20 13:58:33.529040331 +0000 UTC m=+0.157232272 container attach 9eeef334a8e25db9d287237c209b8cc2153f5d14814375ed677e4f908ecaa6c4 (image=quay.io/ceph/ceph:v18, name=friendly_napier, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 13:58:33 compute-0 podman[92109]: 2026-01-20 13:58:33.649763919 +0000 UTC m=+0.042395471 container create 5cdd98377ec24fdc7034ee7e478c8b8ec8a7f65d65cbce3dc620279e43bf1e8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_jackson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 20 13:58:33 compute-0 systemd[1]: Started libpod-conmon-5cdd98377ec24fdc7034ee7e478c8b8ec8a7f65d65cbce3dc620279e43bf1e8c.scope.
Jan 20 13:58:33 compute-0 podman[92109]: 2026-01-20 13:58:33.631301043 +0000 UTC m=+0.023932625 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 13:58:33 compute-0 systemd[1]: Started libcrun container.
Jan 20 13:58:33 compute-0 ceph-mon[74360]: from='client.14316 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 20 13:58:33 compute-0 ceph-mon[74360]: 2.11 scrub starts
Jan 20 13:58:33 compute-0 ceph-mon[74360]: 2.11 scrub ok
Jan 20 13:58:33 compute-0 podman[92109]: 2026-01-20 13:58:33.743791089 +0000 UTC m=+0.136422661 container init 5cdd98377ec24fdc7034ee7e478c8b8ec8a7f65d65cbce3dc620279e43bf1e8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_jackson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 20 13:58:33 compute-0 podman[92109]: 2026-01-20 13:58:33.756278815 +0000 UTC m=+0.148910357 container start 5cdd98377ec24fdc7034ee7e478c8b8ec8a7f65d65cbce3dc620279e43bf1e8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_jackson, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 13:58:33 compute-0 podman[92109]: 2026-01-20 13:58:33.760194211 +0000 UTC m=+0.152825753 container attach 5cdd98377ec24fdc7034ee7e478c8b8ec8a7f65d65cbce3dc620279e43bf1e8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_jackson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 13:58:33 compute-0 naughty_jackson[92126]: 167 167
Jan 20 13:58:33 compute-0 systemd[1]: libpod-5cdd98377ec24fdc7034ee7e478c8b8ec8a7f65d65cbce3dc620279e43bf1e8c.scope: Deactivated successfully.
Jan 20 13:58:33 compute-0 podman[92109]: 2026-01-20 13:58:33.762421011 +0000 UTC m=+0.155052593 container died 5cdd98377ec24fdc7034ee7e478c8b8ec8a7f65d65cbce3dc620279e43bf1e8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_jackson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 20 13:58:33 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 3.19 scrub starts
Jan 20 13:58:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-abc25591b768c04ad7f4a32b97b98e09fcb593d4beb25c601ab38dda167c4717-merged.mount: Deactivated successfully.
Jan 20 13:58:33 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 3.19 scrub ok
Jan 20 13:58:33 compute-0 podman[92109]: 2026-01-20 13:58:33.805410838 +0000 UTC m=+0.198042400 container remove 5cdd98377ec24fdc7034ee7e478c8b8ec8a7f65d65cbce3dc620279e43bf1e8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_jackson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 13:58:33 compute-0 systemd[1]: libpod-conmon-5cdd98377ec24fdc7034ee7e478c8b8ec8a7f65d65cbce3dc620279e43bf1e8c.scope: Deactivated successfully.
Jan 20 13:58:33 compute-0 podman[92168]: 2026-01-20 13:58:33.994144106 +0000 UTC m=+0.051941258 container create 6867c0d91793ecb4ef2e448abbffba9c8691e341de3e74403047faa991666865 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_yalow, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True)
Jan 20 13:58:34 compute-0 systemd[1]: Started libpod-conmon-6867c0d91793ecb4ef2e448abbffba9c8691e341de3e74403047faa991666865.scope.
Jan 20 13:58:34 compute-0 systemd[1]: Started libcrun container.
Jan 20 13:58:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b81ebbc16da9131e19f2421a4739139aecbb224208e404855e30f53d737be62/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 13:58:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b81ebbc16da9131e19f2421a4739139aecbb224208e404855e30f53d737be62/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 13:58:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b81ebbc16da9131e19f2421a4739139aecbb224208e404855e30f53d737be62/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 13:58:34 compute-0 podman[92168]: 2026-01-20 13:58:33.968697192 +0000 UTC m=+0.026494794 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 13:58:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b81ebbc16da9131e19f2421a4739139aecbb224208e404855e30f53d737be62/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 13:58:34 compute-0 ceph-mgr[74653]: log_channel(audit) log [DBG] : from='client.14322 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 20 13:58:34 compute-0 friendly_napier[92087]: 
Jan 20 13:58:34 compute-0 friendly_napier[92087]: [{"placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash"}, {"placement": {"count": 2}, "service_id": "rgw.default", "service_name": "ingress.rgw.default", "service_type": "ingress", "spec": {"backend_service": "rgw.rgw", "first_virtual_router_id": 50, "frontend_port": 8080, "monitor_port": 8999, "virtual_interface_networks": ["192.168.122.0/24"], "virtual_ip": "192.168.122.2/24"}}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "cephfs", "service_name": "mds.cephfs", "service_type": "mds"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_name": "mgr", "service_type": "mgr"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_name": "mon", "service_type": "mon"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "default_drive_group", "service_name": "osd.default_drive_group", "service_type": "osd", "spec": {"data_devices": {"paths": ["/dev/ceph_vg0/ceph_lv0"]}, "filter_logic": "AND", "objectstore": "bluestore"}}, {"networks": ["192.168.122.0/24"], "placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "rgw", "service_name": "rgw.rgw", "service_type": "rgw", "spec": {"rgw_frontend_port": 8082}}]
Jan 20 13:58:34 compute-0 podman[92168]: 2026-01-20 13:58:34.077923051 +0000 UTC m=+0.135720283 container init 6867c0d91793ecb4ef2e448abbffba9c8691e341de3e74403047faa991666865 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_yalow, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 13:58:34 compute-0 podman[92168]: 2026-01-20 13:58:34.087817197 +0000 UTC m=+0.145614379 container start 6867c0d91793ecb4ef2e448abbffba9c8691e341de3e74403047faa991666865 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_yalow, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 20 13:58:34 compute-0 systemd[1]: libpod-9eeef334a8e25db9d287237c209b8cc2153f5d14814375ed677e4f908ecaa6c4.scope: Deactivated successfully.
Jan 20 13:58:34 compute-0 podman[92050]: 2026-01-20 13:58:34.092945565 +0000 UTC m=+0.721137466 container died 9eeef334a8e25db9d287237c209b8cc2153f5d14814375ed677e4f908ecaa6c4 (image=quay.io/ceph/ceph:v18, name=friendly_napier, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 13:58:34 compute-0 podman[92168]: 2026-01-20 13:58:34.093039938 +0000 UTC m=+0.150837130 container attach 6867c0d91793ecb4ef2e448abbffba9c8691e341de3e74403047faa991666865 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_yalow, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 13:58:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-12febee0d79235ed8f281dfdc84c296113103e220ca7b138a5dca76bcc72ad58-merged.mount: Deactivated successfully.
Jan 20 13:58:34 compute-0 podman[92050]: 2026-01-20 13:58:34.148081469 +0000 UTC m=+0.776273400 container remove 9eeef334a8e25db9d287237c209b8cc2153f5d14814375ed677e4f908ecaa6c4 (image=quay.io/ceph/ceph:v18, name=friendly_napier, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 20 13:58:34 compute-0 systemd[1]: libpod-conmon-9eeef334a8e25db9d287237c209b8cc2153f5d14814375ed677e4f908ecaa6c4.scope: Deactivated successfully.
Jan 20 13:58:34 compute-0 sudo[92022]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:34 compute-0 ansible-async_wrapper.py[90895]: Done in kid B.
Jan 20 13:58:34 compute-0 ceph-mon[74360]: pgmap v115: 193 pgs: 28 peering, 165 active+clean; 449 KiB data, 480 MiB used, 21 GiB / 21 GiB avail
Jan 20 13:58:34 compute-0 ceph-mon[74360]: 2.14 scrub starts
Jan 20 13:58:34 compute-0 ceph-mon[74360]: 2.14 scrub ok
Jan 20 13:58:34 compute-0 ceph-mon[74360]: 3.19 scrub starts
Jan 20 13:58:34 compute-0 ceph-mon[74360]: 3.19 scrub ok
Jan 20 13:58:34 compute-0 elated_yalow[92184]: {
Jan 20 13:58:34 compute-0 elated_yalow[92184]:     "0": [
Jan 20 13:58:34 compute-0 elated_yalow[92184]:         {
Jan 20 13:58:34 compute-0 elated_yalow[92184]:             "devices": [
Jan 20 13:58:34 compute-0 elated_yalow[92184]:                 "/dev/loop3"
Jan 20 13:58:34 compute-0 elated_yalow[92184]:             ],
Jan 20 13:58:34 compute-0 elated_yalow[92184]:             "lv_name": "ceph_lv0",
Jan 20 13:58:34 compute-0 elated_yalow[92184]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 13:58:34 compute-0 elated_yalow[92184]:             "lv_size": "7511998464",
Jan 20 13:58:34 compute-0 elated_yalow[92184]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e399cf45-e6b6-5393-99f1-75c601d3f188,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afc3740a-ccee-46f8-b343-22648cc89376,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 20 13:58:34 compute-0 elated_yalow[92184]:             "lv_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 13:58:34 compute-0 elated_yalow[92184]:             "name": "ceph_lv0",
Jan 20 13:58:34 compute-0 elated_yalow[92184]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 13:58:34 compute-0 elated_yalow[92184]:             "tags": {
Jan 20 13:58:34 compute-0 elated_yalow[92184]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 13:58:34 compute-0 elated_yalow[92184]:                 "ceph.block_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 13:58:34 compute-0 elated_yalow[92184]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 13:58:34 compute-0 elated_yalow[92184]:                 "ceph.cluster_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 13:58:34 compute-0 elated_yalow[92184]:                 "ceph.cluster_name": "ceph",
Jan 20 13:58:34 compute-0 elated_yalow[92184]:                 "ceph.crush_device_class": "",
Jan 20 13:58:34 compute-0 elated_yalow[92184]:                 "ceph.encrypted": "0",
Jan 20 13:58:34 compute-0 elated_yalow[92184]:                 "ceph.osd_fsid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 13:58:34 compute-0 elated_yalow[92184]:                 "ceph.osd_id": "0",
Jan 20 13:58:34 compute-0 elated_yalow[92184]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 13:58:34 compute-0 elated_yalow[92184]:                 "ceph.type": "block",
Jan 20 13:58:34 compute-0 elated_yalow[92184]:                 "ceph.vdo": "0"
Jan 20 13:58:34 compute-0 elated_yalow[92184]:             },
Jan 20 13:58:34 compute-0 elated_yalow[92184]:             "type": "block",
Jan 20 13:58:34 compute-0 elated_yalow[92184]:             "vg_name": "ceph_vg0"
Jan 20 13:58:34 compute-0 elated_yalow[92184]:         }
Jan 20 13:58:34 compute-0 elated_yalow[92184]:     ]
Jan 20 13:58:34 compute-0 elated_yalow[92184]: }
Jan 20 13:58:34 compute-0 systemd[1]: libpod-6867c0d91793ecb4ef2e448abbffba9c8691e341de3e74403047faa991666865.scope: Deactivated successfully.
Jan 20 13:58:34 compute-0 podman[92168]: 2026-01-20 13:58:34.8333688 +0000 UTC m=+0.891166022 container died 6867c0d91793ecb4ef2e448abbffba9c8691e341de3e74403047faa991666865 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_yalow, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Jan 20 13:58:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-7b81ebbc16da9131e19f2421a4739139aecbb224208e404855e30f53d737be62-merged.mount: Deactivated successfully.
Jan 20 13:58:34 compute-0 podman[92168]: 2026-01-20 13:58:34.907268657 +0000 UTC m=+0.965065819 container remove 6867c0d91793ecb4ef2e448abbffba9c8691e341de3e74403047faa991666865 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_yalow, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 20 13:58:34 compute-0 systemd[1]: libpod-conmon-6867c0d91793ecb4ef2e448abbffba9c8691e341de3e74403047faa991666865.scope: Deactivated successfully.
Jan 20 13:58:34 compute-0 sudo[92023]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:35 compute-0 sudo[92254]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ijlqttidicrztkmkacdbhwlfwqzcdwvp ; /usr/bin/python3'
Jan 20 13:58:35 compute-0 sudo[92254]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:58:35 compute-0 sudo[92230]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:58:35 compute-0 sudo[92230]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:58:35 compute-0 sudo[92230]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:35 compute-0 sudo[92269]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 13:58:35 compute-0 sudo[92269]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:58:35 compute-0 sudo[92269]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:35 compute-0 python3[92266]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 13:58:35 compute-0 sudo[92294]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:58:35 compute-0 sudo[92294]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:58:35 compute-0 sudo[92294]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:35 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v116: 193 pgs: 193 active+clean; 449 KiB data, 480 MiB used, 21 GiB / 21 GiB avail
Jan 20 13:58:35 compute-0 podman[92317]: 2026-01-20 13:58:35.249367114 +0000 UTC m=+0.064131547 container create d9fc6a0a3dc5fffdf7096585a83186b06e7e420d56a3b344fa58b17e546f96d3 (image=quay.io/ceph/ceph:v18, name=fervent_wu, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 20 13:58:35 compute-0 systemd[1]: Started libpod-conmon-d9fc6a0a3dc5fffdf7096585a83186b06e7e420d56a3b344fa58b17e546f96d3.scope.
Jan 20 13:58:35 compute-0 sudo[92329]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- raw list --format json
Jan 20 13:58:35 compute-0 sudo[92329]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:58:35 compute-0 podman[92317]: 2026-01-20 13:58:35.222708746 +0000 UTC m=+0.037473189 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 20 13:58:35 compute-0 systemd[1]: Started libcrun container.
Jan 20 13:58:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2ae8865a5ca934263b7eb1c59eba36ee9f15bf1dcca3f63147ad2ed85a92d20/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 13:58:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2ae8865a5ca934263b7eb1c59eba36ee9f15bf1dcca3f63147ad2ed85a92d20/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 13:58:35 compute-0 podman[92317]: 2026-01-20 13:58:35.352853768 +0000 UTC m=+0.167618181 container init d9fc6a0a3dc5fffdf7096585a83186b06e7e420d56a3b344fa58b17e546f96d3 (image=quay.io/ceph/ceph:v18, name=fervent_wu, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 20 13:58:35 compute-0 podman[92317]: 2026-01-20 13:58:35.370505753 +0000 UTC m=+0.185270176 container start d9fc6a0a3dc5fffdf7096585a83186b06e7e420d56a3b344fa58b17e546f96d3 (image=quay.io/ceph/ceph:v18, name=fervent_wu, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 13:58:35 compute-0 podman[92317]: 2026-01-20 13:58:35.383342898 +0000 UTC m=+0.198107311 container attach d9fc6a0a3dc5fffdf7096585a83186b06e7e420d56a3b344fa58b17e546f96d3 (image=quay.io/ceph/ceph:v18, name=fervent_wu, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 20 13:58:35 compute-0 podman[92404]: 2026-01-20 13:58:35.719197826 +0000 UTC m=+0.067293472 container create f4f147563a10b9161e3c0d1107279910e13d93580793365890ef8a7a0c1c0e7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_chatelet, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 13:58:35 compute-0 systemd[1]: Started libpod-conmon-f4f147563a10b9161e3c0d1107279910e13d93580793365890ef8a7a0c1c0e7d.scope.
Jan 20 13:58:35 compute-0 podman[92404]: 2026-01-20 13:58:35.690203736 +0000 UTC m=+0.038299422 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 13:58:35 compute-0 systemd[1]: Started libcrun container.
Jan 20 13:58:35 compute-0 podman[92404]: 2026-01-20 13:58:35.807203364 +0000 UTC m=+0.155299050 container init f4f147563a10b9161e3c0d1107279910e13d93580793365890ef8a7a0c1c0e7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_chatelet, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 13:58:35 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 3.1e scrub starts
Jan 20 13:58:35 compute-0 podman[92404]: 2026-01-20 13:58:35.817685167 +0000 UTC m=+0.165780813 container start f4f147563a10b9161e3c0d1107279910e13d93580793365890ef8a7a0c1c0e7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_chatelet, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 20 13:58:35 compute-0 podman[92404]: 2026-01-20 13:58:35.821856949 +0000 UTC m=+0.169952645 container attach f4f147563a10b9161e3c0d1107279910e13d93580793365890ef8a7a0c1c0e7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_chatelet, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3)
Jan 20 13:58:35 compute-0 nifty_chatelet[92439]: 167 167
Jan 20 13:58:35 compute-0 systemd[1]: libpod-f4f147563a10b9161e3c0d1107279910e13d93580793365890ef8a7a0c1c0e7d.scope: Deactivated successfully.
Jan 20 13:58:35 compute-0 podman[92404]: 2026-01-20 13:58:35.823980186 +0000 UTC m=+0.172075842 container died f4f147563a10b9161e3c0d1107279910e13d93580793365890ef8a7a0c1c0e7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_chatelet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 13:58:35 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 3.1e scrub ok
Jan 20 13:58:35 compute-0 ceph-mon[74360]: from='client.14322 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 20 13:58:35 compute-0 ceph-mon[74360]: 2.16 scrub starts
Jan 20 13:58:35 compute-0 ceph-mon[74360]: 2.16 scrub ok
Jan 20 13:58:35 compute-0 ceph-mon[74360]: 4.1d deep-scrub starts
Jan 20 13:58:35 compute-0 ceph-mon[74360]: 4.1d deep-scrub ok
Jan 20 13:58:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-0960631bd4b2fe4c3a3028d599413a6ea313b0d30e8f11067c7dd7b07bd23e81-merged.mount: Deactivated successfully.
Jan 20 13:58:35 compute-0 podman[92404]: 2026-01-20 13:58:35.870510637 +0000 UTC m=+0.218606293 container remove f4f147563a10b9161e3c0d1107279910e13d93580793365890ef8a7a0c1c0e7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_chatelet, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 13:58:35 compute-0 systemd[1]: libpod-conmon-f4f147563a10b9161e3c0d1107279910e13d93580793365890ef8a7a0c1c0e7d.scope: Deactivated successfully.
Jan 20 13:58:35 compute-0 ceph-mgr[74653]: log_channel(audit) log [DBG] : from='client.14328 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 20 13:58:35 compute-0 fervent_wu[92359]: 
Jan 20 13:58:35 compute-0 fervent_wu[92359]: [{"container_id": "5c427c7f90de", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "0.63%", "created": "2026-01-20T13:56:17.599407Z", "daemon_id": "compute-0", "daemon_name": "crash.compute-0", "daemon_type": "crash", "events": ["2026-01-20T13:56:17.651683Z daemon:crash.compute-0 [INFO] \"Deployed crash.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-01-20T13:57:25.551692Z", "memory_usage": 11607736, "ports": [], "service_name": "crash", "started": "2026-01-20T13:56:17.492090Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-e399cf45-e6b6-5393-99f1-75c601d3f188@crash.compute-0", "version": "18.2.7"}, {"container_id": "718ebba7a543", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "0.73%", "created": "2026-01-20T13:57:03.240242Z", "daemon_id": "compute-1", "daemon_name": "crash.compute-1", "daemon_type": "crash", "events": ["2026-01-20T13:57:03.300137Z daemon:crash.compute-1 [INFO] \"Deployed crash.compute-1 on host 'compute-1'\""], "hostname": "compute-1", "is_active": false, "last_refresh": "2026-01-20T13:58:25.831299Z", "memory_usage": 11775508, "ports": [], "service_name": "crash", 
"started": "2026-01-20T13:57:02.815270Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-e399cf45-e6b6-5393-99f1-75c601d3f188@crash.compute-1", "version": "18.2.7"}, {"container_id": "b5aaabf9e81d", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "2.34%", "created": "2026-01-20T13:58:08.135385Z", "daemon_id": "compute-2", "daemon_name": "crash.compute-2", "daemon_type": "crash", "events": ["2026-01-20T13:58:08.278561Z daemon:crash.compute-2 [INFO] \"Deployed crash.compute-2 on host 'compute-2'\""], "hostname": "compute-2", "is_active": false, "last_refresh": "2026-01-20T13:58:26.027836Z", "memory_usage": 11660165, "ports": [], "service_name": "crash", "started": "2026-01-20T13:58:07.880268Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-e399cf45-e6b6-5393-99f1-75c601d3f188@crash.compute-2", "version": "18.2.7"}, {"container_id": "ad330ec75efb", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph:v18", "cpu_percentage": "36.42%", "created": "2026-01-20T13:55:09.810192Z", "daemon_id": "compute-0.wookjv", "daemon_name": "mgr.compute-0.wookjv", "daemon_type": "mgr", "events": ["2026-01-20T13:56:20.550899Z daemon:mgr.compute-0.wookjv [INFO] \"Reconfigured mgr.compute-0.wookjv on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": 
"2026-01-20T13:57:25.551633Z", "memory_usage": 547042099, "ports": [9283, 8765], "service_name": "mgr", "started": "2026-01-20T13:55:09.708205Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-e399cf45-e6b6-5393-99f1-75c601d3f188@mgr.compute-0.wookjv", "version": "18.2.7"}, {"container_id": "04b4edd09536", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "99.71%", "created": "2026-01-20T13:58:05.811096Z", "daemon_id": "compute-1.oweoeg", "daemon_name": "mgr.compute-1.oweoeg", "daemon_type": "mgr", "events": ["2026-01-20T13:58:05.876451Z daemon:mgr.compute-1.oweoeg [INFO] \"Deployed mgr.compute-1.oweoeg on host 'compute-1'\""], "hostname": "compute-1", "is_active": false, "last_refresh": "2026-01-20T13:58:25.831914Z", "memory_usage": 490628710, "ports": [8765], "service_name": "mgr", "started": "2026-01-20T13:58:05.664415Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-e399cf45-e6b6-5393-99f1-75c601d3f188@mgr.compute-1.oweoeg", "version": "18.2.7"}, {"container_id": "76da91143abb", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "75.15%", "created": "2026-01-20T13:57:59.129144Z", "daemon_id": "compute-2.gunjko", "daemon_name": "mgr.compute-2.gunjko", "daemon_type": 
"mgr", "events": ["2026-01-20T13:58:03.734146Z daemon:mgr.compute-2.gunjko [INFO] \"Deployed mgr.compute-2.gunjko on host 'compute-2'\""], "hostname": "compute-2", "is_active": false, "last_refresh": "2026-01-20T13:58:26.027772Z", "memory_usage": 510551654, "ports": [8765], "service_name": "mgr", "started": "2026-01-20T13:57:58.994598Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-e399cf45-e6b6-5393-99f1-75c601d3f188@mgr.compute-2.gunjko", "version": "18.2.7"}, {"container_id": "a602f19ce9ef", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph:v18", "cpu_percentage": "1.53%", "created": "2026-01-20T13:55:05.299073Z", "daemon_id": "compute-0", "daemon_name": "mon.compute-0", "daemon_type": "mon", "events": ["2026-01-20T13:56:19.840490Z daemon:mon.compute-0 [INFO] \"Reconfigured mon.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-01-20T13:57:25.551557Z", "memory_request": 2147483648, "memory_usage": 32830914, "ports": [], "service_name": "mon", "started": "2026-01-20T13:55:07.703598Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-e399cf45-e6b6-5393-99f1-75c601d3f188@mon.compute-0", "version": "18.2.7"}, {"container_id": "8b3e7cd2a573", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "1.97%", "created": 
"2026-01-20T13:57:54.489753Z", "daemon_id": "compute-1", "daemon_name": "mon.compute-1", "daemon_type": "mon", "events": ["2026-01-20T13:57:57.210188Z daemon:mon.compute-1 [INFO] \"Deployed mon.compute-1 on host 'compute-1'\""], "hostname": "compute-1", "is_active": false, "last_refresh": "2026-01-20T13:58:25.831765Z", "memory_request": 2147483648, "memory_usage": 28017950, "ports": [], "service_name": "mon", "started": "2026-01-20T13:57:54.336707Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-e399cf45-e6b6-5393-99f1-75c601d3f188@mon.compute-1", "version": "18.2.7"}, {"container_id": "6d4eaf8659f1", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "1.78%", "created": "2026-01-20T13:57:52.006954Z", "daemon_id": "compute-2", "daemon_name": "mon.compute-2", "daemon_type": "mon", "events": ["2026-01-20T13:57:52.052452Z daemon:mon.compute-2 [INFO] \"Deployed mon.compute-2 on host 'compute-2'\""], "hostname": "compute-2", "is_active": false, "last_refresh": "2026-01-20T13:58:26.027664Z", "memory_request": 2147483648, "memory_usage": 29003612, "ports": [], "service_name": "mon", "started": "2026-01-20T13:57:51.911185Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-e399cf45-e6b6-5393-99f1-75c601d3f188@mon.compute-2", "version": "18.2.7"}, {"container_id": "1bb19f8e00ae", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": 
"0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "6.96%", "created": "2026-01-20T13:57:15.331860Z", "daemon_id": "0", "daemon_name": "osd.0", "daemon_type": "osd", "events": ["2026-01-20T13:57:15.393628Z daemon:osd.0 [INFO] \"Deployed osd.0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-01-20T13:57:25.551749Z", "memory_request": 4294967296, "memory_usage": 55868129, "ports": [], "service_name": "osd.default_drive_group", "started": "2026-01-20T13:57:15.239350Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-e399cf45-e6b6-5393-99f1-75c601d3f188@osd.0", "version": "18.2.7"}, {"container_id": "2681b7d660cd", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "2.45%", "created": "2026-01-20T13:57:20.823740Z", "daemon_id": "1", "daemon_name": "osd.1", "daemon_type": "osd", "events": ["2026-01-20T13:57:20.924867Z daemon:osd.1 [INFO] \"Deployed osd.1 on host 'compute-1'\""], "hostname": "compute-1", "is_active": false, "last_refresh": "2026-01-20T13:58:25.831597Z", "memory_request": 5502921113, "memory_usage": 67874324, "ports": [], "service_name": "osd.default_drive_group", "started": "2026-01-20T13:57:20.703208Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-e399cf45-e6b6-5393-99f1-75c601d3f188@osd.1", "version": "18.2.7"}, {"container_id": "585a0e7d4bc7", "container_image_digests": 
["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "7.62%", "created": "2026-01-20T13:58:21.990755Z", "daemon_id": "2", "daemon_name": "osd.2", "daemon_type": "osd", "events": ["2026-01-20T13:58:22.054903Z daemon:osd.2 [INFO] \"Deployed osd.2 on host 'compute-2'\""], "hostname": "compute-2", "is_active": false, "last_refresh": "2026-01-20T13:58:26.027897Z", "memory_request": 4294967296, "memory_usage": 34277949, "ports": [], "service_name": "osd.default_drive_group", "started": "2026-01-20T13:58:21.862070Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-e399cf45-e6b6-5393-99f1-75c601d3f188@osd.2", "version": "18.2.7"}]
Jan 20 13:58:35 compute-0 systemd[1]: libpod-d9fc6a0a3dc5fffdf7096585a83186b06e7e420d56a3b344fa58b17e546f96d3.scope: Deactivated successfully.
Jan 20 13:58:35 compute-0 podman[92317]: 2026-01-20 13:58:35.924624894 +0000 UTC m=+0.739389307 container died d9fc6a0a3dc5fffdf7096585a83186b06e7e420d56a3b344fa58b17e546f96d3 (image=quay.io/ceph/ceph:v18, name=fervent_wu, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef)
Jan 20 13:58:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-d2ae8865a5ca934263b7eb1c59eba36ee9f15bf1dcca3f63147ad2ed85a92d20-merged.mount: Deactivated successfully.
Jan 20 13:58:35 compute-0 sshd-session[91837]: Connection closed by authenticating user root 159.223.5.14 port 37090 [preauth]
Jan 20 13:58:35 compute-0 podman[92317]: 2026-01-20 13:58:35.978970656 +0000 UTC m=+0.793735059 container remove d9fc6a0a3dc5fffdf7096585a83186b06e7e420d56a3b344fa58b17e546f96d3 (image=quay.io/ceph/ceph:v18, name=fervent_wu, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 13:58:35 compute-0 rsyslogd[1003]: message too long (12805) with configured size 8096, begin of message is: [{"container_id": "5c427c7f90de", "container_image_digests": ["quay.io/ceph/ceph [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 20 13:58:35 compute-0 systemd[1]: libpod-conmon-d9fc6a0a3dc5fffdf7096585a83186b06e7e420d56a3b344fa58b17e546f96d3.scope: Deactivated successfully.
Jan 20 13:58:36 compute-0 sudo[92254]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:36 compute-0 podman[92477]: 2026-01-20 13:58:36.095213684 +0000 UTC m=+0.064919327 container create 97e5c8c267e29995296fd96062d3fcf9f755d91c05f578be8ba06500dcdb10a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_gates, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS)
Jan 20 13:58:36 compute-0 systemd[1]: Started libpod-conmon-97e5c8c267e29995296fd96062d3fcf9f755d91c05f578be8ba06500dcdb10a2.scope.
Jan 20 13:58:36 compute-0 podman[92477]: 2026-01-20 13:58:36.06975964 +0000 UTC m=+0.039465353 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 13:58:36 compute-0 systemd[1]: Started libcrun container.
Jan 20 13:58:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ad6052fe115313107ad50d4ef8a7c69ab4eb4a1eb07ca02d95235890ba96926/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 13:58:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ad6052fe115313107ad50d4ef8a7c69ab4eb4a1eb07ca02d95235890ba96926/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 13:58:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ad6052fe115313107ad50d4ef8a7c69ab4eb4a1eb07ca02d95235890ba96926/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 13:58:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ad6052fe115313107ad50d4ef8a7c69ab4eb4a1eb07ca02d95235890ba96926/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 13:58:36 compute-0 podman[92477]: 2026-01-20 13:58:36.200036845 +0000 UTC m=+0.169742568 container init 97e5c8c267e29995296fd96062d3fcf9f755d91c05f578be8ba06500dcdb10a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_gates, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 13:58:36 compute-0 podman[92477]: 2026-01-20 13:58:36.216488518 +0000 UTC m=+0.186194191 container start 97e5c8c267e29995296fd96062d3fcf9f755d91c05f578be8ba06500dcdb10a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_gates, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 13:58:36 compute-0 podman[92477]: 2026-01-20 13:58:36.220828615 +0000 UTC m=+0.190534318 container attach 97e5c8c267e29995296fd96062d3fcf9f755d91c05f578be8ba06500dcdb10a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_gates, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 20 13:58:36 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 3.1f scrub starts
Jan 20 13:58:36 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 3.1f scrub ok
Jan 20 13:58:36 compute-0 ceph-mon[74360]: pgmap v116: 193 pgs: 193 active+clean; 449 KiB data, 480 MiB used, 21 GiB / 21 GiB avail
Jan 20 13:58:36 compute-0 ceph-mon[74360]: 3.1e scrub starts
Jan 20 13:58:36 compute-0 ceph-mon[74360]: 3.1e scrub ok
Jan 20 13:58:36 compute-0 ceph-mon[74360]: from='client.14328 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 20 13:58:36 compute-0 sudo[92523]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kvgvutqojururrqwiblbmxarmjbgdhhe ; /usr/bin/python3'
Jan 20 13:58:36 compute-0 sudo[92523]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:58:37 compute-0 python3[92525]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   -s -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 13:58:37 compute-0 great_gates[92493]: {
Jan 20 13:58:37 compute-0 great_gates[92493]:     "afc3740a-ccee-46f8-b343-22648cc89376": {
Jan 20 13:58:37 compute-0 great_gates[92493]:         "ceph_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 13:58:37 compute-0 great_gates[92493]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 20 13:58:37 compute-0 great_gates[92493]:         "osd_id": 0,
Jan 20 13:58:37 compute-0 great_gates[92493]:         "osd_uuid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 13:58:37 compute-0 great_gates[92493]:         "type": "bluestore"
Jan 20 13:58:37 compute-0 great_gates[92493]:     }
Jan 20 13:58:37 compute-0 great_gates[92493]: }
Jan 20 13:58:37 compute-0 podman[92535]: 2026-01-20 13:58:37.138979061 +0000 UTC m=+0.071852454 container create 5f4955b410062cc3a1f61716f93a6226220f342c740286eab1bb49c182619f04 (image=quay.io/ceph/ceph:v18, name=festive_banach, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 20 13:58:37 compute-0 systemd[1]: libpod-97e5c8c267e29995296fd96062d3fcf9f755d91c05f578be8ba06500dcdb10a2.scope: Deactivated successfully.
Jan 20 13:58:37 compute-0 podman[92477]: 2026-01-20 13:58:37.141046037 +0000 UTC m=+1.110751710 container died 97e5c8c267e29995296fd96062d3fcf9f755d91c05f578be8ba06500dcdb10a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_gates, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 20 13:58:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-2ad6052fe115313107ad50d4ef8a7c69ab4eb4a1eb07ca02d95235890ba96926-merged.mount: Deactivated successfully.
Jan 20 13:58:37 compute-0 systemd[1]: Started libpod-conmon-5f4955b410062cc3a1f61716f93a6226220f342c740286eab1bb49c182619f04.scope.
Jan 20 13:58:37 compute-0 podman[92535]: 2026-01-20 13:58:37.110270799 +0000 UTC m=+0.043144252 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 20 13:58:37 compute-0 podman[92477]: 2026-01-20 13:58:37.201628847 +0000 UTC m=+1.171334510 container remove 97e5c8c267e29995296fd96062d3fcf9f755d91c05f578be8ba06500dcdb10a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_gates, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 13:58:37 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v117: 193 pgs: 193 active+clean; 449 KiB data, 480 MiB used, 21 GiB / 21 GiB avail
Jan 20 13:58:37 compute-0 systemd[1]: Started libcrun container.
Jan 20 13:58:37 compute-0 systemd[1]: libpod-conmon-97e5c8c267e29995296fd96062d3fcf9f755d91c05f578be8ba06500dcdb10a2.scope: Deactivated successfully.
Jan 20 13:58:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fa3985fd9548e53a606a9162078034f063bf5489c24170739beb6b978df3a53/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 13:58:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fa3985fd9548e53a606a9162078034f063bf5489c24170739beb6b978df3a53/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 13:58:37 compute-0 podman[92535]: 2026-01-20 13:58:37.233092764 +0000 UTC m=+0.165966157 container init 5f4955b410062cc3a1f61716f93a6226220f342c740286eab1bb49c182619f04 (image=quay.io/ceph/ceph:v18, name=festive_banach, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 20 13:58:37 compute-0 sudo[92329]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:37 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 13:58:37 compute-0 podman[92535]: 2026-01-20 13:58:37.243021221 +0000 UTC m=+0.175894584 container start 5f4955b410062cc3a1f61716f93a6226220f342c740286eab1bb49c182619f04 (image=quay.io/ceph/ceph:v18, name=festive_banach, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Jan 20 13:58:37 compute-0 podman[92535]: 2026-01-20 13:58:37.24673093 +0000 UTC m=+0.179604343 container attach 5f4955b410062cc3a1f61716f93a6226220f342c740286eab1bb49c182619f04 (image=quay.io/ceph/ceph:v18, name=festive_banach, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 13:58:37 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:37 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 13:58:37 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:37 compute-0 ceph-mgr[74653]: [progress INFO root] update: starting ev 1bace1f1-4dbd-4902-94dc-d069c46a4b30 (Updating rgw.rgw deployment (+3 -> 3))
Jan 20 13:58:37 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.ktpnzt", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0) v1
Jan 20 13:58:37 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.ktpnzt", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Jan 20 13:58:37 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.ktpnzt", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 20 13:58:37 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0) v1
Jan 20 13:58:37 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:37 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 13:58:37 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 13:58:37 compute-0 ceph-mgr[74653]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-2.ktpnzt on compute-2
Jan 20 13:58:37 compute-0 ceph-mgr[74653]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-2.ktpnzt on compute-2
Jan 20 13:58:37 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 5.3 scrub starts
Jan 20 13:58:37 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 5.3 scrub ok
Jan 20 13:58:37 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Jan 20 13:58:37 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/347102734' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Jan 20 13:58:37 compute-0 festive_banach[92565]: 
Jan 20 13:58:37 compute-0 festive_banach[92565]: {"fsid":"e399cf45-e6b6-5393-99f1-75c601d3f188","health":{"status":"HEALTH_ERR","checks":{"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false}},"mutes":[]},"election_epoch":14,"quorum":[0,1,2],"quorum_names":["compute-0","compute-2","compute-1"],"quorum_age":34,"monmap":{"epoch":3,"min_mon_release_name":"reef","num_mons":3},"osdmap":{"epoch":42,"num_osds":3,"num_up_osds":3,"osd_up_since":1768917509,"num_in_osds":3,"osd_in_since":1768917490,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":193}],"num_pgs":193,"num_pools":7,"num_objects":2,"data_bytes":459280,"bytes_used":503603200,"bytes_avail":22032392192,"bytes_total":22535995392},"fsmap":{"epoch":2,"id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":2,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":4,"modified":"2026-01-20T13:58:31.206134+0000","services":{"mgr":{"daemons":{"summary":"","compute-1.oweoeg":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-2.gunjko":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}},"mon":{"daemons":{"summary":"","compute-1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}},"osd":{"daemons":{"summary":"","2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{}}
Jan 20 13:58:37 compute-0 systemd[1]: libpod-5f4955b410062cc3a1f61716f93a6226220f342c740286eab1bb49c182619f04.scope: Deactivated successfully.
Jan 20 13:58:37 compute-0 podman[92535]: 2026-01-20 13:58:37.858517233 +0000 UTC m=+0.791390666 container died 5f4955b410062cc3a1f61716f93a6226220f342c740286eab1bb49c182619f04 (image=quay.io/ceph/ceph:v18, name=festive_banach, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 13:58:37 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e42 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 13:58:37 compute-0 ceph-mon[74360]: 3.1f scrub starts
Jan 20 13:58:37 compute-0 ceph-mon[74360]: 3.1f scrub ok
Jan 20 13:58:37 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:37 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:37 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.ktpnzt", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Jan 20 13:58:37 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.ktpnzt", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 20 13:58:37 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:37 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 13:58:37 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/347102734' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Jan 20 13:58:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-7fa3985fd9548e53a606a9162078034f063bf5489c24170739beb6b978df3a53-merged.mount: Deactivated successfully.
Jan 20 13:58:37 compute-0 podman[92535]: 2026-01-20 13:58:37.910965895 +0000 UTC m=+0.843839248 container remove 5f4955b410062cc3a1f61716f93a6226220f342c740286eab1bb49c182619f04 (image=quay.io/ceph/ceph:v18, name=festive_banach, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 20 13:58:37 compute-0 systemd[1]: libpod-conmon-5f4955b410062cc3a1f61716f93a6226220f342c740286eab1bb49c182619f04.scope: Deactivated successfully.
Jan 20 13:58:37 compute-0 sudo[92523]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:38 compute-0 sudo[92627]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vaehkdremvbpnguvpngeovdtuomtgvlg ; /usr/bin/python3'
Jan 20 13:58:38 compute-0 sudo[92627]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:58:38 compute-0 ceph-mon[74360]: pgmap v117: 193 pgs: 193 active+clean; 449 KiB data, 480 MiB used, 21 GiB / 21 GiB avail
Jan 20 13:58:38 compute-0 ceph-mon[74360]: Deploying daemon rgw.rgw.compute-2.ktpnzt on compute-2
Jan 20 13:58:38 compute-0 ceph-mon[74360]: 2.17 scrub starts
Jan 20 13:58:38 compute-0 ceph-mon[74360]: 2.17 scrub ok
Jan 20 13:58:38 compute-0 ceph-mon[74360]: 5.3 scrub starts
Jan 20 13:58:38 compute-0 ceph-mon[74360]: 5.3 scrub ok
Jan 20 13:58:38 compute-0 python3[92629]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config dump -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 13:58:39 compute-0 podman[92630]: 2026-01-20 13:58:39.004574203 +0000 UTC m=+0.041141669 container create 4d160dd840e613a7bb8e5b22adc38c587b4f68d84c910c30bc88b10d95fb4c93 (image=quay.io/ceph/ceph:v18, name=quizzical_ritchie, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 20 13:58:39 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 20 13:58:39 compute-0 systemd[1]: Started libpod-conmon-4d160dd840e613a7bb8e5b22adc38c587b4f68d84c910c30bc88b10d95fb4c93.scope.
Jan 20 13:58:39 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:39 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 20 13:58:39 compute-0 systemd[1]: Started libcrun container.
Jan 20 13:58:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a304e6ae2acaf9c661903e1b6ef0d02e6e8187d698e4dd9498a035095a1f981/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 13:58:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a304e6ae2acaf9c661903e1b6ef0d02e6e8187d698e4dd9498a035095a1f981/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 13:58:39 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:39 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Jan 20 13:58:39 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:39 compute-0 podman[92630]: 2026-01-20 13:58:39.074972747 +0000 UTC m=+0.111540233 container init 4d160dd840e613a7bb8e5b22adc38c587b4f68d84c910c30bc88b10d95fb4c93 (image=quay.io/ceph/ceph:v18, name=quizzical_ritchie, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 20 13:58:39 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.orkqpg", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0) v1
Jan 20 13:58:39 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.orkqpg", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Jan 20 13:58:39 compute-0 podman[92630]: 2026-01-20 13:58:38.988798578 +0000 UTC m=+0.025366044 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 20 13:58:39 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.orkqpg", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 20 13:58:39 compute-0 podman[92630]: 2026-01-20 13:58:39.085164301 +0000 UTC m=+0.121731757 container start 4d160dd840e613a7bb8e5b22adc38c587b4f68d84c910c30bc88b10d95fb4c93 (image=quay.io/ceph/ceph:v18, name=quizzical_ritchie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 13:58:39 compute-0 podman[92630]: 2026-01-20 13:58:39.088364378 +0000 UTC m=+0.124931844 container attach 4d160dd840e613a7bb8e5b22adc38c587b4f68d84c910c30bc88b10d95fb4c93 (image=quay.io/ceph/ceph:v18, name=quizzical_ritchie, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 13:58:39 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0) v1
Jan 20 13:58:39 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:39 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 13:58:39 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 13:58:39 compute-0 ceph-mgr[74653]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-1.orkqpg on compute-1
Jan 20 13:58:39 compute-0 ceph-mgr[74653]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-1.orkqpg on compute-1
Jan 20 13:58:39 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v118: 193 pgs: 193 active+clean; 449 KiB data, 480 MiB used, 21 GiB / 21 GiB avail
Jan 20 13:58:39 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Jan 20 13:58:39 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/238419675' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Jan 20 13:58:39 compute-0 quizzical_ritchie[92645]: 
Jan 20 13:58:39 compute-0 systemd[1]: libpod-4d160dd840e613a7bb8e5b22adc38c587b4f68d84c910c30bc88b10d95fb4c93.scope: Deactivated successfully.
Jan 20 13:58:39 compute-0 quizzical_ritchie[92645]: [{"section":"global","name":"cluster_network","value":"172.20.0.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"container_image","value":"quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"log_to_file","value":"true","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"global","name":"mon_cluster_log_to_file","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv4","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv6","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"osd_pool_default_size","value":"1","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"public_network","value":"192.168.122.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_admin_roles","value":"ResellerAdmin, swiftoperator","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_roles","value":"member, Member, admin","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_domain","value":"default","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_password","value":"12345678","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_project","value":"service","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_user","value":"swift","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_api_version","value":"3","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_keystone_implicit_tenants","value":"true","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_url","value":"https://keystone-internal.openstack.svc:5000","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_verify_ssl","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_name_len","value":"128","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_size","value":"1024","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attrs_num_in_req","value":"90","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_s3_auth_use_keystone","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_account_in_url","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_enforce_content_length","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_versioning_enabled","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_trust_forwarded_https","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"auth_allow_insecure_global_id_reclaim","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"mon_warn_on_pool_no_redundancy","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr/cephadm/container_init","value":"True","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/migration_current","value":"6","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/use_repo_digest","value":"false","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/orchestrator/orchestrator","value":"cephadm","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"osd","name":"osd_memory_target","value":"5502921113","level":"basic","can_update_at_runtime":true,"mask":"host:compute-1","location_type":"host","location_value":"compute-1"},{"section":"osd","name":"osd_memory_target_autotune","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"client.rgw.rgw.compute-1.orkqpg","name":"rgw_frontends","value":"beast endpoint=192.168.122.101:8082","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"client.rgw.rgw.compute-2.ktpnzt","name":"rgw_frontends","value":"beast endpoint=192.168.122.102:8082","level":"basic","can_update_at_runtime":false,"mask":""}]
Jan 20 13:58:39 compute-0 podman[92630]: 2026-01-20 13:58:39.621419311 +0000 UTC m=+0.657986777 container died 4d160dd840e613a7bb8e5b22adc38c587b4f68d84c910c30bc88b10d95fb4c93 (image=quay.io/ceph/ceph:v18, name=quizzical_ritchie, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 20 13:58:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-7a304e6ae2acaf9c661903e1b6ef0d02e6e8187d698e4dd9498a035095a1f981-merged.mount: Deactivated successfully.
Jan 20 13:58:39 compute-0 podman[92630]: 2026-01-20 13:58:39.675014764 +0000 UTC m=+0.711582240 container remove 4d160dd840e613a7bb8e5b22adc38c587b4f68d84c910c30bc88b10d95fb4c93 (image=quay.io/ceph/ceph:v18, name=quizzical_ritchie, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 13:58:39 compute-0 systemd[1]: libpod-conmon-4d160dd840e613a7bb8e5b22adc38c587b4f68d84c910c30bc88b10d95fb4c93.scope: Deactivated successfully.
Jan 20 13:58:39 compute-0 sudo[92627]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:40 compute-0 ceph-mon[74360]: 2.1a scrub starts
Jan 20 13:58:40 compute-0 ceph-mon[74360]: 2.1a scrub ok
Jan 20 13:58:40 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:40 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:40 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:40 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.orkqpg", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Jan 20 13:58:40 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.orkqpg", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 20 13:58:40 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:40 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 13:58:40 compute-0 ceph-mon[74360]: Deploying daemon rgw.rgw.compute-1.orkqpg on compute-1
Jan 20 13:58:40 compute-0 ceph-mon[74360]: pgmap v118: 193 pgs: 193 active+clean; 449 KiB data, 480 MiB used, 21 GiB / 21 GiB avail
Jan 20 13:58:40 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/238419675' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Jan 20 13:58:40 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e42 do_prune osdmap full prune enabled
Jan 20 13:58:40 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e43 e43: 3 total, 3 up, 3 in
Jan 20 13:58:40 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e43: 3 total, 3 up, 3 in
Jan 20 13:58:40 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} v 0) v1
Jan 20 13:58:40 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.ktpnzt' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Jan 20 13:58:40 compute-0 sudo[92705]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qmfupklqgsifwqlamljqipejbnxbbxnx ; /usr/bin/python3'
Jan 20 13:58:40 compute-0 sudo[92705]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:58:40 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 43 pg[8.0( empty local-lis/les=0/0 n=0 ec=43/43 lis/c=0/0 les/c/f=0/0/0 sis=43) [0] r=0 lpr=43 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:40 compute-0 python3[92707]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd get-require-min-compat-client _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 13:58:40 compute-0 podman[92708]: 2026-01-20 13:58:40.68047114 +0000 UTC m=+0.037380817 container create ffb3a214ccb54f7964a202e03d7e9c4966fa7bae05387edca299952fbdeb6d31 (image=quay.io/ceph/ceph:v18, name=elated_johnson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 13:58:40 compute-0 systemd[1]: Started libpod-conmon-ffb3a214ccb54f7964a202e03d7e9c4966fa7bae05387edca299952fbdeb6d31.scope.
Jan 20 13:58:40 compute-0 systemd[1]: Started libcrun container.
Jan 20 13:58:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/000558c162c1c966ef679b7ee0ca221c6588722675c5136d274eea3cf6751463/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 13:58:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/000558c162c1c966ef679b7ee0ca221c6588722675c5136d274eea3cf6751463/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 13:58:40 compute-0 podman[92708]: 2026-01-20 13:58:40.746499457 +0000 UTC m=+0.103409144 container init ffb3a214ccb54f7964a202e03d7e9c4966fa7bae05387edca299952fbdeb6d31 (image=quay.io/ceph/ceph:v18, name=elated_johnson, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 20 13:58:40 compute-0 podman[92708]: 2026-01-20 13:58:40.75147306 +0000 UTC m=+0.108382737 container start ffb3a214ccb54f7964a202e03d7e9c4966fa7bae05387edca299952fbdeb6d31 (image=quay.io/ceph/ceph:v18, name=elated_johnson, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 20 13:58:40 compute-0 podman[92708]: 2026-01-20 13:58:40.754201054 +0000 UTC m=+0.111110731 container attach ffb3a214ccb54f7964a202e03d7e9c4966fa7bae05387edca299952fbdeb6d31 (image=quay.io/ceph/ceph:v18, name=elated_johnson, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 13:58:40 compute-0 podman[92708]: 2026-01-20 13:58:40.666622658 +0000 UTC m=+0.023532335 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 20 13:58:40 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 5.5 deep-scrub starts
Jan 20 13:58:40 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 5.5 deep-scrub ok
Jan 20 13:58:41 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e43 do_prune osdmap full prune enabled
Jan 20 13:58:41 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.ktpnzt' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Jan 20 13:58:41 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e44 e44: 3 total, 3 up, 3 in
Jan 20 13:58:41 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e44: 3 total, 3 up, 3 in
Jan 20 13:58:41 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 44 pg[8.0( empty local-lis/les=43/44 n=0 ec=43/43 lis/c=0/0 les/c/f=0/0/0 sis=43) [0] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:41 compute-0 ceph-mon[74360]: 6.1 scrub starts
Jan 20 13:58:41 compute-0 ceph-mon[74360]: 6.1 scrub ok
Jan 20 13:58:41 compute-0 ceph-mon[74360]: osdmap e43: 3 total, 3 up, 3 in
Jan 20 13:58:41 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/1247667946' entity='client.rgw.rgw.compute-2.ktpnzt' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Jan 20 13:58:41 compute-0 ceph-mon[74360]: from='client.? ' entity='client.rgw.rgw.compute-2.ktpnzt' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Jan 20 13:58:41 compute-0 ceph-mon[74360]: 5.5 deep-scrub starts
Jan 20 13:58:41 compute-0 ceph-mon[74360]: 5.5 deep-scrub ok
Jan 20 13:58:41 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v121: 194 pgs: 1 unknown, 193 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Jan 20 13:58:41 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd get-require-min-compat-client"} v 0) v1
Jan 20 13:58:41 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/969844602' entity='client.admin' cmd=[{"prefix": "osd get-require-min-compat-client"}]: dispatch
Jan 20 13:58:41 compute-0 elated_johnson[92724]: mimic
Jan 20 13:58:41 compute-0 systemd[1]: libpod-ffb3a214ccb54f7964a202e03d7e9c4966fa7bae05387edca299952fbdeb6d31.scope: Deactivated successfully.
Jan 20 13:58:41 compute-0 podman[92708]: 2026-01-20 13:58:41.276473807 +0000 UTC m=+0.633383514 container died ffb3a214ccb54f7964a202e03d7e9c4966fa7bae05387edca299952fbdeb6d31 (image=quay.io/ceph/ceph:v18, name=elated_johnson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3)
Jan 20 13:58:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-000558c162c1c966ef679b7ee0ca221c6588722675c5136d274eea3cf6751463-merged.mount: Deactivated successfully.
Jan 20 13:58:41 compute-0 podman[92708]: 2026-01-20 13:58:41.325028104 +0000 UTC m=+0.681937811 container remove ffb3a214ccb54f7964a202e03d7e9c4966fa7bae05387edca299952fbdeb6d31 (image=quay.io/ceph/ceph:v18, name=elated_johnson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 20 13:58:41 compute-0 systemd[1]: libpod-conmon-ffb3a214ccb54f7964a202e03d7e9c4966fa7bae05387edca299952fbdeb6d31.scope: Deactivated successfully.
Jan 20 13:58:41 compute-0 sudo[92705]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:41 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 20 13:58:41 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:41 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 20 13:58:41 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:41 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Jan 20 13:58:41 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:41 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.kiggjh", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0) v1
Jan 20 13:58:41 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.kiggjh", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Jan 20 13:58:41 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.kiggjh", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 20 13:58:41 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0) v1
Jan 20 13:58:41 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:41 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 13:58:41 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 13:58:41 compute-0 ceph-mgr[74653]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-0.kiggjh on compute-0
Jan 20 13:58:41 compute-0 ceph-mgr[74653]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-0.kiggjh on compute-0
Jan 20 13:58:41 compute-0 sudo[92765]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:58:41 compute-0 sudo[92765]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:58:41 compute-0 sudo[92765]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:41 compute-0 sudo[92790]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 13:58:41 compute-0 sudo[92790]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:58:41 compute-0 sudo[92790]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:41 compute-0 sudo[92815]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:58:41 compute-0 sudo[92815]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:58:41 compute-0 sudo[92815]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:41 compute-0 sudo[92840]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid e399cf45-e6b6-5393-99f1-75c601d3f188
Jan 20 13:58:41 compute-0 sudo[92840]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:58:41 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 4.4 scrub starts
Jan 20 13:58:41 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 4.4 scrub ok
Jan 20 13:58:42 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e44 do_prune osdmap full prune enabled
Jan 20 13:58:42 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e45 e45: 3 total, 3 up, 3 in
Jan 20 13:58:42 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e45: 3 total, 3 up, 3 in
Jan 20 13:58:42 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 45 pg[9.0( empty local-lis/les=0/0 n=0 ec=45/45 lis/c=0/0 les/c/f=0/0/0 sis=45) [0] r=0 lpr=45 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:42 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0) v1
Jan 20 13:58:42 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.ktpnzt' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Jan 20 13:58:42 compute-0 ceph-mon[74360]: from='client.? ' entity='client.rgw.rgw.compute-2.ktpnzt' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Jan 20 13:58:42 compute-0 ceph-mon[74360]: osdmap e44: 3 total, 3 up, 3 in
Jan 20 13:58:42 compute-0 ceph-mon[74360]: pgmap v121: 194 pgs: 1 unknown, 193 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Jan 20 13:58:42 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/969844602' entity='client.admin' cmd=[{"prefix": "osd get-require-min-compat-client"}]: dispatch
Jan 20 13:58:42 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:42 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:42 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:42 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.kiggjh", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Jan 20 13:58:42 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.kiggjh", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 20 13:58:42 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:42 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 13:58:42 compute-0 ceph-mon[74360]: Deploying daemon rgw.rgw.compute-0.kiggjh on compute-0
Jan 20 13:58:42 compute-0 ceph-mon[74360]: 4.4 scrub starts
Jan 20 13:58:42 compute-0 ceph-mon[74360]: 4.4 scrub ok
Jan 20 13:58:42 compute-0 ceph-mon[74360]: osdmap e45: 3 total, 3 up, 3 in
Jan 20 13:58:42 compute-0 podman[92906]: 2026-01-20 13:58:42.140098687 +0000 UTC m=+0.066792807 container create 5d4c64b80d3177e5bc4baba7025642ed33da19282033733cd299634a8d22c90b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_bohr, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 20 13:58:42 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0) v1
Jan 20 13:58:42 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.orkqpg' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Jan 20 13:58:42 compute-0 podman[92906]: 2026-01-20 13:58:42.105063894 +0000 UTC m=+0.031758064 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 13:58:42 compute-0 systemd[1]: Started libpod-conmon-5d4c64b80d3177e5bc4baba7025642ed33da19282033733cd299634a8d22c90b.scope.
Jan 20 13:58:42 compute-0 sudo[92945]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rfjszmuponxkvfgezjprtuzilbmpwqox ; /usr/bin/python3'
Jan 20 13:58:42 compute-0 sudo[92945]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:58:42 compute-0 systemd[1]: Started libcrun container.
Jan 20 13:58:42 compute-0 podman[92906]: 2026-01-20 13:58:42.276054926 +0000 UTC m=+0.202749046 container init 5d4c64b80d3177e5bc4baba7025642ed33da19282033733cd299634a8d22c90b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_bohr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 20 13:58:42 compute-0 podman[92906]: 2026-01-20 13:58:42.283808555 +0000 UTC m=+0.210502655 container start 5d4c64b80d3177e5bc4baba7025642ed33da19282033733cd299634a8d22c90b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_bohr, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 13:58:42 compute-0 podman[92906]: 2026-01-20 13:58:42.287205736 +0000 UTC m=+0.213899836 container attach 5d4c64b80d3177e5bc4baba7025642ed33da19282033733cd299634a8d22c90b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_bohr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 20 13:58:42 compute-0 peaceful_bohr[92946]: 167 167
Jan 20 13:58:42 compute-0 systemd[1]: libpod-5d4c64b80d3177e5bc4baba7025642ed33da19282033733cd299634a8d22c90b.scope: Deactivated successfully.
Jan 20 13:58:42 compute-0 podman[92906]: 2026-01-20 13:58:42.289662562 +0000 UTC m=+0.216356682 container died 5d4c64b80d3177e5bc4baba7025642ed33da19282033733cd299634a8d22c90b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_bohr, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 20 13:58:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-d33bfdc6ea795003b2e3483ecece1fa59730aac732516bc10e818e56aad12fb0-merged.mount: Deactivated successfully.
Jan 20 13:58:42 compute-0 podman[92906]: 2026-01-20 13:58:42.331235971 +0000 UTC m=+0.257930071 container remove 5d4c64b80d3177e5bc4baba7025642ed33da19282033733cd299634a8d22c90b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_bohr, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 13:58:42 compute-0 systemd[1]: libpod-conmon-5d4c64b80d3177e5bc4baba7025642ed33da19282033733cd299634a8d22c90b.scope: Deactivated successfully.
Jan 20 13:58:42 compute-0 systemd[1]: Reloading.
Jan 20 13:58:42 compute-0 python3[92950]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   versions -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 13:58:42 compute-0 systemd-rc-local-generator[92996]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 13:58:42 compute-0 systemd-sysv-generator[93000]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 13:58:42 compute-0 podman[92985]: 2026-01-20 13:58:42.528705325 +0000 UTC m=+0.060965832 container create 0682af5b2a3fd2d62121ab34859d26d735121c469fee8e44112ba6a600580572 (image=quay.io/ceph/ceph:v18, name=vigorous_edison, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 13:58:42 compute-0 podman[92985]: 2026-01-20 13:58:42.505736596 +0000 UTC m=+0.037997103 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 20 13:58:42 compute-0 systemd[1]: Started libpod-conmon-0682af5b2a3fd2d62121ab34859d26d735121c469fee8e44112ba6a600580572.scope.
Jan 20 13:58:42 compute-0 systemd[1]: Started libcrun container.
Jan 20 13:58:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f74ca4ff9e4e74331e72118342fca9090c163d732a7b078723d7b5d41eff9f32/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 13:58:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f74ca4ff9e4e74331e72118342fca9090c163d732a7b078723d7b5d41eff9f32/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 13:58:42 compute-0 systemd[1]: Reloading.
Jan 20 13:58:42 compute-0 podman[92985]: 2026-01-20 13:58:42.731626755 +0000 UTC m=+0.263887322 container init 0682af5b2a3fd2d62121ab34859d26d735121c469fee8e44112ba6a600580572 (image=quay.io/ceph/ceph:v18, name=vigorous_edison, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 13:58:42 compute-0 podman[92985]: 2026-01-20 13:58:42.743117164 +0000 UTC m=+0.275377661 container start 0682af5b2a3fd2d62121ab34859d26d735121c469fee8e44112ba6a600580572 (image=quay.io/ceph/ceph:v18, name=vigorous_edison, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 13:58:42 compute-0 podman[92985]: 2026-01-20 13:58:42.747524993 +0000 UTC m=+0.279785490 container attach 0682af5b2a3fd2d62121ab34859d26d735121c469fee8e44112ba6a600580572 (image=quay.io/ceph/ceph:v18, name=vigorous_edison, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 20 13:58:42 compute-0 systemd-rc-local-generator[93054]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 13:58:42 compute-0 systemd-sysv-generator[93057]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 13:58:42 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e45 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 13:58:42 compute-0 systemd[1]: Starting Ceph rgw.rgw.compute-0.kiggjh for e399cf45-e6b6-5393-99f1-75c601d3f188...
Jan 20 13:58:43 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e45 do_prune osdmap full prune enabled
Jan 20 13:58:43 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.ktpnzt' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Jan 20 13:58:43 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.orkqpg' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Jan 20 13:58:43 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e46 e46: 3 total, 3 up, 3 in
Jan 20 13:58:43 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e46: 3 total, 3 up, 3 in
Jan 20 13:58:43 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 46 pg[9.0( empty local-lis/les=45/46 n=0 ec=45/45 lis/c=0/0 les/c/f=0/0/0 sis=45) [0] r=0 lpr=45 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:43 compute-0 ceph-mon[74360]: 4.3 scrub starts
Jan 20 13:58:43 compute-0 ceph-mon[74360]: 4.3 scrub ok
Jan 20 13:58:43 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/1247667946' entity='client.rgw.rgw.compute-2.ktpnzt' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Jan 20 13:58:43 compute-0 ceph-mon[74360]: from='client.? ' entity='client.rgw.rgw.compute-2.ktpnzt' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Jan 20 13:58:43 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/2347323994' entity='client.rgw.rgw.compute-1.orkqpg' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Jan 20 13:58:43 compute-0 ceph-mon[74360]: from='client.? ' entity='client.rgw.rgw.compute-1.orkqpg' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Jan 20 13:58:43 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v124: 195 pgs: 2 unknown, 193 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Jan 20 13:58:43 compute-0 podman[93131]: 2026-01-20 13:58:43.246004156 +0000 UTC m=+0.044065246 container create 793d79ee59d8324232451e3c988f452470c4579e2f2b2377644f7209af529f87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-rgw-rgw-compute-0-kiggjh, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 13:58:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea15318a0fa66c244ca53d34953bcd9b3752231fe3cc89fbb1a15f95ed99e288/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 13:58:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea15318a0fa66c244ca53d34953bcd9b3752231fe3cc89fbb1a15f95ed99e288/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 13:58:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea15318a0fa66c244ca53d34953bcd9b3752231fe3cc89fbb1a15f95ed99e288/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 13:58:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea15318a0fa66c244ca53d34953bcd9b3752231fe3cc89fbb1a15f95ed99e288/merged/var/lib/ceph/radosgw/ceph-rgw.rgw.compute-0.kiggjh supports timestamps until 2038 (0x7fffffff)
Jan 20 13:58:43 compute-0 podman[93131]: 2026-01-20 13:58:43.304702996 +0000 UTC m=+0.102764086 container init 793d79ee59d8324232451e3c988f452470c4579e2f2b2377644f7209af529f87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-rgw-rgw-compute-0-kiggjh, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True)
Jan 20 13:58:43 compute-0 podman[93131]: 2026-01-20 13:58:43.311338095 +0000 UTC m=+0.109399175 container start 793d79ee59d8324232451e3c988f452470c4579e2f2b2377644f7209af529f87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-rgw-rgw-compute-0-kiggjh, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 13:58:43 compute-0 bash[93131]: 793d79ee59d8324232451e3c988f452470c4579e2f2b2377644f7209af529f87
Jan 20 13:58:43 compute-0 podman[93131]: 2026-01-20 13:58:43.22830382 +0000 UTC m=+0.026364920 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 13:58:43 compute-0 systemd[1]: Started Ceph rgw.rgw.compute-0.kiggjh for e399cf45-e6b6-5393-99f1-75c601d3f188.
Jan 20 13:58:43 compute-0 sudo[92840]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:43 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 13:58:43 compute-0 radosgw[93153]: deferred set uid:gid to 167:167 (ceph:ceph)
Jan 20 13:58:43 compute-0 radosgw[93153]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process radosgw, pid 2
Jan 20 13:58:43 compute-0 radosgw[93153]: framework: beast
Jan 20 13:58:43 compute-0 radosgw[93153]: framework conf key: endpoint, val: 192.168.122.100:8082
Jan 20 13:58:43 compute-0 radosgw[93153]: init_numa not setting numa affinity
Jan 20 13:58:43 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:43 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 13:58:43 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:43 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Jan 20 13:58:43 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:43 compute-0 ceph-mgr[74653]: [progress INFO root] complete: finished ev 1bace1f1-4dbd-4902-94dc-d069c46a4b30 (Updating rgw.rgw deployment (+3 -> 3))
Jan 20 13:58:43 compute-0 ceph-mgr[74653]: [progress INFO root] Completed event 1bace1f1-4dbd-4902-94dc-d069c46a4b30 (Updating rgw.rgw deployment (+3 -> 3)) in 6 seconds
Jan 20 13:58:43 compute-0 ceph-mgr[74653]: [cephadm INFO cephadm.services.cephadmservice] Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Jan 20 13:58:43 compute-0 ceph-mgr[74653]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Jan 20 13:58:43 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Jan 20 13:58:43 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "versions", "format": "json"} v 0) v1
Jan 20 13:58:43 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/361094635' entity='client.admin' cmd=[{"prefix": "versions", "format": "json"}]: dispatch
Jan 20 13:58:43 compute-0 vigorous_edison[93020]: 
Jan 20 13:58:43 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:43 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Jan 20 13:58:43 compute-0 systemd[1]: libpod-0682af5b2a3fd2d62121ab34859d26d735121c469fee8e44112ba6a600580572.scope: Deactivated successfully.
Jan 20 13:58:43 compute-0 vigorous_edison[93020]: {"mon":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":3},"mgr":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":3},"osd":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":3},"overall":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":9}}
Jan 20 13:58:43 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:43 compute-0 podman[92985]: 2026-01-20 13:58:43.507193505 +0000 UTC m=+1.039454012 container died 0682af5b2a3fd2d62121ab34859d26d735121c469fee8e44112ba6a600580572 (image=quay.io/ceph/ceph:v18, name=vigorous_edison, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 13:58:43 compute-0 ceph-mgr[74653]: [progress INFO root] update: starting ev 624c54b8-afe4-417d-a918-7eacb4f1e91a (Updating mds.cephfs deployment (+3 -> 3))
Jan 20 13:58:43 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.jyxktq", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0) v1
Jan 20 13:58:43 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.jyxktq", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Jan 20 13:58:43 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.jyxktq", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Jan 20 13:58:43 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 13:58:43 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 13:58:43 compute-0 ceph-mgr[74653]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-2.jyxktq on compute-2
Jan 20 13:58:43 compute-0 ceph-mgr[74653]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-2.jyxktq on compute-2
Jan 20 13:58:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-f74ca4ff9e4e74331e72118342fca9090c163d732a7b078723d7b5d41eff9f32-merged.mount: Deactivated successfully.
Jan 20 13:58:43 compute-0 podman[92985]: 2026-01-20 13:58:43.555497344 +0000 UTC m=+1.087757851 container remove 0682af5b2a3fd2d62121ab34859d26d735121c469fee8e44112ba6a600580572 (image=quay.io/ceph/ceph:v18, name=vigorous_edison, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 13:58:43 compute-0 systemd[1]: libpod-conmon-0682af5b2a3fd2d62121ab34859d26d735121c469fee8e44112ba6a600580572.scope: Deactivated successfully.
Jan 20 13:58:43 compute-0 sudo[92945]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:43 compute-0 ceph-mgr[74653]: [progress INFO root] Writing back 13 completed events
Jan 20 13:58:43 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Jan 20 13:58:43 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:43 compute-0 ceph-mgr[74653]: [progress WARNING root] Starting Global Recovery Event,2 pgs not in active + clean state
Jan 20 13:58:43 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 5.6 scrub starts
Jan 20 13:58:43 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 5.6 scrub ok
Jan 20 13:58:44 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e46 do_prune osdmap full prune enabled
Jan 20 13:58:44 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e47 e47: 3 total, 3 up, 3 in
Jan 20 13:58:44 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e47: 3 total, 3 up, 3 in
Jan 20 13:58:44 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0) v1
Jan 20 13:58:44 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/418792044' entity='client.rgw.rgw.compute-0.kiggjh' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 20 13:58:44 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0) v1
Jan 20 13:58:44 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.orkqpg' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 20 13:58:44 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0) v1
Jan 20 13:58:44 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.ktpnzt' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 20 13:58:44 compute-0 ceph-mon[74360]: 6.1b scrub starts
Jan 20 13:58:44 compute-0 ceph-mon[74360]: 6.1b scrub ok
Jan 20 13:58:44 compute-0 ceph-mon[74360]: from='client.? ' entity='client.rgw.rgw.compute-2.ktpnzt' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Jan 20 13:58:44 compute-0 ceph-mon[74360]: from='client.? ' entity='client.rgw.rgw.compute-1.orkqpg' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Jan 20 13:58:44 compute-0 ceph-mon[74360]: osdmap e46: 3 total, 3 up, 3 in
Jan 20 13:58:44 compute-0 ceph-mon[74360]: pgmap v124: 195 pgs: 2 unknown, 193 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Jan 20 13:58:44 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:44 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:44 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:44 compute-0 ceph-mon[74360]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Jan 20 13:58:44 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/361094635' entity='client.admin' cmd=[{"prefix": "versions", "format": "json"}]: dispatch
Jan 20 13:58:44 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:44 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:44 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.jyxktq", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Jan 20 13:58:44 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.jyxktq", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Jan 20 13:58:44 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 13:58:44 compute-0 ceph-mon[74360]: Deploying daemon mds.cephfs.compute-2.jyxktq on compute-2
Jan 20 13:58:44 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:44 compute-0 ceph-mon[74360]: 5.6 scrub starts
Jan 20 13:58:44 compute-0 ceph-mon[74360]: 5.6 scrub ok
Jan 20 13:58:44 compute-0 ceph-mon[74360]: osdmap e47: 3 total, 3 up, 3 in
Jan 20 13:58:45 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e47 do_prune osdmap full prune enabled
Jan 20 13:58:45 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/418792044' entity='client.rgw.rgw.compute-0.kiggjh' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Jan 20 13:58:45 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.orkqpg' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Jan 20 13:58:45 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.ktpnzt' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Jan 20 13:58:45 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e48 e48: 3 total, 3 up, 3 in
Jan 20 13:58:45 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e48: 3 total, 3 up, 3 in
Jan 20 13:58:45 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/2347323994' entity='client.rgw.rgw.compute-1.orkqpg' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 20 13:58:45 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/418792044' entity='client.rgw.rgw.compute-0.kiggjh' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 20 13:58:45 compute-0 ceph-mon[74360]: from='client.? ' entity='client.rgw.rgw.compute-1.orkqpg' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 20 13:58:45 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/1247667946' entity='client.rgw.rgw.compute-2.ktpnzt' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 20 13:58:45 compute-0 ceph-mon[74360]: from='client.? ' entity='client.rgw.rgw.compute-2.ktpnzt' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 20 13:58:45 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/418792044' entity='client.rgw.rgw.compute-0.kiggjh' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Jan 20 13:58:45 compute-0 ceph-mon[74360]: from='client.? ' entity='client.rgw.rgw.compute-1.orkqpg' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Jan 20 13:58:45 compute-0 ceph-mon[74360]: from='client.? ' entity='client.rgw.rgw.compute-2.ktpnzt' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Jan 20 13:58:45 compute-0 ceph-mon[74360]: osdmap e48: 3 total, 3 up, 3 in
Jan 20 13:58:45 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v127: 196 pgs: 1 unknown, 195 active+clean; 450 KiB data, 80 MiB used, 21 GiB / 21 GiB avail; 3.0 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 20 13:58:45 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 20 13:58:45 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:45 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 20 13:58:45 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:45 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Jan 20 13:58:45 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:45 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.znrafi", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0) v1
Jan 20 13:58:45 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.znrafi", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Jan 20 13:58:45 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.znrafi", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Jan 20 13:58:45 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 13:58:45 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 13:58:45 compute-0 ceph-mgr[74653]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-0.znrafi on compute-0
Jan 20 13:58:45 compute-0 ceph-mgr[74653]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-0.znrafi on compute-0
Jan 20 13:58:45 compute-0 sudo[93245]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:58:45 compute-0 sudo[93245]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:58:45 compute-0 sudo[93245]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:45 compute-0 sudo[93270]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 13:58:45 compute-0 sudo[93270]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:58:45 compute-0 sudo[93270]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:45 compute-0 sudo[93295]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:58:45 compute-0 sudo[93295]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:58:45 compute-0 sudo[93295]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:45 compute-0 sudo[93320]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid e399cf45-e6b6-5393-99f1-75c601d3f188
Jan 20 13:58:45 compute-0 sudo[93320]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:58:46 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e48 do_prune osdmap full prune enabled
Jan 20 13:58:46 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e49 e49: 3 total, 3 up, 3 in
Jan 20 13:58:46 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e49: 3 total, 3 up, 3 in
Jan 20 13:58:46 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0) v1
Jan 20 13:58:46 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4243338850' entity='client.rgw.rgw.compute-0.kiggjh' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 20 13:58:46 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0) v1
Jan 20 13:58:46 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.orkqpg' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 20 13:58:46 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0) v1
Jan 20 13:58:46 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.ktpnzt' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 20 13:58:46 compute-0 podman[93384]: 2026-01-20 13:58:46.259605579 +0000 UTC m=+0.058105154 container create a430278d2075c60561526cf643e8394a29b53919c5679d761a56b44f399b0e21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_mclean, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 13:58:46 compute-0 systemd[1]: Started libpod-conmon-a430278d2075c60561526cf643e8394a29b53919c5679d761a56b44f399b0e21.scope.
Jan 20 13:58:46 compute-0 podman[93384]: 2026-01-20 13:58:46.237153125 +0000 UTC m=+0.035652730 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 13:58:46 compute-0 systemd[1]: Started libcrun container.
Jan 20 13:58:46 compute-0 podman[93384]: 2026-01-20 13:58:46.350181567 +0000 UTC m=+0.148681192 container init a430278d2075c60561526cf643e8394a29b53919c5679d761a56b44f399b0e21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_mclean, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 20 13:58:46 compute-0 podman[93384]: 2026-01-20 13:58:46.36217083 +0000 UTC m=+0.160670395 container start a430278d2075c60561526cf643e8394a29b53919c5679d761a56b44f399b0e21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_mclean, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 13:58:46 compute-0 podman[93384]: 2026-01-20 13:58:46.366009172 +0000 UTC m=+0.164508767 container attach a430278d2075c60561526cf643e8394a29b53919c5679d761a56b44f399b0e21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_mclean, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 20 13:58:46 compute-0 busy_mclean[93400]: 167 167
Jan 20 13:58:46 compute-0 systemd[1]: libpod-a430278d2075c60561526cf643e8394a29b53919c5679d761a56b44f399b0e21.scope: Deactivated successfully.
Jan 20 13:58:46 compute-0 podman[93384]: 2026-01-20 13:58:46.367606236 +0000 UTC m=+0.166105791 container died a430278d2075c60561526cf643e8394a29b53919c5679d761a56b44f399b0e21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_mclean, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 20 13:58:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-25341adf7702248afc95bc3aea7998a5b757e667a573004dbd81aebac23fb068-merged.mount: Deactivated successfully.
Jan 20 13:58:46 compute-0 podman[93384]: 2026-01-20 13:58:46.404331724 +0000 UTC m=+0.202831289 container remove a430278d2075c60561526cf643e8394a29b53919c5679d761a56b44f399b0e21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_mclean, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 20 13:58:46 compute-0 systemd[1]: libpod-conmon-a430278d2075c60561526cf643e8394a29b53919c5679d761a56b44f399b0e21.scope: Deactivated successfully.
Jan 20 13:58:46 compute-0 systemd[1]: Reloading.
Jan 20 13:58:46 compute-0 systemd-rc-local-generator[93446]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 13:58:46 compute-0 systemd-sysv-generator[93451]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 13:58:46 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).mds e3 new map
Jan 20 13:58:46 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).mds e3 print_map
                                           e3
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        2
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-01-20T13:58:19.644785+0000
                                           modified        2026-01-20T13:58:19.644841+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        
                                           up        {}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-2.jyxktq{-1:24178} state up:standby seq 1 addr [v2:192.168.122.102:6804/2187119920,v1:192.168.122.102:6805/2187119920] compat {c=[1],r=[1],i=[7ff]}]
Jan 20 13:58:46 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.102:6804/2187119920,v1:192.168.122.102:6805/2187119920] up:boot
Jan 20 13:58:46 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).mds e3 assigned standby [v2:192.168.122.102:6804/2187119920,v1:192.168.122.102:6805/2187119920] as mds.0
Jan 20 13:58:46 compute-0 ceph-mon[74360]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-2.jyxktq assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Jan 20 13:58:46 compute-0 ceph-mon[74360]: log_channel(cluster) log [INF] : Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Jan 20 13:58:46 compute-0 ceph-mon[74360]: log_channel(cluster) log [INF] : Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Jan 20 13:58:46 compute-0 ceph-mon[74360]: log_channel(cluster) log [INF] : Cluster is now healthy
Jan 20 13:58:46 compute-0 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 20 13:58:46 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : fsmap cephfs:0 1 up:standby
Jan 20 13:58:46 compute-0 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 20 13:58:46 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-2.jyxktq"} v 0) v1
Jan 20 13:58:46 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.jyxktq"}]: dispatch
Jan 20 13:58:46 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).mds e3 all = 0
Jan 20 13:58:46 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).mds e4 new map
Jan 20 13:58:46 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).mds e4 print_map
                                           e4
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        4
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-01-20T13:58:19.644785+0000
                                           modified        2026-01-20T13:58:46.558090+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        0
                                           up        {0=24178}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           [mds.cephfs.compute-2.jyxktq{0:24178} state up:creating seq 1 addr [v2:192.168.122.102:6804/2187119920,v1:192.168.122.102:6805/2187119920] compat {c=[1],r=[1],i=[7ff]}]
                                            
                                            
Jan 20 13:58:46 compute-0 ceph-mon[74360]: pgmap v127: 196 pgs: 1 unknown, 195 active+clean; 450 KiB data, 80 MiB used, 21 GiB / 21 GiB avail; 3.0 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 20 13:58:46 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:46 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:46 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:46 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.znrafi", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Jan 20 13:58:46 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.znrafi", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Jan 20 13:58:46 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 13:58:46 compute-0 ceph-mon[74360]: Deploying daemon mds.cephfs.compute-0.znrafi on compute-0
Jan 20 13:58:46 compute-0 ceph-mon[74360]: osdmap e49: 3 total, 3 up, 3 in
Jan 20 13:58:46 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/4243338850' entity='client.rgw.rgw.compute-0.kiggjh' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 20 13:58:46 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/1523026806' entity='client.rgw.rgw.compute-1.orkqpg' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 20 13:58:46 compute-0 ceph-mon[74360]: from='client.? ' entity='client.rgw.rgw.compute-1.orkqpg' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 20 13:58:46 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/159360274' entity='client.rgw.rgw.compute-2.ktpnzt' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 20 13:58:46 compute-0 ceph-mon[74360]: from='client.? ' entity='client.rgw.rgw.compute-2.ktpnzt' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 20 13:58:46 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.jyxktq=up:creating}
Jan 20 13:58:46 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 49 pg[11.0( empty local-lis/les=0/0 n=0 ec=49/49 lis/c=0/0 les/c/f=0/0/0 sis=49) [0] r=0 lpr=49 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:58:46 compute-0 ceph-mon[74360]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-2.jyxktq is now active in filesystem cephfs as rank 0
Jan 20 13:58:46 compute-0 systemd[1]: Reloading.
Jan 20 13:58:46 compute-0 systemd-rc-local-generator[93484]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 13:58:46 compute-0 systemd-sysv-generator[93488]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 13:58:47 compute-0 systemd[1]: Starting Ceph mds.cephfs.compute-0.znrafi for e399cf45-e6b6-5393-99f1-75c601d3f188...
Jan 20 13:58:47 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e49 do_prune osdmap full prune enabled
Jan 20 13:58:47 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4243338850' entity='client.rgw.rgw.compute-0.kiggjh' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Jan 20 13:58:47 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.orkqpg' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Jan 20 13:58:47 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.ktpnzt' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Jan 20 13:58:47 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e50 e50: 3 total, 3 up, 3 in
Jan 20 13:58:47 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e50: 3 total, 3 up, 3 in
Jan 20 13:58:47 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0) v1
Jan 20 13:58:47 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4243338850' entity='client.rgw.rgw.compute-0.kiggjh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 20 13:58:47 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 50 pg[11.0( empty local-lis/les=49/50 n=0 ec=49/49 lis/c=0/0 les/c/f=0/0/0 sis=49) [0] r=0 lpr=49 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:58:47 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0) v1
Jan 20 13:58:47 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.orkqpg' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 20 13:58:47 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0) v1
Jan 20 13:58:47 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.ktpnzt' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 20 13:58:47 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v130: 197 pgs: 1 unknown, 196 active+clean; 450 KiB data, 80 MiB used, 21 GiB / 21 GiB avail; 3.0 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 20 13:58:47 compute-0 podman[93548]: 2026-01-20 13:58:47.293724957 +0000 UTC m=+0.045842135 container create 54a4a1369bdfc2ce487fb99a088918e7a98791cfa33283998c29313720869508 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mds-cephfs-compute-0-znrafi, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 13:58:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f661e9989c97e0e2b3260af224f1d540fe0db08619fe6721a0ff23810c543398/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 13:58:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f661e9989c97e0e2b3260af224f1d540fe0db08619fe6721a0ff23810c543398/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 13:58:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f661e9989c97e0e2b3260af224f1d540fe0db08619fe6721a0ff23810c543398/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 13:58:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f661e9989c97e0e2b3260af224f1d540fe0db08619fe6721a0ff23810c543398/merged/var/lib/ceph/mds/ceph-cephfs.compute-0.znrafi supports timestamps until 2038 (0x7fffffff)
Jan 20 13:58:47 compute-0 podman[93548]: 2026-01-20 13:58:47.356526567 +0000 UTC m=+0.108643755 container init 54a4a1369bdfc2ce487fb99a088918e7a98791cfa33283998c29313720869508 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mds-cephfs-compute-0-znrafi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True)
Jan 20 13:58:47 compute-0 podman[93548]: 2026-01-20 13:58:47.274705704 +0000 UTC m=+0.026822922 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 13:58:47 compute-0 podman[93548]: 2026-01-20 13:58:47.370036731 +0000 UTC m=+0.122153909 container start 54a4a1369bdfc2ce487fb99a088918e7a98791cfa33283998c29313720869508 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mds-cephfs-compute-0-znrafi, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 20 13:58:47 compute-0 bash[93548]: 54a4a1369bdfc2ce487fb99a088918e7a98791cfa33283998c29313720869508
Jan 20 13:58:47 compute-0 systemd[1]: Started Ceph mds.cephfs.compute-0.znrafi for e399cf45-e6b6-5393-99f1-75c601d3f188.
Jan 20 13:58:47 compute-0 ceph-mds[93566]: set uid:gid to 167:167 (ceph:ceph)
Jan 20 13:58:47 compute-0 ceph-mds[93566]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mds, pid 2
Jan 20 13:58:47 compute-0 ceph-mds[93566]: main not setting numa affinity
Jan 20 13:58:47 compute-0 ceph-mds[93566]: pidfile_write: ignore empty --pid-file
Jan 20 13:58:47 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mds-cephfs-compute-0-znrafi[93562]: starting mds.cephfs.compute-0.znrafi at 
Jan 20 13:58:47 compute-0 ceph-mds[93566]: mds.cephfs.compute-0.znrafi Updating MDS map to version 4 from mon.0
Jan 20 13:58:47 compute-0 sudo[93320]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:47 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 13:58:47 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:47 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 13:58:47 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:47 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Jan 20 13:58:47 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:47 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.rtofcx", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0) v1
Jan 20 13:58:47 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.rtofcx", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Jan 20 13:58:47 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.rtofcx", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Jan 20 13:58:47 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 13:58:47 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 13:58:47 compute-0 ceph-mgr[74653]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-1.rtofcx on compute-1
Jan 20 13:58:47 compute-0 ceph-mgr[74653]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-1.rtofcx on compute-1
Jan 20 13:58:47 compute-0 ceph-mon[74360]: mds.? [v2:192.168.122.102:6804/2187119920,v1:192.168.122.102:6805/2187119920] up:boot
Jan 20 13:58:47 compute-0 ceph-mon[74360]: daemon mds.cephfs.compute-2.jyxktq assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Jan 20 13:58:47 compute-0 ceph-mon[74360]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Jan 20 13:58:47 compute-0 ceph-mon[74360]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Jan 20 13:58:47 compute-0 ceph-mon[74360]: Cluster is now healthy
Jan 20 13:58:47 compute-0 ceph-mon[74360]: fsmap cephfs:0 1 up:standby
Jan 20 13:58:47 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.jyxktq"}]: dispatch
Jan 20 13:58:47 compute-0 ceph-mon[74360]: fsmap cephfs:1 {0=cephfs.compute-2.jyxktq=up:creating}
Jan 20 13:58:47 compute-0 ceph-mon[74360]: daemon mds.cephfs.compute-2.jyxktq is now active in filesystem cephfs as rank 0
Jan 20 13:58:47 compute-0 ceph-mon[74360]: 4.6 deep-scrub starts
Jan 20 13:58:47 compute-0 ceph-mon[74360]: 4.6 deep-scrub ok
Jan 20 13:58:47 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/4243338850' entity='client.rgw.rgw.compute-0.kiggjh' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Jan 20 13:58:47 compute-0 ceph-mon[74360]: from='client.? ' entity='client.rgw.rgw.compute-1.orkqpg' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Jan 20 13:58:47 compute-0 ceph-mon[74360]: from='client.? ' entity='client.rgw.rgw.compute-2.ktpnzt' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Jan 20 13:58:47 compute-0 ceph-mon[74360]: osdmap e50: 3 total, 3 up, 3 in
Jan 20 13:58:47 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/4243338850' entity='client.rgw.rgw.compute-0.kiggjh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 20 13:58:47 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/1523026806' entity='client.rgw.rgw.compute-1.orkqpg' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 20 13:58:47 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/159360274' entity='client.rgw.rgw.compute-2.ktpnzt' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 20 13:58:47 compute-0 ceph-mon[74360]: from='client.? ' entity='client.rgw.rgw.compute-1.orkqpg' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 20 13:58:47 compute-0 ceph-mon[74360]: from='client.? ' entity='client.rgw.rgw.compute-2.ktpnzt' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 20 13:58:47 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:47 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:47 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:47 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.rtofcx", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Jan 20 13:58:47 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.rtofcx", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Jan 20 13:58:47 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 13:58:47 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).mds e5 new map
Jan 20 13:58:47 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).mds e5 print_map
                                           e5
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        5
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-01-20T13:58:19.644785+0000
                                           modified        2026-01-20T13:58:47.570199+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        0
                                           up        {0=24178}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           [mds.cephfs.compute-2.jyxktq{0:24178} state up:active seq 2 addr [v2:192.168.122.102:6804/2187119920,v1:192.168.122.102:6805/2187119920] compat {c=[1],r=[1],i=[7ff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.znrafi{-1:14376} state up:standby seq 1 addr [v2:192.168.122.100:6806/2144836821,v1:192.168.122.100:6807/2144836821] compat {c=[1],r=[1],i=[7ff]}]
Jan 20 13:58:47 compute-0 ceph-mds[93566]: mds.cephfs.compute-0.znrafi Updating MDS map to version 5 from mon.0
Jan 20 13:58:47 compute-0 ceph-mds[93566]: mds.cephfs.compute-0.znrafi Monitors have assigned me to become a standby.
Jan 20 13:58:47 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.102:6804/2187119920,v1:192.168.122.102:6805/2187119920] up:active
Jan 20 13:58:47 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6806/2144836821,v1:192.168.122.100:6807/2144836821] up:boot
Jan 20 13:58:47 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.jyxktq=up:active} 1 up:standby
Jan 20 13:58:47 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-0.znrafi"} v 0) v1
Jan 20 13:58:47 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.znrafi"}]: dispatch
Jan 20 13:58:47 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).mds e5 all = 0
Jan 20 13:58:47 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).mds e6 new map
Jan 20 13:58:47 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).mds e6 print_map
                                           e6
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        5
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-01-20T13:58:19.644785+0000
                                           modified        2026-01-20T13:58:47.570199+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        0
                                           up        {0=24178}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           [mds.cephfs.compute-2.jyxktq{0:24178} state up:active seq 2 addr [v2:192.168.122.102:6804/2187119920,v1:192.168.122.102:6805/2187119920] compat {c=[1],r=[1],i=[7ff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.znrafi{-1:14376} state up:standby seq 1 addr [v2:192.168.122.100:6806/2144836821,v1:192.168.122.100:6807/2144836821] compat {c=[1],r=[1],i=[7ff]}]
Jan 20 13:58:47 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.jyxktq=up:active} 1 up:standby
Jan 20 13:58:47 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e50 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 13:58:48 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e50 do_prune osdmap full prune enabled
Jan 20 13:58:48 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4243338850' entity='client.rgw.rgw.compute-0.kiggjh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Jan 20 13:58:48 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.orkqpg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Jan 20 13:58:48 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.ktpnzt' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Jan 20 13:58:48 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e51 e51: 3 total, 3 up, 3 in
Jan 20 13:58:48 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e51: 3 total, 3 up, 3 in
Jan 20 13:58:48 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-rgw-rgw-compute-0-kiggjh[93149]: 2026-01-20T13:58:48.415+0000 7fb6d4ae1940 -1 LDAP not started since no server URIs were provided in the configuration.
Jan 20 13:58:48 compute-0 radosgw[93153]: LDAP not started since no server URIs were provided in the configuration.
Jan 20 13:58:48 compute-0 radosgw[93153]: framework: beast
Jan 20 13:58:48 compute-0 radosgw[93153]: framework conf key: ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
Jan 20 13:58:48 compute-0 radosgw[93153]: framework conf key: ssl_private_key, val: config://rgw/cert/$realm/$zone.key
Jan 20 13:58:48 compute-0 radosgw[93153]: INFO: RGWReshardLock::lock found lock on reshard.0000000001 to be held by another RGW process; skipping for now
Jan 20 13:58:48 compute-0 radosgw[93153]: INFO: RGWReshardLock::lock found lock on reshard.0000000002 to be held by another RGW process; skipping for now
Jan 20 13:58:48 compute-0 radosgw[93153]: INFO: RGWReshardLock::lock found lock on reshard.0000000004 to be held by another RGW process; skipping for now
Jan 20 13:58:48 compute-0 radosgw[93153]: starting handler: beast
Jan 20 13:58:48 compute-0 radosgw[93153]: set uid:gid to 167:167 (ceph:ceph)
Jan 20 13:58:48 compute-0 radosgw[93153]: INFO: RGWReshardLock::lock found lock on reshard.0000000005 to be held by another RGW process; skipping for now
Jan 20 13:58:48 compute-0 radosgw[93153]: INFO: RGWReshardLock::lock found lock on reshard.0000000007 to be held by another RGW process; skipping for now
Jan 20 13:58:48 compute-0 radosgw[93153]: INFO: RGWReshardLock::lock found lock on reshard.0000000008 to be held by another RGW process; skipping for now
Jan 20 13:58:48 compute-0 radosgw[93153]: INFO: RGWReshardLock::lock found lock on reshard.0000000010 to be held by another RGW process; skipping for now
Jan 20 13:58:48 compute-0 radosgw[93153]: mgrc service_daemon_register rgw.14370 metadata {arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),ceph_version_short=18.2.7,container_hostname=compute-0,container_image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0,cpu=AMD EPYC-Rome Processor,distro=centos,distro_description=CentOS Stream 9,distro_version=9,frontend_config#0=beast endpoint=192.168.122.100:8082,frontend_type#0=beast,hostname=compute-0,id=rgw.compute-0.kiggjh,kernel_description=#1 SMP PREEMPT_DYNAMIC Fri Jan 16 09:19:22 UTC 2026,kernel_version=5.14.0-661.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864308,num_handles=1,os=Linux,pid=2,realm_id=,realm_name=,zone_id=8115d0e5-f46a-4d23-887b-99af6a666d4f,zone_name=default,zonegroup_id=1c9817d6-3061-4a20-aeb7-2a830f7cf40e,zonegroup_name=default}
Jan 20 13:58:48 compute-0 radosgw[93153]: INFO: RGWReshardLock::lock found lock on reshard.0000000011 to be held by another RGW process; skipping for now
Jan 20 13:58:48 compute-0 radosgw[93153]: INFO: RGWReshardLock::lock found lock on reshard.0000000013 to be held by another RGW process; skipping for now
Jan 20 13:58:48 compute-0 radosgw[93153]: INFO: RGWReshardLock::lock found lock on reshard.0000000015 to be held by another RGW process; skipping for now
Jan 20 13:58:48 compute-0 ceph-mon[74360]: pgmap v130: 197 pgs: 1 unknown, 196 active+clean; 450 KiB data, 80 MiB used, 21 GiB / 21 GiB avail; 3.0 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 20 13:58:48 compute-0 ceph-mon[74360]: Deploying daemon mds.cephfs.compute-1.rtofcx on compute-1
Jan 20 13:58:48 compute-0 ceph-mon[74360]: mds.? [v2:192.168.122.102:6804/2187119920,v1:192.168.122.102:6805/2187119920] up:active
Jan 20 13:58:48 compute-0 ceph-mon[74360]: mds.? [v2:192.168.122.100:6806/2144836821,v1:192.168.122.100:6807/2144836821] up:boot
Jan 20 13:58:48 compute-0 ceph-mon[74360]: fsmap cephfs:1 {0=cephfs.compute-2.jyxktq=up:active} 1 up:standby
Jan 20 13:58:48 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.znrafi"}]: dispatch
Jan 20 13:58:48 compute-0 ceph-mon[74360]: fsmap cephfs:1 {0=cephfs.compute-2.jyxktq=up:active} 1 up:standby
Jan 20 13:58:48 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/4243338850' entity='client.rgw.rgw.compute-0.kiggjh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Jan 20 13:58:48 compute-0 ceph-mon[74360]: from='client.? ' entity='client.rgw.rgw.compute-1.orkqpg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Jan 20 13:58:48 compute-0 ceph-mon[74360]: from='client.? ' entity='client.rgw.rgw.compute-2.ktpnzt' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Jan 20 13:58:48 compute-0 ceph-mon[74360]: osdmap e51: 3 total, 3 up, 3 in
Jan 20 13:58:48 compute-0 ceph-mgr[74653]: [progress INFO root] Completed event f1d72837-3a43-4473-a861-b643030a23e6 (Global Recovery Event) in 5 seconds
Jan 20 13:58:48 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 5.a scrub starts
Jan 20 13:58:48 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 5.a scrub ok
Jan 20 13:58:49 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v132: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 21 GiB / 21 GiB avail; 89 KiB/s rd, 11 KiB/s wr, 190 op/s
Jan 20 13:58:49 compute-0 ceph-mon[74360]: 4.2 scrub starts
Jan 20 13:58:49 compute-0 ceph-mon[74360]: 4.2 scrub ok
Jan 20 13:58:49 compute-0 ceph-mon[74360]: 5.a scrub starts
Jan 20 13:58:49 compute-0 ceph-mon[74360]: 5.a scrub ok
Jan 20 13:58:49 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 20 13:58:49 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:49 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 20 13:58:49 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:49 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Jan 20 13:58:49 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:49 compute-0 ceph-mgr[74653]: [progress INFO root] complete: finished ev 624c54b8-afe4-417d-a918-7eacb4f1e91a (Updating mds.cephfs deployment (+3 -> 3))
Jan 20 13:58:49 compute-0 ceph-mgr[74653]: [progress INFO root] Completed event 624c54b8-afe4-417d-a918-7eacb4f1e91a (Updating mds.cephfs deployment (+3 -> 3)) in 6 seconds
Jan 20 13:58:49 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mds_join_fs}] v 0) v1
Jan 20 13:58:49 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:49 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Jan 20 13:58:49 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 4.7 scrub starts
Jan 20 13:58:49 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:49 compute-0 ceph-mgr[74653]: [progress INFO root] update: starting ev 8ab0a7e9-511d-4ec2-8832-34ffb9e7eb3a (Updating ingress.rgw.default deployment (+4 -> 4))
Jan 20 13:58:49 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ingress.rgw.default/monitor_password}] v 0) v1
Jan 20 13:58:49 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 4.7 scrub ok
Jan 20 13:58:49 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:49 compute-0 ceph-mgr[74653]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.rgw.default.compute-0.nqkboe on compute-0
Jan 20 13:58:49 compute-0 ceph-mgr[74653]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.rgw.default.compute-0.nqkboe on compute-0
Jan 20 13:58:49 compute-0 sudo[94129]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:58:49 compute-0 sudo[94129]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:58:49 compute-0 sudo[94129]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:50 compute-0 sudo[94154]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 13:58:50 compute-0 sudo[94154]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:58:50 compute-0 sudo[94154]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:50 compute-0 sudo[94179]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:58:50 compute-0 sudo[94179]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:58:50 compute-0 sudo[94179]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:50 compute-0 sudo[94204]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/haproxy:2.3 --timeout 895 _orch deploy --fsid e399cf45-e6b6-5393-99f1-75c601d3f188
Jan 20 13:58:50 compute-0 sudo[94204]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:58:50 compute-0 ceph-mon[74360]: pgmap v132: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 21 GiB / 21 GiB avail; 89 KiB/s rd, 11 KiB/s wr, 190 op/s
Jan 20 13:58:50 compute-0 ceph-mon[74360]: 7.1 scrub starts
Jan 20 13:58:50 compute-0 ceph-mon[74360]: 7.1 scrub ok
Jan 20 13:58:50 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:50 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:50 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:50 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:50 compute-0 ceph-mon[74360]: 4.7 scrub starts
Jan 20 13:58:50 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:50 compute-0 ceph-mon[74360]: 4.7 scrub ok
Jan 20 13:58:50 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:51 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v133: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 21 GiB / 21 GiB avail; 60 KiB/s rd, 8.0 KiB/s wr, 131 op/s
Jan 20 13:58:51 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).mds e7 new map
Jan 20 13:58:51 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).mds e7 print_map
                                           e7
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        7
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-01-20T13:58:19.644785+0000
                                           modified        2026-01-20T13:58:50.863864+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        0
                                           up        {0=24178}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           [mds.cephfs.compute-2.jyxktq{0:24178} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/2187119920,v1:192.168.122.102:6805/2187119920] compat {c=[1],r=[1],i=[7ff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.znrafi{-1:14376} state up:standby seq 1 addr [v2:192.168.122.100:6806/2144836821,v1:192.168.122.100:6807/2144836821] compat {c=[1],r=[1],i=[7ff]}]
                                           [mds.cephfs.compute-1.rtofcx{-1:24137} state up:standby seq 1 addr [v2:192.168.122.101:6804/2015191638,v1:192.168.122.101:6805/2015191638] compat {c=[1],r=[1],i=[7ff]}]
Jan 20 13:58:51 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.101:6804/2015191638,v1:192.168.122.101:6805/2015191638] up:boot
Jan 20 13:58:51 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.102:6804/2187119920,v1:192.168.122.102:6805/2187119920] up:active
Jan 20 13:58:51 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.jyxktq=up:active} 2 up:standby
Jan 20 13:58:51 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-1.rtofcx"} v 0) v1
Jan 20 13:58:51 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-1.rtofcx"}]: dispatch
Jan 20 13:58:51 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).mds e7 all = 0
Jan 20 13:58:51 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 5.c scrub starts
Jan 20 13:58:51 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 5.c scrub ok
Jan 20 13:58:51 compute-0 ceph-mon[74360]: Deploying daemon haproxy.rgw.default.compute-0.nqkboe on compute-0
Jan 20 13:58:51 compute-0 ceph-mon[74360]: 2.1b scrub starts
Jan 20 13:58:51 compute-0 ceph-mon[74360]: 2.1b scrub ok
Jan 20 13:58:51 compute-0 ceph-mon[74360]: pgmap v133: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 21 GiB / 21 GiB avail; 60 KiB/s rd, 8.0 KiB/s wr, 131 op/s
Jan 20 13:58:51 compute-0 ceph-mon[74360]: mds.? [v2:192.168.122.101:6804/2015191638,v1:192.168.122.101:6805/2015191638] up:boot
Jan 20 13:58:51 compute-0 ceph-mon[74360]: mds.? [v2:192.168.122.102:6804/2187119920,v1:192.168.122.102:6805/2187119920] up:active
Jan 20 13:58:51 compute-0 ceph-mon[74360]: fsmap cephfs:1 {0=cephfs.compute-2.jyxktq=up:active} 2 up:standby
Jan 20 13:58:51 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-1.rtofcx"}]: dispatch
Jan 20 13:58:51 compute-0 ceph-mon[74360]: 5.c scrub starts
Jan 20 13:58:51 compute-0 ceph-mon[74360]: 5.c scrub ok
Jan 20 13:58:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Optimize plan auto_2026-01-20_13:58:52
Jan 20 13:58:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 13:58:52 compute-0 ceph-mgr[74653]: [balancer INFO root] do_upmap
Jan 20 13:58:52 compute-0 ceph-mgr[74653]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.log', 'cephfs.cephfs.meta', 'vms', '.mgr', '.rgw.root', 'backups', 'default.rgw.control', 'volumes', 'images', 'default.rgw.meta']
Jan 20 13:58:52 compute-0 ceph-mgr[74653]: [balancer INFO root] prepared 0/10 changes
Jan 20 13:58:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 13:58:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 13:58:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 13:58:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 13:58:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 13:58:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 13:58:52 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).mds e8 new map
Jan 20 13:58:52 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).mds e8 print_map
                                           e8
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        7
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-01-20T13:58:19.644785+0000
                                           modified        2026-01-20T13:58:50.863864+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        0
                                           up        {0=24178}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           [mds.cephfs.compute-2.jyxktq{0:24178} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/2187119920,v1:192.168.122.102:6805/2187119920] compat {c=[1],r=[1],i=[7ff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.znrafi{-1:14376} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.100:6806/2144836821,v1:192.168.122.100:6807/2144836821] compat {c=[1],r=[1],i=[7ff]}]
                                           [mds.cephfs.compute-1.rtofcx{-1:24137} state up:standby seq 1 addr [v2:192.168.122.101:6804/2015191638,v1:192.168.122.101:6805/2015191638] compat {c=[1],r=[1],i=[7ff]}]
Jan 20 13:58:52 compute-0 ceph-mds[93566]: mds.cephfs.compute-0.znrafi Updating MDS map to version 8 from mon.0
Jan 20 13:58:52 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6806/2144836821,v1:192.168.122.100:6807/2144836821] up:standby
Jan 20 13:58:52 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.jyxktq=up:active} 2 up:standby
Jan 20 13:58:52 compute-0 podman[94271]: 2026-01-20 13:58:52.811154317 +0000 UTC m=+2.417900575 container create de7d3380f0e6fb03a16c90a4afa2baad73051730fa61e0b770009513c92b64e4 (image=quay.io/ceph/haproxy:2.3, name=lucid_burnell)
Jan 20 13:58:52 compute-0 podman[94271]: 2026-01-20 13:58:52.795177546 +0000 UTC m=+2.401923834 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Jan 20 13:58:52 compute-0 systemd[1]: Started libpod-conmon-de7d3380f0e6fb03a16c90a4afa2baad73051730fa61e0b770009513c92b64e4.scope.
Jan 20 13:58:52 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 13:58:52 compute-0 systemd[1]: Started libcrun container.
Jan 20 13:58:52 compute-0 podman[94271]: 2026-01-20 13:58:52.893097871 +0000 UTC m=+2.499844149 container init de7d3380f0e6fb03a16c90a4afa2baad73051730fa61e0b770009513c92b64e4 (image=quay.io/ceph/haproxy:2.3, name=lucid_burnell)
Jan 20 13:58:52 compute-0 podman[94271]: 2026-01-20 13:58:52.89901193 +0000 UTC m=+2.505758188 container start de7d3380f0e6fb03a16c90a4afa2baad73051730fa61e0b770009513c92b64e4 (image=quay.io/ceph/haproxy:2.3, name=lucid_burnell)
Jan 20 13:58:52 compute-0 podman[94271]: 2026-01-20 13:58:52.901802206 +0000 UTC m=+2.508548474 container attach de7d3380f0e6fb03a16c90a4afa2baad73051730fa61e0b770009513c92b64e4 (image=quay.io/ceph/haproxy:2.3, name=lucid_burnell)
Jan 20 13:58:52 compute-0 lucid_burnell[94386]: 0 0
Jan 20 13:58:52 compute-0 systemd[1]: libpod-de7d3380f0e6fb03a16c90a4afa2baad73051730fa61e0b770009513c92b64e4.scope: Deactivated successfully.
Jan 20 13:58:52 compute-0 podman[94271]: 2026-01-20 13:58:52.90419597 +0000 UTC m=+2.510942238 container died de7d3380f0e6fb03a16c90a4afa2baad73051730fa61e0b770009513c92b64e4 (image=quay.io/ceph/haproxy:2.3, name=lucid_burnell)
Jan 20 13:58:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-fb6e7583287575e5d0a9ca53071b80b7d626acd17b004b05b7e3c202d3ca8190-merged.mount: Deactivated successfully.
Jan 20 13:58:52 compute-0 podman[94271]: 2026-01-20 13:58:52.934257209 +0000 UTC m=+2.541003477 container remove de7d3380f0e6fb03a16c90a4afa2baad73051730fa61e0b770009513c92b64e4 (image=quay.io/ceph/haproxy:2.3, name=lucid_burnell)
Jan 20 13:58:52 compute-0 systemd[1]: libpod-conmon-de7d3380f0e6fb03a16c90a4afa2baad73051730fa61e0b770009513c92b64e4.scope: Deactivated successfully.
Jan 20 13:58:52 compute-0 systemd[1]: Reloading.
Jan 20 13:58:53 compute-0 systemd-rc-local-generator[94430]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 13:58:53 compute-0 systemd-sysv-generator[94436]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 13:58:53 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v134: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 21 GiB / 21 GiB avail; 51 KiB/s rd, 6.8 KiB/s wr, 111 op/s
Jan 20 13:58:53 compute-0 systemd[1]: Reloading.
Jan 20 13:58:53 compute-0 systemd-rc-local-generator[94469]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 13:58:53 compute-0 systemd-sysv-generator[94472]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 13:58:53 compute-0 systemd[1]: Starting Ceph haproxy.rgw.default.compute-0.nqkboe for e399cf45-e6b6-5393-99f1-75c601d3f188...
Jan 20 13:58:53 compute-0 ceph-mon[74360]: 3.0 scrub starts
Jan 20 13:58:53 compute-0 ceph-mon[74360]: 3.0 scrub ok
Jan 20 13:58:53 compute-0 ceph-mon[74360]: mds.? [v2:192.168.122.100:6806/2144836821,v1:192.168.122.100:6807/2144836821] up:standby
Jan 20 13:58:53 compute-0 ceph-mon[74360]: fsmap cephfs:1 {0=cephfs.compute-2.jyxktq=up:active} 2 up:standby
Jan 20 13:58:53 compute-0 podman[94530]: 2026-01-20 13:58:53.71285076 +0000 UTC m=+0.042543256 container create a2973a514c852ff316e6ad2ff84585210b4ad01d86cdf2de06f634d9390a88e8 (image=quay.io/ceph/haproxy:2.3, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-haproxy-rgw-default-compute-0-nqkboe)
Jan 20 13:58:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/081fd7509c32d20caf014ba3f0b4fd2e3acf3264f8ee0a621b97069887bff736/merged/var/lib/haproxy supports timestamps until 2038 (0x7fffffff)
Jan 20 13:58:53 compute-0 ceph-mgr[74653]: [progress INFO root] Writing back 15 completed events
Jan 20 13:58:53 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Jan 20 13:58:53 compute-0 podman[94530]: 2026-01-20 13:58:53.774349405 +0000 UTC m=+0.104041931 container init a2973a514c852ff316e6ad2ff84585210b4ad01d86cdf2de06f634d9390a88e8 (image=quay.io/ceph/haproxy:2.3, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-haproxy-rgw-default-compute-0-nqkboe)
Jan 20 13:58:53 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:53 compute-0 podman[94530]: 2026-01-20 13:58:53.780003207 +0000 UTC m=+0.109695703 container start a2973a514c852ff316e6ad2ff84585210b4ad01d86cdf2de06f634d9390a88e8 (image=quay.io/ceph/haproxy:2.3, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-haproxy-rgw-default-compute-0-nqkboe)
Jan 20 13:58:53 compute-0 bash[94530]: a2973a514c852ff316e6ad2ff84585210b4ad01d86cdf2de06f634d9390a88e8
Jan 20 13:58:53 compute-0 podman[94530]: 2026-01-20 13:58:53.691985159 +0000 UTC m=+0.021677705 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Jan 20 13:58:53 compute-0 systemd[1]: Started Ceph haproxy.rgw.default.compute-0.nqkboe for e399cf45-e6b6-5393-99f1-75c601d3f188.
Jan 20 13:58:53 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-haproxy-rgw-default-compute-0-nqkboe[94545]: [NOTICE] 019/135853 (2) : New worker #1 (4) forked
Jan 20 13:58:53 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 13:58:53 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.002000053s ======
Jan 20 13:58:53 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:13:58:53.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Jan 20 13:58:53 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 5.14 scrub starts
Jan 20 13:58:53 compute-0 sudo[94204]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:53 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 13:58:53 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 5.14 scrub ok
Jan 20 13:58:53 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:53 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 13:58:53 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:53 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0) v1
Jan 20 13:58:53 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:53 compute-0 ceph-mgr[74653]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.rgw.default.compute-2.cuokcs on compute-2
Jan 20 13:58:53 compute-0 ceph-mgr[74653]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.rgw.default.compute-2.cuokcs on compute-2
Jan 20 13:58:54 compute-0 ceph-mon[74360]: pgmap v134: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 21 GiB / 21 GiB avail; 51 KiB/s rd, 6.8 KiB/s wr, 111 op/s
Jan 20 13:58:54 compute-0 ceph-mon[74360]: 7.7 scrub starts
Jan 20 13:58:54 compute-0 ceph-mon[74360]: 7.7 scrub ok
Jan 20 13:58:54 compute-0 ceph-mon[74360]: 5.d scrub starts
Jan 20 13:58:54 compute-0 ceph-mon[74360]: 5.d scrub ok
Jan 20 13:58:54 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:54 compute-0 ceph-mon[74360]: 5.14 scrub starts
Jan 20 13:58:54 compute-0 ceph-mon[74360]: 5.14 scrub ok
Jan 20 13:58:54 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:54 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:54 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:54 compute-0 ceph-mon[74360]: Deploying daemon haproxy.rgw.default.compute-2.cuokcs on compute-2
Jan 20 13:58:54 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 4.b scrub starts
Jan 20 13:58:54 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 4.b scrub ok
Jan 20 13:58:54 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).mds e9 new map
Jan 20 13:58:54 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).mds e9 print_map
                                           e9
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        7
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-01-20T13:58:19.644785+0000
                                           modified        2026-01-20T13:58:50.863864+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        0
                                           up        {0=24178}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           [mds.cephfs.compute-2.jyxktq{0:24178} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/2187119920,v1:192.168.122.102:6805/2187119920] compat {c=[1],r=[1],i=[7ff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.znrafi{-1:14376} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.100:6806/2144836821,v1:192.168.122.100:6807/2144836821] compat {c=[1],r=[1],i=[7ff]}]
                                           [mds.cephfs.compute-1.rtofcx{-1:24137} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.101:6804/2015191638,v1:192.168.122.101:6805/2015191638] compat {c=[1],r=[1],i=[7ff]}]
Jan 20 13:58:54 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.101:6804/2015191638,v1:192.168.122.101:6805/2015191638] up:standby
Jan 20 13:58:54 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.jyxktq=up:active} 2 up:standby
Jan 20 13:58:55 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v135: 197 pgs: 197 active+clean; 456 KiB data, 85 MiB used, 21 GiB / 21 GiB avail; 154 KiB/s rd, 6.0 KiB/s wr, 290 op/s
Jan 20 13:58:55 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 13:58:55 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 13:58:55 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:13:58:55.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 13:58:55 compute-0 ceph-mon[74360]: 4.b scrub starts
Jan 20 13:58:55 compute-0 ceph-mon[74360]: 4.b scrub ok
Jan 20 13:58:55 compute-0 ceph-mon[74360]: mds.? [v2:192.168.122.101:6804/2015191638,v1:192.168.122.101:6805/2015191638] up:standby
Jan 20 13:58:55 compute-0 ceph-mon[74360]: fsmap cephfs:1 {0=cephfs.compute-2.jyxktq=up:active} 2 up:standby
Jan 20 13:58:56 compute-0 ceph-mon[74360]: pgmap v135: 197 pgs: 197 active+clean; 456 KiB data, 85 MiB used, 21 GiB / 21 GiB avail; 154 KiB/s rd, 6.0 KiB/s wr, 290 op/s
Jan 20 13:58:56 compute-0 ceph-mon[74360]: 7.c scrub starts
Jan 20 13:58:56 compute-0 ceph-mon[74360]: 7.c scrub ok
Jan 20 13:58:56 compute-0 ceph-mon[74360]: 2.a scrub starts
Jan 20 13:58:56 compute-0 ceph-mon[74360]: 2.a scrub ok
Jan 20 13:58:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 13:58:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 13:58:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 13:58:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 13:58:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 13:58:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 13:58:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 13:58:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 13:58:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 13:58:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 13:58:57 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v136: 197 pgs: 197 active+clean; 456 KiB data, 85 MiB used, 21 GiB / 21 GiB avail; 123 KiB/s rd, 4.8 KiB/s wr, 233 op/s
Jan 20 13:58:57 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 13:58:57 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 13:58:57 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:13:58:57.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 13:58:57 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 13:58:57 compute-0 ceph-mon[74360]: 2.c deep-scrub starts
Jan 20 13:58:57 compute-0 ceph-mon[74360]: 2.c deep-scrub ok
Jan 20 13:58:57 compute-0 ceph-mon[74360]: pgmap v136: 197 pgs: 197 active+clean; 456 KiB data, 85 MiB used, 21 GiB / 21 GiB avail; 123 KiB/s rd, 4.8 KiB/s wr, 233 op/s
Jan 20 13:58:58 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 13:58:58 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 13:58:58 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:13:58:58.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 13:58:58 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 20 13:58:58 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:58 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 20 13:58:58 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:58 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0) v1
Jan 20 13:58:58 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:58 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ingress.rgw.default/keepalived_password}] v 0) v1
Jan 20 13:58:58 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:58 compute-0 ceph-mgr[74653]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 20 13:58:58 compute-0 ceph-mgr[74653]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 20 13:58:58 compute-0 ceph-mgr[74653]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Jan 20 13:58:58 compute-0 ceph-mgr[74653]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Jan 20 13:58:58 compute-0 ceph-mgr[74653]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.rgw.default.compute-0.gcjsxe on compute-0
Jan 20 13:58:58 compute-0 ceph-mgr[74653]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.rgw.default.compute-0.gcjsxe on compute-0
Jan 20 13:58:58 compute-0 sudo[94559]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:58:58 compute-0 sudo[94559]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:58:58 compute-0 sudo[94559]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:58 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 5.17 scrub starts
Jan 20 13:58:58 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 5.17 scrub ok
Jan 20 13:58:58 compute-0 sudo[94584]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 13:58:58 compute-0 sudo[94584]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:58:58 compute-0 sudo[94584]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:58 compute-0 sudo[94609]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:58:58 compute-0 sudo[94609]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:58:58 compute-0 sudo[94609]: pam_unix(sudo:session): session closed for user root
Jan 20 13:58:58 compute-0 sudo[94634]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/keepalived:2.2.4 --timeout 895 _orch deploy --fsid e399cf45-e6b6-5393-99f1-75c601d3f188
Jan 20 13:58:58 compute-0 ceph-mon[74360]: 2.d scrub starts
Jan 20 13:58:58 compute-0 ceph-mon[74360]: 2.d scrub ok
Jan 20 13:58:58 compute-0 sudo[94634]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:58:58 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:58 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:58 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:58 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:58:58 compute-0 ceph-mon[74360]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 20 13:58:58 compute-0 ceph-mon[74360]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Jan 20 13:58:58 compute-0 ceph-mon[74360]: Deploying daemon keepalived.rgw.default.compute-0.gcjsxe on compute-0
Jan 20 13:58:58 compute-0 ceph-mon[74360]: 5.17 scrub starts
Jan 20 13:58:58 compute-0 ceph-mon[74360]: 5.17 scrub ok
Jan 20 13:58:59 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v137: 197 pgs: 197 active+clean; 456 KiB data, 85 MiB used, 21 GiB / 21 GiB avail; 112 KiB/s rd, 4.4 KiB/s wr, 211 op/s
Jan 20 13:58:59 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 13:58:59 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 13:58:59 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:13:58:59.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 13:59:00 compute-0 ceph-mon[74360]: 7.d scrub starts
Jan 20 13:59:00 compute-0 ceph-mon[74360]: 7.d scrub ok
Jan 20 13:59:00 compute-0 ceph-mon[74360]: pgmap v137: 197 pgs: 197 active+clean; 456 KiB data, 85 MiB used, 21 GiB / 21 GiB avail; 112 KiB/s rd, 4.4 KiB/s wr, 211 op/s
Jan 20 13:59:00 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 13:59:00 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 13:59:00 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:13:59:00.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 13:59:00 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 5.19 scrub starts
Jan 20 13:59:00 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 5.19 scrub ok
Jan 20 13:59:01 compute-0 ceph-mon[74360]: 7.12 scrub starts
Jan 20 13:59:01 compute-0 ceph-mon[74360]: 7.12 scrub ok
Jan 20 13:59:01 compute-0 ceph-mon[74360]: 5.19 scrub starts
Jan 20 13:59:01 compute-0 ceph-mon[74360]: 5.19 scrub ok
Jan 20 13:59:01 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v138: 197 pgs: 197 active+clean; 456 KiB data, 85 MiB used, 21 GiB / 21 GiB avail; 73 KiB/s rd, 170 B/s wr, 129 op/s
Jan 20 13:59:01 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 4.f scrub starts
Jan 20 13:59:01 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 4.f scrub ok
Jan 20 13:59:01 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 13:59:01 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 13:59:01 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:13:59:01.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 13:59:02 compute-0 ceph-mon[74360]: pgmap v138: 197 pgs: 197 active+clean; 456 KiB data, 85 MiB used, 21 GiB / 21 GiB avail; 73 KiB/s rd, 170 B/s wr, 129 op/s
Jan 20 13:59:02 compute-0 ceph-mon[74360]: 4.f scrub starts
Jan 20 13:59:02 compute-0 ceph-mon[74360]: 4.f scrub ok
Jan 20 13:59:02 compute-0 podman[94699]: 2026-01-20 13:59:02.450617344 +0000 UTC m=+3.100258416 container create 6d6add83ac17f12095efc9d0b6bbca69ee81ca969838af7305752012efcaf942 (image=quay.io/ceph/keepalived:2.2.4, name=recursing_goldberg, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.28.2, vendor=Red Hat, Inc., architecture=x86_64, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, release=1793, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.component=keepalived-container, summary=Provides keepalived on RHEL 9 for Ceph., vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2023-02-22T09:23:20, io.openshift.tags=Ceph keepalived, distribution-scope=public, io.openshift.expose-services=, name=keepalived, description=keepalived for Ceph, version=2.2.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Jan 20 13:59:02 compute-0 systemd[1]: Started libpod-conmon-6d6add83ac17f12095efc9d0b6bbca69ee81ca969838af7305752012efcaf942.scope.
Jan 20 13:59:02 compute-0 podman[94699]: 2026-01-20 13:59:02.427884332 +0000 UTC m=+3.077525474 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Jan 20 13:59:02 compute-0 systemd[1]: Started libcrun container.
Jan 20 13:59:02 compute-0 podman[94699]: 2026-01-20 13:59:02.546613587 +0000 UTC m=+3.196254739 container init 6d6add83ac17f12095efc9d0b6bbca69ee81ca969838af7305752012efcaf942 (image=quay.io/ceph/keepalived:2.2.4, name=recursing_goldberg, name=keepalived, version=2.2.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2023-02-22T09:23:20, release=1793, io.k8s.display-name=Keepalived on RHEL 9, io.buildah.version=1.28.2, architecture=x86_64, description=keepalived for Ceph, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.tags=Ceph keepalived, distribution-scope=public, vcs-type=git, vendor=Red Hat, Inc., summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.expose-services=, com.redhat.component=keepalived-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Jan 20 13:59:02 compute-0 podman[94699]: 2026-01-20 13:59:02.556465792 +0000 UTC m=+3.206106894 container start 6d6add83ac17f12095efc9d0b6bbca69ee81ca969838af7305752012efcaf942 (image=quay.io/ceph/keepalived:2.2.4, name=recursing_goldberg, io.openshift.tags=Ceph keepalived, distribution-scope=public, io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, summary=Provides keepalived on RHEL 9 for Ceph., description=keepalived for Ceph, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2023-02-22T09:23:20, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793, name=keepalived, com.redhat.component=keepalived-container, vendor=Red Hat, Inc., architecture=x86_64, version=2.2.4)
Jan 20 13:59:02 compute-0 podman[94699]: 2026-01-20 13:59:02.560646415 +0000 UTC m=+3.210287567 container attach 6d6add83ac17f12095efc9d0b6bbca69ee81ca969838af7305752012efcaf942 (image=quay.io/ceph/keepalived:2.2.4, name=recursing_goldberg, version=2.2.4, build-date=2023-02-22T09:23:20, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793, io.k8s.display-name=Keepalived on RHEL 9, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.expose-services=, io.buildah.version=1.28.2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.component=keepalived-container, vendor=Red Hat, Inc., architecture=x86_64, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, distribution-scope=public, name=keepalived, description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 20 13:59:02 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 13:59:02 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 13:59:02 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:13:59:02.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 13:59:02 compute-0 recursing_goldberg[94797]: 0 0
Jan 20 13:59:02 compute-0 systemd[1]: libpod-6d6add83ac17f12095efc9d0b6bbca69ee81ca969838af7305752012efcaf942.scope: Deactivated successfully.
Jan 20 13:59:02 compute-0 podman[94699]: 2026-01-20 13:59:02.562749201 +0000 UTC m=+3.212390263 container died 6d6add83ac17f12095efc9d0b6bbca69ee81ca969838af7305752012efcaf942 (image=quay.io/ceph/keepalived:2.2.4, name=recursing_goldberg, vcs-type=git, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph., distribution-scope=public, io.buildah.version=1.28.2, name=keepalived, com.redhat.component=keepalived-container, version=2.2.4, build-date=2023-02-22T09:23:20, description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, release=1793)
Jan 20 13:59:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-f290379d3033557f41d14e3ecbad0387d5d279253a4bea7c825b182e6d4f9a47-merged.mount: Deactivated successfully.
Jan 20 13:59:02 compute-0 podman[94699]: 2026-01-20 13:59:02.605904522 +0000 UTC m=+3.255545624 container remove 6d6add83ac17f12095efc9d0b6bbca69ee81ca969838af7305752012efcaf942 (image=quay.io/ceph/keepalived:2.2.4, name=recursing_goldberg, io.k8s.display-name=Keepalived on RHEL 9, vcs-type=git, vendor=Red Hat, Inc., io.buildah.version=1.28.2, name=keepalived, build-date=2023-02-22T09:23:20, architecture=x86_64, description=keepalived for Ceph, io.openshift.tags=Ceph keepalived, release=1793, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.component=keepalived-container, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, version=2.2.4)
Jan 20 13:59:02 compute-0 systemd[1]: libpod-conmon-6d6add83ac17f12095efc9d0b6bbca69ee81ca969838af7305752012efcaf942.scope: Deactivated successfully.
Jan 20 13:59:02 compute-0 systemd[1]: Reloading.
Jan 20 13:59:02 compute-0 systemd-rc-local-generator[94848]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 13:59:02 compute-0 systemd-sysv-generator[94852]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 13:59:02 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 13:59:02 compute-0 systemd[1]: Reloading.
Jan 20 13:59:03 compute-0 systemd-rc-local-generator[94887]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 13:59:03 compute-0 systemd-sysv-generator[94890]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 13:59:03 compute-0 systemd[1]: Starting Ceph keepalived.rgw.default.compute-0.gcjsxe for e399cf45-e6b6-5393-99f1-75c601d3f188...
Jan 20 13:59:03 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v139: 197 pgs: 197 active+clean; 456 KiB data, 85 MiB used, 21 GiB / 21 GiB avail; 73 KiB/s rd, 0 B/s wr, 128 op/s
Jan 20 13:59:03 compute-0 podman[94945]: 2026-01-20 13:59:03.473138289 +0000 UTC m=+0.054243440 container create 216d71b5dad97102f8a0d90f660e553dbff152f4aa28ae453da231535e09b878 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-keepalived-rgw-default-compute-0-gcjsxe, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, distribution-scope=public, release=1793, architecture=x86_64, build-date=2023-02-22T09:23:20, summary=Provides keepalived on RHEL 9 for Ceph., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived, vendor=Red Hat, Inc., vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=keepalived-container, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=keepalived for Ceph, version=2.2.4, name=keepalived, io.buildah.version=1.28.2)
Jan 20 13:59:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7900c5890505e855b0f5cb23820624ef14318547c4515d911e56d53b5c2a650/merged/etc/keepalived/keepalived.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 13:59:03 compute-0 podman[94945]: 2026-01-20 13:59:03.443369968 +0000 UTC m=+0.024475199 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Jan 20 13:59:03 compute-0 podman[94945]: 2026-01-20 13:59:03.539580757 +0000 UTC m=+0.120686018 container init 216d71b5dad97102f8a0d90f660e553dbff152f4aa28ae453da231535e09b878 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-keepalived-rgw-default-compute-0-gcjsxe, vcs-type=git, io.buildah.version=1.28.2, com.redhat.component=keepalived-container, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, distribution-scope=public, release=1793, build-date=2023-02-22T09:23:20, version=2.2.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides keepalived on RHEL 9 for Ceph., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived, name=keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=keepalived for Ceph, vendor=Red Hat, Inc.)
Jan 20 13:59:03 compute-0 podman[94945]: 2026-01-20 13:59:03.543654357 +0000 UTC m=+0.124759548 container start 216d71b5dad97102f8a0d90f660e553dbff152f4aa28ae453da231535e09b878 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-keepalived-rgw-default-compute-0-gcjsxe, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.buildah.version=1.28.2, release=1793, summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.display-name=Keepalived on RHEL 9, architecture=x86_64, distribution-scope=public, build-date=2023-02-22T09:23:20, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=keepalived for Ceph, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, vendor=Red Hat, Inc., version=2.2.4, name=keepalived, com.redhat.component=keepalived-container)
Jan 20 13:59:03 compute-0 bash[94945]: 216d71b5dad97102f8a0d90f660e553dbff152f4aa28ae453da231535e09b878
Jan 20 13:59:03 compute-0 systemd[1]: Started Ceph keepalived.rgw.default.compute-0.gcjsxe for e399cf45-e6b6-5393-99f1-75c601d3f188.
Jan 20 13:59:03 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-keepalived-rgw-default-compute-0-gcjsxe[94961]: Tue Jan 20 13:59:03 2026: Starting Keepalived v2.2.4 (08/21,2021)
Jan 20 13:59:03 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-keepalived-rgw-default-compute-0-gcjsxe[94961]: Tue Jan 20 13:59:03 2026: Running on Linux 5.14.0-661.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Fri Jan 16 09:19:22 UTC 2026 (built for Linux 5.14.0)
Jan 20 13:59:03 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-keepalived-rgw-default-compute-0-gcjsxe[94961]: Tue Jan 20 13:59:03 2026: Command line: '/usr/sbin/keepalived' '-n' '-l' '-f' '/etc/keepalived/keepalived.conf'
Jan 20 13:59:03 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-keepalived-rgw-default-compute-0-gcjsxe[94961]: Tue Jan 20 13:59:03 2026: Configuration file /etc/keepalived/keepalived.conf
Jan 20 13:59:03 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-keepalived-rgw-default-compute-0-gcjsxe[94961]: Tue Jan 20 13:59:03 2026: NOTICE: setting config option max_auto_priority should result in better keepalived performance
Jan 20 13:59:03 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-keepalived-rgw-default-compute-0-gcjsxe[94961]: Tue Jan 20 13:59:03 2026: Starting VRRP child process, pid=4
Jan 20 13:59:03 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-keepalived-rgw-default-compute-0-gcjsxe[94961]: Tue Jan 20 13:59:03 2026: Startup complete
Jan 20 13:59:03 compute-0 sudo[94634]: pam_unix(sudo:session): session closed for user root
Jan 20 13:59:03 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-keepalived-rgw-default-compute-0-gcjsxe[94961]: Tue Jan 20 13:59:03 2026: (VI_0) Entering BACKUP STATE (init)
Jan 20 13:59:03 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 13:59:03 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-keepalived-rgw-default-compute-0-gcjsxe[94961]: Tue Jan 20 13:59:03 2026: VRRP_Script(check_backend) succeeded
Jan 20 13:59:03 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:59:03 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 13:59:03 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:59:03 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0) v1
Jan 20 13:59:03 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:59:03 compute-0 ceph-mgr[74653]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Jan 20 13:59:03 compute-0 ceph-mgr[74653]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Jan 20 13:59:03 compute-0 ceph-mgr[74653]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 20 13:59:03 compute-0 ceph-mgr[74653]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 20 13:59:03 compute-0 ceph-mgr[74653]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.rgw.default.compute-2.dleeql on compute-2
Jan 20 13:59:03 compute-0 ceph-mgr[74653]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.rgw.default.compute-2.dleeql on compute-2
Jan 20 13:59:03 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 13:59:03 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 13:59:03 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:13:59:03.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 13:59:04 compute-0 ceph-mon[74360]: 7.15 scrub starts
Jan 20 13:59:04 compute-0 ceph-mon[74360]: 7.15 scrub ok
Jan 20 13:59:04 compute-0 ceph-mon[74360]: 7.a scrub starts
Jan 20 13:59:04 compute-0 ceph-mon[74360]: 7.a scrub ok
Jan 20 13:59:04 compute-0 ceph-mon[74360]: pgmap v139: 197 pgs: 197 active+clean; 456 KiB data, 85 MiB used, 21 GiB / 21 GiB avail; 73 KiB/s rd, 0 B/s wr, 128 op/s
Jan 20 13:59:04 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:59:04 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:59:04 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:59:04 compute-0 ceph-mon[74360]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Jan 20 13:59:04 compute-0 ceph-mon[74360]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 20 13:59:04 compute-0 ceph-mon[74360]: Deploying daemon keepalived.rgw.default.compute-2.dleeql on compute-2
Jan 20 13:59:04 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 13:59:04 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 13:59:04 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:13:59:04.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 13:59:05 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v140: 197 pgs: 197 active+clean; 456 KiB data, 85 MiB used, 21 GiB / 21 GiB avail; 73 KiB/s rd, 0 B/s wr, 128 op/s
Jan 20 13:59:05 compute-0 ceph-mon[74360]: 7.17 scrub starts
Jan 20 13:59:05 compute-0 ceph-mon[74360]: 7.17 scrub ok
Jan 20 13:59:05 compute-0 ceph-mon[74360]: 5.b scrub starts
Jan 20 13:59:05 compute-0 ceph-mon[74360]: 5.b scrub ok
Jan 20 13:59:05 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 13:59:05 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 13:59:05 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:13:59:05.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 13:59:06 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 13:59:06 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 13:59:06 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:13:59:06.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 13:59:06 compute-0 ceph-mon[74360]: pgmap v140: 197 pgs: 197 active+clean; 456 KiB data, 85 MiB used, 21 GiB / 21 GiB avail; 73 KiB/s rd, 0 B/s wr, 128 op/s
Jan 20 13:59:06 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 13:59:06 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 13:59:06 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 20 13:59:06 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 13:59:06 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 13:59:06 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 13:59:06 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 13:59:06 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 13:59:06 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 13:59:06 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 13:59:06 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 13:59:06 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 13:59:06 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 20 13:59:06 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 13:59:06 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 13:59:06 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 13:59:06 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 1)
Jan 20 13:59:06 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 13:59:06 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 1)
Jan 20 13:59:06 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 13:59:06 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 20 13:59:06 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 13:59:06 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 0.0 of space, bias 4.0, pg target 0.0 quantized to 32 (current 1)
Jan 20 13:59:06 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} v 0) v1
Jan 20 13:59:06 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Jan 20 13:59:07 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-keepalived-rgw-default-compute-0-gcjsxe[94961]: Tue Jan 20 13:59:07 2026: (VI_0) Entering MASTER STATE
Jan 20 13:59:07 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v141: 197 pgs: 197 active+clean; 456 KiB data, 85 MiB used, 21 GiB / 21 GiB avail
Jan 20 13:59:07 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e51 do_prune osdmap full prune enabled
Jan 20 13:59:07 compute-0 ceph-mon[74360]: 5.8 scrub starts
Jan 20 13:59:07 compute-0 ceph-mon[74360]: 5.8 scrub ok
Jan 20 13:59:07 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Jan 20 13:59:07 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Jan 20 13:59:07 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e52 e52: 3 total, 3 up, 3 in
Jan 20 13:59:07 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e52: 3 total, 3 up, 3 in
Jan 20 13:59:07 compute-0 ceph-mgr[74653]: [progress INFO root] update: starting ev a1d4a715-e644-4ea8-9c5b-d20dfeb04138 (PG autoscaler increasing pool 8 PGs from 1 to 32)
Jan 20 13:59:07 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} v 0) v1
Jan 20 13:59:07 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Jan 20 13:59:07 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 13:59:07 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 13:59:07 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:13:59:07.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 13:59:07 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e52 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 13:59:08 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 13:59:08 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 13:59:08 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:13:59:08.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 13:59:08 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e52 do_prune osdmap full prune enabled
Jan 20 13:59:08 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Jan 20 13:59:08 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e53 e53: 3 total, 3 up, 3 in
Jan 20 13:59:08 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e53: 3 total, 3 up, 3 in
Jan 20 13:59:08 compute-0 ceph-mgr[74653]: [progress INFO root] update: starting ev da8cb037-d6f5-4278-bd90-2691fddeb9d7 (PG autoscaler increasing pool 9 PGs from 1 to 32)
Jan 20 13:59:08 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} v 0) v1
Jan 20 13:59:08 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Jan 20 13:59:08 compute-0 ceph-mon[74360]: pgmap v141: 197 pgs: 197 active+clean; 456 KiB data, 85 MiB used, 21 GiB / 21 GiB avail
Jan 20 13:59:08 compute-0 ceph-mon[74360]: 7.19 scrub starts
Jan 20 13:59:08 compute-0 ceph-mon[74360]: 7.19 scrub ok
Jan 20 13:59:08 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Jan 20 13:59:08 compute-0 ceph-mon[74360]: osdmap e52: 3 total, 3 up, 3 in
Jan 20 13:59:08 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Jan 20 13:59:08 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 5.1d deep-scrub starts
Jan 20 13:59:08 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 5.1d deep-scrub ok
Jan 20 13:59:08 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 20 13:59:09 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:59:09 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 20 13:59:09 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:59:09 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0) v1
Jan 20 13:59:09 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:59:09 compute-0 ceph-mgr[74653]: [progress INFO root] complete: finished ev 8ab0a7e9-511d-4ec2-8832-34ffb9e7eb3a (Updating ingress.rgw.default deployment (+4 -> 4))
Jan 20 13:59:09 compute-0 ceph-mgr[74653]: [progress INFO root] Completed event 8ab0a7e9-511d-4ec2-8832-34ffb9e7eb3a (Updating ingress.rgw.default deployment (+4 -> 4)) in 19 seconds
Jan 20 13:59:09 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0) v1
Jan 20 13:59:09 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v144: 197 pgs: 197 active+clean; 456 KiB data, 102 MiB used, 21 GiB / 21 GiB avail
Jan 20 13:59:09 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} v 0) v1
Jan 20 13:59:09 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 20 13:59:09 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} v 0) v1
Jan 20 13:59:09 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 20 13:59:09 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:59:09 compute-0 sudo[94971]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:59:09 compute-0 sudo[94971]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:59:09 compute-0 sudo[94971]: pam_unix(sudo:session): session closed for user root
Jan 20 13:59:09 compute-0 sudo[94972]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:59:09 compute-0 sudo[94972]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:59:09 compute-0 sudo[94972]: pam_unix(sudo:session): session closed for user root
Jan 20 13:59:09 compute-0 sudo[95021]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 13:59:09 compute-0 sudo[95021]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:59:09 compute-0 sudo[95021]: pam_unix(sudo:session): session closed for user root
Jan 20 13:59:09 compute-0 sudo[95027]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:59:09 compute-0 sudo[95027]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:59:09 compute-0 sudo[95027]: pam_unix(sudo:session): session closed for user root
Jan 20 13:59:09 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e53 do_prune osdmap full prune enabled
Jan 20 13:59:09 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Jan 20 13:59:09 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Jan 20 13:59:09 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Jan 20 13:59:09 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e54 e54: 3 total, 3 up, 3 in
Jan 20 13:59:09 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e54: 3 total, 3 up, 3 in
Jan 20 13:59:09 compute-0 ceph-mgr[74653]: [progress INFO root] update: starting ev 975561b1-8962-4542-a8e3-fd894978e692 (PG autoscaler increasing pool 10 PGs from 1 to 32)
Jan 20 13:59:09 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} v 0) v1
Jan 20 13:59:09 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Jan 20 13:59:09 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 54 pg[9.0( v 51'1000 (0'0,51'1000] local-lis/les=45/46 n=177 ec=45/45 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=13.397859573s) [0] r=0 lpr=54 pi=[45,54)/1 crt=51'1000 lcod 51'999 mlcod 51'999 active pruub 126.239120483s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:59:09 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 54 pg[8.0( v 44'4 (0'0,44'4] local-lis/les=43/44 n=4 ec=43/43 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=11.367815018s) [0] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 44'3 mlcod 44'3 active pruub 124.209106445s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:59:09 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 54 pg[8.0( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=43/43 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=11.367815018s) [0] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 44'3 mlcod 0'0 unknown pruub 124.209106445s@ mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:09 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 54 pg[9.0( v 51'1000 lc 0'0 (0'0,51'1000] local-lis/les=45/46 n=5 ec=45/45 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=13.397859573s) [0] r=0 lpr=54 pi=[45,54)/1 crt=51'1000 lcod 51'999 mlcod 0'0 unknown pruub 126.239120483s@ mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:09 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Jan 20 13:59:09 compute-0 ceph-mon[74360]: osdmap e53: 3 total, 3 up, 3 in
Jan 20 13:59:09 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Jan 20 13:59:09 compute-0 ceph-mon[74360]: 5.1d deep-scrub starts
Jan 20 13:59:09 compute-0 ceph-mon[74360]: 5.1d deep-scrub ok
Jan 20 13:59:09 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:59:09 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:59:09 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:59:09 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 20 13:59:09 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 20 13:59:09 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:59:09 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Jan 20 13:59:09 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Jan 20 13:59:09 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Jan 20 13:59:09 compute-0 ceph-mon[74360]: osdmap e54: 3 total, 3 up, 3 in
Jan 20 13:59:09 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Jan 20 13:59:09 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 13:59:09 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 13:59:09 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:13:59:09.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 13:59:09 compute-0 sudo[95072]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:59:09 compute-0 sudo[95072]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:59:09 compute-0 sudo[95072]: pam_unix(sudo:session): session closed for user root
Jan 20 13:59:09 compute-0 sudo[95097]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 13:59:09 compute-0 sudo[95097]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:59:09 compute-0 sudo[95097]: pam_unix(sudo:session): session closed for user root
Jan 20 13:59:09 compute-0 sudo[95122]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:59:09 compute-0 sudo[95122]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:59:09 compute-0 sudo[95122]: pam_unix(sudo:session): session closed for user root
Jan 20 13:59:10 compute-0 sudo[95147]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Jan 20 13:59:10 compute-0 sudo[95147]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:59:10 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 13:59:10 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 13:59:10 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:13:59:10.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 13:59:10 compute-0 podman[95242]: 2026-01-20 13:59:10.637613848 +0000 UTC m=+0.082629565 container exec a602f19ce9ef2d4922e558036fcbd51fac25abd19d28d7df857e5fbe8f959ba3 (image=quay.io/ceph/ceph:v18, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 13:59:10 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e54 do_prune osdmap full prune enabled
Jan 20 13:59:10 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Jan 20 13:59:10 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e55 e55: 3 total, 3 up, 3 in
Jan 20 13:59:10 compute-0 podman[95242]: 2026-01-20 13:59:10.758068339 +0000 UTC m=+0.203084006 container exec_died a602f19ce9ef2d4922e558036fcbd51fac25abd19d28d7df857e5fbe8f959ba3 (image=quay.io/ceph/ceph:v18, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mon-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 13:59:10 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e55: 3 total, 3 up, 3 in
Jan 20 13:59:10 compute-0 ceph-mgr[74653]: [progress INFO root] update: starting ev 3121d91f-45b5-41e3-9111-9caae05ccf9a (PG autoscaler increasing pool 11 PGs from 1 to 32)
Jan 20 13:59:10 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 55 pg[9.15( v 51'1000 lc 0'0 (0'0,51'1000] local-lis/les=45/46 n=5 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=51'1000 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:10 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 55 pg[8.14( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [0] r=0 lpr=54 pi=[43,54)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:10 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 55 pg[9.14( v 51'1000 lc 0'0 (0'0,51'1000] local-lis/les=45/46 n=5 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=51'1000 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:10 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 55 pg[9.17( v 51'1000 lc 0'0 (0'0,51'1000] local-lis/les=45/46 n=5 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=51'1000 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:10 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 55 pg[8.16( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [0] r=0 lpr=54 pi=[43,54)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:10 compute-0 ceph-mgr[74653]: [progress INFO root] complete: finished ev a1d4a715-e644-4ea8-9c5b-d20dfeb04138 (PG autoscaler increasing pool 8 PGs from 1 to 32)
Jan 20 13:59:10 compute-0 ceph-mgr[74653]: [progress INFO root] Completed event a1d4a715-e644-4ea8-9c5b-d20dfeb04138 (PG autoscaler increasing pool 8 PGs from 1 to 32) in 3 seconds
Jan 20 13:59:10 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 55 pg[9.16( v 51'1000 lc 0'0 (0'0,51'1000] local-lis/les=45/46 n=5 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=51'1000 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:10 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 55 pg[8.17( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [0] r=0 lpr=54 pi=[43,54)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:10 compute-0 ceph-mgr[74653]: [progress INFO root] complete: finished ev da8cb037-d6f5-4278-bd90-2691fddeb9d7 (PG autoscaler increasing pool 9 PGs from 1 to 32)
Jan 20 13:59:10 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 55 pg[9.11( v 51'1000 lc 0'0 (0'0,51'1000] local-lis/les=45/46 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=51'1000 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:10 compute-0 ceph-mgr[74653]: [progress INFO root] Completed event da8cb037-d6f5-4278-bd90-2691fddeb9d7 (PG autoscaler increasing pool 9 PGs from 1 to 32) in 2 seconds
Jan 20 13:59:10 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 55 pg[9.10( v 51'1000 lc 0'0 (0'0,51'1000] local-lis/les=45/46 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=51'1000 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:10 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 55 pg[8.10( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [0] r=0 lpr=54 pi=[43,54)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:10 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 55 pg[8.11( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [0] r=0 lpr=54 pi=[43,54)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:10 compute-0 ceph-mgr[74653]: [progress INFO root] complete: finished ev 975561b1-8962-4542-a8e3-fd894978e692 (PG autoscaler increasing pool 10 PGs from 1 to 32)
Jan 20 13:59:10 compute-0 ceph-mgr[74653]: [progress INFO root] Completed event 975561b1-8962-4542-a8e3-fd894978e692 (PG autoscaler increasing pool 10 PGs from 1 to 32) in 1 seconds
Jan 20 13:59:10 compute-0 ceph-mgr[74653]: [progress INFO root] complete: finished ev 3121d91f-45b5-41e3-9111-9caae05ccf9a (PG autoscaler increasing pool 11 PGs from 1 to 32)
Jan 20 13:59:10 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 55 pg[9.3( v 51'1000 lc 0'0 (0'0,51'1000] local-lis/les=45/46 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=51'1000 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:10 compute-0 ceph-mgr[74653]: [progress INFO root] Completed event 3121d91f-45b5-41e3-9111-9caae05ccf9a (PG autoscaler increasing pool 11 PGs from 1 to 32) in 0 seconds
Jan 20 13:59:10 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 55 pg[8.2( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=1 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [0] r=0 lpr=54 pi=[43,54)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:10 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 55 pg[9.2( v 51'1000 lc 0'0 (0'0,51'1000] local-lis/les=45/46 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=51'1000 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:10 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 55 pg[8.3( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=1 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [0] r=0 lpr=54 pi=[43,54)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:10 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 55 pg[9.e( v 51'1000 lc 0'0 (0'0,51'1000] local-lis/les=45/46 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=51'1000 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:10 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 55 pg[8.f( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [0] r=0 lpr=54 pi=[43,54)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:10 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 55 pg[8.15( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [0] r=0 lpr=54 pi=[43,54)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:10 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 55 pg[9.9( v 51'1000 lc 0'0 (0'0,51'1000] local-lis/les=45/46 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=51'1000 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:10 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 55 pg[9.8( v 51'1000 lc 0'0 (0'0,51'1000] local-lis/les=45/46 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=51'1000 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:10 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 55 pg[8.9( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [0] r=0 lpr=54 pi=[43,54)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:10 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 55 pg[9.b( v 51'1000 lc 0'0 (0'0,51'1000] local-lis/les=45/46 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=51'1000 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:10 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 55 pg[8.a( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [0] r=0 lpr=54 pi=[43,54)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:10 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 55 pg[9.f( v 51'1000 lc 0'0 (0'0,51'1000] local-lis/les=45/46 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=51'1000 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:10 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 55 pg[8.e( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [0] r=0 lpr=54 pi=[43,54)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:10 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 55 pg[9.c( v 51'1000 lc 0'0 (0'0,51'1000] local-lis/les=45/46 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=51'1000 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:10 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 55 pg[8.d( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [0] r=0 lpr=54 pi=[43,54)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:10 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 55 pg[9.d( v 51'1000 lc 0'0 (0'0,51'1000] local-lis/les=45/46 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=51'1000 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:10 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 55 pg[8.c( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [0] r=0 lpr=54 pi=[43,54)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:10 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 55 pg[8.8( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [0] r=0 lpr=54 pi=[43,54)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:10 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 55 pg[8.b( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [0] r=0 lpr=54 pi=[43,54)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:10 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 55 pg[8.1( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=1 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [0] r=0 lpr=54 pi=[43,54)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:10 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 55 pg[9.1( v 51'1000 lc 0'0 (0'0,51'1000] local-lis/les=45/46 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=51'1000 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:10 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 55 pg[9.6( v 51'1000 lc 0'0 (0'0,51'1000] local-lis/les=45/46 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=51'1000 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:10 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 55 pg[8.7( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [0] r=0 lpr=54 pi=[43,54)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:10 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 55 pg[9.a( v 51'1000 lc 0'0 (0'0,51'1000] local-lis/les=45/46 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=51'1000 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:10 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 55 pg[9.7( v 51'1000 lc 0'0 (0'0,51'1000] local-lis/les=45/46 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=51'1000 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:10 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 55 pg[8.6( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [0] r=0 lpr=54 pi=[43,54)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:10 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 55 pg[9.5( v 51'1000 lc 0'0 (0'0,51'1000] local-lis/les=45/46 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=51'1000 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:10 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 55 pg[9.4( v 51'1000 lc 0'0 (0'0,51'1000] local-lis/les=45/46 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=51'1000 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:10 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 55 pg[8.4( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=1 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [0] r=0 lpr=54 pi=[43,54)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:10 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 55 pg[9.1a( v 51'1000 lc 0'0 (0'0,51'1000] local-lis/les=45/46 n=5 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=51'1000 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:10 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 55 pg[8.1b( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [0] r=0 lpr=54 pi=[43,54)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:10 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 55 pg[8.1a( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [0] r=0 lpr=54 pi=[43,54)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:10 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 55 pg[9.1b( v 51'1000 lc 0'0 (0'0,51'1000] local-lis/les=45/46 n=5 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=51'1000 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:10 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 55 pg[8.19( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [0] r=0 lpr=54 pi=[43,54)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:10 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 55 pg[9.18( v 51'1000 lc 0'0 (0'0,51'1000] local-lis/les=45/46 n=5 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=51'1000 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:10 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 55 pg[9.19( v 51'1000 lc 0'0 (0'0,51'1000] local-lis/les=45/46 n=5 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=51'1000 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:10 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 55 pg[8.18( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [0] r=0 lpr=54 pi=[43,54)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:10 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 55 pg[9.1e( v 51'1000 lc 0'0 (0'0,51'1000] local-lis/les=45/46 n=5 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=51'1000 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:10 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 55 pg[8.1f( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [0] r=0 lpr=54 pi=[43,54)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:10 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 55 pg[9.1f( v 51'1000 lc 0'0 (0'0,51'1000] local-lis/les=45/46 n=5 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=51'1000 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:10 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 55 pg[8.5( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [0] r=0 lpr=54 pi=[43,54)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:10 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 55 pg[9.1c( v 51'1000 lc 0'0 (0'0,51'1000] local-lis/les=45/46 n=5 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=51'1000 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:10 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 55 pg[8.1d( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [0] r=0 lpr=54 pi=[43,54)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:10 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 55 pg[9.1d( v 51'1000 lc 0'0 (0'0,51'1000] local-lis/les=45/46 n=5 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=51'1000 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:10 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 55 pg[8.1c( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [0] r=0 lpr=54 pi=[43,54)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:10 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 55 pg[8.1e( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [0] r=0 lpr=54 pi=[43,54)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:10 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 55 pg[9.12( v 51'1000 lc 0'0 (0'0,51'1000] local-lis/les=45/46 n=5 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=51'1000 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:10 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 55 pg[8.13( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [0] r=0 lpr=54 pi=[43,54)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:10 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 55 pg[8.12( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [0] r=0 lpr=54 pi=[43,54)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:10 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 55 pg[9.13( v 51'1000 lc 0'0 (0'0,51'1000] local-lis/les=45/46 n=5 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=51'1000 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:10 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 55 pg[8.14( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [0] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:10 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 55 pg[9.14( v 51'1000 (0'0,51'1000] local-lis/les=54/55 n=5 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=51'1000 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:10 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 55 pg[9.17( v 51'1000 (0'0,51'1000] local-lis/les=54/55 n=5 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=51'1000 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:10 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 55 pg[8.16( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [0] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:10 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 55 pg[8.17( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [0] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:10 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 55 pg[9.16( v 51'1000 (0'0,51'1000] local-lis/les=54/55 n=5 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=51'1000 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:10 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 55 pg[9.11( v 51'1000 (0'0,51'1000] local-lis/les=54/55 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=51'1000 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:10 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 55 pg[8.10( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [0] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:10 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 55 pg[8.11( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [0] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:10 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 55 pg[9.15( v 51'1000 (0'0,51'1000] local-lis/les=54/55 n=5 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=51'1000 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:10 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 55 pg[8.2( v 44'4 (0'0,44'4] local-lis/les=54/55 n=1 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [0] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:10 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 55 pg[9.3( v 51'1000 (0'0,51'1000] local-lis/les=54/55 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=51'1000 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:10 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 55 pg[9.10( v 51'1000 (0'0,51'1000] local-lis/les=54/55 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=51'1000 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:10 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 55 pg[8.3( v 44'4 (0'0,44'4] local-lis/les=54/55 n=1 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [0] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:10 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 55 pg[9.2( v 51'1000 (0'0,51'1000] local-lis/les=54/55 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=51'1000 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:10 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 55 pg[8.f( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [0] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:10 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 55 pg[9.e( v 51'1000 (0'0,51'1000] local-lis/les=54/55 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=51'1000 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:10 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 55 pg[9.9( v 51'1000 (0'0,51'1000] local-lis/les=54/55 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=51'1000 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:10 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 55 pg[9.8( v 51'1000 (0'0,51'1000] local-lis/les=54/55 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=51'1000 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:10 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 55 pg[8.9( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [0] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:10 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 55 pg[9.b( v 51'1000 (0'0,51'1000] local-lis/les=54/55 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=51'1000 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:10 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 55 pg[8.a( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [0] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:10 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 55 pg[9.f( v 51'1000 (0'0,51'1000] local-lis/les=54/55 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=51'1000 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:10 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 55 pg[8.e( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [0] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:10 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 55 pg[9.c( v 51'1000 (0'0,51'1000] local-lis/les=54/55 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=51'1000 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:10 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 55 pg[8.d( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [0] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:10 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 55 pg[8.c( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [0] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:10 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 55 pg[9.d( v 51'1000 (0'0,51'1000] local-lis/les=54/55 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=51'1000 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:10 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 55 pg[8.15( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [0] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:10 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 55 pg[8.1( v 44'4 (0'0,44'4] local-lis/les=54/55 n=1 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [0] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:10 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 55 pg[8.b( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [0] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:10 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 55 pg[9.1( v 51'1000 (0'0,51'1000] local-lis/les=54/55 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=51'1000 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:10 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 55 pg[8.0( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=43/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [0] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 44'3 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:10 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 55 pg[9.0( v 51'1000 (0'0,51'1000] local-lis/les=54/55 n=5 ec=45/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=51'1000 lcod 51'999 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:10 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 55 pg[9.a( v 51'1000 (0'0,51'1000] local-lis/les=54/55 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=51'1000 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:10 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 55 pg[8.8( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [0] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:10 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 55 pg[9.6( v 51'1000 (0'0,51'1000] local-lis/les=54/55 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=51'1000 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:10 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 55 pg[9.7( v 51'1000 (0'0,51'1000] local-lis/les=54/55 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=51'1000 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:10 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 55 pg[8.7( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [0] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:10 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 55 pg[9.5( v 51'1000 (0'0,51'1000] local-lis/les=54/55 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=51'1000 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:10 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 55 pg[8.4( v 44'4 (0'0,44'4] local-lis/les=54/55 n=1 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [0] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:10 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 55 pg[8.6( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [0] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:10 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 55 pg[9.4( v 51'1000 (0'0,51'1000] local-lis/les=54/55 n=6 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=51'1000 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:10 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 55 pg[9.1a( v 51'1000 (0'0,51'1000] local-lis/les=54/55 n=5 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=51'1000 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:10 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 55 pg[8.1b( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [0] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:10 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 55 pg[8.19( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [0] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:10 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 55 pg[9.18( v 51'1000 (0'0,51'1000] local-lis/les=54/55 n=5 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=51'1000 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:10 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 55 pg[9.19( v 51'1000 (0'0,51'1000] local-lis/les=54/55 n=5 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=51'1000 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:10 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 55 pg[9.1b( v 51'1000 (0'0,51'1000] local-lis/les=54/55 n=5 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=51'1000 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:10 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 55 pg[8.18( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [0] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:10 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 55 pg[8.1a( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [0] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:10 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 55 pg[8.5( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [0] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:10 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 55 pg[9.1f( v 51'1000 (0'0,51'1000] local-lis/les=54/55 n=5 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=51'1000 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:10 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 55 pg[8.1d( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [0] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:10 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 55 pg[8.1f( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [0] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:10 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 55 pg[8.1c( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [0] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:10 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 55 pg[8.1e( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [0] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:10 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 55 pg[9.12( v 51'1000 (0'0,51'1000] local-lis/les=54/55 n=5 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=51'1000 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:10 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 55 pg[8.13( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [0] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:10 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 55 pg[8.12( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=43/43 les/c/f=44/44/0 sis=54) [0] r=0 lpr=54 pi=[43,54)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:10 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 55 pg[9.13( v 51'1000 (0'0,51'1000] local-lis/les=54/55 n=5 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=51'1000 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:10 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 55 pg[9.1e( v 51'1000 (0'0,51'1000] local-lis/les=54/55 n=5 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=51'1000 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:10 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 55 pg[9.1c( v 51'1000 (0'0,51'1000] local-lis/les=54/55 n=5 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=51'1000 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:10 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 55 pg[9.1d( v 51'1000 (0'0,51'1000] local-lis/les=54/55 n=5 ec=54/45 lis/c=45/45 les/c/f=46/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=51'1000 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:10 compute-0 ceph-mon[74360]: pgmap v144: 197 pgs: 197 active+clean; 456 KiB data, 102 MiB used, 21 GiB / 21 GiB avail
Jan 20 13:59:10 compute-0 ceph-mon[74360]: 7.1a scrub starts
Jan 20 13:59:10 compute-0 ceph-mon[74360]: 7.1a scrub ok
Jan 20 13:59:10 compute-0 ceph-mon[74360]: 7.14 scrub starts
Jan 20 13:59:10 compute-0 ceph-mon[74360]: 7.14 scrub ok
Jan 20 13:59:10 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Jan 20 13:59:10 compute-0 ceph-mon[74360]: osdmap e55: 3 total, 3 up, 3 in
Jan 20 13:59:10 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 5.1e deep-scrub starts
Jan 20 13:59:10 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 5.1e deep-scrub ok
Jan 20 13:59:11 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v147: 259 pgs: 62 unknown, 197 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Jan 20 13:59:11 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} v 0) v1
Jan 20 13:59:11 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 20 13:59:11 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} v 0) v1
Jan 20 13:59:11 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 20 13:59:11 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 20 13:59:11 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:59:11 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 20 13:59:11 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:59:11 compute-0 podman[95399]: 2026-01-20 13:59:11.539294091 +0000 UTC m=+0.072331037 container exec a2973a514c852ff316e6ad2ff84585210b4ad01d86cdf2de06f634d9390a88e8 (image=quay.io/ceph/haproxy:2.3, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-haproxy-rgw-default-compute-0-nqkboe)
Jan 20 13:59:11 compute-0 podman[95399]: 2026-01-20 13:59:11.549827665 +0000 UTC m=+0.082864611 container exec_died a2973a514c852ff316e6ad2ff84585210b4ad01d86cdf2de06f634d9390a88e8 (image=quay.io/ceph/haproxy:2.3, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-haproxy-rgw-default-compute-0-nqkboe)
Jan 20 13:59:11 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e55 do_prune osdmap full prune enabled
Jan 20 13:59:11 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Jan 20 13:59:11 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Jan 20 13:59:11 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e56 e56: 3 total, 3 up, 3 in
Jan 20 13:59:11 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e56: 3 total, 3 up, 3 in
Jan 20 13:59:11 compute-0 ceph-mon[74360]: 5.1e deep-scrub starts
Jan 20 13:59:11 compute-0 ceph-mon[74360]: 5.1e deep-scrub ok
Jan 20 13:59:11 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 20 13:59:11 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 20 13:59:11 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:59:11 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:59:11 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 13:59:11 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 13:59:11 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:13:59:11.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 13:59:11 compute-0 podman[95467]: 2026-01-20 13:59:11.828486433 +0000 UTC m=+0.072627405 container exec 216d71b5dad97102f8a0d90f660e553dbff152f4aa28ae453da231535e09b878 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-keepalived-rgw-default-compute-0-gcjsxe, architecture=x86_64, com.redhat.component=keepalived-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=Ceph keepalived, vcs-type=git, description=keepalived for Ceph, version=2.2.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, name=keepalived, build-date=2023-02-22T09:23:20, summary=Provides keepalived on RHEL 9 for Ceph., io.buildah.version=1.28.2, io.k8s.display-name=Keepalived on RHEL 9, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, release=1793)
Jan 20 13:59:11 compute-0 podman[95467]: 2026-01-20 13:59:11.844066672 +0000 UTC m=+0.088207624 container exec_died 216d71b5dad97102f8a0d90f660e553dbff152f4aa28ae453da231535e09b878 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-keepalived-rgw-default-compute-0-gcjsxe, name=keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, description=keepalived for Ceph, release=1793, architecture=x86_64, build-date=2023-02-22T09:23:20, com.redhat.component=keepalived-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=2.2.4, vcs-type=git, distribution-scope=public, io.openshift.expose-services=, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.display-name=Keepalived on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.buildah.version=1.28.2, io.openshift.tags=Ceph keepalived)
Jan 20 13:59:11 compute-0 sudo[95147]: pam_unix(sudo:session): session closed for user root
Jan 20 13:59:11 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 13:59:11 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 20 13:59:11 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:59:11 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 13:59:11 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:59:11 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 20 13:59:11 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:59:12 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:59:12 compute-0 sudo[95502]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:59:12 compute-0 sudo[95502]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:59:12 compute-0 sudo[95502]: pam_unix(sudo:session): session closed for user root
Jan 20 13:59:12 compute-0 sudo[95527]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 13:59:12 compute-0 sudo[95527]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:59:12 compute-0 sudo[95527]: pam_unix(sudo:session): session closed for user root
Jan 20 13:59:12 compute-0 sudo[95552]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:59:12 compute-0 sudo[95552]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:59:12 compute-0 sudo[95552]: pam_unix(sudo:session): session closed for user root
Jan 20 13:59:12 compute-0 sudo[95577]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 13:59:12 compute-0 sudo[95577]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:59:12 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-keepalived-rgw-default-compute-0-gcjsxe[94961]: Tue Jan 20 13:59:12 2026: (VI_0) Received advert from 192.168.122.102 with lower priority 90, ours 100, forcing new election
Jan 20 13:59:12 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 13:59:12 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 13:59:12 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:13:59:12.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 13:59:12 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 56 pg[11.0( empty local-lis/les=49/50 n=0 ec=49/49 lis/c=49/49 les/c/f=50/50/0 sis=56 pruub=14.452516556s) [0] r=0 lpr=56 pi=[49,56)/1 crt=0'0 mlcod 0'0 active pruub 130.281845093s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:59:12 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 56 pg[11.0( empty local-lis/les=49/50 n=0 ec=49/49 lis/c=49/49 les/c/f=50/50/0 sis=56 pruub=14.452516556s) [0] r=0 lpr=56 pi=[49,56)/1 crt=0'0 mlcod 0'0 unknown pruub 130.281845093s@ mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:12 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e56 do_prune osdmap full prune enabled
Jan 20 13:59:12 compute-0 ceph-mon[74360]: pgmap v147: 259 pgs: 62 unknown, 197 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Jan 20 13:59:12 compute-0 ceph-mon[74360]: 7.1c scrub starts
Jan 20 13:59:12 compute-0 ceph-mon[74360]: 7.1c scrub ok
Jan 20 13:59:12 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Jan 20 13:59:12 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Jan 20 13:59:12 compute-0 ceph-mon[74360]: osdmap e56: 3 total, 3 up, 3 in
Jan 20 13:59:12 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:59:12 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:59:12 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:59:12 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:59:12 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e57 e57: 3 total, 3 up, 3 in
Jan 20 13:59:12 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e57: 3 total, 3 up, 3 in
Jan 20 13:59:12 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 57 pg[11.17( empty local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:12 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 57 pg[11.16( empty local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:12 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 57 pg[11.15( empty local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:12 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 57 pg[11.14( empty local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:12 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 57 pg[11.13( empty local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:12 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 57 pg[11.12( empty local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:12 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 57 pg[11.1( empty local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:12 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 57 pg[11.b( empty local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:12 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 57 pg[11.c( empty local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:12 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 57 pg[11.a( empty local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:12 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 57 pg[11.9( empty local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:12 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 57 pg[11.d( empty local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:12 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 57 pg[11.e( empty local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:12 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 57 pg[11.f( empty local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:12 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 57 pg[11.8( empty local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:12 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 57 pg[11.2( empty local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:12 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 57 pg[11.4( empty local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:12 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 57 pg[11.3( empty local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:12 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 57 pg[11.5( empty local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:12 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 57 pg[11.6( empty local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:12 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 57 pg[11.7( empty local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:12 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 57 pg[11.18( empty local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:12 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 57 pg[11.19( empty local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:12 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 57 pg[11.1a( empty local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:12 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 57 pg[11.1b( empty local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:12 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 57 pg[11.1c( empty local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:12 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 57 pg[11.1d( empty local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:12 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 57 pg[11.1e( empty local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:12 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 57 pg[11.1f( empty local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:12 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 57 pg[11.11( empty local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:12 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 57 pg[11.16( empty local-lis/les=56/57 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:12 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 57 pg[11.10( empty local-lis/les=49/50 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:12 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 57 pg[11.14( empty local-lis/les=56/57 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:12 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 57 pg[11.17( empty local-lis/les=56/57 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:12 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 57 pg[11.13( empty local-lis/les=56/57 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:12 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 57 pg[11.12( empty local-lis/les=56/57 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:12 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 57 pg[11.1( empty local-lis/les=56/57 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:12 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 57 pg[11.0( empty local-lis/les=56/57 n=0 ec=49/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:12 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 57 pg[11.b( empty local-lis/les=56/57 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:12 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 57 pg[11.c( empty local-lis/les=56/57 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:12 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 57 pg[11.a( empty local-lis/les=56/57 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:12 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 57 pg[11.9( empty local-lis/les=56/57 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:12 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 57 pg[11.e( empty local-lis/les=56/57 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:12 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 57 pg[11.f( empty local-lis/les=56/57 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:12 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 57 pg[11.d( empty local-lis/les=56/57 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:12 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 57 pg[11.2( empty local-lis/les=56/57 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:12 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 57 pg[11.8( empty local-lis/les=56/57 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:12 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 57 pg[11.15( empty local-lis/les=56/57 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:12 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 57 pg[11.3( empty local-lis/les=56/57 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:12 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 57 pg[11.4( empty local-lis/les=56/57 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:12 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 57 pg[11.5( empty local-lis/les=56/57 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:12 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 57 pg[11.1b( empty local-lis/les=56/57 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:12 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 57 pg[11.6( empty local-lis/les=56/57 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:12 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 57 pg[11.19( empty local-lis/les=56/57 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:12 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 57 pg[11.18( empty local-lis/les=56/57 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:12 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 57 pg[11.7( empty local-lis/les=56/57 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:12 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 57 pg[11.1c( empty local-lis/les=56/57 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:12 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 57 pg[11.1a( empty local-lis/les=56/57 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:12 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 57 pg[11.1d( empty local-lis/les=56/57 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:12 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 57 pg[11.1f( empty local-lis/les=56/57 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:12 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 57 pg[11.1e( empty local-lis/les=56/57 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:12 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 57 pg[11.10( empty local-lis/les=56/57 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:12 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 57 pg[11.11( empty local-lis/les=56/57 n=0 ec=56/49 lis/c=49/49 les/c/f=50/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:12 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e57 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 13:59:12 compute-0 sudo[95577]: pam_unix(sudo:session): session closed for user root
Jan 20 13:59:12 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 13:59:12 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 13:59:12 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 20 13:59:12 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 13:59:12 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 20 13:59:13 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:59:13 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 7c8e0c48-6288-4a87-a9f6-196170bd71ec does not exist
Jan 20 13:59:13 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 20c26564-1015-4e88-bd0a-5165c6464712 does not exist
Jan 20 13:59:13 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev faa6da86-812c-4f6e-9c4d-c68afd1fb5e2 does not exist
Jan 20 13:59:13 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 20 13:59:13 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 13:59:13 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 20 13:59:13 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 13:59:13 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 13:59:13 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 13:59:13 compute-0 sudo[95633]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:59:13 compute-0 sudo[95633]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:59:13 compute-0 sudo[95633]: pam_unix(sudo:session): session closed for user root
Jan 20 13:59:13 compute-0 sudo[95658]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 13:59:13 compute-0 sudo[95658]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:59:13 compute-0 sudo[95658]: pam_unix(sudo:session): session closed for user root
Jan 20 13:59:13 compute-0 sudo[95683]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:59:13 compute-0 sudo[95683]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:59:13 compute-0 sudo[95683]: pam_unix(sudo:session): session closed for user root
Jan 20 13:59:13 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v150: 321 pgs: 124 unknown, 197 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Jan 20 13:59:13 compute-0 sudo[95708]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 13:59:13 compute-0 sudo[95708]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:59:13 compute-0 podman[95769]: 2026-01-20 13:59:13.581883565 +0000 UTC m=+0.056468450 container create c319cbd9d7a130c25c599d2d57ec04f099bd3949ee7a9716e1c31deab2db88a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_darwin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 20 13:59:13 compute-0 systemd[1]: Started libpod-conmon-c319cbd9d7a130c25c599d2d57ec04f099bd3949ee7a9716e1c31deab2db88a6.scope.
Jan 20 13:59:13 compute-0 systemd[1]: Started libcrun container.
Jan 20 13:59:13 compute-0 podman[95769]: 2026-01-20 13:59:13.646483433 +0000 UTC m=+0.121068338 container init c319cbd9d7a130c25c599d2d57ec04f099bd3949ee7a9716e1c31deab2db88a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_darwin, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 20 13:59:13 compute-0 podman[95769]: 2026-01-20 13:59:13.553345328 +0000 UTC m=+0.027930303 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 13:59:13 compute-0 podman[95769]: 2026-01-20 13:59:13.652194048 +0000 UTC m=+0.126778963 container start c319cbd9d7a130c25c599d2d57ec04f099bd3949ee7a9716e1c31deab2db88a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_darwin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 20 13:59:13 compute-0 relaxed_darwin[95787]: 167 167
Jan 20 13:59:13 compute-0 systemd[1]: libpod-c319cbd9d7a130c25c599d2d57ec04f099bd3949ee7a9716e1c31deab2db88a6.scope: Deactivated successfully.
Jan 20 13:59:13 compute-0 podman[95769]: 2026-01-20 13:59:13.655705152 +0000 UTC m=+0.130290047 container attach c319cbd9d7a130c25c599d2d57ec04f099bd3949ee7a9716e1c31deab2db88a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_darwin, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2)
Jan 20 13:59:13 compute-0 podman[95769]: 2026-01-20 13:59:13.656633137 +0000 UTC m=+0.131218052 container died c319cbd9d7a130c25c599d2d57ec04f099bd3949ee7a9716e1c31deab2db88a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_darwin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 20 13:59:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-02e04362f8342c96d9bf204f91abf71b8785e696bae97b6ae3df5b6fe199ae9a-merged.mount: Deactivated successfully.
Jan 20 13:59:13 compute-0 podman[95769]: 2026-01-20 13:59:13.693658843 +0000 UTC m=+0.168243728 container remove c319cbd9d7a130c25c599d2d57ec04f099bd3949ee7a9716e1c31deab2db88a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_darwin, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 20 13:59:13 compute-0 systemd[1]: libpod-conmon-c319cbd9d7a130c25c599d2d57ec04f099bd3949ee7a9716e1c31deab2db88a6.scope: Deactivated successfully.
Jan 20 13:59:13 compute-0 ceph-mgr[74653]: [progress INFO root] Writing back 20 completed events
Jan 20 13:59:13 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Jan 20 13:59:13 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:59:13 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 13:59:13 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 13:59:13 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:13:59:13.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 13:59:13 compute-0 ceph-mon[74360]: osdmap e57: 3 total, 3 up, 3 in
Jan 20 13:59:13 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 13:59:13 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 13:59:13 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:59:13 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 13:59:13 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 13:59:13 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 13:59:13 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:59:13 compute-0 podman[95810]: 2026-01-20 13:59:13.863254937 +0000 UTC m=+0.060249293 container create ff9c657502403afab30dd321e02c19e85f1661f72e0e30128514fa1ff44109d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_wiles, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 20 13:59:13 compute-0 systemd[1]: Started libpod-conmon-ff9c657502403afab30dd321e02c19e85f1661f72e0e30128514fa1ff44109d4.scope.
Jan 20 13:59:13 compute-0 systemd[1]: Started libcrun container.
Jan 20 13:59:13 compute-0 podman[95810]: 2026-01-20 13:59:13.841617085 +0000 UTC m=+0.038611461 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 13:59:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2551db97813b8837d4d11ce08e8dec9a731a8f86ca16a996e648051154922bde/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 13:59:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2551db97813b8837d4d11ce08e8dec9a731a8f86ca16a996e648051154922bde/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 13:59:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2551db97813b8837d4d11ce08e8dec9a731a8f86ca16a996e648051154922bde/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 13:59:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2551db97813b8837d4d11ce08e8dec9a731a8f86ca16a996e648051154922bde/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 13:59:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2551db97813b8837d4d11ce08e8dec9a731a8f86ca16a996e648051154922bde/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 13:59:13 compute-0 podman[95810]: 2026-01-20 13:59:13.959083586 +0000 UTC m=+0.156077952 container init ff9c657502403afab30dd321e02c19e85f1661f72e0e30128514fa1ff44109d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_wiles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 20 13:59:13 compute-0 podman[95810]: 2026-01-20 13:59:13.970920404 +0000 UTC m=+0.167914770 container start ff9c657502403afab30dd321e02c19e85f1661f72e0e30128514fa1ff44109d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_wiles, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 13:59:13 compute-0 podman[95810]: 2026-01-20 13:59:13.975425915 +0000 UTC m=+0.172420251 container attach ff9c657502403afab30dd321e02c19e85f1661f72e0e30128514fa1ff44109d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_wiles, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 13:59:14 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 13:59:14 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 13:59:14 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:13:59:14.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 13:59:14 compute-0 gifted_wiles[95826]: --> passed data devices: 0 physical, 1 LVM
Jan 20 13:59:14 compute-0 gifted_wiles[95826]: --> relative data size: 1.0
Jan 20 13:59:14 compute-0 gifted_wiles[95826]: --> All data devices are unavailable
Jan 20 13:59:14 compute-0 systemd[1]: libpod-ff9c657502403afab30dd321e02c19e85f1661f72e0e30128514fa1ff44109d4.scope: Deactivated successfully.
Jan 20 13:59:14 compute-0 podman[95810]: 2026-01-20 13:59:14.823755283 +0000 UTC m=+1.020749649 container died ff9c657502403afab30dd321e02c19e85f1661f72e0e30128514fa1ff44109d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_wiles, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 13:59:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-2551db97813b8837d4d11ce08e8dec9a731a8f86ca16a996e648051154922bde-merged.mount: Deactivated successfully.
Jan 20 13:59:14 compute-0 ceph-mon[74360]: pgmap v150: 321 pgs: 124 unknown, 197 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Jan 20 13:59:14 compute-0 ceph-mon[74360]: 6.1a scrub starts
Jan 20 13:59:14 compute-0 ceph-mon[74360]: 6.1a scrub ok
Jan 20 13:59:14 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 4.10 scrub starts
Jan 20 13:59:14 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 4.10 scrub ok
Jan 20 13:59:14 compute-0 podman[95810]: 2026-01-20 13:59:14.900088137 +0000 UTC m=+1.097082493 container remove ff9c657502403afab30dd321e02c19e85f1661f72e0e30128514fa1ff44109d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_wiles, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 20 13:59:14 compute-0 systemd[1]: libpod-conmon-ff9c657502403afab30dd321e02c19e85f1661f72e0e30128514fa1ff44109d4.scope: Deactivated successfully.
Jan 20 13:59:14 compute-0 sudo[95708]: pam_unix(sudo:session): session closed for user root
Jan 20 13:59:15 compute-0 sudo[95855]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:59:15 compute-0 sudo[95855]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:59:15 compute-0 sudo[95855]: pam_unix(sudo:session): session closed for user root
Jan 20 13:59:15 compute-0 sudo[95880]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 13:59:15 compute-0 sudo[95880]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:59:15 compute-0 sudo[95880]: pam_unix(sudo:session): session closed for user root
Jan 20 13:59:15 compute-0 sudo[95905]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:59:15 compute-0 sudo[95905]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:59:15 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v151: 321 pgs: 31 unknown, 290 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Jan 20 13:59:15 compute-0 sudo[95905]: pam_unix(sudo:session): session closed for user root
Jan 20 13:59:15 compute-0 sshd-session[95842]: Connection closed by authenticating user root 157.245.78.139 port 35056 [preauth]
Jan 20 13:59:15 compute-0 sudo[95930]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- lvm list --format json
Jan 20 13:59:15 compute-0 sudo[95930]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:59:15 compute-0 podman[95994]: 2026-01-20 13:59:15.758123997 +0000 UTC m=+0.056634536 container create 62ce1df4ddeb54b622540b59c58aa23ee4e9700846561f85cbd7fabf5ed7d476 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_turing, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 13:59:15 compute-0 systemd[1]: Started libpod-conmon-62ce1df4ddeb54b622540b59c58aa23ee4e9700846561f85cbd7fabf5ed7d476.scope.
Jan 20 13:59:15 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 13:59:15 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 13:59:15 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:13:59:15.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 13:59:15 compute-0 systemd[1]: Started libcrun container.
Jan 20 13:59:15 compute-0 podman[95994]: 2026-01-20 13:59:15.736869924 +0000 UTC m=+0.035380513 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 13:59:15 compute-0 podman[95994]: 2026-01-20 13:59:15.845495227 +0000 UTC m=+0.144005846 container init 62ce1df4ddeb54b622540b59c58aa23ee4e9700846561f85cbd7fabf5ed7d476 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_turing, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 20 13:59:15 compute-0 podman[95994]: 2026-01-20 13:59:15.855892517 +0000 UTC m=+0.154403096 container start 62ce1df4ddeb54b622540b59c58aa23ee4e9700846561f85cbd7fabf5ed7d476 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_turing, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Jan 20 13:59:15 compute-0 podman[95994]: 2026-01-20 13:59:15.860229884 +0000 UTC m=+0.158740503 container attach 62ce1df4ddeb54b622540b59c58aa23ee4e9700846561f85cbd7fabf5ed7d476 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_turing, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 13:59:15 compute-0 hardcore_turing[96010]: 167 167
Jan 20 13:59:15 compute-0 systemd[1]: libpod-62ce1df4ddeb54b622540b59c58aa23ee4e9700846561f85cbd7fabf5ed7d476.scope: Deactivated successfully.
Jan 20 13:59:15 compute-0 podman[95994]: 2026-01-20 13:59:15.862813863 +0000 UTC m=+0.161324402 container died 62ce1df4ddeb54b622540b59c58aa23ee4e9700846561f85cbd7fabf5ed7d476 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_turing, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 13:59:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-40197d09a21ba58dbf8d4860abbb27edc9363eefb089a8b647705b3e5241d47d-merged.mount: Deactivated successfully.
Jan 20 13:59:15 compute-0 ceph-mon[74360]: 4.10 scrub starts
Jan 20 13:59:15 compute-0 ceph-mon[74360]: 4.10 scrub ok
Jan 20 13:59:15 compute-0 podman[95994]: 2026-01-20 13:59:15.90769144 +0000 UTC m=+0.206202019 container remove 62ce1df4ddeb54b622540b59c58aa23ee4e9700846561f85cbd7fabf5ed7d476 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_turing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 13:59:15 compute-0 systemd[1]: libpod-conmon-62ce1df4ddeb54b622540b59c58aa23ee4e9700846561f85cbd7fabf5ed7d476.scope: Deactivated successfully.
Jan 20 13:59:16 compute-0 podman[96033]: 2026-01-20 13:59:16.161939652 +0000 UTC m=+0.073395105 container create 19d32c86ce3b360b19057cf38820f7a66fbc4436a1dc90932af574742b6ab563 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_banzai, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 13:59:16 compute-0 systemd[1]: Started libpod-conmon-19d32c86ce3b360b19057cf38820f7a66fbc4436a1dc90932af574742b6ab563.scope.
Jan 20 13:59:16 compute-0 podman[96033]: 2026-01-20 13:59:16.127558137 +0000 UTC m=+0.039013640 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 13:59:16 compute-0 systemd[1]: Started libcrun container.
Jan 20 13:59:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8bd1163880ea88303a63d949092b4ce374ce5d2bad41f57bb586e70ef85660c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 13:59:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8bd1163880ea88303a63d949092b4ce374ce5d2bad41f57bb586e70ef85660c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 13:59:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8bd1163880ea88303a63d949092b4ce374ce5d2bad41f57bb586e70ef85660c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 13:59:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8bd1163880ea88303a63d949092b4ce374ce5d2bad41f57bb586e70ef85660c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 13:59:16 compute-0 podman[96033]: 2026-01-20 13:59:16.266023323 +0000 UTC m=+0.177478836 container init 19d32c86ce3b360b19057cf38820f7a66fbc4436a1dc90932af574742b6ab563 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_banzai, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 20 13:59:16 compute-0 podman[96033]: 2026-01-20 13:59:16.27815819 +0000 UTC m=+0.189613643 container start 19d32c86ce3b360b19057cf38820f7a66fbc4436a1dc90932af574742b6ab563 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_banzai, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 20 13:59:16 compute-0 podman[96033]: 2026-01-20 13:59:16.282196669 +0000 UTC m=+0.193652162 container attach 19d32c86ce3b360b19057cf38820f7a66fbc4436a1dc90932af574742b6ab563 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_banzai, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 20 13:59:16 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 13:59:16 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 13:59:16 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:13:59:16.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 13:59:16 compute-0 ceph-mon[74360]: pgmap v151: 321 pgs: 31 unknown, 290 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Jan 20 13:59:16 compute-0 ceph-mon[74360]: 2.13 scrub starts
Jan 20 13:59:16 compute-0 ceph-mon[74360]: 2.13 scrub ok
Jan 20 13:59:16 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 4.11 scrub starts
Jan 20 13:59:16 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 4.11 scrub ok
Jan 20 13:59:17 compute-0 agitated_banzai[96049]: {
Jan 20 13:59:17 compute-0 agitated_banzai[96049]:     "0": [
Jan 20 13:59:17 compute-0 agitated_banzai[96049]:         {
Jan 20 13:59:17 compute-0 agitated_banzai[96049]:             "devices": [
Jan 20 13:59:17 compute-0 agitated_banzai[96049]:                 "/dev/loop3"
Jan 20 13:59:17 compute-0 agitated_banzai[96049]:             ],
Jan 20 13:59:17 compute-0 agitated_banzai[96049]:             "lv_name": "ceph_lv0",
Jan 20 13:59:17 compute-0 agitated_banzai[96049]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 13:59:17 compute-0 agitated_banzai[96049]:             "lv_size": "7511998464",
Jan 20 13:59:17 compute-0 agitated_banzai[96049]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e399cf45-e6b6-5393-99f1-75c601d3f188,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afc3740a-ccee-46f8-b343-22648cc89376,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 20 13:59:17 compute-0 agitated_banzai[96049]:             "lv_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 13:59:17 compute-0 agitated_banzai[96049]:             "name": "ceph_lv0",
Jan 20 13:59:17 compute-0 agitated_banzai[96049]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 13:59:17 compute-0 agitated_banzai[96049]:             "tags": {
Jan 20 13:59:17 compute-0 agitated_banzai[96049]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 13:59:17 compute-0 agitated_banzai[96049]:                 "ceph.block_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 13:59:17 compute-0 agitated_banzai[96049]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 13:59:17 compute-0 agitated_banzai[96049]:                 "ceph.cluster_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 13:59:17 compute-0 agitated_banzai[96049]:                 "ceph.cluster_name": "ceph",
Jan 20 13:59:17 compute-0 agitated_banzai[96049]:                 "ceph.crush_device_class": "",
Jan 20 13:59:17 compute-0 agitated_banzai[96049]:                 "ceph.encrypted": "0",
Jan 20 13:59:17 compute-0 agitated_banzai[96049]:                 "ceph.osd_fsid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 13:59:17 compute-0 agitated_banzai[96049]:                 "ceph.osd_id": "0",
Jan 20 13:59:17 compute-0 agitated_banzai[96049]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 13:59:17 compute-0 agitated_banzai[96049]:                 "ceph.type": "block",
Jan 20 13:59:17 compute-0 agitated_banzai[96049]:                 "ceph.vdo": "0"
Jan 20 13:59:17 compute-0 agitated_banzai[96049]:             },
Jan 20 13:59:17 compute-0 agitated_banzai[96049]:             "type": "block",
Jan 20 13:59:17 compute-0 agitated_banzai[96049]:             "vg_name": "ceph_vg0"
Jan 20 13:59:17 compute-0 agitated_banzai[96049]:         }
Jan 20 13:59:17 compute-0 agitated_banzai[96049]:     ]
Jan 20 13:59:17 compute-0 agitated_banzai[96049]: }
Jan 20 13:59:17 compute-0 systemd[1]: libpod-19d32c86ce3b360b19057cf38820f7a66fbc4436a1dc90932af574742b6ab563.scope: Deactivated successfully.
Jan 20 13:59:17 compute-0 podman[96033]: 2026-01-20 13:59:17.079791952 +0000 UTC m=+0.991247405 container died 19d32c86ce3b360b19057cf38820f7a66fbc4436a1dc90932af574742b6ab563 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_banzai, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 13:59:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-b8bd1163880ea88303a63d949092b4ce374ce5d2bad41f57bb586e70ef85660c-merged.mount: Deactivated successfully.
Jan 20 13:59:17 compute-0 podman[96033]: 2026-01-20 13:59:17.146888977 +0000 UTC m=+1.058344410 container remove 19d32c86ce3b360b19057cf38820f7a66fbc4436a1dc90932af574742b6ab563 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_banzai, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 13:59:17 compute-0 systemd[1]: libpod-conmon-19d32c86ce3b360b19057cf38820f7a66fbc4436a1dc90932af574742b6ab563.scope: Deactivated successfully.
Jan 20 13:59:17 compute-0 sudo[95930]: pam_unix(sudo:session): session closed for user root
Jan 20 13:59:17 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v152: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Jan 20 13:59:17 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} v 0) v1
Jan 20 13:59:17 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 20 13:59:17 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} v 0) v1
Jan 20 13:59:17 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 20 13:59:17 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} v 0) v1
Jan 20 13:59:17 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Jan 20 13:59:17 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} v 0) v1
Jan 20 13:59:17 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 20 13:59:17 compute-0 sudo[96070]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:59:17 compute-0 sudo[96070]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:59:17 compute-0 sudo[96070]: pam_unix(sudo:session): session closed for user root
Jan 20 13:59:17 compute-0 sudo[96095]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 13:59:17 compute-0 sudo[96095]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:59:17 compute-0 sudo[96095]: pam_unix(sudo:session): session closed for user root
Jan 20 13:59:17 compute-0 sudo[96120]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:59:17 compute-0 sudo[96120]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:59:17 compute-0 sudo[96120]: pam_unix(sudo:session): session closed for user root
Jan 20 13:59:17 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 13:59:17 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 13:59:17 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:13:59:17.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 13:59:17 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e57 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 13:59:17 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e57 do_prune osdmap full prune enabled
Jan 20 13:59:17 compute-0 sudo[96146]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- raw list --format json
Jan 20 13:59:17 compute-0 sudo[96146]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:59:17 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 20 13:59:17 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 20 13:59:17 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Jan 20 13:59:17 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 20 13:59:17 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e58 e58: 3 total, 3 up, 3 in
Jan 20 13:59:17 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e58: 3 total, 3 up, 3 in
Jan 20 13:59:17 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 58 pg[10.1b( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:17 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 58 pg[10.18( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:17 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 58 pg[10.19( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:17 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 58 pg[10.5( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:17 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 58 pg[10.2( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:17 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 58 pg[10.8( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:17 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 58 pg[10.13( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:17 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 58 pg[10.15( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:17 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 58 pg[10.14( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:17 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 58 pg[8.14( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.834457397s) [1] r=-1 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 active pruub 129.857238770s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:59:17 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 58 pg[8.15( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.837559700s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 active pruub 129.860473633s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:59:17 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 58 pg[8.14( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.834317207s) [1] r=-1 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 129.857238770s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:59:17 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 58 pg[8.15( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.837480545s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 129.860473633s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:59:17 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 58 pg[11.16( empty local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.923012733s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active pruub 131.946044922s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:59:17 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 58 pg[11.16( empty local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.922834396s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 131.946044922s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:59:17 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 58 pg[11.17( empty local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.925803185s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active pruub 131.949325562s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:59:17 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 58 pg[11.17( empty local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.925759315s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 131.949325562s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:59:17 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 58 pg[11.14( empty local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.925695419s) [1] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active pruub 131.949310303s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:59:17 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 58 pg[8.16( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.836176872s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 active pruub 129.859786987s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:59:17 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 58 pg[11.14( empty local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.925605774s) [1] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 131.949310303s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:59:17 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 58 pg[8.16( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.836034775s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 129.859786987s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:59:17 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 58 pg[11.13( empty local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.925493240s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active pruub 131.949325562s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:59:17 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 58 pg[8.17( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.835976601s) [1] r=-1 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 active pruub 129.859832764s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:59:17 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 58 pg[11.13( empty local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.925473213s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 131.949325562s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:59:17 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 58 pg[8.17( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.835955620s) [1] r=-1 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 129.859832764s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:59:17 compute-0 ceph-mon[74360]: 2.10 scrub starts
Jan 20 13:59:17 compute-0 ceph-mon[74360]: 2.10 scrub ok
Jan 20 13:59:17 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 20 13:59:17 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 20 13:59:17 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Jan 20 13:59:17 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 20 13:59:17 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 58 pg[8.10( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.835901260s) [1] r=-1 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 active pruub 129.859909058s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:59:17 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 58 pg[8.10( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.835886002s) [1] r=-1 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 129.859909058s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:59:17 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 58 pg[11.12( empty local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.925320625s) [1] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active pruub 131.949386597s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:59:17 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 58 pg[11.12( empty local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.925303459s) [1] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 131.949386597s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:59:17 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 58 pg[8.11( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.835894585s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 active pruub 129.860046387s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:59:17 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 58 pg[11.1( empty local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.925331116s) [1] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active pruub 131.949508667s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:59:17 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 58 pg[8.2( v 44'4 (0'0,44'4] local-lis/les=54/55 n=1 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.835892677s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 active pruub 129.860092163s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:59:17 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 58 pg[11.1( empty local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.925313950s) [1] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 131.949508667s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:59:17 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 58 pg[8.11( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.835856438s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 129.860046387s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:59:17 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 58 pg[8.2( v 44'4 (0'0,44'4] local-lis/les=54/55 n=1 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.835873604s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 129.860092163s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:59:17 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 58 pg[8.3( v 44'4 (0'0,44'4] local-lis/les=54/55 n=1 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.835866928s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 active pruub 129.860183716s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:59:17 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 58 pg[8.3( v 44'4 (0'0,44'4] local-lis/les=54/55 n=1 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.835851669s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 129.860183716s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:59:17 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 58 pg[8.f( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.835771561s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 active pruub 129.860183716s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:59:17 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 58 pg[8.f( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.835759163s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 129.860183716s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:59:17 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 58 pg[8.8( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.842449188s) [1] r=-1 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 active pruub 129.866958618s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:59:17 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 58 pg[8.8( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.842428207s) [1] r=-1 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 129.866958618s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:59:17 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 58 pg[11.a( empty local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.925060272s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active pruub 131.949600220s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:59:17 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 58 pg[11.a( empty local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.925028801s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 131.949600220s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:59:17 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 58 pg[8.9( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.835680962s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 active pruub 129.860305786s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:59:17 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 58 pg[8.a( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.835618973s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 active pruub 129.860321045s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:59:17 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 58 pg[8.9( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.835635185s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 129.860305786s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:59:17 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 58 pg[8.a( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.835602760s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 129.860321045s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:59:17 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 58 pg[11.e( empty local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.924891472s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active pruub 131.949661255s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:59:17 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 58 pg[8.d( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.835626602s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 active pruub 129.860397339s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:59:17 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 58 pg[11.e( empty local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.924867630s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 131.949661255s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:59:17 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 58 pg[8.d( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.835603714s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 129.860397339s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:59:17 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 58 pg[8.c( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.835473061s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 active pruub 129.860443115s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:59:17 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 58 pg[11.f( empty local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.924716949s) [1] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active pruub 131.949676514s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:59:17 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 58 pg[8.c( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.835448265s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 129.860443115s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:59:17 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 58 pg[11.8( empty local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.924699783s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active pruub 131.949737549s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:59:17 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 58 pg[11.8( empty local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.924558640s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 131.949737549s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:59:17 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 58 pg[8.b( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.835040092s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 active pruub 129.860504150s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:59:17 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 58 pg[11.5( empty local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.924295425s) [1] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active pruub 131.949783325s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:59:17 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 58 pg[11.5( empty local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.924277306s) [1] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 131.949783325s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:59:17 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 58 pg[11.f( empty local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.924649239s) [1] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 131.949676514s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:59:17 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 58 pg[8.b( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.834998131s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 129.860504150s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:59:17 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 58 pg[8.6( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.841423988s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 active pruub 129.867111206s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:59:17 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 58 pg[8.6( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.841406822s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 129.867111206s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:59:17 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 58 pg[8.5( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.842005730s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 active pruub 129.867752075s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:59:17 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 58 pg[11.7( empty local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.924149513s) [1] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active pruub 131.949905396s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:59:17 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 58 pg[11.4( empty local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.923973083s) [1] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active pruub 131.949783325s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:59:17 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 58 pg[8.5( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.841931343s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 129.867752075s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:59:17 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 58 pg[11.4( empty local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.923951149s) [1] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 131.949783325s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:59:17 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 58 pg[11.7( empty local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.924082756s) [1] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 131.949905396s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:59:17 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 58 pg[8.4( v 44'4 (0'0,44'4] local-lis/les=54/55 n=1 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.841112137s) [1] r=-1 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 active pruub 129.867080688s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:59:17 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 58 pg[8.4( v 44'4 (0'0,44'4] local-lis/les=54/55 n=1 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.841090202s) [1] r=-1 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 129.867080688s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:59:17 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 58 pg[8.1b( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.841368675s) [1] r=-1 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 active pruub 129.867416382s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:59:17 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 58 pg[8.1b( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.841346741s) [1] r=-1 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 129.867416382s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:59:17 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 58 pg[11.19( empty local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.923709869s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active pruub 131.949844360s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:59:17 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 58 pg[11.19( empty local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.923690796s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 131.949844360s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:59:17 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 58 pg[8.19( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.841164589s) [1] r=-1 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 active pruub 129.867416382s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:59:17 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 58 pg[8.19( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.841139793s) [1] r=-1 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 129.867416382s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:59:17 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 58 pg[11.1b( empty local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.923482895s) [1] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active pruub 131.949844360s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:59:17 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 58 pg[8.18( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.841082573s) [1] r=-1 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 active pruub 129.867507935s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:59:17 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 58 pg[11.1a( empty local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.925932884s) [1] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active pruub 131.952331543s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:59:17 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 58 pg[11.1a( empty local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.925832748s) [1] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 131.952331543s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:59:17 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 58 pg[11.1b( empty local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.923412323s) [1] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 131.949844360s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:59:17 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 58 pg[11.1c( empty local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.923388481s) [1] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active pruub 131.949996948s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:59:17 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 58 pg[11.1c( empty local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.923366547s) [1] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 131.949996948s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:59:17 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 58 pg[8.1f( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.841139793s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 active pruub 129.867797852s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:59:17 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 58 pg[8.1f( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.841118813s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 129.867797852s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:59:17 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 58 pg[8.18( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.841059685s) [1] r=-1 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 129.867507935s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:59:17 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 58 pg[11.1e( empty local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.925466537s) [1] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active pruub 131.952331543s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:59:17 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 58 pg[11.1e( empty local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.925443649s) [1] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 131.952331543s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:59:17 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 58 pg[11.1d( empty local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.925028801s) [1] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active pruub 131.952346802s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:59:17 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 58 pg[8.1c( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.840454102s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 active pruub 129.867828369s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:59:17 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 58 pg[11.1d( empty local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.924975395s) [1] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 131.952346802s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:59:17 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 58 pg[11.3( empty local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.922341347s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active pruub 131.949783325s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:59:17 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 58 pg[8.1c( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.840373993s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 129.867828369s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:59:17 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 58 pg[11.3( empty local-lis/les=56/57 n=0 ec=56/49 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.922311783s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 131.949783325s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:59:17 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 58 pg[8.12( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.840275764s) [1] r=-1 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 active pruub 129.867980957s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:59:17 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 58 pg[8.12( v 44'4 (0'0,44'4] local-lis/les=54/55 n=0 ec=54/43 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.840239525s) [1] r=-1 lpr=58 pi=[54,58)/1 crt=44'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 129.867980957s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:59:18 compute-0 podman[96210]: 2026-01-20 13:59:18.240888055 +0000 UTC m=+0.037578752 container create 8606d15a67bae35f319a3e1dc0b430dc389d5ce69c3bbca53e0d66420aefd344 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_meitner, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 13:59:18 compute-0 systemd[1]: Started libpod-conmon-8606d15a67bae35f319a3e1dc0b430dc389d5ce69c3bbca53e0d66420aefd344.scope.
Jan 20 13:59:18 compute-0 systemd[1]: Started libcrun container.
Jan 20 13:59:18 compute-0 podman[96210]: 2026-01-20 13:59:18.308269158 +0000 UTC m=+0.104959905 container init 8606d15a67bae35f319a3e1dc0b430dc389d5ce69c3bbca53e0d66420aefd344 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_meitner, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 13:59:18 compute-0 podman[96210]: 2026-01-20 13:59:18.22394768 +0000 UTC m=+0.020638407 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 13:59:18 compute-0 podman[96210]: 2026-01-20 13:59:18.318570955 +0000 UTC m=+0.115261662 container start 8606d15a67bae35f319a3e1dc0b430dc389d5ce69c3bbca53e0d66420aefd344 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_meitner, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 20 13:59:18 compute-0 podman[96210]: 2026-01-20 13:59:18.322252405 +0000 UTC m=+0.118943142 container attach 8606d15a67bae35f319a3e1dc0b430dc389d5ce69c3bbca53e0d66420aefd344 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_meitner, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 13:59:18 compute-0 amazing_meitner[96226]: 167 167
Jan 20 13:59:18 compute-0 systemd[1]: libpod-8606d15a67bae35f319a3e1dc0b430dc389d5ce69c3bbca53e0d66420aefd344.scope: Deactivated successfully.
Jan 20 13:59:18 compute-0 podman[96210]: 2026-01-20 13:59:18.32432209 +0000 UTC m=+0.121012807 container died 8606d15a67bae35f319a3e1dc0b430dc389d5ce69c3bbca53e0d66420aefd344 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_meitner, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 20 13:59:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-5a23b754104c16ce19f418035eee0bff9c338388f91e0d2e5e8d0c03a9a99fb7-merged.mount: Deactivated successfully.
Jan 20 13:59:18 compute-0 podman[96210]: 2026-01-20 13:59:18.373517865 +0000 UTC m=+0.170208572 container remove 8606d15a67bae35f319a3e1dc0b430dc389d5ce69c3bbca53e0d66420aefd344 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_meitner, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 20 13:59:18 compute-0 systemd[1]: libpod-conmon-8606d15a67bae35f319a3e1dc0b430dc389d5ce69c3bbca53e0d66420aefd344.scope: Deactivated successfully.
Jan 20 13:59:18 compute-0 podman[96250]: 2026-01-20 13:59:18.5603019 +0000 UTC m=+0.053467059 container create a8b3a7ec4b993b9942cc1c351356c9ef6cac1d5b5cbc66ecbad64e1fc828094a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_kirch, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 20 13:59:18 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 13:59:18 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 13:59:18 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:13:59:18.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 13:59:18 compute-0 systemd[1]: Started libpod-conmon-a8b3a7ec4b993b9942cc1c351356c9ef6cac1d5b5cbc66ecbad64e1fc828094a.scope.
Jan 20 13:59:18 compute-0 systemd[1]: Started libcrun container.
Jan 20 13:59:18 compute-0 podman[96250]: 2026-01-20 13:59:18.53613926 +0000 UTC m=+0.029304499 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 13:59:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1bd4a948ff5e5345570784115e12c72d978e83548456fd76cf90e07807df32a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 13:59:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1bd4a948ff5e5345570784115e12c72d978e83548456fd76cf90e07807df32a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 13:59:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1bd4a948ff5e5345570784115e12c72d978e83548456fd76cf90e07807df32a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 13:59:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1bd4a948ff5e5345570784115e12c72d978e83548456fd76cf90e07807df32a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 13:59:18 compute-0 podman[96250]: 2026-01-20 13:59:18.658485952 +0000 UTC m=+0.151651171 container init a8b3a7ec4b993b9942cc1c351356c9ef6cac1d5b5cbc66ecbad64e1fc828094a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_kirch, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 13:59:18 compute-0 podman[96250]: 2026-01-20 13:59:18.671595345 +0000 UTC m=+0.164760534 container start a8b3a7ec4b993b9942cc1c351356c9ef6cac1d5b5cbc66ecbad64e1fc828094a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_kirch, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 13:59:18 compute-0 podman[96250]: 2026-01-20 13:59:18.675452179 +0000 UTC m=+0.168617388 container attach a8b3a7ec4b993b9942cc1c351356c9ef6cac1d5b5cbc66ecbad64e1fc828094a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_kirch, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 20 13:59:18 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e58 do_prune osdmap full prune enabled
Jan 20 13:59:18 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e59 e59: 3 total, 3 up, 3 in
Jan 20 13:59:18 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e59: 3 total, 3 up, 3 in
Jan 20 13:59:18 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 59 pg[10.14( v 57'51 lc 48'43 (0'0,57'51] local-lis/les=58/59 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=57'51 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:18 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 59 pg[10.13( v 48'48 (0'0,48'48] local-lis/les=58/59 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=48'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:18 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 59 pg[10.8( v 48'48 (0'0,48'48] local-lis/les=58/59 n=1 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=48'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:18 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 59 pg[10.15( v 57'51 lc 48'20 (0'0,57'51] local-lis/les=58/59 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=57'51 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:18 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 59 pg[10.2( v 48'48 (0'0,48'48] local-lis/les=58/59 n=1 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=48'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:18 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 59 pg[10.5( v 48'48 (0'0,48'48] local-lis/les=58/59 n=1 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=48'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:18 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 59 pg[10.1b( v 48'48 (0'0,48'48] local-lis/les=58/59 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=48'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:18 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 59 pg[10.18( v 48'48 (0'0,48'48] local-lis/les=58/59 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=48'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:18 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 59 pg[10.19( v 48'48 (0'0,48'48] local-lis/les=58/59 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58) [0] r=0 lpr=58 pi=[56,58)/1 crt=48'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:18 compute-0 ceph-mon[74360]: 4.11 scrub starts
Jan 20 13:59:18 compute-0 ceph-mon[74360]: 4.11 scrub ok
Jan 20 13:59:18 compute-0 ceph-mon[74360]: pgmap v152: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Jan 20 13:59:18 compute-0 ceph-mon[74360]: 4.1a scrub starts
Jan 20 13:59:18 compute-0 ceph-mon[74360]: 4.1a scrub ok
Jan 20 13:59:18 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 20 13:59:18 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 20 13:59:18 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Jan 20 13:59:18 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 20 13:59:18 compute-0 ceph-mon[74360]: osdmap e58: 3 total, 3 up, 3 in
Jan 20 13:59:18 compute-0 ceph-mon[74360]: osdmap e59: 3 total, 3 up, 3 in
Jan 20 13:59:19 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v155: 321 pgs: 9 peering, 312 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Jan 20 13:59:19 compute-0 bold_kirch[96267]: {
Jan 20 13:59:19 compute-0 bold_kirch[96267]:     "afc3740a-ccee-46f8-b343-22648cc89376": {
Jan 20 13:59:19 compute-0 bold_kirch[96267]:         "ceph_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 13:59:19 compute-0 bold_kirch[96267]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 20 13:59:19 compute-0 bold_kirch[96267]:         "osd_id": 0,
Jan 20 13:59:19 compute-0 bold_kirch[96267]:         "osd_uuid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 13:59:19 compute-0 bold_kirch[96267]:         "type": "bluestore"
Jan 20 13:59:19 compute-0 bold_kirch[96267]:     }
Jan 20 13:59:19 compute-0 bold_kirch[96267]: }
Jan 20 13:59:19 compute-0 systemd[1]: libpod-a8b3a7ec4b993b9942cc1c351356c9ef6cac1d5b5cbc66ecbad64e1fc828094a.scope: Deactivated successfully.
Jan 20 13:59:19 compute-0 podman[96250]: 2026-01-20 13:59:19.489232917 +0000 UTC m=+0.982398076 container died a8b3a7ec4b993b9942cc1c351356c9ef6cac1d5b5cbc66ecbad64e1fc828094a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_kirch, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 20 13:59:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-b1bd4a948ff5e5345570784115e12c72d978e83548456fd76cf90e07807df32a-merged.mount: Deactivated successfully.
Jan 20 13:59:19 compute-0 podman[96250]: 2026-01-20 13:59:19.54395929 +0000 UTC m=+1.037124439 container remove a8b3a7ec4b993b9942cc1c351356c9ef6cac1d5b5cbc66ecbad64e1fc828094a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_kirch, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 13:59:19 compute-0 systemd[1]: libpod-conmon-a8b3a7ec4b993b9942cc1c351356c9ef6cac1d5b5cbc66ecbad64e1fc828094a.scope: Deactivated successfully.
Jan 20 13:59:19 compute-0 sudo[96146]: pam_unix(sudo:session): session closed for user root
Jan 20 13:59:19 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 13:59:19 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:59:19 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 13:59:19 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:59:19 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 525cb34e-ea9c-4baf-ab7d-e4518cff6c45 does not exist
Jan 20 13:59:19 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 10be40cd-42a9-4237-9e91-5540cf957f77 does not exist
Jan 20 13:59:19 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 7d1f8a22-5b1a-42ab-b431-c8b6201ca011 does not exist
Jan 20 13:59:19 compute-0 sudo[96301]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:59:19 compute-0 sudo[96301]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:59:19 compute-0 sudo[96301]: pam_unix(sudo:session): session closed for user root
Jan 20 13:59:19 compute-0 sudo[96326]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 13:59:19 compute-0 sudo[96326]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:59:19 compute-0 sudo[96326]: pam_unix(sudo:session): session closed for user root
Jan 20 13:59:19 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 13:59:19 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 13:59:19 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:13:59:19.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 13:59:19 compute-0 ceph-mon[74360]: 5.18 scrub starts
Jan 20 13:59:19 compute-0 ceph-mon[74360]: 5.18 scrub ok
Jan 20 13:59:19 compute-0 ceph-mon[74360]: 5.12 scrub starts
Jan 20 13:59:19 compute-0 ceph-mon[74360]: 5.12 scrub ok
Jan 20 13:59:19 compute-0 ceph-mon[74360]: pgmap v155: 321 pgs: 9 peering, 312 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Jan 20 13:59:19 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:59:19 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:59:20 compute-0 ceph-mgr[74653]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (monmap changed)...
Jan 20 13:59:20 compute-0 ceph-mgr[74653]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (monmap changed)...
Jan 20 13:59:20 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
Jan 20 13:59:20 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 20 13:59:20 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) v1
Jan 20 13:59:20 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Jan 20 13:59:20 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 13:59:20 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 13:59:20 compute-0 ceph-mgr[74653]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Jan 20 13:59:20 compute-0 ceph-mgr[74653]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
Jan 20 13:59:20 compute-0 sudo[96351]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:59:20 compute-0 sudo[96351]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:59:20 compute-0 sudo[96351]: pam_unix(sudo:session): session closed for user root
Jan 20 13:59:20 compute-0 sudo[96376]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 13:59:20 compute-0 sudo[96376]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:59:20 compute-0 sudo[96376]: pam_unix(sudo:session): session closed for user root
Jan 20 13:59:20 compute-0 sudo[96401]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:59:20 compute-0 sudo[96401]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:59:20 compute-0 sudo[96401]: pam_unix(sudo:session): session closed for user root
Jan 20 13:59:20 compute-0 sudo[96426]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid e399cf45-e6b6-5393-99f1-75c601d3f188
Jan 20 13:59:20 compute-0 sudo[96426]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:59:20 compute-0 podman[96468]: 2026-01-20 13:59:20.584912731 +0000 UTC m=+0.045335151 container create 07e90ee13acd88d5015f5a85ea2fd34e8ed2c7195ede44683bde52b52560ccac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_fermi, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 13:59:20 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 13:59:20 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 13:59:20 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:13:59:20.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 13:59:20 compute-0 systemd[1]: Started libpod-conmon-07e90ee13acd88d5015f5a85ea2fd34e8ed2c7195ede44683bde52b52560ccac.scope.
Jan 20 13:59:20 compute-0 systemd[1]: Started libcrun container.
Jan 20 13:59:20 compute-0 podman[96468]: 2026-01-20 13:59:20.566879225 +0000 UTC m=+0.027301655 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 13:59:20 compute-0 podman[96468]: 2026-01-20 13:59:20.672339533 +0000 UTC m=+0.132761983 container init 07e90ee13acd88d5015f5a85ea2fd34e8ed2c7195ede44683bde52b52560ccac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_fermi, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 13:59:20 compute-0 podman[96468]: 2026-01-20 13:59:20.677936794 +0000 UTC m=+0.138359214 container start 07e90ee13acd88d5015f5a85ea2fd34e8ed2c7195ede44683bde52b52560ccac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_fermi, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 13:59:20 compute-0 naughty_fermi[96484]: 167 167
Jan 20 13:59:20 compute-0 systemd[1]: libpod-07e90ee13acd88d5015f5a85ea2fd34e8ed2c7195ede44683bde52b52560ccac.scope: Deactivated successfully.
Jan 20 13:59:20 compute-0 podman[96468]: 2026-01-20 13:59:20.683832463 +0000 UTC m=+0.144254923 container attach 07e90ee13acd88d5015f5a85ea2fd34e8ed2c7195ede44683bde52b52560ccac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_fermi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 20 13:59:20 compute-0 podman[96468]: 2026-01-20 13:59:20.684136201 +0000 UTC m=+0.144558621 container died 07e90ee13acd88d5015f5a85ea2fd34e8ed2c7195ede44683bde52b52560ccac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_fermi, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True)
Jan 20 13:59:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-ef0be20ec9b41d8e9ffac6793d3130c4df3f0fdb90a6c47e0ac96c9af0372a4f-merged.mount: Deactivated successfully.
Jan 20 13:59:20 compute-0 podman[96468]: 2026-01-20 13:59:20.752274265 +0000 UTC m=+0.212696685 container remove 07e90ee13acd88d5015f5a85ea2fd34e8ed2c7195ede44683bde52b52560ccac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_fermi, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 20 13:59:20 compute-0 systemd[1]: libpod-conmon-07e90ee13acd88d5015f5a85ea2fd34e8ed2c7195ede44683bde52b52560ccac.scope: Deactivated successfully.
Jan 20 13:59:20 compute-0 sudo[96426]: pam_unix(sudo:session): session closed for user root
Jan 20 13:59:20 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 13:59:20 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:59:20 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 13:59:20 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:59:20 compute-0 ceph-mgr[74653]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.wookjv (monmap changed)...
Jan 20 13:59:20 compute-0 ceph-mgr[74653]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.wookjv (monmap changed)...
Jan 20 13:59:20 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.wookjv", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Jan 20 13:59:20 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.wookjv", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 20 13:59:20 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Jan 20 13:59:20 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 20 13:59:20 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 13:59:20 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 13:59:20 compute-0 ceph-mgr[74653]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.wookjv on compute-0
Jan 20 13:59:20 compute-0 ceph-mgr[74653]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.wookjv on compute-0
Jan 20 13:59:20 compute-0 sudo[96504]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:59:20 compute-0 sudo[96504]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:59:20 compute-0 sudo[96504]: pam_unix(sudo:session): session closed for user root
Jan 20 13:59:20 compute-0 ceph-mon[74360]: 4.1b scrub starts
Jan 20 13:59:20 compute-0 ceph-mon[74360]: 4.1b scrub ok
Jan 20 13:59:20 compute-0 ceph-mon[74360]: 5.13 scrub starts
Jan 20 13:59:20 compute-0 ceph-mon[74360]: 5.13 scrub ok
Jan 20 13:59:20 compute-0 ceph-mon[74360]: Reconfiguring mon.compute-0 (monmap changed)...
Jan 20 13:59:20 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 20 13:59:20 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Jan 20 13:59:20 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 13:59:20 compute-0 ceph-mon[74360]: Reconfiguring daemon mon.compute-0 on compute-0
Jan 20 13:59:20 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:59:20 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:59:20 compute-0 ceph-mon[74360]: Reconfiguring mgr.compute-0.wookjv (monmap changed)...
Jan 20 13:59:20 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.wookjv", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 20 13:59:20 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 20 13:59:20 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 13:59:20 compute-0 ceph-mon[74360]: Reconfiguring daemon mgr.compute-0.wookjv on compute-0
Jan 20 13:59:20 compute-0 sudo[96529]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 13:59:20 compute-0 sudo[96529]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:59:20 compute-0 sudo[96529]: pam_unix(sudo:session): session closed for user root
Jan 20 13:59:21 compute-0 sudo[96554]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:59:21 compute-0 sudo[96554]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:59:21 compute-0 sudo[96554]: pam_unix(sudo:session): session closed for user root
Jan 20 13:59:21 compute-0 sudo[96579]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid e399cf45-e6b6-5393-99f1-75c601d3f188
Jan 20 13:59:21 compute-0 sudo[96579]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:59:21 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v156: 321 pgs: 9 peering, 312 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 120 B/s, 0 objects/s recovering
Jan 20 13:59:21 compute-0 podman[96621]: 2026-01-20 13:59:21.374088877 +0000 UTC m=+0.048450035 container create 31e1ae52744a9a4a2bf4001fcb1c327711f4c0c60f4eeefc2b9b5c20be8282f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_hellman, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default)
Jan 20 13:59:21 compute-0 systemd[1]: Started libpod-conmon-31e1ae52744a9a4a2bf4001fcb1c327711f4c0c60f4eeefc2b9b5c20be8282f0.scope.
Jan 20 13:59:21 compute-0 systemd[1]: Started libcrun container.
Jan 20 13:59:21 compute-0 podman[96621]: 2026-01-20 13:59:21.348986482 +0000 UTC m=+0.023347680 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 13:59:21 compute-0 podman[96621]: 2026-01-20 13:59:21.467682215 +0000 UTC m=+0.142043383 container init 31e1ae52744a9a4a2bf4001fcb1c327711f4c0c60f4eeefc2b9b5c20be8282f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_hellman, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True)
Jan 20 13:59:21 compute-0 podman[96621]: 2026-01-20 13:59:21.472443304 +0000 UTC m=+0.146804462 container start 31e1ae52744a9a4a2bf4001fcb1c327711f4c0c60f4eeefc2b9b5c20be8282f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_hellman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 13:59:21 compute-0 pedantic_hellman[96638]: 167 167
Jan 20 13:59:21 compute-0 systemd[1]: libpod-31e1ae52744a9a4a2bf4001fcb1c327711f4c0c60f4eeefc2b9b5c20be8282f0.scope: Deactivated successfully.
Jan 20 13:59:21 compute-0 podman[96621]: 2026-01-20 13:59:21.476420811 +0000 UTC m=+0.150781989 container attach 31e1ae52744a9a4a2bf4001fcb1c327711f4c0c60f4eeefc2b9b5c20be8282f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_hellman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 20 13:59:21 compute-0 conmon[96638]: conmon 31e1ae52744a9a4a2bf4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-31e1ae52744a9a4a2bf4001fcb1c327711f4c0c60f4eeefc2b9b5c20be8282f0.scope/container/memory.events
Jan 20 13:59:21 compute-0 podman[96621]: 2026-01-20 13:59:21.476931254 +0000 UTC m=+0.151292412 container died 31e1ae52744a9a4a2bf4001fcb1c327711f4c0c60f4eeefc2b9b5c20be8282f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_hellman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 13:59:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-30e523637e66c4ce8244c27bc2cdf89d118e0bb09a453fca8dde350a1849e1ac-merged.mount: Deactivated successfully.
Jan 20 13:59:21 compute-0 podman[96621]: 2026-01-20 13:59:21.537576596 +0000 UTC m=+0.211937744 container remove 31e1ae52744a9a4a2bf4001fcb1c327711f4c0c60f4eeefc2b9b5c20be8282f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_hellman, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 13:59:21 compute-0 systemd[1]: libpod-conmon-31e1ae52744a9a4a2bf4001fcb1c327711f4c0c60f4eeefc2b9b5c20be8282f0.scope: Deactivated successfully.
Jan 20 13:59:21 compute-0 sudo[96579]: pam_unix(sudo:session): session closed for user root
Jan 20 13:59:21 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 13:59:21 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:59:21 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 13:59:21 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:59:21 compute-0 ceph-mgr[74653]: [cephadm INFO cephadm.serve] Reconfiguring crash.compute-0 (monmap changed)...
Jan 20 13:59:21 compute-0 ceph-mgr[74653]: log_channel(cephadm) log [INF] : Reconfiguring crash.compute-0 (monmap changed)...
Jan 20 13:59:21 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) v1
Jan 20 13:59:21 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 20 13:59:21 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 13:59:21 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 13:59:21 compute-0 ceph-mgr[74653]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.compute-0 on compute-0
Jan 20 13:59:21 compute-0 ceph-mgr[74653]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.compute-0 on compute-0
Jan 20 13:59:21 compute-0 sudo[96659]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:59:21 compute-0 sudo[96659]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:59:21 compute-0 sudo[96659]: pam_unix(sudo:session): session closed for user root
Jan 20 13:59:21 compute-0 sudo[96684]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 13:59:21 compute-0 sudo[96684]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:59:21 compute-0 sudo[96684]: pam_unix(sudo:session): session closed for user root
Jan 20 13:59:21 compute-0 sudo[96709]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:59:21 compute-0 sudo[96709]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:59:21 compute-0 sudo[96709]: pam_unix(sudo:session): session closed for user root
Jan 20 13:59:21 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 13:59:21 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 13:59:21 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:13:59:21.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 13:59:21 compute-0 sudo[96734]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid e399cf45-e6b6-5393-99f1-75c601d3f188
Jan 20 13:59:21 compute-0 sudo[96734]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:59:22 compute-0 podman[96775]: 2026-01-20 13:59:22.061891145 +0000 UTC m=+0.033437741 container create 7bfff0f2606a443c50e52feb4c9eae552418edf9865a4c45ac683eddce4f9bec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_lumiere, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 13:59:22 compute-0 systemd[1]: Started libpod-conmon-7bfff0f2606a443c50e52feb4c9eae552418edf9865a4c45ac683eddce4f9bec.scope.
Jan 20 13:59:22 compute-0 systemd[1]: Started libcrun container.
Jan 20 13:59:22 compute-0 podman[96775]: 2026-01-20 13:59:22.119490345 +0000 UTC m=+0.091036971 container init 7bfff0f2606a443c50e52feb4c9eae552418edf9865a4c45ac683eddce4f9bec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_lumiere, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 13:59:22 compute-0 podman[96775]: 2026-01-20 13:59:22.124281704 +0000 UTC m=+0.095828300 container start 7bfff0f2606a443c50e52feb4c9eae552418edf9865a4c45ac683eddce4f9bec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_lumiere, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507)
Jan 20 13:59:22 compute-0 podman[96775]: 2026-01-20 13:59:22.127321556 +0000 UTC m=+0.098868172 container attach 7bfff0f2606a443c50e52feb4c9eae552418edf9865a4c45ac683eddce4f9bec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_lumiere, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 13:59:22 compute-0 brave_lumiere[96791]: 167 167
Jan 20 13:59:22 compute-0 systemd[1]: libpod-7bfff0f2606a443c50e52feb4c9eae552418edf9865a4c45ac683eddce4f9bec.scope: Deactivated successfully.
Jan 20 13:59:22 compute-0 podman[96775]: 2026-01-20 13:59:22.129017672 +0000 UTC m=+0.100564268 container died 7bfff0f2606a443c50e52feb4c9eae552418edf9865a4c45ac683eddce4f9bec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_lumiere, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 20 13:59:22 compute-0 podman[96775]: 2026-01-20 13:59:22.047777766 +0000 UTC m=+0.019324382 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 13:59:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-94f11b10d82d384a094dba8d2b5c6e5b0b46ba992f720d6ca8aa0bd7eef2db32-merged.mount: Deactivated successfully.
Jan 20 13:59:22 compute-0 podman[96775]: 2026-01-20 13:59:22.164361343 +0000 UTC m=+0.135907949 container remove 7bfff0f2606a443c50e52feb4c9eae552418edf9865a4c45ac683eddce4f9bec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_lumiere, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 20 13:59:22 compute-0 systemd[1]: libpod-conmon-7bfff0f2606a443c50e52feb4c9eae552418edf9865a4c45ac683eddce4f9bec.scope: Deactivated successfully.
Jan 20 13:59:22 compute-0 sudo[96734]: pam_unix(sudo:session): session closed for user root
Jan 20 13:59:22 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 13:59:22 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:59:22 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 13:59:22 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:59:22 compute-0 ceph-mgr[74653]: [cephadm INFO cephadm.serve] Reconfiguring osd.0 (monmap changed)...
Jan 20 13:59:22 compute-0 ceph-mgr[74653]: log_channel(cephadm) log [INF] : Reconfiguring osd.0 (monmap changed)...
Jan 20 13:59:22 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0) v1
Jan 20 13:59:22 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Jan 20 13:59:22 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 13:59:22 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 13:59:22 compute-0 ceph-mgr[74653]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.0 on compute-0
Jan 20 13:59:22 compute-0 ceph-mgr[74653]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.0 on compute-0
Jan 20 13:59:22 compute-0 sudo[96809]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:59:22 compute-0 sudo[96809]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:59:22 compute-0 sudo[96809]: pam_unix(sudo:session): session closed for user root
Jan 20 13:59:22 compute-0 sudo[96834]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 13:59:22 compute-0 sudo[96834]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:59:22 compute-0 sudo[96834]: pam_unix(sudo:session): session closed for user root
Jan 20 13:59:22 compute-0 sudo[96859]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:59:22 compute-0 sudo[96859]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:59:22 compute-0 sudo[96859]: pam_unix(sudo:session): session closed for user root
Jan 20 13:59:22 compute-0 sudo[96884]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid e399cf45-e6b6-5393-99f1-75c601d3f188
Jan 20 13:59:22 compute-0 sudo[96884]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:59:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 13:59:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 13:59:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 13:59:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 13:59:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 13:59:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 13:59:22 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 13:59:22 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 13:59:22 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:13:59:22.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 13:59:22 compute-0 ceph-mon[74360]: pgmap v156: 321 pgs: 9 peering, 312 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 120 B/s, 0 objects/s recovering
Jan 20 13:59:22 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:59:22 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:59:22 compute-0 ceph-mon[74360]: Reconfiguring crash.compute-0 (monmap changed)...
Jan 20 13:59:22 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 20 13:59:22 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 13:59:22 compute-0 ceph-mon[74360]: Reconfiguring daemon crash.compute-0 on compute-0
Jan 20 13:59:22 compute-0 ceph-mon[74360]: 4.1c scrub starts
Jan 20 13:59:22 compute-0 ceph-mon[74360]: 4.1c scrub ok
Jan 20 13:59:22 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:59:22 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:59:22 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Jan 20 13:59:22 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 13:59:22 compute-0 podman[96926]: 2026-01-20 13:59:22.652873408 +0000 UTC m=+0.034919871 container create 581a4f1c7cdc88b2386d6139d413d028c7d130cf86b8a28aad12801e6a69d9d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_goldwasser, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 20 13:59:22 compute-0 systemd[1]: Started libpod-conmon-581a4f1c7cdc88b2386d6139d413d028c7d130cf86b8a28aad12801e6a69d9d3.scope.
Jan 20 13:59:22 compute-0 systemd[1]: Started libcrun container.
Jan 20 13:59:22 compute-0 podman[96926]: 2026-01-20 13:59:22.7109219 +0000 UTC m=+0.092968363 container init 581a4f1c7cdc88b2386d6139d413d028c7d130cf86b8a28aad12801e6a69d9d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_goldwasser, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 13:59:22 compute-0 podman[96926]: 2026-01-20 13:59:22.716190382 +0000 UTC m=+0.098236865 container start 581a4f1c7cdc88b2386d6139d413d028c7d130cf86b8a28aad12801e6a69d9d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_goldwasser, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 13:59:22 compute-0 vibrant_goldwasser[96942]: 167 167
Jan 20 13:59:22 compute-0 systemd[1]: libpod-581a4f1c7cdc88b2386d6139d413d028c7d130cf86b8a28aad12801e6a69d9d3.scope: Deactivated successfully.
Jan 20 13:59:22 compute-0 podman[96926]: 2026-01-20 13:59:22.720024515 +0000 UTC m=+0.102070978 container attach 581a4f1c7cdc88b2386d6139d413d028c7d130cf86b8a28aad12801e6a69d9d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_goldwasser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 20 13:59:22 compute-0 podman[96926]: 2026-01-20 13:59:22.720260371 +0000 UTC m=+0.102306804 container died 581a4f1c7cdc88b2386d6139d413d028c7d130cf86b8a28aad12801e6a69d9d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_goldwasser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True)
Jan 20 13:59:22 compute-0 podman[96926]: 2026-01-20 13:59:22.638006888 +0000 UTC m=+0.020053351 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 13:59:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-a176d4d1e336f0c1dd0c7658d414c827fe4dd8dcd80c2fb778fc0a514e8048ab-merged.mount: Deactivated successfully.
Jan 20 13:59:22 compute-0 podman[96926]: 2026-01-20 13:59:22.751615595 +0000 UTC m=+0.133662028 container remove 581a4f1c7cdc88b2386d6139d413d028c7d130cf86b8a28aad12801e6a69d9d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_goldwasser, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 13:59:22 compute-0 systemd[1]: libpod-conmon-581a4f1c7cdc88b2386d6139d413d028c7d130cf86b8a28aad12801e6a69d9d3.scope: Deactivated successfully.
Jan 20 13:59:22 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e59 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 13:59:22 compute-0 sudo[96884]: pam_unix(sudo:session): session closed for user root
Jan 20 13:59:22 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 13:59:22 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:59:22 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 13:59:22 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:59:22 compute-0 ceph-mgr[74653]: [cephadm INFO cephadm.serve] Reconfiguring crash.compute-1 (monmap changed)...
Jan 20 13:59:22 compute-0 ceph-mgr[74653]: log_channel(cephadm) log [INF] : Reconfiguring crash.compute-1 (monmap changed)...
Jan 20 13:59:22 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) v1
Jan 20 13:59:22 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 20 13:59:22 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 13:59:22 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 13:59:22 compute-0 ceph-mgr[74653]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.compute-1 on compute-1
Jan 20 13:59:22 compute-0 ceph-mgr[74653]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.compute-1 on compute-1
Jan 20 13:59:22 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 4.12 scrub starts
Jan 20 13:59:23 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 4.12 scrub ok
Jan 20 13:59:23 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v157: 321 pgs: 9 peering, 312 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 120 B/s, 0 objects/s recovering
Jan 20 13:59:23 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 20 13:59:23 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:59:23 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 20 13:59:23 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:59:23 compute-0 ceph-mgr[74653]: [cephadm INFO cephadm.serve] Reconfiguring osd.1 (monmap changed)...
Jan 20 13:59:23 compute-0 ceph-mgr[74653]: log_channel(cephadm) log [INF] : Reconfiguring osd.1 (monmap changed)...
Jan 20 13:59:23 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0) v1
Jan 20 13:59:23 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Jan 20 13:59:23 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 13:59:23 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 13:59:23 compute-0 ceph-mgr[74653]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.1 on compute-1
Jan 20 13:59:23 compute-0 ceph-mgr[74653]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.1 on compute-1
Jan 20 13:59:23 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 13:59:23 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 13:59:23 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:13:59:23.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 13:59:23 compute-0 ceph-mon[74360]: Reconfiguring osd.0 (monmap changed)...
Jan 20 13:59:23 compute-0 ceph-mon[74360]: Reconfiguring daemon osd.0 on compute-0
Jan 20 13:59:23 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:59:23 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:59:23 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 20 13:59:23 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 13:59:23 compute-0 ceph-mon[74360]: 6.19 deep-scrub starts
Jan 20 13:59:23 compute-0 ceph-mon[74360]: 6.19 deep-scrub ok
Jan 20 13:59:23 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:59:23 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:59:23 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Jan 20 13:59:23 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 13:59:24 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 13:59:24 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 13:59:24 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:13:59:24.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 13:59:24 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 20 13:59:24 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:59:24 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 20 13:59:24 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:59:24 compute-0 ceph-mgr[74653]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-1 (monmap changed)...
Jan 20 13:59:24 compute-0 ceph-mgr[74653]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-1 (monmap changed)...
Jan 20 13:59:24 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
Jan 20 13:59:24 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 20 13:59:24 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) v1
Jan 20 13:59:24 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Jan 20 13:59:24 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 13:59:24 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 13:59:24 compute-0 ceph-mgr[74653]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-1 on compute-1
Jan 20 13:59:24 compute-0 ceph-mgr[74653]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-1 on compute-1
Jan 20 13:59:24 compute-0 ceph-mon[74360]: Reconfiguring crash.compute-1 (monmap changed)...
Jan 20 13:59:24 compute-0 ceph-mon[74360]: Reconfiguring daemon crash.compute-1 on compute-1
Jan 20 13:59:24 compute-0 ceph-mon[74360]: 4.12 scrub starts
Jan 20 13:59:24 compute-0 ceph-mon[74360]: 4.12 scrub ok
Jan 20 13:59:24 compute-0 ceph-mon[74360]: pgmap v157: 321 pgs: 9 peering, 312 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 120 B/s, 0 objects/s recovering
Jan 20 13:59:24 compute-0 ceph-mon[74360]: Reconfiguring osd.1 (monmap changed)...
Jan 20 13:59:24 compute-0 ceph-mon[74360]: Reconfiguring daemon osd.1 on compute-1
Jan 20 13:59:24 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:59:24 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:59:24 compute-0 ceph-mon[74360]: Reconfiguring mon.compute-1 (monmap changed)...
Jan 20 13:59:24 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 20 13:59:24 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Jan 20 13:59:24 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 13:59:24 compute-0 ceph-mon[74360]: Reconfiguring daemon mon.compute-1 on compute-1
Jan 20 13:59:25 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v158: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 120 B/s, 0 objects/s recovering
Jan 20 13:59:25 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} v 0) v1
Jan 20 13:59:25 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Jan 20 13:59:25 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 20 13:59:25 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 13:59:25 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 13:59:25 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:13:59:25.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 13:59:25 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:59:25 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 20 13:59:25 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:59:25 compute-0 ceph-mgr[74653]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-2 (monmap changed)...
Jan 20 13:59:25 compute-0 ceph-mgr[74653]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-2 (monmap changed)...
Jan 20 13:59:25 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
Jan 20 13:59:25 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 20 13:59:25 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) v1
Jan 20 13:59:25 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Jan 20 13:59:25 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 13:59:25 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 13:59:25 compute-0 ceph-mgr[74653]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-2 on compute-2
Jan 20 13:59:25 compute-0 ceph-mgr[74653]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-2 on compute-2
Jan 20 13:59:25 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e59 do_prune osdmap full prune enabled
Jan 20 13:59:25 compute-0 ceph-mon[74360]: 7.1d scrub starts
Jan 20 13:59:25 compute-0 ceph-mon[74360]: 7.1d scrub ok
Jan 20 13:59:25 compute-0 ceph-mon[74360]: pgmap v158: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 120 B/s, 0 objects/s recovering
Jan 20 13:59:25 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Jan 20 13:59:25 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:59:25 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:59:25 compute-0 ceph-mon[74360]: Reconfiguring mon.compute-2 (monmap changed)...
Jan 20 13:59:25 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 20 13:59:25 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Jan 20 13:59:25 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 13:59:25 compute-0 ceph-mon[74360]: Reconfiguring daemon mon.compute-2 on compute-2
Jan 20 13:59:25 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Jan 20 13:59:25 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e60 e60: 3 total, 3 up, 3 in
Jan 20 13:59:25 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e60: 3 total, 3 up, 3 in
Jan 20 13:59:26 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 13:59:26 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 13:59:26 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:13:59:26.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 13:59:26 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 20 13:59:26 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:59:26 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 20 13:59:26 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:59:26 compute-0 ceph-mgr[74653]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-2.gunjko (monmap changed)...
Jan 20 13:59:26 compute-0 ceph-mgr[74653]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-2.gunjko (monmap changed)...
Jan 20 13:59:26 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-2.gunjko", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Jan 20 13:59:26 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.gunjko", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 20 13:59:26 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Jan 20 13:59:26 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 20 13:59:26 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 13:59:26 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 13:59:26 compute-0 ceph-mgr[74653]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-2.gunjko on compute-2
Jan 20 13:59:26 compute-0 ceph-mgr[74653]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-2.gunjko on compute-2
Jan 20 13:59:26 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 4.16 scrub starts
Jan 20 13:59:26 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 4.16 scrub ok
Jan 20 13:59:26 compute-0 ceph-mon[74360]: 2.15 scrub starts
Jan 20 13:59:26 compute-0 ceph-mon[74360]: 2.15 scrub ok
Jan 20 13:59:26 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Jan 20 13:59:26 compute-0 ceph-mon[74360]: osdmap e60: 3 total, 3 up, 3 in
Jan 20 13:59:26 compute-0 ceph-mon[74360]: 5.1b scrub starts
Jan 20 13:59:26 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:59:26 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:59:26 compute-0 ceph-mon[74360]: Reconfiguring mgr.compute-2.gunjko (monmap changed)...
Jan 20 13:59:26 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.gunjko", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 20 13:59:26 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 20 13:59:26 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 13:59:26 compute-0 ceph-mon[74360]: Reconfiguring daemon mgr.compute-2.gunjko on compute-2
Jan 20 13:59:27 compute-0 sudo[96993]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nncpyzgtrngplsmbsqlntnbasydwkckd ; /usr/bin/python3'
Jan 20 13:59:27 compute-0 sudo[96993]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:59:27 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v160: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 116 B/s, 0 objects/s recovering
Jan 20 13:59:27 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} v 0) v1
Jan 20 13:59:27 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Jan 20 13:59:27 compute-0 python3[96995]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v18 --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user info --uid openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 13:59:27 compute-0 podman[96996]: 2026-01-20 13:59:27.371982924 +0000 UTC m=+0.049923744 container create 07b8d14b6847f509f489a41bc2ed4c0a91e66da97da720b0f99fcd337fa0ef5e (image=quay.io/ceph/ceph:v18, name=nervous_brattain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 20 13:59:27 compute-0 systemd[1]: Started libpod-conmon-07b8d14b6847f509f489a41bc2ed4c0a91e66da97da720b0f99fcd337fa0ef5e.scope.
Jan 20 13:59:27 compute-0 systemd[1]: Started libcrun container.
Jan 20 13:59:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37a809e86e5d952aa5bd02a35f08a7e8e47826db17e6e0afacba4491381c1b11/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 13:59:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37a809e86e5d952aa5bd02a35f08a7e8e47826db17e6e0afacba4491381c1b11/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 13:59:27 compute-0 podman[96996]: 2026-01-20 13:59:27.444541777 +0000 UTC m=+0.122482607 container init 07b8d14b6847f509f489a41bc2ed4c0a91e66da97da720b0f99fcd337fa0ef5e (image=quay.io/ceph/ceph:v18, name=nervous_brattain, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 20 13:59:27 compute-0 podman[96996]: 2026-01-20 13:59:27.352087499 +0000 UTC m=+0.030028389 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 20 13:59:27 compute-0 podman[96996]: 2026-01-20 13:59:27.451311819 +0000 UTC m=+0.129252639 container start 07b8d14b6847f509f489a41bc2ed4c0a91e66da97da720b0f99fcd337fa0ef5e (image=quay.io/ceph/ceph:v18, name=nervous_brattain, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 20 13:59:27 compute-0 podman[96996]: 2026-01-20 13:59:27.45472438 +0000 UTC m=+0.132665230 container attach 07b8d14b6847f509f489a41bc2ed4c0a91e66da97da720b0f99fcd337fa0ef5e (image=quay.io/ceph/ceph:v18, name=nervous_brattain, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 20 13:59:27 compute-0 nervous_brattain[97011]: could not fetch user info: no user info saved
Jan 20 13:59:27 compute-0 systemd[1]: libpod-07b8d14b6847f509f489a41bc2ed4c0a91e66da97da720b0f99fcd337fa0ef5e.scope: Deactivated successfully.
Jan 20 13:59:27 compute-0 podman[96996]: 2026-01-20 13:59:27.715519919 +0000 UTC m=+0.393460719 container died 07b8d14b6847f509f489a41bc2ed4c0a91e66da97da720b0f99fcd337fa0ef5e (image=quay.io/ceph/ceph:v18, name=nervous_brattain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 20 13:59:27 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 20 13:59:27 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:59:27 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 20 13:59:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-37a809e86e5d952aa5bd02a35f08a7e8e47826db17e6e0afacba4491381c1b11-merged.mount: Deactivated successfully.
Jan 20 13:59:27 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:59:27 compute-0 podman[96996]: 2026-01-20 13:59:27.757351245 +0000 UTC m=+0.435292085 container remove 07b8d14b6847f509f489a41bc2ed4c0a91e66da97da720b0f99fcd337fa0ef5e (image=quay.io/ceph/ceph:v18, name=nervous_brattain, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507)
Jan 20 13:59:27 compute-0 systemd[1]: libpod-conmon-07b8d14b6847f509f489a41bc2ed4c0a91e66da97da720b0f99fcd337fa0ef5e.scope: Deactivated successfully.
Jan 20 13:59:27 compute-0 sudo[96993]: pam_unix(sudo:session): session closed for user root
Jan 20 13:59:27 compute-0 sudo[97108]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:59:27 compute-0 sudo[97108]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:59:27 compute-0 sudo[97108]: pam_unix(sudo:session): session closed for user root
Jan 20 13:59:27 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 13:59:27 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 13:59:27 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:13:59:27.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 13:59:27 compute-0 sudo[97133]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 13:59:27 compute-0 sudo[97133]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:59:27 compute-0 sudo[97133]: pam_unix(sudo:session): session closed for user root
Jan 20 13:59:27 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e60 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 13:59:27 compute-0 sudo[97158]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:59:27 compute-0 sudo[97158]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:59:27 compute-0 sudo[97158]: pam_unix(sudo:session): session closed for user root
Jan 20 13:59:27 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 4.17 deep-scrub starts
Jan 20 13:59:27 compute-0 sudo[97206]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-llafpltslkpkjynvskwlkuysxgiokrfn ; /usr/bin/python3'
Jan 20 13:59:27 compute-0 sudo[97206]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:59:27 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 4.17 deep-scrub ok
Jan 20 13:59:27 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e60 do_prune osdmap full prune enabled
Jan 20 13:59:27 compute-0 ceph-mon[74360]: 5.1b scrub ok
Jan 20 13:59:27 compute-0 ceph-mon[74360]: 4.19 scrub starts
Jan 20 13:59:27 compute-0 ceph-mon[74360]: 4.19 scrub ok
Jan 20 13:59:27 compute-0 ceph-mon[74360]: 4.16 scrub starts
Jan 20 13:59:27 compute-0 ceph-mon[74360]: 4.16 scrub ok
Jan 20 13:59:27 compute-0 ceph-mon[74360]: pgmap v160: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 116 B/s, 0 objects/s recovering
Jan 20 13:59:27 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Jan 20 13:59:27 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:59:27 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:59:28 compute-0 sudo[97207]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Jan 20 13:59:28 compute-0 sudo[97207]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:59:28 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Jan 20 13:59:28 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e61 e61: 3 total, 3 up, 3 in
Jan 20 13:59:28 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e61: 3 total, 3 up, 3 in
Jan 20 13:59:28 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 61 pg[9.17( v 51'1000 (0'0,51'1000] local-lis/les=54/55 n=5 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=61 pruub=14.746667862s) [2] r=-1 lpr=61 pi=[54,61)/1 crt=51'1000 lcod 0'0 mlcod 0'0 active pruub 145.859985352s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:59:28 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 61 pg[9.b( v 51'1000 (0'0,51'1000] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=61 pruub=14.747064590s) [2] r=-1 lpr=61 pi=[54,61)/1 crt=51'1000 lcod 0'0 mlcod 0'0 active pruub 145.860580444s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:59:28 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 61 pg[9.f( v 51'1000 (0'0,51'1000] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=61 pruub=14.746970177s) [2] r=-1 lpr=61 pi=[54,61)/1 crt=51'1000 lcod 0'0 mlcod 0'0 active pruub 145.860580444s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:59:28 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 61 pg[9.f( v 51'1000 (0'0,51'1000] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=61 pruub=14.746931076s) [2] r=-1 lpr=61 pi=[54,61)/1 crt=51'1000 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 145.860580444s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:59:28 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 61 pg[9.17( v 51'1000 (0'0,51'1000] local-lis/les=54/55 n=5 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=61 pruub=14.746482849s) [2] r=-1 lpr=61 pi=[54,61)/1 crt=51'1000 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 145.859985352s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:59:28 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 61 pg[9.b( v 51'1000 (0'0,51'1000] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=61 pruub=14.746792793s) [2] r=-1 lpr=61 pi=[54,61)/1 crt=51'1000 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 145.860580444s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:59:28 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 61 pg[9.3( v 51'1000 (0'0,51'1000] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=61 pruub=14.746430397s) [2] r=-1 lpr=61 pi=[54,61)/1 crt=51'1000 lcod 0'0 mlcod 0'0 active pruub 145.860366821s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:59:28 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 61 pg[9.7( v 51'1000 (0'0,51'1000] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=61 pruub=14.753223419s) [2] r=-1 lpr=61 pi=[54,61)/1 crt=51'1000 lcod 0'0 mlcod 0'0 active pruub 145.867248535s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:59:28 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 61 pg[9.3( v 51'1000 (0'0,51'1000] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=61 pruub=14.746358871s) [2] r=-1 lpr=61 pi=[54,61)/1 crt=51'1000 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 145.860366821s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:59:28 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 61 pg[9.7( v 51'1000 (0'0,51'1000] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=61 pruub=14.753164291s) [2] r=-1 lpr=61 pi=[54,61)/1 crt=51'1000 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 145.867248535s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:59:28 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 61 pg[9.1f( v 51'1000 (0'0,51'1000] local-lis/les=54/55 n=5 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=61 pruub=14.753784180s) [2] r=-1 lpr=61 pi=[54,61)/1 crt=51'1000 lcod 0'0 mlcod 0'0 active pruub 145.867980957s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:59:28 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 61 pg[9.1f( v 51'1000 (0'0,51'1000] local-lis/les=54/55 n=5 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=61 pruub=14.753764153s) [2] r=-1 lpr=61 pi=[54,61)/1 crt=51'1000 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 145.867980957s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:59:28 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 61 pg[9.1b( v 51'1000 (0'0,51'1000] local-lis/les=54/55 n=5 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=61 pruub=14.753373146s) [2] r=-1 lpr=61 pi=[54,61)/1 crt=51'1000 lcod 0'0 mlcod 0'0 active pruub 145.867767334s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:59:28 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 61 pg[9.1b( v 51'1000 (0'0,51'1000] local-lis/les=54/55 n=5 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=61 pruub=14.753354073s) [2] r=-1 lpr=61 pi=[54,61)/1 crt=51'1000 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 145.867767334s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:59:28 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 61 pg[9.13( v 51'1000 (0'0,51'1000] local-lis/les=54/55 n=5 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=61 pruub=14.753669739s) [2] r=-1 lpr=61 pi=[54,61)/1 crt=51'1000 lcod 0'0 mlcod 0'0 active pruub 145.868194580s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:59:28 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 61 pg[9.13( v 51'1000 (0'0,51'1000] local-lis/les=54/55 n=5 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=61 pruub=14.753647804s) [2] r=-1 lpr=61 pi=[54,61)/1 crt=51'1000 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 145.868194580s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:59:28 compute-0 python3[97215]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v18 --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user create --uid="openstack" --display-name "openstack" _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 13:59:28 compute-0 podman[97234]: 2026-01-20 13:59:28.172086145 +0000 UTC m=+0.050537231 container create cf18f53c179b673f991602e53e61144cf1c07a6367f7bb5124cc898f6a88028d (image=quay.io/ceph/ceph:v18, name=sleepy_rhodes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 13:59:28 compute-0 systemd[1]: Started libpod-conmon-cf18f53c179b673f991602e53e61144cf1c07a6367f7bb5124cc898f6a88028d.scope.
Jan 20 13:59:28 compute-0 systemd[1]: Started libcrun container.
Jan 20 13:59:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e02f79a2f248f0d53da451ad2e13c74ea9c4e38bb8034350850d30745b2638c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 13:59:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e02f79a2f248f0d53da451ad2e13c74ea9c4e38bb8034350850d30745b2638c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 13:59:28 compute-0 podman[97234]: 2026-01-20 13:59:28.237055613 +0000 UTC m=+0.115506729 container init cf18f53c179b673f991602e53e61144cf1c07a6367f7bb5124cc898f6a88028d (image=quay.io/ceph/ceph:v18, name=sleepy_rhodes, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 13:59:28 compute-0 podman[97234]: 2026-01-20 13:59:28.24175517 +0000 UTC m=+0.120206256 container start cf18f53c179b673f991602e53e61144cf1c07a6367f7bb5124cc898f6a88028d (image=quay.io/ceph/ceph:v18, name=sleepy_rhodes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 20 13:59:28 compute-0 podman[97234]: 2026-01-20 13:59:28.150672349 +0000 UTC m=+0.029123525 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 20 13:59:28 compute-0 podman[97234]: 2026-01-20 13:59:28.244631037 +0000 UTC m=+0.123082143 container attach cf18f53c179b673f991602e53e61144cf1c07a6367f7bb5124cc898f6a88028d (image=quay.io/ceph/ceph:v18, name=sleepy_rhodes, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 20 13:59:28 compute-0 podman[97381]: 2026-01-20 13:59:28.491985153 +0000 UTC m=+0.056772689 container exec a602f19ce9ef2d4922e558036fcbd51fac25abd19d28d7df857e5fbe8f959ba3 (image=quay.io/ceph/ceph:v18, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True)
Jan 20 13:59:28 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 13:59:28 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 13:59:28 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:13:59:28.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 13:59:28 compute-0 podman[97381]: 2026-01-20 13:59:28.641796274 +0000 UTC m=+0.206583800 container exec_died a602f19ce9ef2d4922e558036fcbd51fac25abd19d28d7df857e5fbe8f959ba3 (image=quay.io/ceph/ceph:v18, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mon-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 13:59:29 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e61 do_prune osdmap full prune enabled
Jan 20 13:59:29 compute-0 ceph-mon[74360]: 4.17 deep-scrub starts
Jan 20 13:59:29 compute-0 ceph-mon[74360]: 4.17 deep-scrub ok
Jan 20 13:59:29 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Jan 20 13:59:29 compute-0 ceph-mon[74360]: osdmap e61: 3 total, 3 up, 3 in
Jan 20 13:59:29 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e62 e62: 3 total, 3 up, 3 in
Jan 20 13:59:29 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e62: 3 total, 3 up, 3 in
Jan 20 13:59:29 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 62 pg[9.17( v 51'1000 (0'0,51'1000] local-lis/les=54/55 n=5 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=62) [2]/[0] r=0 lpr=62 pi=[54,62)/1 crt=51'1000 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:59:29 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 62 pg[9.17( v 51'1000 (0'0,51'1000] local-lis/les=54/55 n=5 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=62) [2]/[0] r=0 lpr=62 pi=[54,62)/1 crt=51'1000 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:29 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 62 pg[9.3( v 51'1000 (0'0,51'1000] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=62) [2]/[0] r=0 lpr=62 pi=[54,62)/1 crt=51'1000 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:59:29 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 62 pg[9.3( v 51'1000 (0'0,51'1000] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=62) [2]/[0] r=0 lpr=62 pi=[54,62)/1 crt=51'1000 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:29 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 62 pg[9.f( v 51'1000 (0'0,51'1000] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=62) [2]/[0] r=0 lpr=62 pi=[54,62)/1 crt=51'1000 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:59:29 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 62 pg[9.f( v 51'1000 (0'0,51'1000] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=62) [2]/[0] r=0 lpr=62 pi=[54,62)/1 crt=51'1000 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:29 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 62 pg[9.b( v 51'1000 (0'0,51'1000] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=62) [2]/[0] r=0 lpr=62 pi=[54,62)/1 crt=51'1000 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:59:29 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 62 pg[9.7( v 51'1000 (0'0,51'1000] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=62) [2]/[0] r=0 lpr=62 pi=[54,62)/1 crt=51'1000 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:59:29 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 62 pg[9.7( v 51'1000 (0'0,51'1000] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=62) [2]/[0] r=0 lpr=62 pi=[54,62)/1 crt=51'1000 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:29 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 62 pg[9.1b( v 51'1000 (0'0,51'1000] local-lis/les=54/55 n=5 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=62) [2]/[0] r=0 lpr=62 pi=[54,62)/1 crt=51'1000 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:59:29 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 62 pg[9.1b( v 51'1000 (0'0,51'1000] local-lis/les=54/55 n=5 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=62) [2]/[0] r=0 lpr=62 pi=[54,62)/1 crt=51'1000 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:29 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 62 pg[9.1f( v 51'1000 (0'0,51'1000] local-lis/les=54/55 n=5 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=62) [2]/[0] r=0 lpr=62 pi=[54,62)/1 crt=51'1000 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:59:29 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 62 pg[9.1f( v 51'1000 (0'0,51'1000] local-lis/les=54/55 n=5 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=62) [2]/[0] r=0 lpr=62 pi=[54,62)/1 crt=51'1000 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:29 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 62 pg[9.b( v 51'1000 (0'0,51'1000] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=62) [2]/[0] r=0 lpr=62 pi=[54,62)/1 crt=51'1000 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:29 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 62 pg[9.13( v 51'1000 (0'0,51'1000] local-lis/les=54/55 n=5 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=62) [2]/[0] r=0 lpr=62 pi=[54,62)/1 crt=51'1000 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:59:29 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 62 pg[9.13( v 51'1000 (0'0,51'1000] local-lis/les=54/55 n=5 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=62) [2]/[0] r=0 lpr=62 pi=[54,62)/1 crt=51'1000 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:29 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v163: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 853 B/s rd, 0 op/s; 0 B/s, 0 objects/s recovering
Jan 20 13:59:29 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} v 0) v1
Jan 20 13:59:29 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Jan 20 13:59:29 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 20 13:59:29 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:59:29 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 20 13:59:29 compute-0 podman[97536]: 2026-01-20 13:59:29.407621482 +0000 UTC m=+0.082388149 container exec a2973a514c852ff316e6ad2ff84585210b4ad01d86cdf2de06f634d9390a88e8 (image=quay.io/ceph/haproxy:2.3, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-haproxy-rgw-default-compute-0-nqkboe)
Jan 20 13:59:29 compute-0 podman[97536]: 2026-01-20 13:59:29.419256815 +0000 UTC m=+0.094023472 container exec_died a2973a514c852ff316e6ad2ff84585210b4ad01d86cdf2de06f634d9390a88e8 (image=quay.io/ceph/haproxy:2.3, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-haproxy-rgw-default-compute-0-nqkboe)
Jan 20 13:59:29 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:59:29 compute-0 sudo[97558]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:59:29 compute-0 sudo[97558]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:59:29 compute-0 sudo[97558]: pam_unix(sudo:session): session closed for user root
Jan 20 13:59:29 compute-0 sudo[97605]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:59:29 compute-0 sudo[97605]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:59:29 compute-0 sudo[97605]: pam_unix(sudo:session): session closed for user root
Jan 20 13:59:29 compute-0 podman[97652]: 2026-01-20 13:59:29.697109872 +0000 UTC m=+0.065151925 container exec 216d71b5dad97102f8a0d90f660e553dbff152f4aa28ae453da231535e09b878 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-keepalived-rgw-default-compute-0-gcjsxe, build-date=2023-02-22T09:23:20, vcs-type=git, io.buildah.version=1.28.2, io.k8s.display-name=Keepalived on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, architecture=x86_64, description=keepalived for Ceph, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, name=keepalived, version=2.2.4, io.openshift.expose-services=, release=1793, distribution-scope=public, summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=keepalived-container)
Jan 20 13:59:29 compute-0 podman[97652]: 2026-01-20 13:59:29.709469175 +0000 UTC m=+0.077511238 container exec_died 216d71b5dad97102f8a0d90f660e553dbff152f4aa28ae453da231535e09b878 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-keepalived-rgw-default-compute-0-gcjsxe, release=1793, distribution-scope=public, summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, version=2.2.4, io.openshift.tags=Ceph keepalived, io.buildah.version=1.28.2, description=keepalived for Ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Keepalived on RHEL 9, vendor=Red Hat, Inc., build-date=2023-02-22T09:23:20, com.redhat.component=keepalived-container, vcs-type=git)
Jan 20 13:59:29 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 20 13:59:29 compute-0 sudo[97207]: pam_unix(sudo:session): session closed for user root
Jan 20 13:59:29 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:59:29 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 13:59:29 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 20 13:59:29 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:59:29 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 13:59:29 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:59:29 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:59:29 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 13:59:29 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 13:59:29 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:13:59:29.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 13:59:30 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e62 do_prune osdmap full prune enabled
Jan 20 13:59:30 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Jan 20 13:59:30 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e63 e63: 3 total, 3 up, 3 in
Jan 20 13:59:30 compute-0 ceph-mon[74360]: osdmap e62: 3 total, 3 up, 3 in
Jan 20 13:59:30 compute-0 ceph-mon[74360]: pgmap v163: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 853 B/s rd, 0 op/s; 0 B/s, 0 objects/s recovering
Jan 20 13:59:30 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Jan 20 13:59:30 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:59:30 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:59:30 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:59:30 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:59:30 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:59:30 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:59:30 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e63: 3 total, 3 up, 3 in
Jan 20 13:59:30 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 63 pg[9.13( v 51'1000 (0'0,51'1000] local-lis/les=62/63 n=5 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=62) [2]/[0] async=[2] r=0 lpr=62 pi=[54,62)/1 crt=51'1000 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:30 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 63 pg[9.7( v 51'1000 (0'0,51'1000] local-lis/les=62/63 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=62) [2]/[0] async=[2] r=0 lpr=62 pi=[54,62)/1 crt=51'1000 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:30 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 63 pg[9.1b( v 51'1000 (0'0,51'1000] local-lis/les=62/63 n=5 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=62) [2]/[0] async=[2] r=0 lpr=62 pi=[54,62)/1 crt=51'1000 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:30 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 63 pg[9.3( v 51'1000 (0'0,51'1000] local-lis/les=62/63 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=62) [2]/[0] async=[2] r=0 lpr=62 pi=[54,62)/1 crt=51'1000 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:30 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 63 pg[9.b( v 51'1000 (0'0,51'1000] local-lis/les=62/63 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=62) [2]/[0] async=[2] r=0 lpr=62 pi=[54,62)/1 crt=51'1000 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:30 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 63 pg[9.17( v 51'1000 (0'0,51'1000] local-lis/les=62/63 n=5 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=62) [2]/[0] async=[2] r=0 lpr=62 pi=[54,62)/1 crt=51'1000 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:30 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 63 pg[9.f( v 51'1000 (0'0,51'1000] local-lis/les=62/63 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=62) [2]/[0] async=[2] r=0 lpr=62 pi=[54,62)/1 crt=51'1000 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:30 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 63 pg[9.1f( v 51'1000 (0'0,51'1000] local-lis/les=62/63 n=5 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=62) [2]/[0] async=[2] r=0 lpr=62 pi=[54,62)/1 crt=51'1000 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:30 compute-0 sleepy_rhodes[97274]: {
Jan 20 13:59:30 compute-0 sleepy_rhodes[97274]:     "user_id": "openstack",
Jan 20 13:59:30 compute-0 sleepy_rhodes[97274]:     "display_name": "openstack",
Jan 20 13:59:30 compute-0 sleepy_rhodes[97274]:     "email": "",
Jan 20 13:59:30 compute-0 sleepy_rhodes[97274]:     "suspended": 0,
Jan 20 13:59:30 compute-0 sleepy_rhodes[97274]:     "max_buckets": 1000,
Jan 20 13:59:30 compute-0 sleepy_rhodes[97274]:     "subusers": [],
Jan 20 13:59:30 compute-0 sleepy_rhodes[97274]:     "keys": [
Jan 20 13:59:30 compute-0 sleepy_rhodes[97274]:         {
Jan 20 13:59:30 compute-0 sleepy_rhodes[97274]:             "user": "openstack",
Jan 20 13:59:30 compute-0 sleepy_rhodes[97274]:             "access_key": "1GQ487KKW1Q4XQ7QUEJW",
Jan 20 13:59:30 compute-0 sleepy_rhodes[97274]:             "secret_key": "y5MvuABT6rVR4rlSsadSoQpBE4dAxqsYvMjbcl6R"
Jan 20 13:59:30 compute-0 sleepy_rhodes[97274]:         }
Jan 20 13:59:30 compute-0 sleepy_rhodes[97274]:     ],
Jan 20 13:59:30 compute-0 sleepy_rhodes[97274]:     "swift_keys": [],
Jan 20 13:59:30 compute-0 sleepy_rhodes[97274]:     "caps": [],
Jan 20 13:59:30 compute-0 sleepy_rhodes[97274]:     "op_mask": "read, write, delete",
Jan 20 13:59:30 compute-0 sleepy_rhodes[97274]:     "default_placement": "",
Jan 20 13:59:30 compute-0 sleepy_rhodes[97274]:     "default_storage_class": "",
Jan 20 13:59:30 compute-0 sleepy_rhodes[97274]:     "placement_tags": [],
Jan 20 13:59:30 compute-0 sleepy_rhodes[97274]:     "bucket_quota": {
Jan 20 13:59:30 compute-0 sleepy_rhodes[97274]:         "enabled": false,
Jan 20 13:59:30 compute-0 sleepy_rhodes[97274]:         "check_on_raw": false,
Jan 20 13:59:30 compute-0 sleepy_rhodes[97274]:         "max_size": -1,
Jan 20 13:59:30 compute-0 sleepy_rhodes[97274]:         "max_size_kb": 0,
Jan 20 13:59:30 compute-0 sleepy_rhodes[97274]:         "max_objects": -1
Jan 20 13:59:30 compute-0 sleepy_rhodes[97274]:     },
Jan 20 13:59:30 compute-0 sleepy_rhodes[97274]:     "user_quota": {
Jan 20 13:59:30 compute-0 sleepy_rhodes[97274]:         "enabled": false,
Jan 20 13:59:30 compute-0 sleepy_rhodes[97274]:         "check_on_raw": false,
Jan 20 13:59:30 compute-0 sleepy_rhodes[97274]:         "max_size": -1,
Jan 20 13:59:30 compute-0 sleepy_rhodes[97274]:         "max_size_kb": 0,
Jan 20 13:59:30 compute-0 sleepy_rhodes[97274]:         "max_objects": -1
Jan 20 13:59:30 compute-0 sleepy_rhodes[97274]:     },
Jan 20 13:59:30 compute-0 sleepy_rhodes[97274]:     "temp_url_keys": [],
Jan 20 13:59:30 compute-0 sleepy_rhodes[97274]:     "type": "rgw",
Jan 20 13:59:30 compute-0 sleepy_rhodes[97274]:     "mfa_ids": []
Jan 20 13:59:30 compute-0 sleepy_rhodes[97274]: }
Jan 20 13:59:30 compute-0 sleepy_rhodes[97274]: 
Jan 20 13:59:30 compute-0 systemd[1]: libpod-cf18f53c179b673f991602e53e61144cf1c07a6367f7bb5124cc898f6a88028d.scope: Deactivated successfully.
Jan 20 13:59:30 compute-0 conmon[97274]: conmon cf18f53c179b673f9916 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-cf18f53c179b673f991602e53e61144cf1c07a6367f7bb5124cc898f6a88028d.scope/container/memory.events
Jan 20 13:59:30 compute-0 podman[97234]: 2026-01-20 13:59:30.277687435 +0000 UTC m=+2.156138561 container died cf18f53c179b673f991602e53e61144cf1c07a6367f7bb5124cc898f6a88028d (image=quay.io/ceph/ceph:v18, name=sleepy_rhodes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 13:59:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-9e02f79a2f248f0d53da451ad2e13c74ea9c4e38bb8034350850d30745b2638c-merged.mount: Deactivated successfully.
Jan 20 13:59:30 compute-0 podman[97234]: 2026-01-20 13:59:30.33175205 +0000 UTC m=+2.210203166 container remove cf18f53c179b673f991602e53e61144cf1c07a6367f7bb5124cc898f6a88028d (image=quay.io/ceph/ceph:v18, name=sleepy_rhodes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 13:59:30 compute-0 systemd[1]: libpod-conmon-cf18f53c179b673f991602e53e61144cf1c07a6367f7bb5124cc898f6a88028d.scope: Deactivated successfully.
Jan 20 13:59:30 compute-0 sudo[97206]: pam_unix(sudo:session): session closed for user root
Jan 20 13:59:30 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 13:59:30 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 13:59:30 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 20 13:59:30 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 13:59:30 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 20 13:59:30 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:59:30 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 5ecc8517-91a3-4002-b6d8-a6eaf76c25ed does not exist
Jan 20 13:59:30 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev f615ee34-bd04-425a-a537-350085a2de2f does not exist
Jan 20 13:59:30 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 35d51e38-3537-4d15-a868-3e50d55235b1 does not exist
Jan 20 13:59:30 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 20 13:59:30 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 13:59:30 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 20 13:59:30 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 13:59:30 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 13:59:30 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 13:59:30 compute-0 sudo[97717]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:59:30 compute-0 sudo[97717]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:59:30 compute-0 sudo[97717]: pam_unix(sudo:session): session closed for user root
Jan 20 13:59:30 compute-0 sudo[97742]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 13:59:30 compute-0 sudo[97742]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:59:30 compute-0 sudo[97742]: pam_unix(sudo:session): session closed for user root
Jan 20 13:59:30 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 13:59:30 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 13:59:30 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:13:59:30.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 13:59:30 compute-0 sudo[97767]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:59:30 compute-0 sudo[97767]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:59:30 compute-0 sudo[97767]: pam_unix(sudo:session): session closed for user root
Jan 20 13:59:30 compute-0 sudo[97792]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 13:59:30 compute-0 sudo[97792]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:59:31 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e63 do_prune osdmap full prune enabled
Jan 20 13:59:31 compute-0 podman[97858]: 2026-01-20 13:59:31.104443732 +0000 UTC m=+0.072504822 container create de9077d93566ec01e11ea73eec350f30eea049f1bd2ec685d2a83cb36cc9a07f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_poincare, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 13:59:31 compute-0 ceph-mon[74360]: 4.e deep-scrub starts
Jan 20 13:59:31 compute-0 ceph-mon[74360]: 4.e deep-scrub ok
Jan 20 13:59:31 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Jan 20 13:59:31 compute-0 ceph-mon[74360]: osdmap e63: 3 total, 3 up, 3 in
Jan 20 13:59:31 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 13:59:31 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 13:59:31 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:59:31 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 13:59:31 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 13:59:31 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 13:59:31 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e64 e64: 3 total, 3 up, 3 in
Jan 20 13:59:31 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e64: 3 total, 3 up, 3 in
Jan 20 13:59:31 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 64 pg[9.17( v 51'1000 (0'0,51'1000] local-lis/les=62/63 n=5 ec=54/45 lis/c=62/54 les/c/f=63/55/0 sis=64 pruub=14.983066559s) [2] async=[2] r=-1 lpr=64 pi=[54,64)/1 crt=51'1000 lcod 0'0 mlcod 0'0 active pruub 149.207901001s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:59:31 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 64 pg[9.17( v 51'1000 (0'0,51'1000] local-lis/les=62/63 n=5 ec=54/45 lis/c=62/54 les/c/f=63/55/0 sis=64 pruub=14.982918739s) [2] r=-1 lpr=64 pi=[54,64)/1 crt=51'1000 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 149.207901001s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:59:31 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 64 pg[9.3( v 51'1000 (0'0,51'1000] local-lis/les=62/63 n=6 ec=54/45 lis/c=62/54 les/c/f=63/55/0 sis=64 pruub=14.982608795s) [2] async=[2] r=-1 lpr=64 pi=[54,64)/1 crt=51'1000 lcod 0'0 mlcod 0'0 active pruub 149.207824707s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:59:31 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 64 pg[9.3( v 51'1000 (0'0,51'1000] local-lis/les=62/63 n=6 ec=54/45 lis/c=62/54 les/c/f=63/55/0 sis=64 pruub=14.982529640s) [2] r=-1 lpr=64 pi=[54,64)/1 crt=51'1000 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 149.207824707s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:59:31 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 64 pg[9.f( v 51'1000 (0'0,51'1000] local-lis/les=62/63 n=6 ec=54/45 lis/c=62/54 les/c/f=63/55/0 sis=64 pruub=14.982531548s) [2] async=[2] r=-1 lpr=64 pi=[54,64)/1 crt=51'1000 lcod 0'0 mlcod 0'0 active pruub 149.208007812s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:59:31 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 64 pg[9.f( v 51'1000 (0'0,51'1000] local-lis/les=62/63 n=6 ec=54/45 lis/c=62/54 les/c/f=63/55/0 sis=64 pruub=14.982483864s) [2] r=-1 lpr=64 pi=[54,64)/1 crt=51'1000 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 149.208007812s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:59:31 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 64 pg[9.b( v 51'1000 (0'0,51'1000] local-lis/les=62/63 n=6 ec=54/45 lis/c=62/54 les/c/f=63/55/0 sis=64 pruub=14.982384682s) [2] async=[2] r=-1 lpr=64 pi=[54,64)/1 crt=51'1000 lcod 0'0 mlcod 0'0 active pruub 149.207855225s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:59:31 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 64 pg[9.b( v 51'1000 (0'0,51'1000] local-lis/les=62/63 n=6 ec=54/45 lis/c=62/54 les/c/f=63/55/0 sis=64 pruub=14.982150078s) [2] r=-1 lpr=64 pi=[54,64)/1 crt=51'1000 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 149.207855225s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:59:31 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 64 pg[9.1b( v 51'1000 (0'0,51'1000] local-lis/les=62/63 n=5 ec=54/45 lis/c=62/54 les/c/f=63/55/0 sis=64 pruub=14.981492043s) [2] async=[2] r=-1 lpr=64 pi=[54,64)/1 crt=51'1000 lcod 0'0 mlcod 0'0 active pruub 149.207550049s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:59:31 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 64 pg[9.1b( v 51'1000 (0'0,51'1000] local-lis/les=62/63 n=5 ec=54/45 lis/c=62/54 les/c/f=63/55/0 sis=64 pruub=14.981457710s) [2] r=-1 lpr=64 pi=[54,64)/1 crt=51'1000 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 149.207550049s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:59:31 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 64 pg[9.7( v 51'1000 (0'0,51'1000] local-lis/les=62/63 n=6 ec=54/45 lis/c=62/54 les/c/f=63/55/0 sis=64 pruub=14.981485367s) [2] async=[2] r=-1 lpr=64 pi=[54,64)/1 crt=51'1000 lcod 0'0 mlcod 0'0 active pruub 149.207534790s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:59:31 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 64 pg[9.7( v 51'1000 (0'0,51'1000] local-lis/les=62/63 n=6 ec=54/45 lis/c=62/54 les/c/f=63/55/0 sis=64 pruub=14.981260300s) [2] r=-1 lpr=64 pi=[54,64)/1 crt=51'1000 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 149.207534790s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:59:31 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 64 pg[9.13( v 51'1000 (0'0,51'1000] local-lis/les=62/63 n=5 ec=54/45 lis/c=62/54 les/c/f=63/55/0 sis=64 pruub=14.975356102s) [2] async=[2] r=-1 lpr=64 pi=[54,64)/1 crt=51'1000 lcod 0'0 mlcod 0'0 active pruub 149.201782227s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:59:31 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 64 pg[9.13( v 51'1000 (0'0,51'1000] local-lis/les=62/63 n=5 ec=54/45 lis/c=62/54 les/c/f=63/55/0 sis=64 pruub=14.975276947s) [2] r=-1 lpr=64 pi=[54,64)/1 crt=51'1000 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 149.201782227s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:59:31 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 64 pg[9.1f( v 51'1000 (0'0,51'1000] local-lis/les=62/63 n=5 ec=54/45 lis/c=62/54 les/c/f=63/55/0 sis=64 pruub=14.981345177s) [2] async=[2] r=-1 lpr=64 pi=[54,64)/1 crt=51'1000 lcod 0'0 mlcod 0'0 active pruub 149.208007812s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:59:31 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 64 pg[9.1f( v 51'1000 (0'0,51'1000] local-lis/les=62/63 n=5 ec=54/45 lis/c=62/54 les/c/f=63/55/0 sis=64 pruub=14.981285095s) [2] r=-1 lpr=64 pi=[54,64)/1 crt=51'1000 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 149.208007812s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:59:31 compute-0 systemd[1]: Started libpod-conmon-de9077d93566ec01e11ea73eec350f30eea049f1bd2ec685d2a83cb36cc9a07f.scope.
Jan 20 13:59:31 compute-0 podman[97858]: 2026-01-20 13:59:31.078064612 +0000 UTC m=+0.046125742 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 13:59:31 compute-0 systemd[1]: Started libcrun container.
Jan 20 13:59:31 compute-0 podman[97858]: 2026-01-20 13:59:31.192158493 +0000 UTC m=+0.160219603 container init de9077d93566ec01e11ea73eec350f30eea049f1bd2ec685d2a83cb36cc9a07f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_poincare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 20 13:59:31 compute-0 podman[97858]: 2026-01-20 13:59:31.201473213 +0000 UTC m=+0.169534293 container start de9077d93566ec01e11ea73eec350f30eea049f1bd2ec685d2a83cb36cc9a07f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_poincare, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 20 13:59:31 compute-0 podman[97858]: 2026-01-20 13:59:31.205364568 +0000 UTC m=+0.173425698 container attach de9077d93566ec01e11ea73eec350f30eea049f1bd2ec685d2a83cb36cc9a07f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_poincare, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 20 13:59:31 compute-0 quizzical_poincare[97874]: 167 167
Jan 20 13:59:31 compute-0 systemd[1]: libpod-de9077d93566ec01e11ea73eec350f30eea049f1bd2ec685d2a83cb36cc9a07f.scope: Deactivated successfully.
Jan 20 13:59:31 compute-0 podman[97858]: 2026-01-20 13:59:31.207987218 +0000 UTC m=+0.176048298 container died de9077d93566ec01e11ea73eec350f30eea049f1bd2ec685d2a83cb36cc9a07f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_poincare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 13:59:31 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v166: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 2.7 KiB/s rd, 2 op/s
Jan 20 13:59:31 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} v 0) v1
Jan 20 13:59:31 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Jan 20 13:59:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-a841b1de0e83bbe20e342c56857e1122817d885d05f2df6d360f66ff70b2298e-merged.mount: Deactivated successfully.
Jan 20 13:59:31 compute-0 podman[97858]: 2026-01-20 13:59:31.259121355 +0000 UTC m=+0.227182445 container remove de9077d93566ec01e11ea73eec350f30eea049f1bd2ec685d2a83cb36cc9a07f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_poincare, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 20 13:59:31 compute-0 systemd[1]: libpod-conmon-de9077d93566ec01e11ea73eec350f30eea049f1bd2ec685d2a83cb36cc9a07f.scope: Deactivated successfully.
Jan 20 13:59:31 compute-0 podman[97897]: 2026-01-20 13:59:31.487160101 +0000 UTC m=+0.062306628 container create 70d7ef68a31be0db3cf021a37b1183d6c1b0237d2fce493ae133c34f0d8de437 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_ishizaka, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 20 13:59:31 compute-0 systemd[1]: Started libpod-conmon-70d7ef68a31be0db3cf021a37b1183d6c1b0237d2fce493ae133c34f0d8de437.scope.
Jan 20 13:59:31 compute-0 podman[97897]: 2026-01-20 13:59:31.464716256 +0000 UTC m=+0.039862783 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 13:59:31 compute-0 systemd[1]: Started libcrun container.
Jan 20 13:59:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/032687c3d31e508cd05111c53f205f40a430930c4a646cfe77bc54c6ea9e299c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 13:59:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/032687c3d31e508cd05111c53f205f40a430930c4a646cfe77bc54c6ea9e299c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 13:59:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/032687c3d31e508cd05111c53f205f40a430930c4a646cfe77bc54c6ea9e299c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 13:59:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/032687c3d31e508cd05111c53f205f40a430930c4a646cfe77bc54c6ea9e299c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 13:59:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/032687c3d31e508cd05111c53f205f40a430930c4a646cfe77bc54c6ea9e299c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 13:59:31 compute-0 podman[97897]: 2026-01-20 13:59:31.625482592 +0000 UTC m=+0.200629089 container init 70d7ef68a31be0db3cf021a37b1183d6c1b0237d2fce493ae133c34f0d8de437 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_ishizaka, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 20 13:59:31 compute-0 podman[97897]: 2026-01-20 13:59:31.636114569 +0000 UTC m=+0.211261086 container start 70d7ef68a31be0db3cf021a37b1183d6c1b0237d2fce493ae133c34f0d8de437 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_ishizaka, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 20 13:59:31 compute-0 podman[97897]: 2026-01-20 13:59:31.641501194 +0000 UTC m=+0.216647711 container attach 70d7ef68a31be0db3cf021a37b1183d6c1b0237d2fce493ae133c34f0d8de437 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_ishizaka, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 13:59:31 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 13:59:31 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 13:59:31 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:13:59:31.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 13:59:32 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e64 do_prune osdmap full prune enabled
Jan 20 13:59:32 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Jan 20 13:59:32 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e65 e65: 3 total, 3 up, 3 in
Jan 20 13:59:32 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e65: 3 total, 3 up, 3 in
Jan 20 13:59:32 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 65 pg[9.15( v 51'1000 (0'0,51'1000] local-lis/les=54/55 n=5 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=65 pruub=10.619371414s) [2] r=-1 lpr=65 pi=[54,65)/1 crt=51'1000 lcod 0'0 mlcod 0'0 active pruub 145.860244751s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:59:32 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 65 pg[9.15( v 51'1000 (0'0,51'1000] local-lis/les=54/55 n=5 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=65 pruub=10.619231224s) [2] r=-1 lpr=65 pi=[54,65)/1 crt=51'1000 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 145.860244751s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:59:32 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 65 pg[9.d( v 51'1000 (0'0,51'1000] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=65 pruub=10.619204521s) [2] r=-1 lpr=65 pi=[54,65)/1 crt=51'1000 lcod 0'0 mlcod 0'0 active pruub 145.860656738s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:59:32 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 65 pg[9.d( v 51'1000 (0'0,51'1000] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=65 pruub=10.619158745s) [2] r=-1 lpr=65 pi=[54,65)/1 crt=51'1000 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 145.860656738s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:59:32 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 65 pg[9.5( v 51'1000 (0'0,51'1000] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=65 pruub=10.625162125s) [2] r=-1 lpr=65 pi=[54,65)/1 crt=51'1000 lcod 0'0 mlcod 0'0 active pruub 145.867248535s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:59:32 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 65 pg[9.5( v 51'1000 (0'0,51'1000] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=65 pruub=10.625094414s) [2] r=-1 lpr=65 pi=[54,65)/1 crt=51'1000 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 145.867248535s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:59:32 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 65 pg[9.1d( v 51'1000 (0'0,51'1000] local-lis/les=54/55 n=5 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=65 pruub=10.625808716s) [2] r=-1 lpr=65 pi=[54,65)/1 crt=51'1000 lcod 0'0 mlcod 0'0 active pruub 145.868209839s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:59:32 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 65 pg[9.1d( v 51'1000 (0'0,51'1000] local-lis/les=54/55 n=5 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=65 pruub=10.625741959s) [2] r=-1 lpr=65 pi=[54,65)/1 crt=51'1000 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 145.868209839s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:59:32 compute-0 ceph-mon[74360]: 5.0 scrub starts
Jan 20 13:59:32 compute-0 ceph-mon[74360]: 5.0 scrub ok
Jan 20 13:59:32 compute-0 ceph-mon[74360]: osdmap e64: 3 total, 3 up, 3 in
Jan 20 13:59:32 compute-0 ceph-mon[74360]: pgmap v166: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 2.7 KiB/s rd, 2 op/s
Jan 20 13:59:32 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Jan 20 13:59:32 compute-0 musing_ishizaka[97913]: --> passed data devices: 0 physical, 1 LVM
Jan 20 13:59:32 compute-0 musing_ishizaka[97913]: --> relative data size: 1.0
Jan 20 13:59:32 compute-0 musing_ishizaka[97913]: --> All data devices are unavailable
Jan 20 13:59:32 compute-0 systemd[1]: libpod-70d7ef68a31be0db3cf021a37b1183d6c1b0237d2fce493ae133c34f0d8de437.scope: Deactivated successfully.
Jan 20 13:59:32 compute-0 podman[97897]: 2026-01-20 13:59:32.531869223 +0000 UTC m=+1.107015700 container died 70d7ef68a31be0db3cf021a37b1183d6c1b0237d2fce493ae133c34f0d8de437 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_ishizaka, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 13:59:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-032687c3d31e508cd05111c53f205f40a430930c4a646cfe77bc54c6ea9e299c-merged.mount: Deactivated successfully.
Jan 20 13:59:32 compute-0 podman[97897]: 2026-01-20 13:59:32.597496539 +0000 UTC m=+1.172643036 container remove 70d7ef68a31be0db3cf021a37b1183d6c1b0237d2fce493ae133c34f0d8de437 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_ishizaka, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 13:59:32 compute-0 systemd[1]: libpod-conmon-70d7ef68a31be0db3cf021a37b1183d6c1b0237d2fce493ae133c34f0d8de437.scope: Deactivated successfully.
Jan 20 13:59:32 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 13:59:32 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 13:59:32 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:13:59:32.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 13:59:32 compute-0 sudo[97792]: pam_unix(sudo:session): session closed for user root
Jan 20 13:59:32 compute-0 sudo[97940]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:59:32 compute-0 sudo[97940]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:59:32 compute-0 sudo[97940]: pam_unix(sudo:session): session closed for user root
Jan 20 13:59:32 compute-0 sudo[97965]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 13:59:32 compute-0 sudo[97965]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:59:32 compute-0 sudo[97965]: pam_unix(sudo:session): session closed for user root
Jan 20 13:59:32 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 13:59:32 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e65 do_prune osdmap full prune enabled
Jan 20 13:59:32 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e66 e66: 3 total, 3 up, 3 in
Jan 20 13:59:32 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e66: 3 total, 3 up, 3 in
Jan 20 13:59:32 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 66 pg[9.15( v 51'1000 (0'0,51'1000] local-lis/les=54/55 n=5 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=66) [2]/[0] r=0 lpr=66 pi=[54,66)/1 crt=51'1000 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:59:32 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 66 pg[9.15( v 51'1000 (0'0,51'1000] local-lis/les=54/55 n=5 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=66) [2]/[0] r=0 lpr=66 pi=[54,66)/1 crt=51'1000 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:32 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 66 pg[9.d( v 51'1000 (0'0,51'1000] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=66) [2]/[0] r=0 lpr=66 pi=[54,66)/1 crt=51'1000 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:59:32 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 66 pg[9.d( v 51'1000 (0'0,51'1000] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=66) [2]/[0] r=0 lpr=66 pi=[54,66)/1 crt=51'1000 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:32 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 66 pg[9.5( v 51'1000 (0'0,51'1000] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=66) [2]/[0] r=0 lpr=66 pi=[54,66)/1 crt=51'1000 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:59:32 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 66 pg[9.5( v 51'1000 (0'0,51'1000] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=66) [2]/[0] r=0 lpr=66 pi=[54,66)/1 crt=51'1000 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:32 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 66 pg[9.1d( v 51'1000 (0'0,51'1000] local-lis/les=54/55 n=5 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=66) [2]/[0] r=0 lpr=66 pi=[54,66)/1 crt=51'1000 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:59:32 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 66 pg[9.1d( v 51'1000 (0'0,51'1000] local-lis/les=54/55 n=5 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=66) [2]/[0] r=0 lpr=66 pi=[54,66)/1 crt=51'1000 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:32 compute-0 sudo[97990]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:59:32 compute-0 sudo[97990]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:59:32 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 4.1e scrub starts
Jan 20 13:59:32 compute-0 sudo[97990]: pam_unix(sudo:session): session closed for user root
Jan 20 13:59:32 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 4.1e scrub ok
Jan 20 13:59:33 compute-0 sudo[98015]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- lvm list --format json
Jan 20 13:59:33 compute-0 sudo[98015]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:59:33 compute-0 ceph-mon[74360]: 5.1c scrub starts
Jan 20 13:59:33 compute-0 ceph-mon[74360]: 5.1c scrub ok
Jan 20 13:59:33 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Jan 20 13:59:33 compute-0 ceph-mon[74360]: osdmap e65: 3 total, 3 up, 3 in
Jan 20 13:59:33 compute-0 ceph-mon[74360]: osdmap e66: 3 total, 3 up, 3 in
Jan 20 13:59:33 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v169: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 13:59:33 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} v 0) v1
Jan 20 13:59:33 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Jan 20 13:59:33 compute-0 podman[98082]: 2026-01-20 13:59:33.481216839 +0000 UTC m=+0.053335166 container create 3529b3b1169bd8b5cb90066d4deed0bc52f131c4e6d51a2cbb45c517c3b3c184 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_hoover, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 20 13:59:33 compute-0 systemd[1]: Started libpod-conmon-3529b3b1169bd8b5cb90066d4deed0bc52f131c4e6d51a2cbb45c517c3b3c184.scope.
Jan 20 13:59:33 compute-0 podman[98082]: 2026-01-20 13:59:33.456647427 +0000 UTC m=+0.028765834 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 13:59:33 compute-0 systemd[1]: Started libcrun container.
Jan 20 13:59:33 compute-0 podman[98082]: 2026-01-20 13:59:33.576428551 +0000 UTC m=+0.148546938 container init 3529b3b1169bd8b5cb90066d4deed0bc52f131c4e6d51a2cbb45c517c3b3c184 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_hoover, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 20 13:59:33 compute-0 podman[98082]: 2026-01-20 13:59:33.587583471 +0000 UTC m=+0.159701808 container start 3529b3b1169bd8b5cb90066d4deed0bc52f131c4e6d51a2cbb45c517c3b3c184 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_hoover, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 20 13:59:33 compute-0 podman[98082]: 2026-01-20 13:59:33.591952168 +0000 UTC m=+0.164070585 container attach 3529b3b1169bd8b5cb90066d4deed0bc52f131c4e6d51a2cbb45c517c3b3c184 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_hoover, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507)
Jan 20 13:59:33 compute-0 suspicious_hoover[98099]: 167 167
Jan 20 13:59:33 compute-0 systemd[1]: libpod-3529b3b1169bd8b5cb90066d4deed0bc52f131c4e6d51a2cbb45c517c3b3c184.scope: Deactivated successfully.
Jan 20 13:59:33 compute-0 podman[98082]: 2026-01-20 13:59:33.594435685 +0000 UTC m=+0.166554022 container died 3529b3b1169bd8b5cb90066d4deed0bc52f131c4e6d51a2cbb45c517c3b3c184 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_hoover, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 13:59:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-7e0ad2706b78403e7bd44009da2b0cfc9c7826091cd5500499126b633eeecda1-merged.mount: Deactivated successfully.
Jan 20 13:59:33 compute-0 podman[98082]: 2026-01-20 13:59:33.640010521 +0000 UTC m=+0.212128858 container remove 3529b3b1169bd8b5cb90066d4deed0bc52f131c4e6d51a2cbb45c517c3b3c184 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_hoover, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 20 13:59:33 compute-0 systemd[1]: libpod-conmon-3529b3b1169bd8b5cb90066d4deed0bc52f131c4e6d51a2cbb45c517c3b3c184.scope: Deactivated successfully.
Jan 20 13:59:33 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 13:59:33 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 13:59:33 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:13:59:33.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 13:59:33 compute-0 podman[98123]: 2026-01-20 13:59:33.868008987 +0000 UTC m=+0.067249641 container create 02eece5b8c8f7de92f89f2aa87dd323492dd6fa3dddde6b251d96029f581b70b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_swanson, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 20 13:59:33 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e66 do_prune osdmap full prune enabled
Jan 20 13:59:33 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Jan 20 13:59:33 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e67 e67: 3 total, 3 up, 3 in
Jan 20 13:59:33 compute-0 systemd[1]: Started libpod-conmon-02eece5b8c8f7de92f89f2aa87dd323492dd6fa3dddde6b251d96029f581b70b.scope.
Jan 20 13:59:33 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e67: 3 total, 3 up, 3 in
Jan 20 13:59:33 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 67 pg[9.16( v 51'1000 (0'0,51'1000] local-lis/les=54/55 n=5 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=67 pruub=8.848131180s) [1] r=-1 lpr=67 pi=[54,67)/1 crt=51'1000 lcod 0'0 mlcod 0'0 active pruub 145.860122681s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:59:33 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 67 pg[9.16( v 51'1000 (0'0,51'1000] local-lis/les=54/55 n=5 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=67 pruub=8.848061562s) [1] r=-1 lpr=67 pi=[54,67)/1 crt=51'1000 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 145.860122681s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:59:33 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 67 pg[9.e( v 51'1000 (0'0,51'1000] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=67 pruub=8.848185539s) [1] r=-1 lpr=67 pi=[54,67)/1 crt=51'1000 lcod 0'0 mlcod 0'0 active pruub 145.860504150s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:59:33 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 67 pg[9.6( v 51'1000 (0'0,51'1000] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=67 pruub=8.854751587s) [1] r=-1 lpr=67 pi=[54,67)/1 crt=51'1000 lcod 0'0 mlcod 0'0 active pruub 145.867233276s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:59:33 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 67 pg[9.6( v 51'1000 (0'0,51'1000] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=67 pruub=8.854470253s) [1] r=-1 lpr=67 pi=[54,67)/1 crt=51'1000 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 145.867233276s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:59:33 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 67 pg[9.1e( v 51'1000 (0'0,51'1000] local-lis/les=54/55 n=5 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=67 pruub=8.855154037s) [1] r=-1 lpr=67 pi=[54,67)/1 crt=51'1000 lcod 0'0 mlcod 0'0 active pruub 145.868209839s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:59:33 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 67 pg[9.1e( v 51'1000 (0'0,51'1000] local-lis/les=54/55 n=5 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=67 pruub=8.855134010s) [1] r=-1 lpr=67 pi=[54,67)/1 crt=51'1000 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 145.868209839s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:59:33 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 67 pg[9.e( v 51'1000 (0'0,51'1000] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=67 pruub=8.847826958s) [1] r=-1 lpr=67 pi=[54,67)/1 crt=51'1000 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 145.860504150s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:59:33 compute-0 podman[98123]: 2026-01-20 13:59:33.840343902 +0000 UTC m=+0.039584676 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 13:59:33 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 67 pg[9.15( v 51'1000 (0'0,51'1000] local-lis/les=66/67 n=5 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=66) [2]/[0] async=[2] r=0 lpr=66 pi=[54,66)/1 crt=51'1000 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:33 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 67 pg[9.d( v 51'1000 (0'0,51'1000] local-lis/les=66/67 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=66) [2]/[0] async=[2] r=0 lpr=66 pi=[54,66)/1 crt=51'1000 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:33 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 67 pg[9.5( v 51'1000 (0'0,51'1000] local-lis/les=66/67 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=66) [2]/[0] async=[2] r=0 lpr=66 pi=[54,66)/1 crt=51'1000 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:33 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 67 pg[9.1d( v 51'1000 (0'0,51'1000] local-lis/les=66/67 n=5 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=66) [2]/[0] async=[2] r=0 lpr=66 pi=[54,66)/1 crt=51'1000 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:33 compute-0 systemd[1]: Started libcrun container.
Jan 20 13:59:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9f5b563f94fee7878a5c0678d79b375ef1e4b7413ad60b980716b2cd7a7fa1e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 13:59:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9f5b563f94fee7878a5c0678d79b375ef1e4b7413ad60b980716b2cd7a7fa1e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 13:59:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9f5b563f94fee7878a5c0678d79b375ef1e4b7413ad60b980716b2cd7a7fa1e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 13:59:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9f5b563f94fee7878a5c0678d79b375ef1e4b7413ad60b980716b2cd7a7fa1e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 13:59:33 compute-0 podman[98123]: 2026-01-20 13:59:33.974869122 +0000 UTC m=+0.174109866 container init 02eece5b8c8f7de92f89f2aa87dd323492dd6fa3dddde6b251d96029f581b70b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_swanson, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 20 13:59:33 compute-0 podman[98123]: 2026-01-20 13:59:33.988186131 +0000 UTC m=+0.187426815 container start 02eece5b8c8f7de92f89f2aa87dd323492dd6fa3dddde6b251d96029f581b70b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_swanson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Jan 20 13:59:33 compute-0 podman[98123]: 2026-01-20 13:59:33.992109576 +0000 UTC m=+0.191350260 container attach 02eece5b8c8f7de92f89f2aa87dd323492dd6fa3dddde6b251d96029f581b70b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_swanson, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 13:59:34 compute-0 ceph-mon[74360]: 4.18 scrub starts
Jan 20 13:59:34 compute-0 ceph-mon[74360]: 4.18 scrub ok
Jan 20 13:59:34 compute-0 ceph-mon[74360]: 4.1e scrub starts
Jan 20 13:59:34 compute-0 ceph-mon[74360]: 4.1e scrub ok
Jan 20 13:59:34 compute-0 ceph-mon[74360]: pgmap v169: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 13:59:34 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Jan 20 13:59:34 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Jan 20 13:59:34 compute-0 ceph-mon[74360]: osdmap e67: 3 total, 3 up, 3 in
Jan 20 13:59:34 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 13:59:34 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 13:59:34 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:13:59:34.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 13:59:34 compute-0 zen_swanson[98139]: {
Jan 20 13:59:34 compute-0 zen_swanson[98139]:     "0": [
Jan 20 13:59:34 compute-0 zen_swanson[98139]:         {
Jan 20 13:59:34 compute-0 zen_swanson[98139]:             "devices": [
Jan 20 13:59:34 compute-0 zen_swanson[98139]:                 "/dev/loop3"
Jan 20 13:59:34 compute-0 zen_swanson[98139]:             ],
Jan 20 13:59:34 compute-0 zen_swanson[98139]:             "lv_name": "ceph_lv0",
Jan 20 13:59:34 compute-0 zen_swanson[98139]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 13:59:34 compute-0 zen_swanson[98139]:             "lv_size": "7511998464",
Jan 20 13:59:34 compute-0 zen_swanson[98139]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e399cf45-e6b6-5393-99f1-75c601d3f188,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afc3740a-ccee-46f8-b343-22648cc89376,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 20 13:59:34 compute-0 zen_swanson[98139]:             "lv_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 13:59:34 compute-0 zen_swanson[98139]:             "name": "ceph_lv0",
Jan 20 13:59:34 compute-0 zen_swanson[98139]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 13:59:34 compute-0 zen_swanson[98139]:             "tags": {
Jan 20 13:59:34 compute-0 zen_swanson[98139]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 13:59:34 compute-0 zen_swanson[98139]:                 "ceph.block_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 13:59:34 compute-0 zen_swanson[98139]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 13:59:34 compute-0 zen_swanson[98139]:                 "ceph.cluster_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 13:59:34 compute-0 zen_swanson[98139]:                 "ceph.cluster_name": "ceph",
Jan 20 13:59:34 compute-0 zen_swanson[98139]:                 "ceph.crush_device_class": "",
Jan 20 13:59:34 compute-0 zen_swanson[98139]:                 "ceph.encrypted": "0",
Jan 20 13:59:34 compute-0 zen_swanson[98139]:                 "ceph.osd_fsid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 13:59:34 compute-0 zen_swanson[98139]:                 "ceph.osd_id": "0",
Jan 20 13:59:34 compute-0 zen_swanson[98139]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 13:59:34 compute-0 zen_swanson[98139]:                 "ceph.type": "block",
Jan 20 13:59:34 compute-0 zen_swanson[98139]:                 "ceph.vdo": "0"
Jan 20 13:59:34 compute-0 zen_swanson[98139]:             },
Jan 20 13:59:34 compute-0 zen_swanson[98139]:             "type": "block",
Jan 20 13:59:34 compute-0 zen_swanson[98139]:             "vg_name": "ceph_vg0"
Jan 20 13:59:34 compute-0 zen_swanson[98139]:         }
Jan 20 13:59:34 compute-0 zen_swanson[98139]:     ]
Jan 20 13:59:34 compute-0 zen_swanson[98139]: }
Jan 20 13:59:34 compute-0 systemd[1]: libpod-02eece5b8c8f7de92f89f2aa87dd323492dd6fa3dddde6b251d96029f581b70b.scope: Deactivated successfully.
Jan 20 13:59:34 compute-0 podman[98123]: 2026-01-20 13:59:34.74467225 +0000 UTC m=+0.943912934 container died 02eece5b8c8f7de92f89f2aa87dd323492dd6fa3dddde6b251d96029f581b70b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_swanson, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 20 13:59:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-e9f5b563f94fee7878a5c0678d79b375ef1e4b7413ad60b980716b2cd7a7fa1e-merged.mount: Deactivated successfully.
Jan 20 13:59:34 compute-0 podman[98123]: 2026-01-20 13:59:34.818239535 +0000 UTC m=+1.017480199 container remove 02eece5b8c8f7de92f89f2aa87dd323492dd6fa3dddde6b251d96029f581b70b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_swanson, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 20 13:59:34 compute-0 systemd[1]: libpod-conmon-02eece5b8c8f7de92f89f2aa87dd323492dd6fa3dddde6b251d96029f581b70b.scope: Deactivated successfully.
Jan 20 13:59:34 compute-0 sudo[98015]: pam_unix(sudo:session): session closed for user root
Jan 20 13:59:34 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e67 do_prune osdmap full prune enabled
Jan 20 13:59:34 compute-0 sudo[98160]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:59:34 compute-0 sudo[98160]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:59:34 compute-0 sudo[98160]: pam_unix(sudo:session): session closed for user root
Jan 20 13:59:35 compute-0 sudo[98185]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 13:59:35 compute-0 sudo[98185]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:59:35 compute-0 sudo[98185]: pam_unix(sudo:session): session closed for user root
Jan 20 13:59:35 compute-0 sudo[98210]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:59:35 compute-0 sudo[98210]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:59:35 compute-0 sudo[98210]: pam_unix(sudo:session): session closed for user root
Jan 20 13:59:35 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v171: 321 pgs: 4 remapped+peering, 317 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 750 B/s rd, 500 B/s wr, 1 op/s; 295 B/s, 9 objects/s recovering
Jan 20 13:59:35 compute-0 sudo[98235]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- raw list --format json
Jan 20 13:59:35 compute-0 sudo[98235]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:59:35 compute-0 podman[98300]: 2026-01-20 13:59:35.574917961 +0000 UTC m=+0.035806966 container create 3a92eba93e9714da5380a47fe1acdb62be4ce8a32dd0626e101e259f2d39a23c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_diffie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 20 13:59:35 compute-0 systemd[1]: Started libpod-conmon-3a92eba93e9714da5380a47fe1acdb62be4ce8a32dd0626e101e259f2d39a23c.scope.
Jan 20 13:59:35 compute-0 systemd[1]: Started libcrun container.
Jan 20 13:59:35 compute-0 podman[98300]: 2026-01-20 13:59:35.559023584 +0000 UTC m=+0.019912609 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 13:59:35 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 13:59:35 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 13:59:35 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:13:59:35.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 13:59:36 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e68 e68: 3 total, 3 up, 3 in
Jan 20 13:59:36 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e68: 3 total, 3 up, 3 in
Jan 20 13:59:36 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 68 pg[9.6( v 51'1000 (0'0,51'1000] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=68) [1]/[0] r=0 lpr=68 pi=[54,68)/1 crt=51'1000 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:59:36 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 68 pg[9.1d( v 51'1000 (0'0,51'1000] local-lis/les=66/67 n=5 ec=54/45 lis/c=66/54 les/c/f=67/55/0 sis=68 pruub=13.679198265s) [2] async=[2] r=-1 lpr=68 pi=[54,68)/1 crt=51'1000 lcod 0'0 mlcod 0'0 active pruub 153.028717041s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:59:36 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 68 pg[9.6( v 51'1000 (0'0,51'1000] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=68) [1]/[0] r=0 lpr=68 pi=[54,68)/1 crt=51'1000 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:36 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 68 pg[9.d( v 51'1000 (0'0,51'1000] local-lis/les=66/67 n=6 ec=54/45 lis/c=66/54 les/c/f=67/55/0 sis=68 pruub=13.679076195s) [2] async=[2] r=-1 lpr=68 pi=[54,68)/1 crt=51'1000 lcod 0'0 mlcod 0'0 active pruub 153.028686523s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:59:36 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 68 pg[9.1d( v 51'1000 (0'0,51'1000] local-lis/les=66/67 n=5 ec=54/45 lis/c=66/54 les/c/f=67/55/0 sis=68 pruub=13.679107666s) [2] r=-1 lpr=68 pi=[54,68)/1 crt=51'1000 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 153.028717041s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:59:36 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 68 pg[9.d( v 51'1000 (0'0,51'1000] local-lis/les=66/67 n=6 ec=54/45 lis/c=66/54 les/c/f=67/55/0 sis=68 pruub=13.679014206s) [2] r=-1 lpr=68 pi=[54,68)/1 crt=51'1000 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 153.028686523s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:59:36 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 68 pg[9.5( v 51'1000 (0'0,51'1000] local-lis/les=66/67 n=6 ec=54/45 lis/c=66/54 les/c/f=67/55/0 sis=68 pruub=13.679059982s) [2] async=[2] r=-1 lpr=68 pi=[54,68)/1 crt=51'1000 lcod 0'0 mlcod 0'0 active pruub 153.028732300s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:59:36 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 68 pg[9.5( v 51'1000 (0'0,51'1000] local-lis/les=66/67 n=6 ec=54/45 lis/c=66/54 les/c/f=67/55/0 sis=68 pruub=13.679006577s) [2] r=-1 lpr=68 pi=[54,68)/1 crt=51'1000 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 153.028732300s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:59:36 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 68 pg[9.e( v 51'1000 (0'0,51'1000] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=68) [1]/[0] r=0 lpr=68 pi=[54,68)/1 crt=51'1000 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:59:36 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 68 pg[9.16( v 51'1000 (0'0,51'1000] local-lis/les=54/55 n=5 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=68) [1]/[0] r=0 lpr=68 pi=[54,68)/1 crt=51'1000 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:59:36 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 68 pg[9.1e( v 51'1000 (0'0,51'1000] local-lis/les=54/55 n=5 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=68) [1]/[0] r=0 lpr=68 pi=[54,68)/1 crt=51'1000 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:59:36 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 68 pg[9.16( v 51'1000 (0'0,51'1000] local-lis/les=54/55 n=5 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=68) [1]/[0] r=0 lpr=68 pi=[54,68)/1 crt=51'1000 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:36 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 68 pg[9.1e( v 51'1000 (0'0,51'1000] local-lis/les=54/55 n=5 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=68) [1]/[0] r=0 lpr=68 pi=[54,68)/1 crt=51'1000 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:36 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 68 pg[9.e( v 51'1000 (0'0,51'1000] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=68) [1]/[0] r=0 lpr=68 pi=[54,68)/1 crt=51'1000 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:36 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 68 pg[9.15( v 51'1000 (0'0,51'1000] local-lis/les=66/67 n=5 ec=54/45 lis/c=66/54 les/c/f=67/55/0 sis=68 pruub=13.672714233s) [2] async=[2] r=-1 lpr=68 pi=[54,68)/1 crt=51'1000 lcod 0'0 mlcod 0'0 active pruub 153.023025513s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:59:36 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 68 pg[9.15( v 51'1000 (0'0,51'1000] local-lis/les=66/67 n=5 ec=54/45 lis/c=66/54 les/c/f=67/55/0 sis=68 pruub=13.672654152s) [2] r=-1 lpr=68 pi=[54,68)/1 crt=51'1000 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 153.023025513s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:59:36 compute-0 ceph-mon[74360]: 3.1c scrub starts
Jan 20 13:59:36 compute-0 ceph-mon[74360]: 3.1c scrub ok
Jan 20 13:59:36 compute-0 podman[98300]: 2026-01-20 13:59:36.321265134 +0000 UTC m=+0.782154149 container init 3a92eba93e9714da5380a47fe1acdb62be4ce8a32dd0626e101e259f2d39a23c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_diffie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 20 13:59:36 compute-0 podman[98300]: 2026-01-20 13:59:36.327838335 +0000 UTC m=+0.788727350 container start 3a92eba93e9714da5380a47fe1acdb62be4ce8a32dd0626e101e259f2d39a23c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_diffie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 13:59:36 compute-0 romantic_diffie[98315]: 167 167
Jan 20 13:59:36 compute-0 systemd[1]: libpod-3a92eba93e9714da5380a47fe1acdb62be4ce8a32dd0626e101e259f2d39a23c.scope: Deactivated successfully.
Jan 20 13:59:36 compute-0 podman[98300]: 2026-01-20 13:59:36.342811106 +0000 UTC m=+0.803700151 container attach 3a92eba93e9714da5380a47fe1acdb62be4ce8a32dd0626e101e259f2d39a23c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_diffie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True)
Jan 20 13:59:36 compute-0 podman[98300]: 2026-01-20 13:59:36.343253409 +0000 UTC m=+0.804142454 container died 3a92eba93e9714da5380a47fe1acdb62be4ce8a32dd0626e101e259f2d39a23c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_diffie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 20 13:59:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-72a5fe83d27e6d89991e00fcf4ea178701cf16426fcb967616e187ee5b6cb0bc-merged.mount: Deactivated successfully.
Jan 20 13:59:36 compute-0 podman[98300]: 2026-01-20 13:59:36.382764356 +0000 UTC m=+0.843653361 container remove 3a92eba93e9714da5380a47fe1acdb62be4ce8a32dd0626e101e259f2d39a23c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_diffie, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 20 13:59:36 compute-0 systemd[1]: libpod-conmon-3a92eba93e9714da5380a47fe1acdb62be4ce8a32dd0626e101e259f2d39a23c.scope: Deactivated successfully.
Jan 20 13:59:36 compute-0 podman[98345]: 2026-01-20 13:59:36.543446887 +0000 UTC m=+0.051863008 container create 01fda1988df151082bd1be32914a2611dce3f76094016379476edcc1e1993905 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_morse, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 20 13:59:36 compute-0 systemd[1]: Started libpod-conmon-01fda1988df151082bd1be32914a2611dce3f76094016379476edcc1e1993905.scope.
Jan 20 13:59:36 compute-0 systemd[1]: Started libcrun container.
Jan 20 13:59:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/406ed8244a56c148838c94a35497ecece7b07226dcfb00dfbccdc5d1794e1ec2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 13:59:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/406ed8244a56c148838c94a35497ecece7b07226dcfb00dfbccdc5d1794e1ec2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 13:59:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/406ed8244a56c148838c94a35497ecece7b07226dcfb00dfbccdc5d1794e1ec2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 13:59:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/406ed8244a56c148838c94a35497ecece7b07226dcfb00dfbccdc5d1794e1ec2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 13:59:36 compute-0 podman[98345]: 2026-01-20 13:59:36.510637163 +0000 UTC m=+0.019053264 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 13:59:36 compute-0 podman[98345]: 2026-01-20 13:59:36.623374595 +0000 UTC m=+0.131790766 container init 01fda1988df151082bd1be32914a2611dce3f76094016379476edcc1e1993905 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_morse, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 20 13:59:36 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 13:59:36 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 13:59:36 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:13:59:36.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 13:59:36 compute-0 podman[98345]: 2026-01-20 13:59:36.633282578 +0000 UTC m=+0.141698659 container start 01fda1988df151082bd1be32914a2611dce3f76094016379476edcc1e1993905 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_morse, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 20 13:59:36 compute-0 podman[98345]: 2026-01-20 13:59:36.637154645 +0000 UTC m=+0.145570766 container attach 01fda1988df151082bd1be32914a2611dce3f76094016379476edcc1e1993905 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_morse, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 13:59:37 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e68 do_prune osdmap full prune enabled
Jan 20 13:59:37 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e69 e69: 3 total, 3 up, 3 in
Jan 20 13:59:37 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e69: 3 total, 3 up, 3 in
Jan 20 13:59:37 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 69 pg[9.16( v 51'1000 (0'0,51'1000] local-lis/les=68/69 n=5 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=68) [1]/[0] async=[1] r=0 lpr=68 pi=[54,68)/1 crt=51'1000 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:37 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 69 pg[9.e( v 51'1000 (0'0,51'1000] local-lis/les=68/69 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=68) [1]/[0] async=[1] r=0 lpr=68 pi=[54,68)/1 crt=51'1000 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:37 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 69 pg[9.6( v 51'1000 (0'0,51'1000] local-lis/les=68/69 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=68) [1]/[0] async=[1] r=0 lpr=68 pi=[54,68)/1 crt=51'1000 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:37 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 69 pg[9.1e( v 51'1000 (0'0,51'1000] local-lis/les=68/69 n=5 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=68) [1]/[0] async=[1] r=0 lpr=68 pi=[54,68)/1 crt=51'1000 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:37 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v174: 321 pgs: 4 unknown, 4 remapped+peering, 313 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 279 B/s, 9 objects/s recovering
Jan 20 13:59:37 compute-0 stoic_morse[98362]: {
Jan 20 13:59:37 compute-0 stoic_morse[98362]:     "afc3740a-ccee-46f8-b343-22648cc89376": {
Jan 20 13:59:37 compute-0 stoic_morse[98362]:         "ceph_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 13:59:37 compute-0 stoic_morse[98362]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 20 13:59:37 compute-0 stoic_morse[98362]:         "osd_id": 0,
Jan 20 13:59:37 compute-0 stoic_morse[98362]:         "osd_uuid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 13:59:37 compute-0 stoic_morse[98362]:         "type": "bluestore"
Jan 20 13:59:37 compute-0 stoic_morse[98362]:     }
Jan 20 13:59:37 compute-0 stoic_morse[98362]: }
Jan 20 13:59:37 compute-0 systemd[1]: libpod-01fda1988df151082bd1be32914a2611dce3f76094016379476edcc1e1993905.scope: Deactivated successfully.
Jan 20 13:59:37 compute-0 podman[98345]: 2026-01-20 13:59:37.500216708 +0000 UTC m=+1.008632819 container died 01fda1988df151082bd1be32914a2611dce3f76094016379476edcc1e1993905 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_morse, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 20 13:59:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-406ed8244a56c148838c94a35497ecece7b07226dcfb00dfbccdc5d1794e1ec2-merged.mount: Deactivated successfully.
Jan 20 13:59:37 compute-0 podman[98345]: 2026-01-20 13:59:37.557144054 +0000 UTC m=+1.065560175 container remove 01fda1988df151082bd1be32914a2611dce3f76094016379476edcc1e1993905 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_morse, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 13:59:37 compute-0 systemd[1]: libpod-conmon-01fda1988df151082bd1be32914a2611dce3f76094016379476edcc1e1993905.scope: Deactivated successfully.
Jan 20 13:59:37 compute-0 sudo[98235]: pam_unix(sudo:session): session closed for user root
Jan 20 13:59:37 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 13:59:37 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 13:59:37 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 13:59:37 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:13:59:37.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 13:59:37 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e69 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 13:59:37 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e69 do_prune osdmap full prune enabled
Jan 20 13:59:37 compute-0 ceph-mon[74360]: 4.14 deep-scrub starts
Jan 20 13:59:37 compute-0 ceph-mon[74360]: 4.14 deep-scrub ok
Jan 20 13:59:37 compute-0 ceph-mon[74360]: pgmap v171: 321 pgs: 4 remapped+peering, 317 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 750 B/s rd, 500 B/s wr, 1 op/s; 295 B/s, 9 objects/s recovering
Jan 20 13:59:37 compute-0 ceph-mon[74360]: osdmap e68: 3 total, 3 up, 3 in
Jan 20 13:59:37 compute-0 ceph-mon[74360]: osdmap e69: 3 total, 3 up, 3 in
Jan 20 13:59:38 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e70 e70: 3 total, 3 up, 3 in
Jan 20 13:59:38 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:59:38 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e70: 3 total, 3 up, 3 in
Jan 20 13:59:38 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 13:59:38 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 70 pg[9.e( v 51'1000 (0'0,51'1000] local-lis/les=68/69 n=6 ec=54/45 lis/c=68/54 les/c/f=69/55/0 sis=70 pruub=15.103970528s) [1] async=[1] r=-1 lpr=70 pi=[54,70)/1 crt=51'1000 lcod 0'0 mlcod 0'0 active pruub 156.201660156s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:59:38 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 70 pg[9.e( v 51'1000 (0'0,51'1000] local-lis/les=68/69 n=6 ec=54/45 lis/c=68/54 les/c/f=69/55/0 sis=70 pruub=15.103862762s) [1] r=-1 lpr=70 pi=[54,70)/1 crt=51'1000 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 156.201660156s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:59:38 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 70 pg[9.16( v 51'1000 (0'0,51'1000] local-lis/les=68/69 n=5 ec=54/45 lis/c=68/54 les/c/f=69/55/0 sis=70 pruub=15.099121094s) [1] async=[1] r=-1 lpr=70 pi=[54,70)/1 crt=51'1000 lcod 0'0 mlcod 0'0 active pruub 156.197097778s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:59:38 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 70 pg[9.16( v 51'1000 (0'0,51'1000] local-lis/les=68/69 n=5 ec=54/45 lis/c=68/54 les/c/f=69/55/0 sis=70 pruub=15.098708153s) [1] r=-1 lpr=70 pi=[54,70)/1 crt=51'1000 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 156.197097778s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:59:38 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 70 pg[9.6( v 51'1000 (0'0,51'1000] local-lis/les=68/69 n=6 ec=54/45 lis/c=68/54 les/c/f=69/55/0 sis=70 pruub=15.103070259s) [1] async=[1] r=-1 lpr=70 pi=[54,70)/1 crt=51'1000 lcod 0'0 mlcod 0'0 active pruub 156.201721191s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:59:38 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 70 pg[9.1e( v 51'1000 (0'0,51'1000] local-lis/les=68/69 n=5 ec=54/45 lis/c=68/54 les/c/f=69/55/0 sis=70 pruub=15.102921486s) [1] async=[1] r=-1 lpr=70 pi=[54,70)/1 crt=51'1000 lcod 0'0 mlcod 0'0 active pruub 156.201889038s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:59:38 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 70 pg[9.1e( v 51'1000 (0'0,51'1000] local-lis/les=68/69 n=5 ec=54/45 lis/c=68/54 les/c/f=69/55/0 sis=70 pruub=15.102717400s) [1] r=-1 lpr=70 pi=[54,70)/1 crt=51'1000 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 156.201889038s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:59:38 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 70 pg[9.6( v 51'1000 (0'0,51'1000] local-lis/les=68/69 n=6 ec=54/45 lis/c=68/54 les/c/f=69/55/0 sis=70 pruub=15.102617264s) [1] r=-1 lpr=70 pi=[54,70)/1 crt=51'1000 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 156.201721191s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:59:38 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:59:38 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 29e5220b-2be0-4de5-9451-bd9ec2d28928 does not exist
Jan 20 13:59:38 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev e9b72629-69eb-42d2-b0d7-a485ba901434 does not exist
Jan 20 13:59:38 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev e3a5aa83-65c0-4e8e-9a45-1826e5ae6bd9 does not exist
Jan 20 13:59:38 compute-0 sudo[98396]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:59:38 compute-0 sudo[98396]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:59:38 compute-0 sudo[98396]: pam_unix(sudo:session): session closed for user root
Jan 20 13:59:38 compute-0 sudo[98421]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 13:59:38 compute-0 sudo[98421]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:59:38 compute-0 sudo[98421]: pam_unix(sudo:session): session closed for user root
Jan 20 13:59:38 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 13:59:38 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 13:59:38 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:13:59:38.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 13:59:38 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 6.4 scrub starts
Jan 20 13:59:38 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 6.4 scrub ok
Jan 20 13:59:38 compute-0 ceph-mon[74360]: 6.d scrub starts
Jan 20 13:59:38 compute-0 ceph-mon[74360]: 6.d scrub ok
Jan 20 13:59:38 compute-0 ceph-mon[74360]: pgmap v174: 321 pgs: 4 unknown, 4 remapped+peering, 313 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 279 B/s, 9 objects/s recovering
Jan 20 13:59:38 compute-0 ceph-mon[74360]: 5.2 scrub starts
Jan 20 13:59:38 compute-0 ceph-mon[74360]: 5.2 scrub ok
Jan 20 13:59:38 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:59:38 compute-0 ceph-mon[74360]: osdmap e70: 3 total, 3 up, 3 in
Jan 20 13:59:38 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 13:59:38 compute-0 ceph-mon[74360]: 6.4 scrub starts
Jan 20 13:59:38 compute-0 ceph-mon[74360]: 6.4 scrub ok
Jan 20 13:59:39 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e70 do_prune osdmap full prune enabled
Jan 20 13:59:39 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e71 e71: 3 total, 3 up, 3 in
Jan 20 13:59:39 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e71: 3 total, 3 up, 3 in
Jan 20 13:59:39 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v177: 321 pgs: 4 unknown, 4 remapped+peering, 313 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Jan 20 13:59:39 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 13:59:39 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 13:59:39 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:13:59:39.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 13:59:39 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 6.6 scrub starts
Jan 20 13:59:39 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 6.6 scrub ok
Jan 20 13:59:40 compute-0 ceph-mon[74360]: osdmap e71: 3 total, 3 up, 3 in
Jan 20 13:59:40 compute-0 ceph-mon[74360]: pgmap v177: 321 pgs: 4 unknown, 4 remapped+peering, 313 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Jan 20 13:59:40 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 13:59:40 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 13:59:40 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:13:59:40.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 13:59:41 compute-0 ceph-mon[74360]: 6.6 scrub starts
Jan 20 13:59:41 compute-0 ceph-mon[74360]: 6.6 scrub ok
Jan 20 13:59:41 compute-0 ceph-mon[74360]: 4.1f deep-scrub starts
Jan 20 13:59:41 compute-0 ceph-mon[74360]: 4.1f deep-scrub ok
Jan 20 13:59:41 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v178: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 31 KiB/s rd, 615 B/s wr, 57 op/s; 163 B/s, 8 objects/s recovering
Jan 20 13:59:41 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} v 0) v1
Jan 20 13:59:41 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Jan 20 13:59:41 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 13:59:41 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 13:59:41 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:13:59:41.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 13:59:42 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e71 do_prune osdmap full prune enabled
Jan 20 13:59:42 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Jan 20 13:59:42 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e72 e72: 3 total, 3 up, 3 in
Jan 20 13:59:42 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e72: 3 total, 3 up, 3 in
Jan 20 13:59:42 compute-0 ceph-mon[74360]: pgmap v178: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 31 KiB/s rd, 615 B/s wr, 57 op/s; 163 B/s, 8 objects/s recovering
Jan 20 13:59:42 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Jan 20 13:59:42 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 13:59:42 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 13:59:42 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:13:59:42.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 13:59:42 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e72 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 13:59:43 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v180: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 26 KiB/s rd, 511 B/s wr, 47 op/s; 135 B/s, 7 objects/s recovering
Jan 20 13:59:43 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} v 0) v1
Jan 20 13:59:43 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Jan 20 13:59:43 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Jan 20 13:59:43 compute-0 ceph-mon[74360]: osdmap e72: 3 total, 3 up, 3 in
Jan 20 13:59:43 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e72 do_prune osdmap full prune enabled
Jan 20 13:59:43 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Jan 20 13:59:43 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e73 e73: 3 total, 3 up, 3 in
Jan 20 13:59:43 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e73: 3 total, 3 up, 3 in
Jan 20 13:59:43 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 73 pg[9.8( v 51'1000 (0'0,51'1000] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=73 pruub=15.272333145s) [2] r=-1 lpr=73 pi=[54,73)/1 crt=51'1000 lcod 0'0 mlcod 0'0 active pruub 161.860855103s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:59:43 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 73 pg[9.8( v 51'1000 (0'0,51'1000] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=73 pruub=15.272280693s) [2] r=-1 lpr=73 pi=[54,73)/1 crt=51'1000 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 161.860855103s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:59:43 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 73 pg[9.18( v 51'1000 (0'0,51'1000] local-lis/les=54/55 n=5 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=73 pruub=15.278745651s) [2] r=-1 lpr=73 pi=[54,73)/1 crt=51'1000 lcod 0'0 mlcod 0'0 active pruub 161.867889404s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:59:43 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 73 pg[9.18( v 51'1000 (0'0,51'1000] local-lis/les=54/55 n=5 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=73 pruub=15.278696060s) [2] r=-1 lpr=73 pi=[54,73)/1 crt=51'1000 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 161.867889404s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:59:43 compute-0 sshd-session[98446]: Connection closed by authenticating user root 159.223.5.14 port 49470 [preauth]
Jan 20 13:59:43 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 13:59:43 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 13:59:43 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:13:59:43.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 13:59:44 compute-0 ceph-mon[74360]: pgmap v180: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 26 KiB/s rd, 511 B/s wr, 47 op/s; 135 B/s, 7 objects/s recovering
Jan 20 13:59:44 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Jan 20 13:59:44 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Jan 20 13:59:44 compute-0 ceph-mon[74360]: osdmap e73: 3 total, 3 up, 3 in
Jan 20 13:59:44 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e73 do_prune osdmap full prune enabled
Jan 20 13:59:44 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e74 e74: 3 total, 3 up, 3 in
Jan 20 13:59:44 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e74: 3 total, 3 up, 3 in
Jan 20 13:59:44 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 74 pg[9.8( v 51'1000 (0'0,51'1000] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=74) [2]/[0] r=0 lpr=74 pi=[54,74)/1 crt=51'1000 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:59:44 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 74 pg[9.8( v 51'1000 (0'0,51'1000] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=74) [2]/[0] r=0 lpr=74 pi=[54,74)/1 crt=51'1000 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:44 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 74 pg[9.18( v 51'1000 (0'0,51'1000] local-lis/les=54/55 n=5 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=74) [2]/[0] r=0 lpr=74 pi=[54,74)/1 crt=51'1000 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:59:44 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 74 pg[9.18( v 51'1000 (0'0,51'1000] local-lis/les=54/55 n=5 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=74) [2]/[0] r=0 lpr=74 pi=[54,74)/1 crt=51'1000 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:44 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 13:59:44 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 13:59:44 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:13:59:44.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 13:59:45 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v183: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 26 KiB/s rd, 511 B/s wr, 47 op/s; 135 B/s, 7 objects/s recovering
Jan 20 13:59:45 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} v 0) v1
Jan 20 13:59:45 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Jan 20 13:59:45 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e74 do_prune osdmap full prune enabled
Jan 20 13:59:45 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Jan 20 13:59:45 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e75 e75: 3 total, 3 up, 3 in
Jan 20 13:59:45 compute-0 ceph-mon[74360]: osdmap e74: 3 total, 3 up, 3 in
Jan 20 13:59:45 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Jan 20 13:59:45 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e75: 3 total, 3 up, 3 in
Jan 20 13:59:45 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 75 pg[9.9( v 51'1000 (0'0,51'1000] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=75 pruub=13.328457832s) [2] r=-1 lpr=75 pi=[54,75)/1 crt=51'1000 lcod 0'0 mlcod 0'0 active pruub 161.860748291s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:59:45 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 75 pg[9.9( v 51'1000 (0'0,51'1000] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=75 pruub=13.328310013s) [2] r=-1 lpr=75 pi=[54,75)/1 crt=51'1000 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 161.860748291s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:59:45 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 75 pg[9.19( v 51'1000 (0'0,51'1000] local-lis/les=54/55 n=5 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=75 pruub=13.334841728s) [2] r=-1 lpr=75 pi=[54,75)/1 crt=51'1000 lcod 0'0 mlcod 0'0 active pruub 161.868087769s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:59:45 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 75 pg[9.19( v 51'1000 (0'0,51'1000] local-lis/les=54/55 n=5 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=75 pruub=13.334647179s) [2] r=-1 lpr=75 pi=[54,75)/1 crt=51'1000 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 161.868087769s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:59:45 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 75 pg[9.8( v 51'1000 (0'0,51'1000] local-lis/les=74/75 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=74) [2]/[0] async=[2] r=0 lpr=74 pi=[54,74)/1 crt=51'1000 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:45 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 75 pg[9.18( v 51'1000 (0'0,51'1000] local-lis/les=74/75 n=5 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=74) [2]/[0] async=[2] r=0 lpr=74 pi=[54,74)/1 crt=51'1000 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:45 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 13:59:45 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 13:59:45 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:13:59:45.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 13:59:46 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e75 do_prune osdmap full prune enabled
Jan 20 13:59:46 compute-0 ceph-mon[74360]: pgmap v183: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 26 KiB/s rd, 511 B/s wr, 47 op/s; 135 B/s, 7 objects/s recovering
Jan 20 13:59:46 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Jan 20 13:59:46 compute-0 ceph-mon[74360]: osdmap e75: 3 total, 3 up, 3 in
Jan 20 13:59:46 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e76 e76: 3 total, 3 up, 3 in
Jan 20 13:59:46 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e76: 3 total, 3 up, 3 in
Jan 20 13:59:46 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 76 pg[9.9( v 51'1000 (0'0,51'1000] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=76) [2]/[0] r=0 lpr=76 pi=[54,76)/1 crt=51'1000 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:59:46 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 76 pg[9.9( v 51'1000 (0'0,51'1000] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=76) [2]/[0] r=0 lpr=76 pi=[54,76)/1 crt=51'1000 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:46 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 76 pg[9.8( v 51'1000 (0'0,51'1000] local-lis/les=74/75 n=6 ec=54/45 lis/c=74/54 les/c/f=75/55/0 sis=76 pruub=14.984970093s) [2] async=[2] r=-1 lpr=76 pi=[54,76)/1 crt=51'1000 lcod 0'0 mlcod 0'0 active pruub 164.538269043s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:59:46 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 76 pg[9.8( v 51'1000 (0'0,51'1000] local-lis/les=74/75 n=6 ec=54/45 lis/c=74/54 les/c/f=75/55/0 sis=76 pruub=14.984863281s) [2] r=-1 lpr=76 pi=[54,76)/1 crt=51'1000 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 164.538269043s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:59:46 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 76 pg[9.18( v 51'1000 (0'0,51'1000] local-lis/les=74/75 n=5 ec=54/45 lis/c=74/54 les/c/f=75/55/0 sis=76 pruub=14.985935211s) [2] async=[2] r=-1 lpr=76 pi=[54,76)/1 crt=51'1000 lcod 0'0 mlcod 0'0 active pruub 164.540100098s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:59:46 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 76 pg[9.18( v 51'1000 (0'0,51'1000] local-lis/les=74/75 n=5 ec=54/45 lis/c=74/54 les/c/f=75/55/0 sis=76 pruub=14.985845566s) [2] r=-1 lpr=76 pi=[54,76)/1 crt=51'1000 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 164.540100098s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:59:46 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 76 pg[9.19( v 51'1000 (0'0,51'1000] local-lis/les=54/55 n=5 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=76) [2]/[0] r=0 lpr=76 pi=[54,76)/1 crt=51'1000 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:59:46 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 76 pg[9.19( v 51'1000 (0'0,51'1000] local-lis/les=54/55 n=5 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=76) [2]/[0] r=0 lpr=76 pi=[54,76)/1 crt=51'1000 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:46 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 13:59:46 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 13:59:46 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:13:59:46.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 13:59:47 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v186: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Jan 20 13:59:47 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} v 0) v1
Jan 20 13:59:47 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Jan 20 13:59:47 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e76 do_prune osdmap full prune enabled
Jan 20 13:59:47 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Jan 20 13:59:47 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e77 e77: 3 total, 3 up, 3 in
Jan 20 13:59:47 compute-0 ceph-mon[74360]: osdmap e76: 3 total, 3 up, 3 in
Jan 20 13:59:47 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Jan 20 13:59:47 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e77: 3 total, 3 up, 3 in
Jan 20 13:59:47 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 77 pg[9.1a( v 51'1000 (0'0,51'1000] local-lis/les=54/55 n=5 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=77 pruub=11.282646179s) [1] r=-1 lpr=77 pi=[54,77)/1 crt=51'1000 lcod 0'0 mlcod 0'0 active pruub 161.867889404s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:59:47 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 77 pg[9.1a( v 51'1000 (0'0,51'1000] local-lis/les=54/55 n=5 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=77 pruub=11.282576561s) [1] r=-1 lpr=77 pi=[54,77)/1 crt=51'1000 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 161.867889404s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:59:47 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 77 pg[9.a( v 51'1000 (0'0,51'1000] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=77 pruub=11.282383919s) [1] r=-1 lpr=77 pi=[54,77)/1 crt=51'1000 lcod 0'0 mlcod 0'0 active pruub 161.867584229s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:59:47 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 77 pg[9.a( v 51'1000 (0'0,51'1000] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=77 pruub=11.281693459s) [1] r=-1 lpr=77 pi=[54,77)/1 crt=51'1000 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 161.867584229s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:59:47 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 77 pg[9.9( v 51'1000 (0'0,51'1000] local-lis/les=76/77 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=76) [2]/[0] async=[2] r=0 lpr=76 pi=[54,76)/1 crt=51'1000 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:47 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 77 pg[9.19( v 51'1000 (0'0,51'1000] local-lis/les=76/77 n=5 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=76) [2]/[0] async=[2] r=0 lpr=76 pi=[54,76)/1 crt=51'1000 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:47 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 13:59:47 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 13:59:47 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:13:59:47.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 13:59:47 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e77 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 13:59:47 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e77 do_prune osdmap full prune enabled
Jan 20 13:59:47 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e78 e78: 3 total, 3 up, 3 in
Jan 20 13:59:47 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e78: 3 total, 3 up, 3 in
Jan 20 13:59:47 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 78 pg[9.9( v 51'1000 (0'0,51'1000] local-lis/les=76/77 n=6 ec=54/45 lis/c=76/54 les/c/f=77/55/0 sis=78 pruub=15.588979721s) [2] async=[2] r=-1 lpr=78 pi=[54,78)/1 crt=51'1000 lcod 0'0 mlcod 0'0 active pruub 166.594360352s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:59:47 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 78 pg[9.9( v 51'1000 (0'0,51'1000] local-lis/les=76/77 n=6 ec=54/45 lis/c=76/54 les/c/f=77/55/0 sis=78 pruub=15.588880539s) [2] r=-1 lpr=78 pi=[54,78)/1 crt=51'1000 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 166.594360352s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:59:47 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 78 pg[9.a( v 51'1000 (0'0,51'1000] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=78) [1]/[0] r=0 lpr=78 pi=[54,78)/1 crt=51'1000 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:59:47 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 78 pg[9.a( v 51'1000 (0'0,51'1000] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=78) [1]/[0] r=0 lpr=78 pi=[54,78)/1 crt=51'1000 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:47 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 78 pg[9.1a( v 51'1000 (0'0,51'1000] local-lis/les=54/55 n=5 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=78) [1]/[0] r=0 lpr=78 pi=[54,78)/1 crt=51'1000 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:59:47 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 78 pg[9.1a( v 51'1000 (0'0,51'1000] local-lis/les=54/55 n=5 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=78) [1]/[0] r=0 lpr=78 pi=[54,78)/1 crt=51'1000 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 13:59:47 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 78 pg[9.19( v 51'1000 (0'0,51'1000] local-lis/les=76/77 n=5 ec=54/45 lis/c=76/54 les/c/f=77/55/0 sis=78 pruub=15.587812424s) [2] async=[2] r=-1 lpr=78 pi=[54,78)/1 crt=51'1000 lcod 0'0 mlcod 0'0 active pruub 166.594375610s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:59:47 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 78 pg[9.19( v 51'1000 (0'0,51'1000] local-lis/les=76/77 n=5 ec=54/45 lis/c=76/54 les/c/f=77/55/0 sis=78 pruub=15.587525368s) [2] r=-1 lpr=78 pi=[54,78)/1 crt=51'1000 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 166.594375610s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:59:48 compute-0 ceph-mon[74360]: pgmap v186: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Jan 20 13:59:48 compute-0 ceph-mon[74360]: 4.5 deep-scrub starts
Jan 20 13:59:48 compute-0 ceph-mon[74360]: 4.5 deep-scrub ok
Jan 20 13:59:48 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Jan 20 13:59:48 compute-0 ceph-mon[74360]: osdmap e77: 3 total, 3 up, 3 in
Jan 20 13:59:48 compute-0 ceph-mon[74360]: osdmap e78: 3 total, 3 up, 3 in
Jan 20 13:59:48 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 13:59:48 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 13:59:48 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:13:59:48.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 13:59:48 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e78 do_prune osdmap full prune enabled
Jan 20 13:59:48 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e79 e79: 3 total, 3 up, 3 in
Jan 20 13:59:48 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e79: 3 total, 3 up, 3 in
Jan 20 13:59:48 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 79 pg[9.1a( v 51'1000 (0'0,51'1000] local-lis/les=78/79 n=5 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=78) [1]/[0] async=[1] r=0 lpr=78 pi=[54,78)/1 crt=51'1000 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:48 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 79 pg[9.a( v 51'1000 (0'0,51'1000] local-lis/les=78/79 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=78) [1]/[0] async=[1] r=0 lpr=78 pi=[54,78)/1 crt=51'1000 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 13:59:49 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v190: 321 pgs: 2 remapped+peering, 319 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Jan 20 13:59:49 compute-0 sudo[98454]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:59:49 compute-0 sudo[98454]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:59:49 compute-0 sudo[98454]: pam_unix(sudo:session): session closed for user root
Jan 20 13:59:49 compute-0 sudo[98479]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 13:59:49 compute-0 sudo[98479]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 13:59:49 compute-0 sudo[98479]: pam_unix(sudo:session): session closed for user root
Jan 20 13:59:49 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 13:59:49 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 13:59:49 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:13:59:49.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 13:59:49 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e79 do_prune osdmap full prune enabled
Jan 20 13:59:49 compute-0 ceph-mon[74360]: osdmap e79: 3 total, 3 up, 3 in
Jan 20 13:59:49 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e80 e80: 3 total, 3 up, 3 in
Jan 20 13:59:49 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e80: 3 total, 3 up, 3 in
Jan 20 13:59:49 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 80 pg[9.a( v 51'1000 (0'0,51'1000] local-lis/les=78/79 n=6 ec=54/45 lis/c=78/54 les/c/f=79/55/0 sis=80 pruub=14.977008820s) [1] async=[1] r=-1 lpr=80 pi=[54,80)/1 crt=51'1000 lcod 0'0 mlcod 0'0 active pruub 168.023452759s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:59:49 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 80 pg[9.a( v 51'1000 (0'0,51'1000] local-lis/les=78/79 n=6 ec=54/45 lis/c=78/54 les/c/f=79/55/0 sis=80 pruub=14.976887703s) [1] r=-1 lpr=80 pi=[54,80)/1 crt=51'1000 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 168.023452759s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:59:49 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 80 pg[9.1a( v 51'1000 (0'0,51'1000] local-lis/les=78/79 n=5 ec=54/45 lis/c=78/54 les/c/f=79/55/0 sis=80 pruub=14.976744652s) [1] async=[1] r=-1 lpr=80 pi=[54,80)/1 crt=51'1000 lcod 0'0 mlcod 0'0 active pruub 168.023437500s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 13:59:49 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 80 pg[9.1a( v 51'1000 (0'0,51'1000] local-lis/les=78/79 n=5 ec=54/45 lis/c=78/54 les/c/f=79/55/0 sis=80 pruub=14.976655960s) [1] r=-1 lpr=80 pi=[54,80)/1 crt=51'1000 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 168.023437500s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 13:59:49 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 6.9 deep-scrub starts
Jan 20 13:59:50 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 6.9 deep-scrub ok
Jan 20 13:59:50 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 13:59:50 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 13:59:50 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:13:59:50.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 13:59:50 compute-0 ceph-mon[74360]: pgmap v190: 321 pgs: 2 remapped+peering, 319 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Jan 20 13:59:50 compute-0 ceph-mon[74360]: 5.f scrub starts
Jan 20 13:59:50 compute-0 ceph-mon[74360]: 5.f scrub ok
Jan 20 13:59:50 compute-0 ceph-mon[74360]: osdmap e80: 3 total, 3 up, 3 in
Jan 20 13:59:50 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e80 do_prune osdmap full prune enabled
Jan 20 13:59:50 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e81 e81: 3 total, 3 up, 3 in
Jan 20 13:59:50 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e81: 3 total, 3 up, 3 in
Jan 20 13:59:51 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v193: 321 pgs: 2 peering, 319 active+clean; 456 KiB data, 104 MiB used, 21 GiB / 21 GiB avail; 205 B/s, 9 objects/s recovering
Jan 20 13:59:51 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 13:59:51 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 13:59:51 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:13:59:51.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 13:59:51 compute-0 ceph-mon[74360]: 6.9 deep-scrub starts
Jan 20 13:59:51 compute-0 ceph-mon[74360]: 6.9 deep-scrub ok
Jan 20 13:59:51 compute-0 ceph-mon[74360]: osdmap e81: 3 total, 3 up, 3 in
Jan 20 13:59:51 compute-0 ceph-mon[74360]: pgmap v193: 321 pgs: 2 peering, 319 active+clean; 456 KiB data, 104 MiB used, 21 GiB / 21 GiB avail; 205 B/s, 9 objects/s recovering
Jan 20 13:59:51 compute-0 ceph-mon[74360]: 6.1c scrub starts
Jan 20 13:59:51 compute-0 ceph-mon[74360]: 6.1c scrub ok
Jan 20 13:59:52 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 6.b scrub starts
Jan 20 13:59:52 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 6.b scrub ok
Jan 20 13:59:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Optimize plan auto_2026-01-20_13:59:52
Jan 20 13:59:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 13:59:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Some PGs (0.006231) are inactive; try again later
Jan 20 13:59:52 compute-0 sshd-session[98505]: Accepted publickey for zuul from 192.168.122.30 port 51048 ssh2: ECDSA SHA256:Yw0kyD5N4lqNgr1J3b5cYIIxKFrTRY8zW6kk+n6imz4
Jan 20 13:59:52 compute-0 systemd-logind[796]: New session 34 of user zuul.
Jan 20 13:59:52 compute-0 systemd[1]: Started Session 34 of User zuul.
Jan 20 13:59:52 compute-0 sshd-session[98505]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 20 13:59:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 13:59:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 13:59:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 13:59:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 13:59:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 13:59:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 13:59:52 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 13:59:52 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 13:59:52 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:13:59:52.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 13:59:52 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e81 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 13:59:52 compute-0 ceph-mon[74360]: 6.b scrub starts
Jan 20 13:59:52 compute-0 ceph-mon[74360]: 6.b scrub ok
Jan 20 13:59:52 compute-0 ceph-mon[74360]: 6.1e scrub starts
Jan 20 13:59:52 compute-0 ceph-mon[74360]: 6.1e scrub ok
Jan 20 13:59:53 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 6.c scrub starts
Jan 20 13:59:53 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 6.c scrub ok
Jan 20 13:59:53 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v194: 321 pgs: 2 peering, 319 active+clean; 456 KiB data, 104 MiB used, 21 GiB / 21 GiB avail; 144 B/s, 6 objects/s recovering
Jan 20 13:59:53 compute-0 python3.9[98658]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 13:59:53 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 13:59:53 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 13:59:53 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:13:59:53.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 13:59:54 compute-0 ceph-mon[74360]: 6.c scrub starts
Jan 20 13:59:54 compute-0 ceph-mon[74360]: 6.c scrub ok
Jan 20 13:59:54 compute-0 ceph-mon[74360]: pgmap v194: 321 pgs: 2 peering, 319 active+clean; 456 KiB data, 104 MiB used, 21 GiB / 21 GiB avail; 144 B/s, 6 objects/s recovering
Jan 20 13:59:54 compute-0 ceph-mon[74360]: 6.17 scrub starts
Jan 20 13:59:54 compute-0 ceph-mon[74360]: 6.17 scrub ok
Jan 20 13:59:54 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 13:59:54 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 13:59:54 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:13:59:54.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 13:59:55 compute-0 ceph-mon[74360]: 6.7 scrub starts
Jan 20 13:59:55 compute-0 ceph-mon[74360]: 6.7 scrub ok
Jan 20 13:59:55 compute-0 ceph-mon[74360]: 6.12 scrub starts
Jan 20 13:59:55 compute-0 ceph-mon[74360]: 6.12 scrub ok
Jan 20 13:59:55 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v195: 321 pgs: 2 peering, 319 active+clean; 456 KiB data, 104 MiB used, 21 GiB / 21 GiB avail; 121 B/s, 5 objects/s recovering
Jan 20 13:59:55 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 13:59:55 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 13:59:55 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:13:59:55.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 13:59:56 compute-0 ceph-mon[74360]: pgmap v195: 321 pgs: 2 peering, 319 active+clean; 456 KiB data, 104 MiB used, 21 GiB / 21 GiB avail; 121 B/s, 5 objects/s recovering
Jan 20 13:59:56 compute-0 ceph-mon[74360]: 4.15 scrub starts
Jan 20 13:59:56 compute-0 ceph-mon[74360]: 4.15 scrub ok
Jan 20 13:59:56 compute-0 sudo[98872]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yczfyaylkibjwkbdmfeldaofxpiqthmv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917595.6628053-56-258112961952613/AnsiballZ_command.py'
Jan 20 13:59:56 compute-0 sudo[98872]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 13:59:56 compute-0 python3.9[98874]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                            pushd /var/tmp
                                            curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                            pushd repo-setup-main
                                            python3 -m venv ./venv
                                            PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                            ./venv/bin/repo-setup current-podified -b antelope
                                            popd
                                            rm -rf repo-setup-main
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 13:59:56 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 13:59:56 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 13:59:56 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:13:59:56.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 13:59:57 compute-0 ceph-mon[74360]: 5.7 deep-scrub starts
Jan 20 13:59:57 compute-0 ceph-mon[74360]: 5.7 deep-scrub ok
Jan 20 13:59:57 compute-0 ceph-mon[74360]: 7.11 scrub starts
Jan 20 13:59:57 compute-0 ceph-mon[74360]: 7.11 scrub ok
Jan 20 13:59:57 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 6.f scrub starts
Jan 20 13:59:57 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 6.f scrub ok
Jan 20 13:59:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 13:59:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 13:59:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 13:59:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 13:59:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 13:59:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 13:59:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 13:59:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 13:59:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 13:59:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 13:59:57 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v196: 321 pgs: 321 active+clean; 456 KiB data, 121 MiB used, 21 GiB / 21 GiB avail; 96 B/s, 4 objects/s recovering
Jan 20 13:59:57 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} v 0) v1
Jan 20 13:59:57 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Jan 20 13:59:57 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 13:59:57 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 13:59:57 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:13:59:57.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 13:59:57 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e81 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 13:59:58 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e81 do_prune osdmap full prune enabled
Jan 20 13:59:58 compute-0 ceph-mon[74360]: 6.f scrub starts
Jan 20 13:59:58 compute-0 ceph-mon[74360]: 6.f scrub ok
Jan 20 13:59:58 compute-0 ceph-mon[74360]: pgmap v196: 321 pgs: 321 active+clean; 456 KiB data, 121 MiB used, 21 GiB / 21 GiB avail; 96 B/s, 4 objects/s recovering
Jan 20 13:59:58 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Jan 20 13:59:58 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Jan 20 13:59:58 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e82 e82: 3 total, 3 up, 3 in
Jan 20 13:59:58 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e82: 3 total, 3 up, 3 in
Jan 20 13:59:58 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 13:59:58 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 13:59:58 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:13:59:58.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 13:59:59 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 6.10 deep-scrub starts
Jan 20 13:59:59 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 6.10 deep-scrub ok
Jan 20 13:59:59 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Jan 20 13:59:59 compute-0 ceph-mon[74360]: osdmap e82: 3 total, 3 up, 3 in
Jan 20 13:59:59 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v198: 321 pgs: 321 active+clean; 456 KiB data, 121 MiB used, 21 GiB / 21 GiB avail
Jan 20 13:59:59 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} v 0) v1
Jan 20 13:59:59 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Jan 20 13:59:59 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 13:59:59 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 13:59:59 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:13:59:59.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 13:59:59 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 6.11 scrub starts
Jan 20 14:00:00 compute-0 ceph-mon[74360]: log_channel(cluster) log [INF] : overall HEALTH_OK
Jan 20 14:00:00 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 6.11 scrub ok
Jan 20 14:00:00 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e82 do_prune osdmap full prune enabled
Jan 20 14:00:00 compute-0 ceph-mon[74360]: 6.10 deep-scrub starts
Jan 20 14:00:00 compute-0 ceph-mon[74360]: 6.10 deep-scrub ok
Jan 20 14:00:00 compute-0 ceph-mon[74360]: pgmap v198: 321 pgs: 321 active+clean; 456 KiB data, 121 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:00:00 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Jan 20 14:00:00 compute-0 ceph-mon[74360]: overall HEALTH_OK
Jan 20 14:00:00 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Jan 20 14:00:00 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e83 e83: 3 total, 3 up, 3 in
Jan 20 14:00:00 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e83: 3 total, 3 up, 3 in
Jan 20 14:00:00 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:00:00 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:00:00 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:00:00.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:00:01 compute-0 sshd-session[98900]: Connection closed by authenticating user root 157.245.78.139 port 59278 [preauth]
Jan 20 14:00:01 compute-0 ceph-mon[74360]: 6.11 scrub starts
Jan 20 14:00:01 compute-0 ceph-mon[74360]: 6.11 scrub ok
Jan 20 14:00:01 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Jan 20 14:00:01 compute-0 ceph-mon[74360]: osdmap e83: 3 total, 3 up, 3 in
Jan 20 14:00:01 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v200: 321 pgs: 321 active+clean; 456 KiB data, 121 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:00:01 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} v 0) v1
Jan 20 14:00:01 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Jan 20 14:00:01 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:00:01 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:00:01 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:00:01.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:00:02 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 6.13 scrub starts
Jan 20 14:00:02 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 6.13 scrub ok
Jan 20 14:00:02 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e83 do_prune osdmap full prune enabled
Jan 20 14:00:02 compute-0 ceph-mon[74360]: pgmap v200: 321 pgs: 321 active+clean; 456 KiB data, 121 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:00:02 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Jan 20 14:00:02 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Jan 20 14:00:02 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e84 e84: 3 total, 3 up, 3 in
Jan 20 14:00:02 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e84: 3 total, 3 up, 3 in
Jan 20 14:00:02 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:00:02 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:00:02 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:00:02.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:00:02 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e84 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:00:02 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e84 do_prune osdmap full prune enabled
Jan 20 14:00:02 compute-0 sudo[98872]: pam_unix(sudo:session): session closed for user root
Jan 20 14:00:02 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e85 e85: 3 total, 3 up, 3 in
Jan 20 14:00:02 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e85: 3 total, 3 up, 3 in
Jan 20 14:00:03 compute-0 ceph-mon[74360]: 6.13 scrub starts
Jan 20 14:00:03 compute-0 ceph-mon[74360]: 6.13 scrub ok
Jan 20 14:00:03 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Jan 20 14:00:03 compute-0 ceph-mon[74360]: osdmap e84: 3 total, 3 up, 3 in
Jan 20 14:00:03 compute-0 ceph-mon[74360]: osdmap e85: 3 total, 3 up, 3 in
Jan 20 14:00:03 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v203: 321 pgs: 321 active+clean; 456 KiB data, 121 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:00:03 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} v 0) v1
Jan 20 14:00:03 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Jan 20 14:00:03 compute-0 sshd-session[98508]: Connection closed by 192.168.122.30 port 51048
Jan 20 14:00:03 compute-0 sshd-session[98505]: pam_unix(sshd:session): session closed for user zuul
Jan 20 14:00:03 compute-0 systemd[1]: session-34.scope: Deactivated successfully.
Jan 20 14:00:03 compute-0 systemd[1]: session-34.scope: Consumed 8.330s CPU time.
Jan 20 14:00:03 compute-0 systemd-logind[796]: Session 34 logged out. Waiting for processes to exit.
Jan 20 14:00:03 compute-0 systemd-logind[796]: Removed session 34.
Jan 20 14:00:03 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:00:03 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:00:03 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:00:03.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:00:03 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e85 do_prune osdmap full prune enabled
Jan 20 14:00:03 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Jan 20 14:00:03 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e86 e86: 3 total, 3 up, 3 in
Jan 20 14:00:03 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e86: 3 total, 3 up, 3 in
Jan 20 14:00:04 compute-0 ceph-mon[74360]: pgmap v203: 321 pgs: 321 active+clean; 456 KiB data, 121 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:00:04 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Jan 20 14:00:04 compute-0 ceph-mon[74360]: 7.16 scrub starts
Jan 20 14:00:04 compute-0 ceph-mon[74360]: 7.16 scrub ok
Jan 20 14:00:04 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Jan 20 14:00:04 compute-0 ceph-mon[74360]: osdmap e86: 3 total, 3 up, 3 in
Jan 20 14:00:04 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:00:04 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:00:04 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:00:04.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:00:04 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e86 do_prune osdmap full prune enabled
Jan 20 14:00:04 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e87 e87: 3 total, 3 up, 3 in
Jan 20 14:00:04 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e87: 3 total, 3 up, 3 in
Jan 20 14:00:05 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 6.14 scrub starts
Jan 20 14:00:05 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 6.14 scrub ok
Jan 20 14:00:05 compute-0 ceph-mon[74360]: 7.1f scrub starts
Jan 20 14:00:05 compute-0 ceph-mon[74360]: 7.1f scrub ok
Jan 20 14:00:05 compute-0 ceph-mon[74360]: osdmap e87: 3 total, 3 up, 3 in
Jan 20 14:00:05 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v206: 321 pgs: 2 active+remapped, 319 active+clean; 456 KiB data, 121 MiB used, 21 GiB / 21 GiB avail; 82 B/s, 3 objects/s recovering
Jan 20 14:00:05 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} v 0) v1
Jan 20 14:00:05 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Jan 20 14:00:05 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:00:05 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:00:05 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:00:05.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:00:05 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e87 do_prune osdmap full prune enabled
Jan 20 14:00:05 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Jan 20 14:00:05 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e88 e88: 3 total, 3 up, 3 in
Jan 20 14:00:05 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e88: 3 total, 3 up, 3 in
Jan 20 14:00:06 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 6.16 scrub starts
Jan 20 14:00:06 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 6.16 scrub ok
Jan 20 14:00:06 compute-0 ceph-mon[74360]: 6.3 scrub starts
Jan 20 14:00:06 compute-0 ceph-mon[74360]: 6.3 scrub ok
Jan 20 14:00:06 compute-0 ceph-mon[74360]: 6.14 scrub starts
Jan 20 14:00:06 compute-0 ceph-mon[74360]: 6.14 scrub ok
Jan 20 14:00:06 compute-0 ceph-mon[74360]: pgmap v206: 321 pgs: 2 active+remapped, 319 active+clean; 456 KiB data, 121 MiB used, 21 GiB / 21 GiB avail; 82 B/s, 3 objects/s recovering
Jan 20 14:00:06 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Jan 20 14:00:06 compute-0 ceph-mon[74360]: 3.15 scrub starts
Jan 20 14:00:06 compute-0 ceph-mon[74360]: 3.15 scrub ok
Jan 20 14:00:06 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Jan 20 14:00:06 compute-0 ceph-mon[74360]: osdmap e88: 3 total, 3 up, 3 in
Jan 20 14:00:06 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:00:06 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:00:06 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:00:06.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:00:06 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e88 do_prune osdmap full prune enabled
Jan 20 14:00:06 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e89 e89: 3 total, 3 up, 3 in
Jan 20 14:00:06 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e89: 3 total, 3 up, 3 in
Jan 20 14:00:07 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 6.18 scrub starts
Jan 20 14:00:07 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 6.18 scrub ok
Jan 20 14:00:07 compute-0 ceph-mon[74360]: 6.2 scrub starts
Jan 20 14:00:07 compute-0 ceph-mon[74360]: 6.2 scrub ok
Jan 20 14:00:07 compute-0 ceph-mon[74360]: 6.16 scrub starts
Jan 20 14:00:07 compute-0 ceph-mon[74360]: 6.16 scrub ok
Jan 20 14:00:07 compute-0 ceph-mon[74360]: 2.12 scrub starts
Jan 20 14:00:07 compute-0 ceph-mon[74360]: 2.12 scrub ok
Jan 20 14:00:07 compute-0 ceph-mon[74360]: osdmap e89: 3 total, 3 up, 3 in
Jan 20 14:00:07 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v209: 321 pgs: 2 peering, 319 active+clean; 456 KiB data, 121 MiB used, 21 GiB / 21 GiB avail; 82 B/s, 3 objects/s recovering
Jan 20 14:00:07 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:00:07 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:00:07 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:00:07.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:00:07 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:00:07 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e89 do_prune osdmap full prune enabled
Jan 20 14:00:07 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e90 e90: 3 total, 3 up, 3 in
Jan 20 14:00:07 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e90: 3 total, 3 up, 3 in
Jan 20 14:00:08 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 6.1d scrub starts
Jan 20 14:00:08 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 6.1d scrub ok
Jan 20 14:00:08 compute-0 ceph-mon[74360]: 6.18 scrub starts
Jan 20 14:00:08 compute-0 ceph-mon[74360]: 6.18 scrub ok
Jan 20 14:00:08 compute-0 ceph-mon[74360]: pgmap v209: 321 pgs: 2 peering, 319 active+clean; 456 KiB data, 121 MiB used, 21 GiB / 21 GiB avail; 82 B/s, 3 objects/s recovering
Jan 20 14:00:08 compute-0 ceph-mon[74360]: osdmap e90: 3 total, 3 up, 3 in
Jan 20 14:00:08 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:00:08 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:00:08 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:00:08.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:00:09 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v211: 321 pgs: 2 peering, 319 active+clean; 456 KiB data, 121 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:00:09 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e90 do_prune osdmap full prune enabled
Jan 20 14:00:09 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e91 e91: 3 total, 3 up, 3 in
Jan 20 14:00:09 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e91: 3 total, 3 up, 3 in
Jan 20 14:00:09 compute-0 ceph-mon[74360]: 6.1d scrub starts
Jan 20 14:00:09 compute-0 ceph-mon[74360]: 6.1d scrub ok
Jan 20 14:00:09 compute-0 ceph-mon[74360]: 3.e scrub starts
Jan 20 14:00:09 compute-0 ceph-mon[74360]: 3.e scrub ok
Jan 20 14:00:09 compute-0 sudo[98940]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:00:09 compute-0 sudo[98940]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:00:09 compute-0 sudo[98940]: pam_unix(sudo:session): session closed for user root
Jan 20 14:00:09 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:00:09 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:00:09 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:00:09.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:00:09 compute-0 sudo[98965]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:00:09 compute-0 sudo[98965]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:00:09 compute-0 sudo[98965]: pam_unix(sudo:session): session closed for user root
Jan 20 14:00:10 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e91 do_prune osdmap full prune enabled
Jan 20 14:00:10 compute-0 ceph-mon[74360]: pgmap v211: 321 pgs: 2 peering, 319 active+clean; 456 KiB data, 121 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:00:10 compute-0 ceph-mon[74360]: osdmap e91: 3 total, 3 up, 3 in
Jan 20 14:00:10 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e92 e92: 3 total, 3 up, 3 in
Jan 20 14:00:10 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e92: 3 total, 3 up, 3 in
Jan 20 14:00:10 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:00:10 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:00:10 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:00:10.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:00:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 14:00:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:00:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 20 14:00:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:00:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:00:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:00:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:00:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:00:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:00:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:00:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:00:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:00:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 20 14:00:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:00:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:00:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:00:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 20 14:00:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:00:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 20 14:00:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:00:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:00:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:00:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 20 14:00:11 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v214: 321 pgs: 321 active+clean; 456 KiB data, 122 MiB used, 21 GiB / 21 GiB avail; 51 B/s, 2 objects/s recovering
Jan 20 14:00:11 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} v 0) v1
Jan 20 14:00:11 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Jan 20 14:00:11 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e92 do_prune osdmap full prune enabled
Jan 20 14:00:11 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Jan 20 14:00:11 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e93 e93: 3 total, 3 up, 3 in
Jan 20 14:00:11 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e93: 3 total, 3 up, 3 in
Jan 20 14:00:11 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 93 pg[9.10( v 51'1000 (0'0,51'1000] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=93 pruub=11.364454269s) [1] r=-1 lpr=93 pi=[54,93)/1 crt=51'1000 lcod 0'0 mlcod 0'0 active pruub 185.861053467s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 14:00:11 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 93 pg[9.10( v 51'1000 (0'0,51'1000] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=93 pruub=11.364386559s) [1] r=-1 lpr=93 pi=[54,93)/1 crt=51'1000 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 185.861053467s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:00:11 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:00:11 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:00:11 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:00:11.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:00:12 compute-0 ceph-mon[74360]: osdmap e92: 3 total, 3 up, 3 in
Jan 20 14:00:12 compute-0 ceph-mon[74360]: 3.11 scrub starts
Jan 20 14:00:12 compute-0 ceph-mon[74360]: 3.11 scrub ok
Jan 20 14:00:12 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Jan 20 14:00:12 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 6.1f scrub starts
Jan 20 14:00:12 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 6.1f scrub ok
Jan 20 14:00:12 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e93 do_prune osdmap full prune enabled
Jan 20 14:00:12 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e94 e94: 3 total, 3 up, 3 in
Jan 20 14:00:12 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e94: 3 total, 3 up, 3 in
Jan 20 14:00:12 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 94 pg[9.10( v 51'1000 (0'0,51'1000] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=94) [1]/[0] r=0 lpr=94 pi=[54,94)/1 crt=51'1000 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 14:00:12 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 94 pg[9.10( v 51'1000 (0'0,51'1000] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=94) [1]/[0] r=0 lpr=94 pi=[54,94)/1 crt=51'1000 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 14:00:12 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:00:12 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:00:12 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:00:12.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:00:12 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e94 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:00:13 compute-0 ceph-mon[74360]: pgmap v214: 321 pgs: 321 active+clean; 456 KiB data, 122 MiB used, 21 GiB / 21 GiB avail; 51 B/s, 2 objects/s recovering
Jan 20 14:00:13 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Jan 20 14:00:13 compute-0 ceph-mon[74360]: osdmap e93: 3 total, 3 up, 3 in
Jan 20 14:00:13 compute-0 ceph-mon[74360]: 6.1f scrub starts
Jan 20 14:00:13 compute-0 ceph-mon[74360]: 6.1f scrub ok
Jan 20 14:00:13 compute-0 ceph-mon[74360]: osdmap e94: 3 total, 3 up, 3 in
Jan 20 14:00:13 compute-0 ceph-mon[74360]: 4.9 scrub starts
Jan 20 14:00:13 compute-0 ceph-mon[74360]: 4.9 scrub ok
Jan 20 14:00:13 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v217: 321 pgs: 321 active+clean; 456 KiB data, 122 MiB used, 21 GiB / 21 GiB avail; 54 B/s, 2 objects/s recovering
Jan 20 14:00:13 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} v 0) v1
Jan 20 14:00:13 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Jan 20 14:00:13 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e94 do_prune osdmap full prune enabled
Jan 20 14:00:13 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Jan 20 14:00:13 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e95 e95: 3 total, 3 up, 3 in
Jan 20 14:00:13 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e95: 3 total, 3 up, 3 in
Jan 20 14:00:13 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 95 pg[9.11( v 51'1000 (0'0,51'1000] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=95 pruub=9.336042404s) [1] r=-1 lpr=95 pi=[54,95)/1 crt=51'1000 lcod 0'0 mlcod 0'0 active pruub 185.860855103s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 14:00:13 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 95 pg[9.11( v 51'1000 (0'0,51'1000] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=95 pruub=9.335947990s) [1] r=-1 lpr=95 pi=[54,95)/1 crt=51'1000 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 185.860855103s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:00:13 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 95 pg[9.10( v 51'1000 (0'0,51'1000] local-lis/les=94/95 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=94) [1]/[0] async=[1] r=0 lpr=94 pi=[54,94)/1 crt=51'1000 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:00:13 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:00:13 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:00:13 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:00:13.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:00:14 compute-0 ceph-mon[74360]: pgmap v217: 321 pgs: 321 active+clean; 456 KiB data, 122 MiB used, 21 GiB / 21 GiB avail; 54 B/s, 2 objects/s recovering
Jan 20 14:00:14 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Jan 20 14:00:14 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Jan 20 14:00:14 compute-0 ceph-mon[74360]: osdmap e95: 3 total, 3 up, 3 in
Jan 20 14:00:14 compute-0 ceph-mon[74360]: 2.18 scrub starts
Jan 20 14:00:14 compute-0 ceph-mon[74360]: 2.18 scrub ok
Jan 20 14:00:14 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e95 do_prune osdmap full prune enabled
Jan 20 14:00:14 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e96 e96: 3 total, 3 up, 3 in
Jan 20 14:00:14 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e96: 3 total, 3 up, 3 in
Jan 20 14:00:14 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 96 pg[9.11( v 51'1000 (0'0,51'1000] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=96) [1]/[0] r=0 lpr=96 pi=[54,96)/1 crt=51'1000 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 14:00:14 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 96 pg[9.10( v 51'1000 (0'0,51'1000] local-lis/les=94/95 n=6 ec=54/45 lis/c=94/54 les/c/f=95/55/0 sis=96 pruub=15.012884140s) [1] async=[1] r=-1 lpr=96 pi=[54,96)/1 crt=51'1000 lcod 0'0 mlcod 0'0 active pruub 192.542083740s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 14:00:14 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 96 pg[9.10( v 51'1000 (0'0,51'1000] local-lis/les=94/95 n=6 ec=54/45 lis/c=94/54 les/c/f=95/55/0 sis=96 pruub=15.012821198s) [1] r=-1 lpr=96 pi=[54,96)/1 crt=51'1000 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 192.542083740s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:00:14 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 96 pg[9.11( v 51'1000 (0'0,51'1000] local-lis/les=54/55 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=96) [1]/[0] r=0 lpr=96 pi=[54,96)/1 crt=51'1000 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 14:00:14 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:00:14 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:00:14 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:00:14.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:00:15 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 2.19 deep-scrub starts
Jan 20 14:00:15 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 2.19 deep-scrub ok
Jan 20 14:00:15 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v220: 321 pgs: 1 active+remapped, 320 active+clean; 456 KiB data, 122 MiB used, 21 GiB / 21 GiB avail; 0 B/s, 0 objects/s recovering
Jan 20 14:00:15 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} v 0) v1
Jan 20 14:00:15 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Jan 20 14:00:15 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e96 do_prune osdmap full prune enabled
Jan 20 14:00:15 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Jan 20 14:00:15 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e97 e97: 3 total, 3 up, 3 in
Jan 20 14:00:15 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e97: 3 total, 3 up, 3 in
Jan 20 14:00:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 97 pg[9.12( v 51'1000 (0'0,51'1000] local-lis/les=54/55 n=5 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=97 pruub=15.324283600s) [1] r=-1 lpr=97 pi=[54,97)/1 crt=51'1000 lcod 0'0 mlcod 0'0 active pruub 193.868896484s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 14:00:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 97 pg[9.12( v 51'1000 (0'0,51'1000] local-lis/les=54/55 n=5 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=97 pruub=15.324126244s) [1] r=-1 lpr=97 pi=[54,97)/1 crt=51'1000 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 193.868896484s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:00:15 compute-0 ceph-mon[74360]: osdmap e96: 3 total, 3 up, 3 in
Jan 20 14:00:15 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Jan 20 14:00:15 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 97 pg[9.11( v 51'1000 (0'0,51'1000] local-lis/les=96/97 n=6 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=96) [1]/[0] async=[1] r=0 lpr=96 pi=[54,96)/1 crt=51'1000 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:00:15 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:00:15 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:00:15 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:00:15.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:00:16 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e97 do_prune osdmap full prune enabled
Jan 20 14:00:16 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e98 e98: 3 total, 3 up, 3 in
Jan 20 14:00:16 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e98: 3 total, 3 up, 3 in
Jan 20 14:00:16 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 98 pg[9.11( v 51'1000 (0'0,51'1000] local-lis/les=96/97 n=6 ec=54/45 lis/c=96/54 les/c/f=97/55/0 sis=98 pruub=15.005532265s) [1] async=[1] r=-1 lpr=98 pi=[54,98)/1 crt=51'1000 lcod 0'0 mlcod 0'0 active pruub 194.554824829s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 14:00:16 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 98 pg[9.11( v 51'1000 (0'0,51'1000] local-lis/les=96/97 n=6 ec=54/45 lis/c=96/54 les/c/f=97/55/0 sis=98 pruub=15.005423546s) [1] r=-1 lpr=98 pi=[54,98)/1 crt=51'1000 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 194.554824829s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:00:16 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 98 pg[9.12( v 51'1000 (0'0,51'1000] local-lis/les=54/55 n=5 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=98) [1]/[0] r=0 lpr=98 pi=[54,98)/1 crt=51'1000 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 14:00:16 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 98 pg[9.12( v 51'1000 (0'0,51'1000] local-lis/les=54/55 n=5 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=98) [1]/[0] r=0 lpr=98 pi=[54,98)/1 crt=51'1000 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 14:00:16 compute-0 ceph-mon[74360]: 2.19 deep-scrub starts
Jan 20 14:00:16 compute-0 ceph-mon[74360]: 2.19 deep-scrub ok
Jan 20 14:00:16 compute-0 ceph-mon[74360]: pgmap v220: 321 pgs: 1 active+remapped, 320 active+clean; 456 KiB data, 122 MiB used, 21 GiB / 21 GiB avail; 0 B/s, 0 objects/s recovering
Jan 20 14:00:16 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Jan 20 14:00:16 compute-0 ceph-mon[74360]: osdmap e97: 3 total, 3 up, 3 in
Jan 20 14:00:16 compute-0 ceph-mon[74360]: 4.8 scrub starts
Jan 20 14:00:16 compute-0 ceph-mon[74360]: 4.8 scrub ok
Jan 20 14:00:16 compute-0 ceph-mon[74360]: osdmap e98: 3 total, 3 up, 3 in
Jan 20 14:00:16 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:00:16 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:00:16 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:00:16.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:00:17 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 7.13 scrub starts
Jan 20 14:00:17 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 7.13 scrub ok
Jan 20 14:00:17 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v223: 321 pgs: 1 unknown, 320 active+clean; 456 KiB data, 122 MiB used, 21 GiB / 21 GiB avail; 0 B/s, 0 objects/s recovering
Jan 20 14:00:17 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e98 do_prune osdmap full prune enabled
Jan 20 14:00:17 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e99 e99: 3 total, 3 up, 3 in
Jan 20 14:00:17 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e99: 3 total, 3 up, 3 in
Jan 20 14:00:17 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 99 pg[9.12( v 51'1000 (0'0,51'1000] local-lis/les=98/99 n=5 ec=54/45 lis/c=54/54 les/c/f=55/55/0 sis=98) [1]/[0] async=[1] r=0 lpr=98 pi=[54,98)/1 crt=51'1000 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:00:17 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:00:17 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:00:17 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:00:17.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:00:17 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e99 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:00:17 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e99 do_prune osdmap full prune enabled
Jan 20 14:00:17 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e100 e100: 3 total, 3 up, 3 in
Jan 20 14:00:17 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e100: 3 total, 3 up, 3 in
Jan 20 14:00:17 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 100 pg[9.12( v 51'1000 (0'0,51'1000] local-lis/les=98/99 n=5 ec=54/45 lis/c=98/54 les/c/f=99/55/0 sis=100 pruub=15.617624283s) [1] async=[1] r=-1 lpr=100 pi=[54,100)/1 crt=51'1000 lcod 0'0 mlcod 0'0 active pruub 196.640853882s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 14:00:17 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 100 pg[9.12( v 51'1000 (0'0,51'1000] local-lis/les=98/99 n=5 ec=54/45 lis/c=98/54 les/c/f=99/55/0 sis=100 pruub=15.617413521s) [1] r=-1 lpr=100 pi=[54,100)/1 crt=51'1000 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 196.640853882s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:00:18 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 7.10 scrub starts
Jan 20 14:00:18 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 7.10 scrub ok
Jan 20 14:00:18 compute-0 sshd-session[98994]: Accepted publickey for zuul from 192.168.122.30 port 44268 ssh2: ECDSA SHA256:Yw0kyD5N4lqNgr1J3b5cYIIxKFrTRY8zW6kk+n6imz4
Jan 20 14:00:18 compute-0 systemd-logind[796]: New session 35 of user zuul.
Jan 20 14:00:18 compute-0 systemd[1]: Started Session 35 of User zuul.
Jan 20 14:00:18 compute-0 sshd-session[98994]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 20 14:00:18 compute-0 ceph-mon[74360]: 7.13 scrub starts
Jan 20 14:00:18 compute-0 ceph-mon[74360]: 7.13 scrub ok
Jan 20 14:00:18 compute-0 ceph-mon[74360]: pgmap v223: 321 pgs: 1 unknown, 320 active+clean; 456 KiB data, 122 MiB used, 21 GiB / 21 GiB avail; 0 B/s, 0 objects/s recovering
Jan 20 14:00:18 compute-0 ceph-mon[74360]: 5.1 scrub starts
Jan 20 14:00:18 compute-0 ceph-mon[74360]: 5.1 scrub ok
Jan 20 14:00:18 compute-0 ceph-mon[74360]: osdmap e99: 3 total, 3 up, 3 in
Jan 20 14:00:18 compute-0 ceph-mon[74360]: osdmap e100: 3 total, 3 up, 3 in
Jan 20 14:00:18 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:00:18 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:00:18 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:00:18.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:00:18 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e100 do_prune osdmap full prune enabled
Jan 20 14:00:18 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e101 e101: 3 total, 3 up, 3 in
Jan 20 14:00:18 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e101: 3 total, 3 up, 3 in
Jan 20 14:00:19 compute-0 python3.9[99147]: ansible-ansible.legacy.ping Invoked with data=pong
Jan 20 14:00:19 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 2.e scrub starts
Jan 20 14:00:19 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 2.e scrub ok
Jan 20 14:00:19 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v227: 321 pgs: 1 unknown, 320 active+clean; 456 KiB data, 122 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:00:19 compute-0 ceph-mon[74360]: 7.10 scrub starts
Jan 20 14:00:19 compute-0 ceph-mon[74360]: 7.10 scrub ok
Jan 20 14:00:19 compute-0 ceph-mon[74360]: 5.4 scrub starts
Jan 20 14:00:19 compute-0 ceph-mon[74360]: 5.4 scrub ok
Jan 20 14:00:19 compute-0 ceph-mon[74360]: osdmap e101: 3 total, 3 up, 3 in
Jan 20 14:00:19 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:00:19 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:00:19 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:00:19.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:00:20 compute-0 python3.9[99322]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 14:00:20 compute-0 ceph-mon[74360]: 2.e scrub starts
Jan 20 14:00:20 compute-0 ceph-mon[74360]: 2.e scrub ok
Jan 20 14:00:20 compute-0 ceph-mon[74360]: pgmap v227: 321 pgs: 1 unknown, 320 active+clean; 456 KiB data, 122 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:00:20 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:00:20 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:00:20 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:00:20.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:00:21 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v228: 321 pgs: 321 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 8.1 KiB/s rd, 213 B/s wr, 14 op/s; 45 B/s, 1 objects/s recovering
Jan 20 14:00:21 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} v 0) v1
Jan 20 14:00:21 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Jan 20 14:00:21 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e101 do_prune osdmap full prune enabled
Jan 20 14:00:21 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Jan 20 14:00:21 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e102 e102: 3 total, 3 up, 3 in
Jan 20 14:00:21 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e102: 3 total, 3 up, 3 in
Jan 20 14:00:21 compute-0 ceph-mon[74360]: 7.5 scrub starts
Jan 20 14:00:21 compute-0 ceph-mon[74360]: 7.5 scrub ok
Jan 20 14:00:21 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Jan 20 14:00:21 compute-0 sudo[99477]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ysnsqsxvidvvivxosfukbmvvxdqywptd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917621.1598005-93-82384743604066/AnsiballZ_command.py'
Jan 20 14:00:21 compute-0 sudo[99477]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:00:21 compute-0 python3.9[99479]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:00:21 compute-0 sudo[99477]: pam_unix(sudo:session): session closed for user root
Jan 20 14:00:21 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:00:21 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:00:21 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:00:21.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:00:22 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 7.b scrub starts
Jan 20 14:00:22 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 7.b scrub ok
Jan 20 14:00:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:00:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:00:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:00:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:00:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:00:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:00:22 compute-0 ceph-mon[74360]: pgmap v228: 321 pgs: 321 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 8.1 KiB/s rd, 213 B/s wr, 14 op/s; 45 B/s, 1 objects/s recovering
Jan 20 14:00:22 compute-0 ceph-mon[74360]: 4.d scrub starts
Jan 20 14:00:22 compute-0 ceph-mon[74360]: 4.d scrub ok
Jan 20 14:00:22 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Jan 20 14:00:22 compute-0 ceph-mon[74360]: osdmap e102: 3 total, 3 up, 3 in
Jan 20 14:00:22 compute-0 ceph-mon[74360]: 4.1 deep-scrub starts
Jan 20 14:00:22 compute-0 ceph-mon[74360]: 4.1 deep-scrub ok
Jan 20 14:00:22 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:00:22 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:00:22 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:00:22.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:00:22 compute-0 sudo[99630]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qnthlmsjegxfgdprorrdbfazlqbgabyd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917622.3403006-129-123651550668986/AnsiballZ_stat.py'
Jan 20 14:00:22 compute-0 sudo[99630]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:00:22 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e102 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:00:22 compute-0 python3.9[99632]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 14:00:22 compute-0 sudo[99630]: pam_unix(sudo:session): session closed for user root
Jan 20 14:00:23 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v230: 321 pgs: 321 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 6.8 KiB/s rd, 179 B/s wr, 12 op/s; 38 B/s, 1 objects/s recovering
Jan 20 14:00:23 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} v 0) v1
Jan 20 14:00:23 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Jan 20 14:00:23 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e102 do_prune osdmap full prune enabled
Jan 20 14:00:23 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Jan 20 14:00:23 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e103 e103: 3 total, 3 up, 3 in
Jan 20 14:00:23 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e103: 3 total, 3 up, 3 in
Jan 20 14:00:23 compute-0 ceph-mon[74360]: 7.b scrub starts
Jan 20 14:00:23 compute-0 ceph-mon[74360]: 7.b scrub ok
Jan 20 14:00:23 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Jan 20 14:00:23 compute-0 sudo[99785]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dhushghaqqrstnksmlpnhvgywgbzdtge ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917623.3088553-162-195614267234649/AnsiballZ_file.py'
Jan 20 14:00:23 compute-0 sudo[99785]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:00:23 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:00:23 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:00:23 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:00:23.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:00:23 compute-0 python3.9[99787]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:00:24 compute-0 sudo[99785]: pam_unix(sudo:session): session closed for user root
Jan 20 14:00:24 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 7.9 scrub starts
Jan 20 14:00:24 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 7.9 scrub ok
Jan 20 14:00:24 compute-0 sudo[99937]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wkwjsevqndqlfkvvawzagackteejxyfm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917624.3143206-189-115330207495211/AnsiballZ_file.py'
Jan 20 14:00:24 compute-0 sudo[99937]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:00:24 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:00:24 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:00:24 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:00:24.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:00:24 compute-0 ceph-mon[74360]: pgmap v230: 321 pgs: 321 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 6.8 KiB/s rd, 179 B/s wr, 12 op/s; 38 B/s, 1 objects/s recovering
Jan 20 14:00:24 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Jan 20 14:00:24 compute-0 ceph-mon[74360]: osdmap e103: 3 total, 3 up, 3 in
Jan 20 14:00:24 compute-0 python3.9[99939]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:00:24 compute-0 sudo[99937]: pam_unix(sudo:session): session closed for user root
Jan 20 14:00:25 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v232: 321 pgs: 321 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 6.2 KiB/s rd, 162 B/s wr, 11 op/s; 34 B/s, 1 objects/s recovering
Jan 20 14:00:25 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} v 0) v1
Jan 20 14:00:25 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Jan 20 14:00:25 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e103 do_prune osdmap full prune enabled
Jan 20 14:00:25 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Jan 20 14:00:25 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e104 e104: 3 total, 3 up, 3 in
Jan 20 14:00:25 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e104: 3 total, 3 up, 3 in
Jan 20 14:00:25 compute-0 ceph-mon[74360]: 7.9 scrub starts
Jan 20 14:00:25 compute-0 ceph-mon[74360]: 7.9 scrub ok
Jan 20 14:00:25 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Jan 20 14:00:25 compute-0 python3.9[100090]: ansible-ansible.builtin.service_facts Invoked
Jan 20 14:00:25 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:00:25 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:00:25 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:00:25.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:00:25 compute-0 network[100107]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 20 14:00:25 compute-0 network[100108]: 'network-scripts' will be removed from distribution in near future.
Jan 20 14:00:25 compute-0 network[100109]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 20 14:00:26 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:00:26 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:00:26 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:00:26.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:00:26 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e104 do_prune osdmap full prune enabled
Jan 20 14:00:26 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e105 e105: 3 total, 3 up, 3 in
Jan 20 14:00:26 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e105: 3 total, 3 up, 3 in
Jan 20 14:00:26 compute-0 ceph-mon[74360]: pgmap v232: 321 pgs: 321 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 6.2 KiB/s rd, 162 B/s wr, 11 op/s; 34 B/s, 1 objects/s recovering
Jan 20 14:00:26 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Jan 20 14:00:26 compute-0 ceph-mon[74360]: 3.9 scrub starts
Jan 20 14:00:26 compute-0 ceph-mon[74360]: osdmap e104: 3 total, 3 up, 3 in
Jan 20 14:00:26 compute-0 ceph-mon[74360]: 3.9 scrub ok
Jan 20 14:00:27 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v235: 321 pgs: 321 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:00:27 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} v 0) v1
Jan 20 14:00:27 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Jan 20 14:00:27 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e105 do_prune osdmap full prune enabled
Jan 20 14:00:27 compute-0 ceph-mon[74360]: osdmap e105: 3 total, 3 up, 3 in
Jan 20 14:00:27 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Jan 20 14:00:27 compute-0 ceph-mon[74360]: 6.5 scrub starts
Jan 20 14:00:27 compute-0 ceph-mon[74360]: 6.5 scrub ok
Jan 20 14:00:27 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Jan 20 14:00:27 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e106 e106: 3 total, 3 up, 3 in
Jan 20 14:00:27 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e106: 3 total, 3 up, 3 in
Jan 20 14:00:27 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:00:27 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:00:27 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:00:27.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:00:27 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:00:27 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 7.8 scrub starts
Jan 20 14:00:27 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 7.8 scrub ok
Jan 20 14:00:28 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:00:28 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:00:28 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:00:28.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:00:28 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e106 do_prune osdmap full prune enabled
Jan 20 14:00:28 compute-0 ceph-mon[74360]: pgmap v235: 321 pgs: 321 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:00:28 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Jan 20 14:00:28 compute-0 ceph-mon[74360]: osdmap e106: 3 total, 3 up, 3 in
Jan 20 14:00:28 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e107 e107: 3 total, 3 up, 3 in
Jan 20 14:00:28 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e107: 3 total, 3 up, 3 in
Jan 20 14:00:29 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v238: 321 pgs: 321 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:00:29 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} v 0) v1
Jan 20 14:00:29 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Jan 20 14:00:29 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e107 do_prune osdmap full prune enabled
Jan 20 14:00:29 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Jan 20 14:00:29 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e108 e108: 3 total, 3 up, 3 in
Jan 20 14:00:29 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e108: 3 total, 3 up, 3 in
Jan 20 14:00:29 compute-0 ceph-mon[74360]: 7.8 scrub starts
Jan 20 14:00:29 compute-0 ceph-mon[74360]: 7.8 scrub ok
Jan 20 14:00:29 compute-0 ceph-mon[74360]: osdmap e107: 3 total, 3 up, 3 in
Jan 20 14:00:29 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Jan 20 14:00:29 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:00:29 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:00:29 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:00:29.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:00:30 compute-0 sudo[100321]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:00:30 compute-0 sudo[100321]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:00:30 compute-0 sudo[100321]: pam_unix(sudo:session): session closed for user root
Jan 20 14:00:30 compute-0 sudo[100370]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:00:30 compute-0 sudo[100370]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:00:30 compute-0 sudo[100370]: pam_unix(sudo:session): session closed for user root
Jan 20 14:00:30 compute-0 python3.9[100419]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:00:30 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:00:30 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:00:30 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:00:30.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:00:30 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e108 do_prune osdmap full prune enabled
Jan 20 14:00:30 compute-0 ceph-mon[74360]: pgmap v238: 321 pgs: 321 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:00:30 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Jan 20 14:00:30 compute-0 ceph-mon[74360]: osdmap e108: 3 total, 3 up, 3 in
Jan 20 14:00:30 compute-0 ceph-mon[74360]: 6.e scrub starts
Jan 20 14:00:30 compute-0 ceph-mon[74360]: 6.e scrub ok
Jan 20 14:00:30 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e109 e109: 3 total, 3 up, 3 in
Jan 20 14:00:30 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e109: 3 total, 3 up, 3 in
Jan 20 14:00:31 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v241: 321 pgs: 1 active+remapped, 320 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 54 B/s, 1 objects/s recovering
Jan 20 14:00:31 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} v 0) v1
Jan 20 14:00:31 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Jan 20 14:00:31 compute-0 python3.9[100571]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 14:00:31 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:00:31 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:00:31 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:00:31.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:00:31 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e109 do_prune osdmap full prune enabled
Jan 20 14:00:31 compute-0 ceph-mon[74360]: osdmap e109: 3 total, 3 up, 3 in
Jan 20 14:00:31 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Jan 20 14:00:31 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Jan 20 14:00:31 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e110 e110: 3 total, 3 up, 3 in
Jan 20 14:00:31 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e110: 3 total, 3 up, 3 in
Jan 20 14:00:32 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:00:32 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:00:32 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:00:32.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:00:32 compute-0 python3.9[100726]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 14:00:32 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e110 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:00:32 compute-0 ceph-mon[74360]: pgmap v241: 321 pgs: 1 active+remapped, 320 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 54 B/s, 1 objects/s recovering
Jan 20 14:00:32 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Jan 20 14:00:32 compute-0 ceph-mon[74360]: osdmap e110: 3 total, 3 up, 3 in
Jan 20 14:00:33 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v243: 321 pgs: 1 active+remapped, 320 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 50 B/s, 1 objects/s recovering
Jan 20 14:00:33 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} v 0) v1
Jan 20 14:00:33 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Jan 20 14:00:33 compute-0 sudo[100883]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sqyasxcapzajdlxpxxqnrgyajfxotbhz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917633.3660629-333-164213548474621/AnsiballZ_setup.py'
Jan 20 14:00:33 compute-0 sudo[100883]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:00:33 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:00:33 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:00:33 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:00:33.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:00:33 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e110 do_prune osdmap full prune enabled
Jan 20 14:00:34 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 7.e deep-scrub starts
Jan 20 14:00:34 compute-0 python3.9[100885]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 20 14:00:34 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 7.e deep-scrub ok
Jan 20 14:00:34 compute-0 sudo[100883]: pam_unix(sudo:session): session closed for user root
Jan 20 14:00:34 compute-0 ceph-mon[74360]: pgmap v243: 321 pgs: 1 active+remapped, 320 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 50 B/s, 1 objects/s recovering
Jan 20 14:00:34 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Jan 20 14:00:34 compute-0 sudo[100967]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kxcigmlpsfjqdxtmsqxngozuwkrsqpvo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917633.3660629-333-164213548474621/AnsiballZ_dnf.py'
Jan 20 14:00:34 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:00:34 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:00:34 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:00:34.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:00:34 compute-0 sudo[100967]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:00:34 compute-0 python3.9[100969]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 20 14:00:35 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Jan 20 14:00:35 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e111 e111: 3 total, 3 up, 3 in
Jan 20 14:00:35 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e111: 3 total, 3 up, 3 in
Jan 20 14:00:35 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 111 pg[9.19( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=78/78 les/c/f=79/79/0 sis=111) [0] r=0 lpr=111 pi=[78,111)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:00:35 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v245: 321 pgs: 321 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 41 B/s, 1 objects/s recovering
Jan 20 14:00:35 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} v 0) v1
Jan 20 14:00:35 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Jan 20 14:00:35 compute-0 ceph-mon[74360]: 7.e deep-scrub starts
Jan 20 14:00:35 compute-0 ceph-mon[74360]: 7.e deep-scrub ok
Jan 20 14:00:35 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Jan 20 14:00:35 compute-0 ceph-mon[74360]: osdmap e111: 3 total, 3 up, 3 in
Jan 20 14:00:35 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Jan 20 14:00:35 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:00:35 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:00:35 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:00:35.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:00:36 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 7.f scrub starts
Jan 20 14:00:36 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 7.f scrub ok
Jan 20 14:00:36 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e111 do_prune osdmap full prune enabled
Jan 20 14:00:36 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Jan 20 14:00:36 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e112 e112: 3 total, 3 up, 3 in
Jan 20 14:00:36 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e112: 3 total, 3 up, 3 in
Jan 20 14:00:36 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 112 pg[9.19( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=78/78 les/c/f=79/79/0 sis=112) [0]/[2] r=-1 lpr=112 pi=[78,112)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 14:00:36 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 112 pg[9.19( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=78/78 les/c/f=79/79/0 sis=112) [0]/[2] r=-1 lpr=112 pi=[78,112)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 20 14:00:36 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 112 pg[9.1a( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=80/80 les/c/f=81/81/0 sis=112) [0] r=0 lpr=112 pi=[80,112)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:00:36 compute-0 ceph-mon[74360]: pgmap v245: 321 pgs: 321 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 41 B/s, 1 objects/s recovering
Jan 20 14:00:36 compute-0 ceph-mon[74360]: 3.a scrub starts
Jan 20 14:00:36 compute-0 ceph-mon[74360]: 3.a scrub ok
Jan 20 14:00:36 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Jan 20 14:00:36 compute-0 ceph-mon[74360]: osdmap e112: 3 total, 3 up, 3 in
Jan 20 14:00:36 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:00:36 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:00:36 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:00:36.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:00:37 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v247: 321 pgs: 321 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:00:37 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0) v1
Jan 20 14:00:37 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Jan 20 14:00:37 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e112 do_prune osdmap full prune enabled
Jan 20 14:00:37 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Jan 20 14:00:37 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e113 e113: 3 total, 3 up, 3 in
Jan 20 14:00:37 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e113: 3 total, 3 up, 3 in
Jan 20 14:00:37 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 113 pg[9.1a( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=80/80 les/c/f=81/81/0 sis=113) [0]/[1] r=-1 lpr=113 pi=[80,113)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 14:00:37 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 113 pg[9.1b( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=64/64 les/c/f=65/65/0 sis=113) [0] r=0 lpr=113 pi=[64,113)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:00:37 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 113 pg[9.1a( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=80/80 les/c/f=81/81/0 sis=113) [0]/[1] r=-1 lpr=113 pi=[80,113)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 20 14:00:37 compute-0 ceph-mon[74360]: 3.1a scrub starts
Jan 20 14:00:37 compute-0 ceph-mon[74360]: 3.1a scrub ok
Jan 20 14:00:37 compute-0 ceph-mon[74360]: 7.f scrub starts
Jan 20 14:00:37 compute-0 ceph-mon[74360]: 7.f scrub ok
Jan 20 14:00:37 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Jan 20 14:00:37 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Jan 20 14:00:37 compute-0 ceph-mon[74360]: osdmap e113: 3 total, 3 up, 3 in
Jan 20 14:00:37 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:00:37 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e113 do_prune osdmap full prune enabled
Jan 20 14:00:37 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:00:37 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:00:37 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:00:37.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:00:37 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e114 e114: 3 total, 3 up, 3 in
Jan 20 14:00:37 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e114: 3 total, 3 up, 3 in
Jan 20 14:00:37 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 114 pg[9.1b( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=64/64 les/c/f=65/65/0 sis=114) [0]/[2] r=-1 lpr=114 pi=[64,114)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 14:00:37 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 114 pg[9.1b( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=64/64 les/c/f=65/65/0 sis=114) [0]/[2] r=-1 lpr=114 pi=[64,114)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 20 14:00:37 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 114 pg[9.19( v 51'1000 (0'0,51'1000] local-lis/les=0/0 n=5 ec=54/45 lis/c=112/78 les/c/f=113/79/0 sis=114) [0] r=0 lpr=114 pi=[78,114)/1 luod=0'0 crt=51'1000 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 14:00:37 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 114 pg[9.19( v 51'1000 (0'0,51'1000] local-lis/les=0/0 n=5 ec=54/45 lis/c=112/78 les/c/f=113/79/0 sis=114) [0] r=0 lpr=114 pi=[78,114)/1 crt=51'1000 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:00:38 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 7.2 scrub starts
Jan 20 14:00:38 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 7.2 scrub ok
Jan 20 14:00:38 compute-0 ceph-mon[74360]: pgmap v247: 321 pgs: 321 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:00:38 compute-0 ceph-mon[74360]: osdmap e114: 3 total, 3 up, 3 in
Jan 20 14:00:38 compute-0 sudo[101035]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:00:38 compute-0 sudo[101035]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:00:38 compute-0 sudo[101035]: pam_unix(sudo:session): session closed for user root
Jan 20 14:00:38 compute-0 sudo[101060]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:00:38 compute-0 sudo[101060]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:00:38 compute-0 sudo[101060]: pam_unix(sudo:session): session closed for user root
Jan 20 14:00:38 compute-0 sudo[101085]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:00:38 compute-0 sudo[101085]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:00:38 compute-0 sudo[101085]: pam_unix(sudo:session): session closed for user root
Jan 20 14:00:38 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:00:38 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:00:38 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:00:38.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:00:38 compute-0 sudo[101110]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Jan 20 14:00:38 compute-0 sudo[101110]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:00:38 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e114 do_prune osdmap full prune enabled
Jan 20 14:00:38 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e115 e115: 3 total, 3 up, 3 in
Jan 20 14:00:38 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 115 pg[9.1a( v 51'1000 (0'0,51'1000] local-lis/les=0/0 n=5 ec=54/45 lis/c=113/80 les/c/f=114/81/0 sis=115) [0] r=0 lpr=115 pi=[80,115)/1 luod=0'0 crt=51'1000 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 14:00:38 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e115: 3 total, 3 up, 3 in
Jan 20 14:00:38 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 115 pg[9.1a( v 51'1000 (0'0,51'1000] local-lis/les=0/0 n=5 ec=54/45 lis/c=113/80 les/c/f=114/81/0 sis=115) [0] r=0 lpr=115 pi=[80,115)/1 crt=51'1000 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:00:38 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 115 pg[9.19( v 51'1000 (0'0,51'1000] local-lis/les=114/115 n=5 ec=54/45 lis/c=112/78 les/c/f=113/79/0 sis=114) [0] r=0 lpr=114 pi=[78,114)/1 crt=51'1000 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:00:39 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v251: 321 pgs: 1 peering, 320 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 27 B/s, 1 objects/s recovering
Jan 20 14:00:39 compute-0 ceph-mon[74360]: 7.2 scrub starts
Jan 20 14:00:39 compute-0 ceph-mon[74360]: 7.2 scrub ok
Jan 20 14:00:39 compute-0 ceph-mon[74360]: osdmap e115: 3 total, 3 up, 3 in
Jan 20 14:00:39 compute-0 podman[101209]: 2026-01-20 14:00:39.441220007 +0000 UTC m=+0.077353811 container exec a602f19ce9ef2d4922e558036fcbd51fac25abd19d28d7df857e5fbe8f959ba3 (image=quay.io/ceph/ceph:v18, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:00:39 compute-0 podman[101209]: 2026-01-20 14:00:39.560058726 +0000 UTC m=+0.196192560 container exec_died a602f19ce9ef2d4922e558036fcbd51fac25abd19d28d7df857e5fbe8f959ba3 (image=quay.io/ceph/ceph:v18, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 20 14:00:39 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:00:39 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:00:39 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:00:39.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:00:39 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e115 do_prune osdmap full prune enabled
Jan 20 14:00:39 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e116 e116: 3 total, 3 up, 3 in
Jan 20 14:00:39 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e116: 3 total, 3 up, 3 in
Jan 20 14:00:39 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 116 pg[9.1b( v 51'1000 (0'0,51'1000] local-lis/les=0/0 n=5 ec=54/45 lis/c=114/64 les/c/f=115/65/0 sis=116) [0] r=0 lpr=116 pi=[64,116)/1 luod=0'0 crt=51'1000 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 14:00:39 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 116 pg[9.1b( v 51'1000 (0'0,51'1000] local-lis/les=0/0 n=5 ec=54/45 lis/c=114/64 les/c/f=115/65/0 sis=116) [0] r=0 lpr=116 pi=[64,116)/1 crt=51'1000 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:00:39 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 116 pg[9.1a( v 51'1000 (0'0,51'1000] local-lis/les=115/116 n=5 ec=54/45 lis/c=113/80 les/c/f=114/81/0 sis=115) [0] r=0 lpr=115 pi=[80,115)/1 crt=51'1000 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:00:40 compute-0 podman[101370]: 2026-01-20 14:00:40.189011951 +0000 UTC m=+0.055593729 container exec a2973a514c852ff316e6ad2ff84585210b4ad01d86cdf2de06f634d9390a88e8 (image=quay.io/ceph/haproxy:2.3, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-haproxy-rgw-default-compute-0-nqkboe)
Jan 20 14:00:40 compute-0 podman[101370]: 2026-01-20 14:00:40.204685335 +0000 UTC m=+0.071267113 container exec_died a2973a514c852ff316e6ad2ff84585210b4ad01d86cdf2de06f634d9390a88e8 (image=quay.io/ceph/haproxy:2.3, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-haproxy-rgw-default-compute-0-nqkboe)
Jan 20 14:00:40 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 20 14:00:40 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:00:40 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 20 14:00:40 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:00:40 compute-0 ceph-mon[74360]: pgmap v251: 321 pgs: 1 peering, 320 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 27 B/s, 1 objects/s recovering
Jan 20 14:00:40 compute-0 ceph-mon[74360]: osdmap e116: 3 total, 3 up, 3 in
Jan 20 14:00:40 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:00:40 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:00:40 compute-0 podman[101437]: 2026-01-20 14:00:40.444487102 +0000 UTC m=+0.051523567 container exec 216d71b5dad97102f8a0d90f660e553dbff152f4aa28ae453da231535e09b878 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-keepalived-rgw-default-compute-0-gcjsxe, io.openshift.expose-services=, release=1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, version=2.2.4, architecture=x86_64, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.buildah.version=1.28.2, io.k8s.display-name=Keepalived on RHEL 9, name=keepalived, description=keepalived for Ceph, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.tags=Ceph keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Jan 20 14:00:40 compute-0 podman[101437]: 2026-01-20 14:00:40.481624029 +0000 UTC m=+0.088660474 container exec_died 216d71b5dad97102f8a0d90f660e553dbff152f4aa28ae453da231535e09b878 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-keepalived-rgw-default-compute-0-gcjsxe, vcs-type=git, io.buildah.version=1.28.2, name=keepalived, io.openshift.tags=Ceph keepalived, architecture=x86_64, distribution-scope=public, build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, description=keepalived for Ceph, summary=Provides keepalived on RHEL 9 for Ceph., version=2.2.4, io.k8s.display-name=Keepalived on RHEL 9, release=1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vendor=Red Hat, Inc., com.redhat.component=keepalived-container)
Jan 20 14:00:40 compute-0 sudo[101110]: pam_unix(sudo:session): session closed for user root
Jan 20 14:00:40 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 14:00:40 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:00:40 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 14:00:40 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:00:40 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 20 14:00:40 compute-0 sudo[101468]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:00:40 compute-0 sudo[101468]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:00:40 compute-0 sudo[101468]: pam_unix(sudo:session): session closed for user root
Jan 20 14:00:40 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:00:40 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 20 14:00:40 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:00:40 compute-0 sudo[101493]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:00:40 compute-0 sudo[101493]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:00:40 compute-0 sudo[101493]: pam_unix(sudo:session): session closed for user root
Jan 20 14:00:40 compute-0 sudo[101518]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:00:40 compute-0 sudo[101518]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:00:40 compute-0 sudo[101518]: pam_unix(sudo:session): session closed for user root
Jan 20 14:00:40 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:00:40 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:00:40 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:00:40.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:00:40 compute-0 sudo[101543]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 14:00:40 compute-0 sudo[101543]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:00:40 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e116 do_prune osdmap full prune enabled
Jan 20 14:00:40 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e117 e117: 3 total, 3 up, 3 in
Jan 20 14:00:40 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e117: 3 total, 3 up, 3 in
Jan 20 14:00:41 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 117 pg[9.1b( v 51'1000 (0'0,51'1000] local-lis/les=116/117 n=5 ec=54/45 lis/c=114/64 les/c/f=115/65/0 sis=116) [0] r=0 lpr=116 pi=[64,116)/1 crt=51'1000 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:00:41 compute-0 sudo[101543]: pam_unix(sudo:session): session closed for user root
Jan 20 14:00:41 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v254: 321 pgs: 1 active+remapped, 1 peering, 319 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 55 B/s, 2 objects/s recovering
Jan 20 14:00:41 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 14:00:41 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:00:41 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 20 14:00:41 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:00:41 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 20 14:00:41 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:00:41 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev f9066e00-9d16-4fb5-8a2f-5fb52a5439f6 does not exist
Jan 20 14:00:41 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 84cfcc03-efe6-4b69-a240-e129cc5d9e95 does not exist
Jan 20 14:00:41 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 90c19aee-fb96-4999-911d-d01e8f62f962 does not exist
Jan 20 14:00:41 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 20 14:00:41 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:00:41 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 20 14:00:41 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:00:41 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 14:00:41 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:00:41 compute-0 sudo[101599]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:00:41 compute-0 sudo[101599]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:00:41 compute-0 sudo[101599]: pam_unix(sudo:session): session closed for user root
Jan 20 14:00:41 compute-0 sudo[101624]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:00:41 compute-0 sudo[101624]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:00:41 compute-0 sudo[101624]: pam_unix(sudo:session): session closed for user root
Jan 20 14:00:41 compute-0 sudo[101649]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:00:41 compute-0 sudo[101649]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:00:41 compute-0 sudo[101649]: pam_unix(sudo:session): session closed for user root
Jan 20 14:00:41 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:00:41 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:00:41 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:00:41 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:00:41 compute-0 ceph-mon[74360]: osdmap e117: 3 total, 3 up, 3 in
Jan 20 14:00:41 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:00:41 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:00:41 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:00:41 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:00:41 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:00:41 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:00:41 compute-0 sudo[101674]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 14:00:41 compute-0 sudo[101674]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:00:41 compute-0 podman[101741]: 2026-01-20 14:00:41.917538006 +0000 UTC m=+0.054020416 container create 621b273d84a4dfafa1189ce837e75e327f68276b960a7db971850d829e95e151 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_mirzakhani, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 20 14:00:41 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:00:41 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:00:41 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:00:41.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:00:41 compute-0 systemd[1]: Started libpod-conmon-621b273d84a4dfafa1189ce837e75e327f68276b960a7db971850d829e95e151.scope.
Jan 20 14:00:41 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:00:41 compute-0 podman[101741]: 2026-01-20 14:00:41.894153479 +0000 UTC m=+0.030635869 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:00:41 compute-0 podman[101741]: 2026-01-20 14:00:41.992971374 +0000 UTC m=+0.129453764 container init 621b273d84a4dfafa1189ce837e75e327f68276b960a7db971850d829e95e151 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_mirzakhani, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 20 14:00:42 compute-0 podman[101741]: 2026-01-20 14:00:42.005296274 +0000 UTC m=+0.141778684 container start 621b273d84a4dfafa1189ce837e75e327f68276b960a7db971850d829e95e151 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_mirzakhani, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:00:42 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 2.6 scrub starts
Jan 20 14:00:42 compute-0 beautiful_mirzakhani[101758]: 167 167
Jan 20 14:00:42 compute-0 podman[101741]: 2026-01-20 14:00:42.010518789 +0000 UTC m=+0.147001179 container attach 621b273d84a4dfafa1189ce837e75e327f68276b960a7db971850d829e95e151 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_mirzakhani, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 14:00:42 compute-0 systemd[1]: libpod-621b273d84a4dfafa1189ce837e75e327f68276b960a7db971850d829e95e151.scope: Deactivated successfully.
Jan 20 14:00:42 compute-0 podman[101741]: 2026-01-20 14:00:42.013622275 +0000 UTC m=+0.150104685 container died 621b273d84a4dfafa1189ce837e75e327f68276b960a7db971850d829e95e151 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_mirzakhani, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 20 14:00:42 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 2.6 scrub ok
Jan 20 14:00:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-071fc09810b77f5a4f437df91d6ada3d2e9f2baba2ba1cce98fa05430707d3c7-merged.mount: Deactivated successfully.
Jan 20 14:00:42 compute-0 podman[101741]: 2026-01-20 14:00:42.067504776 +0000 UTC m=+0.203987166 container remove 621b273d84a4dfafa1189ce837e75e327f68276b960a7db971850d829e95e151 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_mirzakhani, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 20 14:00:42 compute-0 systemd[1]: libpod-conmon-621b273d84a4dfafa1189ce837e75e327f68276b960a7db971850d829e95e151.scope: Deactivated successfully.
Jan 20 14:00:42 compute-0 podman[101782]: 2026-01-20 14:00:42.32312679 +0000 UTC m=+0.082657949 container create 51de2de9c7027f7e77deb6425a5860edd2b0933bfbf55cea1c79b31b3461d7bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_bose, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 14:00:42 compute-0 systemd[1]: Started libpod-conmon-51de2de9c7027f7e77deb6425a5860edd2b0933bfbf55cea1c79b31b3461d7bd.scope.
Jan 20 14:00:42 compute-0 podman[101782]: 2026-01-20 14:00:42.289731486 +0000 UTC m=+0.049262745 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:00:42 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:00:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68a9e97c4ac147af37415470368c1adbb443913819f700157fdffadf9825b7c4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:00:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68a9e97c4ac147af37415470368c1adbb443913819f700157fdffadf9825b7c4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:00:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68a9e97c4ac147af37415470368c1adbb443913819f700157fdffadf9825b7c4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:00:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68a9e97c4ac147af37415470368c1adbb443913819f700157fdffadf9825b7c4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:00:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68a9e97c4ac147af37415470368c1adbb443913819f700157fdffadf9825b7c4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 14:00:42 compute-0 podman[101782]: 2026-01-20 14:00:42.432821586 +0000 UTC m=+0.192352795 container init 51de2de9c7027f7e77deb6425a5860edd2b0933bfbf55cea1c79b31b3461d7bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_bose, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 14:00:42 compute-0 podman[101782]: 2026-01-20 14:00:42.44525921 +0000 UTC m=+0.204790369 container start 51de2de9c7027f7e77deb6425a5860edd2b0933bfbf55cea1c79b31b3461d7bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_bose, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 20 14:00:42 compute-0 podman[101782]: 2026-01-20 14:00:42.449322473 +0000 UTC m=+0.208853672 container attach 51de2de9c7027f7e77deb6425a5860edd2b0933bfbf55cea1c79b31b3461d7bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_bose, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 20 14:00:42 compute-0 ceph-mon[74360]: pgmap v254: 321 pgs: 1 active+remapped, 1 peering, 319 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 55 B/s, 2 objects/s recovering
Jan 20 14:00:42 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:00:42 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:00:42 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:00:42.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:00:42 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:00:43 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 7.4 scrub starts
Jan 20 14:00:43 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 7.4 scrub ok
Jan 20 14:00:43 compute-0 vigorous_bose[101798]: --> passed data devices: 0 physical, 1 LVM
Jan 20 14:00:43 compute-0 vigorous_bose[101798]: --> relative data size: 1.0
Jan 20 14:00:43 compute-0 vigorous_bose[101798]: --> All data devices are unavailable
Jan 20 14:00:43 compute-0 systemd[1]: libpod-51de2de9c7027f7e77deb6425a5860edd2b0933bfbf55cea1c79b31b3461d7bd.scope: Deactivated successfully.
Jan 20 14:00:43 compute-0 podman[101782]: 2026-01-20 14:00:43.238577834 +0000 UTC m=+0.998108993 container died 51de2de9c7027f7e77deb6425a5860edd2b0933bfbf55cea1c79b31b3461d7bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_bose, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:00:43 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v255: 321 pgs: 1 active+remapped, 1 peering, 319 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 41 B/s, 1 objects/s recovering
Jan 20 14:00:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-68a9e97c4ac147af37415470368c1adbb443913819f700157fdffadf9825b7c4-merged.mount: Deactivated successfully.
Jan 20 14:00:43 compute-0 podman[101782]: 2026-01-20 14:00:43.287301612 +0000 UTC m=+1.046832761 container remove 51de2de9c7027f7e77deb6425a5860edd2b0933bfbf55cea1c79b31b3461d7bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_bose, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:00:43 compute-0 systemd[1]: libpod-conmon-51de2de9c7027f7e77deb6425a5860edd2b0933bfbf55cea1c79b31b3461d7bd.scope: Deactivated successfully.
Jan 20 14:00:43 compute-0 sudo[101674]: pam_unix(sudo:session): session closed for user root
Jan 20 14:00:43 compute-0 sudo[101829]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:00:43 compute-0 sudo[101829]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:00:43 compute-0 sudo[101829]: pam_unix(sudo:session): session closed for user root
Jan 20 14:00:43 compute-0 sudo[101854]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:00:43 compute-0 sudo[101854]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:00:43 compute-0 sudo[101854]: pam_unix(sudo:session): session closed for user root
Jan 20 14:00:43 compute-0 sudo[101879]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:00:43 compute-0 ceph-mon[74360]: 3.1d deep-scrub starts
Jan 20 14:00:43 compute-0 ceph-mon[74360]: 3.1d deep-scrub ok
Jan 20 14:00:43 compute-0 ceph-mon[74360]: 2.6 scrub starts
Jan 20 14:00:43 compute-0 ceph-mon[74360]: 2.6 scrub ok
Jan 20 14:00:43 compute-0 sudo[101879]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:00:43 compute-0 sudo[101879]: pam_unix(sudo:session): session closed for user root
Jan 20 14:00:43 compute-0 sudo[101904]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- lvm list --format json
Jan 20 14:00:43 compute-0 sudo[101904]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:00:43 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:00:43 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:00:43 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:00:43.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:00:43 compute-0 podman[101970]: 2026-01-20 14:00:43.968190044 +0000 UTC m=+0.045423477 container create 10d83ec64c3ce2bfca16d3adc0d035c1403c6cf45e92c0c29ee394a89a4399ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_diffie, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:00:43 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 7.6 deep-scrub starts
Jan 20 14:00:44 compute-0 systemd[1]: Started libpod-conmon-10d83ec64c3ce2bfca16d3adc0d035c1403c6cf45e92c0c29ee394a89a4399ad.scope.
Jan 20 14:00:44 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 7.6 deep-scrub ok
Jan 20 14:00:44 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:00:44 compute-0 podman[101970]: 2026-01-20 14:00:43.954575508 +0000 UTC m=+0.031808961 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:00:44 compute-0 podman[101970]: 2026-01-20 14:00:44.057435134 +0000 UTC m=+0.134668587 container init 10d83ec64c3ce2bfca16d3adc0d035c1403c6cf45e92c0c29ee394a89a4399ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_diffie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:00:44 compute-0 podman[101970]: 2026-01-20 14:00:44.068368557 +0000 UTC m=+0.145601990 container start 10d83ec64c3ce2bfca16d3adc0d035c1403c6cf45e92c0c29ee394a89a4399ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_diffie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 14:00:44 compute-0 podman[101970]: 2026-01-20 14:00:44.071263477 +0000 UTC m=+0.148496930 container attach 10d83ec64c3ce2bfca16d3adc0d035c1403c6cf45e92c0c29ee394a89a4399ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_diffie, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:00:44 compute-0 blissful_diffie[101987]: 167 167
Jan 20 14:00:44 compute-0 systemd[1]: libpod-10d83ec64c3ce2bfca16d3adc0d035c1403c6cf45e92c0c29ee394a89a4399ad.scope: Deactivated successfully.
Jan 20 14:00:44 compute-0 podman[101970]: 2026-01-20 14:00:44.074451765 +0000 UTC m=+0.151685268 container died 10d83ec64c3ce2bfca16d3adc0d035c1403c6cf45e92c0c29ee394a89a4399ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_diffie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 20 14:00:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-e6254b82b66625083cb1d773b8c395bf0280cfe14da06088ffc99c2eef07cc1c-merged.mount: Deactivated successfully.
Jan 20 14:00:44 compute-0 podman[101970]: 2026-01-20 14:00:44.113755443 +0000 UTC m=+0.190988916 container remove 10d83ec64c3ce2bfca16d3adc0d035c1403c6cf45e92c0c29ee394a89a4399ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_diffie, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:00:44 compute-0 systemd[1]: libpod-conmon-10d83ec64c3ce2bfca16d3adc0d035c1403c6cf45e92c0c29ee394a89a4399ad.scope: Deactivated successfully.
Jan 20 14:00:44 compute-0 podman[102012]: 2026-01-20 14:00:44.29357986 +0000 UTC m=+0.043424713 container create 4549ee90eb2e576989858ecc79d842f7e97d71a654724053e894b76b9116c995 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_curie, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 20 14:00:44 compute-0 systemd[1]: Started libpod-conmon-4549ee90eb2e576989858ecc79d842f7e97d71a654724053e894b76b9116c995.scope.
Jan 20 14:00:44 compute-0 podman[102012]: 2026-01-20 14:00:44.274792969 +0000 UTC m=+0.024637902 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:00:44 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:00:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36775828aca60768777bac0cdddbf1a1654af418efc4cb6e81f9ce02b9b06c11/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:00:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36775828aca60768777bac0cdddbf1a1654af418efc4cb6e81f9ce02b9b06c11/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:00:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36775828aca60768777bac0cdddbf1a1654af418efc4cb6e81f9ce02b9b06c11/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:00:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36775828aca60768777bac0cdddbf1a1654af418efc4cb6e81f9ce02b9b06c11/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:00:44 compute-0 podman[102012]: 2026-01-20 14:00:44.404327594 +0000 UTC m=+0.154172537 container init 4549ee90eb2e576989858ecc79d842f7e97d71a654724053e894b76b9116c995 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_curie, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Jan 20 14:00:44 compute-0 podman[102012]: 2026-01-20 14:00:44.409970751 +0000 UTC m=+0.159815614 container start 4549ee90eb2e576989858ecc79d842f7e97d71a654724053e894b76b9116c995 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_curie, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:00:44 compute-0 podman[102012]: 2026-01-20 14:00:44.414045873 +0000 UTC m=+0.163890826 container attach 4549ee90eb2e576989858ecc79d842f7e97d71a654724053e894b76b9116c995 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_curie, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 20 14:00:44 compute-0 ceph-mon[74360]: 7.4 scrub starts
Jan 20 14:00:44 compute-0 ceph-mon[74360]: 7.4 scrub ok
Jan 20 14:00:44 compute-0 ceph-mon[74360]: pgmap v255: 321 pgs: 1 active+remapped, 1 peering, 319 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 41 B/s, 1 objects/s recovering
Jan 20 14:00:44 compute-0 ceph-mon[74360]: 6.8 scrub starts
Jan 20 14:00:44 compute-0 ceph-mon[74360]: 6.8 scrub ok
Jan 20 14:00:44 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:00:44 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:00:44 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:00:44.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:00:45 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 2.9 scrub starts
Jan 20 14:00:45 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 2.9 scrub ok
Jan 20 14:00:45 compute-0 agitated_curie[102028]: {
Jan 20 14:00:45 compute-0 agitated_curie[102028]:     "0": [
Jan 20 14:00:45 compute-0 agitated_curie[102028]:         {
Jan 20 14:00:45 compute-0 agitated_curie[102028]:             "devices": [
Jan 20 14:00:45 compute-0 agitated_curie[102028]:                 "/dev/loop3"
Jan 20 14:00:45 compute-0 agitated_curie[102028]:             ],
Jan 20 14:00:45 compute-0 agitated_curie[102028]:             "lv_name": "ceph_lv0",
Jan 20 14:00:45 compute-0 agitated_curie[102028]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:00:45 compute-0 agitated_curie[102028]:             "lv_size": "7511998464",
Jan 20 14:00:45 compute-0 agitated_curie[102028]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e399cf45-e6b6-5393-99f1-75c601d3f188,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afc3740a-ccee-46f8-b343-22648cc89376,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 20 14:00:45 compute-0 agitated_curie[102028]:             "lv_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 14:00:45 compute-0 agitated_curie[102028]:             "name": "ceph_lv0",
Jan 20 14:00:45 compute-0 agitated_curie[102028]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:00:45 compute-0 agitated_curie[102028]:             "tags": {
Jan 20 14:00:45 compute-0 agitated_curie[102028]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:00:45 compute-0 agitated_curie[102028]:                 "ceph.block_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 14:00:45 compute-0 agitated_curie[102028]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 14:00:45 compute-0 agitated_curie[102028]:                 "ceph.cluster_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 14:00:45 compute-0 agitated_curie[102028]:                 "ceph.cluster_name": "ceph",
Jan 20 14:00:45 compute-0 agitated_curie[102028]:                 "ceph.crush_device_class": "",
Jan 20 14:00:45 compute-0 agitated_curie[102028]:                 "ceph.encrypted": "0",
Jan 20 14:00:45 compute-0 agitated_curie[102028]:                 "ceph.osd_fsid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 14:00:45 compute-0 agitated_curie[102028]:                 "ceph.osd_id": "0",
Jan 20 14:00:45 compute-0 agitated_curie[102028]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 14:00:45 compute-0 agitated_curie[102028]:                 "ceph.type": "block",
Jan 20 14:00:45 compute-0 agitated_curie[102028]:                 "ceph.vdo": "0"
Jan 20 14:00:45 compute-0 agitated_curie[102028]:             },
Jan 20 14:00:45 compute-0 agitated_curie[102028]:             "type": "block",
Jan 20 14:00:45 compute-0 agitated_curie[102028]:             "vg_name": "ceph_vg0"
Jan 20 14:00:45 compute-0 agitated_curie[102028]:         }
Jan 20 14:00:45 compute-0 agitated_curie[102028]:     ]
Jan 20 14:00:45 compute-0 agitated_curie[102028]: }
Jan 20 14:00:45 compute-0 systemd[1]: libpod-4549ee90eb2e576989858ecc79d842f7e97d71a654724053e894b76b9116c995.scope: Deactivated successfully.
Jan 20 14:00:45 compute-0 podman[102012]: 2026-01-20 14:00:45.130365036 +0000 UTC m=+0.880209929 container died 4549ee90eb2e576989858ecc79d842f7e97d71a654724053e894b76b9116c995 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_curie, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 20 14:00:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-36775828aca60768777bac0cdddbf1a1654af418efc4cb6e81f9ce02b9b06c11-merged.mount: Deactivated successfully.
Jan 20 14:00:45 compute-0 podman[102012]: 2026-01-20 14:00:45.208868318 +0000 UTC m=+0.958713201 container remove 4549ee90eb2e576989858ecc79d842f7e97d71a654724053e894b76b9116c995 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_curie, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:00:45 compute-0 systemd[1]: libpod-conmon-4549ee90eb2e576989858ecc79d842f7e97d71a654724053e894b76b9116c995.scope: Deactivated successfully.
Jan 20 14:00:45 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v256: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 35 B/s, 0 objects/s recovering
Jan 20 14:00:45 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} v 0) v1
Jan 20 14:00:45 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Jan 20 14:00:45 compute-0 sudo[101904]: pam_unix(sudo:session): session closed for user root
Jan 20 14:00:45 compute-0 sudo[102048]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:00:45 compute-0 sudo[102048]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:00:45 compute-0 sudo[102048]: pam_unix(sudo:session): session closed for user root
Jan 20 14:00:45 compute-0 sudo[102073]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:00:45 compute-0 sudo[102073]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:00:45 compute-0 sudo[102073]: pam_unix(sudo:session): session closed for user root
Jan 20 14:00:45 compute-0 sudo[102098]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:00:45 compute-0 sudo[102098]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:00:45 compute-0 sudo[102098]: pam_unix(sudo:session): session closed for user root
Jan 20 14:00:45 compute-0 sudo[102123]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- raw list --format json
Jan 20 14:00:45 compute-0 sudo[102123]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:00:45 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e117 do_prune osdmap full prune enabled
Jan 20 14:00:45 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Jan 20 14:00:45 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e118 e118: 3 total, 3 up, 3 in
Jan 20 14:00:45 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e118: 3 total, 3 up, 3 in
Jan 20 14:00:45 compute-0 ceph-mon[74360]: 7.6 deep-scrub starts
Jan 20 14:00:45 compute-0 ceph-mon[74360]: 7.6 deep-scrub ok
Jan 20 14:00:45 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Jan 20 14:00:45 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:00:45 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:00:45 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:00:45.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:00:45 compute-0 podman[102187]: 2026-01-20 14:00:45.983990659 +0000 UTC m=+0.062063258 container create 9803d01132e63a9d046b1e4c4ed6de73049083ee5efe455632a7c8043647f219 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_goodall, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 20 14:00:46 compute-0 systemd[1]: Started libpod-conmon-9803d01132e63a9d046b1e4c4ed6de73049083ee5efe455632a7c8043647f219.scope.
Jan 20 14:00:46 compute-0 podman[102187]: 2026-01-20 14:00:45.952362543 +0000 UTC m=+0.030435212 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:00:46 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:00:46 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 7.18 scrub starts
Jan 20 14:00:46 compute-0 podman[102187]: 2026-01-20 14:00:46.070447662 +0000 UTC m=+0.148520271 container init 9803d01132e63a9d046b1e4c4ed6de73049083ee5efe455632a7c8043647f219 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_goodall, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 20 14:00:46 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 7.18 scrub ok
Jan 20 14:00:46 compute-0 podman[102187]: 2026-01-20 14:00:46.078897956 +0000 UTC m=+0.156970545 container start 9803d01132e63a9d046b1e4c4ed6de73049083ee5efe455632a7c8043647f219 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_goodall, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:00:46 compute-0 podman[102187]: 2026-01-20 14:00:46.081685203 +0000 UTC m=+0.159757782 container attach 9803d01132e63a9d046b1e4c4ed6de73049083ee5efe455632a7c8043647f219 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_goodall, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:00:46 compute-0 systemd[1]: libpod-9803d01132e63a9d046b1e4c4ed6de73049083ee5efe455632a7c8043647f219.scope: Deactivated successfully.
Jan 20 14:00:46 compute-0 crazy_goodall[102203]: 167 167
Jan 20 14:00:46 compute-0 conmon[102203]: conmon 9803d01132e63a9d046b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9803d01132e63a9d046b1e4c4ed6de73049083ee5efe455632a7c8043647f219.scope/container/memory.events
Jan 20 14:00:46 compute-0 podman[102187]: 2026-01-20 14:00:46.08627926 +0000 UTC m=+0.164351869 container died 9803d01132e63a9d046b1e4c4ed6de73049083ee5efe455632a7c8043647f219 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_goodall, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:00:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-08984647db72b0fe7c1b795a3e636a45ad23f5a704225e91244aa2ecb2cec9f5-merged.mount: Deactivated successfully.
Jan 20 14:00:46 compute-0 podman[102187]: 2026-01-20 14:00:46.12060647 +0000 UTC m=+0.198679069 container remove 9803d01132e63a9d046b1e4c4ed6de73049083ee5efe455632a7c8043647f219 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_goodall, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 20 14:00:46 compute-0 systemd[1]: libpod-conmon-9803d01132e63a9d046b1e4c4ed6de73049083ee5efe455632a7c8043647f219.scope: Deactivated successfully.
Jan 20 14:00:46 compute-0 podman[102227]: 2026-01-20 14:00:46.298443772 +0000 UTC m=+0.061006710 container create 618f79b09ae62f1a8c018ff8f8358a24a1815d64c77f6cc4fcb5d8251f28287f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_nash, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 20 14:00:46 compute-0 systemd[1]: Started libpod-conmon-618f79b09ae62f1a8c018ff8f8358a24a1815d64c77f6cc4fcb5d8251f28287f.scope.
Jan 20 14:00:46 compute-0 podman[102227]: 2026-01-20 14:00:46.268968596 +0000 UTC m=+0.031531594 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:00:46 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:00:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8584a4457739e9dc77137722e515c5e7f591db5b8ab848b8650f5d25e94e3dea/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:00:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8584a4457739e9dc77137722e515c5e7f591db5b8ab848b8650f5d25e94e3dea/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:00:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8584a4457739e9dc77137722e515c5e7f591db5b8ab848b8650f5d25e94e3dea/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:00:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8584a4457739e9dc77137722e515c5e7f591db5b8ab848b8650f5d25e94e3dea/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:00:46 compute-0 podman[102227]: 2026-01-20 14:00:46.425525808 +0000 UTC m=+0.188088746 container init 618f79b09ae62f1a8c018ff8f8358a24a1815d64c77f6cc4fcb5d8251f28287f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_nash, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 20 14:00:46 compute-0 podman[102227]: 2026-01-20 14:00:46.439679149 +0000 UTC m=+0.202242087 container start 618f79b09ae62f1a8c018ff8f8358a24a1815d64c77f6cc4fcb5d8251f28287f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_nash, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:00:46 compute-0 podman[102227]: 2026-01-20 14:00:46.444253316 +0000 UTC m=+0.206816224 container attach 618f79b09ae62f1a8c018ff8f8358a24a1815d64c77f6cc4fcb5d8251f28287f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_nash, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 20 14:00:46 compute-0 ceph-mon[74360]: 2.9 scrub starts
Jan 20 14:00:46 compute-0 ceph-mon[74360]: 2.9 scrub ok
Jan 20 14:00:46 compute-0 ceph-mon[74360]: pgmap v256: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 35 B/s, 0 objects/s recovering
Jan 20 14:00:46 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Jan 20 14:00:46 compute-0 ceph-mon[74360]: osdmap e118: 3 total, 3 up, 3 in
Jan 20 14:00:46 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:00:46 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:00:46 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:00:46.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:00:47 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v258: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 15 B/s, 0 objects/s recovering
Jan 20 14:00:47 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} v 0) v1
Jan 20 14:00:47 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Jan 20 14:00:47 compute-0 hungry_nash[102244]: {
Jan 20 14:00:47 compute-0 hungry_nash[102244]:     "afc3740a-ccee-46f8-b343-22648cc89376": {
Jan 20 14:00:47 compute-0 hungry_nash[102244]:         "ceph_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 14:00:47 compute-0 hungry_nash[102244]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 20 14:00:47 compute-0 hungry_nash[102244]:         "osd_id": 0,
Jan 20 14:00:47 compute-0 hungry_nash[102244]:         "osd_uuid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 14:00:47 compute-0 hungry_nash[102244]:         "type": "bluestore"
Jan 20 14:00:47 compute-0 hungry_nash[102244]:     }
Jan 20 14:00:47 compute-0 hungry_nash[102244]: }
Jan 20 14:00:47 compute-0 systemd[1]: libpod-618f79b09ae62f1a8c018ff8f8358a24a1815d64c77f6cc4fcb5d8251f28287f.scope: Deactivated successfully.
Jan 20 14:00:47 compute-0 podman[102227]: 2026-01-20 14:00:47.382349127 +0000 UTC m=+1.144912055 container died 618f79b09ae62f1a8c018ff8f8358a24a1815d64c77f6cc4fcb5d8251f28287f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_nash, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 14:00:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-8584a4457739e9dc77137722e515c5e7f591db5b8ab848b8650f5d25e94e3dea-merged.mount: Deactivated successfully.
Jan 20 14:00:47 compute-0 podman[102227]: 2026-01-20 14:00:47.463214824 +0000 UTC m=+1.225777752 container remove 618f79b09ae62f1a8c018ff8f8358a24a1815d64c77f6cc4fcb5d8251f28287f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_nash, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 14:00:47 compute-0 systemd[1]: libpod-conmon-618f79b09ae62f1a8c018ff8f8358a24a1815d64c77f6cc4fcb5d8251f28287f.scope: Deactivated successfully.
Jan 20 14:00:47 compute-0 sudo[102123]: pam_unix(sudo:session): session closed for user root
Jan 20 14:00:47 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 14:00:47 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:00:47 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 14:00:47 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:00:47 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 93779538-caf5-45e9-9414-9d118b50c9ef does not exist
Jan 20 14:00:47 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 4391090c-58fb-403c-97e5-a779931eedf3 does not exist
Jan 20 14:00:47 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 4b0aa443-e4e3-4aad-b64c-239e99978c8b does not exist
Jan 20 14:00:47 compute-0 sudo[102283]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:00:47 compute-0 sudo[102283]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:00:47 compute-0 sudo[102283]: pam_unix(sudo:session): session closed for user root
Jan 20 14:00:47 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e118 do_prune osdmap full prune enabled
Jan 20 14:00:47 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Jan 20 14:00:47 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e119 e119: 3 total, 3 up, 3 in
Jan 20 14:00:47 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e119: 3 total, 3 up, 3 in
Jan 20 14:00:47 compute-0 ceph-mon[74360]: 7.18 scrub starts
Jan 20 14:00:47 compute-0 ceph-mon[74360]: 7.18 scrub ok
Jan 20 14:00:47 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Jan 20 14:00:47 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:00:47 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:00:47 compute-0 ceph-mon[74360]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #18. Immutable memtables: 0.
Jan 20 14:00:47 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:00:47.660652) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 14:00:47 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 18
Jan 20 14:00:47 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768917647660811, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 7393, "num_deletes": 251, "total_data_size": 9487459, "memory_usage": 9727432, "flush_reason": "Manual Compaction"}
Jan 20 14:00:47 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #19: started
Jan 20 14:00:47 compute-0 sudo[102309]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 14:00:47 compute-0 sudo[102309]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:00:47 compute-0 sudo[102309]: pam_unix(sudo:session): session closed for user root
Jan 20 14:00:47 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768917647715149, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 19, "file_size": 7711698, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 136, "largest_seqno": 7520, "table_properties": {"data_size": 7684319, "index_size": 17984, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8325, "raw_key_size": 77081, "raw_average_key_size": 23, "raw_value_size": 7620115, "raw_average_value_size": 2307, "num_data_blocks": 795, "num_entries": 3303, "num_filter_entries": 3303, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768917307, "oldest_key_time": 1768917307, "file_creation_time": 1768917647, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1688fb6f-f397-4aac-a0e8-874ff91d97a7", "db_session_id": "5V3N2TVXYZBCXP55EZHK", "orig_file_number": 19, "seqno_to_time_mapping": "N/A"}}
Jan 20 14:00:47 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 54531 microseconds, and 27209 cpu microseconds.
Jan 20 14:00:47 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:00:47.715198) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #19: 7711698 bytes OK
Jan 20 14:00:47 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:00:47.715236) [db/memtable_list.cc:519] [default] Level-0 commit table #19 started
Jan 20 14:00:47 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:00:47.716975) [db/memtable_list.cc:722] [default] Level-0 commit table #19: memtable #1 done
Jan 20 14:00:47 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:00:47.716994) EVENT_LOG_v1 {"time_micros": 1768917647716990, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [3, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Jan 20 14:00:47 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:00:47.717009) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[3 0 0 0 0 0 0] max score 0.75
Jan 20 14:00:47 compute-0 ceph-mon[74360]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 9455317, prev total WAL file size 9455317, number of live WAL files 2.
Jan 20 14:00:47 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000014.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:00:47 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:00:47.718718) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730030' seq:72057594037927935, type:22 .. '7061786F7300323532' seq:0, type:0; will stop at (end)
Jan 20 14:00:47 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 3@0 files to L6, score -1.00
Jan 20 14:00:47 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [19(7530KB) 13(50KB) 8(1944B)]
Jan 20 14:00:47 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768917647718790, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [19, 13, 8], "score": -1, "input_data_size": 7765020, "oldest_snapshot_seqno": -1}
Jan 20 14:00:47 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #20: 3111 keys, 7722103 bytes, temperature: kUnknown
Jan 20 14:00:47 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768917647762148, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 20, "file_size": 7722103, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7695288, "index_size": 17937, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 7813, "raw_key_size": 74891, "raw_average_key_size": 24, "raw_value_size": 7632969, "raw_average_value_size": 2453, "num_data_blocks": 796, "num_entries": 3111, "num_filter_entries": 3111, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768917305, "oldest_key_time": 0, "file_creation_time": 1768917647, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1688fb6f-f397-4aac-a0e8-874ff91d97a7", "db_session_id": "5V3N2TVXYZBCXP55EZHK", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Jan 20 14:00:47 compute-0 ceph-mon[74360]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 14:00:47 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:00:47.762336) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 3@0 files to L6 => 7722103 bytes
Jan 20 14:00:47 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:00:47.763447) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 178.8 rd, 177.9 wr, level 6, files in(3, 0) out(1 +0 blob) MB in(7.4, 0.0 +0.0 blob) out(7.4 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 3402, records dropped: 291 output_compression: NoCompression
Jan 20 14:00:47 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:00:47.763467) EVENT_LOG_v1 {"time_micros": 1768917647763458, "job": 4, "event": "compaction_finished", "compaction_time_micros": 43418, "compaction_time_cpu_micros": 15103, "output_level": 6, "num_output_files": 1, "total_output_size": 7722103, "num_input_records": 3402, "num_output_records": 3111, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 14:00:47 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000019.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:00:47 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768917647765005, "job": 4, "event": "table_file_deletion", "file_number": 19}
Jan 20 14:00:47 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000013.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:00:47 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768917647765068, "job": 4, "event": "table_file_deletion", "file_number": 13}
Jan 20 14:00:47 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:00:47 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768917647765130, "job": 4, "event": "table_file_deletion", "file_number": 8}
Jan 20 14:00:47 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:00:47.718626) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:00:47 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:00:47 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:00:47 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:00:47.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:00:47 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:00:47 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e119 do_prune osdmap full prune enabled
Jan 20 14:00:47 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e120 e120: 3 total, 3 up, 3 in
Jan 20 14:00:47 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e120: 3 total, 3 up, 3 in
Jan 20 14:00:48 compute-0 sshd-session[102317]: Connection closed by authenticating user root 157.245.78.139 port 45652 [preauth]
Jan 20 14:00:48 compute-0 ceph-mon[74360]: pgmap v258: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 15 B/s, 0 objects/s recovering
Jan 20 14:00:48 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Jan 20 14:00:48 compute-0 ceph-mon[74360]: osdmap e119: 3 total, 3 up, 3 in
Jan 20 14:00:48 compute-0 ceph-mon[74360]: osdmap e120: 3 total, 3 up, 3 in
Jan 20 14:00:48 compute-0 ceph-mon[74360]: 3.f deep-scrub starts
Jan 20 14:00:48 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:00:48 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:00:48 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:00:48.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:00:48 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e120 do_prune osdmap full prune enabled
Jan 20 14:00:48 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e121 e121: 3 total, 3 up, 3 in
Jan 20 14:00:49 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e121: 3 total, 3 up, 3 in
Jan 20 14:00:49 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v262: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:00:49 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} v 0) v1
Jan 20 14:00:49 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Jan 20 14:00:49 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:00:49 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:00:49 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:00:49.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:00:49 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e121 do_prune osdmap full prune enabled
Jan 20 14:00:50 compute-0 ceph-mon[74360]: 3.f deep-scrub ok
Jan 20 14:00:50 compute-0 ceph-mon[74360]: osdmap e121: 3 total, 3 up, 3 in
Jan 20 14:00:50 compute-0 ceph-mon[74360]: pgmap v262: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:00:50 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Jan 20 14:00:50 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Jan 20 14:00:50 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e122 e122: 3 total, 3 up, 3 in
Jan 20 14:00:50 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e122: 3 total, 3 up, 3 in
Jan 20 14:00:50 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 122 pg[9.1e( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=70/70 les/c/f=71/71/0 sis=122) [0] r=0 lpr=122 pi=[70,122)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:00:50 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 7.1e scrub starts
Jan 20 14:00:50 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 7.1e scrub ok
Jan 20 14:00:50 compute-0 sudo[102346]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:00:50 compute-0 sudo[102346]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:00:50 compute-0 sudo[102346]: pam_unix(sudo:session): session closed for user root
Jan 20 14:00:50 compute-0 sudo[102371]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:00:50 compute-0 sudo[102371]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:00:50 compute-0 sudo[102371]: pam_unix(sudo:session): session closed for user root
Jan 20 14:00:50 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:00:50 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:00:50 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:00:50.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:00:51 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e122 do_prune osdmap full prune enabled
Jan 20 14:00:51 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e123 e123: 3 total, 3 up, 3 in
Jan 20 14:00:51 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e123: 3 total, 3 up, 3 in
Jan 20 14:00:51 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 123 pg[9.1e( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=70/70 les/c/f=71/71/0 sis=123) [0]/[1] r=-1 lpr=123 pi=[70,123)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 14:00:51 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 123 pg[9.1e( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=70/70 les/c/f=71/71/0 sis=123) [0]/[1] r=-1 lpr=123 pi=[70,123)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 20 14:00:51 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Jan 20 14:00:51 compute-0 ceph-mon[74360]: osdmap e122: 3 total, 3 up, 3 in
Jan 20 14:00:51 compute-0 ceph-mon[74360]: 7.1e scrub starts
Jan 20 14:00:51 compute-0 ceph-mon[74360]: 7.1e scrub ok
Jan 20 14:00:51 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v265: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:00:51 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} v 0) v1
Jan 20 14:00:51 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 20 14:00:51 compute-0 sshd-session[102282]: Connection closed by authenticating user root 159.223.5.14 port 59302 [preauth]
Jan 20 14:00:51 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:00:51 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:00:51 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:00:51.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:00:52 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e123 do_prune osdmap full prune enabled
Jan 20 14:00:52 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 20 14:00:52 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e124 e124: 3 total, 3 up, 3 in
Jan 20 14:00:52 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e124: 3 total, 3 up, 3 in
Jan 20 14:00:52 compute-0 ceph-mon[74360]: 4.a scrub starts
Jan 20 14:00:52 compute-0 ceph-mon[74360]: 4.a scrub ok
Jan 20 14:00:52 compute-0 ceph-mon[74360]: osdmap e123: 3 total, 3 up, 3 in
Jan 20 14:00:52 compute-0 ceph-mon[74360]: pgmap v265: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:00:52 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 20 14:00:52 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 20 14:00:52 compute-0 ceph-mon[74360]: osdmap e124: 3 total, 3 up, 3 in
Jan 20 14:00:52 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 124 pg[9.1f( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=91/91 les/c/f=92/92/0 sis=124) [0] r=0 lpr=124 pi=[91,124)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:00:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Optimize plan auto_2026-01-20_14:00:52
Jan 20 14:00:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 14:00:52 compute-0 ceph-mgr[74653]: [balancer INFO root] do_upmap
Jan 20 14:00:52 compute-0 ceph-mgr[74653]: [balancer INFO root] pools ['default.rgw.log', 'vms', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.meta', '.mgr', '.rgw.root', 'backups', 'default.rgw.control', 'images', 'volumes']
Jan 20 14:00:52 compute-0 ceph-mgr[74653]: [balancer INFO root] prepared 0/10 changes
Jan 20 14:00:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:00:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:00:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:00:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:00:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:00:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:00:52 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:00:52 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:00:52 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:00:52.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:00:52 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:00:52 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e124 do_prune osdmap full prune enabled
Jan 20 14:00:52 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e125 e125: 3 total, 3 up, 3 in
Jan 20 14:00:52 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e125: 3 total, 3 up, 3 in
Jan 20 14:00:52 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 125 pg[9.1f( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=91/91 les/c/f=92/92/0 sis=125) [0]/[1] r=-1 lpr=125 pi=[91,125)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 14:00:52 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 125 pg[9.1f( empty local-lis/les=0/0 n=0 ec=54/45 lis/c=91/91 les/c/f=92/92/0 sis=125) [0]/[1] r=-1 lpr=125 pi=[91,125)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 20 14:00:52 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 125 pg[9.1e( v 51'1000 (0'0,51'1000] local-lis/les=0/0 n=5 ec=54/45 lis/c=123/70 les/c/f=124/71/0 sis=125) [0] r=0 lpr=125 pi=[70,125)/1 luod=0'0 crt=51'1000 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 14:00:52 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 125 pg[9.1e( v 51'1000 (0'0,51'1000] local-lis/les=0/0 n=5 ec=54/45 lis/c=123/70 les/c/f=124/71/0 sis=125) [0] r=0 lpr=125 pi=[70,125)/1 crt=51'1000 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:00:53 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v268: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:00:53 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:00:53 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:00:53 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:00:53.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:00:53 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e125 do_prune osdmap full prune enabled
Jan 20 14:00:54 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e126 e126: 3 total, 3 up, 3 in
Jan 20 14:00:54 compute-0 ceph-mon[74360]: 4.c scrub starts
Jan 20 14:00:54 compute-0 ceph-mon[74360]: 4.c scrub ok
Jan 20 14:00:54 compute-0 ceph-mon[74360]: osdmap e125: 3 total, 3 up, 3 in
Jan 20 14:00:54 compute-0 ceph-mon[74360]: 5.e scrub starts
Jan 20 14:00:54 compute-0 ceph-mon[74360]: 5.e scrub ok
Jan 20 14:00:54 compute-0 ceph-mon[74360]: pgmap v268: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:00:54 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e126: 3 total, 3 up, 3 in
Jan 20 14:00:54 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 126 pg[9.1e( v 51'1000 (0'0,51'1000] local-lis/les=125/126 n=5 ec=54/45 lis/c=123/70 les/c/f=124/71/0 sis=125) [0] r=0 lpr=125 pi=[70,125)/1 crt=51'1000 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:00:54 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:00:54 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:00:54 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:00:54.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:00:55 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e126 do_prune osdmap full prune enabled
Jan 20 14:00:55 compute-0 ceph-mon[74360]: osdmap e126: 3 total, 3 up, 3 in
Jan 20 14:00:55 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e127 e127: 3 total, 3 up, 3 in
Jan 20 14:00:55 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e127: 3 total, 3 up, 3 in
Jan 20 14:00:55 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 127 pg[9.1f( v 51'1000 (0'0,51'1000] local-lis/les=0/0 n=5 ec=54/45 lis/c=125/91 les/c/f=126/92/0 sis=127) [0] r=0 lpr=127 pi=[91,127)/1 luod=0'0 crt=51'1000 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 20 14:00:55 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 127 pg[9.1f( v 51'1000 (0'0,51'1000] local-lis/les=0/0 n=5 ec=54/45 lis/c=125/91 les/c/f=126/92/0 sis=127) [0] r=0 lpr=127 pi=[91,127)/1 crt=51'1000 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:00:55 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v271: 321 pgs: 1 peering, 320 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 54 B/s, 2 objects/s recovering
Jan 20 14:00:55 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:00:55 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:00:55 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:00:55.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:00:56 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e127 do_prune osdmap full prune enabled
Jan 20 14:00:56 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 e128: 3 total, 3 up, 3 in
Jan 20 14:00:56 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e128: 3 total, 3 up, 3 in
Jan 20 14:00:56 compute-0 ceph-osd[84815]: osd.0 pg_epoch: 128 pg[9.1f( v 51'1000 (0'0,51'1000] local-lis/les=127/128 n=5 ec=54/45 lis/c=125/91 les/c/f=126/92/0 sis=127) [0] r=0 lpr=127 pi=[91,127)/1 crt=51'1000 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:00:56 compute-0 ceph-mon[74360]: osdmap e127: 3 total, 3 up, 3 in
Jan 20 14:00:56 compute-0 ceph-mon[74360]: pgmap v271: 321 pgs: 1 peering, 320 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 54 B/s, 2 objects/s recovering
Jan 20 14:00:56 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 2.4 scrub starts
Jan 20 14:00:56 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 2.4 scrub ok
Jan 20 14:00:56 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:00:56 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:00:56 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:00:56.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:00:57 compute-0 ceph-mon[74360]: osdmap e128: 3 total, 3 up, 3 in
Jan 20 14:00:57 compute-0 ceph-mon[74360]: 2.4 scrub starts
Jan 20 14:00:57 compute-0 ceph-mon[74360]: 2.4 scrub ok
Jan 20 14:00:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 14:00:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:00:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 14:00:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:00:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:00:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:00:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:00:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:00:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:00:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:00:57 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v273: 321 pgs: 1 peering, 320 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 51 B/s, 2 objects/s recovering
Jan 20 14:00:57 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:00:57 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:00:57 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:00:57.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:00:57 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:00:58 compute-0 ceph-mon[74360]: 3.c scrub starts
Jan 20 14:00:58 compute-0 ceph-mon[74360]: 3.c scrub ok
Jan 20 14:00:58 compute-0 ceph-mon[74360]: pgmap v273: 321 pgs: 1 peering, 320 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 51 B/s, 2 objects/s recovering
Jan 20 14:00:58 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:00:58 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:00:58 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:00:58.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:00:59 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v274: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 54 B/s, 2 objects/s recovering
Jan 20 14:00:59 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 2.1e scrub starts
Jan 20 14:00:59 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 2.1e scrub ok
Jan 20 14:00:59 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:00:59 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:00:59 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:00:59.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:01:00 compute-0 ceph-mon[74360]: 2.1c scrub starts
Jan 20 14:01:00 compute-0 ceph-mon[74360]: 2.1c scrub ok
Jan 20 14:01:00 compute-0 ceph-mon[74360]: pgmap v274: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 54 B/s, 2 objects/s recovering
Jan 20 14:01:00 compute-0 ceph-mon[74360]: 2.1e scrub starts
Jan 20 14:01:00 compute-0 ceph-mon[74360]: 2.1e scrub ok
Jan 20 14:01:00 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:01:00 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:01:00 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:01:00.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:01:01 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v275: 321 pgs: 321 active+clean; 456 KiB data, 145 MiB used, 21 GiB / 21 GiB avail; 45 B/s, 1 objects/s recovering
Jan 20 14:01:01 compute-0 ceph-mon[74360]: 3.10 scrub starts
Jan 20 14:01:01 compute-0 ceph-mon[74360]: 3.10 scrub ok
Jan 20 14:01:01 compute-0 ceph-mon[74360]: 5.1a scrub starts
Jan 20 14:01:01 compute-0 ceph-mon[74360]: 5.1a scrub ok
Jan 20 14:01:01 compute-0 CROND[102461]: (root) CMD (run-parts /etc/cron.hourly)
Jan 20 14:01:01 compute-0 run-parts[102464]: (/etc/cron.hourly) starting 0anacron
Jan 20 14:01:01 compute-0 anacron[102472]: Anacron started on 2026-01-20
Jan 20 14:01:01 compute-0 anacron[102472]: Will run job `cron.daily' in 7 min.
Jan 20 14:01:01 compute-0 anacron[102472]: Will run job `cron.weekly' in 27 min.
Jan 20 14:01:01 compute-0 anacron[102472]: Will run job `cron.monthly' in 47 min.
Jan 20 14:01:01 compute-0 anacron[102472]: Jobs will be executed sequentially
Jan 20 14:01:01 compute-0 run-parts[102474]: (/etc/cron.hourly) finished 0anacron
Jan 20 14:01:01 compute-0 CROND[102460]: (root) CMDEND (run-parts /etc/cron.hourly)
Jan 20 14:01:01 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:01:01 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:01:01 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:01:01.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:01:02 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 7.1b scrub starts
Jan 20 14:01:02 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 7.1b scrub ok
Jan 20 14:01:02 compute-0 ceph-mon[74360]: 6.a scrub starts
Jan 20 14:01:02 compute-0 ceph-mon[74360]: 6.a scrub ok
Jan 20 14:01:02 compute-0 ceph-mon[74360]: 2.1d scrub starts
Jan 20 14:01:02 compute-0 ceph-mon[74360]: 2.1d scrub ok
Jan 20 14:01:02 compute-0 ceph-mon[74360]: pgmap v275: 321 pgs: 321 active+clean; 456 KiB data, 145 MiB used, 21 GiB / 21 GiB avail; 45 B/s, 1 objects/s recovering
Jan 20 14:01:02 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:01:02 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:01:02 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:01:02.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:01:02 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:01:03 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 7.3 scrub starts
Jan 20 14:01:03 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 7.3 scrub ok
Jan 20 14:01:03 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v276: 321 pgs: 321 active+clean; 456 KiB data, 145 MiB used, 21 GiB / 21 GiB avail; 13 B/s, 0 objects/s recovering
Jan 20 14:01:03 compute-0 ceph-mon[74360]: 7.1b scrub starts
Jan 20 14:01:03 compute-0 ceph-mon[74360]: 7.1b scrub ok
Jan 20 14:01:03 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:01:03 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:01:03 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:01:03.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:01:04 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 2.1f scrub starts
Jan 20 14:01:04 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 2.1f scrub ok
Jan 20 14:01:04 compute-0 ceph-mon[74360]: 7.3 scrub starts
Jan 20 14:01:04 compute-0 ceph-mon[74360]: 7.3 scrub ok
Jan 20 14:01:04 compute-0 ceph-mon[74360]: pgmap v276: 321 pgs: 321 active+clean; 456 KiB data, 145 MiB used, 21 GiB / 21 GiB avail; 13 B/s, 0 objects/s recovering
Jan 20 14:01:04 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:01:04 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:01:04 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:01:04.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:01:05 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v277: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 10 B/s, 0 objects/s recovering
Jan 20 14:01:05 compute-0 ceph-mon[74360]: 2.b scrub starts
Jan 20 14:01:05 compute-0 ceph-mon[74360]: 2.b scrub ok
Jan 20 14:01:05 compute-0 ceph-mon[74360]: 2.1f scrub starts
Jan 20 14:01:05 compute-0 ceph-mon[74360]: 2.1f scrub ok
Jan 20 14:01:05 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:01:05 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:01:05 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:01:05.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:01:06 compute-0 ceph-mon[74360]: pgmap v277: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 10 B/s, 0 objects/s recovering
Jan 20 14:01:06 compute-0 ceph-mon[74360]: 6.15 scrub starts
Jan 20 14:01:06 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:01:06 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:01:06 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:01:06.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:01:07 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v278: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 9 B/s, 0 objects/s recovering
Jan 20 14:01:07 compute-0 ceph-mon[74360]: 6.15 scrub ok
Jan 20 14:01:07 compute-0 ceph-mon[74360]: 5.9 scrub starts
Jan 20 14:01:07 compute-0 ceph-mon[74360]: 5.9 scrub ok
Jan 20 14:01:07 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:01:07 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:01:07 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:01:07.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:01:07 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:01:08 compute-0 ceph-mon[74360]: pgmap v278: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 9 B/s, 0 objects/s recovering
Jan 20 14:01:08 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:01:08 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:01:08 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:01:08.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:01:09 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 8.1 scrub starts
Jan 20 14:01:09 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 8.1 scrub ok
Jan 20 14:01:09 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v279: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 9 B/s, 0 objects/s recovering
Jan 20 14:01:09 compute-0 ceph-mon[74360]: 2.f scrub starts
Jan 20 14:01:09 compute-0 ceph-mon[74360]: 2.f scrub ok
Jan 20 14:01:09 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:01:09 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:01:09 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:01:09.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:01:10 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 9.1 scrub starts
Jan 20 14:01:10 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 9.1 scrub ok
Jan 20 14:01:10 compute-0 sudo[102484]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:01:10 compute-0 sudo[102484]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:01:10 compute-0 sudo[102484]: pam_unix(sudo:session): session closed for user root
Jan 20 14:01:10 compute-0 sudo[102509]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:01:10 compute-0 sudo[102509]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:01:10 compute-0 sudo[102509]: pam_unix(sudo:session): session closed for user root
Jan 20 14:01:10 compute-0 ceph-mon[74360]: 8.1 scrub starts
Jan 20 14:01:10 compute-0 ceph-mon[74360]: 8.1 scrub ok
Jan 20 14:01:10 compute-0 ceph-mon[74360]: pgmap v279: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 9 B/s, 0 objects/s recovering
Jan 20 14:01:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 14:01:10 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:01:10 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:01:10 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:01:10.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:01:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:01:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 20 14:01:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:01:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:01:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:01:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:01:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:01:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:01:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:01:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:01:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:01:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 20 14:01:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:01:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:01:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:01:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 20 14:01:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:01:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 20 14:01:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:01:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:01:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:01:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 20 14:01:11 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 8.7 deep-scrub starts
Jan 20 14:01:11 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 8.7 deep-scrub ok
Jan 20 14:01:11 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v280: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:01:11 compute-0 ceph-mon[74360]: 10.12 scrub starts
Jan 20 14:01:11 compute-0 ceph-mon[74360]: 10.12 scrub ok
Jan 20 14:01:11 compute-0 ceph-mon[74360]: 9.1 scrub starts
Jan 20 14:01:11 compute-0 ceph-mon[74360]: 9.1 scrub ok
Jan 20 14:01:11 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:01:11 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:01:11 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:01:11.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:01:12 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 9.2 scrub starts
Jan 20 14:01:12 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 9.2 scrub ok
Jan 20 14:01:12 compute-0 ceph-mon[74360]: 8.7 deep-scrub starts
Jan 20 14:01:12 compute-0 ceph-mon[74360]: 8.7 deep-scrub ok
Jan 20 14:01:12 compute-0 ceph-mon[74360]: pgmap v280: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:01:12 compute-0 ceph-mon[74360]: 3.13 scrub starts
Jan 20 14:01:12 compute-0 ceph-mon[74360]: 3.13 scrub ok
Jan 20 14:01:12 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:01:12 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:01:12 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:01:12.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:01:12 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:01:13 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v281: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:01:13 compute-0 ceph-mon[74360]: 9.2 scrub starts
Jan 20 14:01:13 compute-0 ceph-mon[74360]: 9.2 scrub ok
Jan 20 14:01:13 compute-0 ceph-mon[74360]: 3.14 scrub starts
Jan 20 14:01:13 compute-0 ceph-mon[74360]: 3.14 scrub ok
Jan 20 14:01:13 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:01:13 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:01:13 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:01:13.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:01:14 compute-0 ceph-mon[74360]: 10.1e scrub starts
Jan 20 14:01:14 compute-0 ceph-mon[74360]: 10.1e scrub ok
Jan 20 14:01:14 compute-0 ceph-mon[74360]: pgmap v281: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:01:14 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:01:14 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:01:14 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:01:14.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:01:15 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v282: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:01:15 compute-0 ceph-mon[74360]: 10.4 scrub starts
Jan 20 14:01:15 compute-0 ceph-mon[74360]: 10.4 scrub ok
Jan 20 14:01:15 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:01:15 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:01:15 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:01:15.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:01:16 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:01:16 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:01:16 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:01:16.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:01:16 compute-0 ceph-mon[74360]: pgmap v282: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:01:17 compute-0 sudo[100967]: pam_unix(sudo:session): session closed for user root
Jan 20 14:01:17 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 8.e scrub starts
Jan 20 14:01:17 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v283: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:01:17 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 8.e scrub ok
Jan 20 14:01:17 compute-0 ceph-mon[74360]: 10.11 scrub starts
Jan 20 14:01:17 compute-0 ceph-mon[74360]: 10.11 scrub ok
Jan 20 14:01:17 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:01:17 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:01:17 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:01:17.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:01:17 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:01:18 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:01:18 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:01:18 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:01:18.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:01:18 compute-0 ceph-mon[74360]: 8.e scrub starts
Jan 20 14:01:18 compute-0 ceph-mon[74360]: pgmap v283: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:01:18 compute-0 ceph-mon[74360]: 8.e scrub ok
Jan 20 14:01:18 compute-0 ceph-mon[74360]: 10.f scrub starts
Jan 20 14:01:18 compute-0 ceph-mon[74360]: 10.f scrub ok
Jan 20 14:01:18 compute-0 ceph-mon[74360]: 5.15 scrub starts
Jan 20 14:01:18 compute-0 ceph-mon[74360]: 5.15 scrub ok
Jan 20 14:01:18 compute-0 sudo[102687]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ysvejdsabtdsjsstesxlnemyholureku ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917678.544951-369-89145906385405/AnsiballZ_command.py'
Jan 20 14:01:18 compute-0 sudo[102687]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:01:19 compute-0 python3.9[102689]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:01:19 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 8.13 scrub starts
Jan 20 14:01:19 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 8.13 scrub ok
Jan 20 14:01:19 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v284: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:01:19 compute-0 sudo[102687]: pam_unix(sudo:session): session closed for user root
Jan 20 14:01:19 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:01:19 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:01:19 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:01:19.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:01:20 compute-0 ceph-mon[74360]: 10.10 scrub starts
Jan 20 14:01:20 compute-0 ceph-mon[74360]: 10.10 scrub ok
Jan 20 14:01:20 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 9.4 scrub starts
Jan 20 14:01:20 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 9.4 scrub ok
Jan 20 14:01:20 compute-0 sudo[102975]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jcerikgxfzxctmldkihujogwgvtfczef ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917680.1614964-393-26420931882187/AnsiballZ_selinux.py'
Jan 20 14:01:20 compute-0 sudo[102975]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:01:20 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:01:20 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:01:20 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:01:20.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:01:21 compute-0 python3.9[102977]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Jan 20 14:01:21 compute-0 sudo[102975]: pam_unix(sudo:session): session closed for user root
Jan 20 14:01:21 compute-0 ceph-mon[74360]: 8.13 scrub starts
Jan 20 14:01:21 compute-0 ceph-mon[74360]: 8.13 scrub ok
Jan 20 14:01:21 compute-0 ceph-mon[74360]: pgmap v284: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:01:21 compute-0 ceph-mon[74360]: 9.4 scrub starts
Jan 20 14:01:21 compute-0 ceph-mon[74360]: 9.4 scrub ok
Jan 20 14:01:21 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v285: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:01:21 compute-0 sudo[103128]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yzwwjrtpvvdblddyswbvndnmxuwjyxjo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917681.610919-426-157236602482103/AnsiballZ_command.py'
Jan 20 14:01:21 compute-0 sudo[103128]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:01:21 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:01:21 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:01:21 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:01:21.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:01:22 compute-0 python3.9[103130]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Jan 20 14:01:22 compute-0 sudo[103128]: pam_unix(sudo:session): session closed for user root
Jan 20 14:01:22 compute-0 ceph-mon[74360]: 10.3 scrub starts
Jan 20 14:01:22 compute-0 ceph-mon[74360]: 10.3 scrub ok
Jan 20 14:01:22 compute-0 ceph-mon[74360]: pgmap v285: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:01:22 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 8.1a deep-scrub starts
Jan 20 14:01:22 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 8.1a deep-scrub ok
Jan 20 14:01:22 compute-0 systemd[75982]: Created slice User Background Tasks Slice.
Jan 20 14:01:22 compute-0 systemd[75982]: Starting Cleanup of User's Temporary Files and Directories...
Jan 20 14:01:22 compute-0 systemd[75982]: Finished Cleanup of User's Temporary Files and Directories.
Jan 20 14:01:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:01:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:01:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:01:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:01:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:01:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:01:22 compute-0 sudo[103281]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zepuubpmsbvugftaxhswypodzttxvxvo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917682.3645701-450-106342829163227/AnsiballZ_file.py'
Jan 20 14:01:22 compute-0 sudo[103281]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:01:22 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:01:22 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:01:22 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:01:22.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:01:22 compute-0 python3.9[103283]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:01:22 compute-0 sudo[103281]: pam_unix(sudo:session): session closed for user root
Jan 20 14:01:22 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:01:23 compute-0 ceph-mon[74360]: 8.1a deep-scrub starts
Jan 20 14:01:23 compute-0 ceph-mon[74360]: 8.1a deep-scrub ok
Jan 20 14:01:23 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 8.1d deep-scrub starts
Jan 20 14:01:23 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 8.1d deep-scrub ok
Jan 20 14:01:23 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v286: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:01:23 compute-0 sudo[103433]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fpailmilhymehztucfyfmbhixamlwsse ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917683.189631-474-76134261801706/AnsiballZ_mount.py'
Jan 20 14:01:23 compute-0 sudo[103433]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:01:23 compute-0 python3.9[103436]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Jan 20 14:01:23 compute-0 sudo[103433]: pam_unix(sudo:session): session closed for user root
Jan 20 14:01:23 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:01:23 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:01:23 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:01:23.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:01:24 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 8.1e scrub starts
Jan 20 14:01:24 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 8.1e scrub ok
Jan 20 14:01:24 compute-0 ceph-mon[74360]: 8.1d deep-scrub starts
Jan 20 14:01:24 compute-0 ceph-mon[74360]: 8.1d deep-scrub ok
Jan 20 14:01:24 compute-0 ceph-mon[74360]: pgmap v286: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:01:24 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:01:24 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:01:24 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:01:24.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:01:25 compute-0 ceph-mon[74360]: 8.1e scrub starts
Jan 20 14:01:25 compute-0 ceph-mon[74360]: 8.1e scrub ok
Jan 20 14:01:25 compute-0 sudo[103586]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xfuqltoynnysyxjxjkacqvpazifhcubn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917684.9190826-558-202212675337298/AnsiballZ_file.py'
Jan 20 14:01:25 compute-0 sudo[103586]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:01:25 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v287: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:01:25 compute-0 python3.9[103588]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:01:25 compute-0 sudo[103586]: pam_unix(sudo:session): session closed for user root
Jan 20 14:01:25 compute-0 sudo[103739]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mcrwwtljnorebosrpgsoxrpbafvovkfk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917685.706695-582-253905879388959/AnsiballZ_stat.py'
Jan 20 14:01:25 compute-0 sudo[103739]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:01:25 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:01:25 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:01:25 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:01:25.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:01:26 compute-0 python3.9[103741]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:01:26 compute-0 sudo[103739]: pam_unix(sudo:session): session closed for user root
Jan 20 14:01:26 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 9.c scrub starts
Jan 20 14:01:26 compute-0 ceph-mon[74360]: 10.1 scrub starts
Jan 20 14:01:26 compute-0 ceph-mon[74360]: 10.1 scrub ok
Jan 20 14:01:26 compute-0 ceph-mon[74360]: pgmap v287: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:01:26 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 9.c scrub ok
Jan 20 14:01:26 compute-0 sudo[103817]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pgblzrwvocuzphoklimfubbnitfssohf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917685.706695-582-253905879388959/AnsiballZ_file.py'
Jan 20 14:01:26 compute-0 sudo[103817]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:01:26 compute-0 python3.9[103819]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:01:26 compute-0 sudo[103817]: pam_unix(sudo:session): session closed for user root
Jan 20 14:01:26 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:01:26 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:01:26 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:01:26.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:01:27 compute-0 ceph-mon[74360]: 9.c scrub starts
Jan 20 14:01:27 compute-0 ceph-mon[74360]: 9.c scrub ok
Jan 20 14:01:27 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v288: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:01:27 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:01:27 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:01:27 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:01:27.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:01:27 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:01:28 compute-0 sudo[103970]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dmqzptinmlmlcvfbqqoubugttaxgitmo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917687.7672818-645-207189497451846/AnsiballZ_stat.py'
Jan 20 14:01:28 compute-0 sudo[103970]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:01:28 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 9.14 scrub starts
Jan 20 14:01:28 compute-0 python3.9[103972]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 14:01:28 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 9.14 scrub ok
Jan 20 14:01:28 compute-0 sudo[103970]: pam_unix(sudo:session): session closed for user root
Jan 20 14:01:28 compute-0 ceph-mon[74360]: 4.13 scrub starts
Jan 20 14:01:28 compute-0 ceph-mon[74360]: 4.13 scrub ok
Jan 20 14:01:28 compute-0 ceph-mon[74360]: pgmap v288: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:01:28 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:01:28 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:01:28 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:01:28.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:01:29 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 9.1c scrub starts
Jan 20 14:01:29 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 9.1c scrub ok
Jan 20 14:01:29 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v289: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:01:29 compute-0 sudo[104124]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nnibuqwitfmxoamwxzqpufwdcsqlxcnq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917688.8647063-684-74864748444624/AnsiballZ_getent.py'
Jan 20 14:01:29 compute-0 sudo[104124]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:01:29 compute-0 ceph-mon[74360]: 8.15 scrub starts
Jan 20 14:01:29 compute-0 ceph-mon[74360]: 8.15 scrub ok
Jan 20 14:01:29 compute-0 ceph-mon[74360]: 9.14 scrub starts
Jan 20 14:01:29 compute-0 ceph-mon[74360]: 9.14 scrub ok
Jan 20 14:01:29 compute-0 python3.9[104126]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Jan 20 14:01:29 compute-0 sudo[104124]: pam_unix(sudo:session): session closed for user root
Jan 20 14:01:29 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:01:29 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:01:29 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:01:29.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:01:30 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 11.2 scrub starts
Jan 20 14:01:30 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 11.2 scrub ok
Jan 20 14:01:30 compute-0 sudo[104278]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tpisfumyxdxgdhgdzxpaqvuajqsssnml ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917689.964417-714-254494107927230/AnsiballZ_getent.py'
Jan 20 14:01:30 compute-0 sudo[104278]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:01:30 compute-0 python3.9[104280]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Jan 20 14:01:30 compute-0 sudo[104278]: pam_unix(sudo:session): session closed for user root
Jan 20 14:01:30 compute-0 sudo[104281]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:01:30 compute-0 sudo[104281]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:01:30 compute-0 sudo[104281]: pam_unix(sudo:session): session closed for user root
Jan 20 14:01:30 compute-0 ceph-mon[74360]: 11.16 deep-scrub starts
Jan 20 14:01:30 compute-0 ceph-mon[74360]: 11.16 deep-scrub ok
Jan 20 14:01:30 compute-0 ceph-mon[74360]: 9.1c scrub starts
Jan 20 14:01:30 compute-0 ceph-mon[74360]: 9.1c scrub ok
Jan 20 14:01:30 compute-0 ceph-mon[74360]: pgmap v289: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:01:30 compute-0 sudo[104313]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:01:30 compute-0 sudo[104313]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:01:30 compute-0 sudo[104313]: pam_unix(sudo:session): session closed for user root
Jan 20 14:01:30 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:01:30 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:01:30 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:01:30.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:01:31 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v290: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:01:31 compute-0 sudo[104481]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cnxuvaqfvupnhzyvuqptglhoforkmdox ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917690.742503-738-55541330645632/AnsiballZ_group.py'
Jan 20 14:01:31 compute-0 sudo[104481]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:01:31 compute-0 python3.9[104483]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 20 14:01:31 compute-0 sudo[104481]: pam_unix(sudo:session): session closed for user root
Jan 20 14:01:31 compute-0 ceph-mon[74360]: 11.17 scrub starts
Jan 20 14:01:31 compute-0 ceph-mon[74360]: 11.17 scrub ok
Jan 20 14:01:31 compute-0 ceph-mon[74360]: 11.2 scrub starts
Jan 20 14:01:31 compute-0 ceph-mon[74360]: 11.2 scrub ok
Jan 20 14:01:31 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:01:31 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:01:31 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:01:31.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:01:32 compute-0 sudo[104634]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tdqlgjjcadmlgyxtmpjqvqmulckngswk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917691.8650525-765-28263942904250/AnsiballZ_file.py'
Jan 20 14:01:32 compute-0 sudo[104634]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:01:32 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 11.6 deep-scrub starts
Jan 20 14:01:32 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 11.6 deep-scrub ok
Jan 20 14:01:32 compute-0 python3.9[104636]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Jan 20 14:01:32 compute-0 sudo[104634]: pam_unix(sudo:session): session closed for user root
Jan 20 14:01:32 compute-0 ceph-mon[74360]: pgmap v290: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:01:32 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:01:32 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:01:32 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:01:32.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:01:32 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:01:33 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v291: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:01:33 compute-0 sudo[104786]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zluxrqevesblljjhlnageaozfrlmphed ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917693.0580626-798-218762905436440/AnsiballZ_dnf.py'
Jan 20 14:01:33 compute-0 sudo[104786]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:01:33 compute-0 python3.9[104788]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 20 14:01:33 compute-0 ceph-mon[74360]: 8.2 deep-scrub starts
Jan 20 14:01:33 compute-0 ceph-mon[74360]: 8.2 deep-scrub ok
Jan 20 14:01:33 compute-0 ceph-mon[74360]: 11.6 deep-scrub starts
Jan 20 14:01:33 compute-0 ceph-mon[74360]: 11.6 deep-scrub ok
Jan 20 14:01:33 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:01:33 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:01:33 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:01:33.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:01:34 compute-0 ceph-mon[74360]: pgmap v291: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:01:34 compute-0 sudo[104786]: pam_unix(sudo:session): session closed for user root
Jan 20 14:01:34 compute-0 sshd-session[104791]: Connection closed by authenticating user root 157.245.78.139 port 53154 [preauth]
Jan 20 14:01:35 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:01:35 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:01:35 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:01:35.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:01:35 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 11.9 scrub starts
Jan 20 14:01:35 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v292: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:01:35 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 11.9 scrub ok
Jan 20 14:01:35 compute-0 ceph-mon[74360]: 8.16 deep-scrub starts
Jan 20 14:01:35 compute-0 ceph-mon[74360]: 8.16 deep-scrub ok
Jan 20 14:01:35 compute-0 sudo[104943]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sccjpcgpixcgguryidparvqrcikowfbd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917695.6313093-822-186370946533225/AnsiballZ_file.py'
Jan 20 14:01:35 compute-0 sudo[104943]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:01:35 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:01:35 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:01:35 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:01:35.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:01:36 compute-0 python3.9[104945]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:01:36 compute-0 sudo[104943]: pam_unix(sudo:session): session closed for user root
Jan 20 14:01:36 compute-0 ceph-mon[74360]: 8.f scrub starts
Jan 20 14:01:36 compute-0 ceph-mon[74360]: 8.f scrub ok
Jan 20 14:01:36 compute-0 ceph-mon[74360]: 11.9 scrub starts
Jan 20 14:01:36 compute-0 ceph-mon[74360]: pgmap v292: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:01:36 compute-0 ceph-mon[74360]: 11.9 scrub ok
Jan 20 14:01:36 compute-0 ceph-mon[74360]: 5.16 scrub starts
Jan 20 14:01:36 compute-0 ceph-mon[74360]: 5.16 scrub ok
Jan 20 14:01:36 compute-0 sudo[105095]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dvjjdzvmaeoilcvrmywcnwbzmezujqlm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917696.4346917-846-115055105939752/AnsiballZ_stat.py'
Jan 20 14:01:36 compute-0 sudo[105095]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:01:37 compute-0 python3.9[105097]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:01:37 compute-0 sudo[105095]: pam_unix(sudo:session): session closed for user root
Jan 20 14:01:37 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:01:37 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:01:37 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:01:37.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:01:37 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v293: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:01:37 compute-0 sudo[105173]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yubiqrbrqbifxpilvwuadqbsnoujpkob ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917696.4346917-846-115055105939752/AnsiballZ_file.py'
Jan 20 14:01:37 compute-0 sudo[105173]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:01:37 compute-0 python3.9[105175]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/modules-load.d/99-edpm.conf _original_basename=edpm-modprobe.conf.j2 recurse=False state=file path=/etc/modules-load.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:01:37 compute-0 sudo[105173]: pam_unix(sudo:session): session closed for user root
Jan 20 14:01:37 compute-0 ceph-mon[74360]: 3.16 scrub starts
Jan 20 14:01:37 compute-0 ceph-mon[74360]: 3.16 scrub ok
Jan 20 14:01:37 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:01:37 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:01:37 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:01:37 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:01:37.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:01:38 compute-0 sudo[105326]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dshpbkkboegnbkjdsgkxzmrkonbcewgp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917697.790142-885-166563472301472/AnsiballZ_stat.py'
Jan 20 14:01:38 compute-0 sudo[105326]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:01:38 compute-0 python3.9[105328]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:01:38 compute-0 sudo[105326]: pam_unix(sudo:session): session closed for user root
Jan 20 14:01:38 compute-0 sudo[105404]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bbkpbcvhtwdksqipkibazzepwvnnvdto ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917697.790142-885-166563472301472/AnsiballZ_file.py'
Jan 20 14:01:38 compute-0 sudo[105404]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:01:38 compute-0 ceph-mon[74360]: 11.a scrub starts
Jan 20 14:01:38 compute-0 ceph-mon[74360]: 11.a scrub ok
Jan 20 14:01:38 compute-0 ceph-mon[74360]: pgmap v293: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:01:38 compute-0 python3.9[105406]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/sysctl.d/99-edpm.conf _original_basename=edpm-sysctl.conf.j2 recurse=False state=file path=/etc/sysctl.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:01:38 compute-0 sudo[105404]: pam_unix(sudo:session): session closed for user root
Jan 20 14:01:39 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:01:39 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:01:39 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:01:39.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:01:39 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v294: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:01:39 compute-0 sudo[105557]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uhlsooucmioazmcnldliysykdxrmznvf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917699.4259255-930-92469880275768/AnsiballZ_dnf.py'
Jan 20 14:01:39 compute-0 sudo[105557]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:01:39 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:01:39 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:01:39 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:01:39.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:01:40 compute-0 python3.9[105559]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 20 14:01:40 compute-0 ceph-mon[74360]: pgmap v294: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:01:41 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:01:41 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:01:41 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:01:41.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:01:41 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v295: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:01:41 compute-0 sudo[105557]: pam_unix(sudo:session): session closed for user root
Jan 20 14:01:41 compute-0 ceph-mon[74360]: 5.10 scrub starts
Jan 20 14:01:41 compute-0 ceph-mon[74360]: 5.10 scrub ok
Jan 20 14:01:41 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:01:41 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:01:41 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:01:41.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:01:42 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 11.b scrub starts
Jan 20 14:01:42 compute-0 python3.9[105711]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 14:01:42 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 11.b scrub ok
Jan 20 14:01:42 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:01:43 compute-0 ceph-mon[74360]: pgmap v295: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:01:43 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:01:43 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:01:43 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:01:43.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:01:43 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v296: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:01:43 compute-0 python3.9[105863]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Jan 20 14:01:43 compute-0 python3.9[106014]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 14:01:43 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:01:43 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:01:43 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:01:43.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:01:44 compute-0 ceph-mon[74360]: 11.b scrub starts
Jan 20 14:01:44 compute-0 ceph-mon[74360]: 11.b scrub ok
Jan 20 14:01:44 compute-0 ceph-mon[74360]: 5.11 scrub starts
Jan 20 14:01:44 compute-0 ceph-mon[74360]: 5.11 scrub ok
Jan 20 14:01:44 compute-0 ceph-mon[74360]: 8.3 scrub starts
Jan 20 14:01:44 compute-0 ceph-mon[74360]: 8.3 scrub ok
Jan 20 14:01:44 compute-0 ceph-mon[74360]: pgmap v296: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:01:44 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 11.c scrub starts
Jan 20 14:01:44 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 11.c scrub ok
Jan 20 14:01:45 compute-0 ceph-mon[74360]: 11.c scrub starts
Jan 20 14:01:45 compute-0 ceph-mon[74360]: 11.c scrub ok
Jan 20 14:01:45 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:01:45 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:01:45 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:01:45.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:01:45 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v297: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:01:45 compute-0 sudo[106164]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oigchjtnkwccngjdnvykemozsfqfmlyf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917704.6897857-1053-32050957180638/AnsiballZ_systemd.py'
Jan 20 14:01:45 compute-0 sudo[106164]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:01:45 compute-0 python3.9[106166]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 14:01:45 compute-0 systemd[1]: Stopping Dynamic System Tuning Daemon...
Jan 20 14:01:45 compute-0 systemd[1]: tuned.service: Deactivated successfully.
Jan 20 14:01:45 compute-0 systemd[1]: Stopped Dynamic System Tuning Daemon.
Jan 20 14:01:45 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Jan 20 14:01:45 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Jan 20 14:01:45 compute-0 sudo[106164]: pam_unix(sudo:session): session closed for user root
Jan 20 14:01:45 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:01:45 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:01:45 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:01:45.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:01:46 compute-0 ceph-mon[74360]: 5.1f scrub starts
Jan 20 14:01:46 compute-0 ceph-mon[74360]: 5.1f scrub ok
Jan 20 14:01:46 compute-0 ceph-mon[74360]: pgmap v297: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:01:46 compute-0 python3.9[106329]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Jan 20 14:01:47 compute-0 ceph-mon[74360]: 8.a scrub starts
Jan 20 14:01:47 compute-0 ceph-mon[74360]: 8.a scrub ok
Jan 20 14:01:47 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:01:47 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:01:47 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:01:47.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:01:47 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v298: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:01:47 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 11.d scrub starts
Jan 20 14:01:47 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 11.d scrub ok
Jan 20 14:01:47 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:01:47 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:01:47 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:01:47 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:01:47.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:01:48 compute-0 sudo[106355]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:01:48 compute-0 sudo[106355]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:01:48 compute-0 sudo[106355]: pam_unix(sudo:session): session closed for user root
Jan 20 14:01:48 compute-0 sudo[106380]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:01:48 compute-0 sudo[106380]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:01:48 compute-0 sudo[106380]: pam_unix(sudo:session): session closed for user root
Jan 20 14:01:48 compute-0 sudo[106405]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:01:48 compute-0 sudo[106405]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:01:48 compute-0 sudo[106405]: pam_unix(sudo:session): session closed for user root
Jan 20 14:01:48 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 11.10 scrub starts
Jan 20 14:01:48 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 11.10 scrub ok
Jan 20 14:01:48 compute-0 sudo[106430]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Jan 20 14:01:48 compute-0 sudo[106430]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:01:48 compute-0 ceph-mon[74360]: pgmap v298: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:01:48 compute-0 ceph-mon[74360]: 11.d scrub starts
Jan 20 14:01:48 compute-0 ceph-mon[74360]: 11.d scrub ok
Jan 20 14:01:49 compute-0 podman[106529]: 2026-01-20 14:01:49.080931622 +0000 UTC m=+0.085956802 container exec a602f19ce9ef2d4922e558036fcbd51fac25abd19d28d7df857e5fbe8f959ba3 (image=quay.io/ceph/ceph:v18, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mon-compute-0, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:01:49 compute-0 podman[106529]: 2026-01-20 14:01:49.182876171 +0000 UTC m=+0.187901281 container exec_died a602f19ce9ef2d4922e558036fcbd51fac25abd19d28d7df857e5fbe8f959ba3 (image=quay.io/ceph/ceph:v18, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mon-compute-0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 20 14:01:49 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:01:49 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v299: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:01:49 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:01:49 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:01:49.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:01:49 compute-0 ceph-mon[74360]: 11.10 scrub starts
Jan 20 14:01:49 compute-0 ceph-mon[74360]: 11.10 scrub ok
Jan 20 14:01:49 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 20 14:01:49 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:01:49 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 20 14:01:49 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:01:49 compute-0 podman[106680]: 2026-01-20 14:01:49.844021251 +0000 UTC m=+0.118436924 container exec a2973a514c852ff316e6ad2ff84585210b4ad01d86cdf2de06f634d9390a88e8 (image=quay.io/ceph/haproxy:2.3, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-haproxy-rgw-default-compute-0-nqkboe)
Jan 20 14:01:49 compute-0 podman[106680]: 2026-01-20 14:01:49.854738858 +0000 UTC m=+0.129154501 container exec_died a2973a514c852ff316e6ad2ff84585210b4ad01d86cdf2de06f634d9390a88e8 (image=quay.io/ceph/haproxy:2.3, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-haproxy-rgw-default-compute-0-nqkboe)
Jan 20 14:01:49 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:01:49 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:01:49 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:01:49.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:01:50 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 20 14:01:50 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:01:50 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 20 14:01:50 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:01:50 compute-0 podman[106746]: 2026-01-20 14:01:50.082530251 +0000 UTC m=+0.063631492 container exec 216d71b5dad97102f8a0d90f660e553dbff152f4aa28ae453da231535e09b878 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-keepalived-rgw-default-compute-0-gcjsxe, summary=Provides keepalived on RHEL 9 for Ceph., release=1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc., io.openshift.tags=Ceph keepalived, name=keepalived, io.buildah.version=1.28.2, version=2.2.4, build-date=2023-02-22T09:23:20, description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=keepalived-container, io.openshift.expose-services=, vcs-type=git, architecture=x86_64, distribution-scope=public)
Jan 20 14:01:50 compute-0 podman[106746]: 2026-01-20 14:01:50.102074486 +0000 UTC m=+0.083175667 container exec_died 216d71b5dad97102f8a0d90f660e553dbff152f4aa28ae453da231535e09b878 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-keepalived-rgw-default-compute-0-gcjsxe, io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=2.2.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides keepalived on RHEL 9 for Ceph., name=keepalived, architecture=x86_64, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.component=keepalived-container, distribution-scope=public, io.openshift.tags=Ceph keepalived, build-date=2023-02-22T09:23:20, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vendor=Red Hat, Inc., release=1793, description=keepalived for Ceph, io.openshift.expose-services=, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git)
Jan 20 14:01:50 compute-0 sudo[106430]: pam_unix(sudo:session): session closed for user root
Jan 20 14:01:50 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 14:01:50 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:01:50 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 14:01:50 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:01:50 compute-0 sudo[106777]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:01:50 compute-0 sudo[106777]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:01:50 compute-0 sudo[106777]: pam_unix(sudo:session): session closed for user root
Jan 20 14:01:50 compute-0 sudo[106802]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:01:50 compute-0 sudo[106802]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:01:50 compute-0 sudo[106802]: pam_unix(sudo:session): session closed for user root
Jan 20 14:01:50 compute-0 sudo[106828]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:01:50 compute-0 sudo[106828]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:01:50 compute-0 sudo[106828]: pam_unix(sudo:session): session closed for user root
Jan 20 14:01:50 compute-0 sudo[106904]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 14:01:50 compute-0 sudo[106904]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:01:50 compute-0 ceph-mon[74360]: pgmap v299: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:01:50 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:01:50 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:01:50 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:01:50 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:01:50 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:01:50 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:01:50 compute-0 sudo[106977]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:01:50 compute-0 sudo[106977]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:01:50 compute-0 sudo[107025]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kyyilivyevnanyikmhwiheokpiraenmj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917710.3740792-1224-80324838856479/AnsiballZ_systemd.py'
Jan 20 14:01:50 compute-0 sudo[106977]: pam_unix(sudo:session): session closed for user root
Jan 20 14:01:50 compute-0 sudo[107025]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:01:50 compute-0 sudo[107030]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:01:50 compute-0 sudo[107030]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:01:50 compute-0 sudo[107030]: pam_unix(sudo:session): session closed for user root
Jan 20 14:01:50 compute-0 sudo[106904]: pam_unix(sudo:session): session closed for user root
Jan 20 14:01:50 compute-0 python3.9[107029]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 14:01:50 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 14:01:50 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:01:50 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 20 14:01:50 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:01:50 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 20 14:01:50 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:01:50 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev d1037333-217a-43c6-b0e2-65f421bdaa32 does not exist
Jan 20 14:01:50 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 22156704-316f-436d-a47c-7ad63c183039 does not exist
Jan 20 14:01:50 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev c33b2d7a-5d43-4a27-bace-ba50a60771f7 does not exist
Jan 20 14:01:51 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 20 14:01:51 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:01:51 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 20 14:01:51 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:01:51 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 14:01:51 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:01:51 compute-0 sudo[107025]: pam_unix(sudo:session): session closed for user root
Jan 20 14:01:51 compute-0 sudo[107086]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:01:51 compute-0 sudo[107086]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:01:51 compute-0 sudo[107086]: pam_unix(sudo:session): session closed for user root
Jan 20 14:01:51 compute-0 sudo[107135]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:01:51 compute-0 sudo[107135]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:01:51 compute-0 sudo[107135]: pam_unix(sudo:session): session closed for user root
Jan 20 14:01:51 compute-0 sudo[107183]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:01:51 compute-0 sudo[107183]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:01:51 compute-0 sudo[107183]: pam_unix(sudo:session): session closed for user root
Jan 20 14:01:51 compute-0 sudo[107228]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 14:01:51 compute-0 sudo[107228]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:01:51 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v300: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:01:51 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:01:51 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:01:51 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:01:51.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:01:51 compute-0 sudo[107361]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zutcwcpxcornujdyuerwgdavclrsphwk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917711.139328-1224-182680991935534/AnsiballZ_systemd.py'
Jan 20 14:01:51 compute-0 sudo[107361]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:01:51 compute-0 podman[107378]: 2026-01-20 14:01:51.559341863 +0000 UTC m=+0.036705138 container create b0293e1e2fe9df0f44aee27d20eb93f7c9788103bbcaa0b7a7d78d6f5e555dfb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_heisenberg, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:01:51 compute-0 systemd[1]: Started libpod-conmon-b0293e1e2fe9df0f44aee27d20eb93f7c9788103bbcaa0b7a7d78d6f5e555dfb.scope.
Jan 20 14:01:51 compute-0 ceph-mon[74360]: 3.d scrub starts
Jan 20 14:01:51 compute-0 ceph-mon[74360]: 3.d scrub ok
Jan 20 14:01:51 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:01:51 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:01:51 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:01:51 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:01:51 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:01:51 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:01:51 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:01:51 compute-0 podman[107378]: 2026-01-20 14:01:51.542941942 +0000 UTC m=+0.020305247 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:01:51 compute-0 podman[107378]: 2026-01-20 14:01:51.64148128 +0000 UTC m=+0.118844605 container init b0293e1e2fe9df0f44aee27d20eb93f7c9788103bbcaa0b7a7d78d6f5e555dfb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_heisenberg, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 20 14:01:51 compute-0 podman[107378]: 2026-01-20 14:01:51.647751249 +0000 UTC m=+0.125114534 container start b0293e1e2fe9df0f44aee27d20eb93f7c9788103bbcaa0b7a7d78d6f5e555dfb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_heisenberg, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 14:01:51 compute-0 gallant_heisenberg[107395]: 167 167
Jan 20 14:01:51 compute-0 systemd[1]: libpod-b0293e1e2fe9df0f44aee27d20eb93f7c9788103bbcaa0b7a7d78d6f5e555dfb.scope: Deactivated successfully.
Jan 20 14:01:51 compute-0 podman[107378]: 2026-01-20 14:01:51.652008703 +0000 UTC m=+0.129372018 container attach b0293e1e2fe9df0f44aee27d20eb93f7c9788103bbcaa0b7a7d78d6f5e555dfb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_heisenberg, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:01:51 compute-0 podman[107378]: 2026-01-20 14:01:51.652527107 +0000 UTC m=+0.129890412 container died b0293e1e2fe9df0f44aee27d20eb93f7c9788103bbcaa0b7a7d78d6f5e555dfb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_heisenberg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 20 14:01:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-e69557f59ce79d4f9c170294106ea252c9ecac59f4e09088824b00f7fa192610-merged.mount: Deactivated successfully.
Jan 20 14:01:51 compute-0 podman[107378]: 2026-01-20 14:01:51.70027777 +0000 UTC m=+0.177641055 container remove b0293e1e2fe9df0f44aee27d20eb93f7c9788103bbcaa0b7a7d78d6f5e555dfb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_heisenberg, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 20 14:01:51 compute-0 systemd[1]: libpod-conmon-b0293e1e2fe9df0f44aee27d20eb93f7c9788103bbcaa0b7a7d78d6f5e555dfb.scope: Deactivated successfully.
Jan 20 14:01:51 compute-0 python3.9[107365]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 14:01:51 compute-0 sudo[107361]: pam_unix(sudo:session): session closed for user root
Jan 20 14:01:51 compute-0 podman[107423]: 2026-01-20 14:01:51.848501504 +0000 UTC m=+0.037423957 container create e642305b362445f25e0bd19c44f80ebb498229b559f0a4a7c6c97c9117d9bedc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_snyder, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 20 14:01:51 compute-0 systemd[1]: Started libpod-conmon-e642305b362445f25e0bd19c44f80ebb498229b559f0a4a7c6c97c9117d9bedc.scope.
Jan 20 14:01:51 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:01:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a23057f0d5b5508b75f2efc577bed18d2eadd945e8dbff2220e077d1539644e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:01:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a23057f0d5b5508b75f2efc577bed18d2eadd945e8dbff2220e077d1539644e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:01:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a23057f0d5b5508b75f2efc577bed18d2eadd945e8dbff2220e077d1539644e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:01:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a23057f0d5b5508b75f2efc577bed18d2eadd945e8dbff2220e077d1539644e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:01:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a23057f0d5b5508b75f2efc577bed18d2eadd945e8dbff2220e077d1539644e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 14:01:51 compute-0 podman[107423]: 2026-01-20 14:01:51.830364367 +0000 UTC m=+0.019286840 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:01:51 compute-0 podman[107423]: 2026-01-20 14:01:51.932499762 +0000 UTC m=+0.121422225 container init e642305b362445f25e0bd19c44f80ebb498229b559f0a4a7c6c97c9117d9bedc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_snyder, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:01:51 compute-0 podman[107423]: 2026-01-20 14:01:51.942528351 +0000 UTC m=+0.131450794 container start e642305b362445f25e0bd19c44f80ebb498229b559f0a4a7c6c97c9117d9bedc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_snyder, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 20 14:01:51 compute-0 podman[107423]: 2026-01-20 14:01:51.945554333 +0000 UTC m=+0.134476786 container attach e642305b362445f25e0bd19c44f80ebb498229b559f0a4a7c6c97c9117d9bedc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_snyder, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 20 14:01:51 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:01:51 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:01:51 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:01:51.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:01:52 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 11.11 scrub starts
Jan 20 14:01:52 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 11.11 scrub ok
Jan 20 14:01:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Optimize plan auto_2026-01-20_14:01:52
Jan 20 14:01:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 14:01:52 compute-0 ceph-mgr[74653]: [balancer INFO root] do_upmap
Jan 20 14:01:52 compute-0 ceph-mgr[74653]: [balancer INFO root] pools ['images', 'default.rgw.meta', 'cephfs.cephfs.data', 'default.rgw.control', 'vms', '.mgr', '.rgw.root', 'cephfs.cephfs.meta', 'volumes', 'backups', 'default.rgw.log']
Jan 20 14:01:52 compute-0 ceph-mgr[74653]: [balancer INFO root] prepared 0/10 changes
Jan 20 14:01:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:01:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:01:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:01:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:01:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:01:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:01:52 compute-0 ceph-mon[74360]: 8.11 scrub starts
Jan 20 14:01:52 compute-0 ceph-mon[74360]: 8.11 scrub ok
Jan 20 14:01:52 compute-0 ceph-mon[74360]: pgmap v300: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:01:52 compute-0 ceph-mon[74360]: 10.6 scrub starts
Jan 20 14:01:52 compute-0 ceph-mon[74360]: 10.6 scrub ok
Jan 20 14:01:52 compute-0 elated_snyder[107464]: --> passed data devices: 0 physical, 1 LVM
Jan 20 14:01:52 compute-0 elated_snyder[107464]: --> relative data size: 1.0
Jan 20 14:01:52 compute-0 elated_snyder[107464]: --> All data devices are unavailable
Jan 20 14:01:52 compute-0 podman[107423]: 2026-01-20 14:01:52.762190761 +0000 UTC m=+0.951113204 container died e642305b362445f25e0bd19c44f80ebb498229b559f0a4a7c6c97c9117d9bedc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_snyder, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 20 14:01:52 compute-0 systemd[1]: libpod-e642305b362445f25e0bd19c44f80ebb498229b559f0a4a7c6c97c9117d9bedc.scope: Deactivated successfully.
Jan 20 14:01:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-5a23057f0d5b5508b75f2efc577bed18d2eadd945e8dbff2220e077d1539644e-merged.mount: Deactivated successfully.
Jan 20 14:01:52 compute-0 sshd-session[98997]: Connection closed by 192.168.122.30 port 44268
Jan 20 14:01:52 compute-0 sshd-session[98994]: pam_unix(sshd:session): session closed for user zuul
Jan 20 14:01:52 compute-0 podman[107423]: 2026-01-20 14:01:52.816095119 +0000 UTC m=+1.005017572 container remove e642305b362445f25e0bd19c44f80ebb498229b559f0a4a7c6c97c9117d9bedc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_snyder, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Jan 20 14:01:52 compute-0 systemd[1]: session-35.scope: Deactivated successfully.
Jan 20 14:01:52 compute-0 systemd[1]: session-35.scope: Consumed 1min 4.637s CPU time.
Jan 20 14:01:52 compute-0 systemd-logind[796]: Session 35 logged out. Waiting for processes to exit.
Jan 20 14:01:52 compute-0 systemd-logind[796]: Removed session 35.
Jan 20 14:01:52 compute-0 systemd[1]: libpod-conmon-e642305b362445f25e0bd19c44f80ebb498229b559f0a4a7c6c97c9117d9bedc.scope: Deactivated successfully.
Jan 20 14:01:52 compute-0 sudo[107228]: pam_unix(sudo:session): session closed for user root
Jan 20 14:01:52 compute-0 sudo[107494]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:01:52 compute-0 sudo[107494]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:01:52 compute-0 sudo[107494]: pam_unix(sudo:session): session closed for user root
Jan 20 14:01:52 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:01:53 compute-0 sudo[107519]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:01:53 compute-0 sudo[107519]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:01:53 compute-0 sudo[107519]: pam_unix(sudo:session): session closed for user root
Jan 20 14:01:53 compute-0 sudo[107544]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:01:53 compute-0 sudo[107544]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:01:53 compute-0 sudo[107544]: pam_unix(sudo:session): session closed for user root
Jan 20 14:01:53 compute-0 sudo[107569]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- lvm list --format json
Jan 20 14:01:53 compute-0 sudo[107569]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:01:53 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v301: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:01:53 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:01:53 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:01:53 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:01:53.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:01:53 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 11.15 scrub starts
Jan 20 14:01:53 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 11.15 scrub ok
Jan 20 14:01:53 compute-0 podman[107634]: 2026-01-20 14:01:53.605432924 +0000 UTC m=+0.066397895 container create e2a01134faa2ac950af4694c3e9fa993e585b6e36cb017495d39849cfbd58249 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_leakey, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:01:53 compute-0 ceph-mon[74360]: 11.11 scrub starts
Jan 20 14:01:53 compute-0 ceph-mon[74360]: 11.11 scrub ok
Jan 20 14:01:53 compute-0 ceph-mon[74360]: 10.7 scrub starts
Jan 20 14:01:53 compute-0 ceph-mon[74360]: 10.7 scrub ok
Jan 20 14:01:53 compute-0 systemd[1]: Started libpod-conmon-e2a01134faa2ac950af4694c3e9fa993e585b6e36cb017495d39849cfbd58249.scope.
Jan 20 14:01:53 compute-0 podman[107634]: 2026-01-20 14:01:53.577617047 +0000 UTC m=+0.038582028 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:01:53 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:01:53 compute-0 podman[107634]: 2026-01-20 14:01:53.703789808 +0000 UTC m=+0.164754819 container init e2a01134faa2ac950af4694c3e9fa993e585b6e36cb017495d39849cfbd58249 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_leakey, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:01:53 compute-0 podman[107634]: 2026-01-20 14:01:53.715332419 +0000 UTC m=+0.176297400 container start e2a01134faa2ac950af4694c3e9fa993e585b6e36cb017495d39849cfbd58249 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_leakey, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:01:53 compute-0 podman[107634]: 2026-01-20 14:01:53.719638894 +0000 UTC m=+0.180603835 container attach e2a01134faa2ac950af4694c3e9fa993e585b6e36cb017495d39849cfbd58249 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_leakey, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:01:53 compute-0 ecstatic_leakey[107652]: 167 167
Jan 20 14:01:53 compute-0 systemd[1]: libpod-e2a01134faa2ac950af4694c3e9fa993e585b6e36cb017495d39849cfbd58249.scope: Deactivated successfully.
Jan 20 14:01:53 compute-0 podman[107634]: 2026-01-20 14:01:53.722797089 +0000 UTC m=+0.183762040 container died e2a01134faa2ac950af4694c3e9fa993e585b6e36cb017495d39849cfbd58249 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_leakey, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 14:01:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-444bb5bd4fc7563487cdaf16c871f428be599f27810baf0d4e5102540c85ab04-merged.mount: Deactivated successfully.
Jan 20 14:01:53 compute-0 podman[107634]: 2026-01-20 14:01:53.760499852 +0000 UTC m=+0.221464813 container remove e2a01134faa2ac950af4694c3e9fa993e585b6e36cb017495d39849cfbd58249 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_leakey, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:01:53 compute-0 systemd[1]: libpod-conmon-e2a01134faa2ac950af4694c3e9fa993e585b6e36cb017495d39849cfbd58249.scope: Deactivated successfully.
Jan 20 14:01:53 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:01:53 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:01:53 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:01:53.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:01:54 compute-0 podman[107675]: 2026-01-20 14:01:54.011042606 +0000 UTC m=+0.067372041 container create 3e8da7d8471cf22a916b440518a375e0f7ee73effdc4f13d5c111ba497faa5cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_moore, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 20 14:01:54 compute-0 systemd[1]: Started libpod-conmon-3e8da7d8471cf22a916b440518a375e0f7ee73effdc4f13d5c111ba497faa5cb.scope.
Jan 20 14:01:54 compute-0 podman[107675]: 2026-01-20 14:01:53.984025839 +0000 UTC m=+0.040355314 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:01:54 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:01:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1479a06759789ee4aac2e2b1ca8276533b58c364d26871b882b70a8991cb1d50/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:01:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1479a06759789ee4aac2e2b1ca8276533b58c364d26871b882b70a8991cb1d50/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:01:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1479a06759789ee4aac2e2b1ca8276533b58c364d26871b882b70a8991cb1d50/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:01:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1479a06759789ee4aac2e2b1ca8276533b58c364d26871b882b70a8991cb1d50/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:01:54 compute-0 podman[107675]: 2026-01-20 14:01:54.102181486 +0000 UTC m=+0.158510891 container init 3e8da7d8471cf22a916b440518a375e0f7ee73effdc4f13d5c111ba497faa5cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_moore, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0)
Jan 20 14:01:54 compute-0 podman[107675]: 2026-01-20 14:01:54.114132816 +0000 UTC m=+0.170462251 container start 3e8da7d8471cf22a916b440518a375e0f7ee73effdc4f13d5c111ba497faa5cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_moore, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 20 14:01:54 compute-0 podman[107675]: 2026-01-20 14:01:54.118030391 +0000 UTC m=+0.174359806 container attach 3e8da7d8471cf22a916b440518a375e0f7ee73effdc4f13d5c111ba497faa5cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_moore, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:01:54 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 11.18 deep-scrub starts
Jan 20 14:01:54 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 11.18 deep-scrub ok
Jan 20 14:01:54 compute-0 ceph-mon[74360]: pgmap v301: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:01:54 compute-0 ceph-mon[74360]: 11.15 scrub starts
Jan 20 14:01:54 compute-0 ceph-mon[74360]: 11.15 scrub ok
Jan 20 14:01:54 compute-0 determined_moore[107691]: {
Jan 20 14:01:54 compute-0 determined_moore[107691]:     "0": [
Jan 20 14:01:54 compute-0 determined_moore[107691]:         {
Jan 20 14:01:54 compute-0 determined_moore[107691]:             "devices": [
Jan 20 14:01:54 compute-0 determined_moore[107691]:                 "/dev/loop3"
Jan 20 14:01:54 compute-0 determined_moore[107691]:             ],
Jan 20 14:01:54 compute-0 determined_moore[107691]:             "lv_name": "ceph_lv0",
Jan 20 14:01:54 compute-0 determined_moore[107691]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:01:54 compute-0 determined_moore[107691]:             "lv_size": "7511998464",
Jan 20 14:01:54 compute-0 determined_moore[107691]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e399cf45-e6b6-5393-99f1-75c601d3f188,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afc3740a-ccee-46f8-b343-22648cc89376,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 20 14:01:54 compute-0 determined_moore[107691]:             "lv_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 14:01:54 compute-0 determined_moore[107691]:             "name": "ceph_lv0",
Jan 20 14:01:54 compute-0 determined_moore[107691]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:01:54 compute-0 determined_moore[107691]:             "tags": {
Jan 20 14:01:54 compute-0 determined_moore[107691]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:01:54 compute-0 determined_moore[107691]:                 "ceph.block_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 14:01:54 compute-0 determined_moore[107691]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 14:01:54 compute-0 determined_moore[107691]:                 "ceph.cluster_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 14:01:54 compute-0 determined_moore[107691]:                 "ceph.cluster_name": "ceph",
Jan 20 14:01:54 compute-0 determined_moore[107691]:                 "ceph.crush_device_class": "",
Jan 20 14:01:54 compute-0 determined_moore[107691]:                 "ceph.encrypted": "0",
Jan 20 14:01:54 compute-0 determined_moore[107691]:                 "ceph.osd_fsid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 14:01:54 compute-0 determined_moore[107691]:                 "ceph.osd_id": "0",
Jan 20 14:01:54 compute-0 determined_moore[107691]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 14:01:54 compute-0 determined_moore[107691]:                 "ceph.type": "block",
Jan 20 14:01:54 compute-0 determined_moore[107691]:                 "ceph.vdo": "0"
Jan 20 14:01:54 compute-0 determined_moore[107691]:             },
Jan 20 14:01:54 compute-0 determined_moore[107691]:             "type": "block",
Jan 20 14:01:54 compute-0 determined_moore[107691]:             "vg_name": "ceph_vg0"
Jan 20 14:01:54 compute-0 determined_moore[107691]:         }
Jan 20 14:01:54 compute-0 determined_moore[107691]:     ]
Jan 20 14:01:54 compute-0 determined_moore[107691]: }
Jan 20 14:01:54 compute-0 systemd[1]: libpod-3e8da7d8471cf22a916b440518a375e0f7ee73effdc4f13d5c111ba497faa5cb.scope: Deactivated successfully.
Jan 20 14:01:54 compute-0 podman[107675]: 2026-01-20 14:01:54.867642829 +0000 UTC m=+0.923972234 container died 3e8da7d8471cf22a916b440518a375e0f7ee73effdc4f13d5c111ba497faa5cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_moore, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:01:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-1479a06759789ee4aac2e2b1ca8276533b58c364d26871b882b70a8991cb1d50-merged.mount: Deactivated successfully.
Jan 20 14:01:54 compute-0 podman[107675]: 2026-01-20 14:01:54.936869499 +0000 UTC m=+0.993198934 container remove 3e8da7d8471cf22a916b440518a375e0f7ee73effdc4f13d5c111ba497faa5cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_moore, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:01:54 compute-0 systemd[1]: libpod-conmon-3e8da7d8471cf22a916b440518a375e0f7ee73effdc4f13d5c111ba497faa5cb.scope: Deactivated successfully.
Jan 20 14:01:54 compute-0 sudo[107569]: pam_unix(sudo:session): session closed for user root
Jan 20 14:01:55 compute-0 sudo[107711]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:01:55 compute-0 sudo[107711]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:01:55 compute-0 sudo[107711]: pam_unix(sudo:session): session closed for user root
Jan 20 14:01:55 compute-0 sudo[107736]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:01:55 compute-0 sudo[107736]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:01:55 compute-0 sudo[107736]: pam_unix(sudo:session): session closed for user root
Jan 20 14:01:55 compute-0 sudo[107761]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:01:55 compute-0 sudo[107761]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:01:55 compute-0 sudo[107761]: pam_unix(sudo:session): session closed for user root
Jan 20 14:01:55 compute-0 sudo[107786]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- raw list --format json
Jan 20 14:01:55 compute-0 sudo[107786]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:01:55 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v302: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:01:55 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:01:55 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:01:55 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:01:55.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:01:55 compute-0 podman[107851]: 2026-01-20 14:01:55.635318351 +0000 UTC m=+0.039412270 container create c96221e603edca95db5e2cc473f0ff035662aadfcff8beb239afb25402cfd902 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_rubin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True)
Jan 20 14:01:55 compute-0 systemd[1]: Started libpod-conmon-c96221e603edca95db5e2cc473f0ff035662aadfcff8beb239afb25402cfd902.scope.
Jan 20 14:01:55 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:01:55 compute-0 podman[107851]: 2026-01-20 14:01:55.700899634 +0000 UTC m=+0.104993573 container init c96221e603edca95db5e2cc473f0ff035662aadfcff8beb239afb25402cfd902 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_rubin, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:01:55 compute-0 podman[107851]: 2026-01-20 14:01:55.71230454 +0000 UTC m=+0.116398489 container start c96221e603edca95db5e2cc473f0ff035662aadfcff8beb239afb25402cfd902 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_rubin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 20 14:01:55 compute-0 podman[107851]: 2026-01-20 14:01:55.620829971 +0000 UTC m=+0.024923910 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:01:55 compute-0 podman[107851]: 2026-01-20 14:01:55.718794854 +0000 UTC m=+0.122888773 container attach c96221e603edca95db5e2cc473f0ff035662aadfcff8beb239afb25402cfd902 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_rubin, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:01:55 compute-0 sad_rubin[107868]: 167 167
Jan 20 14:01:55 compute-0 systemd[1]: libpod-c96221e603edca95db5e2cc473f0ff035662aadfcff8beb239afb25402cfd902.scope: Deactivated successfully.
Jan 20 14:01:55 compute-0 podman[107851]: 2026-01-20 14:01:55.720626704 +0000 UTC m=+0.124720633 container died c96221e603edca95db5e2cc473f0ff035662aadfcff8beb239afb25402cfd902 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_rubin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:01:55 compute-0 ceph-mon[74360]: 11.18 deep-scrub starts
Jan 20 14:01:55 compute-0 ceph-mon[74360]: 11.18 deep-scrub ok
Jan 20 14:01:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-395a66655a082c325c1a94005c677b67878852dac5b679cc75d8005a365de36c-merged.mount: Deactivated successfully.
Jan 20 14:01:55 compute-0 podman[107851]: 2026-01-20 14:01:55.772506368 +0000 UTC m=+0.176600327 container remove c96221e603edca95db5e2cc473f0ff035662aadfcff8beb239afb25402cfd902 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_rubin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 20 14:01:55 compute-0 systemd[1]: libpod-conmon-c96221e603edca95db5e2cc473f0ff035662aadfcff8beb239afb25402cfd902.scope: Deactivated successfully.
Jan 20 14:01:55 compute-0 podman[107892]: 2026-01-20 14:01:55.959671698 +0000 UTC m=+0.059496850 container create 1634c89e0569c1e9d5577af04b9ae9a4bb32d89a6e556d256de2c829a96ae559 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_rosalind, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:01:55 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:01:55 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:01:55 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:01:55.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:01:56 compute-0 systemd[1]: Started libpod-conmon-1634c89e0569c1e9d5577af04b9ae9a4bb32d89a6e556d256de2c829a96ae559.scope.
Jan 20 14:01:56 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:01:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/972e9d53f3da9f566f8ca33e8601756ef3f32c053d748d19e2933064614d5a38/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:01:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/972e9d53f3da9f566f8ca33e8601756ef3f32c053d748d19e2933064614d5a38/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:01:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/972e9d53f3da9f566f8ca33e8601756ef3f32c053d748d19e2933064614d5a38/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:01:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/972e9d53f3da9f566f8ca33e8601756ef3f32c053d748d19e2933064614d5a38/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:01:56 compute-0 podman[107892]: 2026-01-20 14:01:55.9418778 +0000 UTC m=+0.041702952 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:01:56 compute-0 podman[107892]: 2026-01-20 14:01:56.058452843 +0000 UTC m=+0.158278045 container init 1634c89e0569c1e9d5577af04b9ae9a4bb32d89a6e556d256de2c829a96ae559 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_rosalind, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 20 14:01:56 compute-0 podman[107892]: 2026-01-20 14:01:56.068815571 +0000 UTC m=+0.168640733 container start 1634c89e0569c1e9d5577af04b9ae9a4bb32d89a6e556d256de2c829a96ae559 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_rosalind, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 20 14:01:56 compute-0 podman[107892]: 2026-01-20 14:01:56.072610244 +0000 UTC m=+0.172435406 container attach 1634c89e0569c1e9d5577af04b9ae9a4bb32d89a6e556d256de2c829a96ae559 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_rosalind, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef)
Jan 20 14:01:56 compute-0 ceph-mon[74360]: pgmap v302: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:01:56 compute-0 great_rosalind[107908]: {
Jan 20 14:01:56 compute-0 great_rosalind[107908]:     "afc3740a-ccee-46f8-b343-22648cc89376": {
Jan 20 14:01:56 compute-0 great_rosalind[107908]:         "ceph_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 14:01:56 compute-0 great_rosalind[107908]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 20 14:01:56 compute-0 great_rosalind[107908]:         "osd_id": 0,
Jan 20 14:01:56 compute-0 great_rosalind[107908]:         "osd_uuid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 14:01:56 compute-0 great_rosalind[107908]:         "type": "bluestore"
Jan 20 14:01:56 compute-0 great_rosalind[107908]:     }
Jan 20 14:01:56 compute-0 great_rosalind[107908]: }
Jan 20 14:01:56 compute-0 systemd[1]: libpod-1634c89e0569c1e9d5577af04b9ae9a4bb32d89a6e556d256de2c829a96ae559.scope: Deactivated successfully.
Jan 20 14:01:56 compute-0 podman[107892]: 2026-01-20 14:01:56.912179338 +0000 UTC m=+1.012004500 container died 1634c89e0569c1e9d5577af04b9ae9a4bb32d89a6e556d256de2c829a96ae559 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_rosalind, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:01:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-972e9d53f3da9f566f8ca33e8601756ef3f32c053d748d19e2933064614d5a38-merged.mount: Deactivated successfully.
Jan 20 14:01:56 compute-0 podman[107892]: 2026-01-20 14:01:56.98555083 +0000 UTC m=+1.085375992 container remove 1634c89e0569c1e9d5577af04b9ae9a4bb32d89a6e556d256de2c829a96ae559 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_rosalind, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 20 14:01:56 compute-0 systemd[1]: libpod-conmon-1634c89e0569c1e9d5577af04b9ae9a4bb32d89a6e556d256de2c829a96ae559.scope: Deactivated successfully.
Jan 20 14:01:57 compute-0 sudo[107786]: pam_unix(sudo:session): session closed for user root
Jan 20 14:01:57 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 14:01:57 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:01:57 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 14:01:57 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:01:57 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 6e9991c5-31da-4c97-8906-3f40d2dd852f does not exist
Jan 20 14:01:57 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 036e2a37-f07c-4391-bac7-12cae3fa99f8 does not exist
Jan 20 14:01:57 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 6ef1f55d-f539-4abe-a42d-249972f1a3c3 does not exist
Jan 20 14:01:57 compute-0 sudo[107941]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:01:57 compute-0 sudo[107941]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:01:57 compute-0 sudo[107941]: pam_unix(sudo:session): session closed for user root
Jan 20 14:01:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 14:01:57 compute-0 sudo[107966]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 14:01:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 14:01:57 compute-0 sudo[107966]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:01:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:01:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:01:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:01:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:01:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:01:57 compute-0 sudo[107966]: pam_unix(sudo:session): session closed for user root
Jan 20 14:01:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:01:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:01:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:01:57 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v303: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:01:57 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:01:57 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:01:57 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:01:57.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:01:57 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:01:58 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:01:58 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:01:58 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:01:57.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:01:58 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:01:58 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:01:58 compute-0 ceph-mon[74360]: pgmap v303: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:01:58 compute-0 sshd-session[107993]: Accepted publickey for zuul from 192.168.122.30 port 43274 ssh2: ECDSA SHA256:Yw0kyD5N4lqNgr1J3b5cYIIxKFrTRY8zW6kk+n6imz4
Jan 20 14:01:58 compute-0 systemd-logind[796]: New session 36 of user zuul.
Jan 20 14:01:58 compute-0 systemd[1]: Started Session 36 of User zuul.
Jan 20 14:01:58 compute-0 sshd-session[107993]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 20 14:01:59 compute-0 ceph-mon[74360]: 10.9 deep-scrub starts
Jan 20 14:01:59 compute-0 ceph-mon[74360]: 10.9 deep-scrub ok
Jan 20 14:01:59 compute-0 ceph-mon[74360]: 10.a scrub starts
Jan 20 14:01:59 compute-0 ceph-mon[74360]: 10.a scrub ok
Jan 20 14:01:59 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v304: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:01:59 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:01:59 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:01:59 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:01:59.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:01:59 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 11.1f deep-scrub starts
Jan 20 14:01:59 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 11.1f deep-scrub ok
Jan 20 14:01:59 compute-0 python3.9[108147]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 14:02:00 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:02:00 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:02:00 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:02:00.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:02:00 compute-0 ceph-mon[74360]: 11.e scrub starts
Jan 20 14:02:00 compute-0 ceph-mon[74360]: 11.e scrub ok
Jan 20 14:02:00 compute-0 ceph-mon[74360]: pgmap v304: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:02:00 compute-0 ceph-mon[74360]: 10.b scrub starts
Jan 20 14:02:00 compute-0 ceph-mon[74360]: 10.b scrub ok
Jan 20 14:02:00 compute-0 ceph-mon[74360]: 11.1f deep-scrub starts
Jan 20 14:02:00 compute-0 ceph-mon[74360]: 11.1f deep-scrub ok
Jan 20 14:02:01 compute-0 sudo[108302]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xidrxxflfauzzyznfkvpeplebwofclzw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917720.4259076-68-59829282011197/AnsiballZ_getent.py'
Jan 20 14:02:01 compute-0 sudo[108302]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:02:01 compute-0 ceph-mon[74360]: 11.13 scrub starts
Jan 20 14:02:01 compute-0 ceph-mon[74360]: 11.13 scrub ok
Jan 20 14:02:01 compute-0 ceph-mon[74360]: 10.c scrub starts
Jan 20 14:02:01 compute-0 ceph-mon[74360]: 10.c scrub ok
Jan 20 14:02:01 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v305: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:02:01 compute-0 python3.9[108304]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Jan 20 14:02:01 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:02:01 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:02:01 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:02:01.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:02:01 compute-0 sudo[108302]: pam_unix(sudo:session): session closed for user root
Jan 20 14:02:01 compute-0 sshd-session[107991]: Connection closed by authenticating user root 159.223.5.14 port 35764 [preauth]
Jan 20 14:02:02 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:02:02 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:02:02 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:02:02.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:02:02 compute-0 sudo[108456]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ugqozzmgzqsmxlqourpzhqeiygzcsats ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917721.7317204-104-55557167273073/AnsiballZ_setup.py'
Jan 20 14:02:02 compute-0 sudo[108456]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:02:02 compute-0 ceph-mon[74360]: pgmap v305: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:02:02 compute-0 python3.9[108458]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 20 14:02:02 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 10.1b scrub starts
Jan 20 14:02:02 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 10.1b scrub ok
Jan 20 14:02:02 compute-0 sudo[108456]: pam_unix(sudo:session): session closed for user root
Jan 20 14:02:02 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:02:02 compute-0 sudo[108540]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ftdnvdqbikrlxklrwghfyglntlkhflmt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917721.7317204-104-55557167273073/AnsiballZ_dnf.py'
Jan 20 14:02:02 compute-0 sudo[108540]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:02:03 compute-0 ceph-mon[74360]: 10.d scrub starts
Jan 20 14:02:03 compute-0 ceph-mon[74360]: 10.d scrub ok
Jan 20 14:02:03 compute-0 ceph-mon[74360]: 10.1b scrub starts
Jan 20 14:02:03 compute-0 ceph-mon[74360]: 10.1b scrub ok
Jan 20 14:02:03 compute-0 python3.9[108542]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 20 14:02:03 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v306: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:02:03 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:02:03 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:02:03 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:02:03.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:02:04 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:02:04 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:02:04 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:02:04.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:02:04 compute-0 ceph-mon[74360]: pgmap v306: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:02:04 compute-0 sudo[108540]: pam_unix(sudo:session): session closed for user root
Jan 20 14:02:05 compute-0 sudo[108694]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-avrbtjnuuhpjfqwfxiokijitwvlwahgo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917724.8142045-146-62208973999911/AnsiballZ_dnf.py'
Jan 20 14:02:05 compute-0 sudo[108694]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:02:05 compute-0 ceph-mon[74360]: 8.c scrub starts
Jan 20 14:02:05 compute-0 ceph-mon[74360]: 8.c scrub ok
Jan 20 14:02:05 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v307: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:02:05 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:02:05 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:02:05 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:02:05.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:02:05 compute-0 python3.9[108696]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 20 14:02:06 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:02:06 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:02:06 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:02:06.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:02:06 compute-0 ceph-mon[74360]: 10.e deep-scrub starts
Jan 20 14:02:06 compute-0 ceph-mon[74360]: 10.e deep-scrub ok
Jan 20 14:02:06 compute-0 ceph-mon[74360]: 8.d scrub starts
Jan 20 14:02:06 compute-0 ceph-mon[74360]: 8.d scrub ok
Jan 20 14:02:06 compute-0 ceph-mon[74360]: pgmap v307: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:02:06 compute-0 ceph-mon[74360]: 10.16 scrub starts
Jan 20 14:02:06 compute-0 ceph-mon[74360]: 10.16 scrub ok
Jan 20 14:02:06 compute-0 sudo[108694]: pam_unix(sudo:session): session closed for user root
Jan 20 14:02:07 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v308: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:02:07 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:02:07 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:02:07 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:02:07.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:02:07 compute-0 sudo[108848]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wlrclxdhlkpmtwxkrzuqxslkzfdffofg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917726.8227386-170-202276106954802/AnsiballZ_systemd.py'
Jan 20 14:02:07 compute-0 sudo[108848]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:02:07 compute-0 python3.9[108850]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 20 14:02:07 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:02:08 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:02:08 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:02:08 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:02:08.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:02:08 compute-0 ceph-mon[74360]: pgmap v308: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:02:08 compute-0 sudo[108848]: pam_unix(sudo:session): session closed for user root
Jan 20 14:02:09 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v309: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:02:09 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:02:09 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:02:09 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:02:09.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:02:09 compute-0 python3.9[109004]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 14:02:10 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:02:10 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:02:10 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:02:10.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:02:10 compute-0 ceph-mon[74360]: pgmap v309: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:02:10 compute-0 sudo[109155]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ikwwrxneewvvuhbymxtqyhilppmtokxl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917730.2358747-224-214678824661121/AnsiballZ_sefcontext.py'
Jan 20 14:02:10 compute-0 sudo[109155]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:02:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 14:02:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:02:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 20 14:02:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:02:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:02:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:02:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:02:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:02:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:02:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:02:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:02:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:02:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 20 14:02:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:02:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:02:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:02:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 20 14:02:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:02:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 20 14:02:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:02:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:02:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:02:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 20 14:02:10 compute-0 sudo[109158]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:02:10 compute-0 sudo[109158]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:02:10 compute-0 sudo[109158]: pam_unix(sudo:session): session closed for user root
Jan 20 14:02:10 compute-0 python3.9[109157]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Jan 20 14:02:10 compute-0 sudo[109183]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:02:10 compute-0 sudo[109183]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:02:10 compute-0 sudo[109183]: pam_unix(sudo:session): session closed for user root
Jan 20 14:02:11 compute-0 sudo[109155]: pam_unix(sudo:session): session closed for user root
Jan 20 14:02:11 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v310: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:02:11 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:02:11 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:02:11 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:02:11.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:02:11 compute-0 ceph-mon[74360]: 10.17 deep-scrub starts
Jan 20 14:02:11 compute-0 ceph-mon[74360]: 10.17 deep-scrub ok
Jan 20 14:02:12 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:02:12 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:02:12 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:02:12.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:02:12 compute-0 python3.9[109358]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 14:02:12 compute-0 ceph-mon[74360]: 8.b scrub starts
Jan 20 14:02:12 compute-0 ceph-mon[74360]: 8.b scrub ok
Jan 20 14:02:12 compute-0 ceph-mon[74360]: pgmap v310: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:02:12 compute-0 ceph-mon[74360]: 10.1a scrub starts
Jan 20 14:02:12 compute-0 ceph-mon[74360]: 10.1a scrub ok
Jan 20 14:02:12 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:02:13 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v311: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:02:13 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:02:13 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:02:13 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:02:13.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:02:13 compute-0 ceph-mon[74360]: 10.1c scrub starts
Jan 20 14:02:13 compute-0 ceph-mon[74360]: 10.1c scrub ok
Jan 20 14:02:13 compute-0 sudo[109515]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-duzfrukwjoaopsfpegfptgsmwilhouav ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917732.8721573-278-257531209290030/AnsiballZ_dnf.py'
Jan 20 14:02:13 compute-0 sudo[109515]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:02:14 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:02:14 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:02:14 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:02:14.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:02:14 compute-0 python3.9[109517]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 20 14:02:14 compute-0 ceph-mon[74360]: pgmap v311: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:02:15 compute-0 sudo[109515]: pam_unix(sudo:session): session closed for user root
Jan 20 14:02:15 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v312: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:02:15 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:02:15 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:02:15 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:02:15.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:02:16 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:02:16 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:02:16 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:02:16.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:02:16 compute-0 ceph-mon[74360]: 8.6 scrub starts
Jan 20 14:02:16 compute-0 ceph-mon[74360]: 8.6 scrub ok
Jan 20 14:02:16 compute-0 ceph-mon[74360]: pgmap v312: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:02:16 compute-0 ceph-mon[74360]: 10.1d deep-scrub starts
Jan 20 14:02:16 compute-0 ceph-mon[74360]: 10.1d deep-scrub ok
Jan 20 14:02:16 compute-0 sudo[109669]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wgzsccfifufxhiavapgajuxixaglgjzx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917736.0900803-302-280788560495183/AnsiballZ_command.py'
Jan 20 14:02:16 compute-0 sudo[109669]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:02:16 compute-0 python3.9[109671]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:02:17 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v313: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:02:17 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:02:17 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:02:17 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:02:17.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:02:17 compute-0 sudo[109669]: pam_unix(sudo:session): session closed for user root
Jan 20 14:02:17 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:02:18 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:02:18 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:02:18 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:02:18.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:02:18 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 10.18 scrub starts
Jan 20 14:02:18 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 10.18 scrub ok
Jan 20 14:02:18 compute-0 sudo[109957]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-plzlzitmjablwcyoawkbwlkjjjpfqtlu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917737.8096268-326-99185974673596/AnsiballZ_file.py'
Jan 20 14:02:18 compute-0 sudo[109957]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:02:18 compute-0 ceph-mon[74360]: pgmap v313: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:02:18 compute-0 python3.9[109959]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None attributes=None
Jan 20 14:02:18 compute-0 sudo[109957]: pam_unix(sudo:session): session closed for user root
Jan 20 14:02:19 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v314: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:02:19 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:02:19 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:02:19 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:02:19.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:02:19 compute-0 python3.9[110109]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 14:02:19 compute-0 ceph-mon[74360]: 10.18 scrub starts
Jan 20 14:02:19 compute-0 ceph-mon[74360]: 10.18 scrub ok
Jan 20 14:02:19 compute-0 ceph-mon[74360]: 10.1f scrub starts
Jan 20 14:02:19 compute-0 ceph-mon[74360]: 10.1f scrub ok
Jan 20 14:02:20 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:02:20 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:02:20 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:02:20.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:02:20 compute-0 sudo[110262]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mjbitcxgqcfgegjkswbkzlqfiehtlgrt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917739.7583919-374-225400870811289/AnsiballZ_dnf.py'
Jan 20 14:02:20 compute-0 sudo[110262]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:02:20 compute-0 python3.9[110264]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 20 14:02:20 compute-0 ceph-mon[74360]: 8.1c scrub starts
Jan 20 14:02:20 compute-0 ceph-mon[74360]: 8.1c scrub ok
Jan 20 14:02:20 compute-0 ceph-mon[74360]: pgmap v314: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:02:21 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v315: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:02:21 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:02:21 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:02:21 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:02:21.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:02:21 compute-0 sudo[110262]: pam_unix(sudo:session): session closed for user root
Jan 20 14:02:21 compute-0 ceph-mon[74360]: 8.14 scrub starts
Jan 20 14:02:21 compute-0 ceph-mon[74360]: 8.14 scrub ok
Jan 20 14:02:22 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:02:22 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:02:22 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:02:22.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:02:22 compute-0 sudo[110418]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-udstzejxmpqkryhhziftqplfbwfmfivy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917741.8562455-401-8155905963072/AnsiballZ_dnf.py'
Jan 20 14:02:22 compute-0 sudo[110418]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:02:22 compute-0 python3.9[110420]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 20 14:02:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:02:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:02:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:02:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:02:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:02:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:02:22 compute-0 sshd-session[110374]: Connection closed by authenticating user root 157.245.78.139 port 59804 [preauth]
Jan 20 14:02:22 compute-0 ceph-mon[74360]: pgmap v315: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:02:22 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:02:23 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v316: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:02:23 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:02:23 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:02:23 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:02:23.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:02:23 compute-0 sudo[110418]: pam_unix(sudo:session): session closed for user root
Jan 20 14:02:23 compute-0 ceph-mon[74360]: 11.14 scrub starts
Jan 20 14:02:23 compute-0 ceph-mon[74360]: 11.14 scrub ok
Jan 20 14:02:24 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:02:24 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:02:24 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:02:24.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:02:24 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 10.19 scrub starts
Jan 20 14:02:24 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 10.19 scrub ok
Jan 20 14:02:24 compute-0 sudo[110572]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cvexajtvfarnevvgllmjiapwkyayhkei ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917744.2730901-437-124516736583674/AnsiballZ_stat.py'
Jan 20 14:02:24 compute-0 sudo[110572]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:02:24 compute-0 python3.9[110574]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 14:02:24 compute-0 sudo[110572]: pam_unix(sudo:session): session closed for user root
Jan 20 14:02:24 compute-0 ceph-mon[74360]: pgmap v316: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:02:25 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 10.2 scrub starts
Jan 20 14:02:25 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 10.2 scrub ok
Jan 20 14:02:25 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v317: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:02:25 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:02:25 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:02:25 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:02:25.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:02:25 compute-0 sudo[110726]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gtgyfilhozghmkpgnkyquhvydribanwj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917745.0680115-461-57107843381278/AnsiballZ_slurp.py'
Jan 20 14:02:25 compute-0 sudo[110726]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:02:25 compute-0 python3.9[110728]: ansible-ansible.builtin.slurp Invoked with path=/var/lib/edpm-config/os-net-config.returncode src=/var/lib/edpm-config/os-net-config.returncode
Jan 20 14:02:25 compute-0 sudo[110726]: pam_unix(sudo:session): session closed for user root
Jan 20 14:02:25 compute-0 ceph-mon[74360]: 10.19 scrub starts
Jan 20 14:02:25 compute-0 ceph-mon[74360]: 10.19 scrub ok
Jan 20 14:02:26 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:02:26 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:02:26 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:02:26.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:02:26 compute-0 ceph-mon[74360]: 10.2 scrub starts
Jan 20 14:02:26 compute-0 ceph-mon[74360]: 10.2 scrub ok
Jan 20 14:02:26 compute-0 ceph-mon[74360]: pgmap v317: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:02:26 compute-0 ceph-mon[74360]: 8.5 scrub starts
Jan 20 14:02:26 compute-0 ceph-mon[74360]: 8.5 scrub ok
Jan 20 14:02:27 compute-0 sshd-session[107997]: Connection closed by 192.168.122.30 port 43274
Jan 20 14:02:27 compute-0 sshd-session[107993]: pam_unix(sshd:session): session closed for user zuul
Jan 20 14:02:27 compute-0 systemd[1]: session-36.scope: Deactivated successfully.
Jan 20 14:02:27 compute-0 systemd[1]: session-36.scope: Consumed 18.277s CPU time.
Jan 20 14:02:27 compute-0 systemd-logind[796]: Session 36 logged out. Waiting for processes to exit.
Jan 20 14:02:27 compute-0 systemd-logind[796]: Removed session 36.
Jan 20 14:02:27 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 10.8 scrub starts
Jan 20 14:02:27 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 10.8 scrub ok
Jan 20 14:02:27 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v318: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:02:27 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:02:27 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:02:27 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:02:27.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:02:27 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:02:28 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:02:28 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:02:28 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:02:28.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:02:29 compute-0 ceph-mon[74360]: 10.8 scrub starts
Jan 20 14:02:29 compute-0 ceph-mon[74360]: 10.8 scrub ok
Jan 20 14:02:29 compute-0 ceph-mon[74360]: pgmap v318: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:02:29 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v319: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:02:29 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:02:29 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:02:29 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:02:29.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:02:30 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:02:30 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:02:30 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:02:30.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:02:30 compute-0 ceph-mon[74360]: pgmap v319: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:02:30 compute-0 ceph-mon[74360]: 8.17 scrub starts
Jan 20 14:02:30 compute-0 ceph-mon[74360]: 8.17 scrub ok
Jan 20 14:02:30 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 10.5 scrub starts
Jan 20 14:02:30 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 10.5 scrub ok
Jan 20 14:02:31 compute-0 sudo[110756]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:02:31 compute-0 sudo[110756]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:02:31 compute-0 sudo[110756]: pam_unix(sudo:session): session closed for user root
Jan 20 14:02:31 compute-0 sudo[110781]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:02:31 compute-0 sudo[110781]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:02:31 compute-0 sudo[110781]: pam_unix(sudo:session): session closed for user root
Jan 20 14:02:31 compute-0 ceph-mon[74360]: 10.5 scrub starts
Jan 20 14:02:31 compute-0 ceph-mon[74360]: 10.5 scrub ok
Jan 20 14:02:31 compute-0 ceph-mon[74360]: 8.9 scrub starts
Jan 20 14:02:31 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v320: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:02:31 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:02:31 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:02:31 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:02:31.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:02:32 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:02:32 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:02:32 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:02:32.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:02:32 compute-0 ceph-mon[74360]: 8.9 scrub ok
Jan 20 14:02:32 compute-0 ceph-mon[74360]: pgmap v320: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:02:32 compute-0 ceph-mon[74360]: 8.10 deep-scrub starts
Jan 20 14:02:32 compute-0 ceph-mon[74360]: 8.10 deep-scrub ok
Jan 20 14:02:32 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:02:33 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 10.13 scrub starts
Jan 20 14:02:33 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 10.13 scrub ok
Jan 20 14:02:33 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v321: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:02:33 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:02:33 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:02:33 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:02:33.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:02:33 compute-0 sshd-session[110808]: Accepted publickey for zuul from 192.168.122.30 port 35872 ssh2: ECDSA SHA256:Yw0kyD5N4lqNgr1J3b5cYIIxKFrTRY8zW6kk+n6imz4
Jan 20 14:02:33 compute-0 systemd-logind[796]: New session 37 of user zuul.
Jan 20 14:02:34 compute-0 systemd[1]: Started Session 37 of User zuul.
Jan 20 14:02:34 compute-0 sshd-session[110808]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 20 14:02:34 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:02:34 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:02:34 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:02:34.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:02:34 compute-0 ceph-mon[74360]: 11.19 scrub starts
Jan 20 14:02:34 compute-0 ceph-mon[74360]: 11.19 scrub ok
Jan 20 14:02:34 compute-0 ceph-mon[74360]: 10.13 scrub starts
Jan 20 14:02:34 compute-0 ceph-mon[74360]: 10.13 scrub ok
Jan 20 14:02:34 compute-0 ceph-mon[74360]: pgmap v321: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:02:35 compute-0 python3.9[110961]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 14:02:35 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v322: 321 pgs: 321 active+clean; 455 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:02:35 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:02:35 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:02:35 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:02:35.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:02:36 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:02:36 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:02:36 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:02:36.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:02:36 compute-0 python3.9[111116]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 20 14:02:37 compute-0 ceph-mon[74360]: pgmap v322: 321 pgs: 321 active+clean; 455 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:02:37 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v323: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:02:37 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:02:37 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:02:37 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:02:37.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:02:37 compute-0 python3.9[111310]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:02:37 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:02:38 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:02:38 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:02:38 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:02:38.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:02:38 compute-0 ceph-mon[74360]: 11.8 scrub starts
Jan 20 14:02:38 compute-0 ceph-mon[74360]: 11.8 scrub ok
Jan 20 14:02:38 compute-0 ceph-mon[74360]: 11.12 scrub starts
Jan 20 14:02:38 compute-0 ceph-mon[74360]: 11.12 scrub ok
Jan 20 14:02:38 compute-0 ceph-mon[74360]: pgmap v323: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:02:38 compute-0 sshd-session[110811]: Connection closed by 192.168.122.30 port 35872
Jan 20 14:02:38 compute-0 sshd-session[110808]: pam_unix(sshd:session): session closed for user zuul
Jan 20 14:02:38 compute-0 systemd[1]: session-37.scope: Deactivated successfully.
Jan 20 14:02:38 compute-0 systemd[1]: session-37.scope: Consumed 2.425s CPU time.
Jan 20 14:02:38 compute-0 systemd-logind[796]: Session 37 logged out. Waiting for processes to exit.
Jan 20 14:02:38 compute-0 systemd-logind[796]: Removed session 37.
Jan 20 14:02:39 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 10.15 deep-scrub starts
Jan 20 14:02:39 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 10.15 deep-scrub ok
Jan 20 14:02:39 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v324: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:02:39 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:02:39 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:02:39 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:02:39.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:02:40 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:02:40 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:02:40 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:02:40.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:02:40 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 10.14 scrub starts
Jan 20 14:02:40 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 10.14 scrub ok
Jan 20 14:02:40 compute-0 ceph-mon[74360]: 10.15 deep-scrub starts
Jan 20 14:02:40 compute-0 ceph-mon[74360]: 10.15 deep-scrub ok
Jan 20 14:02:40 compute-0 ceph-mon[74360]: pgmap v324: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:02:41 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v325: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:02:41 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:02:41 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:02:41 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:02:41.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:02:41 compute-0 ceph-mon[74360]: 10.14 scrub starts
Jan 20 14:02:41 compute-0 ceph-mon[74360]: 10.14 scrub ok
Jan 20 14:02:42 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:02:42 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:02:42 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:02:42.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:02:42 compute-0 ceph-mon[74360]: 8.1f scrub starts
Jan 20 14:02:42 compute-0 ceph-mon[74360]: 8.1f scrub ok
Jan 20 14:02:42 compute-0 ceph-mon[74360]: pgmap v325: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:02:42 compute-0 ceph-mon[74360]: 11.5 scrub starts
Jan 20 14:02:42 compute-0 ceph-mon[74360]: 11.5 scrub ok
Jan 20 14:02:42 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:02:43 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v326: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:02:43 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:02:43 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:02:43 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:02:43.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:02:43 compute-0 ceph-mon[74360]: 11.f scrub starts
Jan 20 14:02:43 compute-0 ceph-mon[74360]: 11.f scrub ok
Jan 20 14:02:44 compute-0 sshd-session[111340]: Accepted publickey for zuul from 192.168.122.30 port 40870 ssh2: ECDSA SHA256:Yw0kyD5N4lqNgr1J3b5cYIIxKFrTRY8zW6kk+n6imz4
Jan 20 14:02:44 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:02:44 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:02:44 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:02:44.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:02:44 compute-0 systemd-logind[796]: New session 38 of user zuul.
Jan 20 14:02:44 compute-0 systemd[1]: Started Session 38 of User zuul.
Jan 20 14:02:44 compute-0 sshd-session[111340]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 20 14:02:44 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 9.19 scrub starts
Jan 20 14:02:44 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 9.19 scrub ok
Jan 20 14:02:44 compute-0 ceph-mon[74360]: pgmap v326: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:02:44 compute-0 ceph-mon[74360]: 11.4 scrub starts
Jan 20 14:02:44 compute-0 ceph-mon[74360]: 11.4 scrub ok
Jan 20 14:02:45 compute-0 python3.9[111493]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 14:02:45 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v327: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:02:45 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:02:45 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:02:45 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:02:45.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:02:45 compute-0 ceph-mon[74360]: 9.19 scrub starts
Jan 20 14:02:45 compute-0 ceph-mon[74360]: 9.19 scrub ok
Jan 20 14:02:46 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:02:46 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:02:46 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:02:46.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:02:46 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 9.1a scrub starts
Jan 20 14:02:46 compute-0 python3.9[111648]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 14:02:46 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 9.1a scrub ok
Jan 20 14:02:46 compute-0 ceph-mon[74360]: pgmap v327: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:02:47 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 9.1b scrub starts
Jan 20 14:02:47 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 9.1b scrub ok
Jan 20 14:02:47 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v328: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:02:47 compute-0 sudo[111802]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vhiuydgdanrcvmgkzjqnrqisrgvubefp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917766.8226714-80-73300879733419/AnsiballZ_setup.py'
Jan 20 14:02:47 compute-0 sudo[111802]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:02:47 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:02:47 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:02:47 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:02:47.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:02:47 compute-0 python3.9[111804]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 20 14:02:47 compute-0 ceph-mon[74360]: 9.1a scrub starts
Jan 20 14:02:47 compute-0 ceph-mon[74360]: 9.1a scrub ok
Jan 20 14:02:47 compute-0 sudo[111802]: pam_unix(sudo:session): session closed for user root
Jan 20 14:02:47 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:02:48 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:02:48 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:02:48 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:02:48.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:02:48 compute-0 sudo[111887]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bdhnjwbgwzipvfvlkldnvexpbunxbwea ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917766.8226714-80-73300879733419/AnsiballZ_dnf.py'
Jan 20 14:02:48 compute-0 sudo[111887]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:02:48 compute-0 python3.9[111889]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 20 14:02:48 compute-0 ceph-mon[74360]: 9.1b scrub starts
Jan 20 14:02:48 compute-0 ceph-mon[74360]: 9.1b scrub ok
Jan 20 14:02:48 compute-0 ceph-mon[74360]: pgmap v328: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:02:49 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v329: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:02:49 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:02:49 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:02:49 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:02:49.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:02:49 compute-0 sudo[111887]: pam_unix(sudo:session): session closed for user root
Jan 20 14:02:50 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:02:50 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:02:50 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:02:50.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:02:50 compute-0 sudo[112041]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-muqdtqneewpyeiteagymnevuadqaiehb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917770.0050867-116-277771776836631/AnsiballZ_setup.py'
Jan 20 14:02:50 compute-0 sudo[112041]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:02:50 compute-0 python3.9[112043]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 20 14:02:50 compute-0 ceph-mon[74360]: 11.3 scrub starts
Jan 20 14:02:50 compute-0 ceph-mon[74360]: 11.3 scrub ok
Jan 20 14:02:50 compute-0 ceph-mon[74360]: pgmap v329: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:02:50 compute-0 sudo[112041]: pam_unix(sudo:session): session closed for user root
Jan 20 14:02:51 compute-0 sudo[112111]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:02:51 compute-0 sudo[112111]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:02:51 compute-0 sudo[112111]: pam_unix(sudo:session): session closed for user root
Jan 20 14:02:51 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 9.1e scrub starts
Jan 20 14:02:51 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 9.1e scrub ok
Jan 20 14:02:51 compute-0 sudo[112136]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:02:51 compute-0 sudo[112136]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:02:51 compute-0 sudo[112136]: pam_unix(sudo:session): session closed for user root
Jan 20 14:02:51 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v330: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:02:51 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:02:51 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:02:51 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:02:51.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:02:51 compute-0 ceph-mon[74360]: 9.b scrub starts
Jan 20 14:02:51 compute-0 ceph-mon[74360]: 9.b scrub ok
Jan 20 14:02:52 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:02:52 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:02:52 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:02:52.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:02:52 compute-0 sudo[112287]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-exuohzwfnmwnjbqdjioervsgdxzllput ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917771.5834215-149-109232119341205/AnsiballZ_file.py'
Jan 20 14:02:52 compute-0 sudo[112287]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:02:52 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 9.1f scrub starts
Jan 20 14:02:52 compute-0 ceph-osd[84815]: log_channel(cluster) log [DBG] : 9.1f scrub ok
Jan 20 14:02:52 compute-0 python3.9[112289]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:02:52 compute-0 sudo[112287]: pam_unix(sudo:session): session closed for user root
Jan 20 14:02:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Optimize plan auto_2026-01-20_14:02:52
Jan 20 14:02:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 14:02:52 compute-0 ceph-mgr[74653]: [balancer INFO root] do_upmap
Jan 20 14:02:52 compute-0 ceph-mgr[74653]: [balancer INFO root] pools ['volumes', 'default.rgw.control', '.mgr', 'cephfs.cephfs.data', 'images', 'cephfs.cephfs.meta', 'default.rgw.meta', 'vms', '.rgw.root', 'backups', 'default.rgw.log']
Jan 20 14:02:52 compute-0 ceph-mgr[74653]: [balancer INFO root] prepared 0/10 changes
Jan 20 14:02:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:02:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:02:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:02:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:02:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:02:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:02:52 compute-0 ceph-mon[74360]: 9.1e scrub starts
Jan 20 14:02:52 compute-0 ceph-mon[74360]: 9.1e scrub ok
Jan 20 14:02:52 compute-0 ceph-mon[74360]: pgmap v330: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:02:52 compute-0 ceph-mon[74360]: 11.1 scrub starts
Jan 20 14:02:52 compute-0 ceph-mon[74360]: 11.1 scrub ok
Jan 20 14:02:52 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:02:53 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v331: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:02:53 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:02:53 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:02:53 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:02:53.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:02:53 compute-0 ceph-mon[74360]: 9.1f scrub starts
Jan 20 14:02:53 compute-0 ceph-mon[74360]: 9.1f scrub ok
Jan 20 14:02:53 compute-0 sudo[112440]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ualfwihidqsslkfoltpbsorhipiqemub ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917773.461068-173-245701173232989/AnsiballZ_command.py'
Jan 20 14:02:53 compute-0 sudo[112440]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:02:54 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:02:54 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:02:54 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:02:54.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:02:54 compute-0 python3.9[112442]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:02:54 compute-0 sudo[112440]: pam_unix(sudo:session): session closed for user root
Jan 20 14:02:54 compute-0 ceph-mon[74360]: 9.3 scrub starts
Jan 20 14:02:54 compute-0 ceph-mon[74360]: 9.3 scrub ok
Jan 20 14:02:54 compute-0 ceph-mon[74360]: pgmap v331: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:02:54 compute-0 ceph-mon[74360]: 11.7 scrub starts
Jan 20 14:02:54 compute-0 ceph-mon[74360]: 11.7 scrub ok
Jan 20 14:02:55 compute-0 sudo[112605]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gxawetiouvpajjnovnilenhfrksvjuyh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917774.5206892-197-29962536645495/AnsiballZ_stat.py'
Jan 20 14:02:55 compute-0 sudo[112605]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:02:55 compute-0 python3.9[112607]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:02:55 compute-0 sudo[112605]: pam_unix(sudo:session): session closed for user root
Jan 20 14:02:55 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v332: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:02:55 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:02:55 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:02:55 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:02:55.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:02:55 compute-0 sudo[112684]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xdjswrogysasktubxnjleklhlqocwmnb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917774.5206892-197-29962536645495/AnsiballZ_file.py'
Jan 20 14:02:55 compute-0 sudo[112684]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:02:55 compute-0 ceph-mon[74360]: 9.17 scrub starts
Jan 20 14:02:55 compute-0 ceph-mon[74360]: 9.17 scrub ok
Jan 20 14:02:55 compute-0 python3.9[112686]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:02:55 compute-0 sudo[112684]: pam_unix(sudo:session): session closed for user root
Jan 20 14:02:56 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:02:56 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:02:56 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:02:56.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:02:56 compute-0 sudo[112836]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jljjyemovtinsyskymdstnycliwbzjvp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917776.223188-233-254640086837661/AnsiballZ_stat.py'
Jan 20 14:02:56 compute-0 sudo[112836]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:02:56 compute-0 ceph-mon[74360]: pgmap v332: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:02:56 compute-0 python3.9[112838]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:02:56 compute-0 sudo[112836]: pam_unix(sudo:session): session closed for user root
Jan 20 14:02:57 compute-0 sudo[112914]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ggurvnqxxspnfakrqypjbmwbndgnmrlz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917776.223188-233-254640086837661/AnsiballZ_file.py'
Jan 20 14:02:57 compute-0 sudo[112914]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:02:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 14:02:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:02:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 14:02:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:02:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:02:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:02:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:02:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:02:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:02:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:02:57 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v333: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:02:57 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:02:57 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:02:57 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:02:57.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:02:57 compute-0 python3.9[112916]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:02:57 compute-0 sudo[112914]: pam_unix(sudo:session): session closed for user root
Jan 20 14:02:57 compute-0 sudo[112941]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:02:57 compute-0 sudo[112941]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:02:57 compute-0 sudo[112941]: pam_unix(sudo:session): session closed for user root
Jan 20 14:02:57 compute-0 sudo[112971]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:02:57 compute-0 sudo[112971]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:02:57 compute-0 sudo[112971]: pam_unix(sudo:session): session closed for user root
Jan 20 14:02:57 compute-0 sudo[113032]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:02:57 compute-0 sudo[113032]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:02:57 compute-0 sudo[113032]: pam_unix(sudo:session): session closed for user root
Jan 20 14:02:57 compute-0 ceph-mon[74360]: 9.13 scrub starts
Jan 20 14:02:57 compute-0 ceph-mon[74360]: 9.13 scrub ok
Jan 20 14:02:57 compute-0 sudo[113069]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 14:02:57 compute-0 sudo[113069]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:02:57 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:02:58 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:02:58 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:02:58 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:02:58.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:02:58 compute-0 sudo[113184]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pryuuiqrdnsckwlbfsqgzwbiybfpfgsa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917777.7348578-272-72811277879358/AnsiballZ_ini_file.py'
Jan 20 14:02:58 compute-0 sudo[113184]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:02:58 compute-0 sudo[113069]: pam_unix(sudo:session): session closed for user root
Jan 20 14:02:58 compute-0 python3.9[113186]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:02:58 compute-0 sudo[113184]: pam_unix(sudo:session): session closed for user root
Jan 20 14:02:58 compute-0 ceph-mon[74360]: pgmap v333: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:02:58 compute-0 ceph-mon[74360]: 8.1b scrub starts
Jan 20 14:02:58 compute-0 ceph-mon[74360]: 8.1b scrub ok
Jan 20 14:02:59 compute-0 sudo[113350]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-khsttdrdobndkhiyvzfjsnfbxucruqwl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917778.6891508-272-59988111823239/AnsiballZ_ini_file.py'
Jan 20 14:02:59 compute-0 sudo[113350]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:02:59 compute-0 python3.9[113352]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:02:59 compute-0 sudo[113350]: pam_unix(sudo:session): session closed for user root
Jan 20 14:02:59 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v334: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:02:59 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 20 14:02:59 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:02:59 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 20 14:02:59 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:02:59 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:02:59 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:02:59 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:02:59.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:02:59 compute-0 sudo[113503]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-twvdhllenodsccpoapzklehnzrmqkhbx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917779.415175-272-23682332901319/AnsiballZ_ini_file.py'
Jan 20 14:02:59 compute-0 sudo[113503]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:02:59 compute-0 python3.9[113505]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:02:59 compute-0 sudo[113503]: pam_unix(sudo:session): session closed for user root
Jan 20 14:03:00 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:03:00 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:03:00 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:03:00.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:03:00 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 14:03:00 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:03:00 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 20 14:03:00 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:03:00 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 20 14:03:00 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:03:00 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 0cdf472e-12b6-45ec-b8ce-35b409bc6b6d does not exist
Jan 20 14:03:00 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 3630bef1-f234-4b76-8d3d-d19cc70fb007 does not exist
Jan 20 14:03:00 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 339611d4-cd9f-4b77-a1ce-862846966d43 does not exist
Jan 20 14:03:00 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 20 14:03:00 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:03:00 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 20 14:03:00 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:03:00 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 14:03:00 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:03:00 compute-0 sudo[113578]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:03:00 compute-0 sudo[113578]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:03:00 compute-0 sudo[113578]: pam_unix(sudo:session): session closed for user root
Jan 20 14:03:00 compute-0 ceph-mon[74360]: pgmap v334: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:03:00 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:03:00 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:03:00 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:03:00 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:03:00 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:03:00 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:03:00 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:03:00 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:03:00 compute-0 sudo[113630]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:03:00 compute-0 sudo[113630]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:03:00 compute-0 sudo[113630]: pam_unix(sudo:session): session closed for user root
Jan 20 14:03:00 compute-0 sudo[113678]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:03:00 compute-0 sudo[113678]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:03:00 compute-0 sudo[113678]: pam_unix(sudo:session): session closed for user root
Jan 20 14:03:00 compute-0 sudo[113730]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fufozcwgnxgownmwuyithcgdqpocdsdk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917780.1874444-272-272927767792334/AnsiballZ_ini_file.py'
Jan 20 14:03:00 compute-0 sudo[113730]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:03:00 compute-0 sudo[113731]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 14:03:00 compute-0 sudo[113731]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:03:00 compute-0 python3.9[113740]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:03:00 compute-0 sudo[113730]: pam_unix(sudo:session): session closed for user root
Jan 20 14:03:00 compute-0 podman[113800]: 2026-01-20 14:03:00.894174641 +0000 UTC m=+0.037872030 container create 8cc6198c3110bdae1bc7497ef7ec9fcd14b7dfa926f614ec566c8eb092bcab88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_euclid, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 20 14:03:00 compute-0 systemd[1]: Started libpod-conmon-8cc6198c3110bdae1bc7497ef7ec9fcd14b7dfa926f614ec566c8eb092bcab88.scope.
Jan 20 14:03:00 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:03:00 compute-0 podman[113800]: 2026-01-20 14:03:00.964912157 +0000 UTC m=+0.108609506 container init 8cc6198c3110bdae1bc7497ef7ec9fcd14b7dfa926f614ec566c8eb092bcab88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_euclid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:03:00 compute-0 podman[113800]: 2026-01-20 14:03:00.9710552 +0000 UTC m=+0.114752539 container start 8cc6198c3110bdae1bc7497ef7ec9fcd14b7dfa926f614ec566c8eb092bcab88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_euclid, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 14:03:00 compute-0 podman[113800]: 2026-01-20 14:03:00.877002334 +0000 UTC m=+0.020699693 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:03:00 compute-0 agitated_euclid[113840]: 167 167
Jan 20 14:03:00 compute-0 podman[113800]: 2026-01-20 14:03:00.975636863 +0000 UTC m=+0.119334212 container attach 8cc6198c3110bdae1bc7497ef7ec9fcd14b7dfa926f614ec566c8eb092bcab88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_euclid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 20 14:03:00 compute-0 systemd[1]: libpod-8cc6198c3110bdae1bc7497ef7ec9fcd14b7dfa926f614ec566c8eb092bcab88.scope: Deactivated successfully.
Jan 20 14:03:00 compute-0 podman[113800]: 2026-01-20 14:03:00.976788923 +0000 UTC m=+0.120486272 container died 8cc6198c3110bdae1bc7497ef7ec9fcd14b7dfa926f614ec566c8eb092bcab88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_euclid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 20 14:03:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-e36907a2c5ba9fd7991529f46aa7b7fa7b97c63c054db50fa5085f77a2611731-merged.mount: Deactivated successfully.
Jan 20 14:03:01 compute-0 podman[113800]: 2026-01-20 14:03:01.015844324 +0000 UTC m=+0.159541673 container remove 8cc6198c3110bdae1bc7497ef7ec9fcd14b7dfa926f614ec566c8eb092bcab88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_euclid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 20 14:03:01 compute-0 systemd[1]: libpod-conmon-8cc6198c3110bdae1bc7497ef7ec9fcd14b7dfa926f614ec566c8eb092bcab88.scope: Deactivated successfully.
Jan 20 14:03:01 compute-0 podman[113905]: 2026-01-20 14:03:01.208873448 +0000 UTC m=+0.049977682 container create dfdf96058b8e6cf15ce0b5f238148037416e10a6a16c10ac312463f2e10e0a93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_cannon, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 20 14:03:01 compute-0 systemd[1]: Started libpod-conmon-dfdf96058b8e6cf15ce0b5f238148037416e10a6a16c10ac312463f2e10e0a93.scope.
Jan 20 14:03:01 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:03:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5c43fbbe1c7dc7fc6c87b19354b4f0f02f7d28645a8162dbd44511f01689f14/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:03:01 compute-0 podman[113905]: 2026-01-20 14:03:01.192580645 +0000 UTC m=+0.033684889 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:03:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5c43fbbe1c7dc7fc6c87b19354b4f0f02f7d28645a8162dbd44511f01689f14/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:03:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5c43fbbe1c7dc7fc6c87b19354b4f0f02f7d28645a8162dbd44511f01689f14/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:03:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5c43fbbe1c7dc7fc6c87b19354b4f0f02f7d28645a8162dbd44511f01689f14/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:03:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5c43fbbe1c7dc7fc6c87b19354b4f0f02f7d28645a8162dbd44511f01689f14/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 14:03:01 compute-0 podman[113905]: 2026-01-20 14:03:01.289825096 +0000 UTC m=+0.130929330 container init dfdf96058b8e6cf15ce0b5f238148037416e10a6a16c10ac312463f2e10e0a93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_cannon, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:03:01 compute-0 podman[113905]: 2026-01-20 14:03:01.298915499 +0000 UTC m=+0.140019723 container start dfdf96058b8e6cf15ce0b5f238148037416e10a6a16c10ac312463f2e10e0a93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_cannon, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:03:01 compute-0 podman[113905]: 2026-01-20 14:03:01.30310305 +0000 UTC m=+0.144207284 container attach dfdf96058b8e6cf15ce0b5f238148037416e10a6a16c10ac312463f2e10e0a93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_cannon, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 20 14:03:01 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v335: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:03:01 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:03:01 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:03:01 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:03:01.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:03:01 compute-0 sudo[114012]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eollezkfmgtzctupdlmcjegnryrifcqe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917781.114-365-208571860954177/AnsiballZ_dnf.py'
Jan 20 14:03:01 compute-0 sudo[114012]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:03:01 compute-0 python3.9[114014]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 20 14:03:02 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:03:02 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:03:02 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:03:02.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:03:02 compute-0 funny_cannon[113957]: --> passed data devices: 0 physical, 1 LVM
Jan 20 14:03:02 compute-0 funny_cannon[113957]: --> relative data size: 1.0
Jan 20 14:03:02 compute-0 funny_cannon[113957]: --> All data devices are unavailable
Jan 20 14:03:02 compute-0 systemd[1]: libpod-dfdf96058b8e6cf15ce0b5f238148037416e10a6a16c10ac312463f2e10e0a93.scope: Deactivated successfully.
Jan 20 14:03:02 compute-0 podman[113905]: 2026-01-20 14:03:02.156826781 +0000 UTC m=+0.997931015 container died dfdf96058b8e6cf15ce0b5f238148037416e10a6a16c10ac312463f2e10e0a93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_cannon, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default)
Jan 20 14:03:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-e5c43fbbe1c7dc7fc6c87b19354b4f0f02f7d28645a8162dbd44511f01689f14-merged.mount: Deactivated successfully.
Jan 20 14:03:02 compute-0 podman[113905]: 2026-01-20 14:03:02.238616221 +0000 UTC m=+1.079720475 container remove dfdf96058b8e6cf15ce0b5f238148037416e10a6a16c10ac312463f2e10e0a93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_cannon, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:03:02 compute-0 systemd[1]: libpod-conmon-dfdf96058b8e6cf15ce0b5f238148037416e10a6a16c10ac312463f2e10e0a93.scope: Deactivated successfully.
Jan 20 14:03:02 compute-0 sudo[113731]: pam_unix(sudo:session): session closed for user root
Jan 20 14:03:02 compute-0 sudo[114039]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:03:02 compute-0 sudo[114039]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:03:02 compute-0 sudo[114039]: pam_unix(sudo:session): session closed for user root
Jan 20 14:03:02 compute-0 ceph-mon[74360]: pgmap v335: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:03:02 compute-0 ceph-mon[74360]: 8.8 scrub starts
Jan 20 14:03:02 compute-0 ceph-mon[74360]: 8.8 scrub ok
Jan 20 14:03:02 compute-0 sudo[114064]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:03:02 compute-0 sudo[114064]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:03:02 compute-0 sudo[114064]: pam_unix(sudo:session): session closed for user root
Jan 20 14:03:02 compute-0 sudo[114089]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:03:02 compute-0 sudo[114089]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:03:02 compute-0 sudo[114089]: pam_unix(sudo:session): session closed for user root
Jan 20 14:03:02 compute-0 sudo[114114]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- lvm list --format json
Jan 20 14:03:02 compute-0 sudo[114114]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:03:02 compute-0 sudo[114012]: pam_unix(sudo:session): session closed for user root
Jan 20 14:03:03 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:03:03 compute-0 podman[114180]: 2026-01-20 14:03:03.001211944 +0000 UTC m=+0.057549915 container create 590939233dbf3cb8574efeb9a747bdbfd932fe5b49d8df5d993ec6d4ef750bcb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_napier, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 14:03:03 compute-0 systemd[1]: Started libpod-conmon-590939233dbf3cb8574efeb9a747bdbfd932fe5b49d8df5d993ec6d4ef750bcb.scope.
Jan 20 14:03:03 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:03:03 compute-0 podman[114180]: 2026-01-20 14:03:02.975406597 +0000 UTC m=+0.031744658 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:03:03 compute-0 podman[114180]: 2026-01-20 14:03:03.075195386 +0000 UTC m=+0.131533417 container init 590939233dbf3cb8574efeb9a747bdbfd932fe5b49d8df5d993ec6d4ef750bcb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_napier, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 20 14:03:03 compute-0 podman[114180]: 2026-01-20 14:03:03.081838463 +0000 UTC m=+0.138176434 container start 590939233dbf3cb8574efeb9a747bdbfd932fe5b49d8df5d993ec6d4ef750bcb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_napier, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:03:03 compute-0 podman[114180]: 2026-01-20 14:03:03.085502081 +0000 UTC m=+0.141840142 container attach 590939233dbf3cb8574efeb9a747bdbfd932fe5b49d8df5d993ec6d4ef750bcb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_napier, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:03:03 compute-0 quirky_napier[114219]: 167 167
Jan 20 14:03:03 compute-0 systemd[1]: libpod-590939233dbf3cb8574efeb9a747bdbfd932fe5b49d8df5d993ec6d4ef750bcb.scope: Deactivated successfully.
Jan 20 14:03:03 compute-0 podman[114180]: 2026-01-20 14:03:03.088798779 +0000 UTC m=+0.145136750 container died 590939233dbf3cb8574efeb9a747bdbfd932fe5b49d8df5d993ec6d4ef750bcb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_napier, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0)
Jan 20 14:03:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-dbdf1ff186e06254dc01620e11b2123ee2433aae043ce59f7b0b3147dd01c66d-merged.mount: Deactivated successfully.
Jan 20 14:03:03 compute-0 podman[114180]: 2026-01-20 14:03:03.127084559 +0000 UTC m=+0.183422530 container remove 590939233dbf3cb8574efeb9a747bdbfd932fe5b49d8df5d993ec6d4ef750bcb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_napier, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 20 14:03:03 compute-0 systemd[1]: libpod-conmon-590939233dbf3cb8574efeb9a747bdbfd932fe5b49d8df5d993ec6d4ef750bcb.scope: Deactivated successfully.
Jan 20 14:03:03 compute-0 podman[114243]: 2026-01-20 14:03:03.283181569 +0000 UTC m=+0.040965083 container create eb62c15458f8a9a5b51ffe786c929537b5a18007527d39cded41df5231ed46f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_rubin, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:03:03 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v336: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:03:03 compute-0 systemd[1]: Started libpod-conmon-eb62c15458f8a9a5b51ffe786c929537b5a18007527d39cded41df5231ed46f8.scope.
Jan 20 14:03:03 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:03:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08d6105219d3b2ca9f4744979a16e5b912c3f39fb4d096d6aef50850b7e58a8d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:03:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08d6105219d3b2ca9f4744979a16e5b912c3f39fb4d096d6aef50850b7e58a8d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:03:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08d6105219d3b2ca9f4744979a16e5b912c3f39fb4d096d6aef50850b7e58a8d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:03:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08d6105219d3b2ca9f4744979a16e5b912c3f39fb4d096d6aef50850b7e58a8d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:03:03 compute-0 podman[114243]: 2026-01-20 14:03:03.353298678 +0000 UTC m=+0.111082212 container init eb62c15458f8a9a5b51ffe786c929537b5a18007527d39cded41df5231ed46f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_rubin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:03:03 compute-0 podman[114243]: 2026-01-20 14:03:03.267222403 +0000 UTC m=+0.025005927 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:03:03 compute-0 podman[114243]: 2026-01-20 14:03:03.367730672 +0000 UTC m=+0.125514186 container start eb62c15458f8a9a5b51ffe786c929537b5a18007527d39cded41df5231ed46f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_rubin, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 20 14:03:03 compute-0 podman[114243]: 2026-01-20 14:03:03.370825884 +0000 UTC m=+0.128609398 container attach eb62c15458f8a9a5b51ffe786c929537b5a18007527d39cded41df5231ed46f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_rubin, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 20 14:03:03 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:03:03 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:03:03 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:03:03.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:03:03 compute-0 ceph-mon[74360]: 9.7 scrub starts
Jan 20 14:03:03 compute-0 ceph-mon[74360]: 9.7 scrub ok
Jan 20 14:03:03 compute-0 sudo[114390]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lnxbcagnxelratuhmcxnnumeubskxour ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917783.4513342-398-135662879562656/AnsiballZ_setup.py'
Jan 20 14:03:03 compute-0 sudo[114390]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:03:04 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:03:04 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:03:04 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:03:04.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:03:04 compute-0 python3.9[114392]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 14:03:04 compute-0 friendly_rubin[114259]: {
Jan 20 14:03:04 compute-0 friendly_rubin[114259]:     "0": [
Jan 20 14:03:04 compute-0 friendly_rubin[114259]:         {
Jan 20 14:03:04 compute-0 friendly_rubin[114259]:             "devices": [
Jan 20 14:03:04 compute-0 friendly_rubin[114259]:                 "/dev/loop3"
Jan 20 14:03:04 compute-0 friendly_rubin[114259]:             ],
Jan 20 14:03:04 compute-0 friendly_rubin[114259]:             "lv_name": "ceph_lv0",
Jan 20 14:03:04 compute-0 friendly_rubin[114259]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:03:04 compute-0 friendly_rubin[114259]:             "lv_size": "7511998464",
Jan 20 14:03:04 compute-0 friendly_rubin[114259]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e399cf45-e6b6-5393-99f1-75c601d3f188,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afc3740a-ccee-46f8-b343-22648cc89376,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 20 14:03:04 compute-0 friendly_rubin[114259]:             "lv_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 14:03:04 compute-0 friendly_rubin[114259]:             "name": "ceph_lv0",
Jan 20 14:03:04 compute-0 friendly_rubin[114259]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:03:04 compute-0 friendly_rubin[114259]:             "tags": {
Jan 20 14:03:04 compute-0 friendly_rubin[114259]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:03:04 compute-0 friendly_rubin[114259]:                 "ceph.block_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 14:03:04 compute-0 friendly_rubin[114259]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 14:03:04 compute-0 friendly_rubin[114259]:                 "ceph.cluster_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 14:03:04 compute-0 friendly_rubin[114259]:                 "ceph.cluster_name": "ceph",
Jan 20 14:03:04 compute-0 friendly_rubin[114259]:                 "ceph.crush_device_class": "",
Jan 20 14:03:04 compute-0 friendly_rubin[114259]:                 "ceph.encrypted": "0",
Jan 20 14:03:04 compute-0 friendly_rubin[114259]:                 "ceph.osd_fsid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 14:03:04 compute-0 friendly_rubin[114259]:                 "ceph.osd_id": "0",
Jan 20 14:03:04 compute-0 friendly_rubin[114259]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 14:03:04 compute-0 friendly_rubin[114259]:                 "ceph.type": "block",
Jan 20 14:03:04 compute-0 friendly_rubin[114259]:                 "ceph.vdo": "0"
Jan 20 14:03:04 compute-0 friendly_rubin[114259]:             },
Jan 20 14:03:04 compute-0 friendly_rubin[114259]:             "type": "block",
Jan 20 14:03:04 compute-0 friendly_rubin[114259]:             "vg_name": "ceph_vg0"
Jan 20 14:03:04 compute-0 friendly_rubin[114259]:         }
Jan 20 14:03:04 compute-0 friendly_rubin[114259]:     ]
Jan 20 14:03:04 compute-0 friendly_rubin[114259]: }
Jan 20 14:03:04 compute-0 sudo[114390]: pam_unix(sudo:session): session closed for user root
Jan 20 14:03:04 compute-0 systemd[1]: libpod-eb62c15458f8a9a5b51ffe786c929537b5a18007527d39cded41df5231ed46f8.scope: Deactivated successfully.
Jan 20 14:03:04 compute-0 podman[114243]: 2026-01-20 14:03:04.126265477 +0000 UTC m=+0.884049021 container died eb62c15458f8a9a5b51ffe786c929537b5a18007527d39cded41df5231ed46f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_rubin, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 20 14:03:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-08d6105219d3b2ca9f4744979a16e5b912c3f39fb4d096d6aef50850b7e58a8d-merged.mount: Deactivated successfully.
Jan 20 14:03:04 compute-0 podman[114243]: 2026-01-20 14:03:04.188717871 +0000 UTC m=+0.946501385 container remove eb62c15458f8a9a5b51ffe786c929537b5a18007527d39cded41df5231ed46f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_rubin, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 20 14:03:04 compute-0 systemd[1]: libpod-conmon-eb62c15458f8a9a5b51ffe786c929537b5a18007527d39cded41df5231ed46f8.scope: Deactivated successfully.
Jan 20 14:03:04 compute-0 sudo[114114]: pam_unix(sudo:session): session closed for user root
Jan 20 14:03:04 compute-0 sudo[114412]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:03:04 compute-0 sudo[114412]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:03:04 compute-0 sudo[114412]: pam_unix(sudo:session): session closed for user root
Jan 20 14:03:04 compute-0 sudo[114461]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:03:04 compute-0 sudo[114461]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:03:04 compute-0 sudo[114461]: pam_unix(sudo:session): session closed for user root
Jan 20 14:03:04 compute-0 sudo[114486]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:03:04 compute-0 sudo[114486]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:03:04 compute-0 sudo[114486]: pam_unix(sudo:session): session closed for user root
Jan 20 14:03:04 compute-0 ceph-mon[74360]: pgmap v336: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:03:04 compute-0 sudo[114511]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- raw list --format json
Jan 20 14:03:04 compute-0 sudo[114511]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:03:04 compute-0 podman[114663]: 2026-01-20 14:03:04.78379146 +0000 UTC m=+0.052385917 container create a3cc24328bc50b71fd948e177aec97bb824ba92284a6bbf3dcedb7f54f70b95b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_lalande, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 20 14:03:04 compute-0 systemd[1]: Started libpod-conmon-a3cc24328bc50b71fd948e177aec97bb824ba92284a6bbf3dcedb7f54f70b95b.scope.
Jan 20 14:03:04 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:03:04 compute-0 sudo[114719]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-prqozzyjzfswpnkfoohjmprsoikoccpe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917784.5325134-422-169803964701634/AnsiballZ_stat.py'
Jan 20 14:03:04 compute-0 podman[114663]: 2026-01-20 14:03:04.757727446 +0000 UTC m=+0.026321983 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:03:04 compute-0 sudo[114719]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:03:04 compute-0 podman[114663]: 2026-01-20 14:03:04.865087057 +0000 UTC m=+0.133681534 container init a3cc24328bc50b71fd948e177aec97bb824ba92284a6bbf3dcedb7f54f70b95b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_lalande, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 20 14:03:04 compute-0 podman[114663]: 2026-01-20 14:03:04.876068839 +0000 UTC m=+0.144663296 container start a3cc24328bc50b71fd948e177aec97bb824ba92284a6bbf3dcedb7f54f70b95b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_lalande, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 14:03:04 compute-0 podman[114663]: 2026-01-20 14:03:04.879363567 +0000 UTC m=+0.147958024 container attach a3cc24328bc50b71fd948e177aec97bb824ba92284a6bbf3dcedb7f54f70b95b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_lalande, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:03:04 compute-0 blissful_lalande[114717]: 167 167
Jan 20 14:03:04 compute-0 systemd[1]: libpod-a3cc24328bc50b71fd948e177aec97bb824ba92284a6bbf3dcedb7f54f70b95b.scope: Deactivated successfully.
Jan 20 14:03:04 compute-0 podman[114663]: 2026-01-20 14:03:04.884041652 +0000 UTC m=+0.152636119 container died a3cc24328bc50b71fd948e177aec97bb824ba92284a6bbf3dcedb7f54f70b95b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_lalande, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Jan 20 14:03:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-4d30dbae6c941b3daa6fcd35b9b2d16bb21cc3184effb560d1d54869c3b1d6d6-merged.mount: Deactivated successfully.
Jan 20 14:03:04 compute-0 podman[114663]: 2026-01-20 14:03:04.927016677 +0000 UTC m=+0.195611174 container remove a3cc24328bc50b71fd948e177aec97bb824ba92284a6bbf3dcedb7f54f70b95b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_lalande, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 20 14:03:04 compute-0 systemd[1]: libpod-conmon-a3cc24328bc50b71fd948e177aec97bb824ba92284a6bbf3dcedb7f54f70b95b.scope: Deactivated successfully.
Jan 20 14:03:05 compute-0 python3.9[114722]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 14:03:05 compute-0 sudo[114719]: pam_unix(sudo:session): session closed for user root
Jan 20 14:03:05 compute-0 podman[114744]: 2026-01-20 14:03:05.146419444 +0000 UTC m=+0.040726406 container create 6b4ffc84b55c3ce9c74019c95a1ed557e62cb9a26827c56aaa11dbf941247658 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_lamport, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:03:05 compute-0 systemd[1]: Started libpod-conmon-6b4ffc84b55c3ce9c74019c95a1ed557e62cb9a26827c56aaa11dbf941247658.scope.
Jan 20 14:03:05 compute-0 podman[114744]: 2026-01-20 14:03:05.12900422 +0000 UTC m=+0.023311202 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:03:05 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:03:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62a21363d47fd15cafebe1d4add96c6f0eb4464e2d46d97e68666e052239f09b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:03:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62a21363d47fd15cafebe1d4add96c6f0eb4464e2d46d97e68666e052239f09b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:03:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62a21363d47fd15cafebe1d4add96c6f0eb4464e2d46d97e68666e052239f09b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:03:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62a21363d47fd15cafebe1d4add96c6f0eb4464e2d46d97e68666e052239f09b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:03:05 compute-0 podman[114744]: 2026-01-20 14:03:05.250985231 +0000 UTC m=+0.145292233 container init 6b4ffc84b55c3ce9c74019c95a1ed557e62cb9a26827c56aaa11dbf941247658 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_lamport, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 20 14:03:05 compute-0 podman[114744]: 2026-01-20 14:03:05.262727694 +0000 UTC m=+0.157034686 container start 6b4ffc84b55c3ce9c74019c95a1ed557e62cb9a26827c56aaa11dbf941247658 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_lamport, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 20 14:03:05 compute-0 podman[114744]: 2026-01-20 14:03:05.267130891 +0000 UTC m=+0.161437883 container attach 6b4ffc84b55c3ce9c74019c95a1ed557e62cb9a26827c56aaa11dbf941247658 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_lamport, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:03:05 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v337: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:03:05 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:03:05 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:03:05 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:03:05.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:03:05 compute-0 ceph-mon[74360]: 8.19 scrub starts
Jan 20 14:03:05 compute-0 ceph-mon[74360]: 8.19 scrub ok
Jan 20 14:03:05 compute-0 sudo[114916]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ouulajsflwdabndmtygjdrsziglowzyb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917785.3933327-449-84787963444895/AnsiballZ_stat.py'
Jan 20 14:03:05 compute-0 sudo[114916]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:03:06 compute-0 python3.9[114918]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 14:03:06 compute-0 sudo[114916]: pam_unix(sudo:session): session closed for user root
Jan 20 14:03:06 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:03:06 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 20 14:03:06 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:03:06.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 20 14:03:06 compute-0 stoic_lamport[114785]: {
Jan 20 14:03:06 compute-0 stoic_lamport[114785]:     "afc3740a-ccee-46f8-b343-22648cc89376": {
Jan 20 14:03:06 compute-0 stoic_lamport[114785]:         "ceph_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 14:03:06 compute-0 stoic_lamport[114785]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 20 14:03:06 compute-0 stoic_lamport[114785]:         "osd_id": 0,
Jan 20 14:03:06 compute-0 stoic_lamport[114785]:         "osd_uuid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 14:03:06 compute-0 stoic_lamport[114785]:         "type": "bluestore"
Jan 20 14:03:06 compute-0 stoic_lamport[114785]:     }
Jan 20 14:03:06 compute-0 stoic_lamport[114785]: }
Jan 20 14:03:06 compute-0 systemd[1]: libpod-6b4ffc84b55c3ce9c74019c95a1ed557e62cb9a26827c56aaa11dbf941247658.scope: Deactivated successfully.
Jan 20 14:03:06 compute-0 podman[114744]: 2026-01-20 14:03:06.162186714 +0000 UTC m=+1.056493676 container died 6b4ffc84b55c3ce9c74019c95a1ed557e62cb9a26827c56aaa11dbf941247658 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_lamport, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:03:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-62a21363d47fd15cafebe1d4add96c6f0eb4464e2d46d97e68666e052239f09b-merged.mount: Deactivated successfully.
Jan 20 14:03:06 compute-0 podman[114744]: 2026-01-20 14:03:06.24081976 +0000 UTC m=+1.135126762 container remove 6b4ffc84b55c3ce9c74019c95a1ed557e62cb9a26827c56aaa11dbf941247658 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_lamport, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:03:06 compute-0 systemd[1]: libpod-conmon-6b4ffc84b55c3ce9c74019c95a1ed557e62cb9a26827c56aaa11dbf941247658.scope: Deactivated successfully.
Jan 20 14:03:06 compute-0 sudo[114511]: pam_unix(sudo:session): session closed for user root
Jan 20 14:03:06 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 14:03:06 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:03:06 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 14:03:06 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:03:06 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 4d8d48ce-2ed9-43f0-b7e5-a0bbc2557a45 does not exist
Jan 20 14:03:06 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 1f104021-4be6-4dff-ae01-d87e22ddd465 does not exist
Jan 20 14:03:06 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev aaf3e556-7498-4818-9f8e-24fc32ecf69f does not exist
Jan 20 14:03:06 compute-0 sudo[114974]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:03:06 compute-0 sudo[114974]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:03:06 compute-0 sudo[114974]: pam_unix(sudo:session): session closed for user root
Jan 20 14:03:06 compute-0 sudo[115026]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 14:03:06 compute-0 sudo[115026]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:03:06 compute-0 sudo[115026]: pam_unix(sudo:session): session closed for user root
Jan 20 14:03:06 compute-0 ceph-mon[74360]: pgmap v337: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:03:06 compute-0 ceph-mon[74360]: 11.1c scrub starts
Jan 20 14:03:06 compute-0 ceph-mon[74360]: 11.1c scrub ok
Jan 20 14:03:06 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:03:06 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:03:06 compute-0 sudo[115147]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lilxzqzfbeautlzsfaiqegkikdkwvyqt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917786.4051247-479-272503099946127/AnsiballZ_command.py'
Jan 20 14:03:06 compute-0 sudo[115147]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:03:06 compute-0 python3.9[115149]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:03:06 compute-0 sudo[115147]: pam_unix(sudo:session): session closed for user root
Jan 20 14:03:07 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v338: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:03:07 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:03:07 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:03:07 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:03:07.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:03:07 compute-0 sshd-session[114393]: Connection closed by authenticating user root 159.223.5.14 port 32804 [preauth]
Jan 20 14:03:07 compute-0 sudo[115301]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oiscgsnqpzfhtynkgwyywemvcjsviwnn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917787.284847-509-188120991537763/AnsiballZ_service_facts.py'
Jan 20 14:03:07 compute-0 sudo[115301]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:03:07 compute-0 python3.9[115303]: ansible-service_facts Invoked
Jan 20 14:03:08 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:03:08 compute-0 network[115320]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 20 14:03:08 compute-0 network[115321]: 'network-scripts' will be removed from distribution in near future.
Jan 20 14:03:08 compute-0 network[115322]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 20 14:03:08 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:03:08 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:03:08 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:03:08.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:03:08 compute-0 ceph-mon[74360]: 9.5 deep-scrub starts
Jan 20 14:03:08 compute-0 ceph-mon[74360]: 9.5 deep-scrub ok
Jan 20 14:03:08 compute-0 ceph-mon[74360]: pgmap v338: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:03:09 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v339: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:03:09 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:03:09 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:03:09 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:03:09.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:03:09 compute-0 ceph-mon[74360]: 11.1a scrub starts
Jan 20 14:03:09 compute-0 ceph-mon[74360]: 11.1a scrub ok
Jan 20 14:03:10 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:03:10 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:03:10 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:03:10.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:03:10 compute-0 sshd-session[115335]: Connection closed by authenticating user root 157.245.78.139 port 60390 [preauth]
Jan 20 14:03:10 compute-0 ceph-mon[74360]: 9.18 scrub starts
Jan 20 14:03:10 compute-0 ceph-mon[74360]: 9.18 scrub ok
Jan 20 14:03:10 compute-0 ceph-mon[74360]: pgmap v339: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:03:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 14:03:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:03:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 20 14:03:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:03:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:03:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:03:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:03:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:03:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:03:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:03:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:03:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:03:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 20 14:03:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:03:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:03:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:03:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 20 14:03:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:03:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 20 14:03:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:03:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:03:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:03:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 20 14:03:11 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v340: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:03:11 compute-0 sudo[115419]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:03:11 compute-0 sudo[115419]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:03:11 compute-0 sudo[115419]: pam_unix(sudo:session): session closed for user root
Jan 20 14:03:11 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:03:11 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:03:11 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:03:11.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:03:11 compute-0 sudo[115447]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:03:11 compute-0 sudo[115447]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:03:11 compute-0 sudo[115447]: pam_unix(sudo:session): session closed for user root
Jan 20 14:03:11 compute-0 ceph-mon[74360]: 11.1e scrub starts
Jan 20 14:03:11 compute-0 ceph-mon[74360]: 11.1e scrub ok
Jan 20 14:03:11 compute-0 sudo[115301]: pam_unix(sudo:session): session closed for user root
Jan 20 14:03:12 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:03:12 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:03:12 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:03:12.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:03:12 compute-0 ceph-mon[74360]: 9.8 scrub starts
Jan 20 14:03:12 compute-0 ceph-mon[74360]: 9.8 scrub ok
Jan 20 14:03:12 compute-0 ceph-mon[74360]: pgmap v340: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:03:12 compute-0 ceph-mon[74360]: 11.1d scrub starts
Jan 20 14:03:12 compute-0 ceph-mon[74360]: 11.1d scrub ok
Jan 20 14:03:13 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:03:13 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v341: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:03:13 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:03:13 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:03:13 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:03:13.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:03:13 compute-0 ceph-mon[74360]: 8.12 deep-scrub starts
Jan 20 14:03:13 compute-0 ceph-mon[74360]: 8.12 deep-scrub ok
Jan 20 14:03:14 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:03:14 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:03:14 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:03:14.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:03:14 compute-0 ceph-mon[74360]: pgmap v341: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:03:15 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v342: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:03:15 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:03:15 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:03:15 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:03:15.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:03:15 compute-0 sudo[115661]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ydooelyranvxjoojistspycsdzhnbcqb ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1768917795.428706-554-42397596633128/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1768917795.428706-554-42397596633128/args'
Jan 20 14:03:15 compute-0 sudo[115661]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:03:15 compute-0 sudo[115661]: pam_unix(sudo:session): session closed for user root
Jan 20 14:03:16 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:03:16 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:03:16 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:03:16.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:03:16 compute-0 sudo[115828]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vuynjeerfzanwrzhdmekladgtbinjbwy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917796.3152633-587-255640105312633/AnsiballZ_dnf.py'
Jan 20 14:03:16 compute-0 sudo[115828]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:03:16 compute-0 ceph-mon[74360]: pgmap v342: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:03:16 compute-0 ceph-mon[74360]: 9.9 scrub starts
Jan 20 14:03:16 compute-0 python3.9[115830]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 20 14:03:17 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v343: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:03:17 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:03:17 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:03:17 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:03:17.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:03:17 compute-0 ceph-mon[74360]: 9.9 scrub ok
Jan 20 14:03:18 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:03:18 compute-0 sudo[115828]: pam_unix(sudo:session): session closed for user root
Jan 20 14:03:18 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:03:18 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:03:18 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:03:18.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:03:18 compute-0 ceph-mon[74360]: pgmap v343: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:03:18 compute-0 ceph-mon[74360]: 8.4 scrub starts
Jan 20 14:03:18 compute-0 ceph-mon[74360]: 8.4 scrub ok
Jan 20 14:03:18 compute-0 ceph-mon[74360]: 9.16 scrub starts
Jan 20 14:03:18 compute-0 ceph-mon[74360]: 9.16 scrub ok
Jan 20 14:03:19 compute-0 sudo[115982]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ooquwhuipbpjfyuxmvthnmqshmxwkfxs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917798.63932-626-280861017396261/AnsiballZ_package_facts.py'
Jan 20 14:03:19 compute-0 sudo[115982]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:03:19 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v344: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:03:19 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:03:19 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:03:19 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:03:19.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:03:19 compute-0 python3.9[115984]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Jan 20 14:03:19 compute-0 sudo[115982]: pam_unix(sudo:session): session closed for user root
Jan 20 14:03:19 compute-0 ceph-mon[74360]: 9.1d scrub starts
Jan 20 14:03:19 compute-0 ceph-mon[74360]: 9.1d scrub ok
Jan 20 14:03:20 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:03:20 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:03:20 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:03:20.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:03:20 compute-0 ceph-mon[74360]: pgmap v344: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:03:21 compute-0 sudo[116135]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bxxciqrpkogsrajwslgoodegwinosrkj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917800.7166743-656-15036602510361/AnsiballZ_stat.py'
Jan 20 14:03:21 compute-0 sudo[116135]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:03:21 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v345: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:03:21 compute-0 python3.9[116137]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:03:21 compute-0 sudo[116135]: pam_unix(sudo:session): session closed for user root
Jan 20 14:03:21 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:03:21 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:03:21 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:03:21.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:03:21 compute-0 sudo[116214]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bvsrzzorllksrnkpdkebjtedudjpuujj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917800.7166743-656-15036602510361/AnsiballZ_file.py'
Jan 20 14:03:21 compute-0 sudo[116214]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:03:21 compute-0 python3.9[116216]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/chrony.conf _original_basename=chrony.conf.j2 recurse=False state=file path=/etc/chrony.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:03:21 compute-0 sudo[116214]: pam_unix(sudo:session): session closed for user root
Jan 20 14:03:22 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:03:22 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:03:22 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:03:22.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:03:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:03:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:03:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:03:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:03:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:03:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:03:22 compute-0 sudo[116366]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vsgvbuxahvsmxyqfcgxcgmfoqrycmbql ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917802.4502835-692-53043495721740/AnsiballZ_stat.py'
Jan 20 14:03:22 compute-0 sudo[116366]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:03:22 compute-0 ceph-mon[74360]: pgmap v345: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:03:23 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:03:23 compute-0 python3.9[116368]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:03:23 compute-0 sudo[116366]: pam_unix(sudo:session): session closed for user root
Jan 20 14:03:23 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v346: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:03:23 compute-0 sudo[116444]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kiihrzvcwthkyvsjhnbpnjluqfqwcyry ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917802.4502835-692-53043495721740/AnsiballZ_file.py'
Jan 20 14:03:23 compute-0 sudo[116444]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:03:23 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:03:23 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 20 14:03:23 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:03:23.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 20 14:03:23 compute-0 python3.9[116446]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/chronyd _original_basename=chronyd.sysconfig.j2 recurse=False state=file path=/etc/sysconfig/chronyd force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:03:23 compute-0 sudo[116444]: pam_unix(sudo:session): session closed for user root
Jan 20 14:03:24 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:03:24 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:03:24 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:03:24.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:03:24 compute-0 ceph-mon[74360]: pgmap v346: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:03:24 compute-0 ceph-mon[74360]: 11.1b deep-scrub starts
Jan 20 14:03:24 compute-0 ceph-mon[74360]: 11.1b deep-scrub ok
Jan 20 14:03:25 compute-0 sudo[116597]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jkxtzuokjtgymfekvncevfypztdeckqg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917804.618289-746-105478330696717/AnsiballZ_lineinfile.py'
Jan 20 14:03:25 compute-0 sudo[116597]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:03:25 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v347: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:03:25 compute-0 python3.9[116599]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:03:25 compute-0 sudo[116597]: pam_unix(sudo:session): session closed for user root
Jan 20 14:03:25 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:03:25 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:03:25 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:03:25.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:03:26 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:03:26 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:03:26 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:03:26.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:03:26 compute-0 sudo[116750]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uhekunmafsvwrevyzsoearzwsxnnaztj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917806.4297073-791-99189691356682/AnsiballZ_setup.py'
Jan 20 14:03:26 compute-0 sudo[116750]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:03:26 compute-0 python3.9[116752]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 20 14:03:26 compute-0 ceph-mon[74360]: pgmap v347: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:03:27 compute-0 sudo[116750]: pam_unix(sudo:session): session closed for user root
Jan 20 14:03:27 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v348: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:03:27 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:03:27 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:03:27 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:03:27.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:03:28 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:03:28 compute-0 ceph-mon[74360]: pgmap v348: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:03:28 compute-0 sudo[116835]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ipsjjsfxlhzwnpznsgvnoeqiqtxdfuzz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917806.4297073-791-99189691356682/AnsiballZ_systemd.py'
Jan 20 14:03:28 compute-0 sudo[116835]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:03:28 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:03:28 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:03:28 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:03:28.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:03:28 compute-0 python3.9[116837]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 14:03:28 compute-0 sudo[116835]: pam_unix(sudo:session): session closed for user root
Jan 20 14:03:29 compute-0 ceph-mon[74360]: 8.18 scrub starts
Jan 20 14:03:29 compute-0 ceph-mon[74360]: 8.18 scrub ok
Jan 20 14:03:29 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v349: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:03:29 compute-0 sshd-session[111343]: Connection closed by 192.168.122.30 port 40870
Jan 20 14:03:29 compute-0 sshd-session[111340]: pam_unix(sshd:session): session closed for user zuul
Jan 20 14:03:29 compute-0 systemd[1]: session-38.scope: Deactivated successfully.
Jan 20 14:03:29 compute-0 systemd[1]: session-38.scope: Consumed 25.632s CPU time.
Jan 20 14:03:29 compute-0 systemd-logind[796]: Session 38 logged out. Waiting for processes to exit.
Jan 20 14:03:29 compute-0 systemd-logind[796]: Removed session 38.
Jan 20 14:03:29 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:03:29 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:03:29 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:03:29.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:03:30 compute-0 ceph-mon[74360]: pgmap v349: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:03:30 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:03:30 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:03:30 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:03:30.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:03:31 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v350: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:03:31 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:03:31 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:03:31 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:03:31.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:03:31 compute-0 sudo[116865]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:03:31 compute-0 sudo[116865]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:03:31 compute-0 sudo[116865]: pam_unix(sudo:session): session closed for user root
Jan 20 14:03:31 compute-0 sudo[116890]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:03:31 compute-0 sudo[116890]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:03:31 compute-0 sudo[116890]: pam_unix(sudo:session): session closed for user root
Jan 20 14:03:32 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:03:32 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 20 14:03:32 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:03:32.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 20 14:03:32 compute-0 ceph-mon[74360]: pgmap v350: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:03:33 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:03:33 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v351: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:03:33 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:03:33 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:03:33 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:03:33.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:03:34 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:03:34 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:03:34 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:03:34.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:03:34 compute-0 sshd-session[116917]: Accepted publickey for zuul from 192.168.122.30 port 53912 ssh2: ECDSA SHA256:Yw0kyD5N4lqNgr1J3b5cYIIxKFrTRY8zW6kk+n6imz4
Jan 20 14:03:34 compute-0 systemd-logind[796]: New session 39 of user zuul.
Jan 20 14:03:34 compute-0 systemd[1]: Started Session 39 of User zuul.
Jan 20 14:03:34 compute-0 sshd-session[116917]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 20 14:03:34 compute-0 ceph-mon[74360]: pgmap v351: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:03:34 compute-0 ceph-mon[74360]: 9.e scrub starts
Jan 20 14:03:34 compute-0 ceph-mon[74360]: 9.e scrub ok
Jan 20 14:03:35 compute-0 sudo[117070]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rkjkkdkqrccxdyvolotjtnowhqusnihx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917814.5175905-26-25482280643746/AnsiballZ_file.py'
Jan 20 14:03:35 compute-0 sudo[117070]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:03:35 compute-0 python3.9[117072]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:03:35 compute-0 sudo[117070]: pam_unix(sudo:session): session closed for user root
Jan 20 14:03:35 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v352: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:03:35 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:03:35 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:03:35 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:03:35.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:03:36 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:03:36 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:03:36 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:03:36.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:03:36 compute-0 sudo[117223]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jgczbnybcarkipjsmvzlgqadtxcfflqd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917815.6861737-62-12990285357533/AnsiballZ_stat.py'
Jan 20 14:03:36 compute-0 sudo[117223]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:03:36 compute-0 python3.9[117225]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:03:36 compute-0 sudo[117223]: pam_unix(sudo:session): session closed for user root
Jan 20 14:03:36 compute-0 ceph-mon[74360]: pgmap v352: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:03:36 compute-0 sudo[117301]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bucringzdqsixcwxnnzmthxckyihmkgc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917815.6861737-62-12990285357533/AnsiballZ_file.py'
Jan 20 14:03:36 compute-0 sudo[117301]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:03:36 compute-0 python3.9[117303]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/ceph-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/ceph-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:03:36 compute-0 sudo[117301]: pam_unix(sudo:session): session closed for user root
Jan 20 14:03:37 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v353: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:03:37 compute-0 sshd-session[116920]: Connection closed by 192.168.122.30 port 53912
Jan 20 14:03:37 compute-0 sshd-session[116917]: pam_unix(sshd:session): session closed for user zuul
Jan 20 14:03:37 compute-0 systemd-logind[796]: Session 39 logged out. Waiting for processes to exit.
Jan 20 14:03:37 compute-0 systemd[1]: session-39.scope: Deactivated successfully.
Jan 20 14:03:37 compute-0 systemd[1]: session-39.scope: Consumed 1.857s CPU time.
Jan 20 14:03:37 compute-0 systemd-logind[796]: Removed session 39.
Jan 20 14:03:37 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:03:37 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:03:37 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:03:37.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:03:38 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:03:38 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:03:38 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:03:38 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:03:38.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:03:38 compute-0 ceph-mon[74360]: pgmap v353: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:03:39 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v354: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:03:39 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:03:39 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:03:39 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:03:39.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:03:40 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:03:40 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:03:40 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:03:40.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:03:40 compute-0 ceph-mon[74360]: pgmap v354: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:03:41 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v355: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:03:41 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:03:41 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:03:41 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:03:41.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:03:42 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:03:42 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 20 14:03:42 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:03:42.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 20 14:03:42 compute-0 sshd-session[117331]: Accepted publickey for zuul from 192.168.122.30 port 59584 ssh2: ECDSA SHA256:Yw0kyD5N4lqNgr1J3b5cYIIxKFrTRY8zW6kk+n6imz4
Jan 20 14:03:42 compute-0 systemd-logind[796]: New session 40 of user zuul.
Jan 20 14:03:42 compute-0 systemd[1]: Started Session 40 of User zuul.
Jan 20 14:03:42 compute-0 sshd-session[117331]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 20 14:03:42 compute-0 ceph-mon[74360]: pgmap v355: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:03:42 compute-0 ceph-mon[74360]: 9.6 scrub starts
Jan 20 14:03:42 compute-0 ceph-mon[74360]: 9.6 scrub ok
Jan 20 14:03:42 compute-0 ceph-mon[74360]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #21. Immutable memtables: 0.
Jan 20 14:03:42 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:03:42.810516) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 14:03:42 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 21
Jan 20 14:03:42 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768917822810593, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 2359, "num_deletes": 251, "total_data_size": 3451798, "memory_usage": 3510816, "flush_reason": "Manual Compaction"}
Jan 20 14:03:42 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #22: started
Jan 20 14:03:42 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768917822839349, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 22, "file_size": 3372780, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 7521, "largest_seqno": 9879, "table_properties": {"data_size": 3362979, "index_size": 5655, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3077, "raw_key_size": 25988, "raw_average_key_size": 21, "raw_value_size": 3340744, "raw_average_value_size": 2765, "num_data_blocks": 251, "num_entries": 1208, "num_filter_entries": 1208, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768917647, "oldest_key_time": 1768917647, "file_creation_time": 1768917822, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1688fb6f-f397-4aac-a0e8-874ff91d97a7", "db_session_id": "5V3N2TVXYZBCXP55EZHK", "orig_file_number": 22, "seqno_to_time_mapping": "N/A"}}
Jan 20 14:03:42 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 28944 microseconds, and 13947 cpu microseconds.
Jan 20 14:03:42 compute-0 ceph-mon[74360]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 14:03:42 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:03:42.839463) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #22: 3372780 bytes OK
Jan 20 14:03:42 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:03:42.839490) [db/memtable_list.cc:519] [default] Level-0 commit table #22 started
Jan 20 14:03:42 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:03:42.841163) [db/memtable_list.cc:722] [default] Level-0 commit table #22: memtable #1 done
Jan 20 14:03:42 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:03:42.841186) EVENT_LOG_v1 {"time_micros": 1768917822841178, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 14:03:42 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:03:42.841212) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 14:03:42 compute-0 ceph-mon[74360]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 3441586, prev total WAL file size 3441586, number of live WAL files 2.
Jan 20 14:03:42 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000018.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:03:42 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:03:42.842823) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300323531' seq:72057594037927935, type:22 .. '7061786F7300353033' seq:0, type:0; will stop at (end)
Jan 20 14:03:42 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 14:03:42 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [22(3293KB)], [20(7541KB)]
Jan 20 14:03:42 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768917822842905, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [22], "files_L6": [20], "score": -1, "input_data_size": 11094883, "oldest_snapshot_seqno": -1}
Jan 20 14:03:42 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #23: 3796 keys, 9488470 bytes, temperature: kUnknown
Jan 20 14:03:42 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768917822928699, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 23, "file_size": 9488470, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9457509, "index_size": 20355, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9541, "raw_key_size": 91595, "raw_average_key_size": 24, "raw_value_size": 9383464, "raw_average_value_size": 2471, "num_data_blocks": 890, "num_entries": 3796, "num_filter_entries": 3796, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768917305, "oldest_key_time": 0, "file_creation_time": 1768917822, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1688fb6f-f397-4aac-a0e8-874ff91d97a7", "db_session_id": "5V3N2TVXYZBCXP55EZHK", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Jan 20 14:03:42 compute-0 ceph-mon[74360]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 14:03:42 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:03:42.929148) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 9488470 bytes
Jan 20 14:03:42 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:03:42.930672) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 129.0 rd, 110.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 7.4 +0.0 blob) out(9.0 +0.0 blob), read-write-amplify(6.1) write-amplify(2.8) OK, records in: 4319, records dropped: 523 output_compression: NoCompression
Jan 20 14:03:42 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:03:42.930703) EVENT_LOG_v1 {"time_micros": 1768917822930688, "job": 6, "event": "compaction_finished", "compaction_time_micros": 85998, "compaction_time_cpu_micros": 39205, "output_level": 6, "num_output_files": 1, "total_output_size": 9488470, "num_input_records": 4319, "num_output_records": 3796, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 14:03:42 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000022.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:03:42 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768917822932551, "job": 6, "event": "table_file_deletion", "file_number": 22}
Jan 20 14:03:42 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:03:42 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768917822935440, "job": 6, "event": "table_file_deletion", "file_number": 20}
Jan 20 14:03:42 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:03:42.842716) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:03:42 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:03:42.935650) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:03:42 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:03:42.935660) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:03:42 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:03:42.935665) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:03:42 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:03:42.935669) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:03:42 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:03:42.935673) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:03:43 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:03:43 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v356: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:03:43 compute-0 python3.9[117484]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 14:03:43 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:03:43 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:03:43 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:03:43.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:03:44 compute-0 ceph-mon[74360]: pgmap v356: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:03:44 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:03:44 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:03:44 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:03:44.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:03:44 compute-0 sudo[117639]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ybgdijpvcbolnbsuxmicmhbrtblbfcnv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917823.8177757-59-66230705657119/AnsiballZ_file.py'
Jan 20 14:03:44 compute-0 sudo[117639]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:03:44 compute-0 python3.9[117641]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:03:44 compute-0 sudo[117639]: pam_unix(sudo:session): session closed for user root
Jan 20 14:03:45 compute-0 ceph-mon[74360]: 9.a scrub starts
Jan 20 14:03:45 compute-0 ceph-mon[74360]: 9.a scrub ok
Jan 20 14:03:45 compute-0 sudo[117814]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zlicdsrtrgzracgdwukvzkotnjbbnihh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917824.7817767-83-2545897589565/AnsiballZ_stat.py'
Jan 20 14:03:45 compute-0 sudo[117814]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:03:45 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v357: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:03:45 compute-0 python3.9[117816]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:03:45 compute-0 sudo[117814]: pam_unix(sudo:session): session closed for user root
Jan 20 14:03:45 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:03:45 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:03:45 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:03:45.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:03:45 compute-0 sudo[117893]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gzgcirnyhrgnerlwqfhobezhmzyysfth ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917824.7817767-83-2545897589565/AnsiballZ_file.py'
Jan 20 14:03:45 compute-0 sudo[117893]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:03:45 compute-0 python3.9[117895]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.u5y_p2g5 recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:03:45 compute-0 sudo[117893]: pam_unix(sudo:session): session closed for user root
Jan 20 14:03:46 compute-0 ceph-mon[74360]: pgmap v357: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:03:46 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:03:46 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:03:46 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:03:46.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:03:46 compute-0 sudo[118045]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gvkmdqjubeanqyrnxuygvzemmzfurejq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917826.618616-143-237813392557225/AnsiballZ_stat.py'
Jan 20 14:03:46 compute-0 sudo[118045]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:03:47 compute-0 python3.9[118047]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:03:47 compute-0 sudo[118045]: pam_unix(sudo:session): session closed for user root
Jan 20 14:03:47 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v358: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:03:47 compute-0 sudo[118123]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-scncmweidgautwvfqbihyvotwezokxqe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917826.618616-143-237813392557225/AnsiballZ_file.py'
Jan 20 14:03:47 compute-0 sudo[118123]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:03:47 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:03:47 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:03:47 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:03:47.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:03:47 compute-0 python3.9[118125]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.dxryr56m recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:03:47 compute-0 sudo[118123]: pam_unix(sudo:session): session closed for user root
Jan 20 14:03:48 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:03:48 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:03:48 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:03:48 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:03:48.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:03:48 compute-0 ceph-mon[74360]: pgmap v358: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:03:48 compute-0 sudo[118276]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ymdwaxygcqubtnolgewkjuyhyanniewo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917828.0654588-182-74977729512206/AnsiballZ_file.py'
Jan 20 14:03:48 compute-0 sudo[118276]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:03:48 compute-0 python3.9[118278]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:03:48 compute-0 sudo[118276]: pam_unix(sudo:session): session closed for user root
Jan 20 14:03:49 compute-0 sudo[118428]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rhpznkzvzunbtwenhpixibllntxhgjuh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917828.9122272-206-139386117807608/AnsiballZ_stat.py'
Jan 20 14:03:49 compute-0 sudo[118428]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:03:49 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v359: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:03:49 compute-0 python3.9[118430]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:03:49 compute-0 sudo[118428]: pam_unix(sudo:session): session closed for user root
Jan 20 14:03:49 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:03:49 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:03:49 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:03:49.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:03:49 compute-0 sudo[118506]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wvujsatvfcnyatkjtrnfmurfspbabzen ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917828.9122272-206-139386117807608/AnsiballZ_file.py'
Jan 20 14:03:49 compute-0 sudo[118506]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:03:49 compute-0 python3.9[118508]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:03:49 compute-0 sudo[118506]: pam_unix(sudo:session): session closed for user root
Jan 20 14:03:50 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:03:50 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 20 14:03:50 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:03:50.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 20 14:03:50 compute-0 sudo[118659]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bxwyjlzegfzufeflcihoykkhgfenzkdi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917829.9581969-206-24087171099189/AnsiballZ_stat.py'
Jan 20 14:03:50 compute-0 sudo[118659]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:03:50 compute-0 ceph-mon[74360]: pgmap v359: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:03:50 compute-0 ceph-mon[74360]: 9.d scrub starts
Jan 20 14:03:50 compute-0 ceph-mon[74360]: 9.d scrub ok
Jan 20 14:03:50 compute-0 python3.9[118661]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:03:50 compute-0 sudo[118659]: pam_unix(sudo:session): session closed for user root
Jan 20 14:03:50 compute-0 sudo[118737]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ptttbcsvjjbpddtmansxbjrqhonmtflx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917829.9581969-206-24087171099189/AnsiballZ_file.py'
Jan 20 14:03:50 compute-0 sudo[118737]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:03:50 compute-0 python3.9[118739]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:03:50 compute-0 sudo[118737]: pam_unix(sudo:session): session closed for user root
Jan 20 14:03:51 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v360: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:03:51 compute-0 ceph-mon[74360]: 9.f deep-scrub starts
Jan 20 14:03:51 compute-0 ceph-mon[74360]: 9.f deep-scrub ok
Jan 20 14:03:51 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:03:51 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:03:51 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:03:51.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:03:51 compute-0 sudo[118765]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:03:51 compute-0 sudo[118765]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:03:51 compute-0 sudo[118765]: pam_unix(sudo:session): session closed for user root
Jan 20 14:03:51 compute-0 sudo[118813]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:03:51 compute-0 sudo[118813]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:03:51 compute-0 sudo[118813]: pam_unix(sudo:session): session closed for user root
Jan 20 14:03:52 compute-0 sudo[118940]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pnlspllrjmvxsvkbkppspyjbdqgiyprf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917831.773326-275-145248316388309/AnsiballZ_file.py'
Jan 20 14:03:52 compute-0 sudo[118940]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:03:52 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:03:52 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:03:52 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:03:52.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:03:52 compute-0 python3.9[118942]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:03:52 compute-0 sudo[118940]: pam_unix(sudo:session): session closed for user root
Jan 20 14:03:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Optimize plan auto_2026-01-20_14:03:52
Jan 20 14:03:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 14:03:52 compute-0 ceph-mgr[74653]: [balancer INFO root] do_upmap
Jan 20 14:03:52 compute-0 ceph-mgr[74653]: [balancer INFO root] pools ['default.rgw.log', 'default.rgw.control', 'vms', 'backups', 'cephfs.cephfs.data', 'images', '.rgw.root', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.meta', '.mgr']
Jan 20 14:03:52 compute-0 ceph-mgr[74653]: [balancer INFO root] prepared 0/10 changes
Jan 20 14:03:52 compute-0 ceph-mon[74360]: pgmap v360: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:03:52 compute-0 ceph-mon[74360]: 9.10 scrub starts
Jan 20 14:03:52 compute-0 ceph-mon[74360]: 9.10 scrub ok
Jan 20 14:03:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:03:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:03:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:03:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:03:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:03:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:03:52 compute-0 sudo[119092]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mcxykoxkbtibwgmrbnwfwddueddzqsoi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917832.650333-299-107677390959707/AnsiballZ_stat.py'
Jan 20 14:03:52 compute-0 sudo[119092]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:03:53 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:03:53 compute-0 python3.9[119094]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:03:53 compute-0 sudo[119092]: pam_unix(sudo:session): session closed for user root
Jan 20 14:03:53 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v361: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:03:53 compute-0 sudo[119170]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wmpdqkaymwnicergizpgctbqmghkarpc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917832.650333-299-107677390959707/AnsiballZ_file.py'
Jan 20 14:03:53 compute-0 sudo[119170]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:03:53 compute-0 ceph-mon[74360]: 9.11 scrub starts
Jan 20 14:03:53 compute-0 ceph-mon[74360]: 9.11 scrub ok
Jan 20 14:03:53 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:03:53 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:03:53 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:03:53.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:03:53 compute-0 python3.9[119172]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:03:53 compute-0 sudo[119170]: pam_unix(sudo:session): session closed for user root
Jan 20 14:03:54 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:03:54 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:03:54 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:03:54.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:03:54 compute-0 sudo[119323]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gmrkfhquuxtgiwmxzpclwnwbkwbdsnoc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917833.9580822-335-74589430924038/AnsiballZ_stat.py'
Jan 20 14:03:54 compute-0 sudo[119323]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:03:54 compute-0 python3.9[119325]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:03:54 compute-0 ceph-mon[74360]: pgmap v361: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:03:54 compute-0 sudo[119323]: pam_unix(sudo:session): session closed for user root
Jan 20 14:03:54 compute-0 sudo[119401]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-suyjjmhlmvzqnvfmqyufcwzevpcjbokl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917833.9580822-335-74589430924038/AnsiballZ_file.py'
Jan 20 14:03:54 compute-0 sudo[119401]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:03:54 compute-0 python3.9[119403]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:03:54 compute-0 sudo[119401]: pam_unix(sudo:session): session closed for user root
Jan 20 14:03:55 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v362: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:03:55 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:03:55 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:03:55 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:03:55.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:03:56 compute-0 sudo[119556]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-exevoopzqpsefbfilnellqzbmxecohuo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917835.3731732-371-83840771090271/AnsiballZ_systemd.py'
Jan 20 14:03:56 compute-0 sudo[119556]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:03:56 compute-0 sshd-session[119480]: Connection closed by authenticating user root 157.245.78.139 port 34392 [preauth]
Jan 20 14:03:56 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:03:56 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:03:56 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:03:56.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:03:56 compute-0 python3.9[119558]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 14:03:56 compute-0 systemd[1]: Reloading.
Jan 20 14:03:56 compute-0 systemd-sysv-generator[119591]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:03:56 compute-0 systemd-rc-local-generator[119587]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:03:56 compute-0 ceph-mon[74360]: pgmap v362: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:03:56 compute-0 sudo[119556]: pam_unix(sudo:session): session closed for user root
Jan 20 14:03:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 14:03:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:03:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 14:03:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:03:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:03:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:03:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:03:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:03:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:03:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:03:57 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v363: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:03:57 compute-0 sudo[119746]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rezcvcwyshctgswjgarvxmdfqhqqeelz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917837.0684705-395-77350563386930/AnsiballZ_stat.py'
Jan 20 14:03:57 compute-0 sudo[119746]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:03:57 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:03:57 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:03:57 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:03:57.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:03:57 compute-0 python3.9[119748]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:03:57 compute-0 sudo[119746]: pam_unix(sudo:session): session closed for user root
Jan 20 14:03:57 compute-0 sudo[119825]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hgtlxsvutuonrfbrguzlsjgfcxymydzr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917837.0684705-395-77350563386930/AnsiballZ_file.py'
Jan 20 14:03:57 compute-0 sudo[119825]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:03:58 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:03:58 compute-0 python3.9[119827]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:03:58 compute-0 sudo[119825]: pam_unix(sudo:session): session closed for user root
Jan 20 14:03:58 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:03:58 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:03:58 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:03:58.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:03:58 compute-0 ceph-mon[74360]: pgmap v363: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:03:58 compute-0 sudo[119977]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ydkodhfuwfzehdmqqhlgkkjwozaxltkj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917838.5024455-431-79155567449479/AnsiballZ_stat.py'
Jan 20 14:03:58 compute-0 sudo[119977]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:03:58 compute-0 python3.9[119979]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:03:59 compute-0 sudo[119977]: pam_unix(sudo:session): session closed for user root
Jan 20 14:03:59 compute-0 sudo[120055]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dflkvmmqmizxadaomaufrwwtizowwwrg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917838.5024455-431-79155567449479/AnsiballZ_file.py'
Jan 20 14:03:59 compute-0 sudo[120055]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:03:59 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v364: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:03:59 compute-0 python3.9[120057]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:03:59 compute-0 sudo[120055]: pam_unix(sudo:session): session closed for user root
Jan 20 14:03:59 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:03:59 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:03:59 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:03:59.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:04:00 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:04:00 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:04:00 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:04:00.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:04:00 compute-0 sudo[120208]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lrheqgduaphbmudzvvzbrfjuupztbypl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917839.9312232-467-238430622328323/AnsiballZ_systemd.py'
Jan 20 14:04:00 compute-0 sudo[120208]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:04:00 compute-0 python3.9[120210]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 14:04:00 compute-0 ceph-mon[74360]: pgmap v364: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:04:00 compute-0 systemd[1]: Reloading.
Jan 20 14:04:00 compute-0 systemd-rc-local-generator[120238]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:04:00 compute-0 systemd-sysv-generator[120243]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:04:00 compute-0 systemd[1]: Starting Create netns directory...
Jan 20 14:04:00 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 20 14:04:00 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 20 14:04:00 compute-0 systemd[1]: Finished Create netns directory.
Jan 20 14:04:00 compute-0 sudo[120208]: pam_unix(sudo:session): session closed for user root
Jan 20 14:04:01 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v365: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:04:01 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:04:01 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:04:01 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:04:01.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:04:01 compute-0 python3.9[120405]: ansible-ansible.builtin.service_facts Invoked
Jan 20 14:04:01 compute-0 network[120422]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 20 14:04:01 compute-0 network[120423]: 'network-scripts' will be removed from distribution in near future.
Jan 20 14:04:01 compute-0 network[120424]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 20 14:04:02 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:04:02 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:04:02 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:04:02.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:04:02 compute-0 ceph-mon[74360]: pgmap v365: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:04:02 compute-0 ceph-mon[74360]: 9.12 scrub starts
Jan 20 14:04:02 compute-0 ceph-mon[74360]: 9.12 scrub ok
Jan 20 14:04:03 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:04:03 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v366: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:04:03 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:04:03 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:04:03 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:04:03.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:04:04 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:04:04 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:04:04 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:04:04.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:04:04 compute-0 ceph-mon[74360]: pgmap v366: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:04:04 compute-0 ceph-mon[74360]: 9.15 scrub starts
Jan 20 14:04:04 compute-0 ceph-mon[74360]: 9.15 scrub ok
Jan 20 14:04:05 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v367: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:04:05 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:04:05 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:04:05 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:04:05.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:04:06 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:04:06 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:04:06 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:04:06.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:04:06 compute-0 ceph-mon[74360]: pgmap v367: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:04:06 compute-0 sudo[120561]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:04:06 compute-0 sudo[120561]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:04:06 compute-0 sudo[120561]: pam_unix(sudo:session): session closed for user root
Jan 20 14:04:06 compute-0 sudo[120586]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:04:06 compute-0 sudo[120586]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:04:06 compute-0 sudo[120586]: pam_unix(sudo:session): session closed for user root
Jan 20 14:04:07 compute-0 sudo[120611]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:04:07 compute-0 sudo[120611]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:04:07 compute-0 sudo[120611]: pam_unix(sudo:session): session closed for user root
Jan 20 14:04:07 compute-0 sudo[120636]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 14:04:07 compute-0 sudo[120636]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:04:07 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v368: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:04:07 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:04:07 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:04:07 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:04:07.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:04:07 compute-0 sudo[120636]: pam_unix(sudo:session): session closed for user root
Jan 20 14:04:08 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:04:08 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:04:08 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:04:08 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:04:08.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:04:08 compute-0 ceph-mon[74360]: pgmap v368: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:04:09 compute-0 sudo[120819]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nynoxgfauedaasaxkincbbjvqteozlti ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917848.7298522-545-146771630521167/AnsiballZ_stat.py'
Jan 20 14:04:09 compute-0 sudo[120819]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:04:09 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 20 14:04:09 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:04:09 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 20 14:04:09 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:04:09 compute-0 python3.9[120821]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:04:09 compute-0 sudo[120819]: pam_unix(sudo:session): session closed for user root
Jan 20 14:04:09 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v369: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:04:09 compute-0 sudo[120897]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vnkevzyorbgvghzcibodqrunhrwsobum ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917848.7298522-545-146771630521167/AnsiballZ_file.py'
Jan 20 14:04:09 compute-0 sudo[120897]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:04:09 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:04:09 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:04:09 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:04:09.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:04:09 compute-0 python3.9[120899]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:04:09 compute-0 sudo[120897]: pam_unix(sudo:session): session closed for user root
Jan 20 14:04:10 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 14:04:10 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:04:10 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 20 14:04:10 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:04:10 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 20 14:04:10 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:04:10 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 00f473b4-394b-469a-bbea-c207882ff57a does not exist
Jan 20 14:04:10 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 861b5f52-39ee-4e9c-9fe1-b38129b79468 does not exist
Jan 20 14:04:10 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 3880aa9e-173a-4802-82ac-10ed3beb1115 does not exist
Jan 20 14:04:10 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 20 14:04:10 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:04:10 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 20 14:04:10 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:04:10 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 14:04:10 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:04:10 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:04:10 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:04:10 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:04:10.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:04:10 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:04:10 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:04:10 compute-0 ceph-mon[74360]: pgmap v369: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:04:10 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:04:10 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:04:10 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:04:10 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:04:10 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:04:10 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:04:10 compute-0 sudo[120925]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:04:10 compute-0 sudo[120925]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:04:10 compute-0 sudo[120925]: pam_unix(sudo:session): session closed for user root
Jan 20 14:04:10 compute-0 sudo[120950]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:04:10 compute-0 sudo[120950]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:04:10 compute-0 sudo[120950]: pam_unix(sudo:session): session closed for user root
Jan 20 14:04:10 compute-0 sudo[120975]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:04:10 compute-0 sudo[120975]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:04:10 compute-0 sudo[120975]: pam_unix(sudo:session): session closed for user root
Jan 20 14:04:10 compute-0 sudo[121000]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 14:04:10 compute-0 sudo[121000]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:04:10 compute-0 podman[121123]: 2026-01-20 14:04:10.751988052 +0000 UTC m=+0.047946309 container create a56c7ba700e3c3f55b1771a2e770b0ccd49796b00e087c32d09e7cc1936f82db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_germain, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True)
Jan 20 14:04:10 compute-0 systemd[1]: Started libpod-conmon-a56c7ba700e3c3f55b1771a2e770b0ccd49796b00e087c32d09e7cc1936f82db.scope.
Jan 20 14:04:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 14:04:10 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:04:10 compute-0 podman[121123]: 2026-01-20 14:04:10.728549587 +0000 UTC m=+0.024507864 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:04:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:04:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 20 14:04:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:04:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:04:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:04:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:04:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:04:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:04:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:04:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:04:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:04:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 20 14:04:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:04:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:04:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:04:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 20 14:04:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:04:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 20 14:04:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:04:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:04:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:04:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 20 14:04:10 compute-0 podman[121123]: 2026-01-20 14:04:10.839697071 +0000 UTC m=+0.135655358 container init a56c7ba700e3c3f55b1771a2e770b0ccd49796b00e087c32d09e7cc1936f82db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_germain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 20 14:04:10 compute-0 podman[121123]: 2026-01-20 14:04:10.852208604 +0000 UTC m=+0.148166891 container start a56c7ba700e3c3f55b1771a2e770b0ccd49796b00e087c32d09e7cc1936f82db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_germain, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:04:10 compute-0 podman[121123]: 2026-01-20 14:04:10.856295083 +0000 UTC m=+0.152253360 container attach a56c7ba700e3c3f55b1771a2e770b0ccd49796b00e087c32d09e7cc1936f82db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_germain, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 20 14:04:10 compute-0 admiring_germain[121159]: 167 167
Jan 20 14:04:10 compute-0 systemd[1]: libpod-a56c7ba700e3c3f55b1771a2e770b0ccd49796b00e087c32d09e7cc1936f82db.scope: Deactivated successfully.
Jan 20 14:04:10 compute-0 conmon[121159]: conmon a56c7ba700e3c3f55b17 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a56c7ba700e3c3f55b1771a2e770b0ccd49796b00e087c32d09e7cc1936f82db.scope/container/memory.events
Jan 20 14:04:10 compute-0 podman[121123]: 2026-01-20 14:04:10.860478545 +0000 UTC m=+0.156436782 container died a56c7ba700e3c3f55b1771a2e770b0ccd49796b00e087c32d09e7cc1936f82db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_germain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Jan 20 14:04:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-42532067968bd6209db335063b74fd8cbb36010d03393dbecfa74352083eceff-merged.mount: Deactivated successfully.
Jan 20 14:04:10 compute-0 podman[121123]: 2026-01-20 14:04:10.898473328 +0000 UTC m=+0.194431575 container remove a56c7ba700e3c3f55b1771a2e770b0ccd49796b00e087c32d09e7cc1936f82db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_germain, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 20 14:04:10 compute-0 systemd[1]: libpod-conmon-a56c7ba700e3c3f55b1771a2e770b0ccd49796b00e087c32d09e7cc1936f82db.scope: Deactivated successfully.
Jan 20 14:04:10 compute-0 sudo[121229]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nnsxcqkccyqkvnjzuyjsxgcckhihqgtk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917850.540938-584-43154282417492/AnsiballZ_file.py'
Jan 20 14:04:11 compute-0 sudo[121229]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:04:11 compute-0 podman[121236]: 2026-01-20 14:04:11.05713579 +0000 UTC m=+0.043617074 container create f21fcf3908efcba79457a4fc0b999bb4940ef088e7894da2a526aeb93b55a95a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_jepsen, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 20 14:04:11 compute-0 systemd[1]: Started libpod-conmon-f21fcf3908efcba79457a4fc0b999bb4940ef088e7894da2a526aeb93b55a95a.scope.
Jan 20 14:04:11 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:04:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eacf57b580c37ebb13e8298d5cf671f4634be25db4f9800fde3de2a65b89de20/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:04:11 compute-0 podman[121236]: 2026-01-20 14:04:11.034828415 +0000 UTC m=+0.021309709 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:04:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eacf57b580c37ebb13e8298d5cf671f4634be25db4f9800fde3de2a65b89de20/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:04:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eacf57b580c37ebb13e8298d5cf671f4634be25db4f9800fde3de2a65b89de20/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:04:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eacf57b580c37ebb13e8298d5cf671f4634be25db4f9800fde3de2a65b89de20/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:04:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eacf57b580c37ebb13e8298d5cf671f4634be25db4f9800fde3de2a65b89de20/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 14:04:11 compute-0 podman[121236]: 2026-01-20 14:04:11.143250156 +0000 UTC m=+0.129731430 container init f21fcf3908efcba79457a4fc0b999bb4940ef088e7894da2a526aeb93b55a95a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_jepsen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 20 14:04:11 compute-0 podman[121236]: 2026-01-20 14:04:11.152428111 +0000 UTC m=+0.138909375 container start f21fcf3908efcba79457a4fc0b999bb4940ef088e7894da2a526aeb93b55a95a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_jepsen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:04:11 compute-0 podman[121236]: 2026-01-20 14:04:11.156015656 +0000 UTC m=+0.142496910 container attach f21fcf3908efcba79457a4fc0b999bb4940ef088e7894da2a526aeb93b55a95a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_jepsen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 20 14:04:11 compute-0 python3.9[121235]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:04:11 compute-0 sudo[121229]: pam_unix(sudo:session): session closed for user root
Jan 20 14:04:11 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v370: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:04:11 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:04:11 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:04:11 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:04:11.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:04:11 compute-0 sudo[121414]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kodmaaupkkcvxpksvkuspnkrmiisfdvg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917851.5186815-608-60957489469061/AnsiballZ_stat.py'
Jan 20 14:04:11 compute-0 sudo[121414]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:04:11 compute-0 boring_jepsen[121252]: --> passed data devices: 0 physical, 1 LVM
Jan 20 14:04:11 compute-0 boring_jepsen[121252]: --> relative data size: 1.0
Jan 20 14:04:11 compute-0 boring_jepsen[121252]: --> All data devices are unavailable
Jan 20 14:04:11 compute-0 sudo[121417]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:04:11 compute-0 sudo[121417]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:04:11 compute-0 sudo[121417]: pam_unix(sudo:session): session closed for user root
Jan 20 14:04:11 compute-0 systemd[1]: libpod-f21fcf3908efcba79457a4fc0b999bb4940ef088e7894da2a526aeb93b55a95a.scope: Deactivated successfully.
Jan 20 14:04:11 compute-0 podman[121236]: 2026-01-20 14:04:11.96386229 +0000 UTC m=+0.950343544 container died f21fcf3908efcba79457a4fc0b999bb4940ef088e7894da2a526aeb93b55a95a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_jepsen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Jan 20 14:04:12 compute-0 sudo[121446]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:04:12 compute-0 sudo[121446]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:04:12 compute-0 sudo[121446]: pam_unix(sudo:session): session closed for user root
Jan 20 14:04:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-eacf57b580c37ebb13e8298d5cf671f4634be25db4f9800fde3de2a65b89de20-merged.mount: Deactivated successfully.
Jan 20 14:04:12 compute-0 python3.9[121416]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:04:12 compute-0 podman[121236]: 2026-01-20 14:04:12.099785085 +0000 UTC m=+1.086266379 container remove f21fcf3908efcba79457a4fc0b999bb4940ef088e7894da2a526aeb93b55a95a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_jepsen, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:04:12 compute-0 systemd[1]: libpod-conmon-f21fcf3908efcba79457a4fc0b999bb4940ef088e7894da2a526aeb93b55a95a.scope: Deactivated successfully.
Jan 20 14:04:12 compute-0 sudo[121000]: pam_unix(sudo:session): session closed for user root
Jan 20 14:04:12 compute-0 sudo[121414]: pam_unix(sudo:session): session closed for user root
Jan 20 14:04:12 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:04:12 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:04:12 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:04:12.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:04:12 compute-0 sudo[121486]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:04:12 compute-0 sudo[121486]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:04:12 compute-0 sudo[121486]: pam_unix(sudo:session): session closed for user root
Jan 20 14:04:12 compute-0 sudo[121534]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:04:12 compute-0 sudo[121534]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:04:12 compute-0 sudo[121534]: pam_unix(sudo:session): session closed for user root
Jan 20 14:04:12 compute-0 sudo[121580]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:04:12 compute-0 sudo[121580]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:04:12 compute-0 sudo[121580]: pam_unix(sudo:session): session closed for user root
Jan 20 14:04:12 compute-0 ceph-mon[74360]: pgmap v370: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:04:12 compute-0 sudo[121638]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vzqwspzcmtxtrbjljkfohnwemntngggr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917851.5186815-608-60957489469061/AnsiballZ_file.py'
Jan 20 14:04:12 compute-0 sudo[121638]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:04:12 compute-0 sudo[121632]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- lvm list --format json
Jan 20 14:04:12 compute-0 sudo[121632]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:04:12 compute-0 python3.9[121653]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/edpm-config/firewall/sshd-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/sshd-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:04:12 compute-0 sudo[121638]: pam_unix(sudo:session): session closed for user root
Jan 20 14:04:12 compute-0 podman[121724]: 2026-01-20 14:04:12.861527659 +0000 UTC m=+0.058962923 container create 8dc9c81590f3f892b0d6def8c51166476f64806d5fd9343f57f75d69623478af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_murdock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 20 14:04:12 compute-0 systemd[1]: Started libpod-conmon-8dc9c81590f3f892b0d6def8c51166476f64806d5fd9343f57f75d69623478af.scope.
Jan 20 14:04:12 compute-0 podman[121724]: 2026-01-20 14:04:12.839735569 +0000 UTC m=+0.037170843 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:04:12 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:04:12 compute-0 podman[121724]: 2026-01-20 14:04:12.952343361 +0000 UTC m=+0.149778635 container init 8dc9c81590f3f892b0d6def8c51166476f64806d5fd9343f57f75d69623478af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_murdock, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Jan 20 14:04:12 compute-0 podman[121724]: 2026-01-20 14:04:12.964468035 +0000 UTC m=+0.161903269 container start 8dc9c81590f3f892b0d6def8c51166476f64806d5fd9343f57f75d69623478af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_murdock, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 20 14:04:12 compute-0 podman[121724]: 2026-01-20 14:04:12.967849525 +0000 UTC m=+0.165284779 container attach 8dc9c81590f3f892b0d6def8c51166476f64806d5fd9343f57f75d69623478af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_murdock, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 20 14:04:12 compute-0 crazy_murdock[121740]: 167 167
Jan 20 14:04:12 compute-0 systemd[1]: libpod-8dc9c81590f3f892b0d6def8c51166476f64806d5fd9343f57f75d69623478af.scope: Deactivated successfully.
Jan 20 14:04:12 compute-0 podman[121724]: 2026-01-20 14:04:12.971291637 +0000 UTC m=+0.168726901 container died 8dc9c81590f3f892b0d6def8c51166476f64806d5fd9343f57f75d69623478af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_murdock, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 20 14:04:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-036034a2d68102ffe9a4933b5d6378a7b7881400c5f9c965c2b65f627098d1f2-merged.mount: Deactivated successfully.
Jan 20 14:04:13 compute-0 podman[121724]: 2026-01-20 14:04:13.01980656 +0000 UTC m=+0.217241834 container remove 8dc9c81590f3f892b0d6def8c51166476f64806d5fd9343f57f75d69623478af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_murdock, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:04:13 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:04:13 compute-0 systemd[1]: libpod-conmon-8dc9c81590f3f892b0d6def8c51166476f64806d5fd9343f57f75d69623478af.scope: Deactivated successfully.
Jan 20 14:04:13 compute-0 podman[121765]: 2026-01-20 14:04:13.218293954 +0000 UTC m=+0.046179873 container create 7b22c2ff644a3511e30f36138aca161f319087c35f2f83d2c25a7742ab2cb49c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_dubinsky, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:04:13 compute-0 systemd[1]: Started libpod-conmon-7b22c2ff644a3511e30f36138aca161f319087c35f2f83d2c25a7742ab2cb49c.scope.
Jan 20 14:04:13 compute-0 podman[121765]: 2026-01-20 14:04:13.198763003 +0000 UTC m=+0.026648962 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:04:13 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:04:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05ae9311492aff902b940a054a79a3381f9fa2f65a8777288828e904ec2a09d9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:04:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05ae9311492aff902b940a054a79a3381f9fa2f65a8777288828e904ec2a09d9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:04:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05ae9311492aff902b940a054a79a3381f9fa2f65a8777288828e904ec2a09d9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:04:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05ae9311492aff902b940a054a79a3381f9fa2f65a8777288828e904ec2a09d9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:04:13 compute-0 podman[121765]: 2026-01-20 14:04:13.31973492 +0000 UTC m=+0.147620899 container init 7b22c2ff644a3511e30f36138aca161f319087c35f2f83d2c25a7742ab2cb49c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_dubinsky, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:04:13 compute-0 podman[121765]: 2026-01-20 14:04:13.334598065 +0000 UTC m=+0.162483974 container start 7b22c2ff644a3511e30f36138aca161f319087c35f2f83d2c25a7742ab2cb49c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_dubinsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:04:13 compute-0 podman[121765]: 2026-01-20 14:04:13.337829142 +0000 UTC m=+0.165715131 container attach 7b22c2ff644a3511e30f36138aca161f319087c35f2f83d2c25a7742ab2cb49c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_dubinsky, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 20 14:04:13 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v371: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:04:13 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:04:13 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:04:13 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:04:13.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:04:14 compute-0 sad_dubinsky[121781]: {
Jan 20 14:04:14 compute-0 sad_dubinsky[121781]:     "0": [
Jan 20 14:04:14 compute-0 sad_dubinsky[121781]:         {
Jan 20 14:04:14 compute-0 sad_dubinsky[121781]:             "devices": [
Jan 20 14:04:14 compute-0 sad_dubinsky[121781]:                 "/dev/loop3"
Jan 20 14:04:14 compute-0 sad_dubinsky[121781]:             ],
Jan 20 14:04:14 compute-0 sad_dubinsky[121781]:             "lv_name": "ceph_lv0",
Jan 20 14:04:14 compute-0 sad_dubinsky[121781]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:04:14 compute-0 sad_dubinsky[121781]:             "lv_size": "7511998464",
Jan 20 14:04:14 compute-0 sad_dubinsky[121781]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e399cf45-e6b6-5393-99f1-75c601d3f188,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afc3740a-ccee-46f8-b343-22648cc89376,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 20 14:04:14 compute-0 sad_dubinsky[121781]:             "lv_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 14:04:14 compute-0 sad_dubinsky[121781]:             "name": "ceph_lv0",
Jan 20 14:04:14 compute-0 sad_dubinsky[121781]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:04:14 compute-0 sad_dubinsky[121781]:             "tags": {
Jan 20 14:04:14 compute-0 sad_dubinsky[121781]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:04:14 compute-0 sad_dubinsky[121781]:                 "ceph.block_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 14:04:14 compute-0 sad_dubinsky[121781]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 14:04:14 compute-0 sad_dubinsky[121781]:                 "ceph.cluster_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 14:04:14 compute-0 sad_dubinsky[121781]:                 "ceph.cluster_name": "ceph",
Jan 20 14:04:14 compute-0 sad_dubinsky[121781]:                 "ceph.crush_device_class": "",
Jan 20 14:04:14 compute-0 sad_dubinsky[121781]:                 "ceph.encrypted": "0",
Jan 20 14:04:14 compute-0 sad_dubinsky[121781]:                 "ceph.osd_fsid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 14:04:14 compute-0 sad_dubinsky[121781]:                 "ceph.osd_id": "0",
Jan 20 14:04:14 compute-0 sad_dubinsky[121781]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 14:04:14 compute-0 sad_dubinsky[121781]:                 "ceph.type": "block",
Jan 20 14:04:14 compute-0 sad_dubinsky[121781]:                 "ceph.vdo": "0"
Jan 20 14:04:14 compute-0 sad_dubinsky[121781]:             },
Jan 20 14:04:14 compute-0 sad_dubinsky[121781]:             "type": "block",
Jan 20 14:04:14 compute-0 sad_dubinsky[121781]:             "vg_name": "ceph_vg0"
Jan 20 14:04:14 compute-0 sad_dubinsky[121781]:         }
Jan 20 14:04:14 compute-0 sad_dubinsky[121781]:     ]
Jan 20 14:04:14 compute-0 sad_dubinsky[121781]: }
Jan 20 14:04:14 compute-0 systemd[1]: libpod-7b22c2ff644a3511e30f36138aca161f319087c35f2f83d2c25a7742ab2cb49c.scope: Deactivated successfully.
Jan 20 14:04:14 compute-0 podman[121765]: 2026-01-20 14:04:14.091247604 +0000 UTC m=+0.919133553 container died 7b22c2ff644a3511e30f36138aca161f319087c35f2f83d2c25a7742ab2cb49c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_dubinsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 20 14:04:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-05ae9311492aff902b940a054a79a3381f9fa2f65a8777288828e904ec2a09d9-merged.mount: Deactivated successfully.
Jan 20 14:04:14 compute-0 podman[121765]: 2026-01-20 14:04:14.163687436 +0000 UTC m=+0.991573375 container remove 7b22c2ff644a3511e30f36138aca161f319087c35f2f83d2c25a7742ab2cb49c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_dubinsky, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:04:14 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:04:14 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:04:14 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:04:14.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:04:14 compute-0 systemd[1]: libpod-conmon-7b22c2ff644a3511e30f36138aca161f319087c35f2f83d2c25a7742ab2cb49c.scope: Deactivated successfully.
Jan 20 14:04:14 compute-0 sudo[121632]: pam_unix(sudo:session): session closed for user root
Jan 20 14:04:14 compute-0 sudo[121883]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:04:14 compute-0 sudo[121883]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:04:14 compute-0 sudo[121883]: pam_unix(sudo:session): session closed for user root
Jan 20 14:04:14 compute-0 sudo[121970]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-capozkxyafzuztpnjlnfdtacvuimsqeu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917853.8608305-653-104346795362802/AnsiballZ_timezone.py'
Jan 20 14:04:14 compute-0 sudo[121970]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:04:14 compute-0 sudo[121938]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:04:14 compute-0 sudo[121938]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:04:14 compute-0 sudo[121938]: pam_unix(sudo:session): session closed for user root
Jan 20 14:04:14 compute-0 ceph-mon[74360]: pgmap v371: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:04:14 compute-0 sudo[121981]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:04:14 compute-0 sudo[121981]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:04:14 compute-0 sudo[121981]: pam_unix(sudo:session): session closed for user root
Jan 20 14:04:14 compute-0 sudo[122006]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- raw list --format json
Jan 20 14:04:14 compute-0 sudo[122006]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:04:14 compute-0 python3.9[121979]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Jan 20 14:04:14 compute-0 systemd[1]: Starting Time & Date Service...
Jan 20 14:04:14 compute-0 systemd[1]: Started Time & Date Service.
Jan 20 14:04:14 compute-0 sudo[121970]: pam_unix(sudo:session): session closed for user root
Jan 20 14:04:14 compute-0 podman[122093]: 2026-01-20 14:04:14.992866429 +0000 UTC m=+0.065735684 container create dce24ba1c7d47e5944aa43f69d810c385742ec2997a425f7da2cbb8eeb20e8d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_easley, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:04:15 compute-0 systemd[1]: Started libpod-conmon-dce24ba1c7d47e5944aa43f69d810c385742ec2997a425f7da2cbb8eeb20e8d2.scope.
Jan 20 14:04:15 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:04:15 compute-0 podman[122093]: 2026-01-20 14:04:14.965655673 +0000 UTC m=+0.038524918 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:04:15 compute-0 sshd-session[121055]: Connection closed by authenticating user root 159.223.5.14 port 37266 [preauth]
Jan 20 14:04:15 compute-0 podman[122093]: 2026-01-20 14:04:15.073342625 +0000 UTC m=+0.146211930 container init dce24ba1c7d47e5944aa43f69d810c385742ec2997a425f7da2cbb8eeb20e8d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_easley, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:04:15 compute-0 podman[122093]: 2026-01-20 14:04:15.080260679 +0000 UTC m=+0.153129894 container start dce24ba1c7d47e5944aa43f69d810c385742ec2997a425f7da2cbb8eeb20e8d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_easley, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 20 14:04:15 compute-0 podman[122093]: 2026-01-20 14:04:15.083480305 +0000 UTC m=+0.156349550 container attach dce24ba1c7d47e5944aa43f69d810c385742ec2997a425f7da2cbb8eeb20e8d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_easley, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 20 14:04:15 compute-0 strange_easley[122115]: 167 167
Jan 20 14:04:15 compute-0 systemd[1]: libpod-dce24ba1c7d47e5944aa43f69d810c385742ec2997a425f7da2cbb8eeb20e8d2.scope: Deactivated successfully.
Jan 20 14:04:15 compute-0 podman[122093]: 2026-01-20 14:04:15.086701801 +0000 UTC m=+0.159571016 container died dce24ba1c7d47e5944aa43f69d810c385742ec2997a425f7da2cbb8eeb20e8d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_easley, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 20 14:04:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-531f518ec0feae8c0c1e8deb35dea09e4c8ed4013f75b286a004f859d8f899e3-merged.mount: Deactivated successfully.
Jan 20 14:04:15 compute-0 podman[122093]: 2026-01-20 14:04:15.134868556 +0000 UTC m=+0.207737771 container remove dce24ba1c7d47e5944aa43f69d810c385742ec2997a425f7da2cbb8eeb20e8d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_easley, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:04:15 compute-0 systemd[1]: libpod-conmon-dce24ba1c7d47e5944aa43f69d810c385742ec2997a425f7da2cbb8eeb20e8d2.scope: Deactivated successfully.
Jan 20 14:04:15 compute-0 podman[122197]: 2026-01-20 14:04:15.33788711 +0000 UTC m=+0.059366214 container create 37daeabfeabb2e46328c6159edc32db90282478c9960065ee2f5810e2f972cf8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_golick, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 20 14:04:15 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v372: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:04:15 compute-0 systemd[1]: Started libpod-conmon-37daeabfeabb2e46328c6159edc32db90282478c9960065ee2f5810e2f972cf8.scope.
Jan 20 14:04:15 compute-0 podman[122197]: 2026-01-20 14:04:15.309698648 +0000 UTC m=+0.031177832 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:04:15 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:04:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2d8d1c4d0264e26d39822d6b93d81d4a6146a69a025540eeb9fb4fd1e121074/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:04:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2d8d1c4d0264e26d39822d6b93d81d4a6146a69a025540eeb9fb4fd1e121074/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:04:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2d8d1c4d0264e26d39822d6b93d81d4a6146a69a025540eeb9fb4fd1e121074/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:04:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2d8d1c4d0264e26d39822d6b93d81d4a6146a69a025540eeb9fb4fd1e121074/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:04:15 compute-0 podman[122197]: 2026-01-20 14:04:15.423093032 +0000 UTC m=+0.144572156 container init 37daeabfeabb2e46328c6159edc32db90282478c9960065ee2f5810e2f972cf8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_golick, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:04:15 compute-0 podman[122197]: 2026-01-20 14:04:15.433402667 +0000 UTC m=+0.154881781 container start 37daeabfeabb2e46328c6159edc32db90282478c9960065ee2f5810e2f972cf8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_golick, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:04:15 compute-0 podman[122197]: 2026-01-20 14:04:15.44852502 +0000 UTC m=+0.170004234 container attach 37daeabfeabb2e46328c6159edc32db90282478c9960065ee2f5810e2f972cf8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_golick, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 20 14:04:15 compute-0 sudo[122286]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jsmsripdtczwpnciizeutdfotvncjtgm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917855.1489794-680-29559725823745/AnsiballZ_file.py'
Jan 20 14:04:15 compute-0 sudo[122286]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:04:15 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:04:15 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:04:15 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:04:15.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:04:15 compute-0 python3.9[122288]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:04:15 compute-0 sudo[122286]: pam_unix(sudo:session): session closed for user root
Jan 20 14:04:16 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:04:16 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:04:16 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:04:16.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:04:16 compute-0 jovial_golick[122255]: {
Jan 20 14:04:16 compute-0 jovial_golick[122255]:     "afc3740a-ccee-46f8-b343-22648cc89376": {
Jan 20 14:04:16 compute-0 jovial_golick[122255]:         "ceph_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 14:04:16 compute-0 jovial_golick[122255]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 20 14:04:16 compute-0 jovial_golick[122255]:         "osd_id": 0,
Jan 20 14:04:16 compute-0 jovial_golick[122255]:         "osd_uuid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 14:04:16 compute-0 jovial_golick[122255]:         "type": "bluestore"
Jan 20 14:04:16 compute-0 jovial_golick[122255]:     }
Jan 20 14:04:16 compute-0 jovial_golick[122255]: }
Jan 20 14:04:16 compute-0 systemd[1]: libpod-37daeabfeabb2e46328c6159edc32db90282478c9960065ee2f5810e2f972cf8.scope: Deactivated successfully.
Jan 20 14:04:16 compute-0 podman[122197]: 2026-01-20 14:04:16.344763591 +0000 UTC m=+1.066242685 container died 37daeabfeabb2e46328c6159edc32db90282478c9960065ee2f5810e2f972cf8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_golick, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:04:16 compute-0 sudo[122456]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eqbngjfgumegkufhepmqjqgxyipamdxh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917856.038349-704-234816038534141/AnsiballZ_stat.py'
Jan 20 14:04:16 compute-0 sudo[122456]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:04:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-f2d8d1c4d0264e26d39822d6b93d81d4a6146a69a025540eeb9fb4fd1e121074-merged.mount: Deactivated successfully.
Jan 20 14:04:16 compute-0 podman[122197]: 2026-01-20 14:04:16.398536396 +0000 UTC m=+1.120015490 container remove 37daeabfeabb2e46328c6159edc32db90282478c9960065ee2f5810e2f972cf8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_golick, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:04:16 compute-0 systemd[1]: libpod-conmon-37daeabfeabb2e46328c6159edc32db90282478c9960065ee2f5810e2f972cf8.scope: Deactivated successfully.
Jan 20 14:04:16 compute-0 sudo[122006]: pam_unix(sudo:session): session closed for user root
Jan 20 14:04:16 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 14:04:16 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:04:16 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 14:04:16 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:04:16 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 1de6d885-1d1d-4b86-b339-36ee21746f8b does not exist
Jan 20 14:04:16 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 2e337010-fb8a-4998-b3a7-60d098e5ac90 does not exist
Jan 20 14:04:16 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev a27c406b-d992-44b7-93a5-92da74b9902f does not exist
Jan 20 14:04:16 compute-0 ceph-mon[74360]: pgmap v372: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:04:16 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:04:16 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:04:16 compute-0 sudo[122470]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:04:16 compute-0 sudo[122470]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:04:16 compute-0 sudo[122470]: pam_unix(sudo:session): session closed for user root
Jan 20 14:04:16 compute-0 sudo[122495]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 14:04:16 compute-0 sudo[122495]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:04:16 compute-0 sudo[122495]: pam_unix(sudo:session): session closed for user root
Jan 20 14:04:16 compute-0 python3.9[122460]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:04:16 compute-0 sudo[122456]: pam_unix(sudo:session): session closed for user root
Jan 20 14:04:16 compute-0 sudo[122595]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lxngdrctfkuffkisdnooxkqsmuutipns ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917856.038349-704-234816038534141/AnsiballZ_file.py'
Jan 20 14:04:16 compute-0 sudo[122595]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:04:17 compute-0 python3.9[122597]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:04:17 compute-0 sudo[122595]: pam_unix(sudo:session): session closed for user root
Jan 20 14:04:17 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v373: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:04:17 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:04:17 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:04:17 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:04:17.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:04:17 compute-0 sudo[122748]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wtpgsfqouoiewilhuafdamtisrvrjhri ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917857.4945035-740-41035272581763/AnsiballZ_stat.py'
Jan 20 14:04:17 compute-0 sudo[122748]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:04:18 compute-0 python3.9[122750]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:04:18 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:04:18 compute-0 sudo[122748]: pam_unix(sudo:session): session closed for user root
Jan 20 14:04:18 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:04:18 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:04:18 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:04:18.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:04:18 compute-0 sudo[122826]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kihkowqkcwapkciroxhddonmzjjxcbku ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917857.4945035-740-41035272581763/AnsiballZ_file.py'
Jan 20 14:04:18 compute-0 sudo[122826]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:04:18 compute-0 python3.9[122828]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.9kga65jk recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:04:18 compute-0 sudo[122826]: pam_unix(sudo:session): session closed for user root
Jan 20 14:04:18 compute-0 ceph-mon[74360]: pgmap v373: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:04:19 compute-0 sudo[122978]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uusaqrvacernalcbdhjkzghcaylurutr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917858.8774965-776-200967274802343/AnsiballZ_stat.py'
Jan 20 14:04:19 compute-0 sudo[122978]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:04:19 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v374: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:04:19 compute-0 python3.9[122980]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:04:19 compute-0 sudo[122978]: pam_unix(sudo:session): session closed for user root
Jan 20 14:04:19 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:04:19 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:04:19 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:04:19.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:04:19 compute-0 sudo[123057]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bzxhiunulqisabishhxucdavbavdkjzf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917858.8774965-776-200967274802343/AnsiballZ_file.py'
Jan 20 14:04:19 compute-0 sudo[123057]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:04:19 compute-0 python3.9[123059]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:04:19 compute-0 sudo[123057]: pam_unix(sudo:session): session closed for user root
Jan 20 14:04:20 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:04:20 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:04:20 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:04:20.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:04:20 compute-0 ceph-mon[74360]: pgmap v374: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:04:20 compute-0 sudo[123209]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xsrnotjtcybvznnhpazrywylwgvysxlk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917860.194516-815-136795541451335/AnsiballZ_command.py'
Jan 20 14:04:20 compute-0 sudo[123209]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:04:20 compute-0 python3.9[123211]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:04:20 compute-0 sudo[123209]: pam_unix(sudo:session): session closed for user root
Jan 20 14:04:21 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v375: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:04:21 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:04:21 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:04:21 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:04:21.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:04:21 compute-0 sudo[123363]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jtklwrbgptcxoefecvenkwowucattfpm ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1768917861.1976075-839-146622160295943/AnsiballZ_edpm_nftables_from_files.py'
Jan 20 14:04:21 compute-0 sudo[123363]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:04:21 compute-0 python3[123365]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 20 14:04:21 compute-0 sudo[123363]: pam_unix(sudo:session): session closed for user root
Jan 20 14:04:22 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:04:22 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:04:22 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:04:22.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:04:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:04:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:04:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:04:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:04:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:04:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:04:22 compute-0 sudo[123515]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-axzqxnplriqikkjopktubndyowjolmnb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917862.350603-863-13555975965985/AnsiballZ_stat.py'
Jan 20 14:04:22 compute-0 sudo[123515]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:04:22 compute-0 ceph-mon[74360]: pgmap v375: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:04:22 compute-0 python3.9[123517]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:04:22 compute-0 sudo[123515]: pam_unix(sudo:session): session closed for user root
Jan 20 14:04:23 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:04:23 compute-0 sudo[123593]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-koijuypmovyusrylynokyakkkuawadvd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917862.350603-863-13555975965985/AnsiballZ_file.py'
Jan 20 14:04:23 compute-0 sudo[123593]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:04:23 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v376: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:04:23 compute-0 python3.9[123595]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:04:23 compute-0 sudo[123593]: pam_unix(sudo:session): session closed for user root
Jan 20 14:04:23 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:04:23 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:04:23 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:04:23.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:04:24 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:04:24 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:04:24 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:04:24.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:04:24 compute-0 sudo[123746]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rzcufdsffqjlmgvmoqklvrwffebrqhep ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917863.7844324-899-61246052023028/AnsiballZ_stat.py'
Jan 20 14:04:24 compute-0 sudo[123746]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:04:24 compute-0 python3.9[123748]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:04:24 compute-0 sudo[123746]: pam_unix(sudo:session): session closed for user root
Jan 20 14:04:24 compute-0 sudo[123871]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fyamanqhngyommmpgizsorpxtwdmvufw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917863.7844324-899-61246052023028/AnsiballZ_copy.py'
Jan 20 14:04:24 compute-0 sudo[123871]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:04:25 compute-0 python3.9[123873]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768917863.7844324-899-61246052023028/.source.nft follow=False _original_basename=jump-chain.j2 checksum=3ce353c89bce3b135a0ed688d4e338b2efb15185 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:04:25 compute-0 ceph-mon[74360]: pgmap v376: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:04:25 compute-0 sudo[123871]: pam_unix(sudo:session): session closed for user root
Jan 20 14:04:25 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v377: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:04:25 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:04:25 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:04:25 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:04:25.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:04:25 compute-0 sudo[124023]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mxawwmqirqoqwdeiwusebwrnsvzprxwt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917865.325498-944-204371496514234/AnsiballZ_stat.py'
Jan 20 14:04:25 compute-0 sudo[124023]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:04:25 compute-0 python3.9[124025]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:04:25 compute-0 sudo[124023]: pam_unix(sudo:session): session closed for user root
Jan 20 14:04:25 compute-0 sudo[124102]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-loselyhuqrrmfiwvyapuqovnemxxthnv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917865.325498-944-204371496514234/AnsiballZ_file.py'
Jan 20 14:04:25 compute-0 sudo[124102]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:04:26 compute-0 ceph-mon[74360]: pgmap v377: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:04:26 compute-0 python3.9[124104]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:04:26 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:04:26 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:04:26 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:04:26.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:04:26 compute-0 sudo[124102]: pam_unix(sudo:session): session closed for user root
Jan 20 14:04:27 compute-0 sudo[124254]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tpnbyyfolqmbdccxutuotfwtitoioszk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917866.8054554-980-248491702568729/AnsiballZ_stat.py'
Jan 20 14:04:27 compute-0 sudo[124254]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:04:27 compute-0 python3.9[124256]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:04:27 compute-0 sudo[124254]: pam_unix(sudo:session): session closed for user root
Jan 20 14:04:27 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v378: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:04:27 compute-0 sudo[124332]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iddurhbmexdkmsqtbqlrxfgfhctvhwip ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917866.8054554-980-248491702568729/AnsiballZ_file.py'
Jan 20 14:04:27 compute-0 sudo[124332]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:04:27 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:04:27 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:04:27 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:04:27.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:04:27 compute-0 python3.9[124334]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:04:27 compute-0 sudo[124332]: pam_unix(sudo:session): session closed for user root
Jan 20 14:04:28 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:04:28 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:04:28 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:04:28 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:04:28.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:04:28 compute-0 ceph-mon[74360]: pgmap v378: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:04:28 compute-0 sudo[124485]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vxwslvetsmdegxpymxsfpptvnfygysll ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917868.1756866-1016-120829685949480/AnsiballZ_stat.py'
Jan 20 14:04:28 compute-0 sudo[124485]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:04:28 compute-0 python3.9[124487]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:04:28 compute-0 sudo[124485]: pam_unix(sudo:session): session closed for user root
Jan 20 14:04:29 compute-0 sudo[124563]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kfgngfytkqnrboooqogmdlwqdqyjdhor ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917868.1756866-1016-120829685949480/AnsiballZ_file.py'
Jan 20 14:04:29 compute-0 sudo[124563]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:04:29 compute-0 python3.9[124565]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:04:29 compute-0 sudo[124563]: pam_unix(sudo:session): session closed for user root
Jan 20 14:04:29 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v379: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:04:29 compute-0 sshd-session[71328]: Received disconnect from 38.102.83.230 port 48916:11: disconnected by user
Jan 20 14:04:29 compute-0 sshd-session[71328]: Disconnected from user zuul 38.102.83.230 port 48916
Jan 20 14:04:29 compute-0 sshd-session[71325]: pam_unix(sshd:session): session closed for user zuul
Jan 20 14:04:29 compute-0 systemd[1]: session-18.scope: Deactivated successfully.
Jan 20 14:04:29 compute-0 systemd[1]: session-18.scope: Consumed 1min 17.964s CPU time.
Jan 20 14:04:29 compute-0 systemd-logind[796]: Session 18 logged out. Waiting for processes to exit.
Jan 20 14:04:29 compute-0 systemd-logind[796]: Removed session 18.
Jan 20 14:04:29 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:04:29 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:04:29 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:04:29.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:04:29 compute-0 sudo[124716]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ttgxxvgfyaneoasuqotsizflqsirtxod ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917869.6636207-1055-257503202698399/AnsiballZ_command.py'
Jan 20 14:04:29 compute-0 sudo[124716]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:04:30 compute-0 python3.9[124718]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:04:30 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:04:30 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:04:30 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:04:30.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:04:30 compute-0 sudo[124716]: pam_unix(sudo:session): session closed for user root
Jan 20 14:04:30 compute-0 ceph-mon[74360]: pgmap v379: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:04:31 compute-0 sudo[124871]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xykghoidqmjmfsklcpvcdfflbplvsinh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917870.552043-1079-207232564086353/AnsiballZ_blockinfile.py'
Jan 20 14:04:31 compute-0 sudo[124871]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:04:31 compute-0 python3.9[124873]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:04:31 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v380: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:04:31 compute-0 sudo[124871]: pam_unix(sudo:session): session closed for user root
Jan 20 14:04:31 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:04:31 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:04:31 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:04:31.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:04:31 compute-0 sudo[125024]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xkpdzsijhxylrvamqmvbrlgczqnrdgkh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917871.6283414-1106-2879506888006/AnsiballZ_file.py'
Jan 20 14:04:31 compute-0 sudo[125024]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:04:32 compute-0 python3.9[125026]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:04:32 compute-0 sudo[125027]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:04:32 compute-0 sudo[125027]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:04:32 compute-0 sudo[125027]: pam_unix(sudo:session): session closed for user root
Jan 20 14:04:32 compute-0 sudo[125024]: pam_unix(sudo:session): session closed for user root
Jan 20 14:04:32 compute-0 sudo[125052]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:04:32 compute-0 sudo[125052]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:04:32 compute-0 sudo[125052]: pam_unix(sudo:session): session closed for user root
Jan 20 14:04:32 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:04:32 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:04:32 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:04:32.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:04:32 compute-0 ceph-mon[74360]: pgmap v380: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:04:32 compute-0 sudo[125226]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jnaicjpmwieheiprhcrzzidscfkamfop ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917872.255727-1106-222689813247803/AnsiballZ_file.py'
Jan 20 14:04:32 compute-0 sudo[125226]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:04:32 compute-0 python3.9[125228]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:04:32 compute-0 sudo[125226]: pam_unix(sudo:session): session closed for user root
Jan 20 14:04:33 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:04:33 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v381: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:04:33 compute-0 sudo[125378]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ktbllorqvexnianctlilcxagzbgmuwly ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917872.9743679-1151-88981404460870/AnsiballZ_mount.py'
Jan 20 14:04:33 compute-0 sudo[125378]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:04:33 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:04:33 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:04:33 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:04:33.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:04:33 compute-0 python3.9[125380]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 20 14:04:33 compute-0 sudo[125378]: pam_unix(sudo:session): session closed for user root
Jan 20 14:04:34 compute-0 sudo[125531]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zrlrlxgabxuwtwnsnsslsdnjqznmrsqn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917873.871616-1151-117953160400081/AnsiballZ_mount.py'
Jan 20 14:04:34 compute-0 sudo[125531]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:04:34 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:04:34 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:04:34 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:04:34.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:04:34 compute-0 python3.9[125533]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 20 14:04:34 compute-0 sudo[125531]: pam_unix(sudo:session): session closed for user root
Jan 20 14:04:34 compute-0 ceph-mon[74360]: pgmap v381: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:04:35 compute-0 sshd-session[117334]: Connection closed by 192.168.122.30 port 59584
Jan 20 14:04:35 compute-0 sshd-session[117331]: pam_unix(sshd:session): session closed for user zuul
Jan 20 14:04:35 compute-0 systemd[1]: session-40.scope: Deactivated successfully.
Jan 20 14:04:35 compute-0 systemd[1]: session-40.scope: Consumed 30.032s CPU time.
Jan 20 14:04:35 compute-0 systemd-logind[796]: Session 40 logged out. Waiting for processes to exit.
Jan 20 14:04:35 compute-0 systemd-logind[796]: Removed session 40.
Jan 20 14:04:35 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v382: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:04:35 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:04:35 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:04:35 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:04:35.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:04:36 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:04:36 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:04:36 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:04:36.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:04:36 compute-0 ceph-mon[74360]: pgmap v382: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:04:37 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v383: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:04:37 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:04:37 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:04:37 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:04:37.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:04:38 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:04:38 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:04:38 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:04:38 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:04:38.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:04:38 compute-0 ceph-mon[74360]: pgmap v383: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:04:39 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v384: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:04:39 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:04:39 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:04:39 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:04:39.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:04:40 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:04:40 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:04:40 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:04:40.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:04:40 compute-0 ceph-mon[74360]: pgmap v384: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:04:40 compute-0 sshd-session[125563]: Accepted publickey for zuul from 192.168.122.30 port 57476 ssh2: ECDSA SHA256:Yw0kyD5N4lqNgr1J3b5cYIIxKFrTRY8zW6kk+n6imz4
Jan 20 14:04:40 compute-0 systemd-logind[796]: New session 41 of user zuul.
Jan 20 14:04:40 compute-0 systemd[1]: Started Session 41 of User zuul.
Jan 20 14:04:40 compute-0 sshd-session[125563]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 20 14:04:41 compute-0 sshd-session[125561]: Connection closed by authenticating user root 157.245.78.139 port 41470 [preauth]
Jan 20 14:04:41 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v385: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:04:41 compute-0 sudo[125716]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yxqohxjaaqnoxbqutzlsqkvtygtwjkqn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917880.979864-23-33827474452215/AnsiballZ_tempfile.py'
Jan 20 14:04:41 compute-0 sudo[125716]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:04:41 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:04:41 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:04:41 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:04:41.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:04:41 compute-0 python3.9[125718]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Jan 20 14:04:41 compute-0 sudo[125716]: pam_unix(sudo:session): session closed for user root
Jan 20 14:04:42 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:04:42 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:04:42 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:04:42.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:04:42 compute-0 sudo[125869]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kkvlfbgmoeatcbdyserhegdywfghmwpe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917882.0313623-59-29879020446541/AnsiballZ_stat.py'
Jan 20 14:04:42 compute-0 sudo[125869]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:04:42 compute-0 python3.9[125871]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 14:04:42 compute-0 sudo[125869]: pam_unix(sudo:session): session closed for user root
Jan 20 14:04:42 compute-0 ceph-mon[74360]: pgmap v385: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:04:43 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:04:43 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v386: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:04:43 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:04:43 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:04:43 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:04:43.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:04:44 compute-0 sudo[126024]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yaiyynisirnvivntfcawixaeckayxadb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917883.609368-83-98086193321828/AnsiballZ_slurp.py'
Jan 20 14:04:44 compute-0 sudo[126024]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:04:44 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:04:44 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:04:44 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:04:44.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:04:44 compute-0 python3.9[126026]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts
Jan 20 14:04:44 compute-0 sudo[126024]: pam_unix(sudo:session): session closed for user root
Jan 20 14:04:44 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Jan 20 14:04:45 compute-0 ceph-mon[74360]: pgmap v386: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:04:45 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v387: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:04:45 compute-0 sudo[126179]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aejzywtgfeikadegkzaacjjxsrdygoyv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917884.5905457-107-80003633282691/AnsiballZ_stat.py'
Jan 20 14:04:45 compute-0 sudo[126179]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:04:45 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:04:45 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:04:45 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:04:45.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:04:45 compute-0 python3.9[126181]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible.9dd7yixo follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:04:45 compute-0 sudo[126179]: pam_unix(sudo:session): session closed for user root
Jan 20 14:04:46 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:04:46 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:04:46 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:04:46.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:04:46 compute-0 ceph-mon[74360]: pgmap v387: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:04:46 compute-0 sudo[126305]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hvvxvegcbbbtuvhuxtgbjzaahyratrda ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917884.5905457-107-80003633282691/AnsiballZ_copy.py'
Jan 20 14:04:46 compute-0 sudo[126305]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:04:46 compute-0 python3.9[126307]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible.9dd7yixo mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1768917884.5905457-107-80003633282691/.source.9dd7yixo _original_basename=.60vdd1f2 follow=False checksum=309fed797bdebad351617b1a1ea9eb224966ee92 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:04:46 compute-0 sudo[126305]: pam_unix(sudo:session): session closed for user root
Jan 20 14:04:47 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v388: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:04:47 compute-0 sudo[126457]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sfjflckaqqdjvxuzjlqzgfrhlpfjszup ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917886.8427668-152-105636209724965/AnsiballZ_setup.py'
Jan 20 14:04:47 compute-0 sudo[126457]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:04:47 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:04:47 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:04:47 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:04:47.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:04:47 compute-0 python3.9[126459]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 14:04:47 compute-0 sudo[126457]: pam_unix(sudo:session): session closed for user root
Jan 20 14:04:48 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:04:48 compute-0 ceph-mon[74360]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #24. Immutable memtables: 0.
Jan 20 14:04:48 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:04:48.057126) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 14:04:48 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 24
Jan 20 14:04:48 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768917888057192, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 802, "num_deletes": 250, "total_data_size": 1182822, "memory_usage": 1212816, "flush_reason": "Manual Compaction"}
Jan 20 14:04:48 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #25: started
Jan 20 14:04:48 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768917888063549, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 25, "file_size": 757928, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 9881, "largest_seqno": 10681, "table_properties": {"data_size": 754473, "index_size": 1235, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 8879, "raw_average_key_size": 20, "raw_value_size": 747181, "raw_average_value_size": 1690, "num_data_blocks": 54, "num_entries": 442, "num_filter_entries": 442, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768917823, "oldest_key_time": 1768917823, "file_creation_time": 1768917888, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1688fb6f-f397-4aac-a0e8-874ff91d97a7", "db_session_id": "5V3N2TVXYZBCXP55EZHK", "orig_file_number": 25, "seqno_to_time_mapping": "N/A"}}
Jan 20 14:04:48 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 6452 microseconds, and 2764 cpu microseconds.
Jan 20 14:04:48 compute-0 ceph-mon[74360]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 14:04:48 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:04:48.063584) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #25: 757928 bytes OK
Jan 20 14:04:48 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:04:48.063597) [db/memtable_list.cc:519] [default] Level-0 commit table #25 started
Jan 20 14:04:48 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:04:48.065455) [db/memtable_list.cc:722] [default] Level-0 commit table #25: memtable #1 done
Jan 20 14:04:48 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:04:48.065466) EVENT_LOG_v1 {"time_micros": 1768917888065463, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 14:04:48 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:04:48.065478) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 14:04:48 compute-0 ceph-mon[74360]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 1178890, prev total WAL file size 1178890, number of live WAL files 2.
Jan 20 14:04:48 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000021.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:04:48 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:04:48.065967) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740030' seq:72057594037927935, type:22 .. '6D67727374617400323531' seq:0, type:0; will stop at (end)
Jan 20 14:04:48 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 14:04:48 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [25(740KB)], [23(9266KB)]
Jan 20 14:04:48 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768917888066067, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [25], "files_L6": [23], "score": -1, "input_data_size": 10246398, "oldest_snapshot_seqno": -1}
Jan 20 14:04:48 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #26: 3750 keys, 7632924 bytes, temperature: kUnknown
Jan 20 14:04:48 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768917888129346, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 26, "file_size": 7632924, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7605176, "index_size": 17270, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9413, "raw_key_size": 91167, "raw_average_key_size": 24, "raw_value_size": 7534724, "raw_average_value_size": 2009, "num_data_blocks": 754, "num_entries": 3750, "num_filter_entries": 3750, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768917305, "oldest_key_time": 0, "file_creation_time": 1768917888, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1688fb6f-f397-4aac-a0e8-874ff91d97a7", "db_session_id": "5V3N2TVXYZBCXP55EZHK", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Jan 20 14:04:48 compute-0 ceph-mon[74360]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 14:04:48 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:04:48.129743) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 7632924 bytes
Jan 20 14:04:48 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:04:48.132058) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 161.6 rd, 120.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 9.0 +0.0 blob) out(7.3 +0.0 blob), read-write-amplify(23.6) write-amplify(10.1) OK, records in: 4238, records dropped: 488 output_compression: NoCompression
Jan 20 14:04:48 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:04:48.132093) EVENT_LOG_v1 {"time_micros": 1768917888132076, "job": 8, "event": "compaction_finished", "compaction_time_micros": 63420, "compaction_time_cpu_micros": 41180, "output_level": 6, "num_output_files": 1, "total_output_size": 7632924, "num_input_records": 4238, "num_output_records": 3750, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 14:04:48 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000025.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:04:48 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768917888132535, "job": 8, "event": "table_file_deletion", "file_number": 25}
Jan 20 14:04:48 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:04:48 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768917888136280, "job": 8, "event": "table_file_deletion", "file_number": 23}
Jan 20 14:04:48 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:04:48.065872) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:04:48 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:04:48.136350) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:04:48 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:04:48.136356) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:04:48 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:04:48.136358) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:04:48 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:04:48.136360) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:04:48 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:04:48.136362) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:04:48 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:04:48 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:04:48 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:04:48.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:04:48 compute-0 sudo[126610]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rkwyvhvcfrqinrqoxpdnyagdhqnlcqon ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917888.109106-177-59871799233695/AnsiballZ_blockinfile.py'
Jan 20 14:04:48 compute-0 sudo[126610]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:04:48 compute-0 python3.9[126612]: ansible-ansible.builtin.blockinfile Invoked with block=compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDrCUasX8PhlctvvIb2eE6+Z0hELmfczQ6UoBD+mPtCobptr/s786JmwJ3D8nIoKhlCLVSmhRfbqf1Pm45RUPTEtSuaa6HBDy40dZhTXU34X4KbGfKmur2bp9S/1w83ArKvI8inSqqk2qoMx1l7ECkEgeT+GbFwKfYLnbq5OV4Ms3tzl/uFUC/Xzxs2dbXlhozQiSamcO/a6EObErTvR8PrtaOoLFtTiD/I+oN+rkdBPkBc6r0qT4jS7nU1FOlT96meSZHE7Q1n8pxcy9PEc8w9hFdd1Zj8/WcGIdeEJsekuouK1Lut/sofQLZHyUMWJTcnBjx8BsjGx9NjUHPYUWIw+DZo7lT2QurAPNnaX4rp9ciGV2Bdm3ylNoOu3izNvM1JGTw3xRyYrmyxyWv3Euc35JXa0w07Xrqr+6Ckih0WTLU6q3Rlnrc/grpDC821sHrsljerHipJVOCbZB39LvV6wDDBlqfYZzfqID3dIqlVli4eL12J0K7jr7QAlPRhNf0=
                                             compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOG07miJwhzuA/nm0wvGIorydl2xbBiiDhE7PypnJ/jC
                                             compute-2.ctlplane.example.com,192.168.122.102,compute-2* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKiJpWtps/bRsuEHfak4zDuqPHKOWFLaEA2h86H7tPlrZHR8okAVZWCmY7keO3Ad1DFyffUtJPKv5OvTK91xGO8=
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC/dEYtIJ/delwiq9xMMctU8myoGU/TKMiFUM+i3BSaGKrC0rujad6qo1LAtjth5aYbBcgBhxy0UEX0oCruQQgc5qDpPmWHmJiAwdQJaDu6GxTRl3PlXF2u4rd0Rz72DAMuCxPSYedeHU91uL4vlrcD95xONWew2wa9lUuqQWdgj8DtqnB9T895BihDk9vFLXAaoGJcYZVGKJmXR8sOzNTFQxefqstVO0/dfbRUyFd0Ukp5v7rTmLxw0Np5WcGMOg9l/iRzWTopxnTRvXpBoGlFCmzNvTG2uH08dJ4FU5Wk9/iSxonuiVJu9DKs8Tp4EajaA4Y6cEuZiMhhqi7vw6zVCQuCmRBpny6Ub1Ag2CesMYgxwOVJO5cHsKh3BzuPFsh1gMgrrZK7v+qfm2r1rhHlPsCWrcnrtUIZa7gyzdFvHytTh/4uyGMgNpbwxkyCxgSN4PleQy2wvxy/DFW+JxCDzI4jK9LFH5aojzEhUtj+P3E7CXL/wRPxDJdfEU6PhTk=
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEa1zL0TUD00vr72wZq3y4rgtSnctWBvs+gME/0/EAsV
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBI2WwWe4rQW0CaFwcmci1J5n144T87fcxCH+Y2CVZd5XQ7Cvzlhh1cGNDX81Tng3KgxvKOuz3mdiSCLqx8noiD0=
                                             compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC/3N9PJZXpat0uFh2x2RoV9B0Ih74HU9CPf+g/5HncM7gCVvCpW3CBde1qNDRU2iY9rzpOVPwzi4YzoAUcxB5KAiqZOI9ylmzfiD8JXQ+myLmIRLxHOdXFaEQ4mMp4W+X37hCZ6sdfm6Yqd6eqBuZrM/72ltYoewWBNCG/Hgqzu30L9WC4+BF+iADHT7Qnmvh/cc9U71WxB4h2ikBo1SdGoFCqoez7ajitqx+dw7VWaOtEPliS0LZuDtN3Zt/cBBgxhb/FaAEI3jRP2ej9X0NJW91YxzBygyxiVasslx92g/GmnDFOWVZb5ai/JJsNH6pLTjs25IzvnuWIf8/ZLgZ03zziR4mBLP12CIVF8g1CzaqK1IILDKkjS/dzDiTBefmiQ2+N0i5EEXOgmxchqOqTkFPQg/ar0+0uBPkwzAI0HDk99czhyYHFlO+PhnULVkL1z+XLwHBgOrbNNVQQcJCvady4Gadh66mu1UrLpryNYOgZiugZi67Biha4ZPzPHok=
                                             compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF41dx3BXAuEvQwQNtbUM7rIrbaOLr5CRvYNdDD+UMr9
                                             compute-1.ctlplane.example.com,192.168.122.101,compute-1* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBENFrTpm22/xEaEJMzd7C5WyJttJdK+HK5kxP8/NuvvAQSlLtEulBZnvD/OX5hk3/sDYhPQelj3YsNX1Plw5PJQ=
                                              create=True mode=0644 path=/tmp/ansible.9dd7yixo state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:04:48 compute-0 sudo[126610]: pam_unix(sudo:session): session closed for user root
Jan 20 14:04:48 compute-0 ceph-mon[74360]: pgmap v388: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:04:49 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v389: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:04:49 compute-0 sudo[126762]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nfmxyfathdbjbnbtbbymwozlzwdaichu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917889.017931-201-109408467744142/AnsiballZ_command.py'
Jan 20 14:04:49 compute-0 sudo[126762]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:04:49 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:04:49 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:04:49 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:04:49.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:04:49 compute-0 python3.9[126764]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.9dd7yixo' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:04:49 compute-0 sudo[126762]: pam_unix(sudo:session): session closed for user root
Jan 20 14:04:50 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:04:50 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:04:50 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:04:50.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:04:50 compute-0 sudo[126917]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-chlmgrfgunujifwanfrcgxtkqdtiekjd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917889.930458-225-235584989711288/AnsiballZ_file.py'
Jan 20 14:04:50 compute-0 sudo[126917]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:04:50 compute-0 python3.9[126919]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.9dd7yixo state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:04:50 compute-0 sudo[126917]: pam_unix(sudo:session): session closed for user root
Jan 20 14:04:50 compute-0 ceph-mon[74360]: pgmap v389: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:04:51 compute-0 sshd-session[125566]: Connection closed by 192.168.122.30 port 57476
Jan 20 14:04:51 compute-0 sshd-session[125563]: pam_unix(sshd:session): session closed for user zuul
Jan 20 14:04:51 compute-0 systemd[1]: session-41.scope: Deactivated successfully.
Jan 20 14:04:51 compute-0 systemd[1]: session-41.scope: Consumed 5.337s CPU time.
Jan 20 14:04:51 compute-0 systemd-logind[796]: Session 41 logged out. Waiting for processes to exit.
Jan 20 14:04:51 compute-0 systemd-logind[796]: Removed session 41.
Jan 20 14:04:51 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v390: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:04:51 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:04:51 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:04:51 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:04:51.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:04:52 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:04:52 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:04:52 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:04:52.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:04:52 compute-0 sudo[126946]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:04:52 compute-0 sudo[126946]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:04:52 compute-0 sudo[126946]: pam_unix(sudo:session): session closed for user root
Jan 20 14:04:52 compute-0 sudo[126971]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:04:52 compute-0 sudo[126971]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:04:52 compute-0 sudo[126971]: pam_unix(sudo:session): session closed for user root
Jan 20 14:04:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Optimize plan auto_2026-01-20_14:04:52
Jan 20 14:04:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 14:04:52 compute-0 ceph-mgr[74653]: [balancer INFO root] do_upmap
Jan 20 14:04:52 compute-0 ceph-mgr[74653]: [balancer INFO root] pools ['volumes', 'default.rgw.meta', 'cephfs.cephfs.meta', '.rgw.root', 'images', '.mgr', 'vms', 'default.rgw.log', 'default.rgw.control', 'cephfs.cephfs.data', 'backups']
Jan 20 14:04:52 compute-0 ceph-mgr[74653]: [balancer INFO root] prepared 0/10 changes
Jan 20 14:04:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:04:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:04:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:04:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:04:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:04:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:04:52 compute-0 ceph-mon[74360]: pgmap v390: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:04:53 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:04:53 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v391: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:04:53 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:04:53 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:04:53 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:04:53.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:04:54 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:04:54 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:04:54 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:04:54.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:04:54 compute-0 ceph-mon[74360]: pgmap v391: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:04:55 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v392: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:04:55 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:04:55 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:04:55 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:04:55.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:04:56 compute-0 sshd-session[126998]: Accepted publickey for zuul from 192.168.122.30 port 44866 ssh2: ECDSA SHA256:Yw0kyD5N4lqNgr1J3b5cYIIxKFrTRY8zW6kk+n6imz4
Jan 20 14:04:56 compute-0 systemd-logind[796]: New session 42 of user zuul.
Jan 20 14:04:56 compute-0 systemd[1]: Started Session 42 of User zuul.
Jan 20 14:04:56 compute-0 sshd-session[126998]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 20 14:04:56 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:04:56 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:04:56 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:04:56.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:04:56 compute-0 ceph-mon[74360]: pgmap v392: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:04:57 compute-0 python3.9[127153]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 14:04:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 14:04:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:04:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 14:04:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:04:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:04:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:04:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:04:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:04:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:04:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:04:57 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v393: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:04:57 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:04:57 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:04:57 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:04:57.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:04:58 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:04:58 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:04:58 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:04:58 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:04:58.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:04:58 compute-0 sudo[127308]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xqdakefehfuysgclczxbzyakimrflusm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917897.664277-56-194903083096561/AnsiballZ_systemd.py'
Jan 20 14:04:58 compute-0 sudo[127308]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:04:58 compute-0 python3.9[127310]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Jan 20 14:04:58 compute-0 sudo[127308]: pam_unix(sudo:session): session closed for user root
Jan 20 14:04:59 compute-0 sudo[127462]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jkktpqlilggutdpoydwprgzleqonjics ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917898.9954739-80-23951731506662/AnsiballZ_systemd.py'
Jan 20 14:04:59 compute-0 sudo[127462]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:04:59 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v394: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:04:59 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:04:59 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:04:59 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:04:59.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:04:59 compute-0 python3.9[127464]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 20 14:04:59 compute-0 sudo[127462]: pam_unix(sudo:session): session closed for user root
Jan 20 14:04:59 compute-0 sshd-session[127000]: Connection closed by authenticating user root 159.223.5.14 port 48084 [preauth]
Jan 20 14:05:00 compute-0 ceph-mon[74360]: pgmap v393: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:05:00 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:05:00 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:05:00 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:05:00.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:05:00 compute-0 sudo[127616]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xpbyodnbkiyzkubhdczhjsppicbgxeja ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917900.1158507-107-154854204853468/AnsiballZ_command.py'
Jan 20 14:05:00 compute-0 sudo[127616]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:05:00 compute-0 python3.9[127618]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:05:00 compute-0 sudo[127616]: pam_unix(sudo:session): session closed for user root
Jan 20 14:05:01 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v395: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:05:01 compute-0 ceph-mon[74360]: pgmap v394: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:05:01 compute-0 sudo[127769]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-byrqditlvsqaxnlxunbadbexfpxuodwq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917901.0667214-131-139707169545830/AnsiballZ_stat.py'
Jan 20 14:05:01 compute-0 sudo[127769]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:05:01 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:05:01 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:05:01 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:05:01.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:05:01 compute-0 python3.9[127771]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 14:05:01 compute-0 sudo[127769]: pam_unix(sudo:session): session closed for user root
Jan 20 14:05:02 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:05:02 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:05:02 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:05:02.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:05:02 compute-0 sudo[127922]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vskhgffznbovsrptpqzymbhrrpegxuxe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917902.0428782-158-229602594643181/AnsiballZ_file.py'
Jan 20 14:05:02 compute-0 sudo[127922]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:05:02 compute-0 python3.9[127924]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:05:02 compute-0 sudo[127922]: pam_unix(sudo:session): session closed for user root
Jan 20 14:05:03 compute-0 sshd-session[127002]: Connection closed by 192.168.122.30 port 44866
Jan 20 14:05:03 compute-0 sshd-session[126998]: pam_unix(sshd:session): session closed for user zuul
Jan 20 14:05:03 compute-0 systemd-logind[796]: Session 42 logged out. Waiting for processes to exit.
Jan 20 14:05:03 compute-0 systemd[1]: session-42.scope: Deactivated successfully.
Jan 20 14:05:03 compute-0 systemd[1]: session-42.scope: Consumed 3.932s CPU time.
Jan 20 14:05:03 compute-0 systemd-logind[796]: Removed session 42.
Jan 20 14:05:03 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:05:03 compute-0 ceph-mon[74360]: pgmap v395: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:05:03 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v396: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:05:03 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:05:03 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:05:03 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:05:03.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:05:04 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:05:04 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:05:04 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:05:04.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:05:05 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v397: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:05:05 compute-0 ceph-mon[74360]: pgmap v396: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:05:05 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:05:05 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:05:05 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:05:05.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:05:06 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:05:06 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:05:06 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:05:06.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:05:06 compute-0 ceph-mon[74360]: pgmap v397: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:05:07 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v398: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:05:07 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:05:07 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:05:07 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:05:07.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:05:07 compute-0 ceph-mon[74360]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 20 14:05:07 compute-0 ceph-mon[74360]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Cumulative writes: 2404 writes, 10K keys, 2404 commit groups, 1.0 writes per commit group, ingest: 0.01 GB, 0.02 MB/s
                                           Cumulative WAL: 2404 writes, 2404 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2404 writes, 10K keys, 2404 commit groups, 1.0 writes per commit group, ingest: 13.70 MB, 0.02 MB/s
                                           Interval WAL: 2404 writes, 2404 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    121.6      0.09              0.04         4    0.023       0      0       0.0       0.0
                                             L6      1/0    7.28 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.1    143.9    122.9      0.19              0.10         3    0.064     11K   1302       0.0       0.0
                                            Sum      1/0    7.28 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   3.1     97.0    122.5      0.29              0.14         7    0.041     11K   1302       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   3.1     98.2    123.7      0.28              0.14         6    0.047     11K   1302       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0    143.9    122.9      0.19              0.10         3    0.064     11K   1302       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    125.6      0.09              0.04         3    0.030       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     14.7      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.011, interval 0.011
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.03 GB write, 0.06 MB/s write, 0.03 GB read, 0.05 MB/s read, 0.3 seconds
                                           Interval compaction: 0.03 GB write, 0.06 MB/s write, 0.03 GB read, 0.05 MB/s read, 0.3 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5576af6451f0#2 capacity: 308.00 MB usage: 1.02 MB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 6.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(53,913.92 KB,0.289773%) FilterBlock(8,42.11 KB,0.0133514%) IndexBlock(8,88.06 KB,0.0279216%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 20 14:05:08 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:05:08 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:05:08 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:05:08 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:05:08.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:05:08 compute-0 sshd-session[127952]: Accepted publickey for zuul from 192.168.122.30 port 52510 ssh2: ECDSA SHA256:Yw0kyD5N4lqNgr1J3b5cYIIxKFrTRY8zW6kk+n6imz4
Jan 20 14:05:08 compute-0 systemd-logind[796]: New session 43 of user zuul.
Jan 20 14:05:08 compute-0 systemd[1]: Started Session 43 of User zuul.
Jan 20 14:05:08 compute-0 sshd-session[127952]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 20 14:05:08 compute-0 ceph-mon[74360]: pgmap v398: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:05:09 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v399: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:05:09 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:05:09 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:05:09 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:05:09.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:05:09 compute-0 python3.9[128105]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 14:05:10 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:05:10 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:05:10 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:05:10.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:05:10 compute-0 sudo[128260]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fhionztotgtshcqvamuxswpaujrghauq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917910.3113673-62-219201488893807/AnsiballZ_setup.py'
Jan 20 14:05:10 compute-0 sudo[128260]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:05:10 compute-0 ceph-mon[74360]: pgmap v399: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:05:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 14:05:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:05:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 20 14:05:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:05:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:05:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:05:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:05:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:05:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:05:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:05:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:05:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:05:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 20 14:05:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:05:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:05:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:05:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 20 14:05:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:05:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 20 14:05:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:05:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:05:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:05:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 20 14:05:10 compute-0 python3.9[128262]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 20 14:05:11 compute-0 sudo[128260]: pam_unix(sudo:session): session closed for user root
Jan 20 14:05:11 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v400: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:05:11 compute-0 sudo[128344]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vgnhjiuvefcicuvqsgwqqbpodbackyoh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917910.3113673-62-219201488893807/AnsiballZ_dnf.py'
Jan 20 14:05:11 compute-0 sudo[128344]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:05:11 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:05:11 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:05:11 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:05:11.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:05:11 compute-0 python3.9[128346]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 20 14:05:12 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:05:12 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:05:12 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:05:12.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:05:12 compute-0 sudo[128349]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:05:12 compute-0 sudo[128349]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:05:12 compute-0 sudo[128349]: pam_unix(sudo:session): session closed for user root
Jan 20 14:05:12 compute-0 sudo[128374]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:05:12 compute-0 sudo[128374]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:05:12 compute-0 sudo[128374]: pam_unix(sudo:session): session closed for user root
Jan 20 14:05:12 compute-0 ceph-mon[74360]: pgmap v400: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:05:13 compute-0 sudo[128344]: pam_unix(sudo:session): session closed for user root
Jan 20 14:05:13 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:05:13 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v401: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:05:13 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:05:13 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:05:13 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:05:13.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:05:13 compute-0 python3.9[128549]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:05:14 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:05:14 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:05:14 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:05:14.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:05:15 compute-0 ceph-mon[74360]: pgmap v401: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:05:15 compute-0 python3.9[128700]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 20 14:05:15 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v402: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:05:15 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:05:15 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:05:15 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:05:15.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:05:16 compute-0 python3.9[128851]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 14:05:16 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:05:16 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:05:16 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:05:16.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:05:16 compute-0 sudo[129002]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:05:17 compute-0 ceph-mon[74360]: pgmap v402: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:05:17 compute-0 sudo[129002]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:05:17 compute-0 sudo[129002]: pam_unix(sudo:session): session closed for user root
Jan 20 14:05:17 compute-0 sudo[129027]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:05:17 compute-0 sudo[129027]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:05:17 compute-0 sudo[129027]: pam_unix(sudo:session): session closed for user root
Jan 20 14:05:17 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v403: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:05:17 compute-0 sudo[129052]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:05:17 compute-0 sudo[129052]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:05:17 compute-0 sudo[129052]: pam_unix(sudo:session): session closed for user root
Jan 20 14:05:17 compute-0 python3.9[129001]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 14:05:17 compute-0 sudo[129077]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 14:05:17 compute-0 sudo[129077]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:05:17 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:05:17 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:05:17 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:05:17.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:05:17 compute-0 sudo[129077]: pam_unix(sudo:session): session closed for user root
Jan 20 14:05:17 compute-0 sshd-session[127955]: Connection closed by 192.168.122.30 port 52510
Jan 20 14:05:17 compute-0 sshd-session[127952]: pam_unix(sshd:session): session closed for user zuul
Jan 20 14:05:17 compute-0 systemd[1]: session-43.scope: Deactivated successfully.
Jan 20 14:05:17 compute-0 systemd[1]: session-43.scope: Consumed 5.732s CPU time.
Jan 20 14:05:17 compute-0 systemd-logind[796]: Session 43 logged out. Waiting for processes to exit.
Jan 20 14:05:17 compute-0 systemd-logind[796]: Removed session 43.
Jan 20 14:05:18 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:05:18 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:05:18 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:05:18 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:05:18.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:05:18 compute-0 ceph-mon[74360]: pgmap v403: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:05:19 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v404: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:05:19 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:05:19 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:05:19 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:05:19.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:05:19 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 20 14:05:19 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:05:19 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 20 14:05:19 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:05:20 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:05:20 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:05:20 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:05:20.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:05:20 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 14:05:20 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:05:20 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 20 14:05:20 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:05:20 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 20 14:05:20 compute-0 ceph-mon[74360]: pgmap v404: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:05:20 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:05:20 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:05:20 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:05:20 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev c5a13c6e-aee4-4ed6-892f-03bbfdb98ced does not exist
Jan 20 14:05:20 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 404f7d85-b934-4318-8d8d-1676ff049112 does not exist
Jan 20 14:05:20 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 54300972-d901-42ea-84fb-76a29917284a does not exist
Jan 20 14:05:20 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 20 14:05:20 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:05:20 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 20 14:05:20 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:05:20 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 14:05:20 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:05:20 compute-0 sudo[129160]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:05:20 compute-0 sudo[129160]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:05:20 compute-0 sudo[129160]: pam_unix(sudo:session): session closed for user root
Jan 20 14:05:21 compute-0 sudo[129185]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:05:21 compute-0 sudo[129185]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:05:21 compute-0 sudo[129185]: pam_unix(sudo:session): session closed for user root
Jan 20 14:05:21 compute-0 sudo[129210]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:05:21 compute-0 sudo[129210]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:05:21 compute-0 sudo[129210]: pam_unix(sudo:session): session closed for user root
Jan 20 14:05:21 compute-0 sudo[129235]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 14:05:21 compute-0 sudo[129235]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:05:21 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v405: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:05:21 compute-0 podman[129299]: 2026-01-20 14:05:21.492942399 +0000 UTC m=+0.101412481 container create fc719cf383a636a6f1d8e46bd4cbb34837763f5efe71eb97c4db7298d3953b0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_nash, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:05:21 compute-0 podman[129299]: 2026-01-20 14:05:21.412028401 +0000 UTC m=+0.020498493 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:05:21 compute-0 systemd[1]: Started libpod-conmon-fc719cf383a636a6f1d8e46bd4cbb34837763f5efe71eb97c4db7298d3953b0e.scope.
Jan 20 14:05:21 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:05:21 compute-0 podman[129299]: 2026-01-20 14:05:21.63197895 +0000 UTC m=+0.240449112 container init fc719cf383a636a6f1d8e46bd4cbb34837763f5efe71eb97c4db7298d3953b0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_nash, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 14:05:21 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:05:21 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:05:21 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:05:21.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:05:21 compute-0 podman[129299]: 2026-01-20 14:05:21.638198098 +0000 UTC m=+0.246668180 container start fc719cf383a636a6f1d8e46bd4cbb34837763f5efe71eb97c4db7298d3953b0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_nash, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 20 14:05:21 compute-0 podman[129299]: 2026-01-20 14:05:21.641599249 +0000 UTC m=+0.250069421 container attach fc719cf383a636a6f1d8e46bd4cbb34837763f5efe71eb97c4db7298d3953b0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_nash, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:05:21 compute-0 heuristic_nash[129315]: 167 167
Jan 20 14:05:21 compute-0 systemd[1]: libpod-fc719cf383a636a6f1d8e46bd4cbb34837763f5efe71eb97c4db7298d3953b0e.scope: Deactivated successfully.
Jan 20 14:05:21 compute-0 podman[129299]: 2026-01-20 14:05:21.644262431 +0000 UTC m=+0.252732573 container died fc719cf383a636a6f1d8e46bd4cbb34837763f5efe71eb97c4db7298d3953b0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_nash, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 20 14:05:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-cfaa7951095c2e17b6e97f1bf554ecb11469740147f997d9902051bb2cdfaba5-merged.mount: Deactivated successfully.
Jan 20 14:05:21 compute-0 podman[129299]: 2026-01-20 14:05:21.692819347 +0000 UTC m=+0.301289439 container remove fc719cf383a636a6f1d8e46bd4cbb34837763f5efe71eb97c4db7298d3953b0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_nash, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:05:21 compute-0 systemd[1]: libpod-conmon-fc719cf383a636a6f1d8e46bd4cbb34837763f5efe71eb97c4db7298d3953b0e.scope: Deactivated successfully.
Jan 20 14:05:21 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:05:21 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:05:21 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:05:21 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:05:21 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:05:21 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:05:21 compute-0 podman[129341]: 2026-01-20 14:05:21.905540632 +0000 UTC m=+0.071882136 container create 8e36adbee76b9b98b98dec9d34b314a556400b945dc37658a9dfb972d56d2f3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_ardinghelli, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 20 14:05:21 compute-0 systemd[1]: Started libpod-conmon-8e36adbee76b9b98b98dec9d34b314a556400b945dc37658a9dfb972d56d2f3f.scope.
Jan 20 14:05:21 compute-0 podman[129341]: 2026-01-20 14:05:21.879709076 +0000 UTC m=+0.046050670 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:05:21 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:05:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48a0ea193d5436df597d644223adc9e5ac4e0e25df0738cfea6f2f3d1408be59/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:05:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48a0ea193d5436df597d644223adc9e5ac4e0e25df0738cfea6f2f3d1408be59/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:05:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48a0ea193d5436df597d644223adc9e5ac4e0e25df0738cfea6f2f3d1408be59/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:05:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48a0ea193d5436df597d644223adc9e5ac4e0e25df0738cfea6f2f3d1408be59/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:05:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48a0ea193d5436df597d644223adc9e5ac4e0e25df0738cfea6f2f3d1408be59/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 14:05:22 compute-0 podman[129341]: 2026-01-20 14:05:22.013395964 +0000 UTC m=+0.179737478 container init 8e36adbee76b9b98b98dec9d34b314a556400b945dc37658a9dfb972d56d2f3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_ardinghelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 20 14:05:22 compute-0 podman[129341]: 2026-01-20 14:05:22.018988384 +0000 UTC m=+0.185329888 container start 8e36adbee76b9b98b98dec9d34b314a556400b945dc37658a9dfb972d56d2f3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_ardinghelli, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2)
Jan 20 14:05:22 compute-0 podman[129341]: 2026-01-20 14:05:22.022447517 +0000 UTC m=+0.188789041 container attach 8e36adbee76b9b98b98dec9d34b314a556400b945dc37658a9dfb972d56d2f3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_ardinghelli, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:05:22 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:05:22 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:05:22 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:05:22.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:05:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:05:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:05:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:05:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:05:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:05:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:05:22 compute-0 jolly_ardinghelli[129358]: --> passed data devices: 0 physical, 1 LVM
Jan 20 14:05:22 compute-0 jolly_ardinghelli[129358]: --> relative data size: 1.0
Jan 20 14:05:22 compute-0 jolly_ardinghelli[129358]: --> All data devices are unavailable
Jan 20 14:05:22 compute-0 systemd[1]: libpod-8e36adbee76b9b98b98dec9d34b314a556400b945dc37658a9dfb972d56d2f3f.scope: Deactivated successfully.
Jan 20 14:05:22 compute-0 podman[129341]: 2026-01-20 14:05:22.761082723 +0000 UTC m=+0.927424237 container died 8e36adbee76b9b98b98dec9d34b314a556400b945dc37658a9dfb972d56d2f3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_ardinghelli, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:05:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-48a0ea193d5436df597d644223adc9e5ac4e0e25df0738cfea6f2f3d1408be59-merged.mount: Deactivated successfully.
Jan 20 14:05:22 compute-0 podman[129341]: 2026-01-20 14:05:22.819976177 +0000 UTC m=+0.986317701 container remove 8e36adbee76b9b98b98dec9d34b314a556400b945dc37658a9dfb972d56d2f3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_ardinghelli, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 20 14:05:22 compute-0 systemd[1]: libpod-conmon-8e36adbee76b9b98b98dec9d34b314a556400b945dc37658a9dfb972d56d2f3f.scope: Deactivated successfully.
Jan 20 14:05:22 compute-0 sudo[129235]: pam_unix(sudo:session): session closed for user root
Jan 20 14:05:22 compute-0 ceph-mon[74360]: pgmap v405: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:05:22 compute-0 sudo[129386]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:05:22 compute-0 sudo[129386]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:05:22 compute-0 sudo[129386]: pam_unix(sudo:session): session closed for user root
Jan 20 14:05:23 compute-0 sudo[129411]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:05:23 compute-0 sudo[129411]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:05:23 compute-0 sudo[129411]: pam_unix(sudo:session): session closed for user root
Jan 20 14:05:23 compute-0 sudo[129436]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:05:23 compute-0 sudo[129436]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:05:23 compute-0 sudo[129436]: pam_unix(sudo:session): session closed for user root
Jan 20 14:05:23 compute-0 sudo[129461]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- lvm list --format json
Jan 20 14:05:23 compute-0 sudo[129461]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:05:23 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:05:23 compute-0 sshd-session[129486]: Accepted publickey for zuul from 192.168.122.30 port 36598 ssh2: ECDSA SHA256:Yw0kyD5N4lqNgr1J3b5cYIIxKFrTRY8zW6kk+n6imz4
Jan 20 14:05:23 compute-0 systemd-logind[796]: New session 44 of user zuul.
Jan 20 14:05:23 compute-0 systemd[1]: Started Session 44 of User zuul.
Jan 20 14:05:23 compute-0 sshd-session[129486]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 20 14:05:23 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v406: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:05:23 compute-0 podman[129568]: 2026-01-20 14:05:23.468835697 +0000 UTC m=+0.039427482 container create 5bed1e01407f5e2a3c87fe6f7b46fcc108e4d996a79cc305ef618f488b28ea72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_satoshi, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:05:23 compute-0 systemd[1]: Started libpod-conmon-5bed1e01407f5e2a3c87fe6f7b46fcc108e4d996a79cc305ef618f488b28ea72.scope.
Jan 20 14:05:23 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:05:23 compute-0 podman[129568]: 2026-01-20 14:05:23.451004677 +0000 UTC m=+0.021596512 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:05:23 compute-0 podman[129568]: 2026-01-20 14:05:23.564724847 +0000 UTC m=+0.135316712 container init 5bed1e01407f5e2a3c87fe6f7b46fcc108e4d996a79cc305ef618f488b28ea72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_satoshi, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:05:23 compute-0 podman[129568]: 2026-01-20 14:05:23.575273341 +0000 UTC m=+0.145865136 container start 5bed1e01407f5e2a3c87fe6f7b46fcc108e4d996a79cc305ef618f488b28ea72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_satoshi, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:05:23 compute-0 podman[129568]: 2026-01-20 14:05:23.578799086 +0000 UTC m=+0.149390961 container attach 5bed1e01407f5e2a3c87fe6f7b46fcc108e4d996a79cc305ef618f488b28ea72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_satoshi, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:05:23 compute-0 agitated_satoshi[129597]: 167 167
Jan 20 14:05:23 compute-0 systemd[1]: libpod-5bed1e01407f5e2a3c87fe6f7b46fcc108e4d996a79cc305ef618f488b28ea72.scope: Deactivated successfully.
Jan 20 14:05:23 compute-0 podman[129568]: 2026-01-20 14:05:23.582165386 +0000 UTC m=+0.152757171 container died 5bed1e01407f5e2a3c87fe6f7b46fcc108e4d996a79cc305ef618f488b28ea72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_satoshi, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Jan 20 14:05:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-9198fd30de5bffeb8c632ae4e76405a68e83f01b4e2f93d8549b423fbc1d6fa6-merged.mount: Deactivated successfully.
Jan 20 14:05:23 compute-0 podman[129568]: 2026-01-20 14:05:23.618178666 +0000 UTC m=+0.188770471 container remove 5bed1e01407f5e2a3c87fe6f7b46fcc108e4d996a79cc305ef618f488b28ea72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_satoshi, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 20 14:05:23 compute-0 systemd[1]: libpod-conmon-5bed1e01407f5e2a3c87fe6f7b46fcc108e4d996a79cc305ef618f488b28ea72.scope: Deactivated successfully.
Jan 20 14:05:23 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:05:23 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:05:23 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:05:23.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:05:23 compute-0 podman[129622]: 2026-01-20 14:05:23.797982523 +0000 UTC m=+0.047956371 container create 88dac631cfca3dbb8fa48f7908c6e4d419272f70d0f134b3b346ab7f8cd76385 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_galois, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:05:23 compute-0 systemd[1]: Started libpod-conmon-88dac631cfca3dbb8fa48f7908c6e4d419272f70d0f134b3b346ab7f8cd76385.scope.
Jan 20 14:05:23 compute-0 podman[129622]: 2026-01-20 14:05:23.778175701 +0000 UTC m=+0.028149589 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:05:23 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:05:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3b8227487783f9a5365e979d2d01295bc82c70bf0d6d7c2b5ff12b6c5f4e9b5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:05:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3b8227487783f9a5365e979d2d01295bc82c70bf0d6d7c2b5ff12b6c5f4e9b5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:05:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3b8227487783f9a5365e979d2d01295bc82c70bf0d6d7c2b5ff12b6c5f4e9b5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:05:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3b8227487783f9a5365e979d2d01295bc82c70bf0d6d7c2b5ff12b6c5f4e9b5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:05:23 compute-0 podman[129622]: 2026-01-20 14:05:23.901757186 +0000 UTC m=+0.151731044 container init 88dac631cfca3dbb8fa48f7908c6e4d419272f70d0f134b3b346ab7f8cd76385 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_galois, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:05:23 compute-0 podman[129622]: 2026-01-20 14:05:23.910735178 +0000 UTC m=+0.160709056 container start 88dac631cfca3dbb8fa48f7908c6e4d419272f70d0f134b3b346ab7f8cd76385 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_galois, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:05:23 compute-0 podman[129622]: 2026-01-20 14:05:23.915023473 +0000 UTC m=+0.164997331 container attach 88dac631cfca3dbb8fa48f7908c6e4d419272f70d0f134b3b346ab7f8cd76385 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_galois, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 20 14:05:24 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:05:24 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:05:24 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:05:24.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:05:24 compute-0 python3.9[129741]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 14:05:24 compute-0 objective_galois[129657]: {
Jan 20 14:05:24 compute-0 objective_galois[129657]:     "0": [
Jan 20 14:05:24 compute-0 objective_galois[129657]:         {
Jan 20 14:05:24 compute-0 objective_galois[129657]:             "devices": [
Jan 20 14:05:24 compute-0 objective_galois[129657]:                 "/dev/loop3"
Jan 20 14:05:24 compute-0 objective_galois[129657]:             ],
Jan 20 14:05:24 compute-0 objective_galois[129657]:             "lv_name": "ceph_lv0",
Jan 20 14:05:24 compute-0 objective_galois[129657]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:05:24 compute-0 objective_galois[129657]:             "lv_size": "7511998464",
Jan 20 14:05:24 compute-0 objective_galois[129657]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e399cf45-e6b6-5393-99f1-75c601d3f188,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afc3740a-ccee-46f8-b343-22648cc89376,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 20 14:05:24 compute-0 objective_galois[129657]:             "lv_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 14:05:24 compute-0 objective_galois[129657]:             "name": "ceph_lv0",
Jan 20 14:05:24 compute-0 objective_galois[129657]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:05:24 compute-0 objective_galois[129657]:             "tags": {
Jan 20 14:05:24 compute-0 objective_galois[129657]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:05:24 compute-0 objective_galois[129657]:                 "ceph.block_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 14:05:24 compute-0 objective_galois[129657]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 14:05:24 compute-0 objective_galois[129657]:                 "ceph.cluster_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 14:05:24 compute-0 objective_galois[129657]:                 "ceph.cluster_name": "ceph",
Jan 20 14:05:24 compute-0 objective_galois[129657]:                 "ceph.crush_device_class": "",
Jan 20 14:05:24 compute-0 objective_galois[129657]:                 "ceph.encrypted": "0",
Jan 20 14:05:24 compute-0 objective_galois[129657]:                 "ceph.osd_fsid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 14:05:24 compute-0 objective_galois[129657]:                 "ceph.osd_id": "0",
Jan 20 14:05:24 compute-0 objective_galois[129657]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 14:05:24 compute-0 objective_galois[129657]:                 "ceph.type": "block",
Jan 20 14:05:24 compute-0 objective_galois[129657]:                 "ceph.vdo": "0"
Jan 20 14:05:24 compute-0 objective_galois[129657]:             },
Jan 20 14:05:24 compute-0 objective_galois[129657]:             "type": "block",
Jan 20 14:05:24 compute-0 objective_galois[129657]:             "vg_name": "ceph_vg0"
Jan 20 14:05:24 compute-0 objective_galois[129657]:         }
Jan 20 14:05:24 compute-0 objective_galois[129657]:     ]
Jan 20 14:05:24 compute-0 objective_galois[129657]: }
Jan 20 14:05:24 compute-0 systemd[1]: libpod-88dac631cfca3dbb8fa48f7908c6e4d419272f70d0f134b3b346ab7f8cd76385.scope: Deactivated successfully.
Jan 20 14:05:24 compute-0 conmon[129657]: conmon 88dac631cfca3dbb8fa4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-88dac631cfca3dbb8fa48f7908c6e4d419272f70d0f134b3b346ab7f8cd76385.scope/container/memory.events
Jan 20 14:05:24 compute-0 podman[129622]: 2026-01-20 14:05:24.704080725 +0000 UTC m=+0.954054563 container died 88dac631cfca3dbb8fa48f7908c6e4d419272f70d0f134b3b346ab7f8cd76385 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_galois, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:05:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-b3b8227487783f9a5365e979d2d01295bc82c70bf0d6d7c2b5ff12b6c5f4e9b5-merged.mount: Deactivated successfully.
Jan 20 14:05:24 compute-0 podman[129622]: 2026-01-20 14:05:24.758656863 +0000 UTC m=+1.008630701 container remove 88dac631cfca3dbb8fa48f7908c6e4d419272f70d0f134b3b346ab7f8cd76385 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_galois, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 20 14:05:24 compute-0 systemd[1]: libpod-conmon-88dac631cfca3dbb8fa48f7908c6e4d419272f70d0f134b3b346ab7f8cd76385.scope: Deactivated successfully.
Jan 20 14:05:24 compute-0 sudo[129461]: pam_unix(sudo:session): session closed for user root
Jan 20 14:05:24 compute-0 sudo[129785]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:05:24 compute-0 sudo[129785]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:05:24 compute-0 sudo[129785]: pam_unix(sudo:session): session closed for user root
Jan 20 14:05:24 compute-0 sudo[129810]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:05:24 compute-0 sudo[129810]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:05:24 compute-0 sudo[129810]: pam_unix(sudo:session): session closed for user root
Jan 20 14:05:24 compute-0 sudo[129835]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:05:24 compute-0 sudo[129835]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:05:24 compute-0 sudo[129835]: pam_unix(sudo:session): session closed for user root
Jan 20 14:05:25 compute-0 sudo[129860]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- raw list --format json
Jan 20 14:05:25 compute-0 sudo[129860]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:05:25 compute-0 ceph-mon[74360]: pgmap v406: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:05:25 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v407: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:05:25 compute-0 podman[129948]: 2026-01-20 14:05:25.445508696 +0000 UTC m=+0.043717937 container create 0d50cb61bdce04d90ee864c94155c8d1c8197cc067e981a9c1dce3c7a4a501a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_feistel, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 20 14:05:25 compute-0 systemd[1]: Started libpod-conmon-0d50cb61bdce04d90ee864c94155c8d1c8197cc067e981a9c1dce3c7a4a501a1.scope.
Jan 20 14:05:25 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:05:25 compute-0 podman[129948]: 2026-01-20 14:05:25.522323033 +0000 UTC m=+0.120532274 container init 0d50cb61bdce04d90ee864c94155c8d1c8197cc067e981a9c1dce3c7a4a501a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_feistel, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 20 14:05:25 compute-0 podman[129948]: 2026-01-20 14:05:25.427943843 +0000 UTC m=+0.026153134 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:05:25 compute-0 podman[129948]: 2026-01-20 14:05:25.529173117 +0000 UTC m=+0.127382398 container start 0d50cb61bdce04d90ee864c94155c8d1c8197cc067e981a9c1dce3c7a4a501a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_feistel, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:05:25 compute-0 podman[129948]: 2026-01-20 14:05:25.532954078 +0000 UTC m=+0.131163349 container attach 0d50cb61bdce04d90ee864c94155c8d1c8197cc067e981a9c1dce3c7a4a501a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_feistel, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:05:25 compute-0 nostalgic_feistel[129990]: 167 167
Jan 20 14:05:25 compute-0 systemd[1]: libpod-0d50cb61bdce04d90ee864c94155c8d1c8197cc067e981a9c1dce3c7a4a501a1.scope: Deactivated successfully.
Jan 20 14:05:25 compute-0 podman[129948]: 2026-01-20 14:05:25.535876147 +0000 UTC m=+0.134085418 container died 0d50cb61bdce04d90ee864c94155c8d1c8197cc067e981a9c1dce3c7a4a501a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_feistel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:05:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-2d7e1a9d95c0a2d3ba38159b5327d49b882725e7365c29847d2d1af184b9e806-merged.mount: Deactivated successfully.
Jan 20 14:05:25 compute-0 podman[129948]: 2026-01-20 14:05:25.590406164 +0000 UTC m=+0.188615415 container remove 0d50cb61bdce04d90ee864c94155c8d1c8197cc067e981a9c1dce3c7a4a501a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_feistel, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:05:25 compute-0 systemd[1]: libpod-conmon-0d50cb61bdce04d90ee864c94155c8d1c8197cc067e981a9c1dce3c7a4a501a1.scope: Deactivated successfully.
Jan 20 14:05:25 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:05:25 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:05:25 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:05:25.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:05:25 compute-0 podman[130038]: 2026-01-20 14:05:25.765158777 +0000 UTC m=+0.060797377 container create 115a3db7e059c24e0dc1b320a0e096c695f48ab3a7401fef1b4c6fb4763a917e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_hodgkin, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0)
Jan 20 14:05:25 compute-0 systemd[1]: Started libpod-conmon-115a3db7e059c24e0dc1b320a0e096c695f48ab3a7401fef1b4c6fb4763a917e.scope.
Jan 20 14:05:25 compute-0 podman[130038]: 2026-01-20 14:05:25.732442507 +0000 UTC m=+0.028081147 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:05:25 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:05:25 compute-0 sudo[130107]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rfvhcmepszabqkmteenjltayrbkfafdg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917925.3732605-107-106257657457/AnsiballZ_file.py'
Jan 20 14:05:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f81faa0115ab167befd5b5edc1ee2581b6ae012c159f6d91abb15be3348cea03/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:05:25 compute-0 sudo[130107]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:05:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f81faa0115ab167befd5b5edc1ee2581b6ae012c159f6d91abb15be3348cea03/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:05:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f81faa0115ab167befd5b5edc1ee2581b6ae012c159f6d91abb15be3348cea03/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:05:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f81faa0115ab167befd5b5edc1ee2581b6ae012c159f6d91abb15be3348cea03/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:05:25 compute-0 podman[130038]: 2026-01-20 14:05:25.87307141 +0000 UTC m=+0.168710090 container init 115a3db7e059c24e0dc1b320a0e096c695f48ab3a7401fef1b4c6fb4763a917e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_hodgkin, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:05:25 compute-0 podman[130038]: 2026-01-20 14:05:25.891748273 +0000 UTC m=+0.187386873 container start 115a3db7e059c24e0dc1b320a0e096c695f48ab3a7401fef1b4c6fb4763a917e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_hodgkin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 20 14:05:25 compute-0 podman[130038]: 2026-01-20 14:05:25.896137462 +0000 UTC m=+0.191776142 container attach 115a3db7e059c24e0dc1b320a0e096c695f48ab3a7401fef1b4c6fb4763a917e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_hodgkin, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 20 14:05:26 compute-0 python3.9[130109]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:05:26 compute-0 sudo[130107]: pam_unix(sudo:session): session closed for user root
Jan 20 14:05:26 compute-0 ceph-mon[74360]: pgmap v407: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:05:26 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:05:26 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:05:26 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:05:26.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:05:26 compute-0 sudo[130264]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-csffviiiqqckjerxqjzwflwhvgozzexm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917926.2178926-107-179923030677545/AnsiballZ_file.py'
Jan 20 14:05:26 compute-0 sudo[130264]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:05:26 compute-0 sshd-session[130157]: Connection closed by authenticating user root 157.245.78.139 port 32784 [preauth]
Jan 20 14:05:26 compute-0 gifted_hodgkin[130102]: {
Jan 20 14:05:26 compute-0 gifted_hodgkin[130102]:     "afc3740a-ccee-46f8-b343-22648cc89376": {
Jan 20 14:05:26 compute-0 gifted_hodgkin[130102]:         "ceph_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 14:05:26 compute-0 gifted_hodgkin[130102]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 20 14:05:26 compute-0 gifted_hodgkin[130102]:         "osd_id": 0,
Jan 20 14:05:26 compute-0 gifted_hodgkin[130102]:         "osd_uuid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 14:05:26 compute-0 gifted_hodgkin[130102]:         "type": "bluestore"
Jan 20 14:05:26 compute-0 gifted_hodgkin[130102]:     }
Jan 20 14:05:26 compute-0 gifted_hodgkin[130102]: }
Jan 20 14:05:26 compute-0 python3.9[130267]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:05:26 compute-0 systemd[1]: libpod-115a3db7e059c24e0dc1b320a0e096c695f48ab3a7401fef1b4c6fb4763a917e.scope: Deactivated successfully.
Jan 20 14:05:26 compute-0 podman[130038]: 2026-01-20 14:05:26.717096382 +0000 UTC m=+1.012735042 container died 115a3db7e059c24e0dc1b320a0e096c695f48ab3a7401fef1b4c6fb4763a917e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_hodgkin, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:05:26 compute-0 sudo[130264]: pam_unix(sudo:session): session closed for user root
Jan 20 14:05:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-f81faa0115ab167befd5b5edc1ee2581b6ae012c159f6d91abb15be3348cea03-merged.mount: Deactivated successfully.
Jan 20 14:05:26 compute-0 podman[130038]: 2026-01-20 14:05:26.783010556 +0000 UTC m=+1.078649166 container remove 115a3db7e059c24e0dc1b320a0e096c695f48ab3a7401fef1b4c6fb4763a917e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_hodgkin, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 20 14:05:26 compute-0 systemd[1]: libpod-conmon-115a3db7e059c24e0dc1b320a0e096c695f48ab3a7401fef1b4c6fb4763a917e.scope: Deactivated successfully.
Jan 20 14:05:26 compute-0 sudo[129860]: pam_unix(sudo:session): session closed for user root
Jan 20 14:05:26 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 14:05:26 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:05:26 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 14:05:26 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:05:26 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev bad01c21-282d-4cfe-a06e-61d4eeac9a98 does not exist
Jan 20 14:05:26 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 6b4b3330-e532-4364-b20d-c28db2bd144e does not exist
Jan 20 14:05:26 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev b91880a4-293c-4ef9-91c8-ffa83eb4477a does not exist
Jan 20 14:05:26 compute-0 sudo[130320]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:05:26 compute-0 sudo[130320]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:05:26 compute-0 sudo[130320]: pam_unix(sudo:session): session closed for user root
Jan 20 14:05:26 compute-0 sudo[130364]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 14:05:26 compute-0 sudo[130364]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:05:26 compute-0 sudo[130364]: pam_unix(sudo:session): session closed for user root
Jan 20 14:05:27 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v408: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:05:27 compute-0 sudo[130495]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ljucllopjkqelbaszsupppqhzzscfouh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917926.940487-156-108004592743160/AnsiballZ_stat.py'
Jan 20 14:05:27 compute-0 sudo[130495]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:05:27 compute-0 python3.9[130497]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:05:27 compute-0 sudo[130495]: pam_unix(sudo:session): session closed for user root
Jan 20 14:05:27 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:05:27 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:05:27 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:05:27.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:05:27 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:05:27 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:05:28 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:05:28 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:05:28 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:05:28 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:05:28.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:05:28 compute-0 sudo[130619]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-buyusyziqksffxvockgjzmjokoohrzna ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917926.940487-156-108004592743160/AnsiballZ_copy.py'
Jan 20 14:05:28 compute-0 sudo[130619]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:05:28 compute-0 python3.9[130621]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768917926.940487-156-108004592743160/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=c80011efffc48286cac9a10f0d3a067ca8a2678a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:05:28 compute-0 sudo[130619]: pam_unix(sudo:session): session closed for user root
Jan 20 14:05:28 compute-0 ceph-mon[74360]: pgmap v408: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:05:29 compute-0 sudo[130771]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ijlnvbjkhyhzoojnwygtwdoymksreeje ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917928.7045794-156-186330452661835/AnsiballZ_stat.py'
Jan 20 14:05:29 compute-0 sudo[130771]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:05:29 compute-0 python3.9[130773]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:05:29 compute-0 sudo[130771]: pam_unix(sudo:session): session closed for user root
Jan 20 14:05:29 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v409: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:05:29 compute-0 sudo[130894]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vtuhswlwqagziejmcdaeliaajoydezwq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917928.7045794-156-186330452661835/AnsiballZ_copy.py'
Jan 20 14:05:29 compute-0 sudo[130894]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:05:29 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:05:29 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:05:29 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:05:29.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:05:29 compute-0 python3.9[130896]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768917928.7045794-156-186330452661835/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=3f5ad343c2ed5cd826e6179427db625573e3eee3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:05:29 compute-0 sudo[130894]: pam_unix(sudo:session): session closed for user root
Jan 20 14:05:30 compute-0 sudo[131047]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ynncbwyijevwjwtxgboeqiehbeveszio ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917929.899811-156-12115635181436/AnsiballZ_stat.py'
Jan 20 14:05:30 compute-0 sudo[131047]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:05:30 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:05:30 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:05:30 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:05:30.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:05:30 compute-0 python3.9[131049]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:05:30 compute-0 sudo[131047]: pam_unix(sudo:session): session closed for user root
Jan 20 14:05:30 compute-0 sudo[131170]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wjmacgmynetzkyygidlinaflqnipzgqr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917929.899811-156-12115635181436/AnsiballZ_copy.py'
Jan 20 14:05:30 compute-0 sudo[131170]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:05:30 compute-0 ceph-mon[74360]: pgmap v409: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:05:31 compute-0 python3.9[131172]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768917929.899811-156-12115635181436/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=4443c0caa10b10c745d3d5d55292a569de54ad95 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:05:31 compute-0 sudo[131170]: pam_unix(sudo:session): session closed for user root
Jan 20 14:05:31 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v410: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:05:31 compute-0 sudo[131322]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jomkmpdykttilcdfvtykutxcsgxzetbv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917931.2844243-283-182432122290985/AnsiballZ_file.py'
Jan 20 14:05:31 compute-0 sudo[131322]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:05:31 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:05:31 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:05:31 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:05:31.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:05:31 compute-0 python3.9[131324]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:05:31 compute-0 sudo[131322]: pam_unix(sudo:session): session closed for user root
Jan 20 14:05:32 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:05:32 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:05:32 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:05:32.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:05:32 compute-0 sudo[131475]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cdcumayvcgkkxbhxvgyknutiqmgfbtct ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917931.9132278-283-249085289802189/AnsiballZ_file.py'
Jan 20 14:05:32 compute-0 sudo[131475]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:05:32 compute-0 python3.9[131477]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:05:32 compute-0 sudo[131475]: pam_unix(sudo:session): session closed for user root
Jan 20 14:05:32 compute-0 sudo[131478]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:05:32 compute-0 sudo[131478]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:05:32 compute-0 sudo[131478]: pam_unix(sudo:session): session closed for user root
Jan 20 14:05:32 compute-0 sudo[131527]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:05:32 compute-0 sudo[131527]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:05:32 compute-0 sudo[131527]: pam_unix(sudo:session): session closed for user root
Jan 20 14:05:32 compute-0 ceph-mon[74360]: pgmap v410: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:05:32 compute-0 sudo[131677]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xdaamhccgrtmyfsismelnnkjevhsdvid ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917932.6861646-328-143717471643780/AnsiballZ_stat.py'
Jan 20 14:05:32 compute-0 sudo[131677]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:05:33 compute-0 python3.9[131679]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:05:33 compute-0 sudo[131677]: pam_unix(sudo:session): session closed for user root
Jan 20 14:05:33 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:05:33 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v411: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:05:33 compute-0 sudo[131800]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cjzozrzgjbmkcrvhuqbsytlqwtltysjx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917932.6861646-328-143717471643780/AnsiballZ_copy.py'
Jan 20 14:05:33 compute-0 sudo[131800]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:05:33 compute-0 python3.9[131802]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768917932.6861646-328-143717471643780/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=b78f6e70013972b49c165d4dc7dcf8eb6d5ca27b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:05:33 compute-0 sudo[131800]: pam_unix(sudo:session): session closed for user root
Jan 20 14:05:33 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:05:33 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:05:33 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:05:33.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:05:34 compute-0 ceph-mon[74360]: pgmap v411: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:05:34 compute-0 sudo[131953]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-txelsymhscyosuvpjqmzogchqcgjssgw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917933.876586-328-183836492827637/AnsiballZ_stat.py'
Jan 20 14:05:34 compute-0 sudo[131953]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:05:34 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:05:34 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:05:34 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:05:34.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:05:34 compute-0 python3.9[131955]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:05:34 compute-0 sudo[131953]: pam_unix(sudo:session): session closed for user root
Jan 20 14:05:34 compute-0 sudo[132076]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xrhoujlntaixfoytzfdayhymxooklldl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917933.876586-328-183836492827637/AnsiballZ_copy.py'
Jan 20 14:05:34 compute-0 sudo[132076]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:05:34 compute-0 python3.9[132078]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768917933.876586-328-183836492827637/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=dbd41a175def1218d1038733ac1d1fb38abc7be7 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:05:34 compute-0 sudo[132076]: pam_unix(sudo:session): session closed for user root
Jan 20 14:05:35 compute-0 sudo[132228]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-naxzmjjgtibpprjnsbreythblwewybtv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917934.9608903-328-151864315460660/AnsiballZ_stat.py'
Jan 20 14:05:35 compute-0 sudo[132228]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:05:35 compute-0 python3.9[132230]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:05:35 compute-0 sudo[132228]: pam_unix(sudo:session): session closed for user root
Jan 20 14:05:35 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v412: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:05:35 compute-0 sudo[132352]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wavnmesoszwfaqijtwkprxlvwkbthebt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917934.9608903-328-151864315460660/AnsiballZ_copy.py'
Jan 20 14:05:35 compute-0 sudo[132352]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:05:35 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:05:35 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:05:35 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:05:35.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:05:35 compute-0 python3.9[132354]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768917934.9608903-328-151864315460660/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=f989f2a91db62812b51c90520f3e9642a9ad1dbd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:05:35 compute-0 sudo[132352]: pam_unix(sudo:session): session closed for user root
Jan 20 14:05:36 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:05:36 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:05:36 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:05:36.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:05:36 compute-0 sudo[132504]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-opewhnsueonfosbmgfqodxdwsynduqsl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917936.2015567-445-213407590820611/AnsiballZ_file.py'
Jan 20 14:05:36 compute-0 sudo[132504]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:05:36 compute-0 python3.9[132506]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:05:36 compute-0 sudo[132504]: pam_unix(sudo:session): session closed for user root
Jan 20 14:05:36 compute-0 ceph-mon[74360]: pgmap v412: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:05:37 compute-0 sudo[132656]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sipbptknrsjghzwiehltdzqwzsrdxggs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917936.9045763-445-253090583580355/AnsiballZ_file.py'
Jan 20 14:05:37 compute-0 sudo[132656]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:05:37 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v413: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:05:37 compute-0 python3.9[132658]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:05:37 compute-0 sudo[132656]: pam_unix(sudo:session): session closed for user root
Jan 20 14:05:37 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:05:37 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:05:37 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:05:37.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:05:37 compute-0 sudo[132809]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rmdftdrhzymneowirgicptytcfqmaler ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917937.669593-491-170767495796547/AnsiballZ_stat.py'
Jan 20 14:05:37 compute-0 sudo[132809]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:05:38 compute-0 ceph-mon[74360]: pgmap v413: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:05:38 compute-0 python3.9[132811]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:05:38 compute-0 sudo[132809]: pam_unix(sudo:session): session closed for user root
Jan 20 14:05:38 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:05:38 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:05:38 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:05:38 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:05:38.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:05:38 compute-0 sudo[132932]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nllybfjyrjmqtxcjsqshylgckdootecd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917937.669593-491-170767495796547/AnsiballZ_copy.py'
Jan 20 14:05:38 compute-0 sudo[132932]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:05:38 compute-0 python3.9[132934]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768917937.669593-491-170767495796547/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=6fbee5915664a570e9aba6a5fdfc42a0d49c04ee backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:05:38 compute-0 sudo[132932]: pam_unix(sudo:session): session closed for user root
Jan 20 14:05:39 compute-0 sudo[133084]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nzedgslsdpsqcelrzittyqnexoizervx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917938.8148656-491-178766001251568/AnsiballZ_stat.py'
Jan 20 14:05:39 compute-0 sudo[133084]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:05:39 compute-0 python3.9[133086]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:05:39 compute-0 sudo[133084]: pam_unix(sudo:session): session closed for user root
Jan 20 14:05:39 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v414: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:05:39 compute-0 sudo[133208]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zesgqtvepqgttciyzmzzmgxkqlpnnlfz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917938.8148656-491-178766001251568/AnsiballZ_copy.py'
Jan 20 14:05:39 compute-0 sudo[133208]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:05:39 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:05:39 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:05:39 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:05:39.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:05:39 compute-0 python3.9[133210]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768917938.8148656-491-178766001251568/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=dbd41a175def1218d1038733ac1d1fb38abc7be7 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:05:39 compute-0 sudo[133208]: pam_unix(sudo:session): session closed for user root
Jan 20 14:05:40 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:05:40 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:05:40 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:05:40.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:05:40 compute-0 sudo[133360]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qyqnvkiiktyylyzaxutlqvehzhtjqnlj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917940.0583615-491-239025750223568/AnsiballZ_stat.py'
Jan 20 14:05:40 compute-0 sudo[133360]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:05:40 compute-0 ceph-mon[74360]: pgmap v414: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:05:40 compute-0 python3.9[133362]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:05:40 compute-0 sudo[133360]: pam_unix(sudo:session): session closed for user root
Jan 20 14:05:40 compute-0 sudo[133483]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-evcgvcdaltmaapzaiztuoligszbmqbad ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917940.0583615-491-239025750223568/AnsiballZ_copy.py'
Jan 20 14:05:41 compute-0 sudo[133483]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:05:41 compute-0 python3.9[133485]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768917940.0583615-491-239025750223568/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=973132210d2e8fd73b286c4e0dc24ec27e669674 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:05:41 compute-0 sudo[133483]: pam_unix(sudo:session): session closed for user root
Jan 20 14:05:41 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v415: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:05:41 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:05:41 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:05:41 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:05:41.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:05:42 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:05:42 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:05:42 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:05:42.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:05:42 compute-0 sudo[133636]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gxfwrgumeiazfyfgezeatmpfwochnait ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917941.9940262-638-170947105563825/AnsiballZ_file.py'
Jan 20 14:05:42 compute-0 sudo[133636]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:05:42 compute-0 python3.9[133638]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:05:42 compute-0 ceph-mon[74360]: pgmap v415: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:05:42 compute-0 sudo[133636]: pam_unix(sudo:session): session closed for user root
Jan 20 14:05:42 compute-0 sudo[133788]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kdwkdndkzkdqbchoemjusuelrqvnnffp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917942.6295996-659-251815393294065/AnsiballZ_stat.py'
Jan 20 14:05:42 compute-0 sudo[133788]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:05:43 compute-0 python3.9[133790]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:05:43 compute-0 sudo[133788]: pam_unix(sudo:session): session closed for user root
Jan 20 14:05:43 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:05:43 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v416: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:05:43 compute-0 sudo[133911]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-emdynwgjqxizgcfqrykoqyoqqlxpmbtc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917942.6295996-659-251815393294065/AnsiballZ_copy.py'
Jan 20 14:05:43 compute-0 sudo[133911]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:05:43 compute-0 python3.9[133913]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768917942.6295996-659-251815393294065/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=a9ac548cf1fa241f1d1335913ca73d2a10501b24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:05:43 compute-0 sudo[133911]: pam_unix(sudo:session): session closed for user root
Jan 20 14:05:43 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:05:43 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:05:43 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:05:43.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:05:44 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:05:44 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:05:44 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:05:44.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:05:44 compute-0 sudo[134064]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rucehgpriqnloohrjzuzktsvxplmxnlg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917943.8665066-723-103523419231981/AnsiballZ_file.py'
Jan 20 14:05:44 compute-0 sudo[134064]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:05:44 compute-0 python3.9[134066]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:05:44 compute-0 ceph-mon[74360]: pgmap v416: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:05:44 compute-0 sudo[134064]: pam_unix(sudo:session): session closed for user root
Jan 20 14:05:44 compute-0 sudo[134216]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oqepxfwrqdhhcahruxdkspjyajhdkzyg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917944.651284-748-267455753984041/AnsiballZ_stat.py'
Jan 20 14:05:44 compute-0 sudo[134216]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:05:45 compute-0 python3.9[134218]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:05:45 compute-0 sudo[134216]: pam_unix(sudo:session): session closed for user root
Jan 20 14:05:45 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v417: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:05:45 compute-0 sudo[134339]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zsflpytwyajxnzxfpbmdotmssiluzskn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917944.651284-748-267455753984041/AnsiballZ_copy.py'
Jan 20 14:05:45 compute-0 sudo[134339]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:05:45 compute-0 python3.9[134341]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768917944.651284-748-267455753984041/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=a9ac548cf1fa241f1d1335913ca73d2a10501b24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:05:45 compute-0 sudo[134339]: pam_unix(sudo:session): session closed for user root
Jan 20 14:05:45 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:05:45 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:05:45 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:05:45.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:05:46 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:05:46 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:05:46 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:05:46.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:05:46 compute-0 sudo[134492]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ixezgbdkwnbdrnoctezuesihibrgbvaf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917945.973102-787-110765526289962/AnsiballZ_file.py'
Jan 20 14:05:46 compute-0 sudo[134492]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:05:46 compute-0 python3.9[134494]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:05:46 compute-0 sudo[134492]: pam_unix(sudo:session): session closed for user root
Jan 20 14:05:46 compute-0 ceph-mon[74360]: pgmap v417: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:05:47 compute-0 sudo[134644]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vmfxrjgeglewyqrowsjvnrdctuvrzpmc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917946.6837256-808-51522287489975/AnsiballZ_stat.py'
Jan 20 14:05:47 compute-0 sudo[134644]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:05:47 compute-0 python3.9[134646]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:05:47 compute-0 sudo[134644]: pam_unix(sudo:session): session closed for user root
Jan 20 14:05:47 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v418: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:05:47 compute-0 sudo[134768]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-trzzotdqlpviymgkkfgiyldpvlawkqrq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917946.6837256-808-51522287489975/AnsiballZ_copy.py'
Jan 20 14:05:47 compute-0 sudo[134768]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:05:47 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:05:47 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:05:47 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:05:47.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:05:47 compute-0 python3.9[134770]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768917946.6837256-808-51522287489975/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=a9ac548cf1fa241f1d1335913ca73d2a10501b24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:05:47 compute-0 sudo[134768]: pam_unix(sudo:session): session closed for user root
Jan 20 14:05:48 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:05:48 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:05:48 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:05:48 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:05:48.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:05:48 compute-0 ceph-mon[74360]: pgmap v418: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:05:48 compute-0 sudo[134920]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ldbddamjrxwdjakxxayfxznzacguzsii ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917948.2352383-859-198269033088044/AnsiballZ_file.py'
Jan 20 14:05:48 compute-0 sudo[134920]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:05:48 compute-0 python3.9[134922]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:05:48 compute-0 sudo[134920]: pam_unix(sudo:session): session closed for user root
Jan 20 14:05:49 compute-0 sudo[135072]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ijyqynbwjtrmucpvwerfbhcqfptaulrt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917949.0013518-884-204558445613495/AnsiballZ_stat.py'
Jan 20 14:05:49 compute-0 sudo[135072]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:05:49 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v419: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:05:49 compute-0 python3.9[135074]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:05:49 compute-0 sudo[135072]: pam_unix(sudo:session): session closed for user root
Jan 20 14:05:49 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:05:49 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:05:49 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:05:49.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:05:50 compute-0 sudo[135196]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yvkadlciuuwqnahecshvebusjpplwdqb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917949.0013518-884-204558445613495/AnsiballZ_copy.py'
Jan 20 14:05:50 compute-0 sudo[135196]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:05:50 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:05:50 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:05:50 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:05:50.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:05:50 compute-0 python3.9[135198]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768917949.0013518-884-204558445613495/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=a9ac548cf1fa241f1d1335913ca73d2a10501b24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:05:50 compute-0 sudo[135196]: pam_unix(sudo:session): session closed for user root
Jan 20 14:05:50 compute-0 ceph-mon[74360]: pgmap v419: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:05:50 compute-0 sudo[135348]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-guvshqcsdcosyykwgpjajpaypnxsuibs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917950.629087-934-158991100177714/AnsiballZ_file.py'
Jan 20 14:05:50 compute-0 sudo[135348]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:05:51 compute-0 python3.9[135350]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:05:51 compute-0 sudo[135348]: pam_unix(sudo:session): session closed for user root
Jan 20 14:05:51 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v420: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:05:51 compute-0 sudo[135501]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zimgpdliossyjnyxsmdpwggczrzfevnn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917951.3604243-956-209606886238164/AnsiballZ_stat.py'
Jan 20 14:05:51 compute-0 sudo[135501]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:05:51 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:05:51 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:05:51 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:05:51.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:05:51 compute-0 python3.9[135503]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:05:51 compute-0 sudo[135501]: pam_unix(sudo:session): session closed for user root
Jan 20 14:05:52 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:05:52 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:05:52 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:05:52.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:05:52 compute-0 sudo[135624]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-quqngrrrvnktjhaleuvfsjevujqhjdqv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917951.3604243-956-209606886238164/AnsiballZ_copy.py'
Jan 20 14:05:52 compute-0 sudo[135624]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:05:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Optimize plan auto_2026-01-20_14:05:52
Jan 20 14:05:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 14:05:52 compute-0 ceph-mgr[74653]: [balancer INFO root] do_upmap
Jan 20 14:05:52 compute-0 ceph-mgr[74653]: [balancer INFO root] pools ['.rgw.root', 'vms', 'images', 'cephfs.cephfs.meta', '.mgr', 'cephfs.cephfs.data', 'default.rgw.meta', 'default.rgw.control', 'volumes', 'default.rgw.log', 'backups']
Jan 20 14:05:52 compute-0 ceph-mgr[74653]: [balancer INFO root] prepared 0/10 changes
Jan 20 14:05:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:05:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:05:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:05:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:05:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:05:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:05:52 compute-0 python3.9[135626]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768917951.3604243-956-209606886238164/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=a9ac548cf1fa241f1d1335913ca73d2a10501b24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:05:52 compute-0 sudo[135624]: pam_unix(sudo:session): session closed for user root
Jan 20 14:05:52 compute-0 ceph-mon[74360]: pgmap v420: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:05:52 compute-0 sudo[135659]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:05:52 compute-0 sudo[135659]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:05:52 compute-0 sudo[135659]: pam_unix(sudo:session): session closed for user root
Jan 20 14:05:52 compute-0 sudo[135716]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:05:52 compute-0 sudo[135716]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:05:52 compute-0 sudo[135716]: pam_unix(sudo:session): session closed for user root
Jan 20 14:05:52 compute-0 sudo[135826]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ahouzprapmaskjzrkqcmgwzhhwouobcn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917952.7190356-998-133768159476472/AnsiballZ_file.py'
Jan 20 14:05:52 compute-0 sudo[135826]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:05:53 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:05:53 compute-0 python3.9[135828]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:05:53 compute-0 sudo[135826]: pam_unix(sudo:session): session closed for user root
Jan 20 14:05:53 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v421: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:05:53 compute-0 sudo[135979]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vxwhnylyiwivyvgvhtnaweoovmlkxdqv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917953.4638197-1024-151790119689028/AnsiballZ_stat.py'
Jan 20 14:05:53 compute-0 sudo[135979]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:05:53 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:05:53 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:05:53 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:05:53.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:05:53 compute-0 python3.9[135981]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:05:53 compute-0 sudo[135979]: pam_unix(sudo:session): session closed for user root
Jan 20 14:05:54 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:05:54 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:05:54 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:05:54.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:05:54 compute-0 sudo[136102]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmjrjufxfockupkihtgztfzxslximdhl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917953.4638197-1024-151790119689028/AnsiballZ_copy.py'
Jan 20 14:05:54 compute-0 sudo[136102]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:05:54 compute-0 python3.9[136104]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768917953.4638197-1024-151790119689028/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=a9ac548cf1fa241f1d1335913ca73d2a10501b24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:05:54 compute-0 sudo[136102]: pam_unix(sudo:session): session closed for user root
Jan 20 14:05:54 compute-0 ceph-mon[74360]: pgmap v421: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:05:55 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v422: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:05:55 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:05:55 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:05:55 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:05:55.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:05:56 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:05:56 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:05:56 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:05:56.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:05:56 compute-0 ceph-mon[74360]: pgmap v422: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:05:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 14:05:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:05:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 14:05:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:05:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:05:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:05:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:05:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:05:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:05:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:05:57 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v423: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:05:57 compute-0 sshd-session[129501]: Connection closed by 192.168.122.30 port 36598
Jan 20 14:05:57 compute-0 sshd-session[129486]: pam_unix(sshd:session): session closed for user zuul
Jan 20 14:05:57 compute-0 systemd[1]: session-44.scope: Deactivated successfully.
Jan 20 14:05:57 compute-0 systemd[1]: session-44.scope: Consumed 23.753s CPU time.
Jan 20 14:05:57 compute-0 systemd-logind[796]: Session 44 logged out. Waiting for processes to exit.
Jan 20 14:05:57 compute-0 systemd-logind[796]: Removed session 44.
Jan 20 14:05:57 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:05:57 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:05:57 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:05:57.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:05:58 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:05:58 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:05:58 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:05:58 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:05:58.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:05:58 compute-0 ceph-mon[74360]: pgmap v423: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:05:59 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v424: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:05:59 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:05:59 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:05:59 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:05:59.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:06:00 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:06:00 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:06:00 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:06:00.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:06:00 compute-0 ceph-mon[74360]: pgmap v424: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:06:01 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v425: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:06:01 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:06:01 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:06:01 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:06:01.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:06:02 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:06:02 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:06:02 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:06:02.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:06:02 compute-0 ceph-mon[74360]: pgmap v425: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:06:02 compute-0 sshd-session[136133]: Accepted publickey for zuul from 192.168.122.30 port 43890 ssh2: ECDSA SHA256:Yw0kyD5N4lqNgr1J3b5cYIIxKFrTRY8zW6kk+n6imz4
Jan 20 14:06:02 compute-0 systemd-logind[796]: New session 45 of user zuul.
Jan 20 14:06:02 compute-0 systemd[1]: Started Session 45 of User zuul.
Jan 20 14:06:02 compute-0 sshd-session[136133]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 20 14:06:03 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:06:03 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v426: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:06:03 compute-0 sudo[136286]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wlbzoaulryfosjmxyuenelluortguibz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917962.9020243-26-76029014359978/AnsiballZ_file.py'
Jan 20 14:06:03 compute-0 sudo[136286]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:06:03 compute-0 python3.9[136288]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:06:03 compute-0 sudo[136286]: pam_unix(sudo:session): session closed for user root
Jan 20 14:06:03 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:06:03 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:06:03 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:06:03.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:06:04 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:06:04 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:06:04 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:06:04.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:06:04 compute-0 sudo[136439]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ipuqkkqhkikzdqzachfpykwdqdxsbhoi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917964.0909193-62-39988228195009/AnsiballZ_stat.py'
Jan 20 14:06:04 compute-0 sudo[136439]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:06:04 compute-0 python3.9[136441]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:06:04 compute-0 ceph-mon[74360]: pgmap v426: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:06:04 compute-0 sudo[136439]: pam_unix(sudo:session): session closed for user root
Jan 20 14:06:05 compute-0 sudo[136563]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cglqalciwblpkucikswyfbmxuuyjrjbz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917964.0909193-62-39988228195009/AnsiballZ_copy.py'
Jan 20 14:06:05 compute-0 sudo[136563]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:06:05 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v427: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:06:05 compute-0 python3.9[136565]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1768917964.0909193-62-39988228195009/.source.conf _original_basename=ceph.conf follow=False checksum=906e2ddae7738a5e2d5bcdd5b659f6884e758b17 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:06:05 compute-0 sudo[136563]: pam_unix(sudo:session): session closed for user root
Jan 20 14:06:05 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:06:05 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:06:05 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:06:05.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:06:05 compute-0 sudo[136717]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tbxswbudeqxmjstwppprzenmvyutuyfe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917965.619637-62-123148335489816/AnsiballZ_stat.py'
Jan 20 14:06:05 compute-0 sudo[136717]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:06:06 compute-0 python3.9[136719]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:06:06 compute-0 sudo[136717]: pam_unix(sudo:session): session closed for user root
Jan 20 14:06:06 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:06:06 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:06:06 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:06:06.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:06:06 compute-0 sudo[136840]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-figazpjtavhmakgspoqmrhlmcbdawsky ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917965.619637-62-123148335489816/AnsiballZ_copy.py'
Jan 20 14:06:06 compute-0 sudo[136840]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:06:06 compute-0 python3.9[136842]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1768917965.619637-62-123148335489816/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=ddae6cb53c02baaa87ed0e28941db377a2638775 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:06:06 compute-0 sudo[136840]: pam_unix(sudo:session): session closed for user root
Jan 20 14:06:06 compute-0 ceph-mon[74360]: pgmap v427: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:06:07 compute-0 sshd-session[136136]: Connection closed by 192.168.122.30 port 43890
Jan 20 14:06:07 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v428: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:06:07 compute-0 sshd-session[136133]: pam_unix(sshd:session): session closed for user zuul
Jan 20 14:06:07 compute-0 systemd[1]: session-45.scope: Deactivated successfully.
Jan 20 14:06:07 compute-0 systemd[1]: session-45.scope: Consumed 2.775s CPU time.
Jan 20 14:06:07 compute-0 systemd-logind[796]: Session 45 logged out. Waiting for processes to exit.
Jan 20 14:06:07 compute-0 systemd-logind[796]: Removed session 45.
Jan 20 14:06:07 compute-0 sshd-session[136516]: Connection closed by authenticating user root 159.223.5.14 port 35036 [preauth]
Jan 20 14:06:07 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:06:07 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:06:07 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:06:07.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:06:08 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:06:08 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:06:08 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:06:08 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:06:08.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:06:08 compute-0 ceph-mon[74360]: pgmap v428: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:06:09 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v429: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:06:09 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:06:09 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:06:09 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:06:09.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:06:10 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:06:10 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:06:10 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:06:10.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:06:10 compute-0 ceph-mon[74360]: pgmap v429: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:06:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 14:06:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:06:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 20 14:06:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:06:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:06:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:06:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:06:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:06:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:06:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:06:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:06:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:06:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 20 14:06:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:06:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:06:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:06:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 20 14:06:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:06:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 20 14:06:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:06:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:06:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:06:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 20 14:06:11 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v430: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:06:11 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:06:11 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:06:11 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:06:11.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:06:11 compute-0 sshd-session[136869]: Connection closed by authenticating user root 157.245.78.139 port 47642 [preauth]
Jan 20 14:06:12 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:06:12 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:06:12 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:06:12.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:06:12 compute-0 ceph-mon[74360]: pgmap v430: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:06:12 compute-0 sudo[136872]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:06:12 compute-0 sudo[136872]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:06:12 compute-0 sudo[136872]: pam_unix(sudo:session): session closed for user root
Jan 20 14:06:12 compute-0 sudo[136897]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:06:12 compute-0 sudo[136897]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:06:12 compute-0 sudo[136897]: pam_unix(sudo:session): session closed for user root
Jan 20 14:06:13 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:06:13 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v431: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:06:13 compute-0 sshd-session[136922]: Accepted publickey for zuul from 192.168.122.30 port 33794 ssh2: ECDSA SHA256:Yw0kyD5N4lqNgr1J3b5cYIIxKFrTRY8zW6kk+n6imz4
Jan 20 14:06:13 compute-0 systemd-logind[796]: New session 46 of user zuul.
Jan 20 14:06:13 compute-0 systemd[1]: Started Session 46 of User zuul.
Jan 20 14:06:13 compute-0 sshd-session[136922]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 20 14:06:13 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:06:13 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:06:13 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:06:13.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:06:14 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:06:14 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:06:14 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:06:14.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:06:14 compute-0 ceph-mon[74360]: pgmap v431: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:06:14 compute-0 python3.9[137076]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 14:06:15 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v432: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:06:15 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:06:15 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:06:15 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:06:15.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:06:16 compute-0 sudo[137231]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ysuwcnngrgibgqdgqeesziddxlvdpril ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917975.6244779-62-135587686483251/AnsiballZ_file.py'
Jan 20 14:06:16 compute-0 sudo[137231]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:06:16 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:06:16 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:06:16 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:06:16.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:06:16 compute-0 python3.9[137233]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:06:16 compute-0 sudo[137231]: pam_unix(sudo:session): session closed for user root
Jan 20 14:06:16 compute-0 ceph-mon[74360]: pgmap v432: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:06:17 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v433: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:06:17 compute-0 sudo[137383]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iirtdwwvkkyiagbebojdjlgfmsonjhgl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917976.6894982-62-267117607003824/AnsiballZ_file.py'
Jan 20 14:06:17 compute-0 sudo[137383]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:06:17 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:06:17 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:06:17 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:06:17.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:06:17 compute-0 python3.9[137386]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:06:17 compute-0 sudo[137383]: pam_unix(sudo:session): session closed for user root
Jan 20 14:06:18 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:06:18 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:06:18 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:06:18 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:06:18.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:06:18 compute-0 ceph-mon[74360]: pgmap v433: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:06:18 compute-0 python3.9[137536]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 14:06:19 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v434: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:06:19 compute-0 sudo[137687]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wxhrwwcgsfahkxxboyztottzzevwtruc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917979.1469588-131-96811472716460/AnsiballZ_seboolean.py'
Jan 20 14:06:19 compute-0 sudo[137687]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:06:19 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:06:19 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:06:19 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:06:19.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:06:19 compute-0 python3.9[137689]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Jan 20 14:06:20 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:06:20 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:06:20 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:06:20.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:06:20 compute-0 ceph-mon[74360]: pgmap v434: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:06:21 compute-0 sudo[137687]: pam_unix(sudo:session): session closed for user root
Jan 20 14:06:21 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v435: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:06:21 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:06:21 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:06:21 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:06:21.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:06:21 compute-0 sudo[137844]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gweomyciagiosdtpaeurcwmhpuxltkks ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917981.5429325-161-219286160757301/AnsiballZ_setup.py'
Jan 20 14:06:21 compute-0 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Jan 20 14:06:21 compute-0 sudo[137844]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:06:22 compute-0 python3.9[137846]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 20 14:06:22 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:06:22 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:06:22 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:06:22.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:06:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:06:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:06:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:06:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:06:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:06:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:06:22 compute-0 sudo[137844]: pam_unix(sudo:session): session closed for user root
Jan 20 14:06:22 compute-0 ceph-mon[74360]: pgmap v435: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:06:23 compute-0 sudo[137928]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dudyqiduiljyoqjacqxjvpybquongsyu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917981.5429325-161-219286160757301/AnsiballZ_dnf.py'
Jan 20 14:06:23 compute-0 sudo[137928]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:06:23 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:06:23 compute-0 python3.9[137930]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 20 14:06:23 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v436: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:06:23 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:06:23 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:06:23 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:06:23.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:06:24 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:06:24 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:06:24 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:06:24.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:06:24 compute-0 sudo[137928]: pam_unix(sudo:session): session closed for user root
Jan 20 14:06:24 compute-0 ceph-mon[74360]: pgmap v436: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:06:25 compute-0 sudo[138082]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fazgqauueeuxxttmhggysjcdwprdqdwx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917984.7121325-197-73552440752129/AnsiballZ_systemd.py'
Jan 20 14:06:25 compute-0 sudo[138082]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:06:25 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v437: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:06:25 compute-0 python3.9[138084]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 20 14:06:25 compute-0 sudo[138082]: pam_unix(sudo:session): session closed for user root
Jan 20 14:06:25 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:06:25 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:06:25 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:06:25.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:06:26 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:06:26 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:06:26 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:06:26.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:06:26 compute-0 sudo[138238]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aodrrkyimqtdtzrbzcobmsscdnyolgku ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1768917986.1612804-221-126596766260266/AnsiballZ_edpm_nftables_snippet.py'
Jan 20 14:06:26 compute-0 sudo[138238]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:06:26 compute-0 python3[138240]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks
                                             rule:
                                               proto: udp
                                               dport: 4789
                                           - rule_name: 119 neutron geneve networks
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               state: ["UNTRACKED"]
                                           - rule_name: 120 neutron geneve networks no conntrack
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               table: raw
                                               chain: OUTPUT
                                               jump: NOTRACK
                                               action: append
                                               state: []
                                           - rule_name: 121 neutron geneve networks no conntrack
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               table: raw
                                               chain: PREROUTING
                                               jump: NOTRACK
                                               action: append
                                               state: []
                                            dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Jan 20 14:06:26 compute-0 sudo[138238]: pam_unix(sudo:session): session closed for user root
Jan 20 14:06:27 compute-0 ceph-mon[74360]: pgmap v437: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:06:27 compute-0 sudo[138283]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:06:27 compute-0 sudo[138283]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:06:27 compute-0 sudo[138283]: pam_unix(sudo:session): session closed for user root
Jan 20 14:06:27 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v438: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:06:27 compute-0 sudo[138336]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:06:27 compute-0 sudo[138336]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:06:27 compute-0 sudo[138336]: pam_unix(sudo:session): session closed for user root
Jan 20 14:06:27 compute-0 sudo[138388]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:06:27 compute-0 sudo[138388]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:06:27 compute-0 sudo[138388]: pam_unix(sudo:session): session closed for user root
Jan 20 14:06:27 compute-0 sudo[138436]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Jan 20 14:06:27 compute-0 sudo[138436]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:06:27 compute-0 sudo[138491]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sgblzbovokzwyxtoyyxlbgcayfwipgkf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917987.3344262-248-2555092658130/AnsiballZ_file.py'
Jan 20 14:06:27 compute-0 sudo[138491]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:06:27 compute-0 python3.9[138493]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:06:27 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:06:27 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:06:27 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:06:27.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:06:27 compute-0 sudo[138491]: pam_unix(sudo:session): session closed for user root
Jan 20 14:06:27 compute-0 sudo[138436]: pam_unix(sudo:session): session closed for user root
Jan 20 14:06:28 compute-0 ceph-mon[74360]: pgmap v438: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:06:28 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 14:06:28 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:06:28 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 14:06:28 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:06:28 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:06:28 compute-0 sudo[138590]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:06:28 compute-0 sudo[138590]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:06:28 compute-0 sudo[138590]: pam_unix(sudo:session): session closed for user root
Jan 20 14:06:28 compute-0 sudo[138615]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:06:28 compute-0 sudo[138615]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:06:28 compute-0 sudo[138615]: pam_unix(sudo:session): session closed for user root
Jan 20 14:06:28 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:06:28 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:06:28 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:06:28.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:06:28 compute-0 sudo[138640]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:06:28 compute-0 sudo[138640]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:06:28 compute-0 sudo[138640]: pam_unix(sudo:session): session closed for user root
Jan 20 14:06:28 compute-0 sudo[138665]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 14:06:28 compute-0 sudo[138665]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:06:28 compute-0 sudo[138776]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zuvvfxhgrqcvbxrvmkfyacyjqhduoonz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917988.086162-272-23066051466841/AnsiballZ_stat.py'
Jan 20 14:06:28 compute-0 sudo[138776]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:06:28 compute-0 python3.9[138780]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:06:28 compute-0 sudo[138776]: pam_unix(sudo:session): session closed for user root
Jan 20 14:06:28 compute-0 sudo[138665]: pam_unix(sudo:session): session closed for user root
Jan 20 14:06:28 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 14:06:28 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:06:28 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 20 14:06:28 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:06:28 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 20 14:06:28 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:06:28 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 6765ef0b-6be1-4aad-ad39-d49ca56f74de does not exist
Jan 20 14:06:28 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev dde0b9ee-87c8-42d5-8669-82e2d81b48f4 does not exist
Jan 20 14:06:28 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 3d190dd3-3564-45db-abcb-15ff7bf06e01 does not exist
Jan 20 14:06:28 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 20 14:06:28 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:06:28 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 20 14:06:28 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:06:28 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 14:06:28 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:06:29 compute-0 sudo[138823]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:06:29 compute-0 sudo[138823]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:06:29 compute-0 sudo[138823]: pam_unix(sudo:session): session closed for user root
Jan 20 14:06:29 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:06:29 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:06:29 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:06:29 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:06:29 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:06:29 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:06:29 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:06:29 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:06:29 compute-0 sudo[138873]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:06:29 compute-0 sudo[138873]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:06:29 compute-0 sudo[138873]: pam_unix(sudo:session): session closed for user root
Jan 20 14:06:29 compute-0 sudo[138921]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ftxuxknagvkvdfzhidahmmtiosotnkmw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917988.086162-272-23066051466841/AnsiballZ_file.py'
Jan 20 14:06:29 compute-0 sudo[138921]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:06:29 compute-0 sudo[138925]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:06:29 compute-0 sudo[138925]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:06:29 compute-0 sudo[138925]: pam_unix(sudo:session): session closed for user root
Jan 20 14:06:29 compute-0 sudo[138951]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 14:06:29 compute-0 sudo[138951]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:06:29 compute-0 python3.9[138926]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:06:29 compute-0 sudo[138921]: pam_unix(sudo:session): session closed for user root
Jan 20 14:06:29 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v439: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:06:29 compute-0 podman[139065]: 2026-01-20 14:06:29.693880127 +0000 UTC m=+0.047890690 container create 62223271c0819ba14657a183a1bd031e4e211aa607ea1ef2adb39cc6348c98c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_wilson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:06:29 compute-0 systemd[1]: Started libpod-conmon-62223271c0819ba14657a183a1bd031e4e211aa607ea1ef2adb39cc6348c98c5.scope.
Jan 20 14:06:29 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:06:29 compute-0 podman[139065]: 2026-01-20 14:06:29.675722382 +0000 UTC m=+0.029732975 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:06:29 compute-0 podman[139065]: 2026-01-20 14:06:29.778966329 +0000 UTC m=+0.132976912 container init 62223271c0819ba14657a183a1bd031e4e211aa607ea1ef2adb39cc6348c98c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_wilson, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Jan 20 14:06:29 compute-0 podman[139065]: 2026-01-20 14:06:29.785929784 +0000 UTC m=+0.139940377 container start 62223271c0819ba14657a183a1bd031e4e211aa607ea1ef2adb39cc6348c98c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_wilson, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:06:29 compute-0 podman[139065]: 2026-01-20 14:06:29.78951456 +0000 UTC m=+0.143525153 container attach 62223271c0819ba14657a183a1bd031e4e211aa607ea1ef2adb39cc6348c98c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_wilson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 20 14:06:29 compute-0 youthful_wilson[139127]: 167 167
Jan 20 14:06:29 compute-0 systemd[1]: libpod-62223271c0819ba14657a183a1bd031e4e211aa607ea1ef2adb39cc6348c98c5.scope: Deactivated successfully.
Jan 20 14:06:29 compute-0 podman[139065]: 2026-01-20 14:06:29.791289437 +0000 UTC m=+0.145300020 container died 62223271c0819ba14657a183a1bd031e4e211aa607ea1ef2adb39cc6348c98c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_wilson, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:06:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-60628482a9e0611abc5eca0c3a95a364945df7f6c05f88d6b73839119f50b05e-merged.mount: Deactivated successfully.
Jan 20 14:06:29 compute-0 podman[139065]: 2026-01-20 14:06:29.83781196 +0000 UTC m=+0.191822533 container remove 62223271c0819ba14657a183a1bd031e4e211aa607ea1ef2adb39cc6348c98c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_wilson, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 20 14:06:29 compute-0 systemd[1]: libpod-conmon-62223271c0819ba14657a183a1bd031e4e211aa607ea1ef2adb39cc6348c98c5.scope: Deactivated successfully.
Jan 20 14:06:29 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:06:29 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:06:29 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:06:29.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:06:29 compute-0 sudo[139196]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hmetndlneltxegvqryezpnwtmpoluyus ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917989.603437-308-175973289557811/AnsiballZ_stat.py'
Jan 20 14:06:29 compute-0 sudo[139196]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:06:30 compute-0 podman[139204]: 2026-01-20 14:06:30.028693836 +0000 UTC m=+0.061437162 container create c1a7470ec6f0bb2085679ddbd8edf7f03aea42b8c73e55628b942cb24ffb4809 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_haibt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 20 14:06:30 compute-0 systemd[1]: Started libpod-conmon-c1a7470ec6f0bb2085679ddbd8edf7f03aea42b8c73e55628b942cb24ffb4809.scope.
Jan 20 14:06:30 compute-0 python3.9[139198]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:06:30 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:06:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/783bb38102322b168b55fdf38f7218712411d44316225c4d7f208ca44b71755b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:06:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/783bb38102322b168b55fdf38f7218712411d44316225c4d7f208ca44b71755b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:06:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/783bb38102322b168b55fdf38f7218712411d44316225c4d7f208ca44b71755b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:06:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/783bb38102322b168b55fdf38f7218712411d44316225c4d7f208ca44b71755b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:06:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/783bb38102322b168b55fdf38f7218712411d44316225c4d7f208ca44b71755b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 14:06:30 compute-0 podman[139204]: 2026-01-20 14:06:30.009729349 +0000 UTC m=+0.042472695 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:06:30 compute-0 podman[139204]: 2026-01-20 14:06:30.108481676 +0000 UTC m=+0.141225022 container init c1a7470ec6f0bb2085679ddbd8edf7f03aea42b8c73e55628b942cb24ffb4809 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_haibt, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 14:06:30 compute-0 sudo[139196]: pam_unix(sudo:session): session closed for user root
Jan 20 14:06:30 compute-0 podman[139204]: 2026-01-20 14:06:30.11796391 +0000 UTC m=+0.150707236 container start c1a7470ec6f0bb2085679ddbd8edf7f03aea42b8c73e55628b942cb24ffb4809 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_haibt, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 20 14:06:30 compute-0 podman[139204]: 2026-01-20 14:06:30.12620781 +0000 UTC m=+0.158951136 container attach c1a7470ec6f0bb2085679ddbd8edf7f03aea42b8c73e55628b942cb24ffb4809 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_haibt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 20 14:06:30 compute-0 ceph-mon[74360]: pgmap v439: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:06:30 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:06:30 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:06:30 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:06:30.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:06:30 compute-0 sudo[139301]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-svxjgbymgfytthwygsqymnbssapfqcqx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917989.603437-308-175973289557811/AnsiballZ_file.py'
Jan 20 14:06:30 compute-0 sudo[139301]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:06:30 compute-0 python3.9[139303]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=._h9viob1 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:06:30 compute-0 sudo[139301]: pam_unix(sudo:session): session closed for user root
Jan 20 14:06:30 compute-0 heuristic_haibt[139221]: --> passed data devices: 0 physical, 1 LVM
Jan 20 14:06:30 compute-0 heuristic_haibt[139221]: --> relative data size: 1.0
Jan 20 14:06:30 compute-0 heuristic_haibt[139221]: --> All data devices are unavailable
Jan 20 14:06:30 compute-0 systemd[1]: libpod-c1a7470ec6f0bb2085679ddbd8edf7f03aea42b8c73e55628b942cb24ffb4809.scope: Deactivated successfully.
Jan 20 14:06:30 compute-0 podman[139204]: 2026-01-20 14:06:30.895781325 +0000 UTC m=+0.928524671 container died c1a7470ec6f0bb2085679ddbd8edf7f03aea42b8c73e55628b942cb24ffb4809 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_haibt, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 14:06:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-783bb38102322b168b55fdf38f7218712411d44316225c4d7f208ca44b71755b-merged.mount: Deactivated successfully.
Jan 20 14:06:30 compute-0 podman[139204]: 2026-01-20 14:06:30.956682092 +0000 UTC m=+0.989425418 container remove c1a7470ec6f0bb2085679ddbd8edf7f03aea42b8c73e55628b942cb24ffb4809 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_haibt, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:06:30 compute-0 systemd[1]: libpod-conmon-c1a7470ec6f0bb2085679ddbd8edf7f03aea42b8c73e55628b942cb24ffb4809.scope: Deactivated successfully.
Jan 20 14:06:30 compute-0 sudo[138951]: pam_unix(sudo:session): session closed for user root
Jan 20 14:06:31 compute-0 sudo[139428]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:06:31 compute-0 sudo[139428]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:06:31 compute-0 sudo[139428]: pam_unix(sudo:session): session closed for user root
Jan 20 14:06:31 compute-0 sudo[139457]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:06:31 compute-0 sudo[139457]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:06:31 compute-0 sudo[139457]: pam_unix(sudo:session): session closed for user root
Jan 20 14:06:31 compute-0 sudo[139549]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gslnqbfhisltwpxvysgvdijpefgdvlef ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917990.8633747-344-122739907590666/AnsiballZ_stat.py'
Jan 20 14:06:31 compute-0 sudo[139549]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:06:31 compute-0 sudo[139506]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:06:31 compute-0 sudo[139506]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:06:31 compute-0 sudo[139506]: pam_unix(sudo:session): session closed for user root
Jan 20 14:06:31 compute-0 sudo[139556]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- lvm list --format json
Jan 20 14:06:31 compute-0 sudo[139556]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:06:31 compute-0 python3.9[139553]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:06:31 compute-0 sudo[139549]: pam_unix(sudo:session): session closed for user root
Jan 20 14:06:31 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v440: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:06:31 compute-0 podman[139670]: 2026-01-20 14:06:31.591269174 +0000 UTC m=+0.040401670 container create 36598165c4009ef7eb17b6306b0b732f7715fc6838467204906f990bb66a1f59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_morse, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:06:31 compute-0 systemd[1]: Started libpod-conmon-36598165c4009ef7eb17b6306b0b732f7715fc6838467204906f990bb66a1f59.scope.
Jan 20 14:06:31 compute-0 sudo[139710]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-itwvbrvumcwotvcpbehleilehhgymlaa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917990.8633747-344-122739907590666/AnsiballZ_file.py'
Jan 20 14:06:31 compute-0 sudo[139710]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:06:31 compute-0 podman[139670]: 2026-01-20 14:06:31.571143637 +0000 UTC m=+0.020276113 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:06:31 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:06:31 compute-0 podman[139670]: 2026-01-20 14:06:31.704520248 +0000 UTC m=+0.153652774 container init 36598165c4009ef7eb17b6306b0b732f7715fc6838467204906f990bb66a1f59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_morse, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 20 14:06:31 compute-0 podman[139670]: 2026-01-20 14:06:31.711238847 +0000 UTC m=+0.160371333 container start 36598165c4009ef7eb17b6306b0b732f7715fc6838467204906f990bb66a1f59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_morse, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 20 14:06:31 compute-0 interesting_morse[139715]: 167 167
Jan 20 14:06:31 compute-0 podman[139670]: 2026-01-20 14:06:31.716416785 +0000 UTC m=+0.165549331 container attach 36598165c4009ef7eb17b6306b0b732f7715fc6838467204906f990bb66a1f59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_morse, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:06:31 compute-0 systemd[1]: libpod-36598165c4009ef7eb17b6306b0b732f7715fc6838467204906f990bb66a1f59.scope: Deactivated successfully.
Jan 20 14:06:31 compute-0 podman[139670]: 2026-01-20 14:06:31.71733324 +0000 UTC m=+0.166465726 container died 36598165c4009ef7eb17b6306b0b732f7715fc6838467204906f990bb66a1f59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_morse, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 20 14:06:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-35e5378e5d7e124255442e4badc17a015402aa58e7ddb9de83da63f9351debcc-merged.mount: Deactivated successfully.
Jan 20 14:06:31 compute-0 podman[139670]: 2026-01-20 14:06:31.767880519 +0000 UTC m=+0.217013005 container remove 36598165c4009ef7eb17b6306b0b732f7715fc6838467204906f990bb66a1f59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_morse, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 20 14:06:31 compute-0 systemd[1]: libpod-conmon-36598165c4009ef7eb17b6306b0b732f7715fc6838467204906f990bb66a1f59.scope: Deactivated successfully.
Jan 20 14:06:31 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:06:31 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:06:31 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:06:31.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:06:31 compute-0 python3.9[139717]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:06:31 compute-0 podman[139741]: 2026-01-20 14:06:31.931666112 +0000 UTC m=+0.039838884 container create 6541bb4c39fdcc3b6b13dc5cea22f768acfa2fd301ba873d63352e12b435e243 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_matsumoto, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 14:06:31 compute-0 sudo[139710]: pam_unix(sudo:session): session closed for user root
Jan 20 14:06:31 compute-0 systemd[1]: Started libpod-conmon-6541bb4c39fdcc3b6b13dc5cea22f768acfa2fd301ba873d63352e12b435e243.scope.
Jan 20 14:06:32 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:06:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdea5534de81b09d85e347e6f252e420e005abedc3b2e5f02d1d04c48bdb54f5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:06:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdea5534de81b09d85e347e6f252e420e005abedc3b2e5f02d1d04c48bdb54f5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:06:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdea5534de81b09d85e347e6f252e420e005abedc3b2e5f02d1d04c48bdb54f5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:06:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdea5534de81b09d85e347e6f252e420e005abedc3b2e5f02d1d04c48bdb54f5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:06:32 compute-0 podman[139741]: 2026-01-20 14:06:31.913800785 +0000 UTC m=+0.021973577 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:06:32 compute-0 podman[139741]: 2026-01-20 14:06:32.021306225 +0000 UTC m=+0.129479007 container init 6541bb4c39fdcc3b6b13dc5cea22f768acfa2fd301ba873d63352e12b435e243 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_matsumoto, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 20 14:06:32 compute-0 podman[139741]: 2026-01-20 14:06:32.02750409 +0000 UTC m=+0.135676882 container start 6541bb4c39fdcc3b6b13dc5cea22f768acfa2fd301ba873d63352e12b435e243 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_matsumoto, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 14:06:32 compute-0 podman[139741]: 2026-01-20 14:06:32.03196209 +0000 UTC m=+0.140134882 container attach 6541bb4c39fdcc3b6b13dc5cea22f768acfa2fd301ba873d63352e12b435e243 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_matsumoto, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 20 14:06:32 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:06:32 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:06:32 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:06:32.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:06:32 compute-0 ceph-mon[74360]: pgmap v440: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:06:32 compute-0 vigilant_matsumoto[139765]: {
Jan 20 14:06:32 compute-0 vigilant_matsumoto[139765]:     "0": [
Jan 20 14:06:32 compute-0 vigilant_matsumoto[139765]:         {
Jan 20 14:06:32 compute-0 vigilant_matsumoto[139765]:             "devices": [
Jan 20 14:06:32 compute-0 vigilant_matsumoto[139765]:                 "/dev/loop3"
Jan 20 14:06:32 compute-0 vigilant_matsumoto[139765]:             ],
Jan 20 14:06:32 compute-0 vigilant_matsumoto[139765]:             "lv_name": "ceph_lv0",
Jan 20 14:06:32 compute-0 vigilant_matsumoto[139765]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:06:32 compute-0 vigilant_matsumoto[139765]:             "lv_size": "7511998464",
Jan 20 14:06:32 compute-0 vigilant_matsumoto[139765]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e399cf45-e6b6-5393-99f1-75c601d3f188,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afc3740a-ccee-46f8-b343-22648cc89376,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 20 14:06:32 compute-0 vigilant_matsumoto[139765]:             "lv_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 14:06:32 compute-0 vigilant_matsumoto[139765]:             "name": "ceph_lv0",
Jan 20 14:06:32 compute-0 vigilant_matsumoto[139765]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:06:32 compute-0 vigilant_matsumoto[139765]:             "tags": {
Jan 20 14:06:32 compute-0 vigilant_matsumoto[139765]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:06:32 compute-0 vigilant_matsumoto[139765]:                 "ceph.block_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 14:06:32 compute-0 vigilant_matsumoto[139765]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 14:06:32 compute-0 vigilant_matsumoto[139765]:                 "ceph.cluster_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 14:06:32 compute-0 vigilant_matsumoto[139765]:                 "ceph.cluster_name": "ceph",
Jan 20 14:06:32 compute-0 vigilant_matsumoto[139765]:                 "ceph.crush_device_class": "",
Jan 20 14:06:32 compute-0 vigilant_matsumoto[139765]:                 "ceph.encrypted": "0",
Jan 20 14:06:32 compute-0 vigilant_matsumoto[139765]:                 "ceph.osd_fsid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 14:06:32 compute-0 vigilant_matsumoto[139765]:                 "ceph.osd_id": "0",
Jan 20 14:06:32 compute-0 vigilant_matsumoto[139765]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 14:06:32 compute-0 vigilant_matsumoto[139765]:                 "ceph.type": "block",
Jan 20 14:06:32 compute-0 vigilant_matsumoto[139765]:                 "ceph.vdo": "0"
Jan 20 14:06:32 compute-0 vigilant_matsumoto[139765]:             },
Jan 20 14:06:32 compute-0 vigilant_matsumoto[139765]:             "type": "block",
Jan 20 14:06:32 compute-0 vigilant_matsumoto[139765]:             "vg_name": "ceph_vg0"
Jan 20 14:06:32 compute-0 vigilant_matsumoto[139765]:         }
Jan 20 14:06:32 compute-0 vigilant_matsumoto[139765]:     ]
Jan 20 14:06:32 compute-0 vigilant_matsumoto[139765]: }
Jan 20 14:06:32 compute-0 systemd[1]: libpod-6541bb4c39fdcc3b6b13dc5cea22f768acfa2fd301ba873d63352e12b435e243.scope: Deactivated successfully.
Jan 20 14:06:32 compute-0 podman[139741]: 2026-01-20 14:06:32.763918132 +0000 UTC m=+0.872090904 container died 6541bb4c39fdcc3b6b13dc5cea22f768acfa2fd301ba873d63352e12b435e243 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_matsumoto, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:06:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-fdea5534de81b09d85e347e6f252e420e005abedc3b2e5f02d1d04c48bdb54f5-merged.mount: Deactivated successfully.
Jan 20 14:06:32 compute-0 sudo[139926]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-klyuymsyimrdtipbcnxugsioqioasond ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917992.344763-383-10484238497850/AnsiballZ_command.py'
Jan 20 14:06:32 compute-0 sudo[139926]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:06:32 compute-0 podman[139741]: 2026-01-20 14:06:32.830096309 +0000 UTC m=+0.938269081 container remove 6541bb4c39fdcc3b6b13dc5cea22f768acfa2fd301ba873d63352e12b435e243 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_matsumoto, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:06:32 compute-0 systemd[1]: libpod-conmon-6541bb4c39fdcc3b6b13dc5cea22f768acfa2fd301ba873d63352e12b435e243.scope: Deactivated successfully.
Jan 20 14:06:32 compute-0 sudo[139556]: pam_unix(sudo:session): session closed for user root
Jan 20 14:06:32 compute-0 sudo[139932]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:06:32 compute-0 sudo[139932]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:06:32 compute-0 sudo[139932]: pam_unix(sudo:session): session closed for user root
Jan 20 14:06:32 compute-0 python3.9[139931]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:06:32 compute-0 sudo[139957]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:06:33 compute-0 sudo[139957]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:06:33 compute-0 sudo[139957]: pam_unix(sudo:session): session closed for user root
Jan 20 14:06:33 compute-0 sudo[139926]: pam_unix(sudo:session): session closed for user root
Jan 20 14:06:33 compute-0 sudo[139983]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:06:33 compute-0 sudo[139983]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:06:33 compute-0 sudo[139983]: pam_unix(sudo:session): session closed for user root
Jan 20 14:06:33 compute-0 sudo[139987]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:06:33 compute-0 sudo[139987]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:06:33 compute-0 sudo[139987]: pam_unix(sudo:session): session closed for user root
Jan 20 14:06:33 compute-0 sudo[140043]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- raw list --format json
Jan 20 14:06:33 compute-0 sudo[140043]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:06:33 compute-0 sudo[140080]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:06:33 compute-0 sudo[140080]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:06:33 compute-0 sudo[140080]: pam_unix(sudo:session): session closed for user root
Jan 20 14:06:33 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:06:33 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v441: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:06:33 compute-0 podman[140148]: 2026-01-20 14:06:33.468163154 +0000 UTC m=+0.039627199 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:06:33 compute-0 podman[140148]: 2026-01-20 14:06:33.790703985 +0000 UTC m=+0.362167960 container create d1c38f716fface338853484fe9fab81a9cb5224345a307c7fec19532c5aa5716 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_bouman, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 20 14:06:33 compute-0 systemd[1]: Started libpod-conmon-d1c38f716fface338853484fe9fab81a9cb5224345a307c7fec19532c5aa5716.scope.
Jan 20 14:06:33 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:06:33 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:06:33 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:06:33.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:06:33 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:06:33 compute-0 podman[140148]: 2026-01-20 14:06:33.922593566 +0000 UTC m=+0.494057601 container init d1c38f716fface338853484fe9fab81a9cb5224345a307c7fec19532c5aa5716 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_bouman, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:06:33 compute-0 podman[140148]: 2026-01-20 14:06:33.931077893 +0000 UTC m=+0.502541838 container start d1c38f716fface338853484fe9fab81a9cb5224345a307c7fec19532c5aa5716 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_bouman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:06:33 compute-0 affectionate_bouman[140165]: 167 167
Jan 20 14:06:33 compute-0 systemd[1]: libpod-d1c38f716fface338853484fe9fab81a9cb5224345a307c7fec19532c5aa5716.scope: Deactivated successfully.
Jan 20 14:06:33 compute-0 podman[140148]: 2026-01-20 14:06:33.938139141 +0000 UTC m=+0.509603126 container attach d1c38f716fface338853484fe9fab81a9cb5224345a307c7fec19532c5aa5716 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_bouman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 14:06:33 compute-0 podman[140148]: 2026-01-20 14:06:33.938544262 +0000 UTC m=+0.510008247 container died d1c38f716fface338853484fe9fab81a9cb5224345a307c7fec19532c5aa5716 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_bouman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 20 14:06:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-b3f11ee5f902ee53dfd7f3a126db8f221a02bf4dea0c4522f68db28ca4294749-merged.mount: Deactivated successfully.
Jan 20 14:06:33 compute-0 podman[140148]: 2026-01-20 14:06:33.982729882 +0000 UTC m=+0.554193857 container remove d1c38f716fface338853484fe9fab81a9cb5224345a307c7fec19532c5aa5716 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_bouman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:06:33 compute-0 systemd[1]: libpod-conmon-d1c38f716fface338853484fe9fab81a9cb5224345a307c7fec19532c5aa5716.scope: Deactivated successfully.
Jan 20 14:06:34 compute-0 podman[140242]: 2026-01-20 14:06:34.151913989 +0000 UTC m=+0.043934974 container create c5bb2065e9b14786fef1c855ff87471d934fdf1aca4502b0c1673299d8a69db7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_chatterjee, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:06:34 compute-0 systemd[1]: Started libpod-conmon-c5bb2065e9b14786fef1c855ff87471d934fdf1aca4502b0c1673299d8a69db7.scope.
Jan 20 14:06:34 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:06:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e33455c9297376d957b54ba5f22d2ee63827379a72bd83f75b0503bb8ca28487/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:06:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e33455c9297376d957b54ba5f22d2ee63827379a72bd83f75b0503bb8ca28487/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:06:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e33455c9297376d957b54ba5f22d2ee63827379a72bd83f75b0503bb8ca28487/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:06:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e33455c9297376d957b54ba5f22d2ee63827379a72bd83f75b0503bb8ca28487/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:06:34 compute-0 podman[140242]: 2026-01-20 14:06:34.1335906 +0000 UTC m=+0.025611605 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:06:34 compute-0 podman[140242]: 2026-01-20 14:06:34.230849696 +0000 UTC m=+0.122870671 container init c5bb2065e9b14786fef1c855ff87471d934fdf1aca4502b0c1673299d8a69db7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_chatterjee, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:06:34 compute-0 podman[140242]: 2026-01-20 14:06:34.237163195 +0000 UTC m=+0.129184200 container start c5bb2065e9b14786fef1c855ff87471d934fdf1aca4502b0c1673299d8a69db7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_chatterjee, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 20 14:06:34 compute-0 podman[140242]: 2026-01-20 14:06:34.241522012 +0000 UTC m=+0.133543007 container attach c5bb2065e9b14786fef1c855ff87471d934fdf1aca4502b0c1673299d8a69db7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_chatterjee, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 20 14:06:34 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:06:34 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:06:34 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:06:34.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:06:34 compute-0 sudo[140337]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uchfkfknycsjnzszocyoluajfcfyciwg ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1768917993.976199-407-29117087428714/AnsiballZ_edpm_nftables_from_files.py'
Jan 20 14:06:34 compute-0 sudo[140337]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:06:34 compute-0 python3[140339]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 20 14:06:34 compute-0 sudo[140337]: pam_unix(sudo:session): session closed for user root
Jan 20 14:06:34 compute-0 ceph-mon[74360]: pgmap v441: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:06:35 compute-0 frosty_chatterjee[140259]: {
Jan 20 14:06:35 compute-0 frosty_chatterjee[140259]:     "afc3740a-ccee-46f8-b343-22648cc89376": {
Jan 20 14:06:35 compute-0 frosty_chatterjee[140259]:         "ceph_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 14:06:35 compute-0 frosty_chatterjee[140259]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 20 14:06:35 compute-0 frosty_chatterjee[140259]:         "osd_id": 0,
Jan 20 14:06:35 compute-0 frosty_chatterjee[140259]:         "osd_uuid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 14:06:35 compute-0 frosty_chatterjee[140259]:         "type": "bluestore"
Jan 20 14:06:35 compute-0 frosty_chatterjee[140259]:     }
Jan 20 14:06:35 compute-0 frosty_chatterjee[140259]: }
Jan 20 14:06:35 compute-0 systemd[1]: libpod-c5bb2065e9b14786fef1c855ff87471d934fdf1aca4502b0c1673299d8a69db7.scope: Deactivated successfully.
Jan 20 14:06:35 compute-0 podman[140242]: 2026-01-20 14:06:35.140102943 +0000 UTC m=+1.032123948 container died c5bb2065e9b14786fef1c855ff87471d934fdf1aca4502b0c1673299d8a69db7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_chatterjee, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 14:06:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-e33455c9297376d957b54ba5f22d2ee63827379a72bd83f75b0503bb8ca28487-merged.mount: Deactivated successfully.
Jan 20 14:06:35 compute-0 podman[140242]: 2026-01-20 14:06:35.21866122 +0000 UTC m=+1.110682225 container remove c5bb2065e9b14786fef1c855ff87471d934fdf1aca4502b0c1673299d8a69db7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_chatterjee, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 20 14:06:35 compute-0 systemd[1]: libpod-conmon-c5bb2065e9b14786fef1c855ff87471d934fdf1aca4502b0c1673299d8a69db7.scope: Deactivated successfully.
Jan 20 14:06:35 compute-0 sudo[140043]: pam_unix(sudo:session): session closed for user root
Jan 20 14:06:35 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 14:06:35 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:06:35 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 14:06:35 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:06:35 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev fbb0ed20-77a6-4e6e-aa77-91dc61348416 does not exist
Jan 20 14:06:35 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 801007b6-8724-4b9f-830d-56d21fcd6c4b does not exist
Jan 20 14:06:35 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 3fa66706-0e7f-4eb3-ba62-f9b1b884b2a6 does not exist
Jan 20 14:06:35 compute-0 sudo[140521]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-txjoaajzbrozggkypbgvdicthbxamybo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917994.8749046-431-177890869639015/AnsiballZ_stat.py'
Jan 20 14:06:35 compute-0 sudo[140521]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:06:35 compute-0 sudo[140518]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:06:35 compute-0 sudo[140518]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:06:35 compute-0 sudo[140518]: pam_unix(sudo:session): session closed for user root
Jan 20 14:06:35 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v442: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:06:35 compute-0 sudo[140546]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 14:06:35 compute-0 sudo[140546]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:06:35 compute-0 sudo[140546]: pam_unix(sudo:session): session closed for user root
Jan 20 14:06:35 compute-0 python3.9[140533]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:06:35 compute-0 sudo[140521]: pam_unix(sudo:session): session closed for user root
Jan 20 14:06:35 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:06:35 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:06:35 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:06:35.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:06:36 compute-0 sudo[140694]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gaazfxaitoimmhhrrimdvwfrwrbxxvja ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917994.8749046-431-177890869639015/AnsiballZ_copy.py'
Jan 20 14:06:36 compute-0 sudo[140694]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:06:36 compute-0 python3.9[140696]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768917994.8749046-431-177890869639015/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:06:36 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:06:36 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:06:36 compute-0 ceph-mon[74360]: pgmap v442: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:06:36 compute-0 sudo[140694]: pam_unix(sudo:session): session closed for user root
Jan 20 14:06:36 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:06:36 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:06:36 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:06:36.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:06:36 compute-0 sudo[140846]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-caeihsstwohdasdtcinfxptcbxcanuis ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917996.4996088-476-154558977806708/AnsiballZ_stat.py'
Jan 20 14:06:36 compute-0 sudo[140846]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:06:37 compute-0 python3.9[140848]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:06:37 compute-0 sudo[140846]: pam_unix(sudo:session): session closed for user root
Jan 20 14:06:37 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v443: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:06:37 compute-0 sudo[140972]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-npokqwqfvtputddhjgqknpmjobraorus ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917996.4996088-476-154558977806708/AnsiballZ_copy.py'
Jan 20 14:06:37 compute-0 sudo[140972]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:06:37 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:06:37 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:06:37 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:06:37.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:06:37 compute-0 python3.9[140974]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768917996.4996088-476-154558977806708/.source.nft follow=False _original_basename=jump-chain.j2 checksum=ac8dea350c18f51f54d48dacc09613cda4c5540c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:06:37 compute-0 sudo[140972]: pam_unix(sudo:session): session closed for user root
Jan 20 14:06:38 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:06:38 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:06:38 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:06:38 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:06:38.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:06:38 compute-0 ceph-mon[74360]: pgmap v443: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:06:38 compute-0 sudo[141124]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aucxdjiwccxkpqdekukyawlqksrmfhai ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917998.1870508-521-198180391693224/AnsiballZ_stat.py'
Jan 20 14:06:38 compute-0 sudo[141124]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:06:38 compute-0 python3.9[141126]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:06:38 compute-0 sudo[141124]: pam_unix(sudo:session): session closed for user root
Jan 20 14:06:39 compute-0 sudo[141249]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ulinbfznrioqjqiufghxfulkqjvwaild ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917998.1870508-521-198180391693224/AnsiballZ_copy.py'
Jan 20 14:06:39 compute-0 sudo[141249]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:06:39 compute-0 python3.9[141251]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768917998.1870508-521-198180391693224/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:06:39 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v444: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:06:39 compute-0 sudo[141249]: pam_unix(sudo:session): session closed for user root
Jan 20 14:06:39 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:06:39 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:06:39 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:06:39.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:06:40 compute-0 sudo[141402]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gyqrhmpvfuxecauipzqcnabfxevmjxht ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917999.7538528-566-110438552967512/AnsiballZ_stat.py'
Jan 20 14:06:40 compute-0 sudo[141402]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:06:40 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:06:40 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:06:40 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:06:40.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:06:40 compute-0 python3.9[141404]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:06:40 compute-0 sudo[141402]: pam_unix(sudo:session): session closed for user root
Jan 20 14:06:40 compute-0 ceph-mon[74360]: pgmap v444: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:06:40 compute-0 sudo[141527]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zfvbkubjihgebxiftudlvomncypdpppm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768917999.7538528-566-110438552967512/AnsiballZ_copy.py'
Jan 20 14:06:40 compute-0 sudo[141527]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:06:41 compute-0 python3.9[141529]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768917999.7538528-566-110438552967512/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:06:41 compute-0 sudo[141527]: pam_unix(sudo:session): session closed for user root
Jan 20 14:06:41 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v445: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:06:41 compute-0 sudo[141680]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tsxugbpkmtqsnfmuqygfompeulvchuhd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918001.3899288-611-275038593875418/AnsiballZ_stat.py'
Jan 20 14:06:41 compute-0 sudo[141680]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:06:41 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:06:41 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:06:41 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:06:41.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:06:42 compute-0 python3.9[141682]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:06:42 compute-0 sudo[141680]: pam_unix(sudo:session): session closed for user root
Jan 20 14:06:42 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:06:42 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:06:42 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:06:42.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:06:42 compute-0 sudo[141805]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bufarwunaphkmiwxlicqchnphuvffflc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918001.3899288-611-275038593875418/AnsiballZ_copy.py'
Jan 20 14:06:42 compute-0 sudo[141805]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:06:42 compute-0 ceph-mon[74360]: pgmap v445: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:06:42 compute-0 python3.9[141807]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768918001.3899288-611-275038593875418/.source.nft follow=False _original_basename=ruleset.j2 checksum=bdba38546f86123f1927359d89789bd211aba99d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:06:42 compute-0 sudo[141805]: pam_unix(sudo:session): session closed for user root
Jan 20 14:06:43 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:06:43 compute-0 sudo[141957]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eolwvxllpzmfcokihyrthijstgtglzvg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918002.8637874-656-48953754252967/AnsiballZ_file.py'
Jan 20 14:06:43 compute-0 sudo[141957]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:06:43 compute-0 python3.9[141959]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:06:43 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v446: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:06:43 compute-0 sudo[141957]: pam_unix(sudo:session): session closed for user root
Jan 20 14:06:43 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:06:43 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:06:43 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:06:43.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:06:44 compute-0 sudo[142110]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yvgwazcdrltshrjurtqkqgskepseqosy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918003.7301621-680-140142437021731/AnsiballZ_command.py'
Jan 20 14:06:44 compute-0 sudo[142110]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:06:44 compute-0 python3.9[142112]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:06:44 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:06:44 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:06:44 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:06:44.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:06:44 compute-0 sudo[142110]: pam_unix(sudo:session): session closed for user root
Jan 20 14:06:44 compute-0 ceph-mon[74360]: pgmap v446: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:06:45 compute-0 sudo[142265]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lnqrbujupbjonvxgttozmohhchgjcsgs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918004.6902704-704-235727172119688/AnsiballZ_blockinfile.py'
Jan 20 14:06:45 compute-0 sudo[142265]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:06:45 compute-0 python3.9[142267]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:06:45 compute-0 sudo[142265]: pam_unix(sudo:session): session closed for user root
Jan 20 14:06:45 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v447: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:06:45 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:06:45 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:06:45 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:06:45.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:06:46 compute-0 sudo[142418]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fgnmkxportjhyidrzpurbiisamjhruol ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918005.8232353-731-207047853144808/AnsiballZ_command.py'
Jan 20 14:06:46 compute-0 sudo[142418]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:06:46 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:06:46 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:06:46 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:06:46.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:06:46 compute-0 python3.9[142420]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:06:46 compute-0 sudo[142418]: pam_unix(sudo:session): session closed for user root
Jan 20 14:06:46 compute-0 ceph-mon[74360]: pgmap v447: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:06:47 compute-0 sudo[142571]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qzsqdwazdblgcpfjbskjrzpsiqrznqco ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918006.6883407-755-253627100705063/AnsiballZ_stat.py'
Jan 20 14:06:47 compute-0 sudo[142571]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:06:47 compute-0 python3.9[142573]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 14:06:47 compute-0 sudo[142571]: pam_unix(sudo:session): session closed for user root
Jan 20 14:06:47 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v448: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:06:47 compute-0 sudo[142726]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iijrokesyatamqylxxgfvtgtfkzvphtx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918007.5775723-779-153560354251029/AnsiballZ_command.py'
Jan 20 14:06:47 compute-0 sudo[142726]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:06:47 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:06:47 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:06:47 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:06:47.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:06:48 compute-0 python3.9[142728]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:06:48 compute-0 sudo[142726]: pam_unix(sudo:session): session closed for user root
Jan 20 14:06:48 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:06:48 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:06:48 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:06:48 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:06:48.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:06:48 compute-0 ceph-mon[74360]: pgmap v448: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:06:48 compute-0 sudo[142881]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jmeknlawpzllzdbvpaorfhppfooenziu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918008.4500554-803-144064951884333/AnsiballZ_file.py'
Jan 20 14:06:48 compute-0 sudo[142881]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:06:48 compute-0 python3.9[142883]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:06:49 compute-0 sudo[142881]: pam_unix(sudo:session): session closed for user root
Jan 20 14:06:49 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v449: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:06:49 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:06:49 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:06:49 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:06:49.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:06:50 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:06:50 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:06:50 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:06:50.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:06:50 compute-0 python3.9[143034]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 14:06:50 compute-0 ceph-mon[74360]: pgmap v449: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:06:51 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v450: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:06:51 compute-0 sudo[143186]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kiocwdjftcluwqbrudkcthxnbiugxjeg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918011.4664736-923-202792695571257/AnsiballZ_command.py'
Jan 20 14:06:51 compute-0 sudo[143186]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:06:51 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:06:51 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:06:51 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:06:51.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:06:51 compute-0 python3.9[143188]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:2e:0a:8d:1d:08:09" external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch 
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:06:51 compute-0 ovs-vsctl[143189]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:2e:0a:8d:1d:08:09 external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Jan 20 14:06:52 compute-0 sudo[143186]: pam_unix(sudo:session): session closed for user root
Jan 20 14:06:52 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:06:52 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:06:52 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:06:52.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:06:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Optimize plan auto_2026-01-20_14:06:52
Jan 20 14:06:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 14:06:52 compute-0 ceph-mgr[74653]: [balancer INFO root] do_upmap
Jan 20 14:06:52 compute-0 ceph-mgr[74653]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.log', 'backups', 'cephfs.cephfs.data', 'default.rgw.control', 'images', 'cephfs.cephfs.meta', 'vms', 'volumes', 'default.rgw.meta', '.mgr']
Jan 20 14:06:52 compute-0 ceph-mgr[74653]: [balancer INFO root] prepared 0/10 changes
Jan 20 14:06:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:06:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:06:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:06:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:06:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:06:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:06:52 compute-0 sudo[143339]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ctceqwokvlerjptzamtnvohpvcwgrhdn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918012.4389155-950-242555692111353/AnsiballZ_command.py'
Jan 20 14:06:52 compute-0 sudo[143339]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:06:52 compute-0 ceph-mon[74360]: pgmap v450: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:06:52 compute-0 python3.9[143341]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ovs-vsctl show | grep -q "Manager"
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:06:52 compute-0 sudo[143339]: pam_unix(sudo:session): session closed for user root
Jan 20 14:06:53 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:06:53 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v451: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:06:53 compute-0 sudo[143494]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gwowyoeoujpezcfqhwwxwdumequxhlyd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918013.222731-974-194076772320377/AnsiballZ_command.py'
Jan 20 14:06:53 compute-0 sudo[143494]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:06:53 compute-0 sudo[143498]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:06:53 compute-0 sudo[143498]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:06:53 compute-0 sudo[143498]: pam_unix(sudo:session): session closed for user root
Jan 20 14:06:53 compute-0 sudo[143523]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:06:53 compute-0 sudo[143523]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:06:53 compute-0 sudo[143523]: pam_unix(sudo:session): session closed for user root
Jan 20 14:06:53 compute-0 python3.9[143496]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:********@manager
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:06:53 compute-0 ovs-vsctl[143548]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Jan 20 14:06:53 compute-0 sudo[143494]: pam_unix(sudo:session): session closed for user root
Jan 20 14:06:53 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:06:53 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:06:53 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:06:53.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:06:54 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:06:54 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:06:54 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:06:54.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:06:54 compute-0 python3.9[143698]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 14:06:54 compute-0 ceph-mon[74360]: pgmap v451: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:06:55 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v452: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:06:55 compute-0 sudo[143850]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eahaxabykvgreukyfckbgnisrsthxpwl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918015.1789365-1025-198396464483846/AnsiballZ_file.py'
Jan 20 14:06:55 compute-0 sudo[143850]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:06:55 compute-0 python3.9[143852]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:06:55 compute-0 sudo[143850]: pam_unix(sudo:session): session closed for user root
Jan 20 14:06:55 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:06:55 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:06:55 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:06:55.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:06:56 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:06:56 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:06:56 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:06:56.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:06:56 compute-0 sudo[144003]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-locybhcuryaqkydauazvjkanqqpmcssv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918016.0174944-1049-101477810884502/AnsiballZ_stat.py'
Jan 20 14:06:56 compute-0 sudo[144003]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:06:56 compute-0 python3.9[144005]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:06:56 compute-0 sudo[144003]: pam_unix(sudo:session): session closed for user root
Jan 20 14:06:56 compute-0 sudo[144081]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kpfgsktmpsmjnvzdktddzmrzgbmbsypn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918016.0174944-1049-101477810884502/AnsiballZ_file.py'
Jan 20 14:06:56 compute-0 sudo[144081]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:06:56 compute-0 ceph-mon[74360]: pgmap v452: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:06:57 compute-0 python3.9[144083]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:06:57 compute-0 sudo[144081]: pam_unix(sudo:session): session closed for user root
Jan 20 14:06:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 14:06:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:06:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 14:06:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:06:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:06:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:06:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:06:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:06:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:06:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:06:57 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v453: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:06:57 compute-0 sudo[144235]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fsznhjldtxeyuydagfolwytjmnrfgkuz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918017.234948-1049-40992658222543/AnsiballZ_stat.py'
Jan 20 14:06:57 compute-0 sudo[144235]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:06:57 compute-0 sshd-session[144108]: Connection closed by authenticating user root 157.245.78.139 port 57142 [preauth]
Jan 20 14:06:57 compute-0 python3.9[144237]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:06:57 compute-0 sudo[144235]: pam_unix(sudo:session): session closed for user root
Jan 20 14:06:57 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:06:57 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:06:57 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:06:57.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:06:58 compute-0 sudo[144314]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-isqpgwfjrdmafqgiqzerxinrnoodhyzq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918017.234948-1049-40992658222543/AnsiballZ_file.py'
Jan 20 14:06:58 compute-0 sudo[144314]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:06:58 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:06:58 compute-0 python3.9[144316]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:06:58 compute-0 sudo[144314]: pam_unix(sudo:session): session closed for user root
Jan 20 14:06:58 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:06:58 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:06:58 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:06:58.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:06:58 compute-0 ceph-mon[74360]: pgmap v453: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:06:59 compute-0 sudo[144466]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uneiiohwyatsfgnauxcgurjyrzbootwq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918018.7855399-1118-7621076384982/AnsiballZ_file.py'
Jan 20 14:06:59 compute-0 sudo[144466]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:06:59 compute-0 python3.9[144468]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:06:59 compute-0 sudo[144466]: pam_unix(sudo:session): session closed for user root
Jan 20 14:06:59 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v454: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:06:59 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:06:59 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:06:59 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:06:59.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:07:00 compute-0 sudo[144619]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pttuyxibnlppxdeepkcdnibtnjyirtsc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918019.7172265-1142-93155646364127/AnsiballZ_stat.py'
Jan 20 14:07:00 compute-0 sudo[144619]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:07:00 compute-0 python3.9[144621]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:07:00 compute-0 sudo[144619]: pam_unix(sudo:session): session closed for user root
Jan 20 14:07:00 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:07:00 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:07:00 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:07:00.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:07:00 compute-0 sudo[144697]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uxvjufzxkyngyislqdzrheqggpbvhotf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918019.7172265-1142-93155646364127/AnsiballZ_file.py'
Jan 20 14:07:00 compute-0 sudo[144697]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:07:00 compute-0 python3.9[144699]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:07:00 compute-0 sudo[144697]: pam_unix(sudo:session): session closed for user root
Jan 20 14:07:00 compute-0 ceph-mon[74360]: pgmap v454: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:07:01 compute-0 sudo[144849]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xgryjbxlaaxpfzvmjfijagffutrmmeac ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918020.9933515-1178-215854317332043/AnsiballZ_stat.py'
Jan 20 14:07:01 compute-0 sudo[144849]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:07:01 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v455: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:07:01 compute-0 python3.9[144851]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:07:01 compute-0 sudo[144849]: pam_unix(sudo:session): session closed for user root
Jan 20 14:07:01 compute-0 sudo[144928]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fogjguhfromgqweavjvzxvmbgbbgycsc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918020.9933515-1178-215854317332043/AnsiballZ_file.py'
Jan 20 14:07:01 compute-0 sudo[144928]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:07:01 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:07:01 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:07:01 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:07:01.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:07:02 compute-0 python3.9[144930]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:07:02 compute-0 sudo[144928]: pam_unix(sudo:session): session closed for user root
Jan 20 14:07:02 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:07:02 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:07:02 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:07:02.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:07:02 compute-0 sudo[145080]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-boayhehyznecdzkyjrkhhudlrtbhgiun ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918022.2815585-1214-192649568381577/AnsiballZ_systemd.py'
Jan 20 14:07:02 compute-0 sudo[145080]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:07:02 compute-0 python3.9[145082]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 14:07:02 compute-0 systemd[1]: Reloading.
Jan 20 14:07:02 compute-0 ceph-mon[74360]: pgmap v455: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:07:03 compute-0 systemd-rc-local-generator[145109]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:07:03 compute-0 systemd-sysv-generator[145114]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:07:03 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:07:03 compute-0 sudo[145080]: pam_unix(sudo:session): session closed for user root
Jan 20 14:07:03 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v456: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:07:03 compute-0 sudo[145272]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rfcpgxkrozetstcbrkobdbuzzgekkzvy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918023.4996192-1238-237008647157706/AnsiballZ_stat.py'
Jan 20 14:07:03 compute-0 sudo[145272]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:07:03 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:07:03 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:07:03 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:07:03.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:07:04 compute-0 python3.9[145274]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:07:04 compute-0 sudo[145272]: pam_unix(sudo:session): session closed for user root
Jan 20 14:07:04 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:07:04 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:07:04 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:07:04.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:07:04 compute-0 sudo[145350]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-njcoxycimyqfsshjfnleswlnekftnnyb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918023.4996192-1238-237008647157706/AnsiballZ_file.py'
Jan 20 14:07:04 compute-0 sudo[145350]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:07:04 compute-0 python3.9[145352]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:07:04 compute-0 sudo[145350]: pam_unix(sudo:session): session closed for user root
Jan 20 14:07:04 compute-0 ceph-mon[74360]: pgmap v456: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:07:05 compute-0 sudo[145502]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ozffcazntblvjkhrxqvkqwflvpxikzkw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918024.8403301-1274-201933332487798/AnsiballZ_stat.py'
Jan 20 14:07:05 compute-0 sudo[145502]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:07:05 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v457: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:07:05 compute-0 python3.9[145504]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:07:05 compute-0 sudo[145502]: pam_unix(sudo:session): session closed for user root
Jan 20 14:07:05 compute-0 sudo[145581]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mnrwyviozclaohlqpzprrfrrkaqjyvmx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918024.8403301-1274-201933332487798/AnsiballZ_file.py'
Jan 20 14:07:05 compute-0 sudo[145581]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:07:05 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:07:05 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:07:05 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:07:05.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:07:06 compute-0 python3.9[145583]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:07:06 compute-0 sudo[145581]: pam_unix(sudo:session): session closed for user root
Jan 20 14:07:06 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:07:06 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:07:06 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:07:06.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:07:06 compute-0 sudo[145733]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lrqtoialgsqioyhcagrdshjeajlmqdaz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918026.2624958-1310-50959246977761/AnsiballZ_systemd.py'
Jan 20 14:07:06 compute-0 sudo[145733]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:07:06 compute-0 python3.9[145735]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 14:07:06 compute-0 systemd[1]: Reloading.
Jan 20 14:07:06 compute-0 systemd-rc-local-generator[145761]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:07:06 compute-0 systemd-sysv-generator[145765]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:07:07 compute-0 ceph-mon[74360]: pgmap v457: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:07:07 compute-0 systemd[1]: Starting Create netns directory...
Jan 20 14:07:07 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 20 14:07:07 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 20 14:07:07 compute-0 systemd[1]: Finished Create netns directory.
Jan 20 14:07:07 compute-0 sudo[145733]: pam_unix(sudo:session): session closed for user root
Jan 20 14:07:07 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v458: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:07:07 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:07:07 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:07:07 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:07:07.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:07:08 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:07:08 compute-0 sudo[145928]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bilkmmlaqaabrnqmmhmuftwzjcxormxh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918027.8556342-1340-239595280538479/AnsiballZ_file.py'
Jan 20 14:07:08 compute-0 sudo[145928]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:07:08 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:07:08 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:07:08 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:07:08.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:07:08 compute-0 python3.9[145930]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:07:08 compute-0 sudo[145928]: pam_unix(sudo:session): session closed for user root
Jan 20 14:07:09 compute-0 ceph-mon[74360]: pgmap v458: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:07:09 compute-0 sudo[146080]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lygclmirwsmygyeersrjvbfalepotvum ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918028.6847744-1364-44697054513343/AnsiballZ_stat.py'
Jan 20 14:07:09 compute-0 sudo[146080]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:07:09 compute-0 python3.9[146082]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:07:09 compute-0 sudo[146080]: pam_unix(sudo:session): session closed for user root
Jan 20 14:07:09 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v459: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:07:09 compute-0 sudo[146204]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bznwvnkxhddzcutcoxmsjrqntnnbcyoh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918028.6847744-1364-44697054513343/AnsiballZ_copy.py'
Jan 20 14:07:09 compute-0 sudo[146204]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:07:09 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:07:09 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:07:09 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:07:09.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:07:10 compute-0 python3.9[146206]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1768918028.6847744-1364-44697054513343/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:07:10 compute-0 sudo[146204]: pam_unix(sudo:session): session closed for user root
Jan 20 14:07:10 compute-0 ceph-mon[74360]: pgmap v459: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:07:10 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:07:10 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:07:10 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:07:10.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:07:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 14:07:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:07:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 20 14:07:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:07:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:07:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:07:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:07:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:07:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:07:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:07:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:07:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:07:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 20 14:07:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:07:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:07:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:07:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 20 14:07:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:07:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 20 14:07:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:07:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:07:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:07:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 20 14:07:11 compute-0 sudo[146356]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lzktwibhhhqdnohshrdqijxlpxqpluyl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918030.6839666-1415-25235536836619/AnsiballZ_file.py'
Jan 20 14:07:11 compute-0 sudo[146356]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:07:11 compute-0 python3.9[146358]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:07:11 compute-0 sudo[146356]: pam_unix(sudo:session): session closed for user root
Jan 20 14:07:11 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v460: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:07:11 compute-0 sudo[146509]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jtnncgfmvcxgxwtrfuirkbhcsknojxee ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918031.47445-1439-281115429975371/AnsiballZ_file.py'
Jan 20 14:07:11 compute-0 sudo[146509]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:07:11 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:07:11 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:07:11 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:07:11.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:07:11 compute-0 python3.9[146511]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:07:12 compute-0 sudo[146509]: pam_unix(sudo:session): session closed for user root
Jan 20 14:07:12 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:07:12 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:07:12 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:07:12.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:07:12 compute-0 ceph-mon[74360]: pgmap v460: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:07:12 compute-0 sudo[146661]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-izvqkgzurvypciwgfiwvgrsqgzxvavfq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918032.2814205-1463-183470289111165/AnsiballZ_stat.py'
Jan 20 14:07:12 compute-0 sudo[146661]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:07:12 compute-0 python3.9[146663]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:07:12 compute-0 sudo[146661]: pam_unix(sudo:session): session closed for user root
Jan 20 14:07:13 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:07:13 compute-0 sudo[146784]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ryhgvyajplryicsrsdcyykbczqugvaxz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918032.2814205-1463-183470289111165/AnsiballZ_copy.py'
Jan 20 14:07:13 compute-0 sudo[146784]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:07:13 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v461: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:07:13 compute-0 python3.9[146786]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1768918032.2814205-1463-183470289111165/.source.json _original_basename=.f0giwfrj follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:07:13 compute-0 sudo[146784]: pam_unix(sudo:session): session closed for user root
Jan 20 14:07:13 compute-0 sudo[146833]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:07:13 compute-0 sudo[146833]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:07:13 compute-0 sudo[146833]: pam_unix(sudo:session): session closed for user root
Jan 20 14:07:13 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:07:13 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:07:13 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:07:13.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:07:14 compute-0 sudo[146861]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:07:14 compute-0 sudo[146861]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:07:14 compute-0 sudo[146861]: pam_unix(sudo:session): session closed for user root
Jan 20 14:07:14 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:07:14 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:07:14 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:07:14.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:07:14 compute-0 python3.9[146987]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:07:14 compute-0 ceph-mon[74360]: pgmap v461: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:07:15 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v462: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:07:15 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:07:15 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:07:15 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:07:15.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:07:16 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:07:16 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:07:16 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:07:16.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:07:16 compute-0 ceph-mon[74360]: pgmap v462: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:07:16 compute-0 sudo[147409]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gjcesqngmnevjjjdypykkylaifddfuyc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918036.341061-1583-108545154525393/AnsiballZ_container_config_data.py'
Jan 20 14:07:16 compute-0 sudo[147409]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:07:16 compute-0 ceph-osd[84815]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 20 14:07:16 compute-0 ceph-osd[84815]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Cumulative writes: 7907 writes, 33K keys, 7907 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 7907 writes, 1461 syncs, 5.41 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 7907 writes, 33K keys, 7907 commit groups, 1.0 writes per commit group, ingest: 20.72 MB, 0.03 MB/s
                                           Interval WAL: 7907 writes, 1461 syncs, 5.41 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562bb86671f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562bb86671f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562bb86671f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562bb86671f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562bb86671f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562bb86671f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562bb86671f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562bb8667610#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 1.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562bb8667610#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 1.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562bb8667610#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 1.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562bb86671f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562bb86671f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 20 14:07:17 compute-0 python3.9[147411]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Jan 20 14:07:17 compute-0 sudo[147409]: pam_unix(sudo:session): session closed for user root
Jan 20 14:07:17 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v463: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:07:17 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:07:17 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:07:17 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:07:17.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:07:18 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:07:18 compute-0 sudo[147562]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-riqepqcmdcyibbobfjbgqrcwoqdywhtg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918037.6186054-1616-77361520940412/AnsiballZ_container_config_hash.py'
Jan 20 14:07:18 compute-0 sudo[147562]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:07:18 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:07:18 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:07:18 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:07:18.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:07:18 compute-0 python3.9[147564]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 20 14:07:18 compute-0 sudo[147562]: pam_unix(sudo:session): session closed for user root
Jan 20 14:07:18 compute-0 ceph-mon[74360]: pgmap v463: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:07:19 compute-0 sudo[147714]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oyjxnnixhcazrvttdelgxeexsbwiqwgg ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1768918038.8777394-1646-250974468940816/AnsiballZ_edpm_container_manage.py'
Jan 20 14:07:19 compute-0 sudo[147714]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:07:19 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v464: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:07:19 compute-0 python3[147716]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json containers=['ovn_controller'] log_base_path=/var/log/containers/stdouts debug=False
Jan 20 14:07:19 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:07:19 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:07:19 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:07:19.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:07:20 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:07:20 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:07:20 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:07:20.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:07:20 compute-0 ceph-mon[74360]: pgmap v464: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:07:21 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v465: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:07:21 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:07:21 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:07:21 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:07:21.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:07:22 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:07:22 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:07:22 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:07:22.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:07:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:07:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:07:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:07:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:07:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:07:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:07:22 compute-0 ceph-mon[74360]: pgmap v465: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:07:23 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:07:23 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v466: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:07:23 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:07:23 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:07:23 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:07:23.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:07:24 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:07:24 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:07:24 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:07:24.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:07:24 compute-0 ceph-mon[74360]: pgmap v466: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:07:24 compute-0 podman[147730]: 2026-01-20 14:07:24.901322 +0000 UTC m=+5.037324173 image pull a17927617ef5a603f0594ee0d6df65aabdc9e0303ccc5a52c36f193de33ee0fe quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Jan 20 14:07:25 compute-0 podman[147849]: 2026-01-20 14:07:25.050789597 +0000 UTC m=+0.058477114 container create 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_controller)
Jan 20 14:07:25 compute-0 podman[147849]: 2026-01-20 14:07:25.02723295 +0000 UTC m=+0.034920457 image pull a17927617ef5a603f0594ee0d6df65aabdc9e0303ccc5a52c36f193de33ee0fe quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Jan 20 14:07:25 compute-0 python3[147716]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9 --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Jan 20 14:07:25 compute-0 sudo[147714]: pam_unix(sudo:session): session closed for user root
Jan 20 14:07:25 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v467: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:07:25 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:07:25 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:07:25 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:07:25.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:07:26 compute-0 sudo[148038]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-awpwizvsisjkzxxezlbumgzvrfhozgbf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918045.9063084-1670-271638176164008/AnsiballZ_stat.py'
Jan 20 14:07:26 compute-0 sudo[148038]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:07:26 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:07:26 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:07:26 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:07:26.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:07:26 compute-0 python3.9[148040]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 14:07:26 compute-0 sudo[148038]: pam_unix(sudo:session): session closed for user root
Jan 20 14:07:26 compute-0 ceph-mon[74360]: pgmap v467: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:07:27 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v468: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:07:27 compute-0 sudo[148192]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iwqlansfnifqnphfwvsurncqjbxzntmo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918047.1483536-1697-143333423432923/AnsiballZ_file.py'
Jan 20 14:07:27 compute-0 sudo[148192]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:07:27 compute-0 python3.9[148194]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:07:27 compute-0 sudo[148192]: pam_unix(sudo:session): session closed for user root
Jan 20 14:07:27 compute-0 sudo[148269]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lpkrddogimcwszgrgtwpydpjbjoughuy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918047.1483536-1697-143333423432923/AnsiballZ_stat.py'
Jan 20 14:07:27 compute-0 sudo[148269]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:07:27 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:07:27 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:07:27 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:07:27.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:07:28 compute-0 python3.9[148271]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 14:07:28 compute-0 sudo[148269]: pam_unix(sudo:session): session closed for user root
Jan 20 14:07:28 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:07:28 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:07:28 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:07:28 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:07:28.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:07:28 compute-0 sudo[148420]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ugwkgugkbozuyffotvguikzbszljljou ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918048.2143025-1697-170014833108294/AnsiballZ_copy.py'
Jan 20 14:07:28 compute-0 sudo[148420]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:07:28 compute-0 python3.9[148422]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1768918048.2143025-1697-170014833108294/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:07:28 compute-0 sudo[148420]: pam_unix(sudo:session): session closed for user root
Jan 20 14:07:29 compute-0 ceph-mon[74360]: pgmap v468: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:07:29 compute-0 sudo[148496]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nbnaofoeitoksbcpqaiiifpljoqspvhk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918048.2143025-1697-170014833108294/AnsiballZ_systemd.py'
Jan 20 14:07:29 compute-0 sudo[148496]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:07:29 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v469: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:07:29 compute-0 python3.9[148498]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 20 14:07:29 compute-0 systemd[1]: Reloading.
Jan 20 14:07:29 compute-0 systemd-rc-local-generator[148518]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:07:29 compute-0 systemd-sysv-generator[148524]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:07:29 compute-0 sudo[148496]: pam_unix(sudo:session): session closed for user root
Jan 20 14:07:29 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:07:29 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:07:29 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:07:29.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:07:30 compute-0 sudo[148607]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bntxpcftpqguybembynnetsamyotxelm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918048.2143025-1697-170014833108294/AnsiballZ_systemd.py'
Jan 20 14:07:30 compute-0 sudo[148607]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:07:30 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:07:30 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:07:30 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:07:30.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:07:30 compute-0 python3.9[148609]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 14:07:30 compute-0 systemd[1]: Reloading.
Jan 20 14:07:30 compute-0 systemd-rc-local-generator[148637]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:07:30 compute-0 systemd-sysv-generator[148641]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:07:30 compute-0 systemd[1]: Starting ovn_controller container...
Jan 20 14:07:30 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:07:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/451bc13070a1362524cee7b06ce4305936af313624483d9388a0562c511e8562/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Jan 20 14:07:30 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533.
Jan 20 14:07:30 compute-0 podman[148650]: 2026-01-20 14:07:30.928139579 +0000 UTC m=+0.120787282 container init 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 20 14:07:30 compute-0 ovn_controller[148666]: + sudo -E kolla_set_configs
Jan 20 14:07:30 compute-0 podman[148650]: 2026-01-20 14:07:30.957067153 +0000 UTC m=+0.149714846 container start 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202)
Jan 20 14:07:30 compute-0 edpm-start-podman-container[148650]: ovn_controller
Jan 20 14:07:30 compute-0 systemd[1]: Created slice User Slice of UID 0.
Jan 20 14:07:30 compute-0 systemd[1]: Starting User Runtime Directory /run/user/0...
Jan 20 14:07:31 compute-0 systemd[1]: Finished User Runtime Directory /run/user/0.
Jan 20 14:07:31 compute-0 ceph-mon[74360]: pgmap v469: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:07:31 compute-0 systemd[1]: Starting User Manager for UID 0...
Jan 20 14:07:31 compute-0 systemd[148704]: pam_unix(systemd-user:session): session opened for user root(uid=0) by root(uid=0)
Jan 20 14:07:31 compute-0 ceph-mon[74360]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #27. Immutable memtables: 0.
Jan 20 14:07:31 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:07:31.049790) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 14:07:31 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 27
Jan 20 14:07:31 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768918051049877, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 1542, "num_deletes": 251, "total_data_size": 2816145, "memory_usage": 2862312, "flush_reason": "Manual Compaction"}
Jan 20 14:07:31 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #28: started
Jan 20 14:07:31 compute-0 edpm-start-podman-container[148649]: Creating additional drop-in dependency for "ovn_controller" (43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533)
Jan 20 14:07:31 compute-0 podman[148673]: 2026-01-20 14:07:31.062356564 +0000 UTC m=+0.096718120 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 20 14:07:31 compute-0 systemd[1]: 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533-28a6d8c2ffb62b11.service: Main process exited, code=exited, status=1/FAILURE
Jan 20 14:07:31 compute-0 systemd[1]: 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533-28a6d8c2ffb62b11.service: Failed with result 'exit-code'.
Jan 20 14:07:31 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768918051074310, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 28, "file_size": 2763657, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 10683, "largest_seqno": 12223, "table_properties": {"data_size": 2756564, "index_size": 4164, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 13986, "raw_average_key_size": 19, "raw_value_size": 2742491, "raw_average_value_size": 3793, "num_data_blocks": 188, "num_entries": 723, "num_filter_entries": 723, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768917888, "oldest_key_time": 1768917888, "file_creation_time": 1768918051, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1688fb6f-f397-4aac-a0e8-874ff91d97a7", "db_session_id": "5V3N2TVXYZBCXP55EZHK", "orig_file_number": 28, "seqno_to_time_mapping": "N/A"}}
Jan 20 14:07:31 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 24615 microseconds, and 11476 cpu microseconds.
Jan 20 14:07:31 compute-0 ceph-mon[74360]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 14:07:31 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:07:31.074412) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #28: 2763657 bytes OK
Jan 20 14:07:31 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:07:31.074430) [db/memtable_list.cc:519] [default] Level-0 commit table #28 started
Jan 20 14:07:31 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:07:31.075878) [db/memtable_list.cc:722] [default] Level-0 commit table #28: memtable #1 done
Jan 20 14:07:31 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:07:31.075892) EVENT_LOG_v1 {"time_micros": 1768918051075888, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 14:07:31 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:07:31.075908) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 14:07:31 compute-0 ceph-mon[74360]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 2809744, prev total WAL file size 2809744, number of live WAL files 2.
Jan 20 14:07:31 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000024.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:07:31 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:07:31.076741) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300353032' seq:72057594037927935, type:22 .. '7061786F7300373534' seq:0, type:0; will stop at (end)
Jan 20 14:07:31 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 14:07:31 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [28(2698KB)], [26(7454KB)]
Jan 20 14:07:31 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768918051076787, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [28], "files_L6": [26], "score": -1, "input_data_size": 10396581, "oldest_snapshot_seqno": -1}
Jan 20 14:07:31 compute-0 systemd[1]: Reloading.
Jan 20 14:07:31 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #29: 3956 keys, 8245148 bytes, temperature: kUnknown
Jan 20 14:07:31 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768918051132741, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 29, "file_size": 8245148, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8216271, "index_size": 17887, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9925, "raw_key_size": 96085, "raw_average_key_size": 24, "raw_value_size": 8142348, "raw_average_value_size": 2058, "num_data_blocks": 773, "num_entries": 3956, "num_filter_entries": 3956, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768917305, "oldest_key_time": 0, "file_creation_time": 1768918051, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1688fb6f-f397-4aac-a0e8-874ff91d97a7", "db_session_id": "5V3N2TVXYZBCXP55EZHK", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Jan 20 14:07:31 compute-0 ceph-mon[74360]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 14:07:31 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:07:31.132953) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 8245148 bytes
Jan 20 14:07:31 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:07:31.134197) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 185.5 rd, 147.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.6, 7.3 +0.0 blob) out(7.9 +0.0 blob), read-write-amplify(6.7) write-amplify(3.0) OK, records in: 4473, records dropped: 517 output_compression: NoCompression
Jan 20 14:07:31 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:07:31.134217) EVENT_LOG_v1 {"time_micros": 1768918051134207, "job": 10, "event": "compaction_finished", "compaction_time_micros": 56034, "compaction_time_cpu_micros": 25763, "output_level": 6, "num_output_files": 1, "total_output_size": 8245148, "num_input_records": 4473, "num_output_records": 3956, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 14:07:31 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000028.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:07:31 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768918051134876, "job": 10, "event": "table_file_deletion", "file_number": 28}
Jan 20 14:07:31 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:07:31 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768918051136331, "job": 10, "event": "table_file_deletion", "file_number": 26}
Jan 20 14:07:31 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:07:31.076669) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:07:31 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:07:31.136392) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:07:31 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:07:31.136398) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:07:31 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:07:31.136400) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:07:31 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:07:31.136402) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:07:31 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:07:31.136404) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:07:31 compute-0 systemd-rc-local-generator[148755]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:07:31 compute-0 systemd-sysv-generator[148761]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:07:31 compute-0 systemd[148704]: Queued start job for default target Main User Target.
Jan 20 14:07:31 compute-0 systemd[148704]: Created slice User Application Slice.
Jan 20 14:07:31 compute-0 systemd[148704]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Jan 20 14:07:31 compute-0 systemd[148704]: Started Daily Cleanup of User's Temporary Directories.
Jan 20 14:07:31 compute-0 systemd[148704]: Reached target Paths.
Jan 20 14:07:31 compute-0 systemd[148704]: Reached target Timers.
Jan 20 14:07:31 compute-0 systemd[148704]: Starting D-Bus User Message Bus Socket...
Jan 20 14:07:31 compute-0 systemd[148704]: Starting Create User's Volatile Files and Directories...
Jan 20 14:07:31 compute-0 systemd[148704]: Listening on D-Bus User Message Bus Socket.
Jan 20 14:07:31 compute-0 systemd[148704]: Reached target Sockets.
Jan 20 14:07:31 compute-0 systemd[148704]: Finished Create User's Volatile Files and Directories.
Jan 20 14:07:31 compute-0 systemd[148704]: Reached target Basic System.
Jan 20 14:07:31 compute-0 systemd[148704]: Reached target Main User Target.
Jan 20 14:07:31 compute-0 systemd[148704]: Startup finished in 161ms.
Jan 20 14:07:31 compute-0 systemd[1]: Started User Manager for UID 0.
Jan 20 14:07:31 compute-0 systemd[1]: Started ovn_controller container.
Jan 20 14:07:31 compute-0 systemd[1]: Started Session c1 of User root.
Jan 20 14:07:31 compute-0 sudo[148607]: pam_unix(sudo:session): session closed for user root
Jan 20 14:07:31 compute-0 ovn_controller[148666]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 20 14:07:31 compute-0 ovn_controller[148666]: INFO:__main__:Validating config file
Jan 20 14:07:31 compute-0 ovn_controller[148666]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 20 14:07:31 compute-0 ovn_controller[148666]: INFO:__main__:Writing out command to execute
Jan 20 14:07:31 compute-0 systemd[1]: session-c1.scope: Deactivated successfully.
Jan 20 14:07:31 compute-0 ovn_controller[148666]: ++ cat /run_command
Jan 20 14:07:31 compute-0 ovn_controller[148666]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Jan 20 14:07:31 compute-0 ovn_controller[148666]: + ARGS=
Jan 20 14:07:31 compute-0 ovn_controller[148666]: + sudo kolla_copy_cacerts
Jan 20 14:07:31 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v470: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:07:31 compute-0 systemd[1]: Started Session c2 of User root.
Jan 20 14:07:31 compute-0 ovn_controller[148666]: + [[ ! -n '' ]]
Jan 20 14:07:31 compute-0 ovn_controller[148666]: + . kolla_extend_start
Jan 20 14:07:31 compute-0 ovn_controller[148666]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Jan 20 14:07:31 compute-0 ovn_controller[148666]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Jan 20 14:07:31 compute-0 ovn_controller[148666]: + umask 0022
Jan 20 14:07:31 compute-0 ovn_controller[148666]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
Jan 20 14:07:31 compute-0 systemd[1]: session-c2.scope: Deactivated successfully.
Jan 20 14:07:31 compute-0 ovn_controller[148666]: 2026-01-20T14:07:31Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Jan 20 14:07:31 compute-0 ovn_controller[148666]: 2026-01-20T14:07:31Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Jan 20 14:07:31 compute-0 ovn_controller[148666]: 2026-01-20T14:07:31Z|00003|main|INFO|OVN internal version is : [24.03.8-20.33.0-76.8]
Jan 20 14:07:31 compute-0 ovn_controller[148666]: 2026-01-20T14:07:31Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Jan 20 14:07:31 compute-0 ovn_controller[148666]: 2026-01-20T14:07:31Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Jan 20 14:07:31 compute-0 ovn_controller[148666]: 2026-01-20T14:07:31Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Jan 20 14:07:31 compute-0 NetworkManager[48960]: <info>  [1768918051.5230] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/16)
Jan 20 14:07:31 compute-0 NetworkManager[48960]: <info>  [1768918051.5235] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 20 14:07:31 compute-0 NetworkManager[48960]: <warn>  [1768918051.5237] device (br-int)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 20 14:07:31 compute-0 NetworkManager[48960]: <info>  [1768918051.5241] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Jan 20 14:07:31 compute-0 NetworkManager[48960]: <info>  [1768918051.5244] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/18)
Jan 20 14:07:31 compute-0 NetworkManager[48960]: <info>  [1768918051.5246] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Jan 20 14:07:31 compute-0 kernel: br-int: entered promiscuous mode
Jan 20 14:07:31 compute-0 ovn_controller[148666]: 2026-01-20T14:07:31Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Jan 20 14:07:31 compute-0 ovn_controller[148666]: 2026-01-20T14:07:31Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 20 14:07:31 compute-0 ovn_controller[148666]: 2026-01-20T14:07:31Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 20 14:07:31 compute-0 ovn_controller[148666]: 2026-01-20T14:07:31Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Jan 20 14:07:31 compute-0 ovn_controller[148666]: 2026-01-20T14:07:31Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Jan 20 14:07:31 compute-0 ovn_controller[148666]: 2026-01-20T14:07:31Z|00012|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Jan 20 14:07:31 compute-0 ovn_controller[148666]: 2026-01-20T14:07:31Z|00013|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Jan 20 14:07:31 compute-0 ovn_controller[148666]: 2026-01-20T14:07:31Z|00014|main|INFO|OVS feature set changed, force recompute.
Jan 20 14:07:31 compute-0 ovn_controller[148666]: 2026-01-20T14:07:31Z|00015|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 20 14:07:31 compute-0 ovn_controller[148666]: 2026-01-20T14:07:31Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 20 14:07:31 compute-0 ovn_controller[148666]: 2026-01-20T14:07:31Z|00017|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 20 14:07:31 compute-0 ovn_controller[148666]: 2026-01-20T14:07:31Z|00018|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Jan 20 14:07:31 compute-0 ovn_controller[148666]: 2026-01-20T14:07:31Z|00019|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Jan 20 14:07:31 compute-0 ovn_controller[148666]: 2026-01-20T14:07:31Z|00020|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 20 14:07:31 compute-0 ovn_controller[148666]: 2026-01-20T14:07:31Z|00021|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Jan 20 14:07:31 compute-0 ovn_controller[148666]: 2026-01-20T14:07:31Z|00022|main|INFO|OVS feature set changed, force recompute.
Jan 20 14:07:31 compute-0 ovn_controller[148666]: 2026-01-20T14:07:31Z|00023|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Jan 20 14:07:31 compute-0 ovn_controller[148666]: 2026-01-20T14:07:31Z|00024|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Jan 20 14:07:31 compute-0 ovn_controller[148666]: 2026-01-20T14:07:31Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 20 14:07:31 compute-0 ovn_controller[148666]: 2026-01-20T14:07:31Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 20 14:07:31 compute-0 ovn_controller[148666]: 2026-01-20T14:07:31Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 20 14:07:31 compute-0 ovn_controller[148666]: 2026-01-20T14:07:31Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 20 14:07:31 compute-0 ovn_controller[148666]: 2026-01-20T14:07:31Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 20 14:07:31 compute-0 NetworkManager[48960]: <info>  [1768918051.5508] manager: (ovn-920572-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Jan 20 14:07:31 compute-0 kernel: genev_sys_6081: entered promiscuous mode
Jan 20 14:07:31 compute-0 NetworkManager[48960]: <info>  [1768918051.5778] device (genev_sys_6081): carrier: link connected
Jan 20 14:07:31 compute-0 NetworkManager[48960]: <info>  [1768918051.5786] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/20)
Jan 20 14:07:31 compute-0 systemd-udevd[148803]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:07:31 compute-0 ovn_controller[148666]: 2026-01-20T14:07:31Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 20 14:07:31 compute-0 systemd-udevd[148804]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:07:31 compute-0 ceph-mgr[74653]: [devicehealth INFO root] Check health
Jan 20 14:07:31 compute-0 NetworkManager[48960]: <info>  [1768918051.8851] manager: (ovn-5ffd4a-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/21)
Jan 20 14:07:31 compute-0 NetworkManager[48960]: <info>  [1768918051.9279] manager: (ovn-7c9bfe-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/22)
Jan 20 14:07:31 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:07:31 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:07:31 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:07:31.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:07:32 compute-0 ceph-mon[74360]: pgmap v470: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:07:32 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:07:32 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:07:32 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:07:32.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:07:32 compute-0 python3.9[148935]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Jan 20 14:07:33 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:07:33 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v471: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:07:33 compute-0 sudo[149086]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-brsvouaetwsfbqvboauftcpmpveffywe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918053.5746355-1832-112740270215980/AnsiballZ_stat.py'
Jan 20 14:07:33 compute-0 sudo[149086]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:07:33 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:07:33 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:07:33 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:07:33.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:07:34 compute-0 sudo[149089]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:07:34 compute-0 sudo[149089]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:07:34 compute-0 sudo[149089]: pam_unix(sudo:session): session closed for user root
Jan 20 14:07:34 compute-0 sudo[149114]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:07:34 compute-0 sudo[149114]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:07:34 compute-0 sudo[149114]: pam_unix(sudo:session): session closed for user root
Jan 20 14:07:34 compute-0 python3.9[149088]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:07:34 compute-0 sudo[149086]: pam_unix(sudo:session): session closed for user root
Jan 20 14:07:34 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:07:34 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:07:34 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:07:34.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:07:34 compute-0 ceph-mon[74360]: pgmap v471: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:07:34 compute-0 sudo[149261]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ecqoaiwhwtwefnasvsfrxqodbdgoudie ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918053.5746355-1832-112740270215980/AnsiballZ_copy.py'
Jan 20 14:07:34 compute-0 sudo[149261]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:07:34 compute-0 python3.9[149263]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1768918053.5746355-1832-112740270215980/.source.yaml _original_basename=.affiwu7a follow=False checksum=aedaf657c77fb1feab67c7335f83a0d24eed0971 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:07:34 compute-0 sudo[149261]: pam_unix(sudo:session): session closed for user root
Jan 20 14:07:35 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v472: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:07:35 compute-0 sudo[149413]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bfdmxypvpczpdvnxrxtmmgnbjfxitotc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918055.163569-1877-56297211442443/AnsiballZ_command.py'
Jan 20 14:07:35 compute-0 sudo[149413]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:07:35 compute-0 python3.9[149415]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:07:35 compute-0 ovs-vsctl[149417]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Jan 20 14:07:35 compute-0 sudo[149413]: pam_unix(sudo:session): session closed for user root
Jan 20 14:07:35 compute-0 sudo[149431]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:07:35 compute-0 sudo[149431]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:07:35 compute-0 sudo[149431]: pam_unix(sudo:session): session closed for user root
Jan 20 14:07:35 compute-0 sudo[149467]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:07:35 compute-0 sudo[149467]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:07:35 compute-0 sudo[149467]: pam_unix(sudo:session): session closed for user root
Jan 20 14:07:35 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:07:35 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:07:35 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:07:35.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:07:36 compute-0 sudo[149518]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:07:36 compute-0 sudo[149518]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:07:36 compute-0 sudo[149518]: pam_unix(sudo:session): session closed for user root
Jan 20 14:07:36 compute-0 sudo[149569]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 14:07:36 compute-0 sudo[149569]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:07:36 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 20 14:07:36 compute-0 sudo[149674]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nuyjdptithfasbnjjozmhmigftuhbjyx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918055.9408019-1901-7108553113341/AnsiballZ_command.py'
Jan 20 14:07:36 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:07:36 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 20 14:07:36 compute-0 sudo[149674]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:07:36 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 20 14:07:36 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:07:36 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 20 14:07:36 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:07:36 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:07:36 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:07:36 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:07:36 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:07:36.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:07:36 compute-0 python3.9[149680]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g'
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:07:36 compute-0 ovs-vsctl[149687]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Jan 20 14:07:36 compute-0 sudo[149674]: pam_unix(sudo:session): session closed for user root
Jan 20 14:07:36 compute-0 ceph-mon[74360]: pgmap v472: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:07:36 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:07:36 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:07:36 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:07:36 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:07:36 compute-0 sudo[149569]: pam_unix(sudo:session): session closed for user root
Jan 20 14:07:36 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Jan 20 14:07:36 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 20 14:07:37 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Jan 20 14:07:37 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 20 14:07:37 compute-0 sudo[149852]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jwdiabfgwdukkycmsxltnmoigwetjowb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918057.1041458-1943-87778890522302/AnsiballZ_command.py'
Jan 20 14:07:37 compute-0 sudo[149852]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:07:37 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v473: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:07:37 compute-0 python3.9[149854]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:07:37 compute-0 ovs-vsctl[149855]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Jan 20 14:07:37 compute-0 sudo[149852]: pam_unix(sudo:session): session closed for user root
Jan 20 14:07:37 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]': finished
Jan 20 14:07:37 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 14:07:37 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:07:37 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 20 14:07:37 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:07:37 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 20 14:07:37 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 20 14:07:37 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 20 14:07:37 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:07:37 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 6309c291-86eb-4dea-994d-669a55289f2b does not exist
Jan 20 14:07:37 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev e9321363-f0bd-4d6b-9088-c0131c38cb1f does not exist
Jan 20 14:07:37 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 95a83746-0934-448b-978f-e6a0f60c312a does not exist
Jan 20 14:07:37 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 20 14:07:37 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:07:37 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 20 14:07:37 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:07:37 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 14:07:37 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:07:37 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:07:37 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:07:37 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:07:37.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:07:37 compute-0 sudo[149881]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:07:37 compute-0 sudo[149881]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:07:37 compute-0 sudo[149881]: pam_unix(sudo:session): session closed for user root
Jan 20 14:07:38 compute-0 sudo[149906]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:07:38 compute-0 sudo[149906]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:07:38 compute-0 sudo[149906]: pam_unix(sudo:session): session closed for user root
Jan 20 14:07:38 compute-0 sudo[149931]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:07:38 compute-0 sudo[149931]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:07:38 compute-0 sudo[149931]: pam_unix(sudo:session): session closed for user root
Jan 20 14:07:38 compute-0 sudo[149956]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 14:07:38 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:07:38 compute-0 sudo[149956]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:07:38 compute-0 sshd-session[136926]: Connection closed by 192.168.122.30 port 33794
Jan 20 14:07:38 compute-0 sshd-session[136922]: pam_unix(sshd:session): session closed for user zuul
Jan 20 14:07:38 compute-0 systemd[1]: session-46.scope: Deactivated successfully.
Jan 20 14:07:38 compute-0 systemd[1]: session-46.scope: Consumed 1min 4.789s CPU time.
Jan 20 14:07:38 compute-0 systemd-logind[796]: Session 46 logged out. Waiting for processes to exit.
Jan 20 14:07:38 compute-0 systemd-logind[796]: Removed session 46.
Jan 20 14:07:38 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:07:38 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:07:38 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:07:38.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:07:38 compute-0 podman[150022]: 2026-01-20 14:07:38.551821518 +0000 UTC m=+0.050277322 container create a49309512fd259730e5cc1cb89e59b810a18a3db9e620c5ab4a88728d867067d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_nobel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:07:38 compute-0 systemd[1]: Started libpod-conmon-a49309512fd259730e5cc1cb89e59b810a18a3db9e620c5ab4a88728d867067d.scope.
Jan 20 14:07:38 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:07:38 compute-0 podman[150022]: 2026-01-20 14:07:38.52455909 +0000 UTC m=+0.023014994 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:07:38 compute-0 podman[150022]: 2026-01-20 14:07:38.63754423 +0000 UTC m=+0.136000094 container init a49309512fd259730e5cc1cb89e59b810a18a3db9e620c5ab4a88728d867067d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_nobel, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Jan 20 14:07:38 compute-0 podman[150022]: 2026-01-20 14:07:38.643569723 +0000 UTC m=+0.142025537 container start a49309512fd259730e5cc1cb89e59b810a18a3db9e620c5ab4a88728d867067d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_nobel, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:07:38 compute-0 podman[150022]: 2026-01-20 14:07:38.646615606 +0000 UTC m=+0.145071520 container attach a49309512fd259730e5cc1cb89e59b810a18a3db9e620c5ab4a88728d867067d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_nobel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 20 14:07:38 compute-0 quirky_nobel[150038]: 167 167
Jan 20 14:07:38 compute-0 systemd[1]: libpod-a49309512fd259730e5cc1cb89e59b810a18a3db9e620c5ab4a88728d867067d.scope: Deactivated successfully.
Jan 20 14:07:38 compute-0 conmon[150038]: conmon a49309512fd259730e5c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a49309512fd259730e5cc1cb89e59b810a18a3db9e620c5ab4a88728d867067d.scope/container/memory.events
Jan 20 14:07:38 compute-0 podman[150022]: 2026-01-20 14:07:38.653249165 +0000 UTC m=+0.151704979 container died a49309512fd259730e5cc1cb89e59b810a18a3db9e620c5ab4a88728d867067d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_nobel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:07:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-bbead2d3e16966890fb7836b1651cf76da2a56cf0da6e3df2072c0cab061c5af-merged.mount: Deactivated successfully.
Jan 20 14:07:38 compute-0 podman[150022]: 2026-01-20 14:07:38.696483946 +0000 UTC m=+0.194939760 container remove a49309512fd259730e5cc1cb89e59b810a18a3db9e620c5ab4a88728d867067d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_nobel, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 14:07:38 compute-0 systemd[1]: libpod-conmon-a49309512fd259730e5cc1cb89e59b810a18a3db9e620c5ab4a88728d867067d.scope: Deactivated successfully.
Jan 20 14:07:38 compute-0 podman[150062]: 2026-01-20 14:07:38.877102098 +0000 UTC m=+0.036161651 container create 33c2fb6138e402ca2e7d8ceb95c1ef18bd289893bbbb9985d2a683b5f49d779b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_vaughan, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 20 14:07:38 compute-0 ceph-mon[74360]: pgmap v473: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:07:38 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd='[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]': finished
Jan 20 14:07:38 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:07:38 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:07:38 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:07:38 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:07:38 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:07:38 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:07:38 compute-0 systemd[1]: Started libpod-conmon-33c2fb6138e402ca2e7d8ceb95c1ef18bd289893bbbb9985d2a683b5f49d779b.scope.
Jan 20 14:07:38 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:07:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1221910ff2cb25b5e2e3da11cbde70c182de0d2e97a718fe49fef28b2a1461c5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:07:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1221910ff2cb25b5e2e3da11cbde70c182de0d2e97a718fe49fef28b2a1461c5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:07:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1221910ff2cb25b5e2e3da11cbde70c182de0d2e97a718fe49fef28b2a1461c5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:07:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1221910ff2cb25b5e2e3da11cbde70c182de0d2e97a718fe49fef28b2a1461c5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:07:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1221910ff2cb25b5e2e3da11cbde70c182de0d2e97a718fe49fef28b2a1461c5/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 14:07:38 compute-0 podman[150062]: 2026-01-20 14:07:38.956493028 +0000 UTC m=+0.115552601 container init 33c2fb6138e402ca2e7d8ceb95c1ef18bd289893bbbb9985d2a683b5f49d779b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_vaughan, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:07:38 compute-0 podman[150062]: 2026-01-20 14:07:38.861980288 +0000 UTC m=+0.021039871 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:07:38 compute-0 podman[150062]: 2026-01-20 14:07:38.96428883 +0000 UTC m=+0.123348393 container start 33c2fb6138e402ca2e7d8ceb95c1ef18bd289893bbbb9985d2a683b5f49d779b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_vaughan, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 20 14:07:38 compute-0 podman[150062]: 2026-01-20 14:07:38.968078882 +0000 UTC m=+0.127138435 container attach 33c2fb6138e402ca2e7d8ceb95c1ef18bd289893bbbb9985d2a683b5f49d779b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_vaughan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:07:39 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v474: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:07:39 compute-0 kind_vaughan[150078]: --> passed data devices: 0 physical, 1 LVM
Jan 20 14:07:39 compute-0 kind_vaughan[150078]: --> relative data size: 1.0
Jan 20 14:07:39 compute-0 kind_vaughan[150078]: --> All data devices are unavailable
Jan 20 14:07:39 compute-0 systemd[1]: libpod-33c2fb6138e402ca2e7d8ceb95c1ef18bd289893bbbb9985d2a683b5f49d779b.scope: Deactivated successfully.
Jan 20 14:07:39 compute-0 podman[150062]: 2026-01-20 14:07:39.733560084 +0000 UTC m=+0.892619647 container died 33c2fb6138e402ca2e7d8ceb95c1ef18bd289893bbbb9985d2a683b5f49d779b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_vaughan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:07:39 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:07:39 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:07:39 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:07:39.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:07:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-1221910ff2cb25b5e2e3da11cbde70c182de0d2e97a718fe49fef28b2a1461c5-merged.mount: Deactivated successfully.
Jan 20 14:07:40 compute-0 podman[150062]: 2026-01-20 14:07:40.316618075 +0000 UTC m=+1.475677628 container remove 33c2fb6138e402ca2e7d8ceb95c1ef18bd289893bbbb9985d2a683b5f49d779b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_vaughan, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 20 14:07:40 compute-0 ceph-mon[74360]: pgmap v474: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:07:40 compute-0 systemd[1]: libpod-conmon-33c2fb6138e402ca2e7d8ceb95c1ef18bd289893bbbb9985d2a683b5f49d779b.scope: Deactivated successfully.
Jan 20 14:07:40 compute-0 sudo[149956]: pam_unix(sudo:session): session closed for user root
Jan 20 14:07:40 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:07:40 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:07:40 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:07:40.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:07:40 compute-0 sudo[150105]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:07:40 compute-0 sudo[150105]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:07:40 compute-0 sudo[150105]: pam_unix(sudo:session): session closed for user root
Jan 20 14:07:40 compute-0 sudo[150130]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:07:40 compute-0 sudo[150130]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:07:40 compute-0 sudo[150130]: pam_unix(sudo:session): session closed for user root
Jan 20 14:07:40 compute-0 sudo[150155]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:07:40 compute-0 sudo[150155]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:07:40 compute-0 sudo[150155]: pam_unix(sudo:session): session closed for user root
Jan 20 14:07:40 compute-0 sudo[150180]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- lvm list --format json
Jan 20 14:07:40 compute-0 sudo[150180]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:07:41 compute-0 podman[150249]: 2026-01-20 14:07:41.051015086 +0000 UTC m=+0.065068494 container create 7d3126d1d2c2b4f8f53d2d1118cc854fb64f2a481a319a41cf5df801d8481d88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_chandrasekhar, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:07:41 compute-0 systemd[1]: Started libpod-conmon-7d3126d1d2c2b4f8f53d2d1118cc854fb64f2a481a319a41cf5df801d8481d88.scope.
Jan 20 14:07:41 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:07:41 compute-0 podman[150249]: 2026-01-20 14:07:41.02496566 +0000 UTC m=+0.039019158 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:07:41 compute-0 podman[150249]: 2026-01-20 14:07:41.131759393 +0000 UTC m=+0.145812891 container init 7d3126d1d2c2b4f8f53d2d1118cc854fb64f2a481a319a41cf5df801d8481d88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_chandrasekhar, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:07:41 compute-0 podman[150249]: 2026-01-20 14:07:41.138950898 +0000 UTC m=+0.153004336 container start 7d3126d1d2c2b4f8f53d2d1118cc854fb64f2a481a319a41cf5df801d8481d88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_chandrasekhar, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:07:41 compute-0 crazy_chandrasekhar[150266]: 167 167
Jan 20 14:07:41 compute-0 systemd[1]: libpod-7d3126d1d2c2b4f8f53d2d1118cc854fb64f2a481a319a41cf5df801d8481d88.scope: Deactivated successfully.
Jan 20 14:07:41 compute-0 podman[150249]: 2026-01-20 14:07:41.145538796 +0000 UTC m=+0.159592244 container attach 7d3126d1d2c2b4f8f53d2d1118cc854fb64f2a481a319a41cf5df801d8481d88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_chandrasekhar, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 20 14:07:41 compute-0 podman[150249]: 2026-01-20 14:07:41.146300627 +0000 UTC m=+0.160354075 container died 7d3126d1d2c2b4f8f53d2d1118cc854fb64f2a481a319a41cf5df801d8481d88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_chandrasekhar, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 20 14:07:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-46d88c388848c149d5ee5500754e9a88e20a0d1f6f38605d8a4908017d3b4ff2-merged.mount: Deactivated successfully.
Jan 20 14:07:41 compute-0 podman[150249]: 2026-01-20 14:07:41.190950146 +0000 UTC m=+0.205003584 container remove 7d3126d1d2c2b4f8f53d2d1118cc854fb64f2a481a319a41cf5df801d8481d88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_chandrasekhar, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:07:41 compute-0 systemd[1]: libpod-conmon-7d3126d1d2c2b4f8f53d2d1118cc854fb64f2a481a319a41cf5df801d8481d88.scope: Deactivated successfully.
Jan 20 14:07:41 compute-0 podman[150289]: 2026-01-20 14:07:41.433878295 +0000 UTC m=+0.078287501 container create 05e1cfcf710f1d0b5f9e4f8cf19cc9a1fca96154922f2d465ffdb3ed3ddd2296 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_einstein, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 20 14:07:41 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v475: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:07:41 compute-0 systemd[1]: Started libpod-conmon-05e1cfcf710f1d0b5f9e4f8cf19cc9a1fca96154922f2d465ffdb3ed3ddd2296.scope.
Jan 20 14:07:41 compute-0 podman[150289]: 2026-01-20 14:07:41.397785848 +0000 UTC m=+0.042195094 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:07:41 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:07:41 compute-0 systemd[1]: Stopping User Manager for UID 0...
Jan 20 14:07:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51f5c0b653f741b3548b91ced09eb1e9baabdeda2bc4b9eeaf4667647ab8d527/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:07:41 compute-0 systemd[148704]: Activating special unit Exit the Session...
Jan 20 14:07:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51f5c0b653f741b3548b91ced09eb1e9baabdeda2bc4b9eeaf4667647ab8d527/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:07:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51f5c0b653f741b3548b91ced09eb1e9baabdeda2bc4b9eeaf4667647ab8d527/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:07:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51f5c0b653f741b3548b91ced09eb1e9baabdeda2bc4b9eeaf4667647ab8d527/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:07:41 compute-0 systemd[148704]: Stopped target Main User Target.
Jan 20 14:07:41 compute-0 systemd[148704]: Stopped target Basic System.
Jan 20 14:07:41 compute-0 systemd[148704]: Stopped target Paths.
Jan 20 14:07:41 compute-0 systemd[148704]: Stopped target Sockets.
Jan 20 14:07:41 compute-0 systemd[148704]: Stopped target Timers.
Jan 20 14:07:41 compute-0 systemd[148704]: Stopped Daily Cleanup of User's Temporary Directories.
Jan 20 14:07:41 compute-0 systemd[148704]: Closed D-Bus User Message Bus Socket.
Jan 20 14:07:41 compute-0 systemd[148704]: Stopped Create User's Volatile Files and Directories.
Jan 20 14:07:41 compute-0 systemd[148704]: Removed slice User Application Slice.
Jan 20 14:07:41 compute-0 systemd[148704]: Reached target Shutdown.
Jan 20 14:07:41 compute-0 systemd[148704]: Finished Exit the Session.
Jan 20 14:07:41 compute-0 systemd[148704]: Reached target Exit the Session.
Jan 20 14:07:41 compute-0 systemd[1]: user@0.service: Deactivated successfully.
Jan 20 14:07:41 compute-0 systemd[1]: Stopped User Manager for UID 0.
Jan 20 14:07:41 compute-0 podman[150289]: 2026-01-20 14:07:41.535092817 +0000 UTC m=+0.179502053 container init 05e1cfcf710f1d0b5f9e4f8cf19cc9a1fca96154922f2d465ffdb3ed3ddd2296 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_einstein, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 20 14:07:41 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/0...
Jan 20 14:07:41 compute-0 podman[150289]: 2026-01-20 14:07:41.550786982 +0000 UTC m=+0.195196188 container start 05e1cfcf710f1d0b5f9e4f8cf19cc9a1fca96154922f2d465ffdb3ed3ddd2296 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_einstein, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 20 14:07:41 compute-0 systemd[1]: run-user-0.mount: Deactivated successfully.
Jan 20 14:07:41 compute-0 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Jan 20 14:07:41 compute-0 podman[150289]: 2026-01-20 14:07:41.555085868 +0000 UTC m=+0.199495074 container attach 05e1cfcf710f1d0b5f9e4f8cf19cc9a1fca96154922f2d465ffdb3ed3ddd2296 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_einstein, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 20 14:07:41 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/0.
Jan 20 14:07:41 compute-0 systemd[1]: Removed slice User Slice of UID 0.
Jan 20 14:07:41 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:07:41 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:07:41 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:07:41.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:07:42 compute-0 crazy_einstein[150305]: {
Jan 20 14:07:42 compute-0 crazy_einstein[150305]:     "0": [
Jan 20 14:07:42 compute-0 crazy_einstein[150305]:         {
Jan 20 14:07:42 compute-0 crazy_einstein[150305]:             "devices": [
Jan 20 14:07:42 compute-0 crazy_einstein[150305]:                 "/dev/loop3"
Jan 20 14:07:42 compute-0 crazy_einstein[150305]:             ],
Jan 20 14:07:42 compute-0 crazy_einstein[150305]:             "lv_name": "ceph_lv0",
Jan 20 14:07:42 compute-0 crazy_einstein[150305]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:07:42 compute-0 crazy_einstein[150305]:             "lv_size": "7511998464",
Jan 20 14:07:42 compute-0 crazy_einstein[150305]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e399cf45-e6b6-5393-99f1-75c601d3f188,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afc3740a-ccee-46f8-b343-22648cc89376,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 20 14:07:42 compute-0 crazy_einstein[150305]:             "lv_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 14:07:42 compute-0 crazy_einstein[150305]:             "name": "ceph_lv0",
Jan 20 14:07:42 compute-0 crazy_einstein[150305]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:07:42 compute-0 crazy_einstein[150305]:             "tags": {
Jan 20 14:07:42 compute-0 crazy_einstein[150305]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:07:42 compute-0 crazy_einstein[150305]:                 "ceph.block_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 14:07:42 compute-0 crazy_einstein[150305]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 14:07:42 compute-0 crazy_einstein[150305]:                 "ceph.cluster_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 14:07:42 compute-0 crazy_einstein[150305]:                 "ceph.cluster_name": "ceph",
Jan 20 14:07:42 compute-0 crazy_einstein[150305]:                 "ceph.crush_device_class": "",
Jan 20 14:07:42 compute-0 crazy_einstein[150305]:                 "ceph.encrypted": "0",
Jan 20 14:07:42 compute-0 crazy_einstein[150305]:                 "ceph.osd_fsid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 14:07:42 compute-0 crazy_einstein[150305]:                 "ceph.osd_id": "0",
Jan 20 14:07:42 compute-0 crazy_einstein[150305]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 14:07:42 compute-0 crazy_einstein[150305]:                 "ceph.type": "block",
Jan 20 14:07:42 compute-0 crazy_einstein[150305]:                 "ceph.vdo": "0"
Jan 20 14:07:42 compute-0 crazy_einstein[150305]:             },
Jan 20 14:07:42 compute-0 crazy_einstein[150305]:             "type": "block",
Jan 20 14:07:42 compute-0 crazy_einstein[150305]:             "vg_name": "ceph_vg0"
Jan 20 14:07:42 compute-0 crazy_einstein[150305]:         }
Jan 20 14:07:42 compute-0 crazy_einstein[150305]:     ]
Jan 20 14:07:42 compute-0 crazy_einstein[150305]: }
Jan 20 14:07:42 compute-0 systemd[1]: libpod-05e1cfcf710f1d0b5f9e4f8cf19cc9a1fca96154922f2d465ffdb3ed3ddd2296.scope: Deactivated successfully.
Jan 20 14:07:42 compute-0 podman[150289]: 2026-01-20 14:07:42.309283095 +0000 UTC m=+0.953692321 container died 05e1cfcf710f1d0b5f9e4f8cf19cc9a1fca96154922f2d465ffdb3ed3ddd2296 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_einstein, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 20 14:07:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-51f5c0b653f741b3548b91ced09eb1e9baabdeda2bc4b9eeaf4667647ab8d527-merged.mount: Deactivated successfully.
Jan 20 14:07:42 compute-0 podman[150289]: 2026-01-20 14:07:42.375124328 +0000 UTC m=+1.019533494 container remove 05e1cfcf710f1d0b5f9e4f8cf19cc9a1fca96154922f2d465ffdb3ed3ddd2296 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_einstein, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 20 14:07:42 compute-0 systemd[1]: libpod-conmon-05e1cfcf710f1d0b5f9e4f8cf19cc9a1fca96154922f2d465ffdb3ed3ddd2296.scope: Deactivated successfully.
Jan 20 14:07:42 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:07:42 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:07:42 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:07:42.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:07:42 compute-0 sudo[150180]: pam_unix(sudo:session): session closed for user root
Jan 20 14:07:42 compute-0 sudo[150331]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:07:42 compute-0 sudo[150331]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:07:42 compute-0 sudo[150331]: pam_unix(sudo:session): session closed for user root
Jan 20 14:07:42 compute-0 ceph-mon[74360]: pgmap v475: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:07:42 compute-0 sudo[150356]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:07:42 compute-0 sudo[150356]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:07:42 compute-0 sudo[150356]: pam_unix(sudo:session): session closed for user root
Jan 20 14:07:42 compute-0 sudo[150381]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:07:42 compute-0 sudo[150381]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:07:42 compute-0 sudo[150381]: pam_unix(sudo:session): session closed for user root
Jan 20 14:07:42 compute-0 sudo[150406]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- raw list --format json
Jan 20 14:07:42 compute-0 sudo[150406]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:07:43 compute-0 podman[150473]: 2026-01-20 14:07:43.017768344 +0000 UTC m=+0.052866573 container create da4809cbdaa2ed257c0913009d008eeabc30ec06f61a8d4768e8298f5aa23b7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_driscoll, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Jan 20 14:07:43 compute-0 systemd[1]: Started libpod-conmon-da4809cbdaa2ed257c0913009d008eeabc30ec06f61a8d4768e8298f5aa23b7f.scope.
Jan 20 14:07:43 compute-0 podman[150473]: 2026-01-20 14:07:42.990526995 +0000 UTC m=+0.025625314 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:07:43 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:07:43 compute-0 podman[150473]: 2026-01-20 14:07:43.103433704 +0000 UTC m=+0.138531963 container init da4809cbdaa2ed257c0913009d008eeabc30ec06f61a8d4768e8298f5aa23b7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_driscoll, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:07:43 compute-0 podman[150473]: 2026-01-20 14:07:43.110417263 +0000 UTC m=+0.145515482 container start da4809cbdaa2ed257c0913009d008eeabc30ec06f61a8d4768e8298f5aa23b7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_driscoll, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 20 14:07:43 compute-0 podman[150473]: 2026-01-20 14:07:43.11365447 +0000 UTC m=+0.148752729 container attach da4809cbdaa2ed257c0913009d008eeabc30ec06f61a8d4768e8298f5aa23b7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_driscoll, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:07:43 compute-0 determined_driscoll[150489]: 167 167
Jan 20 14:07:43 compute-0 systemd[1]: libpod-da4809cbdaa2ed257c0913009d008eeabc30ec06f61a8d4768e8298f5aa23b7f.scope: Deactivated successfully.
Jan 20 14:07:43 compute-0 podman[150473]: 2026-01-20 14:07:43.114887504 +0000 UTC m=+0.149985763 container died da4809cbdaa2ed257c0913009d008eeabc30ec06f61a8d4768e8298f5aa23b7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_driscoll, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 20 14:07:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-37a8bf00a4555b23c268c31b5498299ea6437993b42e71c10e11d3ff1f14592b-merged.mount: Deactivated successfully.
Jan 20 14:07:43 compute-0 podman[150473]: 2026-01-20 14:07:43.178961869 +0000 UTC m=+0.214060098 container remove da4809cbdaa2ed257c0913009d008eeabc30ec06f61a8d4768e8298f5aa23b7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_driscoll, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 20 14:07:43 compute-0 systemd[1]: libpod-conmon-da4809cbdaa2ed257c0913009d008eeabc30ec06f61a8d4768e8298f5aa23b7f.scope: Deactivated successfully.
Jan 20 14:07:43 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:07:43 compute-0 podman[150513]: 2026-01-20 14:07:43.320199505 +0000 UTC m=+0.037179339 container create 5bf4f51df10aa791f52756716ea33a9e72135d2abb67065589ba47653435bbd8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_kare, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 20 14:07:43 compute-0 systemd[1]: Started libpod-conmon-5bf4f51df10aa791f52756716ea33a9e72135d2abb67065589ba47653435bbd8.scope.
Jan 20 14:07:43 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:07:43 compute-0 podman[150513]: 2026-01-20 14:07:43.302822004 +0000 UTC m=+0.019801888 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:07:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e93ed895aef4f5539289fa6ccbb6fd869f3372fcb7ea523578eb28fa77ad36b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:07:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e93ed895aef4f5539289fa6ccbb6fd869f3372fcb7ea523578eb28fa77ad36b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:07:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e93ed895aef4f5539289fa6ccbb6fd869f3372fcb7ea523578eb28fa77ad36b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:07:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e93ed895aef4f5539289fa6ccbb6fd869f3372fcb7ea523578eb28fa77ad36b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:07:43 compute-0 podman[150513]: 2026-01-20 14:07:43.41639721 +0000 UTC m=+0.133377064 container init 5bf4f51df10aa791f52756716ea33a9e72135d2abb67065589ba47653435bbd8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_kare, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:07:43 compute-0 podman[150513]: 2026-01-20 14:07:43.426519643 +0000 UTC m=+0.143499477 container start 5bf4f51df10aa791f52756716ea33a9e72135d2abb67065589ba47653435bbd8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_kare, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:07:43 compute-0 podman[150513]: 2026-01-20 14:07:43.429243487 +0000 UTC m=+0.146223321 container attach 5bf4f51df10aa791f52756716ea33a9e72135d2abb67065589ba47653435bbd8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_kare, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 20 14:07:43 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v476: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:07:43 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:07:43 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:07:43 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:07:43.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:07:44 compute-0 sshd-session[150535]: Connection closed by authenticating user root 157.245.78.139 port 56676 [preauth]
Jan 20 14:07:44 compute-0 gifted_kare[150529]: {
Jan 20 14:07:44 compute-0 gifted_kare[150529]:     "afc3740a-ccee-46f8-b343-22648cc89376": {
Jan 20 14:07:44 compute-0 gifted_kare[150529]:         "ceph_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 14:07:44 compute-0 gifted_kare[150529]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 20 14:07:44 compute-0 gifted_kare[150529]:         "osd_id": 0,
Jan 20 14:07:44 compute-0 gifted_kare[150529]:         "osd_uuid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 14:07:44 compute-0 gifted_kare[150529]:         "type": "bluestore"
Jan 20 14:07:44 compute-0 gifted_kare[150529]:     }
Jan 20 14:07:44 compute-0 gifted_kare[150529]: }
Jan 20 14:07:44 compute-0 systemd[1]: libpod-5bf4f51df10aa791f52756716ea33a9e72135d2abb67065589ba47653435bbd8.scope: Deactivated successfully.
Jan 20 14:07:44 compute-0 podman[150513]: 2026-01-20 14:07:44.350842969 +0000 UTC m=+1.067822813 container died 5bf4f51df10aa791f52756716ea33a9e72135d2abb67065589ba47653435bbd8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_kare, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Jan 20 14:07:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-4e93ed895aef4f5539289fa6ccbb6fd869f3372fcb7ea523578eb28fa77ad36b-merged.mount: Deactivated successfully.
Jan 20 14:07:44 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:07:44 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:07:44 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:07:44.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:07:44 compute-0 podman[150513]: 2026-01-20 14:07:44.409339392 +0000 UTC m=+1.126319226 container remove 5bf4f51df10aa791f52756716ea33a9e72135d2abb67065589ba47653435bbd8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_kare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 14:07:44 compute-0 systemd[1]: libpod-conmon-5bf4f51df10aa791f52756716ea33a9e72135d2abb67065589ba47653435bbd8.scope: Deactivated successfully.
Jan 20 14:07:44 compute-0 sudo[150406]: pam_unix(sudo:session): session closed for user root
Jan 20 14:07:44 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 14:07:44 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:07:44 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 14:07:44 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:07:44 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev b8c952d4-f4df-41e9-8736-663157383e74 does not exist
Jan 20 14:07:44 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev b6cf62bf-61b0-4e44-8f06-079adfcc3da8 does not exist
Jan 20 14:07:44 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev beccdc5b-0650-43e8-9de1-b65b40a5e712 does not exist
Jan 20 14:07:44 compute-0 sudo[150567]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:07:44 compute-0 sudo[150567]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:07:44 compute-0 sudo[150567]: pam_unix(sudo:session): session closed for user root
Jan 20 14:07:44 compute-0 ceph-mon[74360]: pgmap v476: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:07:44 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:07:44 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:07:44 compute-0 sudo[150592]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 14:07:44 compute-0 sudo[150592]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:07:44 compute-0 sudo[150592]: pam_unix(sudo:session): session closed for user root
Jan 20 14:07:44 compute-0 sshd-session[150617]: Accepted publickey for zuul from 192.168.122.30 port 40328 ssh2: ECDSA SHA256:Yw0kyD5N4lqNgr1J3b5cYIIxKFrTRY8zW6kk+n6imz4
Jan 20 14:07:44 compute-0 systemd-logind[796]: New session 48 of user zuul.
Jan 20 14:07:44 compute-0 systemd[1]: Started Session 48 of User zuul.
Jan 20 14:07:44 compute-0 sshd-session[150617]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 20 14:07:45 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v477: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:07:45 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:07:45 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:07:45 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:07:45.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:07:46 compute-0 python3.9[150771]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 14:07:46 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:07:46 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:07:46 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:07:46.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:07:46 compute-0 ceph-mon[74360]: pgmap v477: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:07:47 compute-0 sudo[150925]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-twjnakhbgxmjpvfolrmygunwkhvfvqmj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918066.7182465-62-188839311618666/AnsiballZ_file.py'
Jan 20 14:07:47 compute-0 sudo[150925]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:07:47 compute-0 python3.9[150927]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/openstack/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:07:47 compute-0 sudo[150925]: pam_unix(sudo:session): session closed for user root
Jan 20 14:07:47 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v478: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:07:47 compute-0 sudo[151078]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wsnaeqxpsopvdfgwsiviflsbifupcnvu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918067.63708-62-185055261461846/AnsiballZ_file.py'
Jan 20 14:07:47 compute-0 sudo[151078]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:07:47 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:07:47 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:07:47 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:07:47.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:07:48 compute-0 python3.9[151080]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:07:48 compute-0 sudo[151078]: pam_unix(sudo:session): session closed for user root
Jan 20 14:07:48 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:07:48 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:07:48 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:07:48 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:07:48.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:07:48 compute-0 sudo[151230]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hogeckldxjlmzxovtziftwesfelzwnes ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918068.250842-62-241779191875647/AnsiballZ_file.py'
Jan 20 14:07:48 compute-0 sudo[151230]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:07:48 compute-0 ceph-mon[74360]: pgmap v478: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:07:48 compute-0 python3.9[151232]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:07:48 compute-0 sudo[151230]: pam_unix(sudo:session): session closed for user root
Jan 20 14:07:49 compute-0 sudo[151382]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rpflaiglhvrqkidrcphkyauedudurrxw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918068.9215333-62-44932372156793/AnsiballZ_file.py'
Jan 20 14:07:49 compute-0 sudo[151382]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:07:49 compute-0 python3.9[151384]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:07:49 compute-0 sudo[151382]: pam_unix(sudo:session): session closed for user root
Jan 20 14:07:49 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v479: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:07:49 compute-0 sudo[151536]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nwzlrbkzlocnhfogbrpfbyhfggggtqqf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918069.6094813-62-46878780508882/AnsiballZ_file.py'
Jan 20 14:07:49 compute-0 sudo[151536]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:07:49 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:07:49 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:07:49 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:07:49.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:07:50 compute-0 python3.9[151538]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:07:50 compute-0 sudo[151536]: pam_unix(sudo:session): session closed for user root
Jan 20 14:07:50 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:07:50 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:07:50 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:07:50.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:07:50 compute-0 ceph-mon[74360]: pgmap v479: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:07:51 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v480: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:07:51 compute-0 python3.9[151688]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 14:07:51 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:07:51 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:07:51 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:07:51.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:07:52 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:07:52 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:07:52 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:07:52.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:07:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Optimize plan auto_2026-01-20_14:07:52
Jan 20 14:07:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 14:07:52 compute-0 ceph-mgr[74653]: [balancer INFO root] do_upmap
Jan 20 14:07:52 compute-0 ceph-mgr[74653]: [balancer INFO root] pools ['volumes', '.rgw.root', 'images', 'vms', 'default.rgw.control', '.mgr', 'default.rgw.meta', 'default.rgw.log', 'cephfs.cephfs.data', 'backups', 'cephfs.cephfs.meta']
Jan 20 14:07:52 compute-0 ceph-mgr[74653]: [balancer INFO root] prepared 0/10 changes
Jan 20 14:07:52 compute-0 sudo[151839]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fmaukfmeurjpqmhrgmkxxvbujpbyebyt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918071.990917-194-230576529497215/AnsiballZ_seboolean.py'
Jan 20 14:07:52 compute-0 sudo[151839]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:07:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:07:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:07:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:07:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:07:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:07:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:07:52 compute-0 ceph-mon[74360]: pgmap v480: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:07:52 compute-0 python3.9[151841]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Jan 20 14:07:53 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:07:53 compute-0 sudo[151839]: pam_unix(sudo:session): session closed for user root
Jan 20 14:07:53 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v481: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:07:53 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:07:53 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:07:54 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:07:53.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:07:54 compute-0 python3.9[151992]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:07:54 compute-0 sudo[151993]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:07:54 compute-0 sudo[151993]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:07:54 compute-0 sudo[151993]: pam_unix(sudo:session): session closed for user root
Jan 20 14:07:54 compute-0 sudo[152018]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:07:54 compute-0 sudo[152018]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:07:54 compute-0 sudo[152018]: pam_unix(sudo:session): session closed for user root
Jan 20 14:07:54 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:07:54 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:07:54 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:07:54.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:07:54 compute-0 ceph-mon[74360]: pgmap v481: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:07:54 compute-0 python3.9[152163]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1768918073.5927942-218-50477832821858/.source follow=False _original_basename=haproxy.j2 checksum=a5072e7b19ca96a1f495d94f97f31903737cfd27 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:07:55 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v482: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:07:55 compute-0 python3.9[152313]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:07:56 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:07:56 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:07:56 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:07:56.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:07:56 compute-0 python3.9[152435]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1768918075.2699096-263-180794676481718/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:07:56 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:07:56 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:07:56 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:07:56.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:07:56 compute-0 ceph-mon[74360]: pgmap v482: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:07:57 compute-0 sudo[152585]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-svarjlwyweapurfecjamhnlzilykippl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918076.867973-314-268396348169896/AnsiballZ_setup.py'
Jan 20 14:07:57 compute-0 sudo[152585]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:07:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 14:07:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:07:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 14:07:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:07:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:07:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:07:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:07:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:07:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:07:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:07:57 compute-0 python3.9[152587]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 20 14:07:57 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v483: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:07:57 compute-0 sudo[152585]: pam_unix(sudo:session): session closed for user root
Jan 20 14:07:58 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:07:58 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:07:58 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:07:58.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:07:58 compute-0 sudo[152670]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-utulifbueogiwppyllrvliujnnmhdcec ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918076.867973-314-268396348169896/AnsiballZ_dnf.py'
Jan 20 14:07:58 compute-0 sudo[152670]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:07:58 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:07:58 compute-0 python3.9[152672]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 20 14:07:58 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:07:58 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:07:58 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:07:58.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:07:59 compute-0 ceph-mon[74360]: pgmap v483: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:07:59 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v484: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:07:59 compute-0 sudo[152670]: pam_unix(sudo:session): session closed for user root
Jan 20 14:08:00 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:08:00 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:08:00 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:08:00.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:08:00 compute-0 ceph-mon[74360]: pgmap v484: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:08:00 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:08:00 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:08:00 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:08:00.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:08:00 compute-0 sudo[152824]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zyrbxmwrtlgqucguytyblgeqrevmwfan ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918080.2855954-350-218797044512092/AnsiballZ_systemd.py'
Jan 20 14:08:00 compute-0 sudo[152824]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:08:01 compute-0 python3.9[152826]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 20 14:08:01 compute-0 sudo[152824]: pam_unix(sudo:session): session closed for user root
Jan 20 14:08:01 compute-0 ovn_controller[148666]: 2026-01-20T14:08:01Z|00025|memory|INFO|16256 kB peak resident set size after 29.7 seconds
Jan 20 14:08:01 compute-0 ovn_controller[148666]: 2026-01-20T14:08:01Z|00026|memory|INFO|idl-cells-OVN_Southbound:273 idl-cells-Open_vSwitch:642 ofctrl_desired_flow_usage-KB:7 ofctrl_installed_flow_usage-KB:5 ofctrl_sb_flow_ref_usage-KB:3
Jan 20 14:08:01 compute-0 podman[152828]: 2026-01-20 14:08:01.259295967 +0000 UTC m=+0.083711258 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Jan 20 14:08:01 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v485: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:08:01 compute-0 anacron[102472]: Job `cron.daily' started
Jan 20 14:08:01 compute-0 anacron[102472]: Job `cron.daily' terminated
Jan 20 14:08:02 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:08:02 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:08:02 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:08:02.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:08:02 compute-0 python3.9[153006]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:08:02 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:08:02 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:08:02 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:08:02.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:08:02 compute-0 ceph-mon[74360]: pgmap v485: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:08:02 compute-0 python3.9[153127]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1768918081.5915623-374-116101673509364/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:08:03 compute-0 python3.9[153277]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:08:03 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:08:03 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v486: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:08:03 compute-0 python3.9[153398]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1768918082.7584267-374-253451360544095/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:08:04 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:08:04 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:08:04 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:08:04.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:08:04 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:08:04 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:08:04 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:08:04.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:08:04 compute-0 ceph-mon[74360]: pgmap v486: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:08:05 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v487: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:08:05 compute-0 python3.9[153549]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:08:06 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:08:06 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:08:06 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:08:06.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:08:06 compute-0 python3.9[153671]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1768918085.1998537-506-65023345428680/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:08:06 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:08:06 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:08:06 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:08:06.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:08:06 compute-0 python3.9[153821]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:08:06 compute-0 ceph-mon[74360]: pgmap v487: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:08:07 compute-0 python3.9[153942]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1768918086.3609383-506-68705022337601/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:08:07 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v488: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:08:08 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:08:08 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:08:08 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:08:08.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:08:08 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:08:08 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:08:08 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:08:08 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:08:08.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:08:08 compute-0 python3.9[154093]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 14:08:09 compute-0 ceph-mon[74360]: pgmap v488: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:08:09 compute-0 sudo[154245]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ueblkdtpszzjeohxgkdzorehavskpgog ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918088.8432775-620-100246789704335/AnsiballZ_file.py'
Jan 20 14:08:09 compute-0 sudo[154245]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:08:09 compute-0 python3.9[154247]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:08:09 compute-0 sudo[154245]: pam_unix(sudo:session): session closed for user root
Jan 20 14:08:09 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v489: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:08:10 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:08:10 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:08:10 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:08:10.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:08:10 compute-0 sudo[154398]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vrgyguzpmxkmdzlbklbalsmuqsdjjhyk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918089.7197587-644-270956421677916/AnsiballZ_stat.py'
Jan 20 14:08:10 compute-0 sudo[154398]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:08:10 compute-0 python3.9[154400]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:08:10 compute-0 sudo[154398]: pam_unix(sudo:session): session closed for user root
Jan 20 14:08:10 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:08:10 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:08:10 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:08:10.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:08:10 compute-0 sudo[154476]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dekrpjzxcnlhjeapxlgbodvlgooagugg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918089.7197587-644-270956421677916/AnsiballZ_file.py'
Jan 20 14:08:10 compute-0 sudo[154476]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:08:10 compute-0 python3.9[154478]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:08:10 compute-0 sudo[154476]: pam_unix(sudo:session): session closed for user root
Jan 20 14:08:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 14:08:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:08:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 20 14:08:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:08:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:08:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:08:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:08:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:08:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:08:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:08:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:08:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:08:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 20 14:08:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:08:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:08:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:08:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 20 14:08:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:08:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 20 14:08:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:08:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:08:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:08:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 20 14:08:11 compute-0 sudo[154628]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sgqjxxwzotfmymabejgmiotdkwleoguq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918090.8377616-644-196498551539905/AnsiballZ_stat.py'
Jan 20 14:08:11 compute-0 sudo[154628]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:08:11 compute-0 ceph-mon[74360]: pgmap v489: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:08:11 compute-0 python3.9[154630]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:08:11 compute-0 sudo[154628]: pam_unix(sudo:session): session closed for user root
Jan 20 14:08:11 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v490: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:08:11 compute-0 sudo[154706]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ciaujsjbpdlsnwuwrdgcwlhudgbpzgso ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918090.8377616-644-196498551539905/AnsiballZ_file.py'
Jan 20 14:08:11 compute-0 sudo[154706]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:08:11 compute-0 python3.9[154708]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:08:11 compute-0 sudo[154706]: pam_unix(sudo:session): session closed for user root
Jan 20 14:08:12 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:08:12 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:08:12 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:08:12.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:08:12 compute-0 ceph-mon[74360]: pgmap v490: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:08:12 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:08:12 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:08:12 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:08:12.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:08:12 compute-0 sudo[154859]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ntotitttmlapbckvskcqpeiaaoqjnurv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918092.2122648-713-121614337422360/AnsiballZ_file.py'
Jan 20 14:08:12 compute-0 sudo[154859]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:08:12 compute-0 python3.9[154861]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:08:12 compute-0 sudo[154859]: pam_unix(sudo:session): session closed for user root
Jan 20 14:08:13 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:08:13 compute-0 sudo[155011]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cnelbctzeoquzoilgxpaoduzppvmtrcr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918093.0186992-737-262886569318384/AnsiballZ_stat.py'
Jan 20 14:08:13 compute-0 sudo[155011]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:08:13 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v491: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:08:13 compute-0 python3.9[155013]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:08:13 compute-0 sudo[155011]: pam_unix(sudo:session): session closed for user root
Jan 20 14:08:13 compute-0 sudo[155090]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aouxbhbkmfbnlydlrdavfrznpmbzznyu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918093.0186992-737-262886569318384/AnsiballZ_file.py'
Jan 20 14:08:13 compute-0 sudo[155090]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:08:13 compute-0 python3.9[155092]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:08:13 compute-0 sudo[155090]: pam_unix(sudo:session): session closed for user root
Jan 20 14:08:14 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:08:14 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:08:14 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:08:14.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:08:14 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:08:14 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:08:14 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:08:14.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:08:14 compute-0 sudo[155141]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:08:14 compute-0 sudo[155141]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:08:14 compute-0 sudo[155141]: pam_unix(sudo:session): session closed for user root
Jan 20 14:08:14 compute-0 sudo[155194]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:08:14 compute-0 sudo[155194]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:08:14 compute-0 sudo[155194]: pam_unix(sudo:session): session closed for user root
Jan 20 14:08:14 compute-0 sudo[155292]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rwdfxyizhjkxsurjqljcyozjdiogkird ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918094.3826802-773-270266010480573/AnsiballZ_stat.py'
Jan 20 14:08:14 compute-0 sudo[155292]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:08:14 compute-0 ceph-mon[74360]: pgmap v491: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:08:14 compute-0 python3.9[155294]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:08:14 compute-0 sudo[155292]: pam_unix(sudo:session): session closed for user root
Jan 20 14:08:15 compute-0 sudo[155370]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pgspknzcclctwdzaffgzgjxdbnguipay ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918094.3826802-773-270266010480573/AnsiballZ_file.py'
Jan 20 14:08:15 compute-0 sudo[155370]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:08:15 compute-0 python3.9[155372]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:08:15 compute-0 sudo[155370]: pam_unix(sudo:session): session closed for user root
Jan 20 14:08:15 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v492: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:08:15 compute-0 sudo[155523]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ivwqizigrltitpzrequyslvipvrqqdpf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918095.5015337-809-69034738465441/AnsiballZ_systemd.py'
Jan 20 14:08:15 compute-0 sudo[155523]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:08:16 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:08:16 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:08:16 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:08:16.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:08:16 compute-0 python3.9[155525]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 14:08:16 compute-0 systemd[1]: Reloading.
Jan 20 14:08:16 compute-0 systemd-rc-local-generator[155549]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:08:16 compute-0 systemd-sysv-generator[155553]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:08:16 compute-0 sudo[155523]: pam_unix(sudo:session): session closed for user root
Jan 20 14:08:16 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:08:16 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:08:16 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:08:16.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:08:16 compute-0 ceph-mon[74360]: pgmap v492: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:08:17 compute-0 sudo[155712]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qjehewoeefjxerpklbwjsrwygfaepllx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918096.811592-833-199639548145406/AnsiballZ_stat.py'
Jan 20 14:08:17 compute-0 sudo[155712]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:08:17 compute-0 python3.9[155714]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:08:17 compute-0 sudo[155712]: pam_unix(sudo:session): session closed for user root
Jan 20 14:08:17 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v493: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:08:17 compute-0 sudo[155790]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zshrlofffdaubwlfgntrcgowbctcqfgm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918096.811592-833-199639548145406/AnsiballZ_file.py'
Jan 20 14:08:17 compute-0 sudo[155790]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:08:17 compute-0 python3.9[155792]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:08:17 compute-0 sudo[155790]: pam_unix(sudo:session): session closed for user root
Jan 20 14:08:18 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:08:18 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:08:18 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:08:18.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:08:18 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:08:18 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:08:18 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:08:18 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:08:18.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:08:18 compute-0 ceph-mon[74360]: pgmap v493: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:08:19 compute-0 sudo[155943]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uodoauzhmayzdifzahuyurufqcffezvo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918098.222265-869-96889179633862/AnsiballZ_stat.py'
Jan 20 14:08:19 compute-0 sudo[155943]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:08:19 compute-0 python3.9[155945]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:08:19 compute-0 sudo[155943]: pam_unix(sudo:session): session closed for user root
Jan 20 14:08:19 compute-0 sudo[156021]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iucritiizleaajhgmzokitfibucdmffa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918098.222265-869-96889179633862/AnsiballZ_file.py'
Jan 20 14:08:19 compute-0 sudo[156021]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:08:19 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v494: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:08:19 compute-0 python3.9[156023]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:08:19 compute-0 sudo[156021]: pam_unix(sudo:session): session closed for user root
Jan 20 14:08:20 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:08:20 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:08:20 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:08:20.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:08:20 compute-0 sudo[156174]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mureqqgdutwwnxlsjysssrifjygkwnuj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918099.8656259-905-146628168513450/AnsiballZ_systemd.py'
Jan 20 14:08:20 compute-0 sudo[156174]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:08:20 compute-0 ceph-mon[74360]: pgmap v494: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:08:20 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:08:20 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:08:20 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:08:20.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:08:20 compute-0 python3.9[156176]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 14:08:20 compute-0 systemd[1]: Reloading.
Jan 20 14:08:20 compute-0 systemd-rc-local-generator[156203]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:08:20 compute-0 systemd-sysv-generator[156206]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:08:20 compute-0 systemd[1]: Starting Create netns directory...
Jan 20 14:08:21 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 20 14:08:21 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 20 14:08:21 compute-0 systemd[1]: Finished Create netns directory.
Jan 20 14:08:21 compute-0 sudo[156174]: pam_unix(sudo:session): session closed for user root
Jan 20 14:08:21 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v495: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:08:21 compute-0 sudo[156369]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pdddwlausdmpcsoshuxrocrmvjgprcyv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918101.3970304-935-39713239932108/AnsiballZ_file.py'
Jan 20 14:08:21 compute-0 sudo[156369]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:08:21 compute-0 python3.9[156371]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:08:21 compute-0 sudo[156369]: pam_unix(sudo:session): session closed for user root
Jan 20 14:08:22 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:08:22 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:08:22 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:08:22.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:08:22 compute-0 ceph-mon[74360]: pgmap v495: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:08:22 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:08:22 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:08:22 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:08:22.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:08:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:08:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:08:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:08:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:08:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:08:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:08:22 compute-0 sudo[156521]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eqggyqgtijkoodfimtnjycmglifcvahu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918102.2281322-959-251005446134994/AnsiballZ_stat.py'
Jan 20 14:08:22 compute-0 sudo[156521]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:08:22 compute-0 python3.9[156523]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:08:22 compute-0 sudo[156521]: pam_unix(sudo:session): session closed for user root
Jan 20 14:08:23 compute-0 sudo[156644]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vjulxibbwayvndcbqevmysvcqgrnnjom ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918102.2281322-959-251005446134994/AnsiballZ_copy.py'
Jan 20 14:08:23 compute-0 sudo[156644]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:08:23 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:08:23 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v496: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:08:23 compute-0 python3.9[156646]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1768918102.2281322-959-251005446134994/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:08:23 compute-0 sudo[156644]: pam_unix(sudo:session): session closed for user root
Jan 20 14:08:24 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:08:24 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:08:24 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:08:24.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:08:24 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:08:24 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:08:24 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:08:24.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:08:24 compute-0 sudo[156797]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tgkszvlfddqluklpohymauqkhbbojiys ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918104.069206-1010-119292851297047/AnsiballZ_file.py'
Jan 20 14:08:24 compute-0 sudo[156797]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:08:24 compute-0 python3.9[156799]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:08:24 compute-0 sudo[156797]: pam_unix(sudo:session): session closed for user root
Jan 20 14:08:24 compute-0 ceph-mon[74360]: pgmap v496: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:08:25 compute-0 sudo[156949]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-poppeuluxcaxrblcaanfbpsqwjuzjlpt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918104.8908906-1034-58472937083187/AnsiballZ_file.py'
Jan 20 14:08:25 compute-0 sudo[156949]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:08:25 compute-0 python3.9[156951]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:08:25 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v497: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:08:25 compute-0 sudo[156949]: pam_unix(sudo:session): session closed for user root
Jan 20 14:08:26 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:08:26 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:08:26 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:08:26.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:08:26 compute-0 sudo[157102]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-grbijcyfzoobyqiynfeohqvitekrrzdg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918105.7226064-1058-112337016466014/AnsiballZ_stat.py'
Jan 20 14:08:26 compute-0 sudo[157102]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:08:26 compute-0 python3.9[157104]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:08:26 compute-0 sudo[157102]: pam_unix(sudo:session): session closed for user root
Jan 20 14:08:26 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:08:26 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:08:26 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:08:26.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:08:26 compute-0 sudo[157225]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-medlwvtffrujivouqvixbjchpipryhtx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918105.7226064-1058-112337016466014/AnsiballZ_copy.py'
Jan 20 14:08:26 compute-0 sudo[157225]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:08:26 compute-0 ceph-mon[74360]: pgmap v497: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:08:26 compute-0 python3.9[157227]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1768918105.7226064-1058-112337016466014/.source.json _original_basename=.odxoveya follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:08:26 compute-0 sudo[157225]: pam_unix(sudo:session): session closed for user root
Jan 20 14:08:27 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v498: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:08:27 compute-0 python3.9[157377]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:08:28 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:08:28 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:08:28 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:08:28.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:08:28 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:08:28 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:08:28 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:08:28 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:08:28.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:08:29 compute-0 ceph-mon[74360]: pgmap v498: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:08:29 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v499: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:08:29 compute-0 sshd-session[157650]: Connection closed by authenticating user root 157.245.78.139 port 47728 [preauth]
Jan 20 14:08:30 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:08:30 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:08:30 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:08:30.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:08:30 compute-0 sudo[157802]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jkapkjimkvvlaabimvqwwhhzxzprstqs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918109.7003644-1178-155808131400087/AnsiballZ_container_config_data.py'
Jan 20 14:08:30 compute-0 sudo[157802]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:08:30 compute-0 python3.9[157804]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Jan 20 14:08:30 compute-0 sudo[157802]: pam_unix(sudo:session): session closed for user root
Jan 20 14:08:30 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:08:30 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:08:30 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:08:30.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:08:30 compute-0 ceph-mon[74360]: pgmap v499: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:08:31 compute-0 sudo[157954]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vxdfuaszfximbjjwogewdutvwgvxsuzi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918110.771894-1211-266523959939303/AnsiballZ_container_config_hash.py'
Jan 20 14:08:31 compute-0 sudo[157954]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:08:31 compute-0 python3.9[157956]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 20 14:08:31 compute-0 sudo[157954]: pam_unix(sudo:session): session closed for user root
Jan 20 14:08:31 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v500: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:08:31 compute-0 podman[157957]: 2026-01-20 14:08:31.529191944 +0000 UTC m=+0.106919466 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 20 14:08:32 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:08:32 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:08:32 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:08:32.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:08:32 compute-0 sudo[158133]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hpfxvrdkcljgeaeiseaqdgqibsuyyurf ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1768918111.9125984-1241-213665964595072/AnsiballZ_edpm_container_manage.py'
Jan 20 14:08:32 compute-0 sudo[158133]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:08:32 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:08:32 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:08:32 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:08:32.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:08:32 compute-0 python3[158135]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json containers=['ovn_metadata_agent'] log_base_path=/var/log/containers/stdouts debug=False
Jan 20 14:08:33 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:08:33 compute-0 ceph-mon[74360]: pgmap v500: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:08:33 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v501: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:08:34 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:08:34 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:08:34 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:08:34.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:08:34 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:08:34 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:08:34 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:08:34.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:08:34 compute-0 ceph-mon[74360]: pgmap v501: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:08:34 compute-0 sudo[158182]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:08:34 compute-0 sudo[158182]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:08:34 compute-0 sudo[158182]: pam_unix(sudo:session): session closed for user root
Jan 20 14:08:34 compute-0 sudo[158222]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:08:34 compute-0 sudo[158222]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:08:34 compute-0 sudo[158222]: pam_unix(sudo:session): session closed for user root
Jan 20 14:08:35 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v502: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:08:36 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:08:36 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:08:36 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:08:36.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:08:36 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:08:36 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:08:36 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:08:36.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:08:37 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v503: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:08:38 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:08:38 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:08:38 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:08:38.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:08:38 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:08:38 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:08:38 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:08:38 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:08:38.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:08:39 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v504: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:08:39 compute-0 ceph-mon[74360]: pgmap v502: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:08:39 compute-0 ceph-mon[74360]: pgmap v503: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:08:40 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:08:40 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:08:40 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:08:40.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:08:40 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:08:40 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:08:40 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:08:40.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:08:41 compute-0 ceph-mon[74360]: pgmap v504: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:08:41 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v505: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:08:42 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:08:42 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:08:42 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:08:42.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:08:42 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:08:42 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:08:42 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:08:42.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:08:43 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v506: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:08:44 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:08:44 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:08:44 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:08:44.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:08:44 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:08:44 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:08:44 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:08:44.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:08:45 compute-0 sudo[158284]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:08:45 compute-0 sudo[158284]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:08:45 compute-0 sudo[158284]: pam_unix(sudo:session): session closed for user root
Jan 20 14:08:45 compute-0 sudo[158309]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:08:45 compute-0 sudo[158309]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:08:45 compute-0 sudo[158309]: pam_unix(sudo:session): session closed for user root
Jan 20 14:08:45 compute-0 sudo[158334]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:08:45 compute-0 sudo[158334]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:08:45 compute-0 sudo[158334]: pam_unix(sudo:session): session closed for user root
Jan 20 14:08:45 compute-0 sudo[158359]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 14:08:45 compute-0 sudo[158359]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:08:45 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v507: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:08:45 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:08:46 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Jan 20 14:08:46 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 20 14:08:46 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:08:46 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:08:46 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:08:46.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:08:46 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:08:46 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:08:46 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:08:46.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:08:47 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v508: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:08:48 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:08:48 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:08:48 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:08:48.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:08:48 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:08:48 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:08:48 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:08:48.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:08:49 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v509: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:08:50 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:08:50 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:08:50 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:08:50.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:08:50 compute-0 radosgw[93153]: INFO: RGWReshardLock::lock found lock on reshard.0000000001 to be held by another RGW process; skipping for now
Jan 20 14:08:50 compute-0 ceph-mon[74360]: pgmap v505: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:08:50 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:08:50 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:08:50 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:08:50.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:08:50 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:08:51 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v510: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 1.5 KiB/s rd, 0 B/s wr, 2 op/s
Jan 20 14:08:52 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:08:52 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:08:52 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:08:52.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:08:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Optimize plan auto_2026-01-20_14:08:52
Jan 20 14:08:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 14:08:52 compute-0 ceph-mgr[74653]: [balancer INFO root] do_upmap
Jan 20 14:08:52 compute-0 ceph-mgr[74653]: [balancer INFO root] pools ['.rgw.root', 'vms', 'default.rgw.meta', 'backups', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.log', 'volumes', 'cephfs.cephfs.data', 'default.rgw.control', 'images']
Jan 20 14:08:52 compute-0 ceph-mgr[74653]: [balancer INFO root] prepared 0/10 changes
Jan 20 14:08:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:08:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:08:52 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:08:52 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:08:52 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:08:52.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:08:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:08:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:08:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:08:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:08:53 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v511: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 1.5 KiB/s rd, 0 B/s wr, 2 op/s
Jan 20 14:08:54 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:08:54 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:08:54 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:08:54.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:08:54 compute-0 sudo[158359]: pam_unix(sudo:session): session closed for user root
Jan 20 14:08:54 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 14:08:54 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:08:54 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 20 14:08:54 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:08:54 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 20 14:08:54 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:08:54 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:08:54 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:08:54.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:08:54 compute-0 ceph-mon[74360]: pgmap v506: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:08:54 compute-0 ceph-mon[74360]: pgmap v507: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:08:54 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 20 14:08:54 compute-0 ceph-mon[74360]: pgmap v508: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:08:54 compute-0 ceph-mon[74360]: pgmap v509: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:08:54 compute-0 sudo[158453]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:08:54 compute-0 sudo[158453]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:08:54 compute-0 sudo[158453]: pam_unix(sudo:session): session closed for user root
Jan 20 14:08:54 compute-0 sudo[158478]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:08:54 compute-0 sudo[158478]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:08:54 compute-0 sudo[158478]: pam_unix(sudo:session): session closed for user root
Jan 20 14:08:55 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v512: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 4.0 KiB/s rd, 0 B/s wr, 6 op/s
Jan 20 14:08:55 compute-0 radosgw[93153]: INFO: RGWReshardLock::lock found lock on reshard.0000000003 to be held by another RGW process; skipping for now
Jan 20 14:08:56 compute-0 radosgw[93153]: INFO: RGWReshardLock::lock found lock on reshard.0000000005 to be held by another RGW process; skipping for now
Jan 20 14:08:56 compute-0 radosgw[93153]: INFO: RGWReshardLock::lock found lock on reshard.0000000007 to be held by another RGW process; skipping for now
Jan 20 14:08:56 compute-0 radosgw[93153]: INFO: RGWReshardLock::lock found lock on reshard.0000000009 to be held by another RGW process; skipping for now
Jan 20 14:08:56 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:08:56 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:08:56 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:08:56.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:08:56 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:08:56 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:08:56 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:08:56.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:08:56 compute-0 sshd-session[158415]: Invalid user ubuntu from 36.137.141.10 port 40540
Jan 20 14:08:57 compute-0 sshd-session[158415]: Received disconnect from 36.137.141.10 port 40540:11:  [preauth]
Jan 20 14:08:57 compute-0 sshd-session[158415]: Disconnected from invalid user ubuntu 36.137.141.10 port 40540 [preauth]
Jan 20 14:08:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 14:08:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:08:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 14:08:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:08:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:08:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:08:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:08:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:08:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:08:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:08:57 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v513: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 4.5 KiB/s rd, 0 B/s wr, 7 op/s
Jan 20 14:08:57 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:08:58 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:08:58 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:08:58 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:08:58.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:08:58 compute-0 radosgw[93153]: INFO: RGWReshardLock::lock found lock on reshard.0000000010 to be held by another RGW process; skipping for now
Jan 20 14:08:58 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:08:58 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:08:58 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:08:58.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:08:58 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:08:58 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 1d0dd033-bb67-4894-b58d-ebc5c6675bf3 does not exist
Jan 20 14:08:58 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 3747243b-ad4a-4032-8e83-1a12cfa32fcb does not exist
Jan 20 14:08:58 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 5edbfc8f-97b0-4188-a55b-fe53e6ee30c4 does not exist
Jan 20 14:08:58 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 20 14:08:58 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:08:58 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 20 14:08:58 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:08:58 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 14:08:58 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:08:58 compute-0 sudo[158505]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:08:58 compute-0 sudo[158505]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:08:58 compute-0 sudo[158505]: pam_unix(sudo:session): session closed for user root
Jan 20 14:08:58 compute-0 sudo[158530]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:08:58 compute-0 sudo[158530]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:08:58 compute-0 sudo[158530]: pam_unix(sudo:session): session closed for user root
Jan 20 14:08:58 compute-0 sudo[158555]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:08:58 compute-0 sudo[158555]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:08:58 compute-0 sudo[158555]: pam_unix(sudo:session): session closed for user root
Jan 20 14:08:59 compute-0 sudo[158580]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 14:08:59 compute-0 sudo[158580]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:08:59 compute-0 ceph-mon[74360]: pgmap v510: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 1.5 KiB/s rd, 0 B/s wr, 2 op/s
Jan 20 14:08:59 compute-0 ceph-mon[74360]: pgmap v511: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 1.5 KiB/s rd, 0 B/s wr, 2 op/s
Jan 20 14:08:59 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:08:59 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:08:59 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v514: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 5.4 KiB/s rd, 0 B/s wr, 8 op/s
Jan 20 14:08:59 compute-0 podman[158148]: 2026-01-20 14:08:59.835511919 +0000 UTC m=+27.111680181 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 14:09:00 compute-0 podman[158658]: 2026-01-20 14:09:00.006216312 +0000 UTC m=+0.116063314 container create 7fbde7c14c5f705e8b61cbde27bf0d23b4505d66639555ba5e9f0cb6bbda0a26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_poincare, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:09:00 compute-0 podman[158658]: 2026-01-20 14:08:59.915246689 +0000 UTC m=+0.025093701 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:09:00 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:09:00 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:09:00 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:09:00.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:09:00 compute-0 systemd[1]: Started libpod-conmon-7fbde7c14c5f705e8b61cbde27bf0d23b4505d66639555ba5e9f0cb6bbda0a26.scope.
Jan 20 14:09:00 compute-0 ceph-mon[74360]: pgmap v512: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 4.0 KiB/s rd, 0 B/s wr, 6 op/s
Jan 20 14:09:00 compute-0 ceph-mon[74360]: pgmap v513: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 4.5 KiB/s rd, 0 B/s wr, 7 op/s
Jan 20 14:09:00 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:09:00 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:09:00 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:09:00 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:09:00 compute-0 ceph-mon[74360]: pgmap v514: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 5.4 KiB/s rd, 0 B/s wr, 8 op/s
Jan 20 14:09:00 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:09:00 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:09:00 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:09:00 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:09:00.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:09:00 compute-0 podman[158658]: 2026-01-20 14:09:00.491745162 +0000 UTC m=+0.601592214 container init 7fbde7c14c5f705e8b61cbde27bf0d23b4505d66639555ba5e9f0cb6bbda0a26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_poincare, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 20 14:09:00 compute-0 podman[158658]: 2026-01-20 14:09:00.504692503 +0000 UTC m=+0.614539565 container start 7fbde7c14c5f705e8b61cbde27bf0d23b4505d66639555ba5e9f0cb6bbda0a26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_poincare, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Jan 20 14:09:00 compute-0 objective_poincare[158698]: 167 167
Jan 20 14:09:00 compute-0 systemd[1]: libpod-7fbde7c14c5f705e8b61cbde27bf0d23b4505d66639555ba5e9f0cb6bbda0a26.scope: Deactivated successfully.
Jan 20 14:09:00 compute-0 podman[158658]: 2026-01-20 14:09:00.770171843 +0000 UTC m=+0.880018955 container attach 7fbde7c14c5f705e8b61cbde27bf0d23b4505d66639555ba5e9f0cb6bbda0a26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_poincare, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:09:00 compute-0 podman[158658]: 2026-01-20 14:09:00.771068047 +0000 UTC m=+0.880915139 container died 7fbde7c14c5f705e8b61cbde27bf0d23b4505d66639555ba5e9f0cb6bbda0a26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_poincare, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 20 14:09:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-ce923091e6926215a85deb1a6f3c077a06c6fc75ac871ea9abe9ae8466250e35-merged.mount: Deactivated successfully.
Jan 20 14:09:01 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v515: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 12 KiB/s rd, 0 B/s wr, 19 op/s
Jan 20 14:09:02 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:09:02 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:09:02 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:09:02.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:09:02 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:09:02 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:09:02 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:09:02.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:09:03 compute-0 podman[158658]: 2026-01-20 14:09:03.146620737 +0000 UTC m=+3.256467759 container remove 7fbde7c14c5f705e8b61cbde27bf0d23b4505d66639555ba5e9f0cb6bbda0a26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_poincare, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 20 14:09:03 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:09:03 compute-0 systemd[1]: libpod-conmon-7fbde7c14c5f705e8b61cbde27bf0d23b4505d66639555ba5e9f0cb6bbda0a26.scope: Deactivated successfully.
Jan 20 14:09:03 compute-0 podman[158716]: 2026-01-20 14:09:03.286993868 +0000 UTC m=+0.855145002 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 14:09:03 compute-0 podman[158679]: 2026-01-20 14:09:03.221678629 +0000 UTC m=+3.294835328 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 14:09:03 compute-0 ceph-mon[74360]: pgmap v515: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 12 KiB/s rd, 0 B/s wr, 19 op/s
Jan 20 14:09:03 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v516: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 10 KiB/s rd, 0 B/s wr, 16 op/s
Jan 20 14:09:03 compute-0 podman[158749]: 2026-01-20 14:09:03.702201413 +0000 UTC m=+0.372915520 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:09:04 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:09:04 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:09:04 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:09:04.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:09:04 compute-0 podman[158679]: 2026-01-20 14:09:04.108366094 +0000 UTC m=+4.181522803 container create ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 20 14:09:04 compute-0 python3[158135]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 14:09:04 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:09:04 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:09:04 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:09:04.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:09:04 compute-0 podman[158749]: 2026-01-20 14:09:04.857212686 +0000 UTC m=+1.527926753 container create 2d384ef5531bc1abde40c3eb7a8571126601ee8f8a81440c4b72c7a26a9fd6f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_davinci, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 20 14:09:05 compute-0 systemd[1]: Started libpod-conmon-2d384ef5531bc1abde40c3eb7a8571126601ee8f8a81440c4b72c7a26a9fd6f5.scope.
Jan 20 14:09:05 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:09:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc8c3323303e7a4b8316f25d97eb3786cb944ffa3320de9174045cde7e03db9e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:09:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc8c3323303e7a4b8316f25d97eb3786cb944ffa3320de9174045cde7e03db9e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:09:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc8c3323303e7a4b8316f25d97eb3786cb944ffa3320de9174045cde7e03db9e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:09:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc8c3323303e7a4b8316f25d97eb3786cb944ffa3320de9174045cde7e03db9e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:09:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc8c3323303e7a4b8316f25d97eb3786cb944ffa3320de9174045cde7e03db9e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 14:09:05 compute-0 ceph-mon[74360]: pgmap v516: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 10 KiB/s rd, 0 B/s wr, 16 op/s
Jan 20 14:09:05 compute-0 podman[158749]: 2026-01-20 14:09:05.370161959 +0000 UTC m=+2.040876086 container init 2d384ef5531bc1abde40c3eb7a8571126601ee8f8a81440c4b72c7a26a9fd6f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_davinci, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 20 14:09:05 compute-0 podman[158749]: 2026-01-20 14:09:05.37794296 +0000 UTC m=+2.048657027 container start 2d384ef5531bc1abde40c3eb7a8571126601ee8f8a81440c4b72c7a26a9fd6f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_davinci, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 20 14:09:05 compute-0 podman[158749]: 2026-01-20 14:09:05.415909548 +0000 UTC m=+2.086623705 container attach 2d384ef5531bc1abde40c3eb7a8571126601ee8f8a81440c4b72c7a26a9fd6f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_davinci, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:09:05 compute-0 sudo[158133]: pam_unix(sudo:session): session closed for user root
Jan 20 14:09:05 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v517: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 19 KiB/s rd, 0 B/s wr, 31 op/s
Jan 20 14:09:06 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:09:06 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:09:06 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:09:06.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:09:06 compute-0 agitated_davinci[158781]: --> passed data devices: 0 physical, 1 LVM
Jan 20 14:09:06 compute-0 agitated_davinci[158781]: --> relative data size: 1.0
Jan 20 14:09:06 compute-0 agitated_davinci[158781]: --> All data devices are unavailable
Jan 20 14:09:06 compute-0 systemd[1]: libpod-2d384ef5531bc1abde40c3eb7a8571126601ee8f8a81440c4b72c7a26a9fd6f5.scope: Deactivated successfully.
Jan 20 14:09:06 compute-0 podman[158749]: 2026-01-20 14:09:06.162693114 +0000 UTC m=+2.833407171 container died 2d384ef5531bc1abde40c3eb7a8571126601ee8f8a81440c4b72c7a26a9fd6f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_davinci, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:09:06 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:09:06 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:09:06 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:09:06.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:09:06 compute-0 ceph-mon[74360]: pgmap v517: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 19 KiB/s rd, 0 B/s wr, 31 op/s
Jan 20 14:09:07 compute-0 sudo[158972]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ubooefgldgaglavshbctafewcwmkbwwk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918146.866098-1265-232375328436434/AnsiballZ_stat.py'
Jan 20 14:09:07 compute-0 sudo[158972]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:09:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-cc8c3323303e7a4b8316f25d97eb3786cb944ffa3320de9174045cde7e03db9e-merged.mount: Deactivated successfully.
Jan 20 14:09:07 compute-0 python3.9[158974]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 14:09:07 compute-0 sudo[158972]: pam_unix(sudo:session): session closed for user root
Jan 20 14:09:07 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v518: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 18 KiB/s rd, 0 B/s wr, 29 op/s
Jan 20 14:09:07 compute-0 podman[158749]: 2026-01-20 14:09:07.610672062 +0000 UTC m=+4.281386129 container remove 2d384ef5531bc1abde40c3eb7a8571126601ee8f8a81440c4b72c7a26a9fd6f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_davinci, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 20 14:09:07 compute-0 systemd[1]: libpod-conmon-2d384ef5531bc1abde40c3eb7a8571126601ee8f8a81440c4b72c7a26a9fd6f5.scope: Deactivated successfully.
Jan 20 14:09:07 compute-0 sudo[158580]: pam_unix(sudo:session): session closed for user root
Jan 20 14:09:07 compute-0 sudo[159002]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:09:07 compute-0 sudo[159002]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:09:07 compute-0 sudo[159002]: pam_unix(sudo:session): session closed for user root
Jan 20 14:09:07 compute-0 sudo[159027]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:09:07 compute-0 sudo[159027]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:09:07 compute-0 sudo[159027]: pam_unix(sudo:session): session closed for user root
Jan 20 14:09:07 compute-0 sudo[159052]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:09:07 compute-0 sudo[159052]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:09:07 compute-0 sudo[159052]: pam_unix(sudo:session): session closed for user root
Jan 20 14:09:07 compute-0 sudo[159077]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- lvm list --format json
Jan 20 14:09:07 compute-0 sudo[159077]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:09:08 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:09:08 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:09:08 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:09:08.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:09:08 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:09:08 compute-0 podman[159143]: 2026-01-20 14:09:08.310203037 +0000 UTC m=+0.020861085 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:09:08 compute-0 podman[159143]: 2026-01-20 14:09:08.470841268 +0000 UTC m=+0.181499326 container create e278e45a30601dde01362795b94183f2938bc934145a119a8951b14049f12936 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_goodall, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:09:08 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:09:08 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:09:08 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:09:08.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:09:08 compute-0 ceph-mon[74360]: pgmap v518: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 18 KiB/s rd, 0 B/s wr, 29 op/s
Jan 20 14:09:08 compute-0 systemd[1]: Started libpod-conmon-e278e45a30601dde01362795b94183f2938bc934145a119a8951b14049f12936.scope.
Jan 20 14:09:08 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:09:08 compute-0 podman[159143]: 2026-01-20 14:09:08.640507283 +0000 UTC m=+0.351165401 container init e278e45a30601dde01362795b94183f2938bc934145a119a8951b14049f12936 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_goodall, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 20 14:09:08 compute-0 podman[159143]: 2026-01-20 14:09:08.648199042 +0000 UTC m=+0.358857100 container start e278e45a30601dde01362795b94183f2938bc934145a119a8951b14049f12936 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_goodall, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:09:08 compute-0 amazing_goodall[159160]: 167 167
Jan 20 14:09:08 compute-0 systemd[1]: libpod-e278e45a30601dde01362795b94183f2938bc934145a119a8951b14049f12936.scope: Deactivated successfully.
Jan 20 14:09:08 compute-0 podman[159143]: 2026-01-20 14:09:08.682247444 +0000 UTC m=+0.392905492 container attach e278e45a30601dde01362795b94183f2938bc934145a119a8951b14049f12936 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_goodall, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default)
Jan 20 14:09:08 compute-0 podman[159143]: 2026-01-20 14:09:08.683470787 +0000 UTC m=+0.394128825 container died e278e45a30601dde01362795b94183f2938bc934145a119a8951b14049f12936 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_goodall, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:09:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-282f7bef4aa0b6d442075fd6809c487cc02657531e1e0e145cec7714260cf277-merged.mount: Deactivated successfully.
Jan 20 14:09:08 compute-0 podman[159143]: 2026-01-20 14:09:08.945451962 +0000 UTC m=+0.656109980 container remove e278e45a30601dde01362795b94183f2938bc934145a119a8951b14049f12936 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_goodall, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 20 14:09:08 compute-0 systemd[1]: libpod-conmon-e278e45a30601dde01362795b94183f2938bc934145a119a8951b14049f12936.scope: Deactivated successfully.
Jan 20 14:09:09 compute-0 podman[159236]: 2026-01-20 14:09:09.1281233 +0000 UTC m=+0.077943322 container create 17fbeb2c369cb29187a60afa490737042f1b05e53a409682b53690b50e1024d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_cray, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 20 14:09:09 compute-0 podman[159236]: 2026-01-20 14:09:09.069833451 +0000 UTC m=+0.019653453 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:09:09 compute-0 systemd[1]: Started libpod-conmon-17fbeb2c369cb29187a60afa490737042f1b05e53a409682b53690b50e1024d8.scope.
Jan 20 14:09:09 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:09:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d76036a53f3a4db78516c9345051b3b305969d9b6ad4681e6e0279e075a2c23f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:09:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d76036a53f3a4db78516c9345051b3b305969d9b6ad4681e6e0279e075a2c23f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:09:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d76036a53f3a4db78516c9345051b3b305969d9b6ad4681e6e0279e075a2c23f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:09:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d76036a53f3a4db78516c9345051b3b305969d9b6ad4681e6e0279e075a2c23f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:09:09 compute-0 podman[159236]: 2026-01-20 14:09:09.268091601 +0000 UTC m=+0.217911673 container init 17fbeb2c369cb29187a60afa490737042f1b05e53a409682b53690b50e1024d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_cray, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:09:09 compute-0 podman[159236]: 2026-01-20 14:09:09.277024273 +0000 UTC m=+0.226844295 container start 17fbeb2c369cb29187a60afa490737042f1b05e53a409682b53690b50e1024d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_cray, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:09:09 compute-0 podman[159236]: 2026-01-20 14:09:09.302724218 +0000 UTC m=+0.252544240 container attach 17fbeb2c369cb29187a60afa490737042f1b05e53a409682b53690b50e1024d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_cray, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 20 14:09:09 compute-0 sudo[159332]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yqmujxqptsaywlbsfankjlofashkpnpu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918148.9947429-1292-256067514983456/AnsiballZ_file.py'
Jan 20 14:09:09 compute-0 sudo[159332]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:09:09 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v519: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 26 KiB/s rd, 0 B/s wr, 43 op/s
Jan 20 14:09:09 compute-0 python3.9[159334]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:09:09 compute-0 sudo[159332]: pam_unix(sudo:session): session closed for user root
Jan 20 14:09:09 compute-0 sudo[159410]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-alyuczbbkxllgzqawumbanueaygribxu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918148.9947429-1292-256067514983456/AnsiballZ_stat.py'
Jan 20 14:09:09 compute-0 sudo[159410]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:09:09 compute-0 boring_cray[159282]: {
Jan 20 14:09:09 compute-0 boring_cray[159282]:     "0": [
Jan 20 14:09:09 compute-0 boring_cray[159282]:         {
Jan 20 14:09:09 compute-0 boring_cray[159282]:             "devices": [
Jan 20 14:09:09 compute-0 boring_cray[159282]:                 "/dev/loop3"
Jan 20 14:09:09 compute-0 boring_cray[159282]:             ],
Jan 20 14:09:09 compute-0 boring_cray[159282]:             "lv_name": "ceph_lv0",
Jan 20 14:09:09 compute-0 boring_cray[159282]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:09:09 compute-0 boring_cray[159282]:             "lv_size": "7511998464",
Jan 20 14:09:09 compute-0 boring_cray[159282]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e399cf45-e6b6-5393-99f1-75c601d3f188,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afc3740a-ccee-46f8-b343-22648cc89376,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 20 14:09:09 compute-0 boring_cray[159282]:             "lv_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 14:09:09 compute-0 boring_cray[159282]:             "name": "ceph_lv0",
Jan 20 14:09:09 compute-0 boring_cray[159282]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:09:09 compute-0 boring_cray[159282]:             "tags": {
Jan 20 14:09:09 compute-0 boring_cray[159282]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:09:09 compute-0 boring_cray[159282]:                 "ceph.block_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 14:09:09 compute-0 boring_cray[159282]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 14:09:09 compute-0 boring_cray[159282]:                 "ceph.cluster_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 14:09:09 compute-0 boring_cray[159282]:                 "ceph.cluster_name": "ceph",
Jan 20 14:09:09 compute-0 boring_cray[159282]:                 "ceph.crush_device_class": "",
Jan 20 14:09:09 compute-0 boring_cray[159282]:                 "ceph.encrypted": "0",
Jan 20 14:09:09 compute-0 boring_cray[159282]:                 "ceph.osd_fsid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 14:09:09 compute-0 boring_cray[159282]:                 "ceph.osd_id": "0",
Jan 20 14:09:09 compute-0 boring_cray[159282]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 14:09:09 compute-0 boring_cray[159282]:                 "ceph.type": "block",
Jan 20 14:09:09 compute-0 boring_cray[159282]:                 "ceph.vdo": "0"
Jan 20 14:09:09 compute-0 boring_cray[159282]:             },
Jan 20 14:09:09 compute-0 boring_cray[159282]:             "type": "block",
Jan 20 14:09:09 compute-0 boring_cray[159282]:             "vg_name": "ceph_vg0"
Jan 20 14:09:09 compute-0 boring_cray[159282]:         }
Jan 20 14:09:09 compute-0 boring_cray[159282]:     ]
Jan 20 14:09:09 compute-0 boring_cray[159282]: }
Jan 20 14:09:10 compute-0 python3.9[159412]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 14:09:10 compute-0 sudo[159410]: pam_unix(sudo:session): session closed for user root
Jan 20 14:09:10 compute-0 systemd[1]: libpod-17fbeb2c369cb29187a60afa490737042f1b05e53a409682b53690b50e1024d8.scope: Deactivated successfully.
Jan 20 14:09:10 compute-0 podman[159236]: 2026-01-20 14:09:10.033764688 +0000 UTC m=+0.983584670 container died 17fbeb2c369cb29187a60afa490737042f1b05e53a409682b53690b50e1024d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_cray, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 20 14:09:10 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:09:10 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:09:10 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:09:10.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:09:10 compute-0 ceph-mon[74360]: pgmap v519: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 26 KiB/s rd, 0 B/s wr, 43 op/s
Jan 20 14:09:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-d76036a53f3a4db78516c9345051b3b305969d9b6ad4681e6e0279e075a2c23f-merged.mount: Deactivated successfully.
Jan 20 14:09:10 compute-0 podman[159236]: 2026-01-20 14:09:10.450955487 +0000 UTC m=+1.400775479 container remove 17fbeb2c369cb29187a60afa490737042f1b05e53a409682b53690b50e1024d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_cray, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:09:10 compute-0 systemd[1]: libpod-conmon-17fbeb2c369cb29187a60afa490737042f1b05e53a409682b53690b50e1024d8.scope: Deactivated successfully.
Jan 20 14:09:10 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:09:10 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:09:10 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:09:10.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:09:10 compute-0 sudo[159077]: pam_unix(sudo:session): session closed for user root
Jan 20 14:09:10 compute-0 sudo[159484]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:09:10 compute-0 sudo[159484]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:09:10 compute-0 sudo[159484]: pam_unix(sudo:session): session closed for user root
Jan 20 14:09:10 compute-0 sudo[159509]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:09:10 compute-0 sudo[159509]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:09:10 compute-0 sudo[159509]: pam_unix(sudo:session): session closed for user root
Jan 20 14:09:10 compute-0 sudo[159535]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:09:10 compute-0 sudo[159535]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:09:10 compute-0 sudo[159535]: pam_unix(sudo:session): session closed for user root
Jan 20 14:09:10 compute-0 sudo[159586]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- raw list --format json
Jan 20 14:09:10 compute-0 sudo[159586]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:09:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 14:09:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:09:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 20 14:09:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:09:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:09:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:09:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:09:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:09:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:09:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:09:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:09:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:09:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 20 14:09:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:09:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:09:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:09:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 20 14:09:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:09:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 20 14:09:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:09:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:09:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:09:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 20 14:09:10 compute-0 sudo[159682]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ltukerhnchcpvlcioweodndpsysucoas ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918150.118443-1292-217908627164107/AnsiballZ_copy.py'
Jan 20 14:09:10 compute-0 sudo[159682]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:09:11 compute-0 python3.9[159689]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1768918150.118443-1292-217908627164107/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:09:11 compute-0 sudo[159682]: pam_unix(sudo:session): session closed for user root
Jan 20 14:09:11 compute-0 podman[159723]: 2026-01-20 14:09:11.250875423 +0000 UTC m=+0.039014908 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:09:11 compute-0 podman[159723]: 2026-01-20 14:09:11.486656638 +0000 UTC m=+0.274796143 container create 6274acb73c308fad9ecb69733b48fc6cc03fa242309900d2a85059656fe364f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_banzai, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:09:11 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v520: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 48 KiB/s rd, 0 B/s wr, 79 op/s
Jan 20 14:09:11 compute-0 sudo[159810]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rphgbmyxagslgaoduiyyzmddgpzcclcr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918150.118443-1292-217908627164107/AnsiballZ_systemd.py'
Jan 20 14:09:11 compute-0 sudo[159810]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:09:11 compute-0 systemd[1]: Started libpod-conmon-6274acb73c308fad9ecb69733b48fc6cc03fa242309900d2a85059656fe364f4.scope.
Jan 20 14:09:11 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:09:11 compute-0 podman[159723]: 2026-01-20 14:09:11.703170433 +0000 UTC m=+0.491309928 container init 6274acb73c308fad9ecb69733b48fc6cc03fa242309900d2a85059656fe364f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_banzai, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:09:11 compute-0 podman[159723]: 2026-01-20 14:09:11.714575511 +0000 UTC m=+0.502715016 container start 6274acb73c308fad9ecb69733b48fc6cc03fa242309900d2a85059656fe364f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_banzai, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 20 14:09:11 compute-0 bold_banzai[159815]: 167 167
Jan 20 14:09:11 compute-0 systemd[1]: libpod-6274acb73c308fad9ecb69733b48fc6cc03fa242309900d2a85059656fe364f4.scope: Deactivated successfully.
Jan 20 14:09:11 compute-0 python3.9[159812]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 20 14:09:11 compute-0 systemd[1]: Reloading.
Jan 20 14:09:11 compute-0 systemd-sysv-generator[159862]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:09:11 compute-0 systemd-rc-local-generator[159857]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:09:12 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:09:12 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:09:12 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:09:12.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:09:12 compute-0 sudo[159810]: pam_unix(sudo:session): session closed for user root
Jan 20 14:09:12 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:09:12 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:09:12 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:09:12.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:09:12 compute-0 sudo[159940]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hnwygimvrqykmnniaamylrlikqfvqttv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918150.118443-1292-217908627164107/AnsiballZ_systemd.py'
Jan 20 14:09:12 compute-0 sudo[159940]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:09:12 compute-0 podman[159723]: 2026-01-20 14:09:12.725499901 +0000 UTC m=+1.513639446 container attach 6274acb73c308fad9ecb69733b48fc6cc03fa242309900d2a85059656fe364f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_banzai, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 20 14:09:12 compute-0 podman[159723]: 2026-01-20 14:09:12.728744069 +0000 UTC m=+1.516883684 container died 6274acb73c308fad9ecb69733b48fc6cc03fa242309900d2a85059656fe364f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_banzai, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:09:13 compute-0 python3.9[159942]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 14:09:13 compute-0 systemd[1]: Reloading.
Jan 20 14:09:13 compute-0 systemd-rc-local-generator[159972]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:09:13 compute-0 systemd-sysv-generator[159975]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:09:13 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v521: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 41 KiB/s rd, 0 B/s wr, 69 op/s
Jan 20 14:09:14 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:09:14 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:09:14 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:09:14.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:09:14 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:09:14 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:09:14 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:09:14.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:09:15 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v522: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 51 KiB/s rd, 0 B/s wr, 84 op/s
Jan 20 14:09:16 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:09:16 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:09:16 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:09:16.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:09:16 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:09:16 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:09:16 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:09:16.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:09:16 compute-0 sshd-session[160006]: Connection closed by authenticating user root 157.245.78.139 port 44032 [preauth]
Jan 20 14:09:17 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v523: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 42 KiB/s rd, 0 B/s wr, 69 op/s
Jan 20 14:09:18 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:09:18 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:09:18 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:09:18.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:09:18 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:09:18 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:09:18 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:09:18.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:09:19 compute-0 ceph-mds[93566]: mds.beacon.cephfs.compute-0.znrafi missed beacon ack from the monitors
Jan 20 14:09:19 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v524: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 41 KiB/s rd, 0 B/s wr, 67 op/s
Jan 20 14:09:19 compute-0 sshd-session[76515]: Received disconnect from 192.168.122.100 port 43612:11: Disconnected by application
Jan 20 14:09:19 compute-0 sshd-session[76515]: Disconnected from user ceph-admin 192.168.122.100 port 43612
Jan 20 14:09:19 compute-0 ceph-mgr[74653]: [cephadm ERROR cephadm.serve] host compute-0 `cephadm ceph-volume` failed: Cannot decode JSON: 
                                           Traceback (most recent call last):
                                             File "/usr/share/ceph/mgr/cephadm/serve.py", line 1514, in _run_cephadm_json
                                               return json.loads(''.join(out))
                                             File "/lib64/python3.9/json/__init__.py", line 346, in loads
                                               return _default_decoder.decode(s)
                                             File "/lib64/python3.9/json/decoder.py", line 337, in decode
                                               obj, end = self.raw_decode(s, idx=_w(s, 0).end())
                                             File "/lib64/python3.9/json/decoder.py", line 355, in raw_decode
                                               raise JSONDecodeError("Expecting value", s, err.value) from None
                                           json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
Jan 20 14:09:19 compute-0 ceph-mgr[74653]: log_channel(cephadm) log [ERR] : host compute-0 `cephadm ceph-volume` failed: Cannot decode JSON: 
                                           Traceback (most recent call last):
                                             File "/usr/share/ceph/mgr/cephadm/serve.py", line 1514, in _run_cephadm_json
                                               return json.loads(''.join(out))
                                             File "/lib64/python3.9/json/__init__.py", line 346, in loads
                                               return _default_decoder.decode(s)
                                             File "/lib64/python3.9/json/decoder.py", line 337, in decode
                                               obj, end = self.raw_decode(s, idx=_w(s, 0).end())
                                             File "/lib64/python3.9/json/decoder.py", line 355, in raw_decode
                                               raise JSONDecodeError("Expecting value", s, err.value) from None
                                           json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
Jan 20 14:09:19 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv[74649]: 2026-01-20T14:09:19.884+0000 7f454d358640 -1 log_channel(cephadm) log [ERR] : host compute-0 `cephadm ceph-volume` failed: Cannot decode JSON: 
Jan 20 14:09:19 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv[74649]: Traceback (most recent call last):
Jan 20 14:09:19 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv[74649]:   File "/usr/share/ceph/mgr/cephadm/serve.py", line 1514, in _run_cephadm_json
Jan 20 14:09:19 compute-0 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 20 14:09:19 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv[74649]:     return json.loads(''.join(out))
Jan 20 14:09:19 compute-0 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 20 14:09:19 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv[74649]:   File "/lib64/python3.9/json/__init__.py", line 346, in loads
Jan 20 14:09:19 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv[74649]:     return _default_decoder.decode(s)
Jan 20 14:09:19 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv[74649]:   File "/lib64/python3.9/json/decoder.py", line 337, in decode
Jan 20 14:09:19 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv[74649]:     obj, end = self.raw_decode(s, idx=_w(s, 0).end())
Jan 20 14:09:19 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv[74649]:   File "/lib64/python3.9/json/decoder.py", line 355, in raw_decode
Jan 20 14:09:19 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv[74649]:     raise JSONDecodeError("Expecting value", s, err.value) from None
Jan 20 14:09:19 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv[74649]: json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
Jan 20 14:09:19 compute-0 ceph-mgr[74653]: [cephadm ERROR cephadm.serve] Failed to apply osd.default_drive_group spec DriveGroupSpec.from_json(yaml.safe_load('''service_type: osd
                                           service_id: default_drive_group
                                           service_name: osd.default_drive_group
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           spec:
                                             data_devices:
                                               paths:
                                               - /dev/ceph_vg0/ceph_lv0
                                             filter_logic: AND
                                             objectstore: bluestore
                                           ''')): host compute-0 `cephadm ceph-volume` failed: Cannot decode JSON
                                           Traceback (most recent call last):
                                             File "/usr/share/ceph/mgr/cephadm/serve.py", line 1514, in _run_cephadm_json
                                               return json.loads(''.join(out))
                                             File "/lib64/python3.9/json/__init__.py", line 346, in loads
                                               return _default_decoder.decode(s)
                                             File "/lib64/python3.9/json/decoder.py", line 337, in decode
                                               obj, end = self.raw_decode(s, idx=_w(s, 0).end())
                                             File "/lib64/python3.9/json/decoder.py", line 355, in raw_decode
                                               raise JSONDecodeError("Expecting value", s, err.value) from None
                                           json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
                                           
                                           During handling of the above exception, another exception occurred:
                                           
                                           Traceback (most recent call last):
                                             File "/usr/share/ceph/mgr/cephadm/serve.py", line 577, in _apply_all_services
                                               if self._apply_service(spec):
                                             File "/usr/share/ceph/mgr/cephadm/serve.py", line 696, in _apply_service
                                               self.mgr.osd_service.create_from_spec(cast(DriveGroupSpec, spec))
                                             File "/usr/share/ceph/mgr/cephadm/services/osd.py", line 79, in create_from_spec
                                               ret = self.mgr.wait_async(all_hosts())
                                             File "/usr/share/ceph/mgr/cephadm/module.py", line 735, in wait_async
                                               return self.event_loop.get_result(coro, timeout)
                                             File "/usr/share/ceph/mgr/cephadm/ssh.py", line 64, in get_result
                                               return future.result(timeout)
                                             File "/lib64/python3.9/concurrent/futures/_base.py", line 446, in result
                                               return self.__get_result()
                                             File "/lib64/python3.9/concurrent/futures/_base.py", line 391, in __get_result
                                               raise self._exception
                                             File "/usr/share/ceph/mgr/cephadm/services/osd.py", line 76, in all_hosts
                                               return await gather(*futures)
                                             File "/usr/share/ceph/mgr/cephadm/services/osd.py", line 63, in create_from_spec_one
                                               ret_msg = await self.create_single_host(
                                             File "/usr/share/ceph/mgr/cephadm/services/osd.py", line 98, in create_single_host
                                               return await self.deploy_osd_daemons_for_existing_osds(host, drive_group,
                                             File "/usr/share/ceph/mgr/cephadm/services/osd.py", line 158, in deploy_osd_daemons_for_existing_osds
                                               raw_elems: dict = await CephadmServe(self.mgr)._run_cephadm_json(
                                             File "/usr/share/ceph/mgr/cephadm/serve.py", line 1518, in _run_cephadm_json
                                               raise OrchestratorError(msg)
                                           orchestrator._interface.OrchestratorError: host compute-0 `cephadm ceph-volume` failed: Cannot decode JSON
Jan 20 14:09:19 compute-0 ceph-mgr[74653]: log_channel(cephadm) log [ERR] : Failed to apply osd.default_drive_group spec DriveGroupSpec.from_json(yaml.safe_load('''service_type: osd
                                           service_id: default_drive_group
                                           service_name: osd.default_drive_group
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           spec:
                                             data_devices:
                                               paths:
                                               - /dev/ceph_vg0/ceph_lv0
                                             filter_logic: AND
                                             objectstore: bluestore
                                           ''')): host compute-0 `cephadm ceph-volume` failed: Cannot decode JSON
                                           Traceback (most recent call last):
                                             File "/usr/share/ceph/mgr/cephadm/serve.py", line 1514, in _run_cephadm_json
                                               return json.loads(''.join(out))
                                             File "/lib64/python3.9/json/__init__.py", line 346, in loads
                                               return _default_decoder.decode(s)
                                             File "/lib64/python3.9/json/decoder.py", line 337, in decode
                                               obj, end = self.raw_decode(s, idx=_w(s, 0).end())
                                             File "/lib64/python3.9/json/decoder.py", line 355, in raw_decode
                                               raise JSONDecodeError("Expecting value", s, err.value) from None
                                           json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
                                           
                                           During handling of the above exception, another exception occurred:
                                           
                                           Traceback (most recent call last):
                                             File "/usr/share/ceph/mgr/cephadm/serve.py", line 577, in _apply_all_services
                                               if self._apply_service(spec):
                                             File "/usr/share/ceph/mgr/cephadm/serve.py", line 696, in _apply_service
                                               self.mgr.osd_service.create_from_spec(cast(DriveGroupSpec, spec))
                                             File "/usr/share/ceph/mgr/cephadm/services/osd.py", line 79, in create_from_spec
                                               ret = self.mgr.wait_async(all_hosts())
                                             File "/usr/share/ceph/mgr/cephadm/module.py", line 735, in wait_async
                                               return self.event_loop.get_result(coro, timeout)
                                             File "/usr/share/ceph/mgr/cephadm/ssh.py", line 64, in get_result
                                               return future.result(timeout)
                                             File "/lib64/python3.9/concurrent/futures/_base.py", line 446, in result
                                               return self.__get_result()
                                             File "/lib64/python3.9/concurrent/futures/_base.py", line 391, in __get_result
                                               raise self._exception
                                             File "/usr/share/ceph/mgr/cephadm/services/osd.py", line 76, in all_hosts
                                               return await gather(*futures)
                                             File "/usr/share/ceph/mgr/cephadm/services/osd.py", line 63, in create_from_spec_one
                                               ret_msg = await self.create_single_host(
                                             File "/usr/share/ceph/mgr/cephadm/services/osd.py", line 98, in create_single_host
                                               return await self.deploy_osd_daemons_for_existing_osds(host, drive_group,
                                             File "/usr/share/ceph/mgr/cephadm/services/osd.py", line 158, in deploy_osd_daemons_for_existing_osds
                                               raw_elems: dict = await CephadmServe(self.mgr)._run_cephadm_json(
                                             File "/usr/share/ceph/mgr/cephadm/serve.py", line 1518, in _run_cephadm_json
                                               raise OrchestratorError(msg)
                                           orchestrator._interface.OrchestratorError: host compute-0 `cephadm ceph-volume` failed: Cannot decode JSON
Jan 20 14:09:19 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv[74649]: 2026-01-20T14:09:19.891+0000 7f45543a6640 -1 log_channel(cephadm) log [ERR] : Failed to apply osd.default_drive_group spec DriveGroupSpec.from_json(yaml.safe_load('''service_type: osd
Jan 20 14:09:19 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv[74649]: service_id: default_drive_group
Jan 20 14:09:19 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv[74649]: service_name: osd.default_drive_group
Jan 20 14:09:19 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv[74649]: placement:
Jan 20 14:09:19 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv[74649]:   hosts:
Jan 20 14:09:19 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv[74649]:   - compute-0
Jan 20 14:09:19 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv[74649]:   - compute-1
Jan 20 14:09:19 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv[74649]:   - compute-2
Jan 20 14:09:19 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv[74649]: spec:
Jan 20 14:09:19 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv[74649]:   data_devices:
Jan 20 14:09:19 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv[74649]:     paths:
Jan 20 14:09:19 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv[74649]:     - /dev/ceph_vg0/ceph_lv0
Jan 20 14:09:19 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv[74649]:   filter_logic: AND
Jan 20 14:09:19 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv[74649]:   objectstore: bluestore
Jan 20 14:09:19 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv[74649]: ''')): host compute-0 `cephadm ceph-volume` failed: Cannot decode JSON
Jan 20 14:09:19 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv[74649]: Traceback (most recent call last):
Jan 20 14:09:19 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv[74649]:   File "/usr/share/ceph/mgr/cephadm/serve.py", line 1514, in _run_cephadm_json
Jan 20 14:09:19 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv[74649]:     return json.loads(''.join(out))
Jan 20 14:09:19 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv[74649]:   File "/lib64/python3.9/json/__init__.py", line 346, in loads
Jan 20 14:09:19 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv[74649]:     return _default_decoder.decode(s)
Jan 20 14:09:19 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv[74649]:   File "/lib64/python3.9/json/decoder.py", line 337, in decode
Jan 20 14:09:19 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv[74649]:     obj, end = self.raw_decode(s, idx=_w(s, 0).end())
Jan 20 14:09:19 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv[74649]:   File "/lib64/python3.9/json/decoder.py", line 355, in raw_decode
Jan 20 14:09:19 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv[74649]:     raise JSONDecodeError("Expecting value", s, err.value) from None
Jan 20 14:09:19 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv[74649]: json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
Jan 20 14:09:19 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv[74649]: 
Jan 20 14:09:19 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv[74649]: During handling of the above exception, another exception occurred:
Jan 20 14:09:19 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv[74649]: 
Jan 20 14:09:19 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv[74649]: Traceback (most recent call last):
Jan 20 14:09:19 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv[74649]:   File "/usr/share/ceph/mgr/cephadm/serve.py", line 577, in _apply_all_services
Jan 20 14:09:19 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv[74649]:     if self._apply_service(spec):
Jan 20 14:09:19 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv[74649]:   File "/usr/share/ceph/mgr/cephadm/serve.py", line 696, in _apply_service
Jan 20 14:09:19 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv[74649]:     self.mgr.osd_service.create_from_spec(cast(DriveGroupSpec, spec))
Jan 20 14:09:19 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv[74649]:   File "/usr/share/ceph/mgr/cephadm/services/osd.py", line 79, in create_from_spec
Jan 20 14:09:19 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv[74649]:     ret = self.mgr.wait_async(all_hosts())
Jan 20 14:09:19 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv[74649]:   File "/usr/share/ceph/mgr/cephadm/module.py", line 735, in wait_async
Jan 20 14:09:19 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv[74649]:     return self.event_loop.get_result(coro, timeout)
Jan 20 14:09:19 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv[74649]:   File "/usr/share/ceph/mgr/cephadm/ssh.py", line 64, in get_result
Jan 20 14:09:19 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv[74649]:     return future.result(timeout)
Jan 20 14:09:19 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv[74649]:   File "/lib64/python3.9/concurrent/futures/_base.py", line 446, in result
Jan 20 14:09:19 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv[74649]:     return self.__get_result()
Jan 20 14:09:19 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv[74649]:   File "/lib64/python3.9/concurrent/futures/_base.py", line 391, in __get_result
Jan 20 14:09:19 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv[74649]:     raise self._exception
Jan 20 14:09:19 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv[74649]:   File "/usr/share/ceph/mgr/cephadm/services/osd.py", line 76, in all_hosts
Jan 20 14:09:19 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv[74649]:     return await gather(*futures)
Jan 20 14:09:19 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv[74649]:   File "/usr/share/ceph/mgr/cephadm/services/osd.py", line 63, in create_from_spec_one
Jan 20 14:09:19 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v525: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 37 KiB/s rd, 0 B/s wr, 61 op/s
Jan 20 14:09:19 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv[74649]:     ret_msg = await self.create_single_host(
Jan 20 14:09:19 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv[74649]:   File "/usr/share/ceph/mgr/cephadm/services/osd.py", line 98, in create_single_host
Jan 20 14:09:19 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv[74649]:     return await self.deploy_osd_daemons_for_existing_osds(host, drive_group,
Jan 20 14:09:19 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv[74649]:   File "/usr/share/ceph/mgr/cephadm/services/osd.py", line 158, in deploy_osd_daemons_for_existing_osds
Jan 20 14:09:19 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv[74649]:     raw_elems: dict = await CephadmServe(self.mgr)._run_cephadm_json(
Jan 20 14:09:19 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv[74649]:   File "/usr/share/ceph/mgr/cephadm/serve.py", line 1518, in _run_cephadm_json
Jan 20 14:09:19 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv[74649]:     raise OrchestratorError(msg)
Jan 20 14:09:19 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv[74649]: orchestrator._interface.OrchestratorError: host compute-0 `cephadm ceph-volume` failed: Cannot decode JSON
Jan 20 14:09:19 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 01e85e69-7c46-465d-8574-0b8411df3270 does not exist
Jan 20 14:09:19 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev b473a40f-0f0a-437e-a40e-0056bfded57d does not exist
Jan 20 14:09:19 compute-0 ceph-mgr[74653]: [progress INFO root] update: starting ev 69a3da18-3b4f-413c-a14c-5d30530545e3 (Updating ingress.rgw.default deployment (+2 -> 4))
Jan 20 14:09:19 compute-0 ceph-mgr[74653]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.rgw.default.compute-1.uyeocq on compute-1
Jan 20 14:09:19 compute-0 ceph-mgr[74653]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.rgw.default.compute-1.uyeocq on compute-1
Jan 20 14:09:20 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:09:20 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:09:20 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:09:20.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:09:20 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:09:20 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:09:20 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:09:20.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:09:21 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v526: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 11 KiB/s rd, 0 B/s wr, 17 op/s
Jan 20 14:09:22 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:09:22 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:09:22 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:09:22.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:09:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:09:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:09:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:09:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:09:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:09:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:09:22 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:09:22 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:09:22 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:09:22.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:09:23 compute-0 ceph-mds[93566]: mds.beacon.cephfs.compute-0.znrafi missed beacon ack from the monitors
Jan 20 14:09:23 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v527: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 11 KiB/s rd, 0 B/s wr, 17 op/s
Jan 20 14:09:24 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:09:24 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:09:24 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:09:24.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:09:24 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:09:24 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:09:24 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:09:24.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:09:24 compute-0 ceph-osd[84815]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 13.283096313s, txc = 0x562bbbdda000
Jan 20 14:09:24 compute-0 ceph-osd[84815]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency slow operation observed for kv_commit, latency = 12.350984573s
Jan 20 14:09:24 compute-0 ceph-osd[84815]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency slow operation observed for kv_sync, latency = 12.350984573s
Jan 20 14:09:24 compute-0 ceph-mon[74360]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Jan 20 14:09:24 compute-0 ceph-mon[74360]: paxos.0).electionLogic(15) init, last seen epoch 15, mid-election, bumping
Jan 20 14:09:25 compute-0 sudo[159981]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:09:25 compute-0 sshd-session[76512]: pam_unix(sshd:session): session closed for user ceph-admin
Jan 20 14:09:25 compute-0 sudo[159981]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:09:25 compute-0 systemd-logind[796]: Session 33 logged out. Waiting for processes to exit.
Jan 20 14:09:25 compute-0 sudo[159981]: pam_unix(sudo:session): session closed for user root
Jan 20 14:09:25 compute-0 systemd[1]: Starting ovn_metadata_agent container...
Jan 20 14:09:25 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v528: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:09:26 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:09:26 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:09:26 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:09:26.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:09:26 compute-0 ceph-osd[84815]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 13.616258621s, txc = 0x562bbc1a6300
Jan 20 14:09:26 compute-0 ceph-osd[84815]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 13.616281509s, txc = 0x562bbbe51500
Jan 20 14:09:26 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:09:26 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:09:26 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:09:26.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:09:26 compute-0 ceph-mon[74360]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 20 14:09:26 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:09:26 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:09:26 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:09:26.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:09:27 compute-0 ceph-mon[74360]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Jan 20 14:09:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-b9daae25e8f99ff1ca2e33bb29a7bc11e8403c295fb74194aaef124c3bc4d4b9-merged.mount: Deactivated successfully.
Jan 20 14:09:27 compute-0 ceph-mds[93566]: mds.beacon.cephfs.compute-0.znrafi missed beacon ack from the monitors
Jan 20 14:09:27 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 20 14:09:27 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v529: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:09:28 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:09:28 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:09:28 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:09:28.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:09:28 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:09:28 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:09:28 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:09:28.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:09:28 compute-0 ceph-mon[74360]: pgmap v520: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 48 KiB/s rd, 0 B/s wr, 79 op/s
Jan 20 14:09:28 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Jan 20 14:09:28 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 20 14:09:28 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.jyxktq=up:active} 2 up:standby
Jan 20 14:09:28 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e128: 3 total, 3 up, 3 in
Jan 20 14:09:28 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : mgrmap e11: compute-0.wookjv(active, since 13m), standbys: compute-2.gunjko, compute-1.oweoeg
Jan 20 14:09:28 compute-0 ceph-mon[74360]: log_channel(cluster) log [INF] : overall HEALTH_OK
Jan 20 14:09:28 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:09:28 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:09:28 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:09:28.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:09:28 compute-0 podman[159723]: 2026-01-20 14:09:28.498430017 +0000 UTC m=+17.286569482 container remove 6274acb73c308fad9ecb69733b48fc6cc03fa242309900d2a85059656fe364f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_banzai, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 14:09:28 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:09:28 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 20 14:09:28 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:09:28 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0) v1
Jan 20 14:09:28 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:09:28 compute-0 ceph-mgr[74653]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Jan 20 14:09:28 compute-0 ceph-mgr[74653]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Jan 20 14:09:28 compute-0 systemd[1]: libpod-conmon-6274acb73c308fad9ecb69733b48fc6cc03fa242309900d2a85059656fe364f4.scope: Deactivated successfully.
Jan 20 14:09:28 compute-0 ceph-mgr[74653]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 20 14:09:28 compute-0 ceph-mgr[74653]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 20 14:09:28 compute-0 ceph-mgr[74653]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Jan 20 14:09:28 compute-0 ceph-mgr[74653]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Jan 20 14:09:28 compute-0 ceph-mgr[74653]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.rgw.default.compute-1.cevitz on compute-1
Jan 20 14:09:28 compute-0 ceph-mgr[74653]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.rgw.default.compute-1.cevitz on compute-1
Jan 20 14:09:28 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:09:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76ea7977877c7f03f59b40ad1f9b22ce32a75fc499db06da00cf42f131e97200/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Jan 20 14:09:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76ea7977877c7f03f59b40ad1f9b22ce32a75fc499db06da00cf42f131e97200/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 14:09:28 compute-0 podman[160050]: 2026-01-20 14:09:28.647682187 +0000 UTC m=+0.029689937 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:09:28 compute-0 podman[160050]: 2026-01-20 14:09:28.74447453 +0000 UTC m=+0.126482230 container create 6a5f5e61e42496f840231c356bdb118a87a1b055e9f19d746b394d1600698057 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_zhukovsky, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 20 14:09:28 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2.
Jan 20 14:09:28 compute-0 podman[160023]: 2026-01-20 14:09:28.766215892 +0000 UTC m=+3.646453304 container init ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 20 14:09:28 compute-0 ovn_metadata_agent[160049]: + sudo -E kolla_set_configs
Jan 20 14:09:28 compute-0 systemd[1]: Started libpod-conmon-6a5f5e61e42496f840231c356bdb118a87a1b055e9f19d746b394d1600698057.scope.
Jan 20 14:09:28 compute-0 podman[160023]: 2026-01-20 14:09:28.807321774 +0000 UTC m=+3.687559156 container start ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, io.buildah.version=1.41.3)
Jan 20 14:09:28 compute-0 edpm-start-podman-container[160023]: ovn_metadata_agent
Jan 20 14:09:28 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:09:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2775c6992c6346a034611daba67666ad66a0f428629316d810dad9a72a271fbe/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:09:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2775c6992c6346a034611daba67666ad66a0f428629316d810dad9a72a271fbe/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:09:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2775c6992c6346a034611daba67666ad66a0f428629316d810dad9a72a271fbe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:09:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2775c6992c6346a034611daba67666ad66a0f428629316d810dad9a72a271fbe/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:09:28 compute-0 podman[160050]: 2026-01-20 14:09:28.867966028 +0000 UTC m=+0.249973758 container init 6a5f5e61e42496f840231c356bdb118a87a1b055e9f19d746b394d1600698057 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_zhukovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:09:28 compute-0 podman[160050]: 2026-01-20 14:09:28.878414419 +0000 UTC m=+0.260422129 container start 6a5f5e61e42496f840231c356bdb118a87a1b055e9f19d746b394d1600698057 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_zhukovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 20 14:09:28 compute-0 ovn_metadata_agent[160049]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 20 14:09:28 compute-0 ovn_metadata_agent[160049]: INFO:__main__:Validating config file
Jan 20 14:09:28 compute-0 ovn_metadata_agent[160049]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 20 14:09:28 compute-0 ovn_metadata_agent[160049]: INFO:__main__:Copying service configuration files
Jan 20 14:09:28 compute-0 ovn_metadata_agent[160049]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Jan 20 14:09:28 compute-0 podman[160050]: 2026-01-20 14:09:28.881662506 +0000 UTC m=+0.263670226 container attach 6a5f5e61e42496f840231c356bdb118a87a1b055e9f19d746b394d1600698057 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_zhukovsky, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:09:28 compute-0 ovn_metadata_agent[160049]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Jan 20 14:09:28 compute-0 ovn_metadata_agent[160049]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Jan 20 14:09:28 compute-0 ovn_metadata_agent[160049]: INFO:__main__:Writing out command to execute
Jan 20 14:09:28 compute-0 ovn_metadata_agent[160049]: INFO:__main__:Setting permission for /var/lib/neutron
Jan 20 14:09:28 compute-0 ovn_metadata_agent[160049]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Jan 20 14:09:28 compute-0 ovn_metadata_agent[160049]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Jan 20 14:09:28 compute-0 ovn_metadata_agent[160049]: INFO:__main__:Setting permission for /var/lib/neutron/external
Jan 20 14:09:28 compute-0 ovn_metadata_agent[160049]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Jan 20 14:09:28 compute-0 ovn_metadata_agent[160049]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Jan 20 14:09:28 compute-0 ovn_metadata_agent[160049]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Jan 20 14:09:28 compute-0 podman[160077]: 2026-01-20 14:09:28.884993625 +0000 UTC m=+0.065149487 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202)
Jan 20 14:09:28 compute-0 edpm-start-podman-container[160022]: Creating additional drop-in dependency for "ovn_metadata_agent" (ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2)
Jan 20 14:09:28 compute-0 ovn_metadata_agent[160049]: ++ cat /run_command
Jan 20 14:09:28 compute-0 ovn_metadata_agent[160049]: + CMD=neutron-ovn-metadata-agent
Jan 20 14:09:28 compute-0 ovn_metadata_agent[160049]: + ARGS=
Jan 20 14:09:28 compute-0 ovn_metadata_agent[160049]: + sudo kolla_copy_cacerts
Jan 20 14:09:28 compute-0 systemd[1]: Reloading.
Jan 20 14:09:28 compute-0 ovn_metadata_agent[160049]: + [[ ! -n '' ]]
Jan 20 14:09:28 compute-0 ovn_metadata_agent[160049]: + . kolla_extend_start
Jan 20 14:09:28 compute-0 ovn_metadata_agent[160049]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Jan 20 14:09:28 compute-0 ovn_metadata_agent[160049]: Running command: 'neutron-ovn-metadata-agent'
Jan 20 14:09:28 compute-0 ovn_metadata_agent[160049]: + umask 0022
Jan 20 14:09:28 compute-0 ovn_metadata_agent[160049]: + exec neutron-ovn-metadata-agent
Jan 20 14:09:28 compute-0 systemd-rc-local-generator[160142]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:09:28 compute-0 systemd-sysv-generator[160145]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:09:29 compute-0 systemd[1]: Started ovn_metadata_agent container.
Jan 20 14:09:29 compute-0 sudo[159940]: pam_unix(sudo:session): session closed for user root
Jan 20 14:09:29 compute-0 ceph-mon[74360]: log_channel(cluster) log [WRN] : Health check failed: Failed to apply 1 service(s): osd.default_drive_group (CEPHADM_APPLY_SPEC_FAIL)
Jan 20 14:09:29 compute-0 ceph-mon[74360]: pgmap v521: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 41 KiB/s rd, 0 B/s wr, 69 op/s
Jan 20 14:09:29 compute-0 ceph-mon[74360]: pgmap v522: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 51 KiB/s rd, 0 B/s wr, 84 op/s
Jan 20 14:09:29 compute-0 ceph-mon[74360]: pgmap v523: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 42 KiB/s rd, 0 B/s wr, 69 op/s
Jan 20 14:09:29 compute-0 ceph-mon[74360]: pgmap v524: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 41 KiB/s rd, 0 B/s wr, 67 op/s
Jan 20 14:09:29 compute-0 ceph-mon[74360]: host compute-0 `cephadm ceph-volume` failed: Cannot decode JSON: 
                                           Traceback (most recent call last):
                                             File "/usr/share/ceph/mgr/cephadm/serve.py", line 1514, in _run_cephadm_json
                                               return json.loads(''.join(out))
                                             File "/lib64/python3.9/json/__init__.py", line 346, in loads
                                               return _default_decoder.decode(s)
                                             File "/lib64/python3.9/json/decoder.py", line 337, in decode
                                               obj, end = self.raw_decode(s, idx=_w(s, 0).end())
                                             File "/lib64/python3.9/json/decoder.py", line 355, in raw_decode
                                               raise JSONDecodeError("Expecting value", s, err.value) from None
                                           json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
Jan 20 14:09:29 compute-0 ceph-mon[74360]: Failed to apply osd.default_drive_group spec DriveGroupSpec.from_json(yaml.safe_load('''service_type: osd
                                           service_id: default_drive_group
                                           service_name: osd.default_drive_group
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           spec:
                                             data_devices:
                                               paths:
                                               - /dev/ceph_vg0/ceph_lv0
                                             filter_logic: AND
                                             objectstore: bluestore
                                           ''')): host compute-0 `cephadm ceph-volume` failed: Cannot decode JSON
                                           Traceback (most recent call last):
                                             File "/usr/share/ceph/mgr/cephadm/serve.py", line 1514, in _run_cephadm_json
                                               return json.loads(''.join(out))
                                             File "/lib64/python3.9/json/__init__.py", line 346, in loads
                                               return _default_decoder.decode(s)
                                             File "/lib64/python3.9/json/decoder.py", line 337, in decode
                                               obj, end = self.raw_decode(s, idx=_w(s, 0).end())
                                             File "/lib64/python3.9/json/decoder.py", line 355, in raw_decode
                                               raise JSONDecodeError("Expecting value", s, err.value) from None
                                           json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
                                           
                                           During handling of the above exception, another exception occurred:
                                           
                                           Traceback (most recent call last):
                                             File "/usr/share/ceph/mgr/cephadm/serve.py", line 577, in _apply_all_services
                                               if self._apply_service(spec):
                                             File "/usr/share/ceph/mgr/cephadm/serve.py", line 696, in _apply_service
                                               self.mgr.osd_service.create_from_spec(cast(DriveGroupSpec, spec))
                                             File "/usr/share/ceph/mgr/cephadm/services/osd.py", line 79, in create_from_spec
                                               ret = self.mgr.wait_async(all_hosts())
                                             File "/usr/share/ceph/mgr/cephadm/module.py", line 735, in wait_async
                                               return self.event_loop.get_result(coro, timeout)
                                             File "/usr/share/ceph/mgr/cephadm/ssh.py", line 64, in get_result
                                               return future.result(timeout)
                                             File "/lib64/python3.9/concurrent/futures/_base.py", line 446, in result
                                               return self.__get_result()
                                             File "/lib64/python3.9/concurrent/futures/_base.py", line 391, in __get_result
                                               raise self._exception
                                             File "/usr/share/ceph/mgr/cephadm/services/osd.py", line 76, in all_hosts
                                               return await gather(*futures)
                                             File "/usr/share/ceph/mgr/cephadm/services/osd.py", line 63, in create_from_spec_one
                                               ret_msg = await self.create_single_host(
                                             File "/usr/share/ceph/mgr/cephadm/services/osd.py", line 98, in create_single_host
                                               return await self.deploy_osd_daemons_for_existing_osds(host, drive_group,
                                             File "/usr/share/ceph/mgr/cephadm/services/osd.py", line 158, in deploy_osd_daemons_for_existing_osds
                                               raw_elems: dict = await CephadmServe(self.mgr)._run_cephadm_json(
                                             File "/usr/share/ceph/mgr/cephadm/serve.py", line 1518, in _run_cephadm_json
                                               raise OrchestratorError(msg)
                                           orchestrator._interface.OrchestratorError: host compute-0 `cephadm ceph-volume` failed: Cannot decode JSON
Jan 20 14:09:29 compute-0 ceph-mon[74360]: pgmap v525: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 37 KiB/s rd, 0 B/s wr, 61 op/s
Jan 20 14:09:29 compute-0 ceph-mon[74360]: Deploying daemon haproxy.rgw.default.compute-1.uyeocq on compute-1
Jan 20 14:09:29 compute-0 ceph-mon[74360]: pgmap v526: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 11 KiB/s rd, 0 B/s wr, 17 op/s
Jan 20 14:09:29 compute-0 ceph-mon[74360]: mon.compute-1 calling monitor election
Jan 20 14:09:29 compute-0 ceph-mon[74360]: mon.compute-2 calling monitor election
Jan 20 14:09:29 compute-0 ceph-mon[74360]: pgmap v527: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 11 KiB/s rd, 0 B/s wr, 17 op/s
Jan 20 14:09:29 compute-0 ceph-mon[74360]: mon.compute-0 calling monitor election
Jan 20 14:09:29 compute-0 ceph-mon[74360]: pgmap v528: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:09:29 compute-0 ceph-mon[74360]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Jan 20 14:09:29 compute-0 ceph-mon[74360]: pgmap v529: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:09:29 compute-0 ceph-mon[74360]: monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Jan 20 14:09:29 compute-0 ceph-mon[74360]: fsmap cephfs:1 {0=cephfs.compute-2.jyxktq=up:active} 2 up:standby
Jan 20 14:09:29 compute-0 ceph-mon[74360]: osdmap e128: 3 total, 3 up, 3 in
Jan 20 14:09:29 compute-0 ceph-mon[74360]: mgrmap e11: compute-0.wookjv(active, since 13m), standbys: compute-2.gunjko, compute-1.oweoeg
Jan 20 14:09:29 compute-0 ceph-mon[74360]: overall HEALTH_OK
Jan 20 14:09:29 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:09:29 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:09:29 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:09:29 compute-0 ceph-mon[74360]: 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Jan 20 14:09:29 compute-0 ceph-mon[74360]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 20 14:09:29 compute-0 ceph-mon[74360]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Jan 20 14:09:29 compute-0 ceph-mon[74360]: Deploying daemon keepalived.rgw.default.compute-1.cevitz on compute-1
Jan 20 14:09:29 compute-0 eager_zhukovsky[160073]: {
Jan 20 14:09:29 compute-0 eager_zhukovsky[160073]:     "afc3740a-ccee-46f8-b343-22648cc89376": {
Jan 20 14:09:29 compute-0 eager_zhukovsky[160073]:         "ceph_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 14:09:29 compute-0 eager_zhukovsky[160073]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 20 14:09:29 compute-0 eager_zhukovsky[160073]:         "osd_id": 0,
Jan 20 14:09:29 compute-0 eager_zhukovsky[160073]:         "osd_uuid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 14:09:29 compute-0 eager_zhukovsky[160073]:         "type": "bluestore"
Jan 20 14:09:29 compute-0 eager_zhukovsky[160073]:     }
Jan 20 14:09:29 compute-0 eager_zhukovsky[160073]: }
Jan 20 14:09:29 compute-0 systemd[1]: libpod-6a5f5e61e42496f840231c356bdb118a87a1b055e9f19d746b394d1600698057.scope: Deactivated successfully.
Jan 20 14:09:29 compute-0 podman[160050]: 2026-01-20 14:09:29.718619761 +0000 UTC m=+1.100627461 container died 6a5f5e61e42496f840231c356bdb118a87a1b055e9f19d746b394d1600698057 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_zhukovsky, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 20 14:09:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-2775c6992c6346a034611daba67666ad66a0f428629316d810dad9a72a271fbe-merged.mount: Deactivated successfully.
Jan 20 14:09:29 compute-0 podman[160050]: 2026-01-20 14:09:29.772506795 +0000 UTC m=+1.154514495 container remove 6a5f5e61e42496f840231c356bdb118a87a1b055e9f19d746b394d1600698057 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_zhukovsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:09:29 compute-0 systemd[1]: libpod-conmon-6a5f5e61e42496f840231c356bdb118a87a1b055e9f19d746b394d1600698057.scope: Deactivated successfully.
Jan 20 14:09:29 compute-0 sudo[159586]: pam_unix(sudo:session): session closed for user root
Jan 20 14:09:29 compute-0 systemd[1]: session-33.scope: Deactivated successfully.
Jan 20 14:09:29 compute-0 systemd[1]: session-33.scope: Consumed 1min 47.581s CPU time.
Jan 20 14:09:29 compute-0 systemd-logind[796]: Removed session 33.
Jan 20 14:09:29 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v530: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 4.0 KiB/s rd, 0 B/s wr, 6 op/s
Jan 20 14:09:30 compute-0 python3.9[160334]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Jan 20 14:09:30 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:09:30 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:09:30 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:09:30.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:09:30 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:09:30 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:09:30 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:09:30.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:09:30 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:09:30 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:09:30 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:09:30.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:09:30 compute-0 ceph-mon[74360]: Health check failed: Failed to apply 1 service(s): osd.default_drive_group (CEPHADM_APPLY_SPEC_FAIL)
Jan 20 14:09:30 compute-0 ceph-mon[74360]: pgmap v530: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 4.0 KiB/s rd, 0 B/s wr, 6 op/s
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.676 160071 INFO neutron.common.config [-] Logging enabled!
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.676 160071 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.677 160071 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.677 160071 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.677 160071 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.677 160071 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.677 160071 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.677 160071 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.677 160071 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.678 160071 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.678 160071 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.678 160071 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.678 160071 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.678 160071 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.678 160071 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.678 160071 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.678 160071 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.678 160071 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.679 160071 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.679 160071 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.679 160071 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.679 160071 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.679 160071 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.679 160071 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.679 160071 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.679 160071 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.679 160071 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.679 160071 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.680 160071 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.680 160071 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.680 160071 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.680 160071 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.680 160071 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.680 160071 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.680 160071 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.681 160071 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.681 160071 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.681 160071 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.681 160071 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.681 160071 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.681 160071 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.681 160071 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.681 160071 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.681 160071 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.682 160071 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.682 160071 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.682 160071 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.682 160071 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.682 160071 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.682 160071 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.682 160071 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.682 160071 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.682 160071 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.682 160071 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.683 160071 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.683 160071 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.683 160071 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.683 160071 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.683 160071 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.683 160071 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.683 160071 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.683 160071 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.683 160071 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.684 160071 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.684 160071 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.684 160071 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.684 160071 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.684 160071 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.684 160071 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.684 160071 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.684 160071 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.684 160071 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.685 160071 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.685 160071 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.685 160071 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.685 160071 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.685 160071 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.685 160071 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.685 160071 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.685 160071 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.685 160071 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.685 160071 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.686 160071 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.686 160071 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.686 160071 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.686 160071 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.686 160071 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.686 160071 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.686 160071 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.686 160071 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.686 160071 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.687 160071 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.687 160071 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.687 160071 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.687 160071 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.687 160071 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.687 160071 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.687 160071 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.687 160071 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.687 160071 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.687 160071 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.687 160071 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.688 160071 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.688 160071 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.688 160071 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.688 160071 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.688 160071 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.688 160071 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.688 160071 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.689 160071 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.689 160071 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.689 160071 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.689 160071 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.689 160071 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.689 160071 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.690 160071 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.690 160071 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.690 160071 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.690 160071 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.690 160071 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.690 160071 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.690 160071 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.690 160071 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.691 160071 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.691 160071 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.691 160071 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.691 160071 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.691 160071 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.691 160071 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.691 160071 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.691 160071 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.691 160071 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.692 160071 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.692 160071 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.692 160071 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.692 160071 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.692 160071 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.692 160071 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.692 160071 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.692 160071 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.692 160071 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.693 160071 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.693 160071 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.693 160071 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.693 160071 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.693 160071 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.693 160071 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.693 160071 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.693 160071 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.693 160071 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.693 160071 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.694 160071 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.694 160071 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.694 160071 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.694 160071 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.694 160071 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.694 160071 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.694 160071 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.694 160071 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.694 160071 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.695 160071 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.695 160071 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.695 160071 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.695 160071 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.695 160071 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.695 160071 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.695 160071 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.695 160071 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.695 160071 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.695 160071 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.696 160071 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.696 160071 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.696 160071 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.696 160071 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.696 160071 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.696 160071 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.696 160071 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.696 160071 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.696 160071 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.697 160071 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.697 160071 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.697 160071 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.697 160071 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.697 160071 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.697 160071 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.697 160071 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.697 160071 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.697 160071 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.698 160071 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.698 160071 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.698 160071 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.698 160071 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.698 160071 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.698 160071 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.698 160071 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.698 160071 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.698 160071 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.698 160071 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.699 160071 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.699 160071 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.699 160071 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.699 160071 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.699 160071 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.699 160071 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.699 160071 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.699 160071 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.699 160071 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.700 160071 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.700 160071 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.700 160071 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.700 160071 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.700 160071 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.700 160071 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.700 160071 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.700 160071 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.700 160071 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.700 160071 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.701 160071 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.701 160071 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.701 160071 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.701 160071 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.701 160071 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.701 160071 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.701 160071 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.701 160071 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.701 160071 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.702 160071 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.702 160071 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.702 160071 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.702 160071 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.702 160071 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.702 160071 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.702 160071 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.702 160071 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.702 160071 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.702 160071 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.703 160071 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.703 160071 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.703 160071 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.703 160071 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.703 160071 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.703 160071 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.703 160071 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.703 160071 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.703 160071 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.704 160071 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.704 160071 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.704 160071 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.704 160071 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.704 160071 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.704 160071 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.704 160071 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.704 160071 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.705 160071 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.705 160071 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.705 160071 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.705 160071 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.705 160071 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.705 160071 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.705 160071 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.705 160071 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.705 160071 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.706 160071 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.706 160071 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.706 160071 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.706 160071 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.706 160071 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.706 160071 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.706 160071 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.706 160071 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.706 160071 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.707 160071 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.707 160071 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.707 160071 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.707 160071 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.707 160071 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.707 160071 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.707 160071 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.707 160071 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.707 160071 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.707 160071 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.708 160071 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.708 160071 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.708 160071 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.708 160071 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.708 160071 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.708 160071 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.708 160071 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.708 160071 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.708 160071 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.709 160071 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.709 160071 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.709 160071 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.709 160071 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.709 160071 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.709 160071 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.709 160071 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.709 160071 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.709 160071 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.710 160071 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.710 160071 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.710 160071 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.710 160071 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.710 160071 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.719 160071 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.720 160071 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.720 160071 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.720 160071 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.720 160071 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.735 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name 367c1a2c-b16a-4828-ab5a-626bb50023b4 (UUID: 367c1a2c-b16a-4828-ab5a-626bb50023b4) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.763 160071 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.764 160071 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.765 160071 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.766 160071 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.768 160071 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.774 160071 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.779 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', '367c1a2c-b16a-4828-ab5a-626bb50023b4'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], external_ids={}, name=367c1a2c-b16a-4828-ab5a-626bb50023b4, nb_cfg_timestamp=1768918059550, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.780 160071 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7f4c274faf40>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.781 160071 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.781 160071 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.781 160071 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.781 160071 INFO oslo_service.service [-] Starting 1 workers
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.785 160071 DEBUG oslo_service.service [-] Started child 160402 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.788 160071 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmp971llhih/privsep.sock']
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.788 160402 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-230108'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.812 160402 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.812 160402 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.812 160402 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.815 160402 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.820 160402 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Jan 20 14:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:30.825 160402 INFO eventlet.wsgi.server [-] (160402) wsgi starting up on http:/var/lib/neutron/metadata_proxy
Jan 20 14:09:31 compute-0 sudo[160489]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rubgssyojhbigisbqfedogvwebtczhtd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918170.7511086-1427-244228840754400/AnsiballZ_stat.py'
Jan 20 14:09:31 compute-0 sudo[160489]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:09:31 compute-0 python3.9[160491]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:09:31 compute-0 sudo[160489]: pam_unix(sudo:session): session closed for user root
Jan 20 14:09:31 compute-0 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Jan 20 14:09:31 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:09:31 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:31.426 160071 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Jan 20 14:09:31 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:31.427 160071 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp971llhih/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Jan 20 14:09:31 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:31.305 160517 INFO oslo.privsep.daemon [-] privsep daemon starting
Jan 20 14:09:31 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:31.308 160517 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Jan 20 14:09:31 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:31.310 160517 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Jan 20 14:09:31 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:31.310 160517 INFO oslo.privsep.daemon [-] privsep daemon running as pid 160517
Jan 20 14:09:31 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:31.430 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[ed352cb1-dc1e-4c12-bbf9-7945db5937cd]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:09:31 compute-0 sudo[160618]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mzzclzmhpurikhxwlnuuhvudybsoieeh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918170.7511086-1427-244228840754400/AnsiballZ_copy.py'
Jan 20 14:09:31 compute-0 sudo[160618]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:09:31 compute-0 python3.9[160620]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1768918170.7511086-1427-244228840754400/.source.yaml _original_basename=.3n8jkuw_ follow=False checksum=cdeb45300f793bd9e5b2caee7d44d83f067a1a60 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:09:31 compute-0 sudo[160618]: pam_unix(sudo:session): session closed for user root
Jan 20 14:09:31 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v531: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 23 KiB/s rd, 0 B/s wr, 39 op/s
Jan 20 14:09:31 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:31.906 160517 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:09:31 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:31.906 160517 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:09:31 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:31.907 160517 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:09:32 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:09:32 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:09:32 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:09:32.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:09:32 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:09:32 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:09:32 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:09:32.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.419 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[ef00aa12-9f10-447c-bdb7-43ec82a6825a]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.422 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=367c1a2c-b16a-4828-ab5a-626bb50023b4, column=external_ids, values=({'neutron:ovn-metadata-id': '2ec1ba55-6bb3-51e6-b5ca-cb042653589c'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.458 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=367c1a2c-b16a-4828-ab5a-626bb50023b4, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.471 160071 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.471 160071 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.471 160071 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.471 160071 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.472 160071 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.472 160071 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.472 160071 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.472 160071 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.473 160071 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.473 160071 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.473 160071 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.473 160071 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.475 160071 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.476 160071 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.476 160071 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.476 160071 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.476 160071 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.477 160071 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.477 160071 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.477 160071 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.477 160071 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.477 160071 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.477 160071 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.478 160071 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.478 160071 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.478 160071 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.478 160071 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.478 160071 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.479 160071 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.479 160071 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.479 160071 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.479 160071 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.479 160071 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.479 160071 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.480 160071 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.480 160071 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.480 160071 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.480 160071 DEBUG oslo_service.service [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.480 160071 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.480 160071 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.481 160071 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.481 160071 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.481 160071 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.481 160071 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.481 160071 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.481 160071 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.481 160071 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.482 160071 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.482 160071 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.482 160071 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.482 160071 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.482 160071 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.482 160071 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.482 160071 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.483 160071 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.483 160071 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.483 160071 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.483 160071 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.483 160071 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.483 160071 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.484 160071 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.484 160071 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.484 160071 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.484 160071 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.484 160071 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.484 160071 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.484 160071 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.485 160071 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.485 160071 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.485 160071 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.485 160071 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.485 160071 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.485 160071 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.485 160071 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.486 160071 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.486 160071 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.486 160071 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.486 160071 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.486 160071 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.486 160071 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.486 160071 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.487 160071 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.487 160071 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.487 160071 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.487 160071 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.487 160071 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.487 160071 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.487 160071 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.487 160071 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.488 160071 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.488 160071 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.488 160071 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.488 160071 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.488 160071 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.488 160071 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.488 160071 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.489 160071 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.489 160071 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.489 160071 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.489 160071 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.489 160071 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.489 160071 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.489 160071 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.490 160071 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.490 160071 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.490 160071 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.490 160071 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.490 160071 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.490 160071 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.490 160071 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.491 160071 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.491 160071 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.491 160071 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.491 160071 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.491 160071 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.491 160071 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.492 160071 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.492 160071 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.492 160071 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.492 160071 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.492 160071 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.492 160071 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.493 160071 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.493 160071 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.493 160071 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.493 160071 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.493 160071 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.493 160071 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.493 160071 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.494 160071 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.494 160071 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.494 160071 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.494 160071 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.494 160071 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.494 160071 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.495 160071 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.495 160071 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.495 160071 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.495 160071 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.495 160071 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.495 160071 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.495 160071 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.496 160071 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.496 160071 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.496 160071 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.496 160071 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.496 160071 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.496 160071 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.496 160071 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.496 160071 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.497 160071 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.497 160071 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.497 160071 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.497 160071 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.497 160071 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.497 160071 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.497 160071 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.498 160071 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.498 160071 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.498 160071 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.498 160071 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.498 160071 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.498 160071 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.498 160071 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.498 160071 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.499 160071 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.499 160071 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.499 160071 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.499 160071 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.499 160071 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.499 160071 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.499 160071 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.499 160071 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.500 160071 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.500 160071 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.500 160071 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.500 160071 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.500 160071 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.500 160071 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.500 160071 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.501 160071 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.501 160071 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.501 160071 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.501 160071 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.501 160071 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.501 160071 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.502 160071 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:09:32 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:09:32.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.502 160071 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.502 160071 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.502 160071 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.502 160071 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.502 160071 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.502 160071 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.503 160071 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.503 160071 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.503 160071 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.503 160071 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.503 160071 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.503 160071 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.503 160071 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.503 160071 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.504 160071 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.504 160071 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.504 160071 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.504 160071 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.504 160071 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.504 160071 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.504 160071 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.504 160071 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.505 160071 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.505 160071 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.505 160071 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.505 160071 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.505 160071 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.505 160071 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.505 160071 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.506 160071 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.506 160071 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.506 160071 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.506 160071 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.506 160071 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.506 160071 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.506 160071 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.507 160071 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.507 160071 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.507 160071 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.507 160071 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.507 160071 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.507 160071 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.507 160071 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.507 160071 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.508 160071 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.508 160071 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.508 160071 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.508 160071 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.508 160071 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.508 160071 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.508 160071 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.508 160071 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.509 160071 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.509 160071 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.509 160071 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.509 160071 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.509 160071 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.509 160071 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.509 160071 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.510 160071 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.510 160071 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.510 160071 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.510 160071 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.510 160071 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.510 160071 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.510 160071 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.511 160071 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.511 160071 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.511 160071 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.511 160071 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.511 160071 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.511 160071 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.511 160071 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.512 160071 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.512 160071 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.512 160071 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.512 160071 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.512 160071 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.512 160071 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.512 160071 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.513 160071 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.513 160071 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.513 160071 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.513 160071 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.513 160071 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.513 160071 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.513 160071 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.514 160071 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.514 160071 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.514 160071 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.514 160071 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.514 160071 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.514 160071 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.514 160071 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.515 160071 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.515 160071 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.515 160071 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.515 160071 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.515 160071 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.515 160071 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.515 160071 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.516 160071 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.516 160071 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.516 160071 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.516 160071 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.516 160071 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.516 160071 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.516 160071 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.517 160071 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.517 160071 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.517 160071 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.517 160071 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.517 160071 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.517 160071 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.517 160071 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.518 160071 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:09:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:09:32.518 160071 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Jan 20 14:09:32 compute-0 sshd-session[150620]: Connection closed by 192.168.122.30 port 40328
Jan 20 14:09:32 compute-0 sshd-session[150617]: pam_unix(sshd:session): session closed for user zuul
Jan 20 14:09:32 compute-0 systemd[1]: session-48.scope: Deactivated successfully.
Jan 20 14:09:32 compute-0 systemd[1]: session-48.scope: Consumed 55.843s CPU time.
Jan 20 14:09:32 compute-0 systemd-logind[796]: Session 48 logged out. Waiting for processes to exit.
Jan 20 14:09:32 compute-0 systemd-logind[796]: Removed session 48.
Jan 20 14:09:32 compute-0 ceph-mon[74360]: pgmap v531: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 23 KiB/s rd, 0 B/s wr, 39 op/s
Jan 20 14:09:33 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 20 14:09:33 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:09:33 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 20 14:09:33 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:09:33 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0) v1
Jan 20 14:09:33 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:09:33 compute-0 ceph-mgr[74653]: [progress INFO root] complete: finished ev 69a3da18-3b4f-413c-a14c-5d30530545e3 (Updating ingress.rgw.default deployment (+2 -> 4))
Jan 20 14:09:33 compute-0 ceph-mgr[74653]: [progress INFO root] Completed event 69a3da18-3b4f-413c-a14c-5d30530545e3 (Updating ingress.rgw.default deployment (+2 -> 4)) in 14 seconds
Jan 20 14:09:33 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0) v1
Jan 20 14:09:33 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:09:33 compute-0 ceph-mgr[74653]: [progress INFO root] Writing back 21 completed events
Jan 20 14:09:33 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Jan 20 14:09:33 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:09:33 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v532: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 32 KiB/s rd, 0 B/s wr, 53 op/s
Jan 20 14:09:34 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:09:34 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:09:34 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:09:34.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:09:34 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:09:34 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:09:34 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:09:34.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:09:34 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:09:34 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:09:34 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:09:34 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:09:34 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:09:34 compute-0 ceph-mon[74360]: pgmap v532: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 32 KiB/s rd, 0 B/s wr, 53 op/s
Jan 20 14:09:34 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:09:34 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:09:34 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:09:34.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:09:34 compute-0 podman[160648]: 2026-01-20 14:09:34.507434403 +0000 UTC m=+0.102740805 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_id=ovn_controller)
Jan 20 14:09:35 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v533: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 32 KiB/s rd, 0 B/s wr, 53 op/s
Jan 20 14:09:36 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 20 14:09:36 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:09:36 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:09:36 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:09:36.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:09:36 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:09:36 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:09:36 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:09:36.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:09:36 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:09:36 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:09:36 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:09:36 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:09:36.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:09:36 compute-0 ceph-mon[74360]: pgmap v533: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 32 KiB/s rd, 0 B/s wr, 53 op/s
Jan 20 14:09:36 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:09:36 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 20 14:09:36 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:09:36 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 14:09:36 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:09:36 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 20 14:09:36 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:09:36 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 20 14:09:36 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:09:36 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v534: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 20 14:09:36 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev f7fae192-9ba1-4504-80a7-f5d7c140fe5f does not exist
Jan 20 14:09:36 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 91045cc8-5417-4b99-9bbe-48beb8cae276 does not exist
Jan 20 14:09:36 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 6f52a332-c41c-44e4-bd5c-1c7d41dddd71 does not exist
Jan 20 14:09:36 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 20 14:09:36 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:09:36 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 20 14:09:36 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:09:36 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 14:09:36 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:09:36 compute-0 sshd-session[160675]: Accepted publickey for ceph-admin from 192.168.122.100 port 34980 ssh2: RSA SHA256:eqrJ6T+GYkPtbx0jSDommFb6YAfLVXAEDWraZDSNLSE
Jan 20 14:09:36 compute-0 systemd-logind[796]: New session 49 of user ceph-admin.
Jan 20 14:09:36 compute-0 systemd[1]: Started Session 49 of User ceph-admin.
Jan 20 14:09:36 compute-0 sshd-session[160675]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 20 14:09:37 compute-0 sudo[160679]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:09:37 compute-0 sudo[160679]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:09:37 compute-0 sudo[160679]: pam_unix(sudo:session): session closed for user root
Jan 20 14:09:37 compute-0 sudo[160704]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:09:37 compute-0 sudo[160704]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:09:37 compute-0 sudo[160704]: pam_unix(sudo:session): session closed for user root
Jan 20 14:09:37 compute-0 sudo[160729]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:09:37 compute-0 sudo[160729]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:09:37 compute-0 sudo[160729]: pam_unix(sudo:session): session closed for user root
Jan 20 14:09:37 compute-0 sudo[160754]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 14:09:37 compute-0 sudo[160754]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:09:37 compute-0 podman[160819]: 2026-01-20 14:09:37.592624737 +0000 UTC m=+0.047878023 container create ef747a83bb79d50feb8a936aa450056460894b4e42f20e07164738208e4fc78f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_sammet, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS)
Jan 20 14:09:37 compute-0 systemd[1]: Started libpod-conmon-ef747a83bb79d50feb8a936aa450056460894b4e42f20e07164738208e4fc78f.scope.
Jan 20 14:09:37 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:09:37 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:09:37 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:09:37 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:09:37 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:09:37 compute-0 ceph-mon[74360]: pgmap v534: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 20 14:09:37 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:09:37 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:09:37 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:09:37 compute-0 podman[160819]: 2026-01-20 14:09:37.571614264 +0000 UTC m=+0.026867560 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:09:37 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:09:37 compute-0 ceph-mon[74360]: log_channel(cluster) log [INF] : Health check cleared: CEPHADM_APPLY_SPEC_FAIL (was: Failed to apply 1 service(s): osd.default_drive_group)
Jan 20 14:09:37 compute-0 ceph-mon[74360]: log_channel(cluster) log [INF] : Cluster is now healthy
Jan 20 14:09:37 compute-0 podman[160819]: 2026-01-20 14:09:37.690254524 +0000 UTC m=+0.145507810 container init ef747a83bb79d50feb8a936aa450056460894b4e42f20e07164738208e4fc78f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_sammet, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:09:37 compute-0 podman[160819]: 2026-01-20 14:09:37.696912152 +0000 UTC m=+0.152165398 container start ef747a83bb79d50feb8a936aa450056460894b4e42f20e07164738208e4fc78f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_sammet, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 20 14:09:37 compute-0 podman[160819]: 2026-01-20 14:09:37.699694487 +0000 UTC m=+0.154947753 container attach ef747a83bb79d50feb8a936aa450056460894b4e42f20e07164738208e4fc78f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_sammet, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 20 14:09:37 compute-0 elated_sammet[160835]: 167 167
Jan 20 14:09:37 compute-0 systemd[1]: libpod-ef747a83bb79d50feb8a936aa450056460894b4e42f20e07164738208e4fc78f.scope: Deactivated successfully.
Jan 20 14:09:37 compute-0 podman[160819]: 2026-01-20 14:09:37.703659023 +0000 UTC m=+0.158912279 container died ef747a83bb79d50feb8a936aa450056460894b4e42f20e07164738208e4fc78f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_sammet, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:09:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-a7d04a5cc292fbefa6c65f7c4d3e99bc335969b6b14d5c94da24dd40209f59fe-merged.mount: Deactivated successfully.
Jan 20 14:09:37 compute-0 podman[160819]: 2026-01-20 14:09:37.752745458 +0000 UTC m=+0.207998744 container remove ef747a83bb79d50feb8a936aa450056460894b4e42f20e07164738208e4fc78f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_sammet, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 20 14:09:37 compute-0 systemd[1]: libpod-conmon-ef747a83bb79d50feb8a936aa450056460894b4e42f20e07164738208e4fc78f.scope: Deactivated successfully.
Jan 20 14:09:37 compute-0 podman[160859]: 2026-01-20 14:09:37.965577841 +0000 UTC m=+0.047203186 container create d94fbadd51eceece71eb5c7c0fccb090af8e564f68273d6327a86ab2f2c435c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_banach, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 20 14:09:37 compute-0 systemd[1]: Started libpod-conmon-d94fbadd51eceece71eb5c7c0fccb090af8e564f68273d6327a86ab2f2c435c7.scope.
Jan 20 14:09:38 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:09:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e8f3efe37e355272baab1c7afb064c6ca2b003f3f15ea8723ba8732d1263682/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:09:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e8f3efe37e355272baab1c7afb064c6ca2b003f3f15ea8723ba8732d1263682/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:09:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e8f3efe37e355272baab1c7afb064c6ca2b003f3f15ea8723ba8732d1263682/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:09:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e8f3efe37e355272baab1c7afb064c6ca2b003f3f15ea8723ba8732d1263682/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:09:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e8f3efe37e355272baab1c7afb064c6ca2b003f3f15ea8723ba8732d1263682/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 14:09:38 compute-0 podman[160859]: 2026-01-20 14:09:37.945089492 +0000 UTC m=+0.026714887 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:09:38 compute-0 podman[160859]: 2026-01-20 14:09:38.047193048 +0000 UTC m=+0.128818443 container init d94fbadd51eceece71eb5c7c0fccb090af8e564f68273d6327a86ab2f2c435c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_banach, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 20 14:09:38 compute-0 podman[160859]: 2026-01-20 14:09:38.056583069 +0000 UTC m=+0.138208414 container start d94fbadd51eceece71eb5c7c0fccb090af8e564f68273d6327a86ab2f2c435c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_banach, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 20 14:09:38 compute-0 podman[160859]: 2026-01-20 14:09:38.060309879 +0000 UTC m=+0.141935274 container attach d94fbadd51eceece71eb5c7c0fccb090af8e564f68273d6327a86ab2f2c435c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_banach, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 20 14:09:38 compute-0 sshd-session[160878]: Accepted publickey for zuul from 192.168.122.30 port 58580 ssh2: ECDSA SHA256:Yw0kyD5N4lqNgr1J3b5cYIIxKFrTRY8zW6kk+n6imz4
Jan 20 14:09:38 compute-0 systemd-logind[796]: New session 50 of user zuul.
Jan 20 14:09:38 compute-0 systemd[1]: Started Session 50 of User zuul.
Jan 20 14:09:38 compute-0 sshd-session[160878]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 20 14:09:38 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:09:38 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:09:38 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:09:38.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:09:38 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:09:38 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:09:38 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:09:38.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:09:38 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:09:38 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:09:38 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:09:38.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:09:38 compute-0 ceph-mon[74360]: Health check cleared: CEPHADM_APPLY_SPEC_FAIL (was: Failed to apply 1 service(s): osd.default_drive_group)
Jan 20 14:09:38 compute-0 ceph-mon[74360]: Cluster is now healthy
Jan 20 14:09:38 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v535: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 20 14:09:38 compute-0 serene_banach[160875]: --> passed data devices: 0 physical, 1 LVM
Jan 20 14:09:38 compute-0 serene_banach[160875]: --> relative data size: 1.0
Jan 20 14:09:38 compute-0 serene_banach[160875]: --> All data devices are unavailable
Jan 20 14:09:38 compute-0 systemd[1]: libpod-d94fbadd51eceece71eb5c7c0fccb090af8e564f68273d6327a86ab2f2c435c7.scope: Deactivated successfully.
Jan 20 14:09:38 compute-0 podman[160859]: 2026-01-20 14:09:38.795979331 +0000 UTC m=+0.877604686 container died d94fbadd51eceece71eb5c7c0fccb090af8e564f68273d6327a86ab2f2c435c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_banach, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:09:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-0e8f3efe37e355272baab1c7afb064c6ca2b003f3f15ea8723ba8732d1263682-merged.mount: Deactivated successfully.
Jan 20 14:09:38 compute-0 podman[160859]: 2026-01-20 14:09:38.855219138 +0000 UTC m=+0.936844483 container remove d94fbadd51eceece71eb5c7c0fccb090af8e564f68273d6327a86ab2f2c435c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_banach, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 20 14:09:38 compute-0 systemd[1]: libpod-conmon-d94fbadd51eceece71eb5c7c0fccb090af8e564f68273d6327a86ab2f2c435c7.scope: Deactivated successfully.
Jan 20 14:09:38 compute-0 sudo[160754]: pam_unix(sudo:session): session closed for user root
Jan 20 14:09:38 compute-0 sudo[161007]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:09:38 compute-0 sudo[161007]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:09:38 compute-0 sudo[161007]: pam_unix(sudo:session): session closed for user root
Jan 20 14:09:39 compute-0 sudo[161056]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:09:39 compute-0 sudo[161056]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:09:39 compute-0 sudo[161056]: pam_unix(sudo:session): session closed for user root
Jan 20 14:09:39 compute-0 sudo[161108]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:09:39 compute-0 sudo[161108]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:09:39 compute-0 sudo[161108]: pam_unix(sudo:session): session closed for user root
Jan 20 14:09:39 compute-0 sudo[161133]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- lvm list --format json
Jan 20 14:09:39 compute-0 sudo[161133]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:09:39 compute-0 python3.9[161106]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 14:09:39 compute-0 podman[161201]: 2026-01-20 14:09:39.534756335 +0000 UTC m=+0.068013693 container create 2ac113e09d4c4cac89697caab8c79d9be9b1241391a495c02550703ddfda44ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_panini, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:09:39 compute-0 systemd[1]: Started libpod-conmon-2ac113e09d4c4cac89697caab8c79d9be9b1241391a495c02550703ddfda44ef.scope.
Jan 20 14:09:39 compute-0 podman[161201]: 2026-01-20 14:09:39.507616848 +0000 UTC m=+0.040874276 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:09:39 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:09:39 compute-0 podman[161201]: 2026-01-20 14:09:39.649990143 +0000 UTC m=+0.183247571 container init 2ac113e09d4c4cac89697caab8c79d9be9b1241391a495c02550703ddfda44ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_panini, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 20 14:09:39 compute-0 podman[161201]: 2026-01-20 14:09:39.660889504 +0000 UTC m=+0.194146882 container start 2ac113e09d4c4cac89697caab8c79d9be9b1241391a495c02550703ddfda44ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_panini, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:09:39 compute-0 hopeful_panini[161242]: 167 167
Jan 20 14:09:39 compute-0 systemd[1]: libpod-2ac113e09d4c4cac89697caab8c79d9be9b1241391a495c02550703ddfda44ef.scope: Deactivated successfully.
Jan 20 14:09:39 compute-0 podman[161201]: 2026-01-20 14:09:39.666959558 +0000 UTC m=+0.200216946 container attach 2ac113e09d4c4cac89697caab8c79d9be9b1241391a495c02550703ddfda44ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_panini, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True)
Jan 20 14:09:39 compute-0 podman[161201]: 2026-01-20 14:09:39.667584964 +0000 UTC m=+0.200842382 container died 2ac113e09d4c4cac89697caab8c79d9be9b1241391a495c02550703ddfda44ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_panini, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507)
Jan 20 14:09:39 compute-0 ceph-mon[74360]: pgmap v535: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 20 14:09:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-a685d288a67ecdb3f65093fe12089f9116b9d94ead57f503c70b2bc22a8f97ca-merged.mount: Deactivated successfully.
Jan 20 14:09:39 compute-0 podman[161201]: 2026-01-20 14:09:39.71671651 +0000 UTC m=+0.249973888 container remove 2ac113e09d4c4cac89697caab8c79d9be9b1241391a495c02550703ddfda44ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_panini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 20 14:09:39 compute-0 systemd[1]: libpod-conmon-2ac113e09d4c4cac89697caab8c79d9be9b1241391a495c02550703ddfda44ef.scope: Deactivated successfully.
Jan 20 14:09:39 compute-0 podman[161268]: 2026-01-20 14:09:39.906454655 +0000 UTC m=+0.050493234 container create 9e66413c52689d38dd4a1ea983fbbe2ff3a315bf7006a8ac071d6ef75548839d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_panini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:09:39 compute-0 systemd[1]: Started libpod-conmon-9e66413c52689d38dd4a1ea983fbbe2ff3a315bf7006a8ac071d6ef75548839d.scope.
Jan 20 14:09:39 compute-0 podman[161268]: 2026-01-20 14:09:39.88239424 +0000 UTC m=+0.026432799 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:09:39 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:09:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94a79eb49679ce84ca7a5120ff86a33f4c6cc49660b2341ed7bdc97196c7965c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:09:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94a79eb49679ce84ca7a5120ff86a33f4c6cc49660b2341ed7bdc97196c7965c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:09:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94a79eb49679ce84ca7a5120ff86a33f4c6cc49660b2341ed7bdc97196c7965c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:09:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94a79eb49679ce84ca7a5120ff86a33f4c6cc49660b2341ed7bdc97196c7965c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:09:40 compute-0 podman[161268]: 2026-01-20 14:09:40.015461165 +0000 UTC m=+0.159499784 container init 9e66413c52689d38dd4a1ea983fbbe2ff3a315bf7006a8ac071d6ef75548839d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_panini, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 20 14:09:40 compute-0 podman[161268]: 2026-01-20 14:09:40.028156205 +0000 UTC m=+0.172194784 container start 9e66413c52689d38dd4a1ea983fbbe2ff3a315bf7006a8ac071d6ef75548839d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_panini, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 20 14:09:40 compute-0 podman[161268]: 2026-01-20 14:09:40.032298286 +0000 UTC m=+0.176336875 container attach 9e66413c52689d38dd4a1ea983fbbe2ff3a315bf7006a8ac071d6ef75548839d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_panini, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 20 14:09:40 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:09:40 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:09:40 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:09:40.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:09:40 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:09:40 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:09:40 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:09:40.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:09:40 compute-0 sudo[161414]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fsxwyqocspkzixaslspdkjwopocqfbdk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918179.9084082-62-201261386047388/AnsiballZ_command.py'
Jan 20 14:09:40 compute-0 sudo[161414]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:09:40 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:09:40 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:09:40 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:09:40.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:09:40 compute-0 python3.9[161416]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:09:40 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v536: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 32 KiB/s rd, 0 B/s wr, 53 op/s
Jan 20 14:09:40 compute-0 sudo[161414]: pam_unix(sudo:session): session closed for user root
Jan 20 14:09:40 compute-0 nice_panini[161328]: {
Jan 20 14:09:40 compute-0 nice_panini[161328]:     "0": [
Jan 20 14:09:40 compute-0 nice_panini[161328]:         {
Jan 20 14:09:40 compute-0 nice_panini[161328]:             "devices": [
Jan 20 14:09:40 compute-0 nice_panini[161328]:                 "/dev/loop3"
Jan 20 14:09:40 compute-0 nice_panini[161328]:             ],
Jan 20 14:09:40 compute-0 nice_panini[161328]:             "lv_name": "ceph_lv0",
Jan 20 14:09:40 compute-0 nice_panini[161328]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:09:40 compute-0 nice_panini[161328]:             "lv_size": "7511998464",
Jan 20 14:09:40 compute-0 nice_panini[161328]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e399cf45-e6b6-5393-99f1-75c601d3f188,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afc3740a-ccee-46f8-b343-22648cc89376,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 20 14:09:40 compute-0 nice_panini[161328]:             "lv_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 14:09:40 compute-0 nice_panini[161328]:             "name": "ceph_lv0",
Jan 20 14:09:40 compute-0 nice_panini[161328]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:09:40 compute-0 nice_panini[161328]:             "tags": {
Jan 20 14:09:40 compute-0 nice_panini[161328]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:09:40 compute-0 nice_panini[161328]:                 "ceph.block_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 14:09:40 compute-0 nice_panini[161328]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 14:09:40 compute-0 nice_panini[161328]:                 "ceph.cluster_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 14:09:40 compute-0 nice_panini[161328]:                 "ceph.cluster_name": "ceph",
Jan 20 14:09:40 compute-0 nice_panini[161328]:                 "ceph.crush_device_class": "",
Jan 20 14:09:40 compute-0 nice_panini[161328]:                 "ceph.encrypted": "0",
Jan 20 14:09:40 compute-0 nice_panini[161328]:                 "ceph.osd_fsid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 14:09:40 compute-0 nice_panini[161328]:                 "ceph.osd_id": "0",
Jan 20 14:09:40 compute-0 nice_panini[161328]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 14:09:40 compute-0 nice_panini[161328]:                 "ceph.type": "block",
Jan 20 14:09:40 compute-0 nice_panini[161328]:                 "ceph.vdo": "0"
Jan 20 14:09:40 compute-0 nice_panini[161328]:             },
Jan 20 14:09:40 compute-0 nice_panini[161328]:             "type": "block",
Jan 20 14:09:40 compute-0 nice_panini[161328]:             "vg_name": "ceph_vg0"
Jan 20 14:09:40 compute-0 nice_panini[161328]:         }
Jan 20 14:09:40 compute-0 nice_panini[161328]:     ]
Jan 20 14:09:40 compute-0 nice_panini[161328]: }
Jan 20 14:09:40 compute-0 systemd[1]: libpod-9e66413c52689d38dd4a1ea983fbbe2ff3a315bf7006a8ac071d6ef75548839d.scope: Deactivated successfully.
Jan 20 14:09:40 compute-0 podman[161268]: 2026-01-20 14:09:40.775310284 +0000 UTC m=+0.919348823 container died 9e66413c52689d38dd4a1ea983fbbe2ff3a315bf7006a8ac071d6ef75548839d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_panini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 20 14:09:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-94a79eb49679ce84ca7a5120ff86a33f4c6cc49660b2341ed7bdc97196c7965c-merged.mount: Deactivated successfully.
Jan 20 14:09:40 compute-0 podman[161268]: 2026-01-20 14:09:40.829102896 +0000 UTC m=+0.973141435 container remove 9e66413c52689d38dd4a1ea983fbbe2ff3a315bf7006a8ac071d6ef75548839d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_panini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:09:40 compute-0 systemd[1]: libpod-conmon-9e66413c52689d38dd4a1ea983fbbe2ff3a315bf7006a8ac071d6ef75548839d.scope: Deactivated successfully.
Jan 20 14:09:40 compute-0 sudo[161133]: pam_unix(sudo:session): session closed for user root
Jan 20 14:09:40 compute-0 sudo[161469]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:09:40 compute-0 sudo[161469]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:09:40 compute-0 sudo[161469]: pam_unix(sudo:session): session closed for user root
Jan 20 14:09:40 compute-0 sudo[161494]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:09:40 compute-0 sudo[161494]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:09:40 compute-0 sudo[161494]: pam_unix(sudo:session): session closed for user root
Jan 20 14:09:41 compute-0 sudo[161519]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:09:41 compute-0 sudo[161519]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:09:41 compute-0 sudo[161519]: pam_unix(sudo:session): session closed for user root
Jan 20 14:09:41 compute-0 sudo[161544]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- raw list --format json
Jan 20 14:09:41 compute-0 sudo[161544]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:09:41 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:09:41 compute-0 podman[161663]: 2026-01-20 14:09:41.462551348 +0000 UTC m=+0.024942010 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:09:41 compute-0 podman[161663]: 2026-01-20 14:09:41.626338476 +0000 UTC m=+0.188729118 container create 4d4c27dbd27531519ca3d8e1747d2dc8b8f053510fef31f4a6477050cfaf2903 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_dhawan, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:09:41 compute-0 systemd[1]: Started libpod-conmon-4d4c27dbd27531519ca3d8e1747d2dc8b8f053510fef31f4a6477050cfaf2903.scope.
Jan 20 14:09:41 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:09:41 compute-0 podman[161663]: 2026-01-20 14:09:41.708299523 +0000 UTC m=+0.270690185 container init 4d4c27dbd27531519ca3d8e1747d2dc8b8f053510fef31f4a6477050cfaf2903 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_dhawan, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 20 14:09:41 compute-0 podman[161663]: 2026-01-20 14:09:41.71456517 +0000 UTC m=+0.276955802 container start 4d4c27dbd27531519ca3d8e1747d2dc8b8f053510fef31f4a6477050cfaf2903 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_dhawan, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:09:41 compute-0 systemd[1]: libpod-4d4c27dbd27531519ca3d8e1747d2dc8b8f053510fef31f4a6477050cfaf2903.scope: Deactivated successfully.
Jan 20 14:09:41 compute-0 podman[161663]: 2026-01-20 14:09:41.718022013 +0000 UTC m=+0.280412645 container attach 4d4c27dbd27531519ca3d8e1747d2dc8b8f053510fef31f4a6477050cfaf2903 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_dhawan, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 20 14:09:41 compute-0 wonderful_dhawan[161703]: 167 167
Jan 20 14:09:41 compute-0 conmon[161703]: conmon 4d4c27dbd27531519ca3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4d4c27dbd27531519ca3d8e1747d2dc8b8f053510fef31f4a6477050cfaf2903.scope/container/memory.events
Jan 20 14:09:41 compute-0 podman[161663]: 2026-01-20 14:09:41.719084241 +0000 UTC m=+0.281474863 container died 4d4c27dbd27531519ca3d8e1747d2dc8b8f053510fef31f4a6477050cfaf2903 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_dhawan, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:09:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-25ea573fa65e567bd23446b1c83a508f1e0fde3b02a02fd40c8c8577b179ad5b-merged.mount: Deactivated successfully.
Jan 20 14:09:41 compute-0 podman[161663]: 2026-01-20 14:09:41.753879663 +0000 UTC m=+0.316270295 container remove 4d4c27dbd27531519ca3d8e1747d2dc8b8f053510fef31f4a6477050cfaf2903 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_dhawan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:09:41 compute-0 ceph-mon[74360]: pgmap v536: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 32 KiB/s rd, 0 B/s wr, 53 op/s
Jan 20 14:09:41 compute-0 systemd[1]: libpod-conmon-4d4c27dbd27531519ca3d8e1747d2dc8b8f053510fef31f4a6477050cfaf2903.scope: Deactivated successfully.
Jan 20 14:09:41 compute-0 sudo[161771]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pjnqngkvuzjqqsvzksbmyutbmbgnbejb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918181.219425-95-215134382886963/AnsiballZ_systemd_service.py'
Jan 20 14:09:41 compute-0 sudo[161771]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:09:41 compute-0 podman[161779]: 2026-01-20 14:09:41.902816635 +0000 UTC m=+0.025157075 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:09:42 compute-0 python3.9[161773]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 20 14:09:42 compute-0 systemd[1]: Reloading.
Jan 20 14:09:42 compute-0 systemd-sysv-generator[161816]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:09:42 compute-0 systemd-rc-local-generator[161812]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:09:42 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:09:42 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:09:42 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:09:42.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:09:42 compute-0 podman[161779]: 2026-01-20 14:09:42.18633532 +0000 UTC m=+0.308675750 container create e984f2f5a5d2cc13d2f1308a9f8e8622fcaa594ce5df0b4dd71eb8094482bc09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_lovelace, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:09:42 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:09:42 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:09:42 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:09:42.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:09:42 compute-0 sudo[161771]: pam_unix(sudo:session): session closed for user root
Jan 20 14:09:42 compute-0 systemd[1]: Started libpod-conmon-e984f2f5a5d2cc13d2f1308a9f8e8622fcaa594ce5df0b4dd71eb8094482bc09.scope.
Jan 20 14:09:42 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:09:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d16e99c0ac07d8b9acfaf870aa2db4d8a08e177fedb97ca81add7c319515f2f5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:09:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d16e99c0ac07d8b9acfaf870aa2db4d8a08e177fedb97ca81add7c319515f2f5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:09:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d16e99c0ac07d8b9acfaf870aa2db4d8a08e177fedb97ca81add7c319515f2f5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:09:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d16e99c0ac07d8b9acfaf870aa2db4d8a08e177fedb97ca81add7c319515f2f5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:09:42 compute-0 podman[161779]: 2026-01-20 14:09:42.435684441 +0000 UTC m=+0.558024871 container init e984f2f5a5d2cc13d2f1308a9f8e8622fcaa594ce5df0b4dd71eb8094482bc09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_lovelace, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:09:42 compute-0 podman[161779]: 2026-01-20 14:09:42.445179956 +0000 UTC m=+0.567520366 container start e984f2f5a5d2cc13d2f1308a9f8e8622fcaa594ce5df0b4dd71eb8094482bc09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_lovelace, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 20 14:09:42 compute-0 podman[161779]: 2026-01-20 14:09:42.448859715 +0000 UTC m=+0.571200155 container attach e984f2f5a5d2cc13d2f1308a9f8e8622fcaa594ce5df0b4dd71eb8094482bc09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_lovelace, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:09:42 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:09:42 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:09:42 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:09:42.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:09:42 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v537: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 9.8 KiB/s rd, 0 B/s wr, 16 op/s
Jan 20 14:09:43 compute-0 practical_lovelace[161831]: {
Jan 20 14:09:43 compute-0 practical_lovelace[161831]:     "afc3740a-ccee-46f8-b343-22648cc89376": {
Jan 20 14:09:43 compute-0 practical_lovelace[161831]:         "ceph_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 14:09:43 compute-0 practical_lovelace[161831]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 20 14:09:43 compute-0 practical_lovelace[161831]:         "osd_id": 0,
Jan 20 14:09:43 compute-0 practical_lovelace[161831]:         "osd_uuid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 14:09:43 compute-0 practical_lovelace[161831]:         "type": "bluestore"
Jan 20 14:09:43 compute-0 practical_lovelace[161831]:     }
Jan 20 14:09:43 compute-0 practical_lovelace[161831]: }
Jan 20 14:09:43 compute-0 systemd[1]: libpod-e984f2f5a5d2cc13d2f1308a9f8e8622fcaa594ce5df0b4dd71eb8094482bc09.scope: Deactivated successfully.
Jan 20 14:09:43 compute-0 podman[161779]: 2026-01-20 14:09:43.267526409 +0000 UTC m=+1.389866829 container died e984f2f5a5d2cc13d2f1308a9f8e8622fcaa594ce5df0b4dd71eb8094482bc09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_lovelace, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:09:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-d16e99c0ac07d8b9acfaf870aa2db4d8a08e177fedb97ca81add7c319515f2f5-merged.mount: Deactivated successfully.
Jan 20 14:09:43 compute-0 podman[161779]: 2026-01-20 14:09:43.345613572 +0000 UTC m=+1.467953992 container remove e984f2f5a5d2cc13d2f1308a9f8e8622fcaa594ce5df0b4dd71eb8094482bc09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_lovelace, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 20 14:09:43 compute-0 python3.9[161996]: ansible-ansible.builtin.service_facts Invoked
Jan 20 14:09:43 compute-0 systemd[1]: libpod-conmon-e984f2f5a5d2cc13d2f1308a9f8e8622fcaa594ce5df0b4dd71eb8094482bc09.scope: Deactivated successfully.
Jan 20 14:09:43 compute-0 sudo[161544]: pam_unix(sudo:session): session closed for user root
Jan 20 14:09:43 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 14:09:43 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:09:43 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 14:09:43 compute-0 network[162030]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 20 14:09:43 compute-0 network[162031]: 'network-scripts' will be removed from distribution in near future.
Jan 20 14:09:43 compute-0 network[162032]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 20 14:09:43 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:09:43 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev d3d4b85e-a28b-4a78-a07f-282da36c9846 does not exist
Jan 20 14:09:43 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev dafc5ae4-24db-4821-a15e-e240b1810702 does not exist
Jan 20 14:09:43 compute-0 ceph-mgr[74653]: [progress INFO root] update: starting ev 772195df-83e1-4544-89d2-7909730a4318 (Updating ingress.rgw.default deployment (-2 -> 4))
Jan 20 14:09:43 compute-0 ceph-mgr[74653]: [cephadm INFO cephadm.serve] Removing daemon haproxy.rgw.default.compute-2.cuokcs from compute-2 -- ports [8080, 8999]
Jan 20 14:09:43 compute-0 ceph-mgr[74653]: log_channel(cephadm) log [INF] : Removing daemon haproxy.rgw.default.compute-2.cuokcs from compute-2 -- ports [8080, 8999]
Jan 20 14:09:43 compute-0 ceph-mon[74360]: pgmap v537: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 9.8 KiB/s rd, 0 B/s wr, 16 op/s
Jan 20 14:09:43 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:09:43 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:09:44 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:09:44 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:09:44 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.102 - anonymous [20/Jan/2026:14:09:44.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:09:44 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:09:44 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:09:44 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:09:44.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:09:44 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:09:44 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:09:44 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:09:44.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:09:44 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v538: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:09:44 compute-0 ceph-mon[74360]: Removing daemon haproxy.rgw.default.compute-2.cuokcs from compute-2 -- ports [8080, 8999]
Jan 20 14:09:45 compute-0 ceph-mgr[74653]: [cephadm INFO cephadm.services.cephadmservice] Removing key for client.ingress.rgw.default.compute-2.cuokcs
Jan 20 14:09:45 compute-0 ceph-mgr[74653]: log_channel(cephadm) log [INF] : Removing key for client.ingress.rgw.default.compute-2.cuokcs
Jan 20 14:09:45 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth rm", "entity": "client.ingress.rgw.default.compute-2.cuokcs"} v 0) v1
Jan 20 14:09:45 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth rm", "entity": "client.ingress.rgw.default.compute-2.cuokcs"}]: dispatch
Jan 20 14:09:45 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0) v1
Jan 20 14:09:45 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:09:45 compute-0 ceph-mgr[74653]: [cephadm INFO cephadm.serve] Removing daemon keepalived.rgw.default.compute-2.dleeql from compute-2 -- ports []
Jan 20 14:09:45 compute-0 ceph-mgr[74653]: log_channel(cephadm) log [INF] : Removing daemon keepalived.rgw.default.compute-2.dleeql from compute-2 -- ports []
Jan 20 14:09:45 compute-0 ceph-mon[74360]: pgmap v538: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:09:45 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth rm", "entity": "client.ingress.rgw.default.compute-2.cuokcs"}]: dispatch
Jan 20 14:09:45 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:09:46 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:09:46 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:09:46 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:09:46 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:09:46.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:09:46 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:09:46 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:09:46 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:09:46.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:09:46 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v539: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:09:46 compute-0 ceph-mon[74360]: Removing key for client.ingress.rgw.default.compute-2.cuokcs
Jan 20 14:09:46 compute-0 ceph-mon[74360]: Removing daemon keepalived.rgw.default.compute-2.dleeql from compute-2 -- ports []
Jan 20 14:09:47 compute-0 ceph-mon[74360]: pgmap v539: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:09:48 compute-0 sudo[162295]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-utirwoberuflqmkqkilkifkeelgybtyp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918187.7863498-152-196904237155720/AnsiballZ_systemd_service.py'
Jan 20 14:09:48 compute-0 sudo[162295]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:09:48 compute-0 ceph-mgr[74653]: [cephadm INFO cephadm.services.cephadmservice] Removing key for client.ingress.rgw.default.compute-2.dleeql
Jan 20 14:09:48 compute-0 ceph-mgr[74653]: log_channel(cephadm) log [INF] : Removing key for client.ingress.rgw.default.compute-2.dleeql
Jan 20 14:09:48 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth rm", "entity": "client.ingress.rgw.default.compute-2.dleeql"} v 0) v1
Jan 20 14:09:48 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth rm", "entity": "client.ingress.rgw.default.compute-2.dleeql"}]: dispatch
Jan 20 14:09:48 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0) v1
Jan 20 14:09:48 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:09:48 compute-0 ceph-mgr[74653]: [progress INFO root] complete: finished ev 772195df-83e1-4544-89d2-7909730a4318 (Updating ingress.rgw.default deployment (-2 -> 4))
Jan 20 14:09:48 compute-0 ceph-mgr[74653]: [progress INFO root] Completed event 772195df-83e1-4544-89d2-7909730a4318 (Updating ingress.rgw.default deployment (-2 -> 4)) in 5 seconds
Jan 20 14:09:48 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0) v1
Jan 20 14:09:48 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:09:48 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:09:48 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:09:48.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:09:48 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:09:48 compute-0 python3.9[162297]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 14:09:48 compute-0 sudo[162299]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:09:48 compute-0 sudo[162298]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:09:48 compute-0 sudo[162299]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:09:48 compute-0 sudo[162298]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:09:48 compute-0 sudo[162299]: pam_unix(sudo:session): session closed for user root
Jan 20 14:09:48 compute-0 sudo[162298]: pam_unix(sudo:session): session closed for user root
Jan 20 14:09:48 compute-0 sudo[162295]: pam_unix(sudo:session): session closed for user root
Jan 20 14:09:48 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:09:48 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:09:48 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:09:48.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:09:48 compute-0 sudo[162349]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:09:48 compute-0 sudo[162349]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:09:48 compute-0 sudo[162349]: pam_unix(sudo:session): session closed for user root
Jan 20 14:09:48 compute-0 sudo[162350]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 14:09:48 compute-0 sudo[162350]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:09:48 compute-0 sudo[162350]: pam_unix(sudo:session): session closed for user root
Jan 20 14:09:48 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v540: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:09:48 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth rm", "entity": "client.ingress.rgw.default.compute-2.dleeql"}]: dispatch
Jan 20 14:09:48 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:09:48 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:09:48 compute-0 ceph-mgr[74653]: [progress INFO root] Writing back 22 completed events
Jan 20 14:09:48 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Jan 20 14:09:48 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:09:48 compute-0 sudo[162548]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-svwqlemifyrhnqkkkwqkklrzckiehedp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918188.6101348-152-75436328136226/AnsiballZ_systemd_service.py'
Jan 20 14:09:48 compute-0 sudo[162548]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:09:49 compute-0 python3.9[162550]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 14:09:49 compute-0 sudo[162548]: pam_unix(sudo:session): session closed for user root
Jan 20 14:09:49 compute-0 sudo[162701]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nrfsbedxgotpyorryhsmxtexfnydclcn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918189.3520563-152-156923673938650/AnsiballZ_systemd_service.py'
Jan 20 14:09:49 compute-0 sudo[162701]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:09:49 compute-0 ceph-mon[74360]: Removing key for client.ingress.rgw.default.compute-2.dleeql
Jan 20 14:09:49 compute-0 ceph-mon[74360]: pgmap v540: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:09:49 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:09:49 compute-0 python3.9[162704]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 14:09:49 compute-0 sudo[162701]: pam_unix(sudo:session): session closed for user root
Jan 20 14:09:50 compute-0 sudo[162855]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kgtouhruszggqrgnyzosvjjuirxwwsrk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918190.097784-152-265044304452154/AnsiballZ_systemd_service.py'
Jan 20 14:09:50 compute-0 sudo[162855]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:09:50 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:09:50 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:09:50 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:09:50.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:09:50 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 20 14:09:50 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:09:50 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 20 14:09:50 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:09:50 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:09:50 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:09:50 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:09:50.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:09:50 compute-0 python3.9[162857]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 14:09:50 compute-0 sudo[162855]: pam_unix(sudo:session): session closed for user root
Jan 20 14:09:50 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v541: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:09:51 compute-0 sudo[163008]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eoonayzvpjrrkjakwdhickrgziieoprk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918190.822804-152-195241751871300/AnsiballZ_systemd_service.py'
Jan 20 14:09:51 compute-0 sudo[163008]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:09:51 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 20 14:09:51 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:09:51 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 20 14:09:51 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:09:51 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:09:51 compute-0 ceph-mon[74360]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #30. Immutable memtables: 0.
Jan 20 14:09:51 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:09:51.358327) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 14:09:51 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 30
Jan 20 14:09:51 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768918191358381, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 1399, "num_deletes": 257, "total_data_size": 2346170, "memory_usage": 2375296, "flush_reason": "Manual Compaction"}
Jan 20 14:09:51 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #31: started
Jan 20 14:09:51 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768918191377123, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 31, "file_size": 2252278, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 12224, "largest_seqno": 13622, "table_properties": {"data_size": 2245730, "index_size": 3683, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 13578, "raw_average_key_size": 18, "raw_value_size": 2232168, "raw_average_value_size": 3121, "num_data_blocks": 166, "num_entries": 715, "num_filter_entries": 715, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768918052, "oldest_key_time": 1768918052, "file_creation_time": 1768918191, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1688fb6f-f397-4aac-a0e8-874ff91d97a7", "db_session_id": "5V3N2TVXYZBCXP55EZHK", "orig_file_number": 31, "seqno_to_time_mapping": "N/A"}}
Jan 20 14:09:51 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 18830 microseconds, and 5107 cpu microseconds.
Jan 20 14:09:51 compute-0 ceph-mon[74360]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 14:09:51 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:09:51.377160) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #31: 2252278 bytes OK
Jan 20 14:09:51 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:09:51.377174) [db/memtable_list.cc:519] [default] Level-0 commit table #31 started
Jan 20 14:09:51 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:09:51.378841) [db/memtable_list.cc:722] [default] Level-0 commit table #31: memtable #1 done
Jan 20 14:09:51 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:09:51.378853) EVENT_LOG_v1 {"time_micros": 1768918191378849, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 14:09:51 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:09:51.378866) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 14:09:51 compute-0 ceph-mon[74360]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 2339960, prev total WAL file size 2339960, number of live WAL files 2.
Jan 20 14:09:51 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000027.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:09:51 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:09:51.379481) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760030' seq:72057594037927935, type:22 .. '6B7600323537' seq:0, type:0; will stop at (end)
Jan 20 14:09:51 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 14:09:51 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [31(2199KB)], [29(8051KB)]
Jan 20 14:09:51 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768918191379538, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [31], "files_L6": [29], "score": -1, "input_data_size": 10497426, "oldest_snapshot_seqno": -1}
Jan 20 14:09:51 compute-0 python3.9[163010]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 14:09:51 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #32: 4134 keys, 9909648 bytes, temperature: kUnknown
Jan 20 14:09:51 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768918191458077, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 32, "file_size": 9909648, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9877971, "index_size": 20239, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10373, "raw_key_size": 101721, "raw_average_key_size": 24, "raw_value_size": 9799254, "raw_average_value_size": 2370, "num_data_blocks": 859, "num_entries": 4134, "num_filter_entries": 4134, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768917305, "oldest_key_time": 0, "file_creation_time": 1768918191, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1688fb6f-f397-4aac-a0e8-874ff91d97a7", "db_session_id": "5V3N2TVXYZBCXP55EZHK", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Jan 20 14:09:51 compute-0 ceph-mon[74360]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 14:09:51 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:09:51.458271) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 9909648 bytes
Jan 20 14:09:51 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:09:51.459934) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 133.6 rd, 126.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.1, 7.9 +0.0 blob) out(9.5 +0.0 blob), read-write-amplify(9.1) write-amplify(4.4) OK, records in: 4671, records dropped: 537 output_compression: NoCompression
Jan 20 14:09:51 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:09:51.459953) EVENT_LOG_v1 {"time_micros": 1768918191459944, "job": 12, "event": "compaction_finished", "compaction_time_micros": 78599, "compaction_time_cpu_micros": 19740, "output_level": 6, "num_output_files": 1, "total_output_size": 9909648, "num_input_records": 4671, "num_output_records": 4134, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 14:09:51 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000031.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:09:51 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768918191460497, "job": 12, "event": "table_file_deletion", "file_number": 31}
Jan 20 14:09:51 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:09:51 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768918191461901, "job": 12, "event": "table_file_deletion", "file_number": 29}
Jan 20 14:09:51 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:09:51.379370) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:09:51 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:09:51.461966) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:09:51 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:09:51.461973) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:09:51 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:09:51.461975) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:09:51 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:09:51.461976) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:09:51 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:09:51.461978) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:09:51 compute-0 sudo[163008]: pam_unix(sudo:session): session closed for user root
Jan 20 14:09:51 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:09:51 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:09:51 compute-0 ceph-mon[74360]: pgmap v541: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:09:51 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:09:51 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:09:51 compute-0 sudo[163162]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dkfivrkwglkbkhrayyajrtrvdgfokjxu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918191.6397374-152-277440578473584/AnsiballZ_systemd_service.py'
Jan 20 14:09:51 compute-0 sudo[163162]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:09:52 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 14:09:52 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:09:52 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 20 14:09:52 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:09:52 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 20 14:09:52 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:09:52 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 8908b0ee-f4e2-4168-aa2a-5b12f3b39932 does not exist
Jan 20 14:09:52 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev da10625f-9412-43f8-8eae-bfb90d2d08fe does not exist
Jan 20 14:09:52 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 3d535ddf-f7b9-4567-90ca-a473ff09a67f does not exist
Jan 20 14:09:52 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 20 14:09:52 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:09:52 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 20 14:09:52 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:09:52 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 14:09:52 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:09:52 compute-0 sudo[163165]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:09:52 compute-0 sudo[163165]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:09:52 compute-0 sudo[163165]: pam_unix(sudo:session): session closed for user root
Jan 20 14:09:52 compute-0 python3.9[163164]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 14:09:52 compute-0 sudo[163190]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:09:52 compute-0 sudo[163190]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:09:52 compute-0 sudo[163190]: pam_unix(sudo:session): session closed for user root
Jan 20 14:09:52 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:09:52 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:09:52 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:09:52.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:09:52 compute-0 sudo[163162]: pam_unix(sudo:session): session closed for user root
Jan 20 14:09:52 compute-0 sudo[163216]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:09:52 compute-0 sudo[163216]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:09:52 compute-0 sudo[163216]: pam_unix(sudo:session): session closed for user root
Jan 20 14:09:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Optimize plan auto_2026-01-20_14:09:52
Jan 20 14:09:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 14:09:52 compute-0 ceph-mgr[74653]: [balancer INFO root] do_upmap
Jan 20 14:09:52 compute-0 ceph-mgr[74653]: [balancer INFO root] pools ['default.rgw.log', 'default.rgw.meta', 'default.rgw.control', 'cephfs.cephfs.data', 'images', 'vms', '.rgw.root', '.mgr', 'cephfs.cephfs.meta', 'volumes', 'backups']
Jan 20 14:09:52 compute-0 ceph-mgr[74653]: [balancer INFO root] prepared 0/10 changes
Jan 20 14:09:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:09:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:09:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:09:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:09:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:09:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:09:52 compute-0 sudo[163248]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 14:09:52 compute-0 sudo[163248]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:09:52 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:09:52 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:09:52 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:09:52 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:09:52 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:09:52 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:09:52 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:09:52 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:09:52 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:09:52.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:09:52 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v542: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:09:52 compute-0 sudo[163450]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-czoecxadftlfirrbzolkuialcqzxolff ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918192.5095978-152-248093285060082/AnsiballZ_systemd_service.py'
Jan 20 14:09:52 compute-0 sudo[163450]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:09:53 compute-0 podman[163458]: 2026-01-20 14:09:53.012362543 +0000 UTC m=+0.075623328 container create 66ead63a46c831f47eb66c1685185e8e746fc837069788514735770f03863f35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_benz, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 20 14:09:53 compute-0 systemd[1]: Started libpod-conmon-66ead63a46c831f47eb66c1685185e8e746fc837069788514735770f03863f35.scope.
Jan 20 14:09:53 compute-0 podman[163458]: 2026-01-20 14:09:52.98201703 +0000 UTC m=+0.045277895 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:09:53 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:09:53 compute-0 python3.9[163457]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 14:09:53 compute-0 podman[163458]: 2026-01-20 14:09:53.198188122 +0000 UTC m=+0.261448967 container init 66ead63a46c831f47eb66c1685185e8e746fc837069788514735770f03863f35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_benz, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:09:53 compute-0 sudo[163450]: pam_unix(sudo:session): session closed for user root
Jan 20 14:09:53 compute-0 podman[163458]: 2026-01-20 14:09:53.20893165 +0000 UTC m=+0.272192445 container start 66ead63a46c831f47eb66c1685185e8e746fc837069788514735770f03863f35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_benz, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:09:53 compute-0 podman[163458]: 2026-01-20 14:09:53.212900606 +0000 UTC m=+0.276161401 container attach 66ead63a46c831f47eb66c1685185e8e746fc837069788514735770f03863f35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_benz, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 20 14:09:53 compute-0 quirky_benz[163474]: 167 167
Jan 20 14:09:53 compute-0 systemd[1]: libpod-66ead63a46c831f47eb66c1685185e8e746fc837069788514735770f03863f35.scope: Deactivated successfully.
Jan 20 14:09:53 compute-0 podman[163458]: 2026-01-20 14:09:53.215867726 +0000 UTC m=+0.279128521 container died 66ead63a46c831f47eb66c1685185e8e746fc837069788514735770f03863f35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_benz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:09:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-e15f48d8217970fc745ea78a942e3b325b47a104359a8aab945d28dccc2a301e-merged.mount: Deactivated successfully.
Jan 20 14:09:53 compute-0 podman[163458]: 2026-01-20 14:09:53.268113546 +0000 UTC m=+0.331374351 container remove 66ead63a46c831f47eb66c1685185e8e746fc837069788514735770f03863f35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_benz, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True)
Jan 20 14:09:53 compute-0 systemd[1]: libpod-conmon-66ead63a46c831f47eb66c1685185e8e746fc837069788514735770f03863f35.scope: Deactivated successfully.
Jan 20 14:09:53 compute-0 podman[163523]: 2026-01-20 14:09:53.499261709 +0000 UTC m=+0.063535673 container create 56ec43938e3d9516019b2be1123f5def06f5875cdb15f1784f8d93a8f3167e77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_raman, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 20 14:09:53 compute-0 ceph-mon[74360]: pgmap v542: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:09:53 compute-0 systemd[1]: Started libpod-conmon-56ec43938e3d9516019b2be1123f5def06f5875cdb15f1784f8d93a8f3167e77.scope.
Jan 20 14:09:53 compute-0 podman[163523]: 2026-01-20 14:09:53.478384599 +0000 UTC m=+0.042658643 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:09:53 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:09:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0f98a2291aadb66d88a812332933b9341e608cfddc01519e291e5344e97bdd5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:09:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0f98a2291aadb66d88a812332933b9341e608cfddc01519e291e5344e97bdd5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:09:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0f98a2291aadb66d88a812332933b9341e608cfddc01519e291e5344e97bdd5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:09:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0f98a2291aadb66d88a812332933b9341e608cfddc01519e291e5344e97bdd5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:09:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0f98a2291aadb66d88a812332933b9341e608cfddc01519e291e5344e97bdd5/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 14:09:53 compute-0 podman[163523]: 2026-01-20 14:09:53.618495984 +0000 UTC m=+0.182769988 container init 56ec43938e3d9516019b2be1123f5def06f5875cdb15f1784f8d93a8f3167e77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_raman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 20 14:09:53 compute-0 podman[163523]: 2026-01-20 14:09:53.636547417 +0000 UTC m=+0.200821401 container start 56ec43938e3d9516019b2be1123f5def06f5875cdb15f1784f8d93a8f3167e77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_raman, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 20 14:09:53 compute-0 podman[163523]: 2026-01-20 14:09:53.640830262 +0000 UTC m=+0.205104306 container attach 56ec43938e3d9516019b2be1123f5def06f5875cdb15f1784f8d93a8f3167e77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_raman, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 20 14:09:54 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:09:54 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:09:54 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:09:54.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:09:54 compute-0 reverent_raman[163539]: --> passed data devices: 0 physical, 1 LVM
Jan 20 14:09:54 compute-0 reverent_raman[163539]: --> relative data size: 1.0
Jan 20 14:09:54 compute-0 reverent_raman[163539]: --> All data devices are unavailable
Jan 20 14:09:54 compute-0 systemd[1]: libpod-56ec43938e3d9516019b2be1123f5def06f5875cdb15f1784f8d93a8f3167e77.scope: Deactivated successfully.
Jan 20 14:09:54 compute-0 podman[163523]: 2026-01-20 14:09:54.472062514 +0000 UTC m=+1.036336528 container died 56ec43938e3d9516019b2be1123f5def06f5875cdb15f1784f8d93a8f3167e77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_raman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 20 14:09:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-c0f98a2291aadb66d88a812332933b9341e608cfddc01519e291e5344e97bdd5-merged.mount: Deactivated successfully.
Jan 20 14:09:54 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:09:54 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:09:54 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:09:54.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:09:54 compute-0 podman[163523]: 2026-01-20 14:09:54.558618714 +0000 UTC m=+1.122892678 container remove 56ec43938e3d9516019b2be1123f5def06f5875cdb15f1784f8d93a8f3167e77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_raman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:09:54 compute-0 systemd[1]: libpod-conmon-56ec43938e3d9516019b2be1123f5def06f5875cdb15f1784f8d93a8f3167e77.scope: Deactivated successfully.
Jan 20 14:09:54 compute-0 sudo[163248]: pam_unix(sudo:session): session closed for user root
Jan 20 14:09:54 compute-0 sudo[163567]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:09:54 compute-0 sudo[163567]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:09:54 compute-0 sudo[163567]: pam_unix(sudo:session): session closed for user root
Jan 20 14:09:54 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v543: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:09:54 compute-0 sudo[163592]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:09:54 compute-0 sudo[163592]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:09:54 compute-0 sudo[163592]: pam_unix(sudo:session): session closed for user root
Jan 20 14:09:54 compute-0 sudo[163617]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:09:54 compute-0 sudo[163617]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:09:54 compute-0 sudo[163617]: pam_unix(sudo:session): session closed for user root
Jan 20 14:09:54 compute-0 sudo[163642]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- lvm list --format json
Jan 20 14:09:54 compute-0 sudo[163642]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:09:55 compute-0 podman[163707]: 2026-01-20 14:09:55.230519246 +0000 UTC m=+0.058801607 container create 1e484a7a972ec2a48b012bb610b1144fa2769dde3f462b351e5c91ce2df42823 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_proskuriakova, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:09:55 compute-0 systemd[1]: Started libpod-conmon-1e484a7a972ec2a48b012bb610b1144fa2769dde3f462b351e5c91ce2df42823.scope.
Jan 20 14:09:55 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:09:55 compute-0 podman[163707]: 2026-01-20 14:09:55.208530467 +0000 UTC m=+0.036812908 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:09:55 compute-0 podman[163707]: 2026-01-20 14:09:55.316134241 +0000 UTC m=+0.144416612 container init 1e484a7a972ec2a48b012bb610b1144fa2769dde3f462b351e5c91ce2df42823 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_proskuriakova, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 14:09:55 compute-0 podman[163707]: 2026-01-20 14:09:55.326306833 +0000 UTC m=+0.154589214 container start 1e484a7a972ec2a48b012bb610b1144fa2769dde3f462b351e5c91ce2df42823 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_proskuriakova, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 14:09:55 compute-0 podman[163707]: 2026-01-20 14:09:55.330325011 +0000 UTC m=+0.158607392 container attach 1e484a7a972ec2a48b012bb610b1144fa2769dde3f462b351e5c91ce2df42823 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_proskuriakova, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Jan 20 14:09:55 compute-0 competent_proskuriakova[163724]: 167 167
Jan 20 14:09:55 compute-0 systemd[1]: libpod-1e484a7a972ec2a48b012bb610b1144fa2769dde3f462b351e5c91ce2df42823.scope: Deactivated successfully.
Jan 20 14:09:55 compute-0 podman[163707]: 2026-01-20 14:09:55.334203145 +0000 UTC m=+0.162485536 container died 1e484a7a972ec2a48b012bb610b1144fa2769dde3f462b351e5c91ce2df42823 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_proskuriakova, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 14:09:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-ec021b08d010c49f94f65939bbfbac82efd6d1eaa8496019e5ff9ad4a6f1c876-merged.mount: Deactivated successfully.
Jan 20 14:09:55 compute-0 podman[163707]: 2026-01-20 14:09:55.389956618 +0000 UTC m=+0.218239009 container remove 1e484a7a972ec2a48b012bb610b1144fa2769dde3f462b351e5c91ce2df42823 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_proskuriakova, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 20 14:09:55 compute-0 systemd[1]: libpod-conmon-1e484a7a972ec2a48b012bb610b1144fa2769dde3f462b351e5c91ce2df42823.scope: Deactivated successfully.
Jan 20 14:09:55 compute-0 podman[163750]: 2026-01-20 14:09:55.563068847 +0000 UTC m=+0.040464576 container create 511ab2a3cf51da1abb74951f9e6eb58c1f00ff519999e691b4cd31a9919e6b07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_newton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:09:55 compute-0 systemd[1]: Started libpod-conmon-511ab2a3cf51da1abb74951f9e6eb58c1f00ff519999e691b4cd31a9919e6b07.scope.
Jan 20 14:09:55 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:09:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f71e3671dc61261b3b6ed66735f08b693b32bddbb18bae6711119aa9dc08b2bd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:09:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f71e3671dc61261b3b6ed66735f08b693b32bddbb18bae6711119aa9dc08b2bd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:09:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f71e3671dc61261b3b6ed66735f08b693b32bddbb18bae6711119aa9dc08b2bd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:09:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f71e3671dc61261b3b6ed66735f08b693b32bddbb18bae6711119aa9dc08b2bd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:09:55 compute-0 podman[163750]: 2026-01-20 14:09:55.548983539 +0000 UTC m=+0.026379288 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:09:55 compute-0 podman[163750]: 2026-01-20 14:09:55.64905084 +0000 UTC m=+0.126446589 container init 511ab2a3cf51da1abb74951f9e6eb58c1f00ff519999e691b4cd31a9919e6b07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_newton, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS)
Jan 20 14:09:55 compute-0 podman[163750]: 2026-01-20 14:09:55.656304575 +0000 UTC m=+0.133700324 container start 511ab2a3cf51da1abb74951f9e6eb58c1f00ff519999e691b4cd31a9919e6b07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_newton, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 20 14:09:55 compute-0 podman[163750]: 2026-01-20 14:09:55.659913521 +0000 UTC m=+0.137309280 container attach 511ab2a3cf51da1abb74951f9e6eb58c1f00ff519999e691b4cd31a9919e6b07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_newton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 20 14:09:55 compute-0 ceph-mon[74360]: pgmap v543: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:09:56 compute-0 sudo[163897]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-khkgxpekeqejcdwwtjeshevnwdikpsgg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918195.659945-308-33389193960248/AnsiballZ_file.py'
Jan 20 14:09:56 compute-0 sudo[163897]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:09:56 compute-0 python3.9[163899]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:09:56 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:09:56 compute-0 sudo[163897]: pam_unix(sudo:session): session closed for user root
Jan 20 14:09:56 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:09:56 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:09:56 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:09:56.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:09:56 compute-0 zen_newton[163767]: {
Jan 20 14:09:56 compute-0 zen_newton[163767]:     "0": [
Jan 20 14:09:56 compute-0 zen_newton[163767]:         {
Jan 20 14:09:56 compute-0 zen_newton[163767]:             "devices": [
Jan 20 14:09:56 compute-0 zen_newton[163767]:                 "/dev/loop3"
Jan 20 14:09:56 compute-0 zen_newton[163767]:             ],
Jan 20 14:09:56 compute-0 zen_newton[163767]:             "lv_name": "ceph_lv0",
Jan 20 14:09:56 compute-0 zen_newton[163767]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:09:56 compute-0 zen_newton[163767]:             "lv_size": "7511998464",
Jan 20 14:09:56 compute-0 zen_newton[163767]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e399cf45-e6b6-5393-99f1-75c601d3f188,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afc3740a-ccee-46f8-b343-22648cc89376,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 20 14:09:56 compute-0 zen_newton[163767]:             "lv_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 14:09:56 compute-0 zen_newton[163767]:             "name": "ceph_lv0",
Jan 20 14:09:56 compute-0 zen_newton[163767]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:09:56 compute-0 zen_newton[163767]:             "tags": {
Jan 20 14:09:56 compute-0 zen_newton[163767]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:09:56 compute-0 zen_newton[163767]:                 "ceph.block_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 14:09:56 compute-0 zen_newton[163767]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 14:09:56 compute-0 zen_newton[163767]:                 "ceph.cluster_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 14:09:56 compute-0 zen_newton[163767]:                 "ceph.cluster_name": "ceph",
Jan 20 14:09:56 compute-0 zen_newton[163767]:                 "ceph.crush_device_class": "",
Jan 20 14:09:56 compute-0 zen_newton[163767]:                 "ceph.encrypted": "0",
Jan 20 14:09:56 compute-0 zen_newton[163767]:                 "ceph.osd_fsid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 14:09:56 compute-0 zen_newton[163767]:                 "ceph.osd_id": "0",
Jan 20 14:09:56 compute-0 zen_newton[163767]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 14:09:56 compute-0 zen_newton[163767]:                 "ceph.type": "block",
Jan 20 14:09:56 compute-0 zen_newton[163767]:                 "ceph.vdo": "0"
Jan 20 14:09:56 compute-0 zen_newton[163767]:             },
Jan 20 14:09:56 compute-0 zen_newton[163767]:             "type": "block",
Jan 20 14:09:56 compute-0 zen_newton[163767]:             "vg_name": "ceph_vg0"
Jan 20 14:09:56 compute-0 zen_newton[163767]:         }
Jan 20 14:09:56 compute-0 zen_newton[163767]:     ]
Jan 20 14:09:56 compute-0 zen_newton[163767]: }
Jan 20 14:09:56 compute-0 systemd[1]: libpod-511ab2a3cf51da1abb74951f9e6eb58c1f00ff519999e691b4cd31a9919e6b07.scope: Deactivated successfully.
Jan 20 14:09:56 compute-0 podman[163750]: 2026-01-20 14:09:56.411244573 +0000 UTC m=+0.888640302 container died 511ab2a3cf51da1abb74951f9e6eb58c1f00ff519999e691b4cd31a9919e6b07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_newton, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:09:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-f71e3671dc61261b3b6ed66735f08b693b32bddbb18bae6711119aa9dc08b2bd-merged.mount: Deactivated successfully.
Jan 20 14:09:56 compute-0 podman[163750]: 2026-01-20 14:09:56.46971371 +0000 UTC m=+0.947109429 container remove 511ab2a3cf51da1abb74951f9e6eb58c1f00ff519999e691b4cd31a9919e6b07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_newton, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 14:09:56 compute-0 systemd[1]: libpod-conmon-511ab2a3cf51da1abb74951f9e6eb58c1f00ff519999e691b4cd31a9919e6b07.scope: Deactivated successfully.
Jan 20 14:09:56 compute-0 sudo[163642]: pam_unix(sudo:session): session closed for user root
Jan 20 14:09:56 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:09:56 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:09:56 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:09:56.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:09:56 compute-0 sudo[163963]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:09:56 compute-0 sudo[163963]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:09:56 compute-0 sudo[163963]: pam_unix(sudo:session): session closed for user root
Jan 20 14:09:56 compute-0 sudo[164016]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:09:56 compute-0 sudo[164016]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:09:56 compute-0 sudo[164016]: pam_unix(sudo:session): session closed for user root
Jan 20 14:09:56 compute-0 sudo[164064]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:09:56 compute-0 sudo[164064]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:09:56 compute-0 sudo[164064]: pam_unix(sudo:session): session closed for user root
Jan 20 14:09:56 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v544: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:09:56 compute-0 sudo[164113]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- raw list --format json
Jan 20 14:09:56 compute-0 sudo[164113]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:09:56 compute-0 sudo[164163]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hmjeqrdwnuugfcpywmlfjqvpvprykcuk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918196.5064547-308-32319080758813/AnsiballZ_file.py'
Jan 20 14:09:56 compute-0 sudo[164163]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:09:56 compute-0 python3.9[164166]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:09:56 compute-0 sudo[164163]: pam_unix(sudo:session): session closed for user root
Jan 20 14:09:57 compute-0 podman[164232]: 2026-01-20 14:09:57.101338034 +0000 UTC m=+0.036236492 container create 74a34edf4c9cb72a7792bed3515052d839f6fcebabf00508ac8862d8f1d1fba5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_hoover, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:09:57 compute-0 systemd[1]: Started libpod-conmon-74a34edf4c9cb72a7792bed3515052d839f6fcebabf00508ac8862d8f1d1fba5.scope.
Jan 20 14:09:57 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:09:57 compute-0 podman[164232]: 2026-01-20 14:09:57.159256935 +0000 UTC m=+0.094155413 container init 74a34edf4c9cb72a7792bed3515052d839f6fcebabf00508ac8862d8f1d1fba5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_hoover, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 20 14:09:57 compute-0 podman[164232]: 2026-01-20 14:09:57.16765622 +0000 UTC m=+0.102554688 container start 74a34edf4c9cb72a7792bed3515052d839f6fcebabf00508ac8862d8f1d1fba5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_hoover, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:09:57 compute-0 podman[164232]: 2026-01-20 14:09:57.171807301 +0000 UTC m=+0.106705779 container attach 74a34edf4c9cb72a7792bed3515052d839f6fcebabf00508ac8862d8f1d1fba5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_hoover, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default)
Jan 20 14:09:57 compute-0 blissful_hoover[164271]: 167 167
Jan 20 14:09:57 compute-0 systemd[1]: libpod-74a34edf4c9cb72a7792bed3515052d839f6fcebabf00508ac8862d8f1d1fba5.scope: Deactivated successfully.
Jan 20 14:09:57 compute-0 podman[164232]: 2026-01-20 14:09:57.173487296 +0000 UTC m=+0.108385764 container died 74a34edf4c9cb72a7792bed3515052d839f6fcebabf00508ac8862d8f1d1fba5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_hoover, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 20 14:09:57 compute-0 podman[164232]: 2026-01-20 14:09:57.085722005 +0000 UTC m=+0.020620483 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:09:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-f36532df8b22cc47850433ecd8912c0cbb4d9ea4873fcc3bd79df1b40736aa1f-merged.mount: Deactivated successfully.
Jan 20 14:09:57 compute-0 podman[164232]: 2026-01-20 14:09:57.222493549 +0000 UTC m=+0.157392017 container remove 74a34edf4c9cb72a7792bed3515052d839f6fcebabf00508ac8862d8f1d1fba5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_hoover, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 20 14:09:57 compute-0 systemd[1]: libpod-conmon-74a34edf4c9cb72a7792bed3515052d839f6fcebabf00508ac8862d8f1d1fba5.scope: Deactivated successfully.
Jan 20 14:09:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 14:09:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:09:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 14:09:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:09:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:09:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:09:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:09:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:09:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:09:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:09:57 compute-0 sudo[164409]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-guazdtrepmeqncuixpshafsikgzypbul ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918197.1346805-308-101499180814520/AnsiballZ_file.py'
Jan 20 14:09:57 compute-0 sudo[164409]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:09:57 compute-0 podman[164370]: 2026-01-20 14:09:57.385122937 +0000 UTC m=+0.024314693 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:09:57 compute-0 python3.9[164411]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:09:57 compute-0 sudo[164409]: pam_unix(sudo:session): session closed for user root
Jan 20 14:09:57 compute-0 podman[164370]: 2026-01-20 14:09:57.729907036 +0000 UTC m=+0.369098732 container create b3462e5d618c1ead431f9d461cf9695b47c555ea691b1b8c66d60303551d6efa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_turing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 20 14:09:57 compute-0 ceph-mon[74360]: pgmap v544: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:09:57 compute-0 systemd[1]: Started libpod-conmon-b3462e5d618c1ead431f9d461cf9695b47c555ea691b1b8c66d60303551d6efa.scope.
Jan 20 14:09:57 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:09:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32121dfd09f4f6213009366b975561df786506f6e0ed8fd141efff73c4ea0338/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:09:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32121dfd09f4f6213009366b975561df786506f6e0ed8fd141efff73c4ea0338/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:09:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32121dfd09f4f6213009366b975561df786506f6e0ed8fd141efff73c4ea0338/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:09:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32121dfd09f4f6213009366b975561df786506f6e0ed8fd141efff73c4ea0338/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:09:58 compute-0 podman[164370]: 2026-01-20 14:09:58.018255352 +0000 UTC m=+0.657447138 container init b3462e5d618c1ead431f9d461cf9695b47c555ea691b1b8c66d60303551d6efa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_turing, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:09:58 compute-0 podman[164370]: 2026-01-20 14:09:58.030771266 +0000 UTC m=+0.669962972 container start b3462e5d618c1ead431f9d461cf9695b47c555ea691b1b8c66d60303551d6efa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_turing, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:09:58 compute-0 podman[164370]: 2026-01-20 14:09:58.037679271 +0000 UTC m=+0.676870987 container attach b3462e5d618c1ead431f9d461cf9695b47c555ea691b1b8c66d60303551d6efa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_turing, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:09:58 compute-0 sudo[164570]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hszhrjhoxtdfyecnhtjvksgaiutergpb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918197.8456357-308-220653108188701/AnsiballZ_file.py'
Jan 20 14:09:58 compute-0 sudo[164570]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:09:58 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:09:58 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:09:58 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:09:58.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:09:58 compute-0 python3.9[164572]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:09:58 compute-0 sudo[164570]: pam_unix(sudo:session): session closed for user root
Jan 20 14:09:58 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:09:58 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:09:58 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:09:58.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:09:58 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v545: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:09:58 compute-0 objective_turing[164492]: {
Jan 20 14:09:58 compute-0 objective_turing[164492]:     "afc3740a-ccee-46f8-b343-22648cc89376": {
Jan 20 14:09:58 compute-0 objective_turing[164492]:         "ceph_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 14:09:58 compute-0 objective_turing[164492]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 20 14:09:58 compute-0 objective_turing[164492]:         "osd_id": 0,
Jan 20 14:09:58 compute-0 objective_turing[164492]:         "osd_uuid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 14:09:58 compute-0 objective_turing[164492]:         "type": "bluestore"
Jan 20 14:09:58 compute-0 objective_turing[164492]:     }
Jan 20 14:09:58 compute-0 objective_turing[164492]: }
Jan 20 14:09:58 compute-0 systemd[1]: libpod-b3462e5d618c1ead431f9d461cf9695b47c555ea691b1b8c66d60303551d6efa.scope: Deactivated successfully.
Jan 20 14:09:58 compute-0 podman[164370]: 2026-01-20 14:09:58.863949541 +0000 UTC m=+1.503141207 container died b3462e5d618c1ead431f9d461cf9695b47c555ea691b1b8c66d60303551d6efa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_turing, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 20 14:09:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-32121dfd09f4f6213009366b975561df786506f6e0ed8fd141efff73c4ea0338-merged.mount: Deactivated successfully.
Jan 20 14:09:58 compute-0 sudo[164749]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kdzanuqtycqfjivkvuxzvnjrxtiafolb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918198.6308339-308-77744315716075/AnsiballZ_file.py'
Jan 20 14:09:58 compute-0 sudo[164749]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:09:58 compute-0 podman[164370]: 2026-01-20 14:09:58.977616977 +0000 UTC m=+1.616808643 container remove b3462e5d618c1ead431f9d461cf9695b47c555ea691b1b8c66d60303551d6efa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_turing, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 20 14:09:58 compute-0 systemd[1]: libpod-conmon-b3462e5d618c1ead431f9d461cf9695b47c555ea691b1b8c66d60303551d6efa.scope: Deactivated successfully.
Jan 20 14:09:59 compute-0 sudo[164113]: pam_unix(sudo:session): session closed for user root
Jan 20 14:09:59 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 14:09:59 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:09:59 compute-0 podman[164751]: 2026-01-20 14:09:59.03970436 +0000 UTC m=+0.063682207 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2)
Jan 20 14:09:59 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 14:09:59 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:09:59 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 2a98d33d-88d7-4b63-90d0-0a65bf1ade8d does not exist
Jan 20 14:09:59 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev af932985-73b8-4c30-96ac-cbbbb8180acd does not exist
Jan 20 14:09:59 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 8ce56e56-e2de-4844-ad18-be9ada5c51f0 does not exist
Jan 20 14:09:59 compute-0 sudo[164769]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:09:59 compute-0 sudo[164769]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:09:59 compute-0 sudo[164769]: pam_unix(sudo:session): session closed for user root
Jan 20 14:09:59 compute-0 python3.9[164752]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:09:59 compute-0 sudo[164749]: pam_unix(sudo:session): session closed for user root
Jan 20 14:09:59 compute-0 sudo[164795]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 14:09:59 compute-0 sudo[164795]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:09:59 compute-0 sudo[164795]: pam_unix(sudo:session): session closed for user root
Jan 20 14:09:59 compute-0 ceph-mgr[74653]: [cephadm INFO cephadm.serve] Reconfiguring keepalived.rgw.default.compute-0.gcjsxe (dependencies changed)...
Jan 20 14:09:59 compute-0 ceph-mgr[74653]: log_channel(cephadm) log [INF] : Reconfiguring keepalived.rgw.default.compute-0.gcjsxe (dependencies changed)...
Jan 20 14:09:59 compute-0 ceph-mgr[74653]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 20 14:09:59 compute-0 ceph-mgr[74653]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 20 14:09:59 compute-0 ceph-mgr[74653]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Jan 20 14:09:59 compute-0 ceph-mgr[74653]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Jan 20 14:09:59 compute-0 ceph-mgr[74653]: [cephadm INFO cephadm.serve] Reconfiguring daemon keepalived.rgw.default.compute-0.gcjsxe on compute-0
Jan 20 14:09:59 compute-0 ceph-mgr[74653]: log_channel(cephadm) log [INF] : Reconfiguring daemon keepalived.rgw.default.compute-0.gcjsxe on compute-0
Jan 20 14:09:59 compute-0 sudo[164942]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:09:59 compute-0 sudo[164942]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:09:59 compute-0 sudo[164942]: pam_unix(sudo:session): session closed for user root
Jan 20 14:09:59 compute-0 sudo[164996]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jrlocjeobqovacuiueavyjzktxhoghjs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918199.3339267-308-250303614287229/AnsiballZ_file.py'
Jan 20 14:09:59 compute-0 sudo[164996]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:09:59 compute-0 sudo[164994]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:09:59 compute-0 sudo[164994]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:09:59 compute-0 sudo[164994]: pam_unix(sudo:session): session closed for user root
Jan 20 14:09:59 compute-0 sudo[165023]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:09:59 compute-0 sudo[165023]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:09:59 compute-0 sudo[165023]: pam_unix(sudo:session): session closed for user root
Jan 20 14:09:59 compute-0 sudo[165048]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/keepalived:2.2.4 --timeout 895 _orch deploy --fsid e399cf45-e6b6-5393-99f1-75c601d3f188
Jan 20 14:09:59 compute-0 sudo[165048]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:09:59 compute-0 python3.9[165009]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:09:59 compute-0 sudo[164996]: pam_unix(sudo:session): session closed for user root
Jan 20 14:10:00 compute-0 ceph-mon[74360]: log_channel(cluster) log [INF] : overall HEALTH_OK
Jan 20 14:10:00 compute-0 ceph-mon[74360]: pgmap v545: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:10:00 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:10:00 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:10:00 compute-0 ceph-mon[74360]: overall HEALTH_OK
Jan 20 14:10:00 compute-0 podman[165137]: 2026-01-20 14:10:00.107901661 +0000 UTC m=+0.042460658 container create 1b83c0bd2603a1845671946c6657038e4a5b4229fd1bf542f7e9e1662c4ae89e (image=quay.io/ceph/keepalived:2.2.4, name=romantic_proskuriakova, io.buildah.version=1.28.2, build-date=2023-02-22T09:23:20, io.openshift.tags=Ceph keepalived, description=keepalived for Ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.component=keepalived-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1793, distribution-scope=public, architecture=x86_64, vendor=Red Hat, Inc., vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=2.2.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, io.openshift.expose-services=, summary=Provides keepalived on RHEL 9 for Ceph.)
Jan 20 14:10:00 compute-0 systemd[1]: Started libpod-conmon-1b83c0bd2603a1845671946c6657038e4a5b4229fd1bf542f7e9e1662c4ae89e.scope.
Jan 20 14:10:00 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:10:00 compute-0 podman[165137]: 2026-01-20 14:10:00.09068273 +0000 UTC m=+0.025241747 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Jan 20 14:10:00 compute-0 podman[165137]: 2026-01-20 14:10:00.200291117 +0000 UTC m=+0.134850164 container init 1b83c0bd2603a1845671946c6657038e4a5b4229fd1bf542f7e9e1662c4ae89e (image=quay.io/ceph/keepalived:2.2.4, name=romantic_proskuriakova, description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=2.2.4, release=1793, name=keepalived, io.openshift.tags=Ceph keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, summary=Provides keepalived on RHEL 9 for Ceph., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, architecture=x86_64, vendor=Red Hat, Inc., io.buildah.version=1.28.2, build-date=2023-02-22T09:23:20, distribution-scope=public, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.component=keepalived-container)
Jan 20 14:10:00 compute-0 podman[165137]: 2026-01-20 14:10:00.211356713 +0000 UTC m=+0.145915740 container start 1b83c0bd2603a1845671946c6657038e4a5b4229fd1bf542f7e9e1662c4ae89e (image=quay.io/ceph/keepalived:2.2.4, name=romantic_proskuriakova, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vendor=Red Hat, Inc., io.k8s.display-name=Keepalived on RHEL 9, summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2023-02-22T09:23:20, distribution-scope=public, architecture=x86_64, com.redhat.component=keepalived-container, version=2.2.4, name=keepalived, io.openshift.expose-services=, description=keepalived for Ceph, vcs-type=git, io.buildah.version=1.28.2, io.openshift.tags=Ceph keepalived)
Jan 20 14:10:00 compute-0 podman[165137]: 2026-01-20 14:10:00.215230217 +0000 UTC m=+0.149789214 container attach 1b83c0bd2603a1845671946c6657038e4a5b4229fd1bf542f7e9e1662c4ae89e (image=quay.io/ceph/keepalived:2.2.4, name=romantic_proskuriakova, version=2.2.4, release=1793, name=keepalived, io.openshift.expose-services=, description=keepalived for Ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.component=keepalived-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2023-02-22T09:23:20, io.k8s.display-name=Keepalived on RHEL 9, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.buildah.version=1.28.2, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.tags=Ceph keepalived, architecture=x86_64, vendor=Red Hat, Inc., vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 20 14:10:00 compute-0 romantic_proskuriakova[165183]: 0 0
Jan 20 14:10:00 compute-0 systemd[1]: libpod-1b83c0bd2603a1845671946c6657038e4a5b4229fd1bf542f7e9e1662c4ae89e.scope: Deactivated successfully.
Jan 20 14:10:00 compute-0 podman[165137]: 2026-01-20 14:10:00.22094705 +0000 UTC m=+0.155506127 container died 1b83c0bd2603a1845671946c6657038e4a5b4229fd1bf542f7e9e1662c4ae89e (image=quay.io/ceph/keepalived:2.2.4, name=romantic_proskuriakova, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.display-name=Keepalived on RHEL 9, distribution-scope=public, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, release=1793, name=keepalived, io.openshift.expose-services=, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.component=keepalived-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=2.2.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2, build-date=2023-02-22T09:23:20, summary=Provides keepalived on RHEL 9 for Ceph., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=Ceph keepalived, description=keepalived for Ceph)
Jan 20 14:10:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-9ceba0c5e396c7bec08e766e1dd8e851f655523aaef5f0249bbc816b97b850f3-merged.mount: Deactivated successfully.
Jan 20 14:10:00 compute-0 podman[165137]: 2026-01-20 14:10:00.278865982 +0000 UTC m=+0.213424979 container remove 1b83c0bd2603a1845671946c6657038e4a5b4229fd1bf542f7e9e1662c4ae89e (image=quay.io/ceph/keepalived:2.2.4, name=romantic_proskuriakova, summary=Provides keepalived on RHEL 9 for Ceph., io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1793, vcs-type=git, version=2.2.4, description=keepalived for Ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, io.k8s.display-name=Keepalived on RHEL 9, vendor=Red Hat, Inc., io.openshift.expose-services=, architecture=x86_64, build-date=2023-02-22T09:23:20, com.redhat.component=keepalived-container)
Jan 20 14:10:00 compute-0 systemd[1]: libpod-conmon-1b83c0bd2603a1845671946c6657038e4a5b4229fd1bf542f7e9e1662c4ae89e.scope: Deactivated successfully.
Jan 20 14:10:00 compute-0 systemd[1]: Stopping Ceph keepalived.rgw.default.compute-0.gcjsxe for e399cf45-e6b6-5393-99f1-75c601d3f188...
Jan 20 14:10:00 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:10:00 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:10:00 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:10:00.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:10:00 compute-0 sudo[165283]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yvflhopncybepmevmfzsywnwoppctpwc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918200.0302176-308-92183171515458/AnsiballZ_file.py'
Jan 20 14:10:00 compute-0 sudo[165283]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:10:00 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:10:00 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:10:00 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:10:00.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:10:00 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-keepalived-rgw-default-compute-0-gcjsxe[94961]: Tue Jan 20 14:10:00 2026: Stopping
Jan 20 14:10:00 compute-0 python3.9[165292]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:10:00 compute-0 sudo[165283]: pam_unix(sudo:session): session closed for user root
Jan 20 14:10:00 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v546: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:10:01 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:10:01 compute-0 ceph-mon[74360]: Reconfiguring keepalived.rgw.default.compute-0.gcjsxe (dependencies changed)...
Jan 20 14:10:01 compute-0 ceph-mon[74360]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 20 14:10:01 compute-0 ceph-mon[74360]: 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Jan 20 14:10:01 compute-0 ceph-mon[74360]: Reconfiguring daemon keepalived.rgw.default.compute-0.gcjsxe on compute-0
Jan 20 14:10:01 compute-0 sudo[165470]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vgnzwkvovnvuqhivczfassobxtecpgyd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918201.071007-458-115644672549432/AnsiballZ_file.py'
Jan 20 14:10:01 compute-0 sudo[165470]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:10:01 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-keepalived-rgw-default-compute-0-gcjsxe[94961]: Tue Jan 20 14:10:01 2026: Stopped
Jan 20 14:10:01 compute-0 python3.9[165472]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:10:01 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-keepalived-rgw-default-compute-0-gcjsxe[94961]: Tue Jan 20 14:10:01 2026: Stopped Keepalived v2.2.4 (08/21,2021)
Jan 20 14:10:01 compute-0 sudo[165470]: pam_unix(sudo:session): session closed for user root
Jan 20 14:10:01 compute-0 podman[165306]: 2026-01-20 14:10:01.614791327 +0000 UTC m=+1.132993399 container died 216d71b5dad97102f8a0d90f660e553dbff152f4aa28ae453da231535e09b878 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-keepalived-rgw-default-compute-0-gcjsxe, com.redhat.component=keepalived-container, summary=Provides keepalived on RHEL 9 for Ceph., io.buildah.version=1.28.2, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc., release=1793, io.openshift.expose-services=, name=keepalived, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived, build-date=2023-02-22T09:23:20, io.k8s.display-name=Keepalived on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, distribution-scope=public, vcs-type=git, description=keepalived for Ceph, version=2.2.4)
Jan 20 14:10:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-a7900c5890505e855b0f5cb23820624ef14318547c4515d911e56d53b5c2a650-merged.mount: Deactivated successfully.
Jan 20 14:10:02 compute-0 podman[165306]: 2026-01-20 14:10:02.088835119 +0000 UTC m=+1.607037161 container remove 216d71b5dad97102f8a0d90f660e553dbff152f4aa28ae453da231535e09b878 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-keepalived-rgw-default-compute-0-gcjsxe, build-date=2023-02-22T09:23:20, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=keepalived-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, vcs-type=git, io.buildah.version=1.28.2, name=keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, version=2.2.4, io.openshift.tags=Ceph keepalived, description=keepalived for Ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.expose-services=, summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc., architecture=x86_64, release=1793)
Jan 20 14:10:02 compute-0 bash[165306]: ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-keepalived-rgw-default-compute-0-gcjsxe
Jan 20 14:10:02 compute-0 sudo[165642]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zllganwoyyvstofaoaosstfqkovuurpd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918201.7844043-458-203568495442727/AnsiballZ_file.py'
Jan 20 14:10:02 compute-0 sudo[165642]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:10:02 compute-0 systemd[1]: ceph-e399cf45-e6b6-5393-99f1-75c601d3f188@keepalived.rgw.default.compute-0.gcjsxe.service: Deactivated successfully.
Jan 20 14:10:02 compute-0 systemd[1]: Stopped Ceph keepalived.rgw.default.compute-0.gcjsxe for e399cf45-e6b6-5393-99f1-75c601d3f188.
Jan 20 14:10:02 compute-0 systemd[1]: ceph-e399cf45-e6b6-5393-99f1-75c601d3f188@keepalived.rgw.default.compute-0.gcjsxe.service: Consumed 4.343s CPU time.
Jan 20 14:10:02 compute-0 systemd[1]: Starting Ceph keepalived.rgw.default.compute-0.gcjsxe for e399cf45-e6b6-5393-99f1-75c601d3f188...
Jan 20 14:10:02 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:10:02 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:10:02 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:10:02.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:10:02 compute-0 python3.9[165649]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:10:02 compute-0 ceph-mon[74360]: pgmap v546: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:10:02 compute-0 sudo[165642]: pam_unix(sudo:session): session closed for user root
Jan 20 14:10:02 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:10:02 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:10:02 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:10:02.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:10:02 compute-0 podman[165741]: 2026-01-20 14:10:02.566917468 +0000 UTC m=+0.037804724 container create 0c2042652fe8d88ae47b6333c235a533de4d966f44da3b69a5fc82baeacb10bf (image=quay.io/ceph/keepalived:2.2.4, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-keepalived-rgw-default-compute-0-gcjsxe, name=keepalived, build-date=2023-02-22T09:23:20, version=2.2.4, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, summary=Provides keepalived on RHEL 9 for Ceph., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.buildah.version=1.28.2, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, description=keepalived for Ceph, io.openshift.tags=Ceph keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=keepalived-container, release=1793, io.k8s.display-name=Keepalived on RHEL 9)
Jan 20 14:10:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efacd9237fd3f0e5b24cf23293b951cd66a39487231d06243ec9a52b1193d862/merged/etc/keepalived/keepalived.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:10:02 compute-0 podman[165741]: 2026-01-20 14:10:02.623766882 +0000 UTC m=+0.094654128 container init 0c2042652fe8d88ae47b6333c235a533de4d966f44da3b69a5fc82baeacb10bf (image=quay.io/ceph/keepalived:2.2.4, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-keepalived-rgw-default-compute-0-gcjsxe, description=keepalived for Ceph, version=2.2.4, release=1793, name=keepalived, io.openshift.tags=Ceph keepalived, vcs-type=git, com.redhat.component=keepalived-container, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.openshift.expose-services=, io.k8s.display-name=Keepalived on RHEL 9, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, summary=Provides keepalived on RHEL 9 for Ceph., build-date=2023-02-22T09:23:20, io.buildah.version=1.28.2, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Jan 20 14:10:02 compute-0 podman[165741]: 2026-01-20 14:10:02.629957157 +0000 UTC m=+0.100844383 container start 0c2042652fe8d88ae47b6333c235a533de4d966f44da3b69a5fc82baeacb10bf (image=quay.io/ceph/keepalived:2.2.4, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-keepalived-rgw-default-compute-0-gcjsxe, release=1793, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, summary=Provides keepalived on RHEL 9 for Ceph., description=keepalived for Ceph, version=2.2.4, io.openshift.tags=Ceph keepalived, com.redhat.component=keepalived-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, build-date=2023-02-22T09:23:20, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.k8s.display-name=Keepalived on RHEL 9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.28.2, distribution-scope=public, architecture=x86_64)
Jan 20 14:10:02 compute-0 bash[165741]: 0c2042652fe8d88ae47b6333c235a533de4d966f44da3b69a5fc82baeacb10bf
Jan 20 14:10:02 compute-0 podman[165741]: 2026-01-20 14:10:02.550126098 +0000 UTC m=+0.021013344 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Jan 20 14:10:02 compute-0 systemd[1]: Started Ceph keepalived.rgw.default.compute-0.gcjsxe for e399cf45-e6b6-5393-99f1-75c601d3f188.
Jan 20 14:10:02 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-keepalived-rgw-default-compute-0-gcjsxe[165791]: Tue Jan 20 14:10:02 2026: Starting Keepalived v2.2.4 (08/21,2021)
Jan 20 14:10:02 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-keepalived-rgw-default-compute-0-gcjsxe[165791]: Tue Jan 20 14:10:02 2026: Running on Linux 5.14.0-661.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Fri Jan 16 09:19:22 UTC 2026 (built for Linux 5.14.0)
Jan 20 14:10:02 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-keepalived-rgw-default-compute-0-gcjsxe[165791]: Tue Jan 20 14:10:02 2026: Command line: '/usr/sbin/keepalived' '-n' '-l' '-f' '/etc/keepalived/keepalived.conf'
Jan 20 14:10:02 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-keepalived-rgw-default-compute-0-gcjsxe[165791]: Tue Jan 20 14:10:02 2026: Configuration file /etc/keepalived/keepalived.conf
Jan 20 14:10:02 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-keepalived-rgw-default-compute-0-gcjsxe[165791]: Tue Jan 20 14:10:02 2026: NOTICE: setting config option max_auto_priority should result in better keepalived performance
Jan 20 14:10:02 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-keepalived-rgw-default-compute-0-gcjsxe[165791]: Tue Jan 20 14:10:02 2026: Starting VRRP child process, pid=4
Jan 20 14:10:02 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-keepalived-rgw-default-compute-0-gcjsxe[165791]: Tue Jan 20 14:10:02 2026: Startup complete
Jan 20 14:10:02 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-keepalived-rgw-default-compute-0-gcjsxe[165791]: Tue Jan 20 14:10:02 2026: (VI_0) Entering BACKUP STATE (init)
Jan 20 14:10:02 compute-0 sudo[165048]: pam_unix(sudo:session): session closed for user root
Jan 20 14:10:02 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-keepalived-rgw-default-compute-0-gcjsxe[165791]: Tue Jan 20 14:10:02 2026: VRRP_Script(check_backend) succeeded
Jan 20 14:10:02 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 14:10:02 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:10:02 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 14:10:02 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v547: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:10:02 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:10:02 compute-0 ceph-mgr[74653]: [cephadm INFO cephadm.serve] Reconfiguring keepalived.rgw.default.compute-1.cevitz (dependencies changed)...
Jan 20 14:10:02 compute-0 ceph-mgr[74653]: log_channel(cephadm) log [INF] : Reconfiguring keepalived.rgw.default.compute-1.cevitz (dependencies changed)...
Jan 20 14:10:02 compute-0 ceph-mgr[74653]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Jan 20 14:10:02 compute-0 ceph-mgr[74653]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Jan 20 14:10:02 compute-0 ceph-mgr[74653]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 20 14:10:02 compute-0 ceph-mgr[74653]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 20 14:10:02 compute-0 ceph-mgr[74653]: [cephadm INFO cephadm.serve] Reconfiguring daemon keepalived.rgw.default.compute-1.cevitz on compute-1
Jan 20 14:10:02 compute-0 ceph-mgr[74653]: log_channel(cephadm) log [INF] : Reconfiguring daemon keepalived.rgw.default.compute-1.cevitz on compute-1
Jan 20 14:10:02 compute-0 sudo[165883]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bricxoztpbwrysmifzozabtqihnfgfxn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918202.555525-458-72549765917955/AnsiballZ_file.py'
Jan 20 14:10:02 compute-0 sudo[165883]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:10:03 compute-0 python3.9[165885]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:10:03 compute-0 sudo[165883]: pam_unix(sudo:session): session closed for user root
Jan 20 14:10:03 compute-0 sshd-session[165710]: Connection closed by authenticating user root 157.245.78.139 port 44964 [preauth]
Jan 20 14:10:03 compute-0 sudo[166035]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-touboxahtqdtuzajyrmcbnirxpegruwu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918203.2395267-458-174850974062912/AnsiballZ_file.py'
Jan 20 14:10:03 compute-0 sudo[166035]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:10:03 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:10:03 compute-0 ceph-mon[74360]: pgmap v547: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:10:03 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:10:03 compute-0 ceph-mon[74360]: Reconfiguring keepalived.rgw.default.compute-1.cevitz (dependencies changed)...
Jan 20 14:10:03 compute-0 ceph-mon[74360]: 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Jan 20 14:10:03 compute-0 ceph-mon[74360]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 20 14:10:03 compute-0 ceph-mon[74360]: Reconfiguring daemon keepalived.rgw.default.compute-1.cevitz on compute-1
Jan 20 14:10:03 compute-0 python3.9[166037]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:10:03 compute-0 sudo[166035]: pam_unix(sudo:session): session closed for user root
Jan 20 14:10:04 compute-0 sudo[166187]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jdlqcisfvatvipjovjwxpximpmbbwzcq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918203.9891143-458-20089686742451/AnsiballZ_file.py'
Jan 20 14:10:04 compute-0 sudo[166187]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:10:04 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:10:04 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:10:04 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:10:04.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:10:04 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:10:04 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:10:04 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:10:04.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:10:04 compute-0 python3.9[166189]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:10:04 compute-0 sudo[166187]: pam_unix(sudo:session): session closed for user root
Jan 20 14:10:04 compute-0 podman[166190]: 2026-01-20 14:10:04.661221903 +0000 UTC m=+0.097543345 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 20 14:10:04 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v548: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:10:04 compute-0 auditd[701]: Audit daemon rotating log files
Jan 20 14:10:05 compute-0 sudo[166367]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-saikpcnizhdqiepopaasoudvykoobayr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918204.7773721-458-185866810277598/AnsiballZ_file.py'
Jan 20 14:10:05 compute-0 sudo[166367]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:10:05 compute-0 python3.9[166369]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:10:05 compute-0 sudo[166367]: pam_unix(sudo:session): session closed for user root
Jan 20 14:10:05 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 20 14:10:05 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:10:05 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 20 14:10:05 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:10:05 compute-0 sudo[166469]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:10:05 compute-0 sudo[166469]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:10:05 compute-0 sudo[166469]: pam_unix(sudo:session): session closed for user root
Jan 20 14:10:05 compute-0 sudo[166502]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:10:05 compute-0 sudo[166502]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:10:05 compute-0 sudo[166502]: pam_unix(sudo:session): session closed for user root
Jan 20 14:10:05 compute-0 sudo[166587]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bewkyvcuocxturkxbhdmsrcfpfmaspqx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918205.3889735-458-255765382538705/AnsiballZ_file.py'
Jan 20 14:10:05 compute-0 sudo[166587]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:10:05 compute-0 sudo[166552]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:10:05 compute-0 sudo[166552]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:10:05 compute-0 sudo[166552]: pam_unix(sudo:session): session closed for user root
Jan 20 14:10:05 compute-0 sudo[166597]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Jan 20 14:10:05 compute-0 sudo[166597]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:10:05 compute-0 ceph-mon[74360]: pgmap v548: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:10:05 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:10:05 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:10:05 compute-0 python3.9[166594]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:10:05 compute-0 sudo[166587]: pam_unix(sudo:session): session closed for user root
Jan 20 14:10:06 compute-0 podman[166718]: 2026-01-20 14:10:06.194862325 +0000 UTC m=+0.064944441 container exec a602f19ce9ef2d4922e558036fcbd51fac25abd19d28d7df857e5fbe8f959ba3 (image=quay.io/ceph/ceph:v18, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mon-compute-0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:10:06 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-keepalived-rgw-default-compute-0-gcjsxe[165791]: Tue Jan 20 14:10:06 2026: (VI_0) Entering MASTER STATE
Jan 20 14:10:06 compute-0 podman[166718]: 2026-01-20 14:10:06.31703948 +0000 UTC m=+0.187121616 container exec_died a602f19ce9ef2d4922e558036fcbd51fac25abd19d28d7df857e5fbe8f959ba3 (image=quay.io/ceph/ceph:v18, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 20 14:10:06 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:10:06 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:10:06 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:10:06 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:10:06.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:10:06 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:10:06 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:10:06 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:10:06.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:10:06 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v549: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:10:06 compute-0 sudo[166988]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vdqpkpgsvtefdadtzewmqcvxelgrenbj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918206.552256-611-218488222926510/AnsiballZ_command.py'
Jan 20 14:10:06 compute-0 sudo[166988]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:10:06 compute-0 podman[167003]: 2026-01-20 14:10:06.962758291 +0000 UTC m=+0.066175524 container exec a2973a514c852ff316e6ad2ff84585210b4ad01d86cdf2de06f634d9390a88e8 (image=quay.io/ceph/haproxy:2.3, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-haproxy-rgw-default-compute-0-nqkboe)
Jan 20 14:10:07 compute-0 python3.9[166995]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:10:07 compute-0 podman[167023]: 2026-01-20 14:10:07.030566777 +0000 UTC m=+0.052495007 container exec_died a2973a514c852ff316e6ad2ff84585210b4ad01d86cdf2de06f634d9390a88e8 (image=quay.io/ceph/haproxy:2.3, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-haproxy-rgw-default-compute-0-nqkboe)
Jan 20 14:10:07 compute-0 podman[167003]: 2026-01-20 14:10:07.038526061 +0000 UTC m=+0.141943304 container exec_died a2973a514c852ff316e6ad2ff84585210b4ad01d86cdf2de06f634d9390a88e8 (image=quay.io/ceph/haproxy:2.3, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-haproxy-rgw-default-compute-0-nqkboe)
Jan 20 14:10:07 compute-0 sudo[166988]: pam_unix(sudo:session): session closed for user root
Jan 20 14:10:07 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 20 14:10:07 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:10:07 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 20 14:10:07 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:10:07 compute-0 podman[167093]: 2026-01-20 14:10:07.322580182 +0000 UTC m=+0.072702009 container exec 0c2042652fe8d88ae47b6333c235a533de4d966f44da3b69a5fc82baeacb10bf (image=quay.io/ceph/keepalived:2.2.4, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-keepalived-rgw-default-compute-0-gcjsxe, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, build-date=2023-02-22T09:23:20, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=keepalived-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Keepalived on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, vcs-type=git, release=1793, version=2.2.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, description=keepalived for Ceph, name=keepalived, architecture=x86_64, vendor=Red Hat, Inc., io.buildah.version=1.28.2, summary=Provides keepalived on RHEL 9 for Ceph., distribution-scope=public)
Jan 20 14:10:07 compute-0 podman[167093]: 2026-01-20 14:10:07.334678835 +0000 UTC m=+0.084800562 container exec_died 0c2042652fe8d88ae47b6333c235a533de4d966f44da3b69a5fc82baeacb10bf (image=quay.io/ceph/keepalived:2.2.4, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-keepalived-rgw-default-compute-0-gcjsxe, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.28.2, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.component=keepalived-container, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2023-02-22T09:23:20, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1793, version=2.2.4, name=keepalived, io.k8s.display-name=Keepalived on RHEL 9, vendor=Red Hat, Inc., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, architecture=x86_64, description=keepalived for Ceph, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793)
Jan 20 14:10:07 compute-0 sudo[166597]: pam_unix(sudo:session): session closed for user root
Jan 20 14:10:07 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 14:10:07 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:10:07 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 14:10:07 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:10:07 compute-0 sudo[167127]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:10:07 compute-0 sudo[167127]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:10:07 compute-0 sudo[167127]: pam_unix(sudo:session): session closed for user root
Jan 20 14:10:07 compute-0 sudo[167177]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:10:07 compute-0 sudo[167177]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:10:07 compute-0 sudo[167177]: pam_unix(sudo:session): session closed for user root
Jan 20 14:10:07 compute-0 sudo[167228]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:10:07 compute-0 sudo[167228]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:10:07 compute-0 sudo[167228]: pam_unix(sudo:session): session closed for user root
Jan 20 14:10:07 compute-0 sudo[167253]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 14:10:07 compute-0 sudo[167253]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:10:07 compute-0 ceph-mon[74360]: pgmap v549: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:10:07 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:10:07 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:10:07 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:10:07 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:10:08 compute-0 python3.9[167367]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 20 14:10:08 compute-0 sudo[167253]: pam_unix(sudo:session): session closed for user root
Jan 20 14:10:08 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 20 14:10:08 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:10:08 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 20 14:10:08 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:10:08 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 14:10:08 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:10:08 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 20 14:10:08 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:10:08 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 20 14:10:08 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:10:08 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev c35b984e-109a-4855-bc41-3d2ca624a79c does not exist
Jan 20 14:10:08 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev a83e2198-6e23-4475-85ef-d205fe0dc5ae does not exist
Jan 20 14:10:08 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 386ccaaf-2a21-4c72-babf-7bad52094291 does not exist
Jan 20 14:10:08 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 20 14:10:08 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:10:08 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 20 14:10:08 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:10:08 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 14:10:08 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:10:08 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:10:08 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:10:08 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:10:08.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:10:08 compute-0 sudo[167406]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:10:08 compute-0 sudo[167406]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:10:08 compute-0 sudo[167406]: pam_unix(sudo:session): session closed for user root
Jan 20 14:10:08 compute-0 sudo[167431]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:10:08 compute-0 sudo[167431]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:10:08 compute-0 sudo[167431]: pam_unix(sudo:session): session closed for user root
Jan 20 14:10:08 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:10:08 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:10:08 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:10:08.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:10:08 compute-0 sudo[167456]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:10:08 compute-0 sudo[167456]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:10:08 compute-0 sudo[167456]: pam_unix(sudo:session): session closed for user root
Jan 20 14:10:08 compute-0 sudo[167502]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 14:10:08 compute-0 sudo[167502]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:10:08 compute-0 sudo[167508]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:10:08 compute-0 sudo[167508]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:10:08 compute-0 sudo[167508]: pam_unix(sudo:session): session closed for user root
Jan 20 14:10:08 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v550: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:10:08 compute-0 sudo[167580]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:10:08 compute-0 sudo[167580]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:10:08 compute-0 sudo[167580]: pam_unix(sudo:session): session closed for user root
Jan 20 14:10:08 compute-0 sudo[167716]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cbatierbpflpdquynbnfsluyvluejwho ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918208.5867634-665-271287296403958/AnsiballZ_systemd_service.py'
Jan 20 14:10:08 compute-0 sudo[167716]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:10:09 compute-0 podman[167726]: 2026-01-20 14:10:09.051664081 +0000 UTC m=+0.059017332 container create e1ff559cb17ef2b70e91f14b298e5aff2362466b759651704f1549c1f39a9ec3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_solomon, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 20 14:10:09 compute-0 systemd[1]: Started libpod-conmon-e1ff559cb17ef2b70e91f14b298e5aff2362466b759651704f1549c1f39a9ec3.scope.
Jan 20 14:10:09 compute-0 podman[167726]: 2026-01-20 14:10:09.014241458 +0000 UTC m=+0.021594759 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:10:09 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:10:09 compute-0 podman[167726]: 2026-01-20 14:10:09.156687755 +0000 UTC m=+0.164041016 container init e1ff559cb17ef2b70e91f14b298e5aff2362466b759651704f1549c1f39a9ec3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_solomon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 14:10:09 compute-0 podman[167726]: 2026-01-20 14:10:09.168519212 +0000 UTC m=+0.175872493 container start e1ff559cb17ef2b70e91f14b298e5aff2362466b759651704f1549c1f39a9ec3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_solomon, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 20 14:10:09 compute-0 podman[167726]: 2026-01-20 14:10:09.172224871 +0000 UTC m=+0.179578142 container attach e1ff559cb17ef2b70e91f14b298e5aff2362466b759651704f1549c1f39a9ec3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_solomon, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 14:10:09 compute-0 bold_solomon[167744]: 167 167
Jan 20 14:10:09 compute-0 systemd[1]: libpod-e1ff559cb17ef2b70e91f14b298e5aff2362466b759651704f1549c1f39a9ec3.scope: Deactivated successfully.
Jan 20 14:10:09 compute-0 podman[167726]: 2026-01-20 14:10:09.176511846 +0000 UTC m=+0.183865137 container died e1ff559cb17ef2b70e91f14b298e5aff2362466b759651704f1549c1f39a9ec3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_solomon, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:10:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-e8d33adc23eaa424b5709042d007d3a60efe7345ba0edb459c4af238b34ffdbc-merged.mount: Deactivated successfully.
Jan 20 14:10:09 compute-0 podman[167726]: 2026-01-20 14:10:09.233714199 +0000 UTC m=+0.241067490 container remove e1ff559cb17ef2b70e91f14b298e5aff2362466b759651704f1549c1f39a9ec3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_solomon, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 20 14:10:09 compute-0 systemd[1]: libpod-conmon-e1ff559cb17ef2b70e91f14b298e5aff2362466b759651704f1549c1f39a9ec3.scope: Deactivated successfully.
Jan 20 14:10:09 compute-0 python3.9[167722]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 20 14:10:09 compute-0 systemd[1]: Reloading.
Jan 20 14:10:09 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:10:09 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:10:09 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:10:09 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:10:09 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:10:09 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:10:09 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:10:09 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:10:09 compute-0 ceph-mon[74360]: pgmap v550: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:10:09 compute-0 systemd-rc-local-generator[167800]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:10:09 compute-0 systemd-sysv-generator[167809]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:10:09 compute-0 podman[167770]: 2026-01-20 14:10:09.420202135 +0000 UTC m=+0.058988871 container create b501257db9350681c1bf70b83ff71adbef593426d43e6d592f37b529d59af723 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_poincare, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True)
Jan 20 14:10:09 compute-0 podman[167770]: 2026-01-20 14:10:09.401745851 +0000 UTC m=+0.040532597 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:10:09 compute-0 systemd[1]: Started libpod-conmon-b501257db9350681c1bf70b83ff71adbef593426d43e6d592f37b529d59af723.scope.
Jan 20 14:10:09 compute-0 sudo[167716]: pam_unix(sudo:session): session closed for user root
Jan 20 14:10:09 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:10:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7793a9f84b002000f12119b9f583ae6a51253e93599fdf00477c4724c9757cf7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:10:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7793a9f84b002000f12119b9f583ae6a51253e93599fdf00477c4724c9757cf7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:10:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7793a9f84b002000f12119b9f583ae6a51253e93599fdf00477c4724c9757cf7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:10:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7793a9f84b002000f12119b9f583ae6a51253e93599fdf00477c4724c9757cf7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:10:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7793a9f84b002000f12119b9f583ae6a51253e93599fdf00477c4724c9757cf7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 14:10:09 compute-0 podman[167770]: 2026-01-20 14:10:09.668595811 +0000 UTC m=+0.307382597 container init b501257db9350681c1bf70b83ff71adbef593426d43e6d592f37b529d59af723 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_poincare, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:10:09 compute-0 podman[167770]: 2026-01-20 14:10:09.682069062 +0000 UTC m=+0.320855798 container start b501257db9350681c1bf70b83ff71adbef593426d43e6d592f37b529d59af723 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_poincare, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 14:10:09 compute-0 podman[167770]: 2026-01-20 14:10:09.688135694 +0000 UTC m=+0.326922480 container attach b501257db9350681c1bf70b83ff71adbef593426d43e6d592f37b529d59af723 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_poincare, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:10:10 compute-0 sudo[167974]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lagnzlkaegandjkezezlmdbiblmocdzs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918209.9378862-689-56366005646391/AnsiballZ_command.py'
Jan 20 14:10:10 compute-0 sudo[167974]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:10:10 compute-0 python3.9[167976]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:10:10 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:10:10 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:10:10 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:10:10.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:10:10 compute-0 sudo[167974]: pam_unix(sudo:session): session closed for user root
Jan 20 14:10:10 compute-0 happy_poincare[167820]: --> passed data devices: 0 physical, 1 LVM
Jan 20 14:10:10 compute-0 happy_poincare[167820]: --> relative data size: 1.0
Jan 20 14:10:10 compute-0 happy_poincare[167820]: --> All data devices are unavailable
Jan 20 14:10:10 compute-0 podman[167770]: 2026-01-20 14:10:10.465076632 +0000 UTC m=+1.103863378 container died b501257db9350681c1bf70b83ff71adbef593426d43e6d592f37b529d59af723 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_poincare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 20 14:10:10 compute-0 systemd[1]: libpod-b501257db9350681c1bf70b83ff71adbef593426d43e6d592f37b529d59af723.scope: Deactivated successfully.
Jan 20 14:10:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-7793a9f84b002000f12119b9f583ae6a51253e93599fdf00477c4724c9757cf7-merged.mount: Deactivated successfully.
Jan 20 14:10:10 compute-0 podman[167770]: 2026-01-20 14:10:10.514915427 +0000 UTC m=+1.153702163 container remove b501257db9350681c1bf70b83ff71adbef593426d43e6d592f37b529d59af723 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_poincare, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:10:10 compute-0 systemd[1]: libpod-conmon-b501257db9350681c1bf70b83ff71adbef593426d43e6d592f37b529d59af723.scope: Deactivated successfully.
Jan 20 14:10:10 compute-0 sudo[167502]: pam_unix(sudo:session): session closed for user root
Jan 20 14:10:10 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:10:10 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:10:10 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:10:10.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:10:10 compute-0 sudo[168053]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:10:10 compute-0 sudo[168053]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:10:10 compute-0 sudo[168053]: pam_unix(sudo:session): session closed for user root
Jan 20 14:10:10 compute-0 sudo[168103]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:10:10 compute-0 sudo[168103]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:10:10 compute-0 sudo[168103]: pam_unix(sudo:session): session closed for user root
Jan 20 14:10:10 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v551: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:10:10 compute-0 sudo[168151]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:10:10 compute-0 sudo[168151]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:10:10 compute-0 sudo[168151]: pam_unix(sudo:session): session closed for user root
Jan 20 14:10:10 compute-0 sudo[168184]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- lvm list --format json
Jan 20 14:10:10 compute-0 sudo[168184]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:10:10 compute-0 sudo[168252]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yxcudxrcjozrzxzsztgolhnnywdubsfg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918210.5243387-689-121369389445554/AnsiballZ_command.py'
Jan 20 14:10:10 compute-0 sudo[168252]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:10:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 14:10:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:10:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 20 14:10:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:10:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:10:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:10:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:10:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:10:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:10:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:10:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:10:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:10:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 20 14:10:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:10:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:10:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:10:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 20 14:10:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:10:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 20 14:10:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:10:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:10:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:10:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 20 14:10:10 compute-0 python3.9[168254]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:10:11 compute-0 sudo[168252]: pam_unix(sudo:session): session closed for user root
Jan 20 14:10:11 compute-0 podman[168299]: 2026-01-20 14:10:11.091729893 +0000 UTC m=+0.035250496 container create 98449c4c8d15b0e03d38c51e0b573c412773b20eaba93cff42c2012308babeb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_satoshi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:10:11 compute-0 systemd[1]: Started libpod-conmon-98449c4c8d15b0e03d38c51e0b573c412773b20eaba93cff42c2012308babeb7.scope.
Jan 20 14:10:11 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:10:11 compute-0 podman[168299]: 2026-01-20 14:10:11.166786953 +0000 UTC m=+0.110307576 container init 98449c4c8d15b0e03d38c51e0b573c412773b20eaba93cff42c2012308babeb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_satoshi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:10:11 compute-0 podman[168299]: 2026-01-20 14:10:11.076066583 +0000 UTC m=+0.019587206 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:10:11 compute-0 podman[168299]: 2026-01-20 14:10:11.172870347 +0000 UTC m=+0.116390950 container start 98449c4c8d15b0e03d38c51e0b573c412773b20eaba93cff42c2012308babeb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_satoshi, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True)
Jan 20 14:10:11 compute-0 nifty_satoshi[168348]: 167 167
Jan 20 14:10:11 compute-0 systemd[1]: libpod-98449c4c8d15b0e03d38c51e0b573c412773b20eaba93cff42c2012308babeb7.scope: Deactivated successfully.
Jan 20 14:10:11 compute-0 podman[168299]: 2026-01-20 14:10:11.177264754 +0000 UTC m=+0.120785367 container attach 98449c4c8d15b0e03d38c51e0b573c412773b20eaba93cff42c2012308babeb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_satoshi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507)
Jan 20 14:10:11 compute-0 podman[168299]: 2026-01-20 14:10:11.177795598 +0000 UTC m=+0.121316201 container died 98449c4c8d15b0e03d38c51e0b573c412773b20eaba93cff42c2012308babeb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_satoshi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 20 14:10:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-428e77d7b6a571a7cdf394ae1830f028a7a99b30cb870107f42c7e1381272747-merged.mount: Deactivated successfully.
Jan 20 14:10:11 compute-0 podman[168299]: 2026-01-20 14:10:11.214693558 +0000 UTC m=+0.158214161 container remove 98449c4c8d15b0e03d38c51e0b573c412773b20eaba93cff42c2012308babeb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_satoshi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 20 14:10:11 compute-0 systemd[1]: libpod-conmon-98449c4c8d15b0e03d38c51e0b573c412773b20eaba93cff42c2012308babeb7.scope: Deactivated successfully.
Jan 20 14:10:11 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:10:11 compute-0 podman[168434]: 2026-01-20 14:10:11.345169423 +0000 UTC m=+0.023426509 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:10:11 compute-0 sudo[168498]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wavpuxwurqmgbigbarmeeasvcinigxdb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918211.1515257-689-20752478723946/AnsiballZ_command.py'
Jan 20 14:10:11 compute-0 sudo[168498]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:10:11 compute-0 python3.9[168500]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:10:11 compute-0 sudo[168498]: pam_unix(sudo:session): session closed for user root
Jan 20 14:10:12 compute-0 podman[168434]: 2026-01-20 14:10:12.035939752 +0000 UTC m=+0.714196828 container create 8aea6ebae927520af960d0c0bf2cbbedf44da3c4cb65c6047d462b5df62267db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_ganguly, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default)
Jan 20 14:10:12 compute-0 ceph-mon[74360]: pgmap v551: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:10:12 compute-0 systemd[1]: Started libpod-conmon-8aea6ebae927520af960d0c0bf2cbbedf44da3c4cb65c6047d462b5df62267db.scope.
Jan 20 14:10:12 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:10:12 compute-0 sudo[168656]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ywyamnmynjrszdzuznpvdxzkuzakjsnd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918211.8209229-689-175030002675082/AnsiballZ_command.py'
Jan 20 14:10:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63f2b00ae971b56bb1fe5d2d46a21b8ad320d83a76ee43f4d51e7139eb821f6e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:10:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63f2b00ae971b56bb1fe5d2d46a21b8ad320d83a76ee43f4d51e7139eb821f6e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:10:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63f2b00ae971b56bb1fe5d2d46a21b8ad320d83a76ee43f4d51e7139eb821f6e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:10:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63f2b00ae971b56bb1fe5d2d46a21b8ad320d83a76ee43f4d51e7139eb821f6e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:10:12 compute-0 sudo[168656]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:10:12 compute-0 podman[168434]: 2026-01-20 14:10:12.172431459 +0000 UTC m=+0.850688555 container init 8aea6ebae927520af960d0c0bf2cbbedf44da3c4cb65c6047d462b5df62267db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_ganguly, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 20 14:10:12 compute-0 podman[168434]: 2026-01-20 14:10:12.179870278 +0000 UTC m=+0.858127344 container start 8aea6ebae927520af960d0c0bf2cbbedf44da3c4cb65c6047d462b5df62267db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_ganguly, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:10:12 compute-0 python3.9[168658]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:10:12 compute-0 sudo[168656]: pam_unix(sudo:session): session closed for user root
Jan 20 14:10:12 compute-0 podman[168434]: 2026-01-20 14:10:12.347610173 +0000 UTC m=+1.025867229 container attach 8aea6ebae927520af960d0c0bf2cbbedf44da3c4cb65c6047d462b5df62267db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_ganguly, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:10:12 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:10:12 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:10:12 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:10:12.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:10:12 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:10:12 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:10:12 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:10:12.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:10:12 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v552: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:10:12 compute-0 sudo[168814]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vclzbhwaamycdlxnndquexkzjoxmozyq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918212.5020967-689-158532751527111/AnsiballZ_command.py'
Jan 20 14:10:12 compute-0 sudo[168814]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:10:12 compute-0 charming_ganguly[168650]: {
Jan 20 14:10:12 compute-0 charming_ganguly[168650]:     "0": [
Jan 20 14:10:12 compute-0 charming_ganguly[168650]:         {
Jan 20 14:10:12 compute-0 charming_ganguly[168650]:             "devices": [
Jan 20 14:10:12 compute-0 charming_ganguly[168650]:                 "/dev/loop3"
Jan 20 14:10:12 compute-0 charming_ganguly[168650]:             ],
Jan 20 14:10:12 compute-0 charming_ganguly[168650]:             "lv_name": "ceph_lv0",
Jan 20 14:10:12 compute-0 charming_ganguly[168650]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:10:12 compute-0 charming_ganguly[168650]:             "lv_size": "7511998464",
Jan 20 14:10:12 compute-0 charming_ganguly[168650]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e399cf45-e6b6-5393-99f1-75c601d3f188,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afc3740a-ccee-46f8-b343-22648cc89376,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 20 14:10:12 compute-0 charming_ganguly[168650]:             "lv_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 14:10:12 compute-0 charming_ganguly[168650]:             "name": "ceph_lv0",
Jan 20 14:10:12 compute-0 charming_ganguly[168650]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:10:12 compute-0 charming_ganguly[168650]:             "tags": {
Jan 20 14:10:12 compute-0 charming_ganguly[168650]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:10:12 compute-0 charming_ganguly[168650]:                 "ceph.block_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 14:10:12 compute-0 charming_ganguly[168650]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 14:10:12 compute-0 charming_ganguly[168650]:                 "ceph.cluster_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 14:10:12 compute-0 charming_ganguly[168650]:                 "ceph.cluster_name": "ceph",
Jan 20 14:10:12 compute-0 charming_ganguly[168650]:                 "ceph.crush_device_class": "",
Jan 20 14:10:12 compute-0 charming_ganguly[168650]:                 "ceph.encrypted": "0",
Jan 20 14:10:12 compute-0 charming_ganguly[168650]:                 "ceph.osd_fsid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 14:10:12 compute-0 charming_ganguly[168650]:                 "ceph.osd_id": "0",
Jan 20 14:10:12 compute-0 charming_ganguly[168650]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 14:10:12 compute-0 charming_ganguly[168650]:                 "ceph.type": "block",
Jan 20 14:10:12 compute-0 charming_ganguly[168650]:                 "ceph.vdo": "0"
Jan 20 14:10:12 compute-0 charming_ganguly[168650]:             },
Jan 20 14:10:12 compute-0 charming_ganguly[168650]:             "type": "block",
Jan 20 14:10:12 compute-0 charming_ganguly[168650]:             "vg_name": "ceph_vg0"
Jan 20 14:10:12 compute-0 charming_ganguly[168650]:         }
Jan 20 14:10:12 compute-0 charming_ganguly[168650]:     ]
Jan 20 14:10:12 compute-0 charming_ganguly[168650]: }
Jan 20 14:10:12 compute-0 systemd[1]: libpod-8aea6ebae927520af960d0c0bf2cbbedf44da3c4cb65c6047d462b5df62267db.scope: Deactivated successfully.
Jan 20 14:10:12 compute-0 podman[168434]: 2026-01-20 14:10:12.894482605 +0000 UTC m=+1.572739691 container died 8aea6ebae927520af960d0c0bf2cbbedf44da3c4cb65c6047d462b5df62267db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_ganguly, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 20 14:10:13 compute-0 python3.9[168818]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:10:13 compute-0 sudo[168814]: pam_unix(sudo:session): session closed for user root
Jan 20 14:10:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-63f2b00ae971b56bb1fe5d2d46a21b8ad320d83a76ee43f4d51e7139eb821f6e-merged.mount: Deactivated successfully.
Jan 20 14:10:13 compute-0 ceph-mon[74360]: pgmap v552: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:10:13 compute-0 sudo[168982]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qzmvjetlxdnudyqrmhiizcaxbangwbaw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918213.2244818-689-243180658951563/AnsiballZ_command.py'
Jan 20 14:10:13 compute-0 sudo[168982]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:10:13 compute-0 python3.9[168984]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:10:13 compute-0 podman[168434]: 2026-01-20 14:10:13.946620777 +0000 UTC m=+2.624877853 container remove 8aea6ebae927520af960d0c0bf2cbbedf44da3c4cb65c6047d462b5df62267db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_ganguly, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:10:13 compute-0 systemd[1]: libpod-conmon-8aea6ebae927520af960d0c0bf2cbbedf44da3c4cb65c6047d462b5df62267db.scope: Deactivated successfully.
Jan 20 14:10:13 compute-0 sudo[168184]: pam_unix(sudo:session): session closed for user root
Jan 20 14:10:14 compute-0 sudo[168987]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:10:14 compute-0 sudo[168987]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:10:14 compute-0 sudo[168987]: pam_unix(sudo:session): session closed for user root
Jan 20 14:10:14 compute-0 sudo[169012]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:10:14 compute-0 sudo[169012]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:10:14 compute-0 sudo[169012]: pam_unix(sudo:session): session closed for user root
Jan 20 14:10:14 compute-0 sudo[169037]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:10:14 compute-0 sudo[169037]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:10:14 compute-0 sudo[169037]: pam_unix(sudo:session): session closed for user root
Jan 20 14:10:14 compute-0 sudo[169062]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- raw list --format json
Jan 20 14:10:14 compute-0 sudo[169062]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:10:14 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:10:14 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:10:14 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:10:14.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:10:14 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:10:14 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:10:14 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:10:14.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:10:14 compute-0 podman[169129]: 2026-01-20 14:10:14.599305335 +0000 UTC m=+0.042985843 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:10:14 compute-0 podman[169129]: 2026-01-20 14:10:14.695173963 +0000 UTC m=+0.138854381 container create 7727b63c7a034efd26b365371f9f7c93fc19c8f0e121b73d4eef18b9c620c764 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_colden, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef)
Jan 20 14:10:14 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v553: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:10:14 compute-0 systemd[1]: Started libpod-conmon-7727b63c7a034efd26b365371f9f7c93fc19c8f0e121b73d4eef18b9c620c764.scope.
Jan 20 14:10:14 compute-0 sudo[168982]: pam_unix(sudo:session): session closed for user root
Jan 20 14:10:14 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:10:14 compute-0 podman[169129]: 2026-01-20 14:10:14.889685765 +0000 UTC m=+0.333366213 container init 7727b63c7a034efd26b365371f9f7c93fc19c8f0e121b73d4eef18b9c620c764 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_colden, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 20 14:10:14 compute-0 podman[169129]: 2026-01-20 14:10:14.89807952 +0000 UTC m=+0.341759938 container start 7727b63c7a034efd26b365371f9f7c93fc19c8f0e121b73d4eef18b9c620c764 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_colden, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True)
Jan 20 14:10:14 compute-0 confident_colden[169146]: 167 167
Jan 20 14:10:14 compute-0 systemd[1]: libpod-7727b63c7a034efd26b365371f9f7c93fc19c8f0e121b73d4eef18b9c620c764.scope: Deactivated successfully.
Jan 20 14:10:14 compute-0 podman[169129]: 2026-01-20 14:10:14.927353854 +0000 UTC m=+0.371034272 container attach 7727b63c7a034efd26b365371f9f7c93fc19c8f0e121b73d4eef18b9c620c764 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_colden, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 20 14:10:14 compute-0 podman[169129]: 2026-01-20 14:10:14.927881669 +0000 UTC m=+0.371562097 container died 7727b63c7a034efd26b365371f9f7c93fc19c8f0e121b73d4eef18b9c620c764 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_colden, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:10:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-3b8a80848f86dc0c1c519d6f4fc467897fa4dbb2876dc44d59a6090fabd7d5d0-merged.mount: Deactivated successfully.
Jan 20 14:10:14 compute-0 podman[169129]: 2026-01-20 14:10:14.963646987 +0000 UTC m=+0.407327415 container remove 7727b63c7a034efd26b365371f9f7c93fc19c8f0e121b73d4eef18b9c620c764 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_colden, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:10:14 compute-0 systemd[1]: libpod-conmon-7727b63c7a034efd26b365371f9f7c93fc19c8f0e121b73d4eef18b9c620c764.scope: Deactivated successfully.
Jan 20 14:10:15 compute-0 sudo[169335]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xtnuymuryxkrqeslajpchvsapvktafro ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918214.8823092-689-133849140775046/AnsiballZ_command.py'
Jan 20 14:10:15 compute-0 sudo[169335]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:10:15 compute-0 podman[169292]: 2026-01-20 14:10:15.140426653 +0000 UTC m=+0.041064991 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:10:15 compute-0 podman[169292]: 2026-01-20 14:10:15.257620444 +0000 UTC m=+0.158258762 container create 738e7df16354924d897ca4c7fd9dafaf1f5a272383243e9c5bc3e2b4405c5433 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_kilby, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 20 14:10:15 compute-0 systemd[1]: Started libpod-conmon-738e7df16354924d897ca4c7fd9dafaf1f5a272383243e9c5bc3e2b4405c5433.scope.
Jan 20 14:10:15 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:10:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7af68804d5515338dbe8de66a546213c4a8db2b059fd7068af63dca431b76ce/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:10:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7af68804d5515338dbe8de66a546213c4a8db2b059fd7068af63dca431b76ce/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:10:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7af68804d5515338dbe8de66a546213c4a8db2b059fd7068af63dca431b76ce/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:10:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7af68804d5515338dbe8de66a546213c4a8db2b059fd7068af63dca431b76ce/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:10:15 compute-0 podman[169292]: 2026-01-20 14:10:15.399354471 +0000 UTC m=+0.299992859 container init 738e7df16354924d897ca4c7fd9dafaf1f5a272383243e9c5bc3e2b4405c5433 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_kilby, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 20 14:10:15 compute-0 podman[169292]: 2026-01-20 14:10:15.412116253 +0000 UTC m=+0.312754551 container start 738e7df16354924d897ca4c7fd9dafaf1f5a272383243e9c5bc3e2b4405c5433 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_kilby, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 20 14:10:15 compute-0 podman[169292]: 2026-01-20 14:10:15.417338513 +0000 UTC m=+0.317976881 container attach 738e7df16354924d897ca4c7fd9dafaf1f5a272383243e9c5bc3e2b4405c5433 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_kilby, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:10:15 compute-0 python3.9[169337]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:10:15 compute-0 sudo[169335]: pam_unix(sudo:session): session closed for user root
Jan 20 14:10:16 compute-0 vigorous_kilby[169340]: {
Jan 20 14:10:16 compute-0 vigorous_kilby[169340]:     "afc3740a-ccee-46f8-b343-22648cc89376": {
Jan 20 14:10:16 compute-0 vigorous_kilby[169340]:         "ceph_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 14:10:16 compute-0 vigorous_kilby[169340]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 20 14:10:16 compute-0 vigorous_kilby[169340]:         "osd_id": 0,
Jan 20 14:10:16 compute-0 vigorous_kilby[169340]:         "osd_uuid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 14:10:16 compute-0 vigorous_kilby[169340]:         "type": "bluestore"
Jan 20 14:10:16 compute-0 vigorous_kilby[169340]:     }
Jan 20 14:10:16 compute-0 vigorous_kilby[169340]: }
Jan 20 14:10:16 compute-0 systemd[1]: libpod-738e7df16354924d897ca4c7fd9dafaf1f5a272383243e9c5bc3e2b4405c5433.scope: Deactivated successfully.
Jan 20 14:10:16 compute-0 podman[169292]: 2026-01-20 14:10:16.358222623 +0000 UTC m=+1.258860941 container died 738e7df16354924d897ca4c7fd9dafaf1f5a272383243e9c5bc3e2b4405c5433 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_kilby, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:10:16 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:10:16 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:10:16 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:10:16.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:10:16 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:10:16 compute-0 ceph-mon[74360]: pgmap v553: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:10:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-a7af68804d5515338dbe8de66a546213c4a8db2b059fd7068af63dca431b76ce-merged.mount: Deactivated successfully.
Jan 20 14:10:16 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:10:16 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:10:16 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:10:16.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:10:16 compute-0 podman[169292]: 2026-01-20 14:10:16.550127335 +0000 UTC m=+1.450765623 container remove 738e7df16354924d897ca4c7fd9dafaf1f5a272383243e9c5bc3e2b4405c5433 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_kilby, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 20 14:10:16 compute-0 systemd[1]: libpod-conmon-738e7df16354924d897ca4c7fd9dafaf1f5a272383243e9c5bc3e2b4405c5433.scope: Deactivated successfully.
Jan 20 14:10:16 compute-0 sudo[169062]: pam_unix(sudo:session): session closed for user root
Jan 20 14:10:16 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 14:10:16 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:10:16 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 14:10:16 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:10:16 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev d7dbbcf1-af9c-4f9a-b207-51005c59ec15 does not exist
Jan 20 14:10:16 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 7fd09d1e-da63-4253-9856-b61ce0dad4c9 does not exist
Jan 20 14:10:16 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 420bb207-eeb0-4394-b1c1-4ac9db50bc72 does not exist
Jan 20 14:10:16 compute-0 sudo[169400]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:10:16 compute-0 sudo[169400]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:10:16 compute-0 sudo[169400]: pam_unix(sudo:session): session closed for user root
Jan 20 14:10:16 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v554: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:10:16 compute-0 sudo[169426]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 14:10:16 compute-0 sudo[169426]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:10:16 compute-0 sudo[169426]: pam_unix(sudo:session): session closed for user root
Jan 20 14:10:17 compute-0 sudo[169576]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nounfaeorsmvblzbpydyhisnoegidqly ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918217.060395-851-203568261072341/AnsiballZ_getent.py'
Jan 20 14:10:17 compute-0 sudo[169576]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:10:17 compute-0 python3.9[169578]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Jan 20 14:10:17 compute-0 sudo[169576]: pam_unix(sudo:session): session closed for user root
Jan 20 14:10:17 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:10:17 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:10:17 compute-0 ceph-mon[74360]: pgmap v554: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:10:18 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:10:18 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:10:18 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:10:18.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:10:18 compute-0 sudo[169729]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rkyxudkmwcyvoepkwbphwfgdtuxcuxrh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918217.9845226-875-32915145321153/AnsiballZ_group.py'
Jan 20 14:10:18 compute-0 sudo[169729]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:10:18 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:10:18 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:10:18 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:10:18.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:10:18 compute-0 python3.9[169731]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 20 14:10:18 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v555: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:10:19 compute-0 ceph-mon[74360]: pgmap v555: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:10:19 compute-0 groupadd[169733]: group added to /etc/group: name=libvirt, GID=42473
Jan 20 14:10:20 compute-0 groupadd[169733]: group added to /etc/gshadow: name=libvirt
Jan 20 14:10:20 compute-0 groupadd[169733]: new group: name=libvirt, GID=42473
Jan 20 14:10:20 compute-0 sudo[169729]: pam_unix(sudo:session): session closed for user root
Jan 20 14:10:20 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:10:20 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:10:20 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:10:20.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:10:20 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:10:20 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:10:20 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:10:20.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:10:20 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v556: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:10:20 compute-0 sudo[169889]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kfohkvlgitamfkdzpnxfwmmklvezdwxh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918220.2665708-899-28309782081121/AnsiballZ_user.py'
Jan 20 14:10:20 compute-0 sudo[169889]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:10:20 compute-0 python3.9[169891]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 20 14:10:21 compute-0 useradd[169893]: new user: name=libvirt, UID=42473, GID=42473, home=/home/libvirt, shell=/sbin/nologin, from=/dev/pts/0
Jan 20 14:10:21 compute-0 sudo[169889]: pam_unix(sudo:session): session closed for user root
Jan 20 14:10:21 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:10:21 compute-0 sudo[170049]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zlzezrbywxfjdqlocunzdnmziinpwokp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918221.6053107-932-89163579244392/AnsiballZ_setup.py'
Jan 20 14:10:21 compute-0 sudo[170049]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:10:21 compute-0 ceph-mon[74360]: pgmap v556: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:10:22 compute-0 python3.9[170051]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 20 14:10:22 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:10:22 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:10:22 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:10:22.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:10:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:10:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:10:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:10:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:10:22 compute-0 sudo[170049]: pam_unix(sudo:session): session closed for user root
Jan 20 14:10:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:10:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:10:22 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:10:22 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 20 14:10:22 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:10:22.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 20 14:10:22 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v557: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:10:22 compute-0 sudo[170134]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-khomchuatzknlbpvboeiwagbwdbclwiy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918221.6053107-932-89163579244392/AnsiballZ_dnf.py'
Jan 20 14:10:22 compute-0 sudo[170134]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:10:23 compute-0 python3.9[170136]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 20 14:10:23 compute-0 ceph-mon[74360]: pgmap v557: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:10:24 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:10:24 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:10:24 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:10:24.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:10:24 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:10:24 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:10:24 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:10:24.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:10:24 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v558: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:10:26 compute-0 ceph-mon[74360]: pgmap v558: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:10:26 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:10:26 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:10:26 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:10:26.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:10:26 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:10:26 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:10:26 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:10:26 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:10:26.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:10:26 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v559: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:10:28 compute-0 ceph-mon[74360]: pgmap v559: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:10:28 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:10:28 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:10:28 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:10:28.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:10:28 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:10:28 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:10:28 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:10:28.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:10:28 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v560: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:10:28 compute-0 sudo[170150]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:10:28 compute-0 sudo[170150]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:10:28 compute-0 sudo[170150]: pam_unix(sudo:session): session closed for user root
Jan 20 14:10:28 compute-0 sudo[170175]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:10:28 compute-0 sudo[170175]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:10:28 compute-0 sudo[170175]: pam_unix(sudo:session): session closed for user root
Jan 20 14:10:29 compute-0 podman[170200]: 2026-01-20 14:10:29.522937654 +0000 UTC m=+0.087268275 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:10:30 compute-0 ceph-mon[74360]: pgmap v560: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:10:30 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:10:30 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:10:30 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:10:30.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:10:30 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:10:30 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:10:30 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:10:30.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:10:30 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v561: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:10:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:10:30.721 160071 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:10:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:10:30.722 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:10:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:10:30.722 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:10:31 compute-0 ceph-mon[74360]: pgmap v561: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:10:31 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:10:32 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:10:32 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 20 14:10:32 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:10:32.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 20 14:10:32 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:10:32 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 20 14:10:32 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:10:32.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 20 14:10:32 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v562: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:10:33 compute-0 ceph-mon[74360]: pgmap v562: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:10:34 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:10:34 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:10:34 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:10:34.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:10:34 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:10:34 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:10:34 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:10:34.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:10:34 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v563: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:10:35 compute-0 podman[170394]: 2026-01-20 14:10:35.487819998 +0000 UTC m=+0.084909723 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251202, 
org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 20 14:10:35 compute-0 ceph-mon[74360]: pgmap v563: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:10:36 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:10:36 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:10:36 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:10:36.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:10:36 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:10:36 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:10:36 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:10:36 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:10:36.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:10:36 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v564: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:10:38 compute-0 ceph-mon[74360]: pgmap v564: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:10:38 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:10:38 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:10:38 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:10:38.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:10:38 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:10:38 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:10:38 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:10:38.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:10:38 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v565: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:10:40 compute-0 ceph-mon[74360]: pgmap v565: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:10:40 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:10:40 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:10:40 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:10:40.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:10:40 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:10:40 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:10:40 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:10:40.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:10:40 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v566: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:10:41 compute-0 ceph-mon[74360]: pgmap v566: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:10:41 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:10:42 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:10:42 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:10:42 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:10:42.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:10:42 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:10:42 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:10:42 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:10:42.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:10:42 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v567: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:10:43 compute-0 ceph-mon[74360]: pgmap v567: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:10:44 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:10:44 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:10:44 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:10:44.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:10:44 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:10:44 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 20 14:10:44 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:10:44.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 20 14:10:44 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v568: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:10:46 compute-0 ceph-mon[74360]: pgmap v568: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:10:46 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:10:46 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:10:46 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:10:46.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:10:46 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:10:46 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:10:46 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:10:46 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:10:46.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:10:46 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v569: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:10:48 compute-0 sshd-session[170434]: Connection closed by authenticating user root 157.245.78.139 port 45174 [preauth]
Jan 20 14:10:48 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:10:48 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:10:48 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:10:48.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:10:48 compute-0 ceph-mon[74360]: pgmap v569: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:10:48 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:10:48 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:10:48 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:10:48.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:10:48 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v570: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:10:49 compute-0 sudo[170439]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:10:49 compute-0 sudo[170439]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:10:49 compute-0 sudo[170439]: pam_unix(sudo:session): session closed for user root
Jan 20 14:10:49 compute-0 sudo[170466]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:10:49 compute-0 sudo[170466]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:10:49 compute-0 sudo[170466]: pam_unix(sudo:session): session closed for user root
Jan 20 14:10:50 compute-0 ceph-mon[74360]: pgmap v570: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:10:50 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:10:50 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:10:50 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:10:50.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:10:50 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:10:50 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 20 14:10:50 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:10:50.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 20 14:10:50 compute-0 kernel: SELinux:  Converting 2777 SID table entries...
Jan 20 14:10:50 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Jan 20 14:10:50 compute-0 kernel: SELinux:  policy capability open_perms=1
Jan 20 14:10:50 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Jan 20 14:10:50 compute-0 kernel: SELinux:  policy capability always_check_network=0
Jan 20 14:10:50 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 20 14:10:50 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 20 14:10:50 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 20 14:10:50 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v571: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:10:51 compute-0 ceph-mon[74360]: pgmap v571: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:10:51 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:10:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Optimize plan auto_2026-01-20_14:10:52
Jan 20 14:10:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 14:10:52 compute-0 ceph-mgr[74653]: [balancer INFO root] do_upmap
Jan 20 14:10:52 compute-0 ceph-mgr[74653]: [balancer INFO root] pools ['default.rgw.control', 'default.rgw.log', 'backups', 'cephfs.cephfs.data', 'volumes', '.mgr', 'vms', '.rgw.root', 'images', 'cephfs.cephfs.meta', 'default.rgw.meta']
Jan 20 14:10:52 compute-0 ceph-mgr[74653]: [balancer INFO root] prepared 0/10 changes
Jan 20 14:10:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:10:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:10:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:10:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:10:52 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:10:52 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:10:52 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:10:52.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:10:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:10:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:10:52 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:10:52 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 20 14:10:52 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:10:52.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 20 14:10:52 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v572: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:10:54 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:10:54 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:10:54 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:10:54.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:10:54 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:10:54 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:10:54 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:10:54.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:10:54 compute-0 ceph-mon[74360]: pgmap v572: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:10:54 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v573: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:10:56 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:10:56 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:10:56 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:10:56.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:10:56 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:10:56 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:10:56 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:10:56.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:10:56 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v574: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:10:56 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:10:56 compute-0 ceph-mon[74360]: pgmap v573: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:10:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 14:10:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:10:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 14:10:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:10:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:10:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:10:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:10:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:10:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:10:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:10:58 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:10:58 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:10:58 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:10:58.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:10:58 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:10:58 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:10:58 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:10:58.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:10:58 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v575: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:10:59 compute-0 ceph-mon[74360]: pgmap v574: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:11:00 compute-0 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Jan 20 14:11:00 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:11:00 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:11:00 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:11:00.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:11:00 compute-0 podman[170504]: 2026-01-20 14:11:00.555924199 +0000 UTC m=+0.109267163 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202)
Jan 20 14:11:00 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:11:00 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:11:00 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:11:00.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:11:00 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v576: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:11:01 compute-0 ceph-mon[74360]: pgmap v575: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:11:01 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:11:01 compute-0 kernel: SELinux:  Converting 2777 SID table entries...
Jan 20 14:11:01 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Jan 20 14:11:01 compute-0 kernel: SELinux:  policy capability open_perms=1
Jan 20 14:11:01 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Jan 20 14:11:01 compute-0 kernel: SELinux:  policy capability always_check_network=0
Jan 20 14:11:01 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 20 14:11:01 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 20 14:11:01 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 20 14:11:02 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:11:02 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:11:02 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:11:02.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:11:02 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:11:02 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:11:02 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:11:02.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:11:02 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v577: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:11:03 compute-0 ceph-mon[74360]: pgmap v576: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:11:04 compute-0 ceph-mon[74360]: pgmap v577: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:11:04 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:11:04 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:11:04 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:11:04.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:11:04 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:11:04 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:11:04 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:11:04.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:11:04 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v578: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:11:05 compute-0 ceph-mon[74360]: pgmap v578: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:11:06 compute-0 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Jan 20 14:11:06 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:11:06 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 20 14:11:06 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:11:06.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 20 14:11:06 compute-0 podman[170529]: 2026-01-20 14:11:06.522155258 +0000 UTC m=+0.102096364 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 20 14:11:06 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:11:06 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:11:06 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:11:06.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:11:06 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v579: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:11:06 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:11:07 compute-0 ceph-mon[74360]: pgmap v579: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:11:08 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:11:08 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:11:08 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:11:08.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:11:08 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:11:08 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:11:08 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:11:08.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:11:08 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v580: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:11:09 compute-0 sudo[170558]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:11:09 compute-0 sudo[170558]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:11:09 compute-0 sudo[170558]: pam_unix(sudo:session): session closed for user root
Jan 20 14:11:09 compute-0 sudo[170583]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:11:09 compute-0 sudo[170583]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:11:09 compute-0 sudo[170583]: pam_unix(sudo:session): session closed for user root
Jan 20 14:11:10 compute-0 ceph-mon[74360]: pgmap v580: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:11:10 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:11:10 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:11:10 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:11:10.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:11:10 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:11:10 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:11:10 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:11:10.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:11:10 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v581: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:11:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 14:11:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:11:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 20 14:11:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:11:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:11:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:11:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:11:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:11:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:11:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:11:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:11:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:11:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 20 14:11:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:11:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:11:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:11:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 20 14:11:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:11:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 20 14:11:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:11:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:11:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:11:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 20 14:11:11 compute-0 ceph-mon[74360]: pgmap v581: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:11:11 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:11:12 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:11:12 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:11:12 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:11:12.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:11:12 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:11:12 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 20 14:11:12 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:11:12.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 20 14:11:12 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v582: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:11:13 compute-0 ceph-mon[74360]: pgmap v582: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:11:14 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:11:14 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:11:14 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:11:14.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:11:14 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:11:14 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 20 14:11:14 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:11:14.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 20 14:11:14 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v583: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:11:16 compute-0 ceph-mon[74360]: pgmap v583: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:11:16 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:11:16 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 20 14:11:16 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:11:16.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 20 14:11:16 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:11:16 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:11:16 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:11:16.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:11:16 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v584: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:11:16 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:11:17 compute-0 sudo[171862]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:11:17 compute-0 sudo[171862]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:11:17 compute-0 sudo[171862]: pam_unix(sudo:session): session closed for user root
Jan 20 14:11:17 compute-0 sudo[171923]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:11:17 compute-0 sudo[171923]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:11:17 compute-0 sudo[171923]: pam_unix(sudo:session): session closed for user root
Jan 20 14:11:17 compute-0 sudo[171985]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:11:17 compute-0 sudo[171985]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:11:17 compute-0 sudo[171985]: pam_unix(sudo:session): session closed for user root
Jan 20 14:11:17 compute-0 sudo[172052]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 14:11:17 compute-0 sudo[172052]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:11:17 compute-0 ceph-mon[74360]: pgmap v584: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:11:17 compute-0 sudo[172052]: pam_unix(sudo:session): session closed for user root
Jan 20 14:11:18 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 14:11:18 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:11:18 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 20 14:11:18 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:11:18 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 20 14:11:18 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:11:18 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:11:18 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:11:18.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:11:18 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:11:18 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:11:18 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:11:18.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:11:18 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v585: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:11:19 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:11:19 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev ac08f057-865d-4288-8f17-5bdc14fdfec1 does not exist
Jan 20 14:11:19 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 481d0c22-6531-4f87-9140-c48496c8d308 does not exist
Jan 20 14:11:19 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev f3a65fb7-d157-4671-8e8c-eb911ccf9233 does not exist
Jan 20 14:11:19 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 20 14:11:19 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:11:19 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:11:19 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:11:19 compute-0 ceph-mon[74360]: pgmap v585: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:11:19 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 20 14:11:19 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:11:19 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 14:11:19 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:11:19 compute-0 sudo[173063]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:11:19 compute-0 sudo[173063]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:11:19 compute-0 sudo[173063]: pam_unix(sudo:session): session closed for user root
Jan 20 14:11:19 compute-0 sudo[173129]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:11:19 compute-0 sudo[173129]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:11:19 compute-0 sudo[173129]: pam_unix(sudo:session): session closed for user root
Jan 20 14:11:19 compute-0 sudo[173193]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:11:19 compute-0 sudo[173193]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:11:19 compute-0 sudo[173193]: pam_unix(sudo:session): session closed for user root
Jan 20 14:11:19 compute-0 sudo[173251]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 14:11:19 compute-0 sudo[173251]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:11:20 compute-0 podman[173494]: 2026-01-20 14:11:20.184209537 +0000 UTC m=+0.095786118 container create 9dcc12021172a21f2945db0129ee303d745b9f2faad6cc66e4a5281e108eb3b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_archimedes, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:11:20 compute-0 podman[173494]: 2026-01-20 14:11:20.125676779 +0000 UTC m=+0.037253370 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:11:20 compute-0 systemd[1]: Started libpod-conmon-9dcc12021172a21f2945db0129ee303d745b9f2faad6cc66e4a5281e108eb3b8.scope.
Jan 20 14:11:20 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:11:20 compute-0 podman[173494]: 2026-01-20 14:11:20.326497587 +0000 UTC m=+0.238074208 container init 9dcc12021172a21f2945db0129ee303d745b9f2faad6cc66e4a5281e108eb3b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_archimedes, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:11:20 compute-0 podman[173494]: 2026-01-20 14:11:20.33304431 +0000 UTC m=+0.244620931 container start 9dcc12021172a21f2945db0129ee303d745b9f2faad6cc66e4a5281e108eb3b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_archimedes, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 20 14:11:20 compute-0 podman[173494]: 2026-01-20 14:11:20.33763069 +0000 UTC m=+0.249207351 container attach 9dcc12021172a21f2945db0129ee303d745b9f2faad6cc66e4a5281e108eb3b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_archimedes, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:11:20 compute-0 laughing_archimedes[173568]: 167 167
Jan 20 14:11:20 compute-0 systemd[1]: libpod-9dcc12021172a21f2945db0129ee303d745b9f2faad6cc66e4a5281e108eb3b8.scope: Deactivated successfully.
Jan 20 14:11:20 compute-0 podman[173494]: 2026-01-20 14:11:20.347045768 +0000 UTC m=+0.258622389 container died 9dcc12021172a21f2945db0129ee303d745b9f2faad6cc66e4a5281e108eb3b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_archimedes, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:11:20 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:11:20 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:11:20 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:11:20 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:11:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-98f072d5d40648dec9f7839388229d37b0ad1f448d871439351c4b44a8d7386f-merged.mount: Deactivated successfully.
Jan 20 14:11:20 compute-0 podman[173494]: 2026-01-20 14:11:20.430121801 +0000 UTC m=+0.341698382 container remove 9dcc12021172a21f2945db0129ee303d745b9f2faad6cc66e4a5281e108eb3b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_archimedes, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 20 14:11:20 compute-0 systemd[1]: libpod-conmon-9dcc12021172a21f2945db0129ee303d745b9f2faad6cc66e4a5281e108eb3b8.scope: Deactivated successfully.
Jan 20 14:11:20 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:11:20 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:11:20 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:11:20.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:11:20 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:11:20 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:11:20 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:11:20.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:11:20 compute-0 podman[173752]: 2026-01-20 14:11:20.599163594 +0000 UTC m=+0.025446769 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:11:20 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v586: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:11:20 compute-0 podman[173752]: 2026-01-20 14:11:20.835731772 +0000 UTC m=+0.262014957 container create b20b2b00c20586aebaca0f9a4b13e8dcd8d20d21a1ba89d0b6f3b82b9cfcdd3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_driscoll, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Jan 20 14:11:20 compute-0 systemd[1]: Started libpod-conmon-b20b2b00c20586aebaca0f9a4b13e8dcd8d20d21a1ba89d0b6f3b82b9cfcdd3e.scope.
Jan 20 14:11:20 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:11:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f8346999b23ca797390ca42e6d148f02c3b53c801b0c22b1c42e3a149c113f2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:11:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f8346999b23ca797390ca42e6d148f02c3b53c801b0c22b1c42e3a149c113f2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:11:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f8346999b23ca797390ca42e6d148f02c3b53c801b0c22b1c42e3a149c113f2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:11:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f8346999b23ca797390ca42e6d148f02c3b53c801b0c22b1c42e3a149c113f2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:11:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f8346999b23ca797390ca42e6d148f02c3b53c801b0c22b1c42e3a149c113f2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 14:11:21 compute-0 podman[173752]: 2026-01-20 14:11:21.20164475 +0000 UTC m=+0.627927905 container init b20b2b00c20586aebaca0f9a4b13e8dcd8d20d21a1ba89d0b6f3b82b9cfcdd3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_driscoll, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:11:21 compute-0 podman[173752]: 2026-01-20 14:11:21.209012213 +0000 UTC m=+0.635295368 container start b20b2b00c20586aebaca0f9a4b13e8dcd8d20d21a1ba89d0b6f3b82b9cfcdd3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_driscoll, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:11:21 compute-0 podman[173752]: 2026-01-20 14:11:21.213042389 +0000 UTC m=+0.639325584 container attach b20b2b00c20586aebaca0f9a4b13e8dcd8d20d21a1ba89d0b6f3b82b9cfcdd3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_driscoll, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:11:21 compute-0 ceph-mon[74360]: pgmap v586: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:11:21 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:11:22 compute-0 focused_driscoll[173964]: --> passed data devices: 0 physical, 1 LVM
Jan 20 14:11:22 compute-0 focused_driscoll[173964]: --> relative data size: 1.0
Jan 20 14:11:22 compute-0 focused_driscoll[173964]: --> All data devices are unavailable
Jan 20 14:11:22 compute-0 systemd[1]: libpod-b20b2b00c20586aebaca0f9a4b13e8dcd8d20d21a1ba89d0b6f3b82b9cfcdd3e.scope: Deactivated successfully.
Jan 20 14:11:22 compute-0 podman[173752]: 2026-01-20 14:11:22.096950053 +0000 UTC m=+1.523233238 container died b20b2b00c20586aebaca0f9a4b13e8dcd8d20d21a1ba89d0b6f3b82b9cfcdd3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_driscoll, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Jan 20 14:11:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:11:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:11:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:11:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:11:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:11:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:11:22 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:11:22 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 20 14:11:22 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:11:22.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 20 14:11:22 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:11:22 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:11:22 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:11:22.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:11:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-9f8346999b23ca797390ca42e6d148f02c3b53c801b0c22b1c42e3a149c113f2-merged.mount: Deactivated successfully.
Jan 20 14:11:22 compute-0 podman[173752]: 2026-01-20 14:11:22.713251412 +0000 UTC m=+2.139534577 container remove b20b2b00c20586aebaca0f9a4b13e8dcd8d20d21a1ba89d0b6f3b82b9cfcdd3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_driscoll, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 20 14:11:22 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v587: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:11:22 compute-0 sudo[173251]: pam_unix(sudo:session): session closed for user root
Jan 20 14:11:22 compute-0 systemd[1]: libpod-conmon-b20b2b00c20586aebaca0f9a4b13e8dcd8d20d21a1ba89d0b6f3b82b9cfcdd3e.scope: Deactivated successfully.
Jan 20 14:11:22 compute-0 sudo[174956]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:11:22 compute-0 sudo[174956]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:11:22 compute-0 sudo[174956]: pam_unix(sudo:session): session closed for user root
Jan 20 14:11:22 compute-0 sudo[175023]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:11:22 compute-0 sudo[175023]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:11:22 compute-0 sudo[175023]: pam_unix(sudo:session): session closed for user root
Jan 20 14:11:22 compute-0 sudo[175093]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:11:22 compute-0 sudo[175093]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:11:22 compute-0 sudo[175093]: pam_unix(sudo:session): session closed for user root
Jan 20 14:11:23 compute-0 sudo[175152]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- lvm list --format json
Jan 20 14:11:23 compute-0 sudo[175152]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:11:23 compute-0 podman[175440]: 2026-01-20 14:11:23.373567867 +0000 UTC m=+0.058345183 container create d0a61c6f851a27c21d2e4b4ba860f43706d47f1f7b1120c90473b24b5594b6e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_einstein, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 20 14:11:23 compute-0 systemd[1]: Started libpod-conmon-d0a61c6f851a27c21d2e4b4ba860f43706d47f1f7b1120c90473b24b5594b6e4.scope.
Jan 20 14:11:23 compute-0 podman[175440]: 2026-01-20 14:11:23.342696856 +0000 UTC m=+0.027474272 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:11:23 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:11:23 compute-0 podman[175440]: 2026-01-20 14:11:23.47219394 +0000 UTC m=+0.156971276 container init d0a61c6f851a27c21d2e4b4ba860f43706d47f1f7b1120c90473b24b5594b6e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_einstein, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 20 14:11:23 compute-0 podman[175440]: 2026-01-20 14:11:23.47827362 +0000 UTC m=+0.163050936 container start d0a61c6f851a27c21d2e4b4ba860f43706d47f1f7b1120c90473b24b5594b6e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_einstein, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 14:11:23 compute-0 podman[175440]: 2026-01-20 14:11:23.482396118 +0000 UTC m=+0.167173494 container attach d0a61c6f851a27c21d2e4b4ba860f43706d47f1f7b1120c90473b24b5594b6e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_einstein, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:11:23 compute-0 clever_einstein[175525]: 167 167
Jan 20 14:11:23 compute-0 systemd[1]: libpod-d0a61c6f851a27c21d2e4b4ba860f43706d47f1f7b1120c90473b24b5594b6e4.scope: Deactivated successfully.
Jan 20 14:11:23 compute-0 podman[175440]: 2026-01-20 14:11:23.487868362 +0000 UTC m=+0.172645718 container died d0a61c6f851a27c21d2e4b4ba860f43706d47f1f7b1120c90473b24b5594b6e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_einstein, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:11:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-e83ead4678cccd9362b28990808806fe2620e5b960e6a0761fe5b797c90e2800-merged.mount: Deactivated successfully.
Jan 20 14:11:23 compute-0 podman[175440]: 2026-01-20 14:11:23.541023349 +0000 UTC m=+0.225800705 container remove d0a61c6f851a27c21d2e4b4ba860f43706d47f1f7b1120c90473b24b5594b6e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_einstein, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 20 14:11:23 compute-0 systemd[1]: libpod-conmon-d0a61c6f851a27c21d2e4b4ba860f43706d47f1f7b1120c90473b24b5594b6e4.scope: Deactivated successfully.
Jan 20 14:11:23 compute-0 podman[175678]: 2026-01-20 14:11:23.717266362 +0000 UTC m=+0.046266607 container create 2cb69d99539562ceb5be63d3039d4457e9b2043616b743e8f12dd05f6042c14c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_panini, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 20 14:11:23 compute-0 systemd[1]: Started libpod-conmon-2cb69d99539562ceb5be63d3039d4457e9b2043616b743e8f12dd05f6042c14c.scope.
Jan 20 14:11:23 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:11:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff6f0dd3af17f843d8236c54f164a645d089b94ea2416037dedd08388b95ccb6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:11:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff6f0dd3af17f843d8236c54f164a645d089b94ea2416037dedd08388b95ccb6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:11:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff6f0dd3af17f843d8236c54f164a645d089b94ea2416037dedd08388b95ccb6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:11:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff6f0dd3af17f843d8236c54f164a645d089b94ea2416037dedd08388b95ccb6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:11:23 compute-0 podman[175678]: 2026-01-20 14:11:23.700228124 +0000 UTC m=+0.029228379 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:11:23 compute-0 ceph-mon[74360]: pgmap v587: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:11:23 compute-0 podman[175678]: 2026-01-20 14:11:23.799243766 +0000 UTC m=+0.128244001 container init 2cb69d99539562ceb5be63d3039d4457e9b2043616b743e8f12dd05f6042c14c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_panini, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 20 14:11:23 compute-0 podman[175678]: 2026-01-20 14:11:23.805105041 +0000 UTC m=+0.134105276 container start 2cb69d99539562ceb5be63d3039d4457e9b2043616b743e8f12dd05f6042c14c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_panini, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:11:23 compute-0 podman[175678]: 2026-01-20 14:11:23.808441928 +0000 UTC m=+0.137442193 container attach 2cb69d99539562ceb5be63d3039d4457e9b2043616b743e8f12dd05f6042c14c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_panini, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 20 14:11:24 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:11:24 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 20 14:11:24 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:11:24.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 20 14:11:24 compute-0 busy_panini[175770]: {
Jan 20 14:11:24 compute-0 busy_panini[175770]:     "0": [
Jan 20 14:11:24 compute-0 busy_panini[175770]:         {
Jan 20 14:11:24 compute-0 busy_panini[175770]:             "devices": [
Jan 20 14:11:24 compute-0 busy_panini[175770]:                 "/dev/loop3"
Jan 20 14:11:24 compute-0 busy_panini[175770]:             ],
Jan 20 14:11:24 compute-0 busy_panini[175770]:             "lv_name": "ceph_lv0",
Jan 20 14:11:24 compute-0 busy_panini[175770]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:11:24 compute-0 busy_panini[175770]:             "lv_size": "7511998464",
Jan 20 14:11:24 compute-0 busy_panini[175770]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e399cf45-e6b6-5393-99f1-75c601d3f188,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afc3740a-ccee-46f8-b343-22648cc89376,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 20 14:11:24 compute-0 busy_panini[175770]:             "lv_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 14:11:24 compute-0 busy_panini[175770]:             "name": "ceph_lv0",
Jan 20 14:11:24 compute-0 busy_panini[175770]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:11:24 compute-0 busy_panini[175770]:             "tags": {
Jan 20 14:11:24 compute-0 busy_panini[175770]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:11:24 compute-0 busy_panini[175770]:                 "ceph.block_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 14:11:24 compute-0 busy_panini[175770]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 14:11:24 compute-0 busy_panini[175770]:                 "ceph.cluster_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 14:11:24 compute-0 busy_panini[175770]:                 "ceph.cluster_name": "ceph",
Jan 20 14:11:24 compute-0 busy_panini[175770]:                 "ceph.crush_device_class": "",
Jan 20 14:11:24 compute-0 busy_panini[175770]:                 "ceph.encrypted": "0",
Jan 20 14:11:24 compute-0 busy_panini[175770]:                 "ceph.osd_fsid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 14:11:24 compute-0 busy_panini[175770]:                 "ceph.osd_id": "0",
Jan 20 14:11:24 compute-0 busy_panini[175770]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 14:11:24 compute-0 busy_panini[175770]:                 "ceph.type": "block",
Jan 20 14:11:24 compute-0 busy_panini[175770]:                 "ceph.vdo": "0"
Jan 20 14:11:24 compute-0 busy_panini[175770]:             },
Jan 20 14:11:24 compute-0 busy_panini[175770]:             "type": "block",
Jan 20 14:11:24 compute-0 busy_panini[175770]:             "vg_name": "ceph_vg0"
Jan 20 14:11:24 compute-0 busy_panini[175770]:         }
Jan 20 14:11:24 compute-0 busy_panini[175770]:     ]
Jan 20 14:11:24 compute-0 busy_panini[175770]: }
Jan 20 14:11:24 compute-0 podman[175678]: 2026-01-20 14:11:24.58197224 +0000 UTC m=+0.910972485 container died 2cb69d99539562ceb5be63d3039d4457e9b2043616b743e8f12dd05f6042c14c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_panini, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2)
Jan 20 14:11:24 compute-0 systemd[1]: libpod-2cb69d99539562ceb5be63d3039d4457e9b2043616b743e8f12dd05f6042c14c.scope: Deactivated successfully.
Jan 20 14:11:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-ff6f0dd3af17f843d8236c54f164a645d089b94ea2416037dedd08388b95ccb6-merged.mount: Deactivated successfully.
Jan 20 14:11:24 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:11:24 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:11:24 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:11:24.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:11:24 compute-0 podman[175678]: 2026-01-20 14:11:24.649643949 +0000 UTC m=+0.978644194 container remove 2cb69d99539562ceb5be63d3039d4457e9b2043616b743e8f12dd05f6042c14c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_panini, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:11:24 compute-0 systemd[1]: libpod-conmon-2cb69d99539562ceb5be63d3039d4457e9b2043616b743e8f12dd05f6042c14c.scope: Deactivated successfully.
Jan 20 14:11:24 compute-0 sudo[175152]: pam_unix(sudo:session): session closed for user root
Jan 20 14:11:24 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v588: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:11:24 compute-0 sudo[176265]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:11:24 compute-0 sudo[176265]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:11:24 compute-0 sudo[176265]: pam_unix(sudo:session): session closed for user root
Jan 20 14:11:24 compute-0 sudo[176329]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:11:24 compute-0 sudo[176329]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:11:24 compute-0 sudo[176329]: pam_unix(sudo:session): session closed for user root
Jan 20 14:11:24 compute-0 sudo[176406]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:11:24 compute-0 sudo[176406]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:11:24 compute-0 sudo[176406]: pam_unix(sudo:session): session closed for user root
Jan 20 14:11:24 compute-0 sudo[176471]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- raw list --format json
Jan 20 14:11:24 compute-0 sudo[176471]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:11:25 compute-0 podman[176767]: 2026-01-20 14:11:25.235821535 +0000 UTC m=+0.024860980 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:11:25 compute-0 podman[176767]: 2026-01-20 14:11:25.792396864 +0000 UTC m=+0.581436279 container create 858cbbf575c878f36f8acb6d13059c82d93c65f81e3349e75bf84a3aea1c58a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_almeida, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:11:26 compute-0 systemd[1]: Started libpod-conmon-858cbbf575c878f36f8acb6d13059c82d93c65f81e3349e75bf84a3aea1c58a2.scope.
Jan 20 14:11:26 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:11:26 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:11:26 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:11:26 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:11:26.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:11:26 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:11:26 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:11:26 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:11:26.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:11:26 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v589: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:11:26 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:11:26 compute-0 ceph-mon[74360]: pgmap v588: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:11:27 compute-0 podman[176767]: 2026-01-20 14:11:27.431277167 +0000 UTC m=+2.220316672 container init 858cbbf575c878f36f8acb6d13059c82d93c65f81e3349e75bf84a3aea1c58a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_almeida, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 20 14:11:27 compute-0 podman[176767]: 2026-01-20 14:11:27.446275101 +0000 UTC m=+2.235314566 container start 858cbbf575c878f36f8acb6d13059c82d93c65f81e3349e75bf84a3aea1c58a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_almeida, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 20 14:11:27 compute-0 nice_almeida[177491]: 167 167
Jan 20 14:11:27 compute-0 systemd[1]: libpod-858cbbf575c878f36f8acb6d13059c82d93c65f81e3349e75bf84a3aea1c58a2.scope: Deactivated successfully.
Jan 20 14:11:28 compute-0 podman[176767]: 2026-01-20 14:11:28.004846123 +0000 UTC m=+2.793885578 container attach 858cbbf575c878f36f8acb6d13059c82d93c65f81e3349e75bf84a3aea1c58a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_almeida, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True)
Jan 20 14:11:28 compute-0 podman[176767]: 2026-01-20 14:11:28.005575403 +0000 UTC m=+2.794614888 container died 858cbbf575c878f36f8acb6d13059c82d93c65f81e3349e75bf84a3aea1c58a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_almeida, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:11:28 compute-0 ceph-mon[74360]: pgmap v589: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:11:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-4c32c4dd9f63f7f61076e7a901f3b4a5af03bb50fe3637796ce1d87489ba7e2a-merged.mount: Deactivated successfully.
Jan 20 14:11:28 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:11:28 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:11:28 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:11:28.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:11:28 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:11:28 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:11:28 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:11:28.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:11:28 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v590: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:11:28 compute-0 podman[176767]: 2026-01-20 14:11:28.778200968 +0000 UTC m=+3.567240423 container remove 858cbbf575c878f36f8acb6d13059c82d93c65f81e3349e75bf84a3aea1c58a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_almeida, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 20 14:11:28 compute-0 systemd[1]: libpod-conmon-858cbbf575c878f36f8acb6d13059c82d93c65f81e3349e75bf84a3aea1c58a2.scope: Deactivated successfully.
Jan 20 14:11:29 compute-0 podman[178754]: 2026-01-20 14:11:28.925048513 +0000 UTC m=+0.020971746 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:11:29 compute-0 podman[178754]: 2026-01-20 14:11:29.172680602 +0000 UTC m=+0.268603825 container create 18e46474f02057850ac2103dabe114b3c8411792cb767af84fc14f0d06dc09f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_nobel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:11:29 compute-0 sudo[179029]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:11:29 compute-0 sudo[179029]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:11:29 compute-0 sudo[179029]: pam_unix(sudo:session): session closed for user root
Jan 20 14:11:29 compute-0 sudo[179092]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:11:29 compute-0 sudo[179092]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:11:29 compute-0 sudo[179092]: pam_unix(sudo:session): session closed for user root
Jan 20 14:11:29 compute-0 systemd[1]: Started libpod-conmon-18e46474f02057850ac2103dabe114b3c8411792cb767af84fc14f0d06dc09f3.scope.
Jan 20 14:11:29 compute-0 ceph-mon[74360]: pgmap v590: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:11:29 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:11:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb4c59bd7813f84cdf7159c2bc05c7fad81b9781d3662ccdf1db92b84a6a2f84/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:11:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb4c59bd7813f84cdf7159c2bc05c7fad81b9781d3662ccdf1db92b84a6a2f84/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:11:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb4c59bd7813f84cdf7159c2bc05c7fad81b9781d3662ccdf1db92b84a6a2f84/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:11:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb4c59bd7813f84cdf7159c2bc05c7fad81b9781d3662ccdf1db92b84a6a2f84/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:11:29 compute-0 podman[178754]: 2026-01-20 14:11:29.923451769 +0000 UTC m=+1.019375022 container init 18e46474f02057850ac2103dabe114b3c8411792cb767af84fc14f0d06dc09f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_nobel, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:11:29 compute-0 podman[178754]: 2026-01-20 14:11:29.929441581 +0000 UTC m=+1.025364804 container start 18e46474f02057850ac2103dabe114b3c8411792cb767af84fc14f0d06dc09f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_nobel, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 20 14:11:30 compute-0 podman[178754]: 2026-01-20 14:11:30.123958899 +0000 UTC m=+1.219882162 container attach 18e46474f02057850ac2103dabe114b3c8411792cb767af84fc14f0d06dc09f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_nobel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 20 14:11:30 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:11:30 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:11:30 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:11:30.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:11:30 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:11:30 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:11:30 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:11:30.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:11:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:11:30.722 160071 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:11:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:11:30.725 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:11:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:11:30.725 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:11:30 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v591: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:11:30 compute-0 friendly_nobel[179217]: {
Jan 20 14:11:30 compute-0 friendly_nobel[179217]:     "afc3740a-ccee-46f8-b343-22648cc89376": {
Jan 20 14:11:30 compute-0 friendly_nobel[179217]:         "ceph_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 14:11:30 compute-0 friendly_nobel[179217]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 20 14:11:30 compute-0 friendly_nobel[179217]:         "osd_id": 0,
Jan 20 14:11:30 compute-0 friendly_nobel[179217]:         "osd_uuid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 14:11:30 compute-0 friendly_nobel[179217]:         "type": "bluestore"
Jan 20 14:11:30 compute-0 friendly_nobel[179217]:     }
Jan 20 14:11:30 compute-0 friendly_nobel[179217]: }
Jan 20 14:11:30 compute-0 systemd[1]: libpod-18e46474f02057850ac2103dabe114b3c8411792cb767af84fc14f0d06dc09f3.scope: Deactivated successfully.
Jan 20 14:11:30 compute-0 podman[178754]: 2026-01-20 14:11:30.775852294 +0000 UTC m=+1.871775547 container died 18e46474f02057850ac2103dabe114b3c8411792cb767af84fc14f0d06dc09f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_nobel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 20 14:11:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-fb4c59bd7813f84cdf7159c2bc05c7fad81b9781d3662ccdf1db92b84a6a2f84-merged.mount: Deactivated successfully.
Jan 20 14:11:31 compute-0 ceph-mon[74360]: pgmap v591: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:11:31 compute-0 podman[178754]: 2026-01-20 14:11:31.289157037 +0000 UTC m=+2.385080280 container remove 18e46474f02057850ac2103dabe114b3c8411792cb767af84fc14f0d06dc09f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_nobel, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:11:31 compute-0 sudo[176471]: pam_unix(sudo:session): session closed for user root
Jan 20 14:11:31 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 14:11:31 compute-0 systemd[1]: libpod-conmon-18e46474f02057850ac2103dabe114b3c8411792cb767af84fc14f0d06dc09f3.scope: Deactivated successfully.
Jan 20 14:11:31 compute-0 podman[179858]: 2026-01-20 14:11:31.414450151 +0000 UTC m=+0.604396447 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 14:11:31 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:11:31 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 14:11:31 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:11:31 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 38b29b64-0eb1-4787-a172-bd11e597f69b does not exist
Jan 20 14:11:31 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev e8316629-b5d7-44d3-9464-6dc608e3015e does not exist
Jan 20 14:11:31 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev a068bd2b-fb9b-48a8-a119-22099255a087 does not exist
Jan 20 14:11:31 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:11:31 compute-0 sudo[180360]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:11:31 compute-0 sudo[180360]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:11:31 compute-0 sudo[180360]: pam_unix(sudo:session): session closed for user root
Jan 20 14:11:31 compute-0 sudo[180423]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 14:11:31 compute-0 sudo[180423]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:11:31 compute-0 sudo[180423]: pam_unix(sudo:session): session closed for user root
Jan 20 14:11:32 compute-0 sshd-session[180320]: Connection closed by authenticating user root 157.245.78.139 port 53428 [preauth]
Jan 20 14:11:32 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:11:32 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:11:32 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:11:32.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:11:32 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:11:32 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:11:32 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:11:32.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:11:32 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v592: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:11:32 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:11:32 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:11:33 compute-0 ceph-mon[74360]: pgmap v592: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:11:34 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:11:34 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:11:34 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:11:34.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:11:34 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:11:34 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:11:34 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:11:34.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:11:34 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v593: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:11:36 compute-0 ceph-mon[74360]: pgmap v593: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:11:36 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:11:36 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:11:36 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:11:36.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:11:36 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:11:36 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:11:36 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:11:36.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:11:36 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v594: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:11:36 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:11:37 compute-0 podman[184082]: 2026-01-20 14:11:37.48662902 +0000 UTC m=+0.078273529 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible)
Jan 20 14:11:38 compute-0 ceph-mon[74360]: pgmap v594: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:11:38 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:11:38 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:11:38 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:11:38.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:11:38 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:11:38 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:11:38 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:11:38.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:11:38 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v595: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:11:39 compute-0 ceph-mon[74360]: pgmap v595: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:11:40 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:11:40 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:11:40 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:11:40.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:11:40 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:11:40 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:11:40 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:11:40.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:11:40 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v596: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:11:41 compute-0 ceph-mon[74360]: pgmap v596: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:11:41 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:11:42 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:11:42 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:11:42 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:11:42.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:11:42 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:11:42 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:11:42 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:11:42.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:11:42 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v597: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:11:43 compute-0 ceph-mon[74360]: pgmap v597: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:11:44 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:11:44 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:11:44 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:11:44.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:11:44 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:11:44 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:11:44 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:11:44.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:11:44 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v598: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:11:46 compute-0 ceph-mon[74360]: pgmap v598: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:11:46 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:11:46 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:11:46 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:11:46.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:11:46 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:11:46 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:11:46 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:11:46.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:11:46 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v599: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:11:46 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:11:48 compute-0 ceph-mon[74360]: pgmap v599: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:11:48 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:11:48 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:11:48 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:11:48.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:11:48 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:11:48 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:11:48 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:11:48.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:11:48 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v600: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:11:49 compute-0 sudo[188445]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:11:49 compute-0 sudo[188445]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:11:49 compute-0 sudo[188445]: pam_unix(sudo:session): session closed for user root
Jan 20 14:11:49 compute-0 sudo[188470]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:11:49 compute-0 sudo[188470]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:11:49 compute-0 sudo[188470]: pam_unix(sudo:session): session closed for user root
Jan 20 14:11:50 compute-0 ceph-mon[74360]: pgmap v600: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:11:50 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:11:50 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:11:50 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:11:50.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:11:50 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:11:50 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:11:50 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:11:50.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:11:50 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v601: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:11:51 compute-0 ceph-mon[74360]: pgmap v601: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:11:51 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:11:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Optimize plan auto_2026-01-20_14:11:52
Jan 20 14:11:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 14:11:52 compute-0 ceph-mgr[74653]: [balancer INFO root] do_upmap
Jan 20 14:11:52 compute-0 ceph-mgr[74653]: [balancer INFO root] pools ['cephfs.cephfs.meta', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.log', 'default.rgw.control', '.mgr', 'volumes', 'images', 'backups', 'default.rgw.meta', 'vms']
Jan 20 14:11:52 compute-0 ceph-mgr[74653]: [balancer INFO root] prepared 0/10 changes
Jan 20 14:11:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:11:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:11:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:11:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:11:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:11:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:11:52 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:11:52 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:11:52 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:11:52.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:11:52 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:11:52 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:11:52 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:11:52.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:11:52 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v602: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:11:54 compute-0 ceph-mon[74360]: pgmap v602: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:11:54 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:11:54 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:11:54 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:11:54.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:11:54 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:11:54 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:11:54 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:11:54.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:11:54 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v603: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:11:56 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:11:56 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:11:56 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:11:56.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:11:56 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:11:56 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:11:56 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:11:56.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:11:56 compute-0 ceph-mon[74360]: pgmap v603: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:11:56 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v604: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:11:56 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:11:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 14:11:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:11:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 14:11:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:11:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:11:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:11:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:11:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:11:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:11:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:11:57 compute-0 ceph-mon[74360]: pgmap v604: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:11:58 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:11:58 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:11:58 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:11:58.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:11:58 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:11:58 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:11:58 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:11:58.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:11:58 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v605: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:11:59 compute-0 ceph-mon[74360]: pgmap v605: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:12:00 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:12:00 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:12:00 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:12:00.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:12:00 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:12:00 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:12:00 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:12:00.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:12:00 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v606: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:12:01 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:12:02 compute-0 ceph-mon[74360]: pgmap v606: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:12:02 compute-0 podman[188524]: 2026-01-20 14:12:02.491825298 +0000 UTC m=+0.074176648 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Jan 20 14:12:02 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:12:02 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:12:02 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:12:02.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:12:02 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:12:02 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:12:02 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:12:02.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:12:02 compute-0 kernel: SELinux:  Converting 2778 SID table entries...
Jan 20 14:12:02 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Jan 20 14:12:02 compute-0 kernel: SELinux:  policy capability open_perms=1
Jan 20 14:12:02 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Jan 20 14:12:02 compute-0 kernel: SELinux:  policy capability always_check_network=0
Jan 20 14:12:02 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 20 14:12:02 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 20 14:12:02 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 20 14:12:02 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v607: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:12:04 compute-0 ceph-mon[74360]: pgmap v607: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:12:04 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:12:04 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:12:04 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:12:04.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:12:04 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:12:04 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:12:04 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:12:04.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:12:04 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v608: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:12:04 compute-0 groupadd[188553]: group added to /etc/group: name=dnsmasq, GID=992
Jan 20 14:12:04 compute-0 groupadd[188553]: group added to /etc/gshadow: name=dnsmasq
Jan 20 14:12:04 compute-0 groupadd[188553]: new group: name=dnsmasq, GID=992
Jan 20 14:12:05 compute-0 useradd[188560]: new user: name=dnsmasq, UID=991, GID=992, home=/var/lib/dnsmasq, shell=/usr/sbin/nologin, from=none
Jan 20 14:12:05 compute-0 dbus-broker-launch[768]: Noticed file-system modification, trigger reload.
Jan 20 14:12:05 compute-0 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=12 res=1
Jan 20 14:12:05 compute-0 dbus-broker-launch[768]: Noticed file-system modification, trigger reload.
Jan 20 14:12:05 compute-0 ceph-mon[74360]: pgmap v608: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:12:06 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:12:06 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:12:06 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:12:06.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:12:06 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:12:06 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:12:06 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:12:06.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:12:06 compute-0 groupadd[188573]: group added to /etc/group: name=clevis, GID=991
Jan 20 14:12:06 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v609: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:12:06 compute-0 groupadd[188573]: group added to /etc/gshadow: name=clevis
Jan 20 14:12:06 compute-0 groupadd[188573]: new group: name=clevis, GID=991
Jan 20 14:12:06 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:12:06 compute-0 useradd[188581]: new user: name=clevis, UID=990, GID=991, home=/var/cache/clevis, shell=/usr/sbin/nologin, from=none
Jan 20 14:12:06 compute-0 usermod[188591]: add 'clevis' to group 'tss'
Jan 20 14:12:06 compute-0 usermod[188591]: add 'clevis' to shadow group 'tss'
Jan 20 14:12:08 compute-0 podman[188604]: 2026-01-20 14:12:08.034087847 +0000 UTC m=+0.107308940 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 14:12:08 compute-0 ceph-mon[74360]: pgmap v609: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:12:08 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:12:08 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:12:08 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:12:08.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:12:08 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:12:08 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:12:08 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:12:08.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:12:08 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v610: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:12:09 compute-0 ceph-mon[74360]: pgmap v610: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:12:09 compute-0 sudo[188642]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:12:09 compute-0 sudo[188642]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:12:09 compute-0 sudo[188642]: pam_unix(sudo:session): session closed for user root
Jan 20 14:12:09 compute-0 sudo[188667]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:12:09 compute-0 sudo[188667]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:12:09 compute-0 sudo[188667]: pam_unix(sudo:session): session closed for user root
Jan 20 14:12:10 compute-0 polkitd[43447]: Reloading rules
Jan 20 14:12:10 compute-0 polkitd[43447]: Collecting garbage unconditionally...
Jan 20 14:12:10 compute-0 polkitd[43447]: Loading rules from directory /etc/polkit-1/rules.d
Jan 20 14:12:10 compute-0 polkitd[43447]: Loading rules from directory /usr/share/polkit-1/rules.d
Jan 20 14:12:10 compute-0 polkitd[43447]: Finished loading, compiling and executing 3 rules
Jan 20 14:12:10 compute-0 polkitd[43447]: Reloading rules
Jan 20 14:12:10 compute-0 polkitd[43447]: Collecting garbage unconditionally...
Jan 20 14:12:10 compute-0 polkitd[43447]: Loading rules from directory /etc/polkit-1/rules.d
Jan 20 14:12:10 compute-0 polkitd[43447]: Loading rules from directory /usr/share/polkit-1/rules.d
Jan 20 14:12:10 compute-0 polkitd[43447]: Finished loading, compiling and executing 3 rules
Jan 20 14:12:10 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:12:10 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:12:10 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:12:10.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:12:10 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:12:10 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:12:10 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:12:10.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:12:10 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v611: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:12:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 14:12:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:12:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 20 14:12:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:12:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:12:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:12:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:12:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:12:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:12:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:12:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:12:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:12:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 20 14:12:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:12:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:12:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:12:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 20 14:12:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:12:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 20 14:12:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:12:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:12:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:12:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 20 14:12:11 compute-0 groupadd[188859]: group added to /etc/group: name=ceph, GID=167
Jan 20 14:12:11 compute-0 groupadd[188859]: group added to /etc/gshadow: name=ceph
Jan 20 14:12:11 compute-0 groupadd[188859]: new group: name=ceph, GID=167
Jan 20 14:12:11 compute-0 useradd[188865]: new user: name=ceph, UID=167, GID=167, home=/var/lib/ceph, shell=/sbin/nologin, from=none
Jan 20 14:12:11 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:12:12 compute-0 ceph-mon[74360]: pgmap v611: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:12:12 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:12:12 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:12:12 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:12:12.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:12:12 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:12:12 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:12:12 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:12:12.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:12:12 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v612: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:12:13 compute-0 ceph-mon[74360]: pgmap v612: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:12:14 compute-0 sshd[1004]: Received signal 15; terminating.
Jan 20 14:12:14 compute-0 systemd[1]: Stopping OpenSSH server daemon...
Jan 20 14:12:14 compute-0 systemd[1]: sshd.service: Deactivated successfully.
Jan 20 14:12:14 compute-0 systemd[1]: sshd.service: Unit process 189477 (sshd-session) remains running after unit stopped.
Jan 20 14:12:14 compute-0 systemd[1]: sshd.service: Unit process 189484 (sshd-session) remains running after unit stopped.
Jan 20 14:12:14 compute-0 systemd[1]: Stopped OpenSSH server daemon.
Jan 20 14:12:14 compute-0 systemd[1]: sshd.service: Consumed 4.871s CPU time, 39.0M memory peak, read 564.0K from disk, written 4.0K to disk.
Jan 20 14:12:14 compute-0 systemd[1]: Stopped target sshd-keygen.target.
Jan 20 14:12:14 compute-0 systemd[1]: Stopping sshd-keygen.target...
Jan 20 14:12:14 compute-0 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 20 14:12:14 compute-0 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 20 14:12:14 compute-0 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 20 14:12:14 compute-0 systemd[1]: Reached target sshd-keygen.target.
Jan 20 14:12:14 compute-0 systemd[1]: Starting OpenSSH server daemon...
Jan 20 14:12:14 compute-0 sshd[189493]: Server listening on 0.0.0.0 port 22.
Jan 20 14:12:14 compute-0 sshd[189493]: Server listening on :: port 22.
Jan 20 14:12:14 compute-0 systemd[1]: Started OpenSSH server daemon.
Jan 20 14:12:14 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:12:14 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:12:14 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:12:14.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:12:14 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:12:14 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:12:14 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:12:14.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:12:14 compute-0 sshd-session[189477]: Connection closed by authenticating user root 157.245.78.139 port 50496 [preauth]
Jan 20 14:12:14 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v613: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:12:16 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:12:16 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:12:16 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:12:16.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:12:16 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:12:16 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:12:16 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:12:16.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:12:16 compute-0 ceph-mon[74360]: pgmap v613: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:12:16 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v614: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:12:17 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:12:18 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 20 14:12:18 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 20 14:12:18 compute-0 systemd[1]: Reloading.
Jan 20 14:12:18 compute-0 systemd-rc-local-generator[189755]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:12:18 compute-0 systemd-sysv-generator[189759]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:12:18 compute-0 ceph-mon[74360]: pgmap v614: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:12:18 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 20 14:12:18 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:12:18 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:12:18 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:12:18.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:12:18 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:12:18 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:12:18 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:12:18.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:12:18 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v615: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:12:20 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:12:20 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:12:20 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:12:20.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:12:20 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:12:20 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:12:20 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:12:20.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:12:20 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v616: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:12:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:12:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:12:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:12:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:12:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:12:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:12:22 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:12:22 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:12:22 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:12:22.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:12:22 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:12:22 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:12:22 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:12:22.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:12:22 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v617: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:12:23 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:12:23 compute-0 ceph-mon[74360]: pgmap v615: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:12:24 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:12:24 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:12:24 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:12:24.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:12:24 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:12:24 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:12:24 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:12:24.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:12:24 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v618: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:12:26 compute-0 ceph-mon[74360]: pgmap v616: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:12:26 compute-0 ceph-mon[74360]: pgmap v617: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:12:26 compute-0 sudo[170134]: pam_unix(sudo:session): session closed for user root
Jan 20 14:12:26 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:12:26 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:12:26 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:12:26.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:12:26 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:12:26 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:12:26 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:12:26.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:12:26 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v619: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:12:27 compute-0 ceph-mon[74360]: pgmap v618: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:12:27 compute-0 sudo[197034]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wxgxrvojxhcxefmjqqmyhxaprolscmhf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918346.5699995-968-236930537037840/AnsiballZ_systemd.py'
Jan 20 14:12:27 compute-0 sudo[197034]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:12:27 compute-0 python3.9[197056]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 20 14:12:27 compute-0 systemd[1]: Reloading.
Jan 20 14:12:27 compute-0 systemd-sysv-generator[197502]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:12:27 compute-0 systemd-rc-local-generator[197496]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:12:27 compute-0 sudo[197034]: pam_unix(sudo:session): session closed for user root
Jan 20 14:12:28 compute-0 ceph-mon[74360]: pgmap v619: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:12:28 compute-0 sudo[198292]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jfmbzvluqboglspuvkeuzuokhxvbakgk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918348.1037402-968-120709838125910/AnsiballZ_systemd.py'
Jan 20 14:12:28 compute-0 sudo[198292]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:12:28 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:12:28 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:12:28 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:12:28 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:12:28.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:12:28 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:12:28 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:12:28 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:12:28.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:12:28 compute-0 python3.9[198316]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 20 14:12:28 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v620: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:12:28 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 20 14:12:28 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 20 14:12:28 compute-0 systemd[1]: man-db-cache-update.service: Consumed 12.722s CPU time.
Jan 20 14:12:28 compute-0 systemd[1]: run-r61739195cc1742549e617659872aa3dc.service: Deactivated successfully.
Jan 20 14:12:28 compute-0 systemd[1]: Reloading.
Jan 20 14:12:28 compute-0 systemd-rc-local-generator[198506]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:12:28 compute-0 systemd-sysv-generator[198510]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:12:29 compute-0 sudo[198292]: pam_unix(sudo:session): session closed for user root
Jan 20 14:12:29 compute-0 ceph-mon[74360]: pgmap v620: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:12:29 compute-0 sudo[198665]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rmcdmgvrenzmijxmabwutjxwozpyeznp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918349.3116884-968-237250084244765/AnsiballZ_systemd.py'
Jan 20 14:12:29 compute-0 sudo[198665]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:12:29 compute-0 sudo[198668]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:12:29 compute-0 sudo[198668]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:12:29 compute-0 sudo[198668]: pam_unix(sudo:session): session closed for user root
Jan 20 14:12:29 compute-0 python3.9[198667]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 20 14:12:29 compute-0 sudo[198693]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:12:29 compute-0 sudo[198693]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:12:29 compute-0 sudo[198693]: pam_unix(sudo:session): session closed for user root
Jan 20 14:12:29 compute-0 systemd[1]: Reloading.
Jan 20 14:12:30 compute-0 systemd-rc-local-generator[198745]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:12:30 compute-0 systemd-sysv-generator[198749]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:12:30 compute-0 sudo[198665]: pam_unix(sudo:session): session closed for user root
Jan 20 14:12:30 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:12:30 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:12:30 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:12:30.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:12:30 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:12:30 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:12:30 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:12:30.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:12:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:12:30.723 160071 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:12:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:12:30.724 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:12:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:12:30.724 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:12:30 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v621: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:12:30 compute-0 sudo[198906]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lvofxmakqreeawmuwvchqcvqqjshgbuv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918350.6331396-968-36891638564805/AnsiballZ_systemd.py'
Jan 20 14:12:30 compute-0 sudo[198906]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:12:31 compute-0 python3.9[198908]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 20 14:12:31 compute-0 systemd[1]: Reloading.
Jan 20 14:12:31 compute-0 systemd-rc-local-generator[198940]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:12:31 compute-0 systemd-sysv-generator[198944]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:12:31 compute-0 sudo[198906]: pam_unix(sudo:session): session closed for user root
Jan 20 14:12:32 compute-0 ceph-mon[74360]: pgmap v621: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:12:32 compute-0 sudo[199041]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:12:32 compute-0 sudo[199041]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:12:32 compute-0 sudo[199041]: pam_unix(sudo:session): session closed for user root
Jan 20 14:12:32 compute-0 sudo[199076]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:12:32 compute-0 sudo[199076]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:12:32 compute-0 sudo[199076]: pam_unix(sudo:session): session closed for user root
Jan 20 14:12:32 compute-0 sudo[199123]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:12:32 compute-0 sudo[199123]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:12:32 compute-0 sudo[199123]: pam_unix(sudo:session): session closed for user root
Jan 20 14:12:32 compute-0 sudo[199170]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pfmcoplzevfkloupkozmmyqdrqsmrikq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918352.0520616-1055-136686553980494/AnsiballZ_systemd.py'
Jan 20 14:12:32 compute-0 sudo[199170]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:12:32 compute-0 sudo[199173]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 14:12:32 compute-0 sudo[199173]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:12:32 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:12:32 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:12:32 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:12:32.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:12:32 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:12:32 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:12:32 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:12:32.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:12:32 compute-0 python3.9[199176]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 20 14:12:32 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v622: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:12:32 compute-0 podman[199214]: 2026-01-20 14:12:32.873454263 +0000 UTC m=+0.080216830 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:12:32 compute-0 systemd[1]: Reloading.
Jan 20 14:12:32 compute-0 systemd-sysv-generator[199269]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:12:32 compute-0 systemd-rc-local-generator[199262]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:12:33 compute-0 sudo[199173]: pam_unix(sudo:session): session closed for user root
Jan 20 14:12:33 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 14:12:33 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:12:33 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 20 14:12:33 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:12:33 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 20 14:12:33 compute-0 sudo[199170]: pam_unix(sudo:session): session closed for user root
Jan 20 14:12:33 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:12:33 compute-0 sudo[199436]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kuguillztyvujhhlaykuxzcvbixefoov ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918353.329554-1055-251260899384057/AnsiballZ_systemd.py'
Jan 20 14:12:33 compute-0 sudo[199436]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:12:33 compute-0 python3.9[199438]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 20 14:12:34 compute-0 systemd[1]: Reloading.
Jan 20 14:12:34 compute-0 systemd-sysv-generator[199472]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:12:34 compute-0 systemd-rc-local-generator[199468]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:12:34 compute-0 sudo[199436]: pam_unix(sudo:session): session closed for user root
Jan 20 14:12:34 compute-0 ceph-mon[74360]: pgmap v622: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:12:34 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:12:34 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:12:34 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:12:34 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev e185e4ae-9ea8-4652-bc02-d6a7e9c2729e does not exist
Jan 20 14:12:34 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev c37ffb61-eb8f-47a5-9b08-6e7392af00dc does not exist
Jan 20 14:12:34 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev a4b81461-9287-4aef-8f09-5ad7a369306f does not exist
Jan 20 14:12:34 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 20 14:12:34 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:12:34 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 20 14:12:34 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:12:34 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 14:12:34 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:12:34 compute-0 sudo[199524]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:12:34 compute-0 sudo[199524]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:12:34 compute-0 sudo[199524]: pam_unix(sudo:session): session closed for user root
Jan 20 14:12:34 compute-0 sudo[199578]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:12:34 compute-0 sudo[199578]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:12:34 compute-0 sudo[199578]: pam_unix(sudo:session): session closed for user root
Jan 20 14:12:34 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:12:34 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:12:34 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:12:34.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:12:34 compute-0 sudo[199626]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:12:34 compute-0 sudo[199626]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:12:34 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:12:34 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:12:34 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:12:34.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:12:34 compute-0 sudo[199626]: pam_unix(sudo:session): session closed for user root
Jan 20 14:12:34 compute-0 sudo[199675]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 14:12:34 compute-0 sudo[199675]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:12:34 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v623: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:12:34 compute-0 sudo[199727]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-isjtoxabraiyybxhesddawdtsbhgssje ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918354.5112422-1055-78937718583670/AnsiballZ_systemd.py'
Jan 20 14:12:34 compute-0 sudo[199727]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:12:35 compute-0 python3.9[199729]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 20 14:12:35 compute-0 podman[199770]: 2026-01-20 14:12:35.011248642 +0000 UTC m=+0.022238104 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:12:35 compute-0 systemd[1]: Reloading.
Jan 20 14:12:35 compute-0 systemd-sysv-generator[199815]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:12:35 compute-0 systemd-rc-local-generator[199811]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:12:35 compute-0 podman[199770]: 2026-01-20 14:12:35.405853514 +0000 UTC m=+0.416842986 container create dab0e5e8950cbdec1ccf25e1ad702497c54b828bd27b5aa5f4a5b2701385143a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_babbage, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:12:35 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:12:35 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:12:35 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:12:35 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:12:35 compute-0 ceph-mon[74360]: pgmap v623: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:12:35 compute-0 systemd[1]: Started libpod-conmon-dab0e5e8950cbdec1ccf25e1ad702497c54b828bd27b5aa5f4a5b2701385143a.scope.
Jan 20 14:12:35 compute-0 sudo[199727]: pam_unix(sudo:session): session closed for user root
Jan 20 14:12:35 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:12:35 compute-0 podman[199770]: 2026-01-20 14:12:35.584049753 +0000 UTC m=+0.595039205 container init dab0e5e8950cbdec1ccf25e1ad702497c54b828bd27b5aa5f4a5b2701385143a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_babbage, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef)
Jan 20 14:12:35 compute-0 podman[199770]: 2026-01-20 14:12:35.594499721 +0000 UTC m=+0.605489203 container start dab0e5e8950cbdec1ccf25e1ad702497c54b828bd27b5aa5f4a5b2701385143a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_babbage, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 20 14:12:35 compute-0 podman[199770]: 2026-01-20 14:12:35.599994492 +0000 UTC m=+0.610983934 container attach dab0e5e8950cbdec1ccf25e1ad702497c54b828bd27b5aa5f4a5b2701385143a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_babbage, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:12:35 compute-0 elegant_babbage[199825]: 167 167
Jan 20 14:12:35 compute-0 systemd[1]: libpod-dab0e5e8950cbdec1ccf25e1ad702497c54b828bd27b5aa5f4a5b2701385143a.scope: Deactivated successfully.
Jan 20 14:12:35 compute-0 podman[199770]: 2026-01-20 14:12:35.601994558 +0000 UTC m=+0.612984020 container died dab0e5e8950cbdec1ccf25e1ad702497c54b828bd27b5aa5f4a5b2701385143a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_babbage, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 20 14:12:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-1f4bf3840c29a03bbb65f46b2ebebc692b60bddd882f1604dfde163d4f89a4db-merged.mount: Deactivated successfully.
Jan 20 14:12:35 compute-0 podman[199770]: 2026-01-20 14:12:35.664623523 +0000 UTC m=+0.675612955 container remove dab0e5e8950cbdec1ccf25e1ad702497c54b828bd27b5aa5f4a5b2701385143a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_babbage, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 14:12:35 compute-0 systemd[1]: libpod-conmon-dab0e5e8950cbdec1ccf25e1ad702497c54b828bd27b5aa5f4a5b2701385143a.scope: Deactivated successfully.
Jan 20 14:12:35 compute-0 podman[199916]: 2026-01-20 14:12:35.861531288 +0000 UTC m=+0.058376579 container create 40f0bd55889ab2f8f7c73578ce854904e0fae71c60e86cfe9cc30d710f317348 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_panini, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef)
Jan 20 14:12:35 compute-0 systemd[1]: Started libpod-conmon-40f0bd55889ab2f8f7c73578ce854904e0fae71c60e86cfe9cc30d710f317348.scope.
Jan 20 14:12:35 compute-0 podman[199916]: 2026-01-20 14:12:35.844331505 +0000 UTC m=+0.041176826 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:12:35 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:12:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/065345f7f6a63c761acdff4d75e88f75c3986fe3ac807bfe451fb358e22d4c92/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:12:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/065345f7f6a63c761acdff4d75e88f75c3986fe3ac807bfe451fb358e22d4c92/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:12:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/065345f7f6a63c761acdff4d75e88f75c3986fe3ac807bfe451fb358e22d4c92/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:12:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/065345f7f6a63c761acdff4d75e88f75c3986fe3ac807bfe451fb358e22d4c92/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:12:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/065345f7f6a63c761acdff4d75e88f75c3986fe3ac807bfe451fb358e22d4c92/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 14:12:35 compute-0 podman[199916]: 2026-01-20 14:12:35.974363718 +0000 UTC m=+0.171209099 container init 40f0bd55889ab2f8f7c73578ce854904e0fae71c60e86cfe9cc30d710f317348 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_panini, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:12:35 compute-0 podman[199916]: 2026-01-20 14:12:35.989072472 +0000 UTC m=+0.185917813 container start 40f0bd55889ab2f8f7c73578ce854904e0fae71c60e86cfe9cc30d710f317348 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_panini, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS)
Jan 20 14:12:35 compute-0 podman[199916]: 2026-01-20 14:12:35.99369183 +0000 UTC m=+0.190537221 container attach 40f0bd55889ab2f8f7c73578ce854904e0fae71c60e86cfe9cc30d710f317348 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_panini, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:12:36 compute-0 sudo[200021]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-amivlytrkkrwnyvzfywreghuavqenqzo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918355.7412486-1055-21294822570262/AnsiballZ_systemd.py'
Jan 20 14:12:36 compute-0 sudo[200021]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:12:36 compute-0 python3.9[200023]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 20 14:12:36 compute-0 sudo[200021]: pam_unix(sudo:session): session closed for user root
Jan 20 14:12:36 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:12:36 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:12:36 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:12:36.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:12:36 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:12:36 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:12:36 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:12:36.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:12:36 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v624: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:12:36 compute-0 epic_panini[199966]: --> passed data devices: 0 physical, 1 LVM
Jan 20 14:12:36 compute-0 epic_panini[199966]: --> relative data size: 1.0
Jan 20 14:12:36 compute-0 epic_panini[199966]: --> All data devices are unavailable
Jan 20 14:12:36 compute-0 systemd[1]: libpod-40f0bd55889ab2f8f7c73578ce854904e0fae71c60e86cfe9cc30d710f317348.scope: Deactivated successfully.
Jan 20 14:12:36 compute-0 podman[199916]: 2026-01-20 14:12:36.812714434 +0000 UTC m=+1.009559755 container died 40f0bd55889ab2f8f7c73578ce854904e0fae71c60e86cfe9cc30d710f317348 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_panini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 20 14:12:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-065345f7f6a63c761acdff4d75e88f75c3986fe3ac807bfe451fb358e22d4c92-merged.mount: Deactivated successfully.
Jan 20 14:12:36 compute-0 podman[199916]: 2026-01-20 14:12:36.872866371 +0000 UTC m=+1.069711672 container remove 40f0bd55889ab2f8f7c73578ce854904e0fae71c60e86cfe9cc30d710f317348 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_panini, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2)
Jan 20 14:12:36 compute-0 systemd[1]: libpod-conmon-40f0bd55889ab2f8f7c73578ce854904e0fae71c60e86cfe9cc30d710f317348.scope: Deactivated successfully.
Jan 20 14:12:36 compute-0 sudo[199675]: pam_unix(sudo:session): session closed for user root
Jan 20 14:12:36 compute-0 sudo[200203]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-czuyyfwgfxaytwhnxuhpqntnknlhloye ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918356.6071198-1055-157158755753267/AnsiballZ_systemd.py'
Jan 20 14:12:36 compute-0 sudo[200203]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:12:36 compute-0 sudo[200197]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:12:36 compute-0 sudo[200197]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:12:36 compute-0 sudo[200197]: pam_unix(sudo:session): session closed for user root
Jan 20 14:12:37 compute-0 sudo[200227]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:12:37 compute-0 sudo[200227]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:12:37 compute-0 sudo[200227]: pam_unix(sudo:session): session closed for user root
Jan 20 14:12:37 compute-0 sudo[200252]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:12:37 compute-0 sudo[200252]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:12:37 compute-0 sudo[200252]: pam_unix(sudo:session): session closed for user root
Jan 20 14:12:37 compute-0 sudo[200277]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- lvm list --format json
Jan 20 14:12:37 compute-0 sudo[200277]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:12:37 compute-0 python3.9[200221]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 20 14:12:37 compute-0 systemd[1]: Reloading.
Jan 20 14:12:37 compute-0 systemd-rc-local-generator[200361]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:12:37 compute-0 systemd-sysv-generator[200365]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:12:37 compute-0 podman[200380]: 2026-01-20 14:12:37.459785303 +0000 UTC m=+0.040211430 container create 33b8f05ff13607614d66b5104e342957b2fff2cb77b6718c0bfcb10b7578b108 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_einstein, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:12:37 compute-0 podman[200380]: 2026-01-20 14:12:37.443893634 +0000 UTC m=+0.024319771 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:12:37 compute-0 systemd[1]: Started libpod-conmon-33b8f05ff13607614d66b5104e342957b2fff2cb77b6718c0bfcb10b7578b108.scope.
Jan 20 14:12:37 compute-0 sudo[200203]: pam_unix(sudo:session): session closed for user root
Jan 20 14:12:37 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:12:37 compute-0 podman[200380]: 2026-01-20 14:12:37.744846096 +0000 UTC m=+0.325272253 container init 33b8f05ff13607614d66b5104e342957b2fff2cb77b6718c0bfcb10b7578b108 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_einstein, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 20 14:12:37 compute-0 podman[200380]: 2026-01-20 14:12:37.753524725 +0000 UTC m=+0.333950862 container start 33b8f05ff13607614d66b5104e342957b2fff2cb77b6718c0bfcb10b7578b108 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_einstein, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:12:37 compute-0 podman[200380]: 2026-01-20 14:12:37.757661389 +0000 UTC m=+0.338087556 container attach 33b8f05ff13607614d66b5104e342957b2fff2cb77b6718c0bfcb10b7578b108 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_einstein, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 20 14:12:37 compute-0 heuristic_einstein[200396]: 167 167
Jan 20 14:12:37 compute-0 systemd[1]: libpod-33b8f05ff13607614d66b5104e342957b2fff2cb77b6718c0bfcb10b7578b108.scope: Deactivated successfully.
Jan 20 14:12:37 compute-0 podman[200380]: 2026-01-20 14:12:37.760373753 +0000 UTC m=+0.340799890 container died 33b8f05ff13607614d66b5104e342957b2fff2cb77b6718c0bfcb10b7578b108 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_einstein, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:12:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-318b357a05bd2637c07ad58737abf876ee65b3dbb8dc63d1130db7ebf7564bd4-merged.mount: Deactivated successfully.
Jan 20 14:12:37 compute-0 podman[200380]: 2026-01-20 14:12:37.802501174 +0000 UTC m=+0.382927301 container remove 33b8f05ff13607614d66b5104e342957b2fff2cb77b6718c0bfcb10b7578b108 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_einstein, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:12:37 compute-0 ceph-mon[74360]: pgmap v624: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:12:37 compute-0 systemd[1]: libpod-conmon-33b8f05ff13607614d66b5104e342957b2fff2cb77b6718c0bfcb10b7578b108.scope: Deactivated successfully.
Jan 20 14:12:38 compute-0 podman[200446]: 2026-01-20 14:12:38.007222985 +0000 UTC m=+0.038992255 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:12:38 compute-0 podman[200446]: 2026-01-20 14:12:38.319059706 +0000 UTC m=+0.350828986 container create 672046a7eed3c774e587648ae134ec78c863e40965b302f701fa313ba7958440 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_knuth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 20 14:12:38 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:12:38 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:12:38 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:12:38 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:12:38.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:12:38 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:12:38 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:12:38 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:12:38.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:12:38 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v625: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:12:38 compute-0 systemd[1]: Started libpod-conmon-672046a7eed3c774e587648ae134ec78c863e40965b302f701fa313ba7958440.scope.
Jan 20 14:12:38 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:12:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13088b51f4c60c468ba3db3ef527b33ec50c3a0ea84460da47bf4f1219d84b69/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:12:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13088b51f4c60c468ba3db3ef527b33ec50c3a0ea84460da47bf4f1219d84b69/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:12:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13088b51f4c60c468ba3db3ef527b33ec50c3a0ea84460da47bf4f1219d84b69/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:12:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13088b51f4c60c468ba3db3ef527b33ec50c3a0ea84460da47bf4f1219d84b69/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:12:38 compute-0 podman[200446]: 2026-01-20 14:12:38.903688074 +0000 UTC m=+0.935457404 container init 672046a7eed3c774e587648ae134ec78c863e40965b302f701fa313ba7958440 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_knuth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:12:38 compute-0 podman[200446]: 2026-01-20 14:12:38.918006928 +0000 UTC m=+0.949776178 container start 672046a7eed3c774e587648ae134ec78c863e40965b302f701fa313ba7958440 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_knuth, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 20 14:12:38 compute-0 podman[200446]: 2026-01-20 14:12:38.92206737 +0000 UTC m=+0.953836610 container attach 672046a7eed3c774e587648ae134ec78c863e40965b302f701fa313ba7958440 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_knuth, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 20 14:12:38 compute-0 podman[200461]: 2026-01-20 14:12:38.947888921 +0000 UTC m=+0.591677142 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 20 14:12:39 compute-0 sudo[200620]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nrdxhqfnzffyhapdavacfrxfcyspdxyw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918359.1725216-1163-155261225299622/AnsiballZ_systemd.py'
Jan 20 14:12:39 compute-0 sudo[200620]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:12:39 compute-0 hardcore_knuth[200481]: {
Jan 20 14:12:39 compute-0 hardcore_knuth[200481]:     "0": [
Jan 20 14:12:39 compute-0 hardcore_knuth[200481]:         {
Jan 20 14:12:39 compute-0 hardcore_knuth[200481]:             "devices": [
Jan 20 14:12:39 compute-0 hardcore_knuth[200481]:                 "/dev/loop3"
Jan 20 14:12:39 compute-0 hardcore_knuth[200481]:             ],
Jan 20 14:12:39 compute-0 hardcore_knuth[200481]:             "lv_name": "ceph_lv0",
Jan 20 14:12:39 compute-0 hardcore_knuth[200481]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:12:39 compute-0 hardcore_knuth[200481]:             "lv_size": "7511998464",
Jan 20 14:12:39 compute-0 hardcore_knuth[200481]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e399cf45-e6b6-5393-99f1-75c601d3f188,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afc3740a-ccee-46f8-b343-22648cc89376,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 20 14:12:39 compute-0 hardcore_knuth[200481]:             "lv_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 14:12:39 compute-0 hardcore_knuth[200481]:             "name": "ceph_lv0",
Jan 20 14:12:39 compute-0 hardcore_knuth[200481]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:12:39 compute-0 hardcore_knuth[200481]:             "tags": {
Jan 20 14:12:39 compute-0 hardcore_knuth[200481]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:12:39 compute-0 hardcore_knuth[200481]:                 "ceph.block_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 14:12:39 compute-0 hardcore_knuth[200481]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 14:12:39 compute-0 hardcore_knuth[200481]:                 "ceph.cluster_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 14:12:39 compute-0 hardcore_knuth[200481]:                 "ceph.cluster_name": "ceph",
Jan 20 14:12:39 compute-0 hardcore_knuth[200481]:                 "ceph.crush_device_class": "",
Jan 20 14:12:39 compute-0 hardcore_knuth[200481]:                 "ceph.encrypted": "0",
Jan 20 14:12:39 compute-0 hardcore_knuth[200481]:                 "ceph.osd_fsid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 14:12:39 compute-0 hardcore_knuth[200481]:                 "ceph.osd_id": "0",
Jan 20 14:12:39 compute-0 hardcore_knuth[200481]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 14:12:39 compute-0 hardcore_knuth[200481]:                 "ceph.type": "block",
Jan 20 14:12:39 compute-0 hardcore_knuth[200481]:                 "ceph.vdo": "0"
Jan 20 14:12:39 compute-0 hardcore_knuth[200481]:             },
Jan 20 14:12:39 compute-0 hardcore_knuth[200481]:             "type": "block",
Jan 20 14:12:39 compute-0 hardcore_knuth[200481]:             "vg_name": "ceph_vg0"
Jan 20 14:12:39 compute-0 hardcore_knuth[200481]:         }
Jan 20 14:12:39 compute-0 hardcore_knuth[200481]:     ]
Jan 20 14:12:39 compute-0 hardcore_knuth[200481]: }
Jan 20 14:12:39 compute-0 systemd[1]: libpod-672046a7eed3c774e587648ae134ec78c863e40965b302f701fa313ba7958440.scope: Deactivated successfully.
Jan 20 14:12:39 compute-0 podman[200446]: 2026-01-20 14:12:39.721489895 +0000 UTC m=+1.753259135 container died 672046a7eed3c774e587648ae134ec78c863e40965b302f701fa313ba7958440 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_knuth, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 20 14:12:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-13088b51f4c60c468ba3db3ef527b33ec50c3a0ea84460da47bf4f1219d84b69-merged.mount: Deactivated successfully.
Jan 20 14:12:39 compute-0 podman[200446]: 2026-01-20 14:12:39.785171359 +0000 UTC m=+1.816940599 container remove 672046a7eed3c774e587648ae134ec78c863e40965b302f701fa313ba7958440 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_knuth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True)
Jan 20 14:12:39 compute-0 systemd[1]: libpod-conmon-672046a7eed3c774e587648ae134ec78c863e40965b302f701fa313ba7958440.scope: Deactivated successfully.
Jan 20 14:12:39 compute-0 sudo[200277]: pam_unix(sudo:session): session closed for user root
Jan 20 14:12:39 compute-0 python3.9[200622]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 20 14:12:39 compute-0 ceph-mon[74360]: pgmap v625: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:12:39 compute-0 sudo[200639]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:12:39 compute-0 sudo[200639]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:12:39 compute-0 sudo[200639]: pam_unix(sudo:session): session closed for user root
Jan 20 14:12:39 compute-0 systemd[1]: Reloading.
Jan 20 14:12:39 compute-0 systemd-rc-local-generator[200713]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:12:39 compute-0 systemd-sysv-generator[200716]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:12:40 compute-0 sudo[200668]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:12:40 compute-0 sudo[200668]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:12:40 compute-0 sudo[200668]: pam_unix(sudo:session): session closed for user root
Jan 20 14:12:40 compute-0 systemd[1]: Listening on libvirt proxy daemon socket.
Jan 20 14:12:40 compute-0 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Jan 20 14:12:40 compute-0 sudo[200729]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:12:40 compute-0 sudo[200729]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:12:40 compute-0 sudo[200729]: pam_unix(sudo:session): session closed for user root
Jan 20 14:12:40 compute-0 sudo[200620]: pam_unix(sudo:session): session closed for user root
Jan 20 14:12:40 compute-0 sudo[200755]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- raw list --format json
Jan 20 14:12:40 compute-0 sudo[200755]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:12:40 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:12:40 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:12:40 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:12:40.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:12:40 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:12:40 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:12:40 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:12:40.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:12:40 compute-0 podman[200894]: 2026-01-20 14:12:40.755298817 +0000 UTC m=+0.057505405 container create 1450927e0d8fd2f24530a9bb7a61873d4633a3c5fb827cf01a138bdebbbb3975 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_austin, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 20 14:12:40 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v626: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:12:40 compute-0 systemd[1]: Started libpod-conmon-1450927e0d8fd2f24530a9bb7a61873d4633a3c5fb827cf01a138bdebbbb3975.scope.
Jan 20 14:12:40 compute-0 podman[200894]: 2026-01-20 14:12:40.734932087 +0000 UTC m=+0.037138705 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:12:40 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:12:40 compute-0 podman[200894]: 2026-01-20 14:12:40.876483296 +0000 UTC m=+0.178689894 container init 1450927e0d8fd2f24530a9bb7a61873d4633a3c5fb827cf01a138bdebbbb3975 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_austin, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 20 14:12:40 compute-0 podman[200894]: 2026-01-20 14:12:40.891031427 +0000 UTC m=+0.193238045 container start 1450927e0d8fd2f24530a9bb7a61873d4633a3c5fb827cf01a138bdebbbb3975 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_austin, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:12:40 compute-0 podman[200894]: 2026-01-20 14:12:40.895881961 +0000 UTC m=+0.198088559 container attach 1450927e0d8fd2f24530a9bb7a61873d4633a3c5fb827cf01a138bdebbbb3975 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_austin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 20 14:12:40 compute-0 angry_austin[200936]: 167 167
Jan 20 14:12:40 compute-0 systemd[1]: libpod-1450927e0d8fd2f24530a9bb7a61873d4633a3c5fb827cf01a138bdebbbb3975.scope: Deactivated successfully.
Jan 20 14:12:40 compute-0 podman[200894]: 2026-01-20 14:12:40.897433024 +0000 UTC m=+0.199639612 container died 1450927e0d8fd2f24530a9bb7a61873d4633a3c5fb827cf01a138bdebbbb3975 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_austin, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:12:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-df5c206090e05a373ee79ef34c06d689bb660102c644b72614807dfd5c9a9e71-merged.mount: Deactivated successfully.
Jan 20 14:12:40 compute-0 podman[200894]: 2026-01-20 14:12:40.94377753 +0000 UTC m=+0.245984148 container remove 1450927e0d8fd2f24530a9bb7a61873d4633a3c5fb827cf01a138bdebbbb3975 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_austin, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 20 14:12:40 compute-0 systemd[1]: libpod-conmon-1450927e0d8fd2f24530a9bb7a61873d4633a3c5fb827cf01a138bdebbbb3975.scope: Deactivated successfully.
Jan 20 14:12:40 compute-0 sudo[201004]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-asnwgsfihybdzdwzhlcfrbjbdkhdxkrn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918360.6118891-1187-277753921131309/AnsiballZ_systemd.py'
Jan 20 14:12:40 compute-0 sudo[201004]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:12:41 compute-0 podman[201012]: 2026-01-20 14:12:41.15137317 +0000 UTC m=+0.044098826 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:12:41 compute-0 python3.9[201006]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 20 14:12:41 compute-0 sudo[201004]: pam_unix(sudo:session): session closed for user root
Jan 20 14:12:41 compute-0 podman[201012]: 2026-01-20 14:12:41.813900133 +0000 UTC m=+0.706625749 container create 197c3c26381f09ed8bb89b766d1dc3be0fb28ea258288082ee9afaa98209585e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_hellman, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:12:41 compute-0 sudo[201178]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-badeaqmpefqonyfttdepjrhofljhujsm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918361.5733845-1187-50076782930815/AnsiballZ_systemd.py'
Jan 20 14:12:41 compute-0 sudo[201178]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:12:42 compute-0 python3.9[201180]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 20 14:12:42 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:12:42 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:12:42 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:12:42.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:12:42 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:12:42 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:12:42 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:12:42.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:12:42 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v627: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:12:43 compute-0 sudo[201178]: pam_unix(sudo:session): session closed for user root
Jan 20 14:12:43 compute-0 sudo[201334]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vwkwvgolkcsmuyvasexpxtcxiljijbft ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918363.5546107-1187-149113654382733/AnsiballZ_systemd.py'
Jan 20 14:12:43 compute-0 sudo[201334]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:12:44 compute-0 python3.9[201336]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 20 14:12:44 compute-0 ceph-mon[74360]: pgmap v626: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:12:44 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:12:44 compute-0 systemd[1]: Started libpod-conmon-197c3c26381f09ed8bb89b766d1dc3be0fb28ea258288082ee9afaa98209585e.scope.
Jan 20 14:12:44 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:12:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec02158e2852bf9c1629d3cfa53b134a434efe0d5eb70d55e1b43b34aa113f02/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:12:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec02158e2852bf9c1629d3cfa53b134a434efe0d5eb70d55e1b43b34aa113f02/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:12:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec02158e2852bf9c1629d3cfa53b134a434efe0d5eb70d55e1b43b34aa113f02/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:12:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec02158e2852bf9c1629d3cfa53b134a434efe0d5eb70d55e1b43b34aa113f02/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:12:44 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:12:44 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:12:44 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:12:44.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:12:44 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:12:44 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:12:44 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:12:44.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:12:44 compute-0 podman[201012]: 2026-01-20 14:12:44.703970019 +0000 UTC m=+3.596695655 container init 197c3c26381f09ed8bb89b766d1dc3be0fb28ea258288082ee9afaa98209585e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_hellman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:12:44 compute-0 podman[201012]: 2026-01-20 14:12:44.712768761 +0000 UTC m=+3.605494407 container start 197c3c26381f09ed8bb89b766d1dc3be0fb28ea258288082ee9afaa98209585e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_hellman, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 14:12:44 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v628: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:12:44 compute-0 podman[201012]: 2026-01-20 14:12:44.962436859 +0000 UTC m=+3.855162495 container attach 197c3c26381f09ed8bb89b766d1dc3be0fb28ea258288082ee9afaa98209585e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_hellman, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:12:45 compute-0 sudo[201334]: pam_unix(sudo:session): session closed for user root
Jan 20 14:12:45 compute-0 trusting_hellman[201341]: {
Jan 20 14:12:45 compute-0 trusting_hellman[201341]:     "afc3740a-ccee-46f8-b343-22648cc89376": {
Jan 20 14:12:45 compute-0 trusting_hellman[201341]:         "ceph_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 14:12:45 compute-0 trusting_hellman[201341]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 20 14:12:45 compute-0 trusting_hellman[201341]:         "osd_id": 0,
Jan 20 14:12:45 compute-0 trusting_hellman[201341]:         "osd_uuid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 14:12:45 compute-0 trusting_hellman[201341]:         "type": "bluestore"
Jan 20 14:12:45 compute-0 trusting_hellman[201341]:     }
Jan 20 14:12:45 compute-0 trusting_hellman[201341]: }
Jan 20 14:12:45 compute-0 ceph-mon[74360]: pgmap v627: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:12:45 compute-0 ceph-mon[74360]: pgmap v628: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:12:45 compute-0 systemd[1]: libpod-197c3c26381f09ed8bb89b766d1dc3be0fb28ea258288082ee9afaa98209585e.scope: Deactivated successfully.
Jan 20 14:12:45 compute-0 podman[201012]: 2026-01-20 14:12:45.617922668 +0000 UTC m=+4.510648274 container died 197c3c26381f09ed8bb89b766d1dc3be0fb28ea258288082ee9afaa98209585e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_hellman, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:12:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-ec02158e2852bf9c1629d3cfa53b134a434efe0d5eb70d55e1b43b34aa113f02-merged.mount: Deactivated successfully.
Jan 20 14:12:45 compute-0 podman[201012]: 2026-01-20 14:12:45.711567908 +0000 UTC m=+4.604293524 container remove 197c3c26381f09ed8bb89b766d1dc3be0fb28ea258288082ee9afaa98209585e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_hellman, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 20 14:12:45 compute-0 systemd[1]: libpod-conmon-197c3c26381f09ed8bb89b766d1dc3be0fb28ea258288082ee9afaa98209585e.scope: Deactivated successfully.
Jan 20 14:12:45 compute-0 sudo[200755]: pam_unix(sudo:session): session closed for user root
Jan 20 14:12:45 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 14:12:45 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:12:45 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 14:12:45 compute-0 sudo[201525]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-joshufncthegegeuskrzzgfibeglgkpm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918365.5175908-1187-199401190520294/AnsiballZ_systemd.py'
Jan 20 14:12:45 compute-0 sudo[201525]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:12:45 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:12:45 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 702da9f2-72bc-42a7-8094-1be306d3320f does not exist
Jan 20 14:12:45 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev cfccb00e-c2c0-4992-bc91-12bdb2038f65 does not exist
Jan 20 14:12:45 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 5dba2155-c004-41cf-a412-a552f9486899 does not exist
Jan 20 14:12:45 compute-0 sudo[201528]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:12:45 compute-0 sudo[201528]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:12:45 compute-0 sudo[201528]: pam_unix(sudo:session): session closed for user root
Jan 20 14:12:46 compute-0 sudo[201553]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 14:12:46 compute-0 sudo[201553]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:12:46 compute-0 sudo[201553]: pam_unix(sudo:session): session closed for user root
Jan 20 14:12:46 compute-0 python3.9[201527]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 20 14:12:46 compute-0 sudo[201525]: pam_unix(sudo:session): session closed for user root
Jan 20 14:12:46 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:12:46 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:12:46 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:12:46.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:12:46 compute-0 sudo[201730]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iossbvtimwyrkjytunjztvkvuztruhbt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918366.384431-1187-142513985883624/AnsiballZ_systemd.py'
Jan 20 14:12:46 compute-0 sudo[201730]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:12:46 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:12:46 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:12:46 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:12:46.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:12:46 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v629: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:12:46 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:12:46 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:12:46 compute-0 python3.9[201733]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 20 14:12:47 compute-0 sudo[201730]: pam_unix(sudo:session): session closed for user root
Jan 20 14:12:47 compute-0 sudo[201886]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hclcclpapmmkgvsplgznmsguxmehkioj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918367.177271-1187-229851720283888/AnsiballZ_systemd.py'
Jan 20 14:12:47 compute-0 sudo[201886]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:12:47 compute-0 python3.9[201888]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 20 14:12:47 compute-0 sudo[201886]: pam_unix(sudo:session): session closed for user root
Jan 20 14:12:47 compute-0 ceph-mon[74360]: pgmap v629: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:12:48 compute-0 sudo[202041]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gbvemhounzljjknoyvzeyichlerkvkjz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918367.95926-1187-37762223891292/AnsiballZ_systemd.py'
Jan 20 14:12:48 compute-0 sudo[202041]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:12:48 compute-0 python3.9[202043]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 20 14:12:48 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:12:48 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:12:48 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:12:48.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:12:48 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:12:48 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:12:48 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:12:48.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:12:48 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v630: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:12:49 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:12:49 compute-0 sudo[202041]: pam_unix(sudo:session): session closed for user root
Jan 20 14:12:49 compute-0 ceph-mon[74360]: pgmap v630: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:12:50 compute-0 sudo[202160]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:12:50 compute-0 sudo[202160]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:12:50 compute-0 sudo[202160]: pam_unix(sudo:session): session closed for user root
Jan 20 14:12:50 compute-0 sudo[202227]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-efhduowlzbaakpsbpvqhyhutjcquqflw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918369.7928965-1187-207046505468668/AnsiballZ_systemd.py'
Jan 20 14:12:50 compute-0 sudo[202227]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:12:50 compute-0 sudo[202222]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:12:50 compute-0 sudo[202222]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:12:50 compute-0 sudo[202222]: pam_unix(sudo:session): session closed for user root
Jan 20 14:12:50 compute-0 python3.9[202247]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 20 14:12:50 compute-0 sudo[202227]: pam_unix(sudo:session): session closed for user root
Jan 20 14:12:50 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:12:50 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:12:50 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:12:50.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:12:50 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:12:50 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:12:50 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:12:50.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:12:50 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v631: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:12:50 compute-0 sudo[202403]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-foyfdgxmacogogebpogeudgkldfhpzrf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918370.6610627-1187-98868677689028/AnsiballZ_systemd.py'
Jan 20 14:12:50 compute-0 sudo[202403]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:12:51 compute-0 python3.9[202405]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 20 14:12:51 compute-0 sudo[202403]: pam_unix(sudo:session): session closed for user root
Jan 20 14:12:51 compute-0 sudo[202558]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iriujbxiaretdcghwuecrmauhrpscihx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918371.59375-1187-88635784884908/AnsiballZ_systemd.py'
Jan 20 14:12:51 compute-0 ceph-mon[74360]: pgmap v631: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:12:51 compute-0 sudo[202558]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:12:52 compute-0 python3.9[202560]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 20 14:12:52 compute-0 sudo[202558]: pam_unix(sudo:session): session closed for user root
Jan 20 14:12:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Optimize plan auto_2026-01-20_14:12:52
Jan 20 14:12:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 14:12:52 compute-0 ceph-mgr[74653]: [balancer INFO root] do_upmap
Jan 20 14:12:52 compute-0 ceph-mgr[74653]: [balancer INFO root] pools ['cephfs.cephfs.data', 'volumes', 'default.rgw.control', '.mgr', 'default.rgw.meta', '.rgw.root', 'cephfs.cephfs.meta', 'vms', 'images', 'default.rgw.log', 'backups']
Jan 20 14:12:52 compute-0 ceph-mgr[74653]: [balancer INFO root] prepared 0/10 changes
Jan 20 14:12:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:12:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:12:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:12:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:12:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:12:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:12:52 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:12:52 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:12:52 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:12:52.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:12:52 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:12:52 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:12:52 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:12:52.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:12:52 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v632: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:12:52 compute-0 sudo[202714]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vazqndkrrqbidebcyhgjqvglwuwnaxov ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918372.497252-1187-55147172765271/AnsiballZ_systemd.py'
Jan 20 14:12:52 compute-0 sudo[202714]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:12:53 compute-0 python3.9[202716]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 20 14:12:53 compute-0 ceph-mon[74360]: pgmap v632: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:12:54 compute-0 sudo[202714]: pam_unix(sudo:session): session closed for user root
Jan 20 14:12:54 compute-0 sudo[202869]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lipmcytplsawwefvdixmnxlbptoocooj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918374.2773488-1187-169721618385204/AnsiballZ_systemd.py'
Jan 20 14:12:54 compute-0 sudo[202869]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:12:54 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:12:54 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:12:54 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:12:54 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:12:54.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:12:54 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:12:54 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:12:54 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:12:54.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:12:54 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v633: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:12:54 compute-0 python3.9[202871]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 20 14:12:54 compute-0 sudo[202869]: pam_unix(sudo:session): session closed for user root
Jan 20 14:12:55 compute-0 ceph-mon[74360]: pgmap v633: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:12:55 compute-0 sudo[203025]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-neqcbsrnlafouihynrukjdyerblikrjf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918375.1189876-1187-204819310597920/AnsiballZ_systemd.py'
Jan 20 14:12:55 compute-0 sudo[203025]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:12:55 compute-0 python3.9[203027]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 20 14:12:55 compute-0 sudo[203025]: pam_unix(sudo:session): session closed for user root
Jan 20 14:12:56 compute-0 sudo[203180]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jlqobarjzsynjarzfjwaimewttupfekd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918375.896498-1187-109393080120854/AnsiballZ_systemd.py'
Jan 20 14:12:56 compute-0 sudo[203180]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:12:56 compute-0 python3.9[203182]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 20 14:12:56 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:12:56 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:12:56 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:12:56.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:12:56 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:12:56 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:12:56 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:12:56.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:12:56 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v634: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:12:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 14:12:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:12:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 14:12:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:12:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:12:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:12:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:12:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:12:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:12:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:12:57 compute-0 sshd-session[203185]: Connection closed by authenticating user root 157.245.78.139 port 50992 [preauth]
Jan 20 14:12:57 compute-0 sudo[203180]: pam_unix(sudo:session): session closed for user root
Jan 20 14:12:57 compute-0 ceph-mon[74360]: pgmap v634: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:12:58 compute-0 sudo[203338]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qhzpqyxterxrbsfflpnzqzbuoiuyiwhu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918378.0111058-1493-104277256645191/AnsiballZ_file.py'
Jan 20 14:12:58 compute-0 sudo[203338]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:12:58 compute-0 python3.9[203340]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:12:58 compute-0 sudo[203338]: pam_unix(sudo:session): session closed for user root
Jan 20 14:12:58 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:12:58 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:12:58 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:12:58.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:12:58 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:12:58 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:12:58 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:12:58.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:12:58 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v635: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:12:58 compute-0 sudo[203491]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-czpotkouzpdqhycpdoclftxfhcwegjqx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918378.6549745-1493-207689604039773/AnsiballZ_file.py'
Jan 20 14:12:58 compute-0 sudo[203491]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:12:59 compute-0 python3.9[203493]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:12:59 compute-0 sudo[203491]: pam_unix(sudo:session): session closed for user root
Jan 20 14:12:59 compute-0 sudo[203643]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gtdrpslzkjuhifwjllljatfiehpomuwa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918379.2928429-1493-194446859733920/AnsiballZ_file.py'
Jan 20 14:12:59 compute-0 sudo[203643]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:12:59 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:12:59 compute-0 python3.9[203645]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:12:59 compute-0 sudo[203643]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:00 compute-0 sudo[203795]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hyzxakajixfugojoauelytrwsyjeyfnk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918379.921077-1493-242550447791253/AnsiballZ_file.py'
Jan 20 14:13:00 compute-0 sudo[203795]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:13:00 compute-0 ceph-mon[74360]: pgmap v635: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:13:00 compute-0 python3.9[203797]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:13:00 compute-0 sudo[203795]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:00 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:13:00 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:13:00 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:13:00.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:13:00 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:13:00 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:13:00 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:13:00.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:13:00 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v636: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:13:00 compute-0 sudo[203948]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hfltfiwilldomrdeqaemevmzniekkvhj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918380.6886866-1493-251433768179513/AnsiballZ_file.py'
Jan 20 14:13:00 compute-0 sudo[203948]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:13:01 compute-0 python3.9[203950]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:13:01 compute-0 sudo[203948]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:01 compute-0 ceph-mon[74360]: pgmap v636: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:13:01 compute-0 sudo[204100]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ibcsduvuzwjnikzcpxseopcfwilwsxto ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918381.3920896-1493-222529712978865/AnsiballZ_file.py'
Jan 20 14:13:01 compute-0 sudo[204100]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:13:01 compute-0 python3.9[204102]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:13:01 compute-0 sudo[204100]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:02 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:13:02 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:13:02 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:13:02.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:13:02 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:13:02 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:13:02 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:13:02.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:13:02 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v637: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:13:02 compute-0 python3.9[204253]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 14:13:03 compute-0 podman[204304]: 2026-01-20 14:13:03.537476765 +0000 UTC m=+0.103203184 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent)
Jan 20 14:13:03 compute-0 sudo[204422]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bcrpaunwsgfiquceljymnbbcnqzxfotn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918383.3694782-1646-140944809785572/AnsiballZ_stat.py'
Jan 20 14:13:03 compute-0 sudo[204422]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:13:03 compute-0 ceph-mon[74360]: pgmap v637: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:13:04 compute-0 python3.9[204424]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:13:04 compute-0 sudo[204422]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:04 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:13:04 compute-0 sudo[204547]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rmeulxcgxnrfyqglfchirjtukuddfoqi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918383.3694782-1646-140944809785572/AnsiballZ_copy.py'
Jan 20 14:13:04 compute-0 sudo[204547]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:13:04 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:13:04 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:13:04 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:13:04.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:13:04 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:13:04 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:13:04 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:13:04.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:13:04 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v638: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:13:04 compute-0 python3.9[204549]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1768918383.3694782-1646-140944809785572/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:13:04 compute-0 sudo[204547]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:05 compute-0 sudo[204700]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fzvectkgykoqjfjiibqssrmcrtwbcjsi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918384.950755-1646-1844006735976/AnsiballZ_stat.py'
Jan 20 14:13:05 compute-0 sudo[204700]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:13:05 compute-0 python3.9[204702]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:13:05 compute-0 sudo[204700]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:05 compute-0 sudo[204825]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bqsbgwimebmqsjjgyisltaqjwmrphonu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918384.950755-1646-1844006735976/AnsiballZ_copy.py'
Jan 20 14:13:05 compute-0 sudo[204825]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:13:05 compute-0 python3.9[204827]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1768918384.950755-1646-1844006735976/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:13:06 compute-0 sudo[204825]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:06 compute-0 ceph-mon[74360]: pgmap v638: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:13:06 compute-0 sudo[204977]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbysbyeglwzpapjsxfbcryuhwicmkibc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918386.308461-1646-34439013734390/AnsiballZ_stat.py'
Jan 20 14:13:06 compute-0 sudo[204977]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:13:06 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:13:06 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:13:06 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:13:06.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:13:06 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:13:06 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:13:06 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:13:06.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:13:06 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v639: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:13:06 compute-0 python3.9[204979]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:13:06 compute-0 sudo[204977]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:07 compute-0 sudo[205103]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-btrnyltqlyagxbdkuusxemmypxrnmotk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918386.308461-1646-34439013734390/AnsiballZ_copy.py'
Jan 20 14:13:07 compute-0 sudo[205103]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:13:07 compute-0 python3.9[205105]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1768918386.308461-1646-34439013734390/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:13:07 compute-0 sudo[205103]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:07 compute-0 ceph-mon[74360]: pgmap v639: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:13:07 compute-0 sudo[205255]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-miagxjxjolbztporpqwqxieucrfljfrj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918387.5690627-1646-276433295891361/AnsiballZ_stat.py'
Jan 20 14:13:07 compute-0 sudo[205255]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:13:08 compute-0 python3.9[205257]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:13:08 compute-0 sudo[205255]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:08 compute-0 sudo[205380]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-asnwfjdvvtxekvvoxifqtlzpxccxfoqh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918387.5690627-1646-276433295891361/AnsiballZ_copy.py'
Jan 20 14:13:08 compute-0 sudo[205380]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:13:08 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:13:08 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:13:08 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:13:08.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:13:08 compute-0 python3.9[205382]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1768918387.5690627-1646-276433295891361/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:13:08 compute-0 sudo[205380]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:08 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:13:08 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:13:08 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:13:08.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:13:08 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v640: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:13:09 compute-0 sudo[205543]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-motqzviiumnycmytjxyctddytovtdkte ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918388.8642852-1646-194300929738156/AnsiballZ_stat.py'
Jan 20 14:13:09 compute-0 sudo[205543]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:13:09 compute-0 podman[205507]: 2026-01-20 14:13:09.223976105 +0000 UTC m=+0.115030860 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 20 14:13:09 compute-0 python3.9[205552]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:13:09 compute-0 sudo[205543]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:09 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:13:09 compute-0 sudo[205685]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zsosmjghbflfacxuczuxkrjlekooecqr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918388.8642852-1646-194300929738156/AnsiballZ_copy.py'
Jan 20 14:13:09 compute-0 sudo[205685]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:13:09 compute-0 python3.9[205687]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1768918388.8642852-1646-194300929738156/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:13:09 compute-0 sudo[205685]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:09 compute-0 ceph-mon[74360]: pgmap v640: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:13:10 compute-0 sudo[205787]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:13:10 compute-0 sudo[205787]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:13:10 compute-0 sudo[205787]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:10 compute-0 sudo[205835]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:13:10 compute-0 sudo[205835]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:13:10 compute-0 sudo[205835]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:10 compute-0 sudo[205887]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ctwwqjwvzfpodghavgljhdejwtcjhovs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918390.0694473-1646-226489178551398/AnsiballZ_stat.py'
Jan 20 14:13:10 compute-0 sudo[205887]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:13:10 compute-0 python3.9[205889]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:13:10 compute-0 sudo[205887]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:10 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:13:10 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:13:10 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:13:10.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:13:10 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:13:10 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:13:10 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:13:10.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:13:10 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v641: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:13:10 compute-0 sudo[206013]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sixvlyiorcafzaypgbtozfmgqlmsyggf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918390.0694473-1646-226489178551398/AnsiballZ_copy.py'
Jan 20 14:13:10 compute-0 sudo[206013]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:13:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 14:13:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:13:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 20 14:13:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:13:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:13:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:13:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:13:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:13:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:13:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:13:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:13:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:13:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 20 14:13:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:13:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:13:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:13:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 20 14:13:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:13:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 20 14:13:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:13:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:13:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:13:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 20 14:13:11 compute-0 python3.9[206015]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1768918390.0694473-1646-226489178551398/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:13:11 compute-0 sudo[206013]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:11 compute-0 sudo[206165]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vhkecifedgblqkbkbrazxlarukwszxcc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918391.2940536-1646-141250246822941/AnsiballZ_stat.py'
Jan 20 14:13:11 compute-0 sudo[206165]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:13:11 compute-0 python3.9[206167]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:13:11 compute-0 sudo[206165]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:12 compute-0 ceph-mon[74360]: pgmap v641: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:13:12 compute-0 sudo[206288]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cnknyugcaokvpvznhyztmkjfgttflgth ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918391.2940536-1646-141250246822941/AnsiballZ_copy.py'
Jan 20 14:13:12 compute-0 sudo[206288]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:13:12 compute-0 python3.9[206290]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1768918391.2940536-1646-141250246822941/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:13:12 compute-0 sudo[206288]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:12 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:13:12 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:13:12 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:13:12.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:13:12 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:13:12 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:13:12 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:13:12.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:13:12 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v642: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:13:12 compute-0 sudo[206441]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ytuqrcsqfxbgawtsrxoyddlinltwiaqr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918392.6395383-1646-191073112252515/AnsiballZ_stat.py'
Jan 20 14:13:12 compute-0 sudo[206441]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:13:13 compute-0 python3.9[206443]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:13:13 compute-0 sudo[206441]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:13 compute-0 ceph-mon[74360]: pgmap v642: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:13:13 compute-0 sudo[206566]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hohdtcgxnbtixqhejxvscjuxvubzsuvc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918392.6395383-1646-191073112252515/AnsiballZ_copy.py'
Jan 20 14:13:13 compute-0 sudo[206566]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:13:13 compute-0 python3.9[206568]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1768918392.6395383-1646-191073112252515/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:13:13 compute-0 sudo[206566]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:14 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:13:14 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:13:14 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:13:14 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:13:14.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:13:14 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:13:14 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:13:14 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:13:14.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:13:14 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v643: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:13:15 compute-0 sudo[206719]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hpikpyjxtfjdcqqzlditjtpkgojpnwwq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918395.6600718-1985-31270260892891/AnsiballZ_command.py'
Jan 20 14:13:15 compute-0 sudo[206719]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:13:16 compute-0 python3.9[206721]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Jan 20 14:13:16 compute-0 ceph-mon[74360]: pgmap v643: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:13:16 compute-0 sudo[206719]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:16 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:13:16 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:13:16 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:13:16.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:13:16 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:13:16 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:13:16 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:13:16.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:13:16 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v644: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:13:16 compute-0 sudo[206873]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zdojuwedunhfqdveamgnxihxzhfedwtu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918396.5936148-2012-201618767018884/AnsiballZ_file.py'
Jan 20 14:13:16 compute-0 sudo[206873]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:13:17 compute-0 python3.9[206875]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:13:17 compute-0 sudo[206873]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:17 compute-0 ceph-mon[74360]: pgmap v644: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:13:17 compute-0 sudo[207025]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fswqonkfyszfekpkwazlkldezoqoerqa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918397.2511268-2012-40440398894332/AnsiballZ_file.py'
Jan 20 14:13:17 compute-0 sudo[207025]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:13:17 compute-0 python3.9[207027]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:13:17 compute-0 sudo[207025]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:18 compute-0 sudo[207177]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nyosnugtyszwsoglghqznekvkdzmqlov ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918397.929618-2012-265859470169301/AnsiballZ_file.py'
Jan 20 14:13:18 compute-0 sudo[207177]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:13:18 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:13:18 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:13:18 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:13:18.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:13:18 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:13:18 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:13:18 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:13:18.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:13:18 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v645: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:13:19 compute-0 python3.9[207179]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:13:19 compute-0 sudo[207177]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:19 compute-0 ceph-mon[74360]: pgmap v645: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:13:19 compute-0 sudo[207330]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-thojknmhxxrlbgaorfxqeotwcdegytxk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918399.1993062-2012-35271612327639/AnsiballZ_file.py'
Jan 20 14:13:19 compute-0 sudo[207330]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:13:19 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:13:19 compute-0 python3.9[207332]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:13:19 compute-0 sudo[207330]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:20 compute-0 sudo[207482]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pjxtkkjzbzkndouwnlxejiggidmwhyca ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918399.8823974-2012-85290955655150/AnsiballZ_file.py'
Jan 20 14:13:20 compute-0 sudo[207482]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:13:20 compute-0 python3.9[207484]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:13:20 compute-0 sudo[207482]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:20 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:13:20 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:13:20 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:13:20.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:13:20 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:13:20 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:13:20 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:13:20.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:13:20 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v646: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:13:20 compute-0 sudo[207635]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-acudlettqhxiaclcsqmtcvdpwaxylbbm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918400.600497-2012-115015590307303/AnsiballZ_file.py'
Jan 20 14:13:20 compute-0 sudo[207635]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:13:21 compute-0 python3.9[207637]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:13:21 compute-0 sudo[207635]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:21 compute-0 ceph-mon[74360]: pgmap v646: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:13:21 compute-0 sudo[207787]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pjlcfpweqwjoiajlrxvplxuytclxolum ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918401.296414-2012-222835081965180/AnsiballZ_file.py'
Jan 20 14:13:21 compute-0 sudo[207787]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:13:21 compute-0 python3.9[207789]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:13:21 compute-0 sudo[207787]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:22 compute-0 sudo[207939]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ydqmsjzpqawsqfljvwmxusibnkyekxoh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918402.0312195-2012-270017010961243/AnsiballZ_file.py'
Jan 20 14:13:22 compute-0 sudo[207939]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:13:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:13:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:13:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:13:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:13:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:13:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:13:22 compute-0 python3.9[207941]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:13:22 compute-0 sudo[207939]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:22 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:13:22 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:13:22 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:13:22.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:13:22 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:13:22 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:13:22 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:13:22.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:13:22 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v647: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:13:23 compute-0 sudo[208092]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zlcrsiahuisiizdytagdhztnwrecynls ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918402.799688-2012-193536249111860/AnsiballZ_file.py'
Jan 20 14:13:23 compute-0 sudo[208092]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:13:23 compute-0 python3.9[208094]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:13:23 compute-0 sudo[208092]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:23 compute-0 sudo[208244]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nsyxskutlclzrlkrwqkowimaulfwecmv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918403.5739126-2012-260899154960180/AnsiballZ_file.py'
Jan 20 14:13:23 compute-0 sudo[208244]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:13:24 compute-0 python3.9[208246]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:13:24 compute-0 sudo[208244]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:24 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:13:24 compute-0 ceph-mon[74360]: pgmap v647: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:13:24 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:13:24 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:13:24 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:13:24.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:13:24 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:13:24 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:13:24 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:13:24.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:13:24 compute-0 sudo[208397]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tyvczjmqdzyannairkzpnfjqlueirwpm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918404.3930025-2012-154964879466165/AnsiballZ_file.py'
Jan 20 14:13:24 compute-0 sudo[208397]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:13:24 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v648: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:13:24 compute-0 python3.9[208399]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:13:24 compute-0 sudo[208397]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:25 compute-0 sudo[208549]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zmdsayqkyyhpwbvqqlsbchdiuyszezxx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918405.1559734-2012-168478997094810/AnsiballZ_file.py'
Jan 20 14:13:25 compute-0 sudo[208549]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:13:25 compute-0 python3.9[208551]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:13:25 compute-0 sudo[208549]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:26 compute-0 sudo[208701]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-donyfynbdvtznopogeftylthqfxcmptf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918405.9367828-2012-108173567921161/AnsiballZ_file.py'
Jan 20 14:13:26 compute-0 sudo[208701]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:13:26 compute-0 python3.9[208703]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:13:26 compute-0 sudo[208701]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:26 compute-0 ceph-mon[74360]: pgmap v648: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:13:26 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:13:26 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:13:26 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:13:26.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:13:26 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:13:26 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:13:26 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:13:26.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:13:26 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v649: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:13:27 compute-0 sudo[208854]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-msembjnncavlsgfrgbuwakzsarfisqme ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918406.8001842-2012-169173328510778/AnsiballZ_file.py'
Jan 20 14:13:27 compute-0 sudo[208854]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:13:27 compute-0 python3.9[208856]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:13:27 compute-0 sudo[208854]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:27 compute-0 ceph-mon[74360]: pgmap v649: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:13:27 compute-0 sudo[209006]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-psrllwyqauwwdgmstkefcewnaaxkibrr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918407.5090864-2309-133079003681109/AnsiballZ_stat.py'
Jan 20 14:13:27 compute-0 sudo[209006]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:13:28 compute-0 python3.9[209008]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:13:28 compute-0 sudo[209006]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:28 compute-0 sudo[209129]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-svszpbcnesjwdwsmlnbhybcvoztlufis ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918407.5090864-2309-133079003681109/AnsiballZ_copy.py'
Jan 20 14:13:28 compute-0 sudo[209129]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:13:28 compute-0 python3.9[209131]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768918407.5090864-2309-133079003681109/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:13:28 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:13:28 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:13:28 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:13:28.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:13:28 compute-0 sudo[209129]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:28 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:13:28 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:13:28 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:13:28.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:13:28 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v650: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:13:29 compute-0 sudo[209282]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zwdidosywkpnrbmkbzvvwdicxsostwux ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918408.8965187-2309-121751470175550/AnsiballZ_stat.py'
Jan 20 14:13:29 compute-0 sudo[209282]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:13:29 compute-0 python3.9[209284]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:13:29 compute-0 sudo[209282]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:29 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:13:29 compute-0 sudo[209405]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gpmihlzykrpowaznmhwuaoaivtwwimdl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918408.8965187-2309-121751470175550/AnsiballZ_copy.py'
Jan 20 14:13:29 compute-0 sudo[209405]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:13:29 compute-0 python3.9[209407]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768918408.8965187-2309-121751470175550/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:13:30 compute-0 sudo[209405]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:30 compute-0 ceph-mon[74360]: pgmap v650: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:13:30 compute-0 sudo[209511]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:13:30 compute-0 sudo[209511]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:13:30 compute-0 sudo[209511]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:30 compute-0 sudo[209565]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:13:30 compute-0 sudo[209565]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:13:30 compute-0 sudo[209565]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:30 compute-0 sudo[209603]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-brppglxbqhofamshjvzoakwdpbbjanya ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918410.1639357-2309-94479021353275/AnsiballZ_stat.py'
Jan 20 14:13:30 compute-0 sudo[209603]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:13:30 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:13:30 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:13:30 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:13:30.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:13:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:13:30.725 160071 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:13:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:13:30.726 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:13:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:13:30.726 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:13:30 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:13:30 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:13:30 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:13:30.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:13:30 compute-0 python3.9[209609]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:13:30 compute-0 sudo[209603]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:30 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v651: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:13:31 compute-0 sudo[209731]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ggjtogxhdzbcprjjwbtalafyapgqbzph ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918410.1639357-2309-94479021353275/AnsiballZ_copy.py'
Jan 20 14:13:31 compute-0 sudo[209731]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:13:31 compute-0 python3.9[209733]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768918410.1639357-2309-94479021353275/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:13:31 compute-0 sudo[209731]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:31 compute-0 sudo[209883]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wjepiwerbjgoiszybdjvxvqorzzadmxa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918411.5070953-2309-452610397894/AnsiballZ_stat.py'
Jan 20 14:13:31 compute-0 sudo[209883]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:13:31 compute-0 ceph-mon[74360]: pgmap v651: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:13:31 compute-0 python3.9[209885]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:13:31 compute-0 sudo[209883]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:32 compute-0 sudo[210006]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ypnyaemfbsglgqjwoglonzyisrmgcduw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918411.5070953-2309-452610397894/AnsiballZ_copy.py'
Jan 20 14:13:32 compute-0 sudo[210006]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:13:32 compute-0 python3.9[210008]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768918411.5070953-2309-452610397894/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:13:32 compute-0 sudo[210006]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:32 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:13:32 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:13:32 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:13:32.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:13:32 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:13:32 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:13:32 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:13:32.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:13:32 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v652: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:13:33 compute-0 sudo[210159]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yhapimwlnubmxnfzjfatdiewucxpwbel ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918412.703772-2309-23845113773227/AnsiballZ_stat.py'
Jan 20 14:13:33 compute-0 sudo[210159]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:13:33 compute-0 python3.9[210161]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:13:33 compute-0 sudo[210159]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:33 compute-0 sudo[210292]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-buzahwwgjjzwdfyruviwbzkblgscsfdg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918412.703772-2309-23845113773227/AnsiballZ_copy.py'
Jan 20 14:13:33 compute-0 sudo[210292]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:13:33 compute-0 podman[210256]: 2026-01-20 14:13:33.662368832 +0000 UTC m=+0.083425309 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Jan 20 14:13:33 compute-0 python3.9[210299]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768918412.703772-2309-23845113773227/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:13:33 compute-0 sudo[210292]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:34 compute-0 ceph-mon[74360]: pgmap v652: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:13:34 compute-0 sudo[210452]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ovqiytpcnebdkynehqculirkubsefntf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918414.0463154-2309-229106352863563/AnsiballZ_stat.py'
Jan 20 14:13:34 compute-0 sudo[210452]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:13:34 compute-0 python3.9[210454]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:13:34 compute-0 sudo[210452]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:34 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:13:34 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:13:34 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:13:34 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:13:34.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:13:34 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:13:34 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:13:34 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:13:34.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:13:34 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v653: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:13:34 compute-0 sudo[210576]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xaxjxhehldwkluftvphjwibzfuimecuf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918414.0463154-2309-229106352863563/AnsiballZ_copy.py'
Jan 20 14:13:34 compute-0 sudo[210576]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:13:35 compute-0 python3.9[210578]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768918414.0463154-2309-229106352863563/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:13:35 compute-0 sudo[210576]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:35 compute-0 sudo[210728]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fikwrsepvruyyodeirbudwvjdkdvexrm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918415.2133272-2309-260568217874807/AnsiballZ_stat.py'
Jan 20 14:13:35 compute-0 sudo[210728]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:13:36 compute-0 python3.9[210730]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:13:36 compute-0 sudo[210728]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:36 compute-0 sudo[210851]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-laxvzjsexstilvmgaznyptojmozezytk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918415.2133272-2309-260568217874807/AnsiballZ_copy.py'
Jan 20 14:13:36 compute-0 sudo[210851]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:13:36 compute-0 python3.9[210853]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768918415.2133272-2309-260568217874807/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:13:36 compute-0 sudo[210851]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:36 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:13:36 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:13:36 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:13:36.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:13:36 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v654: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:13:37 compute-0 sudo[211004]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wxgptrdxnswbeybxtxxgqrhfrezadkwy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918416.8169167-2309-260019811781104/AnsiballZ_stat.py'
Jan 20 14:13:37 compute-0 sudo[211004]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:13:37 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:13:37 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:13:37 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:13:37.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:13:37 compute-0 ceph-mon[74360]: pgmap v653: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:13:37 compute-0 python3.9[211006]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:13:37 compute-0 sudo[211004]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:38 compute-0 ceph-mon[74360]: pgmap v654: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:13:38 compute-0 sudo[211127]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-boxwrpvuttflcfjggdxjspjmgubcbeuv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918416.8169167-2309-260019811781104/AnsiballZ_copy.py'
Jan 20 14:13:38 compute-0 sudo[211127]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:13:38 compute-0 python3.9[211129]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768918416.8169167-2309-260019811781104/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:13:38 compute-0 sudo[211127]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:38 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:13:38 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:13:38 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:13:38.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:13:38 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v655: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:13:38 compute-0 sudo[211280]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mmpruiwhkomsbgcpfzimzuehsqgvttoi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918418.6016207-2309-162014769455726/AnsiballZ_stat.py'
Jan 20 14:13:38 compute-0 sudo[211280]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:13:39 compute-0 python3.9[211282]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:13:39 compute-0 sudo[211280]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:39 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:13:39 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:13:39 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:13:39.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:13:39 compute-0 ceph-mon[74360]: pgmap v655: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:13:39 compute-0 sudo[211422]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ykhrfgneogyvseivplwcpvhklbskmizu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918418.6016207-2309-162014769455726/AnsiballZ_copy.py'
Jan 20 14:13:39 compute-0 sudo[211422]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:13:39 compute-0 podman[211374]: 2026-01-20 14:13:39.491676106 +0000 UTC m=+0.087032259 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:13:39 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:13:39 compute-0 python3.9[211428]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768918418.6016207-2309-162014769455726/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:13:39 compute-0 sudo[211422]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:40 compute-0 sudo[211581]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wegzpktdimeozdnppzgepiqbqkbngivk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918419.846497-2309-48332377505284/AnsiballZ_stat.py'
Jan 20 14:13:40 compute-0 sudo[211581]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:13:40 compute-0 python3.9[211583]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:13:40 compute-0 sudo[211581]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:40 compute-0 sshd-session[211584]: Connection closed by authenticating user root 157.245.78.139 port 59268 [preauth]
Jan 20 14:13:40 compute-0 sudo[211706]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qhajexxvmvyhtbmxgcreocpdetremqwa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918419.846497-2309-48332377505284/AnsiballZ_copy.py'
Jan 20 14:13:40 compute-0 sudo[211706]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:13:40 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:13:40 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:13:40 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:13:40.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:13:40 compute-0 python3.9[211708]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768918419.846497-2309-48332377505284/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:13:40 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v656: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:13:40 compute-0 sudo[211706]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:41 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:13:41 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:13:41 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:13:41.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:13:41 compute-0 sudo[211859]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eygnkecejxzpeoitjesqmqfbttlntese ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918421.0158513-2309-55243734409236/AnsiballZ_stat.py'
Jan 20 14:13:41 compute-0 sudo[211859]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:13:41 compute-0 python3.9[211861]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:13:41 compute-0 sudo[211859]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:41 compute-0 sudo[211982]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tbxivlwectqtkoxgbhmhtgfmvjtdwole ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918421.0158513-2309-55243734409236/AnsiballZ_copy.py'
Jan 20 14:13:41 compute-0 sudo[211982]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:13:41 compute-0 ceph-mon[74360]: pgmap v656: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:13:42 compute-0 python3.9[211984]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768918421.0158513-2309-55243734409236/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:13:42 compute-0 sudo[211982]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:42 compute-0 sudo[212135]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bqrcotzhpmrcubbeofhvcmypgbncbtoo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918422.2859976-2309-208673892212100/AnsiballZ_stat.py'
Jan 20 14:13:42 compute-0 sudo[212135]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:13:42 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:13:42 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:13:42 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:13:42.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:13:42 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v657: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:13:42 compute-0 python3.9[212137]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:13:42 compute-0 sudo[212135]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:43 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:13:43 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:13:43 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:13:43.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:13:43 compute-0 sudo[212258]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rtyhdpmwqizqhbrxulyxywekneagymph ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918422.2859976-2309-208673892212100/AnsiballZ_copy.py'
Jan 20 14:13:43 compute-0 sudo[212258]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:13:43 compute-0 python3.9[212260]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768918422.2859976-2309-208673892212100/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:13:43 compute-0 sudo[212258]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:43 compute-0 sudo[212410]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-defjuxknqkptivsrswguarfkrgawjpxw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918423.6926966-2309-18502709344840/AnsiballZ_stat.py'
Jan 20 14:13:43 compute-0 sudo[212410]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:13:44 compute-0 ceph-mon[74360]: pgmap v657: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:13:44 compute-0 python3.9[212412]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:13:44 compute-0 sudo[212410]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:44 compute-0 sudo[212533]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-egubcopwtygesribjbkztdbtsyujlbdt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918423.6926966-2309-18502709344840/AnsiballZ_copy.py'
Jan 20 14:13:44 compute-0 sudo[212533]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:13:44 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:13:44 compute-0 ceph-mon[74360]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #33. Immutable memtables: 0.
Jan 20 14:13:44 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:13:44.603364) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 14:13:44 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 33
Jan 20 14:13:44 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768918424603467, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 2389, "num_deletes": 501, "total_data_size": 4105374, "memory_usage": 4152936, "flush_reason": "Manual Compaction"}
Jan 20 14:13:44 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #34: started
Jan 20 14:13:44 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768918424627558, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 34, "file_size": 2361262, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13623, "largest_seqno": 16011, "table_properties": {"data_size": 2353696, "index_size": 3740, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2757, "raw_key_size": 21565, "raw_average_key_size": 19, "raw_value_size": 2335088, "raw_average_value_size": 2130, "num_data_blocks": 169, "num_entries": 1096, "num_filter_entries": 1096, "num_deletions": 501, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768918191, "oldest_key_time": 1768918191, "file_creation_time": 1768918424, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1688fb6f-f397-4aac-a0e8-874ff91d97a7", "db_session_id": "5V3N2TVXYZBCXP55EZHK", "orig_file_number": 34, "seqno_to_time_mapping": "N/A"}}
Jan 20 14:13:44 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 24301 microseconds, and 6435 cpu microseconds.
Jan 20 14:13:44 compute-0 ceph-mon[74360]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 14:13:44 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:13:44.627696) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #34: 2361262 bytes OK
Jan 20 14:13:44 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:13:44.627751) [db/memtable_list.cc:519] [default] Level-0 commit table #34 started
Jan 20 14:13:44 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:13:44.631311) [db/memtable_list.cc:722] [default] Level-0 commit table #34: memtable #1 done
Jan 20 14:13:44 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:13:44.631326) EVENT_LOG_v1 {"time_micros": 1768918424631321, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 14:13:44 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:13:44.631343) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 14:13:44 compute-0 ceph-mon[74360]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 4094798, prev total WAL file size 4094798, number of live WAL files 2.
Jan 20 14:13:44 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000030.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:13:44 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:13:44.632994) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400323530' seq:72057594037927935, type:22 .. '6D67727374617400353031' seq:0, type:0; will stop at (end)
Jan 20 14:13:44 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 14:13:44 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [34(2305KB)], [32(9677KB)]
Jan 20 14:13:44 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768918424633073, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [34], "files_L6": [32], "score": -1, "input_data_size": 12270910, "oldest_snapshot_seqno": -1}
Jan 20 14:13:44 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #35: 4317 keys, 8320692 bytes, temperature: kUnknown
Jan 20 14:13:44 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768918424709662, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 35, "file_size": 8320692, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8290478, "index_size": 18328, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10821, "raw_key_size": 106602, "raw_average_key_size": 24, "raw_value_size": 8211057, "raw_average_value_size": 1902, "num_data_blocks": 773, "num_entries": 4317, "num_filter_entries": 4317, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768917305, "oldest_key_time": 0, "file_creation_time": 1768918424, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1688fb6f-f397-4aac-a0e8-874ff91d97a7", "db_session_id": "5V3N2TVXYZBCXP55EZHK", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Jan 20 14:13:44 compute-0 ceph-mon[74360]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 14:13:44 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:13:44.710112) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 8320692 bytes
Jan 20 14:13:44 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:13:44.711411) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 160.1 rd, 108.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.3, 9.5 +0.0 blob) out(7.9 +0.0 blob), read-write-amplify(8.7) write-amplify(3.5) OK, records in: 5230, records dropped: 913 output_compression: NoCompression
Jan 20 14:13:44 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:13:44.711428) EVENT_LOG_v1 {"time_micros": 1768918424711420, "job": 14, "event": "compaction_finished", "compaction_time_micros": 76655, "compaction_time_cpu_micros": 18333, "output_level": 6, "num_output_files": 1, "total_output_size": 8320692, "num_input_records": 5230, "num_output_records": 4317, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 14:13:44 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000034.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:13:44 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768918424711834, "job": 14, "event": "table_file_deletion", "file_number": 34}
Jan 20 14:13:44 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:13:44 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768918424713477, "job": 14, "event": "table_file_deletion", "file_number": 32}
Jan 20 14:13:44 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:13:44.632858) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:13:44 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:13:44.713553) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:13:44 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:13:44.713558) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:13:44 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:13:44.713560) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:13:44 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:13:44.713561) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:13:44 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:13:44.713563) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:13:44 compute-0 python3.9[212535]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768918423.6926966-2309-18502709344840/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:13:44 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:13:44 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:13:44 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:13:44.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:13:44 compute-0 sudo[212533]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:44 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v658: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:13:45 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:13:45 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:13:45 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:13:45.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:13:45 compute-0 sudo[212686]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mfqdzpufkyvahknowtvspuppqitbtcni ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918424.9011855-2309-227449324066995/AnsiballZ_stat.py'
Jan 20 14:13:45 compute-0 sudo[212686]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:13:45 compute-0 python3.9[212688]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:13:45 compute-0 sudo[212686]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:45 compute-0 ceph-mon[74360]: pgmap v658: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:13:45 compute-0 sudo[212809]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xmepmjkargudsfrvicfjsrlwoveuvmyd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918424.9011855-2309-227449324066995/AnsiballZ_copy.py'
Jan 20 14:13:45 compute-0 sudo[212809]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:13:45 compute-0 python3.9[212811]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768918424.9011855-2309-227449324066995/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:13:45 compute-0 sudo[212809]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:46 compute-0 sudo[212836]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:13:46 compute-0 sudo[212836]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:13:46 compute-0 sudo[212836]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:46 compute-0 sudo[212861]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:13:46 compute-0 sudo[212861]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:13:46 compute-0 sudo[212861]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:46 compute-0 sudo[212886]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:13:46 compute-0 sudo[212886]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:13:46 compute-0 sudo[212886]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:46 compute-0 sudo[212911]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 14:13:46 compute-0 sudo[212911]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:13:46 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:13:46 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:13:46 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:13:46.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:13:46 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v659: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:13:47 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:13:47 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:13:47 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:13:47.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:13:47 compute-0 sudo[212911]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:47 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 14:13:47 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:13:47 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 20 14:13:47 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:13:47 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 20 14:13:47 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:13:47 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 57315e0b-fc6c-4765-a48b-6822fce09e2f does not exist
Jan 20 14:13:47 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 243dc43c-e02a-4201-b3f8-e09eeb09d6be does not exist
Jan 20 14:13:47 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 1cc05920-691c-4245-a69d-dac5d002fb80 does not exist
Jan 20 14:13:47 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 20 14:13:47 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:13:47 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 20 14:13:47 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:13:47 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 14:13:47 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:13:47 compute-0 sudo[212969]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:13:47 compute-0 sudo[212969]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:13:47 compute-0 sudo[212969]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:47 compute-0 sudo[212994]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:13:47 compute-0 sudo[212994]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:13:47 compute-0 sudo[212994]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:47 compute-0 sudo[213019]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:13:47 compute-0 sudo[213019]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:13:47 compute-0 sudo[213019]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:47 compute-0 sudo[213044]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 14:13:47 compute-0 sudo[213044]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:13:48 compute-0 podman[213110]: 2026-01-20 14:13:48.032441214 +0000 UTC m=+0.023481738 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:13:48 compute-0 podman[213110]: 2026-01-20 14:13:48.136585794 +0000 UTC m=+0.127626288 container create b3b3ec074e78460cfde946687932983678071858b8eac12f5cf7c7f5b95e8a45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_visvesvaraya, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True)
Jan 20 14:13:48 compute-0 ceph-mon[74360]: pgmap v659: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:13:48 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:13:48 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:13:48 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:13:48 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:13:48 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:13:48 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:13:48 compute-0 systemd[1]: Started libpod-conmon-b3b3ec074e78460cfde946687932983678071858b8eac12f5cf7c7f5b95e8a45.scope.
Jan 20 14:13:48 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:13:48 compute-0 podman[213110]: 2026-01-20 14:13:48.424282439 +0000 UTC m=+0.415323033 container init b3b3ec074e78460cfde946687932983678071858b8eac12f5cf7c7f5b95e8a45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_visvesvaraya, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 20 14:13:48 compute-0 podman[213110]: 2026-01-20 14:13:48.433982547 +0000 UTC m=+0.425023041 container start b3b3ec074e78460cfde946687932983678071858b8eac12f5cf7c7f5b95e8a45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_visvesvaraya, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 20 14:13:48 compute-0 podman[213110]: 2026-01-20 14:13:48.43771016 +0000 UTC m=+0.428750694 container attach b3b3ec074e78460cfde946687932983678071858b8eac12f5cf7c7f5b95e8a45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_visvesvaraya, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:13:48 compute-0 eloquent_visvesvaraya[213127]: 167 167
Jan 20 14:13:48 compute-0 systemd[1]: libpod-b3b3ec074e78460cfde946687932983678071858b8eac12f5cf7c7f5b95e8a45.scope: Deactivated successfully.
Jan 20 14:13:48 compute-0 podman[213110]: 2026-01-20 14:13:48.442184333 +0000 UTC m=+0.433224867 container died b3b3ec074e78460cfde946687932983678071858b8eac12f5cf7c7f5b95e8a45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_visvesvaraya, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 20 14:13:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-12add913f32fca3349d7b9a5674bcfc6a5d1fff0c72a1985d8925c7a1d14f4b4-merged.mount: Deactivated successfully.
Jan 20 14:13:48 compute-0 podman[213110]: 2026-01-20 14:13:48.496855229 +0000 UTC m=+0.487895723 container remove b3b3ec074e78460cfde946687932983678071858b8eac12f5cf7c7f5b95e8a45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_visvesvaraya, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 20 14:13:48 compute-0 systemd[1]: libpod-conmon-b3b3ec074e78460cfde946687932983678071858b8eac12f5cf7c7f5b95e8a45.scope: Deactivated successfully.
Jan 20 14:13:48 compute-0 podman[213151]: 2026-01-20 14:13:48.742514348 +0000 UTC m=+0.085553629 container create 522443a6f94dfa381de70b3a9f2f730346734fd8dcdcbcbbabbaddb08807ae79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_galileo, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:13:48 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:13:48 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:13:48 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:13:48.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:13:48 compute-0 systemd[1]: Started libpod-conmon-522443a6f94dfa381de70b3a9f2f730346734fd8dcdcbcbbabbaddb08807ae79.scope.
Jan 20 14:13:48 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:13:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/335ac23d43ec2f9d5b21f7138e491bd74c579d4a12129c080cb23d0fba69bc37/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:13:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/335ac23d43ec2f9d5b21f7138e491bd74c579d4a12129c080cb23d0fba69bc37/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:13:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/335ac23d43ec2f9d5b21f7138e491bd74c579d4a12129c080cb23d0fba69bc37/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:13:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/335ac23d43ec2f9d5b21f7138e491bd74c579d4a12129c080cb23d0fba69bc37/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:13:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/335ac23d43ec2f9d5b21f7138e491bd74c579d4a12129c080cb23d0fba69bc37/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 14:13:48 compute-0 podman[213151]: 2026-01-20 14:13:48.72085358 +0000 UTC m=+0.063892841 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:13:48 compute-0 podman[213151]: 2026-01-20 14:13:48.828705692 +0000 UTC m=+0.171744943 container init 522443a6f94dfa381de70b3a9f2f730346734fd8dcdcbcbbabbaddb08807ae79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_galileo, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:13:48 compute-0 podman[213151]: 2026-01-20 14:13:48.834349207 +0000 UTC m=+0.177388448 container start 522443a6f94dfa381de70b3a9f2f730346734fd8dcdcbcbbabbaddb08807ae79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_galileo, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 20 14:13:48 compute-0 podman[213151]: 2026-01-20 14:13:48.837267468 +0000 UTC m=+0.180306699 container attach 522443a6f94dfa381de70b3a9f2f730346734fd8dcdcbcbbabbaddb08807ae79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_galileo, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 20 14:13:48 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v660: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:13:49 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:13:49 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:13:49 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:13:49.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:13:49 compute-0 ceph-mon[74360]: pgmap v660: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:13:49 compute-0 pensive_galileo[213169]: --> passed data devices: 0 physical, 1 LVM
Jan 20 14:13:49 compute-0 pensive_galileo[213169]: --> relative data size: 1.0
Jan 20 14:13:49 compute-0 pensive_galileo[213169]: --> All data devices are unavailable
Jan 20 14:13:49 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:13:49 compute-0 systemd[1]: libpod-522443a6f94dfa381de70b3a9f2f730346734fd8dcdcbcbbabbaddb08807ae79.scope: Deactivated successfully.
Jan 20 14:13:49 compute-0 podman[213184]: 2026-01-20 14:13:49.648340584 +0000 UTC m=+0.027514979 container died 522443a6f94dfa381de70b3a9f2f730346734fd8dcdcbcbbabbaddb08807ae79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_galileo, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:13:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-335ac23d43ec2f9d5b21f7138e491bd74c579d4a12129c080cb23d0fba69bc37-merged.mount: Deactivated successfully.
Jan 20 14:13:49 compute-0 podman[213184]: 2026-01-20 14:13:49.693496848 +0000 UTC m=+0.072671243 container remove 522443a6f94dfa381de70b3a9f2f730346734fd8dcdcbcbbabbaddb08807ae79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_galileo, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:13:49 compute-0 systemd[1]: libpod-conmon-522443a6f94dfa381de70b3a9f2f730346734fd8dcdcbcbbabbaddb08807ae79.scope: Deactivated successfully.
Jan 20 14:13:49 compute-0 sudo[213044]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:49 compute-0 sudo[213199]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:13:49 compute-0 sudo[213199]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:13:49 compute-0 sudo[213199]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:49 compute-0 sudo[213224]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:13:49 compute-0 sudo[213224]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:13:49 compute-0 sudo[213224]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:49 compute-0 sudo[213249]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:13:49 compute-0 sudo[213249]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:13:49 compute-0 sudo[213249]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:49 compute-0 sudo[213274]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- lvm list --format json
Jan 20 14:13:49 compute-0 sudo[213274]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:13:50 compute-0 podman[213339]: 2026-01-20 14:13:50.354169731 +0000 UTC m=+0.059812089 container create 9136acd88905f0c8ed024e9c4cf85e39f61d3a45d1a1e6e7375e912017588dcf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_shockley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:13:50 compute-0 systemd[1]: Started libpod-conmon-9136acd88905f0c8ed024e9c4cf85e39f61d3a45d1a1e6e7375e912017588dcf.scope.
Jan 20 14:13:50 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:13:50 compute-0 podman[213339]: 2026-01-20 14:13:50.334074716 +0000 UTC m=+0.039717114 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:13:50 compute-0 podman[213339]: 2026-01-20 14:13:50.447521262 +0000 UTC m=+0.153163630 container init 9136acd88905f0c8ed024e9c4cf85e39f61d3a45d1a1e6e7375e912017588dcf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_shockley, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:13:50 compute-0 podman[213339]: 2026-01-20 14:13:50.453284851 +0000 UTC m=+0.158927189 container start 9136acd88905f0c8ed024e9c4cf85e39f61d3a45d1a1e6e7375e912017588dcf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_shockley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 20 14:13:50 compute-0 podman[213339]: 2026-01-20 14:13:50.456126499 +0000 UTC m=+0.161769047 container attach 9136acd88905f0c8ed024e9c4cf85e39f61d3a45d1a1e6e7375e912017588dcf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_shockley, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef)
Jan 20 14:13:50 compute-0 recursing_shockley[213355]: 167 167
Jan 20 14:13:50 compute-0 systemd[1]: libpod-9136acd88905f0c8ed024e9c4cf85e39f61d3a45d1a1e6e7375e912017588dcf.scope: Deactivated successfully.
Jan 20 14:13:50 compute-0 podman[213339]: 2026-01-20 14:13:50.458824133 +0000 UTC m=+0.164466471 container died 9136acd88905f0c8ed024e9c4cf85e39f61d3a45d1a1e6e7375e912017588dcf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_shockley, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 20 14:13:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-362514af88d1816a5e90bbbe36812ab515d9a5f948288033ef76c1cb55e3a967-merged.mount: Deactivated successfully.
Jan 20 14:13:50 compute-0 podman[213339]: 2026-01-20 14:13:50.526500728 +0000 UTC m=+0.232143076 container remove 9136acd88905f0c8ed024e9c4cf85e39f61d3a45d1a1e6e7375e912017588dcf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_shockley, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True)
Jan 20 14:13:50 compute-0 systemd[1]: libpod-conmon-9136acd88905f0c8ed024e9c4cf85e39f61d3a45d1a1e6e7375e912017588dcf.scope: Deactivated successfully.
Jan 20 14:13:50 compute-0 sudo[213375]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:13:50 compute-0 sudo[213375]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:13:50 compute-0 sudo[213375]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:50 compute-0 sudo[213427]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:13:50 compute-0 sudo[213427]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:13:50 compute-0 sudo[213427]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:50 compute-0 podman[213447]: 2026-01-20 14:13:50.720863114 +0000 UTC m=+0.051163272 container create 114599d2281f704b6b23fc44751b0bf09be5be579d6ba9fc399c061fdfffb5aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_curie, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:13:50 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:13:50 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:13:50 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:13:50.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:13:50 compute-0 systemd[1]: Started libpod-conmon-114599d2281f704b6b23fc44751b0bf09be5be579d6ba9fc399c061fdfffb5aa.scope.
Jan 20 14:13:50 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:13:50 compute-0 podman[213447]: 2026-01-20 14:13:50.700696777 +0000 UTC m=+0.030996985 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:13:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b68bc40870e98055ba2ce6a851b33528270fa8e74a755fcb55b8aa40a5294c3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:13:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b68bc40870e98055ba2ce6a851b33528270fa8e74a755fcb55b8aa40a5294c3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:13:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b68bc40870e98055ba2ce6a851b33528270fa8e74a755fcb55b8aa40a5294c3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:13:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b68bc40870e98055ba2ce6a851b33528270fa8e74a755fcb55b8aa40a5294c3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:13:50 compute-0 podman[213447]: 2026-01-20 14:13:50.809328591 +0000 UTC m=+0.139628779 container init 114599d2281f704b6b23fc44751b0bf09be5be579d6ba9fc399c061fdfffb5aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_curie, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 20 14:13:50 compute-0 podman[213447]: 2026-01-20 14:13:50.822279757 +0000 UTC m=+0.152579955 container start 114599d2281f704b6b23fc44751b0bf09be5be579d6ba9fc399c061fdfffb5aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_curie, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:13:50 compute-0 podman[213447]: 2026-01-20 14:13:50.852013526 +0000 UTC m=+0.182313714 container attach 114599d2281f704b6b23fc44751b0bf09be5be579d6ba9fc399c061fdfffb5aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_curie, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 14:13:50 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v661: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:13:51 compute-0 python3.9[213577]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ls -lRZ /run/libvirt | grep -E ':container_\S+_t'
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:13:51 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:13:51 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:13:51 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:13:51.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:13:51 compute-0 quirky_curie[213522]: {
Jan 20 14:13:51 compute-0 quirky_curie[213522]:     "0": [
Jan 20 14:13:51 compute-0 quirky_curie[213522]:         {
Jan 20 14:13:51 compute-0 quirky_curie[213522]:             "devices": [
Jan 20 14:13:51 compute-0 quirky_curie[213522]:                 "/dev/loop3"
Jan 20 14:13:51 compute-0 quirky_curie[213522]:             ],
Jan 20 14:13:51 compute-0 quirky_curie[213522]:             "lv_name": "ceph_lv0",
Jan 20 14:13:51 compute-0 quirky_curie[213522]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:13:51 compute-0 quirky_curie[213522]:             "lv_size": "7511998464",
Jan 20 14:13:51 compute-0 quirky_curie[213522]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e399cf45-e6b6-5393-99f1-75c601d3f188,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afc3740a-ccee-46f8-b343-22648cc89376,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 20 14:13:51 compute-0 quirky_curie[213522]:             "lv_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 14:13:51 compute-0 quirky_curie[213522]:             "name": "ceph_lv0",
Jan 20 14:13:51 compute-0 quirky_curie[213522]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:13:51 compute-0 quirky_curie[213522]:             "tags": {
Jan 20 14:13:51 compute-0 quirky_curie[213522]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:13:51 compute-0 quirky_curie[213522]:                 "ceph.block_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 14:13:51 compute-0 quirky_curie[213522]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 14:13:51 compute-0 quirky_curie[213522]:                 "ceph.cluster_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 14:13:51 compute-0 quirky_curie[213522]:                 "ceph.cluster_name": "ceph",
Jan 20 14:13:51 compute-0 quirky_curie[213522]:                 "ceph.crush_device_class": "",
Jan 20 14:13:51 compute-0 quirky_curie[213522]:                 "ceph.encrypted": "0",
Jan 20 14:13:51 compute-0 quirky_curie[213522]:                 "ceph.osd_fsid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 14:13:51 compute-0 quirky_curie[213522]:                 "ceph.osd_id": "0",
Jan 20 14:13:51 compute-0 quirky_curie[213522]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 14:13:51 compute-0 quirky_curie[213522]:                 "ceph.type": "block",
Jan 20 14:13:51 compute-0 quirky_curie[213522]:                 "ceph.vdo": "0"
Jan 20 14:13:51 compute-0 quirky_curie[213522]:             },
Jan 20 14:13:51 compute-0 quirky_curie[213522]:             "type": "block",
Jan 20 14:13:51 compute-0 quirky_curie[213522]:             "vg_name": "ceph_vg0"
Jan 20 14:13:51 compute-0 quirky_curie[213522]:         }
Jan 20 14:13:51 compute-0 quirky_curie[213522]:     ]
Jan 20 14:13:51 compute-0 quirky_curie[213522]: }
Jan 20 14:13:51 compute-0 systemd[1]: libpod-114599d2281f704b6b23fc44751b0bf09be5be579d6ba9fc399c061fdfffb5aa.scope: Deactivated successfully.
Jan 20 14:13:51 compute-0 podman[213447]: 2026-01-20 14:13:51.585747211 +0000 UTC m=+0.916047379 container died 114599d2281f704b6b23fc44751b0bf09be5be579d6ba9fc399c061fdfffb5aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_curie, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Jan 20 14:13:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-6b68bc40870e98055ba2ce6a851b33528270fa8e74a755fcb55b8aa40a5294c3-merged.mount: Deactivated successfully.
Jan 20 14:13:51 compute-0 podman[213447]: 2026-01-20 14:13:51.647713328 +0000 UTC m=+0.978013486 container remove 114599d2281f704b6b23fc44751b0bf09be5be579d6ba9fc399c061fdfffb5aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_curie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:13:51 compute-0 systemd[1]: libpod-conmon-114599d2281f704b6b23fc44751b0bf09be5be579d6ba9fc399c061fdfffb5aa.scope: Deactivated successfully.
Jan 20 14:13:51 compute-0 sudo[213274]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:51 compute-0 sudo[213675]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:13:51 compute-0 sudo[213675]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:13:51 compute-0 sudo[213675]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:51 compute-0 sudo[213700]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:13:51 compute-0 sudo[213700]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:13:51 compute-0 sudo[213700]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:51 compute-0 sudo[213725]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:13:51 compute-0 sudo[213725]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:13:51 compute-0 sudo[213725]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:51 compute-0 sudo[213770]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- raw list --format json
Jan 20 14:13:51 compute-0 sudo[213770]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:13:52 compute-0 sudo[213848]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-unhyjulmktillsqyzalzathlftnmtgkd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918431.5032687-2927-150993893011482/AnsiballZ_seboolean.py'
Jan 20 14:13:52 compute-0 sudo[213848]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:13:52 compute-0 ceph-mon[74360]: pgmap v661: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:13:52 compute-0 python3.9[213850]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Jan 20 14:13:52 compute-0 podman[213890]: 2026-01-20 14:13:52.345165505 +0000 UTC m=+0.096438478 container create bb2fafe28b7cc813233014fe6631d6d7449aad0059f8f58b10a4db234291ee25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_mirzakhani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:13:52 compute-0 podman[213890]: 2026-01-20 14:13:52.271672489 +0000 UTC m=+0.022945482 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:13:52 compute-0 systemd[1]: Started libpod-conmon-bb2fafe28b7cc813233014fe6631d6d7449aad0059f8f58b10a4db234291ee25.scope.
Jan 20 14:13:52 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:13:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Optimize plan auto_2026-01-20_14:13:52
Jan 20 14:13:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 14:13:52 compute-0 ceph-mgr[74653]: [balancer INFO root] do_upmap
Jan 20 14:13:52 compute-0 ceph-mgr[74653]: [balancer INFO root] pools ['images', 'default.rgw.meta', 'cephfs.cephfs.data', 'volumes', '.mgr', 'default.rgw.log', 'cephfs.cephfs.meta', 'vms', 'backups', 'default.rgw.control', '.rgw.root']
Jan 20 14:13:52 compute-0 ceph-mgr[74653]: [balancer INFO root] prepared 0/10 changes
Jan 20 14:13:52 compute-0 podman[213890]: 2026-01-20 14:13:52.447843603 +0000 UTC m=+0.199116596 container init bb2fafe28b7cc813233014fe6631d6d7449aad0059f8f58b10a4db234291ee25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_mirzakhani, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 20 14:13:52 compute-0 podman[213890]: 2026-01-20 14:13:52.458217789 +0000 UTC m=+0.209490762 container start bb2fafe28b7cc813233014fe6631d6d7449aad0059f8f58b10a4db234291ee25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_mirzakhani, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:13:52 compute-0 podman[213890]: 2026-01-20 14:13:52.462919079 +0000 UTC m=+0.214192082 container attach bb2fafe28b7cc813233014fe6631d6d7449aad0059f8f58b10a4db234291ee25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_mirzakhani, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:13:52 compute-0 infallible_mirzakhani[213907]: 167 167
Jan 20 14:13:52 compute-0 systemd[1]: libpod-bb2fafe28b7cc813233014fe6631d6d7449aad0059f8f58b10a4db234291ee25.scope: Deactivated successfully.
Jan 20 14:13:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:13:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:13:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:13:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:13:52 compute-0 podman[213890]: 2026-01-20 14:13:52.466129477 +0000 UTC m=+0.217402480 container died bb2fafe28b7cc813233014fe6631d6d7449aad0059f8f58b10a4db234291ee25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_mirzakhani, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:13:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:13:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:13:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-1e2703c38bff379272689030a7c941dd5a54a6743eb7a30746e7ae593cdde357-merged.mount: Deactivated successfully.
Jan 20 14:13:52 compute-0 podman[213890]: 2026-01-20 14:13:52.559589082 +0000 UTC m=+0.310862085 container remove bb2fafe28b7cc813233014fe6631d6d7449aad0059f8f58b10a4db234291ee25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_mirzakhani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 20 14:13:52 compute-0 systemd[1]: libpod-conmon-bb2fafe28b7cc813233014fe6631d6d7449aad0059f8f58b10a4db234291ee25.scope: Deactivated successfully.
Jan 20 14:13:52 compute-0 podman[213932]: 2026-01-20 14:13:52.731876239 +0000 UTC m=+0.057599748 container create c815e715c0e2e9f914674ff5afe3ce405d564bb33dac19ba4834d5dd85d32450 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_merkle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:13:52 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:13:52 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:13:52 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:13:52.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:13:52 compute-0 podman[213932]: 2026-01-20 14:13:52.712146406 +0000 UTC m=+0.037869905 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:13:52 compute-0 systemd[1]: Started libpod-conmon-c815e715c0e2e9f914674ff5afe3ce405d564bb33dac19ba4834d5dd85d32450.scope.
Jan 20 14:13:52 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:13:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ba6bc2993d40288007a32bb6554d424c81b4bf3cc62b134b6c9716b4d9de8cd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:13:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ba6bc2993d40288007a32bb6554d424c81b4bf3cc62b134b6c9716b4d9de8cd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:13:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ba6bc2993d40288007a32bb6554d424c81b4bf3cc62b134b6c9716b4d9de8cd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:13:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ba6bc2993d40288007a32bb6554d424c81b4bf3cc62b134b6c9716b4d9de8cd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:13:52 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v662: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:13:52 compute-0 podman[213932]: 2026-01-20 14:13:52.89089936 +0000 UTC m=+0.216622889 container init c815e715c0e2e9f914674ff5afe3ce405d564bb33dac19ba4834d5dd85d32450 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_merkle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 20 14:13:52 compute-0 podman[213932]: 2026-01-20 14:13:52.897913263 +0000 UTC m=+0.223636762 container start c815e715c0e2e9f914674ff5afe3ce405d564bb33dac19ba4834d5dd85d32450 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_merkle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 20 14:13:53 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:13:53 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:13:53 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:13:53.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:13:53 compute-0 awesome_merkle[213950]: {
Jan 20 14:13:53 compute-0 awesome_merkle[213950]:     "afc3740a-ccee-46f8-b343-22648cc89376": {
Jan 20 14:13:53 compute-0 awesome_merkle[213950]:         "ceph_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 14:13:53 compute-0 awesome_merkle[213950]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 20 14:13:53 compute-0 awesome_merkle[213950]:         "osd_id": 0,
Jan 20 14:13:53 compute-0 awesome_merkle[213950]:         "osd_uuid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 14:13:53 compute-0 awesome_merkle[213950]:         "type": "bluestore"
Jan 20 14:13:53 compute-0 awesome_merkle[213950]:     }
Jan 20 14:13:53 compute-0 awesome_merkle[213950]: }
Jan 20 14:13:53 compute-0 systemd[1]: libpod-c815e715c0e2e9f914674ff5afe3ce405d564bb33dac19ba4834d5dd85d32450.scope: Deactivated successfully.
Jan 20 14:13:53 compute-0 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Jan 20 14:13:53 compute-0 podman[213932]: 2026-01-20 14:13:53.835004802 +0000 UTC m=+1.160728331 container attach c815e715c0e2e9f914674ff5afe3ce405d564bb33dac19ba4834d5dd85d32450 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_merkle, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 20 14:13:53 compute-0 podman[213932]: 2026-01-20 14:13:53.836813461 +0000 UTC m=+1.162536990 container died c815e715c0e2e9f914674ff5afe3ce405d564bb33dac19ba4834d5dd85d32450 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_merkle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 14:13:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-8ba6bc2993d40288007a32bb6554d424c81b4bf3cc62b134b6c9716b4d9de8cd-merged.mount: Deactivated successfully.
Jan 20 14:13:53 compute-0 ceph-mon[74360]: pgmap v662: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:13:53 compute-0 podman[213932]: 2026-01-20 14:13:53.998738983 +0000 UTC m=+1.324462502 container remove c815e715c0e2e9f914674ff5afe3ce405d564bb33dac19ba4834d5dd85d32450 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_merkle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef)
Jan 20 14:13:54 compute-0 systemd[1]: libpod-conmon-c815e715c0e2e9f914674ff5afe3ce405d564bb33dac19ba4834d5dd85d32450.scope: Deactivated successfully.
Jan 20 14:13:54 compute-0 sudo[213770]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:54 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 14:13:54 compute-0 sudo[213848]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:54 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:13:54 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 14:13:54 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:13:54 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev ae002554-6b44-4fc4-9c92-0260b4144171 does not exist
Jan 20 14:13:54 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 351878b8-c4d6-40f6-bd8f-318f37df67f6 does not exist
Jan 20 14:13:54 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 6633d056-4154-4143-bde0-3ae177117eb1 does not exist
Jan 20 14:13:54 compute-0 sudo[214010]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:13:54 compute-0 sudo[214010]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:13:54 compute-0 sudo[214010]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:54 compute-0 sudo[214058]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 14:13:54 compute-0 sudo[214058]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:13:54 compute-0 sudo[214058]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:54 compute-0 sudo[214185]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ufhfpiipdcjyfydjmlyyhivtbujcuqrp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918434.317462-2951-61786025660521/AnsiballZ_copy.py'
Jan 20 14:13:54 compute-0 sudo[214185]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:13:54 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:13:54 compute-0 python3.9[214187]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:13:54 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:13:54 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.002000054s ======
Jan 20 14:13:54 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:13:54.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Jan 20 14:13:54 compute-0 sudo[214185]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:54 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v663: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:13:55 compute-0 sudo[214338]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hajeiqzrbwahrqcewmicybpblmoejpch ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918434.8929126-2951-110682883095307/AnsiballZ_copy.py'
Jan 20 14:13:55 compute-0 sudo[214338]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:13:55 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:13:55 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:13:55 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:13:55.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:13:55 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:13:55 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:13:55 compute-0 ceph-mon[74360]: pgmap v663: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:13:55 compute-0 python3.9[214340]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:13:55 compute-0 sudo[214338]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:55 compute-0 sudo[214490]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lfufujmlvtgtjxkjnvwjdjhgkmvhpgia ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918435.589715-2951-9025896581746/AnsiballZ_copy.py'
Jan 20 14:13:55 compute-0 sudo[214490]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:13:56 compute-0 python3.9[214492]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:13:56 compute-0 sudo[214490]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:56 compute-0 sudo[214642]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jbatjcxzurrfmpapbqjatyjcnrvhwvfj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918436.2513413-2951-169280322611099/AnsiballZ_copy.py'
Jan 20 14:13:56 compute-0 sudo[214642]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:13:56 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:13:56 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:13:56 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:13:56.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:13:56 compute-0 python3.9[214644]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:13:56 compute-0 sudo[214642]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:56 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v664: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:13:57 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:13:57 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:13:57 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:13:57.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:13:57 compute-0 sudo[214795]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vcwqkypzxlrybewmdizvwnehlyvvvusc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918436.9334304-2951-280773922293574/AnsiballZ_copy.py'
Jan 20 14:13:57 compute-0 sudo[214795]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:13:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 14:13:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:13:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 14:13:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:13:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:13:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:13:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:13:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:13:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:13:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:13:57 compute-0 python3.9[214797]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:13:57 compute-0 sudo[214795]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:58 compute-0 ceph-mon[74360]: pgmap v664: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:13:58 compute-0 sudo[214947]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vyvrftjdkwgodgwfawdwthnxadduxrtp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918438.28059-3059-122045190286733/AnsiballZ_copy.py'
Jan 20 14:13:58 compute-0 sudo[214947]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:13:58 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:13:58 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:13:58 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:13:58.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:13:58 compute-0 python3.9[214949]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:13:58 compute-0 sudo[214947]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:58 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v665: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:13:59 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:13:59 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:13:59 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:13:59.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:13:59 compute-0 sudo[215100]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-djeaoxgpbtoqgtsuocqkhqdzwcmylgeq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918438.925743-3059-205605745369455/AnsiballZ_copy.py'
Jan 20 14:13:59 compute-0 sudo[215100]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:13:59 compute-0 python3.9[215102]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:13:59 compute-0 sudo[215100]: pam_unix(sudo:session): session closed for user root
Jan 20 14:13:59 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:13:59 compute-0 ceph-mon[74360]: pgmap v665: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:13:59 compute-0 sudo[215252]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ortqjbmlemfrfoarpuwvctbsebjefqel ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918439.5665061-3059-181270224140813/AnsiballZ_copy.py'
Jan 20 14:13:59 compute-0 sudo[215252]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:14:00 compute-0 python3.9[215254]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:14:00 compute-0 sudo[215252]: pam_unix(sudo:session): session closed for user root
Jan 20 14:14:00 compute-0 sudo[215404]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wpjexgarcvzkyicrhupodmtrifzsdzvu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918440.1731217-3059-40963401241948/AnsiballZ_copy.py'
Jan 20 14:14:00 compute-0 sudo[215404]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:14:00 compute-0 python3.9[215406]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:14:00 compute-0 sudo[215404]: pam_unix(sudo:session): session closed for user root
Jan 20 14:14:00 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:14:00 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:14:00 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:14:00.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:14:00 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v666: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:14:01 compute-0 sudo[215557]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uwwxvrhpetfmmsialzukjtanejejvqzf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918440.8707325-3059-43833389294082/AnsiballZ_copy.py'
Jan 20 14:14:01 compute-0 sudo[215557]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:14:01 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:14:01 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:14:01 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:14:01.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:14:01 compute-0 ceph-mon[74360]: pgmap v666: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:14:01 compute-0 python3.9[215559]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:14:01 compute-0 sudo[215557]: pam_unix(sudo:session): session closed for user root
Jan 20 14:14:02 compute-0 sudo[215709]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qryczrfmynjxwspbiduzerzprzdkkpiv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918442.2438972-3167-251977714406381/AnsiballZ_systemd.py'
Jan 20 14:14:02 compute-0 sudo[215709]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:14:02 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:14:02 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:14:02 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:14:02.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:14:02 compute-0 python3.9[215711]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 20 14:14:02 compute-0 systemd[1]: Reloading.
Jan 20 14:14:02 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v667: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:14:02 compute-0 systemd-rc-local-generator[215736]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:14:02 compute-0 systemd-sysv-generator[215742]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:14:03 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:14:03 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:14:03 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:14:03.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:14:03 compute-0 systemd[1]: Starting libvirt logging daemon socket...
Jan 20 14:14:03 compute-0 systemd[1]: Listening on libvirt logging daemon socket.
Jan 20 14:14:03 compute-0 systemd[1]: Starting libvirt logging daemon admin socket...
Jan 20 14:14:03 compute-0 systemd[1]: Listening on libvirt logging daemon admin socket.
Jan 20 14:14:03 compute-0 systemd[1]: Starting libvirt logging daemon...
Jan 20 14:14:03 compute-0 systemd[1]: Started libvirt logging daemon.
Jan 20 14:14:03 compute-0 sudo[215709]: pam_unix(sudo:session): session closed for user root
Jan 20 14:14:03 compute-0 ceph-mon[74360]: pgmap v667: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:14:03 compute-0 sudo[215913]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bcjdsotcikrphkkumqjihfjwxoorufla ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918443.5876915-3167-116576635598592/AnsiballZ_systemd.py'
Jan 20 14:14:03 compute-0 sudo[215913]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:14:04 compute-0 podman[215877]: 2026-01-20 14:14:04.01418283 +0000 UTC m=+0.123448092 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_metadata_agent)
Jan 20 14:14:04 compute-0 python3.9[215917]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 20 14:14:04 compute-0 systemd[1]: Reloading.
Jan 20 14:14:04 compute-0 systemd-rc-local-generator[215949]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:14:04 compute-0 systemd-sysv-generator[215953]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:14:04 compute-0 systemd[1]: Starting libvirt nodedev daemon socket...
Jan 20 14:14:04 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:14:04 compute-0 systemd[1]: Listening on libvirt nodedev daemon socket.
Jan 20 14:14:04 compute-0 systemd[1]: Starting libvirt nodedev daemon admin socket...
Jan 20 14:14:04 compute-0 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Jan 20 14:14:04 compute-0 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Jan 20 14:14:04 compute-0 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Jan 20 14:14:04 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Jan 20 14:14:04 compute-0 systemd[1]: Started libvirt nodedev daemon.
Jan 20 14:14:04 compute-0 sudo[215913]: pam_unix(sudo:session): session closed for user root
Jan 20 14:14:04 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:14:04 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:14:04 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:14:04.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:14:04 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v668: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:14:05 compute-0 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Jan 20 14:14:05 compute-0 sudo[216137]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zcbryrspdtsepjerlxaiwwquzmaxgjca ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918444.816569-3167-36117788870551/AnsiballZ_systemd.py'
Jan 20 14:14:05 compute-0 sudo[216137]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:14:05 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:14:05 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:14:05 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:14:05.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:14:05 compute-0 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Jan 20 14:14:05 compute-0 python3.9[216139]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 20 14:14:05 compute-0 systemd[1]: Reloading.
Jan 20 14:14:05 compute-0 systemd-rc-local-generator[216165]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:14:05 compute-0 systemd-sysv-generator[216172]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:14:05 compute-0 systemd[1]: Starting libvirt proxy daemon admin socket...
Jan 20 14:14:05 compute-0 systemd[1]: Starting libvirt proxy daemon read-only socket...
Jan 20 14:14:05 compute-0 systemd[1]: Listening on libvirt proxy daemon admin socket.
Jan 20 14:14:05 compute-0 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Jan 20 14:14:05 compute-0 systemd[1]: Starting libvirt proxy daemon...
Jan 20 14:14:06 compute-0 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Jan 20 14:14:06 compute-0 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Jan 20 14:14:06 compute-0 systemd[1]: Started libvirt proxy daemon.
Jan 20 14:14:06 compute-0 sudo[216137]: pam_unix(sudo:session): session closed for user root
Jan 20 14:14:06 compute-0 ceph-mon[74360]: pgmap v668: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:14:06 compute-0 sudo[216357]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-duecmdcoahrgydsvyahvspznghgxpkhx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918446.3040879-3167-212250922887311/AnsiballZ_systemd.py'
Jan 20 14:14:06 compute-0 sudo[216357]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:14:06 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:14:06 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:14:06 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:14:06.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:14:06 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v669: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:14:06 compute-0 python3.9[216359]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 20 14:14:06 compute-0 systemd[1]: Reloading.
Jan 20 14:14:06 compute-0 setroubleshoot[216110]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 0c05defa-fcef-4d0d-94b3-a8afd4a56d34
Jan 20 14:14:07 compute-0 setroubleshoot[216110]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.
                                                  
                                                  *****  Plugin dac_override (91.4 confidence) suggests   **********************
                                                  
                                                  If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
                                                  Then turn on full auditing to get path information about the offending file and generate the error again.
                                                  Do
                                                  
                                                  Turn on full auditing
                                                  # auditctl -w /etc/shadow -p w
                                                  Try to recreate AVC. Then execute
                                                  # ausearch -m avc -ts recent
                                                  If you see PATH record check ownership/permissions on file, and fix it,
                                                  otherwise report as a bugzilla.
                                                  
                                                  *****  Plugin catchall (9.59 confidence) suggests   **************************
                                                  
                                                  If you believe that virtlogd should have the dac_read_search capability by default.
                                                  Then you should report this as a bug.
                                                  You can generate a local policy module to allow this access.
                                                  Do
                                                  allow this access for now by executing:
                                                  # ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
                                                  # semodule -X 300 -i my-virtlogd.pp
                                                  
Jan 20 14:14:07 compute-0 setroubleshoot[216110]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 0c05defa-fcef-4d0d-94b3-a8afd4a56d34
Jan 20 14:14:07 compute-0 setroubleshoot[216110]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.
                                                  
                                                  *****  Plugin dac_override (91.4 confidence) suggests   **********************
                                                  
                                                  If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
                                                  Then turn on full auditing to get path information about the offending file and generate the error again.
                                                  Do
                                                  
                                                  Turn on full auditing
                                                  # auditctl -w /etc/shadow -p w
                                                  Try to recreate AVC. Then execute
                                                  # ausearch -m avc -ts recent
                                                  If you see PATH record check ownership/permissions on file, and fix it,
                                                  otherwise report as a bugzilla.
                                                  
                                                  *****  Plugin catchall (9.59 confidence) suggests   **************************
                                                  
                                                  If you believe that virtlogd should have the dac_read_search capability by default.
                                                  Then you should report this as a bug.
                                                  You can generate a local policy module to allow this access.
                                                  Do
                                                  allow this access for now by executing:
                                                  # ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
                                                  # semodule -X 300 -i my-virtlogd.pp
                                                  
Jan 20 14:14:07 compute-0 systemd-rc-local-generator[216386]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:14:07 compute-0 systemd-sysv-generator[216392]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:14:07 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:14:07 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:14:07 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:14:07.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:14:07 compute-0 systemd[1]: Listening on libvirt locking daemon socket.
Jan 20 14:14:07 compute-0 systemd[1]: Starting libvirt QEMU daemon socket...
Jan 20 14:14:07 compute-0 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 20 14:14:07 compute-0 systemd[1]: Starting Virtual Machine and Container Registration Service...
Jan 20 14:14:07 compute-0 systemd[1]: Listening on libvirt QEMU daemon socket.
Jan 20 14:14:07 compute-0 systemd[1]: Starting libvirt QEMU daemon admin socket...
Jan 20 14:14:07 compute-0 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Jan 20 14:14:07 compute-0 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Jan 20 14:14:07 compute-0 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Jan 20 14:14:07 compute-0 systemd[1]: Started Virtual Machine and Container Registration Service.
Jan 20 14:14:07 compute-0 systemd[1]: Starting libvirt QEMU daemon...
Jan 20 14:14:07 compute-0 systemd[1]: Started libvirt QEMU daemon.
Jan 20 14:14:07 compute-0 sudo[216357]: pam_unix(sudo:session): session closed for user root
Jan 20 14:14:07 compute-0 ceph-mon[74360]: pgmap v669: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:14:07 compute-0 sudo[216574]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bbdzxxwexcremvqzltrprmnzbyydrfgf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918447.5992985-3167-79451702838119/AnsiballZ_systemd.py'
Jan 20 14:14:07 compute-0 sudo[216574]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:14:08 compute-0 python3.9[216576]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 20 14:14:08 compute-0 systemd[1]: Reloading.
Jan 20 14:14:08 compute-0 systemd-rc-local-generator[216605]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:14:08 compute-0 systemd-sysv-generator[216610]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:14:08 compute-0 systemd[1]: Starting libvirt secret daemon socket...
Jan 20 14:14:08 compute-0 systemd[1]: Listening on libvirt secret daemon socket.
Jan 20 14:14:08 compute-0 systemd[1]: Starting libvirt secret daemon admin socket...
Jan 20 14:14:08 compute-0 systemd[1]: Starting libvirt secret daemon read-only socket...
Jan 20 14:14:08 compute-0 systemd[1]: Listening on libvirt secret daemon admin socket.
Jan 20 14:14:08 compute-0 systemd[1]: Listening on libvirt secret daemon read-only socket.
Jan 20 14:14:08 compute-0 systemd[1]: Starting libvirt secret daemon...
Jan 20 14:14:08 compute-0 systemd[1]: Started libvirt secret daemon.
Jan 20 14:14:08 compute-0 sudo[216574]: pam_unix(sudo:session): session closed for user root
Jan 20 14:14:08 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:14:08 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:14:08 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:14:08.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:14:08 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v670: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:14:09 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:14:09 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:14:09 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:14:09.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:14:09 compute-0 sudo[216787]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rhtqdggeuibavarseyojwripdqobkanc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918449.244014-3278-49427310830111/AnsiballZ_file.py'
Jan 20 14:14:09 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:14:09 compute-0 sudo[216787]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:14:09 compute-0 ceph-mon[74360]: pgmap v670: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:14:09 compute-0 podman[216789]: 2026-01-20 14:14:09.750724799 +0000 UTC m=+0.127620356 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 20 14:14:09 compute-0 python3.9[216790]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:14:09 compute-0 sudo[216787]: pam_unix(sudo:session): session closed for user root
Jan 20 14:14:10 compute-0 sudo[216967]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kdppgirhxcysulxoixkkxchzqawwkojz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918450.0512774-3302-244609226221000/AnsiballZ_find.py'
Jan 20 14:14:10 compute-0 sudo[216967]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:14:10 compute-0 python3.9[216969]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 20 14:14:10 compute-0 sudo[216967]: pam_unix(sudo:session): session closed for user root
Jan 20 14:14:10 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:14:10 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:14:10 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:14:10.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:14:10 compute-0 sudo[216995]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:14:10 compute-0 sudo[216995]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:14:10 compute-0 sudo[216995]: pam_unix(sudo:session): session closed for user root
Jan 20 14:14:10 compute-0 sudo[217020]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:14:10 compute-0 sudo[217020]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:14:10 compute-0 sudo[217020]: pam_unix(sudo:session): session closed for user root
Jan 20 14:14:10 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v671: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:14:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 14:14:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:14:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 20 14:14:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:14:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:14:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:14:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:14:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:14:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:14:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:14:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:14:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:14:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 20 14:14:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:14:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:14:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:14:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 20 14:14:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:14:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 20 14:14:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:14:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:14:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:14:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 20 14:14:11 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:14:11 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:14:11 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:14:11.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:14:11 compute-0 sudo[217170]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eikafczmuqmlalvynedspxmssjbwebap ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918450.8923874-3326-19869306228511/AnsiballZ_command.py'
Jan 20 14:14:11 compute-0 sudo[217170]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:14:11 compute-0 python3.9[217172]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;
                                             echo ceph
                                             awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:14:11 compute-0 sudo[217170]: pam_unix(sudo:session): session closed for user root
Jan 20 14:14:12 compute-0 ceph-mon[74360]: pgmap v671: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:14:12 compute-0 python3.9[217326]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 20 14:14:12 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:14:12 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:14:12 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:14:12.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:14:12 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v672: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:14:13 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:14:13 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:14:13 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:14:13.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:14:13 compute-0 python3.9[217477]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:14:13 compute-0 ceph-mon[74360]: pgmap v672: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:14:13 compute-0 python3.9[217598]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1768918452.8143203-3383-4316021351817/.source.xml follow=False _original_basename=secret.xml.j2 checksum=35bbbade4f0995b3fba698d107c82491080dc0dd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:14:14 compute-0 sudo[217748]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rybmuwazvhiroipxhilkfiamxqhwoyhx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918454.1908793-3428-152098085617941/AnsiballZ_command.py'
Jan 20 14:14:14 compute-0 sudo[217748]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:14:14 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:14:14 compute-0 python3.9[217750]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine e399cf45-e6b6-5393-99f1-75c601d3f188
                                             virsh secret-define --file /tmp/secret.xml
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:14:14 compute-0 polkitd[43447]: Registered Authentication Agent for unix-process:217753:439551 (system bus name :1.3010 [pkttyagent --process 217753 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Jan 20 14:14:14 compute-0 polkitd[43447]: Unregistered Authentication Agent for unix-process:217753:439551 (system bus name :1.3010, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Jan 20 14:14:14 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:14:14 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:14:14 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:14:14.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:14:14 compute-0 polkitd[43447]: Registered Authentication Agent for unix-process:217752:439550 (system bus name :1.3011 [pkttyagent --process 217752 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Jan 20 14:14:14 compute-0 polkitd[43447]: Unregistered Authentication Agent for unix-process:217752:439550 (system bus name :1.3011, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Jan 20 14:14:14 compute-0 sudo[217748]: pam_unix(sudo:session): session closed for user root
Jan 20 14:14:14 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v673: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:14:15 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:14:15 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:14:15 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:14:15.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:14:15 compute-0 python3.9[217913]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:14:16 compute-0 ceph-mon[74360]: pgmap v673: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:14:16 compute-0 sudo[218063]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ijwdrxnlhtqvzcimsvzdulyljcizhvog ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918456.2562687-3476-100964051798606/AnsiballZ_command.py'
Jan 20 14:14:16 compute-0 sudo[218063]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:14:16 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:14:16 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:14:16 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:14:16.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:14:16 compute-0 sudo[218063]: pam_unix(sudo:session): session closed for user root
Jan 20 14:14:16 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v674: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:14:17 compute-0 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Jan 20 14:14:17 compute-0 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Consumed 1.002s CPU time.
Jan 20 14:14:17 compute-0 systemd[1]: setroubleshootd.service: Deactivated successfully.
Jan 20 14:14:17 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:14:17 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:14:17 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:14:17.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:14:17 compute-0 ceph-mon[74360]: pgmap v674: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:14:17 compute-0 sudo[218217]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hoqoswimskhxfljuekzurnnblwinhzxp ; FSID=e399cf45-e6b6-5393-99f1-75c601d3f188 KEY=AQAciW9pAAAAABAAwJYC9p1PAwdI6pFMhbpXIA== /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918457.1792846-3500-146380952864180/AnsiballZ_command.py'
Jan 20 14:14:17 compute-0 sudo[218217]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:14:17 compute-0 polkitd[43447]: Registered Authentication Agent for unix-process:218220:439842 (system bus name :1.3014 [pkttyagent --process 218220 --notify-fd 5 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Jan 20 14:14:17 compute-0 polkitd[43447]: Unregistered Authentication Agent for unix-process:218220:439842 (system bus name :1.3014, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Jan 20 14:14:17 compute-0 sudo[218217]: pam_unix(sudo:session): session closed for user root
Jan 20 14:14:18 compute-0 sudo[218375]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uqbarniaiksstjfeacnnwpecnfdwyxsd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918458.0561519-3524-47678193123284/AnsiballZ_copy.py'
Jan 20 14:14:18 compute-0 sudo[218375]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:14:18 compute-0 python3.9[218377]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:14:18 compute-0 sudo[218375]: pam_unix(sudo:session): session closed for user root
Jan 20 14:14:18 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:14:18 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:14:18 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:14:18.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:14:18 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v675: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:14:19 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:14:19 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:14:19 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:14:19.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:14:19 compute-0 sudo[218528]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-onfyvsgbhciqysdhnobxkritfmhxykez ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918459.1414247-3548-272501929929182/AnsiballZ_stat.py'
Jan 20 14:14:19 compute-0 sudo[218528]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:14:19 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:14:19 compute-0 python3.9[218530]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:14:19 compute-0 sudo[218528]: pam_unix(sudo:session): session closed for user root
Jan 20 14:14:20 compute-0 sudo[218651]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-srrgrmrekcpdxtkdgleijsrmaachhgdz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918459.1414247-3548-272501929929182/AnsiballZ_copy.py'
Jan 20 14:14:20 compute-0 sudo[218651]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:14:20 compute-0 ceph-mon[74360]: pgmap v675: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:14:20 compute-0 python3.9[218653]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1768918459.1414247-3548-272501929929182/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:14:20 compute-0 sudo[218651]: pam_unix(sudo:session): session closed for user root
Jan 20 14:14:20 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:14:20 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:14:20 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:14:20.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:14:20 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v676: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:14:21 compute-0 sudo[218804]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uqtiehftblsumpegqibnygwwnmhbqenz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918460.7485936-3596-116761956881448/AnsiballZ_file.py'
Jan 20 14:14:21 compute-0 sudo[218804]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:14:21 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:14:21 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:14:21 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:14:21.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:14:21 compute-0 ceph-mon[74360]: pgmap v676: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:14:21 compute-0 python3.9[218806]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:14:21 compute-0 sudo[218804]: pam_unix(sudo:session): session closed for user root
Jan 20 14:14:21 compute-0 sudo[218956]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xnzzmatmumyvzdcsfgijxkuthupfzdnw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918461.680746-3620-184949507035469/AnsiballZ_stat.py'
Jan 20 14:14:22 compute-0 sudo[218956]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:14:22 compute-0 python3.9[218958]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:14:22 compute-0 sudo[218956]: pam_unix(sudo:session): session closed for user root
Jan 20 14:14:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:14:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:14:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:14:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:14:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:14:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:14:22 compute-0 sudo[219034]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ykhutjusbxbmksjyyqckddfnjykynlgb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918461.680746-3620-184949507035469/AnsiballZ_file.py'
Jan 20 14:14:22 compute-0 sudo[219034]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:14:22 compute-0 python3.9[219036]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:14:22 compute-0 sudo[219034]: pam_unix(sudo:session): session closed for user root
Jan 20 14:14:22 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:14:22 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:14:22 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:14:22.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:14:22 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v677: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:14:23 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:14:23 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:14:23 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:14:23.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:14:23 compute-0 sudo[219187]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qvbkqwzmccmwaftumivvwkelodnktopq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918463.0790384-3656-272485546892595/AnsiballZ_stat.py'
Jan 20 14:14:23 compute-0 sudo[219187]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:14:23 compute-0 python3.9[219189]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:14:23 compute-0 sudo[219187]: pam_unix(sudo:session): session closed for user root
Jan 20 14:14:23 compute-0 sudo[219265]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wldqgeotxrmzcpksczgmjwhilklwasph ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918463.0790384-3656-272485546892595/AnsiballZ_file.py'
Jan 20 14:14:23 compute-0 sudo[219265]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:14:23 compute-0 ceph-mon[74360]: pgmap v677: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:14:24 compute-0 python3.9[219267]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.zys2n6sw recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:14:24 compute-0 sudo[219265]: pam_unix(sudo:session): session closed for user root
Jan 20 14:14:24 compute-0 sshd-session[219268]: Connection closed by authenticating user root 157.245.78.139 port 37030 [preauth]
Jan 20 14:14:24 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:14:24 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:14:24 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:14:24 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:14:24.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:14:24 compute-0 sudo[219420]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ohcagrprmlirjnxicwbsvmzmlhwcfaod ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918464.4793096-3692-247704749447393/AnsiballZ_stat.py'
Jan 20 14:14:24 compute-0 sudo[219420]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:14:24 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v678: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:14:25 compute-0 python3.9[219422]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:14:25 compute-0 sudo[219420]: pam_unix(sudo:session): session closed for user root
Jan 20 14:14:25 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:14:25 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:14:25 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:14:25.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:14:25 compute-0 sudo[219498]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wqpydjzylbgcbrsyxrosgjbpuppuywtg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918464.4793096-3692-247704749447393/AnsiballZ_file.py'
Jan 20 14:14:25 compute-0 sudo[219498]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:14:25 compute-0 python3.9[219500]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:14:25 compute-0 sudo[219498]: pam_unix(sudo:session): session closed for user root
Jan 20 14:14:25 compute-0 ceph-mon[74360]: pgmap v678: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:14:26 compute-0 sudo[219650]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sdttbcuhtqtgqrvjisolaszfqkygeglw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918465.9291635-3731-271387889586009/AnsiballZ_command.py'
Jan 20 14:14:26 compute-0 sudo[219650]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:14:26 compute-0 python3.9[219652]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:14:26 compute-0 sudo[219650]: pam_unix(sudo:session): session closed for user root
Jan 20 14:14:26 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:14:26 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:14:26 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:14:26.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:14:26 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v679: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:14:27 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:14:27 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:14:27 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:14:27.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:14:27 compute-0 sudo[219805]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wvailqqroyyzgyrxtpjeasmftrbtrita ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1768918466.853985-3755-210099896536533/AnsiballZ_edpm_nftables_from_files.py'
Jan 20 14:14:27 compute-0 sudo[219805]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:14:27 compute-0 ceph-mon[74360]: pgmap v679: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:14:27 compute-0 python3[219807]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 20 14:14:27 compute-0 sudo[219805]: pam_unix(sudo:session): session closed for user root
Jan 20 14:14:28 compute-0 sudo[219957]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fmjzhsyfcieidixgczhfkhhcmjmghwqh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918467.9207015-3779-142230715357887/AnsiballZ_stat.py'
Jan 20 14:14:28 compute-0 sudo[219957]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:14:28 compute-0 python3.9[219959]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:14:28 compute-0 sudo[219957]: pam_unix(sudo:session): session closed for user root
Jan 20 14:14:28 compute-0 sudo[220036]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pyybfcbpvybfxakxbtpirdnapovyyfwb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918467.9207015-3779-142230715357887/AnsiballZ_file.py'
Jan 20 14:14:28 compute-0 sudo[220036]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:14:28 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:14:28 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:14:28 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:14:28.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:14:28 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v680: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:14:28 compute-0 python3.9[220038]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:14:29 compute-0 sudo[220036]: pam_unix(sudo:session): session closed for user root
Jan 20 14:14:29 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:14:29 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:14:29 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:14:29.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:14:29 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:14:29 compute-0 sudo[220188]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yuqbdemufrchzjkcpspseqganqerphiz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918469.369348-3815-10597723991690/AnsiballZ_stat.py'
Jan 20 14:14:29 compute-0 sudo[220188]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:14:29 compute-0 python3.9[220190]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:14:29 compute-0 sudo[220188]: pam_unix(sudo:session): session closed for user root
Jan 20 14:14:30 compute-0 ceph-mon[74360]: pgmap v680: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:14:30 compute-0 sudo[220313]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dfcriqvoyepttyvcfpdvahvwumoajtfz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918469.369348-3815-10597723991690/AnsiballZ_copy.py'
Jan 20 14:14:30 compute-0 sudo[220313]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:14:30 compute-0 python3.9[220315]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768918469.369348-3815-10597723991690/.source.nft follow=False _original_basename=jump-chain.j2 checksum=3ce353c89bce3b135a0ed688d4e338b2efb15185 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:14:30 compute-0 sudo[220313]: pam_unix(sudo:session): session closed for user root
Jan 20 14:14:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:14:30.727 160071 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:14:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:14:30.727 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:14:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:14:30.727 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:14:30 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:14:30 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:14:30 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:14:30.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:14:30 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v681: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:14:30 compute-0 sudo[220348]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:14:30 compute-0 sudo[220348]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:14:30 compute-0 sudo[220348]: pam_unix(sudo:session): session closed for user root
Jan 20 14:14:31 compute-0 sudo[220396]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:14:31 compute-0 sudo[220396]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:14:31 compute-0 sudo[220396]: pam_unix(sudo:session): session closed for user root
Jan 20 14:14:31 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:14:31 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:14:31 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:14:31.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:14:31 compute-0 sudo[220516]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zvcqsqfdqnhvwuwaetznlqbnfsumbnwy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918470.9556792-3860-219737646942372/AnsiballZ_stat.py'
Jan 20 14:14:31 compute-0 sudo[220516]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:14:31 compute-0 python3.9[220518]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:14:31 compute-0 sudo[220516]: pam_unix(sudo:session): session closed for user root
Jan 20 14:14:31 compute-0 sudo[220594]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pzdghmxuapbzwgwtkcpfngqwgbabzgoh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918470.9556792-3860-219737646942372/AnsiballZ_file.py'
Jan 20 14:14:31 compute-0 sudo[220594]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:14:32 compute-0 ceph-mon[74360]: pgmap v681: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:14:32 compute-0 python3.9[220596]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:14:32 compute-0 sudo[220594]: pam_unix(sudo:session): session closed for user root
Jan 20 14:14:32 compute-0 sudo[220747]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ceyrqeoaduakkfslyvcqosikzyjvcdod ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918472.4353237-3896-4198934627209/AnsiballZ_stat.py'
Jan 20 14:14:32 compute-0 sudo[220747]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:14:32 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:14:32 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:14:32 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:14:32.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:14:32 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v682: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:14:32 compute-0 python3.9[220749]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:14:32 compute-0 sudo[220747]: pam_unix(sudo:session): session closed for user root
Jan 20 14:14:33 compute-0 sudo[220825]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tfnzsvjuxxdpxgiaplzeoojjargvcapr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918472.4353237-3896-4198934627209/AnsiballZ_file.py'
Jan 20 14:14:33 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:14:33 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:14:33 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:14:33.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:14:33 compute-0 sudo[220825]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:14:33 compute-0 python3.9[220827]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:14:33 compute-0 sudo[220825]: pam_unix(sudo:session): session closed for user root
Jan 20 14:14:34 compute-0 ceph-mon[74360]: pgmap v682: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:14:34 compute-0 sudo[220987]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gsbfnuczywlrlbagxrabtinqjmatmipe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918473.694568-3932-189370659494072/AnsiballZ_stat.py'
Jan 20 14:14:34 compute-0 sudo[220987]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:14:34 compute-0 podman[220951]: 2026-01-20 14:14:34.198925546 +0000 UTC m=+0.100503500 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Jan 20 14:14:34 compute-0 python3.9[220989]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:14:34 compute-0 sudo[220987]: pam_unix(sudo:session): session closed for user root
Jan 20 14:14:34 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:14:34 compute-0 sudo[221122]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cjyfchtulwdkzwryjfnqxuazpdencduj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918473.694568-3932-189370659494072/AnsiballZ_copy.py'
Jan 20 14:14:34 compute-0 sudo[221122]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:14:34 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:14:34 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:14:34 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:14:34.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:14:34 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v683: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:14:34 compute-0 python3.9[221124]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768918473.694568-3932-189370659494072/.source.nft follow=False _original_basename=ruleset.j2 checksum=ac3ce8ce2d33fa5fe0a79b0c811c97734ce43fa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:14:34 compute-0 sudo[221122]: pam_unix(sudo:session): session closed for user root
Jan 20 14:14:35 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:14:35 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:14:35 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:14:35.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:14:35 compute-0 sudo[221274]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vuitllsjxdqushavrbvmgubkeemkioyp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918475.3465703-3977-3841225769625/AnsiballZ_file.py'
Jan 20 14:14:35 compute-0 sudo[221274]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:14:35 compute-0 ceph-mon[74360]: pgmap v683: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:14:35 compute-0 python3.9[221276]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:14:35 compute-0 sudo[221274]: pam_unix(sudo:session): session closed for user root
Jan 20 14:14:36 compute-0 sudo[221426]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uijcrhltbocixflyljpvjriborhtshrl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918476.1266117-4001-108832191732327/AnsiballZ_command.py'
Jan 20 14:14:36 compute-0 sudo[221426]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:14:36 compute-0 python3.9[221428]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:14:36 compute-0 sudo[221426]: pam_unix(sudo:session): session closed for user root
Jan 20 14:14:36 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:14:36 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:14:36 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:14:36.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:14:36 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v684: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:14:37 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:14:37 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:14:37 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:14:37.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:14:37 compute-0 sudo[221582]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ycoyqedtnbecqbmsggzaoizacgrnywah ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918477.020941-4025-218502247547992/AnsiballZ_blockinfile.py'
Jan 20 14:14:37 compute-0 sudo[221582]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:14:37 compute-0 python3.9[221584]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:14:37 compute-0 sudo[221582]: pam_unix(sudo:session): session closed for user root
Jan 20 14:14:37 compute-0 ceph-mon[74360]: pgmap v684: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:14:38 compute-0 sudo[221734]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ykcqsmajgpethrywprjiwsyqfaoohuid ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918478.1785524-4052-25928341075687/AnsiballZ_command.py'
Jan 20 14:14:38 compute-0 sudo[221734]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:14:38 compute-0 python3.9[221736]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:14:38 compute-0 sudo[221734]: pam_unix(sudo:session): session closed for user root
Jan 20 14:14:38 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:14:38 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:14:38 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:14:38.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:14:38 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v685: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:14:39 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:14:39 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:14:39 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:14:39.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:14:39 compute-0 sudo[221888]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jocotdbpirtdxjdveouixbypsotdknfq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918479.139412-4076-248894052172260/AnsiballZ_stat.py'
Jan 20 14:14:39 compute-0 sudo[221888]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:14:39 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:14:39 compute-0 python3.9[221890]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 14:14:39 compute-0 sudo[221888]: pam_unix(sudo:session): session closed for user root
Jan 20 14:14:39 compute-0 ceph-mon[74360]: pgmap v685: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:14:40 compute-0 sudo[222060]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-spfkqzjmochkjvppvurtgigykadjcafy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918480.1157038-4100-119105841866019/AnsiballZ_command.py'
Jan 20 14:14:40 compute-0 sudo[222060]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:14:40 compute-0 podman[222016]: 2026-01-20 14:14:40.525455243 +0000 UTC m=+0.117218533 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 20 14:14:40 compute-0 python3.9[222064]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:14:40 compute-0 sudo[222060]: pam_unix(sudo:session): session closed for user root
Jan 20 14:14:40 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:14:40 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:14:40 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:14:40.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:14:40 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v686: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:14:41 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:14:41 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:14:41 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:14:41.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:14:41 compute-0 sudo[222223]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bwigbycuiflknqhvuquwxvqhiupzhuun ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918481.009207-4124-245368863821711/AnsiballZ_file.py'
Jan 20 14:14:41 compute-0 sudo[222223]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:14:41 compute-0 python3.9[222225]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:14:41 compute-0 sudo[222223]: pam_unix(sudo:session): session closed for user root
Jan 20 14:14:42 compute-0 ceph-mon[74360]: pgmap v686: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:14:42 compute-0 sudo[222375]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xjvzqrvpdckvzumjagajeqtwziplsjiv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918481.9142952-4148-26517924446221/AnsiballZ_stat.py'
Jan 20 14:14:42 compute-0 sudo[222375]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:14:42 compute-0 python3.9[222377]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:14:42 compute-0 sudo[222375]: pam_unix(sudo:session): session closed for user root
Jan 20 14:14:42 compute-0 sudo[222499]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-akdxjlhmdsjsftfhmqnahhsfvvvjknxt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918481.9142952-4148-26517924446221/AnsiballZ_copy.py'
Jan 20 14:14:42 compute-0 sudo[222499]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:14:42 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:14:42 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:14:42 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:14:42.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:14:42 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v687: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:14:42 compute-0 python3.9[222501]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1768918481.9142952-4148-26517924446221/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:14:42 compute-0 sudo[222499]: pam_unix(sudo:session): session closed for user root
Jan 20 14:14:43 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:14:43 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:14:43 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:14:43.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:14:43 compute-0 sudo[222651]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jixyepyiskiiervtahfprfimeuylnsxv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918483.4293106-4193-258839920145823/AnsiballZ_stat.py'
Jan 20 14:14:43 compute-0 sudo[222651]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:14:43 compute-0 python3.9[222653]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:14:43 compute-0 sudo[222651]: pam_unix(sudo:session): session closed for user root
Jan 20 14:14:44 compute-0 ceph-mon[74360]: pgmap v687: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:14:44 compute-0 sudo[222774]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pxxnpvndsnjwvtzlhdabevreetdjkpcc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918483.4293106-4193-258839920145823/AnsiballZ_copy.py'
Jan 20 14:14:44 compute-0 sudo[222774]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:14:44 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:14:44 compute-0 python3.9[222776]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1768918483.4293106-4193-258839920145823/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:14:44 compute-0 sudo[222774]: pam_unix(sudo:session): session closed for user root
Jan 20 14:14:44 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:14:44 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:14:44 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:14:44.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:14:44 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v688: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:14:45 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:14:45 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:14:45 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:14:45.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:14:45 compute-0 sudo[222927]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-clepzexuliqtbjwbemejgapmenyncrpw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918484.9011385-4238-165528375926805/AnsiballZ_stat.py'
Jan 20 14:14:45 compute-0 sudo[222927]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:14:45 compute-0 ceph-mon[74360]: pgmap v688: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:14:45 compute-0 python3.9[222929]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:14:45 compute-0 sudo[222927]: pam_unix(sudo:session): session closed for user root
Jan 20 14:14:45 compute-0 sudo[223050]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vtrpzbphflturfienccjwamepwqgfava ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918484.9011385-4238-165528375926805/AnsiballZ_copy.py'
Jan 20 14:14:45 compute-0 sudo[223050]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:14:46 compute-0 python3.9[223052]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1768918484.9011385-4238-165528375926805/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:14:46 compute-0 sudo[223050]: pam_unix(sudo:session): session closed for user root
Jan 20 14:14:46 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:14:46 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:14:46 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:14:46.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:14:46 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v689: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:14:46 compute-0 sudo[223203]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lqjcgzbzsnpijbthevexkwetknvhcyph ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918486.6434429-4283-101923158781629/AnsiballZ_systemd.py'
Jan 20 14:14:47 compute-0 sudo[223203]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:14:47 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:14:47 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:14:47 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:14:47.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:14:47 compute-0 python3.9[223205]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 14:14:47 compute-0 systemd[1]: Reloading.
Jan 20 14:14:47 compute-0 systemd-rc-local-generator[223229]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:14:47 compute-0 systemd-sysv-generator[223237]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:14:47 compute-0 systemd[1]: Reached target edpm_libvirt.target.
Jan 20 14:14:47 compute-0 sudo[223203]: pam_unix(sudo:session): session closed for user root
Jan 20 14:14:47 compute-0 ceph-mon[74360]: pgmap v689: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:14:48 compute-0 sudo[223395]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-djgvchkgsiktbuigmtlbpnhksliugiel ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918488.0556734-4307-205101628063391/AnsiballZ_systemd.py'
Jan 20 14:14:48 compute-0 sudo[223395]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:14:48 compute-0 python3.9[223397]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Jan 20 14:14:48 compute-0 systemd[1]: Reloading.
Jan 20 14:14:48 compute-0 systemd-sysv-generator[223426]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:14:48 compute-0 systemd-rc-local-generator[223420]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:14:48 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:14:48 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:14:48 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:14:48.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:14:48 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v690: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:14:49 compute-0 systemd[1]: Reloading.
Jan 20 14:14:49 compute-0 systemd-rc-local-generator[223462]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:14:49 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:14:49 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:14:49 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:14:49.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:14:49 compute-0 systemd-sysv-generator[223467]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:14:49 compute-0 sudo[223395]: pam_unix(sudo:session): session closed for user root
Jan 20 14:14:49 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:14:50 compute-0 sshd-session[160883]: Connection closed by 192.168.122.30 port 58580
Jan 20 14:14:50 compute-0 sshd-session[160878]: pam_unix(sshd:session): session closed for user zuul
Jan 20 14:14:50 compute-0 systemd[1]: session-50.scope: Deactivated successfully.
Jan 20 14:14:50 compute-0 systemd[1]: session-50.scope: Consumed 3min 33.154s CPU time.
Jan 20 14:14:50 compute-0 systemd-logind[796]: Session 50 logged out. Waiting for processes to exit.
Jan 20 14:14:50 compute-0 systemd-logind[796]: Removed session 50.
Jan 20 14:14:50 compute-0 ceph-mon[74360]: pgmap v690: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:14:50 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:14:50 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:14:50 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:14:50.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:14:50 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v691: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:14:51 compute-0 sudo[223497]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:14:51 compute-0 sudo[223497]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:14:51 compute-0 sudo[223497]: pam_unix(sudo:session): session closed for user root
Jan 20 14:14:51 compute-0 sudo[223522]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:14:51 compute-0 sudo[223522]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:14:51 compute-0 sudo[223522]: pam_unix(sudo:session): session closed for user root
Jan 20 14:14:51 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:14:51 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:14:51 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:14:51.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:14:51 compute-0 ceph-mon[74360]: pgmap v691: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:14:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Optimize plan auto_2026-01-20_14:14:52
Jan 20 14:14:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 14:14:52 compute-0 ceph-mgr[74653]: [balancer INFO root] do_upmap
Jan 20 14:14:52 compute-0 ceph-mgr[74653]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.log', 'backups', 'cephfs.cephfs.meta', 'images', 'vms', 'default.rgw.meta', 'volumes', 'cephfs.cephfs.data', '.mgr', 'default.rgw.control']
Jan 20 14:14:52 compute-0 ceph-mgr[74653]: [balancer INFO root] prepared 0/10 changes
Jan 20 14:14:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:14:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:14:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:14:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:14:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:14:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:14:52 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:14:52 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:14:52 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:14:52.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:14:52 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v692: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:14:53 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:14:53 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:14:53 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:14:53.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:14:54 compute-0 ceph-mon[74360]: pgmap v692: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:14:54 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:14:54 compute-0 sudo[223549]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:14:54 compute-0 sudo[223549]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:14:54 compute-0 sudo[223549]: pam_unix(sudo:session): session closed for user root
Jan 20 14:14:54 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:14:54 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:14:54 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:14:54.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:14:54 compute-0 sudo[223574]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:14:54 compute-0 sudo[223574]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:14:54 compute-0 sudo[223574]: pam_unix(sudo:session): session closed for user root
Jan 20 14:14:54 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v693: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:14:54 compute-0 sudo[223599]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:14:54 compute-0 sudo[223599]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:14:54 compute-0 sudo[223599]: pam_unix(sudo:session): session closed for user root
Jan 20 14:14:55 compute-0 sudo[223624]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 14:14:55 compute-0 sudo[223624]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:14:55 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:14:55 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:14:55 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:14:55.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:14:55 compute-0 sshd-session[223667]: Accepted publickey for zuul from 192.168.122.30 port 55334 ssh2: ECDSA SHA256:Yw0kyD5N4lqNgr1J3b5cYIIxKFrTRY8zW6kk+n6imz4
Jan 20 14:14:55 compute-0 systemd-logind[796]: New session 51 of user zuul.
Jan 20 14:14:55 compute-0 systemd[1]: Started Session 51 of User zuul.
Jan 20 14:14:55 compute-0 sshd-session[223667]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 20 14:14:55 compute-0 sudo[223624]: pam_unix(sudo:session): session closed for user root
Jan 20 14:14:55 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 14:14:55 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:14:55 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 20 14:14:55 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:14:55 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 20 14:14:55 compute-0 ceph-mon[74360]: pgmap v693: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:14:55 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:14:55 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 16474e44-cc97-41a9-bedd-aac15684b884 does not exist
Jan 20 14:14:55 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev dd6c83ad-42e2-425c-aae6-33abf37a291c does not exist
Jan 20 14:14:55 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 0a97ebc0-60d0-4e6d-ae65-c91896de4fca does not exist
Jan 20 14:14:55 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 20 14:14:55 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:14:55 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 20 14:14:55 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:14:55 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 14:14:55 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:14:55 compute-0 sudo[223737]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:14:55 compute-0 sudo[223737]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:14:55 compute-0 sudo[223737]: pam_unix(sudo:session): session closed for user root
Jan 20 14:14:55 compute-0 sudo[223762]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:14:55 compute-0 sudo[223762]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:14:55 compute-0 sudo[223762]: pam_unix(sudo:session): session closed for user root
Jan 20 14:14:56 compute-0 sudo[223790]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:14:56 compute-0 sudo[223790]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:14:56 compute-0 sudo[223790]: pam_unix(sudo:session): session closed for user root
Jan 20 14:14:56 compute-0 sudo[223840]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 14:14:56 compute-0 sudo[223840]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:14:56 compute-0 podman[223974]: 2026-01-20 14:14:56.48397993 +0000 UTC m=+0.026341414 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:14:56 compute-0 python3.9[223941]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 14:14:56 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:14:56 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:14:56 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:14:56.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:14:56 compute-0 podman[223974]: 2026-01-20 14:14:56.890204882 +0000 UTC m=+0.432566316 container create d4718293130eb12aa434f71fcccc93e6f0333c4b5e209a6b1b25e62daf2a2865 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_hypatia, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 14:14:56 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:14:56 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:14:56 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:14:56 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:14:56 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:14:56 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:14:56 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v694: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:14:56 compute-0 systemd[1]: Started libpod-conmon-d4718293130eb12aa434f71fcccc93e6f0333c4b5e209a6b1b25e62daf2a2865.scope.
Jan 20 14:14:56 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:14:57 compute-0 podman[223974]: 2026-01-20 14:14:57.000292884 +0000 UTC m=+0.542654338 container init d4718293130eb12aa434f71fcccc93e6f0333c4b5e209a6b1b25e62daf2a2865 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_hypatia, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:14:57 compute-0 podman[223974]: 2026-01-20 14:14:57.009838999 +0000 UTC m=+0.552200463 container start d4718293130eb12aa434f71fcccc93e6f0333c4b5e209a6b1b25e62daf2a2865 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_hypatia, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:14:57 compute-0 podman[223974]: 2026-01-20 14:14:57.013958309 +0000 UTC m=+0.556319773 container attach d4718293130eb12aa434f71fcccc93e6f0333c4b5e209a6b1b25e62daf2a2865 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_hypatia, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 20 14:14:57 compute-0 hardcore_hypatia[223996]: 167 167
Jan 20 14:14:57 compute-0 systemd[1]: libpod-d4718293130eb12aa434f71fcccc93e6f0333c4b5e209a6b1b25e62daf2a2865.scope: Deactivated successfully.
Jan 20 14:14:57 compute-0 podman[223974]: 2026-01-20 14:14:57.018316726 +0000 UTC m=+0.560678190 container died d4718293130eb12aa434f71fcccc93e6f0333c4b5e209a6b1b25e62daf2a2865 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_hypatia, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 20 14:14:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-e2c35deda5cebeb57f3afd00e1058cd989769eb34f290d061a246faea4e3561f-merged.mount: Deactivated successfully.
Jan 20 14:14:57 compute-0 podman[223974]: 2026-01-20 14:14:57.248136175 +0000 UTC m=+0.790497609 container remove d4718293130eb12aa434f71fcccc93e6f0333c4b5e209a6b1b25e62daf2a2865 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_hypatia, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 20 14:14:57 compute-0 systemd[1]: libpod-conmon-d4718293130eb12aa434f71fcccc93e6f0333c4b5e209a6b1b25e62daf2a2865.scope: Deactivated successfully.
Jan 20 14:14:57 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:14:57 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:14:57 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:14:57.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:14:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 14:14:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:14:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 14:14:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:14:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:14:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:14:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:14:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:14:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:14:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:14:57 compute-0 podman[224066]: 2026-01-20 14:14:57.476146267 +0000 UTC m=+0.070474034 container create b9bdaef7f97122f8bcd80ef89fa8171729dc0d2c1ce1b425a3248fb91cc6816c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_fermat, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 20 14:14:57 compute-0 systemd[1]: Started libpod-conmon-b9bdaef7f97122f8bcd80ef89fa8171729dc0d2c1ce1b425a3248fb91cc6816c.scope.
Jan 20 14:14:57 compute-0 podman[224066]: 2026-01-20 14:14:57.437277778 +0000 UTC m=+0.031605635 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:14:57 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:14:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b7196c9adb9ff4b780e43a1b0c8a2d78cf616a6b498e17872cbf43f2a6ca320/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:14:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b7196c9adb9ff4b780e43a1b0c8a2d78cf616a6b498e17872cbf43f2a6ca320/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:14:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b7196c9adb9ff4b780e43a1b0c8a2d78cf616a6b498e17872cbf43f2a6ca320/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:14:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b7196c9adb9ff4b780e43a1b0c8a2d78cf616a6b498e17872cbf43f2a6ca320/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:14:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b7196c9adb9ff4b780e43a1b0c8a2d78cf616a6b498e17872cbf43f2a6ca320/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 14:14:57 compute-0 podman[224066]: 2026-01-20 14:14:57.747595399 +0000 UTC m=+0.341923196 container init b9bdaef7f97122f8bcd80ef89fa8171729dc0d2c1ce1b425a3248fb91cc6816c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_fermat, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:14:57 compute-0 podman[224066]: 2026-01-20 14:14:57.753997069 +0000 UTC m=+0.348324836 container start b9bdaef7f97122f8bcd80ef89fa8171729dc0d2c1ce1b425a3248fb91cc6816c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_fermat, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True)
Jan 20 14:14:57 compute-0 podman[224066]: 2026-01-20 14:14:57.758547021 +0000 UTC m=+0.352874818 container attach b9bdaef7f97122f8bcd80ef89fa8171729dc0d2c1ce1b425a3248fb91cc6816c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_fermat, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:14:57 compute-0 ceph-mon[74360]: pgmap v694: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:14:58 compute-0 python3.9[224190]: ansible-ansible.builtin.service_facts Invoked
Jan 20 14:14:58 compute-0 network[224207]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 20 14:14:58 compute-0 network[224208]: 'network-scripts' will be removed from distribution in near future.
Jan 20 14:14:58 compute-0 network[224209]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 20 14:14:58 compute-0 angry_fermat[224112]: --> passed data devices: 0 physical, 1 LVM
Jan 20 14:14:58 compute-0 angry_fermat[224112]: --> relative data size: 1.0
Jan 20 14:14:58 compute-0 angry_fermat[224112]: --> All data devices are unavailable
Jan 20 14:14:58 compute-0 podman[224066]: 2026-01-20 14:14:58.607556593 +0000 UTC m=+1.201884450 container died b9bdaef7f97122f8bcd80ef89fa8171729dc0d2c1ce1b425a3248fb91cc6816c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_fermat, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 20 14:14:58 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:14:58 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:14:58 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:14:58.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:14:58 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v695: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:14:59 compute-0 systemd[1]: libpod-b9bdaef7f97122f8bcd80ef89fa8171729dc0d2c1ce1b425a3248fb91cc6816c.scope: Deactivated successfully.
Jan 20 14:14:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-7b7196c9adb9ff4b780e43a1b0c8a2d78cf616a6b498e17872cbf43f2a6ca320-merged.mount: Deactivated successfully.
Jan 20 14:14:59 compute-0 podman[224066]: 2026-01-20 14:14:59.266877117 +0000 UTC m=+1.861204924 container remove b9bdaef7f97122f8bcd80ef89fa8171729dc0d2c1ce1b425a3248fb91cc6816c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_fermat, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:14:59 compute-0 systemd[1]: libpod-conmon-b9bdaef7f97122f8bcd80ef89fa8171729dc0d2c1ce1b425a3248fb91cc6816c.scope: Deactivated successfully.
Jan 20 14:14:59 compute-0 sudo[223840]: pam_unix(sudo:session): session closed for user root
Jan 20 14:14:59 compute-0 ceph-mon[74360]: pgmap v695: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:14:59 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:14:59 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:14:59 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:14:59.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:14:59 compute-0 sudo[224252]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:14:59 compute-0 sudo[224252]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:14:59 compute-0 sudo[224252]: pam_unix(sudo:session): session closed for user root
Jan 20 14:14:59 compute-0 sudo[224281]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:14:59 compute-0 sudo[224281]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:14:59 compute-0 sudo[224281]: pam_unix(sudo:session): session closed for user root
Jan 20 14:14:59 compute-0 sudo[224309]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:14:59 compute-0 sudo[224309]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:14:59 compute-0 sudo[224309]: pam_unix(sudo:session): session closed for user root
Jan 20 14:14:59 compute-0 sudo[224338]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- lvm list --format json
Jan 20 14:14:59 compute-0 sudo[224338]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:14:59 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:14:59 compute-0 podman[224416]: 2026-01-20 14:14:59.839844904 +0000 UTC m=+0.052176395 container create 0674254414e7aeeefa375d33a5f04959f6e753d7ddfcbb5bca7e2bf5f03f70a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_franklin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 20 14:14:59 compute-0 podman[224416]: 2026-01-20 14:14:59.810330825 +0000 UTC m=+0.022662336 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:14:59 compute-0 systemd[1]: Started libpod-conmon-0674254414e7aeeefa375d33a5f04959f6e753d7ddfcbb5bca7e2bf5f03f70a5.scope.
Jan 20 14:15:00 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:15:00 compute-0 podman[224416]: 2026-01-20 14:15:00.634231836 +0000 UTC m=+0.846563417 container init 0674254414e7aeeefa375d33a5f04959f6e753d7ddfcbb5bca7e2bf5f03f70a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_franklin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 14:15:00 compute-0 podman[224416]: 2026-01-20 14:15:00.647511141 +0000 UTC m=+0.859842672 container start 0674254414e7aeeefa375d33a5f04959f6e753d7ddfcbb5bca7e2bf5f03f70a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_franklin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:15:00 compute-0 determined_franklin[224445]: 167 167
Jan 20 14:15:00 compute-0 systemd[1]: libpod-0674254414e7aeeefa375d33a5f04959f6e753d7ddfcbb5bca7e2bf5f03f70a5.scope: Deactivated successfully.
Jan 20 14:15:00 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:15:00 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:15:00 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:15:00.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:15:00 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v696: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:15:00 compute-0 podman[224416]: 2026-01-20 14:15:00.981492745 +0000 UTC m=+1.193824286 container attach 0674254414e7aeeefa375d33a5f04959f6e753d7ddfcbb5bca7e2bf5f03f70a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_franklin, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 20 14:15:00 compute-0 podman[224416]: 2026-01-20 14:15:00.982916073 +0000 UTC m=+1.195247624 container died 0674254414e7aeeefa375d33a5f04959f6e753d7ddfcbb5bca7e2bf5f03f70a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_franklin, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:15:01 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:15:01 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:15:01 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:15:01.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:15:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-605dc2bd2454e88085a8a4d78f3df80db88bafca2e90b034d09339c31c753b4a-merged.mount: Deactivated successfully.
Jan 20 14:15:01 compute-0 podman[224416]: 2026-01-20 14:15:01.655853349 +0000 UTC m=+1.868184880 container remove 0674254414e7aeeefa375d33a5f04959f6e753d7ddfcbb5bca7e2bf5f03f70a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_franklin, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:15:01 compute-0 systemd[1]: libpod-conmon-0674254414e7aeeefa375d33a5f04959f6e753d7ddfcbb5bca7e2bf5f03f70a5.scope: Deactivated successfully.
Jan 20 14:15:01 compute-0 podman[224503]: 2026-01-20 14:15:01.84301601 +0000 UTC m=+0.058737591 container create 549472a7f5a66ef3eefa653f530b5773e4d02e617c6752b676b3dd3ce4dbae7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_merkle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 14:15:01 compute-0 systemd[1]: Started libpod-conmon-549472a7f5a66ef3eefa653f530b5773e4d02e617c6752b676b3dd3ce4dbae7f.scope.
Jan 20 14:15:01 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:15:01 compute-0 podman[224503]: 2026-01-20 14:15:01.815667689 +0000 UTC m=+0.031389290 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:15:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c6c4f22dc2803bceefa1305fb5de97745baf74fa71249bec757f73cc1250f93/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:15:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c6c4f22dc2803bceefa1305fb5de97745baf74fa71249bec757f73cc1250f93/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:15:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c6c4f22dc2803bceefa1305fb5de97745baf74fa71249bec757f73cc1250f93/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:15:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c6c4f22dc2803bceefa1305fb5de97745baf74fa71249bec757f73cc1250f93/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:15:01 compute-0 podman[224503]: 2026-01-20 14:15:01.923552301 +0000 UTC m=+0.139273882 container init 549472a7f5a66ef3eefa653f530b5773e4d02e617c6752b676b3dd3ce4dbae7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_merkle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 20 14:15:01 compute-0 podman[224503]: 2026-01-20 14:15:01.929240843 +0000 UTC m=+0.144962424 container start 549472a7f5a66ef3eefa653f530b5773e4d02e617c6752b676b3dd3ce4dbae7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_merkle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 20 14:15:01 compute-0 podman[224503]: 2026-01-20 14:15:01.932309125 +0000 UTC m=+0.148030706 container attach 549472a7f5a66ef3eefa653f530b5773e4d02e617c6752b676b3dd3ce4dbae7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_merkle, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 20 14:15:02 compute-0 ceph-mon[74360]: pgmap v696: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:15:02 compute-0 fervent_merkle[224522]: {
Jan 20 14:15:02 compute-0 fervent_merkle[224522]:     "0": [
Jan 20 14:15:02 compute-0 fervent_merkle[224522]:         {
Jan 20 14:15:02 compute-0 fervent_merkle[224522]:             "devices": [
Jan 20 14:15:02 compute-0 fervent_merkle[224522]:                 "/dev/loop3"
Jan 20 14:15:02 compute-0 fervent_merkle[224522]:             ],
Jan 20 14:15:02 compute-0 fervent_merkle[224522]:             "lv_name": "ceph_lv0",
Jan 20 14:15:02 compute-0 fervent_merkle[224522]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:15:02 compute-0 fervent_merkle[224522]:             "lv_size": "7511998464",
Jan 20 14:15:02 compute-0 fervent_merkle[224522]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e399cf45-e6b6-5393-99f1-75c601d3f188,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afc3740a-ccee-46f8-b343-22648cc89376,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 20 14:15:02 compute-0 fervent_merkle[224522]:             "lv_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 14:15:02 compute-0 fervent_merkle[224522]:             "name": "ceph_lv0",
Jan 20 14:15:02 compute-0 fervent_merkle[224522]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:15:02 compute-0 fervent_merkle[224522]:             "tags": {
Jan 20 14:15:02 compute-0 fervent_merkle[224522]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:15:02 compute-0 fervent_merkle[224522]:                 "ceph.block_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 14:15:02 compute-0 fervent_merkle[224522]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 14:15:02 compute-0 fervent_merkle[224522]:                 "ceph.cluster_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 14:15:02 compute-0 fervent_merkle[224522]:                 "ceph.cluster_name": "ceph",
Jan 20 14:15:02 compute-0 fervent_merkle[224522]:                 "ceph.crush_device_class": "",
Jan 20 14:15:02 compute-0 fervent_merkle[224522]:                 "ceph.encrypted": "0",
Jan 20 14:15:02 compute-0 fervent_merkle[224522]:                 "ceph.osd_fsid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 14:15:02 compute-0 fervent_merkle[224522]:                 "ceph.osd_id": "0",
Jan 20 14:15:02 compute-0 fervent_merkle[224522]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 14:15:02 compute-0 fervent_merkle[224522]:                 "ceph.type": "block",
Jan 20 14:15:02 compute-0 fervent_merkle[224522]:                 "ceph.vdo": "0"
Jan 20 14:15:02 compute-0 fervent_merkle[224522]:             },
Jan 20 14:15:02 compute-0 fervent_merkle[224522]:             "type": "block",
Jan 20 14:15:02 compute-0 fervent_merkle[224522]:             "vg_name": "ceph_vg0"
Jan 20 14:15:02 compute-0 fervent_merkle[224522]:         }
Jan 20 14:15:02 compute-0 fervent_merkle[224522]:     ]
Jan 20 14:15:02 compute-0 fervent_merkle[224522]: }
Jan 20 14:15:02 compute-0 systemd[1]: libpod-549472a7f5a66ef3eefa653f530b5773e4d02e617c6752b676b3dd3ce4dbae7f.scope: Deactivated successfully.
Jan 20 14:15:02 compute-0 podman[224503]: 2026-01-20 14:15:02.708343707 +0000 UTC m=+0.924065298 container died 549472a7f5a66ef3eefa653f530b5773e4d02e617c6752b676b3dd3ce4dbae7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_merkle, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True)
Jan 20 14:15:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-7c6c4f22dc2803bceefa1305fb5de97745baf74fa71249bec757f73cc1250f93-merged.mount: Deactivated successfully.
Jan 20 14:15:02 compute-0 podman[224503]: 2026-01-20 14:15:02.775640515 +0000 UTC m=+0.991362116 container remove 549472a7f5a66ef3eefa653f530b5773e4d02e617c6752b676b3dd3ce4dbae7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_merkle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 20 14:15:02 compute-0 systemd[1]: libpod-conmon-549472a7f5a66ef3eefa653f530b5773e4d02e617c6752b676b3dd3ce4dbae7f.scope: Deactivated successfully.
Jan 20 14:15:02 compute-0 sudo[224338]: pam_unix(sudo:session): session closed for user root
Jan 20 14:15:02 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:15:02 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:15:02 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:15:02.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:15:02 compute-0 sudo[224542]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:15:02 compute-0 sudo[224542]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:15:02 compute-0 sudo[224542]: pam_unix(sudo:session): session closed for user root
Jan 20 14:15:02 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v697: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:15:02 compute-0 sudo[224572]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:15:02 compute-0 sudo[224572]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:15:02 compute-0 sudo[224572]: pam_unix(sudo:session): session closed for user root
Jan 20 14:15:03 compute-0 sudo[224600]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:15:03 compute-0 sudo[224600]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:15:03 compute-0 sudo[224600]: pam_unix(sudo:session): session closed for user root
Jan 20 14:15:03 compute-0 sudo[224629]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- raw list --format json
Jan 20 14:15:03 compute-0 sudo[224629]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:15:03 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:15:03 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:15:03 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:15:03.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:15:03 compute-0 podman[224708]: 2026-01-20 14:15:03.430394607 +0000 UTC m=+0.024962488 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:15:03 compute-0 podman[224708]: 2026-01-20 14:15:03.57796216 +0000 UTC m=+0.172530061 container create 70974d460eb192a7ac8b9102d2d6c50d2d43e9fd1d65118a42cc305fbfccca57 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_kirch, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 20 14:15:03 compute-0 systemd[1]: Started libpod-conmon-70974d460eb192a7ac8b9102d2d6c50d2d43e9fd1d65118a42cc305fbfccca57.scope.
Jan 20 14:15:03 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:15:03 compute-0 podman[224708]: 2026-01-20 14:15:03.724993557 +0000 UTC m=+0.319561518 container init 70974d460eb192a7ac8b9102d2d6c50d2d43e9fd1d65118a42cc305fbfccca57 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_kirch, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS)
Jan 20 14:15:03 compute-0 podman[224708]: 2026-01-20 14:15:03.737368678 +0000 UTC m=+0.331936569 container start 70974d460eb192a7ac8b9102d2d6c50d2d43e9fd1d65118a42cc305fbfccca57 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_kirch, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0)
Jan 20 14:15:03 compute-0 strange_kirch[224748]: 167 167
Jan 20 14:15:03 compute-0 systemd[1]: libpod-70974d460eb192a7ac8b9102d2d6c50d2d43e9fd1d65118a42cc305fbfccca57.scope: Deactivated successfully.
Jan 20 14:15:03 compute-0 podman[224708]: 2026-01-20 14:15:03.791134284 +0000 UTC m=+0.385702185 container attach 70974d460eb192a7ac8b9102d2d6c50d2d43e9fd1d65118a42cc305fbfccca57 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_kirch, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 20 14:15:03 compute-0 podman[224708]: 2026-01-20 14:15:03.791676819 +0000 UTC m=+0.386244710 container died 70974d460eb192a7ac8b9102d2d6c50d2d43e9fd1d65118a42cc305fbfccca57 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_kirch, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:15:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-fa4a223033db609c8276ae828a37264177b1f2cb9cd3bf3ffd661377840bf8da-merged.mount: Deactivated successfully.
Jan 20 14:15:03 compute-0 podman[224708]: 2026-01-20 14:15:03.871549023 +0000 UTC m=+0.466116924 container remove 70974d460eb192a7ac8b9102d2d6c50d2d43e9fd1d65118a42cc305fbfccca57 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_kirch, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:15:03 compute-0 systemd[1]: libpod-conmon-70974d460eb192a7ac8b9102d2d6c50d2d43e9fd1d65118a42cc305fbfccca57.scope: Deactivated successfully.
Jan 20 14:15:04 compute-0 ceph-mon[74360]: pgmap v697: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:15:04 compute-0 podman[224774]: 2026-01-20 14:15:04.088158669 +0000 UTC m=+0.054918318 container create 85717bf5b5680eacd82ccce00cb1096df85d59ab719809f6024f4f2f1c3c7730 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_robinson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 20 14:15:04 compute-0 systemd[1]: Started libpod-conmon-85717bf5b5680eacd82ccce00cb1096df85d59ab719809f6024f4f2f1c3c7730.scope.
Jan 20 14:15:04 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:15:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f96ab0cd123fb9f5d2612bc840fede95ace24edc66c2400b2e3ed0ab3d33ed67/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:15:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f96ab0cd123fb9f5d2612bc840fede95ace24edc66c2400b2e3ed0ab3d33ed67/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:15:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f96ab0cd123fb9f5d2612bc840fede95ace24edc66c2400b2e3ed0ab3d33ed67/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:15:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f96ab0cd123fb9f5d2612bc840fede95ace24edc66c2400b2e3ed0ab3d33ed67/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:15:04 compute-0 podman[224774]: 2026-01-20 14:15:04.161291553 +0000 UTC m=+0.128051273 container init 85717bf5b5680eacd82ccce00cb1096df85d59ab719809f6024f4f2f1c3c7730 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_robinson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True)
Jan 20 14:15:04 compute-0 podman[224774]: 2026-01-20 14:15:04.068511084 +0000 UTC m=+0.035270733 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:15:04 compute-0 podman[224774]: 2026-01-20 14:15:04.170791156 +0000 UTC m=+0.137550825 container start 85717bf5b5680eacd82ccce00cb1096df85d59ab719809f6024f4f2f1c3c7730 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_robinson, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:15:04 compute-0 podman[224774]: 2026-01-20 14:15:04.179352095 +0000 UTC m=+0.146111754 container attach 85717bf5b5680eacd82ccce00cb1096df85d59ab719809f6024f4f2f1c3c7730 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_robinson, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 20 14:15:04 compute-0 podman[224870]: 2026-01-20 14:15:04.506215058 +0000 UTC m=+0.087865739 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent)
Jan 20 14:15:04 compute-0 sudo[224939]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vhaghrbdnunfsolmwcbxwynfnwvryjpx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918504.2271786-101-260769091266004/AnsiballZ_setup.py'
Jan 20 14:15:04 compute-0 sudo[224939]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:15:04 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:15:04 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:15:04 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:15:04 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:15:04.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:15:04 compute-0 python3.9[224941]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 20 14:15:04 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v698: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:15:04 compute-0 sharp_robinson[224790]: {
Jan 20 14:15:04 compute-0 sharp_robinson[224790]:     "afc3740a-ccee-46f8-b343-22648cc89376": {
Jan 20 14:15:04 compute-0 sharp_robinson[224790]:         "ceph_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 14:15:04 compute-0 sharp_robinson[224790]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 20 14:15:04 compute-0 sharp_robinson[224790]:         "osd_id": 0,
Jan 20 14:15:04 compute-0 sharp_robinson[224790]:         "osd_uuid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 14:15:04 compute-0 sharp_robinson[224790]:         "type": "bluestore"
Jan 20 14:15:04 compute-0 sharp_robinson[224790]:     }
Jan 20 14:15:04 compute-0 sharp_robinson[224790]: }
Jan 20 14:15:04 compute-0 systemd[1]: libpod-85717bf5b5680eacd82ccce00cb1096df85d59ab719809f6024f4f2f1c3c7730.scope: Deactivated successfully.
Jan 20 14:15:04 compute-0 podman[224774]: 2026-01-20 14:15:04.997587005 +0000 UTC m=+0.964346674 container died 85717bf5b5680eacd82ccce00cb1096df85d59ab719809f6024f4f2f1c3c7730 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_robinson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0)
Jan 20 14:15:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-f96ab0cd123fb9f5d2612bc840fede95ace24edc66c2400b2e3ed0ab3d33ed67-merged.mount: Deactivated successfully.
Jan 20 14:15:05 compute-0 podman[224774]: 2026-01-20 14:15:05.061282086 +0000 UTC m=+1.028041715 container remove 85717bf5b5680eacd82ccce00cb1096df85d59ab719809f6024f4f2f1c3c7730 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_robinson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True)
Jan 20 14:15:05 compute-0 systemd[1]: libpod-conmon-85717bf5b5680eacd82ccce00cb1096df85d59ab719809f6024f4f2f1c3c7730.scope: Deactivated successfully.
Jan 20 14:15:05 compute-0 sudo[224629]: pam_unix(sudo:session): session closed for user root
Jan 20 14:15:05 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 14:15:05 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:15:05 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 14:15:05 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:15:05 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev f8c0a2fb-51dd-46c2-8c1f-7c11c0d14d58 does not exist
Jan 20 14:15:05 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 2a61022e-dd94-4e02-94ff-b37916f5e85f does not exist
Jan 20 14:15:05 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 1905b78d-d724-4ab6-873d-4d9870245f13 does not exist
Jan 20 14:15:05 compute-0 sudo[224939]: pam_unix(sudo:session): session closed for user root
Jan 20 14:15:05 compute-0 sudo[224980]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:15:05 compute-0 sudo[224980]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:15:05 compute-0 sudo[224980]: pam_unix(sudo:session): session closed for user root
Jan 20 14:15:05 compute-0 sudo[225006]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 14:15:05 compute-0 sudo[225006]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:15:05 compute-0 sudo[225006]: pam_unix(sudo:session): session closed for user root
Jan 20 14:15:05 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:15:05 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:15:05 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:15:05.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:15:05 compute-0 sudo[225104]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xsdwzxdaxaaetygwwzmsblhnypgrmqsq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918504.2271786-101-260769091266004/AnsiballZ_dnf.py'
Jan 20 14:15:05 compute-0 sudo[225104]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:15:05 compute-0 python3.9[225106]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 20 14:15:06 compute-0 ceph-mon[74360]: pgmap v698: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:15:06 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:15:06 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:15:06 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:15:06 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:15:06 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:15:06.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:15:06 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v699: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:15:07 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:15:07 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:15:07 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:15:07.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:15:07 compute-0 ceph-mon[74360]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 20 14:15:07 compute-0 ceph-mon[74360]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Cumulative writes: 3818 writes, 16K keys, 3817 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 3818 writes, 3817 syncs, 1.00 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1414 writes, 5852 keys, 1413 commit groups, 1.0 writes per commit group, ingest: 9.95 MB, 0.02 MB/s
                                           Interval WAL: 1414 writes, 1413 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    114.1      0.16              0.07         7    0.023       0      0       0.0       0.0
                                             L6      1/0    7.94 MB   0.0      0.1     0.0      0.0       0.0      0.0       0.0   2.7    147.0    121.1      0.40              0.16         6    0.067     26K   3269       0.0       0.0
                                            Sum      1/0    7.94 MB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   3.7    105.1    119.1      0.57              0.23        13    0.043     26K   3269       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.6    113.3    115.7      0.28              0.09         6    0.047     14K   1967       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.0      0.0       0.0   0.0    147.0    121.1      0.40              0.16         6    0.067     26K   3269       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    116.2      0.16              0.07         6    0.026       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     14.7      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.018, interval 0.007
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.07 GB write, 0.06 MB/s write, 0.06 GB read, 0.05 MB/s read, 0.6 seconds
                                           Interval compaction: 0.03 GB write, 0.05 MB/s write, 0.03 GB read, 0.05 MB/s read, 0.3 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5576af6451f0#2 capacity: 308.00 MB usage: 2.08 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 0.000128 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(106,1.84 MB,0.598689%) FilterBlock(14,83.05 KB,0.0263313%) IndexBlock(14,161.03 KB,0.0510575%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 20 14:15:08 compute-0 ceph-mon[74360]: pgmap v699: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:15:08 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:15:08 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:15:08 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:15:08.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:15:08 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v700: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:15:09 compute-0 ceph-mon[74360]: pgmap v700: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:15:09 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:15:09 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:15:09 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:15:09.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:15:09 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:15:10 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:15:10 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:15:10 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:15:10.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:15:10 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v701: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:15:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 14:15:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:15:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 20 14:15:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:15:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:15:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:15:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:15:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:15:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:15:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:15:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:15:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:15:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 20 14:15:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:15:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:15:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:15:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 20 14:15:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:15:10 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 20 14:15:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:15:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:15:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:15:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 20 14:15:11 compute-0 sudo[225104]: pam_unix(sudo:session): session closed for user root
Jan 20 14:15:11 compute-0 sudo[225160]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:15:11 compute-0 sudo[225160]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:15:11 compute-0 sudo[225160]: pam_unix(sudo:session): session closed for user root
Jan 20 14:15:11 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:15:11 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:15:11 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:15:11.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:15:11 compute-0 sshd-session[225111]: Connection closed by authenticating user root 157.245.78.139 port 55160 [preauth]
Jan 20 14:15:11 compute-0 sudo[225204]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:15:11 compute-0 sudo[225204]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:15:11 compute-0 sudo[225204]: pam_unix(sudo:session): session closed for user root
Jan 20 14:15:11 compute-0 podman[225186]: 2026-01-20 14:15:11.470099381 +0000 UTC m=+0.112223070 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, 
org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 20 14:15:11 compute-0 sudo[225339]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nrqphrmpgrmlboisnaapjbebwmrjscgn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918511.2761104-137-72130027624663/AnsiballZ_stat.py'
Jan 20 14:15:11 compute-0 sudo[225339]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:15:12 compute-0 python3.9[225341]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 14:15:12 compute-0 sudo[225339]: pam_unix(sudo:session): session closed for user root
Jan 20 14:15:12 compute-0 ceph-mon[74360]: pgmap v701: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:15:12 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:15:12 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:15:12 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:15:12.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:15:12 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v702: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:15:12 compute-0 sudo[225492]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qdqxgfhshmrewejcnvixaokaslqysiht ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918512.4768057-167-239511061121179/AnsiballZ_command.py'
Jan 20 14:15:12 compute-0 sudo[225492]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:15:13 compute-0 python3.9[225494]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:15:13 compute-0 sudo[225492]: pam_unix(sudo:session): session closed for user root
Jan 20 14:15:13 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:15:13 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:15:13 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:15:13.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:15:14 compute-0 sudo[225645]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kgolupfazyhaticjbnhtgcbqpdajojmd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918513.7773294-197-73351954523352/AnsiballZ_stat.py'
Jan 20 14:15:14 compute-0 sudo[225645]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:15:14 compute-0 ceph-mon[74360]: pgmap v702: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:15:14 compute-0 python3.9[225647]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 14:15:14 compute-0 sudo[225645]: pam_unix(sudo:session): session closed for user root
Jan 20 14:15:14 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:15:14 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:15:14 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:15:14 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:15:14.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:15:14 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v703: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:15:15 compute-0 ceph-mon[74360]: pgmap v703: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:15:15 compute-0 sudo[225798]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-odxwtilmkbxrcpnutcsfiwjpzmpstqnt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918514.9331317-221-140073279231821/AnsiballZ_command.py'
Jan 20 14:15:15 compute-0 sudo[225798]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:15:15 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:15:15 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:15:15 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:15:15.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:15:15 compute-0 python3.9[225800]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/iscsi-iname _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:15:15 compute-0 sudo[225798]: pam_unix(sudo:session): session closed for user root
Jan 20 14:15:16 compute-0 sudo[225951]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lrityjquhrievppkgrgvnxzskvddhkct ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918515.8595545-245-222716652152603/AnsiballZ_stat.py'
Jan 20 14:15:16 compute-0 sudo[225951]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:15:16 compute-0 python3.9[225953]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:15:16 compute-0 sudo[225951]: pam_unix(sudo:session): session closed for user root
Jan 20 14:15:16 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:15:16 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:15:16 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:15:16.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:15:16 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v704: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:15:17 compute-0 sudo[226075]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zowfxxhrwfsplbqpnujirdpwohmxibne ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918515.8595545-245-222716652152603/AnsiballZ_copy.py'
Jan 20 14:15:17 compute-0 sudo[226075]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:15:17 compute-0 python3.9[226077]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1768918515.8595545-245-222716652152603/.source.iscsi _original_basename=.x3loxks6 follow=False checksum=5c0928496fb8f6c58c132ef17089be906f86f9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:15:17 compute-0 sudo[226075]: pam_unix(sudo:session): session closed for user root
Jan 20 14:15:17 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:15:17 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:15:17 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:15:17.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:15:17 compute-0 ceph-mon[74360]: pgmap v704: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:15:17 compute-0 sudo[226227]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uedkedwfmbjkfmptkybmbwuhuhsdaigo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918517.523872-290-120042725928740/AnsiballZ_file.py'
Jan 20 14:15:18 compute-0 sudo[226227]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:15:18 compute-0 python3.9[226229]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:15:18 compute-0 sudo[226227]: pam_unix(sudo:session): session closed for user root
Jan 20 14:15:18 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:15:18 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:15:18 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:15:18.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:15:18 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v705: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:15:19 compute-0 ceph-mon[74360]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #36. Immutable memtables: 0.
Jan 20 14:15:19 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:15:19.002273) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 14:15:19 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 36
Jan 20 14:15:19 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768918519002408, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 1002, "num_deletes": 251, "total_data_size": 1687718, "memory_usage": 1704992, "flush_reason": "Manual Compaction"}
Jan 20 14:15:19 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #37: started
Jan 20 14:15:19 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768918519017731, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 37, "file_size": 1658900, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16012, "largest_seqno": 17013, "table_properties": {"data_size": 1653996, "index_size": 2492, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1349, "raw_key_size": 10360, "raw_average_key_size": 19, "raw_value_size": 1644225, "raw_average_value_size": 3090, "num_data_blocks": 113, "num_entries": 532, "num_filter_entries": 532, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768918425, "oldest_key_time": 1768918425, "file_creation_time": 1768918519, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1688fb6f-f397-4aac-a0e8-874ff91d97a7", "db_session_id": "5V3N2TVXYZBCXP55EZHK", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Jan 20 14:15:19 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 15501 microseconds, and 7698 cpu microseconds.
Jan 20 14:15:19 compute-0 ceph-mon[74360]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 14:15:19 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:15:19.017795) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #37: 1658900 bytes OK
Jan 20 14:15:19 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:15:19.017819) [db/memtable_list.cc:519] [default] Level-0 commit table #37 started
Jan 20 14:15:19 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:15:19.019674) [db/memtable_list.cc:722] [default] Level-0 commit table #37: memtable #1 done
Jan 20 14:15:19 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:15:19.019694) EVENT_LOG_v1 {"time_micros": 1768918519019687, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 14:15:19 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:15:19.019715) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 14:15:19 compute-0 ceph-mon[74360]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 1683129, prev total WAL file size 1683129, number of live WAL files 2.
Jan 20 14:15:19 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000033.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:15:19 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:15:19.020647) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031303034' seq:72057594037927935, type:22 .. '7061786F730031323536' seq:0, type:0; will stop at (end)
Jan 20 14:15:19 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 14:15:19 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [37(1620KB)], [35(8125KB)]
Jan 20 14:15:19 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768918519020718, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [37], "files_L6": [35], "score": -1, "input_data_size": 9979592, "oldest_snapshot_seqno": -1}
Jan 20 14:15:19 compute-0 sudo[226380]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-puenahvywkvuclvcijgcoyactlltiujb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918518.5639327-314-281057050838232/AnsiballZ_lineinfile.py'
Jan 20 14:15:19 compute-0 sudo[226380]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:15:19 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #38: 4334 keys, 7965225 bytes, temperature: kUnknown
Jan 20 14:15:19 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768918519074518, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 38, "file_size": 7965225, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7935191, "index_size": 18084, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10885, "raw_key_size": 107500, "raw_average_key_size": 24, "raw_value_size": 7855720, "raw_average_value_size": 1812, "num_data_blocks": 759, "num_entries": 4334, "num_filter_entries": 4334, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768917305, "oldest_key_time": 0, "file_creation_time": 1768918519, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1688fb6f-f397-4aac-a0e8-874ff91d97a7", "db_session_id": "5V3N2TVXYZBCXP55EZHK", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}}
Jan 20 14:15:19 compute-0 ceph-mon[74360]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 14:15:19 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:15:19.074739) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 7965225 bytes
Jan 20 14:15:19 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:15:19.075978) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 185.2 rd, 147.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.6, 7.9 +0.0 blob) out(7.6 +0.0 blob), read-write-amplify(10.8) write-amplify(4.8) OK, records in: 4849, records dropped: 515 output_compression: NoCompression
Jan 20 14:15:19 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:15:19.075992) EVENT_LOG_v1 {"time_micros": 1768918519075985, "job": 16, "event": "compaction_finished", "compaction_time_micros": 53881, "compaction_time_cpu_micros": 18476, "output_level": 6, "num_output_files": 1, "total_output_size": 7965225, "num_input_records": 4849, "num_output_records": 4334, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 14:15:19 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000037.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:15:19 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768918519076309, "job": 16, "event": "table_file_deletion", "file_number": 37}
Jan 20 14:15:19 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:15:19 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768918519077712, "job": 16, "event": "table_file_deletion", "file_number": 35}
Jan 20 14:15:19 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:15:19.020535) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:15:19 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:15:19.077897) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:15:19 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:15:19.077903) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:15:19 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:15:19.077906) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:15:19 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:15:19.077908) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:15:19 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:15:19.077910) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:15:19 compute-0 python3.9[226382]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:15:19 compute-0 sudo[226380]: pam_unix(sudo:session): session closed for user root
Jan 20 14:15:19 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:15:19 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:15:19 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:15:19.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:15:19 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:15:20 compute-0 ceph-mon[74360]: pgmap v705: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:15:20 compute-0 sudo[226532]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-updtxxmrqsimybxsdupwdjfnxhkhhjqu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918519.9173253-341-273262175088955/AnsiballZ_systemd_service.py'
Jan 20 14:15:20 compute-0 sudo[226532]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:15:20 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:15:20 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:15:20 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:15:20.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:15:20 compute-0 python3.9[226534]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 14:15:20 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v706: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:15:20 compute-0 systemd[1]: Listening on Open-iSCSI iscsid Socket.
Jan 20 14:15:20 compute-0 sudo[226532]: pam_unix(sudo:session): session closed for user root
Jan 20 14:15:21 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:15:21 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:15:21 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:15:21.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:15:21 compute-0 ceph-mon[74360]: pgmap v706: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:15:21 compute-0 sudo[226689]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cavxqmeebyzxefuynhtqhrwzdaxrvysm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918521.3589575-365-226126680186009/AnsiballZ_systemd_service.py'
Jan 20 14:15:21 compute-0 sudo[226689]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:15:22 compute-0 python3.9[226691]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 14:15:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:15:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:15:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:15:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:15:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:15:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:15:22 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:15:22 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:15:22 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:15:22.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:15:22 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v707: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:15:23 compute-0 systemd[1]: Reloading.
Jan 20 14:15:23 compute-0 systemd-sysv-generator[226719]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:15:23 compute-0 systemd-rc-local-generator[226715]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:15:23 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:15:23 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:15:23 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:15:23.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:15:23 compute-0 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Jan 20 14:15:23 compute-0 systemd[1]: Starting Open-iSCSI...
Jan 20 14:15:23 compute-0 kernel: Loading iSCSI transport class v2.0-870.
Jan 20 14:15:23 compute-0 systemd[1]: Started Open-iSCSI.
Jan 20 14:15:23 compute-0 systemd[1]: Starting Logout off all iSCSI sessions on shutdown...
Jan 20 14:15:23 compute-0 systemd[1]: Finished Logout off all iSCSI sessions on shutdown.
Jan 20 14:15:23 compute-0 sudo[226689]: pam_unix(sudo:session): session closed for user root
Jan 20 14:15:24 compute-0 ceph-mon[74360]: pgmap v707: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:15:24 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:15:24 compute-0 ceph-mon[74360]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #39. Immutable memtables: 0.
Jan 20 14:15:24 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:15:24.658266) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 14:15:24 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:856] [default] [JOB 17] Flushing memtable with next log file: 39
Jan 20 14:15:24 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768918524658350, "job": 17, "event": "flush_started", "num_memtables": 1, "num_entries": 297, "num_deletes": 256, "total_data_size": 77820, "memory_usage": 83848, "flush_reason": "Manual Compaction"}
Jan 20 14:15:24 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:885] [default] [JOB 17] Level-0 flush table #40: started
Jan 20 14:15:24 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768918524662177, "cf_name": "default", "job": 17, "event": "table_file_creation", "file_number": 40, "file_size": 77787, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 17014, "largest_seqno": 17310, "table_properties": {"data_size": 75866, "index_size": 149, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 4505, "raw_average_key_size": 16, "raw_value_size": 72071, "raw_average_value_size": 259, "num_data_blocks": 7, "num_entries": 278, "num_filter_entries": 278, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768918520, "oldest_key_time": 1768918520, "file_creation_time": 1768918524, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1688fb6f-f397-4aac-a0e8-874ff91d97a7", "db_session_id": "5V3N2TVXYZBCXP55EZHK", "orig_file_number": 40, "seqno_to_time_mapping": "N/A"}}
Jan 20 14:15:24 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 17] Flush lasted 4293 microseconds, and 1586 cpu microseconds.
Jan 20 14:15:24 compute-0 ceph-mon[74360]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 14:15:24 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:15:24.662566) [db/flush_job.cc:967] [default] [JOB 17] Level-0 flush table #40: 77787 bytes OK
Jan 20 14:15:24 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:15:24.662749) [db/memtable_list.cc:519] [default] Level-0 commit table #40 started
Jan 20 14:15:24 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:15:24.664366) [db/memtable_list.cc:722] [default] Level-0 commit table #40: memtable #1 done
Jan 20 14:15:24 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:15:24.664416) EVENT_LOG_v1 {"time_micros": 1768918524664409, "job": 17, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 14:15:24 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:15:24.664437) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 14:15:24 compute-0 ceph-mon[74360]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 17] Try to delete WAL files size 75641, prev total WAL file size 75641, number of live WAL files 2.
Jan 20 14:15:24 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000036.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:15:24 compute-0 python3.9[226890]: ansible-ansible.builtin.service_facts Invoked
Jan 20 14:15:24 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:15:24.666078) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0030' seq:72057594037927935, type:22 .. '6C6F676D00323533' seq:0, type:0; will stop at (end)
Jan 20 14:15:24 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 18] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 14:15:24 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 17 Base level 0, inputs: [40(75KB)], [38(7778KB)]
Jan 20 14:15:24 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768918524666463, "job": 18, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [40], "files_L6": [38], "score": -1, "input_data_size": 8043012, "oldest_snapshot_seqno": -1}
Jan 20 14:15:24 compute-0 network[226908]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 20 14:15:24 compute-0 network[226909]: 'network-scripts' will be removed from distribution in near future.
Jan 20 14:15:24 compute-0 network[226910]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 20 14:15:24 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 18] Generated table #41: 4092 keys, 7701260 bytes, temperature: kUnknown
Jan 20 14:15:24 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768918524749891, "cf_name": "default", "job": 18, "event": "table_file_creation", "file_number": 41, "file_size": 7701260, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7673045, "index_size": 16905, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10245, "raw_key_size": 103774, "raw_average_key_size": 25, "raw_value_size": 7597921, "raw_average_value_size": 1856, "num_data_blocks": 696, "num_entries": 4092, "num_filter_entries": 4092, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768917305, "oldest_key_time": 0, "file_creation_time": 1768918524, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1688fb6f-f397-4aac-a0e8-874ff91d97a7", "db_session_id": "5V3N2TVXYZBCXP55EZHK", "orig_file_number": 41, "seqno_to_time_mapping": "N/A"}}
Jan 20 14:15:24 compute-0 ceph-mon[74360]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 14:15:24 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:15:24.750123) [db/compaction/compaction_job.cc:1663] [default] [JOB 18] Compacted 1@0 + 1@6 files to L6 => 7701260 bytes
Jan 20 14:15:24 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:15:24.751729) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 96.4 rd, 92.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 7.6 +0.0 blob) out(7.3 +0.0 blob), read-write-amplify(202.4) write-amplify(99.0) OK, records in: 4612, records dropped: 520 output_compression: NoCompression
Jan 20 14:15:24 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:15:24.751747) EVENT_LOG_v1 {"time_micros": 1768918524751738, "job": 18, "event": "compaction_finished", "compaction_time_micros": 83445, "compaction_time_cpu_micros": 44065, "output_level": 6, "num_output_files": 1, "total_output_size": 7701260, "num_input_records": 4612, "num_output_records": 4092, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 14:15:24 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000040.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:15:24 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768918524751864, "job": 18, "event": "table_file_deletion", "file_number": 40}
Jan 20 14:15:24 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000038.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:15:24 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768918524753532, "job": 18, "event": "table_file_deletion", "file_number": 38}
Jan 20 14:15:24 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:15:24.665896) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:15:24 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:15:24.753604) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:15:24 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:15:24.753611) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:15:24 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:15:24.753614) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:15:24 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:15:24.753617) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:15:24 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:15:24.753620) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:15:24 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:15:24 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:15:24 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:15:24.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:15:24 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v708: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:15:25 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:15:25 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:15:25 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:15:25.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:15:25 compute-0 ceph-mon[74360]: pgmap v708: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:15:26 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:15:26 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:15:26 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:15:26.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:15:26 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v709: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:15:27 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:15:27 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:15:27 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:15:27.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:15:28 compute-0 ceph-mon[74360]: pgmap v709: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:15:28 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:15:28 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:15:28 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:15:28.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:15:28 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v710: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:15:29 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:15:29 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:15:29 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:15:29.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:15:29 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:15:30 compute-0 ceph-mon[74360]: pgmap v710: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:15:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:15:30.728 160071 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:15:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:15:30.729 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:15:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:15:30.729 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:15:30 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:15:30 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:15:30 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:15:30.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:15:30 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v711: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:15:31 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:15:31 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:15:31 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:15:31.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:15:31 compute-0 sudo[227206]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zfecmfbbdufunwqoeleenfuibhqvdybt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918531.2179751-434-169941652867403/AnsiballZ_dnf.py'
Jan 20 14:15:31 compute-0 sudo[227206]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:15:31 compute-0 sudo[227158]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:15:31 compute-0 sudo[227158]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:15:31 compute-0 sudo[227158]: pam_unix(sudo:session): session closed for user root
Jan 20 14:15:31 compute-0 sudo[227211]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:15:31 compute-0 sudo[227211]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:15:31 compute-0 sudo[227211]: pam_unix(sudo:session): session closed for user root
Jan 20 14:15:31 compute-0 python3.9[227209]: ansible-ansible.legacy.dnf Invoked with name=['device-mapper-multipath'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 20 14:15:32 compute-0 ceph-mon[74360]: pgmap v711: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:15:32 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:15:32 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:15:32 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:15:32.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:15:32 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v712: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:15:33 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:15:33 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:15:33 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:15:33.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:15:34 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 20 14:15:34 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 20 14:15:34 compute-0 systemd[1]: Reloading.
Jan 20 14:15:34 compute-0 systemd-rc-local-generator[227283]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:15:34 compute-0 ceph-mon[74360]: pgmap v712: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:15:34 compute-0 systemd-sysv-generator[227288]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:15:34 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:15:34 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 20 14:15:34 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:15:34 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:15:34 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:15:34.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:15:34 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v713: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:15:34 compute-0 podman[227294]: 2026-01-20 14:15:34.994525226 +0000 UTC m=+0.072025725 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 20 14:15:35 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:15:35 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:15:35 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:15:35.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:15:35 compute-0 ceph-mon[74360]: pgmap v713: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:15:35 compute-0 sudo[227206]: pam_unix(sudo:session): session closed for user root
Jan 20 14:15:35 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 20 14:15:35 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 20 14:15:35 compute-0 systemd[1]: run-r74c9bf0215fc45ce96a9a6833de61c98.service: Deactivated successfully.
Jan 20 14:15:36 compute-0 sudo[227572]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gjqajcnifxqaennfduvjsnnkwszihsea ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918536.0342855-461-165853217370513/AnsiballZ_file.py'
Jan 20 14:15:36 compute-0 sudo[227572]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:15:36 compute-0 python3.9[227574]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Jan 20 14:15:36 compute-0 sudo[227572]: pam_unix(sudo:session): session closed for user root
Jan 20 14:15:36 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:15:36 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:15:36 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:15:36.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:15:36 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v714: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:15:37 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:15:37 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:15:37 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:15:37.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:15:37 compute-0 sudo[227725]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eeljlqtljmlgyekstvrljuvivdpsdsyw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918536.8826096-485-267585040581719/AnsiballZ_modprobe.py'
Jan 20 14:15:37 compute-0 sudo[227725]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:15:37 compute-0 python3.9[227727]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Jan 20 14:15:37 compute-0 sudo[227725]: pam_unix(sudo:session): session closed for user root
Jan 20 14:15:37 compute-0 ceph-mon[74360]: pgmap v714: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:15:38 compute-0 sudo[227881]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-menysiprkqowkvzdooxzysfvvbvwgmpv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918537.9521332-509-89700400549249/AnsiballZ_stat.py'
Jan 20 14:15:38 compute-0 sudo[227881]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:15:38 compute-0 python3.9[227883]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:15:38 compute-0 sudo[227881]: pam_unix(sudo:session): session closed for user root
Jan 20 14:15:38 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:15:38 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:15:38 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:15:38.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:15:38 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v715: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:15:39 compute-0 sudo[228005]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xlkrzqvsbnjfzucxnrzewacmskxydvij ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918537.9521332-509-89700400549249/AnsiballZ_copy.py'
Jan 20 14:15:39 compute-0 sudo[228005]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:15:39 compute-0 python3.9[228007]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1768918537.9521332-509-89700400549249/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:15:39 compute-0 sudo[228005]: pam_unix(sudo:session): session closed for user root
Jan 20 14:15:39 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:15:39 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:15:39 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:15:39.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:15:39 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:15:40 compute-0 sudo[228157]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xwqikkjctbyqaqejybchvvlzeudpwgcx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918539.7052312-557-83469292406950/AnsiballZ_lineinfile.py'
Jan 20 14:15:40 compute-0 sudo[228157]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:15:40 compute-0 python3.9[228159]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:15:40 compute-0 sudo[228157]: pam_unix(sudo:session): session closed for user root
Jan 20 14:15:40 compute-0 ceph-mon[74360]: pgmap v715: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:15:40 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:15:40 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:15:40 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:15:40.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:15:40 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v716: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:15:41 compute-0 sudo[228310]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dqktbszehwfrhuorysylkijumisjwxdn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918540.63403-581-254337570444405/AnsiballZ_systemd.py'
Jan 20 14:15:41 compute-0 sudo[228310]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:15:41 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:15:41 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:15:41 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:15:41.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:15:41 compute-0 python3.9[228312]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 20 14:15:41 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 20 14:15:41 compute-0 systemd[1]: Stopped Load Kernel Modules.
Jan 20 14:15:41 compute-0 systemd[1]: Stopping Load Kernel Modules...
Jan 20 14:15:41 compute-0 systemd[1]: Starting Load Kernel Modules...
Jan 20 14:15:41 compute-0 systemd[1]: Finished Load Kernel Modules.
Jan 20 14:15:41 compute-0 sudo[228310]: pam_unix(sudo:session): session closed for user root
Jan 20 14:15:41 compute-0 podman[228314]: 2026-01-20 14:15:41.80240037 +0000 UTC m=+0.087520308 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 20 14:15:41 compute-0 ceph-mon[74360]: pgmap v716: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:15:42 compute-0 sudo[228492]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ymorqygzqeufvkvcrzfxhmcjbkxptysu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918542.0120811-605-204065513180926/AnsiballZ_command.py'
Jan 20 14:15:42 compute-0 sudo[228492]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:15:42 compute-0 python3.9[228494]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/multipath _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:15:42 compute-0 sudo[228492]: pam_unix(sudo:session): session closed for user root
Jan 20 14:15:42 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:15:42 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:15:42 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:15:42.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:15:42 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v717: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:15:43 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:15:43 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:15:43 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:15:43.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:15:43 compute-0 sudo[228646]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rntdtwipodcrtcewtbwyzmpvumnicfrs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918543.0422156-635-169638145465095/AnsiballZ_stat.py'
Jan 20 14:15:43 compute-0 sudo[228646]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:15:43 compute-0 python3.9[228648]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 14:15:43 compute-0 sudo[228646]: pam_unix(sudo:session): session closed for user root
Jan 20 14:15:43 compute-0 ceph-mon[74360]: pgmap v717: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:15:44 compute-0 sudo[228798]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hdxlpjqjkkntvdaoqiufflogomdkzjxd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918544.045523-662-91722468303120/AnsiballZ_stat.py'
Jan 20 14:15:44 compute-0 sudo[228798]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:15:44 compute-0 python3.9[228800]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:15:44 compute-0 sudo[228798]: pam_unix(sudo:session): session closed for user root
Jan 20 14:15:44 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:15:44 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:15:44 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:15:44 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:15:44.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:15:44 compute-0 sudo[228922]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lrgeupdougxthvudamipmoncerbnbykp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918544.045523-662-91722468303120/AnsiballZ_copy.py'
Jan 20 14:15:44 compute-0 sudo[228922]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:15:44 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v718: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:15:45 compute-0 python3.9[228924]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1768918544.045523-662-91722468303120/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:15:45 compute-0 sudo[228922]: pam_unix(sudo:session): session closed for user root
Jan 20 14:15:45 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:15:45 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:15:45 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:15:45.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:15:45 compute-0 sudo[229074]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rarjtzvxfotervjcergapquytjgpifgp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918545.466576-707-140139096264162/AnsiballZ_command.py'
Jan 20 14:15:45 compute-0 sudo[229074]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:15:45 compute-0 python3.9[229076]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:15:45 compute-0 sudo[229074]: pam_unix(sudo:session): session closed for user root
Jan 20 14:15:46 compute-0 ceph-mon[74360]: pgmap v718: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:15:46 compute-0 sudo[229227]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ofohkvqwbjvtuvweoffakqyusxsgtqgv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918546.1707392-731-20155487522668/AnsiballZ_lineinfile.py'
Jan 20 14:15:46 compute-0 sudo[229227]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:15:46 compute-0 python3.9[229229]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:15:46 compute-0 sudo[229227]: pam_unix(sudo:session): session closed for user root
Jan 20 14:15:46 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:15:46 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:15:46 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:15:46.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:15:46 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v719: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:15:47 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:15:47 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:15:47 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:15:47.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:15:47 compute-0 sudo[229380]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wgvzpajuxflvyxlegotjrnsgslgecgzq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918546.9576266-755-124852548026575/AnsiballZ_replace.py'
Jan 20 14:15:47 compute-0 sudo[229380]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:15:47 compute-0 python3.9[229382]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:15:47 compute-0 sudo[229380]: pam_unix(sudo:session): session closed for user root
Jan 20 14:15:48 compute-0 sudo[229532]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kdrxvitbnmlijqjcunlfjicrlxulhbqg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918547.9022043-779-235390124656195/AnsiballZ_replace.py'
Jan 20 14:15:48 compute-0 sudo[229532]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:15:48 compute-0 ceph-mon[74360]: pgmap v719: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:15:48 compute-0 python3.9[229534]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:15:48 compute-0 sudo[229532]: pam_unix(sudo:session): session closed for user root
Jan 20 14:15:48 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:15:48 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:15:48 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:15:48.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:15:48 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v720: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:15:49 compute-0 sudo[229685]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jjhucqpkgxcretbivdboypncmgjymkkr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918548.8852646-806-157061259537034/AnsiballZ_lineinfile.py'
Jan 20 14:15:49 compute-0 sudo[229685]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:15:49 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:15:49 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:15:49 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:15:49.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:15:49 compute-0 python3.9[229687]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:15:49 compute-0 sudo[229685]: pam_unix(sudo:session): session closed for user root
Jan 20 14:15:49 compute-0 ceph-mon[74360]: pgmap v720: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:15:49 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:15:49 compute-0 sudo[229837]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eoeampsepgazhujvpbystojdzycjnsly ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918549.5659447-806-6779476432704/AnsiballZ_lineinfile.py'
Jan 20 14:15:49 compute-0 sudo[229837]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:15:50 compute-0 python3.9[229839]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:15:50 compute-0 sudo[229837]: pam_unix(sudo:session): session closed for user root
Jan 20 14:15:50 compute-0 sudo[229989]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kxlpsskmmfozwnoszpyzxdvogaontmsc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918550.186627-806-158231701578782/AnsiballZ_lineinfile.py'
Jan 20 14:15:50 compute-0 sudo[229989]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:15:50 compute-0 python3.9[229991]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:15:50 compute-0 sudo[229989]: pam_unix(sudo:session): session closed for user root
Jan 20 14:15:50 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:15:50 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:15:50 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:15:50.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:15:50 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v721: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:15:51 compute-0 sudo[230142]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yqqsugijvmyxdnvqhhfvbaeadfywrcxf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918550.9078348-806-51238788692204/AnsiballZ_lineinfile.py'
Jan 20 14:15:51 compute-0 sudo[230142]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:15:51 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:15:51 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:15:51 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:15:51.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:15:51 compute-0 ceph-mon[74360]: pgmap v721: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:15:51 compute-0 python3.9[230144]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:15:51 compute-0 sudo[230142]: pam_unix(sudo:session): session closed for user root
Jan 20 14:15:51 compute-0 sudo[230169]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:15:51 compute-0 sudo[230169]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:15:51 compute-0 sudo[230169]: pam_unix(sudo:session): session closed for user root
Jan 20 14:15:51 compute-0 sudo[230194]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:15:51 compute-0 sudo[230194]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:15:51 compute-0 sudo[230194]: pam_unix(sudo:session): session closed for user root
Jan 20 14:15:52 compute-0 sudo[230344]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ipdddmkrpxqzoydaknrhwcwasmfbcjyc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918551.934615-893-49467846317249/AnsiballZ_stat.py'
Jan 20 14:15:52 compute-0 sudo[230344]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:15:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Optimize plan auto_2026-01-20_14:15:52
Jan 20 14:15:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 14:15:52 compute-0 ceph-mgr[74653]: [balancer INFO root] do_upmap
Jan 20 14:15:52 compute-0 ceph-mgr[74653]: [balancer INFO root] pools ['cephfs.cephfs.data', '.mgr', 'default.rgw.log', '.rgw.root', 'images', 'vms', 'default.rgw.meta', 'backups', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.control']
Jan 20 14:15:52 compute-0 ceph-mgr[74653]: [balancer INFO root] prepared 0/10 changes
Jan 20 14:15:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:15:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:15:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:15:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:15:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:15:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:15:52 compute-0 python3.9[230346]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 14:15:52 compute-0 sudo[230344]: pam_unix(sudo:session): session closed for user root
Jan 20 14:15:52 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:15:52 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:15:52 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:15:52.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:15:52 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v722: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:15:53 compute-0 sudo[230499]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kxkfhzkeouewskqfzyeoaykszyxqfiyz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918552.8483374-917-195659500669019/AnsiballZ_command.py'
Jan 20 14:15:53 compute-0 sudo[230499]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:15:53 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:15:53 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:15:53 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:15:53.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:15:53 compute-0 python3.9[230501]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/true _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:15:53 compute-0 sudo[230499]: pam_unix(sudo:session): session closed for user root
Jan 20 14:15:54 compute-0 ceph-mon[74360]: pgmap v722: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:15:54 compute-0 sudo[230652]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vpsttchmwlggwmpfisxowddpsohvzlne ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918553.7741883-944-178678559766848/AnsiballZ_systemd_service.py'
Jan 20 14:15:54 compute-0 sudo[230652]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:15:54 compute-0 python3.9[230654]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=multipathd.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 14:15:54 compute-0 systemd[1]: Listening on multipathd control socket.
Jan 20 14:15:54 compute-0 sudo[230652]: pam_unix(sudo:session): session closed for user root
Jan 20 14:15:54 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:15:54 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:15:54 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:15:54 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:15:54.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:15:54 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v723: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:15:55 compute-0 sudo[230809]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-brlxldgzyrxijlukaysvsfoqzbdkvvby ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918554.8492064-968-37309687783532/AnsiballZ_systemd_service.py'
Jan 20 14:15:55 compute-0 sudo[230809]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:15:55 compute-0 ceph-mon[74360]: pgmap v723: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:15:55 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:15:55 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:15:55 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:15:55.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:15:55 compute-0 python3.9[230811]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=multipathd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 14:15:55 compute-0 systemd[1]: Starting Wait for udev To Complete Device Initialization...
Jan 20 14:15:55 compute-0 udevadm[230816]: systemd-udev-settle.service is deprecated. Please fix multipathd.service not to pull it in.
Jan 20 14:15:55 compute-0 systemd[1]: Finished Wait for udev To Complete Device Initialization.
Jan 20 14:15:55 compute-0 systemd[1]: Starting Device-Mapper Multipath Device Controller...
Jan 20 14:15:55 compute-0 multipathd[230819]: --------start up--------
Jan 20 14:15:55 compute-0 multipathd[230819]: read /etc/multipath.conf
Jan 20 14:15:55 compute-0 multipathd[230819]: path checkers start up
Jan 20 14:15:55 compute-0 systemd[1]: Started Device-Mapper Multipath Device Controller.
Jan 20 14:15:55 compute-0 sudo[230809]: pam_unix(sudo:session): session closed for user root
Jan 20 14:15:56 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:15:56 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:15:56 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:15:56.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:15:56 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v724: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:15:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 14:15:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:15:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 14:15:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:15:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:15:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:15:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:15:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:15:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:15:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:15:57 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:15:57 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:15:57 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:15:57.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:15:57 compute-0 sshd-session[230852]: Connection closed by authenticating user root 157.245.78.139 port 35304 [preauth]
Jan 20 14:15:57 compute-0 sudo[230979]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hlxssbfhwufvdwgrbrhwujsavaezstgb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918557.4268472-1004-180163323718076/AnsiballZ_file.py'
Jan 20 14:15:57 compute-0 sudo[230979]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:15:57 compute-0 python3.9[230981]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Jan 20 14:15:58 compute-0 ceph-mon[74360]: pgmap v724: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:15:58 compute-0 sudo[230979]: pam_unix(sudo:session): session closed for user root
Jan 20 14:15:58 compute-0 sudo[231132]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jdmbblaycmjsxtzqcbpgehtslmfvfvtz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918558.3831458-1028-113979448730626/AnsiballZ_modprobe.py'
Jan 20 14:15:58 compute-0 sudo[231132]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:15:58 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:15:58 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:15:58 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:15:58.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:15:58 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v725: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:15:58 compute-0 python3.9[231134]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Jan 20 14:15:58 compute-0 kernel: Key type psk registered
Jan 20 14:15:59 compute-0 sudo[231132]: pam_unix(sudo:session): session closed for user root
Jan 20 14:15:59 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:15:59 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:15:59 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:15:59.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:15:59 compute-0 sudo[231294]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dbegmgrwvbqfkhraluywpynqivpaqppm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918559.2318609-1052-23501851439419/AnsiballZ_stat.py'
Jan 20 14:15:59 compute-0 sudo[231294]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:15:59 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:15:59 compute-0 python3.9[231296]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:15:59 compute-0 sudo[231294]: pam_unix(sudo:session): session closed for user root
Jan 20 14:16:00 compute-0 ceph-mon[74360]: pgmap v725: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:16:00 compute-0 sudo[231417]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dhtirnaojdvqsgfbxgqreolozuvaogpv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918559.2318609-1052-23501851439419/AnsiballZ_copy.py'
Jan 20 14:16:00 compute-0 sudo[231417]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:16:00 compute-0 python3.9[231419]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1768918559.2318609-1052-23501851439419/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:16:00 compute-0 sudo[231417]: pam_unix(sudo:session): session closed for user root
Jan 20 14:16:00 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:16:00 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:16:00 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:16:00.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:16:00 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v726: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:16:01 compute-0 sudo[231570]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rmrsxatbtlkjsrspdcyuodemztrquviy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918560.803201-1100-273772880985469/AnsiballZ_lineinfile.py'
Jan 20 14:16:01 compute-0 sudo[231570]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:16:01 compute-0 python3.9[231572]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:16:01 compute-0 sudo[231570]: pam_unix(sudo:session): session closed for user root
Jan 20 14:16:01 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:16:01 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:16:01 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:16:01.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:16:02 compute-0 ceph-mon[74360]: pgmap v726: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:16:02 compute-0 sudo[231722]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-efyrjelodvmncnofaszvcehwvmfjzowc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918561.7921581-1124-280026321392321/AnsiballZ_systemd.py'
Jan 20 14:16:02 compute-0 sudo[231722]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:16:02 compute-0 python3.9[231724]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 20 14:16:02 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 20 14:16:02 compute-0 systemd[1]: Stopped Load Kernel Modules.
Jan 20 14:16:02 compute-0 systemd[1]: Stopping Load Kernel Modules...
Jan 20 14:16:02 compute-0 systemd[1]: Starting Load Kernel Modules...
Jan 20 14:16:02 compute-0 systemd[1]: Finished Load Kernel Modules.
Jan 20 14:16:02 compute-0 sudo[231722]: pam_unix(sudo:session): session closed for user root
Jan 20 14:16:02 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:16:02 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:16:02 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:16:02.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:16:02 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v727: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:16:03 compute-0 sudo[231879]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mlgdnkkiodeptgarzfprhtbomhbxwrom ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918562.8742235-1148-36966763859569/AnsiballZ_dnf.py'
Jan 20 14:16:03 compute-0 sudo[231879]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:16:03 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:16:03 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:16:03 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:16:03.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:16:03 compute-0 python3.9[231881]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 20 14:16:04 compute-0 ceph-mon[74360]: pgmap v727: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:16:04 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:16:04 compute-0 systemd[1]: virtnodedevd.service: Deactivated successfully.
Jan 20 14:16:04 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:16:04 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:16:04 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:16:04.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:16:04 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v728: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:16:05 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:16:05 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:16:05 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:16:05.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:16:05 compute-0 podman[231888]: 2026-01-20 14:16:05.506494914 +0000 UTC m=+0.084758695 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 20 14:16:05 compute-0 systemd[1]: Reloading.
Jan 20 14:16:05 compute-0 systemd-sysv-generator[231961]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:16:05 compute-0 systemd-rc-local-generator[231958]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:16:05 compute-0 sudo[231910]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:16:05 compute-0 systemd[1]: Reloading.
Jan 20 14:16:05 compute-0 systemd-sysv-generator[231996]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:16:05 compute-0 systemd-rc-local-generator[231992]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:16:06 compute-0 ceph-mon[74360]: pgmap v728: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:16:06 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Jan 20 14:16:06 compute-0 sudo[231910]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:16:06 compute-0 sudo[231910]: pam_unix(sudo:session): session closed for user root
Jan 20 14:16:06 compute-0 sudo[232005]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:16:06 compute-0 sudo[232005]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:16:06 compute-0 sudo[232005]: pam_unix(sudo:session): session closed for user root
Jan 20 14:16:06 compute-0 systemd-logind[796]: Watching system buttons on /dev/input/event0 (Power Button)
Jan 20 14:16:06 compute-0 sudo[232031]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:16:06 compute-0 sudo[232031]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:16:06 compute-0 sudo[232031]: pam_unix(sudo:session): session closed for user root
Jan 20 14:16:06 compute-0 systemd-logind[796]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Jan 20 14:16:06 compute-0 lvm[232096]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 20 14:16:06 compute-0 lvm[232096]: VG ceph_vg0 finished
Jan 20 14:16:06 compute-0 sudo[232091]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 14:16:06 compute-0 sudo[232091]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:16:06 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 20 14:16:06 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 20 14:16:06 compute-0 systemd[1]: Reloading.
Jan 20 14:16:06 compute-0 systemd-rc-local-generator[232180]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:16:06 compute-0 systemd-sysv-generator[232183]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:16:06 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 20 14:16:06 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:16:06 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:16:06 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:16:06.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:16:06 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v729: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:16:06 compute-0 sudo[232091]: pam_unix(sudo:session): session closed for user root
Jan 20 14:16:07 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 14:16:07 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:16:07 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 20 14:16:07 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:16:07 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 20 14:16:07 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:16:07 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev f4b1a62a-6a35-4b0b-9023-a5fb95eefae2 does not exist
Jan 20 14:16:07 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev a2d3188a-d168-434c-82a8-3b15a7b5cbcd does not exist
Jan 20 14:16:07 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 1b9c6b11-7161-4935-a936-1c7f597c5675 does not exist
Jan 20 14:16:07 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 20 14:16:07 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:16:07 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 20 14:16:07 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:16:07 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 14:16:07 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:16:07 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:16:07 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:16:07 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:16:07 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:16:07 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:16:07 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:16:07 compute-0 sudo[232535]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:16:07 compute-0 sudo[232535]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:16:07 compute-0 sudo[232535]: pam_unix(sudo:session): session closed for user root
Jan 20 14:16:07 compute-0 sudo[232634]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:16:07 compute-0 sudo[232634]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:16:07 compute-0 sudo[232634]: pam_unix(sudo:session): session closed for user root
Jan 20 14:16:07 compute-0 sudo[232724]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:16:07 compute-0 sudo[232724]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:16:07 compute-0 sudo[232724]: pam_unix(sudo:session): session closed for user root
Jan 20 14:16:07 compute-0 sudo[232814]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 14:16:07 compute-0 sudo[232814]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:16:07 compute-0 sudo[231879]: pam_unix(sudo:session): session closed for user root
Jan 20 14:16:07 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:16:07 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:16:07 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:16:07.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:16:07 compute-0 podman[233273]: 2026-01-20 14:16:07.664572498 +0000 UTC m=+0.050350686 container create 321775f5a37ba8a9b9a32095a5523887df44e060d6d5269158ec4d48d39996a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_kapitsa, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 20 14:16:07 compute-0 systemd[1]: Started libpod-conmon-321775f5a37ba8a9b9a32095a5523887df44e060d6d5269158ec4d48d39996a1.scope.
Jan 20 14:16:07 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:16:07 compute-0 podman[233273]: 2026-01-20 14:16:07.644570314 +0000 UTC m=+0.030348522 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:16:07 compute-0 podman[233273]: 2026-01-20 14:16:07.745809828 +0000 UTC m=+0.131588106 container init 321775f5a37ba8a9b9a32095a5523887df44e060d6d5269158ec4d48d39996a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_kapitsa, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True)
Jan 20 14:16:07 compute-0 podman[233273]: 2026-01-20 14:16:07.757789309 +0000 UTC m=+0.143567497 container start 321775f5a37ba8a9b9a32095a5523887df44e060d6d5269158ec4d48d39996a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_kapitsa, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:16:07 compute-0 magical_kapitsa[233404]: 167 167
Jan 20 14:16:07 compute-0 systemd[1]: libpod-321775f5a37ba8a9b9a32095a5523887df44e060d6d5269158ec4d48d39996a1.scope: Deactivated successfully.
Jan 20 14:16:07 compute-0 podman[233273]: 2026-01-20 14:16:07.877714283 +0000 UTC m=+0.263492501 container attach 321775f5a37ba8a9b9a32095a5523887df44e060d6d5269158ec4d48d39996a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_kapitsa, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:16:07 compute-0 podman[233273]: 2026-01-20 14:16:07.878657877 +0000 UTC m=+0.264436105 container died 321775f5a37ba8a9b9a32095a5523887df44e060d6d5269158ec4d48d39996a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_kapitsa, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 20 14:16:08 compute-0 ceph-mon[74360]: pgmap v729: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:16:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-637956c8ca0f7cb04a9f4ec22dc2f0a4c43028ea0544d19dd46a43d88e3b9b00-merged.mount: Deactivated successfully.
Jan 20 14:16:08 compute-0 podman[233273]: 2026-01-20 14:16:08.212879397 +0000 UTC m=+0.598657585 container remove 321775f5a37ba8a9b9a32095a5523887df44e060d6d5269158ec4d48d39996a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_kapitsa, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True)
Jan 20 14:16:08 compute-0 systemd[1]: libpod-conmon-321775f5a37ba8a9b9a32095a5523887df44e060d6d5269158ec4d48d39996a1.scope: Deactivated successfully.
Jan 20 14:16:08 compute-0 sudo[233677]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vtbzljbyvqljztbzdbxlculwjahiuelr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918568.003011-1172-263421596510489/AnsiballZ_systemd_service.py'
Jan 20 14:16:08 compute-0 sudo[233677]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:16:08 compute-0 podman[233685]: 2026-01-20 14:16:08.453056233 +0000 UTC m=+0.066534799 container create 28d07f46deb6934633463572fea7824a031aa8922bb9f8198aef8e8972d2cf5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_almeida, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:16:08 compute-0 podman[233685]: 2026-01-20 14:16:08.40686768 +0000 UTC m=+0.020346266 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:16:08 compute-0 systemd[1]: Started libpod-conmon-28d07f46deb6934633463572fea7824a031aa8922bb9f8198aef8e8972d2cf5c.scope.
Jan 20 14:16:08 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 20 14:16:08 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 20 14:16:08 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1.616s CPU time.
Jan 20 14:16:08 compute-0 systemd[1]: run-r8b13a1c4085c4bbda28ab1a0eaa06473.service: Deactivated successfully.
Jan 20 14:16:08 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:16:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b52f3debb0c40259b4a6df77802eb322bbd9ac1ca20b03b8135f88930d5190d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:16:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b52f3debb0c40259b4a6df77802eb322bbd9ac1ca20b03b8135f88930d5190d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:16:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b52f3debb0c40259b4a6df77802eb322bbd9ac1ca20b03b8135f88930d5190d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:16:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b52f3debb0c40259b4a6df77802eb322bbd9ac1ca20b03b8135f88930d5190d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:16:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b52f3debb0c40259b4a6df77802eb322bbd9ac1ca20b03b8135f88930d5190d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 14:16:08 compute-0 podman[233685]: 2026-01-20 14:16:08.612197824 +0000 UTC m=+0.225676440 container init 28d07f46deb6934633463572fea7824a031aa8922bb9f8198aef8e8972d2cf5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_almeida, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:16:08 compute-0 podman[233685]: 2026-01-20 14:16:08.622266013 +0000 UTC m=+0.235744619 container start 28d07f46deb6934633463572fea7824a031aa8922bb9f8198aef8e8972d2cf5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_almeida, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:16:08 compute-0 podman[233685]: 2026-01-20 14:16:08.626228799 +0000 UTC m=+0.239707405 container attach 28d07f46deb6934633463572fea7824a031aa8922bb9f8198aef8e8972d2cf5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_almeida, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:16:08 compute-0 python3.9[233679]: ansible-ansible.builtin.systemd_service Invoked with name=iscsid state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 20 14:16:08 compute-0 systemd[1]: Stopping Open-iSCSI...
Jan 20 14:16:08 compute-0 iscsid[226732]: iscsid shutting down.
Jan 20 14:16:08 compute-0 systemd[1]: iscsid.service: Deactivated successfully.
Jan 20 14:16:08 compute-0 systemd[1]: Stopped Open-iSCSI.
Jan 20 14:16:08 compute-0 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Jan 20 14:16:08 compute-0 systemd[1]: Starting Open-iSCSI...
Jan 20 14:16:08 compute-0 systemd[1]: Started Open-iSCSI.
Jan 20 14:16:08 compute-0 sudo[233677]: pam_unix(sudo:session): session closed for user root
Jan 20 14:16:08 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:16:08 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:16:08 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:16:08.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:16:08 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v730: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:16:09 compute-0 sudo[233865]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eznalpyhmzvuxifrwbndpiekgvohcwpa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918569.0028026-1196-222938006668301/AnsiballZ_systemd_service.py'
Jan 20 14:16:09 compute-0 sudo[233865]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:16:09 compute-0 ceph-mon[74360]: pgmap v730: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:16:09 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:16:09 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:16:09 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:16:09.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:16:09 compute-0 charming_almeida[233702]: --> passed data devices: 0 physical, 1 LVM
Jan 20 14:16:09 compute-0 charming_almeida[233702]: --> relative data size: 1.0
Jan 20 14:16:09 compute-0 charming_almeida[233702]: --> All data devices are unavailable
Jan 20 14:16:09 compute-0 systemd[1]: libpod-28d07f46deb6934633463572fea7824a031aa8922bb9f8198aef8e8972d2cf5c.scope: Deactivated successfully.
Jan 20 14:16:09 compute-0 podman[233685]: 2026-01-20 14:16:09.495174624 +0000 UTC m=+1.108653200 container died 28d07f46deb6934633463572fea7824a031aa8922bb9f8198aef8e8972d2cf5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_almeida, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 20 14:16:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-9b52f3debb0c40259b4a6df77802eb322bbd9ac1ca20b03b8135f88930d5190d-merged.mount: Deactivated successfully.
Jan 20 14:16:09 compute-0 podman[233685]: 2026-01-20 14:16:09.567569628 +0000 UTC m=+1.181048214 container remove 28d07f46deb6934633463572fea7824a031aa8922bb9f8198aef8e8972d2cf5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_almeida, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 20 14:16:09 compute-0 systemd[1]: libpod-conmon-28d07f46deb6934633463572fea7824a031aa8922bb9f8198aef8e8972d2cf5c.scope: Deactivated successfully.
Jan 20 14:16:09 compute-0 sudo[232814]: pam_unix(sudo:session): session closed for user root
Jan 20 14:16:09 compute-0 python3.9[233869]: ansible-ansible.builtin.systemd_service Invoked with name=multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 20 14:16:09 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:16:09 compute-0 systemd[1]: Stopping Device-Mapper Multipath Device Controller...
Jan 20 14:16:09 compute-0 multipathd[230819]: exit (signal)
Jan 20 14:16:09 compute-0 multipathd[230819]: --------shut down-------
Jan 20 14:16:09 compute-0 sudo[233889]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:16:09 compute-0 sudo[233889]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:16:09 compute-0 sudo[233889]: pam_unix(sudo:session): session closed for user root
Jan 20 14:16:09 compute-0 systemd[1]: multipathd.service: Deactivated successfully.
Jan 20 14:16:09 compute-0 systemd[1]: Stopped Device-Mapper Multipath Device Controller.
Jan 20 14:16:09 compute-0 systemd[1]: Starting Device-Mapper Multipath Device Controller...
Jan 20 14:16:09 compute-0 sudo[233917]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:16:09 compute-0 sudo[233917]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:16:09 compute-0 sudo[233917]: pam_unix(sudo:session): session closed for user root
Jan 20 14:16:09 compute-0 multipathd[233942]: --------start up--------
Jan 20 14:16:09 compute-0 multipathd[233942]: read /etc/multipath.conf
Jan 20 14:16:09 compute-0 multipathd[233942]: path checkers start up
Jan 20 14:16:09 compute-0 systemd[1]: Started Device-Mapper Multipath Device Controller.
Jan 20 14:16:09 compute-0 sudo[233865]: pam_unix(sudo:session): session closed for user root
Jan 20 14:16:09 compute-0 sudo[233950]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:16:09 compute-0 sudo[233950]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:16:09 compute-0 sudo[233950]: pam_unix(sudo:session): session closed for user root
Jan 20 14:16:09 compute-0 sudo[233978]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- lvm list --format json
Jan 20 14:16:09 compute-0 sudo[233978]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:16:10 compute-0 podman[234137]: 2026-01-20 14:16:10.216012851 +0000 UTC m=+0.025096941 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:16:10 compute-0 podman[234137]: 2026-01-20 14:16:10.403846079 +0000 UTC m=+0.212930119 container create fca1bbbf6fa7d866d02f17334d37e6a5aaabf47f5d5672c39680adfb79f40e1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_shamir, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:16:10 compute-0 python3.9[234201]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 14:16:10 compute-0 systemd[1]: Started libpod-conmon-fca1bbbf6fa7d866d02f17334d37e6a5aaabf47f5d5672c39680adfb79f40e1a.scope.
Jan 20 14:16:10 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:16:10 compute-0 podman[234137]: 2026-01-20 14:16:10.76359804 +0000 UTC m=+0.572682100 container init fca1bbbf6fa7d866d02f17334d37e6a5aaabf47f5d5672c39680adfb79f40e1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_shamir, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 20 14:16:10 compute-0 podman[234137]: 2026-01-20 14:16:10.772543079 +0000 UTC m=+0.581627099 container start fca1bbbf6fa7d866d02f17334d37e6a5aaabf47f5d5672c39680adfb79f40e1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_shamir, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 20 14:16:10 compute-0 podman[234137]: 2026-01-20 14:16:10.776653969 +0000 UTC m=+0.585738029 container attach fca1bbbf6fa7d866d02f17334d37e6a5aaabf47f5d5672c39680adfb79f40e1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_shamir, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 20 14:16:10 compute-0 epic_shamir[234208]: 167 167
Jan 20 14:16:10 compute-0 systemd[1]: libpod-fca1bbbf6fa7d866d02f17334d37e6a5aaabf47f5d5672c39680adfb79f40e1a.scope: Deactivated successfully.
Jan 20 14:16:10 compute-0 podman[234137]: 2026-01-20 14:16:10.781691134 +0000 UTC m=+0.590775154 container died fca1bbbf6fa7d866d02f17334d37e6a5aaabf47f5d5672c39680adfb79f40e1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_shamir, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 20 14:16:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-4c8ca7a43b961a3d52a8ae78e07ecbc89c89edfbd1b8fe743c559e34d9d635f6-merged.mount: Deactivated successfully.
Jan 20 14:16:10 compute-0 podman[234137]: 2026-01-20 14:16:10.817877841 +0000 UTC m=+0.626961861 container remove fca1bbbf6fa7d866d02f17334d37e6a5aaabf47f5d5672c39680adfb79f40e1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_shamir, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Jan 20 14:16:10 compute-0 systemd[1]: libpod-conmon-fca1bbbf6fa7d866d02f17334d37e6a5aaabf47f5d5672c39680adfb79f40e1a.scope: Deactivated successfully.
Jan 20 14:16:10 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:16:10 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:16:10 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:16:10.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:16:10 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v731: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:16:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 14:16:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:16:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 20 14:16:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:16:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:16:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:16:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:16:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:16:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:16:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:16:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:16:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:16:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 20 14:16:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:16:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:16:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:16:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 20 14:16:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:16:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 20 14:16:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:16:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:16:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:16:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 20 14:16:11 compute-0 podman[234255]: 2026-01-20 14:16:11.040303062 +0000 UTC m=+0.064805092 container create ef82770ab0e17977da53eea2c34d4d3aa829a49ab6ffeeecfce91dd2369b89b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_sutherland, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 20 14:16:11 compute-0 systemd[1]: Started libpod-conmon-ef82770ab0e17977da53eea2c34d4d3aa829a49ab6ffeeecfce91dd2369b89b7.scope.
Jan 20 14:16:11 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:16:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc01d4e1785936b6b8316899c7cdb83447f02a931389f515e243e3c7a0b21885/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:16:11 compute-0 podman[234255]: 2026-01-20 14:16:11.017463752 +0000 UTC m=+0.041965872 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:16:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc01d4e1785936b6b8316899c7cdb83447f02a931389f515e243e3c7a0b21885/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:16:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc01d4e1785936b6b8316899c7cdb83447f02a931389f515e243e3c7a0b21885/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:16:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc01d4e1785936b6b8316899c7cdb83447f02a931389f515e243e3c7a0b21885/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:16:11 compute-0 podman[234255]: 2026-01-20 14:16:11.122877458 +0000 UTC m=+0.147379518 container init ef82770ab0e17977da53eea2c34d4d3aa829a49ab6ffeeecfce91dd2369b89b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_sutherland, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True)
Jan 20 14:16:11 compute-0 podman[234255]: 2026-01-20 14:16:11.138011202 +0000 UTC m=+0.162513272 container start ef82770ab0e17977da53eea2c34d4d3aa829a49ab6ffeeecfce91dd2369b89b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_sutherland, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:16:11 compute-0 podman[234255]: 2026-01-20 14:16:11.143352275 +0000 UTC m=+0.167854295 container attach ef82770ab0e17977da53eea2c34d4d3aa829a49ab6ffeeecfce91dd2369b89b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_sutherland, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 20 14:16:11 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:16:11 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:16:11 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:16:11.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:16:11 compute-0 sudo[234402]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-enulcbvhqdghdviaudjppiofoykjduoi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918571.2421045-1248-258375473609930/AnsiballZ_file.py'
Jan 20 14:16:11 compute-0 sudo[234402]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:16:11 compute-0 python3.9[234404]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:16:11 compute-0 sudo[234405]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:16:11 compute-0 sudo[234405]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:16:11 compute-0 sudo[234405]: pam_unix(sudo:session): session closed for user root
Jan 20 14:16:11 compute-0 sudo[234402]: pam_unix(sudo:session): session closed for user root
Jan 20 14:16:11 compute-0 sudo[234432]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:16:11 compute-0 distracted_sutherland[234272]: {
Jan 20 14:16:11 compute-0 distracted_sutherland[234272]:     "0": [
Jan 20 14:16:11 compute-0 sudo[234432]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:16:11 compute-0 distracted_sutherland[234272]:         {
Jan 20 14:16:11 compute-0 distracted_sutherland[234272]:             "devices": [
Jan 20 14:16:11 compute-0 distracted_sutherland[234272]:                 "/dev/loop3"
Jan 20 14:16:11 compute-0 distracted_sutherland[234272]:             ],
Jan 20 14:16:11 compute-0 distracted_sutherland[234272]:             "lv_name": "ceph_lv0",
Jan 20 14:16:11 compute-0 distracted_sutherland[234272]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:16:11 compute-0 distracted_sutherland[234272]:             "lv_size": "7511998464",
Jan 20 14:16:11 compute-0 distracted_sutherland[234272]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e399cf45-e6b6-5393-99f1-75c601d3f188,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afc3740a-ccee-46f8-b343-22648cc89376,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 20 14:16:11 compute-0 distracted_sutherland[234272]:             "lv_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 14:16:11 compute-0 distracted_sutherland[234272]:             "name": "ceph_lv0",
Jan 20 14:16:11 compute-0 distracted_sutherland[234272]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:16:11 compute-0 distracted_sutherland[234272]:             "tags": {
Jan 20 14:16:11 compute-0 distracted_sutherland[234272]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:16:11 compute-0 distracted_sutherland[234272]:                 "ceph.block_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 14:16:11 compute-0 distracted_sutherland[234272]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 14:16:11 compute-0 distracted_sutherland[234272]:                 "ceph.cluster_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 14:16:11 compute-0 distracted_sutherland[234272]:                 "ceph.cluster_name": "ceph",
Jan 20 14:16:11 compute-0 distracted_sutherland[234272]:                 "ceph.crush_device_class": "",
Jan 20 14:16:11 compute-0 distracted_sutherland[234272]:                 "ceph.encrypted": "0",
Jan 20 14:16:11 compute-0 distracted_sutherland[234272]:                 "ceph.osd_fsid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 14:16:11 compute-0 distracted_sutherland[234272]:                 "ceph.osd_id": "0",
Jan 20 14:16:11 compute-0 distracted_sutherland[234272]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 14:16:11 compute-0 distracted_sutherland[234272]:                 "ceph.type": "block",
Jan 20 14:16:11 compute-0 distracted_sutherland[234272]:                 "ceph.vdo": "0"
Jan 20 14:16:11 compute-0 distracted_sutherland[234272]:             },
Jan 20 14:16:11 compute-0 distracted_sutherland[234272]:             "type": "block",
Jan 20 14:16:11 compute-0 distracted_sutherland[234272]:             "vg_name": "ceph_vg0"
Jan 20 14:16:11 compute-0 distracted_sutherland[234272]:         }
Jan 20 14:16:11 compute-0 distracted_sutherland[234272]:     ]
Jan 20 14:16:11 compute-0 distracted_sutherland[234272]: }
Jan 20 14:16:11 compute-0 sudo[234432]: pam_unix(sudo:session): session closed for user root
Jan 20 14:16:11 compute-0 systemd[1]: libpod-ef82770ab0e17977da53eea2c34d4d3aa829a49ab6ffeeecfce91dd2369b89b7.scope: Deactivated successfully.
Jan 20 14:16:11 compute-0 podman[234255]: 2026-01-20 14:16:11.909702018 +0000 UTC m=+0.934204038 container died ef82770ab0e17977da53eea2c34d4d3aa829a49ab6ffeeecfce91dd2369b89b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_sutherland, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 20 14:16:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-dc01d4e1785936b6b8316899c7cdb83447f02a931389f515e243e3c7a0b21885-merged.mount: Deactivated successfully.
Jan 20 14:16:11 compute-0 podman[234255]: 2026-01-20 14:16:11.970821891 +0000 UTC m=+0.995323921 container remove ef82770ab0e17977da53eea2c34d4d3aa829a49ab6ffeeecfce91dd2369b89b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_sutherland, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:16:11 compute-0 systemd[1]: libpod-conmon-ef82770ab0e17977da53eea2c34d4d3aa829a49ab6ffeeecfce91dd2369b89b7.scope: Deactivated successfully.
Jan 20 14:16:12 compute-0 sudo[233978]: pam_unix(sudo:session): session closed for user root
Jan 20 14:16:12 compute-0 ceph-mon[74360]: pgmap v731: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:16:12 compute-0 podman[234482]: 2026-01-20 14:16:12.042282491 +0000 UTC m=+0.151431627 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 20 14:16:12 compute-0 sudo[234519]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:16:12 compute-0 sudo[234519]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:16:12 compute-0 sudo[234519]: pam_unix(sudo:session): session closed for user root
Jan 20 14:16:12 compute-0 sudo[234548]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:16:12 compute-0 sudo[234548]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:16:12 compute-0 sudo[234548]: pam_unix(sudo:session): session closed for user root
Jan 20 14:16:12 compute-0 sudo[234573]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:16:12 compute-0 sudo[234573]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:16:12 compute-0 sudo[234573]: pam_unix(sudo:session): session closed for user root
Jan 20 14:16:12 compute-0 sudo[234598]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- raw list --format json
Jan 20 14:16:12 compute-0 sudo[234598]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:16:12 compute-0 podman[234762]: 2026-01-20 14:16:12.630047723 +0000 UTC m=+0.039321991 container create d27b6e679bea72c5bf415fa5335106e639a169535ba5eec3e427b2929e777c1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_cohen, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 20 14:16:12 compute-0 sudo[234802]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lizrjdbpxuefrwinopayfxdkwezioyit ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918572.2944617-1281-61092766211860/AnsiballZ_systemd_service.py'
Jan 20 14:16:12 compute-0 sudo[234802]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:16:12 compute-0 systemd[1]: Started libpod-conmon-d27b6e679bea72c5bf415fa5335106e639a169535ba5eec3e427b2929e777c1c.scope.
Jan 20 14:16:12 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:16:12 compute-0 podman[234762]: 2026-01-20 14:16:12.614318333 +0000 UTC m=+0.023592621 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:16:12 compute-0 podman[234762]: 2026-01-20 14:16:12.730689992 +0000 UTC m=+0.139964280 container init d27b6e679bea72c5bf415fa5335106e639a169535ba5eec3e427b2929e777c1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_cohen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 20 14:16:12 compute-0 podman[234762]: 2026-01-20 14:16:12.742317912 +0000 UTC m=+0.151592180 container start d27b6e679bea72c5bf415fa5335106e639a169535ba5eec3e427b2929e777c1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_cohen, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:16:12 compute-0 podman[234762]: 2026-01-20 14:16:12.745674732 +0000 UTC m=+0.154949060 container attach d27b6e679bea72c5bf415fa5335106e639a169535ba5eec3e427b2929e777c1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_cohen, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:16:12 compute-0 zen_cohen[234807]: 167 167
Jan 20 14:16:12 compute-0 systemd[1]: libpod-d27b6e679bea72c5bf415fa5335106e639a169535ba5eec3e427b2929e777c1c.scope: Deactivated successfully.
Jan 20 14:16:12 compute-0 conmon[234807]: conmon d27b6e679bea72c5bf41 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d27b6e679bea72c5bf415fa5335106e639a169535ba5eec3e427b2929e777c1c.scope/container/memory.events
Jan 20 14:16:12 compute-0 podman[234762]: 2026-01-20 14:16:12.750704886 +0000 UTC m=+0.159979174 container died d27b6e679bea72c5bf415fa5335106e639a169535ba5eec3e427b2929e777c1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_cohen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 20 14:16:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-27a4b407e10e21309742007588a35f3bd7151cb85884f87c040cf0ef2e8b56f4-merged.mount: Deactivated successfully.
Jan 20 14:16:12 compute-0 podman[234762]: 2026-01-20 14:16:12.803517147 +0000 UTC m=+0.212791415 container remove d27b6e679bea72c5bf415fa5335106e639a169535ba5eec3e427b2929e777c1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_cohen, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:16:12 compute-0 systemd[1]: libpod-conmon-d27b6e679bea72c5bf415fa5335106e639a169535ba5eec3e427b2929e777c1c.scope: Deactivated successfully.
Jan 20 14:16:12 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:16:12 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:16:12 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:16:12.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:16:12 compute-0 python3.9[234804]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 20 14:16:12 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v732: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:16:12 compute-0 systemd[1]: Reloading.
Jan 20 14:16:13 compute-0 systemd-rc-local-generator[234869]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:16:13 compute-0 systemd-sysv-generator[234872]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:16:13 compute-0 podman[234832]: 2026-01-20 14:16:13.061608852 +0000 UTC m=+0.086151283 container create c464a87bae90eab90d06af7c29feb8cab0fce092d2ef664060be466b3677ef29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_dewdney, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 20 14:16:13 compute-0 podman[234832]: 2026-01-20 14:16:13.020616577 +0000 UTC m=+0.045159028 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:16:13 compute-0 sudo[234802]: pam_unix(sudo:session): session closed for user root
Jan 20 14:16:13 compute-0 systemd[1]: Started libpod-conmon-c464a87bae90eab90d06af7c29feb8cab0fce092d2ef664060be466b3677ef29.scope.
Jan 20 14:16:13 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:16:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1abe1fa596036be208ac06f50a4acb0c96e885ce3a2b39e09e752a6eff6db064/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:16:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1abe1fa596036be208ac06f50a4acb0c96e885ce3a2b39e09e752a6eff6db064/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:16:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1abe1fa596036be208ac06f50a4acb0c96e885ce3a2b39e09e752a6eff6db064/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:16:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1abe1fa596036be208ac06f50a4acb0c96e885ce3a2b39e09e752a6eff6db064/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:16:13 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:16:13 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:16:13 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:16:13.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:16:14 compute-0 podman[234832]: 2026-01-20 14:16:14.159622936 +0000 UTC m=+1.184165377 container init c464a87bae90eab90d06af7c29feb8cab0fce092d2ef664060be466b3677ef29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_dewdney, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:16:14 compute-0 podman[234832]: 2026-01-20 14:16:14.168956055 +0000 UTC m=+1.193498486 container start c464a87bae90eab90d06af7c29feb8cab0fce092d2ef664060be466b3677ef29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_dewdney, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 20 14:16:14 compute-0 python3.9[235035]: ansible-ansible.builtin.service_facts Invoked
Jan 20 14:16:14 compute-0 network[235054]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 20 14:16:14 compute-0 network[235055]: 'network-scripts' will be removed from distribution in near future.
Jan 20 14:16:14 compute-0 network[235056]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 20 14:16:14 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:16:14 compute-0 podman[234832]: 2026-01-20 14:16:14.692756739 +0000 UTC m=+1.717299250 container attach c464a87bae90eab90d06af7c29feb8cab0fce092d2ef664060be466b3677ef29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_dewdney, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 20 14:16:14 compute-0 ceph-mon[74360]: pgmap v732: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:16:14 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:16:14 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:16:14 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:16:14.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:16:14 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v733: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:16:14 compute-0 romantic_dewdney[234883]: {
Jan 20 14:16:14 compute-0 romantic_dewdney[234883]:     "afc3740a-ccee-46f8-b343-22648cc89376": {
Jan 20 14:16:14 compute-0 romantic_dewdney[234883]:         "ceph_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 14:16:14 compute-0 romantic_dewdney[234883]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 20 14:16:14 compute-0 romantic_dewdney[234883]:         "osd_id": 0,
Jan 20 14:16:14 compute-0 romantic_dewdney[234883]:         "osd_uuid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 14:16:14 compute-0 romantic_dewdney[234883]:         "type": "bluestore"
Jan 20 14:16:14 compute-0 romantic_dewdney[234883]:     }
Jan 20 14:16:14 compute-0 romantic_dewdney[234883]: }
Jan 20 14:16:15 compute-0 podman[234832]: 2026-01-20 14:16:15.009664025 +0000 UTC m=+2.034206456 container died c464a87bae90eab90d06af7c29feb8cab0fce092d2ef664060be466b3677ef29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_dewdney, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 20 14:16:15 compute-0 systemd[1]: libpod-c464a87bae90eab90d06af7c29feb8cab0fce092d2ef664060be466b3677ef29.scope: Deactivated successfully.
Jan 20 14:16:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-1abe1fa596036be208ac06f50a4acb0c96e885ce3a2b39e09e752a6eff6db064-merged.mount: Deactivated successfully.
Jan 20 14:16:15 compute-0 podman[234832]: 2026-01-20 14:16:15.165908989 +0000 UTC m=+2.190451420 container remove c464a87bae90eab90d06af7c29feb8cab0fce092d2ef664060be466b3677ef29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_dewdney, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 20 14:16:15 compute-0 systemd[1]: libpod-conmon-c464a87bae90eab90d06af7c29feb8cab0fce092d2ef664060be466b3677ef29.scope: Deactivated successfully.
Jan 20 14:16:15 compute-0 sudo[234598]: pam_unix(sudo:session): session closed for user root
Jan 20 14:16:15 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 14:16:15 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:16:15 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 14:16:15 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:16:15 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 273f188e-1a6a-4c0f-ae7a-00156df49dac does not exist
Jan 20 14:16:15 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 4d63a2aa-83d1-441d-a704-9dd75cd1f55e does not exist
Jan 20 14:16:15 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 29861c5e-2213-4b77-9486-80c94273a75c does not exist
Jan 20 14:16:15 compute-0 sudo[235105]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:16:15 compute-0 sudo[235105]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:16:15 compute-0 sudo[235105]: pam_unix(sudo:session): session closed for user root
Jan 20 14:16:15 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:16:15 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:16:15 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:16:15.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:16:15 compute-0 sudo[235134]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 14:16:15 compute-0 sudo[235134]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:16:15 compute-0 sudo[235134]: pam_unix(sudo:session): session closed for user root
Jan 20 14:16:15 compute-0 ceph-mon[74360]: pgmap v733: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:16:15 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:16:15 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:16:16 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:16:16 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:16:16 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:16:16.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:16:16 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v734: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:16:17 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:16:17 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:16:17 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:16:17.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:16:17 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Jan 20 14:16:17 compute-0 systemd[1]: virtqemud.service: Deactivated successfully.
Jan 20 14:16:18 compute-0 ceph-mon[74360]: pgmap v734: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:16:18 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-crash-compute-0[81580]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
Jan 20 14:16:18 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:16:18 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:16:18 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:16:18.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:16:18 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v735: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:16:19 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:16:19 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:16:19 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:16:19.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:16:19 compute-0 ceph-mon[74360]: pgmap v735: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:16:19 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:16:20 compute-0 sudo[235412]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-twkkwlxchwxsbpofmietbfowizegmuhg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918580.2853677-1338-212433838322020/AnsiballZ_systemd_service.py'
Jan 20 14:16:20 compute-0 sudo[235412]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:16:20 compute-0 python3.9[235414]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 14:16:20 compute-0 sudo[235412]: pam_unix(sudo:session): session closed for user root
Jan 20 14:16:20 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:16:20 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:16:20 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:16:20.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:16:20 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v736: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:16:21 compute-0 sudo[235566]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gawzbqtudxlloobdlrvqcxaifarlkeah ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918581.0352514-1338-139256183506744/AnsiballZ_systemd_service.py'
Jan 20 14:16:21 compute-0 sudo[235566]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:16:21 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:16:21 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:16:21 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:16:21.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:16:21 compute-0 python3.9[235568]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 14:16:21 compute-0 sudo[235566]: pam_unix(sudo:session): session closed for user root
Jan 20 14:16:22 compute-0 ceph-mon[74360]: pgmap v736: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:16:22 compute-0 sudo[235719]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pkzfjkrttacffxaowrwvwobyxuloftnw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918581.8022084-1338-115207754679590/AnsiballZ_systemd_service.py'
Jan 20 14:16:22 compute-0 sudo[235719]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:16:22 compute-0 python3.9[235721]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 14:16:22 compute-0 sudo[235719]: pam_unix(sudo:session): session closed for user root
Jan 20 14:16:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:16:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:16:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:16:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:16:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:16:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:16:22 compute-0 sudo[235873]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zdkgmlgwwycslyyyxzurhkxnfitfqcbw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918582.5359542-1338-39471163199120/AnsiballZ_systemd_service.py'
Jan 20 14:16:22 compute-0 sudo[235873]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:16:22 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:16:22 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:16:22 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:16:22.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:16:22 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v737: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:16:23 compute-0 python3.9[235875]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 14:16:23 compute-0 sudo[235873]: pam_unix(sudo:session): session closed for user root
Jan 20 14:16:23 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:16:23 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:16:23 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:16:23.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:16:23 compute-0 sudo[236026]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dqnwuqrqzakzesrmjwuyuukmyrydadpa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918583.3366845-1338-67136943914388/AnsiballZ_systemd_service.py'
Jan 20 14:16:23 compute-0 sudo[236026]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:16:23 compute-0 python3.9[236028]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 14:16:23 compute-0 sudo[236026]: pam_unix(sudo:session): session closed for user root
Jan 20 14:16:24 compute-0 ceph-mon[74360]: pgmap v737: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:16:24 compute-0 sudo[236179]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-juwwqcqdlazunclkhhkpkadwpzxtikfw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918584.1103227-1338-230477940280863/AnsiballZ_systemd_service.py'
Jan 20 14:16:24 compute-0 sudo[236179]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:16:24 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:16:24 compute-0 python3.9[236181]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 14:16:24 compute-0 sudo[236179]: pam_unix(sudo:session): session closed for user root
Jan 20 14:16:24 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:16:24 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:16:24 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:16:24.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:16:24 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v738: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:16:25 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:16:25 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:16:25 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:16:25.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:16:25 compute-0 sudo[236333]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-adblwkifscthlgyhqbxplotfefydufwr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918584.8851738-1338-28365712611531/AnsiballZ_systemd_service.py'
Jan 20 14:16:25 compute-0 sudo[236333]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:16:26 compute-0 python3.9[236335]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 14:16:26 compute-0 sudo[236333]: pam_unix(sudo:session): session closed for user root
Jan 20 14:16:26 compute-0 ceph-mon[74360]: pgmap v738: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:16:26 compute-0 sudo[236486]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-exrynlksluufhaktjeybcfikjpzuvbpo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918586.4088025-1338-163137433771536/AnsiballZ_systemd_service.py'
Jan 20 14:16:26 compute-0 sudo[236486]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:16:26 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:16:26 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:16:26 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:16:26.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:16:26 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v739: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:16:26 compute-0 python3.9[236489]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 14:16:27 compute-0 sudo[236486]: pam_unix(sudo:session): session closed for user root
Jan 20 14:16:27 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:16:27 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:16:27 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:16:27.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:16:28 compute-0 ceph-mon[74360]: pgmap v739: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:16:28 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:16:28 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:16:28 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:16:28.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:16:28 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v740: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:16:29 compute-0 sudo[236641]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vrzunzvijrvmgnmoihujgqtzrdbbnlib ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918589.1249273-1515-179329201141483/AnsiballZ_file.py'
Jan 20 14:16:29 compute-0 sudo[236641]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:16:29 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:16:29 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:16:29 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:16:29.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:16:29 compute-0 python3.9[236643]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:16:29 compute-0 sudo[236641]: pam_unix(sudo:session): session closed for user root
Jan 20 14:16:29 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:16:30 compute-0 sudo[236793]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jutbtqampncxwpdbldtjhlgvmmkmrcld ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918589.7377598-1515-100642775628803/AnsiballZ_file.py'
Jan 20 14:16:30 compute-0 sudo[236793]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:16:30 compute-0 ceph-mon[74360]: pgmap v740: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:16:30 compute-0 python3.9[236795]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:16:30 compute-0 sudo[236793]: pam_unix(sudo:session): session closed for user root
Jan 20 14:16:30 compute-0 sudo[236945]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-knymqnyznzdhlwvfeithhihhjncpsmax ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918590.3750124-1515-107945483317119/AnsiballZ_file.py'
Jan 20 14:16:30 compute-0 sudo[236945]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:16:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:16:30.729 160071 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:16:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:16:30.730 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:16:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:16:30.730 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:16:30 compute-0 python3.9[236947]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:16:30 compute-0 sudo[236945]: pam_unix(sudo:session): session closed for user root
Jan 20 14:16:30 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:16:30 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:16:30 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:16:30.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:16:30 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v741: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:16:31 compute-0 sudo[237098]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uueyxxidykpsddxscghknrqseliqffob ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918590.9522672-1515-219529532556231/AnsiballZ_file.py'
Jan 20 14:16:31 compute-0 sudo[237098]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:16:31 compute-0 python3.9[237100]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:16:31 compute-0 sudo[237098]: pam_unix(sudo:session): session closed for user root
Jan 20 14:16:31 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:16:31 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:16:31 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:16:31.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:16:31 compute-0 sudo[237250]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ddvrwgiayctrnzibhasvbwqfiykykmlf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918591.5923488-1515-130544906413821/AnsiballZ_file.py'
Jan 20 14:16:31 compute-0 sudo[237250]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:16:31 compute-0 sudo[237251]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:16:31 compute-0 sudo[237251]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:16:31 compute-0 sudo[237251]: pam_unix(sudo:session): session closed for user root
Jan 20 14:16:32 compute-0 sudo[237278]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:16:32 compute-0 sudo[237278]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:16:32 compute-0 sudo[237278]: pam_unix(sudo:session): session closed for user root
Jan 20 14:16:32 compute-0 python3.9[237259]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:16:32 compute-0 ceph-mon[74360]: pgmap v741: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:16:32 compute-0 sudo[237250]: pam_unix(sudo:session): session closed for user root
Jan 20 14:16:32 compute-0 sudo[237452]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pmcfnyhyopniqjkzghxvunnjfkflnrex ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918592.2747297-1515-246660511555714/AnsiballZ_file.py'
Jan 20 14:16:32 compute-0 sudo[237452]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:16:32 compute-0 python3.9[237454]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:16:32 compute-0 sudo[237452]: pam_unix(sudo:session): session closed for user root
Jan 20 14:16:32 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:16:32 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:16:32 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:16:32.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:16:32 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v742: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:16:33 compute-0 sudo[237605]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bdwaplrshvmjjwnsrvdsljsthvvjwuub ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918592.9652834-1515-25880247127847/AnsiballZ_file.py'
Jan 20 14:16:33 compute-0 sudo[237605]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:16:33 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:16:33 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:16:33 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:16:33.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:16:33 compute-0 python3.9[237607]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:16:33 compute-0 sudo[237605]: pam_unix(sudo:session): session closed for user root
Jan 20 14:16:34 compute-0 sudo[237757]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ghjqffebvyjkgegiencukqekxirjprvo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918593.7498848-1515-104221463781219/AnsiballZ_file.py'
Jan 20 14:16:34 compute-0 sudo[237757]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:16:34 compute-0 python3.9[237759]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:16:34 compute-0 sudo[237757]: pam_unix(sudo:session): session closed for user root
Jan 20 14:16:34 compute-0 ceph-mon[74360]: pgmap v742: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:16:34 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:16:34 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:16:34 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:16:34 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:16:34.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:16:34 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v743: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:16:35 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:16:35 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:16:35 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:16:35.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:16:35 compute-0 sudo[237921]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nuifwgjxnpftrymavtxatyfsysvbizwl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918595.535449-1686-3862285646376/AnsiballZ_file.py'
Jan 20 14:16:35 compute-0 sudo[237921]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:16:35 compute-0 podman[237884]: 2026-01-20 14:16:35.88549465 +0000 UTC m=+0.086684197 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS)
Jan 20 14:16:36 compute-0 python3.9[237925]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:16:36 compute-0 sudo[237921]: pam_unix(sudo:session): session closed for user root
Jan 20 14:16:36 compute-0 ceph-mon[74360]: pgmap v743: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:16:36 compute-0 sudo[238079]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nywhvrzbydxpzcqxapqqddzxljuidzmw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918596.1878977-1686-25607931512697/AnsiballZ_file.py'
Jan 20 14:16:36 compute-0 sudo[238079]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:16:36 compute-0 python3.9[238081]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:16:36 compute-0 sudo[238079]: pam_unix(sudo:session): session closed for user root
Jan 20 14:16:36 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:16:36 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:16:36 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:16:36.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:16:36 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v744: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:16:37 compute-0 sudo[238232]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oipntynydspufhusklhkxoslgsmjngdr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918596.8542974-1686-80309112757647/AnsiballZ_file.py'
Jan 20 14:16:37 compute-0 sudo[238232]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:16:37 compute-0 python3.9[238234]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:16:37 compute-0 sudo[238232]: pam_unix(sudo:session): session closed for user root
Jan 20 14:16:37 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:16:37 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:16:37 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:16:37.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:16:37 compute-0 sudo[238384]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wtvvebwcumkecwoeoegiabyunwswbddk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918597.4692316-1686-101990848827715/AnsiballZ_file.py'
Jan 20 14:16:37 compute-0 sudo[238384]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:16:37 compute-0 python3.9[238386]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:16:37 compute-0 sudo[238384]: pam_unix(sudo:session): session closed for user root
Jan 20 14:16:38 compute-0 ceph-mon[74360]: pgmap v744: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:16:38 compute-0 sudo[238536]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rkvfvvakqzemqxvjvtjuvpohjzvhkyks ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918598.096434-1686-182371235450931/AnsiballZ_file.py'
Jan 20 14:16:38 compute-0 sudo[238536]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:16:38 compute-0 python3.9[238538]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:16:38 compute-0 sudo[238536]: pam_unix(sudo:session): session closed for user root
Jan 20 14:16:38 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:16:38 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:16:38 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:16:38.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:16:38 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v745: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:16:39 compute-0 sudo[238689]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xhnxjhuhitjuxhzkmarspapiwnxnqlll ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918598.774043-1686-141428602472937/AnsiballZ_file.py'
Jan 20 14:16:39 compute-0 sudo[238689]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:16:39 compute-0 python3.9[238691]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:16:39 compute-0 sudo[238689]: pam_unix(sudo:session): session closed for user root
Jan 20 14:16:39 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:16:39 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:16:39 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:16:39.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:16:39 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:16:39 compute-0 sudo[238841]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kerfppzgdvmbomsogbgygzdckraobzcn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918599.484497-1686-190611001706312/AnsiballZ_file.py'
Jan 20 14:16:39 compute-0 sudo[238841]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:16:40 compute-0 python3.9[238843]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:16:40 compute-0 sudo[238841]: pam_unix(sudo:session): session closed for user root
Jan 20 14:16:40 compute-0 ceph-mon[74360]: pgmap v745: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:16:40 compute-0 sudo[238993]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-shpvnzsjltbpcsrbkpoctfafdaodetsv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918600.1861095-1686-70487026915108/AnsiballZ_file.py'
Jan 20 14:16:40 compute-0 sudo[238993]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:16:40 compute-0 python3.9[238995]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:16:40 compute-0 sudo[238993]: pam_unix(sudo:session): session closed for user root
Jan 20 14:16:40 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:16:40 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:16:40 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:16:40.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:16:40 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v746: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:16:41 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:16:41 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:16:41 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:16:41.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:16:41 compute-0 ceph-mon[74360]: pgmap v746: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:16:42 compute-0 sudo[239148]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-giyeicjzjpfcmnxtmxpzfmnwviedlrmr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918601.7820678-1860-179487587075400/AnsiballZ_command.py'
Jan 20 14:16:42 compute-0 sudo[239148]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:16:42 compute-0 sshd-session[239021]: Connection closed by authenticating user root 157.245.78.139 port 50346 [preauth]
Jan 20 14:16:42 compute-0 podman[239150]: 2026-01-20 14:16:42.3061219 +0000 UTC m=+0.159752218 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 20 14:16:42 compute-0 python3.9[239151]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:16:42 compute-0 sudo[239148]: pam_unix(sudo:session): session closed for user root
Jan 20 14:16:42 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:16:42 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:16:42 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:16:42.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:16:42 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v747: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:16:43 compute-0 python3.9[239330]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 20 14:16:43 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:16:43 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:16:43 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:16:43.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:16:44 compute-0 ceph-mon[74360]: pgmap v747: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:16:44 compute-0 sudo[239480]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iqfgfyrgcomlfwcjhiueimzvngwrbsff ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918603.8240287-1914-232386463534271/AnsiballZ_systemd_service.py'
Jan 20 14:16:44 compute-0 sudo[239480]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:16:44 compute-0 python3.9[239482]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 20 14:16:44 compute-0 systemd[1]: Reloading.
Jan 20 14:16:44 compute-0 systemd-rc-local-generator[239507]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:16:44 compute-0 systemd-sysv-generator[239511]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:16:44 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:16:44 compute-0 sudo[239480]: pam_unix(sudo:session): session closed for user root
Jan 20 14:16:44 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:16:44 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:16:44 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:16:44.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:16:44 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v748: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:16:45 compute-0 sudo[239668]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sqxpddxnoplfawzsffajqyihxqcogutg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918605.0268192-1938-204247892058398/AnsiballZ_command.py'
Jan 20 14:16:45 compute-0 sudo[239668]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:16:45 compute-0 python3.9[239670]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:16:45 compute-0 sudo[239668]: pam_unix(sudo:session): session closed for user root
Jan 20 14:16:45 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:16:45 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:16:45 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:16:45.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:16:45 compute-0 sudo[239821]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sqtzovwektuwuxoqhrvlzowetismlajj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918605.6757617-1938-44908134728419/AnsiballZ_command.py'
Jan 20 14:16:45 compute-0 sudo[239821]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:16:46 compute-0 ceph-mon[74360]: pgmap v748: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:16:46 compute-0 python3.9[239823]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:16:46 compute-0 sudo[239821]: pam_unix(sudo:session): session closed for user root
Jan 20 14:16:46 compute-0 sudo[239974]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bycxpqvqlufcrclofvmoepsitrxhmydi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918606.414845-1938-62671876342420/AnsiballZ_command.py'
Jan 20 14:16:46 compute-0 sudo[239974]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:16:46 compute-0 python3.9[239977]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:16:46 compute-0 sudo[239974]: pam_unix(sudo:session): session closed for user root
Jan 20 14:16:46 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:16:46 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:16:46 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:16:46.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:16:46 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v749: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:16:47 compute-0 sudo[240128]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yxkglodxexmndbmmjpjqnkyvnglajyje ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918607.0534205-1938-21913194216252/AnsiballZ_command.py'
Jan 20 14:16:47 compute-0 sudo[240128]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:16:47 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:16:47 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:16:47 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:16:47.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:16:47 compute-0 python3.9[240130]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:16:47 compute-0 sudo[240128]: pam_unix(sudo:session): session closed for user root
Jan 20 14:16:48 compute-0 sudo[240281]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bxzhmduwssdekapybvbuuapakoclvymk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918607.7857025-1938-197298439162719/AnsiballZ_command.py'
Jan 20 14:16:48 compute-0 sudo[240281]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:16:48 compute-0 ceph-mon[74360]: pgmap v749: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:16:48 compute-0 python3.9[240283]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:16:48 compute-0 sudo[240281]: pam_unix(sudo:session): session closed for user root
Jan 20 14:16:48 compute-0 sudo[240434]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fyinyjmhmgutlfnmkiyjdxkrahpkmcqy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918608.4253812-1938-268238337491900/AnsiballZ_command.py'
Jan 20 14:16:48 compute-0 sudo[240434]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:16:48 compute-0 python3.9[240436]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:16:48 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:16:48 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:16:48 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:16:48.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:16:48 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v750: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:16:49 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:16:49 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:16:49 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:16:49.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:16:49 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:16:49 compute-0 sudo[240434]: pam_unix(sudo:session): session closed for user root
Jan 20 14:16:50 compute-0 ceph-mon[74360]: pgmap v750: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:16:50 compute-0 sudo[240588]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-slqdevzelwvgnyeojdahdtlgfffietwk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918610.0477052-1938-113397974328067/AnsiballZ_command.py'
Jan 20 14:16:50 compute-0 sudo[240588]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:16:50 compute-0 python3.9[240590]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:16:50 compute-0 sudo[240588]: pam_unix(sudo:session): session closed for user root
Jan 20 14:16:50 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:16:50 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:16:50 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:16:50.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:16:50 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v751: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:16:50 compute-0 sudo[240742]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ctefpmxsffshhsqaixghevidjthxziqh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918610.7147112-1938-83539088730577/AnsiballZ_command.py'
Jan 20 14:16:50 compute-0 sudo[240742]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:16:51 compute-0 python3.9[240744]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:16:51 compute-0 sudo[240742]: pam_unix(sudo:session): session closed for user root
Jan 20 14:16:51 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:16:51 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:16:51 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:16:51.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:16:52 compute-0 sudo[240770]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:16:52 compute-0 sudo[240770]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:16:52 compute-0 sudo[240770]: pam_unix(sudo:session): session closed for user root
Jan 20 14:16:52 compute-0 sudo[240795]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:16:52 compute-0 sudo[240795]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:16:52 compute-0 sudo[240795]: pam_unix(sudo:session): session closed for user root
Jan 20 14:16:52 compute-0 ceph-mon[74360]: pgmap v751: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:16:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Optimize plan auto_2026-01-20_14:16:52
Jan 20 14:16:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 14:16:52 compute-0 ceph-mgr[74653]: [balancer INFO root] do_upmap
Jan 20 14:16:52 compute-0 ceph-mgr[74653]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'images', '.mgr', 'backups', 'default.rgw.log', 'default.rgw.meta', 'vms', 'default.rgw.control', '.rgw.root', 'cephfs.cephfs.data', 'volumes']
Jan 20 14:16:52 compute-0 ceph-mgr[74653]: [balancer INFO root] prepared 0/10 changes
Jan 20 14:16:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:16:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:16:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:16:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:16:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:16:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:16:52 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v752: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:16:52 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:16:52 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:16:52 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:16:52.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:16:53 compute-0 sudo[240946]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kbayjbzlbqesiygcbeunffynaufklehc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918612.5039952-2145-105414275669344/AnsiballZ_file.py'
Jan 20 14:16:53 compute-0 sudo[240946]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:16:53 compute-0 python3.9[240948]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:16:53 compute-0 sudo[240946]: pam_unix(sudo:session): session closed for user root
Jan 20 14:16:53 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:16:53 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:16:53 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:16:53.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:16:53 compute-0 sudo[241098]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hiavmzebxxnphruvohyjqwkmxlcyvnts ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918613.4307594-2145-23382370205878/AnsiballZ_file.py'
Jan 20 14:16:53 compute-0 sudo[241098]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:16:54 compute-0 python3.9[241100]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:16:54 compute-0 sudo[241098]: pam_unix(sudo:session): session closed for user root
Jan 20 14:16:54 compute-0 ceph-mon[74360]: pgmap v752: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:16:54 compute-0 sudo[241250]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-usxlrcttgjojqbiwijcylcxzmzmgfirx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918614.2332592-2145-189293393108555/AnsiballZ_file.py'
Jan 20 14:16:54 compute-0 sudo[241250]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:16:54 compute-0 python3.9[241252]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:16:54 compute-0 sudo[241250]: pam_unix(sudo:session): session closed for user root
Jan 20 14:16:54 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:16:54 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v753: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:16:54 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:16:54 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:16:54 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:16:54.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:16:55 compute-0 sudo[241403]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-owrjlmcdzvqxhtzwtekjranmleufnzxn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918615.0695682-2211-14419320176430/AnsiballZ_file.py'
Jan 20 14:16:55 compute-0 sudo[241403]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:16:55 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:16:55 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:16:55 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:16:55.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:16:55 compute-0 python3.9[241405]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:16:55 compute-0 sudo[241403]: pam_unix(sudo:session): session closed for user root
Jan 20 14:16:56 compute-0 sudo[241555]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ebkqqyouxhmvzzkqgywomibfzcnxakct ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918615.7667863-2211-33469158486124/AnsiballZ_file.py'
Jan 20 14:16:56 compute-0 sudo[241555]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:16:56 compute-0 python3.9[241557]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:16:56 compute-0 ceph-mon[74360]: pgmap v753: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:16:56 compute-0 sudo[241555]: pam_unix(sudo:session): session closed for user root
Jan 20 14:16:56 compute-0 sudo[241708]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fvkspdoohcmryhruzevsjhegycexkenz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918616.5017834-2211-206608046614671/AnsiballZ_file.py'
Jan 20 14:16:56 compute-0 sudo[241708]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:16:56 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v754: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:16:56 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:16:56 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:16:56 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:16:56.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:16:57 compute-0 python3.9[241710]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:16:57 compute-0 sudo[241708]: pam_unix(sudo:session): session closed for user root
Jan 20 14:16:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 14:16:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:16:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 14:16:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:16:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:16:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:16:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:16:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:16:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:16:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:16:57 compute-0 sudo[241860]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cugklwkpjoorpidaljnbfymqwkgsbxhl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918617.2024007-2211-231423469707241/AnsiballZ_file.py'
Jan 20 14:16:57 compute-0 sudo[241860]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:16:57 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:16:57 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:16:57 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:16:57.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:16:57 compute-0 python3.9[241862]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:16:57 compute-0 sudo[241860]: pam_unix(sudo:session): session closed for user root
Jan 20 14:16:58 compute-0 sudo[242012]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-navzegyrrrubbydrilobdhmmxwcogiku ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918617.954089-2211-127062623450623/AnsiballZ_file.py'
Jan 20 14:16:58 compute-0 sudo[242012]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:16:58 compute-0 python3.9[242014]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:16:58 compute-0 sudo[242012]: pam_unix(sudo:session): session closed for user root
Jan 20 14:16:58 compute-0 sudo[242165]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vodeetqbyaepkztbfxlmbvqxmrlfwzpu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918618.6383655-2211-272653477523658/AnsiballZ_file.py'
Jan 20 14:16:58 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v755: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:16:58 compute-0 sudo[242165]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:16:58 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:16:58 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:16:58 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:16:58.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:16:59 compute-0 python3.9[242167]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:16:59 compute-0 sudo[242165]: pam_unix(sudo:session): session closed for user root
Jan 20 14:16:59 compute-0 sudo[242317]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mostzsvypvqihdvrraoxkxphycotdewr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918619.2638946-2211-76164963517988/AnsiballZ_file.py'
Jan 20 14:16:59 compute-0 sudo[242317]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:16:59 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:16:59 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:16:59 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:16:59.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:16:59 compute-0 python3.9[242319]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:16:59 compute-0 sudo[242317]: pam_unix(sudo:session): session closed for user root
Jan 20 14:17:00 compute-0 ceph-mon[74360]: pgmap v754: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:17:00 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:17:00 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v756: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:17:00 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:17:00 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:17:00 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:17:00.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:17:01 compute-0 ceph-mon[74360]: pgmap v755: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:17:01 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:17:01 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:17:01 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:17:01.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:17:02 compute-0 ceph-mon[74360]: pgmap v756: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:17:02 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v757: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:17:02 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:17:02 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:17:02 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:17:02.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:17:03 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:17:03 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:17:03 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:17:03.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:17:04 compute-0 ceph-mon[74360]: pgmap v757: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:17:04 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v758: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:17:04 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:17:04 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:17:04 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:17:04.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:17:05 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:17:05 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:17:05 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:17:05 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:17:05.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:17:05 compute-0 sudo[242472]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ryjkvtmwrhbznsiavvkzbwnkixhwobgg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918625.0876997-2536-62031286756136/AnsiballZ_getent.py'
Jan 20 14:17:05 compute-0 sudo[242472]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:17:05 compute-0 python3.9[242474]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Jan 20 14:17:05 compute-0 sudo[242472]: pam_unix(sudo:session): session closed for user root
Jan 20 14:17:06 compute-0 podman[242476]: 2026-01-20 14:17:06.01036046 +0000 UTC m=+0.070335911 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 20 14:17:06 compute-0 ceph-mon[74360]: pgmap v758: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:17:06 compute-0 sudo[242643]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zshcyivrmslixvmcvekvrfkzdkpuszjg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918626.1437-2560-195362725514321/AnsiballZ_group.py'
Jan 20 14:17:06 compute-0 sudo[242643]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:17:06 compute-0 python3.9[242646]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 20 14:17:06 compute-0 groupadd[242647]: group added to /etc/group: name=nova, GID=42436
Jan 20 14:17:06 compute-0 groupadd[242647]: group added to /etc/gshadow: name=nova
Jan 20 14:17:06 compute-0 groupadd[242647]: new group: name=nova, GID=42436
Jan 20 14:17:06 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v759: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:17:06 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:17:06 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:17:06 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:17:06.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:17:06 compute-0 sudo[242643]: pam_unix(sudo:session): session closed for user root
Jan 20 14:17:07 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:17:07 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:17:07 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:17:07.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:17:07 compute-0 sudo[242802]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sjwsqbnzriwawozlnxnrgslfjoqfekkt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918627.256657-2584-60461249253339/AnsiballZ_user.py'
Jan 20 14:17:07 compute-0 sudo[242802]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:17:08 compute-0 python3.9[242804]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 20 14:17:08 compute-0 useradd[242806]: new user: name=nova, UID=42436, GID=42436, home=/home/nova, shell=/bin/sh, from=/dev/pts/0
Jan 20 14:17:08 compute-0 useradd[242806]: add 'nova' to group 'libvirt'
Jan 20 14:17:08 compute-0 useradd[242806]: add 'nova' to shadow group 'libvirt'
Jan 20 14:17:08 compute-0 ceph-mon[74360]: pgmap v759: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:17:08 compute-0 sudo[242802]: pam_unix(sudo:session): session closed for user root
Jan 20 14:17:08 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v760: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:17:08 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:17:08 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:17:08 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:17:08.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:17:09 compute-0 sshd-session[242838]: Accepted publickey for zuul from 192.168.122.30 port 53310 ssh2: ECDSA SHA256:Yw0kyD5N4lqNgr1J3b5cYIIxKFrTRY8zW6kk+n6imz4
Jan 20 14:17:09 compute-0 systemd-logind[796]: New session 52 of user zuul.
Jan 20 14:17:09 compute-0 systemd[1]: Started Session 52 of User zuul.
Jan 20 14:17:09 compute-0 sshd-session[242838]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 20 14:17:09 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:17:09 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:17:09 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:17:09.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:17:09 compute-0 sshd-session[242841]: Received disconnect from 192.168.122.30 port 53310:11: disconnected by user
Jan 20 14:17:09 compute-0 sshd-session[242841]: Disconnected from user zuul 192.168.122.30 port 53310
Jan 20 14:17:09 compute-0 sshd-session[242838]: pam_unix(sshd:session): session closed for user zuul
Jan 20 14:17:09 compute-0 systemd[1]: session-52.scope: Deactivated successfully.
Jan 20 14:17:09 compute-0 systemd-logind[796]: Session 52 logged out. Waiting for processes to exit.
Jan 20 14:17:09 compute-0 systemd-logind[796]: Removed session 52.
Jan 20 14:17:10 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:17:10 compute-0 python3.9[242991]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:17:10 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v761: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:17:10 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:17:10 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:17:10 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:17:10.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:17:11 compute-0 ceph-mon[74360]: pgmap v760: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:17:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 14:17:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:17:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 20 14:17:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:17:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:17:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:17:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:17:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:17:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:17:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:17:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:17:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:17:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 20 14:17:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:17:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:17:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:17:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 20 14:17:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:17:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 20 14:17:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:17:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:17:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:17:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 20 14:17:11 compute-0 python3.9[243113]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1768918630.1029549-2659-251420187133324/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:17:11 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:17:11 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:17:11 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:17:11.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:17:11 compute-0 python3.9[243263]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:17:12 compute-0 sudo[243340]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:17:12 compute-0 sudo[243340]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:17:12 compute-0 sudo[243340]: pam_unix(sudo:session): session closed for user root
Jan 20 14:17:12 compute-0 ceph-mon[74360]: pgmap v761: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:17:12 compute-0 python3.9[243339]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:17:12 compute-0 sudo[243373]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:17:12 compute-0 sudo[243373]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:17:12 compute-0 sudo[243373]: pam_unix(sudo:session): session closed for user root
Jan 20 14:17:12 compute-0 podman[243364]: 2026-01-20 14:17:12.47976944 +0000 UTC m=+0.090208837 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 20 14:17:12 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v762: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:17:12 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:17:12 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:17:12 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:17:12.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:17:13 compute-0 python3.9[243566]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:17:13 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:17:13 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:17:13 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:17:13.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:17:13 compute-0 python3.9[243687]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1768918632.6024573-2659-187556329530338/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:17:14 compute-0 python3.9[243837]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:17:14 compute-0 ceph-mon[74360]: pgmap v762: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:17:14 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v763: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:17:14 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:17:14 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:17:14 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:17:14.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:17:15 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:17:15 compute-0 python3.9[243959]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1768918633.8030605-2659-267184759300550/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=1feba546d0beacad9258164ab79b8a747685ccc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:17:15 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:17:15 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:17:15 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:17:15.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:17:15 compute-0 python3.9[244109]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:17:15 compute-0 sudo[244156]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:17:15 compute-0 sudo[244156]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:17:15 compute-0 sudo[244156]: pam_unix(sudo:session): session closed for user root
Jan 20 14:17:16 compute-0 sudo[244201]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:17:16 compute-0 sudo[244201]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:17:16 compute-0 sudo[244201]: pam_unix(sudo:session): session closed for user root
Jan 20 14:17:16 compute-0 sudo[244236]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:17:16 compute-0 sudo[244236]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:17:16 compute-0 sudo[244236]: pam_unix(sudo:session): session closed for user root
Jan 20 14:17:16 compute-0 sudo[244295]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Jan 20 14:17:16 compute-0 sudo[244295]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:17:16 compute-0 ceph-mon[74360]: pgmap v763: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:17:16 compute-0 python3.9[244313]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1768918635.2544355-2659-97888763811435/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:17:16 compute-0 sudo[244295]: pam_unix(sudo:session): session closed for user root
Jan 20 14:17:16 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 14:17:16 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:17:16 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 14:17:16 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:17:16 compute-0 sudo[244381]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:17:16 compute-0 sudo[244381]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:17:16 compute-0 sudo[244381]: pam_unix(sudo:session): session closed for user root
Jan 20 14:17:16 compute-0 sudo[244425]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:17:16 compute-0 sudo[244425]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:17:16 compute-0 sudo[244425]: pam_unix(sudo:session): session closed for user root
Jan 20 14:17:16 compute-0 sudo[244478]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:17:16 compute-0 sudo[244478]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:17:16 compute-0 sudo[244478]: pam_unix(sudo:session): session closed for user root
Jan 20 14:17:16 compute-0 sudo[244533]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 14:17:16 compute-0 sudo[244533]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:17:16 compute-0 ceph-osd[84815]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 20 14:17:16 compute-0 ceph-osd[84815]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Cumulative writes: 8467 writes, 34K keys, 8467 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 8467 writes, 1693 syncs, 5.00 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 560 writes, 959 keys, 560 commit groups, 1.0 writes per commit group, ingest: 0.31 MB, 0.00 MB/s
                                           Interval WAL: 560 writes, 232 syncs, 2.41 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562bb86671f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562bb86671f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562bb86671f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562bb86671f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562bb86671f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562bb86671f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562bb86671f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562bb8667610#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562bb8667610#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562bb8667610#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562bb86671f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562bb86671f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 20 14:17:16 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v764: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:17:16 compute-0 python3.9[244602]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:17:16 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:17:16 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:17:16 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:17:16.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:17:17 compute-0 sudo[244533]: pam_unix(sudo:session): session closed for user root
Jan 20 14:17:17 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 14:17:17 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:17:17 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 20 14:17:17 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:17:17 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 20 14:17:17 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:17:17 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 07837e2f-8d73-49a9-8de3-64bbe8b4e4ed does not exist
Jan 20 14:17:17 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev b942eb56-ab8f-4d54-abe6-139200a0a3e8 does not exist
Jan 20 14:17:17 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 0e824fe0-4788-4b28-9b37-1303cd9c5561 does not exist
Jan 20 14:17:17 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 20 14:17:17 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:17:17 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 20 14:17:17 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:17:17 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 14:17:17 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:17:17 compute-0 sudo[244732]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:17:17 compute-0 sudo[244732]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:17:17 compute-0 sudo[244732]: pam_unix(sudo:session): session closed for user root
Jan 20 14:17:17 compute-0 sudo[244780]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:17:17 compute-0 sudo[244780]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:17:17 compute-0 sudo[244780]: pam_unix(sudo:session): session closed for user root
Jan 20 14:17:17 compute-0 sudo[244805]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:17:17 compute-0 sudo[244805]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:17:17 compute-0 sudo[244805]: pam_unix(sudo:session): session closed for user root
Jan 20 14:17:17 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:17:17 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:17:17 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:17:17 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:17:17 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:17:17 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:17:17 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:17:17 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:17:17 compute-0 sudo[244830]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 14:17:17 compute-0 sudo[244830]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:17:17 compute-0 python3.9[244777]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1768918636.5124595-2659-63903454966555/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:17:17 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:17:17 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:17:17 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:17:17.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:17:17 compute-0 podman[244918]: 2026-01-20 14:17:17.794098491 +0000 UTC m=+0.037256617 container create 6288d0a33502e3289de96991be569bb40971648404896f48b12f38ebf16d2fff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_babbage, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 20 14:17:17 compute-0 systemd[1]: Started libpod-conmon-6288d0a33502e3289de96991be569bb40971648404896f48b12f38ebf16d2fff.scope.
Jan 20 14:17:17 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:17:17 compute-0 podman[244918]: 2026-01-20 14:17:17.861396489 +0000 UTC m=+0.104554645 container init 6288d0a33502e3289de96991be569bb40971648404896f48b12f38ebf16d2fff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_babbage, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 20 14:17:17 compute-0 podman[244918]: 2026-01-20 14:17:17.867475843 +0000 UTC m=+0.110633979 container start 6288d0a33502e3289de96991be569bb40971648404896f48b12f38ebf16d2fff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_babbage, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef)
Jan 20 14:17:17 compute-0 blissful_babbage[244934]: 167 167
Jan 20 14:17:17 compute-0 systemd[1]: libpod-6288d0a33502e3289de96991be569bb40971648404896f48b12f38ebf16d2fff.scope: Deactivated successfully.
Jan 20 14:17:17 compute-0 podman[244918]: 2026-01-20 14:17:17.779225489 +0000 UTC m=+0.022383635 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:17:17 compute-0 conmon[244934]: conmon 6288d0a33502e3289de9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6288d0a33502e3289de96991be569bb40971648404896f48b12f38ebf16d2fff.scope/container/memory.events
Jan 20 14:17:17 compute-0 podman[244918]: 2026-01-20 14:17:17.875936911 +0000 UTC m=+0.119095037 container attach 6288d0a33502e3289de96991be569bb40971648404896f48b12f38ebf16d2fff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_babbage, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:17:17 compute-0 podman[244918]: 2026-01-20 14:17:17.876169547 +0000 UTC m=+0.119327673 container died 6288d0a33502e3289de96991be569bb40971648404896f48b12f38ebf16d2fff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_babbage, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:17:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-0170edae9d5b461dd6b5cb770c5116339f4611da1e36965984529382f9126515-merged.mount: Deactivated successfully.
Jan 20 14:17:17 compute-0 podman[244918]: 2026-01-20 14:17:17.919145929 +0000 UTC m=+0.162304055 container remove 6288d0a33502e3289de96991be569bb40971648404896f48b12f38ebf16d2fff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_babbage, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 20 14:17:17 compute-0 systemd[1]: libpod-conmon-6288d0a33502e3289de96991be569bb40971648404896f48b12f38ebf16d2fff.scope: Deactivated successfully.
Jan 20 14:17:18 compute-0 podman[245009]: 2026-01-20 14:17:18.074503995 +0000 UTC m=+0.046303132 container create 2decbc1bc5349eaca6b38cd5136bfdcf548a16cc95a7cc65b132d43d2c2937b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_mcnulty, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:17:18 compute-0 systemd[1]: Started libpod-conmon-2decbc1bc5349eaca6b38cd5136bfdcf548a16cc95a7cc65b132d43d2c2937b8.scope.
Jan 20 14:17:18 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:17:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47230b98f7b696c46b864be193d5600d7a16121245ebf0a07b5d2dce534082eb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:17:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47230b98f7b696c46b864be193d5600d7a16121245ebf0a07b5d2dce534082eb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:17:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47230b98f7b696c46b864be193d5600d7a16121245ebf0a07b5d2dce534082eb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:17:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47230b98f7b696c46b864be193d5600d7a16121245ebf0a07b5d2dce534082eb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:17:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47230b98f7b696c46b864be193d5600d7a16121245ebf0a07b5d2dce534082eb/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 14:17:18 compute-0 podman[245009]: 2026-01-20 14:17:18.145259946 +0000 UTC m=+0.117059093 container init 2decbc1bc5349eaca6b38cd5136bfdcf548a16cc95a7cc65b132d43d2c2937b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_mcnulty, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True)
Jan 20 14:17:18 compute-0 podman[245009]: 2026-01-20 14:17:18.052688615 +0000 UTC m=+0.024487782 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:17:18 compute-0 podman[245009]: 2026-01-20 14:17:18.153997812 +0000 UTC m=+0.125796959 container start 2decbc1bc5349eaca6b38cd5136bfdcf548a16cc95a7cc65b132d43d2c2937b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_mcnulty, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 20 14:17:18 compute-0 podman[245009]: 2026-01-20 14:17:18.157538677 +0000 UTC m=+0.129337814 container attach 2decbc1bc5349eaca6b38cd5136bfdcf548a16cc95a7cc65b132d43d2c2937b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_mcnulty, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:17:18 compute-0 sudo[245104]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qvxnabgjelanbhpzcxlimvltorqlqxrz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918637.9355738-2908-53224510438821/AnsiballZ_file.py'
Jan 20 14:17:18 compute-0 sudo[245104]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:17:18 compute-0 python3.9[245106]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:17:18 compute-0 sudo[245104]: pam_unix(sudo:session): session closed for user root
Jan 20 14:17:18 compute-0 ceph-mon[74360]: pgmap v764: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:17:18 compute-0 amazing_mcnulty[245072]: --> passed data devices: 0 physical, 1 LVM
Jan 20 14:17:18 compute-0 amazing_mcnulty[245072]: --> relative data size: 1.0
Jan 20 14:17:18 compute-0 amazing_mcnulty[245072]: --> All data devices are unavailable
Jan 20 14:17:18 compute-0 systemd[1]: libpod-2decbc1bc5349eaca6b38cd5136bfdcf548a16cc95a7cc65b132d43d2c2937b8.scope: Deactivated successfully.
Jan 20 14:17:18 compute-0 podman[245009]: 2026-01-20 14:17:18.912454787 +0000 UTC m=+0.884253934 container died 2decbc1bc5349eaca6b38cd5136bfdcf548a16cc95a7cc65b132d43d2c2937b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_mcnulty, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Jan 20 14:17:18 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v765: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:17:18 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:17:18 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:17:18 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:17:18.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:17:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-47230b98f7b696c46b864be193d5600d7a16121245ebf0a07b5d2dce534082eb-merged.mount: Deactivated successfully.
Jan 20 14:17:19 compute-0 podman[245009]: 2026-01-20 14:17:19.07949773 +0000 UTC m=+1.051296857 container remove 2decbc1bc5349eaca6b38cd5136bfdcf548a16cc95a7cc65b132d43d2c2937b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_mcnulty, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:17:19 compute-0 systemd[1]: libpod-conmon-2decbc1bc5349eaca6b38cd5136bfdcf548a16cc95a7cc65b132d43d2c2937b8.scope: Deactivated successfully.
Jan 20 14:17:19 compute-0 sudo[244830]: pam_unix(sudo:session): session closed for user root
Jan 20 14:17:19 compute-0 sudo[245293]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vsgspndqrpkvpayrkpakeznyqnecjjtz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918638.7667012-2932-223445528454986/AnsiballZ_copy.py'
Jan 20 14:17:19 compute-0 sudo[245293]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:17:19 compute-0 sudo[245267]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:17:19 compute-0 sudo[245267]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:17:19 compute-0 sudo[245267]: pam_unix(sudo:session): session closed for user root
Jan 20 14:17:19 compute-0 sudo[245309]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:17:19 compute-0 sudo[245309]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:17:19 compute-0 sudo[245309]: pam_unix(sudo:session): session closed for user root
Jan 20 14:17:19 compute-0 sudo[245334]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:17:19 compute-0 sudo[245334]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:17:19 compute-0 sudo[245334]: pam_unix(sudo:session): session closed for user root
Jan 20 14:17:19 compute-0 python3.9[245306]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:17:19 compute-0 sudo[245359]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- lvm list --format json
Jan 20 14:17:19 compute-0 sudo[245359]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:17:19 compute-0 sudo[245293]: pam_unix(sudo:session): session closed for user root
Jan 20 14:17:19 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:17:19 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:17:19 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:17:19.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:17:19 compute-0 podman[245448]: 2026-01-20 14:17:19.668621582 +0000 UTC m=+0.040616198 container create d57095fc2707973e7ef4d11e2ba63012fe1e904138ed052349bcc1c8c51c5e99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_saha, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:17:19 compute-0 systemd[1]: Started libpod-conmon-d57095fc2707973e7ef4d11e2ba63012fe1e904138ed052349bcc1c8c51c5e99.scope.
Jan 20 14:17:19 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:17:19 compute-0 podman[245448]: 2026-01-20 14:17:19.739444785 +0000 UTC m=+0.111439431 container init d57095fc2707973e7ef4d11e2ba63012fe1e904138ed052349bcc1c8c51c5e99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_saha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS)
Jan 20 14:17:19 compute-0 podman[245448]: 2026-01-20 14:17:19.650098471 +0000 UTC m=+0.022093117 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:17:19 compute-0 podman[245448]: 2026-01-20 14:17:19.747367949 +0000 UTC m=+0.119362565 container start d57095fc2707973e7ef4d11e2ba63012fe1e904138ed052349bcc1c8c51c5e99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_saha, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 20 14:17:19 compute-0 podman[245448]: 2026-01-20 14:17:19.751110259 +0000 UTC m=+0.123104875 container attach d57095fc2707973e7ef4d11e2ba63012fe1e904138ed052349bcc1c8c51c5e99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_saha, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:17:19 compute-0 hungry_saha[245488]: 167 167
Jan 20 14:17:19 compute-0 systemd[1]: libpod-d57095fc2707973e7ef4d11e2ba63012fe1e904138ed052349bcc1c8c51c5e99.scope: Deactivated successfully.
Jan 20 14:17:19 compute-0 podman[245448]: 2026-01-20 14:17:19.755370714 +0000 UTC m=+0.127365330 container died d57095fc2707973e7ef4d11e2ba63012fe1e904138ed052349bcc1c8c51c5e99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_saha, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 20 14:17:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-f09910f91c6cd529da3d90a10cc44c733a39ea56e47915b2443c5b1509675fce-merged.mount: Deactivated successfully.
Jan 20 14:17:19 compute-0 podman[245448]: 2026-01-20 14:17:19.794969635 +0000 UTC m=+0.166964241 container remove d57095fc2707973e7ef4d11e2ba63012fe1e904138ed052349bcc1c8c51c5e99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_saha, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 20 14:17:19 compute-0 systemd[1]: libpod-conmon-d57095fc2707973e7ef4d11e2ba63012fe1e904138ed052349bcc1c8c51c5e99.scope: Deactivated successfully.
Jan 20 14:17:19 compute-0 podman[245567]: 2026-01-20 14:17:19.984455922 +0000 UTC m=+0.044701908 container create 2ecf4c555e17769713a7eff43f8b415ddb48dfb0718a2e9f00bde74bb8af9a3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_galois, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:17:20 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:17:20 compute-0 systemd[1]: Started libpod-conmon-2ecf4c555e17769713a7eff43f8b415ddb48dfb0718a2e9f00bde74bb8af9a3b.scope.
Jan 20 14:17:20 compute-0 sudo[245630]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fpjugfklwnysqctntbiadmfsoooqupnb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918639.7097862-2956-251315747636298/AnsiballZ_stat.py'
Jan 20 14:17:20 compute-0 sudo[245630]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:17:20 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:17:20 compute-0 podman[245567]: 2026-01-20 14:17:19.966203219 +0000 UTC m=+0.026449225 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:17:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/610817dffbdbb7b154d5da99d1da84def1492db8ab0e77ac1aeb14adc4e120d0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:17:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/610817dffbdbb7b154d5da99d1da84def1492db8ab0e77ac1aeb14adc4e120d0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:17:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/610817dffbdbb7b154d5da99d1da84def1492db8ab0e77ac1aeb14adc4e120d0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:17:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/610817dffbdbb7b154d5da99d1da84def1492db8ab0e77ac1aeb14adc4e120d0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:17:20 compute-0 podman[245567]: 2026-01-20 14:17:20.08099345 +0000 UTC m=+0.141239446 container init 2ecf4c555e17769713a7eff43f8b415ddb48dfb0718a2e9f00bde74bb8af9a3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_galois, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 14:17:20 compute-0 podman[245567]: 2026-01-20 14:17:20.089772207 +0000 UTC m=+0.150018193 container start 2ecf4c555e17769713a7eff43f8b415ddb48dfb0718a2e9f00bde74bb8af9a3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_galois, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Jan 20 14:17:20 compute-0 podman[245567]: 2026-01-20 14:17:20.092910842 +0000 UTC m=+0.153156838 container attach 2ecf4c555e17769713a7eff43f8b415ddb48dfb0718a2e9f00bde74bb8af9a3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_galois, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:17:20 compute-0 python3.9[245634]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 14:17:20 compute-0 sudo[245630]: pam_unix(sudo:session): session closed for user root
Jan 20 14:17:20 compute-0 cool_galois[245632]: {
Jan 20 14:17:20 compute-0 cool_galois[245632]:     "0": [
Jan 20 14:17:20 compute-0 cool_galois[245632]:         {
Jan 20 14:17:20 compute-0 cool_galois[245632]:             "devices": [
Jan 20 14:17:20 compute-0 cool_galois[245632]:                 "/dev/loop3"
Jan 20 14:17:20 compute-0 cool_galois[245632]:             ],
Jan 20 14:17:20 compute-0 cool_galois[245632]:             "lv_name": "ceph_lv0",
Jan 20 14:17:20 compute-0 cool_galois[245632]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:17:20 compute-0 cool_galois[245632]:             "lv_size": "7511998464",
Jan 20 14:17:20 compute-0 cool_galois[245632]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e399cf45-e6b6-5393-99f1-75c601d3f188,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afc3740a-ccee-46f8-b343-22648cc89376,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 20 14:17:20 compute-0 cool_galois[245632]:             "lv_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 14:17:20 compute-0 cool_galois[245632]:             "name": "ceph_lv0",
Jan 20 14:17:20 compute-0 cool_galois[245632]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:17:20 compute-0 cool_galois[245632]:             "tags": {
Jan 20 14:17:20 compute-0 cool_galois[245632]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:17:20 compute-0 cool_galois[245632]:                 "ceph.block_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 14:17:20 compute-0 cool_galois[245632]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 14:17:20 compute-0 cool_galois[245632]:                 "ceph.cluster_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 14:17:20 compute-0 cool_galois[245632]:                 "ceph.cluster_name": "ceph",
Jan 20 14:17:20 compute-0 cool_galois[245632]:                 "ceph.crush_device_class": "",
Jan 20 14:17:20 compute-0 cool_galois[245632]:                 "ceph.encrypted": "0",
Jan 20 14:17:20 compute-0 cool_galois[245632]:                 "ceph.osd_fsid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 14:17:20 compute-0 cool_galois[245632]:                 "ceph.osd_id": "0",
Jan 20 14:17:20 compute-0 cool_galois[245632]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 14:17:20 compute-0 cool_galois[245632]:                 "ceph.type": "block",
Jan 20 14:17:20 compute-0 cool_galois[245632]:                 "ceph.vdo": "0"
Jan 20 14:17:20 compute-0 cool_galois[245632]:             },
Jan 20 14:17:20 compute-0 cool_galois[245632]:             "type": "block",
Jan 20 14:17:20 compute-0 cool_galois[245632]:             "vg_name": "ceph_vg0"
Jan 20 14:17:20 compute-0 cool_galois[245632]:         }
Jan 20 14:17:20 compute-0 cool_galois[245632]:     ]
Jan 20 14:17:20 compute-0 cool_galois[245632]: }
Jan 20 14:17:20 compute-0 systemd[1]: libpod-2ecf4c555e17769713a7eff43f8b415ddb48dfb0718a2e9f00bde74bb8af9a3b.scope: Deactivated successfully.
Jan 20 14:17:20 compute-0 podman[245567]: 2026-01-20 14:17:20.878602384 +0000 UTC m=+0.938848380 container died 2ecf4c555e17769713a7eff43f8b415ddb48dfb0718a2e9f00bde74bb8af9a3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_galois, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 14:17:20 compute-0 sudo[245798]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-indjneffbkczcdqynhuwfbnrfzbcmgek ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918640.566022-2980-260620640726103/AnsiballZ_stat.py'
Jan 20 14:17:20 compute-0 sudo[245798]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:17:20 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v766: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:17:20 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:17:20 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:17:20 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:17:20.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:17:21 compute-0 ceph-mon[74360]: pgmap v765: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:17:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-610817dffbdbb7b154d5da99d1da84def1492db8ab0e77ac1aeb14adc4e120d0-merged.mount: Deactivated successfully.
Jan 20 14:17:21 compute-0 podman[245567]: 2026-01-20 14:17:21.1068889 +0000 UTC m=+1.167134886 container remove 2ecf4c555e17769713a7eff43f8b415ddb48dfb0718a2e9f00bde74bb8af9a3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_galois, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 14:17:21 compute-0 systemd[1]: libpod-conmon-2ecf4c555e17769713a7eff43f8b415ddb48dfb0718a2e9f00bde74bb8af9a3b.scope: Deactivated successfully.
Jan 20 14:17:21 compute-0 sudo[245359]: pam_unix(sudo:session): session closed for user root
Jan 20 14:17:21 compute-0 python3.9[245806]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:17:21 compute-0 sudo[245798]: pam_unix(sudo:session): session closed for user root
Jan 20 14:17:21 compute-0 sudo[245809]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:17:21 compute-0 sudo[245809]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:17:21 compute-0 sudo[245809]: pam_unix(sudo:session): session closed for user root
Jan 20 14:17:21 compute-0 sudo[245835]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:17:21 compute-0 sudo[245835]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:17:21 compute-0 sudo[245835]: pam_unix(sudo:session): session closed for user root
Jan 20 14:17:21 compute-0 sudo[245882]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:17:21 compute-0 sudo[245882]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:17:21 compute-0 sudo[245882]: pam_unix(sudo:session): session closed for user root
Jan 20 14:17:21 compute-0 sudo[245931]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- raw list --format json
Jan 20 14:17:21 compute-0 sudo[245931]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:17:21 compute-0 sudo[246036]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mmjggcjqoillcqewlwoipivuwmtzzgww ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918640.566022-2980-260620640726103/AnsiballZ_copy.py'
Jan 20 14:17:21 compute-0 sudo[246036]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:17:21 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:17:21 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:17:21 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:17:21.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:17:21 compute-0 podman[246072]: 2026-01-20 14:17:21.627419019 +0000 UTC m=+0.034927234 container create 4c4246f823f6f25a46d0d7423eef44fdf91fecd822da013495f4c69c3f2d6343 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_dewdney, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:17:21 compute-0 systemd[1]: Started libpod-conmon-4c4246f823f6f25a46d0d7423eef44fdf91fecd822da013495f4c69c3f2d6343.scope.
Jan 20 14:17:21 compute-0 python3.9[246045]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1768918640.566022-2980-260620640726103/.source _original_basename=._jirmdnm follow=False checksum=3381fc9478c2a16354405910fa158ede76c2022f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Jan 20 14:17:21 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:17:21 compute-0 podman[246072]: 2026-01-20 14:17:21.70740981 +0000 UTC m=+0.114918035 container init 4c4246f823f6f25a46d0d7423eef44fdf91fecd822da013495f4c69c3f2d6343 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_dewdney, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 14:17:21 compute-0 podman[246072]: 2026-01-20 14:17:21.613149813 +0000 UTC m=+0.020658048 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:17:21 compute-0 sudo[246036]: pam_unix(sudo:session): session closed for user root
Jan 20 14:17:21 compute-0 podman[246072]: 2026-01-20 14:17:21.714478791 +0000 UTC m=+0.121987006 container start 4c4246f823f6f25a46d0d7423eef44fdf91fecd822da013495f4c69c3f2d6343 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_dewdney, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 20 14:17:21 compute-0 modest_dewdney[246089]: 167 167
Jan 20 14:17:21 compute-0 podman[246072]: 2026-01-20 14:17:21.719182497 +0000 UTC m=+0.126690712 container attach 4c4246f823f6f25a46d0d7423eef44fdf91fecd822da013495f4c69c3f2d6343 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_dewdney, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 20 14:17:21 compute-0 systemd[1]: libpod-4c4246f823f6f25a46d0d7423eef44fdf91fecd822da013495f4c69c3f2d6343.scope: Deactivated successfully.
Jan 20 14:17:21 compute-0 podman[246072]: 2026-01-20 14:17:21.719906837 +0000 UTC m=+0.127415052 container died 4c4246f823f6f25a46d0d7423eef44fdf91fecd822da013495f4c69c3f2d6343 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_dewdney, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507)
Jan 20 14:17:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-4e673d0ffdfc9b5aed30c24f05eb1c74a384a7527911eaf41fc817d3a445034a-merged.mount: Deactivated successfully.
Jan 20 14:17:21 compute-0 podman[246072]: 2026-01-20 14:17:21.758252543 +0000 UTC m=+0.165760748 container remove 4c4246f823f6f25a46d0d7423eef44fdf91fecd822da013495f4c69c3f2d6343 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_dewdney, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:17:21 compute-0 systemd[1]: libpod-conmon-4c4246f823f6f25a46d0d7423eef44fdf91fecd822da013495f4c69c3f2d6343.scope: Deactivated successfully.
Jan 20 14:17:21 compute-0 podman[246138]: 2026-01-20 14:17:21.898675875 +0000 UTC m=+0.025610512 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:17:22 compute-0 podman[246138]: 2026-01-20 14:17:22.391291241 +0000 UTC m=+0.518225868 container create d9da1caaf96c68ddeede8cdeedf27fc856d413b9eaf106ed01efc1c08d733fc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_poincare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:17:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:17:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:17:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:17:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:17:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:17:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:17:22 compute-0 python3.9[246277]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 14:17:22 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v767: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:17:22 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:17:22 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:17:22 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:17:22.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:17:22 compute-0 ceph-mon[74360]: pgmap v766: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:17:23 compute-0 systemd[1]: Started libpod-conmon-d9da1caaf96c68ddeede8cdeedf27fc856d413b9eaf106ed01efc1c08d733fc8.scope.
Jan 20 14:17:23 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:17:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2999bba1f06c5a2b19f5198779918d63bb4c88fc80c8da682d8f7be00f1feb8d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:17:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2999bba1f06c5a2b19f5198779918d63bb4c88fc80c8da682d8f7be00f1feb8d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:17:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2999bba1f06c5a2b19f5198779918d63bb4c88fc80c8da682d8f7be00f1feb8d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:17:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2999bba1f06c5a2b19f5198779918d63bb4c88fc80c8da682d8f7be00f1feb8d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:17:23 compute-0 podman[246138]: 2026-01-20 14:17:23.069275703 +0000 UTC m=+1.196210330 container init d9da1caaf96c68ddeede8cdeedf27fc856d413b9eaf106ed01efc1c08d733fc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_poincare, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:17:23 compute-0 podman[246138]: 2026-01-20 14:17:23.077139486 +0000 UTC m=+1.204074093 container start d9da1caaf96c68ddeede8cdeedf27fc856d413b9eaf106ed01efc1c08d733fc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_poincare, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:17:23 compute-0 podman[246138]: 2026-01-20 14:17:23.080857857 +0000 UTC m=+1.207792464 container attach d9da1caaf96c68ddeede8cdeedf27fc856d413b9eaf106ed01efc1c08d733fc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_poincare, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 20 14:17:23 compute-0 python3.9[246437]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:17:23 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:17:23 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:17:23 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:17:23.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:17:23 compute-0 python3.9[246558]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1768918642.8849201-3058-58502251199678/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=aff5546b44cf4461a7541a94e4cce1332c9b58b0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:17:23 compute-0 adoring_poincare[246382]: {
Jan 20 14:17:23 compute-0 adoring_poincare[246382]:     "afc3740a-ccee-46f8-b343-22648cc89376": {
Jan 20 14:17:23 compute-0 adoring_poincare[246382]:         "ceph_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 14:17:23 compute-0 adoring_poincare[246382]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 20 14:17:23 compute-0 adoring_poincare[246382]:         "osd_id": 0,
Jan 20 14:17:23 compute-0 adoring_poincare[246382]:         "osd_uuid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 14:17:23 compute-0 adoring_poincare[246382]:         "type": "bluestore"
Jan 20 14:17:23 compute-0 adoring_poincare[246382]:     }
Jan 20 14:17:23 compute-0 adoring_poincare[246382]: }
Jan 20 14:17:23 compute-0 systemd[1]: libpod-d9da1caaf96c68ddeede8cdeedf27fc856d413b9eaf106ed01efc1c08d733fc8.scope: Deactivated successfully.
Jan 20 14:17:23 compute-0 podman[246138]: 2026-01-20 14:17:23.948355368 +0000 UTC m=+2.075289975 container died d9da1caaf96c68ddeede8cdeedf27fc856d413b9eaf106ed01efc1c08d733fc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_poincare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 20 14:17:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-2999bba1f06c5a2b19f5198779918d63bb4c88fc80c8da682d8f7be00f1feb8d-merged.mount: Deactivated successfully.
Jan 20 14:17:24 compute-0 podman[246138]: 2026-01-20 14:17:24.004568966 +0000 UTC m=+2.131503573 container remove d9da1caaf96c68ddeede8cdeedf27fc856d413b9eaf106ed01efc1c08d733fc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_poincare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:17:24 compute-0 systemd[1]: libpod-conmon-d9da1caaf96c68ddeede8cdeedf27fc856d413b9eaf106ed01efc1c08d733fc8.scope: Deactivated successfully.
Jan 20 14:17:24 compute-0 sudo[245931]: pam_unix(sudo:session): session closed for user root
Jan 20 14:17:24 compute-0 ceph-mon[74360]: pgmap v767: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:17:24 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 14:17:24 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:17:24 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 14:17:24 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:17:24 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 17624ac9-4553-4787-8917-4b810c643cb4 does not exist
Jan 20 14:17:24 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 57dcd5f6-69fb-4385-830b-96df346edf6d does not exist
Jan 20 14:17:24 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 586159c5-0498-4099-892c-51949ce17089 does not exist
Jan 20 14:17:24 compute-0 sudo[246663]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:17:24 compute-0 sudo[246663]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:17:24 compute-0 sudo[246663]: pam_unix(sudo:session): session closed for user root
Jan 20 14:17:24 compute-0 sudo[246711]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 14:17:24 compute-0 sudo[246711]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:17:24 compute-0 sudo[246711]: pam_unix(sudo:session): session closed for user root
Jan 20 14:17:24 compute-0 python3.9[246786]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:17:24 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v768: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:17:24 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:17:24 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:17:24 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:17:24.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:17:25 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:17:25 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:17:25 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:17:25 compute-0 python3.9[246908]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1768918644.2111464-3103-45310921758469/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=60b024e6db49dc6e700fc0d50263944d98d4c034 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:17:25 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:17:25 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:17:25 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:17:25.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:17:26 compute-0 ceph-mon[74360]: pgmap v768: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:17:26 compute-0 sudo[247058]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aqsuskgdmxaenbicabttvrtprjwedwdn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918646.0225334-3154-172960706839899/AnsiballZ_container_config_data.py'
Jan 20 14:17:26 compute-0 sudo[247058]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:17:26 compute-0 python3.9[247060]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Jan 20 14:17:26 compute-0 sudo[247058]: pam_unix(sudo:session): session closed for user root
Jan 20 14:17:26 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v769: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:17:26 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:17:26 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:17:26 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:17:26.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:17:27 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:17:27 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:17:27 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:17:27.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:17:27 compute-0 sudo[247211]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kecntoyvqfdjkdfwlcywfcvojalzjxnp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918647.1529875-3187-99160908956348/AnsiballZ_container_config_hash.py'
Jan 20 14:17:27 compute-0 sudo[247211]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:17:27 compute-0 python3.9[247213]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 20 14:17:27 compute-0 sudo[247211]: pam_unix(sudo:session): session closed for user root
Jan 20 14:17:28 compute-0 sshd-session[247214]: Connection closed by authenticating user root 157.245.78.139 port 37496 [preauth]
Jan 20 14:17:28 compute-0 ceph-mon[74360]: pgmap v769: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:17:28 compute-0 sudo[247366]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-swmxiylecafqillxdfrpwacnqbiczwno ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1768918648.3557062-3217-90348129771398/AnsiballZ_edpm_container_manage.py'
Jan 20 14:17:28 compute-0 sudo[247366]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:17:28 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v770: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:17:28 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:17:28 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:17:28 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:17:28.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:17:29 compute-0 python3[247368]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json containers=[] log_base_path=/var/log/containers/stdouts debug=False
Jan 20 14:17:29 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:17:29 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:17:29 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:17:29.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:17:29 compute-0 ceph-mon[74360]: pgmap v770: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:17:30 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:17:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:17:30.730 160071 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:17:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:17:30.731 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:17:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:17:30.731 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:17:30 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v771: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:17:30 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:17:30 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:17:30 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:17:30.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:17:31 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:17:31 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:17:31 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:17:31.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:17:31 compute-0 ceph-mgr[74653]: [devicehealth INFO root] Check health
Jan 20 14:17:32 compute-0 sudo[247422]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:17:32 compute-0 sudo[247422]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:17:32 compute-0 sudo[247422]: pam_unix(sudo:session): session closed for user root
Jan 20 14:17:32 compute-0 ceph-mon[74360]: pgmap v771: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:17:32 compute-0 sudo[247447]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:17:32 compute-0 sudo[247447]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:17:32 compute-0 sudo[247447]: pam_unix(sudo:session): session closed for user root
Jan 20 14:17:32 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v772: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:17:32 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:17:32 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:17:32 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:17:32.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:17:33 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:17:33 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:17:33 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:17:33.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:17:34 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v773: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:17:34 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:17:34 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:17:34 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:17:34.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:17:35 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:17:35 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:17:35 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:17:35.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:17:36 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:17:36 compute-0 ceph-mon[74360]: pgmap v772: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:17:36 compute-0 podman[247483]: 2026-01-20 14:17:36.655159091 +0000 UTC m=+0.238420872 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 20 14:17:36 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v774: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:17:36 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:17:36 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:17:36 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:17:36.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:17:37 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:17:37 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:17:37 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:17:37.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:17:38 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v775: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:17:39 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:17:39 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:17:39 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:17:38.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:17:39 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:17:39 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:17:39 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:17:39.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:17:40 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v776: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:17:41 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:17:41 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:17:41 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:17:41.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:17:41 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:17:41 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:17:41 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:17:41.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:17:42 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v777: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:17:43 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:17:43 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:17:43 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:17:43.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:17:43 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:17:43 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:17:43 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:17:43.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:17:44 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:17:44 compute-0 ceph-mon[74360]: pgmap v773: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:17:44 compute-0 ceph-mon[74360]: pgmap v774: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:17:44 compute-0 podman[247522]: 2026-01-20 14:17:44.966278385 +0000 UTC m=+1.533572772 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 20 14:17:44 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v778: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:17:45 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:17:45 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:17:45 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:17:45.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:17:45 compute-0 podman[247381]: 2026-01-20 14:17:45.115991149 +0000 UTC m=+15.845914892 image pull e3166cc074f328e3b121ff82d56ed43a2542af699baffe6874520fe3837c2b18 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Jan 20 14:17:45 compute-0 podman[247571]: 2026-01-20 14:17:45.251931291 +0000 UTC m=+0.024130773 image pull e3166cc074f328e3b121ff82d56ed43a2542af699baffe6874520fe3837c2b18 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Jan 20 14:17:45 compute-0 podman[247571]: 2026-01-20 14:17:45.384110981 +0000 UTC m=+0.156310443 container create 5a0171988aa80514e0d57f5f0c689fae405b814c42e10f881bf8be41c50ddac4 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=edpm, container_name=nova_compute_init, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 20 14:17:45 compute-0 python3[247368]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Jan 20 14:17:45 compute-0 sudo[247366]: pam_unix(sudo:session): session closed for user root
Jan 20 14:17:45 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:17:45 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:17:45 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:17:45.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:17:45 compute-0 ceph-mon[74360]: pgmap v775: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:17:45 compute-0 ceph-mon[74360]: pgmap v776: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:17:45 compute-0 ceph-mon[74360]: pgmap v777: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:17:45 compute-0 ceph-mon[74360]: pgmap v778: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:17:46 compute-0 sudo[247760]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oefrlxymtizqeuvrxhjaarijsowkuplv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918665.8878274-3241-146913740509183/AnsiballZ_stat.py'
Jan 20 14:17:46 compute-0 sudo[247760]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:17:46 compute-0 python3.9[247762]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 14:17:46 compute-0 sudo[247760]: pam_unix(sudo:session): session closed for user root
Jan 20 14:17:46 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v779: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:17:47 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:17:47 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:17:47 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:17:47.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:17:47 compute-0 sudo[247915]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-egjimatijfawbdcwgysioxvjftlyceqs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918667.0992877-3277-75512987442186/AnsiballZ_container_config_data.py'
Jan 20 14:17:47 compute-0 sudo[247915]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:17:47 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:17:47 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:17:47 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:17:47.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:17:47 compute-0 python3.9[247917]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Jan 20 14:17:47 compute-0 sudo[247915]: pam_unix(sudo:session): session closed for user root
Jan 20 14:17:48 compute-0 ceph-mon[74360]: pgmap v779: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:17:48 compute-0 sudo[248067]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nqgwxxhdxckzqsicpcggovmcuahczkww ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918668.1915224-3310-170959864727510/AnsiballZ_container_config_hash.py'
Jan 20 14:17:48 compute-0 sudo[248067]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:17:48 compute-0 python3.9[248069]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 20 14:17:48 compute-0 sudo[248067]: pam_unix(sudo:session): session closed for user root
Jan 20 14:17:48 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v780: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:17:49 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:17:49 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:17:49 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:17:49.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:17:49 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:17:49 compute-0 sudo[248220]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hildqarpaeavfrehbcumugkxdpfldmuv ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1768918669.2513576-3340-198205514246548/AnsiballZ_edpm_container_manage.py'
Jan 20 14:17:49 compute-0 sudo[248220]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:17:49 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:17:49 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:17:49 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:17:49.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:17:49 compute-0 python3[248222]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json containers=[] log_base_path=/var/log/containers/stdouts debug=False
Jan 20 14:17:50 compute-0 podman[248259]: 2026-01-20 14:17:50.026553145 +0000 UTC m=+0.048969114 container create e178c14938af17df7b6705234e090287daeb1c12fc64b9f587424d2e33238ca7 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_id=edpm, io.buildah.version=1.41.3, container_name=nova_compute, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.schema-version=1.0)
Jan 20 14:17:50 compute-0 podman[248259]: 2026-01-20 14:17:49.999154394 +0000 UTC m=+0.021570383 image pull e3166cc074f328e3b121ff82d56ed43a2542af699baffe6874520fe3837c2b18 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Jan 20 14:17:50 compute-0 python3[248222]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath --volume /etc/multipath.conf:/etc/multipath.conf:ro,Z --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified kolla_start
Jan 20 14:17:50 compute-0 sudo[248220]: pam_unix(sudo:session): session closed for user root
Jan 20 14:17:50 compute-0 ceph-mon[74360]: pgmap v780: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:17:50 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v781: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:17:51 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:17:51 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:17:51 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:17:51.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:17:51 compute-0 sudo[248449]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vpiammekmugiseusoqaibjoyulynobya ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918671.055024-3364-146785407672536/AnsiballZ_stat.py'
Jan 20 14:17:51 compute-0 sudo[248449]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:17:51 compute-0 python3.9[248451]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 14:17:51 compute-0 sudo[248449]: pam_unix(sudo:session): session closed for user root
Jan 20 14:17:51 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:17:51 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:17:51 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:17:51.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:17:52 compute-0 ceph-mon[74360]: pgmap v781: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:17:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Optimize plan auto_2026-01-20_14:17:52
Jan 20 14:17:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 14:17:52 compute-0 ceph-mgr[74653]: [balancer INFO root] do_upmap
Jan 20 14:17:52 compute-0 ceph-mgr[74653]: [balancer INFO root] pools ['vms', '.mgr', 'default.rgw.meta', 'cephfs.cephfs.meta', 'backups', 'images', 'default.rgw.log', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.control', 'volumes']
Jan 20 14:17:52 compute-0 ceph-mgr[74653]: [balancer INFO root] prepared 0/10 changes
Jan 20 14:17:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:17:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:17:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:17:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:17:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:17:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:17:52 compute-0 sudo[248553]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:17:52 compute-0 sudo[248553]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:17:52 compute-0 sudo[248553]: pam_unix(sudo:session): session closed for user root
Jan 20 14:17:52 compute-0 sudo[248603]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:17:52 compute-0 sudo[248603]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:17:52 compute-0 sudo[248603]: pam_unix(sudo:session): session closed for user root
Jan 20 14:17:52 compute-0 sudo[248654]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-psrnghtweubbrbkpksrrktohaehpjgmf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918672.0236135-3391-202520507106401/AnsiballZ_file.py'
Jan 20 14:17:52 compute-0 sudo[248654]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:17:52 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v782: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:17:53 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:17:53 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:17:53 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:17:53.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:17:53 compute-0 python3.9[248656]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:17:53 compute-0 sudo[248654]: pam_unix(sudo:session): session closed for user root
Jan 20 14:17:53 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:17:53 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:17:53 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:17:53.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:17:53 compute-0 sudo[248805]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-umbyurnoixpwtsvykhzuhrwwsijifeea ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918673.1976194-3391-11046723681414/AnsiballZ_copy.py'
Jan 20 14:17:53 compute-0 sudo[248805]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:17:53 compute-0 python3.9[248807]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1768918673.1976194-3391-11046723681414/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:17:53 compute-0 sudo[248805]: pam_unix(sudo:session): session closed for user root
Jan 20 14:17:54 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:17:54 compute-0 sudo[248881]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-okavcuxzywmjmlycvuqadtmmhpczbcno ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918673.1976194-3391-11046723681414/AnsiballZ_systemd.py'
Jan 20 14:17:54 compute-0 sudo[248881]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:17:54 compute-0 ceph-mon[74360]: pgmap v782: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:17:54 compute-0 python3.9[248883]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 20 14:17:54 compute-0 systemd[1]: Reloading.
Jan 20 14:17:54 compute-0 systemd-rc-local-generator[248913]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:17:54 compute-0 systemd-sysv-generator[248916]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:17:54 compute-0 sudo[248881]: pam_unix(sudo:session): session closed for user root
Jan 20 14:17:54 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v783: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:17:55 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:17:55 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:17:55 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:17:55.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:17:55 compute-0 sudo[248994]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-spcypvhizpmmvmifoxtniztxzkqzfqzk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918673.1976194-3391-11046723681414/AnsiballZ_systemd.py'
Jan 20 14:17:55 compute-0 sudo[248994]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:17:55 compute-0 python3.9[248996]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 14:17:55 compute-0 systemd[1]: Reloading.
Jan 20 14:17:55 compute-0 systemd-sysv-generator[249025]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:17:55 compute-0 systemd-rc-local-generator[249021]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:17:55 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:17:55 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:17:55 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:17:55.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:17:55 compute-0 systemd[1]: Starting nova_compute container...
Jan 20 14:17:55 compute-0 ceph-mon[74360]: pgmap v783: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:17:55 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:17:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8e986e01db23e995d9372c40eb693cafbf8744ce09435827bf540451254d326/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Jan 20 14:17:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8e986e01db23e995d9372c40eb693cafbf8744ce09435827bf540451254d326/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Jan 20 14:17:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8e986e01db23e995d9372c40eb693cafbf8744ce09435827bf540451254d326/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Jan 20 14:17:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8e986e01db23e995d9372c40eb693cafbf8744ce09435827bf540451254d326/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Jan 20 14:17:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8e986e01db23e995d9372c40eb693cafbf8744ce09435827bf540451254d326/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Jan 20 14:17:56 compute-0 podman[249036]: 2026-01-20 14:17:56.144644974 +0000 UTC m=+0.291733020 container init e178c14938af17df7b6705234e090287daeb1c12fc64b9f587424d2e33238ca7 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=nova_compute, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Jan 20 14:17:56 compute-0 podman[249036]: 2026-01-20 14:17:56.151481088 +0000 UTC m=+0.298569074 container start e178c14938af17df7b6705234e090287daeb1c12fc64b9f587424d2e33238ca7 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=nova_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.build-date=20251202)
Jan 20 14:17:56 compute-0 nova_compute[249051]: + sudo -E kolla_set_configs
Jan 20 14:17:56 compute-0 nova_compute[249051]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 20 14:17:56 compute-0 nova_compute[249051]: INFO:__main__:Validating config file
Jan 20 14:17:56 compute-0 nova_compute[249051]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 20 14:17:56 compute-0 nova_compute[249051]: INFO:__main__:Copying service configuration files
Jan 20 14:17:56 compute-0 nova_compute[249051]: INFO:__main__:Deleting /etc/nova/nova.conf
Jan 20 14:17:56 compute-0 nova_compute[249051]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Jan 20 14:17:56 compute-0 nova_compute[249051]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Jan 20 14:17:56 compute-0 nova_compute[249051]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Jan 20 14:17:56 compute-0 nova_compute[249051]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Jan 20 14:17:56 compute-0 nova_compute[249051]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 20 14:17:56 compute-0 nova_compute[249051]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 20 14:17:56 compute-0 nova_compute[249051]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 20 14:17:56 compute-0 nova_compute[249051]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 20 14:17:56 compute-0 nova_compute[249051]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Jan 20 14:17:56 compute-0 nova_compute[249051]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Jan 20 14:17:56 compute-0 nova_compute[249051]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 20 14:17:56 compute-0 nova_compute[249051]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 20 14:17:56 compute-0 nova_compute[249051]: INFO:__main__:Deleting /etc/ceph
Jan 20 14:17:56 compute-0 nova_compute[249051]: INFO:__main__:Creating directory /etc/ceph
Jan 20 14:17:56 compute-0 nova_compute[249051]: INFO:__main__:Setting permission for /etc/ceph
Jan 20 14:17:56 compute-0 nova_compute[249051]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Jan 20 14:17:56 compute-0 nova_compute[249051]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 20 14:17:56 compute-0 nova_compute[249051]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Jan 20 14:17:56 compute-0 nova_compute[249051]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 20 14:17:56 compute-0 nova_compute[249051]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Jan 20 14:17:56 compute-0 nova_compute[249051]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 20 14:17:56 compute-0 nova_compute[249051]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Jan 20 14:17:56 compute-0 nova_compute[249051]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 20 14:17:56 compute-0 nova_compute[249051]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Jan 20 14:17:56 compute-0 nova_compute[249051]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Jan 20 14:17:56 compute-0 nova_compute[249051]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Jan 20 14:17:56 compute-0 nova_compute[249051]: INFO:__main__:Writing out command to execute
Jan 20 14:17:56 compute-0 nova_compute[249051]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 20 14:17:56 compute-0 nova_compute[249051]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 20 14:17:56 compute-0 nova_compute[249051]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Jan 20 14:17:56 compute-0 nova_compute[249051]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 20 14:17:56 compute-0 nova_compute[249051]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 20 14:17:56 compute-0 nova_compute[249051]: ++ cat /run_command
Jan 20 14:17:56 compute-0 podman[249036]: nova_compute
Jan 20 14:17:56 compute-0 nova_compute[249051]: + CMD=nova-compute
Jan 20 14:17:56 compute-0 nova_compute[249051]: + ARGS=
Jan 20 14:17:56 compute-0 nova_compute[249051]: + sudo kolla_copy_cacerts
Jan 20 14:17:56 compute-0 systemd[1]: Started nova_compute container.
Jan 20 14:17:56 compute-0 nova_compute[249051]: + [[ ! -n '' ]]
Jan 20 14:17:56 compute-0 nova_compute[249051]: + . kolla_extend_start
Jan 20 14:17:56 compute-0 nova_compute[249051]: Running command: 'nova-compute'
Jan 20 14:17:56 compute-0 nova_compute[249051]: + echo 'Running command: '\''nova-compute'\'''
Jan 20 14:17:56 compute-0 nova_compute[249051]: + umask 0022
Jan 20 14:17:56 compute-0 nova_compute[249051]: + exec nova-compute
Jan 20 14:17:56 compute-0 sudo[248994]: pam_unix(sudo:session): session closed for user root
Jan 20 14:17:56 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v784: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:17:57 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:17:57 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:17:57 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:17:57.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:17:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 14:17:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:17:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 14:17:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:17:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:17:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:17:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:17:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:17:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:17:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:17:57 compute-0 python3.9[249214]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 14:17:57 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:17:57 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:17:57 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:17:57.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:17:58 compute-0 nova_compute[249051]: 2026-01-20 14:17:58.364 249055 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 20 14:17:58 compute-0 nova_compute[249051]: 2026-01-20 14:17:58.365 249055 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 20 14:17:58 compute-0 nova_compute[249051]: 2026-01-20 14:17:58.365 249055 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 20 14:17:58 compute-0 nova_compute[249051]: 2026-01-20 14:17:58.365 249055 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Jan 20 14:17:58 compute-0 ceph-mon[74360]: pgmap v784: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:17:58 compute-0 nova_compute[249051]: 2026-01-20 14:17:58.504 249055 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:17:58 compute-0 nova_compute[249051]: 2026-01-20 14:17:58.536 249055 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.032s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:17:58 compute-0 nova_compute[249051]: 2026-01-20 14:17:58.537 249055 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Jan 20 14:17:58 compute-0 python3.9[249366]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 14:17:58 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v785: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:17:59 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:17:59 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:17:59 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:17:59.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:17:59 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.374 249055 INFO nova.virt.driver [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Jan 20 14:17:59 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:17:59 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:17:59 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:17:59.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.776 249055 INFO nova.compute.provider_config [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Jan 20 14:17:59 compute-0 python3.9[249519]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.829 249055 DEBUG oslo_concurrency.lockutils [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.830 249055 DEBUG oslo_concurrency.lockutils [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.830 249055 DEBUG oslo_concurrency.lockutils [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.830 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.831 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.831 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.831 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.831 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.831 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.831 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.831 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.832 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.832 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.832 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.832 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.832 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.832 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.832 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.833 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.833 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.833 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.833 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.833 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.833 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.833 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.834 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.834 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.834 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.834 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.834 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.834 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.834 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.835 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.835 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.835 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.835 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.835 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.835 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.835 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.836 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.836 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.836 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.836 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.836 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.836 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.837 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.837 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.837 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.837 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.837 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.837 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.837 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.838 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.838 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.838 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.838 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.838 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.838 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.839 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.839 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.839 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.839 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.839 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.839 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.839 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.839 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.840 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.840 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.840 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.840 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.840 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.840 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.840 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.840 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.841 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.841 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.841 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.841 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.841 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.841 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.841 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.842 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.842 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.842 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.842 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.842 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.842 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.842 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.843 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.843 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.843 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.843 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.843 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.843 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.844 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.844 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.844 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.844 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.844 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.844 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.844 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.845 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.845 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.845 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.845 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.845 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.845 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.845 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.846 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.846 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.846 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.846 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.846 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.846 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.847 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.847 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.847 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.847 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.847 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.847 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.847 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.848 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.848 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.848 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.848 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.848 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.848 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.848 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.849 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.849 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.849 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.849 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.849 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.849 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.849 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.849 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.850 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.850 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.850 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.850 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.850 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.850 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.851 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.851 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.851 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.851 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.851 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.851 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.852 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.852 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.852 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.852 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.852 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.852 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.852 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.853 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.853 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.853 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.853 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.853 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.853 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.853 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.854 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.854 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.854 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.854 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.854 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.854 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.854 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.855 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.855 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.855 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.855 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.855 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.855 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.856 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.856 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.856 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.856 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.856 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.857 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.857 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.857 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.857 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.857 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.857 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.858 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.858 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.858 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.858 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.858 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.858 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.858 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.859 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.859 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.859 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.859 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.859 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.860 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.860 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.860 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.860 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.860 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.860 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.860 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.861 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.861 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.861 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.861 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.861 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.861 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.861 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.862 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.862 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.862 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.862 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.862 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.862 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.862 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.863 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.863 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.863 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.863 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.863 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.863 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.863 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.864 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.864 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.864 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.864 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.864 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.865 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.865 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.865 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.865 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.865 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.865 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.865 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.866 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.866 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.866 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.866 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.866 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.866 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.867 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.867 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.867 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.867 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.867 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.867 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.867 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.868 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.868 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.868 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.868 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.868 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.868 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.868 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.869 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.869 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.869 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.869 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.869 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.869 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.869 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.870 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.870 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.870 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.870 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.870 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.871 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.871 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.871 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.871 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.871 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.871 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.872 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.872 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.872 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.872 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.872 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.872 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.872 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.873 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.873 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.873 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.873 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.873 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.873 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.873 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.874 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.874 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.874 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.874 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.874 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.874 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.874 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.875 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.875 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.875 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.875 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.875 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.875 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.875 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.876 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.876 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.876 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.876 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.876 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.876 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.876 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.877 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.877 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.877 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.877 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.877 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.877 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.878 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.878 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.878 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.878 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.878 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.878 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.879 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.879 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.879 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.879 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.879 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.880 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.880 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.880 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.880 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.880 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.880 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.880 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.881 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.881 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.881 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.881 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.881 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.881 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.882 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.882 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.882 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.882 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.882 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.883 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.883 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.883 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.883 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.883 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.883 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.884 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.884 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.884 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.884 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.884 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.884 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.885 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.885 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.885 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.885 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.885 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.885 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.886 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.886 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.886 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.886 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.886 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.886 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.887 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.887 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.887 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.887 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.887 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.887 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.887 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.888 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.888 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.888 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.888 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.888 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.888 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.888 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.889 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.889 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.889 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.889 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.889 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.889 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.889 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.890 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.890 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.890 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.890 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.890 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.890 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.890 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.891 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.891 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.891 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.891 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.891 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.891 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.891 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.892 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.892 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.892 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.892 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.892 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.892 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.892 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.893 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.893 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.893 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.893 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.893 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.893 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.893 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.894 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.894 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.894 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.894 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.894 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.894 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.894 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.895 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.895 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.895 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.895 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.895 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.895 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.895 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.896 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.896 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.896 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.896 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.896 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.896 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.897 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.897 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.897 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.897 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.897 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.898 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.898 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.898 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.898 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.898 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.898 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.899 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.899 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.899 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.899 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.899 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.899 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] libvirt.cpu_mode               = custom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.899 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.900 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] libvirt.cpu_models             = ['Nehalem'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.900 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.900 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.900 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.900 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.901 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.901 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.901 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.901 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.901 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.901 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.901 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.902 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.902 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.902 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.902 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.902 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.903 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.903 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.903 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.903 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.903 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.903 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.903 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.904 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.904 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.904 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.904 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.904 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.904 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.904 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.904 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.905 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.905 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.905 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.905 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.905 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.906 249055 WARNING oslo_config.cfg [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Jan 20 14:17:59 compute-0 nova_compute[249051]: live_migration_uri is deprecated for removal in favor of two other options that
Jan 20 14:17:59 compute-0 nova_compute[249051]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Jan 20 14:17:59 compute-0 nova_compute[249051]: and ``live_migration_inbound_addr`` respectively.
Jan 20 14:17:59 compute-0 nova_compute[249051]: ).  Its value may be silently ignored in the future.
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.906 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.906 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.906 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.906 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.907 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.907 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.907 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.907 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.907 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.907 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.908 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.908 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.908 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.908 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.908 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.908 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.908 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.909 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.909 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] libvirt.rbd_secret_uuid        = e399cf45-e6b6-5393-99f1-75c601d3f188 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.909 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.909 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.909 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.909 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.909 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.910 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.910 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.910 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.910 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.910 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.910 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.910 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.911 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.911 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.911 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.911 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.911 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.912 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.912 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.912 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.912 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.912 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.912 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.913 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.913 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.913 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.913 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.913 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.914 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.914 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.914 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.914 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.914 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.914 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.915 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.915 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.915 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.915 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.915 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.915 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.915 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.916 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.916 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.916 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.916 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.916 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.916 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.916 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.917 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.917 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.917 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.917 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.917 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.917 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.917 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.918 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.918 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.918 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.918 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.918 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.918 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.918 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.919 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.919 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.919 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.919 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.919 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.919 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.920 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.920 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.920 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.920 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.920 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.920 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.921 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.921 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.921 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.921 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.921 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.921 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.922 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.922 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.922 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.922 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.922 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.922 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.922 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.923 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.923 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.923 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.923 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.923 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.923 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.924 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.924 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.924 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.924 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.924 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.924 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.924 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.925 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.925 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.925 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.925 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.925 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.925 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.925 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.925 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.926 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.926 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.926 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.926 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.926 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.926 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.927 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.927 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.927 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.927 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.927 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.928 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.928 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.928 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.928 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.928 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.929 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.929 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.929 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.929 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.929 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 ceph-mon[74360]: pgmap v785: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.929 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.929 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.930 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.930 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.930 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.930 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.930 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.930 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.930 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.931 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.931 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.931 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.931 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.931 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.931 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.931 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.932 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.932 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.932 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.932 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.932 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.932 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.932 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.933 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.933 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.933 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.933 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.933 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.933 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.934 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.934 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.934 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.934 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.934 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.935 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.935 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.935 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.935 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.935 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.935 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.936 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.936 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.936 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.936 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.936 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.936 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.936 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.937 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.937 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.937 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.937 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.937 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.937 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.937 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.938 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.938 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.938 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.938 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.938 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.938 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.938 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.939 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.939 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.939 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.939 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.939 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.939 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.939 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.940 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.940 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.940 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.940 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.940 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.940 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.940 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.940 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.941 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.941 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.941 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.941 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.941 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.941 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.942 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.942 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.942 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.942 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.942 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.942 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.942 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.943 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.943 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.943 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.943 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.943 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.943 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.943 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.944 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.944 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.944 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.944 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.944 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.944 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.945 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.945 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.945 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.945 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.945 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.945 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.946 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.946 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.946 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.946 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.946 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.946 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.946 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.947 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.947 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.947 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.947 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.947 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.947 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.947 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.948 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.948 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.948 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.948 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.948 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.948 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.948 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.949 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.949 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.949 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.949 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.949 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.949 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.949 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.950 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.950 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.950 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.950 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.950 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.950 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.951 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.951 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.951 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.951 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.951 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.951 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.951 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.952 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.952 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.952 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.952 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.952 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.952 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.952 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.953 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.953 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.953 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.953 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.953 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.953 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.953 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.954 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.954 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.954 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.954 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.954 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.954 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.954 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.955 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.955 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.955 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.955 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.955 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.955 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.955 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.956 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.956 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.956 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.956 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.956 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.956 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.957 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.957 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.957 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.957 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.957 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.957 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.958 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.958 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.958 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.958 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.958 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.958 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.959 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.959 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.959 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.959 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.959 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.959 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.959 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.960 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.960 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.960 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.960 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.960 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.961 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.961 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.961 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.961 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.961 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.961 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.961 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.962 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.962 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.962 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.962 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.962 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.962 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.962 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.963 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.963 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.963 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.963 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.963 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.963 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.963 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.964 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.964 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.964 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.964 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.964 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.964 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.964 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.965 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.965 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.965 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.965 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.965 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.965 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.965 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.965 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.966 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.966 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.966 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.966 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.966 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.966 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.966 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.967 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.967 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.967 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.967 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.967 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.967 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.967 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.968 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.968 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.968 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.968 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.968 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.968 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.968 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.969 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.969 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.969 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.969 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.969 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.969 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.969 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.969 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.970 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.970 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.970 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.970 249055 DEBUG oslo_service.service [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.971 249055 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.991 249055 DEBUG nova.virt.libvirt.host [None req-d498a5ea-3498-4835-8e0d-db2e5510c5d0 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.992 249055 DEBUG nova.virt.libvirt.host [None req-d498a5ea-3498-4835-8e0d-db2e5510c5d0 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.992 249055 DEBUG nova.virt.libvirt.host [None req-d498a5ea-3498-4835-8e0d-db2e5510c5d0 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Jan 20 14:17:59 compute-0 nova_compute[249051]: 2026-01-20 14:17:59.993 249055 DEBUG nova.virt.libvirt.host [None req-d498a5ea-3498-4835-8e0d-db2e5510c5d0 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Jan 20 14:18:00 compute-0 systemd[1]: Starting libvirt QEMU daemon...
Jan 20 14:18:00 compute-0 systemd[1]: Started libvirt QEMU daemon.
Jan 20 14:18:00 compute-0 nova_compute[249051]: 2026-01-20 14:18:00.062 249055 DEBUG nova.virt.libvirt.host [None req-d498a5ea-3498-4835-8e0d-db2e5510c5d0 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f715b7d7670> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Jan 20 14:18:00 compute-0 nova_compute[249051]: 2026-01-20 14:18:00.064 249055 DEBUG nova.virt.libvirt.host [None req-d498a5ea-3498-4835-8e0d-db2e5510c5d0 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f715b7d7670> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Jan 20 14:18:00 compute-0 nova_compute[249051]: 2026-01-20 14:18:00.064 249055 INFO nova.virt.libvirt.driver [None req-d498a5ea-3498-4835-8e0d-db2e5510c5d0 - - - - - -] Connection event '1' reason 'None'
Jan 20 14:18:00 compute-0 nova_compute[249051]: 2026-01-20 14:18:00.101 249055 WARNING nova.virt.libvirt.driver [None req-d498a5ea-3498-4835-8e0d-db2e5510c5d0 - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Jan 20 14:18:00 compute-0 nova_compute[249051]: 2026-01-20 14:18:00.102 249055 DEBUG nova.virt.libvirt.volume.mount [None req-d498a5ea-3498-4835-8e0d-db2e5510c5d0 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Jan 20 14:18:00 compute-0 sudo[249723]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-adrdsvbowvpseckecktjnwzqajjwvrmm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918680.123947-3571-42601508571906/AnsiballZ_podman_container.py'
Jan 20 14:18:00 compute-0 sudo[249723]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:18:00 compute-0 python3.9[249731]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Jan 20 14:18:00 compute-0 nova_compute[249051]: 2026-01-20 14:18:00.940 249055 INFO nova.virt.libvirt.host [None req-d498a5ea-3498-4835-8e0d-db2e5510c5d0 - - - - - -] Libvirt host capabilities <capabilities>
Jan 20 14:18:00 compute-0 nova_compute[249051]: 
Jan 20 14:18:00 compute-0 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 20 14:18:00 compute-0 nova_compute[249051]:   <host>
Jan 20 14:18:00 compute-0 nova_compute[249051]:     <uuid>35085f33-1a27-41e3-805d-02c7ac6a1d7f</uuid>
Jan 20 14:18:00 compute-0 nova_compute[249051]:     <cpu>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <arch>x86_64</arch>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <model>EPYC-Rome-v4</model>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <vendor>AMD</vendor>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <microcode version='16777317'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <signature family='23' model='49' stepping='0'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <maxphysaddr mode='emulate' bits='40'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <feature name='x2apic'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <feature name='tsc-deadline'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <feature name='osxsave'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <feature name='hypervisor'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <feature name='tsc_adjust'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <feature name='spec-ctrl'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <feature name='stibp'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <feature name='arch-capabilities'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <feature name='ssbd'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <feature name='cmp_legacy'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <feature name='topoext'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <feature name='virt-ssbd'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <feature name='lbrv'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <feature name='tsc-scale'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <feature name='vmcb-clean'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <feature name='pause-filter'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <feature name='pfthreshold'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <feature name='svme-addr-chk'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <feature name='rdctl-no'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <feature name='skip-l1dfl-vmentry'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <feature name='mds-no'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <feature name='pschange-mc-no'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <pages unit='KiB' size='4'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <pages unit='KiB' size='2048'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <pages unit='KiB' size='1048576'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:     </cpu>
Jan 20 14:18:00 compute-0 nova_compute[249051]:     <power_management>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <suspend_mem/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:     </power_management>
Jan 20 14:18:00 compute-0 nova_compute[249051]:     <iommu support='no'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:     <migration_features>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <live/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <uri_transports>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <uri_transport>tcp</uri_transport>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <uri_transport>rdma</uri_transport>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       </uri_transports>
Jan 20 14:18:00 compute-0 nova_compute[249051]:     </migration_features>
Jan 20 14:18:00 compute-0 nova_compute[249051]:     <topology>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <cells num='1'>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <cell id='0'>
Jan 20 14:18:00 compute-0 nova_compute[249051]:           <memory unit='KiB'>7864308</memory>
Jan 20 14:18:00 compute-0 nova_compute[249051]:           <pages unit='KiB' size='4'>1966077</pages>
Jan 20 14:18:00 compute-0 nova_compute[249051]:           <pages unit='KiB' size='2048'>0</pages>
Jan 20 14:18:00 compute-0 nova_compute[249051]:           <pages unit='KiB' size='1048576'>0</pages>
Jan 20 14:18:00 compute-0 nova_compute[249051]:           <distances>
Jan 20 14:18:00 compute-0 nova_compute[249051]:             <sibling id='0' value='10'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:           </distances>
Jan 20 14:18:00 compute-0 nova_compute[249051]:           <cpus num='8'>
Jan 20 14:18:00 compute-0 nova_compute[249051]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:           </cpus>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         </cell>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       </cells>
Jan 20 14:18:00 compute-0 nova_compute[249051]:     </topology>
Jan 20 14:18:00 compute-0 nova_compute[249051]:     <cache>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:     </cache>
Jan 20 14:18:00 compute-0 nova_compute[249051]:     <secmodel>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <model>selinux</model>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <doi>0</doi>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Jan 20 14:18:00 compute-0 nova_compute[249051]:     </secmodel>
Jan 20 14:18:00 compute-0 nova_compute[249051]:     <secmodel>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <model>dac</model>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <doi>0</doi>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <baselabel type='kvm'>+107:+107</baselabel>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <baselabel type='qemu'>+107:+107</baselabel>
Jan 20 14:18:00 compute-0 nova_compute[249051]:     </secmodel>
Jan 20 14:18:00 compute-0 nova_compute[249051]:   </host>
Jan 20 14:18:00 compute-0 nova_compute[249051]: 
Jan 20 14:18:00 compute-0 nova_compute[249051]:   <guest>
Jan 20 14:18:00 compute-0 nova_compute[249051]:     <os_type>hvm</os_type>
Jan 20 14:18:00 compute-0 nova_compute[249051]:     <arch name='i686'>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <wordsize>32</wordsize>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <domain type='qemu'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <domain type='kvm'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:     </arch>
Jan 20 14:18:00 compute-0 nova_compute[249051]:     <features>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <pae/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <nonpae/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <acpi default='on' toggle='yes'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <apic default='on' toggle='no'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <cpuselection/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <deviceboot/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <disksnapshot default='on' toggle='no'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <externalSnapshot/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:     </features>
Jan 20 14:18:00 compute-0 nova_compute[249051]:   </guest>
Jan 20 14:18:00 compute-0 nova_compute[249051]: 
Jan 20 14:18:00 compute-0 nova_compute[249051]:   <guest>
Jan 20 14:18:00 compute-0 nova_compute[249051]:     <os_type>hvm</os_type>
Jan 20 14:18:00 compute-0 nova_compute[249051]:     <arch name='x86_64'>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <wordsize>64</wordsize>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <domain type='qemu'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <domain type='kvm'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:     </arch>
Jan 20 14:18:00 compute-0 nova_compute[249051]:     <features>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <acpi default='on' toggle='yes'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <apic default='on' toggle='no'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <cpuselection/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <deviceboot/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <disksnapshot default='on' toggle='no'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <externalSnapshot/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:     </features>
Jan 20 14:18:00 compute-0 nova_compute[249051]:   </guest>
Jan 20 14:18:00 compute-0 nova_compute[249051]: 
Jan 20 14:18:00 compute-0 nova_compute[249051]: </capabilities>
Jan 20 14:18:00 compute-0 nova_compute[249051]: 
Jan 20 14:18:00 compute-0 nova_compute[249051]: 2026-01-20 14:18:00.947 249055 DEBUG nova.virt.libvirt.host [None req-d498a5ea-3498-4835-8e0d-db2e5510c5d0 - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Jan 20 14:18:00 compute-0 nova_compute[249051]: 2026-01-20 14:18:00.976 249055 DEBUG nova.virt.libvirt.host [None req-d498a5ea-3498-4835-8e0d-db2e5510c5d0 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Jan 20 14:18:00 compute-0 nova_compute[249051]: <domainCapabilities>
Jan 20 14:18:00 compute-0 nova_compute[249051]:   <path>/usr/libexec/qemu-kvm</path>
Jan 20 14:18:00 compute-0 nova_compute[249051]:   <domain>kvm</domain>
Jan 20 14:18:00 compute-0 nova_compute[249051]:   <machine>pc-i440fx-rhel7.6.0</machine>
Jan 20 14:18:00 compute-0 nova_compute[249051]:   <arch>i686</arch>
Jan 20 14:18:00 compute-0 nova_compute[249051]:   <vcpu max='240'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:   <iothreads supported='yes'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:   <os supported='yes'>
Jan 20 14:18:00 compute-0 nova_compute[249051]:     <enum name='firmware'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:     <loader supported='yes'>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <enum name='type'>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <value>rom</value>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <value>pflash</value>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       </enum>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <enum name='readonly'>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <value>yes</value>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <value>no</value>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       </enum>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <enum name='secure'>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <value>no</value>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       </enum>
Jan 20 14:18:00 compute-0 nova_compute[249051]:     </loader>
Jan 20 14:18:00 compute-0 nova_compute[249051]:   </os>
Jan 20 14:18:00 compute-0 nova_compute[249051]:   <cpu>
Jan 20 14:18:00 compute-0 nova_compute[249051]:     <mode name='host-passthrough' supported='yes'>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <enum name='hostPassthroughMigratable'>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <value>on</value>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <value>off</value>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       </enum>
Jan 20 14:18:00 compute-0 nova_compute[249051]:     </mode>
Jan 20 14:18:00 compute-0 nova_compute[249051]:     <mode name='maximum' supported='yes'>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <enum name='maximumMigratable'>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <value>on</value>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <value>off</value>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       </enum>
Jan 20 14:18:00 compute-0 nova_compute[249051]:     </mode>
Jan 20 14:18:00 compute-0 nova_compute[249051]:     <mode name='host-model' supported='yes'>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <vendor>AMD</vendor>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <feature policy='require' name='x2apic'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <feature policy='require' name='tsc-deadline'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <feature policy='require' name='hypervisor'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <feature policy='require' name='tsc_adjust'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <feature policy='require' name='spec-ctrl'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <feature policy='require' name='stibp'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <feature policy='require' name='ssbd'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <feature policy='require' name='cmp_legacy'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <feature policy='require' name='overflow-recov'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <feature policy='require' name='succor'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <feature policy='require' name='ibrs'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <feature policy='require' name='amd-ssbd'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <feature policy='require' name='virt-ssbd'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <feature policy='require' name='lbrv'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <feature policy='require' name='tsc-scale'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <feature policy='require' name='vmcb-clean'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <feature policy='require' name='flushbyasid'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <feature policy='require' name='pause-filter'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <feature policy='require' name='pfthreshold'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <feature policy='require' name='svme-addr-chk'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <feature policy='disable' name='xsaves'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:     </mode>
Jan 20 14:18:00 compute-0 nova_compute[249051]:     <mode name='custom' supported='yes'>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <blockers model='Broadwell'>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='hle'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='rtm'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <blockers model='Broadwell-IBRS'>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='hle'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='rtm'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <blockers model='Broadwell-noTSX'>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <blockers model='Broadwell-v1'>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='hle'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='rtm'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <blockers model='Broadwell-v2'>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <blockers model='Broadwell-v3'>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='hle'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='rtm'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <blockers model='Broadwell-v4'>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <blockers model='Cascadelake-Server'>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512vnni'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='hle'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='rtm'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512vnni'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='ibrs-all'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <blockers model='Cascadelake-Server-v1'>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512vnni'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='hle'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='rtm'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <blockers model='Cascadelake-Server-v2'>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512vnni'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='hle'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='ibrs-all'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='rtm'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <blockers model='Cascadelake-Server-v3'>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512vnni'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='ibrs-all'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <blockers model='Cascadelake-Server-v4'>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512vnni'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='ibrs-all'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <blockers model='Cascadelake-Server-v5'>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512vnni'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='ibrs-all'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <blockers model='ClearwaterForest'>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx-ifma'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx-ne-convert'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx-vnni'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx-vnni-int16'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx-vnni-int8'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='bhi-ctrl'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='bhi-no'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='cldemote'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='cmpccxadd'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='ddpd-u'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='fbsdp-no'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='fsrm'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='fsrs'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='gfni'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='ibrs-all'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='intel-psfd'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='ipred-ctrl'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='lam'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='mcdt-no'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='movdir64b'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='movdiri'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='pbrsb-no'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='prefetchiti'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='psdp-no'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='rrsba-ctrl'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='serialize'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='sha512'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='sm3'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='sm4'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='ss'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='vaes'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <blockers model='ClearwaterForest-v1'>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx-ifma'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx-ne-convert'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx-vnni'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx-vnni-int16'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx-vnni-int8'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='bhi-ctrl'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='bhi-no'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='cldemote'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='cmpccxadd'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='ddpd-u'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='fbsdp-no'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='fsrm'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='fsrs'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='gfni'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='ibrs-all'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='intel-psfd'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='ipred-ctrl'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='lam'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='mcdt-no'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='movdir64b'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='movdiri'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='pbrsb-no'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='prefetchiti'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='psdp-no'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='rrsba-ctrl'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='serialize'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='sha512'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='sm3'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='sm4'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='ss'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='vaes'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <blockers model='Cooperlake'>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512-bf16'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512vnni'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='hle'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='ibrs-all'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='rtm'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='taa-no'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <blockers model='Cooperlake-v1'>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512-bf16'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512vnni'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='hle'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='ibrs-all'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='rtm'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='taa-no'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <blockers model='Cooperlake-v2'>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512-bf16'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512vnni'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='hle'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='ibrs-all'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='rtm'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='taa-no'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <blockers model='Denverton'>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='mpx'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <blockers model='Denverton-v1'>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='mpx'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <blockers model='Denverton-v2'>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <blockers model='Denverton-v3'>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <blockers model='Dhyana-v2'>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <blockers model='EPYC-Genoa'>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='amd-psfd'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='auto-ibrs'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512-bf16'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512bitalg'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512ifma'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512vbmi'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512vnni'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='fsrm'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='gfni'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='la57'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='no-nested-data-bp'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='null-sel-clr-base'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='stibp-always-on'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='vaes'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <blockers model='EPYC-Genoa-v1'>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='amd-psfd'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='auto-ibrs'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512-bf16'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512bitalg'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512ifma'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512vbmi'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512vnni'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='fsrm'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='gfni'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='la57'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='no-nested-data-bp'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='null-sel-clr-base'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='stibp-always-on'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='vaes'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <blockers model='EPYC-Genoa-v2'>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='amd-psfd'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='auto-ibrs'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512-bf16'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512bitalg'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512ifma'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512vbmi'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512vnni'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='fs-gs-base-ns'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='fsrm'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='gfni'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='la57'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='no-nested-data-bp'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='null-sel-clr-base'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='perfmon-v2'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='stibp-always-on'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='vaes'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <blockers model='EPYC-Milan'>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='fsrm'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <blockers model='EPYC-Milan-v1'>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='fsrm'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <blockers model='EPYC-Milan-v2'>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='amd-psfd'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='fsrm'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='no-nested-data-bp'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='null-sel-clr-base'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='stibp-always-on'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='vaes'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <blockers model='EPYC-Milan-v3'>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='amd-psfd'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='fsrm'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='no-nested-data-bp'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='null-sel-clr-base'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='stibp-always-on'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='vaes'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <blockers model='EPYC-Rome'>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <blockers model='EPYC-Rome-v1'>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <blockers model='EPYC-Rome-v2'>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <blockers model='EPYC-Rome-v3'>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <blockers model='EPYC-Turin'>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='amd-psfd'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='auto-ibrs'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx-vnni'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512-bf16'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512-vp2intersect'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512bitalg'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512ifma'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512vbmi'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512vnni'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='fs-gs-base-ns'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='fsrm'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='gfni'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='ibpb-brtype'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='la57'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='movdir64b'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='movdiri'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='no-nested-data-bp'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='null-sel-clr-base'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='perfmon-v2'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='prefetchi'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='sbpb'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='srso-user-kernel-no'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='stibp-always-on'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='vaes'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <blockers model='EPYC-Turin-v1'>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='amd-psfd'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='auto-ibrs'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx-vnni'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512-bf16'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512-vp2intersect'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512bitalg'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512ifma'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512vbmi'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512vnni'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='fs-gs-base-ns'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='fsrm'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='gfni'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='ibpb-brtype'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='la57'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='movdir64b'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='movdiri'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='no-nested-data-bp'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='null-sel-clr-base'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='perfmon-v2'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='prefetchi'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='sbpb'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='srso-user-kernel-no'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='stibp-always-on'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='vaes'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <blockers model='EPYC-v3'>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <blockers model='EPYC-v4'>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <blockers model='EPYC-v5'>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <blockers model='GraniteRapids'>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='amx-bf16'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='amx-fp16'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='amx-int8'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='amx-tile'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx-vnni'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512-bf16'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512-fp16'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512bitalg'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512ifma'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512vbmi'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512vnni'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='fbsdp-no'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='fsrc'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='fsrm'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='fsrs'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='fzrm'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='gfni'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='hle'/>
Jan 20 14:18:00 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v786: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='ibrs-all'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='la57'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='mcdt-no'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='pbrsb-no'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='prefetchiti'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='psdp-no'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='rtm'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='serialize'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='taa-no'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='tsx-ldtrk'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='vaes'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='xfd'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <blockers model='GraniteRapids-v1'>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='amx-bf16'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='amx-fp16'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='amx-int8'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='amx-tile'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx-vnni'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512-bf16'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512-fp16'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512bitalg'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512ifma'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512vbmi'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512vnni'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='fbsdp-no'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='fsrc'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='fsrm'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='fsrs'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='fzrm'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='gfni'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='hle'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='ibrs-all'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='la57'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='mcdt-no'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='pbrsb-no'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='prefetchiti'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='psdp-no'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='rtm'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='serialize'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='taa-no'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='tsx-ldtrk'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='vaes'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='xfd'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <blockers model='GraniteRapids-v2'>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='amx-bf16'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='amx-fp16'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='amx-int8'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='amx-tile'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx-vnni'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx10'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx10-128'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx10-256'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx10-512'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512-bf16'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512-fp16'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512bitalg'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512ifma'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512vbmi'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512vnni'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='cldemote'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='fbsdp-no'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='fsrc'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='fsrm'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='fsrs'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='fzrm'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='gfni'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='hle'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='ibrs-all'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='la57'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='mcdt-no'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='movdir64b'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='movdiri'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='pbrsb-no'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='prefetchiti'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='psdp-no'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='rtm'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='serialize'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='ss'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='taa-no'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='tsx-ldtrk'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='vaes'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='xfd'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <blockers model='GraniteRapids-v3'>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='amx-bf16'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='amx-fp16'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='amx-int8'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='amx-tile'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx-vnni'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx10'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx10-128'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx10-256'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx10-512'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512-bf16'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512-fp16'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512bitalg'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512ifma'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512vbmi'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='avx512vnni'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='cldemote'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='fbsdp-no'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='fsrc'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='fsrm'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='fsrs'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='fzrm'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='gfni'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='hle'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='ibrs-all'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='la57'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='mcdt-no'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='movdir64b'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='movdiri'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='pbrsb-no'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='prefetchiti'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='psdp-no'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='rtm'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='serialize'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='ss'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='taa-no'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='tsx-ldtrk'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='vaes'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='xfd'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <blockers model='Haswell'>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='hle'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='rtm'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <blockers model='Haswell-IBRS'>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='hle'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='rtm'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <blockers model='Haswell-noTSX'>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <blockers model='Haswell-v1'>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='hle'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='rtm'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <blockers model='Haswell-v2'>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 20 14:18:00 compute-0 nova_compute[249051]:       <blockers model='Haswell-v3'>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='hle'/>
Jan 20 14:18:00 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Haswell-v4'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Icelake-Server'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='hle'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='la57'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Icelake-Server-noTSX'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='la57'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Icelake-Server-v1'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='hle'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='la57'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Icelake-Server-v2'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='la57'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Icelake-Server-v3'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='la57'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='taa-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Icelake-Server-v4'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512ifma'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='la57'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='taa-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Icelake-Server-v5'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512ifma'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='la57'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='taa-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Icelake-Server-v6'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512ifma'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='la57'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='taa-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Icelake-Server-v7'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512ifma'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='hle'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='la57'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='taa-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='IvyBridge'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='IvyBridge-IBRS'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='IvyBridge-v1'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='IvyBridge-v2'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='KnightsMill'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-4fmaps'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-4vnniw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512er'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512pf'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ss'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='KnightsMill-v1'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-4fmaps'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-4vnniw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512er'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512pf'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ss'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Opteron_G4'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fma4'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xop'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Opteron_G4-v1'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fma4'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xop'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Opteron_G5'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fma4'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='tbm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xop'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Opteron_G5-v1'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fma4'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='tbm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xop'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='SapphireRapids'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='amx-bf16'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='amx-int8'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='amx-tile'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx-vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-bf16'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-fp16'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512ifma'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrc'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrs'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fzrm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='hle'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='la57'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='serialize'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='taa-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='tsx-ldtrk'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xfd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='SapphireRapids-v1'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='amx-bf16'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='amx-int8'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='amx-tile'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx-vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-bf16'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-fp16'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512ifma'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrc'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrs'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fzrm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='hle'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='la57'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='serialize'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='taa-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='tsx-ldtrk'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xfd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='SapphireRapids-v2'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='amx-bf16'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='amx-int8'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='amx-tile'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx-vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-bf16'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-fp16'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512ifma'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fbsdp-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrc'/>
Jan 20 14:18:01 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:18:01 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:18:01.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrs'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fzrm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='hle'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='la57'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='psdp-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='serialize'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='taa-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='tsx-ldtrk'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xfd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='SapphireRapids-v3'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='amx-bf16'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='amx-int8'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='amx-tile'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx-vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-bf16'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-fp16'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512ifma'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='cldemote'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fbsdp-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrc'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrs'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fzrm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='hle'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='la57'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='movdir64b'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='movdiri'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='psdp-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='serialize'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ss'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='taa-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='tsx-ldtrk'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xfd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='SapphireRapids-v4'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='amx-bf16'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='amx-int8'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='amx-tile'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx-vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-bf16'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-fp16'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512ifma'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='cldemote'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fbsdp-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrc'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrs'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fzrm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='hle'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='la57'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='movdir64b'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='movdiri'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='psdp-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='serialize'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ss'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='taa-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='tsx-ldtrk'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xfd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='SierraForest'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx-ifma'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx-ne-convert'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx-vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx-vnni-int8'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='cmpccxadd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fbsdp-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrs'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='mcdt-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pbrsb-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='psdp-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='serialize'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='SierraForest-v1'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx-ifma'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx-ne-convert'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx-vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx-vnni-int8'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='cmpccxadd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fbsdp-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrs'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='mcdt-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pbrsb-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='psdp-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='serialize'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='SierraForest-v2'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx-ifma'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx-ne-convert'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx-vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx-vnni-int8'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='bhi-ctrl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='cldemote'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='cmpccxadd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fbsdp-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrs'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='intel-psfd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ipred-ctrl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='lam'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='mcdt-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='movdir64b'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='movdiri'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pbrsb-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='psdp-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='rrsba-ctrl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='serialize'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ss'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='SierraForest-v3'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx-ifma'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx-ne-convert'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx-vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx-vnni-int8'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='bhi-ctrl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='cldemote'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='cmpccxadd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fbsdp-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrs'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='intel-psfd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ipred-ctrl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='lam'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='mcdt-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='movdir64b'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='movdiri'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pbrsb-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='psdp-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='rrsba-ctrl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='serialize'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ss'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Skylake-Client'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='hle'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Skylake-Client-IBRS'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='hle'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Skylake-Client-v1'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='hle'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Skylake-Client-v2'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='hle'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Skylake-Client-v3'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Skylake-Client-v4'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Skylake-Server'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='hle'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Skylake-Server-IBRS'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='hle'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Skylake-Server-v1'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='hle'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Skylake-Server-v2'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='hle'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Skylake-Server-v3'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Skylake-Server-v4'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Skylake-Server-v5'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Snowridge'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='cldemote'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='core-capability'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='movdir64b'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='movdiri'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='mpx'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='split-lock-detect'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Snowridge-v1'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='cldemote'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='core-capability'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='movdir64b'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='movdiri'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='mpx'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='split-lock-detect'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Snowridge-v2'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='cldemote'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='core-capability'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='movdir64b'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='movdiri'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='split-lock-detect'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Snowridge-v3'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='cldemote'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='core-capability'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='movdir64b'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='movdiri'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='split-lock-detect'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Snowridge-v4'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='cldemote'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='movdir64b'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='movdiri'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='athlon'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='3dnow'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='3dnowext'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='athlon-v1'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='3dnow'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='3dnowext'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='core2duo'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ss'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='core2duo-v1'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ss'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='coreduo'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ss'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='coreduo-v1'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ss'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='n270'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ss'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='n270-v1'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ss'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='phenom'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='3dnow'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='3dnowext'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='phenom-v1'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='3dnow'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='3dnowext'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     </mode>
Jan 20 14:18:01 compute-0 nova_compute[249051]:   </cpu>
Jan 20 14:18:01 compute-0 nova_compute[249051]:   <memoryBacking supported='yes'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     <enum name='sourceType'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <value>file</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <value>anonymous</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <value>memfd</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     </enum>
Jan 20 14:18:01 compute-0 nova_compute[249051]:   </memoryBacking>
Jan 20 14:18:01 compute-0 nova_compute[249051]:   <devices>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     <disk supported='yes'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <enum name='diskDevice'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>disk</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>cdrom</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>floppy</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>lun</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </enum>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <enum name='bus'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>ide</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>fdc</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>scsi</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>virtio</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>usb</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>sata</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </enum>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <enum name='model'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>virtio</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>virtio-transitional</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>virtio-non-transitional</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </enum>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     </disk>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     <graphics supported='yes'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <enum name='type'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>vnc</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>egl-headless</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>dbus</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </enum>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     </graphics>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     <video supported='yes'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <enum name='modelType'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>vga</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>cirrus</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>virtio</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>none</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>bochs</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>ramfb</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </enum>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     </video>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     <hostdev supported='yes'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <enum name='mode'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>subsystem</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </enum>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <enum name='startupPolicy'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>default</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>mandatory</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>requisite</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>optional</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </enum>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <enum name='subsysType'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>usb</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>pci</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>scsi</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </enum>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <enum name='capsType'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <enum name='pciBackend'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     </hostdev>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     <rng supported='yes'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <enum name='model'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>virtio</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>virtio-transitional</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>virtio-non-transitional</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </enum>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <enum name='backendModel'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>random</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>egd</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>builtin</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </enum>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     </rng>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     <filesystem supported='yes'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <enum name='driverType'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>path</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>handle</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>virtiofs</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </enum>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     </filesystem>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     <tpm supported='yes'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <enum name='model'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>tpm-tis</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>tpm-crb</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </enum>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <enum name='backendModel'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>emulator</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>external</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </enum>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <enum name='backendVersion'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>2.0</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </enum>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     </tpm>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     <redirdev supported='yes'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <enum name='bus'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>usb</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </enum>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     </redirdev>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     <channel supported='yes'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <enum name='type'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>pty</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>unix</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </enum>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     </channel>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     <crypto supported='yes'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <enum name='model'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <enum name='type'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>qemu</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </enum>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <enum name='backendModel'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>builtin</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </enum>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     </crypto>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     <interface supported='yes'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <enum name='backendType'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>default</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>passt</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </enum>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     </interface>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     <panic supported='yes'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <enum name='model'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>isa</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>hyperv</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </enum>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     </panic>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     <console supported='yes'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <enum name='type'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>null</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>vc</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>pty</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>dev</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>file</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>pipe</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>stdio</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>udp</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>tcp</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>unix</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>qemu-vdagent</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>dbus</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </enum>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     </console>
Jan 20 14:18:01 compute-0 nova_compute[249051]:   </devices>
Jan 20 14:18:01 compute-0 nova_compute[249051]:   <features>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     <gic supported='no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     <vmcoreinfo supported='yes'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     <genid supported='yes'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     <backingStoreInput supported='yes'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     <backup supported='yes'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     <async-teardown supported='yes'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     <s390-pv supported='no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     <ps2 supported='yes'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     <tdx supported='no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     <sev supported='no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     <sgx supported='no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     <hyperv supported='yes'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <enum name='features'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>relaxed</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>vapic</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>spinlocks</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>vpindex</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>runtime</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>synic</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>stimer</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>reset</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>vendor_id</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>frequencies</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>reenlightenment</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>tlbflush</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>ipi</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>avic</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>emsr_bitmap</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>xmm_input</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </enum>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <defaults>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <spinlocks>4095</spinlocks>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <stimer_direct>on</stimer_direct>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <tlbflush_direct>on</tlbflush_direct>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <tlbflush_extended>on</tlbflush_extended>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </defaults>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     </hyperv>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     <launchSecurity supported='no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:   </features>
Jan 20 14:18:01 compute-0 nova_compute[249051]: </domainCapabilities>
Jan 20 14:18:01 compute-0 nova_compute[249051]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 20 14:18:01 compute-0 nova_compute[249051]: 2026-01-20 14:18:00.982 249055 DEBUG nova.virt.libvirt.host [None req-d498a5ea-3498-4835-8e0d-db2e5510c5d0 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Jan 20 14:18:01 compute-0 nova_compute[249051]: <domainCapabilities>
Jan 20 14:18:01 compute-0 nova_compute[249051]:   <path>/usr/libexec/qemu-kvm</path>
Jan 20 14:18:01 compute-0 nova_compute[249051]:   <domain>kvm</domain>
Jan 20 14:18:01 compute-0 nova_compute[249051]:   <machine>pc-q35-rhel9.8.0</machine>
Jan 20 14:18:01 compute-0 nova_compute[249051]:   <arch>i686</arch>
Jan 20 14:18:01 compute-0 nova_compute[249051]:   <vcpu max='4096'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:   <iothreads supported='yes'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:   <os supported='yes'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     <enum name='firmware'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     <loader supported='yes'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <enum name='type'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>rom</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>pflash</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </enum>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <enum name='readonly'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>yes</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>no</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </enum>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <enum name='secure'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>no</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </enum>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     </loader>
Jan 20 14:18:01 compute-0 nova_compute[249051]:   </os>
Jan 20 14:18:01 compute-0 nova_compute[249051]:   <cpu>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     <mode name='host-passthrough' supported='yes'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <enum name='hostPassthroughMigratable'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>on</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>off</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </enum>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     </mode>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     <mode name='maximum' supported='yes'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <enum name='maximumMigratable'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>on</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>off</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </enum>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     </mode>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     <mode name='host-model' supported='yes'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <vendor>AMD</vendor>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <feature policy='require' name='x2apic'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <feature policy='require' name='tsc-deadline'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <feature policy='require' name='hypervisor'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <feature policy='require' name='tsc_adjust'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <feature policy='require' name='spec-ctrl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <feature policy='require' name='stibp'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <feature policy='require' name='ssbd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <feature policy='require' name='cmp_legacy'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <feature policy='require' name='overflow-recov'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <feature policy='require' name='succor'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <feature policy='require' name='ibrs'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <feature policy='require' name='amd-ssbd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <feature policy='require' name='virt-ssbd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <feature policy='require' name='lbrv'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <feature policy='require' name='tsc-scale'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <feature policy='require' name='vmcb-clean'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <feature policy='require' name='flushbyasid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <feature policy='require' name='pause-filter'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <feature policy='require' name='pfthreshold'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <feature policy='require' name='svme-addr-chk'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <feature policy='disable' name='xsaves'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     </mode>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     <mode name='custom' supported='yes'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Broadwell'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='hle'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Broadwell-IBRS'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='hle'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Broadwell-noTSX'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Broadwell-v1'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='hle'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Broadwell-v2'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Broadwell-v3'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='hle'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Broadwell-v4'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Cascadelake-Server'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='hle'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Cascadelake-Server-v1'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='hle'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Cascadelake-Server-v2'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='hle'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Cascadelake-Server-v3'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Cascadelake-Server-v4'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Cascadelake-Server-v5'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='ClearwaterForest'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx-ifma'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx-ne-convert'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx-vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx-vnni-int16'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx-vnni-int8'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='bhi-ctrl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='bhi-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='cldemote'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='cmpccxadd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ddpd-u'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fbsdp-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrs'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='intel-psfd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ipred-ctrl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='lam'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='mcdt-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='movdir64b'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='movdiri'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pbrsb-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='prefetchiti'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='psdp-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='rrsba-ctrl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='serialize'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='sha512'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='sm3'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='sm4'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ss'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='ClearwaterForest-v1'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx-ifma'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx-ne-convert'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx-vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx-vnni-int16'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx-vnni-int8'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='bhi-ctrl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='bhi-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='cldemote'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='cmpccxadd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ddpd-u'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fbsdp-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrs'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='intel-psfd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ipred-ctrl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='lam'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='mcdt-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='movdir64b'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='movdiri'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pbrsb-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='prefetchiti'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='psdp-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='rrsba-ctrl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='serialize'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='sha512'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='sm3'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='sm4'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ss'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Cooperlake'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-bf16'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='hle'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='taa-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Cooperlake-v1'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-bf16'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='hle'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='taa-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Cooperlake-v2'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-bf16'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='hle'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='taa-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Denverton'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='mpx'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Denverton-v1'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='mpx'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Denverton-v2'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Denverton-v3'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Dhyana-v2'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='EPYC-Genoa'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='amd-psfd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='auto-ibrs'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-bf16'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512ifma'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='la57'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='no-nested-data-bp'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='null-sel-clr-base'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='stibp-always-on'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='EPYC-Genoa-v1'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='amd-psfd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='auto-ibrs'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-bf16'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512ifma'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='la57'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='no-nested-data-bp'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='null-sel-clr-base'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='stibp-always-on'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='EPYC-Genoa-v2'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='amd-psfd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='auto-ibrs'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-bf16'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512ifma'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fs-gs-base-ns'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='la57'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='no-nested-data-bp'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='null-sel-clr-base'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='perfmon-v2'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='stibp-always-on'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='EPYC-Milan'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='EPYC-Milan-v1'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='EPYC-Milan-v2'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='amd-psfd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='no-nested-data-bp'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='null-sel-clr-base'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='stibp-always-on'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='EPYC-Milan-v3'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='amd-psfd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='no-nested-data-bp'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='null-sel-clr-base'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='stibp-always-on'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='EPYC-Rome'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='EPYC-Rome-v1'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='EPYC-Rome-v2'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='EPYC-Rome-v3'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='EPYC-Turin'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='amd-psfd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='auto-ibrs'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx-vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-bf16'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-vp2intersect'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512ifma'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fs-gs-base-ns'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ibpb-brtype'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='la57'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='movdir64b'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='movdiri'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='no-nested-data-bp'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='null-sel-clr-base'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='perfmon-v2'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='prefetchi'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='sbpb'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='srso-user-kernel-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='stibp-always-on'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='EPYC-Turin-v1'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='amd-psfd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='auto-ibrs'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx-vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-bf16'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-vp2intersect'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512ifma'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fs-gs-base-ns'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ibpb-brtype'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='la57'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='movdir64b'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='movdiri'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='no-nested-data-bp'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='null-sel-clr-base'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='perfmon-v2'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='prefetchi'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='sbpb'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='srso-user-kernel-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='stibp-always-on'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='EPYC-v3'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='EPYC-v4'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='EPYC-v5'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='GraniteRapids'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='amx-bf16'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='amx-fp16'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='amx-int8'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='amx-tile'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx-vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-bf16'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-fp16'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512ifma'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fbsdp-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrc'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrs'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fzrm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='hle'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='la57'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='mcdt-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pbrsb-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='prefetchiti'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='psdp-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='serialize'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='taa-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='tsx-ldtrk'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xfd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='GraniteRapids-v1'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='amx-bf16'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='amx-fp16'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='amx-int8'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='amx-tile'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx-vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-bf16'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-fp16'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512ifma'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fbsdp-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrc'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrs'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fzrm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='hle'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='la57'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='mcdt-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pbrsb-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='prefetchiti'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='psdp-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='serialize'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='taa-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='tsx-ldtrk'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xfd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='GraniteRapids-v2'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='amx-bf16'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='amx-fp16'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='amx-int8'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='amx-tile'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx-vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx10'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx10-128'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx10-256'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx10-512'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-bf16'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-fp16'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512ifma'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='cldemote'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fbsdp-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrc'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrs'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fzrm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='hle'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='la57'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='mcdt-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='movdir64b'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='movdiri'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pbrsb-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='prefetchiti'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='psdp-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='serialize'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ss'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='taa-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='tsx-ldtrk'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xfd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='GraniteRapids-v3'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='amx-bf16'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='amx-fp16'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='amx-int8'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='amx-tile'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx-vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx10'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx10-128'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx10-256'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx10-512'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-bf16'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-fp16'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512ifma'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='cldemote'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fbsdp-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrc'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrs'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fzrm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='hle'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='la57'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='mcdt-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='movdir64b'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='movdiri'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pbrsb-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='prefetchiti'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='psdp-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='serialize'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ss'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='taa-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='tsx-ldtrk'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xfd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Haswell'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='hle'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Haswell-IBRS'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='hle'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Haswell-noTSX'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Haswell-v1'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='hle'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Haswell-v2'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Haswell-v3'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='hle'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Haswell-v4'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Icelake-Server'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='hle'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='la57'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Icelake-Server-noTSX'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='la57'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Icelake-Server-v1'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='hle'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='la57'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Icelake-Server-v2'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='la57'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Icelake-Server-v3'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='la57'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='taa-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Icelake-Server-v4'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512ifma'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='la57'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='taa-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Icelake-Server-v5'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512ifma'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='la57'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='taa-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Icelake-Server-v6'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512ifma'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='la57'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='taa-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Icelake-Server-v7'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512ifma'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='hle'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='la57'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='taa-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='IvyBridge'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='IvyBridge-IBRS'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='IvyBridge-v1'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='IvyBridge-v2'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='KnightsMill'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-4fmaps'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-4vnniw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512er'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512pf'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ss'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='KnightsMill-v1'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-4fmaps'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-4vnniw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512er'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512pf'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ss'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Opteron_G4'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fma4'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xop'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Opteron_G4-v1'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fma4'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xop'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Opteron_G5'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fma4'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='tbm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xop'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Opteron_G5-v1'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fma4'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='tbm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xop'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='SapphireRapids'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='amx-bf16'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='amx-int8'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='amx-tile'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx-vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-bf16'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-fp16'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512ifma'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrc'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrs'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fzrm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='hle'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='la57'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='serialize'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='taa-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='tsx-ldtrk'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xfd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='SapphireRapids-v1'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='amx-bf16'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='amx-int8'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='amx-tile'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx-vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-bf16'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-fp16'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512ifma'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrc'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrs'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fzrm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='hle'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='la57'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='serialize'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='taa-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='tsx-ldtrk'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xfd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='SapphireRapids-v2'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='amx-bf16'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='amx-int8'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='amx-tile'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx-vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-bf16'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-fp16'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512ifma'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fbsdp-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrc'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrs'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fzrm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='hle'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='la57'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='psdp-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='serialize'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='taa-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='tsx-ldtrk'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xfd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='SapphireRapids-v3'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='amx-bf16'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='amx-int8'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='amx-tile'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx-vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-bf16'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-fp16'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512ifma'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='cldemote'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fbsdp-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrc'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrs'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fzrm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='hle'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='la57'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='movdir64b'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='movdiri'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='psdp-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='serialize'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ss'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='taa-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='tsx-ldtrk'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xfd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='SapphireRapids-v4'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='amx-bf16'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='amx-int8'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='amx-tile'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx-vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-bf16'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-fp16'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512ifma'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='cldemote'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fbsdp-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrc'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrs'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fzrm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='hle'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='la57'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='movdir64b'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='movdiri'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='psdp-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='serialize'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ss'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='taa-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='tsx-ldtrk'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xfd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='SierraForest'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx-ifma'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx-ne-convert'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx-vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx-vnni-int8'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='cmpccxadd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fbsdp-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrs'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='mcdt-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pbrsb-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='psdp-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='serialize'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='SierraForest-v1'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx-ifma'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx-ne-convert'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx-vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx-vnni-int8'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='cmpccxadd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fbsdp-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrs'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='mcdt-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pbrsb-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='psdp-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='serialize'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='SierraForest-v2'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx-ifma'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx-ne-convert'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx-vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx-vnni-int8'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='bhi-ctrl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='cldemote'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='cmpccxadd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fbsdp-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrs'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='intel-psfd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ipred-ctrl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='lam'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='mcdt-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='movdir64b'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='movdiri'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pbrsb-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='psdp-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='rrsba-ctrl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='serialize'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ss'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='SierraForest-v3'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx-ifma'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx-ne-convert'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx-vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx-vnni-int8'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='bhi-ctrl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='cldemote'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='cmpccxadd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fbsdp-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrs'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='intel-psfd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ipred-ctrl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='lam'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='mcdt-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='movdir64b'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='movdiri'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pbrsb-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='psdp-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='rrsba-ctrl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='serialize'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ss'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Skylake-Client'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='hle'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Skylake-Client-IBRS'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='hle'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Skylake-Client-v1'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='hle'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Skylake-Client-v2'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='hle'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Skylake-Client-v3'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Skylake-Client-v4'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Skylake-Server'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='hle'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Skylake-Server-IBRS'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='hle'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Skylake-Server-v1'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='hle'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Skylake-Server-v2'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='hle'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Skylake-Server-v3'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Skylake-Server-v4'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Skylake-Server-v5'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Snowridge'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='cldemote'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='core-capability'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='movdir64b'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='movdiri'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='mpx'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='split-lock-detect'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Snowridge-v1'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='cldemote'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='core-capability'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='movdir64b'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='movdiri'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='mpx'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='split-lock-detect'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Snowridge-v2'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='cldemote'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='core-capability'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='movdir64b'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='movdiri'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='split-lock-detect'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Snowridge-v3'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='cldemote'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='core-capability'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='movdir64b'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='movdiri'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='split-lock-detect'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Snowridge-v4'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='cldemote'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='movdir64b'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='movdiri'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='athlon'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='3dnow'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='3dnowext'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='athlon-v1'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='3dnow'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='3dnowext'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='core2duo'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ss'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='core2duo-v1'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ss'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='coreduo'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ss'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='coreduo-v1'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ss'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='n270'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ss'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='n270-v1'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ss'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='phenom'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='3dnow'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='3dnowext'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='phenom-v1'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='3dnow'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='3dnowext'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     </mode>
Jan 20 14:18:01 compute-0 nova_compute[249051]:   </cpu>
Jan 20 14:18:01 compute-0 nova_compute[249051]:   <memoryBacking supported='yes'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     <enum name='sourceType'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <value>file</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <value>anonymous</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <value>memfd</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     </enum>
Jan 20 14:18:01 compute-0 nova_compute[249051]:   </memoryBacking>
Jan 20 14:18:01 compute-0 nova_compute[249051]:   <devices>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     <disk supported='yes'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <enum name='diskDevice'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>disk</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>cdrom</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>floppy</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>lun</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </enum>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <enum name='bus'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>fdc</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>scsi</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>virtio</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>usb</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>sata</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </enum>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <enum name='model'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>virtio</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>virtio-transitional</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>virtio-non-transitional</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </enum>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     </disk>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     <graphics supported='yes'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <enum name='type'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>vnc</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>egl-headless</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>dbus</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </enum>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     </graphics>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     <video supported='yes'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <enum name='modelType'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>vga</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>cirrus</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>virtio</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>none</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>bochs</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>ramfb</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </enum>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     </video>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     <hostdev supported='yes'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <enum name='mode'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>subsystem</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </enum>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <enum name='startupPolicy'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>default</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>mandatory</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>requisite</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>optional</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </enum>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <enum name='subsysType'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>usb</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>pci</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>scsi</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </enum>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <enum name='capsType'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <enum name='pciBackend'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     </hostdev>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     <rng supported='yes'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <enum name='model'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>virtio</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>virtio-transitional</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>virtio-non-transitional</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </enum>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <enum name='backendModel'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>random</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>egd</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>builtin</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </enum>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     </rng>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     <filesystem supported='yes'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <enum name='driverType'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>path</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>handle</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>virtiofs</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </enum>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     </filesystem>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     <tpm supported='yes'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <enum name='model'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>tpm-tis</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>tpm-crb</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </enum>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <enum name='backendModel'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>emulator</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>external</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </enum>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <enum name='backendVersion'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>2.0</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </enum>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     </tpm>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     <redirdev supported='yes'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <enum name='bus'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>usb</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </enum>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     </redirdev>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     <channel supported='yes'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <enum name='type'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>pty</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>unix</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </enum>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     </channel>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     <crypto supported='yes'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <enum name='model'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <enum name='type'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>qemu</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </enum>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <enum name='backendModel'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>builtin</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </enum>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     </crypto>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     <interface supported='yes'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <enum name='backendType'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>default</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>passt</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </enum>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     </interface>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     <panic supported='yes'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <enum name='model'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>isa</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>hyperv</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </enum>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     </panic>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     <console supported='yes'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <enum name='type'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>null</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>vc</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>pty</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>dev</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>file</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>pipe</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>stdio</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>udp</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>tcp</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>unix</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>qemu-vdagent</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>dbus</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </enum>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     </console>
Jan 20 14:18:01 compute-0 nova_compute[249051]:   </devices>
Jan 20 14:18:01 compute-0 nova_compute[249051]:   <features>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     <gic supported='no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     <vmcoreinfo supported='yes'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     <genid supported='yes'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     <backingStoreInput supported='yes'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     <backup supported='yes'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     <async-teardown supported='yes'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     <s390-pv supported='no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     <ps2 supported='yes'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     <tdx supported='no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     <sev supported='no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     <sgx supported='no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     <hyperv supported='yes'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <enum name='features'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>relaxed</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>vapic</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>spinlocks</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>vpindex</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>runtime</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>synic</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>stimer</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>reset</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>vendor_id</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>frequencies</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>reenlightenment</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>tlbflush</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>ipi</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>avic</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>emsr_bitmap</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>xmm_input</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </enum>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <defaults>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <spinlocks>4095</spinlocks>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <stimer_direct>on</stimer_direct>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <tlbflush_direct>on</tlbflush_direct>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <tlbflush_extended>on</tlbflush_extended>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </defaults>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     </hyperv>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     <launchSecurity supported='no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:   </features>
Jan 20 14:18:01 compute-0 nova_compute[249051]: </domainCapabilities>
Jan 20 14:18:01 compute-0 nova_compute[249051]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 20 14:18:01 compute-0 nova_compute[249051]: 2026-01-20 14:18:01.055 249055 DEBUG nova.virt.libvirt.host [None req-d498a5ea-3498-4835-8e0d-db2e5510c5d0 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Jan 20 14:18:01 compute-0 nova_compute[249051]: 2026-01-20 14:18:01.059 249055 DEBUG nova.virt.libvirt.host [None req-d498a5ea-3498-4835-8e0d-db2e5510c5d0 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Jan 20 14:18:01 compute-0 nova_compute[249051]: <domainCapabilities>
Jan 20 14:18:01 compute-0 nova_compute[249051]:   <path>/usr/libexec/qemu-kvm</path>
Jan 20 14:18:01 compute-0 nova_compute[249051]:   <domain>kvm</domain>
Jan 20 14:18:01 compute-0 nova_compute[249051]:   <machine>pc-i440fx-rhel7.6.0</machine>
Jan 20 14:18:01 compute-0 nova_compute[249051]:   <arch>x86_64</arch>
Jan 20 14:18:01 compute-0 nova_compute[249051]:   <vcpu max='240'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:   <iothreads supported='yes'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:   <os supported='yes'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     <enum name='firmware'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     <loader supported='yes'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <enum name='type'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>rom</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>pflash</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </enum>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <enum name='readonly'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>yes</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>no</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </enum>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <enum name='secure'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>no</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </enum>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     </loader>
Jan 20 14:18:01 compute-0 nova_compute[249051]:   </os>
Jan 20 14:18:01 compute-0 nova_compute[249051]:   <cpu>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     <mode name='host-passthrough' supported='yes'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <enum name='hostPassthroughMigratable'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>on</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>off</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </enum>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     </mode>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     <mode name='maximum' supported='yes'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <enum name='maximumMigratable'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>on</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>off</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </enum>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     </mode>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     <mode name='host-model' supported='yes'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <vendor>AMD</vendor>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <feature policy='require' name='x2apic'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <feature policy='require' name='tsc-deadline'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <feature policy='require' name='hypervisor'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <feature policy='require' name='tsc_adjust'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <feature policy='require' name='spec-ctrl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <feature policy='require' name='stibp'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <feature policy='require' name='ssbd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <feature policy='require' name='cmp_legacy'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <feature policy='require' name='overflow-recov'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <feature policy='require' name='succor'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <feature policy='require' name='ibrs'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <feature policy='require' name='amd-ssbd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <feature policy='require' name='virt-ssbd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <feature policy='require' name='lbrv'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <feature policy='require' name='tsc-scale'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <feature policy='require' name='vmcb-clean'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <feature policy='require' name='flushbyasid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <feature policy='require' name='pause-filter'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <feature policy='require' name='pfthreshold'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <feature policy='require' name='svme-addr-chk'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <feature policy='disable' name='xsaves'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     </mode>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     <mode name='custom' supported='yes'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Broadwell'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='hle'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Broadwell-IBRS'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='hle'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Broadwell-noTSX'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Broadwell-v1'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='hle'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Broadwell-v2'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Broadwell-v3'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='hle'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Broadwell-v4'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Cascadelake-Server'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='hle'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Cascadelake-Server-v1'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='hle'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Cascadelake-Server-v2'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='hle'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Cascadelake-Server-v3'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Cascadelake-Server-v4'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Cascadelake-Server-v5'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='ClearwaterForest'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx-ifma'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx-ne-convert'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx-vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx-vnni-int16'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx-vnni-int8'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='bhi-ctrl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='bhi-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='cldemote'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='cmpccxadd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ddpd-u'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fbsdp-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrs'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='intel-psfd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ipred-ctrl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='lam'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='mcdt-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='movdir64b'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='movdiri'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pbrsb-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='prefetchiti'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='psdp-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='rrsba-ctrl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='serialize'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='sha512'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='sm3'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='sm4'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ss'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='ClearwaterForest-v1'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx-ifma'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx-ne-convert'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx-vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx-vnni-int16'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx-vnni-int8'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='bhi-ctrl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='bhi-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='cldemote'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='cmpccxadd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ddpd-u'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fbsdp-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrs'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='intel-psfd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ipred-ctrl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='lam'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='mcdt-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='movdir64b'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='movdiri'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pbrsb-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='prefetchiti'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='psdp-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='rrsba-ctrl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='serialize'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='sha512'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='sm3'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='sm4'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ss'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Cooperlake'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-bf16'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='hle'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='taa-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Cooperlake-v1'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-bf16'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='hle'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='taa-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Cooperlake-v2'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-bf16'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='hle'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='taa-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Denverton'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='mpx'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Denverton-v1'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='mpx'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Denverton-v2'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Denverton-v3'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Dhyana-v2'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='EPYC-Genoa'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='amd-psfd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='auto-ibrs'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-bf16'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512ifma'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='la57'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='no-nested-data-bp'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='null-sel-clr-base'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='stibp-always-on'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='EPYC-Genoa-v1'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='amd-psfd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='auto-ibrs'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-bf16'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512ifma'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='la57'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='no-nested-data-bp'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='null-sel-clr-base'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='stibp-always-on'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='EPYC-Genoa-v2'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='amd-psfd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='auto-ibrs'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-bf16'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512ifma'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fs-gs-base-ns'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='la57'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='no-nested-data-bp'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='null-sel-clr-base'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='perfmon-v2'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='stibp-always-on'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='EPYC-Milan'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='EPYC-Milan-v1'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='EPYC-Milan-v2'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='amd-psfd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='no-nested-data-bp'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='null-sel-clr-base'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='stibp-always-on'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='EPYC-Milan-v3'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='amd-psfd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='no-nested-data-bp'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='null-sel-clr-base'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='stibp-always-on'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='EPYC-Rome'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='EPYC-Rome-v1'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='EPYC-Rome-v2'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='EPYC-Rome-v3'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='EPYC-Turin'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='amd-psfd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='auto-ibrs'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx-vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-bf16'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-vp2intersect'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512ifma'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fs-gs-base-ns'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ibpb-brtype'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='la57'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='movdir64b'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='movdiri'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='no-nested-data-bp'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='null-sel-clr-base'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='perfmon-v2'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='prefetchi'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='sbpb'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='srso-user-kernel-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='stibp-always-on'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='EPYC-Turin-v1'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='amd-psfd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='auto-ibrs'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx-vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-bf16'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-vp2intersect'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512ifma'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fs-gs-base-ns'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ibpb-brtype'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='la57'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='movdir64b'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='movdiri'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='no-nested-data-bp'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='null-sel-clr-base'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='perfmon-v2'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='prefetchi'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='sbpb'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='srso-user-kernel-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='stibp-always-on'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='EPYC-v3'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='EPYC-v4'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='EPYC-v5'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='GraniteRapids'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='amx-bf16'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='amx-fp16'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='amx-int8'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='amx-tile'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx-vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-bf16'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-fp16'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512ifma'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fbsdp-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrc'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrs'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fzrm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='hle'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='la57'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='mcdt-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pbrsb-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='prefetchiti'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='psdp-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='serialize'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='taa-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='tsx-ldtrk'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xfd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='GraniteRapids-v1'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='amx-bf16'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='amx-fp16'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='amx-int8'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='amx-tile'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx-vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-bf16'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-fp16'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512ifma'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fbsdp-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrc'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrs'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fzrm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='hle'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='la57'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='mcdt-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pbrsb-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='prefetchiti'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='psdp-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='serialize'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='taa-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='tsx-ldtrk'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xfd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='GraniteRapids-v2'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='amx-bf16'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='amx-fp16'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='amx-int8'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='amx-tile'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx-vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx10'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx10-128'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx10-256'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx10-512'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-bf16'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-fp16'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512ifma'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='cldemote'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fbsdp-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrc'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrs'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fzrm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='hle'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='la57'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='mcdt-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='movdir64b'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='movdiri'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pbrsb-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='prefetchiti'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='psdp-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='serialize'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ss'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='taa-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='tsx-ldtrk'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xfd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='GraniteRapids-v3'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='amx-bf16'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='amx-fp16'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='amx-int8'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='amx-tile'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx-vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx10'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx10-128'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx10-256'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx10-512'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-bf16'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-fp16'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512ifma'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='cldemote'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fbsdp-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrc'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrs'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fzrm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='hle'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='la57'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='mcdt-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='movdir64b'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='movdiri'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pbrsb-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='prefetchiti'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='psdp-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='serialize'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ss'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='taa-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='tsx-ldtrk'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xfd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Haswell'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='hle'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Haswell-IBRS'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='hle'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Haswell-noTSX'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Haswell-v1'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='hle'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Haswell-v2'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Haswell-v3'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='hle'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Haswell-v4'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Icelake-Server'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='hle'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='la57'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Icelake-Server-noTSX'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='la57'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Icelake-Server-v1'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='hle'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='la57'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Icelake-Server-v2'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='la57'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Icelake-Server-v3'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='la57'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='taa-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Icelake-Server-v4'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512ifma'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='la57'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='taa-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Icelake-Server-v5'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512ifma'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='la57'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='taa-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Icelake-Server-v6'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512ifma'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='la57'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='taa-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Icelake-Server-v7'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512ifma'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='hle'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='la57'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='taa-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='IvyBridge'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='IvyBridge-IBRS'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='IvyBridge-v1'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='IvyBridge-v2'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='KnightsMill'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-4fmaps'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-4vnniw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512er'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512pf'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ss'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='KnightsMill-v1'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-4fmaps'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-4vnniw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512er'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512pf'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ss'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Opteron_G4'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fma4'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xop'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Opteron_G4-v1'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fma4'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xop'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Opteron_G5'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fma4'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='tbm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xop'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Opteron_G5-v1'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fma4'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='tbm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xop'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='SapphireRapids'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='amx-bf16'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='amx-int8'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='amx-tile'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx-vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-bf16'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-fp16'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512ifma'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrc'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrs'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fzrm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='hle'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='la57'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='serialize'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='taa-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='tsx-ldtrk'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xfd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='SapphireRapids-v1'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='amx-bf16'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='amx-int8'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='amx-tile'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx-vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-bf16'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-fp16'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512ifma'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrc'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrs'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fzrm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='hle'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='la57'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='serialize'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='taa-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='tsx-ldtrk'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xfd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='SapphireRapids-v2'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='amx-bf16'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='amx-int8'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='amx-tile'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx-vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-bf16'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-fp16'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512ifma'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fbsdp-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrc'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrs'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fzrm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='hle'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='la57'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='psdp-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='serialize'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='taa-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='tsx-ldtrk'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xfd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='SapphireRapids-v3'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='amx-bf16'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='amx-int8'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='amx-tile'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx-vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-bf16'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-fp16'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512ifma'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='cldemote'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fbsdp-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrc'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrs'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fzrm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='hle'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='la57'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='movdir64b'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='movdiri'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='psdp-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='serialize'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ss'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='taa-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='tsx-ldtrk'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xfd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='SapphireRapids-v4'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='amx-bf16'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='amx-int8'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='amx-tile'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx-vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-bf16'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-fp16'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512ifma'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='cldemote'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fbsdp-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrc'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrs'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fzrm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='hle'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='la57'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='movdir64b'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='movdiri'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='psdp-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='serialize'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ss'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='taa-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='tsx-ldtrk'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xfd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='SierraForest'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx-ifma'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx-ne-convert'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx-vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx-vnni-int8'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='cmpccxadd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fbsdp-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrs'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='mcdt-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pbrsb-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='psdp-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='serialize'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='SierraForest-v1'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx-ifma'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx-ne-convert'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx-vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx-vnni-int8'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='cmpccxadd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fbsdp-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrs'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='mcdt-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pbrsb-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='psdp-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='serialize'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='SierraForest-v2'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx-ifma'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx-ne-convert'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx-vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx-vnni-int8'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='bhi-ctrl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='cldemote'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='cmpccxadd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fbsdp-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrs'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='intel-psfd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ipred-ctrl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='lam'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='mcdt-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='movdir64b'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='movdiri'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pbrsb-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='psdp-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='rrsba-ctrl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='serialize'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ss'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='SierraForest-v3'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx-ifma'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx-ne-convert'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx-vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx-vnni-int8'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='bhi-ctrl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='cldemote'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='cmpccxadd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fbsdp-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrs'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='intel-psfd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ipred-ctrl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='lam'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='mcdt-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='movdir64b'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='movdiri'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pbrsb-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='psdp-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='rrsba-ctrl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='serialize'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ss'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Skylake-Client'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='hle'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Skylake-Client-IBRS'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='hle'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Skylake-Client-v1'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='hle'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Skylake-Client-v2'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='hle'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Skylake-Client-v3'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Skylake-Client-v4'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Skylake-Server'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='hle'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Skylake-Server-IBRS'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='hle'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Skylake-Server-v1'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='hle'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Skylake-Server-v2'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='hle'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Skylake-Server-v3'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Skylake-Server-v4'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Skylake-Server-v5'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Snowridge'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='cldemote'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='core-capability'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='movdir64b'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='movdiri'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='mpx'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='split-lock-detect'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Snowridge-v1'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='cldemote'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='core-capability'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='movdir64b'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='movdiri'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='mpx'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='split-lock-detect'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Snowridge-v2'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='cldemote'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='core-capability'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='movdir64b'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='movdiri'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='split-lock-detect'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Snowridge-v3'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='cldemote'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='core-capability'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='movdir64b'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='movdiri'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='split-lock-detect'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Snowridge-v4'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='cldemote'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='movdir64b'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='movdiri'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='athlon'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='3dnow'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='3dnowext'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='athlon-v1'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='3dnow'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='3dnowext'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='core2duo'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ss'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='core2duo-v1'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ss'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='coreduo'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ss'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='coreduo-v1'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ss'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='n270'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ss'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='n270-v1'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ss'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='phenom'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='3dnow'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='3dnowext'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='phenom-v1'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='3dnow'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='3dnowext'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     </mode>
Jan 20 14:18:01 compute-0 nova_compute[249051]:   </cpu>
Jan 20 14:18:01 compute-0 nova_compute[249051]:   <memoryBacking supported='yes'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     <enum name='sourceType'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <value>file</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <value>anonymous</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <value>memfd</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     </enum>
Jan 20 14:18:01 compute-0 nova_compute[249051]:   </memoryBacking>
Jan 20 14:18:01 compute-0 nova_compute[249051]:   <devices>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     <disk supported='yes'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <enum name='diskDevice'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>disk</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>cdrom</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>floppy</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>lun</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </enum>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <enum name='bus'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>ide</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>fdc</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>scsi</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>virtio</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>usb</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>sata</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </enum>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <enum name='model'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>virtio</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>virtio-transitional</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>virtio-non-transitional</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </enum>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     </disk>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     <graphics supported='yes'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <enum name='type'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>vnc</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>egl-headless</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>dbus</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </enum>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     </graphics>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     <video supported='yes'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <enum name='modelType'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>vga</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>cirrus</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>virtio</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>none</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>bochs</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>ramfb</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </enum>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     </video>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     <hostdev supported='yes'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <enum name='mode'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>subsystem</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </enum>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <enum name='startupPolicy'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>default</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>mandatory</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>requisite</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>optional</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </enum>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <enum name='subsysType'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>usb</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>pci</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>scsi</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </enum>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <enum name='capsType'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <enum name='pciBackend'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     </hostdev>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     <rng supported='yes'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <enum name='model'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>virtio</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>virtio-transitional</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>virtio-non-transitional</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </enum>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <enum name='backendModel'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>random</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>egd</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>builtin</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </enum>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     </rng>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     <filesystem supported='yes'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <enum name='driverType'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>path</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>handle</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>virtiofs</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </enum>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     </filesystem>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     <tpm supported='yes'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <enum name='model'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>tpm-tis</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>tpm-crb</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </enum>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <enum name='backendModel'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>emulator</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>external</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </enum>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <enum name='backendVersion'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>2.0</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </enum>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     </tpm>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     <redirdev supported='yes'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <enum name='bus'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>usb</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </enum>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     </redirdev>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     <channel supported='yes'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <enum name='type'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>pty</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>unix</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </enum>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     </channel>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     <crypto supported='yes'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <enum name='model'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <enum name='type'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>qemu</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </enum>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <enum name='backendModel'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>builtin</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </enum>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     </crypto>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     <interface supported='yes'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <enum name='backendType'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>default</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>passt</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </enum>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     </interface>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     <panic supported='yes'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <enum name='model'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>isa</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>hyperv</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </enum>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     </panic>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     <console supported='yes'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <enum name='type'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>null</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>vc</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>pty</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>dev</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>file</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>pipe</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>stdio</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>udp</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>tcp</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>unix</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>qemu-vdagent</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>dbus</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </enum>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     </console>
Jan 20 14:18:01 compute-0 nova_compute[249051]:   </devices>
Jan 20 14:18:01 compute-0 nova_compute[249051]:   <features>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     <gic supported='no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     <vmcoreinfo supported='yes'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     <genid supported='yes'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     <backingStoreInput supported='yes'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     <backup supported='yes'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     <async-teardown supported='yes'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     <s390-pv supported='no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     <ps2 supported='yes'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     <tdx supported='no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     <sev supported='no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     <sgx supported='no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     <hyperv supported='yes'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <enum name='features'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>relaxed</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>vapic</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>spinlocks</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>vpindex</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>runtime</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>synic</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>stimer</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>reset</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>vendor_id</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>frequencies</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>reenlightenment</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>tlbflush</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>ipi</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>avic</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>emsr_bitmap</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>xmm_input</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </enum>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <defaults>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <spinlocks>4095</spinlocks>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <stimer_direct>on</stimer_direct>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <tlbflush_direct>on</tlbflush_direct>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <tlbflush_extended>on</tlbflush_extended>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </defaults>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     </hyperv>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     <launchSecurity supported='no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:   </features>
Jan 20 14:18:01 compute-0 nova_compute[249051]: </domainCapabilities>
Jan 20 14:18:01 compute-0 nova_compute[249051]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 20 14:18:01 compute-0 nova_compute[249051]: 2026-01-20 14:18:01.130 249055 DEBUG nova.virt.libvirt.host [None req-d498a5ea-3498-4835-8e0d-db2e5510c5d0 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Jan 20 14:18:01 compute-0 nova_compute[249051]: <domainCapabilities>
Jan 20 14:18:01 compute-0 nova_compute[249051]:   <path>/usr/libexec/qemu-kvm</path>
Jan 20 14:18:01 compute-0 nova_compute[249051]:   <domain>kvm</domain>
Jan 20 14:18:01 compute-0 nova_compute[249051]:   <machine>pc-q35-rhel9.8.0</machine>
Jan 20 14:18:01 compute-0 nova_compute[249051]:   <arch>x86_64</arch>
Jan 20 14:18:01 compute-0 nova_compute[249051]:   <vcpu max='4096'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:   <iothreads supported='yes'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:   <os supported='yes'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     <enum name='firmware'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <value>efi</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     </enum>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     <loader supported='yes'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <enum name='type'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>rom</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>pflash</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </enum>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <enum name='readonly'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>yes</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>no</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </enum>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <enum name='secure'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>yes</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>no</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </enum>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     </loader>
Jan 20 14:18:01 compute-0 nova_compute[249051]:   </os>
Jan 20 14:18:01 compute-0 nova_compute[249051]:   <cpu>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     <mode name='host-passthrough' supported='yes'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <enum name='hostPassthroughMigratable'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>on</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>off</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </enum>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     </mode>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     <mode name='maximum' supported='yes'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <enum name='maximumMigratable'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>on</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>off</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </enum>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     </mode>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     <mode name='host-model' supported='yes'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <vendor>AMD</vendor>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <feature policy='require' name='x2apic'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <feature policy='require' name='tsc-deadline'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <feature policy='require' name='hypervisor'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <feature policy='require' name='tsc_adjust'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <feature policy='require' name='spec-ctrl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <feature policy='require' name='stibp'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <feature policy='require' name='ssbd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <feature policy='require' name='cmp_legacy'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <feature policy='require' name='overflow-recov'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <feature policy='require' name='succor'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <feature policy='require' name='ibrs'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <feature policy='require' name='amd-ssbd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <feature policy='require' name='virt-ssbd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <feature policy='require' name='lbrv'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <feature policy='require' name='tsc-scale'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <feature policy='require' name='vmcb-clean'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <feature policy='require' name='flushbyasid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <feature policy='require' name='pause-filter'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <feature policy='require' name='pfthreshold'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <feature policy='require' name='svme-addr-chk'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <feature policy='disable' name='xsaves'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     </mode>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     <mode name='custom' supported='yes'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Broadwell'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='hle'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Broadwell-IBRS'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='hle'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Broadwell-noTSX'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Broadwell-v1'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='hle'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Broadwell-v2'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Broadwell-v3'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='hle'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Broadwell-v4'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Cascadelake-Server'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='hle'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Cascadelake-Server-v1'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='hle'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Cascadelake-Server-v2'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='hle'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Cascadelake-Server-v3'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Cascadelake-Server-v4'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Cascadelake-Server-v5'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='ClearwaterForest'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx-ifma'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx-ne-convert'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx-vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx-vnni-int16'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx-vnni-int8'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='bhi-ctrl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='bhi-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='cldemote'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='cmpccxadd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ddpd-u'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fbsdp-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrs'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='intel-psfd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ipred-ctrl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='lam'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='mcdt-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='movdir64b'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='movdiri'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pbrsb-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='prefetchiti'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='psdp-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='rrsba-ctrl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='serialize'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='sha512'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='sm3'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='sm4'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ss'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='ClearwaterForest-v1'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx-ifma'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx-ne-convert'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx-vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx-vnni-int16'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx-vnni-int8'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='bhi-ctrl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='bhi-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='cldemote'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='cmpccxadd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ddpd-u'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fbsdp-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrs'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='intel-psfd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ipred-ctrl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='lam'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='mcdt-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='movdir64b'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='movdiri'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pbrsb-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='prefetchiti'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='psdp-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='rrsba-ctrl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='serialize'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='sha512'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='sm3'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='sm4'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ss'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Cooperlake'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-bf16'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='hle'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='taa-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Cooperlake-v1'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-bf16'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='hle'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='taa-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Cooperlake-v2'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-bf16'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='hle'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='taa-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Denverton'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='mpx'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Denverton-v1'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='mpx'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Denverton-v2'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Denverton-v3'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Dhyana-v2'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='EPYC-Genoa'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='amd-psfd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='auto-ibrs'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-bf16'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512ifma'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='la57'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='no-nested-data-bp'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='null-sel-clr-base'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='stibp-always-on'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='EPYC-Genoa-v1'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='amd-psfd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='auto-ibrs'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-bf16'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512ifma'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='la57'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='no-nested-data-bp'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='null-sel-clr-base'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='stibp-always-on'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='EPYC-Genoa-v2'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='amd-psfd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='auto-ibrs'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-bf16'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512ifma'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fs-gs-base-ns'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='la57'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='no-nested-data-bp'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='null-sel-clr-base'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='perfmon-v2'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='stibp-always-on'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='EPYC-Milan'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='EPYC-Milan-v1'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='EPYC-Milan-v2'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='amd-psfd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='no-nested-data-bp'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='null-sel-clr-base'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='stibp-always-on'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='EPYC-Milan-v3'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='amd-psfd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='no-nested-data-bp'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='null-sel-clr-base'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='stibp-always-on'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='EPYC-Rome'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='EPYC-Rome-v1'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='EPYC-Rome-v2'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='EPYC-Rome-v3'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='EPYC-Turin'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='amd-psfd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='auto-ibrs'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx-vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-bf16'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-vp2intersect'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512ifma'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fs-gs-base-ns'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ibpb-brtype'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='la57'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='movdir64b'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='movdiri'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='no-nested-data-bp'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='null-sel-clr-base'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='perfmon-v2'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='prefetchi'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='sbpb'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='srso-user-kernel-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='stibp-always-on'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='EPYC-Turin-v1'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='amd-psfd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='auto-ibrs'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx-vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-bf16'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-vp2intersect'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512ifma'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fs-gs-base-ns'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ibpb-brtype'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='la57'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='movdir64b'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='movdiri'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='no-nested-data-bp'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='null-sel-clr-base'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='perfmon-v2'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='prefetchi'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='sbpb'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='srso-user-kernel-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='stibp-always-on'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='EPYC-v3'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='EPYC-v4'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='EPYC-v5'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='GraniteRapids'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='amx-bf16'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='amx-fp16'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='amx-int8'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='amx-tile'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx-vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-bf16'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-fp16'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512ifma'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fbsdp-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrc'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrs'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fzrm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='hle'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='la57'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='mcdt-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pbrsb-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='prefetchiti'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='psdp-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='serialize'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='taa-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='tsx-ldtrk'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xfd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='GraniteRapids-v1'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='amx-bf16'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='amx-fp16'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='amx-int8'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='amx-tile'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx-vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-bf16'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-fp16'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512ifma'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fbsdp-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrc'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrs'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fzrm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='hle'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='la57'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='mcdt-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pbrsb-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='prefetchiti'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='psdp-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='serialize'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='taa-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='tsx-ldtrk'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xfd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='GraniteRapids-v2'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='amx-bf16'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='amx-fp16'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='amx-int8'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='amx-tile'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx-vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx10'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx10-128'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx10-256'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx10-512'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-bf16'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-fp16'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512ifma'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='cldemote'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fbsdp-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrc'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrs'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fzrm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='hle'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='la57'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='mcdt-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='movdir64b'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='movdiri'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pbrsb-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='prefetchiti'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='psdp-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='serialize'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ss'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='taa-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='tsx-ldtrk'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xfd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='GraniteRapids-v3'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='amx-bf16'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='amx-fp16'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='amx-int8'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='amx-tile'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx-vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx10'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx10-128'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx10-256'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx10-512'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-bf16'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-fp16'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512ifma'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='cldemote'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fbsdp-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrc'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrs'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fzrm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='hle'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='la57'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='mcdt-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='movdir64b'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='movdiri'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pbrsb-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='prefetchiti'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='psdp-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='serialize'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ss'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='taa-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='tsx-ldtrk'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xfd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Haswell'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='hle'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Haswell-IBRS'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='hle'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Haswell-noTSX'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Haswell-v1'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='hle'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Haswell-v2'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Haswell-v3'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='hle'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Haswell-v4'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Icelake-Server'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='hle'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='la57'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Icelake-Server-noTSX'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='la57'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Icelake-Server-v1'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='hle'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='la57'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Icelake-Server-v2'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='la57'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Icelake-Server-v3'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='la57'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='taa-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Icelake-Server-v4'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512ifma'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='la57'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='taa-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Icelake-Server-v5'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512ifma'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='la57'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='taa-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Icelake-Server-v6'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512ifma'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='la57'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='taa-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Icelake-Server-v7'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512ifma'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='hle'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='la57'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='taa-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='IvyBridge'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='IvyBridge-IBRS'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='IvyBridge-v1'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='IvyBridge-v2'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='KnightsMill'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-4fmaps'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-4vnniw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512er'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512pf'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ss'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='KnightsMill-v1'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-4fmaps'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-4vnniw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512er'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512pf'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ss'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Opteron_G4'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fma4'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xop'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Opteron_G4-v1'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fma4'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xop'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Opteron_G5'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fma4'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='tbm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xop'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Opteron_G5-v1'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fma4'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='tbm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xop'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='SapphireRapids'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='amx-bf16'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='amx-int8'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='amx-tile'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx-vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-bf16'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-fp16'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512ifma'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrc'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrs'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fzrm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='hle'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='la57'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='serialize'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='taa-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='tsx-ldtrk'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xfd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='SapphireRapids-v1'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='amx-bf16'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='amx-int8'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='amx-tile'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx-vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-bf16'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-fp16'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512ifma'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrc'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrs'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fzrm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='hle'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='la57'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='serialize'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='taa-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='tsx-ldtrk'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xfd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='SapphireRapids-v2'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='amx-bf16'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='amx-int8'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='amx-tile'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx-vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-bf16'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-fp16'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512ifma'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fbsdp-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrc'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrs'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fzrm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='hle'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='la57'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='psdp-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='serialize'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='taa-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='tsx-ldtrk'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xfd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='SapphireRapids-v3'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='amx-bf16'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='amx-int8'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='amx-tile'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx-vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-bf16'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-fp16'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512ifma'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='cldemote'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fbsdp-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrc'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrs'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fzrm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='hle'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='la57'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='movdir64b'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='movdiri'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='psdp-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='serialize'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ss'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='taa-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='tsx-ldtrk'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xfd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='SapphireRapids-v4'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='amx-bf16'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='amx-int8'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='amx-tile'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx-vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-bf16'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-fp16'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bitalg'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512ifma'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='cldemote'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fbsdp-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrc'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrs'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fzrm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='hle'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='la57'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='movdir64b'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='movdiri'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='psdp-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='serialize'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ss'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='taa-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='tsx-ldtrk'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xfd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='SierraForest'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx-ifma'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx-ne-convert'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx-vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx-vnni-int8'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='cmpccxadd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fbsdp-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrs'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='mcdt-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pbrsb-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='psdp-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='serialize'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='SierraForest-v1'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx-ifma'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx-ne-convert'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx-vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx-vnni-int8'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='cmpccxadd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fbsdp-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrs'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='mcdt-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pbrsb-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='psdp-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='serialize'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='SierraForest-v2'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx-ifma'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx-ne-convert'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx-vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx-vnni-int8'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='bhi-ctrl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='cldemote'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='cmpccxadd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fbsdp-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrs'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='intel-psfd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ipred-ctrl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='lam'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='mcdt-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='movdir64b'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='movdiri'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pbrsb-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='psdp-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='rrsba-ctrl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='serialize'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ss'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='SierraForest-v3'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx-ifma'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx-ne-convert'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx-vnni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx-vnni-int8'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='bhi-ctrl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='cldemote'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='cmpccxadd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fbsdp-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='fsrs'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ibrs-all'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='intel-psfd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ipred-ctrl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='lam'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='mcdt-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='movdir64b'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='movdiri'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pbrsb-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='psdp-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='rrsba-ctrl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='serialize'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ss'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vaes'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Skylake-Client'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='hle'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Skylake-Client-IBRS'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='hle'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Skylake-Client-v1'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='hle'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Skylake-Client-v2'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='hle'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Skylake-Client-v3'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Skylake-Client-v4'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Skylake-Server'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='hle'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Skylake-Server-IBRS'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='hle'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Skylake-Server-v1'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='hle'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Skylake-Server-v2'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='hle'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='rtm'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Skylake-Server-v3'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Skylake-Server-v4'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Skylake-Server-v5'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512bw'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512cd'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512dq'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512f'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='avx512vl'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='invpcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pcid'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='pku'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Snowridge'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='cldemote'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='core-capability'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='movdir64b'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='movdiri'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='mpx'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='split-lock-detect'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Snowridge-v1'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='cldemote'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='core-capability'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='movdir64b'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='movdiri'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='mpx'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='split-lock-detect'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Snowridge-v2'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='cldemote'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='core-capability'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='movdir64b'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='movdiri'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='split-lock-detect'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Snowridge-v3'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='cldemote'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='core-capability'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='movdir64b'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='movdiri'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='split-lock-detect'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='Snowridge-v4'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='cldemote'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='erms'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='gfni'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='movdir64b'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='movdiri'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='xsaves'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='athlon'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='3dnow'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='3dnowext'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='athlon-v1'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='3dnow'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='3dnowext'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='core2duo'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ss'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='core2duo-v1'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ss'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='coreduo'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ss'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='coreduo-v1'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ss'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='n270'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ss'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='n270-v1'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='ss'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='phenom'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='3dnow'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='3dnowext'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <blockers model='phenom-v1'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='3dnow'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <feature name='3dnowext'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </blockers>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     </mode>
Jan 20 14:18:01 compute-0 nova_compute[249051]:   </cpu>
Jan 20 14:18:01 compute-0 nova_compute[249051]:   <memoryBacking supported='yes'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     <enum name='sourceType'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <value>file</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <value>anonymous</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <value>memfd</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     </enum>
Jan 20 14:18:01 compute-0 nova_compute[249051]:   </memoryBacking>
Jan 20 14:18:01 compute-0 nova_compute[249051]:   <devices>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     <disk supported='yes'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <enum name='diskDevice'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>disk</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>cdrom</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>floppy</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>lun</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </enum>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <enum name='bus'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>fdc</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>scsi</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>virtio</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>usb</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>sata</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </enum>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <enum name='model'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>virtio</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>virtio-transitional</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>virtio-non-transitional</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </enum>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     </disk>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     <graphics supported='yes'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <enum name='type'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>vnc</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>egl-headless</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>dbus</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </enum>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     </graphics>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     <video supported='yes'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <enum name='modelType'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>vga</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>cirrus</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>virtio</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>none</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>bochs</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>ramfb</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </enum>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     </video>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     <hostdev supported='yes'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <enum name='mode'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>subsystem</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </enum>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <enum name='startupPolicy'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>default</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>mandatory</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>requisite</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>optional</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </enum>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <enum name='subsysType'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>usb</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>pci</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>scsi</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </enum>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <enum name='capsType'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <enum name='pciBackend'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     </hostdev>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     <rng supported='yes'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <enum name='model'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>virtio</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>virtio-transitional</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>virtio-non-transitional</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </enum>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <enum name='backendModel'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>random</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>egd</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>builtin</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </enum>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     </rng>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     <filesystem supported='yes'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <enum name='driverType'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>path</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>handle</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>virtiofs</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </enum>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     </filesystem>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     <tpm supported='yes'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <enum name='model'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>tpm-tis</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>tpm-crb</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </enum>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <enum name='backendModel'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>emulator</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>external</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </enum>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <enum name='backendVersion'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>2.0</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </enum>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     </tpm>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     <redirdev supported='yes'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <enum name='bus'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>usb</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </enum>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     </redirdev>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     <channel supported='yes'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <enum name='type'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>pty</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>unix</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </enum>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     </channel>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     <crypto supported='yes'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <enum name='model'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <enum name='type'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>qemu</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </enum>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <enum name='backendModel'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>builtin</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </enum>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     </crypto>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     <interface supported='yes'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <enum name='backendType'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>default</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>passt</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </enum>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     </interface>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     <panic supported='yes'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <enum name='model'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>isa</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>hyperv</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </enum>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     </panic>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     <console supported='yes'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <enum name='type'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>null</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>vc</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>pty</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>dev</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>file</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>pipe</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>stdio</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>udp</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>tcp</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>unix</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>qemu-vdagent</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>dbus</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </enum>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     </console>
Jan 20 14:18:01 compute-0 nova_compute[249051]:   </devices>
Jan 20 14:18:01 compute-0 nova_compute[249051]:   <features>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     <gic supported='no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     <vmcoreinfo supported='yes'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     <genid supported='yes'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     <backingStoreInput supported='yes'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     <backup supported='yes'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     <async-teardown supported='yes'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     <s390-pv supported='no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     <ps2 supported='yes'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     <tdx supported='no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     <sev supported='no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     <sgx supported='no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     <hyperv supported='yes'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <enum name='features'>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>relaxed</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>vapic</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>spinlocks</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>vpindex</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>runtime</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>synic</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>stimer</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>reset</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>vendor_id</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>frequencies</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>reenlightenment</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>tlbflush</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>ipi</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>avic</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>emsr_bitmap</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <value>xmm_input</value>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </enum>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       <defaults>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <spinlocks>4095</spinlocks>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <stimer_direct>on</stimer_direct>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <tlbflush_direct>on</tlbflush_direct>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <tlbflush_extended>on</tlbflush_extended>
Jan 20 14:18:01 compute-0 nova_compute[249051]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 20 14:18:01 compute-0 nova_compute[249051]:       </defaults>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     </hyperv>
Jan 20 14:18:01 compute-0 nova_compute[249051]:     <launchSecurity supported='no'/>
Jan 20 14:18:01 compute-0 nova_compute[249051]:   </features>
Jan 20 14:18:01 compute-0 nova_compute[249051]: </domainCapabilities>
Jan 20 14:18:01 compute-0 nova_compute[249051]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 20 14:18:01 compute-0 nova_compute[249051]: 2026-01-20 14:18:01.198 249055 DEBUG nova.virt.libvirt.host [None req-d498a5ea-3498-4835-8e0d-db2e5510c5d0 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Jan 20 14:18:01 compute-0 nova_compute[249051]: 2026-01-20 14:18:01.198 249055 DEBUG nova.virt.libvirt.host [None req-d498a5ea-3498-4835-8e0d-db2e5510c5d0 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Jan 20 14:18:01 compute-0 nova_compute[249051]: 2026-01-20 14:18:01.199 249055 DEBUG nova.virt.libvirt.host [None req-d498a5ea-3498-4835-8e0d-db2e5510c5d0 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Jan 20 14:18:01 compute-0 nova_compute[249051]: 2026-01-20 14:18:01.204 249055 INFO nova.virt.libvirt.host [None req-d498a5ea-3498-4835-8e0d-db2e5510c5d0 - - - - - -] Secure Boot support detected
Jan 20 14:18:01 compute-0 nova_compute[249051]: 2026-01-20 14:18:01.206 249055 INFO nova.virt.libvirt.driver [None req-d498a5ea-3498-4835-8e0d-db2e5510c5d0 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Jan 20 14:18:01 compute-0 nova_compute[249051]: 2026-01-20 14:18:01.206 249055 INFO nova.virt.libvirt.driver [None req-d498a5ea-3498-4835-8e0d-db2e5510c5d0 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Jan 20 14:18:01 compute-0 nova_compute[249051]: 2026-01-20 14:18:01.218 249055 DEBUG nova.virt.libvirt.driver [None req-d498a5ea-3498-4835-8e0d-db2e5510c5d0 - - - - - -] cpu compare xml: <cpu match="exact">
Jan 20 14:18:01 compute-0 nova_compute[249051]:   <model>Nehalem</model>
Jan 20 14:18:01 compute-0 nova_compute[249051]: </cpu>
Jan 20 14:18:01 compute-0 nova_compute[249051]:  _compare_cpu /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10019
Jan 20 14:18:01 compute-0 nova_compute[249051]: 2026-01-20 14:18:01.221 249055 DEBUG nova.virt.libvirt.driver [None req-d498a5ea-3498-4835-8e0d-db2e5510c5d0 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Jan 20 14:18:01 compute-0 nova_compute[249051]: 2026-01-20 14:18:01.296 249055 INFO nova.virt.node [None req-d498a5ea-3498-4835-8e0d-db2e5510c5d0 - - - - - -] Determined node identity 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed from /var/lib/nova/compute_id
Jan 20 14:18:01 compute-0 nova_compute[249051]: 2026-01-20 14:18:01.335 249055 WARNING nova.compute.manager [None req-d498a5ea-3498-4835-8e0d-db2e5510c5d0 - - - - - -] Compute nodes ['068db7fd-4bd6-45a9-8bd6-a22cfe7596ed'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.
Jan 20 14:18:01 compute-0 nova_compute[249051]: 2026-01-20 14:18:01.399 249055 INFO nova.compute.manager [None req-d498a5ea-3498-4835-8e0d-db2e5510c5d0 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Jan 20 14:18:01 compute-0 nova_compute[249051]: 2026-01-20 14:18:01.466 249055 WARNING nova.compute.manager [None req-d498a5ea-3498-4835-8e0d-db2e5510c5d0 - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Jan 20 14:18:01 compute-0 nova_compute[249051]: 2026-01-20 14:18:01.467 249055 DEBUG oslo_concurrency.lockutils [None req-d498a5ea-3498-4835-8e0d-db2e5510c5d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:18:01 compute-0 nova_compute[249051]: 2026-01-20 14:18:01.467 249055 DEBUG oslo_concurrency.lockutils [None req-d498a5ea-3498-4835-8e0d-db2e5510c5d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:18:01 compute-0 nova_compute[249051]: 2026-01-20 14:18:01.467 249055 DEBUG oslo_concurrency.lockutils [None req-d498a5ea-3498-4835-8e0d-db2e5510c5d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:18:01 compute-0 nova_compute[249051]: 2026-01-20 14:18:01.467 249055 DEBUG nova.compute.resource_tracker [None req-d498a5ea-3498-4835-8e0d-db2e5510c5d0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 14:18:01 compute-0 nova_compute[249051]: 2026-01-20 14:18:01.468 249055 DEBUG oslo_concurrency.processutils [None req-d498a5ea-3498-4835-8e0d-db2e5510c5d0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:18:01 compute-0 sudo[249927]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tdncsgmwkhwjsrsjmchnujcanahiqaqc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918681.411564-3595-80078685143696/AnsiballZ_systemd.py'
Jan 20 14:18:01 compute-0 sudo[249927]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:18:01 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:18:01 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:18:01 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:18:01.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:18:01 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:18:01 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1750596979' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:18:01 compute-0 nova_compute[249051]: 2026-01-20 14:18:01.887 249055 DEBUG oslo_concurrency.processutils [None req-d498a5ea-3498-4835-8e0d-db2e5510c5d0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:18:01 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Jan 20 14:18:01 compute-0 systemd[1]: Started libvirt nodedev daemon.
Jan 20 14:18:01 compute-0 python3.9[249929]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 20 14:18:02 compute-0 systemd[1]: Stopping nova_compute container...
Jan 20 14:18:02 compute-0 nova_compute[249051]: 2026-01-20 14:18:02.189 249055 WARNING nova.virt.libvirt.driver [None req-d498a5ea-3498-4835-8e0d-db2e5510c5d0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:18:02 compute-0 nova_compute[249051]: 2026-01-20 14:18:02.191 249055 DEBUG nova.compute.resource_tracker [None req-d498a5ea-3498-4835-8e0d-db2e5510c5d0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5156MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 14:18:02 compute-0 nova_compute[249051]: 2026-01-20 14:18:02.191 249055 DEBUG oslo_concurrency.lockutils [None req-d498a5ea-3498-4835-8e0d-db2e5510c5d0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:18:02 compute-0 nova_compute[249051]: 2026-01-20 14:18:02.191 249055 DEBUG oslo_concurrency.lockutils [None req-d498a5ea-3498-4835-8e0d-db2e5510c5d0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:18:02 compute-0 nova_compute[249051]: 2026-01-20 14:18:02.194 249055 DEBUG oslo_concurrency.lockutils [None req-d498a5ea-3498-4835-8e0d-db2e5510c5d0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:18:02 compute-0 nova_compute[249051]: 2026-01-20 14:18:02.194 249055 DEBUG oslo_concurrency.lockutils [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:18:02 compute-0 nova_compute[249051]: 2026-01-20 14:18:02.195 249055 DEBUG oslo_concurrency.lockutils [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:18:02 compute-0 nova_compute[249051]: 2026-01-20 14:18:02.195 249055 DEBUG oslo_concurrency.lockutils [None req-c799cc91-c372-4e01-89c6-3b50a5ab5486 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:18:02 compute-0 ceph-mon[74360]: pgmap v786: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:18:02 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/1750596979' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:18:02 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/3870360195' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:18:02 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/996481584' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:18:02 compute-0 virtqemud[249565]: libvirt version: 11.10.0, package: 2.el9 (builder@centos.org, 2025-12-18-15:09:54, )
Jan 20 14:18:02 compute-0 virtqemud[249565]: hostname: compute-0
Jan 20 14:18:02 compute-0 virtqemud[249565]: End of file while reading data: Input/output error
Jan 20 14:18:02 compute-0 systemd[1]: libpod-e178c14938af17df7b6705234e090287daeb1c12fc64b9f587424d2e33238ca7.scope: Deactivated successfully.
Jan 20 14:18:02 compute-0 systemd[1]: libpod-e178c14938af17df7b6705234e090287daeb1c12fc64b9f587424d2e33238ca7.scope: Consumed 3.652s CPU time.
Jan 20 14:18:02 compute-0 podman[249956]: 2026-01-20 14:18:02.594128205 +0000 UTC m=+0.561147097 container died e178c14938af17df7b6705234e090287daeb1c12fc64b9f587424d2e33238ca7 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=nova_compute)
Jan 20 14:18:02 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e178c14938af17df7b6705234e090287daeb1c12fc64b9f587424d2e33238ca7-userdata-shm.mount: Deactivated successfully.
Jan 20 14:18:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-a8e986e01db23e995d9372c40eb693cafbf8744ce09435827bf540451254d326-merged.mount: Deactivated successfully.
Jan 20 14:18:02 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v787: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:18:03 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:18:03 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:18:03 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:18:03.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:18:03 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:18:03 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:18:03 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:18:03.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:18:04 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v788: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:18:05 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:18:05 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:18:05 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:18:05.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:18:05 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:18:05 compute-0 ceph-mon[74360]: pgmap v787: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:18:05 compute-0 podman[249956]: 2026-01-20 14:18:05.494217658 +0000 UTC m=+3.461236580 container cleanup e178c14938af17df7b6705234e090287daeb1c12fc64b9f587424d2e33238ca7 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=nova_compute, org.label-schema.schema-version=1.0)
Jan 20 14:18:05 compute-0 podman[249956]: nova_compute
Jan 20 14:18:05 compute-0 podman[249990]: nova_compute
Jan 20 14:18:05 compute-0 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Jan 20 14:18:05 compute-0 systemd[1]: Stopped nova_compute container.
Jan 20 14:18:05 compute-0 systemd[1]: Starting nova_compute container...
Jan 20 14:18:05 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:18:05 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:18:05 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:18:05.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:18:05 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:18:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8e986e01db23e995d9372c40eb693cafbf8744ce09435827bf540451254d326/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Jan 20 14:18:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8e986e01db23e995d9372c40eb693cafbf8744ce09435827bf540451254d326/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Jan 20 14:18:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8e986e01db23e995d9372c40eb693cafbf8744ce09435827bf540451254d326/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Jan 20 14:18:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8e986e01db23e995d9372c40eb693cafbf8744ce09435827bf540451254d326/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Jan 20 14:18:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8e986e01db23e995d9372c40eb693cafbf8744ce09435827bf540451254d326/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Jan 20 14:18:05 compute-0 podman[250003]: 2026-01-20 14:18:05.82900872 +0000 UTC m=+0.217102175 container init e178c14938af17df7b6705234e090287daeb1c12fc64b9f587424d2e33238ca7 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=nova_compute, org.label-schema.license=GPLv2)
Jan 20 14:18:05 compute-0 podman[250003]: 2026-01-20 14:18:05.836776999 +0000 UTC m=+0.224870434 container start e178c14938af17df7b6705234e090287daeb1c12fc64b9f587424d2e33238ca7 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=nova_compute, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 20 14:18:05 compute-0 nova_compute[250018]: + sudo -E kolla_set_configs
Jan 20 14:18:05 compute-0 podman[250003]: nova_compute
Jan 20 14:18:05 compute-0 systemd[1]: Started nova_compute container.
Jan 20 14:18:05 compute-0 sudo[249927]: pam_unix(sudo:session): session closed for user root
Jan 20 14:18:05 compute-0 nova_compute[250018]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 20 14:18:05 compute-0 nova_compute[250018]: INFO:__main__:Validating config file
Jan 20 14:18:05 compute-0 nova_compute[250018]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 20 14:18:05 compute-0 nova_compute[250018]: INFO:__main__:Copying service configuration files
Jan 20 14:18:05 compute-0 nova_compute[250018]: INFO:__main__:Deleting /etc/nova/nova.conf
Jan 20 14:18:05 compute-0 nova_compute[250018]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Jan 20 14:18:05 compute-0 nova_compute[250018]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Jan 20 14:18:05 compute-0 nova_compute[250018]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Jan 20 14:18:05 compute-0 nova_compute[250018]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Jan 20 14:18:05 compute-0 nova_compute[250018]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Jan 20 14:18:05 compute-0 nova_compute[250018]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 20 14:18:05 compute-0 nova_compute[250018]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 20 14:18:05 compute-0 nova_compute[250018]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 20 14:18:05 compute-0 nova_compute[250018]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 20 14:18:05 compute-0 nova_compute[250018]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 20 14:18:05 compute-0 nova_compute[250018]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 20 14:18:05 compute-0 nova_compute[250018]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Jan 20 14:18:05 compute-0 nova_compute[250018]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Jan 20 14:18:05 compute-0 nova_compute[250018]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Jan 20 14:18:05 compute-0 nova_compute[250018]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 20 14:18:05 compute-0 nova_compute[250018]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 20 14:18:05 compute-0 nova_compute[250018]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 20 14:18:05 compute-0 nova_compute[250018]: INFO:__main__:Deleting /etc/ceph
Jan 20 14:18:05 compute-0 nova_compute[250018]: INFO:__main__:Creating directory /etc/ceph
Jan 20 14:18:05 compute-0 nova_compute[250018]: INFO:__main__:Setting permission for /etc/ceph
Jan 20 14:18:05 compute-0 nova_compute[250018]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Jan 20 14:18:05 compute-0 nova_compute[250018]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 20 14:18:05 compute-0 nova_compute[250018]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Jan 20 14:18:05 compute-0 nova_compute[250018]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 20 14:18:05 compute-0 nova_compute[250018]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Jan 20 14:18:05 compute-0 nova_compute[250018]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Jan 20 14:18:05 compute-0 nova_compute[250018]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 20 14:18:05 compute-0 nova_compute[250018]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Jan 20 14:18:05 compute-0 nova_compute[250018]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Jan 20 14:18:05 compute-0 nova_compute[250018]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 20 14:18:05 compute-0 nova_compute[250018]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Jan 20 14:18:05 compute-0 nova_compute[250018]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Jan 20 14:18:05 compute-0 nova_compute[250018]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Jan 20 14:18:05 compute-0 nova_compute[250018]: INFO:__main__:Writing out command to execute
Jan 20 14:18:05 compute-0 nova_compute[250018]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 20 14:18:05 compute-0 nova_compute[250018]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 20 14:18:05 compute-0 nova_compute[250018]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Jan 20 14:18:05 compute-0 nova_compute[250018]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 20 14:18:05 compute-0 nova_compute[250018]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 20 14:18:05 compute-0 nova_compute[250018]: ++ cat /run_command
Jan 20 14:18:05 compute-0 nova_compute[250018]: + CMD=nova-compute
Jan 20 14:18:05 compute-0 nova_compute[250018]: + ARGS=
Jan 20 14:18:05 compute-0 nova_compute[250018]: + sudo kolla_copy_cacerts
Jan 20 14:18:05 compute-0 nova_compute[250018]: + [[ ! -n '' ]]
Jan 20 14:18:05 compute-0 nova_compute[250018]: + . kolla_extend_start
Jan 20 14:18:05 compute-0 nova_compute[250018]: + echo 'Running command: '\''nova-compute'\'''
Jan 20 14:18:05 compute-0 nova_compute[250018]: Running command: 'nova-compute'
Jan 20 14:18:05 compute-0 nova_compute[250018]: + umask 0022
Jan 20 14:18:05 compute-0 nova_compute[250018]: + exec nova-compute
Jan 20 14:18:06 compute-0 sudo[250179]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cmgbxeurekdsgmxtpibrvsaxykywzdzh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768918686.1684825-3622-265149945595279/AnsiballZ_podman_container.py'
Jan 20 14:18:06 compute-0 sudo[250179]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 14:18:06 compute-0 ceph-mon[74360]: pgmap v788: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:18:06 compute-0 python3.9[250181]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Jan 20 14:18:06 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v789: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:18:07 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:18:07 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:18:07 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:18:07.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:18:07 compute-0 systemd[1]: Started libpod-conmon-5a0171988aa80514e0d57f5f0c689fae405b814c42e10f881bf8be41c50ddac4.scope.
Jan 20 14:18:07 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:18:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20f95b95f83d501b469981f1c62a8012b6c1535d2881e0e3095b225c23b9e374/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Jan 20 14:18:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20f95b95f83d501b469981f1c62a8012b6c1535d2881e0e3095b225c23b9e374/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Jan 20 14:18:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20f95b95f83d501b469981f1c62a8012b6c1535d2881e0e3095b225c23b9e374/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Jan 20 14:18:07 compute-0 podman[250208]: 2026-01-20 14:18:07.098813068 +0000 UTC m=+0.165635486 container init 5a0171988aa80514e0d57f5f0c689fae405b814c42e10f881bf8be41c50ddac4 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=nova_compute_init, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']})
Jan 20 14:18:07 compute-0 podman[250208]: 2026-01-20 14:18:07.106094494 +0000 UTC m=+0.172916902 container start 5a0171988aa80514e0d57f5f0c689fae405b814c42e10f881bf8be41c50ddac4 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=nova_compute_init, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, io.buildah.version=1.41.3)
Jan 20 14:18:07 compute-0 python3.9[250181]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Jan 20 14:18:07 compute-0 nova_compute_init[250229]: INFO:nova_statedir:Applying nova statedir ownership
Jan 20 14:18:07 compute-0 nova_compute_init[250229]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Jan 20 14:18:07 compute-0 nova_compute_init[250229]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Jan 20 14:18:07 compute-0 nova_compute_init[250229]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Jan 20 14:18:07 compute-0 nova_compute_init[250229]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Jan 20 14:18:07 compute-0 nova_compute_init[250229]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Jan 20 14:18:07 compute-0 nova_compute_init[250229]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Jan 20 14:18:07 compute-0 nova_compute_init[250229]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Jan 20 14:18:07 compute-0 nova_compute_init[250229]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Jan 20 14:18:07 compute-0 nova_compute_init[250229]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Jan 20 14:18:07 compute-0 nova_compute_init[250229]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Jan 20 14:18:07 compute-0 nova_compute_init[250229]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Jan 20 14:18:07 compute-0 nova_compute_init[250229]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Jan 20 14:18:07 compute-0 nova_compute_init[250229]: INFO:nova_statedir:Nova statedir ownership complete
Jan 20 14:18:07 compute-0 systemd[1]: libpod-5a0171988aa80514e0d57f5f0c689fae405b814c42e10f881bf8be41c50ddac4.scope: Deactivated successfully.
Jan 20 14:18:07 compute-0 podman[250243]: 2026-01-20 14:18:07.219269351 +0000 UTC m=+0.028842200 container died 5a0171988aa80514e0d57f5f0c689fae405b814c42e10f881bf8be41c50ddac4 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, container_name=nova_compute_init, io.buildah.version=1.41.3)
Jan 20 14:18:07 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5a0171988aa80514e0d57f5f0c689fae405b814c42e10f881bf8be41c50ddac4-userdata-shm.mount: Deactivated successfully.
Jan 20 14:18:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-20f95b95f83d501b469981f1c62a8012b6c1535d2881e0e3095b225c23b9e374-merged.mount: Deactivated successfully.
Jan 20 14:18:07 compute-0 sudo[250179]: pam_unix(sudo:session): session closed for user root
Jan 20 14:18:07 compute-0 podman[250243]: 2026-01-20 14:18:07.254037451 +0000 UTC m=+0.063610280 container cleanup 5a0171988aa80514e0d57f5f0c689fae405b814c42e10f881bf8be41c50ddac4 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team)
Jan 20 14:18:07 compute-0 systemd[1]: libpod-conmon-5a0171988aa80514e0d57f5f0c689fae405b814c42e10f881bf8be41c50ddac4.scope: Deactivated successfully.
Jan 20 14:18:07 compute-0 ceph-mon[74360]: pgmap v789: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:18:07 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:18:07 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:18:07 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:18:07.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:18:07 compute-0 nova_compute[250018]: 2026-01-20 14:18:07.878 250022 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 20 14:18:07 compute-0 nova_compute[250018]: 2026-01-20 14:18:07.879 250022 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 20 14:18:07 compute-0 nova_compute[250018]: 2026-01-20 14:18:07.879 250022 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 20 14:18:07 compute-0 nova_compute[250018]: 2026-01-20 14:18:07.879 250022 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Jan 20 14:18:07 compute-0 sshd-session[223678]: Connection closed by 192.168.122.30 port 55334
Jan 20 14:18:07 compute-0 sshd-session[223667]: pam_unix(sshd:session): session closed for user zuul
Jan 20 14:18:07 compute-0 systemd[1]: session-51.scope: Deactivated successfully.
Jan 20 14:18:07 compute-0 systemd[1]: session-51.scope: Consumed 2min 4.543s CPU time.
Jan 20 14:18:07 compute-0 systemd-logind[796]: Session 51 logged out. Waiting for processes to exit.
Jan 20 14:18:07 compute-0 systemd-logind[796]: Removed session 51.
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.012 250022 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.040 250022 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.028s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.040 250022 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.518 250022 INFO nova.virt.driver [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Jan 20 14:18:08 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/366490253' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.641 250022 INFO nova.compute.provider_config [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.651 250022 DEBUG oslo_concurrency.lockutils [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.651 250022 DEBUG oslo_concurrency.lockutils [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.652 250022 DEBUG oslo_concurrency.lockutils [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.652 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.652 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.652 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.653 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.653 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.653 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.653 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.653 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.653 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.654 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.654 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.654 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.654 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.654 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.654 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.654 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.655 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.655 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.655 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.655 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.655 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.655 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.655 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.656 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.656 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.656 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.656 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.656 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.656 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.656 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.657 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.657 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.657 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.657 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.657 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.657 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.657 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.658 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.658 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.658 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.658 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.658 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.659 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.659 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.659 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.659 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.659 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.659 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.659 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.660 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.660 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.660 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.660 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.660 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.660 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.660 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.661 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.661 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.661 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.661 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.661 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.661 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.661 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.662 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.662 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.662 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.662 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.662 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.662 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.662 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.663 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.663 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.663 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.663 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.663 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.663 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.663 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.664 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.664 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.664 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.664 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.664 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.664 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.664 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.665 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.665 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.665 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.665 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.665 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.665 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.665 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.666 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.666 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.666 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.666 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.666 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.666 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.667 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.667 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.667 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.667 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.667 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.667 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.668 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.668 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.668 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.668 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.668 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.668 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.668 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.669 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.669 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.669 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.669 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.669 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.669 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.669 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.669 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.670 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.670 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.670 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.670 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.670 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.670 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.670 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.671 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.671 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.671 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.671 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.671 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.671 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.671 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.672 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.672 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.672 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.672 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.672 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.672 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.672 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.673 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.673 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.673 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.673 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.673 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.673 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.673 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.673 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.674 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.674 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.674 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.674 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.674 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.674 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.674 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.675 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.675 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.675 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.675 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.675 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.675 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.675 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.676 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.676 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.676 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.676 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.676 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.676 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.676 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.677 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.677 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.677 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.677 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.677 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.677 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.677 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.678 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.678 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.678 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.678 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.678 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.679 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.679 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.679 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.679 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.679 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.679 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.679 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.680 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.680 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.680 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.680 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.680 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.680 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.681 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.681 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.681 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.681 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.681 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.681 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.682 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.682 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.682 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.682 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.682 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.682 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.683 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.683 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.683 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.683 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.683 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.683 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.683 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.684 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.684 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.684 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.684 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.684 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.684 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.684 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.684 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.685 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.685 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.685 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.685 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.685 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.685 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.686 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.686 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.686 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.686 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.686 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.686 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.686 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.687 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.687 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.687 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.687 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.687 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.688 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.688 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.688 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.688 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.688 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.689 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.689 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.689 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.689 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.689 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.689 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.690 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.690 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.690 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.690 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.690 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.690 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.691 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.691 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.691 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.691 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.691 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.691 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.692 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.692 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.692 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.692 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.692 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.692 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.693 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.693 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.693 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.693 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.693 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.693 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.693 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.693 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.694 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.694 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.694 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.694 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.694 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.694 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.694 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.695 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.695 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.695 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.695 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.695 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.695 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.696 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.696 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.696 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.696 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.696 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.696 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.696 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.697 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.697 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.697 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.697 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.697 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.697 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.697 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.698 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.698 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.698 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.698 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.698 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.698 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.699 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.699 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.699 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.699 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.699 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.699 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.699 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.700 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.700 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.700 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.700 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.700 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.700 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.701 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.701 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.701 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.701 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.701 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.701 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.702 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.702 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.702 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.702 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.702 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.702 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.703 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.703 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.703 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.703 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.703 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.703 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.703 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.703 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.704 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.704 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.704 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.704 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.704 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.704 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.705 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.705 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.705 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.705 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.705 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.705 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.705 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.706 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.706 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.706 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.706 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.706 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.707 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.707 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.707 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.707 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.707 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.707 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.708 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.708 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.708 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.708 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.708 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.708 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.709 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.709 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.709 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.709 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.709 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.709 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.709 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.709 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.710 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.710 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.710 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.710 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.710 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.710 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.710 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.711 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.711 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.711 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.711 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.711 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.711 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.712 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.712 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.712 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.712 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.712 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.712 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.712 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.712 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.713 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.713 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.713 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.713 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.713 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.713 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.713 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.714 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.714 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.714 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.714 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.714 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.714 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.714 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.715 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.715 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.715 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.715 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.715 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.715 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.715 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.715 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.716 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.716 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.716 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.716 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.716 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.716 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.717 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.717 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.717 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.717 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.717 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.717 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.717 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.717 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.718 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.718 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.718 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.718 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.718 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.718 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.719 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.719 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.719 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.719 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.719 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.719 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.719 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.720 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.720 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.720 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.720 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] libvirt.cpu_mode               = custom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.720 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.720 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] libvirt.cpu_models             = ['Nehalem'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.720 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.721 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.721 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.721 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.721 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.721 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.721 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.722 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.722 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.722 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.722 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.722 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.722 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.723 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.723 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.723 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.723 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.723 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.723 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.723 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.724 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.724 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.724 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.724 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.724 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.724 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.724 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.725 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.725 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.725 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.725 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.725 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.725 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.725 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.726 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.726 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.726 250022 WARNING oslo_config.cfg [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Jan 20 14:18:08 compute-0 nova_compute[250018]: live_migration_uri is deprecated for removal in favor of two other options that
Jan 20 14:18:08 compute-0 nova_compute[250018]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Jan 20 14:18:08 compute-0 nova_compute[250018]: and ``live_migration_inbound_addr`` respectively.
Jan 20 14:18:08 compute-0 nova_compute[250018]: ).  Its value may be silently ignored in the future.
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.726 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.726 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.727 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.727 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.727 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.727 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.727 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.727 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.727 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.728 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.728 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.728 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.728 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.728 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.728 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.729 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.729 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.729 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.729 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] libvirt.rbd_secret_uuid        = e399cf45-e6b6-5393-99f1-75c601d3f188 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.729 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.729 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.730 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.730 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.730 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.730 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.730 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.730 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.730 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.731 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.731 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.732 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.732 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.732 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.732 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.732 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.732 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.733 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.733 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.733 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.733 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.733 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.733 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.733 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.734 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.734 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.734 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.734 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.734 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.734 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.734 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.735 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.735 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.735 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.735 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.735 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.735 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.735 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.735 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.736 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.736 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.736 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.736 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.736 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.736 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.736 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.737 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.737 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.737 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.737 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.737 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.737 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.737 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.737 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.738 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.738 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.738 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.738 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.738 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.738 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.738 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.739 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.739 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.739 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.739 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.739 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.739 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.739 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.740 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.740 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.740 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.740 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.740 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.740 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.740 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.741 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.741 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.741 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.741 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.741 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.741 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.741 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.742 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.742 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.742 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.742 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.742 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.742 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.742 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.742 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.743 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.743 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.743 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.743 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.743 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.743 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.743 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.744 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.744 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.744 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.744 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.744 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.744 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.744 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.745 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.745 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.745 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.745 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.745 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.745 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.745 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.746 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.746 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.746 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.746 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.746 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.746 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.746 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.747 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.747 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.747 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.747 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.747 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.747 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.748 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.748 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.748 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.748 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.749 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.749 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.749 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.749 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.749 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.749 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.749 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.750 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.750 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.750 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.750 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.750 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.750 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.750 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.751 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.751 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.751 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.751 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.751 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.751 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.751 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.752 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.752 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.752 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.752 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.752 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.752 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.752 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.752 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.753 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.753 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.753 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.753 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.753 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.753 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.754 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.754 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.754 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.754 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.754 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.754 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.754 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.755 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.755 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.755 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.755 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.755 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.755 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.755 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.756 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.756 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.756 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.756 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.756 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.756 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.756 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.757 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.757 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.757 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.757 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.757 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.757 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.757 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.758 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.758 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.758 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.758 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.758 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.758 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.758 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.759 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.759 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.759 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.759 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.759 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.759 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.759 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.760 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.760 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.760 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.760 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.760 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.760 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.760 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.760 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.761 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.761 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.761 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.761 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.761 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.761 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.761 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.762 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.762 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.762 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.762 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.762 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.762 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.762 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.763 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.763 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.763 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.763 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.763 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.763 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.764 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.764 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.764 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.764 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.764 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.764 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.764 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.765 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.765 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.765 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.765 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.765 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.765 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.765 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.765 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.766 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.766 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.766 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.766 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.766 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.766 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.766 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.767 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.767 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.767 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.767 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.767 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.767 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.767 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.768 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.768 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.768 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.768 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.768 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.768 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.768 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.769 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.769 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.769 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.769 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.769 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.769 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.769 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.770 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.770 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.770 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.770 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.770 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.770 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.770 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.771 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.771 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.771 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.771 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.771 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.771 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.771 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.772 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.772 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.772 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.772 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.772 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.772 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.772 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.773 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.773 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.773 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.773 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.773 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.773 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.773 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.773 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.774 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.774 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.774 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.774 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.774 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.774 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.774 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.775 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.775 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.775 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.775 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.775 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.775 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.775 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.776 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.776 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.776 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.776 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.776 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.776 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.776 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.777 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.777 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.777 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.777 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.777 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.777 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.777 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.777 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.778 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.778 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.778 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.778 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.778 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.778 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.778 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.779 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.779 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.779 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.779 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.779 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.779 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.779 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.780 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.780 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.780 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.780 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.780 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.780 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.780 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.780 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.781 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.781 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.781 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.781 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.781 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.781 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.781 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.782 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.782 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.782 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.782 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.782 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.782 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.783 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.783 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.783 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.783 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.783 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.783 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.783 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.783 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.784 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.784 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.784 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.784 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.784 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.784 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.785 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.785 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.785 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.785 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.785 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.785 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.785 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.785 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.786 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.786 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.786 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.786 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.786 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.786 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.786 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.787 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.787 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.787 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.787 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.787 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.787 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.787 250022 DEBUG oslo_service.service [None req-d35c0f22-2a65-47a6-a86e-2944a4abbff6 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.789 250022 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.823 250022 INFO nova.virt.node [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Determined node identity 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed from /var/lib/nova/compute_id
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.824 250022 DEBUG nova.virt.libvirt.host [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.825 250022 DEBUG nova.virt.libvirt.host [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.825 250022 DEBUG nova.virt.libvirt.host [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.825 250022 DEBUG nova.virt.libvirt.host [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.837 250022 DEBUG nova.virt.libvirt.host [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7fb7a42b4d30> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.840 250022 DEBUG nova.virt.libvirt.host [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7fb7a42b4d30> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.841 250022 INFO nova.virt.libvirt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Connection event '1' reason 'None'
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.851 250022 INFO nova.virt.libvirt.host [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Libvirt host capabilities <capabilities>
Jan 20 14:18:08 compute-0 nova_compute[250018]: 
Jan 20 14:18:08 compute-0 nova_compute[250018]:   <host>
Jan 20 14:18:08 compute-0 nova_compute[250018]:     <uuid>35085f33-1a27-41e3-805d-02c7ac6a1d7f</uuid>
Jan 20 14:18:08 compute-0 nova_compute[250018]:     <cpu>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <arch>x86_64</arch>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model>EPYC-Rome-v4</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <vendor>AMD</vendor>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <microcode version='16777317'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <signature family='23' model='49' stepping='0'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <maxphysaddr mode='emulate' bits='40'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <feature name='x2apic'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <feature name='tsc-deadline'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <feature name='osxsave'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <feature name='hypervisor'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <feature name='tsc_adjust'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <feature name='spec-ctrl'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <feature name='stibp'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <feature name='arch-capabilities'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <feature name='ssbd'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <feature name='cmp_legacy'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <feature name='topoext'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <feature name='virt-ssbd'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <feature name='lbrv'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <feature name='tsc-scale'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <feature name='vmcb-clean'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <feature name='pause-filter'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <feature name='pfthreshold'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <feature name='svme-addr-chk'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <feature name='rdctl-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <feature name='skip-l1dfl-vmentry'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <feature name='mds-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <feature name='pschange-mc-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <pages unit='KiB' size='4'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <pages unit='KiB' size='2048'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <pages unit='KiB' size='1048576'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:     </cpu>
Jan 20 14:18:08 compute-0 nova_compute[250018]:     <power_management>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <suspend_mem/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:     </power_management>
Jan 20 14:18:08 compute-0 nova_compute[250018]:     <iommu support='no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:     <migration_features>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <live/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <uri_transports>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <uri_transport>tcp</uri_transport>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <uri_transport>rdma</uri_transport>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </uri_transports>
Jan 20 14:18:08 compute-0 nova_compute[250018]:     </migration_features>
Jan 20 14:18:08 compute-0 nova_compute[250018]:     <topology>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <cells num='1'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <cell id='0'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:           <memory unit='KiB'>7864308</memory>
Jan 20 14:18:08 compute-0 nova_compute[250018]:           <pages unit='KiB' size='4'>1966077</pages>
Jan 20 14:18:08 compute-0 nova_compute[250018]:           <pages unit='KiB' size='2048'>0</pages>
Jan 20 14:18:08 compute-0 nova_compute[250018]:           <pages unit='KiB' size='1048576'>0</pages>
Jan 20 14:18:08 compute-0 nova_compute[250018]:           <distances>
Jan 20 14:18:08 compute-0 nova_compute[250018]:             <sibling id='0' value='10'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:           </distances>
Jan 20 14:18:08 compute-0 nova_compute[250018]:           <cpus num='8'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:           </cpus>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         </cell>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </cells>
Jan 20 14:18:08 compute-0 nova_compute[250018]:     </topology>
Jan 20 14:18:08 compute-0 nova_compute[250018]:     <cache>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:     </cache>
Jan 20 14:18:08 compute-0 nova_compute[250018]:     <secmodel>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model>selinux</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <doi>0</doi>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Jan 20 14:18:08 compute-0 nova_compute[250018]:     </secmodel>
Jan 20 14:18:08 compute-0 nova_compute[250018]:     <secmodel>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model>dac</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <doi>0</doi>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <baselabel type='kvm'>+107:+107</baselabel>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <baselabel type='qemu'>+107:+107</baselabel>
Jan 20 14:18:08 compute-0 nova_compute[250018]:     </secmodel>
Jan 20 14:18:08 compute-0 nova_compute[250018]:   </host>
Jan 20 14:18:08 compute-0 nova_compute[250018]: 
Jan 20 14:18:08 compute-0 nova_compute[250018]:   <guest>
Jan 20 14:18:08 compute-0 nova_compute[250018]:     <os_type>hvm</os_type>
Jan 20 14:18:08 compute-0 nova_compute[250018]:     <arch name='i686'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <wordsize>32</wordsize>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <domain type='qemu'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <domain type='kvm'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:     </arch>
Jan 20 14:18:08 compute-0 nova_compute[250018]:     <features>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <pae/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <nonpae/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <acpi default='on' toggle='yes'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <apic default='on' toggle='no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <cpuselection/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <deviceboot/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <disksnapshot default='on' toggle='no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <externalSnapshot/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:     </features>
Jan 20 14:18:08 compute-0 nova_compute[250018]:   </guest>
Jan 20 14:18:08 compute-0 nova_compute[250018]: 
Jan 20 14:18:08 compute-0 nova_compute[250018]:   <guest>
Jan 20 14:18:08 compute-0 nova_compute[250018]:     <os_type>hvm</os_type>
Jan 20 14:18:08 compute-0 nova_compute[250018]:     <arch name='x86_64'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <wordsize>64</wordsize>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <domain type='qemu'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <domain type='kvm'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:     </arch>
Jan 20 14:18:08 compute-0 nova_compute[250018]:     <features>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <acpi default='on' toggle='yes'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <apic default='on' toggle='no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <cpuselection/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <deviceboot/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <disksnapshot default='on' toggle='no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <externalSnapshot/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:     </features>
Jan 20 14:18:08 compute-0 nova_compute[250018]:   </guest>
Jan 20 14:18:08 compute-0 nova_compute[250018]: 
Jan 20 14:18:08 compute-0 nova_compute[250018]: </capabilities>
Jan 20 14:18:08 compute-0 nova_compute[250018]: 
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.858 250022 DEBUG nova.virt.libvirt.host [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.862 250022 DEBUG nova.virt.libvirt.host [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Jan 20 14:18:08 compute-0 nova_compute[250018]: <domainCapabilities>
Jan 20 14:18:08 compute-0 nova_compute[250018]:   <path>/usr/libexec/qemu-kvm</path>
Jan 20 14:18:08 compute-0 nova_compute[250018]:   <domain>kvm</domain>
Jan 20 14:18:08 compute-0 nova_compute[250018]:   <machine>pc-i440fx-rhel7.6.0</machine>
Jan 20 14:18:08 compute-0 nova_compute[250018]:   <arch>i686</arch>
Jan 20 14:18:08 compute-0 nova_compute[250018]:   <vcpu max='240'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:   <iothreads supported='yes'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:   <os supported='yes'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:     <enum name='firmware'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:     <loader supported='yes'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <enum name='type'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <value>rom</value>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <value>pflash</value>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </enum>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <enum name='readonly'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <value>yes</value>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <value>no</value>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </enum>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <enum name='secure'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <value>no</value>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </enum>
Jan 20 14:18:08 compute-0 nova_compute[250018]:     </loader>
Jan 20 14:18:08 compute-0 nova_compute[250018]:   </os>
Jan 20 14:18:08 compute-0 nova_compute[250018]:   <cpu>
Jan 20 14:18:08 compute-0 nova_compute[250018]:     <mode name='host-passthrough' supported='yes'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <enum name='hostPassthroughMigratable'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <value>on</value>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <value>off</value>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </enum>
Jan 20 14:18:08 compute-0 nova_compute[250018]:     </mode>
Jan 20 14:18:08 compute-0 nova_compute[250018]:     <mode name='maximum' supported='yes'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <enum name='maximumMigratable'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <value>on</value>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <value>off</value>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </enum>
Jan 20 14:18:08 compute-0 nova_compute[250018]:     </mode>
Jan 20 14:18:08 compute-0 nova_compute[250018]:     <mode name='host-model' supported='yes'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <vendor>AMD</vendor>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <feature policy='require' name='x2apic'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <feature policy='require' name='tsc-deadline'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <feature policy='require' name='hypervisor'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <feature policy='require' name='tsc_adjust'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <feature policy='require' name='spec-ctrl'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <feature policy='require' name='stibp'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <feature policy='require' name='ssbd'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <feature policy='require' name='cmp_legacy'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <feature policy='require' name='overflow-recov'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <feature policy='require' name='succor'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <feature policy='require' name='ibrs'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <feature policy='require' name='amd-ssbd'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <feature policy='require' name='virt-ssbd'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <feature policy='require' name='lbrv'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <feature policy='require' name='tsc-scale'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <feature policy='require' name='vmcb-clean'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <feature policy='require' name='flushbyasid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <feature policy='require' name='pause-filter'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <feature policy='require' name='pfthreshold'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <feature policy='require' name='svme-addr-chk'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <feature policy='disable' name='xsaves'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:     </mode>
Jan 20 14:18:08 compute-0 nova_compute[250018]:     <mode name='custom' supported='yes'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='Broadwell'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='hle'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='Broadwell-IBRS'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='hle'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='Broadwell-noTSX'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='Broadwell-v1'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='hle'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='Broadwell-v2'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='Broadwell-v3'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='hle'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='Broadwell-v4'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='Cascadelake-Server'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='hle'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='Cascadelake-Server-v1'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='hle'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='Cascadelake-Server-v2'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='hle'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='Cascadelake-Server-v3'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='Cascadelake-Server-v4'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='Cascadelake-Server-v5'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='ClearwaterForest'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx-ifma'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx-ne-convert'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx-vnni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx-vnni-int16'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx-vnni-int8'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='bhi-ctrl'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='bhi-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='cldemote'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='cmpccxadd'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='ddpd-u'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fbsdp-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fsrs'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='intel-psfd'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='ipred-ctrl'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='lam'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='mcdt-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='movdir64b'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='movdiri'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pbrsb-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='prefetchiti'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='psdp-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='rrsba-ctrl'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='serialize'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='sha512'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='sm3'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='sm4'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='ss'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='ClearwaterForest-v1'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx-ifma'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx-ne-convert'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx-vnni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx-vnni-int16'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx-vnni-int8'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='bhi-ctrl'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='bhi-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='cldemote'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='cmpccxadd'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='ddpd-u'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fbsdp-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fsrs'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='intel-psfd'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='ipred-ctrl'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='lam'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='mcdt-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='movdir64b'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='movdiri'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pbrsb-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='prefetchiti'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='psdp-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='rrsba-ctrl'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='serialize'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='sha512'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='sm3'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='sm4'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='ss'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='Cooperlake'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512-bf16'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='hle'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='taa-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='Cooperlake-v1'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512-bf16'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='hle'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='taa-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='Cooperlake-v2'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512-bf16'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='hle'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='taa-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='Denverton'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='mpx'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='Denverton-v1'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='mpx'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='Denverton-v2'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='Denverton-v3'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='Dhyana-v2'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='EPYC-Genoa'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='amd-psfd'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='auto-ibrs'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512-bf16'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512bitalg'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512ifma'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vbmi'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='la57'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='no-nested-data-bp'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='null-sel-clr-base'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='stibp-always-on'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='EPYC-Genoa-v1'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='amd-psfd'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='auto-ibrs'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512-bf16'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512bitalg'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512ifma'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vbmi'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='la57'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='no-nested-data-bp'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='null-sel-clr-base'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='stibp-always-on'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='EPYC-Genoa-v2'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='amd-psfd'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='auto-ibrs'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512-bf16'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512bitalg'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512ifma'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vbmi'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fs-gs-base-ns'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='la57'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='no-nested-data-bp'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='null-sel-clr-base'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='perfmon-v2'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='stibp-always-on'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='EPYC-Milan'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='EPYC-Milan-v1'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='EPYC-Milan-v2'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='amd-psfd'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='no-nested-data-bp'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='null-sel-clr-base'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='stibp-always-on'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='EPYC-Milan-v3'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='amd-psfd'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='no-nested-data-bp'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='null-sel-clr-base'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='stibp-always-on'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='EPYC-Rome'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='EPYC-Rome-v1'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='EPYC-Rome-v2'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='EPYC-Rome-v3'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='EPYC-Turin'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='amd-psfd'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='auto-ibrs'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx-vnni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512-bf16'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512-vp2intersect'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512bitalg'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512ifma'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vbmi'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fs-gs-base-ns'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='ibpb-brtype'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='la57'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='movdir64b'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='movdiri'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='no-nested-data-bp'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='null-sel-clr-base'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='perfmon-v2'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='prefetchi'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='sbpb'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='srso-user-kernel-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='stibp-always-on'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='EPYC-Turin-v1'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='amd-psfd'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='auto-ibrs'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx-vnni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512-bf16'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512-vp2intersect'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512bitalg'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512ifma'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vbmi'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fs-gs-base-ns'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='ibpb-brtype'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='la57'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='movdir64b'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='movdiri'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='no-nested-data-bp'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='null-sel-clr-base'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='perfmon-v2'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='prefetchi'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='sbpb'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='srso-user-kernel-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='stibp-always-on'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='EPYC-v3'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='EPYC-v4'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='EPYC-v5'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='GraniteRapids'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='amx-bf16'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='amx-fp16'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='amx-int8'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='amx-tile'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx-vnni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512-bf16'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512-fp16'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512bitalg'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512ifma'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vbmi'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fbsdp-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fsrc'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fsrs'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fzrm'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='hle'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='la57'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='mcdt-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pbrsb-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='prefetchiti'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='psdp-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='serialize'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='taa-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='tsx-ldtrk'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='xfd'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='GraniteRapids-v1'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='amx-bf16'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='amx-fp16'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='amx-int8'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='amx-tile'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx-vnni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512-bf16'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512-fp16'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512bitalg'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512ifma'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vbmi'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fbsdp-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fsrc'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fsrs'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fzrm'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='hle'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='la57'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='mcdt-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pbrsb-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='prefetchiti'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='psdp-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='serialize'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='taa-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='tsx-ldtrk'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='xfd'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='GraniteRapids-v2'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='amx-bf16'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='amx-fp16'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='amx-int8'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='amx-tile'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx-vnni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx10'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx10-128'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx10-256'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx10-512'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512-bf16'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512-fp16'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512bitalg'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512ifma'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vbmi'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='cldemote'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fbsdp-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fsrc'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fsrs'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fzrm'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='hle'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='la57'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='mcdt-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='movdir64b'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='movdiri'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pbrsb-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='prefetchiti'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='psdp-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='serialize'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='ss'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='taa-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='tsx-ldtrk'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='xfd'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='GraniteRapids-v3'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='amx-bf16'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='amx-fp16'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='amx-int8'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='amx-tile'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx-vnni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx10'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx10-128'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx10-256'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx10-512'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512-bf16'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512-fp16'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512bitalg'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512ifma'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vbmi'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='cldemote'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fbsdp-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fsrc'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fsrs'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fzrm'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='hle'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='la57'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='mcdt-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='movdir64b'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='movdiri'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pbrsb-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='prefetchiti'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='psdp-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='serialize'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='ss'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='taa-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='tsx-ldtrk'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='xfd'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='Haswell'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='hle'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='Haswell-IBRS'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='hle'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='Haswell-noTSX'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='Haswell-v1'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='hle'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='Haswell-v2'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='Haswell-v3'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='hle'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='Haswell-v4'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='Icelake-Server'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512bitalg'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vbmi'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='hle'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='la57'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='Icelake-Server-noTSX'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512bitalg'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vbmi'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='la57'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='Icelake-Server-v1'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512bitalg'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vbmi'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='hle'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='la57'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='Icelake-Server-v2'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512bitalg'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vbmi'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='la57'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='Icelake-Server-v3'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512bitalg'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vbmi'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='la57'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='taa-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='Icelake-Server-v4'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512bitalg'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512ifma'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vbmi'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='la57'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='taa-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='Icelake-Server-v5'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512bitalg'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512ifma'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vbmi'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='la57'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='taa-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='Icelake-Server-v6'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512bitalg'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512ifma'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vbmi'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='la57'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='taa-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='Icelake-Server-v7'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512bitalg'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512ifma'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vbmi'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='hle'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='la57'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='taa-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='IvyBridge'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='IvyBridge-IBRS'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='IvyBridge-v1'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='IvyBridge-v2'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='KnightsMill'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512-4fmaps'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512-4vnniw'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512er'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512pf'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='ss'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='KnightsMill-v1'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512-4fmaps'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512-4vnniw'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512er'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512pf'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='ss'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='Opteron_G4'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fma4'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='xop'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='Opteron_G4-v1'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fma4'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='xop'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='Opteron_G5'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fma4'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='tbm'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='xop'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='Opteron_G5-v1'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fma4'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='tbm'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='xop'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='SapphireRapids'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='amx-bf16'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='amx-int8'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='amx-tile'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx-vnni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512-bf16'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512-fp16'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512bitalg'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512ifma'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vbmi'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fsrc'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fsrs'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fzrm'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='hle'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='la57'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='serialize'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='taa-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='tsx-ldtrk'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='xfd'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='SapphireRapids-v1'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='amx-bf16'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='amx-int8'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='amx-tile'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx-vnni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512-bf16'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512-fp16'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512bitalg'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512ifma'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vbmi'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fsrc'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fsrs'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fzrm'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='hle'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='la57'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='serialize'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='taa-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='tsx-ldtrk'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='xfd'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='SapphireRapids-v2'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='amx-bf16'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='amx-int8'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='amx-tile'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx-vnni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512-bf16'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512-fp16'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512bitalg'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512ifma'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vbmi'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fbsdp-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fsrc'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fsrs'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fzrm'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='hle'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='la57'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='psdp-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='serialize'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='taa-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='tsx-ldtrk'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='xfd'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='SapphireRapids-v3'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='amx-bf16'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='amx-int8'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='amx-tile'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx-vnni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512-bf16'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512-fp16'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512bitalg'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512ifma'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vbmi'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='cldemote'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fbsdp-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fsrc'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fsrs'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fzrm'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='hle'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='la57'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='movdir64b'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='movdiri'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='psdp-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='serialize'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='ss'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='taa-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='tsx-ldtrk'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='xfd'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='SapphireRapids-v4'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='amx-bf16'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='amx-int8'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='amx-tile'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx-vnni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512-bf16'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512-fp16'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512bitalg'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512ifma'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vbmi'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='cldemote'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fbsdp-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fsrc'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fsrs'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fzrm'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='hle'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='la57'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='movdir64b'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='movdiri'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='psdp-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='serialize'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='ss'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='taa-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='tsx-ldtrk'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='xfd'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='SierraForest'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx-ifma'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx-ne-convert'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx-vnni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx-vnni-int8'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='cmpccxadd'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fbsdp-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fsrs'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='mcdt-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pbrsb-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='psdp-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='serialize'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='SierraForest-v1'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx-ifma'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx-ne-convert'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx-vnni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx-vnni-int8'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='cmpccxadd'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fbsdp-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fsrs'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='mcdt-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pbrsb-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='psdp-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='serialize'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='SierraForest-v2'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx-ifma'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx-ne-convert'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx-vnni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx-vnni-int8'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='bhi-ctrl'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='cldemote'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='cmpccxadd'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fbsdp-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fsrs'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='intel-psfd'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='ipred-ctrl'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='lam'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='mcdt-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='movdir64b'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='movdiri'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pbrsb-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='psdp-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='rrsba-ctrl'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='serialize'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='ss'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='SierraForest-v3'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx-ifma'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx-ne-convert'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx-vnni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx-vnni-int8'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='bhi-ctrl'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='cldemote'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='cmpccxadd'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fbsdp-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fsrs'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='intel-psfd'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='ipred-ctrl'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='lam'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='mcdt-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='movdir64b'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='movdiri'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pbrsb-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='psdp-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='rrsba-ctrl'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='serialize'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='ss'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='Skylake-Client'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='hle'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='Skylake-Client-IBRS'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='hle'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='Skylake-Client-v1'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='hle'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='Skylake-Client-v2'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='hle'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='Skylake-Client-v3'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='Skylake-Client-v4'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='Skylake-Server'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='hle'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='Skylake-Server-IBRS'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='hle'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='Skylake-Server-v1'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='hle'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='Skylake-Server-v2'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='hle'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='Skylake-Server-v3'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='Skylake-Server-v4'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='Skylake-Server-v5'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='Snowridge'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='cldemote'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='core-capability'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='movdir64b'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='movdiri'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='mpx'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='split-lock-detect'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='Snowridge-v1'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='cldemote'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='core-capability'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='movdir64b'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='movdiri'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='mpx'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='split-lock-detect'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='Snowridge-v2'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='cldemote'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='core-capability'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='movdir64b'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='movdiri'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='split-lock-detect'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='Snowridge-v3'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='cldemote'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='core-capability'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='movdir64b'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='movdiri'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='split-lock-detect'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='Snowridge-v4'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='cldemote'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='movdir64b'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='movdiri'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='athlon'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='3dnow'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='3dnowext'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='athlon-v1'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='3dnow'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='3dnowext'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='core2duo'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='ss'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='core2duo-v1'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='ss'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='coreduo'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='ss'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='coreduo-v1'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='ss'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='n270'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='ss'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='n270-v1'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='ss'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='phenom'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='3dnow'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='3dnowext'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='phenom-v1'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='3dnow'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='3dnowext'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:     </mode>
Jan 20 14:18:08 compute-0 nova_compute[250018]:   </cpu>
Jan 20 14:18:08 compute-0 nova_compute[250018]:   <memoryBacking supported='yes'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:     <enum name='sourceType'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <value>file</value>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <value>anonymous</value>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <value>memfd</value>
Jan 20 14:18:08 compute-0 nova_compute[250018]:     </enum>
Jan 20 14:18:08 compute-0 nova_compute[250018]:   </memoryBacking>
Jan 20 14:18:08 compute-0 nova_compute[250018]:   <devices>
Jan 20 14:18:08 compute-0 nova_compute[250018]:     <disk supported='yes'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <enum name='diskDevice'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <value>disk</value>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <value>cdrom</value>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <value>floppy</value>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <value>lun</value>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </enum>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <enum name='bus'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <value>ide</value>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <value>fdc</value>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <value>scsi</value>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <value>virtio</value>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <value>usb</value>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <value>sata</value>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </enum>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <enum name='model'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <value>virtio</value>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <value>virtio-transitional</value>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <value>virtio-non-transitional</value>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </enum>
Jan 20 14:18:08 compute-0 nova_compute[250018]:     </disk>
Jan 20 14:18:08 compute-0 nova_compute[250018]:     <graphics supported='yes'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <enum name='type'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <value>vnc</value>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <value>egl-headless</value>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <value>dbus</value>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </enum>
Jan 20 14:18:08 compute-0 nova_compute[250018]:     </graphics>
Jan 20 14:18:08 compute-0 nova_compute[250018]:     <video supported='yes'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <enum name='modelType'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <value>vga</value>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <value>cirrus</value>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <value>virtio</value>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <value>none</value>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <value>bochs</value>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <value>ramfb</value>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </enum>
Jan 20 14:18:08 compute-0 nova_compute[250018]:     </video>
Jan 20 14:18:08 compute-0 nova_compute[250018]:     <hostdev supported='yes'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <enum name='mode'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <value>subsystem</value>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </enum>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <enum name='startupPolicy'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <value>default</value>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <value>mandatory</value>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <value>requisite</value>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <value>optional</value>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </enum>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <enum name='subsysType'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <value>usb</value>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <value>pci</value>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <value>scsi</value>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </enum>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <enum name='capsType'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <enum name='pciBackend'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:     </hostdev>
Jan 20 14:18:08 compute-0 nova_compute[250018]:     <rng supported='yes'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <enum name='model'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <value>virtio</value>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <value>virtio-transitional</value>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <value>virtio-non-transitional</value>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </enum>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <enum name='backendModel'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <value>random</value>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <value>egd</value>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <value>builtin</value>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </enum>
Jan 20 14:18:08 compute-0 nova_compute[250018]:     </rng>
Jan 20 14:18:08 compute-0 nova_compute[250018]:     <filesystem supported='yes'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <enum name='driverType'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <value>path</value>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <value>handle</value>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <value>virtiofs</value>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </enum>
Jan 20 14:18:08 compute-0 nova_compute[250018]:     </filesystem>
Jan 20 14:18:08 compute-0 nova_compute[250018]:     <tpm supported='yes'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <enum name='model'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <value>tpm-tis</value>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <value>tpm-crb</value>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </enum>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <enum name='backendModel'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <value>emulator</value>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <value>external</value>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </enum>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <enum name='backendVersion'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <value>2.0</value>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </enum>
Jan 20 14:18:08 compute-0 nova_compute[250018]:     </tpm>
Jan 20 14:18:08 compute-0 nova_compute[250018]:     <redirdev supported='yes'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <enum name='bus'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <value>usb</value>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </enum>
Jan 20 14:18:08 compute-0 nova_compute[250018]:     </redirdev>
Jan 20 14:18:08 compute-0 nova_compute[250018]:     <channel supported='yes'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <enum name='type'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <value>pty</value>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <value>unix</value>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </enum>
Jan 20 14:18:08 compute-0 nova_compute[250018]:     </channel>
Jan 20 14:18:08 compute-0 nova_compute[250018]:     <crypto supported='yes'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <enum name='model'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <enum name='type'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <value>qemu</value>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </enum>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <enum name='backendModel'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <value>builtin</value>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </enum>
Jan 20 14:18:08 compute-0 nova_compute[250018]:     </crypto>
Jan 20 14:18:08 compute-0 nova_compute[250018]:     <interface supported='yes'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <enum name='backendType'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <value>default</value>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <value>passt</value>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </enum>
Jan 20 14:18:08 compute-0 nova_compute[250018]:     </interface>
Jan 20 14:18:08 compute-0 nova_compute[250018]:     <panic supported='yes'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <enum name='model'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <value>isa</value>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <value>hyperv</value>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </enum>
Jan 20 14:18:08 compute-0 nova_compute[250018]:     </panic>
Jan 20 14:18:08 compute-0 nova_compute[250018]:     <console supported='yes'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <enum name='type'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <value>null</value>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <value>vc</value>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <value>pty</value>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <value>dev</value>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <value>file</value>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <value>pipe</value>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <value>stdio</value>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <value>udp</value>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <value>tcp</value>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <value>unix</value>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <value>qemu-vdagent</value>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <value>dbus</value>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </enum>
Jan 20 14:18:08 compute-0 nova_compute[250018]:     </console>
Jan 20 14:18:08 compute-0 nova_compute[250018]:   </devices>
Jan 20 14:18:08 compute-0 nova_compute[250018]:   <features>
Jan 20 14:18:08 compute-0 nova_compute[250018]:     <gic supported='no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:     <vmcoreinfo supported='yes'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:     <genid supported='yes'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:     <backingStoreInput supported='yes'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:     <backup supported='yes'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:     <async-teardown supported='yes'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:     <s390-pv supported='no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:     <ps2 supported='yes'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:     <tdx supported='no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:     <sev supported='no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:     <sgx supported='no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:     <hyperv supported='yes'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <enum name='features'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <value>relaxed</value>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <value>vapic</value>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <value>spinlocks</value>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <value>vpindex</value>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <value>runtime</value>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <value>synic</value>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <value>stimer</value>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <value>reset</value>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <value>vendor_id</value>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <value>frequencies</value>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <value>reenlightenment</value>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <value>tlbflush</value>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <value>ipi</value>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <value>avic</value>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <value>emsr_bitmap</value>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <value>xmm_input</value>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </enum>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <defaults>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <spinlocks>4095</spinlocks>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <stimer_direct>on</stimer_direct>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <tlbflush_direct>on</tlbflush_direct>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <tlbflush_extended>on</tlbflush_extended>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </defaults>
Jan 20 14:18:08 compute-0 nova_compute[250018]:     </hyperv>
Jan 20 14:18:08 compute-0 nova_compute[250018]:     <launchSecurity supported='no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:   </features>
Jan 20 14:18:08 compute-0 nova_compute[250018]: </domainCapabilities>
Jan 20 14:18:08 compute-0 nova_compute[250018]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.871 250022 DEBUG nova.virt.libvirt.volume.mount [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Jan 20 14:18:08 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.873 250022 DEBUG nova.virt.libvirt.host [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Jan 20 14:18:08 compute-0 nova_compute[250018]: <domainCapabilities>
Jan 20 14:18:08 compute-0 nova_compute[250018]:   <path>/usr/libexec/qemu-kvm</path>
Jan 20 14:18:08 compute-0 nova_compute[250018]:   <domain>kvm</domain>
Jan 20 14:18:08 compute-0 nova_compute[250018]:   <machine>pc-q35-rhel9.8.0</machine>
Jan 20 14:18:08 compute-0 nova_compute[250018]:   <arch>i686</arch>
Jan 20 14:18:08 compute-0 nova_compute[250018]:   <vcpu max='4096'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:   <iothreads supported='yes'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:   <os supported='yes'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:     <enum name='firmware'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:     <loader supported='yes'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <enum name='type'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <value>rom</value>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <value>pflash</value>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </enum>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <enum name='readonly'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <value>yes</value>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <value>no</value>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </enum>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <enum name='secure'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <value>no</value>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </enum>
Jan 20 14:18:08 compute-0 nova_compute[250018]:     </loader>
Jan 20 14:18:08 compute-0 nova_compute[250018]:   </os>
Jan 20 14:18:08 compute-0 nova_compute[250018]:   <cpu>
Jan 20 14:18:08 compute-0 nova_compute[250018]:     <mode name='host-passthrough' supported='yes'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <enum name='hostPassthroughMigratable'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <value>on</value>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <value>off</value>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </enum>
Jan 20 14:18:08 compute-0 nova_compute[250018]:     </mode>
Jan 20 14:18:08 compute-0 nova_compute[250018]:     <mode name='maximum' supported='yes'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <enum name='maximumMigratable'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <value>on</value>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <value>off</value>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </enum>
Jan 20 14:18:08 compute-0 nova_compute[250018]:     </mode>
Jan 20 14:18:08 compute-0 nova_compute[250018]:     <mode name='host-model' supported='yes'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <vendor>AMD</vendor>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <feature policy='require' name='x2apic'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <feature policy='require' name='tsc-deadline'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <feature policy='require' name='hypervisor'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <feature policy='require' name='tsc_adjust'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <feature policy='require' name='spec-ctrl'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <feature policy='require' name='stibp'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <feature policy='require' name='ssbd'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <feature policy='require' name='cmp_legacy'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <feature policy='require' name='overflow-recov'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <feature policy='require' name='succor'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <feature policy='require' name='ibrs'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <feature policy='require' name='amd-ssbd'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <feature policy='require' name='virt-ssbd'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <feature policy='require' name='lbrv'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <feature policy='require' name='tsc-scale'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <feature policy='require' name='vmcb-clean'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <feature policy='require' name='flushbyasid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <feature policy='require' name='pause-filter'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <feature policy='require' name='pfthreshold'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <feature policy='require' name='svme-addr-chk'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <feature policy='disable' name='xsaves'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:     </mode>
Jan 20 14:18:08 compute-0 nova_compute[250018]:     <mode name='custom' supported='yes'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='Broadwell'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='hle'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='Broadwell-IBRS'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='hle'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='Broadwell-noTSX'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='Broadwell-v1'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='hle'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='Broadwell-v2'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='Broadwell-v3'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='hle'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='Broadwell-v4'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='Cascadelake-Server'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='hle'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='Cascadelake-Server-v1'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='hle'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='Cascadelake-Server-v2'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='hle'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='Cascadelake-Server-v3'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='Cascadelake-Server-v4'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='Cascadelake-Server-v5'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='ClearwaterForest'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx-ifma'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx-ne-convert'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx-vnni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx-vnni-int16'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx-vnni-int8'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='bhi-ctrl'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='bhi-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='cldemote'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='cmpccxadd'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='ddpd-u'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fbsdp-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fsrs'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='intel-psfd'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='ipred-ctrl'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='lam'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='mcdt-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='movdir64b'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='movdiri'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pbrsb-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='prefetchiti'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='psdp-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='rrsba-ctrl'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='serialize'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='sha512'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='sm3'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='sm4'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='ss'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='ClearwaterForest-v1'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx-ifma'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx-ne-convert'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx-vnni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx-vnni-int16'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx-vnni-int8'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='bhi-ctrl'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='bhi-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='cldemote'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='cmpccxadd'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='ddpd-u'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fbsdp-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fsrs'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='intel-psfd'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='ipred-ctrl'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='lam'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='mcdt-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='movdir64b'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='movdiri'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pbrsb-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='prefetchiti'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='psdp-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='rrsba-ctrl'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='serialize'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='sha512'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='sm3'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='sm4'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='ss'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='Cooperlake'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512-bf16'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='hle'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='taa-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='Cooperlake-v1'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512-bf16'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='hle'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='taa-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='Cooperlake-v2'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512-bf16'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='hle'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='taa-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='Denverton'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='mpx'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='Denverton-v1'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='mpx'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='Denverton-v2'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='Denverton-v3'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='Dhyana-v2'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='EPYC-Genoa'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='amd-psfd'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='auto-ibrs'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512-bf16'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512bitalg'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512ifma'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vbmi'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='la57'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='no-nested-data-bp'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='null-sel-clr-base'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='stibp-always-on'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='EPYC-Genoa-v1'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='amd-psfd'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='auto-ibrs'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512-bf16'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512bitalg'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512ifma'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vbmi'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='la57'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='no-nested-data-bp'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='null-sel-clr-base'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='stibp-always-on'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='EPYC-Genoa-v2'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='amd-psfd'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='auto-ibrs'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512-bf16'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512bitalg'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512ifma'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vbmi'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fs-gs-base-ns'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='la57'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='no-nested-data-bp'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='null-sel-clr-base'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='perfmon-v2'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='stibp-always-on'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='EPYC-Milan'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='EPYC-Milan-v1'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='EPYC-Milan-v2'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='amd-psfd'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='no-nested-data-bp'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='null-sel-clr-base'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='stibp-always-on'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='EPYC-Milan-v3'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='amd-psfd'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='no-nested-data-bp'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='null-sel-clr-base'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='stibp-always-on'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='EPYC-Rome'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='EPYC-Rome-v1'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='EPYC-Rome-v2'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='EPYC-Rome-v3'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='EPYC-Turin'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='amd-psfd'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='auto-ibrs'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx-vnni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512-bf16'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512-vp2intersect'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512bitalg'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512ifma'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vbmi'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fs-gs-base-ns'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='ibpb-brtype'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='la57'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='movdir64b'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='movdiri'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='no-nested-data-bp'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='null-sel-clr-base'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='perfmon-v2'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='prefetchi'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='sbpb'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='srso-user-kernel-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='stibp-always-on'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='EPYC-Turin-v1'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='amd-psfd'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='auto-ibrs'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx-vnni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512-bf16'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512-vp2intersect'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512bitalg'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512ifma'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vbmi'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fs-gs-base-ns'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='ibpb-brtype'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='la57'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='movdir64b'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='movdiri'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='no-nested-data-bp'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='null-sel-clr-base'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='perfmon-v2'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='prefetchi'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='sbpb'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='srso-user-kernel-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='stibp-always-on'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='EPYC-v3'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='EPYC-v4'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='EPYC-v5'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='GraniteRapids'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='amx-bf16'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='amx-fp16'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='amx-int8'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='amx-tile'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx-vnni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512-bf16'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512-fp16'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512bitalg'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512ifma'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vbmi'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fbsdp-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fsrc'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fsrs'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fzrm'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='hle'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='la57'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='mcdt-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pbrsb-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='prefetchiti'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='psdp-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='serialize'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='taa-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='tsx-ldtrk'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='xfd'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='GraniteRapids-v1'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='amx-bf16'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='amx-fp16'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='amx-int8'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='amx-tile'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx-vnni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512-bf16'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512-fp16'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512bitalg'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512ifma'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vbmi'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fbsdp-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fsrc'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fsrs'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fzrm'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='hle'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='la57'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='mcdt-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pbrsb-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='prefetchiti'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='psdp-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='serialize'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='taa-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='tsx-ldtrk'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='xfd'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='GraniteRapids-v2'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='amx-bf16'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='amx-fp16'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='amx-int8'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='amx-tile'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx-vnni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx10'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx10-128'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx10-256'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx10-512'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512-bf16'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512-fp16'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512bitalg'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512ifma'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vbmi'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='cldemote'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fbsdp-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fsrc'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fsrs'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fzrm'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='hle'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='la57'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='mcdt-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='movdir64b'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='movdiri'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pbrsb-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='prefetchiti'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='psdp-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='serialize'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='ss'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='taa-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='tsx-ldtrk'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='xfd'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='GraniteRapids-v3'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='amx-bf16'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='amx-fp16'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='amx-int8'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='amx-tile'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx-vnni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx10'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx10-128'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx10-256'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx10-512'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512-bf16'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512-fp16'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512bitalg'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512ifma'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vbmi'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='cldemote'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fbsdp-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fsrc'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fsrs'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fzrm'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='hle'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='la57'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='mcdt-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='movdir64b'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='movdiri'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pbrsb-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='prefetchiti'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='psdp-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='serialize'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='ss'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='taa-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='tsx-ldtrk'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='xfd'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='Haswell'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='hle'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='Haswell-IBRS'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='hle'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='Haswell-noTSX'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='Haswell-v1'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='hle'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='Haswell-v2'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='Haswell-v3'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='hle'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='Haswell-v4'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='Icelake-Server'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512bitalg'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vbmi'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='hle'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='la57'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='Icelake-Server-noTSX'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512bitalg'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vbmi'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='la57'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='Icelake-Server-v1'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512bitalg'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vbmi'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='hle'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='la57'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='Icelake-Server-v2'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512bitalg'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vbmi'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='la57'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='Icelake-Server-v3'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512bitalg'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vbmi'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='la57'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='taa-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='Icelake-Server-v4'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512bitalg'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512ifma'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vbmi'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='la57'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='taa-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='Icelake-Server-v5'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512bitalg'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512ifma'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vbmi'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='la57'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='taa-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='Icelake-Server-v6'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512bitalg'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512ifma'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vbmi'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='la57'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='taa-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='Icelake-Server-v7'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512bitalg'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512ifma'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vbmi'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='hle'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='la57'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='taa-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='IvyBridge'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='IvyBridge-IBRS'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='IvyBridge-v1'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='IvyBridge-v2'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='KnightsMill'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512-4fmaps'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512-4vnniw'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512er'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512pf'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='ss'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='KnightsMill-v1'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512-4fmaps'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512-4vnniw'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512er'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512pf'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='ss'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='Opteron_G4'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fma4'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='xop'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='Opteron_G4-v1'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fma4'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='xop'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='Opteron_G5'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fma4'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='tbm'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='xop'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='Opteron_G5-v1'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fma4'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='tbm'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='xop'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='SapphireRapids'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='amx-bf16'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='amx-int8'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='amx-tile'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx-vnni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512-bf16'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512-fp16'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512bitalg'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512ifma'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vbmi'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fsrc'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fsrs'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fzrm'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='hle'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='la57'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='serialize'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='taa-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='tsx-ldtrk'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='xfd'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='SapphireRapids-v1'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='amx-bf16'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='amx-int8'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='amx-tile'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx-vnni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512-bf16'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512-fp16'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512bitalg'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512ifma'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vbmi'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fsrc'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fsrs'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fzrm'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='hle'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='la57'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='serialize'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='taa-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='tsx-ldtrk'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='xfd'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='SapphireRapids-v2'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='amx-bf16'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='amx-int8'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='amx-tile'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx-vnni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512-bf16'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512-fp16'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512bitalg'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512ifma'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vbmi'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fbsdp-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fsrc'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fsrs'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fzrm'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='hle'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='la57'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='psdp-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='serialize'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='taa-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='tsx-ldtrk'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='xfd'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='SapphireRapids-v3'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='amx-bf16'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='amx-int8'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='amx-tile'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx-vnni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512-bf16'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512-fp16'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512bitalg'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512ifma'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vbmi'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v790: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='cldemote'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fbsdp-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fsrc'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fsrs'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fzrm'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='hle'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='la57'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='movdir64b'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='movdiri'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='psdp-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='serialize'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='ss'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='taa-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='tsx-ldtrk'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='xfd'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='SapphireRapids-v4'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='amx-bf16'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='amx-int8'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='amx-tile'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx-vnni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512-bf16'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512-fp16'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512bitalg'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512ifma'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vbmi'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vnni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='cldemote'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fbsdp-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fsrc'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fsrs'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fzrm'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='hle'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='la57'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='movdir64b'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='movdiri'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='psdp-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='serialize'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='ss'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='taa-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='tsx-ldtrk'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='xfd'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='SierraForest'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx-ifma'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx-ne-convert'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx-vnni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx-vnni-int8'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='cmpccxadd'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fbsdp-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fsrs'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='mcdt-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pbrsb-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='psdp-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='serialize'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='SierraForest-v1'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx-ifma'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx-ne-convert'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx-vnni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx-vnni-int8'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='cmpccxadd'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fbsdp-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fsrs'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='mcdt-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pbrsb-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='psdp-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='serialize'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='SierraForest-v2'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx-ifma'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx-ne-convert'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx-vnni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx-vnni-int8'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='bhi-ctrl'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='cldemote'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='cmpccxadd'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fbsdp-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fsrs'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='intel-psfd'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='ipred-ctrl'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='lam'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='mcdt-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='movdir64b'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='movdiri'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pbrsb-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='psdp-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='rrsba-ctrl'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='serialize'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='ss'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='SierraForest-v3'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx-ifma'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx-ne-convert'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx-vnni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx-vnni-int8'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='bhi-ctrl'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='cldemote'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='cmpccxadd'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fbsdp-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fsrm'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='fsrs'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='gfni'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='ibrs-all'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='intel-psfd'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='ipred-ctrl'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='lam'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='mcdt-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='movdir64b'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='movdiri'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pbrsb-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='psdp-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='rrsba-ctrl'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='serialize'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='ss'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='vaes'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='Skylake-Client'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='hle'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='Skylake-Client-IBRS'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='hle'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='Skylake-Client-v1'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='hle'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='Skylake-Client-v2'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='hle'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='Skylake-Client-v3'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='Skylake-Client-v4'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='Skylake-Server'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='hle'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='rtm'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 20 14:18:08 compute-0 nova_compute[250018]:       <blockers model='Skylake-Server-IBRS'>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:08 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='hle'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='rtm'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='Skylake-Server-v1'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='hle'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='rtm'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='Skylake-Server-v2'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='hle'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='rtm'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='Skylake-Server-v3'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='Skylake-Server-v4'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='Skylake-Server-v5'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='Snowridge'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='cldemote'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='core-capability'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='gfni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='movdir64b'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='movdiri'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='mpx'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='split-lock-detect'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='Snowridge-v1'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='cldemote'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='core-capability'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='gfni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='movdir64b'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='movdiri'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='mpx'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='split-lock-detect'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='Snowridge-v2'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='cldemote'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='core-capability'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='gfni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='movdir64b'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='movdiri'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='split-lock-detect'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='Snowridge-v3'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='cldemote'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='core-capability'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='gfni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='movdir64b'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='movdiri'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='split-lock-detect'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='Snowridge-v4'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='cldemote'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='gfni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='movdir64b'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='movdiri'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='athlon'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='3dnow'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='3dnowext'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='athlon-v1'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='3dnow'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='3dnowext'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='core2duo'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='ss'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='core2duo-v1'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='ss'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='coreduo'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='ss'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='coreduo-v1'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='ss'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='n270'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='ss'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='n270-v1'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='ss'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='phenom'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='3dnow'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='3dnowext'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='phenom-v1'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='3dnow'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='3dnowext'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:     </mode>
Jan 20 14:18:09 compute-0 nova_compute[250018]:   </cpu>
Jan 20 14:18:09 compute-0 nova_compute[250018]:   <memoryBacking supported='yes'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:     <enum name='sourceType'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <value>file</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <value>anonymous</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <value>memfd</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:     </enum>
Jan 20 14:18:09 compute-0 nova_compute[250018]:   </memoryBacking>
Jan 20 14:18:09 compute-0 nova_compute[250018]:   <devices>
Jan 20 14:18:09 compute-0 nova_compute[250018]:     <disk supported='yes'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <enum name='diskDevice'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>disk</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>cdrom</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>floppy</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>lun</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </enum>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <enum name='bus'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>fdc</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>scsi</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>virtio</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>usb</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>sata</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </enum>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <enum name='model'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>virtio</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>virtio-transitional</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>virtio-non-transitional</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </enum>
Jan 20 14:18:09 compute-0 nova_compute[250018]:     </disk>
Jan 20 14:18:09 compute-0 nova_compute[250018]:     <graphics supported='yes'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <enum name='type'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>vnc</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>egl-headless</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>dbus</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </enum>
Jan 20 14:18:09 compute-0 nova_compute[250018]:     </graphics>
Jan 20 14:18:09 compute-0 nova_compute[250018]:     <video supported='yes'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <enum name='modelType'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>vga</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>cirrus</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>virtio</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>none</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>bochs</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>ramfb</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </enum>
Jan 20 14:18:09 compute-0 nova_compute[250018]:     </video>
Jan 20 14:18:09 compute-0 nova_compute[250018]:     <hostdev supported='yes'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <enum name='mode'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>subsystem</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </enum>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <enum name='startupPolicy'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>default</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>mandatory</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>requisite</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>optional</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </enum>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <enum name='subsysType'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>usb</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>pci</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>scsi</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </enum>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <enum name='capsType'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <enum name='pciBackend'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:     </hostdev>
Jan 20 14:18:09 compute-0 nova_compute[250018]:     <rng supported='yes'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <enum name='model'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>virtio</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>virtio-transitional</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>virtio-non-transitional</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </enum>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <enum name='backendModel'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>random</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>egd</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>builtin</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </enum>
Jan 20 14:18:09 compute-0 nova_compute[250018]:     </rng>
Jan 20 14:18:09 compute-0 nova_compute[250018]:     <filesystem supported='yes'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <enum name='driverType'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>path</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>handle</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>virtiofs</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </enum>
Jan 20 14:18:09 compute-0 nova_compute[250018]:     </filesystem>
Jan 20 14:18:09 compute-0 nova_compute[250018]:     <tpm supported='yes'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <enum name='model'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>tpm-tis</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>tpm-crb</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </enum>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <enum name='backendModel'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>emulator</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>external</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </enum>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <enum name='backendVersion'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>2.0</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </enum>
Jan 20 14:18:09 compute-0 nova_compute[250018]:     </tpm>
Jan 20 14:18:09 compute-0 nova_compute[250018]:     <redirdev supported='yes'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <enum name='bus'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>usb</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </enum>
Jan 20 14:18:09 compute-0 nova_compute[250018]:     </redirdev>
Jan 20 14:18:09 compute-0 nova_compute[250018]:     <channel supported='yes'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <enum name='type'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>pty</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>unix</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </enum>
Jan 20 14:18:09 compute-0 nova_compute[250018]:     </channel>
Jan 20 14:18:09 compute-0 nova_compute[250018]:     <crypto supported='yes'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <enum name='model'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <enum name='type'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>qemu</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </enum>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <enum name='backendModel'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>builtin</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </enum>
Jan 20 14:18:09 compute-0 nova_compute[250018]:     </crypto>
Jan 20 14:18:09 compute-0 nova_compute[250018]:     <interface supported='yes'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <enum name='backendType'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>default</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>passt</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </enum>
Jan 20 14:18:09 compute-0 nova_compute[250018]:     </interface>
Jan 20 14:18:09 compute-0 nova_compute[250018]:     <panic supported='yes'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <enum name='model'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>isa</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>hyperv</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </enum>
Jan 20 14:18:09 compute-0 nova_compute[250018]:     </panic>
Jan 20 14:18:09 compute-0 nova_compute[250018]:     <console supported='yes'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <enum name='type'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>null</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>vc</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>pty</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>dev</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>file</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>pipe</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>stdio</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>udp</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>tcp</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>unix</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>qemu-vdagent</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>dbus</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </enum>
Jan 20 14:18:09 compute-0 nova_compute[250018]:     </console>
Jan 20 14:18:09 compute-0 nova_compute[250018]:   </devices>
Jan 20 14:18:09 compute-0 nova_compute[250018]:   <features>
Jan 20 14:18:09 compute-0 nova_compute[250018]:     <gic supported='no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:     <vmcoreinfo supported='yes'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:     <genid supported='yes'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:     <backingStoreInput supported='yes'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:     <backup supported='yes'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:     <async-teardown supported='yes'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:     <s390-pv supported='no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:     <ps2 supported='yes'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:     <tdx supported='no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:     <sev supported='no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:     <sgx supported='no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:     <hyperv supported='yes'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <enum name='features'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>relaxed</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>vapic</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>spinlocks</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>vpindex</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>runtime</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>synic</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>stimer</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>reset</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>vendor_id</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>frequencies</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>reenlightenment</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>tlbflush</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>ipi</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>avic</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>emsr_bitmap</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>xmm_input</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </enum>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <defaults>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <spinlocks>4095</spinlocks>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <stimer_direct>on</stimer_direct>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <tlbflush_direct>on</tlbflush_direct>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <tlbflush_extended>on</tlbflush_extended>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </defaults>
Jan 20 14:18:09 compute-0 nova_compute[250018]:     </hyperv>
Jan 20 14:18:09 compute-0 nova_compute[250018]:     <launchSecurity supported='no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:   </features>
Jan 20 14:18:09 compute-0 nova_compute[250018]: </domainCapabilities>
Jan 20 14:18:09 compute-0 nova_compute[250018]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 20 14:18:09 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.935 250022 DEBUG nova.virt.libvirt.host [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Jan 20 14:18:09 compute-0 nova_compute[250018]: 2026-01-20 14:18:08.940 250022 DEBUG nova.virt.libvirt.host [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Jan 20 14:18:09 compute-0 nova_compute[250018]: <domainCapabilities>
Jan 20 14:18:09 compute-0 nova_compute[250018]:   <path>/usr/libexec/qemu-kvm</path>
Jan 20 14:18:09 compute-0 nova_compute[250018]:   <domain>kvm</domain>
Jan 20 14:18:09 compute-0 nova_compute[250018]:   <machine>pc-i440fx-rhel7.6.0</machine>
Jan 20 14:18:09 compute-0 nova_compute[250018]:   <arch>x86_64</arch>
Jan 20 14:18:09 compute-0 nova_compute[250018]:   <vcpu max='240'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:   <iothreads supported='yes'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:   <os supported='yes'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:     <enum name='firmware'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:     <loader supported='yes'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <enum name='type'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>rom</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>pflash</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </enum>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <enum name='readonly'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>yes</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>no</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </enum>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <enum name='secure'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>no</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </enum>
Jan 20 14:18:09 compute-0 nova_compute[250018]:     </loader>
Jan 20 14:18:09 compute-0 nova_compute[250018]:   </os>
Jan 20 14:18:09 compute-0 nova_compute[250018]:   <cpu>
Jan 20 14:18:09 compute-0 nova_compute[250018]:     <mode name='host-passthrough' supported='yes'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <enum name='hostPassthroughMigratable'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>on</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>off</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </enum>
Jan 20 14:18:09 compute-0 nova_compute[250018]:     </mode>
Jan 20 14:18:09 compute-0 nova_compute[250018]:     <mode name='maximum' supported='yes'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <enum name='maximumMigratable'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>on</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>off</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </enum>
Jan 20 14:18:09 compute-0 nova_compute[250018]:     </mode>
Jan 20 14:18:09 compute-0 nova_compute[250018]:     <mode name='host-model' supported='yes'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <vendor>AMD</vendor>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <feature policy='require' name='x2apic'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <feature policy='require' name='tsc-deadline'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <feature policy='require' name='hypervisor'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <feature policy='require' name='tsc_adjust'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <feature policy='require' name='spec-ctrl'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <feature policy='require' name='stibp'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <feature policy='require' name='ssbd'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <feature policy='require' name='cmp_legacy'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <feature policy='require' name='overflow-recov'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <feature policy='require' name='succor'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <feature policy='require' name='ibrs'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <feature policy='require' name='amd-ssbd'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <feature policy='require' name='virt-ssbd'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <feature policy='require' name='lbrv'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <feature policy='require' name='tsc-scale'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <feature policy='require' name='vmcb-clean'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <feature policy='require' name='flushbyasid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <feature policy='require' name='pause-filter'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <feature policy='require' name='pfthreshold'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <feature policy='require' name='svme-addr-chk'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <feature policy='disable' name='xsaves'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:     </mode>
Jan 20 14:18:09 compute-0 nova_compute[250018]:     <mode name='custom' supported='yes'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='Broadwell'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='hle'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='rtm'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='Broadwell-IBRS'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='hle'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='rtm'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='Broadwell-noTSX'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='Broadwell-v1'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='hle'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='rtm'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='Broadwell-v2'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='Broadwell-v3'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='hle'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='rtm'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='Broadwell-v4'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='Cascadelake-Server'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vnni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='hle'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='rtm'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vnni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='ibrs-all'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='Cascadelake-Server-v1'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vnni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='hle'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='rtm'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='Cascadelake-Server-v2'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vnni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='hle'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='ibrs-all'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='rtm'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='Cascadelake-Server-v3'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vnni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='ibrs-all'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='Cascadelake-Server-v4'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vnni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='ibrs-all'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='Cascadelake-Server-v5'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vnni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='ibrs-all'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='ClearwaterForest'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx-ifma'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx-ne-convert'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx-vnni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx-vnni-int16'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx-vnni-int8'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='bhi-ctrl'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='bhi-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='cldemote'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='cmpccxadd'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='ddpd-u'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fbsdp-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fsrm'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fsrs'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='gfni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='ibrs-all'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='intel-psfd'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='ipred-ctrl'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='lam'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='mcdt-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='movdir64b'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='movdiri'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pbrsb-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='prefetchiti'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='psdp-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='rrsba-ctrl'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='serialize'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='sha512'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='sm3'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='sm4'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='ss'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='vaes'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='ClearwaterForest-v1'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx-ifma'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx-ne-convert'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx-vnni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx-vnni-int16'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx-vnni-int8'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='bhi-ctrl'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='bhi-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='cldemote'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='cmpccxadd'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='ddpd-u'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fbsdp-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fsrm'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fsrs'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='gfni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='ibrs-all'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='intel-psfd'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='ipred-ctrl'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='lam'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='mcdt-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='movdir64b'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='movdiri'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pbrsb-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='prefetchiti'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='psdp-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='rrsba-ctrl'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='serialize'/>
Jan 20 14:18:09 compute-0 rsyslogd[1003]: imjournal from <np0005588918:nova_compute>: begin to drop messages due to rate-limiting
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='sha512'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='sm3'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='sm4'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='ss'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='vaes'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='Cooperlake'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512-bf16'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vnni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='hle'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='ibrs-all'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='rtm'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='taa-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 20 14:18:09 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='Cooperlake-v1'>
Jan 20 14:18:09 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:18:09 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:18:09.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512-bf16'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vnni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='hle'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='ibrs-all'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='rtm'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='taa-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='Cooperlake-v2'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512-bf16'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vnni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='hle'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='ibrs-all'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='rtm'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='taa-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='Denverton'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='mpx'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='Denverton-v1'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='mpx'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='Denverton-v2'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='Denverton-v3'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='Dhyana-v2'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='EPYC-Genoa'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='amd-psfd'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='auto-ibrs'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512-bf16'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512bitalg'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512ifma'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vbmi'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vnni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fsrm'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='gfni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='la57'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='no-nested-data-bp'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='null-sel-clr-base'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='stibp-always-on'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='vaes'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='EPYC-Genoa-v1'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='amd-psfd'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='auto-ibrs'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512-bf16'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512bitalg'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512ifma'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vbmi'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vnni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fsrm'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='gfni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='la57'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='no-nested-data-bp'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='null-sel-clr-base'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='stibp-always-on'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='vaes'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='EPYC-Genoa-v2'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='amd-psfd'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='auto-ibrs'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512-bf16'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512bitalg'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512ifma'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vbmi'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vnni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fs-gs-base-ns'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fsrm'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='gfni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='la57'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='no-nested-data-bp'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='null-sel-clr-base'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='perfmon-v2'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='stibp-always-on'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='vaes'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='EPYC-Milan'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fsrm'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='EPYC-Milan-v1'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fsrm'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='EPYC-Milan-v2'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='amd-psfd'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fsrm'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='no-nested-data-bp'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='null-sel-clr-base'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='stibp-always-on'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='vaes'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='EPYC-Milan-v3'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='amd-psfd'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fsrm'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='no-nested-data-bp'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='null-sel-clr-base'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='stibp-always-on'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='vaes'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='EPYC-Rome'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='EPYC-Rome-v1'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='EPYC-Rome-v2'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='EPYC-Rome-v3'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='EPYC-Turin'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='amd-psfd'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='auto-ibrs'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx-vnni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512-bf16'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512-vp2intersect'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512bitalg'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512ifma'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vbmi'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vnni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fs-gs-base-ns'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fsrm'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='gfni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='ibpb-brtype'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='la57'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='movdir64b'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='movdiri'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='no-nested-data-bp'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='null-sel-clr-base'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='perfmon-v2'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='prefetchi'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='sbpb'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='srso-user-kernel-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='stibp-always-on'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='vaes'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='EPYC-Turin-v1'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='amd-psfd'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='auto-ibrs'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx-vnni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512-bf16'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512-vp2intersect'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512bitalg'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512ifma'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vbmi'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vnni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fs-gs-base-ns'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fsrm'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='gfni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='ibpb-brtype'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='la57'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='movdir64b'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='movdiri'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='no-nested-data-bp'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='null-sel-clr-base'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='perfmon-v2'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='prefetchi'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='sbpb'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='srso-user-kernel-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='stibp-always-on'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='vaes'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='EPYC-v3'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='EPYC-v4'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='EPYC-v5'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='GraniteRapids'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='amx-bf16'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='amx-fp16'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='amx-int8'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='amx-tile'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx-vnni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512-bf16'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512-fp16'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512bitalg'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512ifma'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vbmi'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vnni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fbsdp-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fsrc'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fsrm'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fsrs'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fzrm'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='gfni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='hle'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='ibrs-all'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='la57'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='mcdt-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pbrsb-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='prefetchiti'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='psdp-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='rtm'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='serialize'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='taa-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='tsx-ldtrk'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='vaes'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='xfd'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='GraniteRapids-v1'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='amx-bf16'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='amx-fp16'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='amx-int8'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='amx-tile'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx-vnni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512-bf16'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512-fp16'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512bitalg'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512ifma'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vbmi'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vnni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fbsdp-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fsrc'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fsrm'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fsrs'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fzrm'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='gfni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='hle'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='ibrs-all'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='la57'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='mcdt-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pbrsb-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='prefetchiti'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='psdp-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='rtm'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='serialize'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='taa-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='tsx-ldtrk'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='vaes'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='xfd'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='GraniteRapids-v2'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='amx-bf16'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='amx-fp16'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='amx-int8'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='amx-tile'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx-vnni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx10'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx10-128'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx10-256'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx10-512'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512-bf16'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512-fp16'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512bitalg'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512ifma'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vbmi'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vnni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='cldemote'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fbsdp-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fsrc'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fsrm'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fsrs'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fzrm'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='gfni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='hle'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='ibrs-all'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='la57'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='mcdt-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='movdir64b'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='movdiri'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pbrsb-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='prefetchiti'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='psdp-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='rtm'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='serialize'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='ss'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='taa-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='tsx-ldtrk'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='vaes'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='xfd'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='GraniteRapids-v3'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='amx-bf16'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='amx-fp16'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='amx-int8'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='amx-tile'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx-vnni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx10'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx10-128'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx10-256'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx10-512'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512-bf16'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512-fp16'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512bitalg'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512ifma'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vbmi'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vnni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='cldemote'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fbsdp-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fsrc'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fsrm'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fsrs'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fzrm'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='gfni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='hle'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='ibrs-all'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='la57'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='mcdt-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='movdir64b'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='movdiri'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pbrsb-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='prefetchiti'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='psdp-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='rtm'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='serialize'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='ss'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='taa-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='tsx-ldtrk'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='vaes'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='xfd'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='Haswell'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='hle'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='rtm'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='Haswell-IBRS'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='hle'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='rtm'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='Haswell-noTSX'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='Haswell-v1'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='hle'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='rtm'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='Haswell-v2'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='Haswell-v3'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='hle'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='rtm'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='Haswell-v4'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='Icelake-Server'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512bitalg'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vbmi'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vnni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='gfni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='hle'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='la57'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='rtm'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='vaes'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='Icelake-Server-noTSX'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512bitalg'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vbmi'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vnni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='gfni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='la57'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='vaes'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='Icelake-Server-v1'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512bitalg'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vbmi'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vnni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='gfni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='hle'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='la57'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='rtm'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='vaes'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='Icelake-Server-v2'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512bitalg'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vbmi'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vnni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='gfni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='la57'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='vaes'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='Icelake-Server-v3'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512bitalg'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vbmi'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vnni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='gfni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='ibrs-all'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='la57'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='taa-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='vaes'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='Icelake-Server-v4'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512bitalg'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512ifma'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vbmi'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vnni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fsrm'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='gfni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='ibrs-all'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='la57'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='taa-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='vaes'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='Icelake-Server-v5'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512bitalg'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512ifma'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vbmi'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vnni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fsrm'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='gfni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='ibrs-all'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='la57'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='taa-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='vaes'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='Icelake-Server-v6'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512bitalg'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512ifma'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vbmi'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vnni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fsrm'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='gfni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='ibrs-all'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='la57'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='taa-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='vaes'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='Icelake-Server-v7'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512bitalg'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512ifma'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vbmi'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vnni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fsrm'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='gfni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='hle'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='ibrs-all'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='la57'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='rtm'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='taa-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='vaes'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='IvyBridge'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='IvyBridge-IBRS'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='IvyBridge-v1'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='IvyBridge-v2'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='KnightsMill'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512-4fmaps'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512-4vnniw'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512er'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512pf'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='ss'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='KnightsMill-v1'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512-4fmaps'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512-4vnniw'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512er'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512pf'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='ss'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='Opteron_G4'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fma4'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='xop'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='Opteron_G4-v1'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fma4'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='xop'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='Opteron_G5'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fma4'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='tbm'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='xop'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='Opteron_G5-v1'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fma4'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='tbm'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='xop'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='SapphireRapids'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='amx-bf16'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='amx-int8'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='amx-tile'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx-vnni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512-bf16'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512-fp16'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512bitalg'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512ifma'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vbmi'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vnni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fsrc'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fsrm'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fsrs'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fzrm'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='gfni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='hle'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='ibrs-all'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='la57'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='rtm'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='serialize'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='taa-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='tsx-ldtrk'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='vaes'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='xfd'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='SapphireRapids-v1'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='amx-bf16'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='amx-int8'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='amx-tile'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx-vnni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512-bf16'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512-fp16'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512bitalg'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512ifma'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vbmi'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vnni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fsrc'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fsrm'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fsrs'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fzrm'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='gfni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='hle'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='ibrs-all'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='la57'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='rtm'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='serialize'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='taa-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='tsx-ldtrk'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='vaes'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='xfd'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='SapphireRapids-v2'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='amx-bf16'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='amx-int8'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='amx-tile'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx-vnni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512-bf16'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512-fp16'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512bitalg'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512ifma'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vbmi'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vnni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fbsdp-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fsrc'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fsrm'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fsrs'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fzrm'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='gfni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='hle'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='ibrs-all'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='la57'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='psdp-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='rtm'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='serialize'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='taa-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='tsx-ldtrk'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='vaes'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='xfd'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='SapphireRapids-v3'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='amx-bf16'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='amx-int8'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='amx-tile'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx-vnni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512-bf16'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512-fp16'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512bitalg'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512ifma'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vbmi'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vnni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='cldemote'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fbsdp-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fsrc'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fsrm'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fsrs'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fzrm'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='gfni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='hle'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='ibrs-all'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='la57'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='movdir64b'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='movdiri'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='psdp-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='rtm'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='serialize'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='ss'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='taa-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='tsx-ldtrk'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='vaes'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='xfd'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='SapphireRapids-v4'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='amx-bf16'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='amx-int8'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='amx-tile'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx-vnni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512-bf16'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512-fp16'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512bitalg'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512ifma'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vbmi'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vnni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='cldemote'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fbsdp-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fsrc'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fsrm'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fsrs'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fzrm'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='gfni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='hle'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='ibrs-all'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='la57'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='movdir64b'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='movdiri'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='psdp-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='rtm'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='serialize'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='ss'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='taa-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='tsx-ldtrk'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='vaes'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='xfd'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='SierraForest'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx-ifma'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx-ne-convert'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx-vnni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx-vnni-int8'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='cmpccxadd'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fbsdp-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fsrm'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fsrs'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='gfni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='ibrs-all'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='mcdt-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pbrsb-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='psdp-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='serialize'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='vaes'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='SierraForest-v1'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx-ifma'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx-ne-convert'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx-vnni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx-vnni-int8'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='cmpccxadd'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fbsdp-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fsrm'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fsrs'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='gfni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='ibrs-all'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='mcdt-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pbrsb-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='psdp-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='serialize'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='vaes'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='SierraForest-v2'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx-ifma'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx-ne-convert'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx-vnni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx-vnni-int8'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='bhi-ctrl'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='cldemote'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='cmpccxadd'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fbsdp-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fsrm'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fsrs'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='gfni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='ibrs-all'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='intel-psfd'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='ipred-ctrl'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='lam'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='mcdt-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='movdir64b'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='movdiri'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pbrsb-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='psdp-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='rrsba-ctrl'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='serialize'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='ss'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='vaes'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='SierraForest-v3'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx-ifma'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx-ne-convert'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx-vnni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx-vnni-int8'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='bhi-ctrl'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='cldemote'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='cmpccxadd'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fbsdp-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fsrm'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fsrs'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='gfni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='ibrs-all'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='intel-psfd'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='ipred-ctrl'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='lam'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='mcdt-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='movdir64b'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='movdiri'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pbrsb-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='psdp-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='rrsba-ctrl'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='serialize'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='ss'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='vaes'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='Skylake-Client'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='hle'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='rtm'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='Skylake-Client-IBRS'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='hle'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='rtm'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='Skylake-Client-v1'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='hle'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='rtm'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='Skylake-Client-v2'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='hle'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='rtm'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='Skylake-Client-v3'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='Skylake-Client-v4'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='Skylake-Server'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='hle'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='rtm'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='Skylake-Server-IBRS'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='hle'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='rtm'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='Skylake-Server-v1'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='hle'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='rtm'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='Skylake-Server-v2'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='hle'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='rtm'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='Skylake-Server-v3'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='Skylake-Server-v4'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='Skylake-Server-v5'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='Snowridge'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='cldemote'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='core-capability'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='gfni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='movdir64b'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='movdiri'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='mpx'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='split-lock-detect'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='Snowridge-v1'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='cldemote'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='core-capability'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='gfni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='movdir64b'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='movdiri'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='mpx'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='split-lock-detect'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='Snowridge-v2'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='cldemote'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='core-capability'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='gfni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='movdir64b'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='movdiri'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='split-lock-detect'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='Snowridge-v3'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='cldemote'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='core-capability'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='gfni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='movdir64b'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='movdiri'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='split-lock-detect'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='Snowridge-v4'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='cldemote'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='gfni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='movdir64b'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='movdiri'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='athlon'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='3dnow'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='3dnowext'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='athlon-v1'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='3dnow'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='3dnowext'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='core2duo'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='ss'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='core2duo-v1'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='ss'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='coreduo'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='ss'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='coreduo-v1'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='ss'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='n270'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='ss'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='n270-v1'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='ss'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='phenom'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='3dnow'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='3dnowext'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='phenom-v1'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='3dnow'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='3dnowext'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:     </mode>
Jan 20 14:18:09 compute-0 nova_compute[250018]:   </cpu>
Jan 20 14:18:09 compute-0 nova_compute[250018]:   <memoryBacking supported='yes'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:     <enum name='sourceType'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <value>file</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <value>anonymous</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <value>memfd</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:     </enum>
Jan 20 14:18:09 compute-0 nova_compute[250018]:   </memoryBacking>
Jan 20 14:18:09 compute-0 nova_compute[250018]:   <devices>
Jan 20 14:18:09 compute-0 nova_compute[250018]:     <disk supported='yes'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <enum name='diskDevice'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>disk</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>cdrom</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>floppy</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>lun</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </enum>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <enum name='bus'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>ide</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>fdc</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>scsi</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>virtio</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>usb</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>sata</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </enum>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <enum name='model'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>virtio</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>virtio-transitional</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>virtio-non-transitional</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </enum>
Jan 20 14:18:09 compute-0 nova_compute[250018]:     </disk>
Jan 20 14:18:09 compute-0 nova_compute[250018]:     <graphics supported='yes'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <enum name='type'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>vnc</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>egl-headless</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>dbus</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </enum>
Jan 20 14:18:09 compute-0 nova_compute[250018]:     </graphics>
Jan 20 14:18:09 compute-0 nova_compute[250018]:     <video supported='yes'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <enum name='modelType'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>vga</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>cirrus</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>virtio</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>none</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>bochs</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>ramfb</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </enum>
Jan 20 14:18:09 compute-0 nova_compute[250018]:     </video>
Jan 20 14:18:09 compute-0 nova_compute[250018]:     <hostdev supported='yes'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <enum name='mode'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>subsystem</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </enum>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <enum name='startupPolicy'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>default</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>mandatory</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>requisite</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>optional</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </enum>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <enum name='subsysType'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>usb</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>pci</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>scsi</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </enum>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <enum name='capsType'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <enum name='pciBackend'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:     </hostdev>
Jan 20 14:18:09 compute-0 nova_compute[250018]:     <rng supported='yes'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <enum name='model'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>virtio</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>virtio-transitional</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>virtio-non-transitional</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </enum>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <enum name='backendModel'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>random</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>egd</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>builtin</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </enum>
Jan 20 14:18:09 compute-0 nova_compute[250018]:     </rng>
Jan 20 14:18:09 compute-0 nova_compute[250018]:     <filesystem supported='yes'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <enum name='driverType'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>path</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>handle</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>virtiofs</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </enum>
Jan 20 14:18:09 compute-0 nova_compute[250018]:     </filesystem>
Jan 20 14:18:09 compute-0 nova_compute[250018]:     <tpm supported='yes'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <enum name='model'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>tpm-tis</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>tpm-crb</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </enum>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <enum name='backendModel'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>emulator</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>external</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </enum>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <enum name='backendVersion'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>2.0</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </enum>
Jan 20 14:18:09 compute-0 nova_compute[250018]:     </tpm>
Jan 20 14:18:09 compute-0 nova_compute[250018]:     <redirdev supported='yes'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <enum name='bus'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>usb</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </enum>
Jan 20 14:18:09 compute-0 nova_compute[250018]:     </redirdev>
Jan 20 14:18:09 compute-0 nova_compute[250018]:     <channel supported='yes'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <enum name='type'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>pty</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>unix</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </enum>
Jan 20 14:18:09 compute-0 nova_compute[250018]:     </channel>
Jan 20 14:18:09 compute-0 nova_compute[250018]:     <crypto supported='yes'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <enum name='model'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <enum name='type'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>qemu</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </enum>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <enum name='backendModel'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>builtin</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </enum>
Jan 20 14:18:09 compute-0 nova_compute[250018]:     </crypto>
Jan 20 14:18:09 compute-0 nova_compute[250018]:     <interface supported='yes'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <enum name='backendType'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>default</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>passt</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </enum>
Jan 20 14:18:09 compute-0 nova_compute[250018]:     </interface>
Jan 20 14:18:09 compute-0 nova_compute[250018]:     <panic supported='yes'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <enum name='model'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>isa</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>hyperv</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </enum>
Jan 20 14:18:09 compute-0 nova_compute[250018]:     </panic>
Jan 20 14:18:09 compute-0 nova_compute[250018]:     <console supported='yes'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <enum name='type'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>null</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>vc</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>pty</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>dev</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>file</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>pipe</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>stdio</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>udp</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>tcp</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>unix</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>qemu-vdagent</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>dbus</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </enum>
Jan 20 14:18:09 compute-0 nova_compute[250018]:     </console>
Jan 20 14:18:09 compute-0 nova_compute[250018]:   </devices>
Jan 20 14:18:09 compute-0 nova_compute[250018]:   <features>
Jan 20 14:18:09 compute-0 nova_compute[250018]:     <gic supported='no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:     <vmcoreinfo supported='yes'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:     <genid supported='yes'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:     <backingStoreInput supported='yes'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:     <backup supported='yes'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:     <async-teardown supported='yes'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:     <s390-pv supported='no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:     <ps2 supported='yes'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:     <tdx supported='no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:     <sev supported='no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:     <sgx supported='no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:     <hyperv supported='yes'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <enum name='features'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>relaxed</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>vapic</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>spinlocks</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>vpindex</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>runtime</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>synic</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>stimer</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>reset</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>vendor_id</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>frequencies</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>reenlightenment</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>tlbflush</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>ipi</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>avic</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>emsr_bitmap</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>xmm_input</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </enum>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <defaults>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <spinlocks>4095</spinlocks>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <stimer_direct>on</stimer_direct>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <tlbflush_direct>on</tlbflush_direct>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <tlbflush_extended>on</tlbflush_extended>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </defaults>
Jan 20 14:18:09 compute-0 nova_compute[250018]:     </hyperv>
Jan 20 14:18:09 compute-0 nova_compute[250018]:     <launchSecurity supported='no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:   </features>
Jan 20 14:18:09 compute-0 nova_compute[250018]: </domainCapabilities>
Jan 20 14:18:09 compute-0 nova_compute[250018]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 20 14:18:09 compute-0 nova_compute[250018]: 2026-01-20 14:18:09.024 250022 DEBUG nova.virt.libvirt.host [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Jan 20 14:18:09 compute-0 nova_compute[250018]: <domainCapabilities>
Jan 20 14:18:09 compute-0 nova_compute[250018]:   <path>/usr/libexec/qemu-kvm</path>
Jan 20 14:18:09 compute-0 nova_compute[250018]:   <domain>kvm</domain>
Jan 20 14:18:09 compute-0 nova_compute[250018]:   <machine>pc-q35-rhel9.8.0</machine>
Jan 20 14:18:09 compute-0 nova_compute[250018]:   <arch>x86_64</arch>
Jan 20 14:18:09 compute-0 nova_compute[250018]:   <vcpu max='4096'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:   <iothreads supported='yes'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:   <os supported='yes'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:     <enum name='firmware'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <value>efi</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:     </enum>
Jan 20 14:18:09 compute-0 nova_compute[250018]:     <loader supported='yes'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <enum name='type'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>rom</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>pflash</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </enum>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <enum name='readonly'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>yes</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>no</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </enum>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <enum name='secure'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>yes</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>no</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </enum>
Jan 20 14:18:09 compute-0 nova_compute[250018]:     </loader>
Jan 20 14:18:09 compute-0 nova_compute[250018]:   </os>
Jan 20 14:18:09 compute-0 nova_compute[250018]:   <cpu>
Jan 20 14:18:09 compute-0 nova_compute[250018]:     <mode name='host-passthrough' supported='yes'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <enum name='hostPassthroughMigratable'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>on</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>off</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </enum>
Jan 20 14:18:09 compute-0 nova_compute[250018]:     </mode>
Jan 20 14:18:09 compute-0 nova_compute[250018]:     <mode name='maximum' supported='yes'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <enum name='maximumMigratable'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>on</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>off</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </enum>
Jan 20 14:18:09 compute-0 nova_compute[250018]:     </mode>
Jan 20 14:18:09 compute-0 nova_compute[250018]:     <mode name='host-model' supported='yes'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <vendor>AMD</vendor>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <feature policy='require' name='x2apic'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <feature policy='require' name='tsc-deadline'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <feature policy='require' name='hypervisor'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <feature policy='require' name='tsc_adjust'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <feature policy='require' name='spec-ctrl'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <feature policy='require' name='stibp'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <feature policy='require' name='ssbd'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <feature policy='require' name='cmp_legacy'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <feature policy='require' name='overflow-recov'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <feature policy='require' name='succor'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <feature policy='require' name='ibrs'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <feature policy='require' name='amd-ssbd'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <feature policy='require' name='virt-ssbd'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <feature policy='require' name='lbrv'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <feature policy='require' name='tsc-scale'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <feature policy='require' name='vmcb-clean'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <feature policy='require' name='flushbyasid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <feature policy='require' name='pause-filter'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <feature policy='require' name='pfthreshold'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <feature policy='require' name='svme-addr-chk'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <feature policy='disable' name='xsaves'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:     </mode>
Jan 20 14:18:09 compute-0 nova_compute[250018]:     <mode name='custom' supported='yes'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='Broadwell'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='hle'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='rtm'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='Broadwell-IBRS'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='hle'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='rtm'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='Broadwell-noTSX'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='Broadwell-v1'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='hle'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='rtm'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='Broadwell-v2'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='Broadwell-v3'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='hle'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='rtm'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='Broadwell-v4'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='Cascadelake-Server'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vnni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='hle'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='rtm'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vnni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='ibrs-all'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='Cascadelake-Server-v1'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vnni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='hle'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='rtm'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='Cascadelake-Server-v2'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vnni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='hle'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='ibrs-all'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='rtm'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='Cascadelake-Server-v3'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vnni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='ibrs-all'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='Cascadelake-Server-v4'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vnni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='ibrs-all'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='Cascadelake-Server-v5'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vnni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='ibrs-all'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='ClearwaterForest'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx-ifma'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx-ne-convert'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx-vnni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx-vnni-int16'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx-vnni-int8'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='bhi-ctrl'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='bhi-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='cldemote'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='cmpccxadd'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='ddpd-u'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fbsdp-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fsrm'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fsrs'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='gfni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='ibrs-all'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='intel-psfd'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='ipred-ctrl'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='lam'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='mcdt-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='movdir64b'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='movdiri'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pbrsb-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='prefetchiti'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='psdp-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='rrsba-ctrl'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='serialize'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='sha512'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='sm3'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='sm4'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='ss'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='vaes'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='ClearwaterForest-v1'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx-ifma'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx-ne-convert'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx-vnni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx-vnni-int16'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx-vnni-int8'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='bhi-ctrl'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='bhi-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='cldemote'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='cmpccxadd'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='ddpd-u'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fbsdp-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fsrm'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fsrs'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='gfni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='ibrs-all'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='intel-psfd'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='ipred-ctrl'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='lam'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='mcdt-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='movdir64b'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='movdiri'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pbrsb-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='prefetchiti'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='psdp-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='rrsba-ctrl'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='serialize'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='sha512'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='sm3'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='sm4'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='ss'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='vaes'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='Cooperlake'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512-bf16'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vnni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='hle'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='ibrs-all'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='rtm'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='taa-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='Cooperlake-v1'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512-bf16'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vnni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='hle'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='ibrs-all'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='rtm'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='taa-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='Cooperlake-v2'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512-bf16'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vnni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='hle'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='ibrs-all'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='rtm'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='taa-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='Denverton'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='mpx'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='Denverton-v1'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='mpx'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='Denverton-v2'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='Denverton-v3'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='Dhyana-v2'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='EPYC-Genoa'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='amd-psfd'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='auto-ibrs'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512-bf16'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512bitalg'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512ifma'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vbmi'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vnni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fsrm'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='gfni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='la57'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='no-nested-data-bp'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='null-sel-clr-base'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='stibp-always-on'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='vaes'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='EPYC-Genoa-v1'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='amd-psfd'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='auto-ibrs'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512-bf16'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512bitalg'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512ifma'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vbmi'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vnni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fsrm'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='gfni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='la57'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='no-nested-data-bp'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='null-sel-clr-base'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='stibp-always-on'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='vaes'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='EPYC-Genoa-v2'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='amd-psfd'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='auto-ibrs'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512-bf16'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512bitalg'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512ifma'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vbmi'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vnni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fs-gs-base-ns'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fsrm'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='gfni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='la57'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='no-nested-data-bp'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='null-sel-clr-base'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='perfmon-v2'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='stibp-always-on'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='vaes'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='EPYC-Milan'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fsrm'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='EPYC-Milan-v1'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fsrm'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='EPYC-Milan-v2'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='amd-psfd'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fsrm'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='no-nested-data-bp'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='null-sel-clr-base'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='stibp-always-on'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='vaes'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='EPYC-Milan-v3'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='amd-psfd'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fsrm'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='no-nested-data-bp'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='null-sel-clr-base'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='stibp-always-on'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='vaes'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='EPYC-Rome'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='EPYC-Rome-v1'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='EPYC-Rome-v2'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='EPYC-Rome-v3'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='EPYC-Turin'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='amd-psfd'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='auto-ibrs'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx-vnni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512-bf16'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512-vp2intersect'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512bitalg'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512ifma'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vbmi'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vnni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fs-gs-base-ns'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fsrm'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='gfni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='ibpb-brtype'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='la57'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='movdir64b'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='movdiri'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='no-nested-data-bp'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='null-sel-clr-base'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='perfmon-v2'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='prefetchi'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='sbpb'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='srso-user-kernel-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='stibp-always-on'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='vaes'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='EPYC-Turin-v1'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='amd-psfd'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='auto-ibrs'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx-vnni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512-bf16'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512-vp2intersect'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512bitalg'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512ifma'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vbmi'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vnni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fs-gs-base-ns'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fsrm'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='gfni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='ibpb-brtype'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='la57'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='movdir64b'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='movdiri'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='no-nested-data-bp'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='null-sel-clr-base'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='perfmon-v2'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='prefetchi'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='sbpb'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='srso-user-kernel-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='stibp-always-on'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='vaes'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='EPYC-v3'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='EPYC-v4'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='EPYC-v5'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='GraniteRapids'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='amx-bf16'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='amx-fp16'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='amx-int8'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='amx-tile'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx-vnni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512-bf16'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512-fp16'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512bitalg'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512ifma'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vbmi'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vnni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fbsdp-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fsrc'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fsrm'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fsrs'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fzrm'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='gfni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='hle'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='ibrs-all'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='la57'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='mcdt-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pbrsb-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='prefetchiti'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='psdp-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='rtm'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='serialize'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='taa-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='tsx-ldtrk'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='vaes'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='xfd'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='GraniteRapids-v1'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='amx-bf16'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='amx-fp16'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='amx-int8'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='amx-tile'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx-vnni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512-bf16'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512-fp16'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512bitalg'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512ifma'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vbmi'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vnni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fbsdp-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fsrc'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fsrm'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fsrs'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fzrm'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='gfni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='hle'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='ibrs-all'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='la57'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='mcdt-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pbrsb-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='prefetchiti'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='psdp-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='rtm'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='serialize'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='taa-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='tsx-ldtrk'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='vaes'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='xfd'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='GraniteRapids-v2'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='amx-bf16'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='amx-fp16'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='amx-int8'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='amx-tile'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx-vnni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx10'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx10-128'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx10-256'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx10-512'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512-bf16'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512-fp16'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512bitalg'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512ifma'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vbmi'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vnni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='cldemote'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fbsdp-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fsrc'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fsrm'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fsrs'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fzrm'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='gfni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='hle'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='ibrs-all'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='la57'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='mcdt-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='movdir64b'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='movdiri'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pbrsb-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='prefetchiti'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='psdp-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='rtm'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='serialize'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='ss'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='taa-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='tsx-ldtrk'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='vaes'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='xfd'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='GraniteRapids-v3'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='amx-bf16'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='amx-fp16'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='amx-int8'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='amx-tile'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx-vnni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx10'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx10-128'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx10-256'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx10-512'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512-bf16'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512-fp16'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512bitalg'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512ifma'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vbmi'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vnni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='cldemote'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fbsdp-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fsrc'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fsrm'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fsrs'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fzrm'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='gfni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='hle'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='ibrs-all'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='la57'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='mcdt-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='movdir64b'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='movdiri'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pbrsb-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='prefetchiti'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='psdp-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='rtm'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='serialize'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='ss'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='taa-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='tsx-ldtrk'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='vaes'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='xfd'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='Haswell'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='hle'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='rtm'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='Haswell-IBRS'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='hle'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='rtm'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='Haswell-noTSX'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='Haswell-v1'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='hle'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='rtm'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='Haswell-v2'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='Haswell-v3'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='hle'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='rtm'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='Haswell-v4'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='Icelake-Server'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512bitalg'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vbmi'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vnni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='gfni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='hle'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='la57'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='rtm'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='vaes'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='Icelake-Server-noTSX'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512bitalg'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vbmi'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vnni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='gfni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='la57'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='vaes'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='Icelake-Server-v1'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512bitalg'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vbmi'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vnni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='gfni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='hle'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='la57'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='rtm'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='vaes'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='Icelake-Server-v2'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512bitalg'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vbmi'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vnni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='gfni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='la57'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='vaes'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='Icelake-Server-v3'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512bitalg'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vbmi'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vnni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='gfni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='ibrs-all'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='la57'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='taa-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='vaes'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='Icelake-Server-v4'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512bitalg'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512ifma'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vbmi'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vnni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fsrm'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='gfni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='ibrs-all'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='la57'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='taa-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='vaes'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='Icelake-Server-v5'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512bitalg'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512ifma'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vbmi'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vnni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fsrm'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='gfni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='ibrs-all'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='la57'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='taa-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='vaes'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='Icelake-Server-v6'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512bitalg'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512ifma'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vbmi'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vnni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fsrm'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='gfni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='ibrs-all'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='la57'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='taa-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='vaes'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='Icelake-Server-v7'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512bitalg'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512ifma'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vbmi'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vnni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fsrm'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='gfni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='hle'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='ibrs-all'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='la57'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='rtm'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='taa-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='vaes'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='IvyBridge'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='IvyBridge-IBRS'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='IvyBridge-v1'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='IvyBridge-v2'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='KnightsMill'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512-4fmaps'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512-4vnniw'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512er'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512pf'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='ss'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='KnightsMill-v1'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512-4fmaps'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512-4vnniw'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512er'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512pf'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='ss'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='Opteron_G4'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fma4'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='xop'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='Opteron_G4-v1'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fma4'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='xop'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='Opteron_G5'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fma4'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='tbm'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='xop'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='Opteron_G5-v1'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fma4'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='tbm'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='xop'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='SapphireRapids'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='amx-bf16'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='amx-int8'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='amx-tile'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx-vnni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512-bf16'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512-fp16'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512bitalg'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512ifma'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vbmi'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vnni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fsrc'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fsrm'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fsrs'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fzrm'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='gfni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='hle'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='ibrs-all'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='la57'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='rtm'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='serialize'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='taa-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='tsx-ldtrk'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='vaes'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='xfd'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='SapphireRapids-v1'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='amx-bf16'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='amx-int8'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='amx-tile'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx-vnni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512-bf16'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512-fp16'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512bitalg'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512ifma'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vbmi'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vnni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fsrc'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fsrm'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fsrs'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fzrm'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='gfni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='hle'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='ibrs-all'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='la57'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='rtm'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='serialize'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='taa-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='tsx-ldtrk'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='vaes'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='xfd'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='SapphireRapids-v2'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='amx-bf16'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='amx-int8'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='amx-tile'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx-vnni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512-bf16'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512-fp16'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512bitalg'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512ifma'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vbmi'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vnni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fbsdp-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fsrc'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fsrm'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fsrs'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fzrm'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='gfni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='hle'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='ibrs-all'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='la57'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='psdp-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='rtm'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='serialize'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='taa-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='tsx-ldtrk'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='vaes'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='xfd'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='SapphireRapids-v3'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='amx-bf16'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='amx-int8'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='amx-tile'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx-vnni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512-bf16'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512-fp16'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512bitalg'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512ifma'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vbmi'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vnni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='cldemote'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fbsdp-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fsrc'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fsrm'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fsrs'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fzrm'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='gfni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='hle'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='ibrs-all'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='la57'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='movdir64b'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='movdiri'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='psdp-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='rtm'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='serialize'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='ss'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='taa-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='tsx-ldtrk'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='vaes'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='xfd'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='SapphireRapids-v4'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='amx-bf16'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='amx-int8'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='amx-tile'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx-vnni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512-bf16'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512-fp16'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512-vpopcntdq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512bitalg'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512ifma'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vbmi'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vbmi2'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vnni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='cldemote'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fbsdp-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fsrc'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fsrm'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fsrs'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fzrm'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='gfni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='hle'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='ibrs-all'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='la57'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='movdir64b'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='movdiri'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='psdp-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='rtm'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='serialize'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='ss'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='taa-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='tsx-ldtrk'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='vaes'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='xfd'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='SierraForest'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx-ifma'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx-ne-convert'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx-vnni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx-vnni-int8'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='cmpccxadd'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fbsdp-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fsrm'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fsrs'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='gfni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='ibrs-all'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='mcdt-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pbrsb-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='psdp-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='serialize'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='vaes'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='SierraForest-v1'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx-ifma'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx-ne-convert'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx-vnni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx-vnni-int8'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='cmpccxadd'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fbsdp-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fsrm'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fsrs'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='gfni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='ibrs-all'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='mcdt-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pbrsb-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='psdp-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='serialize'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='vaes'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='SierraForest-v2'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx-ifma'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx-ne-convert'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx-vnni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx-vnni-int8'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='bhi-ctrl'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='cldemote'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='cmpccxadd'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fbsdp-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fsrm'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fsrs'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='gfni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='ibrs-all'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='intel-psfd'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='ipred-ctrl'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='lam'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='mcdt-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='movdir64b'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='movdiri'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pbrsb-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='psdp-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='rrsba-ctrl'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='serialize'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='ss'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='vaes'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='SierraForest-v3'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx-ifma'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx-ne-convert'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx-vnni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx-vnni-int8'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='bhi-ctrl'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='bus-lock-detect'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='cldemote'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='cmpccxadd'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fbsdp-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fsrm'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='fsrs'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='gfni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='ibrs-all'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='intel-psfd'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='ipred-ctrl'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='lam'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='mcdt-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='movdir64b'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='movdiri'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pbrsb-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='psdp-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='rrsba-ctrl'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='sbdr-ssdp-no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='serialize'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='ss'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='vaes'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='vpclmulqdq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='Skylake-Client'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='hle'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='rtm'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='Skylake-Client-IBRS'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='hle'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='rtm'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='Skylake-Client-v1'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='hle'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='rtm'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='Skylake-Client-v2'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='hle'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='rtm'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='Skylake-Client-v3'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='Skylake-Client-v4'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='Skylake-Server'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='hle'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='rtm'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='Skylake-Server-IBRS'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='hle'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='rtm'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='Skylake-Server-v1'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='hle'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='rtm'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='Skylake-Server-v2'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='hle'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='rtm'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='Skylake-Server-v3'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='Skylake-Server-v4'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='Skylake-Server-v5'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512bw'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512cd'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512dq'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512f'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='avx512vl'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='invpcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pcid'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='pku'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='Snowridge'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='cldemote'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='core-capability'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='gfni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='movdir64b'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='movdiri'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='mpx'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='split-lock-detect'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='Snowridge-v1'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='cldemote'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='core-capability'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='gfni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='movdir64b'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='movdiri'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='mpx'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='split-lock-detect'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='Snowridge-v2'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='cldemote'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='core-capability'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='gfni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='movdir64b'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='movdiri'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='split-lock-detect'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='Snowridge-v3'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='cldemote'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='core-capability'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='gfni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='movdir64b'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='movdiri'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='split-lock-detect'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='Snowridge-v4'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='cldemote'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='erms'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='gfni'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='movdir64b'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='movdiri'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='xsaves'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='athlon'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='3dnow'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='3dnowext'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='athlon-v1'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='3dnow'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='3dnowext'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='core2duo'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='ss'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='core2duo-v1'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='ss'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='coreduo'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='ss'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='coreduo-v1'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='ss'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='n270'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='ss'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='n270-v1'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='ss'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='phenom'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='3dnow'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='3dnowext'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <blockers model='phenom-v1'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='3dnow'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <feature name='3dnowext'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </blockers>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]:     </mode>
Jan 20 14:18:09 compute-0 nova_compute[250018]:   </cpu>
Jan 20 14:18:09 compute-0 nova_compute[250018]:   <memoryBacking supported='yes'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:     <enum name='sourceType'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <value>file</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <value>anonymous</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <value>memfd</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:     </enum>
Jan 20 14:18:09 compute-0 nova_compute[250018]:   </memoryBacking>
Jan 20 14:18:09 compute-0 nova_compute[250018]:   <devices>
Jan 20 14:18:09 compute-0 nova_compute[250018]:     <disk supported='yes'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <enum name='diskDevice'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>disk</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>cdrom</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>floppy</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>lun</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </enum>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <enum name='bus'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>fdc</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>scsi</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>virtio</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>usb</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>sata</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </enum>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <enum name='model'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>virtio</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>virtio-transitional</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>virtio-non-transitional</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </enum>
Jan 20 14:18:09 compute-0 nova_compute[250018]:     </disk>
Jan 20 14:18:09 compute-0 nova_compute[250018]:     <graphics supported='yes'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <enum name='type'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>vnc</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>egl-headless</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>dbus</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </enum>
Jan 20 14:18:09 compute-0 nova_compute[250018]:     </graphics>
Jan 20 14:18:09 compute-0 nova_compute[250018]:     <video supported='yes'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <enum name='modelType'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>vga</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>cirrus</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>virtio</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>none</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>bochs</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>ramfb</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </enum>
Jan 20 14:18:09 compute-0 nova_compute[250018]:     </video>
Jan 20 14:18:09 compute-0 nova_compute[250018]:     <hostdev supported='yes'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <enum name='mode'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>subsystem</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </enum>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <enum name='startupPolicy'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>default</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>mandatory</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>requisite</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>optional</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </enum>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <enum name='subsysType'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>usb</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>pci</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>scsi</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </enum>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <enum name='capsType'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <enum name='pciBackend'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:     </hostdev>
Jan 20 14:18:09 compute-0 nova_compute[250018]:     <rng supported='yes'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <enum name='model'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>virtio</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>virtio-transitional</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>virtio-non-transitional</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </enum>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <enum name='backendModel'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>random</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>egd</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>builtin</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </enum>
Jan 20 14:18:09 compute-0 nova_compute[250018]:     </rng>
Jan 20 14:18:09 compute-0 nova_compute[250018]:     <filesystem supported='yes'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <enum name='driverType'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>path</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>handle</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>virtiofs</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </enum>
Jan 20 14:18:09 compute-0 nova_compute[250018]:     </filesystem>
Jan 20 14:18:09 compute-0 nova_compute[250018]:     <tpm supported='yes'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <enum name='model'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>tpm-tis</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>tpm-crb</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </enum>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <enum name='backendModel'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>emulator</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>external</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </enum>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <enum name='backendVersion'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>2.0</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </enum>
Jan 20 14:18:09 compute-0 nova_compute[250018]:     </tpm>
Jan 20 14:18:09 compute-0 nova_compute[250018]:     <redirdev supported='yes'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <enum name='bus'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>usb</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </enum>
Jan 20 14:18:09 compute-0 nova_compute[250018]:     </redirdev>
Jan 20 14:18:09 compute-0 nova_compute[250018]:     <channel supported='yes'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <enum name='type'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>pty</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>unix</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </enum>
Jan 20 14:18:09 compute-0 nova_compute[250018]:     </channel>
Jan 20 14:18:09 compute-0 nova_compute[250018]:     <crypto supported='yes'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <enum name='model'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <enum name='type'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>qemu</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </enum>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <enum name='backendModel'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>builtin</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </enum>
Jan 20 14:18:09 compute-0 nova_compute[250018]:     </crypto>
Jan 20 14:18:09 compute-0 nova_compute[250018]:     <interface supported='yes'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <enum name='backendType'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>default</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>passt</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </enum>
Jan 20 14:18:09 compute-0 nova_compute[250018]:     </interface>
Jan 20 14:18:09 compute-0 nova_compute[250018]:     <panic supported='yes'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <enum name='model'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>isa</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>hyperv</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </enum>
Jan 20 14:18:09 compute-0 nova_compute[250018]:     </panic>
Jan 20 14:18:09 compute-0 nova_compute[250018]:     <console supported='yes'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <enum name='type'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>null</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>vc</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>pty</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>dev</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>file</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>pipe</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>stdio</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>udp</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>tcp</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>unix</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>qemu-vdagent</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>dbus</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </enum>
Jan 20 14:18:09 compute-0 nova_compute[250018]:     </console>
Jan 20 14:18:09 compute-0 nova_compute[250018]:   </devices>
Jan 20 14:18:09 compute-0 nova_compute[250018]:   <features>
Jan 20 14:18:09 compute-0 nova_compute[250018]:     <gic supported='no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:     <vmcoreinfo supported='yes'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:     <genid supported='yes'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:     <backingStoreInput supported='yes'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:     <backup supported='yes'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:     <async-teardown supported='yes'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:     <s390-pv supported='no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:     <ps2 supported='yes'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:     <tdx supported='no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:     <sev supported='no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:     <sgx supported='no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:     <hyperv supported='yes'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <enum name='features'>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>relaxed</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>vapic</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>spinlocks</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>vpindex</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>runtime</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>synic</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>stimer</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>reset</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>vendor_id</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>frequencies</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>reenlightenment</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>tlbflush</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>ipi</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>avic</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>emsr_bitmap</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <value>xmm_input</value>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </enum>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       <defaults>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <spinlocks>4095</spinlocks>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <stimer_direct>on</stimer_direct>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <tlbflush_direct>on</tlbflush_direct>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <tlbflush_extended>on</tlbflush_extended>
Jan 20 14:18:09 compute-0 nova_compute[250018]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 20 14:18:09 compute-0 nova_compute[250018]:       </defaults>
Jan 20 14:18:09 compute-0 nova_compute[250018]:     </hyperv>
Jan 20 14:18:09 compute-0 nova_compute[250018]:     <launchSecurity supported='no'/>
Jan 20 14:18:09 compute-0 nova_compute[250018]:   </features>
Jan 20 14:18:09 compute-0 nova_compute[250018]: </domainCapabilities>
Jan 20 14:18:09 compute-0 nova_compute[250018]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 20 14:18:09 compute-0 nova_compute[250018]: 2026-01-20 14:18:09.106 250022 DEBUG nova.virt.libvirt.host [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Jan 20 14:18:09 compute-0 nova_compute[250018]: 2026-01-20 14:18:09.106 250022 DEBUG nova.virt.libvirt.host [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Jan 20 14:18:09 compute-0 nova_compute[250018]: 2026-01-20 14:18:09.107 250022 DEBUG nova.virt.libvirt.host [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Jan 20 14:18:09 compute-0 nova_compute[250018]: 2026-01-20 14:18:09.113 250022 INFO nova.virt.libvirt.host [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Secure Boot support detected
Jan 20 14:18:09 compute-0 nova_compute[250018]: 2026-01-20 14:18:09.115 250022 INFO nova.virt.libvirt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Jan 20 14:18:09 compute-0 nova_compute[250018]: 2026-01-20 14:18:09.115 250022 INFO nova.virt.libvirt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Jan 20 14:18:09 compute-0 nova_compute[250018]: 2026-01-20 14:18:09.123 250022 DEBUG nova.virt.libvirt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] cpu compare xml: <cpu match="exact">
Jan 20 14:18:09 compute-0 nova_compute[250018]:   <model>Nehalem</model>
Jan 20 14:18:09 compute-0 nova_compute[250018]: </cpu>
Jan 20 14:18:09 compute-0 nova_compute[250018]:  _compare_cpu /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10019
Jan 20 14:18:09 compute-0 nova_compute[250018]: 2026-01-20 14:18:09.125 250022 DEBUG nova.virt.libvirt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Jan 20 14:18:09 compute-0 nova_compute[250018]: 2026-01-20 14:18:09.151 250022 INFO nova.virt.node [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Determined node identity 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed from /var/lib/nova/compute_id
Jan 20 14:18:09 compute-0 nova_compute[250018]: 2026-01-20 14:18:09.184 250022 WARNING nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Compute nodes ['068db7fd-4bd6-45a9-8bd6-a22cfe7596ed'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.
Jan 20 14:18:09 compute-0 nova_compute[250018]: 2026-01-20 14:18:09.224 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Jan 20 14:18:09 compute-0 nova_compute[250018]: 2026-01-20 14:18:09.263 250022 WARNING nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Jan 20 14:18:09 compute-0 nova_compute[250018]: 2026-01-20 14:18:09.263 250022 DEBUG oslo_concurrency.lockutils [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:18:09 compute-0 nova_compute[250018]: 2026-01-20 14:18:09.263 250022 DEBUG oslo_concurrency.lockutils [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:18:09 compute-0 nova_compute[250018]: 2026-01-20 14:18:09.263 250022 DEBUG oslo_concurrency.lockutils [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:18:09 compute-0 nova_compute[250018]: 2026-01-20 14:18:09.263 250022 DEBUG nova.compute.resource_tracker [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 14:18:09 compute-0 nova_compute[250018]: 2026-01-20 14:18:09.264 250022 DEBUG oslo_concurrency.processutils [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:18:09 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:18:09 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1613810926' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:18:09 compute-0 nova_compute[250018]: 2026-01-20 14:18:09.649 250022 DEBUG oslo_concurrency.processutils [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.385s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:18:09 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:18:09 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:18:09 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:18:09.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:18:09 compute-0 nova_compute[250018]: 2026-01-20 14:18:09.796 250022 WARNING nova.virt.libvirt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:18:09 compute-0 nova_compute[250018]: 2026-01-20 14:18:09.797 250022 DEBUG nova.compute.resource_tracker [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5171MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 14:18:09 compute-0 nova_compute[250018]: 2026-01-20 14:18:09.797 250022 DEBUG oslo_concurrency.lockutils [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:18:09 compute-0 nova_compute[250018]: 2026-01-20 14:18:09.797 250022 DEBUG oslo_concurrency.lockutils [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:18:09 compute-0 nova_compute[250018]: 2026-01-20 14:18:09.832 250022 WARNING nova.compute.resource_tracker [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] No compute node record for compute-0.ctlplane.example.com:068db7fd-4bd6-45a9-8bd6-a22cfe7596ed: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed could not be found.
Jan 20 14:18:09 compute-0 nova_compute[250018]: 2026-01-20 14:18:09.869 250022 INFO nova.compute.resource_tracker [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Compute node record created for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com with uuid: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed
Jan 20 14:18:09 compute-0 nova_compute[250018]: 2026-01-20 14:18:09.927 250022 DEBUG nova.compute.resource_tracker [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 14:18:09 compute-0 nova_compute[250018]: 2026-01-20 14:18:09.927 250022 DEBUG nova.compute.resource_tracker [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 14:18:09 compute-0 ceph-mon[74360]: pgmap v790: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:18:09 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/1912408237' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:18:09 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/1613810926' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:18:10 compute-0 nova_compute[250018]: 2026-01-20 14:18:10.058 250022 INFO nova.scheduler.client.report [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [req-3fe32660-79aa-4140-9eb9-29f85e2f49f2] Created resource provider record via placement API for resource provider with UUID 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed and name compute-0.ctlplane.example.com.
Jan 20 14:18:10 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:18:10 compute-0 nova_compute[250018]: 2026-01-20 14:18:10.111 250022 DEBUG oslo_concurrency.processutils [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:18:10 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:18:10 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3236723296' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:18:10 compute-0 nova_compute[250018]: 2026-01-20 14:18:10.598 250022 DEBUG oslo_concurrency.processutils [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:18:10 compute-0 nova_compute[250018]: 2026-01-20 14:18:10.603 250022 DEBUG nova.virt.libvirt.host [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Jan 20 14:18:10 compute-0 nova_compute[250018]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803
Jan 20 14:18:10 compute-0 nova_compute[250018]: 2026-01-20 14:18:10.604 250022 INFO nova.virt.libvirt.host [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] kernel doesn't support AMD SEV
Jan 20 14:18:10 compute-0 nova_compute[250018]: 2026-01-20 14:18:10.605 250022 DEBUG nova.compute.provider_tree [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Updating inventory in ProviderTree for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 20, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 20 14:18:10 compute-0 nova_compute[250018]: 2026-01-20 14:18:10.605 250022 DEBUG nova.virt.libvirt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 14:18:10 compute-0 nova_compute[250018]: 2026-01-20 14:18:10.607 250022 DEBUG nova.virt.libvirt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Libvirt baseline CPU <cpu>
Jan 20 14:18:10 compute-0 nova_compute[250018]:   <arch>x86_64</arch>
Jan 20 14:18:10 compute-0 nova_compute[250018]:   <model>Nehalem</model>
Jan 20 14:18:10 compute-0 nova_compute[250018]:   <vendor>AMD</vendor>
Jan 20 14:18:10 compute-0 nova_compute[250018]:   <topology sockets="8" cores="1" threads="1"/>
Jan 20 14:18:10 compute-0 nova_compute[250018]: </cpu>
Jan 20 14:18:10 compute-0 nova_compute[250018]:  _get_guest_baseline_cpu_features /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12537
Jan 20 14:18:10 compute-0 nova_compute[250018]: 2026-01-20 14:18:10.672 250022 DEBUG nova.scheduler.client.report [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Updated inventory for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 20, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Jan 20 14:18:10 compute-0 nova_compute[250018]: 2026-01-20 14:18:10.673 250022 DEBUG nova.compute.provider_tree [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Updating resource provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Jan 20 14:18:10 compute-0 nova_compute[250018]: 2026-01-20 14:18:10.673 250022 DEBUG nova.compute.provider_tree [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Updating inventory in ProviderTree for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 20 14:18:10 compute-0 nova_compute[250018]: 2026-01-20 14:18:10.769 250022 DEBUG nova.compute.provider_tree [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Updating resource provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Jan 20 14:18:10 compute-0 nova_compute[250018]: 2026-01-20 14:18:10.797 250022 DEBUG nova.compute.resource_tracker [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 14:18:10 compute-0 nova_compute[250018]: 2026-01-20 14:18:10.798 250022 DEBUG oslo_concurrency.lockutils [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:18:10 compute-0 nova_compute[250018]: 2026-01-20 14:18:10.798 250022 DEBUG nova.service [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182
Jan 20 14:18:10 compute-0 nova_compute[250018]: 2026-01-20 14:18:10.887 250022 DEBUG nova.service [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199
Jan 20 14:18:10 compute-0 nova_compute[250018]: 2026-01-20 14:18:10.888 250022 DEBUG nova.servicegroup.drivers.db [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] DB_Driver: join new ServiceGroup member compute-0.ctlplane.example.com to the compute group, service = <Service: host=compute-0.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44
Jan 20 14:18:10 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v791: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:18:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 14:18:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:18:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 20 14:18:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:18:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:18:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:18:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:18:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:18:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:18:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:18:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:18:11 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:18:11 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:18:11 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:18:11.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:18:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:18:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 20 14:18:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:18:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:18:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:18:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 20 14:18:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:18:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 20 14:18:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:18:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:18:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:18:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 20 14:18:11 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/2481846038' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:18:11 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3236723296' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:18:11 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/1303255634' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:18:11 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:18:11 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:18:11 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:18:11.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:18:12 compute-0 sshd-session[250361]: Connection closed by authenticating user root 157.245.78.139 port 60860 [preauth]
Jan 20 14:18:12 compute-0 ceph-mon[74360]: pgmap v791: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:18:12 compute-0 sudo[250364]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:18:12 compute-0 sudo[250364]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:18:12 compute-0 sudo[250364]: pam_unix(sudo:session): session closed for user root
Jan 20 14:18:12 compute-0 sudo[250389]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:18:12 compute-0 sudo[250389]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:18:12 compute-0 sudo[250389]: pam_unix(sudo:session): session closed for user root
Jan 20 14:18:12 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v792: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:18:13 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:18:13 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:18:13 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:18:13.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:18:13 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:18:13 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:18:13 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:18:13.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:18:14 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v793: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:18:15 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:18:15 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:18:15 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:18:15.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:18:15 compute-0 podman[250416]: 2026-01-20 14:18:15.464460736 +0000 UTC m=+0.054544624 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:18:15 compute-0 podman[250415]: 2026-01-20 14:18:15.518212988 +0000 UTC m=+0.111272016 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, 
config_id=ovn_controller, container_name=ovn_controller)
Jan 20 14:18:15 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:18:15 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:18:15 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:18:15.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:18:16 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:18:16 compute-0 ceph-mon[74360]: pgmap v792: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:18:16 compute-0 ceph-mon[74360]: pgmap v793: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:18:16 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v794: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:18:17 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:18:17 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:18:17 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:18:17.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:18:17 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:18:17 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:18:17 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:18:17.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:18:18 compute-0 ceph-mon[74360]: pgmap v794: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:18:18 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v795: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:18:19 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:18:19 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:18:19 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:18:19.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:18:19 compute-0 ceph-mon[74360]: pgmap v795: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:18:19 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:18:19 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:18:19 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:18:19.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:18:20 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v796: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:18:21 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:18:21 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:18:21 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:18:21.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:18:21 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:18:21 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:18:21 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:18:21 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:18:21.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:18:22 compute-0 ceph-mon[74360]: pgmap v796: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:18:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:18:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:18:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:18:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:18:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:18:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:18:22 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v797: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:18:23 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:18:23 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:18:23 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:18:23.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:18:23 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:18:23 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:18:23 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:18:23.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:18:24 compute-0 ceph-mon[74360]: pgmap v797: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:18:24 compute-0 sudo[250463]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:18:24 compute-0 sudo[250463]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:18:24 compute-0 sudo[250463]: pam_unix(sudo:session): session closed for user root
Jan 20 14:18:24 compute-0 sudo[250488]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:18:24 compute-0 sudo[250488]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:18:24 compute-0 sudo[250488]: pam_unix(sudo:session): session closed for user root
Jan 20 14:18:24 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v798: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:18:25 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:18:25 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:18:25 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:18:25.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:18:25 compute-0 sudo[250513]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:18:25 compute-0 sudo[250513]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:18:25 compute-0 sudo[250513]: pam_unix(sudo:session): session closed for user root
Jan 20 14:18:25 compute-0 sudo[250538]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 14:18:25 compute-0 sudo[250538]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:18:25 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 20 14:18:25 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:18:25 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 20 14:18:25 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:18:25 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 20 14:18:25 compute-0 sudo[250538]: pam_unix(sudo:session): session closed for user root
Jan 20 14:18:25 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:18:25 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 20 14:18:25 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Jan 20 14:18:25 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 20 14:18:25 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:18:25 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:18:25 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:18:25 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:18:25.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:18:26 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:18:26 compute-0 ceph-mon[74360]: pgmap v798: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:18:26 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:18:26 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:18:26 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:18:26 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 20 14:18:26 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:18:26 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Jan 20 14:18:26 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 20 14:18:26 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 14:18:26 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:18:26 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 20 14:18:26 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:18:26 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 20 14:18:26 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:18:26 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev f588344c-2f45-442b-9b73-3ba35d750c8d does not exist
Jan 20 14:18:26 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 907ed693-2fec-4ea3-8a04-f23e2b5e0724 does not exist
Jan 20 14:18:26 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 7c57c514-0852-46f1-b610-abfc11f43b85 does not exist
Jan 20 14:18:26 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 20 14:18:26 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:18:26 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 20 14:18:26 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:18:26 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 14:18:26 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:18:26 compute-0 sudo[250595]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:18:26 compute-0 sudo[250595]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:18:26 compute-0 sudo[250595]: pam_unix(sudo:session): session closed for user root
Jan 20 14:18:26 compute-0 sudo[250621]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:18:26 compute-0 sudo[250621]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:18:26 compute-0 sudo[250621]: pam_unix(sudo:session): session closed for user root
Jan 20 14:18:26 compute-0 sudo[250646]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:18:26 compute-0 sudo[250646]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:18:26 compute-0 sudo[250646]: pam_unix(sudo:session): session closed for user root
Jan 20 14:18:26 compute-0 sudo[250671]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 14:18:26 compute-0 sudo[250671]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:18:26 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v799: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:18:27 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:18:27 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:18:27 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:18:27.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:18:27 compute-0 podman[250736]: 2026-01-20 14:18:27.225458013 +0000 UTC m=+0.051969845 container create 46dff44da0272ded2bd0cdd2d592364d3cae59e5bfa6a89d19d772cdd60587ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_swirles, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 20 14:18:27 compute-0 systemd[1]: Started libpod-conmon-46dff44da0272ded2bd0cdd2d592364d3cae59e5bfa6a89d19d772cdd60587ff.scope.
Jan 20 14:18:27 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:18:27 compute-0 podman[250736]: 2026-01-20 14:18:27.198029032 +0000 UTC m=+0.024540874 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:18:27 compute-0 podman[250736]: 2026-01-20 14:18:27.304893508 +0000 UTC m=+0.131405360 container init 46dff44da0272ded2bd0cdd2d592364d3cae59e5bfa6a89d19d772cdd60587ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_swirles, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Jan 20 14:18:27 compute-0 podman[250736]: 2026-01-20 14:18:27.313127821 +0000 UTC m=+0.139639643 container start 46dff44da0272ded2bd0cdd2d592364d3cae59e5bfa6a89d19d772cdd60587ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_swirles, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 20 14:18:27 compute-0 podman[250736]: 2026-01-20 14:18:27.317987123 +0000 UTC m=+0.144498975 container attach 46dff44da0272ded2bd0cdd2d592364d3cae59e5bfa6a89d19d772cdd60587ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_swirles, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 20 14:18:27 compute-0 cool_swirles[250752]: 167 167
Jan 20 14:18:27 compute-0 systemd[1]: libpod-46dff44da0272ded2bd0cdd2d592364d3cae59e5bfa6a89d19d772cdd60587ff.scope: Deactivated successfully.
Jan 20 14:18:27 compute-0 podman[250736]: 2026-01-20 14:18:27.31935716 +0000 UTC m=+0.145868982 container died 46dff44da0272ded2bd0cdd2d592364d3cae59e5bfa6a89d19d772cdd60587ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_swirles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:18:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-659e326039f1312fe5a10ddb9a1feae6928b48e6b61cf49b13ba17999cf65eef-merged.mount: Deactivated successfully.
Jan 20 14:18:27 compute-0 podman[250736]: 2026-01-20 14:18:27.375555528 +0000 UTC m=+0.202067370 container remove 46dff44da0272ded2bd0cdd2d592364d3cae59e5bfa6a89d19d772cdd60587ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_swirles, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 20 14:18:27 compute-0 systemd[1]: libpod-conmon-46dff44da0272ded2bd0cdd2d592364d3cae59e5bfa6a89d19d772cdd60587ff.scope: Deactivated successfully.
Jan 20 14:18:27 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 20 14:18:27 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:18:27 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:18:27 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:18:27 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:18:27 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:18:27 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:18:27 compute-0 podman[250778]: 2026-01-20 14:18:27.592737474 +0000 UTC m=+0.055711776 container create 249e96465b5372521f67d311fec06cd2f9447c8109920a1000f0ca36da3f1699 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_ptolemy, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:18:27 compute-0 systemd[1]: Started libpod-conmon-249e96465b5372521f67d311fec06cd2f9447c8109920a1000f0ca36da3f1699.scope.
Jan 20 14:18:27 compute-0 podman[250778]: 2026-01-20 14:18:27.577639406 +0000 UTC m=+0.040613728 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:18:27 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:18:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c7dfba83ff619d6f65428d871430b6c37d2ed60c6f5c776af0c39764d7c6d87/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:18:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c7dfba83ff619d6f65428d871430b6c37d2ed60c6f5c776af0c39764d7c6d87/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:18:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c7dfba83ff619d6f65428d871430b6c37d2ed60c6f5c776af0c39764d7c6d87/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:18:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c7dfba83ff619d6f65428d871430b6c37d2ed60c6f5c776af0c39764d7c6d87/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:18:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c7dfba83ff619d6f65428d871430b6c37d2ed60c6f5c776af0c39764d7c6d87/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 14:18:27 compute-0 podman[250778]: 2026-01-20 14:18:27.696917647 +0000 UTC m=+0.159891999 container init 249e96465b5372521f67d311fec06cd2f9447c8109920a1000f0ca36da3f1699 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_ptolemy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 20 14:18:27 compute-0 podman[250778]: 2026-01-20 14:18:27.709129937 +0000 UTC m=+0.172104249 container start 249e96465b5372521f67d311fec06cd2f9447c8109920a1000f0ca36da3f1699 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_ptolemy, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 20 14:18:27 compute-0 podman[250778]: 2026-01-20 14:18:27.712966921 +0000 UTC m=+0.175941303 container attach 249e96465b5372521f67d311fec06cd2f9447c8109920a1000f0ca36da3f1699 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_ptolemy, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 20 14:18:27 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:18:27 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:18:27 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:18:27.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:18:28 compute-0 ceph-mon[74360]: pgmap v799: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:18:28 compute-0 magical_ptolemy[250794]: --> passed data devices: 0 physical, 1 LVM
Jan 20 14:18:28 compute-0 magical_ptolemy[250794]: --> relative data size: 1.0
Jan 20 14:18:28 compute-0 magical_ptolemy[250794]: --> All data devices are unavailable
Jan 20 14:18:28 compute-0 systemd[1]: libpod-249e96465b5372521f67d311fec06cd2f9447c8109920a1000f0ca36da3f1699.scope: Deactivated successfully.
Jan 20 14:18:28 compute-0 podman[250778]: 2026-01-20 14:18:28.514215813 +0000 UTC m=+0.977190155 container died 249e96465b5372521f67d311fec06cd2f9447c8109920a1000f0ca36da3f1699 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_ptolemy, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 20 14:18:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-6c7dfba83ff619d6f65428d871430b6c37d2ed60c6f5c776af0c39764d7c6d87-merged.mount: Deactivated successfully.
Jan 20 14:18:28 compute-0 podman[250778]: 2026-01-20 14:18:28.582298742 +0000 UTC m=+1.045273044 container remove 249e96465b5372521f67d311fec06cd2f9447c8109920a1000f0ca36da3f1699 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_ptolemy, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 20 14:18:28 compute-0 systemd[1]: libpod-conmon-249e96465b5372521f67d311fec06cd2f9447c8109920a1000f0ca36da3f1699.scope: Deactivated successfully.
Jan 20 14:18:28 compute-0 sudo[250671]: pam_unix(sudo:session): session closed for user root
Jan 20 14:18:28 compute-0 sudo[250823]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:18:28 compute-0 sudo[250823]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:18:28 compute-0 sudo[250823]: pam_unix(sudo:session): session closed for user root
Jan 20 14:18:28 compute-0 sudo[250849]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:18:28 compute-0 sudo[250849]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:18:28 compute-0 sudo[250849]: pam_unix(sudo:session): session closed for user root
Jan 20 14:18:28 compute-0 sudo[250874]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:18:28 compute-0 sudo[250874]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:18:28 compute-0 sudo[250874]: pam_unix(sudo:session): session closed for user root
Jan 20 14:18:28 compute-0 sudo[250899]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- lvm list --format json
Jan 20 14:18:28 compute-0 sudo[250899]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:18:28 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v800: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:18:29 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:18:29 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:18:29 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:18:29.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:18:29 compute-0 podman[250964]: 2026-01-20 14:18:29.303258954 +0000 UTC m=+0.058168361 container create 47749e1c503ad1861d33be6c73e278b5dca0ee65850687e1d1b969b871f8275d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_chatterjee, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 20 14:18:29 compute-0 rsyslogd[1003]: imjournal: 6957 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Jan 20 14:18:29 compute-0 systemd[1]: Started libpod-conmon-47749e1c503ad1861d33be6c73e278b5dca0ee65850687e1d1b969b871f8275d.scope.
Jan 20 14:18:29 compute-0 podman[250964]: 2026-01-20 14:18:29.27567328 +0000 UTC m=+0.030582737 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:18:29 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:18:29 compute-0 podman[250964]: 2026-01-20 14:18:29.414717715 +0000 UTC m=+0.169627182 container init 47749e1c503ad1861d33be6c73e278b5dca0ee65850687e1d1b969b871f8275d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_chatterjee, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:18:29 compute-0 podman[250964]: 2026-01-20 14:18:29.421779076 +0000 UTC m=+0.176688453 container start 47749e1c503ad1861d33be6c73e278b5dca0ee65850687e1d1b969b871f8275d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_chatterjee, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 14:18:29 compute-0 podman[250964]: 2026-01-20 14:18:29.426419771 +0000 UTC m=+0.181329178 container attach 47749e1c503ad1861d33be6c73e278b5dca0ee65850687e1d1b969b871f8275d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_chatterjee, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:18:29 compute-0 vibrant_chatterjee[250981]: 167 167
Jan 20 14:18:29 compute-0 systemd[1]: libpod-47749e1c503ad1861d33be6c73e278b5dca0ee65850687e1d1b969b871f8275d.scope: Deactivated successfully.
Jan 20 14:18:29 compute-0 podman[250964]: 2026-01-20 14:18:29.429084483 +0000 UTC m=+0.183993880 container died 47749e1c503ad1861d33be6c73e278b5dca0ee65850687e1d1b969b871f8275d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_chatterjee, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3)
Jan 20 14:18:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-3ff57c3c376dc90180893e817cd1e49daadd67fe74db0c410a3e25186ed8ec70-merged.mount: Deactivated successfully.
Jan 20 14:18:29 compute-0 podman[250964]: 2026-01-20 14:18:29.484520501 +0000 UTC m=+0.239429878 container remove 47749e1c503ad1861d33be6c73e278b5dca0ee65850687e1d1b969b871f8275d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_chatterjee, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:18:29 compute-0 systemd[1]: libpod-conmon-47749e1c503ad1861d33be6c73e278b5dca0ee65850687e1d1b969b871f8275d.scope: Deactivated successfully.
Jan 20 14:18:29 compute-0 podman[251008]: 2026-01-20 14:18:29.680842293 +0000 UTC m=+0.043978468 container create 4d844f2e152b0dd314ff6d72ae787016d413f824e7291fe9c92c765f92407e09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_shannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 14:18:29 compute-0 systemd[1]: Started libpod-conmon-4d844f2e152b0dd314ff6d72ae787016d413f824e7291fe9c92c765f92407e09.scope.
Jan 20 14:18:29 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:18:29 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:18:29 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:18:29.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:18:29 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:18:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f79759e32e86adb4863f02916c7426b90d9732757a0e20990c5a0b0c6602fb47/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:18:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f79759e32e86adb4863f02916c7426b90d9732757a0e20990c5a0b0c6602fb47/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:18:29 compute-0 podman[251008]: 2026-01-20 14:18:29.661010228 +0000 UTC m=+0.024146423 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:18:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f79759e32e86adb4863f02916c7426b90d9732757a0e20990c5a0b0c6602fb47/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:18:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f79759e32e86adb4863f02916c7426b90d9732757a0e20990c5a0b0c6602fb47/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:18:29 compute-0 podman[251008]: 2026-01-20 14:18:29.768513002 +0000 UTC m=+0.131649177 container init 4d844f2e152b0dd314ff6d72ae787016d413f824e7291fe9c92c765f92407e09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_shannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 14:18:29 compute-0 podman[251008]: 2026-01-20 14:18:29.774160664 +0000 UTC m=+0.137296819 container start 4d844f2e152b0dd314ff6d72ae787016d413f824e7291fe9c92c765f92407e09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_shannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:18:29 compute-0 podman[251008]: 2026-01-20 14:18:29.776717893 +0000 UTC m=+0.139854048 container attach 4d844f2e152b0dd314ff6d72ae787016d413f824e7291fe9c92c765f92407e09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_shannon, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 20 14:18:30 compute-0 ceph-mon[74360]: pgmap v800: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:18:30 compute-0 jovial_shannon[251025]: {
Jan 20 14:18:30 compute-0 jovial_shannon[251025]:     "0": [
Jan 20 14:18:30 compute-0 jovial_shannon[251025]:         {
Jan 20 14:18:30 compute-0 jovial_shannon[251025]:             "devices": [
Jan 20 14:18:30 compute-0 jovial_shannon[251025]:                 "/dev/loop3"
Jan 20 14:18:30 compute-0 jovial_shannon[251025]:             ],
Jan 20 14:18:30 compute-0 jovial_shannon[251025]:             "lv_name": "ceph_lv0",
Jan 20 14:18:30 compute-0 jovial_shannon[251025]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:18:30 compute-0 jovial_shannon[251025]:             "lv_size": "7511998464",
Jan 20 14:18:30 compute-0 jovial_shannon[251025]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e399cf45-e6b6-5393-99f1-75c601d3f188,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afc3740a-ccee-46f8-b343-22648cc89376,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 20 14:18:30 compute-0 jovial_shannon[251025]:             "lv_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 14:18:30 compute-0 jovial_shannon[251025]:             "name": "ceph_lv0",
Jan 20 14:18:30 compute-0 jovial_shannon[251025]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:18:30 compute-0 jovial_shannon[251025]:             "tags": {
Jan 20 14:18:30 compute-0 jovial_shannon[251025]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:18:30 compute-0 jovial_shannon[251025]:                 "ceph.block_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 14:18:30 compute-0 jovial_shannon[251025]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 14:18:30 compute-0 jovial_shannon[251025]:                 "ceph.cluster_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 14:18:30 compute-0 jovial_shannon[251025]:                 "ceph.cluster_name": "ceph",
Jan 20 14:18:30 compute-0 jovial_shannon[251025]:                 "ceph.crush_device_class": "",
Jan 20 14:18:30 compute-0 jovial_shannon[251025]:                 "ceph.encrypted": "0",
Jan 20 14:18:30 compute-0 jovial_shannon[251025]:                 "ceph.osd_fsid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 14:18:30 compute-0 jovial_shannon[251025]:                 "ceph.osd_id": "0",
Jan 20 14:18:30 compute-0 jovial_shannon[251025]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 14:18:30 compute-0 jovial_shannon[251025]:                 "ceph.type": "block",
Jan 20 14:18:30 compute-0 jovial_shannon[251025]:                 "ceph.vdo": "0"
Jan 20 14:18:30 compute-0 jovial_shannon[251025]:             },
Jan 20 14:18:30 compute-0 jovial_shannon[251025]:             "type": "block",
Jan 20 14:18:30 compute-0 jovial_shannon[251025]:             "vg_name": "ceph_vg0"
Jan 20 14:18:30 compute-0 jovial_shannon[251025]:         }
Jan 20 14:18:30 compute-0 jovial_shannon[251025]:     ]
Jan 20 14:18:30 compute-0 jovial_shannon[251025]: }
Jan 20 14:18:30 compute-0 systemd[1]: libpod-4d844f2e152b0dd314ff6d72ae787016d413f824e7291fe9c92c765f92407e09.scope: Deactivated successfully.
Jan 20 14:18:30 compute-0 podman[251008]: 2026-01-20 14:18:30.561814868 +0000 UTC m=+0.924951093 container died 4d844f2e152b0dd314ff6d72ae787016d413f824e7291fe9c92c765f92407e09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_shannon, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:18:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-f79759e32e86adb4863f02916c7426b90d9732757a0e20990c5a0b0c6602fb47-merged.mount: Deactivated successfully.
Jan 20 14:18:30 compute-0 podman[251008]: 2026-01-20 14:18:30.63258939 +0000 UTC m=+0.995725545 container remove 4d844f2e152b0dd314ff6d72ae787016d413f824e7291fe9c92c765f92407e09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_shannon, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 20 14:18:30 compute-0 systemd[1]: libpod-conmon-4d844f2e152b0dd314ff6d72ae787016d413f824e7291fe9c92c765f92407e09.scope: Deactivated successfully.
Jan 20 14:18:30 compute-0 sudo[250899]: pam_unix(sudo:session): session closed for user root
Jan 20 14:18:30 compute-0 sudo[251048]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:18:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:18:30.732 160071 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:18:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:18:30.733 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:18:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:18:30.733 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:18:30 compute-0 sudo[251048]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:18:30 compute-0 sudo[251048]: pam_unix(sudo:session): session closed for user root
Jan 20 14:18:30 compute-0 sudo[251074]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:18:30 compute-0 sudo[251074]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:18:30 compute-0 sudo[251074]: pam_unix(sudo:session): session closed for user root
Jan 20 14:18:30 compute-0 sudo[251099]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:18:30 compute-0 sudo[251099]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:18:30 compute-0 sudo[251099]: pam_unix(sudo:session): session closed for user root
Jan 20 14:18:30 compute-0 sudo[251124]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- raw list --format json
Jan 20 14:18:30 compute-0 sudo[251124]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:18:30 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v801: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:18:31 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:18:31 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:18:31 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:18:31.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:18:31 compute-0 podman[251189]: 2026-01-20 14:18:31.248173258 +0000 UTC m=+0.043521708 container create bbae2ee0557a6ee61a29d12e153d0c053aa5999acb7a000224506ea3b0b575aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_varahamihira, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 20 14:18:31 compute-0 systemd[1]: Started libpod-conmon-bbae2ee0557a6ee61a29d12e153d0c053aa5999acb7a000224506ea3b0b575aa.scope.
Jan 20 14:18:31 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:18:31 compute-0 podman[251189]: 2026-01-20 14:18:31.31712982 +0000 UTC m=+0.112478290 container init bbae2ee0557a6ee61a29d12e153d0c053aa5999acb7a000224506ea3b0b575aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_varahamihira, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 14:18:31 compute-0 podman[251189]: 2026-01-20 14:18:31.324024466 +0000 UTC m=+0.119372916 container start bbae2ee0557a6ee61a29d12e153d0c053aa5999acb7a000224506ea3b0b575aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_varahamihira, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 20 14:18:31 compute-0 podman[251189]: 2026-01-20 14:18:31.231652521 +0000 UTC m=+0.027000991 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:18:31 compute-0 podman[251189]: 2026-01-20 14:18:31.327258813 +0000 UTC m=+0.122607283 container attach bbae2ee0557a6ee61a29d12e153d0c053aa5999acb7a000224506ea3b0b575aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_varahamihira, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 20 14:18:31 compute-0 quizzical_varahamihira[251203]: 167 167
Jan 20 14:18:31 compute-0 systemd[1]: libpod-bbae2ee0557a6ee61a29d12e153d0c053aa5999acb7a000224506ea3b0b575aa.scope: Deactivated successfully.
Jan 20 14:18:31 compute-0 conmon[251203]: conmon bbae2ee0557a6ee61a29 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-bbae2ee0557a6ee61a29d12e153d0c053aa5999acb7a000224506ea3b0b575aa.scope/container/memory.events
Jan 20 14:18:31 compute-0 podman[251189]: 2026-01-20 14:18:31.331235911 +0000 UTC m=+0.126584401 container died bbae2ee0557a6ee61a29d12e153d0c053aa5999acb7a000224506ea3b0b575aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_varahamihira, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:18:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-51aeff513c7bef509b0f04ab04c58172942019677cd6b1425f5bfd04b79aa97b-merged.mount: Deactivated successfully.
Jan 20 14:18:31 compute-0 podman[251189]: 2026-01-20 14:18:31.378007994 +0000 UTC m=+0.173356454 container remove bbae2ee0557a6ee61a29d12e153d0c053aa5999acb7a000224506ea3b0b575aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_varahamihira, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 20 14:18:31 compute-0 systemd[1]: libpod-conmon-bbae2ee0557a6ee61a29d12e153d0c053aa5999acb7a000224506ea3b0b575aa.scope: Deactivated successfully.
Jan 20 14:18:31 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:18:31 compute-0 podman[251229]: 2026-01-20 14:18:31.524368028 +0000 UTC m=+0.039940301 container create ce17445b9f3f86cbf48b9c69c641000041cb5c1f2eaa3962d2e7d37fb9476190 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_bohr, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:18:31 compute-0 systemd[1]: Started libpod-conmon-ce17445b9f3f86cbf48b9c69c641000041cb5c1f2eaa3962d2e7d37fb9476190.scope.
Jan 20 14:18:31 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:18:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfb7431d53647f1284452b6b4178bcdbf6c29ec9db6b58da1395883673826f64/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:18:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfb7431d53647f1284452b6b4178bcdbf6c29ec9db6b58da1395883673826f64/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:18:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfb7431d53647f1284452b6b4178bcdbf6c29ec9db6b58da1395883673826f64/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:18:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfb7431d53647f1284452b6b4178bcdbf6c29ec9db6b58da1395883673826f64/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:18:31 compute-0 podman[251229]: 2026-01-20 14:18:31.506532216 +0000 UTC m=+0.022104509 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:18:31 compute-0 podman[251229]: 2026-01-20 14:18:31.603588087 +0000 UTC m=+0.119160360 container init ce17445b9f3f86cbf48b9c69c641000041cb5c1f2eaa3962d2e7d37fb9476190 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_bohr, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:18:31 compute-0 podman[251229]: 2026-01-20 14:18:31.609708943 +0000 UTC m=+0.125281216 container start ce17445b9f3f86cbf48b9c69c641000041cb5c1f2eaa3962d2e7d37fb9476190 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_bohr, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:18:31 compute-0 podman[251229]: 2026-01-20 14:18:31.613116814 +0000 UTC m=+0.128689087 container attach ce17445b9f3f86cbf48b9c69c641000041cb5c1f2eaa3962d2e7d37fb9476190 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_bohr, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True)
Jan 20 14:18:31 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:18:31 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:18:31 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:18:31.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:18:32 compute-0 dreamy_bohr[251245]: {
Jan 20 14:18:32 compute-0 dreamy_bohr[251245]:     "afc3740a-ccee-46f8-b343-22648cc89376": {
Jan 20 14:18:32 compute-0 dreamy_bohr[251245]:         "ceph_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 14:18:32 compute-0 dreamy_bohr[251245]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 20 14:18:32 compute-0 dreamy_bohr[251245]:         "osd_id": 0,
Jan 20 14:18:32 compute-0 dreamy_bohr[251245]:         "osd_uuid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 14:18:32 compute-0 dreamy_bohr[251245]:         "type": "bluestore"
Jan 20 14:18:32 compute-0 dreamy_bohr[251245]:     }
Jan 20 14:18:32 compute-0 dreamy_bohr[251245]: }
Jan 20 14:18:32 compute-0 systemd[1]: libpod-ce17445b9f3f86cbf48b9c69c641000041cb5c1f2eaa3962d2e7d37fb9476190.scope: Deactivated successfully.
Jan 20 14:18:32 compute-0 podman[251229]: 2026-01-20 14:18:32.447897022 +0000 UTC m=+0.963469295 container died ce17445b9f3f86cbf48b9c69c641000041cb5c1f2eaa3962d2e7d37fb9476190 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_bohr, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 20 14:18:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-cfb7431d53647f1284452b6b4178bcdbf6c29ec9db6b58da1395883673826f64-merged.mount: Deactivated successfully.
Jan 20 14:18:32 compute-0 ceph-mon[74360]: pgmap v801: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:18:32 compute-0 podman[251229]: 2026-01-20 14:18:32.5089178 +0000 UTC m=+1.024490073 container remove ce17445b9f3f86cbf48b9c69c641000041cb5c1f2eaa3962d2e7d37fb9476190 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_bohr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 20 14:18:32 compute-0 systemd[1]: libpod-conmon-ce17445b9f3f86cbf48b9c69c641000041cb5c1f2eaa3962d2e7d37fb9476190.scope: Deactivated successfully.
Jan 20 14:18:32 compute-0 sudo[251124]: pam_unix(sudo:session): session closed for user root
Jan 20 14:18:32 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 14:18:32 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:18:32 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 14:18:32 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:18:32 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev b4717343-371b-4f70-af24-c395870eb865 does not exist
Jan 20 14:18:32 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 30ec384b-e8ac-49bb-b42a-78fa72f89131 does not exist
Jan 20 14:18:32 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 93562b9c-0085-4d21-a281-51dfd1ffb53d does not exist
Jan 20 14:18:32 compute-0 sudo[251276]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:18:32 compute-0 sudo[251276]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:18:32 compute-0 sudo[251276]: pam_unix(sudo:session): session closed for user root
Jan 20 14:18:32 compute-0 sudo[251301]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 14:18:32 compute-0 sudo[251301]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:18:32 compute-0 sudo[251301]: pam_unix(sudo:session): session closed for user root
Jan 20 14:18:32 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v802: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:18:33 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:18:33 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:18:33 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:18:33.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:18:33 compute-0 sudo[251327]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:18:33 compute-0 sudo[251327]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:18:33 compute-0 sudo[251327]: pam_unix(sudo:session): session closed for user root
Jan 20 14:18:33 compute-0 sudo[251352]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:18:33 compute-0 sudo[251352]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:18:33 compute-0 sudo[251352]: pam_unix(sudo:session): session closed for user root
Jan 20 14:18:33 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:18:33 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:18:33 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:18:33 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:18:33 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:18:33.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:18:34 compute-0 ceph-mon[74360]: pgmap v802: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:18:34 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v803: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:18:35 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:18:35 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:18:35 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:18:35.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:18:35 compute-0 ceph-mon[74360]: pgmap v803: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:18:35 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:18:35 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:18:35 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:18:35.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:18:36 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:18:37 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v804: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:18:37 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:18:37 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:18:37 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:18:37.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:18:37 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:18:37 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:18:37 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:18:37.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:18:38 compute-0 ceph-mon[74360]: pgmap v804: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:18:39 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v805: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:18:39 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:18:39 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:18:39 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:18:39.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:18:39 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:18:39 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:18:39 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:18:39.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:18:40 compute-0 ceph-mon[74360]: pgmap v805: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:18:41 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v806: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:18:41 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:18:41 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:18:41 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:18:41.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:18:41 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:18:41 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:18:41 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:18:41 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:18:41.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:18:42 compute-0 ceph-mon[74360]: pgmap v806: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:18:43 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v807: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:18:43 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:18:43 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:18:43 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:18:43.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:18:43 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 20 14:18:43 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1220553920' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:18:43 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 20 14:18:43 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1220553920' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:18:43 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/513778578' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:18:43 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/513778578' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:18:43 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/1220553920' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:18:43 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/1220553920' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:18:43 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:18:43 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:18:43 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:18:43.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:18:44 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 20 14:18:44 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3907418823' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:18:44 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 20 14:18:44 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3907418823' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:18:44 compute-0 ceph-mon[74360]: pgmap v807: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:18:44 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/3907418823' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:18:44 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/3907418823' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:18:45 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v808: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:18:45 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:18:45 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:18:45 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:18:45.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:18:45 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:18:45 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:18:45 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:18:45.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:18:46 compute-0 ceph-mon[74360]: pgmap v808: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:18:46 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:18:46 compute-0 podman[251384]: 2026-01-20 14:18:46.456649381 +0000 UTC m=+0.043519696 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 20 14:18:46 compute-0 podman[251383]: 2026-01-20 14:18:46.481337017 +0000 UTC m=+0.072905780 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 20 14:18:47 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v809: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:18:47 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:18:47 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:18:47 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:18:47.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:18:47 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:18:47 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:18:47 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:18:47.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:18:48 compute-0 ceph-mon[74360]: pgmap v809: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:18:49 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v810: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 767 B/s rd, 0 B/s wr, 1 op/s
Jan 20 14:18:49 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:18:49 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:18:49 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:18:49.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:18:49 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:18:49 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:18:49 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:18:49.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:18:50 compute-0 ceph-mon[74360]: pgmap v810: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 767 B/s rd, 0 B/s wr, 1 op/s
Jan 20 14:18:50 compute-0 nova_compute[250018]: 2026-01-20 14:18:50.889 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:18:50 compute-0 nova_compute[250018]: 2026-01-20 14:18:50.939 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:18:51 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v811: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 50 KiB/s rd, 0 B/s wr, 82 op/s
Jan 20 14:18:51 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:18:51 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:18:51 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:18:51.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:18:51 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:18:51 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:18:51 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:18:51 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:18:51.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:18:52 compute-0 ceph-mon[74360]: pgmap v811: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 50 KiB/s rd, 0 B/s wr, 82 op/s
Jan 20 14:18:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Optimize plan auto_2026-01-20_14:18:52
Jan 20 14:18:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 14:18:52 compute-0 ceph-mgr[74653]: [balancer INFO root] do_upmap
Jan 20 14:18:52 compute-0 ceph-mgr[74653]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.data', 'default.rgw.log', 'default.rgw.meta', '.rgw.root', '.mgr', 'vms', 'backups', 'default.rgw.control', 'cephfs.cephfs.meta', 'images']
Jan 20 14:18:52 compute-0 ceph-mgr[74653]: [balancer INFO root] prepared 0/10 changes
Jan 20 14:18:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:18:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:18:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:18:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:18:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:18:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:18:53 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v812: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 50 KiB/s rd, 0 B/s wr, 82 op/s
Jan 20 14:18:53 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:18:53 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:18:53 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:18:53.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:18:53 compute-0 sudo[251431]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:18:53 compute-0 sudo[251431]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:18:53 compute-0 sudo[251431]: pam_unix(sudo:session): session closed for user root
Jan 20 14:18:53 compute-0 sudo[251456]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:18:53 compute-0 sudo[251456]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:18:53 compute-0 sudo[251456]: pam_unix(sudo:session): session closed for user root
Jan 20 14:18:53 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:18:53 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:18:53 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:18:53.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:18:54 compute-0 ceph-mon[74360]: pgmap v812: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 50 KiB/s rd, 0 B/s wr, 82 op/s
Jan 20 14:18:55 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v813: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 82 KiB/s rd, 0 B/s wr, 136 op/s
Jan 20 14:18:55 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:18:55 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:18:55 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:18:55.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:18:55 compute-0 ceph-mon[74360]: pgmap v813: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 82 KiB/s rd, 0 B/s wr, 136 op/s
Jan 20 14:18:55 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:18:55 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:18:55 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:18:55.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:18:56 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:18:56 compute-0 sshd-session[251482]: Connection closed by authenticating user root 157.245.78.139 port 52506 [preauth]
Jan 20 14:18:57 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v814: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 82 KiB/s rd, 0 B/s wr, 136 op/s
Jan 20 14:18:57 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:18:57 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:18:57 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:18:57.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:18:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 14:18:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:18:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 14:18:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:18:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:18:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:18:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:18:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:18:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:18:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:18:57 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:18:57 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:18:57 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:18:57.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:18:58 compute-0 ceph-mon[74360]: pgmap v814: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 82 KiB/s rd, 0 B/s wr, 136 op/s
Jan 20 14:18:59 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v815: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 82 KiB/s rd, 0 B/s wr, 136 op/s
Jan 20 14:18:59 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:18:59 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:18:59 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:18:59.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:18:59 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:18:59 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:18:59 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:18:59.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:19:00 compute-0 ceph-mon[74360]: pgmap v815: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 82 KiB/s rd, 0 B/s wr, 136 op/s
Jan 20 14:19:01 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v816: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 81 KiB/s rd, 0 B/s wr, 135 op/s
Jan 20 14:19:01 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:19:01 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:19:01 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:19:01.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:19:01 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:19:01 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:19:01 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:19:01 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:19:01.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:19:02 compute-0 ceph-mon[74360]: pgmap v816: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 81 KiB/s rd, 0 B/s wr, 135 op/s
Jan 20 14:19:03 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v817: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 32 KiB/s rd, 0 B/s wr, 53 op/s
Jan 20 14:19:03 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:19:03 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:19:03 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:19:03.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:19:03 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:19:03 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:19:03 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:19:03.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:19:04 compute-0 ceph-mon[74360]: pgmap v817: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 32 KiB/s rd, 0 B/s wr, 53 op/s
Jan 20 14:19:05 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v818: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 32 KiB/s rd, 0 B/s wr, 53 op/s
Jan 20 14:19:05 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:19:05 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:19:05 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:19:05.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:19:05 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:19:05 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:19:05 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:19:05.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:19:06 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:19:06 compute-0 ceph-mon[74360]: pgmap v818: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 32 KiB/s rd, 0 B/s wr, 53 op/s
Jan 20 14:19:07 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v819: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:19:07 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:19:07 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:19:07 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:19:07.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:19:07 compute-0 ceph-mon[74360]: pgmap v819: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:19:07 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:19:07 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:19:07 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:19:07.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:19:08 compute-0 nova_compute[250018]: 2026-01-20 14:19:08.053 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:19:08 compute-0 nova_compute[250018]: 2026-01-20 14:19:08.053 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:19:08 compute-0 nova_compute[250018]: 2026-01-20 14:19:08.054 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 14:19:08 compute-0 nova_compute[250018]: 2026-01-20 14:19:08.054 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 14:19:08 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/564146264' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:19:09 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v820: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:19:09 compute-0 nova_compute[250018]: 2026-01-20 14:19:09.055 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 20 14:19:09 compute-0 nova_compute[250018]: 2026-01-20 14:19:09.055 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:19:09 compute-0 nova_compute[250018]: 2026-01-20 14:19:09.056 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:19:09 compute-0 nova_compute[250018]: 2026-01-20 14:19:09.056 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:19:09 compute-0 nova_compute[250018]: 2026-01-20 14:19:09.057 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:19:09 compute-0 nova_compute[250018]: 2026-01-20 14:19:09.057 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:19:09 compute-0 nova_compute[250018]: 2026-01-20 14:19:09.057 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:19:09 compute-0 nova_compute[250018]: 2026-01-20 14:19:09.057 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 14:19:09 compute-0 nova_compute[250018]: 2026-01-20 14:19:09.058 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:19:09 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:19:09 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:19:09 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:19:09.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:19:09 compute-0 nova_compute[250018]: 2026-01-20 14:19:09.097 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:19:09 compute-0 nova_compute[250018]: 2026-01-20 14:19:09.098 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:19:09 compute-0 nova_compute[250018]: 2026-01-20 14:19:09.098 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:19:09 compute-0 nova_compute[250018]: 2026-01-20 14:19:09.098 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 14:19:09 compute-0 nova_compute[250018]: 2026-01-20 14:19:09.099 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:19:09 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:19:09 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1175703373' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:19:09 compute-0 nova_compute[250018]: 2026-01-20 14:19:09.570 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:19:09 compute-0 ceph-mon[74360]: pgmap v820: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:19:09 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/777619478' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:19:09 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/1175703373' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:19:09 compute-0 nova_compute[250018]: 2026-01-20 14:19:09.739 250022 WARNING nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:19:09 compute-0 nova_compute[250018]: 2026-01-20 14:19:09.740 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5199MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 14:19:09 compute-0 nova_compute[250018]: 2026-01-20 14:19:09.740 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:19:09 compute-0 nova_compute[250018]: 2026-01-20 14:19:09.740 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:19:09 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:19:09 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:19:09 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:19:09.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:19:10 compute-0 nova_compute[250018]: 2026-01-20 14:19:10.277 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 14:19:10 compute-0 nova_compute[250018]: 2026-01-20 14:19:10.277 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 14:19:10 compute-0 nova_compute[250018]: 2026-01-20 14:19:10.307 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:19:10 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/2626967478' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:19:10 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:19:10 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3397359392' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:19:10 compute-0 nova_compute[250018]: 2026-01-20 14:19:10.771 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:19:10 compute-0 nova_compute[250018]: 2026-01-20 14:19:10.778 250022 DEBUG nova.compute.provider_tree [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:19:10 compute-0 nova_compute[250018]: 2026-01-20 14:19:10.828 250022 DEBUG nova.scheduler.client.report [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:19:10 compute-0 nova_compute[250018]: 2026-01-20 14:19:10.829 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 14:19:10 compute-0 nova_compute[250018]: 2026-01-20 14:19:10.829 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.089s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:19:11 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v821: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:19:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 14:19:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:19:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 20 14:19:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:19:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:19:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:19:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:19:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:19:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:19:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:19:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:19:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:19:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 20 14:19:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:19:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:19:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:19:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 20 14:19:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:19:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 20 14:19:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:19:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:19:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:19:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 20 14:19:11 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:19:11 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:19:11 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:19:11.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:19:11 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:19:11 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3397359392' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:19:11 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/4174787175' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:19:11 compute-0 ceph-mon[74360]: pgmap v821: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:19:11 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:19:11 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:19:11 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:19:11.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:19:13 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v822: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:19:13 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:19:13 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:19:13 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:19:13.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:19:13 compute-0 sudo[251537]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:19:13 compute-0 sudo[251537]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:19:13 compute-0 sudo[251537]: pam_unix(sudo:session): session closed for user root
Jan 20 14:19:13 compute-0 sudo[251562]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:19:13 compute-0 sudo[251562]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:19:13 compute-0 sudo[251562]: pam_unix(sudo:session): session closed for user root
Jan 20 14:19:13 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:19:13 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:19:13 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:19:13.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:19:14 compute-0 ceph-mon[74360]: pgmap v822: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:19:14 compute-0 ceph-mon[74360]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #42. Immutable memtables: 0.
Jan 20 14:19:14 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:19:14.101434) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 14:19:14 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:856] [default] [JOB 19] Flushing memtable with next log file: 42
Jan 20 14:19:14 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768918754101615, "job": 19, "event": "flush_started", "num_memtables": 1, "num_entries": 2062, "num_deletes": 251, "total_data_size": 3843672, "memory_usage": 3905488, "flush_reason": "Manual Compaction"}
Jan 20 14:19:14 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:885] [default] [JOB 19] Level-0 flush table #43: started
Jan 20 14:19:14 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768918754133423, "cf_name": "default", "job": 19, "event": "table_file_creation", "file_number": 43, "file_size": 3767487, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 17311, "largest_seqno": 19372, "table_properties": {"data_size": 3758175, "index_size": 5870, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18600, "raw_average_key_size": 20, "raw_value_size": 3739661, "raw_average_value_size": 4029, "num_data_blocks": 262, "num_entries": 928, "num_filter_entries": 928, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768918525, "oldest_key_time": 1768918525, "file_creation_time": 1768918754, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1688fb6f-f397-4aac-a0e8-874ff91d97a7", "db_session_id": "5V3N2TVXYZBCXP55EZHK", "orig_file_number": 43, "seqno_to_time_mapping": "N/A"}}
Jan 20 14:19:14 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 19] Flush lasted 31928 microseconds, and 7380 cpu microseconds.
Jan 20 14:19:14 compute-0 ceph-mon[74360]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 14:19:14 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:19:14.133462) [db/flush_job.cc:967] [default] [JOB 19] Level-0 flush table #43: 3767487 bytes OK
Jan 20 14:19:14 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:19:14.133480) [db/memtable_list.cc:519] [default] Level-0 commit table #43 started
Jan 20 14:19:14 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:19:14.136526) [db/memtable_list.cc:722] [default] Level-0 commit table #43: memtable #1 done
Jan 20 14:19:14 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:19:14.136539) EVENT_LOG_v1 {"time_micros": 1768918754136535, "job": 19, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 14:19:14 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:19:14.136554) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 14:19:14 compute-0 ceph-mon[74360]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 19] Try to delete WAL files size 3835395, prev total WAL file size 3835395, number of live WAL files 2.
Jan 20 14:19:14 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000039.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:19:14 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:19:14.137721) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031323535' seq:72057594037927935, type:22 .. '7061786F730031353037' seq:0, type:0; will stop at (end)
Jan 20 14:19:14 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 20] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 14:19:14 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 19 Base level 0, inputs: [43(3679KB)], [41(7520KB)]
Jan 20 14:19:14 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768918754137816, "job": 20, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [43], "files_L6": [41], "score": -1, "input_data_size": 11468747, "oldest_snapshot_seqno": -1}
Jan 20 14:19:14 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 20] Generated table #44: 4501 keys, 9395491 bytes, temperature: kUnknown
Jan 20 14:19:14 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768918754217690, "cf_name": "default", "job": 20, "event": "table_file_creation", "file_number": 44, "file_size": 9395491, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9363346, "index_size": 19811, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11269, "raw_key_size": 112696, "raw_average_key_size": 25, "raw_value_size": 9279724, "raw_average_value_size": 2061, "num_data_blocks": 822, "num_entries": 4501, "num_filter_entries": 4501, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768917305, "oldest_key_time": 0, "file_creation_time": 1768918754, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1688fb6f-f397-4aac-a0e8-874ff91d97a7", "db_session_id": "5V3N2TVXYZBCXP55EZHK", "orig_file_number": 44, "seqno_to_time_mapping": "N/A"}}
Jan 20 14:19:14 compute-0 ceph-mon[74360]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 14:19:14 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:19:14.218023) [db/compaction/compaction_job.cc:1663] [default] [JOB 20] Compacted 1@0 + 1@6 files to L6 => 9395491 bytes
Jan 20 14:19:14 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:19:14.219500) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 143.4 rd, 117.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.6, 7.3 +0.0 blob) out(9.0 +0.0 blob), read-write-amplify(5.5) write-amplify(2.5) OK, records in: 5020, records dropped: 519 output_compression: NoCompression
Jan 20 14:19:14 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:19:14.219529) EVENT_LOG_v1 {"time_micros": 1768918754219515, "job": 20, "event": "compaction_finished", "compaction_time_micros": 79991, "compaction_time_cpu_micros": 41886, "output_level": 6, "num_output_files": 1, "total_output_size": 9395491, "num_input_records": 5020, "num_output_records": 4501, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 14:19:14 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000043.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:19:14 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768918754220919, "job": 20, "event": "table_file_deletion", "file_number": 43}
Jan 20 14:19:14 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000041.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:19:14 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768918754223554, "job": 20, "event": "table_file_deletion", "file_number": 41}
Jan 20 14:19:14 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:19:14.137576) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:19:14 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:19:14.223628) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:19:14 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:19:14.223634) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:19:14 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:19:14.223638) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:19:14 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:19:14.223641) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:19:14 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:19:14.223643) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:19:15 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v823: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:19:15 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:19:15 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:19:15 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:19:15.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:19:15 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:19:15 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:19:15 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:19:15.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:19:16 compute-0 ceph-mon[74360]: pgmap v823: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:19:16 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:19:17 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v824: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:19:17 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:19:17 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:19:17 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:19:17.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:19:17 compute-0 podman[251590]: 2026-01-20 14:19:17.483356555 +0000 UTC m=+0.065292439 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 20 14:19:17 compute-0 podman[251589]: 2026-01-20 14:19:17.516358775 +0000 UTC m=+0.095844252 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, 
tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 20 14:19:17 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:19:17 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:19:17 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:19:17.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:19:18 compute-0 ceph-mon[74360]: pgmap v824: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:19:19 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v825: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:19:19 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:19:19 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:19:19 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:19:19.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:19:19 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:19:19 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:19:19 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:19:19.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:19:20 compute-0 ceph-mon[74360]: pgmap v825: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:19:21 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v826: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:19:21 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:19:21 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:19:21 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:19:21.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:19:21 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:19:21 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:19:21 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:19:21 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:19:21.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:19:22 compute-0 ceph-mon[74360]: pgmap v826: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:19:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:19:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:19:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:19:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:19:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:19:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:19:23 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v827: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:19:23 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:19:23 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:19:23 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:19:23.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:19:23 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:19:23 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:19:23 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:19:23.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:19:24 compute-0 ceph-mon[74360]: pgmap v827: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:19:25 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v828: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:19:25 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:19:25 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:19:25 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:19:25.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:19:25 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:19:25 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:19:25 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:19:25.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:19:26 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:19:26 compute-0 ceph-mon[74360]: pgmap v828: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:19:27 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v829: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:19:27 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:19:27 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:19:27 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:19:27.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:19:27 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:19:27 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:19:27 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:19:27.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:19:27 compute-0 ceph-mon[74360]: pgmap v829: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:19:29 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v830: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:19:29 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:19:29 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:19:29 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:19:29.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:19:29 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:19:29 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:19:29 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:19:29.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:19:30 compute-0 ceph-mon[74360]: pgmap v830: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:19:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:19:30.733 160071 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:19:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:19:30.734 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:19:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:19:30.734 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:19:31 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v831: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:19:31 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:19:31 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:19:31 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:19:31.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:19:31 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:19:31 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:19:31 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:19:31 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:19:31.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:19:32 compute-0 ceph-mon[74360]: pgmap v831: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:19:33 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v832: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:19:33 compute-0 sudo[251640]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:19:33 compute-0 sudo[251640]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:19:33 compute-0 sudo[251640]: pam_unix(sudo:session): session closed for user root
Jan 20 14:19:33 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:19:33 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:19:33 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:19:33.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:19:33 compute-0 sudo[251665]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:19:33 compute-0 sudo[251665]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:19:33 compute-0 sudo[251665]: pam_unix(sudo:session): session closed for user root
Jan 20 14:19:33 compute-0 sudo[251690]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:19:33 compute-0 sudo[251690]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:19:33 compute-0 sudo[251690]: pam_unix(sudo:session): session closed for user root
Jan 20 14:19:33 compute-0 sudo[251715]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 14:19:33 compute-0 sudo[251715]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:19:33 compute-0 sudo[251754]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:19:33 compute-0 sudo[251754]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:19:33 compute-0 sudo[251754]: pam_unix(sudo:session): session closed for user root
Jan 20 14:19:33 compute-0 sudo[251782]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:19:33 compute-0 sudo[251782]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:19:33 compute-0 sudo[251782]: pam_unix(sudo:session): session closed for user root
Jan 20 14:19:33 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:19:33 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:19:33 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:19:33.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:19:33 compute-0 sudo[251715]: pam_unix(sudo:session): session closed for user root
Jan 20 14:19:33 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Jan 20 14:19:33 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 20 14:19:33 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 14:19:33 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:19:33 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 20 14:19:33 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:19:33 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 20 14:19:33 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:19:33 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 505264a9-2bee-47fc-a027-af5ae600cebe does not exist
Jan 20 14:19:33 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 190f7ea8-7ee3-4803-8662-e4544341e2fe does not exist
Jan 20 14:19:33 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev c7562d90-714e-4ead-b761-0f8af011a88e does not exist
Jan 20 14:19:33 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 20 14:19:33 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:19:33 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 20 14:19:33 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:19:33 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 14:19:33 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:19:34 compute-0 sudo[251821]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:19:34 compute-0 sudo[251821]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:19:34 compute-0 sudo[251821]: pam_unix(sudo:session): session closed for user root
Jan 20 14:19:34 compute-0 sudo[251846]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:19:34 compute-0 sudo[251846]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:19:34 compute-0 sudo[251846]: pam_unix(sudo:session): session closed for user root
Jan 20 14:19:34 compute-0 ceph-mon[74360]: pgmap v832: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:19:34 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 20 14:19:34 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:19:34 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:19:34 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:19:34 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:19:34 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:19:34 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:19:34 compute-0 sudo[251871]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:19:34 compute-0 sudo[251871]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:19:34 compute-0 sudo[251871]: pam_unix(sudo:session): session closed for user root
Jan 20 14:19:34 compute-0 sudo[251896]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 14:19:34 compute-0 sudo[251896]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:19:34 compute-0 podman[251961]: 2026-01-20 14:19:34.599646361 +0000 UTC m=+0.042157652 container create 401a9995f91662dcf1a0bba04ea0b0af30a019c353580e439de26ee43c85b34b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_carson, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 20 14:19:34 compute-0 systemd[1]: Started libpod-conmon-401a9995f91662dcf1a0bba04ea0b0af30a019c353580e439de26ee43c85b34b.scope.
Jan 20 14:19:34 compute-0 podman[251961]: 2026-01-20 14:19:34.5810885 +0000 UTC m=+0.023599811 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:19:34 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:19:34 compute-0 podman[251961]: 2026-01-20 14:19:34.696872939 +0000 UTC m=+0.139384260 container init 401a9995f91662dcf1a0bba04ea0b0af30a019c353580e439de26ee43c85b34b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_carson, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 14:19:34 compute-0 podman[251961]: 2026-01-20 14:19:34.705192548 +0000 UTC m=+0.147703839 container start 401a9995f91662dcf1a0bba04ea0b0af30a019c353580e439de26ee43c85b34b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_carson, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 20 14:19:34 compute-0 podman[251961]: 2026-01-20 14:19:34.708764497 +0000 UTC m=+0.151275798 container attach 401a9995f91662dcf1a0bba04ea0b0af30a019c353580e439de26ee43c85b34b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_carson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:19:34 compute-0 systemd[1]: libpod-401a9995f91662dcf1a0bba04ea0b0af30a019c353580e439de26ee43c85b34b.scope: Deactivated successfully.
Jan 20 14:19:34 compute-0 eager_carson[251977]: 167 167
Jan 20 14:19:34 compute-0 podman[251961]: 2026-01-20 14:19:34.713781155 +0000 UTC m=+0.156292456 container died 401a9995f91662dcf1a0bba04ea0b0af30a019c353580e439de26ee43c85b34b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_carson, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:19:34 compute-0 conmon[251977]: conmon 401a9995f91662dcf1a0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-401a9995f91662dcf1a0bba04ea0b0af30a019c353580e439de26ee43c85b34b.scope/container/memory.events
Jan 20 14:19:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-32b38ac557afce5526707e128b1d68fb018a07e5c9b0a2314139c6f02ba3345a-merged.mount: Deactivated successfully.
Jan 20 14:19:34 compute-0 podman[251961]: 2026-01-20 14:19:34.757827389 +0000 UTC m=+0.200338680 container remove 401a9995f91662dcf1a0bba04ea0b0af30a019c353580e439de26ee43c85b34b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_carson, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 20 14:19:34 compute-0 systemd[1]: libpod-conmon-401a9995f91662dcf1a0bba04ea0b0af30a019c353580e439de26ee43c85b34b.scope: Deactivated successfully.
Jan 20 14:19:34 compute-0 podman[252001]: 2026-01-20 14:19:34.927854143 +0000 UTC m=+0.045585236 container create b8b19f636cb5a7c71a20dd3352133dbe81920fe685fb2ee58fcf9f8b967e4e6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_proskuriakova, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:19:34 compute-0 systemd[1]: Started libpod-conmon-b8b19f636cb5a7c71a20dd3352133dbe81920fe685fb2ee58fcf9f8b967e4e6c.scope.
Jan 20 14:19:35 compute-0 podman[252001]: 2026-01-20 14:19:34.907088431 +0000 UTC m=+0.024819544 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:19:35 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:19:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e2f370a942533dd42f56a32d1ec0db3245f03c93e78312f70c6857426924f48/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:19:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e2f370a942533dd42f56a32d1ec0db3245f03c93e78312f70c6857426924f48/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:19:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e2f370a942533dd42f56a32d1ec0db3245f03c93e78312f70c6857426924f48/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:19:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e2f370a942533dd42f56a32d1ec0db3245f03c93e78312f70c6857426924f48/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:19:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e2f370a942533dd42f56a32d1ec0db3245f03c93e78312f70c6857426924f48/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 14:19:35 compute-0 podman[252001]: 2026-01-20 14:19:35.022156752 +0000 UTC m=+0.139887845 container init b8b19f636cb5a7c71a20dd3352133dbe81920fe685fb2ee58fcf9f8b967e4e6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_proskuriakova, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 20 14:19:35 compute-0 podman[252001]: 2026-01-20 14:19:35.029874764 +0000 UTC m=+0.147605857 container start b8b19f636cb5a7c71a20dd3352133dbe81920fe685fb2ee58fcf9f8b967e4e6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_proskuriakova, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Jan 20 14:19:35 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v833: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:19:35 compute-0 podman[252001]: 2026-01-20 14:19:35.033257688 +0000 UTC m=+0.150988781 container attach b8b19f636cb5a7c71a20dd3352133dbe81920fe685fb2ee58fcf9f8b967e4e6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_proskuriakova, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 20 14:19:35 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:19:35 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:19:35 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:19:35.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:19:35 compute-0 happy_proskuriakova[252017]: --> passed data devices: 0 physical, 1 LVM
Jan 20 14:19:35 compute-0 happy_proskuriakova[252017]: --> relative data size: 1.0
Jan 20 14:19:35 compute-0 happy_proskuriakova[252017]: --> All data devices are unavailable
Jan 20 14:19:35 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:19:35 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:19:35 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:19:35.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:19:35 compute-0 systemd[1]: libpod-b8b19f636cb5a7c71a20dd3352133dbe81920fe685fb2ee58fcf9f8b967e4e6c.scope: Deactivated successfully.
Jan 20 14:19:35 compute-0 podman[252001]: 2026-01-20 14:19:35.859842601 +0000 UTC m=+0.977573734 container died b8b19f636cb5a7c71a20dd3352133dbe81920fe685fb2ee58fcf9f8b967e4e6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_proskuriakova, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:19:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-1e2f370a942533dd42f56a32d1ec0db3245f03c93e78312f70c6857426924f48-merged.mount: Deactivated successfully.
Jan 20 14:19:35 compute-0 podman[252001]: 2026-01-20 14:19:35.934790075 +0000 UTC m=+1.052521208 container remove b8b19f636cb5a7c71a20dd3352133dbe81920fe685fb2ee58fcf9f8b967e4e6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_proskuriakova, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:19:35 compute-0 systemd[1]: libpod-conmon-b8b19f636cb5a7c71a20dd3352133dbe81920fe685fb2ee58fcf9f8b967e4e6c.scope: Deactivated successfully.
Jan 20 14:19:35 compute-0 sudo[251896]: pam_unix(sudo:session): session closed for user root
Jan 20 14:19:36 compute-0 sudo[252047]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:19:36 compute-0 sudo[252047]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:19:36 compute-0 sudo[252047]: pam_unix(sudo:session): session closed for user root
Jan 20 14:19:36 compute-0 sudo[252072]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:19:36 compute-0 sudo[252072]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:19:36 compute-0 sudo[252072]: pam_unix(sudo:session): session closed for user root
Jan 20 14:19:36 compute-0 sudo[252097]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:19:36 compute-0 sudo[252097]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:19:36 compute-0 sudo[252097]: pam_unix(sudo:session): session closed for user root
Jan 20 14:19:36 compute-0 sudo[252122]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- lvm list --format json
Jan 20 14:19:36 compute-0 sudo[252122]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:19:36 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:19:36 compute-0 ceph-mon[74360]: pgmap v833: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:19:36 compute-0 podman[252187]: 2026-01-20 14:19:36.719219038 +0000 UTC m=+0.055404748 container create 109e7e831bd35473c86d4c2969992fc9329831140a26e56754ca2deac6a9c310 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_meninsky, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:19:36 compute-0 systemd[1]: Started libpod-conmon-109e7e831bd35473c86d4c2969992fc9329831140a26e56754ca2deac6a9c310.scope.
Jan 20 14:19:36 compute-0 podman[252187]: 2026-01-20 14:19:36.686105805 +0000 UTC m=+0.022291535 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:19:36 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:19:36 compute-0 podman[252187]: 2026-01-20 14:19:36.807589923 +0000 UTC m=+0.143775633 container init 109e7e831bd35473c86d4c2969992fc9329831140a26e56754ca2deac6a9c310 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_meninsky, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:19:36 compute-0 podman[252187]: 2026-01-20 14:19:36.817077274 +0000 UTC m=+0.153262964 container start 109e7e831bd35473c86d4c2969992fc9329831140a26e56754ca2deac6a9c310 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_meninsky, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 20 14:19:36 compute-0 podman[252187]: 2026-01-20 14:19:36.820151889 +0000 UTC m=+0.156337589 container attach 109e7e831bd35473c86d4c2969992fc9329831140a26e56754ca2deac6a9c310 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_meninsky, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True)
Jan 20 14:19:36 compute-0 laughing_meninsky[252204]: 167 167
Jan 20 14:19:36 compute-0 systemd[1]: libpod-109e7e831bd35473c86d4c2969992fc9329831140a26e56754ca2deac6a9c310.scope: Deactivated successfully.
Jan 20 14:19:36 compute-0 conmon[252204]: conmon 109e7e831bd35473c86d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-109e7e831bd35473c86d4c2969992fc9329831140a26e56754ca2deac6a9c310.scope/container/memory.events
Jan 20 14:19:36 compute-0 podman[252187]: 2026-01-20 14:19:36.827415359 +0000 UTC m=+0.163601049 container died 109e7e831bd35473c86d4c2969992fc9329831140a26e56754ca2deac6a9c310 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_meninsky, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 20 14:19:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-8231eb4af8b7893fc486fb66fc09969dbdcffcd394de99b7e7e52ce190aaa83d-merged.mount: Deactivated successfully.
Jan 20 14:19:36 compute-0 podman[252187]: 2026-01-20 14:19:36.873624392 +0000 UTC m=+0.209810102 container remove 109e7e831bd35473c86d4c2969992fc9329831140a26e56754ca2deac6a9c310 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_meninsky, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 20 14:19:36 compute-0 systemd[1]: libpod-conmon-109e7e831bd35473c86d4c2969992fc9329831140a26e56754ca2deac6a9c310.scope: Deactivated successfully.
Jan 20 14:19:37 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v834: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:19:37 compute-0 podman[252228]: 2026-01-20 14:19:37.068921552 +0000 UTC m=+0.052120347 container create c16ac0af0379233accd0fd8a64d273df0a872e7d1333ef788090652c1d81e5b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_ellis, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:19:37 compute-0 systemd[1]: Started libpod-conmon-c16ac0af0379233accd0fd8a64d273df0a872e7d1333ef788090652c1d81e5b6.scope.
Jan 20 14:19:37 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:19:37 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:19:37 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:19:37.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:19:37 compute-0 podman[252228]: 2026-01-20 14:19:37.04667693 +0000 UTC m=+0.029875725 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:19:37 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:19:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/560e4a066bfdbd9b2265b885ef7526520da4a000403ffeb3d5a4f42040876d61/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:19:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/560e4a066bfdbd9b2265b885ef7526520da4a000403ffeb3d5a4f42040876d61/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:19:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/560e4a066bfdbd9b2265b885ef7526520da4a000403ffeb3d5a4f42040876d61/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:19:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/560e4a066bfdbd9b2265b885ef7526520da4a000403ffeb3d5a4f42040876d61/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:19:37 compute-0 podman[252228]: 2026-01-20 14:19:37.174291246 +0000 UTC m=+0.157490091 container init c16ac0af0379233accd0fd8a64d273df0a872e7d1333ef788090652c1d81e5b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_ellis, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 20 14:19:37 compute-0 podman[252228]: 2026-01-20 14:19:37.186215234 +0000 UTC m=+0.169413989 container start c16ac0af0379233accd0fd8a64d273df0a872e7d1333ef788090652c1d81e5b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_ellis, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:19:37 compute-0 podman[252228]: 2026-01-20 14:19:37.189811633 +0000 UTC m=+0.173010428 container attach c16ac0af0379233accd0fd8a64d273df0a872e7d1333ef788090652c1d81e5b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_ellis, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 20 14:19:37 compute-0 ceph-mon[74360]: pgmap v834: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:19:37 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:19:37 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:19:37 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:19:37.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:19:37 compute-0 elated_ellis[252244]: {
Jan 20 14:19:37 compute-0 elated_ellis[252244]:     "0": [
Jan 20 14:19:37 compute-0 elated_ellis[252244]:         {
Jan 20 14:19:37 compute-0 elated_ellis[252244]:             "devices": [
Jan 20 14:19:37 compute-0 elated_ellis[252244]:                 "/dev/loop3"
Jan 20 14:19:37 compute-0 elated_ellis[252244]:             ],
Jan 20 14:19:37 compute-0 elated_ellis[252244]:             "lv_name": "ceph_lv0",
Jan 20 14:19:37 compute-0 elated_ellis[252244]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:19:37 compute-0 elated_ellis[252244]:             "lv_size": "7511998464",
Jan 20 14:19:37 compute-0 elated_ellis[252244]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e399cf45-e6b6-5393-99f1-75c601d3f188,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afc3740a-ccee-46f8-b343-22648cc89376,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 20 14:19:37 compute-0 elated_ellis[252244]:             "lv_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 14:19:37 compute-0 elated_ellis[252244]:             "name": "ceph_lv0",
Jan 20 14:19:37 compute-0 elated_ellis[252244]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:19:37 compute-0 elated_ellis[252244]:             "tags": {
Jan 20 14:19:37 compute-0 elated_ellis[252244]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:19:37 compute-0 elated_ellis[252244]:                 "ceph.block_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 14:19:37 compute-0 elated_ellis[252244]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 14:19:37 compute-0 elated_ellis[252244]:                 "ceph.cluster_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 14:19:37 compute-0 elated_ellis[252244]:                 "ceph.cluster_name": "ceph",
Jan 20 14:19:37 compute-0 elated_ellis[252244]:                 "ceph.crush_device_class": "",
Jan 20 14:19:37 compute-0 elated_ellis[252244]:                 "ceph.encrypted": "0",
Jan 20 14:19:37 compute-0 elated_ellis[252244]:                 "ceph.osd_fsid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 14:19:37 compute-0 elated_ellis[252244]:                 "ceph.osd_id": "0",
Jan 20 14:19:37 compute-0 elated_ellis[252244]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 14:19:37 compute-0 elated_ellis[252244]:                 "ceph.type": "block",
Jan 20 14:19:37 compute-0 elated_ellis[252244]:                 "ceph.vdo": "0"
Jan 20 14:19:37 compute-0 elated_ellis[252244]:             },
Jan 20 14:19:37 compute-0 elated_ellis[252244]:             "type": "block",
Jan 20 14:19:37 compute-0 elated_ellis[252244]:             "vg_name": "ceph_vg0"
Jan 20 14:19:37 compute-0 elated_ellis[252244]:         }
Jan 20 14:19:37 compute-0 elated_ellis[252244]:     ]
Jan 20 14:19:37 compute-0 elated_ellis[252244]: }
Jan 20 14:19:37 compute-0 systemd[1]: libpod-c16ac0af0379233accd0fd8a64d273df0a872e7d1333ef788090652c1d81e5b6.scope: Deactivated successfully.
Jan 20 14:19:37 compute-0 podman[252228]: 2026-01-20 14:19:37.968853856 +0000 UTC m=+0.952052621 container died c16ac0af0379233accd0fd8a64d273df0a872e7d1333ef788090652c1d81e5b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_ellis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:19:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-560e4a066bfdbd9b2265b885ef7526520da4a000403ffeb3d5a4f42040876d61-merged.mount: Deactivated successfully.
Jan 20 14:19:38 compute-0 podman[252228]: 2026-01-20 14:19:38.021663772 +0000 UTC m=+1.004862517 container remove c16ac0af0379233accd0fd8a64d273df0a872e7d1333ef788090652c1d81e5b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_ellis, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:19:38 compute-0 systemd[1]: libpod-conmon-c16ac0af0379233accd0fd8a64d273df0a872e7d1333ef788090652c1d81e5b6.scope: Deactivated successfully.
Jan 20 14:19:38 compute-0 sudo[252122]: pam_unix(sudo:session): session closed for user root
Jan 20 14:19:38 compute-0 sudo[252265]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:19:38 compute-0 sudo[252265]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:19:38 compute-0 sudo[252265]: pam_unix(sudo:session): session closed for user root
Jan 20 14:19:38 compute-0 sudo[252290]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:19:38 compute-0 sudo[252290]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:19:38 compute-0 sudo[252290]: pam_unix(sudo:session): session closed for user root
Jan 20 14:19:38 compute-0 sudo[252315]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:19:38 compute-0 sudo[252315]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:19:38 compute-0 sudo[252315]: pam_unix(sudo:session): session closed for user root
Jan 20 14:19:38 compute-0 sudo[252340]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- raw list --format json
Jan 20 14:19:38 compute-0 sudo[252340]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:19:38 compute-0 podman[252403]: 2026-01-20 14:19:38.703115387 +0000 UTC m=+0.053854706 container create 407a3a751556dab19c6689a78649dfeff2a8a04b361a31b07811b3beb2e7ff46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_jackson, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:19:38 compute-0 systemd[1]: Started libpod-conmon-407a3a751556dab19c6689a78649dfeff2a8a04b361a31b07811b3beb2e7ff46.scope.
Jan 20 14:19:38 compute-0 podman[252403]: 2026-01-20 14:19:38.673084669 +0000 UTC m=+0.023824028 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:19:38 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:19:38 compute-0 podman[252403]: 2026-01-20 14:19:38.790899975 +0000 UTC m=+0.141639364 container init 407a3a751556dab19c6689a78649dfeff2a8a04b361a31b07811b3beb2e7ff46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_jackson, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 20 14:19:38 compute-0 podman[252403]: 2026-01-20 14:19:38.802186166 +0000 UTC m=+0.152925525 container start 407a3a751556dab19c6689a78649dfeff2a8a04b361a31b07811b3beb2e7ff46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_jackson, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 20 14:19:38 compute-0 podman[252403]: 2026-01-20 14:19:38.807204294 +0000 UTC m=+0.157943653 container attach 407a3a751556dab19c6689a78649dfeff2a8a04b361a31b07811b3beb2e7ff46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_jackson, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:19:38 compute-0 competent_jackson[252420]: 167 167
Jan 20 14:19:38 compute-0 systemd[1]: libpod-407a3a751556dab19c6689a78649dfeff2a8a04b361a31b07811b3beb2e7ff46.scope: Deactivated successfully.
Jan 20 14:19:38 compute-0 podman[252403]: 2026-01-20 14:19:38.810508245 +0000 UTC m=+0.161247594 container died 407a3a751556dab19c6689a78649dfeff2a8a04b361a31b07811b3beb2e7ff46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_jackson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS)
Jan 20 14:19:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-46f3f9d1e8ef9329a0c24f8d34534aed27e8dd005201871a8e2a0fd74cc5be08-merged.mount: Deactivated successfully.
Jan 20 14:19:38 compute-0 podman[252403]: 2026-01-20 14:19:38.862473266 +0000 UTC m=+0.213212595 container remove 407a3a751556dab19c6689a78649dfeff2a8a04b361a31b07811b3beb2e7ff46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_jackson, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:19:38 compute-0 systemd[1]: libpod-conmon-407a3a751556dab19c6689a78649dfeff2a8a04b361a31b07811b3beb2e7ff46.scope: Deactivated successfully.
Jan 20 14:19:39 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v835: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:19:39 compute-0 podman[252446]: 2026-01-20 14:19:39.060974506 +0000 UTC m=+0.050182674 container create 17b0e1ab80f8edf5aa2d6d65f15d81f5d5cd74359b324e53382adbd644f4dc25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_gates, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:19:39 compute-0 systemd[1]: Started libpod-conmon-17b0e1ab80f8edf5aa2d6d65f15d81f5d5cd74359b324e53382adbd644f4dc25.scope.
Jan 20 14:19:39 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:19:39 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:19:39 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:19:39.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:19:39 compute-0 podman[252446]: 2026-01-20 14:19:39.039418052 +0000 UTC m=+0.028626260 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:19:39 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:19:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d770ad3515dad86747ecd4c6c16442640037f725e05169ec9874c798e1c9bdc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:19:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d770ad3515dad86747ecd4c6c16442640037f725e05169ec9874c798e1c9bdc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:19:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d770ad3515dad86747ecd4c6c16442640037f725e05169ec9874c798e1c9bdc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:19:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d770ad3515dad86747ecd4c6c16442640037f725e05169ec9874c798e1c9bdc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:19:39 compute-0 podman[252446]: 2026-01-20 14:19:39.169860316 +0000 UTC m=+0.159068564 container init 17b0e1ab80f8edf5aa2d6d65f15d81f5d5cd74359b324e53382adbd644f4dc25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_gates, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True)
Jan 20 14:19:39 compute-0 podman[252446]: 2026-01-20 14:19:39.182299649 +0000 UTC m=+0.171507847 container start 17b0e1ab80f8edf5aa2d6d65f15d81f5d5cd74359b324e53382adbd644f4dc25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_gates, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 14:19:39 compute-0 podman[252446]: 2026-01-20 14:19:39.186497844 +0000 UTC m=+0.175706022 container attach 17b0e1ab80f8edf5aa2d6d65f15d81f5d5cd74359b324e53382adbd644f4dc25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_gates, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 20 14:19:39 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:19:39 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:19:39 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:19:39.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:19:40 compute-0 confident_gates[252462]: {
Jan 20 14:19:40 compute-0 confident_gates[252462]:     "afc3740a-ccee-46f8-b343-22648cc89376": {
Jan 20 14:19:40 compute-0 confident_gates[252462]:         "ceph_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 14:19:40 compute-0 confident_gates[252462]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 20 14:19:40 compute-0 confident_gates[252462]:         "osd_id": 0,
Jan 20 14:19:40 compute-0 confident_gates[252462]:         "osd_uuid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 14:19:40 compute-0 confident_gates[252462]:         "type": "bluestore"
Jan 20 14:19:40 compute-0 confident_gates[252462]:     }
Jan 20 14:19:40 compute-0 confident_gates[252462]: }
Jan 20 14:19:40 compute-0 ceph-mon[74360]: pgmap v835: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:19:40 compute-0 systemd[1]: libpod-17b0e1ab80f8edf5aa2d6d65f15d81f5d5cd74359b324e53382adbd644f4dc25.scope: Deactivated successfully.
Jan 20 14:19:40 compute-0 podman[252446]: 2026-01-20 14:19:40.122080101 +0000 UTC m=+1.111288259 container died 17b0e1ab80f8edf5aa2d6d65f15d81f5d5cd74359b324e53382adbd644f4dc25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_gates, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 20 14:19:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-4d770ad3515dad86747ecd4c6c16442640037f725e05169ec9874c798e1c9bdc-merged.mount: Deactivated successfully.
Jan 20 14:19:40 compute-0 podman[252446]: 2026-01-20 14:19:40.195559615 +0000 UTC m=+1.184767783 container remove 17b0e1ab80f8edf5aa2d6d65f15d81f5d5cd74359b324e53382adbd644f4dc25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_gates, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:19:40 compute-0 systemd[1]: libpod-conmon-17b0e1ab80f8edf5aa2d6d65f15d81f5d5cd74359b324e53382adbd644f4dc25.scope: Deactivated successfully.
Jan 20 14:19:40 compute-0 sudo[252340]: pam_unix(sudo:session): session closed for user root
Jan 20 14:19:40 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 14:19:40 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:19:40 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 14:19:40 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:19:40 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev e60319c7-32b7-4572-b6d6-d83b85ed2639 does not exist
Jan 20 14:19:40 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev c02b3a21-2bc9-4bb5-9d69-d175368ef850 does not exist
Jan 20 14:19:40 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 2f5a9ed4-c8c5-4654-a060-e2c72030ac30 does not exist
Jan 20 14:19:40 compute-0 sudo[252497]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:19:40 compute-0 sudo[252497]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:19:40 compute-0 sudo[252497]: pam_unix(sudo:session): session closed for user root
Jan 20 14:19:40 compute-0 sudo[252522]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 14:19:40 compute-0 sudo[252522]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:19:40 compute-0 sudo[252522]: pam_unix(sudo:session): session closed for user root
Jan 20 14:19:40 compute-0 sshd-session[252547]: Connection closed by authenticating user root 157.245.78.139 port 54358 [preauth]
Jan 20 14:19:41 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v836: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:19:41 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:19:41 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:19:41 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:19:41.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:19:41 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:19:41 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:19:41 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:19:41 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:19:41 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:19:41 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:19:41.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:19:42 compute-0 ceph-mon[74360]: pgmap v836: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:19:43 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v837: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:19:43 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:19:43 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:19:43 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:19:43.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:19:43 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:19:43 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:19:43 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:19:43.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:19:44 compute-0 ceph-mon[74360]: pgmap v837: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:19:45 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v838: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:19:45 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:19:45 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:19:45 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:19:45.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:19:45 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:19:45 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:19:45 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:19:45.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:19:46 compute-0 ceph-mon[74360]: pgmap v838: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:19:46 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:19:47 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v839: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:19:47 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:19:47 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:19:47 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:19:47.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:19:47 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:19:47 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:19:47 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:19:47.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:19:48 compute-0 ceph-mon[74360]: pgmap v839: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:19:48 compute-0 podman[252554]: 2026-01-20 14:19:48.463411263 +0000 UTC m=+0.056199970 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Jan 20 14:19:48 compute-0 podman[252553]: 2026-01-20 14:19:48.513768121 +0000 UTC m=+0.108690686 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 20 14:19:49 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v840: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:19:49 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:19:49 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:19:49 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:19:49.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:19:49 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:19:49 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:19:49 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:19:49.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:19:50 compute-0 ceph-mon[74360]: pgmap v840: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:19:51 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v841: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:19:51 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:19:51 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:19:51 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:19:51.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:19:51 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:19:51 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:19:51 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:19:51 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:19:51.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:19:52 compute-0 ceph-mon[74360]: pgmap v841: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:19:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Optimize plan auto_2026-01-20_14:19:52
Jan 20 14:19:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 14:19:52 compute-0 ceph-mgr[74653]: [balancer INFO root] do_upmap
Jan 20 14:19:52 compute-0 ceph-mgr[74653]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.meta', 'backups', 'default.rgw.log', 'default.rgw.control', '.mgr', '.rgw.root', 'default.rgw.meta', 'cephfs.cephfs.data', 'volumes', 'images']
Jan 20 14:19:52 compute-0 ceph-mgr[74653]: [balancer INFO root] prepared 0/10 changes
Jan 20 14:19:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:19:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:19:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:19:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:19:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:19:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:19:53 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v842: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:19:53 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:19:53 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:19:53 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:19:53.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:19:53 compute-0 sudo[252599]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:19:53 compute-0 sudo[252599]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:19:53 compute-0 sudo[252599]: pam_unix(sudo:session): session closed for user root
Jan 20 14:19:53 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:19:53 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:19:53 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:19:53.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:19:53 compute-0 sudo[252624]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:19:53 compute-0 sudo[252624]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:19:53 compute-0 sudo[252624]: pam_unix(sudo:session): session closed for user root
Jan 20 14:19:54 compute-0 ceph-mon[74360]: pgmap v842: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:19:55 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v843: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:19:55 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:19:55 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:19:55 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:19:55.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:19:55 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:19:55 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:19:55 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:19:55.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:19:56 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:19:56 compute-0 ceph-mon[74360]: pgmap v843: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:19:57 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v844: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:19:57 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:19:57 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:19:57 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:19:57.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:19:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 14:19:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:19:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 14:19:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:19:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:19:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:19:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:19:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:19:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:19:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:19:57 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:19:57 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:19:57 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:19:57.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:19:58 compute-0 ceph-mon[74360]: pgmap v844: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:19:59 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v845: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:19:59 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:19:59 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:19:59 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:19:59.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:19:59 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:19:59 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:19:59 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:19:59.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:20:00 compute-0 ceph-mon[74360]: log_channel(cluster) log [INF] : overall HEALTH_OK
Jan 20 14:20:00 compute-0 ceph-mon[74360]: pgmap v845: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:20:00 compute-0 ceph-mon[74360]: overall HEALTH_OK
Jan 20 14:20:01 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v846: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:20:01 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:20:01 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:20:01 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:20:01.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:20:01 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:20:01 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:20:01 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:20:01 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:20:01.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:20:02 compute-0 ceph-mon[74360]: pgmap v846: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:20:03 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v847: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:20:03 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:20:03 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:20:03 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:20:03.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:20:03 compute-0 ceph-mon[74360]: pgmap v847: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:20:03 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:20:03 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:20:03 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:20:03.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:20:05 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v848: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:20:05 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:20:05 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:20:05 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:20:05.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:20:05 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:20:05 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:20:05 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:20:05.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:20:06 compute-0 ceph-mon[74360]: pgmap v848: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:20:06 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:20:07 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v849: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:20:07 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:20:07 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:20:07 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:20:07.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:20:07 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:20:07 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:20:07 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:20:07.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:20:08 compute-0 ceph-mon[74360]: pgmap v849: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:20:09 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v850: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:20:09 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:20:09 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:20:09 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:20:09.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:20:09 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:20:09 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:20:09 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:20:09.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:20:10 compute-0 ceph-mon[74360]: pgmap v850: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:20:10 compute-0 nova_compute[250018]: 2026-01-20 14:20:10.821 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:20:10 compute-0 nova_compute[250018]: 2026-01-20 14:20:10.822 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:20:10 compute-0 nova_compute[250018]: 2026-01-20 14:20:10.846 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:20:10 compute-0 nova_compute[250018]: 2026-01-20 14:20:10.847 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 14:20:10 compute-0 nova_compute[250018]: 2026-01-20 14:20:10.847 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 14:20:10 compute-0 nova_compute[250018]: 2026-01-20 14:20:10.877 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 20 14:20:10 compute-0 nova_compute[250018]: 2026-01-20 14:20:10.877 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:20:10 compute-0 nova_compute[250018]: 2026-01-20 14:20:10.877 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:20:10 compute-0 nova_compute[250018]: 2026-01-20 14:20:10.878 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:20:10 compute-0 nova_compute[250018]: 2026-01-20 14:20:10.878 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:20:10 compute-0 nova_compute[250018]: 2026-01-20 14:20:10.878 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:20:10 compute-0 nova_compute[250018]: 2026-01-20 14:20:10.878 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:20:10 compute-0 nova_compute[250018]: 2026-01-20 14:20:10.878 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 14:20:10 compute-0 nova_compute[250018]: 2026-01-20 14:20:10.878 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:20:10 compute-0 nova_compute[250018]: 2026-01-20 14:20:10.905 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:20:10 compute-0 nova_compute[250018]: 2026-01-20 14:20:10.906 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:20:10 compute-0 nova_compute[250018]: 2026-01-20 14:20:10.906 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:20:10 compute-0 nova_compute[250018]: 2026-01-20 14:20:10.906 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 14:20:10 compute-0 nova_compute[250018]: 2026-01-20 14:20:10.907 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:20:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 14:20:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:20:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 20 14:20:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:20:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:20:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:20:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:20:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:20:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:20:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:20:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:20:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:20:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 20 14:20:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:20:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:20:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:20:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 20 14:20:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:20:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 20 14:20:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:20:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:20:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:20:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 20 14:20:11 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v851: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:20:11 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:20:11 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:20:11 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:20:11.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:20:11 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/4055056685' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:20:11 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:20:11 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1052810782' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:20:11 compute-0 nova_compute[250018]: 2026-01-20 14:20:11.371 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:20:11 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:20:11 compute-0 nova_compute[250018]: 2026-01-20 14:20:11.564 250022 WARNING nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:20:11 compute-0 nova_compute[250018]: 2026-01-20 14:20:11.565 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5194MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 14:20:11 compute-0 nova_compute[250018]: 2026-01-20 14:20:11.565 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:20:11 compute-0 nova_compute[250018]: 2026-01-20 14:20:11.566 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:20:11 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:20:11 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:20:11 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:20:11.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:20:11 compute-0 nova_compute[250018]: 2026-01-20 14:20:11.916 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 14:20:11 compute-0 nova_compute[250018]: 2026-01-20 14:20:11.917 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 14:20:11 compute-0 nova_compute[250018]: 2026-01-20 14:20:11.934 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:20:12 compute-0 ceph-mon[74360]: pgmap v851: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:20:12 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/1052810782' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:20:12 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/2351083615' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:20:12 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/90274225' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:20:12 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:20:12 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/526916081' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:20:12 compute-0 nova_compute[250018]: 2026-01-20 14:20:12.403 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:20:12 compute-0 nova_compute[250018]: 2026-01-20 14:20:12.408 250022 DEBUG nova.compute.provider_tree [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:20:12 compute-0 nova_compute[250018]: 2026-01-20 14:20:12.758 250022 DEBUG nova.scheduler.client.report [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:20:12 compute-0 nova_compute[250018]: 2026-01-20 14:20:12.759 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 14:20:12 compute-0 nova_compute[250018]: 2026-01-20 14:20:12.759 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.194s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:20:13 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v852: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:20:13 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:20:13 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:20:13 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:20:13.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:20:13 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/526916081' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:20:13 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/574053826' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:20:13 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:20:13 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:20:13 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:20:13.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:20:14 compute-0 sudo[252703]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:20:14 compute-0 sudo[252703]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:20:14 compute-0 sudo[252703]: pam_unix(sudo:session): session closed for user root
Jan 20 14:20:14 compute-0 sudo[252728]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:20:14 compute-0 sudo[252728]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:20:14 compute-0 sudo[252728]: pam_unix(sudo:session): session closed for user root
Jan 20 14:20:14 compute-0 ceph-mon[74360]: pgmap v852: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:20:14 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/4041639714' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:20:14 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/4041639714' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:20:15 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v853: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:20:15 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:20:15 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:20:15 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:20:15.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:20:15 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:20:15 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:20:15 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:20:15.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:20:16 compute-0 ceph-mon[74360]: pgmap v853: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:20:16 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:20:17 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v854: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:20:17 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:20:17 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:20:17 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:20:17.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:20:17 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:20:17 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:20:17 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:20:17.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:20:18 compute-0 ceph-mon[74360]: pgmap v854: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:20:19 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v855: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:20:19 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:20:19.157 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:20:19 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:20:19.159 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 14:20:19 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:20:19.160 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=367c1a2c-b16a-4828-ab5a-626bb50023b4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:20:19 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:20:19 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:20:19 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:20:19.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:20:19 compute-0 podman[252757]: 2026-01-20 14:20:19.502032259 +0000 UTC m=+0.084301524 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Jan 20 14:20:19 compute-0 podman[252756]: 2026-01-20 14:20:19.506151392 +0000 UTC m=+0.101803976 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, 
tcib_managed=true, org.label-schema.license=GPLv2)
Jan 20 14:20:19 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:20:19 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:20:19 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:20:19.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:20:20 compute-0 ceph-mon[74360]: pgmap v855: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:20:21 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v856: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:20:21 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:20:21 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:20:21 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:20:21.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:20:21 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:20:21 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:20:21 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:20:21 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:20:21.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:20:22 compute-0 ceph-mon[74360]: pgmap v856: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:20:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:20:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:20:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:20:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:20:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:20:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:20:23 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v857: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:20:23 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:20:23 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:20:23 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:20:23.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:20:23 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:20:23 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:20:23 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:20:23.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:20:24 compute-0 ceph-mon[74360]: pgmap v857: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:20:25 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v858: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:20:25 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:20:25 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:20:25 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:20:25.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:20:25 compute-0 sshd-session[252802]: Connection closed by authenticating user root 157.245.78.139 port 44040 [preauth]
Jan 20 14:20:25 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:20:25 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:20:25 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:20:25.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:20:26 compute-0 ceph-mon[74360]: pgmap v858: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:20:26 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:20:27 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v859: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:20:27 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:20:27 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:20:27 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:20:27.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:20:27 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:20:27 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:20:27 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:20:27.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:20:28 compute-0 ceph-mon[74360]: pgmap v859: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:20:29 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v860: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:20:29 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:20:29 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:20:29 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:20:29.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:20:29 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:20:29 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:20:29 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:20:29.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:20:30 compute-0 ceph-mon[74360]: pgmap v860: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:20:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:20:30.734 160071 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:20:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:20:30.735 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:20:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:20:30.735 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:20:31 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v861: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:20:31 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:20:31 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:20:31 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:20:31.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:20:31 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:20:31 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:20:31 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:20:31 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:20:31.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:20:32 compute-0 ceph-mon[74360]: pgmap v861: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:20:33 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v862: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:20:33 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:20:33 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:20:33 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:20:33.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:20:33 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:20:33 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:20:33 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:20:33.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:20:34 compute-0 sudo[252808]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:20:34 compute-0 sudo[252808]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:20:34 compute-0 sudo[252808]: pam_unix(sudo:session): session closed for user root
Jan 20 14:20:34 compute-0 sudo[252833]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:20:34 compute-0 sudo[252833]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:20:34 compute-0 sudo[252833]: pam_unix(sudo:session): session closed for user root
Jan 20 14:20:34 compute-0 ceph-mon[74360]: pgmap v862: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:20:35 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v863: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:20:35 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:20:35 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:20:35 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:20:35.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:20:35 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:20:35 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:20:35 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:20:35.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:20:36 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:20:36 compute-0 ceph-mon[74360]: pgmap v863: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:20:37 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v864: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:20:37 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:20:37 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:20:37 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:20:37.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:20:37 compute-0 ceph-mon[74360]: pgmap v864: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:20:37 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:20:37 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:20:37 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:20:37.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:20:39 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v865: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:20:39 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:20:39 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:20:39 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:20:39.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:20:39 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:20:39 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:20:39 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:20:39.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:20:40 compute-0 ceph-mon[74360]: pgmap v865: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:20:40 compute-0 sudo[252862]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:20:40 compute-0 sudo[252862]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:20:40 compute-0 sudo[252862]: pam_unix(sudo:session): session closed for user root
Jan 20 14:20:40 compute-0 sudo[252887]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:20:40 compute-0 sudo[252887]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:20:40 compute-0 sudo[252887]: pam_unix(sudo:session): session closed for user root
Jan 20 14:20:41 compute-0 sudo[252912]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:20:41 compute-0 sudo[252912]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:20:41 compute-0 sudo[252912]: pam_unix(sudo:session): session closed for user root
Jan 20 14:20:41 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v866: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:20:41 compute-0 sudo[252937]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Jan 20 14:20:41 compute-0 sudo[252937]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:20:41 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:20:41 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:20:41 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:20:41.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:20:41 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:20:41 compute-0 podman[253036]: 2026-01-20 14:20:41.615697197 +0000 UTC m=+0.066185915 container exec a602f19ce9ef2d4922e558036fcbd51fac25abd19d28d7df857e5fbe8f959ba3 (image=quay.io/ceph/ceph:v18, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mon-compute-0, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:20:41 compute-0 podman[253036]: 2026-01-20 14:20:41.703526656 +0000 UTC m=+0.154015364 container exec_died a602f19ce9ef2d4922e558036fcbd51fac25abd19d28d7df857e5fbe8f959ba3 (image=quay.io/ceph/ceph:v18, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:20:41 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:20:41 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:20:41 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:20:41.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:20:42 compute-0 ceph-mon[74360]: pgmap v866: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:20:42 compute-0 podman[253193]: 2026-01-20 14:20:42.304953437 +0000 UTC m=+0.054419951 container exec a2973a514c852ff316e6ad2ff84585210b4ad01d86cdf2de06f634d9390a88e8 (image=quay.io/ceph/haproxy:2.3, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-haproxy-rgw-default-compute-0-nqkboe)
Jan 20 14:20:42 compute-0 podman[253193]: 2026-01-20 14:20:42.318654504 +0000 UTC m=+0.068121018 container exec_died a2973a514c852ff316e6ad2ff84585210b4ad01d86cdf2de06f634d9390a88e8 (image=quay.io/ceph/haproxy:2.3, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-haproxy-rgw-default-compute-0-nqkboe)
Jan 20 14:20:42 compute-0 podman[253258]: 2026-01-20 14:20:42.526087529 +0000 UTC m=+0.051865850 container exec 0c2042652fe8d88ae47b6333c235a533de4d966f44da3b69a5fc82baeacb10bf (image=quay.io/ceph/keepalived:2.2.4, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-keepalived-rgw-default-compute-0-gcjsxe, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides keepalived on RHEL 9 for Ceph., version=2.2.4, vendor=Red Hat, Inc., io.buildah.version=1.28.2, name=keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.tags=Ceph keepalived, release=1793, io.k8s.display-name=Keepalived on RHEL 9, vcs-type=git, com.redhat.component=keepalived-container, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, build-date=2023-02-22T09:23:20, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, description=keepalived for Ceph)
Jan 20 14:20:42 compute-0 podman[253258]: 2026-01-20 14:20:42.536810734 +0000 UTC m=+0.062589035 container exec_died 0c2042652fe8d88ae47b6333c235a533de4d966f44da3b69a5fc82baeacb10bf (image=quay.io/ceph/keepalived:2.2.4, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-keepalived-rgw-default-compute-0-gcjsxe, io.openshift.expose-services=, release=1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, version=2.2.4, vendor=Red Hat, Inc., architecture=x86_64, name=keepalived, description=keepalived for Ceph, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.display-name=Keepalived on RHEL 9, vcs-type=git, com.redhat.component=keepalived-container, io.buildah.version=1.28.2, build-date=2023-02-22T09:23:20)
Jan 20 14:20:42 compute-0 sudo[252937]: pam_unix(sudo:session): session closed for user root
Jan 20 14:20:42 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 20 14:20:42 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 14:20:42 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:20:42 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 20 14:20:42 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:20:42 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 14:20:42 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:20:42 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:20:42 compute-0 sudo[253291]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:20:42 compute-0 sudo[253291]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:20:42 compute-0 sudo[253291]: pam_unix(sudo:session): session closed for user root
Jan 20 14:20:42 compute-0 sudo[253316]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:20:42 compute-0 sudo[253316]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:20:42 compute-0 sudo[253316]: pam_unix(sudo:session): session closed for user root
Jan 20 14:20:42 compute-0 sudo[253342]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:20:42 compute-0 sudo[253342]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:20:42 compute-0 sudo[253342]: pam_unix(sudo:session): session closed for user root
Jan 20 14:20:42 compute-0 sudo[253367]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 14:20:42 compute-0 sudo[253367]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:20:43 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v867: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:20:43 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:20:43 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:20:43 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:20:43.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:20:43 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 20 14:20:43 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:20:43 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 20 14:20:43 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:20:43 compute-0 sudo[253367]: pam_unix(sudo:session): session closed for user root
Jan 20 14:20:43 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:20:43 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:20:43 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:20:43 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:20:43 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:20:43 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:20:43 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:20:43 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:20:43 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:20:43.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:20:44 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 14:20:44 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:20:44 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 20 14:20:44 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:20:44 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 20 14:20:44 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:20:44 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 8b7481d9-44a5-4d86-b648-b9f5e179e245 does not exist
Jan 20 14:20:44 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev c4a98c37-289d-420c-9ecc-7de73beb22da does not exist
Jan 20 14:20:44 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev d7621948-0db5-44d5-ad03-41e01a75d2e7 does not exist
Jan 20 14:20:44 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 20 14:20:44 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:20:44 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 20 14:20:44 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:20:44 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 14:20:44 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:20:44 compute-0 sudo[253425]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:20:44 compute-0 sudo[253425]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:20:44 compute-0 sudo[253425]: pam_unix(sudo:session): session closed for user root
Jan 20 14:20:44 compute-0 sudo[253450]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:20:44 compute-0 sudo[253450]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:20:44 compute-0 sudo[253450]: pam_unix(sudo:session): session closed for user root
Jan 20 14:20:44 compute-0 sudo[253475]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:20:44 compute-0 sudo[253475]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:20:44 compute-0 sudo[253475]: pam_unix(sudo:session): session closed for user root
Jan 20 14:20:44 compute-0 ceph-mon[74360]: pgmap v867: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:20:44 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:20:44 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:20:44 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:20:44 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:20:44 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:20:44 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:20:44 compute-0 sudo[253500]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 14:20:44 compute-0 sudo[253500]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:20:44 compute-0 podman[253566]: 2026-01-20 14:20:44.973062876 +0000 UTC m=+0.039502599 container create 06e6e9e6a3c870565f4c43919d740a8f8f13f00aad9383244dbc2eebe80931c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_faraday, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:20:45 compute-0 systemd[1]: Started libpod-conmon-06e6e9e6a3c870565f4c43919d740a8f8f13f00aad9383244dbc2eebe80931c5.scope.
Jan 20 14:20:45 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:20:45 compute-0 podman[253566]: 2026-01-20 14:20:45.045341187 +0000 UTC m=+0.111780930 container init 06e6e9e6a3c870565f4c43919d740a8f8f13f00aad9383244dbc2eebe80931c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_faraday, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 20 14:20:45 compute-0 podman[253566]: 2026-01-20 14:20:44.957758384 +0000 UTC m=+0.024198127 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:20:45 compute-0 podman[253566]: 2026-01-20 14:20:45.054042797 +0000 UTC m=+0.120482530 container start 06e6e9e6a3c870565f4c43919d740a8f8f13f00aad9383244dbc2eebe80931c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_faraday, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 20 14:20:45 compute-0 vigorous_faraday[253583]: 167 167
Jan 20 14:20:45 compute-0 systemd[1]: libpod-06e6e9e6a3c870565f4c43919d740a8f8f13f00aad9383244dbc2eebe80931c5.scope: Deactivated successfully.
Jan 20 14:20:45 compute-0 podman[253566]: 2026-01-20 14:20:45.063192789 +0000 UTC m=+0.129632532 container attach 06e6e9e6a3c870565f4c43919d740a8f8f13f00aad9383244dbc2eebe80931c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_faraday, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:20:45 compute-0 podman[253566]: 2026-01-20 14:20:45.063557929 +0000 UTC m=+0.129997652 container died 06e6e9e6a3c870565f4c43919d740a8f8f13f00aad9383244dbc2eebe80931c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_faraday, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 20 14:20:45 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v868: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:20:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-d305a7b695b8865ebcd0b6a33d9502641aecebb733145b9d8cc18c8a3a53d876-merged.mount: Deactivated successfully.
Jan 20 14:20:45 compute-0 podman[253566]: 2026-01-20 14:20:45.110251885 +0000 UTC m=+0.176691608 container remove 06e6e9e6a3c870565f4c43919d740a8f8f13f00aad9383244dbc2eebe80931c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_faraday, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:20:45 compute-0 systemd[1]: libpod-conmon-06e6e9e6a3c870565f4c43919d740a8f8f13f00aad9383244dbc2eebe80931c5.scope: Deactivated successfully.
Jan 20 14:20:45 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:20:45 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:20:45 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:20:45.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:20:45 compute-0 podman[253606]: 2026-01-20 14:20:45.25814619 +0000 UTC m=+0.037833763 container create 5e6289b23bc8728dde31fdd994c5da4f3599795c1483c86d6042bb292d3f64cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_meitner, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 20 14:20:45 compute-0 systemd[1]: Started libpod-conmon-5e6289b23bc8728dde31fdd994c5da4f3599795c1483c86d6042bb292d3f64cf.scope.
Jan 20 14:20:45 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:20:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd5190c6b0f6e3f073eb0962abe2259d56a0e0f674f6629cfe78675e5ef0e4d5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:20:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd5190c6b0f6e3f073eb0962abe2259d56a0e0f674f6629cfe78675e5ef0e4d5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:20:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd5190c6b0f6e3f073eb0962abe2259d56a0e0f674f6629cfe78675e5ef0e4d5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:20:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd5190c6b0f6e3f073eb0962abe2259d56a0e0f674f6629cfe78675e5ef0e4d5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:20:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd5190c6b0f6e3f073eb0962abe2259d56a0e0f674f6629cfe78675e5ef0e4d5/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 14:20:45 compute-0 podman[253606]: 2026-01-20 14:20:45.241986295 +0000 UTC m=+0.021673888 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:20:45 compute-0 podman[253606]: 2026-01-20 14:20:45.3412639 +0000 UTC m=+0.120951503 container init 5e6289b23bc8728dde31fdd994c5da4f3599795c1483c86d6042bb292d3f64cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_meitner, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 20 14:20:45 compute-0 podman[253606]: 2026-01-20 14:20:45.3473962 +0000 UTC m=+0.127083773 container start 5e6289b23bc8728dde31fdd994c5da4f3599795c1483c86d6042bb292d3f64cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_meitner, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:20:45 compute-0 podman[253606]: 2026-01-20 14:20:45.349993241 +0000 UTC m=+0.129680814 container attach 5e6289b23bc8728dde31fdd994c5da4f3599795c1483c86d6042bb292d3f64cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_meitner, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 14:20:45 compute-0 ceph-mon[74360]: pgmap v868: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:20:45 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:20:45 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:20:45 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:20:45.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:20:46 compute-0 exciting_meitner[253623]: --> passed data devices: 0 physical, 1 LVM
Jan 20 14:20:46 compute-0 exciting_meitner[253623]: --> relative data size: 1.0
Jan 20 14:20:46 compute-0 exciting_meitner[253623]: --> All data devices are unavailable
Jan 20 14:20:46 compute-0 systemd[1]: libpod-5e6289b23bc8728dde31fdd994c5da4f3599795c1483c86d6042bb292d3f64cf.scope: Deactivated successfully.
Jan 20 14:20:46 compute-0 podman[253606]: 2026-01-20 14:20:46.1680721 +0000 UTC m=+0.947759673 container died 5e6289b23bc8728dde31fdd994c5da4f3599795c1483c86d6042bb292d3f64cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_meitner, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True)
Jan 20 14:20:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-cd5190c6b0f6e3f073eb0962abe2259d56a0e0f674f6629cfe78675e5ef0e4d5-merged.mount: Deactivated successfully.
Jan 20 14:20:46 compute-0 podman[253606]: 2026-01-20 14:20:46.21454449 +0000 UTC m=+0.994232063 container remove 5e6289b23bc8728dde31fdd994c5da4f3599795c1483c86d6042bb292d3f64cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_meitner, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:20:46 compute-0 systemd[1]: libpod-conmon-5e6289b23bc8728dde31fdd994c5da4f3599795c1483c86d6042bb292d3f64cf.scope: Deactivated successfully.
Jan 20 14:20:46 compute-0 sudo[253500]: pam_unix(sudo:session): session closed for user root
Jan 20 14:20:46 compute-0 sudo[253651]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:20:46 compute-0 sudo[253651]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:20:46 compute-0 sudo[253651]: pam_unix(sudo:session): session closed for user root
Jan 20 14:20:46 compute-0 sudo[253676]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:20:46 compute-0 sudo[253676]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:20:46 compute-0 sudo[253676]: pam_unix(sudo:session): session closed for user root
Jan 20 14:20:46 compute-0 sudo[253701]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:20:46 compute-0 sudo[253701]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:20:46 compute-0 sudo[253701]: pam_unix(sudo:session): session closed for user root
Jan 20 14:20:46 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:20:46 compute-0 sudo[253726]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- lvm list --format json
Jan 20 14:20:46 compute-0 sudo[253726]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:20:46 compute-0 podman[253791]: 2026-01-20 14:20:46.747284458 +0000 UTC m=+0.037536595 container create ffc825c6394434f269e38a928e85ca761d6a19b42fc85a50ba30faa65294371b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_beaver, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:20:46 compute-0 systemd[1]: Started libpod-conmon-ffc825c6394434f269e38a928e85ca761d6a19b42fc85a50ba30faa65294371b.scope.
Jan 20 14:20:46 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:20:46 compute-0 podman[253791]: 2026-01-20 14:20:46.819297222 +0000 UTC m=+0.109549379 container init ffc825c6394434f269e38a928e85ca761d6a19b42fc85a50ba30faa65294371b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_beaver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 20 14:20:46 compute-0 podman[253791]: 2026-01-20 14:20:46.731645177 +0000 UTC m=+0.021897334 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:20:46 compute-0 podman[253791]: 2026-01-20 14:20:46.826949012 +0000 UTC m=+0.117201149 container start ffc825c6394434f269e38a928e85ca761d6a19b42fc85a50ba30faa65294371b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_beaver, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:20:46 compute-0 podman[253791]: 2026-01-20 14:20:46.830423549 +0000 UTC m=+0.120675696 container attach ffc825c6394434f269e38a928e85ca761d6a19b42fc85a50ba30faa65294371b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_beaver, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 14:20:46 compute-0 ecstatic_beaver[253808]: 167 167
Jan 20 14:20:46 compute-0 systemd[1]: libpod-ffc825c6394434f269e38a928e85ca761d6a19b42fc85a50ba30faa65294371b.scope: Deactivated successfully.
Jan 20 14:20:46 compute-0 conmon[253808]: conmon ffc825c6394434f269e3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ffc825c6394434f269e38a928e85ca761d6a19b42fc85a50ba30faa65294371b.scope/container/memory.events
Jan 20 14:20:46 compute-0 podman[253791]: 2026-01-20 14:20:46.834424769 +0000 UTC m=+0.124676916 container died ffc825c6394434f269e38a928e85ca761d6a19b42fc85a50ba30faa65294371b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_beaver, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 14:20:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-68e745e2c4a1cc8ce774b5e8c6bc263c7cf7e4507f73432114fed4bc8f783d26-merged.mount: Deactivated successfully.
Jan 20 14:20:46 compute-0 podman[253791]: 2026-01-20 14:20:46.870795911 +0000 UTC m=+0.161048048 container remove ffc825c6394434f269e38a928e85ca761d6a19b42fc85a50ba30faa65294371b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_beaver, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True)
Jan 20 14:20:46 compute-0 systemd[1]: libpod-conmon-ffc825c6394434f269e38a928e85ca761d6a19b42fc85a50ba30faa65294371b.scope: Deactivated successfully.
Jan 20 14:20:47 compute-0 podman[253831]: 2026-01-20 14:20:47.036414294 +0000 UTC m=+0.042136362 container create 3f558cd94df70fc28e9adf445df90945730e32e136057a67352ec1bdb4487705 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_goodall, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 20 14:20:47 compute-0 systemd[1]: Started libpod-conmon-3f558cd94df70fc28e9adf445df90945730e32e136057a67352ec1bdb4487705.scope.
Jan 20 14:20:47 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v869: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:20:47 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:20:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95ebd761b988cc70738407b18887f5feaf4cee57d2d35492bcf8721ba40c463a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:20:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95ebd761b988cc70738407b18887f5feaf4cee57d2d35492bcf8721ba40c463a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:20:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95ebd761b988cc70738407b18887f5feaf4cee57d2d35492bcf8721ba40c463a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:20:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95ebd761b988cc70738407b18887f5feaf4cee57d2d35492bcf8721ba40c463a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:20:47 compute-0 podman[253831]: 2026-01-20 14:20:47.111820072 +0000 UTC m=+0.117542170 container init 3f558cd94df70fc28e9adf445df90945730e32e136057a67352ec1bdb4487705 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_goodall, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 20 14:20:47 compute-0 podman[253831]: 2026-01-20 14:20:47.016859185 +0000 UTC m=+0.022581283 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:20:47 compute-0 podman[253831]: 2026-01-20 14:20:47.12011557 +0000 UTC m=+0.125837648 container start 3f558cd94df70fc28e9adf445df90945730e32e136057a67352ec1bdb4487705 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_goodall, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 20 14:20:47 compute-0 podman[253831]: 2026-01-20 14:20:47.123455582 +0000 UTC m=+0.129177680 container attach 3f558cd94df70fc28e9adf445df90945730e32e136057a67352ec1bdb4487705 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_goodall, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 20 14:20:47 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:20:47 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:20:47 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:20:47.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:20:47 compute-0 frosty_goodall[253847]: {
Jan 20 14:20:47 compute-0 frosty_goodall[253847]:     "0": [
Jan 20 14:20:47 compute-0 frosty_goodall[253847]:         {
Jan 20 14:20:47 compute-0 frosty_goodall[253847]:             "devices": [
Jan 20 14:20:47 compute-0 frosty_goodall[253847]:                 "/dev/loop3"
Jan 20 14:20:47 compute-0 frosty_goodall[253847]:             ],
Jan 20 14:20:47 compute-0 frosty_goodall[253847]:             "lv_name": "ceph_lv0",
Jan 20 14:20:47 compute-0 frosty_goodall[253847]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:20:47 compute-0 frosty_goodall[253847]:             "lv_size": "7511998464",
Jan 20 14:20:47 compute-0 frosty_goodall[253847]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e399cf45-e6b6-5393-99f1-75c601d3f188,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afc3740a-ccee-46f8-b343-22648cc89376,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 20 14:20:47 compute-0 frosty_goodall[253847]:             "lv_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 14:20:47 compute-0 frosty_goodall[253847]:             "name": "ceph_lv0",
Jan 20 14:20:47 compute-0 frosty_goodall[253847]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:20:47 compute-0 frosty_goodall[253847]:             "tags": {
Jan 20 14:20:47 compute-0 frosty_goodall[253847]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:20:47 compute-0 frosty_goodall[253847]:                 "ceph.block_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 14:20:47 compute-0 frosty_goodall[253847]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 14:20:47 compute-0 frosty_goodall[253847]:                 "ceph.cluster_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 14:20:47 compute-0 frosty_goodall[253847]:                 "ceph.cluster_name": "ceph",
Jan 20 14:20:47 compute-0 frosty_goodall[253847]:                 "ceph.crush_device_class": "",
Jan 20 14:20:47 compute-0 frosty_goodall[253847]:                 "ceph.encrypted": "0",
Jan 20 14:20:47 compute-0 frosty_goodall[253847]:                 "ceph.osd_fsid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 14:20:47 compute-0 frosty_goodall[253847]:                 "ceph.osd_id": "0",
Jan 20 14:20:47 compute-0 frosty_goodall[253847]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 14:20:47 compute-0 frosty_goodall[253847]:                 "ceph.type": "block",
Jan 20 14:20:47 compute-0 frosty_goodall[253847]:                 "ceph.vdo": "0"
Jan 20 14:20:47 compute-0 frosty_goodall[253847]:             },
Jan 20 14:20:47 compute-0 frosty_goodall[253847]:             "type": "block",
Jan 20 14:20:47 compute-0 frosty_goodall[253847]:             "vg_name": "ceph_vg0"
Jan 20 14:20:47 compute-0 frosty_goodall[253847]:         }
Jan 20 14:20:47 compute-0 frosty_goodall[253847]:     ]
Jan 20 14:20:47 compute-0 frosty_goodall[253847]: }
Jan 20 14:20:47 compute-0 systemd[1]: libpod-3f558cd94df70fc28e9adf445df90945730e32e136057a67352ec1bdb4487705.scope: Deactivated successfully.
Jan 20 14:20:47 compute-0 podman[253856]: 2026-01-20 14:20:47.905282603 +0000 UTC m=+0.024592179 container died 3f558cd94df70fc28e9adf445df90945730e32e136057a67352ec1bdb4487705 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_goodall, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 20 14:20:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-95ebd761b988cc70738407b18887f5feaf4cee57d2d35492bcf8721ba40c463a-merged.mount: Deactivated successfully.
Jan 20 14:20:47 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:20:47 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:20:47 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:20:47.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:20:47 compute-0 podman[253856]: 2026-01-20 14:20:47.953156021 +0000 UTC m=+0.072465597 container remove 3f558cd94df70fc28e9adf445df90945730e32e136057a67352ec1bdb4487705 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_goodall, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:20:47 compute-0 systemd[1]: libpod-conmon-3f558cd94df70fc28e9adf445df90945730e32e136057a67352ec1bdb4487705.scope: Deactivated successfully.
Jan 20 14:20:47 compute-0 sudo[253726]: pam_unix(sudo:session): session closed for user root
Jan 20 14:20:48 compute-0 sudo[253871]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:20:48 compute-0 sudo[253871]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:20:48 compute-0 sudo[253871]: pam_unix(sudo:session): session closed for user root
Jan 20 14:20:48 compute-0 sudo[253896]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:20:48 compute-0 sudo[253896]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:20:48 compute-0 sudo[253896]: pam_unix(sudo:session): session closed for user root
Jan 20 14:20:48 compute-0 ceph-mon[74360]: pgmap v869: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:20:48 compute-0 sudo[253921]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:20:48 compute-0 sudo[253921]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:20:48 compute-0 sudo[253921]: pam_unix(sudo:session): session closed for user root
Jan 20 14:20:48 compute-0 sudo[253946]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- raw list --format json
Jan 20 14:20:48 compute-0 sudo[253946]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:20:48 compute-0 podman[254011]: 2026-01-20 14:20:48.529621994 +0000 UTC m=+0.040754235 container create e698ee3b09885bf6070e233155e1cd16f0e0ad70ab016b87b62ca58484d92bd1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_sammet, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 20 14:20:48 compute-0 systemd[1]: Started libpod-conmon-e698ee3b09885bf6070e233155e1cd16f0e0ad70ab016b87b62ca58484d92bd1.scope.
Jan 20 14:20:48 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:20:48 compute-0 podman[254011]: 2026-01-20 14:20:48.513400057 +0000 UTC m=+0.024532318 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:20:48 compute-0 podman[254011]: 2026-01-20 14:20:48.886710452 +0000 UTC m=+0.397842723 container init e698ee3b09885bf6070e233155e1cd16f0e0ad70ab016b87b62ca58484d92bd1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_sammet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3)
Jan 20 14:20:48 compute-0 podman[254011]: 2026-01-20 14:20:48.897663183 +0000 UTC m=+0.408795424 container start e698ee3b09885bf6070e233155e1cd16f0e0ad70ab016b87b62ca58484d92bd1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_sammet, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 14:20:48 compute-0 podman[254011]: 2026-01-20 14:20:48.901647283 +0000 UTC m=+0.412779594 container attach e698ee3b09885bf6070e233155e1cd16f0e0ad70ab016b87b62ca58484d92bd1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_sammet, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 20 14:20:48 compute-0 stupefied_sammet[254027]: 167 167
Jan 20 14:20:48 compute-0 systemd[1]: libpod-e698ee3b09885bf6070e233155e1cd16f0e0ad70ab016b87b62ca58484d92bd1.scope: Deactivated successfully.
Jan 20 14:20:48 compute-0 podman[254011]: 2026-01-20 14:20:48.903023421 +0000 UTC m=+0.414155662 container died e698ee3b09885bf6070e233155e1cd16f0e0ad70ab016b87b62ca58484d92bd1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_sammet, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 20 14:20:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-4d2a8cfbdbdcffb9ed2157c3644d9a88397bf1cfbee39979ac9f9e370ff70209-merged.mount: Deactivated successfully.
Jan 20 14:20:48 compute-0 podman[254011]: 2026-01-20 14:20:48.942321894 +0000 UTC m=+0.453454135 container remove e698ee3b09885bf6070e233155e1cd16f0e0ad70ab016b87b62ca58484d92bd1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_sammet, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 20 14:20:48 compute-0 systemd[1]: libpod-conmon-e698ee3b09885bf6070e233155e1cd16f0e0ad70ab016b87b62ca58484d92bd1.scope: Deactivated successfully.
Jan 20 14:20:49 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v870: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:20:49 compute-0 podman[254051]: 2026-01-20 14:20:49.094090176 +0000 UTC m=+0.035841399 container create a2a5ecf6b784d776b10e4393fafdac1082a3be2c3984a1495ebdb646153660fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_mclaren, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 20 14:20:49 compute-0 systemd[1]: Started libpod-conmon-a2a5ecf6b784d776b10e4393fafdac1082a3be2c3984a1495ebdb646153660fd.scope.
Jan 20 14:20:49 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:20:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9047a33a146ba95d6e1593576918b8024cb5817ee9e2d82fb768c5349ef1e6ff/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:20:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9047a33a146ba95d6e1593576918b8024cb5817ee9e2d82fb768c5349ef1e6ff/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:20:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9047a33a146ba95d6e1593576918b8024cb5817ee9e2d82fb768c5349ef1e6ff/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:20:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9047a33a146ba95d6e1593576918b8024cb5817ee9e2d82fb768c5349ef1e6ff/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:20:49 compute-0 podman[254051]: 2026-01-20 14:20:49.165220605 +0000 UTC m=+0.106971848 container init a2a5ecf6b784d776b10e4393fafdac1082a3be2c3984a1495ebdb646153660fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_mclaren, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0)
Jan 20 14:20:49 compute-0 podman[254051]: 2026-01-20 14:20:49.171413926 +0000 UTC m=+0.113165149 container start a2a5ecf6b784d776b10e4393fafdac1082a3be2c3984a1495ebdb646153660fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_mclaren, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:20:49 compute-0 podman[254051]: 2026-01-20 14:20:49.078010272 +0000 UTC m=+0.019761515 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:20:49 compute-0 podman[254051]: 2026-01-20 14:20:49.174034278 +0000 UTC m=+0.115785531 container attach a2a5ecf6b784d776b10e4393fafdac1082a3be2c3984a1495ebdb646153660fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_mclaren, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 20 14:20:49 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:20:49 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:20:49 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:20:49.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:20:49 compute-0 affectionate_mclaren[254068]: {
Jan 20 14:20:49 compute-0 affectionate_mclaren[254068]:     "afc3740a-ccee-46f8-b343-22648cc89376": {
Jan 20 14:20:49 compute-0 affectionate_mclaren[254068]:         "ceph_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 14:20:49 compute-0 affectionate_mclaren[254068]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 20 14:20:49 compute-0 affectionate_mclaren[254068]:         "osd_id": 0,
Jan 20 14:20:49 compute-0 affectionate_mclaren[254068]:         "osd_uuid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 14:20:49 compute-0 affectionate_mclaren[254068]:         "type": "bluestore"
Jan 20 14:20:49 compute-0 affectionate_mclaren[254068]:     }
Jan 20 14:20:49 compute-0 affectionate_mclaren[254068]: }
Jan 20 14:20:49 compute-0 systemd[1]: libpod-a2a5ecf6b784d776b10e4393fafdac1082a3be2c3984a1495ebdb646153660fd.scope: Deactivated successfully.
Jan 20 14:20:49 compute-0 podman[254051]: 2026-01-20 14:20:49.933192784 +0000 UTC m=+0.874944047 container died a2a5ecf6b784d776b10e4393fafdac1082a3be2c3984a1495ebdb646153660fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_mclaren, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:20:49 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:20:49 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:20:49 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:20:49.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:20:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-9047a33a146ba95d6e1593576918b8024cb5817ee9e2d82fb768c5349ef1e6ff-merged.mount: Deactivated successfully.
Jan 20 14:20:49 compute-0 podman[254051]: 2026-01-20 14:20:49.997560597 +0000 UTC m=+0.939311840 container remove a2a5ecf6b784d776b10e4393fafdac1082a3be2c3984a1495ebdb646153660fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_mclaren, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 20 14:20:50 compute-0 systemd[1]: libpod-conmon-a2a5ecf6b784d776b10e4393fafdac1082a3be2c3984a1495ebdb646153660fd.scope: Deactivated successfully.
Jan 20 14:20:50 compute-0 sudo[253946]: pam_unix(sudo:session): session closed for user root
Jan 20 14:20:50 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 14:20:50 compute-0 podman[254099]: 2026-01-20 14:20:50.036366027 +0000 UTC m=+0.061009163 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Jan 20 14:20:50 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:20:50 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 14:20:50 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:20:50 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 0171b3e8-f68f-43c8-9129-3052280ece11 does not exist
Jan 20 14:20:50 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev b838def7-6bf7-4791-b635-f9ccd9d4693a does not exist
Jan 20 14:20:50 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev f095d4d4-439e-47eb-aeb7-9fd503f75b89 does not exist
Jan 20 14:20:50 compute-0 podman[254090]: 2026-01-20 14:20:50.089662095 +0000 UTC m=+0.109910490 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 14:20:50 compute-0 sudo[254146]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:20:50 compute-0 sudo[254146]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:20:50 compute-0 sudo[254146]: pam_unix(sudo:session): session closed for user root
Jan 20 14:20:50 compute-0 sudo[254172]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 14:20:50 compute-0 sudo[254172]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:20:50 compute-0 sudo[254172]: pam_unix(sudo:session): session closed for user root
Jan 20 14:20:50 compute-0 ceph-mon[74360]: pgmap v870: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:20:50 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:20:50 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:20:51 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v871: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:20:51 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:20:51 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:20:51 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:20:51.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:20:51 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:20:51 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:20:51 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:20:51 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:20:51.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:20:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Optimize plan auto_2026-01-20_14:20:52
Jan 20 14:20:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 14:20:52 compute-0 ceph-mgr[74653]: [balancer INFO root] do_upmap
Jan 20 14:20:52 compute-0 ceph-mgr[74653]: [balancer INFO root] pools ['.rgw.root', 'cephfs.cephfs.meta', 'vms', 'backups', '.mgr', 'default.rgw.log', 'cephfs.cephfs.data', 'default.rgw.meta', 'images', 'default.rgw.control', 'volumes']
Jan 20 14:20:52 compute-0 ceph-mgr[74653]: [balancer INFO root] prepared 0/10 changes
Jan 20 14:20:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:20:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:20:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:20:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:20:52 compute-0 ceph-mon[74360]: pgmap v871: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:20:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:20:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:20:53 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v872: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:20:53 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:20:53 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:20:53 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:20:53.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:20:53 compute-0 ceph-mon[74360]: pgmap v872: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:20:53 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:20:53 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:20:53 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:20:53.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:20:54 compute-0 sudo[254199]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:20:54 compute-0 sudo[254199]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:20:54 compute-0 sudo[254199]: pam_unix(sudo:session): session closed for user root
Jan 20 14:20:54 compute-0 sudo[254224]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:20:54 compute-0 sudo[254224]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:20:54 compute-0 sudo[254224]: pam_unix(sudo:session): session closed for user root
Jan 20 14:20:55 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v873: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:20:55 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:20:55 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:20:55 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:20:55.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:20:55 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:20:55 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:20:55 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:20:55.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:20:56 compute-0 ceph-mon[74360]: pgmap v873: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:20:56 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:20:57 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v874: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:20:57 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:20:57 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:20:57 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:20:57.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:20:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 14:20:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:20:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 14:20:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:20:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:20:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:20:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:20:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:20:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:20:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:20:57 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:20:57 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:20:57 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:20:57.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:20:58 compute-0 ceph-mon[74360]: pgmap v874: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:20:59 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v875: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:20:59 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:20:59 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:20:59 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:20:59.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:20:59 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:20:59 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:20:59 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:20:59.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:21:00 compute-0 ceph-mon[74360]: pgmap v875: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:21:01 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v876: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:21:01 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:21:01 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:21:01 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:21:01.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:21:01 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:21:01 compute-0 ceph-mon[74360]: pgmap v876: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:21:01 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:21:01 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:21:01 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:21:01.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:21:03 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v877: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:21:03 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:21:03 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:21:03 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:21:03.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:21:03 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:21:03 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:21:03 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:21:03.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:21:04 compute-0 ceph-mon[74360]: pgmap v877: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:21:05 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v878: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:21:05 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:21:05 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:21:05 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:21:05.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:21:05 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:21:05 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:21:05 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:21:05.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:21:06 compute-0 ceph-mon[74360]: pgmap v878: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:21:06 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:21:07 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v879: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:21:07 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:21:07 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:21:07 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:21:07.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:21:07 compute-0 ceph-mon[74360]: pgmap v879: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:21:07 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:21:07 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:21:07 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:21:07.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:21:09 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v880: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:21:09 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:21:09 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:21:09 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:21:09.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:21:09 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:21:09 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:21:09 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:21:09.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:21:10 compute-0 ceph-mon[74360]: pgmap v880: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:21:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 14:21:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:21:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 20 14:21:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:21:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:21:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:21:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:21:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:21:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:21:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:21:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:21:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:21:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 20 14:21:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:21:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:21:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:21:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 20 14:21:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:21:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 20 14:21:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:21:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:21:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:21:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 20 14:21:11 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v881: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:21:11 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:21:11 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:21:11 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:21:11.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:21:11 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:21:11 compute-0 ceph-mon[74360]: pgmap v881: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:21:11 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:21:11 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:21:11 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:21:11.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:21:12 compute-0 sshd-session[254258]: Connection closed by authenticating user root 157.245.78.139 port 34698 [preauth]
Jan 20 14:21:12 compute-0 nova_compute[250018]: 2026-01-20 14:21:12.761 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:21:12 compute-0 nova_compute[250018]: 2026-01-20 14:21:12.762 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:21:12 compute-0 nova_compute[250018]: 2026-01-20 14:21:12.762 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 14:21:12 compute-0 nova_compute[250018]: 2026-01-20 14:21:12.763 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 14:21:13 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v882: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:21:13 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/4122199275' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:21:13 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:21:13 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 20 14:21:13 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:21:13.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 20 14:21:13 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 20 14:21:13 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3277933881' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:21:13 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 20 14:21:13 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3277933881' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:21:13 compute-0 nova_compute[250018]: 2026-01-20 14:21:13.985 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 20 14:21:13 compute-0 nova_compute[250018]: 2026-01-20 14:21:13.985 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:21:13 compute-0 nova_compute[250018]: 2026-01-20 14:21:13.986 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:21:13 compute-0 nova_compute[250018]: 2026-01-20 14:21:13.986 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:21:13 compute-0 nova_compute[250018]: 2026-01-20 14:21:13.986 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:21:13 compute-0 nova_compute[250018]: 2026-01-20 14:21:13.987 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:21:13 compute-0 nova_compute[250018]: 2026-01-20 14:21:13.987 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:21:13 compute-0 nova_compute[250018]: 2026-01-20 14:21:13.987 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 14:21:13 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:21:13 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:21:13 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:21:13.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:21:13 compute-0 nova_compute[250018]: 2026-01-20 14:21:13.988 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:21:14 compute-0 nova_compute[250018]: 2026-01-20 14:21:14.028 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:21:14 compute-0 nova_compute[250018]: 2026-01-20 14:21:14.029 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:21:14 compute-0 nova_compute[250018]: 2026-01-20 14:21:14.029 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:21:14 compute-0 nova_compute[250018]: 2026-01-20 14:21:14.029 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 14:21:14 compute-0 nova_compute[250018]: 2026-01-20 14:21:14.030 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:21:14 compute-0 sudo[254272]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:21:14 compute-0 sudo[254272]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:21:14 compute-0 sudo[254272]: pam_unix(sudo:session): session closed for user root
Jan 20 14:21:14 compute-0 sudo[254297]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:21:14 compute-0 sudo[254297]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:21:14 compute-0 sudo[254297]: pam_unix(sudo:session): session closed for user root
Jan 20 14:21:14 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:21:14 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/812704314' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:21:14 compute-0 nova_compute[250018]: 2026-01-20 14:21:14.916 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.886s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:21:15 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v883: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:21:15 compute-0 ceph-mon[74360]: pgmap v882: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:21:15 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/3277933881' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:21:15 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/3277933881' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:21:15 compute-0 nova_compute[250018]: 2026-01-20 14:21:15.129 250022 WARNING nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:21:15 compute-0 nova_compute[250018]: 2026-01-20 14:21:15.130 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5169MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 14:21:15 compute-0 nova_compute[250018]: 2026-01-20 14:21:15.130 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:21:15 compute-0 nova_compute[250018]: 2026-01-20 14:21:15.130 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:21:15 compute-0 nova_compute[250018]: 2026-01-20 14:21:15.215 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 14:21:15 compute-0 nova_compute[250018]: 2026-01-20 14:21:15.216 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 14:21:15 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:21:15 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 20 14:21:15 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:21:15.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 20 14:21:15 compute-0 nova_compute[250018]: 2026-01-20 14:21:15.236 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:21:15 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:21:15 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3432590098' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:21:15 compute-0 nova_compute[250018]: 2026-01-20 14:21:15.927 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.692s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:21:15 compute-0 nova_compute[250018]: 2026-01-20 14:21:15.932 250022 DEBUG nova.compute.provider_tree [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:21:15 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:21:15 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:21:15 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:21:15.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:21:16 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/757655099' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:21:16 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/3118763901' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:21:16 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/812704314' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:21:16 compute-0 ceph-mon[74360]: pgmap v883: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:21:16 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/1836825557' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:21:16 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3432590098' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:21:16 compute-0 nova_compute[250018]: 2026-01-20 14:21:16.196 250022 DEBUG nova.scheduler.client.report [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:21:16 compute-0 nova_compute[250018]: 2026-01-20 14:21:16.198 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 14:21:16 compute-0 nova_compute[250018]: 2026-01-20 14:21:16.198 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.068s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:21:16 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:21:17 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v884: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:21:17 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:21:17 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:21:17 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:21:17.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:21:17 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:21:17 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:21:17 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:21:17.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:21:19 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v885: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:21:19 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:21:19 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:21:19 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:21:19.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:21:19 compute-0 ceph-mon[74360]: pgmap v884: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:21:19 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:21:19 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:21:19 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:21:19.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:21:20 compute-0 ceph-mon[74360]: pgmap v885: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:21:20 compute-0 podman[254359]: 2026-01-20 14:21:20.464891966 +0000 UTC m=+0.057612905 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 20 14:21:20 compute-0 podman[254358]: 2026-01-20 14:21:20.50137062 +0000 UTC m=+0.096103452 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 20 14:21:21 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v886: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:21:21 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:21:21 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:21:21 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:21:21.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:21:21 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:21:21 compute-0 ceph-mon[74360]: pgmap v886: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:21:22 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:21:22 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:21:22 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:21:21.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:21:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:21:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:21:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:21:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:21:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:21:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:21:23 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v887: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:21:23 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:21:23 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:21:23 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:21:23.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:21:24 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:21:24 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:21:24 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:21:24.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:21:24 compute-0 ceph-mon[74360]: pgmap v887: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:21:25 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v888: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:21:25 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:21:25 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:21:25 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:21:25.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:21:26 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:21:26 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:21:26 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:21:26.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:21:26 compute-0 ceph-mon[74360]: pgmap v888: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:21:26 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:21:27 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v889: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:21:27 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:21:27 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 20 14:21:27 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:21:27.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 20 14:21:28 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:21:28 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:21:28 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:21:28.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:21:28 compute-0 ceph-mon[74360]: pgmap v889: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:21:29 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v890: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:21:29 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:21:29 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 20 14:21:29 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:21:29.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 20 14:21:30 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:21:30 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:21:30 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:21:30.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:21:30 compute-0 ceph-mon[74360]: pgmap v890: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:21:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:21:30.735 160071 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:21:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:21:30.736 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:21:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:21:30.736 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:21:31 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v891: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:21:31 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:21:31 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:21:31 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:21:31.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:21:31 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:21:32 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:21:32 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 20 14:21:32 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:21:32.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 20 14:21:32 compute-0 ceph-mon[74360]: pgmap v891: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:21:33 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v892: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:21:33 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:21:33 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:21:33 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:21:33.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:21:34 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:21:34 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:21:34 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:21:34.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:21:34 compute-0 ceph-mon[74360]: pgmap v892: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:21:34 compute-0 sudo[254406]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:21:34 compute-0 sudo[254406]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:21:34 compute-0 sudo[254406]: pam_unix(sudo:session): session closed for user root
Jan 20 14:21:34 compute-0 sudo[254432]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:21:34 compute-0 sudo[254432]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:21:34 compute-0 sudo[254432]: pam_unix(sudo:session): session closed for user root
Jan 20 14:21:35 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v893: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:21:35 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:21:35 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:21:35 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:21:35.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:21:36 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:21:36 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:21:36 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:21:36.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:21:36 compute-0 ceph-mon[74360]: pgmap v893: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:21:36 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:21:37 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v894: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:21:37 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:21:37 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:21:37 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:21:37.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:21:37 compute-0 ceph-mon[74360]: pgmap v894: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:21:38 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:21:38 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:21:38 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:21:38.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:21:39 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v895: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:21:39 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:21:39 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:21:39 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:21:39.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:21:40 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:21:40 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:21:40 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:21:40.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:21:40 compute-0 ceph-mon[74360]: pgmap v895: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:21:41 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v896: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:21:41 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:21:41 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:21:41 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:21:41.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:21:41 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:21:42 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:21:42 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:21:42 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:21:42.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:21:42 compute-0 ceph-mon[74360]: pgmap v896: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:21:43 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v897: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:21:43 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:21:43 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 20 14:21:43 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:21:43.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 20 14:21:44 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:21:44 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:21:44 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:21:44.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:21:44 compute-0 ceph-mon[74360]: pgmap v897: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:21:45 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v898: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:21:45 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:21:45 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:21:45 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:21:45.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:21:46 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:21:46 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 20 14:21:46 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:21:46.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 20 14:21:46 compute-0 ceph-mon[74360]: pgmap v898: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:21:46 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:21:47 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v899: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:21:47 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:21:47 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:21:47 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:21:47.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:21:48 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:21:48 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:21:48 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:21:48.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:21:48 compute-0 ceph-mon[74360]: pgmap v899: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:21:49 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v900: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:21:49 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:21:49 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:21:49 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:21:49.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:21:50 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:21:50 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:21:50 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:21:50.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:21:50 compute-0 ceph-mon[74360]: pgmap v900: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:21:50 compute-0 sudo[254464]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:21:50 compute-0 sudo[254464]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:21:50 compute-0 sudo[254464]: pam_unix(sudo:session): session closed for user root
Jan 20 14:21:50 compute-0 podman[254489]: 2026-01-20 14:21:50.720849048 +0000 UTC m=+0.077990852 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 20 14:21:50 compute-0 sudo[254499]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:21:50 compute-0 sudo[254499]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:21:50 compute-0 sudo[254499]: pam_unix(sudo:session): session closed for user root
Jan 20 14:21:50 compute-0 podman[254488]: 2026-01-20 14:21:50.764929463 +0000 UTC m=+0.132399931 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 20 14:21:50 compute-0 sudo[254560]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:21:50 compute-0 sudo[254560]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:21:50 compute-0 sudo[254560]: pam_unix(sudo:session): session closed for user root
Jan 20 14:21:50 compute-0 sudo[254585]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 14:21:50 compute-0 sudo[254585]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:21:51 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v901: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:21:51 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:21:51 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:21:51 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:21:51.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:21:51 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:21:51 compute-0 sudo[254585]: pam_unix(sudo:session): session closed for user root
Jan 20 14:21:51 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 14:21:51 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:21:51 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 20 14:21:51 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:21:51 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 20 14:21:51 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:21:51 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev c4969f06-6bd5-4a50-ac54-69caa4d40db8 does not exist
Jan 20 14:21:51 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 3d2b2930-772e-46e9-9608-8d2538351276 does not exist
Jan 20 14:21:51 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 7e873145-fa58-4a5b-89bb-a4be0b432efa does not exist
Jan 20 14:21:51 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 20 14:21:51 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:21:51 compute-0 ceph-mon[74360]: pgmap v901: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:21:51 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:21:51 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:21:51 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 20 14:21:51 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:21:51 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 14:21:51 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:21:51 compute-0 sudo[254643]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:21:51 compute-0 sudo[254643]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:21:51 compute-0 sudo[254643]: pam_unix(sudo:session): session closed for user root
Jan 20 14:21:51 compute-0 sudo[254668]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:21:51 compute-0 sudo[254668]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:21:51 compute-0 sudo[254668]: pam_unix(sudo:session): session closed for user root
Jan 20 14:21:51 compute-0 sudo[254693]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:21:51 compute-0 sudo[254693]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:21:51 compute-0 sudo[254693]: pam_unix(sudo:session): session closed for user root
Jan 20 14:21:51 compute-0 sudo[254718]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 14:21:51 compute-0 sudo[254718]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:21:52 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:21:52 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 20 14:21:52 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:21:52.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 20 14:21:52 compute-0 podman[254782]: 2026-01-20 14:21:52.265590243 +0000 UTC m=+0.039313281 container create 2ac535d2f2f00b692643a55ebfba8cb697ef71a6a6f983ebba18d7a5d2ccaaa8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_hermann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 20 14:21:52 compute-0 systemd[1]: Started libpod-conmon-2ac535d2f2f00b692643a55ebfba8cb697ef71a6a6f983ebba18d7a5d2ccaaa8.scope.
Jan 20 14:21:52 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:21:52 compute-0 podman[254782]: 2026-01-20 14:21:52.339080766 +0000 UTC m=+0.112803804 container init 2ac535d2f2f00b692643a55ebfba8cb697ef71a6a6f983ebba18d7a5d2ccaaa8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_hermann, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:21:52 compute-0 podman[254782]: 2026-01-20 14:21:52.247893255 +0000 UTC m=+0.021616283 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:21:52 compute-0 podman[254782]: 2026-01-20 14:21:52.34531069 +0000 UTC m=+0.119033708 container start 2ac535d2f2f00b692643a55ebfba8cb697ef71a6a6f983ebba18d7a5d2ccaaa8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_hermann, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:21:52 compute-0 podman[254782]: 2026-01-20 14:21:52.348341611 +0000 UTC m=+0.122064649 container attach 2ac535d2f2f00b692643a55ebfba8cb697ef71a6a6f983ebba18d7a5d2ccaaa8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_hermann, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 20 14:21:52 compute-0 stoic_hermann[254799]: 167 167
Jan 20 14:21:52 compute-0 systemd[1]: libpod-2ac535d2f2f00b692643a55ebfba8cb697ef71a6a6f983ebba18d7a5d2ccaaa8.scope: Deactivated successfully.
Jan 20 14:21:52 compute-0 podman[254782]: 2026-01-20 14:21:52.350861867 +0000 UTC m=+0.124584875 container died 2ac535d2f2f00b692643a55ebfba8cb697ef71a6a6f983ebba18d7a5d2ccaaa8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_hermann, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 20 14:21:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-ac1da2706a91378a91e261f2e9f37ac184f3da04f096baa871de55589aa19a20-merged.mount: Deactivated successfully.
Jan 20 14:21:52 compute-0 podman[254782]: 2026-01-20 14:21:52.385724079 +0000 UTC m=+0.159447087 container remove 2ac535d2f2f00b692643a55ebfba8cb697ef71a6a6f983ebba18d7a5d2ccaaa8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_hermann, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:21:52 compute-0 systemd[1]: libpod-conmon-2ac535d2f2f00b692643a55ebfba8cb697ef71a6a6f983ebba18d7a5d2ccaaa8.scope: Deactivated successfully.
Jan 20 14:21:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Optimize plan auto_2026-01-20_14:21:52
Jan 20 14:21:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 14:21:52 compute-0 ceph-mgr[74653]: [balancer INFO root] do_upmap
Jan 20 14:21:52 compute-0 ceph-mgr[74653]: [balancer INFO root] pools ['.rgw.root', 'volumes', '.mgr', 'images', 'default.rgw.meta', 'cephfs.cephfs.meta', 'backups', 'default.rgw.log', 'cephfs.cephfs.data', 'default.rgw.control', 'vms']
Jan 20 14:21:52 compute-0 ceph-mgr[74653]: [balancer INFO root] prepared 0/10 changes
Jan 20 14:21:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:21:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:21:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:21:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:21:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:21:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:21:52 compute-0 podman[254824]: 2026-01-20 14:21:52.585412629 +0000 UTC m=+0.082061031 container create a0aa83eb6f51a1ac7b9707fc4903951bd6f102dd870a115ef3c4f81ca7bc5c69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_keldysh, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 20 14:21:52 compute-0 systemd[1]: Started libpod-conmon-a0aa83eb6f51a1ac7b9707fc4903951bd6f102dd870a115ef3c4f81ca7bc5c69.scope.
Jan 20 14:21:52 compute-0 podman[254824]: 2026-01-20 14:21:52.55747169 +0000 UTC m=+0.054120152 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:21:52 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:21:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/799bdba10da643d0d6bafc04a431f29c9e7e5ba21ac5e347535367bc114141f9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:21:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/799bdba10da643d0d6bafc04a431f29c9e7e5ba21ac5e347535367bc114141f9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:21:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/799bdba10da643d0d6bafc04a431f29c9e7e5ba21ac5e347535367bc114141f9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:21:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/799bdba10da643d0d6bafc04a431f29c9e7e5ba21ac5e347535367bc114141f9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:21:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/799bdba10da643d0d6bafc04a431f29c9e7e5ba21ac5e347535367bc114141f9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 14:21:52 compute-0 podman[254824]: 2026-01-20 14:21:52.714068281 +0000 UTC m=+0.210716753 container init a0aa83eb6f51a1ac7b9707fc4903951bd6f102dd870a115ef3c4f81ca7bc5c69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_keldysh, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 20 14:21:52 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:21:52 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:21:52 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:21:52 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:21:52 compute-0 podman[254824]: 2026-01-20 14:21:52.73068436 +0000 UTC m=+0.227332742 container start a0aa83eb6f51a1ac7b9707fc4903951bd6f102dd870a115ef3c4f81ca7bc5c69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_keldysh, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:21:52 compute-0 podman[254824]: 2026-01-20 14:21:52.740243303 +0000 UTC m=+0.236891665 container attach a0aa83eb6f51a1ac7b9707fc4903951bd6f102dd870a115ef3c4f81ca7bc5c69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_keldysh, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 20 14:21:53 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v902: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:21:53 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:21:53 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:21:53 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:21:53.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:21:53 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:21:53 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=404 latency=0.001000025s ======
Jan 20 14:21:53 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:21:53.487 +0000] "GET /info HTTP/1.1" 404 150 - "python-urllib3/1.26.5" - latency=0.001000025s
Jan 20 14:21:53 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:21:53 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:21:53 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - - [20/Jan/2026:14:21:53.505 +0000] "GET /swift/healthcheck HTTP/1.1" 200 0 - "python-urllib3/1.26.5" - latency=0.000000000s
Jan 20 14:21:53 compute-0 brave_keldysh[254840]: --> passed data devices: 0 physical, 1 LVM
Jan 20 14:21:53 compute-0 brave_keldysh[254840]: --> relative data size: 1.0
Jan 20 14:21:53 compute-0 brave_keldysh[254840]: --> All data devices are unavailable
Jan 20 14:21:53 compute-0 systemd[1]: libpod-a0aa83eb6f51a1ac7b9707fc4903951bd6f102dd870a115ef3c4f81ca7bc5c69.scope: Deactivated successfully.
Jan 20 14:21:53 compute-0 podman[254824]: 2026-01-20 14:21:53.557001059 +0000 UTC m=+1.053649421 container died a0aa83eb6f51a1ac7b9707fc4903951bd6f102dd870a115ef3c4f81ca7bc5c69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_keldysh, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:21:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-799bdba10da643d0d6bafc04a431f29c9e7e5ba21ac5e347535367bc114141f9-merged.mount: Deactivated successfully.
Jan 20 14:21:53 compute-0 podman[254824]: 2026-01-20 14:21:53.631430866 +0000 UTC m=+1.128079228 container remove a0aa83eb6f51a1ac7b9707fc4903951bd6f102dd870a115ef3c4f81ca7bc5c69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_keldysh, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 20 14:21:53 compute-0 systemd[1]: libpod-conmon-a0aa83eb6f51a1ac7b9707fc4903951bd6f102dd870a115ef3c4f81ca7bc5c69.scope: Deactivated successfully.
Jan 20 14:21:53 compute-0 sudo[254718]: pam_unix(sudo:session): session closed for user root
Jan 20 14:21:53 compute-0 sudo[254872]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:21:53 compute-0 sudo[254872]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:21:53 compute-0 sudo[254872]: pam_unix(sudo:session): session closed for user root
Jan 20 14:21:53 compute-0 ceph-mon[74360]: pgmap v902: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:21:53 compute-0 sudo[254897]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:21:53 compute-0 sudo[254897]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:21:53 compute-0 sudo[254897]: pam_unix(sudo:session): session closed for user root
Jan 20 14:21:53 compute-0 sudo[254922]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:21:53 compute-0 sudo[254922]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:21:53 compute-0 sudo[254922]: pam_unix(sudo:session): session closed for user root
Jan 20 14:21:53 compute-0 sudo[254947]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- lvm list --format json
Jan 20 14:21:53 compute-0 sudo[254947]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:21:54 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:21:54 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:21:54 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:21:54.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:21:54 compute-0 podman[255013]: 2026-01-20 14:21:54.262560404 +0000 UTC m=+0.037020430 container create ea19013517b8aec2247e2722f2ae0faf1e968fa8fde61c1ff400c6179f2b2b58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_galois, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 20 14:21:54 compute-0 systemd[1]: Started libpod-conmon-ea19013517b8aec2247e2722f2ae0faf1e968fa8fde61c1ff400c6179f2b2b58.scope.
Jan 20 14:21:54 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:21:54 compute-0 podman[255013]: 2026-01-20 14:21:54.325299863 +0000 UTC m=+0.099759909 container init ea19013517b8aec2247e2722f2ae0faf1e968fa8fde61c1ff400c6179f2b2b58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_galois, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:21:54 compute-0 podman[255013]: 2026-01-20 14:21:54.331433275 +0000 UTC m=+0.105893291 container start ea19013517b8aec2247e2722f2ae0faf1e968fa8fde61c1ff400c6179f2b2b58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_galois, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:21:54 compute-0 podman[255013]: 2026-01-20 14:21:54.335121273 +0000 UTC m=+0.109581299 container attach ea19013517b8aec2247e2722f2ae0faf1e968fa8fde61c1ff400c6179f2b2b58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_galois, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:21:54 compute-0 systemd[1]: libpod-ea19013517b8aec2247e2722f2ae0faf1e968fa8fde61c1ff400c6179f2b2b58.scope: Deactivated successfully.
Jan 20 14:21:54 compute-0 admiring_galois[255031]: 167 167
Jan 20 14:21:54 compute-0 conmon[255031]: conmon ea19013517b8aec2247e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ea19013517b8aec2247e2722f2ae0faf1e968fa8fde61c1ff400c6179f2b2b58.scope/container/memory.events
Jan 20 14:21:54 compute-0 podman[255013]: 2026-01-20 14:21:54.338689447 +0000 UTC m=+0.113149473 container died ea19013517b8aec2247e2722f2ae0faf1e968fa8fde61c1ff400c6179f2b2b58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_galois, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:21:54 compute-0 podman[255013]: 2026-01-20 14:21:54.247621979 +0000 UTC m=+0.022082015 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:21:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-47fa6c83e9cb7978d254fde8e897b1a2fccba5225daaea99287cb83e494f3737-merged.mount: Deactivated successfully.
Jan 20 14:21:54 compute-0 podman[255013]: 2026-01-20 14:21:54.375433508 +0000 UTC m=+0.149893534 container remove ea19013517b8aec2247e2722f2ae0faf1e968fa8fde61c1ff400c6179f2b2b58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_galois, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:21:54 compute-0 systemd[1]: libpod-conmon-ea19013517b8aec2247e2722f2ae0faf1e968fa8fde61c1ff400c6179f2b2b58.scope: Deactivated successfully.
Jan 20 14:21:54 compute-0 podman[255056]: 2026-01-20 14:21:54.540309888 +0000 UTC m=+0.039996358 container create 1363947e28c9aa0c44e4bbae25e5f9c6efb21aeae160e9c93b67cde55f5d9e0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_elgamal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default)
Jan 20 14:21:54 compute-0 systemd[1]: Started libpod-conmon-1363947e28c9aa0c44e4bbae25e5f9c6efb21aeae160e9c93b67cde55f5d9e0c.scope.
Jan 20 14:21:54 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:21:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d8872fc30be48afe1492044b3dfee5cdb016326894f78c41b6376586f02036c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:21:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d8872fc30be48afe1492044b3dfee5cdb016326894f78c41b6376586f02036c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:21:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d8872fc30be48afe1492044b3dfee5cdb016326894f78c41b6376586f02036c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:21:54 compute-0 podman[255056]: 2026-01-20 14:21:54.523489053 +0000 UTC m=+0.023175553 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:21:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d8872fc30be48afe1492044b3dfee5cdb016326894f78c41b6376586f02036c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:21:54 compute-0 podman[255056]: 2026-01-20 14:21:54.629755783 +0000 UTC m=+0.129442273 container init 1363947e28c9aa0c44e4bbae25e5f9c6efb21aeae160e9c93b67cde55f5d9e0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_elgamal, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:21:54 compute-0 podman[255056]: 2026-01-20 14:21:54.635541586 +0000 UTC m=+0.135228056 container start 1363947e28c9aa0c44e4bbae25e5f9c6efb21aeae160e9c93b67cde55f5d9e0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_elgamal, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:21:54 compute-0 podman[255056]: 2026-01-20 14:21:54.6394968 +0000 UTC m=+0.139183270 container attach 1363947e28c9aa0c44e4bbae25e5f9c6efb21aeae160e9c93b67cde55f5d9e0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_elgamal, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 20 14:21:55 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v903: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:21:55 compute-0 sudo[255079]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:21:55 compute-0 sudo[255079]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:21:55 compute-0 sudo[255079]: pam_unix(sudo:session): session closed for user root
Jan 20 14:21:55 compute-0 sudo[255104]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:21:55 compute-0 sudo[255104]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:21:55 compute-0 sudo[255104]: pam_unix(sudo:session): session closed for user root
Jan 20 14:21:55 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:21:55 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:21:55 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:21:55.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:21:55 compute-0 vibrant_elgamal[255073]: {
Jan 20 14:21:55 compute-0 vibrant_elgamal[255073]:     "0": [
Jan 20 14:21:55 compute-0 vibrant_elgamal[255073]:         {
Jan 20 14:21:55 compute-0 vibrant_elgamal[255073]:             "devices": [
Jan 20 14:21:55 compute-0 vibrant_elgamal[255073]:                 "/dev/loop3"
Jan 20 14:21:55 compute-0 vibrant_elgamal[255073]:             ],
Jan 20 14:21:55 compute-0 vibrant_elgamal[255073]:             "lv_name": "ceph_lv0",
Jan 20 14:21:55 compute-0 vibrant_elgamal[255073]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:21:55 compute-0 vibrant_elgamal[255073]:             "lv_size": "7511998464",
Jan 20 14:21:55 compute-0 vibrant_elgamal[255073]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e399cf45-e6b6-5393-99f1-75c601d3f188,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afc3740a-ccee-46f8-b343-22648cc89376,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 20 14:21:55 compute-0 vibrant_elgamal[255073]:             "lv_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 14:21:55 compute-0 vibrant_elgamal[255073]:             "name": "ceph_lv0",
Jan 20 14:21:55 compute-0 vibrant_elgamal[255073]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:21:55 compute-0 vibrant_elgamal[255073]:             "tags": {
Jan 20 14:21:55 compute-0 vibrant_elgamal[255073]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:21:55 compute-0 vibrant_elgamal[255073]:                 "ceph.block_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 14:21:55 compute-0 vibrant_elgamal[255073]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 14:21:55 compute-0 vibrant_elgamal[255073]:                 "ceph.cluster_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 14:21:55 compute-0 vibrant_elgamal[255073]:                 "ceph.cluster_name": "ceph",
Jan 20 14:21:55 compute-0 vibrant_elgamal[255073]:                 "ceph.crush_device_class": "",
Jan 20 14:21:55 compute-0 vibrant_elgamal[255073]:                 "ceph.encrypted": "0",
Jan 20 14:21:55 compute-0 vibrant_elgamal[255073]:                 "ceph.osd_fsid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 14:21:55 compute-0 vibrant_elgamal[255073]:                 "ceph.osd_id": "0",
Jan 20 14:21:55 compute-0 vibrant_elgamal[255073]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 14:21:55 compute-0 vibrant_elgamal[255073]:                 "ceph.type": "block",
Jan 20 14:21:55 compute-0 vibrant_elgamal[255073]:                 "ceph.vdo": "0"
Jan 20 14:21:55 compute-0 vibrant_elgamal[255073]:             },
Jan 20 14:21:55 compute-0 vibrant_elgamal[255073]:             "type": "block",
Jan 20 14:21:55 compute-0 vibrant_elgamal[255073]:             "vg_name": "ceph_vg0"
Jan 20 14:21:55 compute-0 vibrant_elgamal[255073]:         }
Jan 20 14:21:55 compute-0 vibrant_elgamal[255073]:     ]
Jan 20 14:21:55 compute-0 vibrant_elgamal[255073]: }
Jan 20 14:21:55 compute-0 systemd[1]: libpod-1363947e28c9aa0c44e4bbae25e5f9c6efb21aeae160e9c93b67cde55f5d9e0c.scope: Deactivated successfully.
Jan 20 14:21:55 compute-0 podman[255056]: 2026-01-20 14:21:55.437926901 +0000 UTC m=+0.937613371 container died 1363947e28c9aa0c44e4bbae25e5f9c6efb21aeae160e9c93b67cde55f5d9e0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_elgamal, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:21:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-6d8872fc30be48afe1492044b3dfee5cdb016326894f78c41b6376586f02036c-merged.mount: Deactivated successfully.
Jan 20 14:21:55 compute-0 podman[255056]: 2026-01-20 14:21:55.491410945 +0000 UTC m=+0.991097425 container remove 1363947e28c9aa0c44e4bbae25e5f9c6efb21aeae160e9c93b67cde55f5d9e0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_elgamal, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 20 14:21:55 compute-0 systemd[1]: libpod-conmon-1363947e28c9aa0c44e4bbae25e5f9c6efb21aeae160e9c93b67cde55f5d9e0c.scope: Deactivated successfully.
Jan 20 14:21:55 compute-0 sudo[254947]: pam_unix(sudo:session): session closed for user root
Jan 20 14:21:55 compute-0 sudo[255145]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:21:55 compute-0 sudo[255145]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:21:55 compute-0 sudo[255145]: pam_unix(sudo:session): session closed for user root
Jan 20 14:21:55 compute-0 sudo[255170]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:21:55 compute-0 sudo[255170]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:21:55 compute-0 sudo[255170]: pam_unix(sudo:session): session closed for user root
Jan 20 14:21:55 compute-0 sudo[255195]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:21:55 compute-0 sudo[255195]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:21:55 compute-0 sudo[255195]: pam_unix(sudo:session): session closed for user root
Jan 20 14:21:55 compute-0 sudo[255220]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- raw list --format json
Jan 20 14:21:55 compute-0 sudo[255220]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:21:56 compute-0 podman[255285]: 2026-01-20 14:21:56.042150068 +0000 UTC m=+0.037000760 container create f0aac6c043406a4df553838b4c9c6f52a13654be23296c7501ddb2300a1daedd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_shockley, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:21:56 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:21:56 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:21:56 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:21:56.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:21:56 compute-0 systemd[1]: Started libpod-conmon-f0aac6c043406a4df553838b4c9c6f52a13654be23296c7501ddb2300a1daedd.scope.
Jan 20 14:21:56 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:21:56 compute-0 podman[255285]: 2026-01-20 14:21:56.027405297 +0000 UTC m=+0.022256019 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:21:56 compute-0 podman[255285]: 2026-01-20 14:21:56.128662565 +0000 UTC m=+0.123513287 container init f0aac6c043406a4df553838b4c9c6f52a13654be23296c7501ddb2300a1daedd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_shockley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:21:56 compute-0 podman[255285]: 2026-01-20 14:21:56.136985445 +0000 UTC m=+0.131836157 container start f0aac6c043406a4df553838b4c9c6f52a13654be23296c7501ddb2300a1daedd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_shockley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Jan 20 14:21:56 compute-0 podman[255285]: 2026-01-20 14:21:56.141113414 +0000 UTC m=+0.135964126 container attach f0aac6c043406a4df553838b4c9c6f52a13654be23296c7501ddb2300a1daedd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_shockley, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:21:56 compute-0 vigorous_shockley[255302]: 167 167
Jan 20 14:21:56 compute-0 systemd[1]: libpod-f0aac6c043406a4df553838b4c9c6f52a13654be23296c7501ddb2300a1daedd.scope: Deactivated successfully.
Jan 20 14:21:56 compute-0 podman[255285]: 2026-01-20 14:21:56.142867111 +0000 UTC m=+0.137717823 container died f0aac6c043406a4df553838b4c9c6f52a13654be23296c7501ddb2300a1daedd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_shockley, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 20 14:21:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-e66a5e1b879b31172b21b52061270dabfba3a5f76d547d89b52de1d04f2ae8a0-merged.mount: Deactivated successfully.
Jan 20 14:21:56 compute-0 podman[255285]: 2026-01-20 14:21:56.179472228 +0000 UTC m=+0.174322910 container remove f0aac6c043406a4df553838b4c9c6f52a13654be23296c7501ddb2300a1daedd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_shockley, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 20 14:21:56 compute-0 ceph-mon[74360]: pgmap v903: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:21:56 compute-0 systemd[1]: libpod-conmon-f0aac6c043406a4df553838b4c9c6f52a13654be23296c7501ddb2300a1daedd.scope: Deactivated successfully.
Jan 20 14:21:56 compute-0 podman[255325]: 2026-01-20 14:21:56.323262831 +0000 UTC m=+0.038108910 container create e6172c4b21012cc0f2e690dd8149d25544a0c99d892a3f93d3fd7f7d57515757 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_wilbur, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 20 14:21:56 compute-0 systemd[1]: Started libpod-conmon-e6172c4b21012cc0f2e690dd8149d25544a0c99d892a3f93d3fd7f7d57515757.scope.
Jan 20 14:21:56 compute-0 podman[255325]: 2026-01-20 14:21:56.306132237 +0000 UTC m=+0.020978296 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:21:56 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:21:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4973166055bf823a4e7891cbd0f9839b48ba39c049abae0e296332f2d5e75bba/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:21:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4973166055bf823a4e7891cbd0f9839b48ba39c049abae0e296332f2d5e75bba/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:21:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4973166055bf823a4e7891cbd0f9839b48ba39c049abae0e296332f2d5e75bba/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:21:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4973166055bf823a4e7891cbd0f9839b48ba39c049abae0e296332f2d5e75bba/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:21:56 compute-0 podman[255325]: 2026-01-20 14:21:56.428098642 +0000 UTC m=+0.142944701 container init e6172c4b21012cc0f2e690dd8149d25544a0c99d892a3f93d3fd7f7d57515757 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_wilbur, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 20 14:21:56 compute-0 podman[255325]: 2026-01-20 14:21:56.445869992 +0000 UTC m=+0.160716041 container start e6172c4b21012cc0f2e690dd8149d25544a0c99d892a3f93d3fd7f7d57515757 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_wilbur, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 20 14:21:56 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:21:56 compute-0 podman[255325]: 2026-01-20 14:21:56.450132685 +0000 UTC m=+0.164978764 container attach e6172c4b21012cc0f2e690dd8149d25544a0c99d892a3f93d3fd7f7d57515757 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_wilbur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:21:56 compute-0 sshd-session[255344]: Connection closed by authenticating user root 157.245.78.139 port 39714 [preauth]
Jan 20 14:21:57 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v904: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:21:57 compute-0 distracted_wilbur[255341]: {
Jan 20 14:21:57 compute-0 distracted_wilbur[255341]:     "afc3740a-ccee-46f8-b343-22648cc89376": {
Jan 20 14:21:57 compute-0 distracted_wilbur[255341]:         "ceph_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 14:21:57 compute-0 distracted_wilbur[255341]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 20 14:21:57 compute-0 distracted_wilbur[255341]:         "osd_id": 0,
Jan 20 14:21:57 compute-0 distracted_wilbur[255341]:         "osd_uuid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 14:21:57 compute-0 distracted_wilbur[255341]:         "type": "bluestore"
Jan 20 14:21:57 compute-0 distracted_wilbur[255341]:     }
Jan 20 14:21:57 compute-0 distracted_wilbur[255341]: }
Jan 20 14:21:57 compute-0 systemd[1]: libpod-e6172c4b21012cc0f2e690dd8149d25544a0c99d892a3f93d3fd7f7d57515757.scope: Deactivated successfully.
Jan 20 14:21:57 compute-0 podman[255365]: 2026-01-20 14:21:57.261200019 +0000 UTC m=+0.021588601 container died e6172c4b21012cc0f2e690dd8149d25544a0c99d892a3f93d3fd7f7d57515757 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_wilbur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3)
Jan 20 14:21:57 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:21:57 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:21:57 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:21:57.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:21:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-4973166055bf823a4e7891cbd0f9839b48ba39c049abae0e296332f2d5e75bba-merged.mount: Deactivated successfully.
Jan 20 14:21:57 compute-0 podman[255365]: 2026-01-20 14:21:57.316229825 +0000 UTC m=+0.076618407 container remove e6172c4b21012cc0f2e690dd8149d25544a0c99d892a3f93d3fd7f7d57515757 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_wilbur, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 20 14:21:57 compute-0 systemd[1]: libpod-conmon-e6172c4b21012cc0f2e690dd8149d25544a0c99d892a3f93d3fd7f7d57515757.scope: Deactivated successfully.
Jan 20 14:21:57 compute-0 sudo[255220]: pam_unix(sudo:session): session closed for user root
Jan 20 14:21:57 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 14:21:57 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:21:57 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 14:21:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 14:21:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:21:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 14:21:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:21:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:21:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:21:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:21:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:21:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:21:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:21:57 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:21:57 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 33beceed-8f54-418e-8a99-3f861558e07b does not exist
Jan 20 14:21:57 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev c64756d8-a84e-4d3c-af45-32160e23adb3 does not exist
Jan 20 14:21:57 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev c04e6f56-035c-4bd0-9561-02f615477647 does not exist
Jan 20 14:21:57 compute-0 sudo[255381]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:21:57 compute-0 sudo[255381]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:21:57 compute-0 sudo[255381]: pam_unix(sudo:session): session closed for user root
Jan 20 14:21:57 compute-0 sudo[255406]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 14:21:57 compute-0 sudo[255406]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:21:57 compute-0 sudo[255406]: pam_unix(sudo:session): session closed for user root
Jan 20 14:21:58 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:21:58 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:21:58 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:21:58.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:21:58 compute-0 ceph-mon[74360]: pgmap v904: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:21:58 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:21:58 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:21:59 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v905: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:21:59 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:21:59 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:21:59 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:21:59.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:21:59 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e128 do_prune osdmap full prune enabled
Jan 20 14:21:59 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e129 e129: 3 total, 3 up, 3 in
Jan 20 14:21:59 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e129: 3 total, 3 up, 3 in
Jan 20 14:22:00 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:22:00 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:22:00 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:22:00.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:22:00 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e129 do_prune osdmap full prune enabled
Jan 20 14:22:00 compute-0 ceph-mon[74360]: pgmap v905: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:22:00 compute-0 ceph-mon[74360]: osdmap e129: 3 total, 3 up, 3 in
Jan 20 14:22:00 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e130 e130: 3 total, 3 up, 3 in
Jan 20 14:22:00 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e130: 3 total, 3 up, 3 in
Jan 20 14:22:01 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v908: 321 pgs: 321 active+clean; 16 MiB data, 153 MiB used, 21 GiB / 21 GiB avail; 2.9 KiB/s rd, 2.0 MiB/s wr, 6 op/s
Jan 20 14:22:01 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:22:01 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:22:01 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:22:01.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:22:01 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e130 do_prune osdmap full prune enabled
Jan 20 14:22:01 compute-0 ceph-mon[74360]: osdmap e130: 3 total, 3 up, 3 in
Jan 20 14:22:01 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e131 e131: 3 total, 3 up, 3 in
Jan 20 14:22:01 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e131: 3 total, 3 up, 3 in
Jan 20 14:22:01 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:22:02 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:22:02 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:22:02 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:22:02.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:22:02 compute-0 ceph-mon[74360]: pgmap v908: 321 pgs: 321 active+clean; 16 MiB data, 153 MiB used, 21 GiB / 21 GiB avail; 2.9 KiB/s rd, 2.0 MiB/s wr, 6 op/s
Jan 20 14:22:02 compute-0 ceph-mon[74360]: osdmap e131: 3 total, 3 up, 3 in
Jan 20 14:22:03 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v910: 321 pgs: 321 active+clean; 16 MiB data, 153 MiB used, 21 GiB / 21 GiB avail; 4.7 KiB/s rd, 2.7 MiB/s wr, 9 op/s
Jan 20 14:22:03 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:22:03 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:22:03 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:22:03.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:22:04 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:22:04 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:22:04 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:22:04.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:22:04 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e131 do_prune osdmap full prune enabled
Jan 20 14:22:04 compute-0 ceph-mon[74360]: pgmap v910: 321 pgs: 321 active+clean; 16 MiB data, 153 MiB used, 21 GiB / 21 GiB avail; 4.7 KiB/s rd, 2.7 MiB/s wr, 9 op/s
Jan 20 14:22:04 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e132 e132: 3 total, 3 up, 3 in
Jan 20 14:22:04 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e132: 3 total, 3 up, 3 in
Jan 20 14:22:05 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v912: 321 pgs: 321 active+clean; 16 MiB data, 153 MiB used, 21 GiB / 21 GiB avail; 34 KiB/s rd, 2.8 MiB/s wr, 48 op/s
Jan 20 14:22:05 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:22:05 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:22:05 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:22:05.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:22:05 compute-0 ceph-mon[74360]: osdmap e132: 3 total, 3 up, 3 in
Jan 20 14:22:05 compute-0 ceph-mon[74360]: pgmap v912: 321 pgs: 321 active+clean; 16 MiB data, 153 MiB used, 21 GiB / 21 GiB avail; 34 KiB/s rd, 2.8 MiB/s wr, 48 op/s
Jan 20 14:22:06 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:22:06 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:22:06 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:22:06.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:22:06 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:22:07 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v913: 321 pgs: 321 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 31 KiB/s rd, 6.1 MiB/s wr, 45 op/s
Jan 20 14:22:07 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:22:07 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:22:07 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:22:07.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:22:07 compute-0 ceph-mon[74360]: pgmap v913: 321 pgs: 321 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 31 KiB/s rd, 6.1 MiB/s wr, 45 op/s
Jan 20 14:22:08 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:22:08 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:22:08 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:22:08.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:22:09 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v914: 321 pgs: 321 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 30 KiB/s rd, 3.1 MiB/s wr, 41 op/s
Jan 20 14:22:09 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:22:09 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:22:09 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:22:09.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:22:10 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:22:10 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:22:10 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:22:10.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:22:10 compute-0 ceph-mon[74360]: pgmap v914: 321 pgs: 321 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 30 KiB/s rd, 3.1 MiB/s wr, 41 op/s
Jan 20 14:22:10 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/4026262527' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:22:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 14:22:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:22:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 20 14:22:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:22:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:22:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:22:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:22:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:22:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:22:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:22:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 20 14:22:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:22:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 20 14:22:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:22:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:22:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:22:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 20 14:22:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:22:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 20 14:22:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:22:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:22:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:22:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 20 14:22:11 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v915: 321 pgs: 321 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 25 KiB/s rd, 2.6 MiB/s wr, 34 op/s
Jan 20 14:22:11 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:22:11 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:22:11 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:22:11.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:22:11 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:22:11 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e132 do_prune osdmap full prune enabled
Jan 20 14:22:12 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e133 e133: 3 total, 3 up, 3 in
Jan 20 14:22:12 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:22:12 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 20 14:22:12 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:22:12.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 20 14:22:12 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e133: 3 total, 3 up, 3 in
Jan 20 14:22:12 compute-0 ceph-mon[74360]: pgmap v915: 321 pgs: 321 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 25 KiB/s rd, 2.6 MiB/s wr, 34 op/s
Jan 20 14:22:13 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v917: 321 pgs: 321 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 8.3 KiB/s rd, 2.9 MiB/s wr, 12 op/s
Jan 20 14:22:13 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:22:13 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:22:13 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:22:13.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:22:13 compute-0 ceph-mon[74360]: osdmap e133: 3 total, 3 up, 3 in
Jan 20 14:22:13 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/2009229127' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:22:13 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/2964033409' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:22:13 compute-0 nova_compute[250018]: 2026-01-20 14:22:13.482 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:22:13 compute-0 nova_compute[250018]: 2026-01-20 14:22:13.483 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:22:13 compute-0 nova_compute[250018]: 2026-01-20 14:22:13.513 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:22:13 compute-0 nova_compute[250018]: 2026-01-20 14:22:13.513 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 14:22:13 compute-0 nova_compute[250018]: 2026-01-20 14:22:13.513 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 14:22:13 compute-0 nova_compute[250018]: 2026-01-20 14:22:13.545 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 20 14:22:13 compute-0 nova_compute[250018]: 2026-01-20 14:22:13.546 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:22:13 compute-0 nova_compute[250018]: 2026-01-20 14:22:13.546 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:22:13 compute-0 nova_compute[250018]: 2026-01-20 14:22:13.546 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:22:13 compute-0 nova_compute[250018]: 2026-01-20 14:22:13.546 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:22:13 compute-0 nova_compute[250018]: 2026-01-20 14:22:13.547 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:22:13 compute-0 nova_compute[250018]: 2026-01-20 14:22:13.547 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:22:13 compute-0 nova_compute[250018]: 2026-01-20 14:22:13.547 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 14:22:13 compute-0 nova_compute[250018]: 2026-01-20 14:22:13.547 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:22:13 compute-0 nova_compute[250018]: 2026-01-20 14:22:13.571 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:22:13 compute-0 nova_compute[250018]: 2026-01-20 14:22:13.571 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:22:13 compute-0 nova_compute[250018]: 2026-01-20 14:22:13.571 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:22:13 compute-0 nova_compute[250018]: 2026-01-20 14:22:13.571 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 14:22:13 compute-0 nova_compute[250018]: 2026-01-20 14:22:13.572 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:22:13 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 20 14:22:13 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4042614107' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:22:13 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 20 14:22:13 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4042614107' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:22:13 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:22:13 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3420770576' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:22:13 compute-0 nova_compute[250018]: 2026-01-20 14:22:13.987 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:22:14 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:22:14 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:22:14 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:22:14.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:22:14 compute-0 nova_compute[250018]: 2026-01-20 14:22:14.123 250022 WARNING nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:22:14 compute-0 nova_compute[250018]: 2026-01-20 14:22:14.125 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5203MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 14:22:14 compute-0 nova_compute[250018]: 2026-01-20 14:22:14.125 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:22:14 compute-0 nova_compute[250018]: 2026-01-20 14:22:14.125 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:22:14 compute-0 nova_compute[250018]: 2026-01-20 14:22:14.210 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 14:22:14 compute-0 nova_compute[250018]: 2026-01-20 14:22:14.210 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 14:22:14 compute-0 nova_compute[250018]: 2026-01-20 14:22:14.250 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:22:14 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:22:14 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3670245392' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:22:14 compute-0 nova_compute[250018]: 2026-01-20 14:22:14.671 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:22:14 compute-0 nova_compute[250018]: 2026-01-20 14:22:14.677 250022 DEBUG nova.compute.provider_tree [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:22:14 compute-0 ceph-mon[74360]: pgmap v917: 321 pgs: 321 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 8.3 KiB/s rd, 2.9 MiB/s wr, 12 op/s
Jan 20 14:22:14 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/4042614107' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:22:14 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/4042614107' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:22:14 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3420770576' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:22:14 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/4064589682' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:22:14 compute-0 nova_compute[250018]: 2026-01-20 14:22:14.702 250022 DEBUG nova.scheduler.client.report [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:22:14 compute-0 nova_compute[250018]: 2026-01-20 14:22:14.704 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 14:22:14 compute-0 nova_compute[250018]: 2026-01-20 14:22:14.705 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.579s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:22:15 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v918: 321 pgs: 321 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 7.0 KiB/s rd, 2.5 MiB/s wr, 10 op/s
Jan 20 14:22:15 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:22:15 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:22:15 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:22:15.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:22:15 compute-0 sudo[255484]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:22:15 compute-0 sudo[255484]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:22:15 compute-0 sudo[255484]: pam_unix(sudo:session): session closed for user root
Jan 20 14:22:15 compute-0 sudo[255509]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:22:15 compute-0 sudo[255509]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:22:15 compute-0 sudo[255509]: pam_unix(sudo:session): session closed for user root
Jan 20 14:22:15 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3670245392' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:22:15 compute-0 ceph-mon[74360]: pgmap v918: 321 pgs: 321 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 7.0 KiB/s rd, 2.5 MiB/s wr, 10 op/s
Jan 20 14:22:16 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:22:16 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 20 14:22:16 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:22:16.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 20 14:22:17 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:22:17 compute-0 ceph-mon[74360]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #45. Immutable memtables: 0.
Jan 20 14:22:17 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:22:17.052296) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 14:22:17 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:856] [default] [JOB 21] Flushing memtable with next log file: 45
Jan 20 14:22:17 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768918937052357, "job": 21, "event": "flush_started", "num_memtables": 1, "num_entries": 1825, "num_deletes": 250, "total_data_size": 3340530, "memory_usage": 3396904, "flush_reason": "Manual Compaction"}
Jan 20 14:22:17 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:885] [default] [JOB 21] Level-0 flush table #46: started
Jan 20 14:22:17 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768918937065968, "cf_name": "default", "job": 21, "event": "table_file_creation", "file_number": 46, "file_size": 1972422, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 19374, "largest_seqno": 21197, "table_properties": {"data_size": 1966168, "index_size": 3265, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 15784, "raw_average_key_size": 20, "raw_value_size": 1952328, "raw_average_value_size": 2552, "num_data_blocks": 147, "num_entries": 765, "num_filter_entries": 765, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768918754, "oldest_key_time": 1768918754, "file_creation_time": 1768918937, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1688fb6f-f397-4aac-a0e8-874ff91d97a7", "db_session_id": "5V3N2TVXYZBCXP55EZHK", "orig_file_number": 46, "seqno_to_time_mapping": "N/A"}}
Jan 20 14:22:17 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 21] Flush lasted 13711 microseconds, and 5458 cpu microseconds.
Jan 20 14:22:17 compute-0 ceph-mon[74360]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 14:22:17 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:22:17.066012) [db/flush_job.cc:967] [default] [JOB 21] Level-0 flush table #46: 1972422 bytes OK
Jan 20 14:22:17 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:22:17.066030) [db/memtable_list.cc:519] [default] Level-0 commit table #46 started
Jan 20 14:22:17 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:22:17.067328) [db/memtable_list.cc:722] [default] Level-0 commit table #46: memtable #1 done
Jan 20 14:22:17 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:22:17.067342) EVENT_LOG_v1 {"time_micros": 1768918937067338, "job": 21, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 14:22:17 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:22:17.067357) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 14:22:17 compute-0 ceph-mon[74360]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 21] Try to delete WAL files size 3333001, prev total WAL file size 3333001, number of live WAL files 2.
Jan 20 14:22:17 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000042.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:22:17 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:22:17.068512) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400353030' seq:72057594037927935, type:22 .. '6D67727374617400373531' seq:0, type:0; will stop at (end)
Jan 20 14:22:17 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 22] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 14:22:17 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 21 Base level 0, inputs: [46(1926KB)], [44(9175KB)]
Jan 20 14:22:17 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768918937068585, "job": 22, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [46], "files_L6": [44], "score": -1, "input_data_size": 11367913, "oldest_snapshot_seqno": -1}
Jan 20 14:22:17 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v919: 321 pgs: 321 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 5.5 KiB/s rd, 818 B/s wr, 7 op/s
Jan 20 14:22:17 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 22] Generated table #47: 4830 keys, 8797344 bytes, temperature: kUnknown
Jan 20 14:22:17 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768918937127566, "cf_name": "default", "job": 22, "event": "table_file_creation", "file_number": 47, "file_size": 8797344, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8764960, "index_size": 19218, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12101, "raw_key_size": 119843, "raw_average_key_size": 24, "raw_value_size": 8677485, "raw_average_value_size": 1796, "num_data_blocks": 799, "num_entries": 4830, "num_filter_entries": 4830, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768917305, "oldest_key_time": 0, "file_creation_time": 1768918937, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1688fb6f-f397-4aac-a0e8-874ff91d97a7", "db_session_id": "5V3N2TVXYZBCXP55EZHK", "orig_file_number": 47, "seqno_to_time_mapping": "N/A"}}
Jan 20 14:22:17 compute-0 ceph-mon[74360]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 14:22:17 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:22:17.127793) [db/compaction/compaction_job.cc:1663] [default] [JOB 22] Compacted 1@0 + 1@6 files to L6 => 8797344 bytes
Jan 20 14:22:17 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:22:17.132301) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 192.5 rd, 149.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.9, 9.0 +0.0 blob) out(8.4 +0.0 blob), read-write-amplify(10.2) write-amplify(4.5) OK, records in: 5266, records dropped: 436 output_compression: NoCompression
Jan 20 14:22:17 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:22:17.132317) EVENT_LOG_v1 {"time_micros": 1768918937132309, "job": 22, "event": "compaction_finished", "compaction_time_micros": 59061, "compaction_time_cpu_micros": 21229, "output_level": 6, "num_output_files": 1, "total_output_size": 8797344, "num_input_records": 5266, "num_output_records": 4830, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 14:22:17 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000046.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:22:17 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768918937132721, "job": 22, "event": "table_file_deletion", "file_number": 46}
Jan 20 14:22:17 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000044.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:22:17 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768918937134177, "job": 22, "event": "table_file_deletion", "file_number": 44}
Jan 20 14:22:17 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:22:17.068327) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:22:17 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:22:17.134287) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:22:17 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:22:17.134294) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:22:17 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:22:17.134300) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:22:17 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:22:17.134302) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:22:17 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:22:17.134304) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:22:17 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:22:17 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:22:17 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:22:17.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:22:18 compute-0 ceph-mon[74360]: pgmap v919: 321 pgs: 321 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 5.5 KiB/s rd, 818 B/s wr, 7 op/s
Jan 20 14:22:18 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:22:18 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:22:18 compute-0 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 20 14:22:18 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:22:18.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:22:18 compute-0 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 20 14:22:19 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v920: 321 pgs: 321 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:22:19 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:22:19 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:22:19 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:22:19.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:22:20 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:22:20 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:22:20 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:22:20.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:22:20 compute-0 ceph-mon[74360]: pgmap v920: 321 pgs: 321 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:22:21 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v921: 321 pgs: 321 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:22:21 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:22:21 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:22:21 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:22:21.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:22:21 compute-0 podman[255539]: 2026-01-20 14:22:21.464492528 +0000 UTC m=+0.054771479 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 14:22:21 compute-0 podman[255538]: 2026-01-20 14:22:21.497316966 +0000 UTC m=+0.087514625 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0)
Jan 20 14:22:22 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:22:22 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:22:22 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:22:22 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:22:22.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:22:22 compute-0 ceph-mon[74360]: pgmap v921: 321 pgs: 321 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:22:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:22:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:22:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:22:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:22:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:22:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:22:23 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v922: 321 pgs: 321 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:22:23 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:22:23.206 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:22:23 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:22:23.207 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 14:22:23 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:22:23 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:22:23 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:22:23.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:22:24 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:22:24 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:22:24 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:22:24.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:22:24 compute-0 ceph-mon[74360]: pgmap v922: 321 pgs: 321 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:22:25 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v923: 321 pgs: 321 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:22:25 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:22:25 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 20 14:22:25 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:22:25.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 20 14:22:26 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:22:26 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:22:26 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:22:26.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:22:26 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:22:26.210 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=367c1a2c-b16a-4828-ab5a-626bb50023b4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:22:26 compute-0 ceph-mon[74360]: pgmap v923: 321 pgs: 321 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:22:27 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:22:27 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v924: 321 pgs: 321 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:22:27 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:22:27 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:22:27 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:22:27.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:22:28 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:22:28 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:22:28 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:22:28.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:22:28 compute-0 ceph-mon[74360]: pgmap v924: 321 pgs: 321 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:22:29 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v925: 321 pgs: 321 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:22:29 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:22:29 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:22:29 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:22:29.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:22:30 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:22:30 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:22:30 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:22:30.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:22:30 compute-0 ceph-mon[74360]: pgmap v925: 321 pgs: 321 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:22:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:22:30.737 160071 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:22:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:22:30.737 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:22:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:22:30.737 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:22:31 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v926: 321 pgs: 321 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:22:31 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:22:31 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:22:31 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:22:31.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:22:31 compute-0 nova_compute[250018]: 2026-01-20 14:22:31.933 250022 DEBUG oslo_concurrency.lockutils [None req-2d39c384-4751-4db6-bfe8-d13a63c5bd4b 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Acquiring lock "458c23a5-1bf5-4160-9265-1db326ecf321" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:22:31 compute-0 nova_compute[250018]: 2026-01-20 14:22:31.934 250022 DEBUG oslo_concurrency.lockutils [None req-2d39c384-4751-4db6-bfe8-d13a63c5bd4b 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Lock "458c23a5-1bf5-4160-9265-1db326ecf321" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:22:32 compute-0 nova_compute[250018]: 2026-01-20 14:22:32.017 250022 DEBUG nova.compute.manager [None req-2d39c384-4751-4db6-bfe8-d13a63c5bd4b 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] [instance: 458c23a5-1bf5-4160-9265-1db326ecf321] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 14:22:32 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:22:32 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:22:32 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:22:32 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:22:32.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:22:32 compute-0 nova_compute[250018]: 2026-01-20 14:22:32.136 250022 DEBUG oslo_concurrency.lockutils [None req-2d39c384-4751-4db6-bfe8-d13a63c5bd4b 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:22:32 compute-0 nova_compute[250018]: 2026-01-20 14:22:32.137 250022 DEBUG oslo_concurrency.lockutils [None req-2d39c384-4751-4db6-bfe8-d13a63c5bd4b 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:22:32 compute-0 nova_compute[250018]: 2026-01-20 14:22:32.144 250022 DEBUG nova.virt.hardware [None req-2d39c384-4751-4db6-bfe8-d13a63c5bd4b 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 14:22:32 compute-0 nova_compute[250018]: 2026-01-20 14:22:32.145 250022 INFO nova.compute.claims [None req-2d39c384-4751-4db6-bfe8-d13a63c5bd4b 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] [instance: 458c23a5-1bf5-4160-9265-1db326ecf321] Claim successful on node compute-0.ctlplane.example.com
Jan 20 14:22:32 compute-0 nova_compute[250018]: 2026-01-20 14:22:32.462 250022 DEBUG oslo_concurrency.processutils [None req-2d39c384-4751-4db6-bfe8-d13a63c5bd4b 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:22:32 compute-0 ceph-mon[74360]: pgmap v926: 321 pgs: 321 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:22:32 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:22:32 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1936891983' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:22:32 compute-0 nova_compute[250018]: 2026-01-20 14:22:32.922 250022 DEBUG oslo_concurrency.processutils [None req-2d39c384-4751-4db6-bfe8-d13a63c5bd4b 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:22:32 compute-0 nova_compute[250018]: 2026-01-20 14:22:32.929 250022 DEBUG nova.compute.provider_tree [None req-2d39c384-4751-4db6-bfe8-d13a63c5bd4b 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:22:33 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v927: 321 pgs: 321 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:22:33 compute-0 nova_compute[250018]: 2026-01-20 14:22:33.223 250022 DEBUG nova.scheduler.client.report [None req-2d39c384-4751-4db6-bfe8-d13a63c5bd4b 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:22:33 compute-0 nova_compute[250018]: 2026-01-20 14:22:33.260 250022 DEBUG oslo_concurrency.lockutils [None req-2d39c384-4751-4db6-bfe8-d13a63c5bd4b 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.123s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:22:33 compute-0 nova_compute[250018]: 2026-01-20 14:22:33.260 250022 DEBUG nova.compute.manager [None req-2d39c384-4751-4db6-bfe8-d13a63c5bd4b 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] [instance: 458c23a5-1bf5-4160-9265-1db326ecf321] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 14:22:33 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:22:33 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:22:33 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:22:33.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:22:33 compute-0 nova_compute[250018]: 2026-01-20 14:22:33.458 250022 DEBUG nova.compute.manager [None req-2d39c384-4751-4db6-bfe8-d13a63c5bd4b 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] [instance: 458c23a5-1bf5-4160-9265-1db326ecf321] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Jan 20 14:22:33 compute-0 nova_compute[250018]: 2026-01-20 14:22:33.517 250022 INFO nova.virt.libvirt.driver [None req-2d39c384-4751-4db6-bfe8-d13a63c5bd4b 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] [instance: 458c23a5-1bf5-4160-9265-1db326ecf321] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 14:22:33 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/1936891983' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:22:33 compute-0 nova_compute[250018]: 2026-01-20 14:22:33.612 250022 DEBUG nova.compute.manager [None req-2d39c384-4751-4db6-bfe8-d13a63c5bd4b 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] [instance: 458c23a5-1bf5-4160-9265-1db326ecf321] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 14:22:33 compute-0 nova_compute[250018]: 2026-01-20 14:22:33.994 250022 DEBUG nova.compute.manager [None req-2d39c384-4751-4db6-bfe8-d13a63c5bd4b 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] [instance: 458c23a5-1bf5-4160-9265-1db326ecf321] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 14:22:33 compute-0 nova_compute[250018]: 2026-01-20 14:22:33.996 250022 DEBUG nova.virt.libvirt.driver [None req-2d39c384-4751-4db6-bfe8-d13a63c5bd4b 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] [instance: 458c23a5-1bf5-4160-9265-1db326ecf321] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 14:22:33 compute-0 nova_compute[250018]: 2026-01-20 14:22:33.997 250022 INFO nova.virt.libvirt.driver [None req-2d39c384-4751-4db6-bfe8-d13a63c5bd4b 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] [instance: 458c23a5-1bf5-4160-9265-1db326ecf321] Creating image(s)
Jan 20 14:22:34 compute-0 nova_compute[250018]: 2026-01-20 14:22:34.032 250022 DEBUG nova.storage.rbd_utils [None req-2d39c384-4751-4db6-bfe8-d13a63c5bd4b 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] rbd image 458c23a5-1bf5-4160-9265-1db326ecf321_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:22:34 compute-0 nova_compute[250018]: 2026-01-20 14:22:34.060 250022 DEBUG nova.storage.rbd_utils [None req-2d39c384-4751-4db6-bfe8-d13a63c5bd4b 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] rbd image 458c23a5-1bf5-4160-9265-1db326ecf321_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:22:34 compute-0 nova_compute[250018]: 2026-01-20 14:22:34.088 250022 DEBUG nova.storage.rbd_utils [None req-2d39c384-4751-4db6-bfe8-d13a63c5bd4b 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] rbd image 458c23a5-1bf5-4160-9265-1db326ecf321_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:22:34 compute-0 nova_compute[250018]: 2026-01-20 14:22:34.091 250022 DEBUG oslo_concurrency.lockutils [None req-2d39c384-4751-4db6-bfe8-d13a63c5bd4b 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Acquiring lock "82d5c1918fd7c974214c7a48c1793a7a82560462" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:22:34 compute-0 nova_compute[250018]: 2026-01-20 14:22:34.092 250022 DEBUG oslo_concurrency.lockutils [None req-2d39c384-4751-4db6-bfe8-d13a63c5bd4b 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:22:34 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:22:34 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:22:34 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:22:34.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:22:34 compute-0 ceph-mon[74360]: pgmap v927: 321 pgs: 321 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:22:35 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v928: 321 pgs: 321 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:22:35 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:22:35 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:22:35 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:22:35.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:22:35 compute-0 sudo[255670]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:22:35 compute-0 sudo[255670]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:22:35 compute-0 sudo[255670]: pam_unix(sudo:session): session closed for user root
Jan 20 14:22:35 compute-0 sudo[255695]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:22:35 compute-0 sudo[255695]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:22:35 compute-0 nova_compute[250018]: 2026-01-20 14:22:35.622 250022 DEBUG nova.virt.libvirt.imagebackend [None req-2d39c384-4751-4db6-bfe8-d13a63c5bd4b 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Image locations are: [{'url': 'rbd://e399cf45-e6b6-5393-99f1-75c601d3f188/images/a32b3e07-16d8-46fd-9a7b-c242c432fcf9/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://e399cf45-e6b6-5393-99f1-75c601d3f188/images/a32b3e07-16d8-46fd-9a7b-c242c432fcf9/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Jan 20 14:22:35 compute-0 sudo[255695]: pam_unix(sudo:session): session closed for user root
Jan 20 14:22:35 compute-0 ceph-mon[74360]: pgmap v928: 321 pgs: 321 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:22:36 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:22:36 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:22:36 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:22:36.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:22:37 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:22:37 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v929: 321 pgs: 321 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 170 B/s rd, 0 op/s
Jan 20 14:22:37 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:22:37 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:22:37 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:22:37.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:22:38 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:22:38 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:22:38 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:22:38.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:22:38 compute-0 ceph-mon[74360]: pgmap v929: 321 pgs: 321 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 170 B/s rd, 0 op/s
Jan 20 14:22:38 compute-0 nova_compute[250018]: 2026-01-20 14:22:38.252 250022 DEBUG oslo_concurrency.processutils [None req-2d39c384-4751-4db6-bfe8-d13a63c5bd4b 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:22:38 compute-0 nova_compute[250018]: 2026-01-20 14:22:38.303 250022 DEBUG oslo_concurrency.processutils [None req-2d39c384-4751-4db6-bfe8-d13a63c5bd4b 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462.part --force-share --output=json" returned: 0 in 0.051s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:22:38 compute-0 nova_compute[250018]: 2026-01-20 14:22:38.304 250022 DEBUG nova.virt.images [None req-2d39c384-4751-4db6-bfe8-d13a63c5bd4b 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] a32b3e07-16d8-46fd-9a7b-c242c432fcf9 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Jan 20 14:22:38 compute-0 nova_compute[250018]: 2026-01-20 14:22:38.305 250022 DEBUG nova.privsep.utils [None req-2d39c384-4751-4db6-bfe8-d13a63c5bd4b 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Jan 20 14:22:38 compute-0 nova_compute[250018]: 2026-01-20 14:22:38.305 250022 DEBUG oslo_concurrency.processutils [None req-2d39c384-4751-4db6-bfe8-d13a63c5bd4b 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462.part /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:22:38 compute-0 nova_compute[250018]: 2026-01-20 14:22:38.511 250022 DEBUG oslo_concurrency.processutils [None req-2d39c384-4751-4db6-bfe8-d13a63c5bd4b 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462.part /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462.converted" returned: 0 in 0.205s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:22:38 compute-0 nova_compute[250018]: 2026-01-20 14:22:38.519 250022 DEBUG oslo_concurrency.processutils [None req-2d39c384-4751-4db6-bfe8-d13a63c5bd4b 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:22:38 compute-0 nova_compute[250018]: 2026-01-20 14:22:38.608 250022 DEBUG oslo_concurrency.processutils [None req-2d39c384-4751-4db6-bfe8-d13a63c5bd4b 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462.converted --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:22:38 compute-0 nova_compute[250018]: 2026-01-20 14:22:38.609 250022 DEBUG oslo_concurrency.lockutils [None req-2d39c384-4751-4db6-bfe8-d13a63c5bd4b 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 4.517s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:22:38 compute-0 nova_compute[250018]: 2026-01-20 14:22:38.642 250022 DEBUG nova.storage.rbd_utils [None req-2d39c384-4751-4db6-bfe8-d13a63c5bd4b 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] rbd image 458c23a5-1bf5-4160-9265-1db326ecf321_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:22:38 compute-0 nova_compute[250018]: 2026-01-20 14:22:38.646 250022 DEBUG oslo_concurrency.processutils [None req-2d39c384-4751-4db6-bfe8-d13a63c5bd4b 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 458c23a5-1bf5-4160-9265-1db326ecf321_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:22:39 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v930: 321 pgs: 321 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 5.0 KiB/s rd, 5 op/s
Jan 20 14:22:39 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e133 do_prune osdmap full prune enabled
Jan 20 14:22:39 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e134 e134: 3 total, 3 up, 3 in
Jan 20 14:22:39 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e134: 3 total, 3 up, 3 in
Jan 20 14:22:39 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:22:39 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:22:39 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:22:39.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:22:40 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:22:40 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:22:40 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:22:40.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:22:40 compute-0 ceph-mon[74360]: pgmap v930: 321 pgs: 321 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 5.0 KiB/s rd, 5 op/s
Jan 20 14:22:40 compute-0 ceph-mon[74360]: osdmap e134: 3 total, 3 up, 3 in
Jan 20 14:22:40 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e134 do_prune osdmap full prune enabled
Jan 20 14:22:40 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e135 e135: 3 total, 3 up, 3 in
Jan 20 14:22:40 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e135: 3 total, 3 up, 3 in
Jan 20 14:22:40 compute-0 nova_compute[250018]: 2026-01-20 14:22:40.480 250022 DEBUG oslo_concurrency.processutils [None req-2d39c384-4751-4db6-bfe8-d13a63c5bd4b 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 458c23a5-1bf5-4160-9265-1db326ecf321_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.834s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:22:40 compute-0 nova_compute[250018]: 2026-01-20 14:22:40.577 250022 DEBUG nova.storage.rbd_utils [None req-2d39c384-4751-4db6-bfe8-d13a63c5bd4b 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] resizing rbd image 458c23a5-1bf5-4160-9265-1db326ecf321_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 20 14:22:40 compute-0 nova_compute[250018]: 2026-01-20 14:22:40.714 250022 DEBUG nova.objects.instance [None req-2d39c384-4751-4db6-bfe8-d13a63c5bd4b 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Lazy-loading 'migration_context' on Instance uuid 458c23a5-1bf5-4160-9265-1db326ecf321 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:22:41 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v933: 321 pgs: 321 active+clean; 57 MiB data, 201 MiB used, 21 GiB / 21 GiB avail; 2.6 MiB/s rd, 901 KiB/s wr, 47 op/s
Jan 20 14:22:41 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:22:41 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:22:41 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:22:41.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:22:41 compute-0 nova_compute[250018]: 2026-01-20 14:22:41.345 250022 DEBUG nova.virt.libvirt.driver [None req-2d39c384-4751-4db6-bfe8-d13a63c5bd4b 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] [instance: 458c23a5-1bf5-4160-9265-1db326ecf321] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 14:22:41 compute-0 nova_compute[250018]: 2026-01-20 14:22:41.346 250022 DEBUG nova.virt.libvirt.driver [None req-2d39c384-4751-4db6-bfe8-d13a63c5bd4b 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] [instance: 458c23a5-1bf5-4160-9265-1db326ecf321] Ensure instance console log exists: /var/lib/nova/instances/458c23a5-1bf5-4160-9265-1db326ecf321/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 14:22:41 compute-0 nova_compute[250018]: 2026-01-20 14:22:41.346 250022 DEBUG oslo_concurrency.lockutils [None req-2d39c384-4751-4db6-bfe8-d13a63c5bd4b 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:22:41 compute-0 nova_compute[250018]: 2026-01-20 14:22:41.347 250022 DEBUG oslo_concurrency.lockutils [None req-2d39c384-4751-4db6-bfe8-d13a63c5bd4b 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:22:41 compute-0 nova_compute[250018]: 2026-01-20 14:22:41.347 250022 DEBUG oslo_concurrency.lockutils [None req-2d39c384-4751-4db6-bfe8-d13a63c5bd4b 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:22:41 compute-0 nova_compute[250018]: 2026-01-20 14:22:41.350 250022 DEBUG nova.virt.libvirt.driver [None req-2d39c384-4751-4db6-bfe8-d13a63c5bd4b 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] [instance: 458c23a5-1bf5-4160-9265-1db326ecf321] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encrypted': False, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'guest_format': None, 'size': 0, 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 14:22:41 compute-0 nova_compute[250018]: 2026-01-20 14:22:41.356 250022 WARNING nova.virt.libvirt.driver [None req-2d39c384-4751-4db6-bfe8-d13a63c5bd4b 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:22:41 compute-0 nova_compute[250018]: 2026-01-20 14:22:41.363 250022 DEBUG nova.virt.libvirt.host [None req-2d39c384-4751-4db6-bfe8-d13a63c5bd4b 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 14:22:41 compute-0 nova_compute[250018]: 2026-01-20 14:22:41.364 250022 DEBUG nova.virt.libvirt.host [None req-2d39c384-4751-4db6-bfe8-d13a63c5bd4b 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 14:22:41 compute-0 nova_compute[250018]: 2026-01-20 14:22:41.368 250022 DEBUG nova.virt.libvirt.host [None req-2d39c384-4751-4db6-bfe8-d13a63c5bd4b 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 14:22:41 compute-0 nova_compute[250018]: 2026-01-20 14:22:41.369 250022 DEBUG nova.virt.libvirt.host [None req-2d39c384-4751-4db6-bfe8-d13a63c5bd4b 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 14:22:41 compute-0 nova_compute[250018]: 2026-01-20 14:22:41.372 250022 DEBUG nova.virt.libvirt.driver [None req-2d39c384-4751-4db6-bfe8-d13a63c5bd4b 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 14:22:41 compute-0 nova_compute[250018]: 2026-01-20 14:22:41.372 250022 DEBUG nova.virt.hardware [None req-2d39c384-4751-4db6-bfe8-d13a63c5bd4b 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 14:22:41 compute-0 nova_compute[250018]: 2026-01-20 14:22:41.373 250022 DEBUG nova.virt.hardware [None req-2d39c384-4751-4db6-bfe8-d13a63c5bd4b 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 14:22:41 compute-0 nova_compute[250018]: 2026-01-20 14:22:41.373 250022 DEBUG nova.virt.hardware [None req-2d39c384-4751-4db6-bfe8-d13a63c5bd4b 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 14:22:41 compute-0 nova_compute[250018]: 2026-01-20 14:22:41.374 250022 DEBUG nova.virt.hardware [None req-2d39c384-4751-4db6-bfe8-d13a63c5bd4b 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 14:22:41 compute-0 nova_compute[250018]: 2026-01-20 14:22:41.374 250022 DEBUG nova.virt.hardware [None req-2d39c384-4751-4db6-bfe8-d13a63c5bd4b 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 14:22:41 compute-0 nova_compute[250018]: 2026-01-20 14:22:41.375 250022 DEBUG nova.virt.hardware [None req-2d39c384-4751-4db6-bfe8-d13a63c5bd4b 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 14:22:41 compute-0 nova_compute[250018]: 2026-01-20 14:22:41.376 250022 DEBUG nova.virt.hardware [None req-2d39c384-4751-4db6-bfe8-d13a63c5bd4b 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 14:22:41 compute-0 nova_compute[250018]: 2026-01-20 14:22:41.377 250022 DEBUG nova.virt.hardware [None req-2d39c384-4751-4db6-bfe8-d13a63c5bd4b 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 14:22:41 compute-0 nova_compute[250018]: 2026-01-20 14:22:41.377 250022 DEBUG nova.virt.hardware [None req-2d39c384-4751-4db6-bfe8-d13a63c5bd4b 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 14:22:41 compute-0 nova_compute[250018]: 2026-01-20 14:22:41.378 250022 DEBUG nova.virt.hardware [None req-2d39c384-4751-4db6-bfe8-d13a63c5bd4b 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 14:22:41 compute-0 nova_compute[250018]: 2026-01-20 14:22:41.379 250022 DEBUG nova.virt.hardware [None req-2d39c384-4751-4db6-bfe8-d13a63c5bd4b 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 14:22:41 compute-0 nova_compute[250018]: 2026-01-20 14:22:41.384 250022 DEBUG nova.privsep.utils [None req-2d39c384-4751-4db6-bfe8-d13a63c5bd4b 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Jan 20 14:22:41 compute-0 nova_compute[250018]: 2026-01-20 14:22:41.385 250022 DEBUG oslo_concurrency.processutils [None req-2d39c384-4751-4db6-bfe8-d13a63c5bd4b 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:22:41 compute-0 ceph-mon[74360]: osdmap e135: 3 total, 3 up, 3 in
Jan 20 14:22:41 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:22:41 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2092988802' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:22:41 compute-0 nova_compute[250018]: 2026-01-20 14:22:41.868 250022 DEBUG oslo_concurrency.processutils [None req-2d39c384-4751-4db6-bfe8-d13a63c5bd4b 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:22:41 compute-0 nova_compute[250018]: 2026-01-20 14:22:41.898 250022 DEBUG nova.storage.rbd_utils [None req-2d39c384-4751-4db6-bfe8-d13a63c5bd4b 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] rbd image 458c23a5-1bf5-4160-9265-1db326ecf321_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:22:41 compute-0 nova_compute[250018]: 2026-01-20 14:22:41.902 250022 DEBUG oslo_concurrency.processutils [None req-2d39c384-4751-4db6-bfe8-d13a63c5bd4b 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:22:42 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:22:42 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:22:42 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:22:42 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:22:42.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:22:42 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:22:42 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3988409306' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:22:42 compute-0 nova_compute[250018]: 2026-01-20 14:22:42.323 250022 DEBUG oslo_concurrency.processutils [None req-2d39c384-4751-4db6-bfe8-d13a63c5bd4b 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:22:42 compute-0 nova_compute[250018]: 2026-01-20 14:22:42.326 250022 DEBUG nova.objects.instance [None req-2d39c384-4751-4db6-bfe8-d13a63c5bd4b 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Lazy-loading 'pci_devices' on Instance uuid 458c23a5-1bf5-4160-9265-1db326ecf321 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:22:42 compute-0 ceph-mon[74360]: pgmap v933: 321 pgs: 321 active+clean; 57 MiB data, 201 MiB used, 21 GiB / 21 GiB avail; 2.6 MiB/s rd, 901 KiB/s wr, 47 op/s
Jan 20 14:22:42 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/2092988802' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:22:42 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3988409306' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:22:42 compute-0 nova_compute[250018]: 2026-01-20 14:22:42.632 250022 DEBUG nova.virt.libvirt.driver [None req-2d39c384-4751-4db6-bfe8-d13a63c5bd4b 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] [instance: 458c23a5-1bf5-4160-9265-1db326ecf321] End _get_guest_xml xml=<domain type="kvm">
Jan 20 14:22:42 compute-0 nova_compute[250018]:   <uuid>458c23a5-1bf5-4160-9265-1db326ecf321</uuid>
Jan 20 14:22:42 compute-0 nova_compute[250018]:   <name>instance-00000001</name>
Jan 20 14:22:42 compute-0 nova_compute[250018]:   <memory>131072</memory>
Jan 20 14:22:42 compute-0 nova_compute[250018]:   <vcpu>1</vcpu>
Jan 20 14:22:42 compute-0 nova_compute[250018]:   <metadata>
Jan 20 14:22:42 compute-0 nova_compute[250018]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 14:22:42 compute-0 nova_compute[250018]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 14:22:42 compute-0 nova_compute[250018]:       <nova:name>tempest-AutoAllocateNetworkTest-server-1028931541</nova:name>
Jan 20 14:22:42 compute-0 nova_compute[250018]:       <nova:creationTime>2026-01-20 14:22:41</nova:creationTime>
Jan 20 14:22:42 compute-0 nova_compute[250018]:       <nova:flavor name="m1.nano">
Jan 20 14:22:42 compute-0 nova_compute[250018]:         <nova:memory>128</nova:memory>
Jan 20 14:22:42 compute-0 nova_compute[250018]:         <nova:disk>1</nova:disk>
Jan 20 14:22:42 compute-0 nova_compute[250018]:         <nova:swap>0</nova:swap>
Jan 20 14:22:42 compute-0 nova_compute[250018]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 14:22:42 compute-0 nova_compute[250018]:         <nova:vcpus>1</nova:vcpus>
Jan 20 14:22:42 compute-0 nova_compute[250018]:       </nova:flavor>
Jan 20 14:22:42 compute-0 nova_compute[250018]:       <nova:owner>
Jan 20 14:22:42 compute-0 nova_compute[250018]:         <nova:user uuid="918f290d4c414b71807eacf0b27ad165">tempest-AutoAllocateNetworkTest-314960358-project-member</nova:user>
Jan 20 14:22:42 compute-0 nova_compute[250018]:         <nova:project uuid="e024eef627014f829fa6e45ffe36c281">tempest-AutoAllocateNetworkTest-314960358</nova:project>
Jan 20 14:22:42 compute-0 nova_compute[250018]:       </nova:owner>
Jan 20 14:22:42 compute-0 nova_compute[250018]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 14:22:42 compute-0 nova_compute[250018]:       <nova:ports/>
Jan 20 14:22:42 compute-0 nova_compute[250018]:     </nova:instance>
Jan 20 14:22:42 compute-0 nova_compute[250018]:   </metadata>
Jan 20 14:22:42 compute-0 nova_compute[250018]:   <sysinfo type="smbios">
Jan 20 14:22:42 compute-0 nova_compute[250018]:     <system>
Jan 20 14:22:42 compute-0 nova_compute[250018]:       <entry name="manufacturer">RDO</entry>
Jan 20 14:22:42 compute-0 nova_compute[250018]:       <entry name="product">OpenStack Compute</entry>
Jan 20 14:22:42 compute-0 nova_compute[250018]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 14:22:42 compute-0 nova_compute[250018]:       <entry name="serial">458c23a5-1bf5-4160-9265-1db326ecf321</entry>
Jan 20 14:22:42 compute-0 nova_compute[250018]:       <entry name="uuid">458c23a5-1bf5-4160-9265-1db326ecf321</entry>
Jan 20 14:22:42 compute-0 nova_compute[250018]:       <entry name="family">Virtual Machine</entry>
Jan 20 14:22:42 compute-0 nova_compute[250018]:     </system>
Jan 20 14:22:42 compute-0 nova_compute[250018]:   </sysinfo>
Jan 20 14:22:42 compute-0 nova_compute[250018]:   <os>
Jan 20 14:22:42 compute-0 nova_compute[250018]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 14:22:42 compute-0 nova_compute[250018]:     <boot dev="hd"/>
Jan 20 14:22:42 compute-0 nova_compute[250018]:     <smbios mode="sysinfo"/>
Jan 20 14:22:42 compute-0 nova_compute[250018]:   </os>
Jan 20 14:22:42 compute-0 nova_compute[250018]:   <features>
Jan 20 14:22:42 compute-0 nova_compute[250018]:     <acpi/>
Jan 20 14:22:42 compute-0 nova_compute[250018]:     <apic/>
Jan 20 14:22:42 compute-0 nova_compute[250018]:     <vmcoreinfo/>
Jan 20 14:22:42 compute-0 nova_compute[250018]:   </features>
Jan 20 14:22:42 compute-0 nova_compute[250018]:   <clock offset="utc">
Jan 20 14:22:42 compute-0 nova_compute[250018]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 14:22:42 compute-0 nova_compute[250018]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 14:22:42 compute-0 nova_compute[250018]:     <timer name="hpet" present="no"/>
Jan 20 14:22:42 compute-0 nova_compute[250018]:   </clock>
Jan 20 14:22:42 compute-0 nova_compute[250018]:   <cpu mode="custom" match="exact">
Jan 20 14:22:42 compute-0 nova_compute[250018]:     <model>Nehalem</model>
Jan 20 14:22:42 compute-0 nova_compute[250018]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 14:22:42 compute-0 nova_compute[250018]:   </cpu>
Jan 20 14:22:42 compute-0 nova_compute[250018]:   <devices>
Jan 20 14:22:42 compute-0 nova_compute[250018]:     <disk type="network" device="disk">
Jan 20 14:22:42 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 14:22:42 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/458c23a5-1bf5-4160-9265-1db326ecf321_disk">
Jan 20 14:22:42 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:22:42 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:22:42 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:22:42 compute-0 nova_compute[250018]:       </source>
Jan 20 14:22:42 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 14:22:42 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:22:42 compute-0 nova_compute[250018]:       </auth>
Jan 20 14:22:42 compute-0 nova_compute[250018]:       <target dev="vda" bus="virtio"/>
Jan 20 14:22:42 compute-0 nova_compute[250018]:     </disk>
Jan 20 14:22:42 compute-0 nova_compute[250018]:     <disk type="network" device="cdrom">
Jan 20 14:22:42 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 14:22:42 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/458c23a5-1bf5-4160-9265-1db326ecf321_disk.config">
Jan 20 14:22:42 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:22:42 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:22:42 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:22:42 compute-0 nova_compute[250018]:       </source>
Jan 20 14:22:42 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 14:22:42 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:22:42 compute-0 nova_compute[250018]:       </auth>
Jan 20 14:22:42 compute-0 nova_compute[250018]:       <target dev="sda" bus="sata"/>
Jan 20 14:22:42 compute-0 nova_compute[250018]:     </disk>
Jan 20 14:22:42 compute-0 nova_compute[250018]:     <serial type="pty">
Jan 20 14:22:42 compute-0 nova_compute[250018]:       <log file="/var/lib/nova/instances/458c23a5-1bf5-4160-9265-1db326ecf321/console.log" append="off"/>
Jan 20 14:22:42 compute-0 nova_compute[250018]:     </serial>
Jan 20 14:22:42 compute-0 nova_compute[250018]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 14:22:42 compute-0 nova_compute[250018]:     <video>
Jan 20 14:22:42 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 14:22:42 compute-0 nova_compute[250018]:     </video>
Jan 20 14:22:42 compute-0 nova_compute[250018]:     <input type="tablet" bus="usb"/>
Jan 20 14:22:42 compute-0 nova_compute[250018]:     <rng model="virtio">
Jan 20 14:22:42 compute-0 nova_compute[250018]:       <backend model="random">/dev/urandom</backend>
Jan 20 14:22:42 compute-0 nova_compute[250018]:     </rng>
Jan 20 14:22:42 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root"/>
Jan 20 14:22:42 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:22:42 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:22:42 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:22:42 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:22:42 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:22:42 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:22:42 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:22:42 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:22:42 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:22:42 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:22:42 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:22:42 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:22:42 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:22:42 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:22:42 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:22:42 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:22:42 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:22:42 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:22:42 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:22:42 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:22:42 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:22:42 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:22:42 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:22:42 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:22:42 compute-0 nova_compute[250018]:     <controller type="usb" index="0"/>
Jan 20 14:22:42 compute-0 nova_compute[250018]:     <memballoon model="virtio">
Jan 20 14:22:42 compute-0 nova_compute[250018]:       <stats period="10"/>
Jan 20 14:22:42 compute-0 nova_compute[250018]:     </memballoon>
Jan 20 14:22:42 compute-0 nova_compute[250018]:   </devices>
Jan 20 14:22:42 compute-0 nova_compute[250018]: </domain>
Jan 20 14:22:42 compute-0 nova_compute[250018]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 14:22:42 compute-0 sshd-session[255908]: Invalid user admin from 157.245.78.139 port 53468
Jan 20 14:22:42 compute-0 nova_compute[250018]: 2026-01-20 14:22:42.727 250022 DEBUG nova.virt.libvirt.driver [None req-2d39c384-4751-4db6-bfe8-d13a63c5bd4b 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:22:42 compute-0 nova_compute[250018]: 2026-01-20 14:22:42.727 250022 DEBUG nova.virt.libvirt.driver [None req-2d39c384-4751-4db6-bfe8-d13a63c5bd4b 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:22:42 compute-0 nova_compute[250018]: 2026-01-20 14:22:42.727 250022 INFO nova.virt.libvirt.driver [None req-2d39c384-4751-4db6-bfe8-d13a63c5bd4b 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] [instance: 458c23a5-1bf5-4160-9265-1db326ecf321] Using config drive
Jan 20 14:22:42 compute-0 sshd-session[255908]: Connection closed by invalid user admin 157.245.78.139 port 53468 [preauth]
Jan 20 14:22:42 compute-0 nova_compute[250018]: 2026-01-20 14:22:42.758 250022 DEBUG nova.storage.rbd_utils [None req-2d39c384-4751-4db6-bfe8-d13a63c5bd4b 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] rbd image 458c23a5-1bf5-4160-9265-1db326ecf321_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:22:43 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v934: 321 pgs: 321 active+clean; 77 MiB data, 208 MiB used, 21 GiB / 21 GiB avail; 2.6 MiB/s rd, 1.8 MiB/s wr, 49 op/s
Jan 20 14:22:43 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:22:43 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:22:43 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:22:43.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:22:43 compute-0 nova_compute[250018]: 2026-01-20 14:22:43.412 250022 INFO nova.virt.libvirt.driver [None req-2d39c384-4751-4db6-bfe8-d13a63c5bd4b 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] [instance: 458c23a5-1bf5-4160-9265-1db326ecf321] Creating config drive at /var/lib/nova/instances/458c23a5-1bf5-4160-9265-1db326ecf321/disk.config
Jan 20 14:22:43 compute-0 nova_compute[250018]: 2026-01-20 14:22:43.421 250022 DEBUG oslo_concurrency.processutils [None req-2d39c384-4751-4db6-bfe8-d13a63c5bd4b 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/458c23a5-1bf5-4160-9265-1db326ecf321/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpnkqyqwzp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:22:43 compute-0 nova_compute[250018]: 2026-01-20 14:22:43.555 250022 DEBUG oslo_concurrency.processutils [None req-2d39c384-4751-4db6-bfe8-d13a63c5bd4b 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/458c23a5-1bf5-4160-9265-1db326ecf321/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpnkqyqwzp" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:22:43 compute-0 nova_compute[250018]: 2026-01-20 14:22:43.598 250022 DEBUG nova.storage.rbd_utils [None req-2d39c384-4751-4db6-bfe8-d13a63c5bd4b 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] rbd image 458c23a5-1bf5-4160-9265-1db326ecf321_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:22:43 compute-0 nova_compute[250018]: 2026-01-20 14:22:43.603 250022 DEBUG oslo_concurrency.processutils [None req-2d39c384-4751-4db6-bfe8-d13a63c5bd4b 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/458c23a5-1bf5-4160-9265-1db326ecf321/disk.config 458c23a5-1bf5-4160-9265-1db326ecf321_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:22:43 compute-0 nova_compute[250018]: 2026-01-20 14:22:43.776 250022 DEBUG oslo_concurrency.processutils [None req-2d39c384-4751-4db6-bfe8-d13a63c5bd4b 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/458c23a5-1bf5-4160-9265-1db326ecf321/disk.config 458c23a5-1bf5-4160-9265-1db326ecf321_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.173s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:22:43 compute-0 nova_compute[250018]: 2026-01-20 14:22:43.777 250022 INFO nova.virt.libvirt.driver [None req-2d39c384-4751-4db6-bfe8-d13a63c5bd4b 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] [instance: 458c23a5-1bf5-4160-9265-1db326ecf321] Deleting local config drive /var/lib/nova/instances/458c23a5-1bf5-4160-9265-1db326ecf321/disk.config because it was imported into RBD.
Jan 20 14:22:43 compute-0 systemd[1]: Starting libvirt secret daemon...
Jan 20 14:22:43 compute-0 systemd[1]: Started libvirt secret daemon.
Jan 20 14:22:43 compute-0 systemd-machined[216401]: New machine qemu-1-instance-00000001.
Jan 20 14:22:43 compute-0 systemd[1]: Started Virtual Machine qemu-1-instance-00000001.
Jan 20 14:22:44 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:22:44 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:22:44 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:22:44.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:22:44 compute-0 nova_compute[250018]: 2026-01-20 14:22:44.415 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768918964.4150417, 458c23a5-1bf5-4160-9265-1db326ecf321 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:22:44 compute-0 nova_compute[250018]: 2026-01-20 14:22:44.416 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 458c23a5-1bf5-4160-9265-1db326ecf321] VM Resumed (Lifecycle Event)
Jan 20 14:22:44 compute-0 nova_compute[250018]: 2026-01-20 14:22:44.419 250022 DEBUG nova.compute.manager [None req-2d39c384-4751-4db6-bfe8-d13a63c5bd4b 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] [instance: 458c23a5-1bf5-4160-9265-1db326ecf321] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 14:22:44 compute-0 nova_compute[250018]: 2026-01-20 14:22:44.420 250022 DEBUG nova.virt.libvirt.driver [None req-2d39c384-4751-4db6-bfe8-d13a63c5bd4b 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] [instance: 458c23a5-1bf5-4160-9265-1db326ecf321] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 14:22:44 compute-0 nova_compute[250018]: 2026-01-20 14:22:44.424 250022 INFO nova.virt.libvirt.driver [-] [instance: 458c23a5-1bf5-4160-9265-1db326ecf321] Instance spawned successfully.
Jan 20 14:22:44 compute-0 nova_compute[250018]: 2026-01-20 14:22:44.424 250022 DEBUG nova.virt.libvirt.driver [None req-2d39c384-4751-4db6-bfe8-d13a63c5bd4b 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] [instance: 458c23a5-1bf5-4160-9265-1db326ecf321] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 14:22:44 compute-0 nova_compute[250018]: 2026-01-20 14:22:44.464 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 458c23a5-1bf5-4160-9265-1db326ecf321] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:22:44 compute-0 nova_compute[250018]: 2026-01-20 14:22:44.469 250022 DEBUG nova.virt.libvirt.driver [None req-2d39c384-4751-4db6-bfe8-d13a63c5bd4b 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] [instance: 458c23a5-1bf5-4160-9265-1db326ecf321] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:22:44 compute-0 nova_compute[250018]: 2026-01-20 14:22:44.469 250022 DEBUG nova.virt.libvirt.driver [None req-2d39c384-4751-4db6-bfe8-d13a63c5bd4b 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] [instance: 458c23a5-1bf5-4160-9265-1db326ecf321] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:22:44 compute-0 nova_compute[250018]: 2026-01-20 14:22:44.470 250022 DEBUG nova.virt.libvirt.driver [None req-2d39c384-4751-4db6-bfe8-d13a63c5bd4b 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] [instance: 458c23a5-1bf5-4160-9265-1db326ecf321] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:22:44 compute-0 nova_compute[250018]: 2026-01-20 14:22:44.470 250022 DEBUG nova.virt.libvirt.driver [None req-2d39c384-4751-4db6-bfe8-d13a63c5bd4b 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] [instance: 458c23a5-1bf5-4160-9265-1db326ecf321] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:22:44 compute-0 nova_compute[250018]: 2026-01-20 14:22:44.471 250022 DEBUG nova.virt.libvirt.driver [None req-2d39c384-4751-4db6-bfe8-d13a63c5bd4b 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] [instance: 458c23a5-1bf5-4160-9265-1db326ecf321] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:22:44 compute-0 nova_compute[250018]: 2026-01-20 14:22:44.471 250022 DEBUG nova.virt.libvirt.driver [None req-2d39c384-4751-4db6-bfe8-d13a63c5bd4b 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] [instance: 458c23a5-1bf5-4160-9265-1db326ecf321] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:22:44 compute-0 nova_compute[250018]: 2026-01-20 14:22:44.475 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 458c23a5-1bf5-4160-9265-1db326ecf321] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:22:44 compute-0 nova_compute[250018]: 2026-01-20 14:22:44.510 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 458c23a5-1bf5-4160-9265-1db326ecf321] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:22:44 compute-0 nova_compute[250018]: 2026-01-20 14:22:44.510 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768918964.419088, 458c23a5-1bf5-4160-9265-1db326ecf321 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:22:44 compute-0 nova_compute[250018]: 2026-01-20 14:22:44.511 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 458c23a5-1bf5-4160-9265-1db326ecf321] VM Started (Lifecycle Event)
Jan 20 14:22:44 compute-0 nova_compute[250018]: 2026-01-20 14:22:44.548 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 458c23a5-1bf5-4160-9265-1db326ecf321] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:22:44 compute-0 nova_compute[250018]: 2026-01-20 14:22:44.551 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 458c23a5-1bf5-4160-9265-1db326ecf321] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:22:44 compute-0 nova_compute[250018]: 2026-01-20 14:22:44.574 250022 INFO nova.compute.manager [None req-2d39c384-4751-4db6-bfe8-d13a63c5bd4b 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] [instance: 458c23a5-1bf5-4160-9265-1db326ecf321] Took 10.58 seconds to spawn the instance on the hypervisor.
Jan 20 14:22:44 compute-0 nova_compute[250018]: 2026-01-20 14:22:44.575 250022 DEBUG nova.compute.manager [None req-2d39c384-4751-4db6-bfe8-d13a63c5bd4b 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] [instance: 458c23a5-1bf5-4160-9265-1db326ecf321] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:22:44 compute-0 nova_compute[250018]: 2026-01-20 14:22:44.576 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 458c23a5-1bf5-4160-9265-1db326ecf321] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:22:44 compute-0 nova_compute[250018]: 2026-01-20 14:22:44.632 250022 INFO nova.compute.manager [None req-2d39c384-4751-4db6-bfe8-d13a63c5bd4b 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] [instance: 458c23a5-1bf5-4160-9265-1db326ecf321] Took 12.53 seconds to build instance.
Jan 20 14:22:44 compute-0 nova_compute[250018]: 2026-01-20 14:22:44.652 250022 DEBUG oslo_concurrency.lockutils [None req-2d39c384-4751-4db6-bfe8-d13a63c5bd4b 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Lock "458c23a5-1bf5-4160-9265-1db326ecf321" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.718s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:22:44 compute-0 ceph-mon[74360]: pgmap v934: 321 pgs: 321 active+clean; 77 MiB data, 208 MiB used, 21 GiB / 21 GiB avail; 2.6 MiB/s rd, 1.8 MiB/s wr, 49 op/s
Jan 20 14:22:45 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v935: 321 pgs: 321 active+clean; 88 MiB data, 215 MiB used, 21 GiB / 21 GiB avail; 2.6 MiB/s rd, 2.7 MiB/s wr, 51 op/s
Jan 20 14:22:45 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:22:45 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:22:45 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:22:45.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:22:45 compute-0 ceph-mon[74360]: pgmap v935: 321 pgs: 321 active+clean; 88 MiB data, 215 MiB used, 21 GiB / 21 GiB avail; 2.6 MiB/s rd, 2.7 MiB/s wr, 51 op/s
Jan 20 14:22:46 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:22:46 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:22:46 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:22:46.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:22:47 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:22:47 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e135 do_prune osdmap full prune enabled
Jan 20 14:22:47 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v936: 321 pgs: 321 active+clean; 88 MiB data, 215 MiB used, 21 GiB / 21 GiB avail; 4.5 MiB/s rd, 2.7 MiB/s wr, 119 op/s
Jan 20 14:22:47 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e136 e136: 3 total, 3 up, 3 in
Jan 20 14:22:47 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e136: 3 total, 3 up, 3 in
Jan 20 14:22:47 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:22:47 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:22:47 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:22:47.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:22:48 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:22:48 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:22:48 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:22:48.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:22:48 compute-0 ceph-mon[74360]: pgmap v936: 321 pgs: 321 active+clean; 88 MiB data, 215 MiB used, 21 GiB / 21 GiB avail; 4.5 MiB/s rd, 2.7 MiB/s wr, 119 op/s
Jan 20 14:22:48 compute-0 ceph-mon[74360]: osdmap e136: 3 total, 3 up, 3 in
Jan 20 14:22:49 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v938: 321 pgs: 321 active+clean; 88 MiB data, 215 MiB used, 21 GiB / 21 GiB avail; 4.9 MiB/s rd, 2.4 MiB/s wr, 136 op/s
Jan 20 14:22:49 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:22:49 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:22:49 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:22:49.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:22:50 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:22:50 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 20 14:22:50 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:22:50.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 20 14:22:50 compute-0 nova_compute[250018]: 2026-01-20 14:22:50.227 250022 DEBUG oslo_concurrency.lockutils [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Acquiring lock "288f4f83-d33c-44ef-bf78-16cacd3f811f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:22:50 compute-0 nova_compute[250018]: 2026-01-20 14:22:50.227 250022 DEBUG oslo_concurrency.lockutils [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Lock "288f4f83-d33c-44ef-bf78-16cacd3f811f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:22:50 compute-0 nova_compute[250018]: 2026-01-20 14:22:50.244 250022 DEBUG nova.compute.manager [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] [instance: 288f4f83-d33c-44ef-bf78-16cacd3f811f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 14:22:50 compute-0 nova_compute[250018]: 2026-01-20 14:22:50.407 250022 DEBUG oslo_concurrency.lockutils [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:22:50 compute-0 nova_compute[250018]: 2026-01-20 14:22:50.407 250022 DEBUG oslo_concurrency.lockutils [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:22:50 compute-0 nova_compute[250018]: 2026-01-20 14:22:50.412 250022 DEBUG nova.virt.hardware [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 14:22:50 compute-0 nova_compute[250018]: 2026-01-20 14:22:50.413 250022 INFO nova.compute.claims [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] [instance: 288f4f83-d33c-44ef-bf78-16cacd3f811f] Claim successful on node compute-0.ctlplane.example.com
Jan 20 14:22:50 compute-0 ceph-mon[74360]: pgmap v938: 321 pgs: 321 active+clean; 88 MiB data, 215 MiB used, 21 GiB / 21 GiB avail; 4.9 MiB/s rd, 2.4 MiB/s wr, 136 op/s
Jan 20 14:22:50 compute-0 nova_compute[250018]: 2026-01-20 14:22:50.580 250022 DEBUG oslo_concurrency.processutils [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:22:50 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:22:50 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/273476056' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:22:51 compute-0 nova_compute[250018]: 2026-01-20 14:22:51.007 250022 DEBUG oslo_concurrency.processutils [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:22:51 compute-0 nova_compute[250018]: 2026-01-20 14:22:51.013 250022 DEBUG nova.compute.provider_tree [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Updating inventory in ProviderTree for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 20, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 20 14:22:51 compute-0 nova_compute[250018]: 2026-01-20 14:22:51.078 250022 ERROR nova.scheduler.client.report [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] [req-9cff0aee-59a9-4b53-b2a1-cfa74afc0f41] Failed to update inventory to [{'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 20, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}}] for resource provider with UUID 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed.  Got 409: {"errors": [{"status": 409, "title": "Conflict", "detail": "There was a conflict when trying to complete your request.\n\n resource provider generation conflict  ", "code": "placement.concurrent_update", "request_id": "req-9cff0aee-59a9-4b53-b2a1-cfa74afc0f41"}]}
Jan 20 14:22:51 compute-0 nova_compute[250018]: 2026-01-20 14:22:51.105 250022 DEBUG nova.scheduler.client.report [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Refreshing inventories for resource provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 20 14:22:51 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v939: 321 pgs: 321 active+clean; 88 MiB data, 215 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.4 MiB/s wr, 91 op/s
Jan 20 14:22:51 compute-0 nova_compute[250018]: 2026-01-20 14:22:51.171 250022 DEBUG nova.scheduler.client.report [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Updating ProviderTree inventory for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 20 14:22:51 compute-0 nova_compute[250018]: 2026-01-20 14:22:51.171 250022 DEBUG nova.compute.provider_tree [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Updating inventory in ProviderTree for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 20 14:22:51 compute-0 nova_compute[250018]: 2026-01-20 14:22:51.201 250022 DEBUG nova.scheduler.client.report [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Refreshing aggregate associations for resource provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 20 14:22:51 compute-0 nova_compute[250018]: 2026-01-20 14:22:51.224 250022 DEBUG nova.scheduler.client.report [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Refreshing trait associations for resource provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed, traits: COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_TRUSTED_CERTS,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NODE,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_RESCUE_BFV,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SSE2,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VOLUME_EXTEND,COMPUTE_ACCELERATORS,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_SATA,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE41,HW_CPU_X86_MMX,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_IDE,COMPUTE_DEVICE_TAGGING _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 20 14:22:51 compute-0 nova_compute[250018]: 2026-01-20 14:22:51.311 250022 DEBUG oslo_concurrency.processutils [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:22:51 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:22:51 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:22:51 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:22:51.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:22:51 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/2527053545' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:22:51 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/273476056' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:22:51 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/3645120924' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:22:51 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:22:51 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1499628460' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:22:51 compute-0 nova_compute[250018]: 2026-01-20 14:22:51.800 250022 DEBUG oslo_concurrency.processutils [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:22:51 compute-0 nova_compute[250018]: 2026-01-20 14:22:51.805 250022 DEBUG nova.compute.provider_tree [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Updating inventory in ProviderTree for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 20, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 20 14:22:52 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:22:52 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 20 14:22:52 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:22:52.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 20 14:22:52 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:22:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Optimize plan auto_2026-01-20_14:22:52
Jan 20 14:22:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 14:22:52 compute-0 ceph-mgr[74653]: [balancer INFO root] do_upmap
Jan 20 14:22:52 compute-0 ceph-mgr[74653]: [balancer INFO root] pools ['.mgr', 'images', 'default.rgw.meta', '.rgw.root', 'default.rgw.control', 'backups', 'vms', 'cephfs.cephfs.data', 'volumes', 'default.rgw.log', 'cephfs.cephfs.meta']
Jan 20 14:22:52 compute-0 ceph-mgr[74653]: [balancer INFO root] prepared 0/10 changes
Jan 20 14:22:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:22:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:22:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:22:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:22:52 compute-0 nova_compute[250018]: 2026-01-20 14:22:52.481 250022 DEBUG nova.scheduler.client.report [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Updated inventory for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed with generation 4 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 20, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Jan 20 14:22:52 compute-0 nova_compute[250018]: 2026-01-20 14:22:52.481 250022 DEBUG nova.compute.provider_tree [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Updating resource provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed generation from 4 to 5 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Jan 20 14:22:52 compute-0 nova_compute[250018]: 2026-01-20 14:22:52.482 250022 DEBUG nova.compute.provider_tree [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Updating inventory in ProviderTree for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 20 14:22:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:22:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:22:52 compute-0 podman[256095]: 2026-01-20 14:22:52.491561709 +0000 UTC m=+0.065635767 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 20 14:22:52 compute-0 podman[256094]: 2026-01-20 14:22:52.586049627 +0000 UTC m=+0.160124405 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251202, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 14:22:52 compute-0 nova_compute[250018]: 2026-01-20 14:22:52.735 250022 DEBUG oslo_concurrency.lockutils [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 2.328s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:22:52 compute-0 nova_compute[250018]: 2026-01-20 14:22:52.736 250022 DEBUG nova.compute.manager [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] [instance: 288f4f83-d33c-44ef-bf78-16cacd3f811f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 14:22:52 compute-0 ceph-mon[74360]: pgmap v939: 321 pgs: 321 active+clean; 88 MiB data, 215 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.4 MiB/s wr, 91 op/s
Jan 20 14:22:52 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/1499628460' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:22:52 compute-0 ceph-mon[74360]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #48. Immutable memtables: 0.
Jan 20 14:22:52 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:22:52.773005) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 14:22:52 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:856] [default] [JOB 23] Flushing memtable with next log file: 48
Jan 20 14:22:52 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768918972773094, "job": 23, "event": "flush_started", "num_memtables": 1, "num_entries": 576, "num_deletes": 251, "total_data_size": 648879, "memory_usage": 658872, "flush_reason": "Manual Compaction"}
Jan 20 14:22:52 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:885] [default] [JOB 23] Level-0 flush table #49: started
Jan 20 14:22:52 compute-0 nova_compute[250018]: 2026-01-20 14:22:52.789 250022 DEBUG nova.compute.manager [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] [instance: 288f4f83-d33c-44ef-bf78-16cacd3f811f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 20 14:22:52 compute-0 nova_compute[250018]: 2026-01-20 14:22:52.790 250022 DEBUG nova.network.neutron [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] [instance: 288f4f83-d33c-44ef-bf78-16cacd3f811f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 14:22:52 compute-0 nova_compute[250018]: 2026-01-20 14:22:52.813 250022 INFO nova.virt.libvirt.driver [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] [instance: 288f4f83-d33c-44ef-bf78-16cacd3f811f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 14:22:52 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768918972826975, "cf_name": "default", "job": 23, "event": "table_file_creation", "file_number": 49, "file_size": 641457, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 21198, "largest_seqno": 21773, "table_properties": {"data_size": 638367, "index_size": 1062, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 7302, "raw_average_key_size": 19, "raw_value_size": 632061, "raw_average_value_size": 1658, "num_data_blocks": 48, "num_entries": 381, "num_filter_entries": 381, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768918938, "oldest_key_time": 1768918938, "file_creation_time": 1768918972, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1688fb6f-f397-4aac-a0e8-874ff91d97a7", "db_session_id": "5V3N2TVXYZBCXP55EZHK", "orig_file_number": 49, "seqno_to_time_mapping": "N/A"}}
Jan 20 14:22:52 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 23] Flush lasted 54001 microseconds, and 2938 cpu microseconds.
Jan 20 14:22:52 compute-0 ceph-mon[74360]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 14:22:52 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:22:52.827015) [db/flush_job.cc:967] [default] [JOB 23] Level-0 flush table #49: 641457 bytes OK
Jan 20 14:22:52 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:22:52.827032) [db/memtable_list.cc:519] [default] Level-0 commit table #49 started
Jan 20 14:22:52 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:22:52.830136) [db/memtable_list.cc:722] [default] Level-0 commit table #49: memtable #1 done
Jan 20 14:22:52 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:22:52.830148) EVENT_LOG_v1 {"time_micros": 1768918972830144, "job": 23, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 14:22:52 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:22:52.830164) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 14:22:52 compute-0 ceph-mon[74360]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 23] Try to delete WAL files size 645748, prev total WAL file size 645748, number of live WAL files 2.
Jan 20 14:22:52 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000045.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:22:52 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:22:52.830647) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031353036' seq:72057594037927935, type:22 .. '7061786F730031373538' seq:0, type:0; will stop at (end)
Jan 20 14:22:52 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 24] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 14:22:52 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 23 Base level 0, inputs: [49(626KB)], [47(8591KB)]
Jan 20 14:22:52 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768918972830694, "job": 24, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [49], "files_L6": [47], "score": -1, "input_data_size": 9438801, "oldest_snapshot_seqno": -1}
Jan 20 14:22:52 compute-0 nova_compute[250018]: 2026-01-20 14:22:52.854 250022 DEBUG nova.compute.manager [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] [instance: 288f4f83-d33c-44ef-bf78-16cacd3f811f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 14:22:52 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 24] Generated table #50: 4697 keys, 7403354 bytes, temperature: kUnknown
Jan 20 14:22:52 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768918972943020, "cf_name": "default", "job": 24, "event": "table_file_creation", "file_number": 50, "file_size": 7403354, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7373032, "index_size": 17476, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11781, "raw_key_size": 117783, "raw_average_key_size": 25, "raw_value_size": 7288953, "raw_average_value_size": 1551, "num_data_blocks": 719, "num_entries": 4697, "num_filter_entries": 4697, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768917305, "oldest_key_time": 0, "file_creation_time": 1768918972, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1688fb6f-f397-4aac-a0e8-874ff91d97a7", "db_session_id": "5V3N2TVXYZBCXP55EZHK", "orig_file_number": 50, "seqno_to_time_mapping": "N/A"}}
Jan 20 14:22:52 compute-0 ceph-mon[74360]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 14:22:52 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:22:52.943239) [db/compaction/compaction_job.cc:1663] [default] [JOB 24] Compacted 1@0 + 1@6 files to L6 => 7403354 bytes
Jan 20 14:22:52 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:22:52.944763) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 84.0 rd, 65.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 8.4 +0.0 blob) out(7.1 +0.0 blob), read-write-amplify(26.3) write-amplify(11.5) OK, records in: 5211, records dropped: 514 output_compression: NoCompression
Jan 20 14:22:52 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:22:52.944779) EVENT_LOG_v1 {"time_micros": 1768918972944771, "job": 24, "event": "compaction_finished", "compaction_time_micros": 112410, "compaction_time_cpu_micros": 15486, "output_level": 6, "num_output_files": 1, "total_output_size": 7403354, "num_input_records": 5211, "num_output_records": 4697, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 14:22:52 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000049.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:22:52 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768918972944970, "job": 24, "event": "table_file_deletion", "file_number": 49}
Jan 20 14:22:52 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000047.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:22:52 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768918972946203, "job": 24, "event": "table_file_deletion", "file_number": 47}
Jan 20 14:22:52 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:22:52.830557) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:22:52 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:22:52.946245) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:22:52 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:22:52.946250) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:22:52 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:22:52.946252) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:22:52 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:22:52.946254) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:22:52 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:22:52.946256) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:22:52 compute-0 nova_compute[250018]: 2026-01-20 14:22:52.970 250022 DEBUG nova.compute.manager [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] [instance: 288f4f83-d33c-44ef-bf78-16cacd3f811f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 14:22:52 compute-0 nova_compute[250018]: 2026-01-20 14:22:52.972 250022 DEBUG nova.virt.libvirt.driver [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] [instance: 288f4f83-d33c-44ef-bf78-16cacd3f811f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 14:22:52 compute-0 nova_compute[250018]: 2026-01-20 14:22:52.972 250022 INFO nova.virt.libvirt.driver [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] [instance: 288f4f83-d33c-44ef-bf78-16cacd3f811f] Creating image(s)
Jan 20 14:22:52 compute-0 nova_compute[250018]: 2026-01-20 14:22:52.998 250022 DEBUG nova.storage.rbd_utils [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] rbd image 288f4f83-d33c-44ef-bf78-16cacd3f811f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:22:53 compute-0 nova_compute[250018]: 2026-01-20 14:22:53.036 250022 DEBUG nova.storage.rbd_utils [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] rbd image 288f4f83-d33c-44ef-bf78-16cacd3f811f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:22:53 compute-0 nova_compute[250018]: 2026-01-20 14:22:53.066 250022 DEBUG nova.storage.rbd_utils [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] rbd image 288f4f83-d33c-44ef-bf78-16cacd3f811f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:22:53 compute-0 nova_compute[250018]: 2026-01-20 14:22:53.069 250022 DEBUG oslo_concurrency.processutils [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:22:53 compute-0 nova_compute[250018]: 2026-01-20 14:22:53.121 250022 DEBUG oslo_concurrency.processutils [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json" returned: 0 in 0.051s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:22:53 compute-0 nova_compute[250018]: 2026-01-20 14:22:53.122 250022 DEBUG oslo_concurrency.lockutils [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Acquiring lock "82d5c1918fd7c974214c7a48c1793a7a82560462" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:22:53 compute-0 nova_compute[250018]: 2026-01-20 14:22:53.123 250022 DEBUG oslo_concurrency.lockutils [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:22:53 compute-0 nova_compute[250018]: 2026-01-20 14:22:53.123 250022 DEBUG oslo_concurrency.lockutils [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:22:53 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v940: 321 pgs: 321 active+clean; 88 MiB data, 215 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 727 KiB/s wr, 89 op/s
Jan 20 14:22:53 compute-0 nova_compute[250018]: 2026-01-20 14:22:53.170 250022 DEBUG nova.storage.rbd_utils [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] rbd image 288f4f83-d33c-44ef-bf78-16cacd3f811f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:22:53 compute-0 nova_compute[250018]: 2026-01-20 14:22:53.173 250022 DEBUG oslo_concurrency.processutils [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 288f4f83-d33c-44ef-bf78-16cacd3f811f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:22:53 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:22:53 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:22:53 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:22:53.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:22:53 compute-0 nova_compute[250018]: 2026-01-20 14:22:53.543 250022 DEBUG nova.network.neutron [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] [instance: 288f4f83-d33c-44ef-bf78-16cacd3f811f] Automatically allocating a network for project e024eef627014f829fa6e45ffe36c281. _auto_allocate_network /usr/lib/python3.9/site-packages/nova/network/neutron.py:2460
Jan 20 14:22:53 compute-0 nova_compute[250018]: 2026-01-20 14:22:53.603 250022 DEBUG oslo_concurrency.processutils [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 288f4f83-d33c-44ef-bf78-16cacd3f811f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:22:53 compute-0 nova_compute[250018]: 2026-01-20 14:22:53.684 250022 DEBUG nova.storage.rbd_utils [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] resizing rbd image 288f4f83-d33c-44ef-bf78-16cacd3f811f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 20 14:22:53 compute-0 ceph-mon[74360]: pgmap v940: 321 pgs: 321 active+clean; 88 MiB data, 215 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 727 KiB/s wr, 89 op/s
Jan 20 14:22:53 compute-0 nova_compute[250018]: 2026-01-20 14:22:53.811 250022 DEBUG nova.objects.instance [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Lazy-loading 'migration_context' on Instance uuid 288f4f83-d33c-44ef-bf78-16cacd3f811f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:22:53 compute-0 nova_compute[250018]: 2026-01-20 14:22:53.833 250022 DEBUG nova.virt.libvirt.driver [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] [instance: 288f4f83-d33c-44ef-bf78-16cacd3f811f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 14:22:53 compute-0 nova_compute[250018]: 2026-01-20 14:22:53.834 250022 DEBUG nova.virt.libvirt.driver [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] [instance: 288f4f83-d33c-44ef-bf78-16cacd3f811f] Ensure instance console log exists: /var/lib/nova/instances/288f4f83-d33c-44ef-bf78-16cacd3f811f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 14:22:53 compute-0 nova_compute[250018]: 2026-01-20 14:22:53.835 250022 DEBUG oslo_concurrency.lockutils [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:22:53 compute-0 nova_compute[250018]: 2026-01-20 14:22:53.835 250022 DEBUG oslo_concurrency.lockutils [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:22:53 compute-0 nova_compute[250018]: 2026-01-20 14:22:53.836 250022 DEBUG oslo_concurrency.lockutils [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:22:54 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:22:54 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:22:54 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:22:54.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:22:55 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v941: 321 pgs: 321 active+clean; 88 MiB data, 215 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 15 KiB/s wr, 101 op/s
Jan 20 14:22:55 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:22:55 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 20 14:22:55 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:22:55.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 20 14:22:55 compute-0 sudo[256306]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:22:55 compute-0 sudo[256306]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:22:55 compute-0 sudo[256306]: pam_unix(sudo:session): session closed for user root
Jan 20 14:22:55 compute-0 sudo[256331]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:22:55 compute-0 sudo[256331]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:22:55 compute-0 sudo[256331]: pam_unix(sudo:session): session closed for user root
Jan 20 14:22:56 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:22:56 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:22:56 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:22:56.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:22:56 compute-0 ceph-mon[74360]: pgmap v941: 321 pgs: 321 active+clean; 88 MiB data, 215 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 15 KiB/s wr, 101 op/s
Jan 20 14:22:57 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v942: 321 pgs: 321 active+clean; 217 MiB data, 266 MiB used, 21 GiB / 21 GiB avail; 5.1 MiB/s rd, 6.6 MiB/s wr, 150 op/s
Jan 20 14:22:57 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:22:57 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:22:57 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:22:57 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:22:57.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:22:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 14:22:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:22:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 14:22:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:22:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:22:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:22:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:22:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:22:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:22:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:22:57 compute-0 sudo[256357]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:22:57 compute-0 sudo[256357]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:22:57 compute-0 sudo[256357]: pam_unix(sudo:session): session closed for user root
Jan 20 14:22:58 compute-0 sudo[256382]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:22:58 compute-0 sudo[256382]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:22:58 compute-0 sudo[256382]: pam_unix(sudo:session): session closed for user root
Jan 20 14:22:58 compute-0 sudo[256407]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:22:58 compute-0 sudo[256407]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:22:58 compute-0 sudo[256407]: pam_unix(sudo:session): session closed for user root
Jan 20 14:22:58 compute-0 sudo[256432]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 14:22:58 compute-0 sudo[256432]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:22:58 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:22:58 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:22:58 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:22:58.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:22:58 compute-0 ceph-mon[74360]: pgmap v942: 321 pgs: 321 active+clean; 217 MiB data, 266 MiB used, 21 GiB / 21 GiB avail; 5.1 MiB/s rd, 6.6 MiB/s wr, 150 op/s
Jan 20 14:22:58 compute-0 sudo[256432]: pam_unix(sudo:session): session closed for user root
Jan 20 14:22:59 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v943: 321 pgs: 321 active+clean; 241 MiB data, 281 MiB used, 21 GiB / 21 GiB avail; 4.3 MiB/s rd, 6.7 MiB/s wr, 153 op/s
Jan 20 14:22:59 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 14:22:59 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:22:59 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 20 14:22:59 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:22:59 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 20 14:22:59 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:22:59 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:22:59 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:22:59.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:22:59 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:22:59 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev f1053d53-c83c-4702-830b-1180ca2f5658 does not exist
Jan 20 14:22:59 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 583024bc-12fa-41ec-84cd-62d062bddfa0 does not exist
Jan 20 14:22:59 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 04aba9bd-e661-4711-b5e1-b1096503265c does not exist
Jan 20 14:22:59 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 20 14:22:59 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:22:59 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 20 14:22:59 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:22:59 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 14:22:59 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:22:59 compute-0 sudo[256492]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:22:59 compute-0 sudo[256492]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:22:59 compute-0 sudo[256492]: pam_unix(sudo:session): session closed for user root
Jan 20 14:22:59 compute-0 sudo[256517]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:22:59 compute-0 sudo[256517]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:22:59 compute-0 sudo[256517]: pam_unix(sudo:session): session closed for user root
Jan 20 14:22:59 compute-0 sudo[256542]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:22:59 compute-0 sudo[256542]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:22:59 compute-0 sudo[256542]: pam_unix(sudo:session): session closed for user root
Jan 20 14:22:59 compute-0 sudo[256567]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 14:22:59 compute-0 sudo[256567]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:22:59 compute-0 ceph-mon[74360]: pgmap v943: 321 pgs: 321 active+clean; 241 MiB data, 281 MiB used, 21 GiB / 21 GiB avail; 4.3 MiB/s rd, 6.7 MiB/s wr, 153 op/s
Jan 20 14:22:59 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:22:59 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:22:59 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:22:59 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:22:59 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:22:59 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:23:00 compute-0 podman[256633]: 2026-01-20 14:23:00.049357481 +0000 UTC m=+0.076594697 container create 0ce1465480f90ccaa5bf95d562dda7701c650bc40252f431432fac49c7da2dff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_johnson, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:23:00 compute-0 podman[256633]: 2026-01-20 14:22:59.998771572 +0000 UTC m=+0.026008808 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:23:00 compute-0 systemd[1]: Started libpod-conmon-0ce1465480f90ccaa5bf95d562dda7701c650bc40252f431432fac49c7da2dff.scope.
Jan 20 14:23:00 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:23:00 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:23:00 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:23:00.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:23:00 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:23:00 compute-0 podman[256633]: 2026-01-20 14:23:00.191644772 +0000 UTC m=+0.218882018 container init 0ce1465480f90ccaa5bf95d562dda7701c650bc40252f431432fac49c7da2dff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_johnson, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:23:00 compute-0 podman[256633]: 2026-01-20 14:23:00.199734956 +0000 UTC m=+0.226972172 container start 0ce1465480f90ccaa5bf95d562dda7701c650bc40252f431432fac49c7da2dff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_johnson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:23:00 compute-0 frosty_johnson[256649]: 167 167
Jan 20 14:23:00 compute-0 systemd[1]: libpod-0ce1465480f90ccaa5bf95d562dda7701c650bc40252f431432fac49c7da2dff.scope: Deactivated successfully.
Jan 20 14:23:00 compute-0 conmon[256649]: conmon 0ce1465480f90ccaa5bf <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0ce1465480f90ccaa5bf95d562dda7701c650bc40252f431432fac49c7da2dff.scope/container/memory.events
Jan 20 14:23:00 compute-0 podman[256633]: 2026-01-20 14:23:00.220765092 +0000 UTC m=+0.248002308 container attach 0ce1465480f90ccaa5bf95d562dda7701c650bc40252f431432fac49c7da2dff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_johnson, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 20 14:23:00 compute-0 podman[256633]: 2026-01-20 14:23:00.221426869 +0000 UTC m=+0.248664095 container died 0ce1465480f90ccaa5bf95d562dda7701c650bc40252f431432fac49c7da2dff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_johnson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 14:23:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-4f1a3ce26e233c89ae70bd8351ba46de4fcbef57c30b49c66e1cdb8ac4e1f3fa-merged.mount: Deactivated successfully.
Jan 20 14:23:00 compute-0 podman[256633]: 2026-01-20 14:23:00.414449943 +0000 UTC m=+0.441687149 container remove 0ce1465480f90ccaa5bf95d562dda7701c650bc40252f431432fac49c7da2dff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_johnson, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:23:00 compute-0 systemd[1]: libpod-conmon-0ce1465480f90ccaa5bf95d562dda7701c650bc40252f431432fac49c7da2dff.scope: Deactivated successfully.
Jan 20 14:23:00 compute-0 podman[256675]: 2026-01-20 14:23:00.613943548 +0000 UTC m=+0.075570298 container create 3d3d486599ffd532664f8a1d978ef9d5b14826d3817b0c16e563593011d1ff1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_mendel, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:23:00 compute-0 podman[256675]: 2026-01-20 14:23:00.579684572 +0000 UTC m=+0.041311352 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:23:00 compute-0 systemd[1]: Started libpod-conmon-3d3d486599ffd532664f8a1d978ef9d5b14826d3817b0c16e563593011d1ff1e.scope.
Jan 20 14:23:00 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:23:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8e3b2d8aca5b5ccf1fffea93414a503df6714ee6e71219e0f7c95e59ce2eec6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:23:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8e3b2d8aca5b5ccf1fffea93414a503df6714ee6e71219e0f7c95e59ce2eec6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:23:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8e3b2d8aca5b5ccf1fffea93414a503df6714ee6e71219e0f7c95e59ce2eec6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:23:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8e3b2d8aca5b5ccf1fffea93414a503df6714ee6e71219e0f7c95e59ce2eec6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:23:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8e3b2d8aca5b5ccf1fffea93414a503df6714ee6e71219e0f7c95e59ce2eec6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 14:23:00 compute-0 podman[256675]: 2026-01-20 14:23:00.727429409 +0000 UTC m=+0.189056189 container init 3d3d486599ffd532664f8a1d978ef9d5b14826d3817b0c16e563593011d1ff1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_mendel, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:23:00 compute-0 podman[256675]: 2026-01-20 14:23:00.73503812 +0000 UTC m=+0.196664880 container start 3d3d486599ffd532664f8a1d978ef9d5b14826d3817b0c16e563593011d1ff1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_mendel, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:23:00 compute-0 podman[256675]: 2026-01-20 14:23:00.752940503 +0000 UTC m=+0.214567283 container attach 3d3d486599ffd532664f8a1d978ef9d5b14826d3817b0c16e563593011d1ff1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_mendel, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 20 14:23:01 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v944: 321 pgs: 321 active+clean; 249 MiB data, 289 MiB used, 21 GiB / 21 GiB avail; 3.7 MiB/s rd, 7.4 MiB/s wr, 145 op/s
Jan 20 14:23:01 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:23:01 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:23:01 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:23:01.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:23:01 compute-0 intelligent_mendel[256692]: --> passed data devices: 0 physical, 1 LVM
Jan 20 14:23:01 compute-0 intelligent_mendel[256692]: --> relative data size: 1.0
Jan 20 14:23:01 compute-0 intelligent_mendel[256692]: --> All data devices are unavailable
Jan 20 14:23:01 compute-0 systemd[1]: libpod-3d3d486599ffd532664f8a1d978ef9d5b14826d3817b0c16e563593011d1ff1e.scope: Deactivated successfully.
Jan 20 14:23:01 compute-0 podman[256675]: 2026-01-20 14:23:01.502129242 +0000 UTC m=+0.963756012 container died 3d3d486599ffd532664f8a1d978ef9d5b14826d3817b0c16e563593011d1ff1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_mendel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True)
Jan 20 14:23:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-a8e3b2d8aca5b5ccf1fffea93414a503df6714ee6e71219e0f7c95e59ce2eec6-merged.mount: Deactivated successfully.
Jan 20 14:23:02 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:23:02 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:23:02 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:23:02.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:23:02 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:23:02 compute-0 podman[256675]: 2026-01-20 14:23:02.384943465 +0000 UTC m=+1.846570255 container remove 3d3d486599ffd532664f8a1d978ef9d5b14826d3817b0c16e563593011d1ff1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_mendel, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 20 14:23:02 compute-0 sudo[256567]: pam_unix(sudo:session): session closed for user root
Jan 20 14:23:02 compute-0 systemd[1]: libpod-conmon-3d3d486599ffd532664f8a1d978ef9d5b14826d3817b0c16e563593011d1ff1e.scope: Deactivated successfully.
Jan 20 14:23:02 compute-0 sudo[256720]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:23:02 compute-0 sudo[256720]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:23:02 compute-0 sudo[256720]: pam_unix(sudo:session): session closed for user root
Jan 20 14:23:02 compute-0 ceph-mon[74360]: pgmap v944: 321 pgs: 321 active+clean; 249 MiB data, 289 MiB used, 21 GiB / 21 GiB avail; 3.7 MiB/s rd, 7.4 MiB/s wr, 145 op/s
Jan 20 14:23:02 compute-0 sudo[256745]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:23:02 compute-0 sudo[256745]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:23:02 compute-0 sudo[256745]: pam_unix(sudo:session): session closed for user root
Jan 20 14:23:02 compute-0 sudo[256770]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:23:02 compute-0 sudo[256770]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:23:02 compute-0 sudo[256770]: pam_unix(sudo:session): session closed for user root
Jan 20 14:23:02 compute-0 sudo[256796]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- lvm list --format json
Jan 20 14:23:02 compute-0 sudo[256796]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:23:03 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v945: 321 pgs: 321 active+clean; 257 MiB data, 322 MiB used, 21 GiB / 21 GiB avail; 3.8 MiB/s rd, 7.4 MiB/s wr, 155 op/s
Jan 20 14:23:03 compute-0 podman[256862]: 2026-01-20 14:23:03.133400284 +0000 UTC m=+0.020930175 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:23:03 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:23:03 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:23:03 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:23:03.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:23:03 compute-0 podman[256862]: 2026-01-20 14:23:03.383880367 +0000 UTC m=+0.271410238 container create e9df24653b672cd759e24873fb9157b489c7e398fef2cb7181df8521ff46cf15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_bell, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:23:03 compute-0 systemd[1]: Started libpod-conmon-e9df24653b672cd759e24873fb9157b489c7e398fef2cb7181df8521ff46cf15.scope.
Jan 20 14:23:03 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:23:03 compute-0 podman[256862]: 2026-01-20 14:23:03.612899583 +0000 UTC m=+0.500429494 container init e9df24653b672cd759e24873fb9157b489c7e398fef2cb7181df8521ff46cf15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_bell, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:23:03 compute-0 podman[256862]: 2026-01-20 14:23:03.618752837 +0000 UTC m=+0.506282708 container start e9df24653b672cd759e24873fb9157b489c7e398fef2cb7181df8521ff46cf15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_bell, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:23:03 compute-0 reverent_bell[256879]: 167 167
Jan 20 14:23:03 compute-0 systemd[1]: libpod-e9df24653b672cd759e24873fb9157b489c7e398fef2cb7181df8521ff46cf15.scope: Deactivated successfully.
Jan 20 14:23:03 compute-0 podman[256862]: 2026-01-20 14:23:03.803597814 +0000 UTC m=+0.691127715 container attach e9df24653b672cd759e24873fb9157b489c7e398fef2cb7181df8521ff46cf15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_bell, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:23:03 compute-0 podman[256862]: 2026-01-20 14:23:03.804135648 +0000 UTC m=+0.691665579 container died e9df24653b672cd759e24873fb9157b489c7e398fef2cb7181df8521ff46cf15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_bell, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 20 14:23:03 compute-0 ceph-mon[74360]: pgmap v945: 321 pgs: 321 active+clean; 257 MiB data, 322 MiB used, 21 GiB / 21 GiB avail; 3.8 MiB/s rd, 7.4 MiB/s wr, 155 op/s
Jan 20 14:23:04 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:23:04 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:23:04 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:23:04.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:23:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-fe61ea1b9c8200080347688f22910c37a625067d32705eeabbdd5c8252cf284b-merged.mount: Deactivated successfully.
Jan 20 14:23:04 compute-0 podman[256862]: 2026-01-20 14:23:04.665695409 +0000 UTC m=+1.553225320 container remove e9df24653b672cd759e24873fb9157b489c7e398fef2cb7181df8521ff46cf15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_bell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 20 14:23:04 compute-0 systemd[1]: libpod-conmon-e9df24653b672cd759e24873fb9157b489c7e398fef2cb7181df8521ff46cf15.scope: Deactivated successfully.
Jan 20 14:23:04 compute-0 podman[256906]: 2026-01-20 14:23:04.804477959 +0000 UTC m=+0.022759034 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:23:04 compute-0 podman[256906]: 2026-01-20 14:23:04.989917721 +0000 UTC m=+0.208198716 container create 4d2774a6fa00b8eb4ae1a8b07d595f57be30509bb0620022be9c02c18ef1f9f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_shannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 14:23:05 compute-0 systemd[1]: Started libpod-conmon-4d2774a6fa00b8eb4ae1a8b07d595f57be30509bb0620022be9c02c18ef1f9f7.scope.
Jan 20 14:23:05 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:23:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3abd967856f76c7e21d0556b1ef2cf9916bddf4d7ea988bc1ef714fade107eb3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:23:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3abd967856f76c7e21d0556b1ef2cf9916bddf4d7ea988bc1ef714fade107eb3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:23:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3abd967856f76c7e21d0556b1ef2cf9916bddf4d7ea988bc1ef714fade107eb3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:23:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3abd967856f76c7e21d0556b1ef2cf9916bddf4d7ea988bc1ef714fade107eb3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:23:05 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v946: 321 pgs: 321 active+clean; 259 MiB data, 322 MiB used, 21 GiB / 21 GiB avail; 3.8 MiB/s rd, 7.4 MiB/s wr, 157 op/s
Jan 20 14:23:05 compute-0 podman[256906]: 2026-01-20 14:23:05.226574228 +0000 UTC m=+0.444855293 container init 4d2774a6fa00b8eb4ae1a8b07d595f57be30509bb0620022be9c02c18ef1f9f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_shannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:23:05 compute-0 podman[256906]: 2026-01-20 14:23:05.23610019 +0000 UTC m=+0.454381165 container start 4d2774a6fa00b8eb4ae1a8b07d595f57be30509bb0620022be9c02c18ef1f9f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_shannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:23:05 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:23:05 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:23:05 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:23:05.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:23:05 compute-0 podman[256906]: 2026-01-20 14:23:05.438715478 +0000 UTC m=+0.656996463 container attach 4d2774a6fa00b8eb4ae1a8b07d595f57be30509bb0620022be9c02c18ef1f9f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_shannon, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 14:23:06 compute-0 kind_shannon[256922]: {
Jan 20 14:23:06 compute-0 kind_shannon[256922]:     "0": [
Jan 20 14:23:06 compute-0 kind_shannon[256922]:         {
Jan 20 14:23:06 compute-0 kind_shannon[256922]:             "devices": [
Jan 20 14:23:06 compute-0 kind_shannon[256922]:                 "/dev/loop3"
Jan 20 14:23:06 compute-0 kind_shannon[256922]:             ],
Jan 20 14:23:06 compute-0 kind_shannon[256922]:             "lv_name": "ceph_lv0",
Jan 20 14:23:06 compute-0 kind_shannon[256922]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:23:06 compute-0 kind_shannon[256922]:             "lv_size": "7511998464",
Jan 20 14:23:06 compute-0 kind_shannon[256922]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e399cf45-e6b6-5393-99f1-75c601d3f188,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afc3740a-ccee-46f8-b343-22648cc89376,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 20 14:23:06 compute-0 kind_shannon[256922]:             "lv_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 14:23:06 compute-0 kind_shannon[256922]:             "name": "ceph_lv0",
Jan 20 14:23:06 compute-0 kind_shannon[256922]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:23:06 compute-0 kind_shannon[256922]:             "tags": {
Jan 20 14:23:06 compute-0 kind_shannon[256922]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:23:06 compute-0 kind_shannon[256922]:                 "ceph.block_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 14:23:06 compute-0 kind_shannon[256922]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 14:23:06 compute-0 kind_shannon[256922]:                 "ceph.cluster_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 14:23:06 compute-0 kind_shannon[256922]:                 "ceph.cluster_name": "ceph",
Jan 20 14:23:06 compute-0 kind_shannon[256922]:                 "ceph.crush_device_class": "",
Jan 20 14:23:06 compute-0 kind_shannon[256922]:                 "ceph.encrypted": "0",
Jan 20 14:23:06 compute-0 kind_shannon[256922]:                 "ceph.osd_fsid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 14:23:06 compute-0 kind_shannon[256922]:                 "ceph.osd_id": "0",
Jan 20 14:23:06 compute-0 kind_shannon[256922]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 14:23:06 compute-0 kind_shannon[256922]:                 "ceph.type": "block",
Jan 20 14:23:06 compute-0 kind_shannon[256922]:                 "ceph.vdo": "0"
Jan 20 14:23:06 compute-0 kind_shannon[256922]:             },
Jan 20 14:23:06 compute-0 kind_shannon[256922]:             "type": "block",
Jan 20 14:23:06 compute-0 kind_shannon[256922]:             "vg_name": "ceph_vg0"
Jan 20 14:23:06 compute-0 kind_shannon[256922]:         }
Jan 20 14:23:06 compute-0 kind_shannon[256922]:     ]
Jan 20 14:23:06 compute-0 kind_shannon[256922]: }
Jan 20 14:23:06 compute-0 systemd[1]: libpod-4d2774a6fa00b8eb4ae1a8b07d595f57be30509bb0620022be9c02c18ef1f9f7.scope: Deactivated successfully.
Jan 20 14:23:06 compute-0 podman[256906]: 2026-01-20 14:23:06.075104364 +0000 UTC m=+1.293385339 container died 4d2774a6fa00b8eb4ae1a8b07d595f57be30509bb0620022be9c02c18ef1f9f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_shannon, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:23:06 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:23:06 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:23:06 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:23:06.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:23:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-3abd967856f76c7e21d0556b1ef2cf9916bddf4d7ea988bc1ef714fade107eb3-merged.mount: Deactivated successfully.
Jan 20 14:23:06 compute-0 ceph-mon[74360]: pgmap v946: 321 pgs: 321 active+clean; 259 MiB data, 322 MiB used, 21 GiB / 21 GiB avail; 3.8 MiB/s rd, 7.4 MiB/s wr, 157 op/s
Jan 20 14:23:07 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v947: 321 pgs: 321 active+clean; 259 MiB data, 322 MiB used, 21 GiB / 21 GiB avail; 3.8 MiB/s rd, 7.5 MiB/s wr, 146 op/s
Jan 20 14:23:07 compute-0 podman[256906]: 2026-01-20 14:23:07.194491462 +0000 UTC m=+2.412772427 container remove 4d2774a6fa00b8eb4ae1a8b07d595f57be30509bb0620022be9c02c18ef1f9f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_shannon, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 20 14:23:07 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:23:07 compute-0 systemd[1]: libpod-conmon-4d2774a6fa00b8eb4ae1a8b07d595f57be30509bb0620022be9c02c18ef1f9f7.scope: Deactivated successfully.
Jan 20 14:23:07 compute-0 sudo[256796]: pam_unix(sudo:session): session closed for user root
Jan 20 14:23:07 compute-0 sudo[256947]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:23:07 compute-0 sudo[256947]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:23:07 compute-0 sudo[256947]: pam_unix(sudo:session): session closed for user root
Jan 20 14:23:07 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:23:07 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:23:07 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:23:07.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:23:07 compute-0 sudo[256972]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:23:07 compute-0 sudo[256972]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:23:07 compute-0 sudo[256972]: pam_unix(sudo:session): session closed for user root
Jan 20 14:23:07 compute-0 sudo[256997]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:23:07 compute-0 sudo[256997]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:23:07 compute-0 sudo[256997]: pam_unix(sudo:session): session closed for user root
Jan 20 14:23:07 compute-0 sudo[257022]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- raw list --format json
Jan 20 14:23:07 compute-0 sudo[257022]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:23:07 compute-0 ceph-mon[74360]: pgmap v947: 321 pgs: 321 active+clean; 259 MiB data, 322 MiB used, 21 GiB / 21 GiB avail; 3.8 MiB/s rd, 7.5 MiB/s wr, 146 op/s
Jan 20 14:23:08 compute-0 podman[257083]: 2026-01-20 14:23:07.915331531 +0000 UTC m=+0.031917665 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:23:08 compute-0 podman[257083]: 2026-01-20 14:23:08.050551706 +0000 UTC m=+0.167137780 container create 9de3dff08367ac5b613609506d09c4c68e92b9548d95dbbac17c0ccf01cd1641 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_albattani, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:23:08 compute-0 nova_compute[250018]: 2026-01-20 14:23:08.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:23:08 compute-0 nova_compute[250018]: 2026-01-20 14:23:08.052 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 20 14:23:08 compute-0 nova_compute[250018]: 2026-01-20 14:23:08.137 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 20 14:23:08 compute-0 nova_compute[250018]: 2026-01-20 14:23:08.138 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:23:08 compute-0 nova_compute[250018]: 2026-01-20 14:23:08.138 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 20 14:23:08 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:23:08 compute-0 nova_compute[250018]: 2026-01-20 14:23:08.161 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:23:08 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:23:08 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:23:08.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:23:08 compute-0 systemd[1]: Started libpod-conmon-9de3dff08367ac5b613609506d09c4c68e92b9548d95dbbac17c0ccf01cd1641.scope.
Jan 20 14:23:08 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:23:09 compute-0 podman[257083]: 2026-01-20 14:23:09.047630569 +0000 UTC m=+1.164216663 container init 9de3dff08367ac5b613609506d09c4c68e92b9548d95dbbac17c0ccf01cd1641 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_albattani, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 14:23:09 compute-0 podman[257083]: 2026-01-20 14:23:09.05635596 +0000 UTC m=+1.172942014 container start 9de3dff08367ac5b613609506d09c4c68e92b9548d95dbbac17c0ccf01cd1641 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_albattani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:23:09 compute-0 distracted_albattani[257101]: 167 167
Jan 20 14:23:09 compute-0 systemd[1]: libpod-9de3dff08367ac5b613609506d09c4c68e92b9548d95dbbac17c0ccf01cd1641.scope: Deactivated successfully.
Jan 20 14:23:09 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v948: 321 pgs: 321 active+clean; 259 MiB data, 322 MiB used, 21 GiB / 21 GiB avail; 182 KiB/s rd, 2.0 MiB/s wr, 54 op/s
Jan 20 14:23:09 compute-0 podman[257083]: 2026-01-20 14:23:09.185505175 +0000 UTC m=+1.302091259 container attach 9de3dff08367ac5b613609506d09c4c68e92b9548d95dbbac17c0ccf01cd1641 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_albattani, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:23:09 compute-0 podman[257083]: 2026-01-20 14:23:09.187960289 +0000 UTC m=+1.304546373 container died 9de3dff08367ac5b613609506d09c4c68e92b9548d95dbbac17c0ccf01cd1641 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_albattani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 14:23:09 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:23:09 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:23:09 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:23:09.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:23:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-099b2cdd649733834cb6844f909b4eaf6a2d0ad6feed33c1b610f66096a1de37-merged.mount: Deactivated successfully.
Jan 20 14:23:09 compute-0 podman[257083]: 2026-01-20 14:23:09.978721128 +0000 UTC m=+2.095307182 container remove 9de3dff08367ac5b613609506d09c4c68e92b9548d95dbbac17c0ccf01cd1641 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_albattani, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0)
Jan 20 14:23:09 compute-0 systemd[1]: libpod-conmon-9de3dff08367ac5b613609506d09c4c68e92b9548d95dbbac17c0ccf01cd1641.scope: Deactivated successfully.
Jan 20 14:23:10 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:23:10 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:23:10 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:23:10.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:23:10 compute-0 nova_compute[250018]: 2026-01-20 14:23:10.203 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:23:10 compute-0 nova_compute[250018]: 2026-01-20 14:23:10.204 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:23:10 compute-0 nova_compute[250018]: 2026-01-20 14:23:10.204 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:23:10 compute-0 podman[257127]: 2026-01-20 14:23:10.199646599 +0000 UTC m=+0.039686150 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:23:10 compute-0 podman[257127]: 2026-01-20 14:23:10.589835477 +0000 UTC m=+0.429875008 container create 7f197913703203e051cf716038223e4821472e200845bd3cee406d49b47424a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_wiles, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 20 14:23:10 compute-0 ceph-mon[74360]: pgmap v948: 321 pgs: 321 active+clean; 259 MiB data, 322 MiB used, 21 GiB / 21 GiB avail; 182 KiB/s rd, 2.0 MiB/s wr, 54 op/s
Jan 20 14:23:10 compute-0 systemd[1]: Started libpod-conmon-7f197913703203e051cf716038223e4821472e200845bd3cee406d49b47424a6.scope.
Jan 20 14:23:10 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:23:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb9a95345b92666e3428fc07c1d08bf56d4283558ed28da4522026d876649d0c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:23:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb9a95345b92666e3428fc07c1d08bf56d4283558ed28da4522026d876649d0c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:23:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb9a95345b92666e3428fc07c1d08bf56d4283558ed28da4522026d876649d0c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:23:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb9a95345b92666e3428fc07c1d08bf56d4283558ed28da4522026d876649d0c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:23:10 compute-0 podman[257127]: 2026-01-20 14:23:10.938087094 +0000 UTC m=+0.778126665 container init 7f197913703203e051cf716038223e4821472e200845bd3cee406d49b47424a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_wiles, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:23:10 compute-0 podman[257127]: 2026-01-20 14:23:10.945825219 +0000 UTC m=+0.785864780 container start 7f197913703203e051cf716038223e4821472e200845bd3cee406d49b47424a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_wiles, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 20 14:23:10 compute-0 podman[257127]: 2026-01-20 14:23:10.996356675 +0000 UTC m=+0.836396256 container attach 7f197913703203e051cf716038223e4821472e200845bd3cee406d49b47424a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_wiles, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:23:11 compute-0 nova_compute[250018]: 2026-01-20 14:23:11.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:23:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 14:23:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:23:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 20 14:23:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:23:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.005132905557882003 of space, bias 1.0, pg target 1.539871667364601 quantized to 32 (current 32)
Jan 20 14:23:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:23:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:23:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:23:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:23:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:23:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Jan 20 14:23:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:23:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 32)
Jan 20 14:23:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:23:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:23:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:23:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021737739624046157 quantized to 32 (current 32)
Jan 20 14:23:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:23:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Jan 20 14:23:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:23:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:23:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:23:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Jan 20 14:23:11 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v949: 321 pgs: 321 active+clean; 259 MiB data, 322 MiB used, 21 GiB / 21 GiB avail; 77 KiB/s rd, 773 KiB/s wr, 26 op/s
Jan 20 14:23:11 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:23:11 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:23:11 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:23:11.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:23:11 compute-0 unruffled_wiles[257145]: {
Jan 20 14:23:11 compute-0 unruffled_wiles[257145]:     "afc3740a-ccee-46f8-b343-22648cc89376": {
Jan 20 14:23:11 compute-0 unruffled_wiles[257145]:         "ceph_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 14:23:11 compute-0 unruffled_wiles[257145]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 20 14:23:11 compute-0 unruffled_wiles[257145]:         "osd_id": 0,
Jan 20 14:23:11 compute-0 unruffled_wiles[257145]:         "osd_uuid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 14:23:11 compute-0 unruffled_wiles[257145]:         "type": "bluestore"
Jan 20 14:23:11 compute-0 unruffled_wiles[257145]:     }
Jan 20 14:23:11 compute-0 unruffled_wiles[257145]: }
Jan 20 14:23:11 compute-0 systemd[1]: libpod-7f197913703203e051cf716038223e4821472e200845bd3cee406d49b47424a6.scope: Deactivated successfully.
Jan 20 14:23:11 compute-0 podman[257127]: 2026-01-20 14:23:11.896473455 +0000 UTC m=+1.736513026 container died 7f197913703203e051cf716038223e4821472e200845bd3cee406d49b47424a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_wiles, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:23:12 compute-0 nova_compute[250018]: 2026-01-20 14:23:12.045 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:23:12 compute-0 nova_compute[250018]: 2026-01-20 14:23:12.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:23:12 compute-0 nova_compute[250018]: 2026-01-20 14:23:12.050 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 14:23:12 compute-0 nova_compute[250018]: 2026-01-20 14:23:12.051 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 14:23:12 compute-0 nova_compute[250018]: 2026-01-20 14:23:12.126 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] [instance: 288f4f83-d33c-44ef-bf78-16cacd3f811f] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 20 14:23:12 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:23:12 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:23:12 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:23:12.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:23:12 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:23:12 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/781625582' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:23:12 compute-0 ceph-mon[74360]: pgmap v949: 321 pgs: 321 active+clean; 259 MiB data, 322 MiB used, 21 GiB / 21 GiB avail; 77 KiB/s rd, 773 KiB/s wr, 26 op/s
Jan 20 14:23:12 compute-0 nova_compute[250018]: 2026-01-20 14:23:12.351 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "refresh_cache-458c23a5-1bf5-4160-9265-1db326ecf321" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:23:12 compute-0 nova_compute[250018]: 2026-01-20 14:23:12.352 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquired lock "refresh_cache-458c23a5-1bf5-4160-9265-1db326ecf321" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:23:12 compute-0 nova_compute[250018]: 2026-01-20 14:23:12.352 250022 DEBUG nova.network.neutron [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] [instance: 458c23a5-1bf5-4160-9265-1db326ecf321] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 20 14:23:12 compute-0 nova_compute[250018]: 2026-01-20 14:23:12.353 250022 DEBUG nova.objects.instance [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 458c23a5-1bf5-4160-9265-1db326ecf321 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:23:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-bb9a95345b92666e3428fc07c1d08bf56d4283558ed28da4522026d876649d0c-merged.mount: Deactivated successfully.
Jan 20 14:23:12 compute-0 podman[257127]: 2026-01-20 14:23:12.683772371 +0000 UTC m=+2.523811912 container remove 7f197913703203e051cf716038223e4821472e200845bd3cee406d49b47424a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_wiles, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 20 14:23:12 compute-0 systemd[1]: libpod-conmon-7f197913703203e051cf716038223e4821472e200845bd3cee406d49b47424a6.scope: Deactivated successfully.
Jan 20 14:23:12 compute-0 sudo[257022]: pam_unix(sudo:session): session closed for user root
Jan 20 14:23:12 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 14:23:12 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:23:12 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 14:23:12 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:23:12 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 64a6a8a0-2da6-415d-8223-50264fb8521c does not exist
Jan 20 14:23:12 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 4d7d68ba-1aad-4b5f-b37e-218eea5a7518 does not exist
Jan 20 14:23:12 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 53a97ca9-7d9c-4e58-9364-464a76569a61 does not exist
Jan 20 14:23:13 compute-0 sudo[257180]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:23:13 compute-0 sudo[257180]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:23:13 compute-0 sudo[257180]: pam_unix(sudo:session): session closed for user root
Jan 20 14:23:13 compute-0 sudo[257205]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 14:23:13 compute-0 sudo[257205]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:23:13 compute-0 sudo[257205]: pam_unix(sudo:session): session closed for user root
Jan 20 14:23:13 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v950: 321 pgs: 321 active+clean; 259 MiB data, 322 MiB used, 21 GiB / 21 GiB avail; 66 KiB/s rd, 71 KiB/s wr, 12 op/s
Jan 20 14:23:13 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:23:13 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:23:13 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:23:13.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:23:13 compute-0 nova_compute[250018]: 2026-01-20 14:23:13.390 250022 DEBUG nova.network.neutron [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] [instance: 458c23a5-1bf5-4160-9265-1db326ecf321] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 14:23:13 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/3176499354' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:23:13 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:23:13 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:23:14 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:23:14 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:23:14 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:23:14.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:23:14 compute-0 nova_compute[250018]: 2026-01-20 14:23:14.216 250022 DEBUG nova.network.neutron [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] [instance: 458c23a5-1bf5-4160-9265-1db326ecf321] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:23:14 compute-0 nova_compute[250018]: 2026-01-20 14:23:14.234 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Releasing lock "refresh_cache-458c23a5-1bf5-4160-9265-1db326ecf321" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:23:14 compute-0 nova_compute[250018]: 2026-01-20 14:23:14.235 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] [instance: 458c23a5-1bf5-4160-9265-1db326ecf321] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 20 14:23:14 compute-0 nova_compute[250018]: 2026-01-20 14:23:14.235 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:23:14 compute-0 nova_compute[250018]: 2026-01-20 14:23:14.236 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:23:14 compute-0 nova_compute[250018]: 2026-01-20 14:23:14.236 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 14:23:14 compute-0 nova_compute[250018]: 2026-01-20 14:23:14.236 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:23:14 compute-0 nova_compute[250018]: 2026-01-20 14:23:14.263 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:23:14 compute-0 nova_compute[250018]: 2026-01-20 14:23:14.264 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:23:14 compute-0 nova_compute[250018]: 2026-01-20 14:23:14.264 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:23:14 compute-0 nova_compute[250018]: 2026-01-20 14:23:14.265 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 14:23:14 compute-0 nova_compute[250018]: 2026-01-20 14:23:14.265 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:23:14 compute-0 ceph-mon[74360]: pgmap v950: 321 pgs: 321 active+clean; 259 MiB data, 322 MiB used, 21 GiB / 21 GiB avail; 66 KiB/s rd, 71 KiB/s wr, 12 op/s
Jan 20 14:23:14 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/1058192503' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:23:14 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/1058192503' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:23:14 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/2885088964' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:23:14 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:23:14 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3114115571' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:23:14 compute-0 nova_compute[250018]: 2026-01-20 14:23:14.837 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.571s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:23:14 compute-0 nova_compute[250018]: 2026-01-20 14:23:14.945 250022 DEBUG nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:23:14 compute-0 nova_compute[250018]: 2026-01-20 14:23:14.945 250022 DEBUG nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:23:15 compute-0 nova_compute[250018]: 2026-01-20 14:23:15.130 250022 WARNING nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:23:15 compute-0 nova_compute[250018]: 2026-01-20 14:23:15.132 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4948MB free_disk=20.880550384521484GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 14:23:15 compute-0 nova_compute[250018]: 2026-01-20 14:23:15.132 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:23:15 compute-0 nova_compute[250018]: 2026-01-20 14:23:15.133 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:23:15 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v951: 321 pgs: 321 active+clean; 259 MiB data, 322 MiB used, 21 GiB / 21 GiB avail; 6.3 KiB/s rd, 21 KiB/s wr, 2 op/s
Jan 20 14:23:15 compute-0 nova_compute[250018]: 2026-01-20 14:23:15.311 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Instance 458c23a5-1bf5-4160-9265-1db326ecf321 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 14:23:15 compute-0 nova_compute[250018]: 2026-01-20 14:23:15.312 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Instance 288f4f83-d33c-44ef-bf78-16cacd3f811f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 14:23:15 compute-0 nova_compute[250018]: 2026-01-20 14:23:15.312 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 14:23:15 compute-0 nova_compute[250018]: 2026-01-20 14:23:15.313 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 14:23:15 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:23:15 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 20 14:23:15 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:23:15.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 20 14:23:15 compute-0 nova_compute[250018]: 2026-01-20 14:23:15.422 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:23:15 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:23:15 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3453826868' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:23:15 compute-0 nova_compute[250018]: 2026-01-20 14:23:15.894 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:23:15 compute-0 sudo[257274]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:23:15 compute-0 sudo[257274]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:23:15 compute-0 nova_compute[250018]: 2026-01-20 14:23:15.904 250022 DEBUG nova.compute.provider_tree [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:23:15 compute-0 sudo[257274]: pam_unix(sudo:session): session closed for user root
Jan 20 14:23:15 compute-0 nova_compute[250018]: 2026-01-20 14:23:15.934 250022 DEBUG nova.scheduler.client.report [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:23:15 compute-0 nova_compute[250018]: 2026-01-20 14:23:15.957 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 14:23:15 compute-0 nova_compute[250018]: 2026-01-20 14:23:15.958 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.825s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:23:15 compute-0 sudo[257301]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:23:15 compute-0 sudo[257301]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:23:16 compute-0 sudo[257301]: pam_unix(sudo:session): session closed for user root
Jan 20 14:23:16 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3114115571' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:23:16 compute-0 ceph-mon[74360]: pgmap v951: 321 pgs: 321 active+clean; 259 MiB data, 322 MiB used, 21 GiB / 21 GiB avail; 6.3 KiB/s rd, 21 KiB/s wr, 2 op/s
Jan 20 14:23:16 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/3120421228' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:23:16 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:23:16 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:23:16 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:23:16.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:23:17 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v952: 321 pgs: 321 active+clean; 259 MiB data, 322 MiB used, 21 GiB / 21 GiB avail; 11 KiB/s wr, 0 op/s
Jan 20 14:23:17 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:23:17 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3453826868' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:23:17 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:23:17 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:23:17 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:23:17.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:23:18 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:23:18 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 20 14:23:18 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:23:18.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 20 14:23:18 compute-0 ceph-mon[74360]: pgmap v952: 321 pgs: 321 active+clean; 259 MiB data, 322 MiB used, 21 GiB / 21 GiB avail; 11 KiB/s wr, 0 op/s
Jan 20 14:23:19 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v953: 321 pgs: 321 active+clean; 259 MiB data, 322 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:23:19 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:23:19 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:23:19 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:23:19.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:23:19 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e136 do_prune osdmap full prune enabled
Jan 20 14:23:19 compute-0 ceph-mon[74360]: pgmap v953: 321 pgs: 321 active+clean; 259 MiB data, 322 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:23:20 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:23:20 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:23:20 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:23:20.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:23:20 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e137 e137: 3 total, 3 up, 3 in
Jan 20 14:23:20 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e137: 3 total, 3 up, 3 in
Jan 20 14:23:21 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v955: 321 pgs: 321 active+clean; 259 MiB data, 322 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:23:21 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:23:21 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:23:21 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:23:21.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:23:21 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e137 do_prune osdmap full prune enabled
Jan 20 14:23:22 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e138 e138: 3 total, 3 up, 3 in
Jan 20 14:23:22 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:23:22 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:23:22 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:23:22.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:23:22 compute-0 ceph-mon[74360]: osdmap e137: 3 total, 3 up, 3 in
Jan 20 14:23:22 compute-0 ceph-mon[74360]: pgmap v955: 321 pgs: 321 active+clean; 259 MiB data, 322 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:23:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:23:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:23:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:23:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:23:22 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e138: 3 total, 3 up, 3 in
Jan 20 14:23:22 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:23:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:23:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:23:23 compute-0 ceph-mon[74360]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #51. Immutable memtables: 0.
Jan 20 14:23:23 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:23:23.056971) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 14:23:23 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 51
Jan 20 14:23:23 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768919003057037, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 546, "num_deletes": 262, "total_data_size": 589844, "memory_usage": 600296, "flush_reason": "Manual Compaction"}
Jan 20 14:23:23 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #52: started
Jan 20 14:23:23 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v957: 321 pgs: 321 active+clean; 259 MiB data, 322 MiB used, 21 GiB / 21 GiB avail; 127 B/s wr, 0 op/s
Jan 20 14:23:23 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:23:23 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:23:23 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:23:23.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:23:23 compute-0 podman[257331]: 2026-01-20 14:23:23.464438411 +0000 UTC m=+0.055466296 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 20 14:23:23 compute-0 podman[257330]: 2026-01-20 14:23:23.495253769 +0000 UTC m=+0.088012209 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller)
Jan 20 14:23:23 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768919003655260, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 52, "file_size": 584362, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 21774, "largest_seqno": 22319, "table_properties": {"data_size": 581268, "index_size": 1002, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 7009, "raw_average_key_size": 18, "raw_value_size": 574947, "raw_average_value_size": 1489, "num_data_blocks": 44, "num_entries": 386, "num_filter_entries": 386, "num_deletions": 262, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768918973, "oldest_key_time": 1768918973, "file_creation_time": 1768919003, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1688fb6f-f397-4aac-a0e8-874ff91d97a7", "db_session_id": "5V3N2TVXYZBCXP55EZHK", "orig_file_number": 52, "seqno_to_time_mapping": "N/A"}}
Jan 20 14:23:23 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 598368 microseconds, and 2638 cpu microseconds.
Jan 20 14:23:23 compute-0 ceph-mon[74360]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 14:23:24 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:23:23.655344) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #52: 584362 bytes OK
Jan 20 14:23:24 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:23:23.655368) [db/memtable_list.cc:519] [default] Level-0 commit table #52 started
Jan 20 14:23:24 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:23:24.145417) [db/memtable_list.cc:722] [default] Level-0 commit table #52: memtable #1 done
Jan 20 14:23:24 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:23:24.145476) EVENT_LOG_v1 {"time_micros": 1768919004145461, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 14:23:24 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:23:24.145506) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 14:23:24 compute-0 ceph-mon[74360]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 586720, prev total WAL file size 587765, number of live WAL files 2.
Jan 20 14:23:24 compute-0 ceph-mon[74360]: osdmap e138: 3 total, 3 up, 3 in
Jan 20 14:23:24 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000048.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:23:24 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:23:24.146455) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00323532' seq:72057594037927935, type:22 .. '6C6F676D00353130' seq:0, type:0; will stop at (end)
Jan 20 14:23:24 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 14:23:24 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [52(570KB)], [50(7229KB)]
Jan 20 14:23:24 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768919004146508, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [52], "files_L6": [50], "score": -1, "input_data_size": 7987716, "oldest_snapshot_seqno": -1}
Jan 20 14:23:24 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:23:24 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:23:24 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:23:24.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:23:24 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #53: 4543 keys, 7851661 bytes, temperature: kUnknown
Jan 20 14:23:24 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768919004389292, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 53, "file_size": 7851661, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7821350, "index_size": 17849, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11397, "raw_key_size": 115849, "raw_average_key_size": 25, "raw_value_size": 7738931, "raw_average_value_size": 1703, "num_data_blocks": 731, "num_entries": 4543, "num_filter_entries": 4543, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768917305, "oldest_key_time": 0, "file_creation_time": 1768919004, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1688fb6f-f397-4aac-a0e8-874ff91d97a7", "db_session_id": "5V3N2TVXYZBCXP55EZHK", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}}
Jan 20 14:23:24 compute-0 ceph-mon[74360]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 14:23:24 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:23:24.390011) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 7851661 bytes
Jan 20 14:23:24 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:23:24.440860) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 32.8 rd, 32.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 7.1 +0.0 blob) out(7.5 +0.0 blob), read-write-amplify(27.1) write-amplify(13.4) OK, records in: 5083, records dropped: 540 output_compression: NoCompression
Jan 20 14:23:24 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:23:24.440910) EVENT_LOG_v1 {"time_micros": 1768919004440890, "job": 26, "event": "compaction_finished", "compaction_time_micros": 243260, "compaction_time_cpu_micros": 32486, "output_level": 6, "num_output_files": 1, "total_output_size": 7851661, "num_input_records": 5083, "num_output_records": 4543, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 14:23:24 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000052.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:23:24 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768919004442013, "job": 26, "event": "table_file_deletion", "file_number": 52}
Jan 20 14:23:24 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000050.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:23:24 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768919004443354, "job": 26, "event": "table_file_deletion", "file_number": 50}
Jan 20 14:23:24 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:23:24.146311) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:23:24 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:23:24.443450) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:23:24 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:23:24.443455) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:23:24 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:23:24.443456) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:23:24 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:23:24.443458) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:23:24 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:23:24.443459) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:23:24 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:23:24.914 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:23:24 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:23:24.916 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 14:23:25 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v958: 321 pgs: 321 active+clean; 259 MiB data, 322 MiB used, 21 GiB / 21 GiB avail; 0 B/s rd, 255 B/s wr, 0 op/s
Jan 20 14:23:25 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:23:25 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 20 14:23:25 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:23:25.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 20 14:23:25 compute-0 ceph-mon[74360]: pgmap v957: 321 pgs: 321 active+clean; 259 MiB data, 322 MiB used, 21 GiB / 21 GiB avail; 127 B/s wr, 0 op/s
Jan 20 14:23:25 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/2873657741' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:23:25 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/143690843' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:23:26 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:23:26 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:23:26 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:23:26.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:23:26 compute-0 nova_compute[250018]: 2026-01-20 14:23:26.875 250022 DEBUG nova.network.neutron [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] [instance: 288f4f83-d33c-44ef-bf78-16cacd3f811f] Automatically allocated network: {'id': 'abfbbc51-530d-4964-87bc-9fe4ef7eea76', 'name': 'auto_allocated_network', 'tenant_id': 'e024eef627014f829fa6e45ffe36c281', 'admin_state_up': True, 'mtu': 1442, 'status': 'ACTIVE', 'subnets': ['a221e4ee-a32e-43de-aa40-0d06401d1ef3', 'e70603a2-876a-4335-85f1-9e391d1bc039'], 'shared': False, 'availability_zone_hints': [], 'availability_zones': [], 'ipv4_address_scope': None, 'ipv6_address_scope': None, 'router:external': False, 'description': '', 'qos_policy_id': None, 'port_security_enabled': True, 'dns_domain': '', 'l2_adjacency': True, 'tags': [], 'created_at': '2026-01-20T14:22:53Z', 'updated_at': '2026-01-20T14:23:15Z', 'revision_number': 4, 'project_id': 'e024eef627014f829fa6e45ffe36c281'} _auto_allocate_network /usr/lib/python3.9/site-packages/nova/network/neutron.py:2478
Jan 20 14:23:26 compute-0 nova_compute[250018]: 2026-01-20 14:23:26.887 250022 WARNING oslo_policy.policy [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Jan 20 14:23:26 compute-0 nova_compute[250018]: 2026-01-20 14:23:26.887 250022 WARNING oslo_policy.policy [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Jan 20 14:23:26 compute-0 nova_compute[250018]: 2026-01-20 14:23:26.890 250022 DEBUG nova.policy [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '918f290d4c414b71807eacf0b27ad165', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e024eef627014f829fa6e45ffe36c281', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 20 14:23:27 compute-0 ceph-mon[74360]: pgmap v958: 321 pgs: 321 active+clean; 259 MiB data, 322 MiB used, 21 GiB / 21 GiB avail; 0 B/s rd, 255 B/s wr, 0 op/s
Jan 20 14:23:27 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v959: 321 pgs: 321 active+clean; 259 MiB data, 322 MiB used, 21 GiB / 21 GiB avail; 383 B/s rd, 767 B/s wr, 1 op/s
Jan 20 14:23:27 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:23:27 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:23:27 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:23:27.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:23:27 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:23:28 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:23:28 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:23:28 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:23:28.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:23:28 compute-0 ceph-mon[74360]: pgmap v959: 321 pgs: 321 active+clean; 259 MiB data, 322 MiB used, 21 GiB / 21 GiB avail; 383 B/s rd, 767 B/s wr, 1 op/s
Jan 20 14:23:28 compute-0 nova_compute[250018]: 2026-01-20 14:23:28.582 250022 DEBUG nova.network.neutron [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] [instance: 288f4f83-d33c-44ef-bf78-16cacd3f811f] Successfully created port: a03c7653-e053-4ded-befc-99e09d6f7ed5 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 20 14:23:28 compute-0 sshd-session[257377]: Invalid user admin from 157.245.78.139 port 55208
Jan 20 14:23:28 compute-0 sshd-session[257377]: Connection closed by invalid user admin 157.245.78.139 port 55208 [preauth]
Jan 20 14:23:29 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v960: 321 pgs: 321 active+clean; 259 MiB data, 322 MiB used, 21 GiB / 21 GiB avail; 3.1 KiB/s rd, 847 B/s wr, 5 op/s
Jan 20 14:23:29 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:23:29 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:23:29 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:23:29.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:23:30 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:23:30 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:23:30 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:23:30.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:23:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:23:30.738 160071 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:23:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:23:30.739 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:23:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:23:30.739 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:23:31 compute-0 ceph-mon[74360]: pgmap v960: 321 pgs: 321 active+clean; 259 MiB data, 322 MiB used, 21 GiB / 21 GiB avail; 3.1 KiB/s rd, 847 B/s wr, 5 op/s
Jan 20 14:23:31 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v961: 321 pgs: 321 active+clean; 260 MiB data, 322 MiB used, 21 GiB / 21 GiB avail; 3.9 KiB/s rd, 16 KiB/s wr, 6 op/s
Jan 20 14:23:31 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:23:31 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:23:31 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:23:31.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:23:31 compute-0 nova_compute[250018]: 2026-01-20 14:23:31.633 250022 DEBUG nova.network.neutron [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] [instance: 288f4f83-d33c-44ef-bf78-16cacd3f811f] Successfully updated port: a03c7653-e053-4ded-befc-99e09d6f7ed5 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 20 14:23:31 compute-0 nova_compute[250018]: 2026-01-20 14:23:31.647 250022 DEBUG oslo_concurrency.lockutils [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Acquiring lock "refresh_cache-288f4f83-d33c-44ef-bf78-16cacd3f811f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:23:31 compute-0 nova_compute[250018]: 2026-01-20 14:23:31.647 250022 DEBUG oslo_concurrency.lockutils [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Acquired lock "refresh_cache-288f4f83-d33c-44ef-bf78-16cacd3f811f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:23:31 compute-0 nova_compute[250018]: 2026-01-20 14:23:31.647 250022 DEBUG nova.network.neutron [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] [instance: 288f4f83-d33c-44ef-bf78-16cacd3f811f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:23:32 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:23:32 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 20 14:23:32 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:23:32.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 20 14:23:32 compute-0 nova_compute[250018]: 2026-01-20 14:23:32.258 250022 DEBUG nova.network.neutron [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] [instance: 288f4f83-d33c-44ef-bf78-16cacd3f811f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 14:23:32 compute-0 nova_compute[250018]: 2026-01-20 14:23:32.442 250022 DEBUG nova.compute.manager [req-7fe7adb1-f0da-42af-b766-118703acdac5 req-f8c7709c-4116-44c0-a688-1e658abe8edd 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 288f4f83-d33c-44ef-bf78-16cacd3f811f] Received event network-changed-a03c7653-e053-4ded-befc-99e09d6f7ed5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:23:32 compute-0 nova_compute[250018]: 2026-01-20 14:23:32.442 250022 DEBUG nova.compute.manager [req-7fe7adb1-f0da-42af-b766-118703acdac5 req-f8c7709c-4116-44c0-a688-1e658abe8edd 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 288f4f83-d33c-44ef-bf78-16cacd3f811f] Refreshing instance network info cache due to event network-changed-a03c7653-e053-4ded-befc-99e09d6f7ed5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:23:32 compute-0 nova_compute[250018]: 2026-01-20 14:23:32.443 250022 DEBUG oslo_concurrency.lockutils [req-7fe7adb1-f0da-42af-b766-118703acdac5 req-f8c7709c-4116-44c0-a688-1e658abe8edd 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-288f4f83-d33c-44ef-bf78-16cacd3f811f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:23:32 compute-0 ceph-mon[74360]: pgmap v961: 321 pgs: 321 active+clean; 260 MiB data, 322 MiB used, 21 GiB / 21 GiB avail; 3.9 KiB/s rd, 16 KiB/s wr, 6 op/s
Jan 20 14:23:32 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:23:32 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e138 do_prune osdmap full prune enabled
Jan 20 14:23:32 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e139 e139: 3 total, 3 up, 3 in
Jan 20 14:23:33 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e139: 3 total, 3 up, 3 in
Jan 20 14:23:33 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v963: 321 pgs: 321 active+clean; 260 MiB data, 322 MiB used, 21 GiB / 21 GiB avail; 294 KiB/s rd, 18 KiB/s wr, 20 op/s
Jan 20 14:23:33 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:23:33 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:23:33 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:23:33.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:23:33 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/450213987' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:23:33 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/1900357995' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:23:33 compute-0 ceph-mon[74360]: osdmap e139: 3 total, 3 up, 3 in
Jan 20 14:23:33 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/2620160175' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:23:34 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:23:34 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:23:34 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:23:34.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:23:34 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:23:34.918 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=367c1a2c-b16a-4828-ab5a-626bb50023b4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:23:35 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v964: 321 pgs: 321 active+clean; 260 MiB data, 322 MiB used, 21 GiB / 21 GiB avail; 1.3 MiB/s rd, 18 KiB/s wr, 57 op/s
Jan 20 14:23:35 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:23:35 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:23:35 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:23:35.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:23:35 compute-0 ceph-mon[74360]: pgmap v963: 321 pgs: 321 active+clean; 260 MiB data, 322 MiB used, 21 GiB / 21 GiB avail; 294 KiB/s rd, 18 KiB/s wr, 20 op/s
Jan 20 14:23:36 compute-0 sudo[257384]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:23:36 compute-0 sudo[257384]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:23:36 compute-0 sudo[257384]: pam_unix(sudo:session): session closed for user root
Jan 20 14:23:36 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:23:36 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:23:36 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:23:36.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:23:36 compute-0 sudo[257409]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:23:36 compute-0 sudo[257409]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:23:36 compute-0 sudo[257409]: pam_unix(sudo:session): session closed for user root
Jan 20 14:23:37 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v965: 321 pgs: 321 active+clean; 260 MiB data, 322 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 17 KiB/s wr, 88 op/s
Jan 20 14:23:37 compute-0 ceph-mon[74360]: pgmap v964: 321 pgs: 321 active+clean; 260 MiB data, 322 MiB used, 21 GiB / 21 GiB avail; 1.3 MiB/s rd, 18 KiB/s wr, 57 op/s
Jan 20 14:23:37 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:23:37 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:23:37 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:23:37.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:23:38 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:23:38 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:23:38 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:23:38 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:23:38.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:23:38 compute-0 nova_compute[250018]: 2026-01-20 14:23:38.300 250022 DEBUG nova.network.neutron [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] [instance: 288f4f83-d33c-44ef-bf78-16cacd3f811f] Updating instance_info_cache with network_info: [{"id": "a03c7653-e053-4ded-befc-99e09d6f7ed5", "address": "fa:16:3e:35:ca:37", "network": {"id": "abfbbc51-530d-4964-87bc-9fe4ef7eea76", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::93", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e024eef627014f829fa6e45ffe36c281", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa03c7653-e0", "ovs_interfaceid": "a03c7653-e053-4ded-befc-99e09d6f7ed5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:23:38 compute-0 nova_compute[250018]: 2026-01-20 14:23:38.534 250022 DEBUG oslo_concurrency.lockutils [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Releasing lock "refresh_cache-288f4f83-d33c-44ef-bf78-16cacd3f811f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:23:38 compute-0 nova_compute[250018]: 2026-01-20 14:23:38.535 250022 DEBUG nova.compute.manager [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] [instance: 288f4f83-d33c-44ef-bf78-16cacd3f811f] Instance network_info: |[{"id": "a03c7653-e053-4ded-befc-99e09d6f7ed5", "address": "fa:16:3e:35:ca:37", "network": {"id": "abfbbc51-530d-4964-87bc-9fe4ef7eea76", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::93", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e024eef627014f829fa6e45ffe36c281", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa03c7653-e0", "ovs_interfaceid": "a03c7653-e053-4ded-befc-99e09d6f7ed5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 20 14:23:38 compute-0 nova_compute[250018]: 2026-01-20 14:23:38.537 250022 DEBUG oslo_concurrency.lockutils [req-7fe7adb1-f0da-42af-b766-118703acdac5 req-f8c7709c-4116-44c0-a688-1e658abe8edd 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-288f4f83-d33c-44ef-bf78-16cacd3f811f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:23:38 compute-0 nova_compute[250018]: 2026-01-20 14:23:38.537 250022 DEBUG nova.network.neutron [req-7fe7adb1-f0da-42af-b766-118703acdac5 req-f8c7709c-4116-44c0-a688-1e658abe8edd 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 288f4f83-d33c-44ef-bf78-16cacd3f811f] Refreshing network info cache for port a03c7653-e053-4ded-befc-99e09d6f7ed5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:23:38 compute-0 nova_compute[250018]: 2026-01-20 14:23:38.544 250022 DEBUG nova.virt.libvirt.driver [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] [instance: 288f4f83-d33c-44ef-bf78-16cacd3f811f] Start _get_guest_xml network_info=[{"id": "a03c7653-e053-4ded-befc-99e09d6f7ed5", "address": "fa:16:3e:35:ca:37", "network": {"id": "abfbbc51-530d-4964-87bc-9fe4ef7eea76", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::93", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e024eef627014f829fa6e45ffe36c281", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa03c7653-e0", "ovs_interfaceid": "a03c7653-e053-4ded-befc-99e09d6f7ed5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encrypted': False, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'guest_format': None, 'size': 0, 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 14:23:38 compute-0 nova_compute[250018]: 2026-01-20 14:23:38.554 250022 WARNING nova.virt.libvirt.driver [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:23:38 compute-0 nova_compute[250018]: 2026-01-20 14:23:38.563 250022 DEBUG nova.virt.libvirt.host [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 14:23:38 compute-0 nova_compute[250018]: 2026-01-20 14:23:38.564 250022 DEBUG nova.virt.libvirt.host [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 14:23:38 compute-0 nova_compute[250018]: 2026-01-20 14:23:38.576 250022 DEBUG nova.virt.libvirt.host [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 14:23:38 compute-0 nova_compute[250018]: 2026-01-20 14:23:38.578 250022 DEBUG nova.virt.libvirt.host [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 14:23:38 compute-0 nova_compute[250018]: 2026-01-20 14:23:38.582 250022 DEBUG nova.virt.libvirt.driver [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 14:23:38 compute-0 nova_compute[250018]: 2026-01-20 14:23:38.583 250022 DEBUG nova.virt.hardware [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 14:23:38 compute-0 nova_compute[250018]: 2026-01-20 14:23:38.584 250022 DEBUG nova.virt.hardware [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 14:23:38 compute-0 nova_compute[250018]: 2026-01-20 14:23:38.585 250022 DEBUG nova.virt.hardware [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 14:23:38 compute-0 nova_compute[250018]: 2026-01-20 14:23:38.585 250022 DEBUG nova.virt.hardware [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 14:23:38 compute-0 nova_compute[250018]: 2026-01-20 14:23:38.586 250022 DEBUG nova.virt.hardware [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 14:23:38 compute-0 nova_compute[250018]: 2026-01-20 14:23:38.586 250022 DEBUG nova.virt.hardware [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 14:23:38 compute-0 nova_compute[250018]: 2026-01-20 14:23:38.587 250022 DEBUG nova.virt.hardware [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 14:23:38 compute-0 nova_compute[250018]: 2026-01-20 14:23:38.589 250022 DEBUG nova.virt.hardware [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 14:23:38 compute-0 nova_compute[250018]: 2026-01-20 14:23:38.589 250022 DEBUG nova.virt.hardware [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 14:23:38 compute-0 nova_compute[250018]: 2026-01-20 14:23:38.590 250022 DEBUG nova.virt.hardware [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 14:23:38 compute-0 nova_compute[250018]: 2026-01-20 14:23:38.590 250022 DEBUG nova.virt.hardware [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 14:23:38 compute-0 nova_compute[250018]: 2026-01-20 14:23:38.598 250022 DEBUG oslo_concurrency.processutils [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:23:38 compute-0 ceph-mon[74360]: pgmap v965: 321 pgs: 321 active+clean; 260 MiB data, 322 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 17 KiB/s wr, 88 op/s
Jan 20 14:23:39 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v966: 321 pgs: 321 active+clean; 260 MiB data, 322 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 17 KiB/s wr, 85 op/s
Jan 20 14:23:39 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:23:39 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:23:39 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:23:39.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:23:39 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:23:39 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3972026209' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:23:39 compute-0 nova_compute[250018]: 2026-01-20 14:23:39.426 250022 DEBUG oslo_concurrency.processutils [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.828s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:23:39 compute-0 nova_compute[250018]: 2026-01-20 14:23:39.462 250022 DEBUG nova.storage.rbd_utils [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] rbd image 288f4f83-d33c-44ef-bf78-16cacd3f811f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:23:39 compute-0 nova_compute[250018]: 2026-01-20 14:23:39.468 250022 DEBUG oslo_concurrency.processutils [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:23:40 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:23:40 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:23:40 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:23:40.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:23:40 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3972026209' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:23:40 compute-0 nova_compute[250018]: 2026-01-20 14:23:40.580 250022 DEBUG nova.network.neutron [req-7fe7adb1-f0da-42af-b766-118703acdac5 req-f8c7709c-4116-44c0-a688-1e658abe8edd 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 288f4f83-d33c-44ef-bf78-16cacd3f811f] Updated VIF entry in instance network info cache for port a03c7653-e053-4ded-befc-99e09d6f7ed5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 14:23:40 compute-0 nova_compute[250018]: 2026-01-20 14:23:40.581 250022 DEBUG nova.network.neutron [req-7fe7adb1-f0da-42af-b766-118703acdac5 req-f8c7709c-4116-44c0-a688-1e658abe8edd 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 288f4f83-d33c-44ef-bf78-16cacd3f811f] Updating instance_info_cache with network_info: [{"id": "a03c7653-e053-4ded-befc-99e09d6f7ed5", "address": "fa:16:3e:35:ca:37", "network": {"id": "abfbbc51-530d-4964-87bc-9fe4ef7eea76", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::93", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e024eef627014f829fa6e45ffe36c281", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa03c7653-e0", "ovs_interfaceid": "a03c7653-e053-4ded-befc-99e09d6f7ed5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:23:40 compute-0 nova_compute[250018]: 2026-01-20 14:23:40.600 250022 DEBUG oslo_concurrency.lockutils [req-7fe7adb1-f0da-42af-b766-118703acdac5 req-f8c7709c-4116-44c0-a688-1e658abe8edd 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-288f4f83-d33c-44ef-bf78-16cacd3f811f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:23:41 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:23:41 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3235933527' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:23:41 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v967: 321 pgs: 321 active+clean; 260 MiB data, 322 MiB used, 21 GiB / 21 GiB avail; 2.5 MiB/s rd, 17 KiB/s wr, 99 op/s
Jan 20 14:23:41 compute-0 nova_compute[250018]: 2026-01-20 14:23:41.167 250022 DEBUG oslo_concurrency.processutils [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.700s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:23:41 compute-0 nova_compute[250018]: 2026-01-20 14:23:41.170 250022 DEBUG nova.virt.libvirt.vif [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:22:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-tempest.common.compute-instance-63374895-3',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-63374895-3',id=4,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=2,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e024eef627014f829fa6e45ffe36c281',ramdisk_id='',reservation_id='r-46o7356r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AutoAllocateNetworkTest-314960358',owner_user_name='tempest-AutoAllocateNetworkTest-314960358-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:22:52Z,user_data=None,user_id='918f290d4c414b71807eacf0b27ad165',uuid=288f4f83-d33c-44ef-bf78-16cacd3f811f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a03c7653-e053-4ded-befc-99e09d6f7ed5", "address": "fa:16:3e:35:ca:37", "network": {"id": "abfbbc51-530d-4964-87bc-9fe4ef7eea76", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::93", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e024eef627014f829fa6e45ffe36c281", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa03c7653-e0", "ovs_interfaceid": "a03c7653-e053-4ded-befc-99e09d6f7ed5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 14:23:41 compute-0 nova_compute[250018]: 2026-01-20 14:23:41.171 250022 DEBUG nova.network.os_vif_util [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Converting VIF {"id": "a03c7653-e053-4ded-befc-99e09d6f7ed5", "address": "fa:16:3e:35:ca:37", "network": {"id": "abfbbc51-530d-4964-87bc-9fe4ef7eea76", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::93", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e024eef627014f829fa6e45ffe36c281", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa03c7653-e0", "ovs_interfaceid": "a03c7653-e053-4ded-befc-99e09d6f7ed5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:23:41 compute-0 nova_compute[250018]: 2026-01-20 14:23:41.172 250022 DEBUG nova.network.os_vif_util [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:35:ca:37,bridge_name='br-int',has_traffic_filtering=True,id=a03c7653-e053-4ded-befc-99e09d6f7ed5,network=Network(abfbbc51-530d-4964-87bc-9fe4ef7eea76),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa03c7653-e0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:23:41 compute-0 nova_compute[250018]: 2026-01-20 14:23:41.175 250022 DEBUG nova.objects.instance [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Lazy-loading 'pci_devices' on Instance uuid 288f4f83-d33c-44ef-bf78-16cacd3f811f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:23:41 compute-0 nova_compute[250018]: 2026-01-20 14:23:41.192 250022 DEBUG nova.virt.libvirt.driver [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] [instance: 288f4f83-d33c-44ef-bf78-16cacd3f811f] End _get_guest_xml xml=<domain type="kvm">
Jan 20 14:23:41 compute-0 nova_compute[250018]:   <uuid>288f4f83-d33c-44ef-bf78-16cacd3f811f</uuid>
Jan 20 14:23:41 compute-0 nova_compute[250018]:   <name>instance-00000004</name>
Jan 20 14:23:41 compute-0 nova_compute[250018]:   <memory>131072</memory>
Jan 20 14:23:41 compute-0 nova_compute[250018]:   <vcpu>1</vcpu>
Jan 20 14:23:41 compute-0 nova_compute[250018]:   <metadata>
Jan 20 14:23:41 compute-0 nova_compute[250018]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 14:23:41 compute-0 nova_compute[250018]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 14:23:41 compute-0 nova_compute[250018]:       <nova:name>tempest-tempest.common.compute-instance-63374895-3</nova:name>
Jan 20 14:23:41 compute-0 nova_compute[250018]:       <nova:creationTime>2026-01-20 14:23:38</nova:creationTime>
Jan 20 14:23:41 compute-0 nova_compute[250018]:       <nova:flavor name="m1.nano">
Jan 20 14:23:41 compute-0 nova_compute[250018]:         <nova:memory>128</nova:memory>
Jan 20 14:23:41 compute-0 nova_compute[250018]:         <nova:disk>1</nova:disk>
Jan 20 14:23:41 compute-0 nova_compute[250018]:         <nova:swap>0</nova:swap>
Jan 20 14:23:41 compute-0 nova_compute[250018]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 14:23:41 compute-0 nova_compute[250018]:         <nova:vcpus>1</nova:vcpus>
Jan 20 14:23:41 compute-0 nova_compute[250018]:       </nova:flavor>
Jan 20 14:23:41 compute-0 nova_compute[250018]:       <nova:owner>
Jan 20 14:23:41 compute-0 nova_compute[250018]:         <nova:user uuid="918f290d4c414b71807eacf0b27ad165">tempest-AutoAllocateNetworkTest-314960358-project-member</nova:user>
Jan 20 14:23:41 compute-0 nova_compute[250018]:         <nova:project uuid="e024eef627014f829fa6e45ffe36c281">tempest-AutoAllocateNetworkTest-314960358</nova:project>
Jan 20 14:23:41 compute-0 nova_compute[250018]:       </nova:owner>
Jan 20 14:23:41 compute-0 nova_compute[250018]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 14:23:41 compute-0 nova_compute[250018]:       <nova:ports>
Jan 20 14:23:41 compute-0 nova_compute[250018]:         <nova:port uuid="a03c7653-e053-4ded-befc-99e09d6f7ed5">
Jan 20 14:23:41 compute-0 nova_compute[250018]:           <nova:ip type="fixed" address="fdfe:381f:8400::93" ipVersion="6"/>
Jan 20 14:23:41 compute-0 nova_compute[250018]:           <nova:ip type="fixed" address="10.1.0.8" ipVersion="4"/>
Jan 20 14:23:41 compute-0 nova_compute[250018]:         </nova:port>
Jan 20 14:23:41 compute-0 nova_compute[250018]:       </nova:ports>
Jan 20 14:23:41 compute-0 nova_compute[250018]:     </nova:instance>
Jan 20 14:23:41 compute-0 nova_compute[250018]:   </metadata>
Jan 20 14:23:41 compute-0 nova_compute[250018]:   <sysinfo type="smbios">
Jan 20 14:23:41 compute-0 nova_compute[250018]:     <system>
Jan 20 14:23:41 compute-0 nova_compute[250018]:       <entry name="manufacturer">RDO</entry>
Jan 20 14:23:41 compute-0 nova_compute[250018]:       <entry name="product">OpenStack Compute</entry>
Jan 20 14:23:41 compute-0 nova_compute[250018]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 14:23:41 compute-0 nova_compute[250018]:       <entry name="serial">288f4f83-d33c-44ef-bf78-16cacd3f811f</entry>
Jan 20 14:23:41 compute-0 nova_compute[250018]:       <entry name="uuid">288f4f83-d33c-44ef-bf78-16cacd3f811f</entry>
Jan 20 14:23:41 compute-0 nova_compute[250018]:       <entry name="family">Virtual Machine</entry>
Jan 20 14:23:41 compute-0 nova_compute[250018]:     </system>
Jan 20 14:23:41 compute-0 nova_compute[250018]:   </sysinfo>
Jan 20 14:23:41 compute-0 nova_compute[250018]:   <os>
Jan 20 14:23:41 compute-0 nova_compute[250018]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 14:23:41 compute-0 nova_compute[250018]:     <boot dev="hd"/>
Jan 20 14:23:41 compute-0 nova_compute[250018]:     <smbios mode="sysinfo"/>
Jan 20 14:23:41 compute-0 nova_compute[250018]:   </os>
Jan 20 14:23:41 compute-0 nova_compute[250018]:   <features>
Jan 20 14:23:41 compute-0 nova_compute[250018]:     <acpi/>
Jan 20 14:23:41 compute-0 nova_compute[250018]:     <apic/>
Jan 20 14:23:41 compute-0 nova_compute[250018]:     <vmcoreinfo/>
Jan 20 14:23:41 compute-0 nova_compute[250018]:   </features>
Jan 20 14:23:41 compute-0 nova_compute[250018]:   <clock offset="utc">
Jan 20 14:23:41 compute-0 nova_compute[250018]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 14:23:41 compute-0 nova_compute[250018]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 14:23:41 compute-0 nova_compute[250018]:     <timer name="hpet" present="no"/>
Jan 20 14:23:41 compute-0 nova_compute[250018]:   </clock>
Jan 20 14:23:41 compute-0 nova_compute[250018]:   <cpu mode="custom" match="exact">
Jan 20 14:23:41 compute-0 nova_compute[250018]:     <model>Nehalem</model>
Jan 20 14:23:41 compute-0 nova_compute[250018]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 14:23:41 compute-0 nova_compute[250018]:   </cpu>
Jan 20 14:23:41 compute-0 nova_compute[250018]:   <devices>
Jan 20 14:23:41 compute-0 nova_compute[250018]:     <disk type="network" device="disk">
Jan 20 14:23:41 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 14:23:41 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/288f4f83-d33c-44ef-bf78-16cacd3f811f_disk">
Jan 20 14:23:41 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:23:41 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:23:41 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:23:41 compute-0 nova_compute[250018]:       </source>
Jan 20 14:23:41 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 14:23:41 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:23:41 compute-0 nova_compute[250018]:       </auth>
Jan 20 14:23:41 compute-0 nova_compute[250018]:       <target dev="vda" bus="virtio"/>
Jan 20 14:23:41 compute-0 nova_compute[250018]:     </disk>
Jan 20 14:23:41 compute-0 nova_compute[250018]:     <disk type="network" device="cdrom">
Jan 20 14:23:41 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 14:23:41 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/288f4f83-d33c-44ef-bf78-16cacd3f811f_disk.config">
Jan 20 14:23:41 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:23:41 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:23:41 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:23:41 compute-0 nova_compute[250018]:       </source>
Jan 20 14:23:41 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 14:23:41 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:23:41 compute-0 nova_compute[250018]:       </auth>
Jan 20 14:23:41 compute-0 nova_compute[250018]:       <target dev="sda" bus="sata"/>
Jan 20 14:23:41 compute-0 nova_compute[250018]:     </disk>
Jan 20 14:23:41 compute-0 nova_compute[250018]:     <interface type="ethernet">
Jan 20 14:23:41 compute-0 nova_compute[250018]:       <mac address="fa:16:3e:35:ca:37"/>
Jan 20 14:23:41 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 14:23:41 compute-0 nova_compute[250018]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 14:23:41 compute-0 nova_compute[250018]:       <mtu size="1442"/>
Jan 20 14:23:41 compute-0 nova_compute[250018]:       <target dev="tapa03c7653-e0"/>
Jan 20 14:23:41 compute-0 nova_compute[250018]:     </interface>
Jan 20 14:23:41 compute-0 nova_compute[250018]:     <serial type="pty">
Jan 20 14:23:41 compute-0 nova_compute[250018]:       <log file="/var/lib/nova/instances/288f4f83-d33c-44ef-bf78-16cacd3f811f/console.log" append="off"/>
Jan 20 14:23:41 compute-0 nova_compute[250018]:     </serial>
Jan 20 14:23:41 compute-0 nova_compute[250018]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 14:23:41 compute-0 nova_compute[250018]:     <video>
Jan 20 14:23:41 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 14:23:41 compute-0 nova_compute[250018]:     </video>
Jan 20 14:23:41 compute-0 nova_compute[250018]:     <input type="tablet" bus="usb"/>
Jan 20 14:23:41 compute-0 nova_compute[250018]:     <rng model="virtio">
Jan 20 14:23:41 compute-0 nova_compute[250018]:       <backend model="random">/dev/urandom</backend>
Jan 20 14:23:41 compute-0 nova_compute[250018]:     </rng>
Jan 20 14:23:41 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root"/>
Jan 20 14:23:41 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:23:41 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:23:41 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:23:41 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:23:41 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:23:41 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:23:41 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:23:41 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:23:41 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:23:41 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:23:41 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:23:41 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:23:41 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:23:41 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:23:41 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:23:41 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:23:41 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:23:41 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:23:41 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:23:41 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:23:41 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:23:41 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:23:41 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:23:41 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:23:41 compute-0 nova_compute[250018]:     <controller type="usb" index="0"/>
Jan 20 14:23:41 compute-0 nova_compute[250018]:     <memballoon model="virtio">
Jan 20 14:23:41 compute-0 nova_compute[250018]:       <stats period="10"/>
Jan 20 14:23:41 compute-0 nova_compute[250018]:     </memballoon>
Jan 20 14:23:41 compute-0 nova_compute[250018]:   </devices>
Jan 20 14:23:41 compute-0 nova_compute[250018]: </domain>
Jan 20 14:23:41 compute-0 nova_compute[250018]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 14:23:41 compute-0 nova_compute[250018]: 2026-01-20 14:23:41.194 250022 DEBUG nova.compute.manager [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] [instance: 288f4f83-d33c-44ef-bf78-16cacd3f811f] Preparing to wait for external event network-vif-plugged-a03c7653-e053-4ded-befc-99e09d6f7ed5 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 20 14:23:41 compute-0 nova_compute[250018]: 2026-01-20 14:23:41.194 250022 DEBUG oslo_concurrency.lockutils [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Acquiring lock "288f4f83-d33c-44ef-bf78-16cacd3f811f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:23:41 compute-0 nova_compute[250018]: 2026-01-20 14:23:41.194 250022 DEBUG oslo_concurrency.lockutils [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Lock "288f4f83-d33c-44ef-bf78-16cacd3f811f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:23:41 compute-0 nova_compute[250018]: 2026-01-20 14:23:41.195 250022 DEBUG oslo_concurrency.lockutils [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Lock "288f4f83-d33c-44ef-bf78-16cacd3f811f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:23:41 compute-0 nova_compute[250018]: 2026-01-20 14:23:41.195 250022 DEBUG nova.virt.libvirt.vif [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:22:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-tempest.common.compute-instance-63374895-3',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-63374895-3',id=4,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=2,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e024eef627014f829fa6e45ffe36c281',ramdisk_id='',reservation_id='r-46o7356r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AutoAllocateNetworkTest-314960358',owner_user_name='tempest-AutoAllocateNetworkTest-314960358-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:22:52Z,user_data=None,user_id='918f290d4c414b71807eacf0b27ad165',uuid=288f4f83-d33c-44ef-bf78-16cacd3f811f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a03c7653-e053-4ded-befc-99e09d6f7ed5", "address": "fa:16:3e:35:ca:37", "network": {"id": "abfbbc51-530d-4964-87bc-9fe4ef7eea76", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::93", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e024eef627014f829fa6e45ffe36c281", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa03c7653-e0", "ovs_interfaceid": "a03c7653-e053-4ded-befc-99e09d6f7ed5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 14:23:41 compute-0 nova_compute[250018]: 2026-01-20 14:23:41.196 250022 DEBUG nova.network.os_vif_util [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Converting VIF {"id": "a03c7653-e053-4ded-befc-99e09d6f7ed5", "address": "fa:16:3e:35:ca:37", "network": {"id": "abfbbc51-530d-4964-87bc-9fe4ef7eea76", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::93", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e024eef627014f829fa6e45ffe36c281", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa03c7653-e0", "ovs_interfaceid": "a03c7653-e053-4ded-befc-99e09d6f7ed5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:23:41 compute-0 nova_compute[250018]: 2026-01-20 14:23:41.197 250022 DEBUG nova.network.os_vif_util [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:35:ca:37,bridge_name='br-int',has_traffic_filtering=True,id=a03c7653-e053-4ded-befc-99e09d6f7ed5,network=Network(abfbbc51-530d-4964-87bc-9fe4ef7eea76),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa03c7653-e0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:23:41 compute-0 nova_compute[250018]: 2026-01-20 14:23:41.197 250022 DEBUG os_vif [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:35:ca:37,bridge_name='br-int',has_traffic_filtering=True,id=a03c7653-e053-4ded-befc-99e09d6f7ed5,network=Network(abfbbc51-530d-4964-87bc-9fe4ef7eea76),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa03c7653-e0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 14:23:41 compute-0 nova_compute[250018]: 2026-01-20 14:23:41.231 250022 DEBUG ovsdbapp.backend.ovs_idl [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 20 14:23:41 compute-0 nova_compute[250018]: 2026-01-20 14:23:41.232 250022 DEBUG ovsdbapp.backend.ovs_idl [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 20 14:23:41 compute-0 nova_compute[250018]: 2026-01-20 14:23:41.232 250022 DEBUG ovsdbapp.backend.ovs_idl [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 20 14:23:41 compute-0 nova_compute[250018]: 2026-01-20 14:23:41.232 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 20 14:23:41 compute-0 nova_compute[250018]: 2026-01-20 14:23:41.233 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] [POLLOUT] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:23:41 compute-0 nova_compute[250018]: 2026-01-20 14:23:41.233 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 20 14:23:41 compute-0 nova_compute[250018]: 2026-01-20 14:23:41.233 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:23:41 compute-0 nova_compute[250018]: 2026-01-20 14:23:41.235 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:23:41 compute-0 nova_compute[250018]: 2026-01-20 14:23:41.236 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:23:41 compute-0 nova_compute[250018]: 2026-01-20 14:23:41.247 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:23:41 compute-0 nova_compute[250018]: 2026-01-20 14:23:41.247 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:23:41 compute-0 nova_compute[250018]: 2026-01-20 14:23:41.247 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:23:41 compute-0 nova_compute[250018]: 2026-01-20 14:23:41.249 250022 INFO oslo.privsep.daemon [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmpwd8c21fa/privsep.sock']
Jan 20 14:23:41 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:23:41 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:23:41 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:23:41.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:23:42 compute-0 nova_compute[250018]: 2026-01-20 14:23:42.010 250022 INFO oslo.privsep.daemon [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Spawned new privsep daemon via rootwrap
Jan 20 14:23:42 compute-0 nova_compute[250018]: 2026-01-20 14:23:41.868 257503 INFO oslo.privsep.daemon [-] privsep daemon starting
Jan 20 14:23:42 compute-0 nova_compute[250018]: 2026-01-20 14:23:41.874 257503 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Jan 20 14:23:42 compute-0 nova_compute[250018]: 2026-01-20 14:23:41.878 257503 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none
Jan 20 14:23:42 compute-0 nova_compute[250018]: 2026-01-20 14:23:41.878 257503 INFO oslo.privsep.daemon [-] privsep daemon running as pid 257503
Jan 20 14:23:42 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:23:42 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:23:42 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:23:42.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:23:42 compute-0 nova_compute[250018]: 2026-01-20 14:23:42.335 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:23:42 compute-0 nova_compute[250018]: 2026-01-20 14:23:42.336 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa03c7653-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:23:42 compute-0 nova_compute[250018]: 2026-01-20 14:23:42.337 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa03c7653-e0, col_values=(('external_ids', {'iface-id': 'a03c7653-e053-4ded-befc-99e09d6f7ed5', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:35:ca:37', 'vm-uuid': '288f4f83-d33c-44ef-bf78-16cacd3f811f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:23:42 compute-0 nova_compute[250018]: 2026-01-20 14:23:42.341 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:23:42 compute-0 NetworkManager[48960]: <info>  [1768919022.3432] manager: (tapa03c7653-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/23)
Jan 20 14:23:42 compute-0 nova_compute[250018]: 2026-01-20 14:23:42.345 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:23:42 compute-0 nova_compute[250018]: 2026-01-20 14:23:42.350 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:23:42 compute-0 nova_compute[250018]: 2026-01-20 14:23:42.351 250022 INFO os_vif [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:35:ca:37,bridge_name='br-int',has_traffic_filtering=True,id=a03c7653-e053-4ded-befc-99e09d6f7ed5,network=Network(abfbbc51-530d-4964-87bc-9fe4ef7eea76),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa03c7653-e0')
Jan 20 14:23:42 compute-0 nova_compute[250018]: 2026-01-20 14:23:42.947 250022 DEBUG nova.virt.libvirt.driver [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:23:42 compute-0 nova_compute[250018]: 2026-01-20 14:23:42.948 250022 DEBUG nova.virt.libvirt.driver [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:23:42 compute-0 nova_compute[250018]: 2026-01-20 14:23:42.948 250022 DEBUG nova.virt.libvirt.driver [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] No VIF found with MAC fa:16:3e:35:ca:37, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 14:23:42 compute-0 nova_compute[250018]: 2026-01-20 14:23:42.949 250022 INFO nova.virt.libvirt.driver [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] [instance: 288f4f83-d33c-44ef-bf78-16cacd3f811f] Using config drive
Jan 20 14:23:42 compute-0 ceph-osd[84815]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Jan 20 14:23:42 compute-0 nova_compute[250018]: 2026-01-20 14:23:42.997 250022 DEBUG nova.storage.rbd_utils [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] rbd image 288f4f83-d33c-44ef-bf78-16cacd3f811f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:23:43 compute-0 ceph-mon[74360]: pgmap v966: 321 pgs: 321 active+clean; 260 MiB data, 322 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 17 KiB/s wr, 85 op/s
Jan 20 14:23:43 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3235933527' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:23:43 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v968: 321 pgs: 321 active+clean; 260 MiB data, 322 MiB used, 21 GiB / 21 GiB avail; 3.6 MiB/s rd, 15 KiB/s wr, 133 op/s
Jan 20 14:23:43 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:23:43 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 20 14:23:43 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:23:43.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 20 14:23:43 compute-0 nova_compute[250018]: 2026-01-20 14:23:43.394 250022 INFO nova.virt.libvirt.driver [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] [instance: 288f4f83-d33c-44ef-bf78-16cacd3f811f] Creating config drive at /var/lib/nova/instances/288f4f83-d33c-44ef-bf78-16cacd3f811f/disk.config
Jan 20 14:23:43 compute-0 nova_compute[250018]: 2026-01-20 14:23:43.400 250022 DEBUG oslo_concurrency.processutils [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/288f4f83-d33c-44ef-bf78-16cacd3f811f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpga3yv60e execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:23:43 compute-0 nova_compute[250018]: 2026-01-20 14:23:43.529 250022 DEBUG oslo_concurrency.processutils [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/288f4f83-d33c-44ef-bf78-16cacd3f811f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpga3yv60e" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:23:44 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:23:44 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:23:44 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:23:44.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:23:44 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:23:44 compute-0 nova_compute[250018]: 2026-01-20 14:23:44.494 250022 DEBUG nova.storage.rbd_utils [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] rbd image 288f4f83-d33c-44ef-bf78-16cacd3f811f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:23:44 compute-0 nova_compute[250018]: 2026-01-20 14:23:44.501 250022 DEBUG oslo_concurrency.processutils [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/288f4f83-d33c-44ef-bf78-16cacd3f811f/disk.config 288f4f83-d33c-44ef-bf78-16cacd3f811f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:23:44 compute-0 ceph-mon[74360]: pgmap v967: 321 pgs: 321 active+clean; 260 MiB data, 322 MiB used, 21 GiB / 21 GiB avail; 2.5 MiB/s rd, 17 KiB/s wr, 99 op/s
Jan 20 14:23:44 compute-0 ceph-mon[74360]: pgmap v968: 321 pgs: 321 active+clean; 260 MiB data, 322 MiB used, 21 GiB / 21 GiB avail; 3.6 MiB/s rd, 15 KiB/s wr, 133 op/s
Jan 20 14:23:44 compute-0 nova_compute[250018]: 2026-01-20 14:23:44.708 250022 DEBUG oslo_concurrency.processutils [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/288f4f83-d33c-44ef-bf78-16cacd3f811f/disk.config 288f4f83-d33c-44ef-bf78-16cacd3f811f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.208s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:23:44 compute-0 nova_compute[250018]: 2026-01-20 14:23:44.710 250022 INFO nova.virt.libvirt.driver [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] [instance: 288f4f83-d33c-44ef-bf78-16cacd3f811f] Deleting local config drive /var/lib/nova/instances/288f4f83-d33c-44ef-bf78-16cacd3f811f/disk.config because it was imported into RBD.
Jan 20 14:23:44 compute-0 kernel: tun: Universal TUN/TAP device driver, 1.6
Jan 20 14:23:44 compute-0 nova_compute[250018]: 2026-01-20 14:23:44.841 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:23:44 compute-0 kernel: tapa03c7653-e0: entered promiscuous mode
Jan 20 14:23:44 compute-0 NetworkManager[48960]: <info>  [1768919024.8434] manager: (tapa03c7653-e0): new Tun device (/org/freedesktop/NetworkManager/Devices/24)
Jan 20 14:23:44 compute-0 ovn_controller[148666]: 2026-01-20T14:23:44Z|00027|binding|INFO|Claiming lport a03c7653-e053-4ded-befc-99e09d6f7ed5 for this chassis.
Jan 20 14:23:44 compute-0 ovn_controller[148666]: 2026-01-20T14:23:44Z|00028|binding|INFO|a03c7653-e053-4ded-befc-99e09d6f7ed5: Claiming fa:16:3e:35:ca:37 10.1.0.8 fdfe:381f:8400::93
Jan 20 14:23:44 compute-0 nova_compute[250018]: 2026-01-20 14:23:44.845 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:23:44 compute-0 nova_compute[250018]: 2026-01-20 14:23:44.848 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:23:44 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:23:44.863 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:35:ca:37 10.1.0.8 fdfe:381f:8400::93'], port_security=['fa:16:3e:35:ca:37 10.1.0.8 fdfe:381f:8400::93'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.1.0.8/26 fdfe:381f:8400::93/64', 'neutron:device_id': '288f4f83-d33c-44ef-bf78-16cacd3f811f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-abfbbc51-530d-4964-87bc-9fe4ef7eea76', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e024eef627014f829fa6e45ffe36c281', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd16ec2ec-302d-4208-8497-5b8aae342313', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=72f351f7-24c7-4d5d-b1a1-e23b4cd26746, chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=a03c7653-e053-4ded-befc-99e09d6f7ed5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:23:44 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:23:44.864 160071 INFO neutron.agent.ovn.metadata.agent [-] Port a03c7653-e053-4ded-befc-99e09d6f7ed5 in datapath abfbbc51-530d-4964-87bc-9fe4ef7eea76 bound to our chassis
Jan 20 14:23:44 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:23:44.867 160071 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network abfbbc51-530d-4964-87bc-9fe4ef7eea76
Jan 20 14:23:44 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:23:44.869 160071 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmp9zel8wcp/privsep.sock']
Jan 20 14:23:44 compute-0 systemd-udevd[257586]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:23:44 compute-0 NetworkManager[48960]: <info>  [1768919024.8931] device (tapa03c7653-e0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 14:23:44 compute-0 NetworkManager[48960]: <info>  [1768919024.8938] device (tapa03c7653-e0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 14:23:44 compute-0 systemd-machined[216401]: New machine qemu-2-instance-00000004.
Jan 20 14:23:44 compute-0 systemd[1]: Started Virtual Machine qemu-2-instance-00000004.
Jan 20 14:23:44 compute-0 nova_compute[250018]: 2026-01-20 14:23:44.929 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:23:44 compute-0 ovn_controller[148666]: 2026-01-20T14:23:44Z|00029|binding|INFO|Setting lport a03c7653-e053-4ded-befc-99e09d6f7ed5 ovn-installed in OVS
Jan 20 14:23:44 compute-0 ovn_controller[148666]: 2026-01-20T14:23:44Z|00030|binding|INFO|Setting lport a03c7653-e053-4ded-befc-99e09d6f7ed5 up in Southbound
Jan 20 14:23:44 compute-0 nova_compute[250018]: 2026-01-20 14:23:44.935 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:23:45 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v969: 321 pgs: 321 active+clean; 260 MiB data, 341 MiB used, 21 GiB / 21 GiB avail; 3.3 MiB/s rd, 15 KiB/s wr, 134 op/s
Jan 20 14:23:45 compute-0 nova_compute[250018]: 2026-01-20 14:23:45.269 250022 DEBUG nova.compute.manager [req-d3372cbf-3b4d-41ed-9757-18c0b6157fb1 req-3e70c1a6-87c0-45c0-a59a-19b291b9cd82 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 288f4f83-d33c-44ef-bf78-16cacd3f811f] Received event network-vif-plugged-a03c7653-e053-4ded-befc-99e09d6f7ed5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:23:45 compute-0 nova_compute[250018]: 2026-01-20 14:23:45.269 250022 DEBUG oslo_concurrency.lockutils [req-d3372cbf-3b4d-41ed-9757-18c0b6157fb1 req-3e70c1a6-87c0-45c0-a59a-19b291b9cd82 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "288f4f83-d33c-44ef-bf78-16cacd3f811f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:23:45 compute-0 nova_compute[250018]: 2026-01-20 14:23:45.270 250022 DEBUG oslo_concurrency.lockutils [req-d3372cbf-3b4d-41ed-9757-18c0b6157fb1 req-3e70c1a6-87c0-45c0-a59a-19b291b9cd82 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "288f4f83-d33c-44ef-bf78-16cacd3f811f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:23:45 compute-0 nova_compute[250018]: 2026-01-20 14:23:45.270 250022 DEBUG oslo_concurrency.lockutils [req-d3372cbf-3b4d-41ed-9757-18c0b6157fb1 req-3e70c1a6-87c0-45c0-a59a-19b291b9cd82 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "288f4f83-d33c-44ef-bf78-16cacd3f811f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:23:45 compute-0 nova_compute[250018]: 2026-01-20 14:23:45.270 250022 DEBUG nova.compute.manager [req-d3372cbf-3b4d-41ed-9757-18c0b6157fb1 req-3e70c1a6-87c0-45c0-a59a-19b291b9cd82 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 288f4f83-d33c-44ef-bf78-16cacd3f811f] Processing event network-vif-plugged-a03c7653-e053-4ded-befc-99e09d6f7ed5 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 20 14:23:45 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:23:45 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:23:45 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:23:45.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:23:45 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:23:45.546 160071 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Jan 20 14:23:45 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:23:45.547 160071 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp9zel8wcp/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Jan 20 14:23:45 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:23:45.428 257604 INFO oslo.privsep.daemon [-] privsep daemon starting
Jan 20 14:23:45 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:23:45.432 257604 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Jan 20 14:23:45 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:23:45.434 257604 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none
Jan 20 14:23:45 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:23:45.434 257604 INFO oslo.privsep.daemon [-] privsep daemon running as pid 257604
Jan 20 14:23:45 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:23:45.549 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[3959baee-d69a-4340-8461-e5846070fcbc]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:23:45 compute-0 ceph-mon[74360]: pgmap v969: 321 pgs: 321 active+clean; 260 MiB data, 341 MiB used, 21 GiB / 21 GiB avail; 3.3 MiB/s rd, 15 KiB/s wr, 134 op/s
Jan 20 14:23:46 compute-0 nova_compute[250018]: 2026-01-20 14:23:46.083 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768919026.0826437, 288f4f83-d33c-44ef-bf78-16cacd3f811f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:23:46 compute-0 nova_compute[250018]: 2026-01-20 14:23:46.083 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 288f4f83-d33c-44ef-bf78-16cacd3f811f] VM Started (Lifecycle Event)
Jan 20 14:23:46 compute-0 nova_compute[250018]: 2026-01-20 14:23:46.086 250022 DEBUG nova.compute.manager [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] [instance: 288f4f83-d33c-44ef-bf78-16cacd3f811f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 14:23:46 compute-0 nova_compute[250018]: 2026-01-20 14:23:46.090 250022 DEBUG nova.virt.libvirt.driver [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] [instance: 288f4f83-d33c-44ef-bf78-16cacd3f811f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 14:23:46 compute-0 nova_compute[250018]: 2026-01-20 14:23:46.093 250022 INFO nova.virt.libvirt.driver [-] [instance: 288f4f83-d33c-44ef-bf78-16cacd3f811f] Instance spawned successfully.
Jan 20 14:23:46 compute-0 nova_compute[250018]: 2026-01-20 14:23:46.093 250022 DEBUG nova.virt.libvirt.driver [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] [instance: 288f4f83-d33c-44ef-bf78-16cacd3f811f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 14:23:46 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:23:46.135 257604 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:23:46 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:23:46.136 257604 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:23:46 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:23:46.137 257604 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:23:46 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:23:46 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:23:46 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:23:46.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:23:46 compute-0 nova_compute[250018]: 2026-01-20 14:23:46.452 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 288f4f83-d33c-44ef-bf78-16cacd3f811f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:23:46 compute-0 nova_compute[250018]: 2026-01-20 14:23:46.459 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 288f4f83-d33c-44ef-bf78-16cacd3f811f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:23:46 compute-0 nova_compute[250018]: 2026-01-20 14:23:46.464 250022 DEBUG nova.virt.libvirt.driver [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] [instance: 288f4f83-d33c-44ef-bf78-16cacd3f811f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:23:46 compute-0 nova_compute[250018]: 2026-01-20 14:23:46.464 250022 DEBUG nova.virt.libvirt.driver [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] [instance: 288f4f83-d33c-44ef-bf78-16cacd3f811f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:23:46 compute-0 nova_compute[250018]: 2026-01-20 14:23:46.465 250022 DEBUG nova.virt.libvirt.driver [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] [instance: 288f4f83-d33c-44ef-bf78-16cacd3f811f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:23:46 compute-0 nova_compute[250018]: 2026-01-20 14:23:46.465 250022 DEBUG nova.virt.libvirt.driver [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] [instance: 288f4f83-d33c-44ef-bf78-16cacd3f811f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:23:46 compute-0 nova_compute[250018]: 2026-01-20 14:23:46.466 250022 DEBUG nova.virt.libvirt.driver [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] [instance: 288f4f83-d33c-44ef-bf78-16cacd3f811f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:23:46 compute-0 nova_compute[250018]: 2026-01-20 14:23:46.466 250022 DEBUG nova.virt.libvirt.driver [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] [instance: 288f4f83-d33c-44ef-bf78-16cacd3f811f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:23:46 compute-0 nova_compute[250018]: 2026-01-20 14:23:46.493 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 288f4f83-d33c-44ef-bf78-16cacd3f811f] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:23:46 compute-0 nova_compute[250018]: 2026-01-20 14:23:46.494 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768919026.0827947, 288f4f83-d33c-44ef-bf78-16cacd3f811f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:23:46 compute-0 nova_compute[250018]: 2026-01-20 14:23:46.494 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 288f4f83-d33c-44ef-bf78-16cacd3f811f] VM Paused (Lifecycle Event)
Jan 20 14:23:46 compute-0 nova_compute[250018]: 2026-01-20 14:23:46.516 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 288f4f83-d33c-44ef-bf78-16cacd3f811f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:23:46 compute-0 nova_compute[250018]: 2026-01-20 14:23:46.520 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768919026.0891383, 288f4f83-d33c-44ef-bf78-16cacd3f811f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:23:46 compute-0 nova_compute[250018]: 2026-01-20 14:23:46.520 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 288f4f83-d33c-44ef-bf78-16cacd3f811f] VM Resumed (Lifecycle Event)
Jan 20 14:23:46 compute-0 nova_compute[250018]: 2026-01-20 14:23:46.543 250022 INFO nova.compute.manager [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] [instance: 288f4f83-d33c-44ef-bf78-16cacd3f811f] Took 53.57 seconds to spawn the instance on the hypervisor.
Jan 20 14:23:46 compute-0 nova_compute[250018]: 2026-01-20 14:23:46.544 250022 DEBUG nova.compute.manager [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] [instance: 288f4f83-d33c-44ef-bf78-16cacd3f811f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:23:46 compute-0 nova_compute[250018]: 2026-01-20 14:23:46.554 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 288f4f83-d33c-44ef-bf78-16cacd3f811f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:23:46 compute-0 nova_compute[250018]: 2026-01-20 14:23:46.558 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 288f4f83-d33c-44ef-bf78-16cacd3f811f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:23:46 compute-0 nova_compute[250018]: 2026-01-20 14:23:46.577 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 288f4f83-d33c-44ef-bf78-16cacd3f811f] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:23:46 compute-0 nova_compute[250018]: 2026-01-20 14:23:46.613 250022 INFO nova.compute.manager [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] [instance: 288f4f83-d33c-44ef-bf78-16cacd3f811f] Took 56.24 seconds to build instance.
Jan 20 14:23:46 compute-0 nova_compute[250018]: 2026-01-20 14:23:46.641 250022 DEBUG oslo_concurrency.lockutils [None req-6d0bfef6-530d-40dd-b237-62607132ea42 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Lock "288f4f83-d33c-44ef-bf78-16cacd3f811f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 56.414s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:23:46 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:23:46.805 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[71937606-9225-4bba-8119-0c62483dba11]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:23:46 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:23:46.807 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapabfbbc51-51 in ovnmeta-abfbbc51-530d-4964-87bc-9fe4ef7eea76 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 14:23:46 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:23:46.809 257604 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapabfbbc51-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 14:23:46 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:23:46.809 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[6923d5b1-cb4d-43d3-ae59-46d55bf5f30a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:23:46 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:23:46.813 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[15367e28-4ba4-4fe0-a0b9-95c0cf1fbb6c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:23:46 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:23:46.833 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[4719177b-b53f-4145-b20b-681eaefad99f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:23:46 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:23:46.858 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[25966393-812b-4dd8-b281-29af7d45bae6]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:23:46 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:23:46.860 160071 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmpovyy153i/privsep.sock']
Jan 20 14:23:47 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/4216244016' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:23:47 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/620535083' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:23:47 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v970: 321 pgs: 321 active+clean; 320 MiB data, 397 MiB used, 21 GiB / 21 GiB avail; 3.2 MiB/s rd, 3.2 MiB/s wr, 170 op/s
Jan 20 14:23:47 compute-0 nova_compute[250018]: 2026-01-20 14:23:47.341 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:23:47 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:23:47 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:23:47 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:23:47.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:23:47 compute-0 nova_compute[250018]: 2026-01-20 14:23:47.529 250022 DEBUG nova.compute.manager [req-136864ad-fdf4-48b6-a80a-c209bc4ab1ce req-151a2e72-ee6a-463c-b62a-381e2fc926c6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 288f4f83-d33c-44ef-bf78-16cacd3f811f] Received event network-vif-plugged-a03c7653-e053-4ded-befc-99e09d6f7ed5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:23:47 compute-0 nova_compute[250018]: 2026-01-20 14:23:47.529 250022 DEBUG oslo_concurrency.lockutils [req-136864ad-fdf4-48b6-a80a-c209bc4ab1ce req-151a2e72-ee6a-463c-b62a-381e2fc926c6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "288f4f83-d33c-44ef-bf78-16cacd3f811f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:23:47 compute-0 nova_compute[250018]: 2026-01-20 14:23:47.529 250022 DEBUG oslo_concurrency.lockutils [req-136864ad-fdf4-48b6-a80a-c209bc4ab1ce req-151a2e72-ee6a-463c-b62a-381e2fc926c6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "288f4f83-d33c-44ef-bf78-16cacd3f811f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:23:47 compute-0 nova_compute[250018]: 2026-01-20 14:23:47.529 250022 DEBUG oslo_concurrency.lockutils [req-136864ad-fdf4-48b6-a80a-c209bc4ab1ce req-151a2e72-ee6a-463c-b62a-381e2fc926c6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "288f4f83-d33c-44ef-bf78-16cacd3f811f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:23:47 compute-0 nova_compute[250018]: 2026-01-20 14:23:47.530 250022 DEBUG nova.compute.manager [req-136864ad-fdf4-48b6-a80a-c209bc4ab1ce req-151a2e72-ee6a-463c-b62a-381e2fc926c6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 288f4f83-d33c-44ef-bf78-16cacd3f811f] No waiting events found dispatching network-vif-plugged-a03c7653-e053-4ded-befc-99e09d6f7ed5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:23:47 compute-0 nova_compute[250018]: 2026-01-20 14:23:47.530 250022 WARNING nova.compute.manager [req-136864ad-fdf4-48b6-a80a-c209bc4ab1ce req-151a2e72-ee6a-463c-b62a-381e2fc926c6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 288f4f83-d33c-44ef-bf78-16cacd3f811f] Received unexpected event network-vif-plugged-a03c7653-e053-4ded-befc-99e09d6f7ed5 for instance with vm_state active and task_state None.
Jan 20 14:23:47 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:23:47.551 160071 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Jan 20 14:23:47 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:23:47.552 160071 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpovyy153i/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Jan 20 14:23:47 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:23:47.434 257661 INFO oslo.privsep.daemon [-] privsep daemon starting
Jan 20 14:23:47 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:23:47.441 257661 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Jan 20 14:23:47 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:23:47.445 257661 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Jan 20 14:23:47 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:23:47.445 257661 INFO oslo.privsep.daemon [-] privsep daemon running as pid 257661
Jan 20 14:23:47 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:23:47.555 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[1097bcaf-d1bd-4f30-bb38-0b1af972fd3d]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:23:48 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:23:48.070 257661 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:23:48 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:23:48.070 257661 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:23:48 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:23:48.070 257661 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:23:48 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:23:48 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:23:48 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:23:48.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:23:48 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:23:48.666 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[1351291e-2d08-4aae-a40c-a19aa6fd3c46]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:23:48 compute-0 ceph-mon[74360]: pgmap v970: 321 pgs: 321 active+clean; 320 MiB data, 397 MiB used, 21 GiB / 21 GiB avail; 3.2 MiB/s rd, 3.2 MiB/s wr, 170 op/s
Jan 20 14:23:48 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/2442230621' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:23:48 compute-0 NetworkManager[48960]: <info>  [1768919028.7217] manager: (tapabfbbc51-50): new Veth device (/org/freedesktop/NetworkManager/Devices/25)
Jan 20 14:23:48 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:23:48.722 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[6df5d1f8-9abd-41ea-888c-decbbfef0491]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:23:48 compute-0 systemd-udevd[257674]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:23:48 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:23:48.753 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[9e2184bb-8060-444c-9af8-fae8151232e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:23:48 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:23:48.756 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[14bd6319-1212-47cb-b44b-857f1ed00d6d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:23:48 compute-0 NetworkManager[48960]: <info>  [1768919028.7828] device (tapabfbbc51-50): carrier: link connected
Jan 20 14:23:48 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:23:48.788 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[d1ee6266-a0e7-4b6f-b7cc-c41327e0fab0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:23:48 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:23:48.809 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[4ce12edd-1e9e-43e2-a7a3-1125381fb979]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapabfbbc51-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e5:8a:9e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 496950, 'reachable_time': 29933, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 257692, 'error': None, 'target': 'ovnmeta-abfbbc51-530d-4964-87bc-9fe4ef7eea76', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:23:48 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:23:48.825 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[a69656a6-33ce-4141-9887-ffbb3f24eaa6]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee5:8a9e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 496950, 'tstamp': 496950}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 257693, 'error': None, 'target': 'ovnmeta-abfbbc51-530d-4964-87bc-9fe4ef7eea76', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:23:48 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:23:48.841 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[861b40f9-89a0-4e04-8134-35d9e39e2ffe]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapabfbbc51-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e5:8a:9e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 496950, 'reachable_time': 29933, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 257694, 'error': None, 'target': 'ovnmeta-abfbbc51-530d-4964-87bc-9fe4ef7eea76', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:23:48 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:23:48.883 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[4d8c0bab-4f19-4d90-9a03-d486e1947e21]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:23:48 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:23:48.938 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[fb9d2649-c58d-42da-8d74-697be7272eb0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:23:48 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:23:48.939 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapabfbbc51-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:23:48 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:23:48.940 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:23:48 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:23:48.940 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapabfbbc51-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:23:48 compute-0 nova_compute[250018]: 2026-01-20 14:23:48.941 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:23:48 compute-0 kernel: tapabfbbc51-50: entered promiscuous mode
Jan 20 14:23:48 compute-0 NetworkManager[48960]: <info>  [1768919028.9451] manager: (tapabfbbc51-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/26)
Jan 20 14:23:48 compute-0 nova_compute[250018]: 2026-01-20 14:23:48.945 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:23:48 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:23:48.946 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapabfbbc51-50, col_values=(('external_ids', {'iface-id': 'ded59d17-485f-4d1e-8a09-5098adeebfa4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:23:48 compute-0 ovn_controller[148666]: 2026-01-20T14:23:48Z|00031|binding|INFO|Releasing lport ded59d17-485f-4d1e-8a09-5098adeebfa4 from this chassis (sb_readonly=0)
Jan 20 14:23:48 compute-0 nova_compute[250018]: 2026-01-20 14:23:48.947 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:23:48 compute-0 nova_compute[250018]: 2026-01-20 14:23:48.962 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:23:48 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:23:48.963 160071 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/abfbbc51-530d-4964-87bc-9fe4ef7eea76.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/abfbbc51-530d-4964-87bc-9fe4ef7eea76.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 14:23:48 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:23:48.963 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[a36e917c-0290-4a59-87ae-c2eff9ee53ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:23:48 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:23:48.964 160071 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 14:23:48 compute-0 ovn_metadata_agent[160049]: global
Jan 20 14:23:48 compute-0 ovn_metadata_agent[160049]:     log         /dev/log local0 debug
Jan 20 14:23:48 compute-0 ovn_metadata_agent[160049]:     log-tag     haproxy-metadata-proxy-abfbbc51-530d-4964-87bc-9fe4ef7eea76
Jan 20 14:23:48 compute-0 ovn_metadata_agent[160049]:     user        root
Jan 20 14:23:48 compute-0 ovn_metadata_agent[160049]:     group       root
Jan 20 14:23:48 compute-0 ovn_metadata_agent[160049]:     maxconn     1024
Jan 20 14:23:48 compute-0 ovn_metadata_agent[160049]:     pidfile     /var/lib/neutron/external/pids/abfbbc51-530d-4964-87bc-9fe4ef7eea76.pid.haproxy
Jan 20 14:23:48 compute-0 ovn_metadata_agent[160049]:     daemon
Jan 20 14:23:48 compute-0 ovn_metadata_agent[160049]: 
Jan 20 14:23:48 compute-0 ovn_metadata_agent[160049]: defaults
Jan 20 14:23:48 compute-0 ovn_metadata_agent[160049]:     log global
Jan 20 14:23:48 compute-0 ovn_metadata_agent[160049]:     mode http
Jan 20 14:23:48 compute-0 ovn_metadata_agent[160049]:     option httplog
Jan 20 14:23:48 compute-0 ovn_metadata_agent[160049]:     option dontlognull
Jan 20 14:23:48 compute-0 ovn_metadata_agent[160049]:     option http-server-close
Jan 20 14:23:48 compute-0 ovn_metadata_agent[160049]:     option forwardfor
Jan 20 14:23:48 compute-0 ovn_metadata_agent[160049]:     retries                 3
Jan 20 14:23:48 compute-0 ovn_metadata_agent[160049]:     timeout http-request    30s
Jan 20 14:23:48 compute-0 ovn_metadata_agent[160049]:     timeout connect         30s
Jan 20 14:23:48 compute-0 ovn_metadata_agent[160049]:     timeout client          32s
Jan 20 14:23:48 compute-0 ovn_metadata_agent[160049]:     timeout server          32s
Jan 20 14:23:48 compute-0 ovn_metadata_agent[160049]:     timeout http-keep-alive 30s
Jan 20 14:23:48 compute-0 ovn_metadata_agent[160049]: 
Jan 20 14:23:48 compute-0 ovn_metadata_agent[160049]: 
Jan 20 14:23:48 compute-0 ovn_metadata_agent[160049]: listen listener
Jan 20 14:23:48 compute-0 ovn_metadata_agent[160049]:     bind 169.254.169.254:80
Jan 20 14:23:48 compute-0 ovn_metadata_agent[160049]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 14:23:48 compute-0 ovn_metadata_agent[160049]:     http-request add-header X-OVN-Network-ID abfbbc51-530d-4964-87bc-9fe4ef7eea76
Jan 20 14:23:48 compute-0 ovn_metadata_agent[160049]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 14:23:48 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:23:48.965 160071 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-abfbbc51-530d-4964-87bc-9fe4ef7eea76', 'env', 'PROCESS_TAG=haproxy-abfbbc51-530d-4964-87bc-9fe4ef7eea76', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/abfbbc51-530d-4964-87bc-9fe4ef7eea76.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 14:23:49 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v971: 321 pgs: 321 active+clean; 342 MiB data, 410 MiB used, 21 GiB / 21 GiB avail; 2.4 MiB/s rd, 4.2 MiB/s wr, 171 op/s
Jan 20 14:23:49 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:23:49 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:23:49 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:23:49.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:23:49 compute-0 podman[257725]: 2026-01-20 14:23:49.321080834 +0000 UTC m=+0.027176204 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 14:23:49 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:23:49 compute-0 podman[257725]: 2026-01-20 14:23:49.53864448 +0000 UTC m=+0.244739770 container create 1e9743275a2ddbb4f9273b46d0ea3e67784fbb9f2c5dd50725978f21c8d0b6d6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-abfbbc51-530d-4964-87bc-9fe4ef7eea76, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 20 14:23:49 compute-0 systemd[1]: Started libpod-conmon-1e9743275a2ddbb4f9273b46d0ea3e67784fbb9f2c5dd50725978f21c8d0b6d6.scope.
Jan 20 14:23:49 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:23:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd3f27a9c330a55527dc201fe0979b138c4ba1f93e23980260135a1e83283b3e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 14:23:49 compute-0 nova_compute[250018]: 2026-01-20 14:23:49.843 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:23:50 compute-0 podman[257725]: 2026-01-20 14:23:50.086969364 +0000 UTC m=+0.793064664 container init 1e9743275a2ddbb4f9273b46d0ea3e67784fbb9f2c5dd50725978f21c8d0b6d6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-abfbbc51-530d-4964-87bc-9fe4ef7eea76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 14:23:50 compute-0 ceph-mon[74360]: pgmap v971: 321 pgs: 321 active+clean; 342 MiB data, 410 MiB used, 21 GiB / 21 GiB avail; 2.4 MiB/s rd, 4.2 MiB/s wr, 171 op/s
Jan 20 14:23:50 compute-0 podman[257725]: 2026-01-20 14:23:50.09709757 +0000 UTC m=+0.803192870 container start 1e9743275a2ddbb4f9273b46d0ea3e67784fbb9f2c5dd50725978f21c8d0b6d6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-abfbbc51-530d-4964-87bc-9fe4ef7eea76, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 20 14:23:50 compute-0 neutron-haproxy-ovnmeta-abfbbc51-530d-4964-87bc-9fe4ef7eea76[257739]: [NOTICE]   (257743) : New worker (257745) forked
Jan 20 14:23:50 compute-0 neutron-haproxy-ovnmeta-abfbbc51-530d-4964-87bc-9fe4ef7eea76[257739]: [NOTICE]   (257743) : Loading success.
Jan 20 14:23:50 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:23:50 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:23:50 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:23:50.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:23:51 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v972: 321 pgs: 321 active+clean; 355 MiB data, 415 MiB used, 21 GiB / 21 GiB avail; 3.0 MiB/s rd, 4.7 MiB/s wr, 203 op/s
Jan 20 14:23:51 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:23:51 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:23:51 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:23:51.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:23:52 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:23:52 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:23:52 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:23:52.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:23:52 compute-0 nova_compute[250018]: 2026-01-20 14:23:52.345 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:23:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Optimize plan auto_2026-01-20_14:23:52
Jan 20 14:23:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 14:23:52 compute-0 ceph-mgr[74653]: [balancer INFO root] do_upmap
Jan 20 14:23:52 compute-0 ceph-mgr[74653]: [balancer INFO root] pools ['default.rgw.log', 'backups', 'images', 'default.rgw.control', 'cephfs.cephfs.data', 'default.rgw.meta', '.mgr', 'volumes', '.rgw.root', 'cephfs.cephfs.meta', 'vms']
Jan 20 14:23:52 compute-0 ceph-mgr[74653]: [balancer INFO root] prepared 0/10 changes
Jan 20 14:23:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:23:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:23:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:23:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:23:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:23:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:23:52 compute-0 ceph-mon[74360]: pgmap v972: 321 pgs: 321 active+clean; 355 MiB data, 415 MiB used, 21 GiB / 21 GiB avail; 3.0 MiB/s rd, 4.7 MiB/s wr, 203 op/s
Jan 20 14:23:52 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/142931646' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:23:52 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/341761694' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:23:53 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v973: 321 pgs: 321 active+clean; 379 MiB data, 425 MiB used, 21 GiB / 21 GiB avail; 5.4 MiB/s rd, 5.5 MiB/s wr, 302 op/s
Jan 20 14:23:53 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:23:53 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:23:53 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:23:53.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:23:53 compute-0 ceph-mon[74360]: pgmap v973: 321 pgs: 321 active+clean; 379 MiB data, 425 MiB used, 21 GiB / 21 GiB avail; 5.4 MiB/s rd, 5.5 MiB/s wr, 302 op/s
Jan 20 14:23:54 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:23:54 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:23:54 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:23:54.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:23:54 compute-0 podman[257757]: 2026-01-20 14:23:54.459167536 +0000 UTC m=+0.048928294 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, tcib_managed=true)
Jan 20 14:23:54 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:23:54 compute-0 podman[257756]: 2026-01-20 14:23:54.492298415 +0000 UTC m=+0.082796773 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 20 14:23:54 compute-0 nova_compute[250018]: 2026-01-20 14:23:54.846 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:23:55 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v974: 321 pgs: 321 active+clean; 365 MiB data, 417 MiB used, 21 GiB / 21 GiB avail; 4.6 MiB/s rd, 5.9 MiB/s wr, 288 op/s
Jan 20 14:23:55 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:23:55 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:23:55 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:23:55.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:23:56 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:23:56 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 20 14:23:56 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:23:56.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 20 14:23:56 compute-0 ceph-mon[74360]: pgmap v974: 321 pgs: 321 active+clean; 365 MiB data, 417 MiB used, 21 GiB / 21 GiB avail; 4.6 MiB/s rd, 5.9 MiB/s wr, 288 op/s
Jan 20 14:23:56 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/3389830566' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:23:56 compute-0 sudo[257803]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:23:56 compute-0 sudo[257803]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:23:56 compute-0 sudo[257803]: pam_unix(sudo:session): session closed for user root
Jan 20 14:23:56 compute-0 sudo[257828]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:23:56 compute-0 sudo[257828]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:23:56 compute-0 sudo[257828]: pam_unix(sudo:session): session closed for user root
Jan 20 14:23:57 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v975: 321 pgs: 321 active+clean; 334 MiB data, 409 MiB used, 21 GiB / 21 GiB avail; 6.1 MiB/s rd, 7.8 MiB/s wr, 395 op/s
Jan 20 14:23:57 compute-0 nova_compute[250018]: 2026-01-20 14:23:57.347 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:23:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 14:23:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:23:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 14:23:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:23:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:23:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:23:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:23:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:23:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:23:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:23:57 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:23:57 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:23:57 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:23:57.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:23:58 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:23:58 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:23:58 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:23:58.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:23:58 compute-0 ceph-mon[74360]: pgmap v975: 321 pgs: 321 active+clean; 334 MiB data, 409 MiB used, 21 GiB / 21 GiB avail; 6.1 MiB/s rd, 7.8 MiB/s wr, 395 op/s
Jan 20 14:23:59 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v976: 321 pgs: 321 active+clean; 326 MiB data, 401 MiB used, 21 GiB / 21 GiB avail; 5.8 MiB/s rd, 4.6 MiB/s wr, 355 op/s
Jan 20 14:23:59 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/2656191454' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:23:59 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/1743970707' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:23:59 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:23:59 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:23:59 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:23:59.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:23:59 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:23:59 compute-0 nova_compute[250018]: 2026-01-20 14:23:59.626 250022 DEBUG oslo_concurrency.lockutils [None req-4e1e4440-978e-472a-a28e-c025096428a3 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Acquiring lock "288f4f83-d33c-44ef-bf78-16cacd3f811f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:23:59 compute-0 nova_compute[250018]: 2026-01-20 14:23:59.627 250022 DEBUG oslo_concurrency.lockutils [None req-4e1e4440-978e-472a-a28e-c025096428a3 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Lock "288f4f83-d33c-44ef-bf78-16cacd3f811f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:23:59 compute-0 nova_compute[250018]: 2026-01-20 14:23:59.627 250022 DEBUG oslo_concurrency.lockutils [None req-4e1e4440-978e-472a-a28e-c025096428a3 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Acquiring lock "288f4f83-d33c-44ef-bf78-16cacd3f811f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:23:59 compute-0 nova_compute[250018]: 2026-01-20 14:23:59.627 250022 DEBUG oslo_concurrency.lockutils [None req-4e1e4440-978e-472a-a28e-c025096428a3 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Lock "288f4f83-d33c-44ef-bf78-16cacd3f811f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:23:59 compute-0 nova_compute[250018]: 2026-01-20 14:23:59.628 250022 DEBUG oslo_concurrency.lockutils [None req-4e1e4440-978e-472a-a28e-c025096428a3 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Lock "288f4f83-d33c-44ef-bf78-16cacd3f811f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:23:59 compute-0 nova_compute[250018]: 2026-01-20 14:23:59.629 250022 INFO nova.compute.manager [None req-4e1e4440-978e-472a-a28e-c025096428a3 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] [instance: 288f4f83-d33c-44ef-bf78-16cacd3f811f] Terminating instance
Jan 20 14:23:59 compute-0 nova_compute[250018]: 2026-01-20 14:23:59.630 250022 DEBUG nova.compute.manager [None req-4e1e4440-978e-472a-a28e-c025096428a3 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] [instance: 288f4f83-d33c-44ef-bf78-16cacd3f811f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 14:23:59 compute-0 kernel: tapa03c7653-e0 (unregistering): left promiscuous mode
Jan 20 14:23:59 compute-0 NetworkManager[48960]: <info>  [1768919039.6849] device (tapa03c7653-e0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 14:23:59 compute-0 nova_compute[250018]: 2026-01-20 14:23:59.695 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:23:59 compute-0 ovn_controller[148666]: 2026-01-20T14:23:59Z|00032|binding|INFO|Releasing lport a03c7653-e053-4ded-befc-99e09d6f7ed5 from this chassis (sb_readonly=0)
Jan 20 14:23:59 compute-0 ovn_controller[148666]: 2026-01-20T14:23:59Z|00033|binding|INFO|Setting lport a03c7653-e053-4ded-befc-99e09d6f7ed5 down in Southbound
Jan 20 14:23:59 compute-0 ovn_controller[148666]: 2026-01-20T14:23:59Z|00034|binding|INFO|Removing iface tapa03c7653-e0 ovn-installed in OVS
Jan 20 14:23:59 compute-0 nova_compute[250018]: 2026-01-20 14:23:59.697 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:23:59 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:23:59.704 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:35:ca:37 10.1.0.8 fdfe:381f:8400::93'], port_security=['fa:16:3e:35:ca:37 10.1.0.8 fdfe:381f:8400::93'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.1.0.8/26 fdfe:381f:8400::93/64', 'neutron:device_id': '288f4f83-d33c-44ef-bf78-16cacd3f811f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-abfbbc51-530d-4964-87bc-9fe4ef7eea76', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e024eef627014f829fa6e45ffe36c281', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd16ec2ec-302d-4208-8497-5b8aae342313', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=72f351f7-24c7-4d5d-b1a1-e23b4cd26746, chassis=[], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=a03c7653-e053-4ded-befc-99e09d6f7ed5) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:23:59 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:23:59.705 160071 INFO neutron.agent.ovn.metadata.agent [-] Port a03c7653-e053-4ded-befc-99e09d6f7ed5 in datapath abfbbc51-530d-4964-87bc-9fe4ef7eea76 unbound from our chassis
Jan 20 14:23:59 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:23:59.707 160071 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network abfbbc51-530d-4964-87bc-9fe4ef7eea76, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 14:23:59 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:23:59.709 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[c3880b0f-c8af-42b9-8689-ac19bc85cac4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:23:59 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:23:59.709 160071 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-abfbbc51-530d-4964-87bc-9fe4ef7eea76 namespace which is not needed anymore
Jan 20 14:23:59 compute-0 nova_compute[250018]: 2026-01-20 14:23:59.716 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:23:59 compute-0 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000004.scope: Deactivated successfully.
Jan 20 14:23:59 compute-0 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000004.scope: Consumed 14.361s CPU time.
Jan 20 14:23:59 compute-0 systemd-machined[216401]: Machine qemu-2-instance-00000004 terminated.
Jan 20 14:23:59 compute-0 neutron-haproxy-ovnmeta-abfbbc51-530d-4964-87bc-9fe4ef7eea76[257739]: [NOTICE]   (257743) : haproxy version is 2.8.14-c23fe91
Jan 20 14:23:59 compute-0 neutron-haproxy-ovnmeta-abfbbc51-530d-4964-87bc-9fe4ef7eea76[257739]: [NOTICE]   (257743) : path to executable is /usr/sbin/haproxy
Jan 20 14:23:59 compute-0 neutron-haproxy-ovnmeta-abfbbc51-530d-4964-87bc-9fe4ef7eea76[257739]: [WARNING]  (257743) : Exiting Master process...
Jan 20 14:23:59 compute-0 neutron-haproxy-ovnmeta-abfbbc51-530d-4964-87bc-9fe4ef7eea76[257739]: [ALERT]    (257743) : Current worker (257745) exited with code 143 (Terminated)
Jan 20 14:23:59 compute-0 neutron-haproxy-ovnmeta-abfbbc51-530d-4964-87bc-9fe4ef7eea76[257739]: [WARNING]  (257743) : All workers exited. Exiting... (0)
Jan 20 14:23:59 compute-0 systemd[1]: libpod-1e9743275a2ddbb4f9273b46d0ea3e67784fbb9f2c5dd50725978f21c8d0b6d6.scope: Deactivated successfully.
Jan 20 14:23:59 compute-0 podman[257877]: 2026-01-20 14:23:59.840446748 +0000 UTC m=+0.045595947 container died 1e9743275a2ddbb4f9273b46d0ea3e67784fbb9f2c5dd50725978f21c8d0b6d6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-abfbbc51-530d-4964-87bc-9fe4ef7eea76, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Jan 20 14:23:59 compute-0 nova_compute[250018]: 2026-01-20 14:23:59.848 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:23:59 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1e9743275a2ddbb4f9273b46d0ea3e67784fbb9f2c5dd50725978f21c8d0b6d6-userdata-shm.mount: Deactivated successfully.
Jan 20 14:23:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-fd3f27a9c330a55527dc201fe0979b138c4ba1f93e23980260135a1e83283b3e-merged.mount: Deactivated successfully.
Jan 20 14:23:59 compute-0 nova_compute[250018]: 2026-01-20 14:23:59.869 250022 INFO nova.virt.libvirt.driver [-] [instance: 288f4f83-d33c-44ef-bf78-16cacd3f811f] Instance destroyed successfully.
Jan 20 14:23:59 compute-0 nova_compute[250018]: 2026-01-20 14:23:59.869 250022 DEBUG nova.objects.instance [None req-4e1e4440-978e-472a-a28e-c025096428a3 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Lazy-loading 'resources' on Instance uuid 288f4f83-d33c-44ef-bf78-16cacd3f811f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:23:59 compute-0 podman[257877]: 2026-01-20 14:23:59.881185996 +0000 UTC m=+0.086335195 container cleanup 1e9743275a2ddbb4f9273b46d0ea3e67784fbb9f2c5dd50725978f21c8d0b6d6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-abfbbc51-530d-4964-87bc-9fe4ef7eea76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 20 14:23:59 compute-0 systemd[1]: libpod-conmon-1e9743275a2ddbb4f9273b46d0ea3e67784fbb9f2c5dd50725978f21c8d0b6d6.scope: Deactivated successfully.
Jan 20 14:23:59 compute-0 nova_compute[250018]: 2026-01-20 14:23:59.905 250022 DEBUG nova.virt.libvirt.vif [None req-4e1e4440-978e-472a-a28e-c025096428a3 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:22:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-tempest.common.compute-instance-63374895-3',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-63374895-3',id=4,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=2,launched_at=2026-01-20T14:23:46Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e024eef627014f829fa6e45ffe36c281',ramdisk_id='',reservation_id='r-46o7356r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AutoAllocateNetworkTest-314960358',owner_user_name='tempest-AutoAllocateNetworkTest-314960358-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T14:23:46Z,user_data=None,user_id='918f290d4c414b71807eacf0b27ad165',uuid=288f4f83-d33c-44ef-bf78-16cacd3f811f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a03c7653-e053-4ded-befc-99e09d6f7ed5", "address": "fa:16:3e:35:ca:37", "network": {"id": "abfbbc51-530d-4964-87bc-9fe4ef7eea76", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::93", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e024eef627014f829fa6e45ffe36c281", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa03c7653-e0", "ovs_interfaceid": "a03c7653-e053-4ded-befc-99e09d6f7ed5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 14:23:59 compute-0 nova_compute[250018]: 2026-01-20 14:23:59.906 250022 DEBUG nova.network.os_vif_util [None req-4e1e4440-978e-472a-a28e-c025096428a3 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Converting VIF {"id": "a03c7653-e053-4ded-befc-99e09d6f7ed5", "address": "fa:16:3e:35:ca:37", "network": {"id": "abfbbc51-530d-4964-87bc-9fe4ef7eea76", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::93", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e024eef627014f829fa6e45ffe36c281", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa03c7653-e0", "ovs_interfaceid": "a03c7653-e053-4ded-befc-99e09d6f7ed5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:23:59 compute-0 nova_compute[250018]: 2026-01-20 14:23:59.907 250022 DEBUG nova.network.os_vif_util [None req-4e1e4440-978e-472a-a28e-c025096428a3 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:35:ca:37,bridge_name='br-int',has_traffic_filtering=True,id=a03c7653-e053-4ded-befc-99e09d6f7ed5,network=Network(abfbbc51-530d-4964-87bc-9fe4ef7eea76),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa03c7653-e0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:23:59 compute-0 nova_compute[250018]: 2026-01-20 14:23:59.908 250022 DEBUG os_vif [None req-4e1e4440-978e-472a-a28e-c025096428a3 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:35:ca:37,bridge_name='br-int',has_traffic_filtering=True,id=a03c7653-e053-4ded-befc-99e09d6f7ed5,network=Network(abfbbc51-530d-4964-87bc-9fe4ef7eea76),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa03c7653-e0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 14:23:59 compute-0 nova_compute[250018]: 2026-01-20 14:23:59.912 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:23:59 compute-0 nova_compute[250018]: 2026-01-20 14:23:59.913 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa03c7653-e0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:23:59 compute-0 nova_compute[250018]: 2026-01-20 14:23:59.914 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:23:59 compute-0 nova_compute[250018]: 2026-01-20 14:23:59.915 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:23:59 compute-0 nova_compute[250018]: 2026-01-20 14:23:59.917 250022 INFO os_vif [None req-4e1e4440-978e-472a-a28e-c025096428a3 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:35:ca:37,bridge_name='br-int',has_traffic_filtering=True,id=a03c7653-e053-4ded-befc-99e09d6f7ed5,network=Network(abfbbc51-530d-4964-87bc-9fe4ef7eea76),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa03c7653-e0')
Jan 20 14:23:59 compute-0 podman[257919]: 2026-01-20 14:23:59.949676133 +0000 UTC m=+0.042748532 container remove 1e9743275a2ddbb4f9273b46d0ea3e67784fbb9f2c5dd50725978f21c8d0b6d6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-abfbbc51-530d-4964-87bc-9fe4ef7eea76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 20 14:23:59 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:23:59.954 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[8f71e922-6ea3-4221-9865-e5a397a5b018]: (4, ('Tue Jan 20 02:23:59 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-abfbbc51-530d-4964-87bc-9fe4ef7eea76 (1e9743275a2ddbb4f9273b46d0ea3e67784fbb9f2c5dd50725978f21c8d0b6d6)\n1e9743275a2ddbb4f9273b46d0ea3e67784fbb9f2c5dd50725978f21c8d0b6d6\nTue Jan 20 02:23:59 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-abfbbc51-530d-4964-87bc-9fe4ef7eea76 (1e9743275a2ddbb4f9273b46d0ea3e67784fbb9f2c5dd50725978f21c8d0b6d6)\n1e9743275a2ddbb4f9273b46d0ea3e67784fbb9f2c5dd50725978f21c8d0b6d6\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:23:59 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:23:59.956 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[c810e58f-3ea1-4945-a408-759696cdde91]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:23:59 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:23:59.957 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapabfbbc51-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:23:59 compute-0 nova_compute[250018]: 2026-01-20 14:23:59.959 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:23:59 compute-0 kernel: tapabfbbc51-50: left promiscuous mode
Jan 20 14:23:59 compute-0 nova_compute[250018]: 2026-01-20 14:23:59.972 250022 DEBUG nova.compute.manager [req-a9ef6b26-6284-44f3-b700-b18b9dd868ba req-1b831a1b-c0cb-4075-b16d-8216864772a3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 288f4f83-d33c-44ef-bf78-16cacd3f811f] Received event network-vif-unplugged-a03c7653-e053-4ded-befc-99e09d6f7ed5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:23:59 compute-0 nova_compute[250018]: 2026-01-20 14:23:59.972 250022 DEBUG oslo_concurrency.lockutils [req-a9ef6b26-6284-44f3-b700-b18b9dd868ba req-1b831a1b-c0cb-4075-b16d-8216864772a3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "288f4f83-d33c-44ef-bf78-16cacd3f811f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:23:59 compute-0 nova_compute[250018]: 2026-01-20 14:23:59.973 250022 DEBUG oslo_concurrency.lockutils [req-a9ef6b26-6284-44f3-b700-b18b9dd868ba req-1b831a1b-c0cb-4075-b16d-8216864772a3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "288f4f83-d33c-44ef-bf78-16cacd3f811f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:23:59 compute-0 nova_compute[250018]: 2026-01-20 14:23:59.973 250022 DEBUG oslo_concurrency.lockutils [req-a9ef6b26-6284-44f3-b700-b18b9dd868ba req-1b831a1b-c0cb-4075-b16d-8216864772a3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "288f4f83-d33c-44ef-bf78-16cacd3f811f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:23:59 compute-0 nova_compute[250018]: 2026-01-20 14:23:59.973 250022 DEBUG nova.compute.manager [req-a9ef6b26-6284-44f3-b700-b18b9dd868ba req-1b831a1b-c0cb-4075-b16d-8216864772a3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 288f4f83-d33c-44ef-bf78-16cacd3f811f] No waiting events found dispatching network-vif-unplugged-a03c7653-e053-4ded-befc-99e09d6f7ed5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:23:59 compute-0 nova_compute[250018]: 2026-01-20 14:23:59.973 250022 DEBUG nova.compute.manager [req-a9ef6b26-6284-44f3-b700-b18b9dd868ba req-1b831a1b-c0cb-4075-b16d-8216864772a3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 288f4f83-d33c-44ef-bf78-16cacd3f811f] Received event network-vif-unplugged-a03c7653-e053-4ded-befc-99e09d6f7ed5 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 20 14:23:59 compute-0 nova_compute[250018]: 2026-01-20 14:23:59.974 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:23:59 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:23:59.975 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[e50c416a-de77-4fec-8a5b-5e98226e8412]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:23:59 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:23:59.987 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[be563692-cca4-4709-87e3-5f6d7788e921]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:23:59 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:23:59.989 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[ed5d775d-61a9-4021-bbdf-04a904b03ab2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:24:00 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:24:00.005 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[48c2b57b-7cb5-4a77-b88c-34e8e902531a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 496938, 'reachable_time': 20658, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 257949, 'error': None, 'target': 'ovnmeta-abfbbc51-530d-4964-87bc-9fe4ef7eea76', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:24:00 compute-0 systemd[1]: run-netns-ovnmeta\x2dabfbbc51\x2d530d\x2d4964\x2d87bc\x2d9fe4ef7eea76.mount: Deactivated successfully.
Jan 20 14:24:00 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:24:00.016 160517 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-abfbbc51-530d-4964-87bc-9fe4ef7eea76 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 14:24:00 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:24:00.017 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[fcdbf6a2-e0a6-46c9-a514-16655ad417c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:24:00 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:24:00 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:24:00 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:24:00.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:24:00 compute-0 ceph-mon[74360]: pgmap v976: 321 pgs: 321 active+clean; 326 MiB data, 401 MiB used, 21 GiB / 21 GiB avail; 5.8 MiB/s rd, 4.6 MiB/s wr, 355 op/s
Jan 20 14:24:00 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/1468624899' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:24:00 compute-0 nova_compute[250018]: 2026-01-20 14:24:00.422 250022 INFO nova.virt.libvirt.driver [None req-4e1e4440-978e-472a-a28e-c025096428a3 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] [instance: 288f4f83-d33c-44ef-bf78-16cacd3f811f] Deleting instance files /var/lib/nova/instances/288f4f83-d33c-44ef-bf78-16cacd3f811f_del
Jan 20 14:24:00 compute-0 nova_compute[250018]: 2026-01-20 14:24:00.424 250022 INFO nova.virt.libvirt.driver [None req-4e1e4440-978e-472a-a28e-c025096428a3 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] [instance: 288f4f83-d33c-44ef-bf78-16cacd3f811f] Deletion of /var/lib/nova/instances/288f4f83-d33c-44ef-bf78-16cacd3f811f_del complete
Jan 20 14:24:00 compute-0 nova_compute[250018]: 2026-01-20 14:24:00.496 250022 DEBUG nova.virt.libvirt.host [None req-4e1e4440-978e-472a-a28e-c025096428a3 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Checking UEFI support for host arch (x86_64) supports_uefi /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1754
Jan 20 14:24:00 compute-0 nova_compute[250018]: 2026-01-20 14:24:00.497 250022 INFO nova.virt.libvirt.host [None req-4e1e4440-978e-472a-a28e-c025096428a3 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] UEFI support detected
Jan 20 14:24:00 compute-0 nova_compute[250018]: 2026-01-20 14:24:00.498 250022 INFO nova.compute.manager [None req-4e1e4440-978e-472a-a28e-c025096428a3 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] [instance: 288f4f83-d33c-44ef-bf78-16cacd3f811f] Took 0.87 seconds to destroy the instance on the hypervisor.
Jan 20 14:24:00 compute-0 nova_compute[250018]: 2026-01-20 14:24:00.499 250022 DEBUG oslo.service.loopingcall [None req-4e1e4440-978e-472a-a28e-c025096428a3 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 14:24:00 compute-0 nova_compute[250018]: 2026-01-20 14:24:00.499 250022 DEBUG nova.compute.manager [-] [instance: 288f4f83-d33c-44ef-bf78-16cacd3f811f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 14:24:00 compute-0 nova_compute[250018]: 2026-01-20 14:24:00.499 250022 DEBUG nova.network.neutron [-] [instance: 288f4f83-d33c-44ef-bf78-16cacd3f811f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 14:24:01 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v977: 321 pgs: 321 active+clean; 315 MiB data, 393 MiB used, 21 GiB / 21 GiB avail; 6.1 MiB/s rd, 4.8 MiB/s wr, 399 op/s
Jan 20 14:24:01 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:24:01 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:24:01 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:24:01.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:24:01 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/364352947' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:24:02 compute-0 nova_compute[250018]: 2026-01-20 14:24:02.081 250022 DEBUG nova.compute.manager [req-17824a43-c5e4-475a-92c9-eb5eb6bbb8b7 req-434ed149-d308-4454-856b-a2b0c6ecb672 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 288f4f83-d33c-44ef-bf78-16cacd3f811f] Received event network-vif-plugged-a03c7653-e053-4ded-befc-99e09d6f7ed5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:24:02 compute-0 nova_compute[250018]: 2026-01-20 14:24:02.082 250022 DEBUG oslo_concurrency.lockutils [req-17824a43-c5e4-475a-92c9-eb5eb6bbb8b7 req-434ed149-d308-4454-856b-a2b0c6ecb672 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "288f4f83-d33c-44ef-bf78-16cacd3f811f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:24:02 compute-0 nova_compute[250018]: 2026-01-20 14:24:02.083 250022 DEBUG oslo_concurrency.lockutils [req-17824a43-c5e4-475a-92c9-eb5eb6bbb8b7 req-434ed149-d308-4454-856b-a2b0c6ecb672 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "288f4f83-d33c-44ef-bf78-16cacd3f811f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:24:02 compute-0 nova_compute[250018]: 2026-01-20 14:24:02.083 250022 DEBUG oslo_concurrency.lockutils [req-17824a43-c5e4-475a-92c9-eb5eb6bbb8b7 req-434ed149-d308-4454-856b-a2b0c6ecb672 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "288f4f83-d33c-44ef-bf78-16cacd3f811f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:24:02 compute-0 nova_compute[250018]: 2026-01-20 14:24:02.083 250022 DEBUG nova.compute.manager [req-17824a43-c5e4-475a-92c9-eb5eb6bbb8b7 req-434ed149-d308-4454-856b-a2b0c6ecb672 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 288f4f83-d33c-44ef-bf78-16cacd3f811f] No waiting events found dispatching network-vif-plugged-a03c7653-e053-4ded-befc-99e09d6f7ed5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:24:02 compute-0 nova_compute[250018]: 2026-01-20 14:24:02.084 250022 WARNING nova.compute.manager [req-17824a43-c5e4-475a-92c9-eb5eb6bbb8b7 req-434ed149-d308-4454-856b-a2b0c6ecb672 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 288f4f83-d33c-44ef-bf78-16cacd3f811f] Received unexpected event network-vif-plugged-a03c7653-e053-4ded-befc-99e09d6f7ed5 for instance with vm_state active and task_state deleting.
Jan 20 14:24:02 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:24:02 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:24:02 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:24:02.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:24:02 compute-0 nova_compute[250018]: 2026-01-20 14:24:02.426 250022 DEBUG nova.network.neutron [-] [instance: 288f4f83-d33c-44ef-bf78-16cacd3f811f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:24:02 compute-0 ceph-mon[74360]: pgmap v977: 321 pgs: 321 active+clean; 315 MiB data, 393 MiB used, 21 GiB / 21 GiB avail; 6.1 MiB/s rd, 4.8 MiB/s wr, 399 op/s
Jan 20 14:24:02 compute-0 nova_compute[250018]: 2026-01-20 14:24:02.448 250022 INFO nova.compute.manager [-] [instance: 288f4f83-d33c-44ef-bf78-16cacd3f811f] Took 1.95 seconds to deallocate network for instance.
Jan 20 14:24:02 compute-0 nova_compute[250018]: 2026-01-20 14:24:02.529 250022 DEBUG oslo_concurrency.lockutils [None req-4e1e4440-978e-472a-a28e-c025096428a3 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:24:02 compute-0 nova_compute[250018]: 2026-01-20 14:24:02.529 250022 DEBUG oslo_concurrency.lockutils [None req-4e1e4440-978e-472a-a28e-c025096428a3 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:24:02 compute-0 nova_compute[250018]: 2026-01-20 14:24:02.606 250022 DEBUG oslo_concurrency.processutils [None req-4e1e4440-978e-472a-a28e-c025096428a3 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:24:03 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:24:03 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1691126561' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:24:03 compute-0 nova_compute[250018]: 2026-01-20 14:24:03.042 250022 DEBUG oslo_concurrency.processutils [None req-4e1e4440-978e-472a-a28e-c025096428a3 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:24:03 compute-0 nova_compute[250018]: 2026-01-20 14:24:03.048 250022 DEBUG nova.compute.provider_tree [None req-4e1e4440-978e-472a-a28e-c025096428a3 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:24:03 compute-0 nova_compute[250018]: 2026-01-20 14:24:03.066 250022 DEBUG nova.scheduler.client.report [None req-4e1e4440-978e-472a-a28e-c025096428a3 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:24:03 compute-0 nova_compute[250018]: 2026-01-20 14:24:03.100 250022 DEBUG oslo_concurrency.lockutils [None req-4e1e4440-978e-472a-a28e-c025096428a3 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.571s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:24:03 compute-0 nova_compute[250018]: 2026-01-20 14:24:03.153 250022 INFO nova.scheduler.client.report [None req-4e1e4440-978e-472a-a28e-c025096428a3 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Deleted allocations for instance 288f4f83-d33c-44ef-bf78-16cacd3f811f
Jan 20 14:24:03 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v978: 321 pgs: 321 active+clean; 257 MiB data, 361 MiB used, 21 GiB / 21 GiB avail; 5.5 MiB/s rd, 6.6 MiB/s wr, 414 op/s
Jan 20 14:24:03 compute-0 nova_compute[250018]: 2026-01-20 14:24:03.233 250022 DEBUG oslo_concurrency.lockutils [None req-4e1e4440-978e-472a-a28e-c025096428a3 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Lock "288f4f83-d33c-44ef-bf78-16cacd3f811f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.606s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:24:03 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:24:03 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:24:03 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:24:03.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:24:03 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/1691126561' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:24:04 compute-0 nova_compute[250018]: 2026-01-20 14:24:04.198 250022 DEBUG nova.compute.manager [req-50600a92-039d-438a-bc5e-5f1dc006fab7 req-2ec62ae9-48bb-4b73-91c5-891793744384 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 288f4f83-d33c-44ef-bf78-16cacd3f811f] Received event network-vif-deleted-a03c7653-e053-4ded-befc-99e09d6f7ed5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:24:04 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:24:04 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:24:04 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:24:04.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:24:04 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:24:04 compute-0 ceph-mon[74360]: pgmap v978: 321 pgs: 321 active+clean; 257 MiB data, 361 MiB used, 21 GiB / 21 GiB avail; 5.5 MiB/s rd, 6.6 MiB/s wr, 414 op/s
Jan 20 14:24:04 compute-0 nova_compute[250018]: 2026-01-20 14:24:04.851 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:24:04 compute-0 nova_compute[250018]: 2026-01-20 14:24:04.914 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:24:05 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v979: 321 pgs: 321 active+clean; 275 MiB data, 374 MiB used, 21 GiB / 21 GiB avail; 3.2 MiB/s rd, 7.1 MiB/s wr, 327 op/s
Jan 20 14:24:05 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:24:05 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:24:05 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:24:05.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:24:06 compute-0 nova_compute[250018]: 2026-01-20 14:24:06.077 250022 DEBUG oslo_concurrency.lockutils [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Acquiring lock "0f93511b-eba1-4b09-94ce-051ac10117ce" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:24:06 compute-0 nova_compute[250018]: 2026-01-20 14:24:06.077 250022 DEBUG oslo_concurrency.lockutils [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Lock "0f93511b-eba1-4b09-94ce-051ac10117ce" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:24:06 compute-0 nova_compute[250018]: 2026-01-20 14:24:06.101 250022 DEBUG nova.compute.manager [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: 0f93511b-eba1-4b09-94ce-051ac10117ce] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 14:24:06 compute-0 nova_compute[250018]: 2026-01-20 14:24:06.122 250022 DEBUG oslo_concurrency.lockutils [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Acquiring lock "eaa38f5d-6564-47d4-b7c7-261945710681" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:24:06 compute-0 nova_compute[250018]: 2026-01-20 14:24:06.123 250022 DEBUG oslo_concurrency.lockutils [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Lock "eaa38f5d-6564-47d4-b7c7-261945710681" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:24:06 compute-0 nova_compute[250018]: 2026-01-20 14:24:06.168 250022 DEBUG nova.compute.manager [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: eaa38f5d-6564-47d4-b7c7-261945710681] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 14:24:06 compute-0 nova_compute[250018]: 2026-01-20 14:24:06.219 250022 DEBUG oslo_concurrency.lockutils [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:24:06 compute-0 nova_compute[250018]: 2026-01-20 14:24:06.220 250022 DEBUG oslo_concurrency.lockutils [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:24:06 compute-0 nova_compute[250018]: 2026-01-20 14:24:06.241 250022 DEBUG nova.virt.hardware [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 14:24:06 compute-0 nova_compute[250018]: 2026-01-20 14:24:06.242 250022 INFO nova.compute.claims [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: 0f93511b-eba1-4b09-94ce-051ac10117ce] Claim successful on node compute-0.ctlplane.example.com
Jan 20 14:24:06 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:24:06 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:24:06 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:24:06.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:24:06 compute-0 nova_compute[250018]: 2026-01-20 14:24:06.248 250022 DEBUG oslo_concurrency.lockutils [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:24:06 compute-0 nova_compute[250018]: 2026-01-20 14:24:06.441 250022 DEBUG oslo_concurrency.processutils [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:24:06 compute-0 ceph-mon[74360]: pgmap v979: 321 pgs: 321 active+clean; 275 MiB data, 374 MiB used, 21 GiB / 21 GiB avail; 3.2 MiB/s rd, 7.1 MiB/s wr, 327 op/s
Jan 20 14:24:06 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:24:06 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1757411374' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:24:06 compute-0 nova_compute[250018]: 2026-01-20 14:24:06.923 250022 DEBUG oslo_concurrency.processutils [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:24:06 compute-0 nova_compute[250018]: 2026-01-20 14:24:06.931 250022 DEBUG nova.compute.provider_tree [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:24:06 compute-0 nova_compute[250018]: 2026-01-20 14:24:06.960 250022 DEBUG nova.scheduler.client.report [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:24:06 compute-0 nova_compute[250018]: 2026-01-20 14:24:06.984 250022 DEBUG oslo_concurrency.lockutils [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.764s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:24:06 compute-0 nova_compute[250018]: 2026-01-20 14:24:06.985 250022 DEBUG oslo_concurrency.lockutils [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.737s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:24:06 compute-0 nova_compute[250018]: 2026-01-20 14:24:06.992 250022 DEBUG nova.virt.hardware [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 14:24:06 compute-0 nova_compute[250018]: 2026-01-20 14:24:06.992 250022 INFO nova.compute.claims [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: eaa38f5d-6564-47d4-b7c7-261945710681] Claim successful on node compute-0.ctlplane.example.com
Jan 20 14:24:07 compute-0 nova_compute[250018]: 2026-01-20 14:24:07.013 250022 DEBUG oslo_concurrency.lockutils [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Acquiring lock "136c1150-389c-4388-8259-33471370f395" by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:24:07 compute-0 nova_compute[250018]: 2026-01-20 14:24:07.014 250022 DEBUG oslo_concurrency.lockutils [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Lock "136c1150-389c-4388-8259-33471370f395" acquired by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:24:07 compute-0 nova_compute[250018]: 2026-01-20 14:24:07.042 250022 DEBUG oslo_concurrency.lockutils [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Lock "136c1150-389c-4388-8259-33471370f395" "released" by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" :: held 0.028s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:24:07 compute-0 nova_compute[250018]: 2026-01-20 14:24:07.042 250022 DEBUG nova.compute.manager [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: 0f93511b-eba1-4b09-94ce-051ac10117ce] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 14:24:07 compute-0 nova_compute[250018]: 2026-01-20 14:24:07.107 250022 DEBUG nova.compute.manager [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: 0f93511b-eba1-4b09-94ce-051ac10117ce] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 20 14:24:07 compute-0 nova_compute[250018]: 2026-01-20 14:24:07.108 250022 DEBUG nova.network.neutron [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: 0f93511b-eba1-4b09-94ce-051ac10117ce] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 14:24:07 compute-0 nova_compute[250018]: 2026-01-20 14:24:07.132 250022 INFO nova.virt.libvirt.driver [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: 0f93511b-eba1-4b09-94ce-051ac10117ce] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 14:24:07 compute-0 nova_compute[250018]: 2026-01-20 14:24:07.159 250022 DEBUG nova.compute.manager [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: 0f93511b-eba1-4b09-94ce-051ac10117ce] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 14:24:07 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v980: 321 pgs: 321 active+clean; 294 MiB data, 390 MiB used, 21 GiB / 21 GiB avail; 4.9 MiB/s rd, 8.6 MiB/s wr, 416 op/s
Jan 20 14:24:07 compute-0 nova_compute[250018]: 2026-01-20 14:24:07.199 250022 DEBUG oslo_concurrency.processutils [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:24:07 compute-0 nova_compute[250018]: 2026-01-20 14:24:07.309 250022 DEBUG nova.compute.manager [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: 0f93511b-eba1-4b09-94ce-051ac10117ce] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 14:24:07 compute-0 nova_compute[250018]: 2026-01-20 14:24:07.310 250022 DEBUG nova.virt.libvirt.driver [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: 0f93511b-eba1-4b09-94ce-051ac10117ce] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 14:24:07 compute-0 nova_compute[250018]: 2026-01-20 14:24:07.311 250022 INFO nova.virt.libvirt.driver [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: 0f93511b-eba1-4b09-94ce-051ac10117ce] Creating image(s)
Jan 20 14:24:07 compute-0 nova_compute[250018]: 2026-01-20 14:24:07.342 250022 DEBUG nova.storage.rbd_utils [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] rbd image 0f93511b-eba1-4b09-94ce-051ac10117ce_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:24:07 compute-0 nova_compute[250018]: 2026-01-20 14:24:07.381 250022 DEBUG nova.storage.rbd_utils [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] rbd image 0f93511b-eba1-4b09-94ce-051ac10117ce_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:24:07 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:24:07 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:24:07 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:24:07.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:24:07 compute-0 nova_compute[250018]: 2026-01-20 14:24:07.414 250022 DEBUG nova.storage.rbd_utils [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] rbd image 0f93511b-eba1-4b09-94ce-051ac10117ce_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:24:07 compute-0 nova_compute[250018]: 2026-01-20 14:24:07.420 250022 DEBUG oslo_concurrency.processutils [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:24:07 compute-0 nova_compute[250018]: 2026-01-20 14:24:07.480 250022 DEBUG oslo_concurrency.processutils [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:24:07 compute-0 nova_compute[250018]: 2026-01-20 14:24:07.482 250022 DEBUG oslo_concurrency.lockutils [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Acquiring lock "82d5c1918fd7c974214c7a48c1793a7a82560462" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:24:07 compute-0 nova_compute[250018]: 2026-01-20 14:24:07.484 250022 DEBUG oslo_concurrency.lockutils [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:24:07 compute-0 nova_compute[250018]: 2026-01-20 14:24:07.484 250022 DEBUG oslo_concurrency.lockutils [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:24:07 compute-0 nova_compute[250018]: 2026-01-20 14:24:07.513 250022 DEBUG nova.storage.rbd_utils [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] rbd image 0f93511b-eba1-4b09-94ce-051ac10117ce_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:24:07 compute-0 nova_compute[250018]: 2026-01-20 14:24:07.517 250022 DEBUG oslo_concurrency.processutils [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 0f93511b-eba1-4b09-94ce-051ac10117ce_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:24:07 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:24:07 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2027510307' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:24:07 compute-0 nova_compute[250018]: 2026-01-20 14:24:07.677 250022 DEBUG oslo_concurrency.processutils [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:24:07 compute-0 nova_compute[250018]: 2026-01-20 14:24:07.683 250022 DEBUG nova.compute.provider_tree [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:24:07 compute-0 nova_compute[250018]: 2026-01-20 14:24:07.687 250022 DEBUG nova.network.neutron [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: 0f93511b-eba1-4b09-94ce-051ac10117ce] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Jan 20 14:24:07 compute-0 nova_compute[250018]: 2026-01-20 14:24:07.687 250022 DEBUG nova.compute.manager [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: 0f93511b-eba1-4b09-94ce-051ac10117ce] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 20 14:24:07 compute-0 nova_compute[250018]: 2026-01-20 14:24:07.700 250022 DEBUG nova.scheduler.client.report [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:24:07 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/1757411374' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:24:07 compute-0 nova_compute[250018]: 2026-01-20 14:24:07.735 250022 DEBUG oslo_concurrency.lockutils [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.751s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:24:07 compute-0 nova_compute[250018]: 2026-01-20 14:24:07.769 250022 DEBUG oslo_concurrency.lockutils [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Acquiring lock "136c1150-389c-4388-8259-33471370f395" by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:24:07 compute-0 nova_compute[250018]: 2026-01-20 14:24:07.769 250022 DEBUG oslo_concurrency.lockutils [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Lock "136c1150-389c-4388-8259-33471370f395" acquired by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:24:07 compute-0 nova_compute[250018]: 2026-01-20 14:24:07.812 250022 DEBUG oslo_concurrency.lockutils [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Lock "136c1150-389c-4388-8259-33471370f395" "released" by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" :: held 0.043s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:24:07 compute-0 nova_compute[250018]: 2026-01-20 14:24:07.813 250022 DEBUG nova.compute.manager [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: eaa38f5d-6564-47d4-b7c7-261945710681] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 14:24:07 compute-0 nova_compute[250018]: 2026-01-20 14:24:07.867 250022 DEBUG nova.compute.manager [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: eaa38f5d-6564-47d4-b7c7-261945710681] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 20 14:24:07 compute-0 nova_compute[250018]: 2026-01-20 14:24:07.867 250022 DEBUG nova.network.neutron [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: eaa38f5d-6564-47d4-b7c7-261945710681] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 14:24:07 compute-0 nova_compute[250018]: 2026-01-20 14:24:07.903 250022 INFO nova.virt.libvirt.driver [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: eaa38f5d-6564-47d4-b7c7-261945710681] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 14:24:07 compute-0 nova_compute[250018]: 2026-01-20 14:24:07.941 250022 DEBUG nova.compute.manager [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: eaa38f5d-6564-47d4-b7c7-261945710681] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 14:24:08 compute-0 nova_compute[250018]: 2026-01-20 14:24:08.068 250022 DEBUG nova.compute.manager [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: eaa38f5d-6564-47d4-b7c7-261945710681] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 14:24:08 compute-0 nova_compute[250018]: 2026-01-20 14:24:08.070 250022 DEBUG nova.virt.libvirt.driver [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: eaa38f5d-6564-47d4-b7c7-261945710681] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 14:24:08 compute-0 nova_compute[250018]: 2026-01-20 14:24:08.071 250022 INFO nova.virt.libvirt.driver [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: eaa38f5d-6564-47d4-b7c7-261945710681] Creating image(s)
Jan 20 14:24:08 compute-0 nova_compute[250018]: 2026-01-20 14:24:08.096 250022 DEBUG nova.storage.rbd_utils [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] rbd image eaa38f5d-6564-47d4-b7c7-261945710681_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:24:08 compute-0 nova_compute[250018]: 2026-01-20 14:24:08.118 250022 DEBUG nova.storage.rbd_utils [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] rbd image eaa38f5d-6564-47d4-b7c7-261945710681_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:24:08 compute-0 nova_compute[250018]: 2026-01-20 14:24:08.140 250022 DEBUG nova.storage.rbd_utils [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] rbd image eaa38f5d-6564-47d4-b7c7-261945710681_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:24:08 compute-0 nova_compute[250018]: 2026-01-20 14:24:08.143 250022 DEBUG oslo_concurrency.processutils [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:24:08 compute-0 nova_compute[250018]: 2026-01-20 14:24:08.161 250022 DEBUG oslo_concurrency.processutils [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 0f93511b-eba1-4b09-94ce-051ac10117ce_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.644s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:24:08 compute-0 nova_compute[250018]: 2026-01-20 14:24:08.218 250022 DEBUG oslo_concurrency.processutils [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:24:08 compute-0 nova_compute[250018]: 2026-01-20 14:24:08.219 250022 DEBUG oslo_concurrency.lockutils [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Acquiring lock "82d5c1918fd7c974214c7a48c1793a7a82560462" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:24:08 compute-0 nova_compute[250018]: 2026-01-20 14:24:08.219 250022 DEBUG oslo_concurrency.lockutils [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:24:08 compute-0 nova_compute[250018]: 2026-01-20 14:24:08.220 250022 DEBUG oslo_concurrency.lockutils [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:24:08 compute-0 nova_compute[250018]: 2026-01-20 14:24:08.240 250022 DEBUG nova.storage.rbd_utils [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] rbd image eaa38f5d-6564-47d4-b7c7-261945710681_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:24:08 compute-0 nova_compute[250018]: 2026-01-20 14:24:08.243 250022 DEBUG oslo_concurrency.processutils [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 eaa38f5d-6564-47d4-b7c7-261945710681_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:24:08 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:24:08 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:24:08 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:24:08.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:24:08 compute-0 nova_compute[250018]: 2026-01-20 14:24:08.272 250022 DEBUG nova.storage.rbd_utils [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] resizing rbd image 0f93511b-eba1-4b09-94ce-051ac10117ce_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 20 14:24:08 compute-0 nova_compute[250018]: 2026-01-20 14:24:08.376 250022 DEBUG nova.network.neutron [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: eaa38f5d-6564-47d4-b7c7-261945710681] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Jan 20 14:24:08 compute-0 nova_compute[250018]: 2026-01-20 14:24:08.376 250022 DEBUG nova.compute.manager [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: eaa38f5d-6564-47d4-b7c7-261945710681] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 20 14:24:08 compute-0 nova_compute[250018]: 2026-01-20 14:24:08.745 250022 DEBUG nova.objects.instance [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Lazy-loading 'migration_context' on Instance uuid 0f93511b-eba1-4b09-94ce-051ac10117ce obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:24:08 compute-0 ceph-mon[74360]: pgmap v980: 321 pgs: 321 active+clean; 294 MiB data, 390 MiB used, 21 GiB / 21 GiB avail; 4.9 MiB/s rd, 8.6 MiB/s wr, 416 op/s
Jan 20 14:24:08 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/2027510307' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:24:08 compute-0 nova_compute[250018]: 2026-01-20 14:24:08.878 250022 DEBUG oslo_concurrency.processutils [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 eaa38f5d-6564-47d4-b7c7-261945710681_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.635s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:24:08 compute-0 nova_compute[250018]: 2026-01-20 14:24:08.949 250022 DEBUG nova.virt.libvirt.driver [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: 0f93511b-eba1-4b09-94ce-051ac10117ce] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 14:24:08 compute-0 nova_compute[250018]: 2026-01-20 14:24:08.950 250022 DEBUG nova.virt.libvirt.driver [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: 0f93511b-eba1-4b09-94ce-051ac10117ce] Ensure instance console log exists: /var/lib/nova/instances/0f93511b-eba1-4b09-94ce-051ac10117ce/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 14:24:08 compute-0 nova_compute[250018]: 2026-01-20 14:24:08.950 250022 DEBUG oslo_concurrency.lockutils [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:24:08 compute-0 nova_compute[250018]: 2026-01-20 14:24:08.951 250022 DEBUG oslo_concurrency.lockutils [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:24:08 compute-0 nova_compute[250018]: 2026-01-20 14:24:08.951 250022 DEBUG oslo_concurrency.lockutils [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:24:08 compute-0 nova_compute[250018]: 2026-01-20 14:24:08.952 250022 DEBUG nova.virt.libvirt.driver [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: 0f93511b-eba1-4b09-94ce-051ac10117ce] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encrypted': False, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'guest_format': None, 'size': 0, 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 14:24:08 compute-0 nova_compute[250018]: 2026-01-20 14:24:08.959 250022 DEBUG nova.storage.rbd_utils [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] resizing rbd image eaa38f5d-6564-47d4-b7c7-261945710681_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 20 14:24:09 compute-0 nova_compute[250018]: 2026-01-20 14:24:09.000 250022 WARNING nova.virt.libvirt.driver [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:24:09 compute-0 nova_compute[250018]: 2026-01-20 14:24:09.007 250022 DEBUG nova.virt.libvirt.host [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 14:24:09 compute-0 nova_compute[250018]: 2026-01-20 14:24:09.008 250022 DEBUG nova.virt.libvirt.host [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 14:24:09 compute-0 nova_compute[250018]: 2026-01-20 14:24:09.012 250022 DEBUG nova.virt.libvirt.host [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 14:24:09 compute-0 nova_compute[250018]: 2026-01-20 14:24:09.012 250022 DEBUG nova.virt.libvirt.host [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 14:24:09 compute-0 nova_compute[250018]: 2026-01-20 14:24:09.014 250022 DEBUG nova.virt.libvirt.driver [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 14:24:09 compute-0 nova_compute[250018]: 2026-01-20 14:24:09.014 250022 DEBUG nova.virt.hardware [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 14:24:09 compute-0 nova_compute[250018]: 2026-01-20 14:24:09.015 250022 DEBUG nova.virt.hardware [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 14:24:09 compute-0 nova_compute[250018]: 2026-01-20 14:24:09.015 250022 DEBUG nova.virt.hardware [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 14:24:09 compute-0 nova_compute[250018]: 2026-01-20 14:24:09.016 250022 DEBUG nova.virt.hardware [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 14:24:09 compute-0 nova_compute[250018]: 2026-01-20 14:24:09.016 250022 DEBUG nova.virt.hardware [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 14:24:09 compute-0 nova_compute[250018]: 2026-01-20 14:24:09.016 250022 DEBUG nova.virt.hardware [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 14:24:09 compute-0 nova_compute[250018]: 2026-01-20 14:24:09.016 250022 DEBUG nova.virt.hardware [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 14:24:09 compute-0 nova_compute[250018]: 2026-01-20 14:24:09.017 250022 DEBUG nova.virt.hardware [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 14:24:09 compute-0 nova_compute[250018]: 2026-01-20 14:24:09.017 250022 DEBUG nova.virt.hardware [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 14:24:09 compute-0 nova_compute[250018]: 2026-01-20 14:24:09.017 250022 DEBUG nova.virt.hardware [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 14:24:09 compute-0 nova_compute[250018]: 2026-01-20 14:24:09.017 250022 DEBUG nova.virt.hardware [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 14:24:09 compute-0 nova_compute[250018]: 2026-01-20 14:24:09.022 250022 DEBUG oslo_concurrency.processutils [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:24:09 compute-0 nova_compute[250018]: 2026-01-20 14:24:09.171 250022 DEBUG nova.objects.instance [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Lazy-loading 'migration_context' on Instance uuid eaa38f5d-6564-47d4-b7c7-261945710681 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:24:09 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v981: 321 pgs: 321 active+clean; 334 MiB data, 412 MiB used, 21 GiB / 21 GiB avail; 3.2 MiB/s rd, 8.5 MiB/s wr, 334 op/s
Jan 20 14:24:09 compute-0 nova_compute[250018]: 2026-01-20 14:24:09.194 250022 DEBUG nova.virt.libvirt.driver [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: eaa38f5d-6564-47d4-b7c7-261945710681] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 14:24:09 compute-0 nova_compute[250018]: 2026-01-20 14:24:09.194 250022 DEBUG nova.virt.libvirt.driver [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: eaa38f5d-6564-47d4-b7c7-261945710681] Ensure instance console log exists: /var/lib/nova/instances/eaa38f5d-6564-47d4-b7c7-261945710681/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 14:24:09 compute-0 nova_compute[250018]: 2026-01-20 14:24:09.195 250022 DEBUG oslo_concurrency.lockutils [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:24:09 compute-0 nova_compute[250018]: 2026-01-20 14:24:09.195 250022 DEBUG oslo_concurrency.lockutils [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:24:09 compute-0 nova_compute[250018]: 2026-01-20 14:24:09.196 250022 DEBUG oslo_concurrency.lockutils [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:24:09 compute-0 nova_compute[250018]: 2026-01-20 14:24:09.197 250022 DEBUG nova.virt.libvirt.driver [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: eaa38f5d-6564-47d4-b7c7-261945710681] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encrypted': False, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'guest_format': None, 'size': 0, 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 14:24:09 compute-0 nova_compute[250018]: 2026-01-20 14:24:09.201 250022 WARNING nova.virt.libvirt.driver [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:24:09 compute-0 nova_compute[250018]: 2026-01-20 14:24:09.206 250022 DEBUG nova.virt.libvirt.host [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 14:24:09 compute-0 nova_compute[250018]: 2026-01-20 14:24:09.206 250022 DEBUG nova.virt.libvirt.host [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 14:24:09 compute-0 nova_compute[250018]: 2026-01-20 14:24:09.212 250022 DEBUG nova.virt.libvirt.host [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 14:24:09 compute-0 nova_compute[250018]: 2026-01-20 14:24:09.212 250022 DEBUG nova.virt.libvirt.host [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 14:24:09 compute-0 nova_compute[250018]: 2026-01-20 14:24:09.213 250022 DEBUG nova.virt.libvirt.driver [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 14:24:09 compute-0 nova_compute[250018]: 2026-01-20 14:24:09.214 250022 DEBUG nova.virt.hardware [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 14:24:09 compute-0 nova_compute[250018]: 2026-01-20 14:24:09.214 250022 DEBUG nova.virt.hardware [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 14:24:09 compute-0 nova_compute[250018]: 2026-01-20 14:24:09.214 250022 DEBUG nova.virt.hardware [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 14:24:09 compute-0 nova_compute[250018]: 2026-01-20 14:24:09.214 250022 DEBUG nova.virt.hardware [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 14:24:09 compute-0 nova_compute[250018]: 2026-01-20 14:24:09.215 250022 DEBUG nova.virt.hardware [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 14:24:09 compute-0 nova_compute[250018]: 2026-01-20 14:24:09.215 250022 DEBUG nova.virt.hardware [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 14:24:09 compute-0 nova_compute[250018]: 2026-01-20 14:24:09.215 250022 DEBUG nova.virt.hardware [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 14:24:09 compute-0 nova_compute[250018]: 2026-01-20 14:24:09.215 250022 DEBUG nova.virt.hardware [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 14:24:09 compute-0 nova_compute[250018]: 2026-01-20 14:24:09.215 250022 DEBUG nova.virt.hardware [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 14:24:09 compute-0 nova_compute[250018]: 2026-01-20 14:24:09.216 250022 DEBUG nova.virt.hardware [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 14:24:09 compute-0 nova_compute[250018]: 2026-01-20 14:24:09.216 250022 DEBUG nova.virt.hardware [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 14:24:09 compute-0 nova_compute[250018]: 2026-01-20 14:24:09.219 250022 DEBUG oslo_concurrency.processutils [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:24:09 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:24:09 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:24:09 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:24:09.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:24:09 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:24:09 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1131583590' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:24:09 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:24:09 compute-0 nova_compute[250018]: 2026-01-20 14:24:09.469 250022 DEBUG oslo_concurrency.processutils [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:24:09 compute-0 nova_compute[250018]: 2026-01-20 14:24:09.537 250022 DEBUG nova.storage.rbd_utils [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] rbd image 0f93511b-eba1-4b09-94ce-051ac10117ce_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:24:09 compute-0 nova_compute[250018]: 2026-01-20 14:24:09.546 250022 DEBUG oslo_concurrency.processutils [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:24:09 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:24:09 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/207560375' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:24:09 compute-0 nova_compute[250018]: 2026-01-20 14:24:09.687 250022 DEBUG oslo_concurrency.processutils [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:24:09 compute-0 nova_compute[250018]: 2026-01-20 14:24:09.723 250022 DEBUG nova.storage.rbd_utils [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] rbd image eaa38f5d-6564-47d4-b7c7-261945710681_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:24:09 compute-0 nova_compute[250018]: 2026-01-20 14:24:09.729 250022 DEBUG oslo_concurrency.processutils [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:24:09 compute-0 nova_compute[250018]: 2026-01-20 14:24:09.853 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:24:09 compute-0 ceph-mon[74360]: pgmap v981: 321 pgs: 321 active+clean; 334 MiB data, 412 MiB used, 21 GiB / 21 GiB avail; 3.2 MiB/s rd, 8.5 MiB/s wr, 334 op/s
Jan 20 14:24:09 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/1131583590' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:24:09 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/207560375' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:24:09 compute-0 nova_compute[250018]: 2026-01-20 14:24:09.915 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:24:10 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:24:10 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2259299625' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:24:10 compute-0 nova_compute[250018]: 2026-01-20 14:24:10.063 250022 DEBUG oslo_concurrency.processutils [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:24:10 compute-0 nova_compute[250018]: 2026-01-20 14:24:10.065 250022 DEBUG nova.objects.instance [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0f93511b-eba1-4b09-94ce-051ac10117ce obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:24:10 compute-0 nova_compute[250018]: 2026-01-20 14:24:10.130 250022 DEBUG nova.virt.libvirt.driver [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: 0f93511b-eba1-4b09-94ce-051ac10117ce] End _get_guest_xml xml=<domain type="kvm">
Jan 20 14:24:10 compute-0 nova_compute[250018]:   <uuid>0f93511b-eba1-4b09-94ce-051ac10117ce</uuid>
Jan 20 14:24:10 compute-0 nova_compute[250018]:   <name>instance-00000008</name>
Jan 20 14:24:10 compute-0 nova_compute[250018]:   <memory>131072</memory>
Jan 20 14:24:10 compute-0 nova_compute[250018]:   <vcpu>1</vcpu>
Jan 20 14:24:10 compute-0 nova_compute[250018]:   <metadata>
Jan 20 14:24:10 compute-0 nova_compute[250018]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 14:24:10 compute-0 nova_compute[250018]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 14:24:10 compute-0 nova_compute[250018]:       <nova:name>tempest-ServersOnMultiNodesTest-server-1101418423-1</nova:name>
Jan 20 14:24:10 compute-0 nova_compute[250018]:       <nova:creationTime>2026-01-20 14:24:09</nova:creationTime>
Jan 20 14:24:10 compute-0 nova_compute[250018]:       <nova:flavor name="m1.nano">
Jan 20 14:24:10 compute-0 nova_compute[250018]:         <nova:memory>128</nova:memory>
Jan 20 14:24:10 compute-0 nova_compute[250018]:         <nova:disk>1</nova:disk>
Jan 20 14:24:10 compute-0 nova_compute[250018]:         <nova:swap>0</nova:swap>
Jan 20 14:24:10 compute-0 nova_compute[250018]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 14:24:10 compute-0 nova_compute[250018]:         <nova:vcpus>1</nova:vcpus>
Jan 20 14:24:10 compute-0 nova_compute[250018]:       </nova:flavor>
Jan 20 14:24:10 compute-0 nova_compute[250018]:       <nova:owner>
Jan 20 14:24:10 compute-0 nova_compute[250018]:         <nova:user uuid="32a16ea2839748629233294de19222b3">tempest-ServersOnMultiNodesTest-1140514054-project-member</nova:user>
Jan 20 14:24:10 compute-0 nova_compute[250018]:         <nova:project uuid="986d9f2d9bd24a228e53a76694db0568">tempest-ServersOnMultiNodesTest-1140514054</nova:project>
Jan 20 14:24:10 compute-0 nova_compute[250018]:       </nova:owner>
Jan 20 14:24:10 compute-0 nova_compute[250018]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 14:24:10 compute-0 nova_compute[250018]:       <nova:ports/>
Jan 20 14:24:10 compute-0 nova_compute[250018]:     </nova:instance>
Jan 20 14:24:10 compute-0 nova_compute[250018]:   </metadata>
Jan 20 14:24:10 compute-0 nova_compute[250018]:   <sysinfo type="smbios">
Jan 20 14:24:10 compute-0 nova_compute[250018]:     <system>
Jan 20 14:24:10 compute-0 nova_compute[250018]:       <entry name="manufacturer">RDO</entry>
Jan 20 14:24:10 compute-0 nova_compute[250018]:       <entry name="product">OpenStack Compute</entry>
Jan 20 14:24:10 compute-0 nova_compute[250018]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 14:24:10 compute-0 nova_compute[250018]:       <entry name="serial">0f93511b-eba1-4b09-94ce-051ac10117ce</entry>
Jan 20 14:24:10 compute-0 nova_compute[250018]:       <entry name="uuid">0f93511b-eba1-4b09-94ce-051ac10117ce</entry>
Jan 20 14:24:10 compute-0 nova_compute[250018]:       <entry name="family">Virtual Machine</entry>
Jan 20 14:24:10 compute-0 nova_compute[250018]:     </system>
Jan 20 14:24:10 compute-0 nova_compute[250018]:   </sysinfo>
Jan 20 14:24:10 compute-0 nova_compute[250018]:   <os>
Jan 20 14:24:10 compute-0 nova_compute[250018]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 14:24:10 compute-0 nova_compute[250018]:     <boot dev="hd"/>
Jan 20 14:24:10 compute-0 nova_compute[250018]:     <smbios mode="sysinfo"/>
Jan 20 14:24:10 compute-0 nova_compute[250018]:   </os>
Jan 20 14:24:10 compute-0 nova_compute[250018]:   <features>
Jan 20 14:24:10 compute-0 nova_compute[250018]:     <acpi/>
Jan 20 14:24:10 compute-0 nova_compute[250018]:     <apic/>
Jan 20 14:24:10 compute-0 nova_compute[250018]:     <vmcoreinfo/>
Jan 20 14:24:10 compute-0 nova_compute[250018]:   </features>
Jan 20 14:24:10 compute-0 nova_compute[250018]:   <clock offset="utc">
Jan 20 14:24:10 compute-0 nova_compute[250018]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 14:24:10 compute-0 nova_compute[250018]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 14:24:10 compute-0 nova_compute[250018]:     <timer name="hpet" present="no"/>
Jan 20 14:24:10 compute-0 nova_compute[250018]:   </clock>
Jan 20 14:24:10 compute-0 nova_compute[250018]:   <cpu mode="custom" match="exact">
Jan 20 14:24:10 compute-0 nova_compute[250018]:     <model>Nehalem</model>
Jan 20 14:24:10 compute-0 nova_compute[250018]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 14:24:10 compute-0 nova_compute[250018]:   </cpu>
Jan 20 14:24:10 compute-0 nova_compute[250018]:   <devices>
Jan 20 14:24:10 compute-0 nova_compute[250018]:     <disk type="network" device="disk">
Jan 20 14:24:10 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 14:24:10 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/0f93511b-eba1-4b09-94ce-051ac10117ce_disk">
Jan 20 14:24:10 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:24:10 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:24:10 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:24:10 compute-0 nova_compute[250018]:       </source>
Jan 20 14:24:10 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 14:24:10 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:24:10 compute-0 nova_compute[250018]:       </auth>
Jan 20 14:24:10 compute-0 nova_compute[250018]:       <target dev="vda" bus="virtio"/>
Jan 20 14:24:10 compute-0 nova_compute[250018]:     </disk>
Jan 20 14:24:10 compute-0 nova_compute[250018]:     <disk type="network" device="cdrom">
Jan 20 14:24:10 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 14:24:10 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/0f93511b-eba1-4b09-94ce-051ac10117ce_disk.config">
Jan 20 14:24:10 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:24:10 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:24:10 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:24:10 compute-0 nova_compute[250018]:       </source>
Jan 20 14:24:10 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 14:24:10 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:24:10 compute-0 nova_compute[250018]:       </auth>
Jan 20 14:24:10 compute-0 nova_compute[250018]:       <target dev="sda" bus="sata"/>
Jan 20 14:24:10 compute-0 nova_compute[250018]:     </disk>
Jan 20 14:24:10 compute-0 nova_compute[250018]:     <serial type="pty">
Jan 20 14:24:10 compute-0 nova_compute[250018]:       <log file="/var/lib/nova/instances/0f93511b-eba1-4b09-94ce-051ac10117ce/console.log" append="off"/>
Jan 20 14:24:10 compute-0 nova_compute[250018]:     </serial>
Jan 20 14:24:10 compute-0 nova_compute[250018]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 14:24:10 compute-0 nova_compute[250018]:     <video>
Jan 20 14:24:10 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 14:24:10 compute-0 nova_compute[250018]:     </video>
Jan 20 14:24:10 compute-0 nova_compute[250018]:     <input type="tablet" bus="usb"/>
Jan 20 14:24:10 compute-0 nova_compute[250018]:     <rng model="virtio">
Jan 20 14:24:10 compute-0 nova_compute[250018]:       <backend model="random">/dev/urandom</backend>
Jan 20 14:24:10 compute-0 nova_compute[250018]:     </rng>
Jan 20 14:24:10 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root"/>
Jan 20 14:24:10 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:24:10 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:24:10 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:24:10 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:24:10 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:24:10 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:24:10 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:24:10 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:24:10 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:24:10 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:24:10 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:24:10 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:24:10 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:24:10 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:24:10 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:24:10 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:24:10 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:24:10 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:24:10 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:24:10 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:24:10 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:24:10 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:24:10 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:24:10 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:24:10 compute-0 nova_compute[250018]:     <controller type="usb" index="0"/>
Jan 20 14:24:10 compute-0 nova_compute[250018]:     <memballoon model="virtio">
Jan 20 14:24:10 compute-0 nova_compute[250018]:       <stats period="10"/>
Jan 20 14:24:10 compute-0 nova_compute[250018]:     </memballoon>
Jan 20 14:24:10 compute-0 nova_compute[250018]:   </devices>
Jan 20 14:24:10 compute-0 nova_compute[250018]: </domain>
Jan 20 14:24:10 compute-0 nova_compute[250018]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 14:24:10 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:24:10 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1792625192' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:24:10 compute-0 nova_compute[250018]: 2026-01-20 14:24:10.193 250022 DEBUG oslo_concurrency.processutils [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:24:10 compute-0 nova_compute[250018]: 2026-01-20 14:24:10.194 250022 DEBUG nova.objects.instance [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Lazy-loading 'pci_devices' on Instance uuid eaa38f5d-6564-47d4-b7c7-261945710681 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:24:10 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:24:10 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 20 14:24:10 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:24:10.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 20 14:24:10 compute-0 nova_compute[250018]: 2026-01-20 14:24:10.305 250022 DEBUG nova.virt.libvirt.driver [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: eaa38f5d-6564-47d4-b7c7-261945710681] End _get_guest_xml xml=<domain type="kvm">
Jan 20 14:24:10 compute-0 nova_compute[250018]:   <uuid>eaa38f5d-6564-47d4-b7c7-261945710681</uuid>
Jan 20 14:24:10 compute-0 nova_compute[250018]:   <name>instance-00000009</name>
Jan 20 14:24:10 compute-0 nova_compute[250018]:   <memory>131072</memory>
Jan 20 14:24:10 compute-0 nova_compute[250018]:   <vcpu>1</vcpu>
Jan 20 14:24:10 compute-0 nova_compute[250018]:   <metadata>
Jan 20 14:24:10 compute-0 nova_compute[250018]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 14:24:10 compute-0 nova_compute[250018]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 14:24:10 compute-0 nova_compute[250018]:       <nova:name>tempest-ServersOnMultiNodesTest-server-1101418423-2</nova:name>
Jan 20 14:24:10 compute-0 nova_compute[250018]:       <nova:creationTime>2026-01-20 14:24:09</nova:creationTime>
Jan 20 14:24:10 compute-0 nova_compute[250018]:       <nova:flavor name="m1.nano">
Jan 20 14:24:10 compute-0 nova_compute[250018]:         <nova:memory>128</nova:memory>
Jan 20 14:24:10 compute-0 nova_compute[250018]:         <nova:disk>1</nova:disk>
Jan 20 14:24:10 compute-0 nova_compute[250018]:         <nova:swap>0</nova:swap>
Jan 20 14:24:10 compute-0 nova_compute[250018]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 14:24:10 compute-0 nova_compute[250018]:         <nova:vcpus>1</nova:vcpus>
Jan 20 14:24:10 compute-0 nova_compute[250018]:       </nova:flavor>
Jan 20 14:24:10 compute-0 nova_compute[250018]:       <nova:owner>
Jan 20 14:24:10 compute-0 nova_compute[250018]:         <nova:user uuid="32a16ea2839748629233294de19222b3">tempest-ServersOnMultiNodesTest-1140514054-project-member</nova:user>
Jan 20 14:24:10 compute-0 nova_compute[250018]:         <nova:project uuid="986d9f2d9bd24a228e53a76694db0568">tempest-ServersOnMultiNodesTest-1140514054</nova:project>
Jan 20 14:24:10 compute-0 nova_compute[250018]:       </nova:owner>
Jan 20 14:24:10 compute-0 nova_compute[250018]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 14:24:10 compute-0 nova_compute[250018]:       <nova:ports/>
Jan 20 14:24:10 compute-0 nova_compute[250018]:     </nova:instance>
Jan 20 14:24:10 compute-0 nova_compute[250018]:   </metadata>
Jan 20 14:24:10 compute-0 nova_compute[250018]:   <sysinfo type="smbios">
Jan 20 14:24:10 compute-0 nova_compute[250018]:     <system>
Jan 20 14:24:10 compute-0 nova_compute[250018]:       <entry name="manufacturer">RDO</entry>
Jan 20 14:24:10 compute-0 nova_compute[250018]:       <entry name="product">OpenStack Compute</entry>
Jan 20 14:24:10 compute-0 nova_compute[250018]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 14:24:10 compute-0 nova_compute[250018]:       <entry name="serial">eaa38f5d-6564-47d4-b7c7-261945710681</entry>
Jan 20 14:24:10 compute-0 nova_compute[250018]:       <entry name="uuid">eaa38f5d-6564-47d4-b7c7-261945710681</entry>
Jan 20 14:24:10 compute-0 nova_compute[250018]:       <entry name="family">Virtual Machine</entry>
Jan 20 14:24:10 compute-0 nova_compute[250018]:     </system>
Jan 20 14:24:10 compute-0 nova_compute[250018]:   </sysinfo>
Jan 20 14:24:10 compute-0 nova_compute[250018]:   <os>
Jan 20 14:24:10 compute-0 nova_compute[250018]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 14:24:10 compute-0 nova_compute[250018]:     <boot dev="hd"/>
Jan 20 14:24:10 compute-0 nova_compute[250018]:     <smbios mode="sysinfo"/>
Jan 20 14:24:10 compute-0 nova_compute[250018]:   </os>
Jan 20 14:24:10 compute-0 nova_compute[250018]:   <features>
Jan 20 14:24:10 compute-0 nova_compute[250018]:     <acpi/>
Jan 20 14:24:10 compute-0 nova_compute[250018]:     <apic/>
Jan 20 14:24:10 compute-0 nova_compute[250018]:     <vmcoreinfo/>
Jan 20 14:24:10 compute-0 nova_compute[250018]:   </features>
Jan 20 14:24:10 compute-0 nova_compute[250018]:   <clock offset="utc">
Jan 20 14:24:10 compute-0 nova_compute[250018]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 14:24:10 compute-0 nova_compute[250018]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 14:24:10 compute-0 nova_compute[250018]:     <timer name="hpet" present="no"/>
Jan 20 14:24:10 compute-0 nova_compute[250018]:   </clock>
Jan 20 14:24:10 compute-0 nova_compute[250018]:   <cpu mode="custom" match="exact">
Jan 20 14:24:10 compute-0 nova_compute[250018]:     <model>Nehalem</model>
Jan 20 14:24:10 compute-0 nova_compute[250018]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 14:24:10 compute-0 nova_compute[250018]:   </cpu>
Jan 20 14:24:10 compute-0 nova_compute[250018]:   <devices>
Jan 20 14:24:10 compute-0 nova_compute[250018]:     <disk type="network" device="disk">
Jan 20 14:24:10 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 14:24:10 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/eaa38f5d-6564-47d4-b7c7-261945710681_disk">
Jan 20 14:24:10 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:24:10 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:24:10 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:24:10 compute-0 nova_compute[250018]:       </source>
Jan 20 14:24:10 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 14:24:10 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:24:10 compute-0 nova_compute[250018]:       </auth>
Jan 20 14:24:10 compute-0 nova_compute[250018]:       <target dev="vda" bus="virtio"/>
Jan 20 14:24:10 compute-0 nova_compute[250018]:     </disk>
Jan 20 14:24:10 compute-0 nova_compute[250018]:     <disk type="network" device="cdrom">
Jan 20 14:24:10 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 14:24:10 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/eaa38f5d-6564-47d4-b7c7-261945710681_disk.config">
Jan 20 14:24:10 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:24:10 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:24:10 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:24:10 compute-0 nova_compute[250018]:       </source>
Jan 20 14:24:10 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 14:24:10 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:24:10 compute-0 nova_compute[250018]:       </auth>
Jan 20 14:24:10 compute-0 nova_compute[250018]:       <target dev="sda" bus="sata"/>
Jan 20 14:24:10 compute-0 nova_compute[250018]:     </disk>
Jan 20 14:24:10 compute-0 nova_compute[250018]:     <serial type="pty">
Jan 20 14:24:10 compute-0 nova_compute[250018]:       <log file="/var/lib/nova/instances/eaa38f5d-6564-47d4-b7c7-261945710681/console.log" append="off"/>
Jan 20 14:24:10 compute-0 nova_compute[250018]:     </serial>
Jan 20 14:24:10 compute-0 nova_compute[250018]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 14:24:10 compute-0 nova_compute[250018]:     <video>
Jan 20 14:24:10 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 14:24:10 compute-0 nova_compute[250018]:     </video>
Jan 20 14:24:10 compute-0 nova_compute[250018]:     <input type="tablet" bus="usb"/>
Jan 20 14:24:10 compute-0 nova_compute[250018]:     <rng model="virtio">
Jan 20 14:24:10 compute-0 nova_compute[250018]:       <backend model="random">/dev/urandom</backend>
Jan 20 14:24:10 compute-0 nova_compute[250018]:     </rng>
Jan 20 14:24:10 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root"/>
Jan 20 14:24:10 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:24:10 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:24:10 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:24:10 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:24:10 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:24:10 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:24:10 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:24:10 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:24:10 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:24:10 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:24:10 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:24:10 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:24:10 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:24:10 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:24:10 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:24:10 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:24:10 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:24:10 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:24:10 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:24:10 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:24:10 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:24:10 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:24:10 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:24:10 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:24:10 compute-0 nova_compute[250018]:     <controller type="usb" index="0"/>
Jan 20 14:24:10 compute-0 nova_compute[250018]:     <memballoon model="virtio">
Jan 20 14:24:10 compute-0 nova_compute[250018]:       <stats period="10"/>
Jan 20 14:24:10 compute-0 nova_compute[250018]:     </memballoon>
Jan 20 14:24:10 compute-0 nova_compute[250018]:   </devices>
Jan 20 14:24:10 compute-0 nova_compute[250018]: </domain>
Jan 20 14:24:10 compute-0 nova_compute[250018]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 14:24:10 compute-0 nova_compute[250018]: 2026-01-20 14:24:10.461 250022 DEBUG nova.virt.libvirt.driver [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:24:10 compute-0 nova_compute[250018]: 2026-01-20 14:24:10.462 250022 DEBUG nova.virt.libvirt.driver [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:24:10 compute-0 nova_compute[250018]: 2026-01-20 14:24:10.463 250022 INFO nova.virt.libvirt.driver [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: 0f93511b-eba1-4b09-94ce-051ac10117ce] Using config drive
Jan 20 14:24:10 compute-0 nova_compute[250018]: 2026-01-20 14:24:10.503 250022 DEBUG nova.storage.rbd_utils [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] rbd image 0f93511b-eba1-4b09-94ce-051ac10117ce_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:24:10 compute-0 nova_compute[250018]: 2026-01-20 14:24:10.513 250022 DEBUG nova.virt.libvirt.driver [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:24:10 compute-0 nova_compute[250018]: 2026-01-20 14:24:10.513 250022 DEBUG nova.virt.libvirt.driver [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:24:10 compute-0 nova_compute[250018]: 2026-01-20 14:24:10.514 250022 INFO nova.virt.libvirt.driver [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: eaa38f5d-6564-47d4-b7c7-261945710681] Using config drive
Jan 20 14:24:10 compute-0 nova_compute[250018]: 2026-01-20 14:24:10.537 250022 DEBUG nova.storage.rbd_utils [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] rbd image eaa38f5d-6564-47d4-b7c7-261945710681_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:24:10 compute-0 nova_compute[250018]: 2026-01-20 14:24:10.545 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:24:10 compute-0 nova_compute[250018]: 2026-01-20 14:24:10.821 250022 INFO nova.virt.libvirt.driver [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: 0f93511b-eba1-4b09-94ce-051ac10117ce] Creating config drive at /var/lib/nova/instances/0f93511b-eba1-4b09-94ce-051ac10117ce/disk.config
Jan 20 14:24:10 compute-0 nova_compute[250018]: 2026-01-20 14:24:10.825 250022 DEBUG oslo_concurrency.processutils [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0f93511b-eba1-4b09-94ce-051ac10117ce/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7jyvj_7e execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:24:10 compute-0 nova_compute[250018]: 2026-01-20 14:24:10.844 250022 INFO nova.virt.libvirt.driver [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: eaa38f5d-6564-47d4-b7c7-261945710681] Creating config drive at /var/lib/nova/instances/eaa38f5d-6564-47d4-b7c7-261945710681/disk.config
Jan 20 14:24:10 compute-0 nova_compute[250018]: 2026-01-20 14:24:10.849 250022 DEBUG oslo_concurrency.processutils [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/eaa38f5d-6564-47d4-b7c7-261945710681/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpahww46ez execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:24:10 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/2259299625' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:24:10 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/1792625192' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:24:10 compute-0 nova_compute[250018]: 2026-01-20 14:24:10.949 250022 DEBUG oslo_concurrency.processutils [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0f93511b-eba1-4b09-94ce-051ac10117ce/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7jyvj_7e" returned: 0 in 0.124s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:24:10 compute-0 nova_compute[250018]: 2026-01-20 14:24:10.980 250022 DEBUG nova.storage.rbd_utils [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] rbd image 0f93511b-eba1-4b09-94ce-051ac10117ce_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:24:10 compute-0 nova_compute[250018]: 2026-01-20 14:24:10.984 250022 DEBUG oslo_concurrency.processutils [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0f93511b-eba1-4b09-94ce-051ac10117ce/disk.config 0f93511b-eba1-4b09-94ce-051ac10117ce_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:24:11 compute-0 nova_compute[250018]: 2026-01-20 14:24:11.000 250022 DEBUG oslo_concurrency.processutils [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/eaa38f5d-6564-47d4-b7c7-261945710681/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpahww46ez" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:24:11 compute-0 nova_compute[250018]: 2026-01-20 14:24:11.029 250022 DEBUG nova.storage.rbd_utils [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] rbd image eaa38f5d-6564-47d4-b7c7-261945710681_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:24:11 compute-0 nova_compute[250018]: 2026-01-20 14:24:11.034 250022 DEBUG oslo_concurrency.processutils [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/eaa38f5d-6564-47d4-b7c7-261945710681/disk.config eaa38f5d-6564-47d4-b7c7-261945710681_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:24:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 14:24:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:24:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 20 14:24:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:24:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.008014246580122836 of space, bias 1.0, pg target 2.4042739740368506 quantized to 32 (current 32)
Jan 20 14:24:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:24:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 5.452610273590173e-07 of space, bias 1.0, pg target 0.00016248778615298717 quantized to 32 (current 32)
Jan 20 14:24:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:24:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:24:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:24:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5671365362693095 quantized to 32 (current 32)
Jan 20 14:24:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:24:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017332030522985297 quantized to 16 (current 32)
Jan 20 14:24:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:24:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:24:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:24:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.0002166503815373162 quantized to 32 (current 32)
Jan 20 14:24:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:24:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018415282430671877 quantized to 32 (current 32)
Jan 20 14:24:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:24:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:24:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:24:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004333007630746324 quantized to 32 (current 32)
Jan 20 14:24:11 compute-0 nova_compute[250018]: 2026-01-20 14:24:11.117 250022 DEBUG oslo_concurrency.processutils [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0f93511b-eba1-4b09-94ce-051ac10117ce/disk.config 0f93511b-eba1-4b09-94ce-051ac10117ce_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:24:11 compute-0 nova_compute[250018]: 2026-01-20 14:24:11.118 250022 INFO nova.virt.libvirt.driver [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: 0f93511b-eba1-4b09-94ce-051ac10117ce] Deleting local config drive /var/lib/nova/instances/0f93511b-eba1-4b09-94ce-051ac10117ce/disk.config because it was imported into RBD.
Jan 20 14:24:11 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v982: 321 pgs: 321 active+clean; 366 MiB data, 428 MiB used, 21 GiB / 21 GiB avail; 2.7 MiB/s rd, 9.9 MiB/s wr, 338 op/s
Jan 20 14:24:11 compute-0 systemd-machined[216401]: New machine qemu-3-instance-00000008.
Jan 20 14:24:11 compute-0 systemd[1]: Started Virtual Machine qemu-3-instance-00000008.
Jan 20 14:24:11 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:24:11 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:24:11 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:24:11.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:24:11 compute-0 nova_compute[250018]: 2026-01-20 14:24:11.774 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:24:11 compute-0 nova_compute[250018]: 2026-01-20 14:24:11.775 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:24:11 compute-0 nova_compute[250018]: 2026-01-20 14:24:11.776 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:24:12 compute-0 nova_compute[250018]: 2026-01-20 14:24:12.047 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:24:12 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:24:12 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:24:12 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:24:12.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:24:12 compute-0 ceph-mon[74360]: pgmap v982: 321 pgs: 321 active+clean; 366 MiB data, 428 MiB used, 21 GiB / 21 GiB avail; 2.7 MiB/s rd, 9.9 MiB/s wr, 338 op/s
Jan 20 14:24:13 compute-0 sshd-session[258620]: Invalid user admin from 157.245.78.139 port 52018
Jan 20 14:24:13 compute-0 nova_compute[250018]: 2026-01-20 14:24:13.044 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:24:13 compute-0 nova_compute[250018]: 2026-01-20 14:24:13.067 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:24:13 compute-0 nova_compute[250018]: 2026-01-20 14:24:13.068 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:24:13 compute-0 nova_compute[250018]: 2026-01-20 14:24:13.100 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:24:13 compute-0 nova_compute[250018]: 2026-01-20 14:24:13.100 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:24:13 compute-0 nova_compute[250018]: 2026-01-20 14:24:13.101 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:24:13 compute-0 nova_compute[250018]: 2026-01-20 14:24:13.101 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 14:24:13 compute-0 nova_compute[250018]: 2026-01-20 14:24:13.101 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:24:13 compute-0 sshd-session[258620]: Connection closed by invalid user admin 157.245.78.139 port 52018 [preauth]
Jan 20 14:24:13 compute-0 nova_compute[250018]: 2026-01-20 14:24:13.169 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768919053.1690934, 0f93511b-eba1-4b09-94ce-051ac10117ce => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:24:13 compute-0 nova_compute[250018]: 2026-01-20 14:24:13.170 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 0f93511b-eba1-4b09-94ce-051ac10117ce] VM Resumed (Lifecycle Event)
Jan 20 14:24:13 compute-0 nova_compute[250018]: 2026-01-20 14:24:13.173 250022 DEBUG nova.compute.manager [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: 0f93511b-eba1-4b09-94ce-051ac10117ce] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 14:24:13 compute-0 nova_compute[250018]: 2026-01-20 14:24:13.174 250022 DEBUG nova.virt.libvirt.driver [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: 0f93511b-eba1-4b09-94ce-051ac10117ce] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 14:24:13 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v983: 321 pgs: 321 active+clean; 410 MiB data, 451 MiB used, 21 GiB / 21 GiB avail; 2.6 MiB/s rd, 9.9 MiB/s wr, 302 op/s
Jan 20 14:24:13 compute-0 nova_compute[250018]: 2026-01-20 14:24:13.181 250022 INFO nova.virt.libvirt.driver [-] [instance: 0f93511b-eba1-4b09-94ce-051ac10117ce] Instance spawned successfully.
Jan 20 14:24:13 compute-0 nova_compute[250018]: 2026-01-20 14:24:13.182 250022 DEBUG nova.virt.libvirt.driver [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: 0f93511b-eba1-4b09-94ce-051ac10117ce] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 14:24:13 compute-0 nova_compute[250018]: 2026-01-20 14:24:13.198 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 0f93511b-eba1-4b09-94ce-051ac10117ce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:24:13 compute-0 nova_compute[250018]: 2026-01-20 14:24:13.205 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 0f93511b-eba1-4b09-94ce-051ac10117ce] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:24:13 compute-0 nova_compute[250018]: 2026-01-20 14:24:13.207 250022 DEBUG oslo_concurrency.processutils [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/eaa38f5d-6564-47d4-b7c7-261945710681/disk.config eaa38f5d-6564-47d4-b7c7-261945710681_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.174s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:24:13 compute-0 nova_compute[250018]: 2026-01-20 14:24:13.208 250022 INFO nova.virt.libvirt.driver [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: eaa38f5d-6564-47d4-b7c7-261945710681] Deleting local config drive /var/lib/nova/instances/eaa38f5d-6564-47d4-b7c7-261945710681/disk.config because it was imported into RBD.
Jan 20 14:24:13 compute-0 nova_compute[250018]: 2026-01-20 14:24:13.210 250022 DEBUG nova.virt.libvirt.driver [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: 0f93511b-eba1-4b09-94ce-051ac10117ce] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:24:13 compute-0 nova_compute[250018]: 2026-01-20 14:24:13.211 250022 DEBUG nova.virt.libvirt.driver [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: 0f93511b-eba1-4b09-94ce-051ac10117ce] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:24:13 compute-0 nova_compute[250018]: 2026-01-20 14:24:13.211 250022 DEBUG nova.virt.libvirt.driver [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: 0f93511b-eba1-4b09-94ce-051ac10117ce] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:24:13 compute-0 nova_compute[250018]: 2026-01-20 14:24:13.211 250022 DEBUG nova.virt.libvirt.driver [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: 0f93511b-eba1-4b09-94ce-051ac10117ce] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:24:13 compute-0 nova_compute[250018]: 2026-01-20 14:24:13.212 250022 DEBUG nova.virt.libvirt.driver [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: 0f93511b-eba1-4b09-94ce-051ac10117ce] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:24:13 compute-0 nova_compute[250018]: 2026-01-20 14:24:13.212 250022 DEBUG nova.virt.libvirt.driver [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: 0f93511b-eba1-4b09-94ce-051ac10117ce] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:24:13 compute-0 nova_compute[250018]: 2026-01-20 14:24:13.235 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 0f93511b-eba1-4b09-94ce-051ac10117ce] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:24:13 compute-0 nova_compute[250018]: 2026-01-20 14:24:13.235 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768919053.1742373, 0f93511b-eba1-4b09-94ce-051ac10117ce => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:24:13 compute-0 nova_compute[250018]: 2026-01-20 14:24:13.235 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 0f93511b-eba1-4b09-94ce-051ac10117ce] VM Started (Lifecycle Event)
Jan 20 14:24:13 compute-0 systemd-machined[216401]: New machine qemu-4-instance-00000009.
Jan 20 14:24:13 compute-0 nova_compute[250018]: 2026-01-20 14:24:13.270 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 0f93511b-eba1-4b09-94ce-051ac10117ce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:24:13 compute-0 systemd[1]: Started Virtual Machine qemu-4-instance-00000009.
Jan 20 14:24:13 compute-0 nova_compute[250018]: 2026-01-20 14:24:13.275 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 0f93511b-eba1-4b09-94ce-051ac10117ce] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:24:13 compute-0 nova_compute[250018]: 2026-01-20 14:24:13.304 250022 INFO nova.compute.manager [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: 0f93511b-eba1-4b09-94ce-051ac10117ce] Took 5.99 seconds to spawn the instance on the hypervisor.
Jan 20 14:24:13 compute-0 nova_compute[250018]: 2026-01-20 14:24:13.304 250022 DEBUG nova.compute.manager [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: 0f93511b-eba1-4b09-94ce-051ac10117ce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:24:13 compute-0 nova_compute[250018]: 2026-01-20 14:24:13.308 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 0f93511b-eba1-4b09-94ce-051ac10117ce] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:24:13 compute-0 nova_compute[250018]: 2026-01-20 14:24:13.388 250022 INFO nova.compute.manager [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: 0f93511b-eba1-4b09-94ce-051ac10117ce] Took 7.19 seconds to build instance.
Jan 20 14:24:13 compute-0 nova_compute[250018]: 2026-01-20 14:24:13.403 250022 DEBUG oslo_concurrency.lockutils [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Lock "0f93511b-eba1-4b09-94ce-051ac10117ce" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.326s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:24:13 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:24:13 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:24:13 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:24:13.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:24:13 compute-0 sudo[258694]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:24:13 compute-0 sudo[258694]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:24:13 compute-0 sudo[258694]: pam_unix(sudo:session): session closed for user root
Jan 20 14:24:13 compute-0 sudo[258719]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:24:13 compute-0 sudo[258719]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:24:13 compute-0 sudo[258719]: pam_unix(sudo:session): session closed for user root
Jan 20 14:24:13 compute-0 sudo[258744]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:24:13 compute-0 sudo[258744]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:24:13 compute-0 sudo[258744]: pam_unix(sudo:session): session closed for user root
Jan 20 14:24:13 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:24:13 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2739698211' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:24:13 compute-0 nova_compute[250018]: 2026-01-20 14:24:13.625 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.524s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:24:13 compute-0 sudo[258787]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 14:24:13 compute-0 sudo[258787]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:24:13 compute-0 nova_compute[250018]: 2026-01-20 14:24:13.692 250022 DEBUG nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] skipping disk for instance-00000009 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:24:13 compute-0 nova_compute[250018]: 2026-01-20 14:24:13.693 250022 DEBUG nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] skipping disk for instance-00000009 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:24:13 compute-0 nova_compute[250018]: 2026-01-20 14:24:13.699 250022 DEBUG nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] skipping disk for instance-00000008 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:24:13 compute-0 nova_compute[250018]: 2026-01-20 14:24:13.699 250022 DEBUG nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] skipping disk for instance-00000008 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:24:13 compute-0 nova_compute[250018]: 2026-01-20 14:24:13.702 250022 DEBUG nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:24:13 compute-0 nova_compute[250018]: 2026-01-20 14:24:13.702 250022 DEBUG nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:24:13 compute-0 nova_compute[250018]: 2026-01-20 14:24:13.759 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768919053.7577877, eaa38f5d-6564-47d4-b7c7-261945710681 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:24:13 compute-0 nova_compute[250018]: 2026-01-20 14:24:13.760 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: eaa38f5d-6564-47d4-b7c7-261945710681] VM Resumed (Lifecycle Event)
Jan 20 14:24:13 compute-0 nova_compute[250018]: 2026-01-20 14:24:13.761 250022 DEBUG nova.compute.manager [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: eaa38f5d-6564-47d4-b7c7-261945710681] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 14:24:13 compute-0 nova_compute[250018]: 2026-01-20 14:24:13.762 250022 DEBUG nova.virt.libvirt.driver [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: eaa38f5d-6564-47d4-b7c7-261945710681] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 14:24:13 compute-0 nova_compute[250018]: 2026-01-20 14:24:13.764 250022 INFO nova.virt.libvirt.driver [-] [instance: eaa38f5d-6564-47d4-b7c7-261945710681] Instance spawned successfully.
Jan 20 14:24:13 compute-0 nova_compute[250018]: 2026-01-20 14:24:13.764 250022 DEBUG nova.virt.libvirt.driver [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: eaa38f5d-6564-47d4-b7c7-261945710681] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 14:24:13 compute-0 nova_compute[250018]: 2026-01-20 14:24:13.783 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: eaa38f5d-6564-47d4-b7c7-261945710681] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:24:13 compute-0 nova_compute[250018]: 2026-01-20 14:24:13.787 250022 DEBUG nova.virt.libvirt.driver [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: eaa38f5d-6564-47d4-b7c7-261945710681] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:24:13 compute-0 nova_compute[250018]: 2026-01-20 14:24:13.788 250022 DEBUG nova.virt.libvirt.driver [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: eaa38f5d-6564-47d4-b7c7-261945710681] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:24:13 compute-0 nova_compute[250018]: 2026-01-20 14:24:13.788 250022 DEBUG nova.virt.libvirt.driver [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: eaa38f5d-6564-47d4-b7c7-261945710681] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:24:13 compute-0 nova_compute[250018]: 2026-01-20 14:24:13.789 250022 DEBUG nova.virt.libvirt.driver [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: eaa38f5d-6564-47d4-b7c7-261945710681] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:24:13 compute-0 nova_compute[250018]: 2026-01-20 14:24:13.789 250022 DEBUG nova.virt.libvirt.driver [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: eaa38f5d-6564-47d4-b7c7-261945710681] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:24:13 compute-0 nova_compute[250018]: 2026-01-20 14:24:13.789 250022 DEBUG nova.virt.libvirt.driver [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: eaa38f5d-6564-47d4-b7c7-261945710681] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:24:13 compute-0 nova_compute[250018]: 2026-01-20 14:24:13.794 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: eaa38f5d-6564-47d4-b7c7-261945710681] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:24:13 compute-0 nova_compute[250018]: 2026-01-20 14:24:13.829 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: eaa38f5d-6564-47d4-b7c7-261945710681] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:24:13 compute-0 nova_compute[250018]: 2026-01-20 14:24:13.830 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768919053.7578707, eaa38f5d-6564-47d4-b7c7-261945710681 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:24:13 compute-0 nova_compute[250018]: 2026-01-20 14:24:13.831 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: eaa38f5d-6564-47d4-b7c7-261945710681] VM Started (Lifecycle Event)
Jan 20 14:24:13 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/3076624727' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:24:13 compute-0 ceph-mon[74360]: pgmap v983: 321 pgs: 321 active+clean; 410 MiB data, 451 MiB used, 21 GiB / 21 GiB avail; 2.6 MiB/s rd, 9.9 MiB/s wr, 302 op/s
Jan 20 14:24:13 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/2630309587' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:24:13 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/2630309587' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:24:13 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/2739698211' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:24:13 compute-0 nova_compute[250018]: 2026-01-20 14:24:13.868 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: eaa38f5d-6564-47d4-b7c7-261945710681] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:24:13 compute-0 nova_compute[250018]: 2026-01-20 14:24:13.871 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: eaa38f5d-6564-47d4-b7c7-261945710681] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:24:13 compute-0 nova_compute[250018]: 2026-01-20 14:24:13.915 250022 INFO nova.compute.manager [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: eaa38f5d-6564-47d4-b7c7-261945710681] Took 5.85 seconds to spawn the instance on the hypervisor.
Jan 20 14:24:13 compute-0 nova_compute[250018]: 2026-01-20 14:24:13.915 250022 DEBUG nova.compute.manager [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: eaa38f5d-6564-47d4-b7c7-261945710681] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:24:13 compute-0 nova_compute[250018]: 2026-01-20 14:24:13.932 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: eaa38f5d-6564-47d4-b7c7-261945710681] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:24:13 compute-0 nova_compute[250018]: 2026-01-20 14:24:13.934 250022 WARNING nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:24:13 compute-0 nova_compute[250018]: 2026-01-20 14:24:13.935 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4581MB free_disk=20.804096221923828GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 14:24:13 compute-0 nova_compute[250018]: 2026-01-20 14:24:13.935 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:24:13 compute-0 nova_compute[250018]: 2026-01-20 14:24:13.936 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:24:14 compute-0 sudo[258787]: pam_unix(sudo:session): session closed for user root
Jan 20 14:24:14 compute-0 nova_compute[250018]: 2026-01-20 14:24:14.124 250022 INFO nova.compute.manager [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: eaa38f5d-6564-47d4-b7c7-261945710681] Took 7.90 seconds to build instance.
Jan 20 14:24:14 compute-0 nova_compute[250018]: 2026-01-20 14:24:14.145 250022 DEBUG oslo_concurrency.lockutils [None req-7e432946-159c-4c3d-864b-668116877540 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Lock "eaa38f5d-6564-47d4-b7c7-261945710681" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.023s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:24:14 compute-0 nova_compute[250018]: 2026-01-20 14:24:14.174 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Instance 458c23a5-1bf5-4160-9265-1db326ecf321 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 14:24:14 compute-0 nova_compute[250018]: 2026-01-20 14:24:14.175 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Instance 0f93511b-eba1-4b09-94ce-051ac10117ce actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 14:24:14 compute-0 nova_compute[250018]: 2026-01-20 14:24:14.175 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Instance eaa38f5d-6564-47d4-b7c7-261945710681 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 14:24:14 compute-0 nova_compute[250018]: 2026-01-20 14:24:14.175 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 14:24:14 compute-0 nova_compute[250018]: 2026-01-20 14:24:14.175 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=20GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 14:24:14 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:24:14 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:24:14 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:24:14.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:24:14 compute-0 nova_compute[250018]: 2026-01-20 14:24:14.306 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:24:14 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 14:24:14 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:24:14 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:24:14 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 20 14:24:14 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:24:14 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 20 14:24:14 compute-0 nova_compute[250018]: 2026-01-20 14:24:14.857 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:24:14 compute-0 nova_compute[250018]: 2026-01-20 14:24:14.866 250022 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768919039.8648367, 288f4f83-d33c-44ef-bf78-16cacd3f811f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:24:14 compute-0 nova_compute[250018]: 2026-01-20 14:24:14.866 250022 INFO nova.compute.manager [-] [instance: 288f4f83-d33c-44ef-bf78-16cacd3f811f] VM Stopped (Lifecycle Event)
Jan 20 14:24:14 compute-0 nova_compute[250018]: 2026-01-20 14:24:14.917 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:24:15 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:24:15 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:24:15 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:24:15 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 742ab2cd-7e55-4ef2-911e-2242a9a61d4f does not exist
Jan 20 14:24:15 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 774f4691-52a8-4634-924b-f541062e5a5a does not exist
Jan 20 14:24:15 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev bb7ae410-a00f-4aaa-b55a-5a32a5648143 does not exist
Jan 20 14:24:15 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 20 14:24:15 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:24:15 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 20 14:24:15 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:24:15 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 14:24:15 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:24:15 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:24:15 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2006413714' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:24:15 compute-0 nova_compute[250018]: 2026-01-20 14:24:15.097 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.792s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:24:15 compute-0 nova_compute[250018]: 2026-01-20 14:24:15.104 250022 DEBUG nova.compute.provider_tree [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:24:15 compute-0 sudo[258888]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:24:15 compute-0 sudo[258888]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:24:15 compute-0 sudo[258888]: pam_unix(sudo:session): session closed for user root
Jan 20 14:24:15 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v984: 321 pgs: 321 active+clean; 418 MiB data, 458 MiB used, 21 GiB / 21 GiB avail; 2.6 MiB/s rd, 8.2 MiB/s wr, 269 op/s
Jan 20 14:24:15 compute-0 sudo[258915]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:24:15 compute-0 sudo[258915]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:24:15 compute-0 sudo[258915]: pam_unix(sudo:session): session closed for user root
Jan 20 14:24:15 compute-0 sudo[258940]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:24:15 compute-0 sudo[258940]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:24:15 compute-0 sudo[258940]: pam_unix(sudo:session): session closed for user root
Jan 20 14:24:15 compute-0 nova_compute[250018]: 2026-01-20 14:24:15.252 250022 DEBUG nova.compute.manager [None req-47ddfdfb-b1c0-4095-bf35-ed58ae9dd508 - - - - - -] [instance: 288f4f83-d33c-44ef-bf78-16cacd3f811f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:24:15 compute-0 nova_compute[250018]: 2026-01-20 14:24:15.256 250022 DEBUG nova.scheduler.client.report [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:24:15 compute-0 sudo[258965]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 14:24:15 compute-0 sudo[258965]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:24:15 compute-0 nova_compute[250018]: 2026-01-20 14:24:15.292 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 14:24:15 compute-0 nova_compute[250018]: 2026-01-20 14:24:15.293 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.357s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:24:15 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:24:15 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:24:15 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:24:15.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:24:15 compute-0 podman[259031]: 2026-01-20 14:24:15.683362599 +0000 UTC m=+0.098833323 container create c5d8cfc82837a21e67d2328fec1df9a8bdecf543aa58e649302e165f882812fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_sanderson, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 14:24:15 compute-0 podman[259031]: 2026-01-20 14:24:15.607452968 +0000 UTC m=+0.022923722 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:24:15 compute-0 systemd[1]: Started libpod-conmon-c5d8cfc82837a21e67d2328fec1df9a8bdecf543aa58e649302e165f882812fb.scope.
Jan 20 14:24:15 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:24:15 compute-0 podman[259031]: 2026-01-20 14:24:15.830277253 +0000 UTC m=+0.245748057 container init c5d8cfc82837a21e67d2328fec1df9a8bdecf543aa58e649302e165f882812fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_sanderson, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 20 14:24:15 compute-0 podman[259031]: 2026-01-20 14:24:15.84122076 +0000 UTC m=+0.256691484 container start c5d8cfc82837a21e67d2328fec1df9a8bdecf543aa58e649302e165f882812fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_sanderson, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 20 14:24:15 compute-0 podman[259031]: 2026-01-20 14:24:15.844909157 +0000 UTC m=+0.260379971 container attach c5d8cfc82837a21e67d2328fec1df9a8bdecf543aa58e649302e165f882812fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_sanderson, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 20 14:24:15 compute-0 nice_sanderson[259047]: 167 167
Jan 20 14:24:15 compute-0 systemd[1]: libpod-c5d8cfc82837a21e67d2328fec1df9a8bdecf543aa58e649302e165f882812fb.scope: Deactivated successfully.
Jan 20 14:24:15 compute-0 podman[259031]: 2026-01-20 14:24:15.848117381 +0000 UTC m=+0.263588105 container died c5d8cfc82837a21e67d2328fec1df9a8bdecf543aa58e649302e165f882812fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_sanderson, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 20 14:24:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-ae59afdde179fbf32a08964e6997cc1550a98f2065972d06e8aa9b55cd8373f9-merged.mount: Deactivated successfully.
Jan 20 14:24:15 compute-0 podman[259031]: 2026-01-20 14:24:15.899081348 +0000 UTC m=+0.314552092 container remove c5d8cfc82837a21e67d2328fec1df9a8bdecf543aa58e649302e165f882812fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_sanderson, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 14:24:15 compute-0 systemd[1]: libpod-conmon-c5d8cfc82837a21e67d2328fec1df9a8bdecf543aa58e649302e165f882812fb.scope: Deactivated successfully.
Jan 20 14:24:16 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:24:16 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:24:16 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:24:16 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:24:16 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/2006413714' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:24:16 compute-0 ceph-mon[74360]: pgmap v984: 321 pgs: 321 active+clean; 418 MiB data, 458 MiB used, 21 GiB / 21 GiB avail; 2.6 MiB/s rd, 8.2 MiB/s wr, 269 op/s
Jan 20 14:24:16 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/2066684918' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:24:16 compute-0 podman[259070]: 2026-01-20 14:24:16.08975653 +0000 UTC m=+0.054191043 container create 13e0beb0cf61c420b8d913242a622890663bffa191e95c09ace96829495f1f64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_proskuriakova, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:24:16 compute-0 systemd[1]: Started libpod-conmon-13e0beb0cf61c420b8d913242a622890663bffa191e95c09ace96829495f1f64.scope.
Jan 20 14:24:16 compute-0 podman[259070]: 2026-01-20 14:24:16.074362626 +0000 UTC m=+0.038797159 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:24:16 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:24:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43ce748aed6f567d1c968993a7f508494b29099ebd733041b365d4f086283605/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:24:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43ce748aed6f567d1c968993a7f508494b29099ebd733041b365d4f086283605/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:24:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43ce748aed6f567d1c968993a7f508494b29099ebd733041b365d4f086283605/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:24:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43ce748aed6f567d1c968993a7f508494b29099ebd733041b365d4f086283605/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:24:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43ce748aed6f567d1c968993a7f508494b29099ebd733041b365d4f086283605/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 14:24:16 compute-0 podman[259070]: 2026-01-20 14:24:16.183980692 +0000 UTC m=+0.148415255 container init 13e0beb0cf61c420b8d913242a622890663bffa191e95c09ace96829495f1f64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_proskuriakova, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0)
Jan 20 14:24:16 compute-0 podman[259070]: 2026-01-20 14:24:16.193322987 +0000 UTC m=+0.157757500 container start 13e0beb0cf61c420b8d913242a622890663bffa191e95c09ace96829495f1f64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_proskuriakova, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:24:16 compute-0 podman[259070]: 2026-01-20 14:24:16.197176287 +0000 UTC m=+0.161610890 container attach 13e0beb0cf61c420b8d913242a622890663bffa191e95c09ace96829495f1f64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_proskuriakova, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 20 14:24:16 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:24:16 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:24:16 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:24:16.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:24:16 compute-0 nova_compute[250018]: 2026-01-20 14:24:16.276 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:24:16 compute-0 nova_compute[250018]: 2026-01-20 14:24:16.277 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 14:24:16 compute-0 nova_compute[250018]: 2026-01-20 14:24:16.278 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 14:24:16 compute-0 nova_compute[250018]: 2026-01-20 14:24:16.488 250022 DEBUG oslo_concurrency.lockutils [None req-e23aef5a-4bea-4cb3-a0f1-fa7fbcc4a0ff 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Acquiring lock "458c23a5-1bf5-4160-9265-1db326ecf321" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:24:16 compute-0 nova_compute[250018]: 2026-01-20 14:24:16.489 250022 DEBUG oslo_concurrency.lockutils [None req-e23aef5a-4bea-4cb3-a0f1-fa7fbcc4a0ff 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Lock "458c23a5-1bf5-4160-9265-1db326ecf321" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:24:16 compute-0 sudo[259092]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:24:16 compute-0 nova_compute[250018]: 2026-01-20 14:24:16.490 250022 DEBUG oslo_concurrency.lockutils [None req-e23aef5a-4bea-4cb3-a0f1-fa7fbcc4a0ff 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Acquiring lock "458c23a5-1bf5-4160-9265-1db326ecf321-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:24:16 compute-0 nova_compute[250018]: 2026-01-20 14:24:16.492 250022 DEBUG oslo_concurrency.lockutils [None req-e23aef5a-4bea-4cb3-a0f1-fa7fbcc4a0ff 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Lock "458c23a5-1bf5-4160-9265-1db326ecf321-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:24:16 compute-0 nova_compute[250018]: 2026-01-20 14:24:16.492 250022 DEBUG oslo_concurrency.lockutils [None req-e23aef5a-4bea-4cb3-a0f1-fa7fbcc4a0ff 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Lock "458c23a5-1bf5-4160-9265-1db326ecf321-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:24:16 compute-0 sudo[259092]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:24:16 compute-0 sudo[259092]: pam_unix(sudo:session): session closed for user root
Jan 20 14:24:16 compute-0 nova_compute[250018]: 2026-01-20 14:24:16.496 250022 INFO nova.compute.manager [None req-e23aef5a-4bea-4cb3-a0f1-fa7fbcc4a0ff 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] [instance: 458c23a5-1bf5-4160-9265-1db326ecf321] Terminating instance
Jan 20 14:24:16 compute-0 nova_compute[250018]: 2026-01-20 14:24:16.498 250022 DEBUG oslo_concurrency.lockutils [None req-e23aef5a-4bea-4cb3-a0f1-fa7fbcc4a0ff 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Acquiring lock "refresh_cache-458c23a5-1bf5-4160-9265-1db326ecf321" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:24:16 compute-0 nova_compute[250018]: 2026-01-20 14:24:16.498 250022 DEBUG oslo_concurrency.lockutils [None req-e23aef5a-4bea-4cb3-a0f1-fa7fbcc4a0ff 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Acquired lock "refresh_cache-458c23a5-1bf5-4160-9265-1db326ecf321" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:24:16 compute-0 nova_compute[250018]: 2026-01-20 14:24:16.499 250022 DEBUG nova.network.neutron [None req-e23aef5a-4bea-4cb3-a0f1-fa7fbcc4a0ff 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] [instance: 458c23a5-1bf5-4160-9265-1db326ecf321] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:24:16 compute-0 sudo[259117]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:24:16 compute-0 sudo[259117]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:24:16 compute-0 sudo[259117]: pam_unix(sudo:session): session closed for user root
Jan 20 14:24:17 compute-0 nova_compute[250018]: 2026-01-20 14:24:17.024 250022 DEBUG nova.network.neutron [None req-e23aef5a-4bea-4cb3-a0f1-fa7fbcc4a0ff 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] [instance: 458c23a5-1bf5-4160-9265-1db326ecf321] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 14:24:17 compute-0 focused_proskuriakova[259087]: --> passed data devices: 0 physical, 1 LVM
Jan 20 14:24:17 compute-0 focused_proskuriakova[259087]: --> relative data size: 1.0
Jan 20 14:24:17 compute-0 focused_proskuriakova[259087]: --> All data devices are unavailable
Jan 20 14:24:17 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/4079105907' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:24:17 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/4176525062' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:24:17 compute-0 nova_compute[250018]: 2026-01-20 14:24:17.056 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "refresh_cache-458c23a5-1bf5-4160-9265-1db326ecf321" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:24:17 compute-0 systemd[1]: libpod-13e0beb0cf61c420b8d913242a622890663bffa191e95c09ace96829495f1f64.scope: Deactivated successfully.
Jan 20 14:24:17 compute-0 podman[259070]: 2026-01-20 14:24:17.074754158 +0000 UTC m=+1.039188681 container died 13e0beb0cf61c420b8d913242a622890663bffa191e95c09ace96829495f1f64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_proskuriakova, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Jan 20 14:24:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-43ce748aed6f567d1c968993a7f508494b29099ebd733041b365d4f086283605-merged.mount: Deactivated successfully.
Jan 20 14:24:17 compute-0 podman[259070]: 2026-01-20 14:24:17.132341889 +0000 UTC m=+1.096776402 container remove 13e0beb0cf61c420b8d913242a622890663bffa191e95c09ace96829495f1f64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_proskuriakova, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:24:17 compute-0 systemd[1]: libpod-conmon-13e0beb0cf61c420b8d913242a622890663bffa191e95c09ace96829495f1f64.scope: Deactivated successfully.
Jan 20 14:24:17 compute-0 sudo[258965]: pam_unix(sudo:session): session closed for user root
Jan 20 14:24:17 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v985: 321 pgs: 321 active+clean; 443 MiB data, 479 MiB used, 21 GiB / 21 GiB avail; 5.1 MiB/s rd, 8.6 MiB/s wr, 394 op/s
Jan 20 14:24:17 compute-0 sudo[259163]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:24:17 compute-0 sudo[259163]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:24:17 compute-0 sudo[259163]: pam_unix(sudo:session): session closed for user root
Jan 20 14:24:17 compute-0 sudo[259188]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:24:17 compute-0 sudo[259188]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:24:17 compute-0 sudo[259188]: pam_unix(sudo:session): session closed for user root
Jan 20 14:24:17 compute-0 nova_compute[250018]: 2026-01-20 14:24:17.279 250022 DEBUG nova.network.neutron [None req-e23aef5a-4bea-4cb3-a0f1-fa7fbcc4a0ff 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] [instance: 458c23a5-1bf5-4160-9265-1db326ecf321] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:24:17 compute-0 nova_compute[250018]: 2026-01-20 14:24:17.294 250022 DEBUG oslo_concurrency.lockutils [None req-e23aef5a-4bea-4cb3-a0f1-fa7fbcc4a0ff 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Releasing lock "refresh_cache-458c23a5-1bf5-4160-9265-1db326ecf321" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:24:17 compute-0 nova_compute[250018]: 2026-01-20 14:24:17.296 250022 DEBUG nova.compute.manager [None req-e23aef5a-4bea-4cb3-a0f1-fa7fbcc4a0ff 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] [instance: 458c23a5-1bf5-4160-9265-1db326ecf321] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 14:24:17 compute-0 nova_compute[250018]: 2026-01-20 14:24:17.296 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquired lock "refresh_cache-458c23a5-1bf5-4160-9265-1db326ecf321" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:24:17 compute-0 nova_compute[250018]: 2026-01-20 14:24:17.296 250022 DEBUG nova.network.neutron [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] [instance: 458c23a5-1bf5-4160-9265-1db326ecf321] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 20 14:24:17 compute-0 nova_compute[250018]: 2026-01-20 14:24:17.296 250022 DEBUG nova.objects.instance [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 458c23a5-1bf5-4160-9265-1db326ecf321 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:24:17 compute-0 sudo[259213]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:24:17 compute-0 sudo[259213]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:24:17 compute-0 sudo[259213]: pam_unix(sudo:session): session closed for user root
Jan 20 14:24:17 compute-0 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Deactivated successfully.
Jan 20 14:24:17 compute-0 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Consumed 17.447s CPU time.
Jan 20 14:24:17 compute-0 systemd-machined[216401]: Machine qemu-1-instance-00000001 terminated.
Jan 20 14:24:17 compute-0 sudo[259238]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- lvm list --format json
Jan 20 14:24:17 compute-0 sudo[259238]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:24:17 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:24:17 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:24:17 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:24:17.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:24:17 compute-0 nova_compute[250018]: 2026-01-20 14:24:17.449 250022 DEBUG nova.network.neutron [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] [instance: 458c23a5-1bf5-4160-9265-1db326ecf321] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 14:24:17 compute-0 nova_compute[250018]: 2026-01-20 14:24:17.519 250022 INFO nova.virt.libvirt.driver [-] [instance: 458c23a5-1bf5-4160-9265-1db326ecf321] Instance destroyed successfully.
Jan 20 14:24:17 compute-0 nova_compute[250018]: 2026-01-20 14:24:17.520 250022 DEBUG nova.objects.instance [None req-e23aef5a-4bea-4cb3-a0f1-fa7fbcc4a0ff 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Lazy-loading 'resources' on Instance uuid 458c23a5-1bf5-4160-9265-1db326ecf321 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:24:17 compute-0 podman[259322]: 2026-01-20 14:24:17.75868211 +0000 UTC m=+0.043642667 container create 8b18f49038bf77161c739ae99854bcb8222b822c6f713c29453a75e86d402ee0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_driscoll, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:24:17 compute-0 systemd[1]: Started libpod-conmon-8b18f49038bf77161c739ae99854bcb8222b822c6f713c29453a75e86d402ee0.scope.
Jan 20 14:24:17 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:24:17 compute-0 podman[259322]: 2026-01-20 14:24:17.741679383 +0000 UTC m=+0.026639920 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:24:18 compute-0 nova_compute[250018]: 2026-01-20 14:24:18.173 250022 DEBUG nova.network.neutron [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] [instance: 458c23a5-1bf5-4160-9265-1db326ecf321] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:24:18 compute-0 nova_compute[250018]: 2026-01-20 14:24:18.200 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Releasing lock "refresh_cache-458c23a5-1bf5-4160-9265-1db326ecf321" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:24:18 compute-0 nova_compute[250018]: 2026-01-20 14:24:18.200 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] [instance: 458c23a5-1bf5-4160-9265-1db326ecf321] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 20 14:24:18 compute-0 nova_compute[250018]: 2026-01-20 14:24:18.200 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:24:18 compute-0 nova_compute[250018]: 2026-01-20 14:24:18.201 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:24:18 compute-0 nova_compute[250018]: 2026-01-20 14:24:18.201 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 14:24:18 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:24:18 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:24:18 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:24:18.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:24:18 compute-0 podman[259322]: 2026-01-20 14:24:18.427244477 +0000 UTC m=+0.712205024 container init 8b18f49038bf77161c739ae99854bcb8222b822c6f713c29453a75e86d402ee0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_driscoll, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:24:18 compute-0 podman[259322]: 2026-01-20 14:24:18.436420578 +0000 UTC m=+0.721381095 container start 8b18f49038bf77161c739ae99854bcb8222b822c6f713c29453a75e86d402ee0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_driscoll, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:24:18 compute-0 podman[259322]: 2026-01-20 14:24:18.443059811 +0000 UTC m=+0.728020328 container attach 8b18f49038bf77161c739ae99854bcb8222b822c6f713c29453a75e86d402ee0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_driscoll, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:24:18 compute-0 gracious_driscoll[259338]: 167 167
Jan 20 14:24:18 compute-0 systemd[1]: libpod-8b18f49038bf77161c739ae99854bcb8222b822c6f713c29453a75e86d402ee0.scope: Deactivated successfully.
Jan 20 14:24:18 compute-0 ceph-mon[74360]: pgmap v985: 321 pgs: 321 active+clean; 443 MiB data, 479 MiB used, 21 GiB / 21 GiB avail; 5.1 MiB/s rd, 8.6 MiB/s wr, 394 op/s
Jan 20 14:24:18 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/2929435309' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:24:18 compute-0 podman[259322]: 2026-01-20 14:24:18.451142053 +0000 UTC m=+0.736102580 container died 8b18f49038bf77161c739ae99854bcb8222b822c6f713c29453a75e86d402ee0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_driscoll, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True)
Jan 20 14:24:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-d688e52f6e0508a8a641f75bba82f6a687d3c88f6ec7fe2d158d2c5e448679f6-merged.mount: Deactivated successfully.
Jan 20 14:24:18 compute-0 podman[259322]: 2026-01-20 14:24:18.496159934 +0000 UTC m=+0.781120451 container remove 8b18f49038bf77161c739ae99854bcb8222b822c6f713c29453a75e86d402ee0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_driscoll, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 20 14:24:18 compute-0 systemd[1]: libpod-conmon-8b18f49038bf77161c739ae99854bcb8222b822c6f713c29453a75e86d402ee0.scope: Deactivated successfully.
Jan 20 14:24:18 compute-0 nova_compute[250018]: 2026-01-20 14:24:18.666 250022 INFO nova.virt.libvirt.driver [None req-e23aef5a-4bea-4cb3-a0f1-fa7fbcc4a0ff 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] [instance: 458c23a5-1bf5-4160-9265-1db326ecf321] Deleting instance files /var/lib/nova/instances/458c23a5-1bf5-4160-9265-1db326ecf321_del
Jan 20 14:24:18 compute-0 nova_compute[250018]: 2026-01-20 14:24:18.667 250022 INFO nova.virt.libvirt.driver [None req-e23aef5a-4bea-4cb3-a0f1-fa7fbcc4a0ff 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] [instance: 458c23a5-1bf5-4160-9265-1db326ecf321] Deletion of /var/lib/nova/instances/458c23a5-1bf5-4160-9265-1db326ecf321_del complete
Jan 20 14:24:18 compute-0 podman[259361]: 2026-01-20 14:24:18.692792653 +0000 UTC m=+0.053840584 container create a2a37606792d08bc4d8df6d4cde4391620c5d4c32c349c320e97424287e19848 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_dirac, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 20 14:24:18 compute-0 nova_compute[250018]: 2026-01-20 14:24:18.733 250022 INFO nova.compute.manager [None req-e23aef5a-4bea-4cb3-a0f1-fa7fbcc4a0ff 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] [instance: 458c23a5-1bf5-4160-9265-1db326ecf321] Took 1.44 seconds to destroy the instance on the hypervisor.
Jan 20 14:24:18 compute-0 nova_compute[250018]: 2026-01-20 14:24:18.735 250022 DEBUG oslo.service.loopingcall [None req-e23aef5a-4bea-4cb3-a0f1-fa7fbcc4a0ff 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 14:24:18 compute-0 systemd[1]: Started libpod-conmon-a2a37606792d08bc4d8df6d4cde4391620c5d4c32c349c320e97424287e19848.scope.
Jan 20 14:24:18 compute-0 nova_compute[250018]: 2026-01-20 14:24:18.735 250022 DEBUG nova.compute.manager [-] [instance: 458c23a5-1bf5-4160-9265-1db326ecf321] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 14:24:18 compute-0 nova_compute[250018]: 2026-01-20 14:24:18.735 250022 DEBUG nova.network.neutron [-] [instance: 458c23a5-1bf5-4160-9265-1db326ecf321] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 14:24:18 compute-0 podman[259361]: 2026-01-20 14:24:18.665573938 +0000 UTC m=+0.026621879 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:24:18 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:24:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41ea55e62248e2cd80e9ead274f32583bb1bebcc171db0c2dfdfc23cfcc79cf4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:24:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41ea55e62248e2cd80e9ead274f32583bb1bebcc171db0c2dfdfc23cfcc79cf4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:24:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41ea55e62248e2cd80e9ead274f32583bb1bebcc171db0c2dfdfc23cfcc79cf4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:24:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41ea55e62248e2cd80e9ead274f32583bb1bebcc171db0c2dfdfc23cfcc79cf4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:24:18 compute-0 podman[259361]: 2026-01-20 14:24:18.785956677 +0000 UTC m=+0.147004598 container init a2a37606792d08bc4d8df6d4cde4391620c5d4c32c349c320e97424287e19848 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_dirac, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:24:18 compute-0 podman[259361]: 2026-01-20 14:24:18.793216107 +0000 UTC m=+0.154264028 container start a2a37606792d08bc4d8df6d4cde4391620c5d4c32c349c320e97424287e19848 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_dirac, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:24:18 compute-0 podman[259361]: 2026-01-20 14:24:18.796233547 +0000 UTC m=+0.157281488 container attach a2a37606792d08bc4d8df6d4cde4391620c5d4c32c349c320e97424287e19848 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_dirac, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef)
Jan 20 14:24:18 compute-0 nova_compute[250018]: 2026-01-20 14:24:18.957 250022 DEBUG nova.network.neutron [-] [instance: 458c23a5-1bf5-4160-9265-1db326ecf321] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 14:24:18 compute-0 nova_compute[250018]: 2026-01-20 14:24:18.980 250022 DEBUG nova.network.neutron [-] [instance: 458c23a5-1bf5-4160-9265-1db326ecf321] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:24:18 compute-0 nova_compute[250018]: 2026-01-20 14:24:18.998 250022 INFO nova.compute.manager [-] [instance: 458c23a5-1bf5-4160-9265-1db326ecf321] Took 0.26 seconds to deallocate network for instance.
Jan 20 14:24:19 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v986: 321 pgs: 321 active+clean; 437 MiB data, 475 MiB used, 21 GiB / 21 GiB avail; 4.4 MiB/s rd, 7.1 MiB/s wr, 327 op/s
Jan 20 14:24:19 compute-0 nova_compute[250018]: 2026-01-20 14:24:19.200 250022 DEBUG oslo_concurrency.lockutils [None req-e23aef5a-4bea-4cb3-a0f1-fa7fbcc4a0ff 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:24:19 compute-0 nova_compute[250018]: 2026-01-20 14:24:19.200 250022 DEBUG oslo_concurrency.lockutils [None req-e23aef5a-4bea-4cb3-a0f1-fa7fbcc4a0ff 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:24:19 compute-0 nova_compute[250018]: 2026-01-20 14:24:19.335 250022 DEBUG oslo_concurrency.processutils [None req-e23aef5a-4bea-4cb3-a0f1-fa7fbcc4a0ff 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:24:19 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:24:19 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:24:19 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:24:19.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:24:19 compute-0 vibrant_dirac[259378]: {
Jan 20 14:24:19 compute-0 vibrant_dirac[259378]:     "0": [
Jan 20 14:24:19 compute-0 vibrant_dirac[259378]:         {
Jan 20 14:24:19 compute-0 vibrant_dirac[259378]:             "devices": [
Jan 20 14:24:19 compute-0 vibrant_dirac[259378]:                 "/dev/loop3"
Jan 20 14:24:19 compute-0 vibrant_dirac[259378]:             ],
Jan 20 14:24:19 compute-0 vibrant_dirac[259378]:             "lv_name": "ceph_lv0",
Jan 20 14:24:19 compute-0 vibrant_dirac[259378]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:24:19 compute-0 vibrant_dirac[259378]:             "lv_size": "7511998464",
Jan 20 14:24:19 compute-0 vibrant_dirac[259378]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e399cf45-e6b6-5393-99f1-75c601d3f188,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afc3740a-ccee-46f8-b343-22648cc89376,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 20 14:24:19 compute-0 vibrant_dirac[259378]:             "lv_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 14:24:19 compute-0 vibrant_dirac[259378]:             "name": "ceph_lv0",
Jan 20 14:24:19 compute-0 vibrant_dirac[259378]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:24:19 compute-0 vibrant_dirac[259378]:             "tags": {
Jan 20 14:24:19 compute-0 vibrant_dirac[259378]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:24:19 compute-0 vibrant_dirac[259378]:                 "ceph.block_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 14:24:19 compute-0 vibrant_dirac[259378]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 14:24:19 compute-0 vibrant_dirac[259378]:                 "ceph.cluster_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 14:24:19 compute-0 vibrant_dirac[259378]:                 "ceph.cluster_name": "ceph",
Jan 20 14:24:19 compute-0 vibrant_dirac[259378]:                 "ceph.crush_device_class": "",
Jan 20 14:24:19 compute-0 vibrant_dirac[259378]:                 "ceph.encrypted": "0",
Jan 20 14:24:19 compute-0 vibrant_dirac[259378]:                 "ceph.osd_fsid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 14:24:19 compute-0 vibrant_dirac[259378]:                 "ceph.osd_id": "0",
Jan 20 14:24:19 compute-0 vibrant_dirac[259378]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 14:24:19 compute-0 vibrant_dirac[259378]:                 "ceph.type": "block",
Jan 20 14:24:19 compute-0 vibrant_dirac[259378]:                 "ceph.vdo": "0"
Jan 20 14:24:19 compute-0 vibrant_dirac[259378]:             },
Jan 20 14:24:19 compute-0 vibrant_dirac[259378]:             "type": "block",
Jan 20 14:24:19 compute-0 vibrant_dirac[259378]:             "vg_name": "ceph_vg0"
Jan 20 14:24:19 compute-0 vibrant_dirac[259378]:         }
Jan 20 14:24:19 compute-0 vibrant_dirac[259378]:     ]
Jan 20 14:24:19 compute-0 vibrant_dirac[259378]: }
Jan 20 14:24:19 compute-0 systemd[1]: libpod-a2a37606792d08bc4d8df6d4cde4391620c5d4c32c349c320e97424287e19848.scope: Deactivated successfully.
Jan 20 14:24:19 compute-0 podman[259361]: 2026-01-20 14:24:19.570169528 +0000 UTC m=+0.931217469 container died a2a37606792d08bc4d8df6d4cde4391620c5d4c32c349c320e97424287e19848 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_dirac, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:24:19 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:24:19 compute-0 nova_compute[250018]: 2026-01-20 14:24:19.895 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:24:19 compute-0 nova_compute[250018]: 2026-01-20 14:24:19.918 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:24:19 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:24:19 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3335376661' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:24:19 compute-0 nova_compute[250018]: 2026-01-20 14:24:19.997 250022 DEBUG oslo_concurrency.processutils [None req-e23aef5a-4bea-4cb3-a0f1-fa7fbcc4a0ff 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.662s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:24:20 compute-0 nova_compute[250018]: 2026-01-20 14:24:20.006 250022 DEBUG nova.compute.provider_tree [None req-e23aef5a-4bea-4cb3-a0f1-fa7fbcc4a0ff 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:24:20 compute-0 nova_compute[250018]: 2026-01-20 14:24:20.088 250022 DEBUG nova.scheduler.client.report [None req-e23aef5a-4bea-4cb3-a0f1-fa7fbcc4a0ff 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:24:20 compute-0 nova_compute[250018]: 2026-01-20 14:24:20.123 250022 DEBUG oslo_concurrency.lockutils [None req-e23aef5a-4bea-4cb3-a0f1-fa7fbcc4a0ff 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.923s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:24:20 compute-0 nova_compute[250018]: 2026-01-20 14:24:20.157 250022 INFO nova.scheduler.client.report [None req-e23aef5a-4bea-4cb3-a0f1-fa7fbcc4a0ff 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Deleted allocations for instance 458c23a5-1bf5-4160-9265-1db326ecf321
Jan 20 14:24:20 compute-0 nova_compute[250018]: 2026-01-20 14:24:20.224 250022 DEBUG oslo_concurrency.lockutils [None req-e23aef5a-4bea-4cb3-a0f1-fa7fbcc4a0ff 918f290d4c414b71807eacf0b27ad165 e024eef627014f829fa6e45ffe36c281 - - default default] Lock "458c23a5-1bf5-4160-9265-1db326ecf321" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.735s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:24:20 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/2525738690' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:24:20 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/2525738690' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:24:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-41ea55e62248e2cd80e9ead274f32583bb1bebcc171db0c2dfdfc23cfcc79cf4-merged.mount: Deactivated successfully.
Jan 20 14:24:20 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:24:20 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:24:20 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:24:20.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:24:20 compute-0 podman[259361]: 2026-01-20 14:24:20.379874739 +0000 UTC m=+1.740922680 container remove a2a37606792d08bc4d8df6d4cde4391620c5d4c32c349c320e97424287e19848 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_dirac, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 20 14:24:20 compute-0 sudo[259238]: pam_unix(sudo:session): session closed for user root
Jan 20 14:24:20 compute-0 systemd[1]: libpod-conmon-a2a37606792d08bc4d8df6d4cde4391620c5d4c32c349c320e97424287e19848.scope: Deactivated successfully.
Jan 20 14:24:20 compute-0 sudo[259420]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:24:20 compute-0 sudo[259420]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:24:20 compute-0 sudo[259420]: pam_unix(sudo:session): session closed for user root
Jan 20 14:24:20 compute-0 sudo[259445]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:24:20 compute-0 sudo[259445]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:24:20 compute-0 sudo[259445]: pam_unix(sudo:session): session closed for user root
Jan 20 14:24:20 compute-0 sudo[259470]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:24:20 compute-0 sudo[259470]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:24:20 compute-0 sudo[259470]: pam_unix(sudo:session): session closed for user root
Jan 20 14:24:20 compute-0 sudo[259495]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- raw list --format json
Jan 20 14:24:20 compute-0 sudo[259495]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:24:21 compute-0 podman[259560]: 2026-01-20 14:24:21.017003232 +0000 UTC m=+0.066269360 container create c6a5dab18d61c7ddf3bfde79f62d7f66df3ab63253291c2414210b5960cc1f77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_cerf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:24:21 compute-0 podman[259560]: 2026-01-20 14:24:20.988298269 +0000 UTC m=+0.037564417 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:24:21 compute-0 systemd[1]: Started libpod-conmon-c6a5dab18d61c7ddf3bfde79f62d7f66df3ab63253291c2414210b5960cc1f77.scope.
Jan 20 14:24:21 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:24:21 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v987: 321 pgs: 321 active+clean; 397 MiB data, 473 MiB used, 21 GiB / 21 GiB avail; 4.3 MiB/s rd, 5.3 MiB/s wr, 317 op/s
Jan 20 14:24:21 compute-0 podman[259560]: 2026-01-20 14:24:21.261005463 +0000 UTC m=+0.310271691 container init c6a5dab18d61c7ddf3bfde79f62d7f66df3ab63253291c2414210b5960cc1f77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_cerf, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 14:24:21 compute-0 podman[259560]: 2026-01-20 14:24:21.269864415 +0000 UTC m=+0.319130583 container start c6a5dab18d61c7ddf3bfde79f62d7f66df3ab63253291c2414210b5960cc1f77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_cerf, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 20 14:24:21 compute-0 heuristic_cerf[259576]: 167 167
Jan 20 14:24:21 compute-0 systemd[1]: libpod-c6a5dab18d61c7ddf3bfde79f62d7f66df3ab63253291c2414210b5960cc1f77.scope: Deactivated successfully.
Jan 20 14:24:21 compute-0 ceph-mon[74360]: pgmap v986: 321 pgs: 321 active+clean; 437 MiB data, 475 MiB used, 21 GiB / 21 GiB avail; 4.4 MiB/s rd, 7.1 MiB/s wr, 327 op/s
Jan 20 14:24:21 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3335376661' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:24:21 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/2815343532' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:24:21 compute-0 podman[259560]: 2026-01-20 14:24:21.28759581 +0000 UTC m=+0.336861968 container attach c6a5dab18d61c7ddf3bfde79f62d7f66df3ab63253291c2414210b5960cc1f77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_cerf, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:24:21 compute-0 podman[259560]: 2026-01-20 14:24:21.288521825 +0000 UTC m=+0.337787953 container died c6a5dab18d61c7ddf3bfde79f62d7f66df3ab63253291c2414210b5960cc1f77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_cerf, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:24:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-d308b4bc32fa7133bef576d061dd19a63164ebebe2a3c29b3b77d2a6efd167f7-merged.mount: Deactivated successfully.
Jan 20 14:24:21 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:24:21 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 20 14:24:21 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:24:21.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 20 14:24:21 compute-0 podman[259560]: 2026-01-20 14:24:21.481449475 +0000 UTC m=+0.530715613 container remove c6a5dab18d61c7ddf3bfde79f62d7f66df3ab63253291c2414210b5960cc1f77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_cerf, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 20 14:24:21 compute-0 systemd[1]: libpod-conmon-c6a5dab18d61c7ddf3bfde79f62d7f66df3ab63253291c2414210b5960cc1f77.scope: Deactivated successfully.
Jan 20 14:24:21 compute-0 podman[259599]: 2026-01-20 14:24:21.690002046 +0000 UTC m=+0.043493842 container create 39a1b758c65b0dbabf48936285c1ec6640506040b8f6fe076ffd1615e81ec76f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_thompson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 20 14:24:21 compute-0 systemd[1]: Started libpod-conmon-39a1b758c65b0dbabf48936285c1ec6640506040b8f6fe076ffd1615e81ec76f.scope.
Jan 20 14:24:21 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:24:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/242210be8e26bf96ec5bc4fab8ad7c5e9338d891ad1a2477665ffc7f85da2682/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:24:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/242210be8e26bf96ec5bc4fab8ad7c5e9338d891ad1a2477665ffc7f85da2682/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:24:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/242210be8e26bf96ec5bc4fab8ad7c5e9338d891ad1a2477665ffc7f85da2682/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:24:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/242210be8e26bf96ec5bc4fab8ad7c5e9338d891ad1a2477665ffc7f85da2682/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:24:21 compute-0 podman[259599]: 2026-01-20 14:24:21.672425455 +0000 UTC m=+0.025917271 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:24:21 compute-0 podman[259599]: 2026-01-20 14:24:21.779812791 +0000 UTC m=+0.133304607 container init 39a1b758c65b0dbabf48936285c1ec6640506040b8f6fe076ffd1615e81ec76f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_thompson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 20 14:24:21 compute-0 podman[259599]: 2026-01-20 14:24:21.786556429 +0000 UTC m=+0.140048225 container start 39a1b758c65b0dbabf48936285c1ec6640506040b8f6fe076ffd1615e81ec76f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_thompson, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:24:21 compute-0 podman[259599]: 2026-01-20 14:24:21.789425144 +0000 UTC m=+0.142916970 container attach 39a1b758c65b0dbabf48936285c1ec6640506040b8f6fe076ffd1615e81ec76f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_thompson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 20 14:24:22 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:24:22 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:24:22 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:24:22.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:24:22 compute-0 ceph-mon[74360]: pgmap v987: 321 pgs: 321 active+clean; 397 MiB data, 473 MiB used, 21 GiB / 21 GiB avail; 4.3 MiB/s rd, 5.3 MiB/s wr, 317 op/s
Jan 20 14:24:22 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/131837199' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:24:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:24:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:24:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:24:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:24:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:24:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:24:22 compute-0 romantic_thompson[259616]: {
Jan 20 14:24:22 compute-0 romantic_thompson[259616]:     "afc3740a-ccee-46f8-b343-22648cc89376": {
Jan 20 14:24:22 compute-0 romantic_thompson[259616]:         "ceph_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 14:24:22 compute-0 romantic_thompson[259616]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 20 14:24:22 compute-0 romantic_thompson[259616]:         "osd_id": 0,
Jan 20 14:24:22 compute-0 romantic_thompson[259616]:         "osd_uuid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 14:24:22 compute-0 romantic_thompson[259616]:         "type": "bluestore"
Jan 20 14:24:22 compute-0 romantic_thompson[259616]:     }
Jan 20 14:24:22 compute-0 romantic_thompson[259616]: }
Jan 20 14:24:22 compute-0 systemd[1]: libpod-39a1b758c65b0dbabf48936285c1ec6640506040b8f6fe076ffd1615e81ec76f.scope: Deactivated successfully.
Jan 20 14:24:22 compute-0 podman[259599]: 2026-01-20 14:24:22.651340144 +0000 UTC m=+1.004831970 container died 39a1b758c65b0dbabf48936285c1ec6640506040b8f6fe076ffd1615e81ec76f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_thompson, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 20 14:24:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-242210be8e26bf96ec5bc4fab8ad7c5e9338d891ad1a2477665ffc7f85da2682-merged.mount: Deactivated successfully.
Jan 20 14:24:22 compute-0 podman[259599]: 2026-01-20 14:24:22.709124769 +0000 UTC m=+1.062616575 container remove 39a1b758c65b0dbabf48936285c1ec6640506040b8f6fe076ffd1615e81ec76f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_thompson, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:24:22 compute-0 systemd[1]: libpod-conmon-39a1b758c65b0dbabf48936285c1ec6640506040b8f6fe076ffd1615e81ec76f.scope: Deactivated successfully.
Jan 20 14:24:22 compute-0 sudo[259495]: pam_unix(sudo:session): session closed for user root
Jan 20 14:24:22 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 14:24:22 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:24:22 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 14:24:22 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:24:22 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev ef9f03ab-a9c4-4696-b001-26fc64922eac does not exist
Jan 20 14:24:22 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 9dbb246e-189d-4b36-ad1e-10741234795b does not exist
Jan 20 14:24:22 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 858d5c38-89fd-4b5a-9766-51cc4f536ad3 does not exist
Jan 20 14:24:22 compute-0 sudo[259650]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:24:22 compute-0 sudo[259650]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:24:22 compute-0 sudo[259650]: pam_unix(sudo:session): session closed for user root
Jan 20 14:24:22 compute-0 sudo[259675]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 14:24:22 compute-0 sudo[259675]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:24:22 compute-0 sudo[259675]: pam_unix(sudo:session): session closed for user root
Jan 20 14:24:23 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v988: 321 pgs: 321 active+clean; 323 MiB data, 429 MiB used, 21 GiB / 21 GiB avail; 4.3 MiB/s rd, 3.9 MiB/s wr, 295 op/s
Jan 20 14:24:23 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/3277269713' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:24:23 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/2895502152' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:24:23 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:24:23 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:24:23 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:24:23 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:24:23 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:24:23.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:24:24 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:24:24 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:24:24 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:24:24.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:24:24 compute-0 ceph-mon[74360]: pgmap v988: 321 pgs: 321 active+clean; 323 MiB data, 429 MiB used, 21 GiB / 21 GiB avail; 4.3 MiB/s rd, 3.9 MiB/s wr, 295 op/s
Jan 20 14:24:24 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:24:24 compute-0 nova_compute[250018]: 2026-01-20 14:24:24.896 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:24:24 compute-0 nova_compute[250018]: 2026-01-20 14:24:24.919 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:24:25 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v989: 321 pgs: 321 active+clean; 293 MiB data, 412 MiB used, 21 GiB / 21 GiB avail; 4.2 MiB/s rd, 2.7 MiB/s wr, 279 op/s
Jan 20 14:24:25 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/3731601468' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:24:25 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:24:25 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:24:25 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:24:25.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:24:25 compute-0 podman[259702]: 2026-01-20 14:24:25.474166132 +0000 UTC m=+0.062988483 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent)
Jan 20 14:24:25 compute-0 podman[259701]: 2026-01-20 14:24:25.510331131 +0000 UTC m=+0.099153542 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, 
org.label-schema.vendor=CentOS, config_id=ovn_controller)
Jan 20 14:24:25 compute-0 nova_compute[250018]: 2026-01-20 14:24:25.705 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:24:25 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:24:25.705 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:24:25 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:24:25.707 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 14:24:26 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:24:26 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:24:26 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:24:26.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:24:26 compute-0 ceph-mon[74360]: pgmap v989: 321 pgs: 321 active+clean; 293 MiB data, 412 MiB used, 21 GiB / 21 GiB avail; 4.2 MiB/s rd, 2.7 MiB/s wr, 279 op/s
Jan 20 14:24:26 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/2632341963' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:24:26 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/3761145659' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:24:26 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/2591375473' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:24:26 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/3426931299' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:24:27 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v990: 321 pgs: 321 active+clean; 408 MiB data, 451 MiB used, 21 GiB / 21 GiB avail; 4.3 MiB/s rd, 7.5 MiB/s wr, 351 op/s
Jan 20 14:24:27 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:24:27 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:24:27 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:24:27.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:24:28 compute-0 ceph-mon[74360]: pgmap v990: 321 pgs: 321 active+clean; 408 MiB data, 451 MiB used, 21 GiB / 21 GiB avail; 4.3 MiB/s rd, 7.5 MiB/s wr, 351 op/s
Jan 20 14:24:28 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:24:28 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:24:28 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:24:28.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:24:29 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v991: 321 pgs: 321 active+clean; 422 MiB data, 457 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 6.7 MiB/s wr, 238 op/s
Jan 20 14:24:29 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:24:29 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:24:29 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:24:29.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:24:29 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:24:29 compute-0 nova_compute[250018]: 2026-01-20 14:24:29.899 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:24:29 compute-0 nova_compute[250018]: 2026-01-20 14:24:29.920 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:24:30 compute-0 ceph-mon[74360]: pgmap v991: 321 pgs: 321 active+clean; 422 MiB data, 457 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 6.7 MiB/s wr, 238 op/s
Jan 20 14:24:30 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:24:30 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:24:30 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:24:30.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:24:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:24:30.740 160071 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:24:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:24:30.741 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:24:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:24:30.742 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:24:31 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v992: 321 pgs: 321 active+clean; 435 MiB data, 465 MiB used, 21 GiB / 21 GiB avail; 3.7 MiB/s rd, 6.7 MiB/s wr, 275 op/s
Jan 20 14:24:31 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:24:31 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:24:31 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:24:31.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:24:32 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:24:32 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:24:32 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:24:32.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:24:32 compute-0 nova_compute[250018]: 2026-01-20 14:24:32.517 250022 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768919057.5156524, 458c23a5-1bf5-4160-9265-1db326ecf321 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:24:32 compute-0 nova_compute[250018]: 2026-01-20 14:24:32.517 250022 INFO nova.compute.manager [-] [instance: 458c23a5-1bf5-4160-9265-1db326ecf321] VM Stopped (Lifecycle Event)
Jan 20 14:24:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:24:32.709 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=367c1a2c-b16a-4828-ab5a-626bb50023b4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:24:32 compute-0 nova_compute[250018]: 2026-01-20 14:24:32.739 250022 DEBUG nova.compute.manager [None req-ff53df3b-4366-4cbc-a249-9cbf8e2d7cc7 - - - - - -] [instance: 458c23a5-1bf5-4160-9265-1db326ecf321] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:24:33 compute-0 ceph-mon[74360]: pgmap v992: 321 pgs: 321 active+clean; 435 MiB data, 465 MiB used, 21 GiB / 21 GiB avail; 3.7 MiB/s rd, 6.7 MiB/s wr, 275 op/s
Jan 20 14:24:33 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v993: 321 pgs: 321 active+clean; 451 MiB data, 479 MiB used, 21 GiB / 21 GiB avail; 6.0 MiB/s rd, 7.8 MiB/s wr, 345 op/s
Jan 20 14:24:33 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:24:33 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 20 14:24:33 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:24:33.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 20 14:24:34 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:24:34 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:24:34 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:24:34.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:24:34 compute-0 ceph-mon[74360]: pgmap v993: 321 pgs: 321 active+clean; 451 MiB data, 479 MiB used, 21 GiB / 21 GiB avail; 6.0 MiB/s rd, 7.8 MiB/s wr, 345 op/s
Jan 20 14:24:34 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:24:34 compute-0 nova_compute[250018]: 2026-01-20 14:24:34.900 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:24:34 compute-0 nova_compute[250018]: 2026-01-20 14:24:34.921 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:24:35 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v994: 321 pgs: 321 active+clean; 451 MiB data, 484 MiB used, 21 GiB / 21 GiB avail; 6.2 MiB/s rd, 7.8 MiB/s wr, 357 op/s
Jan 20 14:24:35 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:24:35 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:24:35 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:24:35.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:24:35 compute-0 ceph-mon[74360]: pgmap v994: 321 pgs: 321 active+clean; 451 MiB data, 484 MiB used, 21 GiB / 21 GiB avail; 6.2 MiB/s rd, 7.8 MiB/s wr, 357 op/s
Jan 20 14:24:36 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:24:36 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:24:36 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:24:36.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:24:36 compute-0 sudo[259751]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:24:36 compute-0 sudo[259751]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:24:36 compute-0 sudo[259751]: pam_unix(sudo:session): session closed for user root
Jan 20 14:24:36 compute-0 sudo[259777]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:24:36 compute-0 sudo[259777]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:24:36 compute-0 sudo[259777]: pam_unix(sudo:session): session closed for user root
Jan 20 14:24:37 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v995: 321 pgs: 321 active+clean; 451 MiB data, 484 MiB used, 21 GiB / 21 GiB avail; 6.2 MiB/s rd, 7.9 MiB/s wr, 339 op/s
Jan 20 14:24:37 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:24:37 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:24:37 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:24:37.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:24:38 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:24:38 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:24:38 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:24:38.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:24:38 compute-0 ceph-mon[74360]: pgmap v995: 321 pgs: 321 active+clean; 451 MiB data, 484 MiB used, 21 GiB / 21 GiB avail; 6.2 MiB/s rd, 7.9 MiB/s wr, 339 op/s
Jan 20 14:24:39 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v996: 321 pgs: 321 active+clean; 451 MiB data, 484 MiB used, 21 GiB / 21 GiB avail; 6.0 MiB/s rd, 2.5 MiB/s wr, 253 op/s
Jan 20 14:24:39 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:24:39 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:24:39 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:24:39.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:24:39 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:24:39 compute-0 ceph-mon[74360]: pgmap v996: 321 pgs: 321 active+clean; 451 MiB data, 484 MiB used, 21 GiB / 21 GiB avail; 6.0 MiB/s rd, 2.5 MiB/s wr, 253 op/s
Jan 20 14:24:39 compute-0 nova_compute[250018]: 2026-01-20 14:24:39.903 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:24:39 compute-0 nova_compute[250018]: 2026-01-20 14:24:39.923 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:24:40 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:24:40 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:24:40 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:24:40.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:24:41 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v997: 321 pgs: 321 active+clean; 437 MiB data, 475 MiB used, 21 GiB / 21 GiB avail; 5.9 MiB/s rd, 2.3 MiB/s wr, 252 op/s
Jan 20 14:24:41 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:24:41 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:24:41 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:24:41.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:24:42 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:24:42 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:24:42 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:24:42.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:24:42 compute-0 ceph-mon[74360]: pgmap v997: 321 pgs: 321 active+clean; 437 MiB data, 475 MiB used, 21 GiB / 21 GiB avail; 5.9 MiB/s rd, 2.3 MiB/s wr, 252 op/s
Jan 20 14:24:43 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v998: 321 pgs: 321 active+clean; 392 MiB data, 454 MiB used, 21 GiB / 21 GiB avail; 2.6 MiB/s rd, 2.1 MiB/s wr, 202 op/s
Jan 20 14:24:43 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:24:43 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:24:43 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:24:43.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:24:43 compute-0 ceph-mon[74360]: pgmap v998: 321 pgs: 321 active+clean; 392 MiB data, 454 MiB used, 21 GiB / 21 GiB avail; 2.6 MiB/s rd, 2.1 MiB/s wr, 202 op/s
Jan 20 14:24:44 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:24:44 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 20 14:24:44 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:24:44.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 20 14:24:44 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:24:44 compute-0 nova_compute[250018]: 2026-01-20 14:24:44.905 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:24:44 compute-0 nova_compute[250018]: 2026-01-20 14:24:44.925 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:24:44 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/2226425318' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:24:45 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v999: 321 pgs: 321 active+clean; 405 MiB data, 464 MiB used, 21 GiB / 21 GiB avail; 295 KiB/s rd, 1.8 MiB/s wr, 111 op/s
Jan 20 14:24:45 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:24:45 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 20 14:24:45 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:24:45.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 20 14:24:45 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/1638211528' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:24:45 compute-0 ceph-mon[74360]: pgmap v999: 321 pgs: 321 active+clean; 405 MiB data, 464 MiB used, 21 GiB / 21 GiB avail; 295 KiB/s rd, 1.8 MiB/s wr, 111 op/s
Jan 20 14:24:46 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:24:46 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 20 14:24:46 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:24:46.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 20 14:24:46 compute-0 nova_compute[250018]: 2026-01-20 14:24:46.994 250022 DEBUG oslo_concurrency.lockutils [None req-72bae494-334a-434e-98eb-87f17c5de862 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Acquiring lock "0f93511b-eba1-4b09-94ce-051ac10117ce" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:24:46 compute-0 nova_compute[250018]: 2026-01-20 14:24:46.995 250022 DEBUG oslo_concurrency.lockutils [None req-72bae494-334a-434e-98eb-87f17c5de862 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Lock "0f93511b-eba1-4b09-94ce-051ac10117ce" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:24:46 compute-0 nova_compute[250018]: 2026-01-20 14:24:46.995 250022 DEBUG oslo_concurrency.lockutils [None req-72bae494-334a-434e-98eb-87f17c5de862 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Acquiring lock "0f93511b-eba1-4b09-94ce-051ac10117ce-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:24:46 compute-0 nova_compute[250018]: 2026-01-20 14:24:46.995 250022 DEBUG oslo_concurrency.lockutils [None req-72bae494-334a-434e-98eb-87f17c5de862 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Lock "0f93511b-eba1-4b09-94ce-051ac10117ce-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:24:46 compute-0 nova_compute[250018]: 2026-01-20 14:24:46.996 250022 DEBUG oslo_concurrency.lockutils [None req-72bae494-334a-434e-98eb-87f17c5de862 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Lock "0f93511b-eba1-4b09-94ce-051ac10117ce-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:24:46 compute-0 nova_compute[250018]: 2026-01-20 14:24:46.997 250022 INFO nova.compute.manager [None req-72bae494-334a-434e-98eb-87f17c5de862 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: 0f93511b-eba1-4b09-94ce-051ac10117ce] Terminating instance
Jan 20 14:24:46 compute-0 nova_compute[250018]: 2026-01-20 14:24:46.998 250022 DEBUG oslo_concurrency.lockutils [None req-72bae494-334a-434e-98eb-87f17c5de862 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Acquiring lock "refresh_cache-0f93511b-eba1-4b09-94ce-051ac10117ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:24:46 compute-0 nova_compute[250018]: 2026-01-20 14:24:46.998 250022 DEBUG oslo_concurrency.lockutils [None req-72bae494-334a-434e-98eb-87f17c5de862 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Acquired lock "refresh_cache-0f93511b-eba1-4b09-94ce-051ac10117ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:24:46 compute-0 nova_compute[250018]: 2026-01-20 14:24:46.999 250022 DEBUG nova.network.neutron [None req-72bae494-334a-434e-98eb-87f17c5de862 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: 0f93511b-eba1-4b09-94ce-051ac10117ce] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:24:47 compute-0 nova_compute[250018]: 2026-01-20 14:24:47.165 250022 DEBUG oslo_concurrency.lockutils [None req-5387e59d-5b95-4c5c-b991-41131f4ca30f 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Acquiring lock "eaa38f5d-6564-47d4-b7c7-261945710681" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:24:47 compute-0 nova_compute[250018]: 2026-01-20 14:24:47.165 250022 DEBUG oslo_concurrency.lockutils [None req-5387e59d-5b95-4c5c-b991-41131f4ca30f 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Lock "eaa38f5d-6564-47d4-b7c7-261945710681" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:24:47 compute-0 nova_compute[250018]: 2026-01-20 14:24:47.165 250022 DEBUG oslo_concurrency.lockutils [None req-5387e59d-5b95-4c5c-b991-41131f4ca30f 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Acquiring lock "eaa38f5d-6564-47d4-b7c7-261945710681-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:24:47 compute-0 nova_compute[250018]: 2026-01-20 14:24:47.166 250022 DEBUG oslo_concurrency.lockutils [None req-5387e59d-5b95-4c5c-b991-41131f4ca30f 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Lock "eaa38f5d-6564-47d4-b7c7-261945710681-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:24:47 compute-0 nova_compute[250018]: 2026-01-20 14:24:47.166 250022 DEBUG oslo_concurrency.lockutils [None req-5387e59d-5b95-4c5c-b991-41131f4ca30f 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Lock "eaa38f5d-6564-47d4-b7c7-261945710681-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:24:47 compute-0 nova_compute[250018]: 2026-01-20 14:24:47.167 250022 INFO nova.compute.manager [None req-5387e59d-5b95-4c5c-b991-41131f4ca30f 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: eaa38f5d-6564-47d4-b7c7-261945710681] Terminating instance
Jan 20 14:24:47 compute-0 nova_compute[250018]: 2026-01-20 14:24:47.167 250022 DEBUG oslo_concurrency.lockutils [None req-5387e59d-5b95-4c5c-b991-41131f4ca30f 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Acquiring lock "refresh_cache-eaa38f5d-6564-47d4-b7c7-261945710681" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:24:47 compute-0 nova_compute[250018]: 2026-01-20 14:24:47.167 250022 DEBUG oslo_concurrency.lockutils [None req-5387e59d-5b95-4c5c-b991-41131f4ca30f 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Acquired lock "refresh_cache-eaa38f5d-6564-47d4-b7c7-261945710681" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:24:47 compute-0 nova_compute[250018]: 2026-01-20 14:24:47.168 250022 DEBUG nova.network.neutron [None req-5387e59d-5b95-4c5c-b991-41131f4ca30f 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: eaa38f5d-6564-47d4-b7c7-261945710681] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:24:47 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1000: 321 pgs: 321 active+clean; 405 MiB data, 464 MiB used, 21 GiB / 21 GiB avail; 59 KiB/s rd, 1.8 MiB/s wr, 88 op/s
Jan 20 14:24:47 compute-0 nova_compute[250018]: 2026-01-20 14:24:47.198 250022 DEBUG nova.network.neutron [None req-72bae494-334a-434e-98eb-87f17c5de862 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: 0f93511b-eba1-4b09-94ce-051ac10117ce] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 14:24:47 compute-0 nova_compute[250018]: 2026-01-20 14:24:47.381 250022 DEBUG nova.network.neutron [None req-5387e59d-5b95-4c5c-b991-41131f4ca30f 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: eaa38f5d-6564-47d4-b7c7-261945710681] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 14:24:47 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:24:47 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:24:47 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:24:47.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:24:47 compute-0 nova_compute[250018]: 2026-01-20 14:24:47.559 250022 DEBUG nova.network.neutron [None req-72bae494-334a-434e-98eb-87f17c5de862 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: 0f93511b-eba1-4b09-94ce-051ac10117ce] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:24:47 compute-0 nova_compute[250018]: 2026-01-20 14:24:47.579 250022 DEBUG oslo_concurrency.lockutils [None req-72bae494-334a-434e-98eb-87f17c5de862 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Releasing lock "refresh_cache-0f93511b-eba1-4b09-94ce-051ac10117ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:24:47 compute-0 nova_compute[250018]: 2026-01-20 14:24:47.581 250022 DEBUG nova.compute.manager [None req-72bae494-334a-434e-98eb-87f17c5de862 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: 0f93511b-eba1-4b09-94ce-051ac10117ce] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 14:24:47 compute-0 nova_compute[250018]: 2026-01-20 14:24:47.754 250022 DEBUG nova.network.neutron [None req-5387e59d-5b95-4c5c-b991-41131f4ca30f 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: eaa38f5d-6564-47d4-b7c7-261945710681] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:24:47 compute-0 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000008.scope: Deactivated successfully.
Jan 20 14:24:47 compute-0 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000008.scope: Consumed 15.059s CPU time.
Jan 20 14:24:47 compute-0 systemd-machined[216401]: Machine qemu-3-instance-00000008 terminated.
Jan 20 14:24:47 compute-0 nova_compute[250018]: 2026-01-20 14:24:47.811 250022 INFO nova.virt.libvirt.driver [-] [instance: 0f93511b-eba1-4b09-94ce-051ac10117ce] Instance destroyed successfully.
Jan 20 14:24:47 compute-0 nova_compute[250018]: 2026-01-20 14:24:47.811 250022 DEBUG nova.objects.instance [None req-72bae494-334a-434e-98eb-87f17c5de862 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Lazy-loading 'resources' on Instance uuid 0f93511b-eba1-4b09-94ce-051ac10117ce obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:24:48 compute-0 nova_compute[250018]: 2026-01-20 14:24:48.208 250022 DEBUG oslo_concurrency.lockutils [None req-5387e59d-5b95-4c5c-b991-41131f4ca30f 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Releasing lock "refresh_cache-eaa38f5d-6564-47d4-b7c7-261945710681" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:24:48 compute-0 nova_compute[250018]: 2026-01-20 14:24:48.208 250022 DEBUG nova.compute.manager [None req-5387e59d-5b95-4c5c-b991-41131f4ca30f 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: eaa38f5d-6564-47d4-b7c7-261945710681] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 14:24:48 compute-0 ceph-mon[74360]: pgmap v1000: 321 pgs: 321 active+clean; 405 MiB data, 464 MiB used, 21 GiB / 21 GiB avail; 59 KiB/s rd, 1.8 MiB/s wr, 88 op/s
Jan 20 14:24:48 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:24:48 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 20 14:24:48 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:24:48.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 20 14:24:48 compute-0 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000009.scope: Deactivated successfully.
Jan 20 14:24:48 compute-0 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000009.scope: Consumed 16.070s CPU time.
Jan 20 14:24:48 compute-0 systemd-machined[216401]: Machine qemu-4-instance-00000009 terminated.
Jan 20 14:24:48 compute-0 nova_compute[250018]: 2026-01-20 14:24:48.436 250022 INFO nova.virt.libvirt.driver [-] [instance: eaa38f5d-6564-47d4-b7c7-261945710681] Instance destroyed successfully.
Jan 20 14:24:48 compute-0 nova_compute[250018]: 2026-01-20 14:24:48.437 250022 DEBUG nova.objects.instance [None req-5387e59d-5b95-4c5c-b991-41131f4ca30f 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Lazy-loading 'resources' on Instance uuid eaa38f5d-6564-47d4-b7c7-261945710681 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:24:49 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1001: 321 pgs: 321 active+clean; 397 MiB data, 457 MiB used, 21 GiB / 21 GiB avail; 66 KiB/s rd, 1.8 MiB/s wr, 96 op/s
Jan 20 14:24:49 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:24:49 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:24:49 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:24:49.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:24:49 compute-0 nova_compute[250018]: 2026-01-20 14:24:49.563 250022 INFO nova.virt.libvirt.driver [None req-72bae494-334a-434e-98eb-87f17c5de862 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: 0f93511b-eba1-4b09-94ce-051ac10117ce] Deleting instance files /var/lib/nova/instances/0f93511b-eba1-4b09-94ce-051ac10117ce_del
Jan 20 14:24:49 compute-0 nova_compute[250018]: 2026-01-20 14:24:49.565 250022 INFO nova.virt.libvirt.driver [None req-72bae494-334a-434e-98eb-87f17c5de862 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: 0f93511b-eba1-4b09-94ce-051ac10117ce] Deletion of /var/lib/nova/instances/0f93511b-eba1-4b09-94ce-051ac10117ce_del complete
Jan 20 14:24:49 compute-0 nova_compute[250018]: 2026-01-20 14:24:49.630 250022 INFO nova.compute.manager [None req-72bae494-334a-434e-98eb-87f17c5de862 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: 0f93511b-eba1-4b09-94ce-051ac10117ce] Took 2.05 seconds to destroy the instance on the hypervisor.
Jan 20 14:24:49 compute-0 nova_compute[250018]: 2026-01-20 14:24:49.630 250022 DEBUG oslo.service.loopingcall [None req-72bae494-334a-434e-98eb-87f17c5de862 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 14:24:49 compute-0 nova_compute[250018]: 2026-01-20 14:24:49.631 250022 DEBUG nova.compute.manager [-] [instance: 0f93511b-eba1-4b09-94ce-051ac10117ce] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 14:24:49 compute-0 nova_compute[250018]: 2026-01-20 14:24:49.631 250022 DEBUG nova.network.neutron [-] [instance: 0f93511b-eba1-4b09-94ce-051ac10117ce] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 14:24:49 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:24:49 compute-0 nova_compute[250018]: 2026-01-20 14:24:49.825 250022 DEBUG nova.network.neutron [-] [instance: 0f93511b-eba1-4b09-94ce-051ac10117ce] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 14:24:49 compute-0 nova_compute[250018]: 2026-01-20 14:24:49.844 250022 DEBUG nova.network.neutron [-] [instance: 0f93511b-eba1-4b09-94ce-051ac10117ce] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:24:49 compute-0 nova_compute[250018]: 2026-01-20 14:24:49.865 250022 INFO nova.compute.manager [-] [instance: 0f93511b-eba1-4b09-94ce-051ac10117ce] Took 0.23 seconds to deallocate network for instance.
Jan 20 14:24:49 compute-0 nova_compute[250018]: 2026-01-20 14:24:49.908 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:24:49 compute-0 nova_compute[250018]: 2026-01-20 14:24:49.919 250022 DEBUG oslo_concurrency.lockutils [None req-72bae494-334a-434e-98eb-87f17c5de862 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:24:49 compute-0 nova_compute[250018]: 2026-01-20 14:24:49.920 250022 DEBUG oslo_concurrency.lockutils [None req-72bae494-334a-434e-98eb-87f17c5de862 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:24:49 compute-0 nova_compute[250018]: 2026-01-20 14:24:49.936 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:24:50 compute-0 nova_compute[250018]: 2026-01-20 14:24:50.000 250022 DEBUG oslo_concurrency.processutils [None req-72bae494-334a-434e-98eb-87f17c5de862 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:24:50 compute-0 nova_compute[250018]: 2026-01-20 14:24:50.041 250022 INFO nova.virt.libvirt.driver [None req-5387e59d-5b95-4c5c-b991-41131f4ca30f 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: eaa38f5d-6564-47d4-b7c7-261945710681] Deleting instance files /var/lib/nova/instances/eaa38f5d-6564-47d4-b7c7-261945710681_del
Jan 20 14:24:50 compute-0 nova_compute[250018]: 2026-01-20 14:24:50.042 250022 INFO nova.virt.libvirt.driver [None req-5387e59d-5b95-4c5c-b991-41131f4ca30f 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: eaa38f5d-6564-47d4-b7c7-261945710681] Deletion of /var/lib/nova/instances/eaa38f5d-6564-47d4-b7c7-261945710681_del complete
Jan 20 14:24:50 compute-0 nova_compute[250018]: 2026-01-20 14:24:50.105 250022 INFO nova.compute.manager [None req-5387e59d-5b95-4c5c-b991-41131f4ca30f 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] [instance: eaa38f5d-6564-47d4-b7c7-261945710681] Took 1.90 seconds to destroy the instance on the hypervisor.
Jan 20 14:24:50 compute-0 nova_compute[250018]: 2026-01-20 14:24:50.107 250022 DEBUG oslo.service.loopingcall [None req-5387e59d-5b95-4c5c-b991-41131f4ca30f 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 14:24:50 compute-0 nova_compute[250018]: 2026-01-20 14:24:50.107 250022 DEBUG nova.compute.manager [-] [instance: eaa38f5d-6564-47d4-b7c7-261945710681] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 14:24:50 compute-0 nova_compute[250018]: 2026-01-20 14:24:50.108 250022 DEBUG nova.network.neutron [-] [instance: eaa38f5d-6564-47d4-b7c7-261945710681] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 14:24:50 compute-0 nova_compute[250018]: 2026-01-20 14:24:50.253 250022 DEBUG nova.network.neutron [-] [instance: eaa38f5d-6564-47d4-b7c7-261945710681] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 14:24:50 compute-0 nova_compute[250018]: 2026-01-20 14:24:50.266 250022 DEBUG nova.network.neutron [-] [instance: eaa38f5d-6564-47d4-b7c7-261945710681] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:24:50 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:24:50 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:24:50 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:24:50.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:24:50 compute-0 nova_compute[250018]: 2026-01-20 14:24:50.320 250022 INFO nova.compute.manager [-] [instance: eaa38f5d-6564-47d4-b7c7-261945710681] Took 0.21 seconds to deallocate network for instance.
Jan 20 14:24:50 compute-0 nova_compute[250018]: 2026-01-20 14:24:50.371 250022 DEBUG oslo_concurrency.lockutils [None req-5387e59d-5b95-4c5c-b991-41131f4ca30f 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:24:50 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:24:50 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/150030370' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:24:50 compute-0 nova_compute[250018]: 2026-01-20 14:24:50.504 250022 DEBUG oslo_concurrency.processutils [None req-72bae494-334a-434e-98eb-87f17c5de862 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:24:50 compute-0 nova_compute[250018]: 2026-01-20 14:24:50.509 250022 DEBUG nova.compute.provider_tree [None req-72bae494-334a-434e-98eb-87f17c5de862 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:24:50 compute-0 nova_compute[250018]: 2026-01-20 14:24:50.525 250022 DEBUG nova.scheduler.client.report [None req-72bae494-334a-434e-98eb-87f17c5de862 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:24:50 compute-0 nova_compute[250018]: 2026-01-20 14:24:50.546 250022 DEBUG oslo_concurrency.lockutils [None req-72bae494-334a-434e-98eb-87f17c5de862 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.626s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:24:50 compute-0 nova_compute[250018]: 2026-01-20 14:24:50.548 250022 DEBUG oslo_concurrency.lockutils [None req-5387e59d-5b95-4c5c-b991-41131f4ca30f 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.177s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:24:50 compute-0 nova_compute[250018]: 2026-01-20 14:24:50.567 250022 INFO nova.scheduler.client.report [None req-72bae494-334a-434e-98eb-87f17c5de862 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Deleted allocations for instance 0f93511b-eba1-4b09-94ce-051ac10117ce
Jan 20 14:24:50 compute-0 nova_compute[250018]: 2026-01-20 14:24:50.595 250022 DEBUG oslo_concurrency.processutils [None req-5387e59d-5b95-4c5c-b991-41131f4ca30f 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:24:50 compute-0 ceph-mon[74360]: pgmap v1001: 321 pgs: 321 active+clean; 397 MiB data, 457 MiB used, 21 GiB / 21 GiB avail; 66 KiB/s rd, 1.8 MiB/s wr, 96 op/s
Jan 20 14:24:50 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/4226925414' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:24:50 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/150030370' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:24:50 compute-0 nova_compute[250018]: 2026-01-20 14:24:50.635 250022 DEBUG oslo_concurrency.lockutils [None req-72bae494-334a-434e-98eb-87f17c5de862 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Lock "0f93511b-eba1-4b09-94ce-051ac10117ce" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.640s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:24:50 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:24:50 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/852221292' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:24:51 compute-0 nova_compute[250018]: 2026-01-20 14:24:51.002 250022 DEBUG oslo_concurrency.processutils [None req-5387e59d-5b95-4c5c-b991-41131f4ca30f 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.406s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:24:51 compute-0 nova_compute[250018]: 2026-01-20 14:24:51.009 250022 DEBUG nova.compute.provider_tree [None req-5387e59d-5b95-4c5c-b991-41131f4ca30f 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:24:51 compute-0 nova_compute[250018]: 2026-01-20 14:24:51.025 250022 DEBUG nova.scheduler.client.report [None req-5387e59d-5b95-4c5c-b991-41131f4ca30f 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:24:51 compute-0 nova_compute[250018]: 2026-01-20 14:24:51.048 250022 DEBUG oslo_concurrency.lockutils [None req-5387e59d-5b95-4c5c-b991-41131f4ca30f 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.500s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:24:51 compute-0 nova_compute[250018]: 2026-01-20 14:24:51.090 250022 INFO nova.scheduler.client.report [None req-5387e59d-5b95-4c5c-b991-41131f4ca30f 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Deleted allocations for instance eaa38f5d-6564-47d4-b7c7-261945710681
Jan 20 14:24:51 compute-0 nova_compute[250018]: 2026-01-20 14:24:51.191 250022 DEBUG oslo_concurrency.lockutils [None req-5387e59d-5b95-4c5c-b991-41131f4ca30f 32a16ea2839748629233294de19222b3 986d9f2d9bd24a228e53a76694db0568 - - default default] Lock "eaa38f5d-6564-47d4-b7c7-261945710681" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.026s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:24:51 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1002: 321 pgs: 321 active+clean; 348 MiB data, 426 MiB used, 21 GiB / 21 GiB avail; 79 KiB/s rd, 1.8 MiB/s wr, 116 op/s
Jan 20 14:24:51 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:24:51 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:24:51 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:24:51.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:24:51 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/852221292' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:24:52 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:24:52 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3133120487' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:24:52 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:24:52 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 20 14:24:52 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:24:52.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 20 14:24:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Optimize plan auto_2026-01-20_14:24:52
Jan 20 14:24:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 14:24:52 compute-0 ceph-mgr[74653]: [balancer INFO root] do_upmap
Jan 20 14:24:52 compute-0 ceph-mgr[74653]: [balancer INFO root] pools ['.mgr', 'default.rgw.control', '.rgw.root', 'default.rgw.log', 'default.rgw.meta', 'cephfs.cephfs.meta', 'vms', 'volumes', 'images', 'cephfs.cephfs.data', 'backups']
Jan 20 14:24:52 compute-0 ceph-mgr[74653]: [balancer INFO root] prepared 0/10 changes
Jan 20 14:24:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:24:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:24:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:24:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:24:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:24:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:24:52 compute-0 ceph-mon[74360]: pgmap v1002: 321 pgs: 321 active+clean; 348 MiB data, 426 MiB used, 21 GiB / 21 GiB avail; 79 KiB/s rd, 1.8 MiB/s wr, 116 op/s
Jan 20 14:24:52 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/3133120487' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:24:53 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1003: 321 pgs: 321 active+clean; 273 MiB data, 399 MiB used, 21 GiB / 21 GiB avail; 66 KiB/s rd, 1.1 MiB/s wr, 97 op/s
Jan 20 14:24:53 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:24:53 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:24:53 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:24:53.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:24:53 compute-0 ceph-mon[74360]: pgmap v1003: 321 pgs: 321 active+clean; 273 MiB data, 399 MiB used, 21 GiB / 21 GiB avail; 66 KiB/s rd, 1.1 MiB/s wr, 97 op/s
Jan 20 14:24:54 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:24:54 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:24:54 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:24:54.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:24:54 compute-0 sshd-session[259897]: Invalid user admin from 157.245.78.139 port 38532
Jan 20 14:24:54 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:24:54 compute-0 sshd-session[259897]: Connection closed by invalid user admin 157.245.78.139 port 38532 [preauth]
Jan 20 14:24:54 compute-0 nova_compute[250018]: 2026-01-20 14:24:54.911 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:24:54 compute-0 nova_compute[250018]: 2026-01-20 14:24:54.938 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:24:54 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/3059066805' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:24:54 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/2429111377' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:24:55 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1004: 321 pgs: 321 active+clean; 236 MiB data, 380 MiB used, 21 GiB / 21 GiB avail; 48 KiB/s rd, 856 KiB/s wr, 73 op/s
Jan 20 14:24:55 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:24:55 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:24:55 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:24:55.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:24:56 compute-0 ceph-mon[74360]: pgmap v1004: 321 pgs: 321 active+clean; 236 MiB data, 380 MiB used, 21 GiB / 21 GiB avail; 48 KiB/s rd, 856 KiB/s wr, 73 op/s
Jan 20 14:24:56 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:24:56 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:24:56 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:24:56.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:24:56 compute-0 podman[259902]: 2026-01-20 14:24:56.507696965 +0000 UTC m=+0.085426522 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 20 14:24:56 compute-0 podman[259901]: 2026-01-20 14:24:56.552999153 +0000 UTC m=+0.132507598 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 20 14:24:56 compute-0 sudo[259947]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:24:56 compute-0 sudo[259947]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:24:56 compute-0 sudo[259947]: pam_unix(sudo:session): session closed for user root
Jan 20 14:24:57 compute-0 sudo[259972]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:24:57 compute-0 sudo[259972]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:24:57 compute-0 sudo[259972]: pam_unix(sudo:session): session closed for user root
Jan 20 14:24:57 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1005: 321 pgs: 321 active+clean; 167 MiB data, 330 MiB used, 21 GiB / 21 GiB avail; 57 KiB/s rd, 26 KiB/s wr, 85 op/s
Jan 20 14:24:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 14:24:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:24:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 14:24:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:24:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:24:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:24:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:24:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:24:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:24:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:24:57 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:24:57 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:24:57 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:24:57.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:24:58 compute-0 ceph-mon[74360]: pgmap v1005: 321 pgs: 321 active+clean; 167 MiB data, 330 MiB used, 21 GiB / 21 GiB avail; 57 KiB/s rd, 26 KiB/s wr, 85 op/s
Jan 20 14:24:58 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:24:58 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:24:58 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:24:58.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:24:59 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1006: 321 pgs: 321 active+clean; 167 MiB data, 323 MiB used, 21 GiB / 21 GiB avail; 942 KiB/s rd, 25 KiB/s wr, 117 op/s
Jan 20 14:24:59 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:24:59 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:24:59 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:24:59.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:24:59 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:24:59 compute-0 nova_compute[250018]: 2026-01-20 14:24:59.937 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:24:59 compute-0 nova_compute[250018]: 2026-01-20 14:24:59.940 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:25:00 compute-0 ceph-mon[74360]: pgmap v1006: 321 pgs: 321 active+clean; 167 MiB data, 323 MiB used, 21 GiB / 21 GiB avail; 942 KiB/s rd, 25 KiB/s wr, 117 op/s
Jan 20 14:25:00 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:25:00 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:25:00 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:25:00.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:25:01 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1007: 321 pgs: 321 active+clean; 167 MiB data, 323 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 26 KiB/s wr, 137 op/s
Jan 20 14:25:01 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:25:01 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:25:01 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:25:01.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:25:02 compute-0 ceph-mon[74360]: pgmap v1007: 321 pgs: 321 active+clean; 167 MiB data, 323 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 26 KiB/s wr, 137 op/s
Jan 20 14:25:02 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:25:02 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:25:02 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:25:02.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:25:02 compute-0 nova_compute[250018]: 2026-01-20 14:25:02.808 250022 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768919087.807572, 0f93511b-eba1-4b09-94ce-051ac10117ce => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:25:02 compute-0 nova_compute[250018]: 2026-01-20 14:25:02.809 250022 INFO nova.compute.manager [-] [instance: 0f93511b-eba1-4b09-94ce-051ac10117ce] VM Stopped (Lifecycle Event)
Jan 20 14:25:03 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1008: 321 pgs: 321 active+clean; 167 MiB data, 323 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 128 op/s
Jan 20 14:25:03 compute-0 nova_compute[250018]: 2026-01-20 14:25:03.434 250022 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768919088.4329839, eaa38f5d-6564-47d4-b7c7-261945710681 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:25:03 compute-0 nova_compute[250018]: 2026-01-20 14:25:03.434 250022 INFO nova.compute.manager [-] [instance: eaa38f5d-6564-47d4-b7c7-261945710681] VM Stopped (Lifecycle Event)
Jan 20 14:25:03 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:25:03 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 20 14:25:03 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:25:03.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 20 14:25:04 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:25:04 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:25:04 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:25:04.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:25:04 compute-0 ceph-mon[74360]: pgmap v1008: 321 pgs: 321 active+clean; 167 MiB data, 323 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 128 op/s
Jan 20 14:25:04 compute-0 nova_compute[250018]: 2026-01-20 14:25:04.398 250022 DEBUG nova.compute.manager [None req-510aafe4-802d-4667-9fce-ec8e13ce24af - - - - - -] [instance: 0f93511b-eba1-4b09-94ce-051ac10117ce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:25:04 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:25:04 compute-0 nova_compute[250018]: 2026-01-20 14:25:04.939 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:25:04 compute-0 nova_compute[250018]: 2026-01-20 14:25:04.942 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:25:05 compute-0 nova_compute[250018]: 2026-01-20 14:25:05.064 250022 DEBUG nova.compute.manager [None req-837d4b1c-d267-4521-9930-9afe059e7117 - - - - - -] [instance: eaa38f5d-6564-47d4-b7c7-261945710681] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:25:05 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1009: 321 pgs: 321 active+clean; 167 MiB data, 323 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 112 op/s
Jan 20 14:25:05 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:25:05 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:25:05 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:25:05.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:25:05 compute-0 ovn_controller[148666]: 2026-01-20T14:25:05Z|00035|memory_trim|INFO|Detected inactivity (last active 30007 ms ago): trimming memory
Jan 20 14:25:06 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:25:06 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:25:06 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:25:06.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:25:06 compute-0 ceph-mon[74360]: pgmap v1009: 321 pgs: 321 active+clean; 167 MiB data, 323 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 112 op/s
Jan 20 14:25:07 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1010: 321 pgs: 321 active+clean; 167 MiB data, 323 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 101 op/s
Jan 20 14:25:07 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:25:07 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:25:07 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:25:07.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:25:07 compute-0 ceph-mon[74360]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 20 14:25:07 compute-0 ceph-mon[74360]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.0 total, 600.0 interval
                                           Cumulative writes: 5296 writes, 23K keys, 5295 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s
                                           Cumulative WAL: 5296 writes, 5295 syncs, 1.00 writes per sync, written: 0.03 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1478 writes, 6590 keys, 1478 commit groups, 1.0 writes per commit group, ingest: 10.01 MB, 0.02 MB/s
                                           Interval WAL: 1478 writes, 1478 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     30.4      0.88              0.09        13    0.068       0      0       0.0       0.0
                                             L6      1/0    7.49 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.6    111.0     92.4      1.04              0.33        12    0.086     56K   6313       0.0       0.0
                                            Sum      1/0    7.49 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.6     60.0     63.9      1.91              0.43        25    0.077     56K   6313       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   6.6     41.2     40.8      1.35              0.20        12    0.112     30K   3044       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0    111.0     92.4      1.04              0.33        12    0.086     56K   6313       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     30.4      0.88              0.09        12    0.073       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     14.7      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1800.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.026, interval 0.008
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.12 GB write, 0.07 MB/s write, 0.11 GB read, 0.06 MB/s read, 1.9 seconds
                                           Interval compaction: 0.05 GB write, 0.09 MB/s write, 0.05 GB read, 0.09 MB/s read, 1.3 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5576af6451f0#2 capacity: 304.00 MB usage: 10.30 MB table_size: 0 occupancy: 18446744073709551615 collections: 4 last_copies: 0 last_secs: 0.000117 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(593,9.85 MB,3.24135%) FilterBlock(26,164.17 KB,0.0527382%) IndexBlock(26,296.00 KB,0.0950863%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 20 14:25:08 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:25:08 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:25:08 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:25:08.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:25:08 compute-0 ceph-mon[74360]: pgmap v1010: 321 pgs: 321 active+clean; 167 MiB data, 323 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 101 op/s
Jan 20 14:25:09 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1011: 321 pgs: 321 active+clean; 167 MiB data, 323 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 11 KiB/s wr, 74 op/s
Jan 20 14:25:09 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:25:09 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:25:09 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:25:09.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:25:09 compute-0 nova_compute[250018]: 2026-01-20 14:25:09.491 250022 DEBUG oslo_concurrency.lockutils [None req-598bd51e-d153-439a-badf-cd30f9b8e3fa 57d58248fa3b44579c14396dca4a2199 ef783a3b5dd3446faf947d627c64c5da - - default default] Acquiring lock "fcfc64da-5468-42a5-bf34-daa6db48df22" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:25:09 compute-0 nova_compute[250018]: 2026-01-20 14:25:09.492 250022 DEBUG oslo_concurrency.lockutils [None req-598bd51e-d153-439a-badf-cd30f9b8e3fa 57d58248fa3b44579c14396dca4a2199 ef783a3b5dd3446faf947d627c64c5da - - default default] Lock "fcfc64da-5468-42a5-bf34-daa6db48df22" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:25:09 compute-0 nova_compute[250018]: 2026-01-20 14:25:09.512 250022 DEBUG nova.compute.manager [None req-598bd51e-d153-439a-badf-cd30f9b8e3fa 57d58248fa3b44579c14396dca4a2199 ef783a3b5dd3446faf947d627c64c5da - - default default] [instance: fcfc64da-5468-42a5-bf34-daa6db48df22] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 14:25:09 compute-0 nova_compute[250018]: 2026-01-20 14:25:09.571 250022 DEBUG oslo_concurrency.lockutils [None req-865dd36c-a94a-404d-ae0a-41c626c37aae 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] Acquiring lock "6091ab6e-2530-4b48-b482-00867d3c66c5" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:25:09 compute-0 nova_compute[250018]: 2026-01-20 14:25:09.572 250022 DEBUG oslo_concurrency.lockutils [None req-865dd36c-a94a-404d-ae0a-41c626c37aae 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] Lock "6091ab6e-2530-4b48-b482-00867d3c66c5" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:25:09 compute-0 nova_compute[250018]: 2026-01-20 14:25:09.592 250022 DEBUG nova.compute.manager [None req-865dd36c-a94a-404d-ae0a-41c626c37aae 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] [instance: 6091ab6e-2530-4b48-b482-00867d3c66c5] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 14:25:09 compute-0 nova_compute[250018]: 2026-01-20 14:25:09.619 250022 DEBUG oslo_concurrency.lockutils [None req-598bd51e-d153-439a-badf-cd30f9b8e3fa 57d58248fa3b44579c14396dca4a2199 ef783a3b5dd3446faf947d627c64c5da - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:25:09 compute-0 nova_compute[250018]: 2026-01-20 14:25:09.619 250022 DEBUG oslo_concurrency.lockutils [None req-598bd51e-d153-439a-badf-cd30f9b8e3fa 57d58248fa3b44579c14396dca4a2199 ef783a3b5dd3446faf947d627c64c5da - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:25:09 compute-0 nova_compute[250018]: 2026-01-20 14:25:09.632 250022 DEBUG nova.virt.hardware [None req-598bd51e-d153-439a-badf-cd30f9b8e3fa 57d58248fa3b44579c14396dca4a2199 ef783a3b5dd3446faf947d627c64c5da - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 14:25:09 compute-0 nova_compute[250018]: 2026-01-20 14:25:09.632 250022 INFO nova.compute.claims [None req-598bd51e-d153-439a-badf-cd30f9b8e3fa 57d58248fa3b44579c14396dca4a2199 ef783a3b5dd3446faf947d627c64c5da - - default default] [instance: fcfc64da-5468-42a5-bf34-daa6db48df22] Claim successful on node compute-0.ctlplane.example.com
Jan 20 14:25:09 compute-0 nova_compute[250018]: 2026-01-20 14:25:09.735 250022 DEBUG oslo_concurrency.lockutils [None req-865dd36c-a94a-404d-ae0a-41c626c37aae 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:25:09 compute-0 nova_compute[250018]: 2026-01-20 14:25:09.846 250022 DEBUG oslo_concurrency.processutils [None req-598bd51e-d153-439a-badf-cd30f9b8e3fa 57d58248fa3b44579c14396dca4a2199 ef783a3b5dd3446faf947d627c64c5da - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:25:09 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:25:09 compute-0 nova_compute[250018]: 2026-01-20 14:25:09.941 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:25:10 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:25:10 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:25:10 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:25:10.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:25:10 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:25:10 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/176940240' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:25:10 compute-0 nova_compute[250018]: 2026-01-20 14:25:10.636 250022 DEBUG oslo_concurrency.processutils [None req-598bd51e-d153-439a-badf-cd30f9b8e3fa 57d58248fa3b44579c14396dca4a2199 ef783a3b5dd3446faf947d627c64c5da - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.790s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:25:10 compute-0 nova_compute[250018]: 2026-01-20 14:25:10.646 250022 DEBUG nova.compute.provider_tree [None req-598bd51e-d153-439a-badf-cd30f9b8e3fa 57d58248fa3b44579c14396dca4a2199 ef783a3b5dd3446faf947d627c64c5da - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:25:10 compute-0 nova_compute[250018]: 2026-01-20 14:25:10.678 250022 DEBUG nova.scheduler.client.report [None req-598bd51e-d153-439a-badf-cd30f9b8e3fa 57d58248fa3b44579c14396dca4a2199 ef783a3b5dd3446faf947d627c64c5da - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:25:10 compute-0 nova_compute[250018]: 2026-01-20 14:25:10.704 250022 DEBUG oslo_concurrency.lockutils [None req-598bd51e-d153-439a-badf-cd30f9b8e3fa 57d58248fa3b44579c14396dca4a2199 ef783a3b5dd3446faf947d627c64c5da - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.085s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:25:10 compute-0 nova_compute[250018]: 2026-01-20 14:25:10.705 250022 DEBUG nova.compute.manager [None req-598bd51e-d153-439a-badf-cd30f9b8e3fa 57d58248fa3b44579c14396dca4a2199 ef783a3b5dd3446faf947d627c64c5da - - default default] [instance: fcfc64da-5468-42a5-bf34-daa6db48df22] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 14:25:10 compute-0 nova_compute[250018]: 2026-01-20 14:25:10.711 250022 DEBUG oslo_concurrency.lockutils [None req-865dd36c-a94a-404d-ae0a-41c626c37aae 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.975s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:25:10 compute-0 nova_compute[250018]: 2026-01-20 14:25:10.723 250022 DEBUG nova.virt.hardware [None req-865dd36c-a94a-404d-ae0a-41c626c37aae 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 14:25:10 compute-0 nova_compute[250018]: 2026-01-20 14:25:10.723 250022 INFO nova.compute.claims [None req-865dd36c-a94a-404d-ae0a-41c626c37aae 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] [instance: 6091ab6e-2530-4b48-b482-00867d3c66c5] Claim successful on node compute-0.ctlplane.example.com
Jan 20 14:25:10 compute-0 nova_compute[250018]: 2026-01-20 14:25:10.776 250022 DEBUG nova.compute.manager [None req-598bd51e-d153-439a-badf-cd30f9b8e3fa 57d58248fa3b44579c14396dca4a2199 ef783a3b5dd3446faf947d627c64c5da - - default default] [instance: fcfc64da-5468-42a5-bf34-daa6db48df22] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 20 14:25:10 compute-0 nova_compute[250018]: 2026-01-20 14:25:10.776 250022 DEBUG nova.network.neutron [None req-598bd51e-d153-439a-badf-cd30f9b8e3fa 57d58248fa3b44579c14396dca4a2199 ef783a3b5dd3446faf947d627c64c5da - - default default] [instance: fcfc64da-5468-42a5-bf34-daa6db48df22] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 14:25:10 compute-0 nova_compute[250018]: 2026-01-20 14:25:10.808 250022 INFO nova.virt.libvirt.driver [None req-598bd51e-d153-439a-badf-cd30f9b8e3fa 57d58248fa3b44579c14396dca4a2199 ef783a3b5dd3446faf947d627c64c5da - - default default] [instance: fcfc64da-5468-42a5-bf34-daa6db48df22] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 14:25:10 compute-0 nova_compute[250018]: 2026-01-20 14:25:10.830 250022 DEBUG nova.compute.manager [None req-598bd51e-d153-439a-badf-cd30f9b8e3fa 57d58248fa3b44579c14396dca4a2199 ef783a3b5dd3446faf947d627c64c5da - - default default] [instance: fcfc64da-5468-42a5-bf34-daa6db48df22] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 14:25:10 compute-0 nova_compute[250018]: 2026-01-20 14:25:10.906 250022 DEBUG oslo_concurrency.processutils [None req-865dd36c-a94a-404d-ae0a-41c626c37aae 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:25:10 compute-0 nova_compute[250018]: 2026-01-20 14:25:10.987 250022 DEBUG nova.compute.manager [None req-598bd51e-d153-439a-badf-cd30f9b8e3fa 57d58248fa3b44579c14396dca4a2199 ef783a3b5dd3446faf947d627c64c5da - - default default] [instance: fcfc64da-5468-42a5-bf34-daa6db48df22] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 14:25:10 compute-0 nova_compute[250018]: 2026-01-20 14:25:10.989 250022 DEBUG nova.virt.libvirt.driver [None req-598bd51e-d153-439a-badf-cd30f9b8e3fa 57d58248fa3b44579c14396dca4a2199 ef783a3b5dd3446faf947d627c64c5da - - default default] [instance: fcfc64da-5468-42a5-bf34-daa6db48df22] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 14:25:10 compute-0 nova_compute[250018]: 2026-01-20 14:25:10.990 250022 INFO nova.virt.libvirt.driver [None req-598bd51e-d153-439a-badf-cd30f9b8e3fa 57d58248fa3b44579c14396dca4a2199 ef783a3b5dd3446faf947d627c64c5da - - default default] [instance: fcfc64da-5468-42a5-bf34-daa6db48df22] Creating image(s)
Jan 20 14:25:11 compute-0 nova_compute[250018]: 2026-01-20 14:25:11.030 250022 DEBUG nova.storage.rbd_utils [None req-598bd51e-d153-439a-badf-cd30f9b8e3fa 57d58248fa3b44579c14396dca4a2199 ef783a3b5dd3446faf947d627c64c5da - - default default] rbd image fcfc64da-5468-42a5-bf34-daa6db48df22_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:25:11 compute-0 nova_compute[250018]: 2026-01-20 14:25:11.066 250022 DEBUG nova.storage.rbd_utils [None req-598bd51e-d153-439a-badf-cd30f9b8e3fa 57d58248fa3b44579c14396dca4a2199 ef783a3b5dd3446faf947d627c64c5da - - default default] rbd image fcfc64da-5468-42a5-bf34-daa6db48df22_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:25:11 compute-0 nova_compute[250018]: 2026-01-20 14:25:11.113 250022 DEBUG nova.storage.rbd_utils [None req-598bd51e-d153-439a-badf-cd30f9b8e3fa 57d58248fa3b44579c14396dca4a2199 ef783a3b5dd3446faf947d627c64c5da - - default default] rbd image fcfc64da-5468-42a5-bf34-daa6db48df22_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:25:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 14:25:11 compute-0 nova_compute[250018]: 2026-01-20 14:25:11.119 250022 DEBUG oslo_concurrency.processutils [None req-598bd51e-d153-439a-badf-cd30f9b8e3fa 57d58248fa3b44579c14396dca4a2199 ef783a3b5dd3446faf947d627c64c5da - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:25:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:25:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 20 14:25:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:25:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0021748644844593336 of space, bias 1.0, pg target 0.6524593453378001 quantized to 32 (current 32)
Jan 20 14:25:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:25:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0010132767425088405 of space, bias 1.0, pg target 0.30398302275265215 quantized to 32 (current 32)
Jan 20 14:25:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:25:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:25:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:25:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 20 14:25:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:25:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 20 14:25:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:25:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:25:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:25:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 20 14:25:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:25:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 20 14:25:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:25:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:25:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:25:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 20 14:25:11 compute-0 nova_compute[250018]: 2026-01-20 14:25:11.155 250022 DEBUG nova.policy [None req-598bd51e-d153-439a-badf-cd30f9b8e3fa 57d58248fa3b44579c14396dca4a2199 ef783a3b5dd3446faf947d627c64c5da - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '57d58248fa3b44579c14396dca4a2199', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ef783a3b5dd3446faf947d627c64c5da', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 20 14:25:11 compute-0 nova_compute[250018]: 2026-01-20 14:25:11.160 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:25:11 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1012: 321 pgs: 321 active+clean; 176 MiB data, 331 MiB used, 21 GiB / 21 GiB avail; 1.2 MiB/s rd, 678 KiB/s wr, 56 op/s
Jan 20 14:25:11 compute-0 nova_compute[250018]: 2026-01-20 14:25:11.222 250022 DEBUG oslo_concurrency.processutils [None req-598bd51e-d153-439a-badf-cd30f9b8e3fa 57d58248fa3b44579c14396dca4a2199 ef783a3b5dd3446faf947d627c64c5da - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json" returned: 0 in 0.103s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:25:11 compute-0 nova_compute[250018]: 2026-01-20 14:25:11.223 250022 DEBUG oslo_concurrency.lockutils [None req-598bd51e-d153-439a-badf-cd30f9b8e3fa 57d58248fa3b44579c14396dca4a2199 ef783a3b5dd3446faf947d627c64c5da - - default default] Acquiring lock "82d5c1918fd7c974214c7a48c1793a7a82560462" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:25:11 compute-0 nova_compute[250018]: 2026-01-20 14:25:11.224 250022 DEBUG oslo_concurrency.lockutils [None req-598bd51e-d153-439a-badf-cd30f9b8e3fa 57d58248fa3b44579c14396dca4a2199 ef783a3b5dd3446faf947d627c64c5da - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:25:11 compute-0 nova_compute[250018]: 2026-01-20 14:25:11.225 250022 DEBUG oslo_concurrency.lockutils [None req-598bd51e-d153-439a-badf-cd30f9b8e3fa 57d58248fa3b44579c14396dca4a2199 ef783a3b5dd3446faf947d627c64c5da - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:25:11 compute-0 nova_compute[250018]: 2026-01-20 14:25:11.266 250022 DEBUG nova.storage.rbd_utils [None req-598bd51e-d153-439a-badf-cd30f9b8e3fa 57d58248fa3b44579c14396dca4a2199 ef783a3b5dd3446faf947d627c64c5da - - default default] rbd image fcfc64da-5468-42a5-bf34-daa6db48df22_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:25:11 compute-0 nova_compute[250018]: 2026-01-20 14:25:11.271 250022 DEBUG oslo_concurrency.processutils [None req-598bd51e-d153-439a-badf-cd30f9b8e3fa 57d58248fa3b44579c14396dca4a2199 ef783a3b5dd3446faf947d627c64c5da - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 fcfc64da-5468-42a5-bf34-daa6db48df22_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:25:11 compute-0 ceph-mon[74360]: pgmap v1011: 321 pgs: 321 active+clean; 167 MiB data, 323 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 11 KiB/s wr, 74 op/s
Jan 20 14:25:11 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:25:11 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4198693599' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:25:11 compute-0 nova_compute[250018]: 2026-01-20 14:25:11.407 250022 DEBUG oslo_concurrency.processutils [None req-865dd36c-a94a-404d-ae0a-41c626c37aae 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:25:11 compute-0 nova_compute[250018]: 2026-01-20 14:25:11.412 250022 DEBUG nova.compute.provider_tree [None req-865dd36c-a94a-404d-ae0a-41c626c37aae 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:25:11 compute-0 nova_compute[250018]: 2026-01-20 14:25:11.434 250022 DEBUG nova.scheduler.client.report [None req-865dd36c-a94a-404d-ae0a-41c626c37aae 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:25:11 compute-0 nova_compute[250018]: 2026-01-20 14:25:11.460 250022 DEBUG oslo_concurrency.lockutils [None req-865dd36c-a94a-404d-ae0a-41c626c37aae 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.749s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:25:11 compute-0 nova_compute[250018]: 2026-01-20 14:25:11.461 250022 DEBUG nova.compute.manager [None req-865dd36c-a94a-404d-ae0a-41c626c37aae 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] [instance: 6091ab6e-2530-4b48-b482-00867d3c66c5] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 14:25:11 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:25:11 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:25:11 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:25:11.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:25:11 compute-0 nova_compute[250018]: 2026-01-20 14:25:11.516 250022 DEBUG nova.compute.manager [None req-865dd36c-a94a-404d-ae0a-41c626c37aae 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] [instance: 6091ab6e-2530-4b48-b482-00867d3c66c5] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Jan 20 14:25:11 compute-0 nova_compute[250018]: 2026-01-20 14:25:11.532 250022 INFO nova.virt.libvirt.driver [None req-865dd36c-a94a-404d-ae0a-41c626c37aae 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] [instance: 6091ab6e-2530-4b48-b482-00867d3c66c5] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 14:25:11 compute-0 nova_compute[250018]: 2026-01-20 14:25:11.548 250022 DEBUG nova.compute.manager [None req-865dd36c-a94a-404d-ae0a-41c626c37aae 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] [instance: 6091ab6e-2530-4b48-b482-00867d3c66c5] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 14:25:11 compute-0 nova_compute[250018]: 2026-01-20 14:25:11.583 250022 DEBUG oslo_concurrency.processutils [None req-598bd51e-d153-439a-badf-cd30f9b8e3fa 57d58248fa3b44579c14396dca4a2199 ef783a3b5dd3446faf947d627c64c5da - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 fcfc64da-5468-42a5-bf34-daa6db48df22_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.312s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:25:11 compute-0 nova_compute[250018]: 2026-01-20 14:25:11.669 250022 DEBUG nova.storage.rbd_utils [None req-598bd51e-d153-439a-badf-cd30f9b8e3fa 57d58248fa3b44579c14396dca4a2199 ef783a3b5dd3446faf947d627c64c5da - - default default] resizing rbd image fcfc64da-5468-42a5-bf34-daa6db48df22_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 20 14:25:11 compute-0 nova_compute[250018]: 2026-01-20 14:25:11.710 250022 DEBUG nova.compute.manager [None req-865dd36c-a94a-404d-ae0a-41c626c37aae 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] [instance: 6091ab6e-2530-4b48-b482-00867d3c66c5] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 14:25:11 compute-0 nova_compute[250018]: 2026-01-20 14:25:11.711 250022 DEBUG nova.virt.libvirt.driver [None req-865dd36c-a94a-404d-ae0a-41c626c37aae 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] [instance: 6091ab6e-2530-4b48-b482-00867d3c66c5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 14:25:11 compute-0 nova_compute[250018]: 2026-01-20 14:25:11.712 250022 INFO nova.virt.libvirt.driver [None req-865dd36c-a94a-404d-ae0a-41c626c37aae 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] [instance: 6091ab6e-2530-4b48-b482-00867d3c66c5] Creating image(s)
Jan 20 14:25:11 compute-0 nova_compute[250018]: 2026-01-20 14:25:11.736 250022 DEBUG nova.storage.rbd_utils [None req-865dd36c-a94a-404d-ae0a-41c626c37aae 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] rbd image 6091ab6e-2530-4b48-b482-00867d3c66c5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:25:11 compute-0 nova_compute[250018]: 2026-01-20 14:25:11.761 250022 DEBUG nova.storage.rbd_utils [None req-865dd36c-a94a-404d-ae0a-41c626c37aae 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] rbd image 6091ab6e-2530-4b48-b482-00867d3c66c5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:25:11 compute-0 nova_compute[250018]: 2026-01-20 14:25:11.784 250022 DEBUG nova.storage.rbd_utils [None req-865dd36c-a94a-404d-ae0a-41c626c37aae 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] rbd image 6091ab6e-2530-4b48-b482-00867d3c66c5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:25:11 compute-0 nova_compute[250018]: 2026-01-20 14:25:11.787 250022 DEBUG oslo_concurrency.processutils [None req-865dd36c-a94a-404d-ae0a-41c626c37aae 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:25:11 compute-0 nova_compute[250018]: 2026-01-20 14:25:11.853 250022 DEBUG oslo_concurrency.processutils [None req-865dd36c-a94a-404d-ae0a-41c626c37aae 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:25:11 compute-0 nova_compute[250018]: 2026-01-20 14:25:11.854 250022 DEBUG oslo_concurrency.lockutils [None req-865dd36c-a94a-404d-ae0a-41c626c37aae 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] Acquiring lock "82d5c1918fd7c974214c7a48c1793a7a82560462" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:25:11 compute-0 nova_compute[250018]: 2026-01-20 14:25:11.855 250022 DEBUG oslo_concurrency.lockutils [None req-865dd36c-a94a-404d-ae0a-41c626c37aae 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:25:11 compute-0 nova_compute[250018]: 2026-01-20 14:25:11.855 250022 DEBUG oslo_concurrency.lockutils [None req-865dd36c-a94a-404d-ae0a-41c626c37aae 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:25:11 compute-0 nova_compute[250018]: 2026-01-20 14:25:11.875 250022 DEBUG nova.storage.rbd_utils [None req-865dd36c-a94a-404d-ae0a-41c626c37aae 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] rbd image 6091ab6e-2530-4b48-b482-00867d3c66c5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:25:11 compute-0 nova_compute[250018]: 2026-01-20 14:25:11.880 250022 DEBUG oslo_concurrency.processutils [None req-865dd36c-a94a-404d-ae0a-41c626c37aae 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 6091ab6e-2530-4b48-b482-00867d3c66c5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:25:11 compute-0 nova_compute[250018]: 2026-01-20 14:25:11.906 250022 DEBUG nova.objects.instance [None req-598bd51e-d153-439a-badf-cd30f9b8e3fa 57d58248fa3b44579c14396dca4a2199 ef783a3b5dd3446faf947d627c64c5da - - default default] Lazy-loading 'migration_context' on Instance uuid fcfc64da-5468-42a5-bf34-daa6db48df22 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:25:11 compute-0 nova_compute[250018]: 2026-01-20 14:25:11.920 250022 DEBUG nova.virt.libvirt.driver [None req-598bd51e-d153-439a-badf-cd30f9b8e3fa 57d58248fa3b44579c14396dca4a2199 ef783a3b5dd3446faf947d627c64c5da - - default default] [instance: fcfc64da-5468-42a5-bf34-daa6db48df22] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 14:25:11 compute-0 nova_compute[250018]: 2026-01-20 14:25:11.921 250022 DEBUG nova.virt.libvirt.driver [None req-598bd51e-d153-439a-badf-cd30f9b8e3fa 57d58248fa3b44579c14396dca4a2199 ef783a3b5dd3446faf947d627c64c5da - - default default] [instance: fcfc64da-5468-42a5-bf34-daa6db48df22] Ensure instance console log exists: /var/lib/nova/instances/fcfc64da-5468-42a5-bf34-daa6db48df22/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 14:25:11 compute-0 nova_compute[250018]: 2026-01-20 14:25:11.921 250022 DEBUG oslo_concurrency.lockutils [None req-598bd51e-d153-439a-badf-cd30f9b8e3fa 57d58248fa3b44579c14396dca4a2199 ef783a3b5dd3446faf947d627c64c5da - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:25:11 compute-0 nova_compute[250018]: 2026-01-20 14:25:11.921 250022 DEBUG oslo_concurrency.lockutils [None req-598bd51e-d153-439a-badf-cd30f9b8e3fa 57d58248fa3b44579c14396dca4a2199 ef783a3b5dd3446faf947d627c64c5da - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:25:11 compute-0 nova_compute[250018]: 2026-01-20 14:25:11.922 250022 DEBUG oslo_concurrency.lockutils [None req-598bd51e-d153-439a-badf-cd30f9b8e3fa 57d58248fa3b44579c14396dca4a2199 ef783a3b5dd3446faf947d627c64c5da - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:25:12 compute-0 nova_compute[250018]: 2026-01-20 14:25:12.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:25:12 compute-0 nova_compute[250018]: 2026-01-20 14:25:12.052 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:25:12 compute-0 nova_compute[250018]: 2026-01-20 14:25:12.127 250022 DEBUG oslo_concurrency.processutils [None req-865dd36c-a94a-404d-ae0a-41c626c37aae 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 6091ab6e-2530-4b48-b482-00867d3c66c5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.247s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:25:12 compute-0 nova_compute[250018]: 2026-01-20 14:25:12.192 250022 DEBUG nova.storage.rbd_utils [None req-865dd36c-a94a-404d-ae0a-41c626c37aae 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] resizing rbd image 6091ab6e-2530-4b48-b482-00867d3c66c5_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 20 14:25:12 compute-0 nova_compute[250018]: 2026-01-20 14:25:12.293 250022 DEBUG nova.objects.instance [None req-865dd36c-a94a-404d-ae0a-41c626c37aae 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] Lazy-loading 'migration_context' on Instance uuid 6091ab6e-2530-4b48-b482-00867d3c66c5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:25:12 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:25:12 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:25:12 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:25:12.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:25:12 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/176940240' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:25:12 compute-0 ceph-mon[74360]: pgmap v1012: 321 pgs: 321 active+clean; 176 MiB data, 331 MiB used, 21 GiB / 21 GiB avail; 1.2 MiB/s rd, 678 KiB/s wr, 56 op/s
Jan 20 14:25:12 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/4198693599' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:25:12 compute-0 nova_compute[250018]: 2026-01-20 14:25:12.569 250022 DEBUG nova.virt.libvirt.driver [None req-865dd36c-a94a-404d-ae0a-41c626c37aae 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] [instance: 6091ab6e-2530-4b48-b482-00867d3c66c5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 14:25:12 compute-0 nova_compute[250018]: 2026-01-20 14:25:12.569 250022 DEBUG nova.virt.libvirt.driver [None req-865dd36c-a94a-404d-ae0a-41c626c37aae 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] [instance: 6091ab6e-2530-4b48-b482-00867d3c66c5] Ensure instance console log exists: /var/lib/nova/instances/6091ab6e-2530-4b48-b482-00867d3c66c5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 14:25:12 compute-0 nova_compute[250018]: 2026-01-20 14:25:12.570 250022 DEBUG oslo_concurrency.lockutils [None req-865dd36c-a94a-404d-ae0a-41c626c37aae 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:25:12 compute-0 nova_compute[250018]: 2026-01-20 14:25:12.571 250022 DEBUG oslo_concurrency.lockutils [None req-865dd36c-a94a-404d-ae0a-41c626c37aae 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:25:12 compute-0 nova_compute[250018]: 2026-01-20 14:25:12.571 250022 DEBUG oslo_concurrency.lockutils [None req-865dd36c-a94a-404d-ae0a-41c626c37aae 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:25:12 compute-0 nova_compute[250018]: 2026-01-20 14:25:12.573 250022 DEBUG nova.virt.libvirt.driver [None req-865dd36c-a94a-404d-ae0a-41c626c37aae 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] [instance: 6091ab6e-2530-4b48-b482-00867d3c66c5] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encrypted': False, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'guest_format': None, 'size': 0, 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 14:25:12 compute-0 nova_compute[250018]: 2026-01-20 14:25:12.578 250022 WARNING nova.virt.libvirt.driver [None req-865dd36c-a94a-404d-ae0a-41c626c37aae 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:25:12 compute-0 nova_compute[250018]: 2026-01-20 14:25:12.584 250022 DEBUG nova.virt.libvirt.host [None req-865dd36c-a94a-404d-ae0a-41c626c37aae 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 14:25:12 compute-0 nova_compute[250018]: 2026-01-20 14:25:12.585 250022 DEBUG nova.virt.libvirt.host [None req-865dd36c-a94a-404d-ae0a-41c626c37aae 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 14:25:12 compute-0 nova_compute[250018]: 2026-01-20 14:25:12.588 250022 DEBUG nova.virt.libvirt.host [None req-865dd36c-a94a-404d-ae0a-41c626c37aae 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 14:25:12 compute-0 nova_compute[250018]: 2026-01-20 14:25:12.589 250022 DEBUG nova.virt.libvirt.host [None req-865dd36c-a94a-404d-ae0a-41c626c37aae 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 14:25:12 compute-0 nova_compute[250018]: 2026-01-20 14:25:12.590 250022 DEBUG nova.virt.libvirt.driver [None req-865dd36c-a94a-404d-ae0a-41c626c37aae 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 14:25:12 compute-0 nova_compute[250018]: 2026-01-20 14:25:12.590 250022 DEBUG nova.virt.hardware [None req-865dd36c-a94a-404d-ae0a-41c626c37aae 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 14:25:12 compute-0 nova_compute[250018]: 2026-01-20 14:25:12.591 250022 DEBUG nova.virt.hardware [None req-865dd36c-a94a-404d-ae0a-41c626c37aae 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 14:25:12 compute-0 nova_compute[250018]: 2026-01-20 14:25:12.591 250022 DEBUG nova.virt.hardware [None req-865dd36c-a94a-404d-ae0a-41c626c37aae 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 14:25:12 compute-0 nova_compute[250018]: 2026-01-20 14:25:12.591 250022 DEBUG nova.virt.hardware [None req-865dd36c-a94a-404d-ae0a-41c626c37aae 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 14:25:12 compute-0 nova_compute[250018]: 2026-01-20 14:25:12.591 250022 DEBUG nova.virt.hardware [None req-865dd36c-a94a-404d-ae0a-41c626c37aae 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 14:25:12 compute-0 nova_compute[250018]: 2026-01-20 14:25:12.592 250022 DEBUG nova.virt.hardware [None req-865dd36c-a94a-404d-ae0a-41c626c37aae 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 14:25:12 compute-0 nova_compute[250018]: 2026-01-20 14:25:12.592 250022 DEBUG nova.virt.hardware [None req-865dd36c-a94a-404d-ae0a-41c626c37aae 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 14:25:12 compute-0 nova_compute[250018]: 2026-01-20 14:25:12.592 250022 DEBUG nova.virt.hardware [None req-865dd36c-a94a-404d-ae0a-41c626c37aae 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 14:25:12 compute-0 nova_compute[250018]: 2026-01-20 14:25:12.592 250022 DEBUG nova.virt.hardware [None req-865dd36c-a94a-404d-ae0a-41c626c37aae 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 14:25:12 compute-0 nova_compute[250018]: 2026-01-20 14:25:12.592 250022 DEBUG nova.virt.hardware [None req-865dd36c-a94a-404d-ae0a-41c626c37aae 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 14:25:12 compute-0 nova_compute[250018]: 2026-01-20 14:25:12.593 250022 DEBUG nova.virt.hardware [None req-865dd36c-a94a-404d-ae0a-41c626c37aae 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 14:25:12 compute-0 nova_compute[250018]: 2026-01-20 14:25:12.596 250022 DEBUG oslo_concurrency.processutils [None req-865dd36c-a94a-404d-ae0a-41c626c37aae 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:25:12 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:25:12 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3183489613' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:25:13 compute-0 nova_compute[250018]: 2026-01-20 14:25:13.003 250022 DEBUG oslo_concurrency.processutils [None req-865dd36c-a94a-404d-ae0a-41c626c37aae 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.407s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:25:13 compute-0 nova_compute[250018]: 2026-01-20 14:25:13.033 250022 DEBUG nova.storage.rbd_utils [None req-865dd36c-a94a-404d-ae0a-41c626c37aae 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] rbd image 6091ab6e-2530-4b48-b482-00867d3c66c5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:25:13 compute-0 nova_compute[250018]: 2026-01-20 14:25:13.038 250022 DEBUG oslo_concurrency.processutils [None req-865dd36c-a94a-404d-ae0a-41c626c37aae 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:25:13 compute-0 nova_compute[250018]: 2026-01-20 14:25:13.052 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:25:13 compute-0 nova_compute[250018]: 2026-01-20 14:25:13.053 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:25:13 compute-0 nova_compute[250018]: 2026-01-20 14:25:13.157 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:25:13 compute-0 nova_compute[250018]: 2026-01-20 14:25:13.157 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:25:13 compute-0 nova_compute[250018]: 2026-01-20 14:25:13.158 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:25:13 compute-0 nova_compute[250018]: 2026-01-20 14:25:13.158 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 14:25:13 compute-0 nova_compute[250018]: 2026-01-20 14:25:13.158 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:25:13 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1013: 321 pgs: 321 active+clean; 206 MiB data, 341 MiB used, 21 GiB / 21 GiB avail; 616 KiB/s rd, 2.3 MiB/s wr, 75 op/s
Jan 20 14:25:13 compute-0 nova_compute[250018]: 2026-01-20 14:25:13.243 250022 DEBUG nova.network.neutron [None req-598bd51e-d153-439a-badf-cd30f9b8e3fa 57d58248fa3b44579c14396dca4a2199 ef783a3b5dd3446faf947d627c64c5da - - default default] [instance: fcfc64da-5468-42a5-bf34-daa6db48df22] Successfully created port: 89adf3b7-65a1-479e-bc6d-0f86de206593 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 20 14:25:13 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:25:13 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:25:13 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:25:13.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:25:13 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:25:13 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3277421869' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:25:13 compute-0 nova_compute[250018]: 2026-01-20 14:25:13.519 250022 DEBUG oslo_concurrency.processutils [None req-865dd36c-a94a-404d-ae0a-41c626c37aae 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:25:13 compute-0 nova_compute[250018]: 2026-01-20 14:25:13.522 250022 DEBUG nova.objects.instance [None req-865dd36c-a94a-404d-ae0a-41c626c37aae 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] Lazy-loading 'pci_devices' on Instance uuid 6091ab6e-2530-4b48-b482-00867d3c66c5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:25:13 compute-0 nova_compute[250018]: 2026-01-20 14:25:13.562 250022 DEBUG nova.virt.libvirt.driver [None req-865dd36c-a94a-404d-ae0a-41c626c37aae 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] [instance: 6091ab6e-2530-4b48-b482-00867d3c66c5] End _get_guest_xml xml=<domain type="kvm">
Jan 20 14:25:13 compute-0 nova_compute[250018]:   <uuid>6091ab6e-2530-4b48-b482-00867d3c66c5</uuid>
Jan 20 14:25:13 compute-0 nova_compute[250018]:   <name>instance-0000000e</name>
Jan 20 14:25:13 compute-0 nova_compute[250018]:   <memory>131072</memory>
Jan 20 14:25:13 compute-0 nova_compute[250018]:   <vcpu>1</vcpu>
Jan 20 14:25:13 compute-0 nova_compute[250018]:   <metadata>
Jan 20 14:25:13 compute-0 nova_compute[250018]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 14:25:13 compute-0 nova_compute[250018]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 14:25:13 compute-0 nova_compute[250018]:       <nova:name>tempest-UnshelveToHostMultiNodesTest-server-767584007</nova:name>
Jan 20 14:25:13 compute-0 nova_compute[250018]:       <nova:creationTime>2026-01-20 14:25:12</nova:creationTime>
Jan 20 14:25:13 compute-0 nova_compute[250018]:       <nova:flavor name="m1.nano">
Jan 20 14:25:13 compute-0 nova_compute[250018]:         <nova:memory>128</nova:memory>
Jan 20 14:25:13 compute-0 nova_compute[250018]:         <nova:disk>1</nova:disk>
Jan 20 14:25:13 compute-0 nova_compute[250018]:         <nova:swap>0</nova:swap>
Jan 20 14:25:13 compute-0 nova_compute[250018]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 14:25:13 compute-0 nova_compute[250018]:         <nova:vcpus>1</nova:vcpus>
Jan 20 14:25:13 compute-0 nova_compute[250018]:       </nova:flavor>
Jan 20 14:25:13 compute-0 nova_compute[250018]:       <nova:owner>
Jan 20 14:25:13 compute-0 nova_compute[250018]:         <nova:user uuid="8ea9f3cd2cbb462a8ecbb488e6a1a25d">tempest-UnshelveToHostMultiNodesTest-997401309-project-member</nova:user>
Jan 20 14:25:13 compute-0 nova_compute[250018]:         <nova:project uuid="14ebcff06a484899a9725832f1eddfdf">tempest-UnshelveToHostMultiNodesTest-997401309</nova:project>
Jan 20 14:25:13 compute-0 nova_compute[250018]:       </nova:owner>
Jan 20 14:25:13 compute-0 nova_compute[250018]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 14:25:13 compute-0 nova_compute[250018]:       <nova:ports/>
Jan 20 14:25:13 compute-0 nova_compute[250018]:     </nova:instance>
Jan 20 14:25:13 compute-0 nova_compute[250018]:   </metadata>
Jan 20 14:25:13 compute-0 nova_compute[250018]:   <sysinfo type="smbios">
Jan 20 14:25:13 compute-0 nova_compute[250018]:     <system>
Jan 20 14:25:13 compute-0 nova_compute[250018]:       <entry name="manufacturer">RDO</entry>
Jan 20 14:25:13 compute-0 nova_compute[250018]:       <entry name="product">OpenStack Compute</entry>
Jan 20 14:25:13 compute-0 nova_compute[250018]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 14:25:13 compute-0 nova_compute[250018]:       <entry name="serial">6091ab6e-2530-4b48-b482-00867d3c66c5</entry>
Jan 20 14:25:13 compute-0 nova_compute[250018]:       <entry name="uuid">6091ab6e-2530-4b48-b482-00867d3c66c5</entry>
Jan 20 14:25:13 compute-0 nova_compute[250018]:       <entry name="family">Virtual Machine</entry>
Jan 20 14:25:13 compute-0 nova_compute[250018]:     </system>
Jan 20 14:25:13 compute-0 nova_compute[250018]:   </sysinfo>
Jan 20 14:25:13 compute-0 nova_compute[250018]:   <os>
Jan 20 14:25:13 compute-0 nova_compute[250018]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 14:25:13 compute-0 nova_compute[250018]:     <boot dev="hd"/>
Jan 20 14:25:13 compute-0 nova_compute[250018]:     <smbios mode="sysinfo"/>
Jan 20 14:25:13 compute-0 nova_compute[250018]:   </os>
Jan 20 14:25:13 compute-0 nova_compute[250018]:   <features>
Jan 20 14:25:13 compute-0 nova_compute[250018]:     <acpi/>
Jan 20 14:25:13 compute-0 nova_compute[250018]:     <apic/>
Jan 20 14:25:13 compute-0 nova_compute[250018]:     <vmcoreinfo/>
Jan 20 14:25:13 compute-0 nova_compute[250018]:   </features>
Jan 20 14:25:13 compute-0 nova_compute[250018]:   <clock offset="utc">
Jan 20 14:25:13 compute-0 nova_compute[250018]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 14:25:13 compute-0 nova_compute[250018]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 14:25:13 compute-0 nova_compute[250018]:     <timer name="hpet" present="no"/>
Jan 20 14:25:13 compute-0 nova_compute[250018]:   </clock>
Jan 20 14:25:13 compute-0 nova_compute[250018]:   <cpu mode="custom" match="exact">
Jan 20 14:25:13 compute-0 nova_compute[250018]:     <model>Nehalem</model>
Jan 20 14:25:13 compute-0 nova_compute[250018]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 14:25:13 compute-0 nova_compute[250018]:   </cpu>
Jan 20 14:25:13 compute-0 nova_compute[250018]:   <devices>
Jan 20 14:25:13 compute-0 nova_compute[250018]:     <disk type="network" device="disk">
Jan 20 14:25:13 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 14:25:13 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/6091ab6e-2530-4b48-b482-00867d3c66c5_disk">
Jan 20 14:25:13 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:25:13 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:25:13 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:25:13 compute-0 nova_compute[250018]:       </source>
Jan 20 14:25:13 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 14:25:13 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:25:13 compute-0 nova_compute[250018]:       </auth>
Jan 20 14:25:13 compute-0 nova_compute[250018]:       <target dev="vda" bus="virtio"/>
Jan 20 14:25:13 compute-0 nova_compute[250018]:     </disk>
Jan 20 14:25:13 compute-0 nova_compute[250018]:     <disk type="network" device="cdrom">
Jan 20 14:25:13 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 14:25:13 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/6091ab6e-2530-4b48-b482-00867d3c66c5_disk.config">
Jan 20 14:25:13 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:25:13 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:25:13 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:25:13 compute-0 nova_compute[250018]:       </source>
Jan 20 14:25:13 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 14:25:13 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:25:13 compute-0 nova_compute[250018]:       </auth>
Jan 20 14:25:13 compute-0 nova_compute[250018]:       <target dev="sda" bus="sata"/>
Jan 20 14:25:13 compute-0 nova_compute[250018]:     </disk>
Jan 20 14:25:13 compute-0 nova_compute[250018]:     <serial type="pty">
Jan 20 14:25:13 compute-0 nova_compute[250018]:       <log file="/var/lib/nova/instances/6091ab6e-2530-4b48-b482-00867d3c66c5/console.log" append="off"/>
Jan 20 14:25:13 compute-0 nova_compute[250018]:     </serial>
Jan 20 14:25:13 compute-0 nova_compute[250018]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 14:25:13 compute-0 nova_compute[250018]:     <video>
Jan 20 14:25:13 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 14:25:13 compute-0 nova_compute[250018]:     </video>
Jan 20 14:25:13 compute-0 nova_compute[250018]:     <input type="tablet" bus="usb"/>
Jan 20 14:25:13 compute-0 nova_compute[250018]:     <rng model="virtio">
Jan 20 14:25:13 compute-0 nova_compute[250018]:       <backend model="random">/dev/urandom</backend>
Jan 20 14:25:13 compute-0 nova_compute[250018]:     </rng>
Jan 20 14:25:13 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root"/>
Jan 20 14:25:13 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:25:13 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:25:13 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:25:13 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:25:13 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:25:13 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:25:13 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:25:13 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:25:13 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:25:13 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:25:13 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:25:13 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:25:13 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:25:13 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:25:13 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:25:13 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:25:13 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:25:13 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:25:13 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:25:13 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:25:13 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:25:13 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:25:13 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:25:13 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:25:13 compute-0 nova_compute[250018]:     <controller type="usb" index="0"/>
Jan 20 14:25:13 compute-0 nova_compute[250018]:     <memballoon model="virtio">
Jan 20 14:25:13 compute-0 nova_compute[250018]:       <stats period="10"/>
Jan 20 14:25:13 compute-0 nova_compute[250018]:     </memballoon>
Jan 20 14:25:13 compute-0 nova_compute[250018]:   </devices>
Jan 20 14:25:13 compute-0 nova_compute[250018]: </domain>
Jan 20 14:25:13 compute-0 nova_compute[250018]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 14:25:14 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:25:14 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3924992509' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:25:14 compute-0 nova_compute[250018]: 2026-01-20 14:25:14.152 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.993s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:25:14 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:25:14 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:25:14 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:25:14.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:25:14 compute-0 nova_compute[250018]: 2026-01-20 14:25:14.478 250022 DEBUG nova.virt.libvirt.driver [None req-865dd36c-a94a-404d-ae0a-41c626c37aae 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:25:14 compute-0 nova_compute[250018]: 2026-01-20 14:25:14.479 250022 DEBUG nova.virt.libvirt.driver [None req-865dd36c-a94a-404d-ae0a-41c626c37aae 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:25:14 compute-0 nova_compute[250018]: 2026-01-20 14:25:14.480 250022 INFO nova.virt.libvirt.driver [None req-865dd36c-a94a-404d-ae0a-41c626c37aae 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] [instance: 6091ab6e-2530-4b48-b482-00867d3c66c5] Using config drive
Jan 20 14:25:14 compute-0 nova_compute[250018]: 2026-01-20 14:25:14.517 250022 DEBUG nova.storage.rbd_utils [None req-865dd36c-a94a-404d-ae0a-41c626c37aae 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] rbd image 6091ab6e-2530-4b48-b482-00867d3c66c5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:25:14 compute-0 nova_compute[250018]: 2026-01-20 14:25:14.528 250022 DEBUG nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:25:14 compute-0 nova_compute[250018]: 2026-01-20 14:25:14.528 250022 DEBUG nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:25:14 compute-0 nova_compute[250018]: 2026-01-20 14:25:14.723 250022 WARNING nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:25:14 compute-0 nova_compute[250018]: 2026-01-20 14:25:14.725 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4811MB free_disk=20.94263458251953GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 14:25:14 compute-0 nova_compute[250018]: 2026-01-20 14:25:14.726 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:25:14 compute-0 nova_compute[250018]: 2026-01-20 14:25:14.726 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:25:14 compute-0 nova_compute[250018]: 2026-01-20 14:25:14.850 250022 INFO nova.virt.libvirt.driver [None req-865dd36c-a94a-404d-ae0a-41c626c37aae 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] [instance: 6091ab6e-2530-4b48-b482-00867d3c66c5] Creating config drive at /var/lib/nova/instances/6091ab6e-2530-4b48-b482-00867d3c66c5/disk.config
Jan 20 14:25:14 compute-0 nova_compute[250018]: 2026-01-20 14:25:14.859 250022 DEBUG oslo_concurrency.processutils [None req-865dd36c-a94a-404d-ae0a-41c626c37aae 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6091ab6e-2530-4b48-b482-00867d3c66c5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdqozq8yh execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:25:14 compute-0 nova_compute[250018]: 2026-01-20 14:25:14.945 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:25:14 compute-0 nova_compute[250018]: 2026-01-20 14:25:14.948 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:25:14 compute-0 nova_compute[250018]: 2026-01-20 14:25:14.949 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5005 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Jan 20 14:25:14 compute-0 nova_compute[250018]: 2026-01-20 14:25:14.949 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 20 14:25:14 compute-0 nova_compute[250018]: 2026-01-20 14:25:14.986 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:25:14 compute-0 nova_compute[250018]: 2026-01-20 14:25:14.988 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 20 14:25:15 compute-0 nova_compute[250018]: 2026-01-20 14:25:15.007 250022 DEBUG oslo_concurrency.processutils [None req-865dd36c-a94a-404d-ae0a-41c626c37aae 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6091ab6e-2530-4b48-b482-00867d3c66c5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdqozq8yh" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:25:15 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:25:15 compute-0 nova_compute[250018]: 2026-01-20 14:25:15.176 250022 DEBUG nova.storage.rbd_utils [None req-865dd36c-a94a-404d-ae0a-41c626c37aae 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] rbd image 6091ab6e-2530-4b48-b482-00867d3c66c5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:25:15 compute-0 nova_compute[250018]: 2026-01-20 14:25:15.181 250022 DEBUG oslo_concurrency.processutils [None req-865dd36c-a94a-404d-ae0a-41c626c37aae 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6091ab6e-2530-4b48-b482-00867d3c66c5/disk.config 6091ab6e-2530-4b48-b482-00867d3c66c5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:25:15 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1014: 321 pgs: 321 active+clean; 234 MiB data, 355 MiB used, 21 GiB / 21 GiB avail; 330 KiB/s rd, 3.6 MiB/s wr, 82 op/s
Jan 20 14:25:15 compute-0 nova_compute[250018]: 2026-01-20 14:25:15.386 250022 DEBUG oslo_concurrency.processutils [None req-865dd36c-a94a-404d-ae0a-41c626c37aae 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6091ab6e-2530-4b48-b482-00867d3c66c5/disk.config 6091ab6e-2530-4b48-b482-00867d3c66c5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.204s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:25:15 compute-0 nova_compute[250018]: 2026-01-20 14:25:15.387 250022 INFO nova.virt.libvirt.driver [None req-865dd36c-a94a-404d-ae0a-41c626c37aae 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] [instance: 6091ab6e-2530-4b48-b482-00867d3c66c5] Deleting local config drive /var/lib/nova/instances/6091ab6e-2530-4b48-b482-00867d3c66c5/disk.config because it was imported into RBD.
Jan 20 14:25:15 compute-0 systemd-machined[216401]: New machine qemu-5-instance-0000000e.
Jan 20 14:25:15 compute-0 systemd[1]: Started Virtual Machine qemu-5-instance-0000000e.
Jan 20 14:25:15 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:25:15 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:25:15 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:25:15.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:25:15 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/713538746' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:25:15 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3183489613' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:25:15 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/3926975021' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:25:15 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3277421869' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:25:15 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/1892182925' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:25:15 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/1892182925' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:25:16 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:25:16 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:25:16 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:25:16.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:25:17 compute-0 sudo[260548]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:25:17 compute-0 sudo[260548]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:25:17 compute-0 sudo[260548]: pam_unix(sudo:session): session closed for user root
Jan 20 14:25:17 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1015: 321 pgs: 321 active+clean; 293 MiB data, 391 MiB used, 21 GiB / 21 GiB avail; 361 KiB/s rd, 5.7 MiB/s wr, 123 op/s
Jan 20 14:25:17 compute-0 sudo[260573]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:25:17 compute-0 sudo[260573]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:25:17 compute-0 sudo[260573]: pam_unix(sudo:session): session closed for user root
Jan 20 14:25:17 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:25:17 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:25:17 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:25:17.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:25:18 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:25:18 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:25:18 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:25:18.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:25:18 compute-0 nova_compute[250018]: 2026-01-20 14:25:18.650 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Instance fcfc64da-5468-42a5-bf34-daa6db48df22 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 14:25:18 compute-0 nova_compute[250018]: 2026-01-20 14:25:18.650 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Instance 6091ab6e-2530-4b48-b482-00867d3c66c5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 14:25:18 compute-0 nova_compute[250018]: 2026-01-20 14:25:18.651 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 14:25:18 compute-0 nova_compute[250018]: 2026-01-20 14:25:18.651 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 14:25:18 compute-0 nova_compute[250018]: 2026-01-20 14:25:18.980 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:25:19 compute-0 ceph-mon[74360]: pgmap v1013: 321 pgs: 321 active+clean; 206 MiB data, 341 MiB used, 21 GiB / 21 GiB avail; 616 KiB/s rd, 2.3 MiB/s wr, 75 op/s
Jan 20 14:25:19 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3924992509' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:25:19 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/3824416908' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:25:19 compute-0 ceph-mon[74360]: pgmap v1014: 321 pgs: 321 active+clean; 234 MiB data, 355 MiB used, 21 GiB / 21 GiB avail; 330 KiB/s rd, 3.6 MiB/s wr, 82 op/s
Jan 20 14:25:19 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/132688509' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:25:19 compute-0 ceph-mon[74360]: pgmap v1015: 321 pgs: 321 active+clean; 293 MiB data, 391 MiB used, 21 GiB / 21 GiB avail; 361 KiB/s rd, 5.7 MiB/s wr, 123 op/s
Jan 20 14:25:19 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1016: 321 pgs: 321 active+clean; 293 MiB data, 391 MiB used, 21 GiB / 21 GiB avail; 363 KiB/s rd, 5.7 MiB/s wr, 126 op/s
Jan 20 14:25:19 compute-0 nova_compute[250018]: 2026-01-20 14:25:19.418 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768919119.4179082, 6091ab6e-2530-4b48-b482-00867d3c66c5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:25:19 compute-0 nova_compute[250018]: 2026-01-20 14:25:19.420 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 6091ab6e-2530-4b48-b482-00867d3c66c5] VM Resumed (Lifecycle Event)
Jan 20 14:25:19 compute-0 nova_compute[250018]: 2026-01-20 14:25:19.425 250022 DEBUG nova.compute.manager [None req-865dd36c-a94a-404d-ae0a-41c626c37aae 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] [instance: 6091ab6e-2530-4b48-b482-00867d3c66c5] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 14:25:19 compute-0 nova_compute[250018]: 2026-01-20 14:25:19.426 250022 DEBUG nova.virt.libvirt.driver [None req-865dd36c-a94a-404d-ae0a-41c626c37aae 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] [instance: 6091ab6e-2530-4b48-b482-00867d3c66c5] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 14:25:19 compute-0 nova_compute[250018]: 2026-01-20 14:25:19.432 250022 INFO nova.virt.libvirt.driver [-] [instance: 6091ab6e-2530-4b48-b482-00867d3c66c5] Instance spawned successfully.
Jan 20 14:25:19 compute-0 nova_compute[250018]: 2026-01-20 14:25:19.432 250022 DEBUG nova.virt.libvirt.driver [None req-865dd36c-a94a-404d-ae0a-41c626c37aae 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] [instance: 6091ab6e-2530-4b48-b482-00867d3c66c5] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 14:25:19 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:25:19 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:25:19 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:25:19.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:25:19 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:25:19 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/384893503' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:25:19 compute-0 nova_compute[250018]: 2026-01-20 14:25:19.548 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.568s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:25:19 compute-0 nova_compute[250018]: 2026-01-20 14:25:19.555 250022 DEBUG nova.compute.provider_tree [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:25:19 compute-0 nova_compute[250018]: 2026-01-20 14:25:19.816 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 6091ab6e-2530-4b48-b482-00867d3c66c5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:25:19 compute-0 nova_compute[250018]: 2026-01-20 14:25:19.823 250022 DEBUG nova.virt.libvirt.driver [None req-865dd36c-a94a-404d-ae0a-41c626c37aae 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] [instance: 6091ab6e-2530-4b48-b482-00867d3c66c5] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:25:19 compute-0 nova_compute[250018]: 2026-01-20 14:25:19.823 250022 DEBUG nova.virt.libvirt.driver [None req-865dd36c-a94a-404d-ae0a-41c626c37aae 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] [instance: 6091ab6e-2530-4b48-b482-00867d3c66c5] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:25:19 compute-0 nova_compute[250018]: 2026-01-20 14:25:19.824 250022 DEBUG nova.virt.libvirt.driver [None req-865dd36c-a94a-404d-ae0a-41c626c37aae 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] [instance: 6091ab6e-2530-4b48-b482-00867d3c66c5] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:25:19 compute-0 nova_compute[250018]: 2026-01-20 14:25:19.824 250022 DEBUG nova.virt.libvirt.driver [None req-865dd36c-a94a-404d-ae0a-41c626c37aae 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] [instance: 6091ab6e-2530-4b48-b482-00867d3c66c5] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:25:19 compute-0 nova_compute[250018]: 2026-01-20 14:25:19.825 250022 DEBUG nova.virt.libvirt.driver [None req-865dd36c-a94a-404d-ae0a-41c626c37aae 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] [instance: 6091ab6e-2530-4b48-b482-00867d3c66c5] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:25:19 compute-0 nova_compute[250018]: 2026-01-20 14:25:19.825 250022 DEBUG nova.virt.libvirt.driver [None req-865dd36c-a94a-404d-ae0a-41c626c37aae 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] [instance: 6091ab6e-2530-4b48-b482-00867d3c66c5] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:25:19 compute-0 nova_compute[250018]: 2026-01-20 14:25:19.829 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 6091ab6e-2530-4b48-b482-00867d3c66c5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:25:19 compute-0 nova_compute[250018]: 2026-01-20 14:25:19.986 250022 DEBUG nova.scheduler.client.report [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:25:19 compute-0 nova_compute[250018]: 2026-01-20 14:25:19.993 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:25:20 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:25:20 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/2708667634' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:25:20 compute-0 ceph-mon[74360]: pgmap v1016: 321 pgs: 321 active+clean; 293 MiB data, 391 MiB used, 21 GiB / 21 GiB avail; 363 KiB/s rd, 5.7 MiB/s wr, 126 op/s
Jan 20 14:25:20 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/384893503' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:25:20 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:25:20 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:25:20 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:25:20.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:25:20 compute-0 nova_compute[250018]: 2026-01-20 14:25:20.821 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 6091ab6e-2530-4b48-b482-00867d3c66c5] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:25:20 compute-0 nova_compute[250018]: 2026-01-20 14:25:20.822 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768919119.4242074, 6091ab6e-2530-4b48-b482-00867d3c66c5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:25:20 compute-0 nova_compute[250018]: 2026-01-20 14:25:20.822 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 6091ab6e-2530-4b48-b482-00867d3c66c5] VM Started (Lifecycle Event)
Jan 20 14:25:20 compute-0 nova_compute[250018]: 2026-01-20 14:25:20.859 250022 DEBUG nova.network.neutron [None req-598bd51e-d153-439a-badf-cd30f9b8e3fa 57d58248fa3b44579c14396dca4a2199 ef783a3b5dd3446faf947d627c64c5da - - default default] [instance: fcfc64da-5468-42a5-bf34-daa6db48df22] Successfully updated port: 89adf3b7-65a1-479e-bc6d-0f86de206593 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 20 14:25:20 compute-0 nova_compute[250018]: 2026-01-20 14:25:20.945 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 14:25:20 compute-0 nova_compute[250018]: 2026-01-20 14:25:20.946 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 6.219s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:25:20 compute-0 nova_compute[250018]: 2026-01-20 14:25:20.953 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 6091ab6e-2530-4b48-b482-00867d3c66c5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:25:20 compute-0 nova_compute[250018]: 2026-01-20 14:25:20.956 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 6091ab6e-2530-4b48-b482-00867d3c66c5] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:25:21 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1017: 321 pgs: 321 active+clean; 293 MiB data, 391 MiB used, 21 GiB / 21 GiB avail; 1.0 MiB/s rd, 5.7 MiB/s wr, 148 op/s
Jan 20 14:25:21 compute-0 nova_compute[250018]: 2026-01-20 14:25:21.383 250022 INFO nova.compute.manager [None req-865dd36c-a94a-404d-ae0a-41c626c37aae 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] [instance: 6091ab6e-2530-4b48-b482-00867d3c66c5] Took 9.67 seconds to spawn the instance on the hypervisor.
Jan 20 14:25:21 compute-0 nova_compute[250018]: 2026-01-20 14:25:21.383 250022 DEBUG nova.compute.manager [None req-865dd36c-a94a-404d-ae0a-41c626c37aae 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] [instance: 6091ab6e-2530-4b48-b482-00867d3c66c5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:25:21 compute-0 nova_compute[250018]: 2026-01-20 14:25:21.393 250022 DEBUG nova.compute.manager [req-6b1558f1-4e04-401e-ae86-9f22d5af4c13 req-02038e23-0f43-4ce6-ba6d-057ccd2bf771 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: fcfc64da-5468-42a5-bf34-daa6db48df22] Received event network-changed-89adf3b7-65a1-479e-bc6d-0f86de206593 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:25:21 compute-0 nova_compute[250018]: 2026-01-20 14:25:21.393 250022 DEBUG nova.compute.manager [req-6b1558f1-4e04-401e-ae86-9f22d5af4c13 req-02038e23-0f43-4ce6-ba6d-057ccd2bf771 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: fcfc64da-5468-42a5-bf34-daa6db48df22] Refreshing instance network info cache due to event network-changed-89adf3b7-65a1-479e-bc6d-0f86de206593. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:25:21 compute-0 nova_compute[250018]: 2026-01-20 14:25:21.394 250022 DEBUG oslo_concurrency.lockutils [req-6b1558f1-4e04-401e-ae86-9f22d5af4c13 req-02038e23-0f43-4ce6-ba6d-057ccd2bf771 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-fcfc64da-5468-42a5-bf34-daa6db48df22" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:25:21 compute-0 nova_compute[250018]: 2026-01-20 14:25:21.394 250022 DEBUG oslo_concurrency.lockutils [req-6b1558f1-4e04-401e-ae86-9f22d5af4c13 req-02038e23-0f43-4ce6-ba6d-057ccd2bf771 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-fcfc64da-5468-42a5-bf34-daa6db48df22" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:25:21 compute-0 nova_compute[250018]: 2026-01-20 14:25:21.395 250022 DEBUG nova.network.neutron [req-6b1558f1-4e04-401e-ae86-9f22d5af4c13 req-02038e23-0f43-4ce6-ba6d-057ccd2bf771 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: fcfc64da-5468-42a5-bf34-daa6db48df22] Refreshing network info cache for port 89adf3b7-65a1-479e-bc6d-0f86de206593 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:25:21 compute-0 nova_compute[250018]: 2026-01-20 14:25:21.428 250022 DEBUG oslo_concurrency.lockutils [None req-598bd51e-d153-439a-badf-cd30f9b8e3fa 57d58248fa3b44579c14396dca4a2199 ef783a3b5dd3446faf947d627c64c5da - - default default] Acquiring lock "refresh_cache-fcfc64da-5468-42a5-bf34-daa6db48df22" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:25:21 compute-0 nova_compute[250018]: 2026-01-20 14:25:21.431 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 6091ab6e-2530-4b48-b482-00867d3c66c5] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:25:21 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:25:21 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:25:21 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:25:21.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:25:21 compute-0 nova_compute[250018]: 2026-01-20 14:25:21.610 250022 INFO nova.compute.manager [None req-865dd36c-a94a-404d-ae0a-41c626c37aae 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] [instance: 6091ab6e-2530-4b48-b482-00867d3c66c5] Took 11.91 seconds to build instance.
Jan 20 14:25:21 compute-0 nova_compute[250018]: 2026-01-20 14:25:21.724 250022 DEBUG oslo_concurrency.lockutils [None req-865dd36c-a94a-404d-ae0a-41c626c37aae 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] Lock "6091ab6e-2530-4b48-b482-00867d3c66c5" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.152s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:25:21 compute-0 nova_compute[250018]: 2026-01-20 14:25:21.849 250022 DEBUG nova.network.neutron [req-6b1558f1-4e04-401e-ae86-9f22d5af4c13 req-02038e23-0f43-4ce6-ba6d-057ccd2bf771 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: fcfc64da-5468-42a5-bf34-daa6db48df22] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 14:25:22 compute-0 ceph-mon[74360]: pgmap v1017: 321 pgs: 321 active+clean; 293 MiB data, 391 MiB used, 21 GiB / 21 GiB avail; 1.0 MiB/s rd, 5.7 MiB/s wr, 148 op/s
Jan 20 14:25:22 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:25:22 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:25:22 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:25:22.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:25:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:25:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:25:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:25:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:25:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:25:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:25:22 compute-0 nova_compute[250018]: 2026-01-20 14:25:22.944 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:25:22 compute-0 nova_compute[250018]: 2026-01-20 14:25:22.945 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 14:25:23 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1018: 321 pgs: 321 active+clean; 293 MiB data, 391 MiB used, 21 GiB / 21 GiB avail; 1.8 MiB/s rd, 5.0 MiB/s wr, 166 op/s
Jan 20 14:25:23 compute-0 nova_compute[250018]: 2026-01-20 14:25:23.230 250022 DEBUG nova.network.neutron [req-6b1558f1-4e04-401e-ae86-9f22d5af4c13 req-02038e23-0f43-4ce6-ba6d-057ccd2bf771 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: fcfc64da-5468-42a5-bf34-daa6db48df22] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:25:23 compute-0 sudo[260658]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:25:23 compute-0 sudo[260658]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:25:23 compute-0 sudo[260658]: pam_unix(sudo:session): session closed for user root
Jan 20 14:25:23 compute-0 sudo[260683]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:25:23 compute-0 sudo[260683]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:25:23 compute-0 sudo[260683]: pam_unix(sudo:session): session closed for user root
Jan 20 14:25:23 compute-0 nova_compute[250018]: 2026-01-20 14:25:23.427 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 20 14:25:23 compute-0 nova_compute[250018]: 2026-01-20 14:25:23.428 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:25:23 compute-0 nova_compute[250018]: 2026-01-20 14:25:23.428 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:25:23 compute-0 nova_compute[250018]: 2026-01-20 14:25:23.428 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:25:23 compute-0 nova_compute[250018]: 2026-01-20 14:25:23.428 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 14:25:23 compute-0 sudo[260708]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:25:23 compute-0 sudo[260708]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:25:23 compute-0 sudo[260708]: pam_unix(sudo:session): session closed for user root
Jan 20 14:25:23 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:25:23 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:25:23 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:25:23.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:25:23 compute-0 sudo[260733]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 14:25:23 compute-0 sudo[260733]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:25:23 compute-0 nova_compute[250018]: 2026-01-20 14:25:23.571 250022 DEBUG oslo_concurrency.lockutils [req-6b1558f1-4e04-401e-ae86-9f22d5af4c13 req-02038e23-0f43-4ce6-ba6d-057ccd2bf771 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-fcfc64da-5468-42a5-bf34-daa6db48df22" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:25:23 compute-0 nova_compute[250018]: 2026-01-20 14:25:23.571 250022 DEBUG oslo_concurrency.lockutils [None req-598bd51e-d153-439a-badf-cd30f9b8e3fa 57d58248fa3b44579c14396dca4a2199 ef783a3b5dd3446faf947d627c64c5da - - default default] Acquired lock "refresh_cache-fcfc64da-5468-42a5-bf34-daa6db48df22" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:25:23 compute-0 nova_compute[250018]: 2026-01-20 14:25:23.572 250022 DEBUG nova.network.neutron [None req-598bd51e-d153-439a-badf-cd30f9b8e3fa 57d58248fa3b44579c14396dca4a2199 ef783a3b5dd3446faf947d627c64c5da - - default default] [instance: fcfc64da-5468-42a5-bf34-daa6db48df22] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:25:23 compute-0 sudo[260733]: pam_unix(sudo:session): session closed for user root
Jan 20 14:25:24 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 14:25:24 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:25:24 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 20 14:25:24 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:25:24 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 20 14:25:24 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:25:24 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev bce672ab-6e32-4b49-95ef-5777337d4a84 does not exist
Jan 20 14:25:24 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 1df47aea-285b-45c1-923b-0ca54b36cbe2 does not exist
Jan 20 14:25:24 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev eeb1e13b-8cb7-4ef0-817c-21b1df9629b3 does not exist
Jan 20 14:25:24 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 20 14:25:24 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:25:24 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 20 14:25:24 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:25:24 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 14:25:24 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:25:24 compute-0 sudo[260790]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:25:24 compute-0 sudo[260790]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:25:24 compute-0 sudo[260790]: pam_unix(sudo:session): session closed for user root
Jan 20 14:25:24 compute-0 sudo[260815]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:25:24 compute-0 sudo[260815]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:25:24 compute-0 sudo[260815]: pam_unix(sudo:session): session closed for user root
Jan 20 14:25:24 compute-0 ceph-mon[74360]: pgmap v1018: 321 pgs: 321 active+clean; 293 MiB data, 391 MiB used, 21 GiB / 21 GiB avail; 1.8 MiB/s rd, 5.0 MiB/s wr, 166 op/s
Jan 20 14:25:24 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:25:24 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:25:24 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:25:24 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:25:24 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:25:24 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:25:24 compute-0 sudo[260840]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:25:24 compute-0 sudo[260840]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:25:24 compute-0 sudo[260840]: pam_unix(sudo:session): session closed for user root
Jan 20 14:25:24 compute-0 sudo[260865]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 14:25:24 compute-0 sudo[260865]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:25:24 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:25:24 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:25:24 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:25:24.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:25:24 compute-0 podman[260932]: 2026-01-20 14:25:24.654932534 +0000 UTC m=+0.037852134 container create 9002e7f4176849bbf50229abd3586abd2366edc1ad1ef940cf38f79f07e8cd80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_moore, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 14:25:24 compute-0 systemd[1]: Started libpod-conmon-9002e7f4176849bbf50229abd3586abd2366edc1ad1ef940cf38f79f07e8cd80.scope.
Jan 20 14:25:24 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:25:24 compute-0 podman[260932]: 2026-01-20 14:25:24.638217215 +0000 UTC m=+0.021136825 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:25:24 compute-0 podman[260932]: 2026-01-20 14:25:24.7390105 +0000 UTC m=+0.121930100 container init 9002e7f4176849bbf50229abd3586abd2366edc1ad1ef940cf38f79f07e8cd80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_moore, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default)
Jan 20 14:25:24 compute-0 podman[260932]: 2026-01-20 14:25:24.753566731 +0000 UTC m=+0.136486361 container start 9002e7f4176849bbf50229abd3586abd2366edc1ad1ef940cf38f79f07e8cd80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_moore, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:25:24 compute-0 thirsty_moore[260948]: 167 167
Jan 20 14:25:24 compute-0 systemd[1]: libpod-9002e7f4176849bbf50229abd3586abd2366edc1ad1ef940cf38f79f07e8cd80.scope: Deactivated successfully.
Jan 20 14:25:24 compute-0 podman[260932]: 2026-01-20 14:25:24.758654805 +0000 UTC m=+0.141574415 container attach 9002e7f4176849bbf50229abd3586abd2366edc1ad1ef940cf38f79f07e8cd80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_moore, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 20 14:25:24 compute-0 podman[260932]: 2026-01-20 14:25:24.75925571 +0000 UTC m=+0.142175310 container died 9002e7f4176849bbf50229abd3586abd2366edc1ad1ef940cf38f79f07e8cd80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_moore, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 20 14:25:24 compute-0 nova_compute[250018]: 2026-01-20 14:25:24.991 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:25:24 compute-0 nova_compute[250018]: 2026-01-20 14:25:24.995 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:25:25 compute-0 nova_compute[250018]: 2026-01-20 14:25:25.017 250022 DEBUG nova.network.neutron [None req-598bd51e-d153-439a-badf-cd30f9b8e3fa 57d58248fa3b44579c14396dca4a2199 ef783a3b5dd3446faf947d627c64c5da - - default default] [instance: fcfc64da-5468-42a5-bf34-daa6db48df22] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 14:25:25 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:25:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-893e6d43ec860c8c99c38ce64a64d5ba7cd4e26b74bbd82381a1204e610cfa36-merged.mount: Deactivated successfully.
Jan 20 14:25:25 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1019: 321 pgs: 321 active+clean; 293 MiB data, 391 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.4 MiB/s wr, 129 op/s
Jan 20 14:25:25 compute-0 podman[260932]: 2026-01-20 14:25:25.269083994 +0000 UTC m=+0.652003594 container remove 9002e7f4176849bbf50229abd3586abd2366edc1ad1ef940cf38f79f07e8cd80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_moore, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:25:25 compute-0 systemd[1]: libpod-conmon-9002e7f4176849bbf50229abd3586abd2366edc1ad1ef940cf38f79f07e8cd80.scope: Deactivated successfully.
Jan 20 14:25:25 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:25:25 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:25:25 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:25:25.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:25:25 compute-0 podman[260975]: 2026-01-20 14:25:25.421621245 +0000 UTC m=+0.023939089 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:25:25 compute-0 podman[260975]: 2026-01-20 14:25:25.516649868 +0000 UTC m=+0.118967732 container create 5b90c67f3de08c36df5f80f9aaeada62f5a865912c7a8e37976c35c267e3fe0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_montalcini, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:25:25 compute-0 systemd[1]: Started libpod-conmon-5b90c67f3de08c36df5f80f9aaeada62f5a865912c7a8e37976c35c267e3fe0e.scope.
Jan 20 14:25:25 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:25:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce6a69140be63a18e7a38f88b3517b0df612693907627657db3f1032e5c0d85d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:25:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce6a69140be63a18e7a38f88b3517b0df612693907627657db3f1032e5c0d85d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:25:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce6a69140be63a18e7a38f88b3517b0df612693907627657db3f1032e5c0d85d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:25:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce6a69140be63a18e7a38f88b3517b0df612693907627657db3f1032e5c0d85d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:25:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce6a69140be63a18e7a38f88b3517b0df612693907627657db3f1032e5c0d85d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 14:25:25 compute-0 podman[260975]: 2026-01-20 14:25:25.729947113 +0000 UTC m=+0.332265017 container init 5b90c67f3de08c36df5f80f9aaeada62f5a865912c7a8e37976c35c267e3fe0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_montalcini, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:25:25 compute-0 podman[260975]: 2026-01-20 14:25:25.740959102 +0000 UTC m=+0.343276926 container start 5b90c67f3de08c36df5f80f9aaeada62f5a865912c7a8e37976c35c267e3fe0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_montalcini, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 20 14:25:25 compute-0 podman[260975]: 2026-01-20 14:25:25.77096842 +0000 UTC m=+0.373286264 container attach 5b90c67f3de08c36df5f80f9aaeada62f5a865912c7a8e37976c35c267e3fe0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_montalcini, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:25:26 compute-0 ceph-mon[74360]: pgmap v1019: 321 pgs: 321 active+clean; 293 MiB data, 391 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.4 MiB/s wr, 129 op/s
Jan 20 14:25:26 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:25:26 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:25:26 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:25:26.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:25:26 compute-0 zen_montalcini[260991]: --> passed data devices: 0 physical, 1 LVM
Jan 20 14:25:26 compute-0 zen_montalcini[260991]: --> relative data size: 1.0
Jan 20 14:25:26 compute-0 zen_montalcini[260991]: --> All data devices are unavailable
Jan 20 14:25:26 compute-0 systemd[1]: libpod-5b90c67f3de08c36df5f80f9aaeada62f5a865912c7a8e37976c35c267e3fe0e.scope: Deactivated successfully.
Jan 20 14:25:26 compute-0 podman[260975]: 2026-01-20 14:25:26.536160142 +0000 UTC m=+1.138477966 container died 5b90c67f3de08c36df5f80f9aaeada62f5a865912c7a8e37976c35c267e3fe0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_montalcini, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 14:25:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-ce6a69140be63a18e7a38f88b3517b0df612693907627657db3f1032e5c0d85d-merged.mount: Deactivated successfully.
Jan 20 14:25:26 compute-0 podman[261007]: 2026-01-20 14:25:26.755612709 +0000 UTC m=+0.190838427 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 20 14:25:26 compute-0 podman[260975]: 2026-01-20 14:25:26.845939328 +0000 UTC m=+1.448257152 container remove 5b90c67f3de08c36df5f80f9aaeada62f5a865912c7a8e37976c35c267e3fe0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_montalcini, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 20 14:25:26 compute-0 sudo[260865]: pam_unix(sudo:session): session closed for user root
Jan 20 14:25:26 compute-0 systemd[1]: libpod-conmon-5b90c67f3de08c36df5f80f9aaeada62f5a865912c7a8e37976c35c267e3fe0e.scope: Deactivated successfully.
Jan 20 14:25:26 compute-0 podman[261031]: 2026-01-20 14:25:26.938201608 +0000 UTC m=+0.265029413 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 14:25:26 compute-0 sudo[261056]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:25:26 compute-0 sudo[261056]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:25:26 compute-0 sudo[261056]: pam_unix(sudo:session): session closed for user root
Jan 20 14:25:27 compute-0 sudo[261090]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:25:27 compute-0 sudo[261090]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:25:27 compute-0 sudo[261090]: pam_unix(sudo:session): session closed for user root
Jan 20 14:25:27 compute-0 sudo[261115]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:25:27 compute-0 sudo[261115]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:25:27 compute-0 sudo[261115]: pam_unix(sudo:session): session closed for user root
Jan 20 14:25:27 compute-0 sudo[261140]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- lvm list --format json
Jan 20 14:25:27 compute-0 sudo[261140]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:25:27 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1020: 321 pgs: 321 active+clean; 293 MiB data, 391 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.2 MiB/s wr, 111 op/s
Jan 20 14:25:27 compute-0 nova_compute[250018]: 2026-01-20 14:25:27.292 250022 DEBUG nova.network.neutron [None req-598bd51e-d153-439a-badf-cd30f9b8e3fa 57d58248fa3b44579c14396dca4a2199 ef783a3b5dd3446faf947d627c64c5da - - default default] [instance: fcfc64da-5468-42a5-bf34-daa6db48df22] Updating instance_info_cache with network_info: [{"id": "89adf3b7-65a1-479e-bc6d-0f86de206593", "address": "fa:16:3e:65:71:f0", "network": {"id": "4d67e270-6232-44c0-a859-2ab75934074d", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1442825192-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef783a3b5dd3446faf947d627c64c5da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap89adf3b7-65", "ovs_interfaceid": "89adf3b7-65a1-479e-bc6d-0f86de206593", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:25:27 compute-0 nova_compute[250018]: 2026-01-20 14:25:27.348 250022 DEBUG oslo_concurrency.lockutils [None req-598bd51e-d153-439a-badf-cd30f9b8e3fa 57d58248fa3b44579c14396dca4a2199 ef783a3b5dd3446faf947d627c64c5da - - default default] Releasing lock "refresh_cache-fcfc64da-5468-42a5-bf34-daa6db48df22" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:25:27 compute-0 nova_compute[250018]: 2026-01-20 14:25:27.348 250022 DEBUG nova.compute.manager [None req-598bd51e-d153-439a-badf-cd30f9b8e3fa 57d58248fa3b44579c14396dca4a2199 ef783a3b5dd3446faf947d627c64c5da - - default default] [instance: fcfc64da-5468-42a5-bf34-daa6db48df22] Instance network_info: |[{"id": "89adf3b7-65a1-479e-bc6d-0f86de206593", "address": "fa:16:3e:65:71:f0", "network": {"id": "4d67e270-6232-44c0-a859-2ab75934074d", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1442825192-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef783a3b5dd3446faf947d627c64c5da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap89adf3b7-65", "ovs_interfaceid": "89adf3b7-65a1-479e-bc6d-0f86de206593", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 20 14:25:27 compute-0 nova_compute[250018]: 2026-01-20 14:25:27.354 250022 DEBUG nova.virt.libvirt.driver [None req-598bd51e-d153-439a-badf-cd30f9b8e3fa 57d58248fa3b44579c14396dca4a2199 ef783a3b5dd3446faf947d627c64c5da - - default default] [instance: fcfc64da-5468-42a5-bf34-daa6db48df22] Start _get_guest_xml network_info=[{"id": "89adf3b7-65a1-479e-bc6d-0f86de206593", "address": "fa:16:3e:65:71:f0", "network": {"id": "4d67e270-6232-44c0-a859-2ab75934074d", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1442825192-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef783a3b5dd3446faf947d627c64c5da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap89adf3b7-65", "ovs_interfaceid": "89adf3b7-65a1-479e-bc6d-0f86de206593", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encrypted': False, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'guest_format': None, 'size': 0, 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 14:25:27 compute-0 nova_compute[250018]: 2026-01-20 14:25:27.362 250022 WARNING nova.virt.libvirt.driver [None req-598bd51e-d153-439a-badf-cd30f9b8e3fa 57d58248fa3b44579c14396dca4a2199 ef783a3b5dd3446faf947d627c64c5da - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:25:27 compute-0 nova_compute[250018]: 2026-01-20 14:25:27.370 250022 DEBUG nova.virt.libvirt.host [None req-598bd51e-d153-439a-badf-cd30f9b8e3fa 57d58248fa3b44579c14396dca4a2199 ef783a3b5dd3446faf947d627c64c5da - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 14:25:27 compute-0 nova_compute[250018]: 2026-01-20 14:25:27.372 250022 DEBUG nova.virt.libvirt.host [None req-598bd51e-d153-439a-badf-cd30f9b8e3fa 57d58248fa3b44579c14396dca4a2199 ef783a3b5dd3446faf947d627c64c5da - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 14:25:27 compute-0 nova_compute[250018]: 2026-01-20 14:25:27.374 250022 DEBUG oslo_concurrency.lockutils [None req-7d010e3b-cb9b-4243-9239-4076b704cbb2 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] Acquiring lock "6091ab6e-2530-4b48-b482-00867d3c66c5" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:25:27 compute-0 nova_compute[250018]: 2026-01-20 14:25:27.375 250022 DEBUG oslo_concurrency.lockutils [None req-7d010e3b-cb9b-4243-9239-4076b704cbb2 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] Lock "6091ab6e-2530-4b48-b482-00867d3c66c5" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:25:27 compute-0 nova_compute[250018]: 2026-01-20 14:25:27.375 250022 INFO nova.compute.manager [None req-7d010e3b-cb9b-4243-9239-4076b704cbb2 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] [instance: 6091ab6e-2530-4b48-b482-00867d3c66c5] Shelving
Jan 20 14:25:27 compute-0 nova_compute[250018]: 2026-01-20 14:25:27.381 250022 DEBUG nova.virt.libvirt.host [None req-598bd51e-d153-439a-badf-cd30f9b8e3fa 57d58248fa3b44579c14396dca4a2199 ef783a3b5dd3446faf947d627c64c5da - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 14:25:27 compute-0 nova_compute[250018]: 2026-01-20 14:25:27.381 250022 DEBUG nova.virt.libvirt.host [None req-598bd51e-d153-439a-badf-cd30f9b8e3fa 57d58248fa3b44579c14396dca4a2199 ef783a3b5dd3446faf947d627c64c5da - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 14:25:27 compute-0 nova_compute[250018]: 2026-01-20 14:25:27.382 250022 DEBUG nova.virt.libvirt.driver [None req-598bd51e-d153-439a-badf-cd30f9b8e3fa 57d58248fa3b44579c14396dca4a2199 ef783a3b5dd3446faf947d627c64c5da - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 14:25:27 compute-0 nova_compute[250018]: 2026-01-20 14:25:27.383 250022 DEBUG nova.virt.hardware [None req-598bd51e-d153-439a-badf-cd30f9b8e3fa 57d58248fa3b44579c14396dca4a2199 ef783a3b5dd3446faf947d627c64c5da - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:24:58Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='2114049963',id=18,is_public=True,memory_mb=128,name='tempest-flavor_with_ephemeral_0-1469248447',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 14:25:27 compute-0 nova_compute[250018]: 2026-01-20 14:25:27.383 250022 DEBUG nova.virt.hardware [None req-598bd51e-d153-439a-badf-cd30f9b8e3fa 57d58248fa3b44579c14396dca4a2199 ef783a3b5dd3446faf947d627c64c5da - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 14:25:27 compute-0 nova_compute[250018]: 2026-01-20 14:25:27.384 250022 DEBUG nova.virt.hardware [None req-598bd51e-d153-439a-badf-cd30f9b8e3fa 57d58248fa3b44579c14396dca4a2199 ef783a3b5dd3446faf947d627c64c5da - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 14:25:27 compute-0 nova_compute[250018]: 2026-01-20 14:25:27.384 250022 DEBUG nova.virt.hardware [None req-598bd51e-d153-439a-badf-cd30f9b8e3fa 57d58248fa3b44579c14396dca4a2199 ef783a3b5dd3446faf947d627c64c5da - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 14:25:27 compute-0 nova_compute[250018]: 2026-01-20 14:25:27.384 250022 DEBUG nova.virt.hardware [None req-598bd51e-d153-439a-badf-cd30f9b8e3fa 57d58248fa3b44579c14396dca4a2199 ef783a3b5dd3446faf947d627c64c5da - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 14:25:27 compute-0 nova_compute[250018]: 2026-01-20 14:25:27.384 250022 DEBUG nova.virt.hardware [None req-598bd51e-d153-439a-badf-cd30f9b8e3fa 57d58248fa3b44579c14396dca4a2199 ef783a3b5dd3446faf947d627c64c5da - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 14:25:27 compute-0 nova_compute[250018]: 2026-01-20 14:25:27.385 250022 DEBUG nova.virt.hardware [None req-598bd51e-d153-439a-badf-cd30f9b8e3fa 57d58248fa3b44579c14396dca4a2199 ef783a3b5dd3446faf947d627c64c5da - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 14:25:27 compute-0 nova_compute[250018]: 2026-01-20 14:25:27.385 250022 DEBUG nova.virt.hardware [None req-598bd51e-d153-439a-badf-cd30f9b8e3fa 57d58248fa3b44579c14396dca4a2199 ef783a3b5dd3446faf947d627c64c5da - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 14:25:27 compute-0 nova_compute[250018]: 2026-01-20 14:25:27.385 250022 DEBUG nova.virt.hardware [None req-598bd51e-d153-439a-badf-cd30f9b8e3fa 57d58248fa3b44579c14396dca4a2199 ef783a3b5dd3446faf947d627c64c5da - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 14:25:27 compute-0 nova_compute[250018]: 2026-01-20 14:25:27.385 250022 DEBUG nova.virt.hardware [None req-598bd51e-d153-439a-badf-cd30f9b8e3fa 57d58248fa3b44579c14396dca4a2199 ef783a3b5dd3446faf947d627c64c5da - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 14:25:27 compute-0 nova_compute[250018]: 2026-01-20 14:25:27.386 250022 DEBUG nova.virt.hardware [None req-598bd51e-d153-439a-badf-cd30f9b8e3fa 57d58248fa3b44579c14396dca4a2199 ef783a3b5dd3446faf947d627c64c5da - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 14:25:27 compute-0 nova_compute[250018]: 2026-01-20 14:25:27.389 250022 DEBUG oslo_concurrency.processutils [None req-598bd51e-d153-439a-badf-cd30f9b8e3fa 57d58248fa3b44579c14396dca4a2199 ef783a3b5dd3446faf947d627c64c5da - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:25:27 compute-0 nova_compute[250018]: 2026-01-20 14:25:27.442 250022 DEBUG nova.virt.libvirt.driver [None req-7d010e3b-cb9b-4243-9239-4076b704cbb2 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] [instance: 6091ab6e-2530-4b48-b482-00867d3c66c5] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Jan 20 14:25:27 compute-0 podman[261206]: 2026-01-20 14:25:27.483985555 +0000 UTC m=+0.043772749 container create de3a175ae05a6fdc1bb4a176f84372e8673d812d89529445c505c90462d5be6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_banzai, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:25:27 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:25:27 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:25:27 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:25:27.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:25:27 compute-0 systemd[1]: Started libpod-conmon-de3a175ae05a6fdc1bb4a176f84372e8673d812d89529445c505c90462d5be6f.scope.
Jan 20 14:25:27 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:25:27 compute-0 podman[261206]: 2026-01-20 14:25:27.462932083 +0000 UTC m=+0.022719287 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:25:27 compute-0 podman[261206]: 2026-01-20 14:25:27.581632067 +0000 UTC m=+0.141419281 container init de3a175ae05a6fdc1bb4a176f84372e8673d812d89529445c505c90462d5be6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_banzai, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:25:27 compute-0 podman[261206]: 2026-01-20 14:25:27.589278557 +0000 UTC m=+0.149065751 container start de3a175ae05a6fdc1bb4a176f84372e8673d812d89529445c505c90462d5be6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_banzai, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 20 14:25:27 compute-0 wizardly_banzai[261241]: 167 167
Jan 20 14:25:27 compute-0 systemd[1]: libpod-de3a175ae05a6fdc1bb4a176f84372e8673d812d89529445c505c90462d5be6f.scope: Deactivated successfully.
Jan 20 14:25:27 compute-0 podman[261206]: 2026-01-20 14:25:27.596414004 +0000 UTC m=+0.156201228 container attach de3a175ae05a6fdc1bb4a176f84372e8673d812d89529445c505c90462d5be6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_banzai, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:25:27 compute-0 podman[261206]: 2026-01-20 14:25:27.596811275 +0000 UTC m=+0.156598469 container died de3a175ae05a6fdc1bb4a176f84372e8673d812d89529445c505c90462d5be6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_banzai, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 20 14:25:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-dfa2a03870958f0d87ef3722ad8c1c21a655c6d269c134a5c83f7706d661a1c6-merged.mount: Deactivated successfully.
Jan 20 14:25:27 compute-0 podman[261206]: 2026-01-20 14:25:27.629180464 +0000 UTC m=+0.188967658 container remove de3a175ae05a6fdc1bb4a176f84372e8673d812d89529445c505c90462d5be6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_banzai, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:25:27 compute-0 systemd[1]: libpod-conmon-de3a175ae05a6fdc1bb4a176f84372e8673d812d89529445c505c90462d5be6f.scope: Deactivated successfully.
Jan 20 14:25:27 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:25:27 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/949782716' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:25:27 compute-0 podman[261265]: 2026-01-20 14:25:27.813786827 +0000 UTC m=+0.044840727 container create 868367928b2940ecca29c633f4f633f3c91fcd7c61db51448c81c00d7946b1e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_golick, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507)
Jan 20 14:25:27 compute-0 nova_compute[250018]: 2026-01-20 14:25:27.816 250022 DEBUG oslo_concurrency.processutils [None req-598bd51e-d153-439a-badf-cd30f9b8e3fa 57d58248fa3b44579c14396dca4a2199 ef783a3b5dd3446faf947d627c64c5da - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:25:27 compute-0 nova_compute[250018]: 2026-01-20 14:25:27.844 250022 DEBUG nova.storage.rbd_utils [None req-598bd51e-d153-439a-badf-cd30f9b8e3fa 57d58248fa3b44579c14396dca4a2199 ef783a3b5dd3446faf947d627c64c5da - - default default] rbd image fcfc64da-5468-42a5-bf34-daa6db48df22_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:25:27 compute-0 nova_compute[250018]: 2026-01-20 14:25:27.847 250022 DEBUG oslo_concurrency.processutils [None req-598bd51e-d153-439a-badf-cd30f9b8e3fa 57d58248fa3b44579c14396dca4a2199 ef783a3b5dd3446faf947d627c64c5da - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:25:27 compute-0 systemd[1]: Started libpod-conmon-868367928b2940ecca29c633f4f633f3c91fcd7c61db51448c81c00d7946b1e1.scope.
Jan 20 14:25:27 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:25:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cc7df910c68fe5f83876de87e3d9013701a841ee28c3a8fd262444df472c407/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:25:27 compute-0 podman[261265]: 2026-01-20 14:25:27.795406525 +0000 UTC m=+0.026460455 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:25:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cc7df910c68fe5f83876de87e3d9013701a841ee28c3a8fd262444df472c407/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:25:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cc7df910c68fe5f83876de87e3d9013701a841ee28c3a8fd262444df472c407/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:25:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cc7df910c68fe5f83876de87e3d9013701a841ee28c3a8fd262444df472c407/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:25:27 compute-0 podman[261265]: 2026-01-20 14:25:27.910236157 +0000 UTC m=+0.141290107 container init 868367928b2940ecca29c633f4f633f3c91fcd7c61db51448c81c00d7946b1e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_golick, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 20 14:25:27 compute-0 podman[261265]: 2026-01-20 14:25:27.917018564 +0000 UTC m=+0.148072464 container start 868367928b2940ecca29c633f4f633f3c91fcd7c61db51448c81c00d7946b1e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_golick, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 20 14:25:27 compute-0 podman[261265]: 2026-01-20 14:25:27.920619029 +0000 UTC m=+0.151672979 container attach 868367928b2940ecca29c633f4f633f3c91fcd7c61db51448c81c00d7946b1e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_golick, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 20 14:25:28 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:25:28 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3180814250' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:25:28 compute-0 nova_compute[250018]: 2026-01-20 14:25:28.270 250022 DEBUG oslo_concurrency.processutils [None req-598bd51e-d153-439a-badf-cd30f9b8e3fa 57d58248fa3b44579c14396dca4a2199 ef783a3b5dd3446faf947d627c64c5da - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:25:28 compute-0 nova_compute[250018]: 2026-01-20 14:25:28.272 250022 DEBUG nova.virt.libvirt.vif [None req-598bd51e-d153-439a-badf-cd30f9b8e3fa 57d58248fa3b44579c14396dca4a2199 ef783a3b5dd3446faf947d627c64c5da - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:25:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-2102451076',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-2102451076',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(18),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-2102451076',id=13,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=18,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPnZ/Cx8vMumXGvEI9547JEyMeRkGznqLk5Xz3oR+TXmoMMxw6ZcUZJSSPx9PRS1PfeH2my6tZBX8mJSWH6Q1mhQIN/hiJECzeN4ewqe8NWMqXUqY2ux8nHjNnGnzhLaeQ==',key_name='tempest-keypair-660220900',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ef783a3b5dd3446faf947d627c64c5da',ramdisk_id='',reservation_id='r-09ngr9a8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-2064998848',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-2064998848-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:25:10Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='57d58248fa3b44579c14396dca4a2199',uuid=fcfc64da-5468-42a5-bf34-daa6db48df22,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "89adf3b7-65a1-479e-bc6d-0f86de206593", "address": "fa:16:3e:65:71:f0", "network": {"id": "4d67e270-6232-44c0-a859-2ab75934074d", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1442825192-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef783a3b5dd3446faf947d627c64c5da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap89adf3b7-65", "ovs_interfaceid": "89adf3b7-65a1-479e-bc6d-0f86de206593", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 14:25:28 compute-0 nova_compute[250018]: 2026-01-20 14:25:28.272 250022 DEBUG nova.network.os_vif_util [None req-598bd51e-d153-439a-badf-cd30f9b8e3fa 57d58248fa3b44579c14396dca4a2199 ef783a3b5dd3446faf947d627c64c5da - - default default] Converting VIF {"id": "89adf3b7-65a1-479e-bc6d-0f86de206593", "address": "fa:16:3e:65:71:f0", "network": {"id": "4d67e270-6232-44c0-a859-2ab75934074d", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1442825192-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef783a3b5dd3446faf947d627c64c5da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap89adf3b7-65", "ovs_interfaceid": "89adf3b7-65a1-479e-bc6d-0f86de206593", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:25:28 compute-0 nova_compute[250018]: 2026-01-20 14:25:28.273 250022 DEBUG nova.network.os_vif_util [None req-598bd51e-d153-439a-badf-cd30f9b8e3fa 57d58248fa3b44579c14396dca4a2199 ef783a3b5dd3446faf947d627c64c5da - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:65:71:f0,bridge_name='br-int',has_traffic_filtering=True,id=89adf3b7-65a1-479e-bc6d-0f86de206593,network=Network(4d67e270-6232-44c0-a859-2ab75934074d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap89adf3b7-65') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:25:28 compute-0 nova_compute[250018]: 2026-01-20 14:25:28.274 250022 DEBUG nova.objects.instance [None req-598bd51e-d153-439a-badf-cd30f9b8e3fa 57d58248fa3b44579c14396dca4a2199 ef783a3b5dd3446faf947d627c64c5da - - default default] Lazy-loading 'pci_devices' on Instance uuid fcfc64da-5468-42a5-bf34-daa6db48df22 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:25:28 compute-0 nova_compute[250018]: 2026-01-20 14:25:28.294 250022 DEBUG nova.virt.libvirt.driver [None req-598bd51e-d153-439a-badf-cd30f9b8e3fa 57d58248fa3b44579c14396dca4a2199 ef783a3b5dd3446faf947d627c64c5da - - default default] [instance: fcfc64da-5468-42a5-bf34-daa6db48df22] End _get_guest_xml xml=<domain type="kvm">
Jan 20 14:25:28 compute-0 nova_compute[250018]:   <uuid>fcfc64da-5468-42a5-bf34-daa6db48df22</uuid>
Jan 20 14:25:28 compute-0 nova_compute[250018]:   <name>instance-0000000d</name>
Jan 20 14:25:28 compute-0 nova_compute[250018]:   <memory>131072</memory>
Jan 20 14:25:28 compute-0 nova_compute[250018]:   <vcpu>1</vcpu>
Jan 20 14:25:28 compute-0 nova_compute[250018]:   <metadata>
Jan 20 14:25:28 compute-0 nova_compute[250018]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 14:25:28 compute-0 nova_compute[250018]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 14:25:28 compute-0 nova_compute[250018]:       <nova:name>tempest-ServersWithSpecificFlavorTestJSON-server-2102451076</nova:name>
Jan 20 14:25:28 compute-0 nova_compute[250018]:       <nova:creationTime>2026-01-20 14:25:27</nova:creationTime>
Jan 20 14:25:28 compute-0 nova_compute[250018]:       <nova:flavor name="tempest-flavor_with_ephemeral_0-1469248447">
Jan 20 14:25:28 compute-0 nova_compute[250018]:         <nova:memory>128</nova:memory>
Jan 20 14:25:28 compute-0 nova_compute[250018]:         <nova:disk>1</nova:disk>
Jan 20 14:25:28 compute-0 nova_compute[250018]:         <nova:swap>0</nova:swap>
Jan 20 14:25:28 compute-0 nova_compute[250018]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 14:25:28 compute-0 nova_compute[250018]:         <nova:vcpus>1</nova:vcpus>
Jan 20 14:25:28 compute-0 nova_compute[250018]:       </nova:flavor>
Jan 20 14:25:28 compute-0 nova_compute[250018]:       <nova:owner>
Jan 20 14:25:28 compute-0 nova_compute[250018]:         <nova:user uuid="57d58248fa3b44579c14396dca4a2199">tempest-ServersWithSpecificFlavorTestJSON-2064998848-project-member</nova:user>
Jan 20 14:25:28 compute-0 nova_compute[250018]:         <nova:project uuid="ef783a3b5dd3446faf947d627c64c5da">tempest-ServersWithSpecificFlavorTestJSON-2064998848</nova:project>
Jan 20 14:25:28 compute-0 nova_compute[250018]:       </nova:owner>
Jan 20 14:25:28 compute-0 nova_compute[250018]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 14:25:28 compute-0 nova_compute[250018]:       <nova:ports>
Jan 20 14:25:28 compute-0 nova_compute[250018]:         <nova:port uuid="89adf3b7-65a1-479e-bc6d-0f86de206593">
Jan 20 14:25:28 compute-0 nova_compute[250018]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Jan 20 14:25:28 compute-0 nova_compute[250018]:         </nova:port>
Jan 20 14:25:28 compute-0 nova_compute[250018]:       </nova:ports>
Jan 20 14:25:28 compute-0 nova_compute[250018]:     </nova:instance>
Jan 20 14:25:28 compute-0 nova_compute[250018]:   </metadata>
Jan 20 14:25:28 compute-0 nova_compute[250018]:   <sysinfo type="smbios">
Jan 20 14:25:28 compute-0 nova_compute[250018]:     <system>
Jan 20 14:25:28 compute-0 nova_compute[250018]:       <entry name="manufacturer">RDO</entry>
Jan 20 14:25:28 compute-0 nova_compute[250018]:       <entry name="product">OpenStack Compute</entry>
Jan 20 14:25:28 compute-0 nova_compute[250018]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 14:25:28 compute-0 nova_compute[250018]:       <entry name="serial">fcfc64da-5468-42a5-bf34-daa6db48df22</entry>
Jan 20 14:25:28 compute-0 nova_compute[250018]:       <entry name="uuid">fcfc64da-5468-42a5-bf34-daa6db48df22</entry>
Jan 20 14:25:28 compute-0 nova_compute[250018]:       <entry name="family">Virtual Machine</entry>
Jan 20 14:25:28 compute-0 nova_compute[250018]:     </system>
Jan 20 14:25:28 compute-0 nova_compute[250018]:   </sysinfo>
Jan 20 14:25:28 compute-0 nova_compute[250018]:   <os>
Jan 20 14:25:28 compute-0 nova_compute[250018]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 14:25:28 compute-0 nova_compute[250018]:     <boot dev="hd"/>
Jan 20 14:25:28 compute-0 nova_compute[250018]:     <smbios mode="sysinfo"/>
Jan 20 14:25:28 compute-0 nova_compute[250018]:   </os>
Jan 20 14:25:28 compute-0 nova_compute[250018]:   <features>
Jan 20 14:25:28 compute-0 nova_compute[250018]:     <acpi/>
Jan 20 14:25:28 compute-0 nova_compute[250018]:     <apic/>
Jan 20 14:25:28 compute-0 nova_compute[250018]:     <vmcoreinfo/>
Jan 20 14:25:28 compute-0 nova_compute[250018]:   </features>
Jan 20 14:25:28 compute-0 nova_compute[250018]:   <clock offset="utc">
Jan 20 14:25:28 compute-0 nova_compute[250018]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 14:25:28 compute-0 nova_compute[250018]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 14:25:28 compute-0 nova_compute[250018]:     <timer name="hpet" present="no"/>
Jan 20 14:25:28 compute-0 nova_compute[250018]:   </clock>
Jan 20 14:25:28 compute-0 nova_compute[250018]:   <cpu mode="custom" match="exact">
Jan 20 14:25:28 compute-0 nova_compute[250018]:     <model>Nehalem</model>
Jan 20 14:25:28 compute-0 nova_compute[250018]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 14:25:28 compute-0 nova_compute[250018]:   </cpu>
Jan 20 14:25:28 compute-0 nova_compute[250018]:   <devices>
Jan 20 14:25:28 compute-0 nova_compute[250018]:     <disk type="network" device="disk">
Jan 20 14:25:28 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 14:25:28 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/fcfc64da-5468-42a5-bf34-daa6db48df22_disk">
Jan 20 14:25:28 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:25:28 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:25:28 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:25:28 compute-0 nova_compute[250018]:       </source>
Jan 20 14:25:28 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 14:25:28 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:25:28 compute-0 nova_compute[250018]:       </auth>
Jan 20 14:25:28 compute-0 nova_compute[250018]:       <target dev="vda" bus="virtio"/>
Jan 20 14:25:28 compute-0 nova_compute[250018]:     </disk>
Jan 20 14:25:28 compute-0 nova_compute[250018]:     <disk type="network" device="cdrom">
Jan 20 14:25:28 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 14:25:28 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/fcfc64da-5468-42a5-bf34-daa6db48df22_disk.config">
Jan 20 14:25:28 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:25:28 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:25:28 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:25:28 compute-0 nova_compute[250018]:       </source>
Jan 20 14:25:28 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 14:25:28 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:25:28 compute-0 nova_compute[250018]:       </auth>
Jan 20 14:25:28 compute-0 nova_compute[250018]:       <target dev="sda" bus="sata"/>
Jan 20 14:25:28 compute-0 nova_compute[250018]:     </disk>
Jan 20 14:25:28 compute-0 nova_compute[250018]:     <interface type="ethernet">
Jan 20 14:25:28 compute-0 nova_compute[250018]:       <mac address="fa:16:3e:65:71:f0"/>
Jan 20 14:25:28 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 14:25:28 compute-0 nova_compute[250018]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 14:25:28 compute-0 nova_compute[250018]:       <mtu size="1442"/>
Jan 20 14:25:28 compute-0 nova_compute[250018]:       <target dev="tap89adf3b7-65"/>
Jan 20 14:25:28 compute-0 nova_compute[250018]:     </interface>
Jan 20 14:25:28 compute-0 nova_compute[250018]:     <serial type="pty">
Jan 20 14:25:28 compute-0 nova_compute[250018]:       <log file="/var/lib/nova/instances/fcfc64da-5468-42a5-bf34-daa6db48df22/console.log" append="off"/>
Jan 20 14:25:28 compute-0 nova_compute[250018]:     </serial>
Jan 20 14:25:28 compute-0 nova_compute[250018]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 14:25:28 compute-0 nova_compute[250018]:     <video>
Jan 20 14:25:28 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 14:25:28 compute-0 nova_compute[250018]:     </video>
Jan 20 14:25:28 compute-0 nova_compute[250018]:     <input type="tablet" bus="usb"/>
Jan 20 14:25:28 compute-0 nova_compute[250018]:     <rng model="virtio">
Jan 20 14:25:28 compute-0 nova_compute[250018]:       <backend model="random">/dev/urandom</backend>
Jan 20 14:25:28 compute-0 nova_compute[250018]:     </rng>
Jan 20 14:25:28 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root"/>
Jan 20 14:25:28 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:25:28 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:25:28 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:25:28 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:25:28 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:25:28 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:25:28 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:25:28 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:25:28 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:25:28 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:25:28 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:25:28 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:25:28 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:25:28 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:25:28 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:25:28 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:25:28 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:25:28 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:25:28 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:25:28 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:25:28 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:25:28 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:25:28 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:25:28 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:25:28 compute-0 nova_compute[250018]:     <controller type="usb" index="0"/>
Jan 20 14:25:28 compute-0 nova_compute[250018]:     <memballoon model="virtio">
Jan 20 14:25:28 compute-0 nova_compute[250018]:       <stats period="10"/>
Jan 20 14:25:28 compute-0 nova_compute[250018]:     </memballoon>
Jan 20 14:25:28 compute-0 nova_compute[250018]:   </devices>
Jan 20 14:25:28 compute-0 nova_compute[250018]: </domain>
Jan 20 14:25:28 compute-0 nova_compute[250018]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 14:25:28 compute-0 nova_compute[250018]: 2026-01-20 14:25:28.301 250022 DEBUG nova.compute.manager [None req-598bd51e-d153-439a-badf-cd30f9b8e3fa 57d58248fa3b44579c14396dca4a2199 ef783a3b5dd3446faf947d627c64c5da - - default default] [instance: fcfc64da-5468-42a5-bf34-daa6db48df22] Preparing to wait for external event network-vif-plugged-89adf3b7-65a1-479e-bc6d-0f86de206593 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 20 14:25:28 compute-0 nova_compute[250018]: 2026-01-20 14:25:28.301 250022 DEBUG oslo_concurrency.lockutils [None req-598bd51e-d153-439a-badf-cd30f9b8e3fa 57d58248fa3b44579c14396dca4a2199 ef783a3b5dd3446faf947d627c64c5da - - default default] Acquiring lock "fcfc64da-5468-42a5-bf34-daa6db48df22-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:25:28 compute-0 nova_compute[250018]: 2026-01-20 14:25:28.301 250022 DEBUG oslo_concurrency.lockutils [None req-598bd51e-d153-439a-badf-cd30f9b8e3fa 57d58248fa3b44579c14396dca4a2199 ef783a3b5dd3446faf947d627c64c5da - - default default] Lock "fcfc64da-5468-42a5-bf34-daa6db48df22-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:25:28 compute-0 nova_compute[250018]: 2026-01-20 14:25:28.302 250022 DEBUG oslo_concurrency.lockutils [None req-598bd51e-d153-439a-badf-cd30f9b8e3fa 57d58248fa3b44579c14396dca4a2199 ef783a3b5dd3446faf947d627c64c5da - - default default] Lock "fcfc64da-5468-42a5-bf34-daa6db48df22-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:25:28 compute-0 nova_compute[250018]: 2026-01-20 14:25:28.302 250022 DEBUG nova.virt.libvirt.vif [None req-598bd51e-d153-439a-badf-cd30f9b8e3fa 57d58248fa3b44579c14396dca4a2199 ef783a3b5dd3446faf947d627c64c5da - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:25:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-2102451076',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-2102451076',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(18),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-2102451076',id=13,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=18,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPnZ/Cx8vMumXGvEI9547JEyMeRkGznqLk5Xz3oR+TXmoMMxw6ZcUZJSSPx9PRS1PfeH2my6tZBX8mJSWH6Q1mhQIN/hiJECzeN4ewqe8NWMqXUqY2ux8nHjNnGnzhLaeQ==',key_name='tempest-keypair-660220900',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ef783a3b5dd3446faf947d627c64c5da',ramdisk_id='',reservation_id='r-09ngr9a8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-2064998848',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-2064998848-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:25:10Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='57d58248fa3b44579c14396dca4a2199',uuid=fcfc64da-5468-42a5-bf34-daa6db48df22,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "89adf3b7-65a1-479e-bc6d-0f86de206593", "address": "fa:16:3e:65:71:f0", "network": {"id": "4d67e270-6232-44c0-a859-2ab75934074d", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1442825192-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, 
"ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef783a3b5dd3446faf947d627c64c5da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap89adf3b7-65", "ovs_interfaceid": "89adf3b7-65a1-479e-bc6d-0f86de206593", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 14:25:28 compute-0 nova_compute[250018]: 2026-01-20 14:25:28.303 250022 DEBUG nova.network.os_vif_util [None req-598bd51e-d153-439a-badf-cd30f9b8e3fa 57d58248fa3b44579c14396dca4a2199 ef783a3b5dd3446faf947d627c64c5da - - default default] Converting VIF {"id": "89adf3b7-65a1-479e-bc6d-0f86de206593", "address": "fa:16:3e:65:71:f0", "network": {"id": "4d67e270-6232-44c0-a859-2ab75934074d", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1442825192-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef783a3b5dd3446faf947d627c64c5da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap89adf3b7-65", "ovs_interfaceid": "89adf3b7-65a1-479e-bc6d-0f86de206593", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:25:28 compute-0 nova_compute[250018]: 2026-01-20 14:25:28.303 250022 DEBUG nova.network.os_vif_util [None req-598bd51e-d153-439a-badf-cd30f9b8e3fa 57d58248fa3b44579c14396dca4a2199 ef783a3b5dd3446faf947d627c64c5da - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:65:71:f0,bridge_name='br-int',has_traffic_filtering=True,id=89adf3b7-65a1-479e-bc6d-0f86de206593,network=Network(4d67e270-6232-44c0-a859-2ab75934074d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap89adf3b7-65') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:25:28 compute-0 nova_compute[250018]: 2026-01-20 14:25:28.304 250022 DEBUG os_vif [None req-598bd51e-d153-439a-badf-cd30f9b8e3fa 57d58248fa3b44579c14396dca4a2199 ef783a3b5dd3446faf947d627c64c5da - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:65:71:f0,bridge_name='br-int',has_traffic_filtering=True,id=89adf3b7-65a1-479e-bc6d-0f86de206593,network=Network(4d67e270-6232-44c0-a859-2ab75934074d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap89adf3b7-65') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 14:25:28 compute-0 nova_compute[250018]: 2026-01-20 14:25:28.305 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:25:28 compute-0 nova_compute[250018]: 2026-01-20 14:25:28.305 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:25:28 compute-0 nova_compute[250018]: 2026-01-20 14:25:28.306 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:25:28 compute-0 nova_compute[250018]: 2026-01-20 14:25:28.310 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:25:28 compute-0 nova_compute[250018]: 2026-01-20 14:25:28.310 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap89adf3b7-65, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:25:28 compute-0 nova_compute[250018]: 2026-01-20 14:25:28.311 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap89adf3b7-65, col_values=(('external_ids', {'iface-id': '89adf3b7-65a1-479e-bc6d-0f86de206593', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:65:71:f0', 'vm-uuid': 'fcfc64da-5468-42a5-bf34-daa6db48df22'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:25:28 compute-0 nova_compute[250018]: 2026-01-20 14:25:28.312 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:25:28 compute-0 nova_compute[250018]: 2026-01-20 14:25:28.314 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:25:28 compute-0 NetworkManager[48960]: <info>  [1768919128.3146] manager: (tap89adf3b7-65): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/27)
Jan 20 14:25:28 compute-0 nova_compute[250018]: 2026-01-20 14:25:28.323 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:25:28 compute-0 nova_compute[250018]: 2026-01-20 14:25:28.324 250022 INFO os_vif [None req-598bd51e-d153-439a-badf-cd30f9b8e3fa 57d58248fa3b44579c14396dca4a2199 ef783a3b5dd3446faf947d627c64c5da - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:65:71:f0,bridge_name='br-int',has_traffic_filtering=True,id=89adf3b7-65a1-479e-bc6d-0f86de206593,network=Network(4d67e270-6232-44c0-a859-2ab75934074d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap89adf3b7-65')
Jan 20 14:25:28 compute-0 ceph-mon[74360]: pgmap v1020: 321 pgs: 321 active+clean; 293 MiB data, 391 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.2 MiB/s wr, 111 op/s
Jan 20 14:25:28 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/949782716' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:25:28 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3180814250' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:25:28 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:25:28 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:25:28 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:25:28.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:25:28 compute-0 nova_compute[250018]: 2026-01-20 14:25:28.395 250022 DEBUG nova.virt.libvirt.driver [None req-598bd51e-d153-439a-badf-cd30f9b8e3fa 57d58248fa3b44579c14396dca4a2199 ef783a3b5dd3446faf947d627c64c5da - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:25:28 compute-0 nova_compute[250018]: 2026-01-20 14:25:28.396 250022 DEBUG nova.virt.libvirt.driver [None req-598bd51e-d153-439a-badf-cd30f9b8e3fa 57d58248fa3b44579c14396dca4a2199 ef783a3b5dd3446faf947d627c64c5da - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:25:28 compute-0 nova_compute[250018]: 2026-01-20 14:25:28.396 250022 DEBUG nova.virt.libvirt.driver [None req-598bd51e-d153-439a-badf-cd30f9b8e3fa 57d58248fa3b44579c14396dca4a2199 ef783a3b5dd3446faf947d627c64c5da - - default default] No VIF found with MAC fa:16:3e:65:71:f0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 14:25:28 compute-0 nova_compute[250018]: 2026-01-20 14:25:28.397 250022 INFO nova.virt.libvirt.driver [None req-598bd51e-d153-439a-badf-cd30f9b8e3fa 57d58248fa3b44579c14396dca4a2199 ef783a3b5dd3446faf947d627c64c5da - - default default] [instance: fcfc64da-5468-42a5-bf34-daa6db48df22] Using config drive
Jan 20 14:25:28 compute-0 nova_compute[250018]: 2026-01-20 14:25:28.423 250022 DEBUG nova.storage.rbd_utils [None req-598bd51e-d153-439a-badf-cd30f9b8e3fa 57d58248fa3b44579c14396dca4a2199 ef783a3b5dd3446faf947d627c64c5da - - default default] rbd image fcfc64da-5468-42a5-bf34-daa6db48df22_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:25:28 compute-0 boring_golick[261302]: {
Jan 20 14:25:28 compute-0 boring_golick[261302]:     "0": [
Jan 20 14:25:28 compute-0 boring_golick[261302]:         {
Jan 20 14:25:28 compute-0 boring_golick[261302]:             "devices": [
Jan 20 14:25:28 compute-0 boring_golick[261302]:                 "/dev/loop3"
Jan 20 14:25:28 compute-0 boring_golick[261302]:             ],
Jan 20 14:25:28 compute-0 boring_golick[261302]:             "lv_name": "ceph_lv0",
Jan 20 14:25:28 compute-0 boring_golick[261302]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:25:28 compute-0 boring_golick[261302]:             "lv_size": "7511998464",
Jan 20 14:25:28 compute-0 boring_golick[261302]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e399cf45-e6b6-5393-99f1-75c601d3f188,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afc3740a-ccee-46f8-b343-22648cc89376,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 20 14:25:28 compute-0 boring_golick[261302]:             "lv_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 14:25:28 compute-0 boring_golick[261302]:             "name": "ceph_lv0",
Jan 20 14:25:28 compute-0 boring_golick[261302]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:25:28 compute-0 boring_golick[261302]:             "tags": {
Jan 20 14:25:28 compute-0 boring_golick[261302]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:25:28 compute-0 boring_golick[261302]:                 "ceph.block_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 14:25:28 compute-0 boring_golick[261302]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 14:25:28 compute-0 boring_golick[261302]:                 "ceph.cluster_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 14:25:28 compute-0 boring_golick[261302]:                 "ceph.cluster_name": "ceph",
Jan 20 14:25:28 compute-0 boring_golick[261302]:                 "ceph.crush_device_class": "",
Jan 20 14:25:28 compute-0 boring_golick[261302]:                 "ceph.encrypted": "0",
Jan 20 14:25:28 compute-0 boring_golick[261302]:                 "ceph.osd_fsid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 14:25:28 compute-0 boring_golick[261302]:                 "ceph.osd_id": "0",
Jan 20 14:25:28 compute-0 boring_golick[261302]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 14:25:28 compute-0 boring_golick[261302]:                 "ceph.type": "block",
Jan 20 14:25:28 compute-0 boring_golick[261302]:                 "ceph.vdo": "0"
Jan 20 14:25:28 compute-0 boring_golick[261302]:             },
Jan 20 14:25:28 compute-0 boring_golick[261302]:             "type": "block",
Jan 20 14:25:28 compute-0 boring_golick[261302]:             "vg_name": "ceph_vg0"
Jan 20 14:25:28 compute-0 boring_golick[261302]:         }
Jan 20 14:25:28 compute-0 boring_golick[261302]:     ]
Jan 20 14:25:28 compute-0 boring_golick[261302]: }
Jan 20 14:25:28 compute-0 systemd[1]: libpod-868367928b2940ecca29c633f4f633f3c91fcd7c61db51448c81c00d7946b1e1.scope: Deactivated successfully.
Jan 20 14:25:28 compute-0 podman[261265]: 2026-01-20 14:25:28.686142351 +0000 UTC m=+0.917196251 container died 868367928b2940ecca29c633f4f633f3c91fcd7c61db51448c81c00d7946b1e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_golick, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:25:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-5cc7df910c68fe5f83876de87e3d9013701a841ee28c3a8fd262444df472c407-merged.mount: Deactivated successfully.
Jan 20 14:25:28 compute-0 podman[261265]: 2026-01-20 14:25:28.759074734 +0000 UTC m=+0.990128634 container remove 868367928b2940ecca29c633f4f633f3c91fcd7c61db51448c81c00d7946b1e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_golick, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 20 14:25:28 compute-0 systemd[1]: libpod-conmon-868367928b2940ecca29c633f4f633f3c91fcd7c61db51448c81c00d7946b1e1.scope: Deactivated successfully.
Jan 20 14:25:28 compute-0 sudo[261140]: pam_unix(sudo:session): session closed for user root
Jan 20 14:25:28 compute-0 sudo[261367]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:25:28 compute-0 sudo[261367]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:25:28 compute-0 sudo[261367]: pam_unix(sudo:session): session closed for user root
Jan 20 14:25:28 compute-0 nova_compute[250018]: 2026-01-20 14:25:28.873 250022 INFO nova.virt.libvirt.driver [None req-598bd51e-d153-439a-badf-cd30f9b8e3fa 57d58248fa3b44579c14396dca4a2199 ef783a3b5dd3446faf947d627c64c5da - - default default] [instance: fcfc64da-5468-42a5-bf34-daa6db48df22] Creating config drive at /var/lib/nova/instances/fcfc64da-5468-42a5-bf34-daa6db48df22/disk.config
Jan 20 14:25:28 compute-0 nova_compute[250018]: 2026-01-20 14:25:28.879 250022 DEBUG oslo_concurrency.processutils [None req-598bd51e-d153-439a-badf-cd30f9b8e3fa 57d58248fa3b44579c14396dca4a2199 ef783a3b5dd3446faf947d627c64c5da - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/fcfc64da-5468-42a5-bf34-daa6db48df22/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpca1_n8gm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:25:28 compute-0 sudo[261392]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:25:28 compute-0 sudo[261392]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:25:28 compute-0 sudo[261392]: pam_unix(sudo:session): session closed for user root
Jan 20 14:25:28 compute-0 sudo[261420]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:25:28 compute-0 sudo[261420]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:25:28 compute-0 sudo[261420]: pam_unix(sudo:session): session closed for user root
Jan 20 14:25:29 compute-0 sudo[261445]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- raw list --format json
Jan 20 14:25:29 compute-0 sudo[261445]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:25:29 compute-0 nova_compute[250018]: 2026-01-20 14:25:29.012 250022 DEBUG oslo_concurrency.processutils [None req-598bd51e-d153-439a-badf-cd30f9b8e3fa 57d58248fa3b44579c14396dca4a2199 ef783a3b5dd3446faf947d627c64c5da - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/fcfc64da-5468-42a5-bf34-daa6db48df22/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpca1_n8gm" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:25:29 compute-0 nova_compute[250018]: 2026-01-20 14:25:29.038 250022 DEBUG nova.storage.rbd_utils [None req-598bd51e-d153-439a-badf-cd30f9b8e3fa 57d58248fa3b44579c14396dca4a2199 ef783a3b5dd3446faf947d627c64c5da - - default default] rbd image fcfc64da-5468-42a5-bf34-daa6db48df22_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:25:29 compute-0 nova_compute[250018]: 2026-01-20 14:25:29.042 250022 DEBUG oslo_concurrency.processutils [None req-598bd51e-d153-439a-badf-cd30f9b8e3fa 57d58248fa3b44579c14396dca4a2199 ef783a3b5dd3446faf947d627c64c5da - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/fcfc64da-5468-42a5-bf34-daa6db48df22/disk.config fcfc64da-5468-42a5-bf34-daa6db48df22_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:25:29 compute-0 nova_compute[250018]: 2026-01-20 14:25:29.210 250022 DEBUG oslo_concurrency.processutils [None req-598bd51e-d153-439a-badf-cd30f9b8e3fa 57d58248fa3b44579c14396dca4a2199 ef783a3b5dd3446faf947d627c64c5da - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/fcfc64da-5468-42a5-bf34-daa6db48df22/disk.config fcfc64da-5468-42a5-bf34-daa6db48df22_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.168s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:25:29 compute-0 nova_compute[250018]: 2026-01-20 14:25:29.211 250022 INFO nova.virt.libvirt.driver [None req-598bd51e-d153-439a-badf-cd30f9b8e3fa 57d58248fa3b44579c14396dca4a2199 ef783a3b5dd3446faf947d627c64c5da - - default default] [instance: fcfc64da-5468-42a5-bf34-daa6db48df22] Deleting local config drive /var/lib/nova/instances/fcfc64da-5468-42a5-bf34-daa6db48df22/disk.config because it was imported into RBD.
Jan 20 14:25:29 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1021: 321 pgs: 321 active+clean; 293 MiB data, 391 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 72 op/s
Jan 20 14:25:29 compute-0 kernel: tap89adf3b7-65: entered promiscuous mode
Jan 20 14:25:29 compute-0 NetworkManager[48960]: <info>  [1768919129.2698] manager: (tap89adf3b7-65): new Tun device (/org/freedesktop/NetworkManager/Devices/28)
Jan 20 14:25:29 compute-0 ovn_controller[148666]: 2026-01-20T14:25:29Z|00036|binding|INFO|Claiming lport 89adf3b7-65a1-479e-bc6d-0f86de206593 for this chassis.
Jan 20 14:25:29 compute-0 ovn_controller[148666]: 2026-01-20T14:25:29Z|00037|binding|INFO|89adf3b7-65a1-479e-bc6d-0f86de206593: Claiming fa:16:3e:65:71:f0 10.100.0.12
Jan 20 14:25:29 compute-0 nova_compute[250018]: 2026-01-20 14:25:29.272 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:25:29 compute-0 nova_compute[250018]: 2026-01-20 14:25:29.280 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:25:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:25:29.294 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:65:71:f0 10.100.0.12'], port_security=['fa:16:3e:65:71:f0 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'fcfc64da-5468-42a5-bf34-daa6db48df22', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4d67e270-6232-44c0-a859-2ab75934074d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ef783a3b5dd3446faf947d627c64c5da', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c69a4378-df75-46a8-a919-3917216182c4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a00b20ec-8436-4c1b-b8fb-9d59f661148c, chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=89adf3b7-65a1-479e-bc6d-0f86de206593) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:25:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:25:29.296 160071 INFO neutron.agent.ovn.metadata.agent [-] Port 89adf3b7-65a1-479e-bc6d-0f86de206593 in datapath 4d67e270-6232-44c0-a859-2ab75934074d bound to our chassis
Jan 20 14:25:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:25:29.298 160071 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4d67e270-6232-44c0-a859-2ab75934074d
Jan 20 14:25:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:25:29.312 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[5c9d2319-8399-4e0c-a411-78ab09b66094]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:25:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:25:29.314 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4d67e270-61 in ovnmeta-4d67e270-6232-44c0-a859-2ab75934074d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 14:25:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:25:29.316 257604 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4d67e270-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 14:25:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:25:29.316 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[a4ad1c43-9438-4c75-acf3-5e0e73699066]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:25:29 compute-0 systemd-machined[216401]: New machine qemu-6-instance-0000000d.
Jan 20 14:25:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:25:29.319 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[d61a9bb0-83cf-439e-81af-5882e9aef3ce]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:25:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:25:29.333 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[e690a31e-8809-44ab-9873-04893a492835]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:25:29 compute-0 nova_compute[250018]: 2026-01-20 14:25:29.351 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:25:29 compute-0 systemd[1]: Started Virtual Machine qemu-6-instance-0000000d.
Jan 20 14:25:29 compute-0 ovn_controller[148666]: 2026-01-20T14:25:29Z|00038|binding|INFO|Setting lport 89adf3b7-65a1-479e-bc6d-0f86de206593 ovn-installed in OVS
Jan 20 14:25:29 compute-0 ovn_controller[148666]: 2026-01-20T14:25:29Z|00039|binding|INFO|Setting lport 89adf3b7-65a1-479e-bc6d-0f86de206593 up in Southbound
Jan 20 14:25:29 compute-0 nova_compute[250018]: 2026-01-20 14:25:29.358 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:25:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:25:29.359 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[16f596d4-ef17-46c0-8608-b591ae8f85df]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:25:29 compute-0 podman[261557]: 2026-01-20 14:25:29.364260319 +0000 UTC m=+0.049829558 container create 6177399baeb93bdb447468b82c8932a493b916efd37acfdaca022dd5a7977751 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_black, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 20 14:25:29 compute-0 systemd-udevd[261576]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:25:29 compute-0 NetworkManager[48960]: <info>  [1768919129.3789] device (tap89adf3b7-65): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 14:25:29 compute-0 NetworkManager[48960]: <info>  [1768919129.3797] device (tap89adf3b7-65): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 14:25:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:25:29.388 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[75e646da-aa50-4656-82a6-d6583d30a3b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:25:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:25:29.392 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[1d4ea0a5-f15a-4dd2-9fd9-6e584f94cc40]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:25:29 compute-0 NetworkManager[48960]: <info>  [1768919129.3934] manager: (tap4d67e270-60): new Veth device (/org/freedesktop/NetworkManager/Devices/29)
Jan 20 14:25:29 compute-0 systemd[1]: Started libpod-conmon-6177399baeb93bdb447468b82c8932a493b916efd37acfdaca022dd5a7977751.scope.
Jan 20 14:25:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:25:29.422 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[a95217c4-ea85-4506-8611-2cd4d26f4433]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:25:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:25:29.427 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[0f8c51ed-bfea-4c27-8556-6a249d21c018]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:25:29 compute-0 podman[261557]: 2026-01-20 14:25:29.34102308 +0000 UTC m=+0.026592339 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:25:29 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:25:29 compute-0 NetworkManager[48960]: <info>  [1768919129.4527] device (tap4d67e270-60): carrier: link connected
Jan 20 14:25:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:25:29.458 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[736da8dd-a5bd-4197-82e1-ae6ac853f87b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:25:29 compute-0 podman[261557]: 2026-01-20 14:25:29.463088582 +0000 UTC m=+0.148657821 container init 6177399baeb93bdb447468b82c8932a493b916efd37acfdaca022dd5a7977751 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_black, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 20 14:25:29 compute-0 podman[261557]: 2026-01-20 14:25:29.469556481 +0000 UTC m=+0.155125720 container start 6177399baeb93bdb447468b82c8932a493b916efd37acfdaca022dd5a7977751 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_black, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:25:29 compute-0 podman[261557]: 2026-01-20 14:25:29.473674799 +0000 UTC m=+0.159244048 container attach 6177399baeb93bdb447468b82c8932a493b916efd37acfdaca022dd5a7977751 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_black, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:25:29 compute-0 nifty_black[261598]: 167 167
Jan 20 14:25:29 compute-0 systemd[1]: libpod-6177399baeb93bdb447468b82c8932a493b916efd37acfdaca022dd5a7977751.scope: Deactivated successfully.
Jan 20 14:25:29 compute-0 podman[261557]: 2026-01-20 14:25:29.476312738 +0000 UTC m=+0.161881977 container died 6177399baeb93bdb447468b82c8932a493b916efd37acfdaca022dd5a7977751 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_black, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 20 14:25:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:25:29.481 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[a24528fe-236f-41aa-8fdb-9459823781c3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4d67e270-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:63:cb:ae'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 16], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 507017, 'reachable_time': 17023, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 261612, 'error': None, 'target': 'ovnmeta-4d67e270-6232-44c0-a859-2ab75934074d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:25:29 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:25:29 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:25:29 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:25:29.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:25:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-a5051452f64ffda1a90c1100371d5d105018b38935a6723aaaf65e2ae3893ff0-merged.mount: Deactivated successfully.
Jan 20 14:25:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:25:29.506 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[5b662f38-fbe5-4669-9d8d-93e8652314f4]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe63:cbae'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 507017, 'tstamp': 507017}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 261616, 'error': None, 'target': 'ovnmeta-4d67e270-6232-44c0-a859-2ab75934074d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:25:29 compute-0 podman[261557]: 2026-01-20 14:25:29.5129766 +0000 UTC m=+0.198545839 container remove 6177399baeb93bdb447468b82c8932a493b916efd37acfdaca022dd5a7977751 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_black, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:25:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:25:29.523 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[f3b7a95b-18d9-4937-b81e-352ccfb253a7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4d67e270-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:63:cb:ae'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 16], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 507017, 'reachable_time': 17023, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 261624, 'error': None, 'target': 'ovnmeta-4d67e270-6232-44c0-a859-2ab75934074d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:25:29 compute-0 systemd[1]: libpod-conmon-6177399baeb93bdb447468b82c8932a493b916efd37acfdaca022dd5a7977751.scope: Deactivated successfully.
Jan 20 14:25:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:25:29.558 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[e1f6f72d-0bd6-4b0e-be35-3a7681f5b69d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:25:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:25:29.609 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[726c7951-592f-4f52-93e5-48c5ed4b728a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:25:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:25:29.611 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4d67e270-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:25:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:25:29.612 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:25:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:25:29.612 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4d67e270-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:25:29 compute-0 nova_compute[250018]: 2026-01-20 14:25:29.614 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:25:29 compute-0 NetworkManager[48960]: <info>  [1768919129.6150] manager: (tap4d67e270-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/30)
Jan 20 14:25:29 compute-0 kernel: tap4d67e270-60: entered promiscuous mode
Jan 20 14:25:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:25:29.620 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4d67e270-60, col_values=(('external_ids', {'iface-id': 'c053ea46-d58e-42b0-9a74-e2f5480c182b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:25:29 compute-0 ovn_controller[148666]: 2026-01-20T14:25:29Z|00040|binding|INFO|Releasing lport c053ea46-d58e-42b0-9a74-e2f5480c182b from this chassis (sb_readonly=0)
Jan 20 14:25:29 compute-0 nova_compute[250018]: 2026-01-20 14:25:29.621 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:25:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:25:29.624 160071 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4d67e270-6232-44c0-a859-2ab75934074d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4d67e270-6232-44c0-a859-2ab75934074d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 14:25:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:25:29.625 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[7645dc8e-7410-4818-b3c2-715e308b0735]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:25:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:25:29.626 160071 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 14:25:29 compute-0 ovn_metadata_agent[160049]: global
Jan 20 14:25:29 compute-0 ovn_metadata_agent[160049]:     log         /dev/log local0 debug
Jan 20 14:25:29 compute-0 ovn_metadata_agent[160049]:     log-tag     haproxy-metadata-proxy-4d67e270-6232-44c0-a859-2ab75934074d
Jan 20 14:25:29 compute-0 ovn_metadata_agent[160049]:     user        root
Jan 20 14:25:29 compute-0 ovn_metadata_agent[160049]:     group       root
Jan 20 14:25:29 compute-0 ovn_metadata_agent[160049]:     maxconn     1024
Jan 20 14:25:29 compute-0 ovn_metadata_agent[160049]:     pidfile     /var/lib/neutron/external/pids/4d67e270-6232-44c0-a859-2ab75934074d.pid.haproxy
Jan 20 14:25:29 compute-0 ovn_metadata_agent[160049]:     daemon
Jan 20 14:25:29 compute-0 ovn_metadata_agent[160049]: 
Jan 20 14:25:29 compute-0 ovn_metadata_agent[160049]: defaults
Jan 20 14:25:29 compute-0 ovn_metadata_agent[160049]:     log global
Jan 20 14:25:29 compute-0 ovn_metadata_agent[160049]:     mode http
Jan 20 14:25:29 compute-0 ovn_metadata_agent[160049]:     option httplog
Jan 20 14:25:29 compute-0 ovn_metadata_agent[160049]:     option dontlognull
Jan 20 14:25:29 compute-0 ovn_metadata_agent[160049]:     option http-server-close
Jan 20 14:25:29 compute-0 ovn_metadata_agent[160049]:     option forwardfor
Jan 20 14:25:29 compute-0 ovn_metadata_agent[160049]:     retries                 3
Jan 20 14:25:29 compute-0 ovn_metadata_agent[160049]:     timeout http-request    30s
Jan 20 14:25:29 compute-0 ovn_metadata_agent[160049]:     timeout connect         30s
Jan 20 14:25:29 compute-0 ovn_metadata_agent[160049]:     timeout client          32s
Jan 20 14:25:29 compute-0 ovn_metadata_agent[160049]:     timeout server          32s
Jan 20 14:25:29 compute-0 ovn_metadata_agent[160049]:     timeout http-keep-alive 30s
Jan 20 14:25:29 compute-0 ovn_metadata_agent[160049]: 
Jan 20 14:25:29 compute-0 ovn_metadata_agent[160049]: 
Jan 20 14:25:29 compute-0 ovn_metadata_agent[160049]: listen listener
Jan 20 14:25:29 compute-0 ovn_metadata_agent[160049]:     bind 169.254.169.254:80
Jan 20 14:25:29 compute-0 ovn_metadata_agent[160049]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 14:25:29 compute-0 ovn_metadata_agent[160049]:     http-request add-header X-OVN-Network-ID 4d67e270-6232-44c0-a859-2ab75934074d
Jan 20 14:25:29 compute-0 ovn_metadata_agent[160049]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 14:25:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:25:29.628 160071 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4d67e270-6232-44c0-a859-2ab75934074d', 'env', 'PROCESS_TAG=haproxy-4d67e270-6232-44c0-a859-2ab75934074d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4d67e270-6232-44c0-a859-2ab75934074d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 14:25:29 compute-0 nova_compute[250018]: 2026-01-20 14:25:29.639 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:25:29 compute-0 podman[261639]: 2026-01-20 14:25:29.679735474 +0000 UTC m=+0.045061713 container create 32d8d1cf30f749d8d0db6222baac33ed1553c94c1ba1082f222e7abc6b9ac189 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_ritchie, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True)
Jan 20 14:25:29 compute-0 systemd[1]: Started libpod-conmon-32d8d1cf30f749d8d0db6222baac33ed1553c94c1ba1082f222e7abc6b9ac189.scope.
Jan 20 14:25:29 compute-0 nova_compute[250018]: 2026-01-20 14:25:29.726 250022 DEBUG nova.compute.manager [req-d079b855-c52b-4364-b2d9-92cce79363c6 req-bc769059-d2da-4215-be37-bab32c23c7f2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: fcfc64da-5468-42a5-bf34-daa6db48df22] Received event network-vif-plugged-89adf3b7-65a1-479e-bc6d-0f86de206593 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:25:29 compute-0 nova_compute[250018]: 2026-01-20 14:25:29.726 250022 DEBUG oslo_concurrency.lockutils [req-d079b855-c52b-4364-b2d9-92cce79363c6 req-bc769059-d2da-4215-be37-bab32c23c7f2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "fcfc64da-5468-42a5-bf34-daa6db48df22-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:25:29 compute-0 nova_compute[250018]: 2026-01-20 14:25:29.727 250022 DEBUG oslo_concurrency.lockutils [req-d079b855-c52b-4364-b2d9-92cce79363c6 req-bc769059-d2da-4215-be37-bab32c23c7f2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "fcfc64da-5468-42a5-bf34-daa6db48df22-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:25:29 compute-0 nova_compute[250018]: 2026-01-20 14:25:29.728 250022 DEBUG oslo_concurrency.lockutils [req-d079b855-c52b-4364-b2d9-92cce79363c6 req-bc769059-d2da-4215-be37-bab32c23c7f2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "fcfc64da-5468-42a5-bf34-daa6db48df22-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:25:29 compute-0 nova_compute[250018]: 2026-01-20 14:25:29.729 250022 DEBUG nova.compute.manager [req-d079b855-c52b-4364-b2d9-92cce79363c6 req-bc769059-d2da-4215-be37-bab32c23c7f2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: fcfc64da-5468-42a5-bf34-daa6db48df22] Processing event network-vif-plugged-89adf3b7-65a1-479e-bc6d-0f86de206593 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 20 14:25:29 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:25:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05def57db156e84e087242da3ed172495eaea1b8c0a486d4e8ecd32aeacde983/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:25:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05def57db156e84e087242da3ed172495eaea1b8c0a486d4e8ecd32aeacde983/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:25:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05def57db156e84e087242da3ed172495eaea1b8c0a486d4e8ecd32aeacde983/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:25:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05def57db156e84e087242da3ed172495eaea1b8c0a486d4e8ecd32aeacde983/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:25:29 compute-0 podman[261639]: 2026-01-20 14:25:29.75273485 +0000 UTC m=+0.118061109 container init 32d8d1cf30f749d8d0db6222baac33ed1553c94c1ba1082f222e7abc6b9ac189 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_ritchie, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:25:29 compute-0 podman[261639]: 2026-01-20 14:25:29.661402133 +0000 UTC m=+0.026728392 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:25:29 compute-0 podman[261639]: 2026-01-20 14:25:29.760322858 +0000 UTC m=+0.125649097 container start 32d8d1cf30f749d8d0db6222baac33ed1553c94c1ba1082f222e7abc6b9ac189 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_ritchie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:25:29 compute-0 podman[261639]: 2026-01-20 14:25:29.763948624 +0000 UTC m=+0.129274893 container attach 32d8d1cf30f749d8d0db6222baac33ed1553c94c1ba1082f222e7abc6b9ac189 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_ritchie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:25:29 compute-0 nova_compute[250018]: 2026-01-20 14:25:29.898 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768919129.898274, fcfc64da-5468-42a5-bf34-daa6db48df22 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:25:29 compute-0 nova_compute[250018]: 2026-01-20 14:25:29.899 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: fcfc64da-5468-42a5-bf34-daa6db48df22] VM Started (Lifecycle Event)
Jan 20 14:25:29 compute-0 nova_compute[250018]: 2026-01-20 14:25:29.901 250022 DEBUG nova.compute.manager [None req-598bd51e-d153-439a-badf-cd30f9b8e3fa 57d58248fa3b44579c14396dca4a2199 ef783a3b5dd3446faf947d627c64c5da - - default default] [instance: fcfc64da-5468-42a5-bf34-daa6db48df22] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 14:25:29 compute-0 nova_compute[250018]: 2026-01-20 14:25:29.920 250022 DEBUG nova.virt.libvirt.driver [None req-598bd51e-d153-439a-badf-cd30f9b8e3fa 57d58248fa3b44579c14396dca4a2199 ef783a3b5dd3446faf947d627c64c5da - - default default] [instance: fcfc64da-5468-42a5-bf34-daa6db48df22] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 14:25:29 compute-0 nova_compute[250018]: 2026-01-20 14:25:29.924 250022 INFO nova.virt.libvirt.driver [-] [instance: fcfc64da-5468-42a5-bf34-daa6db48df22] Instance spawned successfully.
Jan 20 14:25:29 compute-0 nova_compute[250018]: 2026-01-20 14:25:29.924 250022 DEBUG nova.virt.libvirt.driver [None req-598bd51e-d153-439a-badf-cd30f9b8e3fa 57d58248fa3b44579c14396dca4a2199 ef783a3b5dd3446faf947d627c64c5da - - default default] [instance: fcfc64da-5468-42a5-bf34-daa6db48df22] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 14:25:30 compute-0 podman[261725]: 2026-01-20 14:25:30.02531937 +0000 UTC m=+0.061009931 container create e3ac784481da361dc59a173add10d682b7ec5663101b16e6775a22aba33eb883 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d67e270-6232-44c0-a859-2ab75934074d, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202)
Jan 20 14:25:30 compute-0 nova_compute[250018]: 2026-01-20 14:25:30.039 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:25:30 compute-0 systemd[1]: Started libpod-conmon-e3ac784481da361dc59a173add10d682b7ec5663101b16e6775a22aba33eb883.scope.
Jan 20 14:25:30 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:25:30 compute-0 podman[261725]: 2026-01-20 14:25:29.98493124 +0000 UTC m=+0.020621821 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 14:25:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a34df1e2534c2ebc212de4b30d48cad879fb2cd34f8d99db93653fb9f1a5eef/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 14:25:30 compute-0 podman[261725]: 2026-01-20 14:25:30.094246587 +0000 UTC m=+0.129937178 container init e3ac784481da361dc59a173add10d682b7ec5663101b16e6775a22aba33eb883 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d67e270-6232-44c0-a859-2ab75934074d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 14:25:30 compute-0 podman[261725]: 2026-01-20 14:25:30.099421863 +0000 UTC m=+0.135112444 container start e3ac784481da361dc59a173add10d682b7ec5663101b16e6775a22aba33eb883 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d67e270-6232-44c0-a859-2ab75934074d, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 20 14:25:30 compute-0 neutron-haproxy-ovnmeta-4d67e270-6232-44c0-a859-2ab75934074d[261740]: [NOTICE]   (261744) : New worker (261746) forked
Jan 20 14:25:30 compute-0 neutron-haproxy-ovnmeta-4d67e270-6232-44c0-a859-2ab75934074d[261740]: [NOTICE]   (261744) : Loading success.
Jan 20 14:25:30 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:25:30 compute-0 ceph-mon[74360]: pgmap v1021: 321 pgs: 321 active+clean; 293 MiB data, 391 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 72 op/s
Jan 20 14:25:30 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:25:30 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:25:30 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:25:30.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:25:30 compute-0 nova_compute[250018]: 2026-01-20 14:25:30.519 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: fcfc64da-5468-42a5-bf34-daa6db48df22] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:25:30 compute-0 nova_compute[250018]: 2026-01-20 14:25:30.523 250022 DEBUG nova.virt.libvirt.driver [None req-598bd51e-d153-439a-badf-cd30f9b8e3fa 57d58248fa3b44579c14396dca4a2199 ef783a3b5dd3446faf947d627c64c5da - - default default] [instance: fcfc64da-5468-42a5-bf34-daa6db48df22] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:25:30 compute-0 nova_compute[250018]: 2026-01-20 14:25:30.523 250022 DEBUG nova.virt.libvirt.driver [None req-598bd51e-d153-439a-badf-cd30f9b8e3fa 57d58248fa3b44579c14396dca4a2199 ef783a3b5dd3446faf947d627c64c5da - - default default] [instance: fcfc64da-5468-42a5-bf34-daa6db48df22] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:25:30 compute-0 nova_compute[250018]: 2026-01-20 14:25:30.523 250022 DEBUG nova.virt.libvirt.driver [None req-598bd51e-d153-439a-badf-cd30f9b8e3fa 57d58248fa3b44579c14396dca4a2199 ef783a3b5dd3446faf947d627c64c5da - - default default] [instance: fcfc64da-5468-42a5-bf34-daa6db48df22] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:25:30 compute-0 nova_compute[250018]: 2026-01-20 14:25:30.524 250022 DEBUG nova.virt.libvirt.driver [None req-598bd51e-d153-439a-badf-cd30f9b8e3fa 57d58248fa3b44579c14396dca4a2199 ef783a3b5dd3446faf947d627c64c5da - - default default] [instance: fcfc64da-5468-42a5-bf34-daa6db48df22] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:25:30 compute-0 nova_compute[250018]: 2026-01-20 14:25:30.524 250022 DEBUG nova.virt.libvirt.driver [None req-598bd51e-d153-439a-badf-cd30f9b8e3fa 57d58248fa3b44579c14396dca4a2199 ef783a3b5dd3446faf947d627c64c5da - - default default] [instance: fcfc64da-5468-42a5-bf34-daa6db48df22] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:25:30 compute-0 nova_compute[250018]: 2026-01-20 14:25:30.524 250022 DEBUG nova.virt.libvirt.driver [None req-598bd51e-d153-439a-badf-cd30f9b8e3fa 57d58248fa3b44579c14396dca4a2199 ef783a3b5dd3446faf947d627c64c5da - - default default] [instance: fcfc64da-5468-42a5-bf34-daa6db48df22] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:25:30 compute-0 nova_compute[250018]: 2026-01-20 14:25:30.528 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: fcfc64da-5468-42a5-bf34-daa6db48df22] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:25:30 compute-0 frosty_ritchie[261672]: {
Jan 20 14:25:30 compute-0 frosty_ritchie[261672]:     "afc3740a-ccee-46f8-b343-22648cc89376": {
Jan 20 14:25:30 compute-0 frosty_ritchie[261672]:         "ceph_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 14:25:30 compute-0 frosty_ritchie[261672]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 20 14:25:30 compute-0 frosty_ritchie[261672]:         "osd_id": 0,
Jan 20 14:25:30 compute-0 frosty_ritchie[261672]:         "osd_uuid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 14:25:30 compute-0 frosty_ritchie[261672]:         "type": "bluestore"
Jan 20 14:25:30 compute-0 frosty_ritchie[261672]:     }
Jan 20 14:25:30 compute-0 frosty_ritchie[261672]: }
Jan 20 14:25:30 compute-0 systemd[1]: libpod-32d8d1cf30f749d8d0db6222baac33ed1553c94c1ba1082f222e7abc6b9ac189.scope: Deactivated successfully.
Jan 20 14:25:30 compute-0 podman[261639]: 2026-01-20 14:25:30.687252774 +0000 UTC m=+1.052579023 container died 32d8d1cf30f749d8d0db6222baac33ed1553c94c1ba1082f222e7abc6b9ac189 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_ritchie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 20 14:25:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-05def57db156e84e087242da3ed172495eaea1b8c0a486d4e8ecd32aeacde983-merged.mount: Deactivated successfully.
Jan 20 14:25:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:25:30.740 160071 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:25:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:25:30.741 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:25:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:25:30.742 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:25:30 compute-0 podman[261639]: 2026-01-20 14:25:30.744085825 +0000 UTC m=+1.109412064 container remove 32d8d1cf30f749d8d0db6222baac33ed1553c94c1ba1082f222e7abc6b9ac189 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_ritchie, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 20 14:25:30 compute-0 systemd[1]: libpod-conmon-32d8d1cf30f749d8d0db6222baac33ed1553c94c1ba1082f222e7abc6b9ac189.scope: Deactivated successfully.
Jan 20 14:25:30 compute-0 sudo[261445]: pam_unix(sudo:session): session closed for user root
Jan 20 14:25:30 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 14:25:30 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:25:30 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 14:25:30 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:25:30 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 6b179197-c84b-4b4c-8242-6b28a41f256b does not exist
Jan 20 14:25:30 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev f7cc1d90-d5bf-4b50-aab9-2b6a7ebb643f does not exist
Jan 20 14:25:30 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 6b6afd64-c7bb-4f51-abbc-1b713c0bcb77 does not exist
Jan 20 14:25:30 compute-0 sudo[261784]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:25:30 compute-0 sudo[261784]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:25:30 compute-0 sudo[261784]: pam_unix(sudo:session): session closed for user root
Jan 20 14:25:30 compute-0 sudo[261809]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 14:25:30 compute-0 sudo[261809]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:25:30 compute-0 sudo[261809]: pam_unix(sudo:session): session closed for user root
Jan 20 14:25:30 compute-0 nova_compute[250018]: 2026-01-20 14:25:30.997 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: fcfc64da-5468-42a5-bf34-daa6db48df22] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:25:30 compute-0 nova_compute[250018]: 2026-01-20 14:25:30.997 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768919129.9010248, fcfc64da-5468-42a5-bf34-daa6db48df22 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:25:30 compute-0 nova_compute[250018]: 2026-01-20 14:25:30.998 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: fcfc64da-5468-42a5-bf34-daa6db48df22] VM Paused (Lifecycle Event)
Jan 20 14:25:31 compute-0 nova_compute[250018]: 2026-01-20 14:25:31.025 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: fcfc64da-5468-42a5-bf34-daa6db48df22] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:25:31 compute-0 nova_compute[250018]: 2026-01-20 14:25:31.031 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768919129.9048207, fcfc64da-5468-42a5-bf34-daa6db48df22 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:25:31 compute-0 nova_compute[250018]: 2026-01-20 14:25:31.031 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: fcfc64da-5468-42a5-bf34-daa6db48df22] VM Resumed (Lifecycle Event)
Jan 20 14:25:31 compute-0 nova_compute[250018]: 2026-01-20 14:25:31.116 250022 INFO nova.compute.manager [None req-598bd51e-d153-439a-badf-cd30f9b8e3fa 57d58248fa3b44579c14396dca4a2199 ef783a3b5dd3446faf947d627c64c5da - - default default] [instance: fcfc64da-5468-42a5-bf34-daa6db48df22] Took 20.13 seconds to spawn the instance on the hypervisor.
Jan 20 14:25:31 compute-0 nova_compute[250018]: 2026-01-20 14:25:31.117 250022 DEBUG nova.compute.manager [None req-598bd51e-d153-439a-badf-cd30f9b8e3fa 57d58248fa3b44579c14396dca4a2199 ef783a3b5dd3446faf947d627c64c5da - - default default] [instance: fcfc64da-5468-42a5-bf34-daa6db48df22] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:25:31 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1022: 321 pgs: 321 active+clean; 293 MiB data, 391 MiB used, 21 GiB / 21 GiB avail; 2.4 MiB/s rd, 11 KiB/s wr, 93 op/s
Jan 20 14:25:31 compute-0 nova_compute[250018]: 2026-01-20 14:25:31.270 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: fcfc64da-5468-42a5-bf34-daa6db48df22] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:25:31 compute-0 nova_compute[250018]: 2026-01-20 14:25:31.274 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: fcfc64da-5468-42a5-bf34-daa6db48df22] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:25:31 compute-0 nova_compute[250018]: 2026-01-20 14:25:31.299 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: fcfc64da-5468-42a5-bf34-daa6db48df22] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:25:31 compute-0 nova_compute[250018]: 2026-01-20 14:25:31.360 250022 INFO nova.compute.manager [None req-598bd51e-d153-439a-badf-cd30f9b8e3fa 57d58248fa3b44579c14396dca4a2199 ef783a3b5dd3446faf947d627c64c5da - - default default] [instance: fcfc64da-5468-42a5-bf34-daa6db48df22] Took 21.78 seconds to build instance.
Jan 20 14:25:31 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:25:31 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 20 14:25:31 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:25:31.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 20 14:25:31 compute-0 nova_compute[250018]: 2026-01-20 14:25:31.594 250022 DEBUG oslo_concurrency.lockutils [None req-598bd51e-d153-439a-badf-cd30f9b8e3fa 57d58248fa3b44579c14396dca4a2199 ef783a3b5dd3446faf947d627c64c5da - - default default] Lock "fcfc64da-5468-42a5-bf34-daa6db48df22" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 22.102s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:25:31 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:25:31 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:25:31 compute-0 ceph-mon[74360]: pgmap v1022: 321 pgs: 321 active+clean; 293 MiB data, 391 MiB used, 21 GiB / 21 GiB avail; 2.4 MiB/s rd, 11 KiB/s wr, 93 op/s
Jan 20 14:25:31 compute-0 nova_compute[250018]: 2026-01-20 14:25:31.908 250022 DEBUG nova.compute.manager [req-1b0663ec-1b78-41ed-a5ea-9ba720e0138a req-73816452-602a-4cb4-98d4-96c2563bdb3a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: fcfc64da-5468-42a5-bf34-daa6db48df22] Received event network-vif-plugged-89adf3b7-65a1-479e-bc6d-0f86de206593 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:25:31 compute-0 nova_compute[250018]: 2026-01-20 14:25:31.909 250022 DEBUG oslo_concurrency.lockutils [req-1b0663ec-1b78-41ed-a5ea-9ba720e0138a req-73816452-602a-4cb4-98d4-96c2563bdb3a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "fcfc64da-5468-42a5-bf34-daa6db48df22-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:25:31 compute-0 nova_compute[250018]: 2026-01-20 14:25:31.909 250022 DEBUG oslo_concurrency.lockutils [req-1b0663ec-1b78-41ed-a5ea-9ba720e0138a req-73816452-602a-4cb4-98d4-96c2563bdb3a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "fcfc64da-5468-42a5-bf34-daa6db48df22-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:25:31 compute-0 nova_compute[250018]: 2026-01-20 14:25:31.910 250022 DEBUG oslo_concurrency.lockutils [req-1b0663ec-1b78-41ed-a5ea-9ba720e0138a req-73816452-602a-4cb4-98d4-96c2563bdb3a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "fcfc64da-5468-42a5-bf34-daa6db48df22-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:25:31 compute-0 nova_compute[250018]: 2026-01-20 14:25:31.910 250022 DEBUG nova.compute.manager [req-1b0663ec-1b78-41ed-a5ea-9ba720e0138a req-73816452-602a-4cb4-98d4-96c2563bdb3a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: fcfc64da-5468-42a5-bf34-daa6db48df22] No waiting events found dispatching network-vif-plugged-89adf3b7-65a1-479e-bc6d-0f86de206593 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:25:31 compute-0 nova_compute[250018]: 2026-01-20 14:25:31.910 250022 WARNING nova.compute.manager [req-1b0663ec-1b78-41ed-a5ea-9ba720e0138a req-73816452-602a-4cb4-98d4-96c2563bdb3a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: fcfc64da-5468-42a5-bf34-daa6db48df22] Received unexpected event network-vif-plugged-89adf3b7-65a1-479e-bc6d-0f86de206593 for instance with vm_state active and task_state None.
Jan 20 14:25:32 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:25:32 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:25:32 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:25:32.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:25:33 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1023: 321 pgs: 321 active+clean; 301 MiB data, 391 MiB used, 21 GiB / 21 GiB avail; 2.5 MiB/s rd, 626 KiB/s wr, 108 op/s
Jan 20 14:25:33 compute-0 nova_compute[250018]: 2026-01-20 14:25:33.342 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:25:33 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:25:33 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:25:33 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:25:33.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:25:34 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:25:34 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:25:34 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:25:34.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:25:34 compute-0 nova_compute[250018]: 2026-01-20 14:25:34.440 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:25:34 compute-0 NetworkManager[48960]: <info>  [1768919134.4415] manager: (patch-provnet-b62c391b-f7a3-4a38-a0df-72ac0383ca74-to-br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/31)
Jan 20 14:25:34 compute-0 NetworkManager[48960]: <info>  [1768919134.4425] device (patch-provnet-b62c391b-f7a3-4a38-a0df-72ac0383ca74-to-br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 20 14:25:34 compute-0 NetworkManager[48960]: <warn>  [1768919134.4429] device (patch-provnet-b62c391b-f7a3-4a38-a0df-72ac0383ca74-to-br-int)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 20 14:25:34 compute-0 NetworkManager[48960]: <info>  [1768919134.4441] manager: (patch-br-int-to-provnet-b62c391b-f7a3-4a38-a0df-72ac0383ca74): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/32)
Jan 20 14:25:34 compute-0 NetworkManager[48960]: <info>  [1768919134.4447] device (patch-br-int-to-provnet-b62c391b-f7a3-4a38-a0df-72ac0383ca74)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 20 14:25:34 compute-0 NetworkManager[48960]: <warn>  [1768919134.4447] device (patch-br-int-to-provnet-b62c391b-f7a3-4a38-a0df-72ac0383ca74)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 20 14:25:34 compute-0 NetworkManager[48960]: <info>  [1768919134.4458] manager: (patch-br-int-to-provnet-b62c391b-f7a3-4a38-a0df-72ac0383ca74): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/33)
Jan 20 14:25:34 compute-0 NetworkManager[48960]: <info>  [1768919134.4482] manager: (patch-provnet-b62c391b-f7a3-4a38-a0df-72ac0383ca74-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/34)
Jan 20 14:25:34 compute-0 NetworkManager[48960]: <info>  [1768919134.4489] device (patch-provnet-b62c391b-f7a3-4a38-a0df-72ac0383ca74-to-br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Jan 20 14:25:34 compute-0 NetworkManager[48960]: <info>  [1768919134.4497] device (patch-br-int-to-provnet-b62c391b-f7a3-4a38-a0df-72ac0383ca74)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Jan 20 14:25:34 compute-0 ceph-mon[74360]: pgmap v1023: 321 pgs: 321 active+clean; 301 MiB data, 391 MiB used, 21 GiB / 21 GiB avail; 2.5 MiB/s rd, 626 KiB/s wr, 108 op/s
Jan 20 14:25:34 compute-0 nova_compute[250018]: 2026-01-20 14:25:34.634 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:25:34 compute-0 ovn_controller[148666]: 2026-01-20T14:25:34Z|00041|binding|INFO|Releasing lport c053ea46-d58e-42b0-9a74-e2f5480c182b from this chassis (sb_readonly=0)
Jan 20 14:25:34 compute-0 nova_compute[250018]: 2026-01-20 14:25:34.659 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:25:35 compute-0 nova_compute[250018]: 2026-01-20 14:25:35.041 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:25:35 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:25:35 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1024: 321 pgs: 321 active+clean; 304 MiB data, 397 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.2 MiB/s wr, 107 op/s
Jan 20 14:25:35 compute-0 nova_compute[250018]: 2026-01-20 14:25:35.509 250022 DEBUG nova.compute.manager [req-30b943c3-3455-46ea-8f4e-be43eb6a5e08 req-ed632206-15ce-42d2-8c22-609742b56dc5 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: fcfc64da-5468-42a5-bf34-daa6db48df22] Received event network-changed-89adf3b7-65a1-479e-bc6d-0f86de206593 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:25:35 compute-0 nova_compute[250018]: 2026-01-20 14:25:35.510 250022 DEBUG nova.compute.manager [req-30b943c3-3455-46ea-8f4e-be43eb6a5e08 req-ed632206-15ce-42d2-8c22-609742b56dc5 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: fcfc64da-5468-42a5-bf34-daa6db48df22] Refreshing instance network info cache due to event network-changed-89adf3b7-65a1-479e-bc6d-0f86de206593. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:25:35 compute-0 nova_compute[250018]: 2026-01-20 14:25:35.510 250022 DEBUG oslo_concurrency.lockutils [req-30b943c3-3455-46ea-8f4e-be43eb6a5e08 req-ed632206-15ce-42d2-8c22-609742b56dc5 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-fcfc64da-5468-42a5-bf34-daa6db48df22" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:25:35 compute-0 nova_compute[250018]: 2026-01-20 14:25:35.510 250022 DEBUG oslo_concurrency.lockutils [req-30b943c3-3455-46ea-8f4e-be43eb6a5e08 req-ed632206-15ce-42d2-8c22-609742b56dc5 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-fcfc64da-5468-42a5-bf34-daa6db48df22" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:25:35 compute-0 nova_compute[250018]: 2026-01-20 14:25:35.511 250022 DEBUG nova.network.neutron [req-30b943c3-3455-46ea-8f4e-be43eb6a5e08 req-ed632206-15ce-42d2-8c22-609742b56dc5 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: fcfc64da-5468-42a5-bf34-daa6db48df22] Refreshing network info cache for port 89adf3b7-65a1-479e-bc6d-0f86de206593 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:25:35 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:25:35.510 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:25:35 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:25:35.511 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 14:25:35 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:25:35.511 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=367c1a2c-b16a-4828-ab5a-626bb50023b4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:25:35 compute-0 nova_compute[250018]: 2026-01-20 14:25:35.512 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:25:35 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:25:35 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:25:35 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:25:35.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:25:36 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:25:36 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 20 14:25:36 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:25:36.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 20 14:25:36 compute-0 ceph-mon[74360]: pgmap v1024: 321 pgs: 321 active+clean; 304 MiB data, 397 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.2 MiB/s wr, 107 op/s
Jan 20 14:25:36 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/1380560990' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:25:36 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/912185248' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:25:37 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1025: 321 pgs: 321 active+clean; 326 MiB data, 434 MiB used, 21 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 142 op/s
Jan 20 14:25:37 compute-0 sudo[261838]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:25:37 compute-0 sudo[261838]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:25:37 compute-0 sudo[261838]: pam_unix(sudo:session): session closed for user root
Jan 20 14:25:37 compute-0 sudo[261863]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:25:37 compute-0 sudo[261863]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:25:37 compute-0 sudo[261863]: pam_unix(sudo:session): session closed for user root
Jan 20 14:25:37 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:25:37 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:25:37 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:25:37.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:25:37 compute-0 nova_compute[250018]: 2026-01-20 14:25:37.534 250022 DEBUG nova.virt.libvirt.driver [None req-7d010e3b-cb9b-4243-9239-4076b704cbb2 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] [instance: 6091ab6e-2530-4b48-b482-00867d3c66c5] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Jan 20 14:25:37 compute-0 sshd-session[261888]: Invalid user admin from 157.245.78.139 port 59950
Jan 20 14:25:37 compute-0 sshd-session[261888]: Connection closed by invalid user admin 157.245.78.139 port 59950 [preauth]
Jan 20 14:25:38 compute-0 nova_compute[250018]: 2026-01-20 14:25:38.336 250022 DEBUG nova.network.neutron [req-30b943c3-3455-46ea-8f4e-be43eb6a5e08 req-ed632206-15ce-42d2-8c22-609742b56dc5 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: fcfc64da-5468-42a5-bf34-daa6db48df22] Updated VIF entry in instance network info cache for port 89adf3b7-65a1-479e-bc6d-0f86de206593. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 14:25:38 compute-0 nova_compute[250018]: 2026-01-20 14:25:38.336 250022 DEBUG nova.network.neutron [req-30b943c3-3455-46ea-8f4e-be43eb6a5e08 req-ed632206-15ce-42d2-8c22-609742b56dc5 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: fcfc64da-5468-42a5-bf34-daa6db48df22] Updating instance_info_cache with network_info: [{"id": "89adf3b7-65a1-479e-bc6d-0f86de206593", "address": "fa:16:3e:65:71:f0", "network": {"id": "4d67e270-6232-44c0-a859-2ab75934074d", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1442825192-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef783a3b5dd3446faf947d627c64c5da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap89adf3b7-65", "ovs_interfaceid": "89adf3b7-65a1-479e-bc6d-0f86de206593", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:25:38 compute-0 nova_compute[250018]: 2026-01-20 14:25:38.344 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:25:38 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:25:38 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:25:38 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:25:38.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:25:38 compute-0 ceph-mon[74360]: pgmap v1025: 321 pgs: 321 active+clean; 326 MiB data, 434 MiB used, 21 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 142 op/s
Jan 20 14:25:38 compute-0 nova_compute[250018]: 2026-01-20 14:25:38.605 250022 DEBUG oslo_concurrency.lockutils [req-30b943c3-3455-46ea-8f4e-be43eb6a5e08 req-ed632206-15ce-42d2-8c22-609742b56dc5 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-fcfc64da-5468-42a5-bf34-daa6db48df22" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:25:39 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1026: 321 pgs: 321 active+clean; 326 MiB data, 434 MiB used, 21 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 142 op/s
Jan 20 14:25:39 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:25:39 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:25:39 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:25:39.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:25:39 compute-0 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d0000000e.scope: Deactivated successfully.
Jan 20 14:25:39 compute-0 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d0000000e.scope: Consumed 13.931s CPU time.
Jan 20 14:25:39 compute-0 systemd-machined[216401]: Machine qemu-5-instance-0000000e terminated.
Jan 20 14:25:40 compute-0 nova_compute[250018]: 2026-01-20 14:25:40.077 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:25:40 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:25:40 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:25:40 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:25:40 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:25:40.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:25:40 compute-0 nova_compute[250018]: 2026-01-20 14:25:40.547 250022 INFO nova.virt.libvirt.driver [None req-7d010e3b-cb9b-4243-9239-4076b704cbb2 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] [instance: 6091ab6e-2530-4b48-b482-00867d3c66c5] Instance shutdown successfully after 13 seconds.
Jan 20 14:25:40 compute-0 nova_compute[250018]: 2026-01-20 14:25:40.552 250022 INFO nova.virt.libvirt.driver [-] [instance: 6091ab6e-2530-4b48-b482-00867d3c66c5] Instance destroyed successfully.
Jan 20 14:25:40 compute-0 nova_compute[250018]: 2026-01-20 14:25:40.552 250022 DEBUG nova.objects.instance [None req-7d010e3b-cb9b-4243-9239-4076b704cbb2 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] Lazy-loading 'numa_topology' on Instance uuid 6091ab6e-2530-4b48-b482-00867d3c66c5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:25:40 compute-0 ceph-mon[74360]: pgmap v1026: 321 pgs: 321 active+clean; 326 MiB data, 434 MiB used, 21 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 142 op/s
Jan 20 14:25:41 compute-0 nova_compute[250018]: 2026-01-20 14:25:41.109 250022 INFO nova.virt.libvirt.driver [None req-7d010e3b-cb9b-4243-9239-4076b704cbb2 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] [instance: 6091ab6e-2530-4b48-b482-00867d3c66c5] Beginning cold snapshot process
Jan 20 14:25:41 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1027: 321 pgs: 321 active+clean; 326 MiB data, 434 MiB used, 21 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 141 op/s
Jan 20 14:25:41 compute-0 nova_compute[250018]: 2026-01-20 14:25:41.432 250022 DEBUG nova.virt.libvirt.imagebackend [None req-7d010e3b-cb9b-4243-9239-4076b704cbb2 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] No parent info for a32b3e07-16d8-46fd-9a7b-c242c432fcf9; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Jan 20 14:25:41 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:25:41 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:25:41 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:25:41.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:25:42 compute-0 nova_compute[250018]: 2026-01-20 14:25:42.014 250022 DEBUG nova.storage.rbd_utils [None req-7d010e3b-cb9b-4243-9239-4076b704cbb2 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] creating snapshot(b25fd9a6da744aae964e005d69ce3cd4) on rbd image(6091ab6e-2530-4b48-b482-00867d3c66c5_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 20 14:25:42 compute-0 ceph-mon[74360]: pgmap v1027: 321 pgs: 321 active+clean; 326 MiB data, 434 MiB used, 21 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 141 op/s
Jan 20 14:25:42 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:25:42 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:25:42 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:25:42.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:25:42 compute-0 ovn_controller[148666]: 2026-01-20T14:25:42Z|00004|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:65:71:f0 10.100.0.12
Jan 20 14:25:42 compute-0 ovn_controller[148666]: 2026-01-20T14:25:42Z|00005|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:65:71:f0 10.100.0.12
Jan 20 14:25:43 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e139 do_prune osdmap full prune enabled
Jan 20 14:25:43 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e140 e140: 3 total, 3 up, 3 in
Jan 20 14:25:43 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e140: 3 total, 3 up, 3 in
Jan 20 14:25:43 compute-0 nova_compute[250018]: 2026-01-20 14:25:43.090 250022 DEBUG nova.storage.rbd_utils [None req-7d010e3b-cb9b-4243-9239-4076b704cbb2 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] cloning vms/6091ab6e-2530-4b48-b482-00867d3c66c5_disk@b25fd9a6da744aae964e005d69ce3cd4 to images/9b402b85-ed3a-4c51-8c4a-aeeda86dfa7c clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Jan 20 14:25:43 compute-0 nova_compute[250018]: 2026-01-20 14:25:43.217 250022 DEBUG nova.storage.rbd_utils [None req-7d010e3b-cb9b-4243-9239-4076b704cbb2 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] flattening images/9b402b85-ed3a-4c51-8c4a-aeeda86dfa7c flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Jan 20 14:25:43 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1029: 321 pgs: 321 active+clean; 326 MiB data, 434 MiB used, 21 GiB / 21 GiB avail; 1.1 MiB/s rd, 1.9 MiB/s wr, 96 op/s
Jan 20 14:25:43 compute-0 nova_compute[250018]: 2026-01-20 14:25:43.345 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:25:43 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:25:43 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:25:43 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:25:43.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:25:43 compute-0 nova_compute[250018]: 2026-01-20 14:25:43.613 250022 DEBUG nova.storage.rbd_utils [None req-7d010e3b-cb9b-4243-9239-4076b704cbb2 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] removing snapshot(b25fd9a6da744aae964e005d69ce3cd4) on rbd image(6091ab6e-2530-4b48-b482-00867d3c66c5_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Jan 20 14:25:44 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e140 do_prune osdmap full prune enabled
Jan 20 14:25:44 compute-0 ceph-mon[74360]: osdmap e140: 3 total, 3 up, 3 in
Jan 20 14:25:44 compute-0 ceph-mon[74360]: pgmap v1029: 321 pgs: 321 active+clean; 326 MiB data, 434 MiB used, 21 GiB / 21 GiB avail; 1.1 MiB/s rd, 1.9 MiB/s wr, 96 op/s
Jan 20 14:25:44 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e141 e141: 3 total, 3 up, 3 in
Jan 20 14:25:44 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e141: 3 total, 3 up, 3 in
Jan 20 14:25:44 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:25:44 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:25:44 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:25:44.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:25:44 compute-0 nova_compute[250018]: 2026-01-20 14:25:44.423 250022 DEBUG nova.storage.rbd_utils [None req-7d010e3b-cb9b-4243-9239-4076b704cbb2 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] creating snapshot(snap) on rbd image(9b402b85-ed3a-4c51-8c4a-aeeda86dfa7c) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 20 14:25:45 compute-0 nova_compute[250018]: 2026-01-20 14:25:45.079 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:25:45 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:25:45 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1031: 321 pgs: 321 active+clean; 358 MiB data, 455 MiB used, 21 GiB / 21 GiB avail; 1.2 MiB/s rd, 2.6 MiB/s wr, 69 op/s
Jan 20 14:25:45 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e141 do_prune osdmap full prune enabled
Jan 20 14:25:45 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e142 e142: 3 total, 3 up, 3 in
Jan 20 14:25:45 compute-0 ceph-mon[74360]: osdmap e141: 3 total, 3 up, 3 in
Jan 20 14:25:45 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e142: 3 total, 3 up, 3 in
Jan 20 14:25:45 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:25:45 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:25:45 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:25:45.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:25:46 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:25:46 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3192772719' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:25:46 compute-0 ceph-mon[74360]: pgmap v1031: 321 pgs: 321 active+clean; 358 MiB data, 455 MiB used, 21 GiB / 21 GiB avail; 1.2 MiB/s rd, 2.6 MiB/s wr, 69 op/s
Jan 20 14:25:46 compute-0 ceph-mon[74360]: osdmap e142: 3 total, 3 up, 3 in
Jan 20 14:25:46 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/3192772719' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:25:46 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:25:46 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:25:46 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:25:46.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:25:47 compute-0 nova_compute[250018]: 2026-01-20 14:25:47.120 250022 INFO nova.virt.libvirt.driver [None req-7d010e3b-cb9b-4243-9239-4076b704cbb2 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] [instance: 6091ab6e-2530-4b48-b482-00867d3c66c5] Snapshot image upload complete
Jan 20 14:25:47 compute-0 nova_compute[250018]: 2026-01-20 14:25:47.121 250022 DEBUG nova.compute.manager [None req-7d010e3b-cb9b-4243-9239-4076b704cbb2 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] [instance: 6091ab6e-2530-4b48-b482-00867d3c66c5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:25:47 compute-0 nova_compute[250018]: 2026-01-20 14:25:47.208 250022 INFO nova.compute.manager [None req-7d010e3b-cb9b-4243-9239-4076b704cbb2 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] [instance: 6091ab6e-2530-4b48-b482-00867d3c66c5] Shelve offloading
Jan 20 14:25:47 compute-0 nova_compute[250018]: 2026-01-20 14:25:47.219 250022 INFO nova.virt.libvirt.driver [-] [instance: 6091ab6e-2530-4b48-b482-00867d3c66c5] Instance destroyed successfully.
Jan 20 14:25:47 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1033: 321 pgs: 321 active+clean; 436 MiB data, 523 MiB used, 20 GiB / 21 GiB avail; 8.4 MiB/s rd, 12 MiB/s wr, 293 op/s
Jan 20 14:25:47 compute-0 nova_compute[250018]: 2026-01-20 14:25:47.219 250022 DEBUG nova.compute.manager [None req-7d010e3b-cb9b-4243-9239-4076b704cbb2 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] [instance: 6091ab6e-2530-4b48-b482-00867d3c66c5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:25:47 compute-0 nova_compute[250018]: 2026-01-20 14:25:47.222 250022 DEBUG oslo_concurrency.lockutils [None req-7d010e3b-cb9b-4243-9239-4076b704cbb2 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] Acquiring lock "refresh_cache-6091ab6e-2530-4b48-b482-00867d3c66c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:25:47 compute-0 nova_compute[250018]: 2026-01-20 14:25:47.223 250022 DEBUG oslo_concurrency.lockutils [None req-7d010e3b-cb9b-4243-9239-4076b704cbb2 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] Acquired lock "refresh_cache-6091ab6e-2530-4b48-b482-00867d3c66c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:25:47 compute-0 nova_compute[250018]: 2026-01-20 14:25:47.223 250022 DEBUG nova.network.neutron [None req-7d010e3b-cb9b-4243-9239-4076b704cbb2 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] [instance: 6091ab6e-2530-4b48-b482-00867d3c66c5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:25:47 compute-0 nova_compute[250018]: 2026-01-20 14:25:47.504 250022 DEBUG nova.network.neutron [None req-7d010e3b-cb9b-4243-9239-4076b704cbb2 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] [instance: 6091ab6e-2530-4b48-b482-00867d3c66c5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 14:25:47 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:25:47 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:25:47 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:25:47.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:25:48 compute-0 nova_compute[250018]: 2026-01-20 14:25:48.275 250022 DEBUG nova.network.neutron [None req-7d010e3b-cb9b-4243-9239-4076b704cbb2 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] [instance: 6091ab6e-2530-4b48-b482-00867d3c66c5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:25:48 compute-0 nova_compute[250018]: 2026-01-20 14:25:48.289 250022 DEBUG oslo_concurrency.lockutils [None req-7d010e3b-cb9b-4243-9239-4076b704cbb2 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] Releasing lock "refresh_cache-6091ab6e-2530-4b48-b482-00867d3c66c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:25:48 compute-0 nova_compute[250018]: 2026-01-20 14:25:48.295 250022 INFO nova.virt.libvirt.driver [-] [instance: 6091ab6e-2530-4b48-b482-00867d3c66c5] Instance destroyed successfully.
Jan 20 14:25:48 compute-0 nova_compute[250018]: 2026-01-20 14:25:48.296 250022 DEBUG nova.objects.instance [None req-7d010e3b-cb9b-4243-9239-4076b704cbb2 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] Lazy-loading 'resources' on Instance uuid 6091ab6e-2530-4b48-b482-00867d3c66c5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:25:48 compute-0 nova_compute[250018]: 2026-01-20 14:25:48.348 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:25:48 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:25:48 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:25:48 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:25:48.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:25:48 compute-0 ceph-mon[74360]: pgmap v1033: 321 pgs: 321 active+clean; 436 MiB data, 523 MiB used, 20 GiB / 21 GiB avail; 8.4 MiB/s rd, 12 MiB/s wr, 293 op/s
Jan 20 14:25:48 compute-0 nova_compute[250018]: 2026-01-20 14:25:48.786 250022 INFO nova.virt.libvirt.driver [None req-7d010e3b-cb9b-4243-9239-4076b704cbb2 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] [instance: 6091ab6e-2530-4b48-b482-00867d3c66c5] Deleting instance files /var/lib/nova/instances/6091ab6e-2530-4b48-b482-00867d3c66c5_del
Jan 20 14:25:48 compute-0 nova_compute[250018]: 2026-01-20 14:25:48.786 250022 INFO nova.virt.libvirt.driver [None req-7d010e3b-cb9b-4243-9239-4076b704cbb2 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] [instance: 6091ab6e-2530-4b48-b482-00867d3c66c5] Deletion of /var/lib/nova/instances/6091ab6e-2530-4b48-b482-00867d3c66c5_del complete
Jan 20 14:25:48 compute-0 nova_compute[250018]: 2026-01-20 14:25:48.888 250022 INFO nova.scheduler.client.report [None req-7d010e3b-cb9b-4243-9239-4076b704cbb2 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] Deleted allocations for instance 6091ab6e-2530-4b48-b482-00867d3c66c5
Jan 20 14:25:48 compute-0 nova_compute[250018]: 2026-01-20 14:25:48.940 250022 DEBUG oslo_concurrency.lockutils [None req-7d010e3b-cb9b-4243-9239-4076b704cbb2 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:25:48 compute-0 nova_compute[250018]: 2026-01-20 14:25:48.941 250022 DEBUG oslo_concurrency.lockutils [None req-7d010e3b-cb9b-4243-9239-4076b704cbb2 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:25:48 compute-0 nova_compute[250018]: 2026-01-20 14:25:48.985 250022 DEBUG oslo_concurrency.processutils [None req-7d010e3b-cb9b-4243-9239-4076b704cbb2 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:25:49 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1034: 321 pgs: 321 active+clean; 424 MiB data, 514 MiB used, 20 GiB / 21 GiB avail; 8.2 MiB/s rd, 12 MiB/s wr, 309 op/s
Jan 20 14:25:49 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:25:49 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1280635441' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:25:49 compute-0 nova_compute[250018]: 2026-01-20 14:25:49.463 250022 DEBUG oslo_concurrency.processutils [None req-7d010e3b-cb9b-4243-9239-4076b704cbb2 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:25:49 compute-0 nova_compute[250018]: 2026-01-20 14:25:49.469 250022 DEBUG nova.compute.provider_tree [None req-7d010e3b-cb9b-4243-9239-4076b704cbb2 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:25:49 compute-0 nova_compute[250018]: 2026-01-20 14:25:49.482 250022 DEBUG nova.scheduler.client.report [None req-7d010e3b-cb9b-4243-9239-4076b704cbb2 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:25:49 compute-0 nova_compute[250018]: 2026-01-20 14:25:49.508 250022 DEBUG oslo_concurrency.lockutils [None req-7d010e3b-cb9b-4243-9239-4076b704cbb2 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.567s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:25:49 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/1280635441' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:25:49 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:25:49 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:25:49 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:25:49.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:25:49 compute-0 nova_compute[250018]: 2026-01-20 14:25:49.560 250022 DEBUG oslo_concurrency.lockutils [None req-7d010e3b-cb9b-4243-9239-4076b704cbb2 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] Lock "6091ab6e-2530-4b48-b482-00867d3c66c5" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 22.186s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:25:50 compute-0 nova_compute[250018]: 2026-01-20 14:25:50.082 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:25:50 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:25:50 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e142 do_prune osdmap full prune enabled
Jan 20 14:25:50 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e143 e143: 3 total, 3 up, 3 in
Jan 20 14:25:50 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e143: 3 total, 3 up, 3 in
Jan 20 14:25:50 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:25:50 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:25:50 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:25:50.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:25:50 compute-0 ceph-mon[74360]: pgmap v1034: 321 pgs: 321 active+clean; 424 MiB data, 514 MiB used, 20 GiB / 21 GiB avail; 8.2 MiB/s rd, 12 MiB/s wr, 309 op/s
Jan 20 14:25:50 compute-0 ceph-mon[74360]: osdmap e143: 3 total, 3 up, 3 in
Jan 20 14:25:51 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1036: 321 pgs: 321 active+clean; 397 MiB data, 500 MiB used, 20 GiB / 21 GiB avail; 6.1 MiB/s rd, 7.5 MiB/s wr, 216 op/s
Jan 20 14:25:51 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:25:51 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:25:51 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:25:51.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:25:52 compute-0 nova_compute[250018]: 2026-01-20 14:25:52.121 250022 DEBUG oslo_concurrency.lockutils [None req-37da2104-bb5e-4ac1-8037-5a453b9841a4 57d58248fa3b44579c14396dca4a2199 ef783a3b5dd3446faf947d627c64c5da - - default default] Acquiring lock "fcfc64da-5468-42a5-bf34-daa6db48df22" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:25:52 compute-0 nova_compute[250018]: 2026-01-20 14:25:52.121 250022 DEBUG oslo_concurrency.lockutils [None req-37da2104-bb5e-4ac1-8037-5a453b9841a4 57d58248fa3b44579c14396dca4a2199 ef783a3b5dd3446faf947d627c64c5da - - default default] Lock "fcfc64da-5468-42a5-bf34-daa6db48df22" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:25:52 compute-0 nova_compute[250018]: 2026-01-20 14:25:52.121 250022 DEBUG oslo_concurrency.lockutils [None req-37da2104-bb5e-4ac1-8037-5a453b9841a4 57d58248fa3b44579c14396dca4a2199 ef783a3b5dd3446faf947d627c64c5da - - default default] Acquiring lock "fcfc64da-5468-42a5-bf34-daa6db48df22-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:25:52 compute-0 nova_compute[250018]: 2026-01-20 14:25:52.122 250022 DEBUG oslo_concurrency.lockutils [None req-37da2104-bb5e-4ac1-8037-5a453b9841a4 57d58248fa3b44579c14396dca4a2199 ef783a3b5dd3446faf947d627c64c5da - - default default] Lock "fcfc64da-5468-42a5-bf34-daa6db48df22-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:25:52 compute-0 nova_compute[250018]: 2026-01-20 14:25:52.122 250022 DEBUG oslo_concurrency.lockutils [None req-37da2104-bb5e-4ac1-8037-5a453b9841a4 57d58248fa3b44579c14396dca4a2199 ef783a3b5dd3446faf947d627c64c5da - - default default] Lock "fcfc64da-5468-42a5-bf34-daa6db48df22-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:25:52 compute-0 nova_compute[250018]: 2026-01-20 14:25:52.123 250022 INFO nova.compute.manager [None req-37da2104-bb5e-4ac1-8037-5a453b9841a4 57d58248fa3b44579c14396dca4a2199 ef783a3b5dd3446faf947d627c64c5da - - default default] [instance: fcfc64da-5468-42a5-bf34-daa6db48df22] Terminating instance
Jan 20 14:25:52 compute-0 nova_compute[250018]: 2026-01-20 14:25:52.124 250022 DEBUG nova.compute.manager [None req-37da2104-bb5e-4ac1-8037-5a453b9841a4 57d58248fa3b44579c14396dca4a2199 ef783a3b5dd3446faf947d627c64c5da - - default default] [instance: fcfc64da-5468-42a5-bf34-daa6db48df22] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 14:25:52 compute-0 kernel: tap89adf3b7-65 (unregistering): left promiscuous mode
Jan 20 14:25:52 compute-0 NetworkManager[48960]: <info>  [1768919152.1798] device (tap89adf3b7-65): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 14:25:52 compute-0 ovn_controller[148666]: 2026-01-20T14:25:52Z|00042|binding|INFO|Releasing lport 89adf3b7-65a1-479e-bc6d-0f86de206593 from this chassis (sb_readonly=0)
Jan 20 14:25:52 compute-0 ovn_controller[148666]: 2026-01-20T14:25:52Z|00043|binding|INFO|Setting lport 89adf3b7-65a1-479e-bc6d-0f86de206593 down in Southbound
Jan 20 14:25:52 compute-0 nova_compute[250018]: 2026-01-20 14:25:52.187 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:25:52 compute-0 ovn_controller[148666]: 2026-01-20T14:25:52Z|00044|binding|INFO|Removing iface tap89adf3b7-65 ovn-installed in OVS
Jan 20 14:25:52 compute-0 nova_compute[250018]: 2026-01-20 14:25:52.189 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:25:52 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:25:52.199 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:65:71:f0 10.100.0.12'], port_security=['fa:16:3e:65:71:f0 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'fcfc64da-5468-42a5-bf34-daa6db48df22', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4d67e270-6232-44c0-a859-2ab75934074d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ef783a3b5dd3446faf947d627c64c5da', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c69a4378-df75-46a8-a919-3917216182c4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.235'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a00b20ec-8436-4c1b-b8fb-9d59f661148c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=89adf3b7-65a1-479e-bc6d-0f86de206593) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:25:52 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:25:52.200 160071 INFO neutron.agent.ovn.metadata.agent [-] Port 89adf3b7-65a1-479e-bc6d-0f86de206593 in datapath 4d67e270-6232-44c0-a859-2ab75934074d unbound from our chassis
Jan 20 14:25:52 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:25:52.202 160071 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4d67e270-6232-44c0-a859-2ab75934074d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 14:25:52 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:25:52.203 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[34109eca-0bf1-4448-b78f-2cf7753038ca]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:25:52 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:25:52.204 160071 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4d67e270-6232-44c0-a859-2ab75934074d namespace which is not needed anymore
Jan 20 14:25:52 compute-0 nova_compute[250018]: 2026-01-20 14:25:52.209 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:25:52 compute-0 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d0000000d.scope: Deactivated successfully.
Jan 20 14:25:52 compute-0 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d0000000d.scope: Consumed 14.977s CPU time.
Jan 20 14:25:52 compute-0 systemd-machined[216401]: Machine qemu-6-instance-0000000d terminated.
Jan 20 14:25:52 compute-0 nova_compute[250018]: 2026-01-20 14:25:52.260 250022 DEBUG oslo_concurrency.lockutils [None req-bb197d45-7740-48bd-993f-27c17574b88f c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] Acquiring lock "6091ab6e-2530-4b48-b482-00867d3c66c5" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:25:52 compute-0 nova_compute[250018]: 2026-01-20 14:25:52.261 250022 DEBUG oslo_concurrency.lockutils [None req-bb197d45-7740-48bd-993f-27c17574b88f c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] Lock "6091ab6e-2530-4b48-b482-00867d3c66c5" acquired by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:25:52 compute-0 nova_compute[250018]: 2026-01-20 14:25:52.262 250022 INFO nova.compute.manager [None req-bb197d45-7740-48bd-993f-27c17574b88f c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] [instance: 6091ab6e-2530-4b48-b482-00867d3c66c5] Unshelving
Jan 20 14:25:52 compute-0 nova_compute[250018]: 2026-01-20 14:25:52.346 250022 DEBUG oslo_concurrency.lockutils [None req-bb197d45-7740-48bd-993f-27c17574b88f c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:25:52 compute-0 nova_compute[250018]: 2026-01-20 14:25:52.346 250022 DEBUG oslo_concurrency.lockutils [None req-bb197d45-7740-48bd-993f-27c17574b88f c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:25:52 compute-0 nova_compute[250018]: 2026-01-20 14:25:52.353 250022 INFO nova.virt.libvirt.driver [-] [instance: fcfc64da-5468-42a5-bf34-daa6db48df22] Instance destroyed successfully.
Jan 20 14:25:52 compute-0 nova_compute[250018]: 2026-01-20 14:25:52.353 250022 DEBUG nova.objects.instance [None req-37da2104-bb5e-4ac1-8037-5a453b9841a4 57d58248fa3b44579c14396dca4a2199 ef783a3b5dd3446faf947d627c64c5da - - default default] Lazy-loading 'resources' on Instance uuid fcfc64da-5468-42a5-bf34-daa6db48df22 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:25:52 compute-0 nova_compute[250018]: 2026-01-20 14:25:52.355 250022 DEBUG nova.objects.instance [None req-bb197d45-7740-48bd-993f-27c17574b88f c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] Lazy-loading 'pci_requests' on Instance uuid 6091ab6e-2530-4b48-b482-00867d3c66c5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:25:52 compute-0 nova_compute[250018]: 2026-01-20 14:25:52.372 250022 DEBUG nova.objects.instance [None req-bb197d45-7740-48bd-993f-27c17574b88f c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] Lazy-loading 'numa_topology' on Instance uuid 6091ab6e-2530-4b48-b482-00867d3c66c5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:25:52 compute-0 nova_compute[250018]: 2026-01-20 14:25:52.374 250022 DEBUG nova.virt.libvirt.vif [None req-37da2104-bb5e-4ac1-8037-5a453b9841a4 57d58248fa3b44579c14396dca4a2199 ef783a3b5dd3446faf947d627c64c5da - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:25:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-2102451076',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-2102451076',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(18),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-2102451076',id=13,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=18,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPnZ/Cx8vMumXGvEI9547JEyMeRkGznqLk5Xz3oR+TXmoMMxw6ZcUZJSSPx9PRS1PfeH2my6tZBX8mJSWH6Q1mhQIN/hiJECzeN4ewqe8NWMqXUqY2ux8nHjNnGnzhLaeQ==',key_name='tempest-keypair-660220900',keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:25:31Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ef783a3b5dd3446faf947d627c64c5da',ramdisk_id='',reservation_id='r-09ngr9a8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-2064998848',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-2064998848-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T14:25:31Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='57d58248fa3b44579c14396dca4a2199',uuid=fcfc64da-5468-42a5-bf34-daa6db48df22,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "89adf3b7-65a1-479e-bc6d-0f86de206593", "address": "fa:16:3e:65:71:f0", "network": {"id": "4d67e270-6232-44c0-a859-2ab75934074d", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1442825192-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef783a3b5dd3446faf947d627c64c5da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap89adf3b7-65", "ovs_interfaceid": "89adf3b7-65a1-479e-bc6d-0f86de206593", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 14:25:52 compute-0 nova_compute[250018]: 2026-01-20 14:25:52.375 250022 DEBUG nova.network.os_vif_util [None req-37da2104-bb5e-4ac1-8037-5a453b9841a4 57d58248fa3b44579c14396dca4a2199 ef783a3b5dd3446faf947d627c64c5da - - default default] Converting VIF {"id": "89adf3b7-65a1-479e-bc6d-0f86de206593", "address": "fa:16:3e:65:71:f0", "network": {"id": "4d67e270-6232-44c0-a859-2ab75934074d", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1442825192-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef783a3b5dd3446faf947d627c64c5da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap89adf3b7-65", "ovs_interfaceid": "89adf3b7-65a1-479e-bc6d-0f86de206593", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:25:52 compute-0 nova_compute[250018]: 2026-01-20 14:25:52.375 250022 DEBUG nova.network.os_vif_util [None req-37da2104-bb5e-4ac1-8037-5a453b9841a4 57d58248fa3b44579c14396dca4a2199 ef783a3b5dd3446faf947d627c64c5da - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:65:71:f0,bridge_name='br-int',has_traffic_filtering=True,id=89adf3b7-65a1-479e-bc6d-0f86de206593,network=Network(4d67e270-6232-44c0-a859-2ab75934074d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap89adf3b7-65') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:25:52 compute-0 nova_compute[250018]: 2026-01-20 14:25:52.376 250022 DEBUG os_vif [None req-37da2104-bb5e-4ac1-8037-5a453b9841a4 57d58248fa3b44579c14396dca4a2199 ef783a3b5dd3446faf947d627c64c5da - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:65:71:f0,bridge_name='br-int',has_traffic_filtering=True,id=89adf3b7-65a1-479e-bc6d-0f86de206593,network=Network(4d67e270-6232-44c0-a859-2ab75934074d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap89adf3b7-65') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 14:25:52 compute-0 nova_compute[250018]: 2026-01-20 14:25:52.378 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:25:52 compute-0 nova_compute[250018]: 2026-01-20 14:25:52.378 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap89adf3b7-65, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:25:52 compute-0 nova_compute[250018]: 2026-01-20 14:25:52.379 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:25:52 compute-0 nova_compute[250018]: 2026-01-20 14:25:52.384 250022 DEBUG nova.compute.manager [req-a06490c1-7bcc-479d-84ed-450cf8ab20a3 req-c7c3bfe5-7e3d-4dd0-bb0f-15d0c7b7aa0c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: fcfc64da-5468-42a5-bf34-daa6db48df22] Received event network-vif-unplugged-89adf3b7-65a1-479e-bc6d-0f86de206593 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:25:52 compute-0 nova_compute[250018]: 2026-01-20 14:25:52.384 250022 DEBUG oslo_concurrency.lockutils [req-a06490c1-7bcc-479d-84ed-450cf8ab20a3 req-c7c3bfe5-7e3d-4dd0-bb0f-15d0c7b7aa0c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "fcfc64da-5468-42a5-bf34-daa6db48df22-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:25:52 compute-0 nova_compute[250018]: 2026-01-20 14:25:52.384 250022 DEBUG oslo_concurrency.lockutils [req-a06490c1-7bcc-479d-84ed-450cf8ab20a3 req-c7c3bfe5-7e3d-4dd0-bb0f-15d0c7b7aa0c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "fcfc64da-5468-42a5-bf34-daa6db48df22-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:25:52 compute-0 nova_compute[250018]: 2026-01-20 14:25:52.384 250022 DEBUG oslo_concurrency.lockutils [req-a06490c1-7bcc-479d-84ed-450cf8ab20a3 req-c7c3bfe5-7e3d-4dd0-bb0f-15d0c7b7aa0c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "fcfc64da-5468-42a5-bf34-daa6db48df22-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:25:52 compute-0 nova_compute[250018]: 2026-01-20 14:25:52.385 250022 DEBUG nova.compute.manager [req-a06490c1-7bcc-479d-84ed-450cf8ab20a3 req-c7c3bfe5-7e3d-4dd0-bb0f-15d0c7b7aa0c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: fcfc64da-5468-42a5-bf34-daa6db48df22] No waiting events found dispatching network-vif-unplugged-89adf3b7-65a1-479e-bc6d-0f86de206593 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:25:52 compute-0 nova_compute[250018]: 2026-01-20 14:25:52.385 250022 DEBUG nova.compute.manager [req-a06490c1-7bcc-479d-84ed-450cf8ab20a3 req-c7c3bfe5-7e3d-4dd0-bb0f-15d0c7b7aa0c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: fcfc64da-5468-42a5-bf34-daa6db48df22] Received event network-vif-unplugged-89adf3b7-65a1-479e-bc6d-0f86de206593 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 20 14:25:52 compute-0 nova_compute[250018]: 2026-01-20 14:25:52.386 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:25:52 compute-0 nova_compute[250018]: 2026-01-20 14:25:52.389 250022 INFO os_vif [None req-37da2104-bb5e-4ac1-8037-5a453b9841a4 57d58248fa3b44579c14396dca4a2199 ef783a3b5dd3446faf947d627c64c5da - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:65:71:f0,bridge_name='br-int',has_traffic_filtering=True,id=89adf3b7-65a1-479e-bc6d-0f86de206593,network=Network(4d67e270-6232-44c0-a859-2ab75934074d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap89adf3b7-65')
Jan 20 14:25:52 compute-0 nova_compute[250018]: 2026-01-20 14:25:52.405 250022 DEBUG nova.virt.hardware [None req-bb197d45-7740-48bd-993f-27c17574b88f c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 14:25:52 compute-0 nova_compute[250018]: 2026-01-20 14:25:52.406 250022 INFO nova.compute.claims [None req-bb197d45-7740-48bd-993f-27c17574b88f c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] [instance: 6091ab6e-2530-4b48-b482-00867d3c66c5] Claim successful on node compute-0.ctlplane.example.com
Jan 20 14:25:52 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:25:52 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:25:52 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:25:52.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:25:52 compute-0 neutron-haproxy-ovnmeta-4d67e270-6232-44c0-a859-2ab75934074d[261740]: [NOTICE]   (261744) : haproxy version is 2.8.14-c23fe91
Jan 20 14:25:52 compute-0 neutron-haproxy-ovnmeta-4d67e270-6232-44c0-a859-2ab75934074d[261740]: [NOTICE]   (261744) : path to executable is /usr/sbin/haproxy
Jan 20 14:25:52 compute-0 neutron-haproxy-ovnmeta-4d67e270-6232-44c0-a859-2ab75934074d[261740]: [WARNING]  (261744) : Exiting Master process...
Jan 20 14:25:52 compute-0 neutron-haproxy-ovnmeta-4d67e270-6232-44c0-a859-2ab75934074d[261740]: [WARNING]  (261744) : Exiting Master process...
Jan 20 14:25:52 compute-0 neutron-haproxy-ovnmeta-4d67e270-6232-44c0-a859-2ab75934074d[261740]: [ALERT]    (261744) : Current worker (261746) exited with code 143 (Terminated)
Jan 20 14:25:52 compute-0 neutron-haproxy-ovnmeta-4d67e270-6232-44c0-a859-2ab75934074d[261740]: [WARNING]  (261744) : All workers exited. Exiting... (0)
Jan 20 14:25:52 compute-0 systemd[1]: libpod-e3ac784481da361dc59a173add10d682b7ec5663101b16e6775a22aba33eb883.scope: Deactivated successfully.
Jan 20 14:25:52 compute-0 podman[262108]: 2026-01-20 14:25:52.465336646 +0000 UTC m=+0.155707445 container died e3ac784481da361dc59a173add10d682b7ec5663101b16e6775a22aba33eb883 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d67e270-6232-44c0-a859-2ab75934074d, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202)
Jan 20 14:25:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Optimize plan auto_2026-01-20_14:25:52
Jan 20 14:25:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 14:25:52 compute-0 ceph-mgr[74653]: [balancer INFO root] do_upmap
Jan 20 14:25:52 compute-0 ceph-mgr[74653]: [balancer INFO root] pools ['.mgr', 'vms', 'default.rgw.control', 'default.rgw.log', 'default.rgw.meta', 'cephfs.cephfs.data', 'images', 'backups', 'cephfs.cephfs.meta', '.rgw.root', 'volumes']
Jan 20 14:25:52 compute-0 ceph-mgr[74653]: [balancer INFO root] prepared 0/10 changes
Jan 20 14:25:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:25:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:25:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:25:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:25:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:25:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:25:52 compute-0 nova_compute[250018]: 2026-01-20 14:25:52.529 250022 DEBUG oslo_concurrency.processutils [None req-bb197d45-7740-48bd-993f-27c17574b88f c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:25:52 compute-0 ceph-mon[74360]: pgmap v1036: 321 pgs: 321 active+clean; 397 MiB data, 500 MiB used, 20 GiB / 21 GiB avail; 6.1 MiB/s rd, 7.5 MiB/s wr, 216 op/s
Jan 20 14:25:52 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e3ac784481da361dc59a173add10d682b7ec5663101b16e6775a22aba33eb883-userdata-shm.mount: Deactivated successfully.
Jan 20 14:25:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-4a34df1e2534c2ebc212de4b30d48cad879fb2cd34f8d99db93653fb9f1a5eef-merged.mount: Deactivated successfully.
Jan 20 14:25:52 compute-0 podman[262108]: 2026-01-20 14:25:52.66789392 +0000 UTC m=+0.358264719 container cleanup e3ac784481da361dc59a173add10d682b7ec5663101b16e6775a22aba33eb883 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d67e270-6232-44c0-a859-2ab75934074d, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 20 14:25:52 compute-0 systemd[1]: libpod-conmon-e3ac784481da361dc59a173add10d682b7ec5663101b16e6775a22aba33eb883.scope: Deactivated successfully.
Jan 20 14:25:52 compute-0 podman[262185]: 2026-01-20 14:25:52.741997624 +0000 UTC m=+0.048015512 container remove e3ac784481da361dc59a173add10d682b7ec5663101b16e6775a22aba33eb883 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d67e270-6232-44c0-a859-2ab75934074d, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:25:52 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:25:52.748 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[5577fa0c-b7a2-49cd-bb85-eeada9cac434]: (4, ('Tue Jan 20 02:25:52 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-4d67e270-6232-44c0-a859-2ab75934074d (e3ac784481da361dc59a173add10d682b7ec5663101b16e6775a22aba33eb883)\ne3ac784481da361dc59a173add10d682b7ec5663101b16e6775a22aba33eb883\nTue Jan 20 02:25:52 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-4d67e270-6232-44c0-a859-2ab75934074d (e3ac784481da361dc59a173add10d682b7ec5663101b16e6775a22aba33eb883)\ne3ac784481da361dc59a173add10d682b7ec5663101b16e6775a22aba33eb883\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:25:52 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:25:52.750 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[a2681e6b-8b0c-4c32-978c-c6f5f1c67bd1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:25:52 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:25:52.751 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4d67e270-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:25:52 compute-0 nova_compute[250018]: 2026-01-20 14:25:52.753 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:25:52 compute-0 kernel: tap4d67e270-60: left promiscuous mode
Jan 20 14:25:52 compute-0 nova_compute[250018]: 2026-01-20 14:25:52.767 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:25:52 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:25:52.769 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[2c1b08c5-e8c9-486d-83c4-896401fb00c8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:25:52 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:25:52.790 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[9f51bc0a-219d-45aa-83f9-6fac6444c8cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:25:52 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:25:52.790 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[46a26223-be51-49e2-b757-36fdde068cd0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:25:52 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:25:52.805 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[771ee975-f445-456d-859a-6fbfa565b965]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 507010, 'reachable_time': 44275, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 262201, 'error': None, 'target': 'ovnmeta-4d67e270-6232-44c0-a859-2ab75934074d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:25:52 compute-0 systemd[1]: run-netns-ovnmeta\x2d4d67e270\x2d6232\x2d44c0\x2da859\x2d2ab75934074d.mount: Deactivated successfully.
Jan 20 14:25:52 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:25:52.810 160517 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4d67e270-6232-44c0-a859-2ab75934074d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 14:25:52 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:25:52.810 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[3635d903-2c5f-45a7-8613-ca685332fa2f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:25:52 compute-0 nova_compute[250018]: 2026-01-20 14:25:52.869 250022 INFO nova.virt.libvirt.driver [None req-37da2104-bb5e-4ac1-8037-5a453b9841a4 57d58248fa3b44579c14396dca4a2199 ef783a3b5dd3446faf947d627c64c5da - - default default] [instance: fcfc64da-5468-42a5-bf34-daa6db48df22] Deleting instance files /var/lib/nova/instances/fcfc64da-5468-42a5-bf34-daa6db48df22_del
Jan 20 14:25:52 compute-0 nova_compute[250018]: 2026-01-20 14:25:52.871 250022 INFO nova.virt.libvirt.driver [None req-37da2104-bb5e-4ac1-8037-5a453b9841a4 57d58248fa3b44579c14396dca4a2199 ef783a3b5dd3446faf947d627c64c5da - - default default] [instance: fcfc64da-5468-42a5-bf34-daa6db48df22] Deletion of /var/lib/nova/instances/fcfc64da-5468-42a5-bf34-daa6db48df22_del complete
Jan 20 14:25:52 compute-0 nova_compute[250018]: 2026-01-20 14:25:52.928 250022 INFO nova.compute.manager [None req-37da2104-bb5e-4ac1-8037-5a453b9841a4 57d58248fa3b44579c14396dca4a2199 ef783a3b5dd3446faf947d627c64c5da - - default default] [instance: fcfc64da-5468-42a5-bf34-daa6db48df22] Took 0.80 seconds to destroy the instance on the hypervisor.
Jan 20 14:25:52 compute-0 nova_compute[250018]: 2026-01-20 14:25:52.929 250022 DEBUG oslo.service.loopingcall [None req-37da2104-bb5e-4ac1-8037-5a453b9841a4 57d58248fa3b44579c14396dca4a2199 ef783a3b5dd3446faf947d627c64c5da - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 14:25:52 compute-0 nova_compute[250018]: 2026-01-20 14:25:52.929 250022 DEBUG nova.compute.manager [-] [instance: fcfc64da-5468-42a5-bf34-daa6db48df22] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 14:25:52 compute-0 nova_compute[250018]: 2026-01-20 14:25:52.929 250022 DEBUG nova.network.neutron [-] [instance: fcfc64da-5468-42a5-bf34-daa6db48df22] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 14:25:52 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:25:52 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/775805918' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:25:52 compute-0 nova_compute[250018]: 2026-01-20 14:25:52.983 250022 DEBUG oslo_concurrency.processutils [None req-bb197d45-7740-48bd-993f-27c17574b88f c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:25:52 compute-0 nova_compute[250018]: 2026-01-20 14:25:52.988 250022 DEBUG nova.compute.provider_tree [None req-bb197d45-7740-48bd-993f-27c17574b88f c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:25:53 compute-0 nova_compute[250018]: 2026-01-20 14:25:53.008 250022 DEBUG nova.scheduler.client.report [None req-bb197d45-7740-48bd-993f-27c17574b88f c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:25:53 compute-0 nova_compute[250018]: 2026-01-20 14:25:53.031 250022 DEBUG oslo_concurrency.lockutils [None req-bb197d45-7740-48bd-993f-27c17574b88f c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.685s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:25:53 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1037: 321 pgs: 321 active+clean; 358 MiB data, 477 MiB used, 21 GiB / 21 GiB avail; 5.2 MiB/s rd, 6.4 MiB/s wr, 209 op/s
Jan 20 14:25:53 compute-0 ceph-mgr[74653]: client.0 ms_handle_reset on v2:192.168.122.100:6800/2542147622
Jan 20 14:25:53 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:25:53 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:25:53 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:25:53.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:25:53 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/775805918' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:25:53 compute-0 nova_compute[250018]: 2026-01-20 14:25:53.721 250022 DEBUG oslo_concurrency.lockutils [None req-bb197d45-7740-48bd-993f-27c17574b88f c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] Acquiring lock "refresh_cache-6091ab6e-2530-4b48-b482-00867d3c66c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:25:53 compute-0 nova_compute[250018]: 2026-01-20 14:25:53.721 250022 DEBUG oslo_concurrency.lockutils [None req-bb197d45-7740-48bd-993f-27c17574b88f c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] Acquired lock "refresh_cache-6091ab6e-2530-4b48-b482-00867d3c66c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:25:53 compute-0 nova_compute[250018]: 2026-01-20 14:25:53.721 250022 DEBUG nova.network.neutron [None req-bb197d45-7740-48bd-993f-27c17574b88f c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] [instance: 6091ab6e-2530-4b48-b482-00867d3c66c5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:25:53 compute-0 nova_compute[250018]: 2026-01-20 14:25:53.967 250022 DEBUG nova.network.neutron [None req-bb197d45-7740-48bd-993f-27c17574b88f c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] [instance: 6091ab6e-2530-4b48-b482-00867d3c66c5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 14:25:54 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:25:54 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:25:54 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:25:54.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:25:54 compute-0 nova_compute[250018]: 2026-01-20 14:25:54.553 250022 DEBUG nova.compute.manager [req-a8dea085-ea07-4bff-804b-b021aeed1d62 req-c97ed727-ceb4-4929-8b7e-0b1c043c0e2d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: fcfc64da-5468-42a5-bf34-daa6db48df22] Received event network-vif-plugged-89adf3b7-65a1-479e-bc6d-0f86de206593 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:25:54 compute-0 nova_compute[250018]: 2026-01-20 14:25:54.554 250022 DEBUG oslo_concurrency.lockutils [req-a8dea085-ea07-4bff-804b-b021aeed1d62 req-c97ed727-ceb4-4929-8b7e-0b1c043c0e2d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "fcfc64da-5468-42a5-bf34-daa6db48df22-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:25:54 compute-0 nova_compute[250018]: 2026-01-20 14:25:54.554 250022 DEBUG oslo_concurrency.lockutils [req-a8dea085-ea07-4bff-804b-b021aeed1d62 req-c97ed727-ceb4-4929-8b7e-0b1c043c0e2d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "fcfc64da-5468-42a5-bf34-daa6db48df22-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:25:54 compute-0 nova_compute[250018]: 2026-01-20 14:25:54.555 250022 DEBUG oslo_concurrency.lockutils [req-a8dea085-ea07-4bff-804b-b021aeed1d62 req-c97ed727-ceb4-4929-8b7e-0b1c043c0e2d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "fcfc64da-5468-42a5-bf34-daa6db48df22-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:25:54 compute-0 nova_compute[250018]: 2026-01-20 14:25:54.555 250022 DEBUG nova.compute.manager [req-a8dea085-ea07-4bff-804b-b021aeed1d62 req-c97ed727-ceb4-4929-8b7e-0b1c043c0e2d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: fcfc64da-5468-42a5-bf34-daa6db48df22] No waiting events found dispatching network-vif-plugged-89adf3b7-65a1-479e-bc6d-0f86de206593 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:25:54 compute-0 nova_compute[250018]: 2026-01-20 14:25:54.555 250022 WARNING nova.compute.manager [req-a8dea085-ea07-4bff-804b-b021aeed1d62 req-c97ed727-ceb4-4929-8b7e-0b1c043c0e2d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: fcfc64da-5468-42a5-bf34-daa6db48df22] Received unexpected event network-vif-plugged-89adf3b7-65a1-479e-bc6d-0f86de206593 for instance with vm_state active and task_state deleting.
Jan 20 14:25:54 compute-0 nova_compute[250018]: 2026-01-20 14:25:54.610 250022 DEBUG nova.network.neutron [None req-bb197d45-7740-48bd-993f-27c17574b88f c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] [instance: 6091ab6e-2530-4b48-b482-00867d3c66c5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:25:54 compute-0 nova_compute[250018]: 2026-01-20 14:25:54.624 250022 DEBUG oslo_concurrency.lockutils [None req-bb197d45-7740-48bd-993f-27c17574b88f c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] Releasing lock "refresh_cache-6091ab6e-2530-4b48-b482-00867d3c66c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:25:54 compute-0 nova_compute[250018]: 2026-01-20 14:25:54.625 250022 DEBUG nova.virt.libvirt.driver [None req-bb197d45-7740-48bd-993f-27c17574b88f c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] [instance: 6091ab6e-2530-4b48-b482-00867d3c66c5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 14:25:54 compute-0 nova_compute[250018]: 2026-01-20 14:25:54.626 250022 INFO nova.virt.libvirt.driver [None req-bb197d45-7740-48bd-993f-27c17574b88f c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] [instance: 6091ab6e-2530-4b48-b482-00867d3c66c5] Creating image(s)
Jan 20 14:25:54 compute-0 nova_compute[250018]: 2026-01-20 14:25:54.648 250022 DEBUG nova.storage.rbd_utils [None req-bb197d45-7740-48bd-993f-27c17574b88f c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] rbd image 6091ab6e-2530-4b48-b482-00867d3c66c5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:25:54 compute-0 nova_compute[250018]: 2026-01-20 14:25:54.651 250022 DEBUG nova.objects.instance [None req-bb197d45-7740-48bd-993f-27c17574b88f c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 6091ab6e-2530-4b48-b482-00867d3c66c5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:25:54 compute-0 nova_compute[250018]: 2026-01-20 14:25:54.709 250022 DEBUG nova.storage.rbd_utils [None req-bb197d45-7740-48bd-993f-27c17574b88f c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] rbd image 6091ab6e-2530-4b48-b482-00867d3c66c5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:25:54 compute-0 nova_compute[250018]: 2026-01-20 14:25:54.730 250022 DEBUG nova.storage.rbd_utils [None req-bb197d45-7740-48bd-993f-27c17574b88f c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] rbd image 6091ab6e-2530-4b48-b482-00867d3c66c5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:25:54 compute-0 nova_compute[250018]: 2026-01-20 14:25:54.734 250022 DEBUG oslo_concurrency.lockutils [None req-bb197d45-7740-48bd-993f-27c17574b88f c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] Acquiring lock "950a2a316610e38caa7473866572e049bea38041" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:25:54 compute-0 nova_compute[250018]: 2026-01-20 14:25:54.735 250022 DEBUG oslo_concurrency.lockutils [None req-bb197d45-7740-48bd-993f-27c17574b88f c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] Lock "950a2a316610e38caa7473866572e049bea38041" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:25:54 compute-0 nova_compute[250018]: 2026-01-20 14:25:54.790 250022 DEBUG nova.network.neutron [-] [instance: fcfc64da-5468-42a5-bf34-daa6db48df22] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:25:54 compute-0 nova_compute[250018]: 2026-01-20 14:25:54.806 250022 INFO nova.compute.manager [-] [instance: fcfc64da-5468-42a5-bf34-daa6db48df22] Took 1.88 seconds to deallocate network for instance.
Jan 20 14:25:54 compute-0 ceph-mon[74360]: pgmap v1037: 321 pgs: 321 active+clean; 358 MiB data, 477 MiB used, 21 GiB / 21 GiB avail; 5.2 MiB/s rd, 6.4 MiB/s wr, 209 op/s
Jan 20 14:25:54 compute-0 nova_compute[250018]: 2026-01-20 14:25:54.861 250022 DEBUG oslo_concurrency.lockutils [None req-37da2104-bb5e-4ac1-8037-5a453b9841a4 57d58248fa3b44579c14396dca4a2199 ef783a3b5dd3446faf947d627c64c5da - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:25:54 compute-0 nova_compute[250018]: 2026-01-20 14:25:54.862 250022 DEBUG oslo_concurrency.lockutils [None req-37da2104-bb5e-4ac1-8037-5a453b9841a4 57d58248fa3b44579c14396dca4a2199 ef783a3b5dd3446faf947d627c64c5da - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:25:54 compute-0 nova_compute[250018]: 2026-01-20 14:25:54.948 250022 DEBUG oslo_concurrency.processutils [None req-37da2104-bb5e-4ac1-8037-5a453b9841a4 57d58248fa3b44579c14396dca4a2199 ef783a3b5dd3446faf947d627c64c5da - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:25:54 compute-0 nova_compute[250018]: 2026-01-20 14:25:54.991 250022 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768919139.9907498, 6091ab6e-2530-4b48-b482-00867d3c66c5 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:25:54 compute-0 nova_compute[250018]: 2026-01-20 14:25:54.993 250022 INFO nova.compute.manager [-] [instance: 6091ab6e-2530-4b48-b482-00867d3c66c5] VM Stopped (Lifecycle Event)
Jan 20 14:25:55 compute-0 nova_compute[250018]: 2026-01-20 14:25:55.020 250022 DEBUG nova.compute.manager [None req-d8e64297-611a-4fda-b656-676deefcdee8 - - - - - -] [instance: 6091ab6e-2530-4b48-b482-00867d3c66c5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:25:55 compute-0 nova_compute[250018]: 2026-01-20 14:25:55.109 250022 DEBUG nova.virt.libvirt.imagebackend [None req-bb197d45-7740-48bd-993f-27c17574b88f c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] Image locations are: [{'url': 'rbd://e399cf45-e6b6-5393-99f1-75c601d3f188/images/9b402b85-ed3a-4c51-8c4a-aeeda86dfa7c/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://e399cf45-e6b6-5393-99f1-75c601d3f188/images/9b402b85-ed3a-4c51-8c4a-aeeda86dfa7c/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Jan 20 14:25:55 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:25:55 compute-0 nova_compute[250018]: 2026-01-20 14:25:55.183 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:25:55 compute-0 nova_compute[250018]: 2026-01-20 14:25:55.189 250022 DEBUG nova.virt.libvirt.imagebackend [None req-bb197d45-7740-48bd-993f-27c17574b88f c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] Selected location: {'url': 'rbd://e399cf45-e6b6-5393-99f1-75c601d3f188/images/9b402b85-ed3a-4c51-8c4a-aeeda86dfa7c/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Jan 20 14:25:55 compute-0 nova_compute[250018]: 2026-01-20 14:25:55.189 250022 DEBUG nova.storage.rbd_utils [None req-bb197d45-7740-48bd-993f-27c17574b88f c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] cloning images/9b402b85-ed3a-4c51-8c4a-aeeda86dfa7c@snap to None/6091ab6e-2530-4b48-b482-00867d3c66c5_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Jan 20 14:25:55 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1038: 321 pgs: 321 active+clean; 346 MiB data, 471 MiB used, 21 GiB / 21 GiB avail; 4.2 MiB/s rd, 5.2 MiB/s wr, 177 op/s
Jan 20 14:25:55 compute-0 nova_compute[250018]: 2026-01-20 14:25:55.322 250022 DEBUG oslo_concurrency.lockutils [None req-bb197d45-7740-48bd-993f-27c17574b88f c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] Lock "950a2a316610e38caa7473866572e049bea38041" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.588s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:25:55 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:25:55 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4189409054' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:25:55 compute-0 nova_compute[250018]: 2026-01-20 14:25:55.452 250022 DEBUG oslo_concurrency.processutils [None req-37da2104-bb5e-4ac1-8037-5a453b9841a4 57d58248fa3b44579c14396dca4a2199 ef783a3b5dd3446faf947d627c64c5da - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:25:55 compute-0 nova_compute[250018]: 2026-01-20 14:25:55.456 250022 DEBUG nova.objects.instance [None req-bb197d45-7740-48bd-993f-27c17574b88f c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] Lazy-loading 'migration_context' on Instance uuid 6091ab6e-2530-4b48-b482-00867d3c66c5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:25:55 compute-0 nova_compute[250018]: 2026-01-20 14:25:55.459 250022 DEBUG nova.compute.provider_tree [None req-37da2104-bb5e-4ac1-8037-5a453b9841a4 57d58248fa3b44579c14396dca4a2199 ef783a3b5dd3446faf947d627c64c5da - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:25:55 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:25:55 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:25:55 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:25:55.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:25:55 compute-0 nova_compute[250018]: 2026-01-20 14:25:55.635 250022 DEBUG nova.scheduler.client.report [None req-37da2104-bb5e-4ac1-8037-5a453b9841a4 57d58248fa3b44579c14396dca4a2199 ef783a3b5dd3446faf947d627c64c5da - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:25:55 compute-0 nova_compute[250018]: 2026-01-20 14:25:55.643 250022 DEBUG nova.storage.rbd_utils [None req-bb197d45-7740-48bd-993f-27c17574b88f c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] flattening vms/6091ab6e-2530-4b48-b482-00867d3c66c5_disk flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Jan 20 14:25:55 compute-0 ceph-mon[74360]: pgmap v1038: 321 pgs: 321 active+clean; 346 MiB data, 471 MiB used, 21 GiB / 21 GiB avail; 4.2 MiB/s rd, 5.2 MiB/s wr, 177 op/s
Jan 20 14:25:55 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/4189409054' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:25:55 compute-0 nova_compute[250018]: 2026-01-20 14:25:55.932 250022 DEBUG oslo_concurrency.lockutils [None req-37da2104-bb5e-4ac1-8037-5a453b9841a4 57d58248fa3b44579c14396dca4a2199 ef783a3b5dd3446faf947d627c64c5da - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.070s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:25:55 compute-0 nova_compute[250018]: 2026-01-20 14:25:55.951 250022 DEBUG nova.virt.libvirt.driver [None req-bb197d45-7740-48bd-993f-27c17574b88f c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] [instance: 6091ab6e-2530-4b48-b482-00867d3c66c5] Image rbd:vms/6091ab6e-2530-4b48-b482-00867d3c66c5_disk:id=openstack:conf=/etc/ceph/ceph.conf flattened successfully while unshelving instance. _try_fetch_image_cache /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11007
Jan 20 14:25:55 compute-0 nova_compute[250018]: 2026-01-20 14:25:55.952 250022 DEBUG nova.virt.libvirt.driver [None req-bb197d45-7740-48bd-993f-27c17574b88f c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] [instance: 6091ab6e-2530-4b48-b482-00867d3c66c5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 14:25:55 compute-0 nova_compute[250018]: 2026-01-20 14:25:55.952 250022 DEBUG nova.virt.libvirt.driver [None req-bb197d45-7740-48bd-993f-27c17574b88f c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] [instance: 6091ab6e-2530-4b48-b482-00867d3c66c5] Ensure instance console log exists: /var/lib/nova/instances/6091ab6e-2530-4b48-b482-00867d3c66c5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 14:25:55 compute-0 nova_compute[250018]: 2026-01-20 14:25:55.953 250022 DEBUG oslo_concurrency.lockutils [None req-bb197d45-7740-48bd-993f-27c17574b88f c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:25:55 compute-0 nova_compute[250018]: 2026-01-20 14:25:55.953 250022 DEBUG oslo_concurrency.lockutils [None req-bb197d45-7740-48bd-993f-27c17574b88f c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:25:55 compute-0 nova_compute[250018]: 2026-01-20 14:25:55.953 250022 DEBUG oslo_concurrency.lockutils [None req-bb197d45-7740-48bd-993f-27c17574b88f c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:25:55 compute-0 nova_compute[250018]: 2026-01-20 14:25:55.954 250022 DEBUG nova.virt.libvirt.driver [None req-bb197d45-7740-48bd-993f-27c17574b88f c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] [instance: 6091ab6e-2530-4b48-b482-00867d3c66c5] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='',container_format='bare',created_at=2026-01-20T14:25:27Z,direct_url=<?>,disk_format='raw',id=9b402b85-ed3a-4c51-8c4a-aeeda86dfa7c,min_disk=1,min_ram=0,name='tempest-UnshelveToHostMultiNodesTest-server-767584007-shelved',owner='14ebcff06a484899a9725832f1eddfdf',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2026-01-20T14:25:46Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encrypted': False, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'guest_format': None, 'size': 0, 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 14:25:55 compute-0 nova_compute[250018]: 2026-01-20 14:25:55.958 250022 WARNING nova.virt.libvirt.driver [None req-bb197d45-7740-48bd-993f-27c17574b88f c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:25:55 compute-0 nova_compute[250018]: 2026-01-20 14:25:55.962 250022 DEBUG nova.virt.libvirt.host [None req-bb197d45-7740-48bd-993f-27c17574b88f c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 14:25:55 compute-0 nova_compute[250018]: 2026-01-20 14:25:55.962 250022 DEBUG nova.virt.libvirt.host [None req-bb197d45-7740-48bd-993f-27c17574b88f c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 14:25:55 compute-0 nova_compute[250018]: 2026-01-20 14:25:55.964 250022 DEBUG nova.virt.libvirt.host [None req-bb197d45-7740-48bd-993f-27c17574b88f c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 14:25:55 compute-0 nova_compute[250018]: 2026-01-20 14:25:55.964 250022 DEBUG nova.virt.libvirt.host [None req-bb197d45-7740-48bd-993f-27c17574b88f c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 14:25:55 compute-0 nova_compute[250018]: 2026-01-20 14:25:55.965 250022 DEBUG nova.virt.libvirt.driver [None req-bb197d45-7740-48bd-993f-27c17574b88f c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 14:25:55 compute-0 nova_compute[250018]: 2026-01-20 14:25:55.965 250022 DEBUG nova.virt.hardware [None req-bb197d45-7740-48bd-993f-27c17574b88f c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2026-01-20T14:25:27Z,direct_url=<?>,disk_format='raw',id=9b402b85-ed3a-4c51-8c4a-aeeda86dfa7c,min_disk=1,min_ram=0,name='tempest-UnshelveToHostMultiNodesTest-server-767584007-shelved',owner='14ebcff06a484899a9725832f1eddfdf',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2026-01-20T14:25:46Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 14:25:55 compute-0 nova_compute[250018]: 2026-01-20 14:25:55.966 250022 DEBUG nova.virt.hardware [None req-bb197d45-7740-48bd-993f-27c17574b88f c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 14:25:55 compute-0 nova_compute[250018]: 2026-01-20 14:25:55.966 250022 DEBUG nova.virt.hardware [None req-bb197d45-7740-48bd-993f-27c17574b88f c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 14:25:55 compute-0 nova_compute[250018]: 2026-01-20 14:25:55.966 250022 DEBUG nova.virt.hardware [None req-bb197d45-7740-48bd-993f-27c17574b88f c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 14:25:55 compute-0 nova_compute[250018]: 2026-01-20 14:25:55.966 250022 DEBUG nova.virt.hardware [None req-bb197d45-7740-48bd-993f-27c17574b88f c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 14:25:55 compute-0 nova_compute[250018]: 2026-01-20 14:25:55.967 250022 DEBUG nova.virt.hardware [None req-bb197d45-7740-48bd-993f-27c17574b88f c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 14:25:55 compute-0 nova_compute[250018]: 2026-01-20 14:25:55.967 250022 DEBUG nova.virt.hardware [None req-bb197d45-7740-48bd-993f-27c17574b88f c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 14:25:55 compute-0 nova_compute[250018]: 2026-01-20 14:25:55.967 250022 DEBUG nova.virt.hardware [None req-bb197d45-7740-48bd-993f-27c17574b88f c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 14:25:55 compute-0 nova_compute[250018]: 2026-01-20 14:25:55.967 250022 DEBUG nova.virt.hardware [None req-bb197d45-7740-48bd-993f-27c17574b88f c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 14:25:55 compute-0 nova_compute[250018]: 2026-01-20 14:25:55.967 250022 DEBUG nova.virt.hardware [None req-bb197d45-7740-48bd-993f-27c17574b88f c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 14:25:55 compute-0 nova_compute[250018]: 2026-01-20 14:25:55.968 250022 DEBUG nova.virt.hardware [None req-bb197d45-7740-48bd-993f-27c17574b88f c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 14:25:55 compute-0 nova_compute[250018]: 2026-01-20 14:25:55.968 250022 DEBUG nova.objects.instance [None req-bb197d45-7740-48bd-993f-27c17574b88f c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 6091ab6e-2530-4b48-b482-00867d3c66c5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:25:56 compute-0 nova_compute[250018]: 2026-01-20 14:25:56.214 250022 INFO nova.scheduler.client.report [None req-37da2104-bb5e-4ac1-8037-5a453b9841a4 57d58248fa3b44579c14396dca4a2199 ef783a3b5dd3446faf947d627c64c5da - - default default] Deleted allocations for instance fcfc64da-5468-42a5-bf34-daa6db48df22
Jan 20 14:25:56 compute-0 nova_compute[250018]: 2026-01-20 14:25:56.233 250022 DEBUG oslo_concurrency.processutils [None req-bb197d45-7740-48bd-993f-27c17574b88f c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:25:56 compute-0 nova_compute[250018]: 2026-01-20 14:25:56.295 250022 DEBUG nova.compute.manager [req-eb6bc299-687f-4301-b88b-1d54e63539bb req-d1825f95-8027-419f-b181-4efdba7c3a6b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: fcfc64da-5468-42a5-bf34-daa6db48df22] Received event network-vif-deleted-89adf3b7-65a1-479e-bc6d-0f86de206593 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:25:56 compute-0 nova_compute[250018]: 2026-01-20 14:25:56.328 250022 DEBUG oslo_concurrency.lockutils [None req-37da2104-bb5e-4ac1-8037-5a453b9841a4 57d58248fa3b44579c14396dca4a2199 ef783a3b5dd3446faf947d627c64c5da - - default default] Lock "fcfc64da-5468-42a5-bf34-daa6db48df22" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.207s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:25:56 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:25:56 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:25:56 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:25:56.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:25:56 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:25:56 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1687057013' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:25:56 compute-0 nova_compute[250018]: 2026-01-20 14:25:56.671 250022 DEBUG oslo_concurrency.processutils [None req-bb197d45-7740-48bd-993f-27c17574b88f c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:25:56 compute-0 nova_compute[250018]: 2026-01-20 14:25:56.696 250022 DEBUG nova.storage.rbd_utils [None req-bb197d45-7740-48bd-993f-27c17574b88f c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] rbd image 6091ab6e-2530-4b48-b482-00867d3c66c5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:25:56 compute-0 nova_compute[250018]: 2026-01-20 14:25:56.700 250022 DEBUG oslo_concurrency.processutils [None req-bb197d45-7740-48bd-993f-27c17574b88f c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:25:57 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:25:57 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3940468679' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:25:57 compute-0 nova_compute[250018]: 2026-01-20 14:25:57.098 250022 DEBUG oslo_concurrency.processutils [None req-bb197d45-7740-48bd-993f-27c17574b88f c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.398s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:25:57 compute-0 nova_compute[250018]: 2026-01-20 14:25:57.101 250022 DEBUG nova.objects.instance [None req-bb197d45-7740-48bd-993f-27c17574b88f c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] Lazy-loading 'pci_devices' on Instance uuid 6091ab6e-2530-4b48-b482-00867d3c66c5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:25:57 compute-0 nova_compute[250018]: 2026-01-20 14:25:57.126 250022 DEBUG nova.virt.libvirt.driver [None req-bb197d45-7740-48bd-993f-27c17574b88f c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] [instance: 6091ab6e-2530-4b48-b482-00867d3c66c5] End _get_guest_xml xml=<domain type="kvm">
Jan 20 14:25:57 compute-0 nova_compute[250018]:   <uuid>6091ab6e-2530-4b48-b482-00867d3c66c5</uuid>
Jan 20 14:25:57 compute-0 nova_compute[250018]:   <name>instance-0000000e</name>
Jan 20 14:25:57 compute-0 nova_compute[250018]:   <memory>131072</memory>
Jan 20 14:25:57 compute-0 nova_compute[250018]:   <vcpu>1</vcpu>
Jan 20 14:25:57 compute-0 nova_compute[250018]:   <metadata>
Jan 20 14:25:57 compute-0 nova_compute[250018]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 14:25:57 compute-0 nova_compute[250018]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 14:25:57 compute-0 nova_compute[250018]:       <nova:name>tempest-UnshelveToHostMultiNodesTest-server-767584007</nova:name>
Jan 20 14:25:57 compute-0 nova_compute[250018]:       <nova:creationTime>2026-01-20 14:25:55</nova:creationTime>
Jan 20 14:25:57 compute-0 nova_compute[250018]:       <nova:flavor name="m1.nano">
Jan 20 14:25:57 compute-0 nova_compute[250018]:         <nova:memory>128</nova:memory>
Jan 20 14:25:57 compute-0 nova_compute[250018]:         <nova:disk>1</nova:disk>
Jan 20 14:25:57 compute-0 nova_compute[250018]:         <nova:swap>0</nova:swap>
Jan 20 14:25:57 compute-0 nova_compute[250018]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 14:25:57 compute-0 nova_compute[250018]:         <nova:vcpus>1</nova:vcpus>
Jan 20 14:25:57 compute-0 nova_compute[250018]:       </nova:flavor>
Jan 20 14:25:57 compute-0 nova_compute[250018]:       <nova:owner>
Jan 20 14:25:57 compute-0 nova_compute[250018]:         <nova:user uuid="8ea9f3cd2cbb462a8ecbb488e6a1a25d">tempest-UnshelveToHostMultiNodesTest-997401309-project-member</nova:user>
Jan 20 14:25:57 compute-0 nova_compute[250018]:         <nova:project uuid="14ebcff06a484899a9725832f1eddfdf">tempest-UnshelveToHostMultiNodesTest-997401309</nova:project>
Jan 20 14:25:57 compute-0 nova_compute[250018]:       </nova:owner>
Jan 20 14:25:57 compute-0 nova_compute[250018]:       <nova:root type="image" uuid="9b402b85-ed3a-4c51-8c4a-aeeda86dfa7c"/>
Jan 20 14:25:57 compute-0 nova_compute[250018]:       <nova:ports/>
Jan 20 14:25:57 compute-0 nova_compute[250018]:     </nova:instance>
Jan 20 14:25:57 compute-0 nova_compute[250018]:   </metadata>
Jan 20 14:25:57 compute-0 nova_compute[250018]:   <sysinfo type="smbios">
Jan 20 14:25:57 compute-0 nova_compute[250018]:     <system>
Jan 20 14:25:57 compute-0 nova_compute[250018]:       <entry name="manufacturer">RDO</entry>
Jan 20 14:25:57 compute-0 nova_compute[250018]:       <entry name="product">OpenStack Compute</entry>
Jan 20 14:25:57 compute-0 nova_compute[250018]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 14:25:57 compute-0 nova_compute[250018]:       <entry name="serial">6091ab6e-2530-4b48-b482-00867d3c66c5</entry>
Jan 20 14:25:57 compute-0 nova_compute[250018]:       <entry name="uuid">6091ab6e-2530-4b48-b482-00867d3c66c5</entry>
Jan 20 14:25:57 compute-0 nova_compute[250018]:       <entry name="family">Virtual Machine</entry>
Jan 20 14:25:57 compute-0 nova_compute[250018]:     </system>
Jan 20 14:25:57 compute-0 nova_compute[250018]:   </sysinfo>
Jan 20 14:25:57 compute-0 nova_compute[250018]:   <os>
Jan 20 14:25:57 compute-0 nova_compute[250018]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 14:25:57 compute-0 nova_compute[250018]:     <boot dev="hd"/>
Jan 20 14:25:57 compute-0 nova_compute[250018]:     <smbios mode="sysinfo"/>
Jan 20 14:25:57 compute-0 nova_compute[250018]:   </os>
Jan 20 14:25:57 compute-0 nova_compute[250018]:   <features>
Jan 20 14:25:57 compute-0 nova_compute[250018]:     <acpi/>
Jan 20 14:25:57 compute-0 nova_compute[250018]:     <apic/>
Jan 20 14:25:57 compute-0 nova_compute[250018]:     <vmcoreinfo/>
Jan 20 14:25:57 compute-0 nova_compute[250018]:   </features>
Jan 20 14:25:57 compute-0 nova_compute[250018]:   <clock offset="utc">
Jan 20 14:25:57 compute-0 nova_compute[250018]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 14:25:57 compute-0 nova_compute[250018]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 14:25:57 compute-0 nova_compute[250018]:     <timer name="hpet" present="no"/>
Jan 20 14:25:57 compute-0 nova_compute[250018]:   </clock>
Jan 20 14:25:57 compute-0 nova_compute[250018]:   <cpu mode="custom" match="exact">
Jan 20 14:25:57 compute-0 nova_compute[250018]:     <model>Nehalem</model>
Jan 20 14:25:57 compute-0 nova_compute[250018]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 14:25:57 compute-0 nova_compute[250018]:   </cpu>
Jan 20 14:25:57 compute-0 nova_compute[250018]:   <devices>
Jan 20 14:25:57 compute-0 nova_compute[250018]:     <disk type="network" device="disk">
Jan 20 14:25:57 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 14:25:57 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/6091ab6e-2530-4b48-b482-00867d3c66c5_disk">
Jan 20 14:25:57 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:25:57 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:25:57 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:25:57 compute-0 nova_compute[250018]:       </source>
Jan 20 14:25:57 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 14:25:57 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:25:57 compute-0 nova_compute[250018]:       </auth>
Jan 20 14:25:57 compute-0 nova_compute[250018]:       <target dev="vda" bus="virtio"/>
Jan 20 14:25:57 compute-0 nova_compute[250018]:     </disk>
Jan 20 14:25:57 compute-0 nova_compute[250018]:     <disk type="network" device="cdrom">
Jan 20 14:25:57 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 14:25:57 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/6091ab6e-2530-4b48-b482-00867d3c66c5_disk.config">
Jan 20 14:25:57 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:25:57 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:25:57 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:25:57 compute-0 nova_compute[250018]:       </source>
Jan 20 14:25:57 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 14:25:57 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:25:57 compute-0 nova_compute[250018]:       </auth>
Jan 20 14:25:57 compute-0 nova_compute[250018]:       <target dev="sda" bus="sata"/>
Jan 20 14:25:57 compute-0 nova_compute[250018]:     </disk>
Jan 20 14:25:57 compute-0 nova_compute[250018]:     <serial type="pty">
Jan 20 14:25:57 compute-0 nova_compute[250018]:       <log file="/var/lib/nova/instances/6091ab6e-2530-4b48-b482-00867d3c66c5/console.log" append="off"/>
Jan 20 14:25:57 compute-0 nova_compute[250018]:     </serial>
Jan 20 14:25:57 compute-0 nova_compute[250018]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 14:25:57 compute-0 nova_compute[250018]:     <video>
Jan 20 14:25:57 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 14:25:57 compute-0 nova_compute[250018]:     </video>
Jan 20 14:25:57 compute-0 nova_compute[250018]:     <input type="tablet" bus="usb"/>
Jan 20 14:25:57 compute-0 nova_compute[250018]:     <input type="keyboard" bus="usb"/>
Jan 20 14:25:57 compute-0 nova_compute[250018]:     <rng model="virtio">
Jan 20 14:25:57 compute-0 nova_compute[250018]:       <backend model="random">/dev/urandom</backend>
Jan 20 14:25:57 compute-0 nova_compute[250018]:     </rng>
Jan 20 14:25:57 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root"/>
Jan 20 14:25:57 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:25:57 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:25:57 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:25:57 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:25:57 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:25:57 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:25:57 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:25:57 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:25:57 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:25:57 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:25:57 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:25:57 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:25:57 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:25:57 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:25:57 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:25:57 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:25:57 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:25:57 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:25:57 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:25:57 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:25:57 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:25:57 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:25:57 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:25:57 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:25:57 compute-0 nova_compute[250018]:     <controller type="usb" index="0"/>
Jan 20 14:25:57 compute-0 nova_compute[250018]:     <memballoon model="virtio">
Jan 20 14:25:57 compute-0 nova_compute[250018]:       <stats period="10"/>
Jan 20 14:25:57 compute-0 nova_compute[250018]:     </memballoon>
Jan 20 14:25:57 compute-0 nova_compute[250018]:   </devices>
Jan 20 14:25:57 compute-0 nova_compute[250018]: </domain>
Jan 20 14:25:57 compute-0 nova_compute[250018]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 14:25:57 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/1687057013' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:25:57 compute-0 podman[262506]: 2026-01-20 14:25:57.217236868 +0000 UTC m=+0.051914403 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 20 14:25:57 compute-0 nova_compute[250018]: 2026-01-20 14:25:57.215 250022 DEBUG nova.virt.libvirt.driver [None req-bb197d45-7740-48bd-993f-27c17574b88f c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:25:57 compute-0 nova_compute[250018]: 2026-01-20 14:25:57.215 250022 DEBUG nova.virt.libvirt.driver [None req-bb197d45-7740-48bd-993f-27c17574b88f c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:25:57 compute-0 nova_compute[250018]: 2026-01-20 14:25:57.216 250022 INFO nova.virt.libvirt.driver [None req-bb197d45-7740-48bd-993f-27c17574b88f c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] [instance: 6091ab6e-2530-4b48-b482-00867d3c66c5] Using config drive
Jan 20 14:25:57 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1039: 321 pgs: 321 active+clean; 336 MiB data, 436 MiB used, 21 GiB / 21 GiB avail; 2.7 MiB/s rd, 3.2 MiB/s wr, 143 op/s
Jan 20 14:25:57 compute-0 nova_compute[250018]: 2026-01-20 14:25:57.240 250022 DEBUG nova.storage.rbd_utils [None req-bb197d45-7740-48bd-993f-27c17574b88f c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] rbd image 6091ab6e-2530-4b48-b482-00867d3c66c5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:25:57 compute-0 nova_compute[250018]: 2026-01-20 14:25:57.259 250022 DEBUG nova.objects.instance [None req-bb197d45-7740-48bd-993f-27c17574b88f c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 6091ab6e-2530-4b48-b482-00867d3c66c5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:25:57 compute-0 podman[262505]: 2026-01-20 14:25:57.264545089 +0000 UTC m=+0.093001051 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 20 14:25:57 compute-0 nova_compute[250018]: 2026-01-20 14:25:57.326 250022 DEBUG nova.objects.instance [None req-bb197d45-7740-48bd-993f-27c17574b88f c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] Lazy-loading 'keypairs' on Instance uuid 6091ab6e-2530-4b48-b482-00867d3c66c5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:25:57 compute-0 nova_compute[250018]: 2026-01-20 14:25:57.380 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:25:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 14:25:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:25:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 14:25:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:25:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:25:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:25:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:25:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:25:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:25:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:25:57 compute-0 sudo[262566]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:25:57 compute-0 sudo[262566]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:25:57 compute-0 sudo[262566]: pam_unix(sudo:session): session closed for user root
Jan 20 14:25:57 compute-0 sudo[262591]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:25:57 compute-0 sudo[262591]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:25:57 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:25:57 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:25:57 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:25:57.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:25:57 compute-0 sudo[262591]: pam_unix(sudo:session): session closed for user root
Jan 20 14:25:58 compute-0 nova_compute[250018]: 2026-01-20 14:25:58.122 250022 INFO nova.virt.libvirt.driver [None req-bb197d45-7740-48bd-993f-27c17574b88f c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] [instance: 6091ab6e-2530-4b48-b482-00867d3c66c5] Creating config drive at /var/lib/nova/instances/6091ab6e-2530-4b48-b482-00867d3c66c5/disk.config
Jan 20 14:25:58 compute-0 nova_compute[250018]: 2026-01-20 14:25:58.134 250022 DEBUG oslo_concurrency.processutils [None req-bb197d45-7740-48bd-993f-27c17574b88f c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6091ab6e-2530-4b48-b482-00867d3c66c5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8qedz2xs execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:25:58 compute-0 nova_compute[250018]: 2026-01-20 14:25:58.286 250022 DEBUG oslo_concurrency.processutils [None req-bb197d45-7740-48bd-993f-27c17574b88f c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6091ab6e-2530-4b48-b482-00867d3c66c5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8qedz2xs" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:25:58 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:25:58 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:25:58 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:25:58.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:25:58 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3940468679' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:25:58 compute-0 ceph-mon[74360]: pgmap v1039: 321 pgs: 321 active+clean; 336 MiB data, 436 MiB used, 21 GiB / 21 GiB avail; 2.7 MiB/s rd, 3.2 MiB/s wr, 143 op/s
Jan 20 14:25:58 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/1159291882' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:25:58 compute-0 nova_compute[250018]: 2026-01-20 14:25:58.600 250022 DEBUG nova.storage.rbd_utils [None req-bb197d45-7740-48bd-993f-27c17574b88f c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] rbd image 6091ab6e-2530-4b48-b482-00867d3c66c5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:25:58 compute-0 nova_compute[250018]: 2026-01-20 14:25:58.604 250022 DEBUG oslo_concurrency.processutils [None req-bb197d45-7740-48bd-993f-27c17574b88f c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6091ab6e-2530-4b48-b482-00867d3c66c5/disk.config 6091ab6e-2530-4b48-b482-00867d3c66c5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:25:58 compute-0 nova_compute[250018]: 2026-01-20 14:25:58.801 250022 DEBUG oslo_concurrency.processutils [None req-bb197d45-7740-48bd-993f-27c17574b88f c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6091ab6e-2530-4b48-b482-00867d3c66c5/disk.config 6091ab6e-2530-4b48-b482-00867d3c66c5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.197s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:25:58 compute-0 nova_compute[250018]: 2026-01-20 14:25:58.802 250022 INFO nova.virt.libvirt.driver [None req-bb197d45-7740-48bd-993f-27c17574b88f c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] [instance: 6091ab6e-2530-4b48-b482-00867d3c66c5] Deleting local config drive /var/lib/nova/instances/6091ab6e-2530-4b48-b482-00867d3c66c5/disk.config because it was imported into RBD.
Jan 20 14:25:58 compute-0 systemd-machined[216401]: New machine qemu-7-instance-0000000e.
Jan 20 14:25:58 compute-0 systemd[1]: Started Virtual Machine qemu-7-instance-0000000e.
Jan 20 14:25:59 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1040: 321 pgs: 321 active+clean; 376 MiB data, 455 MiB used, 21 GiB / 21 GiB avail; 4.7 MiB/s rd, 5.3 MiB/s wr, 160 op/s
Jan 20 14:25:59 compute-0 nova_compute[250018]: 2026-01-20 14:25:59.274 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768919159.2738202, 6091ab6e-2530-4b48-b482-00867d3c66c5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:25:59 compute-0 nova_compute[250018]: 2026-01-20 14:25:59.274 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 6091ab6e-2530-4b48-b482-00867d3c66c5] VM Resumed (Lifecycle Event)
Jan 20 14:25:59 compute-0 nova_compute[250018]: 2026-01-20 14:25:59.277 250022 DEBUG nova.compute.manager [None req-bb197d45-7740-48bd-993f-27c17574b88f c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] [instance: 6091ab6e-2530-4b48-b482-00867d3c66c5] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 14:25:59 compute-0 nova_compute[250018]: 2026-01-20 14:25:59.277 250022 DEBUG nova.virt.libvirt.driver [None req-bb197d45-7740-48bd-993f-27c17574b88f c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] [instance: 6091ab6e-2530-4b48-b482-00867d3c66c5] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 14:25:59 compute-0 nova_compute[250018]: 2026-01-20 14:25:59.281 250022 INFO nova.virt.libvirt.driver [-] [instance: 6091ab6e-2530-4b48-b482-00867d3c66c5] Instance spawned successfully.
Jan 20 14:25:59 compute-0 nova_compute[250018]: 2026-01-20 14:25:59.293 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 6091ab6e-2530-4b48-b482-00867d3c66c5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:25:59 compute-0 nova_compute[250018]: 2026-01-20 14:25:59.296 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 6091ab6e-2530-4b48-b482-00867d3c66c5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:25:59 compute-0 nova_compute[250018]: 2026-01-20 14:25:59.321 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 6091ab6e-2530-4b48-b482-00867d3c66c5] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:25:59 compute-0 nova_compute[250018]: 2026-01-20 14:25:59.321 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768919159.277284, 6091ab6e-2530-4b48-b482-00867d3c66c5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:25:59 compute-0 nova_compute[250018]: 2026-01-20 14:25:59.321 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 6091ab6e-2530-4b48-b482-00867d3c66c5] VM Started (Lifecycle Event)
Jan 20 14:25:59 compute-0 nova_compute[250018]: 2026-01-20 14:25:59.344 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 6091ab6e-2530-4b48-b482-00867d3c66c5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:25:59 compute-0 nova_compute[250018]: 2026-01-20 14:25:59.347 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 6091ab6e-2530-4b48-b482-00867d3c66c5] Synchronizing instance power state after lifecycle event "Started"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:25:59 compute-0 nova_compute[250018]: 2026-01-20 14:25:59.389 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 6091ab6e-2530-4b48-b482-00867d3c66c5] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:25:59 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:25:59 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:25:59 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:25:59.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:25:59 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/4228894258' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:25:59 compute-0 ceph-mon[74360]: pgmap v1040: 321 pgs: 321 active+clean; 376 MiB data, 455 MiB used, 21 GiB / 21 GiB avail; 4.7 MiB/s rd, 5.3 MiB/s wr, 160 op/s
Jan 20 14:26:00 compute-0 nova_compute[250018]: 2026-01-20 14:26:00.136 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:26:00 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:26:00 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:26:00 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:26:00 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:26:00.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:26:00 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e143 do_prune osdmap full prune enabled
Jan 20 14:26:00 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e144 e144: 3 total, 3 up, 3 in
Jan 20 14:26:00 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e144: 3 total, 3 up, 3 in
Jan 20 14:26:00 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/3380735433' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:26:01 compute-0 nova_compute[250018]: 2026-01-20 14:26:01.162 250022 DEBUG nova.compute.manager [None req-bb197d45-7740-48bd-993f-27c17574b88f c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] [instance: 6091ab6e-2530-4b48-b482-00867d3c66c5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:26:01 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1042: 321 pgs: 4 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 315 active+clean; 347 MiB data, 487 MiB used, 21 GiB / 21 GiB avail; 6.1 MiB/s rd, 5.9 MiB/s wr, 243 op/s
Jan 20 14:26:01 compute-0 nova_compute[250018]: 2026-01-20 14:26:01.269 250022 DEBUG oslo_concurrency.lockutils [None req-bb197d45-7740-48bd-993f-27c17574b88f c85759c031f744d2b9774757c7eb3cc2 95f7d246c566473eb07dba860a310578 - - default default] Lock "6091ab6e-2530-4b48-b482-00867d3c66c5" "released" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: held 9.008s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:26:01 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:26:01 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:26:01 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:26:01.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:26:01 compute-0 ceph-mon[74360]: osdmap e144: 3 total, 3 up, 3 in
Jan 20 14:26:01 compute-0 ceph-mon[74360]: pgmap v1042: 321 pgs: 4 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 315 active+clean; 347 MiB data, 487 MiB used, 21 GiB / 21 GiB avail; 6.1 MiB/s rd, 5.9 MiB/s wr, 243 op/s
Jan 20 14:26:02 compute-0 nova_compute[250018]: 2026-01-20 14:26:02.383 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:26:02 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:26:02 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:26:02 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:26:02.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:26:03 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1043: 321 pgs: 4 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 315 active+clean; 359 MiB data, 492 MiB used, 21 GiB / 21 GiB avail; 6.5 MiB/s rd, 6.8 MiB/s wr, 278 op/s
Jan 20 14:26:03 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:26:03 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:26:03 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:26:03.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:26:04 compute-0 ceph-mon[74360]: pgmap v1043: 321 pgs: 4 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 315 active+clean; 359 MiB data, 492 MiB used, 21 GiB / 21 GiB avail; 6.5 MiB/s rd, 6.8 MiB/s wr, 278 op/s
Jan 20 14:26:04 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/1375991866' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:26:04 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/1375991866' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:26:04 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/3068101538' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:26:04 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:26:04 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:26:04 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:26:04.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:26:04 compute-0 nova_compute[250018]: 2026-01-20 14:26:04.729 250022 DEBUG oslo_concurrency.lockutils [None req-fce66b71-9a4a-44b6-bded-2e1d3284673b 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] Acquiring lock "6091ab6e-2530-4b48-b482-00867d3c66c5" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:26:04 compute-0 nova_compute[250018]: 2026-01-20 14:26:04.730 250022 DEBUG oslo_concurrency.lockutils [None req-fce66b71-9a4a-44b6-bded-2e1d3284673b 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] Lock "6091ab6e-2530-4b48-b482-00867d3c66c5" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:26:04 compute-0 nova_compute[250018]: 2026-01-20 14:26:04.730 250022 INFO nova.compute.manager [None req-fce66b71-9a4a-44b6-bded-2e1d3284673b 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] [instance: 6091ab6e-2530-4b48-b482-00867d3c66c5] Shelving
Jan 20 14:26:04 compute-0 nova_compute[250018]: 2026-01-20 14:26:04.753 250022 DEBUG nova.virt.libvirt.driver [None req-fce66b71-9a4a-44b6-bded-2e1d3284673b 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] [instance: 6091ab6e-2530-4b48-b482-00867d3c66c5] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Jan 20 14:26:05 compute-0 nova_compute[250018]: 2026-01-20 14:26:05.139 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:26:05 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:26:05 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1044: 321 pgs: 4 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 315 active+clean; 301 MiB data, 478 MiB used, 21 GiB / 21 GiB avail; 7.1 MiB/s rd, 6.8 MiB/s wr, 303 op/s
Jan 20 14:26:05 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/3573996431' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:26:05 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/1157289147' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:26:05 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:26:05 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:26:05 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:26:05.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:26:06 compute-0 ceph-mon[74360]: pgmap v1044: 321 pgs: 4 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 315 active+clean; 301 MiB data, 478 MiB used, 21 GiB / 21 GiB avail; 7.1 MiB/s rd, 6.8 MiB/s wr, 303 op/s
Jan 20 14:26:06 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:26:06 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:26:06 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:26:06.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:26:07 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1045: 321 pgs: 321 active+clean; 199 MiB data, 406 MiB used, 21 GiB / 21 GiB avail; 4.5 MiB/s rd, 3.6 MiB/s wr, 254 op/s
Jan 20 14:26:07 compute-0 nova_compute[250018]: 2026-01-20 14:26:07.353 250022 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768919152.3515394, fcfc64da-5468-42a5-bf34-daa6db48df22 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:26:07 compute-0 nova_compute[250018]: 2026-01-20 14:26:07.353 250022 INFO nova.compute.manager [-] [instance: fcfc64da-5468-42a5-bf34-daa6db48df22] VM Stopped (Lifecycle Event)
Jan 20 14:26:07 compute-0 nova_compute[250018]: 2026-01-20 14:26:07.370 250022 DEBUG nova.compute.manager [None req-8849d46b-6b32-4916-b805-3e697480e1f0 - - - - - -] [instance: fcfc64da-5468-42a5-bf34-daa6db48df22] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:26:07 compute-0 nova_compute[250018]: 2026-01-20 14:26:07.438 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:26:07 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:26:07 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 20 14:26:07 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:26:07.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 20 14:26:08 compute-0 ceph-mon[74360]: pgmap v1045: 321 pgs: 321 active+clean; 199 MiB data, 406 MiB used, 21 GiB / 21 GiB avail; 4.5 MiB/s rd, 3.6 MiB/s wr, 254 op/s
Jan 20 14:26:08 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:26:08 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 20 14:26:08 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:26:08.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 20 14:26:09 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1046: 321 pgs: 321 active+clean; 169 MiB data, 370 MiB used, 21 GiB / 21 GiB avail; 2.4 MiB/s rd, 1.6 MiB/s wr, 232 op/s
Jan 20 14:26:09 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:26:09 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:26:09 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:26:09.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:26:10 compute-0 nova_compute[250018]: 2026-01-20 14:26:10.140 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:26:10 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:26:10 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e144 do_prune osdmap full prune enabled
Jan 20 14:26:10 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:26:10 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:26:10 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:26:10.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:26:10 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e145 e145: 3 total, 3 up, 3 in
Jan 20 14:26:10 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e145: 3 total, 3 up, 3 in
Jan 20 14:26:10 compute-0 ceph-mon[74360]: pgmap v1046: 321 pgs: 321 active+clean; 169 MiB data, 370 MiB used, 21 GiB / 21 GiB avail; 2.4 MiB/s rd, 1.6 MiB/s wr, 232 op/s
Jan 20 14:26:10 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/1338130560' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:26:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 14:26:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:26:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 20 14:26:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:26:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.003157970116787642 of space, bias 1.0, pg target 0.9473910350362926 quantized to 32 (current 32)
Jan 20 14:26:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:26:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 20 14:26:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:26:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:26:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:26:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 20 14:26:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:26:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 20 14:26:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:26:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:26:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:26:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 20 14:26:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:26:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 20 14:26:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:26:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:26:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:26:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 20 14:26:11 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1048: 321 pgs: 321 active+clean; 169 MiB data, 370 MiB used, 21 GiB / 21 GiB avail; 1.6 MiB/s rd, 956 KiB/s wr, 167 op/s
Jan 20 14:26:11 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:26:11 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:26:11 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:26:11.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:26:11 compute-0 ceph-mon[74360]: osdmap e145: 3 total, 3 up, 3 in
Jan 20 14:26:12 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:26:12 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 20 14:26:12 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:26:12.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 20 14:26:12 compute-0 nova_compute[250018]: 2026-01-20 14:26:12.491 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:26:13 compute-0 nova_compute[250018]: 2026-01-20 14:26:13.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:26:13 compute-0 nova_compute[250018]: 2026-01-20 14:26:13.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:26:13 compute-0 nova_compute[250018]: 2026-01-20 14:26:13.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:26:13 compute-0 nova_compute[250018]: 2026-01-20 14:26:13.077 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:26:13 compute-0 nova_compute[250018]: 2026-01-20 14:26:13.077 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:26:13 compute-0 nova_compute[250018]: 2026-01-20 14:26:13.078 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:26:13 compute-0 nova_compute[250018]: 2026-01-20 14:26:13.078 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 14:26:13 compute-0 nova_compute[250018]: 2026-01-20 14:26:13.078 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:26:13 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1049: 321 pgs: 321 active+clean; 169 MiB data, 367 MiB used, 21 GiB / 21 GiB avail; 2.2 MiB/s rd, 34 KiB/s wr, 156 op/s
Jan 20 14:26:13 compute-0 ceph-mon[74360]: pgmap v1048: 321 pgs: 321 active+clean; 169 MiB data, 370 MiB used, 21 GiB / 21 GiB avail; 1.6 MiB/s rd, 956 KiB/s wr, 167 op/s
Jan 20 14:26:13 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:26:13 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 20 14:26:13 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:26:13.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 20 14:26:13 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:26:13 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/817449645' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:26:13 compute-0 nova_compute[250018]: 2026-01-20 14:26:13.776 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.697s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:26:13 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 20 14:26:13 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1018950850' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:26:13 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 20 14:26:13 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1018950850' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:26:13 compute-0 nova_compute[250018]: 2026-01-20 14:26:13.845 250022 DEBUG nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:26:13 compute-0 nova_compute[250018]: 2026-01-20 14:26:13.846 250022 DEBUG nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:26:13 compute-0 nova_compute[250018]: 2026-01-20 14:26:13.992 250022 WARNING nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:26:13 compute-0 nova_compute[250018]: 2026-01-20 14:26:13.993 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4586MB free_disk=20.9219970703125GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 14:26:13 compute-0 nova_compute[250018]: 2026-01-20 14:26:13.993 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:26:13 compute-0 nova_compute[250018]: 2026-01-20 14:26:13.994 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:26:14 compute-0 nova_compute[250018]: 2026-01-20 14:26:14.070 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Instance 6091ab6e-2530-4b48-b482-00867d3c66c5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 14:26:14 compute-0 nova_compute[250018]: 2026-01-20 14:26:14.070 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 14:26:14 compute-0 nova_compute[250018]: 2026-01-20 14:26:14.071 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 14:26:14 compute-0 nova_compute[250018]: 2026-01-20 14:26:14.117 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:26:14 compute-0 ceph-mon[74360]: pgmap v1049: 321 pgs: 321 active+clean; 169 MiB data, 367 MiB used, 21 GiB / 21 GiB avail; 2.2 MiB/s rd, 34 KiB/s wr, 156 op/s
Jan 20 14:26:14 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/817449645' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:26:14 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/1018950850' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:26:14 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/1018950850' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:26:14 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:26:14 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:26:14 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:26:14.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:26:14 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:26:14 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3490535771' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:26:14 compute-0 nova_compute[250018]: 2026-01-20 14:26:14.547 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:26:14 compute-0 nova_compute[250018]: 2026-01-20 14:26:14.553 250022 DEBUG nova.compute.provider_tree [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:26:14 compute-0 nova_compute[250018]: 2026-01-20 14:26:14.573 250022 DEBUG nova.scheduler.client.report [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:26:14 compute-0 nova_compute[250018]: 2026-01-20 14:26:14.613 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 14:26:14 compute-0 nova_compute[250018]: 2026-01-20 14:26:14.613 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.620s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:26:14 compute-0 nova_compute[250018]: 2026-01-20 14:26:14.799 250022 DEBUG nova.virt.libvirt.driver [None req-fce66b71-9a4a-44b6-bded-2e1d3284673b 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] [instance: 6091ab6e-2530-4b48-b482-00867d3c66c5] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Jan 20 14:26:15 compute-0 nova_compute[250018]: 2026-01-20 14:26:15.141 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:26:15 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1050: 321 pgs: 321 active+clean; 169 MiB data, 367 MiB used, 21 GiB / 21 GiB avail; 2.4 MiB/s rd, 19 KiB/s wr, 163 op/s
Jan 20 14:26:15 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3490535771' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:26:15 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/448007984' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:26:15 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:26:15 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:26:15 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:26:15 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:26:15.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:26:15 compute-0 nova_compute[250018]: 2026-01-20 14:26:15.613 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:26:15 compute-0 nova_compute[250018]: 2026-01-20 14:26:15.614 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:26:16 compute-0 nova_compute[250018]: 2026-01-20 14:26:16.049 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:26:16 compute-0 nova_compute[250018]: 2026-01-20 14:26:16.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:26:16 compute-0 nova_compute[250018]: 2026-01-20 14:26:16.050 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 14:26:16 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:26:16 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:26:16 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:26:16.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:26:17 compute-0 ceph-mon[74360]: pgmap v1050: 321 pgs: 321 active+clean; 169 MiB data, 367 MiB used, 21 GiB / 21 GiB avail; 2.4 MiB/s rd, 19 KiB/s wr, 163 op/s
Jan 20 14:26:17 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/1806203902' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:26:17 compute-0 nova_compute[250018]: 2026-01-20 14:26:17.045 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:26:17 compute-0 nova_compute[250018]: 2026-01-20 14:26:17.093 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:26:17 compute-0 nova_compute[250018]: 2026-01-20 14:26:17.094 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 14:26:17 compute-0 nova_compute[250018]: 2026-01-20 14:26:17.094 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 14:26:17 compute-0 nova_compute[250018]: 2026-01-20 14:26:17.124 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "refresh_cache-6091ab6e-2530-4b48-b482-00867d3c66c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:26:17 compute-0 nova_compute[250018]: 2026-01-20 14:26:17.125 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquired lock "refresh_cache-6091ab6e-2530-4b48-b482-00867d3c66c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:26:17 compute-0 nova_compute[250018]: 2026-01-20 14:26:17.125 250022 DEBUG nova.network.neutron [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] [instance: 6091ab6e-2530-4b48-b482-00867d3c66c5] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 20 14:26:17 compute-0 nova_compute[250018]: 2026-01-20 14:26:17.125 250022 DEBUG nova.objects.instance [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 6091ab6e-2530-4b48-b482-00867d3c66c5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:26:17 compute-0 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d0000000e.scope: Deactivated successfully.
Jan 20 14:26:17 compute-0 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d0000000e.scope: Consumed 14.211s CPU time.
Jan 20 14:26:17 compute-0 systemd-machined[216401]: Machine qemu-7-instance-0000000e terminated.
Jan 20 14:26:17 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1051: 321 pgs: 321 active+clean; 171 MiB data, 367 MiB used, 21 GiB / 21 GiB avail; 2.9 MiB/s rd, 51 KiB/s wr, 155 op/s
Jan 20 14:26:17 compute-0 nova_compute[250018]: 2026-01-20 14:26:17.495 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:26:17 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:26:17 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:26:17 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:26:17.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:26:17 compute-0 sudo[262771]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:26:17 compute-0 sudo[262771]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:26:17 compute-0 sudo[262771]: pam_unix(sudo:session): session closed for user root
Jan 20 14:26:17 compute-0 sudo[262796]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:26:17 compute-0 sudo[262796]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:26:17 compute-0 sudo[262796]: pam_unix(sudo:session): session closed for user root
Jan 20 14:26:17 compute-0 nova_compute[250018]: 2026-01-20 14:26:17.811 250022 INFO nova.virt.libvirt.driver [None req-fce66b71-9a4a-44b6-bded-2e1d3284673b 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] [instance: 6091ab6e-2530-4b48-b482-00867d3c66c5] Instance shutdown successfully after 13 seconds.
Jan 20 14:26:17 compute-0 nova_compute[250018]: 2026-01-20 14:26:17.816 250022 INFO nova.virt.libvirt.driver [-] [instance: 6091ab6e-2530-4b48-b482-00867d3c66c5] Instance destroyed successfully.
Jan 20 14:26:17 compute-0 nova_compute[250018]: 2026-01-20 14:26:17.816 250022 DEBUG nova.objects.instance [None req-fce66b71-9a4a-44b6-bded-2e1d3284673b 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] Lazy-loading 'numa_topology' on Instance uuid 6091ab6e-2530-4b48-b482-00867d3c66c5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:26:18 compute-0 nova_compute[250018]: 2026-01-20 14:26:18.154 250022 INFO nova.virt.libvirt.driver [None req-fce66b71-9a4a-44b6-bded-2e1d3284673b 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] [instance: 6091ab6e-2530-4b48-b482-00867d3c66c5] Beginning cold snapshot process
Jan 20 14:26:18 compute-0 nova_compute[250018]: 2026-01-20 14:26:18.162 250022 DEBUG nova.network.neutron [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] [instance: 6091ab6e-2530-4b48-b482-00867d3c66c5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 14:26:18 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-crash-compute-0[81580]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
Jan 20 14:26:18 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/333192495' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:26:18 compute-0 ceph-mon[74360]: pgmap v1051: 321 pgs: 321 active+clean; 171 MiB data, 367 MiB used, 21 GiB / 21 GiB avail; 2.9 MiB/s rd, 51 KiB/s wr, 155 op/s
Jan 20 14:26:18 compute-0 nova_compute[250018]: 2026-01-20 14:26:18.354 250022 DEBUG nova.virt.libvirt.imagebackend [None req-fce66b71-9a4a-44b6-bded-2e1d3284673b 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] No parent info for a32b3e07-16d8-46fd-9a7b-c242c432fcf9; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Jan 20 14:26:18 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:26:18 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:26:18 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:26:18.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:26:18 compute-0 nova_compute[250018]: 2026-01-20 14:26:18.665 250022 DEBUG nova.storage.rbd_utils [None req-fce66b71-9a4a-44b6-bded-2e1d3284673b 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] creating snapshot(c7925697bc00440b8990eea6cb322c3a) on rbd image(6091ab6e-2530-4b48-b482-00867d3c66c5_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 20 14:26:19 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1052: 321 pgs: 321 active+clean; 171 MiB data, 367 MiB used, 21 GiB / 21 GiB avail; 2.9 MiB/s rd, 52 KiB/s wr, 145 op/s
Jan 20 14:26:19 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e145 do_prune osdmap full prune enabled
Jan 20 14:26:19 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/3111387232' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:26:19 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e146 e146: 3 total, 3 up, 3 in
Jan 20 14:26:19 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e146: 3 total, 3 up, 3 in
Jan 20 14:26:19 compute-0 nova_compute[250018]: 2026-01-20 14:26:19.389 250022 DEBUG nova.storage.rbd_utils [None req-fce66b71-9a4a-44b6-bded-2e1d3284673b 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] cloning vms/6091ab6e-2530-4b48-b482-00867d3c66c5_disk@c7925697bc00440b8990eea6cb322c3a to images/0db5d54c-c1b5-4100-80fe-c616a5483520 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Jan 20 14:26:19 compute-0 nova_compute[250018]: 2026-01-20 14:26:19.430 250022 DEBUG nova.network.neutron [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] [instance: 6091ab6e-2530-4b48-b482-00867d3c66c5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:26:19 compute-0 nova_compute[250018]: 2026-01-20 14:26:19.446 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Releasing lock "refresh_cache-6091ab6e-2530-4b48-b482-00867d3c66c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:26:19 compute-0 nova_compute[250018]: 2026-01-20 14:26:19.446 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] [instance: 6091ab6e-2530-4b48-b482-00867d3c66c5] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 20 14:26:19 compute-0 nova_compute[250018]: 2026-01-20 14:26:19.525 250022 DEBUG nova.storage.rbd_utils [None req-fce66b71-9a4a-44b6-bded-2e1d3284673b 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] flattening images/0db5d54c-c1b5-4100-80fe-c616a5483520 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Jan 20 14:26:19 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:26:19 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 20 14:26:19 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:26:19.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 20 14:26:19 compute-0 nova_compute[250018]: 2026-01-20 14:26:19.885 250022 DEBUG nova.storage.rbd_utils [None req-fce66b71-9a4a-44b6-bded-2e1d3284673b 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] removing snapshot(c7925697bc00440b8990eea6cb322c3a) on rbd image(6091ab6e-2530-4b48-b482-00867d3c66c5_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Jan 20 14:26:20 compute-0 nova_compute[250018]: 2026-01-20 14:26:20.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:26:20 compute-0 nova_compute[250018]: 2026-01-20 14:26:20.144 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:26:20 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e146 do_prune osdmap full prune enabled
Jan 20 14:26:20 compute-0 ceph-mon[74360]: pgmap v1052: 321 pgs: 321 active+clean; 171 MiB data, 367 MiB used, 21 GiB / 21 GiB avail; 2.9 MiB/s rd, 52 KiB/s wr, 145 op/s
Jan 20 14:26:20 compute-0 ceph-mon[74360]: osdmap e146: 3 total, 3 up, 3 in
Jan 20 14:26:20 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e147 e147: 3 total, 3 up, 3 in
Jan 20 14:26:20 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e147: 3 total, 3 up, 3 in
Jan 20 14:26:20 compute-0 nova_compute[250018]: 2026-01-20 14:26:20.392 250022 DEBUG nova.storage.rbd_utils [None req-fce66b71-9a4a-44b6-bded-2e1d3284673b 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] creating snapshot(snap) on rbd image(0db5d54c-c1b5-4100-80fe-c616a5483520) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 20 14:26:20 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:26:20 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:26:20 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:26:20.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:26:20 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:26:20 compute-0 sshd-session[262963]: Invalid user admin from 157.245.78.139 port 60252
Jan 20 14:26:20 compute-0 sshd-session[262963]: Connection closed by invalid user admin 157.245.78.139 port 60252 [preauth]
Jan 20 14:26:21 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1055: 321 pgs: 321 active+clean; 204 MiB data, 382 MiB used, 21 GiB / 21 GiB avail; 4.8 MiB/s rd, 2.0 MiB/s wr, 140 op/s
Jan 20 14:26:21 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e147 do_prune osdmap full prune enabled
Jan 20 14:26:21 compute-0 ceph-mon[74360]: osdmap e147: 3 total, 3 up, 3 in
Jan 20 14:26:21 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e148 e148: 3 total, 3 up, 3 in
Jan 20 14:26:21 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e148: 3 total, 3 up, 3 in
Jan 20 14:26:21 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:26:21 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:26:21 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:26:21.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:26:21 compute-0 nova_compute[250018]: 2026-01-20 14:26:21.710 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:26:21 compute-0 nova_compute[250018]: 2026-01-20 14:26:21.948 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:26:22 compute-0 ceph-mon[74360]: pgmap v1055: 321 pgs: 321 active+clean; 204 MiB data, 382 MiB used, 21 GiB / 21 GiB avail; 4.8 MiB/s rd, 2.0 MiB/s wr, 140 op/s
Jan 20 14:26:22 compute-0 ceph-mon[74360]: osdmap e148: 3 total, 3 up, 3 in
Jan 20 14:26:22 compute-0 ceph-mon[74360]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #54. Immutable memtables: 0.
Jan 20 14:26:22 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:26:22.397589) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 14:26:22 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:856] [default] [JOB 27] Flushing memtable with next log file: 54
Jan 20 14:26:22 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768919182397867, "job": 27, "event": "flush_started", "num_memtables": 1, "num_entries": 1945, "num_deletes": 252, "total_data_size": 3243565, "memory_usage": 3301952, "flush_reason": "Manual Compaction"}
Jan 20 14:26:22 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:885] [default] [JOB 27] Level-0 flush table #55: started
Jan 20 14:26:22 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768919182433582, "cf_name": "default", "job": 27, "event": "table_file_creation", "file_number": 55, "file_size": 3192400, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 22320, "largest_seqno": 24264, "table_properties": {"data_size": 3183657, "index_size": 5301, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 19330, "raw_average_key_size": 20, "raw_value_size": 3165714, "raw_average_value_size": 3393, "num_data_blocks": 234, "num_entries": 933, "num_filter_entries": 933, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768919003, "oldest_key_time": 1768919003, "file_creation_time": 1768919182, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1688fb6f-f397-4aac-a0e8-874ff91d97a7", "db_session_id": "5V3N2TVXYZBCXP55EZHK", "orig_file_number": 55, "seqno_to_time_mapping": "N/A"}}
Jan 20 14:26:22 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 27] Flush lasted 36025 microseconds, and 11507 cpu microseconds.
Jan 20 14:26:22 compute-0 ceph-mon[74360]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 14:26:22 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:26:22.433627) [db/flush_job.cc:967] [default] [JOB 27] Level-0 flush table #55: 3192400 bytes OK
Jan 20 14:26:22 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:26:22.433644) [db/memtable_list.cc:519] [default] Level-0 commit table #55 started
Jan 20 14:26:22 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:26:22.442997) [db/memtable_list.cc:722] [default] Level-0 commit table #55: memtable #1 done
Jan 20 14:26:22 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:26:22.443018) EVENT_LOG_v1 {"time_micros": 1768919182443012, "job": 27, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 14:26:22 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:26:22.443034) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 14:26:22 compute-0 ceph-mon[74360]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 27] Try to delete WAL files size 3235398, prev total WAL file size 3235398, number of live WAL files 2.
Jan 20 14:26:22 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000051.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:26:22 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:26:22.443974) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031373537' seq:72057594037927935, type:22 .. '7061786F730032303039' seq:0, type:0; will stop at (end)
Jan 20 14:26:22 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 28] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 14:26:22 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 27 Base level 0, inputs: [55(3117KB)], [53(7667KB)]
Jan 20 14:26:22 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768919182444021, "job": 28, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [55], "files_L6": [53], "score": -1, "input_data_size": 11044061, "oldest_snapshot_seqno": -1}
Jan 20 14:26:22 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:26:22 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:26:22 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:26:22.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:26:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:26:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:26:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:26:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:26:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:26:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:26:22 compute-0 nova_compute[250018]: 2026-01-20 14:26:22.496 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:26:22 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 28] Generated table #56: 4954 keys, 9068245 bytes, temperature: kUnknown
Jan 20 14:26:22 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768919182499573, "cf_name": "default", "job": 28, "event": "table_file_creation", "file_number": 56, "file_size": 9068245, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9034269, "index_size": 20490, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12421, "raw_key_size": 125468, "raw_average_key_size": 25, "raw_value_size": 8943810, "raw_average_value_size": 1805, "num_data_blocks": 840, "num_entries": 4954, "num_filter_entries": 4954, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768917305, "oldest_key_time": 0, "file_creation_time": 1768919182, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1688fb6f-f397-4aac-a0e8-874ff91d97a7", "db_session_id": "5V3N2TVXYZBCXP55EZHK", "orig_file_number": 56, "seqno_to_time_mapping": "N/A"}}
Jan 20 14:26:22 compute-0 ceph-mon[74360]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 14:26:22 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:26:22.499913) [db/compaction/compaction_job.cc:1663] [default] [JOB 28] Compacted 1@0 + 1@6 files to L6 => 9068245 bytes
Jan 20 14:26:22 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:26:22.504841) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 198.2 rd, 162.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.0, 7.5 +0.0 blob) out(8.6 +0.0 blob), read-write-amplify(6.3) write-amplify(2.8) OK, records in: 5476, records dropped: 522 output_compression: NoCompression
Jan 20 14:26:22 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:26:22.504862) EVENT_LOG_v1 {"time_micros": 1768919182504851, "job": 28, "event": "compaction_finished", "compaction_time_micros": 55716, "compaction_time_cpu_micros": 22401, "output_level": 6, "num_output_files": 1, "total_output_size": 9068245, "num_input_records": 5476, "num_output_records": 4954, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 14:26:22 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000055.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:26:22 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768919182505421, "job": 28, "event": "table_file_deletion", "file_number": 55}
Jan 20 14:26:22 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000053.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:26:22 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768919182506671, "job": 28, "event": "table_file_deletion", "file_number": 53}
Jan 20 14:26:22 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:26:22.443866) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:26:22 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:26:22.506694) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:26:22 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:26:22.506698) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:26:22 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:26:22.506699) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:26:22 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:26:22.506701) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:26:22 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:26:22.506702) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:26:23 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1057: 321 pgs: 321 active+clean; 226 MiB data, 394 MiB used, 21 GiB / 21 GiB avail; 5.4 MiB/s rd, 4.6 MiB/s wr, 78 op/s
Jan 20 14:26:23 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:26:23 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:26:23 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:26:23.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:26:24 compute-0 ceph-mon[74360]: pgmap v1057: 321 pgs: 321 active+clean; 226 MiB data, 394 MiB used, 21 GiB / 21 GiB avail; 5.4 MiB/s rd, 4.6 MiB/s wr, 78 op/s
Jan 20 14:26:24 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:26:24 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:26:24 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:26:24.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:26:25 compute-0 nova_compute[250018]: 2026-01-20 14:26:25.145 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:26:25 compute-0 nova_compute[250018]: 2026-01-20 14:26:25.201 250022 INFO nova.virt.libvirt.driver [None req-fce66b71-9a4a-44b6-bded-2e1d3284673b 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] [instance: 6091ab6e-2530-4b48-b482-00867d3c66c5] Snapshot image upload complete
Jan 20 14:26:25 compute-0 nova_compute[250018]: 2026-01-20 14:26:25.202 250022 DEBUG nova.compute.manager [None req-fce66b71-9a4a-44b6-bded-2e1d3284673b 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] [instance: 6091ab6e-2530-4b48-b482-00867d3c66c5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:26:25 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1058: 321 pgs: 321 active+clean; 256 MiB data, 418 MiB used, 21 GiB / 21 GiB avail; 8.0 MiB/s rd, 8.6 MiB/s wr, 199 op/s
Jan 20 14:26:25 compute-0 nova_compute[250018]: 2026-01-20 14:26:25.311 250022 INFO nova.compute.manager [None req-fce66b71-9a4a-44b6-bded-2e1d3284673b 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] [instance: 6091ab6e-2530-4b48-b482-00867d3c66c5] Shelve offloading
Jan 20 14:26:25 compute-0 nova_compute[250018]: 2026-01-20 14:26:25.322 250022 INFO nova.virt.libvirt.driver [-] [instance: 6091ab6e-2530-4b48-b482-00867d3c66c5] Instance destroyed successfully.
Jan 20 14:26:25 compute-0 nova_compute[250018]: 2026-01-20 14:26:25.322 250022 DEBUG nova.compute.manager [None req-fce66b71-9a4a-44b6-bded-2e1d3284673b 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] [instance: 6091ab6e-2530-4b48-b482-00867d3c66c5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:26:25 compute-0 nova_compute[250018]: 2026-01-20 14:26:25.325 250022 DEBUG oslo_concurrency.lockutils [None req-fce66b71-9a4a-44b6-bded-2e1d3284673b 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] Acquiring lock "refresh_cache-6091ab6e-2530-4b48-b482-00867d3c66c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:26:25 compute-0 nova_compute[250018]: 2026-01-20 14:26:25.326 250022 DEBUG oslo_concurrency.lockutils [None req-fce66b71-9a4a-44b6-bded-2e1d3284673b 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] Acquired lock "refresh_cache-6091ab6e-2530-4b48-b482-00867d3c66c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:26:25 compute-0 nova_compute[250018]: 2026-01-20 14:26:25.326 250022 DEBUG nova.network.neutron [None req-fce66b71-9a4a-44b6-bded-2e1d3284673b 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] [instance: 6091ab6e-2530-4b48-b482-00867d3c66c5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:26:25 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:26:25 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e148 do_prune osdmap full prune enabled
Jan 20 14:26:25 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:26:25 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:26:25 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:26:25.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:26:25 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e149 e149: 3 total, 3 up, 3 in
Jan 20 14:26:25 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e149: 3 total, 3 up, 3 in
Jan 20 14:26:25 compute-0 nova_compute[250018]: 2026-01-20 14:26:25.867 250022 DEBUG nova.network.neutron [None req-fce66b71-9a4a-44b6-bded-2e1d3284673b 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] [instance: 6091ab6e-2530-4b48-b482-00867d3c66c5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 14:26:26 compute-0 nova_compute[250018]: 2026-01-20 14:26:26.287 250022 DEBUG nova.network.neutron [None req-fce66b71-9a4a-44b6-bded-2e1d3284673b 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] [instance: 6091ab6e-2530-4b48-b482-00867d3c66c5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:26:26 compute-0 nova_compute[250018]: 2026-01-20 14:26:26.306 250022 DEBUG oslo_concurrency.lockutils [None req-fce66b71-9a4a-44b6-bded-2e1d3284673b 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] Releasing lock "refresh_cache-6091ab6e-2530-4b48-b482-00867d3c66c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:26:26 compute-0 nova_compute[250018]: 2026-01-20 14:26:26.312 250022 INFO nova.virt.libvirt.driver [-] [instance: 6091ab6e-2530-4b48-b482-00867d3c66c5] Instance destroyed successfully.
Jan 20 14:26:26 compute-0 nova_compute[250018]: 2026-01-20 14:26:26.313 250022 DEBUG nova.objects.instance [None req-fce66b71-9a4a-44b6-bded-2e1d3284673b 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] Lazy-loading 'resources' on Instance uuid 6091ab6e-2530-4b48-b482-00867d3c66c5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:26:26 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:26:26 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:26:26 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:26:26.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:26:26 compute-0 ceph-mon[74360]: pgmap v1058: 321 pgs: 321 active+clean; 256 MiB data, 418 MiB used, 21 GiB / 21 GiB avail; 8.0 MiB/s rd, 8.6 MiB/s wr, 199 op/s
Jan 20 14:26:26 compute-0 ceph-mon[74360]: osdmap e149: 3 total, 3 up, 3 in
Jan 20 14:26:26 compute-0 nova_compute[250018]: 2026-01-20 14:26:26.874 250022 INFO nova.virt.libvirt.driver [None req-fce66b71-9a4a-44b6-bded-2e1d3284673b 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] [instance: 6091ab6e-2530-4b48-b482-00867d3c66c5] Deleting instance files /var/lib/nova/instances/6091ab6e-2530-4b48-b482-00867d3c66c5_del
Jan 20 14:26:26 compute-0 nova_compute[250018]: 2026-01-20 14:26:26.874 250022 INFO nova.virt.libvirt.driver [None req-fce66b71-9a4a-44b6-bded-2e1d3284673b 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] [instance: 6091ab6e-2530-4b48-b482-00867d3c66c5] Deletion of /var/lib/nova/instances/6091ab6e-2530-4b48-b482-00867d3c66c5_del complete
Jan 20 14:26:26 compute-0 nova_compute[250018]: 2026-01-20 14:26:26.997 250022 INFO nova.scheduler.client.report [None req-fce66b71-9a4a-44b6-bded-2e1d3284673b 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] Deleted allocations for instance 6091ab6e-2530-4b48-b482-00867d3c66c5
Jan 20 14:26:27 compute-0 nova_compute[250018]: 2026-01-20 14:26:27.058 250022 DEBUG oslo_concurrency.lockutils [None req-fce66b71-9a4a-44b6-bded-2e1d3284673b 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:26:27 compute-0 nova_compute[250018]: 2026-01-20 14:26:27.059 250022 DEBUG oslo_concurrency.lockutils [None req-fce66b71-9a4a-44b6-bded-2e1d3284673b 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:26:27 compute-0 nova_compute[250018]: 2026-01-20 14:26:27.081 250022 DEBUG oslo_concurrency.processutils [None req-fce66b71-9a4a-44b6-bded-2e1d3284673b 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:26:27 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1060: 321 pgs: 321 active+clean; 261 MiB data, 442 MiB used, 21 GiB / 21 GiB avail; 7.4 MiB/s rd, 10 MiB/s wr, 278 op/s
Jan 20 14:26:27 compute-0 podman[263010]: 2026-01-20 14:26:27.479558962 +0000 UTC m=+0.066939847 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Jan 20 14:26:27 compute-0 nova_compute[250018]: 2026-01-20 14:26:27.497 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:26:27 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:26:27 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1499673647' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:26:27 compute-0 podman[263009]: 2026-01-20 14:26:27.506025916 +0000 UTC m=+0.093408352 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, 
org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:26:27 compute-0 nova_compute[250018]: 2026-01-20 14:26:27.517 250022 DEBUG oslo_concurrency.processutils [None req-fce66b71-9a4a-44b6-bded-2e1d3284673b 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:26:27 compute-0 nova_compute[250018]: 2026-01-20 14:26:27.522 250022 DEBUG nova.compute.provider_tree [None req-fce66b71-9a4a-44b6-bded-2e1d3284673b 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:26:27 compute-0 nova_compute[250018]: 2026-01-20 14:26:27.544 250022 DEBUG nova.scheduler.client.report [None req-fce66b71-9a4a-44b6-bded-2e1d3284673b 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:26:27 compute-0 nova_compute[250018]: 2026-01-20 14:26:27.566 250022 DEBUG oslo_concurrency.lockutils [None req-fce66b71-9a4a-44b6-bded-2e1d3284673b 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.507s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:26:27 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:26:27 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:26:27 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:26:27.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:26:27 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/1499673647' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:26:27 compute-0 nova_compute[250018]: 2026-01-20 14:26:27.640 250022 DEBUG oslo_concurrency.lockutils [None req-fce66b71-9a4a-44b6-bded-2e1d3284673b 8ea9f3cd2cbb462a8ecbb488e6a1a25d 14ebcff06a484899a9725832f1eddfdf - - default default] Lock "6091ab6e-2530-4b48-b482-00867d3c66c5" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 22.910s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:26:28 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:26:28 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:26:28 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:26:28.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:26:28 compute-0 ceph-mon[74360]: pgmap v1060: 321 pgs: 321 active+clean; 261 MiB data, 442 MiB used, 21 GiB / 21 GiB avail; 7.4 MiB/s rd, 10 MiB/s wr, 278 op/s
Jan 20 14:26:29 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1061: 321 pgs: 321 active+clean; 241 MiB data, 428 MiB used, 21 GiB / 21 GiB avail; 3.2 MiB/s rd, 7.1 MiB/s wr, 203 op/s
Jan 20 14:26:29 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:26:29 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:26:29 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:26:29.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:26:30 compute-0 nova_compute[250018]: 2026-01-20 14:26:30.186 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:26:30 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:26:30 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 20 14:26:30 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:26:30.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 20 14:26:30 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:26:30 compute-0 ceph-mon[74360]: pgmap v1061: 321 pgs: 321 active+clean; 241 MiB data, 428 MiB used, 21 GiB / 21 GiB avail; 3.2 MiB/s rd, 7.1 MiB/s wr, 203 op/s
Jan 20 14:26:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:26:30.741 160071 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:26:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:26:30.742 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:26:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:26:30.742 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:26:31 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1062: 321 pgs: 321 active+clean; 204 MiB data, 402 MiB used, 21 GiB / 21 GiB avail; 2.6 MiB/s rd, 5.7 MiB/s wr, 187 op/s
Jan 20 14:26:31 compute-0 sudo[263055]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:26:31 compute-0 sudo[263055]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:26:31 compute-0 sudo[263055]: pam_unix(sudo:session): session closed for user root
Jan 20 14:26:31 compute-0 sudo[263080]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:26:31 compute-0 sudo[263080]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:26:31 compute-0 sudo[263080]: pam_unix(sudo:session): session closed for user root
Jan 20 14:26:31 compute-0 sudo[263105]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:26:31 compute-0 sudo[263105]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:26:31 compute-0 sudo[263105]: pam_unix(sudo:session): session closed for user root
Jan 20 14:26:31 compute-0 sudo[263130]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 14:26:31 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:26:31 compute-0 sudo[263130]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:26:31 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:26:31 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:26:31.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:26:31 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/1116669953' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:26:31 compute-0 sudo[263130]: pam_unix(sudo:session): session closed for user root
Jan 20 14:26:32 compute-0 sudo[263186]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:26:32 compute-0 sudo[263186]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:26:32 compute-0 sudo[263186]: pam_unix(sudo:session): session closed for user root
Jan 20 14:26:32 compute-0 sudo[263211]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:26:32 compute-0 sudo[263211]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:26:32 compute-0 sudo[263211]: pam_unix(sudo:session): session closed for user root
Jan 20 14:26:32 compute-0 sudo[263236]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:26:32 compute-0 sudo[263236]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:26:32 compute-0 sudo[263236]: pam_unix(sudo:session): session closed for user root
Jan 20 14:26:32 compute-0 nova_compute[250018]: 2026-01-20 14:26:32.329 250022 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768919177.328729, 6091ab6e-2530-4b48-b482-00867d3c66c5 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:26:32 compute-0 nova_compute[250018]: 2026-01-20 14:26:32.330 250022 INFO nova.compute.manager [-] [instance: 6091ab6e-2530-4b48-b482-00867d3c66c5] VM Stopped (Lifecycle Event)
Jan 20 14:26:32 compute-0 sudo[263261]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 list-networks
Jan 20 14:26:32 compute-0 sudo[263261]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:26:32 compute-0 nova_compute[250018]: 2026-01-20 14:26:32.351 250022 DEBUG nova.compute.manager [None req-4e44a2a6-8dc9-404a-b328-60a70e94963a - - - - - -] [instance: 6091ab6e-2530-4b48-b482-00867d3c66c5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:26:32 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:26:32 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:26:32 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:26:32.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:26:32 compute-0 sudo[263261]: pam_unix(sudo:session): session closed for user root
Jan 20 14:26:32 compute-0 nova_compute[250018]: 2026-01-20 14:26:32.555 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:26:32 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 14:26:32 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 20 14:26:32 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:26:32 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 14:26:32 compute-0 ceph-mon[74360]: pgmap v1062: 321 pgs: 321 active+clean; 204 MiB data, 402 MiB used, 21 GiB / 21 GiB avail; 2.6 MiB/s rd, 5.7 MiB/s wr, 187 op/s
Jan 20 14:26:32 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:26:32 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 20 14:26:32 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:26:32 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:26:32 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 14:26:32 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:26:32 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 20 14:26:32 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:26:32 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 20 14:26:32 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:26:32 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 8aeda2e9-3812-4eb3-9c61-9d581ee77e24 does not exist
Jan 20 14:26:32 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev b3d18348-0907-403e-8251-d524c6b0f28d does not exist
Jan 20 14:26:32 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 9e0027ed-df17-4d3c-b3b8-58f1ba8afc21 does not exist
Jan 20 14:26:32 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 20 14:26:32 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:26:32 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 20 14:26:32 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:26:32 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 14:26:32 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:26:32 compute-0 sudo[263305]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:26:32 compute-0 sudo[263305]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:26:32 compute-0 sudo[263305]: pam_unix(sudo:session): session closed for user root
Jan 20 14:26:32 compute-0 sudo[263330]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:26:32 compute-0 sudo[263330]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:26:32 compute-0 sudo[263330]: pam_unix(sudo:session): session closed for user root
Jan 20 14:26:32 compute-0 sudo[263355]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:26:32 compute-0 sudo[263355]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:26:32 compute-0 sudo[263355]: pam_unix(sudo:session): session closed for user root
Jan 20 14:26:32 compute-0 sudo[263380]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 14:26:32 compute-0 sudo[263380]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:26:33 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1063: 321 pgs: 321 active+clean; 204 MiB data, 396 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 4.5 MiB/s wr, 179 op/s
Jan 20 14:26:33 compute-0 podman[263445]: 2026-01-20 14:26:33.410265246 +0000 UTC m=+0.072319098 container create 4f3e3e448431425dd4e5bb1477e443e575c91c00465678030d0710cca4692920 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_boyd, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 20 14:26:33 compute-0 systemd[1]: Started libpod-conmon-4f3e3e448431425dd4e5bb1477e443e575c91c00465678030d0710cca4692920.scope.
Jan 20 14:26:33 compute-0 podman[263445]: 2026-01-20 14:26:33.370055381 +0000 UTC m=+0.032109253 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:26:33 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:26:33 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:26:33 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:26:33 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:26:33.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:26:33 compute-0 podman[263445]: 2026-01-20 14:26:33.741884254 +0000 UTC m=+0.403938126 container init 4f3e3e448431425dd4e5bb1477e443e575c91c00465678030d0710cca4692920 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_boyd, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:26:33 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:26:33 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:26:33 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:26:33 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:26:33 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:26:33 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:26:33 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:26:33 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:26:33 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:26:33 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:26:33 compute-0 podman[263445]: 2026-01-20 14:26:33.753247623 +0000 UTC m=+0.415301475 container start 4f3e3e448431425dd4e5bb1477e443e575c91c00465678030d0710cca4692920 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_boyd, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 20 14:26:33 compute-0 ecstatic_boyd[263461]: 167 167
Jan 20 14:26:33 compute-0 systemd[1]: libpod-4f3e3e448431425dd4e5bb1477e443e575c91c00465678030d0710cca4692920.scope: Deactivated successfully.
Jan 20 14:26:33 compute-0 podman[263445]: 2026-01-20 14:26:33.767808754 +0000 UTC m=+0.429862636 container attach 4f3e3e448431425dd4e5bb1477e443e575c91c00465678030d0710cca4692920 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_boyd, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 14:26:33 compute-0 podman[263445]: 2026-01-20 14:26:33.768604405 +0000 UTC m=+0.430658257 container died 4f3e3e448431425dd4e5bb1477e443e575c91c00465678030d0710cca4692920 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_boyd, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:26:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-47ed39097fe7ad6f6068d8352315c036501360fa840b0cf9f491ce2599c7a4de-merged.mount: Deactivated successfully.
Jan 20 14:26:34 compute-0 podman[263445]: 2026-01-20 14:26:34.135313125 +0000 UTC m=+0.797366977 container remove 4f3e3e448431425dd4e5bb1477e443e575c91c00465678030d0710cca4692920 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_boyd, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 20 14:26:34 compute-0 systemd[1]: libpod-conmon-4f3e3e448431425dd4e5bb1477e443e575c91c00465678030d0710cca4692920.scope: Deactivated successfully.
Jan 20 14:26:34 compute-0 podman[263485]: 2026-01-20 14:26:34.337730285 +0000 UTC m=+0.038136801 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:26:34 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:26:34 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:26:34 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:26:34.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:26:35 compute-0 podman[263485]: 2026-01-20 14:26:35.037710887 +0000 UTC m=+0.738117363 container create 67743c11eea1645bd5f5444618d370347023ef40a41717263e759d4a3603412c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_solomon, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 20 14:26:35 compute-0 ceph-mon[74360]: pgmap v1063: 321 pgs: 321 active+clean; 204 MiB data, 396 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 4.5 MiB/s wr, 179 op/s
Jan 20 14:26:35 compute-0 systemd[1]: Started libpod-conmon-67743c11eea1645bd5f5444618d370347023ef40a41717263e759d4a3603412c.scope.
Jan 20 14:26:35 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:26:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23c1379903c319359285132ce6efe456d871151ab3f831ac0e68cdfafe2029db/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:26:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23c1379903c319359285132ce6efe456d871151ab3f831ac0e68cdfafe2029db/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:26:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23c1379903c319359285132ce6efe456d871151ab3f831ac0e68cdfafe2029db/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:26:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23c1379903c319359285132ce6efe456d871151ab3f831ac0e68cdfafe2029db/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:26:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23c1379903c319359285132ce6efe456d871151ab3f831ac0e68cdfafe2029db/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 14:26:35 compute-0 podman[263485]: 2026-01-20 14:26:35.170828699 +0000 UTC m=+0.871235255 container init 67743c11eea1645bd5f5444618d370347023ef40a41717263e759d4a3603412c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_solomon, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 20 14:26:35 compute-0 podman[263485]: 2026-01-20 14:26:35.179767853 +0000 UTC m=+0.880174369 container start 67743c11eea1645bd5f5444618d370347023ef40a41717263e759d4a3603412c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_solomon, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:26:35 compute-0 podman[263485]: 2026-01-20 14:26:35.1838713 +0000 UTC m=+0.884277866 container attach 67743c11eea1645bd5f5444618d370347023ef40a41717263e759d4a3603412c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_solomon, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:26:35 compute-0 nova_compute[250018]: 2026-01-20 14:26:35.188 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:26:35 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1064: 321 pgs: 321 active+clean; 204 MiB data, 396 MiB used, 21 GiB / 21 GiB avail; 1.3 MiB/s rd, 2.1 MiB/s wr, 131 op/s
Jan 20 14:26:35 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:26:35 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:26:35 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:26:35 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:26:35.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:26:35 compute-0 nova_compute[250018]: 2026-01-20 14:26:35.724 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:26:35 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:26:35.724 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:26:35 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:26:35.725 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 14:26:35 compute-0 interesting_solomon[263503]: --> passed data devices: 0 physical, 1 LVM
Jan 20 14:26:35 compute-0 interesting_solomon[263503]: --> relative data size: 1.0
Jan 20 14:26:35 compute-0 interesting_solomon[263503]: --> All data devices are unavailable
Jan 20 14:26:35 compute-0 systemd[1]: libpod-67743c11eea1645bd5f5444618d370347023ef40a41717263e759d4a3603412c.scope: Deactivated successfully.
Jan 20 14:26:35 compute-0 podman[263485]: 2026-01-20 14:26:35.990717016 +0000 UTC m=+1.691123482 container died 67743c11eea1645bd5f5444618d370347023ef40a41717263e759d4a3603412c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_solomon, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 20 14:26:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-23c1379903c319359285132ce6efe456d871151ab3f831ac0e68cdfafe2029db-merged.mount: Deactivated successfully.
Jan 20 14:26:36 compute-0 podman[263485]: 2026-01-20 14:26:36.040108242 +0000 UTC m=+1.740514718 container remove 67743c11eea1645bd5f5444618d370347023ef40a41717263e759d4a3603412c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_solomon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507)
Jan 20 14:26:36 compute-0 systemd[1]: libpod-conmon-67743c11eea1645bd5f5444618d370347023ef40a41717263e759d4a3603412c.scope: Deactivated successfully.
Jan 20 14:26:36 compute-0 sudo[263380]: pam_unix(sudo:session): session closed for user root
Jan 20 14:26:36 compute-0 sudo[263533]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:26:36 compute-0 sudo[263533]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:26:36 compute-0 sudo[263533]: pam_unix(sudo:session): session closed for user root
Jan 20 14:26:36 compute-0 sudo[263558]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:26:36 compute-0 sudo[263558]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:26:36 compute-0 sudo[263558]: pam_unix(sudo:session): session closed for user root
Jan 20 14:26:36 compute-0 sudo[263583]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:26:36 compute-0 sudo[263583]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:26:36 compute-0 sudo[263583]: pam_unix(sudo:session): session closed for user root
Jan 20 14:26:36 compute-0 sudo[263608]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- lvm list --format json
Jan 20 14:26:36 compute-0 sudo[263608]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:26:36 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:26:36 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:26:36 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:26:36.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:26:36 compute-0 podman[263673]: 2026-01-20 14:26:36.589461962 +0000 UTC m=+0.023301812 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:26:36 compute-0 ceph-mon[74360]: pgmap v1064: 321 pgs: 321 active+clean; 204 MiB data, 396 MiB used, 21 GiB / 21 GiB avail; 1.3 MiB/s rd, 2.1 MiB/s wr, 131 op/s
Jan 20 14:26:36 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/3726797702' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:26:36 compute-0 podman[263673]: 2026-01-20 14:26:36.870996507 +0000 UTC m=+0.304836327 container create e56f817c8164a9f04386df5d8b70f7b993b7dda837424ed1d9479016ed447e79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_driscoll, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 20 14:26:36 compute-0 systemd[1]: Started libpod-conmon-e56f817c8164a9f04386df5d8b70f7b993b7dda837424ed1d9479016ed447e79.scope.
Jan 20 14:26:36 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:26:36 compute-0 podman[263673]: 2026-01-20 14:26:36.949064565 +0000 UTC m=+0.382904395 container init e56f817c8164a9f04386df5d8b70f7b993b7dda837424ed1d9479016ed447e79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_driscoll, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 20 14:26:36 compute-0 podman[263673]: 2026-01-20 14:26:36.955683889 +0000 UTC m=+0.389523709 container start e56f817c8164a9f04386df5d8b70f7b993b7dda837424ed1d9479016ed447e79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_driscoll, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:26:36 compute-0 unruffled_driscoll[263691]: 167 167
Jan 20 14:26:36 compute-0 systemd[1]: libpod-e56f817c8164a9f04386df5d8b70f7b993b7dda837424ed1d9479016ed447e79.scope: Deactivated successfully.
Jan 20 14:26:36 compute-0 podman[263673]: 2026-01-20 14:26:36.962296502 +0000 UTC m=+0.396136322 container attach e56f817c8164a9f04386df5d8b70f7b993b7dda837424ed1d9479016ed447e79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_driscoll, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 20 14:26:36 compute-0 podman[263673]: 2026-01-20 14:26:36.964596633 +0000 UTC m=+0.398436453 container died e56f817c8164a9f04386df5d8b70f7b993b7dda837424ed1d9479016ed447e79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_driscoll, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:26:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-eb5a54b6ce027dc81a3d7340184cd8b04cf19015a18118dc283d93c9b425c0bf-merged.mount: Deactivated successfully.
Jan 20 14:26:37 compute-0 podman[263673]: 2026-01-20 14:26:37.01290564 +0000 UTC m=+0.446745470 container remove e56f817c8164a9f04386df5d8b70f7b993b7dda837424ed1d9479016ed447e79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_driscoll, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 20 14:26:37 compute-0 systemd[1]: libpod-conmon-e56f817c8164a9f04386df5d8b70f7b993b7dda837424ed1d9479016ed447e79.scope: Deactivated successfully.
Jan 20 14:26:37 compute-0 podman[263715]: 2026-01-20 14:26:37.175647559 +0000 UTC m=+0.041230753 container create 70b57ec959cf2d1075e8157a888926eae1187bcf374ea32f17936f638a8acf6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_ritchie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:26:37 compute-0 systemd[1]: Started libpod-conmon-70b57ec959cf2d1075e8157a888926eae1187bcf374ea32f17936f638a8acf6c.scope.
Jan 20 14:26:37 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:26:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a12c34cdf03ac9ad66b528c901d84b3d0c02e40cb59fb22b5828cf32a246247/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:26:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a12c34cdf03ac9ad66b528c901d84b3d0c02e40cb59fb22b5828cf32a246247/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:26:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a12c34cdf03ac9ad66b528c901d84b3d0c02e40cb59fb22b5828cf32a246247/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:26:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a12c34cdf03ac9ad66b528c901d84b3d0c02e40cb59fb22b5828cf32a246247/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:26:37 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1065: 321 pgs: 321 active+clean; 264 MiB data, 428 MiB used, 21 GiB / 21 GiB avail; 3.5 MiB/s rd, 4.6 MiB/s wr, 145 op/s
Jan 20 14:26:37 compute-0 podman[263715]: 2026-01-20 14:26:37.246278332 +0000 UTC m=+0.111861526 container init 70b57ec959cf2d1075e8157a888926eae1187bcf374ea32f17936f638a8acf6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_ritchie, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:26:37 compute-0 podman[263715]: 2026-01-20 14:26:37.253719697 +0000 UTC m=+0.119302891 container start 70b57ec959cf2d1075e8157a888926eae1187bcf374ea32f17936f638a8acf6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_ritchie, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef)
Jan 20 14:26:37 compute-0 podman[263715]: 2026-01-20 14:26:37.158953971 +0000 UTC m=+0.024537185 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:26:37 compute-0 podman[263715]: 2026-01-20 14:26:37.256589883 +0000 UTC m=+0.122173077 container attach 70b57ec959cf2d1075e8157a888926eae1187bcf374ea32f17936f638a8acf6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_ritchie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:26:37 compute-0 nova_compute[250018]: 2026-01-20 14:26:37.558 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:26:37 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:26:37 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:26:37 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:26:37.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:26:37 compute-0 sudo[263736]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:26:37 compute-0 sudo[263736]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:26:37 compute-0 sudo[263736]: pam_unix(sudo:session): session closed for user root
Jan 20 14:26:37 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/2320921482' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:26:37 compute-0 ceph-mon[74360]: pgmap v1065: 321 pgs: 321 active+clean; 264 MiB data, 428 MiB used, 21 GiB / 21 GiB avail; 3.5 MiB/s rd, 4.6 MiB/s wr, 145 op/s
Jan 20 14:26:37 compute-0 sudo[263761]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:26:37 compute-0 sudo[263761]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:26:37 compute-0 sudo[263761]: pam_unix(sudo:session): session closed for user root
Jan 20 14:26:38 compute-0 funny_ritchie[263731]: {
Jan 20 14:26:38 compute-0 funny_ritchie[263731]:     "0": [
Jan 20 14:26:38 compute-0 funny_ritchie[263731]:         {
Jan 20 14:26:38 compute-0 funny_ritchie[263731]:             "devices": [
Jan 20 14:26:38 compute-0 funny_ritchie[263731]:                 "/dev/loop3"
Jan 20 14:26:38 compute-0 funny_ritchie[263731]:             ],
Jan 20 14:26:38 compute-0 funny_ritchie[263731]:             "lv_name": "ceph_lv0",
Jan 20 14:26:38 compute-0 funny_ritchie[263731]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:26:38 compute-0 funny_ritchie[263731]:             "lv_size": "7511998464",
Jan 20 14:26:38 compute-0 funny_ritchie[263731]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e399cf45-e6b6-5393-99f1-75c601d3f188,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afc3740a-ccee-46f8-b343-22648cc89376,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 20 14:26:38 compute-0 funny_ritchie[263731]:             "lv_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 14:26:38 compute-0 funny_ritchie[263731]:             "name": "ceph_lv0",
Jan 20 14:26:38 compute-0 funny_ritchie[263731]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:26:38 compute-0 funny_ritchie[263731]:             "tags": {
Jan 20 14:26:38 compute-0 funny_ritchie[263731]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:26:38 compute-0 funny_ritchie[263731]:                 "ceph.block_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 14:26:38 compute-0 funny_ritchie[263731]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 14:26:38 compute-0 funny_ritchie[263731]:                 "ceph.cluster_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 14:26:38 compute-0 funny_ritchie[263731]:                 "ceph.cluster_name": "ceph",
Jan 20 14:26:38 compute-0 funny_ritchie[263731]:                 "ceph.crush_device_class": "",
Jan 20 14:26:38 compute-0 funny_ritchie[263731]:                 "ceph.encrypted": "0",
Jan 20 14:26:38 compute-0 funny_ritchie[263731]:                 "ceph.osd_fsid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 14:26:38 compute-0 funny_ritchie[263731]:                 "ceph.osd_id": "0",
Jan 20 14:26:38 compute-0 funny_ritchie[263731]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 14:26:38 compute-0 funny_ritchie[263731]:                 "ceph.type": "block",
Jan 20 14:26:38 compute-0 funny_ritchie[263731]:                 "ceph.vdo": "0"
Jan 20 14:26:38 compute-0 funny_ritchie[263731]:             },
Jan 20 14:26:38 compute-0 funny_ritchie[263731]:             "type": "block",
Jan 20 14:26:38 compute-0 funny_ritchie[263731]:             "vg_name": "ceph_vg0"
Jan 20 14:26:38 compute-0 funny_ritchie[263731]:         }
Jan 20 14:26:38 compute-0 funny_ritchie[263731]:     ]
Jan 20 14:26:38 compute-0 funny_ritchie[263731]: }
Jan 20 14:26:38 compute-0 systemd[1]: libpod-70b57ec959cf2d1075e8157a888926eae1187bcf374ea32f17936f638a8acf6c.scope: Deactivated successfully.
Jan 20 14:26:38 compute-0 podman[263715]: 2026-01-20 14:26:38.05321929 +0000 UTC m=+0.918802504 container died 70b57ec959cf2d1075e8157a888926eae1187bcf374ea32f17936f638a8acf6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_ritchie, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0)
Jan 20 14:26:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-1a12c34cdf03ac9ad66b528c901d84b3d0c02e40cb59fb22b5828cf32a246247-merged.mount: Deactivated successfully.
Jan 20 14:26:38 compute-0 podman[263715]: 2026-01-20 14:26:38.106370644 +0000 UTC m=+0.971953838 container remove 70b57ec959cf2d1075e8157a888926eae1187bcf374ea32f17936f638a8acf6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_ritchie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:26:38 compute-0 systemd[1]: libpod-conmon-70b57ec959cf2d1075e8157a888926eae1187bcf374ea32f17936f638a8acf6c.scope: Deactivated successfully.
Jan 20 14:26:38 compute-0 sudo[263608]: pam_unix(sudo:session): session closed for user root
Jan 20 14:26:38 compute-0 sudo[263803]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:26:38 compute-0 sudo[263803]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:26:38 compute-0 sudo[263803]: pam_unix(sudo:session): session closed for user root
Jan 20 14:26:38 compute-0 sudo[263828]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:26:38 compute-0 sudo[263828]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:26:38 compute-0 sudo[263828]: pam_unix(sudo:session): session closed for user root
Jan 20 14:26:38 compute-0 sudo[263853]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:26:38 compute-0 sudo[263853]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:26:38 compute-0 sudo[263853]: pam_unix(sudo:session): session closed for user root
Jan 20 14:26:38 compute-0 sudo[263878]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- raw list --format json
Jan 20 14:26:38 compute-0 sudo[263878]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:26:38 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:26:38 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:26:38 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:26:38.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:26:38 compute-0 podman[263942]: 2026-01-20 14:26:38.737613762 +0000 UTC m=+0.044288692 container create 64fdc8506cf778f138ab70fe19b1b4c8db0918894feff53810bbc4c12476a08c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_northcutt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 20 14:26:38 compute-0 systemd[1]: Started libpod-conmon-64fdc8506cf778f138ab70fe19b1b4c8db0918894feff53810bbc4c12476a08c.scope.
Jan 20 14:26:38 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:26:38 compute-0 podman[263942]: 2026-01-20 14:26:38.719340333 +0000 UTC m=+0.026015293 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:26:38 compute-0 podman[263942]: 2026-01-20 14:26:38.818988788 +0000 UTC m=+0.125663748 container init 64fdc8506cf778f138ab70fe19b1b4c8db0918894feff53810bbc4c12476a08c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_northcutt, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS)
Jan 20 14:26:38 compute-0 podman[263942]: 2026-01-20 14:26:38.826460103 +0000 UTC m=+0.133135033 container start 64fdc8506cf778f138ab70fe19b1b4c8db0918894feff53810bbc4c12476a08c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_northcutt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:26:38 compute-0 podman[263942]: 2026-01-20 14:26:38.830407597 +0000 UTC m=+0.137082557 container attach 64fdc8506cf778f138ab70fe19b1b4c8db0918894feff53810bbc4c12476a08c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_northcutt, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 14:26:38 compute-0 brave_northcutt[263959]: 167 167
Jan 20 14:26:38 compute-0 systemd[1]: libpod-64fdc8506cf778f138ab70fe19b1b4c8db0918894feff53810bbc4c12476a08c.scope: Deactivated successfully.
Jan 20 14:26:38 compute-0 podman[263942]: 2026-01-20 14:26:38.833024666 +0000 UTC m=+0.139699606 container died 64fdc8506cf778f138ab70fe19b1b4c8db0918894feff53810bbc4c12476a08c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_northcutt, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 20 14:26:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-9e78d243b08d75d54cbdea44dc17859a69fb7ca4a1413630a90a550509511ffc-merged.mount: Deactivated successfully.
Jan 20 14:26:38 compute-0 podman[263942]: 2026-01-20 14:26:38.88084701 +0000 UTC m=+0.187521940 container remove 64fdc8506cf778f138ab70fe19b1b4c8db0918894feff53810bbc4c12476a08c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_northcutt, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef)
Jan 20 14:26:38 compute-0 systemd[1]: libpod-conmon-64fdc8506cf778f138ab70fe19b1b4c8db0918894feff53810bbc4c12476a08c.scope: Deactivated successfully.
Jan 20 14:26:39 compute-0 podman[263983]: 2026-01-20 14:26:39.027836616 +0000 UTC m=+0.026208679 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:26:39 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1066: 321 pgs: 321 active+clean; 285 MiB data, 442 MiB used, 21 GiB / 21 GiB avail; 4.0 MiB/s rd, 3.9 MiB/s wr, 105 op/s
Jan 20 14:26:39 compute-0 podman[263983]: 2026-01-20 14:26:39.243339889 +0000 UTC m=+0.241711932 container create 3bc0e1f00e09fdfe1e745ad8902ec3fc40dc0bd0059f392c9ed17533e164c76c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_yalow, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:26:39 compute-0 systemd[1]: Started libpod-conmon-3bc0e1f00e09fdfe1e745ad8902ec3fc40dc0bd0059f392c9ed17533e164c76c.scope.
Jan 20 14:26:39 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:26:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/894a2a4a57ff46e63c5e922a6924139f210d500bb85f06da40f45331fb6e5c39/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:26:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/894a2a4a57ff46e63c5e922a6924139f210d500bb85f06da40f45331fb6e5c39/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:26:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/894a2a4a57ff46e63c5e922a6924139f210d500bb85f06da40f45331fb6e5c39/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:26:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/894a2a4a57ff46e63c5e922a6924139f210d500bb85f06da40f45331fb6e5c39/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:26:39 compute-0 podman[263983]: 2026-01-20 14:26:39.383128925 +0000 UTC m=+0.381501008 container init 3bc0e1f00e09fdfe1e745ad8902ec3fc40dc0bd0059f392c9ed17533e164c76c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_yalow, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 20 14:26:39 compute-0 podman[263983]: 2026-01-20 14:26:39.392879491 +0000 UTC m=+0.391251534 container start 3bc0e1f00e09fdfe1e745ad8902ec3fc40dc0bd0059f392c9ed17533e164c76c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_yalow, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 20 14:26:39 compute-0 podman[263983]: 2026-01-20 14:26:39.39625172 +0000 UTC m=+0.394623783 container attach 3bc0e1f00e09fdfe1e745ad8902ec3fc40dc0bd0059f392c9ed17533e164c76c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_yalow, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Jan 20 14:26:39 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:26:39 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 20 14:26:39 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:26:39.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 20 14:26:40 compute-0 nova_compute[250018]: 2026-01-20 14:26:40.190 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:26:40 compute-0 fervent_yalow[264000]: {
Jan 20 14:26:40 compute-0 fervent_yalow[264000]:     "afc3740a-ccee-46f8-b343-22648cc89376": {
Jan 20 14:26:40 compute-0 fervent_yalow[264000]:         "ceph_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 14:26:40 compute-0 fervent_yalow[264000]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 20 14:26:40 compute-0 fervent_yalow[264000]:         "osd_id": 0,
Jan 20 14:26:40 compute-0 fervent_yalow[264000]:         "osd_uuid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 14:26:40 compute-0 fervent_yalow[264000]:         "type": "bluestore"
Jan 20 14:26:40 compute-0 fervent_yalow[264000]:     }
Jan 20 14:26:40 compute-0 fervent_yalow[264000]: }
Jan 20 14:26:40 compute-0 systemd[1]: libpod-3bc0e1f00e09fdfe1e745ad8902ec3fc40dc0bd0059f392c9ed17533e164c76c.scope: Deactivated successfully.
Jan 20 14:26:40 compute-0 podman[263983]: 2026-01-20 14:26:40.263717365 +0000 UTC m=+1.262089408 container died 3bc0e1f00e09fdfe1e745ad8902ec3fc40dc0bd0059f392c9ed17533e164c76c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_yalow, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:26:40 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:26:40 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 20 14:26:40 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:26:40.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 20 14:26:40 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e149 do_prune osdmap full prune enabled
Jan 20 14:26:40 compute-0 ceph-mon[74360]: pgmap v1066: 321 pgs: 321 active+clean; 285 MiB data, 442 MiB used, 21 GiB / 21 GiB avail; 4.0 MiB/s rd, 3.9 MiB/s wr, 105 op/s
Jan 20 14:26:41 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1067: 321 pgs: 321 active+clean; 285 MiB data, 442 MiB used, 21 GiB / 21 GiB avail; 5.0 MiB/s rd, 3.9 MiB/s wr, 136 op/s
Jan 20 14:26:41 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:26:41 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 20 14:26:41 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:26:41.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 20 14:26:42 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:26:42 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:26:42 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:26:42.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:26:42 compute-0 nova_compute[250018]: 2026-01-20 14:26:42.598 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:26:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-894a2a4a57ff46e63c5e922a6924139f210d500bb85f06da40f45331fb6e5c39-merged.mount: Deactivated successfully.
Jan 20 14:26:42 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e150 e150: 3 total, 3 up, 3 in
Jan 20 14:26:42 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e150: 3 total, 3 up, 3 in
Jan 20 14:26:43 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1069: 321 pgs: 321 active+clean; 285 MiB data, 442 MiB used, 21 GiB / 21 GiB avail; 6.5 MiB/s rd, 4.7 MiB/s wr, 166 op/s
Jan 20 14:26:43 compute-0 ceph-mon[74360]: pgmap v1067: 321 pgs: 321 active+clean; 285 MiB data, 442 MiB used, 21 GiB / 21 GiB avail; 5.0 MiB/s rd, 3.9 MiB/s wr, 136 op/s
Jan 20 14:26:43 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:26:43 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:26:43 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:26:43.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:26:43 compute-0 podman[263983]: 2026-01-20 14:26:43.712665948 +0000 UTC m=+4.711037981 container remove 3bc0e1f00e09fdfe1e745ad8902ec3fc40dc0bd0059f392c9ed17533e164c76c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_yalow, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 20 14:26:43 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:26:43.727 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=367c1a2c-b16a-4828-ab5a-626bb50023b4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:26:43 compute-0 systemd[1]: libpod-conmon-3bc0e1f00e09fdfe1e745ad8902ec3fc40dc0bd0059f392c9ed17533e164c76c.scope: Deactivated successfully.
Jan 20 14:26:43 compute-0 sudo[263878]: pam_unix(sudo:session): session closed for user root
Jan 20 14:26:43 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 14:26:43 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:26:43 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 14:26:43 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:26:43 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev bd91585f-4d13-4fbf-8ab7-5051dd287829 does not exist
Jan 20 14:26:43 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 79413533-4ec6-4ac1-9e01-dcedad9fbc35 does not exist
Jan 20 14:26:43 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 14581aec-7239-4696-bf32-280a6d00e652 does not exist
Jan 20 14:26:43 compute-0 sudo[264035]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:26:43 compute-0 sudo[264035]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:26:43 compute-0 sudo[264035]: pam_unix(sudo:session): session closed for user root
Jan 20 14:26:43 compute-0 sudo[264060]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 14:26:43 compute-0 sudo[264060]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:26:43 compute-0 sudo[264060]: pam_unix(sudo:session): session closed for user root
Jan 20 14:26:44 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:26:44 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:26:44 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:26:44.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:26:44 compute-0 ceph-mon[74360]: osdmap e150: 3 total, 3 up, 3 in
Jan 20 14:26:44 compute-0 ceph-mon[74360]: pgmap v1069: 321 pgs: 321 active+clean; 285 MiB data, 442 MiB used, 21 GiB / 21 GiB avail; 6.5 MiB/s rd, 4.7 MiB/s wr, 166 op/s
Jan 20 14:26:44 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:26:44 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:26:45 compute-0 nova_compute[250018]: 2026-01-20 14:26:45.191 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:26:45 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1070: 321 pgs: 321 active+clean; 285 MiB data, 442 MiB used, 21 GiB / 21 GiB avail; 6.1 MiB/s rd, 4.7 MiB/s wr, 164 op/s
Jan 20 14:26:45 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:26:45 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:26:45 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 20 14:26:45 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:26:45.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 20 14:26:46 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:26:46 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 20 14:26:46 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:26:46.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 20 14:26:47 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1071: 321 pgs: 321 active+clean; 230 MiB data, 415 MiB used, 21 GiB / 21 GiB avail; 3.3 MiB/s rd, 1.4 MiB/s wr, 130 op/s
Jan 20 14:26:47 compute-0 ceph-mon[74360]: pgmap v1070: 321 pgs: 321 active+clean; 285 MiB data, 442 MiB used, 21 GiB / 21 GiB avail; 6.1 MiB/s rd, 4.7 MiB/s wr, 164 op/s
Jan 20 14:26:47 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:26:47 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:26:47 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:26:47.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:26:47 compute-0 nova_compute[250018]: 2026-01-20 14:26:47.602 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:26:48 compute-0 ceph-mon[74360]: pgmap v1071: 321 pgs: 321 active+clean; 230 MiB data, 415 MiB used, 21 GiB / 21 GiB avail; 3.3 MiB/s rd, 1.4 MiB/s wr, 130 op/s
Jan 20 14:26:48 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:26:48 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:26:48 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:26:48.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:26:49 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1072: 321 pgs: 321 active+clean; 180 MiB data, 385 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 17 KiB/s wr, 115 op/s
Jan 20 14:26:49 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:26:49 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 20 14:26:49 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:26:49.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 20 14:26:50 compute-0 nova_compute[250018]: 2026-01-20 14:26:50.196 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:26:50 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:26:50 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:26:50 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:26:50.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:26:50 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:26:50 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e150 do_prune osdmap full prune enabled
Jan 20 14:26:50 compute-0 ceph-mon[74360]: pgmap v1072: 321 pgs: 321 active+clean; 180 MiB data, 385 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 17 KiB/s wr, 115 op/s
Jan 20 14:26:51 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e151 e151: 3 total, 3 up, 3 in
Jan 20 14:26:51 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e151: 3 total, 3 up, 3 in
Jan 20 14:26:51 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1074: 321 pgs: 321 active+clean; 160 MiB data, 374 MiB used, 21 GiB / 21 GiB avail; 707 KiB/s rd, 14 KiB/s wr, 93 op/s
Jan 20 14:26:51 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:26:51 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 20 14:26:51 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:26:51.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 20 14:26:52 compute-0 ceph-mon[74360]: osdmap e151: 3 total, 3 up, 3 in
Jan 20 14:26:52 compute-0 ceph-mon[74360]: pgmap v1074: 321 pgs: 321 active+clean; 160 MiB data, 374 MiB used, 21 GiB / 21 GiB avail; 707 KiB/s rd, 14 KiB/s wr, 93 op/s
Jan 20 14:26:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Optimize plan auto_2026-01-20_14:26:52
Jan 20 14:26:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 14:26:52 compute-0 ceph-mgr[74653]: [balancer INFO root] do_upmap
Jan 20 14:26:52 compute-0 ceph-mgr[74653]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.control', 'backups', 'volumes', 'images', '.mgr', 'default.rgw.log', 'default.rgw.meta', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'vms']
Jan 20 14:26:52 compute-0 ceph-mgr[74653]: [balancer INFO root] prepared 0/10 changes
Jan 20 14:26:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:26:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:26:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:26:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:26:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:26:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:26:52 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:26:52 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:26:52 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:26:52.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:26:52 compute-0 nova_compute[250018]: 2026-01-20 14:26:52.605 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:26:53 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1075: 321 pgs: 321 active+clean; 122 MiB data, 349 MiB used, 21 GiB / 21 GiB avail; 722 KiB/s rd, 13 KiB/s wr, 111 op/s
Jan 20 14:26:53 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/1900977882' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:26:53 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:26:53 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:26:53 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:26:53.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:26:54 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:26:54 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:26:54 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:26:54.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:26:54 compute-0 ceph-mon[74360]: pgmap v1075: 321 pgs: 321 active+clean; 122 MiB data, 349 MiB used, 21 GiB / 21 GiB avail; 722 KiB/s rd, 13 KiB/s wr, 111 op/s
Jan 20 14:26:55 compute-0 nova_compute[250018]: 2026-01-20 14:26:55.198 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:26:55 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1076: 321 pgs: 321 active+clean; 101 MiB data, 335 MiB used, 21 GiB / 21 GiB avail; 289 KiB/s rd, 14 KiB/s wr, 99 op/s
Jan 20 14:26:55 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:26:55 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:26:55 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:26:55 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:26:55.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:26:56 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:26:56 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:26:56 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:26:56.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:26:56 compute-0 ceph-mon[74360]: pgmap v1076: 321 pgs: 321 active+clean; 101 MiB data, 335 MiB used, 21 GiB / 21 GiB avail; 289 KiB/s rd, 14 KiB/s wr, 99 op/s
Jan 20 14:26:57 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1077: 321 pgs: 321 active+clean; 41 MiB data, 303 MiB used, 21 GiB / 21 GiB avail; 458 KiB/s rd, 14 KiB/s wr, 132 op/s
Jan 20 14:26:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 14:26:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 14:26:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:26:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:26:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:26:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:26:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:26:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:26:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:26:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:26:57 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:26:57 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:26:57 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:26:57.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:26:57 compute-0 nova_compute[250018]: 2026-01-20 14:26:57.610 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:26:57 compute-0 sudo[264093]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:26:57 compute-0 sudo[264093]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:26:57 compute-0 sudo[264093]: pam_unix(sudo:session): session closed for user root
Jan 20 14:26:58 compute-0 sudo[264120]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:26:58 compute-0 sudo[264120]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:26:58 compute-0 sudo[264120]: pam_unix(sudo:session): session closed for user root
Jan 20 14:26:58 compute-0 podman[264118]: 2026-01-20 14:26:58.096049242 +0000 UTC m=+0.076521077 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 20 14:26:58 compute-0 podman[264117]: 2026-01-20 14:26:58.118866462 +0000 UTC m=+0.099340587 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 20 14:26:58 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/3603524033' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:26:58 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:26:58 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:26:58 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:26:58.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:26:59 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1078: 321 pgs: 321 active+clean; 41 MiB data, 303 MiB used, 21 GiB / 21 GiB avail; 448 KiB/s rd, 13 KiB/s wr, 116 op/s
Jan 20 14:26:59 compute-0 ceph-mon[74360]: pgmap v1077: 321 pgs: 321 active+clean; 41 MiB data, 303 MiB used, 21 GiB / 21 GiB avail; 458 KiB/s rd, 14 KiB/s wr, 132 op/s
Jan 20 14:26:59 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:26:59 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:26:59 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:26:59.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:27:00 compute-0 nova_compute[250018]: 2026-01-20 14:27:00.257 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:27:00 compute-0 ceph-mon[74360]: pgmap v1078: 321 pgs: 321 active+clean; 41 MiB data, 303 MiB used, 21 GiB / 21 GiB avail; 448 KiB/s rd, 13 KiB/s wr, 116 op/s
Jan 20 14:27:00 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:27:00 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:27:00 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:27:00.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:27:00 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:27:01 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1079: 321 pgs: 321 active+clean; 41 MiB data, 303 MiB used, 21 GiB / 21 GiB avail; 371 KiB/s rd, 4.2 KiB/s wr, 82 op/s
Jan 20 14:27:01 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:27:01 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:27:01 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:27:01.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:27:02 compute-0 ceph-mon[74360]: pgmap v1079: 321 pgs: 321 active+clean; 41 MiB data, 303 MiB used, 21 GiB / 21 GiB avail; 371 KiB/s rd, 4.2 KiB/s wr, 82 op/s
Jan 20 14:27:02 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:27:02 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:27:02 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:27:02.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:27:02 compute-0 nova_compute[250018]: 2026-01-20 14:27:02.615 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:27:03 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1080: 321 pgs: 321 active+clean; 41 MiB data, 303 MiB used, 21 GiB / 21 GiB avail; 310 KiB/s rd, 3.5 KiB/s wr, 69 op/s
Jan 20 14:27:03 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:27:03 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:27:03 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:27:03.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:27:04 compute-0 sshd-session[264190]: Invalid user admin from 157.245.78.139 port 37434
Jan 20 14:27:04 compute-0 sshd-session[264190]: Connection closed by invalid user admin 157.245.78.139 port 37434 [preauth]
Jan 20 14:27:04 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:27:04 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:27:04 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:27:04.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:27:05 compute-0 ceph-mon[74360]: pgmap v1080: 321 pgs: 321 active+clean; 41 MiB data, 303 MiB used, 21 GiB / 21 GiB avail; 310 KiB/s rd, 3.5 KiB/s wr, 69 op/s
Jan 20 14:27:05 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1081: 321 pgs: 321 active+clean; 41 MiB data, 303 MiB used, 21 GiB / 21 GiB avail; 195 KiB/s rd, 2.5 KiB/s wr, 40 op/s
Jan 20 14:27:05 compute-0 nova_compute[250018]: 2026-01-20 14:27:05.261 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:27:05 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:27:05 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:27:05 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:27:05 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:27:05.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:27:06 compute-0 ceph-mon[74360]: pgmap v1081: 321 pgs: 321 active+clean; 41 MiB data, 303 MiB used, 21 GiB / 21 GiB avail; 195 KiB/s rd, 2.5 KiB/s wr, 40 op/s
Jan 20 14:27:06 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:27:06 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:27:06 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:27:06.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:27:07 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1082: 321 pgs: 321 active+clean; 41 MiB data, 303 MiB used, 21 GiB / 21 GiB avail; 142 KiB/s rd, 1.2 KiB/s wr, 30 op/s
Jan 20 14:27:07 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:27:07 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:27:07 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:27:07.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:27:07 compute-0 nova_compute[250018]: 2026-01-20 14:27:07.662 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:27:08 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:27:08 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:27:08 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:27:08.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:27:08 compute-0 ceph-mon[74360]: pgmap v1082: 321 pgs: 321 active+clean; 41 MiB data, 303 MiB used, 21 GiB / 21 GiB avail; 142 KiB/s rd, 1.2 KiB/s wr, 30 op/s
Jan 20 14:27:09 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1083: 321 pgs: 321 active+clean; 41 MiB data, 303 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:27:09 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:27:09 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:27:09 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:27:09.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:27:10 compute-0 nova_compute[250018]: 2026-01-20 14:27:10.264 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:27:10 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:27:10 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:27:10 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:27:10.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:27:10 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:27:10 compute-0 ceph-mon[74360]: pgmap v1083: 321 pgs: 321 active+clean; 41 MiB data, 303 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:27:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 14:27:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:27:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 20 14:27:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:27:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 20 14:27:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:27:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 20 14:27:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:27:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:27:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:27:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 20 14:27:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:27:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 20 14:27:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:27:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:27:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:27:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 20 14:27:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:27:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 20 14:27:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:27:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:27:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:27:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 20 14:27:11 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1084: 321 pgs: 321 active+clean; 41 MiB data, 303 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:27:11 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:27:11 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:27:11 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:27:11.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:27:12 compute-0 ceph-mon[74360]: pgmap v1084: 321 pgs: 321 active+clean; 41 MiB data, 303 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:27:12 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:27:12 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:27:12 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:27:12.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:27:12 compute-0 nova_compute[250018]: 2026-01-20 14:27:12.665 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:27:13 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1085: 321 pgs: 321 active+clean; 41 MiB data, 303 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:27:13 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:27:13 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 20 14:27:13 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:27:13.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 20 14:27:14 compute-0 nova_compute[250018]: 2026-01-20 14:27:14.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:27:14 compute-0 nova_compute[250018]: 2026-01-20 14:27:14.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:27:14 compute-0 nova_compute[250018]: 2026-01-20 14:27:14.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:27:14 compute-0 nova_compute[250018]: 2026-01-20 14:27:14.084 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:27:14 compute-0 nova_compute[250018]: 2026-01-20 14:27:14.084 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:27:14 compute-0 nova_compute[250018]: 2026-01-20 14:27:14.084 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:27:14 compute-0 nova_compute[250018]: 2026-01-20 14:27:14.085 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 14:27:14 compute-0 nova_compute[250018]: 2026-01-20 14:27:14.085 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:27:14 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:27:14 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:27:14 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:27:14.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:27:14 compute-0 ceph-mon[74360]: pgmap v1085: 321 pgs: 321 active+clean; 41 MiB data, 303 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:27:14 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:27:14 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2896920713' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:27:15 compute-0 nova_compute[250018]: 2026-01-20 14:27:15.006 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.921s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:27:15 compute-0 nova_compute[250018]: 2026-01-20 14:27:15.238 250022 WARNING nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:27:15 compute-0 nova_compute[250018]: 2026-01-20 14:27:15.241 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4780MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 14:27:15 compute-0 nova_compute[250018]: 2026-01-20 14:27:15.241 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:27:15 compute-0 nova_compute[250018]: 2026-01-20 14:27:15.242 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:27:15 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1086: 321 pgs: 321 active+clean; 41 MiB data, 303 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:27:15 compute-0 nova_compute[250018]: 2026-01-20 14:27:15.266 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:27:15 compute-0 nova_compute[250018]: 2026-01-20 14:27:15.371 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 14:27:15 compute-0 nova_compute[250018]: 2026-01-20 14:27:15.371 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 14:27:15 compute-0 nova_compute[250018]: 2026-01-20 14:27:15.388 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:27:15 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:27:15 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:27:15 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:27:15.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:27:15 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:27:15 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:27:15 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1469458289' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:27:15 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/3830522030' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:27:15 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/3830522030' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:27:15 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/2896920713' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:27:15 compute-0 nova_compute[250018]: 2026-01-20 14:27:15.868 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:27:15 compute-0 nova_compute[250018]: 2026-01-20 14:27:15.874 250022 DEBUG nova.compute.provider_tree [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:27:15 compute-0 nova_compute[250018]: 2026-01-20 14:27:15.890 250022 DEBUG nova.scheduler.client.report [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:27:15 compute-0 nova_compute[250018]: 2026-01-20 14:27:15.970 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 14:27:15 compute-0 nova_compute[250018]: 2026-01-20 14:27:15.971 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.729s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:27:16 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:27:16 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:27:16 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:27:16.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:27:16 compute-0 nova_compute[250018]: 2026-01-20 14:27:16.678 250022 DEBUG oslo_concurrency.lockutils [None req-72d9d30b-fc0c-49bc-a65d-98d7b4609436 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] Acquiring lock "d726266f-b9a6-406b-ad13-f9db3e0dc6aa" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:27:16 compute-0 nova_compute[250018]: 2026-01-20 14:27:16.678 250022 DEBUG oslo_concurrency.lockutils [None req-72d9d30b-fc0c-49bc-a65d-98d7b4609436 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] Lock "d726266f-b9a6-406b-ad13-f9db3e0dc6aa" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:27:16 compute-0 nova_compute[250018]: 2026-01-20 14:27:16.697 250022 DEBUG nova.compute.manager [None req-72d9d30b-fc0c-49bc-a65d-98d7b4609436 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 14:27:16 compute-0 nova_compute[250018]: 2026-01-20 14:27:16.794 250022 DEBUG oslo_concurrency.lockutils [None req-72d9d30b-fc0c-49bc-a65d-98d7b4609436 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:27:16 compute-0 nova_compute[250018]: 2026-01-20 14:27:16.794 250022 DEBUG oslo_concurrency.lockutils [None req-72d9d30b-fc0c-49bc-a65d-98d7b4609436 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:27:16 compute-0 nova_compute[250018]: 2026-01-20 14:27:16.799 250022 DEBUG nova.virt.hardware [None req-72d9d30b-fc0c-49bc-a65d-98d7b4609436 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 14:27:16 compute-0 nova_compute[250018]: 2026-01-20 14:27:16.799 250022 INFO nova.compute.claims [None req-72d9d30b-fc0c-49bc-a65d-98d7b4609436 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] Claim successful on node compute-0.ctlplane.example.com
Jan 20 14:27:16 compute-0 ceph-osd[84815]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 20 14:27:16 compute-0 ceph-osd[84815]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.0 total, 600.0 interval
                                           Cumulative writes: 12K writes, 46K keys, 12K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s
                                           Cumulative WAL: 12K writes, 3092 syncs, 3.88 writes per sync, written: 0.03 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 3541 writes, 12K keys, 3541 commit groups, 1.0 writes per commit group, ingest: 14.43 MB, 0.02 MB/s
                                           Interval WAL: 3541 writes, 1399 syncs, 2.53 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 20 14:27:16 compute-0 nova_compute[250018]: 2026-01-20 14:27:16.957 250022 DEBUG oslo_concurrency.processutils [None req-72d9d30b-fc0c-49bc-a65d-98d7b4609436 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:27:17 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1087: 321 pgs: 321 active+clean; 41 MiB data, 303 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:27:17 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:27:17 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:27:17 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:27:17.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:27:17 compute-0 ceph-mon[74360]: pgmap v1086: 321 pgs: 321 active+clean; 41 MiB data, 303 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:27:17 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/1596450767' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:27:17 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/1469458289' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:27:17 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/3796316152' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:27:17 compute-0 nova_compute[250018]: 2026-01-20 14:27:17.670 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:27:17 compute-0 nova_compute[250018]: 2026-01-20 14:27:17.966 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:27:17 compute-0 nova_compute[250018]: 2026-01-20 14:27:17.966 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:27:17 compute-0 nova_compute[250018]: 2026-01-20 14:27:17.967 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:27:17 compute-0 nova_compute[250018]: 2026-01-20 14:27:17.967 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:27:17 compute-0 nova_compute[250018]: 2026-01-20 14:27:17.967 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 14:27:18 compute-0 sudo[264265]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:27:18 compute-0 sudo[264265]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:27:18 compute-0 sudo[264265]: pam_unix(sudo:session): session closed for user root
Jan 20 14:27:18 compute-0 sudo[264290]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:27:18 compute-0 sudo[264290]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:27:18 compute-0 sudo[264290]: pam_unix(sudo:session): session closed for user root
Jan 20 14:27:18 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:27:18 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1678624093' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:27:18 compute-0 nova_compute[250018]: 2026-01-20 14:27:18.355 250022 DEBUG oslo_concurrency.processutils [None req-72d9d30b-fc0c-49bc-a65d-98d7b4609436 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.398s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:27:18 compute-0 nova_compute[250018]: 2026-01-20 14:27:18.361 250022 DEBUG nova.compute.provider_tree [None req-72d9d30b-fc0c-49bc-a65d-98d7b4609436 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:27:18 compute-0 nova_compute[250018]: 2026-01-20 14:27:18.452 250022 DEBUG nova.scheduler.client.report [None req-72d9d30b-fc0c-49bc-a65d-98d7b4609436 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:27:18 compute-0 nova_compute[250018]: 2026-01-20 14:27:18.516 250022 DEBUG oslo_concurrency.lockutils [None req-72d9d30b-fc0c-49bc-a65d-98d7b4609436 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.721s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:27:18 compute-0 nova_compute[250018]: 2026-01-20 14:27:18.517 250022 DEBUG nova.compute.manager [None req-72d9d30b-fc0c-49bc-a65d-98d7b4609436 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 14:27:18 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:27:18 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:27:18 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:27:18.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:27:18 compute-0 nova_compute[250018]: 2026-01-20 14:27:18.628 250022 DEBUG nova.compute.manager [None req-72d9d30b-fc0c-49bc-a65d-98d7b4609436 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 20 14:27:18 compute-0 nova_compute[250018]: 2026-01-20 14:27:18.629 250022 DEBUG nova.network.neutron [None req-72d9d30b-fc0c-49bc-a65d-98d7b4609436 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 14:27:18 compute-0 nova_compute[250018]: 2026-01-20 14:27:18.675 250022 INFO nova.virt.libvirt.driver [None req-72d9d30b-fc0c-49bc-a65d-98d7b4609436 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 14:27:18 compute-0 ceph-mon[74360]: pgmap v1087: 321 pgs: 321 active+clean; 41 MiB data, 303 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:27:18 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/338726957' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:27:18 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/1678624093' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:27:18 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/499083521' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:27:18 compute-0 nova_compute[250018]: 2026-01-20 14:27:18.735 250022 DEBUG nova.compute.manager [None req-72d9d30b-fc0c-49bc-a65d-98d7b4609436 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 14:27:18 compute-0 nova_compute[250018]: 2026-01-20 14:27:18.950 250022 DEBUG nova.compute.manager [None req-72d9d30b-fc0c-49bc-a65d-98d7b4609436 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 14:27:18 compute-0 nova_compute[250018]: 2026-01-20 14:27:18.951 250022 DEBUG nova.virt.libvirt.driver [None req-72d9d30b-fc0c-49bc-a65d-98d7b4609436 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 14:27:18 compute-0 nova_compute[250018]: 2026-01-20 14:27:18.952 250022 INFO nova.virt.libvirt.driver [None req-72d9d30b-fc0c-49bc-a65d-98d7b4609436 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] Creating image(s)
Jan 20 14:27:18 compute-0 nova_compute[250018]: 2026-01-20 14:27:18.982 250022 DEBUG nova.storage.rbd_utils [None req-72d9d30b-fc0c-49bc-a65d-98d7b4609436 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] rbd image d726266f-b9a6-406b-ad13-f9db3e0dc6aa_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:27:19 compute-0 nova_compute[250018]: 2026-01-20 14:27:19.011 250022 DEBUG nova.storage.rbd_utils [None req-72d9d30b-fc0c-49bc-a65d-98d7b4609436 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] rbd image d726266f-b9a6-406b-ad13-f9db3e0dc6aa_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:27:19 compute-0 nova_compute[250018]: 2026-01-20 14:27:19.035 250022 DEBUG nova.storage.rbd_utils [None req-72d9d30b-fc0c-49bc-a65d-98d7b4609436 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] rbd image d726266f-b9a6-406b-ad13-f9db3e0dc6aa_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:27:19 compute-0 nova_compute[250018]: 2026-01-20 14:27:19.039 250022 DEBUG oslo_concurrency.processutils [None req-72d9d30b-fc0c-49bc-a65d-98d7b4609436 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:27:19 compute-0 nova_compute[250018]: 2026-01-20 14:27:19.064 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:27:19 compute-0 nova_compute[250018]: 2026-01-20 14:27:19.065 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 14:27:19 compute-0 nova_compute[250018]: 2026-01-20 14:27:19.065 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 14:27:19 compute-0 nova_compute[250018]: 2026-01-20 14:27:19.096 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 20 14:27:19 compute-0 nova_compute[250018]: 2026-01-20 14:27:19.097 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 20 14:27:19 compute-0 nova_compute[250018]: 2026-01-20 14:27:19.110 250022 DEBUG oslo_concurrency.processutils [None req-72d9d30b-fc0c-49bc-a65d-98d7b4609436 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:27:19 compute-0 nova_compute[250018]: 2026-01-20 14:27:19.112 250022 DEBUG oslo_concurrency.lockutils [None req-72d9d30b-fc0c-49bc-a65d-98d7b4609436 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] Acquiring lock "82d5c1918fd7c974214c7a48c1793a7a82560462" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:27:19 compute-0 nova_compute[250018]: 2026-01-20 14:27:19.114 250022 DEBUG oslo_concurrency.lockutils [None req-72d9d30b-fc0c-49bc-a65d-98d7b4609436 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:27:19 compute-0 nova_compute[250018]: 2026-01-20 14:27:19.114 250022 DEBUG oslo_concurrency.lockutils [None req-72d9d30b-fc0c-49bc-a65d-98d7b4609436 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:27:19 compute-0 nova_compute[250018]: 2026-01-20 14:27:19.135 250022 DEBUG nova.storage.rbd_utils [None req-72d9d30b-fc0c-49bc-a65d-98d7b4609436 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] rbd image d726266f-b9a6-406b-ad13-f9db3e0dc6aa_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:27:19 compute-0 nova_compute[250018]: 2026-01-20 14:27:19.139 250022 DEBUG oslo_concurrency.processutils [None req-72d9d30b-fc0c-49bc-a65d-98d7b4609436 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 d726266f-b9a6-406b-ad13-f9db3e0dc6aa_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:27:19 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1088: 321 pgs: 321 active+clean; 41 MiB data, 303 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:27:19 compute-0 nova_compute[250018]: 2026-01-20 14:27:19.350 250022 DEBUG nova.policy [None req-72d9d30b-fc0c-49bc-a65d-98d7b4609436 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'bce7fcbd19554e29bb80c5b93b7dd3c9', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd15f60b9e48e4175b5520d1e57ed2d3a', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 20 14:27:19 compute-0 nova_compute[250018]: 2026-01-20 14:27:19.507 250022 DEBUG oslo_concurrency.processutils [None req-72d9d30b-fc0c-49bc-a65d-98d7b4609436 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 d726266f-b9a6-406b-ad13-f9db3e0dc6aa_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.368s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:27:19 compute-0 nova_compute[250018]: 2026-01-20 14:27:19.581 250022 DEBUG nova.storage.rbd_utils [None req-72d9d30b-fc0c-49bc-a65d-98d7b4609436 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] resizing rbd image d726266f-b9a6-406b-ad13-f9db3e0dc6aa_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 20 14:27:19 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:27:19 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:27:19 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:27:19.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:27:19 compute-0 nova_compute[250018]: 2026-01-20 14:27:19.676 250022 DEBUG nova.objects.instance [None req-72d9d30b-fc0c-49bc-a65d-98d7b4609436 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] Lazy-loading 'migration_context' on Instance uuid d726266f-b9a6-406b-ad13-f9db3e0dc6aa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:27:19 compute-0 nova_compute[250018]: 2026-01-20 14:27:19.720 250022 DEBUG nova.virt.libvirt.driver [None req-72d9d30b-fc0c-49bc-a65d-98d7b4609436 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 14:27:19 compute-0 nova_compute[250018]: 2026-01-20 14:27:19.720 250022 DEBUG nova.virt.libvirt.driver [None req-72d9d30b-fc0c-49bc-a65d-98d7b4609436 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] Ensure instance console log exists: /var/lib/nova/instances/d726266f-b9a6-406b-ad13-f9db3e0dc6aa/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 14:27:19 compute-0 nova_compute[250018]: 2026-01-20 14:27:19.721 250022 DEBUG oslo_concurrency.lockutils [None req-72d9d30b-fc0c-49bc-a65d-98d7b4609436 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:27:19 compute-0 nova_compute[250018]: 2026-01-20 14:27:19.721 250022 DEBUG oslo_concurrency.lockutils [None req-72d9d30b-fc0c-49bc-a65d-98d7b4609436 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:27:19 compute-0 nova_compute[250018]: 2026-01-20 14:27:19.721 250022 DEBUG oslo_concurrency.lockutils [None req-72d9d30b-fc0c-49bc-a65d-98d7b4609436 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:27:20 compute-0 ceph-mon[74360]: pgmap v1088: 321 pgs: 321 active+clean; 41 MiB data, 303 MiB used, 21 GiB / 21 GiB avail
Jan 20 14:27:20 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/3505650960' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:27:20 compute-0 nova_compute[250018]: 2026-01-20 14:27:20.269 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:27:20 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:27:20 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:27:20 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:27:20.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:27:20 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:27:21 compute-0 nova_compute[250018]: 2026-01-20 14:27:21.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:27:21 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1089: 321 pgs: 321 active+clean; 86 MiB data, 319 MiB used, 21 GiB / 21 GiB avail; 18 KiB/s rd, 1.4 MiB/s wr, 28 op/s
Jan 20 14:27:21 compute-0 nova_compute[250018]: 2026-01-20 14:27:21.415 250022 DEBUG nova.network.neutron [None req-72d9d30b-fc0c-49bc-a65d-98d7b4609436 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] Successfully updated port: e6067076-0f97-4e9c-9355-353277570e11 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 20 14:27:21 compute-0 nova_compute[250018]: 2026-01-20 14:27:21.505 250022 DEBUG nova.compute.manager [req-6436035a-7e75-4d8c-bae1-abd29da6930a req-def26641-7517-462f-b5a4-71aa7aa7b8b7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] Received event network-changed-e6067076-0f97-4e9c-9355-353277570e11 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:27:21 compute-0 nova_compute[250018]: 2026-01-20 14:27:21.505 250022 DEBUG nova.compute.manager [req-6436035a-7e75-4d8c-bae1-abd29da6930a req-def26641-7517-462f-b5a4-71aa7aa7b8b7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] Refreshing instance network info cache due to event network-changed-e6067076-0f97-4e9c-9355-353277570e11. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:27:21 compute-0 nova_compute[250018]: 2026-01-20 14:27:21.506 250022 DEBUG oslo_concurrency.lockutils [req-6436035a-7e75-4d8c-bae1-abd29da6930a req-def26641-7517-462f-b5a4-71aa7aa7b8b7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-d726266f-b9a6-406b-ad13-f9db3e0dc6aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:27:21 compute-0 nova_compute[250018]: 2026-01-20 14:27:21.506 250022 DEBUG oslo_concurrency.lockutils [req-6436035a-7e75-4d8c-bae1-abd29da6930a req-def26641-7517-462f-b5a4-71aa7aa7b8b7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-d726266f-b9a6-406b-ad13-f9db3e0dc6aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:27:21 compute-0 nova_compute[250018]: 2026-01-20 14:27:21.506 250022 DEBUG nova.network.neutron [req-6436035a-7e75-4d8c-bae1-abd29da6930a req-def26641-7517-462f-b5a4-71aa7aa7b8b7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] Refreshing network info cache for port e6067076-0f97-4e9c-9355-353277570e11 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:27:21 compute-0 nova_compute[250018]: 2026-01-20 14:27:21.511 250022 DEBUG oslo_concurrency.lockutils [None req-72d9d30b-fc0c-49bc-a65d-98d7b4609436 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] Acquiring lock "refresh_cache-d726266f-b9a6-406b-ad13-f9db3e0dc6aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:27:21 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/1706451896' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:27:21 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/697472392' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:27:21 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:27:21 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:27:21 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:27:21.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:27:22 compute-0 nova_compute[250018]: 2026-01-20 14:27:22.093 250022 DEBUG nova.network.neutron [req-6436035a-7e75-4d8c-bae1-abd29da6930a req-def26641-7517-462f-b5a4-71aa7aa7b8b7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 14:27:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:27:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:27:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:27:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:27:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:27:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:27:22 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:27:22 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:27:22 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:27:22.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:27:22 compute-0 nova_compute[250018]: 2026-01-20 14:27:22.703 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:27:22 compute-0 ceph-mon[74360]: pgmap v1089: 321 pgs: 321 active+clean; 86 MiB data, 319 MiB used, 21 GiB / 21 GiB avail; 18 KiB/s rd, 1.4 MiB/s wr, 28 op/s
Jan 20 14:27:22 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/1274305540' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:27:22 compute-0 nova_compute[250018]: 2026-01-20 14:27:22.845 250022 DEBUG nova.network.neutron [req-6436035a-7e75-4d8c-bae1-abd29da6930a req-def26641-7517-462f-b5a4-71aa7aa7b8b7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:27:22 compute-0 nova_compute[250018]: 2026-01-20 14:27:22.859 250022 DEBUG oslo_concurrency.lockutils [req-6436035a-7e75-4d8c-bae1-abd29da6930a req-def26641-7517-462f-b5a4-71aa7aa7b8b7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-d726266f-b9a6-406b-ad13-f9db3e0dc6aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:27:22 compute-0 nova_compute[250018]: 2026-01-20 14:27:22.859 250022 DEBUG oslo_concurrency.lockutils [None req-72d9d30b-fc0c-49bc-a65d-98d7b4609436 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] Acquired lock "refresh_cache-d726266f-b9a6-406b-ad13-f9db3e0dc6aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:27:22 compute-0 nova_compute[250018]: 2026-01-20 14:27:22.860 250022 DEBUG nova.network.neutron [None req-72d9d30b-fc0c-49bc-a65d-98d7b4609436 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:27:23 compute-0 nova_compute[250018]: 2026-01-20 14:27:23.029 250022 DEBUG nova.network.neutron [None req-72d9d30b-fc0c-49bc-a65d-98d7b4609436 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 14:27:23 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1090: 321 pgs: 321 active+clean; 126 MiB data, 344 MiB used, 21 GiB / 21 GiB avail; 35 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Jan 20 14:27:23 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:27:23 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:27:23 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:27:23.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:27:23 compute-0 ceph-mon[74360]: pgmap v1090: 321 pgs: 321 active+clean; 126 MiB data, 344 MiB used, 21 GiB / 21 GiB avail; 35 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Jan 20 14:27:24 compute-0 nova_compute[250018]: 2026-01-20 14:27:24.089 250022 DEBUG nova.network.neutron [None req-72d9d30b-fc0c-49bc-a65d-98d7b4609436 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] Updating instance_info_cache with network_info: [{"id": "e6067076-0f97-4e9c-9355-353277570e11", "address": "fa:16:3e:db:cf:b7", "network": {"id": "14f18b27-1594-48d8-a08b-a930f7adbc08", "bridge": "br-int", "label": "tempest-LiveMigrationTest-2126108622-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d15f60b9e48e4175b5520d1e57ed2d3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape6067076-0f", "ovs_interfaceid": "e6067076-0f97-4e9c-9355-353277570e11", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:27:24 compute-0 nova_compute[250018]: 2026-01-20 14:27:24.116 250022 DEBUG oslo_concurrency.lockutils [None req-72d9d30b-fc0c-49bc-a65d-98d7b4609436 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] Releasing lock "refresh_cache-d726266f-b9a6-406b-ad13-f9db3e0dc6aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:27:24 compute-0 nova_compute[250018]: 2026-01-20 14:27:24.117 250022 DEBUG nova.compute.manager [None req-72d9d30b-fc0c-49bc-a65d-98d7b4609436 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] Instance network_info: |[{"id": "e6067076-0f97-4e9c-9355-353277570e11", "address": "fa:16:3e:db:cf:b7", "network": {"id": "14f18b27-1594-48d8-a08b-a930f7adbc08", "bridge": "br-int", "label": "tempest-LiveMigrationTest-2126108622-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d15f60b9e48e4175b5520d1e57ed2d3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape6067076-0f", "ovs_interfaceid": "e6067076-0f97-4e9c-9355-353277570e11", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 20 14:27:24 compute-0 nova_compute[250018]: 2026-01-20 14:27:24.120 250022 DEBUG nova.virt.libvirt.driver [None req-72d9d30b-fc0c-49bc-a65d-98d7b4609436 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] Start _get_guest_xml network_info=[{"id": "e6067076-0f97-4e9c-9355-353277570e11", "address": "fa:16:3e:db:cf:b7", "network": {"id": "14f18b27-1594-48d8-a08b-a930f7adbc08", "bridge": "br-int", "label": "tempest-LiveMigrationTest-2126108622-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d15f60b9e48e4175b5520d1e57ed2d3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape6067076-0f", "ovs_interfaceid": "e6067076-0f97-4e9c-9355-353277570e11", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encrypted': False, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'guest_format': None, 'size': 0, 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 14:27:24 compute-0 nova_compute[250018]: 2026-01-20 14:27:24.123 250022 WARNING nova.virt.libvirt.driver [None req-72d9d30b-fc0c-49bc-a65d-98d7b4609436 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:27:24 compute-0 nova_compute[250018]: 2026-01-20 14:27:24.128 250022 DEBUG nova.virt.libvirt.host [None req-72d9d30b-fc0c-49bc-a65d-98d7b4609436 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 14:27:24 compute-0 nova_compute[250018]: 2026-01-20 14:27:24.129 250022 DEBUG nova.virt.libvirt.host [None req-72d9d30b-fc0c-49bc-a65d-98d7b4609436 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 14:27:24 compute-0 nova_compute[250018]: 2026-01-20 14:27:24.134 250022 DEBUG nova.virt.libvirt.host [None req-72d9d30b-fc0c-49bc-a65d-98d7b4609436 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 14:27:24 compute-0 nova_compute[250018]: 2026-01-20 14:27:24.135 250022 DEBUG nova.virt.libvirt.host [None req-72d9d30b-fc0c-49bc-a65d-98d7b4609436 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 14:27:24 compute-0 nova_compute[250018]: 2026-01-20 14:27:24.136 250022 DEBUG nova.virt.libvirt.driver [None req-72d9d30b-fc0c-49bc-a65d-98d7b4609436 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 14:27:24 compute-0 nova_compute[250018]: 2026-01-20 14:27:24.136 250022 DEBUG nova.virt.hardware [None req-72d9d30b-fc0c-49bc-a65d-98d7b4609436 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 14:27:24 compute-0 nova_compute[250018]: 2026-01-20 14:27:24.137 250022 DEBUG nova.virt.hardware [None req-72d9d30b-fc0c-49bc-a65d-98d7b4609436 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 14:27:24 compute-0 nova_compute[250018]: 2026-01-20 14:27:24.137 250022 DEBUG nova.virt.hardware [None req-72d9d30b-fc0c-49bc-a65d-98d7b4609436 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 14:27:24 compute-0 nova_compute[250018]: 2026-01-20 14:27:24.137 250022 DEBUG nova.virt.hardware [None req-72d9d30b-fc0c-49bc-a65d-98d7b4609436 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 14:27:24 compute-0 nova_compute[250018]: 2026-01-20 14:27:24.138 250022 DEBUG nova.virt.hardware [None req-72d9d30b-fc0c-49bc-a65d-98d7b4609436 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 14:27:24 compute-0 nova_compute[250018]: 2026-01-20 14:27:24.138 250022 DEBUG nova.virt.hardware [None req-72d9d30b-fc0c-49bc-a65d-98d7b4609436 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 14:27:24 compute-0 nova_compute[250018]: 2026-01-20 14:27:24.138 250022 DEBUG nova.virt.hardware [None req-72d9d30b-fc0c-49bc-a65d-98d7b4609436 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 14:27:24 compute-0 nova_compute[250018]: 2026-01-20 14:27:24.139 250022 DEBUG nova.virt.hardware [None req-72d9d30b-fc0c-49bc-a65d-98d7b4609436 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 14:27:24 compute-0 nova_compute[250018]: 2026-01-20 14:27:24.139 250022 DEBUG nova.virt.hardware [None req-72d9d30b-fc0c-49bc-a65d-98d7b4609436 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 14:27:24 compute-0 nova_compute[250018]: 2026-01-20 14:27:24.139 250022 DEBUG nova.virt.hardware [None req-72d9d30b-fc0c-49bc-a65d-98d7b4609436 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 14:27:24 compute-0 nova_compute[250018]: 2026-01-20 14:27:24.139 250022 DEBUG nova.virt.hardware [None req-72d9d30b-fc0c-49bc-a65d-98d7b4609436 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 14:27:24 compute-0 nova_compute[250018]: 2026-01-20 14:27:24.143 250022 DEBUG oslo_concurrency.processutils [None req-72d9d30b-fc0c-49bc-a65d-98d7b4609436 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:27:24 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:27:24 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:27:24 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:27:24.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:27:25 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1091: 321 pgs: 321 active+clean; 145 MiB data, 349 MiB used, 21 GiB / 21 GiB avail; 42 KiB/s rd, 3.9 MiB/s wr, 64 op/s
Jan 20 14:27:25 compute-0 nova_compute[250018]: 2026-01-20 14:27:25.271 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:27:25 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:27:25 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1326481010' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:27:25 compute-0 nova_compute[250018]: 2026-01-20 14:27:25.631 250022 DEBUG oslo_concurrency.processutils [None req-72d9d30b-fc0c-49bc-a65d-98d7b4609436 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:27:25 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:27:25 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:27:25 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:27:25.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:27:25 compute-0 nova_compute[250018]: 2026-01-20 14:27:25.655 250022 DEBUG nova.storage.rbd_utils [None req-72d9d30b-fc0c-49bc-a65d-98d7b4609436 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] rbd image d726266f-b9a6-406b-ad13-f9db3e0dc6aa_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:27:25 compute-0 nova_compute[250018]: 2026-01-20 14:27:25.658 250022 DEBUG oslo_concurrency.processutils [None req-72d9d30b-fc0c-49bc-a65d-98d7b4609436 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:27:25 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:27:26 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:27:26 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/205760377' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:27:26 compute-0 nova_compute[250018]: 2026-01-20 14:27:26.104 250022 DEBUG oslo_concurrency.processutils [None req-72d9d30b-fc0c-49bc-a65d-98d7b4609436 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:27:26 compute-0 nova_compute[250018]: 2026-01-20 14:27:26.105 250022 DEBUG nova.virt.libvirt.vif [None req-72d9d30b-fc0c-49bc-a65d-98d7b4609436 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:27:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-1394818615',display_name='tempest-LiveMigrationTest-server-1394818615',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-livemigrationtest-server-1394818615',id=16,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d15f60b9e48e4175b5520d1e57ed2d3a',ramdisk_id='',reservation_id='r-pti072hl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-LiveMigrationTest-864280704',owner_user_name='tempest-LiveMigrationTest-864280704-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:27:18Z,user_data=None,user_id='bce7fcbd19554e29bb80c5b93b7dd3c9',uuid=d726266f-b9a6-406b-ad13-f9db3e0dc6aa,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e6067076-0f97-4e9c-9355-353277570e11", "address": "fa:16:3e:db:cf:b7", "network": {"id": "14f18b27-1594-48d8-a08b-a930f7adbc08", "bridge": "br-int", "label": "tempest-LiveMigrationTest-2126108622-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d15f60b9e48e4175b5520d1e57ed2d3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape6067076-0f", "ovs_interfaceid": "e6067076-0f97-4e9c-9355-353277570e11", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 14:27:26 compute-0 nova_compute[250018]: 2026-01-20 14:27:26.106 250022 DEBUG nova.network.os_vif_util [None req-72d9d30b-fc0c-49bc-a65d-98d7b4609436 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] Converting VIF {"id": "e6067076-0f97-4e9c-9355-353277570e11", "address": "fa:16:3e:db:cf:b7", "network": {"id": "14f18b27-1594-48d8-a08b-a930f7adbc08", "bridge": "br-int", "label": "tempest-LiveMigrationTest-2126108622-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d15f60b9e48e4175b5520d1e57ed2d3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape6067076-0f", "ovs_interfaceid": "e6067076-0f97-4e9c-9355-353277570e11", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:27:26 compute-0 nova_compute[250018]: 2026-01-20 14:27:26.106 250022 DEBUG nova.network.os_vif_util [None req-72d9d30b-fc0c-49bc-a65d-98d7b4609436 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:db:cf:b7,bridge_name='br-int',has_traffic_filtering=True,id=e6067076-0f97-4e9c-9355-353277570e11,network=Network(14f18b27-1594-48d8-a08b-a930f7adbc08),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tape6067076-0f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:27:26 compute-0 nova_compute[250018]: 2026-01-20 14:27:26.107 250022 DEBUG nova.objects.instance [None req-72d9d30b-fc0c-49bc-a65d-98d7b4609436 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] Lazy-loading 'pci_devices' on Instance uuid d726266f-b9a6-406b-ad13-f9db3e0dc6aa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:27:26 compute-0 nova_compute[250018]: 2026-01-20 14:27:26.122 250022 DEBUG nova.virt.libvirt.driver [None req-72d9d30b-fc0c-49bc-a65d-98d7b4609436 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] End _get_guest_xml xml=<domain type="kvm">
Jan 20 14:27:26 compute-0 nova_compute[250018]:   <uuid>d726266f-b9a6-406b-ad13-f9db3e0dc6aa</uuid>
Jan 20 14:27:26 compute-0 nova_compute[250018]:   <name>instance-00000010</name>
Jan 20 14:27:26 compute-0 nova_compute[250018]:   <memory>131072</memory>
Jan 20 14:27:26 compute-0 nova_compute[250018]:   <vcpu>1</vcpu>
Jan 20 14:27:26 compute-0 nova_compute[250018]:   <metadata>
Jan 20 14:27:26 compute-0 nova_compute[250018]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 14:27:26 compute-0 nova_compute[250018]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 14:27:26 compute-0 nova_compute[250018]:       <nova:name>tempest-LiveMigrationTest-server-1394818615</nova:name>
Jan 20 14:27:26 compute-0 nova_compute[250018]:       <nova:creationTime>2026-01-20 14:27:24</nova:creationTime>
Jan 20 14:27:26 compute-0 nova_compute[250018]:       <nova:flavor name="m1.nano">
Jan 20 14:27:26 compute-0 nova_compute[250018]:         <nova:memory>128</nova:memory>
Jan 20 14:27:26 compute-0 nova_compute[250018]:         <nova:disk>1</nova:disk>
Jan 20 14:27:26 compute-0 nova_compute[250018]:         <nova:swap>0</nova:swap>
Jan 20 14:27:26 compute-0 nova_compute[250018]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 14:27:26 compute-0 nova_compute[250018]:         <nova:vcpus>1</nova:vcpus>
Jan 20 14:27:26 compute-0 nova_compute[250018]:       </nova:flavor>
Jan 20 14:27:26 compute-0 nova_compute[250018]:       <nova:owner>
Jan 20 14:27:26 compute-0 nova_compute[250018]:         <nova:user uuid="bce7fcbd19554e29bb80c5b93b7dd3c9">tempest-LiveMigrationTest-864280704-project-member</nova:user>
Jan 20 14:27:26 compute-0 nova_compute[250018]:         <nova:project uuid="d15f60b9e48e4175b5520d1e57ed2d3a">tempest-LiveMigrationTest-864280704</nova:project>
Jan 20 14:27:26 compute-0 nova_compute[250018]:       </nova:owner>
Jan 20 14:27:26 compute-0 nova_compute[250018]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 14:27:26 compute-0 nova_compute[250018]:       <nova:ports>
Jan 20 14:27:26 compute-0 nova_compute[250018]:         <nova:port uuid="e6067076-0f97-4e9c-9355-353277570e11">
Jan 20 14:27:26 compute-0 nova_compute[250018]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Jan 20 14:27:26 compute-0 nova_compute[250018]:         </nova:port>
Jan 20 14:27:26 compute-0 nova_compute[250018]:       </nova:ports>
Jan 20 14:27:26 compute-0 nova_compute[250018]:     </nova:instance>
Jan 20 14:27:26 compute-0 nova_compute[250018]:   </metadata>
Jan 20 14:27:26 compute-0 nova_compute[250018]:   <sysinfo type="smbios">
Jan 20 14:27:26 compute-0 nova_compute[250018]:     <system>
Jan 20 14:27:26 compute-0 nova_compute[250018]:       <entry name="manufacturer">RDO</entry>
Jan 20 14:27:26 compute-0 nova_compute[250018]:       <entry name="product">OpenStack Compute</entry>
Jan 20 14:27:26 compute-0 nova_compute[250018]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 14:27:26 compute-0 nova_compute[250018]:       <entry name="serial">d726266f-b9a6-406b-ad13-f9db3e0dc6aa</entry>
Jan 20 14:27:26 compute-0 nova_compute[250018]:       <entry name="uuid">d726266f-b9a6-406b-ad13-f9db3e0dc6aa</entry>
Jan 20 14:27:26 compute-0 nova_compute[250018]:       <entry name="family">Virtual Machine</entry>
Jan 20 14:27:26 compute-0 nova_compute[250018]:     </system>
Jan 20 14:27:26 compute-0 nova_compute[250018]:   </sysinfo>
Jan 20 14:27:26 compute-0 nova_compute[250018]:   <os>
Jan 20 14:27:26 compute-0 nova_compute[250018]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 14:27:26 compute-0 nova_compute[250018]:     <boot dev="hd"/>
Jan 20 14:27:26 compute-0 nova_compute[250018]:     <smbios mode="sysinfo"/>
Jan 20 14:27:26 compute-0 nova_compute[250018]:   </os>
Jan 20 14:27:26 compute-0 nova_compute[250018]:   <features>
Jan 20 14:27:26 compute-0 nova_compute[250018]:     <acpi/>
Jan 20 14:27:26 compute-0 nova_compute[250018]:     <apic/>
Jan 20 14:27:26 compute-0 nova_compute[250018]:     <vmcoreinfo/>
Jan 20 14:27:26 compute-0 nova_compute[250018]:   </features>
Jan 20 14:27:26 compute-0 nova_compute[250018]:   <clock offset="utc">
Jan 20 14:27:26 compute-0 nova_compute[250018]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 14:27:26 compute-0 nova_compute[250018]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 14:27:26 compute-0 nova_compute[250018]:     <timer name="hpet" present="no"/>
Jan 20 14:27:26 compute-0 nova_compute[250018]:   </clock>
Jan 20 14:27:26 compute-0 nova_compute[250018]:   <cpu mode="custom" match="exact">
Jan 20 14:27:26 compute-0 nova_compute[250018]:     <model>Nehalem</model>
Jan 20 14:27:26 compute-0 nova_compute[250018]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 14:27:26 compute-0 nova_compute[250018]:   </cpu>
Jan 20 14:27:26 compute-0 nova_compute[250018]:   <devices>
Jan 20 14:27:26 compute-0 nova_compute[250018]:     <disk type="network" device="disk">
Jan 20 14:27:26 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 14:27:26 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/d726266f-b9a6-406b-ad13-f9db3e0dc6aa_disk">
Jan 20 14:27:26 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:27:26 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:27:26 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:27:26 compute-0 nova_compute[250018]:       </source>
Jan 20 14:27:26 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 14:27:26 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:27:26 compute-0 nova_compute[250018]:       </auth>
Jan 20 14:27:26 compute-0 nova_compute[250018]:       <target dev="vda" bus="virtio"/>
Jan 20 14:27:26 compute-0 nova_compute[250018]:     </disk>
Jan 20 14:27:26 compute-0 nova_compute[250018]:     <disk type="network" device="cdrom">
Jan 20 14:27:26 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 14:27:26 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/d726266f-b9a6-406b-ad13-f9db3e0dc6aa_disk.config">
Jan 20 14:27:26 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:27:26 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:27:26 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:27:26 compute-0 nova_compute[250018]:       </source>
Jan 20 14:27:26 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 14:27:26 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:27:26 compute-0 nova_compute[250018]:       </auth>
Jan 20 14:27:26 compute-0 nova_compute[250018]:       <target dev="sda" bus="sata"/>
Jan 20 14:27:26 compute-0 nova_compute[250018]:     </disk>
Jan 20 14:27:26 compute-0 nova_compute[250018]:     <interface type="ethernet">
Jan 20 14:27:26 compute-0 nova_compute[250018]:       <mac address="fa:16:3e:db:cf:b7"/>
Jan 20 14:27:26 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 14:27:26 compute-0 nova_compute[250018]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 14:27:26 compute-0 nova_compute[250018]:       <mtu size="1442"/>
Jan 20 14:27:26 compute-0 nova_compute[250018]:       <target dev="tape6067076-0f"/>
Jan 20 14:27:26 compute-0 nova_compute[250018]:     </interface>
Jan 20 14:27:26 compute-0 nova_compute[250018]:     <serial type="pty">
Jan 20 14:27:26 compute-0 nova_compute[250018]:       <log file="/var/lib/nova/instances/d726266f-b9a6-406b-ad13-f9db3e0dc6aa/console.log" append="off"/>
Jan 20 14:27:26 compute-0 nova_compute[250018]:     </serial>
Jan 20 14:27:26 compute-0 nova_compute[250018]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 14:27:26 compute-0 nova_compute[250018]:     <video>
Jan 20 14:27:26 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 14:27:26 compute-0 nova_compute[250018]:     </video>
Jan 20 14:27:26 compute-0 nova_compute[250018]:     <input type="tablet" bus="usb"/>
Jan 20 14:27:26 compute-0 nova_compute[250018]:     <rng model="virtio">
Jan 20 14:27:26 compute-0 nova_compute[250018]:       <backend model="random">/dev/urandom</backend>
Jan 20 14:27:26 compute-0 nova_compute[250018]:     </rng>
Jan 20 14:27:26 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root"/>
Jan 20 14:27:26 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:27:26 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:27:26 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:27:26 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:27:26 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:27:26 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:27:26 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:27:26 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:27:26 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:27:26 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:27:26 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:27:26 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:27:26 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:27:26 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:27:26 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:27:26 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:27:26 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:27:26 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:27:26 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:27:26 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:27:26 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:27:26 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:27:26 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:27:26 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:27:26 compute-0 nova_compute[250018]:     <controller type="usb" index="0"/>
Jan 20 14:27:26 compute-0 nova_compute[250018]:     <memballoon model="virtio">
Jan 20 14:27:26 compute-0 nova_compute[250018]:       <stats period="10"/>
Jan 20 14:27:26 compute-0 nova_compute[250018]:     </memballoon>
Jan 20 14:27:26 compute-0 nova_compute[250018]:   </devices>
Jan 20 14:27:26 compute-0 nova_compute[250018]: </domain>
Jan 20 14:27:26 compute-0 nova_compute[250018]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 14:27:26 compute-0 nova_compute[250018]: 2026-01-20 14:27:26.122 250022 DEBUG nova.compute.manager [None req-72d9d30b-fc0c-49bc-a65d-98d7b4609436 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] Preparing to wait for external event network-vif-plugged-e6067076-0f97-4e9c-9355-353277570e11 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 20 14:27:26 compute-0 nova_compute[250018]: 2026-01-20 14:27:26.122 250022 DEBUG oslo_concurrency.lockutils [None req-72d9d30b-fc0c-49bc-a65d-98d7b4609436 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] Acquiring lock "d726266f-b9a6-406b-ad13-f9db3e0dc6aa-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:27:26 compute-0 nova_compute[250018]: 2026-01-20 14:27:26.123 250022 DEBUG oslo_concurrency.lockutils [None req-72d9d30b-fc0c-49bc-a65d-98d7b4609436 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] Lock "d726266f-b9a6-406b-ad13-f9db3e0dc6aa-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:27:26 compute-0 nova_compute[250018]: 2026-01-20 14:27:26.123 250022 DEBUG oslo_concurrency.lockutils [None req-72d9d30b-fc0c-49bc-a65d-98d7b4609436 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] Lock "d726266f-b9a6-406b-ad13-f9db3e0dc6aa-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:27:26 compute-0 nova_compute[250018]: 2026-01-20 14:27:26.123 250022 DEBUG nova.virt.libvirt.vif [None req-72d9d30b-fc0c-49bc-a65d-98d7b4609436 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:27:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-1394818615',display_name='tempest-LiveMigrationTest-server-1394818615',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-livemigrationtest-server-1394818615',id=16,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d15f60b9e48e4175b5520d1e57ed2d3a',ramdisk_id='',reservation_id='r-pti072hl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-LiveMigrationTest-864280704',owner_user_name='tempest-LiveMigrationTest-864280704-p
roject-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:27:18Z,user_data=None,user_id='bce7fcbd19554e29bb80c5b93b7dd3c9',uuid=d726266f-b9a6-406b-ad13-f9db3e0dc6aa,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e6067076-0f97-4e9c-9355-353277570e11", "address": "fa:16:3e:db:cf:b7", "network": {"id": "14f18b27-1594-48d8-a08b-a930f7adbc08", "bridge": "br-int", "label": "tempest-LiveMigrationTest-2126108622-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d15f60b9e48e4175b5520d1e57ed2d3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape6067076-0f", "ovs_interfaceid": "e6067076-0f97-4e9c-9355-353277570e11", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 14:27:26 compute-0 nova_compute[250018]: 2026-01-20 14:27:26.124 250022 DEBUG nova.network.os_vif_util [None req-72d9d30b-fc0c-49bc-a65d-98d7b4609436 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] Converting VIF {"id": "e6067076-0f97-4e9c-9355-353277570e11", "address": "fa:16:3e:db:cf:b7", "network": {"id": "14f18b27-1594-48d8-a08b-a930f7adbc08", "bridge": "br-int", "label": "tempest-LiveMigrationTest-2126108622-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d15f60b9e48e4175b5520d1e57ed2d3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape6067076-0f", "ovs_interfaceid": "e6067076-0f97-4e9c-9355-353277570e11", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:27:26 compute-0 nova_compute[250018]: 2026-01-20 14:27:26.124 250022 DEBUG nova.network.os_vif_util [None req-72d9d30b-fc0c-49bc-a65d-98d7b4609436 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:db:cf:b7,bridge_name='br-int',has_traffic_filtering=True,id=e6067076-0f97-4e9c-9355-353277570e11,network=Network(14f18b27-1594-48d8-a08b-a930f7adbc08),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tape6067076-0f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:27:26 compute-0 nova_compute[250018]: 2026-01-20 14:27:26.125 250022 DEBUG os_vif [None req-72d9d30b-fc0c-49bc-a65d-98d7b4609436 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:db:cf:b7,bridge_name='br-int',has_traffic_filtering=True,id=e6067076-0f97-4e9c-9355-353277570e11,network=Network(14f18b27-1594-48d8-a08b-a930f7adbc08),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tape6067076-0f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 14:27:26 compute-0 nova_compute[250018]: 2026-01-20 14:27:26.125 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:27:26 compute-0 nova_compute[250018]: 2026-01-20 14:27:26.126 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:27:26 compute-0 nova_compute[250018]: 2026-01-20 14:27:26.126 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:27:26 compute-0 nova_compute[250018]: 2026-01-20 14:27:26.132 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:27:26 compute-0 nova_compute[250018]: 2026-01-20 14:27:26.132 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape6067076-0f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:27:26 compute-0 nova_compute[250018]: 2026-01-20 14:27:26.132 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape6067076-0f, col_values=(('external_ids', {'iface-id': 'e6067076-0f97-4e9c-9355-353277570e11', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:db:cf:b7', 'vm-uuid': 'd726266f-b9a6-406b-ad13-f9db3e0dc6aa'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:27:26 compute-0 NetworkManager[48960]: <info>  [1768919246.1793] manager: (tape6067076-0f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/35)
Jan 20 14:27:26 compute-0 nova_compute[250018]: 2026-01-20 14:27:26.178 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:27:26 compute-0 nova_compute[250018]: 2026-01-20 14:27:26.180 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:27:26 compute-0 nova_compute[250018]: 2026-01-20 14:27:26.186 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:27:26 compute-0 nova_compute[250018]: 2026-01-20 14:27:26.187 250022 INFO os_vif [None req-72d9d30b-fc0c-49bc-a65d-98d7b4609436 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:db:cf:b7,bridge_name='br-int',has_traffic_filtering=True,id=e6067076-0f97-4e9c-9355-353277570e11,network=Network(14f18b27-1594-48d8-a08b-a930f7adbc08),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tape6067076-0f')
Jan 20 14:27:26 compute-0 nova_compute[250018]: 2026-01-20 14:27:26.329 250022 DEBUG nova.virt.libvirt.driver [None req-72d9d30b-fc0c-49bc-a65d-98d7b4609436 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:27:26 compute-0 nova_compute[250018]: 2026-01-20 14:27:26.330 250022 DEBUG nova.virt.libvirt.driver [None req-72d9d30b-fc0c-49bc-a65d-98d7b4609436 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:27:26 compute-0 nova_compute[250018]: 2026-01-20 14:27:26.330 250022 DEBUG nova.virt.libvirt.driver [None req-72d9d30b-fc0c-49bc-a65d-98d7b4609436 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] No VIF found with MAC fa:16:3e:db:cf:b7, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 14:27:26 compute-0 nova_compute[250018]: 2026-01-20 14:27:26.332 250022 INFO nova.virt.libvirt.driver [None req-72d9d30b-fc0c-49bc-a65d-98d7b4609436 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] Using config drive
Jan 20 14:27:26 compute-0 nova_compute[250018]: 2026-01-20 14:27:26.355 250022 DEBUG nova.storage.rbd_utils [None req-72d9d30b-fc0c-49bc-a65d-98d7b4609436 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] rbd image d726266f-b9a6-406b-ad13-f9db3e0dc6aa_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:27:26 compute-0 ceph-mon[74360]: pgmap v1091: 321 pgs: 321 active+clean; 145 MiB data, 349 MiB used, 21 GiB / 21 GiB avail; 42 KiB/s rd, 3.9 MiB/s wr, 64 op/s
Jan 20 14:27:26 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/1326481010' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:27:26 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/1257689580' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:27:26 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/205760377' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:27:26 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:27:26 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:27:26 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:27:26.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:27:27 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1092: 321 pgs: 321 active+clean; 180 MiB data, 367 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 5.3 MiB/s wr, 148 op/s
Jan 20 14:27:27 compute-0 nova_compute[250018]: 2026-01-20 14:27:27.296 250022 INFO nova.virt.libvirt.driver [None req-72d9d30b-fc0c-49bc-a65d-98d7b4609436 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] Creating config drive at /var/lib/nova/instances/d726266f-b9a6-406b-ad13-f9db3e0dc6aa/disk.config
Jan 20 14:27:27 compute-0 nova_compute[250018]: 2026-01-20 14:27:27.301 250022 DEBUG oslo_concurrency.processutils [None req-72d9d30b-fc0c-49bc-a65d-98d7b4609436 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d726266f-b9a6-406b-ad13-f9db3e0dc6aa/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpk0dpwh9f execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:27:27 compute-0 nova_compute[250018]: 2026-01-20 14:27:27.435 250022 DEBUG oslo_concurrency.processutils [None req-72d9d30b-fc0c-49bc-a65d-98d7b4609436 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d726266f-b9a6-406b-ad13-f9db3e0dc6aa/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpk0dpwh9f" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:27:27 compute-0 nova_compute[250018]: 2026-01-20 14:27:27.461 250022 DEBUG nova.storage.rbd_utils [None req-72d9d30b-fc0c-49bc-a65d-98d7b4609436 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] rbd image d726266f-b9a6-406b-ad13-f9db3e0dc6aa_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:27:27 compute-0 nova_compute[250018]: 2026-01-20 14:27:27.463 250022 DEBUG oslo_concurrency.processutils [None req-72d9d30b-fc0c-49bc-a65d-98d7b4609436 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d726266f-b9a6-406b-ad13-f9db3e0dc6aa/disk.config d726266f-b9a6-406b-ad13-f9db3e0dc6aa_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:27:27 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:27:27 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:27:27 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:27:27.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:27:28 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/3520391376' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:27:28 compute-0 podman[264607]: 2026-01-20 14:27:28.45269882 +0000 UTC m=+0.044738480 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 20 14:27:28 compute-0 podman[264606]: 2026-01-20 14:27:28.50982169 +0000 UTC m=+0.104780789 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, managed_by=edpm_ansible, 
tcib_managed=true, org.label-schema.build-date=20251202)
Jan 20 14:27:28 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:27:28 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:27:28 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:27:28.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:27:28 compute-0 nova_compute[250018]: 2026-01-20 14:27:28.691 250022 DEBUG oslo_concurrency.processutils [None req-72d9d30b-fc0c-49bc-a65d-98d7b4609436 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d726266f-b9a6-406b-ad13-f9db3e0dc6aa/disk.config d726266f-b9a6-406b-ad13-f9db3e0dc6aa_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.228s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:27:28 compute-0 nova_compute[250018]: 2026-01-20 14:27:28.692 250022 INFO nova.virt.libvirt.driver [None req-72d9d30b-fc0c-49bc-a65d-98d7b4609436 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] Deleting local config drive /var/lib/nova/instances/d726266f-b9a6-406b-ad13-f9db3e0dc6aa/disk.config because it was imported into RBD.
Jan 20 14:27:28 compute-0 kernel: tape6067076-0f: entered promiscuous mode
Jan 20 14:27:28 compute-0 NetworkManager[48960]: <info>  [1768919248.7478] manager: (tape6067076-0f): new Tun device (/org/freedesktop/NetworkManager/Devices/36)
Jan 20 14:27:28 compute-0 ovn_controller[148666]: 2026-01-20T14:27:28Z|00045|binding|INFO|Claiming lport e6067076-0f97-4e9c-9355-353277570e11 for this chassis.
Jan 20 14:27:28 compute-0 ovn_controller[148666]: 2026-01-20T14:27:28Z|00046|binding|INFO|e6067076-0f97-4e9c-9355-353277570e11: Claiming fa:16:3e:db:cf:b7 10.100.0.12
Jan 20 14:27:28 compute-0 ovn_controller[148666]: 2026-01-20T14:27:28Z|00047|binding|INFO|Claiming lport 9013ed66-b0f2-4a83-b7d4-572f1324f582 for this chassis.
Jan 20 14:27:28 compute-0 ovn_controller[148666]: 2026-01-20T14:27:28Z|00048|binding|INFO|9013ed66-b0f2-4a83-b7d4-572f1324f582: Claiming fa:16:3e:51:74:79 19.80.0.125
Jan 20 14:27:28 compute-0 nova_compute[250018]: 2026-01-20 14:27:28.748 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:27:28 compute-0 nova_compute[250018]: 2026-01-20 14:27:28.752 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:27:28 compute-0 nova_compute[250018]: 2026-01-20 14:27:28.754 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:27:28 compute-0 nova_compute[250018]: 2026-01-20 14:27:28.757 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:27:28 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:27:28.764 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:51:74:79 19.80.0.125'], port_security=['fa:16:3e:51:74:79 19.80.0.125'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=['e6067076-0f97-4e9c-9355-353277570e11'], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-subport-1871336558', 'neutron:cidrs': '19.80.0.125/24', 'neutron:device_id': '', 'neutron:device_owner': 'trunk:subport', 'neutron:mtu': '', 'neutron:network_name': 'neutron-08e625c5-899c-442a-8ef4-9a3c96892de4', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-subport-1871336558', 'neutron:project_id': 'd15f60b9e48e4175b5520d1e57ed2d3a', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6d729cfd-2f98-4ca5-a524-e543b12b3766', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[42], additional_encap=[], encap=[], mirror_rules=[], datapath=62d5dc3b-a6a9-4e55-8632-5a7fe1112862, chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=9013ed66-b0f2-4a83-b7d4-572f1324f582) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:27:28 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:27:28.766 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:db:cf:b7 10.100.0.12'], port_security=['fa:16:3e:db:cf:b7 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-parent-395006048', 'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'd726266f-b9a6-406b-ad13-f9db3e0dc6aa', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-14f18b27-1594-48d8-a08b-a930f7adbc08', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-parent-395006048', 'neutron:project_id': 'd15f60b9e48e4175b5520d1e57ed2d3a', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6d729cfd-2f98-4ca5-a524-e543b12b3766', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=02983c41-bbec-48cf-910a-84fed1be783f, chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=e6067076-0f97-4e9c-9355-353277570e11) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:27:28 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:27:28.767 160071 INFO neutron.agent.ovn.metadata.agent [-] Port 9013ed66-b0f2-4a83-b7d4-572f1324f582 in datapath 08e625c5-899c-442a-8ef4-9a3c96892de4 bound to our chassis
Jan 20 14:27:28 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:27:28.769 160071 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 08e625c5-899c-442a-8ef4-9a3c96892de4
Jan 20 14:27:28 compute-0 systemd-machined[216401]: New machine qemu-8-instance-00000010.
Jan 20 14:27:28 compute-0 systemd-udevd[264672]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:27:28 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:27:28.782 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[5b44e792-6582-41f4-88cb-1c7ca732a907]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:27:28 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:27:28.783 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap08e625c5-81 in ovnmeta-08e625c5-899c-442a-8ef4-9a3c96892de4 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 14:27:28 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:27:28.785 257604 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap08e625c5-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 14:27:28 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:27:28.786 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[53991fb8-378f-41d4-9e2a-383b0f3a3d60]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:27:28 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:27:28.787 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[8b521875-461d-4a37-a01f-9c89c0df7648]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:27:28 compute-0 NetworkManager[48960]: <info>  [1768919248.7930] device (tape6067076-0f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 14:27:28 compute-0 NetworkManager[48960]: <info>  [1768919248.7941] device (tape6067076-0f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 14:27:28 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:27:28.798 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[78baa008-151d-4c02-be0a-70242ca42a21]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:27:28 compute-0 systemd[1]: Started Virtual Machine qemu-8-instance-00000010.
Jan 20 14:27:28 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:27:28.825 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[6fe7976f-c096-4425-a5f9-f84d838607d1]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:27:28 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:27:28.852 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[741a9a6e-30d3-459d-bb71-bf37a4297ce4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:27:28 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:27:28.859 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[e0c9034c-eb83-416e-ab4a-61a3d8bed219]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:27:28 compute-0 NetworkManager[48960]: <info>  [1768919248.8602] manager: (tap08e625c5-80): new Veth device (/org/freedesktop/NetworkManager/Devices/37)
Jan 20 14:27:28 compute-0 ovn_controller[148666]: 2026-01-20T14:27:28Z|00049|binding|INFO|Setting lport e6067076-0f97-4e9c-9355-353277570e11 ovn-installed in OVS
Jan 20 14:27:28 compute-0 ovn_controller[148666]: 2026-01-20T14:27:28Z|00050|binding|INFO|Setting lport e6067076-0f97-4e9c-9355-353277570e11 up in Southbound
Jan 20 14:27:28 compute-0 ovn_controller[148666]: 2026-01-20T14:27:28Z|00051|binding|INFO|Setting lport 9013ed66-b0f2-4a83-b7d4-572f1324f582 up in Southbound
Jan 20 14:27:28 compute-0 nova_compute[250018]: 2026-01-20 14:27:28.866 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:27:28 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:27:28.886 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[da1939a9-8730-4872-a73d-9b8436fe5d65]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:27:28 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:27:28.889 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[4d815da2-fed8-4030-8569-793764b2f8cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:27:28 compute-0 NetworkManager[48960]: <info>  [1768919248.9066] device (tap08e625c5-80): carrier: link connected
Jan 20 14:27:28 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:27:28.911 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[2d0699b3-fa73-434b-8f6a-6ffebe4ca908]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:27:28 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:27:28.926 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[e81e54ef-93d9-46ea-9a7f-5a3abb3d52ff]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap08e625c5-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:47:55:80'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 518962, 'reachable_time': 33905, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 264705, 'error': None, 'target': 'ovnmeta-08e625c5-899c-442a-8ef4-9a3c96892de4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:27:28 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:27:28.939 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[831de797-c926-4cf8-b8c1-39c77563e79c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe47:5580'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 518962, 'tstamp': 518962}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 264706, 'error': None, 'target': 'ovnmeta-08e625c5-899c-442a-8ef4-9a3c96892de4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:27:28 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:27:28.954 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[95e94951-8134-4576-8772-740aa7782690]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap08e625c5-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:47:55:80'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 518962, 'reachable_time': 33905, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 264707, 'error': None, 'target': 'ovnmeta-08e625c5-899c-442a-8ef4-9a3c96892de4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:27:28 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:27:28.977 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[beccbc2f-9e98-4227-88f1-fa908656480c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:27:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:27:29.029 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[3dea450e-025c-4bd6-b1ee-b08391db6e76]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:27:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:27:29.031 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap08e625c5-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:27:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:27:29.031 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:27:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:27:29.031 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap08e625c5-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:27:29 compute-0 NetworkManager[48960]: <info>  [1768919249.0337] manager: (tap08e625c5-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/38)
Jan 20 14:27:29 compute-0 nova_compute[250018]: 2026-01-20 14:27:29.034 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:27:29 compute-0 nova_compute[250018]: 2026-01-20 14:27:29.040 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:27:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:27:29.041 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap08e625c5-80, col_values=(('external_ids', {'iface-id': 'e10f34be-dfc1-4bfe-806f-f00a84c17390'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:27:29 compute-0 nova_compute[250018]: 2026-01-20 14:27:29.043 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:27:29 compute-0 ovn_controller[148666]: 2026-01-20T14:27:29Z|00052|binding|INFO|Releasing lport e10f34be-dfc1-4bfe-806f-f00a84c17390 from this chassis (sb_readonly=0)
Jan 20 14:27:29 compute-0 nova_compute[250018]: 2026-01-20 14:27:29.043 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:27:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:27:29.046 160071 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/08e625c5-899c-442a-8ef4-9a3c96892de4.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/08e625c5-899c-442a-8ef4-9a3c96892de4.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 14:27:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:27:29.047 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[75ad258e-9a07-4982-8c6a-39998712a945]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:27:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:27:29.048 160071 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 14:27:29 compute-0 ovn_metadata_agent[160049]: global
Jan 20 14:27:29 compute-0 ovn_metadata_agent[160049]:     log         /dev/log local0 debug
Jan 20 14:27:29 compute-0 ovn_metadata_agent[160049]:     log-tag     haproxy-metadata-proxy-08e625c5-899c-442a-8ef4-9a3c96892de4
Jan 20 14:27:29 compute-0 ovn_metadata_agent[160049]:     user        root
Jan 20 14:27:29 compute-0 ovn_metadata_agent[160049]:     group       root
Jan 20 14:27:29 compute-0 ovn_metadata_agent[160049]:     maxconn     1024
Jan 20 14:27:29 compute-0 ovn_metadata_agent[160049]:     pidfile     /var/lib/neutron/external/pids/08e625c5-899c-442a-8ef4-9a3c96892de4.pid.haproxy
Jan 20 14:27:29 compute-0 ovn_metadata_agent[160049]:     daemon
Jan 20 14:27:29 compute-0 ovn_metadata_agent[160049]: 
Jan 20 14:27:29 compute-0 ovn_metadata_agent[160049]: defaults
Jan 20 14:27:29 compute-0 ovn_metadata_agent[160049]:     log global
Jan 20 14:27:29 compute-0 ovn_metadata_agent[160049]:     mode http
Jan 20 14:27:29 compute-0 ovn_metadata_agent[160049]:     option httplog
Jan 20 14:27:29 compute-0 ovn_metadata_agent[160049]:     option dontlognull
Jan 20 14:27:29 compute-0 ovn_metadata_agent[160049]:     option http-server-close
Jan 20 14:27:29 compute-0 ovn_metadata_agent[160049]:     option forwardfor
Jan 20 14:27:29 compute-0 ovn_metadata_agent[160049]:     retries                 3
Jan 20 14:27:29 compute-0 ovn_metadata_agent[160049]:     timeout http-request    30s
Jan 20 14:27:29 compute-0 ovn_metadata_agent[160049]:     timeout connect         30s
Jan 20 14:27:29 compute-0 ovn_metadata_agent[160049]:     timeout client          32s
Jan 20 14:27:29 compute-0 ovn_metadata_agent[160049]:     timeout server          32s
Jan 20 14:27:29 compute-0 ovn_metadata_agent[160049]:     timeout http-keep-alive 30s
Jan 20 14:27:29 compute-0 ovn_metadata_agent[160049]: 
Jan 20 14:27:29 compute-0 ovn_metadata_agent[160049]: 
Jan 20 14:27:29 compute-0 ovn_metadata_agent[160049]: listen listener
Jan 20 14:27:29 compute-0 ovn_metadata_agent[160049]:     bind 169.254.169.254:80
Jan 20 14:27:29 compute-0 ovn_metadata_agent[160049]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 14:27:29 compute-0 ovn_metadata_agent[160049]:     http-request add-header X-OVN-Network-ID 08e625c5-899c-442a-8ef4-9a3c96892de4
Jan 20 14:27:29 compute-0 ovn_metadata_agent[160049]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 14:27:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:27:29.050 160071 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-08e625c5-899c-442a-8ef4-9a3c96892de4', 'env', 'PROCESS_TAG=haproxy-08e625c5-899c-442a-8ef4-9a3c96892de4', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/08e625c5-899c-442a-8ef4-9a3c96892de4.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 14:27:29 compute-0 nova_compute[250018]: 2026-01-20 14:27:29.058 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:27:29 compute-0 kernel: tap08e625c5-80: entered promiscuous mode
Jan 20 14:27:29 compute-0 nova_compute[250018]: 2026-01-20 14:27:29.208 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768919249.208373, d726266f-b9a6-406b-ad13-f9db3e0dc6aa => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:27:29 compute-0 nova_compute[250018]: 2026-01-20 14:27:29.209 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] VM Started (Lifecycle Event)
Jan 20 14:27:29 compute-0 nova_compute[250018]: 2026-01-20 14:27:29.263 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:27:29 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1093: 321 pgs: 321 active+clean; 181 MiB data, 367 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 5.3 MiB/s wr, 155 op/s
Jan 20 14:27:29 compute-0 nova_compute[250018]: 2026-01-20 14:27:29.268 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768919249.2085686, d726266f-b9a6-406b-ad13-f9db3e0dc6aa => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:27:29 compute-0 nova_compute[250018]: 2026-01-20 14:27:29.268 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] VM Paused (Lifecycle Event)
Jan 20 14:27:29 compute-0 nova_compute[250018]: 2026-01-20 14:27:29.307 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:27:29 compute-0 nova_compute[250018]: 2026-01-20 14:27:29.310 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:27:29 compute-0 nova_compute[250018]: 2026-01-20 14:27:29.335 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:27:29 compute-0 podman[264781]: 2026-01-20 14:27:29.427829314 +0000 UTC m=+0.022152165 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 14:27:29 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:27:29 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:27:29 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:27:29.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:27:30 compute-0 nova_compute[250018]: 2026-01-20 14:27:30.273 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:27:30 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:27:30 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:27:30 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:27:30.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:27:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:27:30.742 160071 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:27:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:27:30.742 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:27:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:27:30.742 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:27:30 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:27:31 compute-0 nova_compute[250018]: 2026-01-20 14:27:31.243 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:27:31 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1094: 321 pgs: 321 active+clean; 192 MiB data, 367 MiB used, 21 GiB / 21 GiB avail; 2.4 MiB/s rd, 5.4 MiB/s wr, 184 op/s
Jan 20 14:27:31 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:27:31 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:27:31 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:27:31.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:27:31 compute-0 nova_compute[250018]: 2026-01-20 14:27:31.875 250022 DEBUG nova.compute.manager [req-9e81da8c-caab-43e2-8ddc-7a76607fa1bf req-3150292f-d4eb-4a14-91a1-d825bcb18fdb 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] Received event network-vif-plugged-e6067076-0f97-4e9c-9355-353277570e11 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:27:31 compute-0 nova_compute[250018]: 2026-01-20 14:27:31.876 250022 DEBUG oslo_concurrency.lockutils [req-9e81da8c-caab-43e2-8ddc-7a76607fa1bf req-3150292f-d4eb-4a14-91a1-d825bcb18fdb 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "d726266f-b9a6-406b-ad13-f9db3e0dc6aa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:27:31 compute-0 nova_compute[250018]: 2026-01-20 14:27:31.877 250022 DEBUG oslo_concurrency.lockutils [req-9e81da8c-caab-43e2-8ddc-7a76607fa1bf req-3150292f-d4eb-4a14-91a1-d825bcb18fdb 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d726266f-b9a6-406b-ad13-f9db3e0dc6aa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:27:31 compute-0 nova_compute[250018]: 2026-01-20 14:27:31.877 250022 DEBUG oslo_concurrency.lockutils [req-9e81da8c-caab-43e2-8ddc-7a76607fa1bf req-3150292f-d4eb-4a14-91a1-d825bcb18fdb 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d726266f-b9a6-406b-ad13-f9db3e0dc6aa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:27:31 compute-0 nova_compute[250018]: 2026-01-20 14:27:31.877 250022 DEBUG nova.compute.manager [req-9e81da8c-caab-43e2-8ddc-7a76607fa1bf req-3150292f-d4eb-4a14-91a1-d825bcb18fdb 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] Processing event network-vif-plugged-e6067076-0f97-4e9c-9355-353277570e11 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 20 14:27:31 compute-0 nova_compute[250018]: 2026-01-20 14:27:31.878 250022 DEBUG nova.compute.manager [None req-72d9d30b-fc0c-49bc-a65d-98d7b4609436 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 14:27:31 compute-0 nova_compute[250018]: 2026-01-20 14:27:31.884 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768919251.884539, d726266f-b9a6-406b-ad13-f9db3e0dc6aa => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:27:31 compute-0 nova_compute[250018]: 2026-01-20 14:27:31.885 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] VM Resumed (Lifecycle Event)
Jan 20 14:27:31 compute-0 nova_compute[250018]: 2026-01-20 14:27:31.889 250022 DEBUG nova.virt.libvirt.driver [None req-72d9d30b-fc0c-49bc-a65d-98d7b4609436 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 14:27:31 compute-0 nova_compute[250018]: 2026-01-20 14:27:31.895 250022 INFO nova.virt.libvirt.driver [-] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] Instance spawned successfully.
Jan 20 14:27:31 compute-0 nova_compute[250018]: 2026-01-20 14:27:31.895 250022 DEBUG nova.virt.libvirt.driver [None req-72d9d30b-fc0c-49bc-a65d-98d7b4609436 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 14:27:31 compute-0 ceph-mgr[74653]: [devicehealth INFO root] Check health
Jan 20 14:27:31 compute-0 nova_compute[250018]: 2026-01-20 14:27:31.910 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:27:31 compute-0 nova_compute[250018]: 2026-01-20 14:27:31.915 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:27:31 compute-0 ceph-mon[74360]: pgmap v1092: 321 pgs: 321 active+clean; 180 MiB data, 367 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 5.3 MiB/s wr, 148 op/s
Jan 20 14:27:31 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/819079954' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:27:32 compute-0 nova_compute[250018]: 2026-01-20 14:27:32.446 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:27:32 compute-0 nova_compute[250018]: 2026-01-20 14:27:32.448 250022 DEBUG nova.virt.libvirt.driver [None req-72d9d30b-fc0c-49bc-a65d-98d7b4609436 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:27:32 compute-0 nova_compute[250018]: 2026-01-20 14:27:32.448 250022 DEBUG nova.virt.libvirt.driver [None req-72d9d30b-fc0c-49bc-a65d-98d7b4609436 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:27:32 compute-0 nova_compute[250018]: 2026-01-20 14:27:32.448 250022 DEBUG nova.virt.libvirt.driver [None req-72d9d30b-fc0c-49bc-a65d-98d7b4609436 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:27:32 compute-0 nova_compute[250018]: 2026-01-20 14:27:32.449 250022 DEBUG nova.virt.libvirt.driver [None req-72d9d30b-fc0c-49bc-a65d-98d7b4609436 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:27:32 compute-0 nova_compute[250018]: 2026-01-20 14:27:32.449 250022 DEBUG nova.virt.libvirt.driver [None req-72d9d30b-fc0c-49bc-a65d-98d7b4609436 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:27:32 compute-0 nova_compute[250018]: 2026-01-20 14:27:32.450 250022 DEBUG nova.virt.libvirt.driver [None req-72d9d30b-fc0c-49bc-a65d-98d7b4609436 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:27:32 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:27:32 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:27:32 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:27:32.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:27:32 compute-0 podman[264781]: 2026-01-20 14:27:32.711401219 +0000 UTC m=+3.305724040 container create 7f04b0dfdfeca784e6bd2b41a48c3d602a36bebc15cffa107d841bef5a893f7c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-08e625c5-899c-442a-8ef4-9a3c96892de4, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 20 14:27:32 compute-0 systemd[1]: Started libpod-conmon-7f04b0dfdfeca784e6bd2b41a48c3d602a36bebc15cffa107d841bef5a893f7c.scope.
Jan 20 14:27:32 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:27:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58ad095cf806bb6de681428480d27a037b482814d8f2dab166b8f36af2dcedaf/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 14:27:32 compute-0 podman[264781]: 2026-01-20 14:27:32.909878318 +0000 UTC m=+3.504201159 container init 7f04b0dfdfeca784e6bd2b41a48c3d602a36bebc15cffa107d841bef5a893f7c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-08e625c5-899c-442a-8ef4-9a3c96892de4, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 14:27:32 compute-0 podman[264781]: 2026-01-20 14:27:32.91664924 +0000 UTC m=+3.510972061 container start 7f04b0dfdfeca784e6bd2b41a48c3d602a36bebc15cffa107d841bef5a893f7c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-08e625c5-899c-442a-8ef4-9a3c96892de4, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 14:27:32 compute-0 neutron-haproxy-ovnmeta-08e625c5-899c-442a-8ef4-9a3c96892de4[264799]: [NOTICE]   (264803) : New worker (264805) forked
Jan 20 14:27:32 compute-0 neutron-haproxy-ovnmeta-08e625c5-899c-442a-8ef4-9a3c96892de4[264799]: [NOTICE]   (264803) : Loading success.
Jan 20 14:27:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:27:32.972 160071 INFO neutron.agent.ovn.metadata.agent [-] Port e6067076-0f97-4e9c-9355-353277570e11 in datapath 14f18b27-1594-48d8-a08b-a930f7adbc08 unbound from our chassis
Jan 20 14:27:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:27:32.974 160071 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 14f18b27-1594-48d8-a08b-a930f7adbc08
Jan 20 14:27:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:27:32.983 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[46c227e0-0e18-4d2e-b780-f361a40695bd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:27:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:27:32.984 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap14f18b27-11 in ovnmeta-14f18b27-1594-48d8-a08b-a930f7adbc08 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 14:27:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:27:32.986 257604 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap14f18b27-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 14:27:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:27:32.986 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[825cb526-ef31-4221-a638-efe2627ecf8f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:27:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:27:32.987 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[fa714914-3ba9-41c4-8983-c60ff16a7494]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:27:33 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:27:33.003 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[df878353-7865-41b8-8b6a-78a2ade15730]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:27:33 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:27:33.027 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[226468e1-3532-4431-8c69-45fbfa0a6532]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:27:33 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:27:33.062 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[31c49acb-a5ce-46c0-8bf1-bca1069e5386]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:27:33 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:27:33.069 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[8f6873a6-7ec9-44d2-a02b-33c534a21426]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:27:33 compute-0 NetworkManager[48960]: <info>  [1768919253.0715] manager: (tap14f18b27-10): new Veth device (/org/freedesktop/NetworkManager/Devices/39)
Jan 20 14:27:33 compute-0 systemd-udevd[264821]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:27:33 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:27:33.103 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[6a96cd98-e9ad-48de-b518-4a236e20a824]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:27:33 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:27:33.113 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[73d267c1-3e6c-4fe8-94dc-cd06937c9fc4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:27:33 compute-0 NetworkManager[48960]: <info>  [1768919253.1445] device (tap14f18b27-10): carrier: link connected
Jan 20 14:27:33 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:27:33.160 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[51cea2b1-2ba0-46b1-9015-a745ff4b8857]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:27:33 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:27:33.188 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[a47b773e-6ab4-4ae0-9dd5-0e10f2eb645a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap14f18b27-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7d:1f:17'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 20], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 519386, 'reachable_time': 19797, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 264840, 'error': None, 'target': 'ovnmeta-14f18b27-1594-48d8-a08b-a930f7adbc08', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:27:33 compute-0 nova_compute[250018]: 2026-01-20 14:27:33.199 250022 INFO nova.compute.manager [None req-72d9d30b-fc0c-49bc-a65d-98d7b4609436 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] Took 14.25 seconds to spawn the instance on the hypervisor.
Jan 20 14:27:33 compute-0 nova_compute[250018]: 2026-01-20 14:27:33.200 250022 DEBUG nova.compute.manager [None req-72d9d30b-fc0c-49bc-a65d-98d7b4609436 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:27:33 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:27:33.225 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[5e4b9311-0d72-47e7-8e32-1e1983814661]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe7d:1f17'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 519386, 'tstamp': 519386}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 264841, 'error': None, 'target': 'ovnmeta-14f18b27-1594-48d8-a08b-a930f7adbc08', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:27:33 compute-0 ceph-mon[74360]: pgmap v1093: 321 pgs: 321 active+clean; 181 MiB data, 367 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 5.3 MiB/s wr, 155 op/s
Jan 20 14:27:33 compute-0 ceph-mon[74360]: pgmap v1094: 321 pgs: 321 active+clean; 192 MiB data, 367 MiB used, 21 GiB / 21 GiB avail; 2.4 MiB/s rd, 5.4 MiB/s wr, 184 op/s
Jan 20 14:27:33 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:27:33.248 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[58e39bd3-b0f7-48a1-8a5d-67ff3d5bbc2c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap14f18b27-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7d:1f:17'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 20], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 519386, 'reachable_time': 19797, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 264842, 'error': None, 'target': 'ovnmeta-14f18b27-1594-48d8-a08b-a930f7adbc08', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:27:33 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1095: 321 pgs: 321 active+clean; 219 MiB data, 384 MiB used, 21 GiB / 21 GiB avail; 3.3 MiB/s rd, 5.5 MiB/s wr, 206 op/s
Jan 20 14:27:33 compute-0 nova_compute[250018]: 2026-01-20 14:27:33.276 250022 INFO nova.compute.manager [None req-72d9d30b-fc0c-49bc-a65d-98d7b4609436 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] Took 16.52 seconds to build instance.
Jan 20 14:27:33 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:27:33.280 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[405766f7-1c93-43d7-8b41-fa6baf258c66]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:27:33 compute-0 nova_compute[250018]: 2026-01-20 14:27:33.296 250022 DEBUG oslo_concurrency.lockutils [None req-72d9d30b-fc0c-49bc-a65d-98d7b4609436 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] Lock "d726266f-b9a6-406b-ad13-f9db3e0dc6aa" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.617s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:27:33 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:27:33.340 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[fcf0d913-962e-4795-8fdb-830ef9442b22]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:27:33 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:27:33.341 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap14f18b27-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:27:33 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:27:33.341 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:27:33 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:27:33.341 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap14f18b27-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:27:33 compute-0 nova_compute[250018]: 2026-01-20 14:27:33.343 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:27:33 compute-0 NetworkManager[48960]: <info>  [1768919253.3442] manager: (tap14f18b27-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/40)
Jan 20 14:27:33 compute-0 kernel: tap14f18b27-10: entered promiscuous mode
Jan 20 14:27:33 compute-0 nova_compute[250018]: 2026-01-20 14:27:33.346 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:27:33 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:27:33.347 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap14f18b27-10, col_values=(('external_ids', {'iface-id': 'aa1c73c5-9761-4457-acdc-9f93220f739f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:27:33 compute-0 nova_compute[250018]: 2026-01-20 14:27:33.348 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:27:33 compute-0 ovn_controller[148666]: 2026-01-20T14:27:33Z|00053|binding|INFO|Releasing lport aa1c73c5-9761-4457-acdc-9f93220f739f from this chassis (sb_readonly=0)
Jan 20 14:27:33 compute-0 nova_compute[250018]: 2026-01-20 14:27:33.364 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:27:33 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:27:33.365 160071 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/14f18b27-1594-48d8-a08b-a930f7adbc08.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/14f18b27-1594-48d8-a08b-a930f7adbc08.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 14:27:33 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:27:33.366 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[02c6d321-df40-4757-883c-ec7bdbf8b9b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:27:33 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:27:33.367 160071 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 14:27:33 compute-0 ovn_metadata_agent[160049]: global
Jan 20 14:27:33 compute-0 ovn_metadata_agent[160049]:     log         /dev/log local0 debug
Jan 20 14:27:33 compute-0 ovn_metadata_agent[160049]:     log-tag     haproxy-metadata-proxy-14f18b27-1594-48d8-a08b-a930f7adbc08
Jan 20 14:27:33 compute-0 ovn_metadata_agent[160049]:     user        root
Jan 20 14:27:33 compute-0 ovn_metadata_agent[160049]:     group       root
Jan 20 14:27:33 compute-0 ovn_metadata_agent[160049]:     maxconn     1024
Jan 20 14:27:33 compute-0 ovn_metadata_agent[160049]:     pidfile     /var/lib/neutron/external/pids/14f18b27-1594-48d8-a08b-a930f7adbc08.pid.haproxy
Jan 20 14:27:33 compute-0 ovn_metadata_agent[160049]:     daemon
Jan 20 14:27:33 compute-0 ovn_metadata_agent[160049]: 
Jan 20 14:27:33 compute-0 ovn_metadata_agent[160049]: defaults
Jan 20 14:27:33 compute-0 ovn_metadata_agent[160049]:     log global
Jan 20 14:27:33 compute-0 ovn_metadata_agent[160049]:     mode http
Jan 20 14:27:33 compute-0 ovn_metadata_agent[160049]:     option httplog
Jan 20 14:27:33 compute-0 ovn_metadata_agent[160049]:     option dontlognull
Jan 20 14:27:33 compute-0 ovn_metadata_agent[160049]:     option http-server-close
Jan 20 14:27:33 compute-0 ovn_metadata_agent[160049]:     option forwardfor
Jan 20 14:27:33 compute-0 ovn_metadata_agent[160049]:     retries                 3
Jan 20 14:27:33 compute-0 ovn_metadata_agent[160049]:     timeout http-request    30s
Jan 20 14:27:33 compute-0 ovn_metadata_agent[160049]:     timeout connect         30s
Jan 20 14:27:33 compute-0 ovn_metadata_agent[160049]:     timeout client          32s
Jan 20 14:27:33 compute-0 ovn_metadata_agent[160049]:     timeout server          32s
Jan 20 14:27:33 compute-0 ovn_metadata_agent[160049]:     timeout http-keep-alive 30s
Jan 20 14:27:33 compute-0 ovn_metadata_agent[160049]: 
Jan 20 14:27:33 compute-0 ovn_metadata_agent[160049]: 
Jan 20 14:27:33 compute-0 ovn_metadata_agent[160049]: listen listener
Jan 20 14:27:33 compute-0 ovn_metadata_agent[160049]:     bind 169.254.169.254:80
Jan 20 14:27:33 compute-0 ovn_metadata_agent[160049]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 14:27:33 compute-0 ovn_metadata_agent[160049]:     http-request add-header X-OVN-Network-ID 14f18b27-1594-48d8-a08b-a930f7adbc08
Jan 20 14:27:33 compute-0 ovn_metadata_agent[160049]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 14:27:33 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:27:33.369 160071 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-14f18b27-1594-48d8-a08b-a930f7adbc08', 'env', 'PROCESS_TAG=haproxy-14f18b27-1594-48d8-a08b-a930f7adbc08', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/14f18b27-1594-48d8-a08b-a930f7adbc08.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 14:27:33 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:27:33 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:27:33 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:27:33.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:27:33 compute-0 podman[264875]: 2026-01-20 14:27:33.756082697 +0000 UTC m=+0.024143317 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 14:27:33 compute-0 nova_compute[250018]: 2026-01-20 14:27:33.968 250022 DEBUG nova.compute.manager [req-48a2bf62-480f-4b59-a19d-90a20e13e4a5 req-9cc0eeee-a3e3-46c2-b48b-e3c2b0e9af4b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] Received event network-vif-plugged-e6067076-0f97-4e9c-9355-353277570e11 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:27:33 compute-0 nova_compute[250018]: 2026-01-20 14:27:33.969 250022 DEBUG oslo_concurrency.lockutils [req-48a2bf62-480f-4b59-a19d-90a20e13e4a5 req-9cc0eeee-a3e3-46c2-b48b-e3c2b0e9af4b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "d726266f-b9a6-406b-ad13-f9db3e0dc6aa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:27:33 compute-0 nova_compute[250018]: 2026-01-20 14:27:33.970 250022 DEBUG oslo_concurrency.lockutils [req-48a2bf62-480f-4b59-a19d-90a20e13e4a5 req-9cc0eeee-a3e3-46c2-b48b-e3c2b0e9af4b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d726266f-b9a6-406b-ad13-f9db3e0dc6aa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:27:33 compute-0 nova_compute[250018]: 2026-01-20 14:27:33.970 250022 DEBUG oslo_concurrency.lockutils [req-48a2bf62-480f-4b59-a19d-90a20e13e4a5 req-9cc0eeee-a3e3-46c2-b48b-e3c2b0e9af4b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d726266f-b9a6-406b-ad13-f9db3e0dc6aa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:27:33 compute-0 nova_compute[250018]: 2026-01-20 14:27:33.971 250022 DEBUG nova.compute.manager [req-48a2bf62-480f-4b59-a19d-90a20e13e4a5 req-9cc0eeee-a3e3-46c2-b48b-e3c2b0e9af4b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] No waiting events found dispatching network-vif-plugged-e6067076-0f97-4e9c-9355-353277570e11 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:27:33 compute-0 nova_compute[250018]: 2026-01-20 14:27:33.972 250022 WARNING nova.compute.manager [req-48a2bf62-480f-4b59-a19d-90a20e13e4a5 req-9cc0eeee-a3e3-46c2-b48b-e3c2b0e9af4b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] Received unexpected event network-vif-plugged-e6067076-0f97-4e9c-9355-353277570e11 for instance with vm_state active and task_state None.
Jan 20 14:27:34 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:27:34 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:27:34 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:27:34.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:27:35 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1096: 321 pgs: 321 active+clean; 227 MiB data, 387 MiB used, 21 GiB / 21 GiB avail; 3.8 MiB/s rd, 3.7 MiB/s wr, 209 op/s
Jan 20 14:27:35 compute-0 nova_compute[250018]: 2026-01-20 14:27:35.300 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:27:35 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:27:35 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:27:35 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:27:35.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:27:35 compute-0 ceph-mon[74360]: pgmap v1095: 321 pgs: 321 active+clean; 219 MiB data, 384 MiB used, 21 GiB / 21 GiB avail; 3.3 MiB/s rd, 5.5 MiB/s wr, 206 op/s
Jan 20 14:27:35 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/3043623211' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:27:35 compute-0 podman[264875]: 2026-01-20 14:27:35.690990466 +0000 UTC m=+1.959051066 container create ba1dd9d2f8a5396c0474a798d7bbb54a63a6d4831a9a3897ba791dcbc7b6188a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-14f18b27-1594-48d8-a08b-a930f7adbc08, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202)
Jan 20 14:27:35 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:27:35 compute-0 systemd[1]: Started libpod-conmon-ba1dd9d2f8a5396c0474a798d7bbb54a63a6d4831a9a3897ba791dcbc7b6188a.scope.
Jan 20 14:27:35 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:27:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2622b1f76b8a377470b7be91c6a4995de9a48225be0e6ba0918b580cde83c798/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 14:27:36 compute-0 podman[264875]: 2026-01-20 14:27:36.030491783 +0000 UTC m=+2.298552383 container init ba1dd9d2f8a5396c0474a798d7bbb54a63a6d4831a9a3897ba791dcbc7b6188a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-14f18b27-1594-48d8-a08b-a930f7adbc08, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2)
Jan 20 14:27:36 compute-0 podman[264875]: 2026-01-20 14:27:36.036316009 +0000 UTC m=+2.304376609 container start ba1dd9d2f8a5396c0474a798d7bbb54a63a6d4831a9a3897ba791dcbc7b6188a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-14f18b27-1594-48d8-a08b-a930f7adbc08, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202)
Jan 20 14:27:36 compute-0 neutron-haproxy-ovnmeta-14f18b27-1594-48d8-a08b-a930f7adbc08[264891]: [NOTICE]   (264895) : New worker (264897) forked
Jan 20 14:27:36 compute-0 neutron-haproxy-ovnmeta-14f18b27-1594-48d8-a08b-a930f7adbc08[264891]: [NOTICE]   (264895) : Loading success.
Jan 20 14:27:36 compute-0 nova_compute[250018]: 2026-01-20 14:27:36.246 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:27:36 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:27:36 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:27:36 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:27:36.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:27:36 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e151 do_prune osdmap full prune enabled
Jan 20 14:27:37 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1097: 321 pgs: 321 active+clean; 254 MiB data, 398 MiB used, 21 GiB / 21 GiB avail; 7.5 MiB/s rd, 5.4 MiB/s wr, 303 op/s
Jan 20 14:27:37 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e152 e152: 3 total, 3 up, 3 in
Jan 20 14:27:37 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/1353817912' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:27:37 compute-0 ceph-mon[74360]: pgmap v1096: 321 pgs: 321 active+clean; 227 MiB data, 387 MiB used, 21 GiB / 21 GiB avail; 3.8 MiB/s rd, 3.7 MiB/s wr, 209 op/s
Jan 20 14:27:37 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e152: 3 total, 3 up, 3 in
Jan 20 14:27:37 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:27:37 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:27:37 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:27:37.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:27:38 compute-0 sudo[264907]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:27:38 compute-0 sudo[264907]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:27:38 compute-0 sudo[264907]: pam_unix(sudo:session): session closed for user root
Jan 20 14:27:38 compute-0 sudo[264932]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:27:38 compute-0 sudo[264932]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:27:38 compute-0 nova_compute[250018]: 2026-01-20 14:27:38.396 250022 DEBUG nova.virt.libvirt.driver [None req-466ac999-dbbf-45a2-9a60-0b686a463dfc f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] Check if temp file /var/lib/nova/instances/tmpuyxcxf2_ exists to indicate shared storage is being used for migration. Exists? False _check_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10065
Jan 20 14:27:38 compute-0 sudo[264932]: pam_unix(sudo:session): session closed for user root
Jan 20 14:27:38 compute-0 nova_compute[250018]: 2026-01-20 14:27:38.398 250022 DEBUG nova.compute.manager [None req-466ac999-dbbf-45a2-9a60-0b686a463dfc f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] source check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpuyxcxf2_',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='d726266f-b9a6-406b-ad13-f9db3e0dc6aa',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_source /usr/lib/python3.9/site-packages/nova/compute/manager.py:8587
Jan 20 14:27:38 compute-0 ceph-mon[74360]: pgmap v1097: 321 pgs: 321 active+clean; 254 MiB data, 398 MiB used, 21 GiB / 21 GiB avail; 7.5 MiB/s rd, 5.4 MiB/s wr, 303 op/s
Jan 20 14:27:38 compute-0 ceph-mon[74360]: osdmap e152: 3 total, 3 up, 3 in
Jan 20 14:27:38 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:27:38 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:27:38 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:27:38.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:27:38 compute-0 nova_compute[250018]: 2026-01-20 14:27:38.572 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:27:38 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:27:38.573 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:27:38 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:27:38.574 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 14:27:39 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1099: 321 pgs: 321 active+clean; 254 MiB data, 403 MiB used, 21 GiB / 21 GiB avail; 7.1 MiB/s rd, 5.3 MiB/s wr, 289 op/s
Jan 20 14:27:39 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/817646247' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:27:39 compute-0 nova_compute[250018]: 2026-01-20 14:27:39.439 250022 DEBUG oslo_concurrency.lockutils [None req-466ac999-dbbf-45a2-9a60-0b686a463dfc f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Acquiring lock "compute-rpcapi-router" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:27:39 compute-0 nova_compute[250018]: 2026-01-20 14:27:39.440 250022 DEBUG oslo_concurrency.lockutils [None req-466ac999-dbbf-45a2-9a60-0b686a463dfc f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Acquired lock "compute-rpcapi-router" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:27:39 compute-0 nova_compute[250018]: 2026-01-20 14:27:39.519 250022 INFO nova.compute.rpcapi [None req-466ac999-dbbf-45a2-9a60-0b686a463dfc f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Automatically selected compute RPC version 6.2 from minimum service version 66
Jan 20 14:27:39 compute-0 nova_compute[250018]: 2026-01-20 14:27:39.521 250022 DEBUG oslo_concurrency.lockutils [None req-466ac999-dbbf-45a2-9a60-0b686a463dfc f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Releasing lock "compute-rpcapi-router" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:27:39 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:27:39 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:27:39 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:27:39.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:27:40 compute-0 nova_compute[250018]: 2026-01-20 14:27:40.303 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:27:40 compute-0 ceph-mon[74360]: pgmap v1099: 321 pgs: 321 active+clean; 254 MiB data, 403 MiB used, 21 GiB / 21 GiB avail; 7.1 MiB/s rd, 5.3 MiB/s wr, 289 op/s
Jan 20 14:27:40 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:27:40 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:27:40 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:27:40.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:27:40 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:27:41 compute-0 nova_compute[250018]: 2026-01-20 14:27:41.249 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:27:41 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1100: 321 pgs: 321 active+clean; 243 MiB data, 445 MiB used, 21 GiB / 21 GiB avail; 7.1 MiB/s rd, 6.6 MiB/s wr, 309 op/s
Jan 20 14:27:41 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:27:41.577 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=367c1a2c-b16a-4828-ab5a-626bb50023b4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:27:41 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:27:41 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:27:41 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:27:41.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:27:41 compute-0 nova_compute[250018]: 2026-01-20 14:27:41.784 250022 DEBUG nova.compute.manager [req-7ea7ebdf-280a-4961-9b4b-904e7e010e90 req-6c51aec9-7aee-4a94-ade6-833bedaf68e1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] Received event network-vif-unplugged-e6067076-0f97-4e9c-9355-353277570e11 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:27:41 compute-0 nova_compute[250018]: 2026-01-20 14:27:41.785 250022 DEBUG oslo_concurrency.lockutils [req-7ea7ebdf-280a-4961-9b4b-904e7e010e90 req-6c51aec9-7aee-4a94-ade6-833bedaf68e1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "d726266f-b9a6-406b-ad13-f9db3e0dc6aa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:27:41 compute-0 nova_compute[250018]: 2026-01-20 14:27:41.786 250022 DEBUG oslo_concurrency.lockutils [req-7ea7ebdf-280a-4961-9b4b-904e7e010e90 req-6c51aec9-7aee-4a94-ade6-833bedaf68e1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d726266f-b9a6-406b-ad13-f9db3e0dc6aa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:27:41 compute-0 nova_compute[250018]: 2026-01-20 14:27:41.787 250022 DEBUG oslo_concurrency.lockutils [req-7ea7ebdf-280a-4961-9b4b-904e7e010e90 req-6c51aec9-7aee-4a94-ade6-833bedaf68e1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d726266f-b9a6-406b-ad13-f9db3e0dc6aa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:27:41 compute-0 nova_compute[250018]: 2026-01-20 14:27:41.788 250022 DEBUG nova.compute.manager [req-7ea7ebdf-280a-4961-9b4b-904e7e010e90 req-6c51aec9-7aee-4a94-ade6-833bedaf68e1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] No waiting events found dispatching network-vif-unplugged-e6067076-0f97-4e9c-9355-353277570e11 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:27:41 compute-0 nova_compute[250018]: 2026-01-20 14:27:41.788 250022 DEBUG nova.compute.manager [req-7ea7ebdf-280a-4961-9b4b-904e7e010e90 req-6c51aec9-7aee-4a94-ade6-833bedaf68e1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] Received event network-vif-unplugged-e6067076-0f97-4e9c-9355-353277570e11 for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 20 14:27:42 compute-0 ceph-mon[74360]: pgmap v1100: 321 pgs: 321 active+clean; 243 MiB data, 445 MiB used, 21 GiB / 21 GiB avail; 7.1 MiB/s rd, 6.6 MiB/s wr, 309 op/s
Jan 20 14:27:42 compute-0 nova_compute[250018]: 2026-01-20 14:27:42.514 250022 INFO nova.compute.manager [None req-466ac999-dbbf-45a2-9a60-0b686a463dfc f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] Took 3.07 seconds for pre_live_migration on destination host compute-1.ctlplane.example.com.
Jan 20 14:27:42 compute-0 nova_compute[250018]: 2026-01-20 14:27:42.515 250022 DEBUG nova.compute.manager [None req-466ac999-dbbf-45a2-9a60-0b686a463dfc f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 14:27:42 compute-0 nova_compute[250018]: 2026-01-20 14:27:42.541 250022 DEBUG nova.compute.manager [None req-466ac999-dbbf-45a2-9a60-0b686a463dfc f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] live_migration data is LibvirtLiveMigrateData(bdms=[],block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpuyxcxf2_',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='d726266f-b9a6-406b-ad13-f9db3e0dc6aa',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=Migration(1e151de2-77f6-4ade-bff0-98b8ffeabfe4),old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) _do_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8939
Jan 20 14:27:42 compute-0 nova_compute[250018]: 2026-01-20 14:27:42.545 250022 DEBUG nova.objects.instance [None req-466ac999-dbbf-45a2-9a60-0b686a463dfc f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Lazy-loading 'migration_context' on Instance uuid d726266f-b9a6-406b-ad13-f9db3e0dc6aa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:27:42 compute-0 nova_compute[250018]: 2026-01-20 14:27:42.546 250022 DEBUG nova.virt.libvirt.driver [None req-466ac999-dbbf-45a2-9a60-0b686a463dfc f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] Starting monitoring of live migration _live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10639
Jan 20 14:27:42 compute-0 nova_compute[250018]: 2026-01-20 14:27:42.548 250022 DEBUG nova.virt.libvirt.driver [None req-466ac999-dbbf-45a2-9a60-0b686a463dfc f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] Operation thread is still running _live_migration_monitor /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10440
Jan 20 14:27:42 compute-0 nova_compute[250018]: 2026-01-20 14:27:42.548 250022 DEBUG nova.virt.libvirt.driver [None req-466ac999-dbbf-45a2-9a60-0b686a463dfc f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] Migration not running yet _live_migration_monitor /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10449
Jan 20 14:27:42 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:27:42 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:27:42 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:27:42.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:27:42 compute-0 nova_compute[250018]: 2026-01-20 14:27:42.576 250022 DEBUG nova.virt.libvirt.vif [None req-466ac999-dbbf-45a2-9a60-0b686a463dfc f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:27:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-1394818615',display_name='tempest-LiveMigrationTest-server-1394818615',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-livemigrationtest-server-1394818615',id=16,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:27:33Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='d15f60b9e48e4175b5520d1e57ed2d3a',ramdisk_id='',reservation_id='r-pti072hl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-LiveMigrationTest-864280704',owner_user_name='tempest-LiveMigrationTest-864280704-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T14:27:33Z,user_data=None,user_id='bce7fcbd19554e29bb80c5b93b7dd3c9',uuid=d726266f-b9a6-406b-ad13-f9db3e0dc6aa,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e6067076-0f97-4e9c-9355-353277570e11", "address": "fa:16:3e:db:cf:b7", "network": {"id": "14f18b27-1594-48d8-a08b-a930f7adbc08", "bridge": "br-int", "label": "tempest-LiveMigrationTest-2126108622-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d15f60b9e48e4175b5520d1e57ed2d3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tape6067076-0f", "ovs_interfaceid": "e6067076-0f97-4e9c-9355-353277570e11", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 14:27:42 compute-0 nova_compute[250018]: 2026-01-20 14:27:42.577 250022 DEBUG nova.network.os_vif_util [None req-466ac999-dbbf-45a2-9a60-0b686a463dfc f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Converting VIF {"id": "e6067076-0f97-4e9c-9355-353277570e11", "address": "fa:16:3e:db:cf:b7", "network": {"id": "14f18b27-1594-48d8-a08b-a930f7adbc08", "bridge": "br-int", "label": "tempest-LiveMigrationTest-2126108622-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d15f60b9e48e4175b5520d1e57ed2d3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tape6067076-0f", "ovs_interfaceid": "e6067076-0f97-4e9c-9355-353277570e11", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:27:42 compute-0 nova_compute[250018]: 2026-01-20 14:27:42.578 250022 DEBUG nova.network.os_vif_util [None req-466ac999-dbbf-45a2-9a60-0b686a463dfc f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:db:cf:b7,bridge_name='br-int',has_traffic_filtering=True,id=e6067076-0f97-4e9c-9355-353277570e11,network=Network(14f18b27-1594-48d8-a08b-a930f7adbc08),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tape6067076-0f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:27:42 compute-0 nova_compute[250018]: 2026-01-20 14:27:42.578 250022 DEBUG nova.virt.libvirt.migration [None req-466ac999-dbbf-45a2-9a60-0b686a463dfc f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] Updating guest XML with vif config: <interface type="ethernet">
Jan 20 14:27:42 compute-0 nova_compute[250018]:   <mac address="fa:16:3e:db:cf:b7"/>
Jan 20 14:27:42 compute-0 nova_compute[250018]:   <model type="virtio"/>
Jan 20 14:27:42 compute-0 nova_compute[250018]:   <driver name="vhost" rx_queue_size="512"/>
Jan 20 14:27:42 compute-0 nova_compute[250018]:   <mtu size="1442"/>
Jan 20 14:27:42 compute-0 nova_compute[250018]:   <target dev="tape6067076-0f"/>
Jan 20 14:27:42 compute-0 nova_compute[250018]: </interface>
Jan 20 14:27:42 compute-0 nova_compute[250018]:  _update_vif_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:388
Jan 20 14:27:42 compute-0 nova_compute[250018]: 2026-01-20 14:27:42.579 250022 DEBUG nova.virt.libvirt.driver [None req-466ac999-dbbf-45a2-9a60-0b686a463dfc f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] About to invoke the migrate API _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10272
Jan 20 14:27:43 compute-0 nova_compute[250018]: 2026-01-20 14:27:43.051 250022 DEBUG nova.virt.libvirt.migration [None req-466ac999-dbbf-45a2-9a60-0b686a463dfc f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] Current None elapsed 0 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512
Jan 20 14:27:43 compute-0 nova_compute[250018]: 2026-01-20 14:27:43.051 250022 INFO nova.virt.libvirt.migration [None req-466ac999-dbbf-45a2-9a60-0b686a463dfc f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] Increasing downtime to 50 ms after 0 sec elapsed time
Jan 20 14:27:43 compute-0 nova_compute[250018]: 2026-01-20 14:27:43.183 250022 INFO nova.virt.libvirt.driver [None req-466ac999-dbbf-45a2-9a60-0b686a463dfc f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] Migration running for 0 secs, memory 100% remaining (bytes processed=0, remaining=0, total=0); disk 100% remaining (bytes processed=0, remaining=0, total=0).
Jan 20 14:27:43 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1101: 321 pgs: 321 active+clean; 205 MiB data, 434 MiB used, 21 GiB / 21 GiB avail; 7.6 MiB/s rd, 4.8 MiB/s wr, 333 op/s
Jan 20 14:27:43 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/2865709230' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:27:43 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/1307297049' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:27:43 compute-0 nova_compute[250018]: 2026-01-20 14:27:43.631 250022 DEBUG oslo_concurrency.lockutils [None req-55606195-e2f6-40a0-ad91-bfb1aead077b eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] Acquiring lock "4ee9159e-bf2b-47b7-8568-47fd13815f05" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:27:43 compute-0 nova_compute[250018]: 2026-01-20 14:27:43.631 250022 DEBUG oslo_concurrency.lockutils [None req-55606195-e2f6-40a0-ad91-bfb1aead077b eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] Lock "4ee9159e-bf2b-47b7-8568-47fd13815f05" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:27:43 compute-0 nova_compute[250018]: 2026-01-20 14:27:43.656 250022 DEBUG nova.compute.manager [None req-55606195-e2f6-40a0-ad91-bfb1aead077b eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] [instance: 4ee9159e-bf2b-47b7-8568-47fd13815f05] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 14:27:43 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:27:43 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:27:43 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:27:43.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:27:43 compute-0 nova_compute[250018]: 2026-01-20 14:27:43.685 250022 DEBUG nova.virt.libvirt.migration [None req-466ac999-dbbf-45a2-9a60-0b686a463dfc f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] Current 50 elapsed 1 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512
Jan 20 14:27:43 compute-0 nova_compute[250018]: 2026-01-20 14:27:43.685 250022 DEBUG nova.virt.libvirt.migration [None req-466ac999-dbbf-45a2-9a60-0b686a463dfc f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525
Jan 20 14:27:43 compute-0 nova_compute[250018]: 2026-01-20 14:27:43.733 250022 DEBUG oslo_concurrency.lockutils [None req-55606195-e2f6-40a0-ad91-bfb1aead077b eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:27:43 compute-0 nova_compute[250018]: 2026-01-20 14:27:43.733 250022 DEBUG oslo_concurrency.lockutils [None req-55606195-e2f6-40a0-ad91-bfb1aead077b eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:27:43 compute-0 nova_compute[250018]: 2026-01-20 14:27:43.739 250022 DEBUG nova.virt.hardware [None req-55606195-e2f6-40a0-ad91-bfb1aead077b eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 14:27:43 compute-0 nova_compute[250018]: 2026-01-20 14:27:43.740 250022 INFO nova.compute.claims [None req-55606195-e2f6-40a0-ad91-bfb1aead077b eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] [instance: 4ee9159e-bf2b-47b7-8568-47fd13815f05] Claim successful on node compute-0.ctlplane.example.com
Jan 20 14:27:43 compute-0 nova_compute[250018]: 2026-01-20 14:27:43.951 250022 DEBUG nova.compute.manager [req-292d66bb-980d-42aa-8af3-c133242c123d req-e70b5631-e09f-486f-9c77-53bd966c340b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] Received event network-vif-plugged-e6067076-0f97-4e9c-9355-353277570e11 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:27:43 compute-0 nova_compute[250018]: 2026-01-20 14:27:43.952 250022 DEBUG oslo_concurrency.lockutils [req-292d66bb-980d-42aa-8af3-c133242c123d req-e70b5631-e09f-486f-9c77-53bd966c340b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "d726266f-b9a6-406b-ad13-f9db3e0dc6aa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:27:43 compute-0 nova_compute[250018]: 2026-01-20 14:27:43.952 250022 DEBUG oslo_concurrency.lockutils [req-292d66bb-980d-42aa-8af3-c133242c123d req-e70b5631-e09f-486f-9c77-53bd966c340b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d726266f-b9a6-406b-ad13-f9db3e0dc6aa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:27:43 compute-0 nova_compute[250018]: 2026-01-20 14:27:43.953 250022 DEBUG oslo_concurrency.lockutils [req-292d66bb-980d-42aa-8af3-c133242c123d req-e70b5631-e09f-486f-9c77-53bd966c340b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d726266f-b9a6-406b-ad13-f9db3e0dc6aa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:27:43 compute-0 nova_compute[250018]: 2026-01-20 14:27:43.953 250022 DEBUG nova.compute.manager [req-292d66bb-980d-42aa-8af3-c133242c123d req-e70b5631-e09f-486f-9c77-53bd966c340b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] No waiting events found dispatching network-vif-plugged-e6067076-0f97-4e9c-9355-353277570e11 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:27:43 compute-0 nova_compute[250018]: 2026-01-20 14:27:43.953 250022 WARNING nova.compute.manager [req-292d66bb-980d-42aa-8af3-c133242c123d req-e70b5631-e09f-486f-9c77-53bd966c340b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] Received unexpected event network-vif-plugged-e6067076-0f97-4e9c-9355-353277570e11 for instance with vm_state active and task_state migrating.
Jan 20 14:27:43 compute-0 nova_compute[250018]: 2026-01-20 14:27:43.954 250022 DEBUG nova.compute.manager [req-292d66bb-980d-42aa-8af3-c133242c123d req-e70b5631-e09f-486f-9c77-53bd966c340b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] Received event network-changed-e6067076-0f97-4e9c-9355-353277570e11 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:27:43 compute-0 nova_compute[250018]: 2026-01-20 14:27:43.954 250022 DEBUG nova.compute.manager [req-292d66bb-980d-42aa-8af3-c133242c123d req-e70b5631-e09f-486f-9c77-53bd966c340b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] Refreshing instance network info cache due to event network-changed-e6067076-0f97-4e9c-9355-353277570e11. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:27:43 compute-0 nova_compute[250018]: 2026-01-20 14:27:43.954 250022 DEBUG oslo_concurrency.lockutils [req-292d66bb-980d-42aa-8af3-c133242c123d req-e70b5631-e09f-486f-9c77-53bd966c340b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-d726266f-b9a6-406b-ad13-f9db3e0dc6aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:27:43 compute-0 nova_compute[250018]: 2026-01-20 14:27:43.955 250022 DEBUG oslo_concurrency.lockutils [req-292d66bb-980d-42aa-8af3-c133242c123d req-e70b5631-e09f-486f-9c77-53bd966c340b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-d726266f-b9a6-406b-ad13-f9db3e0dc6aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:27:43 compute-0 nova_compute[250018]: 2026-01-20 14:27:43.955 250022 DEBUG nova.network.neutron [req-292d66bb-980d-42aa-8af3-c133242c123d req-e70b5631-e09f-486f-9c77-53bd966c340b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] Refreshing network info cache for port e6067076-0f97-4e9c-9355-353277570e11 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:27:44 compute-0 nova_compute[250018]: 2026-01-20 14:27:44.054 250022 DEBUG oslo_concurrency.processutils [None req-55606195-e2f6-40a0-ad91-bfb1aead077b eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:27:44 compute-0 nova_compute[250018]: 2026-01-20 14:27:44.190 250022 DEBUG nova.virt.libvirt.migration [None req-466ac999-dbbf-45a2-9a60-0b686a463dfc f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] Current 50 elapsed 1 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512
Jan 20 14:27:44 compute-0 nova_compute[250018]: 2026-01-20 14:27:44.191 250022 DEBUG nova.virt.libvirt.migration [None req-466ac999-dbbf-45a2-9a60-0b686a463dfc f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525
Jan 20 14:27:44 compute-0 nova_compute[250018]: 2026-01-20 14:27:44.342 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768919264.3420703, d726266f-b9a6-406b-ad13-f9db3e0dc6aa => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:27:44 compute-0 nova_compute[250018]: 2026-01-20 14:27:44.343 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] VM Paused (Lifecycle Event)
Jan 20 14:27:44 compute-0 sudo[264983]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:27:44 compute-0 sudo[264983]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:27:44 compute-0 sudo[264983]: pam_unix(sudo:session): session closed for user root
Jan 20 14:27:44 compute-0 sudo[265008]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:27:44 compute-0 sudo[265008]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:27:44 compute-0 nova_compute[250018]: 2026-01-20 14:27:44.452 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:27:44 compute-0 sudo[265008]: pam_unix(sudo:session): session closed for user root
Jan 20 14:27:44 compute-0 nova_compute[250018]: 2026-01-20 14:27:44.460 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: migrating, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:27:44 compute-0 nova_compute[250018]: 2026-01-20 14:27:44.484 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] During sync_power_state the instance has a pending task (migrating). Skip.
Jan 20 14:27:44 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:27:44 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4111249090' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:27:44 compute-0 sudo[265033]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:27:44 compute-0 sudo[265033]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:27:44 compute-0 sudo[265033]: pam_unix(sudo:session): session closed for user root
Jan 20 14:27:44 compute-0 ceph-mon[74360]: pgmap v1101: 321 pgs: 321 active+clean; 205 MiB data, 434 MiB used, 21 GiB / 21 GiB avail; 7.6 MiB/s rd, 4.8 MiB/s wr, 333 op/s
Jan 20 14:27:44 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/4262737880' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:27:44 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/4111249090' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:27:44 compute-0 nova_compute[250018]: 2026-01-20 14:27:44.530 250022 DEBUG oslo_concurrency.processutils [None req-55606195-e2f6-40a0-ad91-bfb1aead077b eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:27:44 compute-0 kernel: tape6067076-0f (unregistering): left promiscuous mode
Jan 20 14:27:44 compute-0 nova_compute[250018]: 2026-01-20 14:27:44.538 250022 DEBUG nova.compute.provider_tree [None req-55606195-e2f6-40a0-ad91-bfb1aead077b eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:27:44 compute-0 NetworkManager[48960]: <info>  [1768919264.5402] device (tape6067076-0f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 14:27:44 compute-0 nova_compute[250018]: 2026-01-20 14:27:44.549 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:27:44 compute-0 ovn_controller[148666]: 2026-01-20T14:27:44Z|00054|binding|INFO|Releasing lport e6067076-0f97-4e9c-9355-353277570e11 from this chassis (sb_readonly=0)
Jan 20 14:27:44 compute-0 ovn_controller[148666]: 2026-01-20T14:27:44Z|00055|binding|INFO|Setting lport e6067076-0f97-4e9c-9355-353277570e11 down in Southbound
Jan 20 14:27:44 compute-0 ovn_controller[148666]: 2026-01-20T14:27:44Z|00056|binding|INFO|Releasing lport 9013ed66-b0f2-4a83-b7d4-572f1324f582 from this chassis (sb_readonly=0)
Jan 20 14:27:44 compute-0 ovn_controller[148666]: 2026-01-20T14:27:44Z|00057|binding|INFO|Setting lport 9013ed66-b0f2-4a83-b7d4-572f1324f582 down in Southbound
Jan 20 14:27:44 compute-0 ovn_controller[148666]: 2026-01-20T14:27:44Z|00058|binding|INFO|Removing iface tape6067076-0f ovn-installed in OVS
Jan 20 14:27:44 compute-0 nova_compute[250018]: 2026-01-20 14:27:44.553 250022 DEBUG nova.scheduler.client.report [None req-55606195-e2f6-40a0-ad91-bfb1aead077b eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:27:44 compute-0 nova_compute[250018]: 2026-01-20 14:27:44.557 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:27:44 compute-0 ovn_controller[148666]: 2026-01-20T14:27:44Z|00059|binding|INFO|Releasing lport aa1c73c5-9761-4457-acdc-9f93220f739f from this chassis (sb_readonly=0)
Jan 20 14:27:44 compute-0 ovn_controller[148666]: 2026-01-20T14:27:44Z|00060|binding|INFO|Releasing lport e10f34be-dfc1-4bfe-806f-f00a84c17390 from this chassis (sb_readonly=0)
Jan 20 14:27:44 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:27:44.564 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:51:74:79 19.80.0.125'], port_security=['fa:16:3e:51:74:79 19.80.0.125'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=['e6067076-0f97-4e9c-9355-353277570e11'], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-subport-1871336558', 'neutron:cidrs': '19.80.0.125/24', 'neutron:device_id': '', 'neutron:device_owner': 'trunk:subport', 'neutron:mtu': '', 'neutron:network_name': 'neutron-08e625c5-899c-442a-8ef4-9a3c96892de4', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-subport-1871336558', 'neutron:project_id': 'd15f60b9e48e4175b5520d1e57ed2d3a', 'neutron:revision_number': '3', 'neutron:security_group_ids': '6d729cfd-2f98-4ca5-a524-e543b12b3766', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[42], additional_encap=[], encap=[], mirror_rules=[], datapath=62d5dc3b-a6a9-4e55-8632-5a7fe1112862, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=9013ed66-b0f2-4a83-b7d4-572f1324f582) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:27:44 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:27:44.567 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:db:cf:b7 10.100.0.12'], port_security=['fa:16:3e:db:cf:b7 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com,compute-1.ctlplane.example.com', 'activation-strategy': 'rarp', 'additional-chassis-activated': '5ffd4ac3-9266-4927-98ad-20a17782c725'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-parent-395006048', 'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'd726266f-b9a6-406b-ad13-f9db3e0dc6aa', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-14f18b27-1594-48d8-a08b-a930f7adbc08', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-parent-395006048', 'neutron:project_id': 'd15f60b9e48e4175b5520d1e57ed2d3a', 'neutron:revision_number': '8', 'neutron:security_group_ids': '6d729cfd-2f98-4ca5-a524-e543b12b3766', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=02983c41-bbec-48cf-910a-84fed1be783f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=e6067076-0f97-4e9c-9355-353277570e11) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:27:44 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:27:44.568 160071 INFO neutron.agent.ovn.metadata.agent [-] Port 9013ed66-b0f2-4a83-b7d4-572f1324f582 in datapath 08e625c5-899c-442a-8ef4-9a3c96892de4 unbound from our chassis
Jan 20 14:27:44 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:27:44.569 160071 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 08e625c5-899c-442a-8ef4-9a3c96892de4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 14:27:44 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:27:44.571 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[a617c84c-93d1-4b6e-9318-65ae989765c3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:27:44 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:27:44.572 160071 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-08e625c5-899c-442a-8ef4-9a3c96892de4 namespace which is not needed anymore
Jan 20 14:27:44 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:27:44 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:27:44 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:27:44.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:27:44 compute-0 sudo[265060]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Jan 20 14:27:44 compute-0 nova_compute[250018]: 2026-01-20 14:27:44.591 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:27:44 compute-0 sudo[265060]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:27:44 compute-0 nova_compute[250018]: 2026-01-20 14:27:44.603 250022 DEBUG oslo_concurrency.lockutils [None req-55606195-e2f6-40a0-ad91-bfb1aead077b eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.870s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:27:44 compute-0 nova_compute[250018]: 2026-01-20 14:27:44.604 250022 DEBUG nova.compute.manager [None req-55606195-e2f6-40a0-ad91-bfb1aead077b eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] [instance: 4ee9159e-bf2b-47b7-8568-47fd13815f05] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 14:27:44 compute-0 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000010.scope: Deactivated successfully.
Jan 20 14:27:44 compute-0 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000010.scope: Consumed 13.046s CPU time.
Jan 20 14:27:44 compute-0 systemd-machined[216401]: Machine qemu-8-instance-00000010 terminated.
Jan 20 14:27:44 compute-0 nova_compute[250018]: 2026-01-20 14:27:44.672 250022 DEBUG nova.compute.manager [None req-55606195-e2f6-40a0-ad91-bfb1aead077b eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] [instance: 4ee9159e-bf2b-47b7-8568-47fd13815f05] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 20 14:27:44 compute-0 nova_compute[250018]: 2026-01-20 14:27:44.674 250022 DEBUG nova.network.neutron [None req-55606195-e2f6-40a0-ad91-bfb1aead077b eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] [instance: 4ee9159e-bf2b-47b7-8568-47fd13815f05] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 14:27:44 compute-0 nova_compute[250018]: 2026-01-20 14:27:44.685 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:27:44 compute-0 virtqemud[249565]: Unable to get XATTR trusted.libvirt.security.ref_selinux on vms/d726266f-b9a6-406b-ad13-f9db3e0dc6aa_disk: No such file or directory
Jan 20 14:27:44 compute-0 virtqemud[249565]: Unable to get XATTR trusted.libvirt.security.ref_dac on vms/d726266f-b9a6-406b-ad13-f9db3e0dc6aa_disk: No such file or directory
Jan 20 14:27:44 compute-0 nova_compute[250018]: 2026-01-20 14:27:44.722 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:27:44 compute-0 nova_compute[250018]: 2026-01-20 14:27:44.733 250022 INFO nova.virt.libvirt.driver [None req-55606195-e2f6-40a0-ad91-bfb1aead077b eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] [instance: 4ee9159e-bf2b-47b7-8568-47fd13815f05] Ignoring supplied device name: /dev/sda. Libvirt can't honour user-supplied dev names
Jan 20 14:27:44 compute-0 nova_compute[250018]: 2026-01-20 14:27:44.737 250022 DEBUG nova.virt.libvirt.guest [None req-466ac999-dbbf-45a2-9a60-0b686a463dfc f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Domain has shutdown/gone away: Requested operation is not valid: domain is not running get_job_info /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:688
Jan 20 14:27:44 compute-0 nova_compute[250018]: 2026-01-20 14:27:44.738 250022 INFO nova.virt.libvirt.driver [None req-466ac999-dbbf-45a2-9a60-0b686a463dfc f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] Migration operation has completed
Jan 20 14:27:44 compute-0 nova_compute[250018]: 2026-01-20 14:27:44.738 250022 INFO nova.compute.manager [None req-466ac999-dbbf-45a2-9a60-0b686a463dfc f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] _post_live_migration() is started..
Jan 20 14:27:44 compute-0 nova_compute[250018]: 2026-01-20 14:27:44.743 250022 DEBUG nova.virt.libvirt.driver [None req-466ac999-dbbf-45a2-9a60-0b686a463dfc f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] Migrate API has completed _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10279
Jan 20 14:27:44 compute-0 nova_compute[250018]: 2026-01-20 14:27:44.743 250022 DEBUG nova.virt.libvirt.driver [None req-466ac999-dbbf-45a2-9a60-0b686a463dfc f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] Migration operation thread has finished _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10327
Jan 20 14:27:44 compute-0 nova_compute[250018]: 2026-01-20 14:27:44.744 250022 DEBUG nova.virt.libvirt.driver [None req-466ac999-dbbf-45a2-9a60-0b686a463dfc f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] Migration operation thread notification thread_finished /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10630
Jan 20 14:27:44 compute-0 nova_compute[250018]: 2026-01-20 14:27:44.752 250022 DEBUG nova.compute.manager [None req-55606195-e2f6-40a0-ad91-bfb1aead077b eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] [instance: 4ee9159e-bf2b-47b7-8568-47fd13815f05] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 14:27:44 compute-0 neutron-haproxy-ovnmeta-08e625c5-899c-442a-8ef4-9a3c96892de4[264799]: [NOTICE]   (264803) : haproxy version is 2.8.14-c23fe91
Jan 20 14:27:44 compute-0 neutron-haproxy-ovnmeta-08e625c5-899c-442a-8ef4-9a3c96892de4[264799]: [NOTICE]   (264803) : path to executable is /usr/sbin/haproxy
Jan 20 14:27:44 compute-0 neutron-haproxy-ovnmeta-08e625c5-899c-442a-8ef4-9a3c96892de4[264799]: [WARNING]  (264803) : Exiting Master process...
Jan 20 14:27:44 compute-0 neutron-haproxy-ovnmeta-08e625c5-899c-442a-8ef4-9a3c96892de4[264799]: [ALERT]    (264803) : Current worker (264805) exited with code 143 (Terminated)
Jan 20 14:27:44 compute-0 neutron-haproxy-ovnmeta-08e625c5-899c-442a-8ef4-9a3c96892de4[264799]: [WARNING]  (264803) : All workers exited. Exiting... (0)
Jan 20 14:27:44 compute-0 systemd[1]: libpod-7f04b0dfdfeca784e6bd2b41a48c3d602a36bebc15cffa107d841bef5a893f7c.scope: Deactivated successfully.
Jan 20 14:27:44 compute-0 podman[265108]: 2026-01-20 14:27:44.776050204 +0000 UTC m=+0.062740418 container died 7f04b0dfdfeca784e6bd2b41a48c3d602a36bebc15cffa107d841bef5a893f7c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-08e625c5-899c-442a-8ef4-9a3c96892de4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 20 14:27:44 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7f04b0dfdfeca784e6bd2b41a48c3d602a36bebc15cffa107d841bef5a893f7c-userdata-shm.mount: Deactivated successfully.
Jan 20 14:27:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-58ad095cf806bb6de681428480d27a037b482814d8f2dab166b8f36af2dcedaf-merged.mount: Deactivated successfully.
Jan 20 14:27:44 compute-0 podman[265108]: 2026-01-20 14:27:44.855038717 +0000 UTC m=+0.141728941 container cleanup 7f04b0dfdfeca784e6bd2b41a48c3d602a36bebc15cffa107d841bef5a893f7c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-08e625c5-899c-442a-8ef4-9a3c96892de4, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 20 14:27:44 compute-0 sudo[265060]: pam_unix(sudo:session): session closed for user root
Jan 20 14:27:44 compute-0 systemd[1]: libpod-conmon-7f04b0dfdfeca784e6bd2b41a48c3d602a36bebc15cffa107d841bef5a893f7c.scope: Deactivated successfully.
Jan 20 14:27:44 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 14:27:44 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:27:44 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 14:27:44 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:27:44 compute-0 podman[265168]: 2026-01-20 14:27:44.926163329 +0000 UTC m=+0.047758105 container remove 7f04b0dfdfeca784e6bd2b41a48c3d602a36bebc15cffa107d841bef5a893f7c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-08e625c5-899c-442a-8ef4-9a3c96892de4, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Jan 20 14:27:44 compute-0 nova_compute[250018]: 2026-01-20 14:27:44.927 250022 DEBUG nova.compute.manager [req-68dc4357-b927-464a-9fac-a6076301f664 req-5a5ba0c0-b4d6-4b3e-a3ba-02eeffa956bb 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] Received event network-vif-unplugged-e6067076-0f97-4e9c-9355-353277570e11 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:27:44 compute-0 nova_compute[250018]: 2026-01-20 14:27:44.927 250022 DEBUG oslo_concurrency.lockutils [req-68dc4357-b927-464a-9fac-a6076301f664 req-5a5ba0c0-b4d6-4b3e-a3ba-02eeffa956bb 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "d726266f-b9a6-406b-ad13-f9db3e0dc6aa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:27:44 compute-0 nova_compute[250018]: 2026-01-20 14:27:44.927 250022 DEBUG oslo_concurrency.lockutils [req-68dc4357-b927-464a-9fac-a6076301f664 req-5a5ba0c0-b4d6-4b3e-a3ba-02eeffa956bb 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d726266f-b9a6-406b-ad13-f9db3e0dc6aa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:27:44 compute-0 nova_compute[250018]: 2026-01-20 14:27:44.928 250022 DEBUG oslo_concurrency.lockutils [req-68dc4357-b927-464a-9fac-a6076301f664 req-5a5ba0c0-b4d6-4b3e-a3ba-02eeffa956bb 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d726266f-b9a6-406b-ad13-f9db3e0dc6aa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:27:44 compute-0 nova_compute[250018]: 2026-01-20 14:27:44.928 250022 DEBUG nova.compute.manager [req-68dc4357-b927-464a-9fac-a6076301f664 req-5a5ba0c0-b4d6-4b3e-a3ba-02eeffa956bb 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] No waiting events found dispatching network-vif-unplugged-e6067076-0f97-4e9c-9355-353277570e11 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:27:44 compute-0 nova_compute[250018]: 2026-01-20 14:27:44.928 250022 DEBUG nova.compute.manager [req-68dc4357-b927-464a-9fac-a6076301f664 req-5a5ba0c0-b4d6-4b3e-a3ba-02eeffa956bb 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] Received event network-vif-unplugged-e6067076-0f97-4e9c-9355-353277570e11 for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 20 14:27:44 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:27:44.932 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[660edf2c-d30a-4ccc-b2a5-a40497f87de2]: (4, ('Tue Jan 20 02:27:44 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-08e625c5-899c-442a-8ef4-9a3c96892de4 (7f04b0dfdfeca784e6bd2b41a48c3d602a36bebc15cffa107d841bef5a893f7c)\n7f04b0dfdfeca784e6bd2b41a48c3d602a36bebc15cffa107d841bef5a893f7c\nTue Jan 20 02:27:44 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-08e625c5-899c-442a-8ef4-9a3c96892de4 (7f04b0dfdfeca784e6bd2b41a48c3d602a36bebc15cffa107d841bef5a893f7c)\n7f04b0dfdfeca784e6bd2b41a48c3d602a36bebc15cffa107d841bef5a893f7c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:27:44 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:27:44.933 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[b22155ef-aac1-46b2-bcb7-b72358089cbd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:27:44 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:27:44.934 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap08e625c5-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:27:44 compute-0 nova_compute[250018]: 2026-01-20 14:27:44.934 250022 DEBUG nova.compute.manager [None req-55606195-e2f6-40a0-ad91-bfb1aead077b eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] [instance: 4ee9159e-bf2b-47b7-8568-47fd13815f05] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 14:27:44 compute-0 kernel: tap08e625c5-80: left promiscuous mode
Jan 20 14:27:44 compute-0 nova_compute[250018]: 2026-01-20 14:27:44.937 250022 DEBUG nova.virt.libvirt.driver [None req-55606195-e2f6-40a0-ad91-bfb1aead077b eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] [instance: 4ee9159e-bf2b-47b7-8568-47fd13815f05] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 14:27:44 compute-0 sudo[265176]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:27:44 compute-0 nova_compute[250018]: 2026-01-20 14:27:44.938 250022 INFO nova.virt.libvirt.driver [None req-55606195-e2f6-40a0-ad91-bfb1aead077b eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] [instance: 4ee9159e-bf2b-47b7-8568-47fd13815f05] Creating image(s)
Jan 20 14:27:44 compute-0 sudo[265176]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:27:44 compute-0 sudo[265176]: pam_unix(sudo:session): session closed for user root
Jan 20 14:27:44 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:27:44.959 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[19f0eea4-67e5-4e46-b2fb-ac0797aa9929]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:27:44 compute-0 nova_compute[250018]: 2026-01-20 14:27:44.973 250022 DEBUG nova.storage.rbd_utils [None req-55606195-e2f6-40a0-ad91-bfb1aead077b eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] rbd image 4ee9159e-bf2b-47b7-8568-47fd13815f05_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:27:44 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:27:44.974 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[c9697e43-0be1-48cf-acae-25c94bd925a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:27:44 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:27:44.976 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[018d0b74-1753-41e1-b187-6042b30ce28a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:27:44 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:27:44.992 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[df5554f7-521d-4f4c-afe0-bbf4722c5c35]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 518957, 'reachable_time': 21218, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 265246, 'error': None, 'target': 'ovnmeta-08e625c5-899c-442a-8ef4-9a3c96892de4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:27:44 compute-0 systemd[1]: run-netns-ovnmeta\x2d08e625c5\x2d899c\x2d442a\x2d8ef4\x2d9a3c96892de4.mount: Deactivated successfully.
Jan 20 14:27:44 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:27:44.995 160517 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-08e625c5-899c-442a-8ef4-9a3c96892de4 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 14:27:44 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:27:44.995 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[b0983424-54df-4448-8f77-51ad51039809]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:27:44 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:27:44.997 160071 INFO neutron.agent.ovn.metadata.agent [-] Port e6067076-0f97-4e9c-9355-353277570e11 in datapath 14f18b27-1594-48d8-a08b-a930f7adbc08 unbound from our chassis
Jan 20 14:27:44 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:27:44.998 160071 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 14f18b27-1594-48d8-a08b-a930f7adbc08, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 14:27:44 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:27:44.998 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[8d6c38a1-1b5c-4ab4-98ea-f904d9a04769]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:27:44 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:27:44.999 160071 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-14f18b27-1594-48d8-a08b-a930f7adbc08 namespace which is not needed anymore
Jan 20 14:27:45 compute-0 sudo[265221]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:27:45 compute-0 sudo[265221]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:27:45 compute-0 nova_compute[250018]: 2026-01-20 14:27:45.008 250022 DEBUG nova.storage.rbd_utils [None req-55606195-e2f6-40a0-ad91-bfb1aead077b eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] rbd image 4ee9159e-bf2b-47b7-8568-47fd13815f05_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:27:45 compute-0 sudo[265221]: pam_unix(sudo:session): session closed for user root
Jan 20 14:27:45 compute-0 nova_compute[250018]: 2026-01-20 14:27:45.038 250022 DEBUG nova.storage.rbd_utils [None req-55606195-e2f6-40a0-ad91-bfb1aead077b eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] rbd image 4ee9159e-bf2b-47b7-8568-47fd13815f05_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:27:45 compute-0 nova_compute[250018]: 2026-01-20 14:27:45.042 250022 DEBUG oslo_concurrency.lockutils [None req-55606195-e2f6-40a0-ad91-bfb1aead077b eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] Acquiring lock "5c59bb50cd8e2f04a0e24e31c5eec4210425eca7" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:27:45 compute-0 nova_compute[250018]: 2026-01-20 14:27:45.044 250022 DEBUG oslo_concurrency.lockutils [None req-55606195-e2f6-40a0-ad91-bfb1aead077b eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] Lock "5c59bb50cd8e2f04a0e24e31c5eec4210425eca7" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:27:45 compute-0 nova_compute[250018]: 2026-01-20 14:27:45.046 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:27:45 compute-0 nova_compute[250018]: 2026-01-20 14:27:45.049 250022 DEBUG nova.policy [None req-55606195-e2f6-40a0-ad91-bfb1aead077b eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'eae4ac21a700463eadfdbe7717ed8b13', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '944b426a2d4c4ad3a01f0b855ad36509', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 20 14:27:45 compute-0 sudo[265274]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:27:45 compute-0 sudo[265274]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:27:45 compute-0 sudo[265274]: pam_unix(sudo:session): session closed for user root
Jan 20 14:27:45 compute-0 neutron-haproxy-ovnmeta-14f18b27-1594-48d8-a08b-a930f7adbc08[264891]: [NOTICE]   (264895) : haproxy version is 2.8.14-c23fe91
Jan 20 14:27:45 compute-0 neutron-haproxy-ovnmeta-14f18b27-1594-48d8-a08b-a930f7adbc08[264891]: [NOTICE]   (264895) : path to executable is /usr/sbin/haproxy
Jan 20 14:27:45 compute-0 neutron-haproxy-ovnmeta-14f18b27-1594-48d8-a08b-a930f7adbc08[264891]: [WARNING]  (264895) : Exiting Master process...
Jan 20 14:27:45 compute-0 neutron-haproxy-ovnmeta-14f18b27-1594-48d8-a08b-a930f7adbc08[264891]: [ALERT]    (264895) : Current worker (264897) exited with code 143 (Terminated)
Jan 20 14:27:45 compute-0 neutron-haproxy-ovnmeta-14f18b27-1594-48d8-a08b-a930f7adbc08[264891]: [WARNING]  (264895) : All workers exited. Exiting... (0)
Jan 20 14:27:45 compute-0 systemd[1]: libpod-ba1dd9d2f8a5396c0474a798d7bbb54a63a6d4831a9a3897ba791dcbc7b6188a.scope: Deactivated successfully.
Jan 20 14:27:45 compute-0 podman[265331]: 2026-01-20 14:27:45.128915949 +0000 UTC m=+0.045726500 container died ba1dd9d2f8a5396c0474a798d7bbb54a63a6d4831a9a3897ba791dcbc7b6188a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-14f18b27-1594-48d8-a08b-a930f7adbc08, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 20 14:27:45 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ba1dd9d2f8a5396c0474a798d7bbb54a63a6d4831a9a3897ba791dcbc7b6188a-userdata-shm.mount: Deactivated successfully.
Jan 20 14:27:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-2622b1f76b8a377470b7be91c6a4995de9a48225be0e6ba0918b580cde83c798-merged.mount: Deactivated successfully.
Jan 20 14:27:45 compute-0 podman[265331]: 2026-01-20 14:27:45.172838849 +0000 UTC m=+0.089649390 container cleanup ba1dd9d2f8a5396c0474a798d7bbb54a63a6d4831a9a3897ba791dcbc7b6188a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-14f18b27-1594-48d8-a08b-a930f7adbc08, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Jan 20 14:27:45 compute-0 systemd[1]: libpod-conmon-ba1dd9d2f8a5396c0474a798d7bbb54a63a6d4831a9a3897ba791dcbc7b6188a.scope: Deactivated successfully.
Jan 20 14:27:45 compute-0 sudo[265347]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 14:27:45 compute-0 sudo[265347]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:27:45 compute-0 podman[265385]: 2026-01-20 14:27:45.240731305 +0000 UTC m=+0.044755894 container remove ba1dd9d2f8a5396c0474a798d7bbb54a63a6d4831a9a3897ba791dcbc7b6188a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-14f18b27-1594-48d8-a08b-a930f7adbc08, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 20 14:27:45 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:27:45.246 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[080514eb-14e6-4d7e-a036-ebe42c3bda7e]: (4, ('Tue Jan 20 02:27:45 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-14f18b27-1594-48d8-a08b-a930f7adbc08 (ba1dd9d2f8a5396c0474a798d7bbb54a63a6d4831a9a3897ba791dcbc7b6188a)\nba1dd9d2f8a5396c0474a798d7bbb54a63a6d4831a9a3897ba791dcbc7b6188a\nTue Jan 20 02:27:45 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-14f18b27-1594-48d8-a08b-a930f7adbc08 (ba1dd9d2f8a5396c0474a798d7bbb54a63a6d4831a9a3897ba791dcbc7b6188a)\nba1dd9d2f8a5396c0474a798d7bbb54a63a6d4831a9a3897ba791dcbc7b6188a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:27:45 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:27:45.248 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[7b901992-4a93-4db8-b3e3-93b8ab8b3a18]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:27:45 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:27:45.249 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap14f18b27-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:27:45 compute-0 nova_compute[250018]: 2026-01-20 14:27:45.250 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:27:45 compute-0 kernel: tap14f18b27-10: left promiscuous mode
Jan 20 14:27:45 compute-0 nova_compute[250018]: 2026-01-20 14:27:45.268 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:27:45 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:27:45.271 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[b0182207-293a-489d-8e15-2548e4f2bb81]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:27:45 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1102: 321 pgs: 321 active+clean; 220 MiB data, 429 MiB used, 21 GiB / 21 GiB avail; 7.0 MiB/s rd, 5.4 MiB/s wr, 309 op/s
Jan 20 14:27:45 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:27:45.283 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[a7e07acd-9822-41c7-bdf1-da11a0ae5da5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:27:45 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:27:45.284 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[3648fcfc-6c30-4e0c-b8c0-f4810eacba22]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:27:45 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:27:45.299 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[395bcae0-b68f-4337-98f6-d385dc75a020]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 519378, 'reachable_time': 27815, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 265406, 'error': None, 'target': 'ovnmeta-14f18b27-1594-48d8-a08b-a930f7adbc08', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:27:45 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:27:45.301 160517 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-14f18b27-1594-48d8-a08b-a930f7adbc08 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 14:27:45 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:27:45.301 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[fe44b7d5-5d00-4610-9fb6-8cc5828122bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:27:45 compute-0 nova_compute[250018]: 2026-01-20 14:27:45.303 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:27:45 compute-0 nova_compute[250018]: 2026-01-20 14:27:45.376 250022 DEBUG nova.virt.libvirt.imagebackend [None req-55606195-e2f6-40a0-ad91-bfb1aead077b eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] Image locations are: [{'url': 'rbd://e399cf45-e6b6-5393-99f1-75c601d3f188/images/afa12d53-6955-4be8-8dd3-8e7dd18a3d5b/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://e399cf45-e6b6-5393-99f1-75c601d3f188/images/afa12d53-6955-4be8-8dd3-8e7dd18a3d5b/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Jan 20 14:27:45 compute-0 sudo[265347]: pam_unix(sudo:session): session closed for user root
Jan 20 14:27:45 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 20 14:27:45 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:27:45 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:27:45 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:27:45.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:27:45 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:27:45 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 20 14:27:45 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:27:45 compute-0 sudo[265437]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:27:45 compute-0 sudo[265437]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:27:45 compute-0 sudo[265437]: pam_unix(sudo:session): session closed for user root
Jan 20 14:27:45 compute-0 sudo[265462]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:27:45 compute-0 sudo[265462]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:27:45 compute-0 sudo[265462]: pam_unix(sudo:session): session closed for user root
Jan 20 14:27:45 compute-0 nova_compute[250018]: 2026-01-20 14:27:45.794 250022 DEBUG nova.network.neutron [req-292d66bb-980d-42aa-8af3-c133242c123d req-e70b5631-e09f-486f-9c77-53bd966c340b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] Updated VIF entry in instance network info cache for port e6067076-0f97-4e9c-9355-353277570e11. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 14:27:45 compute-0 nova_compute[250018]: 2026-01-20 14:27:45.794 250022 DEBUG nova.network.neutron [req-292d66bb-980d-42aa-8af3-c133242c123d req-e70b5631-e09f-486f-9c77-53bd966c340b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] Updating instance_info_cache with network_info: [{"id": "e6067076-0f97-4e9c-9355-353277570e11", "address": "fa:16:3e:db:cf:b7", "network": {"id": "14f18b27-1594-48d8-a08b-a930f7adbc08", "bridge": "br-int", "label": "tempest-LiveMigrationTest-2126108622-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d15f60b9e48e4175b5520d1e57ed2d3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape6067076-0f", "ovs_interfaceid": "e6067076-0f97-4e9c-9355-353277570e11", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"migrating_to": "compute-1.ctlplane.example.com"}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:27:45 compute-0 sudo[265487]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:27:45 compute-0 sudo[265487]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:27:45 compute-0 sudo[265487]: pam_unix(sudo:session): session closed for user root
Jan 20 14:27:45 compute-0 systemd[1]: run-netns-ovnmeta\x2d14f18b27\x2d1594\x2d48d8\x2da08b\x2da930f7adbc08.mount: Deactivated successfully.
Jan 20 14:27:45 compute-0 nova_compute[250018]: 2026-01-20 14:27:45.834 250022 DEBUG oslo_concurrency.lockutils [req-292d66bb-980d-42aa-8af3-c133242c123d req-e70b5631-e09f-486f-9c77-53bd966c340b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-d726266f-b9a6-406b-ad13-f9db3e0dc6aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:27:45 compute-0 nova_compute[250018]: 2026-01-20 14:27:45.865 250022 DEBUG nova.compute.manager [req-c5f22fd9-503e-4b69-898e-a1d44a937d53 req-ad46685e-cde5-4d31-860a-e5789a638705 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] Received event network-vif-unplugged-e6067076-0f97-4e9c-9355-353277570e11 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:27:45 compute-0 nova_compute[250018]: 2026-01-20 14:27:45.866 250022 DEBUG oslo_concurrency.lockutils [req-c5f22fd9-503e-4b69-898e-a1d44a937d53 req-ad46685e-cde5-4d31-860a-e5789a638705 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "d726266f-b9a6-406b-ad13-f9db3e0dc6aa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:27:45 compute-0 nova_compute[250018]: 2026-01-20 14:27:45.866 250022 DEBUG oslo_concurrency.lockutils [req-c5f22fd9-503e-4b69-898e-a1d44a937d53 req-ad46685e-cde5-4d31-860a-e5789a638705 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d726266f-b9a6-406b-ad13-f9db3e0dc6aa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:27:45 compute-0 nova_compute[250018]: 2026-01-20 14:27:45.866 250022 DEBUG oslo_concurrency.lockutils [req-c5f22fd9-503e-4b69-898e-a1d44a937d53 req-ad46685e-cde5-4d31-860a-e5789a638705 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d726266f-b9a6-406b-ad13-f9db3e0dc6aa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:27:45 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:27:45 compute-0 nova_compute[250018]: 2026-01-20 14:27:45.866 250022 DEBUG nova.compute.manager [req-c5f22fd9-503e-4b69-898e-a1d44a937d53 req-ad46685e-cde5-4d31-860a-e5789a638705 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] No waiting events found dispatching network-vif-unplugged-e6067076-0f97-4e9c-9355-353277570e11 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:27:45 compute-0 nova_compute[250018]: 2026-01-20 14:27:45.866 250022 DEBUG nova.compute.manager [req-c5f22fd9-503e-4b69-898e-a1d44a937d53 req-ad46685e-cde5-4d31-860a-e5789a638705 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] Received event network-vif-unplugged-e6067076-0f97-4e9c-9355-353277570e11 for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 20 14:27:45 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:27:45 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:27:45 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/603074013' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:27:45 compute-0 ceph-mon[74360]: pgmap v1102: 321 pgs: 321 active+clean; 220 MiB data, 429 MiB used, 21 GiB / 21 GiB avail; 7.0 MiB/s rd, 5.4 MiB/s wr, 309 op/s
Jan 20 14:27:45 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:27:45 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:27:45 compute-0 sudo[265512]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- inventory --format=json-pretty --filter-for-batch
Jan 20 14:27:45 compute-0 sudo[265512]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:27:46 compute-0 nova_compute[250018]: 2026-01-20 14:27:46.072 250022 DEBUG nova.network.neutron [None req-466ac999-dbbf-45a2-9a60-0b686a463dfc f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Activated binding for port e6067076-0f97-4e9c-9355-353277570e11 and host compute-1.ctlplane.example.com migrate_instance_start /usr/lib/python3.9/site-packages/nova/network/neutron.py:3181
Jan 20 14:27:46 compute-0 nova_compute[250018]: 2026-01-20 14:27:46.073 250022 DEBUG nova.compute.manager [None req-466ac999-dbbf-45a2-9a60-0b686a463dfc f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] Calling driver.post_live_migration_at_source with original source VIFs from migrate_data: [{"id": "e6067076-0f97-4e9c-9355-353277570e11", "address": "fa:16:3e:db:cf:b7", "network": {"id": "14f18b27-1594-48d8-a08b-a930f7adbc08", "bridge": "br-int", "label": "tempest-LiveMigrationTest-2126108622-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d15f60b9e48e4175b5520d1e57ed2d3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape6067076-0f", "ovs_interfaceid": "e6067076-0f97-4e9c-9355-353277570e11", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] _post_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:9326
Jan 20 14:27:46 compute-0 nova_compute[250018]: 2026-01-20 14:27:46.074 250022 DEBUG nova.virt.libvirt.vif [None req-466ac999-dbbf-45a2-9a60-0b686a463dfc f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:27:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-1394818615',display_name='tempest-LiveMigrationTest-server-1394818615',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-livemigrationtest-server-1394818615',id=16,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:27:33Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='d15f60b9e48e4175b5520d1e57ed2d3a',ramdisk_id='',reservation_id='r-pti072hl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',imag
e_min_disk='1',image_min_ram='0',owner_project_name='tempest-LiveMigrationTest-864280704',owner_user_name='tempest-LiveMigrationTest-864280704-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T14:27:37Z,user_data=None,user_id='bce7fcbd19554e29bb80c5b93b7dd3c9',uuid=d726266f-b9a6-406b-ad13-f9db3e0dc6aa,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e6067076-0f97-4e9c-9355-353277570e11", "address": "fa:16:3e:db:cf:b7", "network": {"id": "14f18b27-1594-48d8-a08b-a930f7adbc08", "bridge": "br-int", "label": "tempest-LiveMigrationTest-2126108622-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d15f60b9e48e4175b5520d1e57ed2d3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape6067076-0f", "ovs_interfaceid": "e6067076-0f97-4e9c-9355-353277570e11", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 14:27:46 compute-0 nova_compute[250018]: 2026-01-20 14:27:46.074 250022 DEBUG nova.network.os_vif_util [None req-466ac999-dbbf-45a2-9a60-0b686a463dfc f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Converting VIF {"id": "e6067076-0f97-4e9c-9355-353277570e11", "address": "fa:16:3e:db:cf:b7", "network": {"id": "14f18b27-1594-48d8-a08b-a930f7adbc08", "bridge": "br-int", "label": "tempest-LiveMigrationTest-2126108622-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d15f60b9e48e4175b5520d1e57ed2d3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape6067076-0f", "ovs_interfaceid": "e6067076-0f97-4e9c-9355-353277570e11", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:27:46 compute-0 nova_compute[250018]: 2026-01-20 14:27:46.075 250022 DEBUG nova.network.os_vif_util [None req-466ac999-dbbf-45a2-9a60-0b686a463dfc f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:db:cf:b7,bridge_name='br-int',has_traffic_filtering=True,id=e6067076-0f97-4e9c-9355-353277570e11,network=Network(14f18b27-1594-48d8-a08b-a930f7adbc08),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tape6067076-0f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:27:46 compute-0 nova_compute[250018]: 2026-01-20 14:27:46.075 250022 DEBUG os_vif [None req-466ac999-dbbf-45a2-9a60-0b686a463dfc f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:db:cf:b7,bridge_name='br-int',has_traffic_filtering=True,id=e6067076-0f97-4e9c-9355-353277570e11,network=Network(14f18b27-1594-48d8-a08b-a930f7adbc08),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tape6067076-0f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 14:27:46 compute-0 nova_compute[250018]: 2026-01-20 14:27:46.082 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:27:46 compute-0 nova_compute[250018]: 2026-01-20 14:27:46.082 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape6067076-0f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:27:46 compute-0 nova_compute[250018]: 2026-01-20 14:27:46.084 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:27:46 compute-0 nova_compute[250018]: 2026-01-20 14:27:46.088 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:27:46 compute-0 nova_compute[250018]: 2026-01-20 14:27:46.091 250022 INFO os_vif [None req-466ac999-dbbf-45a2-9a60-0b686a463dfc f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:db:cf:b7,bridge_name='br-int',has_traffic_filtering=True,id=e6067076-0f97-4e9c-9355-353277570e11,network=Network(14f18b27-1594-48d8-a08b-a930f7adbc08),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tape6067076-0f')
Jan 20 14:27:46 compute-0 nova_compute[250018]: 2026-01-20 14:27:46.092 250022 DEBUG oslo_concurrency.lockutils [None req-466ac999-dbbf-45a2-9a60-0b686a463dfc f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:27:46 compute-0 nova_compute[250018]: 2026-01-20 14:27:46.092 250022 DEBUG oslo_concurrency.lockutils [None req-466ac999-dbbf-45a2-9a60-0b686a463dfc f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:27:46 compute-0 nova_compute[250018]: 2026-01-20 14:27:46.092 250022 DEBUG oslo_concurrency.lockutils [None req-466ac999-dbbf-45a2-9a60-0b686a463dfc f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:27:46 compute-0 nova_compute[250018]: 2026-01-20 14:27:46.093 250022 DEBUG nova.compute.manager [None req-466ac999-dbbf-45a2-9a60-0b686a463dfc f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] Calling driver.cleanup from _post_live_migration _post_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:9349
Jan 20 14:27:46 compute-0 nova_compute[250018]: 2026-01-20 14:27:46.093 250022 INFO nova.virt.libvirt.driver [None req-466ac999-dbbf-45a2-9a60-0b686a463dfc f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] Deleting instance files /var/lib/nova/instances/d726266f-b9a6-406b-ad13-f9db3e0dc6aa_del
Jan 20 14:27:46 compute-0 nova_compute[250018]: 2026-01-20 14:27:46.093 250022 INFO nova.virt.libvirt.driver [None req-466ac999-dbbf-45a2-9a60-0b686a463dfc f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] Deletion of /var/lib/nova/instances/d726266f-b9a6-406b-ad13-f9db3e0dc6aa_del complete
Jan 20 14:27:46 compute-0 nova_compute[250018]: 2026-01-20 14:27:46.130 250022 DEBUG nova.network.neutron [None req-55606195-e2f6-40a0-ad91-bfb1aead077b eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] [instance: 4ee9159e-bf2b-47b7-8568-47fd13815f05] Successfully created port: 249be315-41d6-478d-9d9a-f3251b200e7f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 20 14:27:46 compute-0 podman[265570]: 2026-01-20 14:27:46.203860395 +0000 UTC m=+0.049089340 container create e7f6adae0d8303b3dd22ce1283e7d968aa7d7f49fb1377093c1975f06cecc711 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_jackson, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 20 14:27:46 compute-0 systemd[1]: Started libpod-conmon-e7f6adae0d8303b3dd22ce1283e7d968aa7d7f49fb1377093c1975f06cecc711.scope.
Jan 20 14:27:46 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:27:46 compute-0 podman[265570]: 2026-01-20 14:27:46.183880187 +0000 UTC m=+0.029109152 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:27:46 compute-0 podman[265570]: 2026-01-20 14:27:46.409701858 +0000 UTC m=+0.254930823 container init e7f6adae0d8303b3dd22ce1283e7d968aa7d7f49fb1377093c1975f06cecc711 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_jackson, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:27:46 compute-0 podman[265570]: 2026-01-20 14:27:46.419975544 +0000 UTC m=+0.265204489 container start e7f6adae0d8303b3dd22ce1283e7d968aa7d7f49fb1377093c1975f06cecc711 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_jackson, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 20 14:27:46 compute-0 podman[265570]: 2026-01-20 14:27:46.423507659 +0000 UTC m=+0.268736604 container attach e7f6adae0d8303b3dd22ce1283e7d968aa7d7f49fb1377093c1975f06cecc711 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_jackson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:27:46 compute-0 zen_jackson[265586]: 167 167
Jan 20 14:27:46 compute-0 systemd[1]: libpod-e7f6adae0d8303b3dd22ce1283e7d968aa7d7f49fb1377093c1975f06cecc711.scope: Deactivated successfully.
Jan 20 14:27:46 compute-0 podman[265570]: 2026-01-20 14:27:46.42875112 +0000 UTC m=+0.273980065 container died e7f6adae0d8303b3dd22ce1283e7d968aa7d7f49fb1377093c1975f06cecc711 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_jackson, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True)
Jan 20 14:27:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-30036eaac574381c343fcb553df20bc810ff9186574419eb404e0a95215c1c93-merged.mount: Deactivated successfully.
Jan 20 14:27:46 compute-0 podman[265570]: 2026-01-20 14:27:46.470340238 +0000 UTC m=+0.315569173 container remove e7f6adae0d8303b3dd22ce1283e7d968aa7d7f49fb1377093c1975f06cecc711 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_jackson, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 20 14:27:46 compute-0 systemd[1]: libpod-conmon-e7f6adae0d8303b3dd22ce1283e7d968aa7d7f49fb1377093c1975f06cecc711.scope: Deactivated successfully.
Jan 20 14:27:46 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:27:46 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:27:46 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:27:46.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:27:46 compute-0 podman[265611]: 2026-01-20 14:27:46.653160793 +0000 UTC m=+0.041695642 container create d479fa8d7c7efbedb2c164d4809064e039df2e39d5b17bcc3c64154990687dff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_galois, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:27:46 compute-0 systemd[1]: Started libpod-conmon-d479fa8d7c7efbedb2c164d4809064e039df2e39d5b17bcc3c64154990687dff.scope.
Jan 20 14:27:46 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:27:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec7d8e4d0d908333ab8756d78fd9085e4bf8f289e8c50de33174ec7ef4435914/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:27:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec7d8e4d0d908333ab8756d78fd9085e4bf8f289e8c50de33174ec7ef4435914/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:27:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec7d8e4d0d908333ab8756d78fd9085e4bf8f289e8c50de33174ec7ef4435914/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:27:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec7d8e4d0d908333ab8756d78fd9085e4bf8f289e8c50de33174ec7ef4435914/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:27:46 compute-0 podman[265611]: 2026-01-20 14:27:46.635620061 +0000 UTC m=+0.024154930 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:27:46 compute-0 podman[265611]: 2026-01-20 14:27:46.747414046 +0000 UTC m=+0.135948905 container init d479fa8d7c7efbedb2c164d4809064e039df2e39d5b17bcc3c64154990687dff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_galois, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True)
Jan 20 14:27:46 compute-0 podman[265611]: 2026-01-20 14:27:46.753566881 +0000 UTC m=+0.142101730 container start d479fa8d7c7efbedb2c164d4809064e039df2e39d5b17bcc3c64154990687dff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_galois, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:27:46 compute-0 podman[265611]: 2026-01-20 14:27:46.756632983 +0000 UTC m=+0.145167832 container attach d479fa8d7c7efbedb2c164d4809064e039df2e39d5b17bcc3c64154990687dff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_galois, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 20 14:27:46 compute-0 nova_compute[250018]: 2026-01-20 14:27:46.823 250022 DEBUG oslo_concurrency.processutils [None req-55606195-e2f6-40a0-ad91-bfb1aead077b eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5c59bb50cd8e2f04a0e24e31c5eec4210425eca7.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:27:46 compute-0 nova_compute[250018]: 2026-01-20 14:27:46.893 250022 DEBUG oslo_concurrency.processutils [None req-55606195-e2f6-40a0-ad91-bfb1aead077b eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5c59bb50cd8e2f04a0e24e31c5eec4210425eca7.part --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:27:46 compute-0 nova_compute[250018]: 2026-01-20 14:27:46.895 250022 DEBUG nova.virt.images [None req-55606195-e2f6-40a0-ad91-bfb1aead077b eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] afa12d53-6955-4be8-8dd3-8e7dd18a3d5b was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Jan 20 14:27:46 compute-0 nova_compute[250018]: 2026-01-20 14:27:46.895 250022 DEBUG nova.privsep.utils [None req-55606195-e2f6-40a0-ad91-bfb1aead077b eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Jan 20 14:27:46 compute-0 nova_compute[250018]: 2026-01-20 14:27:46.896 250022 DEBUG oslo_concurrency.processutils [None req-55606195-e2f6-40a0-ad91-bfb1aead077b eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/5c59bb50cd8e2f04a0e24e31c5eec4210425eca7.part /var/lib/nova/instances/_base/5c59bb50cd8e2f04a0e24e31c5eec4210425eca7.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:27:46 compute-0 sshd-session[265604]: Invalid user admin from 157.245.78.139 port 58458
Jan 20 14:27:46 compute-0 sshd-session[265604]: Connection closed by invalid user admin 157.245.78.139 port 58458 [preauth]
Jan 20 14:27:47 compute-0 nova_compute[250018]: 2026-01-20 14:27:47.057 250022 DEBUG nova.compute.manager [req-862935c7-8dbf-4aa9-863d-ab5e60c1c4e6 req-10104e14-93e0-4ef8-8c54-8ce80edce26a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] Received event network-vif-plugged-e6067076-0f97-4e9c-9355-353277570e11 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:27:47 compute-0 nova_compute[250018]: 2026-01-20 14:27:47.057 250022 DEBUG oslo_concurrency.lockutils [req-862935c7-8dbf-4aa9-863d-ab5e60c1c4e6 req-10104e14-93e0-4ef8-8c54-8ce80edce26a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "d726266f-b9a6-406b-ad13-f9db3e0dc6aa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:27:47 compute-0 nova_compute[250018]: 2026-01-20 14:27:47.058 250022 DEBUG oslo_concurrency.lockutils [req-862935c7-8dbf-4aa9-863d-ab5e60c1c4e6 req-10104e14-93e0-4ef8-8c54-8ce80edce26a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d726266f-b9a6-406b-ad13-f9db3e0dc6aa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:27:47 compute-0 nova_compute[250018]: 2026-01-20 14:27:47.058 250022 DEBUG oslo_concurrency.lockutils [req-862935c7-8dbf-4aa9-863d-ab5e60c1c4e6 req-10104e14-93e0-4ef8-8c54-8ce80edce26a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d726266f-b9a6-406b-ad13-f9db3e0dc6aa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:27:47 compute-0 nova_compute[250018]: 2026-01-20 14:27:47.058 250022 DEBUG nova.compute.manager [req-862935c7-8dbf-4aa9-863d-ab5e60c1c4e6 req-10104e14-93e0-4ef8-8c54-8ce80edce26a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] No waiting events found dispatching network-vif-plugged-e6067076-0f97-4e9c-9355-353277570e11 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:27:47 compute-0 nova_compute[250018]: 2026-01-20 14:27:47.059 250022 WARNING nova.compute.manager [req-862935c7-8dbf-4aa9-863d-ab5e60c1c4e6 req-10104e14-93e0-4ef8-8c54-8ce80edce26a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] Received unexpected event network-vif-plugged-e6067076-0f97-4e9c-9355-353277570e11 for instance with vm_state active and task_state migrating.
Jan 20 14:27:47 compute-0 nova_compute[250018]: 2026-01-20 14:27:47.059 250022 DEBUG nova.compute.manager [req-862935c7-8dbf-4aa9-863d-ab5e60c1c4e6 req-10104e14-93e0-4ef8-8c54-8ce80edce26a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] Received event network-vif-plugged-e6067076-0f97-4e9c-9355-353277570e11 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:27:47 compute-0 nova_compute[250018]: 2026-01-20 14:27:47.059 250022 DEBUG oslo_concurrency.lockutils [req-862935c7-8dbf-4aa9-863d-ab5e60c1c4e6 req-10104e14-93e0-4ef8-8c54-8ce80edce26a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "d726266f-b9a6-406b-ad13-f9db3e0dc6aa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:27:47 compute-0 nova_compute[250018]: 2026-01-20 14:27:47.060 250022 DEBUG oslo_concurrency.lockutils [req-862935c7-8dbf-4aa9-863d-ab5e60c1c4e6 req-10104e14-93e0-4ef8-8c54-8ce80edce26a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d726266f-b9a6-406b-ad13-f9db3e0dc6aa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:27:47 compute-0 nova_compute[250018]: 2026-01-20 14:27:47.060 250022 DEBUG oslo_concurrency.lockutils [req-862935c7-8dbf-4aa9-863d-ab5e60c1c4e6 req-10104e14-93e0-4ef8-8c54-8ce80edce26a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d726266f-b9a6-406b-ad13-f9db3e0dc6aa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:27:47 compute-0 nova_compute[250018]: 2026-01-20 14:27:47.060 250022 DEBUG nova.compute.manager [req-862935c7-8dbf-4aa9-863d-ab5e60c1c4e6 req-10104e14-93e0-4ef8-8c54-8ce80edce26a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] No waiting events found dispatching network-vif-plugged-e6067076-0f97-4e9c-9355-353277570e11 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:27:47 compute-0 nova_compute[250018]: 2026-01-20 14:27:47.061 250022 WARNING nova.compute.manager [req-862935c7-8dbf-4aa9-863d-ab5e60c1c4e6 req-10104e14-93e0-4ef8-8c54-8ce80edce26a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] Received unexpected event network-vif-plugged-e6067076-0f97-4e9c-9355-353277570e11 for instance with vm_state active and task_state migrating.
Jan 20 14:27:47 compute-0 nova_compute[250018]: 2026-01-20 14:27:47.061 250022 DEBUG nova.compute.manager [req-862935c7-8dbf-4aa9-863d-ab5e60c1c4e6 req-10104e14-93e0-4ef8-8c54-8ce80edce26a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] Received event network-vif-plugged-e6067076-0f97-4e9c-9355-353277570e11 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:27:47 compute-0 nova_compute[250018]: 2026-01-20 14:27:47.061 250022 DEBUG oslo_concurrency.lockutils [req-862935c7-8dbf-4aa9-863d-ab5e60c1c4e6 req-10104e14-93e0-4ef8-8c54-8ce80edce26a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "d726266f-b9a6-406b-ad13-f9db3e0dc6aa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:27:47 compute-0 nova_compute[250018]: 2026-01-20 14:27:47.062 250022 DEBUG oslo_concurrency.lockutils [req-862935c7-8dbf-4aa9-863d-ab5e60c1c4e6 req-10104e14-93e0-4ef8-8c54-8ce80edce26a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d726266f-b9a6-406b-ad13-f9db3e0dc6aa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:27:47 compute-0 nova_compute[250018]: 2026-01-20 14:27:47.062 250022 DEBUG oslo_concurrency.lockutils [req-862935c7-8dbf-4aa9-863d-ab5e60c1c4e6 req-10104e14-93e0-4ef8-8c54-8ce80edce26a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d726266f-b9a6-406b-ad13-f9db3e0dc6aa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:27:47 compute-0 nova_compute[250018]: 2026-01-20 14:27:47.062 250022 DEBUG nova.compute.manager [req-862935c7-8dbf-4aa9-863d-ab5e60c1c4e6 req-10104e14-93e0-4ef8-8c54-8ce80edce26a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] No waiting events found dispatching network-vif-plugged-e6067076-0f97-4e9c-9355-353277570e11 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:27:47 compute-0 nova_compute[250018]: 2026-01-20 14:27:47.063 250022 WARNING nova.compute.manager [req-862935c7-8dbf-4aa9-863d-ab5e60c1c4e6 req-10104e14-93e0-4ef8-8c54-8ce80edce26a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] Received unexpected event network-vif-plugged-e6067076-0f97-4e9c-9355-353277570e11 for instance with vm_state active and task_state migrating.
Jan 20 14:27:47 compute-0 nova_compute[250018]: 2026-01-20 14:27:47.063 250022 DEBUG nova.compute.manager [req-862935c7-8dbf-4aa9-863d-ab5e60c1c4e6 req-10104e14-93e0-4ef8-8c54-8ce80edce26a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] Received event network-vif-plugged-e6067076-0f97-4e9c-9355-353277570e11 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:27:47 compute-0 nova_compute[250018]: 2026-01-20 14:27:47.063 250022 DEBUG oslo_concurrency.lockutils [req-862935c7-8dbf-4aa9-863d-ab5e60c1c4e6 req-10104e14-93e0-4ef8-8c54-8ce80edce26a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "d726266f-b9a6-406b-ad13-f9db3e0dc6aa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:27:47 compute-0 nova_compute[250018]: 2026-01-20 14:27:47.063 250022 DEBUG oslo_concurrency.lockutils [req-862935c7-8dbf-4aa9-863d-ab5e60c1c4e6 req-10104e14-93e0-4ef8-8c54-8ce80edce26a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d726266f-b9a6-406b-ad13-f9db3e0dc6aa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:27:47 compute-0 nova_compute[250018]: 2026-01-20 14:27:47.064 250022 DEBUG oslo_concurrency.lockutils [req-862935c7-8dbf-4aa9-863d-ab5e60c1c4e6 req-10104e14-93e0-4ef8-8c54-8ce80edce26a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d726266f-b9a6-406b-ad13-f9db3e0dc6aa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:27:47 compute-0 nova_compute[250018]: 2026-01-20 14:27:47.064 250022 DEBUG nova.compute.manager [req-862935c7-8dbf-4aa9-863d-ab5e60c1c4e6 req-10104e14-93e0-4ef8-8c54-8ce80edce26a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] No waiting events found dispatching network-vif-plugged-e6067076-0f97-4e9c-9355-353277570e11 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:27:47 compute-0 nova_compute[250018]: 2026-01-20 14:27:47.064 250022 WARNING nova.compute.manager [req-862935c7-8dbf-4aa9-863d-ab5e60c1c4e6 req-10104e14-93e0-4ef8-8c54-8ce80edce26a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] Received unexpected event network-vif-plugged-e6067076-0f97-4e9c-9355-353277570e11 for instance with vm_state active and task_state migrating.
Jan 20 14:27:47 compute-0 nova_compute[250018]: 2026-01-20 14:27:47.178 250022 DEBUG oslo_concurrency.processutils [None req-55606195-e2f6-40a0-ad91-bfb1aead077b eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/5c59bb50cd8e2f04a0e24e31c5eec4210425eca7.part /var/lib/nova/instances/_base/5c59bb50cd8e2f04a0e24e31c5eec4210425eca7.converted" returned: 0 in 0.282s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:27:47 compute-0 nova_compute[250018]: 2026-01-20 14:27:47.183 250022 DEBUG oslo_concurrency.processutils [None req-55606195-e2f6-40a0-ad91-bfb1aead077b eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5c59bb50cd8e2f04a0e24e31c5eec4210425eca7.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:27:47 compute-0 nova_compute[250018]: 2026-01-20 14:27:47.247 250022 DEBUG oslo_concurrency.processutils [None req-55606195-e2f6-40a0-ad91-bfb1aead077b eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5c59bb50cd8e2f04a0e24e31c5eec4210425eca7.converted --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:27:47 compute-0 nova_compute[250018]: 2026-01-20 14:27:47.249 250022 DEBUG oslo_concurrency.lockutils [None req-55606195-e2f6-40a0-ad91-bfb1aead077b eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] Lock "5c59bb50cd8e2f04a0e24e31c5eec4210425eca7" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 2.206s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:27:47 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1103: 321 pgs: 321 active+clean; 230 MiB data, 426 MiB used, 21 GiB / 21 GiB avail; 3.9 MiB/s rd, 5.9 MiB/s wr, 342 op/s
Jan 20 14:27:47 compute-0 nova_compute[250018]: 2026-01-20 14:27:47.287 250022 DEBUG nova.storage.rbd_utils [None req-55606195-e2f6-40a0-ad91-bfb1aead077b eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] rbd image 4ee9159e-bf2b-47b7-8568-47fd13815f05_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:27:47 compute-0 nova_compute[250018]: 2026-01-20 14:27:47.292 250022 DEBUG oslo_concurrency.processutils [None req-55606195-e2f6-40a0-ad91-bfb1aead077b eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/5c59bb50cd8e2f04a0e24e31c5eec4210425eca7 4ee9159e-bf2b-47b7-8568-47fd13815f05_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:27:47 compute-0 nova_compute[250018]: 2026-01-20 14:27:47.340 250022 DEBUG nova.network.neutron [None req-55606195-e2f6-40a0-ad91-bfb1aead077b eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] [instance: 4ee9159e-bf2b-47b7-8568-47fd13815f05] Successfully updated port: 249be315-41d6-478d-9d9a-f3251b200e7f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 20 14:27:47 compute-0 nova_compute[250018]: 2026-01-20 14:27:47.363 250022 DEBUG oslo_concurrency.lockutils [None req-55606195-e2f6-40a0-ad91-bfb1aead077b eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] Acquiring lock "refresh_cache-4ee9159e-bf2b-47b7-8568-47fd13815f05" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:27:47 compute-0 nova_compute[250018]: 2026-01-20 14:27:47.364 250022 DEBUG oslo_concurrency.lockutils [None req-55606195-e2f6-40a0-ad91-bfb1aead077b eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] Acquired lock "refresh_cache-4ee9159e-bf2b-47b7-8568-47fd13815f05" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:27:47 compute-0 nova_compute[250018]: 2026-01-20 14:27:47.364 250022 DEBUG nova.network.neutron [None req-55606195-e2f6-40a0-ad91-bfb1aead077b eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] [instance: 4ee9159e-bf2b-47b7-8568-47fd13815f05] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:27:47 compute-0 nova_compute[250018]: 2026-01-20 14:27:47.457 250022 DEBUG nova.compute.manager [req-772b337f-31ec-420f-aaa6-b94e3747bad7 req-b59b7f8b-5ea4-447a-9e3a-d11d70f7e728 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 4ee9159e-bf2b-47b7-8568-47fd13815f05] Received event network-changed-249be315-41d6-478d-9d9a-f3251b200e7f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:27:47 compute-0 nova_compute[250018]: 2026-01-20 14:27:47.458 250022 DEBUG nova.compute.manager [req-772b337f-31ec-420f-aaa6-b94e3747bad7 req-b59b7f8b-5ea4-447a-9e3a-d11d70f7e728 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 4ee9159e-bf2b-47b7-8568-47fd13815f05] Refreshing instance network info cache due to event network-changed-249be315-41d6-478d-9d9a-f3251b200e7f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:27:47 compute-0 nova_compute[250018]: 2026-01-20 14:27:47.459 250022 DEBUG oslo_concurrency.lockutils [req-772b337f-31ec-420f-aaa6-b94e3747bad7 req-b59b7f8b-5ea4-447a-9e3a-d11d70f7e728 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-4ee9159e-bf2b-47b7-8568-47fd13815f05" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:27:47 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 20 14:27:47 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:27:47 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 20 14:27:47 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:27:47 compute-0 nova_compute[250018]: 2026-01-20 14:27:47.585 250022 DEBUG nova.network.neutron [None req-55606195-e2f6-40a0-ad91-bfb1aead077b eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] [instance: 4ee9159e-bf2b-47b7-8568-47fd13815f05] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 14:27:47 compute-0 nova_compute[250018]: 2026-01-20 14:27:47.592 250022 DEBUG oslo_concurrency.processutils [None req-55606195-e2f6-40a0-ad91-bfb1aead077b eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/5c59bb50cd8e2f04a0e24e31c5eec4210425eca7 4ee9159e-bf2b-47b7-8568-47fd13815f05_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.300s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:27:47 compute-0 nova_compute[250018]: 2026-01-20 14:27:47.663 250022 DEBUG nova.storage.rbd_utils [None req-55606195-e2f6-40a0-ad91-bfb1aead077b eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] resizing rbd image 4ee9159e-bf2b-47b7-8568-47fd13815f05_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 20 14:27:47 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:27:47 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:27:47 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:27:47.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:27:47 compute-0 nova_compute[250018]: 2026-01-20 14:27:47.765 250022 DEBUG nova.objects.instance [None req-55606195-e2f6-40a0-ad91-bfb1aead077b eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] Lazy-loading 'migration_context' on Instance uuid 4ee9159e-bf2b-47b7-8568-47fd13815f05 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:27:47 compute-0 nova_compute[250018]: 2026-01-20 14:27:47.786 250022 DEBUG nova.virt.libvirt.driver [None req-55606195-e2f6-40a0-ad91-bfb1aead077b eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] [instance: 4ee9159e-bf2b-47b7-8568-47fd13815f05] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 14:27:47 compute-0 nova_compute[250018]: 2026-01-20 14:27:47.786 250022 DEBUG nova.virt.libvirt.driver [None req-55606195-e2f6-40a0-ad91-bfb1aead077b eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] [instance: 4ee9159e-bf2b-47b7-8568-47fd13815f05] Ensure instance console log exists: /var/lib/nova/instances/4ee9159e-bf2b-47b7-8568-47fd13815f05/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 14:27:47 compute-0 nova_compute[250018]: 2026-01-20 14:27:47.788 250022 DEBUG oslo_concurrency.lockutils [None req-55606195-e2f6-40a0-ad91-bfb1aead077b eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:27:47 compute-0 nova_compute[250018]: 2026-01-20 14:27:47.788 250022 DEBUG oslo_concurrency.lockutils [None req-55606195-e2f6-40a0-ad91-bfb1aead077b eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:27:47 compute-0 nova_compute[250018]: 2026-01-20 14:27:47.788 250022 DEBUG oslo_concurrency.lockutils [None req-55606195-e2f6-40a0-ad91-bfb1aead077b eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:27:48 compute-0 stupefied_galois[265628]: [
Jan 20 14:27:48 compute-0 stupefied_galois[265628]:     {
Jan 20 14:27:48 compute-0 stupefied_galois[265628]:         "available": false,
Jan 20 14:27:48 compute-0 stupefied_galois[265628]:         "ceph_device": false,
Jan 20 14:27:48 compute-0 stupefied_galois[265628]:         "device_id": "QEMU_DVD-ROM_QM00001",
Jan 20 14:27:48 compute-0 stupefied_galois[265628]:         "lsm_data": {},
Jan 20 14:27:48 compute-0 stupefied_galois[265628]:         "lvs": [],
Jan 20 14:27:48 compute-0 stupefied_galois[265628]:         "path": "/dev/sr0",
Jan 20 14:27:48 compute-0 stupefied_galois[265628]:         "rejected_reasons": [
Jan 20 14:27:48 compute-0 stupefied_galois[265628]:             "Insufficient space (<5GB)",
Jan 20 14:27:48 compute-0 stupefied_galois[265628]:             "Has a FileSystem"
Jan 20 14:27:48 compute-0 stupefied_galois[265628]:         ],
Jan 20 14:27:48 compute-0 stupefied_galois[265628]:         "sys_api": {
Jan 20 14:27:48 compute-0 stupefied_galois[265628]:             "actuators": null,
Jan 20 14:27:48 compute-0 stupefied_galois[265628]:             "device_nodes": "sr0",
Jan 20 14:27:48 compute-0 stupefied_galois[265628]:             "devname": "sr0",
Jan 20 14:27:48 compute-0 stupefied_galois[265628]:             "human_readable_size": "482.00 KB",
Jan 20 14:27:48 compute-0 stupefied_galois[265628]:             "id_bus": "ata",
Jan 20 14:27:48 compute-0 stupefied_galois[265628]:             "model": "QEMU DVD-ROM",
Jan 20 14:27:48 compute-0 stupefied_galois[265628]:             "nr_requests": "2",
Jan 20 14:27:48 compute-0 stupefied_galois[265628]:             "parent": "/dev/sr0",
Jan 20 14:27:48 compute-0 stupefied_galois[265628]:             "partitions": {},
Jan 20 14:27:48 compute-0 stupefied_galois[265628]:             "path": "/dev/sr0",
Jan 20 14:27:48 compute-0 stupefied_galois[265628]:             "removable": "1",
Jan 20 14:27:48 compute-0 stupefied_galois[265628]:             "rev": "2.5+",
Jan 20 14:27:48 compute-0 stupefied_galois[265628]:             "ro": "0",
Jan 20 14:27:48 compute-0 stupefied_galois[265628]:             "rotational": "1",
Jan 20 14:27:48 compute-0 stupefied_galois[265628]:             "sas_address": "",
Jan 20 14:27:48 compute-0 stupefied_galois[265628]:             "sas_device_handle": "",
Jan 20 14:27:48 compute-0 stupefied_galois[265628]:             "scheduler_mode": "mq-deadline",
Jan 20 14:27:48 compute-0 stupefied_galois[265628]:             "sectors": 0,
Jan 20 14:27:48 compute-0 stupefied_galois[265628]:             "sectorsize": "2048",
Jan 20 14:27:48 compute-0 stupefied_galois[265628]:             "size": 493568.0,
Jan 20 14:27:48 compute-0 stupefied_galois[265628]:             "support_discard": "2048",
Jan 20 14:27:48 compute-0 stupefied_galois[265628]:             "type": "disk",
Jan 20 14:27:48 compute-0 stupefied_galois[265628]:             "vendor": "QEMU"
Jan 20 14:27:48 compute-0 stupefied_galois[265628]:         }
Jan 20 14:27:48 compute-0 stupefied_galois[265628]:     }
Jan 20 14:27:48 compute-0 stupefied_galois[265628]: ]
Jan 20 14:27:48 compute-0 systemd[1]: libpod-d479fa8d7c7efbedb2c164d4809064e039df2e39d5b17bcc3c64154990687dff.scope: Deactivated successfully.
Jan 20 14:27:48 compute-0 systemd[1]: libpod-d479fa8d7c7efbedb2c164d4809064e039df2e39d5b17bcc3c64154990687dff.scope: Consumed 1.327s CPU time.
Jan 20 14:27:48 compute-0 podman[266953]: 2026-01-20 14:27:48.130958697 +0000 UTC m=+0.030699826 container died d479fa8d7c7efbedb2c164d4809064e039df2e39d5b17bcc3c64154990687dff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_galois, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 14:27:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-ec7d8e4d0d908333ab8756d78fd9085e4bf8f289e8c50de33174ec7ef4435914-merged.mount: Deactivated successfully.
Jan 20 14:27:48 compute-0 podman[266953]: 2026-01-20 14:27:48.180060927 +0000 UTC m=+0.079802026 container remove d479fa8d7c7efbedb2c164d4809064e039df2e39d5b17bcc3c64154990687dff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_galois, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:27:48 compute-0 systemd[1]: libpod-conmon-d479fa8d7c7efbedb2c164d4809064e039df2e39d5b17bcc3c64154990687dff.scope: Deactivated successfully.
Jan 20 14:27:48 compute-0 sudo[265512]: pam_unix(sudo:session): session closed for user root
Jan 20 14:27:48 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 14:27:48 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:27:48 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 14:27:48 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:27:48 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 14:27:48 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:27:48 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 20 14:27:48 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:27:48 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 20 14:27:48 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:27:48 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev cf2a90a1-62c5-4b20-b008-e15f7f268602 does not exist
Jan 20 14:27:48 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 77d51fcd-47ac-4d80-93d1-5222ee5dc8c8 does not exist
Jan 20 14:27:48 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 156f6342-7a89-4e36-99f5-c26483d23764 does not exist
Jan 20 14:27:48 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 20 14:27:48 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:27:48 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 20 14:27:48 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:27:48 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 14:27:48 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:27:48 compute-0 sudo[266968]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:27:48 compute-0 sudo[266968]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:27:48 compute-0 sudo[266968]: pam_unix(sudo:session): session closed for user root
Jan 20 14:27:48 compute-0 ceph-mon[74360]: pgmap v1103: 321 pgs: 321 active+clean; 230 MiB data, 426 MiB used, 21 GiB / 21 GiB avail; 3.9 MiB/s rd, 5.9 MiB/s wr, 342 op/s
Jan 20 14:27:48 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/1118793105' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:27:48 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:27:48 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:27:48 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:27:48 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:27:48 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:27:48 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:27:48 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:27:48 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:27:48 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:27:48 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:27:48 compute-0 sudo[266993]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:27:48 compute-0 sudo[266993]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:27:48 compute-0 sudo[266993]: pam_unix(sudo:session): session closed for user root
Jan 20 14:27:48 compute-0 sudo[267018]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:27:48 compute-0 sudo[267018]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:27:48 compute-0 sudo[267018]: pam_unix(sudo:session): session closed for user root
Jan 20 14:27:48 compute-0 sudo[267043]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 14:27:48 compute-0 sudo[267043]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:27:48 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:27:48 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.002000052s ======
Jan 20 14:27:48 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:27:48.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000052s
Jan 20 14:27:48 compute-0 podman[267109]: 2026-01-20 14:27:48.748572899 +0000 UTC m=+0.037066717 container create cee80d2a84522c431a3d799006497d39dd42fe3f7acde74cf1ece4f297f3bb12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_moser, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 20 14:27:48 compute-0 systemd[1]: Started libpod-conmon-cee80d2a84522c431a3d799006497d39dd42fe3f7acde74cf1ece4f297f3bb12.scope.
Jan 20 14:27:48 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:27:48 compute-0 podman[267109]: 2026-01-20 14:27:48.824972214 +0000 UTC m=+0.113466052 container init cee80d2a84522c431a3d799006497d39dd42fe3f7acde74cf1ece4f297f3bb12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_moser, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:27:48 compute-0 podman[267109]: 2026-01-20 14:27:48.733070303 +0000 UTC m=+0.021564141 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:27:48 compute-0 podman[267109]: 2026-01-20 14:27:48.832322991 +0000 UTC m=+0.120816799 container start cee80d2a84522c431a3d799006497d39dd42fe3f7acde74cf1ece4f297f3bb12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_moser, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 20 14:27:48 compute-0 podman[267109]: 2026-01-20 14:27:48.835574598 +0000 UTC m=+0.124068416 container attach cee80d2a84522c431a3d799006497d39dd42fe3f7acde74cf1ece4f297f3bb12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_moser, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 20 14:27:48 compute-0 systemd[1]: libpod-cee80d2a84522c431a3d799006497d39dd42fe3f7acde74cf1ece4f297f3bb12.scope: Deactivated successfully.
Jan 20 14:27:48 compute-0 confident_moser[267126]: 167 167
Jan 20 14:27:48 compute-0 conmon[267126]: conmon cee80d2a84522c431a3d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-cee80d2a84522c431a3d799006497d39dd42fe3f7acde74cf1ece4f297f3bb12.scope/container/memory.events
Jan 20 14:27:48 compute-0 podman[267109]: 2026-01-20 14:27:48.838333992 +0000 UTC m=+0.126827820 container died cee80d2a84522c431a3d799006497d39dd42fe3f7acde74cf1ece4f297f3bb12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_moser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 20 14:27:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-36cc22b526ecdbd1e3d80d07b32fe0630ef866331b315d842cf113220960ef2b-merged.mount: Deactivated successfully.
Jan 20 14:27:48 compute-0 podman[267109]: 2026-01-20 14:27:48.874398542 +0000 UTC m=+0.162892360 container remove cee80d2a84522c431a3d799006497d39dd42fe3f7acde74cf1ece4f297f3bb12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_moser, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:27:48 compute-0 systemd[1]: libpod-conmon-cee80d2a84522c431a3d799006497d39dd42fe3f7acde74cf1ece4f297f3bb12.scope: Deactivated successfully.
Jan 20 14:27:49 compute-0 podman[267150]: 2026-01-20 14:27:49.042140481 +0000 UTC m=+0.057688921 container create dcaa3ea1a8fbab2c7431ddeec8677be2fe4aacd8629df8537d23b9216016bab7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_vaughan, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 20 14:27:49 compute-0 systemd[1]: Started libpod-conmon-dcaa3ea1a8fbab2c7431ddeec8677be2fe4aacd8629df8537d23b9216016bab7.scope.
Jan 20 14:27:49 compute-0 podman[267150]: 2026-01-20 14:27:49.005660101 +0000 UTC m=+0.021208541 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:27:49 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:27:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7dea2d125db0fac454b9f838d0faae7c87842be3b9e7a1e7d66fb7358f0df929/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:27:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7dea2d125db0fac454b9f838d0faae7c87842be3b9e7a1e7d66fb7358f0df929/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:27:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7dea2d125db0fac454b9f838d0faae7c87842be3b9e7a1e7d66fb7358f0df929/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:27:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7dea2d125db0fac454b9f838d0faae7c87842be3b9e7a1e7d66fb7358f0df929/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:27:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7dea2d125db0fac454b9f838d0faae7c87842be3b9e7a1e7d66fb7358f0df929/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 14:27:49 compute-0 podman[267150]: 2026-01-20 14:27:49.137624798 +0000 UTC m=+0.153173248 container init dcaa3ea1a8fbab2c7431ddeec8677be2fe4aacd8629df8537d23b9216016bab7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_vaughan, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 14:27:49 compute-0 podman[267150]: 2026-01-20 14:27:49.143905866 +0000 UTC m=+0.159454286 container start dcaa3ea1a8fbab2c7431ddeec8677be2fe4aacd8629df8537d23b9216016bab7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_vaughan, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:27:49 compute-0 podman[267150]: 2026-01-20 14:27:49.147629266 +0000 UTC m=+0.163177686 container attach dcaa3ea1a8fbab2c7431ddeec8677be2fe4aacd8629df8537d23b9216016bab7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_vaughan, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 20 14:27:49 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1104: 321 pgs: 321 active+clean; 229 MiB data, 443 MiB used, 21 GiB / 21 GiB avail; 4.1 MiB/s rd, 6.0 MiB/s wr, 331 op/s
Jan 20 14:27:49 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:27:49 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:27:49 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:27:49.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:27:49 compute-0 eloquent_vaughan[267167]: --> passed data devices: 0 physical, 1 LVM
Jan 20 14:27:49 compute-0 eloquent_vaughan[267167]: --> relative data size: 1.0
Jan 20 14:27:49 compute-0 eloquent_vaughan[267167]: --> All data devices are unavailable
Jan 20 14:27:50 compute-0 systemd[1]: libpod-dcaa3ea1a8fbab2c7431ddeec8677be2fe4aacd8629df8537d23b9216016bab7.scope: Deactivated successfully.
Jan 20 14:27:50 compute-0 podman[267150]: 2026-01-20 14:27:50.012055404 +0000 UTC m=+1.027603824 container died dcaa3ea1a8fbab2c7431ddeec8677be2fe4aacd8629df8537d23b9216016bab7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_vaughan, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:27:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-7dea2d125db0fac454b9f838d0faae7c87842be3b9e7a1e7d66fb7358f0df929-merged.mount: Deactivated successfully.
Jan 20 14:27:50 compute-0 podman[267150]: 2026-01-20 14:27:50.075802407 +0000 UTC m=+1.091350827 container remove dcaa3ea1a8fbab2c7431ddeec8677be2fe4aacd8629df8537d23b9216016bab7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_vaughan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 20 14:27:50 compute-0 nova_compute[250018]: 2026-01-20 14:27:50.079 250022 DEBUG nova.network.neutron [None req-55606195-e2f6-40a0-ad91-bfb1aead077b eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] [instance: 4ee9159e-bf2b-47b7-8568-47fd13815f05] Updating instance_info_cache with network_info: [{"id": "249be315-41d6-478d-9d9a-f3251b200e7f", "address": "fa:16:3e:e2:1c:e2", "network": {"id": "652d7129-cd9b-4229-8d54-211a1946e427", "bridge": "br-int", "label": "tempest-AttachSCSIVolumeTestJSON-427094000-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "944b426a2d4c4ad3a01f0b855ad36509", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap249be315-41", "ovs_interfaceid": "249be315-41d6-478d-9d9a-f3251b200e7f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:27:50 compute-0 systemd[1]: libpod-conmon-dcaa3ea1a8fbab2c7431ddeec8677be2fe4aacd8629df8537d23b9216016bab7.scope: Deactivated successfully.
Jan 20 14:27:50 compute-0 nova_compute[250018]: 2026-01-20 14:27:50.100 250022 DEBUG oslo_concurrency.lockutils [None req-55606195-e2f6-40a0-ad91-bfb1aead077b eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] Releasing lock "refresh_cache-4ee9159e-bf2b-47b7-8568-47fd13815f05" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:27:50 compute-0 nova_compute[250018]: 2026-01-20 14:27:50.100 250022 DEBUG nova.compute.manager [None req-55606195-e2f6-40a0-ad91-bfb1aead077b eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] [instance: 4ee9159e-bf2b-47b7-8568-47fd13815f05] Instance network_info: |[{"id": "249be315-41d6-478d-9d9a-f3251b200e7f", "address": "fa:16:3e:e2:1c:e2", "network": {"id": "652d7129-cd9b-4229-8d54-211a1946e427", "bridge": "br-int", "label": "tempest-AttachSCSIVolumeTestJSON-427094000-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "944b426a2d4c4ad3a01f0b855ad36509", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap249be315-41", "ovs_interfaceid": "249be315-41d6-478d-9d9a-f3251b200e7f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 20 14:27:50 compute-0 nova_compute[250018]: 2026-01-20 14:27:50.101 250022 DEBUG oslo_concurrency.lockutils [req-772b337f-31ec-420f-aaa6-b94e3747bad7 req-b59b7f8b-5ea4-447a-9e3a-d11d70f7e728 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-4ee9159e-bf2b-47b7-8568-47fd13815f05" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:27:50 compute-0 nova_compute[250018]: 2026-01-20 14:27:50.101 250022 DEBUG nova.network.neutron [req-772b337f-31ec-420f-aaa6-b94e3747bad7 req-b59b7f8b-5ea4-447a-9e3a-d11d70f7e728 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 4ee9159e-bf2b-47b7-8568-47fd13815f05] Refreshing network info cache for port 249be315-41d6-478d-9d9a-f3251b200e7f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:27:50 compute-0 sudo[267043]: pam_unix(sudo:session): session closed for user root
Jan 20 14:27:50 compute-0 nova_compute[250018]: 2026-01-20 14:27:50.106 250022 DEBUG nova.virt.libvirt.driver [None req-55606195-e2f6-40a0-ad91-bfb1aead077b eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] [instance: 4ee9159e-bf2b-47b7-8568-47fd13815f05] Start _get_guest_xml network_info=[{"id": "249be315-41d6-478d-9d9a-f3251b200e7f", "address": "fa:16:3e:e2:1c:e2", "network": {"id": "652d7129-cd9b-4229-8d54-211a1946e427", "bridge": "br-int", "label": "tempest-AttachSCSIVolumeTestJSON-427094000-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "944b426a2d4c4ad3a01f0b855ad36509", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap249be315-41", "ovs_interfaceid": "249be315-41d6-478d-9d9a-f3251b200e7f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'scsi', 'cdrom_bus': 'scsi', 'mapping': {'root': {'bus': 'scsi', 'dev': 'sda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'scsi', 'dev': 'sda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'scsi', 'dev': 'sdb', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:27:31Z,direct_url=<?>,disk_format='qcow2',id=afa12d53-6955-4be8-8dd3-8e7dd18a3d5b,min_disk=0,min_ram=0,name='',owner='64772ad8e43048d89873964617706532',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:27:37Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/sda', 'image': [{'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/sda', 'device_type': 'disk', 'encrypted': False, 'encryption_secret_uuid': None, 'disk_bus': 'scsi', 'encryption_options': None, 'guest_format': None, 'size': 0, 'image_id': 'afa12d53-6955-4be8-8dd3-8e7dd18a3d5b'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 14:27:50 compute-0 nova_compute[250018]: 2026-01-20 14:27:50.112 250022 WARNING nova.virt.libvirt.driver [None req-55606195-e2f6-40a0-ad91-bfb1aead077b eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:27:50 compute-0 nova_compute[250018]: 2026-01-20 14:27:50.118 250022 DEBUG nova.virt.libvirt.host [None req-55606195-e2f6-40a0-ad91-bfb1aead077b eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 14:27:50 compute-0 nova_compute[250018]: 2026-01-20 14:27:50.120 250022 DEBUG nova.virt.libvirt.host [None req-55606195-e2f6-40a0-ad91-bfb1aead077b eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 14:27:50 compute-0 nova_compute[250018]: 2026-01-20 14:27:50.127 250022 DEBUG nova.virt.libvirt.host [None req-55606195-e2f6-40a0-ad91-bfb1aead077b eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 14:27:50 compute-0 nova_compute[250018]: 2026-01-20 14:27:50.128 250022 DEBUG nova.virt.libvirt.host [None req-55606195-e2f6-40a0-ad91-bfb1aead077b eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 14:27:50 compute-0 nova_compute[250018]: 2026-01-20 14:27:50.130 250022 DEBUG nova.virt.libvirt.driver [None req-55606195-e2f6-40a0-ad91-bfb1aead077b eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 14:27:50 compute-0 nova_compute[250018]: 2026-01-20 14:27:50.130 250022 DEBUG nova.virt.hardware [None req-55606195-e2f6-40a0-ad91-bfb1aead077b eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:27:31Z,direct_url=<?>,disk_format='qcow2',id=afa12d53-6955-4be8-8dd3-8e7dd18a3d5b,min_disk=0,min_ram=0,name='',owner='64772ad8e43048d89873964617706532',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:27:37Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 14:27:50 compute-0 nova_compute[250018]: 2026-01-20 14:27:50.130 250022 DEBUG nova.virt.hardware [None req-55606195-e2f6-40a0-ad91-bfb1aead077b eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 14:27:50 compute-0 nova_compute[250018]: 2026-01-20 14:27:50.130 250022 DEBUG nova.virt.hardware [None req-55606195-e2f6-40a0-ad91-bfb1aead077b eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 14:27:50 compute-0 nova_compute[250018]: 2026-01-20 14:27:50.131 250022 DEBUG nova.virt.hardware [None req-55606195-e2f6-40a0-ad91-bfb1aead077b eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 14:27:50 compute-0 nova_compute[250018]: 2026-01-20 14:27:50.131 250022 DEBUG nova.virt.hardware [None req-55606195-e2f6-40a0-ad91-bfb1aead077b eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 14:27:50 compute-0 nova_compute[250018]: 2026-01-20 14:27:50.131 250022 DEBUG nova.virt.hardware [None req-55606195-e2f6-40a0-ad91-bfb1aead077b eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 14:27:50 compute-0 nova_compute[250018]: 2026-01-20 14:27:50.131 250022 DEBUG nova.virt.hardware [None req-55606195-e2f6-40a0-ad91-bfb1aead077b eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 14:27:50 compute-0 nova_compute[250018]: 2026-01-20 14:27:50.131 250022 DEBUG nova.virt.hardware [None req-55606195-e2f6-40a0-ad91-bfb1aead077b eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 14:27:50 compute-0 nova_compute[250018]: 2026-01-20 14:27:50.131 250022 DEBUG nova.virt.hardware [None req-55606195-e2f6-40a0-ad91-bfb1aead077b eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 14:27:50 compute-0 nova_compute[250018]: 2026-01-20 14:27:50.131 250022 DEBUG nova.virt.hardware [None req-55606195-e2f6-40a0-ad91-bfb1aead077b eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 14:27:50 compute-0 nova_compute[250018]: 2026-01-20 14:27:50.132 250022 DEBUG nova.virt.hardware [None req-55606195-e2f6-40a0-ad91-bfb1aead077b eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 14:27:50 compute-0 nova_compute[250018]: 2026-01-20 14:27:50.134 250022 DEBUG oslo_concurrency.processutils [None req-55606195-e2f6-40a0-ad91-bfb1aead077b eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:27:50 compute-0 sudo[267196]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:27:50 compute-0 sudo[267196]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:27:50 compute-0 sudo[267196]: pam_unix(sudo:session): session closed for user root
Jan 20 14:27:50 compute-0 sudo[267222]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:27:50 compute-0 sudo[267222]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:27:50 compute-0 sudo[267222]: pam_unix(sudo:session): session closed for user root
Jan 20 14:27:50 compute-0 sudo[267247]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:27:50 compute-0 sudo[267247]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:27:50 compute-0 sudo[267247]: pam_unix(sudo:session): session closed for user root
Jan 20 14:27:50 compute-0 nova_compute[250018]: 2026-01-20 14:27:50.305 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:27:50 compute-0 sudo[267291]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- lvm list --format json
Jan 20 14:27:50 compute-0 sudo[267291]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:27:50 compute-0 ceph-mon[74360]: pgmap v1104: 321 pgs: 321 active+clean; 229 MiB data, 443 MiB used, 21 GiB / 21 GiB avail; 4.1 MiB/s rd, 6.0 MiB/s wr, 331 op/s
Jan 20 14:27:50 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:27:50 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/785786679' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:27:50 compute-0 nova_compute[250018]: 2026-01-20 14:27:50.574 250022 DEBUG oslo_concurrency.processutils [None req-55606195-e2f6-40a0-ad91-bfb1aead077b eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:27:50 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:27:50 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:27:50 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:27:50.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:27:50 compute-0 nova_compute[250018]: 2026-01-20 14:27:50.599 250022 DEBUG nova.storage.rbd_utils [None req-55606195-e2f6-40a0-ad91-bfb1aead077b eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] rbd image 4ee9159e-bf2b-47b7-8568-47fd13815f05_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:27:50 compute-0 nova_compute[250018]: 2026-01-20 14:27:50.603 250022 DEBUG oslo_concurrency.processutils [None req-55606195-e2f6-40a0-ad91-bfb1aead077b eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:27:50 compute-0 podman[267371]: 2026-01-20 14:27:50.635686977 +0000 UTC m=+0.038027683 container create 58b53e000d4dbe9130b2f4dadface827cba317c8870452809292be740042c3e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_swartz, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 20 14:27:50 compute-0 systemd[1]: Started libpod-conmon-58b53e000d4dbe9130b2f4dadface827cba317c8870452809292be740042c3e7.scope.
Jan 20 14:27:50 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:27:50 compute-0 podman[267371]: 2026-01-20 14:27:50.703937532 +0000 UTC m=+0.106278278 container init 58b53e000d4dbe9130b2f4dadface827cba317c8870452809292be740042c3e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_swartz, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True)
Jan 20 14:27:50 compute-0 podman[267371]: 2026-01-20 14:27:50.710119828 +0000 UTC m=+0.112460544 container start 58b53e000d4dbe9130b2f4dadface827cba317c8870452809292be740042c3e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_swartz, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 20 14:27:50 compute-0 podman[267371]: 2026-01-20 14:27:50.713588592 +0000 UTC m=+0.115929338 container attach 58b53e000d4dbe9130b2f4dadface827cba317c8870452809292be740042c3e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_swartz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 14:27:50 compute-0 angry_swartz[267392]: 167 167
Jan 20 14:27:50 compute-0 systemd[1]: libpod-58b53e000d4dbe9130b2f4dadface827cba317c8870452809292be740042c3e7.scope: Deactivated successfully.
Jan 20 14:27:50 compute-0 conmon[267392]: conmon 58b53e000d4dbe9130b2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-58b53e000d4dbe9130b2f4dadface827cba317c8870452809292be740042c3e7.scope/container/memory.events
Jan 20 14:27:50 compute-0 podman[267371]: 2026-01-20 14:27:50.716213981 +0000 UTC m=+0.118554697 container died 58b53e000d4dbe9130b2f4dadface827cba317c8870452809292be740042c3e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_swartz, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 20 14:27:50 compute-0 podman[267371]: 2026-01-20 14:27:50.620436057 +0000 UTC m=+0.022776803 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:27:50 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:27:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-a937274ca641aa6ba547d356789902eaa4ab00d8903e6832fd9a4024edda0198-merged.mount: Deactivated successfully.
Jan 20 14:27:50 compute-0 podman[267371]: 2026-01-20 14:27:50.984152705 +0000 UTC m=+0.386493421 container remove 58b53e000d4dbe9130b2f4dadface827cba317c8870452809292be740042c3e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_swartz, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:27:51 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:27:51 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/112098410' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:27:51 compute-0 nova_compute[250018]: 2026-01-20 14:27:51.063 250022 DEBUG oslo_concurrency.processutils [None req-55606195-e2f6-40a0-ad91-bfb1aead077b eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:27:51 compute-0 nova_compute[250018]: 2026-01-20 14:27:51.066 250022 DEBUG nova.virt.libvirt.vif [None req-55606195-e2f6-40a0-ad91-bfb1aead077b eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:27:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachSCSIVolumeTestJSON-server-1306178105',display_name='tempest-AttachSCSIVolumeTestJSON-server-1306178105',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachscsivolumetestjson-server-1306178105',id=21,image_ref='afa12d53-6955-4be8-8dd3-8e7dd18a3d5b',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLcwkLfz+n0xUADkH5hSwgQR9Vlq54FRm6knqxSeXMPjygHrcFpUP17ZQ6w9WwqEnkXLzYkY49szBEmF9sDE/LAQKFFOum+jTv2mFAAqYD9yBP8bjfXxEQt6qfFVHpdBww==',key_name='tempest-keypair-919175856',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='944b426a2d4c4ad3a01f0b855ad36509',ramdisk_id='',reservation_id='r-3025v9a1',resources=None,root_device_name='/dev/sda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='afa12d53-6955-4be8-8dd3-8e7dd18a3d5b',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='scsi',image_hw_disk_bus='scsi',image_hw_machine_type='q35',image_hw_scsi_model='virtio-scsi',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachSCSIVolumeTestJSON-687112080',owner_user_name='tempest-AttachSCSIVolumeTestJSON-687112080-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:27:44Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='eae4ac21a700463eadfdbe7717ed8b13',uuid=4ee9159e-bf2b-47b7-8568-47fd13815f05,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "249be315-41d6-478d-9d9a-f3251b200e7f", "address": "fa:16:3e:e2:1c:e2", "network": {"id": "652d7129-cd9b-4229-8d54-211a1946e427", "bridge": "br-int", "label": "tempest-AttachSCSIVolumeTestJSON-427094000-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "944b426a2d4c4ad3a01f0b855ad36509", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap249be315-41", "ovs_interfaceid": "249be315-41d6-478d-9d9a-f3251b200e7f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 14:27:51 compute-0 nova_compute[250018]: 2026-01-20 14:27:51.066 250022 DEBUG nova.network.os_vif_util [None req-55606195-e2f6-40a0-ad91-bfb1aead077b eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] Converting VIF {"id": "249be315-41d6-478d-9d9a-f3251b200e7f", "address": "fa:16:3e:e2:1c:e2", "network": {"id": "652d7129-cd9b-4229-8d54-211a1946e427", "bridge": "br-int", "label": "tempest-AttachSCSIVolumeTestJSON-427094000-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "944b426a2d4c4ad3a01f0b855ad36509", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap249be315-41", "ovs_interfaceid": "249be315-41d6-478d-9d9a-f3251b200e7f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:27:51 compute-0 nova_compute[250018]: 2026-01-20 14:27:51.067 250022 DEBUG nova.network.os_vif_util [None req-55606195-e2f6-40a0-ad91-bfb1aead077b eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e2:1c:e2,bridge_name='br-int',has_traffic_filtering=True,id=249be315-41d6-478d-9d9a-f3251b200e7f,network=Network(652d7129-cd9b-4229-8d54-211a1946e427),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap249be315-41') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:27:51 compute-0 nova_compute[250018]: 2026-01-20 14:27:51.069 250022 DEBUG nova.objects.instance [None req-55606195-e2f6-40a0-ad91-bfb1aead077b eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4ee9159e-bf2b-47b7-8568-47fd13815f05 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:27:51 compute-0 systemd[1]: libpod-conmon-58b53e000d4dbe9130b2f4dadface827cba317c8870452809292be740042c3e7.scope: Deactivated successfully.
Jan 20 14:27:51 compute-0 nova_compute[250018]: 2026-01-20 14:27:51.084 250022 DEBUG nova.virt.libvirt.driver [None req-55606195-e2f6-40a0-ad91-bfb1aead077b eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] [instance: 4ee9159e-bf2b-47b7-8568-47fd13815f05] End _get_guest_xml xml=<domain type="kvm">
Jan 20 14:27:51 compute-0 nova_compute[250018]:   <uuid>4ee9159e-bf2b-47b7-8568-47fd13815f05</uuid>
Jan 20 14:27:51 compute-0 nova_compute[250018]:   <name>instance-00000015</name>
Jan 20 14:27:51 compute-0 nova_compute[250018]:   <memory>131072</memory>
Jan 20 14:27:51 compute-0 nova_compute[250018]:   <vcpu>1</vcpu>
Jan 20 14:27:51 compute-0 nova_compute[250018]:   <metadata>
Jan 20 14:27:51 compute-0 nova_compute[250018]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 14:27:51 compute-0 nova_compute[250018]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 14:27:51 compute-0 nova_compute[250018]:       <nova:name>tempest-AttachSCSIVolumeTestJSON-server-1306178105</nova:name>
Jan 20 14:27:51 compute-0 nova_compute[250018]:       <nova:creationTime>2026-01-20 14:27:50</nova:creationTime>
Jan 20 14:27:51 compute-0 nova_compute[250018]:       <nova:flavor name="m1.nano">
Jan 20 14:27:51 compute-0 nova_compute[250018]:         <nova:memory>128</nova:memory>
Jan 20 14:27:51 compute-0 nova_compute[250018]:         <nova:disk>1</nova:disk>
Jan 20 14:27:51 compute-0 nova_compute[250018]:         <nova:swap>0</nova:swap>
Jan 20 14:27:51 compute-0 nova_compute[250018]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 14:27:51 compute-0 nova_compute[250018]:         <nova:vcpus>1</nova:vcpus>
Jan 20 14:27:51 compute-0 nova_compute[250018]:       </nova:flavor>
Jan 20 14:27:51 compute-0 nova_compute[250018]:       <nova:owner>
Jan 20 14:27:51 compute-0 nova_compute[250018]:         <nova:user uuid="eae4ac21a700463eadfdbe7717ed8b13">tempest-AttachSCSIVolumeTestJSON-687112080-project-member</nova:user>
Jan 20 14:27:51 compute-0 nova_compute[250018]:         <nova:project uuid="944b426a2d4c4ad3a01f0b855ad36509">tempest-AttachSCSIVolumeTestJSON-687112080</nova:project>
Jan 20 14:27:51 compute-0 nova_compute[250018]:       </nova:owner>
Jan 20 14:27:51 compute-0 nova_compute[250018]:       <nova:root type="image" uuid="afa12d53-6955-4be8-8dd3-8e7dd18a3d5b"/>
Jan 20 14:27:51 compute-0 nova_compute[250018]:       <nova:ports>
Jan 20 14:27:51 compute-0 nova_compute[250018]:         <nova:port uuid="249be315-41d6-478d-9d9a-f3251b200e7f">
Jan 20 14:27:51 compute-0 nova_compute[250018]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Jan 20 14:27:51 compute-0 nova_compute[250018]:         </nova:port>
Jan 20 14:27:51 compute-0 nova_compute[250018]:       </nova:ports>
Jan 20 14:27:51 compute-0 nova_compute[250018]:     </nova:instance>
Jan 20 14:27:51 compute-0 nova_compute[250018]:   </metadata>
Jan 20 14:27:51 compute-0 nova_compute[250018]:   <sysinfo type="smbios">
Jan 20 14:27:51 compute-0 nova_compute[250018]:     <system>
Jan 20 14:27:51 compute-0 nova_compute[250018]:       <entry name="manufacturer">RDO</entry>
Jan 20 14:27:51 compute-0 nova_compute[250018]:       <entry name="product">OpenStack Compute</entry>
Jan 20 14:27:51 compute-0 nova_compute[250018]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 14:27:51 compute-0 nova_compute[250018]:       <entry name="serial">4ee9159e-bf2b-47b7-8568-47fd13815f05</entry>
Jan 20 14:27:51 compute-0 nova_compute[250018]:       <entry name="uuid">4ee9159e-bf2b-47b7-8568-47fd13815f05</entry>
Jan 20 14:27:51 compute-0 nova_compute[250018]:       <entry name="family">Virtual Machine</entry>
Jan 20 14:27:51 compute-0 nova_compute[250018]:     </system>
Jan 20 14:27:51 compute-0 nova_compute[250018]:   </sysinfo>
Jan 20 14:27:51 compute-0 nova_compute[250018]:   <os>
Jan 20 14:27:51 compute-0 nova_compute[250018]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 14:27:51 compute-0 nova_compute[250018]:     <boot dev="hd"/>
Jan 20 14:27:51 compute-0 nova_compute[250018]:     <smbios mode="sysinfo"/>
Jan 20 14:27:51 compute-0 nova_compute[250018]:   </os>
Jan 20 14:27:51 compute-0 nova_compute[250018]:   <features>
Jan 20 14:27:51 compute-0 nova_compute[250018]:     <acpi/>
Jan 20 14:27:51 compute-0 nova_compute[250018]:     <apic/>
Jan 20 14:27:51 compute-0 nova_compute[250018]:     <vmcoreinfo/>
Jan 20 14:27:51 compute-0 nova_compute[250018]:   </features>
Jan 20 14:27:51 compute-0 nova_compute[250018]:   <clock offset="utc">
Jan 20 14:27:51 compute-0 nova_compute[250018]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 14:27:51 compute-0 nova_compute[250018]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 14:27:51 compute-0 nova_compute[250018]:     <timer name="hpet" present="no"/>
Jan 20 14:27:51 compute-0 nova_compute[250018]:   </clock>
Jan 20 14:27:51 compute-0 nova_compute[250018]:   <cpu mode="custom" match="exact">
Jan 20 14:27:51 compute-0 nova_compute[250018]:     <model>Nehalem</model>
Jan 20 14:27:51 compute-0 nova_compute[250018]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 14:27:51 compute-0 nova_compute[250018]:   </cpu>
Jan 20 14:27:51 compute-0 nova_compute[250018]:   <devices>
Jan 20 14:27:51 compute-0 nova_compute[250018]:     <disk type="network" device="disk">
Jan 20 14:27:51 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 14:27:51 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/4ee9159e-bf2b-47b7-8568-47fd13815f05_disk">
Jan 20 14:27:51 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:27:51 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:27:51 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:27:51 compute-0 nova_compute[250018]:       </source>
Jan 20 14:27:51 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 14:27:51 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:27:51 compute-0 nova_compute[250018]:       </auth>
Jan 20 14:27:51 compute-0 nova_compute[250018]:       <target dev="sda" bus="scsi"/>
Jan 20 14:27:51 compute-0 nova_compute[250018]:       <address type="drive" controller="0" unit="0"/>
Jan 20 14:27:51 compute-0 nova_compute[250018]:     </disk>
Jan 20 14:27:51 compute-0 nova_compute[250018]:     <disk type="network" device="cdrom">
Jan 20 14:27:51 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 14:27:51 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/4ee9159e-bf2b-47b7-8568-47fd13815f05_disk.config">
Jan 20 14:27:51 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:27:51 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:27:51 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:27:51 compute-0 nova_compute[250018]:       </source>
Jan 20 14:27:51 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 14:27:51 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:27:51 compute-0 nova_compute[250018]:       </auth>
Jan 20 14:27:51 compute-0 nova_compute[250018]:       <target dev="sdb" bus="scsi"/>
Jan 20 14:27:51 compute-0 nova_compute[250018]:       <address type="drive" controller="0" unit="1"/>
Jan 20 14:27:51 compute-0 nova_compute[250018]:     </disk>
Jan 20 14:27:51 compute-0 nova_compute[250018]:     <controller type="scsi" index="0" model="virtio-scsi"/>
Jan 20 14:27:51 compute-0 nova_compute[250018]:     <interface type="ethernet">
Jan 20 14:27:51 compute-0 nova_compute[250018]:       <mac address="fa:16:3e:e2:1c:e2"/>
Jan 20 14:27:51 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 14:27:51 compute-0 nova_compute[250018]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 14:27:51 compute-0 nova_compute[250018]:       <mtu size="1442"/>
Jan 20 14:27:51 compute-0 nova_compute[250018]:       <target dev="tap249be315-41"/>
Jan 20 14:27:51 compute-0 nova_compute[250018]:     </interface>
Jan 20 14:27:51 compute-0 nova_compute[250018]:     <serial type="pty">
Jan 20 14:27:51 compute-0 nova_compute[250018]:       <log file="/var/lib/nova/instances/4ee9159e-bf2b-47b7-8568-47fd13815f05/console.log" append="off"/>
Jan 20 14:27:51 compute-0 nova_compute[250018]:     </serial>
Jan 20 14:27:51 compute-0 nova_compute[250018]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 14:27:51 compute-0 nova_compute[250018]:     <video>
Jan 20 14:27:51 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 14:27:51 compute-0 nova_compute[250018]:     </video>
Jan 20 14:27:51 compute-0 nova_compute[250018]:     <input type="tablet" bus="usb"/>
Jan 20 14:27:51 compute-0 nova_compute[250018]:     <rng model="virtio">
Jan 20 14:27:51 compute-0 nova_compute[250018]:       <backend model="random">/dev/urandom</backend>
Jan 20 14:27:51 compute-0 nova_compute[250018]:     </rng>
Jan 20 14:27:51 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root"/>
Jan 20 14:27:51 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:27:51 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:27:51 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:27:51 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:27:51 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:27:51 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:27:51 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:27:51 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:27:51 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:27:51 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:27:51 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:27:51 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:27:51 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:27:51 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:27:51 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:27:51 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:27:51 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:27:51 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:27:51 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:27:51 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:27:51 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:27:51 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:27:51 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:27:51 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:27:51 compute-0 nova_compute[250018]:     <controller type="usb" index="0"/>
Jan 20 14:27:51 compute-0 nova_compute[250018]:     <memballoon model="virtio">
Jan 20 14:27:51 compute-0 nova_compute[250018]:       <stats period="10"/>
Jan 20 14:27:51 compute-0 nova_compute[250018]:     </memballoon>
Jan 20 14:27:51 compute-0 nova_compute[250018]:   </devices>
Jan 20 14:27:51 compute-0 nova_compute[250018]: </domain>
Jan 20 14:27:51 compute-0 nova_compute[250018]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 14:27:51 compute-0 nova_compute[250018]: 2026-01-20 14:27:51.088 250022 DEBUG nova.compute.manager [None req-55606195-e2f6-40a0-ad91-bfb1aead077b eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] [instance: 4ee9159e-bf2b-47b7-8568-47fd13815f05] Preparing to wait for external event network-vif-plugged-249be315-41d6-478d-9d9a-f3251b200e7f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 20 14:27:51 compute-0 nova_compute[250018]: 2026-01-20 14:27:51.090 250022 DEBUG oslo_concurrency.lockutils [None req-55606195-e2f6-40a0-ad91-bfb1aead077b eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] Acquiring lock "4ee9159e-bf2b-47b7-8568-47fd13815f05-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:27:51 compute-0 nova_compute[250018]: 2026-01-20 14:27:51.090 250022 DEBUG oslo_concurrency.lockutils [None req-55606195-e2f6-40a0-ad91-bfb1aead077b eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] Lock "4ee9159e-bf2b-47b7-8568-47fd13815f05-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:27:51 compute-0 nova_compute[250018]: 2026-01-20 14:27:51.090 250022 DEBUG oslo_concurrency.lockutils [None req-55606195-e2f6-40a0-ad91-bfb1aead077b eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] Lock "4ee9159e-bf2b-47b7-8568-47fd13815f05-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:27:51 compute-0 nova_compute[250018]: 2026-01-20 14:27:51.091 250022 DEBUG nova.virt.libvirt.vif [None req-55606195-e2f6-40a0-ad91-bfb1aead077b eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:27:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachSCSIVolumeTestJSON-server-1306178105',display_name='tempest-AttachSCSIVolumeTestJSON-server-1306178105',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachscsivolumetestjson-server-1306178105',id=21,image_ref='afa12d53-6955-4be8-8dd3-8e7dd18a3d5b',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLcwkLfz+n0xUADkH5hSwgQR9Vlq54FRm6knqxSeXMPjygHrcFpUP17ZQ6w9WwqEnkXLzYkY49szBEmF9sDE/LAQKFFOum+jTv2mFAAqYD9yBP8bjfXxEQt6qfFVHpdBww==',key_name='tempest-keypair-919175856',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='944b426a2d4c4ad3a01f0b855ad36509',ramdisk_id='',reservation_id='r-3025v9a1',resources=None,root_device_name='/dev/sda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='afa12d53-6955-4be8-8dd3-8e7dd18a3d5b',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='scsi',image_hw_disk_bus='scsi',image_hw_machine_type='q35',image_hw_scsi_model='virtio-scsi',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachSCSIVolumeTestJSON-687112080',owner_user_name='tempest-AttachSCSIVolumeTestJSON-687112080-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:27:44Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='eae4ac21a700463eadfdbe7717ed8b13',uuid=4ee9159e-bf2b-47b7-8568-47fd13815f05,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "249be315-41d6-478d-9d9a-f3251b200e7f", "address": "fa:16:3e:e2:1c:e2", "network": {"id": "652d7129-cd9b-4229-8d54-211a1946e427", "bridge": "br-int", "label": "tempest-AttachSCSIVolumeTestJSON-427094000-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", 
"version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "944b426a2d4c4ad3a01f0b855ad36509", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap249be315-41", "ovs_interfaceid": "249be315-41d6-478d-9d9a-f3251b200e7f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 14:27:51 compute-0 nova_compute[250018]: 2026-01-20 14:27:51.091 250022 DEBUG nova.network.os_vif_util [None req-55606195-e2f6-40a0-ad91-bfb1aead077b eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] Converting VIF {"id": "249be315-41d6-478d-9d9a-f3251b200e7f", "address": "fa:16:3e:e2:1c:e2", "network": {"id": "652d7129-cd9b-4229-8d54-211a1946e427", "bridge": "br-int", "label": "tempest-AttachSCSIVolumeTestJSON-427094000-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "944b426a2d4c4ad3a01f0b855ad36509", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap249be315-41", "ovs_interfaceid": "249be315-41d6-478d-9d9a-f3251b200e7f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:27:51 compute-0 nova_compute[250018]: 2026-01-20 14:27:51.092 250022 DEBUG nova.network.os_vif_util [None req-55606195-e2f6-40a0-ad91-bfb1aead077b eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e2:1c:e2,bridge_name='br-int',has_traffic_filtering=True,id=249be315-41d6-478d-9d9a-f3251b200e7f,network=Network(652d7129-cd9b-4229-8d54-211a1946e427),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap249be315-41') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:27:51 compute-0 nova_compute[250018]: 2026-01-20 14:27:51.092 250022 DEBUG os_vif [None req-55606195-e2f6-40a0-ad91-bfb1aead077b eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e2:1c:e2,bridge_name='br-int',has_traffic_filtering=True,id=249be315-41d6-478d-9d9a-f3251b200e7f,network=Network(652d7129-cd9b-4229-8d54-211a1946e427),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap249be315-41') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 14:27:51 compute-0 nova_compute[250018]: 2026-01-20 14:27:51.093 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:27:51 compute-0 nova_compute[250018]: 2026-01-20 14:27:51.093 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:27:51 compute-0 nova_compute[250018]: 2026-01-20 14:27:51.094 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:27:51 compute-0 nova_compute[250018]: 2026-01-20 14:27:51.094 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:27:51 compute-0 nova_compute[250018]: 2026-01-20 14:27:51.097 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:27:51 compute-0 nova_compute[250018]: 2026-01-20 14:27:51.098 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap249be315-41, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:27:51 compute-0 nova_compute[250018]: 2026-01-20 14:27:51.098 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap249be315-41, col_values=(('external_ids', {'iface-id': '249be315-41d6-478d-9d9a-f3251b200e7f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e2:1c:e2', 'vm-uuid': '4ee9159e-bf2b-47b7-8568-47fd13815f05'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:27:51 compute-0 nova_compute[250018]: 2026-01-20 14:27:51.099 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:27:51 compute-0 NetworkManager[48960]: <info>  [1768919271.1006] manager: (tap249be315-41): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/41)
Jan 20 14:27:51 compute-0 nova_compute[250018]: 2026-01-20 14:27:51.102 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:27:51 compute-0 nova_compute[250018]: 2026-01-20 14:27:51.107 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:27:51 compute-0 nova_compute[250018]: 2026-01-20 14:27:51.107 250022 INFO os_vif [None req-55606195-e2f6-40a0-ad91-bfb1aead077b eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e2:1c:e2,bridge_name='br-int',has_traffic_filtering=True,id=249be315-41d6-478d-9d9a-f3251b200e7f,network=Network(652d7129-cd9b-4229-8d54-211a1946e427),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap249be315-41')
Jan 20 14:27:51 compute-0 podman[267437]: 2026-01-20 14:27:51.121372263 +0000 UTC m=+0.023548924 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:27:51 compute-0 nova_compute[250018]: 2026-01-20 14:27:51.233 250022 DEBUG nova.virt.libvirt.driver [None req-55606195-e2f6-40a0-ad91-bfb1aead077b eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:27:51 compute-0 nova_compute[250018]: 2026-01-20 14:27:51.234 250022 DEBUG nova.virt.libvirt.driver [None req-55606195-e2f6-40a0-ad91-bfb1aead077b eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] No BDM found with device name sdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:27:51 compute-0 nova_compute[250018]: 2026-01-20 14:27:51.234 250022 DEBUG nova.virt.libvirt.driver [None req-55606195-e2f6-40a0-ad91-bfb1aead077b eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] No VIF found with MAC fa:16:3e:e2:1c:e2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 14:27:51 compute-0 nova_compute[250018]: 2026-01-20 14:27:51.235 250022 INFO nova.virt.libvirt.driver [None req-55606195-e2f6-40a0-ad91-bfb1aead077b eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] [instance: 4ee9159e-bf2b-47b7-8568-47fd13815f05] Using config drive
Jan 20 14:27:51 compute-0 nova_compute[250018]: 2026-01-20 14:27:51.258 250022 DEBUG nova.storage.rbd_utils [None req-55606195-e2f6-40a0-ad91-bfb1aead077b eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] rbd image 4ee9159e-bf2b-47b7-8568-47fd13815f05_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:27:51 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1105: 321 pgs: 321 active+clean; 206 MiB data, 434 MiB used, 21 GiB / 21 GiB avail; 5.1 MiB/s rd, 5.9 MiB/s wr, 337 op/s
Jan 20 14:27:51 compute-0 podman[267437]: 2026-01-20 14:27:51.667921865 +0000 UTC m=+0.570098546 container create 3adb455122e3c88948d503a9cb73d55d3df503753a05933e40fa64ab39f4381f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_keller, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:27:51 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:27:51 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:27:51 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:27:51.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:27:51 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/785786679' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:27:51 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/112098410' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:27:51 compute-0 systemd[1]: Started libpod-conmon-3adb455122e3c88948d503a9cb73d55d3df503753a05933e40fa64ab39f4381f.scope.
Jan 20 14:27:51 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:27:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90bd490175a3ad38af41753bf6510c4fefb6f5443143bc8938dc747a3266e463/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:27:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90bd490175a3ad38af41753bf6510c4fefb6f5443143bc8938dc747a3266e463/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:27:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90bd490175a3ad38af41753bf6510c4fefb6f5443143bc8938dc747a3266e463/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:27:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90bd490175a3ad38af41753bf6510c4fefb6f5443143bc8938dc747a3266e463/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:27:51 compute-0 podman[267437]: 2026-01-20 14:27:51.801951638 +0000 UTC m=+0.704128389 container init 3adb455122e3c88948d503a9cb73d55d3df503753a05933e40fa64ab39f4381f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_keller, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:27:51 compute-0 podman[267437]: 2026-01-20 14:27:51.817875476 +0000 UTC m=+0.720052177 container start 3adb455122e3c88948d503a9cb73d55d3df503753a05933e40fa64ab39f4381f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_keller, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True)
Jan 20 14:27:51 compute-0 podman[267437]: 2026-01-20 14:27:51.822340926 +0000 UTC m=+0.724517617 container attach 3adb455122e3c88948d503a9cb73d55d3df503753a05933e40fa64ab39f4381f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_keller, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:27:52 compute-0 nova_compute[250018]: 2026-01-20 14:27:52.362 250022 INFO nova.virt.libvirt.driver [None req-55606195-e2f6-40a0-ad91-bfb1aead077b eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] [instance: 4ee9159e-bf2b-47b7-8568-47fd13815f05] Creating config drive at /var/lib/nova/instances/4ee9159e-bf2b-47b7-8568-47fd13815f05/disk.config
Jan 20 14:27:52 compute-0 nova_compute[250018]: 2026-01-20 14:27:52.374 250022 DEBUG oslo_concurrency.processutils [None req-55606195-e2f6-40a0-ad91-bfb1aead077b eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4ee9159e-bf2b-47b7-8568-47fd13815f05/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdrks86ax execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:27:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Optimize plan auto_2026-01-20_14:27:52
Jan 20 14:27:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 14:27:52 compute-0 ceph-mgr[74653]: [balancer INFO root] do_upmap
Jan 20 14:27:52 compute-0 ceph-mgr[74653]: [balancer INFO root] pools ['default.rgw.meta', 'backups', 'volumes', 'default.rgw.control', 'default.rgw.log', 'vms', 'cephfs.cephfs.data', '.rgw.root', 'images', '.mgr', 'cephfs.cephfs.meta']
Jan 20 14:27:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:27:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:27:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:27:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:27:52 compute-0 ceph-mgr[74653]: [balancer INFO root] prepared 0/10 changes
Jan 20 14:27:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:27:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:27:52 compute-0 nova_compute[250018]: 2026-01-20 14:27:52.511 250022 DEBUG oslo_concurrency.processutils [None req-55606195-e2f6-40a0-ad91-bfb1aead077b eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4ee9159e-bf2b-47b7-8568-47fd13815f05/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdrks86ax" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:27:52 compute-0 epic_keller[267475]: {
Jan 20 14:27:52 compute-0 epic_keller[267475]:     "0": [
Jan 20 14:27:52 compute-0 epic_keller[267475]:         {
Jan 20 14:27:52 compute-0 epic_keller[267475]:             "devices": [
Jan 20 14:27:52 compute-0 epic_keller[267475]:                 "/dev/loop3"
Jan 20 14:27:52 compute-0 epic_keller[267475]:             ],
Jan 20 14:27:52 compute-0 epic_keller[267475]:             "lv_name": "ceph_lv0",
Jan 20 14:27:52 compute-0 epic_keller[267475]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:27:52 compute-0 epic_keller[267475]:             "lv_size": "7511998464",
Jan 20 14:27:52 compute-0 epic_keller[267475]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e399cf45-e6b6-5393-99f1-75c601d3f188,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afc3740a-ccee-46f8-b343-22648cc89376,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 20 14:27:52 compute-0 epic_keller[267475]:             "lv_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 14:27:52 compute-0 epic_keller[267475]:             "name": "ceph_lv0",
Jan 20 14:27:52 compute-0 epic_keller[267475]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:27:52 compute-0 epic_keller[267475]:             "tags": {
Jan 20 14:27:52 compute-0 epic_keller[267475]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:27:52 compute-0 epic_keller[267475]:                 "ceph.block_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 14:27:52 compute-0 epic_keller[267475]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 14:27:52 compute-0 epic_keller[267475]:                 "ceph.cluster_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 14:27:52 compute-0 epic_keller[267475]:                 "ceph.cluster_name": "ceph",
Jan 20 14:27:52 compute-0 epic_keller[267475]:                 "ceph.crush_device_class": "",
Jan 20 14:27:52 compute-0 epic_keller[267475]:                 "ceph.encrypted": "0",
Jan 20 14:27:52 compute-0 epic_keller[267475]:                 "ceph.osd_fsid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 14:27:52 compute-0 epic_keller[267475]:                 "ceph.osd_id": "0",
Jan 20 14:27:52 compute-0 epic_keller[267475]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 14:27:52 compute-0 epic_keller[267475]:                 "ceph.type": "block",
Jan 20 14:27:52 compute-0 epic_keller[267475]:                 "ceph.vdo": "0"
Jan 20 14:27:52 compute-0 epic_keller[267475]:             },
Jan 20 14:27:52 compute-0 epic_keller[267475]:             "type": "block",
Jan 20 14:27:52 compute-0 epic_keller[267475]:             "vg_name": "ceph_vg0"
Jan 20 14:27:52 compute-0 epic_keller[267475]:         }
Jan 20 14:27:52 compute-0 epic_keller[267475]:     ]
Jan 20 14:27:52 compute-0 epic_keller[267475]: }
Jan 20 14:27:52 compute-0 nova_compute[250018]: 2026-01-20 14:27:52.540 250022 DEBUG nova.storage.rbd_utils [None req-55606195-e2f6-40a0-ad91-bfb1aead077b eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] rbd image 4ee9159e-bf2b-47b7-8568-47fd13815f05_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:27:52 compute-0 nova_compute[250018]: 2026-01-20 14:27:52.543 250022 DEBUG oslo_concurrency.processutils [None req-55606195-e2f6-40a0-ad91-bfb1aead077b eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4ee9159e-bf2b-47b7-8568-47fd13815f05/disk.config 4ee9159e-bf2b-47b7-8568-47fd13815f05_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:27:52 compute-0 systemd[1]: libpod-3adb455122e3c88948d503a9cb73d55d3df503753a05933e40fa64ab39f4381f.scope: Deactivated successfully.
Jan 20 14:27:52 compute-0 podman[267437]: 2026-01-20 14:27:52.552669888 +0000 UTC m=+1.454846539 container died 3adb455122e3c88948d503a9cb73d55d3df503753a05933e40fa64ab39f4381f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_keller, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 14:27:52 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:27:52 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:27:52 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:27:52.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:27:52 compute-0 nova_compute[250018]: 2026-01-20 14:27:52.701 250022 DEBUG oslo_concurrency.processutils [None req-55606195-e2f6-40a0-ad91-bfb1aead077b eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4ee9159e-bf2b-47b7-8568-47fd13815f05/disk.config 4ee9159e-bf2b-47b7-8568-47fd13815f05_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.158s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:27:52 compute-0 nova_compute[250018]: 2026-01-20 14:27:52.702 250022 INFO nova.virt.libvirt.driver [None req-55606195-e2f6-40a0-ad91-bfb1aead077b eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] [instance: 4ee9159e-bf2b-47b7-8568-47fd13815f05] Deleting local config drive /var/lib/nova/instances/4ee9159e-bf2b-47b7-8568-47fd13815f05/disk.config because it was imported into RBD.
Jan 20 14:27:52 compute-0 ceph-mon[74360]: pgmap v1105: 321 pgs: 321 active+clean; 206 MiB data, 434 MiB used, 21 GiB / 21 GiB avail; 5.1 MiB/s rd, 5.9 MiB/s wr, 337 op/s
Jan 20 14:27:52 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/1556740575' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:27:52 compute-0 kernel: tap249be315-41: entered promiscuous mode
Jan 20 14:27:52 compute-0 NetworkManager[48960]: <info>  [1768919272.7611] manager: (tap249be315-41): new Tun device (/org/freedesktop/NetworkManager/Devices/42)
Jan 20 14:27:52 compute-0 nova_compute[250018]: 2026-01-20 14:27:52.761 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:27:52 compute-0 ovn_controller[148666]: 2026-01-20T14:27:52Z|00061|binding|INFO|Claiming lport 249be315-41d6-478d-9d9a-f3251b200e7f for this chassis.
Jan 20 14:27:52 compute-0 ovn_controller[148666]: 2026-01-20T14:27:52Z|00062|binding|INFO|249be315-41d6-478d-9d9a-f3251b200e7f: Claiming fa:16:3e:e2:1c:e2 10.100.0.10
Jan 20 14:27:52 compute-0 nova_compute[250018]: 2026-01-20 14:27:52.772 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:27:52 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:27:52.778 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e2:1c:e2 10.100.0.10'], port_security=['fa:16:3e:e2:1c:e2 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '4ee9159e-bf2b-47b7-8568-47fd13815f05', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-652d7129-cd9b-4229-8d54-211a1946e427', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '944b426a2d4c4ad3a01f0b855ad36509', 'neutron:revision_number': '2', 'neutron:security_group_ids': '4bf82b26-b849-4d35-a448-97009e04bc43', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9eeeeb41-2381-4a49-aecf-33a614ed88ea, chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=249be315-41d6-478d-9d9a-f3251b200e7f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:27:52 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:27:52.779 160071 INFO neutron.agent.ovn.metadata.agent [-] Port 249be315-41d6-478d-9d9a-f3251b200e7f in datapath 652d7129-cd9b-4229-8d54-211a1946e427 bound to our chassis
Jan 20 14:27:52 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:27:52.780 160071 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 652d7129-cd9b-4229-8d54-211a1946e427
Jan 20 14:27:52 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:27:52.791 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[8ca646c5-8b3a-4a9a-8619-b00bb67ad8be]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:27:52 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:27:52.792 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap652d7129-c1 in ovnmeta-652d7129-cd9b-4229-8d54-211a1946e427 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 14:27:52 compute-0 systemd-udevd[267552]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:27:52 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:27:52.793 257604 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap652d7129-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 14:27:52 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:27:52.793 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[571017df-d89d-45c2-a8fc-ee3675c459f0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:27:52 compute-0 systemd-machined[216401]: New machine qemu-9-instance-00000015.
Jan 20 14:27:52 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:27:52.794 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[d5dd9818-7207-4984-a3e4-1057ced1f506]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:27:52 compute-0 NetworkManager[48960]: <info>  [1768919272.8052] device (tap249be315-41): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 14:27:52 compute-0 NetworkManager[48960]: <info>  [1768919272.8058] device (tap249be315-41): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 14:27:52 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:27:52.806 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[cc718015-c3ae-45a0-9697-aaf77885ad63]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:27:52 compute-0 systemd[1]: Started Virtual Machine qemu-9-instance-00000015.
Jan 20 14:27:52 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:27:52.825 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[8bf18817-6e64-44a5-b5f4-5c44146906a9]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:27:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-90bd490175a3ad38af41753bf6510c4fefb6f5443143bc8938dc747a3266e463-merged.mount: Deactivated successfully.
Jan 20 14:27:52 compute-0 ovn_controller[148666]: 2026-01-20T14:27:52Z|00063|binding|INFO|Setting lport 249be315-41d6-478d-9d9a-f3251b200e7f ovn-installed in OVS
Jan 20 14:27:52 compute-0 ovn_controller[148666]: 2026-01-20T14:27:52Z|00064|binding|INFO|Setting lport 249be315-41d6-478d-9d9a-f3251b200e7f up in Southbound
Jan 20 14:27:52 compute-0 nova_compute[250018]: 2026-01-20 14:27:52.843 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:27:52 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:27:52.859 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[4cd7de4b-70b0-4f97-a467-78ae40b1363c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:27:52 compute-0 systemd-udevd[267555]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:27:52 compute-0 NetworkManager[48960]: <info>  [1768919272.8653] manager: (tap652d7129-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/43)
Jan 20 14:27:52 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:27:52.864 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[7279ffaf-4f0d-4bcd-87c3-24d6eea6bf63]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:27:52 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:27:52.893 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[9c349fa7-652b-44fa-84f1-98eb820850b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:27:52 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:27:52.896 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[b5aab69c-c7a3-4375-b020-3e8c1fbb137d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:27:52 compute-0 NetworkManager[48960]: <info>  [1768919272.9189] device (tap652d7129-c0): carrier: link connected
Jan 20 14:27:52 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:27:52.923 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[0b6fe43a-5432-44d8-a495-75f337ee9057]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:27:52 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:27:52.941 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[4b049bbf-e9f9-4ee7-9269-d1168f0929f7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap652d7129-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:27:25:db'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 23], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 521364, 'reachable_time': 43839, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 267584, 'error': None, 'target': 'ovnmeta-652d7129-cd9b-4229-8d54-211a1946e427', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:27:52 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:27:52.958 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[091474a4-b078-46de-8a81-896ce6022bee]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe27:25db'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 521364, 'tstamp': 521364}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 267585, 'error': None, 'target': 'ovnmeta-652d7129-cd9b-4229-8d54-211a1946e427', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:27:52 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:27:52.981 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[ec00b055-c19e-4a3e-a657-dc75dec942a1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap652d7129-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:27:25:db'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 23], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 521364, 'reachable_time': 43839, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 267586, 'error': None, 'target': 'ovnmeta-652d7129-cd9b-4229-8d54-211a1946e427', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:27:53 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:27:53.012 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[b366a584-5b74-40ad-8696-29da911188f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:27:53 compute-0 nova_compute[250018]: 2026-01-20 14:27:53.063 250022 DEBUG oslo_concurrency.lockutils [None req-466ac999-dbbf-45a2-9a60-0b686a463dfc f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Acquiring lock "d726266f-b9a6-406b-ad13-f9db3e0dc6aa-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:27:53 compute-0 nova_compute[250018]: 2026-01-20 14:27:53.064 250022 DEBUG oslo_concurrency.lockutils [None req-466ac999-dbbf-45a2-9a60-0b686a463dfc f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Lock "d726266f-b9a6-406b-ad13-f9db3e0dc6aa-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:27:53 compute-0 nova_compute[250018]: 2026-01-20 14:27:53.064 250022 DEBUG oslo_concurrency.lockutils [None req-466ac999-dbbf-45a2-9a60-0b686a463dfc f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Lock "d726266f-b9a6-406b-ad13-f9db3e0dc6aa-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:27:53 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:27:53.079 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[fc2ab6fc-05cb-4e0c-90cc-09de6e0872c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:27:53 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:27:53.080 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap652d7129-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:27:53 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:27:53.081 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:27:53 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:27:53.081 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap652d7129-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:27:53 compute-0 nova_compute[250018]: 2026-01-20 14:27:53.083 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:27:53 compute-0 kernel: tap652d7129-c0: entered promiscuous mode
Jan 20 14:27:53 compute-0 NetworkManager[48960]: <info>  [1768919273.0845] manager: (tap652d7129-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/44)
Jan 20 14:27:53 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:27:53.087 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap652d7129-c0, col_values=(('external_ids', {'iface-id': '43a00cfd-5c8b-4081-a089-4ad4b2c20237'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:27:53 compute-0 nova_compute[250018]: 2026-01-20 14:27:53.088 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:27:53 compute-0 ovn_controller[148666]: 2026-01-20T14:27:53Z|00065|binding|INFO|Releasing lport 43a00cfd-5c8b-4081-a089-4ad4b2c20237 from this chassis (sb_readonly=0)
Jan 20 14:27:53 compute-0 nova_compute[250018]: 2026-01-20 14:27:53.092 250022 DEBUG oslo_concurrency.lockutils [None req-466ac999-dbbf-45a2-9a60-0b686a463dfc f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:27:53 compute-0 nova_compute[250018]: 2026-01-20 14:27:53.092 250022 DEBUG oslo_concurrency.lockutils [None req-466ac999-dbbf-45a2-9a60-0b686a463dfc f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:27:53 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:27:53.092 160071 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/652d7129-cd9b-4229-8d54-211a1946e427.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/652d7129-cd9b-4229-8d54-211a1946e427.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 14:27:53 compute-0 nova_compute[250018]: 2026-01-20 14:27:53.093 250022 DEBUG oslo_concurrency.lockutils [None req-466ac999-dbbf-45a2-9a60-0b686a463dfc f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:27:53 compute-0 nova_compute[250018]: 2026-01-20 14:27:53.093 250022 DEBUG nova.compute.resource_tracker [None req-466ac999-dbbf-45a2-9a60-0b686a463dfc f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 14:27:53 compute-0 nova_compute[250018]: 2026-01-20 14:27:53.093 250022 DEBUG oslo_concurrency.processutils [None req-466ac999-dbbf-45a2-9a60-0b686a463dfc f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:27:53 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:27:53.094 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[135e4c3b-97ea-40fc-9178-c431c5be514d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:27:53 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:27:53.094 160071 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 14:27:53 compute-0 ovn_metadata_agent[160049]: global
Jan 20 14:27:53 compute-0 ovn_metadata_agent[160049]:     log         /dev/log local0 debug
Jan 20 14:27:53 compute-0 ovn_metadata_agent[160049]:     log-tag     haproxy-metadata-proxy-652d7129-cd9b-4229-8d54-211a1946e427
Jan 20 14:27:53 compute-0 ovn_metadata_agent[160049]:     user        root
Jan 20 14:27:53 compute-0 ovn_metadata_agent[160049]:     group       root
Jan 20 14:27:53 compute-0 ovn_metadata_agent[160049]:     maxconn     1024
Jan 20 14:27:53 compute-0 ovn_metadata_agent[160049]:     pidfile     /var/lib/neutron/external/pids/652d7129-cd9b-4229-8d54-211a1946e427.pid.haproxy
Jan 20 14:27:53 compute-0 ovn_metadata_agent[160049]:     daemon
Jan 20 14:27:53 compute-0 ovn_metadata_agent[160049]: 
Jan 20 14:27:53 compute-0 ovn_metadata_agent[160049]: defaults
Jan 20 14:27:53 compute-0 ovn_metadata_agent[160049]:     log global
Jan 20 14:27:53 compute-0 ovn_metadata_agent[160049]:     mode http
Jan 20 14:27:53 compute-0 ovn_metadata_agent[160049]:     option httplog
Jan 20 14:27:53 compute-0 ovn_metadata_agent[160049]:     option dontlognull
Jan 20 14:27:53 compute-0 ovn_metadata_agent[160049]:     option http-server-close
Jan 20 14:27:53 compute-0 ovn_metadata_agent[160049]:     option forwardfor
Jan 20 14:27:53 compute-0 ovn_metadata_agent[160049]:     retries                 3
Jan 20 14:27:53 compute-0 ovn_metadata_agent[160049]:     timeout http-request    30s
Jan 20 14:27:53 compute-0 ovn_metadata_agent[160049]:     timeout connect         30s
Jan 20 14:27:53 compute-0 ovn_metadata_agent[160049]:     timeout client          32s
Jan 20 14:27:53 compute-0 ovn_metadata_agent[160049]:     timeout server          32s
Jan 20 14:27:53 compute-0 ovn_metadata_agent[160049]:     timeout http-keep-alive 30s
Jan 20 14:27:53 compute-0 ovn_metadata_agent[160049]: 
Jan 20 14:27:53 compute-0 ovn_metadata_agent[160049]: 
Jan 20 14:27:53 compute-0 ovn_metadata_agent[160049]: listen listener
Jan 20 14:27:53 compute-0 ovn_metadata_agent[160049]:     bind 169.254.169.254:80
Jan 20 14:27:53 compute-0 ovn_metadata_agent[160049]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 14:27:53 compute-0 ovn_metadata_agent[160049]:     http-request add-header X-OVN-Network-ID 652d7129-cd9b-4229-8d54-211a1946e427
Jan 20 14:27:53 compute-0 ovn_metadata_agent[160049]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 14:27:53 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:27:53.095 160071 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-652d7129-cd9b-4229-8d54-211a1946e427', 'env', 'PROCESS_TAG=haproxy-652d7129-cd9b-4229-8d54-211a1946e427', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/652d7129-cd9b-4229-8d54-211a1946e427.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 14:27:53 compute-0 nova_compute[250018]: 2026-01-20 14:27:53.119 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:27:53 compute-0 nova_compute[250018]: 2026-01-20 14:27:53.123 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:27:53 compute-0 nova_compute[250018]: 2026-01-20 14:27:53.256 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768919273.2561853, 4ee9159e-bf2b-47b7-8568-47fd13815f05 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:27:53 compute-0 nova_compute[250018]: 2026-01-20 14:27:53.257 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 4ee9159e-bf2b-47b7-8568-47fd13815f05] VM Started (Lifecycle Event)
Jan 20 14:27:53 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1106: 321 pgs: 321 active+clean; 208 MiB data, 432 MiB used, 21 GiB / 21 GiB avail; 5.4 MiB/s rd, 5.7 MiB/s wr, 349 op/s
Jan 20 14:27:53 compute-0 nova_compute[250018]: 2026-01-20 14:27:53.328 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 4ee9159e-bf2b-47b7-8568-47fd13815f05] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:27:53 compute-0 nova_compute[250018]: 2026-01-20 14:27:53.332 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768919273.2570708, 4ee9159e-bf2b-47b7-8568-47fd13815f05 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:27:53 compute-0 nova_compute[250018]: 2026-01-20 14:27:53.332 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 4ee9159e-bf2b-47b7-8568-47fd13815f05] VM Paused (Lifecycle Event)
Jan 20 14:27:53 compute-0 nova_compute[250018]: 2026-01-20 14:27:53.355 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 4ee9159e-bf2b-47b7-8568-47fd13815f05] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:27:53 compute-0 nova_compute[250018]: 2026-01-20 14:27:53.358 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 4ee9159e-bf2b-47b7-8568-47fd13815f05] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:27:53 compute-0 nova_compute[250018]: 2026-01-20 14:27:53.380 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 4ee9159e-bf2b-47b7-8568-47fd13815f05] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:27:53 compute-0 nova_compute[250018]: 2026-01-20 14:27:53.403 250022 DEBUG nova.compute.manager [req-38c9d5ee-3a21-4c99-9552-30f871f625ec req-b12cf5b3-618a-4f2f-a679-ade0e3c41df3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 4ee9159e-bf2b-47b7-8568-47fd13815f05] Received event network-vif-plugged-249be315-41d6-478d-9d9a-f3251b200e7f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:27:53 compute-0 nova_compute[250018]: 2026-01-20 14:27:53.404 250022 DEBUG oslo_concurrency.lockutils [req-38c9d5ee-3a21-4c99-9552-30f871f625ec req-b12cf5b3-618a-4f2f-a679-ade0e3c41df3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "4ee9159e-bf2b-47b7-8568-47fd13815f05-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:27:53 compute-0 nova_compute[250018]: 2026-01-20 14:27:53.404 250022 DEBUG oslo_concurrency.lockutils [req-38c9d5ee-3a21-4c99-9552-30f871f625ec req-b12cf5b3-618a-4f2f-a679-ade0e3c41df3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "4ee9159e-bf2b-47b7-8568-47fd13815f05-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:27:53 compute-0 nova_compute[250018]: 2026-01-20 14:27:53.405 250022 DEBUG oslo_concurrency.lockutils [req-38c9d5ee-3a21-4c99-9552-30f871f625ec req-b12cf5b3-618a-4f2f-a679-ade0e3c41df3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "4ee9159e-bf2b-47b7-8568-47fd13815f05-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:27:53 compute-0 nova_compute[250018]: 2026-01-20 14:27:53.405 250022 DEBUG nova.compute.manager [req-38c9d5ee-3a21-4c99-9552-30f871f625ec req-b12cf5b3-618a-4f2f-a679-ade0e3c41df3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 4ee9159e-bf2b-47b7-8568-47fd13815f05] Processing event network-vif-plugged-249be315-41d6-478d-9d9a-f3251b200e7f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 20 14:27:53 compute-0 nova_compute[250018]: 2026-01-20 14:27:53.406 250022 DEBUG nova.compute.manager [None req-55606195-e2f6-40a0-ad91-bfb1aead077b eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] [instance: 4ee9159e-bf2b-47b7-8568-47fd13815f05] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 14:27:53 compute-0 nova_compute[250018]: 2026-01-20 14:27:53.410 250022 DEBUG nova.virt.libvirt.driver [None req-55606195-e2f6-40a0-ad91-bfb1aead077b eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] [instance: 4ee9159e-bf2b-47b7-8568-47fd13815f05] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 14:27:53 compute-0 nova_compute[250018]: 2026-01-20 14:27:53.411 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768919273.410155, 4ee9159e-bf2b-47b7-8568-47fd13815f05 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:27:53 compute-0 nova_compute[250018]: 2026-01-20 14:27:53.411 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 4ee9159e-bf2b-47b7-8568-47fd13815f05] VM Resumed (Lifecycle Event)
Jan 20 14:27:53 compute-0 nova_compute[250018]: 2026-01-20 14:27:53.415 250022 INFO nova.virt.libvirt.driver [-] [instance: 4ee9159e-bf2b-47b7-8568-47fd13815f05] Instance spawned successfully.
Jan 20 14:27:53 compute-0 nova_compute[250018]: 2026-01-20 14:27:53.415 250022 DEBUG nova.virt.libvirt.driver [None req-55606195-e2f6-40a0-ad91-bfb1aead077b eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] [instance: 4ee9159e-bf2b-47b7-8568-47fd13815f05] Attempting to register defaults for the following image properties: ['hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 14:27:53 compute-0 nova_compute[250018]: 2026-01-20 14:27:53.423 250022 DEBUG nova.virt.libvirt.driver [None req-55606195-e2f6-40a0-ad91-bfb1aead077b eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] [instance: 4ee9159e-bf2b-47b7-8568-47fd13815f05] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:27:53 compute-0 nova_compute[250018]: 2026-01-20 14:27:53.424 250022 DEBUG nova.virt.libvirt.driver [None req-55606195-e2f6-40a0-ad91-bfb1aead077b eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] [instance: 4ee9159e-bf2b-47b7-8568-47fd13815f05] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:27:53 compute-0 nova_compute[250018]: 2026-01-20 14:27:53.424 250022 DEBUG nova.virt.libvirt.driver [None req-55606195-e2f6-40a0-ad91-bfb1aead077b eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] [instance: 4ee9159e-bf2b-47b7-8568-47fd13815f05] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:27:53 compute-0 nova_compute[250018]: 2026-01-20 14:27:53.424 250022 DEBUG nova.virt.libvirt.driver [None req-55606195-e2f6-40a0-ad91-bfb1aead077b eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] [instance: 4ee9159e-bf2b-47b7-8568-47fd13815f05] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:27:53 compute-0 nova_compute[250018]: 2026-01-20 14:27:53.432 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 4ee9159e-bf2b-47b7-8568-47fd13815f05] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:27:53 compute-0 nova_compute[250018]: 2026-01-20 14:27:53.434 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 4ee9159e-bf2b-47b7-8568-47fd13815f05] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:27:53 compute-0 nova_compute[250018]: 2026-01-20 14:27:53.560 250022 DEBUG nova.network.neutron [req-772b337f-31ec-420f-aaa6-b94e3747bad7 req-b59b7f8b-5ea4-447a-9e3a-d11d70f7e728 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 4ee9159e-bf2b-47b7-8568-47fd13815f05] Updated VIF entry in instance network info cache for port 249be315-41d6-478d-9d9a-f3251b200e7f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 14:27:53 compute-0 nova_compute[250018]: 2026-01-20 14:27:53.561 250022 DEBUG nova.network.neutron [req-772b337f-31ec-420f-aaa6-b94e3747bad7 req-b59b7f8b-5ea4-447a-9e3a-d11d70f7e728 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 4ee9159e-bf2b-47b7-8568-47fd13815f05] Updating instance_info_cache with network_info: [{"id": "249be315-41d6-478d-9d9a-f3251b200e7f", "address": "fa:16:3e:e2:1c:e2", "network": {"id": "652d7129-cd9b-4229-8d54-211a1946e427", "bridge": "br-int", "label": "tempest-AttachSCSIVolumeTestJSON-427094000-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "944b426a2d4c4ad3a01f0b855ad36509", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap249be315-41", "ovs_interfaceid": "249be315-41d6-478d-9d9a-f3251b200e7f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:27:53 compute-0 nova_compute[250018]: 2026-01-20 14:27:53.563 250022 DEBUG oslo_concurrency.processutils [None req-466ac999-dbbf-45a2-9a60-0b686a463dfc f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:27:53 compute-0 nova_compute[250018]: 2026-01-20 14:27:53.592 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 4ee9159e-bf2b-47b7-8568-47fd13815f05] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:27:53 compute-0 podman[267437]: 2026-01-20 14:27:53.601654566 +0000 UTC m=+2.503831257 container remove 3adb455122e3c88948d503a9cb73d55d3df503753a05933e40fa64ab39f4381f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_keller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 20 14:27:53 compute-0 nova_compute[250018]: 2026-01-20 14:27:53.609 250022 DEBUG oslo_concurrency.lockutils [req-772b337f-31ec-420f-aaa6-b94e3747bad7 req-b59b7f8b-5ea4-447a-9e3a-d11d70f7e728 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-4ee9159e-bf2b-47b7-8568-47fd13815f05" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:27:53 compute-0 systemd[1]: libpod-conmon-3adb455122e3c88948d503a9cb73d55d3df503753a05933e40fa64ab39f4381f.scope: Deactivated successfully.
Jan 20 14:27:53 compute-0 nova_compute[250018]: 2026-01-20 14:27:53.617 250022 INFO nova.compute.manager [None req-55606195-e2f6-40a0-ad91-bfb1aead077b eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] [instance: 4ee9159e-bf2b-47b7-8568-47fd13815f05] Took 8.68 seconds to spawn the instance on the hypervisor.
Jan 20 14:27:53 compute-0 nova_compute[250018]: 2026-01-20 14:27:53.618 250022 DEBUG nova.compute.manager [None req-55606195-e2f6-40a0-ad91-bfb1aead077b eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] [instance: 4ee9159e-bf2b-47b7-8568-47fd13815f05] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:27:53 compute-0 sudo[267291]: pam_unix(sudo:session): session closed for user root
Jan 20 14:27:53 compute-0 nova_compute[250018]: 2026-01-20 14:27:53.637 250022 DEBUG nova.virt.libvirt.driver [None req-466ac999-dbbf-45a2-9a60-0b686a463dfc f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] skipping disk for instance-00000015 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:27:53 compute-0 nova_compute[250018]: 2026-01-20 14:27:53.638 250022 DEBUG nova.virt.libvirt.driver [None req-466ac999-dbbf-45a2-9a60-0b686a463dfc f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] skipping disk for instance-00000015 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:27:53 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:27:53 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:27:53 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:27:53.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:27:53 compute-0 nova_compute[250018]: 2026-01-20 14:27:53.687 250022 INFO nova.compute.manager [None req-55606195-e2f6-40a0-ad91-bfb1aead077b eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] [instance: 4ee9159e-bf2b-47b7-8568-47fd13815f05] Took 9.98 seconds to build instance.
Jan 20 14:27:53 compute-0 nova_compute[250018]: 2026-01-20 14:27:53.701 250022 DEBUG oslo_concurrency.lockutils [None req-55606195-e2f6-40a0-ad91-bfb1aead077b eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] Lock "4ee9159e-bf2b-47b7-8568-47fd13815f05" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.070s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:27:53 compute-0 sudo[267675]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:27:53 compute-0 sudo[267675]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:27:53 compute-0 sudo[267675]: pam_unix(sudo:session): session closed for user root
Jan 20 14:27:53 compute-0 sudo[267714]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:27:53 compute-0 sudo[267714]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:27:53 compute-0 sudo[267714]: pam_unix(sudo:session): session closed for user root
Jan 20 14:27:53 compute-0 ceph-mon[74360]: pgmap v1106: 321 pgs: 321 active+clean; 208 MiB data, 432 MiB used, 21 GiB / 21 GiB avail; 5.4 MiB/s rd, 5.7 MiB/s wr, 349 op/s
Jan 20 14:27:53 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/4134874017' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:27:53 compute-0 podman[267707]: 2026-01-20 14:27:53.770617908 +0000 UTC m=+0.032414542 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 14:27:53 compute-0 nova_compute[250018]: 2026-01-20 14:27:53.864 250022 WARNING nova.virt.libvirt.driver [None req-466ac999-dbbf-45a2-9a60-0b686a463dfc f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:27:53 compute-0 sudo[267745]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:27:53 compute-0 nova_compute[250018]: 2026-01-20 14:27:53.866 250022 DEBUG nova.compute.resource_tracker [None req-466ac999-dbbf-45a2-9a60-0b686a463dfc f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4579MB free_disk=20.914661407470703GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 14:27:53 compute-0 nova_compute[250018]: 2026-01-20 14:27:53.866 250022 DEBUG oslo_concurrency.lockutils [None req-466ac999-dbbf-45a2-9a60-0b686a463dfc f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:27:53 compute-0 nova_compute[250018]: 2026-01-20 14:27:53.866 250022 DEBUG oslo_concurrency.lockutils [None req-466ac999-dbbf-45a2-9a60-0b686a463dfc f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:27:53 compute-0 sudo[267745]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:27:53 compute-0 sudo[267745]: pam_unix(sudo:session): session closed for user root
Jan 20 14:27:53 compute-0 sudo[267770]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- raw list --format json
Jan 20 14:27:53 compute-0 sudo[267770]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:27:53 compute-0 nova_compute[250018]: 2026-01-20 14:27:53.931 250022 DEBUG nova.compute.resource_tracker [None req-466ac999-dbbf-45a2-9a60-0b686a463dfc f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Migration for instance d726266f-b9a6-406b-ad13-f9db3e0dc6aa refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903
Jan 20 14:27:53 compute-0 nova_compute[250018]: 2026-01-20 14:27:53.958 250022 DEBUG nova.compute.resource_tracker [None req-466ac999-dbbf-45a2-9a60-0b686a463dfc f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] Skipping migration as instance is neither resizing nor live-migrating. _update_usage_from_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1491
Jan 20 14:27:53 compute-0 nova_compute[250018]: 2026-01-20 14:27:53.987 250022 DEBUG nova.compute.resource_tracker [None req-466ac999-dbbf-45a2-9a60-0b686a463dfc f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Migration 1e151de2-77f6-4ade-bff0-98b8ffeabfe4 is active on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640
Jan 20 14:27:53 compute-0 nova_compute[250018]: 2026-01-20 14:27:53.987 250022 DEBUG nova.compute.resource_tracker [None req-466ac999-dbbf-45a2-9a60-0b686a463dfc f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Instance 4ee9159e-bf2b-47b7-8568-47fd13815f05 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 14:27:53 compute-0 nova_compute[250018]: 2026-01-20 14:27:53.987 250022 DEBUG nova.compute.resource_tracker [None req-466ac999-dbbf-45a2-9a60-0b686a463dfc f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 14:27:53 compute-0 nova_compute[250018]: 2026-01-20 14:27:53.988 250022 DEBUG nova.compute.resource_tracker [None req-466ac999-dbbf-45a2-9a60-0b686a463dfc f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 14:27:54 compute-0 nova_compute[250018]: 2026-01-20 14:27:54.002 250022 DEBUG nova.scheduler.client.report [None req-466ac999-dbbf-45a2-9a60-0b686a463dfc f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Refreshing inventories for resource provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 20 14:27:54 compute-0 nova_compute[250018]: 2026-01-20 14:27:54.016 250022 DEBUG nova.scheduler.client.report [None req-466ac999-dbbf-45a2-9a60-0b686a463dfc f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Updating ProviderTree inventory for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 20 14:27:54 compute-0 nova_compute[250018]: 2026-01-20 14:27:54.016 250022 DEBUG nova.compute.provider_tree [None req-466ac999-dbbf-45a2-9a60-0b686a463dfc f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Updating inventory in ProviderTree for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 20 14:27:54 compute-0 nova_compute[250018]: 2026-01-20 14:27:54.028 250022 DEBUG nova.scheduler.client.report [None req-466ac999-dbbf-45a2-9a60-0b686a463dfc f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Refreshing aggregate associations for resource provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 20 14:27:54 compute-0 nova_compute[250018]: 2026-01-20 14:27:54.052 250022 DEBUG nova.scheduler.client.report [None req-466ac999-dbbf-45a2-9a60-0b686a463dfc f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Refreshing trait associations for resource provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed, traits: COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_TRUSTED_CERTS,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NODE,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_RESCUE_BFV,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SSE2,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VOLUME_EXTEND,COMPUTE_ACCELERATORS,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_SATA,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE41,HW_CPU_X86_MMX,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_IDE,COMPUTE_DEVICE_TAGGING _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 20 14:27:54 compute-0 podman[267707]: 2026-01-20 14:27:54.056960815 +0000 UTC m=+0.318757429 container create 58ef78b08fdf4e79f528172a82052cb9dd1e724dfa4ce29377bbc38b842da7de (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-652d7129-cd9b-4229-8d54-211a1946e427, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Jan 20 14:27:54 compute-0 nova_compute[250018]: 2026-01-20 14:27:54.105 250022 DEBUG oslo_concurrency.processutils [None req-466ac999-dbbf-45a2-9a60-0b686a463dfc f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:27:54 compute-0 systemd[1]: Started libpod-conmon-58ef78b08fdf4e79f528172a82052cb9dd1e724dfa4ce29377bbc38b842da7de.scope.
Jan 20 14:27:54 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:27:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4439b8b3aa10798cce22576b290ed1c19f760f88b30f60c6330c5387a21d171a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 14:27:54 compute-0 podman[267707]: 2026-01-20 14:27:54.264365601 +0000 UTC m=+0.526162245 container init 58ef78b08fdf4e79f528172a82052cb9dd1e724dfa4ce29377bbc38b842da7de (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-652d7129-cd9b-4229-8d54-211a1946e427, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true)
Jan 20 14:27:54 compute-0 podman[267707]: 2026-01-20 14:27:54.275026597 +0000 UTC m=+0.536823201 container start 58ef78b08fdf4e79f528172a82052cb9dd1e724dfa4ce29377bbc38b842da7de (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-652d7129-cd9b-4229-8d54-211a1946e427, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 20 14:27:54 compute-0 neutron-haproxy-ovnmeta-652d7129-cd9b-4229-8d54-211a1946e427[267809]: [NOTICE]   (267843) : New worker (267848) forked
Jan 20 14:27:54 compute-0 neutron-haproxy-ovnmeta-652d7129-cd9b-4229-8d54-211a1946e427[267809]: [NOTICE]   (267843) : Loading success.
Jan 20 14:27:54 compute-0 podman[267869]: 2026-01-20 14:27:54.464397967 +0000 UTC m=+0.050065976 container create 895fa79ae69ca348ccb217714ee06fd45ba4a38915aa7a10ec99da7b12455c6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_morse, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 20 14:27:54 compute-0 podman[267869]: 2026-01-20 14:27:54.435758628 +0000 UTC m=+0.021426657 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:27:54 compute-0 systemd[1]: Started libpod-conmon-895fa79ae69ca348ccb217714ee06fd45ba4a38915aa7a10ec99da7b12455c6e.scope.
Jan 20 14:27:54 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:27:54 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2932853683' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:27:54 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:27:54 compute-0 nova_compute[250018]: 2026-01-20 14:27:54.581 250022 DEBUG oslo_concurrency.processutils [None req-466ac999-dbbf-45a2-9a60-0b686a463dfc f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:27:54 compute-0 nova_compute[250018]: 2026-01-20 14:27:54.588 250022 DEBUG nova.compute.provider_tree [None req-466ac999-dbbf-45a2-9a60-0b686a463dfc f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:27:54 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:27:54 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:27:54 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:27:54.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:27:54 compute-0 nova_compute[250018]: 2026-01-20 14:27:54.602 250022 DEBUG nova.scheduler.client.report [None req-466ac999-dbbf-45a2-9a60-0b686a463dfc f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:27:54 compute-0 nova_compute[250018]: 2026-01-20 14:27:54.621 250022 DEBUG nova.compute.resource_tracker [None req-466ac999-dbbf-45a2-9a60-0b686a463dfc f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 14:27:54 compute-0 nova_compute[250018]: 2026-01-20 14:27:54.622 250022 DEBUG oslo_concurrency.lockutils [None req-466ac999-dbbf-45a2-9a60-0b686a463dfc f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.756s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:27:54 compute-0 nova_compute[250018]: 2026-01-20 14:27:54.626 250022 INFO nova.compute.manager [None req-466ac999-dbbf-45a2-9a60-0b686a463dfc f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] Migrating instance to compute-1.ctlplane.example.com finished successfully.
Jan 20 14:27:54 compute-0 podman[267869]: 2026-01-20 14:27:54.647290164 +0000 UTC m=+0.232958213 container init 895fa79ae69ca348ccb217714ee06fd45ba4a38915aa7a10ec99da7b12455c6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_morse, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 20 14:27:54 compute-0 podman[267869]: 2026-01-20 14:27:54.656729948 +0000 UTC m=+0.242397957 container start 895fa79ae69ca348ccb217714ee06fd45ba4a38915aa7a10ec99da7b12455c6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_morse, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 20 14:27:54 compute-0 systemd[1]: libpod-895fa79ae69ca348ccb217714ee06fd45ba4a38915aa7a10ec99da7b12455c6e.scope: Deactivated successfully.
Jan 20 14:27:54 compute-0 bold_morse[267885]: 167 167
Jan 20 14:27:54 compute-0 conmon[267885]: conmon 895fa79ae69ca348ccb2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-895fa79ae69ca348ccb217714ee06fd45ba4a38915aa7a10ec99da7b12455c6e.scope/container/memory.events
Jan 20 14:27:54 compute-0 podman[267869]: 2026-01-20 14:27:54.66575049 +0000 UTC m=+0.251418529 container attach 895fa79ae69ca348ccb217714ee06fd45ba4a38915aa7a10ec99da7b12455c6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_morse, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 20 14:27:54 compute-0 podman[267869]: 2026-01-20 14:27:54.666526761 +0000 UTC m=+0.252194770 container died 895fa79ae69ca348ccb217714ee06fd45ba4a38915aa7a10ec99da7b12455c6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_morse, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 20 14:27:54 compute-0 nova_compute[250018]: 2026-01-20 14:27:54.768 250022 INFO nova.scheduler.client.report [None req-466ac999-dbbf-45a2-9a60-0b686a463dfc f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Deleted allocation for migration 1e151de2-77f6-4ade-bff0-98b8ffeabfe4
Jan 20 14:27:54 compute-0 nova_compute[250018]: 2026-01-20 14:27:54.768 250022 DEBUG nova.virt.libvirt.driver [None req-466ac999-dbbf-45a2-9a60-0b686a463dfc f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] Live migration monitoring is all done _live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10662
Jan 20 14:27:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-8911d058b41139559ef3591a096a16a2fec69e11b444e11f8e6692b4b60f46fd-merged.mount: Deactivated successfully.
Jan 20 14:27:54 compute-0 podman[267869]: 2026-01-20 14:27:54.91499865 +0000 UTC m=+0.500666659 container remove 895fa79ae69ca348ccb217714ee06fd45ba4a38915aa7a10ec99da7b12455c6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_morse, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:27:54 compute-0 systemd[1]: libpod-conmon-895fa79ae69ca348ccb217714ee06fd45ba4a38915aa7a10ec99da7b12455c6e.scope: Deactivated successfully.
Jan 20 14:27:55 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/2932853683' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:27:55 compute-0 podman[267913]: 2026-01-20 14:27:55.096818508 +0000 UTC m=+0.038817655 container create bc683b37220a4408708683c6f6520f9cf4f17c883e027d126d99b7531cb9bd94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_banach, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 20 14:27:55 compute-0 systemd[1]: Started libpod-conmon-bc683b37220a4408708683c6f6520f9cf4f17c883e027d126d99b7531cb9bd94.scope.
Jan 20 14:27:55 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:27:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80f5547a8f770537805e1cef95f98f0f1f2032b11b9a948952ece4b200836fbb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:27:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80f5547a8f770537805e1cef95f98f0f1f2032b11b9a948952ece4b200836fbb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:27:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80f5547a8f770537805e1cef95f98f0f1f2032b11b9a948952ece4b200836fbb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:27:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80f5547a8f770537805e1cef95f98f0f1f2032b11b9a948952ece4b200836fbb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:27:55 compute-0 podman[267913]: 2026-01-20 14:27:55.080593522 +0000 UTC m=+0.022592699 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:27:55 compute-0 podman[267913]: 2026-01-20 14:27:55.22191181 +0000 UTC m=+0.163910977 container init bc683b37220a4408708683c6f6520f9cf4f17c883e027d126d99b7531cb9bd94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_banach, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:27:55 compute-0 podman[267913]: 2026-01-20 14:27:55.23044822 +0000 UTC m=+0.172447367 container start bc683b37220a4408708683c6f6520f9cf4f17c883e027d126d99b7531cb9bd94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_banach, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef)
Jan 20 14:27:55 compute-0 podman[267913]: 2026-01-20 14:27:55.250702124 +0000 UTC m=+0.192701302 container attach bc683b37220a4408708683c6f6520f9cf4f17c883e027d126d99b7531cb9bd94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_banach, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef)
Jan 20 14:27:55 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1107: 321 pgs: 321 active+clean; 187 MiB data, 422 MiB used, 21 GiB / 21 GiB avail; 4.0 MiB/s rd, 5.7 MiB/s wr, 279 op/s
Jan 20 14:27:55 compute-0 nova_compute[250018]: 2026-01-20 14:27:55.307 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:27:55 compute-0 nova_compute[250018]: 2026-01-20 14:27:55.493 250022 DEBUG nova.compute.manager [req-94235112-0b9c-41fa-a713-97bba472d3ca req-77a7aa0d-09a6-4c7d-9978-56ad5298ef88 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 4ee9159e-bf2b-47b7-8568-47fd13815f05] Received event network-vif-plugged-249be315-41d6-478d-9d9a-f3251b200e7f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:27:55 compute-0 nova_compute[250018]: 2026-01-20 14:27:55.493 250022 DEBUG oslo_concurrency.lockutils [req-94235112-0b9c-41fa-a713-97bba472d3ca req-77a7aa0d-09a6-4c7d-9978-56ad5298ef88 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "4ee9159e-bf2b-47b7-8568-47fd13815f05-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:27:55 compute-0 nova_compute[250018]: 2026-01-20 14:27:55.493 250022 DEBUG oslo_concurrency.lockutils [req-94235112-0b9c-41fa-a713-97bba472d3ca req-77a7aa0d-09a6-4c7d-9978-56ad5298ef88 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "4ee9159e-bf2b-47b7-8568-47fd13815f05-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:27:55 compute-0 nova_compute[250018]: 2026-01-20 14:27:55.493 250022 DEBUG oslo_concurrency.lockutils [req-94235112-0b9c-41fa-a713-97bba472d3ca req-77a7aa0d-09a6-4c7d-9978-56ad5298ef88 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "4ee9159e-bf2b-47b7-8568-47fd13815f05-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:27:55 compute-0 nova_compute[250018]: 2026-01-20 14:27:55.493 250022 DEBUG nova.compute.manager [req-94235112-0b9c-41fa-a713-97bba472d3ca req-77a7aa0d-09a6-4c7d-9978-56ad5298ef88 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 4ee9159e-bf2b-47b7-8568-47fd13815f05] No waiting events found dispatching network-vif-plugged-249be315-41d6-478d-9d9a-f3251b200e7f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:27:55 compute-0 nova_compute[250018]: 2026-01-20 14:27:55.494 250022 WARNING nova.compute.manager [req-94235112-0b9c-41fa-a713-97bba472d3ca req-77a7aa0d-09a6-4c7d-9978-56ad5298ef88 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 4ee9159e-bf2b-47b7-8568-47fd13815f05] Received unexpected event network-vif-plugged-249be315-41d6-478d-9d9a-f3251b200e7f for instance with vm_state active and task_state None.
Jan 20 14:27:55 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:27:55 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:27:55 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:27:55.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:27:55 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:27:56 compute-0 dazzling_banach[267930]: {
Jan 20 14:27:56 compute-0 dazzling_banach[267930]:     "afc3740a-ccee-46f8-b343-22648cc89376": {
Jan 20 14:27:56 compute-0 dazzling_banach[267930]:         "ceph_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 14:27:56 compute-0 dazzling_banach[267930]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 20 14:27:56 compute-0 dazzling_banach[267930]:         "osd_id": 0,
Jan 20 14:27:56 compute-0 dazzling_banach[267930]:         "osd_uuid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 14:27:56 compute-0 dazzling_banach[267930]:         "type": "bluestore"
Jan 20 14:27:56 compute-0 dazzling_banach[267930]:     }
Jan 20 14:27:56 compute-0 dazzling_banach[267930]: }
Jan 20 14:27:56 compute-0 nova_compute[250018]: 2026-01-20 14:27:56.100 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:27:56 compute-0 systemd[1]: libpod-bc683b37220a4408708683c6f6520f9cf4f17c883e027d126d99b7531cb9bd94.scope: Deactivated successfully.
Jan 20 14:27:56 compute-0 podman[267913]: 2026-01-20 14:27:56.109534531 +0000 UTC m=+1.051533678 container died bc683b37220a4408708683c6f6520f9cf4f17c883e027d126d99b7531cb9bd94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_banach, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 20 14:27:56 compute-0 ceph-mon[74360]: pgmap v1107: 321 pgs: 321 active+clean; 187 MiB data, 422 MiB used, 21 GiB / 21 GiB avail; 4.0 MiB/s rd, 5.7 MiB/s wr, 279 op/s
Jan 20 14:27:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-80f5547a8f770537805e1cef95f98f0f1f2032b11b9a948952ece4b200836fbb-merged.mount: Deactivated successfully.
Jan 20 14:27:56 compute-0 podman[267913]: 2026-01-20 14:27:56.428810423 +0000 UTC m=+1.370809570 container remove bc683b37220a4408708683c6f6520f9cf4f17c883e027d126d99b7531cb9bd94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_banach, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:27:56 compute-0 systemd[1]: libpod-conmon-bc683b37220a4408708683c6f6520f9cf4f17c883e027d126d99b7531cb9bd94.scope: Deactivated successfully.
Jan 20 14:27:56 compute-0 sudo[267770]: pam_unix(sudo:session): session closed for user root
Jan 20 14:27:56 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 14:27:56 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:27:56 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 14:27:56 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:27:56 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev b2d5ca9f-826d-439f-9ed2-3b9f845f7022 does not exist
Jan 20 14:27:56 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 4c18fff9-6cb5-4a15-934b-b11ba192db76 does not exist
Jan 20 14:27:56 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev f4403761-bae6-452c-986f-f62ee0cdd31c does not exist
Jan 20 14:27:56 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:27:56 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:27:56 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:27:56.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:27:56 compute-0 sudo[267963]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:27:56 compute-0 sudo[267963]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:27:56 compute-0 sudo[267963]: pam_unix(sudo:session): session closed for user root
Jan 20 14:27:56 compute-0 sudo[267988]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 14:27:56 compute-0 sudo[267988]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:27:56 compute-0 sudo[267988]: pam_unix(sudo:session): session closed for user root
Jan 20 14:27:57 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1108: 321 pgs: 321 active+clean; 188 MiB data, 409 MiB used, 21 GiB / 21 GiB avail; 5.6 MiB/s rd, 5.0 MiB/s wr, 335 op/s
Jan 20 14:27:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 14:27:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 14:27:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:27:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:27:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:27:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:27:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:27:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:27:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:27:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:27:57 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:27:57 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:27:57 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:27:57 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:27:57 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:27:57.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:27:57 compute-0 NetworkManager[48960]: <info>  [1768919277.9198] manager: (patch-provnet-b62c391b-f7a3-4a38-a0df-72ac0383ca74-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/45)
Jan 20 14:27:57 compute-0 nova_compute[250018]: 2026-01-20 14:27:57.918 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:27:57 compute-0 NetworkManager[48960]: <info>  [1768919277.9214] manager: (patch-br-int-to-provnet-b62c391b-f7a3-4a38-a0df-72ac0383ca74): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/46)
Jan 20 14:27:58 compute-0 nova_compute[250018]: 2026-01-20 14:27:58.121 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:27:58 compute-0 ovn_controller[148666]: 2026-01-20T14:27:58Z|00066|binding|INFO|Releasing lport 43a00cfd-5c8b-4081-a089-4ad4b2c20237 from this chassis (sb_readonly=0)
Jan 20 14:27:58 compute-0 nova_compute[250018]: 2026-01-20 14:27:58.140 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:27:58 compute-0 nova_compute[250018]: 2026-01-20 14:27:58.414 250022 DEBUG nova.compute.manager [req-c3ee7df0-17a3-4c7f-b759-2f1c4f42fe2c req-ea0e7489-06e7-4ef5-a203-1cc1492d1c6c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 4ee9159e-bf2b-47b7-8568-47fd13815f05] Received event network-changed-249be315-41d6-478d-9d9a-f3251b200e7f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:27:58 compute-0 nova_compute[250018]: 2026-01-20 14:27:58.414 250022 DEBUG nova.compute.manager [req-c3ee7df0-17a3-4c7f-b759-2f1c4f42fe2c req-ea0e7489-06e7-4ef5-a203-1cc1492d1c6c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 4ee9159e-bf2b-47b7-8568-47fd13815f05] Refreshing instance network info cache due to event network-changed-249be315-41d6-478d-9d9a-f3251b200e7f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:27:58 compute-0 nova_compute[250018]: 2026-01-20 14:27:58.415 250022 DEBUG oslo_concurrency.lockutils [req-c3ee7df0-17a3-4c7f-b759-2f1c4f42fe2c req-ea0e7489-06e7-4ef5-a203-1cc1492d1c6c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-4ee9159e-bf2b-47b7-8568-47fd13815f05" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:27:58 compute-0 nova_compute[250018]: 2026-01-20 14:27:58.415 250022 DEBUG oslo_concurrency.lockutils [req-c3ee7df0-17a3-4c7f-b759-2f1c4f42fe2c req-ea0e7489-06e7-4ef5-a203-1cc1492d1c6c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-4ee9159e-bf2b-47b7-8568-47fd13815f05" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:27:58 compute-0 nova_compute[250018]: 2026-01-20 14:27:58.415 250022 DEBUG nova.network.neutron [req-c3ee7df0-17a3-4c7f-b759-2f1c4f42fe2c req-ea0e7489-06e7-4ef5-a203-1cc1492d1c6c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 4ee9159e-bf2b-47b7-8568-47fd13815f05] Refreshing network info cache for port 249be315-41d6-478d-9d9a-f3251b200e7f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:27:58 compute-0 sudo[268015]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:27:58 compute-0 sudo[268015]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:27:58 compute-0 sudo[268015]: pam_unix(sudo:session): session closed for user root
Jan 20 14:27:58 compute-0 sudo[268041]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:27:58 compute-0 sudo[268041]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:27:58 compute-0 sudo[268041]: pam_unix(sudo:session): session closed for user root
Jan 20 14:27:58 compute-0 podman[268039]: 2026-01-20 14:27:58.560117926 +0000 UTC m=+0.078213934 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 20 14:27:58 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:27:58 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:27:58 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:27:58.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:27:58 compute-0 podman[268083]: 2026-01-20 14:27:58.643329722 +0000 UTC m=+0.087198745 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 20 14:27:58 compute-0 ceph-mon[74360]: pgmap v1108: 321 pgs: 321 active+clean; 188 MiB data, 409 MiB used, 21 GiB / 21 GiB avail; 5.6 MiB/s rd, 5.0 MiB/s wr, 335 op/s
Jan 20 14:27:59 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1109: 321 pgs: 321 active+clean; 188 MiB data, 409 MiB used, 21 GiB / 21 GiB avail; 4.8 MiB/s rd, 2.6 MiB/s wr, 213 op/s
Jan 20 14:27:59 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:27:59 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:27:59 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:27:59.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:27:59 compute-0 nova_compute[250018]: 2026-01-20 14:27:59.738 250022 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768919264.7361636, d726266f-b9a6-406b-ad13-f9db3e0dc6aa => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:27:59 compute-0 nova_compute[250018]: 2026-01-20 14:27:59.739 250022 INFO nova.compute.manager [-] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] VM Stopped (Lifecycle Event)
Jan 20 14:27:59 compute-0 nova_compute[250018]: 2026-01-20 14:27:59.764 250022 DEBUG nova.compute.manager [None req-9f60a640-f436-4ec6-a86d-90c751a8e681 - - - - - -] [instance: d726266f-b9a6-406b-ad13-f9db3e0dc6aa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:27:59 compute-0 ceph-mon[74360]: pgmap v1109: 321 pgs: 321 active+clean; 188 MiB data, 409 MiB used, 21 GiB / 21 GiB avail; 4.8 MiB/s rd, 2.6 MiB/s wr, 213 op/s
Jan 20 14:28:00 compute-0 nova_compute[250018]: 2026-01-20 14:28:00.308 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:28:00 compute-0 nova_compute[250018]: 2026-01-20 14:28:00.406 250022 DEBUG nova.network.neutron [req-c3ee7df0-17a3-4c7f-b759-2f1c4f42fe2c req-ea0e7489-06e7-4ef5-a203-1cc1492d1c6c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 4ee9159e-bf2b-47b7-8568-47fd13815f05] Updated VIF entry in instance network info cache for port 249be315-41d6-478d-9d9a-f3251b200e7f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 14:28:00 compute-0 nova_compute[250018]: 2026-01-20 14:28:00.407 250022 DEBUG nova.network.neutron [req-c3ee7df0-17a3-4c7f-b759-2f1c4f42fe2c req-ea0e7489-06e7-4ef5-a203-1cc1492d1c6c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 4ee9159e-bf2b-47b7-8568-47fd13815f05] Updating instance_info_cache with network_info: [{"id": "249be315-41d6-478d-9d9a-f3251b200e7f", "address": "fa:16:3e:e2:1c:e2", "network": {"id": "652d7129-cd9b-4229-8d54-211a1946e427", "bridge": "br-int", "label": "tempest-AttachSCSIVolumeTestJSON-427094000-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "944b426a2d4c4ad3a01f0b855ad36509", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap249be315-41", "ovs_interfaceid": "249be315-41d6-478d-9d9a-f3251b200e7f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:28:00 compute-0 nova_compute[250018]: 2026-01-20 14:28:00.427 250022 DEBUG oslo_concurrency.lockutils [req-c3ee7df0-17a3-4c7f-b759-2f1c4f42fe2c req-ea0e7489-06e7-4ef5-a203-1cc1492d1c6c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-4ee9159e-bf2b-47b7-8568-47fd13815f05" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:28:00 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:28:00 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:28:00 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:28:00.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:28:00 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:28:01 compute-0 nova_compute[250018]: 2026-01-20 14:28:01.102 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:28:01 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1110: 321 pgs: 321 active+clean; 188 MiB data, 409 MiB used, 21 GiB / 21 GiB avail; 4.0 MiB/s rd, 1.5 MiB/s wr, 172 op/s
Jan 20 14:28:01 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/2407875768' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:28:01 compute-0 anacron[102472]: Job `cron.weekly' started
Jan 20 14:28:01 compute-0 anacron[102472]: Job `cron.weekly' terminated
Jan 20 14:28:01 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:28:01 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:28:01 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:28:01.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:28:02 compute-0 ceph-mon[74360]: pgmap v1110: 321 pgs: 321 active+clean; 188 MiB data, 409 MiB used, 21 GiB / 21 GiB avail; 4.0 MiB/s rd, 1.5 MiB/s wr, 172 op/s
Jan 20 14:28:02 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:28:02 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:28:02 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:28:02.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:28:03 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1111: 321 pgs: 321 active+clean; 188 MiB data, 409 MiB used, 21 GiB / 21 GiB avail; 2.5 MiB/s rd, 1.1 MiB/s wr, 134 op/s
Jan 20 14:28:03 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:28:03 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:28:03 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:28:03.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:28:03 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/2155246561' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:28:03 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/3793975136' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:28:04 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:28:04 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:28:04 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:28:04.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:28:05 compute-0 ceph-mon[74360]: pgmap v1111: 321 pgs: 321 active+clean; 188 MiB data, 409 MiB used, 21 GiB / 21 GiB avail; 2.5 MiB/s rd, 1.1 MiB/s wr, 134 op/s
Jan 20 14:28:05 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1112: 321 pgs: 321 active+clean; 197 MiB data, 413 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 421 KiB/s wr, 83 op/s
Jan 20 14:28:05 compute-0 nova_compute[250018]: 2026-01-20 14:28:05.354 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:28:05 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:28:05 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:28:05 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:28:05.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:28:05 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:28:06 compute-0 nova_compute[250018]: 2026-01-20 14:28:06.105 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:28:06 compute-0 ceph-mon[74360]: pgmap v1112: 321 pgs: 321 active+clean; 197 MiB data, 413 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 421 KiB/s wr, 83 op/s
Jan 20 14:28:06 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:28:06 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:28:06 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:28:06.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:28:07 compute-0 ovn_controller[148666]: 2026-01-20T14:28:07Z|00006|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e2:1c:e2 10.100.0.10
Jan 20 14:28:07 compute-0 ovn_controller[148666]: 2026-01-20T14:28:07Z|00007|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e2:1c:e2 10.100.0.10
Jan 20 14:28:07 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1113: 321 pgs: 321 active+clean; 246 MiB data, 444 MiB used, 21 GiB / 21 GiB avail; 4.9 MiB/s rd, 3.2 MiB/s wr, 184 op/s
Jan 20 14:28:07 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:28:07 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:28:07 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:28:07.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:28:08 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:28:08 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:28:08 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:28:08.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:28:08 compute-0 ceph-mon[74360]: pgmap v1113: 321 pgs: 321 active+clean; 246 MiB data, 444 MiB used, 21 GiB / 21 GiB avail; 4.9 MiB/s rd, 3.2 MiB/s wr, 184 op/s
Jan 20 14:28:09 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1114: 321 pgs: 321 active+clean; 269 MiB data, 448 MiB used, 21 GiB / 21 GiB avail; 3.7 MiB/s rd, 3.9 MiB/s wr, 155 op/s
Jan 20 14:28:09 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:28:09 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:28:09 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:28:09.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:28:09 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/4136608474' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:28:09 compute-0 ceph-mon[74360]: pgmap v1114: 321 pgs: 321 active+clean; 269 MiB data, 448 MiB used, 21 GiB / 21 GiB avail; 3.7 MiB/s rd, 3.9 MiB/s wr, 155 op/s
Jan 20 14:28:10 compute-0 nova_compute[250018]: 2026-01-20 14:28:10.355 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:28:10 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:28:10 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:28:10 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:28:10.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:28:10 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:28:11 compute-0 nova_compute[250018]: 2026-01-20 14:28:11.052 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:28:11 compute-0 nova_compute[250018]: 2026-01-20 14:28:11.052 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 20 14:28:11 compute-0 nova_compute[250018]: 2026-01-20 14:28:11.107 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:28:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 14:28:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:28:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 20 14:28:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:28:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0051161842197096595 of space, bias 1.0, pg target 1.5348552659128978 quantized to 32 (current 32)
Jan 20 14:28:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:28:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.00023955134468639494 of space, bias 1.0, pg target 0.07162585206123209 quantized to 32 (current 32)
Jan 20 14:28:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:28:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:28:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:28:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0028546232319002418 of space, bias 1.0, pg target 0.8535323463381723 quantized to 32 (current 32)
Jan 20 14:28:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:28:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 32)
Jan 20 14:28:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:28:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:28:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:28:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021737739624046157 quantized to 32 (current 32)
Jan 20 14:28:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:28:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Jan 20 14:28:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:28:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:28:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:28:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Jan 20 14:28:11 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1115: 321 pgs: 321 active+clean; 302 MiB data, 484 MiB used, 21 GiB / 21 GiB avail; 4.0 MiB/s rd, 5.2 MiB/s wr, 197 op/s
Jan 20 14:28:11 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:28:11 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:28:11 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:28:11.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:28:12 compute-0 ceph-mon[74360]: pgmap v1115: 321 pgs: 321 active+clean; 302 MiB data, 484 MiB used, 21 GiB / 21 GiB avail; 4.0 MiB/s rd, 5.2 MiB/s wr, 197 op/s
Jan 20 14:28:12 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:28:12 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:28:12 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:28:12.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:28:13 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1116: 321 pgs: 321 active+clean; 313 MiB data, 489 MiB used, 21 GiB / 21 GiB avail; 4.0 MiB/s rd, 5.7 MiB/s wr, 206 op/s
Jan 20 14:28:13 compute-0 nova_compute[250018]: 2026-01-20 14:28:13.428 250022 DEBUG oslo_concurrency.lockutils [None req-d659419d-dcea-4ecd-9c9d-954854ebd236 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] Acquiring lock "79b5596e-43c9-4085-9829-454fecf59490" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:28:13 compute-0 nova_compute[250018]: 2026-01-20 14:28:13.428 250022 DEBUG oslo_concurrency.lockutils [None req-d659419d-dcea-4ecd-9c9d-954854ebd236 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] Lock "79b5596e-43c9-4085-9829-454fecf59490" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:28:13 compute-0 nova_compute[250018]: 2026-01-20 14:28:13.444 250022 DEBUG nova.compute.manager [None req-d659419d-dcea-4ecd-9c9d-954854ebd236 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 14:28:13 compute-0 nova_compute[250018]: 2026-01-20 14:28:13.538 250022 DEBUG oslo_concurrency.lockutils [None req-d659419d-dcea-4ecd-9c9d-954854ebd236 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:28:13 compute-0 nova_compute[250018]: 2026-01-20 14:28:13.539 250022 DEBUG oslo_concurrency.lockutils [None req-d659419d-dcea-4ecd-9c9d-954854ebd236 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:28:13 compute-0 nova_compute[250018]: 2026-01-20 14:28:13.549 250022 DEBUG nova.virt.hardware [None req-d659419d-dcea-4ecd-9c9d-954854ebd236 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 14:28:13 compute-0 nova_compute[250018]: 2026-01-20 14:28:13.550 250022 INFO nova.compute.claims [None req-d659419d-dcea-4ecd-9c9d-954854ebd236 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Claim successful on node compute-0.ctlplane.example.com
Jan 20 14:28:13 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/2012390759' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:28:13 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/2012390759' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:28:13 compute-0 nova_compute[250018]: 2026-01-20 14:28:13.668 250022 DEBUG oslo_concurrency.processutils [None req-d659419d-dcea-4ecd-9c9d-954854ebd236 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:28:13 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:28:13 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:28:13 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:28:13.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:28:14 compute-0 nova_compute[250018]: 2026-01-20 14:28:14.063 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:28:14 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:28:14 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/373042378' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:28:14 compute-0 nova_compute[250018]: 2026-01-20 14:28:14.119 250022 DEBUG oslo_concurrency.processutils [None req-d659419d-dcea-4ecd-9c9d-954854ebd236 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:28:14 compute-0 nova_compute[250018]: 2026-01-20 14:28:14.125 250022 DEBUG nova.compute.provider_tree [None req-d659419d-dcea-4ecd-9c9d-954854ebd236 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:28:14 compute-0 nova_compute[250018]: 2026-01-20 14:28:14.145 250022 DEBUG nova.scheduler.client.report [None req-d659419d-dcea-4ecd-9c9d-954854ebd236 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:28:14 compute-0 nova_compute[250018]: 2026-01-20 14:28:14.173 250022 DEBUG oslo_concurrency.lockutils [None req-d659419d-dcea-4ecd-9c9d-954854ebd236 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.634s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:28:14 compute-0 nova_compute[250018]: 2026-01-20 14:28:14.174 250022 DEBUG nova.compute.manager [None req-d659419d-dcea-4ecd-9c9d-954854ebd236 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 14:28:14 compute-0 nova_compute[250018]: 2026-01-20 14:28:14.245 250022 DEBUG nova.compute.manager [None req-d659419d-dcea-4ecd-9c9d-954854ebd236 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 20 14:28:14 compute-0 nova_compute[250018]: 2026-01-20 14:28:14.245 250022 DEBUG nova.network.neutron [None req-d659419d-dcea-4ecd-9c9d-954854ebd236 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 14:28:14 compute-0 nova_compute[250018]: 2026-01-20 14:28:14.276 250022 INFO nova.virt.libvirt.driver [None req-d659419d-dcea-4ecd-9c9d-954854ebd236 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 14:28:14 compute-0 nova_compute[250018]: 2026-01-20 14:28:14.307 250022 DEBUG nova.compute.manager [None req-d659419d-dcea-4ecd-9c9d-954854ebd236 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 14:28:14 compute-0 nova_compute[250018]: 2026-01-20 14:28:14.383 250022 INFO nova.virt.block_device [None req-d659419d-dcea-4ecd-9c9d-954854ebd236 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Booting with volume 47e883f3-6efe-40b3-be28-6c01525dfc0c at /dev/vda
Jan 20 14:28:14 compute-0 nova_compute[250018]: 2026-01-20 14:28:14.532 250022 DEBUG nova.policy [None req-d659419d-dcea-4ecd-9c9d-954854ebd236 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'bce7fcbd19554e29bb80c5b93b7dd3c9', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd15f60b9e48e4175b5520d1e57ed2d3a', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 20 14:28:14 compute-0 nova_compute[250018]: 2026-01-20 14:28:14.578 250022 DEBUG os_brick.utils [None req-d659419d-dcea-4ecd-9c9d-954854ebd236 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Jan 20 14:28:14 compute-0 nova_compute[250018]: 2026-01-20 14:28:14.579 250022 INFO oslo.privsep.daemon [None req-d659419d-dcea-4ecd-9c9d-954854ebd236 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'os_brick.privileged.default', '--privsep_sock_path', '/tmp/tmpgvyrq19k/privsep.sock']
Jan 20 14:28:14 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:28:14 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:28:14 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:28:14.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:28:15 compute-0 ceph-mon[74360]: pgmap v1116: 321 pgs: 321 active+clean; 313 MiB data, 489 MiB used, 21 GiB / 21 GiB avail; 4.0 MiB/s rd, 5.7 MiB/s wr, 206 op/s
Jan 20 14:28:15 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/373042378' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:28:15 compute-0 nova_compute[250018]: 2026-01-20 14:28:15.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:28:15 compute-0 nova_compute[250018]: 2026-01-20 14:28:15.114 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:28:15 compute-0 nova_compute[250018]: 2026-01-20 14:28:15.115 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:28:15 compute-0 nova_compute[250018]: 2026-01-20 14:28:15.115 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:28:15 compute-0 nova_compute[250018]: 2026-01-20 14:28:15.115 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 14:28:15 compute-0 nova_compute[250018]: 2026-01-20 14:28:15.116 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:28:15 compute-0 nova_compute[250018]: 2026-01-20 14:28:15.251 250022 INFO oslo.privsep.daemon [None req-d659419d-dcea-4ecd-9c9d-954854ebd236 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] Spawned new privsep daemon via rootwrap
Jan 20 14:28:15 compute-0 nova_compute[250018]: 2026-01-20 14:28:15.126 268150 INFO oslo.privsep.daemon [-] privsep daemon starting
Jan 20 14:28:15 compute-0 nova_compute[250018]: 2026-01-20 14:28:15.133 268150 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Jan 20 14:28:15 compute-0 nova_compute[250018]: 2026-01-20 14:28:15.138 268150 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Jan 20 14:28:15 compute-0 nova_compute[250018]: 2026-01-20 14:28:15.138 268150 INFO oslo.privsep.daemon [-] privsep daemon running as pid 268150
Jan 20 14:28:15 compute-0 nova_compute[250018]: 2026-01-20 14:28:15.254 268150 DEBUG oslo.privsep.daemon [-] privsep: reply[0d8076d1-32fa-457c-b542-dc8f4c4dc7b6]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:28:15 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1117: 321 pgs: 321 active+clean; 313 MiB data, 494 MiB used, 21 GiB / 21 GiB avail; 4.0 MiB/s rd, 5.7 MiB/s wr, 207 op/s
Jan 20 14:28:15 compute-0 nova_compute[250018]: 2026-01-20 14:28:15.357 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:28:15 compute-0 nova_compute[250018]: 2026-01-20 14:28:15.363 268150 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:28:15 compute-0 nova_compute[250018]: 2026-01-20 14:28:15.374 268150 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:28:15 compute-0 nova_compute[250018]: 2026-01-20 14:28:15.374 268150 DEBUG oslo.privsep.daemon [-] privsep: reply[bfe92e7b-8a3f-4d9e-9eff-ed3580324a92]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:28:15 compute-0 nova_compute[250018]: 2026-01-20 14:28:15.375 268150 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:28:15 compute-0 nova_compute[250018]: 2026-01-20 14:28:15.382 268150 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:28:15 compute-0 nova_compute[250018]: 2026-01-20 14:28:15.383 268150 DEBUG oslo.privsep.daemon [-] privsep: reply[83acee1e-75f4-4a75-90d3-09521ed64545]: (4, ('InitiatorName=iqn.1994-05.com.redhat:228389a1f17e', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:28:15 compute-0 nova_compute[250018]: 2026-01-20 14:28:15.384 268150 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:28:15 compute-0 nova_compute[250018]: 2026-01-20 14:28:15.395 268150 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:28:15 compute-0 nova_compute[250018]: 2026-01-20 14:28:15.395 268150 DEBUG oslo.privsep.daemon [-] privsep: reply[7905bc7e-a697-487b-88fc-ed13cdf0170a]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:28:15 compute-0 nova_compute[250018]: 2026-01-20 14:28:15.396 268150 DEBUG oslo.privsep.daemon [-] privsep: reply[5b27d95d-73a5-48c7-9595-128bb88815a7]: (4, '35085f33-1a27-41e3-805d-02c7ac6a1d7f') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:28:15 compute-0 nova_compute[250018]: 2026-01-20 14:28:15.397 250022 DEBUG oslo_concurrency.processutils [None req-d659419d-dcea-4ecd-9c9d-954854ebd236 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:28:15 compute-0 nova_compute[250018]: 2026-01-20 14:28:15.422 250022 DEBUG oslo_concurrency.processutils [None req-d659419d-dcea-4ecd-9c9d-954854ebd236 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] CMD "nvme version" returned: 0 in 0.026s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:28:15 compute-0 nova_compute[250018]: 2026-01-20 14:28:15.425 250022 DEBUG os_brick.initiator.connectors.lightos [None req-d659419d-dcea-4ecd-9c9d-954854ebd236 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Jan 20 14:28:15 compute-0 nova_compute[250018]: 2026-01-20 14:28:15.426 250022 DEBUG os_brick.initiator.connectors.lightos [None req-d659419d-dcea-4ecd-9c9d-954854ebd236 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Jan 20 14:28:15 compute-0 nova_compute[250018]: 2026-01-20 14:28:15.426 250022 DEBUG os_brick.initiator.connectors.lightos [None req-d659419d-dcea-4ecd-9c9d-954854ebd236 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Jan 20 14:28:15 compute-0 nova_compute[250018]: 2026-01-20 14:28:15.426 250022 DEBUG os_brick.utils [None req-d659419d-dcea-4ecd-9c9d-954854ebd236 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] <== get_connector_properties: return (848ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:228389a1f17e', 'do_local_attach': False, 'nvme_hostid': '5350774e-8b5e-4dba-80a9-92d405981c1d', 'system uuid': '35085f33-1a27-41e3-805d-02c7ac6a1d7f', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Jan 20 14:28:15 compute-0 nova_compute[250018]: 2026-01-20 14:28:15.426 250022 DEBUG nova.virt.block_device [None req-d659419d-dcea-4ecd-9c9d-954854ebd236 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Updating existing volume attachment record: 9eb63166-9838-4b2e-9a3b-635bb42864f1 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Jan 20 14:28:15 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:28:15 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/226743044' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:28:15 compute-0 nova_compute[250018]: 2026-01-20 14:28:15.615 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:28:15 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:28:15 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:28:15 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:28:15.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:28:15 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:28:15 compute-0 nova_compute[250018]: 2026-01-20 14:28:15.878 250022 DEBUG oslo_concurrency.lockutils [None req-27f229b4-eef5-4191-b44f-9be6a11816ee eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] Acquiring lock "4ee9159e-bf2b-47b7-8568-47fd13815f05" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:28:15 compute-0 nova_compute[250018]: 2026-01-20 14:28:15.878 250022 DEBUG oslo_concurrency.lockutils [None req-27f229b4-eef5-4191-b44f-9be6a11816ee eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] Lock "4ee9159e-bf2b-47b7-8568-47fd13815f05" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:28:15 compute-0 nova_compute[250018]: 2026-01-20 14:28:15.899 250022 DEBUG nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] skipping disk for instance-00000015 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:28:15 compute-0 nova_compute[250018]: 2026-01-20 14:28:15.899 250022 DEBUG nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] skipping disk for instance-00000015 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:28:15 compute-0 nova_compute[250018]: 2026-01-20 14:28:15.906 250022 DEBUG nova.objects.instance [None req-27f229b4-eef5-4191-b44f-9be6a11816ee eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] Lazy-loading 'flavor' on Instance uuid 4ee9159e-bf2b-47b7-8568-47fd13815f05 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:28:16 compute-0 nova_compute[250018]: 2026-01-20 14:28:16.009 250022 DEBUG oslo_concurrency.lockutils [None req-27f229b4-eef5-4191-b44f-9be6a11816ee eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] Lock "4ee9159e-bf2b-47b7-8568-47fd13815f05" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.131s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:28:16 compute-0 nova_compute[250018]: 2026-01-20 14:28:16.022 250022 DEBUG nova.network.neutron [None req-d659419d-dcea-4ecd-9c9d-954854ebd236 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Successfully created port: bd002580-dd95-49e1-bc34-e85f86272a05 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 20 14:28:16 compute-0 ceph-mon[74360]: pgmap v1117: 321 pgs: 321 active+clean; 313 MiB data, 494 MiB used, 21 GiB / 21 GiB avail; 4.0 MiB/s rd, 5.7 MiB/s wr, 207 op/s
Jan 20 14:28:16 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/226743044' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:28:16 compute-0 nova_compute[250018]: 2026-01-20 14:28:16.042 250022 WARNING nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:28:16 compute-0 nova_compute[250018]: 2026-01-20 14:28:16.043 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4462MB free_disk=20.87643814086914GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 14:28:16 compute-0 nova_compute[250018]: 2026-01-20 14:28:16.043 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:28:16 compute-0 nova_compute[250018]: 2026-01-20 14:28:16.044 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:28:16 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:28:16 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2276074320' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:28:16 compute-0 nova_compute[250018]: 2026-01-20 14:28:16.108 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:28:16 compute-0 nova_compute[250018]: 2026-01-20 14:28:16.295 250022 DEBUG oslo_concurrency.lockutils [None req-27f229b4-eef5-4191-b44f-9be6a11816ee eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] Acquiring lock "4ee9159e-bf2b-47b7-8568-47fd13815f05" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:28:16 compute-0 nova_compute[250018]: 2026-01-20 14:28:16.296 250022 DEBUG oslo_concurrency.lockutils [None req-27f229b4-eef5-4191-b44f-9be6a11816ee eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] Lock "4ee9159e-bf2b-47b7-8568-47fd13815f05" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:28:16 compute-0 nova_compute[250018]: 2026-01-20 14:28:16.296 250022 INFO nova.compute.manager [None req-27f229b4-eef5-4191-b44f-9be6a11816ee eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] [instance: 4ee9159e-bf2b-47b7-8568-47fd13815f05] Attaching volume 0ef22368-9785-4823-bb66-470ea5a5862d to /dev/sdc
Jan 20 14:28:16 compute-0 nova_compute[250018]: 2026-01-20 14:28:16.462 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Instance 4ee9159e-bf2b-47b7-8568-47fd13815f05 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 14:28:16 compute-0 nova_compute[250018]: 2026-01-20 14:28:16.462 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Instance 79b5596e-43c9-4085-9829-454fecf59490 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 14:28:16 compute-0 nova_compute[250018]: 2026-01-20 14:28:16.462 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 14:28:16 compute-0 nova_compute[250018]: 2026-01-20 14:28:16.463 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 14:28:16 compute-0 nova_compute[250018]: 2026-01-20 14:28:16.485 250022 DEBUG os_brick.utils [None req-27f229b4-eef5-4191-b44f-9be6a11816ee eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Jan 20 14:28:16 compute-0 nova_compute[250018]: 2026-01-20 14:28:16.486 268150 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:28:16 compute-0 nova_compute[250018]: 2026-01-20 14:28:16.501 268150 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:28:16 compute-0 nova_compute[250018]: 2026-01-20 14:28:16.502 268150 DEBUG oslo.privsep.daemon [-] privsep: reply[fff06618-2297-465e-bb2e-7e5969fda771]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:28:16 compute-0 nova_compute[250018]: 2026-01-20 14:28:16.502 268150 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:28:16 compute-0 nova_compute[250018]: 2026-01-20 14:28:16.509 268150 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.006s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:28:16 compute-0 nova_compute[250018]: 2026-01-20 14:28:16.509 268150 DEBUG oslo.privsep.daemon [-] privsep: reply[df793374-beae-4c62-a172-493aac96560b]: (4, ('InitiatorName=iqn.1994-05.com.redhat:228389a1f17e', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:28:16 compute-0 nova_compute[250018]: 2026-01-20 14:28:16.510 268150 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:28:16 compute-0 nova_compute[250018]: 2026-01-20 14:28:16.517 268150 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:28:16 compute-0 nova_compute[250018]: 2026-01-20 14:28:16.517 268150 DEBUG oslo.privsep.daemon [-] privsep: reply[a8cafe93-edc1-4a44-97fd-9f12b22b2c60]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:28:16 compute-0 nova_compute[250018]: 2026-01-20 14:28:16.518 268150 DEBUG oslo.privsep.daemon [-] privsep: reply[36196c9d-cfdf-4af9-a39b-fd18d4b59e04]: (4, '35085f33-1a27-41e3-805d-02c7ac6a1d7f') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:28:16 compute-0 nova_compute[250018]: 2026-01-20 14:28:16.518 250022 DEBUG oslo_concurrency.processutils [None req-27f229b4-eef5-4191-b44f-9be6a11816ee eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:28:16 compute-0 nova_compute[250018]: 2026-01-20 14:28:16.537 250022 DEBUG oslo_concurrency.processutils [None req-27f229b4-eef5-4191-b44f-9be6a11816ee eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] CMD "nvme version" returned: 0 in 0.019s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:28:16 compute-0 nova_compute[250018]: 2026-01-20 14:28:16.540 250022 DEBUG os_brick.initiator.connectors.lightos [None req-27f229b4-eef5-4191-b44f-9be6a11816ee eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Jan 20 14:28:16 compute-0 nova_compute[250018]: 2026-01-20 14:28:16.540 250022 DEBUG os_brick.initiator.connectors.lightos [None req-27f229b4-eef5-4191-b44f-9be6a11816ee eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Jan 20 14:28:16 compute-0 nova_compute[250018]: 2026-01-20 14:28:16.540 250022 DEBUG os_brick.initiator.connectors.lightos [None req-27f229b4-eef5-4191-b44f-9be6a11816ee eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Jan 20 14:28:16 compute-0 nova_compute[250018]: 2026-01-20 14:28:16.541 250022 DEBUG os_brick.utils [None req-27f229b4-eef5-4191-b44f-9be6a11816ee eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] <== get_connector_properties: return (55ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:228389a1f17e', 'do_local_attach': False, 'nvme_hostid': '5350774e-8b5e-4dba-80a9-92d405981c1d', 'system uuid': '35085f33-1a27-41e3-805d-02c7ac6a1d7f', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Jan 20 14:28:16 compute-0 nova_compute[250018]: 2026-01-20 14:28:16.541 250022 DEBUG nova.virt.block_device [None req-27f229b4-eef5-4191-b44f-9be6a11816ee eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] [instance: 4ee9159e-bf2b-47b7-8568-47fd13815f05] Updating existing volume attachment record: 96eb38b6-0b35-4972-95c6-9de22312b89c _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Jan 20 14:28:16 compute-0 nova_compute[250018]: 2026-01-20 14:28:16.549 250022 DEBUG nova.compute.manager [None req-d659419d-dcea-4ecd-9c9d-954854ebd236 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 14:28:16 compute-0 nova_compute[250018]: 2026-01-20 14:28:16.551 250022 DEBUG nova.virt.libvirt.driver [None req-d659419d-dcea-4ecd-9c9d-954854ebd236 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 14:28:16 compute-0 nova_compute[250018]: 2026-01-20 14:28:16.551 250022 INFO nova.virt.libvirt.driver [None req-d659419d-dcea-4ecd-9c9d-954854ebd236 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Creating image(s)
Jan 20 14:28:16 compute-0 nova_compute[250018]: 2026-01-20 14:28:16.551 250022 DEBUG nova.virt.libvirt.driver [None req-d659419d-dcea-4ecd-9c9d-954854ebd236 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Jan 20 14:28:16 compute-0 nova_compute[250018]: 2026-01-20 14:28:16.552 250022 DEBUG nova.virt.libvirt.driver [None req-d659419d-dcea-4ecd-9c9d-954854ebd236 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Ensure instance console log exists: /var/lib/nova/instances/79b5596e-43c9-4085-9829-454fecf59490/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 14:28:16 compute-0 nova_compute[250018]: 2026-01-20 14:28:16.552 250022 DEBUG oslo_concurrency.lockutils [None req-d659419d-dcea-4ecd-9c9d-954854ebd236 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:28:16 compute-0 nova_compute[250018]: 2026-01-20 14:28:16.552 250022 DEBUG oslo_concurrency.lockutils [None req-d659419d-dcea-4ecd-9c9d-954854ebd236 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:28:16 compute-0 nova_compute[250018]: 2026-01-20 14:28:16.552 250022 DEBUG oslo_concurrency.lockutils [None req-d659419d-dcea-4ecd-9c9d-954854ebd236 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:28:16 compute-0 nova_compute[250018]: 2026-01-20 14:28:16.629 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:28:16 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:28:16 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:28:16 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:28:16.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:28:16 compute-0 nova_compute[250018]: 2026-01-20 14:28:16.967 250022 DEBUG nova.network.neutron [None req-d659419d-dcea-4ecd-9c9d-954854ebd236 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Successfully updated port: bd002580-dd95-49e1-bc34-e85f86272a05 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 20 14:28:16 compute-0 nova_compute[250018]: 2026-01-20 14:28:16.993 250022 DEBUG oslo_concurrency.lockutils [None req-d659419d-dcea-4ecd-9c9d-954854ebd236 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] Acquiring lock "refresh_cache-79b5596e-43c9-4085-9829-454fecf59490" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:28:16 compute-0 nova_compute[250018]: 2026-01-20 14:28:16.993 250022 DEBUG oslo_concurrency.lockutils [None req-d659419d-dcea-4ecd-9c9d-954854ebd236 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] Acquired lock "refresh_cache-79b5596e-43c9-4085-9829-454fecf59490" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:28:16 compute-0 nova_compute[250018]: 2026-01-20 14:28:16.993 250022 DEBUG nova.network.neutron [None req-d659419d-dcea-4ecd-9c9d-954854ebd236 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:28:17 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:28:17 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3069433274' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:28:17 compute-0 nova_compute[250018]: 2026-01-20 14:28:17.024 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.395s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:28:17 compute-0 nova_compute[250018]: 2026-01-20 14:28:17.029 250022 DEBUG nova.compute.provider_tree [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:28:17 compute-0 nova_compute[250018]: 2026-01-20 14:28:17.045 250022 DEBUG nova.scheduler.client.report [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:28:17 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/2276074320' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:28:17 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3069433274' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:28:17 compute-0 nova_compute[250018]: 2026-01-20 14:28:17.069 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 14:28:17 compute-0 nova_compute[250018]: 2026-01-20 14:28:17.070 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.026s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:28:17 compute-0 nova_compute[250018]: 2026-01-20 14:28:17.071 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:28:17 compute-0 nova_compute[250018]: 2026-01-20 14:28:17.071 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 20 14:28:17 compute-0 nova_compute[250018]: 2026-01-20 14:28:17.089 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 20 14:28:17 compute-0 nova_compute[250018]: 2026-01-20 14:28:17.187 250022 DEBUG nova.compute.manager [req-7789ecd3-7f22-4f80-99c4-ff1ce409885c req-32af202b-cd4d-4f2b-88cd-e162600fba16 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Received event network-changed-bd002580-dd95-49e1-bc34-e85f86272a05 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:28:17 compute-0 nova_compute[250018]: 2026-01-20 14:28:17.188 250022 DEBUG nova.compute.manager [req-7789ecd3-7f22-4f80-99c4-ff1ce409885c req-32af202b-cd4d-4f2b-88cd-e162600fba16 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Refreshing instance network info cache due to event network-changed-bd002580-dd95-49e1-bc34-e85f86272a05. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:28:17 compute-0 nova_compute[250018]: 2026-01-20 14:28:17.188 250022 DEBUG oslo_concurrency.lockutils [req-7789ecd3-7f22-4f80-99c4-ff1ce409885c req-32af202b-cd4d-4f2b-88cd-e162600fba16 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-79b5596e-43c9-4085-9829-454fecf59490" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:28:17 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1118: 321 pgs: 321 active+clean; 326 MiB data, 494 MiB used, 21 GiB / 21 GiB avail; 4.1 MiB/s rd, 6.4 MiB/s wr, 222 op/s
Jan 20 14:28:17 compute-0 nova_compute[250018]: 2026-01-20 14:28:17.298 250022 DEBUG oslo_concurrency.lockutils [None req-27f229b4-eef5-4191-b44f-9be6a11816ee eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] Acquiring lock "cache_volume_driver" by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:28:17 compute-0 nova_compute[250018]: 2026-01-20 14:28:17.298 250022 DEBUG oslo_concurrency.lockutils [None req-27f229b4-eef5-4191-b44f-9be6a11816ee eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] Lock "cache_volume_driver" acquired by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:28:17 compute-0 nova_compute[250018]: 2026-01-20 14:28:17.300 250022 DEBUG oslo_concurrency.lockutils [None req-27f229b4-eef5-4191-b44f-9be6a11816ee eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] Lock "cache_volume_driver" "released" by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:28:17 compute-0 nova_compute[250018]: 2026-01-20 14:28:17.310 250022 DEBUG nova.objects.instance [None req-27f229b4-eef5-4191-b44f-9be6a11816ee eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] Lazy-loading 'flavor' on Instance uuid 4ee9159e-bf2b-47b7-8568-47fd13815f05 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:28:17 compute-0 nova_compute[250018]: 2026-01-20 14:28:17.329 250022 DEBUG nova.virt.libvirt.guest [None req-27f229b4-eef5-4191-b44f-9be6a11816ee eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] attach device xml: <disk type="network" device="disk">
Jan 20 14:28:17 compute-0 nova_compute[250018]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 20 14:28:17 compute-0 nova_compute[250018]:   <source protocol="rbd" name="volumes/volume-0ef22368-9785-4823-bb66-470ea5a5862d">
Jan 20 14:28:17 compute-0 nova_compute[250018]:     <host name="192.168.122.100" port="6789"/>
Jan 20 14:28:17 compute-0 nova_compute[250018]:     <host name="192.168.122.102" port="6789"/>
Jan 20 14:28:17 compute-0 nova_compute[250018]:     <host name="192.168.122.101" port="6789"/>
Jan 20 14:28:17 compute-0 nova_compute[250018]:   </source>
Jan 20 14:28:17 compute-0 nova_compute[250018]:   <auth username="openstack">
Jan 20 14:28:17 compute-0 nova_compute[250018]:     <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:28:17 compute-0 nova_compute[250018]:   </auth>
Jan 20 14:28:17 compute-0 nova_compute[250018]:   <target dev="sdc" bus="scsi"/>
Jan 20 14:28:17 compute-0 nova_compute[250018]:   <serial>0ef22368-9785-4823-bb66-470ea5a5862d</serial>
Jan 20 14:28:17 compute-0 nova_compute[250018]:   <address type="drive" controller="0" unit="2"/>
Jan 20 14:28:17 compute-0 nova_compute[250018]: </disk>
Jan 20 14:28:17 compute-0 nova_compute[250018]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Jan 20 14:28:17 compute-0 nova_compute[250018]: 2026-01-20 14:28:17.487 250022 DEBUG nova.virt.libvirt.driver [None req-27f229b4-eef5-4191-b44f-9be6a11816ee eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:28:17 compute-0 nova_compute[250018]: 2026-01-20 14:28:17.488 250022 DEBUG nova.virt.libvirt.driver [None req-27f229b4-eef5-4191-b44f-9be6a11816ee eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] No BDM found with device name sdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:28:17 compute-0 nova_compute[250018]: 2026-01-20 14:28:17.488 250022 DEBUG nova.virt.libvirt.driver [None req-27f229b4-eef5-4191-b44f-9be6a11816ee eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] No BDM found with device name sdc, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:28:17 compute-0 nova_compute[250018]: 2026-01-20 14:28:17.488 250022 DEBUG nova.virt.libvirt.driver [None req-27f229b4-eef5-4191-b44f-9be6a11816ee eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] No VIF found with MAC fa:16:3e:e2:1c:e2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 14:28:17 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:28:17 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:28:17 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:28:17.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:28:17 compute-0 nova_compute[250018]: 2026-01-20 14:28:17.819 250022 DEBUG oslo_concurrency.lockutils [None req-27f229b4-eef5-4191-b44f-9be6a11816ee eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] Lock "4ee9159e-bf2b-47b7-8568-47fd13815f05" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.523s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:28:17 compute-0 nova_compute[250018]: 2026-01-20 14:28:17.821 250022 DEBUG nova.network.neutron [None req-d659419d-dcea-4ecd-9c9d-954854ebd236 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 14:28:18 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/2141615976' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:28:18 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/3394207389' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:28:18 compute-0 ceph-mon[74360]: pgmap v1118: 321 pgs: 321 active+clean; 326 MiB data, 494 MiB used, 21 GiB / 21 GiB avail; 4.1 MiB/s rd, 6.4 MiB/s wr, 222 op/s
Jan 20 14:28:18 compute-0 nova_compute[250018]: 2026-01-20 14:28:18.083 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:28:18 compute-0 nova_compute[250018]: 2026-01-20 14:28:18.084 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:28:18 compute-0 nova_compute[250018]: 2026-01-20 14:28:18.243 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:28:18 compute-0 nova_compute[250018]: 2026-01-20 14:28:18.245 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:28:18 compute-0 nova_compute[250018]: 2026-01-20 14:28:18.245 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:28:18 compute-0 nova_compute[250018]: 2026-01-20 14:28:18.246 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:28:18 compute-0 nova_compute[250018]: 2026-01-20 14:28:18.246 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 14:28:18 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:28:18 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:28:18 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:28:18.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:28:18 compute-0 sudo[268230]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:28:18 compute-0 sudo[268230]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:28:18 compute-0 sudo[268230]: pam_unix(sudo:session): session closed for user root
Jan 20 14:28:18 compute-0 sudo[268255]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:28:18 compute-0 sudo[268255]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:28:18 compute-0 sudo[268255]: pam_unix(sudo:session): session closed for user root
Jan 20 14:28:19 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/1146682809' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:28:19 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1119: 321 pgs: 321 active+clean; 328 MiB data, 497 MiB used, 21 GiB / 21 GiB avail; 1.2 MiB/s rd, 3.8 MiB/s wr, 131 op/s
Jan 20 14:28:19 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:28:19 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:28:19 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:28:19.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:28:20 compute-0 nova_compute[250018]: 2026-01-20 14:28:20.102 250022 DEBUG oslo_concurrency.lockutils [None req-19e36816-ee11-416b-b60d-d263c77faf4d eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] Acquiring lock "4ee9159e-bf2b-47b7-8568-47fd13815f05" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:28:20 compute-0 nova_compute[250018]: 2026-01-20 14:28:20.102 250022 DEBUG oslo_concurrency.lockutils [None req-19e36816-ee11-416b-b60d-d263c77faf4d eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] Lock "4ee9159e-bf2b-47b7-8568-47fd13815f05" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:28:20 compute-0 ceph-mon[74360]: pgmap v1119: 321 pgs: 321 active+clean; 328 MiB data, 497 MiB used, 21 GiB / 21 GiB avail; 1.2 MiB/s rd, 3.8 MiB/s wr, 131 op/s
Jan 20 14:28:20 compute-0 nova_compute[250018]: 2026-01-20 14:28:20.119 250022 INFO nova.compute.manager [None req-19e36816-ee11-416b-b60d-d263c77faf4d eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] [instance: 4ee9159e-bf2b-47b7-8568-47fd13815f05] Detaching volume 0ef22368-9785-4823-bb66-470ea5a5862d
Jan 20 14:28:20 compute-0 nova_compute[250018]: 2026-01-20 14:28:20.360 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:28:20 compute-0 nova_compute[250018]: 2026-01-20 14:28:20.442 250022 INFO nova.virt.block_device [None req-19e36816-ee11-416b-b60d-d263c77faf4d eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] [instance: 4ee9159e-bf2b-47b7-8568-47fd13815f05] Attempting to driver detach volume 0ef22368-9785-4823-bb66-470ea5a5862d from mountpoint /dev/sdc
Jan 20 14:28:20 compute-0 nova_compute[250018]: 2026-01-20 14:28:20.450 250022 DEBUG nova.virt.libvirt.driver [None req-19e36816-ee11-416b-b60d-d263c77faf4d eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] Attempting to detach device sdc from instance 4ee9159e-bf2b-47b7-8568-47fd13815f05 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Jan 20 14:28:20 compute-0 nova_compute[250018]: 2026-01-20 14:28:20.451 250022 DEBUG nova.virt.libvirt.guest [None req-19e36816-ee11-416b-b60d-d263c77faf4d eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] detach device xml: <disk type="network" device="disk">
Jan 20 14:28:20 compute-0 nova_compute[250018]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 20 14:28:20 compute-0 nova_compute[250018]:   <source protocol="rbd" name="volumes/volume-0ef22368-9785-4823-bb66-470ea5a5862d">
Jan 20 14:28:20 compute-0 nova_compute[250018]:     <host name="192.168.122.100" port="6789"/>
Jan 20 14:28:20 compute-0 nova_compute[250018]:     <host name="192.168.122.102" port="6789"/>
Jan 20 14:28:20 compute-0 nova_compute[250018]:     <host name="192.168.122.101" port="6789"/>
Jan 20 14:28:20 compute-0 nova_compute[250018]:   </source>
Jan 20 14:28:20 compute-0 nova_compute[250018]:   <target dev="sdc" bus="scsi"/>
Jan 20 14:28:20 compute-0 nova_compute[250018]:   <serial>0ef22368-9785-4823-bb66-470ea5a5862d</serial>
Jan 20 14:28:20 compute-0 nova_compute[250018]:   <address type="drive" controller="0" bus="0" target="0" unit="2"/>
Jan 20 14:28:20 compute-0 nova_compute[250018]: </disk>
Jan 20 14:28:20 compute-0 nova_compute[250018]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 20 14:28:20 compute-0 nova_compute[250018]: 2026-01-20 14:28:20.459 250022 INFO nova.virt.libvirt.driver [None req-19e36816-ee11-416b-b60d-d263c77faf4d eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] Successfully detached device sdc from instance 4ee9159e-bf2b-47b7-8568-47fd13815f05 from the persistent domain config.
Jan 20 14:28:20 compute-0 nova_compute[250018]: 2026-01-20 14:28:20.460 250022 DEBUG nova.virt.libvirt.driver [None req-19e36816-ee11-416b-b60d-d263c77faf4d eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] (1/8): Attempting to detach device sdc with device alias scsi0-0-0-2 from instance 4ee9159e-bf2b-47b7-8568-47fd13815f05 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Jan 20 14:28:20 compute-0 nova_compute[250018]: 2026-01-20 14:28:20.460 250022 DEBUG nova.virt.libvirt.guest [None req-19e36816-ee11-416b-b60d-d263c77faf4d eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] detach device xml: <disk type="network" device="disk">
Jan 20 14:28:20 compute-0 nova_compute[250018]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 20 14:28:20 compute-0 nova_compute[250018]:   <source protocol="rbd" name="volumes/volume-0ef22368-9785-4823-bb66-470ea5a5862d">
Jan 20 14:28:20 compute-0 nova_compute[250018]:     <host name="192.168.122.100" port="6789"/>
Jan 20 14:28:20 compute-0 nova_compute[250018]:     <host name="192.168.122.102" port="6789"/>
Jan 20 14:28:20 compute-0 nova_compute[250018]:     <host name="192.168.122.101" port="6789"/>
Jan 20 14:28:20 compute-0 nova_compute[250018]:   </source>
Jan 20 14:28:20 compute-0 nova_compute[250018]:   <target dev="sdc" bus="scsi"/>
Jan 20 14:28:20 compute-0 nova_compute[250018]:   <serial>0ef22368-9785-4823-bb66-470ea5a5862d</serial>
Jan 20 14:28:20 compute-0 nova_compute[250018]:   <address type="drive" controller="0" bus="0" target="0" unit="2"/>
Jan 20 14:28:20 compute-0 nova_compute[250018]: </disk>
Jan 20 14:28:20 compute-0 nova_compute[250018]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 20 14:28:20 compute-0 nova_compute[250018]: 2026-01-20 14:28:20.544 250022 DEBUG nova.virt.libvirt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Received event <DeviceRemovedEvent: 1768919300.543679, 4ee9159e-bf2b-47b7-8568-47fd13815f05 => scsi0-0-0-2> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Jan 20 14:28:20 compute-0 nova_compute[250018]: 2026-01-20 14:28:20.545 250022 DEBUG nova.virt.libvirt.driver [None req-19e36816-ee11-416b-b60d-d263c77faf4d eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] Start waiting for the detach event from libvirt for device sdc with device alias scsi0-0-0-2 for instance 4ee9159e-bf2b-47b7-8568-47fd13815f05 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Jan 20 14:28:20 compute-0 nova_compute[250018]: 2026-01-20 14:28:20.548 250022 INFO nova.virt.libvirt.driver [None req-19e36816-ee11-416b-b60d-d263c77faf4d eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] Successfully detached device sdc from instance 4ee9159e-bf2b-47b7-8568-47fd13815f05 from the live domain config.
Jan 20 14:28:20 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:28:20 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:28:20 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:28:20.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:28:20 compute-0 nova_compute[250018]: 2026-01-20 14:28:20.798 250022 DEBUG nova.objects.instance [None req-19e36816-ee11-416b-b60d-d263c77faf4d eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] Lazy-loading 'flavor' on Instance uuid 4ee9159e-bf2b-47b7-8568-47fd13815f05 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:28:20 compute-0 nova_compute[250018]: 2026-01-20 14:28:20.861 250022 DEBUG nova.network.neutron [None req-d659419d-dcea-4ecd-9c9d-954854ebd236 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Updating instance_info_cache with network_info: [{"id": "bd002580-dd95-49e1-bc34-e85f86272a05", "address": "fa:16:3e:24:ce:0d", "network": {"id": "14f18b27-1594-48d8-a08b-a930f7adbc08", "bridge": "br-int", "label": "tempest-LiveMigrationTest-2126108622-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d15f60b9e48e4175b5520d1e57ed2d3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd002580-dd", "ovs_interfaceid": "bd002580-dd95-49e1-bc34-e85f86272a05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:28:20 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:28:20 compute-0 nova_compute[250018]: 2026-01-20 14:28:20.944 250022 DEBUG oslo_concurrency.lockutils [None req-d659419d-dcea-4ecd-9c9d-954854ebd236 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] Releasing lock "refresh_cache-79b5596e-43c9-4085-9829-454fecf59490" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:28:20 compute-0 nova_compute[250018]: 2026-01-20 14:28:20.945 250022 DEBUG nova.compute.manager [None req-d659419d-dcea-4ecd-9c9d-954854ebd236 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Instance network_info: |[{"id": "bd002580-dd95-49e1-bc34-e85f86272a05", "address": "fa:16:3e:24:ce:0d", "network": {"id": "14f18b27-1594-48d8-a08b-a930f7adbc08", "bridge": "br-int", "label": "tempest-LiveMigrationTest-2126108622-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d15f60b9e48e4175b5520d1e57ed2d3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd002580-dd", "ovs_interfaceid": "bd002580-dd95-49e1-bc34-e85f86272a05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 20 14:28:20 compute-0 nova_compute[250018]: 2026-01-20 14:28:20.946 250022 DEBUG oslo_concurrency.lockutils [req-7789ecd3-7f22-4f80-99c4-ff1ce409885c req-32af202b-cd4d-4f2b-88cd-e162600fba16 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-79b5596e-43c9-4085-9829-454fecf59490" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:28:20 compute-0 nova_compute[250018]: 2026-01-20 14:28:20.947 250022 DEBUG nova.network.neutron [req-7789ecd3-7f22-4f80-99c4-ff1ce409885c req-32af202b-cd4d-4f2b-88cd-e162600fba16 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Refreshing network info cache for port bd002580-dd95-49e1-bc34-e85f86272a05 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:28:20 compute-0 nova_compute[250018]: 2026-01-20 14:28:20.952 250022 DEBUG nova.virt.libvirt.driver [None req-d659419d-dcea-4ecd-9c9d-954854ebd236 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Start _get_guest_xml network_info=[{"id": "bd002580-dd95-49e1-bc34-e85f86272a05", "address": "fa:16:3e:24:ce:0d", "network": {"id": "14f18b27-1594-48d8-a08b-a930f7adbc08", "bridge": "br-int", "label": "tempest-LiveMigrationTest-2126108622-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d15f60b9e48e4175b5520d1e57ed2d3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd002580-dd", "ovs_interfaceid": "bd002580-dd95-49e1-bc34-e85f86272a05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'mount_device': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'attachment_id': '9eb63166-9838-4b2e-9a3b-635bb42864f1', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-47e883f3-6efe-40b3-be28-6c01525dfc0c', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '47e883f3-6efe-40b3-be28-6c01525dfc0c', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '79b5596e-43c9-4085-9829-454fecf59490', 'attached_at': '', 'detached_at': '', 'volume_id': '47e883f3-6efe-40b3-be28-6c01525dfc0c', 'serial': '47e883f3-6efe-40b3-be28-6c01525dfc0c'}, 'disk_bus': 'virtio', 'guest_format': None, 'delete_on_termination': True, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 14:28:20 compute-0 nova_compute[250018]: 2026-01-20 14:28:20.958 250022 WARNING nova.virt.libvirt.driver [None req-d659419d-dcea-4ecd-9c9d-954854ebd236 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:28:21 compute-0 nova_compute[250018]: 2026-01-20 14:28:21.036 250022 DEBUG nova.virt.libvirt.host [None req-d659419d-dcea-4ecd-9c9d-954854ebd236 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 14:28:21 compute-0 nova_compute[250018]: 2026-01-20 14:28:21.037 250022 DEBUG nova.virt.libvirt.host [None req-d659419d-dcea-4ecd-9c9d-954854ebd236 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 14:28:21 compute-0 nova_compute[250018]: 2026-01-20 14:28:21.042 250022 DEBUG nova.virt.libvirt.host [None req-d659419d-dcea-4ecd-9c9d-954854ebd236 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 14:28:21 compute-0 nova_compute[250018]: 2026-01-20 14:28:21.043 250022 DEBUG nova.virt.libvirt.host [None req-d659419d-dcea-4ecd-9c9d-954854ebd236 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 14:28:21 compute-0 nova_compute[250018]: 2026-01-20 14:28:21.044 250022 DEBUG nova.virt.libvirt.driver [None req-d659419d-dcea-4ecd-9c9d-954854ebd236 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 14:28:21 compute-0 nova_compute[250018]: 2026-01-20 14:28:21.045 250022 DEBUG nova.virt.hardware [None req-d659419d-dcea-4ecd-9c9d-954854ebd236 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 14:28:21 compute-0 nova_compute[250018]: 2026-01-20 14:28:21.046 250022 DEBUG nova.virt.hardware [None req-d659419d-dcea-4ecd-9c9d-954854ebd236 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 14:28:21 compute-0 nova_compute[250018]: 2026-01-20 14:28:21.046 250022 DEBUG nova.virt.hardware [None req-d659419d-dcea-4ecd-9c9d-954854ebd236 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 14:28:21 compute-0 nova_compute[250018]: 2026-01-20 14:28:21.046 250022 DEBUG nova.virt.hardware [None req-d659419d-dcea-4ecd-9c9d-954854ebd236 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 14:28:21 compute-0 nova_compute[250018]: 2026-01-20 14:28:21.047 250022 DEBUG nova.virt.hardware [None req-d659419d-dcea-4ecd-9c9d-954854ebd236 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 14:28:21 compute-0 nova_compute[250018]: 2026-01-20 14:28:21.047 250022 DEBUG nova.virt.hardware [None req-d659419d-dcea-4ecd-9c9d-954854ebd236 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 14:28:21 compute-0 nova_compute[250018]: 2026-01-20 14:28:21.047 250022 DEBUG nova.virt.hardware [None req-d659419d-dcea-4ecd-9c9d-954854ebd236 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 14:28:21 compute-0 nova_compute[250018]: 2026-01-20 14:28:21.048 250022 DEBUG nova.virt.hardware [None req-d659419d-dcea-4ecd-9c9d-954854ebd236 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 14:28:21 compute-0 nova_compute[250018]: 2026-01-20 14:28:21.048 250022 DEBUG nova.virt.hardware [None req-d659419d-dcea-4ecd-9c9d-954854ebd236 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 14:28:21 compute-0 nova_compute[250018]: 2026-01-20 14:28:21.049 250022 DEBUG nova.virt.hardware [None req-d659419d-dcea-4ecd-9c9d-954854ebd236 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 14:28:21 compute-0 nova_compute[250018]: 2026-01-20 14:28:21.049 250022 DEBUG nova.virt.hardware [None req-d659419d-dcea-4ecd-9c9d-954854ebd236 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 14:28:21 compute-0 nova_compute[250018]: 2026-01-20 14:28:21.085 250022 DEBUG nova.storage.rbd_utils [None req-d659419d-dcea-4ecd-9c9d-954854ebd236 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] rbd image 79b5596e-43c9-4085-9829-454fecf59490_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:28:21 compute-0 nova_compute[250018]: 2026-01-20 14:28:21.089 250022 DEBUG oslo_concurrency.processutils [None req-d659419d-dcea-4ecd-9c9d-954854ebd236 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:28:21 compute-0 nova_compute[250018]: 2026-01-20 14:28:21.113 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:28:21 compute-0 nova_compute[250018]: 2026-01-20 14:28:21.115 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:28:21 compute-0 nova_compute[250018]: 2026-01-20 14:28:21.116 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 14:28:21 compute-0 nova_compute[250018]: 2026-01-20 14:28:21.116 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 14:28:21 compute-0 nova_compute[250018]: 2026-01-20 14:28:21.121 250022 DEBUG oslo_concurrency.lockutils [None req-19e36816-ee11-416b-b60d-d263c77faf4d eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] Lock "4ee9159e-bf2b-47b7-8568-47fd13815f05" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 1.019s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:28:21 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/4021864044' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:28:21 compute-0 nova_compute[250018]: 2026-01-20 14:28:21.144 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 20 14:28:21 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1120: 321 pgs: 321 active+clean; 339 MiB data, 506 MiB used, 20 GiB / 21 GiB avail; 960 KiB/s rd, 3.8 MiB/s wr, 114 op/s
Jan 20 14:28:21 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:28:21 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/428076612' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:28:21 compute-0 nova_compute[250018]: 2026-01-20 14:28:21.509 250022 DEBUG oslo_concurrency.processutils [None req-d659419d-dcea-4ecd-9c9d-954854ebd236 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.419s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:28:21 compute-0 nova_compute[250018]: 2026-01-20 14:28:21.673 250022 DEBUG nova.virt.libvirt.vif [None req-d659419d-dcea-4ecd-9c9d-954854ebd236 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:28:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-1483268234',display_name='tempest-LiveMigrationTest-server-1483268234',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-livemigrationtest-server-1483268234',id=23,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d15f60b9e48e4175b5520d1e57ed2d3a',ramdisk_id='',reservation_id='r-jglb1q09',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-LiveMigrationTest-864280704',owner_user_name='tempest-LiveMigrationTest-864280704-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:28:14Z,user_data=None,user_id='bce7fcbd19554e29bb80c5b93b7dd3c9',uuid=79b5596e-43c9-4085-9829-454fecf59490,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bd002580-dd95-49e1-bc34-e85f86272a05", "address": "fa:16:3e:24:ce:0d", "network": {"id": "14f18b27-1594-48d8-a08b-a930f7adbc08", "bridge": "br-int", "label": "tempest-LiveMigrationTest-2126108622-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d15f60b9e48e4175b5520d1e57ed2d3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd002580-dd", "ovs_interfaceid": "bd002580-dd95-49e1-bc34-e85f86272a05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 14:28:21 compute-0 nova_compute[250018]: 2026-01-20 14:28:21.674 250022 DEBUG nova.network.os_vif_util [None req-d659419d-dcea-4ecd-9c9d-954854ebd236 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] Converting VIF {"id": "bd002580-dd95-49e1-bc34-e85f86272a05", "address": "fa:16:3e:24:ce:0d", "network": {"id": "14f18b27-1594-48d8-a08b-a930f7adbc08", "bridge": "br-int", "label": "tempest-LiveMigrationTest-2126108622-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d15f60b9e48e4175b5520d1e57ed2d3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd002580-dd", "ovs_interfaceid": "bd002580-dd95-49e1-bc34-e85f86272a05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:28:21 compute-0 nova_compute[250018]: 2026-01-20 14:28:21.675 250022 DEBUG nova.network.os_vif_util [None req-d659419d-dcea-4ecd-9c9d-954854ebd236 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:24:ce:0d,bridge_name='br-int',has_traffic_filtering=True,id=bd002580-dd95-49e1-bc34-e85f86272a05,network=Network(14f18b27-1594-48d8-a08b-a930f7adbc08),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbd002580-dd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:28:21 compute-0 nova_compute[250018]: 2026-01-20 14:28:21.677 250022 DEBUG nova.objects.instance [None req-d659419d-dcea-4ecd-9c9d-954854ebd236 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] Lazy-loading 'pci_devices' on Instance uuid 79b5596e-43c9-4085-9829-454fecf59490 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:28:21 compute-0 nova_compute[250018]: 2026-01-20 14:28:21.695 250022 DEBUG nova.virt.libvirt.driver [None req-d659419d-dcea-4ecd-9c9d-954854ebd236 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] End _get_guest_xml xml=<domain type="kvm">
Jan 20 14:28:21 compute-0 nova_compute[250018]:   <uuid>79b5596e-43c9-4085-9829-454fecf59490</uuid>
Jan 20 14:28:21 compute-0 nova_compute[250018]:   <name>instance-00000017</name>
Jan 20 14:28:21 compute-0 nova_compute[250018]:   <memory>131072</memory>
Jan 20 14:28:21 compute-0 nova_compute[250018]:   <vcpu>1</vcpu>
Jan 20 14:28:21 compute-0 nova_compute[250018]:   <metadata>
Jan 20 14:28:21 compute-0 nova_compute[250018]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 14:28:21 compute-0 nova_compute[250018]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 14:28:21 compute-0 nova_compute[250018]:       <nova:name>tempest-LiveMigrationTest-server-1483268234</nova:name>
Jan 20 14:28:21 compute-0 nova_compute[250018]:       <nova:creationTime>2026-01-20 14:28:20</nova:creationTime>
Jan 20 14:28:21 compute-0 nova_compute[250018]:       <nova:flavor name="m1.nano">
Jan 20 14:28:21 compute-0 nova_compute[250018]:         <nova:memory>128</nova:memory>
Jan 20 14:28:21 compute-0 nova_compute[250018]:         <nova:disk>1</nova:disk>
Jan 20 14:28:21 compute-0 nova_compute[250018]:         <nova:swap>0</nova:swap>
Jan 20 14:28:21 compute-0 nova_compute[250018]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 14:28:21 compute-0 nova_compute[250018]:         <nova:vcpus>1</nova:vcpus>
Jan 20 14:28:21 compute-0 nova_compute[250018]:       </nova:flavor>
Jan 20 14:28:21 compute-0 nova_compute[250018]:       <nova:owner>
Jan 20 14:28:21 compute-0 nova_compute[250018]:         <nova:user uuid="bce7fcbd19554e29bb80c5b93b7dd3c9">tempest-LiveMigrationTest-864280704-project-member</nova:user>
Jan 20 14:28:21 compute-0 nova_compute[250018]:         <nova:project uuid="d15f60b9e48e4175b5520d1e57ed2d3a">tempest-LiveMigrationTest-864280704</nova:project>
Jan 20 14:28:21 compute-0 nova_compute[250018]:       </nova:owner>
Jan 20 14:28:21 compute-0 nova_compute[250018]:       <nova:ports>
Jan 20 14:28:21 compute-0 nova_compute[250018]:         <nova:port uuid="bd002580-dd95-49e1-bc34-e85f86272a05">
Jan 20 14:28:21 compute-0 nova_compute[250018]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Jan 20 14:28:21 compute-0 nova_compute[250018]:         </nova:port>
Jan 20 14:28:21 compute-0 nova_compute[250018]:       </nova:ports>
Jan 20 14:28:21 compute-0 nova_compute[250018]:     </nova:instance>
Jan 20 14:28:21 compute-0 nova_compute[250018]:   </metadata>
Jan 20 14:28:21 compute-0 nova_compute[250018]:   <sysinfo type="smbios">
Jan 20 14:28:21 compute-0 nova_compute[250018]:     <system>
Jan 20 14:28:21 compute-0 nova_compute[250018]:       <entry name="manufacturer">RDO</entry>
Jan 20 14:28:21 compute-0 nova_compute[250018]:       <entry name="product">OpenStack Compute</entry>
Jan 20 14:28:21 compute-0 nova_compute[250018]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 14:28:21 compute-0 nova_compute[250018]:       <entry name="serial">79b5596e-43c9-4085-9829-454fecf59490</entry>
Jan 20 14:28:21 compute-0 nova_compute[250018]:       <entry name="uuid">79b5596e-43c9-4085-9829-454fecf59490</entry>
Jan 20 14:28:21 compute-0 nova_compute[250018]:       <entry name="family">Virtual Machine</entry>
Jan 20 14:28:21 compute-0 nova_compute[250018]:     </system>
Jan 20 14:28:21 compute-0 nova_compute[250018]:   </sysinfo>
Jan 20 14:28:21 compute-0 nova_compute[250018]:   <os>
Jan 20 14:28:21 compute-0 nova_compute[250018]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 14:28:21 compute-0 nova_compute[250018]:     <boot dev="hd"/>
Jan 20 14:28:21 compute-0 nova_compute[250018]:     <smbios mode="sysinfo"/>
Jan 20 14:28:21 compute-0 nova_compute[250018]:   </os>
Jan 20 14:28:21 compute-0 nova_compute[250018]:   <features>
Jan 20 14:28:21 compute-0 nova_compute[250018]:     <acpi/>
Jan 20 14:28:21 compute-0 nova_compute[250018]:     <apic/>
Jan 20 14:28:21 compute-0 nova_compute[250018]:     <vmcoreinfo/>
Jan 20 14:28:21 compute-0 nova_compute[250018]:   </features>
Jan 20 14:28:21 compute-0 nova_compute[250018]:   <clock offset="utc">
Jan 20 14:28:21 compute-0 nova_compute[250018]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 14:28:21 compute-0 nova_compute[250018]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 14:28:21 compute-0 nova_compute[250018]:     <timer name="hpet" present="no"/>
Jan 20 14:28:21 compute-0 nova_compute[250018]:   </clock>
Jan 20 14:28:21 compute-0 nova_compute[250018]:   <cpu mode="custom" match="exact">
Jan 20 14:28:21 compute-0 nova_compute[250018]:     <model>Nehalem</model>
Jan 20 14:28:21 compute-0 nova_compute[250018]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 14:28:21 compute-0 nova_compute[250018]:   </cpu>
Jan 20 14:28:21 compute-0 nova_compute[250018]:   <devices>
Jan 20 14:28:21 compute-0 nova_compute[250018]:     <disk type="network" device="cdrom">
Jan 20 14:28:21 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 14:28:21 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/79b5596e-43c9-4085-9829-454fecf59490_disk.config">
Jan 20 14:28:21 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:28:21 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:28:21 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:28:21 compute-0 nova_compute[250018]:       </source>
Jan 20 14:28:21 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 14:28:21 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:28:21 compute-0 nova_compute[250018]:       </auth>
Jan 20 14:28:21 compute-0 nova_compute[250018]:       <target dev="sda" bus="sata"/>
Jan 20 14:28:21 compute-0 nova_compute[250018]:     </disk>
Jan 20 14:28:21 compute-0 nova_compute[250018]:     <disk type="network" device="disk">
Jan 20 14:28:21 compute-0 nova_compute[250018]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 20 14:28:21 compute-0 nova_compute[250018]:       <source protocol="rbd" name="volumes/volume-47e883f3-6efe-40b3-be28-6c01525dfc0c">
Jan 20 14:28:21 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:28:21 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:28:21 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:28:21 compute-0 nova_compute[250018]:       </source>
Jan 20 14:28:21 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 14:28:21 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:28:21 compute-0 nova_compute[250018]:       </auth>
Jan 20 14:28:21 compute-0 nova_compute[250018]:       <target dev="vda" bus="virtio"/>
Jan 20 14:28:21 compute-0 nova_compute[250018]:       <serial>47e883f3-6efe-40b3-be28-6c01525dfc0c</serial>
Jan 20 14:28:21 compute-0 nova_compute[250018]:     </disk>
Jan 20 14:28:21 compute-0 nova_compute[250018]:     <interface type="ethernet">
Jan 20 14:28:21 compute-0 nova_compute[250018]:       <mac address="fa:16:3e:24:ce:0d"/>
Jan 20 14:28:21 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 14:28:21 compute-0 nova_compute[250018]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 14:28:21 compute-0 nova_compute[250018]:       <mtu size="1442"/>
Jan 20 14:28:21 compute-0 nova_compute[250018]:       <target dev="tapbd002580-dd"/>
Jan 20 14:28:21 compute-0 nova_compute[250018]:     </interface>
Jan 20 14:28:21 compute-0 nova_compute[250018]:     <serial type="pty">
Jan 20 14:28:21 compute-0 nova_compute[250018]:       <log file="/var/lib/nova/instances/79b5596e-43c9-4085-9829-454fecf59490/console.log" append="off"/>
Jan 20 14:28:21 compute-0 nova_compute[250018]:     </serial>
Jan 20 14:28:21 compute-0 nova_compute[250018]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 14:28:21 compute-0 nova_compute[250018]:     <video>
Jan 20 14:28:21 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 14:28:21 compute-0 nova_compute[250018]:     </video>
Jan 20 14:28:21 compute-0 nova_compute[250018]:     <input type="tablet" bus="usb"/>
Jan 20 14:28:21 compute-0 nova_compute[250018]:     <rng model="virtio">
Jan 20 14:28:21 compute-0 nova_compute[250018]:       <backend model="random">/dev/urandom</backend>
Jan 20 14:28:21 compute-0 nova_compute[250018]:     </rng>
Jan 20 14:28:21 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root"/>
Jan 20 14:28:21 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:28:21 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:28:21 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:28:21 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:28:21 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:28:21 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:28:21 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:28:21 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:28:21 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:28:21 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:28:21 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:28:21 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:28:21 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:28:21 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:28:21 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:28:21 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:28:21 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:28:21 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:28:21 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:28:21 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:28:21 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:28:21 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:28:21 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:28:21 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:28:21 compute-0 nova_compute[250018]:     <controller type="usb" index="0"/>
Jan 20 14:28:21 compute-0 nova_compute[250018]:     <memballoon model="virtio">
Jan 20 14:28:21 compute-0 nova_compute[250018]:       <stats period="10"/>
Jan 20 14:28:21 compute-0 nova_compute[250018]:     </memballoon>
Jan 20 14:28:21 compute-0 nova_compute[250018]:   </devices>
Jan 20 14:28:21 compute-0 nova_compute[250018]: </domain>
Jan 20 14:28:21 compute-0 nova_compute[250018]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 14:28:21 compute-0 nova_compute[250018]: 2026-01-20 14:28:21.696 250022 DEBUG nova.compute.manager [None req-d659419d-dcea-4ecd-9c9d-954854ebd236 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Preparing to wait for external event network-vif-plugged-bd002580-dd95-49e1-bc34-e85f86272a05 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 20 14:28:21 compute-0 nova_compute[250018]: 2026-01-20 14:28:21.697 250022 DEBUG oslo_concurrency.lockutils [None req-d659419d-dcea-4ecd-9c9d-954854ebd236 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] Acquiring lock "79b5596e-43c9-4085-9829-454fecf59490-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:28:21 compute-0 nova_compute[250018]: 2026-01-20 14:28:21.697 250022 DEBUG oslo_concurrency.lockutils [None req-d659419d-dcea-4ecd-9c9d-954854ebd236 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] Lock "79b5596e-43c9-4085-9829-454fecf59490-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:28:21 compute-0 nova_compute[250018]: 2026-01-20 14:28:21.697 250022 DEBUG oslo_concurrency.lockutils [None req-d659419d-dcea-4ecd-9c9d-954854ebd236 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] Lock "79b5596e-43c9-4085-9829-454fecf59490-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:28:21 compute-0 nova_compute[250018]: 2026-01-20 14:28:21.698 250022 DEBUG nova.virt.libvirt.vif [None req-d659419d-dcea-4ecd-9c9d-954854ebd236 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:28:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-1483268234',display_name='tempest-LiveMigrationTest-server-1483268234',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-livemigrationtest-server-1483268234',id=23,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d15f60b9e48e4175b5520d1e57ed2d3a',ramdisk_id='',reservation_id='r-jglb1q09',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-LiveMigrationTest-864280704',owner_user_name='tempest-LiveMigrationTest-864280704-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:28:14Z,user_data=None,user_id='bce7fcbd19554e29bb80c5b93b7dd3c9',uuid=79b5596e-43c9-4085-9829-454fecf59490,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bd002580-dd95-49e1-bc34-e85f86272a05", "address": "fa:16:3e:24:ce:0d", "network": {"id": "14f18b27-1594-48d8-a08b-a930f7adbc08", "bridge": "br-int", "label": "tempest-LiveMigrationTest-2126108622-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d15f60b9e48e4175b5520d1e57ed2d3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd002580-dd", "ovs_interfaceid": "bd002580-dd95-49e1-bc34-e85f86272a05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 14:28:21 compute-0 nova_compute[250018]: 2026-01-20 14:28:21.699 250022 DEBUG nova.network.os_vif_util [None req-d659419d-dcea-4ecd-9c9d-954854ebd236 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] Converting VIF {"id": "bd002580-dd95-49e1-bc34-e85f86272a05", "address": "fa:16:3e:24:ce:0d", "network": {"id": "14f18b27-1594-48d8-a08b-a930f7adbc08", "bridge": "br-int", "label": "tempest-LiveMigrationTest-2126108622-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d15f60b9e48e4175b5520d1e57ed2d3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd002580-dd", "ovs_interfaceid": "bd002580-dd95-49e1-bc34-e85f86272a05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:28:21 compute-0 nova_compute[250018]: 2026-01-20 14:28:21.699 250022 DEBUG nova.network.os_vif_util [None req-d659419d-dcea-4ecd-9c9d-954854ebd236 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:24:ce:0d,bridge_name='br-int',has_traffic_filtering=True,id=bd002580-dd95-49e1-bc34-e85f86272a05,network=Network(14f18b27-1594-48d8-a08b-a930f7adbc08),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbd002580-dd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:28:21 compute-0 nova_compute[250018]: 2026-01-20 14:28:21.700 250022 DEBUG os_vif [None req-d659419d-dcea-4ecd-9c9d-954854ebd236 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:24:ce:0d,bridge_name='br-int',has_traffic_filtering=True,id=bd002580-dd95-49e1-bc34-e85f86272a05,network=Network(14f18b27-1594-48d8-a08b-a930f7adbc08),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbd002580-dd') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 14:28:21 compute-0 nova_compute[250018]: 2026-01-20 14:28:21.700 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:28:21 compute-0 nova_compute[250018]: 2026-01-20 14:28:21.701 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:28:21 compute-0 nova_compute[250018]: 2026-01-20 14:28:21.702 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:28:21 compute-0 nova_compute[250018]: 2026-01-20 14:28:21.706 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:28:21 compute-0 nova_compute[250018]: 2026-01-20 14:28:21.706 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbd002580-dd, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:28:21 compute-0 nova_compute[250018]: 2026-01-20 14:28:21.707 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbd002580-dd, col_values=(('external_ids', {'iface-id': 'bd002580-dd95-49e1-bc34-e85f86272a05', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:24:ce:0d', 'vm-uuid': '79b5596e-43c9-4085-9829-454fecf59490'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:28:21 compute-0 nova_compute[250018]: 2026-01-20 14:28:21.708 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:28:21 compute-0 NetworkManager[48960]: <info>  [1768919301.7094] manager: (tapbd002580-dd): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/47)
Jan 20 14:28:21 compute-0 nova_compute[250018]: 2026-01-20 14:28:21.711 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:28:21 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:28:21 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:28:21 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:28:21.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:28:21 compute-0 nova_compute[250018]: 2026-01-20 14:28:21.717 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:28:21 compute-0 nova_compute[250018]: 2026-01-20 14:28:21.719 250022 INFO os_vif [None req-d659419d-dcea-4ecd-9c9d-954854ebd236 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:24:ce:0d,bridge_name='br-int',has_traffic_filtering=True,id=bd002580-dd95-49e1-bc34-e85f86272a05,network=Network(14f18b27-1594-48d8-a08b-a930f7adbc08),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbd002580-dd')
Jan 20 14:28:21 compute-0 nova_compute[250018]: 2026-01-20 14:28:21.747 250022 DEBUG oslo_concurrency.lockutils [None req-75656ce7-0a91-46f1-8654-13497dc979d1 eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] Acquiring lock "4ee9159e-bf2b-47b7-8568-47fd13815f05" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:28:21 compute-0 nova_compute[250018]: 2026-01-20 14:28:21.748 250022 DEBUG oslo_concurrency.lockutils [None req-75656ce7-0a91-46f1-8654-13497dc979d1 eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] Lock "4ee9159e-bf2b-47b7-8568-47fd13815f05" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:28:21 compute-0 nova_compute[250018]: 2026-01-20 14:28:21.748 250022 DEBUG oslo_concurrency.lockutils [None req-75656ce7-0a91-46f1-8654-13497dc979d1 eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] Acquiring lock "4ee9159e-bf2b-47b7-8568-47fd13815f05-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:28:21 compute-0 nova_compute[250018]: 2026-01-20 14:28:21.748 250022 DEBUG oslo_concurrency.lockutils [None req-75656ce7-0a91-46f1-8654-13497dc979d1 eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] Lock "4ee9159e-bf2b-47b7-8568-47fd13815f05-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:28:21 compute-0 nova_compute[250018]: 2026-01-20 14:28:21.749 250022 DEBUG oslo_concurrency.lockutils [None req-75656ce7-0a91-46f1-8654-13497dc979d1 eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] Lock "4ee9159e-bf2b-47b7-8568-47fd13815f05-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:28:21 compute-0 nova_compute[250018]: 2026-01-20 14:28:21.750 250022 INFO nova.compute.manager [None req-75656ce7-0a91-46f1-8654-13497dc979d1 eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] [instance: 4ee9159e-bf2b-47b7-8568-47fd13815f05] Terminating instance
Jan 20 14:28:21 compute-0 nova_compute[250018]: 2026-01-20 14:28:21.750 250022 DEBUG nova.compute.manager [None req-75656ce7-0a91-46f1-8654-13497dc979d1 eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] [instance: 4ee9159e-bf2b-47b7-8568-47fd13815f05] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 14:28:21 compute-0 nova_compute[250018]: 2026-01-20 14:28:21.783 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "refresh_cache-4ee9159e-bf2b-47b7-8568-47fd13815f05" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:28:21 compute-0 nova_compute[250018]: 2026-01-20 14:28:21.784 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquired lock "refresh_cache-4ee9159e-bf2b-47b7-8568-47fd13815f05" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:28:21 compute-0 nova_compute[250018]: 2026-01-20 14:28:21.784 250022 DEBUG nova.network.neutron [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] [instance: 4ee9159e-bf2b-47b7-8568-47fd13815f05] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 20 14:28:21 compute-0 nova_compute[250018]: 2026-01-20 14:28:21.784 250022 DEBUG nova.objects.instance [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 4ee9159e-bf2b-47b7-8568-47fd13815f05 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:28:21 compute-0 kernel: tap249be315-41 (unregistering): left promiscuous mode
Jan 20 14:28:21 compute-0 NetworkManager[48960]: <info>  [1768919301.8094] device (tap249be315-41): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 14:28:21 compute-0 nova_compute[250018]: 2026-01-20 14:28:21.817 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:28:21 compute-0 nova_compute[250018]: 2026-01-20 14:28:21.826 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:28:21 compute-0 ovn_controller[148666]: 2026-01-20T14:28:21Z|00067|binding|INFO|Releasing lport 249be315-41d6-478d-9d9a-f3251b200e7f from this chassis (sb_readonly=0)
Jan 20 14:28:21 compute-0 ovn_controller[148666]: 2026-01-20T14:28:21Z|00068|binding|INFO|Setting lport 249be315-41d6-478d-9d9a-f3251b200e7f down in Southbound
Jan 20 14:28:21 compute-0 ovn_controller[148666]: 2026-01-20T14:28:21Z|00069|binding|INFO|Removing iface tap249be315-41 ovn-installed in OVS
Jan 20 14:28:21 compute-0 nova_compute[250018]: 2026-01-20 14:28:21.829 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:28:21 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:28:21.836 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e2:1c:e2 10.100.0.10'], port_security=['fa:16:3e:e2:1c:e2 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '4ee9159e-bf2b-47b7-8568-47fd13815f05', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-652d7129-cd9b-4229-8d54-211a1946e427', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '944b426a2d4c4ad3a01f0b855ad36509', 'neutron:revision_number': '4', 'neutron:security_group_ids': '4bf82b26-b849-4d35-a448-97009e04bc43', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.207'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9eeeeb41-2381-4a49-aecf-33a614ed88ea, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=249be315-41d6-478d-9d9a-f3251b200e7f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:28:21 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:28:21.838 160071 INFO neutron.agent.ovn.metadata.agent [-] Port 249be315-41d6-478d-9d9a-f3251b200e7f in datapath 652d7129-cd9b-4229-8d54-211a1946e427 unbound from our chassis
Jan 20 14:28:21 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:28:21.839 160071 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 652d7129-cd9b-4229-8d54-211a1946e427, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 14:28:21 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:28:21.841 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[63e223bb-07d8-4a44-936c-c105948e053e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:28:21 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:28:21.842 160071 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-652d7129-cd9b-4229-8d54-211a1946e427 namespace which is not needed anymore
Jan 20 14:28:21 compute-0 nova_compute[250018]: 2026-01-20 14:28:21.845 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:28:21 compute-0 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000015.scope: Deactivated successfully.
Jan 20 14:28:21 compute-0 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000015.scope: Consumed 14.725s CPU time.
Jan 20 14:28:21 compute-0 systemd-machined[216401]: Machine qemu-9-instance-00000015 terminated.
Jan 20 14:28:21 compute-0 nova_compute[250018]: 2026-01-20 14:28:21.987 250022 INFO nova.virt.libvirt.driver [-] [instance: 4ee9159e-bf2b-47b7-8568-47fd13815f05] Instance destroyed successfully.
Jan 20 14:28:21 compute-0 nova_compute[250018]: 2026-01-20 14:28:21.988 250022 DEBUG nova.objects.instance [None req-75656ce7-0a91-46f1-8654-13497dc979d1 eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] Lazy-loading 'resources' on Instance uuid 4ee9159e-bf2b-47b7-8568-47fd13815f05 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:28:22 compute-0 neutron-haproxy-ovnmeta-652d7129-cd9b-4229-8d54-211a1946e427[267809]: [NOTICE]   (267843) : haproxy version is 2.8.14-c23fe91
Jan 20 14:28:22 compute-0 neutron-haproxy-ovnmeta-652d7129-cd9b-4229-8d54-211a1946e427[267809]: [NOTICE]   (267843) : path to executable is /usr/sbin/haproxy
Jan 20 14:28:22 compute-0 neutron-haproxy-ovnmeta-652d7129-cd9b-4229-8d54-211a1946e427[267809]: [WARNING]  (267843) : Exiting Master process...
Jan 20 14:28:22 compute-0 neutron-haproxy-ovnmeta-652d7129-cd9b-4229-8d54-211a1946e427[267809]: [WARNING]  (267843) : Exiting Master process...
Jan 20 14:28:22 compute-0 neutron-haproxy-ovnmeta-652d7129-cd9b-4229-8d54-211a1946e427[267809]: [ALERT]    (267843) : Current worker (267848) exited with code 143 (Terminated)
Jan 20 14:28:22 compute-0 neutron-haproxy-ovnmeta-652d7129-cd9b-4229-8d54-211a1946e427[267809]: [WARNING]  (267843) : All workers exited. Exiting... (0)
Jan 20 14:28:22 compute-0 systemd[1]: libpod-58ef78b08fdf4e79f528172a82052cb9dd1e724dfa4ce29377bbc38b842da7de.scope: Deactivated successfully.
Jan 20 14:28:22 compute-0 podman[268350]: 2026-01-20 14:28:22.018352911 +0000 UTC m=+0.050281372 container died 58ef78b08fdf4e79f528172a82052cb9dd1e724dfa4ce29377bbc38b842da7de (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-652d7129-cd9b-4229-8d54-211a1946e427, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 20 14:28:22 compute-0 nova_compute[250018]: 2026-01-20 14:28:22.061 250022 DEBUG nova.virt.libvirt.driver [None req-d659419d-dcea-4ecd-9c9d-954854ebd236 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:28:22 compute-0 nova_compute[250018]: 2026-01-20 14:28:22.062 250022 DEBUG nova.virt.libvirt.driver [None req-d659419d-dcea-4ecd-9c9d-954854ebd236 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:28:22 compute-0 nova_compute[250018]: 2026-01-20 14:28:22.062 250022 DEBUG nova.virt.libvirt.driver [None req-d659419d-dcea-4ecd-9c9d-954854ebd236 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] No VIF found with MAC fa:16:3e:24:ce:0d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 14:28:22 compute-0 nova_compute[250018]: 2026-01-20 14:28:22.062 250022 INFO nova.virt.libvirt.driver [None req-d659419d-dcea-4ecd-9c9d-954854ebd236 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Using config drive
Jan 20 14:28:22 compute-0 nova_compute[250018]: 2026-01-20 14:28:22.088 250022 DEBUG nova.storage.rbd_utils [None req-d659419d-dcea-4ecd-9c9d-954854ebd236 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] rbd image 79b5596e-43c9-4085-9829-454fecf59490_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:28:22 compute-0 nova_compute[250018]: 2026-01-20 14:28:22.095 250022 DEBUG nova.virt.libvirt.vif [None req-75656ce7-0a91-46f1-8654-13497dc979d1 eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:27:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachSCSIVolumeTestJSON-server-1306178105',display_name='tempest-AttachSCSIVolumeTestJSON-server-1306178105',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachscsivolumetestjson-server-1306178105',id=21,image_ref='afa12d53-6955-4be8-8dd3-8e7dd18a3d5b',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLcwkLfz+n0xUADkH5hSwgQR9Vlq54FRm6knqxSeXMPjygHrcFpUP17ZQ6w9WwqEnkXLzYkY49szBEmF9sDE/LAQKFFOum+jTv2mFAAqYD9yBP8bjfXxEQt6qfFVHpdBww==',key_name='tempest-keypair-919175856',keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:27:53Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='944b426a2d4c4ad3a01f0b855ad36509',ramdisk_id='',reservation_id='r-3025v9a1',resources=None,root_device_name='/dev/sda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='afa12d53-6955-4be8-8dd3-8e7dd18a3d5b',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='scsi',image_hw_disk_bus='scsi',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_scsi_model='virtio-scsi',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachSCSIVolumeTestJSON-687112080',owner_user_name='tempest-AttachSCSIVolumeTestJSON-687112080-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T14:27:53Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='eae4ac21a700463eadfdbe7717ed8b13',uuid=4ee9159e-bf2b-47b7-8568-47fd13815f05,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "249be315-41d6-478d-9d9a-f3251b200e7f", "address": "fa:16:3e:e2:1c:e2", "network": {"id": "652d7129-cd9b-4229-8d54-211a1946e427", "bridge": "br-int", "label": "tempest-AttachSCSIVolumeTestJSON-427094000-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": 
[], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "944b426a2d4c4ad3a01f0b855ad36509", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap249be315-41", "ovs_interfaceid": "249be315-41d6-478d-9d9a-f3251b200e7f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 14:28:22 compute-0 nova_compute[250018]: 2026-01-20 14:28:22.096 250022 DEBUG nova.network.os_vif_util [None req-75656ce7-0a91-46f1-8654-13497dc979d1 eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] Converting VIF {"id": "249be315-41d6-478d-9d9a-f3251b200e7f", "address": "fa:16:3e:e2:1c:e2", "network": {"id": "652d7129-cd9b-4229-8d54-211a1946e427", "bridge": "br-int", "label": "tempest-AttachSCSIVolumeTestJSON-427094000-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "944b426a2d4c4ad3a01f0b855ad36509", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap249be315-41", "ovs_interfaceid": "249be315-41d6-478d-9d9a-f3251b200e7f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:28:22 compute-0 nova_compute[250018]: 2026-01-20 14:28:22.096 250022 DEBUG nova.network.os_vif_util [None req-75656ce7-0a91-46f1-8654-13497dc979d1 eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e2:1c:e2,bridge_name='br-int',has_traffic_filtering=True,id=249be315-41d6-478d-9d9a-f3251b200e7f,network=Network(652d7129-cd9b-4229-8d54-211a1946e427),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap249be315-41') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:28:22 compute-0 nova_compute[250018]: 2026-01-20 14:28:22.096 250022 DEBUG os_vif [None req-75656ce7-0a91-46f1-8654-13497dc979d1 eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e2:1c:e2,bridge_name='br-int',has_traffic_filtering=True,id=249be315-41d6-478d-9d9a-f3251b200e7f,network=Network(652d7129-cd9b-4229-8d54-211a1946e427),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap249be315-41') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 14:28:22 compute-0 nova_compute[250018]: 2026-01-20 14:28:22.098 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:28:22 compute-0 nova_compute[250018]: 2026-01-20 14:28:22.098 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap249be315-41, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:28:22 compute-0 nova_compute[250018]: 2026-01-20 14:28:22.100 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:28:22 compute-0 nova_compute[250018]: 2026-01-20 14:28:22.102 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:28:22 compute-0 nova_compute[250018]: 2026-01-20 14:28:22.103 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:28:22 compute-0 nova_compute[250018]: 2026-01-20 14:28:22.105 250022 INFO os_vif [None req-75656ce7-0a91-46f1-8654-13497dc979d1 eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e2:1c:e2,bridge_name='br-int',has_traffic_filtering=True,id=249be315-41d6-478d-9d9a-f3251b200e7f,network=Network(652d7129-cd9b-4229-8d54-211a1946e427),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap249be315-41')
Jan 20 14:28:22 compute-0 ceph-mon[74360]: pgmap v1120: 321 pgs: 321 active+clean; 339 MiB data, 506 MiB used, 20 GiB / 21 GiB avail; 960 KiB/s rd, 3.8 MiB/s wr, 114 op/s
Jan 20 14:28:22 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/428076612' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:28:22 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/117801400' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:28:22 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-58ef78b08fdf4e79f528172a82052cb9dd1e724dfa4ce29377bbc38b842da7de-userdata-shm.mount: Deactivated successfully.
Jan 20 14:28:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-4439b8b3aa10798cce22576b290ed1c19f760f88b30f60c6330c5387a21d171a-merged.mount: Deactivated successfully.
Jan 20 14:28:22 compute-0 podman[268350]: 2026-01-20 14:28:22.178440314 +0000 UTC m=+0.210368795 container cleanup 58ef78b08fdf4e79f528172a82052cb9dd1e724dfa4ce29377bbc38b842da7de (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-652d7129-cd9b-4229-8d54-211a1946e427, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 20 14:28:22 compute-0 systemd[1]: libpod-conmon-58ef78b08fdf4e79f528172a82052cb9dd1e724dfa4ce29377bbc38b842da7de.scope: Deactivated successfully.
Jan 20 14:28:22 compute-0 nova_compute[250018]: 2026-01-20 14:28:22.368 250022 DEBUG nova.compute.manager [req-9e521783-f2ac-4546-9bf3-8f7d6e0314fa req-4d590bee-598c-4bda-854b-d4d271a810fe 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 4ee9159e-bf2b-47b7-8568-47fd13815f05] Received event network-vif-unplugged-249be315-41d6-478d-9d9a-f3251b200e7f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:28:22 compute-0 nova_compute[250018]: 2026-01-20 14:28:22.369 250022 DEBUG oslo_concurrency.lockutils [req-9e521783-f2ac-4546-9bf3-8f7d6e0314fa req-4d590bee-598c-4bda-854b-d4d271a810fe 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "4ee9159e-bf2b-47b7-8568-47fd13815f05-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:28:22 compute-0 nova_compute[250018]: 2026-01-20 14:28:22.369 250022 DEBUG oslo_concurrency.lockutils [req-9e521783-f2ac-4546-9bf3-8f7d6e0314fa req-4d590bee-598c-4bda-854b-d4d271a810fe 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "4ee9159e-bf2b-47b7-8568-47fd13815f05-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:28:22 compute-0 nova_compute[250018]: 2026-01-20 14:28:22.369 250022 DEBUG oslo_concurrency.lockutils [req-9e521783-f2ac-4546-9bf3-8f7d6e0314fa req-4d590bee-598c-4bda-854b-d4d271a810fe 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "4ee9159e-bf2b-47b7-8568-47fd13815f05-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:28:22 compute-0 nova_compute[250018]: 2026-01-20 14:28:22.370 250022 DEBUG nova.compute.manager [req-9e521783-f2ac-4546-9bf3-8f7d6e0314fa req-4d590bee-598c-4bda-854b-d4d271a810fe 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 4ee9159e-bf2b-47b7-8568-47fd13815f05] No waiting events found dispatching network-vif-unplugged-249be315-41d6-478d-9d9a-f3251b200e7f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:28:22 compute-0 nova_compute[250018]: 2026-01-20 14:28:22.370 250022 DEBUG nova.compute.manager [req-9e521783-f2ac-4546-9bf3-8f7d6e0314fa req-4d590bee-598c-4bda-854b-d4d271a810fe 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 4ee9159e-bf2b-47b7-8568-47fd13815f05] Received event network-vif-unplugged-249be315-41d6-478d-9d9a-f3251b200e7f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 20 14:28:22 compute-0 podman[268429]: 2026-01-20 14:28:22.42790849 +0000 UTC m=+0.216681205 container remove 58ef78b08fdf4e79f528172a82052cb9dd1e724dfa4ce29377bbc38b842da7de (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-652d7129-cd9b-4229-8d54-211a1946e427, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202)
Jan 20 14:28:22 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:28:22.437 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[1ef02a38-d64a-4acb-975f-cac2729f8202]: (4, ('Tue Jan 20 02:28:21 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-652d7129-cd9b-4229-8d54-211a1946e427 (58ef78b08fdf4e79f528172a82052cb9dd1e724dfa4ce29377bbc38b842da7de)\n58ef78b08fdf4e79f528172a82052cb9dd1e724dfa4ce29377bbc38b842da7de\nTue Jan 20 02:28:22 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-652d7129-cd9b-4229-8d54-211a1946e427 (58ef78b08fdf4e79f528172a82052cb9dd1e724dfa4ce29377bbc38b842da7de)\n58ef78b08fdf4e79f528172a82052cb9dd1e724dfa4ce29377bbc38b842da7de\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:28:22 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:28:22.439 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[2a77b1a6-a9a8-448a-bcb0-23c99c69515f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:28:22 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:28:22.440 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap652d7129-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:28:22 compute-0 nova_compute[250018]: 2026-01-20 14:28:22.442 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:28:22 compute-0 kernel: tap652d7129-c0: left promiscuous mode
Jan 20 14:28:22 compute-0 nova_compute[250018]: 2026-01-20 14:28:22.460 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:28:22 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:28:22.464 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[2cbcbba0-c26e-467b-a0c6-0164d414001b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:28:22 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:28:22.478 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[93ed0541-6494-4b99-adf4-df49c630a1fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:28:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:28:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:28:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:28:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:28:22 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:28:22.480 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[0c30d04a-d5ad-4b0e-850d-b51ba2933a91]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:28:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:28:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:28:22 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:28:22.496 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[1925ffab-2aa3-4996-864d-1c253adf4bcb]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 521357, 'reachable_time': 39851, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 268448, 'error': None, 'target': 'ovnmeta-652d7129-cd9b-4229-8d54-211a1946e427', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:28:22 compute-0 systemd[1]: run-netns-ovnmeta\x2d652d7129\x2dcd9b\x2d4229\x2d8d54\x2d211a1946e427.mount: Deactivated successfully.
Jan 20 14:28:22 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:28:22.500 160517 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-652d7129-cd9b-4229-8d54-211a1946e427 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 14:28:22 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:28:22.500 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[2a0af286-2d26-431c-a57f-ce6e1a99f4bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:28:22 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:28:22 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:28:22 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:28:22.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:28:22 compute-0 nova_compute[250018]: 2026-01-20 14:28:22.718 250022 INFO nova.virt.libvirt.driver [None req-75656ce7-0a91-46f1-8654-13497dc979d1 eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] [instance: 4ee9159e-bf2b-47b7-8568-47fd13815f05] Deleting instance files /var/lib/nova/instances/4ee9159e-bf2b-47b7-8568-47fd13815f05_del
Jan 20 14:28:22 compute-0 nova_compute[250018]: 2026-01-20 14:28:22.719 250022 INFO nova.virt.libvirt.driver [None req-75656ce7-0a91-46f1-8654-13497dc979d1 eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] [instance: 4ee9159e-bf2b-47b7-8568-47fd13815f05] Deletion of /var/lib/nova/instances/4ee9159e-bf2b-47b7-8568-47fd13815f05_del complete
Jan 20 14:28:22 compute-0 nova_compute[250018]: 2026-01-20 14:28:22.839 250022 INFO nova.virt.libvirt.driver [None req-d659419d-dcea-4ecd-9c9d-954854ebd236 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Creating config drive at /var/lib/nova/instances/79b5596e-43c9-4085-9829-454fecf59490/disk.config
Jan 20 14:28:22 compute-0 nova_compute[250018]: 2026-01-20 14:28:22.844 250022 DEBUG oslo_concurrency.processutils [None req-d659419d-dcea-4ecd-9c9d-954854ebd236 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/79b5596e-43c9-4085-9829-454fecf59490/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpez9qm4vt execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:28:22 compute-0 nova_compute[250018]: 2026-01-20 14:28:22.868 250022 INFO nova.compute.manager [None req-75656ce7-0a91-46f1-8654-13497dc979d1 eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] [instance: 4ee9159e-bf2b-47b7-8568-47fd13815f05] Took 1.12 seconds to destroy the instance on the hypervisor.
Jan 20 14:28:22 compute-0 nova_compute[250018]: 2026-01-20 14:28:22.869 250022 DEBUG oslo.service.loopingcall [None req-75656ce7-0a91-46f1-8654-13497dc979d1 eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 14:28:22 compute-0 nova_compute[250018]: 2026-01-20 14:28:22.870 250022 DEBUG nova.compute.manager [-] [instance: 4ee9159e-bf2b-47b7-8568-47fd13815f05] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 14:28:22 compute-0 nova_compute[250018]: 2026-01-20 14:28:22.870 250022 DEBUG nova.network.neutron [-] [instance: 4ee9159e-bf2b-47b7-8568-47fd13815f05] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 14:28:22 compute-0 nova_compute[250018]: 2026-01-20 14:28:22.974 250022 DEBUG oslo_concurrency.processutils [None req-d659419d-dcea-4ecd-9c9d-954854ebd236 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/79b5596e-43c9-4085-9829-454fecf59490/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpez9qm4vt" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:28:23 compute-0 nova_compute[250018]: 2026-01-20 14:28:23.012 250022 DEBUG nova.storage.rbd_utils [None req-d659419d-dcea-4ecd-9c9d-954854ebd236 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] rbd image 79b5596e-43c9-4085-9829-454fecf59490_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:28:23 compute-0 nova_compute[250018]: 2026-01-20 14:28:23.017 250022 DEBUG oslo_concurrency.processutils [None req-d659419d-dcea-4ecd-9c9d-954854ebd236 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/79b5596e-43c9-4085-9829-454fecf59490/disk.config 79b5596e-43c9-4085-9829-454fecf59490_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:28:23 compute-0 nova_compute[250018]: 2026-01-20 14:28:23.222 250022 DEBUG oslo_concurrency.processutils [None req-d659419d-dcea-4ecd-9c9d-954854ebd236 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/79b5596e-43c9-4085-9829-454fecf59490/disk.config 79b5596e-43c9-4085-9829-454fecf59490_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.205s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:28:23 compute-0 nova_compute[250018]: 2026-01-20 14:28:23.223 250022 INFO nova.virt.libvirt.driver [None req-d659419d-dcea-4ecd-9c9d-954854ebd236 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Deleting local config drive /var/lib/nova/instances/79b5596e-43c9-4085-9829-454fecf59490/disk.config because it was imported into RBD.
Jan 20 14:28:23 compute-0 kernel: tapbd002580-dd: entered promiscuous mode
Jan 20 14:28:23 compute-0 NetworkManager[48960]: <info>  [1768919303.2765] manager: (tapbd002580-dd): new Tun device (/org/freedesktop/NetworkManager/Devices/48)
Jan 20 14:28:23 compute-0 systemd-udevd[268330]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:28:23 compute-0 ovn_controller[148666]: 2026-01-20T14:28:23Z|00070|binding|INFO|Claiming lport bd002580-dd95-49e1-bc34-e85f86272a05 for this chassis.
Jan 20 14:28:23 compute-0 ovn_controller[148666]: 2026-01-20T14:28:23Z|00071|binding|INFO|bd002580-dd95-49e1-bc34-e85f86272a05: Claiming fa:16:3e:24:ce:0d 10.100.0.10
Jan 20 14:28:23 compute-0 nova_compute[250018]: 2026-01-20 14:28:23.276 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:28:23 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:28:23.284 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:24:ce:0d 10.100.0.10'], port_security=['fa:16:3e:24:ce:0d 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '79b5596e-43c9-4085-9829-454fecf59490', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-14f18b27-1594-48d8-a08b-a930f7adbc08', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd15f60b9e48e4175b5520d1e57ed2d3a', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6d729cfd-2f98-4ca5-a524-e543b12b3766', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=02983c41-bbec-48cf-910a-84fed1be783f, chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=bd002580-dd95-49e1-bc34-e85f86272a05) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:28:23 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:28:23.286 160071 INFO neutron.agent.ovn.metadata.agent [-] Port bd002580-dd95-49e1-bc34-e85f86272a05 in datapath 14f18b27-1594-48d8-a08b-a930f7adbc08 bound to our chassis
Jan 20 14:28:23 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:28:23.287 160071 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 14f18b27-1594-48d8-a08b-a930f7adbc08
Jan 20 14:28:23 compute-0 NetworkManager[48960]: <info>  [1768919303.2891] device (tapbd002580-dd): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 14:28:23 compute-0 NetworkManager[48960]: <info>  [1768919303.2899] device (tapbd002580-dd): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 14:28:23 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1121: 321 pgs: 321 active+clean; 346 MiB data, 520 MiB used, 20 GiB / 21 GiB avail; 388 KiB/s rd, 2.6 MiB/s wr, 80 op/s
Jan 20 14:28:23 compute-0 ovn_controller[148666]: 2026-01-20T14:28:23Z|00072|binding|INFO|Setting lport bd002580-dd95-49e1-bc34-e85f86272a05 ovn-installed in OVS
Jan 20 14:28:23 compute-0 ovn_controller[148666]: 2026-01-20T14:28:23Z|00073|binding|INFO|Setting lport bd002580-dd95-49e1-bc34-e85f86272a05 up in Southbound
Jan 20 14:28:23 compute-0 nova_compute[250018]: 2026-01-20 14:28:23.294 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:28:23 compute-0 nova_compute[250018]: 2026-01-20 14:28:23.295 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:28:23 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:28:23.300 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[2a659835-471a-448c-9320-971220adf925]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:28:23 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:28:23.303 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap14f18b27-11 in ovnmeta-14f18b27-1594-48d8-a08b-a930f7adbc08 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 14:28:23 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:28:23.305 257604 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap14f18b27-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 14:28:23 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:28:23.305 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[49b76e2f-a6ee-40df-b3f2-235d5c8f1d44]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:28:23 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:28:23.306 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[3804e0a6-b4e6-4645-83ef-8b5624f36f90]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:28:23 compute-0 systemd-machined[216401]: New machine qemu-10-instance-00000017.
Jan 20 14:28:23 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:28:23.319 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[c5be39ed-c2bf-4dda-966c-a88d5eaf9e4b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:28:23 compute-0 systemd[1]: Started Virtual Machine qemu-10-instance-00000017.
Jan 20 14:28:23 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:28:23.344 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[5810555e-0fb5-4da8-990b-55653c85c224]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:28:23 compute-0 nova_compute[250018]: 2026-01-20 14:28:23.352 250022 DEBUG nova.network.neutron [req-7789ecd3-7f22-4f80-99c4-ff1ce409885c req-32af202b-cd4d-4f2b-88cd-e162600fba16 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Updated VIF entry in instance network info cache for port bd002580-dd95-49e1-bc34-e85f86272a05. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 14:28:23 compute-0 nova_compute[250018]: 2026-01-20 14:28:23.353 250022 DEBUG nova.network.neutron [req-7789ecd3-7f22-4f80-99c4-ff1ce409885c req-32af202b-cd4d-4f2b-88cd-e162600fba16 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Updating instance_info_cache with network_info: [{"id": "bd002580-dd95-49e1-bc34-e85f86272a05", "address": "fa:16:3e:24:ce:0d", "network": {"id": "14f18b27-1594-48d8-a08b-a930f7adbc08", "bridge": "br-int", "label": "tempest-LiveMigrationTest-2126108622-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d15f60b9e48e4175b5520d1e57ed2d3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd002580-dd", "ovs_interfaceid": "bd002580-dd95-49e1-bc34-e85f86272a05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:28:23 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:28:23.369 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[bfc5bb23-b03d-405d-b400-f7bd8b3fa214]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:28:23 compute-0 NetworkManager[48960]: <info>  [1768919303.3812] manager: (tap14f18b27-10): new Veth device (/org/freedesktop/NetworkManager/Devices/49)
Jan 20 14:28:23 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:28:23.380 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[f281c741-26b0-4f5f-a7b9-347440e68d62]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:28:23 compute-0 nova_compute[250018]: 2026-01-20 14:28:23.386 250022 DEBUG oslo_concurrency.lockutils [req-7789ecd3-7f22-4f80-99c4-ff1ce409885c req-32af202b-cd4d-4f2b-88cd-e162600fba16 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-79b5596e-43c9-4085-9829-454fecf59490" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:28:23 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:28:23.412 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[d1833dc2-107e-49dd-8a41-a74267e73c64]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:28:23 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:28:23.415 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[a1b962fb-db27-410f-bc8d-d3b794039c78]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:28:23 compute-0 NetworkManager[48960]: <info>  [1768919303.4344] device (tap14f18b27-10): carrier: link connected
Jan 20 14:28:23 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:28:23.440 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[a471decb-4f6c-48e4-8c7b-a7f5d8fbcd4c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:28:23 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:28:23.455 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[3d3afcda-18cd-4f37-a5b0-e2318ca0c37b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap14f18b27-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7d:1f:17'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 26], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 524415, 'reachable_time': 32115, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 268536, 'error': None, 'target': 'ovnmeta-14f18b27-1594-48d8-a08b-a930f7adbc08', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:28:23 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:28:23.469 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[7cca15e6-0f74-4a72-9267-1c1116cb85d1]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe7d:1f17'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 524415, 'tstamp': 524415}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 268537, 'error': None, 'target': 'ovnmeta-14f18b27-1594-48d8-a08b-a930f7adbc08', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:28:23 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:28:23.484 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[f0b2a224-c3b1-4ab2-9ce3-ef66fcbe0930]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap14f18b27-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7d:1f:17'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 26], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 524415, 'reachable_time': 32115, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 268538, 'error': None, 'target': 'ovnmeta-14f18b27-1594-48d8-a08b-a930f7adbc08', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:28:23 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:28:23.512 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[01d93a59-606c-4ca0-bb74-cf20bb167db9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:28:23 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:28:23.564 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[2246c8e4-80d0-4f80-bdad-0fa568f20e59]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:28:23 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:28:23.566 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap14f18b27-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:28:23 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:28:23.567 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:28:23 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:28:23.567 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap14f18b27-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:28:23 compute-0 nova_compute[250018]: 2026-01-20 14:28:23.569 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:28:23 compute-0 NetworkManager[48960]: <info>  [1768919303.5707] manager: (tap14f18b27-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/50)
Jan 20 14:28:23 compute-0 kernel: tap14f18b27-10: entered promiscuous mode
Jan 20 14:28:23 compute-0 nova_compute[250018]: 2026-01-20 14:28:23.573 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:28:23 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:28:23.574 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap14f18b27-10, col_values=(('external_ids', {'iface-id': 'aa1c73c5-9761-4457-acdc-9f93220f739f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:28:23 compute-0 nova_compute[250018]: 2026-01-20 14:28:23.575 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:28:23 compute-0 ovn_controller[148666]: 2026-01-20T14:28:23Z|00074|binding|INFO|Releasing lport aa1c73c5-9761-4457-acdc-9f93220f739f from this chassis (sb_readonly=0)
Jan 20 14:28:23 compute-0 nova_compute[250018]: 2026-01-20 14:28:23.576 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:28:23 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:28:23.577 160071 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/14f18b27-1594-48d8-a08b-a930f7adbc08.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/14f18b27-1594-48d8-a08b-a930f7adbc08.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 14:28:23 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:28:23.578 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[c77b30cf-21bb-4275-9368-6062d33280b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:28:23 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:28:23.578 160071 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 14:28:23 compute-0 ovn_metadata_agent[160049]: global
Jan 20 14:28:23 compute-0 ovn_metadata_agent[160049]:     log         /dev/log local0 debug
Jan 20 14:28:23 compute-0 ovn_metadata_agent[160049]:     log-tag     haproxy-metadata-proxy-14f18b27-1594-48d8-a08b-a930f7adbc08
Jan 20 14:28:23 compute-0 ovn_metadata_agent[160049]:     user        root
Jan 20 14:28:23 compute-0 ovn_metadata_agent[160049]:     group       root
Jan 20 14:28:23 compute-0 ovn_metadata_agent[160049]:     maxconn     1024
Jan 20 14:28:23 compute-0 ovn_metadata_agent[160049]:     pidfile     /var/lib/neutron/external/pids/14f18b27-1594-48d8-a08b-a930f7adbc08.pid.haproxy
Jan 20 14:28:23 compute-0 ovn_metadata_agent[160049]:     daemon
Jan 20 14:28:23 compute-0 ovn_metadata_agent[160049]: 
Jan 20 14:28:23 compute-0 ovn_metadata_agent[160049]: defaults
Jan 20 14:28:23 compute-0 ovn_metadata_agent[160049]:     log global
Jan 20 14:28:23 compute-0 ovn_metadata_agent[160049]:     mode http
Jan 20 14:28:23 compute-0 ovn_metadata_agent[160049]:     option httplog
Jan 20 14:28:23 compute-0 ovn_metadata_agent[160049]:     option dontlognull
Jan 20 14:28:23 compute-0 ovn_metadata_agent[160049]:     option http-server-close
Jan 20 14:28:23 compute-0 ovn_metadata_agent[160049]:     option forwardfor
Jan 20 14:28:23 compute-0 ovn_metadata_agent[160049]:     retries                 3
Jan 20 14:28:23 compute-0 ovn_metadata_agent[160049]:     timeout http-request    30s
Jan 20 14:28:23 compute-0 ovn_metadata_agent[160049]:     timeout connect         30s
Jan 20 14:28:23 compute-0 ovn_metadata_agent[160049]:     timeout client          32s
Jan 20 14:28:23 compute-0 ovn_metadata_agent[160049]:     timeout server          32s
Jan 20 14:28:23 compute-0 ovn_metadata_agent[160049]:     timeout http-keep-alive 30s
Jan 20 14:28:23 compute-0 ovn_metadata_agent[160049]: 
Jan 20 14:28:23 compute-0 ovn_metadata_agent[160049]: 
Jan 20 14:28:23 compute-0 ovn_metadata_agent[160049]: listen listener
Jan 20 14:28:23 compute-0 ovn_metadata_agent[160049]:     bind 169.254.169.254:80
Jan 20 14:28:23 compute-0 ovn_metadata_agent[160049]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 14:28:23 compute-0 ovn_metadata_agent[160049]:     http-request add-header X-OVN-Network-ID 14f18b27-1594-48d8-a08b-a930f7adbc08
Jan 20 14:28:23 compute-0 ovn_metadata_agent[160049]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 14:28:23 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:28:23.579 160071 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-14f18b27-1594-48d8-a08b-a930f7adbc08', 'env', 'PROCESS_TAG=haproxy-14f18b27-1594-48d8-a08b-a930f7adbc08', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/14f18b27-1594-48d8-a08b-a930f7adbc08.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 14:28:23 compute-0 nova_compute[250018]: 2026-01-20 14:28:23.590 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:28:23 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:28:23 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:28:23 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:28:23.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:28:23 compute-0 nova_compute[250018]: 2026-01-20 14:28:23.792 250022 DEBUG nova.network.neutron [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] [instance: 4ee9159e-bf2b-47b7-8568-47fd13815f05] Updating instance_info_cache with network_info: [{"id": "249be315-41d6-478d-9d9a-f3251b200e7f", "address": "fa:16:3e:e2:1c:e2", "network": {"id": "652d7129-cd9b-4229-8d54-211a1946e427", "bridge": "br-int", "label": "tempest-AttachSCSIVolumeTestJSON-427094000-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "944b426a2d4c4ad3a01f0b855ad36509", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap249be315-41", "ovs_interfaceid": "249be315-41d6-478d-9d9a-f3251b200e7f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:28:23 compute-0 nova_compute[250018]: 2026-01-20 14:28:23.831 250022 DEBUG nova.compute.manager [req-f43010e0-ca6f-4864-adc9-8c48a379fa67 req-c9806b86-6bf2-4c0b-be1a-4ea299c2de5a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Received event network-vif-plugged-bd002580-dd95-49e1-bc34-e85f86272a05 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:28:23 compute-0 nova_compute[250018]: 2026-01-20 14:28:23.832 250022 DEBUG oslo_concurrency.lockutils [req-f43010e0-ca6f-4864-adc9-8c48a379fa67 req-c9806b86-6bf2-4c0b-be1a-4ea299c2de5a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "79b5596e-43c9-4085-9829-454fecf59490-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:28:23 compute-0 nova_compute[250018]: 2026-01-20 14:28:23.832 250022 DEBUG oslo_concurrency.lockutils [req-f43010e0-ca6f-4864-adc9-8c48a379fa67 req-c9806b86-6bf2-4c0b-be1a-4ea299c2de5a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "79b5596e-43c9-4085-9829-454fecf59490-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:28:23 compute-0 nova_compute[250018]: 2026-01-20 14:28:23.833 250022 DEBUG oslo_concurrency.lockutils [req-f43010e0-ca6f-4864-adc9-8c48a379fa67 req-c9806b86-6bf2-4c0b-be1a-4ea299c2de5a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "79b5596e-43c9-4085-9829-454fecf59490-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:28:23 compute-0 nova_compute[250018]: 2026-01-20 14:28:23.833 250022 DEBUG nova.compute.manager [req-f43010e0-ca6f-4864-adc9-8c48a379fa67 req-c9806b86-6bf2-4c0b-be1a-4ea299c2de5a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Processing event network-vif-plugged-bd002580-dd95-49e1-bc34-e85f86272a05 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 20 14:28:23 compute-0 nova_compute[250018]: 2026-01-20 14:28:23.846 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Releasing lock "refresh_cache-4ee9159e-bf2b-47b7-8568-47fd13815f05" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:28:23 compute-0 nova_compute[250018]: 2026-01-20 14:28:23.846 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] [instance: 4ee9159e-bf2b-47b7-8568-47fd13815f05] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 20 14:28:23 compute-0 nova_compute[250018]: 2026-01-20 14:28:23.847 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:28:23 compute-0 nova_compute[250018]: 2026-01-20 14:28:23.847 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:28:23 compute-0 podman[268604]: 2026-01-20 14:28:23.904627777 +0000 UTC m=+0.021283543 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 14:28:24 compute-0 nova_compute[250018]: 2026-01-20 14:28:24.060 250022 DEBUG nova.compute.manager [None req-d659419d-dcea-4ecd-9c9d-954854ebd236 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 14:28:24 compute-0 nova_compute[250018]: 2026-01-20 14:28:24.061 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768919304.060559, 79b5596e-43c9-4085-9829-454fecf59490 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:28:24 compute-0 nova_compute[250018]: 2026-01-20 14:28:24.061 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 79b5596e-43c9-4085-9829-454fecf59490] VM Started (Lifecycle Event)
Jan 20 14:28:24 compute-0 nova_compute[250018]: 2026-01-20 14:28:24.065 250022 DEBUG nova.virt.libvirt.driver [None req-d659419d-dcea-4ecd-9c9d-954854ebd236 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 14:28:24 compute-0 nova_compute[250018]: 2026-01-20 14:28:24.068 250022 INFO nova.virt.libvirt.driver [-] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Instance spawned successfully.
Jan 20 14:28:24 compute-0 nova_compute[250018]: 2026-01-20 14:28:24.069 250022 DEBUG nova.virt.libvirt.driver [None req-d659419d-dcea-4ecd-9c9d-954854ebd236 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 14:28:24 compute-0 nova_compute[250018]: 2026-01-20 14:28:24.092 250022 DEBUG nova.virt.libvirt.driver [None req-d659419d-dcea-4ecd-9c9d-954854ebd236 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:28:24 compute-0 nova_compute[250018]: 2026-01-20 14:28:24.093 250022 DEBUG nova.virt.libvirt.driver [None req-d659419d-dcea-4ecd-9c9d-954854ebd236 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:28:24 compute-0 nova_compute[250018]: 2026-01-20 14:28:24.093 250022 DEBUG nova.virt.libvirt.driver [None req-d659419d-dcea-4ecd-9c9d-954854ebd236 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:28:24 compute-0 nova_compute[250018]: 2026-01-20 14:28:24.093 250022 DEBUG nova.virt.libvirt.driver [None req-d659419d-dcea-4ecd-9c9d-954854ebd236 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:28:24 compute-0 nova_compute[250018]: 2026-01-20 14:28:24.094 250022 DEBUG nova.virt.libvirt.driver [None req-d659419d-dcea-4ecd-9c9d-954854ebd236 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:28:24 compute-0 nova_compute[250018]: 2026-01-20 14:28:24.094 250022 DEBUG nova.virt.libvirt.driver [None req-d659419d-dcea-4ecd-9c9d-954854ebd236 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:28:24 compute-0 nova_compute[250018]: 2026-01-20 14:28:24.105 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:28:24 compute-0 nova_compute[250018]: 2026-01-20 14:28:24.108 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:28:24 compute-0 nova_compute[250018]: 2026-01-20 14:28:24.196 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 79b5596e-43c9-4085-9829-454fecf59490] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:28:24 compute-0 nova_compute[250018]: 2026-01-20 14:28:24.197 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768919304.0607738, 79b5596e-43c9-4085-9829-454fecf59490 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:28:24 compute-0 nova_compute[250018]: 2026-01-20 14:28:24.197 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 79b5596e-43c9-4085-9829-454fecf59490] VM Paused (Lifecycle Event)
Jan 20 14:28:24 compute-0 podman[268604]: 2026-01-20 14:28:24.2391887 +0000 UTC m=+0.355844466 container create bd039eb99c221c1e581a732726a2d440b4720bbf1fb34cd8c7b4769ad5284693 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-14f18b27-1594-48d8-a08b-a930f7adbc08, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:28:24 compute-0 nova_compute[250018]: 2026-01-20 14:28:24.268 250022 INFO nova.compute.manager [None req-d659419d-dcea-4ecd-9c9d-954854ebd236 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Took 7.72 seconds to spawn the instance on the hypervisor.
Jan 20 14:28:24 compute-0 nova_compute[250018]: 2026-01-20 14:28:24.268 250022 DEBUG nova.compute.manager [None req-d659419d-dcea-4ecd-9c9d-954854ebd236 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:28:24 compute-0 nova_compute[250018]: 2026-01-20 14:28:24.282 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:28:24 compute-0 systemd[1]: Started libpod-conmon-bd039eb99c221c1e581a732726a2d440b4720bbf1fb34cd8c7b4769ad5284693.scope.
Jan 20 14:28:24 compute-0 nova_compute[250018]: 2026-01-20 14:28:24.286 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768919304.0642676, 79b5596e-43c9-4085-9829-454fecf59490 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:28:24 compute-0 nova_compute[250018]: 2026-01-20 14:28:24.287 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 79b5596e-43c9-4085-9829-454fecf59490] VM Resumed (Lifecycle Event)
Jan 20 14:28:24 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:28:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1af50e4350546bf18796712becaa21e5e0e9dffa956e2230cbfad3dc758e9979/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 14:28:24 compute-0 nova_compute[250018]: 2026-01-20 14:28:24.331 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:28:24 compute-0 nova_compute[250018]: 2026-01-20 14:28:24.335 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:28:24 compute-0 podman[268604]: 2026-01-20 14:28:24.361114118 +0000 UTC m=+0.477769914 container init bd039eb99c221c1e581a732726a2d440b4720bbf1fb34cd8c7b4769ad5284693 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-14f18b27-1594-48d8-a08b-a930f7adbc08, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Jan 20 14:28:24 compute-0 podman[268604]: 2026-01-20 14:28:24.368135317 +0000 UTC m=+0.484791063 container start bd039eb99c221c1e581a732726a2d440b4720bbf1fb34cd8c7b4769ad5284693 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-14f18b27-1594-48d8-a08b-a930f7adbc08, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2)
Jan 20 14:28:24 compute-0 nova_compute[250018]: 2026-01-20 14:28:24.374 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 79b5596e-43c9-4085-9829-454fecf59490] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:28:24 compute-0 neutron-haproxy-ovnmeta-14f18b27-1594-48d8-a08b-a930f7adbc08[268626]: [NOTICE]   (268630) : New worker (268632) forked
Jan 20 14:28:24 compute-0 neutron-haproxy-ovnmeta-14f18b27-1594-48d8-a08b-a930f7adbc08[268626]: [NOTICE]   (268630) : Loading success.
Jan 20 14:28:24 compute-0 nova_compute[250018]: 2026-01-20 14:28:24.394 250022 INFO nova.compute.manager [None req-d659419d-dcea-4ecd-9c9d-954854ebd236 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Took 10.90 seconds to build instance.
Jan 20 14:28:24 compute-0 nova_compute[250018]: 2026-01-20 14:28:24.406 250022 DEBUG nova.network.neutron [-] [instance: 4ee9159e-bf2b-47b7-8568-47fd13815f05] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:28:24 compute-0 nova_compute[250018]: 2026-01-20 14:28:24.413 250022 DEBUG oslo_concurrency.lockutils [None req-d659419d-dcea-4ecd-9c9d-954854ebd236 bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] Lock "79b5596e-43c9-4085-9829-454fecf59490" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.984s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:28:24 compute-0 nova_compute[250018]: 2026-01-20 14:28:24.427 250022 INFO nova.compute.manager [-] [instance: 4ee9159e-bf2b-47b7-8568-47fd13815f05] Took 1.56 seconds to deallocate network for instance.
Jan 20 14:28:24 compute-0 nova_compute[250018]: 2026-01-20 14:28:24.467 250022 DEBUG oslo_concurrency.lockutils [None req-75656ce7-0a91-46f1-8654-13497dc979d1 eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:28:24 compute-0 nova_compute[250018]: 2026-01-20 14:28:24.468 250022 DEBUG oslo_concurrency.lockutils [None req-75656ce7-0a91-46f1-8654-13497dc979d1 eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:28:24 compute-0 nova_compute[250018]: 2026-01-20 14:28:24.549 250022 DEBUG nova.compute.manager [req-cd29ca7d-2335-4343-b33e-fd09b122fd7c req-fb8b208f-ece0-49c4-9ec2-19e8ca8d0fe3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 4ee9159e-bf2b-47b7-8568-47fd13815f05] Received event network-vif-plugged-249be315-41d6-478d-9d9a-f3251b200e7f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:28:24 compute-0 nova_compute[250018]: 2026-01-20 14:28:24.550 250022 DEBUG oslo_concurrency.lockutils [req-cd29ca7d-2335-4343-b33e-fd09b122fd7c req-fb8b208f-ece0-49c4-9ec2-19e8ca8d0fe3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "4ee9159e-bf2b-47b7-8568-47fd13815f05-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:28:24 compute-0 nova_compute[250018]: 2026-01-20 14:28:24.550 250022 DEBUG oslo_concurrency.lockutils [req-cd29ca7d-2335-4343-b33e-fd09b122fd7c req-fb8b208f-ece0-49c4-9ec2-19e8ca8d0fe3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "4ee9159e-bf2b-47b7-8568-47fd13815f05-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:28:24 compute-0 nova_compute[250018]: 2026-01-20 14:28:24.550 250022 DEBUG oslo_concurrency.lockutils [req-cd29ca7d-2335-4343-b33e-fd09b122fd7c req-fb8b208f-ece0-49c4-9ec2-19e8ca8d0fe3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "4ee9159e-bf2b-47b7-8568-47fd13815f05-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:28:24 compute-0 nova_compute[250018]: 2026-01-20 14:28:24.550 250022 DEBUG nova.compute.manager [req-cd29ca7d-2335-4343-b33e-fd09b122fd7c req-fb8b208f-ece0-49c4-9ec2-19e8ca8d0fe3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 4ee9159e-bf2b-47b7-8568-47fd13815f05] No waiting events found dispatching network-vif-plugged-249be315-41d6-478d-9d9a-f3251b200e7f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:28:24 compute-0 nova_compute[250018]: 2026-01-20 14:28:24.551 250022 WARNING nova.compute.manager [req-cd29ca7d-2335-4343-b33e-fd09b122fd7c req-fb8b208f-ece0-49c4-9ec2-19e8ca8d0fe3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 4ee9159e-bf2b-47b7-8568-47fd13815f05] Received unexpected event network-vif-plugged-249be315-41d6-478d-9d9a-f3251b200e7f for instance with vm_state deleted and task_state None.
Jan 20 14:28:24 compute-0 nova_compute[250018]: 2026-01-20 14:28:24.552 250022 DEBUG oslo_concurrency.processutils [None req-75656ce7-0a91-46f1-8654-13497dc979d1 eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:28:24 compute-0 nova_compute[250018]: 2026-01-20 14:28:24.576 250022 DEBUG nova.compute.manager [req-1ef5eda4-f9ca-4c0f-b5d5-2269ba4e8d46 req-9cbc3be0-1d82-4ce3-840d-9dafa46adaf1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 4ee9159e-bf2b-47b7-8568-47fd13815f05] Received event network-vif-deleted-249be315-41d6-478d-9d9a-f3251b200e7f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:28:24 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:28:24 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:28:24 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:28:24.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:28:24 compute-0 ceph-mon[74360]: pgmap v1121: 321 pgs: 321 active+clean; 346 MiB data, 520 MiB used, 20 GiB / 21 GiB avail; 388 KiB/s rd, 2.6 MiB/s wr, 80 op/s
Jan 20 14:28:24 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:28:24 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3997936891' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:28:25 compute-0 nova_compute[250018]: 2026-01-20 14:28:25.001 250022 DEBUG oslo_concurrency.processutils [None req-75656ce7-0a91-46f1-8654-13497dc979d1 eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:28:25 compute-0 nova_compute[250018]: 2026-01-20 14:28:25.008 250022 DEBUG nova.compute.provider_tree [None req-75656ce7-0a91-46f1-8654-13497dc979d1 eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:28:25 compute-0 nova_compute[250018]: 2026-01-20 14:28:25.247 250022 DEBUG nova.scheduler.client.report [None req-75656ce7-0a91-46f1-8654-13497dc979d1 eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:28:25 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1122: 321 pgs: 321 active+clean; 335 MiB data, 512 MiB used, 20 GiB / 21 GiB avail; 325 KiB/s rd, 2.2 MiB/s wr, 73 op/s
Jan 20 14:28:25 compute-0 nova_compute[250018]: 2026-01-20 14:28:25.360 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:28:25 compute-0 nova_compute[250018]: 2026-01-20 14:28:25.414 250022 DEBUG oslo_concurrency.lockutils [None req-75656ce7-0a91-46f1-8654-13497dc979d1 eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.946s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:28:25 compute-0 nova_compute[250018]: 2026-01-20 14:28:25.441 250022 INFO nova.scheduler.client.report [None req-75656ce7-0a91-46f1-8654-13497dc979d1 eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] Deleted allocations for instance 4ee9159e-bf2b-47b7-8568-47fd13815f05
Jan 20 14:28:25 compute-0 nova_compute[250018]: 2026-01-20 14:28:25.555 250022 DEBUG oslo_concurrency.lockutils [None req-75656ce7-0a91-46f1-8654-13497dc979d1 eae4ac21a700463eadfdbe7717ed8b13 944b426a2d4c4ad3a01f0b855ad36509 - - default default] Lock "4ee9159e-bf2b-47b7-8568-47fd13815f05" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.807s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:28:25 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:28:25 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:28:25 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:28:25.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:28:25 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3997936891' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:28:25 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e152 do_prune osdmap full prune enabled
Jan 20 14:28:25 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e153 e153: 3 total, 3 up, 3 in
Jan 20 14:28:25 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e153: 3 total, 3 up, 3 in
Jan 20 14:28:25 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:28:26 compute-0 nova_compute[250018]: 2026-01-20 14:28:26.143 250022 DEBUG nova.compute.manager [req-71ecc317-1538-486f-8453-ad418aaae3c4 req-31a0c09e-fcbf-4286-b884-bf58893d9c1c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Received event network-vif-plugged-bd002580-dd95-49e1-bc34-e85f86272a05 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:28:26 compute-0 nova_compute[250018]: 2026-01-20 14:28:26.143 250022 DEBUG oslo_concurrency.lockutils [req-71ecc317-1538-486f-8453-ad418aaae3c4 req-31a0c09e-fcbf-4286-b884-bf58893d9c1c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "79b5596e-43c9-4085-9829-454fecf59490-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:28:26 compute-0 nova_compute[250018]: 2026-01-20 14:28:26.144 250022 DEBUG oslo_concurrency.lockutils [req-71ecc317-1538-486f-8453-ad418aaae3c4 req-31a0c09e-fcbf-4286-b884-bf58893d9c1c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "79b5596e-43c9-4085-9829-454fecf59490-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:28:26 compute-0 nova_compute[250018]: 2026-01-20 14:28:26.144 250022 DEBUG oslo_concurrency.lockutils [req-71ecc317-1538-486f-8453-ad418aaae3c4 req-31a0c09e-fcbf-4286-b884-bf58893d9c1c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "79b5596e-43c9-4085-9829-454fecf59490-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:28:26 compute-0 nova_compute[250018]: 2026-01-20 14:28:26.144 250022 DEBUG nova.compute.manager [req-71ecc317-1538-486f-8453-ad418aaae3c4 req-31a0c09e-fcbf-4286-b884-bf58893d9c1c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] No waiting events found dispatching network-vif-plugged-bd002580-dd95-49e1-bc34-e85f86272a05 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:28:26 compute-0 nova_compute[250018]: 2026-01-20 14:28:26.144 250022 WARNING nova.compute.manager [req-71ecc317-1538-486f-8453-ad418aaae3c4 req-31a0c09e-fcbf-4286-b884-bf58893d9c1c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Received unexpected event network-vif-plugged-bd002580-dd95-49e1-bc34-e85f86272a05 for instance with vm_state active and task_state None.
Jan 20 14:28:26 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:28:26 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:28:26 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:28:26.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:28:26 compute-0 ceph-mon[74360]: pgmap v1122: 321 pgs: 321 active+clean; 335 MiB data, 512 MiB used, 20 GiB / 21 GiB avail; 325 KiB/s rd, 2.2 MiB/s wr, 73 op/s
Jan 20 14:28:26 compute-0 ceph-mon[74360]: osdmap e153: 3 total, 3 up, 3 in
Jan 20 14:28:26 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/868074923' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:28:27 compute-0 nova_compute[250018]: 2026-01-20 14:28:27.101 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:28:27 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1124: 321 pgs: 321 active+clean; 267 MiB data, 473 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.3 MiB/s wr, 178 op/s
Jan 20 14:28:27 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:28:27 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:28:27 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:28:27.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:28:27 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/2394368548' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:28:27 compute-0 ceph-mon[74360]: pgmap v1124: 321 pgs: 321 active+clean; 267 MiB data, 473 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.3 MiB/s wr, 178 op/s
Jan 20 14:28:28 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:28:28 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:28:28 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:28:28.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:28:29 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1125: 321 pgs: 321 active+clean; 267 MiB data, 473 MiB used, 21 GiB / 21 GiB avail; 2.5 MiB/s rd, 1.1 MiB/s wr, 184 op/s
Jan 20 14:28:29 compute-0 podman[268667]: 2026-01-20 14:28:29.467503084 +0000 UTC m=+0.063376855 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Jan 20 14:28:29 compute-0 podman[268666]: 2026-01-20 14:28:29.517110368 +0000 UTC m=+0.114872800 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 14:28:29 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:28:29 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:28:29 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:28:29.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:28:29 compute-0 ceph-mon[74360]: pgmap v1125: 321 pgs: 321 active+clean; 267 MiB data, 473 MiB used, 21 GiB / 21 GiB avail; 2.5 MiB/s rd, 1.1 MiB/s wr, 184 op/s
Jan 20 14:28:30 compute-0 nova_compute[250018]: 2026-01-20 14:28:30.362 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:28:30 compute-0 nova_compute[250018]: 2026-01-20 14:28:30.561 250022 DEBUG nova.virt.libvirt.driver [None req-316b633b-1626-487f-b686-649c0e42886e f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Check if temp file /var/lib/nova/instances/tmpt3smbf1a exists to indicate shared storage is being used for migration. Exists? False _check_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10065
Jan 20 14:28:30 compute-0 nova_compute[250018]: 2026-01-20 14:28:30.562 250022 DEBUG nova.compute.manager [None req-316b633b-1626-487f-b686-649c0e42886e f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] source check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpt3smbf1a',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='79b5596e-43c9-4085-9829-454fecf59490',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_source /usr/lib/python3.9/site-packages/nova/compute/manager.py:8587
Jan 20 14:28:30 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:28:30 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:28:30 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:28:30.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:28:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:28:30.743 160071 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:28:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:28:30.744 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:28:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:28:30.744 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:28:30 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e153 do_prune osdmap full prune enabled
Jan 20 14:28:30 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e154 e154: 3 total, 3 up, 3 in
Jan 20 14:28:30 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e154: 3 total, 3 up, 3 in
Jan 20 14:28:30 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:28:31 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1127: 321 pgs: 321 active+clean; 267 MiB data, 473 MiB used, 21 GiB / 21 GiB avail; 5.3 MiB/s rd, 23 KiB/s wr, 276 op/s
Jan 20 14:28:31 compute-0 sshd-session[268712]: Invalid user admin from 157.245.78.139 port 47066
Jan 20 14:28:31 compute-0 sshd-session[268712]: Connection closed by invalid user admin 157.245.78.139 port 47066 [preauth]
Jan 20 14:28:31 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:28:31 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:28:31 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:28:31.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:28:31 compute-0 ceph-mon[74360]: osdmap e154: 3 total, 3 up, 3 in
Jan 20 14:28:31 compute-0 ceph-mon[74360]: pgmap v1127: 321 pgs: 321 active+clean; 267 MiB data, 473 MiB used, 21 GiB / 21 GiB avail; 5.3 MiB/s rd, 23 KiB/s wr, 276 op/s
Jan 20 14:28:32 compute-0 nova_compute[250018]: 2026-01-20 14:28:32.105 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:28:32 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:28:32 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:28:32 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:28:32.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:28:33 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e154 do_prune osdmap full prune enabled
Jan 20 14:28:33 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e155 e155: 3 total, 3 up, 3 in
Jan 20 14:28:33 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e155: 3 total, 3 up, 3 in
Jan 20 14:28:33 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1129: 321 pgs: 321 active+clean; 267 MiB data, 473 MiB used, 21 GiB / 21 GiB avail; 4.2 MiB/s rd, 22 KiB/s wr, 230 op/s
Jan 20 14:28:33 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:28:33 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:28:33 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:28:33.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:28:34 compute-0 ceph-mon[74360]: osdmap e155: 3 total, 3 up, 3 in
Jan 20 14:28:34 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/2290410301' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:28:34 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/2290410301' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:28:34 compute-0 ceph-mon[74360]: pgmap v1129: 321 pgs: 321 active+clean; 267 MiB data, 473 MiB used, 21 GiB / 21 GiB avail; 4.2 MiB/s rd, 22 KiB/s wr, 230 op/s
Jan 20 14:28:34 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/1026837311' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:28:34 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:28:34 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:28:34 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:28:34.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:28:34 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:28:34 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1979035454' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:28:35 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1130: 321 pgs: 321 active+clean; 262 MiB data, 469 MiB used, 21 GiB / 21 GiB avail; 3.6 MiB/s rd, 2.2 KiB/s wr, 178 op/s
Jan 20 14:28:35 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/1979035454' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:28:35 compute-0 nova_compute[250018]: 2026-01-20 14:28:35.365 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:28:35 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:28:35 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:28:35 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:28:35.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:28:35 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:28:36 compute-0 ceph-mon[74360]: pgmap v1130: 321 pgs: 321 active+clean; 262 MiB data, 469 MiB used, 21 GiB / 21 GiB avail; 3.6 MiB/s rd, 2.2 KiB/s wr, 178 op/s
Jan 20 14:28:36 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:28:36 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:28:36 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:28:36.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:28:36 compute-0 nova_compute[250018]: 2026-01-20 14:28:36.939 250022 DEBUG nova.compute.manager [req-435e9442-ba62-4472-9c30-0beb4aae5e5a req-9958acab-4742-40ef-b191-d42e2777d87e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Received event network-vif-unplugged-bd002580-dd95-49e1-bc34-e85f86272a05 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:28:36 compute-0 nova_compute[250018]: 2026-01-20 14:28:36.940 250022 DEBUG oslo_concurrency.lockutils [req-435e9442-ba62-4472-9c30-0beb4aae5e5a req-9958acab-4742-40ef-b191-d42e2777d87e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "79b5596e-43c9-4085-9829-454fecf59490-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:28:36 compute-0 nova_compute[250018]: 2026-01-20 14:28:36.941 250022 DEBUG oslo_concurrency.lockutils [req-435e9442-ba62-4472-9c30-0beb4aae5e5a req-9958acab-4742-40ef-b191-d42e2777d87e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "79b5596e-43c9-4085-9829-454fecf59490-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:28:36 compute-0 nova_compute[250018]: 2026-01-20 14:28:36.941 250022 DEBUG oslo_concurrency.lockutils [req-435e9442-ba62-4472-9c30-0beb4aae5e5a req-9958acab-4742-40ef-b191-d42e2777d87e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "79b5596e-43c9-4085-9829-454fecf59490-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:28:36 compute-0 nova_compute[250018]: 2026-01-20 14:28:36.941 250022 DEBUG nova.compute.manager [req-435e9442-ba62-4472-9c30-0beb4aae5e5a req-9958acab-4742-40ef-b191-d42e2777d87e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] No waiting events found dispatching network-vif-unplugged-bd002580-dd95-49e1-bc34-e85f86272a05 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:28:36 compute-0 nova_compute[250018]: 2026-01-20 14:28:36.941 250022 DEBUG nova.compute.manager [req-435e9442-ba62-4472-9c30-0beb4aae5e5a req-9958acab-4742-40ef-b191-d42e2777d87e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Received event network-vif-unplugged-bd002580-dd95-49e1-bc34-e85f86272a05 for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 20 14:28:36 compute-0 nova_compute[250018]: 2026-01-20 14:28:36.987 250022 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768919301.9862099, 4ee9159e-bf2b-47b7-8568-47fd13815f05 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:28:36 compute-0 nova_compute[250018]: 2026-01-20 14:28:36.988 250022 INFO nova.compute.manager [-] [instance: 4ee9159e-bf2b-47b7-8568-47fd13815f05] VM Stopped (Lifecycle Event)
Jan 20 14:28:37 compute-0 nova_compute[250018]: 2026-01-20 14:28:37.066 250022 DEBUG nova.compute.manager [None req-36fdae9f-475d-4697-9727-374279984d0c - - - - - -] [instance: 4ee9159e-bf2b-47b7-8568-47fd13815f05] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:28:37 compute-0 nova_compute[250018]: 2026-01-20 14:28:37.108 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:28:37 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1131: 321 pgs: 321 active+clean; 246 MiB data, 453 MiB used, 21 GiB / 21 GiB avail; 2.9 MiB/s rd, 2.9 KiB/s wr, 175 op/s
Jan 20 14:28:37 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:28:37 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:28:37 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:28:37.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:28:38 compute-0 nova_compute[250018]: 2026-01-20 14:28:38.629 250022 INFO nova.compute.manager [None req-316b633b-1626-487f-b686-649c0e42886e f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Took 6.67 seconds for pre_live_migration on destination host compute-1.ctlplane.example.com.
Jan 20 14:28:38 compute-0 nova_compute[250018]: 2026-01-20 14:28:38.630 250022 DEBUG nova.compute.manager [None req-316b633b-1626-487f-b686-649c0e42886e f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 14:28:38 compute-0 nova_compute[250018]: 2026-01-20 14:28:38.655 250022 DEBUG nova.compute.manager [None req-316b633b-1626-487f-b686-649c0e42886e f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] live_migration data is LibvirtLiveMigrateData(bdms=[LibvirtLiveMigrateBDMInfo],block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpt3smbf1a',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='79b5596e-43c9-4085-9829-454fecf59490',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=Migration(0a1d0101-68cf-4b3f-9211-e9c29fb7c49b),old_vol_attachment_ids={47e883f3-6efe-40b3-be28-6c01525dfc0c='9eb63166-9838-4b2e-9a3b-635bb42864f1'},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) _do_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8939
Jan 20 14:28:38 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:28:38 compute-0 nova_compute[250018]: 2026-01-20 14:28:38.660 250022 DEBUG nova.objects.instance [None req-316b633b-1626-487f-b686-649c0e42886e f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Lazy-loading 'migration_context' on Instance uuid 79b5596e-43c9-4085-9829-454fecf59490 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:28:38 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:28:38 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:28:38.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:28:38 compute-0 nova_compute[250018]: 2026-01-20 14:28:38.661 250022 DEBUG nova.virt.libvirt.driver [None req-316b633b-1626-487f-b686-649c0e42886e f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Starting monitoring of live migration _live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10639
Jan 20 14:28:38 compute-0 nova_compute[250018]: 2026-01-20 14:28:38.662 250022 DEBUG nova.virt.libvirt.driver [None req-316b633b-1626-487f-b686-649c0e42886e f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Operation thread is still running _live_migration_monitor /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10440
Jan 20 14:28:38 compute-0 nova_compute[250018]: 2026-01-20 14:28:38.663 250022 DEBUG nova.virt.libvirt.driver [None req-316b633b-1626-487f-b686-649c0e42886e f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Migration not running yet _live_migration_monitor /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10449
Jan 20 14:28:38 compute-0 ceph-mon[74360]: pgmap v1131: 321 pgs: 321 active+clean; 246 MiB data, 453 MiB used, 21 GiB / 21 GiB avail; 2.9 MiB/s rd, 2.9 KiB/s wr, 175 op/s
Jan 20 14:28:38 compute-0 sudo[268718]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:28:38 compute-0 sudo[268718]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:28:38 compute-0 sudo[268718]: pam_unix(sudo:session): session closed for user root
Jan 20 14:28:38 compute-0 sudo[268743]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:28:38 compute-0 sudo[268743]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:28:38 compute-0 sudo[268743]: pam_unix(sudo:session): session closed for user root
Jan 20 14:28:38 compute-0 ovn_controller[148666]: 2026-01-20T14:28:38Z|00008|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:24:ce:0d 10.100.0.10
Jan 20 14:28:38 compute-0 ovn_controller[148666]: 2026-01-20T14:28:38Z|00009|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:24:ce:0d 10.100.0.10
Jan 20 14:28:39 compute-0 nova_compute[250018]: 2026-01-20 14:28:39.000 250022 DEBUG nova.virt.libvirt.migration [None req-316b633b-1626-487f-b686-649c0e42886e f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Find same serial number: pos=1, serial=47e883f3-6efe-40b3-be28-6c01525dfc0c _update_volume_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:242
Jan 20 14:28:39 compute-0 nova_compute[250018]: 2026-01-20 14:28:39.002 250022 DEBUG nova.virt.libvirt.vif [None req-316b633b-1626-487f-b686-649c0e42886e f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:28:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-1483268234',display_name='tempest-LiveMigrationTest-server-1483268234',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-livemigrationtest-server-1483268234',id=23,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:28:24Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='d15f60b9e48e4175b5520d1e57ed2d3a',ramdisk_id='',reservation_id='r-jglb1q09',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-LiveMigrationTest-864280704',owner_user_name='tempest-LiveMigrationTest-864280704-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T14:28:24Z,user_data=None,user_id='bce7fcbd19554e29bb80c5b93b7dd3c9',uuid=79b5596e-43c9-4085-9829-454fecf59490,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bd002580-dd95-49e1-bc34-e85f86272a05", "address": "fa:16:3e:24:ce:0d", "network": {"id": "14f18b27-1594-48d8-a08b-a930f7adbc08", "bridge": "br-int", "label": "tempest-LiveMigrationTest-2126108622-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d15f60b9e48e4175b5520d1e57ed2d3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tapbd002580-dd", "ovs_interfaceid": "bd002580-dd95-49e1-bc34-e85f86272a05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 14:28:39 compute-0 nova_compute[250018]: 2026-01-20 14:28:39.003 250022 DEBUG nova.network.os_vif_util [None req-316b633b-1626-487f-b686-649c0e42886e f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Converting VIF {"id": "bd002580-dd95-49e1-bc34-e85f86272a05", "address": "fa:16:3e:24:ce:0d", "network": {"id": "14f18b27-1594-48d8-a08b-a930f7adbc08", "bridge": "br-int", "label": "tempest-LiveMigrationTest-2126108622-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d15f60b9e48e4175b5520d1e57ed2d3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tapbd002580-dd", "ovs_interfaceid": "bd002580-dd95-49e1-bc34-e85f86272a05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:28:39 compute-0 nova_compute[250018]: 2026-01-20 14:28:39.005 250022 DEBUG nova.network.os_vif_util [None req-316b633b-1626-487f-b686-649c0e42886e f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:24:ce:0d,bridge_name='br-int',has_traffic_filtering=True,id=bd002580-dd95-49e1-bc34-e85f86272a05,network=Network(14f18b27-1594-48d8-a08b-a930f7adbc08),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbd002580-dd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:28:39 compute-0 nova_compute[250018]: 2026-01-20 14:28:39.006 250022 DEBUG nova.virt.libvirt.migration [None req-316b633b-1626-487f-b686-649c0e42886e f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Updating guest XML with vif config: <interface type="ethernet">
Jan 20 14:28:39 compute-0 nova_compute[250018]:   <mac address="fa:16:3e:24:ce:0d"/>
Jan 20 14:28:39 compute-0 nova_compute[250018]:   <model type="virtio"/>
Jan 20 14:28:39 compute-0 nova_compute[250018]:   <driver name="vhost" rx_queue_size="512"/>
Jan 20 14:28:39 compute-0 nova_compute[250018]:   <mtu size="1442"/>
Jan 20 14:28:39 compute-0 nova_compute[250018]:   <target dev="tapbd002580-dd"/>
Jan 20 14:28:39 compute-0 nova_compute[250018]: </interface>
Jan 20 14:28:39 compute-0 nova_compute[250018]:  _update_vif_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:388
Jan 20 14:28:39 compute-0 nova_compute[250018]: 2026-01-20 14:28:39.008 250022 DEBUG nova.virt.libvirt.driver [None req-316b633b-1626-487f-b686-649c0e42886e f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] About to invoke the migrate API _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10272
Jan 20 14:28:39 compute-0 ovn_controller[148666]: 2026-01-20T14:28:39Z|00075|binding|INFO|Releasing lport aa1c73c5-9761-4457-acdc-9f93220f739f from this chassis (sb_readonly=0)
Jan 20 14:28:39 compute-0 nova_compute[250018]: 2026-01-20 14:28:39.090 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:28:39 compute-0 nova_compute[250018]: 2026-01-20 14:28:39.165 250022 DEBUG nova.virt.libvirt.migration [None req-316b633b-1626-487f-b686-649c0e42886e f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Current None elapsed 0 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512
Jan 20 14:28:39 compute-0 nova_compute[250018]: 2026-01-20 14:28:39.165 250022 INFO nova.virt.libvirt.migration [None req-316b633b-1626-487f-b686-649c0e42886e f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Increasing downtime to 50 ms after 0 sec elapsed time
Jan 20 14:28:39 compute-0 nova_compute[250018]: 2026-01-20 14:28:39.231 250022 DEBUG nova.compute.manager [req-50fd848e-72e5-4bdf-a989-13c732e00337 req-7a46018d-98b1-4479-b198-ace48e653fe0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Received event network-vif-plugged-bd002580-dd95-49e1-bc34-e85f86272a05 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:28:39 compute-0 nova_compute[250018]: 2026-01-20 14:28:39.232 250022 DEBUG oslo_concurrency.lockutils [req-50fd848e-72e5-4bdf-a989-13c732e00337 req-7a46018d-98b1-4479-b198-ace48e653fe0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "79b5596e-43c9-4085-9829-454fecf59490-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:28:39 compute-0 nova_compute[250018]: 2026-01-20 14:28:39.232 250022 DEBUG oslo_concurrency.lockutils [req-50fd848e-72e5-4bdf-a989-13c732e00337 req-7a46018d-98b1-4479-b198-ace48e653fe0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "79b5596e-43c9-4085-9829-454fecf59490-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:28:39 compute-0 nova_compute[250018]: 2026-01-20 14:28:39.232 250022 DEBUG oslo_concurrency.lockutils [req-50fd848e-72e5-4bdf-a989-13c732e00337 req-7a46018d-98b1-4479-b198-ace48e653fe0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "79b5596e-43c9-4085-9829-454fecf59490-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:28:39 compute-0 nova_compute[250018]: 2026-01-20 14:28:39.232 250022 DEBUG nova.compute.manager [req-50fd848e-72e5-4bdf-a989-13c732e00337 req-7a46018d-98b1-4479-b198-ace48e653fe0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] No waiting events found dispatching network-vif-plugged-bd002580-dd95-49e1-bc34-e85f86272a05 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:28:39 compute-0 nova_compute[250018]: 2026-01-20 14:28:39.232 250022 WARNING nova.compute.manager [req-50fd848e-72e5-4bdf-a989-13c732e00337 req-7a46018d-98b1-4479-b198-ace48e653fe0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Received unexpected event network-vif-plugged-bd002580-dd95-49e1-bc34-e85f86272a05 for instance with vm_state active and task_state migrating.
Jan 20 14:28:39 compute-0 nova_compute[250018]: 2026-01-20 14:28:39.233 250022 DEBUG nova.compute.manager [req-50fd848e-72e5-4bdf-a989-13c732e00337 req-7a46018d-98b1-4479-b198-ace48e653fe0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Received event network-changed-bd002580-dd95-49e1-bc34-e85f86272a05 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:28:39 compute-0 nova_compute[250018]: 2026-01-20 14:28:39.233 250022 DEBUG nova.compute.manager [req-50fd848e-72e5-4bdf-a989-13c732e00337 req-7a46018d-98b1-4479-b198-ace48e653fe0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Refreshing instance network info cache due to event network-changed-bd002580-dd95-49e1-bc34-e85f86272a05. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:28:39 compute-0 nova_compute[250018]: 2026-01-20 14:28:39.233 250022 DEBUG oslo_concurrency.lockutils [req-50fd848e-72e5-4bdf-a989-13c732e00337 req-7a46018d-98b1-4479-b198-ace48e653fe0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-79b5596e-43c9-4085-9829-454fecf59490" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:28:39 compute-0 nova_compute[250018]: 2026-01-20 14:28:39.233 250022 DEBUG oslo_concurrency.lockutils [req-50fd848e-72e5-4bdf-a989-13c732e00337 req-7a46018d-98b1-4479-b198-ace48e653fe0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-79b5596e-43c9-4085-9829-454fecf59490" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:28:39 compute-0 nova_compute[250018]: 2026-01-20 14:28:39.233 250022 DEBUG nova.network.neutron [req-50fd848e-72e5-4bdf-a989-13c732e00337 req-7a46018d-98b1-4479-b198-ace48e653fe0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Refreshing network info cache for port bd002580-dd95-49e1-bc34-e85f86272a05 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:28:39 compute-0 ovn_controller[148666]: 2026-01-20T14:28:39Z|00076|binding|INFO|Releasing lport aa1c73c5-9761-4457-acdc-9f93220f739f from this chassis (sb_readonly=0)
Jan 20 14:28:39 compute-0 nova_compute[250018]: 2026-01-20 14:28:39.286 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:28:39 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1132: 321 pgs: 321 active+clean; 250 MiB data, 456 MiB used, 21 GiB / 21 GiB avail; 561 KiB/s rd, 372 KiB/s wr, 90 op/s
Jan 20 14:28:39 compute-0 nova_compute[250018]: 2026-01-20 14:28:39.633 250022 INFO nova.virt.libvirt.driver [None req-316b633b-1626-487f-b686-649c0e42886e f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Migration running for 0 secs, memory 100% remaining (bytes processed=0, remaining=0, total=0); disk 100% remaining (bytes processed=0, remaining=0, total=0).
Jan 20 14:28:39 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:28:39 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:28:39 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:28:39.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:28:39 compute-0 ceph-mon[74360]: pgmap v1132: 321 pgs: 321 active+clean; 250 MiB data, 456 MiB used, 21 GiB / 21 GiB avail; 561 KiB/s rd, 372 KiB/s wr, 90 op/s
Jan 20 14:28:39 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:28:39.892 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:28:39 compute-0 nova_compute[250018]: 2026-01-20 14:28:39.892 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:28:39 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:28:39.893 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 14:28:39 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:28:39.894 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=367c1a2c-b16a-4828-ab5a-626bb50023b4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:28:40 compute-0 nova_compute[250018]: 2026-01-20 14:28:40.136 250022 DEBUG nova.virt.libvirt.migration [None req-316b633b-1626-487f-b686-649c0e42886e f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Current 50 elapsed 1 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512
Jan 20 14:28:40 compute-0 nova_compute[250018]: 2026-01-20 14:28:40.136 250022 DEBUG nova.virt.libvirt.migration [None req-316b633b-1626-487f-b686-649c0e42886e f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525
Jan 20 14:28:40 compute-0 nova_compute[250018]: 2026-01-20 14:28:40.367 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:28:40 compute-0 nova_compute[250018]: 2026-01-20 14:28:40.607 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768919320.6066082, 79b5596e-43c9-4085-9829-454fecf59490 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:28:40 compute-0 nova_compute[250018]: 2026-01-20 14:28:40.607 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 79b5596e-43c9-4085-9829-454fecf59490] VM Paused (Lifecycle Event)
Jan 20 14:28:40 compute-0 nova_compute[250018]: 2026-01-20 14:28:40.638 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:28:40 compute-0 nova_compute[250018]: 2026-01-20 14:28:40.639 250022 DEBUG nova.virt.libvirt.migration [None req-316b633b-1626-487f-b686-649c0e42886e f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Current 50 elapsed 1 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512
Jan 20 14:28:40 compute-0 nova_compute[250018]: 2026-01-20 14:28:40.640 250022 DEBUG nova.virt.libvirt.migration [None req-316b633b-1626-487f-b686-649c0e42886e f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525
Jan 20 14:28:40 compute-0 nova_compute[250018]: 2026-01-20 14:28:40.641 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: migrating, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:28:40 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:28:40 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:28:40 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:28:40.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:28:40 compute-0 nova_compute[250018]: 2026-01-20 14:28:40.665 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 79b5596e-43c9-4085-9829-454fecf59490] During sync_power_state the instance has a pending task (migrating). Skip.
Jan 20 14:28:40 compute-0 kernel: tapbd002580-dd (unregistering): left promiscuous mode
Jan 20 14:28:40 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/2708764190' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:28:40 compute-0 NetworkManager[48960]: <info>  [1768919320.7896] device (tapbd002580-dd): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 14:28:40 compute-0 ovn_controller[148666]: 2026-01-20T14:28:40Z|00077|binding|INFO|Releasing lport bd002580-dd95-49e1-bc34-e85f86272a05 from this chassis (sb_readonly=0)
Jan 20 14:28:40 compute-0 ovn_controller[148666]: 2026-01-20T14:28:40Z|00078|binding|INFO|Setting lport bd002580-dd95-49e1-bc34-e85f86272a05 down in Southbound
Jan 20 14:28:40 compute-0 ovn_controller[148666]: 2026-01-20T14:28:40Z|00079|binding|INFO|Removing iface tapbd002580-dd ovn-installed in OVS
Jan 20 14:28:40 compute-0 nova_compute[250018]: 2026-01-20 14:28:40.797 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:28:40 compute-0 nova_compute[250018]: 2026-01-20 14:28:40.799 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:28:40 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:28:40.806 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:24:ce:0d 10.100.0.10'], port_security=['fa:16:3e:24:ce:0d 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com,compute-1.ctlplane.example.com', 'activation-strategy': 'rarp', 'additional-chassis-activated': '5ffd4ac3-9266-4927-98ad-20a17782c725'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '79b5596e-43c9-4085-9829-454fecf59490', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-14f18b27-1594-48d8-a08b-a930f7adbc08', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd15f60b9e48e4175b5520d1e57ed2d3a', 'neutron:revision_number': '8', 'neutron:security_group_ids': '6d729cfd-2f98-4ca5-a524-e543b12b3766', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=02983c41-bbec-48cf-910a-84fed1be783f, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=bd002580-dd95-49e1-bc34-e85f86272a05) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:28:40 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:28:40.808 160071 INFO neutron.agent.ovn.metadata.agent [-] Port bd002580-dd95-49e1-bc34-e85f86272a05 in datapath 14f18b27-1594-48d8-a08b-a930f7adbc08 unbound from our chassis
Jan 20 14:28:40 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:28:40.811 160071 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 14f18b27-1594-48d8-a08b-a930f7adbc08, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 14:28:40 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:28:40.812 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[37bab051-91ff-4421-ab97-68ce005bac4b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:28:40 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:28:40.813 160071 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-14f18b27-1594-48d8-a08b-a930f7adbc08 namespace which is not needed anymore
Jan 20 14:28:40 compute-0 nova_compute[250018]: 2026-01-20 14:28:40.817 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:28:40 compute-0 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d00000017.scope: Deactivated successfully.
Jan 20 14:28:40 compute-0 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d00000017.scope: Consumed 13.633s CPU time.
Jan 20 14:28:40 compute-0 systemd-machined[216401]: Machine qemu-10-instance-00000017 terminated.
Jan 20 14:28:40 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:28:40 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e155 do_prune osdmap full prune enabled
Jan 20 14:28:40 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e156 e156: 3 total, 3 up, 3 in
Jan 20 14:28:40 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e156: 3 total, 3 up, 3 in
Jan 20 14:28:40 compute-0 virtqemud[249565]: Unable to get XATTR trusted.libvirt.security.ref_selinux on volumes/volume-47e883f3-6efe-40b3-be28-6c01525dfc0c: No such file or directory
Jan 20 14:28:40 compute-0 virtqemud[249565]: Unable to get XATTR trusted.libvirt.security.ref_dac on volumes/volume-47e883f3-6efe-40b3-be28-6c01525dfc0c: No such file or directory
Jan 20 14:28:40 compute-0 kernel: tapbd002580-dd: entered promiscuous mode
Jan 20 14:28:40 compute-0 kernel: tapbd002580-dd (unregistering): left promiscuous mode
Jan 20 14:28:40 compute-0 NetworkManager[48960]: <info>  [1768919320.9616] manager: (tapbd002580-dd): new Tun device (/org/freedesktop/NetworkManager/Devices/51)
Jan 20 14:28:40 compute-0 nova_compute[250018]: 2026-01-20 14:28:40.963 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:28:40 compute-0 ovn_controller[148666]: 2026-01-20T14:28:40Z|00080|binding|INFO|Claiming lport bd002580-dd95-49e1-bc34-e85f86272a05 for this chassis.
Jan 20 14:28:40 compute-0 ovn_controller[148666]: 2026-01-20T14:28:40Z|00081|binding|INFO|bd002580-dd95-49e1-bc34-e85f86272a05: Claiming fa:16:3e:24:ce:0d 10.100.0.10
Jan 20 14:28:40 compute-0 neutron-haproxy-ovnmeta-14f18b27-1594-48d8-a08b-a930f7adbc08[268626]: [NOTICE]   (268630) : haproxy version is 2.8.14-c23fe91
Jan 20 14:28:40 compute-0 neutron-haproxy-ovnmeta-14f18b27-1594-48d8-a08b-a930f7adbc08[268626]: [NOTICE]   (268630) : path to executable is /usr/sbin/haproxy
Jan 20 14:28:40 compute-0 neutron-haproxy-ovnmeta-14f18b27-1594-48d8-a08b-a930f7adbc08[268626]: [WARNING]  (268630) : Exiting Master process...
Jan 20 14:28:40 compute-0 neutron-haproxy-ovnmeta-14f18b27-1594-48d8-a08b-a930f7adbc08[268626]: [WARNING]  (268630) : Exiting Master process...
Jan 20 14:28:40 compute-0 neutron-haproxy-ovnmeta-14f18b27-1594-48d8-a08b-a930f7adbc08[268626]: [ALERT]    (268630) : Current worker (268632) exited with code 143 (Terminated)
Jan 20 14:28:40 compute-0 neutron-haproxy-ovnmeta-14f18b27-1594-48d8-a08b-a930f7adbc08[268626]: [WARNING]  (268630) : All workers exited. Exiting... (0)
Jan 20 14:28:40 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:28:40.971 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:24:ce:0d 10.100.0.10'], port_security=['fa:16:3e:24:ce:0d 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com,compute-1.ctlplane.example.com', 'activation-strategy': 'rarp', 'additional-chassis-activated': '5ffd4ac3-9266-4927-98ad-20a17782c725'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '79b5596e-43c9-4085-9829-454fecf59490', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-14f18b27-1594-48d8-a08b-a930f7adbc08', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd15f60b9e48e4175b5520d1e57ed2d3a', 'neutron:revision_number': '8', 'neutron:security_group_ids': '6d729cfd-2f98-4ca5-a524-e543b12b3766', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=02983c41-bbec-48cf-910a-84fed1be783f, chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=bd002580-dd95-49e1-bc34-e85f86272a05) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:28:40 compute-0 systemd[1]: libpod-bd039eb99c221c1e581a732726a2d440b4720bbf1fb34cd8c7b4769ad5284693.scope: Deactivated successfully.
Jan 20 14:28:40 compute-0 nova_compute[250018]: 2026-01-20 14:28:40.979 250022 DEBUG nova.virt.libvirt.driver [None req-316b633b-1626-487f-b686-649c0e42886e f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Migrate API has completed _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10279
Jan 20 14:28:40 compute-0 nova_compute[250018]: 2026-01-20 14:28:40.980 250022 DEBUG nova.virt.libvirt.driver [None req-316b633b-1626-487f-b686-649c0e42886e f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Migration operation thread has finished _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10327
Jan 20 14:28:40 compute-0 nova_compute[250018]: 2026-01-20 14:28:40.980 250022 DEBUG nova.virt.libvirt.driver [None req-316b633b-1626-487f-b686-649c0e42886e f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Migration operation thread notification thread_finished /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10630
Jan 20 14:28:40 compute-0 podman[268797]: 2026-01-20 14:28:40.980746842 +0000 UTC m=+0.052932534 container died bd039eb99c221c1e581a732726a2d440b4720bbf1fb34cd8c7b4769ad5284693 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-14f18b27-1594-48d8-a08b-a930f7adbc08, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 14:28:40 compute-0 ovn_controller[148666]: 2026-01-20T14:28:40Z|00082|binding|INFO|Releasing lport bd002580-dd95-49e1-bc34-e85f86272a05 from this chassis (sb_readonly=0)
Jan 20 14:28:40 compute-0 nova_compute[250018]: 2026-01-20 14:28:40.992 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:28:41 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:28:41.001 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:24:ce:0d 10.100.0.10'], port_security=['fa:16:3e:24:ce:0d 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com,compute-1.ctlplane.example.com', 'activation-strategy': 'rarp', 'additional-chassis-activated': '5ffd4ac3-9266-4927-98ad-20a17782c725'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '79b5596e-43c9-4085-9829-454fecf59490', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-14f18b27-1594-48d8-a08b-a930f7adbc08', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd15f60b9e48e4175b5520d1e57ed2d3a', 'neutron:revision_number': '8', 'neutron:security_group_ids': '6d729cfd-2f98-4ca5-a524-e543b12b3766', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=02983c41-bbec-48cf-910a-84fed1be783f, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=bd002580-dd95-49e1-bc34-e85f86272a05) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:28:41 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-bd039eb99c221c1e581a732726a2d440b4720bbf1fb34cd8c7b4769ad5284693-userdata-shm.mount: Deactivated successfully.
Jan 20 14:28:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-1af50e4350546bf18796712becaa21e5e0e9dffa956e2230cbfad3dc758e9979-merged.mount: Deactivated successfully.
Jan 20 14:28:41 compute-0 podman[268797]: 2026-01-20 14:28:41.019162514 +0000 UTC m=+0.091348206 container cleanup bd039eb99c221c1e581a732726a2d440b4720bbf1fb34cd8c7b4769ad5284693 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-14f18b27-1594-48d8-a08b-a930f7adbc08, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 20 14:28:41 compute-0 systemd[1]: libpod-conmon-bd039eb99c221c1e581a732726a2d440b4720bbf1fb34cd8c7b4769ad5284693.scope: Deactivated successfully.
Jan 20 14:28:41 compute-0 nova_compute[250018]: 2026-01-20 14:28:41.067 250022 DEBUG nova.compute.manager [req-65077209-83de-409e-b75a-b39ded4807e9 req-adb99b4c-847d-4931-9327-ee86f2bfce6b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Received event network-vif-unplugged-bd002580-dd95-49e1-bc34-e85f86272a05 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:28:41 compute-0 nova_compute[250018]: 2026-01-20 14:28:41.068 250022 DEBUG oslo_concurrency.lockutils [req-65077209-83de-409e-b75a-b39ded4807e9 req-adb99b4c-847d-4931-9327-ee86f2bfce6b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "79b5596e-43c9-4085-9829-454fecf59490-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:28:41 compute-0 nova_compute[250018]: 2026-01-20 14:28:41.068 250022 DEBUG oslo_concurrency.lockutils [req-65077209-83de-409e-b75a-b39ded4807e9 req-adb99b4c-847d-4931-9327-ee86f2bfce6b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "79b5596e-43c9-4085-9829-454fecf59490-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:28:41 compute-0 nova_compute[250018]: 2026-01-20 14:28:41.069 250022 DEBUG oslo_concurrency.lockutils [req-65077209-83de-409e-b75a-b39ded4807e9 req-adb99b4c-847d-4931-9327-ee86f2bfce6b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "79b5596e-43c9-4085-9829-454fecf59490-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:28:41 compute-0 nova_compute[250018]: 2026-01-20 14:28:41.069 250022 DEBUG nova.compute.manager [req-65077209-83de-409e-b75a-b39ded4807e9 req-adb99b4c-847d-4931-9327-ee86f2bfce6b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] No waiting events found dispatching network-vif-unplugged-bd002580-dd95-49e1-bc34-e85f86272a05 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:28:41 compute-0 nova_compute[250018]: 2026-01-20 14:28:41.069 250022 DEBUG nova.compute.manager [req-65077209-83de-409e-b75a-b39ded4807e9 req-adb99b4c-847d-4931-9327-ee86f2bfce6b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Received event network-vif-unplugged-bd002580-dd95-49e1-bc34-e85f86272a05 for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 20 14:28:41 compute-0 podman[268833]: 2026-01-20 14:28:41.077236756 +0000 UTC m=+0.037876029 container remove bd039eb99c221c1e581a732726a2d440b4720bbf1fb34cd8c7b4769ad5284693 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-14f18b27-1594-48d8-a08b-a930f7adbc08, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 20 14:28:41 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:28:41.083 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[69f4de95-4907-49cf-a09b-2813a23ab5a7]: (4, ('Tue Jan 20 02:28:40 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-14f18b27-1594-48d8-a08b-a930f7adbc08 (bd039eb99c221c1e581a732726a2d440b4720bbf1fb34cd8c7b4769ad5284693)\nbd039eb99c221c1e581a732726a2d440b4720bbf1fb34cd8c7b4769ad5284693\nTue Jan 20 02:28:41 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-14f18b27-1594-48d8-a08b-a930f7adbc08 (bd039eb99c221c1e581a732726a2d440b4720bbf1fb34cd8c7b4769ad5284693)\nbd039eb99c221c1e581a732726a2d440b4720bbf1fb34cd8c7b4769ad5284693\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:28:41 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:28:41.085 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[7cc471c0-0f6f-4b8d-9378-8384bc952f2e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:28:41 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:28:41.086 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap14f18b27-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:28:41 compute-0 nova_compute[250018]: 2026-01-20 14:28:41.088 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:28:41 compute-0 kernel: tap14f18b27-10: left promiscuous mode
Jan 20 14:28:41 compute-0 nova_compute[250018]: 2026-01-20 14:28:41.106 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:28:41 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:28:41.110 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[ea581fe0-ac57-4f2a-86a2-dd9b7383259a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:28:41 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:28:41.130 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[7c28a758-141d-405e-851c-24adf70d0fed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:28:41 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:28:41.132 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[8103938a-1774-482e-83db-f4e8be23647b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:28:41 compute-0 nova_compute[250018]: 2026-01-20 14:28:41.141 250022 DEBUG nova.virt.libvirt.guest [None req-316b633b-1626-487f-b686-649c0e42886e f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Domain has shutdown/gone away: Domain not found: no domain with matching uuid '79b5596e-43c9-4085-9829-454fecf59490' (instance-00000017) get_job_info /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:688
Jan 20 14:28:41 compute-0 nova_compute[250018]: 2026-01-20 14:28:41.144 250022 INFO nova.virt.libvirt.driver [None req-316b633b-1626-487f-b686-649c0e42886e f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Migration operation has completed
Jan 20 14:28:41 compute-0 nova_compute[250018]: 2026-01-20 14:28:41.145 250022 INFO nova.compute.manager [None req-316b633b-1626-487f-b686-649c0e42886e f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] _post_live_migration() is started..
Jan 20 14:28:41 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:28:41.159 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[11e425fd-ecab-4e03-a84c-2b1d4de86968]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 524408, 'reachable_time': 30986, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 268852, 'error': None, 'target': 'ovnmeta-14f18b27-1594-48d8-a08b-a930f7adbc08', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:28:41 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:28:41.163 160517 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-14f18b27-1594-48d8-a08b-a930f7adbc08 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 14:28:41 compute-0 systemd[1]: run-netns-ovnmeta\x2d14f18b27\x2d1594\x2d48d8\x2da08b\x2da930f7adbc08.mount: Deactivated successfully.
Jan 20 14:28:41 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:28:41.164 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[c1ff35d0-29d8-41a4-992e-f0fba56d1723]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:28:41 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:28:41.166 160071 INFO neutron.agent.ovn.metadata.agent [-] Port bd002580-dd95-49e1-bc34-e85f86272a05 in datapath 14f18b27-1594-48d8-a08b-a930f7adbc08 unbound from our chassis
Jan 20 14:28:41 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:28:41.169 160071 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 14f18b27-1594-48d8-a08b-a930f7adbc08, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 14:28:41 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:28:41.170 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[8f5a3e2a-c518-4ba9-8528-4d124464efe1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:28:41 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:28:41.171 160071 INFO neutron.agent.ovn.metadata.agent [-] Port bd002580-dd95-49e1-bc34-e85f86272a05 in datapath 14f18b27-1594-48d8-a08b-a930f7adbc08 unbound from our chassis
Jan 20 14:28:41 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:28:41.174 160071 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 14f18b27-1594-48d8-a08b-a930f7adbc08, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 14:28:41 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:28:41.175 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[5a8b0c11-1323-4062-8b82-1d73ca238374]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:28:41 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1134: 321 pgs: 321 active+clean; 279 MiB data, 478 MiB used, 21 GiB / 21 GiB avail; 1.4 MiB/s rd, 3.1 MiB/s wr, 212 op/s
Jan 20 14:28:41 compute-0 nova_compute[250018]: 2026-01-20 14:28:41.408 250022 DEBUG nova.network.neutron [req-50fd848e-72e5-4bdf-a989-13c732e00337 req-7a46018d-98b1-4479-b198-ace48e653fe0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Updated VIF entry in instance network info cache for port bd002580-dd95-49e1-bc34-e85f86272a05. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 14:28:41 compute-0 nova_compute[250018]: 2026-01-20 14:28:41.409 250022 DEBUG nova.network.neutron [req-50fd848e-72e5-4bdf-a989-13c732e00337 req-7a46018d-98b1-4479-b198-ace48e653fe0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Updating instance_info_cache with network_info: [{"id": "bd002580-dd95-49e1-bc34-e85f86272a05", "address": "fa:16:3e:24:ce:0d", "network": {"id": "14f18b27-1594-48d8-a08b-a930f7adbc08", "bridge": "br-int", "label": "tempest-LiveMigrationTest-2126108622-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d15f60b9e48e4175b5520d1e57ed2d3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd002580-dd", "ovs_interfaceid": "bd002580-dd95-49e1-bc34-e85f86272a05", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"migrating_to": "compute-1.ctlplane.example.com"}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:28:41 compute-0 nova_compute[250018]: 2026-01-20 14:28:41.430 250022 DEBUG oslo_concurrency.lockutils [req-50fd848e-72e5-4bdf-a989-13c732e00337 req-7a46018d-98b1-4479-b198-ace48e653fe0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-79b5596e-43c9-4085-9829-454fecf59490" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:28:41 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:28:41 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:28:41 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:28:41.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:28:41 compute-0 ceph-mon[74360]: osdmap e156: 3 total, 3 up, 3 in
Jan 20 14:28:41 compute-0 ceph-mon[74360]: pgmap v1134: 321 pgs: 321 active+clean; 279 MiB data, 478 MiB used, 21 GiB / 21 GiB avail; 1.4 MiB/s rd, 3.1 MiB/s wr, 212 op/s
Jan 20 14:28:42 compute-0 nova_compute[250018]: 2026-01-20 14:28:42.111 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:28:42 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:28:42 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:28:42 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:28:42.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:28:42 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/2946817491' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:28:42 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/3456057515' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:28:42 compute-0 nova_compute[250018]: 2026-01-20 14:28:42.947 250022 DEBUG nova.compute.manager [req-9904a04d-6f7e-4c79-bbc8-12f291f2604d req-664171d2-c25f-402e-b89c-88d7d22f856c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Received event network-vif-unplugged-bd002580-dd95-49e1-bc34-e85f86272a05 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:28:42 compute-0 nova_compute[250018]: 2026-01-20 14:28:42.947 250022 DEBUG oslo_concurrency.lockutils [req-9904a04d-6f7e-4c79-bbc8-12f291f2604d req-664171d2-c25f-402e-b89c-88d7d22f856c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "79b5596e-43c9-4085-9829-454fecf59490-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:28:42 compute-0 nova_compute[250018]: 2026-01-20 14:28:42.948 250022 DEBUG oslo_concurrency.lockutils [req-9904a04d-6f7e-4c79-bbc8-12f291f2604d req-664171d2-c25f-402e-b89c-88d7d22f856c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "79b5596e-43c9-4085-9829-454fecf59490-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:28:42 compute-0 nova_compute[250018]: 2026-01-20 14:28:42.948 250022 DEBUG oslo_concurrency.lockutils [req-9904a04d-6f7e-4c79-bbc8-12f291f2604d req-664171d2-c25f-402e-b89c-88d7d22f856c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "79b5596e-43c9-4085-9829-454fecf59490-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:28:42 compute-0 nova_compute[250018]: 2026-01-20 14:28:42.948 250022 DEBUG nova.compute.manager [req-9904a04d-6f7e-4c79-bbc8-12f291f2604d req-664171d2-c25f-402e-b89c-88d7d22f856c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] No waiting events found dispatching network-vif-unplugged-bd002580-dd95-49e1-bc34-e85f86272a05 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:28:42 compute-0 nova_compute[250018]: 2026-01-20 14:28:42.949 250022 DEBUG nova.compute.manager [req-9904a04d-6f7e-4c79-bbc8-12f291f2604d req-664171d2-c25f-402e-b89c-88d7d22f856c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Received event network-vif-unplugged-bd002580-dd95-49e1-bc34-e85f86272a05 for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 20 14:28:43 compute-0 nova_compute[250018]: 2026-01-20 14:28:43.161 250022 DEBUG nova.network.neutron [None req-316b633b-1626-487f-b686-649c0e42886e f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Activated binding for port bd002580-dd95-49e1-bc34-e85f86272a05 and host compute-1.ctlplane.example.com migrate_instance_start /usr/lib/python3.9/site-packages/nova/network/neutron.py:3181
Jan 20 14:28:43 compute-0 nova_compute[250018]: 2026-01-20 14:28:43.162 250022 DEBUG nova.compute.manager [None req-316b633b-1626-487f-b686-649c0e42886e f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Calling driver.post_live_migration_at_source with original source VIFs from migrate_data: [{"id": "bd002580-dd95-49e1-bc34-e85f86272a05", "address": "fa:16:3e:24:ce:0d", "network": {"id": "14f18b27-1594-48d8-a08b-a930f7adbc08", "bridge": "br-int", "label": "tempest-LiveMigrationTest-2126108622-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d15f60b9e48e4175b5520d1e57ed2d3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd002580-dd", "ovs_interfaceid": "bd002580-dd95-49e1-bc34-e85f86272a05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] _post_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:9326
Jan 20 14:28:43 compute-0 nova_compute[250018]: 2026-01-20 14:28:43.163 250022 DEBUG nova.virt.libvirt.vif [None req-316b633b-1626-487f-b686-649c0e42886e f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:28:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-1483268234',display_name='tempest-LiveMigrationTest-server-1483268234',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-livemigrationtest-server-1483268234',id=23,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:28:24Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='d15f60b9e48e4175b5520d1e57ed2d3a',ramdisk_id='',reservation_id='r-jglb1q09',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-LiveMigrationTest-864280704',owner_user_name='tempest-LiveMigrationTest-864280704-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T14:28:30Z,user_data=None,user_id='bce7fcbd19554e29bb80c5b93b7dd3c9',uuid=79b5596e-43c9-4085-9829-454fecf59490,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bd002580-dd95-49e1-bc34-e85f86272a05", "address": "fa:16:3e:24:ce:0d", "network": {"id": "14f18b27-1594-48d8-a08b-a930f7adbc08", "bridge": "br-int", "label": "tempest-LiveMigrationTest-2126108622-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d15f60b9e48e4175b5520d1e57ed2d3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd002580-dd", "ovs_interfaceid": "bd002580-dd95-49e1-bc34-e85f86272a05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 14:28:43 compute-0 nova_compute[250018]: 2026-01-20 14:28:43.163 250022 DEBUG nova.network.os_vif_util [None req-316b633b-1626-487f-b686-649c0e42886e f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Converting VIF {"id": "bd002580-dd95-49e1-bc34-e85f86272a05", "address": "fa:16:3e:24:ce:0d", "network": {"id": "14f18b27-1594-48d8-a08b-a930f7adbc08", "bridge": "br-int", "label": "tempest-LiveMigrationTest-2126108622-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d15f60b9e48e4175b5520d1e57ed2d3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd002580-dd", "ovs_interfaceid": "bd002580-dd95-49e1-bc34-e85f86272a05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:28:43 compute-0 nova_compute[250018]: 2026-01-20 14:28:43.164 250022 DEBUG nova.network.os_vif_util [None req-316b633b-1626-487f-b686-649c0e42886e f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:24:ce:0d,bridge_name='br-int',has_traffic_filtering=True,id=bd002580-dd95-49e1-bc34-e85f86272a05,network=Network(14f18b27-1594-48d8-a08b-a930f7adbc08),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbd002580-dd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:28:43 compute-0 nova_compute[250018]: 2026-01-20 14:28:43.165 250022 DEBUG os_vif [None req-316b633b-1626-487f-b686-649c0e42886e f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:24:ce:0d,bridge_name='br-int',has_traffic_filtering=True,id=bd002580-dd95-49e1-bc34-e85f86272a05,network=Network(14f18b27-1594-48d8-a08b-a930f7adbc08),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbd002580-dd') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 14:28:43 compute-0 nova_compute[250018]: 2026-01-20 14:28:43.167 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:28:43 compute-0 nova_compute[250018]: 2026-01-20 14:28:43.168 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbd002580-dd, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:28:43 compute-0 nova_compute[250018]: 2026-01-20 14:28:43.170 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:28:43 compute-0 nova_compute[250018]: 2026-01-20 14:28:43.173 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:28:43 compute-0 nova_compute[250018]: 2026-01-20 14:28:43.176 250022 INFO os_vif [None req-316b633b-1626-487f-b686-649c0e42886e f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:24:ce:0d,bridge_name='br-int',has_traffic_filtering=True,id=bd002580-dd95-49e1-bc34-e85f86272a05,network=Network(14f18b27-1594-48d8-a08b-a930f7adbc08),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbd002580-dd')
Jan 20 14:28:43 compute-0 nova_compute[250018]: 2026-01-20 14:28:43.177 250022 DEBUG oslo_concurrency.lockutils [None req-316b633b-1626-487f-b686-649c0e42886e f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:28:43 compute-0 nova_compute[250018]: 2026-01-20 14:28:43.178 250022 DEBUG oslo_concurrency.lockutils [None req-316b633b-1626-487f-b686-649c0e42886e f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:28:43 compute-0 nova_compute[250018]: 2026-01-20 14:28:43.178 250022 DEBUG oslo_concurrency.lockutils [None req-316b633b-1626-487f-b686-649c0e42886e f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:28:43 compute-0 nova_compute[250018]: 2026-01-20 14:28:43.178 250022 DEBUG nova.compute.manager [None req-316b633b-1626-487f-b686-649c0e42886e f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Calling driver.cleanup from _post_live_migration _post_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:9349
Jan 20 14:28:43 compute-0 nova_compute[250018]: 2026-01-20 14:28:43.179 250022 INFO nova.virt.libvirt.driver [None req-316b633b-1626-487f-b686-649c0e42886e f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Deleting instance files /var/lib/nova/instances/79b5596e-43c9-4085-9829-454fecf59490_del
Jan 20 14:28:43 compute-0 nova_compute[250018]: 2026-01-20 14:28:43.180 250022 INFO nova.virt.libvirt.driver [None req-316b633b-1626-487f-b686-649c0e42886e f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Deletion of /var/lib/nova/instances/79b5596e-43c9-4085-9829-454fecf59490_del complete
Jan 20 14:28:43 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1135: 321 pgs: 321 active+clean; 279 MiB data, 478 MiB used, 21 GiB / 21 GiB avail; 1.2 MiB/s rd, 2.6 MiB/s wr, 174 op/s
Jan 20 14:28:43 compute-0 nova_compute[250018]: 2026-01-20 14:28:43.362 250022 DEBUG nova.compute.manager [req-7e3e87c0-8284-4654-bf28-cb08fb31ca14 req-ce710568-ae1f-42fe-a7aa-f0263b5a8a09 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Received event network-vif-plugged-bd002580-dd95-49e1-bc34-e85f86272a05 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:28:43 compute-0 nova_compute[250018]: 2026-01-20 14:28:43.363 250022 DEBUG oslo_concurrency.lockutils [req-7e3e87c0-8284-4654-bf28-cb08fb31ca14 req-ce710568-ae1f-42fe-a7aa-f0263b5a8a09 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "79b5596e-43c9-4085-9829-454fecf59490-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:28:43 compute-0 nova_compute[250018]: 2026-01-20 14:28:43.363 250022 DEBUG oslo_concurrency.lockutils [req-7e3e87c0-8284-4654-bf28-cb08fb31ca14 req-ce710568-ae1f-42fe-a7aa-f0263b5a8a09 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "79b5596e-43c9-4085-9829-454fecf59490-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:28:43 compute-0 nova_compute[250018]: 2026-01-20 14:28:43.364 250022 DEBUG oslo_concurrency.lockutils [req-7e3e87c0-8284-4654-bf28-cb08fb31ca14 req-ce710568-ae1f-42fe-a7aa-f0263b5a8a09 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "79b5596e-43c9-4085-9829-454fecf59490-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:28:43 compute-0 nova_compute[250018]: 2026-01-20 14:28:43.364 250022 DEBUG nova.compute.manager [req-7e3e87c0-8284-4654-bf28-cb08fb31ca14 req-ce710568-ae1f-42fe-a7aa-f0263b5a8a09 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] No waiting events found dispatching network-vif-plugged-bd002580-dd95-49e1-bc34-e85f86272a05 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:28:43 compute-0 nova_compute[250018]: 2026-01-20 14:28:43.365 250022 WARNING nova.compute.manager [req-7e3e87c0-8284-4654-bf28-cb08fb31ca14 req-ce710568-ae1f-42fe-a7aa-f0263b5a8a09 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Received unexpected event network-vif-plugged-bd002580-dd95-49e1-bc34-e85f86272a05 for instance with vm_state active and task_state migrating.
Jan 20 14:28:43 compute-0 nova_compute[250018]: 2026-01-20 14:28:43.365 250022 DEBUG nova.compute.manager [req-7e3e87c0-8284-4654-bf28-cb08fb31ca14 req-ce710568-ae1f-42fe-a7aa-f0263b5a8a09 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Received event network-vif-plugged-bd002580-dd95-49e1-bc34-e85f86272a05 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:28:43 compute-0 nova_compute[250018]: 2026-01-20 14:28:43.366 250022 DEBUG oslo_concurrency.lockutils [req-7e3e87c0-8284-4654-bf28-cb08fb31ca14 req-ce710568-ae1f-42fe-a7aa-f0263b5a8a09 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "79b5596e-43c9-4085-9829-454fecf59490-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:28:43 compute-0 nova_compute[250018]: 2026-01-20 14:28:43.367 250022 DEBUG oslo_concurrency.lockutils [req-7e3e87c0-8284-4654-bf28-cb08fb31ca14 req-ce710568-ae1f-42fe-a7aa-f0263b5a8a09 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "79b5596e-43c9-4085-9829-454fecf59490-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:28:43 compute-0 nova_compute[250018]: 2026-01-20 14:28:43.368 250022 DEBUG oslo_concurrency.lockutils [req-7e3e87c0-8284-4654-bf28-cb08fb31ca14 req-ce710568-ae1f-42fe-a7aa-f0263b5a8a09 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "79b5596e-43c9-4085-9829-454fecf59490-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:28:43 compute-0 nova_compute[250018]: 2026-01-20 14:28:43.369 250022 DEBUG nova.compute.manager [req-7e3e87c0-8284-4654-bf28-cb08fb31ca14 req-ce710568-ae1f-42fe-a7aa-f0263b5a8a09 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] No waiting events found dispatching network-vif-plugged-bd002580-dd95-49e1-bc34-e85f86272a05 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:28:43 compute-0 nova_compute[250018]: 2026-01-20 14:28:43.369 250022 WARNING nova.compute.manager [req-7e3e87c0-8284-4654-bf28-cb08fb31ca14 req-ce710568-ae1f-42fe-a7aa-f0263b5a8a09 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Received unexpected event network-vif-plugged-bd002580-dd95-49e1-bc34-e85f86272a05 for instance with vm_state active and task_state migrating.
Jan 20 14:28:43 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:28:43 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:28:43 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:28:43.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:28:43 compute-0 ceph-mon[74360]: pgmap v1135: 321 pgs: 321 active+clean; 279 MiB data, 478 MiB used, 21 GiB / 21 GiB avail; 1.2 MiB/s rd, 2.6 MiB/s wr, 174 op/s
Jan 20 14:28:44 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:28:44 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:28:44 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:28:44.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:28:45 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1136: 321 pgs: 321 active+clean; 312 MiB data, 490 MiB used, 21 GiB / 21 GiB avail; 1.3 MiB/s rd, 3.8 MiB/s wr, 187 op/s
Jan 20 14:28:45 compute-0 nova_compute[250018]: 2026-01-20 14:28:45.374 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:28:45 compute-0 nova_compute[250018]: 2026-01-20 14:28:45.581 250022 DEBUG nova.compute.manager [req-3cc1848f-6cf2-4eaf-9b8b-871deff36304 req-e53a4274-1a27-46e2-82e9-b2f971c2044a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Received event network-vif-plugged-bd002580-dd95-49e1-bc34-e85f86272a05 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:28:45 compute-0 nova_compute[250018]: 2026-01-20 14:28:45.581 250022 DEBUG oslo_concurrency.lockutils [req-3cc1848f-6cf2-4eaf-9b8b-871deff36304 req-e53a4274-1a27-46e2-82e9-b2f971c2044a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "79b5596e-43c9-4085-9829-454fecf59490-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:28:45 compute-0 nova_compute[250018]: 2026-01-20 14:28:45.581 250022 DEBUG oslo_concurrency.lockutils [req-3cc1848f-6cf2-4eaf-9b8b-871deff36304 req-e53a4274-1a27-46e2-82e9-b2f971c2044a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "79b5596e-43c9-4085-9829-454fecf59490-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:28:45 compute-0 nova_compute[250018]: 2026-01-20 14:28:45.582 250022 DEBUG oslo_concurrency.lockutils [req-3cc1848f-6cf2-4eaf-9b8b-871deff36304 req-e53a4274-1a27-46e2-82e9-b2f971c2044a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "79b5596e-43c9-4085-9829-454fecf59490-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:28:45 compute-0 nova_compute[250018]: 2026-01-20 14:28:45.582 250022 DEBUG nova.compute.manager [req-3cc1848f-6cf2-4eaf-9b8b-871deff36304 req-e53a4274-1a27-46e2-82e9-b2f971c2044a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] No waiting events found dispatching network-vif-plugged-bd002580-dd95-49e1-bc34-e85f86272a05 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:28:45 compute-0 nova_compute[250018]: 2026-01-20 14:28:45.582 250022 WARNING nova.compute.manager [req-3cc1848f-6cf2-4eaf-9b8b-871deff36304 req-e53a4274-1a27-46e2-82e9-b2f971c2044a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Received unexpected event network-vif-plugged-bd002580-dd95-49e1-bc34-e85f86272a05 for instance with vm_state active and task_state migrating.
Jan 20 14:28:45 compute-0 nova_compute[250018]: 2026-01-20 14:28:45.582 250022 DEBUG nova.compute.manager [req-3cc1848f-6cf2-4eaf-9b8b-871deff36304 req-e53a4274-1a27-46e2-82e9-b2f971c2044a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Received event network-vif-plugged-bd002580-dd95-49e1-bc34-e85f86272a05 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:28:45 compute-0 nova_compute[250018]: 2026-01-20 14:28:45.582 250022 DEBUG oslo_concurrency.lockutils [req-3cc1848f-6cf2-4eaf-9b8b-871deff36304 req-e53a4274-1a27-46e2-82e9-b2f971c2044a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "79b5596e-43c9-4085-9829-454fecf59490-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:28:45 compute-0 nova_compute[250018]: 2026-01-20 14:28:45.582 250022 DEBUG oslo_concurrency.lockutils [req-3cc1848f-6cf2-4eaf-9b8b-871deff36304 req-e53a4274-1a27-46e2-82e9-b2f971c2044a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "79b5596e-43c9-4085-9829-454fecf59490-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:28:45 compute-0 nova_compute[250018]: 2026-01-20 14:28:45.583 250022 DEBUG oslo_concurrency.lockutils [req-3cc1848f-6cf2-4eaf-9b8b-871deff36304 req-e53a4274-1a27-46e2-82e9-b2f971c2044a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "79b5596e-43c9-4085-9829-454fecf59490-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:28:45 compute-0 nova_compute[250018]: 2026-01-20 14:28:45.583 250022 DEBUG nova.compute.manager [req-3cc1848f-6cf2-4eaf-9b8b-871deff36304 req-e53a4274-1a27-46e2-82e9-b2f971c2044a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] No waiting events found dispatching network-vif-plugged-bd002580-dd95-49e1-bc34-e85f86272a05 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:28:45 compute-0 nova_compute[250018]: 2026-01-20 14:28:45.583 250022 WARNING nova.compute.manager [req-3cc1848f-6cf2-4eaf-9b8b-871deff36304 req-e53a4274-1a27-46e2-82e9-b2f971c2044a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Received unexpected event network-vif-plugged-bd002580-dd95-49e1-bc34-e85f86272a05 for instance with vm_state active and task_state migrating.
Jan 20 14:28:45 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:28:45 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:28:45 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:28:45.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:28:45 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:28:46 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:28:46 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:28:46 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:28:46.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:28:46 compute-0 ceph-mon[74360]: pgmap v1136: 321 pgs: 321 active+clean; 312 MiB data, 490 MiB used, 21 GiB / 21 GiB avail; 1.3 MiB/s rd, 3.8 MiB/s wr, 187 op/s
Jan 20 14:28:47 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1137: 321 pgs: 321 active+clean; 326 MiB data, 499 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 4.7 MiB/s wr, 216 op/s
Jan 20 14:28:47 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:28:47 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:28:47 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:28:47.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:28:47 compute-0 ceph-mon[74360]: pgmap v1137: 321 pgs: 321 active+clean; 326 MiB data, 499 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 4.7 MiB/s wr, 216 op/s
Jan 20 14:28:48 compute-0 nova_compute[250018]: 2026-01-20 14:28:48.170 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:28:48 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:28:48 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:28:48 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:28:48.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:28:48 compute-0 nova_compute[250018]: 2026-01-20 14:28:48.704 250022 DEBUG oslo_concurrency.lockutils [None req-316b633b-1626-487f-b686-649c0e42886e f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Acquiring lock "79b5596e-43c9-4085-9829-454fecf59490-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:28:48 compute-0 nova_compute[250018]: 2026-01-20 14:28:48.704 250022 DEBUG oslo_concurrency.lockutils [None req-316b633b-1626-487f-b686-649c0e42886e f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Lock "79b5596e-43c9-4085-9829-454fecf59490-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:28:48 compute-0 nova_compute[250018]: 2026-01-20 14:28:48.705 250022 DEBUG oslo_concurrency.lockutils [None req-316b633b-1626-487f-b686-649c0e42886e f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Lock "79b5596e-43c9-4085-9829-454fecf59490-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:28:48 compute-0 nova_compute[250018]: 2026-01-20 14:28:48.740 250022 DEBUG oslo_concurrency.lockutils [None req-316b633b-1626-487f-b686-649c0e42886e f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:28:48 compute-0 nova_compute[250018]: 2026-01-20 14:28:48.741 250022 DEBUG oslo_concurrency.lockutils [None req-316b633b-1626-487f-b686-649c0e42886e f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:28:48 compute-0 nova_compute[250018]: 2026-01-20 14:28:48.741 250022 DEBUG oslo_concurrency.lockutils [None req-316b633b-1626-487f-b686-649c0e42886e f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:28:48 compute-0 nova_compute[250018]: 2026-01-20 14:28:48.742 250022 DEBUG nova.compute.resource_tracker [None req-316b633b-1626-487f-b686-649c0e42886e f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 14:28:48 compute-0 nova_compute[250018]: 2026-01-20 14:28:48.742 250022 DEBUG oslo_concurrency.processutils [None req-316b633b-1626-487f-b686-649c0e42886e f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:28:48 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/663865372' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:28:49 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:28:49 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2858502062' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:28:49 compute-0 nova_compute[250018]: 2026-01-20 14:28:49.166 250022 DEBUG oslo_concurrency.processutils [None req-316b633b-1626-487f-b686-649c0e42886e f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:28:49 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1138: 321 pgs: 321 active+clean; 326 MiB data, 499 MiB used, 21 GiB / 21 GiB avail; 2.7 MiB/s rd, 4.4 MiB/s wr, 219 op/s
Jan 20 14:28:49 compute-0 nova_compute[250018]: 2026-01-20 14:28:49.342 250022 WARNING nova.virt.libvirt.driver [None req-316b633b-1626-487f-b686-649c0e42886e f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:28:49 compute-0 nova_compute[250018]: 2026-01-20 14:28:49.343 250022 DEBUG nova.compute.resource_tracker [None req-316b633b-1626-487f-b686-649c0e42886e f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4747MB free_disk=20.876209259033203GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 14:28:49 compute-0 nova_compute[250018]: 2026-01-20 14:28:49.343 250022 DEBUG oslo_concurrency.lockutils [None req-316b633b-1626-487f-b686-649c0e42886e f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:28:49 compute-0 nova_compute[250018]: 2026-01-20 14:28:49.343 250022 DEBUG oslo_concurrency.lockutils [None req-316b633b-1626-487f-b686-649c0e42886e f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:28:49 compute-0 nova_compute[250018]: 2026-01-20 14:28:49.416 250022 DEBUG nova.compute.resource_tracker [None req-316b633b-1626-487f-b686-649c0e42886e f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Migration for instance 79b5596e-43c9-4085-9829-454fecf59490 refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903
Jan 20 14:28:49 compute-0 nova_compute[250018]: 2026-01-20 14:28:49.443 250022 DEBUG nova.compute.resource_tracker [None req-316b633b-1626-487f-b686-649c0e42886e f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Skipping migration as instance is neither resizing nor live-migrating. _update_usage_from_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1491
Jan 20 14:28:49 compute-0 nova_compute[250018]: 2026-01-20 14:28:49.469 250022 DEBUG nova.compute.resource_tracker [None req-316b633b-1626-487f-b686-649c0e42886e f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Migration 0a1d0101-68cf-4b3f-9211-e9c29fb7c49b is active on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640
Jan 20 14:28:49 compute-0 nova_compute[250018]: 2026-01-20 14:28:49.470 250022 DEBUG nova.compute.resource_tracker [None req-316b633b-1626-487f-b686-649c0e42886e f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 14:28:49 compute-0 nova_compute[250018]: 2026-01-20 14:28:49.470 250022 DEBUG nova.compute.resource_tracker [None req-316b633b-1626-487f-b686-649c0e42886e f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 14:28:49 compute-0 nova_compute[250018]: 2026-01-20 14:28:49.505 250022 DEBUG oslo_concurrency.processutils [None req-316b633b-1626-487f-b686-649c0e42886e f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:28:49 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:28:49 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:28:49 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:28:49.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:28:49 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/2858502062' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:28:49 compute-0 ceph-mon[74360]: pgmap v1138: 321 pgs: 321 active+clean; 326 MiB data, 499 MiB used, 21 GiB / 21 GiB avail; 2.7 MiB/s rd, 4.4 MiB/s wr, 219 op/s
Jan 20 14:28:49 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:28:49 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/66379269' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:28:49 compute-0 nova_compute[250018]: 2026-01-20 14:28:49.995 250022 DEBUG oslo_concurrency.processutils [None req-316b633b-1626-487f-b686-649c0e42886e f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:28:50 compute-0 nova_compute[250018]: 2026-01-20 14:28:50.002 250022 DEBUG nova.compute.provider_tree [None req-316b633b-1626-487f-b686-649c0e42886e f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:28:50 compute-0 nova_compute[250018]: 2026-01-20 14:28:50.024 250022 DEBUG nova.scheduler.client.report [None req-316b633b-1626-487f-b686-649c0e42886e f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:28:50 compute-0 nova_compute[250018]: 2026-01-20 14:28:50.050 250022 DEBUG nova.compute.resource_tracker [None req-316b633b-1626-487f-b686-649c0e42886e f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 14:28:50 compute-0 nova_compute[250018]: 2026-01-20 14:28:50.051 250022 DEBUG oslo_concurrency.lockutils [None req-316b633b-1626-487f-b686-649c0e42886e f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.707s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:28:50 compute-0 nova_compute[250018]: 2026-01-20 14:28:50.056 250022 INFO nova.compute.manager [None req-316b633b-1626-487f-b686-649c0e42886e f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Migrating instance to compute-1.ctlplane.example.com finished successfully.
Jan 20 14:28:50 compute-0 nova_compute[250018]: 2026-01-20 14:28:50.179 250022 INFO nova.scheduler.client.report [None req-316b633b-1626-487f-b686-649c0e42886e f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Deleted allocation for migration 0a1d0101-68cf-4b3f-9211-e9c29fb7c49b
Jan 20 14:28:50 compute-0 nova_compute[250018]: 2026-01-20 14:28:50.179 250022 DEBUG nova.virt.libvirt.driver [None req-316b633b-1626-487f-b686-649c0e42886e f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Live migration monitoring is all done _live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10662
Jan 20 14:28:50 compute-0 nova_compute[250018]: 2026-01-20 14:28:50.400 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:28:50 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:28:50 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:28:50 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:28:50.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:28:50 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:28:50 compute-0 nova_compute[250018]: 2026-01-20 14:28:50.953 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:28:51 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/66379269' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:28:51 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1139: 321 pgs: 321 active+clean; 327 MiB data, 499 MiB used, 21 GiB / 21 GiB avail; 2.5 MiB/s rd, 2.1 MiB/s wr, 170 op/s
Jan 20 14:28:51 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:28:51 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:28:51 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:28:51.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:28:52 compute-0 ceph-mon[74360]: pgmap v1139: 321 pgs: 321 active+clean; 327 MiB data, 499 MiB used, 21 GiB / 21 GiB avail; 2.5 MiB/s rd, 2.1 MiB/s wr, 170 op/s
Jan 20 14:28:52 compute-0 nova_compute[250018]: 2026-01-20 14:28:52.353 250022 DEBUG nova.virt.libvirt.driver [None req-3e6ecc77-61de-4099-8bb3-ee2d276c7579 f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Creating tmpfile /var/lib/nova/instances/tmp27yeses5 to notify to other compute nodes that they should mount the same storage. _create_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10041
Jan 20 14:28:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:28:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:28:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Optimize plan auto_2026-01-20_14:28:52
Jan 20 14:28:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 14:28:52 compute-0 ceph-mgr[74653]: [balancer INFO root] do_upmap
Jan 20 14:28:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:28:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:28:52 compute-0 ceph-mgr[74653]: [balancer INFO root] pools ['default.rgw.control', 'cephfs.cephfs.data', 'volumes', 'images', 'default.rgw.log', 'vms', '.mgr', '.rgw.root', 'backups', 'cephfs.cephfs.meta', 'default.rgw.meta']
Jan 20 14:28:52 compute-0 ceph-mgr[74653]: [balancer INFO root] prepared 0/10 changes
Jan 20 14:28:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:28:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:28:52 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:28:52 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:28:52 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:28:52.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:28:52 compute-0 nova_compute[250018]: 2026-01-20 14:28:52.756 250022 DEBUG nova.compute.manager [None req-3e6ecc77-61de-4099-8bb3-ee2d276c7579 f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] destination check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp27yeses5',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path=<?>,is_shared_block_storage=<?>,is_shared_instance_path=<?>,is_volume_backed=<?>,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_destination /usr/lib/python3.9/site-packages/nova/compute/manager.py:8476
Jan 20 14:28:53 compute-0 nova_compute[250018]: 2026-01-20 14:28:53.172 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:28:53 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1140: 321 pgs: 321 active+clean; 327 MiB data, 499 MiB used, 21 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.8 MiB/s wr, 147 op/s
Jan 20 14:28:53 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:28:53 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:28:53 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:28:53.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:28:54 compute-0 nova_compute[250018]: 2026-01-20 14:28:54.068 250022 DEBUG nova.compute.manager [None req-3e6ecc77-61de-4099-8bb3-ee2d276c7579 f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] pre_live_migration data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp27yeses5',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='79b5596e-43c9-4085-9829-454fecf59490',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8604
Jan 20 14:28:54 compute-0 nova_compute[250018]: 2026-01-20 14:28:54.096 250022 DEBUG oslo_concurrency.lockutils [None req-3e6ecc77-61de-4099-8bb3-ee2d276c7579 f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Acquiring lock "refresh_cache-79b5596e-43c9-4085-9829-454fecf59490" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:28:54 compute-0 nova_compute[250018]: 2026-01-20 14:28:54.097 250022 DEBUG oslo_concurrency.lockutils [None req-3e6ecc77-61de-4099-8bb3-ee2d276c7579 f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Acquired lock "refresh_cache-79b5596e-43c9-4085-9829-454fecf59490" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:28:54 compute-0 nova_compute[250018]: 2026-01-20 14:28:54.097 250022 DEBUG nova.network.neutron [None req-3e6ecc77-61de-4099-8bb3-ee2d276c7579 f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:28:54 compute-0 ceph-mon[74360]: pgmap v1140: 321 pgs: 321 active+clean; 327 MiB data, 499 MiB used, 21 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.8 MiB/s wr, 147 op/s
Jan 20 14:28:54 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:28:54 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:28:54 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:28:54.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:28:55 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1141: 321 pgs: 321 active+clean; 327 MiB data, 503 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.8 MiB/s wr, 238 op/s
Jan 20 14:28:55 compute-0 nova_compute[250018]: 2026-01-20 14:28:55.403 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:28:55 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:28:55 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.002000053s ======
Jan 20 14:28:55 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:28:55.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Jan 20 14:28:55 compute-0 nova_compute[250018]: 2026-01-20 14:28:55.813 250022 DEBUG nova.network.neutron [None req-3e6ecc77-61de-4099-8bb3-ee2d276c7579 f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Updating instance_info_cache with network_info: [{"id": "bd002580-dd95-49e1-bc34-e85f86272a05", "address": "fa:16:3e:24:ce:0d", "network": {"id": "14f18b27-1594-48d8-a08b-a930f7adbc08", "bridge": "br-int", "label": "tempest-LiveMigrationTest-2126108622-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d15f60b9e48e4175b5520d1e57ed2d3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd002580-dd", "ovs_interfaceid": "bd002580-dd95-49e1-bc34-e85f86272a05", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:28:55 compute-0 nova_compute[250018]: 2026-01-20 14:28:55.838 250022 DEBUG oslo_concurrency.lockutils [None req-3e6ecc77-61de-4099-8bb3-ee2d276c7579 f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Releasing lock "refresh_cache-79b5596e-43c9-4085-9829-454fecf59490" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:28:55 compute-0 nova_compute[250018]: 2026-01-20 14:28:55.841 250022 DEBUG os_brick.utils [None req-3e6ecc77-61de-4099-8bb3-ee2d276c7579 f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Jan 20 14:28:55 compute-0 nova_compute[250018]: 2026-01-20 14:28:55.843 268150 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:28:55 compute-0 nova_compute[250018]: 2026-01-20 14:28:55.864 268150 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.021s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:28:55 compute-0 nova_compute[250018]: 2026-01-20 14:28:55.864 268150 DEBUG oslo.privsep.daemon [-] privsep: reply[6f4ad567-f832-4eb5-8b65-c0b5ab7bdf46]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:28:55 compute-0 nova_compute[250018]: 2026-01-20 14:28:55.867 268150 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:28:55 compute-0 nova_compute[250018]: 2026-01-20 14:28:55.882 268150 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:28:55 compute-0 nova_compute[250018]: 2026-01-20 14:28:55.882 268150 DEBUG oslo.privsep.daemon [-] privsep: reply[c250f408-e7ce-4ca9-9c60-9b3dd38a8c58]: (4, ('InitiatorName=iqn.1994-05.com.redhat:228389a1f17e', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:28:55 compute-0 nova_compute[250018]: 2026-01-20 14:28:55.885 268150 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:28:55 compute-0 nova_compute[250018]: 2026-01-20 14:28:55.900 268150 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:28:55 compute-0 nova_compute[250018]: 2026-01-20 14:28:55.900 268150 DEBUG oslo.privsep.daemon [-] privsep: reply[69779a70-7938-46ce-935d-5702c46c9826]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:28:55 compute-0 nova_compute[250018]: 2026-01-20 14:28:55.902 268150 DEBUG oslo.privsep.daemon [-] privsep: reply[c56ca85c-6be5-4625-84a6-508700442c86]: (4, '35085f33-1a27-41e3-805d-02c7ac6a1d7f') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:28:55 compute-0 nova_compute[250018]: 2026-01-20 14:28:55.902 250022 DEBUG oslo_concurrency.processutils [None req-3e6ecc77-61de-4099-8bb3-ee2d276c7579 f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:28:55 compute-0 nova_compute[250018]: 2026-01-20 14:28:55.948 250022 DEBUG oslo_concurrency.processutils [None req-3e6ecc77-61de-4099-8bb3-ee2d276c7579 f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] CMD "nvme version" returned: 0 in 0.045s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:28:55 compute-0 nova_compute[250018]: 2026-01-20 14:28:55.950 250022 DEBUG os_brick.initiator.connectors.lightos [None req-3e6ecc77-61de-4099-8bb3-ee2d276c7579 f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Jan 20 14:28:55 compute-0 nova_compute[250018]: 2026-01-20 14:28:55.951 250022 DEBUG os_brick.initiator.connectors.lightos [None req-3e6ecc77-61de-4099-8bb3-ee2d276c7579 f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Jan 20 14:28:55 compute-0 nova_compute[250018]: 2026-01-20 14:28:55.951 250022 DEBUG os_brick.initiator.connectors.lightos [None req-3e6ecc77-61de-4099-8bb3-ee2d276c7579 f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Jan 20 14:28:55 compute-0 nova_compute[250018]: 2026-01-20 14:28:55.951 250022 DEBUG os_brick.utils [None req-3e6ecc77-61de-4099-8bb3-ee2d276c7579 f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] <== get_connector_properties: return (110ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:228389a1f17e', 'do_local_attach': False, 'nvme_hostid': '5350774e-8b5e-4dba-80a9-92d405981c1d', 'system uuid': '35085f33-1a27-41e3-805d-02c7ac6a1d7f', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Jan 20 14:28:55 compute-0 nova_compute[250018]: 2026-01-20 14:28:55.978 250022 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768919320.977372, 79b5596e-43c9-4085-9829-454fecf59490 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:28:55 compute-0 nova_compute[250018]: 2026-01-20 14:28:55.978 250022 INFO nova.compute.manager [-] [instance: 79b5596e-43c9-4085-9829-454fecf59490] VM Stopped (Lifecycle Event)
Jan 20 14:28:55 compute-0 nova_compute[250018]: 2026-01-20 14:28:55.997 250022 DEBUG nova.compute.manager [None req-e445104f-f2d4-4ff5-882c-b116feb93ef9 - - - - - -] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:28:56 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:28:56 compute-0 ceph-mon[74360]: pgmap v1141: 321 pgs: 321 active+clean; 327 MiB data, 503 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.8 MiB/s wr, 238 op/s
Jan 20 14:28:56 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:28:56 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:28:56 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:28:56.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:28:57 compute-0 sudo[268915]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:28:57 compute-0 sudo[268915]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:28:57 compute-0 sudo[268915]: pam_unix(sudo:session): session closed for user root
Jan 20 14:28:57 compute-0 sudo[268940]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:28:57 compute-0 sudo[268940]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:28:57 compute-0 sudo[268940]: pam_unix(sudo:session): session closed for user root
Jan 20 14:28:57 compute-0 sudo[268965]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:28:57 compute-0 sudo[268965]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:28:57 compute-0 sudo[268965]: pam_unix(sudo:session): session closed for user root
Jan 20 14:28:57 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1142: 321 pgs: 321 active+clean; 327 MiB data, 508 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 804 KiB/s wr, 226 op/s
Jan 20 14:28:57 compute-0 sudo[268990]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 14:28:57 compute-0 sudo[268990]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:28:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 14:28:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:28:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 14:28:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:28:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:28:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:28:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:28:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:28:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:28:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:28:57 compute-0 nova_compute[250018]: 2026-01-20 14:28:57.579 250022 DEBUG nova.virt.libvirt.driver [None req-3e6ecc77-61de-4099-8bb3-ee2d276c7579 f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] migrate_data in pre_live_migration: LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp27yeses5',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='79b5596e-43c9-4085-9829-454fecf59490',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids={47e883f3-6efe-40b3-be28-6c01525dfc0c='636146ab-4bc6-4c21-9609-7755eb208c7c'},serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10827
Jan 20 14:28:57 compute-0 nova_compute[250018]: 2026-01-20 14:28:57.580 250022 DEBUG nova.virt.libvirt.driver [None req-3e6ecc77-61de-4099-8bb3-ee2d276c7579 f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Creating instance directory: /var/lib/nova/instances/79b5596e-43c9-4085-9829-454fecf59490 pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10840
Jan 20 14:28:57 compute-0 nova_compute[250018]: 2026-01-20 14:28:57.580 250022 DEBUG nova.virt.libvirt.driver [None req-3e6ecc77-61de-4099-8bb3-ee2d276c7579 f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Ensure instance console log exists: /var/lib/nova/instances/79b5596e-43c9-4085-9829-454fecf59490/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 14:28:57 compute-0 nova_compute[250018]: 2026-01-20 14:28:57.581 250022 DEBUG nova.virt.libvirt.driver [None req-3e6ecc77-61de-4099-8bb3-ee2d276c7579 f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Connecting volumes before live migration. pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10901
Jan 20 14:28:57 compute-0 nova_compute[250018]: 2026-01-20 14:28:57.585 250022 DEBUG nova.virt.libvirt.driver [None req-3e6ecc77-61de-4099-8bb3-ee2d276c7579 f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Plugging VIFs using destination host port bindings before live migration. _pre_live_migration_plug_vifs /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10794
Jan 20 14:28:57 compute-0 nova_compute[250018]: 2026-01-20 14:28:57.586 250022 DEBUG nova.virt.libvirt.vif [None req-3e6ecc77-61de-4099-8bb3-ee2d276c7579 f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-20T14:28:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-1483268234',display_name='tempest-LiveMigrationTest-server-1483268234',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-livemigrationtest-server-1483268234',id=23,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:28:24Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='d15f60b9e48e4175b5520d1e57ed2d3a',ramdisk_id='',reservation_id='r-jglb1q09',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-LiveMigrationTest-864280704',owner_user_name='tempest-LiveMigrationTest-864280704-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T14:28:47Z,user_data=None,user_id='bce7fcbd19554e29bb80c5b93b7dd3c9',uuid=79b5596e-43c9-4085-9829-454fecf59490,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bd002580-dd95-49e1-bc34-e85f86272a05", "address": "fa:16:3e:24:ce:0d", "network": {"id": "14f18b27-1594-48d8-a08b-a930f7adbc08", "bridge": "br-int", "label": "tempest-LiveMigrationTest-2126108622-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d15f60b9e48e4175b5520d1e57ed2d3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tapbd002580-dd", "ovs_interfaceid": "bd002580-dd95-49e1-bc34-e85f86272a05", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 14:28:57 compute-0 nova_compute[250018]: 2026-01-20 14:28:57.587 250022 DEBUG nova.network.os_vif_util [None req-3e6ecc77-61de-4099-8bb3-ee2d276c7579 f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Converting VIF {"id": "bd002580-dd95-49e1-bc34-e85f86272a05", "address": "fa:16:3e:24:ce:0d", "network": {"id": "14f18b27-1594-48d8-a08b-a930f7adbc08", "bridge": "br-int", "label": "tempest-LiveMigrationTest-2126108622-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d15f60b9e48e4175b5520d1e57ed2d3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tapbd002580-dd", "ovs_interfaceid": "bd002580-dd95-49e1-bc34-e85f86272a05", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:28:57 compute-0 nova_compute[250018]: 2026-01-20 14:28:57.587 250022 DEBUG nova.network.os_vif_util [None req-3e6ecc77-61de-4099-8bb3-ee2d276c7579 f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:24:ce:0d,bridge_name='br-int',has_traffic_filtering=True,id=bd002580-dd95-49e1-bc34-e85f86272a05,network=Network(14f18b27-1594-48d8-a08b-a930f7adbc08),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbd002580-dd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:28:57 compute-0 nova_compute[250018]: 2026-01-20 14:28:57.588 250022 DEBUG os_vif [None req-3e6ecc77-61de-4099-8bb3-ee2d276c7579 f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:24:ce:0d,bridge_name='br-int',has_traffic_filtering=True,id=bd002580-dd95-49e1-bc34-e85f86272a05,network=Network(14f18b27-1594-48d8-a08b-a930f7adbc08),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbd002580-dd') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 14:28:57 compute-0 nova_compute[250018]: 2026-01-20 14:28:57.588 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:28:57 compute-0 nova_compute[250018]: 2026-01-20 14:28:57.589 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:28:57 compute-0 nova_compute[250018]: 2026-01-20 14:28:57.589 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:28:57 compute-0 nova_compute[250018]: 2026-01-20 14:28:57.591 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:28:57 compute-0 nova_compute[250018]: 2026-01-20 14:28:57.592 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbd002580-dd, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:28:57 compute-0 nova_compute[250018]: 2026-01-20 14:28:57.592 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbd002580-dd, col_values=(('external_ids', {'iface-id': 'bd002580-dd95-49e1-bc34-e85f86272a05', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:24:ce:0d', 'vm-uuid': '79b5596e-43c9-4085-9829-454fecf59490'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:28:57 compute-0 nova_compute[250018]: 2026-01-20 14:28:57.593 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:28:57 compute-0 NetworkManager[48960]: <info>  [1768919337.5947] manager: (tapbd002580-dd): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/52)
Jan 20 14:28:57 compute-0 nova_compute[250018]: 2026-01-20 14:28:57.596 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:28:57 compute-0 nova_compute[250018]: 2026-01-20 14:28:57.599 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:28:57 compute-0 nova_compute[250018]: 2026-01-20 14:28:57.600 250022 INFO os_vif [None req-3e6ecc77-61de-4099-8bb3-ee2d276c7579 f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:24:ce:0d,bridge_name='br-int',has_traffic_filtering=True,id=bd002580-dd95-49e1-bc34-e85f86272a05,network=Network(14f18b27-1594-48d8-a08b-a930f7adbc08),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbd002580-dd')
Jan 20 14:28:57 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 20 14:28:57 compute-0 nova_compute[250018]: 2026-01-20 14:28:57.602 250022 DEBUG nova.virt.libvirt.driver [None req-3e6ecc77-61de-4099-8bb3-ee2d276c7579 f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] No dst_numa_info in migrate_data, no cores to power up in pre_live_migration. pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10954
Jan 20 14:28:57 compute-0 nova_compute[250018]: 2026-01-20 14:28:57.602 250022 DEBUG nova.compute.manager [None req-3e6ecc77-61de-4099-8bb3-ee2d276c7579 f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] driver pre_live_migration data is LibvirtLiveMigrateData(bdms=[LibvirtLiveMigrateBDMInfo],block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp27yeses5',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='79b5596e-43c9-4085-9829-454fecf59490',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids={47e883f3-6efe-40b3-be28-6c01525dfc0c='636146ab-4bc6-4c21-9609-7755eb208c7c'},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8668
Jan 20 14:28:57 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 20 14:28:57 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/731545174' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:28:57 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:28:57 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:28:57 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:28:57.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:28:57 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:28:57 compute-0 sudo[268990]: pam_unix(sudo:session): session closed for user root
Jan 20 14:28:57 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 20 14:28:57 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:28:57 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 20 14:28:57 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:28:57 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Jan 20 14:28:57 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 20 14:28:57 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:28:58 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:28:58 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:28:58 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:28:58.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:28:58 compute-0 ceph-mon[74360]: pgmap v1142: 321 pgs: 321 active+clean; 327 MiB data, 508 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 804 KiB/s wr, 226 op/s
Jan 20 14:28:58 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:28:58 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:28:58 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:28:58 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 20 14:28:58 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:28:58 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Jan 20 14:28:58 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 20 14:28:59 compute-0 sudo[269049]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:28:59 compute-0 sudo[269049]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:28:59 compute-0 sudo[269049]: pam_unix(sudo:session): session closed for user root
Jan 20 14:28:59 compute-0 sudo[269074]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:28:59 compute-0 sudo[269074]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:28:59 compute-0 sudo[269074]: pam_unix(sudo:session): session closed for user root
Jan 20 14:28:59 compute-0 nova_compute[250018]: 2026-01-20 14:28:59.229 250022 DEBUG nova.network.neutron [None req-3e6ecc77-61de-4099-8bb3-ee2d276c7579 f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Port bd002580-dd95-49e1-bc34-e85f86272a05 updated with migration profile {'os_vif_delegation': True, 'migrating_to': 'compute-0.ctlplane.example.com'} successfully _setup_migration_port_profile /usr/lib/python3.9/site-packages/nova/network/neutron.py:354
Jan 20 14:28:59 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1143: 321 pgs: 321 active+clean; 329 MiB data, 509 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 144 KiB/s wr, 195 op/s
Jan 20 14:28:59 compute-0 nova_compute[250018]: 2026-01-20 14:28:59.689 250022 DEBUG nova.compute.manager [None req-3e6ecc77-61de-4099-8bb3-ee2d276c7579 f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] pre_live_migration result data is LibvirtLiveMigrateData(bdms=[LibvirtLiveMigrateBDMInfo],block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp27yeses5',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='79b5596e-43c9-4085-9829-454fecf59490',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids={47e883f3-6efe-40b3-be28-6c01525dfc0c='636146ab-4bc6-4c21-9609-7755eb208c7c'},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8723
Jan 20 14:28:59 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 20 14:28:59 compute-0 ceph-mon[74360]: pgmap v1143: 321 pgs: 321 active+clean; 329 MiB data, 509 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 144 KiB/s wr, 195 op/s
Jan 20 14:28:59 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:28:59 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:28:59 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:28:59.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:28:59 compute-0 systemd[1]: Starting libvirt proxy daemon...
Jan 20 14:28:59 compute-0 systemd[1]: Started libvirt proxy daemon.
Jan 20 14:28:59 compute-0 podman[269100]: 2026-01-20 14:28:59.915809487 +0000 UTC m=+0.071836072 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS)
Jan 20 14:28:59 compute-0 podman[269099]: 2026-01-20 14:28:59.94790614 +0000 UTC m=+0.107604393 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller)
Jan 20 14:28:59 compute-0 kernel: tapbd002580-dd: entered promiscuous mode
Jan 20 14:28:59 compute-0 NetworkManager[48960]: <info>  [1768919339.9984] manager: (tapbd002580-dd): new Tun device (/org/freedesktop/NetworkManager/Devices/53)
Jan 20 14:28:59 compute-0 ovn_controller[148666]: 2026-01-20T14:28:59Z|00083|binding|INFO|Claiming lport bd002580-dd95-49e1-bc34-e85f86272a05 for this additional chassis.
Jan 20 14:28:59 compute-0 ovn_controller[148666]: 2026-01-20T14:28:59Z|00084|binding|INFO|bd002580-dd95-49e1-bc34-e85f86272a05: Claiming fa:16:3e:24:ce:0d 10.100.0.10
Jan 20 14:28:59 compute-0 nova_compute[250018]: 2026-01-20 14:28:59.999 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:29:00 compute-0 ovn_controller[148666]: 2026-01-20T14:29:00Z|00085|binding|INFO|Setting lport bd002580-dd95-49e1-bc34-e85f86272a05 ovn-installed in OVS
Jan 20 14:29:00 compute-0 nova_compute[250018]: 2026-01-20 14:29:00.049 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:29:00 compute-0 systemd-udevd[269176]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:29:00 compute-0 systemd-machined[216401]: New machine qemu-11-instance-00000017.
Jan 20 14:29:00 compute-0 NetworkManager[48960]: <info>  [1768919340.0705] device (tapbd002580-dd): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 14:29:00 compute-0 NetworkManager[48960]: <info>  [1768919340.0726] device (tapbd002580-dd): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 14:29:00 compute-0 systemd[1]: Started Virtual Machine qemu-11-instance-00000017.
Jan 20 14:29:00 compute-0 nova_compute[250018]: 2026-01-20 14:29:00.405 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:29:00 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:29:00 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:29:00 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:29:00.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:29:01 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:29:01 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1144: 321 pgs: 321 active+clean; 360 MiB data, 516 MiB used, 20 GiB / 21 GiB avail; 841 KiB/s rd, 2.1 MiB/s wr, 226 op/s
Jan 20 14:29:01 compute-0 nova_compute[250018]: 2026-01-20 14:29:01.354 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768919341.353808, 79b5596e-43c9-4085-9829-454fecf59490 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:29:01 compute-0 nova_compute[250018]: 2026-01-20 14:29:01.355 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 79b5596e-43c9-4085-9829-454fecf59490] VM Started (Lifecycle Event)
Jan 20 14:29:01 compute-0 nova_compute[250018]: 2026-01-20 14:29:01.380 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:29:01 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 20 14:29:01 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:29:01 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 20 14:29:01 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:29:01 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 14:29:01 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:29:01 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 20 14:29:01 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:29:01 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 20 14:29:01 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:29:01 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 013d3049-1a69-435d-9b9e-bb1fa32827c6 does not exist
Jan 20 14:29:01 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 88389bc1-f6fe-48f6-be9a-9e7a7d475591 does not exist
Jan 20 14:29:01 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 44029572-1ad6-4b35-9ba8-3bdf7fd9d7b4 does not exist
Jan 20 14:29:01 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 20 14:29:01 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:29:01 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 20 14:29:01 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:29:01 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 14:29:01 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:29:01 compute-0 sudo[269229]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:29:01 compute-0 sudo[269229]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:29:01 compute-0 sudo[269229]: pam_unix(sudo:session): session closed for user root
Jan 20 14:29:01 compute-0 sudo[269254]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:29:01 compute-0 sudo[269254]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:29:01 compute-0 sudo[269254]: pam_unix(sudo:session): session closed for user root
Jan 20 14:29:01 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:29:01 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:29:01 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:29:01.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:29:01 compute-0 sudo[269279]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:29:01 compute-0 sudo[269279]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:29:01 compute-0 sudo[269279]: pam_unix(sudo:session): session closed for user root
Jan 20 14:29:01 compute-0 sudo[269304]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 14:29:01 compute-0 sudo[269304]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:29:01 compute-0 nova_compute[250018]: 2026-01-20 14:29:01.904 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768919341.9046264, 79b5596e-43c9-4085-9829-454fecf59490 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:29:01 compute-0 nova_compute[250018]: 2026-01-20 14:29:01.906 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 79b5596e-43c9-4085-9829-454fecf59490] VM Resumed (Lifecycle Event)
Jan 20 14:29:01 compute-0 nova_compute[250018]: 2026-01-20 14:29:01.931 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:29:01 compute-0 nova_compute[250018]: 2026-01-20 14:29:01.938 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: migrating, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:29:01 compute-0 nova_compute[250018]: 2026-01-20 14:29:01.964 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 79b5596e-43c9-4085-9829-454fecf59490] During the sync_power process the instance has moved from host compute-1.ctlplane.example.com to host compute-0.ctlplane.example.com
Jan 20 14:29:02 compute-0 podman[269371]: 2026-01-20 14:29:02.180907075 +0000 UTC m=+0.056934781 container create 0a673b1ddbe32a13aee1cdb529d9eab6d4eda2e6914e67091058a2be5934673b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_wozniak, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 20 14:29:02 compute-0 systemd[1]: Started libpod-conmon-0a673b1ddbe32a13aee1cdb529d9eab6d4eda2e6914e67091058a2be5934673b.scope.
Jan 20 14:29:02 compute-0 podman[269371]: 2026-01-20 14:29:02.156702035 +0000 UTC m=+0.032729771 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:29:02 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:29:02 compute-0 podman[269371]: 2026-01-20 14:29:02.279077014 +0000 UTC m=+0.155104810 container init 0a673b1ddbe32a13aee1cdb529d9eab6d4eda2e6914e67091058a2be5934673b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_wozniak, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 20 14:29:02 compute-0 podman[269371]: 2026-01-20 14:29:02.287061449 +0000 UTC m=+0.163089175 container start 0a673b1ddbe32a13aee1cdb529d9eab6d4eda2e6914e67091058a2be5934673b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_wozniak, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3)
Jan 20 14:29:02 compute-0 podman[269371]: 2026-01-20 14:29:02.29193153 +0000 UTC m=+0.167959266 container attach 0a673b1ddbe32a13aee1cdb529d9eab6d4eda2e6914e67091058a2be5934673b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_wozniak, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 20 14:29:02 compute-0 tender_wozniak[269388]: 167 167
Jan 20 14:29:02 compute-0 systemd[1]: libpod-0a673b1ddbe32a13aee1cdb529d9eab6d4eda2e6914e67091058a2be5934673b.scope: Deactivated successfully.
Jan 20 14:29:02 compute-0 podman[269371]: 2026-01-20 14:29:02.296962455 +0000 UTC m=+0.172990161 container died 0a673b1ddbe32a13aee1cdb529d9eab6d4eda2e6914e67091058a2be5934673b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_wozniak, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 20 14:29:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-7e018f74d0e599963a91a9dd8414fff50e7cb3b1070fa8c652d7c187db94ed87-merged.mount: Deactivated successfully.
Jan 20 14:29:02 compute-0 podman[269371]: 2026-01-20 14:29:02.349457837 +0000 UTC m=+0.225485573 container remove 0a673b1ddbe32a13aee1cdb529d9eab6d4eda2e6914e67091058a2be5934673b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_wozniak, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 20 14:29:02 compute-0 systemd[1]: libpod-conmon-0a673b1ddbe32a13aee1cdb529d9eab6d4eda2e6914e67091058a2be5934673b.scope: Deactivated successfully.
Jan 20 14:29:02 compute-0 ceph-mon[74360]: pgmap v1144: 321 pgs: 321 active+clean; 360 MiB data, 516 MiB used, 20 GiB / 21 GiB avail; 841 KiB/s rd, 2.1 MiB/s wr, 226 op/s
Jan 20 14:29:02 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:29:02 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:29:02 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:29:02 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:29:02 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:29:02 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:29:02 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:29:02 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:29:02 compute-0 podman[269412]: 2026-01-20 14:29:02.582790728 +0000 UTC m=+0.044522617 container create f4794e0dd502b0b21d2583713dc91398d1b5e15d46245548037f32aff70e27fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_khayyam, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef)
Jan 20 14:29:02 compute-0 nova_compute[250018]: 2026-01-20 14:29:02.595 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:29:02 compute-0 systemd[1]: Started libpod-conmon-f4794e0dd502b0b21d2583713dc91398d1b5e15d46245548037f32aff70e27fc.scope.
Jan 20 14:29:02 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:29:02 compute-0 podman[269412]: 2026-01-20 14:29:02.563320456 +0000 UTC m=+0.025052385 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:29:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0ec89480c0ab60b39c83451058fa17375f797c57acea6c5eb61139771833084/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:29:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0ec89480c0ab60b39c83451058fa17375f797c57acea6c5eb61139771833084/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:29:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0ec89480c0ab60b39c83451058fa17375f797c57acea6c5eb61139771833084/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:29:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0ec89480c0ab60b39c83451058fa17375f797c57acea6c5eb61139771833084/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:29:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0ec89480c0ab60b39c83451058fa17375f797c57acea6c5eb61139771833084/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 14:29:02 compute-0 podman[269412]: 2026-01-20 14:29:02.672461829 +0000 UTC m=+0.134193728 container init f4794e0dd502b0b21d2583713dc91398d1b5e15d46245548037f32aff70e27fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_khayyam, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True)
Jan 20 14:29:02 compute-0 podman[269412]: 2026-01-20 14:29:02.678494821 +0000 UTC m=+0.140226720 container start f4794e0dd502b0b21d2583713dc91398d1b5e15d46245548037f32aff70e27fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_khayyam, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:29:02 compute-0 podman[269412]: 2026-01-20 14:29:02.682331565 +0000 UTC m=+0.144063494 container attach f4794e0dd502b0b21d2583713dc91398d1b5e15d46245548037f32aff70e27fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_khayyam, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0)
Jan 20 14:29:02 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:29:02 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:29:02 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:29:02.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:29:03 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1145: 321 pgs: 321 active+clean; 360 MiB data, 516 MiB used, 20 GiB / 21 GiB avail; 320 KiB/s rd, 2.1 MiB/s wr, 179 op/s
Jan 20 14:29:03 compute-0 tender_khayyam[269427]: --> passed data devices: 0 physical, 1 LVM
Jan 20 14:29:03 compute-0 tender_khayyam[269427]: --> relative data size: 1.0
Jan 20 14:29:03 compute-0 tender_khayyam[269427]: --> All data devices are unavailable
Jan 20 14:29:03 compute-0 systemd[1]: libpod-f4794e0dd502b0b21d2583713dc91398d1b5e15d46245548037f32aff70e27fc.scope: Deactivated successfully.
Jan 20 14:29:03 compute-0 podman[269412]: 2026-01-20 14:29:03.49166153 +0000 UTC m=+0.953393419 container died f4794e0dd502b0b21d2583713dc91398d1b5e15d46245548037f32aff70e27fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_khayyam, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 20 14:29:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-d0ec89480c0ab60b39c83451058fa17375f797c57acea6c5eb61139771833084-merged.mount: Deactivated successfully.
Jan 20 14:29:03 compute-0 podman[269412]: 2026-01-20 14:29:03.561276361 +0000 UTC m=+1.023008240 container remove f4794e0dd502b0b21d2583713dc91398d1b5e15d46245548037f32aff70e27fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_khayyam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 20 14:29:03 compute-0 systemd[1]: libpod-conmon-f4794e0dd502b0b21d2583713dc91398d1b5e15d46245548037f32aff70e27fc.scope: Deactivated successfully.
Jan 20 14:29:03 compute-0 sudo[269304]: pam_unix(sudo:session): session closed for user root
Jan 20 14:29:03 compute-0 sudo[269455]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:29:03 compute-0 sudo[269455]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:29:03 compute-0 sudo[269455]: pam_unix(sudo:session): session closed for user root
Jan 20 14:29:03 compute-0 sudo[269480]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:29:03 compute-0 sudo[269480]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:29:03 compute-0 sudo[269480]: pam_unix(sudo:session): session closed for user root
Jan 20 14:29:03 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:29:03 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:29:03 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:29:03.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:29:03 compute-0 sudo[269505]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:29:03 compute-0 sudo[269505]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:29:03 compute-0 sudo[269505]: pam_unix(sudo:session): session closed for user root
Jan 20 14:29:03 compute-0 sudo[269530]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- lvm list --format json
Jan 20 14:29:03 compute-0 sudo[269530]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:29:04 compute-0 podman[269596]: 2026-01-20 14:29:04.310130341 +0000 UTC m=+0.055216275 container create e800369a9507a95c6477bdb7e07a46e3682faf139a41af4073af95a43cafdf37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_shockley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 20 14:29:04 compute-0 systemd[1]: Started libpod-conmon-e800369a9507a95c6477bdb7e07a46e3682faf139a41af4073af95a43cafdf37.scope.
Jan 20 14:29:04 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:29:04 compute-0 podman[269596]: 2026-01-20 14:29:04.385086627 +0000 UTC m=+0.130172581 container init e800369a9507a95c6477bdb7e07a46e3682faf139a41af4073af95a43cafdf37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_shockley, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:29:04 compute-0 podman[269596]: 2026-01-20 14:29:04.29407114 +0000 UTC m=+0.039157104 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:29:04 compute-0 podman[269596]: 2026-01-20 14:29:04.391488368 +0000 UTC m=+0.136574302 container start e800369a9507a95c6477bdb7e07a46e3682faf139a41af4073af95a43cafdf37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_shockley, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 14:29:04 compute-0 vibrant_shockley[269612]: 167 167
Jan 20 14:29:04 compute-0 podman[269596]: 2026-01-20 14:29:04.39565872 +0000 UTC m=+0.140744674 container attach e800369a9507a95c6477bdb7e07a46e3682faf139a41af4073af95a43cafdf37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_shockley, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 14:29:04 compute-0 systemd[1]: libpod-e800369a9507a95c6477bdb7e07a46e3682faf139a41af4073af95a43cafdf37.scope: Deactivated successfully.
Jan 20 14:29:04 compute-0 podman[269596]: 2026-01-20 14:29:04.397703606 +0000 UTC m=+0.142789570 container died e800369a9507a95c6477bdb7e07a46e3682faf139a41af4073af95a43cafdf37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_shockley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 20 14:29:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-ab8ce773b777ffe32f170ad21d2b2fc830651270e3f0026b3508b0ecd7206137-merged.mount: Deactivated successfully.
Jan 20 14:29:04 compute-0 podman[269596]: 2026-01-20 14:29:04.438928693 +0000 UTC m=+0.184014628 container remove e800369a9507a95c6477bdb7e07a46e3682faf139a41af4073af95a43cafdf37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_shockley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 20 14:29:04 compute-0 systemd[1]: libpod-conmon-e800369a9507a95c6477bdb7e07a46e3682faf139a41af4073af95a43cafdf37.scope: Deactivated successfully.
Jan 20 14:29:04 compute-0 ceph-mon[74360]: pgmap v1145: 321 pgs: 321 active+clean; 360 MiB data, 516 MiB used, 20 GiB / 21 GiB avail; 320 KiB/s rd, 2.1 MiB/s wr, 179 op/s
Jan 20 14:29:04 compute-0 podman[269638]: 2026-01-20 14:29:04.605506461 +0000 UTC m=+0.040133860 container create dbff509f6c2a66f7cfc6b16b53c2a126a820665d647d43ca81d85ce5133cfc59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_austin, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:29:04 compute-0 systemd[1]: Started libpod-conmon-dbff509f6c2a66f7cfc6b16b53c2a126a820665d647d43ca81d85ce5133cfc59.scope.
Jan 20 14:29:04 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:29:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b2e30600741f5b0c9656ac23aba113ebb597811e09e53c23cb1c50eaf15f97f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:29:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b2e30600741f5b0c9656ac23aba113ebb597811e09e53c23cb1c50eaf15f97f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:29:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b2e30600741f5b0c9656ac23aba113ebb597811e09e53c23cb1c50eaf15f97f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:29:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b2e30600741f5b0c9656ac23aba113ebb597811e09e53c23cb1c50eaf15f97f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:29:04 compute-0 podman[269638]: 2026-01-20 14:29:04.683278932 +0000 UTC m=+0.117906341 container init dbff509f6c2a66f7cfc6b16b53c2a126a820665d647d43ca81d85ce5133cfc59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_austin, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:29:04 compute-0 podman[269638]: 2026-01-20 14:29:04.590712783 +0000 UTC m=+0.025340202 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:29:04 compute-0 podman[269638]: 2026-01-20 14:29:04.689554181 +0000 UTC m=+0.124181580 container start dbff509f6c2a66f7cfc6b16b53c2a126a820665d647d43ca81d85ce5133cfc59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_austin, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 20 14:29:04 compute-0 podman[269638]: 2026-01-20 14:29:04.693828785 +0000 UTC m=+0.128456254 container attach dbff509f6c2a66f7cfc6b16b53c2a126a820665d647d43ca81d85ce5133cfc59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_austin, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:29:04 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:29:04 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:29:04 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:29:04.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:29:04 compute-0 ovn_controller[148666]: 2026-01-20T14:29:04Z|00086|binding|INFO|Claiming lport bd002580-dd95-49e1-bc34-e85f86272a05 for this chassis.
Jan 20 14:29:04 compute-0 ovn_controller[148666]: 2026-01-20T14:29:04Z|00087|binding|INFO|bd002580-dd95-49e1-bc34-e85f86272a05: Claiming fa:16:3e:24:ce:0d 10.100.0.10
Jan 20 14:29:04 compute-0 ovn_controller[148666]: 2026-01-20T14:29:04Z|00088|binding|INFO|Setting lport bd002580-dd95-49e1-bc34-e85f86272a05 up in Southbound
Jan 20 14:29:04 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:29:04.922 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:24:ce:0d 10.100.0.10'], port_security=['fa:16:3e:24:ce:0d 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '79b5596e-43c9-4085-9829-454fecf59490', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-14f18b27-1594-48d8-a08b-a930f7adbc08', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd15f60b9e48e4175b5520d1e57ed2d3a', 'neutron:revision_number': '21', 'neutron:security_group_ids': '6d729cfd-2f98-4ca5-a524-e543b12b3766', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=02983c41-bbec-48cf-910a-84fed1be783f, chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=bd002580-dd95-49e1-bc34-e85f86272a05) old=Port_Binding(up=[False], additional_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:29:04 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:29:04.923 160071 INFO neutron.agent.ovn.metadata.agent [-] Port bd002580-dd95-49e1-bc34-e85f86272a05 in datapath 14f18b27-1594-48d8-a08b-a930f7adbc08 bound to our chassis
Jan 20 14:29:04 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:29:04.924 160071 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 14f18b27-1594-48d8-a08b-a930f7adbc08
Jan 20 14:29:04 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:29:04.937 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[f5e80ea8-969e-41f3-a9db-b073683c777e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:29:04 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:29:04.938 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap14f18b27-11 in ovnmeta-14f18b27-1594-48d8-a08b-a930f7adbc08 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 14:29:04 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:29:04.940 257604 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap14f18b27-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 14:29:04 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:29:04.940 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[67741a9e-ac43-4ca5-92a9-3e6d642640bb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:29:04 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:29:04.941 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[e540fbed-ea31-4de4-9e9f-b0bb5a4f884e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:29:04 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:29:04.951 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[af887bae-bd85-4bd3-8fcd-b2b63ae0d8c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:29:04 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:29:04.960 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[c03d2bc5-3fec-4bc3-900d-b59824b97180]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:29:04 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:29:04.991 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[2bf3838c-0644-4a3c-b016-39b17ed8ec4c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:29:05 compute-0 NetworkManager[48960]: <info>  [1768919345.0071] manager: (tap14f18b27-10): new Veth device (/org/freedesktop/NetworkManager/Devices/54)
Jan 20 14:29:05 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:29:05.006 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[ef3bc28f-659a-4029-8145-159425c40b37]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:29:05 compute-0 systemd-udevd[269667]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:29:05 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:29:05.038 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[72131f5a-2af9-4fdf-8ce3-15754a51e362]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:29:05 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:29:05.042 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[50dda157-ab65-4367-bd14-b41ee371f21e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:29:05 compute-0 NetworkManager[48960]: <info>  [1768919345.0626] device (tap14f18b27-10): carrier: link connected
Jan 20 14:29:05 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:29:05.067 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[365d2fb7-0f5c-41bc-a40f-7d43995f5279]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:29:05 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:29:05.084 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[03d75ea7-208a-4ef3-8fc8-c91c0fc8d825]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap14f18b27-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7d:1f:17'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 29], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 528578, 'reachable_time': 38252, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 269687, 'error': None, 'target': 'ovnmeta-14f18b27-1594-48d8-a08b-a930f7adbc08', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:29:05 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:29:05.095 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[e2acea87-4333-4b64-bbc2-ad6994d576e4]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe7d:1f17'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 528578, 'tstamp': 528578}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 269688, 'error': None, 'target': 'ovnmeta-14f18b27-1594-48d8-a08b-a930f7adbc08', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:29:05 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:29:05.111 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[d3870710-d3a5-4e24-bb3a-e015bee2d88d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap14f18b27-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7d:1f:17'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 29], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 528578, 'reachable_time': 38252, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 269689, 'error': None, 'target': 'ovnmeta-14f18b27-1594-48d8-a08b-a930f7adbc08', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:29:05 compute-0 nova_compute[250018]: 2026-01-20 14:29:05.127 250022 INFO nova.compute.manager [None req-3e6ecc77-61de-4099-8bb3-ee2d276c7579 f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Post operation of migration started
Jan 20 14:29:05 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:29:05.135 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[9abd1e5b-6b10-4940-ad94-a7ff5cb01fc1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:29:05 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:29:05.185 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[c9d667bb-ef40-4465-a7ef-460dba853072]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:29:05 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:29:05.190 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap14f18b27-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:29:05 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:29:05.190 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:29:05 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:29:05.190 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap14f18b27-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:29:05 compute-0 nova_compute[250018]: 2026-01-20 14:29:05.192 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:29:05 compute-0 NetworkManager[48960]: <info>  [1768919345.1926] manager: (tap14f18b27-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/55)
Jan 20 14:29:05 compute-0 kernel: tap14f18b27-10: entered promiscuous mode
Jan 20 14:29:05 compute-0 nova_compute[250018]: 2026-01-20 14:29:05.194 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:29:05 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:29:05.201 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap14f18b27-10, col_values=(('external_ids', {'iface-id': 'aa1c73c5-9761-4457-acdc-9f93220f739f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:29:05 compute-0 nova_compute[250018]: 2026-01-20 14:29:05.201 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:29:05 compute-0 ovn_controller[148666]: 2026-01-20T14:29:05Z|00089|binding|INFO|Releasing lport aa1c73c5-9761-4457-acdc-9f93220f739f from this chassis (sb_readonly=0)
Jan 20 14:29:05 compute-0 nova_compute[250018]: 2026-01-20 14:29:05.202 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:29:05 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:29:05.212 160071 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/14f18b27-1594-48d8-a08b-a930f7adbc08.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/14f18b27-1594-48d8-a08b-a930f7adbc08.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 14:29:05 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:29:05.213 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[92c45b63-885c-4e3f-a362-26b3bd1433e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:29:05 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:29:05.215 160071 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 14:29:05 compute-0 ovn_metadata_agent[160049]: global
Jan 20 14:29:05 compute-0 ovn_metadata_agent[160049]:     log         /dev/log local0 debug
Jan 20 14:29:05 compute-0 ovn_metadata_agent[160049]:     log-tag     haproxy-metadata-proxy-14f18b27-1594-48d8-a08b-a930f7adbc08
Jan 20 14:29:05 compute-0 ovn_metadata_agent[160049]:     user        root
Jan 20 14:29:05 compute-0 ovn_metadata_agent[160049]:     group       root
Jan 20 14:29:05 compute-0 ovn_metadata_agent[160049]:     maxconn     1024
Jan 20 14:29:05 compute-0 ovn_metadata_agent[160049]:     pidfile     /var/lib/neutron/external/pids/14f18b27-1594-48d8-a08b-a930f7adbc08.pid.haproxy
Jan 20 14:29:05 compute-0 ovn_metadata_agent[160049]:     daemon
Jan 20 14:29:05 compute-0 ovn_metadata_agent[160049]: 
Jan 20 14:29:05 compute-0 ovn_metadata_agent[160049]: defaults
Jan 20 14:29:05 compute-0 ovn_metadata_agent[160049]:     log global
Jan 20 14:29:05 compute-0 ovn_metadata_agent[160049]:     mode http
Jan 20 14:29:05 compute-0 ovn_metadata_agent[160049]:     option httplog
Jan 20 14:29:05 compute-0 ovn_metadata_agent[160049]:     option dontlognull
Jan 20 14:29:05 compute-0 ovn_metadata_agent[160049]:     option http-server-close
Jan 20 14:29:05 compute-0 ovn_metadata_agent[160049]:     option forwardfor
Jan 20 14:29:05 compute-0 ovn_metadata_agent[160049]:     retries                 3
Jan 20 14:29:05 compute-0 ovn_metadata_agent[160049]:     timeout http-request    30s
Jan 20 14:29:05 compute-0 ovn_metadata_agent[160049]:     timeout connect         30s
Jan 20 14:29:05 compute-0 ovn_metadata_agent[160049]:     timeout client          32s
Jan 20 14:29:05 compute-0 ovn_metadata_agent[160049]:     timeout server          32s
Jan 20 14:29:05 compute-0 ovn_metadata_agent[160049]:     timeout http-keep-alive 30s
Jan 20 14:29:05 compute-0 ovn_metadata_agent[160049]: 
Jan 20 14:29:05 compute-0 ovn_metadata_agent[160049]: 
Jan 20 14:29:05 compute-0 ovn_metadata_agent[160049]: listen listener
Jan 20 14:29:05 compute-0 ovn_metadata_agent[160049]:     bind 169.254.169.254:80
Jan 20 14:29:05 compute-0 ovn_metadata_agent[160049]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 14:29:05 compute-0 ovn_metadata_agent[160049]:     http-request add-header X-OVN-Network-ID 14f18b27-1594-48d8-a08b-a930f7adbc08
Jan 20 14:29:05 compute-0 ovn_metadata_agent[160049]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 14:29:05 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:29:05.215 160071 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-14f18b27-1594-48d8-a08b-a930f7adbc08', 'env', 'PROCESS_TAG=haproxy-14f18b27-1594-48d8-a08b-a930f7adbc08', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/14f18b27-1594-48d8-a08b-a930f7adbc08.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 14:29:05 compute-0 nova_compute[250018]: 2026-01-20 14:29:05.223 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:29:05 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1146: 321 pgs: 321 active+clean; 360 MiB data, 517 MiB used, 20 GiB / 21 GiB avail; 353 KiB/s rd, 2.1 MiB/s wr, 190 op/s
Jan 20 14:29:05 compute-0 nova_compute[250018]: 2026-01-20 14:29:05.407 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:29:05 compute-0 kind_austin[269655]: {
Jan 20 14:29:05 compute-0 kind_austin[269655]:     "0": [
Jan 20 14:29:05 compute-0 kind_austin[269655]:         {
Jan 20 14:29:05 compute-0 kind_austin[269655]:             "devices": [
Jan 20 14:29:05 compute-0 kind_austin[269655]:                 "/dev/loop3"
Jan 20 14:29:05 compute-0 kind_austin[269655]:             ],
Jan 20 14:29:05 compute-0 kind_austin[269655]:             "lv_name": "ceph_lv0",
Jan 20 14:29:05 compute-0 kind_austin[269655]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:29:05 compute-0 kind_austin[269655]:             "lv_size": "7511998464",
Jan 20 14:29:05 compute-0 kind_austin[269655]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e399cf45-e6b6-5393-99f1-75c601d3f188,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afc3740a-ccee-46f8-b343-22648cc89376,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 20 14:29:05 compute-0 kind_austin[269655]:             "lv_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 14:29:05 compute-0 kind_austin[269655]:             "name": "ceph_lv0",
Jan 20 14:29:05 compute-0 kind_austin[269655]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:29:05 compute-0 kind_austin[269655]:             "tags": {
Jan 20 14:29:05 compute-0 kind_austin[269655]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:29:05 compute-0 kind_austin[269655]:                 "ceph.block_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 14:29:05 compute-0 kind_austin[269655]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 14:29:05 compute-0 kind_austin[269655]:                 "ceph.cluster_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 14:29:05 compute-0 kind_austin[269655]:                 "ceph.cluster_name": "ceph",
Jan 20 14:29:05 compute-0 kind_austin[269655]:                 "ceph.crush_device_class": "",
Jan 20 14:29:05 compute-0 kind_austin[269655]:                 "ceph.encrypted": "0",
Jan 20 14:29:05 compute-0 kind_austin[269655]:                 "ceph.osd_fsid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 14:29:05 compute-0 kind_austin[269655]:                 "ceph.osd_id": "0",
Jan 20 14:29:05 compute-0 kind_austin[269655]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 14:29:05 compute-0 kind_austin[269655]:                 "ceph.type": "block",
Jan 20 14:29:05 compute-0 kind_austin[269655]:                 "ceph.vdo": "0"
Jan 20 14:29:05 compute-0 kind_austin[269655]:             },
Jan 20 14:29:05 compute-0 kind_austin[269655]:             "type": "block",
Jan 20 14:29:05 compute-0 kind_austin[269655]:             "vg_name": "ceph_vg0"
Jan 20 14:29:05 compute-0 kind_austin[269655]:         }
Jan 20 14:29:05 compute-0 kind_austin[269655]:     ]
Jan 20 14:29:05 compute-0 kind_austin[269655]: }
Jan 20 14:29:05 compute-0 systemd[1]: libpod-dbff509f6c2a66f7cfc6b16b53c2a126a820665d647d43ca81d85ce5133cfc59.scope: Deactivated successfully.
Jan 20 14:29:05 compute-0 podman[269638]: 2026-01-20 14:29:05.455263434 +0000 UTC m=+0.889890883 container died dbff509f6c2a66f7cfc6b16b53c2a126a820665d647d43ca81d85ce5133cfc59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_austin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 20 14:29:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-0b2e30600741f5b0c9656ac23aba113ebb597811e09e53c23cb1c50eaf15f97f-merged.mount: Deactivated successfully.
Jan 20 14:29:05 compute-0 podman[269638]: 2026-01-20 14:29:05.518898674 +0000 UTC m=+0.953526073 container remove dbff509f6c2a66f7cfc6b16b53c2a126a820665d647d43ca81d85ce5133cfc59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_austin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:29:05 compute-0 systemd[1]: libpod-conmon-dbff509f6c2a66f7cfc6b16b53c2a126a820665d647d43ca81d85ce5133cfc59.scope: Deactivated successfully.
Jan 20 14:29:05 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e156 do_prune osdmap full prune enabled
Jan 20 14:29:05 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e157 e157: 3 total, 3 up, 3 in
Jan 20 14:29:05 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e157: 3 total, 3 up, 3 in
Jan 20 14:29:05 compute-0 sudo[269530]: pam_unix(sudo:session): session closed for user root
Jan 20 14:29:05 compute-0 sudo[269745]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:29:05 compute-0 sudo[269745]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:29:05 compute-0 sudo[269745]: pam_unix(sudo:session): session closed for user root
Jan 20 14:29:05 compute-0 podman[269739]: 2026-01-20 14:29:05.65117632 +0000 UTC m=+0.088086449 container create ffec0aa20c27b34dc55a0e5de6eaa2f6129d8ce027a2ff64ed9102b9d0c0c158 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-14f18b27-1594-48d8-a08b-a930f7adbc08, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 20 14:29:05 compute-0 podman[269739]: 2026-01-20 14:29:05.589013479 +0000 UTC m=+0.025923628 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 14:29:05 compute-0 nova_compute[250018]: 2026-01-20 14:29:05.687 250022 DEBUG oslo_concurrency.lockutils [None req-3e6ecc77-61de-4099-8bb3-ee2d276c7579 f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Acquiring lock "refresh_cache-79b5596e-43c9-4085-9829-454fecf59490" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:29:05 compute-0 nova_compute[250018]: 2026-01-20 14:29:05.687 250022 DEBUG oslo_concurrency.lockutils [None req-3e6ecc77-61de-4099-8bb3-ee2d276c7579 f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Acquired lock "refresh_cache-79b5596e-43c9-4085-9829-454fecf59490" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:29:05 compute-0 nova_compute[250018]: 2026-01-20 14:29:05.688 250022 DEBUG nova.network.neutron [None req-3e6ecc77-61de-4099-8bb3-ee2d276c7579 f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:29:05 compute-0 systemd[1]: Started libpod-conmon-ffec0aa20c27b34dc55a0e5de6eaa2f6129d8ce027a2ff64ed9102b9d0c0c158.scope.
Jan 20 14:29:05 compute-0 sudo[269776]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:29:05 compute-0 sudo[269776]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:29:05 compute-0 sudo[269776]: pam_unix(sudo:session): session closed for user root
Jan 20 14:29:05 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:29:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/550a0dd8c20b0a1bbc72aebf610897080d3eabc1704701de80624b86f106f0b0/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 14:29:05 compute-0 podman[269739]: 2026-01-20 14:29:05.739113864 +0000 UTC m=+0.176023953 container init ffec0aa20c27b34dc55a0e5de6eaa2f6129d8ce027a2ff64ed9102b9d0c0c158 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-14f18b27-1594-48d8-a08b-a930f7adbc08, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 20 14:29:05 compute-0 podman[269739]: 2026-01-20 14:29:05.744465538 +0000 UTC m=+0.181375617 container start ffec0aa20c27b34dc55a0e5de6eaa2f6129d8ce027a2ff64ed9102b9d0c0c158 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-14f18b27-1594-48d8-a08b-a930f7adbc08, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0)
Jan 20 14:29:05 compute-0 sudo[269808]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:29:05 compute-0 sudo[269808]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:29:05 compute-0 neutron-haproxy-ovnmeta-14f18b27-1594-48d8-a08b-a930f7adbc08[269804]: [NOTICE]   (269832) : New worker (269835) forked
Jan 20 14:29:05 compute-0 neutron-haproxy-ovnmeta-14f18b27-1594-48d8-a08b-a930f7adbc08[269804]: [NOTICE]   (269832) : Loading success.
Jan 20 14:29:05 compute-0 sudo[269808]: pam_unix(sudo:session): session closed for user root
Jan 20 14:29:05 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:29:05 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:29:05 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:29:05.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:29:05 compute-0 sudo[269845]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- raw list --format json
Jan 20 14:29:05 compute-0 sudo[269845]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:29:06 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:29:06 compute-0 podman[269911]: 2026-01-20 14:29:06.157561763 +0000 UTC m=+0.040688896 container create b4624c56b69e33241ad22b216cbb10eb73fe4c3fe0ed8be8efc0157d4b63fefa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_ramanujan, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 20 14:29:06 compute-0 systemd[1]: Started libpod-conmon-b4624c56b69e33241ad22b216cbb10eb73fe4c3fe0ed8be8efc0157d4b63fefa.scope.
Jan 20 14:29:06 compute-0 podman[269911]: 2026-01-20 14:29:06.142538739 +0000 UTC m=+0.025665892 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:29:06 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:29:06 compute-0 podman[269911]: 2026-01-20 14:29:06.261206999 +0000 UTC m=+0.144334212 container init b4624c56b69e33241ad22b216cbb10eb73fe4c3fe0ed8be8efc0157d4b63fefa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_ramanujan, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 14:29:06 compute-0 podman[269911]: 2026-01-20 14:29:06.268463934 +0000 UTC m=+0.151591097 container start b4624c56b69e33241ad22b216cbb10eb73fe4c3fe0ed8be8efc0157d4b63fefa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_ramanujan, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:29:06 compute-0 strange_ramanujan[269926]: 167 167
Jan 20 14:29:06 compute-0 systemd[1]: libpod-b4624c56b69e33241ad22b216cbb10eb73fe4c3fe0ed8be8efc0157d4b63fefa.scope: Deactivated successfully.
Jan 20 14:29:06 compute-0 podman[269911]: 2026-01-20 14:29:06.273193301 +0000 UTC m=+0.156320544 container attach b4624c56b69e33241ad22b216cbb10eb73fe4c3fe0ed8be8efc0157d4b63fefa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_ramanujan, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:29:06 compute-0 podman[269911]: 2026-01-20 14:29:06.273792257 +0000 UTC m=+0.156919430 container died b4624c56b69e33241ad22b216cbb10eb73fe4c3fe0ed8be8efc0157d4b63fefa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_ramanujan, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 20 14:29:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-6f016369a165704d682bab0f7cf65ffc143d1646f022cb6482ac4688ba1a6325-merged.mount: Deactivated successfully.
Jan 20 14:29:06 compute-0 podman[269911]: 2026-01-20 14:29:06.320493772 +0000 UTC m=+0.203620955 container remove b4624c56b69e33241ad22b216cbb10eb73fe4c3fe0ed8be8efc0157d4b63fefa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_ramanujan, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 20 14:29:06 compute-0 systemd[1]: libpod-conmon-b4624c56b69e33241ad22b216cbb10eb73fe4c3fe0ed8be8efc0157d4b63fefa.scope: Deactivated successfully.
Jan 20 14:29:06 compute-0 podman[269949]: 2026-01-20 14:29:06.543567218 +0000 UTC m=+0.076091036 container create eebebcc154a7142bf6942cb56e7271d669fb6ee373a2f3d70346fd36f966beba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_davinci, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 20 14:29:06 compute-0 ceph-mon[74360]: pgmap v1146: 321 pgs: 321 active+clean; 360 MiB data, 517 MiB used, 20 GiB / 21 GiB avail; 353 KiB/s rd, 2.1 MiB/s wr, 190 op/s
Jan 20 14:29:06 compute-0 ceph-mon[74360]: osdmap e157: 3 total, 3 up, 3 in
Jan 20 14:29:06 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/2168453940' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:29:06 compute-0 systemd[1]: Started libpod-conmon-eebebcc154a7142bf6942cb56e7271d669fb6ee373a2f3d70346fd36f966beba.scope.
Jan 20 14:29:06 compute-0 podman[269949]: 2026-01-20 14:29:06.513310416 +0000 UTC m=+0.045834294 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:29:06 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:29:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c76ef28a1e6878e03efb5243438bab8034823b6f8dc207421fa26e17f8049a99/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:29:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c76ef28a1e6878e03efb5243438bab8034823b6f8dc207421fa26e17f8049a99/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:29:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c76ef28a1e6878e03efb5243438bab8034823b6f8dc207421fa26e17f8049a99/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:29:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c76ef28a1e6878e03efb5243438bab8034823b6f8dc207421fa26e17f8049a99/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:29:06 compute-0 podman[269949]: 2026-01-20 14:29:06.656926506 +0000 UTC m=+0.189450294 container init eebebcc154a7142bf6942cb56e7271d669fb6ee373a2f3d70346fd36f966beba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_davinci, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:29:06 compute-0 podman[269949]: 2026-01-20 14:29:06.674227681 +0000 UTC m=+0.206751509 container start eebebcc154a7142bf6942cb56e7271d669fb6ee373a2f3d70346fd36f966beba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_davinci, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:29:06 compute-0 podman[269949]: 2026-01-20 14:29:06.6782839 +0000 UTC m=+0.210807708 container attach eebebcc154a7142bf6942cb56e7271d669fb6ee373a2f3d70346fd36f966beba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_davinci, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Jan 20 14:29:06 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:29:06 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:29:06 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:29:06.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:29:07 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1148: 321 pgs: 321 active+clean; 360 MiB data, 517 MiB used, 20 GiB / 21 GiB avail; 337 KiB/s rd, 2.6 MiB/s wr, 83 op/s
Jan 20 14:29:07 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/2585897243' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:29:07 compute-0 brave_davinci[269965]: {
Jan 20 14:29:07 compute-0 brave_davinci[269965]:     "afc3740a-ccee-46f8-b343-22648cc89376": {
Jan 20 14:29:07 compute-0 brave_davinci[269965]:         "ceph_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 14:29:07 compute-0 brave_davinci[269965]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 20 14:29:07 compute-0 brave_davinci[269965]:         "osd_id": 0,
Jan 20 14:29:07 compute-0 brave_davinci[269965]:         "osd_uuid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 14:29:07 compute-0 brave_davinci[269965]:         "type": "bluestore"
Jan 20 14:29:07 compute-0 brave_davinci[269965]:     }
Jan 20 14:29:07 compute-0 brave_davinci[269965]: }
Jan 20 14:29:07 compute-0 nova_compute[250018]: 2026-01-20 14:29:07.599 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:29:07 compute-0 systemd[1]: libpod-eebebcc154a7142bf6942cb56e7271d669fb6ee373a2f3d70346fd36f966beba.scope: Deactivated successfully.
Jan 20 14:29:07 compute-0 podman[269949]: 2026-01-20 14:29:07.631526604 +0000 UTC m=+1.164050422 container died eebebcc154a7142bf6942cb56e7271d669fb6ee373a2f3d70346fd36f966beba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_davinci, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 20 14:29:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-c76ef28a1e6878e03efb5243438bab8034823b6f8dc207421fa26e17f8049a99-merged.mount: Deactivated successfully.
Jan 20 14:29:07 compute-0 podman[269949]: 2026-01-20 14:29:07.703100878 +0000 UTC m=+1.235624676 container remove eebebcc154a7142bf6942cb56e7271d669fb6ee373a2f3d70346fd36f966beba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_davinci, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:29:07 compute-0 systemd[1]: libpod-conmon-eebebcc154a7142bf6942cb56e7271d669fb6ee373a2f3d70346fd36f966beba.scope: Deactivated successfully.
Jan 20 14:29:07 compute-0 sudo[269845]: pam_unix(sudo:session): session closed for user root
Jan 20 14:29:07 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 14:29:07 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:29:07 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:29:07 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:29:07.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:29:07 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:29:07 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 14:29:07 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:29:07 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev f637c0e7-9e92-4f0b-b9b9-4d9954883254 does not exist
Jan 20 14:29:07 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 02194872-d2cf-4b72-a172-990c22bb5c5a does not exist
Jan 20 14:29:07 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 92262e2a-6975-4a46-95c4-84b7cbbfeb95 does not exist
Jan 20 14:29:07 compute-0 nova_compute[250018]: 2026-01-20 14:29:07.885 250022 DEBUG nova.network.neutron [None req-3e6ecc77-61de-4099-8bb3-ee2d276c7579 f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Updating instance_info_cache with network_info: [{"id": "bd002580-dd95-49e1-bc34-e85f86272a05", "address": "fa:16:3e:24:ce:0d", "network": {"id": "14f18b27-1594-48d8-a08b-a930f7adbc08", "bridge": "br-int", "label": "tempest-LiveMigrationTest-2126108622-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d15f60b9e48e4175b5520d1e57ed2d3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd002580-dd", "ovs_interfaceid": "bd002580-dd95-49e1-bc34-e85f86272a05", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:29:07 compute-0 sudo[270002]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:29:07 compute-0 sudo[270002]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:29:07 compute-0 sudo[270002]: pam_unix(sudo:session): session closed for user root
Jan 20 14:29:07 compute-0 nova_compute[250018]: 2026-01-20 14:29:07.932 250022 DEBUG oslo_concurrency.lockutils [None req-3e6ecc77-61de-4099-8bb3-ee2d276c7579 f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Releasing lock "refresh_cache-79b5596e-43c9-4085-9829-454fecf59490" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:29:07 compute-0 nova_compute[250018]: 2026-01-20 14:29:07.958 250022 DEBUG oslo_concurrency.lockutils [None req-3e6ecc77-61de-4099-8bb3-ee2d276c7579 f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:29:07 compute-0 nova_compute[250018]: 2026-01-20 14:29:07.959 250022 DEBUG oslo_concurrency.lockutils [None req-3e6ecc77-61de-4099-8bb3-ee2d276c7579 f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:29:07 compute-0 nova_compute[250018]: 2026-01-20 14:29:07.959 250022 DEBUG oslo_concurrency.lockutils [None req-3e6ecc77-61de-4099-8bb3-ee2d276c7579 f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:29:07 compute-0 nova_compute[250018]: 2026-01-20 14:29:07.963 250022 INFO nova.virt.libvirt.driver [None req-3e6ecc77-61de-4099-8bb3-ee2d276c7579 f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Sending announce-self command to QEMU monitor. Attempt 1 of 3
Jan 20 14:29:07 compute-0 virtqemud[249565]: Domain id=11 name='instance-00000017' uuid=79b5596e-43c9-4085-9829-454fecf59490 is tainted: custom-monitor
Jan 20 14:29:07 compute-0 sudo[270027]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 14:29:07 compute-0 sudo[270027]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:29:07 compute-0 sudo[270027]: pam_unix(sudo:session): session closed for user root
Jan 20 14:29:08 compute-0 nova_compute[250018]: 2026-01-20 14:29:08.503 250022 DEBUG oslo_concurrency.lockutils [None req-ce9bb50c-3a9e-4722-bcfd-eca84afe2460 91c665fb46e847d4a6de7463b12a1d1d a1959aa8761e48ae9c708bd25a18f2fe - - default default] Acquiring lock "cc200085-11d4-4fbb-afb9-fa6186779430" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:29:08 compute-0 nova_compute[250018]: 2026-01-20 14:29:08.504 250022 DEBUG oslo_concurrency.lockutils [None req-ce9bb50c-3a9e-4722-bcfd-eca84afe2460 91c665fb46e847d4a6de7463b12a1d1d a1959aa8761e48ae9c708bd25a18f2fe - - default default] Lock "cc200085-11d4-4fbb-afb9-fa6186779430" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:29:08 compute-0 nova_compute[250018]: 2026-01-20 14:29:08.530 250022 DEBUG nova.compute.manager [None req-ce9bb50c-3a9e-4722-bcfd-eca84afe2460 91c665fb46e847d4a6de7463b12a1d1d a1959aa8761e48ae9c708bd25a18f2fe - - default default] [instance: cc200085-11d4-4fbb-afb9-fa6186779430] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 14:29:08 compute-0 ceph-mon[74360]: pgmap v1148: 321 pgs: 321 active+clean; 360 MiB data, 517 MiB used, 20 GiB / 21 GiB avail; 337 KiB/s rd, 2.6 MiB/s wr, 83 op/s
Jan 20 14:29:08 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:29:08 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:29:08 compute-0 nova_compute[250018]: 2026-01-20 14:29:08.613 250022 DEBUG oslo_concurrency.lockutils [None req-ce9bb50c-3a9e-4722-bcfd-eca84afe2460 91c665fb46e847d4a6de7463b12a1d1d a1959aa8761e48ae9c708bd25a18f2fe - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:29:08 compute-0 nova_compute[250018]: 2026-01-20 14:29:08.614 250022 DEBUG oslo_concurrency.lockutils [None req-ce9bb50c-3a9e-4722-bcfd-eca84afe2460 91c665fb46e847d4a6de7463b12a1d1d a1959aa8761e48ae9c708bd25a18f2fe - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:29:08 compute-0 nova_compute[250018]: 2026-01-20 14:29:08.623 250022 DEBUG nova.virt.hardware [None req-ce9bb50c-3a9e-4722-bcfd-eca84afe2460 91c665fb46e847d4a6de7463b12a1d1d a1959aa8761e48ae9c708bd25a18f2fe - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 14:29:08 compute-0 nova_compute[250018]: 2026-01-20 14:29:08.623 250022 INFO nova.compute.claims [None req-ce9bb50c-3a9e-4722-bcfd-eca84afe2460 91c665fb46e847d4a6de7463b12a1d1d a1959aa8761e48ae9c708bd25a18f2fe - - default default] [instance: cc200085-11d4-4fbb-afb9-fa6186779430] Claim successful on node compute-0.ctlplane.example.com
Jan 20 14:29:08 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:29:08 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:29:08 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:29:08.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:29:08 compute-0 nova_compute[250018]: 2026-01-20 14:29:08.752 250022 DEBUG oslo_concurrency.processutils [None req-ce9bb50c-3a9e-4722-bcfd-eca84afe2460 91c665fb46e847d4a6de7463b12a1d1d a1959aa8761e48ae9c708bd25a18f2fe - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:29:08 compute-0 nova_compute[250018]: 2026-01-20 14:29:08.971 250022 INFO nova.virt.libvirt.driver [None req-3e6ecc77-61de-4099-8bb3-ee2d276c7579 f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Sending announce-self command to QEMU monitor. Attempt 2 of 3
Jan 20 14:29:09 compute-0 ceph-osd[84815]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/lock/cls_lock.cc:291: Could not read list of current lockers off disk: (2) No such file or directory
Jan 20 14:29:09 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:29:09 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1324916243' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:29:09 compute-0 nova_compute[250018]: 2026-01-20 14:29:09.209 250022 DEBUG oslo_concurrency.processutils [None req-ce9bb50c-3a9e-4722-bcfd-eca84afe2460 91c665fb46e847d4a6de7463b12a1d1d a1959aa8761e48ae9c708bd25a18f2fe - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:29:09 compute-0 nova_compute[250018]: 2026-01-20 14:29:09.216 250022 DEBUG nova.compute.provider_tree [None req-ce9bb50c-3a9e-4722-bcfd-eca84afe2460 91c665fb46e847d4a6de7463b12a1d1d a1959aa8761e48ae9c708bd25a18f2fe - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:29:09 compute-0 nova_compute[250018]: 2026-01-20 14:29:09.237 250022 DEBUG nova.scheduler.client.report [None req-ce9bb50c-3a9e-4722-bcfd-eca84afe2460 91c665fb46e847d4a6de7463b12a1d1d a1959aa8761e48ae9c708bd25a18f2fe - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:29:09 compute-0 nova_compute[250018]: 2026-01-20 14:29:09.271 250022 DEBUG oslo_concurrency.lockutils [None req-ce9bb50c-3a9e-4722-bcfd-eca84afe2460 91c665fb46e847d4a6de7463b12a1d1d a1959aa8761e48ae9c708bd25a18f2fe - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.657s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:29:09 compute-0 nova_compute[250018]: 2026-01-20 14:29:09.272 250022 DEBUG nova.compute.manager [None req-ce9bb50c-3a9e-4722-bcfd-eca84afe2460 91c665fb46e847d4a6de7463b12a1d1d a1959aa8761e48ae9c708bd25a18f2fe - - default default] [instance: cc200085-11d4-4fbb-afb9-fa6186779430] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 14:29:09 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1149: 321 pgs: 321 active+clean; 360 MiB data, 517 MiB used, 20 GiB / 21 GiB avail; 917 KiB/s rd, 2.4 MiB/s wr, 97 op/s
Jan 20 14:29:09 compute-0 nova_compute[250018]: 2026-01-20 14:29:09.318 250022 DEBUG nova.compute.manager [None req-ce9bb50c-3a9e-4722-bcfd-eca84afe2460 91c665fb46e847d4a6de7463b12a1d1d a1959aa8761e48ae9c708bd25a18f2fe - - default default] [instance: cc200085-11d4-4fbb-afb9-fa6186779430] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 20 14:29:09 compute-0 nova_compute[250018]: 2026-01-20 14:29:09.319 250022 DEBUG nova.network.neutron [None req-ce9bb50c-3a9e-4722-bcfd-eca84afe2460 91c665fb46e847d4a6de7463b12a1d1d a1959aa8761e48ae9c708bd25a18f2fe - - default default] [instance: cc200085-11d4-4fbb-afb9-fa6186779430] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 14:29:09 compute-0 nova_compute[250018]: 2026-01-20 14:29:09.344 250022 INFO nova.virt.libvirt.driver [None req-ce9bb50c-3a9e-4722-bcfd-eca84afe2460 91c665fb46e847d4a6de7463b12a1d1d a1959aa8761e48ae9c708bd25a18f2fe - - default default] [instance: cc200085-11d4-4fbb-afb9-fa6186779430] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 14:29:09 compute-0 nova_compute[250018]: 2026-01-20 14:29:09.367 250022 DEBUG nova.compute.manager [None req-ce9bb50c-3a9e-4722-bcfd-eca84afe2460 91c665fb46e847d4a6de7463b12a1d1d a1959aa8761e48ae9c708bd25a18f2fe - - default default] [instance: cc200085-11d4-4fbb-afb9-fa6186779430] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 14:29:09 compute-0 nova_compute[250018]: 2026-01-20 14:29:09.456 250022 DEBUG nova.compute.manager [None req-ce9bb50c-3a9e-4722-bcfd-eca84afe2460 91c665fb46e847d4a6de7463b12a1d1d a1959aa8761e48ae9c708bd25a18f2fe - - default default] [instance: cc200085-11d4-4fbb-afb9-fa6186779430] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 14:29:09 compute-0 nova_compute[250018]: 2026-01-20 14:29:09.457 250022 DEBUG nova.virt.libvirt.driver [None req-ce9bb50c-3a9e-4722-bcfd-eca84afe2460 91c665fb46e847d4a6de7463b12a1d1d a1959aa8761e48ae9c708bd25a18f2fe - - default default] [instance: cc200085-11d4-4fbb-afb9-fa6186779430] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 14:29:09 compute-0 nova_compute[250018]: 2026-01-20 14:29:09.458 250022 INFO nova.virt.libvirt.driver [None req-ce9bb50c-3a9e-4722-bcfd-eca84afe2460 91c665fb46e847d4a6de7463b12a1d1d a1959aa8761e48ae9c708bd25a18f2fe - - default default] [instance: cc200085-11d4-4fbb-afb9-fa6186779430] Creating image(s)
Jan 20 14:29:09 compute-0 nova_compute[250018]: 2026-01-20 14:29:09.489 250022 DEBUG nova.storage.rbd_utils [None req-ce9bb50c-3a9e-4722-bcfd-eca84afe2460 91c665fb46e847d4a6de7463b12a1d1d a1959aa8761e48ae9c708bd25a18f2fe - - default default] rbd image cc200085-11d4-4fbb-afb9-fa6186779430_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:29:09 compute-0 nova_compute[250018]: 2026-01-20 14:29:09.517 250022 DEBUG nova.storage.rbd_utils [None req-ce9bb50c-3a9e-4722-bcfd-eca84afe2460 91c665fb46e847d4a6de7463b12a1d1d a1959aa8761e48ae9c708bd25a18f2fe - - default default] rbd image cc200085-11d4-4fbb-afb9-fa6186779430_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:29:09 compute-0 nova_compute[250018]: 2026-01-20 14:29:09.552 250022 DEBUG nova.storage.rbd_utils [None req-ce9bb50c-3a9e-4722-bcfd-eca84afe2460 91c665fb46e847d4a6de7463b12a1d1d a1959aa8761e48ae9c708bd25a18f2fe - - default default] rbd image cc200085-11d4-4fbb-afb9-fa6186779430_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:29:09 compute-0 nova_compute[250018]: 2026-01-20 14:29:09.557 250022 DEBUG oslo_concurrency.processutils [None req-ce9bb50c-3a9e-4722-bcfd-eca84afe2460 91c665fb46e847d4a6de7463b12a1d1d a1959aa8761e48ae9c708bd25a18f2fe - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:29:09 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/1324916243' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:29:09 compute-0 nova_compute[250018]: 2026-01-20 14:29:09.618 250022 DEBUG oslo_concurrency.processutils [None req-ce9bb50c-3a9e-4722-bcfd-eca84afe2460 91c665fb46e847d4a6de7463b12a1d1d a1959aa8761e48ae9c708bd25a18f2fe - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:29:09 compute-0 nova_compute[250018]: 2026-01-20 14:29:09.619 250022 DEBUG oslo_concurrency.lockutils [None req-ce9bb50c-3a9e-4722-bcfd-eca84afe2460 91c665fb46e847d4a6de7463b12a1d1d a1959aa8761e48ae9c708bd25a18f2fe - - default default] Acquiring lock "82d5c1918fd7c974214c7a48c1793a7a82560462" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:29:09 compute-0 nova_compute[250018]: 2026-01-20 14:29:09.620 250022 DEBUG oslo_concurrency.lockutils [None req-ce9bb50c-3a9e-4722-bcfd-eca84afe2460 91c665fb46e847d4a6de7463b12a1d1d a1959aa8761e48ae9c708bd25a18f2fe - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:29:09 compute-0 nova_compute[250018]: 2026-01-20 14:29:09.620 250022 DEBUG oslo_concurrency.lockutils [None req-ce9bb50c-3a9e-4722-bcfd-eca84afe2460 91c665fb46e847d4a6de7463b12a1d1d a1959aa8761e48ae9c708bd25a18f2fe - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:29:09 compute-0 nova_compute[250018]: 2026-01-20 14:29:09.641 250022 DEBUG nova.storage.rbd_utils [None req-ce9bb50c-3a9e-4722-bcfd-eca84afe2460 91c665fb46e847d4a6de7463b12a1d1d a1959aa8761e48ae9c708bd25a18f2fe - - default default] rbd image cc200085-11d4-4fbb-afb9-fa6186779430_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:29:09 compute-0 nova_compute[250018]: 2026-01-20 14:29:09.644 250022 DEBUG oslo_concurrency.processutils [None req-ce9bb50c-3a9e-4722-bcfd-eca84afe2460 91c665fb46e847d4a6de7463b12a1d1d a1959aa8761e48ae9c708bd25a18f2fe - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 cc200085-11d4-4fbb-afb9-fa6186779430_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:29:09 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:29:09 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:29:09 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:29:09.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:29:09 compute-0 nova_compute[250018]: 2026-01-20 14:29:09.807 250022 DEBUG nova.network.neutron [None req-ce9bb50c-3a9e-4722-bcfd-eca84afe2460 91c665fb46e847d4a6de7463b12a1d1d a1959aa8761e48ae9c708bd25a18f2fe - - default default] [instance: cc200085-11d4-4fbb-afb9-fa6186779430] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Jan 20 14:29:09 compute-0 nova_compute[250018]: 2026-01-20 14:29:09.808 250022 DEBUG nova.compute.manager [None req-ce9bb50c-3a9e-4722-bcfd-eca84afe2460 91c665fb46e847d4a6de7463b12a1d1d a1959aa8761e48ae9c708bd25a18f2fe - - default default] [instance: cc200085-11d4-4fbb-afb9-fa6186779430] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 20 14:29:09 compute-0 nova_compute[250018]: 2026-01-20 14:29:09.954 250022 DEBUG oslo_concurrency.processutils [None req-ce9bb50c-3a9e-4722-bcfd-eca84afe2460 91c665fb46e847d4a6de7463b12a1d1d a1959aa8761e48ae9c708bd25a18f2fe - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 cc200085-11d4-4fbb-afb9-fa6186779430_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.310s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:29:09 compute-0 nova_compute[250018]: 2026-01-20 14:29:09.988 250022 INFO nova.virt.libvirt.driver [None req-3e6ecc77-61de-4099-8bb3-ee2d276c7579 f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Sending announce-self command to QEMU monitor. Attempt 3 of 3
Jan 20 14:29:09 compute-0 nova_compute[250018]: 2026-01-20 14:29:09.994 250022 DEBUG nova.compute.manager [None req-3e6ecc77-61de-4099-8bb3-ee2d276c7579 f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:29:10 compute-0 nova_compute[250018]: 2026-01-20 14:29:10.050 250022 DEBUG nova.storage.rbd_utils [None req-ce9bb50c-3a9e-4722-bcfd-eca84afe2460 91c665fb46e847d4a6de7463b12a1d1d a1959aa8761e48ae9c708bd25a18f2fe - - default default] resizing rbd image cc200085-11d4-4fbb-afb9-fa6186779430_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 20 14:29:10 compute-0 nova_compute[250018]: 2026-01-20 14:29:10.113 250022 DEBUG nova.objects.instance [None req-3e6ecc77-61de-4099-8bb3-ee2d276c7579 f59120b8f4004c4fb57448db9dcaa6cd e22b29df381845278c7b679b17d11c8b - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Jan 20 14:29:10 compute-0 nova_compute[250018]: 2026-01-20 14:29:10.174 250022 DEBUG nova.objects.instance [None req-ce9bb50c-3a9e-4722-bcfd-eca84afe2460 91c665fb46e847d4a6de7463b12a1d1d a1959aa8761e48ae9c708bd25a18f2fe - - default default] Lazy-loading 'migration_context' on Instance uuid cc200085-11d4-4fbb-afb9-fa6186779430 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:29:10 compute-0 nova_compute[250018]: 2026-01-20 14:29:10.195 250022 DEBUG nova.virt.libvirt.driver [None req-ce9bb50c-3a9e-4722-bcfd-eca84afe2460 91c665fb46e847d4a6de7463b12a1d1d a1959aa8761e48ae9c708bd25a18f2fe - - default default] [instance: cc200085-11d4-4fbb-afb9-fa6186779430] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 14:29:10 compute-0 nova_compute[250018]: 2026-01-20 14:29:10.195 250022 DEBUG nova.virt.libvirt.driver [None req-ce9bb50c-3a9e-4722-bcfd-eca84afe2460 91c665fb46e847d4a6de7463b12a1d1d a1959aa8761e48ae9c708bd25a18f2fe - - default default] [instance: cc200085-11d4-4fbb-afb9-fa6186779430] Ensure instance console log exists: /var/lib/nova/instances/cc200085-11d4-4fbb-afb9-fa6186779430/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 14:29:10 compute-0 nova_compute[250018]: 2026-01-20 14:29:10.196 250022 DEBUG oslo_concurrency.lockutils [None req-ce9bb50c-3a9e-4722-bcfd-eca84afe2460 91c665fb46e847d4a6de7463b12a1d1d a1959aa8761e48ae9c708bd25a18f2fe - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:29:10 compute-0 nova_compute[250018]: 2026-01-20 14:29:10.196 250022 DEBUG oslo_concurrency.lockutils [None req-ce9bb50c-3a9e-4722-bcfd-eca84afe2460 91c665fb46e847d4a6de7463b12a1d1d a1959aa8761e48ae9c708bd25a18f2fe - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:29:10 compute-0 nova_compute[250018]: 2026-01-20 14:29:10.196 250022 DEBUG oslo_concurrency.lockutils [None req-ce9bb50c-3a9e-4722-bcfd-eca84afe2460 91c665fb46e847d4a6de7463b12a1d1d a1959aa8761e48ae9c708bd25a18f2fe - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:29:10 compute-0 nova_compute[250018]: 2026-01-20 14:29:10.198 250022 DEBUG nova.virt.libvirt.driver [None req-ce9bb50c-3a9e-4722-bcfd-eca84afe2460 91c665fb46e847d4a6de7463b12a1d1d a1959aa8761e48ae9c708bd25a18f2fe - - default default] [instance: cc200085-11d4-4fbb-afb9-fa6186779430] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encrypted': False, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'guest_format': None, 'size': 0, 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 14:29:10 compute-0 nova_compute[250018]: 2026-01-20 14:29:10.204 250022 WARNING nova.virt.libvirt.driver [None req-ce9bb50c-3a9e-4722-bcfd-eca84afe2460 91c665fb46e847d4a6de7463b12a1d1d a1959aa8761e48ae9c708bd25a18f2fe - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:29:10 compute-0 nova_compute[250018]: 2026-01-20 14:29:10.209 250022 DEBUG nova.virt.libvirt.host [None req-ce9bb50c-3a9e-4722-bcfd-eca84afe2460 91c665fb46e847d4a6de7463b12a1d1d a1959aa8761e48ae9c708bd25a18f2fe - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 14:29:10 compute-0 nova_compute[250018]: 2026-01-20 14:29:10.210 250022 DEBUG nova.virt.libvirt.host [None req-ce9bb50c-3a9e-4722-bcfd-eca84afe2460 91c665fb46e847d4a6de7463b12a1d1d a1959aa8761e48ae9c708bd25a18f2fe - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 14:29:10 compute-0 nova_compute[250018]: 2026-01-20 14:29:10.215 250022 DEBUG nova.virt.libvirt.host [None req-ce9bb50c-3a9e-4722-bcfd-eca84afe2460 91c665fb46e847d4a6de7463b12a1d1d a1959aa8761e48ae9c708bd25a18f2fe - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 14:29:10 compute-0 nova_compute[250018]: 2026-01-20 14:29:10.216 250022 DEBUG nova.virt.libvirt.host [None req-ce9bb50c-3a9e-4722-bcfd-eca84afe2460 91c665fb46e847d4a6de7463b12a1d1d a1959aa8761e48ae9c708bd25a18f2fe - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 14:29:10 compute-0 nova_compute[250018]: 2026-01-20 14:29:10.218 250022 DEBUG nova.virt.libvirt.driver [None req-ce9bb50c-3a9e-4722-bcfd-eca84afe2460 91c665fb46e847d4a6de7463b12a1d1d a1959aa8761e48ae9c708bd25a18f2fe - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 14:29:10 compute-0 nova_compute[250018]: 2026-01-20 14:29:10.218 250022 DEBUG nova.virt.hardware [None req-ce9bb50c-3a9e-4722-bcfd-eca84afe2460 91c665fb46e847d4a6de7463b12a1d1d a1959aa8761e48ae9c708bd25a18f2fe - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 14:29:10 compute-0 nova_compute[250018]: 2026-01-20 14:29:10.219 250022 DEBUG nova.virt.hardware [None req-ce9bb50c-3a9e-4722-bcfd-eca84afe2460 91c665fb46e847d4a6de7463b12a1d1d a1959aa8761e48ae9c708bd25a18f2fe - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 14:29:10 compute-0 nova_compute[250018]: 2026-01-20 14:29:10.219 250022 DEBUG nova.virt.hardware [None req-ce9bb50c-3a9e-4722-bcfd-eca84afe2460 91c665fb46e847d4a6de7463b12a1d1d a1959aa8761e48ae9c708bd25a18f2fe - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 14:29:10 compute-0 nova_compute[250018]: 2026-01-20 14:29:10.220 250022 DEBUG nova.virt.hardware [None req-ce9bb50c-3a9e-4722-bcfd-eca84afe2460 91c665fb46e847d4a6de7463b12a1d1d a1959aa8761e48ae9c708bd25a18f2fe - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 14:29:10 compute-0 nova_compute[250018]: 2026-01-20 14:29:10.220 250022 DEBUG nova.virt.hardware [None req-ce9bb50c-3a9e-4722-bcfd-eca84afe2460 91c665fb46e847d4a6de7463b12a1d1d a1959aa8761e48ae9c708bd25a18f2fe - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 14:29:10 compute-0 nova_compute[250018]: 2026-01-20 14:29:10.221 250022 DEBUG nova.virt.hardware [None req-ce9bb50c-3a9e-4722-bcfd-eca84afe2460 91c665fb46e847d4a6de7463b12a1d1d a1959aa8761e48ae9c708bd25a18f2fe - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 14:29:10 compute-0 nova_compute[250018]: 2026-01-20 14:29:10.221 250022 DEBUG nova.virt.hardware [None req-ce9bb50c-3a9e-4722-bcfd-eca84afe2460 91c665fb46e847d4a6de7463b12a1d1d a1959aa8761e48ae9c708bd25a18f2fe - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 14:29:10 compute-0 nova_compute[250018]: 2026-01-20 14:29:10.222 250022 DEBUG nova.virt.hardware [None req-ce9bb50c-3a9e-4722-bcfd-eca84afe2460 91c665fb46e847d4a6de7463b12a1d1d a1959aa8761e48ae9c708bd25a18f2fe - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 14:29:10 compute-0 nova_compute[250018]: 2026-01-20 14:29:10.222 250022 DEBUG nova.virt.hardware [None req-ce9bb50c-3a9e-4722-bcfd-eca84afe2460 91c665fb46e847d4a6de7463b12a1d1d a1959aa8761e48ae9c708bd25a18f2fe - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 14:29:10 compute-0 nova_compute[250018]: 2026-01-20 14:29:10.222 250022 DEBUG nova.virt.hardware [None req-ce9bb50c-3a9e-4722-bcfd-eca84afe2460 91c665fb46e847d4a6de7463b12a1d1d a1959aa8761e48ae9c708bd25a18f2fe - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 14:29:10 compute-0 nova_compute[250018]: 2026-01-20 14:29:10.223 250022 DEBUG nova.virt.hardware [None req-ce9bb50c-3a9e-4722-bcfd-eca84afe2460 91c665fb46e847d4a6de7463b12a1d1d a1959aa8761e48ae9c708bd25a18f2fe - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 14:29:10 compute-0 nova_compute[250018]: 2026-01-20 14:29:10.228 250022 DEBUG oslo_concurrency.processutils [None req-ce9bb50c-3a9e-4722-bcfd-eca84afe2460 91c665fb46e847d4a6de7463b12a1d1d a1959aa8761e48ae9c708bd25a18f2fe - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:29:10 compute-0 nova_compute[250018]: 2026-01-20 14:29:10.455 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:29:10 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e157 do_prune osdmap full prune enabled
Jan 20 14:29:10 compute-0 ceph-mon[74360]: pgmap v1149: 321 pgs: 321 active+clean; 360 MiB data, 517 MiB used, 20 GiB / 21 GiB avail; 917 KiB/s rd, 2.4 MiB/s wr, 97 op/s
Jan 20 14:29:10 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e158 e158: 3 total, 3 up, 3 in
Jan 20 14:29:10 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e158: 3 total, 3 up, 3 in
Jan 20 14:29:10 compute-0 ceph-osd[84815]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/lock/cls_lock.cc:291: Could not read list of current lockers off disk: (2) No such file or directory
Jan 20 14:29:10 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:29:10 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:29:10 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:29:10.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:29:10 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:29:10 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/909451222' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:29:10 compute-0 nova_compute[250018]: 2026-01-20 14:29:10.734 250022 DEBUG oslo_concurrency.processutils [None req-ce9bb50c-3a9e-4722-bcfd-eca84afe2460 91c665fb46e847d4a6de7463b12a1d1d a1959aa8761e48ae9c708bd25a18f2fe - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:29:10 compute-0 nova_compute[250018]: 2026-01-20 14:29:10.772 250022 DEBUG nova.storage.rbd_utils [None req-ce9bb50c-3a9e-4722-bcfd-eca84afe2460 91c665fb46e847d4a6de7463b12a1d1d a1959aa8761e48ae9c708bd25a18f2fe - - default default] rbd image cc200085-11d4-4fbb-afb9-fa6186779430_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:29:10 compute-0 nova_compute[250018]: 2026-01-20 14:29:10.778 250022 DEBUG oslo_concurrency.processutils [None req-ce9bb50c-3a9e-4722-bcfd-eca84afe2460 91c665fb46e847d4a6de7463b12a1d1d a1959aa8761e48ae9c708bd25a18f2fe - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:29:11 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:29:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 14:29:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:29:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 20 14:29:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:29:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006568396089242509 of space, bias 1.0, pg target 1.9705188267727527 quantized to 32 (current 32)
Jan 20 14:29:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:29:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021625052345058625 of space, bias 1.0, pg target 0.6465890651172529 quantized to 32 (current 32)
Jan 20 14:29:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:29:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:29:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:29:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Jan 20 14:29:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:29:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 32)
Jan 20 14:29:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:29:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:29:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:29:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021737739624046157 quantized to 32 (current 32)
Jan 20 14:29:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:29:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Jan 20 14:29:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:29:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:29:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:29:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Jan 20 14:29:11 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:29:11 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3359295586' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:29:11 compute-0 nova_compute[250018]: 2026-01-20 14:29:11.212 250022 DEBUG oslo_concurrency.processutils [None req-ce9bb50c-3a9e-4722-bcfd-eca84afe2460 91c665fb46e847d4a6de7463b12a1d1d a1959aa8761e48ae9c708bd25a18f2fe - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:29:11 compute-0 nova_compute[250018]: 2026-01-20 14:29:11.214 250022 DEBUG nova.objects.instance [None req-ce9bb50c-3a9e-4722-bcfd-eca84afe2460 91c665fb46e847d4a6de7463b12a1d1d a1959aa8761e48ae9c708bd25a18f2fe - - default default] Lazy-loading 'pci_devices' on Instance uuid cc200085-11d4-4fbb-afb9-fa6186779430 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:29:11 compute-0 nova_compute[250018]: 2026-01-20 14:29:11.231 250022 DEBUG nova.virt.libvirt.driver [None req-ce9bb50c-3a9e-4722-bcfd-eca84afe2460 91c665fb46e847d4a6de7463b12a1d1d a1959aa8761e48ae9c708bd25a18f2fe - - default default] [instance: cc200085-11d4-4fbb-afb9-fa6186779430] End _get_guest_xml xml=<domain type="kvm">
Jan 20 14:29:11 compute-0 nova_compute[250018]:   <uuid>cc200085-11d4-4fbb-afb9-fa6186779430</uuid>
Jan 20 14:29:11 compute-0 nova_compute[250018]:   <name>instance-00000019</name>
Jan 20 14:29:11 compute-0 nova_compute[250018]:   <memory>131072</memory>
Jan 20 14:29:11 compute-0 nova_compute[250018]:   <vcpu>1</vcpu>
Jan 20 14:29:11 compute-0 nova_compute[250018]:   <metadata>
Jan 20 14:29:11 compute-0 nova_compute[250018]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 14:29:11 compute-0 nova_compute[250018]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 14:29:11 compute-0 nova_compute[250018]:       <nova:name>tempest-TenantUsagesTestJSON-server-513706756</nova:name>
Jan 20 14:29:11 compute-0 nova_compute[250018]:       <nova:creationTime>2026-01-20 14:29:10</nova:creationTime>
Jan 20 14:29:11 compute-0 nova_compute[250018]:       <nova:flavor name="m1.nano">
Jan 20 14:29:11 compute-0 nova_compute[250018]:         <nova:memory>128</nova:memory>
Jan 20 14:29:11 compute-0 nova_compute[250018]:         <nova:disk>1</nova:disk>
Jan 20 14:29:11 compute-0 nova_compute[250018]:         <nova:swap>0</nova:swap>
Jan 20 14:29:11 compute-0 nova_compute[250018]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 14:29:11 compute-0 nova_compute[250018]:         <nova:vcpus>1</nova:vcpus>
Jan 20 14:29:11 compute-0 nova_compute[250018]:       </nova:flavor>
Jan 20 14:29:11 compute-0 nova_compute[250018]:       <nova:owner>
Jan 20 14:29:11 compute-0 nova_compute[250018]:         <nova:user uuid="91c665fb46e847d4a6de7463b12a1d1d">tempest-TenantUsagesTestJSON-612787959-project-member</nova:user>
Jan 20 14:29:11 compute-0 nova_compute[250018]:         <nova:project uuid="a1959aa8761e48ae9c708bd25a18f2fe">tempest-TenantUsagesTestJSON-612787959</nova:project>
Jan 20 14:29:11 compute-0 nova_compute[250018]:       </nova:owner>
Jan 20 14:29:11 compute-0 nova_compute[250018]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 14:29:11 compute-0 nova_compute[250018]:       <nova:ports/>
Jan 20 14:29:11 compute-0 nova_compute[250018]:     </nova:instance>
Jan 20 14:29:11 compute-0 nova_compute[250018]:   </metadata>
Jan 20 14:29:11 compute-0 nova_compute[250018]:   <sysinfo type="smbios">
Jan 20 14:29:11 compute-0 nova_compute[250018]:     <system>
Jan 20 14:29:11 compute-0 nova_compute[250018]:       <entry name="manufacturer">RDO</entry>
Jan 20 14:29:11 compute-0 nova_compute[250018]:       <entry name="product">OpenStack Compute</entry>
Jan 20 14:29:11 compute-0 nova_compute[250018]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 14:29:11 compute-0 nova_compute[250018]:       <entry name="serial">cc200085-11d4-4fbb-afb9-fa6186779430</entry>
Jan 20 14:29:11 compute-0 nova_compute[250018]:       <entry name="uuid">cc200085-11d4-4fbb-afb9-fa6186779430</entry>
Jan 20 14:29:11 compute-0 nova_compute[250018]:       <entry name="family">Virtual Machine</entry>
Jan 20 14:29:11 compute-0 nova_compute[250018]:     </system>
Jan 20 14:29:11 compute-0 nova_compute[250018]:   </sysinfo>
Jan 20 14:29:11 compute-0 nova_compute[250018]:   <os>
Jan 20 14:29:11 compute-0 nova_compute[250018]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 14:29:11 compute-0 nova_compute[250018]:     <boot dev="hd"/>
Jan 20 14:29:11 compute-0 nova_compute[250018]:     <smbios mode="sysinfo"/>
Jan 20 14:29:11 compute-0 nova_compute[250018]:   </os>
Jan 20 14:29:11 compute-0 nova_compute[250018]:   <features>
Jan 20 14:29:11 compute-0 nova_compute[250018]:     <acpi/>
Jan 20 14:29:11 compute-0 nova_compute[250018]:     <apic/>
Jan 20 14:29:11 compute-0 nova_compute[250018]:     <vmcoreinfo/>
Jan 20 14:29:11 compute-0 nova_compute[250018]:   </features>
Jan 20 14:29:11 compute-0 nova_compute[250018]:   <clock offset="utc">
Jan 20 14:29:11 compute-0 nova_compute[250018]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 14:29:11 compute-0 nova_compute[250018]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 14:29:11 compute-0 nova_compute[250018]:     <timer name="hpet" present="no"/>
Jan 20 14:29:11 compute-0 nova_compute[250018]:   </clock>
Jan 20 14:29:11 compute-0 nova_compute[250018]:   <cpu mode="custom" match="exact">
Jan 20 14:29:11 compute-0 nova_compute[250018]:     <model>Nehalem</model>
Jan 20 14:29:11 compute-0 nova_compute[250018]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 14:29:11 compute-0 nova_compute[250018]:   </cpu>
Jan 20 14:29:11 compute-0 nova_compute[250018]:   <devices>
Jan 20 14:29:11 compute-0 nova_compute[250018]:     <disk type="network" device="disk">
Jan 20 14:29:11 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 14:29:11 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/cc200085-11d4-4fbb-afb9-fa6186779430_disk">
Jan 20 14:29:11 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:29:11 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:29:11 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:29:11 compute-0 nova_compute[250018]:       </source>
Jan 20 14:29:11 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 14:29:11 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:29:11 compute-0 nova_compute[250018]:       </auth>
Jan 20 14:29:11 compute-0 nova_compute[250018]:       <target dev="vda" bus="virtio"/>
Jan 20 14:29:11 compute-0 nova_compute[250018]:     </disk>
Jan 20 14:29:11 compute-0 nova_compute[250018]:     <disk type="network" device="cdrom">
Jan 20 14:29:11 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 14:29:11 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/cc200085-11d4-4fbb-afb9-fa6186779430_disk.config">
Jan 20 14:29:11 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:29:11 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:29:11 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:29:11 compute-0 nova_compute[250018]:       </source>
Jan 20 14:29:11 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 14:29:11 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:29:11 compute-0 nova_compute[250018]:       </auth>
Jan 20 14:29:11 compute-0 nova_compute[250018]:       <target dev="sda" bus="sata"/>
Jan 20 14:29:11 compute-0 nova_compute[250018]:     </disk>
Jan 20 14:29:11 compute-0 nova_compute[250018]:     <serial type="pty">
Jan 20 14:29:11 compute-0 nova_compute[250018]:       <log file="/var/lib/nova/instances/cc200085-11d4-4fbb-afb9-fa6186779430/console.log" append="off"/>
Jan 20 14:29:11 compute-0 nova_compute[250018]:     </serial>
Jan 20 14:29:11 compute-0 nova_compute[250018]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 14:29:11 compute-0 nova_compute[250018]:     <video>
Jan 20 14:29:11 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 14:29:11 compute-0 nova_compute[250018]:     </video>
Jan 20 14:29:11 compute-0 nova_compute[250018]:     <input type="tablet" bus="usb"/>
Jan 20 14:29:11 compute-0 nova_compute[250018]:     <rng model="virtio">
Jan 20 14:29:11 compute-0 nova_compute[250018]:       <backend model="random">/dev/urandom</backend>
Jan 20 14:29:11 compute-0 nova_compute[250018]:     </rng>
Jan 20 14:29:11 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root"/>
Jan 20 14:29:11 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:29:11 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:29:11 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:29:11 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:29:11 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:29:11 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:29:11 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:29:11 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:29:11 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:29:11 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:29:11 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:29:11 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:29:11 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:29:11 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:29:11 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:29:11 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:29:11 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:29:11 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:29:11 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:29:11 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:29:11 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:29:11 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:29:11 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:29:11 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:29:11 compute-0 nova_compute[250018]:     <controller type="usb" index="0"/>
Jan 20 14:29:11 compute-0 nova_compute[250018]:     <memballoon model="virtio">
Jan 20 14:29:11 compute-0 nova_compute[250018]:       <stats period="10"/>
Jan 20 14:29:11 compute-0 nova_compute[250018]:     </memballoon>
Jan 20 14:29:11 compute-0 nova_compute[250018]:   </devices>
Jan 20 14:29:11 compute-0 nova_compute[250018]: </domain>
Jan 20 14:29:11 compute-0 nova_compute[250018]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 14:29:11 compute-0 nova_compute[250018]: 2026-01-20 14:29:11.307 250022 DEBUG nova.virt.libvirt.driver [None req-ce9bb50c-3a9e-4722-bcfd-eca84afe2460 91c665fb46e847d4a6de7463b12a1d1d a1959aa8761e48ae9c708bd25a18f2fe - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:29:11 compute-0 nova_compute[250018]: 2026-01-20 14:29:11.307 250022 DEBUG nova.virt.libvirt.driver [None req-ce9bb50c-3a9e-4722-bcfd-eca84afe2460 91c665fb46e847d4a6de7463b12a1d1d a1959aa8761e48ae9c708bd25a18f2fe - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:29:11 compute-0 nova_compute[250018]: 2026-01-20 14:29:11.308 250022 INFO nova.virt.libvirt.driver [None req-ce9bb50c-3a9e-4722-bcfd-eca84afe2460 91c665fb46e847d4a6de7463b12a1d1d a1959aa8761e48ae9c708bd25a18f2fe - - default default] [instance: cc200085-11d4-4fbb-afb9-fa6186779430] Using config drive
Jan 20 14:29:11 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1151: 321 pgs: 2 active+clean+snaptrim, 7 active+clean+snaptrim_wait, 312 active+clean; 382 MiB data, 505 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 915 KiB/s wr, 188 op/s
Jan 20 14:29:11 compute-0 nova_compute[250018]: 2026-01-20 14:29:11.334 250022 DEBUG nova.storage.rbd_utils [None req-ce9bb50c-3a9e-4722-bcfd-eca84afe2460 91c665fb46e847d4a6de7463b12a1d1d a1959aa8761e48ae9c708bd25a18f2fe - - default default] rbd image cc200085-11d4-4fbb-afb9-fa6186779430_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:29:11 compute-0 nova_compute[250018]: 2026-01-20 14:29:11.491 250022 DEBUG oslo_concurrency.lockutils [None req-3359e910-77ea-495c-9932-5a930b83905b bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] Acquiring lock "79b5596e-43c9-4085-9829-454fecf59490" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:29:11 compute-0 nova_compute[250018]: 2026-01-20 14:29:11.492 250022 DEBUG oslo_concurrency.lockutils [None req-3359e910-77ea-495c-9932-5a930b83905b bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] Lock "79b5596e-43c9-4085-9829-454fecf59490" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:29:11 compute-0 nova_compute[250018]: 2026-01-20 14:29:11.492 250022 DEBUG oslo_concurrency.lockutils [None req-3359e910-77ea-495c-9932-5a930b83905b bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] Acquiring lock "79b5596e-43c9-4085-9829-454fecf59490-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:29:11 compute-0 nova_compute[250018]: 2026-01-20 14:29:11.492 250022 DEBUG oslo_concurrency.lockutils [None req-3359e910-77ea-495c-9932-5a930b83905b bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] Lock "79b5596e-43c9-4085-9829-454fecf59490-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:29:11 compute-0 nova_compute[250018]: 2026-01-20 14:29:11.493 250022 DEBUG oslo_concurrency.lockutils [None req-3359e910-77ea-495c-9932-5a930b83905b bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] Lock "79b5596e-43c9-4085-9829-454fecf59490-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:29:11 compute-0 nova_compute[250018]: 2026-01-20 14:29:11.494 250022 INFO nova.compute.manager [None req-3359e910-77ea-495c-9932-5a930b83905b bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Terminating instance
Jan 20 14:29:11 compute-0 nova_compute[250018]: 2026-01-20 14:29:11.495 250022 DEBUG nova.compute.manager [None req-3359e910-77ea-495c-9932-5a930b83905b bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 14:29:11 compute-0 kernel: tapbd002580-dd (unregistering): left promiscuous mode
Jan 20 14:29:11 compute-0 NetworkManager[48960]: <info>  [1768919351.5686] device (tapbd002580-dd): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 14:29:11 compute-0 ovn_controller[148666]: 2026-01-20T14:29:11Z|00090|binding|INFO|Releasing lport bd002580-dd95-49e1-bc34-e85f86272a05 from this chassis (sb_readonly=0)
Jan 20 14:29:11 compute-0 ovn_controller[148666]: 2026-01-20T14:29:11Z|00091|binding|INFO|Setting lport bd002580-dd95-49e1-bc34-e85f86272a05 down in Southbound
Jan 20 14:29:11 compute-0 ovn_controller[148666]: 2026-01-20T14:29:11Z|00092|binding|INFO|Removing iface tapbd002580-dd ovn-installed in OVS
Jan 20 14:29:11 compute-0 nova_compute[250018]: 2026-01-20 14:29:11.580 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:29:11 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:29:11.589 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:24:ce:0d 10.100.0.10'], port_security=['fa:16:3e:24:ce:0d 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '79b5596e-43c9-4085-9829-454fecf59490', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-14f18b27-1594-48d8-a08b-a930f7adbc08', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd15f60b9e48e4175b5520d1e57ed2d3a', 'neutron:revision_number': '23', 'neutron:security_group_ids': '6d729cfd-2f98-4ca5-a524-e543b12b3766', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=02983c41-bbec-48cf-910a-84fed1be783f, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=bd002580-dd95-49e1-bc34-e85f86272a05) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:29:11 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:29:11.591 160071 INFO neutron.agent.ovn.metadata.agent [-] Port bd002580-dd95-49e1-bc34-e85f86272a05 in datapath 14f18b27-1594-48d8-a08b-a930f7adbc08 unbound from our chassis
Jan 20 14:29:11 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:29:11.594 160071 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 14f18b27-1594-48d8-a08b-a930f7adbc08, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 14:29:11 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:29:11.596 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[9c5b122e-8a02-471e-9302-2847b1bdd0e9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:29:11 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:29:11.596 160071 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-14f18b27-1594-48d8-a08b-a930f7adbc08 namespace which is not needed anymore
Jan 20 14:29:11 compute-0 nova_compute[250018]: 2026-01-20 14:29:11.620 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:29:11 compute-0 ceph-mon[74360]: osdmap e158: 3 total, 3 up, 3 in
Jan 20 14:29:11 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/909451222' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:29:11 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3359295586' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:29:11 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/107033582' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:29:11 compute-0 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d00000017.scope: Deactivated successfully.
Jan 20 14:29:11 compute-0 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d00000017.scope: Consumed 2.153s CPU time.
Jan 20 14:29:11 compute-0 systemd-machined[216401]: Machine qemu-11-instance-00000017 terminated.
Jan 20 14:29:11 compute-0 nova_compute[250018]: 2026-01-20 14:29:11.715 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:29:11 compute-0 nova_compute[250018]: 2026-01-20 14:29:11.721 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:29:11 compute-0 nova_compute[250018]: 2026-01-20 14:29:11.729 250022 INFO nova.virt.libvirt.driver [-] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Instance destroyed successfully.
Jan 20 14:29:11 compute-0 nova_compute[250018]: 2026-01-20 14:29:11.730 250022 DEBUG nova.objects.instance [None req-3359e910-77ea-495c-9932-5a930b83905b bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] Lazy-loading 'resources' on Instance uuid 79b5596e-43c9-4085-9829-454fecf59490 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:29:11 compute-0 neutron-haproxy-ovnmeta-14f18b27-1594-48d8-a08b-a930f7adbc08[269804]: [NOTICE]   (269832) : haproxy version is 2.8.14-c23fe91
Jan 20 14:29:11 compute-0 neutron-haproxy-ovnmeta-14f18b27-1594-48d8-a08b-a930f7adbc08[269804]: [NOTICE]   (269832) : path to executable is /usr/sbin/haproxy
Jan 20 14:29:11 compute-0 neutron-haproxy-ovnmeta-14f18b27-1594-48d8-a08b-a930f7adbc08[269804]: [WARNING]  (269832) : Exiting Master process...
Jan 20 14:29:11 compute-0 neutron-haproxy-ovnmeta-14f18b27-1594-48d8-a08b-a930f7adbc08[269804]: [WARNING]  (269832) : Exiting Master process...
Jan 20 14:29:11 compute-0 neutron-haproxy-ovnmeta-14f18b27-1594-48d8-a08b-a930f7adbc08[269804]: [ALERT]    (269832) : Current worker (269835) exited with code 143 (Terminated)
Jan 20 14:29:11 compute-0 neutron-haproxy-ovnmeta-14f18b27-1594-48d8-a08b-a930f7adbc08[269804]: [WARNING]  (269832) : All workers exited. Exiting... (0)
Jan 20 14:29:11 compute-0 systemd[1]: libpod-ffec0aa20c27b34dc55a0e5de6eaa2f6129d8ce027a2ff64ed9102b9d0c0c158.scope: Deactivated successfully.
Jan 20 14:29:11 compute-0 nova_compute[250018]: 2026-01-20 14:29:11.747 250022 DEBUG nova.virt.libvirt.vif [None req-3359e910-77ea-495c-9932-5a930b83905b bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-20T14:28:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-1483268234',display_name='tempest-LiveMigrationTest-server-1483268234',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-livemigrationtest-server-1483268234',id=23,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:28:24Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d15f60b9e48e4175b5520d1e57ed2d3a',ramdisk_id='',reservation_id='r-jglb1q09',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='2',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-LiveMigrationTest-864280704',owner_user_name='tempest-LiveMigrationTest-864280704-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T14:29:10Z,user_data=None,user_id='bce7fcbd19554e29bb80c5b93b7dd3c9',uuid=79b5596e-43c9-4085-9829-454fecf59490,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bd002580-dd95-49e1-bc34-e85f86272a05", "address": "fa:16:3e:24:ce:0d", "network": {"id": "14f18b27-1594-48d8-a08b-a930f7adbc08", "bridge": "br-int", "label": "tempest-LiveMigrationTest-2126108622-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d15f60b9e48e4175b5520d1e57ed2d3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd002580-dd", "ovs_interfaceid": "bd002580-dd95-49e1-bc34-e85f86272a05", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 14:29:11 compute-0 nova_compute[250018]: 2026-01-20 14:29:11.748 250022 DEBUG nova.network.os_vif_util [None req-3359e910-77ea-495c-9932-5a930b83905b bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] Converting VIF {"id": "bd002580-dd95-49e1-bc34-e85f86272a05", "address": "fa:16:3e:24:ce:0d", "network": {"id": "14f18b27-1594-48d8-a08b-a930f7adbc08", "bridge": "br-int", "label": "tempest-LiveMigrationTest-2126108622-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d15f60b9e48e4175b5520d1e57ed2d3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd002580-dd", "ovs_interfaceid": "bd002580-dd95-49e1-bc34-e85f86272a05", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:29:11 compute-0 nova_compute[250018]: 2026-01-20 14:29:11.748 250022 DEBUG nova.network.os_vif_util [None req-3359e910-77ea-495c-9932-5a930b83905b bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:24:ce:0d,bridge_name='br-int',has_traffic_filtering=True,id=bd002580-dd95-49e1-bc34-e85f86272a05,network=Network(14f18b27-1594-48d8-a08b-a930f7adbc08),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbd002580-dd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:29:11 compute-0 podman[270348]: 2026-01-20 14:29:11.749233372 +0000 UTC m=+0.056502929 container died ffec0aa20c27b34dc55a0e5de6eaa2f6129d8ce027a2ff64ed9102b9d0c0c158 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-14f18b27-1594-48d8-a08b-a930f7adbc08, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 20 14:29:11 compute-0 nova_compute[250018]: 2026-01-20 14:29:11.749 250022 DEBUG os_vif [None req-3359e910-77ea-495c-9932-5a930b83905b bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:24:ce:0d,bridge_name='br-int',has_traffic_filtering=True,id=bd002580-dd95-49e1-bc34-e85f86272a05,network=Network(14f18b27-1594-48d8-a08b-a930f7adbc08),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbd002580-dd') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 14:29:11 compute-0 nova_compute[250018]: 2026-01-20 14:29:11.750 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:29:11 compute-0 nova_compute[250018]: 2026-01-20 14:29:11.751 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbd002580-dd, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:29:11 compute-0 nova_compute[250018]: 2026-01-20 14:29:11.753 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:29:11 compute-0 nova_compute[250018]: 2026-01-20 14:29:11.753 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:29:11 compute-0 nova_compute[250018]: 2026-01-20 14:29:11.756 250022 INFO os_vif [None req-3359e910-77ea-495c-9932-5a930b83905b bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:24:ce:0d,bridge_name='br-int',has_traffic_filtering=True,id=bd002580-dd95-49e1-bc34-e85f86272a05,network=Network(14f18b27-1594-48d8-a08b-a930f7adbc08),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbd002580-dd')
Jan 20 14:29:11 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ffec0aa20c27b34dc55a0e5de6eaa2f6129d8ce027a2ff64ed9102b9d0c0c158-userdata-shm.mount: Deactivated successfully.
Jan 20 14:29:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-550a0dd8c20b0a1bbc72aebf610897080d3eabc1704701de80624b86f106f0b0-merged.mount: Deactivated successfully.
Jan 20 14:29:11 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:29:11 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:29:11 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:29:11.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:29:11 compute-0 podman[270348]: 2026-01-20 14:29:11.814135368 +0000 UTC m=+0.121404925 container cleanup ffec0aa20c27b34dc55a0e5de6eaa2f6129d8ce027a2ff64ed9102b9d0c0c158 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-14f18b27-1594-48d8-a08b-a930f7adbc08, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 20 14:29:11 compute-0 systemd[1]: libpod-conmon-ffec0aa20c27b34dc55a0e5de6eaa2f6129d8ce027a2ff64ed9102b9d0c0c158.scope: Deactivated successfully.
Jan 20 14:29:11 compute-0 podman[270404]: 2026-01-20 14:29:11.888864766 +0000 UTC m=+0.048890405 container remove ffec0aa20c27b34dc55a0e5de6eaa2f6129d8ce027a2ff64ed9102b9d0c0c158 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-14f18b27-1594-48d8-a08b-a930f7adbc08, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 20 14:29:11 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:29:11.896 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[389c291c-e18e-452e-af7a-e7777e022133]: (4, ('Tue Jan 20 02:29:11 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-14f18b27-1594-48d8-a08b-a930f7adbc08 (ffec0aa20c27b34dc55a0e5de6eaa2f6129d8ce027a2ff64ed9102b9d0c0c158)\nffec0aa20c27b34dc55a0e5de6eaa2f6129d8ce027a2ff64ed9102b9d0c0c158\nTue Jan 20 02:29:11 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-14f18b27-1594-48d8-a08b-a930f7adbc08 (ffec0aa20c27b34dc55a0e5de6eaa2f6129d8ce027a2ff64ed9102b9d0c0c158)\nffec0aa20c27b34dc55a0e5de6eaa2f6129d8ce027a2ff64ed9102b9d0c0c158\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:29:11 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:29:11.898 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[21330e59-13e6-4cda-bb16-a1ad038cc380]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:29:11 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:29:11.899 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap14f18b27-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:29:11 compute-0 nova_compute[250018]: 2026-01-20 14:29:11.901 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:29:11 compute-0 kernel: tap14f18b27-10: left promiscuous mode
Jan 20 14:29:11 compute-0 nova_compute[250018]: 2026-01-20 14:29:11.903 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:29:11 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:29:11.905 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[c7404afa-afd9-4967-abbd-74eb56481f01]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:29:11 compute-0 nova_compute[250018]: 2026-01-20 14:29:11.918 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:29:11 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:29:11.921 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[aace1b99-a19c-4041-b149-bf9cac9ebb24]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:29:11 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:29:11.922 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[adc796c2-edc5-41a7-9f7f-086f4bacc0f2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:29:11 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:29:11.938 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[9cf1c675-f02b-4556-999f-e827c4e0787c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 528571, 'reachable_time': 16288, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 270419, 'error': None, 'target': 'ovnmeta-14f18b27-1594-48d8-a08b-a930f7adbc08', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:29:11 compute-0 systemd[1]: run-netns-ovnmeta\x2d14f18b27\x2d1594\x2d48d8\x2da08b\x2da930f7adbc08.mount: Deactivated successfully.
Jan 20 14:29:11 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:29:11.942 160517 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-14f18b27-1594-48d8-a08b-a930f7adbc08 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 14:29:11 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:29:11.942 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[f8d90637-5bcf-4e13-833b-ba4dfee40c66]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:29:11 compute-0 nova_compute[250018]: 2026-01-20 14:29:11.975 250022 INFO nova.virt.libvirt.driver [None req-3359e910-77ea-495c-9932-5a930b83905b bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Deleting instance files /var/lib/nova/instances/79b5596e-43c9-4085-9829-454fecf59490_del
Jan 20 14:29:11 compute-0 nova_compute[250018]: 2026-01-20 14:29:11.976 250022 INFO nova.virt.libvirt.driver [None req-3359e910-77ea-495c-9932-5a930b83905b bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Deletion of /var/lib/nova/instances/79b5596e-43c9-4085-9829-454fecf59490_del complete
Jan 20 14:29:12 compute-0 nova_compute[250018]: 2026-01-20 14:29:12.031 250022 INFO nova.virt.libvirt.driver [None req-ce9bb50c-3a9e-4722-bcfd-eca84afe2460 91c665fb46e847d4a6de7463b12a1d1d a1959aa8761e48ae9c708bd25a18f2fe - - default default] [instance: cc200085-11d4-4fbb-afb9-fa6186779430] Creating config drive at /var/lib/nova/instances/cc200085-11d4-4fbb-afb9-fa6186779430/disk.config
Jan 20 14:29:12 compute-0 nova_compute[250018]: 2026-01-20 14:29:12.036 250022 DEBUG oslo_concurrency.processutils [None req-ce9bb50c-3a9e-4722-bcfd-eca84afe2460 91c665fb46e847d4a6de7463b12a1d1d a1959aa8761e48ae9c708bd25a18f2fe - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/cc200085-11d4-4fbb-afb9-fa6186779430/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcybsrdvz execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:29:12 compute-0 nova_compute[250018]: 2026-01-20 14:29:12.066 250022 INFO nova.compute.manager [None req-3359e910-77ea-495c-9932-5a930b83905b bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Took 0.57 seconds to destroy the instance on the hypervisor.
Jan 20 14:29:12 compute-0 nova_compute[250018]: 2026-01-20 14:29:12.067 250022 DEBUG oslo.service.loopingcall [None req-3359e910-77ea-495c-9932-5a930b83905b bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 14:29:12 compute-0 nova_compute[250018]: 2026-01-20 14:29:12.068 250022 DEBUG nova.compute.manager [-] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 14:29:12 compute-0 nova_compute[250018]: 2026-01-20 14:29:12.068 250022 DEBUG nova.network.neutron [-] [instance: 79b5596e-43c9-4085-9829-454fecf59490] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 14:29:12 compute-0 nova_compute[250018]: 2026-01-20 14:29:12.169 250022 DEBUG oslo_concurrency.processutils [None req-ce9bb50c-3a9e-4722-bcfd-eca84afe2460 91c665fb46e847d4a6de7463b12a1d1d a1959aa8761e48ae9c708bd25a18f2fe - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/cc200085-11d4-4fbb-afb9-fa6186779430/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcybsrdvz" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:29:12 compute-0 nova_compute[250018]: 2026-01-20 14:29:12.194 250022 DEBUG nova.storage.rbd_utils [None req-ce9bb50c-3a9e-4722-bcfd-eca84afe2460 91c665fb46e847d4a6de7463b12a1d1d a1959aa8761e48ae9c708bd25a18f2fe - - default default] rbd image cc200085-11d4-4fbb-afb9-fa6186779430_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:29:12 compute-0 nova_compute[250018]: 2026-01-20 14:29:12.197 250022 DEBUG oslo_concurrency.processutils [None req-ce9bb50c-3a9e-4722-bcfd-eca84afe2460 91c665fb46e847d4a6de7463b12a1d1d a1959aa8761e48ae9c708bd25a18f2fe - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/cc200085-11d4-4fbb-afb9-fa6186779430/disk.config cc200085-11d4-4fbb-afb9-fa6186779430_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:29:12 compute-0 nova_compute[250018]: 2026-01-20 14:29:12.233 250022 DEBUG nova.compute.manager [req-af2912b3-e87c-4b87-b327-321fbd3a8fe8 req-a2e59f79-c5f4-4bc3-bf46-1d1de4cf649f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Received event network-vif-unplugged-bd002580-dd95-49e1-bc34-e85f86272a05 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:29:12 compute-0 nova_compute[250018]: 2026-01-20 14:29:12.234 250022 DEBUG oslo_concurrency.lockutils [req-af2912b3-e87c-4b87-b327-321fbd3a8fe8 req-a2e59f79-c5f4-4bc3-bf46-1d1de4cf649f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "79b5596e-43c9-4085-9829-454fecf59490-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:29:12 compute-0 nova_compute[250018]: 2026-01-20 14:29:12.234 250022 DEBUG oslo_concurrency.lockutils [req-af2912b3-e87c-4b87-b327-321fbd3a8fe8 req-a2e59f79-c5f4-4bc3-bf46-1d1de4cf649f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "79b5596e-43c9-4085-9829-454fecf59490-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:29:12 compute-0 nova_compute[250018]: 2026-01-20 14:29:12.235 250022 DEBUG oslo_concurrency.lockutils [req-af2912b3-e87c-4b87-b327-321fbd3a8fe8 req-a2e59f79-c5f4-4bc3-bf46-1d1de4cf649f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "79b5596e-43c9-4085-9829-454fecf59490-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:29:12 compute-0 nova_compute[250018]: 2026-01-20 14:29:12.235 250022 DEBUG nova.compute.manager [req-af2912b3-e87c-4b87-b327-321fbd3a8fe8 req-a2e59f79-c5f4-4bc3-bf46-1d1de4cf649f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] No waiting events found dispatching network-vif-unplugged-bd002580-dd95-49e1-bc34-e85f86272a05 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:29:12 compute-0 nova_compute[250018]: 2026-01-20 14:29:12.235 250022 DEBUG nova.compute.manager [req-af2912b3-e87c-4b87-b327-321fbd3a8fe8 req-a2e59f79-c5f4-4bc3-bf46-1d1de4cf649f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Received event network-vif-unplugged-bd002580-dd95-49e1-bc34-e85f86272a05 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 20 14:29:12 compute-0 nova_compute[250018]: 2026-01-20 14:29:12.379 250022 DEBUG oslo_concurrency.processutils [None req-ce9bb50c-3a9e-4722-bcfd-eca84afe2460 91c665fb46e847d4a6de7463b12a1d1d a1959aa8761e48ae9c708bd25a18f2fe - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/cc200085-11d4-4fbb-afb9-fa6186779430/disk.config cc200085-11d4-4fbb-afb9-fa6186779430_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.181s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:29:12 compute-0 nova_compute[250018]: 2026-01-20 14:29:12.380 250022 INFO nova.virt.libvirt.driver [None req-ce9bb50c-3a9e-4722-bcfd-eca84afe2460 91c665fb46e847d4a6de7463b12a1d1d a1959aa8761e48ae9c708bd25a18f2fe - - default default] [instance: cc200085-11d4-4fbb-afb9-fa6186779430] Deleting local config drive /var/lib/nova/instances/cc200085-11d4-4fbb-afb9-fa6186779430/disk.config because it was imported into RBD.
Jan 20 14:29:12 compute-0 systemd-machined[216401]: New machine qemu-12-instance-00000019.
Jan 20 14:29:12 compute-0 systemd[1]: Started Virtual Machine qemu-12-instance-00000019.
Jan 20 14:29:12 compute-0 ceph-mon[74360]: pgmap v1151: 321 pgs: 2 active+clean+snaptrim, 7 active+clean+snaptrim_wait, 312 active+clean; 382 MiB data, 505 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 915 KiB/s wr, 188 op/s
Jan 20 14:29:12 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/549940775' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:29:12 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:29:12 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:29:12 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:29:12.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:29:12 compute-0 nova_compute[250018]: 2026-01-20 14:29:12.799 250022 DEBUG nova.network.neutron [-] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:29:12 compute-0 nova_compute[250018]: 2026-01-20 14:29:12.828 250022 INFO nova.compute.manager [-] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Took 0.76 seconds to deallocate network for instance.
Jan 20 14:29:12 compute-0 nova_compute[250018]: 2026-01-20 14:29:12.918 250022 DEBUG nova.compute.manager [req-81756a74-9fa8-4d8a-bfc5-1eef3a4393fb req-84f6b530-e4c7-424d-9bc5-77eb39b26223 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Received event network-vif-deleted-bd002580-dd95-49e1-bc34-e85f86272a05 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:29:12 compute-0 sshd-session[270478]: Invalid user admin from 157.245.78.139 port 38956
Jan 20 14:29:13 compute-0 sshd-session[270478]: Connection closed by invalid user admin 157.245.78.139 port 38956 [preauth]
Jan 20 14:29:13 compute-0 nova_compute[250018]: 2026-01-20 14:29:13.065 250022 INFO nova.compute.manager [None req-3359e910-77ea-495c-9932-5a930b83905b bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Took 0.24 seconds to detach 1 volumes for instance.
Jan 20 14:29:13 compute-0 nova_compute[250018]: 2026-01-20 14:29:13.067 250022 DEBUG nova.compute.manager [None req-3359e910-77ea-495c-9932-5a930b83905b bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Deleting volume: 47e883f3-6efe-40b3-be28-6c01525dfc0c _cleanup_volumes /usr/lib/python3.9/site-packages/nova/compute/manager.py:3217
Jan 20 14:29:13 compute-0 nova_compute[250018]: 2026-01-20 14:29:13.281 250022 DEBUG oslo_concurrency.lockutils [None req-3359e910-77ea-495c-9932-5a930b83905b bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:29:13 compute-0 nova_compute[250018]: 2026-01-20 14:29:13.283 250022 DEBUG oslo_concurrency.lockutils [None req-3359e910-77ea-495c-9932-5a930b83905b bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:29:13 compute-0 nova_compute[250018]: 2026-01-20 14:29:13.288 250022 DEBUG oslo_concurrency.lockutils [None req-3359e910-77ea-495c-9932-5a930b83905b bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.005s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:29:13 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1152: 321 pgs: 2 active+clean+snaptrim, 7 active+clean+snaptrim_wait, 312 active+clean; 382 MiB data, 505 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 786 KiB/s wr, 172 op/s
Jan 20 14:29:13 compute-0 nova_compute[250018]: 2026-01-20 14:29:13.331 250022 INFO nova.scheduler.client.report [None req-3359e910-77ea-495c-9932-5a930b83905b bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] Deleted allocations for instance 79b5596e-43c9-4085-9829-454fecf59490
Jan 20 14:29:13 compute-0 nova_compute[250018]: 2026-01-20 14:29:13.355 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768919353.355041, cc200085-11d4-4fbb-afb9-fa6186779430 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:29:13 compute-0 nova_compute[250018]: 2026-01-20 14:29:13.358 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: cc200085-11d4-4fbb-afb9-fa6186779430] VM Resumed (Lifecycle Event)
Jan 20 14:29:13 compute-0 nova_compute[250018]: 2026-01-20 14:29:13.360 250022 DEBUG nova.compute.manager [None req-ce9bb50c-3a9e-4722-bcfd-eca84afe2460 91c665fb46e847d4a6de7463b12a1d1d a1959aa8761e48ae9c708bd25a18f2fe - - default default] [instance: cc200085-11d4-4fbb-afb9-fa6186779430] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 14:29:13 compute-0 nova_compute[250018]: 2026-01-20 14:29:13.360 250022 DEBUG nova.virt.libvirt.driver [None req-ce9bb50c-3a9e-4722-bcfd-eca84afe2460 91c665fb46e847d4a6de7463b12a1d1d a1959aa8761e48ae9c708bd25a18f2fe - - default default] [instance: cc200085-11d4-4fbb-afb9-fa6186779430] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 14:29:13 compute-0 nova_compute[250018]: 2026-01-20 14:29:13.365 250022 INFO nova.virt.libvirt.driver [-] [instance: cc200085-11d4-4fbb-afb9-fa6186779430] Instance spawned successfully.
Jan 20 14:29:13 compute-0 nova_compute[250018]: 2026-01-20 14:29:13.365 250022 DEBUG nova.virt.libvirt.driver [None req-ce9bb50c-3a9e-4722-bcfd-eca84afe2460 91c665fb46e847d4a6de7463b12a1d1d a1959aa8761e48ae9c708bd25a18f2fe - - default default] [instance: cc200085-11d4-4fbb-afb9-fa6186779430] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 14:29:13 compute-0 nova_compute[250018]: 2026-01-20 14:29:13.405 250022 DEBUG nova.virt.libvirt.driver [None req-ce9bb50c-3a9e-4722-bcfd-eca84afe2460 91c665fb46e847d4a6de7463b12a1d1d a1959aa8761e48ae9c708bd25a18f2fe - - default default] [instance: cc200085-11d4-4fbb-afb9-fa6186779430] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:29:13 compute-0 nova_compute[250018]: 2026-01-20 14:29:13.406 250022 DEBUG nova.virt.libvirt.driver [None req-ce9bb50c-3a9e-4722-bcfd-eca84afe2460 91c665fb46e847d4a6de7463b12a1d1d a1959aa8761e48ae9c708bd25a18f2fe - - default default] [instance: cc200085-11d4-4fbb-afb9-fa6186779430] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:29:13 compute-0 nova_compute[250018]: 2026-01-20 14:29:13.406 250022 DEBUG nova.virt.libvirt.driver [None req-ce9bb50c-3a9e-4722-bcfd-eca84afe2460 91c665fb46e847d4a6de7463b12a1d1d a1959aa8761e48ae9c708bd25a18f2fe - - default default] [instance: cc200085-11d4-4fbb-afb9-fa6186779430] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:29:13 compute-0 nova_compute[250018]: 2026-01-20 14:29:13.407 250022 DEBUG nova.virt.libvirt.driver [None req-ce9bb50c-3a9e-4722-bcfd-eca84afe2460 91c665fb46e847d4a6de7463b12a1d1d a1959aa8761e48ae9c708bd25a18f2fe - - default default] [instance: cc200085-11d4-4fbb-afb9-fa6186779430] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:29:13 compute-0 nova_compute[250018]: 2026-01-20 14:29:13.407 250022 DEBUG nova.virt.libvirt.driver [None req-ce9bb50c-3a9e-4722-bcfd-eca84afe2460 91c665fb46e847d4a6de7463b12a1d1d a1959aa8761e48ae9c708bd25a18f2fe - - default default] [instance: cc200085-11d4-4fbb-afb9-fa6186779430] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:29:13 compute-0 nova_compute[250018]: 2026-01-20 14:29:13.408 250022 DEBUG nova.virt.libvirt.driver [None req-ce9bb50c-3a9e-4722-bcfd-eca84afe2460 91c665fb46e847d4a6de7463b12a1d1d a1959aa8761e48ae9c708bd25a18f2fe - - default default] [instance: cc200085-11d4-4fbb-afb9-fa6186779430] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:29:13 compute-0 nova_compute[250018]: 2026-01-20 14:29:13.412 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: cc200085-11d4-4fbb-afb9-fa6186779430] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:29:13 compute-0 nova_compute[250018]: 2026-01-20 14:29:13.415 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: cc200085-11d4-4fbb-afb9-fa6186779430] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:29:13 compute-0 nova_compute[250018]: 2026-01-20 14:29:13.439 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: cc200085-11d4-4fbb-afb9-fa6186779430] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:29:13 compute-0 nova_compute[250018]: 2026-01-20 14:29:13.439 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768919353.3582418, cc200085-11d4-4fbb-afb9-fa6186779430 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:29:13 compute-0 nova_compute[250018]: 2026-01-20 14:29:13.439 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: cc200085-11d4-4fbb-afb9-fa6186779430] VM Started (Lifecycle Event)
Jan 20 14:29:13 compute-0 nova_compute[250018]: 2026-01-20 14:29:13.448 250022 DEBUG oslo_concurrency.lockutils [None req-3359e910-77ea-495c-9932-5a930b83905b bce7fcbd19554e29bb80c5b93b7dd3c9 d15f60b9e48e4175b5520d1e57ed2d3a - - default default] Lock "79b5596e-43c9-4085-9829-454fecf59490" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.956s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:29:13 compute-0 nova_compute[250018]: 2026-01-20 14:29:13.462 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: cc200085-11d4-4fbb-afb9-fa6186779430] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:29:13 compute-0 nova_compute[250018]: 2026-01-20 14:29:13.466 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: cc200085-11d4-4fbb-afb9-fa6186779430] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:29:13 compute-0 nova_compute[250018]: 2026-01-20 14:29:13.483 250022 INFO nova.compute.manager [None req-ce9bb50c-3a9e-4722-bcfd-eca84afe2460 91c665fb46e847d4a6de7463b12a1d1d a1959aa8761e48ae9c708bd25a18f2fe - - default default] [instance: cc200085-11d4-4fbb-afb9-fa6186779430] Took 4.03 seconds to spawn the instance on the hypervisor.
Jan 20 14:29:13 compute-0 nova_compute[250018]: 2026-01-20 14:29:13.483 250022 DEBUG nova.compute.manager [None req-ce9bb50c-3a9e-4722-bcfd-eca84afe2460 91c665fb46e847d4a6de7463b12a1d1d a1959aa8761e48ae9c708bd25a18f2fe - - default default] [instance: cc200085-11d4-4fbb-afb9-fa6186779430] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:29:13 compute-0 nova_compute[250018]: 2026-01-20 14:29:13.484 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: cc200085-11d4-4fbb-afb9-fa6186779430] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:29:13 compute-0 nova_compute[250018]: 2026-01-20 14:29:13.538 250022 INFO nova.compute.manager [None req-ce9bb50c-3a9e-4722-bcfd-eca84afe2460 91c665fb46e847d4a6de7463b12a1d1d a1959aa8761e48ae9c708bd25a18f2fe - - default default] [instance: cc200085-11d4-4fbb-afb9-fa6186779430] Took 4.96 seconds to build instance.
Jan 20 14:29:13 compute-0 nova_compute[250018]: 2026-01-20 14:29:13.559 250022 DEBUG oslo_concurrency.lockutils [None req-ce9bb50c-3a9e-4722-bcfd-eca84afe2460 91c665fb46e847d4a6de7463b12a1d1d a1959aa8761e48ae9c708bd25a18f2fe - - default default] Lock "cc200085-11d4-4fbb-afb9-fa6186779430" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.055s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:29:13 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/1240338787' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:29:13 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/636651936' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:29:13 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/636651936' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:29:13 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:29:13 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:29:13 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:29:13.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:29:14 compute-0 nova_compute[250018]: 2026-01-20 14:29:14.365 250022 DEBUG nova.compute.manager [req-0a78b75e-3473-4fc6-9005-431f28d6deb7 req-5688ac97-c85c-42e8-b710-f7a7d07f86fd 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Received event network-vif-plugged-bd002580-dd95-49e1-bc34-e85f86272a05 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:29:14 compute-0 nova_compute[250018]: 2026-01-20 14:29:14.365 250022 DEBUG oslo_concurrency.lockutils [req-0a78b75e-3473-4fc6-9005-431f28d6deb7 req-5688ac97-c85c-42e8-b710-f7a7d07f86fd 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "79b5596e-43c9-4085-9829-454fecf59490-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:29:14 compute-0 nova_compute[250018]: 2026-01-20 14:29:14.365 250022 DEBUG oslo_concurrency.lockutils [req-0a78b75e-3473-4fc6-9005-431f28d6deb7 req-5688ac97-c85c-42e8-b710-f7a7d07f86fd 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "79b5596e-43c9-4085-9829-454fecf59490-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:29:14 compute-0 nova_compute[250018]: 2026-01-20 14:29:14.366 250022 DEBUG oslo_concurrency.lockutils [req-0a78b75e-3473-4fc6-9005-431f28d6deb7 req-5688ac97-c85c-42e8-b710-f7a7d07f86fd 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "79b5596e-43c9-4085-9829-454fecf59490-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:29:14 compute-0 nova_compute[250018]: 2026-01-20 14:29:14.366 250022 DEBUG nova.compute.manager [req-0a78b75e-3473-4fc6-9005-431f28d6deb7 req-5688ac97-c85c-42e8-b710-f7a7d07f86fd 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] No waiting events found dispatching network-vif-plugged-bd002580-dd95-49e1-bc34-e85f86272a05 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:29:14 compute-0 nova_compute[250018]: 2026-01-20 14:29:14.366 250022 WARNING nova.compute.manager [req-0a78b75e-3473-4fc6-9005-431f28d6deb7 req-5688ac97-c85c-42e8-b710-f7a7d07f86fd 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Received unexpected event network-vif-plugged-bd002580-dd95-49e1-bc34-e85f86272a05 for instance with vm_state deleted and task_state None.
Jan 20 14:29:14 compute-0 ceph-mon[74360]: pgmap v1152: 321 pgs: 2 active+clean+snaptrim, 7 active+clean+snaptrim_wait, 312 active+clean; 382 MiB data, 505 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 786 KiB/s wr, 172 op/s
Jan 20 14:29:14 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/1914442263' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:29:14 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/1914442263' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:29:14 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:29:14 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:29:14 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:29:14.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:29:15 compute-0 nova_compute[250018]: 2026-01-20 14:29:15.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:29:15 compute-0 nova_compute[250018]: 2026-01-20 14:29:15.077 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:29:15 compute-0 nova_compute[250018]: 2026-01-20 14:29:15.078 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:29:15 compute-0 nova_compute[250018]: 2026-01-20 14:29:15.078 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:29:15 compute-0 nova_compute[250018]: 2026-01-20 14:29:15.078 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 14:29:15 compute-0 nova_compute[250018]: 2026-01-20 14:29:15.078 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:29:15 compute-0 nova_compute[250018]: 2026-01-20 14:29:15.235 250022 DEBUG oslo_concurrency.lockutils [None req-2a170984-9f21-4f44-8c0a-a64b5eeee6ce 91c665fb46e847d4a6de7463b12a1d1d a1959aa8761e48ae9c708bd25a18f2fe - - default default] Acquiring lock "cc200085-11d4-4fbb-afb9-fa6186779430" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:29:15 compute-0 nova_compute[250018]: 2026-01-20 14:29:15.236 250022 DEBUG oslo_concurrency.lockutils [None req-2a170984-9f21-4f44-8c0a-a64b5eeee6ce 91c665fb46e847d4a6de7463b12a1d1d a1959aa8761e48ae9c708bd25a18f2fe - - default default] Lock "cc200085-11d4-4fbb-afb9-fa6186779430" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:29:15 compute-0 nova_compute[250018]: 2026-01-20 14:29:15.237 250022 DEBUG oslo_concurrency.lockutils [None req-2a170984-9f21-4f44-8c0a-a64b5eeee6ce 91c665fb46e847d4a6de7463b12a1d1d a1959aa8761e48ae9c708bd25a18f2fe - - default default] Acquiring lock "cc200085-11d4-4fbb-afb9-fa6186779430-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:29:15 compute-0 nova_compute[250018]: 2026-01-20 14:29:15.237 250022 DEBUG oslo_concurrency.lockutils [None req-2a170984-9f21-4f44-8c0a-a64b5eeee6ce 91c665fb46e847d4a6de7463b12a1d1d a1959aa8761e48ae9c708bd25a18f2fe - - default default] Lock "cc200085-11d4-4fbb-afb9-fa6186779430-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:29:15 compute-0 nova_compute[250018]: 2026-01-20 14:29:15.238 250022 DEBUG oslo_concurrency.lockutils [None req-2a170984-9f21-4f44-8c0a-a64b5eeee6ce 91c665fb46e847d4a6de7463b12a1d1d a1959aa8761e48ae9c708bd25a18f2fe - - default default] Lock "cc200085-11d4-4fbb-afb9-fa6186779430-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:29:15 compute-0 nova_compute[250018]: 2026-01-20 14:29:15.240 250022 INFO nova.compute.manager [None req-2a170984-9f21-4f44-8c0a-a64b5eeee6ce 91c665fb46e847d4a6de7463b12a1d1d a1959aa8761e48ae9c708bd25a18f2fe - - default default] [instance: cc200085-11d4-4fbb-afb9-fa6186779430] Terminating instance
Jan 20 14:29:15 compute-0 nova_compute[250018]: 2026-01-20 14:29:15.242 250022 DEBUG oslo_concurrency.lockutils [None req-2a170984-9f21-4f44-8c0a-a64b5eeee6ce 91c665fb46e847d4a6de7463b12a1d1d a1959aa8761e48ae9c708bd25a18f2fe - - default default] Acquiring lock "refresh_cache-cc200085-11d4-4fbb-afb9-fa6186779430" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:29:15 compute-0 nova_compute[250018]: 2026-01-20 14:29:15.242 250022 DEBUG oslo_concurrency.lockutils [None req-2a170984-9f21-4f44-8c0a-a64b5eeee6ce 91c665fb46e847d4a6de7463b12a1d1d a1959aa8761e48ae9c708bd25a18f2fe - - default default] Acquired lock "refresh_cache-cc200085-11d4-4fbb-afb9-fa6186779430" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:29:15 compute-0 nova_compute[250018]: 2026-01-20 14:29:15.243 250022 DEBUG nova.network.neutron [None req-2a170984-9f21-4f44-8c0a-a64b5eeee6ce 91c665fb46e847d4a6de7463b12a1d1d a1959aa8761e48ae9c708bd25a18f2fe - - default default] [instance: cc200085-11d4-4fbb-afb9-fa6186779430] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:29:15 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1153: 321 pgs: 2 active+clean+snaptrim, 7 active+clean+snaptrim_wait, 312 active+clean; 367 MiB data, 525 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 2.2 MiB/s wr, 237 op/s
Jan 20 14:29:15 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:29:15 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2974734948' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:29:15 compute-0 nova_compute[250018]: 2026-01-20 14:29:15.509 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:29:15 compute-0 nova_compute[250018]: 2026-01-20 14:29:15.513 250022 DEBUG nova.network.neutron [None req-2a170984-9f21-4f44-8c0a-a64b5eeee6ce 91c665fb46e847d4a6de7463b12a1d1d a1959aa8761e48ae9c708bd25a18f2fe - - default default] [instance: cc200085-11d4-4fbb-afb9-fa6186779430] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 14:29:15 compute-0 nova_compute[250018]: 2026-01-20 14:29:15.529 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:29:15 compute-0 nova_compute[250018]: 2026-01-20 14:29:15.613 250022 DEBUG nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] skipping disk for instance-00000019 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:29:15 compute-0 nova_compute[250018]: 2026-01-20 14:29:15.613 250022 DEBUG nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] skipping disk for instance-00000019 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:29:15 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/2974734948' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:29:15 compute-0 nova_compute[250018]: 2026-01-20 14:29:15.760 250022 WARNING nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:29:15 compute-0 nova_compute[250018]: 2026-01-20 14:29:15.761 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4572MB free_disk=20.845653533935547GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 14:29:15 compute-0 nova_compute[250018]: 2026-01-20 14:29:15.761 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:29:15 compute-0 nova_compute[250018]: 2026-01-20 14:29:15.762 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:29:15 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:29:15 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:29:15 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:29:15.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:29:15 compute-0 nova_compute[250018]: 2026-01-20 14:29:15.815 250022 DEBUG nova.network.neutron [None req-2a170984-9f21-4f44-8c0a-a64b5eeee6ce 91c665fb46e847d4a6de7463b12a1d1d a1959aa8761e48ae9c708bd25a18f2fe - - default default] [instance: cc200085-11d4-4fbb-afb9-fa6186779430] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:29:15 compute-0 nova_compute[250018]: 2026-01-20 14:29:15.838 250022 DEBUG oslo_concurrency.lockutils [None req-2a170984-9f21-4f44-8c0a-a64b5eeee6ce 91c665fb46e847d4a6de7463b12a1d1d a1959aa8761e48ae9c708bd25a18f2fe - - default default] Releasing lock "refresh_cache-cc200085-11d4-4fbb-afb9-fa6186779430" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:29:15 compute-0 nova_compute[250018]: 2026-01-20 14:29:15.839 250022 DEBUG nova.compute.manager [None req-2a170984-9f21-4f44-8c0a-a64b5eeee6ce 91c665fb46e847d4a6de7463b12a1d1d a1959aa8761e48ae9c708bd25a18f2fe - - default default] [instance: cc200085-11d4-4fbb-afb9-fa6186779430] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 14:29:15 compute-0 nova_compute[250018]: 2026-01-20 14:29:15.846 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Instance cc200085-11d4-4fbb-afb9-fa6186779430 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 14:29:15 compute-0 nova_compute[250018]: 2026-01-20 14:29:15.847 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 14:29:15 compute-0 nova_compute[250018]: 2026-01-20 14:29:15.847 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 14:29:15 compute-0 nova_compute[250018]: 2026-01-20 14:29:15.896 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:29:15 compute-0 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d00000019.scope: Deactivated successfully.
Jan 20 14:29:15 compute-0 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d00000019.scope: Consumed 3.375s CPU time.
Jan 20 14:29:15 compute-0 systemd-machined[216401]: Machine qemu-12-instance-00000019 terminated.
Jan 20 14:29:16 compute-0 nova_compute[250018]: 2026-01-20 14:29:16.060 250022 INFO nova.virt.libvirt.driver [-] [instance: cc200085-11d4-4fbb-afb9-fa6186779430] Instance destroyed successfully.
Jan 20 14:29:16 compute-0 nova_compute[250018]: 2026-01-20 14:29:16.061 250022 DEBUG nova.objects.instance [None req-2a170984-9f21-4f44-8c0a-a64b5eeee6ce 91c665fb46e847d4a6de7463b12a1d1d a1959aa8761e48ae9c708bd25a18f2fe - - default default] Lazy-loading 'resources' on Instance uuid cc200085-11d4-4fbb-afb9-fa6186779430 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:29:16 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:29:16 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:29:16 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3273831407' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:29:16 compute-0 nova_compute[250018]: 2026-01-20 14:29:16.370 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:29:16 compute-0 nova_compute[250018]: 2026-01-20 14:29:16.376 250022 DEBUG nova.compute.provider_tree [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:29:16 compute-0 nova_compute[250018]: 2026-01-20 14:29:16.411 250022 DEBUG nova.scheduler.client.report [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:29:16 compute-0 nova_compute[250018]: 2026-01-20 14:29:16.434 250022 INFO nova.virt.libvirt.driver [None req-2a170984-9f21-4f44-8c0a-a64b5eeee6ce 91c665fb46e847d4a6de7463b12a1d1d a1959aa8761e48ae9c708bd25a18f2fe - - default default] [instance: cc200085-11d4-4fbb-afb9-fa6186779430] Deleting instance files /var/lib/nova/instances/cc200085-11d4-4fbb-afb9-fa6186779430_del
Jan 20 14:29:16 compute-0 nova_compute[250018]: 2026-01-20 14:29:16.435 250022 INFO nova.virt.libvirt.driver [None req-2a170984-9f21-4f44-8c0a-a64b5eeee6ce 91c665fb46e847d4a6de7463b12a1d1d a1959aa8761e48ae9c708bd25a18f2fe - - default default] [instance: cc200085-11d4-4fbb-afb9-fa6186779430] Deletion of /var/lib/nova/instances/cc200085-11d4-4fbb-afb9-fa6186779430_del complete
Jan 20 14:29:16 compute-0 nova_compute[250018]: 2026-01-20 14:29:16.448 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 14:29:16 compute-0 nova_compute[250018]: 2026-01-20 14:29:16.448 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.686s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:29:16 compute-0 nova_compute[250018]: 2026-01-20 14:29:16.487 250022 INFO nova.compute.manager [None req-2a170984-9f21-4f44-8c0a-a64b5eeee6ce 91c665fb46e847d4a6de7463b12a1d1d a1959aa8761e48ae9c708bd25a18f2fe - - default default] [instance: cc200085-11d4-4fbb-afb9-fa6186779430] Took 0.65 seconds to destroy the instance on the hypervisor.
Jan 20 14:29:16 compute-0 nova_compute[250018]: 2026-01-20 14:29:16.487 250022 DEBUG oslo.service.loopingcall [None req-2a170984-9f21-4f44-8c0a-a64b5eeee6ce 91c665fb46e847d4a6de7463b12a1d1d a1959aa8761e48ae9c708bd25a18f2fe - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 14:29:16 compute-0 nova_compute[250018]: 2026-01-20 14:29:16.487 250022 DEBUG nova.compute.manager [-] [instance: cc200085-11d4-4fbb-afb9-fa6186779430] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 14:29:16 compute-0 nova_compute[250018]: 2026-01-20 14:29:16.488 250022 DEBUG nova.network.neutron [-] [instance: cc200085-11d4-4fbb-afb9-fa6186779430] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 14:29:16 compute-0 nova_compute[250018]: 2026-01-20 14:29:16.644 250022 DEBUG nova.network.neutron [-] [instance: cc200085-11d4-4fbb-afb9-fa6186779430] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 14:29:16 compute-0 nova_compute[250018]: 2026-01-20 14:29:16.663 250022 DEBUG nova.network.neutron [-] [instance: cc200085-11d4-4fbb-afb9-fa6186779430] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:29:16 compute-0 nova_compute[250018]: 2026-01-20 14:29:16.682 250022 INFO nova.compute.manager [-] [instance: cc200085-11d4-4fbb-afb9-fa6186779430] Took 0.19 seconds to deallocate network for instance.
Jan 20 14:29:16 compute-0 ceph-mon[74360]: pgmap v1153: 321 pgs: 2 active+clean+snaptrim, 7 active+clean+snaptrim_wait, 312 active+clean; 367 MiB data, 525 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 2.2 MiB/s wr, 237 op/s
Jan 20 14:29:16 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3273831407' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:29:16 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:29:16 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:29:16 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:29:16.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:29:16 compute-0 nova_compute[250018]: 2026-01-20 14:29:16.745 250022 DEBUG oslo_concurrency.lockutils [None req-2a170984-9f21-4f44-8c0a-a64b5eeee6ce 91c665fb46e847d4a6de7463b12a1d1d a1959aa8761e48ae9c708bd25a18f2fe - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:29:16 compute-0 nova_compute[250018]: 2026-01-20 14:29:16.746 250022 DEBUG oslo_concurrency.lockutils [None req-2a170984-9f21-4f44-8c0a-a64b5eeee6ce 91c665fb46e847d4a6de7463b12a1d1d a1959aa8761e48ae9c708bd25a18f2fe - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:29:16 compute-0 nova_compute[250018]: 2026-01-20 14:29:16.781 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:29:16 compute-0 nova_compute[250018]: 2026-01-20 14:29:16.789 250022 DEBUG oslo_concurrency.processutils [None req-2a170984-9f21-4f44-8c0a-a64b5eeee6ce 91c665fb46e847d4a6de7463b12a1d1d a1959aa8761e48ae9c708bd25a18f2fe - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:29:17 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:29:17 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2254805158' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:29:17 compute-0 nova_compute[250018]: 2026-01-20 14:29:17.212 250022 DEBUG oslo_concurrency.processutils [None req-2a170984-9f21-4f44-8c0a-a64b5eeee6ce 91c665fb46e847d4a6de7463b12a1d1d a1959aa8761e48ae9c708bd25a18f2fe - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:29:17 compute-0 nova_compute[250018]: 2026-01-20 14:29:17.219 250022 DEBUG nova.compute.provider_tree [None req-2a170984-9f21-4f44-8c0a-a64b5eeee6ce 91c665fb46e847d4a6de7463b12a1d1d a1959aa8761e48ae9c708bd25a18f2fe - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:29:17 compute-0 nova_compute[250018]: 2026-01-20 14:29:17.235 250022 DEBUG nova.scheduler.client.report [None req-2a170984-9f21-4f44-8c0a-a64b5eeee6ce 91c665fb46e847d4a6de7463b12a1d1d a1959aa8761e48ae9c708bd25a18f2fe - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:29:17 compute-0 nova_compute[250018]: 2026-01-20 14:29:17.267 250022 DEBUG oslo_concurrency.lockutils [None req-2a170984-9f21-4f44-8c0a-a64b5eeee6ce 91c665fb46e847d4a6de7463b12a1d1d a1959aa8761e48ae9c708bd25a18f2fe - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.521s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:29:17 compute-0 nova_compute[250018]: 2026-01-20 14:29:17.307 250022 INFO nova.scheduler.client.report [None req-2a170984-9f21-4f44-8c0a-a64b5eeee6ce 91c665fb46e847d4a6de7463b12a1d1d a1959aa8761e48ae9c708bd25a18f2fe - - default default] Deleted allocations for instance cc200085-11d4-4fbb-afb9-fa6186779430
Jan 20 14:29:17 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1154: 321 pgs: 321 active+clean; 311 MiB data, 493 MiB used, 21 GiB / 21 GiB avail; 4.2 MiB/s rd, 2.2 MiB/s wr, 284 op/s
Jan 20 14:29:17 compute-0 nova_compute[250018]: 2026-01-20 14:29:17.376 250022 DEBUG oslo_concurrency.lockutils [None req-2a170984-9f21-4f44-8c0a-a64b5eeee6ce 91c665fb46e847d4a6de7463b12a1d1d a1959aa8761e48ae9c708bd25a18f2fe - - default default] Lock "cc200085-11d4-4fbb-afb9-fa6186779430" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.140s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:29:17 compute-0 nova_compute[250018]: 2026-01-20 14:29:17.442 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:29:17 compute-0 nova_compute[250018]: 2026-01-20 14:29:17.443 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:29:17 compute-0 nova_compute[250018]: 2026-01-20 14:29:17.443 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:29:17 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/2254805158' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:29:17 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:29:17 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:29:17 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:29:17.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:29:17 compute-0 nova_compute[250018]: 2026-01-20 14:29:17.804 250022 DEBUG oslo_concurrency.lockutils [None req-45d14fa6-8879-40cf-b833-137ecce45c8e 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Acquiring lock "ad62888a-ef27-43b4-bb6c-439541ff5524" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:29:17 compute-0 nova_compute[250018]: 2026-01-20 14:29:17.804 250022 DEBUG oslo_concurrency.lockutils [None req-45d14fa6-8879-40cf-b833-137ecce45c8e 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Lock "ad62888a-ef27-43b4-bb6c-439541ff5524" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:29:17 compute-0 nova_compute[250018]: 2026-01-20 14:29:17.834 250022 DEBUG nova.compute.manager [None req-45d14fa6-8879-40cf-b833-137ecce45c8e 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] [instance: ad62888a-ef27-43b4-bb6c-439541ff5524] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 14:29:17 compute-0 nova_compute[250018]: 2026-01-20 14:29:17.935 250022 DEBUG oslo_concurrency.lockutils [None req-45d14fa6-8879-40cf-b833-137ecce45c8e 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:29:17 compute-0 nova_compute[250018]: 2026-01-20 14:29:17.935 250022 DEBUG oslo_concurrency.lockutils [None req-45d14fa6-8879-40cf-b833-137ecce45c8e 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:29:17 compute-0 nova_compute[250018]: 2026-01-20 14:29:17.947 250022 DEBUG nova.virt.hardware [None req-45d14fa6-8879-40cf-b833-137ecce45c8e 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 14:29:17 compute-0 nova_compute[250018]: 2026-01-20 14:29:17.947 250022 INFO nova.compute.claims [None req-45d14fa6-8879-40cf-b833-137ecce45c8e 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] [instance: ad62888a-ef27-43b4-bb6c-439541ff5524] Claim successful on node compute-0.ctlplane.example.com
Jan 20 14:29:18 compute-0 nova_compute[250018]: 2026-01-20 14:29:18.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:29:18 compute-0 nova_compute[250018]: 2026-01-20 14:29:18.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:29:18 compute-0 nova_compute[250018]: 2026-01-20 14:29:18.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:29:18 compute-0 nova_compute[250018]: 2026-01-20 14:29:18.052 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 14:29:18 compute-0 nova_compute[250018]: 2026-01-20 14:29:18.085 250022 DEBUG oslo_concurrency.processutils [None req-45d14fa6-8879-40cf-b833-137ecce45c8e 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:29:18 compute-0 nova_compute[250018]: 2026-01-20 14:29:18.291 250022 DEBUG oslo_concurrency.lockutils [None req-9d86e5bb-be59-4801-b37f-63a469b602e1 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Acquiring lock "9f5c9253-e2bd-42d3-8253-fac568daeda7" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:29:18 compute-0 nova_compute[250018]: 2026-01-20 14:29:18.292 250022 DEBUG oslo_concurrency.lockutils [None req-9d86e5bb-be59-4801-b37f-63a469b602e1 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Lock "9f5c9253-e2bd-42d3-8253-fac568daeda7" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:29:18 compute-0 nova_compute[250018]: 2026-01-20 14:29:18.308 250022 DEBUG nova.compute.manager [None req-9d86e5bb-be59-4801-b37f-63a469b602e1 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: 9f5c9253-e2bd-42d3-8253-fac568daeda7] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 14:29:18 compute-0 nova_compute[250018]: 2026-01-20 14:29:18.374 250022 DEBUG oslo_concurrency.lockutils [None req-9d86e5bb-be59-4801-b37f-63a469b602e1 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:29:18 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:29:18 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1152397161' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:29:18 compute-0 nova_compute[250018]: 2026-01-20 14:29:18.569 250022 DEBUG oslo_concurrency.processutils [None req-45d14fa6-8879-40cf-b833-137ecce45c8e 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:29:18 compute-0 nova_compute[250018]: 2026-01-20 14:29:18.580 250022 DEBUG nova.compute.provider_tree [None req-45d14fa6-8879-40cf-b833-137ecce45c8e 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:29:18 compute-0 nova_compute[250018]: 2026-01-20 14:29:18.605 250022 DEBUG nova.scheduler.client.report [None req-45d14fa6-8879-40cf-b833-137ecce45c8e 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:29:18 compute-0 nova_compute[250018]: 2026-01-20 14:29:18.646 250022 DEBUG oslo_concurrency.lockutils [None req-45d14fa6-8879-40cf-b833-137ecce45c8e 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.711s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:29:18 compute-0 nova_compute[250018]: 2026-01-20 14:29:18.647 250022 DEBUG nova.compute.manager [None req-45d14fa6-8879-40cf-b833-137ecce45c8e 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] [instance: ad62888a-ef27-43b4-bb6c-439541ff5524] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 14:29:18 compute-0 nova_compute[250018]: 2026-01-20 14:29:18.648 250022 DEBUG oslo_concurrency.lockutils [None req-9d86e5bb-be59-4801-b37f-63a469b602e1 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.275s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:29:18 compute-0 nova_compute[250018]: 2026-01-20 14:29:18.663 250022 DEBUG nova.virt.hardware [None req-9d86e5bb-be59-4801-b37f-63a469b602e1 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 14:29:18 compute-0 nova_compute[250018]: 2026-01-20 14:29:18.664 250022 INFO nova.compute.claims [None req-9d86e5bb-be59-4801-b37f-63a469b602e1 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: 9f5c9253-e2bd-42d3-8253-fac568daeda7] Claim successful on node compute-0.ctlplane.example.com
Jan 20 14:29:18 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:29:18 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:29:18 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:29:18.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:29:18 compute-0 ceph-mon[74360]: pgmap v1154: 321 pgs: 321 active+clean; 311 MiB data, 493 MiB used, 21 GiB / 21 GiB avail; 4.2 MiB/s rd, 2.2 MiB/s wr, 284 op/s
Jan 20 14:29:18 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/771574584' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:29:18 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/3599865667' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:29:18 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/1152397161' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:29:18 compute-0 nova_compute[250018]: 2026-01-20 14:29:18.802 250022 DEBUG nova.compute.manager [None req-45d14fa6-8879-40cf-b833-137ecce45c8e 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] [instance: ad62888a-ef27-43b4-bb6c-439541ff5524] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 20 14:29:18 compute-0 nova_compute[250018]: 2026-01-20 14:29:18.803 250022 DEBUG nova.network.neutron [None req-45d14fa6-8879-40cf-b833-137ecce45c8e 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] [instance: ad62888a-ef27-43b4-bb6c-439541ff5524] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 14:29:18 compute-0 nova_compute[250018]: 2026-01-20 14:29:18.966 250022 INFO nova.virt.libvirt.driver [None req-45d14fa6-8879-40cf-b833-137ecce45c8e 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] [instance: ad62888a-ef27-43b4-bb6c-439541ff5524] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 14:29:18 compute-0 nova_compute[250018]: 2026-01-20 14:29:18.994 250022 DEBUG nova.compute.manager [None req-45d14fa6-8879-40cf-b833-137ecce45c8e 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] [instance: ad62888a-ef27-43b4-bb6c-439541ff5524] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 14:29:19 compute-0 nova_compute[250018]: 2026-01-20 14:29:19.102 250022 DEBUG oslo_concurrency.processutils [None req-9d86e5bb-be59-4801-b37f-63a469b602e1 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:29:19 compute-0 nova_compute[250018]: 2026-01-20 14:29:19.144 250022 DEBUG nova.compute.manager [None req-45d14fa6-8879-40cf-b833-137ecce45c8e 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] [instance: ad62888a-ef27-43b4-bb6c-439541ff5524] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 14:29:19 compute-0 nova_compute[250018]: 2026-01-20 14:29:19.146 250022 DEBUG nova.virt.libvirt.driver [None req-45d14fa6-8879-40cf-b833-137ecce45c8e 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] [instance: ad62888a-ef27-43b4-bb6c-439541ff5524] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 14:29:19 compute-0 nova_compute[250018]: 2026-01-20 14:29:19.146 250022 INFO nova.virt.libvirt.driver [None req-45d14fa6-8879-40cf-b833-137ecce45c8e 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] [instance: ad62888a-ef27-43b4-bb6c-439541ff5524] Creating image(s)
Jan 20 14:29:19 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1155: 321 pgs: 321 active+clean; 287 MiB data, 475 MiB used, 21 GiB / 21 GiB avail; 4.1 MiB/s rd, 2.2 MiB/s wr, 303 op/s
Jan 20 14:29:19 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:29:19 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1597519327' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:29:19 compute-0 nova_compute[250018]: 2026-01-20 14:29:19.543 250022 DEBUG nova.storage.rbd_utils [None req-45d14fa6-8879-40cf-b833-137ecce45c8e 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] rbd image ad62888a-ef27-43b4-bb6c-439541ff5524_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:29:19 compute-0 nova_compute[250018]: 2026-01-20 14:29:19.573 250022 DEBUG nova.storage.rbd_utils [None req-45d14fa6-8879-40cf-b833-137ecce45c8e 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] rbd image ad62888a-ef27-43b4-bb6c-439541ff5524_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:29:19 compute-0 nova_compute[250018]: 2026-01-20 14:29:19.603 250022 DEBUG nova.storage.rbd_utils [None req-45d14fa6-8879-40cf-b833-137ecce45c8e 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] rbd image ad62888a-ef27-43b4-bb6c-439541ff5524_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:29:19 compute-0 sudo[270676]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:29:19 compute-0 nova_compute[250018]: 2026-01-20 14:29:19.610 250022 DEBUG oslo_concurrency.processutils [None req-45d14fa6-8879-40cf-b833-137ecce45c8e 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:29:19 compute-0 sudo[270676]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:29:19 compute-0 sudo[270676]: pam_unix(sudo:session): session closed for user root
Jan 20 14:29:19 compute-0 nova_compute[250018]: 2026-01-20 14:29:19.634 250022 DEBUG oslo_concurrency.processutils [None req-9d86e5bb-be59-4801-b37f-63a469b602e1 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:29:19 compute-0 nova_compute[250018]: 2026-01-20 14:29:19.640 250022 DEBUG nova.compute.provider_tree [None req-9d86e5bb-be59-4801-b37f-63a469b602e1 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:29:19 compute-0 nova_compute[250018]: 2026-01-20 14:29:19.658 250022 DEBUG nova.scheduler.client.report [None req-9d86e5bb-be59-4801-b37f-63a469b602e1 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:29:19 compute-0 nova_compute[250018]: 2026-01-20 14:29:19.672 250022 DEBUG oslo_concurrency.processutils [None req-45d14fa6-8879-40cf-b833-137ecce45c8e 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:29:19 compute-0 nova_compute[250018]: 2026-01-20 14:29:19.673 250022 DEBUG oslo_concurrency.lockutils [None req-45d14fa6-8879-40cf-b833-137ecce45c8e 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Acquiring lock "82d5c1918fd7c974214c7a48c1793a7a82560462" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:29:19 compute-0 nova_compute[250018]: 2026-01-20 14:29:19.674 250022 DEBUG oslo_concurrency.lockutils [None req-45d14fa6-8879-40cf-b833-137ecce45c8e 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:29:19 compute-0 nova_compute[250018]: 2026-01-20 14:29:19.674 250022 DEBUG oslo_concurrency.lockutils [None req-45d14fa6-8879-40cf-b833-137ecce45c8e 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:29:19 compute-0 sudo[270738]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:29:19 compute-0 sudo[270738]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:29:19 compute-0 sudo[270738]: pam_unix(sudo:session): session closed for user root
Jan 20 14:29:19 compute-0 nova_compute[250018]: 2026-01-20 14:29:19.699 250022 DEBUG nova.storage.rbd_utils [None req-45d14fa6-8879-40cf-b833-137ecce45c8e 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] rbd image ad62888a-ef27-43b4-bb6c-439541ff5524_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:29:19 compute-0 nova_compute[250018]: 2026-01-20 14:29:19.703 250022 DEBUG oslo_concurrency.processutils [None req-45d14fa6-8879-40cf-b833-137ecce45c8e 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 ad62888a-ef27-43b4-bb6c-439541ff5524_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:29:19 compute-0 nova_compute[250018]: 2026-01-20 14:29:19.726 250022 DEBUG oslo_concurrency.lockutils [None req-9d86e5bb-be59-4801-b37f-63a469b602e1 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.078s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:29:19 compute-0 nova_compute[250018]: 2026-01-20 14:29:19.727 250022 DEBUG nova.compute.manager [None req-9d86e5bb-be59-4801-b37f-63a469b602e1 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: 9f5c9253-e2bd-42d3-8253-fac568daeda7] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 14:29:19 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/1452425071' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:29:19 compute-0 ceph-mon[74360]: pgmap v1155: 321 pgs: 321 active+clean; 287 MiB data, 475 MiB used, 21 GiB / 21 GiB avail; 4.1 MiB/s rd, 2.2 MiB/s wr, 303 op/s
Jan 20 14:29:19 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/1597519327' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:29:19 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:29:19 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:29:19 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:29:19.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:29:19 compute-0 nova_compute[250018]: 2026-01-20 14:29:19.815 250022 DEBUG nova.compute.manager [None req-9d86e5bb-be59-4801-b37f-63a469b602e1 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: 9f5c9253-e2bd-42d3-8253-fac568daeda7] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 20 14:29:19 compute-0 nova_compute[250018]: 2026-01-20 14:29:19.816 250022 DEBUG nova.network.neutron [None req-9d86e5bb-be59-4801-b37f-63a469b602e1 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: 9f5c9253-e2bd-42d3-8253-fac568daeda7] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 14:29:19 compute-0 nova_compute[250018]: 2026-01-20 14:29:19.885 250022 INFO nova.virt.libvirt.driver [None req-9d86e5bb-be59-4801-b37f-63a469b602e1 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: 9f5c9253-e2bd-42d3-8253-fac568daeda7] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 14:29:19 compute-0 nova_compute[250018]: 2026-01-20 14:29:19.911 250022 DEBUG nova.compute.manager [None req-9d86e5bb-be59-4801-b37f-63a469b602e1 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: 9f5c9253-e2bd-42d3-8253-fac568daeda7] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 14:29:19 compute-0 nova_compute[250018]: 2026-01-20 14:29:19.987 250022 DEBUG oslo_concurrency.processutils [None req-45d14fa6-8879-40cf-b833-137ecce45c8e 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 ad62888a-ef27-43b4-bb6c-439541ff5524_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.284s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:29:20 compute-0 nova_compute[250018]: 2026-01-20 14:29:20.029 250022 DEBUG nova.policy [None req-45d14fa6-8879-40cf-b833-137ecce45c8e 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '6a3fbc3f92a849e88cbf34d28ca17e43', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0cee74dd60da4a839bb5eb0ba3137edf', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 20 14:29:20 compute-0 nova_compute[250018]: 2026-01-20 14:29:20.077 250022 DEBUG nova.storage.rbd_utils [None req-45d14fa6-8879-40cf-b833-137ecce45c8e 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] resizing rbd image ad62888a-ef27-43b4-bb6c-439541ff5524_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 20 14:29:20 compute-0 nova_compute[250018]: 2026-01-20 14:29:20.132 250022 DEBUG nova.compute.manager [None req-9d86e5bb-be59-4801-b37f-63a469b602e1 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: 9f5c9253-e2bd-42d3-8253-fac568daeda7] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 14:29:20 compute-0 nova_compute[250018]: 2026-01-20 14:29:20.133 250022 DEBUG nova.virt.libvirt.driver [None req-9d86e5bb-be59-4801-b37f-63a469b602e1 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: 9f5c9253-e2bd-42d3-8253-fac568daeda7] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 14:29:20 compute-0 nova_compute[250018]: 2026-01-20 14:29:20.134 250022 INFO nova.virt.libvirt.driver [None req-9d86e5bb-be59-4801-b37f-63a469b602e1 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: 9f5c9253-e2bd-42d3-8253-fac568daeda7] Creating image(s)
Jan 20 14:29:20 compute-0 nova_compute[250018]: 2026-01-20 14:29:20.163 250022 DEBUG nova.storage.rbd_utils [None req-9d86e5bb-be59-4801-b37f-63a469b602e1 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] rbd image 9f5c9253-e2bd-42d3-8253-fac568daeda7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:29:20 compute-0 nova_compute[250018]: 2026-01-20 14:29:20.195 250022 DEBUG nova.storage.rbd_utils [None req-9d86e5bb-be59-4801-b37f-63a469b602e1 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] rbd image 9f5c9253-e2bd-42d3-8253-fac568daeda7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:29:20 compute-0 nova_compute[250018]: 2026-01-20 14:29:20.223 250022 DEBUG nova.storage.rbd_utils [None req-9d86e5bb-be59-4801-b37f-63a469b602e1 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] rbd image 9f5c9253-e2bd-42d3-8253-fac568daeda7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:29:20 compute-0 nova_compute[250018]: 2026-01-20 14:29:20.227 250022 DEBUG oslo_concurrency.processutils [None req-9d86e5bb-be59-4801-b37f-63a469b602e1 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:29:20 compute-0 nova_compute[250018]: 2026-01-20 14:29:20.306 250022 DEBUG nova.objects.instance [None req-45d14fa6-8879-40cf-b833-137ecce45c8e 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Lazy-loading 'migration_context' on Instance uuid ad62888a-ef27-43b4-bb6c-439541ff5524 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:29:20 compute-0 nova_compute[250018]: 2026-01-20 14:29:20.308 250022 DEBUG oslo_concurrency.processutils [None req-9d86e5bb-be59-4801-b37f-63a469b602e1 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:29:20 compute-0 nova_compute[250018]: 2026-01-20 14:29:20.309 250022 DEBUG oslo_concurrency.lockutils [None req-9d86e5bb-be59-4801-b37f-63a469b602e1 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Acquiring lock "82d5c1918fd7c974214c7a48c1793a7a82560462" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:29:20 compute-0 nova_compute[250018]: 2026-01-20 14:29:20.310 250022 DEBUG oslo_concurrency.lockutils [None req-9d86e5bb-be59-4801-b37f-63a469b602e1 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:29:20 compute-0 nova_compute[250018]: 2026-01-20 14:29:20.310 250022 DEBUG oslo_concurrency.lockutils [None req-9d86e5bb-be59-4801-b37f-63a469b602e1 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:29:20 compute-0 nova_compute[250018]: 2026-01-20 14:29:20.334 250022 DEBUG nova.storage.rbd_utils [None req-9d86e5bb-be59-4801-b37f-63a469b602e1 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] rbd image 9f5c9253-e2bd-42d3-8253-fac568daeda7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:29:20 compute-0 nova_compute[250018]: 2026-01-20 14:29:20.338 250022 DEBUG oslo_concurrency.processutils [None req-9d86e5bb-be59-4801-b37f-63a469b602e1 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 9f5c9253-e2bd-42d3-8253-fac568daeda7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:29:20 compute-0 nova_compute[250018]: 2026-01-20 14:29:20.362 250022 DEBUG nova.virt.libvirt.driver [None req-45d14fa6-8879-40cf-b833-137ecce45c8e 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] [instance: ad62888a-ef27-43b4-bb6c-439541ff5524] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 14:29:20 compute-0 nova_compute[250018]: 2026-01-20 14:29:20.363 250022 DEBUG nova.virt.libvirt.driver [None req-45d14fa6-8879-40cf-b833-137ecce45c8e 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] [instance: ad62888a-ef27-43b4-bb6c-439541ff5524] Ensure instance console log exists: /var/lib/nova/instances/ad62888a-ef27-43b4-bb6c-439541ff5524/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 14:29:20 compute-0 nova_compute[250018]: 2026-01-20 14:29:20.363 250022 DEBUG oslo_concurrency.lockutils [None req-45d14fa6-8879-40cf-b833-137ecce45c8e 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:29:20 compute-0 nova_compute[250018]: 2026-01-20 14:29:20.364 250022 DEBUG oslo_concurrency.lockutils [None req-45d14fa6-8879-40cf-b833-137ecce45c8e 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:29:20 compute-0 nova_compute[250018]: 2026-01-20 14:29:20.364 250022 DEBUG oslo_concurrency.lockutils [None req-45d14fa6-8879-40cf-b833-137ecce45c8e 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:29:20 compute-0 nova_compute[250018]: 2026-01-20 14:29:20.512 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:29:20 compute-0 nova_compute[250018]: 2026-01-20 14:29:20.627 250022 DEBUG oslo_concurrency.processutils [None req-9d86e5bb-be59-4801-b37f-63a469b602e1 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 9f5c9253-e2bd-42d3-8253-fac568daeda7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.289s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:29:20 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:29:20 compute-0 nova_compute[250018]: 2026-01-20 14:29:20.727 250022 DEBUG nova.storage.rbd_utils [None req-9d86e5bb-be59-4801-b37f-63a469b602e1 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] resizing rbd image 9f5c9253-e2bd-42d3-8253-fac568daeda7_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 20 14:29:20 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:29:20 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:29:20.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:29:20 compute-0 nova_compute[250018]: 2026-01-20 14:29:20.858 250022 DEBUG nova.objects.instance [None req-9d86e5bb-be59-4801-b37f-63a469b602e1 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Lazy-loading 'migration_context' on Instance uuid 9f5c9253-e2bd-42d3-8253-fac568daeda7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:29:20 compute-0 nova_compute[250018]: 2026-01-20 14:29:20.886 250022 DEBUG nova.virt.libvirt.driver [None req-9d86e5bb-be59-4801-b37f-63a469b602e1 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: 9f5c9253-e2bd-42d3-8253-fac568daeda7] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 14:29:20 compute-0 nova_compute[250018]: 2026-01-20 14:29:20.886 250022 DEBUG nova.virt.libvirt.driver [None req-9d86e5bb-be59-4801-b37f-63a469b602e1 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: 9f5c9253-e2bd-42d3-8253-fac568daeda7] Ensure instance console log exists: /var/lib/nova/instances/9f5c9253-e2bd-42d3-8253-fac568daeda7/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 14:29:20 compute-0 nova_compute[250018]: 2026-01-20 14:29:20.887 250022 DEBUG oslo_concurrency.lockutils [None req-9d86e5bb-be59-4801-b37f-63a469b602e1 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:29:20 compute-0 nova_compute[250018]: 2026-01-20 14:29:20.887 250022 DEBUG oslo_concurrency.lockutils [None req-9d86e5bb-be59-4801-b37f-63a469b602e1 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:29:20 compute-0 nova_compute[250018]: 2026-01-20 14:29:20.887 250022 DEBUG oslo_concurrency.lockutils [None req-9d86e5bb-be59-4801-b37f-63a469b602e1 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:29:21 compute-0 nova_compute[250018]: 2026-01-20 14:29:21.056 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:29:21 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:29:21 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e158 do_prune osdmap full prune enabled
Jan 20 14:29:21 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e159 e159: 3 total, 3 up, 3 in
Jan 20 14:29:21 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e159: 3 total, 3 up, 3 in
Jan 20 14:29:21 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1157: 321 pgs: 321 active+clean; 241 MiB data, 442 MiB used, 21 GiB / 21 GiB avail; 2.8 MiB/s rd, 3.4 MiB/s wr, 257 op/s
Jan 20 14:29:21 compute-0 nova_compute[250018]: 2026-01-20 14:29:21.412 250022 DEBUG nova.network.neutron [None req-9d86e5bb-be59-4801-b37f-63a469b602e1 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: 9f5c9253-e2bd-42d3-8253-fac568daeda7] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Jan 20 14:29:21 compute-0 nova_compute[250018]: 2026-01-20 14:29:21.413 250022 DEBUG nova.compute.manager [None req-9d86e5bb-be59-4801-b37f-63a469b602e1 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: 9f5c9253-e2bd-42d3-8253-fac568daeda7] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 20 14:29:21 compute-0 nova_compute[250018]: 2026-01-20 14:29:21.416 250022 DEBUG nova.virt.libvirt.driver [None req-9d86e5bb-be59-4801-b37f-63a469b602e1 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: 9f5c9253-e2bd-42d3-8253-fac568daeda7] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encrypted': False, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'guest_format': None, 'size': 0, 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 14:29:21 compute-0 nova_compute[250018]: 2026-01-20 14:29:21.420 250022 WARNING nova.virt.libvirt.driver [None req-9d86e5bb-be59-4801-b37f-63a469b602e1 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:29:21 compute-0 nova_compute[250018]: 2026-01-20 14:29:21.427 250022 DEBUG nova.virt.libvirt.host [None req-9d86e5bb-be59-4801-b37f-63a469b602e1 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 14:29:21 compute-0 nova_compute[250018]: 2026-01-20 14:29:21.428 250022 DEBUG nova.virt.libvirt.host [None req-9d86e5bb-be59-4801-b37f-63a469b602e1 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 14:29:21 compute-0 nova_compute[250018]: 2026-01-20 14:29:21.431 250022 DEBUG nova.virt.libvirt.host [None req-9d86e5bb-be59-4801-b37f-63a469b602e1 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 14:29:21 compute-0 nova_compute[250018]: 2026-01-20 14:29:21.431 250022 DEBUG nova.virt.libvirt.host [None req-9d86e5bb-be59-4801-b37f-63a469b602e1 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 14:29:21 compute-0 nova_compute[250018]: 2026-01-20 14:29:21.432 250022 DEBUG nova.virt.libvirt.driver [None req-9d86e5bb-be59-4801-b37f-63a469b602e1 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 14:29:21 compute-0 nova_compute[250018]: 2026-01-20 14:29:21.432 250022 DEBUG nova.virt.hardware [None req-9d86e5bb-be59-4801-b37f-63a469b602e1 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:29:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='313bc8e0-fa4e-45e2-a827-fd3bf5c8eed4',id=28,is_public=True,memory_mb=128,name='tempest-test_resize_flavor_-1217480715',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 14:29:21 compute-0 nova_compute[250018]: 2026-01-20 14:29:21.433 250022 DEBUG nova.virt.hardware [None req-9d86e5bb-be59-4801-b37f-63a469b602e1 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 14:29:21 compute-0 nova_compute[250018]: 2026-01-20 14:29:21.433 250022 DEBUG nova.virt.hardware [None req-9d86e5bb-be59-4801-b37f-63a469b602e1 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 14:29:21 compute-0 nova_compute[250018]: 2026-01-20 14:29:21.433 250022 DEBUG nova.virt.hardware [None req-9d86e5bb-be59-4801-b37f-63a469b602e1 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 14:29:21 compute-0 nova_compute[250018]: 2026-01-20 14:29:21.433 250022 DEBUG nova.virt.hardware [None req-9d86e5bb-be59-4801-b37f-63a469b602e1 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 14:29:21 compute-0 nova_compute[250018]: 2026-01-20 14:29:21.433 250022 DEBUG nova.virt.hardware [None req-9d86e5bb-be59-4801-b37f-63a469b602e1 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 14:29:21 compute-0 nova_compute[250018]: 2026-01-20 14:29:21.434 250022 DEBUG nova.virt.hardware [None req-9d86e5bb-be59-4801-b37f-63a469b602e1 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 14:29:21 compute-0 nova_compute[250018]: 2026-01-20 14:29:21.434 250022 DEBUG nova.virt.hardware [None req-9d86e5bb-be59-4801-b37f-63a469b602e1 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 14:29:21 compute-0 nova_compute[250018]: 2026-01-20 14:29:21.434 250022 DEBUG nova.virt.hardware [None req-9d86e5bb-be59-4801-b37f-63a469b602e1 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 14:29:21 compute-0 nova_compute[250018]: 2026-01-20 14:29:21.434 250022 DEBUG nova.virt.hardware [None req-9d86e5bb-be59-4801-b37f-63a469b602e1 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 14:29:21 compute-0 nova_compute[250018]: 2026-01-20 14:29:21.434 250022 DEBUG nova.virt.hardware [None req-9d86e5bb-be59-4801-b37f-63a469b602e1 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 14:29:21 compute-0 nova_compute[250018]: 2026-01-20 14:29:21.436 250022 DEBUG oslo_concurrency.processutils [None req-9d86e5bb-be59-4801-b37f-63a469b602e1 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:29:21 compute-0 nova_compute[250018]: 2026-01-20 14:29:21.784 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:29:21 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:29:21 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:29:21 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:29:21.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:29:21 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:29:21 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4157179260' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:29:21 compute-0 nova_compute[250018]: 2026-01-20 14:29:21.910 250022 DEBUG oslo_concurrency.processutils [None req-9d86e5bb-be59-4801-b37f-63a469b602e1 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:29:21 compute-0 nova_compute[250018]: 2026-01-20 14:29:21.947 250022 DEBUG nova.storage.rbd_utils [None req-9d86e5bb-be59-4801-b37f-63a469b602e1 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] rbd image 9f5c9253-e2bd-42d3-8253-fac568daeda7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:29:21 compute-0 nova_compute[250018]: 2026-01-20 14:29:21.953 250022 DEBUG oslo_concurrency.processutils [None req-9d86e5bb-be59-4801-b37f-63a469b602e1 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:29:22 compute-0 nova_compute[250018]: 2026-01-20 14:29:22.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:29:22 compute-0 nova_compute[250018]: 2026-01-20 14:29:22.052 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 14:29:22 compute-0 nova_compute[250018]: 2026-01-20 14:29:22.052 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 14:29:22 compute-0 ceph-mon[74360]: osdmap e159: 3 total, 3 up, 3 in
Jan 20 14:29:22 compute-0 ceph-mon[74360]: pgmap v1157: 321 pgs: 321 active+clean; 241 MiB data, 442 MiB used, 21 GiB / 21 GiB avail; 2.8 MiB/s rd, 3.4 MiB/s wr, 257 op/s
Jan 20 14:29:22 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/4157179260' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:29:22 compute-0 nova_compute[250018]: 2026-01-20 14:29:22.243 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] [instance: ad62888a-ef27-43b4-bb6c-439541ff5524] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 20 14:29:22 compute-0 nova_compute[250018]: 2026-01-20 14:29:22.243 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] [instance: 9f5c9253-e2bd-42d3-8253-fac568daeda7] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 20 14:29:22 compute-0 nova_compute[250018]: 2026-01-20 14:29:22.244 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 20 14:29:22 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:29:22 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3564187580' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:29:22 compute-0 nova_compute[250018]: 2026-01-20 14:29:22.393 250022 DEBUG oslo_concurrency.processutils [None req-9d86e5bb-be59-4801-b37f-63a469b602e1 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:29:22 compute-0 nova_compute[250018]: 2026-01-20 14:29:22.395 250022 DEBUG nova.objects.instance [None req-9d86e5bb-be59-4801-b37f-63a469b602e1 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9f5c9253-e2bd-42d3-8253-fac568daeda7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:29:22 compute-0 nova_compute[250018]: 2026-01-20 14:29:22.449 250022 DEBUG nova.virt.libvirt.driver [None req-9d86e5bb-be59-4801-b37f-63a469b602e1 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: 9f5c9253-e2bd-42d3-8253-fac568daeda7] End _get_guest_xml xml=<domain type="kvm">
Jan 20 14:29:22 compute-0 nova_compute[250018]:   <uuid>9f5c9253-e2bd-42d3-8253-fac568daeda7</uuid>
Jan 20 14:29:22 compute-0 nova_compute[250018]:   <name>instance-0000001b</name>
Jan 20 14:29:22 compute-0 nova_compute[250018]:   <memory>131072</memory>
Jan 20 14:29:22 compute-0 nova_compute[250018]:   <vcpu>1</vcpu>
Jan 20 14:29:22 compute-0 nova_compute[250018]:   <metadata>
Jan 20 14:29:22 compute-0 nova_compute[250018]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 14:29:22 compute-0 nova_compute[250018]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 14:29:22 compute-0 nova_compute[250018]:       <nova:name>tempest-MigrationsAdminTest-server-326963183</nova:name>
Jan 20 14:29:22 compute-0 nova_compute[250018]:       <nova:creationTime>2026-01-20 14:29:21</nova:creationTime>
Jan 20 14:29:22 compute-0 nova_compute[250018]:       <nova:flavor name="tempest-test_resize_flavor_-1217480715">
Jan 20 14:29:22 compute-0 nova_compute[250018]:         <nova:memory>128</nova:memory>
Jan 20 14:29:22 compute-0 nova_compute[250018]:         <nova:disk>1</nova:disk>
Jan 20 14:29:22 compute-0 nova_compute[250018]:         <nova:swap>0</nova:swap>
Jan 20 14:29:22 compute-0 nova_compute[250018]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 14:29:22 compute-0 nova_compute[250018]:         <nova:vcpus>1</nova:vcpus>
Jan 20 14:29:22 compute-0 nova_compute[250018]:       </nova:flavor>
Jan 20 14:29:22 compute-0 nova_compute[250018]:       <nova:owner>
Jan 20 14:29:22 compute-0 nova_compute[250018]:         <nova:user uuid="01a3d712f05049b19d4ecc7051720ad5">tempest-MigrationsAdminTest-1518611738-project-member</nova:user>
Jan 20 14:29:22 compute-0 nova_compute[250018]:         <nova:project uuid="f3c2e72a7148496394c8bcd618a19c80">tempest-MigrationsAdminTest-1518611738</nova:project>
Jan 20 14:29:22 compute-0 nova_compute[250018]:       </nova:owner>
Jan 20 14:29:22 compute-0 nova_compute[250018]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 14:29:22 compute-0 nova_compute[250018]:       <nova:ports/>
Jan 20 14:29:22 compute-0 nova_compute[250018]:     </nova:instance>
Jan 20 14:29:22 compute-0 nova_compute[250018]:   </metadata>
Jan 20 14:29:22 compute-0 nova_compute[250018]:   <sysinfo type="smbios">
Jan 20 14:29:22 compute-0 nova_compute[250018]:     <system>
Jan 20 14:29:22 compute-0 nova_compute[250018]:       <entry name="manufacturer">RDO</entry>
Jan 20 14:29:22 compute-0 nova_compute[250018]:       <entry name="product">OpenStack Compute</entry>
Jan 20 14:29:22 compute-0 nova_compute[250018]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 14:29:22 compute-0 nova_compute[250018]:       <entry name="serial">9f5c9253-e2bd-42d3-8253-fac568daeda7</entry>
Jan 20 14:29:22 compute-0 nova_compute[250018]:       <entry name="uuid">9f5c9253-e2bd-42d3-8253-fac568daeda7</entry>
Jan 20 14:29:22 compute-0 nova_compute[250018]:       <entry name="family">Virtual Machine</entry>
Jan 20 14:29:22 compute-0 nova_compute[250018]:     </system>
Jan 20 14:29:22 compute-0 nova_compute[250018]:   </sysinfo>
Jan 20 14:29:22 compute-0 nova_compute[250018]:   <os>
Jan 20 14:29:22 compute-0 nova_compute[250018]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 14:29:22 compute-0 nova_compute[250018]:     <boot dev="hd"/>
Jan 20 14:29:22 compute-0 nova_compute[250018]:     <smbios mode="sysinfo"/>
Jan 20 14:29:22 compute-0 nova_compute[250018]:   </os>
Jan 20 14:29:22 compute-0 nova_compute[250018]:   <features>
Jan 20 14:29:22 compute-0 nova_compute[250018]:     <acpi/>
Jan 20 14:29:22 compute-0 nova_compute[250018]:     <apic/>
Jan 20 14:29:22 compute-0 nova_compute[250018]:     <vmcoreinfo/>
Jan 20 14:29:22 compute-0 nova_compute[250018]:   </features>
Jan 20 14:29:22 compute-0 nova_compute[250018]:   <clock offset="utc">
Jan 20 14:29:22 compute-0 nova_compute[250018]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 14:29:22 compute-0 nova_compute[250018]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 14:29:22 compute-0 nova_compute[250018]:     <timer name="hpet" present="no"/>
Jan 20 14:29:22 compute-0 nova_compute[250018]:   </clock>
Jan 20 14:29:22 compute-0 nova_compute[250018]:   <cpu mode="custom" match="exact">
Jan 20 14:29:22 compute-0 nova_compute[250018]:     <model>Nehalem</model>
Jan 20 14:29:22 compute-0 nova_compute[250018]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 14:29:22 compute-0 nova_compute[250018]:   </cpu>
Jan 20 14:29:22 compute-0 nova_compute[250018]:   <devices>
Jan 20 14:29:22 compute-0 nova_compute[250018]:     <disk type="network" device="disk">
Jan 20 14:29:22 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 14:29:22 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/9f5c9253-e2bd-42d3-8253-fac568daeda7_disk">
Jan 20 14:29:22 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:29:22 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:29:22 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:29:22 compute-0 nova_compute[250018]:       </source>
Jan 20 14:29:22 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 14:29:22 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:29:22 compute-0 nova_compute[250018]:       </auth>
Jan 20 14:29:22 compute-0 nova_compute[250018]:       <target dev="vda" bus="virtio"/>
Jan 20 14:29:22 compute-0 nova_compute[250018]:     </disk>
Jan 20 14:29:22 compute-0 nova_compute[250018]:     <disk type="network" device="cdrom">
Jan 20 14:29:22 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 14:29:22 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/9f5c9253-e2bd-42d3-8253-fac568daeda7_disk.config">
Jan 20 14:29:22 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:29:22 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:29:22 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:29:22 compute-0 nova_compute[250018]:       </source>
Jan 20 14:29:22 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 14:29:22 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:29:22 compute-0 nova_compute[250018]:       </auth>
Jan 20 14:29:22 compute-0 nova_compute[250018]:       <target dev="sda" bus="sata"/>
Jan 20 14:29:22 compute-0 nova_compute[250018]:     </disk>
Jan 20 14:29:22 compute-0 nova_compute[250018]:     <serial type="pty">
Jan 20 14:29:22 compute-0 nova_compute[250018]:       <log file="/var/lib/nova/instances/9f5c9253-e2bd-42d3-8253-fac568daeda7/console.log" append="off"/>
Jan 20 14:29:22 compute-0 nova_compute[250018]:     </serial>
Jan 20 14:29:22 compute-0 nova_compute[250018]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 14:29:22 compute-0 nova_compute[250018]:     <video>
Jan 20 14:29:22 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 14:29:22 compute-0 nova_compute[250018]:     </video>
Jan 20 14:29:22 compute-0 nova_compute[250018]:     <input type="tablet" bus="usb"/>
Jan 20 14:29:22 compute-0 nova_compute[250018]:     <rng model="virtio">
Jan 20 14:29:22 compute-0 nova_compute[250018]:       <backend model="random">/dev/urandom</backend>
Jan 20 14:29:22 compute-0 nova_compute[250018]:     </rng>
Jan 20 14:29:22 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root"/>
Jan 20 14:29:22 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:29:22 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:29:22 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:29:22 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:29:22 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:29:22 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:29:22 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:29:22 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:29:22 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:29:22 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:29:22 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:29:22 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:29:22 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:29:22 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:29:22 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:29:22 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:29:22 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:29:22 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:29:22 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:29:22 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:29:22 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:29:22 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:29:22 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:29:22 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:29:22 compute-0 nova_compute[250018]:     <controller type="usb" index="0"/>
Jan 20 14:29:22 compute-0 nova_compute[250018]:     <memballoon model="virtio">
Jan 20 14:29:22 compute-0 nova_compute[250018]:       <stats period="10"/>
Jan 20 14:29:22 compute-0 nova_compute[250018]:     </memballoon>
Jan 20 14:29:22 compute-0 nova_compute[250018]:   </devices>
Jan 20 14:29:22 compute-0 nova_compute[250018]: </domain>
Jan 20 14:29:22 compute-0 nova_compute[250018]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 14:29:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:29:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:29:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:29:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:29:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:29:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:29:22 compute-0 nova_compute[250018]: 2026-01-20 14:29:22.499 250022 DEBUG nova.network.neutron [None req-45d14fa6-8879-40cf-b833-137ecce45c8e 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] [instance: ad62888a-ef27-43b4-bb6c-439541ff5524] Successfully created port: dc8abb47-5960-4824-b04c-1903f2eb5e32 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 20 14:29:22 compute-0 nova_compute[250018]: 2026-01-20 14:29:22.556 250022 DEBUG nova.virt.libvirt.driver [None req-9d86e5bb-be59-4801-b37f-63a469b602e1 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:29:22 compute-0 nova_compute[250018]: 2026-01-20 14:29:22.557 250022 DEBUG nova.virt.libvirt.driver [None req-9d86e5bb-be59-4801-b37f-63a469b602e1 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:29:22 compute-0 nova_compute[250018]: 2026-01-20 14:29:22.558 250022 INFO nova.virt.libvirt.driver [None req-9d86e5bb-be59-4801-b37f-63a469b602e1 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: 9f5c9253-e2bd-42d3-8253-fac568daeda7] Using config drive
Jan 20 14:29:22 compute-0 nova_compute[250018]: 2026-01-20 14:29:22.595 250022 DEBUG nova.storage.rbd_utils [None req-9d86e5bb-be59-4801-b37f-63a469b602e1 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] rbd image 9f5c9253-e2bd-42d3-8253-fac568daeda7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:29:22 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:29:22 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:29:22 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:29:22.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:29:23 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/972668459' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:29:23 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3564187580' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:29:23 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1158: 321 pgs: 321 active+clean; 241 MiB data, 442 MiB used, 21 GiB / 21 GiB avail; 2.8 MiB/s rd, 3.4 MiB/s wr, 257 op/s
Jan 20 14:29:23 compute-0 nova_compute[250018]: 2026-01-20 14:29:23.386 250022 INFO nova.virt.libvirt.driver [None req-9d86e5bb-be59-4801-b37f-63a469b602e1 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: 9f5c9253-e2bd-42d3-8253-fac568daeda7] Creating config drive at /var/lib/nova/instances/9f5c9253-e2bd-42d3-8253-fac568daeda7/disk.config
Jan 20 14:29:23 compute-0 nova_compute[250018]: 2026-01-20 14:29:23.390 250022 DEBUG oslo_concurrency.processutils [None req-9d86e5bb-be59-4801-b37f-63a469b602e1 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9f5c9253-e2bd-42d3-8253-fac568daeda7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpws7mk2bc execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:29:23 compute-0 nova_compute[250018]: 2026-01-20 14:29:23.533 250022 DEBUG oslo_concurrency.processutils [None req-9d86e5bb-be59-4801-b37f-63a469b602e1 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9f5c9253-e2bd-42d3-8253-fac568daeda7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpws7mk2bc" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:29:23 compute-0 nova_compute[250018]: 2026-01-20 14:29:23.562 250022 DEBUG nova.storage.rbd_utils [None req-9d86e5bb-be59-4801-b37f-63a469b602e1 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] rbd image 9f5c9253-e2bd-42d3-8253-fac568daeda7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:29:23 compute-0 nova_compute[250018]: 2026-01-20 14:29:23.566 250022 DEBUG oslo_concurrency.processutils [None req-9d86e5bb-be59-4801-b37f-63a469b602e1 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9f5c9253-e2bd-42d3-8253-fac568daeda7/disk.config 9f5c9253-e2bd-42d3-8253-fac568daeda7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:29:23 compute-0 nova_compute[250018]: 2026-01-20 14:29:23.774 250022 DEBUG oslo_concurrency.processutils [None req-9d86e5bb-be59-4801-b37f-63a469b602e1 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9f5c9253-e2bd-42d3-8253-fac568daeda7/disk.config 9f5c9253-e2bd-42d3-8253-fac568daeda7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.208s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:29:23 compute-0 nova_compute[250018]: 2026-01-20 14:29:23.775 250022 INFO nova.virt.libvirt.driver [None req-9d86e5bb-be59-4801-b37f-63a469b602e1 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: 9f5c9253-e2bd-42d3-8253-fac568daeda7] Deleting local config drive /var/lib/nova/instances/9f5c9253-e2bd-42d3-8253-fac568daeda7/disk.config because it was imported into RBD.
Jan 20 14:29:23 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:29:23 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:29:23 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:29:23.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:29:23 compute-0 systemd-machined[216401]: New machine qemu-13-instance-0000001b.
Jan 20 14:29:23 compute-0 systemd[1]: Started Virtual Machine qemu-13-instance-0000001b.
Jan 20 14:29:24 compute-0 ceph-mon[74360]: pgmap v1158: 321 pgs: 321 active+clean; 241 MiB data, 442 MiB used, 21 GiB / 21 GiB avail; 2.8 MiB/s rd, 3.4 MiB/s wr, 257 op/s
Jan 20 14:29:24 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/423882642' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:29:24 compute-0 nova_compute[250018]: 2026-01-20 14:29:24.381 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768919364.380554, 9f5c9253-e2bd-42d3-8253-fac568daeda7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:29:24 compute-0 nova_compute[250018]: 2026-01-20 14:29:24.382 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 9f5c9253-e2bd-42d3-8253-fac568daeda7] VM Resumed (Lifecycle Event)
Jan 20 14:29:24 compute-0 nova_compute[250018]: 2026-01-20 14:29:24.385 250022 DEBUG nova.compute.manager [None req-9d86e5bb-be59-4801-b37f-63a469b602e1 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: 9f5c9253-e2bd-42d3-8253-fac568daeda7] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 14:29:24 compute-0 nova_compute[250018]: 2026-01-20 14:29:24.386 250022 DEBUG nova.virt.libvirt.driver [None req-9d86e5bb-be59-4801-b37f-63a469b602e1 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: 9f5c9253-e2bd-42d3-8253-fac568daeda7] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 14:29:24 compute-0 nova_compute[250018]: 2026-01-20 14:29:24.390 250022 INFO nova.virt.libvirt.driver [-] [instance: 9f5c9253-e2bd-42d3-8253-fac568daeda7] Instance spawned successfully.
Jan 20 14:29:24 compute-0 nova_compute[250018]: 2026-01-20 14:29:24.390 250022 DEBUG nova.virt.libvirt.driver [None req-9d86e5bb-be59-4801-b37f-63a469b602e1 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: 9f5c9253-e2bd-42d3-8253-fac568daeda7] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 14:29:24 compute-0 nova_compute[250018]: 2026-01-20 14:29:24.414 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 9f5c9253-e2bd-42d3-8253-fac568daeda7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:29:24 compute-0 nova_compute[250018]: 2026-01-20 14:29:24.422 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 9f5c9253-e2bd-42d3-8253-fac568daeda7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:29:24 compute-0 nova_compute[250018]: 2026-01-20 14:29:24.430 250022 DEBUG nova.virt.libvirt.driver [None req-9d86e5bb-be59-4801-b37f-63a469b602e1 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: 9f5c9253-e2bd-42d3-8253-fac568daeda7] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:29:24 compute-0 nova_compute[250018]: 2026-01-20 14:29:24.430 250022 DEBUG nova.virt.libvirt.driver [None req-9d86e5bb-be59-4801-b37f-63a469b602e1 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: 9f5c9253-e2bd-42d3-8253-fac568daeda7] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:29:24 compute-0 nova_compute[250018]: 2026-01-20 14:29:24.431 250022 DEBUG nova.virt.libvirt.driver [None req-9d86e5bb-be59-4801-b37f-63a469b602e1 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: 9f5c9253-e2bd-42d3-8253-fac568daeda7] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:29:24 compute-0 nova_compute[250018]: 2026-01-20 14:29:24.432 250022 DEBUG nova.virt.libvirt.driver [None req-9d86e5bb-be59-4801-b37f-63a469b602e1 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: 9f5c9253-e2bd-42d3-8253-fac568daeda7] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:29:24 compute-0 nova_compute[250018]: 2026-01-20 14:29:24.433 250022 DEBUG nova.virt.libvirt.driver [None req-9d86e5bb-be59-4801-b37f-63a469b602e1 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: 9f5c9253-e2bd-42d3-8253-fac568daeda7] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:29:24 compute-0 nova_compute[250018]: 2026-01-20 14:29:24.433 250022 DEBUG nova.virt.libvirt.driver [None req-9d86e5bb-be59-4801-b37f-63a469b602e1 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: 9f5c9253-e2bd-42d3-8253-fac568daeda7] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:29:24 compute-0 nova_compute[250018]: 2026-01-20 14:29:24.485 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 9f5c9253-e2bd-42d3-8253-fac568daeda7] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:29:24 compute-0 nova_compute[250018]: 2026-01-20 14:29:24.486 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768919364.382252, 9f5c9253-e2bd-42d3-8253-fac568daeda7 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:29:24 compute-0 nova_compute[250018]: 2026-01-20 14:29:24.486 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 9f5c9253-e2bd-42d3-8253-fac568daeda7] VM Started (Lifecycle Event)
Jan 20 14:29:24 compute-0 nova_compute[250018]: 2026-01-20 14:29:24.506 250022 INFO nova.compute.manager [None req-9d86e5bb-be59-4801-b37f-63a469b602e1 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: 9f5c9253-e2bd-42d3-8253-fac568daeda7] Took 4.37 seconds to spawn the instance on the hypervisor.
Jan 20 14:29:24 compute-0 nova_compute[250018]: 2026-01-20 14:29:24.507 250022 DEBUG nova.compute.manager [None req-9d86e5bb-be59-4801-b37f-63a469b602e1 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: 9f5c9253-e2bd-42d3-8253-fac568daeda7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:29:24 compute-0 nova_compute[250018]: 2026-01-20 14:29:24.513 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 9f5c9253-e2bd-42d3-8253-fac568daeda7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:29:24 compute-0 nova_compute[250018]: 2026-01-20 14:29:24.522 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 9f5c9253-e2bd-42d3-8253-fac568daeda7] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:29:24 compute-0 nova_compute[250018]: 2026-01-20 14:29:24.552 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 9f5c9253-e2bd-42d3-8253-fac568daeda7] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:29:24 compute-0 nova_compute[250018]: 2026-01-20 14:29:24.601 250022 INFO nova.compute.manager [None req-9d86e5bb-be59-4801-b37f-63a469b602e1 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: 9f5c9253-e2bd-42d3-8253-fac568daeda7] Took 6.25 seconds to build instance.
Jan 20 14:29:24 compute-0 nova_compute[250018]: 2026-01-20 14:29:24.636 250022 DEBUG oslo_concurrency.lockutils [None req-9d86e5bb-be59-4801-b37f-63a469b602e1 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Lock "9f5c9253-e2bd-42d3-8253-fac568daeda7" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.344s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:29:24 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:29:24 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:29:24 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:29:24.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:29:25 compute-0 nova_compute[250018]: 2026-01-20 14:29:25.074 250022 DEBUG nova.network.neutron [None req-45d14fa6-8879-40cf-b833-137ecce45c8e 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] [instance: ad62888a-ef27-43b4-bb6c-439541ff5524] Successfully updated port: dc8abb47-5960-4824-b04c-1903f2eb5e32 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 20 14:29:25 compute-0 nova_compute[250018]: 2026-01-20 14:29:25.105 250022 DEBUG oslo_concurrency.lockutils [None req-45d14fa6-8879-40cf-b833-137ecce45c8e 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Acquiring lock "refresh_cache-ad62888a-ef27-43b4-bb6c-439541ff5524" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:29:25 compute-0 nova_compute[250018]: 2026-01-20 14:29:25.106 250022 DEBUG oslo_concurrency.lockutils [None req-45d14fa6-8879-40cf-b833-137ecce45c8e 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Acquired lock "refresh_cache-ad62888a-ef27-43b4-bb6c-439541ff5524" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:29:25 compute-0 nova_compute[250018]: 2026-01-20 14:29:25.106 250022 DEBUG nova.network.neutron [None req-45d14fa6-8879-40cf-b833-137ecce45c8e 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] [instance: ad62888a-ef27-43b4-bb6c-439541ff5524] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:29:25 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1159: 321 pgs: 321 active+clean; 294 MiB data, 465 MiB used, 21 GiB / 21 GiB avail; 1.8 MiB/s rd, 4.3 MiB/s wr, 240 op/s
Jan 20 14:29:25 compute-0 nova_compute[250018]: 2026-01-20 14:29:25.515 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:29:25 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:29:25 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:29:25 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:29:25.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:29:25 compute-0 nova_compute[250018]: 2026-01-20 14:29:25.829 250022 DEBUG nova.network.neutron [None req-45d14fa6-8879-40cf-b833-137ecce45c8e 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] [instance: ad62888a-ef27-43b4-bb6c-439541ff5524] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 14:29:26 compute-0 nova_compute[250018]: 2026-01-20 14:29:26.111 250022 DEBUG nova.compute.manager [req-543a98ec-f7c8-4d7b-a91e-73c89fc478b9 req-928531da-2a56-452b-ace2-0d38dcf0cf74 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: ad62888a-ef27-43b4-bb6c-439541ff5524] Received event network-changed-dc8abb47-5960-4824-b04c-1903f2eb5e32 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:29:26 compute-0 nova_compute[250018]: 2026-01-20 14:29:26.112 250022 DEBUG nova.compute.manager [req-543a98ec-f7c8-4d7b-a91e-73c89fc478b9 req-928531da-2a56-452b-ace2-0d38dcf0cf74 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: ad62888a-ef27-43b4-bb6c-439541ff5524] Refreshing instance network info cache due to event network-changed-dc8abb47-5960-4824-b04c-1903f2eb5e32. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:29:26 compute-0 nova_compute[250018]: 2026-01-20 14:29:26.112 250022 DEBUG oslo_concurrency.lockutils [req-543a98ec-f7c8-4d7b-a91e-73c89fc478b9 req-928531da-2a56-452b-ace2-0d38dcf0cf74 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-ad62888a-ef27-43b4-bb6c-439541ff5524" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:29:26 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:29:26 compute-0 ceph-mon[74360]: pgmap v1159: 321 pgs: 321 active+clean; 294 MiB data, 465 MiB used, 21 GiB / 21 GiB avail; 1.8 MiB/s rd, 4.3 MiB/s wr, 240 op/s
Jan 20 14:29:26 compute-0 ceph-mon[74360]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #57. Immutable memtables: 0.
Jan 20 14:29:26 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:29:26.392942) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 14:29:26 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:856] [default] [JOB 29] Flushing memtable with next log file: 57
Jan 20 14:29:26 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768919366393038, "job": 29, "event": "flush_started", "num_memtables": 1, "num_entries": 2221, "num_deletes": 256, "total_data_size": 3849538, "memory_usage": 3923920, "flush_reason": "Manual Compaction"}
Jan 20 14:29:26 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:885] [default] [JOB 29] Level-0 flush table #58: started
Jan 20 14:29:26 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768919366424809, "cf_name": "default", "job": 29, "event": "table_file_creation", "file_number": 58, "file_size": 3707485, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 24265, "largest_seqno": 26485, "table_properties": {"data_size": 3697574, "index_size": 6213, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2629, "raw_key_size": 21636, "raw_average_key_size": 20, "raw_value_size": 3677289, "raw_average_value_size": 3539, "num_data_blocks": 273, "num_entries": 1039, "num_filter_entries": 1039, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768919183, "oldest_key_time": 1768919183, "file_creation_time": 1768919366, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1688fb6f-f397-4aac-a0e8-874ff91d97a7", "db_session_id": "5V3N2TVXYZBCXP55EZHK", "orig_file_number": 58, "seqno_to_time_mapping": "N/A"}}
Jan 20 14:29:26 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 29] Flush lasted 31915 microseconds, and 17121 cpu microseconds.
Jan 20 14:29:26 compute-0 ceph-mon[74360]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 14:29:26 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:29:26.424866) [db/flush_job.cc:967] [default] [JOB 29] Level-0 flush table #58: 3707485 bytes OK
Jan 20 14:29:26 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:29:26.424889) [db/memtable_list.cc:519] [default] Level-0 commit table #58 started
Jan 20 14:29:26 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:29:26.426748) [db/memtable_list.cc:722] [default] Level-0 commit table #58: memtable #1 done
Jan 20 14:29:26 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:29:26.426766) EVENT_LOG_v1 {"time_micros": 1768919366426760, "job": 29, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 14:29:26 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:29:26.426786) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 14:29:26 compute-0 ceph-mon[74360]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 29] Try to delete WAL files size 3840323, prev total WAL file size 3840323, number of live WAL files 2.
Jan 20 14:29:26 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000054.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:29:26 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:29:26.428210) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032303038' seq:72057594037927935, type:22 .. '7061786F730032323630' seq:0, type:0; will stop at (end)
Jan 20 14:29:26 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 30] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 14:29:26 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 29 Base level 0, inputs: [58(3620KB)], [56(8855KB)]
Jan 20 14:29:26 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768919366428296, "job": 30, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [58], "files_L6": [56], "score": -1, "input_data_size": 12775730, "oldest_snapshot_seqno": -1}
Jan 20 14:29:26 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 30] Generated table #59: 5462 keys, 10745381 bytes, temperature: kUnknown
Jan 20 14:29:26 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768919366504513, "cf_name": "default", "job": 30, "event": "table_file_creation", "file_number": 59, "file_size": 10745381, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10706810, "index_size": 23805, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13701, "raw_key_size": 136942, "raw_average_key_size": 25, "raw_value_size": 10606345, "raw_average_value_size": 1941, "num_data_blocks": 980, "num_entries": 5462, "num_filter_entries": 5462, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768917305, "oldest_key_time": 0, "file_creation_time": 1768919366, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1688fb6f-f397-4aac-a0e8-874ff91d97a7", "db_session_id": "5V3N2TVXYZBCXP55EZHK", "orig_file_number": 59, "seqno_to_time_mapping": "N/A"}}
Jan 20 14:29:26 compute-0 ceph-mon[74360]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 14:29:26 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:29:26.504769) [db/compaction/compaction_job.cc:1663] [default] [JOB 30] Compacted 1@0 + 1@6 files to L6 => 10745381 bytes
Jan 20 14:29:26 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:29:26.506358) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 167.4 rd, 140.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.5, 8.6 +0.0 blob) out(10.2 +0.0 blob), read-write-amplify(6.3) write-amplify(2.9) OK, records in: 5993, records dropped: 531 output_compression: NoCompression
Jan 20 14:29:26 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:29:26.506393) EVENT_LOG_v1 {"time_micros": 1768919366506367, "job": 30, "event": "compaction_finished", "compaction_time_micros": 76307, "compaction_time_cpu_micros": 26795, "output_level": 6, "num_output_files": 1, "total_output_size": 10745381, "num_input_records": 5993, "num_output_records": 5462, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 14:29:26 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000058.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:29:26 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768919366507364, "job": 30, "event": "table_file_deletion", "file_number": 58}
Jan 20 14:29:26 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000056.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:29:26 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768919366509146, "job": 30, "event": "table_file_deletion", "file_number": 56}
Jan 20 14:29:26 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:29:26.428128) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:29:26 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:29:26.509198) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:29:26 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:29:26.509202) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:29:26 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:29:26.509204) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:29:26 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:29:26.509205) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:29:26 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:29:26.509207) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:29:26 compute-0 nova_compute[250018]: 2026-01-20 14:29:26.727 250022 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768919351.7266731, 79b5596e-43c9-4085-9829-454fecf59490 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:29:26 compute-0 nova_compute[250018]: 2026-01-20 14:29:26.728 250022 INFO nova.compute.manager [-] [instance: 79b5596e-43c9-4085-9829-454fecf59490] VM Stopped (Lifecycle Event)
Jan 20 14:29:26 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:29:26 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:29:26 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:29:26.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:29:26 compute-0 nova_compute[250018]: 2026-01-20 14:29:26.788 250022 DEBUG nova.compute.manager [None req-2ca77aa7-e2fe-47c2-b40f-f19862715646 - - - - - -] [instance: 79b5596e-43c9-4085-9829-454fecf59490] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:29:26 compute-0 nova_compute[250018]: 2026-01-20 14:29:26.788 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:29:27 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1160: 321 pgs: 321 active+clean; 294 MiB data, 472 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 4.3 MiB/s wr, 213 op/s
Jan 20 14:29:27 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:29:27 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:29:27 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:29:27.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:29:28 compute-0 ceph-mon[74360]: pgmap v1160: 321 pgs: 321 active+clean; 294 MiB data, 472 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 4.3 MiB/s wr, 213 op/s
Jan 20 14:29:28 compute-0 nova_compute[250018]: 2026-01-20 14:29:28.499 250022 DEBUG nova.network.neutron [None req-45d14fa6-8879-40cf-b833-137ecce45c8e 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] [instance: ad62888a-ef27-43b4-bb6c-439541ff5524] Updating instance_info_cache with network_info: [{"id": "dc8abb47-5960-4824-b04c-1903f2eb5e32", "address": "fa:16:3e:76:68:31", "network": {"id": "02f86d1d-5cad-49c5-9004-3de3e4739ad5", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-889517255-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0cee74dd60da4a839bb5eb0ba3137edf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc8abb47-59", "ovs_interfaceid": "dc8abb47-5960-4824-b04c-1903f2eb5e32", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:29:28 compute-0 nova_compute[250018]: 2026-01-20 14:29:28.530 250022 DEBUG oslo_concurrency.lockutils [None req-45d14fa6-8879-40cf-b833-137ecce45c8e 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Releasing lock "refresh_cache-ad62888a-ef27-43b4-bb6c-439541ff5524" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:29:28 compute-0 nova_compute[250018]: 2026-01-20 14:29:28.530 250022 DEBUG nova.compute.manager [None req-45d14fa6-8879-40cf-b833-137ecce45c8e 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] [instance: ad62888a-ef27-43b4-bb6c-439541ff5524] Instance network_info: |[{"id": "dc8abb47-5960-4824-b04c-1903f2eb5e32", "address": "fa:16:3e:76:68:31", "network": {"id": "02f86d1d-5cad-49c5-9004-3de3e4739ad5", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-889517255-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0cee74dd60da4a839bb5eb0ba3137edf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc8abb47-59", "ovs_interfaceid": "dc8abb47-5960-4824-b04c-1903f2eb5e32", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 20 14:29:28 compute-0 nova_compute[250018]: 2026-01-20 14:29:28.531 250022 DEBUG oslo_concurrency.lockutils [req-543a98ec-f7c8-4d7b-a91e-73c89fc478b9 req-928531da-2a56-452b-ace2-0d38dcf0cf74 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-ad62888a-ef27-43b4-bb6c-439541ff5524" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:29:28 compute-0 nova_compute[250018]: 2026-01-20 14:29:28.531 250022 DEBUG nova.network.neutron [req-543a98ec-f7c8-4d7b-a91e-73c89fc478b9 req-928531da-2a56-452b-ace2-0d38dcf0cf74 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: ad62888a-ef27-43b4-bb6c-439541ff5524] Refreshing network info cache for port dc8abb47-5960-4824-b04c-1903f2eb5e32 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:29:28 compute-0 nova_compute[250018]: 2026-01-20 14:29:28.533 250022 DEBUG nova.virt.libvirt.driver [None req-45d14fa6-8879-40cf-b833-137ecce45c8e 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] [instance: ad62888a-ef27-43b4-bb6c-439541ff5524] Start _get_guest_xml network_info=[{"id": "dc8abb47-5960-4824-b04c-1903f2eb5e32", "address": "fa:16:3e:76:68:31", "network": {"id": "02f86d1d-5cad-49c5-9004-3de3e4739ad5", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-889517255-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0cee74dd60da4a839bb5eb0ba3137edf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc8abb47-59", "ovs_interfaceid": "dc8abb47-5960-4824-b04c-1903f2eb5e32", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encrypted': False, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'guest_format': None, 'size': 0, 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 14:29:28 compute-0 nova_compute[250018]: 2026-01-20 14:29:28.539 250022 WARNING nova.virt.libvirt.driver [None req-45d14fa6-8879-40cf-b833-137ecce45c8e 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:29:28 compute-0 nova_compute[250018]: 2026-01-20 14:29:28.545 250022 DEBUG nova.virt.libvirt.host [None req-45d14fa6-8879-40cf-b833-137ecce45c8e 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 14:29:28 compute-0 nova_compute[250018]: 2026-01-20 14:29:28.546 250022 DEBUG nova.virt.libvirt.host [None req-45d14fa6-8879-40cf-b833-137ecce45c8e 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 14:29:28 compute-0 nova_compute[250018]: 2026-01-20 14:29:28.549 250022 DEBUG nova.virt.libvirt.host [None req-45d14fa6-8879-40cf-b833-137ecce45c8e 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 14:29:28 compute-0 nova_compute[250018]: 2026-01-20 14:29:28.549 250022 DEBUG nova.virt.libvirt.host [None req-45d14fa6-8879-40cf-b833-137ecce45c8e 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 14:29:28 compute-0 nova_compute[250018]: 2026-01-20 14:29:28.550 250022 DEBUG nova.virt.libvirt.driver [None req-45d14fa6-8879-40cf-b833-137ecce45c8e 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 14:29:28 compute-0 nova_compute[250018]: 2026-01-20 14:29:28.551 250022 DEBUG nova.virt.hardware [None req-45d14fa6-8879-40cf-b833-137ecce45c8e 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 14:29:28 compute-0 nova_compute[250018]: 2026-01-20 14:29:28.551 250022 DEBUG nova.virt.hardware [None req-45d14fa6-8879-40cf-b833-137ecce45c8e 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 14:29:28 compute-0 nova_compute[250018]: 2026-01-20 14:29:28.551 250022 DEBUG nova.virt.hardware [None req-45d14fa6-8879-40cf-b833-137ecce45c8e 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 14:29:28 compute-0 nova_compute[250018]: 2026-01-20 14:29:28.552 250022 DEBUG nova.virt.hardware [None req-45d14fa6-8879-40cf-b833-137ecce45c8e 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 14:29:28 compute-0 nova_compute[250018]: 2026-01-20 14:29:28.552 250022 DEBUG nova.virt.hardware [None req-45d14fa6-8879-40cf-b833-137ecce45c8e 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 14:29:28 compute-0 nova_compute[250018]: 2026-01-20 14:29:28.552 250022 DEBUG nova.virt.hardware [None req-45d14fa6-8879-40cf-b833-137ecce45c8e 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 14:29:28 compute-0 nova_compute[250018]: 2026-01-20 14:29:28.553 250022 DEBUG nova.virt.hardware [None req-45d14fa6-8879-40cf-b833-137ecce45c8e 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 14:29:28 compute-0 nova_compute[250018]: 2026-01-20 14:29:28.553 250022 DEBUG nova.virt.hardware [None req-45d14fa6-8879-40cf-b833-137ecce45c8e 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 14:29:28 compute-0 nova_compute[250018]: 2026-01-20 14:29:28.553 250022 DEBUG nova.virt.hardware [None req-45d14fa6-8879-40cf-b833-137ecce45c8e 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 14:29:28 compute-0 nova_compute[250018]: 2026-01-20 14:29:28.553 250022 DEBUG nova.virt.hardware [None req-45d14fa6-8879-40cf-b833-137ecce45c8e 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 14:29:28 compute-0 nova_compute[250018]: 2026-01-20 14:29:28.554 250022 DEBUG nova.virt.hardware [None req-45d14fa6-8879-40cf-b833-137ecce45c8e 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 14:29:28 compute-0 nova_compute[250018]: 2026-01-20 14:29:28.556 250022 DEBUG oslo_concurrency.processutils [None req-45d14fa6-8879-40cf-b833-137ecce45c8e 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:29:28 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:29:28 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:29:28 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:29:28.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:29:29 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:29:29 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2219983015' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:29:29 compute-0 nova_compute[250018]: 2026-01-20 14:29:29.067 250022 DEBUG oslo_concurrency.processutils [None req-45d14fa6-8879-40cf-b833-137ecce45c8e 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.510s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:29:29 compute-0 nova_compute[250018]: 2026-01-20 14:29:29.099 250022 DEBUG nova.storage.rbd_utils [None req-45d14fa6-8879-40cf-b833-137ecce45c8e 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] rbd image ad62888a-ef27-43b4-bb6c-439541ff5524_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:29:29 compute-0 nova_compute[250018]: 2026-01-20 14:29:29.103 250022 DEBUG oslo_concurrency.processutils [None req-45d14fa6-8879-40cf-b833-137ecce45c8e 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:29:29 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1161: 321 pgs: 321 active+clean; 296 MiB data, 472 MiB used, 21 GiB / 21 GiB avail; 1.8 MiB/s rd, 4.3 MiB/s wr, 190 op/s
Jan 20 14:29:29 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/2219983015' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:29:29 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:29:29 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1801464026' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:29:29 compute-0 nova_compute[250018]: 2026-01-20 14:29:29.548 250022 DEBUG oslo_concurrency.processutils [None req-45d14fa6-8879-40cf-b833-137ecce45c8e 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:29:29 compute-0 nova_compute[250018]: 2026-01-20 14:29:29.551 250022 DEBUG nova.virt.libvirt.vif [None req-45d14fa6-8879-40cf-b833-137ecce45c8e 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:29:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-UpdateMultiattachVolumeNegativeTest-server-947631498',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-updatemultiattachvolumenegativetest-server-947631498',id=26,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH+01n3DJe3yYfRmwifZEomZrLtaFilErLasmr7ze/p0n1d6nPaSWQOHrHfJ9ubgBCwoqlwHjFIWrKKyRcRI1f3OIubHCG4LO7UMySAzmCXBSDkLJPz6Qzoln3dTb/xrow==',key_name='tempest-keypair-696534507',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0cee74dd60da4a839bb5eb0ba3137edf',ramdisk_id='',reservation_id='r-n5nexebz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-UpdateMultiattachVolumeNegativeTest-859917658',owner_user_name='tempest-UpdateMultiattachVolumeNegativeTest-859917658-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:29:19Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='6a3fbc3f92a849e88cbf34d28ca17e43',uuid=ad62888a-ef27-43b4-bb6c-439541ff5524,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "dc8abb47-5960-4824-b04c-1903f2eb5e32", "address": "fa:16:3e:76:68:31", "network": {"id": "02f86d1d-5cad-49c5-9004-3de3e4739ad5", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-889517255-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0cee74dd60da4a839bb5eb0ba3137edf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc8abb47-59", "ovs_interfaceid": "dc8abb47-5960-4824-b04c-1903f2eb5e32", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 14:29:29 compute-0 nova_compute[250018]: 2026-01-20 14:29:29.551 250022 DEBUG nova.network.os_vif_util [None req-45d14fa6-8879-40cf-b833-137ecce45c8e 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Converting VIF {"id": "dc8abb47-5960-4824-b04c-1903f2eb5e32", "address": "fa:16:3e:76:68:31", "network": {"id": "02f86d1d-5cad-49c5-9004-3de3e4739ad5", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-889517255-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0cee74dd60da4a839bb5eb0ba3137edf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc8abb47-59", "ovs_interfaceid": "dc8abb47-5960-4824-b04c-1903f2eb5e32", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:29:29 compute-0 nova_compute[250018]: 2026-01-20 14:29:29.553 250022 DEBUG nova.network.os_vif_util [None req-45d14fa6-8879-40cf-b833-137ecce45c8e 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:76:68:31,bridge_name='br-int',has_traffic_filtering=True,id=dc8abb47-5960-4824-b04c-1903f2eb5e32,network=Network(02f86d1d-5cad-49c5-9004-3de3e4739ad5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdc8abb47-59') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:29:29 compute-0 nova_compute[250018]: 2026-01-20 14:29:29.554 250022 DEBUG nova.objects.instance [None req-45d14fa6-8879-40cf-b833-137ecce45c8e 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Lazy-loading 'pci_devices' on Instance uuid ad62888a-ef27-43b4-bb6c-439541ff5524 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:29:29 compute-0 nova_compute[250018]: 2026-01-20 14:29:29.572 250022 DEBUG nova.virt.libvirt.driver [None req-45d14fa6-8879-40cf-b833-137ecce45c8e 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] [instance: ad62888a-ef27-43b4-bb6c-439541ff5524] End _get_guest_xml xml=<domain type="kvm">
Jan 20 14:29:29 compute-0 nova_compute[250018]:   <uuid>ad62888a-ef27-43b4-bb6c-439541ff5524</uuid>
Jan 20 14:29:29 compute-0 nova_compute[250018]:   <name>instance-0000001a</name>
Jan 20 14:29:29 compute-0 nova_compute[250018]:   <memory>131072</memory>
Jan 20 14:29:29 compute-0 nova_compute[250018]:   <vcpu>1</vcpu>
Jan 20 14:29:29 compute-0 nova_compute[250018]:   <metadata>
Jan 20 14:29:29 compute-0 nova_compute[250018]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 14:29:29 compute-0 nova_compute[250018]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 14:29:29 compute-0 nova_compute[250018]:       <nova:name>tempest-UpdateMultiattachVolumeNegativeTest-server-947631498</nova:name>
Jan 20 14:29:29 compute-0 nova_compute[250018]:       <nova:creationTime>2026-01-20 14:29:28</nova:creationTime>
Jan 20 14:29:29 compute-0 nova_compute[250018]:       <nova:flavor name="m1.nano">
Jan 20 14:29:29 compute-0 nova_compute[250018]:         <nova:memory>128</nova:memory>
Jan 20 14:29:29 compute-0 nova_compute[250018]:         <nova:disk>1</nova:disk>
Jan 20 14:29:29 compute-0 nova_compute[250018]:         <nova:swap>0</nova:swap>
Jan 20 14:29:29 compute-0 nova_compute[250018]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 14:29:29 compute-0 nova_compute[250018]:         <nova:vcpus>1</nova:vcpus>
Jan 20 14:29:29 compute-0 nova_compute[250018]:       </nova:flavor>
Jan 20 14:29:29 compute-0 nova_compute[250018]:       <nova:owner>
Jan 20 14:29:29 compute-0 nova_compute[250018]:         <nova:user uuid="6a3fbc3f92a849e88cbf34d28ca17e43">tempest-UpdateMultiattachVolumeNegativeTest-859917658-project-member</nova:user>
Jan 20 14:29:29 compute-0 nova_compute[250018]:         <nova:project uuid="0cee74dd60da4a839bb5eb0ba3137edf">tempest-UpdateMultiattachVolumeNegativeTest-859917658</nova:project>
Jan 20 14:29:29 compute-0 nova_compute[250018]:       </nova:owner>
Jan 20 14:29:29 compute-0 nova_compute[250018]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 14:29:29 compute-0 nova_compute[250018]:       <nova:ports>
Jan 20 14:29:29 compute-0 nova_compute[250018]:         <nova:port uuid="dc8abb47-5960-4824-b04c-1903f2eb5e32">
Jan 20 14:29:29 compute-0 nova_compute[250018]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Jan 20 14:29:29 compute-0 nova_compute[250018]:         </nova:port>
Jan 20 14:29:29 compute-0 nova_compute[250018]:       </nova:ports>
Jan 20 14:29:29 compute-0 nova_compute[250018]:     </nova:instance>
Jan 20 14:29:29 compute-0 nova_compute[250018]:   </metadata>
Jan 20 14:29:29 compute-0 nova_compute[250018]:   <sysinfo type="smbios">
Jan 20 14:29:29 compute-0 nova_compute[250018]:     <system>
Jan 20 14:29:29 compute-0 nova_compute[250018]:       <entry name="manufacturer">RDO</entry>
Jan 20 14:29:29 compute-0 nova_compute[250018]:       <entry name="product">OpenStack Compute</entry>
Jan 20 14:29:29 compute-0 nova_compute[250018]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 14:29:29 compute-0 nova_compute[250018]:       <entry name="serial">ad62888a-ef27-43b4-bb6c-439541ff5524</entry>
Jan 20 14:29:29 compute-0 nova_compute[250018]:       <entry name="uuid">ad62888a-ef27-43b4-bb6c-439541ff5524</entry>
Jan 20 14:29:29 compute-0 nova_compute[250018]:       <entry name="family">Virtual Machine</entry>
Jan 20 14:29:29 compute-0 nova_compute[250018]:     </system>
Jan 20 14:29:29 compute-0 nova_compute[250018]:   </sysinfo>
Jan 20 14:29:29 compute-0 nova_compute[250018]:   <os>
Jan 20 14:29:29 compute-0 nova_compute[250018]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 14:29:29 compute-0 nova_compute[250018]:     <boot dev="hd"/>
Jan 20 14:29:29 compute-0 nova_compute[250018]:     <smbios mode="sysinfo"/>
Jan 20 14:29:29 compute-0 nova_compute[250018]:   </os>
Jan 20 14:29:29 compute-0 nova_compute[250018]:   <features>
Jan 20 14:29:29 compute-0 nova_compute[250018]:     <acpi/>
Jan 20 14:29:29 compute-0 nova_compute[250018]:     <apic/>
Jan 20 14:29:29 compute-0 nova_compute[250018]:     <vmcoreinfo/>
Jan 20 14:29:29 compute-0 nova_compute[250018]:   </features>
Jan 20 14:29:29 compute-0 nova_compute[250018]:   <clock offset="utc">
Jan 20 14:29:29 compute-0 nova_compute[250018]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 14:29:29 compute-0 nova_compute[250018]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 14:29:29 compute-0 nova_compute[250018]:     <timer name="hpet" present="no"/>
Jan 20 14:29:29 compute-0 nova_compute[250018]:   </clock>
Jan 20 14:29:29 compute-0 nova_compute[250018]:   <cpu mode="custom" match="exact">
Jan 20 14:29:29 compute-0 nova_compute[250018]:     <model>Nehalem</model>
Jan 20 14:29:29 compute-0 nova_compute[250018]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 14:29:29 compute-0 nova_compute[250018]:   </cpu>
Jan 20 14:29:29 compute-0 nova_compute[250018]:   <devices>
Jan 20 14:29:29 compute-0 nova_compute[250018]:     <disk type="network" device="disk">
Jan 20 14:29:29 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 14:29:29 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/ad62888a-ef27-43b4-bb6c-439541ff5524_disk">
Jan 20 14:29:29 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:29:29 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:29:29 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:29:29 compute-0 nova_compute[250018]:       </source>
Jan 20 14:29:29 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 14:29:29 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:29:29 compute-0 nova_compute[250018]:       </auth>
Jan 20 14:29:29 compute-0 nova_compute[250018]:       <target dev="vda" bus="virtio"/>
Jan 20 14:29:29 compute-0 nova_compute[250018]:     </disk>
Jan 20 14:29:29 compute-0 nova_compute[250018]:     <disk type="network" device="cdrom">
Jan 20 14:29:29 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 14:29:29 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/ad62888a-ef27-43b4-bb6c-439541ff5524_disk.config">
Jan 20 14:29:29 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:29:29 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:29:29 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:29:29 compute-0 nova_compute[250018]:       </source>
Jan 20 14:29:29 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 14:29:29 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:29:29 compute-0 nova_compute[250018]:       </auth>
Jan 20 14:29:29 compute-0 nova_compute[250018]:       <target dev="sda" bus="sata"/>
Jan 20 14:29:29 compute-0 nova_compute[250018]:     </disk>
Jan 20 14:29:29 compute-0 nova_compute[250018]:     <interface type="ethernet">
Jan 20 14:29:29 compute-0 nova_compute[250018]:       <mac address="fa:16:3e:76:68:31"/>
Jan 20 14:29:29 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 14:29:29 compute-0 nova_compute[250018]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 14:29:29 compute-0 nova_compute[250018]:       <mtu size="1442"/>
Jan 20 14:29:29 compute-0 nova_compute[250018]:       <target dev="tapdc8abb47-59"/>
Jan 20 14:29:29 compute-0 nova_compute[250018]:     </interface>
Jan 20 14:29:29 compute-0 nova_compute[250018]:     <serial type="pty">
Jan 20 14:29:29 compute-0 nova_compute[250018]:       <log file="/var/lib/nova/instances/ad62888a-ef27-43b4-bb6c-439541ff5524/console.log" append="off"/>
Jan 20 14:29:29 compute-0 nova_compute[250018]:     </serial>
Jan 20 14:29:29 compute-0 nova_compute[250018]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 14:29:29 compute-0 nova_compute[250018]:     <video>
Jan 20 14:29:29 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 14:29:29 compute-0 nova_compute[250018]:     </video>
Jan 20 14:29:29 compute-0 nova_compute[250018]:     <input type="tablet" bus="usb"/>
Jan 20 14:29:29 compute-0 nova_compute[250018]:     <rng model="virtio">
Jan 20 14:29:29 compute-0 nova_compute[250018]:       <backend model="random">/dev/urandom</backend>
Jan 20 14:29:29 compute-0 nova_compute[250018]:     </rng>
Jan 20 14:29:29 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root"/>
Jan 20 14:29:29 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:29:29 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:29:29 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:29:29 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:29:29 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:29:29 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:29:29 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:29:29 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:29:29 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:29:29 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:29:29 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:29:29 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:29:29 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:29:29 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:29:29 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:29:29 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:29:29 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:29:29 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:29:29 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:29:29 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:29:29 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:29:29 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:29:29 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:29:29 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:29:29 compute-0 nova_compute[250018]:     <controller type="usb" index="0"/>
Jan 20 14:29:29 compute-0 nova_compute[250018]:     <memballoon model="virtio">
Jan 20 14:29:29 compute-0 nova_compute[250018]:       <stats period="10"/>
Jan 20 14:29:29 compute-0 nova_compute[250018]:     </memballoon>
Jan 20 14:29:29 compute-0 nova_compute[250018]:   </devices>
Jan 20 14:29:29 compute-0 nova_compute[250018]: </domain>
Jan 20 14:29:29 compute-0 nova_compute[250018]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 14:29:29 compute-0 nova_compute[250018]: 2026-01-20 14:29:29.584 250022 DEBUG nova.compute.manager [None req-45d14fa6-8879-40cf-b833-137ecce45c8e 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] [instance: ad62888a-ef27-43b4-bb6c-439541ff5524] Preparing to wait for external event network-vif-plugged-dc8abb47-5960-4824-b04c-1903f2eb5e32 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 20 14:29:29 compute-0 nova_compute[250018]: 2026-01-20 14:29:29.585 250022 DEBUG oslo_concurrency.lockutils [None req-45d14fa6-8879-40cf-b833-137ecce45c8e 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Acquiring lock "ad62888a-ef27-43b4-bb6c-439541ff5524-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:29:29 compute-0 nova_compute[250018]: 2026-01-20 14:29:29.585 250022 DEBUG oslo_concurrency.lockutils [None req-45d14fa6-8879-40cf-b833-137ecce45c8e 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Lock "ad62888a-ef27-43b4-bb6c-439541ff5524-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:29:29 compute-0 nova_compute[250018]: 2026-01-20 14:29:29.585 250022 DEBUG oslo_concurrency.lockutils [None req-45d14fa6-8879-40cf-b833-137ecce45c8e 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Lock "ad62888a-ef27-43b4-bb6c-439541ff5524-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:29:29 compute-0 nova_compute[250018]: 2026-01-20 14:29:29.586 250022 DEBUG nova.virt.libvirt.vif [None req-45d14fa6-8879-40cf-b833-137ecce45c8e 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:29:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-UpdateMultiattachVolumeNegativeTest-server-947631498',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-updatemultiattachvolumenegativetest-server-947631498',id=26,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH+01n3DJe3yYfRmwifZEomZrLtaFilErLasmr7ze/p0n1d6nPaSWQOHrHfJ9ubgBCwoqlwHjFIWrKKyRcRI1f3OIubHCG4LO7UMySAzmCXBSDkLJPz6Qzoln3dTb/xrow==',key_name='tempest-keypair-696534507',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0cee74dd60da4a839bb5eb0ba3137edf',ramdisk_id='',reservation_id='r-n5nexebz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-UpdateMultiattachVolumeNegativeTest-859917658',owner_user_name='tempest-UpdateMultiattachVolumeNegativeTest-859917658-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:29:19Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='6a3fbc3f92a849e88cbf34d28ca17e43',uuid=ad62888a-ef27-43b4-bb6c-439541ff5524,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "dc8abb47-5960-4824-b04c-1903f2eb5e32", "address": "fa:16:3e:76:68:31", "network": {"id": "02f86d1d-5cad-49c5-9004-3de3e4739ad5", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-889517255-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": 
{}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0cee74dd60da4a839bb5eb0ba3137edf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc8abb47-59", "ovs_interfaceid": "dc8abb47-5960-4824-b04c-1903f2eb5e32", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 14:29:29 compute-0 nova_compute[250018]: 2026-01-20 14:29:29.586 250022 DEBUG nova.network.os_vif_util [None req-45d14fa6-8879-40cf-b833-137ecce45c8e 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Converting VIF {"id": "dc8abb47-5960-4824-b04c-1903f2eb5e32", "address": "fa:16:3e:76:68:31", "network": {"id": "02f86d1d-5cad-49c5-9004-3de3e4739ad5", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-889517255-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0cee74dd60da4a839bb5eb0ba3137edf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc8abb47-59", "ovs_interfaceid": "dc8abb47-5960-4824-b04c-1903f2eb5e32", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:29:29 compute-0 nova_compute[250018]: 2026-01-20 14:29:29.587 250022 DEBUG nova.network.os_vif_util [None req-45d14fa6-8879-40cf-b833-137ecce45c8e 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:76:68:31,bridge_name='br-int',has_traffic_filtering=True,id=dc8abb47-5960-4824-b04c-1903f2eb5e32,network=Network(02f86d1d-5cad-49c5-9004-3de3e4739ad5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdc8abb47-59') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:29:29 compute-0 nova_compute[250018]: 2026-01-20 14:29:29.587 250022 DEBUG os_vif [None req-45d14fa6-8879-40cf-b833-137ecce45c8e 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:76:68:31,bridge_name='br-int',has_traffic_filtering=True,id=dc8abb47-5960-4824-b04c-1903f2eb5e32,network=Network(02f86d1d-5cad-49c5-9004-3de3e4739ad5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdc8abb47-59') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 14:29:29 compute-0 nova_compute[250018]: 2026-01-20 14:29:29.588 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:29:29 compute-0 nova_compute[250018]: 2026-01-20 14:29:29.588 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:29:29 compute-0 nova_compute[250018]: 2026-01-20 14:29:29.589 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:29:29 compute-0 nova_compute[250018]: 2026-01-20 14:29:29.597 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:29:29 compute-0 nova_compute[250018]: 2026-01-20 14:29:29.597 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdc8abb47-59, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:29:29 compute-0 nova_compute[250018]: 2026-01-20 14:29:29.598 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapdc8abb47-59, col_values=(('external_ids', {'iface-id': 'dc8abb47-5960-4824-b04c-1903f2eb5e32', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:76:68:31', 'vm-uuid': 'ad62888a-ef27-43b4-bb6c-439541ff5524'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:29:29 compute-0 nova_compute[250018]: 2026-01-20 14:29:29.600 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:29:29 compute-0 NetworkManager[48960]: <info>  [1768919369.6017] manager: (tapdc8abb47-59): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/56)
Jan 20 14:29:29 compute-0 nova_compute[250018]: 2026-01-20 14:29:29.605 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:29:29 compute-0 nova_compute[250018]: 2026-01-20 14:29:29.608 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:29:29 compute-0 nova_compute[250018]: 2026-01-20 14:29:29.609 250022 INFO os_vif [None req-45d14fa6-8879-40cf-b833-137ecce45c8e 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:76:68:31,bridge_name='br-int',has_traffic_filtering=True,id=dc8abb47-5960-4824-b04c-1903f2eb5e32,network=Network(02f86d1d-5cad-49c5-9004-3de3e4739ad5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdc8abb47-59')
Jan 20 14:29:29 compute-0 nova_compute[250018]: 2026-01-20 14:29:29.667 250022 DEBUG nova.virt.libvirt.driver [None req-45d14fa6-8879-40cf-b833-137ecce45c8e 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:29:29 compute-0 nova_compute[250018]: 2026-01-20 14:29:29.668 250022 DEBUG nova.virt.libvirt.driver [None req-45d14fa6-8879-40cf-b833-137ecce45c8e 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:29:29 compute-0 nova_compute[250018]: 2026-01-20 14:29:29.668 250022 DEBUG nova.virt.libvirt.driver [None req-45d14fa6-8879-40cf-b833-137ecce45c8e 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] No VIF found with MAC fa:16:3e:76:68:31, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 14:29:29 compute-0 nova_compute[250018]: 2026-01-20 14:29:29.669 250022 INFO nova.virt.libvirt.driver [None req-45d14fa6-8879-40cf-b833-137ecce45c8e 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] [instance: ad62888a-ef27-43b4-bb6c-439541ff5524] Using config drive
Jan 20 14:29:29 compute-0 nova_compute[250018]: 2026-01-20 14:29:29.698 250022 DEBUG nova.storage.rbd_utils [None req-45d14fa6-8879-40cf-b833-137ecce45c8e 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] rbd image ad62888a-ef27-43b4-bb6c-439541ff5524_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:29:29 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:29:29 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:29:29 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:29:29.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:29:30 compute-0 ceph-mon[74360]: pgmap v1161: 321 pgs: 321 active+clean; 296 MiB data, 472 MiB used, 21 GiB / 21 GiB avail; 1.8 MiB/s rd, 4.3 MiB/s wr, 190 op/s
Jan 20 14:29:30 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/1801464026' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:29:30 compute-0 nova_compute[250018]: 2026-01-20 14:29:30.460 250022 DEBUG nova.network.neutron [req-543a98ec-f7c8-4d7b-a91e-73c89fc478b9 req-928531da-2a56-452b-ace2-0d38dcf0cf74 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: ad62888a-ef27-43b4-bb6c-439541ff5524] Updated VIF entry in instance network info cache for port dc8abb47-5960-4824-b04c-1903f2eb5e32. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 14:29:30 compute-0 nova_compute[250018]: 2026-01-20 14:29:30.460 250022 DEBUG nova.network.neutron [req-543a98ec-f7c8-4d7b-a91e-73c89fc478b9 req-928531da-2a56-452b-ace2-0d38dcf0cf74 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: ad62888a-ef27-43b4-bb6c-439541ff5524] Updating instance_info_cache with network_info: [{"id": "dc8abb47-5960-4824-b04c-1903f2eb5e32", "address": "fa:16:3e:76:68:31", "network": {"id": "02f86d1d-5cad-49c5-9004-3de3e4739ad5", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-889517255-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0cee74dd60da4a839bb5eb0ba3137edf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc8abb47-59", "ovs_interfaceid": "dc8abb47-5960-4824-b04c-1903f2eb5e32", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:29:30 compute-0 podman[271307]: 2026-01-20 14:29:30.474155458 +0000 UTC m=+0.060117637 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 20 14:29:30 compute-0 nova_compute[250018]: 2026-01-20 14:29:30.479 250022 DEBUG oslo_concurrency.lockutils [req-543a98ec-f7c8-4d7b-a91e-73c89fc478b9 req-928531da-2a56-452b-ace2-0d38dcf0cf74 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-ad62888a-ef27-43b4-bb6c-439541ff5524" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:29:30 compute-0 nova_compute[250018]: 2026-01-20 14:29:30.489 250022 INFO nova.virt.libvirt.driver [None req-45d14fa6-8879-40cf-b833-137ecce45c8e 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] [instance: ad62888a-ef27-43b4-bb6c-439541ff5524] Creating config drive at /var/lib/nova/instances/ad62888a-ef27-43b4-bb6c-439541ff5524/disk.config
Jan 20 14:29:30 compute-0 nova_compute[250018]: 2026-01-20 14:29:30.495 250022 DEBUG oslo_concurrency.processutils [None req-45d14fa6-8879-40cf-b833-137ecce45c8e 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ad62888a-ef27-43b4-bb6c-439541ff5524/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpohwk1nqd execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:29:30 compute-0 nova_compute[250018]: 2026-01-20 14:29:30.517 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:29:30 compute-0 podman[271306]: 2026-01-20 14:29:30.530145353 +0000 UTC m=+0.116108012 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 20 14:29:30 compute-0 nova_compute[250018]: 2026-01-20 14:29:30.630 250022 DEBUG oslo_concurrency.processutils [None req-45d14fa6-8879-40cf-b833-137ecce45c8e 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ad62888a-ef27-43b4-bb6c-439541ff5524/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpohwk1nqd" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:29:30 compute-0 nova_compute[250018]: 2026-01-20 14:29:30.664 250022 DEBUG nova.storage.rbd_utils [None req-45d14fa6-8879-40cf-b833-137ecce45c8e 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] rbd image ad62888a-ef27-43b4-bb6c-439541ff5524_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:29:30 compute-0 nova_compute[250018]: 2026-01-20 14:29:30.668 250022 DEBUG oslo_concurrency.processutils [None req-45d14fa6-8879-40cf-b833-137ecce45c8e 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ad62888a-ef27-43b4-bb6c-439541ff5524/disk.config ad62888a-ef27-43b4-bb6c-439541ff5524_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:29:30 compute-0 nova_compute[250018]: 2026-01-20 14:29:30.693 250022 DEBUG oslo_concurrency.lockutils [None req-69535ae1-0353-42a7-8c9d-2289163476c0 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Acquiring lock "refresh_cache-9f5c9253-e2bd-42d3-8253-fac568daeda7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:29:30 compute-0 nova_compute[250018]: 2026-01-20 14:29:30.694 250022 DEBUG oslo_concurrency.lockutils [None req-69535ae1-0353-42a7-8c9d-2289163476c0 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Acquired lock "refresh_cache-9f5c9253-e2bd-42d3-8253-fac568daeda7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:29:30 compute-0 nova_compute[250018]: 2026-01-20 14:29:30.694 250022 DEBUG nova.network.neutron [None req-69535ae1-0353-42a7-8c9d-2289163476c0 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: 9f5c9253-e2bd-42d3-8253-fac568daeda7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:29:30 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:29:30 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:29:30 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:29:30.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:29:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:29:30.744 160071 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:29:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:29:30.744 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:29:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:29:30.744 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:29:30 compute-0 nova_compute[250018]: 2026-01-20 14:29:30.830 250022 DEBUG oslo_concurrency.processutils [None req-45d14fa6-8879-40cf-b833-137ecce45c8e 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ad62888a-ef27-43b4-bb6c-439541ff5524/disk.config ad62888a-ef27-43b4-bb6c-439541ff5524_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.161s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:29:30 compute-0 nova_compute[250018]: 2026-01-20 14:29:30.831 250022 INFO nova.virt.libvirt.driver [None req-45d14fa6-8879-40cf-b833-137ecce45c8e 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] [instance: ad62888a-ef27-43b4-bb6c-439541ff5524] Deleting local config drive /var/lib/nova/instances/ad62888a-ef27-43b4-bb6c-439541ff5524/disk.config because it was imported into RBD.
Jan 20 14:29:30 compute-0 kernel: tapdc8abb47-59: entered promiscuous mode
Jan 20 14:29:30 compute-0 NetworkManager[48960]: <info>  [1768919370.8778] manager: (tapdc8abb47-59): new Tun device (/org/freedesktop/NetworkManager/Devices/57)
Jan 20 14:29:30 compute-0 ovn_controller[148666]: 2026-01-20T14:29:30Z|00093|binding|INFO|Claiming lport dc8abb47-5960-4824-b04c-1903f2eb5e32 for this chassis.
Jan 20 14:29:30 compute-0 ovn_controller[148666]: 2026-01-20T14:29:30Z|00094|binding|INFO|dc8abb47-5960-4824-b04c-1903f2eb5e32: Claiming fa:16:3e:76:68:31 10.100.0.4
Jan 20 14:29:30 compute-0 nova_compute[250018]: 2026-01-20 14:29:30.878 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:29:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:29:30.891 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:76:68:31 10.100.0.4'], port_security=['fa:16:3e:76:68:31 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'ad62888a-ef27-43b4-bb6c-439541ff5524', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-02f86d1d-5cad-49c5-9004-3de3e4739ad5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0cee74dd60da4a839bb5eb0ba3137edf', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e08f10e3-3a95-4e33-b03d-21860ea0dc91', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1f4fb07a-2698-4a11-a9e3-5a66d678d9d5, chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=dc8abb47-5960-4824-b04c-1903f2eb5e32) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:29:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:29:30.892 160071 INFO neutron.agent.ovn.metadata.agent [-] Port dc8abb47-5960-4824-b04c-1903f2eb5e32 in datapath 02f86d1d-5cad-49c5-9004-3de3e4739ad5 bound to our chassis
Jan 20 14:29:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:29:30.893 160071 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 02f86d1d-5cad-49c5-9004-3de3e4739ad5
Jan 20 14:29:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:29:30.906 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[ba5a8690-f867-4b77-87ca-7fb84c030480]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:29:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:29:30.906 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap02f86d1d-51 in ovnmeta-02f86d1d-5cad-49c5-9004-3de3e4739ad5 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 14:29:30 compute-0 systemd-machined[216401]: New machine qemu-14-instance-0000001a.
Jan 20 14:29:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:29:30.908 257604 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap02f86d1d-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 14:29:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:29:30.909 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[f484a37d-ba3a-403d-8038-643901a0473e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:29:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:29:30.910 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[e2278463-c71a-4c71-8b5e-7e1593b64638]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:29:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:29:30.926 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[9215d0b3-2bc4-4880-a493-f100c1793d49]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:29:30 compute-0 systemd[1]: Started Virtual Machine qemu-14-instance-0000001a.
Jan 20 14:29:30 compute-0 systemd-udevd[271406]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:29:30 compute-0 nova_compute[250018]: 2026-01-20 14:29:30.950 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:29:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:29:30.954 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[5cc45c69-a1f0-44aa-b8b6-dc656715ab91]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:29:30 compute-0 nova_compute[250018]: 2026-01-20 14:29:30.956 250022 DEBUG nova.network.neutron [None req-69535ae1-0353-42a7-8c9d-2289163476c0 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: 9f5c9253-e2bd-42d3-8253-fac568daeda7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 14:29:30 compute-0 NetworkManager[48960]: <info>  [1768919370.9611] device (tapdc8abb47-59): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 14:29:30 compute-0 nova_compute[250018]: 2026-01-20 14:29:30.960 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:29:30 compute-0 NetworkManager[48960]: <info>  [1768919370.9622] device (tapdc8abb47-59): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 14:29:31 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:29:31.004 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[5a20df57-cfc2-4176-b9e9-661807cb0e25]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:29:31 compute-0 ovn_controller[148666]: 2026-01-20T14:29:31Z|00095|binding|INFO|Setting lport dc8abb47-5960-4824-b04c-1903f2eb5e32 ovn-installed in OVS
Jan 20 14:29:31 compute-0 ovn_controller[148666]: 2026-01-20T14:29:31Z|00096|binding|INFO|Setting lport dc8abb47-5960-4824-b04c-1903f2eb5e32 up in Southbound
Jan 20 14:29:31 compute-0 NetworkManager[48960]: <info>  [1768919371.0127] manager: (tap02f86d1d-50): new Veth device (/org/freedesktop/NetworkManager/Devices/58)
Jan 20 14:29:31 compute-0 systemd-udevd[271408]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:29:31 compute-0 nova_compute[250018]: 2026-01-20 14:29:31.011 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:29:31 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:29:31.011 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[3944f28c-34f6-4a93-89c1-a87160cf5202]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:29:31 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:29:31.051 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[a04823b9-2731-4d82-ad70-92ac1afbadc7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:29:31 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:29:31.054 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[68af7d67-520f-4c85-889d-c6d09726f978]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:29:31 compute-0 nova_compute[250018]: 2026-01-20 14:29:31.059 250022 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768919356.058218, cc200085-11d4-4fbb-afb9-fa6186779430 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:29:31 compute-0 nova_compute[250018]: 2026-01-20 14:29:31.059 250022 INFO nova.compute.manager [-] [instance: cc200085-11d4-4fbb-afb9-fa6186779430] VM Stopped (Lifecycle Event)
Jan 20 14:29:31 compute-0 NetworkManager[48960]: <info>  [1768919371.0733] device (tap02f86d1d-50): carrier: link connected
Jan 20 14:29:31 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:29:31.077 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[2ecf37df-b64a-419f-99b0-3d6ecd13ec15]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:29:31 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:29:31.092 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[94d5d138-1a90-4423-93bf-c8540aa4e38e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap02f86d1d-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c4:08:de'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 32], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 531179, 'reachable_time': 38036, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 271436, 'error': None, 'target': 'ovnmeta-02f86d1d-5cad-49c5-9004-3de3e4739ad5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:29:31 compute-0 nova_compute[250018]: 2026-01-20 14:29:31.094 250022 DEBUG nova.compute.manager [None req-feb21119-1cdd-4af2-a6d6-7d3809dde5e5 - - - - - -] [instance: cc200085-11d4-4fbb-afb9-fa6186779430] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:29:31 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:29:31.106 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[8e4b5005-4511-42f1-979f-9949f7ab00f6]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec4:8de'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 531179, 'tstamp': 531179}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 271437, 'error': None, 'target': 'ovnmeta-02f86d1d-5cad-49c5-9004-3de3e4739ad5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:29:31 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:29:31.120 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[8342fce7-b1e9-4f56-b72d-704af85b0699]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap02f86d1d-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c4:08:de'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 32], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 531179, 'reachable_time': 38036, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 271438, 'error': None, 'target': 'ovnmeta-02f86d1d-5cad-49c5-9004-3de3e4739ad5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:29:31 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:29:31.145 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[cecbc3ca-c5df-44e9-a509-5c43ef508979]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:29:31 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:29:31 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:29:31.201 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[98cc24d2-ea02-487f-8524-875f91f6ed0d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:29:31 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:29:31.202 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap02f86d1d-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:29:31 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:29:31.202 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:29:31 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:29:31.203 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap02f86d1d-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:29:31 compute-0 kernel: tap02f86d1d-50: entered promiscuous mode
Jan 20 14:29:31 compute-0 NetworkManager[48960]: <info>  [1768919371.2470] manager: (tap02f86d1d-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/59)
Jan 20 14:29:31 compute-0 nova_compute[250018]: 2026-01-20 14:29:31.246 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:29:31 compute-0 nova_compute[250018]: 2026-01-20 14:29:31.248 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:29:31 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:29:31.254 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap02f86d1d-50, col_values=(('external_ids', {'iface-id': '2f798c1c-f9b6-4141-904d-4124d05888ca'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:29:31 compute-0 ovn_controller[148666]: 2026-01-20T14:29:31Z|00097|binding|INFO|Releasing lport 2f798c1c-f9b6-4141-904d-4124d05888ca from this chassis (sb_readonly=0)
Jan 20 14:29:31 compute-0 nova_compute[250018]: 2026-01-20 14:29:31.256 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:29:31 compute-0 nova_compute[250018]: 2026-01-20 14:29:31.257 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:29:31 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:29:31.264 160071 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/02f86d1d-5cad-49c5-9004-3de3e4739ad5.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/02f86d1d-5cad-49c5-9004-3de3e4739ad5.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 14:29:31 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:29:31.265 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[c9de260e-7e9e-4e05-8314-c494b9df3e34]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:29:31 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:29:31.268 160071 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 14:29:31 compute-0 ovn_metadata_agent[160049]: global
Jan 20 14:29:31 compute-0 ovn_metadata_agent[160049]:     log         /dev/log local0 debug
Jan 20 14:29:31 compute-0 ovn_metadata_agent[160049]:     log-tag     haproxy-metadata-proxy-02f86d1d-5cad-49c5-9004-3de3e4739ad5
Jan 20 14:29:31 compute-0 ovn_metadata_agent[160049]:     user        root
Jan 20 14:29:31 compute-0 ovn_metadata_agent[160049]:     group       root
Jan 20 14:29:31 compute-0 ovn_metadata_agent[160049]:     maxconn     1024
Jan 20 14:29:31 compute-0 ovn_metadata_agent[160049]:     pidfile     /var/lib/neutron/external/pids/02f86d1d-5cad-49c5-9004-3de3e4739ad5.pid.haproxy
Jan 20 14:29:31 compute-0 ovn_metadata_agent[160049]:     daemon
Jan 20 14:29:31 compute-0 ovn_metadata_agent[160049]: 
Jan 20 14:29:31 compute-0 ovn_metadata_agent[160049]: defaults
Jan 20 14:29:31 compute-0 ovn_metadata_agent[160049]:     log global
Jan 20 14:29:31 compute-0 ovn_metadata_agent[160049]:     mode http
Jan 20 14:29:31 compute-0 ovn_metadata_agent[160049]:     option httplog
Jan 20 14:29:31 compute-0 ovn_metadata_agent[160049]:     option dontlognull
Jan 20 14:29:31 compute-0 ovn_metadata_agent[160049]:     option http-server-close
Jan 20 14:29:31 compute-0 ovn_metadata_agent[160049]:     option forwardfor
Jan 20 14:29:31 compute-0 ovn_metadata_agent[160049]:     retries                 3
Jan 20 14:29:31 compute-0 ovn_metadata_agent[160049]:     timeout http-request    30s
Jan 20 14:29:31 compute-0 ovn_metadata_agent[160049]:     timeout connect         30s
Jan 20 14:29:31 compute-0 ovn_metadata_agent[160049]:     timeout client          32s
Jan 20 14:29:31 compute-0 ovn_metadata_agent[160049]:     timeout server          32s
Jan 20 14:29:31 compute-0 ovn_metadata_agent[160049]:     timeout http-keep-alive 30s
Jan 20 14:29:31 compute-0 ovn_metadata_agent[160049]: 
Jan 20 14:29:31 compute-0 ovn_metadata_agent[160049]: 
Jan 20 14:29:31 compute-0 ovn_metadata_agent[160049]: listen listener
Jan 20 14:29:31 compute-0 ovn_metadata_agent[160049]:     bind 169.254.169.254:80
Jan 20 14:29:31 compute-0 ovn_metadata_agent[160049]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 14:29:31 compute-0 ovn_metadata_agent[160049]:     http-request add-header X-OVN-Network-ID 02f86d1d-5cad-49c5-9004-3de3e4739ad5
Jan 20 14:29:31 compute-0 ovn_metadata_agent[160049]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 14:29:31 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:29:31.268 160071 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-02f86d1d-5cad-49c5-9004-3de3e4739ad5', 'env', 'PROCESS_TAG=haproxy-02f86d1d-5cad-49c5-9004-3de3e4739ad5', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/02f86d1d-5cad-49c5-9004-3de3e4739ad5.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 14:29:31 compute-0 nova_compute[250018]: 2026-01-20 14:29:31.279 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:29:31 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1162: 321 pgs: 321 active+clean; 296 MiB data, 472 MiB used, 21 GiB / 21 GiB avail; 2.5 MiB/s rd, 2.4 MiB/s wr, 160 op/s
Jan 20 14:29:31 compute-0 nova_compute[250018]: 2026-01-20 14:29:31.372 250022 DEBUG nova.compute.manager [req-cc4b03e7-76b7-44e2-a30d-d5b19a2b5025 req-1b2f19d7-06c3-4bd6-8192-8dae6c2ad647 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: ad62888a-ef27-43b4-bb6c-439541ff5524] Received event network-vif-plugged-dc8abb47-5960-4824-b04c-1903f2eb5e32 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:29:31 compute-0 nova_compute[250018]: 2026-01-20 14:29:31.372 250022 DEBUG oslo_concurrency.lockutils [req-cc4b03e7-76b7-44e2-a30d-d5b19a2b5025 req-1b2f19d7-06c3-4bd6-8192-8dae6c2ad647 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "ad62888a-ef27-43b4-bb6c-439541ff5524-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:29:31 compute-0 nova_compute[250018]: 2026-01-20 14:29:31.373 250022 DEBUG oslo_concurrency.lockutils [req-cc4b03e7-76b7-44e2-a30d-d5b19a2b5025 req-1b2f19d7-06c3-4bd6-8192-8dae6c2ad647 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "ad62888a-ef27-43b4-bb6c-439541ff5524-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:29:31 compute-0 nova_compute[250018]: 2026-01-20 14:29:31.373 250022 DEBUG oslo_concurrency.lockutils [req-cc4b03e7-76b7-44e2-a30d-d5b19a2b5025 req-1b2f19d7-06c3-4bd6-8192-8dae6c2ad647 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "ad62888a-ef27-43b4-bb6c-439541ff5524-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:29:31 compute-0 nova_compute[250018]: 2026-01-20 14:29:31.373 250022 DEBUG nova.compute.manager [req-cc4b03e7-76b7-44e2-a30d-d5b19a2b5025 req-1b2f19d7-06c3-4bd6-8192-8dae6c2ad647 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: ad62888a-ef27-43b4-bb6c-439541ff5524] Processing event network-vif-plugged-dc8abb47-5960-4824-b04c-1903f2eb5e32 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 20 14:29:31 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/650463931' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:29:31 compute-0 nova_compute[250018]: 2026-01-20 14:29:31.462 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768919371.4616292, ad62888a-ef27-43b4-bb6c-439541ff5524 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:29:31 compute-0 nova_compute[250018]: 2026-01-20 14:29:31.462 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: ad62888a-ef27-43b4-bb6c-439541ff5524] VM Started (Lifecycle Event)
Jan 20 14:29:31 compute-0 nova_compute[250018]: 2026-01-20 14:29:31.464 250022 DEBUG nova.compute.manager [None req-45d14fa6-8879-40cf-b833-137ecce45c8e 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] [instance: ad62888a-ef27-43b4-bb6c-439541ff5524] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 14:29:31 compute-0 nova_compute[250018]: 2026-01-20 14:29:31.468 250022 DEBUG nova.virt.libvirt.driver [None req-45d14fa6-8879-40cf-b833-137ecce45c8e 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] [instance: ad62888a-ef27-43b4-bb6c-439541ff5524] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 14:29:31 compute-0 nova_compute[250018]: 2026-01-20 14:29:31.471 250022 INFO nova.virt.libvirt.driver [-] [instance: ad62888a-ef27-43b4-bb6c-439541ff5524] Instance spawned successfully.
Jan 20 14:29:31 compute-0 nova_compute[250018]: 2026-01-20 14:29:31.472 250022 DEBUG nova.virt.libvirt.driver [None req-45d14fa6-8879-40cf-b833-137ecce45c8e 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] [instance: ad62888a-ef27-43b4-bb6c-439541ff5524] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 14:29:31 compute-0 nova_compute[250018]: 2026-01-20 14:29:31.495 250022 DEBUG nova.virt.libvirt.driver [None req-45d14fa6-8879-40cf-b833-137ecce45c8e 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] [instance: ad62888a-ef27-43b4-bb6c-439541ff5524] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:29:31 compute-0 nova_compute[250018]: 2026-01-20 14:29:31.495 250022 DEBUG nova.virt.libvirt.driver [None req-45d14fa6-8879-40cf-b833-137ecce45c8e 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] [instance: ad62888a-ef27-43b4-bb6c-439541ff5524] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:29:31 compute-0 nova_compute[250018]: 2026-01-20 14:29:31.496 250022 DEBUG nova.virt.libvirt.driver [None req-45d14fa6-8879-40cf-b833-137ecce45c8e 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] [instance: ad62888a-ef27-43b4-bb6c-439541ff5524] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:29:31 compute-0 nova_compute[250018]: 2026-01-20 14:29:31.497 250022 DEBUG nova.virt.libvirt.driver [None req-45d14fa6-8879-40cf-b833-137ecce45c8e 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] [instance: ad62888a-ef27-43b4-bb6c-439541ff5524] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:29:31 compute-0 nova_compute[250018]: 2026-01-20 14:29:31.497 250022 DEBUG nova.virt.libvirt.driver [None req-45d14fa6-8879-40cf-b833-137ecce45c8e 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] [instance: ad62888a-ef27-43b4-bb6c-439541ff5524] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:29:31 compute-0 nova_compute[250018]: 2026-01-20 14:29:31.498 250022 DEBUG nova.virt.libvirt.driver [None req-45d14fa6-8879-40cf-b833-137ecce45c8e 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] [instance: ad62888a-ef27-43b4-bb6c-439541ff5524] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:29:31 compute-0 nova_compute[250018]: 2026-01-20 14:29:31.501 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: ad62888a-ef27-43b4-bb6c-439541ff5524] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:29:31 compute-0 nova_compute[250018]: 2026-01-20 14:29:31.504 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: ad62888a-ef27-43b4-bb6c-439541ff5524] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:29:31 compute-0 nova_compute[250018]: 2026-01-20 14:29:31.533 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: ad62888a-ef27-43b4-bb6c-439541ff5524] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:29:31 compute-0 nova_compute[250018]: 2026-01-20 14:29:31.533 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768919371.4618037, ad62888a-ef27-43b4-bb6c-439541ff5524 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:29:31 compute-0 nova_compute[250018]: 2026-01-20 14:29:31.534 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: ad62888a-ef27-43b4-bb6c-439541ff5524] VM Paused (Lifecycle Event)
Jan 20 14:29:31 compute-0 nova_compute[250018]: 2026-01-20 14:29:31.575 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: ad62888a-ef27-43b4-bb6c-439541ff5524] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:29:31 compute-0 nova_compute[250018]: 2026-01-20 14:29:31.578 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768919371.4680517, ad62888a-ef27-43b4-bb6c-439541ff5524 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:29:31 compute-0 nova_compute[250018]: 2026-01-20 14:29:31.578 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: ad62888a-ef27-43b4-bb6c-439541ff5524] VM Resumed (Lifecycle Event)
Jan 20 14:29:31 compute-0 nova_compute[250018]: 2026-01-20 14:29:31.614 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: ad62888a-ef27-43b4-bb6c-439541ff5524] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:29:31 compute-0 nova_compute[250018]: 2026-01-20 14:29:31.617 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: ad62888a-ef27-43b4-bb6c-439541ff5524] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:29:31 compute-0 nova_compute[250018]: 2026-01-20 14:29:31.629 250022 INFO nova.compute.manager [None req-45d14fa6-8879-40cf-b833-137ecce45c8e 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] [instance: ad62888a-ef27-43b4-bb6c-439541ff5524] Took 12.48 seconds to spawn the instance on the hypervisor.
Jan 20 14:29:31 compute-0 nova_compute[250018]: 2026-01-20 14:29:31.630 250022 DEBUG nova.compute.manager [None req-45d14fa6-8879-40cf-b833-137ecce45c8e 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] [instance: ad62888a-ef27-43b4-bb6c-439541ff5524] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:29:31 compute-0 podman[271512]: 2026-01-20 14:29:31.656061909 +0000 UTC m=+0.057793734 container create c1cc1769e78c249f0edec290c2389bc855f4d8ddff40a6c844fbcc6f68cd863e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-02f86d1d-5cad-49c5-9004-3de3e4739ad5, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Jan 20 14:29:31 compute-0 systemd[1]: Started libpod-conmon-c1cc1769e78c249f0edec290c2389bc855f4d8ddff40a6c844fbcc6f68cd863e.scope.
Jan 20 14:29:31 compute-0 nova_compute[250018]: 2026-01-20 14:29:31.707 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: ad62888a-ef27-43b4-bb6c-439541ff5524] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:29:31 compute-0 nova_compute[250018]: 2026-01-20 14:29:31.714 250022 DEBUG nova.network.neutron [None req-69535ae1-0353-42a7-8c9d-2289163476c0 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: 9f5c9253-e2bd-42d3-8253-fac568daeda7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:29:31 compute-0 podman[271512]: 2026-01-20 14:29:31.627319216 +0000 UTC m=+0.029051061 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 14:29:31 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:29:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ede76b2e5f9c26cb8825a67ace742c989a58072ca2992f4a3c537602accae853/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 14:29:31 compute-0 nova_compute[250018]: 2026-01-20 14:29:31.742 250022 DEBUG oslo_concurrency.lockutils [None req-69535ae1-0353-42a7-8c9d-2289163476c0 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Releasing lock "refresh_cache-9f5c9253-e2bd-42d3-8253-fac568daeda7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:29:31 compute-0 podman[271512]: 2026-01-20 14:29:31.751613908 +0000 UTC m=+0.153345723 container init c1cc1769e78c249f0edec290c2389bc855f4d8ddff40a6c844fbcc6f68cd863e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-02f86d1d-5cad-49c5-9004-3de3e4739ad5, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 20 14:29:31 compute-0 nova_compute[250018]: 2026-01-20 14:29:31.754 250022 INFO nova.compute.manager [None req-45d14fa6-8879-40cf-b833-137ecce45c8e 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] [instance: ad62888a-ef27-43b4-bb6c-439541ff5524] Took 13.85 seconds to build instance.
Jan 20 14:29:31 compute-0 podman[271512]: 2026-01-20 14:29:31.75728406 +0000 UTC m=+0.159015865 container start c1cc1769e78c249f0edec290c2389bc855f4d8ddff40a6c844fbcc6f68cd863e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-02f86d1d-5cad-49c5-9004-3de3e4739ad5, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:29:31 compute-0 neutron-haproxy-ovnmeta-02f86d1d-5cad-49c5-9004-3de3e4739ad5[271525]: [NOTICE]   (271529) : New worker (271531) forked
Jan 20 14:29:31 compute-0 neutron-haproxy-ovnmeta-02f86d1d-5cad-49c5-9004-3de3e4739ad5[271525]: [NOTICE]   (271529) : Loading success.
Jan 20 14:29:31 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:29:31 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:29:31 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:29:31.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:29:31 compute-0 nova_compute[250018]: 2026-01-20 14:29:31.810 250022 DEBUG oslo_concurrency.lockutils [None req-45d14fa6-8879-40cf-b833-137ecce45c8e 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Lock "ad62888a-ef27-43b4-bb6c-439541ff5524" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.006s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:29:31 compute-0 nova_compute[250018]: 2026-01-20 14:29:31.925 250022 DEBUG nova.virt.libvirt.driver [None req-69535ae1-0353-42a7-8c9d-2289163476c0 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: 9f5c9253-e2bd-42d3-8253-fac568daeda7] Starting migrate_disk_and_power_off migrate_disk_and_power_off /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11511
Jan 20 14:29:31 compute-0 nova_compute[250018]: 2026-01-20 14:29:31.926 250022 DEBUG nova.virt.libvirt.volume.remotefs [None req-69535ae1-0353-42a7-8c9d-2289163476c0 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Creating file /var/lib/nova/instances/9f5c9253-e2bd-42d3-8253-fac568daeda7/a8a22162458f4027a19a30149a0de405.tmp on remote host 192.168.122.101 create_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:79
Jan 20 14:29:31 compute-0 nova_compute[250018]: 2026-01-20 14:29:31.926 250022 DEBUG oslo_concurrency.processutils [None req-69535ae1-0353-42a7-8c9d-2289163476c0 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.101 touch /var/lib/nova/instances/9f5c9253-e2bd-42d3-8253-fac568daeda7/a8a22162458f4027a19a30149a0de405.tmp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:29:32 compute-0 nova_compute[250018]: 2026-01-20 14:29:32.426 250022 DEBUG oslo_concurrency.processutils [None req-69535ae1-0353-42a7-8c9d-2289163476c0 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] CMD "ssh -o BatchMode=yes 192.168.122.101 touch /var/lib/nova/instances/9f5c9253-e2bd-42d3-8253-fac568daeda7/a8a22162458f4027a19a30149a0de405.tmp" returned: 1 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:29:32 compute-0 nova_compute[250018]: 2026-01-20 14:29:32.427 250022 DEBUG oslo_concurrency.processutils [None req-69535ae1-0353-42a7-8c9d-2289163476c0 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] 'ssh -o BatchMode=yes 192.168.122.101 touch /var/lib/nova/instances/9f5c9253-e2bd-42d3-8253-fac568daeda7/a8a22162458f4027a19a30149a0de405.tmp' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Jan 20 14:29:32 compute-0 nova_compute[250018]: 2026-01-20 14:29:32.427 250022 DEBUG nova.virt.libvirt.volume.remotefs [None req-69535ae1-0353-42a7-8c9d-2289163476c0 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Creating directory /var/lib/nova/instances/9f5c9253-e2bd-42d3-8253-fac568daeda7 on remote host 192.168.122.101 create_dir /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:91
Jan 20 14:29:32 compute-0 nova_compute[250018]: 2026-01-20 14:29:32.427 250022 DEBUG oslo_concurrency.processutils [None req-69535ae1-0353-42a7-8c9d-2289163476c0 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.101 mkdir -p /var/lib/nova/instances/9f5c9253-e2bd-42d3-8253-fac568daeda7 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:29:32 compute-0 ceph-mon[74360]: pgmap v1162: 321 pgs: 321 active+clean; 296 MiB data, 472 MiB used, 21 GiB / 21 GiB avail; 2.5 MiB/s rd, 2.4 MiB/s wr, 160 op/s
Jan 20 14:29:32 compute-0 nova_compute[250018]: 2026-01-20 14:29:32.624 250022 DEBUG oslo_concurrency.processutils [None req-69535ae1-0353-42a7-8c9d-2289163476c0 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] CMD "ssh -o BatchMode=yes 192.168.122.101 mkdir -p /var/lib/nova/instances/9f5c9253-e2bd-42d3-8253-fac568daeda7" returned: 0 in 0.197s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:29:32 compute-0 nova_compute[250018]: 2026-01-20 14:29:32.628 250022 DEBUG nova.virt.libvirt.driver [None req-69535ae1-0353-42a7-8c9d-2289163476c0 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: 9f5c9253-e2bd-42d3-8253-fac568daeda7] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Jan 20 14:29:32 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:29:32 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:29:32 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:29:32.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:29:33 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1163: 321 pgs: 321 active+clean; 296 MiB data, 472 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.0 MiB/s wr, 135 op/s
Jan 20 14:29:33 compute-0 nova_compute[250018]: 2026-01-20 14:29:33.595 250022 DEBUG nova.compute.manager [req-3102eaa9-87d1-4661-9937-a1e195e8b618 req-fa4e8780-4ab9-4ede-8f3a-d399cf78330f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: ad62888a-ef27-43b4-bb6c-439541ff5524] Received event network-vif-plugged-dc8abb47-5960-4824-b04c-1903f2eb5e32 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:29:33 compute-0 nova_compute[250018]: 2026-01-20 14:29:33.595 250022 DEBUG oslo_concurrency.lockutils [req-3102eaa9-87d1-4661-9937-a1e195e8b618 req-fa4e8780-4ab9-4ede-8f3a-d399cf78330f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "ad62888a-ef27-43b4-bb6c-439541ff5524-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:29:33 compute-0 nova_compute[250018]: 2026-01-20 14:29:33.595 250022 DEBUG oslo_concurrency.lockutils [req-3102eaa9-87d1-4661-9937-a1e195e8b618 req-fa4e8780-4ab9-4ede-8f3a-d399cf78330f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "ad62888a-ef27-43b4-bb6c-439541ff5524-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:29:33 compute-0 nova_compute[250018]: 2026-01-20 14:29:33.595 250022 DEBUG oslo_concurrency.lockutils [req-3102eaa9-87d1-4661-9937-a1e195e8b618 req-fa4e8780-4ab9-4ede-8f3a-d399cf78330f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "ad62888a-ef27-43b4-bb6c-439541ff5524-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:29:33 compute-0 nova_compute[250018]: 2026-01-20 14:29:33.595 250022 DEBUG nova.compute.manager [req-3102eaa9-87d1-4661-9937-a1e195e8b618 req-fa4e8780-4ab9-4ede-8f3a-d399cf78330f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: ad62888a-ef27-43b4-bb6c-439541ff5524] No waiting events found dispatching network-vif-plugged-dc8abb47-5960-4824-b04c-1903f2eb5e32 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:29:33 compute-0 nova_compute[250018]: 2026-01-20 14:29:33.596 250022 WARNING nova.compute.manager [req-3102eaa9-87d1-4661-9937-a1e195e8b618 req-fa4e8780-4ab9-4ede-8f3a-d399cf78330f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: ad62888a-ef27-43b4-bb6c-439541ff5524] Received unexpected event network-vif-plugged-dc8abb47-5960-4824-b04c-1903f2eb5e32 for instance with vm_state active and task_state None.
Jan 20 14:29:33 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:29:33 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:29:33 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:29:33.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:29:34 compute-0 ceph-mon[74360]: pgmap v1163: 321 pgs: 321 active+clean; 296 MiB data, 472 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.0 MiB/s wr, 135 op/s
Jan 20 14:29:34 compute-0 nova_compute[250018]: 2026-01-20 14:29:34.600 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:29:34 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:29:34 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:29:34 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:29:34.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:29:34 compute-0 NetworkManager[48960]: <info>  [1768919374.7707] manager: (patch-provnet-b62c391b-f7a3-4a38-a0df-72ac0383ca74-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/60)
Jan 20 14:29:34 compute-0 NetworkManager[48960]: <info>  [1768919374.7720] manager: (patch-br-int-to-provnet-b62c391b-f7a3-4a38-a0df-72ac0383ca74): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/61)
Jan 20 14:29:34 compute-0 nova_compute[250018]: 2026-01-20 14:29:34.769 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:29:34 compute-0 nova_compute[250018]: 2026-01-20 14:29:34.891 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:29:34 compute-0 ovn_controller[148666]: 2026-01-20T14:29:34Z|00098|binding|INFO|Releasing lport 2f798c1c-f9b6-4141-904d-4124d05888ca from this chassis (sb_readonly=0)
Jan 20 14:29:34 compute-0 nova_compute[250018]: 2026-01-20 14:29:34.900 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:29:35 compute-0 nova_compute[250018]: 2026-01-20 14:29:35.188 250022 DEBUG nova.compute.manager [req-dfd2ae75-3924-4590-ac4d-76aad47f9264 req-3235068f-aa26-4946-83c1-a6e23d22b0fe 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: ad62888a-ef27-43b4-bb6c-439541ff5524] Received event network-changed-dc8abb47-5960-4824-b04c-1903f2eb5e32 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:29:35 compute-0 nova_compute[250018]: 2026-01-20 14:29:35.189 250022 DEBUG nova.compute.manager [req-dfd2ae75-3924-4590-ac4d-76aad47f9264 req-3235068f-aa26-4946-83c1-a6e23d22b0fe 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: ad62888a-ef27-43b4-bb6c-439541ff5524] Refreshing instance network info cache due to event network-changed-dc8abb47-5960-4824-b04c-1903f2eb5e32. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:29:35 compute-0 nova_compute[250018]: 2026-01-20 14:29:35.189 250022 DEBUG oslo_concurrency.lockutils [req-dfd2ae75-3924-4590-ac4d-76aad47f9264 req-3235068f-aa26-4946-83c1-a6e23d22b0fe 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-ad62888a-ef27-43b4-bb6c-439541ff5524" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:29:35 compute-0 nova_compute[250018]: 2026-01-20 14:29:35.190 250022 DEBUG oslo_concurrency.lockutils [req-dfd2ae75-3924-4590-ac4d-76aad47f9264 req-3235068f-aa26-4946-83c1-a6e23d22b0fe 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-ad62888a-ef27-43b4-bb6c-439541ff5524" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:29:35 compute-0 nova_compute[250018]: 2026-01-20 14:29:35.190 250022 DEBUG nova.network.neutron [req-dfd2ae75-3924-4590-ac4d-76aad47f9264 req-3235068f-aa26-4946-83c1-a6e23d22b0fe 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: ad62888a-ef27-43b4-bb6c-439541ff5524] Refreshing network info cache for port dc8abb47-5960-4824-b04c-1903f2eb5e32 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:29:35 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1164: 321 pgs: 321 active+clean; 296 MiB data, 472 MiB used, 21 GiB / 21 GiB avail; 3.4 MiB/s rd, 2.0 MiB/s wr, 185 op/s
Jan 20 14:29:35 compute-0 nova_compute[250018]: 2026-01-20 14:29:35.520 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:29:35 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:29:35 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:29:35 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:29:35.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:29:36 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:29:36 compute-0 ceph-mon[74360]: pgmap v1164: 321 pgs: 321 active+clean; 296 MiB data, 472 MiB used, 21 GiB / 21 GiB avail; 3.4 MiB/s rd, 2.0 MiB/s wr, 185 op/s
Jan 20 14:29:36 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:29:36 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:29:36 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:29:36.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:29:37 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1165: 321 pgs: 321 active+clean; 296 MiB data, 472 MiB used, 21 GiB / 21 GiB avail; 3.8 MiB/s rd, 36 KiB/s wr, 145 op/s
Jan 20 14:29:37 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:29:37 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:29:37 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:29:37.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:29:38 compute-0 ceph-osd[84815]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Jan 20 14:29:38 compute-0 ceph-mon[74360]: pgmap v1165: 321 pgs: 321 active+clean; 296 MiB data, 472 MiB used, 21 GiB / 21 GiB avail; 3.8 MiB/s rd, 36 KiB/s wr, 145 op/s
Jan 20 14:29:38 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:29:38 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:29:38 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:29:38.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:29:39 compute-0 ovn_controller[148666]: 2026-01-20T14:29:39Z|00099|binding|INFO|Releasing lport 2f798c1c-f9b6-4141-904d-4124d05888ca from this chassis (sb_readonly=0)
Jan 20 14:29:39 compute-0 nova_compute[250018]: 2026-01-20 14:29:39.245 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:29:39 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1166: 321 pgs: 321 active+clean; 305 MiB data, 479 MiB used, 21 GiB / 21 GiB avail; 3.3 MiB/s rd, 630 KiB/s wr, 140 op/s
Jan 20 14:29:39 compute-0 nova_compute[250018]: 2026-01-20 14:29:39.554 250022 DEBUG nova.network.neutron [req-dfd2ae75-3924-4590-ac4d-76aad47f9264 req-3235068f-aa26-4946-83c1-a6e23d22b0fe 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: ad62888a-ef27-43b4-bb6c-439541ff5524] Updated VIF entry in instance network info cache for port dc8abb47-5960-4824-b04c-1903f2eb5e32. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 14:29:39 compute-0 nova_compute[250018]: 2026-01-20 14:29:39.555 250022 DEBUG nova.network.neutron [req-dfd2ae75-3924-4590-ac4d-76aad47f9264 req-3235068f-aa26-4946-83c1-a6e23d22b0fe 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: ad62888a-ef27-43b4-bb6c-439541ff5524] Updating instance_info_cache with network_info: [{"id": "dc8abb47-5960-4824-b04c-1903f2eb5e32", "address": "fa:16:3e:76:68:31", "network": {"id": "02f86d1d-5cad-49c5-9004-3de3e4739ad5", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-889517255-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.239", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0cee74dd60da4a839bb5eb0ba3137edf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc8abb47-59", "ovs_interfaceid": "dc8abb47-5960-4824-b04c-1903f2eb5e32", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:29:39 compute-0 nova_compute[250018]: 2026-01-20 14:29:39.584 250022 DEBUG oslo_concurrency.lockutils [req-dfd2ae75-3924-4590-ac4d-76aad47f9264 req-3235068f-aa26-4946-83c1-a6e23d22b0fe 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-ad62888a-ef27-43b4-bb6c-439541ff5524" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:29:39 compute-0 nova_compute[250018]: 2026-01-20 14:29:39.602 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:29:39 compute-0 sudo[271547]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:29:39 compute-0 sudo[271547]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:29:39 compute-0 sudo[271547]: pam_unix(sudo:session): session closed for user root
Jan 20 14:29:39 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:29:39 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:29:39 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:29:39.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:29:39 compute-0 sudo[271572]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:29:39 compute-0 sudo[271572]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:29:39 compute-0 sudo[271572]: pam_unix(sudo:session): session closed for user root
Jan 20 14:29:40 compute-0 nova_compute[250018]: 2026-01-20 14:29:40.538 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:29:40 compute-0 ceph-mon[74360]: pgmap v1166: 321 pgs: 321 active+clean; 305 MiB data, 479 MiB used, 21 GiB / 21 GiB avail; 3.3 MiB/s rd, 630 KiB/s wr, 140 op/s
Jan 20 14:29:40 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:29:40 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:29:40 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:29:40.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:29:41 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:29:41 compute-0 nova_compute[250018]: 2026-01-20 14:29:41.290 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:29:41 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1167: 321 pgs: 321 active+clean; 329 MiB data, 497 MiB used, 21 GiB / 21 GiB avail; 3.2 MiB/s rd, 2.1 MiB/s wr, 167 op/s
Jan 20 14:29:41 compute-0 nova_compute[250018]: 2026-01-20 14:29:41.326 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:29:41 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:29:41.327 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:29:41 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:29:41.329 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 14:29:41 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:29:41 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:29:41 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:29:41.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:29:42 compute-0 ceph-mon[74360]: pgmap v1167: 321 pgs: 321 active+clean; 329 MiB data, 497 MiB used, 21 GiB / 21 GiB avail; 3.2 MiB/s rd, 2.1 MiB/s wr, 167 op/s
Jan 20 14:29:42 compute-0 nova_compute[250018]: 2026-01-20 14:29:42.667 250022 DEBUG nova.virt.libvirt.driver [None req-69535ae1-0353-42a7-8c9d-2289163476c0 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: 9f5c9253-e2bd-42d3-8253-fac568daeda7] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Jan 20 14:29:42 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:29:42 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:29:42 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:29:42.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:29:43 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1168: 321 pgs: 321 active+clean; 329 MiB data, 497 MiB used, 21 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 136 op/s
Jan 20 14:29:43 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:29:43 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:29:43 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:29:43.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:29:44 compute-0 ceph-mon[74360]: pgmap v1168: 321 pgs: 321 active+clean; 329 MiB data, 497 MiB used, 21 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 136 op/s
Jan 20 14:29:44 compute-0 nova_compute[250018]: 2026-01-20 14:29:44.606 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:29:44 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:29:44 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:29:44 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:29:44.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:29:44 compute-0 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d0000001b.scope: Deactivated successfully.
Jan 20 14:29:44 compute-0 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d0000001b.scope: Consumed 14.256s CPU time.
Jan 20 14:29:44 compute-0 systemd-machined[216401]: Machine qemu-13-instance-0000001b terminated.
Jan 20 14:29:45 compute-0 nova_compute[250018]: 2026-01-20 14:29:45.120 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:29:45 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1169: 321 pgs: 321 active+clean; 338 MiB data, 516 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.0 MiB/s wr, 165 op/s
Jan 20 14:29:45 compute-0 nova_compute[250018]: 2026-01-20 14:29:45.541 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:29:45 compute-0 nova_compute[250018]: 2026-01-20 14:29:45.681 250022 INFO nova.virt.libvirt.driver [None req-69535ae1-0353-42a7-8c9d-2289163476c0 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: 9f5c9253-e2bd-42d3-8253-fac568daeda7] Instance shutdown successfully after 13 seconds.
Jan 20 14:29:45 compute-0 nova_compute[250018]: 2026-01-20 14:29:45.687 250022 INFO nova.virt.libvirt.driver [-] [instance: 9f5c9253-e2bd-42d3-8253-fac568daeda7] Instance destroyed successfully.
Jan 20 14:29:45 compute-0 nova_compute[250018]: 2026-01-20 14:29:45.690 250022 DEBUG nova.virt.libvirt.driver [None req-69535ae1-0353-42a7-8c9d-2289163476c0 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] skipping disk for instance-0000001b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:29:45 compute-0 nova_compute[250018]: 2026-01-20 14:29:45.691 250022 DEBUG nova.virt.libvirt.driver [None req-69535ae1-0353-42a7-8c9d-2289163476c0 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] skipping disk for instance-0000001b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:29:45 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:29:45 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:29:45 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:29:45.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:29:45 compute-0 nova_compute[250018]: 2026-01-20 14:29:45.834 250022 DEBUG oslo_concurrency.lockutils [None req-69535ae1-0353-42a7-8c9d-2289163476c0 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Acquiring lock "9f5c9253-e2bd-42d3-8253-fac568daeda7-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:29:45 compute-0 nova_compute[250018]: 2026-01-20 14:29:45.834 250022 DEBUG oslo_concurrency.lockutils [None req-69535ae1-0353-42a7-8c9d-2289163476c0 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Lock "9f5c9253-e2bd-42d3-8253-fac568daeda7-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:29:45 compute-0 nova_compute[250018]: 2026-01-20 14:29:45.835 250022 DEBUG oslo_concurrency.lockutils [None req-69535ae1-0353-42a7-8c9d-2289163476c0 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Lock "9f5c9253-e2bd-42d3-8253-fac568daeda7-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:29:46 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:29:46 compute-0 ceph-mon[74360]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #60. Immutable memtables: 0.
Jan 20 14:29:46 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:29:46.163745) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 14:29:46 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:856] [default] [JOB 31] Flushing memtable with next log file: 60
Jan 20 14:29:46 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768919386163843, "job": 31, "event": "flush_started", "num_memtables": 1, "num_entries": 412, "num_deletes": 255, "total_data_size": 345209, "memory_usage": 354280, "flush_reason": "Manual Compaction"}
Jan 20 14:29:46 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:885] [default] [JOB 31] Level-0 flush table #61: started
Jan 20 14:29:46 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768919386168217, "cf_name": "default", "job": 31, "event": "table_file_creation", "file_number": 61, "file_size": 342211, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 26487, "largest_seqno": 26897, "table_properties": {"data_size": 339776, "index_size": 535, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 5437, "raw_average_key_size": 17, "raw_value_size": 335011, "raw_average_value_size": 1053, "num_data_blocks": 24, "num_entries": 318, "num_filter_entries": 318, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768919367, "oldest_key_time": 1768919367, "file_creation_time": 1768919386, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1688fb6f-f397-4aac-a0e8-874ff91d97a7", "db_session_id": "5V3N2TVXYZBCXP55EZHK", "orig_file_number": 61, "seqno_to_time_mapping": "N/A"}}
Jan 20 14:29:46 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 31] Flush lasted 4517 microseconds, and 1939 cpu microseconds.
Jan 20 14:29:46 compute-0 ceph-mon[74360]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 14:29:46 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:29:46.168271) [db/flush_job.cc:967] [default] [JOB 31] Level-0 flush table #61: 342211 bytes OK
Jan 20 14:29:46 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:29:46.168288) [db/memtable_list.cc:519] [default] Level-0 commit table #61 started
Jan 20 14:29:46 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:29:46.169688) [db/memtable_list.cc:722] [default] Level-0 commit table #61: memtable #1 done
Jan 20 14:29:46 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:29:46.169703) EVENT_LOG_v1 {"time_micros": 1768919386169698, "job": 31, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 14:29:46 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:29:46.169720) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 14:29:46 compute-0 ceph-mon[74360]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 31] Try to delete WAL files size 342646, prev total WAL file size 342646, number of live WAL files 2.
Jan 20 14:29:46 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000057.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:29:46 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:29:46.170111) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00353039' seq:72057594037927935, type:22 .. '6C6F676D00373630' seq:0, type:0; will stop at (end)
Jan 20 14:29:46 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 32] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 14:29:46 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 31 Base level 0, inputs: [61(334KB)], [59(10MB)]
Jan 20 14:29:46 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768919386170146, "job": 32, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [61], "files_L6": [59], "score": -1, "input_data_size": 11087592, "oldest_snapshot_seqno": -1}
Jan 20 14:29:46 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 32] Generated table #62: 5262 keys, 10979779 bytes, temperature: kUnknown
Jan 20 14:29:46 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768919386262424, "cf_name": "default", "job": 32, "event": "table_file_creation", "file_number": 62, "file_size": 10979779, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10941659, "index_size": 23866, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13189, "raw_key_size": 133952, "raw_average_key_size": 25, "raw_value_size": 10843844, "raw_average_value_size": 2060, "num_data_blocks": 979, "num_entries": 5262, "num_filter_entries": 5262, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768917305, "oldest_key_time": 0, "file_creation_time": 1768919386, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1688fb6f-f397-4aac-a0e8-874ff91d97a7", "db_session_id": "5V3N2TVXYZBCXP55EZHK", "orig_file_number": 62, "seqno_to_time_mapping": "N/A"}}
Jan 20 14:29:46 compute-0 ceph-mon[74360]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 14:29:46 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:29:46.262671) [db/compaction/compaction_job.cc:1663] [default] [JOB 32] Compacted 1@0 + 1@6 files to L6 => 10979779 bytes
Jan 20 14:29:46 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:29:46.264074) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 120.1 rd, 118.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 10.2 +0.0 blob) out(10.5 +0.0 blob), read-write-amplify(64.5) write-amplify(32.1) OK, records in: 5780, records dropped: 518 output_compression: NoCompression
Jan 20 14:29:46 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:29:46.264094) EVENT_LOG_v1 {"time_micros": 1768919386264085, "job": 32, "event": "compaction_finished", "compaction_time_micros": 92358, "compaction_time_cpu_micros": 31610, "output_level": 6, "num_output_files": 1, "total_output_size": 10979779, "num_input_records": 5780, "num_output_records": 5262, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 14:29:46 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000061.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:29:46 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768919386264270, "job": 32, "event": "table_file_deletion", "file_number": 61}
Jan 20 14:29:46 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000059.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:29:46 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768919386266570, "job": 32, "event": "table_file_deletion", "file_number": 59}
Jan 20 14:29:46 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:29:46.170056) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:29:46 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:29:46.266657) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:29:46 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:29:46.266663) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:29:46 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:29:46.266664) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:29:46 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:29:46.266665) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:29:46 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:29:46.266667) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:29:46 compute-0 ceph-mon[74360]: pgmap v1169: 321 pgs: 321 active+clean; 338 MiB data, 516 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.0 MiB/s wr, 165 op/s
Jan 20 14:29:46 compute-0 ovn_controller[148666]: 2026-01-20T14:29:46Z|00010|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:76:68:31 10.100.0.4
Jan 20 14:29:46 compute-0 ovn_controller[148666]: 2026-01-20T14:29:46Z|00011|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:76:68:31 10.100.0.4
Jan 20 14:29:46 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:29:46 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:29:46 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:29:46.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:29:47 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1170: 321 pgs: 321 active+clean; 349 MiB data, 527 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 3.9 MiB/s wr, 138 op/s
Jan 20 14:29:47 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e159 do_prune osdmap full prune enabled
Jan 20 14:29:47 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e160 e160: 3 total, 3 up, 3 in
Jan 20 14:29:47 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e160: 3 total, 3 up, 3 in
Jan 20 14:29:47 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:29:47 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:29:47 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:29:47.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:29:48 compute-0 ceph-mon[74360]: pgmap v1170: 321 pgs: 321 active+clean; 349 MiB data, 527 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 3.9 MiB/s wr, 138 op/s
Jan 20 14:29:48 compute-0 ceph-mon[74360]: osdmap e160: 3 total, 3 up, 3 in
Jan 20 14:29:48 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/3115653324' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:29:48 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:29:48 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:29:48 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:29:48.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:29:49 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1172: 321 pgs: 321 active+clean; 355 MiB data, 531 MiB used, 20 GiB / 21 GiB avail; 553 KiB/s rd, 4.3 MiB/s wr, 116 op/s
Jan 20 14:29:49 compute-0 nova_compute[250018]: 2026-01-20 14:29:49.623 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:29:49 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/2245526178' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:29:49 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:29:49 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:29:49 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:29:49.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:29:50 compute-0 nova_compute[250018]: 2026-01-20 14:29:50.544 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:29:50 compute-0 ceph-mon[74360]: pgmap v1172: 321 pgs: 321 active+clean; 355 MiB data, 531 MiB used, 20 GiB / 21 GiB avail; 553 KiB/s rd, 4.3 MiB/s wr, 116 op/s
Jan 20 14:29:50 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:29:50 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:29:50 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:29:50.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:29:51 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:29:51 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1173: 321 pgs: 321 active+clean; 362 MiB data, 540 MiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 2.6 MiB/s wr, 131 op/s
Jan 20 14:29:51 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:29:51.331 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=367c1a2c-b16a-4828-ab5a-626bb50023b4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:29:51 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/956641195' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:29:51 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:29:51 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:29:51 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:29:51.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:29:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:29:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:29:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:29:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:29:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Optimize plan auto_2026-01-20_14:29:52
Jan 20 14:29:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 14:29:52 compute-0 ceph-mgr[74653]: [balancer INFO root] do_upmap
Jan 20 14:29:52 compute-0 ceph-mgr[74653]: [balancer INFO root] pools ['default.rgw.meta', 'volumes', 'images', 'default.rgw.log', '.rgw.root', 'vms', 'backups', '.mgr', 'default.rgw.control', 'cephfs.cephfs.meta', 'cephfs.cephfs.data']
Jan 20 14:29:52 compute-0 ceph-mgr[74653]: [balancer INFO root] prepared 0/10 changes
Jan 20 14:29:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:29:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:29:52 compute-0 ceph-mon[74360]: pgmap v1173: 321 pgs: 321 active+clean; 362 MiB data, 540 MiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 2.6 MiB/s wr, 131 op/s
Jan 20 14:29:52 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:29:52 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:29:52 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:29:52.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:29:53 compute-0 sshd-session[271607]: Invalid user admin from 157.245.78.139 port 37684
Jan 20 14:29:53 compute-0 sshd-session[271607]: Connection closed by invalid user admin 157.245.78.139 port 37684 [preauth]
Jan 20 14:29:53 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1174: 321 pgs: 321 active+clean; 362 MiB data, 540 MiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 2.6 MiB/s wr, 131 op/s
Jan 20 14:29:53 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:29:53 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:29:53 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:29:53.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:29:53 compute-0 ceph-mon[74360]: pgmap v1174: 321 pgs: 321 active+clean; 362 MiB data, 540 MiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 2.6 MiB/s wr, 131 op/s
Jan 20 14:29:54 compute-0 nova_compute[250018]: 2026-01-20 14:29:54.625 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:29:54 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:29:54 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:29:54 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:29:54.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:29:55 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1175: 321 pgs: 321 active+clean; 386 MiB data, 553 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.9 MiB/s wr, 168 op/s
Jan 20 14:29:55 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/3861732727' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:29:55 compute-0 nova_compute[250018]: 2026-01-20 14:29:55.547 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:29:55 compute-0 nova_compute[250018]: 2026-01-20 14:29:55.609 250022 INFO nova.compute.manager [None req-5125edda-6367-41ce-aa7c-33caccb49490 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: 9f5c9253-e2bd-42d3-8253-fac568daeda7] Swapping old allocation on dict_keys(['068db7fd-4bd6-45a9-8bd6-a22cfe7596ed']) held by migration 9ed1b4f4-9705-4902-bd56-a18b9866cbf3 for instance
Jan 20 14:29:55 compute-0 nova_compute[250018]: 2026-01-20 14:29:55.653 250022 DEBUG nova.scheduler.client.report [None req-5125edda-6367-41ce-aa7c-33caccb49490 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Overwriting current allocation {'allocations': {'bbb02880-a710-4ac1-8b2c-5c09765848d1': {'resources': {'DISK_GB': 1, 'MEMORY_MB': 192, 'VCPU': 1}, 'generation': 26}}, 'project_id': 'f3c2e72a7148496394c8bcd618a19c80', 'user_id': '01a3d712f05049b19d4ecc7051720ad5', 'consumer_generation': 1} on consumer 9f5c9253-e2bd-42d3-8253-fac568daeda7 move_allocations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:2018
Jan 20 14:29:55 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:29:55 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:29:55 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:29:55.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:29:56 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:29:56 compute-0 nova_compute[250018]: 2026-01-20 14:29:56.417 250022 DEBUG oslo_concurrency.lockutils [None req-5125edda-6367-41ce-aa7c-33caccb49490 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Acquiring lock "refresh_cache-9f5c9253-e2bd-42d3-8253-fac568daeda7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:29:56 compute-0 nova_compute[250018]: 2026-01-20 14:29:56.418 250022 DEBUG oslo_concurrency.lockutils [None req-5125edda-6367-41ce-aa7c-33caccb49490 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Acquired lock "refresh_cache-9f5c9253-e2bd-42d3-8253-fac568daeda7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:29:56 compute-0 nova_compute[250018]: 2026-01-20 14:29:56.418 250022 DEBUG nova.network.neutron [None req-5125edda-6367-41ce-aa7c-33caccb49490 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: 9f5c9253-e2bd-42d3-8253-fac568daeda7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:29:56 compute-0 ceph-mon[74360]: pgmap v1175: 321 pgs: 321 active+clean; 386 MiB data, 553 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.9 MiB/s wr, 168 op/s
Jan 20 14:29:56 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:29:56 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:29:56 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:29:56.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:29:57 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1176: 321 pgs: 321 active+clean; 409 MiB data, 561 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.6 MiB/s wr, 156 op/s
Jan 20 14:29:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 14:29:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:29:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 14:29:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:29:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:29:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:29:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:29:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:29:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:29:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:29:57 compute-0 nova_compute[250018]: 2026-01-20 14:29:57.461 250022 DEBUG nova.network.neutron [None req-5125edda-6367-41ce-aa7c-33caccb49490 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: 9f5c9253-e2bd-42d3-8253-fac568daeda7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 14:29:57 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:29:57 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:29:57 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:29:57.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:29:57 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/2270811472' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:29:57 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/3031609186' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:29:57 compute-0 ceph-mon[74360]: pgmap v1176: 321 pgs: 321 active+clean; 409 MiB data, 561 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.6 MiB/s wr, 156 op/s
Jan 20 14:29:58 compute-0 nova_compute[250018]: 2026-01-20 14:29:58.604 250022 DEBUG nova.network.neutron [None req-5125edda-6367-41ce-aa7c-33caccb49490 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: 9f5c9253-e2bd-42d3-8253-fac568daeda7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:29:58 compute-0 nova_compute[250018]: 2026-01-20 14:29:58.624 250022 DEBUG oslo_concurrency.lockutils [None req-5125edda-6367-41ce-aa7c-33caccb49490 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Releasing lock "refresh_cache-9f5c9253-e2bd-42d3-8253-fac568daeda7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:29:58 compute-0 nova_compute[250018]: 2026-01-20 14:29:58.625 250022 DEBUG nova.virt.libvirt.driver [None req-5125edda-6367-41ce-aa7c-33caccb49490 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: 9f5c9253-e2bd-42d3-8253-fac568daeda7] Starting finish_revert_migration finish_revert_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11843
Jan 20 14:29:58 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:29:58 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:29:58 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:29:58.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:29:58 compute-0 nova_compute[250018]: 2026-01-20 14:29:58.798 250022 DEBUG nova.storage.rbd_utils [None req-5125edda-6367-41ce-aa7c-33caccb49490 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] rolling back rbd image(9f5c9253-e2bd-42d3-8253-fac568daeda7_disk) to snapshot(nova-resize) rollback_to_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:505
Jan 20 14:29:59 compute-0 nova_compute[250018]: 2026-01-20 14:29:59.012 250022 DEBUG nova.storage.rbd_utils [None req-5125edda-6367-41ce-aa7c-33caccb49490 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] removing snapshot(nova-resize) on rbd image(9f5c9253-e2bd-42d3-8253-fac568daeda7_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Jan 20 14:29:59 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e160 do_prune osdmap full prune enabled
Jan 20 14:29:59 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e161 e161: 3 total, 3 up, 3 in
Jan 20 14:29:59 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e161: 3 total, 3 up, 3 in
Jan 20 14:29:59 compute-0 nova_compute[250018]: 2026-01-20 14:29:59.125 250022 DEBUG nova.virt.libvirt.driver [None req-5125edda-6367-41ce-aa7c-33caccb49490 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: 9f5c9253-e2bd-42d3-8253-fac568daeda7] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encrypted': False, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'guest_format': None, 'size': 0, 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 14:29:59 compute-0 nova_compute[250018]: 2026-01-20 14:29:59.129 250022 WARNING nova.virt.libvirt.driver [None req-5125edda-6367-41ce-aa7c-33caccb49490 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:29:59 compute-0 nova_compute[250018]: 2026-01-20 14:29:59.133 250022 DEBUG nova.virt.libvirt.host [None req-5125edda-6367-41ce-aa7c-33caccb49490 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 14:29:59 compute-0 nova_compute[250018]: 2026-01-20 14:29:59.134 250022 DEBUG nova.virt.libvirt.host [None req-5125edda-6367-41ce-aa7c-33caccb49490 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 14:29:59 compute-0 nova_compute[250018]: 2026-01-20 14:29:59.137 250022 DEBUG nova.virt.libvirt.host [None req-5125edda-6367-41ce-aa7c-33caccb49490 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 14:29:59 compute-0 nova_compute[250018]: 2026-01-20 14:29:59.137 250022 DEBUG nova.virt.libvirt.host [None req-5125edda-6367-41ce-aa7c-33caccb49490 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 14:29:59 compute-0 nova_compute[250018]: 2026-01-20 14:29:59.138 250022 DEBUG nova.virt.libvirt.driver [None req-5125edda-6367-41ce-aa7c-33caccb49490 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 14:29:59 compute-0 nova_compute[250018]: 2026-01-20 14:29:59.139 250022 DEBUG nova.virt.hardware [None req-5125edda-6367-41ce-aa7c-33caccb49490 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:29:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='313bc8e0-fa4e-45e2-a827-fd3bf5c8eed4',id=28,is_public=True,memory_mb=128,name='tempest-test_resize_flavor_-1217480715',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 14:29:59 compute-0 nova_compute[250018]: 2026-01-20 14:29:59.139 250022 DEBUG nova.virt.hardware [None req-5125edda-6367-41ce-aa7c-33caccb49490 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 14:29:59 compute-0 nova_compute[250018]: 2026-01-20 14:29:59.139 250022 DEBUG nova.virt.hardware [None req-5125edda-6367-41ce-aa7c-33caccb49490 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 14:29:59 compute-0 nova_compute[250018]: 2026-01-20 14:29:59.139 250022 DEBUG nova.virt.hardware [None req-5125edda-6367-41ce-aa7c-33caccb49490 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 14:29:59 compute-0 nova_compute[250018]: 2026-01-20 14:29:59.140 250022 DEBUG nova.virt.hardware [None req-5125edda-6367-41ce-aa7c-33caccb49490 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 14:29:59 compute-0 nova_compute[250018]: 2026-01-20 14:29:59.140 250022 DEBUG nova.virt.hardware [None req-5125edda-6367-41ce-aa7c-33caccb49490 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 14:29:59 compute-0 nova_compute[250018]: 2026-01-20 14:29:59.140 250022 DEBUG nova.virt.hardware [None req-5125edda-6367-41ce-aa7c-33caccb49490 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 14:29:59 compute-0 nova_compute[250018]: 2026-01-20 14:29:59.140 250022 DEBUG nova.virt.hardware [None req-5125edda-6367-41ce-aa7c-33caccb49490 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 14:29:59 compute-0 nova_compute[250018]: 2026-01-20 14:29:59.140 250022 DEBUG nova.virt.hardware [None req-5125edda-6367-41ce-aa7c-33caccb49490 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 14:29:59 compute-0 nova_compute[250018]: 2026-01-20 14:29:59.141 250022 DEBUG nova.virt.hardware [None req-5125edda-6367-41ce-aa7c-33caccb49490 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 14:29:59 compute-0 nova_compute[250018]: 2026-01-20 14:29:59.141 250022 DEBUG nova.virt.hardware [None req-5125edda-6367-41ce-aa7c-33caccb49490 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 14:29:59 compute-0 nova_compute[250018]: 2026-01-20 14:29:59.141 250022 DEBUG nova.objects.instance [None req-5125edda-6367-41ce-aa7c-33caccb49490 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 9f5c9253-e2bd-42d3-8253-fac568daeda7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:29:59 compute-0 nova_compute[250018]: 2026-01-20 14:29:59.156 250022 DEBUG oslo_concurrency.processutils [None req-5125edda-6367-41ce-aa7c-33caccb49490 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:29:59 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1178: 321 pgs: 321 active+clean; 409 MiB data, 561 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.2 MiB/s wr, 153 op/s
Jan 20 14:29:59 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:29:59 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1957976833' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:29:59 compute-0 nova_compute[250018]: 2026-01-20 14:29:59.588 250022 DEBUG oslo_concurrency.processutils [None req-5125edda-6367-41ce-aa7c-33caccb49490 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:29:59 compute-0 nova_compute[250018]: 2026-01-20 14:29:59.629 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:29:59 compute-0 nova_compute[250018]: 2026-01-20 14:29:59.638 250022 DEBUG oslo_concurrency.processutils [None req-5125edda-6367-41ce-aa7c-33caccb49490 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:29:59 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:29:59 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:29:59 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:29:59.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:29:59 compute-0 sudo[271727]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:29:59 compute-0 sudo[271727]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:29:59 compute-0 sudo[271727]: pam_unix(sudo:session): session closed for user root
Jan 20 14:29:59 compute-0 sudo[271752]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:29:59 compute-0 sudo[271752]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:29:59 compute-0 sudo[271752]: pam_unix(sudo:session): session closed for user root
Jan 20 14:30:00 compute-0 ceph-mon[74360]: log_channel(cluster) log [INF] : overall HEALTH_OK
Jan 20 14:30:00 compute-0 ceph-mon[74360]: osdmap e161: 3 total, 3 up, 3 in
Jan 20 14:30:00 compute-0 ceph-mon[74360]: pgmap v1178: 321 pgs: 321 active+clean; 409 MiB data, 561 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.2 MiB/s wr, 153 op/s
Jan 20 14:30:00 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/1957976833' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:30:00 compute-0 ceph-mon[74360]: overall HEALTH_OK
Jan 20 14:30:00 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:30:00 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3310716266' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:30:00 compute-0 nova_compute[250018]: 2026-01-20 14:30:00.126 250022 DEBUG oslo_concurrency.processutils [None req-5125edda-6367-41ce-aa7c-33caccb49490 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:30:00 compute-0 nova_compute[250018]: 2026-01-20 14:30:00.132 250022 DEBUG nova.virt.libvirt.driver [None req-5125edda-6367-41ce-aa7c-33caccb49490 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: 9f5c9253-e2bd-42d3-8253-fac568daeda7] End _get_guest_xml xml=<domain type="kvm">
Jan 20 14:30:00 compute-0 nova_compute[250018]:   <uuid>9f5c9253-e2bd-42d3-8253-fac568daeda7</uuid>
Jan 20 14:30:00 compute-0 nova_compute[250018]:   <name>instance-0000001b</name>
Jan 20 14:30:00 compute-0 nova_compute[250018]:   <memory>131072</memory>
Jan 20 14:30:00 compute-0 nova_compute[250018]:   <vcpu>1</vcpu>
Jan 20 14:30:00 compute-0 nova_compute[250018]:   <metadata>
Jan 20 14:30:00 compute-0 nova_compute[250018]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 14:30:00 compute-0 nova_compute[250018]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 14:30:00 compute-0 nova_compute[250018]:       <nova:name>tempest-MigrationsAdminTest-server-326963183</nova:name>
Jan 20 14:30:00 compute-0 nova_compute[250018]:       <nova:creationTime>2026-01-20 14:29:59</nova:creationTime>
Jan 20 14:30:00 compute-0 nova_compute[250018]:       <nova:flavor name="tempest-test_resize_flavor_-1217480715">
Jan 20 14:30:00 compute-0 nova_compute[250018]:         <nova:memory>128</nova:memory>
Jan 20 14:30:00 compute-0 nova_compute[250018]:         <nova:disk>1</nova:disk>
Jan 20 14:30:00 compute-0 nova_compute[250018]:         <nova:swap>0</nova:swap>
Jan 20 14:30:00 compute-0 nova_compute[250018]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 14:30:00 compute-0 nova_compute[250018]:         <nova:vcpus>1</nova:vcpus>
Jan 20 14:30:00 compute-0 nova_compute[250018]:       </nova:flavor>
Jan 20 14:30:00 compute-0 nova_compute[250018]:       <nova:owner>
Jan 20 14:30:00 compute-0 nova_compute[250018]:         <nova:user uuid="01a3d712f05049b19d4ecc7051720ad5">tempest-MigrationsAdminTest-1518611738-project-member</nova:user>
Jan 20 14:30:00 compute-0 nova_compute[250018]:         <nova:project uuid="f3c2e72a7148496394c8bcd618a19c80">tempest-MigrationsAdminTest-1518611738</nova:project>
Jan 20 14:30:00 compute-0 nova_compute[250018]:       </nova:owner>
Jan 20 14:30:00 compute-0 nova_compute[250018]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 14:30:00 compute-0 nova_compute[250018]:       <nova:ports/>
Jan 20 14:30:00 compute-0 nova_compute[250018]:     </nova:instance>
Jan 20 14:30:00 compute-0 nova_compute[250018]:   </metadata>
Jan 20 14:30:00 compute-0 nova_compute[250018]:   <sysinfo type="smbios">
Jan 20 14:30:00 compute-0 nova_compute[250018]:     <system>
Jan 20 14:30:00 compute-0 nova_compute[250018]:       <entry name="manufacturer">RDO</entry>
Jan 20 14:30:00 compute-0 nova_compute[250018]:       <entry name="product">OpenStack Compute</entry>
Jan 20 14:30:00 compute-0 nova_compute[250018]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 14:30:00 compute-0 nova_compute[250018]:       <entry name="serial">9f5c9253-e2bd-42d3-8253-fac568daeda7</entry>
Jan 20 14:30:00 compute-0 nova_compute[250018]:       <entry name="uuid">9f5c9253-e2bd-42d3-8253-fac568daeda7</entry>
Jan 20 14:30:00 compute-0 nova_compute[250018]:       <entry name="family">Virtual Machine</entry>
Jan 20 14:30:00 compute-0 nova_compute[250018]:     </system>
Jan 20 14:30:00 compute-0 nova_compute[250018]:   </sysinfo>
Jan 20 14:30:00 compute-0 nova_compute[250018]:   <os>
Jan 20 14:30:00 compute-0 nova_compute[250018]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 14:30:00 compute-0 nova_compute[250018]:     <boot dev="hd"/>
Jan 20 14:30:00 compute-0 nova_compute[250018]:     <smbios mode="sysinfo"/>
Jan 20 14:30:00 compute-0 nova_compute[250018]:   </os>
Jan 20 14:30:00 compute-0 nova_compute[250018]:   <features>
Jan 20 14:30:00 compute-0 nova_compute[250018]:     <acpi/>
Jan 20 14:30:00 compute-0 nova_compute[250018]:     <apic/>
Jan 20 14:30:00 compute-0 nova_compute[250018]:     <vmcoreinfo/>
Jan 20 14:30:00 compute-0 nova_compute[250018]:   </features>
Jan 20 14:30:00 compute-0 nova_compute[250018]:   <clock offset="utc">
Jan 20 14:30:00 compute-0 nova_compute[250018]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 14:30:00 compute-0 nova_compute[250018]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 14:30:00 compute-0 nova_compute[250018]:     <timer name="hpet" present="no"/>
Jan 20 14:30:00 compute-0 nova_compute[250018]:   </clock>
Jan 20 14:30:00 compute-0 nova_compute[250018]:   <cpu mode="custom" match="exact">
Jan 20 14:30:00 compute-0 nova_compute[250018]:     <model>Nehalem</model>
Jan 20 14:30:00 compute-0 nova_compute[250018]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 14:30:00 compute-0 nova_compute[250018]:   </cpu>
Jan 20 14:30:00 compute-0 nova_compute[250018]:   <devices>
Jan 20 14:30:00 compute-0 nova_compute[250018]:     <disk type="network" device="disk">
Jan 20 14:30:00 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 14:30:00 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/9f5c9253-e2bd-42d3-8253-fac568daeda7_disk">
Jan 20 14:30:00 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:30:00 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:30:00 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:30:00 compute-0 nova_compute[250018]:       </source>
Jan 20 14:30:00 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 14:30:00 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:30:00 compute-0 nova_compute[250018]:       </auth>
Jan 20 14:30:00 compute-0 nova_compute[250018]:       <target dev="vda" bus="virtio"/>
Jan 20 14:30:00 compute-0 nova_compute[250018]:     </disk>
Jan 20 14:30:00 compute-0 nova_compute[250018]:     <disk type="network" device="cdrom">
Jan 20 14:30:00 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 14:30:00 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/9f5c9253-e2bd-42d3-8253-fac568daeda7_disk.config">
Jan 20 14:30:00 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:30:00 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:30:00 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:30:00 compute-0 nova_compute[250018]:       </source>
Jan 20 14:30:00 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 14:30:00 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:30:00 compute-0 nova_compute[250018]:       </auth>
Jan 20 14:30:00 compute-0 nova_compute[250018]:       <target dev="sda" bus="sata"/>
Jan 20 14:30:00 compute-0 nova_compute[250018]:     </disk>
Jan 20 14:30:00 compute-0 nova_compute[250018]:     <serial type="pty">
Jan 20 14:30:00 compute-0 nova_compute[250018]:       <log file="/var/lib/nova/instances/9f5c9253-e2bd-42d3-8253-fac568daeda7/console.log" append="off"/>
Jan 20 14:30:00 compute-0 nova_compute[250018]:     </serial>
Jan 20 14:30:00 compute-0 nova_compute[250018]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 14:30:00 compute-0 nova_compute[250018]:     <video>
Jan 20 14:30:00 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 14:30:00 compute-0 nova_compute[250018]:     </video>
Jan 20 14:30:00 compute-0 nova_compute[250018]:     <input type="tablet" bus="usb"/>
Jan 20 14:30:00 compute-0 nova_compute[250018]:     <input type="keyboard" bus="usb"/>
Jan 20 14:30:00 compute-0 nova_compute[250018]:     <rng model="virtio">
Jan 20 14:30:00 compute-0 nova_compute[250018]:       <backend model="random">/dev/urandom</backend>
Jan 20 14:30:00 compute-0 nova_compute[250018]:     </rng>
Jan 20 14:30:00 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root"/>
Jan 20 14:30:00 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:00 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:00 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:00 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:00 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:00 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:00 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:00 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:00 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:00 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:00 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:00 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:00 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:00 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:00 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:00 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:00 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:00 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:00 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:00 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:00 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:00 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:00 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:00 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:00 compute-0 nova_compute[250018]:     <controller type="usb" index="0"/>
Jan 20 14:30:00 compute-0 nova_compute[250018]:     <memballoon model="virtio">
Jan 20 14:30:00 compute-0 nova_compute[250018]:       <stats period="10"/>
Jan 20 14:30:00 compute-0 nova_compute[250018]:     </memballoon>
Jan 20 14:30:00 compute-0 nova_compute[250018]:   </devices>
Jan 20 14:30:00 compute-0 nova_compute[250018]: </domain>
Jan 20 14:30:00 compute-0 nova_compute[250018]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 14:30:00 compute-0 nova_compute[250018]: 2026-01-20 14:30:00.138 250022 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768919385.1374662, 9f5c9253-e2bd-42d3-8253-fac568daeda7 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:30:00 compute-0 nova_compute[250018]: 2026-01-20 14:30:00.138 250022 INFO nova.compute.manager [-] [instance: 9f5c9253-e2bd-42d3-8253-fac568daeda7] VM Stopped (Lifecycle Event)
Jan 20 14:30:00 compute-0 nova_compute[250018]: 2026-01-20 14:30:00.173 250022 DEBUG nova.compute.manager [None req-561ed886-f625-45fa-86e5-00d913209978 - - - - - -] [instance: 9f5c9253-e2bd-42d3-8253-fac568daeda7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:30:00 compute-0 systemd-machined[216401]: New machine qemu-15-instance-0000001b.
Jan 20 14:30:00 compute-0 systemd[1]: Started Virtual Machine qemu-15-instance-0000001b.
Jan 20 14:30:00 compute-0 nova_compute[250018]: 2026-01-20 14:30:00.548 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:30:00 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:30:00 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:30:00 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:30:00.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:30:01 compute-0 nova_compute[250018]: 2026-01-20 14:30:01.038 250022 DEBUG nova.compute.manager [None req-5125edda-6367-41ce-aa7c-33caccb49490 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: 9f5c9253-e2bd-42d3-8253-fac568daeda7] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 14:30:01 compute-0 nova_compute[250018]: 2026-01-20 14:30:01.039 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768919401.0380242, 9f5c9253-e2bd-42d3-8253-fac568daeda7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:30:01 compute-0 nova_compute[250018]: 2026-01-20 14:30:01.039 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 9f5c9253-e2bd-42d3-8253-fac568daeda7] VM Resumed (Lifecycle Event)
Jan 20 14:30:01 compute-0 nova_compute[250018]: 2026-01-20 14:30:01.045 250022 INFO nova.virt.libvirt.driver [-] [instance: 9f5c9253-e2bd-42d3-8253-fac568daeda7] Instance running successfully.
Jan 20 14:30:01 compute-0 nova_compute[250018]: 2026-01-20 14:30:01.046 250022 DEBUG nova.virt.libvirt.driver [None req-5125edda-6367-41ce-aa7c-33caccb49490 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: 9f5c9253-e2bd-42d3-8253-fac568daeda7] finish_revert_migration finished successfully. finish_revert_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11887
Jan 20 14:30:01 compute-0 podman[271837]: 2026-01-20 14:30:01.054231552 +0000 UTC m=+0.086874016 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:30:01 compute-0 podman[271836]: 2026-01-20 14:30:01.06414942 +0000 UTC m=+0.098628233 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 14:30:01 compute-0 nova_compute[250018]: 2026-01-20 14:30:01.076 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 9f5c9253-e2bd-42d3-8253-fac568daeda7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:30:01 compute-0 nova_compute[250018]: 2026-01-20 14:30:01.078 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 9f5c9253-e2bd-42d3-8253-fac568daeda7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: resized, current task_state: resize_reverting, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:30:01 compute-0 nova_compute[250018]: 2026-01-20 14:30:01.114 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 9f5c9253-e2bd-42d3-8253-fac568daeda7] During sync_power_state the instance has a pending task (resize_reverting). Skip.
Jan 20 14:30:01 compute-0 nova_compute[250018]: 2026-01-20 14:30:01.114 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768919401.0388248, 9f5c9253-e2bd-42d3-8253-fac568daeda7 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:30:01 compute-0 nova_compute[250018]: 2026-01-20 14:30:01.114 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 9f5c9253-e2bd-42d3-8253-fac568daeda7] VM Started (Lifecycle Event)
Jan 20 14:30:01 compute-0 nova_compute[250018]: 2026-01-20 14:30:01.138 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 9f5c9253-e2bd-42d3-8253-fac568daeda7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:30:01 compute-0 nova_compute[250018]: 2026-01-20 14:30:01.141 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 9f5c9253-e2bd-42d3-8253-fac568daeda7] Synchronizing instance power state after lifecycle event "Started"; current vm_state: resized, current task_state: resize_reverting, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:30:01 compute-0 nova_compute[250018]: 2026-01-20 14:30:01.144 250022 INFO nova.compute.manager [None req-5125edda-6367-41ce-aa7c-33caccb49490 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: 9f5c9253-e2bd-42d3-8253-fac568daeda7] Updating instance to original state: 'active'
Jan 20 14:30:01 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1179: 321 pgs: 321 active+clean; 409 MiB data, 561 MiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 2.2 MiB/s wr, 107 op/s
Jan 20 14:30:01 compute-0 nova_compute[250018]: 2026-01-20 14:30:01.384 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 9f5c9253-e2bd-42d3-8253-fac568daeda7] During sync_power_state the instance has a pending task (resize_reverting). Skip.
Jan 20 14:30:01 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:30:01 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3310716266' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:30:01 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:30:01 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:30:01 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:30:01.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:30:02 compute-0 ceph-mon[74360]: pgmap v1179: 321 pgs: 321 active+clean; 409 MiB data, 561 MiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 2.2 MiB/s wr, 107 op/s
Jan 20 14:30:02 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:30:02 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:30:02 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:30:02.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:30:03 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1180: 321 pgs: 321 active+clean; 409 MiB data, 561 MiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 2.2 MiB/s wr, 107 op/s
Jan 20 14:30:03 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:30:03 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:30:03 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:30:03.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:30:04 compute-0 nova_compute[250018]: 2026-01-20 14:30:04.634 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:30:04 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:30:04 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:30:04 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:30:04.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:30:04 compute-0 ceph-mon[74360]: pgmap v1180: 321 pgs: 321 active+clean; 409 MiB data, 561 MiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 2.2 MiB/s wr, 107 op/s
Jan 20 14:30:05 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1181: 321 pgs: 321 active+clean; 409 MiB data, 562 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 917 KiB/s wr, 133 op/s
Jan 20 14:30:05 compute-0 nova_compute[250018]: 2026-01-20 14:30:05.550 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:30:05 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:30:05 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:30:05 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:30:05.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:30:06 compute-0 ceph-mon[74360]: pgmap v1181: 321 pgs: 321 active+clean; 409 MiB data, 562 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 917 KiB/s wr, 133 op/s
Jan 20 14:30:06 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:30:06 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e161 do_prune osdmap full prune enabled
Jan 20 14:30:06 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e162 e162: 3 total, 3 up, 3 in
Jan 20 14:30:06 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e162: 3 total, 3 up, 3 in
Jan 20 14:30:06 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:30:06 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:30:06 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:30:06.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:30:07 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1183: 321 pgs: 321 active+clean; 409 MiB data, 562 MiB used, 20 GiB / 21 GiB avail; 5.6 MiB/s rd, 26 KiB/s wr, 230 op/s
Jan 20 14:30:07 compute-0 ceph-mon[74360]: osdmap e162: 3 total, 3 up, 3 in
Jan 20 14:30:07 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:30:07 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:30:07 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:30:07.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:30:08 compute-0 sudo[271887]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:30:08 compute-0 sudo[271887]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:30:08 compute-0 sudo[271887]: pam_unix(sudo:session): session closed for user root
Jan 20 14:30:08 compute-0 sudo[271912]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:30:08 compute-0 sudo[271912]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:30:08 compute-0 sudo[271912]: pam_unix(sudo:session): session closed for user root
Jan 20 14:30:08 compute-0 sudo[271937]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:30:08 compute-0 sudo[271937]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:30:08 compute-0 sudo[271937]: pam_unix(sudo:session): session closed for user root
Jan 20 14:30:08 compute-0 ceph-mon[74360]: pgmap v1183: 321 pgs: 321 active+clean; 409 MiB data, 562 MiB used, 20 GiB / 21 GiB avail; 5.6 MiB/s rd, 26 KiB/s wr, 230 op/s
Jan 20 14:30:08 compute-0 sudo[271962]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 14:30:08 compute-0 sudo[271962]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:30:08 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:30:08 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:30:08 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:30:08.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:30:09 compute-0 sudo[271962]: pam_unix(sudo:session): session closed for user root
Jan 20 14:30:09 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Jan 20 14:30:09 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 20 14:30:09 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 14:30:09 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:30:09 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 20 14:30:09 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:30:09 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 20 14:30:09 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:30:09 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 30beaa8f-83ee-4f82-95ea-02ffff3c172e does not exist
Jan 20 14:30:09 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 1c8ca732-eb7d-4362-af76-542b65bdfec6 does not exist
Jan 20 14:30:09 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 5e73d962-c998-4eec-8d76-75b681a6eda5 does not exist
Jan 20 14:30:09 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 20 14:30:09 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:30:09 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 20 14:30:09 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:30:09 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 14:30:09 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:30:09 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1184: 321 pgs: 321 active+clean; 409 MiB data, 562 MiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 21 KiB/s wr, 189 op/s
Jan 20 14:30:09 compute-0 sudo[272020]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:30:09 compute-0 sudo[272020]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:30:09 compute-0 sudo[272020]: pam_unix(sudo:session): session closed for user root
Jan 20 14:30:09 compute-0 sudo[272045]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:30:09 compute-0 sudo[272045]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:30:09 compute-0 sudo[272045]: pam_unix(sudo:session): session closed for user root
Jan 20 14:30:09 compute-0 sudo[272070]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:30:09 compute-0 sudo[272070]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:30:09 compute-0 sudo[272070]: pam_unix(sudo:session): session closed for user root
Jan 20 14:30:09 compute-0 sudo[272095]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 14:30:09 compute-0 sudo[272095]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:30:09 compute-0 nova_compute[250018]: 2026-01-20 14:30:09.638 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:30:09 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:30:09 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:30:09 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:30:09.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:30:09 compute-0 podman[272161]: 2026-01-20 14:30:09.827262001 +0000 UTC m=+0.025785384 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:30:10 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/4260148581' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:30:10 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 20 14:30:10 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:30:10 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:30:10 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:30:10 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:30:10 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:30:10 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:30:10 compute-0 nova_compute[250018]: 2026-01-20 14:30:10.594 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:30:10 compute-0 podman[272161]: 2026-01-20 14:30:10.71035706 +0000 UTC m=+0.908880443 container create c6e15670dade1023b712ad9acc318fe48b5537b7c321ec44b1dbdeb8020d8b37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_mahavira, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:30:10 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:30:10 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:30:10 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:30:10.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:30:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 14:30:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:30:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 20 14:30:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:30:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00967474816210683 of space, bias 1.0, pg target 2.902424448632049 quantized to 32 (current 32)
Jan 20 14:30:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:30:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 5.452610273590173e-07 of space, bias 1.0, pg target 0.00016248778615298717 quantized to 32 (current 32)
Jan 20 14:30:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:30:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:30:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:30:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5671365362693095 quantized to 32 (current 32)
Jan 20 14:30:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:30:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017332030522985297 quantized to 16 (current 32)
Jan 20 14:30:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:30:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:30:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:30:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.0002166503815373162 quantized to 32 (current 32)
Jan 20 14:30:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:30:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018415282430671877 quantized to 32 (current 32)
Jan 20 14:30:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:30:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:30:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:30:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004333007630746324 quantized to 32 (current 32)
Jan 20 14:30:11 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1185: 321 pgs: 321 active+clean; 409 MiB data, 562 MiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 1.6 KiB/s wr, 169 op/s
Jan 20 14:30:11 compute-0 systemd[1]: Started libpod-conmon-c6e15670dade1023b712ad9acc318fe48b5537b7c321ec44b1dbdeb8020d8b37.scope.
Jan 20 14:30:11 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:30:11 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:30:11 compute-0 podman[272161]: 2026-01-20 14:30:11.734194222 +0000 UTC m=+1.932717645 container init c6e15670dade1023b712ad9acc318fe48b5537b7c321ec44b1dbdeb8020d8b37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_mahavira, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 20 14:30:11 compute-0 podman[272161]: 2026-01-20 14:30:11.754306433 +0000 UTC m=+1.952829766 container start c6e15670dade1023b712ad9acc318fe48b5537b7c321ec44b1dbdeb8020d8b37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_mahavira, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:30:11 compute-0 naughty_mahavira[272178]: 167 167
Jan 20 14:30:11 compute-0 systemd[1]: libpod-c6e15670dade1023b712ad9acc318fe48b5537b7c321ec44b1dbdeb8020d8b37.scope: Deactivated successfully.
Jan 20 14:30:11 compute-0 conmon[272178]: conmon c6e15670dade1023b712 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c6e15670dade1023b712ad9acc318fe48b5537b7c321ec44b1dbdeb8020d8b37.scope/container/memory.events
Jan 20 14:30:11 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:30:11 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:30:11 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:30:11.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:30:11 compute-0 ceph-mon[74360]: pgmap v1184: 321 pgs: 321 active+clean; 409 MiB data, 562 MiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 21 KiB/s wr, 189 op/s
Jan 20 14:30:11 compute-0 ceph-mon[74360]: pgmap v1185: 321 pgs: 321 active+clean; 409 MiB data, 562 MiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 1.6 KiB/s wr, 169 op/s
Jan 20 14:30:12 compute-0 podman[272161]: 2026-01-20 14:30:12.060597666 +0000 UTC m=+2.259121039 container attach c6e15670dade1023b712ad9acc318fe48b5537b7c321ec44b1dbdeb8020d8b37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_mahavira, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:30:12 compute-0 podman[272161]: 2026-01-20 14:30:12.061291805 +0000 UTC m=+2.259815208 container died c6e15670dade1023b712ad9acc318fe48b5537b7c321ec44b1dbdeb8020d8b37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_mahavira, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:30:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-ed62f07733e127a994b3d4a43dcb65cd9f4d0da1d5df7d44c2da4cec9eb805de-merged.mount: Deactivated successfully.
Jan 20 14:30:12 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:30:12 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:30:12 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:30:12.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:30:12 compute-0 podman[272161]: 2026-01-20 14:30:12.894186454 +0000 UTC m=+3.092709797 container remove c6e15670dade1023b712ad9acc318fe48b5537b7c321ec44b1dbdeb8020d8b37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_mahavira, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Jan 20 14:30:12 compute-0 systemd[1]: libpod-conmon-c6e15670dade1023b712ad9acc318fe48b5537b7c321ec44b1dbdeb8020d8b37.scope: Deactivated successfully.
Jan 20 14:30:13 compute-0 podman[272204]: 2026-01-20 14:30:13.072812665 +0000 UTC m=+0.026035510 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:30:13 compute-0 podman[272204]: 2026-01-20 14:30:13.225882871 +0000 UTC m=+0.179105656 container create 222cf9505c67a177a2e2f734320b497dc48f08dd63c3286e5de499fb1d694a59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_mcclintock, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:30:13 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1186: 321 pgs: 321 active+clean; 409 MiB data, 562 MiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 1.6 KiB/s wr, 169 op/s
Jan 20 14:30:13 compute-0 systemd[1]: Started libpod-conmon-222cf9505c67a177a2e2f734320b497dc48f08dd63c3286e5de499fb1d694a59.scope.
Jan 20 14:30:13 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:30:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/232c36c15a35b87a18546186df4539d0fab0863b720b6d05f569cdbd544108b0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:30:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/232c36c15a35b87a18546186df4539d0fab0863b720b6d05f569cdbd544108b0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:30:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/232c36c15a35b87a18546186df4539d0fab0863b720b6d05f569cdbd544108b0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:30:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/232c36c15a35b87a18546186df4539d0fab0863b720b6d05f569cdbd544108b0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:30:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/232c36c15a35b87a18546186df4539d0fab0863b720b6d05f569cdbd544108b0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 14:30:13 compute-0 podman[272204]: 2026-01-20 14:30:13.486151076 +0000 UTC m=+0.439373891 container init 222cf9505c67a177a2e2f734320b497dc48f08dd63c3286e5de499fb1d694a59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_mcclintock, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:30:13 compute-0 podman[272204]: 2026-01-20 14:30:13.493920586 +0000 UTC m=+0.447143381 container start 222cf9505c67a177a2e2f734320b497dc48f08dd63c3286e5de499fb1d694a59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_mcclintock, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:30:13 compute-0 podman[272204]: 2026-01-20 14:30:13.499934567 +0000 UTC m=+0.453157462 container attach 222cf9505c67a177a2e2f734320b497dc48f08dd63c3286e5de499fb1d694a59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_mcclintock, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True)
Jan 20 14:30:13 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:30:13 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:30:13 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:30:13.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:30:14 compute-0 ceph-mon[74360]: pgmap v1186: 321 pgs: 321 active+clean; 409 MiB data, 562 MiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 1.6 KiB/s wr, 169 op/s
Jan 20 14:30:14 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/478407422' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:30:14 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/478407422' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:30:14 compute-0 infallible_mcclintock[272220]: --> passed data devices: 0 physical, 1 LVM
Jan 20 14:30:14 compute-0 infallible_mcclintock[272220]: --> relative data size: 1.0
Jan 20 14:30:14 compute-0 infallible_mcclintock[272220]: --> All data devices are unavailable
Jan 20 14:30:14 compute-0 systemd[1]: libpod-222cf9505c67a177a2e2f734320b497dc48f08dd63c3286e5de499fb1d694a59.scope: Deactivated successfully.
Jan 20 14:30:14 compute-0 podman[272204]: 2026-01-20 14:30:14.352881615 +0000 UTC m=+1.306104410 container died 222cf9505c67a177a2e2f734320b497dc48f08dd63c3286e5de499fb1d694a59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_mcclintock, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 20 14:30:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-232c36c15a35b87a18546186df4539d0fab0863b720b6d05f569cdbd544108b0-merged.mount: Deactivated successfully.
Jan 20 14:30:14 compute-0 nova_compute[250018]: 2026-01-20 14:30:14.640 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:30:14 compute-0 podman[272204]: 2026-01-20 14:30:14.693398209 +0000 UTC m=+1.646621004 container remove 222cf9505c67a177a2e2f734320b497dc48f08dd63c3286e5de499fb1d694a59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_mcclintock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS)
Jan 20 14:30:14 compute-0 sudo[272095]: pam_unix(sudo:session): session closed for user root
Jan 20 14:30:14 compute-0 systemd[1]: libpod-conmon-222cf9505c67a177a2e2f734320b497dc48f08dd63c3286e5de499fb1d694a59.scope: Deactivated successfully.
Jan 20 14:30:14 compute-0 sudo[272248]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:30:14 compute-0 sudo[272248]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:30:14 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:30:14 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:30:14 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:30:14.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:30:14 compute-0 sudo[272248]: pam_unix(sudo:session): session closed for user root
Jan 20 14:30:14 compute-0 sudo[272273]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:30:14 compute-0 sudo[272273]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:30:14 compute-0 sudo[272273]: pam_unix(sudo:session): session closed for user root
Jan 20 14:30:14 compute-0 sudo[272298]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:30:14 compute-0 sudo[272298]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:30:14 compute-0 sudo[272298]: pam_unix(sudo:session): session closed for user root
Jan 20 14:30:15 compute-0 sudo[272323]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- lvm list --format json
Jan 20 14:30:15 compute-0 sudo[272323]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:30:15 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/1567150312' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:30:15 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/1108007621' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:30:15 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/3859938377' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:30:15 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1187: 321 pgs: 321 active+clean; 463 MiB data, 606 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 2.9 MiB/s wr, 169 op/s
Jan 20 14:30:15 compute-0 podman[272386]: 2026-01-20 14:30:15.382476622 +0000 UTC m=+0.052851052 container create a549ce66e9d0863468a4440928777d32d62cfdfec09f35a5569b48a7ff6b3c12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_tu, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 20 14:30:15 compute-0 systemd[1]: Started libpod-conmon-a549ce66e9d0863468a4440928777d32d62cfdfec09f35a5569b48a7ff6b3c12.scope.
Jan 20 14:30:15 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:30:15 compute-0 podman[272386]: 2026-01-20 14:30:15.355683512 +0000 UTC m=+0.026057972 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:30:15 compute-0 podman[272386]: 2026-01-20 14:30:15.456723688 +0000 UTC m=+0.127098138 container init a549ce66e9d0863468a4440928777d32d62cfdfec09f35a5569b48a7ff6b3c12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_tu, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 20 14:30:15 compute-0 podman[272386]: 2026-01-20 14:30:15.467704134 +0000 UTC m=+0.138078554 container start a549ce66e9d0863468a4440928777d32d62cfdfec09f35a5569b48a7ff6b3c12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_tu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 20 14:30:15 compute-0 podman[272386]: 2026-01-20 14:30:15.47094895 +0000 UTC m=+0.141323390 container attach a549ce66e9d0863468a4440928777d32d62cfdfec09f35a5569b48a7ff6b3c12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_tu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:30:15 compute-0 inspiring_tu[272402]: 167 167
Jan 20 14:30:15 compute-0 systemd[1]: libpod-a549ce66e9d0863468a4440928777d32d62cfdfec09f35a5569b48a7ff6b3c12.scope: Deactivated successfully.
Jan 20 14:30:15 compute-0 podman[272386]: 2026-01-20 14:30:15.472625255 +0000 UTC m=+0.142999675 container died a549ce66e9d0863468a4440928777d32d62cfdfec09f35a5569b48a7ff6b3c12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_tu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 20 14:30:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-4212fd744e2437548c77649f80e16c38ecd06e439245b96047e98d48c103255d-merged.mount: Deactivated successfully.
Jan 20 14:30:15 compute-0 podman[272386]: 2026-01-20 14:30:15.513500614 +0000 UTC m=+0.183875044 container remove a549ce66e9d0863468a4440928777d32d62cfdfec09f35a5569b48a7ff6b3c12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_tu, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 20 14:30:15 compute-0 systemd[1]: libpod-conmon-a549ce66e9d0863468a4440928777d32d62cfdfec09f35a5569b48a7ff6b3c12.scope: Deactivated successfully.
Jan 20 14:30:15 compute-0 nova_compute[250018]: 2026-01-20 14:30:15.597 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:30:15 compute-0 podman[272425]: 2026-01-20 14:30:15.673161916 +0000 UTC m=+0.036434891 container create a0cf9c4f2c3943127bb420b96c31a7cb177d5d2465069f0558f1c56b9fd144c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_hoover, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 20 14:30:15 compute-0 systemd[1]: Started libpod-conmon-a0cf9c4f2c3943127bb420b96c31a7cb177d5d2465069f0558f1c56b9fd144c2.scope.
Jan 20 14:30:15 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:30:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1179aaa9e14a8c27500dbe55e2bce39e92a40145e2be02fb1c7e3d3371b3eb8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:30:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1179aaa9e14a8c27500dbe55e2bce39e92a40145e2be02fb1c7e3d3371b3eb8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:30:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1179aaa9e14a8c27500dbe55e2bce39e92a40145e2be02fb1c7e3d3371b3eb8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:30:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1179aaa9e14a8c27500dbe55e2bce39e92a40145e2be02fb1c7e3d3371b3eb8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:30:15 compute-0 podman[272425]: 2026-01-20 14:30:15.657231428 +0000 UTC m=+0.020504433 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:30:15 compute-0 podman[272425]: 2026-01-20 14:30:15.754593056 +0000 UTC m=+0.117866061 container init a0cf9c4f2c3943127bb420b96c31a7cb177d5d2465069f0558f1c56b9fd144c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_hoover, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:30:15 compute-0 podman[272425]: 2026-01-20 14:30:15.760443683 +0000 UTC m=+0.123716658 container start a0cf9c4f2c3943127bb420b96c31a7cb177d5d2465069f0558f1c56b9fd144c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_hoover, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:30:15 compute-0 podman[272425]: 2026-01-20 14:30:15.76368755 +0000 UTC m=+0.126960545 container attach a0cf9c4f2c3943127bb420b96c31a7cb177d5d2465069f0558f1c56b9fd144c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_hoover, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 20 14:30:15 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:30:15 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:30:15 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:30:15.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:30:16 compute-0 nova_compute[250018]: 2026-01-20 14:30:16.238 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:30:16 compute-0 hopeful_hoover[272442]: {
Jan 20 14:30:16 compute-0 hopeful_hoover[272442]:     "0": [
Jan 20 14:30:16 compute-0 hopeful_hoover[272442]:         {
Jan 20 14:30:16 compute-0 hopeful_hoover[272442]:             "devices": [
Jan 20 14:30:16 compute-0 hopeful_hoover[272442]:                 "/dev/loop3"
Jan 20 14:30:16 compute-0 hopeful_hoover[272442]:             ],
Jan 20 14:30:16 compute-0 hopeful_hoover[272442]:             "lv_name": "ceph_lv0",
Jan 20 14:30:16 compute-0 hopeful_hoover[272442]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:30:16 compute-0 hopeful_hoover[272442]:             "lv_size": "7511998464",
Jan 20 14:30:16 compute-0 hopeful_hoover[272442]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e399cf45-e6b6-5393-99f1-75c601d3f188,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afc3740a-ccee-46f8-b343-22648cc89376,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 20 14:30:16 compute-0 hopeful_hoover[272442]:             "lv_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 14:30:16 compute-0 hopeful_hoover[272442]:             "name": "ceph_lv0",
Jan 20 14:30:16 compute-0 hopeful_hoover[272442]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:30:16 compute-0 hopeful_hoover[272442]:             "tags": {
Jan 20 14:30:16 compute-0 hopeful_hoover[272442]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:30:16 compute-0 hopeful_hoover[272442]:                 "ceph.block_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 14:30:16 compute-0 hopeful_hoover[272442]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 14:30:16 compute-0 hopeful_hoover[272442]:                 "ceph.cluster_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 14:30:16 compute-0 hopeful_hoover[272442]:                 "ceph.cluster_name": "ceph",
Jan 20 14:30:16 compute-0 hopeful_hoover[272442]:                 "ceph.crush_device_class": "",
Jan 20 14:30:16 compute-0 hopeful_hoover[272442]:                 "ceph.encrypted": "0",
Jan 20 14:30:16 compute-0 hopeful_hoover[272442]:                 "ceph.osd_fsid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 14:30:16 compute-0 hopeful_hoover[272442]:                 "ceph.osd_id": "0",
Jan 20 14:30:16 compute-0 hopeful_hoover[272442]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 14:30:16 compute-0 hopeful_hoover[272442]:                 "ceph.type": "block",
Jan 20 14:30:16 compute-0 hopeful_hoover[272442]:                 "ceph.vdo": "0"
Jan 20 14:30:16 compute-0 hopeful_hoover[272442]:             },
Jan 20 14:30:16 compute-0 hopeful_hoover[272442]:             "type": "block",
Jan 20 14:30:16 compute-0 hopeful_hoover[272442]:             "vg_name": "ceph_vg0"
Jan 20 14:30:16 compute-0 hopeful_hoover[272442]:         }
Jan 20 14:30:16 compute-0 hopeful_hoover[272442]:     ]
Jan 20 14:30:16 compute-0 hopeful_hoover[272442]: }
Jan 20 14:30:16 compute-0 systemd[1]: libpod-a0cf9c4f2c3943127bb420b96c31a7cb177d5d2465069f0558f1c56b9fd144c2.scope: Deactivated successfully.
Jan 20 14:30:16 compute-0 conmon[272442]: conmon a0cf9c4f2c3943127bb4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a0cf9c4f2c3943127bb420b96c31a7cb177d5d2465069f0558f1c56b9fd144c2.scope/container/memory.events
Jan 20 14:30:16 compute-0 podman[272425]: 2026-01-20 14:30:16.61187535 +0000 UTC m=+0.975148345 container died a0cf9c4f2c3943127bb420b96c31a7cb177d5d2465069f0558f1c56b9fd144c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_hoover, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 20 14:30:16 compute-0 ceph-mon[74360]: pgmap v1187: 321 pgs: 321 active+clean; 463 MiB data, 606 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 2.9 MiB/s wr, 169 op/s
Jan 20 14:30:16 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/3056334551' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:30:16 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:30:16 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:30:16 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:30:16 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:30:16.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:30:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-d1179aaa9e14a8c27500dbe55e2bce39e92a40145e2be02fb1c7e3d3371b3eb8-merged.mount: Deactivated successfully.
Jan 20 14:30:16 compute-0 podman[272425]: 2026-01-20 14:30:16.957578613 +0000 UTC m=+1.320851588 container remove a0cf9c4f2c3943127bb420b96c31a7cb177d5d2465069f0558f1c56b9fd144c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_hoover, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:30:16 compute-0 systemd[1]: libpod-conmon-a0cf9c4f2c3943127bb420b96c31a7cb177d5d2465069f0558f1c56b9fd144c2.scope: Deactivated successfully.
Jan 20 14:30:16 compute-0 sudo[272323]: pam_unix(sudo:session): session closed for user root
Jan 20 14:30:17 compute-0 nova_compute[250018]: 2026-01-20 14:30:17.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:30:17 compute-0 nova_compute[250018]: 2026-01-20 14:30:17.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:30:17 compute-0 sudo[272464]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:30:17 compute-0 sudo[272464]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:30:17 compute-0 sudo[272464]: pam_unix(sudo:session): session closed for user root
Jan 20 14:30:17 compute-0 nova_compute[250018]: 2026-01-20 14:30:17.093 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:30:17 compute-0 nova_compute[250018]: 2026-01-20 14:30:17.094 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:30:17 compute-0 nova_compute[250018]: 2026-01-20 14:30:17.094 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:30:17 compute-0 nova_compute[250018]: 2026-01-20 14:30:17.094 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 14:30:17 compute-0 nova_compute[250018]: 2026-01-20 14:30:17.094 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:30:17 compute-0 sudo[272489]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:30:17 compute-0 sudo[272489]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:30:17 compute-0 sudo[272489]: pam_unix(sudo:session): session closed for user root
Jan 20 14:30:17 compute-0 sudo[272515]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:30:17 compute-0 sudo[272515]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:30:17 compute-0 sudo[272515]: pam_unix(sudo:session): session closed for user root
Jan 20 14:30:17 compute-0 sudo[272540]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- raw list --format json
Jan 20 14:30:17 compute-0 sudo[272540]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:30:17 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1188: 321 pgs: 321 active+clean; 505 MiB data, 631 MiB used, 20 GiB / 21 GiB avail; 872 KiB/s rd, 5.0 MiB/s wr, 142 op/s
Jan 20 14:30:17 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:30:17 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2106470088' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:30:17 compute-0 nova_compute[250018]: 2026-01-20 14:30:17.568 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:30:17 compute-0 podman[272626]: 2026-01-20 14:30:17.579585283 +0000 UTC m=+0.059504340 container create b2e708b0d6ef0b8badec5e6d4970c3bb29b122d77f8c9b50a06742c44edd6e88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_rubin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:30:17 compute-0 podman[272626]: 2026-01-20 14:30:17.542332031 +0000 UTC m=+0.022251108 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:30:17 compute-0 nova_compute[250018]: 2026-01-20 14:30:17.721 250022 DEBUG nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] skipping disk for instance-0000001a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:30:17 compute-0 nova_compute[250018]: 2026-01-20 14:30:17.721 250022 DEBUG nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] skipping disk for instance-0000001a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:30:17 compute-0 nova_compute[250018]: 2026-01-20 14:30:17.724 250022 DEBUG nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] skipping disk for instance-0000001b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:30:17 compute-0 nova_compute[250018]: 2026-01-20 14:30:17.724 250022 DEBUG nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] skipping disk for instance-0000001b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:30:17 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/2106470088' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:30:17 compute-0 systemd[1]: Started libpod-conmon-b2e708b0d6ef0b8badec5e6d4970c3bb29b122d77f8c9b50a06742c44edd6e88.scope.
Jan 20 14:30:17 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:30:17 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:30:17 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:30:17.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:30:17 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:30:17 compute-0 podman[272626]: 2026-01-20 14:30:17.869202818 +0000 UTC m=+0.349121905 container init b2e708b0d6ef0b8badec5e6d4970c3bb29b122d77f8c9b50a06742c44edd6e88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_rubin, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:30:17 compute-0 podman[272626]: 2026-01-20 14:30:17.876418532 +0000 UTC m=+0.356337589 container start b2e708b0d6ef0b8badec5e6d4970c3bb29b122d77f8c9b50a06742c44edd6e88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_rubin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef)
Jan 20 14:30:17 compute-0 podman[272626]: 2026-01-20 14:30:17.879897946 +0000 UTC m=+0.359817003 container attach b2e708b0d6ef0b8badec5e6d4970c3bb29b122d77f8c9b50a06742c44edd6e88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_rubin, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:30:17 compute-0 sweet_rubin[272646]: 167 167
Jan 20 14:30:17 compute-0 systemd[1]: libpod-b2e708b0d6ef0b8badec5e6d4970c3bb29b122d77f8c9b50a06742c44edd6e88.scope: Deactivated successfully.
Jan 20 14:30:17 compute-0 podman[272626]: 2026-01-20 14:30:17.881628463 +0000 UTC m=+0.361547520 container died b2e708b0d6ef0b8badec5e6d4970c3bb29b122d77f8c9b50a06742c44edd6e88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_rubin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 20 14:30:17 compute-0 nova_compute[250018]: 2026-01-20 14:30:17.886 250022 WARNING nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:30:17 compute-0 nova_compute[250018]: 2026-01-20 14:30:17.889 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4294MB free_disk=20.756866455078125GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 14:30:17 compute-0 nova_compute[250018]: 2026-01-20 14:30:17.889 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:30:17 compute-0 nova_compute[250018]: 2026-01-20 14:30:17.890 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:30:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-e23233b3e6061f9aabe6dd18cba85080e2d94cb82e9b709d8e4cfa438d4f856d-merged.mount: Deactivated successfully.
Jan 20 14:30:17 compute-0 podman[272626]: 2026-01-20 14:30:17.939463117 +0000 UTC m=+0.419382174 container remove b2e708b0d6ef0b8badec5e6d4970c3bb29b122d77f8c9b50a06742c44edd6e88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_rubin, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 20 14:30:17 compute-0 systemd[1]: libpod-conmon-b2e708b0d6ef0b8badec5e6d4970c3bb29b122d77f8c9b50a06742c44edd6e88.scope: Deactivated successfully.
Jan 20 14:30:17 compute-0 nova_compute[250018]: 2026-01-20 14:30:17.980 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Instance ad62888a-ef27-43b4-bb6c-439541ff5524 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 14:30:17 compute-0 nova_compute[250018]: 2026-01-20 14:30:17.981 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Instance 9f5c9253-e2bd-42d3-8253-fac568daeda7 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 14:30:17 compute-0 nova_compute[250018]: 2026-01-20 14:30:17.981 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 14:30:17 compute-0 nova_compute[250018]: 2026-01-20 14:30:17.982 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 14:30:18 compute-0 podman[272670]: 2026-01-20 14:30:18.09954175 +0000 UTC m=+0.039350249 container create b09dafb63e0f720b2a99de47ea294f6e6b44dd4668609c01145d1938d8763628 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_mcnulty, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:30:18 compute-0 systemd[1]: Started libpod-conmon-b09dafb63e0f720b2a99de47ea294f6e6b44dd4668609c01145d1938d8763628.scope.
Jan 20 14:30:18 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:30:18 compute-0 podman[272670]: 2026-01-20 14:30:18.081137785 +0000 UTC m=+0.020946294 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:30:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4dd610be781843a795b4f03029086e4de24be32975c7baee048942fc2100acf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:30:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4dd610be781843a795b4f03029086e4de24be32975c7baee048942fc2100acf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:30:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4dd610be781843a795b4f03029086e4de24be32975c7baee048942fc2100acf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:30:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4dd610be781843a795b4f03029086e4de24be32975c7baee048942fc2100acf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:30:18 compute-0 podman[272670]: 2026-01-20 14:30:18.202999511 +0000 UTC m=+0.142808040 container init b09dafb63e0f720b2a99de47ea294f6e6b44dd4668609c01145d1938d8763628 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_mcnulty, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 14:30:18 compute-0 podman[272670]: 2026-01-20 14:30:18.210341779 +0000 UTC m=+0.150150278 container start b09dafb63e0f720b2a99de47ea294f6e6b44dd4668609c01145d1938d8763628 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_mcnulty, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 20 14:30:18 compute-0 podman[272670]: 2026-01-20 14:30:18.228110186 +0000 UTC m=+0.167918695 container attach b09dafb63e0f720b2a99de47ea294f6e6b44dd4668609c01145d1938d8763628 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_mcnulty, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS)
Jan 20 14:30:18 compute-0 nova_compute[250018]: 2026-01-20 14:30:18.250 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:30:18 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:30:18 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1828975272' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:30:18 compute-0 nova_compute[250018]: 2026-01-20 14:30:18.687 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:30:18 compute-0 nova_compute[250018]: 2026-01-20 14:30:18.697 250022 DEBUG nova.compute.provider_tree [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:30:18 compute-0 nova_compute[250018]: 2026-01-20 14:30:18.725 250022 DEBUG nova.scheduler.client.report [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:30:18 compute-0 nova_compute[250018]: 2026-01-20 14:30:18.760 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 14:30:18 compute-0 nova_compute[250018]: 2026-01-20 14:30:18.761 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.871s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:30:18 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:30:18 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:30:18 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:30:18.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:30:18 compute-0 ceph-mon[74360]: pgmap v1188: 321 pgs: 321 active+clean; 505 MiB data, 631 MiB used, 20 GiB / 21 GiB avail; 872 KiB/s rd, 5.0 MiB/s wr, 142 op/s
Jan 20 14:30:18 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/61502453' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:30:18 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/1828975272' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:30:19 compute-0 silly_mcnulty[272687]: {
Jan 20 14:30:19 compute-0 silly_mcnulty[272687]:     "afc3740a-ccee-46f8-b343-22648cc89376": {
Jan 20 14:30:19 compute-0 silly_mcnulty[272687]:         "ceph_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 14:30:19 compute-0 silly_mcnulty[272687]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 20 14:30:19 compute-0 silly_mcnulty[272687]:         "osd_id": 0,
Jan 20 14:30:19 compute-0 silly_mcnulty[272687]:         "osd_uuid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 14:30:19 compute-0 silly_mcnulty[272687]:         "type": "bluestore"
Jan 20 14:30:19 compute-0 silly_mcnulty[272687]:     }
Jan 20 14:30:19 compute-0 silly_mcnulty[272687]: }
Jan 20 14:30:19 compute-0 systemd[1]: libpod-b09dafb63e0f720b2a99de47ea294f6e6b44dd4668609c01145d1938d8763628.scope: Deactivated successfully.
Jan 20 14:30:19 compute-0 podman[272731]: 2026-01-20 14:30:19.090132128 +0000 UTC m=+0.022411723 container died b09dafb63e0f720b2a99de47ea294f6e6b44dd4668609c01145d1938d8763628 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_mcnulty, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:30:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-e4dd610be781843a795b4f03029086e4de24be32975c7baee048942fc2100acf-merged.mount: Deactivated successfully.
Jan 20 14:30:19 compute-0 podman[272731]: 2026-01-20 14:30:19.201688318 +0000 UTC m=+0.133967913 container remove b09dafb63e0f720b2a99de47ea294f6e6b44dd4668609c01145d1938d8763628 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_mcnulty, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 20 14:30:19 compute-0 systemd[1]: libpod-conmon-b09dafb63e0f720b2a99de47ea294f6e6b44dd4668609c01145d1938d8763628.scope: Deactivated successfully.
Jan 20 14:30:19 compute-0 sudo[272540]: pam_unix(sudo:session): session closed for user root
Jan 20 14:30:19 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1189: 321 pgs: 321 active+clean; 520 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 5.1 MiB/s wr, 168 op/s
Jan 20 14:30:19 compute-0 nova_compute[250018]: 2026-01-20 14:30:19.644 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:30:19 compute-0 nova_compute[250018]: 2026-01-20 14:30:19.762 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:30:19 compute-0 nova_compute[250018]: 2026-01-20 14:30:19.762 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:30:19 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 14:30:19 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:30:19 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 14:30:19 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:30:19 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:30:19 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:30:19.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:30:20 compute-0 nova_compute[250018]: 2026-01-20 14:30:20.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:30:20 compute-0 nova_compute[250018]: 2026-01-20 14:30:20.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:30:20 compute-0 nova_compute[250018]: 2026-01-20 14:30:20.051 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 14:30:20 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:30:20 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 00099a66-87ff-4733-839a-240a7a8b0801 does not exist
Jan 20 14:30:20 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 975aaa5d-1f06-4b42-9c12-c13fec52c503 does not exist
Jan 20 14:30:20 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev dc273ab1-e432-4e09-9085-0910b966d51f does not exist
Jan 20 14:30:20 compute-0 sudo[272748]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:30:20 compute-0 sudo[272746]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:30:20 compute-0 sudo[272748]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:30:20 compute-0 sudo[272746]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:30:20 compute-0 sudo[272748]: pam_unix(sudo:session): session closed for user root
Jan 20 14:30:20 compute-0 sudo[272746]: pam_unix(sudo:session): session closed for user root
Jan 20 14:30:20 compute-0 sudo[272796]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 14:30:20 compute-0 sudo[272796]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:30:20 compute-0 sudo[272798]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:30:20 compute-0 sudo[272796]: pam_unix(sudo:session): session closed for user root
Jan 20 14:30:20 compute-0 sudo[272798]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:30:20 compute-0 sudo[272798]: pam_unix(sudo:session): session closed for user root
Jan 20 14:30:20 compute-0 nova_compute[250018]: 2026-01-20 14:30:20.599 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:30:20 compute-0 nova_compute[250018]: 2026-01-20 14:30:20.638 250022 DEBUG nova.compute.manager [None req-c9b86ee8-0957-4e69-a1fc-4cc0a2f42a3b d4b36d8e19cb4f529d2185f573f5072a 2074a786307f4427bbbbc1103d4a9305 - - default default] [instance: d95ca690-20e1-4b0c-919b-d64c9af25eba] Stashing vm_state: active _prep_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:5560
Jan 20 14:30:20 compute-0 nova_compute[250018]: 2026-01-20 14:30:20.721 250022 DEBUG oslo_concurrency.lockutils [None req-c9b86ee8-0957-4e69-a1fc-4cc0a2f42a3b d4b36d8e19cb4f529d2185f573f5072a 2074a786307f4427bbbbc1103d4a9305 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:30:20 compute-0 nova_compute[250018]: 2026-01-20 14:30:20.722 250022 DEBUG oslo_concurrency.lockutils [None req-c9b86ee8-0957-4e69-a1fc-4cc0a2f42a3b d4b36d8e19cb4f529d2185f573f5072a 2074a786307f4427bbbbc1103d4a9305 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:30:20 compute-0 nova_compute[250018]: 2026-01-20 14:30:20.757 250022 DEBUG nova.objects.instance [None req-c9b86ee8-0957-4e69-a1fc-4cc0a2f42a3b d4b36d8e19cb4f529d2185f573f5072a 2074a786307f4427bbbbc1103d4a9305 - - default default] Lazy-loading 'pci_requests' on Instance uuid d95ca690-20e1-4b0c-919b-d64c9af25eba obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:30:20 compute-0 nova_compute[250018]: 2026-01-20 14:30:20.775 250022 DEBUG nova.virt.hardware [None req-c9b86ee8-0957-4e69-a1fc-4cc0a2f42a3b d4b36d8e19cb4f529d2185f573f5072a 2074a786307f4427bbbbc1103d4a9305 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 14:30:20 compute-0 nova_compute[250018]: 2026-01-20 14:30:20.776 250022 INFO nova.compute.claims [None req-c9b86ee8-0957-4e69-a1fc-4cc0a2f42a3b d4b36d8e19cb4f529d2185f573f5072a 2074a786307f4427bbbbc1103d4a9305 - - default default] [instance: d95ca690-20e1-4b0c-919b-d64c9af25eba] Claim successful on node compute-0.ctlplane.example.com
Jan 20 14:30:20 compute-0 nova_compute[250018]: 2026-01-20 14:30:20.777 250022 DEBUG nova.objects.instance [None req-c9b86ee8-0957-4e69-a1fc-4cc0a2f42a3b d4b36d8e19cb4f529d2185f573f5072a 2074a786307f4427bbbbc1103d4a9305 - - default default] Lazy-loading 'resources' on Instance uuid d95ca690-20e1-4b0c-919b-d64c9af25eba obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:30:20 compute-0 nova_compute[250018]: 2026-01-20 14:30:20.789 250022 DEBUG nova.objects.instance [None req-c9b86ee8-0957-4e69-a1fc-4cc0a2f42a3b d4b36d8e19cb4f529d2185f573f5072a 2074a786307f4427bbbbc1103d4a9305 - - default default] Lazy-loading 'numa_topology' on Instance uuid d95ca690-20e1-4b0c-919b-d64c9af25eba obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:30:20 compute-0 nova_compute[250018]: 2026-01-20 14:30:20.799 250022 DEBUG nova.objects.instance [None req-c9b86ee8-0957-4e69-a1fc-4cc0a2f42a3b d4b36d8e19cb4f529d2185f573f5072a 2074a786307f4427bbbbc1103d4a9305 - - default default] Lazy-loading 'pci_devices' on Instance uuid d95ca690-20e1-4b0c-919b-d64c9af25eba obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:30:20 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:30:20 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:30:20 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:30:20.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:30:20 compute-0 nova_compute[250018]: 2026-01-20 14:30:20.840 250022 INFO nova.compute.resource_tracker [None req-c9b86ee8-0957-4e69-a1fc-4cc0a2f42a3b d4b36d8e19cb4f529d2185f573f5072a 2074a786307f4427bbbbc1103d4a9305 - - default default] [instance: d95ca690-20e1-4b0c-919b-d64c9af25eba] Updating resource usage from migration 60518ded-c4ce-45b7-a976-2c06150ab129
Jan 20 14:30:20 compute-0 nova_compute[250018]: 2026-01-20 14:30:20.841 250022 DEBUG nova.compute.resource_tracker [None req-c9b86ee8-0957-4e69-a1fc-4cc0a2f42a3b d4b36d8e19cb4f529d2185f573f5072a 2074a786307f4427bbbbc1103d4a9305 - - default default] [instance: d95ca690-20e1-4b0c-919b-d64c9af25eba] Starting to track incoming migration 60518ded-c4ce-45b7-a976-2c06150ab129 with flavor 522deaab-a741-4dbb-932d-d8b13a211c33 _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431
Jan 20 14:30:20 compute-0 nova_compute[250018]: 2026-01-20 14:30:20.939 250022 DEBUG oslo_concurrency.processutils [None req-c9b86ee8-0957-4e69-a1fc-4cc0a2f42a3b d4b36d8e19cb4f529d2185f573f5072a 2074a786307f4427bbbbc1103d4a9305 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:30:21 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/2148169791' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:30:21 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/2203175762' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:30:21 compute-0 ceph-mon[74360]: pgmap v1189: 321 pgs: 321 active+clean; 520 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 5.1 MiB/s wr, 168 op/s
Jan 20 14:30:21 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/3241562052' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:30:21 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:30:21 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1190: 321 pgs: 321 active+clean; 582 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 7.5 MiB/s wr, 265 op/s
Jan 20 14:30:21 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:30:21 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3075205744' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:30:21 compute-0 nova_compute[250018]: 2026-01-20 14:30:21.476 250022 DEBUG oslo_concurrency.processutils [None req-c9b86ee8-0957-4e69-a1fc-4cc0a2f42a3b d4b36d8e19cb4f529d2185f573f5072a 2074a786307f4427bbbbc1103d4a9305 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.537s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:30:21 compute-0 nova_compute[250018]: 2026-01-20 14:30:21.483 250022 DEBUG nova.compute.provider_tree [None req-c9b86ee8-0957-4e69-a1fc-4cc0a2f42a3b d4b36d8e19cb4f529d2185f573f5072a 2074a786307f4427bbbbc1103d4a9305 - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:30:21 compute-0 nova_compute[250018]: 2026-01-20 14:30:21.500 250022 DEBUG nova.scheduler.client.report [None req-c9b86ee8-0957-4e69-a1fc-4cc0a2f42a3b d4b36d8e19cb4f529d2185f573f5072a 2074a786307f4427bbbbc1103d4a9305 - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:30:21 compute-0 nova_compute[250018]: 2026-01-20 14:30:21.528 250022 DEBUG oslo_concurrency.lockutils [None req-c9b86ee8-0957-4e69-a1fc-4cc0a2f42a3b d4b36d8e19cb4f529d2185f573f5072a 2074a786307f4427bbbbc1103d4a9305 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: held 0.806s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:30:21 compute-0 nova_compute[250018]: 2026-01-20 14:30:21.528 250022 INFO nova.compute.manager [None req-c9b86ee8-0957-4e69-a1fc-4cc0a2f42a3b d4b36d8e19cb4f529d2185f573f5072a 2074a786307f4427bbbbc1103d4a9305 - - default default] [instance: d95ca690-20e1-4b0c-919b-d64c9af25eba] Migrating
Jan 20 14:30:21 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:30:21 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:30:21 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:30:21 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:30:21.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:30:22 compute-0 nova_compute[250018]: 2026-01-20 14:30:22.045 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:30:22 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:30:22 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/2267082636' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:30:22 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/891766828' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:30:22 compute-0 ceph-mon[74360]: pgmap v1190: 321 pgs: 321 active+clean; 582 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 7.5 MiB/s wr, 265 op/s
Jan 20 14:30:22 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3075205744' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:30:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:30:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:30:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:30:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:30:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:30:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:30:22 compute-0 sshd-session[272869]: Accepted publickey for nova from 192.168.122.101 port 41992 ssh2: ECDSA SHA256:XnPnjIKlkePRv+YAV8ktjwWUWX9aekF80jIRGfdhjRU
Jan 20 14:30:22 compute-0 systemd-logind[796]: New session 53 of user nova.
Jan 20 14:30:22 compute-0 systemd[1]: Created slice User Slice of UID 42436.
Jan 20 14:30:22 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42436...
Jan 20 14:30:22 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42436.
Jan 20 14:30:22 compute-0 systemd[1]: Starting User Manager for UID 42436...
Jan 20 14:30:22 compute-0 systemd[272874]: pam_unix(systemd-user:session): session opened for user nova(uid=42436) by nova(uid=0)
Jan 20 14:30:22 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:30:22 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:30:22 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:30:22.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:30:22 compute-0 systemd[272874]: Queued start job for default target Main User Target.
Jan 20 14:30:22 compute-0 systemd[272874]: Created slice User Application Slice.
Jan 20 14:30:22 compute-0 systemd[272874]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 20 14:30:22 compute-0 systemd[272874]: Started Daily Cleanup of User's Temporary Directories.
Jan 20 14:30:22 compute-0 systemd[272874]: Reached target Paths.
Jan 20 14:30:22 compute-0 systemd[272874]: Reached target Timers.
Jan 20 14:30:22 compute-0 systemd[272874]: Starting D-Bus User Message Bus Socket...
Jan 20 14:30:22 compute-0 systemd[272874]: Starting Create User's Volatile Files and Directories...
Jan 20 14:30:22 compute-0 systemd[272874]: Finished Create User's Volatile Files and Directories.
Jan 20 14:30:22 compute-0 systemd[272874]: Listening on D-Bus User Message Bus Socket.
Jan 20 14:30:22 compute-0 systemd[272874]: Reached target Sockets.
Jan 20 14:30:22 compute-0 systemd[272874]: Reached target Basic System.
Jan 20 14:30:22 compute-0 systemd[272874]: Reached target Main User Target.
Jan 20 14:30:22 compute-0 systemd[272874]: Startup finished in 145ms.
Jan 20 14:30:22 compute-0 systemd[1]: Started User Manager for UID 42436.
Jan 20 14:30:22 compute-0 systemd[1]: Started Session 53 of User nova.
Jan 20 14:30:22 compute-0 sshd-session[272869]: pam_unix(sshd:session): session opened for user nova(uid=42436) by nova(uid=0)
Jan 20 14:30:23 compute-0 sshd-session[272889]: Received disconnect from 192.168.122.101 port 41992:11: disconnected by user
Jan 20 14:30:23 compute-0 sshd-session[272889]: Disconnected from user nova 192.168.122.101 port 41992
Jan 20 14:30:23 compute-0 sshd-session[272869]: pam_unix(sshd:session): session closed for user nova
Jan 20 14:30:23 compute-0 systemd[1]: session-53.scope: Deactivated successfully.
Jan 20 14:30:23 compute-0 systemd-logind[796]: Session 53 logged out. Waiting for processes to exit.
Jan 20 14:30:23 compute-0 systemd-logind[796]: Removed session 53.
Jan 20 14:30:23 compute-0 nova_compute[250018]: 2026-01-20 14:30:23.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:30:23 compute-0 sshd-session[272891]: Accepted publickey for nova from 192.168.122.101 port 41998 ssh2: ECDSA SHA256:XnPnjIKlkePRv+YAV8ktjwWUWX9aekF80jIRGfdhjRU
Jan 20 14:30:23 compute-0 systemd-logind[796]: New session 55 of user nova.
Jan 20 14:30:23 compute-0 systemd[1]: Started Session 55 of User nova.
Jan 20 14:30:23 compute-0 sshd-session[272891]: pam_unix(sshd:session): session opened for user nova(uid=42436) by nova(uid=0)
Jan 20 14:30:23 compute-0 sshd-session[272894]: Received disconnect from 192.168.122.101 port 41998:11: disconnected by user
Jan 20 14:30:23 compute-0 sshd-session[272894]: Disconnected from user nova 192.168.122.101 port 41998
Jan 20 14:30:23 compute-0 sshd-session[272891]: pam_unix(sshd:session): session closed for user nova
Jan 20 14:30:23 compute-0 systemd[1]: session-55.scope: Deactivated successfully.
Jan 20 14:30:23 compute-0 systemd-logind[796]: Session 55 logged out. Waiting for processes to exit.
Jan 20 14:30:23 compute-0 systemd-logind[796]: Removed session 55.
Jan 20 14:30:23 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1191: 321 pgs: 321 active+clean; 582 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 7.5 MiB/s wr, 265 op/s
Jan 20 14:30:23 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:30:23 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:30:23 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:30:23.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:30:24 compute-0 nova_compute[250018]: 2026-01-20 14:30:24.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:30:24 compute-0 nova_compute[250018]: 2026-01-20 14:30:24.052 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 14:30:24 compute-0 nova_compute[250018]: 2026-01-20 14:30:24.052 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 14:30:24 compute-0 nova_compute[250018]: 2026-01-20 14:30:24.470 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "refresh_cache-ad62888a-ef27-43b4-bb6c-439541ff5524" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:30:24 compute-0 nova_compute[250018]: 2026-01-20 14:30:24.470 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquired lock "refresh_cache-ad62888a-ef27-43b4-bb6c-439541ff5524" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:30:24 compute-0 nova_compute[250018]: 2026-01-20 14:30:24.470 250022 DEBUG nova.network.neutron [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] [instance: ad62888a-ef27-43b4-bb6c-439541ff5524] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 20 14:30:24 compute-0 nova_compute[250018]: 2026-01-20 14:30:24.471 250022 DEBUG nova.objects.instance [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lazy-loading 'info_cache' on Instance uuid ad62888a-ef27-43b4-bb6c-439541ff5524 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:30:24 compute-0 nova_compute[250018]: 2026-01-20 14:30:24.645 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:30:24 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:30:24 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:30:24 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:30:24.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:30:25 compute-0 ceph-mon[74360]: pgmap v1191: 321 pgs: 321 active+clean; 582 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 7.5 MiB/s wr, 265 op/s
Jan 20 14:30:25 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1192: 321 pgs: 321 active+clean; 583 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 7.5 MiB/s wr, 338 op/s
Jan 20 14:30:25 compute-0 nova_compute[250018]: 2026-01-20 14:30:25.601 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:30:25 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:30:25 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:30:25 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:30:25.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:30:26 compute-0 nova_compute[250018]: 2026-01-20 14:30:26.142 250022 DEBUG nova.network.neutron [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] [instance: ad62888a-ef27-43b4-bb6c-439541ff5524] Updating instance_info_cache with network_info: [{"id": "dc8abb47-5960-4824-b04c-1903f2eb5e32", "address": "fa:16:3e:76:68:31", "network": {"id": "02f86d1d-5cad-49c5-9004-3de3e4739ad5", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-889517255-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.239", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0cee74dd60da4a839bb5eb0ba3137edf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc8abb47-59", "ovs_interfaceid": "dc8abb47-5960-4824-b04c-1903f2eb5e32", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:30:26 compute-0 nova_compute[250018]: 2026-01-20 14:30:26.160 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Releasing lock "refresh_cache-ad62888a-ef27-43b4-bb6c-439541ff5524" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:30:26 compute-0 nova_compute[250018]: 2026-01-20 14:30:26.160 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] [instance: ad62888a-ef27-43b4-bb6c-439541ff5524] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 20 14:30:26 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/256037197' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:30:26 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/1585323425' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:30:26 compute-0 ceph-mon[74360]: pgmap v1192: 321 pgs: 321 active+clean; 583 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 7.5 MiB/s wr, 338 op/s
Jan 20 14:30:26 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:30:26 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:30:26 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:30:26 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:30:26.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:30:27 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1193: 321 pgs: 321 active+clean; 583 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 6.2 MiB/s rd, 5.1 MiB/s wr, 328 op/s
Jan 20 14:30:27 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/2870659096' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:30:27 compute-0 nova_compute[250018]: 2026-01-20 14:30:27.825 250022 DEBUG nova.compute.manager [req-5feb12f5-f148-4dd7-8e9e-4c1e1b23a535 req-cebe04e4-5266-4ccb-a562-4d3cd542d204 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: ad62888a-ef27-43b4-bb6c-439541ff5524] Received event network-changed-dc8abb47-5960-4824-b04c-1903f2eb5e32 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:30:27 compute-0 nova_compute[250018]: 2026-01-20 14:30:27.826 250022 DEBUG nova.compute.manager [req-5feb12f5-f148-4dd7-8e9e-4c1e1b23a535 req-cebe04e4-5266-4ccb-a562-4d3cd542d204 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: ad62888a-ef27-43b4-bb6c-439541ff5524] Refreshing instance network info cache due to event network-changed-dc8abb47-5960-4824-b04c-1903f2eb5e32. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:30:27 compute-0 nova_compute[250018]: 2026-01-20 14:30:27.826 250022 DEBUG oslo_concurrency.lockutils [req-5feb12f5-f148-4dd7-8e9e-4c1e1b23a535 req-cebe04e4-5266-4ccb-a562-4d3cd542d204 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-ad62888a-ef27-43b4-bb6c-439541ff5524" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:30:27 compute-0 nova_compute[250018]: 2026-01-20 14:30:27.826 250022 DEBUG oslo_concurrency.lockutils [req-5feb12f5-f148-4dd7-8e9e-4c1e1b23a535 req-cebe04e4-5266-4ccb-a562-4d3cd542d204 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-ad62888a-ef27-43b4-bb6c-439541ff5524" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:30:27 compute-0 nova_compute[250018]: 2026-01-20 14:30:27.826 250022 DEBUG nova.network.neutron [req-5feb12f5-f148-4dd7-8e9e-4c1e1b23a535 req-cebe04e4-5266-4ccb-a562-4d3cd542d204 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: ad62888a-ef27-43b4-bb6c-439541ff5524] Refreshing network info cache for port dc8abb47-5960-4824-b04c-1903f2eb5e32 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:30:27 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:30:27 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:30:27 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:30:27.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:30:28 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:30:28 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:30:28 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:30:28.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:30:28 compute-0 ceph-mon[74360]: pgmap v1193: 321 pgs: 321 active+clean; 583 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 6.2 MiB/s rd, 5.1 MiB/s wr, 328 op/s
Jan 20 14:30:29 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1194: 321 pgs: 321 active+clean; 590 MiB data, 653 MiB used, 20 GiB / 21 GiB avail; 5.9 MiB/s rd, 3.2 MiB/s wr, 302 op/s
Jan 20 14:30:29 compute-0 nova_compute[250018]: 2026-01-20 14:30:29.545 250022 DEBUG nova.network.neutron [req-5feb12f5-f148-4dd7-8e9e-4c1e1b23a535 req-cebe04e4-5266-4ccb-a562-4d3cd542d204 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: ad62888a-ef27-43b4-bb6c-439541ff5524] Updated VIF entry in instance network info cache for port dc8abb47-5960-4824-b04c-1903f2eb5e32. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 14:30:29 compute-0 nova_compute[250018]: 2026-01-20 14:30:29.546 250022 DEBUG nova.network.neutron [req-5feb12f5-f148-4dd7-8e9e-4c1e1b23a535 req-cebe04e4-5266-4ccb-a562-4d3cd542d204 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: ad62888a-ef27-43b4-bb6c-439541ff5524] Updating instance_info_cache with network_info: [{"id": "dc8abb47-5960-4824-b04c-1903f2eb5e32", "address": "fa:16:3e:76:68:31", "network": {"id": "02f86d1d-5cad-49c5-9004-3de3e4739ad5", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-889517255-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0cee74dd60da4a839bb5eb0ba3137edf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc8abb47-59", "ovs_interfaceid": "dc8abb47-5960-4824-b04c-1903f2eb5e32", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:30:29 compute-0 nova_compute[250018]: 2026-01-20 14:30:29.570 250022 DEBUG oslo_concurrency.lockutils [req-5feb12f5-f148-4dd7-8e9e-4c1e1b23a535 req-cebe04e4-5266-4ccb-a562-4d3cd542d204 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-ad62888a-ef27-43b4-bb6c-439541ff5524" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:30:29 compute-0 nova_compute[250018]: 2026-01-20 14:30:29.647 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:30:29 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:30:29 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:30:29 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:30:29.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:30:30 compute-0 ceph-mon[74360]: pgmap v1194: 321 pgs: 321 active+clean; 590 MiB data, 653 MiB used, 20 GiB / 21 GiB avail; 5.9 MiB/s rd, 3.2 MiB/s wr, 302 op/s
Jan 20 14:30:30 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/2358003891' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:30:30 compute-0 nova_compute[250018]: 2026-01-20 14:30:30.603 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:30:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:30:30.745 160071 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:30:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:30:30.745 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:30:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:30:30.746 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:30:30 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:30:30 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:30:30 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:30:30.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:30:31 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/2341563966' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:30:31 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1195: 321 pgs: 321 active+clean; 655 MiB data, 712 MiB used, 20 GiB / 21 GiB avail; 5.6 MiB/s rd, 6.1 MiB/s wr, 301 op/s
Jan 20 14:30:31 compute-0 podman[272901]: 2026-01-20 14:30:31.464839676 +0000 UTC m=+0.058899254 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 20 14:30:31 compute-0 podman[272900]: 2026-01-20 14:30:31.497898125 +0000 UTC m=+0.093038303 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, 
managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 20 14:30:31 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:30:31 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:30:31 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:30:31 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:30:31.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:30:32 compute-0 ceph-mon[74360]: pgmap v1195: 321 pgs: 321 active+clean; 655 MiB data, 712 MiB used, 20 GiB / 21 GiB avail; 5.6 MiB/s rd, 6.1 MiB/s wr, 301 op/s
Jan 20 14:30:32 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:30:32 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:30:32 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:30:32.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:30:33 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1196: 321 pgs: 321 active+clean; 655 MiB data, 712 MiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 3.7 MiB/s wr, 203 op/s
Jan 20 14:30:33 compute-0 systemd[1]: Stopping User Manager for UID 42436...
Jan 20 14:30:33 compute-0 systemd[272874]: Activating special unit Exit the Session...
Jan 20 14:30:33 compute-0 systemd[272874]: Stopped target Main User Target.
Jan 20 14:30:33 compute-0 systemd[272874]: Stopped target Basic System.
Jan 20 14:30:33 compute-0 systemd[272874]: Stopped target Paths.
Jan 20 14:30:33 compute-0 systemd[272874]: Stopped target Sockets.
Jan 20 14:30:33 compute-0 systemd[272874]: Stopped target Timers.
Jan 20 14:30:33 compute-0 systemd[272874]: Stopped Mark boot as successful after the user session has run 2 minutes.
Jan 20 14:30:33 compute-0 systemd[272874]: Stopped Daily Cleanup of User's Temporary Directories.
Jan 20 14:30:33 compute-0 systemd[272874]: Closed D-Bus User Message Bus Socket.
Jan 20 14:30:33 compute-0 systemd[272874]: Stopped Create User's Volatile Files and Directories.
Jan 20 14:30:33 compute-0 systemd[272874]: Removed slice User Application Slice.
Jan 20 14:30:33 compute-0 systemd[272874]: Reached target Shutdown.
Jan 20 14:30:33 compute-0 systemd[272874]: Finished Exit the Session.
Jan 20 14:30:33 compute-0 systemd[272874]: Reached target Exit the Session.
Jan 20 14:30:33 compute-0 systemd[1]: user@42436.service: Deactivated successfully.
Jan 20 14:30:33 compute-0 systemd[1]: Stopped User Manager for UID 42436.
Jan 20 14:30:33 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/42436...
Jan 20 14:30:33 compute-0 systemd[1]: run-user-42436.mount: Deactivated successfully.
Jan 20 14:30:33 compute-0 systemd[1]: user-runtime-dir@42436.service: Deactivated successfully.
Jan 20 14:30:33 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/42436.
Jan 20 14:30:33 compute-0 systemd[1]: Removed slice User Slice of UID 42436.
Jan 20 14:30:33 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:30:33 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:30:33 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:30:33.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:30:34 compute-0 ceph-mon[74360]: pgmap v1196: 321 pgs: 321 active+clean; 655 MiB data, 712 MiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 3.7 MiB/s wr, 203 op/s
Jan 20 14:30:34 compute-0 nova_compute[250018]: 2026-01-20 14:30:34.650 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:30:34 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:30:34 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:30:34 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:30:34.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:30:35 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1197: 321 pgs: 321 active+clean; 674 MiB data, 727 MiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 4.9 MiB/s wr, 281 op/s
Jan 20 14:30:35 compute-0 nova_compute[250018]: 2026-01-20 14:30:35.604 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:30:35 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:30:35 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:30:35 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:30:35.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:30:35 compute-0 ceph-mon[74360]: pgmap v1197: 321 pgs: 321 active+clean; 674 MiB data, 727 MiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 4.9 MiB/s wr, 281 op/s
Jan 20 14:30:36 compute-0 sshd-session[272949]: Invalid user admin from 157.245.78.139 port 45690
Jan 20 14:30:36 compute-0 sshd-session[272949]: Connection closed by invalid user admin 157.245.78.139 port 45690 [preauth]
Jan 20 14:30:36 compute-0 nova_compute[250018]: 2026-01-20 14:30:36.596 250022 DEBUG oslo_concurrency.lockutils [None req-c9b86ee8-0957-4e69-a1fc-4cc0a2f42a3b d4b36d8e19cb4f529d2185f573f5072a 2074a786307f4427bbbbc1103d4a9305 - - default default] Acquiring lock "refresh_cache-d95ca690-20e1-4b0c-919b-d64c9af25eba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:30:36 compute-0 nova_compute[250018]: 2026-01-20 14:30:36.598 250022 DEBUG oslo_concurrency.lockutils [None req-c9b86ee8-0957-4e69-a1fc-4cc0a2f42a3b d4b36d8e19cb4f529d2185f573f5072a 2074a786307f4427bbbbc1103d4a9305 - - default default] Acquired lock "refresh_cache-d95ca690-20e1-4b0c-919b-d64c9af25eba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:30:36 compute-0 nova_compute[250018]: 2026-01-20 14:30:36.598 250022 DEBUG nova.network.neutron [None req-c9b86ee8-0957-4e69-a1fc-4cc0a2f42a3b d4b36d8e19cb4f529d2185f573f5072a 2074a786307f4427bbbbc1103d4a9305 - - default default] [instance: d95ca690-20e1-4b0c-919b-d64c9af25eba] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:30:36 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:30:36 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:30:36 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:30:36 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:30:36.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:30:37 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1198: 321 pgs: 321 active+clean; 689 MiB data, 735 MiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 6.1 MiB/s wr, 288 op/s
Jan 20 14:30:37 compute-0 nova_compute[250018]: 2026-01-20 14:30:37.466 250022 DEBUG nova.network.neutron [None req-c9b86ee8-0957-4e69-a1fc-4cc0a2f42a3b d4b36d8e19cb4f529d2185f573f5072a 2074a786307f4427bbbbc1103d4a9305 - - default default] [instance: d95ca690-20e1-4b0c-919b-d64c9af25eba] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 14:30:37 compute-0 nova_compute[250018]: 2026-01-20 14:30:37.750 250022 DEBUG nova.network.neutron [None req-c9b86ee8-0957-4e69-a1fc-4cc0a2f42a3b d4b36d8e19cb4f529d2185f573f5072a 2074a786307f4427bbbbc1103d4a9305 - - default default] [instance: d95ca690-20e1-4b0c-919b-d64c9af25eba] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:30:37 compute-0 nova_compute[250018]: 2026-01-20 14:30:37.769 250022 DEBUG oslo_concurrency.lockutils [None req-c9b86ee8-0957-4e69-a1fc-4cc0a2f42a3b d4b36d8e19cb4f529d2185f573f5072a 2074a786307f4427bbbbc1103d4a9305 - - default default] Releasing lock "refresh_cache-d95ca690-20e1-4b0c-919b-d64c9af25eba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:30:37 compute-0 nova_compute[250018]: 2026-01-20 14:30:37.851 250022 DEBUG nova.virt.libvirt.driver [None req-c9b86ee8-0957-4e69-a1fc-4cc0a2f42a3b d4b36d8e19cb4f529d2185f573f5072a 2074a786307f4427bbbbc1103d4a9305 - - default default] [instance: d95ca690-20e1-4b0c-919b-d64c9af25eba] Starting finish_migration finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11698
Jan 20 14:30:37 compute-0 nova_compute[250018]: 2026-01-20 14:30:37.853 250022 DEBUG nova.virt.libvirt.driver [None req-c9b86ee8-0957-4e69-a1fc-4cc0a2f42a3b d4b36d8e19cb4f529d2185f573f5072a 2074a786307f4427bbbbc1103d4a9305 - - default default] [instance: d95ca690-20e1-4b0c-919b-d64c9af25eba] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Jan 20 14:30:37 compute-0 nova_compute[250018]: 2026-01-20 14:30:37.853 250022 INFO nova.virt.libvirt.driver [None req-c9b86ee8-0957-4e69-a1fc-4cc0a2f42a3b d4b36d8e19cb4f529d2185f573f5072a 2074a786307f4427bbbbc1103d4a9305 - - default default] [instance: d95ca690-20e1-4b0c-919b-d64c9af25eba] Creating image(s)
Jan 20 14:30:37 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:30:37 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:30:37 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:30:37.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:30:37 compute-0 nova_compute[250018]: 2026-01-20 14:30:37.927 250022 DEBUG nova.storage.rbd_utils [None req-c9b86ee8-0957-4e69-a1fc-4cc0a2f42a3b d4b36d8e19cb4f529d2185f573f5072a 2074a786307f4427bbbbc1103d4a9305 - - default default] creating snapshot(nova-resize) on rbd image(d95ca690-20e1-4b0c-919b-d64c9af25eba_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 20 14:30:37 compute-0 ceph-mon[74360]: pgmap v1198: 321 pgs: 321 active+clean; 689 MiB data, 735 MiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 6.1 MiB/s wr, 288 op/s
Jan 20 14:30:38 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:30:38 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:30:38 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:30:38.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:30:38 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e162 do_prune osdmap full prune enabled
Jan 20 14:30:39 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e163 e163: 3 total, 3 up, 3 in
Jan 20 14:30:39 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e163: 3 total, 3 up, 3 in
Jan 20 14:30:39 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/561453425' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:30:39 compute-0 ceph-mon[74360]: osdmap e163: 3 total, 3 up, 3 in
Jan 20 14:30:39 compute-0 nova_compute[250018]: 2026-01-20 14:30:39.204 250022 DEBUG nova.objects.instance [None req-c9b86ee8-0957-4e69-a1fc-4cc0a2f42a3b d4b36d8e19cb4f529d2185f573f5072a 2074a786307f4427bbbbc1103d4a9305 - - default default] Lazy-loading 'trusted_certs' on Instance uuid d95ca690-20e1-4b0c-919b-d64c9af25eba obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:30:39 compute-0 nova_compute[250018]: 2026-01-20 14:30:39.321 250022 DEBUG nova.virt.libvirt.driver [None req-c9b86ee8-0957-4e69-a1fc-4cc0a2f42a3b d4b36d8e19cb4f529d2185f573f5072a 2074a786307f4427bbbbc1103d4a9305 - - default default] [instance: d95ca690-20e1-4b0c-919b-d64c9af25eba] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Jan 20 14:30:39 compute-0 nova_compute[250018]: 2026-01-20 14:30:39.322 250022 DEBUG nova.virt.libvirt.driver [None req-c9b86ee8-0957-4e69-a1fc-4cc0a2f42a3b d4b36d8e19cb4f529d2185f573f5072a 2074a786307f4427bbbbc1103d4a9305 - - default default] [instance: d95ca690-20e1-4b0c-919b-d64c9af25eba] Ensure instance console log exists: /var/lib/nova/instances/d95ca690-20e1-4b0c-919b-d64c9af25eba/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 14:30:39 compute-0 nova_compute[250018]: 2026-01-20 14:30:39.322 250022 DEBUG oslo_concurrency.lockutils [None req-c9b86ee8-0957-4e69-a1fc-4cc0a2f42a3b d4b36d8e19cb4f529d2185f573f5072a 2074a786307f4427bbbbc1103d4a9305 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:30:39 compute-0 nova_compute[250018]: 2026-01-20 14:30:39.322 250022 DEBUG oslo_concurrency.lockutils [None req-c9b86ee8-0957-4e69-a1fc-4cc0a2f42a3b d4b36d8e19cb4f529d2185f573f5072a 2074a786307f4427bbbbc1103d4a9305 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:30:39 compute-0 nova_compute[250018]: 2026-01-20 14:30:39.323 250022 DEBUG oslo_concurrency.lockutils [None req-c9b86ee8-0957-4e69-a1fc-4cc0a2f42a3b d4b36d8e19cb4f529d2185f573f5072a 2074a786307f4427bbbbc1103d4a9305 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:30:39 compute-0 nova_compute[250018]: 2026-01-20 14:30:39.324 250022 DEBUG nova.virt.libvirt.driver [None req-c9b86ee8-0957-4e69-a1fc-4cc0a2f42a3b d4b36d8e19cb4f529d2185f573f5072a 2074a786307f4427bbbbc1103d4a9305 - - default default] [instance: d95ca690-20e1-4b0c-919b-d64c9af25eba] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encrypted': False, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'guest_format': None, 'size': 0, 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 14:30:39 compute-0 nova_compute[250018]: 2026-01-20 14:30:39.329 250022 WARNING nova.virt.libvirt.driver [None req-c9b86ee8-0957-4e69-a1fc-4cc0a2f42a3b d4b36d8e19cb4f529d2185f573f5072a 2074a786307f4427bbbbc1103d4a9305 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:30:39 compute-0 nova_compute[250018]: 2026-01-20 14:30:39.333 250022 DEBUG nova.virt.libvirt.host [None req-c9b86ee8-0957-4e69-a1fc-4cc0a2f42a3b d4b36d8e19cb4f529d2185f573f5072a 2074a786307f4427bbbbc1103d4a9305 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 14:30:39 compute-0 nova_compute[250018]: 2026-01-20 14:30:39.334 250022 DEBUG nova.virt.libvirt.host [None req-c9b86ee8-0957-4e69-a1fc-4cc0a2f42a3b d4b36d8e19cb4f529d2185f573f5072a 2074a786307f4427bbbbc1103d4a9305 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 14:30:39 compute-0 nova_compute[250018]: 2026-01-20 14:30:39.337 250022 DEBUG nova.virt.libvirt.host [None req-c9b86ee8-0957-4e69-a1fc-4cc0a2f42a3b d4b36d8e19cb4f529d2185f573f5072a 2074a786307f4427bbbbc1103d4a9305 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 14:30:39 compute-0 nova_compute[250018]: 2026-01-20 14:30:39.338 250022 DEBUG nova.virt.libvirt.host [None req-c9b86ee8-0957-4e69-a1fc-4cc0a2f42a3b d4b36d8e19cb4f529d2185f573f5072a 2074a786307f4427bbbbc1103d4a9305 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 14:30:39 compute-0 nova_compute[250018]: 2026-01-20 14:30:39.339 250022 DEBUG nova.virt.libvirt.driver [None req-c9b86ee8-0957-4e69-a1fc-4cc0a2f42a3b d4b36d8e19cb4f529d2185f573f5072a 2074a786307f4427bbbbc1103d4a9305 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 14:30:39 compute-0 nova_compute[250018]: 2026-01-20 14:30:39.339 250022 DEBUG nova.virt.hardware [None req-c9b86ee8-0957-4e69-a1fc-4cc0a2f42a3b d4b36d8e19cb4f529d2185f573f5072a 2074a786307f4427bbbbc1103d4a9305 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 14:30:39 compute-0 nova_compute[250018]: 2026-01-20 14:30:39.339 250022 DEBUG nova.virt.hardware [None req-c9b86ee8-0957-4e69-a1fc-4cc0a2f42a3b d4b36d8e19cb4f529d2185f573f5072a 2074a786307f4427bbbbc1103d4a9305 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 14:30:39 compute-0 nova_compute[250018]: 2026-01-20 14:30:39.340 250022 DEBUG nova.virt.hardware [None req-c9b86ee8-0957-4e69-a1fc-4cc0a2f42a3b d4b36d8e19cb4f529d2185f573f5072a 2074a786307f4427bbbbc1103d4a9305 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 14:30:39 compute-0 nova_compute[250018]: 2026-01-20 14:30:39.340 250022 DEBUG nova.virt.hardware [None req-c9b86ee8-0957-4e69-a1fc-4cc0a2f42a3b d4b36d8e19cb4f529d2185f573f5072a 2074a786307f4427bbbbc1103d4a9305 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 14:30:39 compute-0 nova_compute[250018]: 2026-01-20 14:30:39.340 250022 DEBUG nova.virt.hardware [None req-c9b86ee8-0957-4e69-a1fc-4cc0a2f42a3b d4b36d8e19cb4f529d2185f573f5072a 2074a786307f4427bbbbc1103d4a9305 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 14:30:39 compute-0 nova_compute[250018]: 2026-01-20 14:30:39.340 250022 DEBUG nova.virt.hardware [None req-c9b86ee8-0957-4e69-a1fc-4cc0a2f42a3b d4b36d8e19cb4f529d2185f573f5072a 2074a786307f4427bbbbc1103d4a9305 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 14:30:39 compute-0 nova_compute[250018]: 2026-01-20 14:30:39.341 250022 DEBUG nova.virt.hardware [None req-c9b86ee8-0957-4e69-a1fc-4cc0a2f42a3b d4b36d8e19cb4f529d2185f573f5072a 2074a786307f4427bbbbc1103d4a9305 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 14:30:39 compute-0 nova_compute[250018]: 2026-01-20 14:30:39.341 250022 DEBUG nova.virt.hardware [None req-c9b86ee8-0957-4e69-a1fc-4cc0a2f42a3b d4b36d8e19cb4f529d2185f573f5072a 2074a786307f4427bbbbc1103d4a9305 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 14:30:39 compute-0 nova_compute[250018]: 2026-01-20 14:30:39.341 250022 DEBUG nova.virt.hardware [None req-c9b86ee8-0957-4e69-a1fc-4cc0a2f42a3b d4b36d8e19cb4f529d2185f573f5072a 2074a786307f4427bbbbc1103d4a9305 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 14:30:39 compute-0 nova_compute[250018]: 2026-01-20 14:30:39.341 250022 DEBUG nova.virt.hardware [None req-c9b86ee8-0957-4e69-a1fc-4cc0a2f42a3b d4b36d8e19cb4f529d2185f573f5072a 2074a786307f4427bbbbc1103d4a9305 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 14:30:39 compute-0 nova_compute[250018]: 2026-01-20 14:30:39.342 250022 DEBUG nova.virt.hardware [None req-c9b86ee8-0957-4e69-a1fc-4cc0a2f42a3b d4b36d8e19cb4f529d2185f573f5072a 2074a786307f4427bbbbc1103d4a9305 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 14:30:39 compute-0 nova_compute[250018]: 2026-01-20 14:30:39.342 250022 DEBUG nova.objects.instance [None req-c9b86ee8-0957-4e69-a1fc-4cc0a2f42a3b d4b36d8e19cb4f529d2185f573f5072a 2074a786307f4427bbbbc1103d4a9305 - - default default] Lazy-loading 'vcpu_model' on Instance uuid d95ca690-20e1-4b0c-919b-d64c9af25eba obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:30:39 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1200: 321 pgs: 321 active+clean; 696 MiB data, 743 MiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 7.9 MiB/s wr, 276 op/s
Jan 20 14:30:39 compute-0 nova_compute[250018]: 2026-01-20 14:30:39.361 250022 DEBUG oslo_concurrency.processutils [None req-c9b86ee8-0957-4e69-a1fc-4cc0a2f42a3b d4b36d8e19cb4f529d2185f573f5072a 2074a786307f4427bbbbc1103d4a9305 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:30:39 compute-0 nova_compute[250018]: 2026-01-20 14:30:39.653 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:30:39 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:30:39 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/742726590' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:30:39 compute-0 nova_compute[250018]: 2026-01-20 14:30:39.820 250022 DEBUG oslo_concurrency.processutils [None req-c9b86ee8-0957-4e69-a1fc-4cc0a2f42a3b d4b36d8e19cb4f529d2185f573f5072a 2074a786307f4427bbbbc1103d4a9305 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:30:39 compute-0 nova_compute[250018]: 2026-01-20 14:30:39.854 250022 DEBUG oslo_concurrency.processutils [None req-c9b86ee8-0957-4e69-a1fc-4cc0a2f42a3b d4b36d8e19cb4f529d2185f573f5072a 2074a786307f4427bbbbc1103d4a9305 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:30:39 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:30:39 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:30:39 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:30:39.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:30:40 compute-0 ceph-mon[74360]: pgmap v1200: 321 pgs: 321 active+clean; 696 MiB data, 743 MiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 7.9 MiB/s wr, 276 op/s
Jan 20 14:30:40 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/742726590' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:30:40 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:30:40 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3766126878' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:30:40 compute-0 nova_compute[250018]: 2026-01-20 14:30:40.343 250022 DEBUG oslo_concurrency.processutils [None req-c9b86ee8-0957-4e69-a1fc-4cc0a2f42a3b d4b36d8e19cb4f529d2185f573f5072a 2074a786307f4427bbbbc1103d4a9305 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:30:40 compute-0 nova_compute[250018]: 2026-01-20 14:30:40.347 250022 DEBUG nova.virt.libvirt.driver [None req-c9b86ee8-0957-4e69-a1fc-4cc0a2f42a3b d4b36d8e19cb4f529d2185f573f5072a 2074a786307f4427bbbbc1103d4a9305 - - default default] [instance: d95ca690-20e1-4b0c-919b-d64c9af25eba] End _get_guest_xml xml=<domain type="kvm">
Jan 20 14:30:40 compute-0 nova_compute[250018]:   <uuid>d95ca690-20e1-4b0c-919b-d64c9af25eba</uuid>
Jan 20 14:30:40 compute-0 nova_compute[250018]:   <name>instance-0000001d</name>
Jan 20 14:30:40 compute-0 nova_compute[250018]:   <memory>131072</memory>
Jan 20 14:30:40 compute-0 nova_compute[250018]:   <vcpu>1</vcpu>
Jan 20 14:30:40 compute-0 nova_compute[250018]:   <metadata>
Jan 20 14:30:40 compute-0 nova_compute[250018]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 14:30:40 compute-0 nova_compute[250018]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 14:30:40 compute-0 nova_compute[250018]:       <nova:name>tempest-MigrationsAdminTest-server-1542965426</nova:name>
Jan 20 14:30:40 compute-0 nova_compute[250018]:       <nova:creationTime>2026-01-20 14:30:39</nova:creationTime>
Jan 20 14:30:40 compute-0 nova_compute[250018]:       <nova:flavor name="m1.nano">
Jan 20 14:30:40 compute-0 nova_compute[250018]:         <nova:memory>128</nova:memory>
Jan 20 14:30:40 compute-0 nova_compute[250018]:         <nova:disk>1</nova:disk>
Jan 20 14:30:40 compute-0 nova_compute[250018]:         <nova:swap>0</nova:swap>
Jan 20 14:30:40 compute-0 nova_compute[250018]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 14:30:40 compute-0 nova_compute[250018]:         <nova:vcpus>1</nova:vcpus>
Jan 20 14:30:40 compute-0 nova_compute[250018]:       </nova:flavor>
Jan 20 14:30:40 compute-0 nova_compute[250018]:       <nova:owner>
Jan 20 14:30:40 compute-0 nova_compute[250018]:         <nova:user uuid="01a3d712f05049b19d4ecc7051720ad5">tempest-MigrationsAdminTest-1518611738-project-member</nova:user>
Jan 20 14:30:40 compute-0 nova_compute[250018]:         <nova:project uuid="f3c2e72a7148496394c8bcd618a19c80">tempest-MigrationsAdminTest-1518611738</nova:project>
Jan 20 14:30:40 compute-0 nova_compute[250018]:       </nova:owner>
Jan 20 14:30:40 compute-0 nova_compute[250018]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 14:30:40 compute-0 nova_compute[250018]:       <nova:ports/>
Jan 20 14:30:40 compute-0 nova_compute[250018]:     </nova:instance>
Jan 20 14:30:40 compute-0 nova_compute[250018]:   </metadata>
Jan 20 14:30:40 compute-0 nova_compute[250018]:   <sysinfo type="smbios">
Jan 20 14:30:40 compute-0 nova_compute[250018]:     <system>
Jan 20 14:30:40 compute-0 nova_compute[250018]:       <entry name="manufacturer">RDO</entry>
Jan 20 14:30:40 compute-0 nova_compute[250018]:       <entry name="product">OpenStack Compute</entry>
Jan 20 14:30:40 compute-0 nova_compute[250018]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 14:30:40 compute-0 nova_compute[250018]:       <entry name="serial">d95ca690-20e1-4b0c-919b-d64c9af25eba</entry>
Jan 20 14:30:40 compute-0 nova_compute[250018]:       <entry name="uuid">d95ca690-20e1-4b0c-919b-d64c9af25eba</entry>
Jan 20 14:30:40 compute-0 nova_compute[250018]:       <entry name="family">Virtual Machine</entry>
Jan 20 14:30:40 compute-0 nova_compute[250018]:     </system>
Jan 20 14:30:40 compute-0 nova_compute[250018]:   </sysinfo>
Jan 20 14:30:40 compute-0 nova_compute[250018]:   <os>
Jan 20 14:30:40 compute-0 nova_compute[250018]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 14:30:40 compute-0 nova_compute[250018]:     <boot dev="hd"/>
Jan 20 14:30:40 compute-0 nova_compute[250018]:     <smbios mode="sysinfo"/>
Jan 20 14:30:40 compute-0 nova_compute[250018]:   </os>
Jan 20 14:30:40 compute-0 nova_compute[250018]:   <features>
Jan 20 14:30:40 compute-0 nova_compute[250018]:     <acpi/>
Jan 20 14:30:40 compute-0 nova_compute[250018]:     <apic/>
Jan 20 14:30:40 compute-0 nova_compute[250018]:     <vmcoreinfo/>
Jan 20 14:30:40 compute-0 nova_compute[250018]:   </features>
Jan 20 14:30:40 compute-0 nova_compute[250018]:   <clock offset="utc">
Jan 20 14:30:40 compute-0 nova_compute[250018]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 14:30:40 compute-0 nova_compute[250018]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 14:30:40 compute-0 nova_compute[250018]:     <timer name="hpet" present="no"/>
Jan 20 14:30:40 compute-0 nova_compute[250018]:   </clock>
Jan 20 14:30:40 compute-0 nova_compute[250018]:   <cpu mode="custom" match="exact">
Jan 20 14:30:40 compute-0 nova_compute[250018]:     <model>Nehalem</model>
Jan 20 14:30:40 compute-0 nova_compute[250018]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 14:30:40 compute-0 nova_compute[250018]:   </cpu>
Jan 20 14:30:40 compute-0 nova_compute[250018]:   <devices>
Jan 20 14:30:40 compute-0 nova_compute[250018]:     <disk type="network" device="disk">
Jan 20 14:30:40 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 14:30:40 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/d95ca690-20e1-4b0c-919b-d64c9af25eba_disk">
Jan 20 14:30:40 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:30:40 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:30:40 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:30:40 compute-0 nova_compute[250018]:       </source>
Jan 20 14:30:40 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 14:30:40 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:30:40 compute-0 nova_compute[250018]:       </auth>
Jan 20 14:30:40 compute-0 nova_compute[250018]:       <target dev="vda" bus="virtio"/>
Jan 20 14:30:40 compute-0 nova_compute[250018]:     </disk>
Jan 20 14:30:40 compute-0 nova_compute[250018]:     <disk type="network" device="cdrom">
Jan 20 14:30:40 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 14:30:40 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/d95ca690-20e1-4b0c-919b-d64c9af25eba_disk.config">
Jan 20 14:30:40 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:30:40 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:30:40 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:30:40 compute-0 nova_compute[250018]:       </source>
Jan 20 14:30:40 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 14:30:40 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:30:40 compute-0 nova_compute[250018]:       </auth>
Jan 20 14:30:40 compute-0 nova_compute[250018]:       <target dev="sda" bus="sata"/>
Jan 20 14:30:40 compute-0 nova_compute[250018]:     </disk>
Jan 20 14:30:40 compute-0 nova_compute[250018]:     <serial type="pty">
Jan 20 14:30:40 compute-0 nova_compute[250018]:       <log file="/var/lib/nova/instances/d95ca690-20e1-4b0c-919b-d64c9af25eba/console.log" append="off"/>
Jan 20 14:30:40 compute-0 nova_compute[250018]:     </serial>
Jan 20 14:30:40 compute-0 nova_compute[250018]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 14:30:40 compute-0 nova_compute[250018]:     <video>
Jan 20 14:30:40 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 14:30:40 compute-0 nova_compute[250018]:     </video>
Jan 20 14:30:40 compute-0 nova_compute[250018]:     <input type="tablet" bus="usb"/>
Jan 20 14:30:40 compute-0 nova_compute[250018]:     <rng model="virtio">
Jan 20 14:30:40 compute-0 nova_compute[250018]:       <backend model="random">/dev/urandom</backend>
Jan 20 14:30:40 compute-0 nova_compute[250018]:     </rng>
Jan 20 14:30:40 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root"/>
Jan 20 14:30:40 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:40 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:40 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:40 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:40 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:40 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:40 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:40 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:40 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:40 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:40 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:40 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:40 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:40 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:40 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:40 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:40 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:40 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:40 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:40 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:40 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:40 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:40 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:40 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:40 compute-0 nova_compute[250018]:     <controller type="usb" index="0"/>
Jan 20 14:30:40 compute-0 nova_compute[250018]:     <memballoon model="virtio">
Jan 20 14:30:40 compute-0 nova_compute[250018]:       <stats period="10"/>
Jan 20 14:30:40 compute-0 nova_compute[250018]:     </memballoon>
Jan 20 14:30:40 compute-0 nova_compute[250018]:   </devices>
Jan 20 14:30:40 compute-0 nova_compute[250018]: </domain>
Jan 20 14:30:40 compute-0 nova_compute[250018]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 14:30:40 compute-0 nova_compute[250018]: 2026-01-20 14:30:40.391 250022 DEBUG nova.virt.libvirt.driver [None req-c9b86ee8-0957-4e69-a1fc-4cc0a2f42a3b d4b36d8e19cb4f529d2185f573f5072a 2074a786307f4427bbbbc1103d4a9305 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:30:40 compute-0 nova_compute[250018]: 2026-01-20 14:30:40.392 250022 DEBUG nova.virt.libvirt.driver [None req-c9b86ee8-0957-4e69-a1fc-4cc0a2f42a3b d4b36d8e19cb4f529d2185f573f5072a 2074a786307f4427bbbbc1103d4a9305 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:30:40 compute-0 nova_compute[250018]: 2026-01-20 14:30:40.392 250022 INFO nova.virt.libvirt.driver [None req-c9b86ee8-0957-4e69-a1fc-4cc0a2f42a3b d4b36d8e19cb4f529d2185f573f5072a 2074a786307f4427bbbbc1103d4a9305 - - default default] [instance: d95ca690-20e1-4b0c-919b-d64c9af25eba] Using config drive
Jan 20 14:30:40 compute-0 systemd-machined[216401]: New machine qemu-16-instance-0000001d.
Jan 20 14:30:40 compute-0 systemd[1]: Started Virtual Machine qemu-16-instance-0000001d.
Jan 20 14:30:40 compute-0 sudo[273119]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:30:40 compute-0 sudo[273119]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:30:40 compute-0 sudo[273119]: pam_unix(sudo:session): session closed for user root
Jan 20 14:30:40 compute-0 nova_compute[250018]: 2026-01-20 14:30:40.606 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:30:40 compute-0 sudo[273145]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:30:40 compute-0 sudo[273145]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:30:40 compute-0 sudo[273145]: pam_unix(sudo:session): session closed for user root
Jan 20 14:30:40 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:30:40 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:30:40 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:30:40.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:30:41 compute-0 nova_compute[250018]: 2026-01-20 14:30:41.160 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768919441.1601455, d95ca690-20e1-4b0c-919b-d64c9af25eba => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:30:41 compute-0 nova_compute[250018]: 2026-01-20 14:30:41.161 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: d95ca690-20e1-4b0c-919b-d64c9af25eba] VM Resumed (Lifecycle Event)
Jan 20 14:30:41 compute-0 nova_compute[250018]: 2026-01-20 14:30:41.163 250022 DEBUG nova.compute.manager [None req-c9b86ee8-0957-4e69-a1fc-4cc0a2f42a3b d4b36d8e19cb4f529d2185f573f5072a 2074a786307f4427bbbbc1103d4a9305 - - default default] [instance: d95ca690-20e1-4b0c-919b-d64c9af25eba] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 14:30:41 compute-0 nova_compute[250018]: 2026-01-20 14:30:41.166 250022 INFO nova.virt.libvirt.driver [-] [instance: d95ca690-20e1-4b0c-919b-d64c9af25eba] Instance running successfully.
Jan 20 14:30:41 compute-0 virtqemud[249565]: argument unsupported: QEMU guest agent is not configured
Jan 20 14:30:41 compute-0 nova_compute[250018]: 2026-01-20 14:30:41.169 250022 DEBUG nova.virt.libvirt.guest [None req-c9b86ee8-0957-4e69-a1fc-4cc0a2f42a3b d4b36d8e19cb4f529d2185f573f5072a 2074a786307f4427bbbbc1103d4a9305 - - default default] [instance: d95ca690-20e1-4b0c-919b-d64c9af25eba] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Jan 20 14:30:41 compute-0 nova_compute[250018]: 2026-01-20 14:30:41.170 250022 DEBUG nova.virt.libvirt.driver [None req-c9b86ee8-0957-4e69-a1fc-4cc0a2f42a3b d4b36d8e19cb4f529d2185f573f5072a 2074a786307f4427bbbbc1103d4a9305 - - default default] [instance: d95ca690-20e1-4b0c-919b-d64c9af25eba] finish_migration finished successfully. finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11793
Jan 20 14:30:41 compute-0 nova_compute[250018]: 2026-01-20 14:30:41.185 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: d95ca690-20e1-4b0c-919b-d64c9af25eba] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:30:41 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3766126878' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:30:41 compute-0 nova_compute[250018]: 2026-01-20 14:30:41.202 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: d95ca690-20e1-4b0c-919b-d64c9af25eba] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:30:41 compute-0 nova_compute[250018]: 2026-01-20 14:30:41.233 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: d95ca690-20e1-4b0c-919b-d64c9af25eba] During sync_power_state the instance has a pending task (resize_finish). Skip.
Jan 20 14:30:41 compute-0 nova_compute[250018]: 2026-01-20 14:30:41.233 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768919441.162694, d95ca690-20e1-4b0c-919b-d64c9af25eba => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:30:41 compute-0 nova_compute[250018]: 2026-01-20 14:30:41.234 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: d95ca690-20e1-4b0c-919b-d64c9af25eba] VM Started (Lifecycle Event)
Jan 20 14:30:41 compute-0 nova_compute[250018]: 2026-01-20 14:30:41.253 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: d95ca690-20e1-4b0c-919b-d64c9af25eba] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:30:41 compute-0 nova_compute[250018]: 2026-01-20 14:30:41.267 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: d95ca690-20e1-4b0c-919b-d64c9af25eba] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:30:41 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1201: 321 pgs: 321 active+clean; 681 MiB data, 750 MiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 5.5 MiB/s wr, 341 op/s
Jan 20 14:30:41 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:30:41 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:30:41 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:30:41 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:30:41.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:30:42 compute-0 ceph-mon[74360]: pgmap v1201: 321 pgs: 321 active+clean; 681 MiB data, 750 MiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 5.5 MiB/s wr, 341 op/s
Jan 20 14:30:42 compute-0 nova_compute[250018]: 2026-01-20 14:30:42.505 250022 DEBUG oslo_concurrency.lockutils [None req-789e21ba-eed3-4ca4-aee2-f814206d5b1e 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Acquiring lock "refresh_cache-d95ca690-20e1-4b0c-919b-d64c9af25eba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:30:42 compute-0 nova_compute[250018]: 2026-01-20 14:30:42.505 250022 DEBUG oslo_concurrency.lockutils [None req-789e21ba-eed3-4ca4-aee2-f814206d5b1e 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Acquired lock "refresh_cache-d95ca690-20e1-4b0c-919b-d64c9af25eba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:30:42 compute-0 nova_compute[250018]: 2026-01-20 14:30:42.506 250022 DEBUG nova.network.neutron [None req-789e21ba-eed3-4ca4-aee2-f814206d5b1e 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: d95ca690-20e1-4b0c-919b-d64c9af25eba] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:30:42 compute-0 nova_compute[250018]: 2026-01-20 14:30:42.739 250022 DEBUG nova.network.neutron [None req-789e21ba-eed3-4ca4-aee2-f814206d5b1e 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: d95ca690-20e1-4b0c-919b-d64c9af25eba] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 14:30:42 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:30:42 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:30:42 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:30:42.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:30:43 compute-0 nova_compute[250018]: 2026-01-20 14:30:43.004 250022 DEBUG nova.network.neutron [None req-789e21ba-eed3-4ca4-aee2-f814206d5b1e 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: d95ca690-20e1-4b0c-919b-d64c9af25eba] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:30:43 compute-0 nova_compute[250018]: 2026-01-20 14:30:43.018 250022 DEBUG oslo_concurrency.lockutils [None req-789e21ba-eed3-4ca4-aee2-f814206d5b1e 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Releasing lock "refresh_cache-d95ca690-20e1-4b0c-919b-d64c9af25eba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:30:43 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:30:43.047 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:30:43 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:30:43.049 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 14:30:43 compute-0 nova_compute[250018]: 2026-01-20 14:30:43.048 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:30:43 compute-0 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d0000001d.scope: Deactivated successfully.
Jan 20 14:30:43 compute-0 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d0000001d.scope: Consumed 2.664s CPU time.
Jan 20 14:30:43 compute-0 systemd-machined[216401]: Machine qemu-16-instance-0000001d terminated.
Jan 20 14:30:43 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/2733108454' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:30:43 compute-0 nova_compute[250018]: 2026-01-20 14:30:43.267 250022 INFO nova.virt.libvirt.driver [-] [instance: d95ca690-20e1-4b0c-919b-d64c9af25eba] Instance destroyed successfully.
Jan 20 14:30:43 compute-0 nova_compute[250018]: 2026-01-20 14:30:43.268 250022 DEBUG nova.objects.instance [None req-789e21ba-eed3-4ca4-aee2-f814206d5b1e 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Lazy-loading 'resources' on Instance uuid d95ca690-20e1-4b0c-919b-d64c9af25eba obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:30:43 compute-0 nova_compute[250018]: 2026-01-20 14:30:43.284 250022 DEBUG oslo_concurrency.lockutils [None req-789e21ba-eed3-4ca4-aee2-f814206d5b1e 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_dest" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:30:43 compute-0 nova_compute[250018]: 2026-01-20 14:30:43.285 250022 DEBUG oslo_concurrency.lockutils [None req-789e21ba-eed3-4ca4-aee2-f814206d5b1e 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_dest" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:30:43 compute-0 nova_compute[250018]: 2026-01-20 14:30:43.299 250022 DEBUG nova.objects.instance [None req-789e21ba-eed3-4ca4-aee2-f814206d5b1e 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Lazy-loading 'migration_context' on Instance uuid d95ca690-20e1-4b0c-919b-d64c9af25eba obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:30:43 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1202: 321 pgs: 321 active+clean; 681 MiB data, 750 MiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 5.5 MiB/s wr, 341 op/s
Jan 20 14:30:43 compute-0 nova_compute[250018]: 2026-01-20 14:30:43.404 250022 DEBUG oslo_concurrency.processutils [None req-789e21ba-eed3-4ca4-aee2-f814206d5b1e 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:30:43 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:30:43 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1040306816' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:30:43 compute-0 nova_compute[250018]: 2026-01-20 14:30:43.802 250022 DEBUG oslo_concurrency.processutils [None req-789e21ba-eed3-4ca4-aee2-f814206d5b1e 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.398s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:30:43 compute-0 nova_compute[250018]: 2026-01-20 14:30:43.808 250022 DEBUG nova.compute.provider_tree [None req-789e21ba-eed3-4ca4-aee2-f814206d5b1e 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:30:43 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:30:43 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:30:43 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:30:43.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:30:43 compute-0 nova_compute[250018]: 2026-01-20 14:30:43.882 250022 DEBUG nova.scheduler.client.report [None req-789e21ba-eed3-4ca4-aee2-f814206d5b1e 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:30:44 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:30:44.050 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=367c1a2c-b16a-4828-ab5a-626bb50023b4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:30:44 compute-0 nova_compute[250018]: 2026-01-20 14:30:44.151 250022 DEBUG oslo_concurrency.lockutils [None req-789e21ba-eed3-4ca4-aee2-f814206d5b1e 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_dest" :: held 0.866s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:30:44 compute-0 ceph-mon[74360]: pgmap v1202: 321 pgs: 321 active+clean; 681 MiB data, 750 MiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 5.5 MiB/s wr, 341 op/s
Jan 20 14:30:44 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/1040306816' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:30:44 compute-0 nova_compute[250018]: 2026-01-20 14:30:44.657 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:30:44 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:30:44 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:30:44 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:30:44.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:30:45 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1203: 321 pgs: 321 active+clean; 632 MiB data, 720 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 4.0 MiB/s wr, 326 op/s
Jan 20 14:30:45 compute-0 nova_compute[250018]: 2026-01-20 14:30:45.659 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:30:45 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:30:45 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:30:45 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:30:45.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:30:46 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e163 do_prune osdmap full prune enabled
Jan 20 14:30:46 compute-0 ceph-mon[74360]: pgmap v1203: 321 pgs: 321 active+clean; 632 MiB data, 720 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 4.0 MiB/s wr, 326 op/s
Jan 20 14:30:46 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:30:46 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:30:46 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:30:46.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:30:46 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e164 e164: 3 total, 3 up, 3 in
Jan 20 14:30:46 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e164: 3 total, 3 up, 3 in
Jan 20 14:30:47 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1205: 321 pgs: 321 active+clean; 602 MiB data, 703 MiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 2.0 MiB/s wr, 279 op/s
Jan 20 14:30:47 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:30:47 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:30:47 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:30:47.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:30:47 compute-0 ceph-mon[74360]: osdmap e164: 3 total, 3 up, 3 in
Jan 20 14:30:47 compute-0 ceph-mon[74360]: pgmap v1205: 321 pgs: 321 active+clean; 602 MiB data, 703 MiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 2.0 MiB/s wr, 279 op/s
Jan 20 14:30:47 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/3386445853' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:30:48 compute-0 nova_compute[250018]: 2026-01-20 14:30:48.403 250022 DEBUG oslo_concurrency.lockutils [None req-d655d649-c97f-4e84-9ded-f3a1c2b25469 31a6b24ce83d4824b02148030eca531c dfafda1e1dad4adfa4b670ec63073d36 - - default default] Acquiring lock "cb46d844-0d47-49f3-9677-e8fdb9bf2b46" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:30:48 compute-0 nova_compute[250018]: 2026-01-20 14:30:48.404 250022 DEBUG oslo_concurrency.lockutils [None req-d655d649-c97f-4e84-9ded-f3a1c2b25469 31a6b24ce83d4824b02148030eca531c dfafda1e1dad4adfa4b670ec63073d36 - - default default] Lock "cb46d844-0d47-49f3-9677-e8fdb9bf2b46" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:30:48 compute-0 nova_compute[250018]: 2026-01-20 14:30:48.428 250022 DEBUG nova.compute.manager [None req-d655d649-c97f-4e84-9ded-f3a1c2b25469 31a6b24ce83d4824b02148030eca531c dfafda1e1dad4adfa4b670ec63073d36 - - default default] [instance: cb46d844-0d47-49f3-9677-e8fdb9bf2b46] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 14:30:48 compute-0 nova_compute[250018]: 2026-01-20 14:30:48.500 250022 DEBUG oslo_concurrency.lockutils [None req-d655d649-c97f-4e84-9ded-f3a1c2b25469 31a6b24ce83d4824b02148030eca531c dfafda1e1dad4adfa4b670ec63073d36 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:30:48 compute-0 nova_compute[250018]: 2026-01-20 14:30:48.501 250022 DEBUG oslo_concurrency.lockutils [None req-d655d649-c97f-4e84-9ded-f3a1c2b25469 31a6b24ce83d4824b02148030eca531c dfafda1e1dad4adfa4b670ec63073d36 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:30:48 compute-0 nova_compute[250018]: 2026-01-20 14:30:48.509 250022 DEBUG nova.virt.hardware [None req-d655d649-c97f-4e84-9ded-f3a1c2b25469 31a6b24ce83d4824b02148030eca531c dfafda1e1dad4adfa4b670ec63073d36 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 14:30:48 compute-0 nova_compute[250018]: 2026-01-20 14:30:48.509 250022 INFO nova.compute.claims [None req-d655d649-c97f-4e84-9ded-f3a1c2b25469 31a6b24ce83d4824b02148030eca531c dfafda1e1dad4adfa4b670ec63073d36 - - default default] [instance: cb46d844-0d47-49f3-9677-e8fdb9bf2b46] Claim successful on node compute-0.ctlplane.example.com
Jan 20 14:30:48 compute-0 nova_compute[250018]: 2026-01-20 14:30:48.665 250022 DEBUG oslo_concurrency.processutils [None req-d655d649-c97f-4e84-9ded-f3a1c2b25469 31a6b24ce83d4824b02148030eca531c dfafda1e1dad4adfa4b670ec63073d36 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:30:48 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:30:48 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:30:48 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:30:48.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:30:48 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/462858156' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:30:49 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:30:49 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2967623465' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:30:49 compute-0 nova_compute[250018]: 2026-01-20 14:30:49.125 250022 DEBUG oslo_concurrency.processutils [None req-d655d649-c97f-4e84-9ded-f3a1c2b25469 31a6b24ce83d4824b02148030eca531c dfafda1e1dad4adfa4b670ec63073d36 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:30:49 compute-0 nova_compute[250018]: 2026-01-20 14:30:49.133 250022 DEBUG nova.compute.provider_tree [None req-d655d649-c97f-4e84-9ded-f3a1c2b25469 31a6b24ce83d4824b02148030eca531c dfafda1e1dad4adfa4b670ec63073d36 - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:30:49 compute-0 nova_compute[250018]: 2026-01-20 14:30:49.150 250022 DEBUG nova.scheduler.client.report [None req-d655d649-c97f-4e84-9ded-f3a1c2b25469 31a6b24ce83d4824b02148030eca531c dfafda1e1dad4adfa4b670ec63073d36 - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:30:49 compute-0 nova_compute[250018]: 2026-01-20 14:30:49.174 250022 DEBUG oslo_concurrency.lockutils [None req-d655d649-c97f-4e84-9ded-f3a1c2b25469 31a6b24ce83d4824b02148030eca531c dfafda1e1dad4adfa4b670ec63073d36 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.673s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:30:49 compute-0 nova_compute[250018]: 2026-01-20 14:30:49.176 250022 DEBUG nova.compute.manager [None req-d655d649-c97f-4e84-9ded-f3a1c2b25469 31a6b24ce83d4824b02148030eca531c dfafda1e1dad4adfa4b670ec63073d36 - - default default] [instance: cb46d844-0d47-49f3-9677-e8fdb9bf2b46] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 14:30:49 compute-0 nova_compute[250018]: 2026-01-20 14:30:49.224 250022 DEBUG nova.compute.manager [None req-d655d649-c97f-4e84-9ded-f3a1c2b25469 31a6b24ce83d4824b02148030eca531c dfafda1e1dad4adfa4b670ec63073d36 - - default default] [instance: cb46d844-0d47-49f3-9677-e8fdb9bf2b46] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Jan 20 14:30:49 compute-0 nova_compute[250018]: 2026-01-20 14:30:49.250 250022 INFO nova.virt.libvirt.driver [None req-d655d649-c97f-4e84-9ded-f3a1c2b25469 31a6b24ce83d4824b02148030eca531c dfafda1e1dad4adfa4b670ec63073d36 - - default default] [instance: cb46d844-0d47-49f3-9677-e8fdb9bf2b46] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 14:30:49 compute-0 nova_compute[250018]: 2026-01-20 14:30:49.272 250022 DEBUG nova.compute.manager [None req-d655d649-c97f-4e84-9ded-f3a1c2b25469 31a6b24ce83d4824b02148030eca531c dfafda1e1dad4adfa4b670ec63073d36 - - default default] [instance: cb46d844-0d47-49f3-9677-e8fdb9bf2b46] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 14:30:49 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1206: 321 pgs: 321 active+clean; 602 MiB data, 703 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 1.7 MiB/s wr, 234 op/s
Jan 20 14:30:49 compute-0 nova_compute[250018]: 2026-01-20 14:30:49.351 250022 DEBUG nova.compute.manager [None req-d655d649-c97f-4e84-9ded-f3a1c2b25469 31a6b24ce83d4824b02148030eca531c dfafda1e1dad4adfa4b670ec63073d36 - - default default] [instance: cb46d844-0d47-49f3-9677-e8fdb9bf2b46] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 14:30:49 compute-0 nova_compute[250018]: 2026-01-20 14:30:49.352 250022 DEBUG nova.virt.libvirt.driver [None req-d655d649-c97f-4e84-9ded-f3a1c2b25469 31a6b24ce83d4824b02148030eca531c dfafda1e1dad4adfa4b670ec63073d36 - - default default] [instance: cb46d844-0d47-49f3-9677-e8fdb9bf2b46] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 14:30:49 compute-0 nova_compute[250018]: 2026-01-20 14:30:49.352 250022 INFO nova.virt.libvirt.driver [None req-d655d649-c97f-4e84-9ded-f3a1c2b25469 31a6b24ce83d4824b02148030eca531c dfafda1e1dad4adfa4b670ec63073d36 - - default default] [instance: cb46d844-0d47-49f3-9677-e8fdb9bf2b46] Creating image(s)
Jan 20 14:30:49 compute-0 nova_compute[250018]: 2026-01-20 14:30:49.382 250022 DEBUG nova.storage.rbd_utils [None req-d655d649-c97f-4e84-9ded-f3a1c2b25469 31a6b24ce83d4824b02148030eca531c dfafda1e1dad4adfa4b670ec63073d36 - - default default] rbd image cb46d844-0d47-49f3-9677-e8fdb9bf2b46_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:30:49 compute-0 nova_compute[250018]: 2026-01-20 14:30:49.414 250022 DEBUG nova.storage.rbd_utils [None req-d655d649-c97f-4e84-9ded-f3a1c2b25469 31a6b24ce83d4824b02148030eca531c dfafda1e1dad4adfa4b670ec63073d36 - - default default] rbd image cb46d844-0d47-49f3-9677-e8fdb9bf2b46_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:30:49 compute-0 nova_compute[250018]: 2026-01-20 14:30:49.446 250022 DEBUG nova.storage.rbd_utils [None req-d655d649-c97f-4e84-9ded-f3a1c2b25469 31a6b24ce83d4824b02148030eca531c dfafda1e1dad4adfa4b670ec63073d36 - - default default] rbd image cb46d844-0d47-49f3-9677-e8fdb9bf2b46_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:30:49 compute-0 nova_compute[250018]: 2026-01-20 14:30:49.450 250022 DEBUG oslo_concurrency.processutils [None req-d655d649-c97f-4e84-9ded-f3a1c2b25469 31a6b24ce83d4824b02148030eca531c dfafda1e1dad4adfa4b670ec63073d36 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:30:49 compute-0 nova_compute[250018]: 2026-01-20 14:30:49.513 250022 DEBUG oslo_concurrency.processutils [None req-d655d649-c97f-4e84-9ded-f3a1c2b25469 31a6b24ce83d4824b02148030eca531c dfafda1e1dad4adfa4b670ec63073d36 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:30:49 compute-0 nova_compute[250018]: 2026-01-20 14:30:49.515 250022 DEBUG oslo_concurrency.lockutils [None req-d655d649-c97f-4e84-9ded-f3a1c2b25469 31a6b24ce83d4824b02148030eca531c dfafda1e1dad4adfa4b670ec63073d36 - - default default] Acquiring lock "82d5c1918fd7c974214c7a48c1793a7a82560462" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:30:49 compute-0 nova_compute[250018]: 2026-01-20 14:30:49.515 250022 DEBUG oslo_concurrency.lockutils [None req-d655d649-c97f-4e84-9ded-f3a1c2b25469 31a6b24ce83d4824b02148030eca531c dfafda1e1dad4adfa4b670ec63073d36 - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:30:49 compute-0 nova_compute[250018]: 2026-01-20 14:30:49.516 250022 DEBUG oslo_concurrency.lockutils [None req-d655d649-c97f-4e84-9ded-f3a1c2b25469 31a6b24ce83d4824b02148030eca531c dfafda1e1dad4adfa4b670ec63073d36 - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:30:49 compute-0 nova_compute[250018]: 2026-01-20 14:30:49.541 250022 DEBUG nova.storage.rbd_utils [None req-d655d649-c97f-4e84-9ded-f3a1c2b25469 31a6b24ce83d4824b02148030eca531c dfafda1e1dad4adfa4b670ec63073d36 - - default default] rbd image cb46d844-0d47-49f3-9677-e8fdb9bf2b46_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:30:49 compute-0 nova_compute[250018]: 2026-01-20 14:30:49.545 250022 DEBUG oslo_concurrency.processutils [None req-d655d649-c97f-4e84-9ded-f3a1c2b25469 31a6b24ce83d4824b02148030eca531c dfafda1e1dad4adfa4b670ec63073d36 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 cb46d844-0d47-49f3-9677-e8fdb9bf2b46_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:30:49 compute-0 nova_compute[250018]: 2026-01-20 14:30:49.660 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:30:49 compute-0 nova_compute[250018]: 2026-01-20 14:30:49.875 250022 DEBUG oslo_concurrency.processutils [None req-d655d649-c97f-4e84-9ded-f3a1c2b25469 31a6b24ce83d4824b02148030eca531c dfafda1e1dad4adfa4b670ec63073d36 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 cb46d844-0d47-49f3-9677-e8fdb9bf2b46_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.330s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:30:49 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:30:49 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:30:49 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:30:49.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:30:49 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/2967623465' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:30:49 compute-0 ceph-mon[74360]: pgmap v1206: 321 pgs: 321 active+clean; 602 MiB data, 703 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 1.7 MiB/s wr, 234 op/s
Jan 20 14:30:49 compute-0 nova_compute[250018]: 2026-01-20 14:30:49.954 250022 DEBUG nova.storage.rbd_utils [None req-d655d649-c97f-4e84-9ded-f3a1c2b25469 31a6b24ce83d4824b02148030eca531c dfafda1e1dad4adfa4b670ec63073d36 - - default default] resizing rbd image cb46d844-0d47-49f3-9677-e8fdb9bf2b46_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 20 14:30:50 compute-0 nova_compute[250018]: 2026-01-20 14:30:50.060 250022 DEBUG nova.objects.instance [None req-d655d649-c97f-4e84-9ded-f3a1c2b25469 31a6b24ce83d4824b02148030eca531c dfafda1e1dad4adfa4b670ec63073d36 - - default default] Lazy-loading 'migration_context' on Instance uuid cb46d844-0d47-49f3-9677-e8fdb9bf2b46 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:30:50 compute-0 nova_compute[250018]: 2026-01-20 14:30:50.083 250022 DEBUG nova.virt.libvirt.driver [None req-d655d649-c97f-4e84-9ded-f3a1c2b25469 31a6b24ce83d4824b02148030eca531c dfafda1e1dad4adfa4b670ec63073d36 - - default default] [instance: cb46d844-0d47-49f3-9677-e8fdb9bf2b46] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 14:30:50 compute-0 nova_compute[250018]: 2026-01-20 14:30:50.083 250022 DEBUG nova.virt.libvirt.driver [None req-d655d649-c97f-4e84-9ded-f3a1c2b25469 31a6b24ce83d4824b02148030eca531c dfafda1e1dad4adfa4b670ec63073d36 - - default default] [instance: cb46d844-0d47-49f3-9677-e8fdb9bf2b46] Ensure instance console log exists: /var/lib/nova/instances/cb46d844-0d47-49f3-9677-e8fdb9bf2b46/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 14:30:50 compute-0 nova_compute[250018]: 2026-01-20 14:30:50.084 250022 DEBUG oslo_concurrency.lockutils [None req-d655d649-c97f-4e84-9ded-f3a1c2b25469 31a6b24ce83d4824b02148030eca531c dfafda1e1dad4adfa4b670ec63073d36 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:30:50 compute-0 nova_compute[250018]: 2026-01-20 14:30:50.084 250022 DEBUG oslo_concurrency.lockutils [None req-d655d649-c97f-4e84-9ded-f3a1c2b25469 31a6b24ce83d4824b02148030eca531c dfafda1e1dad4adfa4b670ec63073d36 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:30:50 compute-0 nova_compute[250018]: 2026-01-20 14:30:50.084 250022 DEBUG oslo_concurrency.lockutils [None req-d655d649-c97f-4e84-9ded-f3a1c2b25469 31a6b24ce83d4824b02148030eca531c dfafda1e1dad4adfa4b670ec63073d36 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:30:50 compute-0 nova_compute[250018]: 2026-01-20 14:30:50.086 250022 DEBUG nova.virt.libvirt.driver [None req-d655d649-c97f-4e84-9ded-f3a1c2b25469 31a6b24ce83d4824b02148030eca531c dfafda1e1dad4adfa4b670ec63073d36 - - default default] [instance: cb46d844-0d47-49f3-9677-e8fdb9bf2b46] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encrypted': False, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'guest_format': None, 'size': 0, 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 14:30:50 compute-0 nova_compute[250018]: 2026-01-20 14:30:50.090 250022 WARNING nova.virt.libvirt.driver [None req-d655d649-c97f-4e84-9ded-f3a1c2b25469 31a6b24ce83d4824b02148030eca531c dfafda1e1dad4adfa4b670ec63073d36 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:30:50 compute-0 nova_compute[250018]: 2026-01-20 14:30:50.096 250022 DEBUG nova.virt.libvirt.host [None req-d655d649-c97f-4e84-9ded-f3a1c2b25469 31a6b24ce83d4824b02148030eca531c dfafda1e1dad4adfa4b670ec63073d36 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 14:30:50 compute-0 nova_compute[250018]: 2026-01-20 14:30:50.097 250022 DEBUG nova.virt.libvirt.host [None req-d655d649-c97f-4e84-9ded-f3a1c2b25469 31a6b24ce83d4824b02148030eca531c dfafda1e1dad4adfa4b670ec63073d36 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 14:30:50 compute-0 nova_compute[250018]: 2026-01-20 14:30:50.099 250022 DEBUG nova.virt.libvirt.host [None req-d655d649-c97f-4e84-9ded-f3a1c2b25469 31a6b24ce83d4824b02148030eca531c dfafda1e1dad4adfa4b670ec63073d36 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 14:30:50 compute-0 nova_compute[250018]: 2026-01-20 14:30:50.100 250022 DEBUG nova.virt.libvirt.host [None req-d655d649-c97f-4e84-9ded-f3a1c2b25469 31a6b24ce83d4824b02148030eca531c dfafda1e1dad4adfa4b670ec63073d36 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 14:30:50 compute-0 nova_compute[250018]: 2026-01-20 14:30:50.101 250022 DEBUG nova.virt.libvirt.driver [None req-d655d649-c97f-4e84-9ded-f3a1c2b25469 31a6b24ce83d4824b02148030eca531c dfafda1e1dad4adfa4b670ec63073d36 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 14:30:50 compute-0 nova_compute[250018]: 2026-01-20 14:30:50.102 250022 DEBUG nova.virt.hardware [None req-d655d649-c97f-4e84-9ded-f3a1c2b25469 31a6b24ce83d4824b02148030eca531c dfafda1e1dad4adfa4b670ec63073d36 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 14:30:50 compute-0 nova_compute[250018]: 2026-01-20 14:30:50.102 250022 DEBUG nova.virt.hardware [None req-d655d649-c97f-4e84-9ded-f3a1c2b25469 31a6b24ce83d4824b02148030eca531c dfafda1e1dad4adfa4b670ec63073d36 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 14:30:50 compute-0 nova_compute[250018]: 2026-01-20 14:30:50.103 250022 DEBUG nova.virt.hardware [None req-d655d649-c97f-4e84-9ded-f3a1c2b25469 31a6b24ce83d4824b02148030eca531c dfafda1e1dad4adfa4b670ec63073d36 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 14:30:50 compute-0 nova_compute[250018]: 2026-01-20 14:30:50.103 250022 DEBUG nova.virt.hardware [None req-d655d649-c97f-4e84-9ded-f3a1c2b25469 31a6b24ce83d4824b02148030eca531c dfafda1e1dad4adfa4b670ec63073d36 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 14:30:50 compute-0 nova_compute[250018]: 2026-01-20 14:30:50.103 250022 DEBUG nova.virt.hardware [None req-d655d649-c97f-4e84-9ded-f3a1c2b25469 31a6b24ce83d4824b02148030eca531c dfafda1e1dad4adfa4b670ec63073d36 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 14:30:50 compute-0 nova_compute[250018]: 2026-01-20 14:30:50.103 250022 DEBUG nova.virt.hardware [None req-d655d649-c97f-4e84-9ded-f3a1c2b25469 31a6b24ce83d4824b02148030eca531c dfafda1e1dad4adfa4b670ec63073d36 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 14:30:50 compute-0 nova_compute[250018]: 2026-01-20 14:30:50.103 250022 DEBUG nova.virt.hardware [None req-d655d649-c97f-4e84-9ded-f3a1c2b25469 31a6b24ce83d4824b02148030eca531c dfafda1e1dad4adfa4b670ec63073d36 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 14:30:50 compute-0 nova_compute[250018]: 2026-01-20 14:30:50.104 250022 DEBUG nova.virt.hardware [None req-d655d649-c97f-4e84-9ded-f3a1c2b25469 31a6b24ce83d4824b02148030eca531c dfafda1e1dad4adfa4b670ec63073d36 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 14:30:50 compute-0 nova_compute[250018]: 2026-01-20 14:30:50.104 250022 DEBUG nova.virt.hardware [None req-d655d649-c97f-4e84-9ded-f3a1c2b25469 31a6b24ce83d4824b02148030eca531c dfafda1e1dad4adfa4b670ec63073d36 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 14:30:50 compute-0 nova_compute[250018]: 2026-01-20 14:30:50.104 250022 DEBUG nova.virt.hardware [None req-d655d649-c97f-4e84-9ded-f3a1c2b25469 31a6b24ce83d4824b02148030eca531c dfafda1e1dad4adfa4b670ec63073d36 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 14:30:50 compute-0 nova_compute[250018]: 2026-01-20 14:30:50.104 250022 DEBUG nova.virt.hardware [None req-d655d649-c97f-4e84-9ded-f3a1c2b25469 31a6b24ce83d4824b02148030eca531c dfafda1e1dad4adfa4b670ec63073d36 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 14:30:50 compute-0 nova_compute[250018]: 2026-01-20 14:30:50.108 250022 DEBUG oslo_concurrency.processutils [None req-d655d649-c97f-4e84-9ded-f3a1c2b25469 31a6b24ce83d4824b02148030eca531c dfafda1e1dad4adfa4b670ec63073d36 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:30:50 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:30:50 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3855722107' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:30:50 compute-0 nova_compute[250018]: 2026-01-20 14:30:50.625 250022 DEBUG oslo_concurrency.processutils [None req-d655d649-c97f-4e84-9ded-f3a1c2b25469 31a6b24ce83d4824b02148030eca531c dfafda1e1dad4adfa4b670ec63073d36 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:30:50 compute-0 nova_compute[250018]: 2026-01-20 14:30:50.657 250022 DEBUG nova.storage.rbd_utils [None req-d655d649-c97f-4e84-9ded-f3a1c2b25469 31a6b24ce83d4824b02148030eca531c dfafda1e1dad4adfa4b670ec63073d36 - - default default] rbd image cb46d844-0d47-49f3-9677-e8fdb9bf2b46_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:30:50 compute-0 nova_compute[250018]: 2026-01-20 14:30:50.661 250022 DEBUG oslo_concurrency.processutils [None req-d655d649-c97f-4e84-9ded-f3a1c2b25469 31a6b24ce83d4824b02148030eca531c dfafda1e1dad4adfa4b670ec63073d36 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:30:50 compute-0 nova_compute[250018]: 2026-01-20 14:30:50.687 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:30:50 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:30:50 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:30:50 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:30:50.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:30:51 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3855722107' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:30:51 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:30:51 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1012040475' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:30:51 compute-0 nova_compute[250018]: 2026-01-20 14:30:51.155 250022 DEBUG oslo_concurrency.processutils [None req-d655d649-c97f-4e84-9ded-f3a1c2b25469 31a6b24ce83d4824b02148030eca531c dfafda1e1dad4adfa4b670ec63073d36 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:30:51 compute-0 nova_compute[250018]: 2026-01-20 14:30:51.158 250022 DEBUG nova.objects.instance [None req-d655d649-c97f-4e84-9ded-f3a1c2b25469 31a6b24ce83d4824b02148030eca531c dfafda1e1dad4adfa4b670ec63073d36 - - default default] Lazy-loading 'pci_devices' on Instance uuid cb46d844-0d47-49f3-9677-e8fdb9bf2b46 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:30:51 compute-0 nova_compute[250018]: 2026-01-20 14:30:51.184 250022 DEBUG nova.virt.libvirt.driver [None req-d655d649-c97f-4e84-9ded-f3a1c2b25469 31a6b24ce83d4824b02148030eca531c dfafda1e1dad4adfa4b670ec63073d36 - - default default] [instance: cb46d844-0d47-49f3-9677-e8fdb9bf2b46] End _get_guest_xml xml=<domain type="kvm">
Jan 20 14:30:51 compute-0 nova_compute[250018]:   <uuid>cb46d844-0d47-49f3-9677-e8fdb9bf2b46</uuid>
Jan 20 14:30:51 compute-0 nova_compute[250018]:   <name>instance-00000021</name>
Jan 20 14:30:51 compute-0 nova_compute[250018]:   <memory>131072</memory>
Jan 20 14:30:51 compute-0 nova_compute[250018]:   <vcpu>1</vcpu>
Jan 20 14:30:51 compute-0 nova_compute[250018]:   <metadata>
Jan 20 14:30:51 compute-0 nova_compute[250018]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 14:30:51 compute-0 nova_compute[250018]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 14:30:51 compute-0 nova_compute[250018]:       <nova:name>tempest-ServerDiagnosticsV248Test-server-29644070</nova:name>
Jan 20 14:30:51 compute-0 nova_compute[250018]:       <nova:creationTime>2026-01-20 14:30:50</nova:creationTime>
Jan 20 14:30:51 compute-0 nova_compute[250018]:       <nova:flavor name="m1.nano">
Jan 20 14:30:51 compute-0 nova_compute[250018]:         <nova:memory>128</nova:memory>
Jan 20 14:30:51 compute-0 nova_compute[250018]:         <nova:disk>1</nova:disk>
Jan 20 14:30:51 compute-0 nova_compute[250018]:         <nova:swap>0</nova:swap>
Jan 20 14:30:51 compute-0 nova_compute[250018]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 14:30:51 compute-0 nova_compute[250018]:         <nova:vcpus>1</nova:vcpus>
Jan 20 14:30:51 compute-0 nova_compute[250018]:       </nova:flavor>
Jan 20 14:30:51 compute-0 nova_compute[250018]:       <nova:owner>
Jan 20 14:30:51 compute-0 nova_compute[250018]:         <nova:user uuid="31a6b24ce83d4824b02148030eca531c">tempest-ServerDiagnosticsV248Test-44593157-project-member</nova:user>
Jan 20 14:30:51 compute-0 nova_compute[250018]:         <nova:project uuid="dfafda1e1dad4adfa4b670ec63073d36">tempest-ServerDiagnosticsV248Test-44593157</nova:project>
Jan 20 14:30:51 compute-0 nova_compute[250018]:       </nova:owner>
Jan 20 14:30:51 compute-0 nova_compute[250018]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 14:30:51 compute-0 nova_compute[250018]:       <nova:ports/>
Jan 20 14:30:51 compute-0 nova_compute[250018]:     </nova:instance>
Jan 20 14:30:51 compute-0 nova_compute[250018]:   </metadata>
Jan 20 14:30:51 compute-0 nova_compute[250018]:   <sysinfo type="smbios">
Jan 20 14:30:51 compute-0 nova_compute[250018]:     <system>
Jan 20 14:30:51 compute-0 nova_compute[250018]:       <entry name="manufacturer">RDO</entry>
Jan 20 14:30:51 compute-0 nova_compute[250018]:       <entry name="product">OpenStack Compute</entry>
Jan 20 14:30:51 compute-0 nova_compute[250018]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 14:30:51 compute-0 nova_compute[250018]:       <entry name="serial">cb46d844-0d47-49f3-9677-e8fdb9bf2b46</entry>
Jan 20 14:30:51 compute-0 nova_compute[250018]:       <entry name="uuid">cb46d844-0d47-49f3-9677-e8fdb9bf2b46</entry>
Jan 20 14:30:51 compute-0 nova_compute[250018]:       <entry name="family">Virtual Machine</entry>
Jan 20 14:30:51 compute-0 nova_compute[250018]:     </system>
Jan 20 14:30:51 compute-0 nova_compute[250018]:   </sysinfo>
Jan 20 14:30:51 compute-0 nova_compute[250018]:   <os>
Jan 20 14:30:51 compute-0 nova_compute[250018]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 14:30:51 compute-0 nova_compute[250018]:     <boot dev="hd"/>
Jan 20 14:30:51 compute-0 nova_compute[250018]:     <smbios mode="sysinfo"/>
Jan 20 14:30:51 compute-0 nova_compute[250018]:   </os>
Jan 20 14:30:51 compute-0 nova_compute[250018]:   <features>
Jan 20 14:30:51 compute-0 nova_compute[250018]:     <acpi/>
Jan 20 14:30:51 compute-0 nova_compute[250018]:     <apic/>
Jan 20 14:30:51 compute-0 nova_compute[250018]:     <vmcoreinfo/>
Jan 20 14:30:51 compute-0 nova_compute[250018]:   </features>
Jan 20 14:30:51 compute-0 nova_compute[250018]:   <clock offset="utc">
Jan 20 14:30:51 compute-0 nova_compute[250018]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 14:30:51 compute-0 nova_compute[250018]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 14:30:51 compute-0 nova_compute[250018]:     <timer name="hpet" present="no"/>
Jan 20 14:30:51 compute-0 nova_compute[250018]:   </clock>
Jan 20 14:30:51 compute-0 nova_compute[250018]:   <cpu mode="custom" match="exact">
Jan 20 14:30:51 compute-0 nova_compute[250018]:     <model>Nehalem</model>
Jan 20 14:30:51 compute-0 nova_compute[250018]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 14:30:51 compute-0 nova_compute[250018]:   </cpu>
Jan 20 14:30:51 compute-0 nova_compute[250018]:   <devices>
Jan 20 14:30:51 compute-0 nova_compute[250018]:     <disk type="network" device="disk">
Jan 20 14:30:51 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 14:30:51 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/cb46d844-0d47-49f3-9677-e8fdb9bf2b46_disk">
Jan 20 14:30:51 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:30:51 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:30:51 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:30:51 compute-0 nova_compute[250018]:       </source>
Jan 20 14:30:51 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 14:30:51 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:30:51 compute-0 nova_compute[250018]:       </auth>
Jan 20 14:30:51 compute-0 nova_compute[250018]:       <target dev="vda" bus="virtio"/>
Jan 20 14:30:51 compute-0 nova_compute[250018]:     </disk>
Jan 20 14:30:51 compute-0 nova_compute[250018]:     <disk type="network" device="cdrom">
Jan 20 14:30:51 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 14:30:51 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/cb46d844-0d47-49f3-9677-e8fdb9bf2b46_disk.config">
Jan 20 14:30:51 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:30:51 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:30:51 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:30:51 compute-0 nova_compute[250018]:       </source>
Jan 20 14:30:51 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 14:30:51 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:30:51 compute-0 nova_compute[250018]:       </auth>
Jan 20 14:30:51 compute-0 nova_compute[250018]:       <target dev="sda" bus="sata"/>
Jan 20 14:30:51 compute-0 nova_compute[250018]:     </disk>
Jan 20 14:30:51 compute-0 nova_compute[250018]:     <serial type="pty">
Jan 20 14:30:51 compute-0 nova_compute[250018]:       <log file="/var/lib/nova/instances/cb46d844-0d47-49f3-9677-e8fdb9bf2b46/console.log" append="off"/>
Jan 20 14:30:51 compute-0 nova_compute[250018]:     </serial>
Jan 20 14:30:51 compute-0 nova_compute[250018]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 14:30:51 compute-0 nova_compute[250018]:     <video>
Jan 20 14:30:51 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 14:30:51 compute-0 nova_compute[250018]:     </video>
Jan 20 14:30:51 compute-0 nova_compute[250018]:     <input type="tablet" bus="usb"/>
Jan 20 14:30:51 compute-0 nova_compute[250018]:     <rng model="virtio">
Jan 20 14:30:51 compute-0 nova_compute[250018]:       <backend model="random">/dev/urandom</backend>
Jan 20 14:30:51 compute-0 nova_compute[250018]:     </rng>
Jan 20 14:30:51 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root"/>
Jan 20 14:30:51 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:51 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:51 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:51 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:51 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:51 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:51 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:51 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:51 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:51 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:51 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:51 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:51 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:51 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:51 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:51 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:51 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:51 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:51 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:51 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:51 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:51 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:51 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:51 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:30:51 compute-0 nova_compute[250018]:     <controller type="usb" index="0"/>
Jan 20 14:30:51 compute-0 nova_compute[250018]:     <memballoon model="virtio">
Jan 20 14:30:51 compute-0 nova_compute[250018]:       <stats period="10"/>
Jan 20 14:30:51 compute-0 nova_compute[250018]:     </memballoon>
Jan 20 14:30:51 compute-0 nova_compute[250018]:   </devices>
Jan 20 14:30:51 compute-0 nova_compute[250018]: </domain>
Jan 20 14:30:51 compute-0 nova_compute[250018]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 14:30:51 compute-0 nova_compute[250018]: 2026-01-20 14:30:51.241 250022 DEBUG nova.virt.libvirt.driver [None req-d655d649-c97f-4e84-9ded-f3a1c2b25469 31a6b24ce83d4824b02148030eca531c dfafda1e1dad4adfa4b670ec63073d36 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:30:51 compute-0 nova_compute[250018]: 2026-01-20 14:30:51.242 250022 DEBUG nova.virt.libvirt.driver [None req-d655d649-c97f-4e84-9ded-f3a1c2b25469 31a6b24ce83d4824b02148030eca531c dfafda1e1dad4adfa4b670ec63073d36 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:30:51 compute-0 nova_compute[250018]: 2026-01-20 14:30:51.243 250022 INFO nova.virt.libvirt.driver [None req-d655d649-c97f-4e84-9ded-f3a1c2b25469 31a6b24ce83d4824b02148030eca531c dfafda1e1dad4adfa4b670ec63073d36 - - default default] [instance: cb46d844-0d47-49f3-9677-e8fdb9bf2b46] Using config drive
Jan 20 14:30:51 compute-0 nova_compute[250018]: 2026-01-20 14:30:51.279 250022 DEBUG nova.storage.rbd_utils [None req-d655d649-c97f-4e84-9ded-f3a1c2b25469 31a6b24ce83d4824b02148030eca531c dfafda1e1dad4adfa4b670ec63073d36 - - default default] rbd image cb46d844-0d47-49f3-9677-e8fdb9bf2b46_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:30:51 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1207: 321 pgs: 321 active+clean; 640 MiB data, 705 MiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 1.8 MiB/s wr, 223 op/s
Jan 20 14:30:51 compute-0 nova_compute[250018]: 2026-01-20 14:30:51.475 250022 INFO nova.virt.libvirt.driver [None req-d655d649-c97f-4e84-9ded-f3a1c2b25469 31a6b24ce83d4824b02148030eca531c dfafda1e1dad4adfa4b670ec63073d36 - - default default] [instance: cb46d844-0d47-49f3-9677-e8fdb9bf2b46] Creating config drive at /var/lib/nova/instances/cb46d844-0d47-49f3-9677-e8fdb9bf2b46/disk.config
Jan 20 14:30:51 compute-0 nova_compute[250018]: 2026-01-20 14:30:51.480 250022 DEBUG oslo_concurrency.processutils [None req-d655d649-c97f-4e84-9ded-f3a1c2b25469 31a6b24ce83d4824b02148030eca531c dfafda1e1dad4adfa4b670ec63073d36 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/cb46d844-0d47-49f3-9677-e8fdb9bf2b46/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpam4t02na execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:30:51 compute-0 nova_compute[250018]: 2026-01-20 14:30:51.612 250022 DEBUG oslo_concurrency.processutils [None req-d655d649-c97f-4e84-9ded-f3a1c2b25469 31a6b24ce83d4824b02148030eca531c dfafda1e1dad4adfa4b670ec63073d36 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/cb46d844-0d47-49f3-9677-e8fdb9bf2b46/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpam4t02na" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:30:51 compute-0 nova_compute[250018]: 2026-01-20 14:30:51.644 250022 DEBUG nova.storage.rbd_utils [None req-d655d649-c97f-4e84-9ded-f3a1c2b25469 31a6b24ce83d4824b02148030eca531c dfafda1e1dad4adfa4b670ec63073d36 - - default default] rbd image cb46d844-0d47-49f3-9677-e8fdb9bf2b46_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:30:51 compute-0 nova_compute[250018]: 2026-01-20 14:30:51.650 250022 DEBUG oslo_concurrency.processutils [None req-d655d649-c97f-4e84-9ded-f3a1c2b25469 31a6b24ce83d4824b02148030eca531c dfafda1e1dad4adfa4b670ec63073d36 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/cb46d844-0d47-49f3-9677-e8fdb9bf2b46/disk.config cb46d844-0d47-49f3-9677-e8fdb9bf2b46_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:30:51 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:30:51 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:30:51 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:30:51 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:30:51.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:30:51 compute-0 nova_compute[250018]: 2026-01-20 14:30:51.891 250022 DEBUG oslo_concurrency.processutils [None req-d655d649-c97f-4e84-9ded-f3a1c2b25469 31a6b24ce83d4824b02148030eca531c dfafda1e1dad4adfa4b670ec63073d36 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/cb46d844-0d47-49f3-9677-e8fdb9bf2b46/disk.config cb46d844-0d47-49f3-9677-e8fdb9bf2b46_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.242s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:30:51 compute-0 nova_compute[250018]: 2026-01-20 14:30:51.893 250022 INFO nova.virt.libvirt.driver [None req-d655d649-c97f-4e84-9ded-f3a1c2b25469 31a6b24ce83d4824b02148030eca531c dfafda1e1dad4adfa4b670ec63073d36 - - default default] [instance: cb46d844-0d47-49f3-9677-e8fdb9bf2b46] Deleting local config drive /var/lib/nova/instances/cb46d844-0d47-49f3-9677-e8fdb9bf2b46/disk.config because it was imported into RBD.
Jan 20 14:30:51 compute-0 systemd-machined[216401]: New machine qemu-17-instance-00000021.
Jan 20 14:30:51 compute-0 systemd[1]: Started Virtual Machine qemu-17-instance-00000021.
Jan 20 14:30:52 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/1012040475' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:30:52 compute-0 ceph-mon[74360]: pgmap v1207: 321 pgs: 321 active+clean; 640 MiB data, 705 MiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 1.8 MiB/s wr, 223 op/s
Jan 20 14:30:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:30:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:30:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:30:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:30:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Optimize plan auto_2026-01-20_14:30:52
Jan 20 14:30:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 14:30:52 compute-0 ceph-mgr[74653]: [balancer INFO root] do_upmap
Jan 20 14:30:52 compute-0 ceph-mgr[74653]: [balancer INFO root] pools ['cephfs.cephfs.meta', '.mgr', 'default.rgw.meta', 'cephfs.cephfs.data', 'default.rgw.log', '.rgw.root', 'images', 'volumes', 'default.rgw.control', 'backups', 'vms']
Jan 20 14:30:52 compute-0 ceph-mgr[74653]: [balancer INFO root] prepared 0/10 changes
Jan 20 14:30:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:30:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:30:52 compute-0 nova_compute[250018]: 2026-01-20 14:30:52.535 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768919452.5353355, cb46d844-0d47-49f3-9677-e8fdb9bf2b46 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:30:52 compute-0 nova_compute[250018]: 2026-01-20 14:30:52.537 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: cb46d844-0d47-49f3-9677-e8fdb9bf2b46] VM Resumed (Lifecycle Event)
Jan 20 14:30:52 compute-0 nova_compute[250018]: 2026-01-20 14:30:52.541 250022 DEBUG nova.compute.manager [None req-d655d649-c97f-4e84-9ded-f3a1c2b25469 31a6b24ce83d4824b02148030eca531c dfafda1e1dad4adfa4b670ec63073d36 - - default default] [instance: cb46d844-0d47-49f3-9677-e8fdb9bf2b46] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 14:30:52 compute-0 nova_compute[250018]: 2026-01-20 14:30:52.541 250022 DEBUG nova.virt.libvirt.driver [None req-d655d649-c97f-4e84-9ded-f3a1c2b25469 31a6b24ce83d4824b02148030eca531c dfafda1e1dad4adfa4b670ec63073d36 - - default default] [instance: cb46d844-0d47-49f3-9677-e8fdb9bf2b46] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 14:30:52 compute-0 nova_compute[250018]: 2026-01-20 14:30:52.545 250022 INFO nova.virt.libvirt.driver [-] [instance: cb46d844-0d47-49f3-9677-e8fdb9bf2b46] Instance spawned successfully.
Jan 20 14:30:52 compute-0 nova_compute[250018]: 2026-01-20 14:30:52.546 250022 DEBUG nova.virt.libvirt.driver [None req-d655d649-c97f-4e84-9ded-f3a1c2b25469 31a6b24ce83d4824b02148030eca531c dfafda1e1dad4adfa4b670ec63073d36 - - default default] [instance: cb46d844-0d47-49f3-9677-e8fdb9bf2b46] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 14:30:52 compute-0 nova_compute[250018]: 2026-01-20 14:30:52.563 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: cb46d844-0d47-49f3-9677-e8fdb9bf2b46] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:30:52 compute-0 nova_compute[250018]: 2026-01-20 14:30:52.565 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: cb46d844-0d47-49f3-9677-e8fdb9bf2b46] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:30:52 compute-0 nova_compute[250018]: 2026-01-20 14:30:52.572 250022 DEBUG nova.virt.libvirt.driver [None req-d655d649-c97f-4e84-9ded-f3a1c2b25469 31a6b24ce83d4824b02148030eca531c dfafda1e1dad4adfa4b670ec63073d36 - - default default] [instance: cb46d844-0d47-49f3-9677-e8fdb9bf2b46] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:30:52 compute-0 nova_compute[250018]: 2026-01-20 14:30:52.572 250022 DEBUG nova.virt.libvirt.driver [None req-d655d649-c97f-4e84-9ded-f3a1c2b25469 31a6b24ce83d4824b02148030eca531c dfafda1e1dad4adfa4b670ec63073d36 - - default default] [instance: cb46d844-0d47-49f3-9677-e8fdb9bf2b46] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:30:52 compute-0 nova_compute[250018]: 2026-01-20 14:30:52.573 250022 DEBUG nova.virt.libvirt.driver [None req-d655d649-c97f-4e84-9ded-f3a1c2b25469 31a6b24ce83d4824b02148030eca531c dfafda1e1dad4adfa4b670ec63073d36 - - default default] [instance: cb46d844-0d47-49f3-9677-e8fdb9bf2b46] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:30:52 compute-0 nova_compute[250018]: 2026-01-20 14:30:52.573 250022 DEBUG nova.virt.libvirt.driver [None req-d655d649-c97f-4e84-9ded-f3a1c2b25469 31a6b24ce83d4824b02148030eca531c dfafda1e1dad4adfa4b670ec63073d36 - - default default] [instance: cb46d844-0d47-49f3-9677-e8fdb9bf2b46] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:30:52 compute-0 nova_compute[250018]: 2026-01-20 14:30:52.573 250022 DEBUG nova.virt.libvirt.driver [None req-d655d649-c97f-4e84-9ded-f3a1c2b25469 31a6b24ce83d4824b02148030eca531c dfafda1e1dad4adfa4b670ec63073d36 - - default default] [instance: cb46d844-0d47-49f3-9677-e8fdb9bf2b46] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:30:52 compute-0 nova_compute[250018]: 2026-01-20 14:30:52.574 250022 DEBUG nova.virt.libvirt.driver [None req-d655d649-c97f-4e84-9ded-f3a1c2b25469 31a6b24ce83d4824b02148030eca531c dfafda1e1dad4adfa4b670ec63073d36 - - default default] [instance: cb46d844-0d47-49f3-9677-e8fdb9bf2b46] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:30:52 compute-0 nova_compute[250018]: 2026-01-20 14:30:52.610 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: cb46d844-0d47-49f3-9677-e8fdb9bf2b46] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:30:52 compute-0 nova_compute[250018]: 2026-01-20 14:30:52.610 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768919452.5366468, cb46d844-0d47-49f3-9677-e8fdb9bf2b46 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:30:52 compute-0 nova_compute[250018]: 2026-01-20 14:30:52.610 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: cb46d844-0d47-49f3-9677-e8fdb9bf2b46] VM Started (Lifecycle Event)
Jan 20 14:30:52 compute-0 nova_compute[250018]: 2026-01-20 14:30:52.652 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: cb46d844-0d47-49f3-9677-e8fdb9bf2b46] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:30:52 compute-0 nova_compute[250018]: 2026-01-20 14:30:52.656 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: cb46d844-0d47-49f3-9677-e8fdb9bf2b46] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:30:52 compute-0 nova_compute[250018]: 2026-01-20 14:30:52.681 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: cb46d844-0d47-49f3-9677-e8fdb9bf2b46] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:30:52 compute-0 nova_compute[250018]: 2026-01-20 14:30:52.693 250022 INFO nova.compute.manager [None req-d655d649-c97f-4e84-9ded-f3a1c2b25469 31a6b24ce83d4824b02148030eca531c dfafda1e1dad4adfa4b670ec63073d36 - - default default] [instance: cb46d844-0d47-49f3-9677-e8fdb9bf2b46] Took 3.34 seconds to spawn the instance on the hypervisor.
Jan 20 14:30:52 compute-0 nova_compute[250018]: 2026-01-20 14:30:52.694 250022 DEBUG nova.compute.manager [None req-d655d649-c97f-4e84-9ded-f3a1c2b25469 31a6b24ce83d4824b02148030eca531c dfafda1e1dad4adfa4b670ec63073d36 - - default default] [instance: cb46d844-0d47-49f3-9677-e8fdb9bf2b46] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:30:52 compute-0 nova_compute[250018]: 2026-01-20 14:30:52.760 250022 INFO nova.compute.manager [None req-d655d649-c97f-4e84-9ded-f3a1c2b25469 31a6b24ce83d4824b02148030eca531c dfafda1e1dad4adfa4b670ec63073d36 - - default default] [instance: cb46d844-0d47-49f3-9677-e8fdb9bf2b46] Took 4.29 seconds to build instance.
Jan 20 14:30:52 compute-0 nova_compute[250018]: 2026-01-20 14:30:52.782 250022 DEBUG oslo_concurrency.lockutils [None req-d655d649-c97f-4e84-9ded-f3a1c2b25469 31a6b24ce83d4824b02148030eca531c dfafda1e1dad4adfa4b670ec63073d36 - - default default] Lock "cb46d844-0d47-49f3-9677-e8fdb9bf2b46" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.378s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:30:52 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:30:52 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:30:52 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:30:52.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:30:53 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1208: 321 pgs: 321 active+clean; 640 MiB data, 705 MiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 1.8 MiB/s wr, 223 op/s
Jan 20 14:30:53 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:30:53 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:30:53 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:30:53.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:30:54 compute-0 ceph-mon[74360]: pgmap v1208: 321 pgs: 321 active+clean; 640 MiB data, 705 MiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 1.8 MiB/s wr, 223 op/s
Jan 20 14:30:54 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/1259635317' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:30:54 compute-0 nova_compute[250018]: 2026-01-20 14:30:54.610 250022 DEBUG oslo_concurrency.lockutils [None req-fa2dbce2-e797-4edc-96ce-414a6bf06e25 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Acquiring lock "9f5c9253-e2bd-42d3-8253-fac568daeda7" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:30:54 compute-0 nova_compute[250018]: 2026-01-20 14:30:54.610 250022 DEBUG oslo_concurrency.lockutils [None req-fa2dbce2-e797-4edc-96ce-414a6bf06e25 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Lock "9f5c9253-e2bd-42d3-8253-fac568daeda7" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:30:54 compute-0 nova_compute[250018]: 2026-01-20 14:30:54.611 250022 DEBUG oslo_concurrency.lockutils [None req-fa2dbce2-e797-4edc-96ce-414a6bf06e25 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Acquiring lock "9f5c9253-e2bd-42d3-8253-fac568daeda7-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:30:54 compute-0 nova_compute[250018]: 2026-01-20 14:30:54.611 250022 DEBUG oslo_concurrency.lockutils [None req-fa2dbce2-e797-4edc-96ce-414a6bf06e25 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Lock "9f5c9253-e2bd-42d3-8253-fac568daeda7-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:30:54 compute-0 nova_compute[250018]: 2026-01-20 14:30:54.611 250022 DEBUG oslo_concurrency.lockutils [None req-fa2dbce2-e797-4edc-96ce-414a6bf06e25 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Lock "9f5c9253-e2bd-42d3-8253-fac568daeda7-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:30:54 compute-0 nova_compute[250018]: 2026-01-20 14:30:54.612 250022 INFO nova.compute.manager [None req-fa2dbce2-e797-4edc-96ce-414a6bf06e25 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: 9f5c9253-e2bd-42d3-8253-fac568daeda7] Terminating instance
Jan 20 14:30:54 compute-0 nova_compute[250018]: 2026-01-20 14:30:54.612 250022 DEBUG oslo_concurrency.lockutils [None req-fa2dbce2-e797-4edc-96ce-414a6bf06e25 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Acquiring lock "refresh_cache-9f5c9253-e2bd-42d3-8253-fac568daeda7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:30:54 compute-0 nova_compute[250018]: 2026-01-20 14:30:54.613 250022 DEBUG oslo_concurrency.lockutils [None req-fa2dbce2-e797-4edc-96ce-414a6bf06e25 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Acquired lock "refresh_cache-9f5c9253-e2bd-42d3-8253-fac568daeda7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:30:54 compute-0 nova_compute[250018]: 2026-01-20 14:30:54.613 250022 DEBUG nova.network.neutron [None req-fa2dbce2-e797-4edc-96ce-414a6bf06e25 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: 9f5c9253-e2bd-42d3-8253-fac568daeda7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:30:54 compute-0 nova_compute[250018]: 2026-01-20 14:30:54.663 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:30:54 compute-0 nova_compute[250018]: 2026-01-20 14:30:54.720 250022 DEBUG nova.compute.manager [None req-c9a3f450-a190-4139-a852-7bb5daba2024 1431af9a73934db9afa530dc0b75d80f e2ce8450af574dfabee24677527aa662 - - default default] [instance: cb46d844-0d47-49f3-9677-e8fdb9bf2b46] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:30:54 compute-0 nova_compute[250018]: 2026-01-20 14:30:54.725 250022 INFO nova.compute.manager [None req-c9a3f450-a190-4139-a852-7bb5daba2024 1431af9a73934db9afa530dc0b75d80f e2ce8450af574dfabee24677527aa662 - - default default] [instance: cb46d844-0d47-49f3-9677-e8fdb9bf2b46] Retrieving diagnostics
Jan 20 14:30:54 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:30:54 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:30:54 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:30:54.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:30:54 compute-0 nova_compute[250018]: 2026-01-20 14:30:54.874 250022 DEBUG nova.network.neutron [None req-fa2dbce2-e797-4edc-96ce-414a6bf06e25 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: 9f5c9253-e2bd-42d3-8253-fac568daeda7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 14:30:55 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1209: 321 pgs: 321 active+clean; 548 MiB data, 671 MiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 2.2 MiB/s wr, 263 op/s
Jan 20 14:30:55 compute-0 nova_compute[250018]: 2026-01-20 14:30:55.644 250022 DEBUG nova.network.neutron [None req-fa2dbce2-e797-4edc-96ce-414a6bf06e25 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: 9f5c9253-e2bd-42d3-8253-fac568daeda7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:30:55 compute-0 nova_compute[250018]: 2026-01-20 14:30:55.656 250022 DEBUG oslo_concurrency.lockutils [None req-fa2dbce2-e797-4edc-96ce-414a6bf06e25 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Releasing lock "refresh_cache-9f5c9253-e2bd-42d3-8253-fac568daeda7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:30:55 compute-0 nova_compute[250018]: 2026-01-20 14:30:55.657 250022 DEBUG nova.compute.manager [None req-fa2dbce2-e797-4edc-96ce-414a6bf06e25 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: 9f5c9253-e2bd-42d3-8253-fac568daeda7] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 14:30:55 compute-0 nova_compute[250018]: 2026-01-20 14:30:55.662 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:30:55 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:30:55 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:30:55 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:30:55.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:30:56 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/1852034554' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:30:56 compute-0 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d0000001b.scope: Deactivated successfully.
Jan 20 14:30:56 compute-0 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d0000001b.scope: Consumed 15.523s CPU time.
Jan 20 14:30:56 compute-0 systemd-machined[216401]: Machine qemu-15-instance-0000001b terminated.
Jan 20 14:30:56 compute-0 nova_compute[250018]: 2026-01-20 14:30:56.277 250022 INFO nova.virt.libvirt.driver [-] [instance: 9f5c9253-e2bd-42d3-8253-fac568daeda7] Instance destroyed successfully.
Jan 20 14:30:56 compute-0 nova_compute[250018]: 2026-01-20 14:30:56.278 250022 DEBUG nova.objects.instance [None req-fa2dbce2-e797-4edc-96ce-414a6bf06e25 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Lazy-loading 'resources' on Instance uuid 9f5c9253-e2bd-42d3-8253-fac568daeda7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:30:56 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:30:56 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e164 do_prune osdmap full prune enabled
Jan 20 14:30:56 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:30:56 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:30:56 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:30:56.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:30:56 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e165 e165: 3 total, 3 up, 3 in
Jan 20 14:30:56 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e165: 3 total, 3 up, 3 in
Jan 20 14:30:57 compute-0 ceph-mon[74360]: pgmap v1209: 321 pgs: 321 active+clean; 548 MiB data, 671 MiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 2.2 MiB/s wr, 263 op/s
Jan 20 14:30:57 compute-0 ceph-mon[74360]: osdmap e165: 3 total, 3 up, 3 in
Jan 20 14:30:57 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1211: 321 pgs: 321 active+clean; 490 MiB data, 644 MiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 2.2 MiB/s wr, 279 op/s
Jan 20 14:30:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 14:30:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 14:30:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:30:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:30:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:30:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:30:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:30:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:30:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:30:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:30:57 compute-0 nova_compute[250018]: 2026-01-20 14:30:57.779 250022 INFO nova.virt.libvirt.driver [None req-fa2dbce2-e797-4edc-96ce-414a6bf06e25 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: 9f5c9253-e2bd-42d3-8253-fac568daeda7] Deleting instance files /var/lib/nova/instances/9f5c9253-e2bd-42d3-8253-fac568daeda7_del
Jan 20 14:30:57 compute-0 nova_compute[250018]: 2026-01-20 14:30:57.780 250022 INFO nova.virt.libvirt.driver [None req-fa2dbce2-e797-4edc-96ce-414a6bf06e25 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: 9f5c9253-e2bd-42d3-8253-fac568daeda7] Deletion of /var/lib/nova/instances/9f5c9253-e2bd-42d3-8253-fac568daeda7_del complete
Jan 20 14:30:57 compute-0 nova_compute[250018]: 2026-01-20 14:30:57.836 250022 INFO nova.compute.manager [None req-fa2dbce2-e797-4edc-96ce-414a6bf06e25 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] [instance: 9f5c9253-e2bd-42d3-8253-fac568daeda7] Took 2.18 seconds to destroy the instance on the hypervisor.
Jan 20 14:30:57 compute-0 nova_compute[250018]: 2026-01-20 14:30:57.836 250022 DEBUG oslo.service.loopingcall [None req-fa2dbce2-e797-4edc-96ce-414a6bf06e25 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 14:30:57 compute-0 nova_compute[250018]: 2026-01-20 14:30:57.837 250022 DEBUG nova.compute.manager [-] [instance: 9f5c9253-e2bd-42d3-8253-fac568daeda7] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 14:30:57 compute-0 nova_compute[250018]: 2026-01-20 14:30:57.838 250022 DEBUG nova.network.neutron [-] [instance: 9f5c9253-e2bd-42d3-8253-fac568daeda7] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 14:30:57 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:30:57 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:30:57 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:30:57.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:30:57 compute-0 nova_compute[250018]: 2026-01-20 14:30:57.988 250022 DEBUG nova.network.neutron [-] [instance: 9f5c9253-e2bd-42d3-8253-fac568daeda7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 14:30:58 compute-0 nova_compute[250018]: 2026-01-20 14:30:58.008 250022 DEBUG nova.network.neutron [-] [instance: 9f5c9253-e2bd-42d3-8253-fac568daeda7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:30:58 compute-0 nova_compute[250018]: 2026-01-20 14:30:58.025 250022 INFO nova.compute.manager [-] [instance: 9f5c9253-e2bd-42d3-8253-fac568daeda7] Took 0.19 seconds to deallocate network for instance.
Jan 20 14:30:58 compute-0 nova_compute[250018]: 2026-01-20 14:30:58.074 250022 DEBUG oslo_concurrency.lockutils [None req-fa2dbce2-e797-4edc-96ce-414a6bf06e25 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:30:58 compute-0 nova_compute[250018]: 2026-01-20 14:30:58.075 250022 DEBUG oslo_concurrency.lockutils [None req-fa2dbce2-e797-4edc-96ce-414a6bf06e25 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:30:58 compute-0 ceph-mon[74360]: pgmap v1211: 321 pgs: 321 active+clean; 490 MiB data, 644 MiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 2.2 MiB/s wr, 279 op/s
Jan 20 14:30:58 compute-0 nova_compute[250018]: 2026-01-20 14:30:58.168 250022 DEBUG oslo_concurrency.processutils [None req-fa2dbce2-e797-4edc-96ce-414a6bf06e25 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:30:58 compute-0 nova_compute[250018]: 2026-01-20 14:30:58.264 250022 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768919443.2633498, d95ca690-20e1-4b0c-919b-d64c9af25eba => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:30:58 compute-0 nova_compute[250018]: 2026-01-20 14:30:58.265 250022 INFO nova.compute.manager [-] [instance: d95ca690-20e1-4b0c-919b-d64c9af25eba] VM Stopped (Lifecycle Event)
Jan 20 14:30:58 compute-0 nova_compute[250018]: 2026-01-20 14:30:58.283 250022 DEBUG nova.compute.manager [None req-86c0e062-d60b-4585-a9ad-0d50449399f9 - - - - - -] [instance: d95ca690-20e1-4b0c-919b-d64c9af25eba] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:30:58 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:30:58 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/951074256' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:30:58 compute-0 nova_compute[250018]: 2026-01-20 14:30:58.584 250022 DEBUG oslo_concurrency.processutils [None req-fa2dbce2-e797-4edc-96ce-414a6bf06e25 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.416s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:30:58 compute-0 nova_compute[250018]: 2026-01-20 14:30:58.591 250022 DEBUG nova.compute.provider_tree [None req-fa2dbce2-e797-4edc-96ce-414a6bf06e25 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:30:58 compute-0 nova_compute[250018]: 2026-01-20 14:30:58.606 250022 DEBUG nova.scheduler.client.report [None req-fa2dbce2-e797-4edc-96ce-414a6bf06e25 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:30:58 compute-0 nova_compute[250018]: 2026-01-20 14:30:58.628 250022 DEBUG oslo_concurrency.lockutils [None req-fa2dbce2-e797-4edc-96ce-414a6bf06e25 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.553s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:30:58 compute-0 nova_compute[250018]: 2026-01-20 14:30:58.655 250022 INFO nova.scheduler.client.report [None req-fa2dbce2-e797-4edc-96ce-414a6bf06e25 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Deleted allocations for instance 9f5c9253-e2bd-42d3-8253-fac568daeda7
Jan 20 14:30:58 compute-0 nova_compute[250018]: 2026-01-20 14:30:58.725 250022 DEBUG oslo_concurrency.lockutils [None req-fa2dbce2-e797-4edc-96ce-414a6bf06e25 01a3d712f05049b19d4ecc7051720ad5 f3c2e72a7148496394c8bcd618a19c80 - - default default] Lock "9f5c9253-e2bd-42d3-8253-fac568daeda7" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.114s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:30:58 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:30:58 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:30:58 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:30:58.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:30:58 compute-0 ovn_controller[148666]: 2026-01-20T14:30:58Z|00100|binding|INFO|Releasing lport 2f798c1c-f9b6-4141-904d-4124d05888ca from this chassis (sb_readonly=0)
Jan 20 14:30:59 compute-0 nova_compute[250018]: 2026-01-20 14:30:59.055 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:30:59 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/951074256' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:30:59 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1212: 321 pgs: 321 active+clean; 472 MiB data, 626 MiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 2.2 MiB/s wr, 295 op/s
Jan 20 14:30:59 compute-0 nova_compute[250018]: 2026-01-20 14:30:59.443 250022 DEBUG oslo_concurrency.lockutils [None req-d8cc8d51-f6a4-49a4-a00d-d0c5b965a9f6 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Acquiring lock "ad62888a-ef27-43b4-bb6c-439541ff5524" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:30:59 compute-0 nova_compute[250018]: 2026-01-20 14:30:59.443 250022 DEBUG oslo_concurrency.lockutils [None req-d8cc8d51-f6a4-49a4-a00d-d0c5b965a9f6 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Lock "ad62888a-ef27-43b4-bb6c-439541ff5524" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:30:59 compute-0 nova_compute[250018]: 2026-01-20 14:30:59.458 250022 DEBUG nova.objects.instance [None req-d8cc8d51-f6a4-49a4-a00d-d0c5b965a9f6 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Lazy-loading 'flavor' on Instance uuid ad62888a-ef27-43b4-bb6c-439541ff5524 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:30:59 compute-0 nova_compute[250018]: 2026-01-20 14:30:59.498 250022 DEBUG oslo_concurrency.lockutils [None req-d8cc8d51-f6a4-49a4-a00d-d0c5b965a9f6 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Lock "ad62888a-ef27-43b4-bb6c-439541ff5524" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.054s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:30:59 compute-0 nova_compute[250018]: 2026-01-20 14:30:59.664 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:30:59 compute-0 nova_compute[250018]: 2026-01-20 14:30:59.849 250022 DEBUG oslo_concurrency.lockutils [None req-d8cc8d51-f6a4-49a4-a00d-d0c5b965a9f6 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Acquiring lock "ad62888a-ef27-43b4-bb6c-439541ff5524" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:30:59 compute-0 nova_compute[250018]: 2026-01-20 14:30:59.850 250022 DEBUG oslo_concurrency.lockutils [None req-d8cc8d51-f6a4-49a4-a00d-d0c5b965a9f6 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Lock "ad62888a-ef27-43b4-bb6c-439541ff5524" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:30:59 compute-0 nova_compute[250018]: 2026-01-20 14:30:59.850 250022 INFO nova.compute.manager [None req-d8cc8d51-f6a4-49a4-a00d-d0c5b965a9f6 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] [instance: ad62888a-ef27-43b4-bb6c-439541ff5524] Attaching volume 73c5b3f0-c4ca-48f3-9dc2-d2c15d3fd745 to /dev/vdb
Jan 20 14:30:59 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:30:59 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:30:59 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:30:59.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:31:00 compute-0 nova_compute[250018]: 2026-01-20 14:31:00.195 250022 DEBUG os_brick.utils [None req-d8cc8d51-f6a4-49a4-a00d-d0c5b965a9f6 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Jan 20 14:31:00 compute-0 nova_compute[250018]: 2026-01-20 14:31:00.197 268150 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:31:00 compute-0 nova_compute[250018]: 2026-01-20 14:31:00.215 268150 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.018s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:31:00 compute-0 nova_compute[250018]: 2026-01-20 14:31:00.216 268150 DEBUG oslo.privsep.daemon [-] privsep: reply[dc21b685-c8c4-4826-893d-04209c697337]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:31:00 compute-0 nova_compute[250018]: 2026-01-20 14:31:00.217 268150 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:31:00 compute-0 nova_compute[250018]: 2026-01-20 14:31:00.229 268150 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:31:00 compute-0 nova_compute[250018]: 2026-01-20 14:31:00.230 268150 DEBUG oslo.privsep.daemon [-] privsep: reply[eb1a0e3e-6dfe-4388-a202-c85748e2e3d3]: (4, ('InitiatorName=iqn.1994-05.com.redhat:228389a1f17e', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:31:00 compute-0 nova_compute[250018]: 2026-01-20 14:31:00.231 268150 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:31:00 compute-0 nova_compute[250018]: 2026-01-20 14:31:00.244 268150 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:31:00 compute-0 nova_compute[250018]: 2026-01-20 14:31:00.245 268150 DEBUG oslo.privsep.daemon [-] privsep: reply[177cb1e8-3ab2-4ecd-bb84-12b11da3d570]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:31:00 compute-0 nova_compute[250018]: 2026-01-20 14:31:00.246 268150 DEBUG oslo.privsep.daemon [-] privsep: reply[0181a9b1-f6a9-4f41-a85b-ed998d7441ec]: (4, '35085f33-1a27-41e3-805d-02c7ac6a1d7f') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:31:00 compute-0 nova_compute[250018]: 2026-01-20 14:31:00.247 250022 DEBUG oslo_concurrency.processutils [None req-d8cc8d51-f6a4-49a4-a00d-d0c5b965a9f6 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:31:00 compute-0 nova_compute[250018]: 2026-01-20 14:31:00.280 250022 DEBUG oslo_concurrency.processutils [None req-d8cc8d51-f6a4-49a4-a00d-d0c5b965a9f6 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] CMD "nvme version" returned: 0 in 0.033s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:31:00 compute-0 nova_compute[250018]: 2026-01-20 14:31:00.282 250022 DEBUG os_brick.initiator.connectors.lightos [None req-d8cc8d51-f6a4-49a4-a00d-d0c5b965a9f6 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Jan 20 14:31:00 compute-0 nova_compute[250018]: 2026-01-20 14:31:00.282 250022 DEBUG os_brick.initiator.connectors.lightos [None req-d8cc8d51-f6a4-49a4-a00d-d0c5b965a9f6 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Jan 20 14:31:00 compute-0 nova_compute[250018]: 2026-01-20 14:31:00.282 250022 DEBUG os_brick.initiator.connectors.lightos [None req-d8cc8d51-f6a4-49a4-a00d-d0c5b965a9f6 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Jan 20 14:31:00 compute-0 nova_compute[250018]: 2026-01-20 14:31:00.283 250022 DEBUG os_brick.utils [None req-d8cc8d51-f6a4-49a4-a00d-d0c5b965a9f6 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] <== get_connector_properties: return (86ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:228389a1f17e', 'do_local_attach': False, 'nvme_hostid': '5350774e-8b5e-4dba-80a9-92d405981c1d', 'system uuid': '35085f33-1a27-41e3-805d-02c7ac6a1d7f', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Jan 20 14:31:00 compute-0 nova_compute[250018]: 2026-01-20 14:31:00.283 250022 DEBUG nova.virt.block_device [None req-d8cc8d51-f6a4-49a4-a00d-d0c5b965a9f6 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] [instance: ad62888a-ef27-43b4-bb6c-439541ff5524] Updating existing volume attachment record: 285224ad-1d47-48db-81c5-b2e37e24a7ee _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Jan 20 14:31:00 compute-0 ceph-mon[74360]: pgmap v1212: 321 pgs: 321 active+clean; 472 MiB data, 626 MiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 2.2 MiB/s wr, 295 op/s
Jan 20 14:31:00 compute-0 nova_compute[250018]: 2026-01-20 14:31:00.669 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:31:00 compute-0 sudo[273665]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:31:00 compute-0 sudo[273665]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:31:00 compute-0 sudo[273665]: pam_unix(sudo:session): session closed for user root
Jan 20 14:31:00 compute-0 sudo[273690]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:31:00 compute-0 sudo[273690]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:31:00 compute-0 sudo[273690]: pam_unix(sudo:session): session closed for user root
Jan 20 14:31:00 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:31:00 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:31:00 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:31:00.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:31:01 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1213: 321 pgs: 321 active+clean; 409 MiB data, 585 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 423 KiB/s wr, 220 op/s
Jan 20 14:31:01 compute-0 nova_compute[250018]: 2026-01-20 14:31:01.446 250022 DEBUG nova.objects.instance [None req-d8cc8d51-f6a4-49a4-a00d-d0c5b965a9f6 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Lazy-loading 'flavor' on Instance uuid ad62888a-ef27-43b4-bb6c-439541ff5524 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:31:01 compute-0 nova_compute[250018]: 2026-01-20 14:31:01.488 250022 DEBUG nova.virt.libvirt.driver [None req-d8cc8d51-f6a4-49a4-a00d-d0c5b965a9f6 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] [instance: ad62888a-ef27-43b4-bb6c-439541ff5524] Attempting to attach volume 73c5b3f0-c4ca-48f3-9dc2-d2c15d3fd745 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Jan 20 14:31:01 compute-0 nova_compute[250018]: 2026-01-20 14:31:01.491 250022 DEBUG nova.virt.libvirt.guest [None req-d8cc8d51-f6a4-49a4-a00d-d0c5b965a9f6 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] attach device xml: <disk type="network" device="disk">
Jan 20 14:31:01 compute-0 nova_compute[250018]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 20 14:31:01 compute-0 nova_compute[250018]:   <source protocol="rbd" name="volumes/volume-73c5b3f0-c4ca-48f3-9dc2-d2c15d3fd745">
Jan 20 14:31:01 compute-0 nova_compute[250018]:     <host name="192.168.122.100" port="6789"/>
Jan 20 14:31:01 compute-0 nova_compute[250018]:     <host name="192.168.122.102" port="6789"/>
Jan 20 14:31:01 compute-0 nova_compute[250018]:     <host name="192.168.122.101" port="6789"/>
Jan 20 14:31:01 compute-0 nova_compute[250018]:   </source>
Jan 20 14:31:01 compute-0 nova_compute[250018]:   <auth username="openstack">
Jan 20 14:31:01 compute-0 nova_compute[250018]:     <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:31:01 compute-0 nova_compute[250018]:   </auth>
Jan 20 14:31:01 compute-0 nova_compute[250018]:   <target dev="vdb" bus="virtio"/>
Jan 20 14:31:01 compute-0 nova_compute[250018]:   <serial>73c5b3f0-c4ca-48f3-9dc2-d2c15d3fd745</serial>
Jan 20 14:31:01 compute-0 nova_compute[250018]:   <shareable/>
Jan 20 14:31:01 compute-0 nova_compute[250018]: </disk>
Jan 20 14:31:01 compute-0 nova_compute[250018]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Jan 20 14:31:01 compute-0 nova_compute[250018]: 2026-01-20 14:31:01.628 250022 DEBUG nova.virt.libvirt.driver [None req-d8cc8d51-f6a4-49a4-a00d-d0c5b965a9f6 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:31:01 compute-0 nova_compute[250018]: 2026-01-20 14:31:01.629 250022 DEBUG nova.virt.libvirt.driver [None req-d8cc8d51-f6a4-49a4-a00d-d0c5b965a9f6 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:31:01 compute-0 nova_compute[250018]: 2026-01-20 14:31:01.629 250022 DEBUG nova.virt.libvirt.driver [None req-d8cc8d51-f6a4-49a4-a00d-d0c5b965a9f6 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:31:01 compute-0 nova_compute[250018]: 2026-01-20 14:31:01.629 250022 DEBUG nova.virt.libvirt.driver [None req-d8cc8d51-f6a4-49a4-a00d-d0c5b965a9f6 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] No VIF found with MAC fa:16:3e:76:68:31, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 14:31:01 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:31:01 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:31:01 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:31:01 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:31:01.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:31:01 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/3285058670' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:31:02 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Jan 20 14:31:02 compute-0 podman[273737]: 2026-01-20 14:31:02.291067231 +0000 UTC m=+0.072172792 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 20 14:31:02 compute-0 podman[273736]: 2026-01-20 14:31:02.326279607 +0000 UTC m=+0.107418068 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:31:02 compute-0 nova_compute[250018]: 2026-01-20 14:31:02.330 250022 DEBUG oslo_concurrency.lockutils [None req-d8cc8d51-f6a4-49a4-a00d-d0c5b965a9f6 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Lock "ad62888a-ef27-43b4-bb6c-439541ff5524" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 2.480s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:31:02 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:31:02 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:31:02 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:31:02.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:31:03 compute-0 ceph-mon[74360]: pgmap v1213: 321 pgs: 321 active+clean; 409 MiB data, 585 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 423 KiB/s wr, 220 op/s
Jan 20 14:31:03 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1214: 321 pgs: 321 active+clean; 409 MiB data, 585 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 423 KiB/s wr, 220 op/s
Jan 20 14:31:03 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:31:03 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:31:03 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:31:03.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:31:04 compute-0 ceph-mon[74360]: pgmap v1214: 321 pgs: 321 active+clean; 409 MiB data, 585 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 423 KiB/s wr, 220 op/s
Jan 20 14:31:04 compute-0 nova_compute[250018]: 2026-01-20 14:31:04.666 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:31:04 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:31:04 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:31:04 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:31:04.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:31:05 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1215: 321 pgs: 321 active+clean; 355 MiB data, 558 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 5.1 KiB/s wr, 132 op/s
Jan 20 14:31:05 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:31:05 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3890482712' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:31:05 compute-0 nova_compute[250018]: 2026-01-20 14:31:05.587 250022 DEBUG nova.compute.manager [None req-c12fe2e8-29fe-4a9f-98c8-b03a0df65b37 1431af9a73934db9afa530dc0b75d80f e2ce8450af574dfabee24677527aa662 - - default default] [instance: cb46d844-0d47-49f3-9677-e8fdb9bf2b46] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:31:05 compute-0 nova_compute[250018]: 2026-01-20 14:31:05.592 250022 INFO nova.compute.manager [None req-c12fe2e8-29fe-4a9f-98c8-b03a0df65b37 1431af9a73934db9afa530dc0b75d80f e2ce8450af574dfabee24677527aa662 - - default default] [instance: cb46d844-0d47-49f3-9677-e8fdb9bf2b46] Retrieving diagnostics
Jan 20 14:31:05 compute-0 nova_compute[250018]: 2026-01-20 14:31:05.670 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:31:05 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:31:05 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:31:05 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:31:05.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:31:05 compute-0 nova_compute[250018]: 2026-01-20 14:31:05.989 250022 DEBUG oslo_concurrency.lockutils [None req-0bd039b4-c44d-4854-8d10-77b416919b88 31a6b24ce83d4824b02148030eca531c dfafda1e1dad4adfa4b670ec63073d36 - - default default] Acquiring lock "cb46d844-0d47-49f3-9677-e8fdb9bf2b46" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:31:05 compute-0 nova_compute[250018]: 2026-01-20 14:31:05.990 250022 DEBUG oslo_concurrency.lockutils [None req-0bd039b4-c44d-4854-8d10-77b416919b88 31a6b24ce83d4824b02148030eca531c dfafda1e1dad4adfa4b670ec63073d36 - - default default] Lock "cb46d844-0d47-49f3-9677-e8fdb9bf2b46" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:31:05 compute-0 nova_compute[250018]: 2026-01-20 14:31:05.990 250022 DEBUG oslo_concurrency.lockutils [None req-0bd039b4-c44d-4854-8d10-77b416919b88 31a6b24ce83d4824b02148030eca531c dfafda1e1dad4adfa4b670ec63073d36 - - default default] Acquiring lock "cb46d844-0d47-49f3-9677-e8fdb9bf2b46-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:31:05 compute-0 nova_compute[250018]: 2026-01-20 14:31:05.990 250022 DEBUG oslo_concurrency.lockutils [None req-0bd039b4-c44d-4854-8d10-77b416919b88 31a6b24ce83d4824b02148030eca531c dfafda1e1dad4adfa4b670ec63073d36 - - default default] Lock "cb46d844-0d47-49f3-9677-e8fdb9bf2b46-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:31:05 compute-0 nova_compute[250018]: 2026-01-20 14:31:05.991 250022 DEBUG oslo_concurrency.lockutils [None req-0bd039b4-c44d-4854-8d10-77b416919b88 31a6b24ce83d4824b02148030eca531c dfafda1e1dad4adfa4b670ec63073d36 - - default default] Lock "cb46d844-0d47-49f3-9677-e8fdb9bf2b46-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:31:05 compute-0 nova_compute[250018]: 2026-01-20 14:31:05.992 250022 INFO nova.compute.manager [None req-0bd039b4-c44d-4854-8d10-77b416919b88 31a6b24ce83d4824b02148030eca531c dfafda1e1dad4adfa4b670ec63073d36 - - default default] [instance: cb46d844-0d47-49f3-9677-e8fdb9bf2b46] Terminating instance
Jan 20 14:31:05 compute-0 nova_compute[250018]: 2026-01-20 14:31:05.993 250022 DEBUG oslo_concurrency.lockutils [None req-0bd039b4-c44d-4854-8d10-77b416919b88 31a6b24ce83d4824b02148030eca531c dfafda1e1dad4adfa4b670ec63073d36 - - default default] Acquiring lock "refresh_cache-cb46d844-0d47-49f3-9677-e8fdb9bf2b46" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:31:05 compute-0 nova_compute[250018]: 2026-01-20 14:31:05.993 250022 DEBUG oslo_concurrency.lockutils [None req-0bd039b4-c44d-4854-8d10-77b416919b88 31a6b24ce83d4824b02148030eca531c dfafda1e1dad4adfa4b670ec63073d36 - - default default] Acquired lock "refresh_cache-cb46d844-0d47-49f3-9677-e8fdb9bf2b46" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:31:05 compute-0 nova_compute[250018]: 2026-01-20 14:31:05.994 250022 DEBUG nova.network.neutron [None req-0bd039b4-c44d-4854-8d10-77b416919b88 31a6b24ce83d4824b02148030eca531c dfafda1e1dad4adfa4b670ec63073d36 - - default default] [instance: cb46d844-0d47-49f3-9677-e8fdb9bf2b46] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:31:06 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/3890482712' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:31:06 compute-0 nova_compute[250018]: 2026-01-20 14:31:06.478 250022 DEBUG nova.network.neutron [None req-0bd039b4-c44d-4854-8d10-77b416919b88 31a6b24ce83d4824b02148030eca531c dfafda1e1dad4adfa4b670ec63073d36 - - default default] [instance: cb46d844-0d47-49f3-9677-e8fdb9bf2b46] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 14:31:06 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:31:06 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:31:06 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:31:06 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:31:06.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:31:07 compute-0 ceph-mon[74360]: pgmap v1215: 321 pgs: 321 active+clean; 355 MiB data, 558 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 5.1 KiB/s wr, 132 op/s
Jan 20 14:31:07 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/3548925211' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:31:07 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1216: 321 pgs: 321 active+clean; 330 MiB data, 543 MiB used, 20 GiB / 21 GiB avail; 376 KiB/s rd, 115 KiB/s wr, 84 op/s
Jan 20 14:31:07 compute-0 nova_compute[250018]: 2026-01-20 14:31:07.657 250022 DEBUG nova.network.neutron [None req-0bd039b4-c44d-4854-8d10-77b416919b88 31a6b24ce83d4824b02148030eca531c dfafda1e1dad4adfa4b670ec63073d36 - - default default] [instance: cb46d844-0d47-49f3-9677-e8fdb9bf2b46] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:31:07 compute-0 nova_compute[250018]: 2026-01-20 14:31:07.673 250022 DEBUG oslo_concurrency.lockutils [None req-0bd039b4-c44d-4854-8d10-77b416919b88 31a6b24ce83d4824b02148030eca531c dfafda1e1dad4adfa4b670ec63073d36 - - default default] Releasing lock "refresh_cache-cb46d844-0d47-49f3-9677-e8fdb9bf2b46" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:31:07 compute-0 nova_compute[250018]: 2026-01-20 14:31:07.674 250022 DEBUG nova.compute.manager [None req-0bd039b4-c44d-4854-8d10-77b416919b88 31a6b24ce83d4824b02148030eca531c dfafda1e1dad4adfa4b670ec63073d36 - - default default] [instance: cb46d844-0d47-49f3-9677-e8fdb9bf2b46] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 14:31:07 compute-0 systemd[1]: machine-qemu\x2d17\x2dinstance\x2d00000021.scope: Deactivated successfully.
Jan 20 14:31:07 compute-0 systemd[1]: machine-qemu\x2d17\x2dinstance\x2d00000021.scope: Consumed 12.876s CPU time.
Jan 20 14:31:07 compute-0 systemd-machined[216401]: Machine qemu-17-instance-00000021 terminated.
Jan 20 14:31:07 compute-0 nova_compute[250018]: 2026-01-20 14:31:07.898 250022 INFO nova.virt.libvirt.driver [-] [instance: cb46d844-0d47-49f3-9677-e8fdb9bf2b46] Instance destroyed successfully.
Jan 20 14:31:07 compute-0 nova_compute[250018]: 2026-01-20 14:31:07.899 250022 DEBUG nova.objects.instance [None req-0bd039b4-c44d-4854-8d10-77b416919b88 31a6b24ce83d4824b02148030eca531c dfafda1e1dad4adfa4b670ec63073d36 - - default default] Lazy-loading 'resources' on Instance uuid cb46d844-0d47-49f3-9677-e8fdb9bf2b46 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:31:07 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:31:07 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:31:07 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:31:07.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:31:08 compute-0 ceph-mon[74360]: pgmap v1216: 321 pgs: 321 active+clean; 330 MiB data, 543 MiB used, 20 GiB / 21 GiB avail; 376 KiB/s rd, 115 KiB/s wr, 84 op/s
Jan 20 14:31:08 compute-0 nova_compute[250018]: 2026-01-20 14:31:08.492 250022 INFO nova.virt.libvirt.driver [None req-0bd039b4-c44d-4854-8d10-77b416919b88 31a6b24ce83d4824b02148030eca531c dfafda1e1dad4adfa4b670ec63073d36 - - default default] [instance: cb46d844-0d47-49f3-9677-e8fdb9bf2b46] Deleting instance files /var/lib/nova/instances/cb46d844-0d47-49f3-9677-e8fdb9bf2b46_del
Jan 20 14:31:08 compute-0 nova_compute[250018]: 2026-01-20 14:31:08.493 250022 INFO nova.virt.libvirt.driver [None req-0bd039b4-c44d-4854-8d10-77b416919b88 31a6b24ce83d4824b02148030eca531c dfafda1e1dad4adfa4b670ec63073d36 - - default default] [instance: cb46d844-0d47-49f3-9677-e8fdb9bf2b46] Deletion of /var/lib/nova/instances/cb46d844-0d47-49f3-9677-e8fdb9bf2b46_del complete
Jan 20 14:31:08 compute-0 nova_compute[250018]: 2026-01-20 14:31:08.652 250022 INFO nova.compute.manager [None req-0bd039b4-c44d-4854-8d10-77b416919b88 31a6b24ce83d4824b02148030eca531c dfafda1e1dad4adfa4b670ec63073d36 - - default default] [instance: cb46d844-0d47-49f3-9677-e8fdb9bf2b46] Took 0.98 seconds to destroy the instance on the hypervisor.
Jan 20 14:31:08 compute-0 nova_compute[250018]: 2026-01-20 14:31:08.654 250022 DEBUG oslo.service.loopingcall [None req-0bd039b4-c44d-4854-8d10-77b416919b88 31a6b24ce83d4824b02148030eca531c dfafda1e1dad4adfa4b670ec63073d36 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 14:31:08 compute-0 nova_compute[250018]: 2026-01-20 14:31:08.654 250022 DEBUG nova.compute.manager [-] [instance: cb46d844-0d47-49f3-9677-e8fdb9bf2b46] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 14:31:08 compute-0 nova_compute[250018]: 2026-01-20 14:31:08.654 250022 DEBUG nova.network.neutron [-] [instance: cb46d844-0d47-49f3-9677-e8fdb9bf2b46] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 14:31:08 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:31:08 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:31:08 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:31:08.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:31:08 compute-0 nova_compute[250018]: 2026-01-20 14:31:08.936 250022 DEBUG nova.network.neutron [-] [instance: cb46d844-0d47-49f3-9677-e8fdb9bf2b46] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 14:31:08 compute-0 nova_compute[250018]: 2026-01-20 14:31:08.952 250022 DEBUG nova.network.neutron [-] [instance: cb46d844-0d47-49f3-9677-e8fdb9bf2b46] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:31:08 compute-0 nova_compute[250018]: 2026-01-20 14:31:08.967 250022 INFO nova.compute.manager [-] [instance: cb46d844-0d47-49f3-9677-e8fdb9bf2b46] Took 0.31 seconds to deallocate network for instance.
Jan 20 14:31:09 compute-0 nova_compute[250018]: 2026-01-20 14:31:09.032 250022 DEBUG oslo_concurrency.lockutils [None req-0bd039b4-c44d-4854-8d10-77b416919b88 31a6b24ce83d4824b02148030eca531c dfafda1e1dad4adfa4b670ec63073d36 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:31:09 compute-0 nova_compute[250018]: 2026-01-20 14:31:09.032 250022 DEBUG oslo_concurrency.lockutils [None req-0bd039b4-c44d-4854-8d10-77b416919b88 31a6b24ce83d4824b02148030eca531c dfafda1e1dad4adfa4b670ec63073d36 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:31:09 compute-0 nova_compute[250018]: 2026-01-20 14:31:09.114 250022 DEBUG oslo_concurrency.processutils [None req-0bd039b4-c44d-4854-8d10-77b416919b88 31a6b24ce83d4824b02148030eca531c dfafda1e1dad4adfa4b670ec63073d36 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:31:09 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1217: 321 pgs: 321 active+clean; 322 MiB data, 552 MiB used, 20 GiB / 21 GiB avail; 392 KiB/s rd, 945 KiB/s wr, 93 op/s
Jan 20 14:31:09 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:31:09 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1268478636' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:31:09 compute-0 nova_compute[250018]: 2026-01-20 14:31:09.532 250022 DEBUG oslo_concurrency.processutils [None req-0bd039b4-c44d-4854-8d10-77b416919b88 31a6b24ce83d4824b02148030eca531c dfafda1e1dad4adfa4b670ec63073d36 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:31:09 compute-0 nova_compute[250018]: 2026-01-20 14:31:09.537 250022 DEBUG nova.compute.provider_tree [None req-0bd039b4-c44d-4854-8d10-77b416919b88 31a6b24ce83d4824b02148030eca531c dfafda1e1dad4adfa4b670ec63073d36 - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:31:09 compute-0 nova_compute[250018]: 2026-01-20 14:31:09.670 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:31:09 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:31:09 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:31:09 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:31:09.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:31:10 compute-0 nova_compute[250018]: 2026-01-20 14:31:10.255 250022 DEBUG nova.scheduler.client.report [None req-0bd039b4-c44d-4854-8d10-77b416919b88 31a6b24ce83d4824b02148030eca531c dfafda1e1dad4adfa4b670ec63073d36 - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:31:10 compute-0 nova_compute[250018]: 2026-01-20 14:31:10.286 250022 DEBUG oslo_concurrency.lockutils [None req-0bd039b4-c44d-4854-8d10-77b416919b88 31a6b24ce83d4824b02148030eca531c dfafda1e1dad4adfa4b670ec63073d36 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.254s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:31:10 compute-0 nova_compute[250018]: 2026-01-20 14:31:10.376 250022 INFO nova.scheduler.client.report [None req-0bd039b4-c44d-4854-8d10-77b416919b88 31a6b24ce83d4824b02148030eca531c dfafda1e1dad4adfa4b670ec63073d36 - - default default] Deleted allocations for instance cb46d844-0d47-49f3-9677-e8fdb9bf2b46
Jan 20 14:31:10 compute-0 ceph-mon[74360]: pgmap v1217: 321 pgs: 321 active+clean; 322 MiB data, 552 MiB used, 20 GiB / 21 GiB avail; 392 KiB/s rd, 945 KiB/s wr, 93 op/s
Jan 20 14:31:10 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/1268478636' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:31:10 compute-0 nova_compute[250018]: 2026-01-20 14:31:10.476 250022 DEBUG oslo_concurrency.lockutils [None req-0bd039b4-c44d-4854-8d10-77b416919b88 31a6b24ce83d4824b02148030eca531c dfafda1e1dad4adfa4b670ec63073d36 - - default default] Lock "cb46d844-0d47-49f3-9677-e8fdb9bf2b46" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.486s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:31:10 compute-0 nova_compute[250018]: 2026-01-20 14:31:10.671 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:31:10 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:31:10 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:31:10 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:31:10.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:31:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 14:31:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:31:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 20 14:31:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:31:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.007805229852968547 of space, bias 1.0, pg target 2.341568955890564 quantized to 32 (current 32)
Jan 20 14:31:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:31:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 5.452610273590173e-07 of space, bias 1.0, pg target 0.00016248778615298717 quantized to 32 (current 32)
Jan 20 14:31:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:31:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:31:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:31:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5671365362693095 quantized to 32 (current 32)
Jan 20 14:31:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:31:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017332030522985297 quantized to 16 (current 32)
Jan 20 14:31:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:31:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:31:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:31:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.0002166503815373162 quantized to 32 (current 32)
Jan 20 14:31:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:31:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018415282430671877 quantized to 32 (current 32)
Jan 20 14:31:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:31:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:31:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:31:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004333007630746324 quantized to 32 (current 32)
Jan 20 14:31:11 compute-0 nova_compute[250018]: 2026-01-20 14:31:11.276 250022 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768919456.2746167, 9f5c9253-e2bd-42d3-8253-fac568daeda7 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:31:11 compute-0 nova_compute[250018]: 2026-01-20 14:31:11.277 250022 INFO nova.compute.manager [-] [instance: 9f5c9253-e2bd-42d3-8253-fac568daeda7] VM Stopped (Lifecycle Event)
Jan 20 14:31:11 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1218: 321 pgs: 321 active+clean; 306 MiB data, 548 MiB used, 20 GiB / 21 GiB avail; 245 KiB/s rd, 1.2 MiB/s wr, 106 op/s
Jan 20 14:31:11 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:31:11 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:31:11 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:31:11 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:31:11.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:31:12 compute-0 ceph-mon[74360]: pgmap v1218: 321 pgs: 321 active+clean; 306 MiB data, 548 MiB used, 20 GiB / 21 GiB avail; 245 KiB/s rd, 1.2 MiB/s wr, 106 op/s
Jan 20 14:31:12 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:31:12 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:31:12 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:31:12.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:31:13 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1219: 321 pgs: 321 active+clean; 263 MiB data, 525 MiB used, 20 GiB / 21 GiB avail; 340 KiB/s rd, 2.0 MiB/s wr, 119 op/s
Jan 20 14:31:13 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/86488182' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:31:13 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/86488182' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:31:13 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:31:13 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:31:13 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:31:13.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:31:14 compute-0 nova_compute[250018]: 2026-01-20 14:31:14.258 250022 DEBUG nova.compute.manager [None req-a4f83686-d541-4c49-9862-7302058e96d5 - - - - - -] [instance: 9f5c9253-e2bd-42d3-8253-fac568daeda7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:31:14 compute-0 nova_compute[250018]: 2026-01-20 14:31:14.673 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:31:14 compute-0 ceph-mon[74360]: pgmap v1219: 321 pgs: 321 active+clean; 263 MiB data, 525 MiB used, 20 GiB / 21 GiB avail; 340 KiB/s rd, 2.0 MiB/s wr, 119 op/s
Jan 20 14:31:14 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:31:14 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:31:14 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:31:14.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:31:15 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1220: 321 pgs: 321 active+clean; 200 MiB data, 474 MiB used, 21 GiB / 21 GiB avail; 355 KiB/s rd, 2.0 MiB/s wr, 140 op/s
Jan 20 14:31:15 compute-0 nova_compute[250018]: 2026-01-20 14:31:15.675 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:31:15 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/2038386253' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:31:15 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:31:15 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:31:15 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:31:15.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:31:16 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:31:16 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:31:16 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:31:16.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:31:16 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:31:17 compute-0 ceph-mon[74360]: pgmap v1220: 321 pgs: 321 active+clean; 200 MiB data, 474 MiB used, 21 GiB / 21 GiB avail; 355 KiB/s rd, 2.0 MiB/s wr, 140 op/s
Jan 20 14:31:17 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1221: 321 pgs: 321 active+clean; 200 MiB data, 474 MiB used, 21 GiB / 21 GiB avail; 331 KiB/s rd, 2.0 MiB/s wr, 114 op/s
Jan 20 14:31:17 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:31:17 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:31:17 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:31:17.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:31:18 compute-0 nova_compute[250018]: 2026-01-20 14:31:18.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:31:18 compute-0 nova_compute[250018]: 2026-01-20 14:31:18.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:31:18 compute-0 ceph-mon[74360]: pgmap v1221: 321 pgs: 321 active+clean; 200 MiB data, 474 MiB used, 21 GiB / 21 GiB avail; 331 KiB/s rd, 2.0 MiB/s wr, 114 op/s
Jan 20 14:31:18 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:31:18 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:31:18 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:31:18.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:31:19 compute-0 nova_compute[250018]: 2026-01-20 14:31:19.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:31:19 compute-0 nova_compute[250018]: 2026-01-20 14:31:19.075 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:31:19 compute-0 nova_compute[250018]: 2026-01-20 14:31:19.076 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:31:19 compute-0 nova_compute[250018]: 2026-01-20 14:31:19.076 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:31:19 compute-0 nova_compute[250018]: 2026-01-20 14:31:19.077 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 14:31:19 compute-0 nova_compute[250018]: 2026-01-20 14:31:19.077 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:31:19 compute-0 sshd-session[273832]: Invalid user admin from 157.245.78.139 port 60212
Jan 20 14:31:19 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1222: 321 pgs: 321 active+clean; 200 MiB data, 474 MiB used, 21 GiB / 21 GiB avail; 316 KiB/s rd, 2.0 MiB/s wr, 103 op/s
Jan 20 14:31:19 compute-0 sshd-session[273832]: Connection closed by invalid user admin 157.245.78.139 port 60212 [preauth]
Jan 20 14:31:19 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:31:19 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3306767532' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:31:19 compute-0 nova_compute[250018]: 2026-01-20 14:31:19.569 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:31:19 compute-0 nova_compute[250018]: 2026-01-20 14:31:19.655 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:31:19 compute-0 nova_compute[250018]: 2026-01-20 14:31:19.675 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:31:19 compute-0 nova_compute[250018]: 2026-01-20 14:31:19.681 250022 DEBUG nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] skipping disk for instance-0000001a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:31:19 compute-0 nova_compute[250018]: 2026-01-20 14:31:19.682 250022 DEBUG nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] skipping disk for instance-0000001a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:31:19 compute-0 nova_compute[250018]: 2026-01-20 14:31:19.682 250022 DEBUG nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] skipping disk for instance-0000001a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:31:19 compute-0 nova_compute[250018]: 2026-01-20 14:31:19.839 250022 WARNING nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:31:19 compute-0 nova_compute[250018]: 2026-01-20 14:31:19.840 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4406MB free_disk=20.89706039428711GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 14:31:19 compute-0 nova_compute[250018]: 2026-01-20 14:31:19.840 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:31:19 compute-0 nova_compute[250018]: 2026-01-20 14:31:19.841 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:31:19 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:31:19 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:31:19 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:31:19.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:31:20 compute-0 ceph-mon[74360]: pgmap v1222: 321 pgs: 321 active+clean; 200 MiB data, 474 MiB used, 21 GiB / 21 GiB avail; 316 KiB/s rd, 2.0 MiB/s wr, 103 op/s
Jan 20 14:31:20 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3306767532' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:31:20 compute-0 nova_compute[250018]: 2026-01-20 14:31:20.720 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:31:20 compute-0 sudo[273859]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:31:20 compute-0 sudo[273859]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:31:20 compute-0 sudo[273859]: pam_unix(sudo:session): session closed for user root
Jan 20 14:31:20 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:31:20 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:31:20 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:31:20.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:31:20 compute-0 sudo[273884]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:31:20 compute-0 sudo[273884]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:31:20 compute-0 sudo[273884]: pam_unix(sudo:session): session closed for user root
Jan 20 14:31:20 compute-0 sudo[273891]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:31:20 compute-0 sudo[273891]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:31:20 compute-0 sudo[273891]: pam_unix(sudo:session): session closed for user root
Jan 20 14:31:21 compute-0 sudo[273936]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:31:21 compute-0 sudo[273936]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:31:21 compute-0 sudo[273934]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:31:21 compute-0 sudo[273936]: pam_unix(sudo:session): session closed for user root
Jan 20 14:31:21 compute-0 sudo[273934]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:31:21 compute-0 sudo[273934]: pam_unix(sudo:session): session closed for user root
Jan 20 14:31:21 compute-0 sudo[273984]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Jan 20 14:31:21 compute-0 sudo[273984]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:31:21 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1223: 321 pgs: 321 active+clean; 200 MiB data, 474 MiB used, 21 GiB / 21 GiB avail; 250 KiB/s rd, 1.1 MiB/s wr, 84 op/s
Jan 20 14:31:21 compute-0 nova_compute[250018]: 2026-01-20 14:31:21.408 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Instance ad62888a-ef27-43b4-bb6c-439541ff5524 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 14:31:21 compute-0 nova_compute[250018]: 2026-01-20 14:31:21.409 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 14:31:21 compute-0 nova_compute[250018]: 2026-01-20 14:31:21.409 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 14:31:21 compute-0 nova_compute[250018]: 2026-01-20 14:31:21.492 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:31:21 compute-0 podman[274078]: 2026-01-20 14:31:21.55920969 +0000 UTC m=+0.064045343 container exec a602f19ce9ef2d4922e558036fcbd51fac25abd19d28d7df857e5fbe8f959ba3 (image=quay.io/ceph/ceph:v18, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:31:21 compute-0 nova_compute[250018]: 2026-01-20 14:31:21.595 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:31:21 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:31:21.596 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:31:21 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:31:21.597 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 14:31:21 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:31:21.598 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=367c1a2c-b16a-4828-ab5a-626bb50023b4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:31:21 compute-0 podman[274078]: 2026-01-20 14:31:21.662672431 +0000 UTC m=+0.167508084 container exec_died a602f19ce9ef2d4922e558036fcbd51fac25abd19d28d7df857e5fbe8f959ba3 (image=quay.io/ceph/ceph:v18, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mon-compute-0, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:31:21 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:31:21 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:31:21 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:31:21.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:31:21 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:31:21 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:31:21 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1483551337' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:31:21 compute-0 nova_compute[250018]: 2026-01-20 14:31:21.991 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:31:22 compute-0 nova_compute[250018]: 2026-01-20 14:31:21.999 250022 DEBUG nova.compute.provider_tree [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:31:22 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 20 14:31:22 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:31:22 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 20 14:31:22 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:31:22 compute-0 podman[274254]: 2026-01-20 14:31:22.312870739 +0000 UTC m=+0.058350249 container exec a2973a514c852ff316e6ad2ff84585210b4ad01d86cdf2de06f634d9390a88e8 (image=quay.io/ceph/haproxy:2.3, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-haproxy-rgw-default-compute-0-nqkboe)
Jan 20 14:31:22 compute-0 podman[274254]: 2026-01-20 14:31:22.326894786 +0000 UTC m=+0.072374276 container exec_died a2973a514c852ff316e6ad2ff84585210b4ad01d86cdf2de06f634d9390a88e8 (image=quay.io/ceph/haproxy:2.3, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-haproxy-rgw-default-compute-0-nqkboe)
Jan 20 14:31:22 compute-0 ceph-mon[74360]: pgmap v1223: 321 pgs: 321 active+clean; 200 MiB data, 474 MiB used, 21 GiB / 21 GiB avail; 250 KiB/s rd, 1.1 MiB/s wr, 84 op/s
Jan 20 14:31:22 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/1483551337' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:31:22 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:31:22 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:31:22 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 20 14:31:22 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:31:22 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 20 14:31:22 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:31:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:31:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:31:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:31:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:31:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:31:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:31:22 compute-0 podman[274320]: 2026-01-20 14:31:22.573184627 +0000 UTC m=+0.063108508 container exec 0c2042652fe8d88ae47b6333c235a533de4d966f44da3b69a5fc82baeacb10bf (image=quay.io/ceph/keepalived:2.2.4, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-keepalived-rgw-default-compute-0-gcjsxe, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=Ceph keepalived, com.redhat.component=keepalived-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, release=1793, build-date=2023-02-22T09:23:20, io.k8s.display-name=Keepalived on RHEL 9, io.buildah.version=1.28.2, vendor=Red Hat, Inc., architecture=x86_64, summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, distribution-scope=public, version=2.2.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=keepalived for Ceph, name=keepalived)
Jan 20 14:31:22 compute-0 podman[274320]: 2026-01-20 14:31:22.649801857 +0000 UTC m=+0.139725738 container exec_died 0c2042652fe8d88ae47b6333c235a533de4d966f44da3b69a5fc82baeacb10bf (image=quay.io/ceph/keepalived:2.2.4, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-keepalived-rgw-default-compute-0-gcjsxe, io.openshift.tags=Ceph keepalived, vcs-type=git, name=keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., build-date=2023-02-22T09:23:20, io.buildah.version=1.28.2, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.component=keepalived-container, distribution-scope=public, io.openshift.expose-services=, release=1793, version=2.2.4, description=keepalived for Ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.display-name=Keepalived on RHEL 9)
Jan 20 14:31:22 compute-0 sudo[273984]: pam_unix(sudo:session): session closed for user root
Jan 20 14:31:22 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 14:31:22 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:31:22 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 14:31:22 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:31:22 compute-0 sudo[274352]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:31:22 compute-0 sudo[274352]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:31:22 compute-0 sudo[274352]: pam_unix(sudo:session): session closed for user root
Jan 20 14:31:22 compute-0 nova_compute[250018]: 2026-01-20 14:31:22.897 250022 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768919467.8953993, cb46d844-0d47-49f3-9677-e8fdb9bf2b46 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:31:22 compute-0 sudo[274377]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:31:22 compute-0 nova_compute[250018]: 2026-01-20 14:31:22.897 250022 INFO nova.compute.manager [-] [instance: cb46d844-0d47-49f3-9677-e8fdb9bf2b46] VM Stopped (Lifecycle Event)
Jan 20 14:31:22 compute-0 sudo[274377]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:31:22 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:31:22 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:31:22 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:31:22.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:31:22 compute-0 sudo[274377]: pam_unix(sudo:session): session closed for user root
Jan 20 14:31:22 compute-0 sudo[274402]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:31:22 compute-0 sudo[274402]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:31:22 compute-0 sudo[274402]: pam_unix(sudo:session): session closed for user root
Jan 20 14:31:23 compute-0 nova_compute[250018]: 2026-01-20 14:31:23.002 250022 DEBUG nova.scheduler.client.report [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:31:23 compute-0 nova_compute[250018]: 2026-01-20 14:31:23.006 250022 DEBUG nova.compute.manager [None req-3c4436f3-3036-4e8b-9476-72100468e0d7 - - - - - -] [instance: cb46d844-0d47-49f3-9677-e8fdb9bf2b46] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:31:23 compute-0 sudo[274427]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 14:31:23 compute-0 sudo[274427]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:31:23 compute-0 nova_compute[250018]: 2026-01-20 14:31:23.052 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 14:31:23 compute-0 nova_compute[250018]: 2026-01-20 14:31:23.052 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 3.212s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:31:23 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1224: 321 pgs: 321 active+clean; 200 MiB data, 474 MiB used, 21 GiB / 21 GiB avail; 123 KiB/s rd, 855 KiB/s wr, 54 op/s
Jan 20 14:31:23 compute-0 sudo[274427]: pam_unix(sudo:session): session closed for user root
Jan 20 14:31:23 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 14:31:23 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:31:23 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 20 14:31:23 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:31:23 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 20 14:31:23 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:31:23 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:31:23 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:31:23 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:31:23 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:31:23 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev cf65ca70-fe05-4185-9873-7fe7cbb91513 does not exist
Jan 20 14:31:23 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev d0c1e654-a996-456e-85a9-cd2869b27568 does not exist
Jan 20 14:31:23 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 2f120494-7e0d-4513-8c0e-8844f5722a7f does not exist
Jan 20 14:31:23 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 20 14:31:23 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:31:23 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 20 14:31:23 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:31:23 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 14:31:23 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:31:23 compute-0 sudo[274483]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:31:23 compute-0 sudo[274483]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:31:23 compute-0 sudo[274483]: pam_unix(sudo:session): session closed for user root
Jan 20 14:31:23 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:31:23 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:31:23 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:31:23.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:31:23 compute-0 sudo[274508]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:31:23 compute-0 sudo[274508]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:31:23 compute-0 sudo[274508]: pam_unix(sudo:session): session closed for user root
Jan 20 14:31:23 compute-0 sudo[274533]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:31:23 compute-0 sudo[274533]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:31:23 compute-0 sudo[274533]: pam_unix(sudo:session): session closed for user root
Jan 20 14:31:24 compute-0 sudo[274558]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 14:31:24 compute-0 sudo[274558]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:31:24 compute-0 nova_compute[250018]: 2026-01-20 14:31:24.053 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:31:24 compute-0 nova_compute[250018]: 2026-01-20 14:31:24.053 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:31:24 compute-0 nova_compute[250018]: 2026-01-20 14:31:24.053 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:31:24 compute-0 nova_compute[250018]: 2026-01-20 14:31:24.053 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:31:24 compute-0 nova_compute[250018]: 2026-01-20 14:31:24.053 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:31:24 compute-0 nova_compute[250018]: 2026-01-20 14:31:24.053 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 14:31:24 compute-0 podman[274624]: 2026-01-20 14:31:24.44294628 +0000 UTC m=+0.098906030 container create a84ff103b8d4cd1fd76c9a1e211f0fb1b830c099c0fcb0d55d3312d403b1d056 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_kapitsa, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:31:24 compute-0 podman[274624]: 2026-01-20 14:31:24.374513491 +0000 UTC m=+0.030473321 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:31:24 compute-0 systemd[1]: Started libpod-conmon-a84ff103b8d4cd1fd76c9a1e211f0fb1b830c099c0fcb0d55d3312d403b1d056.scope.
Jan 20 14:31:24 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:31:24 compute-0 podman[274624]: 2026-01-20 14:31:24.588724049 +0000 UTC m=+0.244683799 container init a84ff103b8d4cd1fd76c9a1e211f0fb1b830c099c0fcb0d55d3312d403b1d056 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_kapitsa, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 20 14:31:24 compute-0 podman[274624]: 2026-01-20 14:31:24.600045393 +0000 UTC m=+0.256005153 container start a84ff103b8d4cd1fd76c9a1e211f0fb1b830c099c0fcb0d55d3312d403b1d056 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_kapitsa, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 20 14:31:24 compute-0 magical_kapitsa[274641]: 167 167
Jan 20 14:31:24 compute-0 podman[274624]: 2026-01-20 14:31:24.605948091 +0000 UTC m=+0.261907831 container attach a84ff103b8d4cd1fd76c9a1e211f0fb1b830c099c0fcb0d55d3312d403b1d056 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_kapitsa, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:31:24 compute-0 systemd[1]: libpod-a84ff103b8d4cd1fd76c9a1e211f0fb1b830c099c0fcb0d55d3312d403b1d056.scope: Deactivated successfully.
Jan 20 14:31:24 compute-0 podman[274624]: 2026-01-20 14:31:24.607108563 +0000 UTC m=+0.263068313 container died a84ff103b8d4cd1fd76c9a1e211f0fb1b830c099c0fcb0d55d3312d403b1d056 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_kapitsa, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 20 14:31:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-48ee414d0f96b13271d4b18d3e9ddd9802fd31a2b8d4ab60b37af9a71f2de6ff-merged.mount: Deactivated successfully.
Jan 20 14:31:24 compute-0 podman[274624]: 2026-01-20 14:31:24.650970622 +0000 UTC m=+0.306930362 container remove a84ff103b8d4cd1fd76c9a1e211f0fb1b830c099c0fcb0d55d3312d403b1d056 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_kapitsa, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:31:24 compute-0 systemd[1]: libpod-conmon-a84ff103b8d4cd1fd76c9a1e211f0fb1b830c099c0fcb0d55d3312d403b1d056.scope: Deactivated successfully.
Jan 20 14:31:24 compute-0 nova_compute[250018]: 2026-01-20 14:31:24.676 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:31:24 compute-0 podman[274666]: 2026-01-20 14:31:24.797723997 +0000 UTC m=+0.023876983 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:31:24 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:31:24 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:31:24 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:31:24.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:31:24 compute-0 ceph-mon[74360]: pgmap v1224: 321 pgs: 321 active+clean; 200 MiB data, 474 MiB used, 21 GiB / 21 GiB avail; 123 KiB/s rd, 855 KiB/s wr, 54 op/s
Jan 20 14:31:24 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:31:24 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:31:24 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:31:24 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:31:24 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:31:24 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:31:24 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/4199824210' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:31:24 compute-0 podman[274666]: 2026-01-20 14:31:24.950429121 +0000 UTC m=+0.176582117 container create 28e89b92bb1f367da77457c98ca667e1cee40a1891473cb7f6ed109e1d345990 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_turing, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 20 14:31:25 compute-0 systemd[1]: Started libpod-conmon-28e89b92bb1f367da77457c98ca667e1cee40a1891473cb7f6ed109e1d345990.scope.
Jan 20 14:31:25 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:31:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62e9a606aa060baffd6cfb438ba943d0e8c2d3f296b6f55fa595926f69ed3791/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:31:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62e9a606aa060baffd6cfb438ba943d0e8c2d3f296b6f55fa595926f69ed3791/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:31:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62e9a606aa060baffd6cfb438ba943d0e8c2d3f296b6f55fa595926f69ed3791/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:31:25 compute-0 nova_compute[250018]: 2026-01-20 14:31:25.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:31:25 compute-0 nova_compute[250018]: 2026-01-20 14:31:25.052 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 14:31:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62e9a606aa060baffd6cfb438ba943d0e8c2d3f296b6f55fa595926f69ed3791/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:31:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62e9a606aa060baffd6cfb438ba943d0e8c2d3f296b6f55fa595926f69ed3791/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 14:31:25 compute-0 podman[274666]: 2026-01-20 14:31:25.073813088 +0000 UTC m=+0.299966074 container init 28e89b92bb1f367da77457c98ca667e1cee40a1891473cb7f6ed109e1d345990 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_turing, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:31:25 compute-0 nova_compute[250018]: 2026-01-20 14:31:25.087 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 20 14:31:25 compute-0 podman[274666]: 2026-01-20 14:31:25.090132687 +0000 UTC m=+0.316285683 container start 28e89b92bb1f367da77457c98ca667e1cee40a1891473cb7f6ed109e1d345990 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_turing, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 20 14:31:25 compute-0 podman[274666]: 2026-01-20 14:31:25.121544871 +0000 UTC m=+0.347697927 container attach 28e89b92bb1f367da77457c98ca667e1cee40a1891473cb7f6ed109e1d345990 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_turing, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:31:25 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1225: 321 pgs: 321 active+clean; 200 MiB data, 474 MiB used, 21 GiB / 21 GiB avail; 15 KiB/s rd, 9.1 KiB/s wr, 22 op/s
Jan 20 14:31:25 compute-0 nova_compute[250018]: 2026-01-20 14:31:25.722 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:31:25 compute-0 exciting_turing[274683]: --> passed data devices: 0 physical, 1 LVM
Jan 20 14:31:25 compute-0 exciting_turing[274683]: --> relative data size: 1.0
Jan 20 14:31:25 compute-0 exciting_turing[274683]: --> All data devices are unavailable
Jan 20 14:31:25 compute-0 systemd[1]: libpod-28e89b92bb1f367da77457c98ca667e1cee40a1891473cb7f6ed109e1d345990.scope: Deactivated successfully.
Jan 20 14:31:25 compute-0 podman[274698]: 2026-01-20 14:31:25.891093947 +0000 UTC m=+0.028210659 container died 28e89b92bb1f367da77457c98ca667e1cee40a1891473cb7f6ed109e1d345990 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_turing, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:31:25 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:31:25 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:31:25 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:31:25.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:31:25 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/183400782' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:31:25 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/1937716485' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:31:25 compute-0 ceph-mon[74360]: pgmap v1225: 321 pgs: 321 active+clean; 200 MiB data, 474 MiB used, 21 GiB / 21 GiB avail; 15 KiB/s rd, 9.1 KiB/s wr, 22 op/s
Jan 20 14:31:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-62e9a606aa060baffd6cfb438ba943d0e8c2d3f296b6f55fa595926f69ed3791-merged.mount: Deactivated successfully.
Jan 20 14:31:26 compute-0 podman[274698]: 2026-01-20 14:31:26.142924217 +0000 UTC m=+0.280040919 container remove 28e89b92bb1f367da77457c98ca667e1cee40a1891473cb7f6ed109e1d345990 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_turing, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 20 14:31:26 compute-0 systemd[1]: libpod-conmon-28e89b92bb1f367da77457c98ca667e1cee40a1891473cb7f6ed109e1d345990.scope: Deactivated successfully.
Jan 20 14:31:26 compute-0 sudo[274558]: pam_unix(sudo:session): session closed for user root
Jan 20 14:31:26 compute-0 sudo[274713]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:31:26 compute-0 sudo[274713]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:31:26 compute-0 sudo[274713]: pam_unix(sudo:session): session closed for user root
Jan 20 14:31:26 compute-0 sudo[274738]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:31:26 compute-0 sudo[274738]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:31:26 compute-0 sudo[274738]: pam_unix(sudo:session): session closed for user root
Jan 20 14:31:26 compute-0 sudo[274763]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:31:26 compute-0 sudo[274763]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:31:26 compute-0 sudo[274763]: pam_unix(sudo:session): session closed for user root
Jan 20 14:31:26 compute-0 sudo[274788]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- lvm list --format json
Jan 20 14:31:26 compute-0 sudo[274788]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:31:26 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:31:26 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:31:26 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:31:26.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:31:26 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:31:27 compute-0 podman[274854]: 2026-01-20 14:31:27.004263982 +0000 UTC m=+0.094943064 container create 1b75abf516bfd355ce2b4ec9777608d475bd4d6d97ba27ebbb5418da1ed1c20a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_carver, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:31:27 compute-0 podman[274854]: 2026-01-20 14:31:26.932448451 +0000 UTC m=+0.023127553 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:31:27 compute-0 systemd[1]: Started libpod-conmon-1b75abf516bfd355ce2b4ec9777608d475bd4d6d97ba27ebbb5418da1ed1c20a.scope.
Jan 20 14:31:27 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:31:27 compute-0 podman[274854]: 2026-01-20 14:31:27.288703267 +0000 UTC m=+0.379382379 container init 1b75abf516bfd355ce2b4ec9777608d475bd4d6d97ba27ebbb5418da1ed1c20a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_carver, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 20 14:31:27 compute-0 podman[274854]: 2026-01-20 14:31:27.296974729 +0000 UTC m=+0.387653811 container start 1b75abf516bfd355ce2b4ec9777608d475bd4d6d97ba27ebbb5418da1ed1c20a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_carver, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 14:31:27 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1226: 321 pgs: 321 active+clean; 200 MiB data, 474 MiB used, 21 GiB / 21 GiB avail; 0 B/s rd, 8.7 KiB/s wr, 1 op/s
Jan 20 14:31:27 compute-0 relaxed_carver[274871]: 167 167
Jan 20 14:31:27 compute-0 systemd[1]: libpod-1b75abf516bfd355ce2b4ec9777608d475bd4d6d97ba27ebbb5418da1ed1c20a.scope: Deactivated successfully.
Jan 20 14:31:27 compute-0 podman[274854]: 2026-01-20 14:31:27.527006152 +0000 UTC m=+0.617685234 container attach 1b75abf516bfd355ce2b4ec9777608d475bd4d6d97ba27ebbb5418da1ed1c20a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_carver, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 20 14:31:27 compute-0 podman[274854]: 2026-01-20 14:31:27.527602579 +0000 UTC m=+0.618281681 container died 1b75abf516bfd355ce2b4ec9777608d475bd4d6d97ba27ebbb5418da1ed1c20a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_carver, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 20 14:31:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-2f4a68534d8bf4b206b2085bab3151b48ef60d0ae10f969505b502e3b4ed382c-merged.mount: Deactivated successfully.
Jan 20 14:31:27 compute-0 podman[274854]: 2026-01-20 14:31:27.5979553 +0000 UTC m=+0.688634382 container remove 1b75abf516bfd355ce2b4ec9777608d475bd4d6d97ba27ebbb5418da1ed1c20a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_carver, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:31:27 compute-0 systemd[1]: libpod-conmon-1b75abf516bfd355ce2b4ec9777608d475bd4d6d97ba27ebbb5418da1ed1c20a.scope: Deactivated successfully.
Jan 20 14:31:27 compute-0 nova_compute[250018]: 2026-01-20 14:31:27.684 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:31:27 compute-0 podman[274896]: 2026-01-20 14:31:27.777185338 +0000 UTC m=+0.048951597 container create 52b9679536f1db1b983d9703e5dd088d0b47e8d6625923587dadb204c791d7bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_joliot, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 20 14:31:27 compute-0 nova_compute[250018]: 2026-01-20 14:31:27.780 250022 DEBUG oslo_concurrency.lockutils [None req-93f7c99e-19e6-4f32-8523-e3e413cb955c 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Acquiring lock "ad62888a-ef27-43b4-bb6c-439541ff5524" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:31:27 compute-0 nova_compute[250018]: 2026-01-20 14:31:27.781 250022 DEBUG oslo_concurrency.lockutils [None req-93f7c99e-19e6-4f32-8523-e3e413cb955c 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Lock "ad62888a-ef27-43b4-bb6c-439541ff5524" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:31:27 compute-0 systemd[1]: Started libpod-conmon-52b9679536f1db1b983d9703e5dd088d0b47e8d6625923587dadb204c791d7bf.scope.
Jan 20 14:31:27 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:31:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdf99c564a7d3dd205e67a8e6804fc11c652ab2b19b2f8ca381d445642806b21/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:31:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdf99c564a7d3dd205e67a8e6804fc11c652ab2b19b2f8ca381d445642806b21/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:31:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdf99c564a7d3dd205e67a8e6804fc11c652ab2b19b2f8ca381d445642806b21/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:31:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdf99c564a7d3dd205e67a8e6804fc11c652ab2b19b2f8ca381d445642806b21/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:31:27 compute-0 podman[274896]: 2026-01-20 14:31:27.758501606 +0000 UTC m=+0.030267895 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:31:27 compute-0 podman[274896]: 2026-01-20 14:31:27.877478014 +0000 UTC m=+0.149244293 container init 52b9679536f1db1b983d9703e5dd088d0b47e8d6625923587dadb204c791d7bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_joliot, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:31:27 compute-0 podman[274896]: 2026-01-20 14:31:27.893361191 +0000 UTC m=+0.165127450 container start 52b9679536f1db1b983d9703e5dd088d0b47e8d6625923587dadb204c791d7bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_joliot, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True)
Jan 20 14:31:27 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:31:27 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:31:27 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:31:27.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:31:28 compute-0 podman[274896]: 2026-01-20 14:31:28.25790628 +0000 UTC m=+0.529672579 container attach 52b9679536f1db1b983d9703e5dd088d0b47e8d6625923587dadb204c791d7bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_joliot, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:31:28 compute-0 ceph-mon[74360]: pgmap v1226: 321 pgs: 321 active+clean; 200 MiB data, 474 MiB used, 21 GiB / 21 GiB avail; 0 B/s rd, 8.7 KiB/s wr, 1 op/s
Jan 20 14:31:28 compute-0 vigilant_joliot[274913]: {
Jan 20 14:31:28 compute-0 vigilant_joliot[274913]:     "0": [
Jan 20 14:31:28 compute-0 vigilant_joliot[274913]:         {
Jan 20 14:31:28 compute-0 vigilant_joliot[274913]:             "devices": [
Jan 20 14:31:28 compute-0 vigilant_joliot[274913]:                 "/dev/loop3"
Jan 20 14:31:28 compute-0 vigilant_joliot[274913]:             ],
Jan 20 14:31:28 compute-0 vigilant_joliot[274913]:             "lv_name": "ceph_lv0",
Jan 20 14:31:28 compute-0 vigilant_joliot[274913]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:31:28 compute-0 vigilant_joliot[274913]:             "lv_size": "7511998464",
Jan 20 14:31:28 compute-0 vigilant_joliot[274913]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e399cf45-e6b6-5393-99f1-75c601d3f188,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afc3740a-ccee-46f8-b343-22648cc89376,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 20 14:31:28 compute-0 vigilant_joliot[274913]:             "lv_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 14:31:28 compute-0 vigilant_joliot[274913]:             "name": "ceph_lv0",
Jan 20 14:31:28 compute-0 vigilant_joliot[274913]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:31:28 compute-0 vigilant_joliot[274913]:             "tags": {
Jan 20 14:31:28 compute-0 vigilant_joliot[274913]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:31:28 compute-0 vigilant_joliot[274913]:                 "ceph.block_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 14:31:28 compute-0 vigilant_joliot[274913]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 14:31:28 compute-0 vigilant_joliot[274913]:                 "ceph.cluster_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 14:31:28 compute-0 vigilant_joliot[274913]:                 "ceph.cluster_name": "ceph",
Jan 20 14:31:28 compute-0 vigilant_joliot[274913]:                 "ceph.crush_device_class": "",
Jan 20 14:31:28 compute-0 vigilant_joliot[274913]:                 "ceph.encrypted": "0",
Jan 20 14:31:28 compute-0 vigilant_joliot[274913]:                 "ceph.osd_fsid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 14:31:28 compute-0 vigilant_joliot[274913]:                 "ceph.osd_id": "0",
Jan 20 14:31:28 compute-0 vigilant_joliot[274913]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 14:31:28 compute-0 vigilant_joliot[274913]:                 "ceph.type": "block",
Jan 20 14:31:28 compute-0 vigilant_joliot[274913]:                 "ceph.vdo": "0"
Jan 20 14:31:28 compute-0 vigilant_joliot[274913]:             },
Jan 20 14:31:28 compute-0 vigilant_joliot[274913]:             "type": "block",
Jan 20 14:31:28 compute-0 vigilant_joliot[274913]:             "vg_name": "ceph_vg0"
Jan 20 14:31:28 compute-0 vigilant_joliot[274913]:         }
Jan 20 14:31:28 compute-0 vigilant_joliot[274913]:     ]
Jan 20 14:31:28 compute-0 vigilant_joliot[274913]: }
Jan 20 14:31:28 compute-0 systemd[1]: libpod-52b9679536f1db1b983d9703e5dd088d0b47e8d6625923587dadb204c791d7bf.scope: Deactivated successfully.
Jan 20 14:31:28 compute-0 podman[274896]: 2026-01-20 14:31:28.733018162 +0000 UTC m=+1.004784421 container died 52b9679536f1db1b983d9703e5dd088d0b47e8d6625923587dadb204c791d7bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_joliot, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 20 14:31:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-cdf99c564a7d3dd205e67a8e6804fc11c652ab2b19b2f8ca381d445642806b21-merged.mount: Deactivated successfully.
Jan 20 14:31:28 compute-0 podman[274896]: 2026-01-20 14:31:28.873797386 +0000 UTC m=+1.145563645 container remove 52b9679536f1db1b983d9703e5dd088d0b47e8d6625923587dadb204c791d7bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_joliot, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 20 14:31:28 compute-0 systemd[1]: libpod-conmon-52b9679536f1db1b983d9703e5dd088d0b47e8d6625923587dadb204c791d7bf.scope: Deactivated successfully.
Jan 20 14:31:28 compute-0 sudo[274788]: pam_unix(sudo:session): session closed for user root
Jan 20 14:31:28 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:31:28 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:31:28 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:31:28.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:31:28 compute-0 sudo[274935]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:31:28 compute-0 sudo[274935]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:31:28 compute-0 sudo[274935]: pam_unix(sudo:session): session closed for user root
Jan 20 14:31:29 compute-0 sudo[274960]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:31:29 compute-0 sudo[274960]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:31:29 compute-0 sudo[274960]: pam_unix(sudo:session): session closed for user root
Jan 20 14:31:29 compute-0 sudo[274985]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:31:29 compute-0 sudo[274985]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:31:29 compute-0 sudo[274985]: pam_unix(sudo:session): session closed for user root
Jan 20 14:31:29 compute-0 sudo[275010]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- raw list --format json
Jan 20 14:31:29 compute-0 sudo[275010]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:31:29 compute-0 nova_compute[250018]: 2026-01-20 14:31:29.330 250022 INFO nova.compute.manager [None req-93f7c99e-19e6-4f32-8523-e3e413cb955c 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] [instance: ad62888a-ef27-43b4-bb6c-439541ff5524] Detaching volume 73c5b3f0-c4ca-48f3-9dc2-d2c15d3fd745
Jan 20 14:31:29 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1227: 321 pgs: 321 active+clean; 200 MiB data, 474 MiB used, 21 GiB / 21 GiB avail; 0 B/s rd, 8.7 KiB/s wr, 1 op/s
Jan 20 14:31:29 compute-0 podman[275075]: 2026-01-20 14:31:29.470693172 +0000 UTC m=+0.041083606 container create 2d3b49e73d6a5cd12e47b4017da488b2df57830ebede486d40d343c775083deb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_darwin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:31:29 compute-0 systemd[1]: Started libpod-conmon-2d3b49e73d6a5cd12e47b4017da488b2df57830ebede486d40d343c775083deb.scope.
Jan 20 14:31:29 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:31:29 compute-0 podman[275075]: 2026-01-20 14:31:29.454125897 +0000 UTC m=+0.024516341 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:31:29 compute-0 podman[275075]: 2026-01-20 14:31:29.553728724 +0000 UTC m=+0.124119248 container init 2d3b49e73d6a5cd12e47b4017da488b2df57830ebede486d40d343c775083deb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_darwin, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:31:29 compute-0 podman[275075]: 2026-01-20 14:31:29.560314281 +0000 UTC m=+0.130704715 container start 2d3b49e73d6a5cd12e47b4017da488b2df57830ebede486d40d343c775083deb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_darwin, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 20 14:31:29 compute-0 podman[275075]: 2026-01-20 14:31:29.563371353 +0000 UTC m=+0.133761827 container attach 2d3b49e73d6a5cd12e47b4017da488b2df57830ebede486d40d343c775083deb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_darwin, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 20 14:31:29 compute-0 dreamy_darwin[275091]: 167 167
Jan 20 14:31:29 compute-0 systemd[1]: libpod-2d3b49e73d6a5cd12e47b4017da488b2df57830ebede486d40d343c775083deb.scope: Deactivated successfully.
Jan 20 14:31:29 compute-0 conmon[275091]: conmon 2d3b49e73d6a5cd12e47 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2d3b49e73d6a5cd12e47b4017da488b2df57830ebede486d40d343c775083deb.scope/container/memory.events
Jan 20 14:31:29 compute-0 podman[275075]: 2026-01-20 14:31:29.568506191 +0000 UTC m=+0.138896675 container died 2d3b49e73d6a5cd12e47b4017da488b2df57830ebede486d40d343c775083deb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_darwin, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 20 14:31:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-4d48f7572178603d87a2c0fa072b9a6da16d370756c0e658ab34b595b58f771f-merged.mount: Deactivated successfully.
Jan 20 14:31:29 compute-0 podman[275075]: 2026-01-20 14:31:29.607907611 +0000 UTC m=+0.178298035 container remove 2d3b49e73d6a5cd12e47b4017da488b2df57830ebede486d40d343c775083deb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_darwin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:31:29 compute-0 systemd[1]: libpod-conmon-2d3b49e73d6a5cd12e47b4017da488b2df57830ebede486d40d343c775083deb.scope: Deactivated successfully.
Jan 20 14:31:29 compute-0 nova_compute[250018]: 2026-01-20 14:31:29.680 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:31:29 compute-0 podman[275114]: 2026-01-20 14:31:29.788793383 +0000 UTC m=+0.042205385 container create cbcbdceac1afa6c8d33a7dce05d9bd4ebbcfd431bd53aa455198ecb448306f1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_haibt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:31:29 compute-0 systemd[1]: Started libpod-conmon-cbcbdceac1afa6c8d33a7dce05d9bd4ebbcfd431bd53aa455198ecb448306f1d.scope.
Jan 20 14:31:29 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:31:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f29765588cfa8edafc6235c1e207536d5f9c571856c624c117cff8636e696f7d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:31:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f29765588cfa8edafc6235c1e207536d5f9c571856c624c117cff8636e696f7d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:31:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f29765588cfa8edafc6235c1e207536d5f9c571856c624c117cff8636e696f7d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:31:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f29765588cfa8edafc6235c1e207536d5f9c571856c624c117cff8636e696f7d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:31:29 compute-0 podman[275114]: 2026-01-20 14:31:29.770049048 +0000 UTC m=+0.023461080 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:31:29 compute-0 podman[275114]: 2026-01-20 14:31:29.874343832 +0000 UTC m=+0.127755844 container init cbcbdceac1afa6c8d33a7dce05d9bd4ebbcfd431bd53aa455198ecb448306f1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_haibt, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:31:29 compute-0 podman[275114]: 2026-01-20 14:31:29.880242211 +0000 UTC m=+0.133654213 container start cbcbdceac1afa6c8d33a7dce05d9bd4ebbcfd431bd53aa455198ecb448306f1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_haibt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Jan 20 14:31:29 compute-0 podman[275114]: 2026-01-20 14:31:29.883169299 +0000 UTC m=+0.136581301 container attach cbcbdceac1afa6c8d33a7dce05d9bd4ebbcfd431bd53aa455198ecb448306f1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_haibt, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:31:29 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:31:29 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:31:29 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:31:29.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:31:30 compute-0 nova_compute[250018]: 2026-01-20 14:31:30.444 250022 INFO nova.virt.block_device [None req-93f7c99e-19e6-4f32-8523-e3e413cb955c 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] [instance: ad62888a-ef27-43b4-bb6c-439541ff5524] Attempting to driver detach volume 73c5b3f0-c4ca-48f3-9dc2-d2c15d3fd745 from mountpoint /dev/vdb
Jan 20 14:31:30 compute-0 nova_compute[250018]: 2026-01-20 14:31:30.456 250022 DEBUG nova.virt.libvirt.driver [None req-93f7c99e-19e6-4f32-8523-e3e413cb955c 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Attempting to detach device vdb from instance ad62888a-ef27-43b4-bb6c-439541ff5524 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Jan 20 14:31:30 compute-0 nova_compute[250018]: 2026-01-20 14:31:30.456 250022 DEBUG nova.virt.libvirt.guest [None req-93f7c99e-19e6-4f32-8523-e3e413cb955c 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] detach device xml: <disk type="network" device="disk">
Jan 20 14:31:30 compute-0 nova_compute[250018]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 20 14:31:30 compute-0 nova_compute[250018]:   <source protocol="rbd" name="volumes/volume-73c5b3f0-c4ca-48f3-9dc2-d2c15d3fd745">
Jan 20 14:31:30 compute-0 nova_compute[250018]:     <host name="192.168.122.100" port="6789"/>
Jan 20 14:31:30 compute-0 nova_compute[250018]:     <host name="192.168.122.102" port="6789"/>
Jan 20 14:31:30 compute-0 nova_compute[250018]:     <host name="192.168.122.101" port="6789"/>
Jan 20 14:31:30 compute-0 nova_compute[250018]:   </source>
Jan 20 14:31:30 compute-0 nova_compute[250018]:   <target dev="vdb" bus="virtio"/>
Jan 20 14:31:30 compute-0 nova_compute[250018]:   <serial>73c5b3f0-c4ca-48f3-9dc2-d2c15d3fd745</serial>
Jan 20 14:31:30 compute-0 nova_compute[250018]:   <shareable/>
Jan 20 14:31:30 compute-0 nova_compute[250018]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Jan 20 14:31:30 compute-0 nova_compute[250018]: </disk>
Jan 20 14:31:30 compute-0 nova_compute[250018]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 20 14:31:30 compute-0 nova_compute[250018]: 2026-01-20 14:31:30.471 250022 INFO nova.virt.libvirt.driver [None req-93f7c99e-19e6-4f32-8523-e3e413cb955c 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Successfully detached device vdb from instance ad62888a-ef27-43b4-bb6c-439541ff5524 from the persistent domain config.
Jan 20 14:31:30 compute-0 nova_compute[250018]: 2026-01-20 14:31:30.471 250022 DEBUG nova.virt.libvirt.driver [None req-93f7c99e-19e6-4f32-8523-e3e413cb955c 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance ad62888a-ef27-43b4-bb6c-439541ff5524 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Jan 20 14:31:30 compute-0 nova_compute[250018]: 2026-01-20 14:31:30.472 250022 DEBUG nova.virt.libvirt.guest [None req-93f7c99e-19e6-4f32-8523-e3e413cb955c 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] detach device xml: <disk type="network" device="disk">
Jan 20 14:31:30 compute-0 nova_compute[250018]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 20 14:31:30 compute-0 nova_compute[250018]:   <source protocol="rbd" name="volumes/volume-73c5b3f0-c4ca-48f3-9dc2-d2c15d3fd745">
Jan 20 14:31:30 compute-0 nova_compute[250018]:     <host name="192.168.122.100" port="6789"/>
Jan 20 14:31:30 compute-0 nova_compute[250018]:     <host name="192.168.122.102" port="6789"/>
Jan 20 14:31:30 compute-0 nova_compute[250018]:     <host name="192.168.122.101" port="6789"/>
Jan 20 14:31:30 compute-0 nova_compute[250018]:   </source>
Jan 20 14:31:30 compute-0 nova_compute[250018]:   <target dev="vdb" bus="virtio"/>
Jan 20 14:31:30 compute-0 nova_compute[250018]:   <serial>73c5b3f0-c4ca-48f3-9dc2-d2c15d3fd745</serial>
Jan 20 14:31:30 compute-0 nova_compute[250018]:   <shareable/>
Jan 20 14:31:30 compute-0 nova_compute[250018]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Jan 20 14:31:30 compute-0 nova_compute[250018]: </disk>
Jan 20 14:31:30 compute-0 nova_compute[250018]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 20 14:31:30 compute-0 nova_compute[250018]: 2026-01-20 14:31:30.531 250022 DEBUG nova.virt.libvirt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Received event <DeviceRemovedEvent: 1768919490.5309455, ad62888a-ef27-43b4-bb6c-439541ff5524 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Jan 20 14:31:30 compute-0 nova_compute[250018]: 2026-01-20 14:31:30.533 250022 DEBUG nova.virt.libvirt.driver [None req-93f7c99e-19e6-4f32-8523-e3e413cb955c 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance ad62888a-ef27-43b4-bb6c-439541ff5524 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Jan 20 14:31:30 compute-0 nova_compute[250018]: 2026-01-20 14:31:30.535 250022 INFO nova.virt.libvirt.driver [None req-93f7c99e-19e6-4f32-8523-e3e413cb955c 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Successfully detached device vdb from instance ad62888a-ef27-43b4-bb6c-439541ff5524 from the live domain config.
Jan 20 14:31:30 compute-0 ceph-mon[74360]: pgmap v1227: 321 pgs: 321 active+clean; 200 MiB data, 474 MiB used, 21 GiB / 21 GiB avail; 0 B/s rd, 8.7 KiB/s wr, 1 op/s
Jan 20 14:31:30 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/1825073402' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:31:30 compute-0 ecstatic_haibt[275132]: {
Jan 20 14:31:30 compute-0 ecstatic_haibt[275132]:     "afc3740a-ccee-46f8-b343-22648cc89376": {
Jan 20 14:31:30 compute-0 ecstatic_haibt[275132]:         "ceph_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 14:31:30 compute-0 ecstatic_haibt[275132]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 20 14:31:30 compute-0 ecstatic_haibt[275132]:         "osd_id": 0,
Jan 20 14:31:30 compute-0 ecstatic_haibt[275132]:         "osd_uuid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 14:31:30 compute-0 ecstatic_haibt[275132]:         "type": "bluestore"
Jan 20 14:31:30 compute-0 ecstatic_haibt[275132]:     }
Jan 20 14:31:30 compute-0 ecstatic_haibt[275132]: }
Jan 20 14:31:30 compute-0 systemd[1]: libpod-cbcbdceac1afa6c8d33a7dce05d9bd4ebbcfd431bd53aa455198ecb448306f1d.scope: Deactivated successfully.
Jan 20 14:31:30 compute-0 nova_compute[250018]: 2026-01-20 14:31:30.723 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:31:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:31:30.746 160071 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:31:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:31:30.747 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:31:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:31:30.747 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:31:30 compute-0 podman[275155]: 2026-01-20 14:31:30.758188571 +0000 UTC m=+0.025332632 container died cbcbdceac1afa6c8d33a7dce05d9bd4ebbcfd431bd53aa455198ecb448306f1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_haibt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0)
Jan 20 14:31:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-f29765588cfa8edafc6235c1e207536d5f9c571856c624c117cff8636e696f7d-merged.mount: Deactivated successfully.
Jan 20 14:31:30 compute-0 podman[275155]: 2026-01-20 14:31:30.816854028 +0000 UTC m=+0.083998039 container remove cbcbdceac1afa6c8d33a7dce05d9bd4ebbcfd431bd53aa455198ecb448306f1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_haibt, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True)
Jan 20 14:31:30 compute-0 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 20 14:31:30 compute-0 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 20 14:31:30 compute-0 systemd[1]: libpod-conmon-cbcbdceac1afa6c8d33a7dce05d9bd4ebbcfd431bd53aa455198ecb448306f1d.scope: Deactivated successfully.
Jan 20 14:31:30 compute-0 sudo[275010]: pam_unix(sudo:session): session closed for user root
Jan 20 14:31:30 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 14:31:30 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:31:30 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:31:30 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:31:30.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:31:30 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:31:30 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 14:31:30 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:31:30 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev af04f3a9-b0dd-4c12-a110-0fbf4657fe46 does not exist
Jan 20 14:31:30 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev b375d4e5-10e3-463c-90e2-6f768992f3f2 does not exist
Jan 20 14:31:30 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 80735cd6-8722-49ba-bf2b-a38d56e0e936 does not exist
Jan 20 14:31:31 compute-0 sudo[275173]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:31:31 compute-0 sudo[275173]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:31:31 compute-0 sudo[275173]: pam_unix(sudo:session): session closed for user root
Jan 20 14:31:31 compute-0 sudo[275198]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 14:31:31 compute-0 sudo[275198]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:31:31 compute-0 sudo[275198]: pam_unix(sudo:session): session closed for user root
Jan 20 14:31:31 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1228: 321 pgs: 321 active+clean; 200 MiB data, 474 MiB used, 21 GiB / 21 GiB avail; 0 B/s rd, 8.7 KiB/s wr, 1 op/s
Jan 20 14:31:31 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:31:31 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:31:31 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:31:31.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:31:31 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:31:31 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:31:31 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:31:31 compute-0 ceph-mon[74360]: pgmap v1228: 321 pgs: 321 active+clean; 200 MiB data, 474 MiB used, 21 GiB / 21 GiB avail; 0 B/s rd, 8.7 KiB/s wr, 1 op/s
Jan 20 14:31:32 compute-0 nova_compute[250018]: 2026-01-20 14:31:32.322 250022 DEBUG nova.objects.instance [None req-93f7c99e-19e6-4f32-8523-e3e413cb955c 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Lazy-loading 'flavor' on Instance uuid ad62888a-ef27-43b4-bb6c-439541ff5524 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:31:32 compute-0 nova_compute[250018]: 2026-01-20 14:31:32.389 250022 DEBUG oslo_concurrency.lockutils [None req-93f7c99e-19e6-4f32-8523-e3e413cb955c 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Lock "ad62888a-ef27-43b4-bb6c-439541ff5524" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 4.608s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:31:32 compute-0 podman[275224]: 2026-01-20 14:31:32.453625196 +0000 UTC m=+0.048024841 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 20 14:31:32 compute-0 podman[275223]: 2026-01-20 14:31:32.493147139 +0000 UTC m=+0.087696428 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3)
Jan 20 14:31:32 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:31:32 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:31:32 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:31:32.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:31:33 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1229: 321 pgs: 321 active+clean; 200 MiB data, 474 MiB used, 21 GiB / 21 GiB avail; 8.6 KiB/s wr, 1 op/s
Jan 20 14:31:33 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:31:33 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:31:33 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:31:33.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:31:34 compute-0 nova_compute[250018]: 2026-01-20 14:31:34.683 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:31:34 compute-0 ceph-mon[74360]: pgmap v1229: 321 pgs: 321 active+clean; 200 MiB data, 474 MiB used, 21 GiB / 21 GiB avail; 8.6 KiB/s wr, 1 op/s
Jan 20 14:31:34 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:31:34 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:31:34 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:31:34.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:31:35 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1230: 321 pgs: 321 active+clean; 200 MiB data, 474 MiB used, 21 GiB / 21 GiB avail; 8.6 KiB/s wr, 1 op/s
Jan 20 14:31:35 compute-0 nova_compute[250018]: 2026-01-20 14:31:35.795 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:31:35 compute-0 ceph-mon[74360]: pgmap v1230: 321 pgs: 321 active+clean; 200 MiB data, 474 MiB used, 21 GiB / 21 GiB avail; 8.6 KiB/s wr, 1 op/s
Jan 20 14:31:35 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:31:35 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:31:35 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:31:35.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:31:36 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:31:36 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:31:36 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:31:36.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:31:36 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:31:37 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1231: 321 pgs: 321 active+clean; 175 MiB data, 474 MiB used, 21 GiB / 21 GiB avail; 2.9 KiB/s rd, 680 B/s wr, 5 op/s
Jan 20 14:31:37 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:31:37 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:31:37 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:31:37.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:31:38 compute-0 ceph-mon[74360]: pgmap v1231: 321 pgs: 321 active+clean; 175 MiB data, 474 MiB used, 21 GiB / 21 GiB avail; 2.9 KiB/s rd, 680 B/s wr, 5 op/s
Jan 20 14:31:38 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:31:38 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:31:38 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:31:38.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:31:39 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1232: 321 pgs: 321 active+clean; 162 MiB data, 470 MiB used, 21 GiB / 21 GiB avail; 2.9 KiB/s rd, 682 B/s wr, 6 op/s
Jan 20 14:31:39 compute-0 nova_compute[250018]: 2026-01-20 14:31:39.685 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:31:39 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/246380524' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:31:39 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:31:39 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:31:39 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:31:39.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:31:40 compute-0 ceph-mon[74360]: pgmap v1232: 321 pgs: 321 active+clean; 162 MiB data, 470 MiB used, 21 GiB / 21 GiB avail; 2.9 KiB/s rd, 682 B/s wr, 6 op/s
Jan 20 14:31:40 compute-0 nova_compute[250018]: 2026-01-20 14:31:40.797 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:31:40 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:31:40 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:31:40 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:31:40.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:31:41 compute-0 sudo[275271]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:31:41 compute-0 sudo[275271]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:31:41 compute-0 sudo[275271]: pam_unix(sudo:session): session closed for user root
Jan 20 14:31:41 compute-0 sudo[275296]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:31:41 compute-0 sudo[275296]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:31:41 compute-0 sudo[275296]: pam_unix(sudo:session): session closed for user root
Jan 20 14:31:41 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1233: 321 pgs: 321 active+clean; 121 MiB data, 428 MiB used, 21 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 20 14:31:41 compute-0 ceph-mon[74360]: pgmap v1233: 321 pgs: 321 active+clean; 121 MiB data, 428 MiB used, 21 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 20 14:31:41 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:31:41 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:31:41 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:31:41 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:31:41.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:31:42 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/3784413517' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:31:42 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:31:42 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:31:42 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:31:42.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:31:43 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1234: 321 pgs: 321 active+clean; 121 MiB data, 428 MiB used, 21 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 20 14:31:43 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/2619803401' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:31:43 compute-0 ceph-mon[74360]: pgmap v1234: 321 pgs: 321 active+clean; 121 MiB data, 428 MiB used, 21 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 20 14:31:43 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:31:43 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:31:43 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:31:43.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:31:43 compute-0 nova_compute[250018]: 2026-01-20 14:31:43.958 250022 DEBUG oslo_concurrency.lockutils [None req-9a130214-188b-4102-b551-dbb762cf8a77 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Acquiring lock "ad62888a-ef27-43b4-bb6c-439541ff5524" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:31:43 compute-0 nova_compute[250018]: 2026-01-20 14:31:43.958 250022 DEBUG oslo_concurrency.lockutils [None req-9a130214-188b-4102-b551-dbb762cf8a77 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Lock "ad62888a-ef27-43b4-bb6c-439541ff5524" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:31:43 compute-0 nova_compute[250018]: 2026-01-20 14:31:43.958 250022 DEBUG oslo_concurrency.lockutils [None req-9a130214-188b-4102-b551-dbb762cf8a77 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Acquiring lock "ad62888a-ef27-43b4-bb6c-439541ff5524-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:31:43 compute-0 nova_compute[250018]: 2026-01-20 14:31:43.959 250022 DEBUG oslo_concurrency.lockutils [None req-9a130214-188b-4102-b551-dbb762cf8a77 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Lock "ad62888a-ef27-43b4-bb6c-439541ff5524-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:31:43 compute-0 nova_compute[250018]: 2026-01-20 14:31:43.959 250022 DEBUG oslo_concurrency.lockutils [None req-9a130214-188b-4102-b551-dbb762cf8a77 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Lock "ad62888a-ef27-43b4-bb6c-439541ff5524-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:31:43 compute-0 nova_compute[250018]: 2026-01-20 14:31:43.960 250022 INFO nova.compute.manager [None req-9a130214-188b-4102-b551-dbb762cf8a77 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] [instance: ad62888a-ef27-43b4-bb6c-439541ff5524] Terminating instance
Jan 20 14:31:43 compute-0 nova_compute[250018]: 2026-01-20 14:31:43.960 250022 DEBUG nova.compute.manager [None req-9a130214-188b-4102-b551-dbb762cf8a77 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] [instance: ad62888a-ef27-43b4-bb6c-439541ff5524] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 14:31:44 compute-0 kernel: tapdc8abb47-59 (unregistering): left promiscuous mode
Jan 20 14:31:44 compute-0 NetworkManager[48960]: <info>  [1768919504.0109] device (tapdc8abb47-59): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 14:31:44 compute-0 nova_compute[250018]: 2026-01-20 14:31:44.020 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:31:44 compute-0 ovn_controller[148666]: 2026-01-20T14:31:44Z|00101|binding|INFO|Releasing lport dc8abb47-5960-4824-b04c-1903f2eb5e32 from this chassis (sb_readonly=0)
Jan 20 14:31:44 compute-0 nova_compute[250018]: 2026-01-20 14:31:44.022 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:31:44 compute-0 ovn_controller[148666]: 2026-01-20T14:31:44Z|00102|binding|INFO|Setting lport dc8abb47-5960-4824-b04c-1903f2eb5e32 down in Southbound
Jan 20 14:31:44 compute-0 ovn_controller[148666]: 2026-01-20T14:31:44Z|00103|binding|INFO|Removing iface tapdc8abb47-59 ovn-installed in OVS
Jan 20 14:31:44 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:31:44.037 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:76:68:31 10.100.0.4'], port_security=['fa:16:3e:76:68:31 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'ad62888a-ef27-43b4-bb6c-439541ff5524', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-02f86d1d-5cad-49c5-9004-3de3e4739ad5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0cee74dd60da4a839bb5eb0ba3137edf', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e08f10e3-3a95-4e33-b03d-21860ea0dc91', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1f4fb07a-2698-4a11-a9e3-5a66d678d9d5, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=dc8abb47-5960-4824-b04c-1903f2eb5e32) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:31:44 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:31:44.038 160071 INFO neutron.agent.ovn.metadata.agent [-] Port dc8abb47-5960-4824-b04c-1903f2eb5e32 in datapath 02f86d1d-5cad-49c5-9004-3de3e4739ad5 unbound from our chassis
Jan 20 14:31:44 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:31:44.039 160071 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 02f86d1d-5cad-49c5-9004-3de3e4739ad5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 14:31:44 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:31:44.040 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[cdd1071a-44a6-4fab-bba5-39a6440ce6df]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:31:44 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:31:44.041 160071 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-02f86d1d-5cad-49c5-9004-3de3e4739ad5 namespace which is not needed anymore
Jan 20 14:31:44 compute-0 nova_compute[250018]: 2026-01-20 14:31:44.041 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:31:44 compute-0 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d0000001a.scope: Deactivated successfully.
Jan 20 14:31:44 compute-0 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d0000001a.scope: Consumed 19.634s CPU time.
Jan 20 14:31:44 compute-0 systemd-machined[216401]: Machine qemu-14-instance-0000001a terminated.
Jan 20 14:31:44 compute-0 neutron-haproxy-ovnmeta-02f86d1d-5cad-49c5-9004-3de3e4739ad5[271525]: [NOTICE]   (271529) : haproxy version is 2.8.14-c23fe91
Jan 20 14:31:44 compute-0 neutron-haproxy-ovnmeta-02f86d1d-5cad-49c5-9004-3de3e4739ad5[271525]: [NOTICE]   (271529) : path to executable is /usr/sbin/haproxy
Jan 20 14:31:44 compute-0 neutron-haproxy-ovnmeta-02f86d1d-5cad-49c5-9004-3de3e4739ad5[271525]: [WARNING]  (271529) : Exiting Master process...
Jan 20 14:31:44 compute-0 neutron-haproxy-ovnmeta-02f86d1d-5cad-49c5-9004-3de3e4739ad5[271525]: [ALERT]    (271529) : Current worker (271531) exited with code 143 (Terminated)
Jan 20 14:31:44 compute-0 neutron-haproxy-ovnmeta-02f86d1d-5cad-49c5-9004-3de3e4739ad5[271525]: [WARNING]  (271529) : All workers exited. Exiting... (0)
Jan 20 14:31:44 compute-0 nova_compute[250018]: 2026-01-20 14:31:44.178 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:31:44 compute-0 systemd[1]: libpod-c1cc1769e78c249f0edec290c2389bc855f4d8ddff40a6c844fbcc6f68cd863e.scope: Deactivated successfully.
Jan 20 14:31:44 compute-0 nova_compute[250018]: 2026-01-20 14:31:44.183 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:31:44 compute-0 podman[275347]: 2026-01-20 14:31:44.185952185 +0000 UTC m=+0.054647730 container died c1cc1769e78c249f0edec290c2389bc855f4d8ddff40a6c844fbcc6f68cd863e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-02f86d1d-5cad-49c5-9004-3de3e4739ad5, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2)
Jan 20 14:31:44 compute-0 nova_compute[250018]: 2026-01-20 14:31:44.195 250022 INFO nova.virt.libvirt.driver [-] [instance: ad62888a-ef27-43b4-bb6c-439541ff5524] Instance destroyed successfully.
Jan 20 14:31:44 compute-0 nova_compute[250018]: 2026-01-20 14:31:44.196 250022 DEBUG nova.objects.instance [None req-9a130214-188b-4102-b551-dbb762cf8a77 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Lazy-loading 'resources' on Instance uuid ad62888a-ef27-43b4-bb6c-439541ff5524 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:31:44 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c1cc1769e78c249f0edec290c2389bc855f4d8ddff40a6c844fbcc6f68cd863e-userdata-shm.mount: Deactivated successfully.
Jan 20 14:31:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-ede76b2e5f9c26cb8825a67ace742c989a58072ca2992f4a3c537602accae853-merged.mount: Deactivated successfully.
Jan 20 14:31:44 compute-0 nova_compute[250018]: 2026-01-20 14:31:44.219 250022 DEBUG nova.virt.libvirt.vif [None req-9a130214-188b-4102-b551-dbb762cf8a77 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:29:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-UpdateMultiattachVolumeNegativeTest-server-947631498',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-updatemultiattachvolumenegativetest-server-947631498',id=26,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH+01n3DJe3yYfRmwifZEomZrLtaFilErLasmr7ze/p0n1d6nPaSWQOHrHfJ9ubgBCwoqlwHjFIWrKKyRcRI1f3OIubHCG4LO7UMySAzmCXBSDkLJPz6Qzoln3dTb/xrow==',key_name='tempest-keypair-696534507',keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:29:31Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0cee74dd60da4a839bb5eb0ba3137edf',ramdisk_id='',reservation_id='r-n5nexebz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-UpdateMultiattachVolumeNegativeTest-859917658',owner_user_name='tempest-UpdateMultiattachVolumeNegativeTest-859917658-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T14:29:31Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='6a3fbc3f92a849e88cbf34d28ca17e43',uuid=ad62888a-ef27-43b4-bb6c-439541ff5524,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "dc8abb47-5960-4824-b04c-1903f2eb5e32", "address": "fa:16:3e:76:68:31", "network": {"id": "02f86d1d-5cad-49c5-9004-3de3e4739ad5", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-889517255-network", "subnets": 
[{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0cee74dd60da4a839bb5eb0ba3137edf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc8abb47-59", "ovs_interfaceid": "dc8abb47-5960-4824-b04c-1903f2eb5e32", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 14:31:44 compute-0 nova_compute[250018]: 2026-01-20 14:31:44.220 250022 DEBUG nova.network.os_vif_util [None req-9a130214-188b-4102-b551-dbb762cf8a77 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Converting VIF {"id": "dc8abb47-5960-4824-b04c-1903f2eb5e32", "address": "fa:16:3e:76:68:31", "network": {"id": "02f86d1d-5cad-49c5-9004-3de3e4739ad5", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-889517255-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0cee74dd60da4a839bb5eb0ba3137edf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc8abb47-59", "ovs_interfaceid": "dc8abb47-5960-4824-b04c-1903f2eb5e32", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:31:44 compute-0 nova_compute[250018]: 2026-01-20 14:31:44.220 250022 DEBUG nova.network.os_vif_util [None req-9a130214-188b-4102-b551-dbb762cf8a77 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:76:68:31,bridge_name='br-int',has_traffic_filtering=True,id=dc8abb47-5960-4824-b04c-1903f2eb5e32,network=Network(02f86d1d-5cad-49c5-9004-3de3e4739ad5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdc8abb47-59') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:31:44 compute-0 nova_compute[250018]: 2026-01-20 14:31:44.221 250022 DEBUG os_vif [None req-9a130214-188b-4102-b551-dbb762cf8a77 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:76:68:31,bridge_name='br-int',has_traffic_filtering=True,id=dc8abb47-5960-4824-b04c-1903f2eb5e32,network=Network(02f86d1d-5cad-49c5-9004-3de3e4739ad5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdc8abb47-59') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 14:31:44 compute-0 nova_compute[250018]: 2026-01-20 14:31:44.223 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:31:44 compute-0 nova_compute[250018]: 2026-01-20 14:31:44.223 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdc8abb47-59, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:31:44 compute-0 nova_compute[250018]: 2026-01-20 14:31:44.224 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:31:44 compute-0 podman[275347]: 2026-01-20 14:31:44.225697744 +0000 UTC m=+0.094393289 container cleanup c1cc1769e78c249f0edec290c2389bc855f4d8ddff40a6c844fbcc6f68cd863e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-02f86d1d-5cad-49c5-9004-3de3e4739ad5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 20 14:31:44 compute-0 nova_compute[250018]: 2026-01-20 14:31:44.226 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:31:44 compute-0 nova_compute[250018]: 2026-01-20 14:31:44.229 250022 INFO os_vif [None req-9a130214-188b-4102-b551-dbb762cf8a77 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:76:68:31,bridge_name='br-int',has_traffic_filtering=True,id=dc8abb47-5960-4824-b04c-1903f2eb5e32,network=Network(02f86d1d-5cad-49c5-9004-3de3e4739ad5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdc8abb47-59')
Jan 20 14:31:44 compute-0 systemd[1]: libpod-conmon-c1cc1769e78c249f0edec290c2389bc855f4d8ddff40a6c844fbcc6f68cd863e.scope: Deactivated successfully.
Jan 20 14:31:44 compute-0 podman[275392]: 2026-01-20 14:31:44.303629129 +0000 UTC m=+0.048114604 container remove c1cc1769e78c249f0edec290c2389bc855f4d8ddff40a6c844fbcc6f68cd863e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-02f86d1d-5cad-49c5-9004-3de3e4739ad5, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 20 14:31:44 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:31:44.308 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[d99c7c89-9e14-4177-b997-bf8a1bb6e355]: (4, ('Tue Jan 20 02:31:44 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-02f86d1d-5cad-49c5-9004-3de3e4739ad5 (c1cc1769e78c249f0edec290c2389bc855f4d8ddff40a6c844fbcc6f68cd863e)\nc1cc1769e78c249f0edec290c2389bc855f4d8ddff40a6c844fbcc6f68cd863e\nTue Jan 20 02:31:44 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-02f86d1d-5cad-49c5-9004-3de3e4739ad5 (c1cc1769e78c249f0edec290c2389bc855f4d8ddff40a6c844fbcc6f68cd863e)\nc1cc1769e78c249f0edec290c2389bc855f4d8ddff40a6c844fbcc6f68cd863e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:31:44 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:31:44.310 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[e6668d06-799f-4aed-8e27-836f259c8a2f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:31:44 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:31:44.311 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap02f86d1d-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:31:44 compute-0 nova_compute[250018]: 2026-01-20 14:31:44.312 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:31:44 compute-0 kernel: tap02f86d1d-50: left promiscuous mode
Jan 20 14:31:44 compute-0 nova_compute[250018]: 2026-01-20 14:31:44.325 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:31:44 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:31:44.328 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[2baab203-9bf1-4b7f-8e90-4e5aeecbbf2c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:31:44 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:31:44.343 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[8c51268a-b4d6-4ed8-84f3-ed08bd5453e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:31:44 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:31:44.344 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[630e99ef-f617-4adc-a6bd-48999907d45e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:31:44 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:31:44.358 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[01e7dc00-6133-43b0-b33a-184889a6c13a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 531170, 'reachable_time': 39610, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 275416, 'error': None, 'target': 'ovnmeta-02f86d1d-5cad-49c5-9004-3de3e4739ad5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:31:44 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:31:44.361 160517 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-02f86d1d-5cad-49c5-9004-3de3e4739ad5 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 14:31:44 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:31:44.361 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[a9c28ffd-1019-4686-9291-11a01530837b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:31:44 compute-0 systemd[1]: run-netns-ovnmeta\x2d02f86d1d\x2d5cad\x2d49c5\x2d9004\x2d3de3e4739ad5.mount: Deactivated successfully.
Jan 20 14:31:44 compute-0 nova_compute[250018]: 2026-01-20 14:31:44.539 250022 DEBUG nova.compute.manager [req-df2b9914-4de7-40e6-a754-9a0ec3180d3d req-1984cf77-065e-43e7-a890-3a774fdae565 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: ad62888a-ef27-43b4-bb6c-439541ff5524] Received event network-vif-unplugged-dc8abb47-5960-4824-b04c-1903f2eb5e32 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:31:44 compute-0 nova_compute[250018]: 2026-01-20 14:31:44.540 250022 DEBUG oslo_concurrency.lockutils [req-df2b9914-4de7-40e6-a754-9a0ec3180d3d req-1984cf77-065e-43e7-a890-3a774fdae565 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "ad62888a-ef27-43b4-bb6c-439541ff5524-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:31:44 compute-0 nova_compute[250018]: 2026-01-20 14:31:44.541 250022 DEBUG oslo_concurrency.lockutils [req-df2b9914-4de7-40e6-a754-9a0ec3180d3d req-1984cf77-065e-43e7-a890-3a774fdae565 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "ad62888a-ef27-43b4-bb6c-439541ff5524-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:31:44 compute-0 nova_compute[250018]: 2026-01-20 14:31:44.541 250022 DEBUG oslo_concurrency.lockutils [req-df2b9914-4de7-40e6-a754-9a0ec3180d3d req-1984cf77-065e-43e7-a890-3a774fdae565 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "ad62888a-ef27-43b4-bb6c-439541ff5524-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:31:44 compute-0 nova_compute[250018]: 2026-01-20 14:31:44.541 250022 DEBUG nova.compute.manager [req-df2b9914-4de7-40e6-a754-9a0ec3180d3d req-1984cf77-065e-43e7-a890-3a774fdae565 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: ad62888a-ef27-43b4-bb6c-439541ff5524] No waiting events found dispatching network-vif-unplugged-dc8abb47-5960-4824-b04c-1903f2eb5e32 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:31:44 compute-0 nova_compute[250018]: 2026-01-20 14:31:44.542 250022 DEBUG nova.compute.manager [req-df2b9914-4de7-40e6-a754-9a0ec3180d3d req-1984cf77-065e-43e7-a890-3a774fdae565 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: ad62888a-ef27-43b4-bb6c-439541ff5524] Received event network-vif-unplugged-dc8abb47-5960-4824-b04c-1903f2eb5e32 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 20 14:31:44 compute-0 nova_compute[250018]: 2026-01-20 14:31:44.575 250022 INFO nova.virt.libvirt.driver [None req-9a130214-188b-4102-b551-dbb762cf8a77 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] [instance: ad62888a-ef27-43b4-bb6c-439541ff5524] Deleting instance files /var/lib/nova/instances/ad62888a-ef27-43b4-bb6c-439541ff5524_del
Jan 20 14:31:44 compute-0 nova_compute[250018]: 2026-01-20 14:31:44.576 250022 INFO nova.virt.libvirt.driver [None req-9a130214-188b-4102-b551-dbb762cf8a77 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] [instance: ad62888a-ef27-43b4-bb6c-439541ff5524] Deletion of /var/lib/nova/instances/ad62888a-ef27-43b4-bb6c-439541ff5524_del complete
Jan 20 14:31:44 compute-0 nova_compute[250018]: 2026-01-20 14:31:44.867 250022 INFO nova.compute.manager [None req-9a130214-188b-4102-b551-dbb762cf8a77 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] [instance: ad62888a-ef27-43b4-bb6c-439541ff5524] Took 0.91 seconds to destroy the instance on the hypervisor.
Jan 20 14:31:44 compute-0 nova_compute[250018]: 2026-01-20 14:31:44.868 250022 DEBUG oslo.service.loopingcall [None req-9a130214-188b-4102-b551-dbb762cf8a77 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 14:31:44 compute-0 nova_compute[250018]: 2026-01-20 14:31:44.868 250022 DEBUG nova.compute.manager [-] [instance: ad62888a-ef27-43b4-bb6c-439541ff5524] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 14:31:44 compute-0 nova_compute[250018]: 2026-01-20 14:31:44.868 250022 DEBUG nova.network.neutron [-] [instance: ad62888a-ef27-43b4-bb6c-439541ff5524] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 14:31:44 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/2511464146' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:31:44 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/1240311285' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:31:44 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:31:44 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:31:44 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:31:44.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:31:45 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1235: 321 pgs: 321 active+clean; 133 MiB data, 447 MiB used, 21 GiB / 21 GiB avail; 67 KiB/s rd, 1.8 MiB/s wr, 96 op/s
Jan 20 14:31:45 compute-0 nova_compute[250018]: 2026-01-20 14:31:45.799 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:31:45 compute-0 ceph-mon[74360]: pgmap v1235: 321 pgs: 321 active+clean; 133 MiB data, 447 MiB used, 21 GiB / 21 GiB avail; 67 KiB/s rd, 1.8 MiB/s wr, 96 op/s
Jan 20 14:31:45 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:31:45 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:31:45 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:31:45.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:31:46 compute-0 nova_compute[250018]: 2026-01-20 14:31:46.915 250022 DEBUG nova.compute.manager [req-652aff62-c66b-4718-9008-32d1fc9f69ab req-53c0e89c-3a23-4fe1-8196-7d30239eef6f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: ad62888a-ef27-43b4-bb6c-439541ff5524] Received event network-vif-plugged-dc8abb47-5960-4824-b04c-1903f2eb5e32 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:31:46 compute-0 nova_compute[250018]: 2026-01-20 14:31:46.915 250022 DEBUG oslo_concurrency.lockutils [req-652aff62-c66b-4718-9008-32d1fc9f69ab req-53c0e89c-3a23-4fe1-8196-7d30239eef6f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "ad62888a-ef27-43b4-bb6c-439541ff5524-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:31:46 compute-0 nova_compute[250018]: 2026-01-20 14:31:46.916 250022 DEBUG oslo_concurrency.lockutils [req-652aff62-c66b-4718-9008-32d1fc9f69ab req-53c0e89c-3a23-4fe1-8196-7d30239eef6f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "ad62888a-ef27-43b4-bb6c-439541ff5524-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:31:46 compute-0 nova_compute[250018]: 2026-01-20 14:31:46.916 250022 DEBUG oslo_concurrency.lockutils [req-652aff62-c66b-4718-9008-32d1fc9f69ab req-53c0e89c-3a23-4fe1-8196-7d30239eef6f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "ad62888a-ef27-43b4-bb6c-439541ff5524-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:31:46 compute-0 nova_compute[250018]: 2026-01-20 14:31:46.916 250022 DEBUG nova.compute.manager [req-652aff62-c66b-4718-9008-32d1fc9f69ab req-53c0e89c-3a23-4fe1-8196-7d30239eef6f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: ad62888a-ef27-43b4-bb6c-439541ff5524] No waiting events found dispatching network-vif-plugged-dc8abb47-5960-4824-b04c-1903f2eb5e32 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:31:46 compute-0 nova_compute[250018]: 2026-01-20 14:31:46.916 250022 WARNING nova.compute.manager [req-652aff62-c66b-4718-9008-32d1fc9f69ab req-53c0e89c-3a23-4fe1-8196-7d30239eef6f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: ad62888a-ef27-43b4-bb6c-439541ff5524] Received unexpected event network-vif-plugged-dc8abb47-5960-4824-b04c-1903f2eb5e32 for instance with vm_state active and task_state deleting.
Jan 20 14:31:46 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:31:46 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:31:46 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:31:46.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:31:46 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:31:47 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1236: 321 pgs: 321 active+clean; 127 MiB data, 440 MiB used, 21 GiB / 21 GiB avail; 75 KiB/s rd, 2.6 MiB/s wr, 111 op/s
Jan 20 14:31:47 compute-0 nova_compute[250018]: 2026-01-20 14:31:47.747 250022 DEBUG nova.network.neutron [-] [instance: ad62888a-ef27-43b4-bb6c-439541ff5524] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:31:47 compute-0 nova_compute[250018]: 2026-01-20 14:31:47.777 250022 INFO nova.compute.manager [-] [instance: ad62888a-ef27-43b4-bb6c-439541ff5524] Took 2.91 seconds to deallocate network for instance.
Jan 20 14:31:47 compute-0 nova_compute[250018]: 2026-01-20 14:31:47.840 250022 DEBUG oslo_concurrency.lockutils [None req-9a130214-188b-4102-b551-dbb762cf8a77 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:31:47 compute-0 nova_compute[250018]: 2026-01-20 14:31:47.841 250022 DEBUG oslo_concurrency.lockutils [None req-9a130214-188b-4102-b551-dbb762cf8a77 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:31:47 compute-0 nova_compute[250018]: 2026-01-20 14:31:47.862 250022 DEBUG nova.compute.manager [req-061e3b85-109b-4e2f-8557-26db00f2617d req-eed707d1-3975-436f-9b83-94325813a876 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: ad62888a-ef27-43b4-bb6c-439541ff5524] Received event network-vif-deleted-dc8abb47-5960-4824-b04c-1903f2eb5e32 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:31:47 compute-0 nova_compute[250018]: 2026-01-20 14:31:47.909 250022 DEBUG oslo_concurrency.processutils [None req-9a130214-188b-4102-b551-dbb762cf8a77 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:31:47 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:31:47 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:31:47 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:31:47.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:31:48 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:31:48 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1433866750' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:31:48 compute-0 nova_compute[250018]: 2026-01-20 14:31:48.378 250022 DEBUG oslo_concurrency.processutils [None req-9a130214-188b-4102-b551-dbb762cf8a77 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:31:48 compute-0 nova_compute[250018]: 2026-01-20 14:31:48.384 250022 DEBUG nova.compute.provider_tree [None req-9a130214-188b-4102-b551-dbb762cf8a77 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:31:48 compute-0 nova_compute[250018]: 2026-01-20 14:31:48.410 250022 DEBUG nova.scheduler.client.report [None req-9a130214-188b-4102-b551-dbb762cf8a77 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:31:48 compute-0 nova_compute[250018]: 2026-01-20 14:31:48.437 250022 DEBUG oslo_concurrency.lockutils [None req-9a130214-188b-4102-b551-dbb762cf8a77 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.596s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:31:48 compute-0 ceph-mon[74360]: pgmap v1236: 321 pgs: 321 active+clean; 127 MiB data, 440 MiB used, 21 GiB / 21 GiB avail; 75 KiB/s rd, 2.6 MiB/s wr, 111 op/s
Jan 20 14:31:48 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/1433866750' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:31:48 compute-0 nova_compute[250018]: 2026-01-20 14:31:48.482 250022 INFO nova.scheduler.client.report [None req-9a130214-188b-4102-b551-dbb762cf8a77 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Deleted allocations for instance ad62888a-ef27-43b4-bb6c-439541ff5524
Jan 20 14:31:48 compute-0 nova_compute[250018]: 2026-01-20 14:31:48.551 250022 DEBUG oslo_concurrency.lockutils [None req-9a130214-188b-4102-b551-dbb762cf8a77 6a3fbc3f92a849e88cbf34d28ca17e43 0cee74dd60da4a839bb5eb0ba3137edf - - default default] Lock "ad62888a-ef27-43b4-bb6c-439541ff5524" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.593s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:31:48 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:31:48 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:31:48 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:31:48.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:31:49 compute-0 nova_compute[250018]: 2026-01-20 14:31:49.226 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:31:49 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1237: 321 pgs: 321 active+clean; 134 MiB data, 449 MiB used, 21 GiB / 21 GiB avail; 183 KiB/s rd, 3.5 MiB/s wr, 116 op/s
Jan 20 14:31:49 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:31:49 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:31:49 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:31:49.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:31:50 compute-0 ceph-mon[74360]: pgmap v1237: 321 pgs: 321 active+clean; 134 MiB data, 449 MiB used, 21 GiB / 21 GiB avail; 183 KiB/s rd, 3.5 MiB/s wr, 116 op/s
Jan 20 14:31:50 compute-0 nova_compute[250018]: 2026-01-20 14:31:50.801 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:31:50 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:31:50 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:31:50 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:31:50.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:31:51 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1238: 321 pgs: 321 active+clean; 134 MiB data, 431 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 177 op/s
Jan 20 14:31:51 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:31:51 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:31:51 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:31:51 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:31:51.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:31:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:31:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:31:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:31:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:31:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Optimize plan auto_2026-01-20_14:31:52
Jan 20 14:31:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 14:31:52 compute-0 ceph-mgr[74653]: [balancer INFO root] do_upmap
Jan 20 14:31:52 compute-0 ceph-mgr[74653]: [balancer INFO root] pools ['.mgr', '.rgw.root', 'default.rgw.log', 'default.rgw.meta', 'backups', 'default.rgw.control', 'volumes', 'cephfs.cephfs.meta', 'vms', 'images', 'cephfs.cephfs.data']
Jan 20 14:31:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:31:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:31:52 compute-0 ceph-mgr[74653]: [balancer INFO root] prepared 0/10 changes
Jan 20 14:31:52 compute-0 ceph-mon[74360]: pgmap v1238: 321 pgs: 321 active+clean; 134 MiB data, 431 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 177 op/s
Jan 20 14:31:52 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:31:52 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:31:52 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:31:52.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:31:53 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1239: 321 pgs: 321 active+clean; 134 MiB data, 431 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 155 op/s
Jan 20 14:31:53 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/4014400721' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:31:53 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/4014400721' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:31:53 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/3938435753' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:31:53 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/715650878' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:31:53 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:31:53 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:31:53 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:31:53.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:31:54 compute-0 nova_compute[250018]: 2026-01-20 14:31:54.229 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:31:54 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 20 14:31:54 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4270292619' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:31:54 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 20 14:31:54 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4270292619' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:31:54 compute-0 ceph-mon[74360]: pgmap v1239: 321 pgs: 321 active+clean; 134 MiB data, 431 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 155 op/s
Jan 20 14:31:54 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/4270292619' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:31:54 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/4270292619' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:31:54 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:31:54 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:31:54 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:31:54.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:31:55 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1240: 321 pgs: 321 active+clean; 134 MiB data, 427 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 167 op/s
Jan 20 14:31:55 compute-0 nova_compute[250018]: 2026-01-20 14:31:55.850 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:31:55 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:31:55 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:31:55 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:31:55.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:31:56 compute-0 ceph-mon[74360]: pgmap v1240: 321 pgs: 321 active+clean; 134 MiB data, 427 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 167 op/s
Jan 20 14:31:56 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:31:56 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:31:56 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:31:56 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:31:56.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:31:57 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1241: 321 pgs: 321 active+clean; 134 MiB data, 424 MiB used, 21 GiB / 21 GiB avail; 2.8 MiB/s rd, 1.8 MiB/s wr, 144 op/s
Jan 20 14:31:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 14:31:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:31:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 14:31:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:31:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:31:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:31:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:31:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:31:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:31:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:31:57 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:31:57 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:31:57 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:31:57.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:31:58 compute-0 nova_compute[250018]: 2026-01-20 14:31:58.315 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:31:58 compute-0 nova_compute[250018]: 2026-01-20 14:31:58.510 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:31:58 compute-0 ceph-mon[74360]: pgmap v1241: 321 pgs: 321 active+clean; 134 MiB data, 424 MiB used, 21 GiB / 21 GiB avail; 2.8 MiB/s rd, 1.8 MiB/s wr, 144 op/s
Jan 20 14:31:58 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:31:58 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:31:58 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:31:58.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:31:59 compute-0 nova_compute[250018]: 2026-01-20 14:31:59.195 250022 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768919504.1934633, ad62888a-ef27-43b4-bb6c-439541ff5524 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:31:59 compute-0 nova_compute[250018]: 2026-01-20 14:31:59.196 250022 INFO nova.compute.manager [-] [instance: ad62888a-ef27-43b4-bb6c-439541ff5524] VM Stopped (Lifecycle Event)
Jan 20 14:31:59 compute-0 nova_compute[250018]: 2026-01-20 14:31:59.231 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:31:59 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1242: 321 pgs: 321 active+clean; 143 MiB data, 431 MiB used, 21 GiB / 21 GiB avail; 3.7 MiB/s rd, 1.6 MiB/s wr, 176 op/s
Jan 20 14:31:59 compute-0 nova_compute[250018]: 2026-01-20 14:31:59.501 250022 DEBUG nova.compute.manager [None req-92616712-edd7-436e-b093-3acea253c241 - - - - - -] [instance: ad62888a-ef27-43b4-bb6c-439541ff5524] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:31:59 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:31:59 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:31:59 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:31:59.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:32:00 compute-0 sshd-session[275450]: Invalid user admin from 157.245.78.139 port 47310
Jan 20 14:32:00 compute-0 sshd-session[275450]: Connection closed by invalid user admin 157.245.78.139 port 47310 [preauth]
Jan 20 14:32:00 compute-0 ceph-mon[74360]: pgmap v1242: 321 pgs: 321 active+clean; 143 MiB data, 431 MiB used, 21 GiB / 21 GiB avail; 3.7 MiB/s rd, 1.6 MiB/s wr, 176 op/s
Jan 20 14:32:00 compute-0 nova_compute[250018]: 2026-01-20 14:32:00.852 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:32:00 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:32:00 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:32:00 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:32:00.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:32:01 compute-0 sudo[275453]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:32:01 compute-0 sudo[275453]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:32:01 compute-0 sudo[275453]: pam_unix(sudo:session): session closed for user root
Jan 20 14:32:01 compute-0 sudo[275478]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:32:01 compute-0 sudo[275478]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:32:01 compute-0 sudo[275478]: pam_unix(sudo:session): session closed for user root
Jan 20 14:32:01 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1243: 321 pgs: 321 active+clean; 166 MiB data, 449 MiB used, 21 GiB / 21 GiB avail; 3.9 MiB/s rd, 2.1 MiB/s wr, 208 op/s
Jan 20 14:32:01 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:32:01.783 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=13, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=12) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:32:01 compute-0 nova_compute[250018]: 2026-01-20 14:32:01.783 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:32:01 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:32:01.784 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 14:32:01 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:32:01.784 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=367c1a2c-b16a-4828-ab5a-626bb50023b4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:32:01 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:32:01 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:32:01 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:32:01 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:32:01.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:32:02 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:32:02 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:32:02 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:32:02.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:32:03 compute-0 ceph-mon[74360]: pgmap v1243: 321 pgs: 321 active+clean; 166 MiB data, 449 MiB used, 21 GiB / 21 GiB avail; 3.9 MiB/s rd, 2.1 MiB/s wr, 208 op/s
Jan 20 14:32:03 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1244: 321 pgs: 321 active+clean; 166 MiB data, 449 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 147 op/s
Jan 20 14:32:03 compute-0 podman[275505]: 2026-01-20 14:32:03.468263215 +0000 UTC m=+0.054892267 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:32:03 compute-0 podman[275504]: 2026-01-20 14:32:03.496453403 +0000 UTC m=+0.083892667 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, 
org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 20 14:32:03 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:32:03 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:32:03 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:32:03.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:32:04 compute-0 nova_compute[250018]: 2026-01-20 14:32:04.234 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:32:04 compute-0 ceph-mon[74360]: pgmap v1244: 321 pgs: 321 active+clean; 166 MiB data, 449 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 147 op/s
Jan 20 14:32:04 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:32:04 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:32:04 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:32:04.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:32:05 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1245: 321 pgs: 321 active+clean; 122 MiB data, 420 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.2 MiB/s wr, 158 op/s
Jan 20 14:32:05 compute-0 nova_compute[250018]: 2026-01-20 14:32:05.854 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:32:05 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:32:05 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:32:05 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:32:05.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:32:06 compute-0 ceph-mon[74360]: pgmap v1245: 321 pgs: 321 active+clean; 122 MiB data, 420 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.2 MiB/s wr, 158 op/s
Jan 20 14:32:06 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:32:06 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:32:06 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:32:06 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:32:06.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:32:07 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1246: 321 pgs: 321 active+clean; 108 MiB data, 412 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.2 MiB/s wr, 160 op/s
Jan 20 14:32:07 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:32:07 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:32:07 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:32:07.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:32:08 compute-0 ceph-mon[74360]: pgmap v1246: 321 pgs: 321 active+clean; 108 MiB data, 412 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.2 MiB/s wr, 160 op/s
Jan 20 14:32:08 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:32:08 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:32:08 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:32:08.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:32:09 compute-0 nova_compute[250018]: 2026-01-20 14:32:09.236 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:32:09 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1247: 321 pgs: 321 active+clean; 113 MiB data, 405 MiB used, 21 GiB / 21 GiB avail; 1.4 MiB/s rd, 3.0 MiB/s wr, 156 op/s
Jan 20 14:32:09 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/1509807238' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:32:09 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/3429196109' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:32:09 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:32:09 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:32:09 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:32:09.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:32:10 compute-0 ceph-mon[74360]: pgmap v1247: 321 pgs: 321 active+clean; 113 MiB data, 405 MiB used, 21 GiB / 21 GiB avail; 1.4 MiB/s rd, 3.0 MiB/s wr, 156 op/s
Jan 20 14:32:10 compute-0 nova_compute[250018]: 2026-01-20 14:32:10.856 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:32:10 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:32:10 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:32:10 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:32:10.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:32:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 14:32:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:32:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 20 14:32:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:32:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0016050667108691607 of space, bias 1.0, pg target 0.4815200132607482 quantized to 32 (current 32)
Jan 20 14:32:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:32:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 20 14:32:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:32:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:32:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:32:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 20 14:32:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:32:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 20 14:32:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:32:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:32:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:32:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 20 14:32:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:32:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 20 14:32:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:32:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:32:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:32:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 20 14:32:11 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1248: 321 pgs: 321 active+clean; 167 MiB data, 459 MiB used, 21 GiB / 21 GiB avail; 2.4 MiB/s rd, 5.5 MiB/s wr, 180 op/s
Jan 20 14:32:11 compute-0 ceph-mon[74360]: pgmap v1248: 321 pgs: 321 active+clean; 167 MiB data, 459 MiB used, 21 GiB / 21 GiB avail; 2.4 MiB/s rd, 5.5 MiB/s wr, 180 op/s
Jan 20 14:32:11 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:32:11 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:32:11 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:32:11 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:32:11.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:32:12 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:32:12 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:32:12 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:32:12.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:32:13 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/2127417258' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:32:13 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1249: 321 pgs: 321 active+clean; 167 MiB data, 459 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 3.9 MiB/s wr, 138 op/s
Jan 20 14:32:13 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 20 14:32:13 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3261774672' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:32:13 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 20 14:32:13 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3261774672' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:32:13 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:32:13 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:32:13 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:32:13.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:32:14 compute-0 ceph-mon[74360]: pgmap v1249: 321 pgs: 321 active+clean; 167 MiB data, 459 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 3.9 MiB/s wr, 138 op/s
Jan 20 14:32:14 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/3261774672' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:32:14 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/3261774672' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:32:14 compute-0 nova_compute[250018]: 2026-01-20 14:32:14.239 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:32:14 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:32:14 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:32:14 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:32:14.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:32:15 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/651665474' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:32:15 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/1619627399' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:32:15 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1250: 321 pgs: 321 active+clean; 186 MiB data, 472 MiB used, 21 GiB / 21 GiB avail; 3.4 MiB/s rd, 4.8 MiB/s wr, 210 op/s
Jan 20 14:32:15 compute-0 nova_compute[250018]: 2026-01-20 14:32:15.859 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:32:15 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:32:15 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:32:15 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:32:15.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:32:16 compute-0 ceph-mon[74360]: pgmap v1250: 321 pgs: 321 active+clean; 186 MiB data, 472 MiB used, 21 GiB / 21 GiB avail; 3.4 MiB/s rd, 4.8 MiB/s wr, 210 op/s
Jan 20 14:32:16 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:32:16 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:32:16 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:32:16 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:32:16.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:32:17 compute-0 nova_compute[250018]: 2026-01-20 14:32:17.081 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:32:17 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1251: 321 pgs: 321 active+clean; 206 MiB data, 483 MiB used, 21 GiB / 21 GiB avail; 4.1 MiB/s rd, 5.6 MiB/s wr, 229 op/s
Jan 20 14:32:17 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:32:17 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:32:17 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:32:17.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:32:18 compute-0 ceph-mon[74360]: pgmap v1251: 321 pgs: 321 active+clean; 206 MiB data, 483 MiB used, 21 GiB / 21 GiB avail; 4.1 MiB/s rd, 5.6 MiB/s wr, 229 op/s
Jan 20 14:32:18 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:32:18 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:32:18 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:32:18.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:32:19 compute-0 nova_compute[250018]: 2026-01-20 14:32:19.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:32:19 compute-0 nova_compute[250018]: 2026-01-20 14:32:19.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:32:19 compute-0 nova_compute[250018]: 2026-01-20 14:32:19.076 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:32:19 compute-0 nova_compute[250018]: 2026-01-20 14:32:19.076 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:32:19 compute-0 nova_compute[250018]: 2026-01-20 14:32:19.077 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:32:19 compute-0 nova_compute[250018]: 2026-01-20 14:32:19.077 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 14:32:19 compute-0 nova_compute[250018]: 2026-01-20 14:32:19.077 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:32:19 compute-0 nova_compute[250018]: 2026-01-20 14:32:19.241 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:32:19 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1252: 321 pgs: 321 active+clean; 214 MiB data, 484 MiB used, 21 GiB / 21 GiB avail; 4.1 MiB/s rd, 5.7 MiB/s wr, 218 op/s
Jan 20 14:32:19 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:32:19 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2069181788' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:32:19 compute-0 nova_compute[250018]: 2026-01-20 14:32:19.509 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:32:19 compute-0 nova_compute[250018]: 2026-01-20 14:32:19.717 250022 WARNING nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:32:19 compute-0 nova_compute[250018]: 2026-01-20 14:32:19.718 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4671MB free_disk=20.901988983154297GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 14:32:19 compute-0 nova_compute[250018]: 2026-01-20 14:32:19.719 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:32:19 compute-0 nova_compute[250018]: 2026-01-20 14:32:19.719 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:32:19 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/2069181788' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:32:19 compute-0 nova_compute[250018]: 2026-01-20 14:32:19.798 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 14:32:19 compute-0 nova_compute[250018]: 2026-01-20 14:32:19.799 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 14:32:19 compute-0 nova_compute[250018]: 2026-01-20 14:32:19.826 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:32:19 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:32:19 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:32:19 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:32:19.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:32:20 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:32:20 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1619202486' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:32:20 compute-0 nova_compute[250018]: 2026-01-20 14:32:20.235 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.409s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:32:20 compute-0 nova_compute[250018]: 2026-01-20 14:32:20.240 250022 DEBUG nova.compute.provider_tree [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:32:20 compute-0 nova_compute[250018]: 2026-01-20 14:32:20.274 250022 DEBUG nova.scheduler.client.report [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:32:20 compute-0 nova_compute[250018]: 2026-01-20 14:32:20.296 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 14:32:20 compute-0 nova_compute[250018]: 2026-01-20 14:32:20.297 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.578s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:32:20 compute-0 ceph-mon[74360]: pgmap v1252: 321 pgs: 321 active+clean; 214 MiB data, 484 MiB used, 21 GiB / 21 GiB avail; 4.1 MiB/s rd, 5.7 MiB/s wr, 218 op/s
Jan 20 14:32:20 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/1619202486' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:32:20 compute-0 nova_compute[250018]: 2026-01-20 14:32:20.860 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:32:20 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:32:20 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:32:20 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:32:20.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:32:21 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1253: 321 pgs: 321 active+clean; 214 MiB data, 488 MiB used, 21 GiB / 21 GiB avail; 5.9 MiB/s rd, 4.8 MiB/s wr, 244 op/s
Jan 20 14:32:21 compute-0 sudo[275600]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:32:21 compute-0 sudo[275600]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:32:21 compute-0 sudo[275600]: pam_unix(sudo:session): session closed for user root
Jan 20 14:32:21 compute-0 sudo[275625]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:32:21 compute-0 sudo[275625]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:32:21 compute-0 sudo[275625]: pam_unix(sudo:session): session closed for user root
Jan 20 14:32:21 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/1361981238' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:32:21 compute-0 ceph-mon[74360]: pgmap v1253: 321 pgs: 321 active+clean; 214 MiB data, 488 MiB used, 21 GiB / 21 GiB avail; 5.9 MiB/s rd, 4.8 MiB/s wr, 244 op/s
Jan 20 14:32:21 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:32:21 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:32:21 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:32:21 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:32:21.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:32:22 compute-0 nova_compute[250018]: 2026-01-20 14:32:22.299 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:32:22 compute-0 nova_compute[250018]: 2026-01-20 14:32:22.300 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:32:22 compute-0 nova_compute[250018]: 2026-01-20 14:32:22.300 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:32:22 compute-0 nova_compute[250018]: 2026-01-20 14:32:22.300 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 14:32:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:32:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:32:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:32:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:32:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:32:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:32:22 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/3324155321' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:32:22 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/2670042754' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:32:22 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:32:22 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:32:22 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:32:22.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:32:23 compute-0 nova_compute[250018]: 2026-01-20 14:32:23.052 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:32:23 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1254: 321 pgs: 321 active+clean; 214 MiB data, 488 MiB used, 21 GiB / 21 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 172 op/s
Jan 20 14:32:23 compute-0 ceph-mon[74360]: pgmap v1254: 321 pgs: 321 active+clean; 214 MiB data, 488 MiB used, 21 GiB / 21 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 172 op/s
Jan 20 14:32:23 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:32:23 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:32:23 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:32:23.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:32:24 compute-0 nova_compute[250018]: 2026-01-20 14:32:24.052 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:32:24 compute-0 nova_compute[250018]: 2026-01-20 14:32:24.244 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:32:24 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/2967665820' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:32:25 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:32:25 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:32:25 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:32:25.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:32:25 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1255: 321 pgs: 321 active+clean; 210 MiB data, 490 MiB used, 21 GiB / 21 GiB avail; 4.0 MiB/s rd, 2.9 MiB/s wr, 219 op/s
Jan 20 14:32:25 compute-0 nova_compute[250018]: 2026-01-20 14:32:25.862 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:32:25 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/157752220' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:32:25 compute-0 ceph-mon[74360]: pgmap v1255: 321 pgs: 321 active+clean; 210 MiB data, 490 MiB used, 21 GiB / 21 GiB avail; 4.0 MiB/s rd, 2.9 MiB/s wr, 219 op/s
Jan 20 14:32:26 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:32:26 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:32:26 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:32:25.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:32:26 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:32:27 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:32:27 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:32:27 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:32:27.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:32:27 compute-0 nova_compute[250018]: 2026-01-20 14:32:27.046 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:32:27 compute-0 nova_compute[250018]: 2026-01-20 14:32:27.078 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:32:27 compute-0 nova_compute[250018]: 2026-01-20 14:32:27.079 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 14:32:27 compute-0 nova_compute[250018]: 2026-01-20 14:32:27.079 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 14:32:27 compute-0 nova_compute[250018]: 2026-01-20 14:32:27.132 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 20 14:32:27 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1256: 321 pgs: 321 active+clean; 200 MiB data, 492 MiB used, 21 GiB / 21 GiB avail; 2.9 MiB/s rd, 3.1 MiB/s wr, 188 op/s
Jan 20 14:32:28 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:32:28 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:32:28 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:32:28.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:32:28 compute-0 ceph-mon[74360]: pgmap v1256: 321 pgs: 321 active+clean; 200 MiB data, 492 MiB used, 21 GiB / 21 GiB avail; 2.9 MiB/s rd, 3.1 MiB/s wr, 188 op/s
Jan 20 14:32:29 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:32:29 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:32:29 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:32:29.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:32:29 compute-0 nova_compute[250018]: 2026-01-20 14:32:29.247 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:32:29 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1257: 321 pgs: 321 active+clean; 140 MiB data, 492 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 176 op/s
Jan 20 14:32:30 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:32:30 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:32:30 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:32:30.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:32:30 compute-0 ceph-mon[74360]: pgmap v1257: 321 pgs: 321 active+clean; 140 MiB data, 492 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 176 op/s
Jan 20 14:32:30 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/4060471299' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:32:30 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/115828176' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:32:30 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/830453568' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:32:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:32:30.746 160071 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:32:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:32:30.747 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:32:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:32:30.747 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:32:30 compute-0 nova_compute[250018]: 2026-01-20 14:32:30.863 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:32:31 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:32:31 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:32:31 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:32:31.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:32:31 compute-0 sudo[275655]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:32:31 compute-0 sudo[275655]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:32:31 compute-0 sudo[275655]: pam_unix(sudo:session): session closed for user root
Jan 20 14:32:31 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1258: 321 pgs: 321 active+clean; 80 MiB data, 451 MiB used, 21 GiB / 21 GiB avail; 2.2 MiB/s rd, 3.8 MiB/s wr, 239 op/s
Jan 20 14:32:31 compute-0 sudo[275680]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:32:31 compute-0 sudo[275680]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:32:31 compute-0 sudo[275680]: pam_unix(sudo:session): session closed for user root
Jan 20 14:32:31 compute-0 sudo[275705]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:32:31 compute-0 sudo[275705]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:32:31 compute-0 sudo[275705]: pam_unix(sudo:session): session closed for user root
Jan 20 14:32:31 compute-0 sudo[275730]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 14:32:31 compute-0 sudo[275730]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:32:31 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:32:32 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:32:32 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:32:32 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:32:32.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:32:32 compute-0 sudo[275730]: pam_unix(sudo:session): session closed for user root
Jan 20 14:32:32 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 14:32:32 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:32:32 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 20 14:32:32 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:32:32 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 20 14:32:32 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:32:32 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev f2f99117-a2dc-4090-b186-0424976d4189 does not exist
Jan 20 14:32:32 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 8f44a210-e805-4500-8b3d-9c61a04732d4 does not exist
Jan 20 14:32:32 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 91ceace7-6100-4fad-8fb8-c306ec32348e does not exist
Jan 20 14:32:32 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 20 14:32:32 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:32:32 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 20 14:32:32 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:32:32 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 14:32:32 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:32:32 compute-0 sudo[275786]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:32:32 compute-0 sudo[275786]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:32:32 compute-0 sudo[275786]: pam_unix(sudo:session): session closed for user root
Jan 20 14:32:32 compute-0 sudo[275811]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:32:32 compute-0 sudo[275811]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:32:32 compute-0 sudo[275811]: pam_unix(sudo:session): session closed for user root
Jan 20 14:32:32 compute-0 sudo[275836]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:32:32 compute-0 sudo[275836]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:32:32 compute-0 sudo[275836]: pam_unix(sudo:session): session closed for user root
Jan 20 14:32:32 compute-0 sudo[275861]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 14:32:32 compute-0 sudo[275861]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:32:32 compute-0 ceph-mon[74360]: pgmap v1258: 321 pgs: 321 active+clean; 80 MiB data, 451 MiB used, 21 GiB / 21 GiB avail; 2.2 MiB/s rd, 3.8 MiB/s wr, 239 op/s
Jan 20 14:32:32 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:32:32 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:32:32 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:32:32 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:32:32 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:32:32 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:32:32 compute-0 podman[275926]: 2026-01-20 14:32:32.76448263 +0000 UTC m=+0.042793803 container create 134ab22e67b5d5509aeb362164519898fb6e197ba5f1e5fb827dac108e4bef43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_northcutt, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 14:32:32 compute-0 systemd[1]: Started libpod-conmon-134ab22e67b5d5509aeb362164519898fb6e197ba5f1e5fb827dac108e4bef43.scope.
Jan 20 14:32:32 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:32:32 compute-0 podman[275926]: 2026-01-20 14:32:32.821367539 +0000 UTC m=+0.099678702 container init 134ab22e67b5d5509aeb362164519898fb6e197ba5f1e5fb827dac108e4bef43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_northcutt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:32:32 compute-0 podman[275926]: 2026-01-20 14:32:32.827672269 +0000 UTC m=+0.105983432 container start 134ab22e67b5d5509aeb362164519898fb6e197ba5f1e5fb827dac108e4bef43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_northcutt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:32:32 compute-0 podman[275926]: 2026-01-20 14:32:32.830395852 +0000 UTC m=+0.108707035 container attach 134ab22e67b5d5509aeb362164519898fb6e197ba5f1e5fb827dac108e4bef43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_northcutt, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 20 14:32:32 compute-0 upbeat_northcutt[275944]: 167 167
Jan 20 14:32:32 compute-0 systemd[1]: libpod-134ab22e67b5d5509aeb362164519898fb6e197ba5f1e5fb827dac108e4bef43.scope: Deactivated successfully.
Jan 20 14:32:32 compute-0 conmon[275944]: conmon 134ab22e67b5d5509aeb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-134ab22e67b5d5509aeb362164519898fb6e197ba5f1e5fb827dac108e4bef43.scope/container/memory.events
Jan 20 14:32:32 compute-0 podman[275926]: 2026-01-20 14:32:32.833788343 +0000 UTC m=+0.112099546 container died 134ab22e67b5d5509aeb362164519898fb6e197ba5f1e5fb827dac108e4bef43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_northcutt, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:32:32 compute-0 podman[275926]: 2026-01-20 14:32:32.742678693 +0000 UTC m=+0.020989866 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:32:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-54dbb1b9502752d8400571d9952d0cbb1cdd04fbcef1c00c26412b31fb51a601-merged.mount: Deactivated successfully.
Jan 20 14:32:32 compute-0 podman[275926]: 2026-01-20 14:32:32.869106642 +0000 UTC m=+0.147417805 container remove 134ab22e67b5d5509aeb362164519898fb6e197ba5f1e5fb827dac108e4bef43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_northcutt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:32:32 compute-0 systemd[1]: libpod-conmon-134ab22e67b5d5509aeb362164519898fb6e197ba5f1e5fb827dac108e4bef43.scope: Deactivated successfully.
Jan 20 14:32:33 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:32:33 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:32:33 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:32:33.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:32:33 compute-0 podman[275966]: 2026-01-20 14:32:33.034246143 +0000 UTC m=+0.038916386 container create 2effe46ec90a705b94a58c1d557c6a6decf3b5e2a6d62fd890520a440b198fcc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_noether, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 20 14:32:33 compute-0 systemd[1]: Started libpod-conmon-2effe46ec90a705b94a58c1d557c6a6decf3b5e2a6d62fd890520a440b198fcc.scope.
Jan 20 14:32:33 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:32:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de929345d931836090b9ce80335fd2510f6ad37be5f183425394efaaf628623b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:32:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de929345d931836090b9ce80335fd2510f6ad37be5f183425394efaaf628623b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:32:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de929345d931836090b9ce80335fd2510f6ad37be5f183425394efaaf628623b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:32:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de929345d931836090b9ce80335fd2510f6ad37be5f183425394efaaf628623b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:32:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de929345d931836090b9ce80335fd2510f6ad37be5f183425394efaaf628623b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 14:32:33 compute-0 podman[275966]: 2026-01-20 14:32:33.017211995 +0000 UTC m=+0.021882248 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:32:33 compute-0 podman[275966]: 2026-01-20 14:32:33.116543537 +0000 UTC m=+0.121213830 container init 2effe46ec90a705b94a58c1d557c6a6decf3b5e2a6d62fd890520a440b198fcc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_noether, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 20 14:32:33 compute-0 podman[275966]: 2026-01-20 14:32:33.129060733 +0000 UTC m=+0.133730976 container start 2effe46ec90a705b94a58c1d557c6a6decf3b5e2a6d62fd890520a440b198fcc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_noether, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:32:33 compute-0 podman[275966]: 2026-01-20 14:32:33.132759893 +0000 UTC m=+0.137430186 container attach 2effe46ec90a705b94a58c1d557c6a6decf3b5e2a6d62fd890520a440b198fcc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_noether, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 20 14:32:33 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1259: 321 pgs: 321 active+clean; 80 MiB data, 451 MiB used, 21 GiB / 21 GiB avail; 394 KiB/s rd, 3.8 MiB/s wr, 172 op/s
Jan 20 14:32:33 compute-0 confident_noether[275982]: --> passed data devices: 0 physical, 1 LVM
Jan 20 14:32:33 compute-0 confident_noether[275982]: --> relative data size: 1.0
Jan 20 14:32:33 compute-0 confident_noether[275982]: --> All data devices are unavailable
Jan 20 14:32:33 compute-0 systemd[1]: libpod-2effe46ec90a705b94a58c1d557c6a6decf3b5e2a6d62fd890520a440b198fcc.scope: Deactivated successfully.
Jan 20 14:32:33 compute-0 podman[275966]: 2026-01-20 14:32:33.894511148 +0000 UTC m=+0.899181391 container died 2effe46ec90a705b94a58c1d557c6a6decf3b5e2a6d62fd890520a440b198fcc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_noether, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 20 14:32:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-de929345d931836090b9ce80335fd2510f6ad37be5f183425394efaaf628623b-merged.mount: Deactivated successfully.
Jan 20 14:32:33 compute-0 podman[275966]: 2026-01-20 14:32:33.954674366 +0000 UTC m=+0.959344609 container remove 2effe46ec90a705b94a58c1d557c6a6decf3b5e2a6d62fd890520a440b198fcc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_noether, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 20 14:32:33 compute-0 systemd[1]: libpod-conmon-2effe46ec90a705b94a58c1d557c6a6decf3b5e2a6d62fd890520a440b198fcc.scope: Deactivated successfully.
Jan 20 14:32:33 compute-0 sudo[275861]: pam_unix(sudo:session): session closed for user root
Jan 20 14:32:33 compute-0 podman[276005]: 2026-01-20 14:32:33.992087251 +0000 UTC m=+0.070946518 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Jan 20 14:32:34 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:32:34 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:32:34 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:32:34.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:32:34 compute-0 podman[275998]: 2026-01-20 14:32:34.03588905 +0000 UTC m=+0.115204240 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_id=ovn_controller)
Jan 20 14:32:34 compute-0 sudo[276049]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:32:34 compute-0 sudo[276049]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:32:34 compute-0 sudo[276049]: pam_unix(sudo:session): session closed for user root
Jan 20 14:32:34 compute-0 sudo[276078]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:32:34 compute-0 sudo[276078]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:32:34 compute-0 sudo[276078]: pam_unix(sudo:session): session closed for user root
Jan 20 14:32:34 compute-0 sudo[276103]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:32:34 compute-0 sudo[276103]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:32:34 compute-0 sudo[276103]: pam_unix(sudo:session): session closed for user root
Jan 20 14:32:34 compute-0 sudo[276128]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- lvm list --format json
Jan 20 14:32:34 compute-0 sudo[276128]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:32:34 compute-0 nova_compute[250018]: 2026-01-20 14:32:34.248 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:32:34 compute-0 podman[276194]: 2026-01-20 14:32:34.499255601 +0000 UTC m=+0.035672731 container create 9902499c816b34a4519ed46282600e5c7477a8d369a7170c8ee4c40f797e2aeb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_perlman, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:32:34 compute-0 systemd[1]: Started libpod-conmon-9902499c816b34a4519ed46282600e5c7477a8d369a7170c8ee4c40f797e2aeb.scope.
Jan 20 14:32:34 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:32:34 compute-0 podman[276194]: 2026-01-20 14:32:34.570357962 +0000 UTC m=+0.106775122 container init 9902499c816b34a4519ed46282600e5c7477a8d369a7170c8ee4c40f797e2aeb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_perlman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:32:34 compute-0 podman[276194]: 2026-01-20 14:32:34.577470414 +0000 UTC m=+0.113887544 container start 9902499c816b34a4519ed46282600e5c7477a8d369a7170c8ee4c40f797e2aeb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_perlman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 20 14:32:34 compute-0 podman[276194]: 2026-01-20 14:32:34.484945695 +0000 UTC m=+0.021362845 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:32:34 compute-0 podman[276194]: 2026-01-20 14:32:34.581536583 +0000 UTC m=+0.117953733 container attach 9902499c816b34a4519ed46282600e5c7477a8d369a7170c8ee4c40f797e2aeb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_perlman, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 20 14:32:34 compute-0 fervent_perlman[276211]: 167 167
Jan 20 14:32:34 compute-0 systemd[1]: libpod-9902499c816b34a4519ed46282600e5c7477a8d369a7170c8ee4c40f797e2aeb.scope: Deactivated successfully.
Jan 20 14:32:34 compute-0 podman[276194]: 2026-01-20 14:32:34.583919867 +0000 UTC m=+0.120336997 container died 9902499c816b34a4519ed46282600e5c7477a8d369a7170c8ee4c40f797e2aeb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_perlman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:32:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-a5c6d1a2834f4bc031d8692fb82819bc8c1a6fce61fcb54c40c565a25bd7020d-merged.mount: Deactivated successfully.
Jan 20 14:32:34 compute-0 podman[276194]: 2026-01-20 14:32:34.620231884 +0000 UTC m=+0.156649014 container remove 9902499c816b34a4519ed46282600e5c7477a8d369a7170c8ee4c40f797e2aeb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_perlman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:32:34 compute-0 systemd[1]: libpod-conmon-9902499c816b34a4519ed46282600e5c7477a8d369a7170c8ee4c40f797e2aeb.scope: Deactivated successfully.
Jan 20 14:32:34 compute-0 ceph-mon[74360]: pgmap v1259: 321 pgs: 321 active+clean; 80 MiB data, 451 MiB used, 21 GiB / 21 GiB avail; 394 KiB/s rd, 3.8 MiB/s wr, 172 op/s
Jan 20 14:32:34 compute-0 podman[276235]: 2026-01-20 14:32:34.787839732 +0000 UTC m=+0.045036652 container create beb421e7612f3cebb55d590e2af9351d2747dd19d1d65476693869046f765052 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_lederberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 20 14:32:34 compute-0 systemd[1]: Started libpod-conmon-beb421e7612f3cebb55d590e2af9351d2747dd19d1d65476693869046f765052.scope.
Jan 20 14:32:34 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:32:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c82385e936ef6a65fe79a0dc1302305dec31879b26bb9a72d3de1e1763e408df/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:32:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c82385e936ef6a65fe79a0dc1302305dec31879b26bb9a72d3de1e1763e408df/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:32:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c82385e936ef6a65fe79a0dc1302305dec31879b26bb9a72d3de1e1763e408df/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:32:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c82385e936ef6a65fe79a0dc1302305dec31879b26bb9a72d3de1e1763e408df/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:32:34 compute-0 podman[276235]: 2026-01-20 14:32:34.77031759 +0000 UTC m=+0.027514530 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:32:34 compute-0 podman[276235]: 2026-01-20 14:32:34.868737127 +0000 UTC m=+0.125934057 container init beb421e7612f3cebb55d590e2af9351d2747dd19d1d65476693869046f765052 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_lederberg, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 20 14:32:34 compute-0 podman[276235]: 2026-01-20 14:32:34.876728151 +0000 UTC m=+0.133925071 container start beb421e7612f3cebb55d590e2af9351d2747dd19d1d65476693869046f765052 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_lederberg, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 20 14:32:34 compute-0 podman[276235]: 2026-01-20 14:32:34.880614826 +0000 UTC m=+0.137811776 container attach beb421e7612f3cebb55d590e2af9351d2747dd19d1d65476693869046f765052 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_lederberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 20 14:32:35 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:32:35 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:32:35 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:32:35.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:32:35 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1260: 321 pgs: 321 active+clean; 88 MiB data, 420 MiB used, 21 GiB / 21 GiB avail; 1020 KiB/s rd, 3.9 MiB/s wr, 202 op/s
Jan 20 14:32:35 compute-0 gallant_lederberg[276253]: {
Jan 20 14:32:35 compute-0 gallant_lederberg[276253]:     "0": [
Jan 20 14:32:35 compute-0 gallant_lederberg[276253]:         {
Jan 20 14:32:35 compute-0 gallant_lederberg[276253]:             "devices": [
Jan 20 14:32:35 compute-0 gallant_lederberg[276253]:                 "/dev/loop3"
Jan 20 14:32:35 compute-0 gallant_lederberg[276253]:             ],
Jan 20 14:32:35 compute-0 gallant_lederberg[276253]:             "lv_name": "ceph_lv0",
Jan 20 14:32:35 compute-0 gallant_lederberg[276253]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:32:35 compute-0 gallant_lederberg[276253]:             "lv_size": "7511998464",
Jan 20 14:32:35 compute-0 gallant_lederberg[276253]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e399cf45-e6b6-5393-99f1-75c601d3f188,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afc3740a-ccee-46f8-b343-22648cc89376,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 20 14:32:35 compute-0 gallant_lederberg[276253]:             "lv_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 14:32:35 compute-0 gallant_lederberg[276253]:             "name": "ceph_lv0",
Jan 20 14:32:35 compute-0 gallant_lederberg[276253]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:32:35 compute-0 gallant_lederberg[276253]:             "tags": {
Jan 20 14:32:35 compute-0 gallant_lederberg[276253]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:32:35 compute-0 gallant_lederberg[276253]:                 "ceph.block_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 14:32:35 compute-0 gallant_lederberg[276253]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 14:32:35 compute-0 gallant_lederberg[276253]:                 "ceph.cluster_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 14:32:35 compute-0 gallant_lederberg[276253]:                 "ceph.cluster_name": "ceph",
Jan 20 14:32:35 compute-0 gallant_lederberg[276253]:                 "ceph.crush_device_class": "",
Jan 20 14:32:35 compute-0 gallant_lederberg[276253]:                 "ceph.encrypted": "0",
Jan 20 14:32:35 compute-0 gallant_lederberg[276253]:                 "ceph.osd_fsid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 14:32:35 compute-0 gallant_lederberg[276253]:                 "ceph.osd_id": "0",
Jan 20 14:32:35 compute-0 gallant_lederberg[276253]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 14:32:35 compute-0 gallant_lederberg[276253]:                 "ceph.type": "block",
Jan 20 14:32:35 compute-0 gallant_lederberg[276253]:                 "ceph.vdo": "0"
Jan 20 14:32:35 compute-0 gallant_lederberg[276253]:             },
Jan 20 14:32:35 compute-0 gallant_lederberg[276253]:             "type": "block",
Jan 20 14:32:35 compute-0 gallant_lederberg[276253]:             "vg_name": "ceph_vg0"
Jan 20 14:32:35 compute-0 gallant_lederberg[276253]:         }
Jan 20 14:32:35 compute-0 gallant_lederberg[276253]:     ]
Jan 20 14:32:35 compute-0 gallant_lederberg[276253]: }
Jan 20 14:32:35 compute-0 systemd[1]: libpod-beb421e7612f3cebb55d590e2af9351d2747dd19d1d65476693869046f765052.scope: Deactivated successfully.
Jan 20 14:32:35 compute-0 podman[276235]: 2026-01-20 14:32:35.630227544 +0000 UTC m=+0.887424484 container died beb421e7612f3cebb55d590e2af9351d2747dd19d1d65476693869046f765052 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_lederberg, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 20 14:32:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-c82385e936ef6a65fe79a0dc1302305dec31879b26bb9a72d3de1e1763e408df-merged.mount: Deactivated successfully.
Jan 20 14:32:35 compute-0 podman[276235]: 2026-01-20 14:32:35.688258925 +0000 UTC m=+0.945455845 container remove beb421e7612f3cebb55d590e2af9351d2747dd19d1d65476693869046f765052 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_lederberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:32:35 compute-0 systemd[1]: libpod-conmon-beb421e7612f3cebb55d590e2af9351d2747dd19d1d65476693869046f765052.scope: Deactivated successfully.
Jan 20 14:32:35 compute-0 sudo[276128]: pam_unix(sudo:session): session closed for user root
Jan 20 14:32:35 compute-0 sudo[276274]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:32:35 compute-0 sudo[276274]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:32:35 compute-0 sudo[276274]: pam_unix(sudo:session): session closed for user root
Jan 20 14:32:35 compute-0 sudo[276299]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:32:35 compute-0 sudo[276299]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:32:35 compute-0 sudo[276299]: pam_unix(sudo:session): session closed for user root
Jan 20 14:32:35 compute-0 nova_compute[250018]: 2026-01-20 14:32:35.866 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:32:35 compute-0 sudo[276324]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:32:35 compute-0 sudo[276324]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:32:35 compute-0 sudo[276324]: pam_unix(sudo:session): session closed for user root
Jan 20 14:32:35 compute-0 sudo[276349]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- raw list --format json
Jan 20 14:32:35 compute-0 sudo[276349]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:32:36 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:32:36 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:32:36 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:32:36.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:32:36 compute-0 podman[276412]: 2026-01-20 14:32:36.248739578 +0000 UTC m=+0.038143007 container create a491d78d0b7f731d4b1605ff9cb3fa4145f5264811e7b69c6ff04b6bea474611 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_hermann, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True)
Jan 20 14:32:36 compute-0 systemd[1]: Started libpod-conmon-a491d78d0b7f731d4b1605ff9cb3fa4145f5264811e7b69c6ff04b6bea474611.scope.
Jan 20 14:32:36 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:32:36 compute-0 podman[276412]: 2026-01-20 14:32:36.317780025 +0000 UTC m=+0.107183464 container init a491d78d0b7f731d4b1605ff9cb3fa4145f5264811e7b69c6ff04b6bea474611 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_hermann, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:32:36 compute-0 podman[276412]: 2026-01-20 14:32:36.324160026 +0000 UTC m=+0.113563455 container start a491d78d0b7f731d4b1605ff9cb3fa4145f5264811e7b69c6ff04b6bea474611 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_hermann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:32:36 compute-0 hardcore_hermann[276428]: 167 167
Jan 20 14:32:36 compute-0 systemd[1]: libpod-a491d78d0b7f731d4b1605ff9cb3fa4145f5264811e7b69c6ff04b6bea474611.scope: Deactivated successfully.
Jan 20 14:32:36 compute-0 podman[276412]: 2026-01-20 14:32:36.328192325 +0000 UTC m=+0.117595754 container attach a491d78d0b7f731d4b1605ff9cb3fa4145f5264811e7b69c6ff04b6bea474611 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_hermann, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef)
Jan 20 14:32:36 compute-0 podman[276412]: 2026-01-20 14:32:36.233185629 +0000 UTC m=+0.022589068 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:32:36 compute-0 podman[276412]: 2026-01-20 14:32:36.328913033 +0000 UTC m=+0.118316482 container died a491d78d0b7f731d4b1605ff9cb3fa4145f5264811e7b69c6ff04b6bea474611 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_hermann, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 14:32:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-e59927bcfe3a35f1fe2f0db125763cef712723fa4973ff26261e6af7fcca1841-merged.mount: Deactivated successfully.
Jan 20 14:32:36 compute-0 podman[276412]: 2026-01-20 14:32:36.362561389 +0000 UTC m=+0.151964828 container remove a491d78d0b7f731d4b1605ff9cb3fa4145f5264811e7b69c6ff04b6bea474611 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_hermann, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:32:36 compute-0 systemd[1]: libpod-conmon-a491d78d0b7f731d4b1605ff9cb3fa4145f5264811e7b69c6ff04b6bea474611.scope: Deactivated successfully.
Jan 20 14:32:36 compute-0 podman[276451]: 2026-01-20 14:32:36.532669673 +0000 UTC m=+0.047923059 container create 58b7f29ad011cab3043a3a78734f01e91af470062524667032fa8a59aa5ed62b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_brown, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 20 14:32:36 compute-0 systemd[1]: Started libpod-conmon-58b7f29ad011cab3043a3a78734f01e91af470062524667032fa8a59aa5ed62b.scope.
Jan 20 14:32:36 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:32:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7720c4ccc62f4a7ad016298cd83fe9e60807ba3f749ba01af924140fe59d825/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:32:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7720c4ccc62f4a7ad016298cd83fe9e60807ba3f749ba01af924140fe59d825/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:32:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7720c4ccc62f4a7ad016298cd83fe9e60807ba3f749ba01af924140fe59d825/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:32:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7720c4ccc62f4a7ad016298cd83fe9e60807ba3f749ba01af924140fe59d825/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:32:36 compute-0 podman[276451]: 2026-01-20 14:32:36.512170182 +0000 UTC m=+0.027423598 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:32:36 compute-0 podman[276451]: 2026-01-20 14:32:36.610064094 +0000 UTC m=+0.125317500 container init 58b7f29ad011cab3043a3a78734f01e91af470062524667032fa8a59aa5ed62b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_brown, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 20 14:32:36 compute-0 podman[276451]: 2026-01-20 14:32:36.617096653 +0000 UTC m=+0.132350039 container start 58b7f29ad011cab3043a3a78734f01e91af470062524667032fa8a59aa5ed62b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_brown, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Jan 20 14:32:36 compute-0 podman[276451]: 2026-01-20 14:32:36.620532866 +0000 UTC m=+0.135786252 container attach 58b7f29ad011cab3043a3a78734f01e91af470062524667032fa8a59aa5ed62b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_brown, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:32:36 compute-0 ceph-mon[74360]: pgmap v1260: 321 pgs: 321 active+clean; 88 MiB data, 420 MiB used, 21 GiB / 21 GiB avail; 1020 KiB/s rd, 3.9 MiB/s wr, 202 op/s
Jan 20 14:32:36 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:32:37 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:32:37 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:32:37 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:32:37.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:32:37 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1261: 321 pgs: 321 active+clean; 62 MiB data, 420 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.8 MiB/s wr, 204 op/s
Jan 20 14:32:37 compute-0 charming_brown[276468]: {
Jan 20 14:32:37 compute-0 charming_brown[276468]:     "afc3740a-ccee-46f8-b343-22648cc89376": {
Jan 20 14:32:37 compute-0 charming_brown[276468]:         "ceph_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 14:32:37 compute-0 charming_brown[276468]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 20 14:32:37 compute-0 charming_brown[276468]:         "osd_id": 0,
Jan 20 14:32:37 compute-0 charming_brown[276468]:         "osd_uuid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 14:32:37 compute-0 charming_brown[276468]:         "type": "bluestore"
Jan 20 14:32:37 compute-0 charming_brown[276468]:     }
Jan 20 14:32:37 compute-0 charming_brown[276468]: }
Jan 20 14:32:37 compute-0 systemd[1]: libpod-58b7f29ad011cab3043a3a78734f01e91af470062524667032fa8a59aa5ed62b.scope: Deactivated successfully.
Jan 20 14:32:37 compute-0 podman[276490]: 2026-01-20 14:32:37.503860611 +0000 UTC m=+0.023544584 container died 58b7f29ad011cab3043a3a78734f01e91af470062524667032fa8a59aa5ed62b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_brown, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:32:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-f7720c4ccc62f4a7ad016298cd83fe9e60807ba3f749ba01af924140fe59d825-merged.mount: Deactivated successfully.
Jan 20 14:32:37 compute-0 podman[276490]: 2026-01-20 14:32:37.548720187 +0000 UTC m=+0.068404160 container remove 58b7f29ad011cab3043a3a78734f01e91af470062524667032fa8a59aa5ed62b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_brown, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 20 14:32:37 compute-0 systemd[1]: libpod-conmon-58b7f29ad011cab3043a3a78734f01e91af470062524667032fa8a59aa5ed62b.scope: Deactivated successfully.
Jan 20 14:32:37 compute-0 sudo[276349]: pam_unix(sudo:session): session closed for user root
Jan 20 14:32:37 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 14:32:37 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:32:37 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 14:32:37 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:32:37 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 22f75f87-95d0-4b6f-ae9f-624a619281ed does not exist
Jan 20 14:32:37 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev b02bf8f7-b63f-4ec0-8e96-89539f2069a8 does not exist
Jan 20 14:32:37 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 4793d259-2527-4b39-b5a0-1ea8aa4af64e does not exist
Jan 20 14:32:37 compute-0 sudo[276505]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:32:37 compute-0 sudo[276505]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:32:37 compute-0 sudo[276505]: pam_unix(sudo:session): session closed for user root
Jan 20 14:32:37 compute-0 sudo[276530]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 14:32:37 compute-0 sudo[276530]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:32:37 compute-0 sudo[276530]: pam_unix(sudo:session): session closed for user root
Jan 20 14:32:37 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/4194897972' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:32:37 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/3125161546' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:32:37 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:32:37 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:32:38 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:32:38 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:32:38 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:32:38.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:32:38 compute-0 ceph-mon[74360]: pgmap v1261: 321 pgs: 321 active+clean; 62 MiB data, 420 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.8 MiB/s wr, 204 op/s
Jan 20 14:32:39 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:32:39 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:32:39 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:32:39.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:32:39 compute-0 nova_compute[250018]: 2026-01-20 14:32:39.250 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:32:39 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1262: 321 pgs: 321 active+clean; 70 MiB data, 425 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.4 MiB/s wr, 185 op/s
Jan 20 14:32:40 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:32:40 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:32:40 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:32:40.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:32:40 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/151885523' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:32:40 compute-0 ceph-mon[74360]: pgmap v1262: 321 pgs: 321 active+clean; 70 MiB data, 425 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.4 MiB/s wr, 185 op/s
Jan 20 14:32:40 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/4006511380' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:32:40 compute-0 nova_compute[250018]: 2026-01-20 14:32:40.868 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:32:41 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:32:41 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:32:41 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:32:41.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:32:41 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1263: 321 pgs: 321 active+clean; 88 MiB data, 437 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 196 op/s
Jan 20 14:32:41 compute-0 ceph-mon[74360]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #63. Immutable memtables: 0.
Jan 20 14:32:41 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:32:41.575935) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 14:32:41 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:856] [default] [JOB 33] Flushing memtable with next log file: 63
Jan 20 14:32:41 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768919561576072, "job": 33, "event": "flush_started", "num_memtables": 1, "num_entries": 2269, "num_deletes": 505, "total_data_size": 3379030, "memory_usage": 3441560, "flush_reason": "Manual Compaction"}
Jan 20 14:32:41 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:885] [default] [JOB 33] Level-0 flush table #64: started
Jan 20 14:32:41 compute-0 sudo[276557]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:32:41 compute-0 sudo[276557]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:32:41 compute-0 sudo[276557]: pam_unix(sudo:session): session closed for user root
Jan 20 14:32:41 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768919561621368, "cf_name": "default", "job": 33, "event": "table_file_creation", "file_number": 64, "file_size": 2936030, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 26898, "largest_seqno": 29166, "table_properties": {"data_size": 2926940, "index_size": 5008, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3013, "raw_key_size": 24239, "raw_average_key_size": 20, "raw_value_size": 2906012, "raw_average_value_size": 2448, "num_data_blocks": 218, "num_entries": 1187, "num_filter_entries": 1187, "num_deletions": 505, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768919386, "oldest_key_time": 1768919386, "file_creation_time": 1768919561, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1688fb6f-f397-4aac-a0e8-874ff91d97a7", "db_session_id": "5V3N2TVXYZBCXP55EZHK", "orig_file_number": 64, "seqno_to_time_mapping": "N/A"}}
Jan 20 14:32:41 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 33] Flush lasted 45591 microseconds, and 13358 cpu microseconds.
Jan 20 14:32:41 compute-0 ceph-mon[74360]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 14:32:41 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:32:41.621546) [db/flush_job.cc:967] [default] [JOB 33] Level-0 flush table #64: 2936030 bytes OK
Jan 20 14:32:41 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:32:41.621597) [db/memtable_list.cc:519] [default] Level-0 commit table #64 started
Jan 20 14:32:41 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:32:41.624867) [db/memtable_list.cc:722] [default] Level-0 commit table #64: memtable #1 done
Jan 20 14:32:41 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:32:41.624888) EVENT_LOG_v1 {"time_micros": 1768919561624882, "job": 33, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 14:32:41 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:32:41.624905) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 14:32:41 compute-0 ceph-mon[74360]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 33] Try to delete WAL files size 3368630, prev total WAL file size 3368630, number of live WAL files 2.
Jan 20 14:32:41 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000060.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:32:41 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:32:41.626173) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032323539' seq:72057594037927935, type:22 .. '7061786F730032353131' seq:0, type:0; will stop at (end)
Jan 20 14:32:41 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 34] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 14:32:41 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 33 Base level 0, inputs: [64(2867KB)], [62(10MB)]
Jan 20 14:32:41 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768919561626244, "job": 34, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [64], "files_L6": [62], "score": -1, "input_data_size": 13915809, "oldest_snapshot_seqno": -1}
Jan 20 14:32:41 compute-0 sudo[276582]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:32:41 compute-0 sudo[276582]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:32:41 compute-0 sudo[276582]: pam_unix(sudo:session): session closed for user root
Jan 20 14:32:41 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 34] Generated table #65: 5442 keys, 8452119 bytes, temperature: kUnknown
Jan 20 14:32:41 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768919561674511, "cf_name": "default", "job": 34, "event": "table_file_creation", "file_number": 65, "file_size": 8452119, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8416530, "index_size": 20887, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13637, "raw_key_size": 139110, "raw_average_key_size": 25, "raw_value_size": 8319218, "raw_average_value_size": 1528, "num_data_blocks": 843, "num_entries": 5442, "num_filter_entries": 5442, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768917305, "oldest_key_time": 0, "file_creation_time": 1768919561, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1688fb6f-f397-4aac-a0e8-874ff91d97a7", "db_session_id": "5V3N2TVXYZBCXP55EZHK", "orig_file_number": 65, "seqno_to_time_mapping": "N/A"}}
Jan 20 14:32:41 compute-0 ceph-mon[74360]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 14:32:41 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:32:41.674719) [db/compaction/compaction_job.cc:1663] [default] [JOB 34] Compacted 1@0 + 1@6 files to L6 => 8452119 bytes
Jan 20 14:32:41 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:32:41.686097) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 288.0 rd, 174.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.8, 10.5 +0.0 blob) out(8.1 +0.0 blob), read-write-amplify(7.6) write-amplify(2.9) OK, records in: 6449, records dropped: 1007 output_compression: NoCompression
Jan 20 14:32:41 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:32:41.686130) EVENT_LOG_v1 {"time_micros": 1768919561686118, "job": 34, "event": "compaction_finished", "compaction_time_micros": 48325, "compaction_time_cpu_micros": 20738, "output_level": 6, "num_output_files": 1, "total_output_size": 8452119, "num_input_records": 6449, "num_output_records": 5442, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 14:32:41 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000064.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:32:41 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768919561686798, "job": 34, "event": "table_file_deletion", "file_number": 64}
Jan 20 14:32:41 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000062.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:32:41 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768919561688724, "job": 34, "event": "table_file_deletion", "file_number": 62}
Jan 20 14:32:41 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:32:41.626077) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:32:41 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:32:41.688769) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:32:41 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:32:41.688774) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:32:41 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:32:41.688776) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:32:41 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:32:41.688778) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:32:41 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:32:41.688780) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:32:41 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:32:42 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:32:42 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:32:42 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:32:42.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:32:42 compute-0 ceph-mon[74360]: pgmap v1263: 321 pgs: 321 active+clean; 88 MiB data, 437 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 196 op/s
Jan 20 14:32:42 compute-0 sshd-session[276607]: Invalid user admin from 157.245.78.139 port 58540
Jan 20 14:32:43 compute-0 sshd-session[276607]: Connection closed by invalid user admin 157.245.78.139 port 58540 [preauth]
Jan 20 14:32:43 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:32:43 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:32:43 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:32:43.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:32:43 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1264: 321 pgs: 321 active+clean; 88 MiB data, 437 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.9 MiB/s wr, 129 op/s
Jan 20 14:32:44 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:32:44 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:32:44 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:32:44.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:32:44 compute-0 nova_compute[250018]: 2026-01-20 14:32:44.253 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:32:44 compute-0 ceph-mon[74360]: pgmap v1264: 321 pgs: 321 active+clean; 88 MiB data, 437 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.9 MiB/s wr, 129 op/s
Jan 20 14:32:44 compute-0 nova_compute[250018]: 2026-01-20 14:32:44.702 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:32:44 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:32:44.703 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=14, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=13) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:32:44 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:32:44.703 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 14:32:45 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:32:45 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:32:45 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:32:45.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:32:45 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1265: 321 pgs: 321 active+clean; 68 MiB data, 438 MiB used, 21 GiB / 21 GiB avail; 3.3 MiB/s rd, 1.9 MiB/s wr, 183 op/s
Jan 20 14:32:45 compute-0 nova_compute[250018]: 2026-01-20 14:32:45.870 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:32:46 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:32:46 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:32:46 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:32:46.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:32:46 compute-0 ceph-mon[74360]: pgmap v1265: 321 pgs: 321 active+clean; 68 MiB data, 438 MiB used, 21 GiB / 21 GiB avail; 3.3 MiB/s rd, 1.9 MiB/s wr, 183 op/s
Jan 20 14:32:46 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/260108864' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:32:46 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:32:47 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:32:47 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:32:47 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:32:47.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:32:47 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1266: 321 pgs: 321 active+clean; 57 MiB data, 431 MiB used, 21 GiB / 21 GiB avail; 3.3 MiB/s rd, 1.8 MiB/s wr, 183 op/s
Jan 20 14:32:48 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:32:48 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:32:48 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:32:48.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:32:48 compute-0 ceph-mon[74360]: pgmap v1266: 321 pgs: 321 active+clean; 57 MiB data, 431 MiB used, 21 GiB / 21 GiB avail; 3.3 MiB/s rd, 1.8 MiB/s wr, 183 op/s
Jan 20 14:32:49 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:32:49 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:32:49 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:32:49.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:32:49 compute-0 nova_compute[250018]: 2026-01-20 14:32:49.257 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:32:49 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1267: 321 pgs: 321 active+clean; 41 MiB data, 428 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 140 op/s
Jan 20 14:32:50 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:32:50 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:32:50 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:32:50.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:32:50 compute-0 ceph-mon[74360]: pgmap v1267: 321 pgs: 321 active+clean; 41 MiB data, 428 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 140 op/s
Jan 20 14:32:50 compute-0 nova_compute[250018]: 2026-01-20 14:32:50.873 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:32:51 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:32:51 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:32:51 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:32:51.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:32:51 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1268: 321 pgs: 321 active+clean; 41 MiB data, 420 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.2 MiB/s wr, 127 op/s
Jan 20 14:32:51 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:32:52 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:32:52 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:32:52 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:32:52.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:32:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:32:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:32:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:32:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:32:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:32:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:32:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Optimize plan auto_2026-01-20_14:32:52
Jan 20 14:32:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 14:32:52 compute-0 ceph-mgr[74653]: [balancer INFO root] do_upmap
Jan 20 14:32:52 compute-0 ceph-mgr[74653]: [balancer INFO root] pools ['default.rgw.log', 'vms', '.mgr', 'cephfs.cephfs.data', 'volumes', 'images', 'backups', 'cephfs.cephfs.meta', 'default.rgw.control', 'default.rgw.meta', '.rgw.root']
Jan 20 14:32:52 compute-0 ceph-mgr[74653]: [balancer INFO root] prepared 0/10 changes
Jan 20 14:32:52 compute-0 ceph-mon[74360]: pgmap v1268: 321 pgs: 321 active+clean; 41 MiB data, 420 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.2 MiB/s wr, 127 op/s
Jan 20 14:32:53 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:32:53 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:32:53 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:32:53.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:32:53 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1269: 321 pgs: 321 active+clean; 41 MiB data, 420 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 97 op/s
Jan 20 14:32:54 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:32:54 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:32:54 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:32:54.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:32:54 compute-0 nova_compute[250018]: 2026-01-20 14:32:54.091 250022 DEBUG oslo_concurrency.lockutils [None req-9c217d05-8238-441e-9339-73765448b3e5 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Acquiring lock "a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:32:54 compute-0 nova_compute[250018]: 2026-01-20 14:32:54.091 250022 DEBUG oslo_concurrency.lockutils [None req-9c217d05-8238-441e-9339-73765448b3e5 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Lock "a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:32:54 compute-0 nova_compute[250018]: 2026-01-20 14:32:54.161 250022 DEBUG nova.compute.manager [None req-9c217d05-8238-441e-9339-73765448b3e5 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] [instance: a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 14:32:54 compute-0 nova_compute[250018]: 2026-01-20 14:32:54.258 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:32:54 compute-0 nova_compute[250018]: 2026-01-20 14:32:54.283 250022 DEBUG oslo_concurrency.lockutils [None req-9c217d05-8238-441e-9339-73765448b3e5 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:32:54 compute-0 nova_compute[250018]: 2026-01-20 14:32:54.284 250022 DEBUG oslo_concurrency.lockutils [None req-9c217d05-8238-441e-9339-73765448b3e5 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:32:54 compute-0 nova_compute[250018]: 2026-01-20 14:32:54.312 250022 DEBUG nova.virt.hardware [None req-9c217d05-8238-441e-9339-73765448b3e5 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 14:32:54 compute-0 nova_compute[250018]: 2026-01-20 14:32:54.312 250022 INFO nova.compute.claims [None req-9c217d05-8238-441e-9339-73765448b3e5 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] [instance: a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174] Claim successful on node compute-0.ctlplane.example.com
Jan 20 14:32:54 compute-0 nova_compute[250018]: 2026-01-20 14:32:54.455 250022 DEBUG nova.scheduler.client.report [None req-9c217d05-8238-441e-9339-73765448b3e5 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Refreshing inventories for resource provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 20 14:32:54 compute-0 nova_compute[250018]: 2026-01-20 14:32:54.477 250022 DEBUG nova.scheduler.client.report [None req-9c217d05-8238-441e-9339-73765448b3e5 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Updating ProviderTree inventory for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 20 14:32:54 compute-0 nova_compute[250018]: 2026-01-20 14:32:54.478 250022 DEBUG nova.compute.provider_tree [None req-9c217d05-8238-441e-9339-73765448b3e5 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Updating inventory in ProviderTree for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 20 14:32:54 compute-0 nova_compute[250018]: 2026-01-20 14:32:54.511 250022 DEBUG nova.scheduler.client.report [None req-9c217d05-8238-441e-9339-73765448b3e5 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Refreshing aggregate associations for resource provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 20 14:32:54 compute-0 nova_compute[250018]: 2026-01-20 14:32:54.555 250022 DEBUG nova.scheduler.client.report [None req-9c217d05-8238-441e-9339-73765448b3e5 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Refreshing trait associations for resource provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed, traits: COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_TRUSTED_CERTS,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NODE,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_RESCUE_BFV,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SSE2,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VOLUME_EXTEND,COMPUTE_ACCELERATORS,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_SATA,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE41,HW_CPU_X86_MMX,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_IDE,COMPUTE_DEVICE_TAGGING _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 20 14:32:54 compute-0 nova_compute[250018]: 2026-01-20 14:32:54.626 250022 DEBUG oslo_concurrency.processutils [None req-9c217d05-8238-441e-9339-73765448b3e5 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:32:54 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:32:54.705 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=367c1a2c-b16a-4828-ab5a-626bb50023b4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '14'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:32:54 compute-0 ceph-mon[74360]: pgmap v1269: 321 pgs: 321 active+clean; 41 MiB data, 420 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 97 op/s
Jan 20 14:32:55 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:32:55 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:32:55 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:32:55.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:32:55 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:32:55 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1950850921' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:32:55 compute-0 nova_compute[250018]: 2026-01-20 14:32:55.080 250022 DEBUG oslo_concurrency.processutils [None req-9c217d05-8238-441e-9339-73765448b3e5 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:32:55 compute-0 nova_compute[250018]: 2026-01-20 14:32:55.091 250022 DEBUG nova.compute.provider_tree [None req-9c217d05-8238-441e-9339-73765448b3e5 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:32:55 compute-0 nova_compute[250018]: 2026-01-20 14:32:55.116 250022 DEBUG nova.scheduler.client.report [None req-9c217d05-8238-441e-9339-73765448b3e5 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:32:55 compute-0 nova_compute[250018]: 2026-01-20 14:32:55.163 250022 DEBUG oslo_concurrency.lockutils [None req-9c217d05-8238-441e-9339-73765448b3e5 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.879s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:32:55 compute-0 nova_compute[250018]: 2026-01-20 14:32:55.165 250022 DEBUG nova.compute.manager [None req-9c217d05-8238-441e-9339-73765448b3e5 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] [instance: a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 14:32:55 compute-0 nova_compute[250018]: 2026-01-20 14:32:55.238 250022 DEBUG nova.compute.manager [None req-9c217d05-8238-441e-9339-73765448b3e5 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] [instance: a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 20 14:32:55 compute-0 nova_compute[250018]: 2026-01-20 14:32:55.239 250022 DEBUG nova.network.neutron [None req-9c217d05-8238-441e-9339-73765448b3e5 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] [instance: a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 14:32:55 compute-0 nova_compute[250018]: 2026-01-20 14:32:55.283 250022 INFO nova.virt.libvirt.driver [None req-9c217d05-8238-441e-9339-73765448b3e5 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] [instance: a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 14:32:55 compute-0 nova_compute[250018]: 2026-01-20 14:32:55.314 250022 DEBUG nova.compute.manager [None req-9c217d05-8238-441e-9339-73765448b3e5 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] [instance: a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 14:32:55 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1270: 321 pgs: 321 active+clean; 41 MiB data, 420 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 97 op/s
Jan 20 14:32:55 compute-0 nova_compute[250018]: 2026-01-20 14:32:55.462 250022 DEBUG nova.compute.manager [None req-9c217d05-8238-441e-9339-73765448b3e5 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] [instance: a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 14:32:55 compute-0 nova_compute[250018]: 2026-01-20 14:32:55.463 250022 DEBUG nova.virt.libvirt.driver [None req-9c217d05-8238-441e-9339-73765448b3e5 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] [instance: a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 14:32:55 compute-0 nova_compute[250018]: 2026-01-20 14:32:55.464 250022 INFO nova.virt.libvirt.driver [None req-9c217d05-8238-441e-9339-73765448b3e5 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] [instance: a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174] Creating image(s)
Jan 20 14:32:55 compute-0 nova_compute[250018]: 2026-01-20 14:32:55.491 250022 DEBUG nova.storage.rbd_utils [None req-9c217d05-8238-441e-9339-73765448b3e5 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] rbd image a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:32:55 compute-0 nova_compute[250018]: 2026-01-20 14:32:55.517 250022 DEBUG nova.storage.rbd_utils [None req-9c217d05-8238-441e-9339-73765448b3e5 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] rbd image a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:32:55 compute-0 nova_compute[250018]: 2026-01-20 14:32:55.546 250022 DEBUG nova.storage.rbd_utils [None req-9c217d05-8238-441e-9339-73765448b3e5 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] rbd image a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:32:55 compute-0 nova_compute[250018]: 2026-01-20 14:32:55.551 250022 DEBUG oslo_concurrency.processutils [None req-9c217d05-8238-441e-9339-73765448b3e5 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:32:55 compute-0 nova_compute[250018]: 2026-01-20 14:32:55.615 250022 DEBUG oslo_concurrency.processutils [None req-9c217d05-8238-441e-9339-73765448b3e5 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:32:55 compute-0 nova_compute[250018]: 2026-01-20 14:32:55.617 250022 DEBUG oslo_concurrency.lockutils [None req-9c217d05-8238-441e-9339-73765448b3e5 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Acquiring lock "82d5c1918fd7c974214c7a48c1793a7a82560462" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:32:55 compute-0 nova_compute[250018]: 2026-01-20 14:32:55.618 250022 DEBUG oslo_concurrency.lockutils [None req-9c217d05-8238-441e-9339-73765448b3e5 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:32:55 compute-0 nova_compute[250018]: 2026-01-20 14:32:55.619 250022 DEBUG oslo_concurrency.lockutils [None req-9c217d05-8238-441e-9339-73765448b3e5 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:32:55 compute-0 nova_compute[250018]: 2026-01-20 14:32:55.642 250022 DEBUG nova.storage.rbd_utils [None req-9c217d05-8238-441e-9339-73765448b3e5 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] rbd image a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:32:55 compute-0 nova_compute[250018]: 2026-01-20 14:32:55.645 250022 DEBUG oslo_concurrency.processutils [None req-9c217d05-8238-441e-9339-73765448b3e5 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:32:55 compute-0 nova_compute[250018]: 2026-01-20 14:32:55.813 250022 DEBUG nova.policy [None req-9c217d05-8238-441e-9339-73765448b3e5 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '729ca8a2a7414735af25d05df4a563b9', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '48488e875f2e472f97f07cc7ee07e0be', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 20 14:32:55 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/1950850921' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:32:55 compute-0 nova_compute[250018]: 2026-01-20 14:32:55.874 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:32:56 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:32:56 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:32:56 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:32:56.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:32:56 compute-0 nova_compute[250018]: 2026-01-20 14:32:56.620 250022 DEBUG oslo_concurrency.processutils [None req-9c217d05-8238-441e-9339-73765448b3e5 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.974s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:32:56 compute-0 nova_compute[250018]: 2026-01-20 14:32:56.710 250022 DEBUG nova.storage.rbd_utils [None req-9c217d05-8238-441e-9339-73765448b3e5 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] resizing rbd image a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 20 14:32:56 compute-0 nova_compute[250018]: 2026-01-20 14:32:56.826 250022 DEBUG nova.objects.instance [None req-9c217d05-8238-441e-9339-73765448b3e5 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Lazy-loading 'migration_context' on Instance uuid a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:32:56 compute-0 nova_compute[250018]: 2026-01-20 14:32:56.850 250022 DEBUG nova.virt.libvirt.driver [None req-9c217d05-8238-441e-9339-73765448b3e5 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] [instance: a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 14:32:56 compute-0 nova_compute[250018]: 2026-01-20 14:32:56.851 250022 DEBUG nova.virt.libvirt.driver [None req-9c217d05-8238-441e-9339-73765448b3e5 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] [instance: a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174] Ensure instance console log exists: /var/lib/nova/instances/a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 14:32:56 compute-0 nova_compute[250018]: 2026-01-20 14:32:56.852 250022 DEBUG oslo_concurrency.lockutils [None req-9c217d05-8238-441e-9339-73765448b3e5 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:32:56 compute-0 nova_compute[250018]: 2026-01-20 14:32:56.852 250022 DEBUG oslo_concurrency.lockutils [None req-9c217d05-8238-441e-9339-73765448b3e5 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:32:56 compute-0 nova_compute[250018]: 2026-01-20 14:32:56.853 250022 DEBUG oslo_concurrency.lockutils [None req-9c217d05-8238-441e-9339-73765448b3e5 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:32:56 compute-0 ceph-mon[74360]: pgmap v1270: 321 pgs: 321 active+clean; 41 MiB data, 420 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 97 op/s
Jan 20 14:32:56 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:32:57 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:32:57 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:32:57 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:32:57.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:32:57 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1271: 321 pgs: 321 active+clean; 41 MiB data, 420 MiB used, 21 GiB / 21 GiB avail; 625 KiB/s rd, 1.2 KiB/s wr, 44 op/s
Jan 20 14:32:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 14:32:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:32:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 14:32:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:32:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:32:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:32:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:32:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:32:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:32:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:32:57 compute-0 nova_compute[250018]: 2026-01-20 14:32:57.832 250022 DEBUG nova.network.neutron [None req-9c217d05-8238-441e-9339-73765448b3e5 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] [instance: a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174] Successfully created port: ca52d5c1-4a07-4d94-9adb-1311a4d89044 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 20 14:32:57 compute-0 ceph-mon[74360]: pgmap v1271: 321 pgs: 321 active+clean; 41 MiB data, 420 MiB used, 21 GiB / 21 GiB avail; 625 KiB/s rd, 1.2 KiB/s wr, 44 op/s
Jan 20 14:32:58 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:32:58 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:32:58 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:32:58.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:32:59 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:32:59 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:32:59 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:32:59.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:32:59 compute-0 nova_compute[250018]: 2026-01-20 14:32:59.261 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:32:59 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1272: 321 pgs: 321 active+clean; 45 MiB data, 420 MiB used, 21 GiB / 21 GiB avail; 10 KiB/s rd, 11 KiB/s wr, 14 op/s
Jan 20 14:32:59 compute-0 nova_compute[250018]: 2026-01-20 14:32:59.941 250022 DEBUG nova.network.neutron [None req-9c217d05-8238-441e-9339-73765448b3e5 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] [instance: a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174] Successfully updated port: ca52d5c1-4a07-4d94-9adb-1311a4d89044 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 20 14:32:59 compute-0 nova_compute[250018]: 2026-01-20 14:32:59.971 250022 DEBUG oslo_concurrency.lockutils [None req-9c217d05-8238-441e-9339-73765448b3e5 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Acquiring lock "refresh_cache-a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:32:59 compute-0 nova_compute[250018]: 2026-01-20 14:32:59.971 250022 DEBUG oslo_concurrency.lockutils [None req-9c217d05-8238-441e-9339-73765448b3e5 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Acquired lock "refresh_cache-a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:32:59 compute-0 nova_compute[250018]: 2026-01-20 14:32:59.971 250022 DEBUG nova.network.neutron [None req-9c217d05-8238-441e-9339-73765448b3e5 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] [instance: a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:33:00 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:33:00 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:33:00 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:33:00.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:33:00 compute-0 nova_compute[250018]: 2026-01-20 14:33:00.126 250022 DEBUG nova.compute.manager [req-d71d4c70-378e-49b8-b74c-065de0816c42 req-e43e2872-ba23-4754-a4a0-75bf18a5b137 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174] Received event network-changed-ca52d5c1-4a07-4d94-9adb-1311a4d89044 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:33:00 compute-0 nova_compute[250018]: 2026-01-20 14:33:00.126 250022 DEBUG nova.compute.manager [req-d71d4c70-378e-49b8-b74c-065de0816c42 req-e43e2872-ba23-4754-a4a0-75bf18a5b137 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174] Refreshing instance network info cache due to event network-changed-ca52d5c1-4a07-4d94-9adb-1311a4d89044. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:33:00 compute-0 nova_compute[250018]: 2026-01-20 14:33:00.127 250022 DEBUG oslo_concurrency.lockutils [req-d71d4c70-378e-49b8-b74c-065de0816c42 req-e43e2872-ba23-4754-a4a0-75bf18a5b137 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:33:00 compute-0 nova_compute[250018]: 2026-01-20 14:33:00.188 250022 DEBUG nova.network.neutron [None req-9c217d05-8238-441e-9339-73765448b3e5 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] [instance: a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 14:33:00 compute-0 ceph-mon[74360]: pgmap v1272: 321 pgs: 321 active+clean; 45 MiB data, 420 MiB used, 21 GiB / 21 GiB avail; 10 KiB/s rd, 11 KiB/s wr, 14 op/s
Jan 20 14:33:00 compute-0 nova_compute[250018]: 2026-01-20 14:33:00.876 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:33:01 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:33:01 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:33:01 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:33:01.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:33:01 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1273: 321 pgs: 321 active+clean; 88 MiB data, 442 MiB used, 21 GiB / 21 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 35 op/s
Jan 20 14:33:01 compute-0 sudo[276808]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:33:01 compute-0 sudo[276808]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:33:01 compute-0 sudo[276808]: pam_unix(sudo:session): session closed for user root
Jan 20 14:33:01 compute-0 sudo[276833]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:33:01 compute-0 sudo[276833]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:33:01 compute-0 sudo[276833]: pam_unix(sudo:session): session closed for user root
Jan 20 14:33:01 compute-0 nova_compute[250018]: 2026-01-20 14:33:01.863 250022 DEBUG nova.network.neutron [None req-9c217d05-8238-441e-9339-73765448b3e5 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] [instance: a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174] Updating instance_info_cache with network_info: [{"id": "ca52d5c1-4a07-4d94-9adb-1311a4d89044", "address": "fa:16:3e:94:c6:27", "network": {"id": "b774e474-3e68-434c-8017-93bd087d2285", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-829886895-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48488e875f2e472f97f07cc7ee07e0be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca52d5c1-4a", "ovs_interfaceid": "ca52d5c1-4a07-4d94-9adb-1311a4d89044", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:33:01 compute-0 nova_compute[250018]: 2026-01-20 14:33:01.904 250022 DEBUG oslo_concurrency.lockutils [None req-9c217d05-8238-441e-9339-73765448b3e5 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Releasing lock "refresh_cache-a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:33:01 compute-0 nova_compute[250018]: 2026-01-20 14:33:01.904 250022 DEBUG nova.compute.manager [None req-9c217d05-8238-441e-9339-73765448b3e5 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] [instance: a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174] Instance network_info: |[{"id": "ca52d5c1-4a07-4d94-9adb-1311a4d89044", "address": "fa:16:3e:94:c6:27", "network": {"id": "b774e474-3e68-434c-8017-93bd087d2285", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-829886895-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48488e875f2e472f97f07cc7ee07e0be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca52d5c1-4a", "ovs_interfaceid": "ca52d5c1-4a07-4d94-9adb-1311a4d89044", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 20 14:33:01 compute-0 nova_compute[250018]: 2026-01-20 14:33:01.905 250022 DEBUG oslo_concurrency.lockutils [req-d71d4c70-378e-49b8-b74c-065de0816c42 req-e43e2872-ba23-4754-a4a0-75bf18a5b137 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:33:01 compute-0 nova_compute[250018]: 2026-01-20 14:33:01.905 250022 DEBUG nova.network.neutron [req-d71d4c70-378e-49b8-b74c-065de0816c42 req-e43e2872-ba23-4754-a4a0-75bf18a5b137 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174] Refreshing network info cache for port ca52d5c1-4a07-4d94-9adb-1311a4d89044 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:33:01 compute-0 nova_compute[250018]: 2026-01-20 14:33:01.907 250022 DEBUG nova.virt.libvirt.driver [None req-9c217d05-8238-441e-9339-73765448b3e5 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] [instance: a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174] Start _get_guest_xml network_info=[{"id": "ca52d5c1-4a07-4d94-9adb-1311a4d89044", "address": "fa:16:3e:94:c6:27", "network": {"id": "b774e474-3e68-434c-8017-93bd087d2285", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-829886895-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48488e875f2e472f97f07cc7ee07e0be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca52d5c1-4a", "ovs_interfaceid": "ca52d5c1-4a07-4d94-9adb-1311a4d89044", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encrypted': False, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'guest_format': None, 'size': 0, 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 14:33:01 compute-0 nova_compute[250018]: 2026-01-20 14:33:01.912 250022 WARNING nova.virt.libvirt.driver [None req-9c217d05-8238-441e-9339-73765448b3e5 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:33:01 compute-0 nova_compute[250018]: 2026-01-20 14:33:01.922 250022 DEBUG nova.virt.libvirt.host [None req-9c217d05-8238-441e-9339-73765448b3e5 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 14:33:01 compute-0 nova_compute[250018]: 2026-01-20 14:33:01.924 250022 DEBUG nova.virt.libvirt.host [None req-9c217d05-8238-441e-9339-73765448b3e5 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 14:33:01 compute-0 nova_compute[250018]: 2026-01-20 14:33:01.936 250022 DEBUG nova.virt.libvirt.host [None req-9c217d05-8238-441e-9339-73765448b3e5 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 14:33:01 compute-0 nova_compute[250018]: 2026-01-20 14:33:01.937 250022 DEBUG nova.virt.libvirt.host [None req-9c217d05-8238-441e-9339-73765448b3e5 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 14:33:01 compute-0 nova_compute[250018]: 2026-01-20 14:33:01.938 250022 DEBUG nova.virt.libvirt.driver [None req-9c217d05-8238-441e-9339-73765448b3e5 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 14:33:01 compute-0 nova_compute[250018]: 2026-01-20 14:33:01.938 250022 DEBUG nova.virt.hardware [None req-9c217d05-8238-441e-9339-73765448b3e5 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 14:33:01 compute-0 nova_compute[250018]: 2026-01-20 14:33:01.939 250022 DEBUG nova.virt.hardware [None req-9c217d05-8238-441e-9339-73765448b3e5 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 14:33:01 compute-0 nova_compute[250018]: 2026-01-20 14:33:01.939 250022 DEBUG nova.virt.hardware [None req-9c217d05-8238-441e-9339-73765448b3e5 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 14:33:01 compute-0 nova_compute[250018]: 2026-01-20 14:33:01.939 250022 DEBUG nova.virt.hardware [None req-9c217d05-8238-441e-9339-73765448b3e5 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 14:33:01 compute-0 nova_compute[250018]: 2026-01-20 14:33:01.939 250022 DEBUG nova.virt.hardware [None req-9c217d05-8238-441e-9339-73765448b3e5 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 14:33:01 compute-0 nova_compute[250018]: 2026-01-20 14:33:01.940 250022 DEBUG nova.virt.hardware [None req-9c217d05-8238-441e-9339-73765448b3e5 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 14:33:01 compute-0 nova_compute[250018]: 2026-01-20 14:33:01.940 250022 DEBUG nova.virt.hardware [None req-9c217d05-8238-441e-9339-73765448b3e5 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 14:33:01 compute-0 nova_compute[250018]: 2026-01-20 14:33:01.940 250022 DEBUG nova.virt.hardware [None req-9c217d05-8238-441e-9339-73765448b3e5 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 14:33:01 compute-0 nova_compute[250018]: 2026-01-20 14:33:01.940 250022 DEBUG nova.virt.hardware [None req-9c217d05-8238-441e-9339-73765448b3e5 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 14:33:01 compute-0 nova_compute[250018]: 2026-01-20 14:33:01.941 250022 DEBUG nova.virt.hardware [None req-9c217d05-8238-441e-9339-73765448b3e5 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 14:33:01 compute-0 nova_compute[250018]: 2026-01-20 14:33:01.941 250022 DEBUG nova.virt.hardware [None req-9c217d05-8238-441e-9339-73765448b3e5 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 14:33:01 compute-0 nova_compute[250018]: 2026-01-20 14:33:01.944 250022 DEBUG oslo_concurrency.processutils [None req-9c217d05-8238-441e-9339-73765448b3e5 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:33:02 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:33:02 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:33:02 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:33:02.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:33:02 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:33:02 compute-0 ceph-mon[74360]: pgmap v1273: 321 pgs: 321 active+clean; 88 MiB data, 442 MiB used, 21 GiB / 21 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 35 op/s
Jan 20 14:33:02 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:33:02 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4212914262' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:33:02 compute-0 nova_compute[250018]: 2026-01-20 14:33:02.544 250022 DEBUG oslo_concurrency.processutils [None req-9c217d05-8238-441e-9339-73765448b3e5 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.600s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:33:02 compute-0 nova_compute[250018]: 2026-01-20 14:33:02.570 250022 DEBUG nova.storage.rbd_utils [None req-9c217d05-8238-441e-9339-73765448b3e5 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] rbd image a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:33:02 compute-0 nova_compute[250018]: 2026-01-20 14:33:02.574 250022 DEBUG oslo_concurrency.processutils [None req-9c217d05-8238-441e-9339-73765448b3e5 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:33:02 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:33:02 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4157934418' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:33:03 compute-0 nova_compute[250018]: 2026-01-20 14:33:03.004 250022 DEBUG oslo_concurrency.processutils [None req-9c217d05-8238-441e-9339-73765448b3e5 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:33:03 compute-0 nova_compute[250018]: 2026-01-20 14:33:03.006 250022 DEBUG nova.virt.libvirt.vif [None req-9c217d05-8238-441e-9339-73765448b3e5 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:32:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesAdminNegativeTest-server-360507108',display_name='tempest-VolumesAdminNegativeTest-server-360507108',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesadminnegativetest-server-360507108',id=38,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHEfpLP+4Mq2CZ5ZMswuqwlZgEKKrioeoV+EpxSKHke61Yt8UqOI4Nb0ZGFOm7IzrLdRE4GvfieoPu5jR6ZfidedQLi0pPlRx8BniFmrNq4zfNCZEmtF+sKw9ryVBJbHVQ==',key_name='tempest-keypair-1809754421',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='48488e875f2e472f97f07cc7ee07e0be',ramdisk_id='',reservation_id='r-sgcyfa1m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesAdminNegativeTest-1678705027',owner_user_name='tempest-VolumesAdminNegativeTest-1678705027-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:32:55Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='729ca8a2a7414735af25d05df4a563b9',uuid=a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ca52d5c1-4a07-4d94-9adb-1311a4d89044", "address": "fa:16:3e:94:c6:27", "network": {"id": "b774e474-3e68-434c-8017-93bd087d2285", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-829886895-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48488e875f2e472f97f07cc7ee07e0be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca52d5c1-4a", "ovs_interfaceid": "ca52d5c1-4a07-4d94-9adb-1311a4d89044", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 14:33:03 compute-0 nova_compute[250018]: 2026-01-20 14:33:03.006 250022 DEBUG nova.network.os_vif_util [None req-9c217d05-8238-441e-9339-73765448b3e5 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Converting VIF {"id": "ca52d5c1-4a07-4d94-9adb-1311a4d89044", "address": "fa:16:3e:94:c6:27", "network": {"id": "b774e474-3e68-434c-8017-93bd087d2285", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-829886895-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48488e875f2e472f97f07cc7ee07e0be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca52d5c1-4a", "ovs_interfaceid": "ca52d5c1-4a07-4d94-9adb-1311a4d89044", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:33:03 compute-0 nova_compute[250018]: 2026-01-20 14:33:03.007 250022 DEBUG nova.network.os_vif_util [None req-9c217d05-8238-441e-9339-73765448b3e5 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:94:c6:27,bridge_name='br-int',has_traffic_filtering=True,id=ca52d5c1-4a07-4d94-9adb-1311a4d89044,network=Network(b774e474-3e68-434c-8017-93bd087d2285),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapca52d5c1-4a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:33:03 compute-0 nova_compute[250018]: 2026-01-20 14:33:03.008 250022 DEBUG nova.objects.instance [None req-9c217d05-8238-441e-9339-73765448b3e5 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Lazy-loading 'pci_devices' on Instance uuid a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:33:03 compute-0 nova_compute[250018]: 2026-01-20 14:33:03.033 250022 DEBUG nova.virt.libvirt.driver [None req-9c217d05-8238-441e-9339-73765448b3e5 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] [instance: a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174] End _get_guest_xml xml=<domain type="kvm">
Jan 20 14:33:03 compute-0 nova_compute[250018]:   <uuid>a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174</uuid>
Jan 20 14:33:03 compute-0 nova_compute[250018]:   <name>instance-00000026</name>
Jan 20 14:33:03 compute-0 nova_compute[250018]:   <memory>131072</memory>
Jan 20 14:33:03 compute-0 nova_compute[250018]:   <vcpu>1</vcpu>
Jan 20 14:33:03 compute-0 nova_compute[250018]:   <metadata>
Jan 20 14:33:03 compute-0 nova_compute[250018]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 14:33:03 compute-0 nova_compute[250018]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 14:33:03 compute-0 nova_compute[250018]:       <nova:name>tempest-VolumesAdminNegativeTest-server-360507108</nova:name>
Jan 20 14:33:03 compute-0 nova_compute[250018]:       <nova:creationTime>2026-01-20 14:33:01</nova:creationTime>
Jan 20 14:33:03 compute-0 nova_compute[250018]:       <nova:flavor name="m1.nano">
Jan 20 14:33:03 compute-0 nova_compute[250018]:         <nova:memory>128</nova:memory>
Jan 20 14:33:03 compute-0 nova_compute[250018]:         <nova:disk>1</nova:disk>
Jan 20 14:33:03 compute-0 nova_compute[250018]:         <nova:swap>0</nova:swap>
Jan 20 14:33:03 compute-0 nova_compute[250018]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 14:33:03 compute-0 nova_compute[250018]:         <nova:vcpus>1</nova:vcpus>
Jan 20 14:33:03 compute-0 nova_compute[250018]:       </nova:flavor>
Jan 20 14:33:03 compute-0 nova_compute[250018]:       <nova:owner>
Jan 20 14:33:03 compute-0 nova_compute[250018]:         <nova:user uuid="729ca8a2a7414735af25d05df4a563b9">tempest-VolumesAdminNegativeTest-1678705027-project-member</nova:user>
Jan 20 14:33:03 compute-0 nova_compute[250018]:         <nova:project uuid="48488e875f2e472f97f07cc7ee07e0be">tempest-VolumesAdminNegativeTest-1678705027</nova:project>
Jan 20 14:33:03 compute-0 nova_compute[250018]:       </nova:owner>
Jan 20 14:33:03 compute-0 nova_compute[250018]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 14:33:03 compute-0 nova_compute[250018]:       <nova:ports>
Jan 20 14:33:03 compute-0 nova_compute[250018]:         <nova:port uuid="ca52d5c1-4a07-4d94-9adb-1311a4d89044">
Jan 20 14:33:03 compute-0 nova_compute[250018]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Jan 20 14:33:03 compute-0 nova_compute[250018]:         </nova:port>
Jan 20 14:33:03 compute-0 nova_compute[250018]:       </nova:ports>
Jan 20 14:33:03 compute-0 nova_compute[250018]:     </nova:instance>
Jan 20 14:33:03 compute-0 nova_compute[250018]:   </metadata>
Jan 20 14:33:03 compute-0 nova_compute[250018]:   <sysinfo type="smbios">
Jan 20 14:33:03 compute-0 nova_compute[250018]:     <system>
Jan 20 14:33:03 compute-0 nova_compute[250018]:       <entry name="manufacturer">RDO</entry>
Jan 20 14:33:03 compute-0 nova_compute[250018]:       <entry name="product">OpenStack Compute</entry>
Jan 20 14:33:03 compute-0 nova_compute[250018]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 14:33:03 compute-0 nova_compute[250018]:       <entry name="serial">a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174</entry>
Jan 20 14:33:03 compute-0 nova_compute[250018]:       <entry name="uuid">a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174</entry>
Jan 20 14:33:03 compute-0 nova_compute[250018]:       <entry name="family">Virtual Machine</entry>
Jan 20 14:33:03 compute-0 nova_compute[250018]:     </system>
Jan 20 14:33:03 compute-0 nova_compute[250018]:   </sysinfo>
Jan 20 14:33:03 compute-0 nova_compute[250018]:   <os>
Jan 20 14:33:03 compute-0 nova_compute[250018]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 14:33:03 compute-0 nova_compute[250018]:     <boot dev="hd"/>
Jan 20 14:33:03 compute-0 nova_compute[250018]:     <smbios mode="sysinfo"/>
Jan 20 14:33:03 compute-0 nova_compute[250018]:   </os>
Jan 20 14:33:03 compute-0 nova_compute[250018]:   <features>
Jan 20 14:33:03 compute-0 nova_compute[250018]:     <acpi/>
Jan 20 14:33:03 compute-0 nova_compute[250018]:     <apic/>
Jan 20 14:33:03 compute-0 nova_compute[250018]:     <vmcoreinfo/>
Jan 20 14:33:03 compute-0 nova_compute[250018]:   </features>
Jan 20 14:33:03 compute-0 nova_compute[250018]:   <clock offset="utc">
Jan 20 14:33:03 compute-0 nova_compute[250018]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 14:33:03 compute-0 nova_compute[250018]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 14:33:03 compute-0 nova_compute[250018]:     <timer name="hpet" present="no"/>
Jan 20 14:33:03 compute-0 nova_compute[250018]:   </clock>
Jan 20 14:33:03 compute-0 nova_compute[250018]:   <cpu mode="custom" match="exact">
Jan 20 14:33:03 compute-0 nova_compute[250018]:     <model>Nehalem</model>
Jan 20 14:33:03 compute-0 nova_compute[250018]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 14:33:03 compute-0 nova_compute[250018]:   </cpu>
Jan 20 14:33:03 compute-0 nova_compute[250018]:   <devices>
Jan 20 14:33:03 compute-0 nova_compute[250018]:     <disk type="network" device="disk">
Jan 20 14:33:03 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 14:33:03 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174_disk">
Jan 20 14:33:03 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:33:03 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:33:03 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:33:03 compute-0 nova_compute[250018]:       </source>
Jan 20 14:33:03 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 14:33:03 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:33:03 compute-0 nova_compute[250018]:       </auth>
Jan 20 14:33:03 compute-0 nova_compute[250018]:       <target dev="vda" bus="virtio"/>
Jan 20 14:33:03 compute-0 nova_compute[250018]:     </disk>
Jan 20 14:33:03 compute-0 nova_compute[250018]:     <disk type="network" device="cdrom">
Jan 20 14:33:03 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 14:33:03 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174_disk.config">
Jan 20 14:33:03 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:33:03 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:33:03 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:33:03 compute-0 nova_compute[250018]:       </source>
Jan 20 14:33:03 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 14:33:03 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:33:03 compute-0 nova_compute[250018]:       </auth>
Jan 20 14:33:03 compute-0 nova_compute[250018]:       <target dev="sda" bus="sata"/>
Jan 20 14:33:03 compute-0 nova_compute[250018]:     </disk>
Jan 20 14:33:03 compute-0 nova_compute[250018]:     <interface type="ethernet">
Jan 20 14:33:03 compute-0 nova_compute[250018]:       <mac address="fa:16:3e:94:c6:27"/>
Jan 20 14:33:03 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 14:33:03 compute-0 nova_compute[250018]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 14:33:03 compute-0 nova_compute[250018]:       <mtu size="1442"/>
Jan 20 14:33:03 compute-0 nova_compute[250018]:       <target dev="tapca52d5c1-4a"/>
Jan 20 14:33:03 compute-0 nova_compute[250018]:     </interface>
Jan 20 14:33:03 compute-0 nova_compute[250018]:     <serial type="pty">
Jan 20 14:33:03 compute-0 nova_compute[250018]:       <log file="/var/lib/nova/instances/a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174/console.log" append="off"/>
Jan 20 14:33:03 compute-0 nova_compute[250018]:     </serial>
Jan 20 14:33:03 compute-0 nova_compute[250018]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 14:33:03 compute-0 nova_compute[250018]:     <video>
Jan 20 14:33:03 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 14:33:03 compute-0 nova_compute[250018]:     </video>
Jan 20 14:33:03 compute-0 nova_compute[250018]:     <input type="tablet" bus="usb"/>
Jan 20 14:33:03 compute-0 nova_compute[250018]:     <rng model="virtio">
Jan 20 14:33:03 compute-0 nova_compute[250018]:       <backend model="random">/dev/urandom</backend>
Jan 20 14:33:03 compute-0 nova_compute[250018]:     </rng>
Jan 20 14:33:03 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root"/>
Jan 20 14:33:03 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:33:03 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:33:03 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:33:03 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:33:03 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:33:03 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:33:03 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:33:03 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:33:03 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:33:03 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:33:03 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:33:03 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:33:03 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:33:03 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:33:03 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:33:03 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:33:03 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:33:03 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:33:03 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:33:03 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:33:03 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:33:03 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:33:03 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:33:03 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:33:03 compute-0 nova_compute[250018]:     <controller type="usb" index="0"/>
Jan 20 14:33:03 compute-0 nova_compute[250018]:     <memballoon model="virtio">
Jan 20 14:33:03 compute-0 nova_compute[250018]:       <stats period="10"/>
Jan 20 14:33:03 compute-0 nova_compute[250018]:     </memballoon>
Jan 20 14:33:03 compute-0 nova_compute[250018]:   </devices>
Jan 20 14:33:03 compute-0 nova_compute[250018]: </domain>
Jan 20 14:33:03 compute-0 nova_compute[250018]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 14:33:03 compute-0 nova_compute[250018]: 2026-01-20 14:33:03.035 250022 DEBUG nova.compute.manager [None req-9c217d05-8238-441e-9339-73765448b3e5 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] [instance: a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174] Preparing to wait for external event network-vif-plugged-ca52d5c1-4a07-4d94-9adb-1311a4d89044 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 20 14:33:03 compute-0 nova_compute[250018]: 2026-01-20 14:33:03.035 250022 DEBUG oslo_concurrency.lockutils [None req-9c217d05-8238-441e-9339-73765448b3e5 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Acquiring lock "a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:33:03 compute-0 nova_compute[250018]: 2026-01-20 14:33:03.035 250022 DEBUG oslo_concurrency.lockutils [None req-9c217d05-8238-441e-9339-73765448b3e5 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Lock "a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:33:03 compute-0 nova_compute[250018]: 2026-01-20 14:33:03.036 250022 DEBUG oslo_concurrency.lockutils [None req-9c217d05-8238-441e-9339-73765448b3e5 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Lock "a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:33:03 compute-0 nova_compute[250018]: 2026-01-20 14:33:03.036 250022 DEBUG nova.virt.libvirt.vif [None req-9c217d05-8238-441e-9339-73765448b3e5 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:32:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesAdminNegativeTest-server-360507108',display_name='tempest-VolumesAdminNegativeTest-server-360507108',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesadminnegativetest-server-360507108',id=38,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHEfpLP+4Mq2CZ5ZMswuqwlZgEKKrioeoV+EpxSKHke61Yt8UqOI4Nb0ZGFOm7IzrLdRE4GvfieoPu5jR6ZfidedQLi0pPlRx8BniFmrNq4zfNCZEmtF+sKw9ryVBJbHVQ==',key_name='tempest-keypair-1809754421',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='48488e875f2e472f97f07cc7ee07e0be',ramdisk_id='',reservation_id='r-sgcyfa1m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesAdminNegativeTest-1678705027',owner_user_name='tempest-VolumesAdminNegativeTest-1678705027-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:32:55Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='729ca8a2a7414735af25d05df4a563b9',uuid=a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ca52d5c1-4a07-4d94-9adb-1311a4d89044", "address": "fa:16:3e:94:c6:27", "network": {"id": "b774e474-3e68-434c-8017-93bd087d2285", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-829886895-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48488e875f2e472f97f07cc7ee07e0be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca52d5c1-4a", "ovs_interfaceid": "ca52d5c1-4a07-4d94-9adb-1311a4d89044", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 14:33:03 compute-0 nova_compute[250018]: 2026-01-20 14:33:03.037 250022 DEBUG nova.network.os_vif_util [None req-9c217d05-8238-441e-9339-73765448b3e5 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Converting VIF {"id": "ca52d5c1-4a07-4d94-9adb-1311a4d89044", "address": "fa:16:3e:94:c6:27", "network": {"id": "b774e474-3e68-434c-8017-93bd087d2285", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-829886895-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48488e875f2e472f97f07cc7ee07e0be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca52d5c1-4a", "ovs_interfaceid": "ca52d5c1-4a07-4d94-9adb-1311a4d89044", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:33:03 compute-0 nova_compute[250018]: 2026-01-20 14:33:03.037 250022 DEBUG nova.network.os_vif_util [None req-9c217d05-8238-441e-9339-73765448b3e5 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:94:c6:27,bridge_name='br-int',has_traffic_filtering=True,id=ca52d5c1-4a07-4d94-9adb-1311a4d89044,network=Network(b774e474-3e68-434c-8017-93bd087d2285),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapca52d5c1-4a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:33:03 compute-0 nova_compute[250018]: 2026-01-20 14:33:03.038 250022 DEBUG os_vif [None req-9c217d05-8238-441e-9339-73765448b3e5 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:94:c6:27,bridge_name='br-int',has_traffic_filtering=True,id=ca52d5c1-4a07-4d94-9adb-1311a4d89044,network=Network(b774e474-3e68-434c-8017-93bd087d2285),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapca52d5c1-4a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 14:33:03 compute-0 nova_compute[250018]: 2026-01-20 14:33:03.038 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:33:03 compute-0 nova_compute[250018]: 2026-01-20 14:33:03.039 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:33:03 compute-0 nova_compute[250018]: 2026-01-20 14:33:03.039 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:33:03 compute-0 nova_compute[250018]: 2026-01-20 14:33:03.043 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:33:03 compute-0 nova_compute[250018]: 2026-01-20 14:33:03.044 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapca52d5c1-4a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:33:03 compute-0 nova_compute[250018]: 2026-01-20 14:33:03.044 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapca52d5c1-4a, col_values=(('external_ids', {'iface-id': 'ca52d5c1-4a07-4d94-9adb-1311a4d89044', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:94:c6:27', 'vm-uuid': 'a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:33:03 compute-0 nova_compute[250018]: 2026-01-20 14:33:03.046 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:33:03 compute-0 NetworkManager[48960]: <info>  [1768919583.0476] manager: (tapca52d5c1-4a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/62)
Jan 20 14:33:03 compute-0 nova_compute[250018]: 2026-01-20 14:33:03.047 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:33:03 compute-0 nova_compute[250018]: 2026-01-20 14:33:03.054 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:33:03 compute-0 nova_compute[250018]: 2026-01-20 14:33:03.054 250022 INFO os_vif [None req-9c217d05-8238-441e-9339-73765448b3e5 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:94:c6:27,bridge_name='br-int',has_traffic_filtering=True,id=ca52d5c1-4a07-4d94-9adb-1311a4d89044,network=Network(b774e474-3e68-434c-8017-93bd087d2285),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapca52d5c1-4a')
Jan 20 14:33:03 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:33:03 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:33:03 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:33:03.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:33:03 compute-0 nova_compute[250018]: 2026-01-20 14:33:03.211 250022 DEBUG nova.virt.libvirt.driver [None req-9c217d05-8238-441e-9339-73765448b3e5 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:33:03 compute-0 nova_compute[250018]: 2026-01-20 14:33:03.211 250022 DEBUG nova.virt.libvirt.driver [None req-9c217d05-8238-441e-9339-73765448b3e5 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:33:03 compute-0 nova_compute[250018]: 2026-01-20 14:33:03.212 250022 DEBUG nova.virt.libvirt.driver [None req-9c217d05-8238-441e-9339-73765448b3e5 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] No VIF found with MAC fa:16:3e:94:c6:27, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 14:33:03 compute-0 nova_compute[250018]: 2026-01-20 14:33:03.212 250022 INFO nova.virt.libvirt.driver [None req-9c217d05-8238-441e-9339-73765448b3e5 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] [instance: a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174] Using config drive
Jan 20 14:33:03 compute-0 nova_compute[250018]: 2026-01-20 14:33:03.240 250022 DEBUG nova.storage.rbd_utils [None req-9c217d05-8238-441e-9339-73765448b3e5 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] rbd image a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:33:03 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1274: 321 pgs: 321 active+clean; 88 MiB data, 442 MiB used, 21 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 14:33:03 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/4212914262' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:33:03 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/4157934418' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:33:04 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:33:04 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:33:04 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:33:04.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:33:04 compute-0 nova_compute[250018]: 2026-01-20 14:33:04.134 250022 INFO nova.virt.libvirt.driver [None req-9c217d05-8238-441e-9339-73765448b3e5 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] [instance: a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174] Creating config drive at /var/lib/nova/instances/a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174/disk.config
Jan 20 14:33:04 compute-0 nova_compute[250018]: 2026-01-20 14:33:04.139 250022 DEBUG oslo_concurrency.processutils [None req-9c217d05-8238-441e-9339-73765448b3e5 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpn86i16b0 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:33:04 compute-0 nova_compute[250018]: 2026-01-20 14:33:04.267 250022 DEBUG oslo_concurrency.processutils [None req-9c217d05-8238-441e-9339-73765448b3e5 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpn86i16b0" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:33:04 compute-0 nova_compute[250018]: 2026-01-20 14:33:04.298 250022 DEBUG nova.storage.rbd_utils [None req-9c217d05-8238-441e-9339-73765448b3e5 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] rbd image a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:33:04 compute-0 nova_compute[250018]: 2026-01-20 14:33:04.301 250022 DEBUG oslo_concurrency.processutils [None req-9c217d05-8238-441e-9339-73765448b3e5 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174/disk.config a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:33:04 compute-0 podman[276979]: 2026-01-20 14:33:04.495076464 +0000 UTC m=+0.067753723 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 20 14:33:04 compute-0 podman[276978]: 2026-01-20 14:33:04.528289437 +0000 UTC m=+0.100968546 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 20 14:33:04 compute-0 ceph-mon[74360]: pgmap v1274: 321 pgs: 321 active+clean; 88 MiB data, 442 MiB used, 21 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 14:33:04 compute-0 nova_compute[250018]: 2026-01-20 14:33:04.915 250022 DEBUG oslo_concurrency.processutils [None req-9c217d05-8238-441e-9339-73765448b3e5 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174/disk.config a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.613s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:33:04 compute-0 nova_compute[250018]: 2026-01-20 14:33:04.915 250022 INFO nova.virt.libvirt.driver [None req-9c217d05-8238-441e-9339-73765448b3e5 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] [instance: a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174] Deleting local config drive /var/lib/nova/instances/a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174/disk.config because it was imported into RBD.
Jan 20 14:33:04 compute-0 kernel: tapca52d5c1-4a: entered promiscuous mode
Jan 20 14:33:04 compute-0 NetworkManager[48960]: <info>  [1768919584.9714] manager: (tapca52d5c1-4a): new Tun device (/org/freedesktop/NetworkManager/Devices/63)
Jan 20 14:33:04 compute-0 nova_compute[250018]: 2026-01-20 14:33:04.972 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:33:04 compute-0 ovn_controller[148666]: 2026-01-20T14:33:04Z|00104|binding|INFO|Claiming lport ca52d5c1-4a07-4d94-9adb-1311a4d89044 for this chassis.
Jan 20 14:33:04 compute-0 ovn_controller[148666]: 2026-01-20T14:33:04Z|00105|binding|INFO|ca52d5c1-4a07-4d94-9adb-1311a4d89044: Claiming fa:16:3e:94:c6:27 10.100.0.8
Jan 20 14:33:04 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:33:04.984 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:94:c6:27 10.100.0.8'], port_security=['fa:16:3e:94:c6:27 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b774e474-3e68-434c-8017-93bd087d2285', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '48488e875f2e472f97f07cc7ee07e0be', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a57e9380-dedd-484d-b2c3-1886f12d2575', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3367f149-9a0b-42bf-93b6-10dba98995c7, chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=ca52d5c1-4a07-4d94-9adb-1311a4d89044) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:33:04 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:33:04.985 160071 INFO neutron.agent.ovn.metadata.agent [-] Port ca52d5c1-4a07-4d94-9adb-1311a4d89044 in datapath b774e474-3e68-434c-8017-93bd087d2285 bound to our chassis
Jan 20 14:33:04 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:33:04.987 160071 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b774e474-3e68-434c-8017-93bd087d2285
Jan 20 14:33:04 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:33:04.997 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[33fcdb34-cae3-4345-b440-c9ce13d5581e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:33:04 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:33:04.998 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb774e474-31 in ovnmeta-b774e474-3e68-434c-8017-93bd087d2285 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 14:33:04 compute-0 systemd-machined[216401]: New machine qemu-18-instance-00000026.
Jan 20 14:33:05 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:33:05.000 257604 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb774e474-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 14:33:05 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:33:05.001 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[8eadc7d7-7974-41e9-bc4e-618a656a4ded]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:33:05 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:33:05.002 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[2eb4ca7e-2f42-4a50-b85f-1519edb5864e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:33:05 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:33:05.014 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[e985a144-1828-467f-b1b3-71cc52064223]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:33:05 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:33:05.039 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[f85fe6f3-a1b2-4c6a-b694-63c7d4ad2ac7]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:33:05 compute-0 systemd[1]: Started Virtual Machine qemu-18-instance-00000026.
Jan 20 14:33:05 compute-0 nova_compute[250018]: 2026-01-20 14:33:05.048 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:33:05 compute-0 ovn_controller[148666]: 2026-01-20T14:33:05Z|00106|binding|INFO|Setting lport ca52d5c1-4a07-4d94-9adb-1311a4d89044 ovn-installed in OVS
Jan 20 14:33:05 compute-0 ovn_controller[148666]: 2026-01-20T14:33:05Z|00107|binding|INFO|Setting lport ca52d5c1-4a07-4d94-9adb-1311a4d89044 up in Southbound
Jan 20 14:33:05 compute-0 nova_compute[250018]: 2026-01-20 14:33:05.055 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:33:05 compute-0 systemd-udevd[277040]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:33:05 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:33:05 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:33:05 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:33:05.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:33:05 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:33:05.067 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[c4428458-f84a-456f-a47d-3d5a4abcd1c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:33:05 compute-0 NetworkManager[48960]: <info>  [1768919585.0737] device (tapca52d5c1-4a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 14:33:05 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:33:05.073 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[8dc32abf-8ff0-435a-a321-8d4d0a3ed583]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:33:05 compute-0 NetworkManager[48960]: <info>  [1768919585.0749] manager: (tapb774e474-30): new Veth device (/org/freedesktop/NetworkManager/Devices/64)
Jan 20 14:33:05 compute-0 NetworkManager[48960]: <info>  [1768919585.0752] device (tapca52d5c1-4a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 14:33:05 compute-0 systemd-udevd[277042]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:33:05 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:33:05.100 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[20e6d8b7-f52c-4798-970e-be3ae1c7a3f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:33:05 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:33:05.103 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[6760c009-1569-4a86-8e51-d8cc61dd5f7b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:33:05 compute-0 NetworkManager[48960]: <info>  [1768919585.1255] device (tapb774e474-30): carrier: link connected
Jan 20 14:33:05 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:33:05.131 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[2f157d71-50b2-419c-a650-b5a334cfe80c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:33:05 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:33:05.148 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[4110d7b0-5ef9-415b-9033-384d90f9991d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb774e474-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:06:ae:64'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 35], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 552584, 'reachable_time': 33890, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 277068, 'error': None, 'target': 'ovnmeta-b774e474-3e68-434c-8017-93bd087d2285', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:33:05 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:33:05.162 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[a6ae3ba8-b2ba-4eed-877e-a2b3092a18a8]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe06:ae64'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 552584, 'tstamp': 552584}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 277069, 'error': None, 'target': 'ovnmeta-b774e474-3e68-434c-8017-93bd087d2285', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:33:05 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:33:05.180 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[915b97ca-a82f-456e-9983-5f9ba41f0876]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb774e474-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:06:ae:64'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 35], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 552584, 'reachable_time': 33890, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 277070, 'error': None, 'target': 'ovnmeta-b774e474-3e68-434c-8017-93bd087d2285', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:33:05 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:33:05.208 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[7dd1f8f3-ae67-44e9-a36b-eec92240f837]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:33:05 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:33:05.279 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[1eb92d74-62d0-4de3-905b-78d1a4492a2b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:33:05 compute-0 nova_compute[250018]: 2026-01-20 14:33:05.282 250022 DEBUG nova.network.neutron [req-d71d4c70-378e-49b8-b74c-065de0816c42 req-e43e2872-ba23-4754-a4a0-75bf18a5b137 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174] Updated VIF entry in instance network info cache for port ca52d5c1-4a07-4d94-9adb-1311a4d89044. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 14:33:05 compute-0 nova_compute[250018]: 2026-01-20 14:33:05.283 250022 DEBUG nova.network.neutron [req-d71d4c70-378e-49b8-b74c-065de0816c42 req-e43e2872-ba23-4754-a4a0-75bf18a5b137 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174] Updating instance_info_cache with network_info: [{"id": "ca52d5c1-4a07-4d94-9adb-1311a4d89044", "address": "fa:16:3e:94:c6:27", "network": {"id": "b774e474-3e68-434c-8017-93bd087d2285", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-829886895-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48488e875f2e472f97f07cc7ee07e0be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca52d5c1-4a", "ovs_interfaceid": "ca52d5c1-4a07-4d94-9adb-1311a4d89044", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:33:05 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:33:05.286 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb774e474-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:33:05 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:33:05.286 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:33:05 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:33:05.287 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb774e474-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:33:05 compute-0 kernel: tapb774e474-30: entered promiscuous mode
Jan 20 14:33:05 compute-0 NetworkManager[48960]: <info>  [1768919585.2902] manager: (tapb774e474-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/65)
Jan 20 14:33:05 compute-0 nova_compute[250018]: 2026-01-20 14:33:05.291 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:33:05 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:33:05.293 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb774e474-30, col_values=(('external_ids', {'iface-id': '01ec1fed-ce23-42e9-9147-b3495425c336'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:33:05 compute-0 ovn_controller[148666]: 2026-01-20T14:33:05Z|00108|binding|INFO|Releasing lport 01ec1fed-ce23-42e9-9147-b3495425c336 from this chassis (sb_readonly=0)
Jan 20 14:33:05 compute-0 nova_compute[250018]: 2026-01-20 14:33:05.309 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:33:05 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:33:05.310 160071 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b774e474-3e68-434c-8017-93bd087d2285.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b774e474-3e68-434c-8017-93bd087d2285.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 14:33:05 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:33:05.311 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[46524c07-f97d-4cb8-99d3-b36bac4c74cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:33:05 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:33:05.312 160071 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 14:33:05 compute-0 ovn_metadata_agent[160049]: global
Jan 20 14:33:05 compute-0 ovn_metadata_agent[160049]:     log         /dev/log local0 debug
Jan 20 14:33:05 compute-0 ovn_metadata_agent[160049]:     log-tag     haproxy-metadata-proxy-b774e474-3e68-434c-8017-93bd087d2285
Jan 20 14:33:05 compute-0 ovn_metadata_agent[160049]:     user        root
Jan 20 14:33:05 compute-0 ovn_metadata_agent[160049]:     group       root
Jan 20 14:33:05 compute-0 ovn_metadata_agent[160049]:     maxconn     1024
Jan 20 14:33:05 compute-0 ovn_metadata_agent[160049]:     pidfile     /var/lib/neutron/external/pids/b774e474-3e68-434c-8017-93bd087d2285.pid.haproxy
Jan 20 14:33:05 compute-0 ovn_metadata_agent[160049]:     daemon
Jan 20 14:33:05 compute-0 ovn_metadata_agent[160049]: 
Jan 20 14:33:05 compute-0 ovn_metadata_agent[160049]: defaults
Jan 20 14:33:05 compute-0 ovn_metadata_agent[160049]:     log global
Jan 20 14:33:05 compute-0 ovn_metadata_agent[160049]:     mode http
Jan 20 14:33:05 compute-0 ovn_metadata_agent[160049]:     option httplog
Jan 20 14:33:05 compute-0 ovn_metadata_agent[160049]:     option dontlognull
Jan 20 14:33:05 compute-0 ovn_metadata_agent[160049]:     option http-server-close
Jan 20 14:33:05 compute-0 ovn_metadata_agent[160049]:     option forwardfor
Jan 20 14:33:05 compute-0 ovn_metadata_agent[160049]:     retries                 3
Jan 20 14:33:05 compute-0 ovn_metadata_agent[160049]:     timeout http-request    30s
Jan 20 14:33:05 compute-0 ovn_metadata_agent[160049]:     timeout connect         30s
Jan 20 14:33:05 compute-0 ovn_metadata_agent[160049]:     timeout client          32s
Jan 20 14:33:05 compute-0 ovn_metadata_agent[160049]:     timeout server          32s
Jan 20 14:33:05 compute-0 ovn_metadata_agent[160049]:     timeout http-keep-alive 30s
Jan 20 14:33:05 compute-0 ovn_metadata_agent[160049]: 
Jan 20 14:33:05 compute-0 ovn_metadata_agent[160049]: 
Jan 20 14:33:05 compute-0 ovn_metadata_agent[160049]: listen listener
Jan 20 14:33:05 compute-0 ovn_metadata_agent[160049]:     bind 169.254.169.254:80
Jan 20 14:33:05 compute-0 ovn_metadata_agent[160049]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 14:33:05 compute-0 ovn_metadata_agent[160049]:     http-request add-header X-OVN-Network-ID b774e474-3e68-434c-8017-93bd087d2285
Jan 20 14:33:05 compute-0 ovn_metadata_agent[160049]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 14:33:05 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:33:05.312 160071 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b774e474-3e68-434c-8017-93bd087d2285', 'env', 'PROCESS_TAG=haproxy-b774e474-3e68-434c-8017-93bd087d2285', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b774e474-3e68-434c-8017-93bd087d2285.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 14:33:05 compute-0 nova_compute[250018]: 2026-01-20 14:33:05.316 250022 DEBUG oslo_concurrency.lockutils [req-d71d4c70-378e-49b8-b74c-065de0816c42 req-e43e2872-ba23-4754-a4a0-75bf18a5b137 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:33:05 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1275: 321 pgs: 321 active+clean; 88 MiB data, 442 MiB used, 21 GiB / 21 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Jan 20 14:33:05 compute-0 nova_compute[250018]: 2026-01-20 14:33:05.588 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768919585.5882745, a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:33:05 compute-0 nova_compute[250018]: 2026-01-20 14:33:05.589 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174] VM Started (Lifecycle Event)
Jan 20 14:33:05 compute-0 nova_compute[250018]: 2026-01-20 14:33:05.608 250022 DEBUG nova.compute.manager [req-9b7a2cb3-231c-41ad-956c-6fb9572fd3fa req-77e686e4-0eb7-4f47-8266-b5cb6f3b5632 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174] Received event network-vif-plugged-ca52d5c1-4a07-4d94-9adb-1311a4d89044 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:33:05 compute-0 nova_compute[250018]: 2026-01-20 14:33:05.608 250022 DEBUG oslo_concurrency.lockutils [req-9b7a2cb3-231c-41ad-956c-6fb9572fd3fa req-77e686e4-0eb7-4f47-8266-b5cb6f3b5632 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:33:05 compute-0 nova_compute[250018]: 2026-01-20 14:33:05.608 250022 DEBUG oslo_concurrency.lockutils [req-9b7a2cb3-231c-41ad-956c-6fb9572fd3fa req-77e686e4-0eb7-4f47-8266-b5cb6f3b5632 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:33:05 compute-0 nova_compute[250018]: 2026-01-20 14:33:05.609 250022 DEBUG oslo_concurrency.lockutils [req-9b7a2cb3-231c-41ad-956c-6fb9572fd3fa req-77e686e4-0eb7-4f47-8266-b5cb6f3b5632 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:33:05 compute-0 nova_compute[250018]: 2026-01-20 14:33:05.609 250022 DEBUG nova.compute.manager [req-9b7a2cb3-231c-41ad-956c-6fb9572fd3fa req-77e686e4-0eb7-4f47-8266-b5cb6f3b5632 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174] Processing event network-vif-plugged-ca52d5c1-4a07-4d94-9adb-1311a4d89044 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 20 14:33:05 compute-0 nova_compute[250018]: 2026-01-20 14:33:05.609 250022 DEBUG nova.compute.manager [None req-9c217d05-8238-441e-9339-73765448b3e5 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] [instance: a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 14:33:05 compute-0 nova_compute[250018]: 2026-01-20 14:33:05.612 250022 DEBUG nova.virt.libvirt.driver [None req-9c217d05-8238-441e-9339-73765448b3e5 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] [instance: a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 14:33:05 compute-0 nova_compute[250018]: 2026-01-20 14:33:05.616 250022 INFO nova.virt.libvirt.driver [-] [instance: a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174] Instance spawned successfully.
Jan 20 14:33:05 compute-0 nova_compute[250018]: 2026-01-20 14:33:05.616 250022 DEBUG nova.virt.libvirt.driver [None req-9c217d05-8238-441e-9339-73765448b3e5 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] [instance: a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 14:33:05 compute-0 nova_compute[250018]: 2026-01-20 14:33:05.630 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:33:05 compute-0 nova_compute[250018]: 2026-01-20 14:33:05.634 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:33:05 compute-0 nova_compute[250018]: 2026-01-20 14:33:05.663 250022 DEBUG nova.virt.libvirt.driver [None req-9c217d05-8238-441e-9339-73765448b3e5 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] [instance: a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:33:05 compute-0 nova_compute[250018]: 2026-01-20 14:33:05.663 250022 DEBUG nova.virt.libvirt.driver [None req-9c217d05-8238-441e-9339-73765448b3e5 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] [instance: a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:33:05 compute-0 nova_compute[250018]: 2026-01-20 14:33:05.664 250022 DEBUG nova.virt.libvirt.driver [None req-9c217d05-8238-441e-9339-73765448b3e5 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] [instance: a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:33:05 compute-0 nova_compute[250018]: 2026-01-20 14:33:05.664 250022 DEBUG nova.virt.libvirt.driver [None req-9c217d05-8238-441e-9339-73765448b3e5 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] [instance: a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:33:05 compute-0 nova_compute[250018]: 2026-01-20 14:33:05.664 250022 DEBUG nova.virt.libvirt.driver [None req-9c217d05-8238-441e-9339-73765448b3e5 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] [instance: a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:33:05 compute-0 nova_compute[250018]: 2026-01-20 14:33:05.665 250022 DEBUG nova.virt.libvirt.driver [None req-9c217d05-8238-441e-9339-73765448b3e5 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] [instance: a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:33:05 compute-0 nova_compute[250018]: 2026-01-20 14:33:05.677 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:33:05 compute-0 nova_compute[250018]: 2026-01-20 14:33:05.678 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768919585.5892498, a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:33:05 compute-0 nova_compute[250018]: 2026-01-20 14:33:05.678 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174] VM Paused (Lifecycle Event)
Jan 20 14:33:05 compute-0 podman[277144]: 2026-01-20 14:33:05.695231528 +0000 UTC m=+0.052331448 container create f0f3ba0713c8d10811d4335f4052129a8192aeb6ee27e08b11d8ef96dde857b6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b774e474-3e68-434c-8017-93bd087d2285, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:33:05 compute-0 nova_compute[250018]: 2026-01-20 14:33:05.714 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:33:05 compute-0 nova_compute[250018]: 2026-01-20 14:33:05.719 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768919585.6124368, a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:33:05 compute-0 nova_compute[250018]: 2026-01-20 14:33:05.719 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174] VM Resumed (Lifecycle Event)
Jan 20 14:33:05 compute-0 systemd[1]: Started libpod-conmon-f0f3ba0713c8d10811d4335f4052129a8192aeb6ee27e08b11d8ef96dde857b6.scope.
Jan 20 14:33:05 compute-0 nova_compute[250018]: 2026-01-20 14:33:05.754 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:33:05 compute-0 podman[277144]: 2026-01-20 14:33:05.667848492 +0000 UTC m=+0.024948432 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 14:33:05 compute-0 nova_compute[250018]: 2026-01-20 14:33:05.761 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:33:05 compute-0 nova_compute[250018]: 2026-01-20 14:33:05.766 250022 INFO nova.compute.manager [None req-9c217d05-8238-441e-9339-73765448b3e5 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] [instance: a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174] Took 10.30 seconds to spawn the instance on the hypervisor.
Jan 20 14:33:05 compute-0 nova_compute[250018]: 2026-01-20 14:33:05.766 250022 DEBUG nova.compute.manager [None req-9c217d05-8238-441e-9339-73765448b3e5 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] [instance: a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:33:05 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:33:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/669f7aad4565a1ef60d54e29274ebcfb188daad43f2bc4b9ae05140434ef8309/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 14:33:05 compute-0 nova_compute[250018]: 2026-01-20 14:33:05.780 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:33:05 compute-0 podman[277144]: 2026-01-20 14:33:05.792203276 +0000 UTC m=+0.149303216 container init f0f3ba0713c8d10811d4335f4052129a8192aeb6ee27e08b11d8ef96dde857b6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b774e474-3e68-434c-8017-93bd087d2285, tcib_managed=true, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 14:33:05 compute-0 podman[277144]: 2026-01-20 14:33:05.797528319 +0000 UTC m=+0.154628239 container start f0f3ba0713c8d10811d4335f4052129a8192aeb6ee27e08b11d8ef96dde857b6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b774e474-3e68-434c-8017-93bd087d2285, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 20 14:33:05 compute-0 neutron-haproxy-ovnmeta-b774e474-3e68-434c-8017-93bd087d2285[277159]: [NOTICE]   (277163) : New worker (277165) forked
Jan 20 14:33:05 compute-0 neutron-haproxy-ovnmeta-b774e474-3e68-434c-8017-93bd087d2285[277159]: [NOTICE]   (277163) : Loading success.
Jan 20 14:33:05 compute-0 nova_compute[250018]: 2026-01-20 14:33:05.875 250022 INFO nova.compute.manager [None req-9c217d05-8238-441e-9339-73765448b3e5 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] [instance: a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174] Took 11.65 seconds to build instance.
Jan 20 14:33:05 compute-0 nova_compute[250018]: 2026-01-20 14:33:05.877 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:33:05 compute-0 nova_compute[250018]: 2026-01-20 14:33:05.895 250022 DEBUG oslo_concurrency.lockutils [None req-9c217d05-8238-441e-9339-73765448b3e5 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Lock "a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.804s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:33:05 compute-0 ceph-mon[74360]: pgmap v1275: 321 pgs: 321 active+clean; 88 MiB data, 442 MiB used, 21 GiB / 21 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Jan 20 14:33:06 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:33:06 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:33:06 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:33:06.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:33:07 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:33:07 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:33:07 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:33:07.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:33:07 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:33:07 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1276: 321 pgs: 321 active+clean; 88 MiB data, 442 MiB used, 21 GiB / 21 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Jan 20 14:33:08 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:33:08 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:33:08 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:33:08.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:33:08 compute-0 nova_compute[250018]: 2026-01-20 14:33:08.095 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:33:08 compute-0 nova_compute[250018]: 2026-01-20 14:33:08.375 250022 DEBUG nova.compute.manager [req-b5614346-c3ac-4d78-9706-84241e8d22a7 req-7a3896c6-dc42-4a12-9bc2-ebff182a3b96 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174] Received event network-vif-plugged-ca52d5c1-4a07-4d94-9adb-1311a4d89044 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:33:08 compute-0 nova_compute[250018]: 2026-01-20 14:33:08.378 250022 DEBUG oslo_concurrency.lockutils [req-b5614346-c3ac-4d78-9706-84241e8d22a7 req-7a3896c6-dc42-4a12-9bc2-ebff182a3b96 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:33:08 compute-0 nova_compute[250018]: 2026-01-20 14:33:08.379 250022 DEBUG oslo_concurrency.lockutils [req-b5614346-c3ac-4d78-9706-84241e8d22a7 req-7a3896c6-dc42-4a12-9bc2-ebff182a3b96 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:33:08 compute-0 nova_compute[250018]: 2026-01-20 14:33:08.380 250022 DEBUG oslo_concurrency.lockutils [req-b5614346-c3ac-4d78-9706-84241e8d22a7 req-7a3896c6-dc42-4a12-9bc2-ebff182a3b96 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:33:08 compute-0 nova_compute[250018]: 2026-01-20 14:33:08.381 250022 DEBUG nova.compute.manager [req-b5614346-c3ac-4d78-9706-84241e8d22a7 req-7a3896c6-dc42-4a12-9bc2-ebff182a3b96 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174] No waiting events found dispatching network-vif-plugged-ca52d5c1-4a07-4d94-9adb-1311a4d89044 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:33:08 compute-0 nova_compute[250018]: 2026-01-20 14:33:08.381 250022 WARNING nova.compute.manager [req-b5614346-c3ac-4d78-9706-84241e8d22a7 req-7a3896c6-dc42-4a12-9bc2-ebff182a3b96 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174] Received unexpected event network-vif-plugged-ca52d5c1-4a07-4d94-9adb-1311a4d89044 for instance with vm_state active and task_state None.
Jan 20 14:33:08 compute-0 ceph-mon[74360]: pgmap v1276: 321 pgs: 321 active+clean; 88 MiB data, 442 MiB used, 21 GiB / 21 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Jan 20 14:33:09 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:33:09 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:33:09 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:33:09.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:33:09 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1277: 321 pgs: 321 active+clean; 88 MiB data, 442 MiB used, 21 GiB / 21 GiB avail; 38 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Jan 20 14:33:09 compute-0 NetworkManager[48960]: <info>  [1768919589.6117] manager: (patch-provnet-b62c391b-f7a3-4a38-a0df-72ac0383ca74-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/66)
Jan 20 14:33:09 compute-0 NetworkManager[48960]: <info>  [1768919589.6124] manager: (patch-br-int-to-provnet-b62c391b-f7a3-4a38-a0df-72ac0383ca74): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/67)
Jan 20 14:33:09 compute-0 nova_compute[250018]: 2026-01-20 14:33:09.611 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:33:09 compute-0 nova_compute[250018]: 2026-01-20 14:33:09.756 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:33:09 compute-0 ovn_controller[148666]: 2026-01-20T14:33:09Z|00109|binding|INFO|Releasing lport 01ec1fed-ce23-42e9-9147-b3495425c336 from this chassis (sb_readonly=0)
Jan 20 14:33:09 compute-0 nova_compute[250018]: 2026-01-20 14:33:09.773 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:33:10 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:33:10 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:33:10 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:33:10.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:33:10 compute-0 nova_compute[250018]: 2026-01-20 14:33:10.298 250022 DEBUG nova.compute.manager [req-9b291e2f-76c5-441a-a800-04b22618d492 req-8167092d-9b1b-4c73-8c48-8365f68eb783 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174] Received event network-changed-ca52d5c1-4a07-4d94-9adb-1311a4d89044 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:33:10 compute-0 nova_compute[250018]: 2026-01-20 14:33:10.299 250022 DEBUG nova.compute.manager [req-9b291e2f-76c5-441a-a800-04b22618d492 req-8167092d-9b1b-4c73-8c48-8365f68eb783 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174] Refreshing instance network info cache due to event network-changed-ca52d5c1-4a07-4d94-9adb-1311a4d89044. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:33:10 compute-0 nova_compute[250018]: 2026-01-20 14:33:10.299 250022 DEBUG oslo_concurrency.lockutils [req-9b291e2f-76c5-441a-a800-04b22618d492 req-8167092d-9b1b-4c73-8c48-8365f68eb783 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:33:10 compute-0 nova_compute[250018]: 2026-01-20 14:33:10.299 250022 DEBUG oslo_concurrency.lockutils [req-9b291e2f-76c5-441a-a800-04b22618d492 req-8167092d-9b1b-4c73-8c48-8365f68eb783 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:33:10 compute-0 nova_compute[250018]: 2026-01-20 14:33:10.299 250022 DEBUG nova.network.neutron [req-9b291e2f-76c5-441a-a800-04b22618d492 req-8167092d-9b1b-4c73-8c48-8365f68eb783 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174] Refreshing network info cache for port ca52d5c1-4a07-4d94-9adb-1311a4d89044 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:33:10 compute-0 ceph-mon[74360]: pgmap v1277: 321 pgs: 321 active+clean; 88 MiB data, 442 MiB used, 21 GiB / 21 GiB avail; 38 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Jan 20 14:33:10 compute-0 nova_compute[250018]: 2026-01-20 14:33:10.879 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:33:11 compute-0 nova_compute[250018]: 2026-01-20 14:33:11.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:33:11 compute-0 nova_compute[250018]: 2026-01-20 14:33:11.052 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 20 14:33:11 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:33:11 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:33:11 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:33:11.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:33:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 14:33:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:33:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 20 14:33:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:33:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00099691891168807 of space, bias 1.0, pg target 0.299075673506421 quantized to 32 (current 32)
Jan 20 14:33:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:33:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 20 14:33:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:33:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:33:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:33:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 20 14:33:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:33:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 20 14:33:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:33:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:33:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:33:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 20 14:33:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:33:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 20 14:33:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:33:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:33:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:33:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 20 14:33:11 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1278: 321 pgs: 321 active+clean; 88 MiB data, 442 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Jan 20 14:33:12 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:33:12 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:33:12 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:33:12.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:33:12 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:33:12 compute-0 nova_compute[250018]: 2026-01-20 14:33:12.549 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:33:12 compute-0 ceph-mon[74360]: pgmap v1278: 321 pgs: 321 active+clean; 88 MiB data, 442 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Jan 20 14:33:13 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:33:13 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:33:13 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:33:13.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:33:13 compute-0 nova_compute[250018]: 2026-01-20 14:33:13.097 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:33:13 compute-0 nova_compute[250018]: 2026-01-20 14:33:13.233 250022 DEBUG nova.network.neutron [req-9b291e2f-76c5-441a-a800-04b22618d492 req-8167092d-9b1b-4c73-8c48-8365f68eb783 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174] Updated VIF entry in instance network info cache for port ca52d5c1-4a07-4d94-9adb-1311a4d89044. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 14:33:13 compute-0 nova_compute[250018]: 2026-01-20 14:33:13.234 250022 DEBUG nova.network.neutron [req-9b291e2f-76c5-441a-a800-04b22618d492 req-8167092d-9b1b-4c73-8c48-8365f68eb783 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174] Updating instance_info_cache with network_info: [{"id": "ca52d5c1-4a07-4d94-9adb-1311a4d89044", "address": "fa:16:3e:94:c6:27", "network": {"id": "b774e474-3e68-434c-8017-93bd087d2285", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-829886895-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48488e875f2e472f97f07cc7ee07e0be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca52d5c1-4a", "ovs_interfaceid": "ca52d5c1-4a07-4d94-9adb-1311a4d89044", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:33:13 compute-0 nova_compute[250018]: 2026-01-20 14:33:13.279 250022 DEBUG oslo_concurrency.lockutils [req-9b291e2f-76c5-441a-a800-04b22618d492 req-8167092d-9b1b-4c73-8c48-8365f68eb783 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:33:13 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1279: 321 pgs: 321 active+clean; 88 MiB data, 442 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Jan 20 14:33:13 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/2650109795' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:33:13 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/2650109795' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:33:14 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:33:14 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:33:14 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:33:14.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:33:14 compute-0 ceph-mon[74360]: pgmap v1279: 321 pgs: 321 active+clean; 88 MiB data, 442 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Jan 20 14:33:15 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:33:15 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:33:15 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:33:15.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:33:15 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1280: 321 pgs: 321 active+clean; 88 MiB data, 442 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Jan 20 14:33:15 compute-0 nova_compute[250018]: 2026-01-20 14:33:15.880 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:33:15 compute-0 ceph-mon[74360]: pgmap v1280: 321 pgs: 321 active+clean; 88 MiB data, 442 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Jan 20 14:33:16 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:33:16 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:33:16 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:33:16.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:33:17 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:33:17 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:33:17 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:33:17.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:33:17 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:33:17 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1281: 321 pgs: 321 active+clean; 88 MiB data, 442 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 72 op/s
Jan 20 14:33:18 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:33:18 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:33:18 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:33:18.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:33:18 compute-0 nova_compute[250018]: 2026-01-20 14:33:18.100 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:33:18 compute-0 ceph-mon[74360]: pgmap v1281: 321 pgs: 321 active+clean; 88 MiB data, 442 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 72 op/s
Jan 20 14:33:19 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:33:19 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:33:19 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:33:19.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:33:19 compute-0 ovn_controller[148666]: 2026-01-20T14:33:19Z|00012|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:94:c6:27 10.100.0.8
Jan 20 14:33:19 compute-0 ovn_controller[148666]: 2026-01-20T14:33:19Z|00013|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:94:c6:27 10.100.0.8
Jan 20 14:33:19 compute-0 nova_compute[250018]: 2026-01-20 14:33:19.336 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:33:19 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1282: 321 pgs: 321 active+clean; 96 MiB data, 448 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 560 KiB/s wr, 78 op/s
Jan 20 14:33:20 compute-0 nova_compute[250018]: 2026-01-20 14:33:20.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:33:20 compute-0 nova_compute[250018]: 2026-01-20 14:33:20.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:33:20 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:33:20 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:33:20 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:33:20.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:33:20 compute-0 nova_compute[250018]: 2026-01-20 14:33:20.085 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:33:20 compute-0 nova_compute[250018]: 2026-01-20 14:33:20.086 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:33:20 compute-0 nova_compute[250018]: 2026-01-20 14:33:20.086 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:33:20 compute-0 nova_compute[250018]: 2026-01-20 14:33:20.087 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 14:33:20 compute-0 nova_compute[250018]: 2026-01-20 14:33:20.087 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:33:20 compute-0 nova_compute[250018]: 2026-01-20 14:33:20.397 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:33:20 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:33:20 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1830974497' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:33:20 compute-0 nova_compute[250018]: 2026-01-20 14:33:20.565 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:33:20 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e165 do_prune osdmap full prune enabled
Jan 20 14:33:20 compute-0 ceph-mon[74360]: pgmap v1282: 321 pgs: 321 active+clean; 96 MiB data, 448 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 560 KiB/s wr, 78 op/s
Jan 20 14:33:20 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/1830974497' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:33:20 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e166 e166: 3 total, 3 up, 3 in
Jan 20 14:33:20 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e166: 3 total, 3 up, 3 in
Jan 20 14:33:20 compute-0 nova_compute[250018]: 2026-01-20 14:33:20.667 250022 DEBUG nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] skipping disk for instance-00000026 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:33:20 compute-0 nova_compute[250018]: 2026-01-20 14:33:20.668 250022 DEBUG nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] skipping disk for instance-00000026 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:33:20 compute-0 nova_compute[250018]: 2026-01-20 14:33:20.837 250022 WARNING nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:33:20 compute-0 nova_compute[250018]: 2026-01-20 14:33:20.838 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4462MB free_disk=20.961212158203125GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 14:33:20 compute-0 nova_compute[250018]: 2026-01-20 14:33:20.838 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:33:20 compute-0 nova_compute[250018]: 2026-01-20 14:33:20.839 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:33:20 compute-0 nova_compute[250018]: 2026-01-20 14:33:20.882 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:33:21 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:33:21 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:33:21 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:33:21.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:33:21 compute-0 nova_compute[250018]: 2026-01-20 14:33:21.343 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Instance a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 14:33:21 compute-0 nova_compute[250018]: 2026-01-20 14:33:21.344 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 14:33:21 compute-0 nova_compute[250018]: 2026-01-20 14:33:21.345 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 14:33:21 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1284: 321 pgs: 321 active+clean; 114 MiB data, 471 MiB used, 21 GiB / 21 GiB avail; 367 KiB/s rd, 2.5 MiB/s wr, 72 op/s
Jan 20 14:33:21 compute-0 nova_compute[250018]: 2026-01-20 14:33:21.551 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:33:21 compute-0 ceph-mon[74360]: osdmap e166: 3 total, 3 up, 3 in
Jan 20 14:33:21 compute-0 sudo[277226]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:33:21 compute-0 sudo[277226]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:33:21 compute-0 sudo[277226]: pam_unix(sudo:session): session closed for user root
Jan 20 14:33:21 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:33:21 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1496720829' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:33:21 compute-0 nova_compute[250018]: 2026-01-20 14:33:21.966 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:33:21 compute-0 nova_compute[250018]: 2026-01-20 14:33:21.972 250022 DEBUG nova.compute.provider_tree [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:33:21 compute-0 sudo[277251]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:33:21 compute-0 sudo[277251]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:33:21 compute-0 sudo[277251]: pam_unix(sudo:session): session closed for user root
Jan 20 14:33:22 compute-0 nova_compute[250018]: 2026-01-20 14:33:22.003 250022 DEBUG nova.scheduler.client.report [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:33:22 compute-0 nova_compute[250018]: 2026-01-20 14:33:22.031 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 14:33:22 compute-0 nova_compute[250018]: 2026-01-20 14:33:22.031 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.193s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:33:22 compute-0 nova_compute[250018]: 2026-01-20 14:33:22.032 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:33:22 compute-0 nova_compute[250018]: 2026-01-20 14:33:22.032 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 20 14:33:22 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:33:22 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:33:22 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:33:22.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:33:22 compute-0 nova_compute[250018]: 2026-01-20 14:33:22.069 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 20 14:33:22 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:33:22.171 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=15, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=14) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:33:22 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:33:22.173 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 14:33:22 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:33:22 compute-0 nova_compute[250018]: 2026-01-20 14:33:22.222 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:33:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:33:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:33:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:33:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:33:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:33:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:33:22 compute-0 ceph-mon[74360]: pgmap v1284: 321 pgs: 321 active+clean; 114 MiB data, 471 MiB used, 21 GiB / 21 GiB avail; 367 KiB/s rd, 2.5 MiB/s wr, 72 op/s
Jan 20 14:33:22 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/1270072728' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:33:22 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/1496720829' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:33:23 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:33:23 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:33:23 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:33:23.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:33:23 compute-0 nova_compute[250018]: 2026-01-20 14:33:23.103 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:33:23 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1285: 321 pgs: 321 active+clean; 114 MiB data, 471 MiB used, 21 GiB / 21 GiB avail; 367 KiB/s rd, 2.5 MiB/s wr, 72 op/s
Jan 20 14:33:23 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/4270915150' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:33:24 compute-0 nova_compute[250018]: 2026-01-20 14:33:24.068 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:33:24 compute-0 nova_compute[250018]: 2026-01-20 14:33:24.069 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:33:24 compute-0 nova_compute[250018]: 2026-01-20 14:33:24.069 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:33:24 compute-0 nova_compute[250018]: 2026-01-20 14:33:24.069 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:33:24 compute-0 nova_compute[250018]: 2026-01-20 14:33:24.069 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:33:24 compute-0 nova_compute[250018]: 2026-01-20 14:33:24.069 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 14:33:24 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:33:24 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:33:24 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:33:24.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:33:24 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e166 do_prune osdmap full prune enabled
Jan 20 14:33:24 compute-0 ceph-mon[74360]: pgmap v1285: 321 pgs: 321 active+clean; 114 MiB data, 471 MiB used, 21 GiB / 21 GiB avail; 367 KiB/s rd, 2.5 MiB/s wr, 72 op/s
Jan 20 14:33:24 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e167 e167: 3 total, 3 up, 3 in
Jan 20 14:33:24 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e167: 3 total, 3 up, 3 in
Jan 20 14:33:25 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:33:25 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:33:25 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:33:25.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:33:25 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1287: 321 pgs: 321 active+clean; 121 MiB data, 471 MiB used, 21 GiB / 21 GiB avail; 545 KiB/s rd, 3.2 MiB/s wr, 118 op/s
Jan 20 14:33:25 compute-0 ceph-mon[74360]: osdmap e167: 3 total, 3 up, 3 in
Jan 20 14:33:25 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/2112737495' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:33:25 compute-0 ceph-mon[74360]: pgmap v1287: 321 pgs: 321 active+clean; 121 MiB data, 471 MiB used, 21 GiB / 21 GiB avail; 545 KiB/s rd, 3.2 MiB/s wr, 118 op/s
Jan 20 14:33:25 compute-0 nova_compute[250018]: 2026-01-20 14:33:25.882 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:33:25 compute-0 sshd-session[277280]: Invalid user admin from 157.245.78.139 port 51558
Jan 20 14:33:26 compute-0 sshd-session[277280]: Connection closed by invalid user admin 157.245.78.139 port 51558 [preauth]
Jan 20 14:33:26 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:33:26 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:33:26 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:33:26.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:33:26 compute-0 nova_compute[250018]: 2026-01-20 14:33:26.152 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:33:27 compute-0 nova_compute[250018]: 2026-01-20 14:33:27.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:33:27 compute-0 nova_compute[250018]: 2026-01-20 14:33:27.051 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 14:33:27 compute-0 nova_compute[250018]: 2026-01-20 14:33:27.051 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 14:33:27 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:33:27 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:33:27 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:33:27.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:33:27 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:33:27 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/1012833942' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:33:27 compute-0 nova_compute[250018]: 2026-01-20 14:33:27.419 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "refresh_cache-a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:33:27 compute-0 nova_compute[250018]: 2026-01-20 14:33:27.419 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquired lock "refresh_cache-a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:33:27 compute-0 nova_compute[250018]: 2026-01-20 14:33:27.420 250022 DEBUG nova.network.neutron [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] [instance: a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 20 14:33:27 compute-0 nova_compute[250018]: 2026-01-20 14:33:27.420 250022 DEBUG nova.objects.instance [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lazy-loading 'info_cache' on Instance uuid a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:33:27 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1288: 321 pgs: 321 active+clean; 121 MiB data, 471 MiB used, 21 GiB / 21 GiB avail; 511 KiB/s rd, 2.4 MiB/s wr, 103 op/s
Jan 20 14:33:28 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:33:28 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:33:28 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:33:28.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:33:28 compute-0 nova_compute[250018]: 2026-01-20 14:33:28.108 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:33:28 compute-0 nova_compute[250018]: 2026-01-20 14:33:28.433 250022 DEBUG oslo_concurrency.lockutils [None req-3bbbb0b0-3ebc-4160-aa9b-b3cd787a2295 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Acquiring lock "a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:33:28 compute-0 nova_compute[250018]: 2026-01-20 14:33:28.433 250022 DEBUG oslo_concurrency.lockutils [None req-3bbbb0b0-3ebc-4160-aa9b-b3cd787a2295 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Lock "a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:33:28 compute-0 ceph-mon[74360]: pgmap v1288: 321 pgs: 321 active+clean; 121 MiB data, 471 MiB used, 21 GiB / 21 GiB avail; 511 KiB/s rd, 2.4 MiB/s wr, 103 op/s
Jan 20 14:33:28 compute-0 nova_compute[250018]: 2026-01-20 14:33:28.466 250022 DEBUG nova.objects.instance [None req-3bbbb0b0-3ebc-4160-aa9b-b3cd787a2295 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Lazy-loading 'flavor' on Instance uuid a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:33:28 compute-0 nova_compute[250018]: 2026-01-20 14:33:28.521 250022 DEBUG oslo_concurrency.lockutils [None req-3bbbb0b0-3ebc-4160-aa9b-b3cd787a2295 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Lock "a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.087s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:33:28 compute-0 nova_compute[250018]: 2026-01-20 14:33:28.989 250022 DEBUG oslo_concurrency.lockutils [None req-3bbbb0b0-3ebc-4160-aa9b-b3cd787a2295 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Acquiring lock "a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:33:28 compute-0 nova_compute[250018]: 2026-01-20 14:33:28.989 250022 DEBUG oslo_concurrency.lockutils [None req-3bbbb0b0-3ebc-4160-aa9b-b3cd787a2295 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Lock "a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:33:28 compute-0 nova_compute[250018]: 2026-01-20 14:33:28.990 250022 INFO nova.compute.manager [None req-3bbbb0b0-3ebc-4160-aa9b-b3cd787a2295 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] [instance: a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174] Attaching volume 0a09166e-3f2b-4406-9246-5c03480cda80 to /dev/vdb
Jan 20 14:33:29 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:33:29 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:33:29 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:33:29.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:33:29 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1289: 321 pgs: 321 active+clean; 121 MiB data, 471 MiB used, 21 GiB / 21 GiB avail; 113 KiB/s rd, 379 KiB/s wr, 68 op/s
Jan 20 14:33:29 compute-0 nova_compute[250018]: 2026-01-20 14:33:29.505 250022 DEBUG os_brick.utils [None req-3bbbb0b0-3ebc-4160-aa9b-b3cd787a2295 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Jan 20 14:33:29 compute-0 nova_compute[250018]: 2026-01-20 14:33:29.506 268150 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:33:29 compute-0 nova_compute[250018]: 2026-01-20 14:33:29.517 268150 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:33:29 compute-0 nova_compute[250018]: 2026-01-20 14:33:29.517 268150 DEBUG oslo.privsep.daemon [-] privsep: reply[b060adb9-8da2-495e-8327-96e92339166f]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:33:29 compute-0 nova_compute[250018]: 2026-01-20 14:33:29.519 268150 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:33:29 compute-0 nova_compute[250018]: 2026-01-20 14:33:29.526 268150 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:33:29 compute-0 nova_compute[250018]: 2026-01-20 14:33:29.527 268150 DEBUG oslo.privsep.daemon [-] privsep: reply[165c816d-b0f0-4feb-ab3d-e2852e95c64f]: (4, ('InitiatorName=iqn.1994-05.com.redhat:228389a1f17e', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:33:29 compute-0 nova_compute[250018]: 2026-01-20 14:33:29.529 268150 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:33:29 compute-0 nova_compute[250018]: 2026-01-20 14:33:29.536 268150 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:33:29 compute-0 nova_compute[250018]: 2026-01-20 14:33:29.536 268150 DEBUG oslo.privsep.daemon [-] privsep: reply[04faa84c-dec5-4ea5-93ae-abe3bd6d1dd0]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:33:29 compute-0 nova_compute[250018]: 2026-01-20 14:33:29.538 268150 DEBUG oslo.privsep.daemon [-] privsep: reply[d28520be-195b-4458-9069-aba33ce806c3]: (4, '35085f33-1a27-41e3-805d-02c7ac6a1d7f') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:33:29 compute-0 nova_compute[250018]: 2026-01-20 14:33:29.538 250022 DEBUG oslo_concurrency.processutils [None req-3bbbb0b0-3ebc-4160-aa9b-b3cd787a2295 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:33:29 compute-0 nova_compute[250018]: 2026-01-20 14:33:29.563 250022 DEBUG oslo_concurrency.processutils [None req-3bbbb0b0-3ebc-4160-aa9b-b3cd787a2295 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] CMD "nvme version" returned: 0 in 0.025s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:33:29 compute-0 nova_compute[250018]: 2026-01-20 14:33:29.566 250022 DEBUG os_brick.initiator.connectors.lightos [None req-3bbbb0b0-3ebc-4160-aa9b-b3cd787a2295 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Jan 20 14:33:29 compute-0 nova_compute[250018]: 2026-01-20 14:33:29.566 250022 DEBUG os_brick.initiator.connectors.lightos [None req-3bbbb0b0-3ebc-4160-aa9b-b3cd787a2295 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Jan 20 14:33:29 compute-0 nova_compute[250018]: 2026-01-20 14:33:29.566 250022 DEBUG os_brick.initiator.connectors.lightos [None req-3bbbb0b0-3ebc-4160-aa9b-b3cd787a2295 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Jan 20 14:33:29 compute-0 nova_compute[250018]: 2026-01-20 14:33:29.567 250022 DEBUG os_brick.utils [None req-3bbbb0b0-3ebc-4160-aa9b-b3cd787a2295 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] <== get_connector_properties: return (61ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:228389a1f17e', 'do_local_attach': False, 'nvme_hostid': '5350774e-8b5e-4dba-80a9-92d405981c1d', 'system uuid': '35085f33-1a27-41e3-805d-02c7ac6a1d7f', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Jan 20 14:33:29 compute-0 nova_compute[250018]: 2026-01-20 14:33:29.567 250022 DEBUG nova.virt.block_device [None req-3bbbb0b0-3ebc-4160-aa9b-b3cd787a2295 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] [instance: a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174] Updating existing volume attachment record: 8f32b739-0c75-4e74-8c94-29b1c987749e _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Jan 20 14:33:30 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:33:30 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:33:30 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:33:30.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:33:30 compute-0 nova_compute[250018]: 2026-01-20 14:33:30.557 250022 DEBUG nova.objects.instance [None req-3bbbb0b0-3ebc-4160-aa9b-b3cd787a2295 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Lazy-loading 'flavor' on Instance uuid a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:33:30 compute-0 nova_compute[250018]: 2026-01-20 14:33:30.587 250022 DEBUG nova.virt.libvirt.driver [None req-3bbbb0b0-3ebc-4160-aa9b-b3cd787a2295 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] [instance: a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174] Attempting to attach volume 0a09166e-3f2b-4406-9246-5c03480cda80 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Jan 20 14:33:30 compute-0 nova_compute[250018]: 2026-01-20 14:33:30.590 250022 DEBUG nova.virt.libvirt.guest [None req-3bbbb0b0-3ebc-4160-aa9b-b3cd787a2295 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] attach device xml: <disk type="network" device="disk">
Jan 20 14:33:30 compute-0 nova_compute[250018]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 20 14:33:30 compute-0 nova_compute[250018]:   <source protocol="rbd" name="volumes/volume-0a09166e-3f2b-4406-9246-5c03480cda80">
Jan 20 14:33:30 compute-0 nova_compute[250018]:     <host name="192.168.122.100" port="6789"/>
Jan 20 14:33:30 compute-0 nova_compute[250018]:     <host name="192.168.122.102" port="6789"/>
Jan 20 14:33:30 compute-0 nova_compute[250018]:     <host name="192.168.122.101" port="6789"/>
Jan 20 14:33:30 compute-0 nova_compute[250018]:   </source>
Jan 20 14:33:30 compute-0 nova_compute[250018]:   <auth username="openstack">
Jan 20 14:33:30 compute-0 nova_compute[250018]:     <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:33:30 compute-0 nova_compute[250018]:   </auth>
Jan 20 14:33:30 compute-0 nova_compute[250018]:   <target dev="vdb" bus="virtio"/>
Jan 20 14:33:30 compute-0 nova_compute[250018]:   <serial>0a09166e-3f2b-4406-9246-5c03480cda80</serial>
Jan 20 14:33:30 compute-0 nova_compute[250018]: </disk>
Jan 20 14:33:30 compute-0 nova_compute[250018]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Jan 20 14:33:30 compute-0 nova_compute[250018]: 2026-01-20 14:33:30.638 250022 DEBUG nova.network.neutron [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] [instance: a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174] Updating instance_info_cache with network_info: [{"id": "ca52d5c1-4a07-4d94-9adb-1311a4d89044", "address": "fa:16:3e:94:c6:27", "network": {"id": "b774e474-3e68-434c-8017-93bd087d2285", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-829886895-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48488e875f2e472f97f07cc7ee07e0be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca52d5c1-4a", "ovs_interfaceid": "ca52d5c1-4a07-4d94-9adb-1311a4d89044", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:33:30 compute-0 nova_compute[250018]: 2026-01-20 14:33:30.656 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Releasing lock "refresh_cache-a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:33:30 compute-0 nova_compute[250018]: 2026-01-20 14:33:30.657 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] [instance: a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 20 14:33:30 compute-0 nova_compute[250018]: 2026-01-20 14:33:30.657 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:33:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:33:30.747 160071 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:33:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:33:30.748 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:33:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:33:30.748 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:33:30 compute-0 nova_compute[250018]: 2026-01-20 14:33:30.794 250022 DEBUG nova.virt.libvirt.driver [None req-3bbbb0b0-3ebc-4160-aa9b-b3cd787a2295 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:33:30 compute-0 nova_compute[250018]: 2026-01-20 14:33:30.795 250022 DEBUG nova.virt.libvirt.driver [None req-3bbbb0b0-3ebc-4160-aa9b-b3cd787a2295 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:33:30 compute-0 nova_compute[250018]: 2026-01-20 14:33:30.795 250022 DEBUG nova.virt.libvirt.driver [None req-3bbbb0b0-3ebc-4160-aa9b-b3cd787a2295 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:33:30 compute-0 nova_compute[250018]: 2026-01-20 14:33:30.795 250022 DEBUG nova.virt.libvirt.driver [None req-3bbbb0b0-3ebc-4160-aa9b-b3cd787a2295 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] No VIF found with MAC fa:16:3e:94:c6:27, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 14:33:30 compute-0 ceph-mon[74360]: pgmap v1289: 321 pgs: 321 active+clean; 121 MiB data, 471 MiB used, 21 GiB / 21 GiB avail; 113 KiB/s rd, 379 KiB/s wr, 68 op/s
Jan 20 14:33:30 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/659662857' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:33:30 compute-0 nova_compute[250018]: 2026-01-20 14:33:30.884 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:33:31 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:33:31 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:33:31 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:33:31.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:33:31 compute-0 nova_compute[250018]: 2026-01-20 14:33:31.120 250022 DEBUG oslo_concurrency.lockutils [None req-3bbbb0b0-3ebc-4160-aa9b-b3cd787a2295 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Lock "a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 2.131s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:33:31 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1290: 321 pgs: 321 active+clean; 121 MiB data, 471 MiB used, 21 GiB / 21 GiB avail; 88 KiB/s rd, 38 KiB/s wr, 48 op/s
Jan 20 14:33:31 compute-0 ovn_controller[148666]: 2026-01-20T14:33:31Z|00110|binding|INFO|Releasing lport 01ec1fed-ce23-42e9-9147-b3495425c336 from this chassis (sb_readonly=0)
Jan 20 14:33:31 compute-0 nova_compute[250018]: 2026-01-20 14:33:31.797 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:33:31 compute-0 ceph-mon[74360]: pgmap v1290: 321 pgs: 321 active+clean; 121 MiB data, 471 MiB used, 21 GiB / 21 GiB avail; 88 KiB/s rd, 38 KiB/s wr, 48 op/s
Jan 20 14:33:32 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:33:32 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:33:32 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:33:32.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:33:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:33:32.175 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=367c1a2c-b16a-4828-ab5a-626bb50023b4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '15'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:33:32 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:33:32 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e167 do_prune osdmap full prune enabled
Jan 20 14:33:32 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e168 e168: 3 total, 3 up, 3 in
Jan 20 14:33:32 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e168: 3 total, 3 up, 3 in
Jan 20 14:33:33 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:33:33 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:33:33 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:33:33.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:33:33 compute-0 nova_compute[250018]: 2026-01-20 14:33:33.109 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:33:33 compute-0 ceph-mon[74360]: osdmap e168: 3 total, 3 up, 3 in
Jan 20 14:33:33 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/3926106385' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:33:33 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1292: 321 pgs: 321 active+clean; 121 MiB data, 471 MiB used, 21 GiB / 21 GiB avail; 74 KiB/s rd, 20 KiB/s wr, 36 op/s
Jan 20 14:33:34 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:33:34 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:33:34 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:33:34.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:33:34 compute-0 nova_compute[250018]: 2026-01-20 14:33:34.184 250022 DEBUG oslo_concurrency.lockutils [None req-cb94e645-c35c-4bbc-a2fc-5bcc3ea14aa2 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Acquiring lock "a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:33:34 compute-0 nova_compute[250018]: 2026-01-20 14:33:34.185 250022 DEBUG oslo_concurrency.lockutils [None req-cb94e645-c35c-4bbc-a2fc-5bcc3ea14aa2 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Lock "a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:33:34 compute-0 nova_compute[250018]: 2026-01-20 14:33:34.205 250022 INFO nova.compute.manager [None req-cb94e645-c35c-4bbc-a2fc-5bcc3ea14aa2 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] [instance: a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174] Detaching volume 0a09166e-3f2b-4406-9246-5c03480cda80
Jan 20 14:33:34 compute-0 nova_compute[250018]: 2026-01-20 14:33:34.595 250022 INFO nova.virt.block_device [None req-cb94e645-c35c-4bbc-a2fc-5bcc3ea14aa2 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] [instance: a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174] Attempting to driver detach volume 0a09166e-3f2b-4406-9246-5c03480cda80 from mountpoint /dev/vdb
Jan 20 14:33:34 compute-0 nova_compute[250018]: 2026-01-20 14:33:34.603 250022 DEBUG nova.virt.libvirt.driver [None req-cb94e645-c35c-4bbc-a2fc-5bcc3ea14aa2 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Attempting to detach device vdb from instance a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Jan 20 14:33:34 compute-0 nova_compute[250018]: 2026-01-20 14:33:34.604 250022 DEBUG nova.virt.libvirt.guest [None req-cb94e645-c35c-4bbc-a2fc-5bcc3ea14aa2 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] detach device xml: <disk type="network" device="disk">
Jan 20 14:33:34 compute-0 nova_compute[250018]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 20 14:33:34 compute-0 nova_compute[250018]:   <source protocol="rbd" name="volumes/volume-0a09166e-3f2b-4406-9246-5c03480cda80">
Jan 20 14:33:34 compute-0 nova_compute[250018]:     <host name="192.168.122.100" port="6789"/>
Jan 20 14:33:34 compute-0 nova_compute[250018]:     <host name="192.168.122.102" port="6789"/>
Jan 20 14:33:34 compute-0 nova_compute[250018]:     <host name="192.168.122.101" port="6789"/>
Jan 20 14:33:34 compute-0 nova_compute[250018]:   </source>
Jan 20 14:33:34 compute-0 nova_compute[250018]:   <target dev="vdb" bus="virtio"/>
Jan 20 14:33:34 compute-0 nova_compute[250018]:   <serial>0a09166e-3f2b-4406-9246-5c03480cda80</serial>
Jan 20 14:33:34 compute-0 nova_compute[250018]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Jan 20 14:33:34 compute-0 nova_compute[250018]: </disk>
Jan 20 14:33:34 compute-0 nova_compute[250018]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 20 14:33:35 compute-0 nova_compute[250018]: 2026-01-20 14:33:35.110 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:33:35 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:33:35 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:33:35 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:33:35.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:33:35 compute-0 nova_compute[250018]: 2026-01-20 14:33:35.229 250022 INFO nova.virt.libvirt.driver [None req-cb94e645-c35c-4bbc-a2fc-5bcc3ea14aa2 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Successfully detached device vdb from instance a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174 from the persistent domain config.
Jan 20 14:33:35 compute-0 nova_compute[250018]: 2026-01-20 14:33:35.230 250022 DEBUG nova.virt.libvirt.driver [None req-cb94e645-c35c-4bbc-a2fc-5bcc3ea14aa2 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Jan 20 14:33:35 compute-0 nova_compute[250018]: 2026-01-20 14:33:35.231 250022 DEBUG nova.virt.libvirt.guest [None req-cb94e645-c35c-4bbc-a2fc-5bcc3ea14aa2 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] detach device xml: <disk type="network" device="disk">
Jan 20 14:33:35 compute-0 nova_compute[250018]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 20 14:33:35 compute-0 nova_compute[250018]:   <source protocol="rbd" name="volumes/volume-0a09166e-3f2b-4406-9246-5c03480cda80">
Jan 20 14:33:35 compute-0 nova_compute[250018]:     <host name="192.168.122.100" port="6789"/>
Jan 20 14:33:35 compute-0 nova_compute[250018]:     <host name="192.168.122.102" port="6789"/>
Jan 20 14:33:35 compute-0 nova_compute[250018]:     <host name="192.168.122.101" port="6789"/>
Jan 20 14:33:35 compute-0 nova_compute[250018]:   </source>
Jan 20 14:33:35 compute-0 nova_compute[250018]:   <target dev="vdb" bus="virtio"/>
Jan 20 14:33:35 compute-0 nova_compute[250018]:   <serial>0a09166e-3f2b-4406-9246-5c03480cda80</serial>
Jan 20 14:33:35 compute-0 nova_compute[250018]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Jan 20 14:33:35 compute-0 nova_compute[250018]: </disk>
Jan 20 14:33:35 compute-0 nova_compute[250018]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 20 14:33:35 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1293: 321 pgs: 321 active+clean; 150 MiB data, 491 MiB used, 21 GiB / 21 GiB avail; 41 KiB/s rd, 1.3 MiB/s wr, 57 op/s
Jan 20 14:33:35 compute-0 podman[277315]: 2026-01-20 14:33:35.485531836 +0000 UTC m=+0.068530384 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:33:35 compute-0 podman[277314]: 2026-01-20 14:33:35.509227213 +0000 UTC m=+0.101276895 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, 
tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 20 14:33:35 compute-0 nova_compute[250018]: 2026-01-20 14:33:35.508 250022 DEBUG nova.virt.libvirt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Received event <DeviceRemovedEvent: 1768919615.5086532, a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Jan 20 14:33:35 compute-0 nova_compute[250018]: 2026-01-20 14:33:35.510 250022 DEBUG nova.virt.libvirt.driver [None req-cb94e645-c35c-4bbc-a2fc-5bcc3ea14aa2 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Jan 20 14:33:35 compute-0 nova_compute[250018]: 2026-01-20 14:33:35.512 250022 INFO nova.virt.libvirt.driver [None req-cb94e645-c35c-4bbc-a2fc-5bcc3ea14aa2 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Successfully detached device vdb from instance a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174 from the live domain config.
Jan 20 14:33:35 compute-0 nova_compute[250018]: 2026-01-20 14:33:35.886 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:33:35 compute-0 nova_compute[250018]: 2026-01-20 14:33:35.947 250022 DEBUG nova.objects.instance [None req-cb94e645-c35c-4bbc-a2fc-5bcc3ea14aa2 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Lazy-loading 'flavor' on Instance uuid a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:33:36 compute-0 nova_compute[250018]: 2026-01-20 14:33:36.015 250022 DEBUG oslo_concurrency.lockutils [None req-cb94e645-c35c-4bbc-a2fc-5bcc3ea14aa2 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Lock "a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 1.831s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:33:36 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:33:36 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:33:36 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:33:36.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:33:36 compute-0 ceph-mon[74360]: pgmap v1292: 321 pgs: 321 active+clean; 121 MiB data, 471 MiB used, 21 GiB / 21 GiB avail; 74 KiB/s rd, 20 KiB/s wr, 36 op/s
Jan 20 14:33:37 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:33:37 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:33:37 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:33:37.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:33:37 compute-0 ceph-mon[74360]: pgmap v1293: 321 pgs: 321 active+clean; 150 MiB data, 491 MiB used, 21 GiB / 21 GiB avail; 41 KiB/s rd, 1.3 MiB/s wr, 57 op/s
Jan 20 14:33:37 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/2118206802' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:33:37 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:33:37 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1294: 321 pgs: 321 active+clean; 156 MiB data, 492 MiB used, 21 GiB / 21 GiB avail; 41 KiB/s rd, 1.5 MiB/s wr, 60 op/s
Jan 20 14:33:38 compute-0 sudo[277362]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:33:38 compute-0 sudo[277362]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:33:38 compute-0 sudo[277362]: pam_unix(sudo:session): session closed for user root
Jan 20 14:33:38 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:33:38 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:33:38 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:33:38.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:33:38 compute-0 nova_compute[250018]: 2026-01-20 14:33:38.113 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:33:38 compute-0 sudo[277387]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:33:38 compute-0 sudo[277387]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:33:38 compute-0 sudo[277387]: pam_unix(sudo:session): session closed for user root
Jan 20 14:33:38 compute-0 sudo[277412]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:33:38 compute-0 sudo[277412]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:33:38 compute-0 sudo[277412]: pam_unix(sudo:session): session closed for user root
Jan 20 14:33:38 compute-0 sudo[277437]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 14:33:38 compute-0 sudo[277437]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:33:38 compute-0 ceph-mon[74360]: pgmap v1294: 321 pgs: 321 active+clean; 156 MiB data, 492 MiB used, 21 GiB / 21 GiB avail; 41 KiB/s rd, 1.5 MiB/s wr, 60 op/s
Jan 20 14:33:38 compute-0 sudo[277437]: pam_unix(sudo:session): session closed for user root
Jan 20 14:33:38 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 14:33:38 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:33:38 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 20 14:33:38 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:33:38 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 20 14:33:38 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:33:38 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 0be7b56d-ee8c-4bba-8965-a718d705184a does not exist
Jan 20 14:33:38 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev bd91924d-16e6-4320-806f-fe51a629a6cd does not exist
Jan 20 14:33:38 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev bab35c49-19c0-4850-8700-31a610f88507 does not exist
Jan 20 14:33:38 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 20 14:33:38 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:33:38 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 20 14:33:38 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:33:38 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 14:33:38 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:33:38 compute-0 sudo[277494]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:33:38 compute-0 sudo[277494]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:33:38 compute-0 sudo[277494]: pam_unix(sudo:session): session closed for user root
Jan 20 14:33:39 compute-0 sudo[277519]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:33:39 compute-0 sudo[277519]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:33:39 compute-0 sudo[277519]: pam_unix(sudo:session): session closed for user root
Jan 20 14:33:39 compute-0 sudo[277544]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:33:39 compute-0 sudo[277544]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:33:39 compute-0 sudo[277544]: pam_unix(sudo:session): session closed for user root
Jan 20 14:33:39 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:33:39 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:33:39 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:33:39.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:33:39 compute-0 sudo[277569]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 14:33:39 compute-0 sudo[277569]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:33:39 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1295: 321 pgs: 321 active+clean; 180 MiB data, 496 MiB used, 21 GiB / 21 GiB avail; 25 KiB/s rd, 2.5 MiB/s wr, 37 op/s
Jan 20 14:33:39 compute-0 podman[277635]: 2026-01-20 14:33:39.456936225 +0000 UTC m=+0.022517226 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:33:39 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/246388539' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:33:39 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:33:39 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:33:39 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:33:39 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:33:39 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:33:39 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:33:39 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/1858393990' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:33:39 compute-0 podman[277635]: 2026-01-20 14:33:39.908334785 +0000 UTC m=+0.473915746 container create 76134fe4294434a575d8b3ab282cf8a63c6bf19030e7e0fb25e7befb89bec041 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_bouman, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 20 14:33:40 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:33:40 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:33:40 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:33:40.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:33:40 compute-0 systemd[1]: Started libpod-conmon-76134fe4294434a575d8b3ab282cf8a63c6bf19030e7e0fb25e7befb89bec041.scope.
Jan 20 14:33:40 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:33:40 compute-0 podman[277635]: 2026-01-20 14:33:40.286939926 +0000 UTC m=+0.852520917 container init 76134fe4294434a575d8b3ab282cf8a63c6bf19030e7e0fb25e7befb89bec041 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_bouman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:33:40 compute-0 podman[277635]: 2026-01-20 14:33:40.295432155 +0000 UTC m=+0.861013116 container start 76134fe4294434a575d8b3ab282cf8a63c6bf19030e7e0fb25e7befb89bec041 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_bouman, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:33:40 compute-0 funny_bouman[277651]: 167 167
Jan 20 14:33:40 compute-0 systemd[1]: libpod-76134fe4294434a575d8b3ab282cf8a63c6bf19030e7e0fb25e7befb89bec041.scope: Deactivated successfully.
Jan 20 14:33:40 compute-0 nova_compute[250018]: 2026-01-20 14:33:40.321 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:33:40 compute-0 podman[277635]: 2026-01-20 14:33:40.411941128 +0000 UTC m=+0.977522109 container attach 76134fe4294434a575d8b3ab282cf8a63c6bf19030e7e0fb25e7befb89bec041 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_bouman, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:33:40 compute-0 podman[277635]: 2026-01-20 14:33:40.412439762 +0000 UTC m=+0.978020743 container died 76134fe4294434a575d8b3ab282cf8a63c6bf19030e7e0fb25e7befb89bec041 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_bouman, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:33:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-811b02e215dac3d22b1da8232ddcf7fc849a196056621e11eb94695459f6d5f3-merged.mount: Deactivated successfully.
Jan 20 14:33:40 compute-0 podman[277635]: 2026-01-20 14:33:40.467762769 +0000 UTC m=+1.033343730 container remove 76134fe4294434a575d8b3ab282cf8a63c6bf19030e7e0fb25e7befb89bec041 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_bouman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:33:40 compute-0 systemd[1]: libpod-conmon-76134fe4294434a575d8b3ab282cf8a63c6bf19030e7e0fb25e7befb89bec041.scope: Deactivated successfully.
Jan 20 14:33:40 compute-0 podman[277677]: 2026-01-20 14:33:40.611949336 +0000 UTC m=+0.021741545 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:33:40 compute-0 podman[277677]: 2026-01-20 14:33:40.80359988 +0000 UTC m=+0.213392059 container create 9f02e10284963e9b53ad9e6055ffcf49713f627a6f3a67e38db66a3463ef6fc3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_ritchie, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 20 14:33:40 compute-0 nova_compute[250018]: 2026-01-20 14:33:40.889 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:33:40 compute-0 ceph-mon[74360]: pgmap v1295: 321 pgs: 321 active+clean; 180 MiB data, 496 MiB used, 21 GiB / 21 GiB avail; 25 KiB/s rd, 2.5 MiB/s wr, 37 op/s
Jan 20 14:33:41 compute-0 systemd[1]: Started libpod-conmon-9f02e10284963e9b53ad9e6055ffcf49713f627a6f3a67e38db66a3463ef6fc3.scope.
Jan 20 14:33:41 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:33:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aadbfb9bc553ea44d327095bbda731b618c598f87181c41a2debb98b572bca3d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:33:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aadbfb9bc553ea44d327095bbda731b618c598f87181c41a2debb98b572bca3d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:33:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aadbfb9bc553ea44d327095bbda731b618c598f87181c41a2debb98b572bca3d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:33:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aadbfb9bc553ea44d327095bbda731b618c598f87181c41a2debb98b572bca3d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:33:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aadbfb9bc553ea44d327095bbda731b618c598f87181c41a2debb98b572bca3d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 14:33:41 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:33:41 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:33:41 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:33:41.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:33:41 compute-0 podman[277677]: 2026-01-20 14:33:41.316286877 +0000 UTC m=+0.726079056 container init 9f02e10284963e9b53ad9e6055ffcf49713f627a6f3a67e38db66a3463ef6fc3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_ritchie, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:33:41 compute-0 podman[277677]: 2026-01-20 14:33:41.328306851 +0000 UTC m=+0.738099030 container start 9f02e10284963e9b53ad9e6055ffcf49713f627a6f3a67e38db66a3463ef6fc3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_ritchie, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:33:41 compute-0 podman[277677]: 2026-01-20 14:33:41.385162289 +0000 UTC m=+0.794954468 container attach 9f02e10284963e9b53ad9e6055ffcf49713f627a6f3a67e38db66a3463ef6fc3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_ritchie, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 20 14:33:41 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1296: 321 pgs: 321 active+clean; 213 MiB data, 514 MiB used, 20 GiB / 21 GiB avail; 44 KiB/s rd, 4.3 MiB/s wr, 70 op/s
Jan 20 14:33:41 compute-0 nova_compute[250018]: 2026-01-20 14:33:41.582 250022 DEBUG oslo_concurrency.lockutils [None req-5855ea1f-0be3-4a60-abc6-fc32c7e404a7 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Acquiring lock "e8b5659b-7304-4db5-a31a-dcea49a6666e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:33:41 compute-0 nova_compute[250018]: 2026-01-20 14:33:41.584 250022 DEBUG oslo_concurrency.lockutils [None req-5855ea1f-0be3-4a60-abc6-fc32c7e404a7 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Lock "e8b5659b-7304-4db5-a31a-dcea49a6666e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:33:41 compute-0 nova_compute[250018]: 2026-01-20 14:33:41.613 250022 DEBUG nova.compute.manager [None req-5855ea1f-0be3-4a60-abc6-fc32c7e404a7 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] [instance: e8b5659b-7304-4db5-a31a-dcea49a6666e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 14:33:41 compute-0 nova_compute[250018]: 2026-01-20 14:33:41.704 250022 DEBUG oslo_concurrency.lockutils [None req-5855ea1f-0be3-4a60-abc6-fc32c7e404a7 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:33:41 compute-0 nova_compute[250018]: 2026-01-20 14:33:41.704 250022 DEBUG oslo_concurrency.lockutils [None req-5855ea1f-0be3-4a60-abc6-fc32c7e404a7 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:33:41 compute-0 nova_compute[250018]: 2026-01-20 14:33:41.710 250022 DEBUG nova.virt.hardware [None req-5855ea1f-0be3-4a60-abc6-fc32c7e404a7 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 14:33:41 compute-0 nova_compute[250018]: 2026-01-20 14:33:41.711 250022 INFO nova.compute.claims [None req-5855ea1f-0be3-4a60-abc6-fc32c7e404a7 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] [instance: e8b5659b-7304-4db5-a31a-dcea49a6666e] Claim successful on node compute-0.ctlplane.example.com
Jan 20 14:33:41 compute-0 nova_compute[250018]: 2026-01-20 14:33:41.930 250022 DEBUG oslo_concurrency.processutils [None req-5855ea1f-0be3-4a60-abc6-fc32c7e404a7 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:33:42 compute-0 sudo[277700]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:33:42 compute-0 sudo[277700]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:33:42 compute-0 sudo[277700]: pam_unix(sudo:session): session closed for user root
Jan 20 14:33:42 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:33:42 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:33:42 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:33:42.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:33:42 compute-0 sudo[277739]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:33:42 compute-0 sudo[277739]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:33:42 compute-0 sudo[277739]: pam_unix(sudo:session): session closed for user root
Jan 20 14:33:42 compute-0 busy_ritchie[277694]: --> passed data devices: 0 physical, 1 LVM
Jan 20 14:33:42 compute-0 busy_ritchie[277694]: --> relative data size: 1.0
Jan 20 14:33:42 compute-0 busy_ritchie[277694]: --> All data devices are unavailable
Jan 20 14:33:42 compute-0 systemd[1]: libpod-9f02e10284963e9b53ad9e6055ffcf49713f627a6f3a67e38db66a3463ef6fc3.scope: Deactivated successfully.
Jan 20 14:33:42 compute-0 podman[277677]: 2026-01-20 14:33:42.197143385 +0000 UTC m=+1.606935564 container died 9f02e10284963e9b53ad9e6055ffcf49713f627a6f3a67e38db66a3463ef6fc3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_ritchie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 20 14:33:42 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:33:42 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:33:42 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2270142018' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:33:42 compute-0 nova_compute[250018]: 2026-01-20 14:33:42.701 250022 DEBUG oslo_concurrency.processutils [None req-5855ea1f-0be3-4a60-abc6-fc32c7e404a7 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.771s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:33:42 compute-0 nova_compute[250018]: 2026-01-20 14:33:42.712 250022 DEBUG nova.compute.provider_tree [None req-5855ea1f-0be3-4a60-abc6-fc32c7e404a7 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:33:42 compute-0 nova_compute[250018]: 2026-01-20 14:33:42.744 250022 DEBUG nova.scheduler.client.report [None req-5855ea1f-0be3-4a60-abc6-fc32c7e404a7 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:33:42 compute-0 nova_compute[250018]: 2026-01-20 14:33:42.822 250022 DEBUG oslo_concurrency.lockutils [None req-5855ea1f-0be3-4a60-abc6-fc32c7e404a7 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.118s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:33:42 compute-0 nova_compute[250018]: 2026-01-20 14:33:42.824 250022 DEBUG nova.compute.manager [None req-5855ea1f-0be3-4a60-abc6-fc32c7e404a7 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] [instance: e8b5659b-7304-4db5-a31a-dcea49a6666e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 14:33:42 compute-0 nova_compute[250018]: 2026-01-20 14:33:42.879 250022 DEBUG nova.compute.manager [None req-5855ea1f-0be3-4a60-abc6-fc32c7e404a7 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] [instance: e8b5659b-7304-4db5-a31a-dcea49a6666e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 20 14:33:42 compute-0 nova_compute[250018]: 2026-01-20 14:33:42.880 250022 DEBUG nova.network.neutron [None req-5855ea1f-0be3-4a60-abc6-fc32c7e404a7 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] [instance: e8b5659b-7304-4db5-a31a-dcea49a6666e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 14:33:42 compute-0 nova_compute[250018]: 2026-01-20 14:33:42.894 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:33:42 compute-0 nova_compute[250018]: 2026-01-20 14:33:42.916 250022 INFO nova.virt.libvirt.driver [None req-5855ea1f-0be3-4a60-abc6-fc32c7e404a7 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] [instance: e8b5659b-7304-4db5-a31a-dcea49a6666e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 14:33:42 compute-0 nova_compute[250018]: 2026-01-20 14:33:42.943 250022 DEBUG nova.compute.manager [None req-5855ea1f-0be3-4a60-abc6-fc32c7e404a7 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] [instance: e8b5659b-7304-4db5-a31a-dcea49a6666e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 14:33:43 compute-0 nova_compute[250018]: 2026-01-20 14:33:43.073 250022 DEBUG nova.compute.manager [None req-5855ea1f-0be3-4a60-abc6-fc32c7e404a7 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] [instance: e8b5659b-7304-4db5-a31a-dcea49a6666e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 14:33:43 compute-0 nova_compute[250018]: 2026-01-20 14:33:43.075 250022 DEBUG nova.virt.libvirt.driver [None req-5855ea1f-0be3-4a60-abc6-fc32c7e404a7 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] [instance: e8b5659b-7304-4db5-a31a-dcea49a6666e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 14:33:43 compute-0 nova_compute[250018]: 2026-01-20 14:33:43.075 250022 INFO nova.virt.libvirt.driver [None req-5855ea1f-0be3-4a60-abc6-fc32c7e404a7 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] [instance: e8b5659b-7304-4db5-a31a-dcea49a6666e] Creating image(s)
Jan 20 14:33:43 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:33:43 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:33:43 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:33:43.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:33:43 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1297: 321 pgs: 321 active+clean; 213 MiB data, 514 MiB used, 20 GiB / 21 GiB avail; 40 KiB/s rd, 3.9 MiB/s wr, 63 op/s
Jan 20 14:33:43 compute-0 ceph-mon[74360]: pgmap v1296: 321 pgs: 321 active+clean; 213 MiB data, 514 MiB used, 20 GiB / 21 GiB avail; 44 KiB/s rd, 4.3 MiB/s wr, 70 op/s
Jan 20 14:33:43 compute-0 nova_compute[250018]: 2026-01-20 14:33:43.499 250022 DEBUG nova.storage.rbd_utils [None req-5855ea1f-0be3-4a60-abc6-fc32c7e404a7 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] rbd image e8b5659b-7304-4db5-a31a-dcea49a6666e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:33:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-aadbfb9bc553ea44d327095bbda731b618c598f87181c41a2debb98b572bca3d-merged.mount: Deactivated successfully.
Jan 20 14:33:43 compute-0 nova_compute[250018]: 2026-01-20 14:33:43.569 250022 DEBUG nova.storage.rbd_utils [None req-5855ea1f-0be3-4a60-abc6-fc32c7e404a7 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] rbd image e8b5659b-7304-4db5-a31a-dcea49a6666e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:33:43 compute-0 nova_compute[250018]: 2026-01-20 14:33:43.605 250022 DEBUG nova.storage.rbd_utils [None req-5855ea1f-0be3-4a60-abc6-fc32c7e404a7 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] rbd image e8b5659b-7304-4db5-a31a-dcea49a6666e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:33:43 compute-0 nova_compute[250018]: 2026-01-20 14:33:43.608 250022 DEBUG oslo_concurrency.processutils [None req-5855ea1f-0be3-4a60-abc6-fc32c7e404a7 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:33:43 compute-0 nova_compute[250018]: 2026-01-20 14:33:43.633 250022 DEBUG nova.policy [None req-5855ea1f-0be3-4a60-abc6-fc32c7e404a7 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '729ca8a2a7414735af25d05df4a563b9', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '48488e875f2e472f97f07cc7ee07e0be', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 20 14:33:43 compute-0 nova_compute[250018]: 2026-01-20 14:33:43.636 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:33:43 compute-0 nova_compute[250018]: 2026-01-20 14:33:43.668 250022 DEBUG oslo_concurrency.processutils [None req-5855ea1f-0be3-4a60-abc6-fc32c7e404a7 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:33:43 compute-0 nova_compute[250018]: 2026-01-20 14:33:43.669 250022 DEBUG oslo_concurrency.lockutils [None req-5855ea1f-0be3-4a60-abc6-fc32c7e404a7 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Acquiring lock "82d5c1918fd7c974214c7a48c1793a7a82560462" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:33:43 compute-0 nova_compute[250018]: 2026-01-20 14:33:43.669 250022 DEBUG oslo_concurrency.lockutils [None req-5855ea1f-0be3-4a60-abc6-fc32c7e404a7 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:33:43 compute-0 nova_compute[250018]: 2026-01-20 14:33:43.669 250022 DEBUG oslo_concurrency.lockutils [None req-5855ea1f-0be3-4a60-abc6-fc32c7e404a7 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:33:43 compute-0 podman[277677]: 2026-01-20 14:33:43.688227793 +0000 UTC m=+3.098019972 container remove 9f02e10284963e9b53ad9e6055ffcf49713f627a6f3a67e38db66a3463ef6fc3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_ritchie, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 20 14:33:43 compute-0 nova_compute[250018]: 2026-01-20 14:33:43.701 250022 DEBUG nova.storage.rbd_utils [None req-5855ea1f-0be3-4a60-abc6-fc32c7e404a7 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] rbd image e8b5659b-7304-4db5-a31a-dcea49a6666e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:33:43 compute-0 nova_compute[250018]: 2026-01-20 14:33:43.705 250022 DEBUG oslo_concurrency.processutils [None req-5855ea1f-0be3-4a60-abc6-fc32c7e404a7 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 e8b5659b-7304-4db5-a31a-dcea49a6666e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:33:43 compute-0 sudo[277569]: pam_unix(sudo:session): session closed for user root
Jan 20 14:33:43 compute-0 systemd[1]: libpod-conmon-9f02e10284963e9b53ad9e6055ffcf49713f627a6f3a67e38db66a3463ef6fc3.scope: Deactivated successfully.
Jan 20 14:33:43 compute-0 sudo[277871]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:33:43 compute-0 sudo[277871]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:33:43 compute-0 sudo[277871]: pam_unix(sudo:session): session closed for user root
Jan 20 14:33:43 compute-0 sudo[277911]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:33:43 compute-0 sudo[277911]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:33:43 compute-0 sudo[277911]: pam_unix(sudo:session): session closed for user root
Jan 20 14:33:43 compute-0 sudo[277936]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:33:43 compute-0 sudo[277936]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:33:43 compute-0 sudo[277936]: pam_unix(sudo:session): session closed for user root
Jan 20 14:33:43 compute-0 sudo[277961]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- lvm list --format json
Jan 20 14:33:43 compute-0 sudo[277961]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:33:44 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:33:44 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:33:44 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:33:44.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:33:44 compute-0 nova_compute[250018]: 2026-01-20 14:33:44.319 250022 DEBUG oslo_concurrency.processutils [None req-5855ea1f-0be3-4a60-abc6-fc32c7e404a7 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 e8b5659b-7304-4db5-a31a-dcea49a6666e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.615s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:33:44 compute-0 podman[278029]: 2026-01-20 14:33:44.262519027 +0000 UTC m=+0.028684612 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:33:44 compute-0 nova_compute[250018]: 2026-01-20 14:33:44.396 250022 DEBUG nova.storage.rbd_utils [None req-5855ea1f-0be3-4a60-abc6-fc32c7e404a7 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] resizing rbd image e8b5659b-7304-4db5-a31a-dcea49a6666e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 20 14:33:44 compute-0 podman[278029]: 2026-01-20 14:33:44.640724759 +0000 UTC m=+0.406890314 container create 21b6280f972e138f75d265d93e8ac136f5aa5c60b97e2874a02f5a938a665431 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_kepler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 20 14:33:44 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/1412916936' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:33:44 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/2270142018' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:33:44 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/2864710443' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:33:44 compute-0 ceph-mon[74360]: pgmap v1297: 321 pgs: 321 active+clean; 213 MiB data, 514 MiB used, 20 GiB / 21 GiB avail; 40 KiB/s rd, 3.9 MiB/s wr, 63 op/s
Jan 20 14:33:44 compute-0 nova_compute[250018]: 2026-01-20 14:33:44.675 250022 DEBUG nova.objects.instance [None req-5855ea1f-0be3-4a60-abc6-fc32c7e404a7 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Lazy-loading 'migration_context' on Instance uuid e8b5659b-7304-4db5-a31a-dcea49a6666e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:33:44 compute-0 nova_compute[250018]: 2026-01-20 14:33:44.805 250022 DEBUG nova.virt.libvirt.driver [None req-5855ea1f-0be3-4a60-abc6-fc32c7e404a7 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] [instance: e8b5659b-7304-4db5-a31a-dcea49a6666e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 14:33:44 compute-0 nova_compute[250018]: 2026-01-20 14:33:44.805 250022 DEBUG nova.virt.libvirt.driver [None req-5855ea1f-0be3-4a60-abc6-fc32c7e404a7 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] [instance: e8b5659b-7304-4db5-a31a-dcea49a6666e] Ensure instance console log exists: /var/lib/nova/instances/e8b5659b-7304-4db5-a31a-dcea49a6666e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 14:33:44 compute-0 nova_compute[250018]: 2026-01-20 14:33:44.806 250022 DEBUG oslo_concurrency.lockutils [None req-5855ea1f-0be3-4a60-abc6-fc32c7e404a7 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:33:44 compute-0 nova_compute[250018]: 2026-01-20 14:33:44.806 250022 DEBUG oslo_concurrency.lockutils [None req-5855ea1f-0be3-4a60-abc6-fc32c7e404a7 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:33:44 compute-0 nova_compute[250018]: 2026-01-20 14:33:44.807 250022 DEBUG oslo_concurrency.lockutils [None req-5855ea1f-0be3-4a60-abc6-fc32c7e404a7 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:33:44 compute-0 systemd[1]: Started libpod-conmon-21b6280f972e138f75d265d93e8ac136f5aa5c60b97e2874a02f5a938a665431.scope.
Jan 20 14:33:44 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:33:45 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:33:45 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:33:45 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:33:45.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:33:45 compute-0 podman[278029]: 2026-01-20 14:33:45.158143313 +0000 UTC m=+0.924308958 container init 21b6280f972e138f75d265d93e8ac136f5aa5c60b97e2874a02f5a938a665431 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_kepler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 20 14:33:45 compute-0 podman[278029]: 2026-01-20 14:33:45.17030042 +0000 UTC m=+0.936466005 container start 21b6280f972e138f75d265d93e8ac136f5aa5c60b97e2874a02f5a938a665431 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_kepler, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:33:45 compute-0 optimistic_kepler[278118]: 167 167
Jan 20 14:33:45 compute-0 systemd[1]: libpod-21b6280f972e138f75d265d93e8ac136f5aa5c60b97e2874a02f5a938a665431.scope: Deactivated successfully.
Jan 20 14:33:45 compute-0 conmon[278118]: conmon 21b6280f972e138f75d2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-21b6280f972e138f75d265d93e8ac136f5aa5c60b97e2874a02f5a938a665431.scope/container/memory.events
Jan 20 14:33:45 compute-0 podman[278029]: 2026-01-20 14:33:45.311408885 +0000 UTC m=+1.077574540 container attach 21b6280f972e138f75d265d93e8ac136f5aa5c60b97e2874a02f5a938a665431 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_kepler, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Jan 20 14:33:45 compute-0 podman[278029]: 2026-01-20 14:33:45.313819199 +0000 UTC m=+1.079984794 container died 21b6280f972e138f75d265d93e8ac136f5aa5c60b97e2874a02f5a938a665431 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_kepler, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:33:45 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1298: 321 pgs: 321 active+clean; 229 MiB data, 520 MiB used, 20 GiB / 21 GiB avail; 968 KiB/s rd, 4.1 MiB/s wr, 110 op/s
Jan 20 14:33:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-678f60586d39fe2b1e12955bbe2f347839643bece3fbe14398bdb8b17e2709fc-merged.mount: Deactivated successfully.
Jan 20 14:33:45 compute-0 nova_compute[250018]: 2026-01-20 14:33:45.891 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:33:46 compute-0 podman[278029]: 2026-01-20 14:33:46.089566191 +0000 UTC m=+1.855731776 container remove 21b6280f972e138f75d265d93e8ac136f5aa5c60b97e2874a02f5a938a665431 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_kepler, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:33:46 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:33:46 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:33:46 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:33:46.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:33:46 compute-0 systemd[1]: libpod-conmon-21b6280f972e138f75d265d93e8ac136f5aa5c60b97e2874a02f5a938a665431.scope: Deactivated successfully.
Jan 20 14:33:46 compute-0 ceph-mon[74360]: pgmap v1298: 321 pgs: 321 active+clean; 229 MiB data, 520 MiB used, 20 GiB / 21 GiB avail; 968 KiB/s rd, 4.1 MiB/s wr, 110 op/s
Jan 20 14:33:46 compute-0 podman[278144]: 2026-01-20 14:33:46.338136296 +0000 UTC m=+0.063366305 container create 27a69024f51fb3663e869f9954971538a86586202c634cfeb523c0e3e5a6e4c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_poitras, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:33:46 compute-0 systemd[1]: Started libpod-conmon-27a69024f51fb3663e869f9954971538a86586202c634cfeb523c0e3e5a6e4c2.scope.
Jan 20 14:33:46 compute-0 podman[278144]: 2026-01-20 14:33:46.296123546 +0000 UTC m=+0.021353575 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:33:46 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:33:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2027e480167b99efbdfdf9a6f88a61e6a5f131c2d5fbf045ee112dd514407ca6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:33:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2027e480167b99efbdfdf9a6f88a61e6a5f131c2d5fbf045ee112dd514407ca6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:33:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2027e480167b99efbdfdf9a6f88a61e6a5f131c2d5fbf045ee112dd514407ca6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:33:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2027e480167b99efbdfdf9a6f88a61e6a5f131c2d5fbf045ee112dd514407ca6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:33:46 compute-0 podman[278144]: 2026-01-20 14:33:46.465269485 +0000 UTC m=+0.190499564 container init 27a69024f51fb3663e869f9954971538a86586202c634cfeb523c0e3e5a6e4c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_poitras, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:33:46 compute-0 podman[278144]: 2026-01-20 14:33:46.473784054 +0000 UTC m=+0.199014063 container start 27a69024f51fb3663e869f9954971538a86586202c634cfeb523c0e3e5a6e4c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_poitras, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 20 14:33:46 compute-0 podman[278144]: 2026-01-20 14:33:46.503240816 +0000 UTC m=+0.228470875 container attach 27a69024f51fb3663e869f9954971538a86586202c634cfeb523c0e3e5a6e4c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_poitras, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 20 14:33:47 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:33:47 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:33:47 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:33:47.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:33:47 compute-0 clever_poitras[278161]: {
Jan 20 14:33:47 compute-0 clever_poitras[278161]:     "0": [
Jan 20 14:33:47 compute-0 clever_poitras[278161]:         {
Jan 20 14:33:47 compute-0 clever_poitras[278161]:             "devices": [
Jan 20 14:33:47 compute-0 clever_poitras[278161]:                 "/dev/loop3"
Jan 20 14:33:47 compute-0 clever_poitras[278161]:             ],
Jan 20 14:33:47 compute-0 clever_poitras[278161]:             "lv_name": "ceph_lv0",
Jan 20 14:33:47 compute-0 clever_poitras[278161]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:33:47 compute-0 clever_poitras[278161]:             "lv_size": "7511998464",
Jan 20 14:33:47 compute-0 clever_poitras[278161]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e399cf45-e6b6-5393-99f1-75c601d3f188,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afc3740a-ccee-46f8-b343-22648cc89376,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 20 14:33:47 compute-0 clever_poitras[278161]:             "lv_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 14:33:47 compute-0 clever_poitras[278161]:             "name": "ceph_lv0",
Jan 20 14:33:47 compute-0 clever_poitras[278161]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:33:47 compute-0 clever_poitras[278161]:             "tags": {
Jan 20 14:33:47 compute-0 clever_poitras[278161]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:33:47 compute-0 clever_poitras[278161]:                 "ceph.block_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 14:33:47 compute-0 clever_poitras[278161]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 14:33:47 compute-0 clever_poitras[278161]:                 "ceph.cluster_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 14:33:47 compute-0 clever_poitras[278161]:                 "ceph.cluster_name": "ceph",
Jan 20 14:33:47 compute-0 clever_poitras[278161]:                 "ceph.crush_device_class": "",
Jan 20 14:33:47 compute-0 clever_poitras[278161]:                 "ceph.encrypted": "0",
Jan 20 14:33:47 compute-0 clever_poitras[278161]:                 "ceph.osd_fsid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 14:33:47 compute-0 clever_poitras[278161]:                 "ceph.osd_id": "0",
Jan 20 14:33:47 compute-0 clever_poitras[278161]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 14:33:47 compute-0 clever_poitras[278161]:                 "ceph.type": "block",
Jan 20 14:33:47 compute-0 clever_poitras[278161]:                 "ceph.vdo": "0"
Jan 20 14:33:47 compute-0 clever_poitras[278161]:             },
Jan 20 14:33:47 compute-0 clever_poitras[278161]:             "type": "block",
Jan 20 14:33:47 compute-0 clever_poitras[278161]:             "vg_name": "ceph_vg0"
Jan 20 14:33:47 compute-0 clever_poitras[278161]:         }
Jan 20 14:33:47 compute-0 clever_poitras[278161]:     ]
Jan 20 14:33:47 compute-0 clever_poitras[278161]: }
Jan 20 14:33:47 compute-0 systemd[1]: libpod-27a69024f51fb3663e869f9954971538a86586202c634cfeb523c0e3e5a6e4c2.scope: Deactivated successfully.
Jan 20 14:33:47 compute-0 podman[278144]: 2026-01-20 14:33:47.257571412 +0000 UTC m=+0.982801441 container died 27a69024f51fb3663e869f9954971538a86586202c634cfeb523c0e3e5a6e4c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_poitras, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 14:33:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-2027e480167b99efbdfdf9a6f88a61e6a5f131c2d5fbf045ee112dd514407ca6-merged.mount: Deactivated successfully.
Jan 20 14:33:47 compute-0 podman[278144]: 2026-01-20 14:33:47.380771475 +0000 UTC m=+1.106001484 container remove 27a69024f51fb3663e869f9954971538a86586202c634cfeb523c0e3e5a6e4c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_poitras, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:33:47 compute-0 systemd[1]: libpod-conmon-27a69024f51fb3663e869f9954971538a86586202c634cfeb523c0e3e5a6e4c2.scope: Deactivated successfully.
Jan 20 14:33:47 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:33:47 compute-0 sudo[277961]: pam_unix(sudo:session): session closed for user root
Jan 20 14:33:47 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1299: 321 pgs: 321 active+clean; 252 MiB data, 528 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 3.7 MiB/s wr, 111 op/s
Jan 20 14:33:47 compute-0 sudo[278185]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:33:47 compute-0 sudo[278185]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:33:47 compute-0 sudo[278185]: pam_unix(sudo:session): session closed for user root
Jan 20 14:33:47 compute-0 sudo[278210]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:33:47 compute-0 sudo[278210]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:33:47 compute-0 sudo[278210]: pam_unix(sudo:session): session closed for user root
Jan 20 14:33:47 compute-0 sudo[278235]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:33:47 compute-0 sudo[278235]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:33:47 compute-0 sudo[278235]: pam_unix(sudo:session): session closed for user root
Jan 20 14:33:47 compute-0 sudo[278260]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- raw list --format json
Jan 20 14:33:47 compute-0 sudo[278260]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:33:47 compute-0 podman[278325]: 2026-01-20 14:33:47.961019648 +0000 UTC m=+0.066994092 container create a49efaba16bc8c8eed56663dffea621a63ee848951ac6c60da00cbd46075cfc5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_raman, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:33:47 compute-0 systemd[1]: Started libpod-conmon-a49efaba16bc8c8eed56663dffea621a63ee848951ac6c60da00cbd46075cfc5.scope.
Jan 20 14:33:48 compute-0 podman[278325]: 2026-01-20 14:33:47.917059667 +0000 UTC m=+0.023034121 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:33:48 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:33:48 compute-0 podman[278325]: 2026-01-20 14:33:48.038104481 +0000 UTC m=+0.144078935 container init a49efaba16bc8c8eed56663dffea621a63ee848951ac6c60da00cbd46075cfc5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_raman, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 20 14:33:48 compute-0 podman[278325]: 2026-01-20 14:33:48.044818762 +0000 UTC m=+0.150793236 container start a49efaba16bc8c8eed56663dffea621a63ee848951ac6c60da00cbd46075cfc5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_raman, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:33:48 compute-0 condescending_raman[278341]: 167 167
Jan 20 14:33:48 compute-0 systemd[1]: libpod-a49efaba16bc8c8eed56663dffea621a63ee848951ac6c60da00cbd46075cfc5.scope: Deactivated successfully.
Jan 20 14:33:48 compute-0 podman[278325]: 2026-01-20 14:33:48.049742434 +0000 UTC m=+0.155716918 container attach a49efaba16bc8c8eed56663dffea621a63ee848951ac6c60da00cbd46075cfc5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_raman, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:33:48 compute-0 podman[278325]: 2026-01-20 14:33:48.050095934 +0000 UTC m=+0.156070378 container died a49efaba16bc8c8eed56663dffea621a63ee848951ac6c60da00cbd46075cfc5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_raman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2)
Jan 20 14:33:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-389ace7c07179ebb4eabc3c4d7b07f8ae2107e2a214d17dc8643d875cc00aaa9-merged.mount: Deactivated successfully.
Jan 20 14:33:48 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:33:48 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:33:48 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:33:48.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:33:48 compute-0 podman[278325]: 2026-01-20 14:33:48.152130787 +0000 UTC m=+0.258105221 container remove a49efaba16bc8c8eed56663dffea621a63ee848951ac6c60da00cbd46075cfc5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_raman, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 20 14:33:48 compute-0 systemd[1]: libpod-conmon-a49efaba16bc8c8eed56663dffea621a63ee848951ac6c60da00cbd46075cfc5.scope: Deactivated successfully.
Jan 20 14:33:48 compute-0 podman[278365]: 2026-01-20 14:33:48.305294837 +0000 UTC m=+0.024012547 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:33:48 compute-0 nova_compute[250018]: 2026-01-20 14:33:48.640 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:33:48 compute-0 podman[278365]: 2026-01-20 14:33:48.650797048 +0000 UTC m=+0.369514768 container create 09bd7c0a80c1701808e9adf5deed58c7bdbe56d104c74985f3f041cf491c2d23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_snyder, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 20 14:33:48 compute-0 nova_compute[250018]: 2026-01-20 14:33:48.933 250022 DEBUG nova.network.neutron [None req-5855ea1f-0be3-4a60-abc6-fc32c7e404a7 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] [instance: e8b5659b-7304-4db5-a31a-dcea49a6666e] Successfully created port: f25a0319-31f3-4675-8d35-5881fbada019 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 20 14:33:49 compute-0 ceph-mon[74360]: pgmap v1299: 321 pgs: 321 active+clean; 252 MiB data, 528 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 3.7 MiB/s wr, 111 op/s
Jan 20 14:33:49 compute-0 systemd[1]: Started libpod-conmon-09bd7c0a80c1701808e9adf5deed58c7bdbe56d104c74985f3f041cf491c2d23.scope.
Jan 20 14:33:49 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:33:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f83d75cdd0d0913e46f46941849c0610780a199dfcfe31193993bb52f38ea7d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:33:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f83d75cdd0d0913e46f46941849c0610780a199dfcfe31193993bb52f38ea7d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:33:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f83d75cdd0d0913e46f46941849c0610780a199dfcfe31193993bb52f38ea7d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:33:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f83d75cdd0d0913e46f46941849c0610780a199dfcfe31193993bb52f38ea7d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:33:49 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:33:49 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:33:49 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:33:49.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:33:49 compute-0 podman[278365]: 2026-01-20 14:33:49.201711003 +0000 UTC m=+0.920428723 container init 09bd7c0a80c1701808e9adf5deed58c7bdbe56d104c74985f3f041cf491c2d23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_snyder, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True)
Jan 20 14:33:49 compute-0 podman[278365]: 2026-01-20 14:33:49.210112409 +0000 UTC m=+0.928830109 container start 09bd7c0a80c1701808e9adf5deed58c7bdbe56d104c74985f3f041cf491c2d23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_snyder, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:33:49 compute-0 podman[278365]: 2026-01-20 14:33:49.214515937 +0000 UTC m=+0.933233627 container attach 09bd7c0a80c1701808e9adf5deed58c7bdbe56d104c74985f3f041cf491c2d23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_snyder, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:33:49 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1300: 321 pgs: 321 active+clean; 260 MiB data, 535 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 4.1 MiB/s wr, 133 op/s
Jan 20 14:33:50 compute-0 agitated_snyder[278383]: {
Jan 20 14:33:50 compute-0 agitated_snyder[278383]:     "afc3740a-ccee-46f8-b343-22648cc89376": {
Jan 20 14:33:50 compute-0 agitated_snyder[278383]:         "ceph_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 14:33:50 compute-0 agitated_snyder[278383]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 20 14:33:50 compute-0 agitated_snyder[278383]:         "osd_id": 0,
Jan 20 14:33:50 compute-0 agitated_snyder[278383]:         "osd_uuid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 14:33:50 compute-0 agitated_snyder[278383]:         "type": "bluestore"
Jan 20 14:33:50 compute-0 agitated_snyder[278383]:     }
Jan 20 14:33:50 compute-0 agitated_snyder[278383]: }
Jan 20 14:33:50 compute-0 systemd[1]: libpod-09bd7c0a80c1701808e9adf5deed58c7bdbe56d104c74985f3f041cf491c2d23.scope: Deactivated successfully.
Jan 20 14:33:50 compute-0 podman[278365]: 2026-01-20 14:33:50.064654399 +0000 UTC m=+1.783372099 container died 09bd7c0a80c1701808e9adf5deed58c7bdbe56d104c74985f3f041cf491c2d23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_snyder, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 20 14:33:50 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:33:50 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:33:50 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:33:50.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:33:50 compute-0 ceph-mon[74360]: pgmap v1300: 321 pgs: 321 active+clean; 260 MiB data, 535 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 4.1 MiB/s wr, 133 op/s
Jan 20 14:33:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-4f83d75cdd0d0913e46f46941849c0610780a199dfcfe31193993bb52f38ea7d-merged.mount: Deactivated successfully.
Jan 20 14:33:50 compute-0 podman[278365]: 2026-01-20 14:33:50.618895725 +0000 UTC m=+2.337613415 container remove 09bd7c0a80c1701808e9adf5deed58c7bdbe56d104c74985f3f041cf491c2d23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_snyder, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 20 14:33:50 compute-0 systemd[1]: libpod-conmon-09bd7c0a80c1701808e9adf5deed58c7bdbe56d104c74985f3f041cf491c2d23.scope: Deactivated successfully.
Jan 20 14:33:50 compute-0 sudo[278260]: pam_unix(sudo:session): session closed for user root
Jan 20 14:33:50 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 14:33:50 compute-0 nova_compute[250018]: 2026-01-20 14:33:50.830 250022 DEBUG nova.network.neutron [None req-5855ea1f-0be3-4a60-abc6-fc32c7e404a7 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] [instance: e8b5659b-7304-4db5-a31a-dcea49a6666e] Successfully updated port: f25a0319-31f3-4675-8d35-5881fbada019 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 20 14:33:50 compute-0 nova_compute[250018]: 2026-01-20 14:33:50.854 250022 DEBUG oslo_concurrency.lockutils [None req-5855ea1f-0be3-4a60-abc6-fc32c7e404a7 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Acquiring lock "refresh_cache-e8b5659b-7304-4db5-a31a-dcea49a6666e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:33:50 compute-0 nova_compute[250018]: 2026-01-20 14:33:50.854 250022 DEBUG oslo_concurrency.lockutils [None req-5855ea1f-0be3-4a60-abc6-fc32c7e404a7 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Acquired lock "refresh_cache-e8b5659b-7304-4db5-a31a-dcea49a6666e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:33:50 compute-0 nova_compute[250018]: 2026-01-20 14:33:50.855 250022 DEBUG nova.network.neutron [None req-5855ea1f-0be3-4a60-abc6-fc32c7e404a7 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] [instance: e8b5659b-7304-4db5-a31a-dcea49a6666e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:33:50 compute-0 nova_compute[250018]: 2026-01-20 14:33:50.893 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:33:51 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:33:51 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:33:51 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:33:51.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:33:51 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:33:51 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 14:33:51 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1301: 321 pgs: 321 active+clean; 260 MiB data, 535 MiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 3.3 MiB/s wr, 190 op/s
Jan 20 14:33:51 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:33:51 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 98bce78c-8b19-44ee-802c-7239c971c8ee does not exist
Jan 20 14:33:51 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev fda2b42c-3549-4095-9497-0d67fb471b8f does not exist
Jan 20 14:33:51 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev e9529ff4-1cf1-41da-aa3a-02e5ce9fa7aa does not exist
Jan 20 14:33:51 compute-0 sudo[278417]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:33:51 compute-0 sudo[278417]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:33:51 compute-0 sudo[278417]: pam_unix(sudo:session): session closed for user root
Jan 20 14:33:51 compute-0 sudo[278442]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 14:33:51 compute-0 sudo[278442]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:33:51 compute-0 sudo[278442]: pam_unix(sudo:session): session closed for user root
Jan 20 14:33:51 compute-0 nova_compute[250018]: 2026-01-20 14:33:51.884 250022 DEBUG nova.network.neutron [None req-5855ea1f-0be3-4a60-abc6-fc32c7e404a7 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] [instance: e8b5659b-7304-4db5-a31a-dcea49a6666e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 14:33:52 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:33:52 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:33:52 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:33:52.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:33:52 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:33:52 compute-0 ceph-mon[74360]: pgmap v1301: 321 pgs: 321 active+clean; 260 MiB data, 535 MiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 3.3 MiB/s wr, 190 op/s
Jan 20 14:33:52 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:33:52 compute-0 nova_compute[250018]: 2026-01-20 14:33:52.334 250022 DEBUG nova.compute.manager [req-960cc453-6fb6-46e3-8e73-e897a89f51a1 req-51af17e5-53a2-467a-868c-0163bade5bb5 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: e8b5659b-7304-4db5-a31a-dcea49a6666e] Received event network-changed-f25a0319-31f3-4675-8d35-5881fbada019 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:33:52 compute-0 nova_compute[250018]: 2026-01-20 14:33:52.334 250022 DEBUG nova.compute.manager [req-960cc453-6fb6-46e3-8e73-e897a89f51a1 req-51af17e5-53a2-467a-868c-0163bade5bb5 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: e8b5659b-7304-4db5-a31a-dcea49a6666e] Refreshing instance network info cache due to event network-changed-f25a0319-31f3-4675-8d35-5881fbada019. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:33:52 compute-0 nova_compute[250018]: 2026-01-20 14:33:52.334 250022 DEBUG oslo_concurrency.lockutils [req-960cc453-6fb6-46e3-8e73-e897a89f51a1 req-51af17e5-53a2-467a-868c-0163bade5bb5 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-e8b5659b-7304-4db5-a31a-dcea49a6666e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:33:52 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:33:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:33:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:33:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:33:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:33:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:33:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:33:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Optimize plan auto_2026-01-20_14:33:52
Jan 20 14:33:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 14:33:52 compute-0 ceph-mgr[74653]: [balancer INFO root] do_upmap
Jan 20 14:33:52 compute-0 ceph-mgr[74653]: [balancer INFO root] pools ['default.rgw.control', 'default.rgw.log', 'volumes', '.rgw.root', 'vms', 'images', 'cephfs.cephfs.meta', '.mgr', 'backups', 'cephfs.cephfs.data', 'default.rgw.meta']
Jan 20 14:33:52 compute-0 ceph-mgr[74653]: [balancer INFO root] prepared 0/10 changes
Jan 20 14:33:53 compute-0 nova_compute[250018]: 2026-01-20 14:33:53.071 250022 DEBUG nova.network.neutron [None req-5855ea1f-0be3-4a60-abc6-fc32c7e404a7 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] [instance: e8b5659b-7304-4db5-a31a-dcea49a6666e] Updating instance_info_cache with network_info: [{"id": "f25a0319-31f3-4675-8d35-5881fbada019", "address": "fa:16:3e:f3:26:d3", "network": {"id": "b774e474-3e68-434c-8017-93bd087d2285", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-829886895-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48488e875f2e472f97f07cc7ee07e0be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf25a0319-31", "ovs_interfaceid": "f25a0319-31f3-4675-8d35-5881fbada019", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:33:53 compute-0 nova_compute[250018]: 2026-01-20 14:33:53.111 250022 DEBUG oslo_concurrency.lockutils [None req-5855ea1f-0be3-4a60-abc6-fc32c7e404a7 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Releasing lock "refresh_cache-e8b5659b-7304-4db5-a31a-dcea49a6666e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:33:53 compute-0 nova_compute[250018]: 2026-01-20 14:33:53.111 250022 DEBUG nova.compute.manager [None req-5855ea1f-0be3-4a60-abc6-fc32c7e404a7 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] [instance: e8b5659b-7304-4db5-a31a-dcea49a6666e] Instance network_info: |[{"id": "f25a0319-31f3-4675-8d35-5881fbada019", "address": "fa:16:3e:f3:26:d3", "network": {"id": "b774e474-3e68-434c-8017-93bd087d2285", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-829886895-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48488e875f2e472f97f07cc7ee07e0be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf25a0319-31", "ovs_interfaceid": "f25a0319-31f3-4675-8d35-5881fbada019", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 20 14:33:53 compute-0 nova_compute[250018]: 2026-01-20 14:33:53.112 250022 DEBUG oslo_concurrency.lockutils [req-960cc453-6fb6-46e3-8e73-e897a89f51a1 req-51af17e5-53a2-467a-868c-0163bade5bb5 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-e8b5659b-7304-4db5-a31a-dcea49a6666e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:33:53 compute-0 nova_compute[250018]: 2026-01-20 14:33:53.112 250022 DEBUG nova.network.neutron [req-960cc453-6fb6-46e3-8e73-e897a89f51a1 req-51af17e5-53a2-467a-868c-0163bade5bb5 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: e8b5659b-7304-4db5-a31a-dcea49a6666e] Refreshing network info cache for port f25a0319-31f3-4675-8d35-5881fbada019 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:33:53 compute-0 nova_compute[250018]: 2026-01-20 14:33:53.115 250022 DEBUG nova.virt.libvirt.driver [None req-5855ea1f-0be3-4a60-abc6-fc32c7e404a7 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] [instance: e8b5659b-7304-4db5-a31a-dcea49a6666e] Start _get_guest_xml network_info=[{"id": "f25a0319-31f3-4675-8d35-5881fbada019", "address": "fa:16:3e:f3:26:d3", "network": {"id": "b774e474-3e68-434c-8017-93bd087d2285", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-829886895-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48488e875f2e472f97f07cc7ee07e0be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf25a0319-31", "ovs_interfaceid": "f25a0319-31f3-4675-8d35-5881fbada019", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encrypted': False, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'guest_format': None, 'size': 0, 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 14:33:53 compute-0 nova_compute[250018]: 2026-01-20 14:33:53.122 250022 WARNING nova.virt.libvirt.driver [None req-5855ea1f-0be3-4a60-abc6-fc32c7e404a7 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:33:53 compute-0 nova_compute[250018]: 2026-01-20 14:33:53.132 250022 DEBUG nova.virt.libvirt.host [None req-5855ea1f-0be3-4a60-abc6-fc32c7e404a7 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 14:33:53 compute-0 nova_compute[250018]: 2026-01-20 14:33:53.132 250022 DEBUG nova.virt.libvirt.host [None req-5855ea1f-0be3-4a60-abc6-fc32c7e404a7 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 14:33:53 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:33:53 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:33:53 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:33:53.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:33:53 compute-0 nova_compute[250018]: 2026-01-20 14:33:53.148 250022 DEBUG nova.virt.libvirt.host [None req-5855ea1f-0be3-4a60-abc6-fc32c7e404a7 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 14:33:53 compute-0 nova_compute[250018]: 2026-01-20 14:33:53.149 250022 DEBUG nova.virt.libvirt.host [None req-5855ea1f-0be3-4a60-abc6-fc32c7e404a7 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 14:33:53 compute-0 nova_compute[250018]: 2026-01-20 14:33:53.151 250022 DEBUG nova.virt.libvirt.driver [None req-5855ea1f-0be3-4a60-abc6-fc32c7e404a7 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 14:33:53 compute-0 nova_compute[250018]: 2026-01-20 14:33:53.152 250022 DEBUG nova.virt.hardware [None req-5855ea1f-0be3-4a60-abc6-fc32c7e404a7 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 14:33:53 compute-0 nova_compute[250018]: 2026-01-20 14:33:53.152 250022 DEBUG nova.virt.hardware [None req-5855ea1f-0be3-4a60-abc6-fc32c7e404a7 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 14:33:53 compute-0 nova_compute[250018]: 2026-01-20 14:33:53.153 250022 DEBUG nova.virt.hardware [None req-5855ea1f-0be3-4a60-abc6-fc32c7e404a7 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 14:33:53 compute-0 nova_compute[250018]: 2026-01-20 14:33:53.153 250022 DEBUG nova.virt.hardware [None req-5855ea1f-0be3-4a60-abc6-fc32c7e404a7 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 14:33:53 compute-0 nova_compute[250018]: 2026-01-20 14:33:53.153 250022 DEBUG nova.virt.hardware [None req-5855ea1f-0be3-4a60-abc6-fc32c7e404a7 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 14:33:53 compute-0 nova_compute[250018]: 2026-01-20 14:33:53.154 250022 DEBUG nova.virt.hardware [None req-5855ea1f-0be3-4a60-abc6-fc32c7e404a7 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 14:33:53 compute-0 nova_compute[250018]: 2026-01-20 14:33:53.154 250022 DEBUG nova.virt.hardware [None req-5855ea1f-0be3-4a60-abc6-fc32c7e404a7 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 14:33:53 compute-0 nova_compute[250018]: 2026-01-20 14:33:53.154 250022 DEBUG nova.virt.hardware [None req-5855ea1f-0be3-4a60-abc6-fc32c7e404a7 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 14:33:53 compute-0 nova_compute[250018]: 2026-01-20 14:33:53.155 250022 DEBUG nova.virt.hardware [None req-5855ea1f-0be3-4a60-abc6-fc32c7e404a7 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 14:33:53 compute-0 nova_compute[250018]: 2026-01-20 14:33:53.155 250022 DEBUG nova.virt.hardware [None req-5855ea1f-0be3-4a60-abc6-fc32c7e404a7 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 14:33:53 compute-0 nova_compute[250018]: 2026-01-20 14:33:53.155 250022 DEBUG nova.virt.hardware [None req-5855ea1f-0be3-4a60-abc6-fc32c7e404a7 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 14:33:53 compute-0 nova_compute[250018]: 2026-01-20 14:33:53.161 250022 DEBUG oslo_concurrency.processutils [None req-5855ea1f-0be3-4a60-abc6-fc32c7e404a7 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:33:53 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1302: 321 pgs: 321 active+clean; 260 MiB data, 535 MiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 1.8 MiB/s wr, 161 op/s
Jan 20 14:33:53 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:33:53 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3854223112' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:33:53 compute-0 nova_compute[250018]: 2026-01-20 14:33:53.646 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:33:53 compute-0 nova_compute[250018]: 2026-01-20 14:33:53.648 250022 DEBUG oslo_concurrency.processutils [None req-5855ea1f-0be3-4a60-abc6-fc32c7e404a7 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:33:53 compute-0 nova_compute[250018]: 2026-01-20 14:33:53.687 250022 DEBUG nova.storage.rbd_utils [None req-5855ea1f-0be3-4a60-abc6-fc32c7e404a7 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] rbd image e8b5659b-7304-4db5-a31a-dcea49a6666e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:33:53 compute-0 nova_compute[250018]: 2026-01-20 14:33:53.692 250022 DEBUG oslo_concurrency.processutils [None req-5855ea1f-0be3-4a60-abc6-fc32c7e404a7 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:33:53 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3854223112' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:33:54 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:33:54 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:33:54 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:33:54.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:33:54 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:33:54 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/867397850' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:33:54 compute-0 nova_compute[250018]: 2026-01-20 14:33:54.153 250022 DEBUG oslo_concurrency.processutils [None req-5855ea1f-0be3-4a60-abc6-fc32c7e404a7 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:33:54 compute-0 nova_compute[250018]: 2026-01-20 14:33:54.154 250022 DEBUG nova.virt.libvirt.vif [None req-5855ea1f-0be3-4a60-abc6-fc32c7e404a7 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:33:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesAdminNegativeTest-server-1790393255',display_name='tempest-VolumesAdminNegativeTest-server-1790393255',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesadminnegativetest-server-1790393255',id=41,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='48488e875f2e472f97f07cc7ee07e0be',ramdisk_id='',reservation_id='r-d6f2b7sd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesAdminNegativeTest-1678705027',owner_user_name='tempest-VolumesAdminNegativeTest-1678705027-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:33:43Z,user_data=None,user_id='729ca8a2a7414735af25d05df4a563b9',uuid=e8b5659b-7304-4db5-a31a-dcea49a6666e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f25a0319-31f3-4675-8d35-5881fbada019", "address": "fa:16:3e:f3:26:d3", "network": {"id": "b774e474-3e68-434c-8017-93bd087d2285", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-829886895-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48488e875f2e472f97f07cc7ee07e0be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf25a0319-31", "ovs_interfaceid": "f25a0319-31f3-4675-8d35-5881fbada019", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 14:33:54 compute-0 nova_compute[250018]: 2026-01-20 14:33:54.155 250022 DEBUG nova.network.os_vif_util [None req-5855ea1f-0be3-4a60-abc6-fc32c7e404a7 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Converting VIF {"id": "f25a0319-31f3-4675-8d35-5881fbada019", "address": "fa:16:3e:f3:26:d3", "network": {"id": "b774e474-3e68-434c-8017-93bd087d2285", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-829886895-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48488e875f2e472f97f07cc7ee07e0be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf25a0319-31", "ovs_interfaceid": "f25a0319-31f3-4675-8d35-5881fbada019", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:33:54 compute-0 nova_compute[250018]: 2026-01-20 14:33:54.155 250022 DEBUG nova.network.os_vif_util [None req-5855ea1f-0be3-4a60-abc6-fc32c7e404a7 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f3:26:d3,bridge_name='br-int',has_traffic_filtering=True,id=f25a0319-31f3-4675-8d35-5881fbada019,network=Network(b774e474-3e68-434c-8017-93bd087d2285),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf25a0319-31') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:33:54 compute-0 nova_compute[250018]: 2026-01-20 14:33:54.157 250022 DEBUG nova.objects.instance [None req-5855ea1f-0be3-4a60-abc6-fc32c7e404a7 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Lazy-loading 'pci_devices' on Instance uuid e8b5659b-7304-4db5-a31a-dcea49a6666e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:33:54 compute-0 nova_compute[250018]: 2026-01-20 14:33:54.194 250022 DEBUG nova.virt.libvirt.driver [None req-5855ea1f-0be3-4a60-abc6-fc32c7e404a7 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] [instance: e8b5659b-7304-4db5-a31a-dcea49a6666e] End _get_guest_xml xml=<domain type="kvm">
Jan 20 14:33:54 compute-0 nova_compute[250018]:   <uuid>e8b5659b-7304-4db5-a31a-dcea49a6666e</uuid>
Jan 20 14:33:54 compute-0 nova_compute[250018]:   <name>instance-00000029</name>
Jan 20 14:33:54 compute-0 nova_compute[250018]:   <memory>131072</memory>
Jan 20 14:33:54 compute-0 nova_compute[250018]:   <vcpu>1</vcpu>
Jan 20 14:33:54 compute-0 nova_compute[250018]:   <metadata>
Jan 20 14:33:54 compute-0 nova_compute[250018]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 14:33:54 compute-0 nova_compute[250018]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 14:33:54 compute-0 nova_compute[250018]:       <nova:name>tempest-VolumesAdminNegativeTest-server-1790393255</nova:name>
Jan 20 14:33:54 compute-0 nova_compute[250018]:       <nova:creationTime>2026-01-20 14:33:53</nova:creationTime>
Jan 20 14:33:54 compute-0 nova_compute[250018]:       <nova:flavor name="m1.nano">
Jan 20 14:33:54 compute-0 nova_compute[250018]:         <nova:memory>128</nova:memory>
Jan 20 14:33:54 compute-0 nova_compute[250018]:         <nova:disk>1</nova:disk>
Jan 20 14:33:54 compute-0 nova_compute[250018]:         <nova:swap>0</nova:swap>
Jan 20 14:33:54 compute-0 nova_compute[250018]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 14:33:54 compute-0 nova_compute[250018]:         <nova:vcpus>1</nova:vcpus>
Jan 20 14:33:54 compute-0 nova_compute[250018]:       </nova:flavor>
Jan 20 14:33:54 compute-0 nova_compute[250018]:       <nova:owner>
Jan 20 14:33:54 compute-0 nova_compute[250018]:         <nova:user uuid="729ca8a2a7414735af25d05df4a563b9">tempest-VolumesAdminNegativeTest-1678705027-project-member</nova:user>
Jan 20 14:33:54 compute-0 nova_compute[250018]:         <nova:project uuid="48488e875f2e472f97f07cc7ee07e0be">tempest-VolumesAdminNegativeTest-1678705027</nova:project>
Jan 20 14:33:54 compute-0 nova_compute[250018]:       </nova:owner>
Jan 20 14:33:54 compute-0 nova_compute[250018]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 14:33:54 compute-0 nova_compute[250018]:       <nova:ports>
Jan 20 14:33:54 compute-0 nova_compute[250018]:         <nova:port uuid="f25a0319-31f3-4675-8d35-5881fbada019">
Jan 20 14:33:54 compute-0 nova_compute[250018]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 20 14:33:54 compute-0 nova_compute[250018]:         </nova:port>
Jan 20 14:33:54 compute-0 nova_compute[250018]:       </nova:ports>
Jan 20 14:33:54 compute-0 nova_compute[250018]:     </nova:instance>
Jan 20 14:33:54 compute-0 nova_compute[250018]:   </metadata>
Jan 20 14:33:54 compute-0 nova_compute[250018]:   <sysinfo type="smbios">
Jan 20 14:33:54 compute-0 nova_compute[250018]:     <system>
Jan 20 14:33:54 compute-0 nova_compute[250018]:       <entry name="manufacturer">RDO</entry>
Jan 20 14:33:54 compute-0 nova_compute[250018]:       <entry name="product">OpenStack Compute</entry>
Jan 20 14:33:54 compute-0 nova_compute[250018]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 14:33:54 compute-0 nova_compute[250018]:       <entry name="serial">e8b5659b-7304-4db5-a31a-dcea49a6666e</entry>
Jan 20 14:33:54 compute-0 nova_compute[250018]:       <entry name="uuid">e8b5659b-7304-4db5-a31a-dcea49a6666e</entry>
Jan 20 14:33:54 compute-0 nova_compute[250018]:       <entry name="family">Virtual Machine</entry>
Jan 20 14:33:54 compute-0 nova_compute[250018]:     </system>
Jan 20 14:33:54 compute-0 nova_compute[250018]:   </sysinfo>
Jan 20 14:33:54 compute-0 nova_compute[250018]:   <os>
Jan 20 14:33:54 compute-0 nova_compute[250018]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 14:33:54 compute-0 nova_compute[250018]:     <boot dev="hd"/>
Jan 20 14:33:54 compute-0 nova_compute[250018]:     <smbios mode="sysinfo"/>
Jan 20 14:33:54 compute-0 nova_compute[250018]:   </os>
Jan 20 14:33:54 compute-0 nova_compute[250018]:   <features>
Jan 20 14:33:54 compute-0 nova_compute[250018]:     <acpi/>
Jan 20 14:33:54 compute-0 nova_compute[250018]:     <apic/>
Jan 20 14:33:54 compute-0 nova_compute[250018]:     <vmcoreinfo/>
Jan 20 14:33:54 compute-0 nova_compute[250018]:   </features>
Jan 20 14:33:54 compute-0 nova_compute[250018]:   <clock offset="utc">
Jan 20 14:33:54 compute-0 nova_compute[250018]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 14:33:54 compute-0 nova_compute[250018]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 14:33:54 compute-0 nova_compute[250018]:     <timer name="hpet" present="no"/>
Jan 20 14:33:54 compute-0 nova_compute[250018]:   </clock>
Jan 20 14:33:54 compute-0 nova_compute[250018]:   <cpu mode="custom" match="exact">
Jan 20 14:33:54 compute-0 nova_compute[250018]:     <model>Nehalem</model>
Jan 20 14:33:54 compute-0 nova_compute[250018]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 14:33:54 compute-0 nova_compute[250018]:   </cpu>
Jan 20 14:33:54 compute-0 nova_compute[250018]:   <devices>
Jan 20 14:33:54 compute-0 nova_compute[250018]:     <disk type="network" device="disk">
Jan 20 14:33:54 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 14:33:54 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/e8b5659b-7304-4db5-a31a-dcea49a6666e_disk">
Jan 20 14:33:54 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:33:54 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:33:54 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:33:54 compute-0 nova_compute[250018]:       </source>
Jan 20 14:33:54 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 14:33:54 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:33:54 compute-0 nova_compute[250018]:       </auth>
Jan 20 14:33:54 compute-0 nova_compute[250018]:       <target dev="vda" bus="virtio"/>
Jan 20 14:33:54 compute-0 nova_compute[250018]:     </disk>
Jan 20 14:33:54 compute-0 nova_compute[250018]:     <disk type="network" device="cdrom">
Jan 20 14:33:54 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 14:33:54 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/e8b5659b-7304-4db5-a31a-dcea49a6666e_disk.config">
Jan 20 14:33:54 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:33:54 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:33:54 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:33:54 compute-0 nova_compute[250018]:       </source>
Jan 20 14:33:54 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 14:33:54 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:33:54 compute-0 nova_compute[250018]:       </auth>
Jan 20 14:33:54 compute-0 nova_compute[250018]:       <target dev="sda" bus="sata"/>
Jan 20 14:33:54 compute-0 nova_compute[250018]:     </disk>
Jan 20 14:33:54 compute-0 nova_compute[250018]:     <interface type="ethernet">
Jan 20 14:33:54 compute-0 nova_compute[250018]:       <mac address="fa:16:3e:f3:26:d3"/>
Jan 20 14:33:54 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 14:33:54 compute-0 nova_compute[250018]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 14:33:54 compute-0 nova_compute[250018]:       <mtu size="1442"/>
Jan 20 14:33:54 compute-0 nova_compute[250018]:       <target dev="tapf25a0319-31"/>
Jan 20 14:33:54 compute-0 nova_compute[250018]:     </interface>
Jan 20 14:33:54 compute-0 nova_compute[250018]:     <serial type="pty">
Jan 20 14:33:54 compute-0 nova_compute[250018]:       <log file="/var/lib/nova/instances/e8b5659b-7304-4db5-a31a-dcea49a6666e/console.log" append="off"/>
Jan 20 14:33:54 compute-0 nova_compute[250018]:     </serial>
Jan 20 14:33:54 compute-0 nova_compute[250018]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 14:33:54 compute-0 nova_compute[250018]:     <video>
Jan 20 14:33:54 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 14:33:54 compute-0 nova_compute[250018]:     </video>
Jan 20 14:33:54 compute-0 nova_compute[250018]:     <input type="tablet" bus="usb"/>
Jan 20 14:33:54 compute-0 nova_compute[250018]:     <rng model="virtio">
Jan 20 14:33:54 compute-0 nova_compute[250018]:       <backend model="random">/dev/urandom</backend>
Jan 20 14:33:54 compute-0 nova_compute[250018]:     </rng>
Jan 20 14:33:54 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root"/>
Jan 20 14:33:54 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:33:54 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:33:54 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:33:54 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:33:54 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:33:54 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:33:54 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:33:54 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:33:54 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:33:54 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:33:54 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:33:54 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:33:54 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:33:54 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:33:54 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:33:54 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:33:54 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:33:54 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:33:54 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:33:54 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:33:54 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:33:54 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:33:54 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:33:54 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:33:54 compute-0 nova_compute[250018]:     <controller type="usb" index="0"/>
Jan 20 14:33:54 compute-0 nova_compute[250018]:     <memballoon model="virtio">
Jan 20 14:33:54 compute-0 nova_compute[250018]:       <stats period="10"/>
Jan 20 14:33:54 compute-0 nova_compute[250018]:     </memballoon>
Jan 20 14:33:54 compute-0 nova_compute[250018]:   </devices>
Jan 20 14:33:54 compute-0 nova_compute[250018]: </domain>
Jan 20 14:33:54 compute-0 nova_compute[250018]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 14:33:54 compute-0 nova_compute[250018]: 2026-01-20 14:33:54.196 250022 DEBUG nova.compute.manager [None req-5855ea1f-0be3-4a60-abc6-fc32c7e404a7 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] [instance: e8b5659b-7304-4db5-a31a-dcea49a6666e] Preparing to wait for external event network-vif-plugged-f25a0319-31f3-4675-8d35-5881fbada019 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 20 14:33:54 compute-0 nova_compute[250018]: 2026-01-20 14:33:54.197 250022 DEBUG oslo_concurrency.lockutils [None req-5855ea1f-0be3-4a60-abc6-fc32c7e404a7 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Acquiring lock "e8b5659b-7304-4db5-a31a-dcea49a6666e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:33:54 compute-0 nova_compute[250018]: 2026-01-20 14:33:54.197 250022 DEBUG oslo_concurrency.lockutils [None req-5855ea1f-0be3-4a60-abc6-fc32c7e404a7 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Lock "e8b5659b-7304-4db5-a31a-dcea49a6666e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:33:54 compute-0 nova_compute[250018]: 2026-01-20 14:33:54.198 250022 DEBUG oslo_concurrency.lockutils [None req-5855ea1f-0be3-4a60-abc6-fc32c7e404a7 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Lock "e8b5659b-7304-4db5-a31a-dcea49a6666e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:33:54 compute-0 nova_compute[250018]: 2026-01-20 14:33:54.199 250022 DEBUG nova.virt.libvirt.vif [None req-5855ea1f-0be3-4a60-abc6-fc32c7e404a7 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:33:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesAdminNegativeTest-server-1790393255',display_name='tempest-VolumesAdminNegativeTest-server-1790393255',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesadminnegativetest-server-1790393255',id=41,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='48488e875f2e472f97f07cc7ee07e0be',ramdisk_id='',reservation_id='r-d6f2b7sd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesAdminNegativeTest-1678705027',owner_user_name='tempest-VolumesAdminNegativeTest-1678705027-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:33:43Z,user_data=None,user_id='729ca8a2a7414735af25d05df4a563b9',uuid=e8b5659b-7304-4db5-a31a-dcea49a6666e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f25a0319-31f3-4675-8d35-5881fbada019", "address": "fa:16:3e:f3:26:d3", "network": {"id": "b774e474-3e68-434c-8017-93bd087d2285", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-829886895-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48488e875f2e472f97f07cc7ee07e0be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf25a0319-31", "ovs_interfaceid": "f25a0319-31f3-4675-8d35-5881fbada019", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 14:33:54 compute-0 nova_compute[250018]: 2026-01-20 14:33:54.200 250022 DEBUG nova.network.os_vif_util [None req-5855ea1f-0be3-4a60-abc6-fc32c7e404a7 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Converting VIF {"id": "f25a0319-31f3-4675-8d35-5881fbada019", "address": "fa:16:3e:f3:26:d3", "network": {"id": "b774e474-3e68-434c-8017-93bd087d2285", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-829886895-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48488e875f2e472f97f07cc7ee07e0be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf25a0319-31", "ovs_interfaceid": "f25a0319-31f3-4675-8d35-5881fbada019", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:33:54 compute-0 nova_compute[250018]: 2026-01-20 14:33:54.201 250022 DEBUG nova.network.os_vif_util [None req-5855ea1f-0be3-4a60-abc6-fc32c7e404a7 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f3:26:d3,bridge_name='br-int',has_traffic_filtering=True,id=f25a0319-31f3-4675-8d35-5881fbada019,network=Network(b774e474-3e68-434c-8017-93bd087d2285),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf25a0319-31') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:33:54 compute-0 nova_compute[250018]: 2026-01-20 14:33:54.201 250022 DEBUG os_vif [None req-5855ea1f-0be3-4a60-abc6-fc32c7e404a7 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f3:26:d3,bridge_name='br-int',has_traffic_filtering=True,id=f25a0319-31f3-4675-8d35-5881fbada019,network=Network(b774e474-3e68-434c-8017-93bd087d2285),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf25a0319-31') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 14:33:54 compute-0 nova_compute[250018]: 2026-01-20 14:33:54.202 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:33:54 compute-0 nova_compute[250018]: 2026-01-20 14:33:54.203 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:33:54 compute-0 nova_compute[250018]: 2026-01-20 14:33:54.204 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:33:54 compute-0 nova_compute[250018]: 2026-01-20 14:33:54.208 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:33:54 compute-0 nova_compute[250018]: 2026-01-20 14:33:54.209 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf25a0319-31, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:33:54 compute-0 nova_compute[250018]: 2026-01-20 14:33:54.210 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf25a0319-31, col_values=(('external_ids', {'iface-id': 'f25a0319-31f3-4675-8d35-5881fbada019', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f3:26:d3', 'vm-uuid': 'e8b5659b-7304-4db5-a31a-dcea49a6666e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:33:54 compute-0 nova_compute[250018]: 2026-01-20 14:33:54.211 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:33:54 compute-0 NetworkManager[48960]: <info>  [1768919634.2128] manager: (tapf25a0319-31): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/68)
Jan 20 14:33:54 compute-0 nova_compute[250018]: 2026-01-20 14:33:54.214 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:33:54 compute-0 nova_compute[250018]: 2026-01-20 14:33:54.218 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:33:54 compute-0 nova_compute[250018]: 2026-01-20 14:33:54.220 250022 INFO os_vif [None req-5855ea1f-0be3-4a60-abc6-fc32c7e404a7 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f3:26:d3,bridge_name='br-int',has_traffic_filtering=True,id=f25a0319-31f3-4675-8d35-5881fbada019,network=Network(b774e474-3e68-434c-8017-93bd087d2285),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf25a0319-31')
Jan 20 14:33:54 compute-0 nova_compute[250018]: 2026-01-20 14:33:54.499 250022 DEBUG nova.virt.libvirt.driver [None req-5855ea1f-0be3-4a60-abc6-fc32c7e404a7 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:33:54 compute-0 nova_compute[250018]: 2026-01-20 14:33:54.500 250022 DEBUG nova.virt.libvirt.driver [None req-5855ea1f-0be3-4a60-abc6-fc32c7e404a7 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:33:54 compute-0 nova_compute[250018]: 2026-01-20 14:33:54.500 250022 DEBUG nova.virt.libvirt.driver [None req-5855ea1f-0be3-4a60-abc6-fc32c7e404a7 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] No VIF found with MAC fa:16:3e:f3:26:d3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 14:33:54 compute-0 nova_compute[250018]: 2026-01-20 14:33:54.500 250022 INFO nova.virt.libvirt.driver [None req-5855ea1f-0be3-4a60-abc6-fc32c7e404a7 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] [instance: e8b5659b-7304-4db5-a31a-dcea49a6666e] Using config drive
Jan 20 14:33:54 compute-0 nova_compute[250018]: 2026-01-20 14:33:54.540 250022 DEBUG nova.storage.rbd_utils [None req-5855ea1f-0be3-4a60-abc6-fc32c7e404a7 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] rbd image e8b5659b-7304-4db5-a31a-dcea49a6666e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:33:54 compute-0 nova_compute[250018]: 2026-01-20 14:33:54.694 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:33:54 compute-0 ceph-mon[74360]: pgmap v1302: 321 pgs: 321 active+clean; 260 MiB data, 535 MiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 1.8 MiB/s wr, 161 op/s
Jan 20 14:33:54 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/867397850' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:33:55 compute-0 nova_compute[250018]: 2026-01-20 14:33:55.028 250022 DEBUG nova.network.neutron [req-960cc453-6fb6-46e3-8e73-e897a89f51a1 req-51af17e5-53a2-467a-868c-0163bade5bb5 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: e8b5659b-7304-4db5-a31a-dcea49a6666e] Updated VIF entry in instance network info cache for port f25a0319-31f3-4675-8d35-5881fbada019. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 14:33:55 compute-0 nova_compute[250018]: 2026-01-20 14:33:55.029 250022 DEBUG nova.network.neutron [req-960cc453-6fb6-46e3-8e73-e897a89f51a1 req-51af17e5-53a2-467a-868c-0163bade5bb5 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: e8b5659b-7304-4db5-a31a-dcea49a6666e] Updating instance_info_cache with network_info: [{"id": "f25a0319-31f3-4675-8d35-5881fbada019", "address": "fa:16:3e:f3:26:d3", "network": {"id": "b774e474-3e68-434c-8017-93bd087d2285", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-829886895-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48488e875f2e472f97f07cc7ee07e0be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf25a0319-31", "ovs_interfaceid": "f25a0319-31f3-4675-8d35-5881fbada019", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:33:55 compute-0 nova_compute[250018]: 2026-01-20 14:33:55.065 250022 DEBUG oslo_concurrency.lockutils [req-960cc453-6fb6-46e3-8e73-e897a89f51a1 req-51af17e5-53a2-467a-868c-0163bade5bb5 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-e8b5659b-7304-4db5-a31a-dcea49a6666e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:33:55 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:33:55 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:33:55 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:33:55.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:33:55 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1303: 321 pgs: 321 active+clean; 260 MiB data, 535 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 173 op/s
Jan 20 14:33:55 compute-0 nova_compute[250018]: 2026-01-20 14:33:55.895 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:33:56 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:33:56 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:33:56 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:33:56.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:33:56 compute-0 ceph-mon[74360]: pgmap v1303: 321 pgs: 321 active+clean; 260 MiB data, 535 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 173 op/s
Jan 20 14:33:56 compute-0 nova_compute[250018]: 2026-01-20 14:33:56.251 250022 INFO nova.virt.libvirt.driver [None req-5855ea1f-0be3-4a60-abc6-fc32c7e404a7 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] [instance: e8b5659b-7304-4db5-a31a-dcea49a6666e] Creating config drive at /var/lib/nova/instances/e8b5659b-7304-4db5-a31a-dcea49a6666e/disk.config
Jan 20 14:33:56 compute-0 nova_compute[250018]: 2026-01-20 14:33:56.255 250022 DEBUG oslo_concurrency.processutils [None req-5855ea1f-0be3-4a60-abc6-fc32c7e404a7 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e8b5659b-7304-4db5-a31a-dcea49a6666e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp192omsen execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:33:56 compute-0 nova_compute[250018]: 2026-01-20 14:33:56.387 250022 DEBUG oslo_concurrency.processutils [None req-5855ea1f-0be3-4a60-abc6-fc32c7e404a7 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e8b5659b-7304-4db5-a31a-dcea49a6666e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp192omsen" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:33:56 compute-0 nova_compute[250018]: 2026-01-20 14:33:56.422 250022 DEBUG nova.storage.rbd_utils [None req-5855ea1f-0be3-4a60-abc6-fc32c7e404a7 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] rbd image e8b5659b-7304-4db5-a31a-dcea49a6666e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:33:56 compute-0 nova_compute[250018]: 2026-01-20 14:33:56.427 250022 DEBUG oslo_concurrency.processutils [None req-5855ea1f-0be3-4a60-abc6-fc32c7e404a7 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e8b5659b-7304-4db5-a31a-dcea49a6666e/disk.config e8b5659b-7304-4db5-a31a-dcea49a6666e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:33:56 compute-0 nova_compute[250018]: 2026-01-20 14:33:56.593 250022 DEBUG oslo_concurrency.processutils [None req-5855ea1f-0be3-4a60-abc6-fc32c7e404a7 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e8b5659b-7304-4db5-a31a-dcea49a6666e/disk.config e8b5659b-7304-4db5-a31a-dcea49a6666e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.167s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:33:56 compute-0 nova_compute[250018]: 2026-01-20 14:33:56.595 250022 INFO nova.virt.libvirt.driver [None req-5855ea1f-0be3-4a60-abc6-fc32c7e404a7 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] [instance: e8b5659b-7304-4db5-a31a-dcea49a6666e] Deleting local config drive /var/lib/nova/instances/e8b5659b-7304-4db5-a31a-dcea49a6666e/disk.config because it was imported into RBD.
Jan 20 14:33:56 compute-0 kernel: tapf25a0319-31: entered promiscuous mode
Jan 20 14:33:56 compute-0 NetworkManager[48960]: <info>  [1768919636.6365] manager: (tapf25a0319-31): new Tun device (/org/freedesktop/NetworkManager/Devices/69)
Jan 20 14:33:56 compute-0 ovn_controller[148666]: 2026-01-20T14:33:56Z|00111|binding|INFO|Claiming lport f25a0319-31f3-4675-8d35-5881fbada019 for this chassis.
Jan 20 14:33:56 compute-0 ovn_controller[148666]: 2026-01-20T14:33:56Z|00112|binding|INFO|f25a0319-31f3-4675-8d35-5881fbada019: Claiming fa:16:3e:f3:26:d3 10.100.0.7
Jan 20 14:33:56 compute-0 nova_compute[250018]: 2026-01-20 14:33:56.639 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:33:56 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:33:56.645 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f3:26:d3 10.100.0.7'], port_security=['fa:16:3e:f3:26:d3 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'e8b5659b-7304-4db5-a31a-dcea49a6666e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b774e474-3e68-434c-8017-93bd087d2285', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '48488e875f2e472f97f07cc7ee07e0be', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a51c0542-cae2-4ca8-b2f1-adb393fdc087', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3367f149-9a0b-42bf-93b6-10dba98995c7, chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=f25a0319-31f3-4675-8d35-5881fbada019) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:33:56 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:33:56.646 160071 INFO neutron.agent.ovn.metadata.agent [-] Port f25a0319-31f3-4675-8d35-5881fbada019 in datapath b774e474-3e68-434c-8017-93bd087d2285 bound to our chassis
Jan 20 14:33:56 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:33:56.647 160071 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b774e474-3e68-434c-8017-93bd087d2285
Jan 20 14:33:56 compute-0 ovn_controller[148666]: 2026-01-20T14:33:56Z|00113|binding|INFO|Setting lport f25a0319-31f3-4675-8d35-5881fbada019 ovn-installed in OVS
Jan 20 14:33:56 compute-0 ovn_controller[148666]: 2026-01-20T14:33:56Z|00114|binding|INFO|Setting lport f25a0319-31f3-4675-8d35-5881fbada019 up in Southbound
Jan 20 14:33:56 compute-0 nova_compute[250018]: 2026-01-20 14:33:56.654 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:33:56 compute-0 nova_compute[250018]: 2026-01-20 14:33:56.656 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:33:56 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:33:56.663 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[3295c5f7-4da6-4eb0-b8bd-201879c93ae7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:33:56 compute-0 systemd-machined[216401]: New machine qemu-19-instance-00000029.
Jan 20 14:33:56 compute-0 systemd-udevd[278606]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:33:56 compute-0 systemd[1]: Started Virtual Machine qemu-19-instance-00000029.
Jan 20 14:33:56 compute-0 NetworkManager[48960]: <info>  [1768919636.6855] device (tapf25a0319-31): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 14:33:56 compute-0 NetworkManager[48960]: <info>  [1768919636.6861] device (tapf25a0319-31): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 14:33:56 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:33:56.695 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[a57ed5c0-a4ec-4254-8dae-4d0ca56909cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:33:56 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:33:56.698 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[e8747353-4c1d-423d-a3d7-688b4cdca2bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:33:56 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:33:56.730 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[7d17b994-f174-4a26-98b8-1a73f4b0ec33]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:33:56 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:33:56.748 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[93f08bd3-60dc-4955-9b2f-c119ff493a50]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb774e474-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:06:ae:64'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 35], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 552584, 'reachable_time': 33890, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 278617, 'error': None, 'target': 'ovnmeta-b774e474-3e68-434c-8017-93bd087d2285', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:33:56 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:33:56.764 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[dfec23d8-89c3-4cda-a269-ee3cbefec5b8]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapb774e474-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 552595, 'tstamp': 552595}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 278618, 'error': None, 'target': 'ovnmeta-b774e474-3e68-434c-8017-93bd087d2285', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapb774e474-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 552599, 'tstamp': 552599}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 278618, 'error': None, 'target': 'ovnmeta-b774e474-3e68-434c-8017-93bd087d2285', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:33:56 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:33:56.766 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb774e474-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:33:56 compute-0 nova_compute[250018]: 2026-01-20 14:33:56.767 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:33:56 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:33:56.769 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb774e474-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:33:56 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:33:56.769 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:33:56 compute-0 nova_compute[250018]: 2026-01-20 14:33:56.769 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:33:56 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:33:56.769 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb774e474-30, col_values=(('external_ids', {'iface-id': '01ec1fed-ce23-42e9-9147-b3495425c336'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:33:56 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:33:56.770 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:33:57 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:33:57 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:33:57 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:33:57.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:33:57 compute-0 nova_compute[250018]: 2026-01-20 14:33:57.213 250022 DEBUG nova.compute.manager [req-a8921e06-ef72-4020-bbfd-614d463fe3f2 req-e3fc9596-454c-4190-9304-91cb6fe6eab2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: e8b5659b-7304-4db5-a31a-dcea49a6666e] Received event network-vif-plugged-f25a0319-31f3-4675-8d35-5881fbada019 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:33:57 compute-0 nova_compute[250018]: 2026-01-20 14:33:57.214 250022 DEBUG oslo_concurrency.lockutils [req-a8921e06-ef72-4020-bbfd-614d463fe3f2 req-e3fc9596-454c-4190-9304-91cb6fe6eab2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "e8b5659b-7304-4db5-a31a-dcea49a6666e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:33:57 compute-0 nova_compute[250018]: 2026-01-20 14:33:57.214 250022 DEBUG oslo_concurrency.lockutils [req-a8921e06-ef72-4020-bbfd-614d463fe3f2 req-e3fc9596-454c-4190-9304-91cb6fe6eab2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "e8b5659b-7304-4db5-a31a-dcea49a6666e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:33:57 compute-0 nova_compute[250018]: 2026-01-20 14:33:57.214 250022 DEBUG oslo_concurrency.lockutils [req-a8921e06-ef72-4020-bbfd-614d463fe3f2 req-e3fc9596-454c-4190-9304-91cb6fe6eab2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "e8b5659b-7304-4db5-a31a-dcea49a6666e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:33:57 compute-0 nova_compute[250018]: 2026-01-20 14:33:57.215 250022 DEBUG nova.compute.manager [req-a8921e06-ef72-4020-bbfd-614d463fe3f2 req-e3fc9596-454c-4190-9304-91cb6fe6eab2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: e8b5659b-7304-4db5-a31a-dcea49a6666e] Processing event network-vif-plugged-f25a0319-31f3-4675-8d35-5881fbada019 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 20 14:33:57 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:33:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 14:33:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:33:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 14:33:57 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1304: 321 pgs: 321 active+clean; 273 MiB data, 544 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 2.0 MiB/s wr, 135 op/s
Jan 20 14:33:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:33:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:33:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:33:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:33:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:33:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:33:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:33:58 compute-0 nova_compute[250018]: 2026-01-20 14:33:58.031 250022 DEBUG nova.compute.manager [None req-5855ea1f-0be3-4a60-abc6-fc32c7e404a7 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] [instance: e8b5659b-7304-4db5-a31a-dcea49a6666e] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 14:33:58 compute-0 nova_compute[250018]: 2026-01-20 14:33:58.032 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768919638.0316133, e8b5659b-7304-4db5-a31a-dcea49a6666e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:33:58 compute-0 nova_compute[250018]: 2026-01-20 14:33:58.032 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: e8b5659b-7304-4db5-a31a-dcea49a6666e] VM Started (Lifecycle Event)
Jan 20 14:33:58 compute-0 nova_compute[250018]: 2026-01-20 14:33:58.035 250022 DEBUG nova.virt.libvirt.driver [None req-5855ea1f-0be3-4a60-abc6-fc32c7e404a7 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] [instance: e8b5659b-7304-4db5-a31a-dcea49a6666e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 14:33:58 compute-0 nova_compute[250018]: 2026-01-20 14:33:58.038 250022 INFO nova.virt.libvirt.driver [-] [instance: e8b5659b-7304-4db5-a31a-dcea49a6666e] Instance spawned successfully.
Jan 20 14:33:58 compute-0 nova_compute[250018]: 2026-01-20 14:33:58.038 250022 DEBUG nova.virt.libvirt.driver [None req-5855ea1f-0be3-4a60-abc6-fc32c7e404a7 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] [instance: e8b5659b-7304-4db5-a31a-dcea49a6666e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 14:33:58 compute-0 nova_compute[250018]: 2026-01-20 14:33:58.064 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: e8b5659b-7304-4db5-a31a-dcea49a6666e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:33:58 compute-0 nova_compute[250018]: 2026-01-20 14:33:58.070 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: e8b5659b-7304-4db5-a31a-dcea49a6666e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:33:58 compute-0 nova_compute[250018]: 2026-01-20 14:33:58.074 250022 DEBUG nova.virt.libvirt.driver [None req-5855ea1f-0be3-4a60-abc6-fc32c7e404a7 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] [instance: e8b5659b-7304-4db5-a31a-dcea49a6666e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:33:58 compute-0 nova_compute[250018]: 2026-01-20 14:33:58.074 250022 DEBUG nova.virt.libvirt.driver [None req-5855ea1f-0be3-4a60-abc6-fc32c7e404a7 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] [instance: e8b5659b-7304-4db5-a31a-dcea49a6666e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:33:58 compute-0 nova_compute[250018]: 2026-01-20 14:33:58.075 250022 DEBUG nova.virt.libvirt.driver [None req-5855ea1f-0be3-4a60-abc6-fc32c7e404a7 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] [instance: e8b5659b-7304-4db5-a31a-dcea49a6666e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:33:58 compute-0 nova_compute[250018]: 2026-01-20 14:33:58.075 250022 DEBUG nova.virt.libvirt.driver [None req-5855ea1f-0be3-4a60-abc6-fc32c7e404a7 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] [instance: e8b5659b-7304-4db5-a31a-dcea49a6666e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:33:58 compute-0 nova_compute[250018]: 2026-01-20 14:33:58.075 250022 DEBUG nova.virt.libvirt.driver [None req-5855ea1f-0be3-4a60-abc6-fc32c7e404a7 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] [instance: e8b5659b-7304-4db5-a31a-dcea49a6666e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:33:58 compute-0 nova_compute[250018]: 2026-01-20 14:33:58.076 250022 DEBUG nova.virt.libvirt.driver [None req-5855ea1f-0be3-4a60-abc6-fc32c7e404a7 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] [instance: e8b5659b-7304-4db5-a31a-dcea49a6666e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:33:58 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:33:58 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:33:58 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:33:58.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:33:58 compute-0 nova_compute[250018]: 2026-01-20 14:33:58.122 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: e8b5659b-7304-4db5-a31a-dcea49a6666e] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:33:58 compute-0 nova_compute[250018]: 2026-01-20 14:33:58.122 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768919638.0316758, e8b5659b-7304-4db5-a31a-dcea49a6666e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:33:58 compute-0 nova_compute[250018]: 2026-01-20 14:33:58.123 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: e8b5659b-7304-4db5-a31a-dcea49a6666e] VM Paused (Lifecycle Event)
Jan 20 14:33:58 compute-0 nova_compute[250018]: 2026-01-20 14:33:58.153 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: e8b5659b-7304-4db5-a31a-dcea49a6666e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:33:58 compute-0 nova_compute[250018]: 2026-01-20 14:33:58.156 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768919638.0344973, e8b5659b-7304-4db5-a31a-dcea49a6666e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:33:58 compute-0 nova_compute[250018]: 2026-01-20 14:33:58.156 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: e8b5659b-7304-4db5-a31a-dcea49a6666e] VM Resumed (Lifecycle Event)
Jan 20 14:33:58 compute-0 nova_compute[250018]: 2026-01-20 14:33:58.181 250022 INFO nova.compute.manager [None req-5855ea1f-0be3-4a60-abc6-fc32c7e404a7 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] [instance: e8b5659b-7304-4db5-a31a-dcea49a6666e] Took 15.11 seconds to spawn the instance on the hypervisor.
Jan 20 14:33:58 compute-0 nova_compute[250018]: 2026-01-20 14:33:58.182 250022 DEBUG nova.compute.manager [None req-5855ea1f-0be3-4a60-abc6-fc32c7e404a7 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] [instance: e8b5659b-7304-4db5-a31a-dcea49a6666e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:33:58 compute-0 nova_compute[250018]: 2026-01-20 14:33:58.189 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: e8b5659b-7304-4db5-a31a-dcea49a6666e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:33:58 compute-0 nova_compute[250018]: 2026-01-20 14:33:58.191 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: e8b5659b-7304-4db5-a31a-dcea49a6666e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:33:58 compute-0 nova_compute[250018]: 2026-01-20 14:33:58.223 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: e8b5659b-7304-4db5-a31a-dcea49a6666e] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:33:58 compute-0 nova_compute[250018]: 2026-01-20 14:33:58.284 250022 INFO nova.compute.manager [None req-5855ea1f-0be3-4a60-abc6-fc32c7e404a7 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] [instance: e8b5659b-7304-4db5-a31a-dcea49a6666e] Took 16.60 seconds to build instance.
Jan 20 14:33:58 compute-0 nova_compute[250018]: 2026-01-20 14:33:58.308 250022 DEBUG oslo_concurrency.lockutils [None req-5855ea1f-0be3-4a60-abc6-fc32c7e404a7 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Lock "e8b5659b-7304-4db5-a31a-dcea49a6666e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.725s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:33:58 compute-0 ceph-mon[74360]: pgmap v1304: 321 pgs: 321 active+clean; 273 MiB data, 544 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 2.0 MiB/s wr, 135 op/s
Jan 20 14:33:59 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:33:59 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:33:59 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:33:59.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:33:59 compute-0 nova_compute[250018]: 2026-01-20 14:33:59.212 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:33:59 compute-0 nova_compute[250018]: 2026-01-20 14:33:59.390 250022 DEBUG nova.compute.manager [req-f248df7e-97e4-4359-8879-0ab9a161b8ed req-347133a9-8d40-4524-aa16-233cade0784e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: e8b5659b-7304-4db5-a31a-dcea49a6666e] Received event network-vif-plugged-f25a0319-31f3-4675-8d35-5881fbada019 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:33:59 compute-0 nova_compute[250018]: 2026-01-20 14:33:59.391 250022 DEBUG oslo_concurrency.lockutils [req-f248df7e-97e4-4359-8879-0ab9a161b8ed req-347133a9-8d40-4524-aa16-233cade0784e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "e8b5659b-7304-4db5-a31a-dcea49a6666e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:33:59 compute-0 nova_compute[250018]: 2026-01-20 14:33:59.391 250022 DEBUG oslo_concurrency.lockutils [req-f248df7e-97e4-4359-8879-0ab9a161b8ed req-347133a9-8d40-4524-aa16-233cade0784e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "e8b5659b-7304-4db5-a31a-dcea49a6666e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:33:59 compute-0 nova_compute[250018]: 2026-01-20 14:33:59.392 250022 DEBUG oslo_concurrency.lockutils [req-f248df7e-97e4-4359-8879-0ab9a161b8ed req-347133a9-8d40-4524-aa16-233cade0784e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "e8b5659b-7304-4db5-a31a-dcea49a6666e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:33:59 compute-0 nova_compute[250018]: 2026-01-20 14:33:59.392 250022 DEBUG nova.compute.manager [req-f248df7e-97e4-4359-8879-0ab9a161b8ed req-347133a9-8d40-4524-aa16-233cade0784e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: e8b5659b-7304-4db5-a31a-dcea49a6666e] No waiting events found dispatching network-vif-plugged-f25a0319-31f3-4675-8d35-5881fbada019 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:33:59 compute-0 nova_compute[250018]: 2026-01-20 14:33:59.393 250022 WARNING nova.compute.manager [req-f248df7e-97e4-4359-8879-0ab9a161b8ed req-347133a9-8d40-4524-aa16-233cade0784e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: e8b5659b-7304-4db5-a31a-dcea49a6666e] Received unexpected event network-vif-plugged-f25a0319-31f3-4675-8d35-5881fbada019 for instance with vm_state active and task_state None.
Jan 20 14:33:59 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1305: 321 pgs: 321 active+clean; 278 MiB data, 548 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 1.6 MiB/s wr, 127 op/s
Jan 20 14:34:00 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:34:00 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:34:00 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:34:00.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:34:00 compute-0 ceph-mon[74360]: pgmap v1305: 321 pgs: 321 active+clean; 278 MiB data, 548 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 1.6 MiB/s wr, 127 op/s
Jan 20 14:34:00 compute-0 nova_compute[250018]: 2026-01-20 14:34:00.896 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:34:01 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:34:01 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:34:01 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:34:01.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:34:01 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1306: 321 pgs: 321 active+clean; 304 MiB data, 572 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 3.1 MiB/s wr, 194 op/s
Jan 20 14:34:01 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/2444071069' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:34:02 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:34:02 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:34:02 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:34:02.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:34:02 compute-0 sudo[278665]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:34:02 compute-0 sudo[278665]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:34:02 compute-0 sudo[278665]: pam_unix(sudo:session): session closed for user root
Jan 20 14:34:02 compute-0 sudo[278690]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:34:02 compute-0 sudo[278690]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:34:02 compute-0 sudo[278690]: pam_unix(sudo:session): session closed for user root
Jan 20 14:34:02 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:34:02 compute-0 ceph-mon[74360]: pgmap v1306: 321 pgs: 321 active+clean; 304 MiB data, 572 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 3.1 MiB/s wr, 194 op/s
Jan 20 14:34:03 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:34:03 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:34:03 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:34:03.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:34:03 compute-0 ovn_controller[148666]: 2026-01-20T14:34:03Z|00115|binding|INFO|Releasing lport 01ec1fed-ce23-42e9-9147-b3495425c336 from this chassis (sb_readonly=0)
Jan 20 14:34:03 compute-0 nova_compute[250018]: 2026-01-20 14:34:03.403 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:34:03 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1307: 321 pgs: 321 active+clean; 304 MiB data, 572 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.1 MiB/s wr, 137 op/s
Jan 20 14:34:04 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:34:04 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:34:04 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:34:04.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:34:04 compute-0 nova_compute[250018]: 2026-01-20 14:34:04.215 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:34:04 compute-0 nova_compute[250018]: 2026-01-20 14:34:04.640 250022 DEBUG oslo_concurrency.lockutils [None req-8ec4ed2d-b0ad-4d88-a833-3dca688b1202 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Acquiring lock "e8b5659b-7304-4db5-a31a-dcea49a6666e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:34:04 compute-0 nova_compute[250018]: 2026-01-20 14:34:04.641 250022 DEBUG oslo_concurrency.lockutils [None req-8ec4ed2d-b0ad-4d88-a833-3dca688b1202 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Lock "e8b5659b-7304-4db5-a31a-dcea49a6666e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:34:04 compute-0 nova_compute[250018]: 2026-01-20 14:34:04.642 250022 DEBUG oslo_concurrency.lockutils [None req-8ec4ed2d-b0ad-4d88-a833-3dca688b1202 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Acquiring lock "e8b5659b-7304-4db5-a31a-dcea49a6666e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:34:04 compute-0 nova_compute[250018]: 2026-01-20 14:34:04.642 250022 DEBUG oslo_concurrency.lockutils [None req-8ec4ed2d-b0ad-4d88-a833-3dca688b1202 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Lock "e8b5659b-7304-4db5-a31a-dcea49a6666e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:34:04 compute-0 nova_compute[250018]: 2026-01-20 14:34:04.643 250022 DEBUG oslo_concurrency.lockutils [None req-8ec4ed2d-b0ad-4d88-a833-3dca688b1202 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Lock "e8b5659b-7304-4db5-a31a-dcea49a6666e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:34:04 compute-0 nova_compute[250018]: 2026-01-20 14:34:04.644 250022 INFO nova.compute.manager [None req-8ec4ed2d-b0ad-4d88-a833-3dca688b1202 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] [instance: e8b5659b-7304-4db5-a31a-dcea49a6666e] Terminating instance
Jan 20 14:34:04 compute-0 nova_compute[250018]: 2026-01-20 14:34:04.645 250022 DEBUG nova.compute.manager [None req-8ec4ed2d-b0ad-4d88-a833-3dca688b1202 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] [instance: e8b5659b-7304-4db5-a31a-dcea49a6666e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 14:34:04 compute-0 kernel: tapf25a0319-31 (unregistering): left promiscuous mode
Jan 20 14:34:04 compute-0 NetworkManager[48960]: <info>  [1768919644.6924] device (tapf25a0319-31): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 14:34:04 compute-0 ovn_controller[148666]: 2026-01-20T14:34:04Z|00116|binding|INFO|Releasing lport f25a0319-31f3-4675-8d35-5881fbada019 from this chassis (sb_readonly=0)
Jan 20 14:34:04 compute-0 nova_compute[250018]: 2026-01-20 14:34:04.700 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:34:04 compute-0 ovn_controller[148666]: 2026-01-20T14:34:04Z|00117|binding|INFO|Setting lport f25a0319-31f3-4675-8d35-5881fbada019 down in Southbound
Jan 20 14:34:04 compute-0 ovn_controller[148666]: 2026-01-20T14:34:04Z|00118|binding|INFO|Removing iface tapf25a0319-31 ovn-installed in OVS
Jan 20 14:34:04 compute-0 nova_compute[250018]: 2026-01-20 14:34:04.704 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:34:04 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:34:04.709 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f3:26:d3 10.100.0.7'], port_security=['fa:16:3e:f3:26:d3 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'e8b5659b-7304-4db5-a31a-dcea49a6666e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b774e474-3e68-434c-8017-93bd087d2285', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '48488e875f2e472f97f07cc7ee07e0be', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a51c0542-cae2-4ca8-b2f1-adb393fdc087', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3367f149-9a0b-42bf-93b6-10dba98995c7, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=f25a0319-31f3-4675-8d35-5881fbada019) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:34:04 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:34:04.711 160071 INFO neutron.agent.ovn.metadata.agent [-] Port f25a0319-31f3-4675-8d35-5881fbada019 in datapath b774e474-3e68-434c-8017-93bd087d2285 unbound from our chassis
Jan 20 14:34:04 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:34:04.712 160071 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b774e474-3e68-434c-8017-93bd087d2285
Jan 20 14:34:04 compute-0 nova_compute[250018]: 2026-01-20 14:34:04.725 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:34:04 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:34:04.727 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[f4731b5f-1809-434f-beae-2ff413a236be]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:34:04 compute-0 systemd[1]: machine-qemu\x2d19\x2dinstance\x2d00000029.scope: Deactivated successfully.
Jan 20 14:34:04 compute-0 systemd[1]: machine-qemu\x2d19\x2dinstance\x2d00000029.scope: Consumed 8.019s CPU time.
Jan 20 14:34:04 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:34:04.752 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[48638fe3-51f2-4fca-a293-536084eabced]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:34:04 compute-0 systemd-machined[216401]: Machine qemu-19-instance-00000029 terminated.
Jan 20 14:34:04 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:34:04.756 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[36c9d654-ed2d-4fdd-a4dc-cee68850cccc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:34:04 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:34:04.779 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[6e3a1d56-266b-4eef-8b18-46a138a14337]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:34:04 compute-0 ceph-mon[74360]: pgmap v1307: 321 pgs: 321 active+clean; 304 MiB data, 572 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.1 MiB/s wr, 137 op/s
Jan 20 14:34:04 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/424104226' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:34:04 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/424104226' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:34:04 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:34:04.794 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[cc679940-c460-4655-9121-aba49505f75b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb774e474-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:06:ae:64'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 35], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 552584, 'reachable_time': 33890, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 278729, 'error': None, 'target': 'ovnmeta-b774e474-3e68-434c-8017-93bd087d2285', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:34:04 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:34:04.805 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[9f4e3b3d-550d-40ab-9f94-15c2b5d650cb]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapb774e474-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 552595, 'tstamp': 552595}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 278730, 'error': None, 'target': 'ovnmeta-b774e474-3e68-434c-8017-93bd087d2285', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapb774e474-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 552599, 'tstamp': 552599}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 278730, 'error': None, 'target': 'ovnmeta-b774e474-3e68-434c-8017-93bd087d2285', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:34:04 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:34:04.807 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb774e474-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:34:04 compute-0 nova_compute[250018]: 2026-01-20 14:34:04.808 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:34:04 compute-0 nova_compute[250018]: 2026-01-20 14:34:04.813 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:34:04 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:34:04.813 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb774e474-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:34:04 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:34:04.813 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:34:04 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:34:04.814 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb774e474-30, col_values=(('external_ids', {'iface-id': '01ec1fed-ce23-42e9-9147-b3495425c336'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:34:04 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:34:04.814 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:34:04 compute-0 nova_compute[250018]: 2026-01-20 14:34:04.866 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:34:04 compute-0 nova_compute[250018]: 2026-01-20 14:34:04.872 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:34:04 compute-0 nova_compute[250018]: 2026-01-20 14:34:04.887 250022 INFO nova.virt.libvirt.driver [-] [instance: e8b5659b-7304-4db5-a31a-dcea49a6666e] Instance destroyed successfully.
Jan 20 14:34:04 compute-0 nova_compute[250018]: 2026-01-20 14:34:04.888 250022 DEBUG nova.objects.instance [None req-8ec4ed2d-b0ad-4d88-a833-3dca688b1202 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Lazy-loading 'resources' on Instance uuid e8b5659b-7304-4db5-a31a-dcea49a6666e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:34:04 compute-0 nova_compute[250018]: 2026-01-20 14:34:04.929 250022 DEBUG nova.virt.libvirt.vif [None req-8ec4ed2d-b0ad-4d88-a833-3dca688b1202 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:33:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-VolumesAdminNegativeTest-server-1790393255',display_name='tempest-VolumesAdminNegativeTest-server-1790393255',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesadminnegativetest-server-1790393255',id=41,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:33:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='48488e875f2e472f97f07cc7ee07e0be',ramdisk_id='',reservation_id='r-d6f2b7sd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',
image_min_disk='1',image_min_ram='0',owner_project_name='tempest-VolumesAdminNegativeTest-1678705027',owner_user_name='tempest-VolumesAdminNegativeTest-1678705027-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T14:33:58Z,user_data=None,user_id='729ca8a2a7414735af25d05df4a563b9',uuid=e8b5659b-7304-4db5-a31a-dcea49a6666e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f25a0319-31f3-4675-8d35-5881fbada019", "address": "fa:16:3e:f3:26:d3", "network": {"id": "b774e474-3e68-434c-8017-93bd087d2285", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-829886895-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48488e875f2e472f97f07cc7ee07e0be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf25a0319-31", "ovs_interfaceid": "f25a0319-31f3-4675-8d35-5881fbada019", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 14:34:04 compute-0 nova_compute[250018]: 2026-01-20 14:34:04.930 250022 DEBUG nova.network.os_vif_util [None req-8ec4ed2d-b0ad-4d88-a833-3dca688b1202 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Converting VIF {"id": "f25a0319-31f3-4675-8d35-5881fbada019", "address": "fa:16:3e:f3:26:d3", "network": {"id": "b774e474-3e68-434c-8017-93bd087d2285", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-829886895-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48488e875f2e472f97f07cc7ee07e0be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf25a0319-31", "ovs_interfaceid": "f25a0319-31f3-4675-8d35-5881fbada019", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:34:04 compute-0 nova_compute[250018]: 2026-01-20 14:34:04.931 250022 DEBUG nova.network.os_vif_util [None req-8ec4ed2d-b0ad-4d88-a833-3dca688b1202 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f3:26:d3,bridge_name='br-int',has_traffic_filtering=True,id=f25a0319-31f3-4675-8d35-5881fbada019,network=Network(b774e474-3e68-434c-8017-93bd087d2285),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf25a0319-31') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:34:04 compute-0 nova_compute[250018]: 2026-01-20 14:34:04.931 250022 DEBUG os_vif [None req-8ec4ed2d-b0ad-4d88-a833-3dca688b1202 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f3:26:d3,bridge_name='br-int',has_traffic_filtering=True,id=f25a0319-31f3-4675-8d35-5881fbada019,network=Network(b774e474-3e68-434c-8017-93bd087d2285),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf25a0319-31') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 14:34:04 compute-0 nova_compute[250018]: 2026-01-20 14:34:04.935 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:34:04 compute-0 nova_compute[250018]: 2026-01-20 14:34:04.935 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf25a0319-31, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:34:04 compute-0 nova_compute[250018]: 2026-01-20 14:34:04.938 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:34:04 compute-0 nova_compute[250018]: 2026-01-20 14:34:04.939 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:34:04 compute-0 nova_compute[250018]: 2026-01-20 14:34:04.942 250022 INFO os_vif [None req-8ec4ed2d-b0ad-4d88-a833-3dca688b1202 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f3:26:d3,bridge_name='br-int',has_traffic_filtering=True,id=f25a0319-31f3-4675-8d35-5881fbada019,network=Network(b774e474-3e68-434c-8017-93bd087d2285),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf25a0319-31')
Jan 20 14:34:05 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:34:05 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:34:05 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:34:05.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:34:05 compute-0 nova_compute[250018]: 2026-01-20 14:34:05.388 250022 DEBUG nova.compute.manager [req-f6a57736-5c40-403b-844b-19886df77863 req-09bb2daf-6e2b-460d-be3c-256a0686d6b7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: e8b5659b-7304-4db5-a31a-dcea49a6666e] Received event network-vif-unplugged-f25a0319-31f3-4675-8d35-5881fbada019 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:34:05 compute-0 nova_compute[250018]: 2026-01-20 14:34:05.389 250022 DEBUG oslo_concurrency.lockutils [req-f6a57736-5c40-403b-844b-19886df77863 req-09bb2daf-6e2b-460d-be3c-256a0686d6b7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "e8b5659b-7304-4db5-a31a-dcea49a6666e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:34:05 compute-0 nova_compute[250018]: 2026-01-20 14:34:05.390 250022 DEBUG oslo_concurrency.lockutils [req-f6a57736-5c40-403b-844b-19886df77863 req-09bb2daf-6e2b-460d-be3c-256a0686d6b7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "e8b5659b-7304-4db5-a31a-dcea49a6666e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:34:05 compute-0 nova_compute[250018]: 2026-01-20 14:34:05.390 250022 DEBUG oslo_concurrency.lockutils [req-f6a57736-5c40-403b-844b-19886df77863 req-09bb2daf-6e2b-460d-be3c-256a0686d6b7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "e8b5659b-7304-4db5-a31a-dcea49a6666e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:34:05 compute-0 nova_compute[250018]: 2026-01-20 14:34:05.391 250022 DEBUG nova.compute.manager [req-f6a57736-5c40-403b-844b-19886df77863 req-09bb2daf-6e2b-460d-be3c-256a0686d6b7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: e8b5659b-7304-4db5-a31a-dcea49a6666e] No waiting events found dispatching network-vif-unplugged-f25a0319-31f3-4675-8d35-5881fbada019 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:34:05 compute-0 nova_compute[250018]: 2026-01-20 14:34:05.391 250022 DEBUG nova.compute.manager [req-f6a57736-5c40-403b-844b-19886df77863 req-09bb2daf-6e2b-460d-be3c-256a0686d6b7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: e8b5659b-7304-4db5-a31a-dcea49a6666e] Received event network-vif-unplugged-f25a0319-31f3-4675-8d35-5881fbada019 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 20 14:34:05 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1308: 321 pgs: 321 active+clean; 357 MiB data, 602 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 5.7 MiB/s wr, 237 op/s
Jan 20 14:34:05 compute-0 ceph-mon[74360]: pgmap v1308: 321 pgs: 321 active+clean; 357 MiB data, 602 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 5.7 MiB/s wr, 237 op/s
Jan 20 14:34:05 compute-0 nova_compute[250018]: 2026-01-20 14:34:05.900 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:34:06 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:34:06 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:34:06 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:34:06.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:34:06 compute-0 nova_compute[250018]: 2026-01-20 14:34:06.182 250022 INFO nova.virt.libvirt.driver [None req-8ec4ed2d-b0ad-4d88-a833-3dca688b1202 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] [instance: e8b5659b-7304-4db5-a31a-dcea49a6666e] Deleting instance files /var/lib/nova/instances/e8b5659b-7304-4db5-a31a-dcea49a6666e_del
Jan 20 14:34:06 compute-0 nova_compute[250018]: 2026-01-20 14:34:06.183 250022 INFO nova.virt.libvirt.driver [None req-8ec4ed2d-b0ad-4d88-a833-3dca688b1202 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] [instance: e8b5659b-7304-4db5-a31a-dcea49a6666e] Deletion of /var/lib/nova/instances/e8b5659b-7304-4db5-a31a-dcea49a6666e_del complete
Jan 20 14:34:06 compute-0 nova_compute[250018]: 2026-01-20 14:34:06.240 250022 INFO nova.compute.manager [None req-8ec4ed2d-b0ad-4d88-a833-3dca688b1202 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] [instance: e8b5659b-7304-4db5-a31a-dcea49a6666e] Took 1.59 seconds to destroy the instance on the hypervisor.
Jan 20 14:34:06 compute-0 nova_compute[250018]: 2026-01-20 14:34:06.241 250022 DEBUG oslo.service.loopingcall [None req-8ec4ed2d-b0ad-4d88-a833-3dca688b1202 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 14:34:06 compute-0 nova_compute[250018]: 2026-01-20 14:34:06.241 250022 DEBUG nova.compute.manager [-] [instance: e8b5659b-7304-4db5-a31a-dcea49a6666e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 14:34:06 compute-0 nova_compute[250018]: 2026-01-20 14:34:06.241 250022 DEBUG nova.network.neutron [-] [instance: e8b5659b-7304-4db5-a31a-dcea49a6666e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 14:34:06 compute-0 podman[278763]: 2026-01-20 14:34:06.472123529 +0000 UTC m=+0.055581836 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent)
Jan 20 14:34:06 compute-0 podman[278762]: 2026-01-20 14:34:06.508866337 +0000 UTC m=+0.095877449 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 20 14:34:07 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:34:07 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:34:07 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:34:07.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:34:07 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:34:07 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1309: 321 pgs: 321 active+clean; 349 MiB data, 599 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 6.0 MiB/s wr, 251 op/s
Jan 20 14:34:07 compute-0 nova_compute[250018]: 2026-01-20 14:34:07.487 250022 DEBUG nova.compute.manager [req-fefa9a35-5e49-48a8-8ae9-60da045f3184 req-d9083806-9ba8-4045-ac86-316894853417 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: e8b5659b-7304-4db5-a31a-dcea49a6666e] Received event network-vif-plugged-f25a0319-31f3-4675-8d35-5881fbada019 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:34:07 compute-0 nova_compute[250018]: 2026-01-20 14:34:07.487 250022 DEBUG oslo_concurrency.lockutils [req-fefa9a35-5e49-48a8-8ae9-60da045f3184 req-d9083806-9ba8-4045-ac86-316894853417 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "e8b5659b-7304-4db5-a31a-dcea49a6666e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:34:07 compute-0 nova_compute[250018]: 2026-01-20 14:34:07.488 250022 DEBUG oslo_concurrency.lockutils [req-fefa9a35-5e49-48a8-8ae9-60da045f3184 req-d9083806-9ba8-4045-ac86-316894853417 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "e8b5659b-7304-4db5-a31a-dcea49a6666e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:34:07 compute-0 nova_compute[250018]: 2026-01-20 14:34:07.488 250022 DEBUG oslo_concurrency.lockutils [req-fefa9a35-5e49-48a8-8ae9-60da045f3184 req-d9083806-9ba8-4045-ac86-316894853417 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "e8b5659b-7304-4db5-a31a-dcea49a6666e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:34:07 compute-0 nova_compute[250018]: 2026-01-20 14:34:07.488 250022 DEBUG nova.compute.manager [req-fefa9a35-5e49-48a8-8ae9-60da045f3184 req-d9083806-9ba8-4045-ac86-316894853417 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: e8b5659b-7304-4db5-a31a-dcea49a6666e] No waiting events found dispatching network-vif-plugged-f25a0319-31f3-4675-8d35-5881fbada019 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:34:07 compute-0 nova_compute[250018]: 2026-01-20 14:34:07.488 250022 WARNING nova.compute.manager [req-fefa9a35-5e49-48a8-8ae9-60da045f3184 req-d9083806-9ba8-4045-ac86-316894853417 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: e8b5659b-7304-4db5-a31a-dcea49a6666e] Received unexpected event network-vif-plugged-f25a0319-31f3-4675-8d35-5881fbada019 for instance with vm_state active and task_state deleting.
Jan 20 14:34:08 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:34:08 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:34:08 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:34:08.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:34:08 compute-0 ceph-mon[74360]: pgmap v1309: 321 pgs: 321 active+clean; 349 MiB data, 599 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 6.0 MiB/s wr, 251 op/s
Jan 20 14:34:09 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:34:09 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:34:09 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:34:09.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:34:09 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1310: 321 pgs: 321 active+clean; 341 MiB data, 592 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 5.3 MiB/s wr, 248 op/s
Jan 20 14:34:09 compute-0 nova_compute[250018]: 2026-01-20 14:34:09.937 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:34:10 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:34:10 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:34:10 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:34:10.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:34:10 compute-0 sshd-session[278807]: Invalid user admin from 157.245.78.139 port 57232
Jan 20 14:34:10 compute-0 sshd-session[278807]: Connection closed by invalid user admin 157.245.78.139 port 57232 [preauth]
Jan 20 14:34:10 compute-0 ovn_controller[148666]: 2026-01-20T14:34:10Z|00119|binding|INFO|Releasing lport 01ec1fed-ce23-42e9-9147-b3495425c336 from this chassis (sb_readonly=0)
Jan 20 14:34:10 compute-0 nova_compute[250018]: 2026-01-20 14:34:10.502 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:34:10 compute-0 nova_compute[250018]: 2026-01-20 14:34:10.546 250022 DEBUG nova.network.neutron [-] [instance: e8b5659b-7304-4db5-a31a-dcea49a6666e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:34:10 compute-0 nova_compute[250018]: 2026-01-20 14:34:10.575 250022 INFO nova.compute.manager [-] [instance: e8b5659b-7304-4db5-a31a-dcea49a6666e] Took 4.33 seconds to deallocate network for instance.
Jan 20 14:34:10 compute-0 nova_compute[250018]: 2026-01-20 14:34:10.650 250022 DEBUG oslo_concurrency.lockutils [None req-8ec4ed2d-b0ad-4d88-a833-3dca688b1202 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:34:10 compute-0 nova_compute[250018]: 2026-01-20 14:34:10.651 250022 DEBUG oslo_concurrency.lockutils [None req-8ec4ed2d-b0ad-4d88-a833-3dca688b1202 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:34:10 compute-0 nova_compute[250018]: 2026-01-20 14:34:10.723 250022 DEBUG nova.compute.manager [req-4deddfea-65d5-4a2e-83a9-9f5981291c15 req-45760755-40f0-4c2d-835b-96f452492715 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: e8b5659b-7304-4db5-a31a-dcea49a6666e] Received event network-vif-deleted-f25a0319-31f3-4675-8d35-5881fbada019 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:34:10 compute-0 nova_compute[250018]: 2026-01-20 14:34:10.769 250022 DEBUG oslo_concurrency.processutils [None req-8ec4ed2d-b0ad-4d88-a833-3dca688b1202 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:34:10 compute-0 ceph-mon[74360]: pgmap v1310: 321 pgs: 321 active+clean; 341 MiB data, 592 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 5.3 MiB/s wr, 248 op/s
Jan 20 14:34:10 compute-0 nova_compute[250018]: 2026-01-20 14:34:10.938 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:34:11 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:34:11 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.002000053s ======
Jan 20 14:34:11 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:34:11.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Jan 20 14:34:11 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:34:11 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1936457463' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:34:11 compute-0 nova_compute[250018]: 2026-01-20 14:34:11.212 250022 DEBUG oslo_concurrency.processutils [None req-8ec4ed2d-b0ad-4d88-a833-3dca688b1202 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:34:11 compute-0 nova_compute[250018]: 2026-01-20 14:34:11.219 250022 DEBUG nova.compute.provider_tree [None req-8ec4ed2d-b0ad-4d88-a833-3dca688b1202 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:34:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 14:34:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:34:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 20 14:34:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:34:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.007783055904522613 of space, bias 1.0, pg target 2.334916771356784 quantized to 32 (current 32)
Jan 20 14:34:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:34:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 5.452610273590173e-07 of space, bias 1.0, pg target 0.00016248778615298717 quantized to 32 (current 32)
Jan 20 14:34:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:34:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:34:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:34:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5671365362693095 quantized to 32 (current 32)
Jan 20 14:34:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:34:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017332030522985297 quantized to 16 (current 32)
Jan 20 14:34:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:34:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:34:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:34:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.0002166503815373162 quantized to 32 (current 32)
Jan 20 14:34:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:34:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018415282430671877 quantized to 32 (current 32)
Jan 20 14:34:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:34:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:34:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:34:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004333007630746324 quantized to 32 (current 32)
Jan 20 14:34:11 compute-0 nova_compute[250018]: 2026-01-20 14:34:11.272 250022 DEBUG nova.scheduler.client.report [None req-8ec4ed2d-b0ad-4d88-a833-3dca688b1202 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:34:11 compute-0 nova_compute[250018]: 2026-01-20 14:34:11.304 250022 DEBUG oslo_concurrency.lockutils [None req-8ec4ed2d-b0ad-4d88-a833-3dca688b1202 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.654s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:34:11 compute-0 nova_compute[250018]: 2026-01-20 14:34:11.406 250022 INFO nova.scheduler.client.report [None req-8ec4ed2d-b0ad-4d88-a833-3dca688b1202 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Deleted allocations for instance e8b5659b-7304-4db5-a31a-dcea49a6666e
Jan 20 14:34:11 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1311: 321 pgs: 321 active+clean; 326 MiB data, 585 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 5.0 MiB/s wr, 239 op/s
Jan 20 14:34:11 compute-0 nova_compute[250018]: 2026-01-20 14:34:11.503 250022 DEBUG oslo_concurrency.lockutils [None req-8ec4ed2d-b0ad-4d88-a833-3dca688b1202 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Lock "e8b5659b-7304-4db5-a31a-dcea49a6666e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.862s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:34:12 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/3250290687' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:34:12 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/1936457463' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:34:12 compute-0 ceph-mon[74360]: pgmap v1311: 321 pgs: 321 active+clean; 326 MiB data, 585 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 5.0 MiB/s wr, 239 op/s
Jan 20 14:34:12 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/1227090643' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:34:12 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:34:12 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:34:12 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:34:12.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:34:12 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:34:13 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:34:13 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:34:13 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:34:13.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:34:13 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1312: 321 pgs: 321 active+clean; 326 MiB data, 585 MiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 3.0 MiB/s wr, 147 op/s
Jan 20 14:34:14 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:34:14 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:34:14 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:34:14.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:34:14 compute-0 nova_compute[250018]: 2026-01-20 14:34:14.939 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:34:15 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:34:15 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:34:15 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:34:15.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:34:15 compute-0 ceph-mon[74360]: pgmap v1312: 321 pgs: 321 active+clean; 326 MiB data, 585 MiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 3.0 MiB/s wr, 147 op/s
Jan 20 14:34:15 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/467321397' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:34:15 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/467321397' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:34:15 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1313: 321 pgs: 321 active+clean; 326 MiB data, 586 MiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 3.0 MiB/s wr, 155 op/s
Jan 20 14:34:15 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 20 14:34:15 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4048469168' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:34:15 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 20 14:34:15 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4048469168' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:34:15 compute-0 nova_compute[250018]: 2026-01-20 14:34:15.940 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:34:16 compute-0 nova_compute[250018]: 2026-01-20 14:34:16.110 250022 DEBUG oslo_concurrency.lockutils [None req-1bbe7aa0-634b-4ef9-b28b-42077972971a 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Acquiring lock "a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:34:16 compute-0 nova_compute[250018]: 2026-01-20 14:34:16.110 250022 DEBUG oslo_concurrency.lockutils [None req-1bbe7aa0-634b-4ef9-b28b-42077972971a 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Lock "a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:34:16 compute-0 nova_compute[250018]: 2026-01-20 14:34:16.110 250022 DEBUG oslo_concurrency.lockutils [None req-1bbe7aa0-634b-4ef9-b28b-42077972971a 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Acquiring lock "a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:34:16 compute-0 nova_compute[250018]: 2026-01-20 14:34:16.110 250022 DEBUG oslo_concurrency.lockutils [None req-1bbe7aa0-634b-4ef9-b28b-42077972971a 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Lock "a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:34:16 compute-0 nova_compute[250018]: 2026-01-20 14:34:16.111 250022 DEBUG oslo_concurrency.lockutils [None req-1bbe7aa0-634b-4ef9-b28b-42077972971a 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Lock "a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:34:16 compute-0 nova_compute[250018]: 2026-01-20 14:34:16.112 250022 INFO nova.compute.manager [None req-1bbe7aa0-634b-4ef9-b28b-42077972971a 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] [instance: a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174] Terminating instance
Jan 20 14:34:16 compute-0 nova_compute[250018]: 2026-01-20 14:34:16.112 250022 DEBUG nova.compute.manager [None req-1bbe7aa0-634b-4ef9-b28b-42077972971a 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] [instance: a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 14:34:16 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:34:16 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:34:16 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:34:16.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:34:16 compute-0 kernel: tapca52d5c1-4a (unregistering): left promiscuous mode
Jan 20 14:34:16 compute-0 NetworkManager[48960]: <info>  [1768919656.2969] device (tapca52d5c1-4a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 14:34:16 compute-0 ovn_controller[148666]: 2026-01-20T14:34:16Z|00120|binding|INFO|Releasing lport ca52d5c1-4a07-4d94-9adb-1311a4d89044 from this chassis (sb_readonly=0)
Jan 20 14:34:16 compute-0 ovn_controller[148666]: 2026-01-20T14:34:16Z|00121|binding|INFO|Setting lport ca52d5c1-4a07-4d94-9adb-1311a4d89044 down in Southbound
Jan 20 14:34:16 compute-0 ovn_controller[148666]: 2026-01-20T14:34:16Z|00122|binding|INFO|Removing iface tapca52d5c1-4a ovn-installed in OVS
Jan 20 14:34:16 compute-0 nova_compute[250018]: 2026-01-20 14:34:16.299 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:34:16 compute-0 nova_compute[250018]: 2026-01-20 14:34:16.334 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:34:16 compute-0 systemd[1]: machine-qemu\x2d18\x2dinstance\x2d00000026.scope: Deactivated successfully.
Jan 20 14:34:16 compute-0 systemd[1]: machine-qemu\x2d18\x2dinstance\x2d00000026.scope: Consumed 16.335s CPU time.
Jan 20 14:34:16 compute-0 systemd-machined[216401]: Machine qemu-18-instance-00000026 terminated.
Jan 20 14:34:16 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:34:16.352 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:94:c6:27 10.100.0.8'], port_security=['fa:16:3e:94:c6:27 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b774e474-3e68-434c-8017-93bd087d2285', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '48488e875f2e472f97f07cc7ee07e0be', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a57e9380-dedd-484d-b2c3-1886f12d2575', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.205'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3367f149-9a0b-42bf-93b6-10dba98995c7, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=ca52d5c1-4a07-4d94-9adb-1311a4d89044) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:34:16 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:34:16.354 160071 INFO neutron.agent.ovn.metadata.agent [-] Port ca52d5c1-4a07-4d94-9adb-1311a4d89044 in datapath b774e474-3e68-434c-8017-93bd087d2285 unbound from our chassis
Jan 20 14:34:16 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:34:16.357 160071 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b774e474-3e68-434c-8017-93bd087d2285, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 14:34:16 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:34:16.358 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[2c3aa46f-9471-4f15-ab29-78314f829672]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:34:16 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:34:16.358 160071 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b774e474-3e68-434c-8017-93bd087d2285 namespace which is not needed anymore
Jan 20 14:34:16 compute-0 ceph-mon[74360]: pgmap v1313: 321 pgs: 321 active+clean; 326 MiB data, 586 MiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 3.0 MiB/s wr, 155 op/s
Jan 20 14:34:16 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/4048469168' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:34:16 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/4048469168' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:34:16 compute-0 nova_compute[250018]: 2026-01-20 14:34:16.552 250022 INFO nova.virt.libvirt.driver [-] [instance: a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174] Instance destroyed successfully.
Jan 20 14:34:16 compute-0 nova_compute[250018]: 2026-01-20 14:34:16.553 250022 DEBUG nova.objects.instance [None req-1bbe7aa0-634b-4ef9-b28b-42077972971a 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Lazy-loading 'resources' on Instance uuid a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:34:16 compute-0 neutron-haproxy-ovnmeta-b774e474-3e68-434c-8017-93bd087d2285[277159]: [NOTICE]   (277163) : haproxy version is 2.8.14-c23fe91
Jan 20 14:34:16 compute-0 neutron-haproxy-ovnmeta-b774e474-3e68-434c-8017-93bd087d2285[277159]: [NOTICE]   (277163) : path to executable is /usr/sbin/haproxy
Jan 20 14:34:16 compute-0 neutron-haproxy-ovnmeta-b774e474-3e68-434c-8017-93bd087d2285[277159]: [WARNING]  (277163) : Exiting Master process...
Jan 20 14:34:16 compute-0 neutron-haproxy-ovnmeta-b774e474-3e68-434c-8017-93bd087d2285[277159]: [WARNING]  (277163) : Exiting Master process...
Jan 20 14:34:16 compute-0 neutron-haproxy-ovnmeta-b774e474-3e68-434c-8017-93bd087d2285[277159]: [ALERT]    (277163) : Current worker (277165) exited with code 143 (Terminated)
Jan 20 14:34:16 compute-0 neutron-haproxy-ovnmeta-b774e474-3e68-434c-8017-93bd087d2285[277159]: [WARNING]  (277163) : All workers exited. Exiting... (0)
Jan 20 14:34:16 compute-0 systemd[1]: libpod-f0f3ba0713c8d10811d4335f4052129a8192aeb6ee27e08b11d8ef96dde857b6.scope: Deactivated successfully.
Jan 20 14:34:16 compute-0 podman[278860]: 2026-01-20 14:34:16.567116154 +0000 UTC m=+0.097487723 container died f0f3ba0713c8d10811d4335f4052129a8192aeb6ee27e08b11d8ef96dde857b6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b774e474-3e68-434c-8017-93bd087d2285, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 20 14:34:16 compute-0 nova_compute[250018]: 2026-01-20 14:34:16.567 250022 DEBUG nova.virt.libvirt.vif [None req-1bbe7aa0-634b-4ef9-b28b-42077972971a 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:32:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-VolumesAdminNegativeTest-server-360507108',display_name='tempest-VolumesAdminNegativeTest-server-360507108',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesadminnegativetest-server-360507108',id=38,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHEfpLP+4Mq2CZ5ZMswuqwlZgEKKrioeoV+EpxSKHke61Yt8UqOI4Nb0ZGFOm7IzrLdRE4GvfieoPu5jR6ZfidedQLi0pPlRx8BniFmrNq4zfNCZEmtF+sKw9ryVBJbHVQ==',key_name='tempest-keypair-1809754421',keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:33:05Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='48488e875f2e472f97f07cc7ee07e0be',ramdisk_id='',reservation_id='r-sgcyfa1m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-VolumesAdminNegativeTest-1678705027',owner_user_name='tempest-VolumesAdminNegativeTest-1678705027-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T14:33:05Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='729ca8a2a7414735af25d05df4a563b9',uuid=a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ca52d5c1-4a07-4d94-9adb-1311a4d89044", "address": "fa:16:3e:94:c6:27", "network": {"id": "b774e474-3e68-434c-8017-93bd087d2285", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-829886895-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48488e875f2e472f97f07cc7ee07e0be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca52d5c1-4a", "ovs_interfaceid": "ca52d5c1-4a07-4d94-9adb-1311a4d89044", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 14:34:16 compute-0 nova_compute[250018]: 2026-01-20 14:34:16.568 250022 DEBUG nova.network.os_vif_util [None req-1bbe7aa0-634b-4ef9-b28b-42077972971a 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Converting VIF {"id": "ca52d5c1-4a07-4d94-9adb-1311a4d89044", "address": "fa:16:3e:94:c6:27", "network": {"id": "b774e474-3e68-434c-8017-93bd087d2285", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-829886895-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48488e875f2e472f97f07cc7ee07e0be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca52d5c1-4a", "ovs_interfaceid": "ca52d5c1-4a07-4d94-9adb-1311a4d89044", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:34:16 compute-0 nova_compute[250018]: 2026-01-20 14:34:16.569 250022 DEBUG nova.network.os_vif_util [None req-1bbe7aa0-634b-4ef9-b28b-42077972971a 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:94:c6:27,bridge_name='br-int',has_traffic_filtering=True,id=ca52d5c1-4a07-4d94-9adb-1311a4d89044,network=Network(b774e474-3e68-434c-8017-93bd087d2285),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapca52d5c1-4a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:34:16 compute-0 nova_compute[250018]: 2026-01-20 14:34:16.569 250022 DEBUG os_vif [None req-1bbe7aa0-634b-4ef9-b28b-42077972971a 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:94:c6:27,bridge_name='br-int',has_traffic_filtering=True,id=ca52d5c1-4a07-4d94-9adb-1311a4d89044,network=Network(b774e474-3e68-434c-8017-93bd087d2285),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapca52d5c1-4a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 14:34:16 compute-0 nova_compute[250018]: 2026-01-20 14:34:16.570 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:34:16 compute-0 nova_compute[250018]: 2026-01-20 14:34:16.571 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapca52d5c1-4a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:34:16 compute-0 nova_compute[250018]: 2026-01-20 14:34:16.611 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:34:16 compute-0 nova_compute[250018]: 2026-01-20 14:34:16.613 250022 DEBUG nova.compute.manager [req-9d969d5a-2384-4bc4-a458-f2b31b50b556 req-ab40901f-2e04-45fa-b477-14076ecd5ed8 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174] Received event network-vif-unplugged-ca52d5c1-4a07-4d94-9adb-1311a4d89044 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:34:16 compute-0 nova_compute[250018]: 2026-01-20 14:34:16.613 250022 DEBUG oslo_concurrency.lockutils [req-9d969d5a-2384-4bc4-a458-f2b31b50b556 req-ab40901f-2e04-45fa-b477-14076ecd5ed8 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:34:16 compute-0 nova_compute[250018]: 2026-01-20 14:34:16.613 250022 DEBUG oslo_concurrency.lockutils [req-9d969d5a-2384-4bc4-a458-f2b31b50b556 req-ab40901f-2e04-45fa-b477-14076ecd5ed8 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:34:16 compute-0 nova_compute[250018]: 2026-01-20 14:34:16.614 250022 DEBUG oslo_concurrency.lockutils [req-9d969d5a-2384-4bc4-a458-f2b31b50b556 req-ab40901f-2e04-45fa-b477-14076ecd5ed8 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:34:16 compute-0 nova_compute[250018]: 2026-01-20 14:34:16.614 250022 DEBUG nova.compute.manager [req-9d969d5a-2384-4bc4-a458-f2b31b50b556 req-ab40901f-2e04-45fa-b477-14076ecd5ed8 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174] No waiting events found dispatching network-vif-unplugged-ca52d5c1-4a07-4d94-9adb-1311a4d89044 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:34:16 compute-0 nova_compute[250018]: 2026-01-20 14:34:16.614 250022 DEBUG nova.compute.manager [req-9d969d5a-2384-4bc4-a458-f2b31b50b556 req-ab40901f-2e04-45fa-b477-14076ecd5ed8 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174] Received event network-vif-unplugged-ca52d5c1-4a07-4d94-9adb-1311a4d89044 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 20 14:34:16 compute-0 nova_compute[250018]: 2026-01-20 14:34:16.615 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:34:16 compute-0 nova_compute[250018]: 2026-01-20 14:34:16.616 250022 INFO os_vif [None req-1bbe7aa0-634b-4ef9-b28b-42077972971a 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:94:c6:27,bridge_name='br-int',has_traffic_filtering=True,id=ca52d5c1-4a07-4d94-9adb-1311a4d89044,network=Network(b774e474-3e68-434c-8017-93bd087d2285),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapca52d5c1-4a')
Jan 20 14:34:17 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f0f3ba0713c8d10811d4335f4052129a8192aeb6ee27e08b11d8ef96dde857b6-userdata-shm.mount: Deactivated successfully.
Jan 20 14:34:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-669f7aad4565a1ef60d54e29274ebcfb188daad43f2bc4b9ae05140434ef8309-merged.mount: Deactivated successfully.
Jan 20 14:34:17 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:34:17 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:34:17 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:34:17.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:34:17 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:34:17 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1314: 321 pgs: 321 active+clean; 326 MiB data, 586 MiB used, 20 GiB / 21 GiB avail; 398 KiB/s rd, 389 KiB/s wr, 70 op/s
Jan 20 14:34:17 compute-0 podman[278860]: 2026-01-20 14:34:17.522716061 +0000 UTC m=+1.053087630 container cleanup f0f3ba0713c8d10811d4335f4052129a8192aeb6ee27e08b11d8ef96dde857b6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b774e474-3e68-434c-8017-93bd087d2285, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 20 14:34:17 compute-0 systemd[1]: libpod-conmon-f0f3ba0713c8d10811d4335f4052129a8192aeb6ee27e08b11d8ef96dde857b6.scope: Deactivated successfully.
Jan 20 14:34:17 compute-0 podman[278923]: 2026-01-20 14:34:17.674873964 +0000 UTC m=+0.131357134 container remove f0f3ba0713c8d10811d4335f4052129a8192aeb6ee27e08b11d8ef96dde857b6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b774e474-3e68-434c-8017-93bd087d2285, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:34:17 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:34:17.682 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[008d3acc-b123-4d56-a798-d2e666324841]: (4, ('Tue Jan 20 02:34:16 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-b774e474-3e68-434c-8017-93bd087d2285 (f0f3ba0713c8d10811d4335f4052129a8192aeb6ee27e08b11d8ef96dde857b6)\nf0f3ba0713c8d10811d4335f4052129a8192aeb6ee27e08b11d8ef96dde857b6\nTue Jan 20 02:34:17 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-b774e474-3e68-434c-8017-93bd087d2285 (f0f3ba0713c8d10811d4335f4052129a8192aeb6ee27e08b11d8ef96dde857b6)\nf0f3ba0713c8d10811d4335f4052129a8192aeb6ee27e08b11d8ef96dde857b6\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:34:17 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:34:17.684 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[d3f03f38-622a-4542-8559-e24491093b07]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:34:17 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:34:17.685 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb774e474-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:34:17 compute-0 nova_compute[250018]: 2026-01-20 14:34:17.687 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:34:17 compute-0 kernel: tapb774e474-30: left promiscuous mode
Jan 20 14:34:17 compute-0 nova_compute[250018]: 2026-01-20 14:34:17.703 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:34:17 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:34:17.706 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[d1d371f8-04cb-4911-b853-497ac64dba91]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:34:17 compute-0 nova_compute[250018]: 2026-01-20 14:34:17.724 250022 INFO nova.virt.libvirt.driver [None req-1bbe7aa0-634b-4ef9-b28b-42077972971a 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] [instance: a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174] Deleting instance files /var/lib/nova/instances/a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174_del
Jan 20 14:34:17 compute-0 nova_compute[250018]: 2026-01-20 14:34:17.724 250022 INFO nova.virt.libvirt.driver [None req-1bbe7aa0-634b-4ef9-b28b-42077972971a 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] [instance: a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174] Deletion of /var/lib/nova/instances/a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174_del complete
Jan 20 14:34:17 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:34:17.726 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[53f67923-ac73-4334-bc79-ead43bf77336]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:34:17 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:34:17.727 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[d05a82a1-40d9-43ca-983b-ef882a67beb8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:34:17 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:34:17.740 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[c7f3dfa6-6061-46ce-8f23-374ff38768f7]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 552578, 'reachable_time': 30189, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 278939, 'error': None, 'target': 'ovnmeta-b774e474-3e68-434c-8017-93bd087d2285', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:34:17 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:34:17.743 160517 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b774e474-3e68-434c-8017-93bd087d2285 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 14:34:17 compute-0 systemd[1]: run-netns-ovnmeta\x2db774e474\x2d3e68\x2d434c\x2d8017\x2d93bd087d2285.mount: Deactivated successfully.
Jan 20 14:34:17 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:34:17.743 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[8ec35384-d891-4813-9ed2-8204341c273c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:34:17 compute-0 nova_compute[250018]: 2026-01-20 14:34:17.785 250022 INFO nova.compute.manager [None req-1bbe7aa0-634b-4ef9-b28b-42077972971a 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] [instance: a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174] Took 1.67 seconds to destroy the instance on the hypervisor.
Jan 20 14:34:17 compute-0 nova_compute[250018]: 2026-01-20 14:34:17.785 250022 DEBUG oslo.service.loopingcall [None req-1bbe7aa0-634b-4ef9-b28b-42077972971a 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 14:34:17 compute-0 nova_compute[250018]: 2026-01-20 14:34:17.785 250022 DEBUG nova.compute.manager [-] [instance: a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 14:34:17 compute-0 nova_compute[250018]: 2026-01-20 14:34:17.786 250022 DEBUG nova.network.neutron [-] [instance: a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 14:34:18 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:34:18 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:34:18 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:34:18.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:34:18 compute-0 ceph-mon[74360]: pgmap v1314: 321 pgs: 321 active+clean; 326 MiB data, 586 MiB used, 20 GiB / 21 GiB avail; 398 KiB/s rd, 389 KiB/s wr, 70 op/s
Jan 20 14:34:18 compute-0 nova_compute[250018]: 2026-01-20 14:34:18.739 250022 DEBUG nova.compute.manager [req-66f39915-e138-4bbb-8425-a6c86fc0b855 req-aab154ca-5969-4b51-bcee-30814d89905a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174] Received event network-vif-plugged-ca52d5c1-4a07-4d94-9adb-1311a4d89044 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:34:18 compute-0 nova_compute[250018]: 2026-01-20 14:34:18.740 250022 DEBUG oslo_concurrency.lockutils [req-66f39915-e138-4bbb-8425-a6c86fc0b855 req-aab154ca-5969-4b51-bcee-30814d89905a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:34:18 compute-0 nova_compute[250018]: 2026-01-20 14:34:18.740 250022 DEBUG oslo_concurrency.lockutils [req-66f39915-e138-4bbb-8425-a6c86fc0b855 req-aab154ca-5969-4b51-bcee-30814d89905a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:34:18 compute-0 nova_compute[250018]: 2026-01-20 14:34:18.740 250022 DEBUG oslo_concurrency.lockutils [req-66f39915-e138-4bbb-8425-a6c86fc0b855 req-aab154ca-5969-4b51-bcee-30814d89905a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:34:18 compute-0 nova_compute[250018]: 2026-01-20 14:34:18.741 250022 DEBUG nova.compute.manager [req-66f39915-e138-4bbb-8425-a6c86fc0b855 req-aab154ca-5969-4b51-bcee-30814d89905a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174] No waiting events found dispatching network-vif-plugged-ca52d5c1-4a07-4d94-9adb-1311a4d89044 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:34:18 compute-0 nova_compute[250018]: 2026-01-20 14:34:18.741 250022 WARNING nova.compute.manager [req-66f39915-e138-4bbb-8425-a6c86fc0b855 req-aab154ca-5969-4b51-bcee-30814d89905a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174] Received unexpected event network-vif-plugged-ca52d5c1-4a07-4d94-9adb-1311a4d89044 for instance with vm_state active and task_state deleting.
Jan 20 14:34:19 compute-0 nova_compute[250018]: 2026-01-20 14:34:19.075 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:34:19 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:34:19 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:34:19 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:34:19.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:34:19 compute-0 nova_compute[250018]: 2026-01-20 14:34:19.201 250022 DEBUG nova.network.neutron [-] [instance: a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:34:19 compute-0 nova_compute[250018]: 2026-01-20 14:34:19.220 250022 INFO nova.compute.manager [-] [instance: a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174] Took 1.43 seconds to deallocate network for instance.
Jan 20 14:34:19 compute-0 nova_compute[250018]: 2026-01-20 14:34:19.279 250022 DEBUG oslo_concurrency.lockutils [None req-1bbe7aa0-634b-4ef9-b28b-42077972971a 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:34:19 compute-0 nova_compute[250018]: 2026-01-20 14:34:19.280 250022 DEBUG oslo_concurrency.lockutils [None req-1bbe7aa0-634b-4ef9-b28b-42077972971a 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:34:19 compute-0 nova_compute[250018]: 2026-01-20 14:34:19.334 250022 DEBUG oslo_concurrency.processutils [None req-1bbe7aa0-634b-4ef9-b28b-42077972971a 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:34:19 compute-0 nova_compute[250018]: 2026-01-20 14:34:19.364 250022 DEBUG nova.compute.manager [req-2aded16a-1b42-47c6-9370-3623a1f47f06 req-35513de1-d18a-454e-b52a-33498e8e83fa 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174] Received event network-vif-deleted-ca52d5c1-4a07-4d94-9adb-1311a4d89044 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:34:19 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1315: 321 pgs: 321 active+clean; 308 MiB data, 579 MiB used, 20 GiB / 21 GiB avail; 600 KiB/s rd, 27 KiB/s wr, 57 op/s
Jan 20 14:34:19 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:34:19 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1805719283' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:34:19 compute-0 nova_compute[250018]: 2026-01-20 14:34:19.870 250022 DEBUG oslo_concurrency.processutils [None req-1bbe7aa0-634b-4ef9-b28b-42077972971a 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.537s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:34:19 compute-0 nova_compute[250018]: 2026-01-20 14:34:19.879 250022 DEBUG nova.compute.provider_tree [None req-1bbe7aa0-634b-4ef9-b28b-42077972971a 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:34:19 compute-0 nova_compute[250018]: 2026-01-20 14:34:19.885 250022 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768919644.8850017, e8b5659b-7304-4db5-a31a-dcea49a6666e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:34:19 compute-0 nova_compute[250018]: 2026-01-20 14:34:19.886 250022 INFO nova.compute.manager [-] [instance: e8b5659b-7304-4db5-a31a-dcea49a6666e] VM Stopped (Lifecycle Event)
Jan 20 14:34:19 compute-0 nova_compute[250018]: 2026-01-20 14:34:19.895 250022 DEBUG nova.scheduler.client.report [None req-1bbe7aa0-634b-4ef9-b28b-42077972971a 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:34:19 compute-0 nova_compute[250018]: 2026-01-20 14:34:19.908 250022 DEBUG nova.compute.manager [None req-2cabd8bf-ca20-4c1a-adae-25313722dc17 - - - - - -] [instance: e8b5659b-7304-4db5-a31a-dcea49a6666e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:34:19 compute-0 ceph-mon[74360]: pgmap v1315: 321 pgs: 321 active+clean; 308 MiB data, 579 MiB used, 20 GiB / 21 GiB avail; 600 KiB/s rd, 27 KiB/s wr, 57 op/s
Jan 20 14:34:19 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/1805719283' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:34:19 compute-0 nova_compute[250018]: 2026-01-20 14:34:19.921 250022 DEBUG oslo_concurrency.lockutils [None req-1bbe7aa0-634b-4ef9-b28b-42077972971a 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.641s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:34:19 compute-0 nova_compute[250018]: 2026-01-20 14:34:19.980 250022 INFO nova.scheduler.client.report [None req-1bbe7aa0-634b-4ef9-b28b-42077972971a 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Deleted allocations for instance a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174
Jan 20 14:34:20 compute-0 nova_compute[250018]: 2026-01-20 14:34:20.065 250022 DEBUG oslo_concurrency.lockutils [None req-1bbe7aa0-634b-4ef9-b28b-42077972971a 729ca8a2a7414735af25d05df4a563b9 48488e875f2e472f97f07cc7ee07e0be - - default default] Lock "a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.955s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:34:20 compute-0 nova_compute[250018]: 2026-01-20 14:34:20.067 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:34:20 compute-0 nova_compute[250018]: 2026-01-20 14:34:20.067 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:34:20 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:34:20 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:34:20 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:34:20.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:34:20 compute-0 nova_compute[250018]: 2026-01-20 14:34:20.447 250022 DEBUG oslo_concurrency.lockutils [None req-77ec8232-ff3d-4ff9-85ce-092db0fcf1b7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Acquiring lock "280e5549-64d8-4573-a271-a210145a151d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:34:20 compute-0 nova_compute[250018]: 2026-01-20 14:34:20.448 250022 DEBUG oslo_concurrency.lockutils [None req-77ec8232-ff3d-4ff9-85ce-092db0fcf1b7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Lock "280e5549-64d8-4573-a271-a210145a151d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:34:20 compute-0 nova_compute[250018]: 2026-01-20 14:34:20.465 250022 DEBUG nova.compute.manager [None req-77ec8232-ff3d-4ff9-85ce-092db0fcf1b7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: 280e5549-64d8-4573-a271-a210145a151d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 14:34:20 compute-0 nova_compute[250018]: 2026-01-20 14:34:20.528 250022 DEBUG oslo_concurrency.lockutils [None req-77ec8232-ff3d-4ff9-85ce-092db0fcf1b7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:34:20 compute-0 nova_compute[250018]: 2026-01-20 14:34:20.529 250022 DEBUG oslo_concurrency.lockutils [None req-77ec8232-ff3d-4ff9-85ce-092db0fcf1b7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:34:20 compute-0 nova_compute[250018]: 2026-01-20 14:34:20.535 250022 DEBUG nova.virt.hardware [None req-77ec8232-ff3d-4ff9-85ce-092db0fcf1b7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 14:34:20 compute-0 nova_compute[250018]: 2026-01-20 14:34:20.536 250022 INFO nova.compute.claims [None req-77ec8232-ff3d-4ff9-85ce-092db0fcf1b7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: 280e5549-64d8-4573-a271-a210145a151d] Claim successful on node compute-0.ctlplane.example.com
Jan 20 14:34:20 compute-0 nova_compute[250018]: 2026-01-20 14:34:20.634 250022 DEBUG oslo_concurrency.processutils [None req-77ec8232-ff3d-4ff9-85ce-092db0fcf1b7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:34:20 compute-0 nova_compute[250018]: 2026-01-20 14:34:20.958 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:34:21 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:34:21 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3074879648' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:34:21 compute-0 nova_compute[250018]: 2026-01-20 14:34:21.157 250022 DEBUG oslo_concurrency.processutils [None req-77ec8232-ff3d-4ff9-85ce-092db0fcf1b7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:34:21 compute-0 nova_compute[250018]: 2026-01-20 14:34:21.162 250022 DEBUG nova.compute.provider_tree [None req-77ec8232-ff3d-4ff9-85ce-092db0fcf1b7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:34:21 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:34:21 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:34:21 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:34:21.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:34:21 compute-0 nova_compute[250018]: 2026-01-20 14:34:21.195 250022 DEBUG nova.scheduler.client.report [None req-77ec8232-ff3d-4ff9-85ce-092db0fcf1b7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:34:21 compute-0 nova_compute[250018]: 2026-01-20 14:34:21.227 250022 DEBUG oslo_concurrency.lockutils [None req-77ec8232-ff3d-4ff9-85ce-092db0fcf1b7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.698s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:34:21 compute-0 nova_compute[250018]: 2026-01-20 14:34:21.227 250022 DEBUG nova.compute.manager [None req-77ec8232-ff3d-4ff9-85ce-092db0fcf1b7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: 280e5549-64d8-4573-a271-a210145a151d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 14:34:21 compute-0 nova_compute[250018]: 2026-01-20 14:34:21.301 250022 DEBUG nova.compute.manager [None req-77ec8232-ff3d-4ff9-85ce-092db0fcf1b7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: 280e5549-64d8-4573-a271-a210145a151d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 20 14:34:21 compute-0 nova_compute[250018]: 2026-01-20 14:34:21.301 250022 DEBUG nova.network.neutron [None req-77ec8232-ff3d-4ff9-85ce-092db0fcf1b7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: 280e5549-64d8-4573-a271-a210145a151d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 14:34:21 compute-0 nova_compute[250018]: 2026-01-20 14:34:21.330 250022 INFO nova.virt.libvirt.driver [None req-77ec8232-ff3d-4ff9-85ce-092db0fcf1b7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: 280e5549-64d8-4573-a271-a210145a151d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 14:34:21 compute-0 nova_compute[250018]: 2026-01-20 14:34:21.354 250022 DEBUG nova.compute.manager [None req-77ec8232-ff3d-4ff9-85ce-092db0fcf1b7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: 280e5549-64d8-4573-a271-a210145a151d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 14:34:21 compute-0 nova_compute[250018]: 2026-01-20 14:34:21.465 250022 DEBUG nova.compute.manager [None req-77ec8232-ff3d-4ff9-85ce-092db0fcf1b7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: 280e5549-64d8-4573-a271-a210145a151d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 14:34:21 compute-0 nova_compute[250018]: 2026-01-20 14:34:21.467 250022 DEBUG nova.virt.libvirt.driver [None req-77ec8232-ff3d-4ff9-85ce-092db0fcf1b7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: 280e5549-64d8-4573-a271-a210145a151d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 14:34:21 compute-0 nova_compute[250018]: 2026-01-20 14:34:21.468 250022 INFO nova.virt.libvirt.driver [None req-77ec8232-ff3d-4ff9-85ce-092db0fcf1b7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: 280e5549-64d8-4573-a271-a210145a151d] Creating image(s)
Jan 20 14:34:21 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1316: 321 pgs: 321 active+clean; 246 MiB data, 539 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 28 KiB/s wr, 125 op/s
Jan 20 14:34:21 compute-0 nova_compute[250018]: 2026-01-20 14:34:21.503 250022 DEBUG nova.storage.rbd_utils [None req-77ec8232-ff3d-4ff9-85ce-092db0fcf1b7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] rbd image 280e5549-64d8-4573-a271-a210145a151d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:34:21 compute-0 nova_compute[250018]: 2026-01-20 14:34:21.535 250022 DEBUG nova.storage.rbd_utils [None req-77ec8232-ff3d-4ff9-85ce-092db0fcf1b7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] rbd image 280e5549-64d8-4573-a271-a210145a151d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:34:21 compute-0 nova_compute[250018]: 2026-01-20 14:34:21.564 250022 DEBUG nova.storage.rbd_utils [None req-77ec8232-ff3d-4ff9-85ce-092db0fcf1b7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] rbd image 280e5549-64d8-4573-a271-a210145a151d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:34:21 compute-0 nova_compute[250018]: 2026-01-20 14:34:21.568 250022 DEBUG oslo_concurrency.processutils [None req-77ec8232-ff3d-4ff9-85ce-092db0fcf1b7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:34:21 compute-0 nova_compute[250018]: 2026-01-20 14:34:21.594 250022 DEBUG nova.policy [None req-77ec8232-ff3d-4ff9-85ce-092db0fcf1b7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f51c395107c84dbd9067113b84ff01dd', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a841e7a1434c488390475174e10bc161', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 20 14:34:21 compute-0 nova_compute[250018]: 2026-01-20 14:34:21.610 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:34:21 compute-0 nova_compute[250018]: 2026-01-20 14:34:21.627 250022 DEBUG oslo_concurrency.processutils [None req-77ec8232-ff3d-4ff9-85ce-092db0fcf1b7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:34:21 compute-0 nova_compute[250018]: 2026-01-20 14:34:21.629 250022 DEBUG oslo_concurrency.lockutils [None req-77ec8232-ff3d-4ff9-85ce-092db0fcf1b7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Acquiring lock "82d5c1918fd7c974214c7a48c1793a7a82560462" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:34:21 compute-0 nova_compute[250018]: 2026-01-20 14:34:21.630 250022 DEBUG oslo_concurrency.lockutils [None req-77ec8232-ff3d-4ff9-85ce-092db0fcf1b7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:34:21 compute-0 nova_compute[250018]: 2026-01-20 14:34:21.630 250022 DEBUG oslo_concurrency.lockutils [None req-77ec8232-ff3d-4ff9-85ce-092db0fcf1b7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:34:21 compute-0 nova_compute[250018]: 2026-01-20 14:34:21.668 250022 DEBUG nova.storage.rbd_utils [None req-77ec8232-ff3d-4ff9-85ce-092db0fcf1b7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] rbd image 280e5549-64d8-4573-a271-a210145a151d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:34:21 compute-0 nova_compute[250018]: 2026-01-20 14:34:21.673 250022 DEBUG oslo_concurrency.processutils [None req-77ec8232-ff3d-4ff9-85ce-092db0fcf1b7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 280e5549-64d8-4573-a271-a210145a151d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:34:21 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3074879648' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:34:22 compute-0 nova_compute[250018]: 2026-01-20 14:34:22.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:34:22 compute-0 nova_compute[250018]: 2026-01-20 14:34:22.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:34:22 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:34:22 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:34:22 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:34:22.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:34:22 compute-0 nova_compute[250018]: 2026-01-20 14:34:22.178 250022 DEBUG oslo_concurrency.processutils [None req-77ec8232-ff3d-4ff9-85ce-092db0fcf1b7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 280e5549-64d8-4573-a271-a210145a151d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:34:22 compute-0 nova_compute[250018]: 2026-01-20 14:34:22.249 250022 DEBUG nova.storage.rbd_utils [None req-77ec8232-ff3d-4ff9-85ce-092db0fcf1b7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] resizing rbd image 280e5549-64d8-4573-a271-a210145a151d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 20 14:34:22 compute-0 nova_compute[250018]: 2026-01-20 14:34:22.287 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:34:22 compute-0 nova_compute[250018]: 2026-01-20 14:34:22.288 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:34:22 compute-0 nova_compute[250018]: 2026-01-20 14:34:22.288 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:34:22 compute-0 nova_compute[250018]: 2026-01-20 14:34:22.288 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 14:34:22 compute-0 nova_compute[250018]: 2026-01-20 14:34:22.289 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:34:22 compute-0 sudo[279132]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:34:22 compute-0 sudo[279132]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:34:22 compute-0 sudo[279132]: pam_unix(sudo:session): session closed for user root
Jan 20 14:34:22 compute-0 nova_compute[250018]: 2026-01-20 14:34:22.378 250022 DEBUG nova.objects.instance [None req-77ec8232-ff3d-4ff9-85ce-092db0fcf1b7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Lazy-loading 'migration_context' on Instance uuid 280e5549-64d8-4573-a271-a210145a151d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:34:22 compute-0 sudo[279160]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:34:22 compute-0 sudo[279160]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:34:22 compute-0 sudo[279160]: pam_unix(sudo:session): session closed for user root
Jan 20 14:34:22 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:34:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:34:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:34:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:34:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:34:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:34:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:34:22 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:34:22 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1462991547' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:34:22 compute-0 nova_compute[250018]: 2026-01-20 14:34:22.712 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:34:22 compute-0 ceph-mon[74360]: pgmap v1316: 321 pgs: 321 active+clean; 246 MiB data, 539 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 28 KiB/s wr, 125 op/s
Jan 20 14:34:22 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/1462991547' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:34:22 compute-0 nova_compute[250018]: 2026-01-20 14:34:22.894 250022 WARNING nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:34:22 compute-0 nova_compute[250018]: 2026-01-20 14:34:22.895 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4606MB free_disk=20.876426696777344GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 14:34:22 compute-0 nova_compute[250018]: 2026-01-20 14:34:22.895 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:34:22 compute-0 nova_compute[250018]: 2026-01-20 14:34:22.895 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:34:23 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:34:23 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:34:23 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:34:23.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:34:23 compute-0 nova_compute[250018]: 2026-01-20 14:34:23.305 250022 DEBUG nova.virt.libvirt.driver [None req-77ec8232-ff3d-4ff9-85ce-092db0fcf1b7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: 280e5549-64d8-4573-a271-a210145a151d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 14:34:23 compute-0 nova_compute[250018]: 2026-01-20 14:34:23.306 250022 DEBUG nova.virt.libvirt.driver [None req-77ec8232-ff3d-4ff9-85ce-092db0fcf1b7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: 280e5549-64d8-4573-a271-a210145a151d] Ensure instance console log exists: /var/lib/nova/instances/280e5549-64d8-4573-a271-a210145a151d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 14:34:23 compute-0 nova_compute[250018]: 2026-01-20 14:34:23.306 250022 DEBUG oslo_concurrency.lockutils [None req-77ec8232-ff3d-4ff9-85ce-092db0fcf1b7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:34:23 compute-0 nova_compute[250018]: 2026-01-20 14:34:23.307 250022 DEBUG oslo_concurrency.lockutils [None req-77ec8232-ff3d-4ff9-85ce-092db0fcf1b7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:34:23 compute-0 nova_compute[250018]: 2026-01-20 14:34:23.307 250022 DEBUG oslo_concurrency.lockutils [None req-77ec8232-ff3d-4ff9-85ce-092db0fcf1b7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:34:23 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1317: 321 pgs: 321 active+clean; 246 MiB data, 539 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 26 KiB/s wr, 115 op/s
Jan 20 14:34:23 compute-0 nova_compute[250018]: 2026-01-20 14:34:23.511 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Instance 280e5549-64d8-4573-a271-a210145a151d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 14:34:23 compute-0 nova_compute[250018]: 2026-01-20 14:34:23.512 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 14:34:23 compute-0 nova_compute[250018]: 2026-01-20 14:34:23.512 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 14:34:23 compute-0 nova_compute[250018]: 2026-01-20 14:34:23.546 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:34:23 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:34:23.629 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=16, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=15) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:34:23 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:34:23.631 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 14:34:23 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:34:23.631 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=367c1a2c-b16a-4828-ab5a-626bb50023b4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '16'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:34:23 compute-0 nova_compute[250018]: 2026-01-20 14:34:23.630 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:34:23 compute-0 ceph-mon[74360]: pgmap v1317: 321 pgs: 321 active+clean; 246 MiB data, 539 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 26 KiB/s wr, 115 op/s
Jan 20 14:34:23 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:34:23 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3128845683' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:34:23 compute-0 nova_compute[250018]: 2026-01-20 14:34:23.973 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:34:23 compute-0 nova_compute[250018]: 2026-01-20 14:34:23.980 250022 DEBUG nova.compute.provider_tree [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:34:23 compute-0 nova_compute[250018]: 2026-01-20 14:34:23.997 250022 DEBUG nova.scheduler.client.report [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:34:24 compute-0 nova_compute[250018]: 2026-01-20 14:34:24.023 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 14:34:24 compute-0 nova_compute[250018]: 2026-01-20 14:34:24.023 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.128s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:34:24 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:34:24 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:34:24 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:34:24.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:34:24 compute-0 nova_compute[250018]: 2026-01-20 14:34:24.188 250022 DEBUG nova.network.neutron [None req-77ec8232-ff3d-4ff9-85ce-092db0fcf1b7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: 280e5549-64d8-4573-a271-a210145a151d] Successfully created port: 06dd2b18-21ca-4ffc-8d34-c78b1c568f68 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 20 14:34:24 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3128845683' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:34:24 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/3789156520' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:34:24 compute-0 nova_compute[250018]: 2026-01-20 14:34:24.935 250022 DEBUG nova.network.neutron [None req-77ec8232-ff3d-4ff9-85ce-092db0fcf1b7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: 280e5549-64d8-4573-a271-a210145a151d] Successfully updated port: 06dd2b18-21ca-4ffc-8d34-c78b1c568f68 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 20 14:34:24 compute-0 nova_compute[250018]: 2026-01-20 14:34:24.952 250022 DEBUG oslo_concurrency.lockutils [None req-77ec8232-ff3d-4ff9-85ce-092db0fcf1b7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Acquiring lock "refresh_cache-280e5549-64d8-4573-a271-a210145a151d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:34:24 compute-0 nova_compute[250018]: 2026-01-20 14:34:24.953 250022 DEBUG oslo_concurrency.lockutils [None req-77ec8232-ff3d-4ff9-85ce-092db0fcf1b7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Acquired lock "refresh_cache-280e5549-64d8-4573-a271-a210145a151d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:34:24 compute-0 nova_compute[250018]: 2026-01-20 14:34:24.953 250022 DEBUG nova.network.neutron [None req-77ec8232-ff3d-4ff9-85ce-092db0fcf1b7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: 280e5549-64d8-4573-a271-a210145a151d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:34:25 compute-0 nova_compute[250018]: 2026-01-20 14:34:25.022 250022 DEBUG nova.compute.manager [req-9751e8fd-da51-4f30-9cb4-8b34456d0e4c req-655679b6-5f5a-43c5-9087-1f8c5da10fd3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 280e5549-64d8-4573-a271-a210145a151d] Received event network-changed-06dd2b18-21ca-4ffc-8d34-c78b1c568f68 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:34:25 compute-0 nova_compute[250018]: 2026-01-20 14:34:25.023 250022 DEBUG nova.compute.manager [req-9751e8fd-da51-4f30-9cb4-8b34456d0e4c req-655679b6-5f5a-43c5-9087-1f8c5da10fd3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 280e5549-64d8-4573-a271-a210145a151d] Refreshing instance network info cache due to event network-changed-06dd2b18-21ca-4ffc-8d34-c78b1c568f68. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:34:25 compute-0 nova_compute[250018]: 2026-01-20 14:34:25.023 250022 DEBUG oslo_concurrency.lockutils [req-9751e8fd-da51-4f30-9cb4-8b34456d0e4c req-655679b6-5f5a-43c5-9087-1f8c5da10fd3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-280e5549-64d8-4573-a271-a210145a151d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:34:25 compute-0 nova_compute[250018]: 2026-01-20 14:34:25.125 250022 DEBUG nova.network.neutron [None req-77ec8232-ff3d-4ff9-85ce-092db0fcf1b7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: 280e5549-64d8-4573-a271-a210145a151d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 14:34:25 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:34:25 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:34:25 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:34:25.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:34:25 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1318: 321 pgs: 321 active+clean; 281 MiB data, 555 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.4 MiB/s wr, 140 op/s
Jan 20 14:34:25 compute-0 nova_compute[250018]: 2026-01-20 14:34:25.963 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:34:25 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/1416592101' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:34:25 compute-0 ceph-mon[74360]: pgmap v1318: 321 pgs: 321 active+clean; 281 MiB data, 555 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.4 MiB/s wr, 140 op/s
Jan 20 14:34:26 compute-0 nova_compute[250018]: 2026-01-20 14:34:26.024 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:34:26 compute-0 nova_compute[250018]: 2026-01-20 14:34:26.025 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:34:26 compute-0 nova_compute[250018]: 2026-01-20 14:34:26.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:34:26 compute-0 nova_compute[250018]: 2026-01-20 14:34:26.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:34:26 compute-0 nova_compute[250018]: 2026-01-20 14:34:26.051 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 14:34:26 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:34:26 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:34:26 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:34:26.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:34:26 compute-0 nova_compute[250018]: 2026-01-20 14:34:26.429 250022 DEBUG nova.network.neutron [None req-77ec8232-ff3d-4ff9-85ce-092db0fcf1b7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: 280e5549-64d8-4573-a271-a210145a151d] Updating instance_info_cache with network_info: [{"id": "06dd2b18-21ca-4ffc-8d34-c78b1c568f68", "address": "fa:16:3e:90:6f:bd", "network": {"id": "33c9a20a-d976-42a8-b8bf-f83ddfc97c9a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-202342440-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a841e7a1434c488390475174e10bc161", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06dd2b18-21", "ovs_interfaceid": "06dd2b18-21ca-4ffc-8d34-c78b1c568f68", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:34:26 compute-0 nova_compute[250018]: 2026-01-20 14:34:26.449 250022 DEBUG oslo_concurrency.lockutils [None req-77ec8232-ff3d-4ff9-85ce-092db0fcf1b7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Releasing lock "refresh_cache-280e5549-64d8-4573-a271-a210145a151d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:34:26 compute-0 nova_compute[250018]: 2026-01-20 14:34:26.450 250022 DEBUG nova.compute.manager [None req-77ec8232-ff3d-4ff9-85ce-092db0fcf1b7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: 280e5549-64d8-4573-a271-a210145a151d] Instance network_info: |[{"id": "06dd2b18-21ca-4ffc-8d34-c78b1c568f68", "address": "fa:16:3e:90:6f:bd", "network": {"id": "33c9a20a-d976-42a8-b8bf-f83ddfc97c9a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-202342440-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a841e7a1434c488390475174e10bc161", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06dd2b18-21", "ovs_interfaceid": "06dd2b18-21ca-4ffc-8d34-c78b1c568f68", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 20 14:34:26 compute-0 nova_compute[250018]: 2026-01-20 14:34:26.451 250022 DEBUG oslo_concurrency.lockutils [req-9751e8fd-da51-4f30-9cb4-8b34456d0e4c req-655679b6-5f5a-43c5-9087-1f8c5da10fd3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-280e5549-64d8-4573-a271-a210145a151d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:34:26 compute-0 nova_compute[250018]: 2026-01-20 14:34:26.451 250022 DEBUG nova.network.neutron [req-9751e8fd-da51-4f30-9cb4-8b34456d0e4c req-655679b6-5f5a-43c5-9087-1f8c5da10fd3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 280e5549-64d8-4573-a271-a210145a151d] Refreshing network info cache for port 06dd2b18-21ca-4ffc-8d34-c78b1c568f68 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:34:26 compute-0 nova_compute[250018]: 2026-01-20 14:34:26.457 250022 DEBUG nova.virt.libvirt.driver [None req-77ec8232-ff3d-4ff9-85ce-092db0fcf1b7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: 280e5549-64d8-4573-a271-a210145a151d] Start _get_guest_xml network_info=[{"id": "06dd2b18-21ca-4ffc-8d34-c78b1c568f68", "address": "fa:16:3e:90:6f:bd", "network": {"id": "33c9a20a-d976-42a8-b8bf-f83ddfc97c9a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-202342440-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a841e7a1434c488390475174e10bc161", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06dd2b18-21", "ovs_interfaceid": "06dd2b18-21ca-4ffc-8d34-c78b1c568f68", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encrypted': False, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'guest_format': None, 'size': 0, 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 14:34:26 compute-0 nova_compute[250018]: 2026-01-20 14:34:26.464 250022 WARNING nova.virt.libvirt.driver [None req-77ec8232-ff3d-4ff9-85ce-092db0fcf1b7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:34:26 compute-0 nova_compute[250018]: 2026-01-20 14:34:26.476 250022 DEBUG nova.virt.libvirt.host [None req-77ec8232-ff3d-4ff9-85ce-092db0fcf1b7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 14:34:26 compute-0 nova_compute[250018]: 2026-01-20 14:34:26.477 250022 DEBUG nova.virt.libvirt.host [None req-77ec8232-ff3d-4ff9-85ce-092db0fcf1b7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 14:34:26 compute-0 nova_compute[250018]: 2026-01-20 14:34:26.482 250022 DEBUG nova.virt.libvirt.host [None req-77ec8232-ff3d-4ff9-85ce-092db0fcf1b7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 14:34:26 compute-0 nova_compute[250018]: 2026-01-20 14:34:26.483 250022 DEBUG nova.virt.libvirt.host [None req-77ec8232-ff3d-4ff9-85ce-092db0fcf1b7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 14:34:26 compute-0 nova_compute[250018]: 2026-01-20 14:34:26.485 250022 DEBUG nova.virt.libvirt.driver [None req-77ec8232-ff3d-4ff9-85ce-092db0fcf1b7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 14:34:26 compute-0 nova_compute[250018]: 2026-01-20 14:34:26.485 250022 DEBUG nova.virt.hardware [None req-77ec8232-ff3d-4ff9-85ce-092db0fcf1b7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 14:34:26 compute-0 nova_compute[250018]: 2026-01-20 14:34:26.486 250022 DEBUG nova.virt.hardware [None req-77ec8232-ff3d-4ff9-85ce-092db0fcf1b7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 14:34:26 compute-0 nova_compute[250018]: 2026-01-20 14:34:26.486 250022 DEBUG nova.virt.hardware [None req-77ec8232-ff3d-4ff9-85ce-092db0fcf1b7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 14:34:26 compute-0 nova_compute[250018]: 2026-01-20 14:34:26.487 250022 DEBUG nova.virt.hardware [None req-77ec8232-ff3d-4ff9-85ce-092db0fcf1b7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 14:34:26 compute-0 nova_compute[250018]: 2026-01-20 14:34:26.487 250022 DEBUG nova.virt.hardware [None req-77ec8232-ff3d-4ff9-85ce-092db0fcf1b7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 14:34:26 compute-0 nova_compute[250018]: 2026-01-20 14:34:26.487 250022 DEBUG nova.virt.hardware [None req-77ec8232-ff3d-4ff9-85ce-092db0fcf1b7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 14:34:26 compute-0 nova_compute[250018]: 2026-01-20 14:34:26.488 250022 DEBUG nova.virt.hardware [None req-77ec8232-ff3d-4ff9-85ce-092db0fcf1b7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 14:34:26 compute-0 nova_compute[250018]: 2026-01-20 14:34:26.488 250022 DEBUG nova.virt.hardware [None req-77ec8232-ff3d-4ff9-85ce-092db0fcf1b7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 14:34:26 compute-0 nova_compute[250018]: 2026-01-20 14:34:26.488 250022 DEBUG nova.virt.hardware [None req-77ec8232-ff3d-4ff9-85ce-092db0fcf1b7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 14:34:26 compute-0 nova_compute[250018]: 2026-01-20 14:34:26.489 250022 DEBUG nova.virt.hardware [None req-77ec8232-ff3d-4ff9-85ce-092db0fcf1b7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 14:34:26 compute-0 nova_compute[250018]: 2026-01-20 14:34:26.489 250022 DEBUG nova.virt.hardware [None req-77ec8232-ff3d-4ff9-85ce-092db0fcf1b7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 14:34:26 compute-0 nova_compute[250018]: 2026-01-20 14:34:26.495 250022 DEBUG oslo_concurrency.processutils [None req-77ec8232-ff3d-4ff9-85ce-092db0fcf1b7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:34:26 compute-0 nova_compute[250018]: 2026-01-20 14:34:26.530 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:34:26 compute-0 nova_compute[250018]: 2026-01-20 14:34:26.611 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:34:26 compute-0 nova_compute[250018]: 2026-01-20 14:34:26.773 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:34:26 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:34:26 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3329220002' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:34:26 compute-0 nova_compute[250018]: 2026-01-20 14:34:26.965 250022 DEBUG oslo_concurrency.processutils [None req-77ec8232-ff3d-4ff9-85ce-092db0fcf1b7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:34:26 compute-0 nova_compute[250018]: 2026-01-20 14:34:26.996 250022 DEBUG nova.storage.rbd_utils [None req-77ec8232-ff3d-4ff9-85ce-092db0fcf1b7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] rbd image 280e5549-64d8-4573-a271-a210145a151d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:34:27 compute-0 nova_compute[250018]: 2026-01-20 14:34:27.000 250022 DEBUG oslo_concurrency.processutils [None req-77ec8232-ff3d-4ff9-85ce-092db0fcf1b7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:34:27 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/2572931249' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:34:27 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3329220002' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:34:27 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:34:27 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:34:27 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:34:27.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:34:27 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:34:27 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:34:27 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/247561712' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:34:27 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1319: 321 pgs: 321 active+clean; 293 MiB data, 560 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 134 op/s
Jan 20 14:34:27 compute-0 nova_compute[250018]: 2026-01-20 14:34:27.481 250022 DEBUG oslo_concurrency.processutils [None req-77ec8232-ff3d-4ff9-85ce-092db0fcf1b7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:34:27 compute-0 nova_compute[250018]: 2026-01-20 14:34:27.483 250022 DEBUG nova.virt.libvirt.vif [None req-77ec8232-ff3d-4ff9-85ce-092db0fcf1b7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:34:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-328542049',display_name='tempest-ServersAdminTestJSON-server-328542049',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-328542049',id=43,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a841e7a1434c488390475174e10bc161',ramdisk_id='',reservation_id='r-a4ohfxz5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-1261404595',owner_user_name='tempest-ServersAdminTestJSON-1261404595-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:34:21Z,user_data=None,user_id='f51c395107c84dbd9067113b84ff01dd',uuid=280e5549-64d8-4573-a271-a210145a151d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "06dd2b18-21ca-4ffc-8d34-c78b1c568f68", "address": "fa:16:3e:90:6f:bd", "network": {"id": "33c9a20a-d976-42a8-b8bf-f83ddfc97c9a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-202342440-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a841e7a1434c488390475174e10bc161", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06dd2b18-21", "ovs_interfaceid": "06dd2b18-21ca-4ffc-8d34-c78b1c568f68", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 14:34:27 compute-0 nova_compute[250018]: 2026-01-20 14:34:27.484 250022 DEBUG nova.network.os_vif_util [None req-77ec8232-ff3d-4ff9-85ce-092db0fcf1b7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Converting VIF {"id": "06dd2b18-21ca-4ffc-8d34-c78b1c568f68", "address": "fa:16:3e:90:6f:bd", "network": {"id": "33c9a20a-d976-42a8-b8bf-f83ddfc97c9a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-202342440-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a841e7a1434c488390475174e10bc161", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06dd2b18-21", "ovs_interfaceid": "06dd2b18-21ca-4ffc-8d34-c78b1c568f68", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:34:27 compute-0 nova_compute[250018]: 2026-01-20 14:34:27.484 250022 DEBUG nova.network.os_vif_util [None req-77ec8232-ff3d-4ff9-85ce-092db0fcf1b7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:90:6f:bd,bridge_name='br-int',has_traffic_filtering=True,id=06dd2b18-21ca-4ffc-8d34-c78b1c568f68,network=Network(33c9a20a-d976-42a8-b8bf-f83ddfc97c9a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap06dd2b18-21') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:34:27 compute-0 nova_compute[250018]: 2026-01-20 14:34:27.486 250022 DEBUG nova.objects.instance [None req-77ec8232-ff3d-4ff9-85ce-092db0fcf1b7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Lazy-loading 'pci_devices' on Instance uuid 280e5549-64d8-4573-a271-a210145a151d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:34:27 compute-0 nova_compute[250018]: 2026-01-20 14:34:27.510 250022 DEBUG nova.virt.libvirt.driver [None req-77ec8232-ff3d-4ff9-85ce-092db0fcf1b7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: 280e5549-64d8-4573-a271-a210145a151d] End _get_guest_xml xml=<domain type="kvm">
Jan 20 14:34:27 compute-0 nova_compute[250018]:   <uuid>280e5549-64d8-4573-a271-a210145a151d</uuid>
Jan 20 14:34:27 compute-0 nova_compute[250018]:   <name>instance-0000002b</name>
Jan 20 14:34:27 compute-0 nova_compute[250018]:   <memory>131072</memory>
Jan 20 14:34:27 compute-0 nova_compute[250018]:   <vcpu>1</vcpu>
Jan 20 14:34:27 compute-0 nova_compute[250018]:   <metadata>
Jan 20 14:34:27 compute-0 nova_compute[250018]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 14:34:27 compute-0 nova_compute[250018]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 14:34:27 compute-0 nova_compute[250018]:       <nova:name>tempest-ServersAdminTestJSON-server-328542049</nova:name>
Jan 20 14:34:27 compute-0 nova_compute[250018]:       <nova:creationTime>2026-01-20 14:34:26</nova:creationTime>
Jan 20 14:34:27 compute-0 nova_compute[250018]:       <nova:flavor name="m1.nano">
Jan 20 14:34:27 compute-0 nova_compute[250018]:         <nova:memory>128</nova:memory>
Jan 20 14:34:27 compute-0 nova_compute[250018]:         <nova:disk>1</nova:disk>
Jan 20 14:34:27 compute-0 nova_compute[250018]:         <nova:swap>0</nova:swap>
Jan 20 14:34:27 compute-0 nova_compute[250018]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 14:34:27 compute-0 nova_compute[250018]:         <nova:vcpus>1</nova:vcpus>
Jan 20 14:34:27 compute-0 nova_compute[250018]:       </nova:flavor>
Jan 20 14:34:27 compute-0 nova_compute[250018]:       <nova:owner>
Jan 20 14:34:27 compute-0 nova_compute[250018]:         <nova:user uuid="f51c395107c84dbd9067113b84ff01dd">tempest-ServersAdminTestJSON-1261404595-project-member</nova:user>
Jan 20 14:34:27 compute-0 nova_compute[250018]:         <nova:project uuid="a841e7a1434c488390475174e10bc161">tempest-ServersAdminTestJSON-1261404595</nova:project>
Jan 20 14:34:27 compute-0 nova_compute[250018]:       </nova:owner>
Jan 20 14:34:27 compute-0 nova_compute[250018]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 14:34:27 compute-0 nova_compute[250018]:       <nova:ports>
Jan 20 14:34:27 compute-0 nova_compute[250018]:         <nova:port uuid="06dd2b18-21ca-4ffc-8d34-c78b1c568f68">
Jan 20 14:34:27 compute-0 nova_compute[250018]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Jan 20 14:34:27 compute-0 nova_compute[250018]:         </nova:port>
Jan 20 14:34:27 compute-0 nova_compute[250018]:       </nova:ports>
Jan 20 14:34:27 compute-0 nova_compute[250018]:     </nova:instance>
Jan 20 14:34:27 compute-0 nova_compute[250018]:   </metadata>
Jan 20 14:34:27 compute-0 nova_compute[250018]:   <sysinfo type="smbios">
Jan 20 14:34:27 compute-0 nova_compute[250018]:     <system>
Jan 20 14:34:27 compute-0 nova_compute[250018]:       <entry name="manufacturer">RDO</entry>
Jan 20 14:34:27 compute-0 nova_compute[250018]:       <entry name="product">OpenStack Compute</entry>
Jan 20 14:34:27 compute-0 nova_compute[250018]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 14:34:27 compute-0 nova_compute[250018]:       <entry name="serial">280e5549-64d8-4573-a271-a210145a151d</entry>
Jan 20 14:34:27 compute-0 nova_compute[250018]:       <entry name="uuid">280e5549-64d8-4573-a271-a210145a151d</entry>
Jan 20 14:34:27 compute-0 nova_compute[250018]:       <entry name="family">Virtual Machine</entry>
Jan 20 14:34:27 compute-0 nova_compute[250018]:     </system>
Jan 20 14:34:27 compute-0 nova_compute[250018]:   </sysinfo>
Jan 20 14:34:27 compute-0 nova_compute[250018]:   <os>
Jan 20 14:34:27 compute-0 nova_compute[250018]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 14:34:27 compute-0 nova_compute[250018]:     <boot dev="hd"/>
Jan 20 14:34:27 compute-0 nova_compute[250018]:     <smbios mode="sysinfo"/>
Jan 20 14:34:27 compute-0 nova_compute[250018]:   </os>
Jan 20 14:34:27 compute-0 nova_compute[250018]:   <features>
Jan 20 14:34:27 compute-0 nova_compute[250018]:     <acpi/>
Jan 20 14:34:27 compute-0 nova_compute[250018]:     <apic/>
Jan 20 14:34:27 compute-0 nova_compute[250018]:     <vmcoreinfo/>
Jan 20 14:34:27 compute-0 nova_compute[250018]:   </features>
Jan 20 14:34:27 compute-0 nova_compute[250018]:   <clock offset="utc">
Jan 20 14:34:27 compute-0 nova_compute[250018]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 14:34:27 compute-0 nova_compute[250018]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 14:34:27 compute-0 nova_compute[250018]:     <timer name="hpet" present="no"/>
Jan 20 14:34:27 compute-0 nova_compute[250018]:   </clock>
Jan 20 14:34:27 compute-0 nova_compute[250018]:   <cpu mode="custom" match="exact">
Jan 20 14:34:27 compute-0 nova_compute[250018]:     <model>Nehalem</model>
Jan 20 14:34:27 compute-0 nova_compute[250018]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 14:34:27 compute-0 nova_compute[250018]:   </cpu>
Jan 20 14:34:27 compute-0 nova_compute[250018]:   <devices>
Jan 20 14:34:27 compute-0 nova_compute[250018]:     <disk type="network" device="disk">
Jan 20 14:34:27 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 14:34:27 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/280e5549-64d8-4573-a271-a210145a151d_disk">
Jan 20 14:34:27 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:34:27 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:34:27 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:34:27 compute-0 nova_compute[250018]:       </source>
Jan 20 14:34:27 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 14:34:27 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:34:27 compute-0 nova_compute[250018]:       </auth>
Jan 20 14:34:27 compute-0 nova_compute[250018]:       <target dev="vda" bus="virtio"/>
Jan 20 14:34:27 compute-0 nova_compute[250018]:     </disk>
Jan 20 14:34:27 compute-0 nova_compute[250018]:     <disk type="network" device="cdrom">
Jan 20 14:34:27 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 14:34:27 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/280e5549-64d8-4573-a271-a210145a151d_disk.config">
Jan 20 14:34:27 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:34:27 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:34:27 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:34:27 compute-0 nova_compute[250018]:       </source>
Jan 20 14:34:27 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 14:34:27 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:34:27 compute-0 nova_compute[250018]:       </auth>
Jan 20 14:34:27 compute-0 nova_compute[250018]:       <target dev="sda" bus="sata"/>
Jan 20 14:34:27 compute-0 nova_compute[250018]:     </disk>
Jan 20 14:34:27 compute-0 nova_compute[250018]:     <interface type="ethernet">
Jan 20 14:34:27 compute-0 nova_compute[250018]:       <mac address="fa:16:3e:90:6f:bd"/>
Jan 20 14:34:27 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 14:34:27 compute-0 nova_compute[250018]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 14:34:27 compute-0 nova_compute[250018]:       <mtu size="1442"/>
Jan 20 14:34:27 compute-0 nova_compute[250018]:       <target dev="tap06dd2b18-21"/>
Jan 20 14:34:27 compute-0 nova_compute[250018]:     </interface>
Jan 20 14:34:27 compute-0 nova_compute[250018]:     <serial type="pty">
Jan 20 14:34:27 compute-0 nova_compute[250018]:       <log file="/var/lib/nova/instances/280e5549-64d8-4573-a271-a210145a151d/console.log" append="off"/>
Jan 20 14:34:27 compute-0 nova_compute[250018]:     </serial>
Jan 20 14:34:27 compute-0 nova_compute[250018]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 14:34:27 compute-0 nova_compute[250018]:     <video>
Jan 20 14:34:27 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 14:34:27 compute-0 nova_compute[250018]:     </video>
Jan 20 14:34:27 compute-0 nova_compute[250018]:     <input type="tablet" bus="usb"/>
Jan 20 14:34:27 compute-0 nova_compute[250018]:     <rng model="virtio">
Jan 20 14:34:27 compute-0 nova_compute[250018]:       <backend model="random">/dev/urandom</backend>
Jan 20 14:34:27 compute-0 nova_compute[250018]:     </rng>
Jan 20 14:34:27 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root"/>
Jan 20 14:34:27 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:34:27 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:34:27 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:34:27 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:34:27 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:34:27 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:34:27 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:34:27 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:34:27 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:34:27 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:34:27 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:34:27 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:34:27 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:34:27 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:34:27 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:34:27 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:34:27 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:34:27 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:34:27 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:34:27 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:34:27 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:34:27 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:34:27 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:34:27 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:34:27 compute-0 nova_compute[250018]:     <controller type="usb" index="0"/>
Jan 20 14:34:27 compute-0 nova_compute[250018]:     <memballoon model="virtio">
Jan 20 14:34:27 compute-0 nova_compute[250018]:       <stats period="10"/>
Jan 20 14:34:27 compute-0 nova_compute[250018]:     </memballoon>
Jan 20 14:34:27 compute-0 nova_compute[250018]:   </devices>
Jan 20 14:34:27 compute-0 nova_compute[250018]: </domain>
Jan 20 14:34:27 compute-0 nova_compute[250018]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 14:34:27 compute-0 nova_compute[250018]: 2026-01-20 14:34:27.512 250022 DEBUG nova.compute.manager [None req-77ec8232-ff3d-4ff9-85ce-092db0fcf1b7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: 280e5549-64d8-4573-a271-a210145a151d] Preparing to wait for external event network-vif-plugged-06dd2b18-21ca-4ffc-8d34-c78b1c568f68 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 20 14:34:27 compute-0 nova_compute[250018]: 2026-01-20 14:34:27.512 250022 DEBUG oslo_concurrency.lockutils [None req-77ec8232-ff3d-4ff9-85ce-092db0fcf1b7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Acquiring lock "280e5549-64d8-4573-a271-a210145a151d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:34:27 compute-0 nova_compute[250018]: 2026-01-20 14:34:27.513 250022 DEBUG oslo_concurrency.lockutils [None req-77ec8232-ff3d-4ff9-85ce-092db0fcf1b7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Lock "280e5549-64d8-4573-a271-a210145a151d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:34:27 compute-0 nova_compute[250018]: 2026-01-20 14:34:27.514 250022 DEBUG oslo_concurrency.lockutils [None req-77ec8232-ff3d-4ff9-85ce-092db0fcf1b7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Lock "280e5549-64d8-4573-a271-a210145a151d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:34:27 compute-0 nova_compute[250018]: 2026-01-20 14:34:27.515 250022 DEBUG nova.virt.libvirt.vif [None req-77ec8232-ff3d-4ff9-85ce-092db0fcf1b7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:34:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-328542049',display_name='tempest-ServersAdminTestJSON-server-328542049',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-328542049',id=43,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a841e7a1434c488390475174e10bc161',ramdisk_id='',reservation_id='r-a4ohfxz5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-1261404595',owner_user_name='tempest-ServersAdminTestJSON-1261404595-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:34:21Z,user_data=None,user_id='f51c395107c84dbd9067113b84ff01dd',uuid=280e5549-64d8-4573-a271-a210145a151d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "06dd2b18-21ca-4ffc-8d34-c78b1c568f68", "address": "fa:16:3e:90:6f:bd", "network": {"id": "33c9a20a-d976-42a8-b8bf-f83ddfc97c9a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-202342440-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a841e7a1434c488390475174e10bc161", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06dd2b18-21", "ovs_interfaceid": "06dd2b18-21ca-4ffc-8d34-c78b1c568f68", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 14:34:27 compute-0 nova_compute[250018]: 2026-01-20 14:34:27.516 250022 DEBUG nova.network.os_vif_util [None req-77ec8232-ff3d-4ff9-85ce-092db0fcf1b7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Converting VIF {"id": "06dd2b18-21ca-4ffc-8d34-c78b1c568f68", "address": "fa:16:3e:90:6f:bd", "network": {"id": "33c9a20a-d976-42a8-b8bf-f83ddfc97c9a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-202342440-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a841e7a1434c488390475174e10bc161", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06dd2b18-21", "ovs_interfaceid": "06dd2b18-21ca-4ffc-8d34-c78b1c568f68", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:34:27 compute-0 nova_compute[250018]: 2026-01-20 14:34:27.517 250022 DEBUG nova.network.os_vif_util [None req-77ec8232-ff3d-4ff9-85ce-092db0fcf1b7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:90:6f:bd,bridge_name='br-int',has_traffic_filtering=True,id=06dd2b18-21ca-4ffc-8d34-c78b1c568f68,network=Network(33c9a20a-d976-42a8-b8bf-f83ddfc97c9a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap06dd2b18-21') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:34:27 compute-0 nova_compute[250018]: 2026-01-20 14:34:27.517 250022 DEBUG os_vif [None req-77ec8232-ff3d-4ff9-85ce-092db0fcf1b7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:90:6f:bd,bridge_name='br-int',has_traffic_filtering=True,id=06dd2b18-21ca-4ffc-8d34-c78b1c568f68,network=Network(33c9a20a-d976-42a8-b8bf-f83ddfc97c9a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap06dd2b18-21') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 14:34:27 compute-0 nova_compute[250018]: 2026-01-20 14:34:27.519 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:34:27 compute-0 nova_compute[250018]: 2026-01-20 14:34:27.520 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:34:27 compute-0 nova_compute[250018]: 2026-01-20 14:34:27.520 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:34:27 compute-0 nova_compute[250018]: 2026-01-20 14:34:27.525 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:34:27 compute-0 nova_compute[250018]: 2026-01-20 14:34:27.526 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap06dd2b18-21, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:34:27 compute-0 nova_compute[250018]: 2026-01-20 14:34:27.527 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap06dd2b18-21, col_values=(('external_ids', {'iface-id': '06dd2b18-21ca-4ffc-8d34-c78b1c568f68', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:90:6f:bd', 'vm-uuid': '280e5549-64d8-4573-a271-a210145a151d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:34:27 compute-0 nova_compute[250018]: 2026-01-20 14:34:27.529 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:34:27 compute-0 NetworkManager[48960]: <info>  [1768919667.5310] manager: (tap06dd2b18-21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/70)
Jan 20 14:34:27 compute-0 nova_compute[250018]: 2026-01-20 14:34:27.532 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:34:27 compute-0 nova_compute[250018]: 2026-01-20 14:34:27.536 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:34:27 compute-0 nova_compute[250018]: 2026-01-20 14:34:27.538 250022 INFO os_vif [None req-77ec8232-ff3d-4ff9-85ce-092db0fcf1b7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:90:6f:bd,bridge_name='br-int',has_traffic_filtering=True,id=06dd2b18-21ca-4ffc-8d34-c78b1c568f68,network=Network(33c9a20a-d976-42a8-b8bf-f83ddfc97c9a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap06dd2b18-21')
Jan 20 14:34:27 compute-0 nova_compute[250018]: 2026-01-20 14:34:27.616 250022 DEBUG nova.virt.libvirt.driver [None req-77ec8232-ff3d-4ff9-85ce-092db0fcf1b7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:34:27 compute-0 nova_compute[250018]: 2026-01-20 14:34:27.616 250022 DEBUG nova.virt.libvirt.driver [None req-77ec8232-ff3d-4ff9-85ce-092db0fcf1b7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:34:27 compute-0 nova_compute[250018]: 2026-01-20 14:34:27.616 250022 DEBUG nova.virt.libvirt.driver [None req-77ec8232-ff3d-4ff9-85ce-092db0fcf1b7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] No VIF found with MAC fa:16:3e:90:6f:bd, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 14:34:27 compute-0 nova_compute[250018]: 2026-01-20 14:34:27.617 250022 INFO nova.virt.libvirt.driver [None req-77ec8232-ff3d-4ff9-85ce-092db0fcf1b7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: 280e5549-64d8-4573-a271-a210145a151d] Using config drive
Jan 20 14:34:27 compute-0 nova_compute[250018]: 2026-01-20 14:34:27.646 250022 DEBUG nova.storage.rbd_utils [None req-77ec8232-ff3d-4ff9-85ce-092db0fcf1b7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] rbd image 280e5549-64d8-4573-a271-a210145a151d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:34:28 compute-0 nova_compute[250018]: 2026-01-20 14:34:28.052 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:34:28 compute-0 nova_compute[250018]: 2026-01-20 14:34:28.052 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 14:34:28 compute-0 nova_compute[250018]: 2026-01-20 14:34:28.053 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 14:34:28 compute-0 nova_compute[250018]: 2026-01-20 14:34:28.070 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] [instance: 280e5549-64d8-4573-a271-a210145a151d] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 20 14:34:28 compute-0 nova_compute[250018]: 2026-01-20 14:34:28.071 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 20 14:34:28 compute-0 nova_compute[250018]: 2026-01-20 14:34:28.126 250022 INFO nova.virt.libvirt.driver [None req-77ec8232-ff3d-4ff9-85ce-092db0fcf1b7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: 280e5549-64d8-4573-a271-a210145a151d] Creating config drive at /var/lib/nova/instances/280e5549-64d8-4573-a271-a210145a151d/disk.config
Jan 20 14:34:28 compute-0 nova_compute[250018]: 2026-01-20 14:34:28.132 250022 DEBUG oslo_concurrency.processutils [None req-77ec8232-ff3d-4ff9-85ce-092db0fcf1b7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/280e5549-64d8-4573-a271-a210145a151d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3737celm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:34:28 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:34:28 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:34:28 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:34:28.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:34:28 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/252065068' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:34:28 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/247561712' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:34:28 compute-0 ceph-mon[74360]: pgmap v1319: 321 pgs: 321 active+clean; 293 MiB data, 560 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 134 op/s
Jan 20 14:34:28 compute-0 nova_compute[250018]: 2026-01-20 14:34:28.262 250022 DEBUG oslo_concurrency.processutils [None req-77ec8232-ff3d-4ff9-85ce-092db0fcf1b7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/280e5549-64d8-4573-a271-a210145a151d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3737celm" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:34:28 compute-0 nova_compute[250018]: 2026-01-20 14:34:28.320 250022 DEBUG nova.storage.rbd_utils [None req-77ec8232-ff3d-4ff9-85ce-092db0fcf1b7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] rbd image 280e5549-64d8-4573-a271-a210145a151d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:34:28 compute-0 nova_compute[250018]: 2026-01-20 14:34:28.324 250022 DEBUG oslo_concurrency.processutils [None req-77ec8232-ff3d-4ff9-85ce-092db0fcf1b7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/280e5549-64d8-4573-a271-a210145a151d/disk.config 280e5549-64d8-4573-a271-a210145a151d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:34:28 compute-0 nova_compute[250018]: 2026-01-20 14:34:28.352 250022 DEBUG nova.network.neutron [req-9751e8fd-da51-4f30-9cb4-8b34456d0e4c req-655679b6-5f5a-43c5-9087-1f8c5da10fd3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 280e5549-64d8-4573-a271-a210145a151d] Updated VIF entry in instance network info cache for port 06dd2b18-21ca-4ffc-8d34-c78b1c568f68. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 14:34:28 compute-0 nova_compute[250018]: 2026-01-20 14:34:28.353 250022 DEBUG nova.network.neutron [req-9751e8fd-da51-4f30-9cb4-8b34456d0e4c req-655679b6-5f5a-43c5-9087-1f8c5da10fd3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 280e5549-64d8-4573-a271-a210145a151d] Updating instance_info_cache with network_info: [{"id": "06dd2b18-21ca-4ffc-8d34-c78b1c568f68", "address": "fa:16:3e:90:6f:bd", "network": {"id": "33c9a20a-d976-42a8-b8bf-f83ddfc97c9a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-202342440-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a841e7a1434c488390475174e10bc161", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06dd2b18-21", "ovs_interfaceid": "06dd2b18-21ca-4ffc-8d34-c78b1c568f68", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:34:28 compute-0 nova_compute[250018]: 2026-01-20 14:34:28.370 250022 DEBUG oslo_concurrency.lockutils [req-9751e8fd-da51-4f30-9cb4-8b34456d0e4c req-655679b6-5f5a-43c5-9087-1f8c5da10fd3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-280e5549-64d8-4573-a271-a210145a151d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:34:28 compute-0 nova_compute[250018]: 2026-01-20 14:34:28.751 250022 DEBUG oslo_concurrency.processutils [None req-77ec8232-ff3d-4ff9-85ce-092db0fcf1b7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/280e5549-64d8-4573-a271-a210145a151d/disk.config 280e5549-64d8-4573-a271-a210145a151d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:34:28 compute-0 nova_compute[250018]: 2026-01-20 14:34:28.752 250022 INFO nova.virt.libvirt.driver [None req-77ec8232-ff3d-4ff9-85ce-092db0fcf1b7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: 280e5549-64d8-4573-a271-a210145a151d] Deleting local config drive /var/lib/nova/instances/280e5549-64d8-4573-a271-a210145a151d/disk.config because it was imported into RBD.
Jan 20 14:34:28 compute-0 kernel: tap06dd2b18-21: entered promiscuous mode
Jan 20 14:34:28 compute-0 NetworkManager[48960]: <info>  [1768919668.8107] manager: (tap06dd2b18-21): new Tun device (/org/freedesktop/NetworkManager/Devices/71)
Jan 20 14:34:28 compute-0 ovn_controller[148666]: 2026-01-20T14:34:28Z|00123|binding|INFO|Claiming lport 06dd2b18-21ca-4ffc-8d34-c78b1c568f68 for this chassis.
Jan 20 14:34:28 compute-0 ovn_controller[148666]: 2026-01-20T14:34:28Z|00124|binding|INFO|06dd2b18-21ca-4ffc-8d34-c78b1c568f68: Claiming fa:16:3e:90:6f:bd 10.100.0.5
Jan 20 14:34:28 compute-0 nova_compute[250018]: 2026-01-20 14:34:28.811 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:34:28 compute-0 ovn_controller[148666]: 2026-01-20T14:34:28Z|00125|binding|INFO|Setting lport 06dd2b18-21ca-4ffc-8d34-c78b1c568f68 ovn-installed in OVS
Jan 20 14:34:28 compute-0 nova_compute[250018]: 2026-01-20 14:34:28.827 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:34:28 compute-0 nova_compute[250018]: 2026-01-20 14:34:28.829 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:34:28 compute-0 systemd-udevd[279385]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:34:28 compute-0 systemd-machined[216401]: New machine qemu-20-instance-0000002b.
Jan 20 14:34:28 compute-0 ovn_controller[148666]: 2026-01-20T14:34:28Z|00126|binding|INFO|Setting lport 06dd2b18-21ca-4ffc-8d34-c78b1c568f68 up in Southbound
Jan 20 14:34:28 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:34:28.846 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:90:6f:bd 10.100.0.5'], port_security=['fa:16:3e:90:6f:bd 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '280e5549-64d8-4573-a271-a210145a151d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-33c9a20a-d976-42a8-b8bf-f83ddfc97c9a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a841e7a1434c488390475174e10bc161', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0bbdea05-fba7-47c7-ba4e-5dac58212a25', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c43dea88-ea55-4069-a4be-2c30a432a754, chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=06dd2b18-21ca-4ffc-8d34-c78b1c568f68) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:34:28 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:34:28.847 160071 INFO neutron.agent.ovn.metadata.agent [-] Port 06dd2b18-21ca-4ffc-8d34-c78b1c568f68 in datapath 33c9a20a-d976-42a8-b8bf-f83ddfc97c9a bound to our chassis
Jan 20 14:34:28 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:34:28.849 160071 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 33c9a20a-d976-42a8-b8bf-f83ddfc97c9a
Jan 20 14:34:28 compute-0 NetworkManager[48960]: <info>  [1768919668.8526] device (tap06dd2b18-21): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 14:34:28 compute-0 NetworkManager[48960]: <info>  [1768919668.8534] device (tap06dd2b18-21): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 14:34:28 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:34:28.861 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[e5b026f1-010a-480a-a4bd-f9be2a530612]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:34:28 compute-0 systemd[1]: Started Virtual Machine qemu-20-instance-0000002b.
Jan 20 14:34:28 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:34:28.862 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap33c9a20a-d1 in ovnmeta-33c9a20a-d976-42a8-b8bf-f83ddfc97c9a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 14:34:28 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:34:28.864 257604 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap33c9a20a-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 14:34:28 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:34:28.865 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[79ec91e9-9cbc-4097-849a-94b71fab2451]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:34:28 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:34:28.865 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[faf1e751-8a7e-4f97-b515-f9e9d0c7f29f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:34:28 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:34:28.876 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[3b4a3719-cddd-4461-9d30-84192450b7a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:34:28 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:34:28.892 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[04fa4b78-de52-4061-98b2-1537c9e69ec8]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:34:28 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:34:28.918 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[de30f722-931c-4489-b619-80ca3df0dc31]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:34:28 compute-0 NetworkManager[48960]: <info>  [1768919668.9242] manager: (tap33c9a20a-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/72)
Jan 20 14:34:28 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:34:28.924 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[e921755d-31ea-4d39-9b47-819674c47717]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:34:28 compute-0 systemd-udevd[279388]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:34:28 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:34:28.952 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[17c5a67e-1b47-4306-b26a-e097562c6ac1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:34:28 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:34:28.955 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[5e7aebb9-429c-4883-a108-86cb0b9a8782]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:34:28 compute-0 NetworkManager[48960]: <info>  [1768919668.9757] device (tap33c9a20a-d0): carrier: link connected
Jan 20 14:34:28 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:34:28.979 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[edc10f84-077a-4fd1-bd18-0d126ef34122]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:34:28 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:34:28.997 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[ab0180d6-ef2b-49b4-9519-b68dfcaedf3c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap33c9a20a-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:89:8e:bd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 40], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 560969, 'reachable_time': 17505, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 279419, 'error': None, 'target': 'ovnmeta-33c9a20a-d976-42a8-b8bf-f83ddfc97c9a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:34:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:34:29.014 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[8eb2c1b7-0fd4-43db-96cc-6c5ef4e8b8ed]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe89:8ebd'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 560969, 'tstamp': 560969}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 279420, 'error': None, 'target': 'ovnmeta-33c9a20a-d976-42a8-b8bf-f83ddfc97c9a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:34:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:34:29.034 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[20be5a49-fd9d-4572-9b71-19e6311e3a27]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap33c9a20a-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:89:8e:bd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 40], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 560969, 'reachable_time': 17505, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 279421, 'error': None, 'target': 'ovnmeta-33c9a20a-d976-42a8-b8bf-f83ddfc97c9a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:34:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:34:29.061 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[7bdb4fad-2820-472f-a7d1-5baa718dfda0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:34:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:34:29.112 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[f5aad16c-0239-4dcb-80a3-46594aebc0c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:34:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:34:29.114 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap33c9a20a-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:34:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:34:29.114 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:34:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:34:29.114 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap33c9a20a-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:34:29 compute-0 NetworkManager[48960]: <info>  [1768919669.1170] manager: (tap33c9a20a-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/73)
Jan 20 14:34:29 compute-0 kernel: tap33c9a20a-d0: entered promiscuous mode
Jan 20 14:34:29 compute-0 nova_compute[250018]: 2026-01-20 14:34:29.116 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:34:29 compute-0 nova_compute[250018]: 2026-01-20 14:34:29.119 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:34:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:34:29.120 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap33c9a20a-d0, col_values=(('external_ids', {'iface-id': '90c69687-c788-4dba-881f-3ed4a5ee6007'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:34:29 compute-0 nova_compute[250018]: 2026-01-20 14:34:29.121 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:34:29 compute-0 ovn_controller[148666]: 2026-01-20T14:34:29Z|00127|binding|INFO|Releasing lport 90c69687-c788-4dba-881f-3ed4a5ee6007 from this chassis (sb_readonly=0)
Jan 20 14:34:29 compute-0 nova_compute[250018]: 2026-01-20 14:34:29.122 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:34:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:34:29.122 160071 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/33c9a20a-d976-42a8-b8bf-f83ddfc97c9a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/33c9a20a-d976-42a8-b8bf-f83ddfc97c9a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 14:34:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:34:29.123 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[cfc8a60f-fc79-42be-ba4f-37562e7d46ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:34:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:34:29.124 160071 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 14:34:29 compute-0 ovn_metadata_agent[160049]: global
Jan 20 14:34:29 compute-0 ovn_metadata_agent[160049]:     log         /dev/log local0 debug
Jan 20 14:34:29 compute-0 ovn_metadata_agent[160049]:     log-tag     haproxy-metadata-proxy-33c9a20a-d976-42a8-b8bf-f83ddfc97c9a
Jan 20 14:34:29 compute-0 ovn_metadata_agent[160049]:     user        root
Jan 20 14:34:29 compute-0 ovn_metadata_agent[160049]:     group       root
Jan 20 14:34:29 compute-0 ovn_metadata_agent[160049]:     maxconn     1024
Jan 20 14:34:29 compute-0 ovn_metadata_agent[160049]:     pidfile     /var/lib/neutron/external/pids/33c9a20a-d976-42a8-b8bf-f83ddfc97c9a.pid.haproxy
Jan 20 14:34:29 compute-0 ovn_metadata_agent[160049]:     daemon
Jan 20 14:34:29 compute-0 ovn_metadata_agent[160049]: 
Jan 20 14:34:29 compute-0 ovn_metadata_agent[160049]: defaults
Jan 20 14:34:29 compute-0 ovn_metadata_agent[160049]:     log global
Jan 20 14:34:29 compute-0 ovn_metadata_agent[160049]:     mode http
Jan 20 14:34:29 compute-0 ovn_metadata_agent[160049]:     option httplog
Jan 20 14:34:29 compute-0 ovn_metadata_agent[160049]:     option dontlognull
Jan 20 14:34:29 compute-0 ovn_metadata_agent[160049]:     option http-server-close
Jan 20 14:34:29 compute-0 ovn_metadata_agent[160049]:     option forwardfor
Jan 20 14:34:29 compute-0 ovn_metadata_agent[160049]:     retries                 3
Jan 20 14:34:29 compute-0 ovn_metadata_agent[160049]:     timeout http-request    30s
Jan 20 14:34:29 compute-0 ovn_metadata_agent[160049]:     timeout connect         30s
Jan 20 14:34:29 compute-0 ovn_metadata_agent[160049]:     timeout client          32s
Jan 20 14:34:29 compute-0 ovn_metadata_agent[160049]:     timeout server          32s
Jan 20 14:34:29 compute-0 ovn_metadata_agent[160049]:     timeout http-keep-alive 30s
Jan 20 14:34:29 compute-0 ovn_metadata_agent[160049]: 
Jan 20 14:34:29 compute-0 ovn_metadata_agent[160049]: 
Jan 20 14:34:29 compute-0 ovn_metadata_agent[160049]: listen listener
Jan 20 14:34:29 compute-0 ovn_metadata_agent[160049]:     bind 169.254.169.254:80
Jan 20 14:34:29 compute-0 ovn_metadata_agent[160049]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 14:34:29 compute-0 ovn_metadata_agent[160049]:     http-request add-header X-OVN-Network-ID 33c9a20a-d976-42a8-b8bf-f83ddfc97c9a
Jan 20 14:34:29 compute-0 ovn_metadata_agent[160049]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 14:34:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:34:29.125 160071 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-33c9a20a-d976-42a8-b8bf-f83ddfc97c9a', 'env', 'PROCESS_TAG=haproxy-33c9a20a-d976-42a8-b8bf-f83ddfc97c9a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/33c9a20a-d976-42a8-b8bf-f83ddfc97c9a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 14:34:29 compute-0 nova_compute[250018]: 2026-01-20 14:34:29.137 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:34:29 compute-0 nova_compute[250018]: 2026-01-20 14:34:29.176 250022 DEBUG nova.compute.manager [req-7a6b3afc-47f5-4db5-a4cb-82a10224889d req-19bee6fd-419c-4217-9be8-586bb3d7efff 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 280e5549-64d8-4573-a271-a210145a151d] Received event network-vif-plugged-06dd2b18-21ca-4ffc-8d34-c78b1c568f68 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:34:29 compute-0 nova_compute[250018]: 2026-01-20 14:34:29.176 250022 DEBUG oslo_concurrency.lockutils [req-7a6b3afc-47f5-4db5-a4cb-82a10224889d req-19bee6fd-419c-4217-9be8-586bb3d7efff 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "280e5549-64d8-4573-a271-a210145a151d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:34:29 compute-0 nova_compute[250018]: 2026-01-20 14:34:29.177 250022 DEBUG oslo_concurrency.lockutils [req-7a6b3afc-47f5-4db5-a4cb-82a10224889d req-19bee6fd-419c-4217-9be8-586bb3d7efff 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "280e5549-64d8-4573-a271-a210145a151d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:34:29 compute-0 nova_compute[250018]: 2026-01-20 14:34:29.177 250022 DEBUG oslo_concurrency.lockutils [req-7a6b3afc-47f5-4db5-a4cb-82a10224889d req-19bee6fd-419c-4217-9be8-586bb3d7efff 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "280e5549-64d8-4573-a271-a210145a151d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:34:29 compute-0 nova_compute[250018]: 2026-01-20 14:34:29.177 250022 DEBUG nova.compute.manager [req-7a6b3afc-47f5-4db5-a4cb-82a10224889d req-19bee6fd-419c-4217-9be8-586bb3d7efff 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 280e5549-64d8-4573-a271-a210145a151d] Processing event network-vif-plugged-06dd2b18-21ca-4ffc-8d34-c78b1c568f68 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 20 14:34:29 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:34:29 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:34:29 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:34:29.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:34:29 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1320: 321 pgs: 321 active+clean; 299 MiB data, 564 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 2.1 MiB/s wr, 133 op/s
Jan 20 14:34:29 compute-0 podman[279453]: 2026-01-20 14:34:29.517272278 +0000 UTC m=+0.056352676 container create b8e57c5e923cd0a4cd41293d0307cb9f35c40982a2498209277dd0c953b38c4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-33c9a20a-d976-42a8-b8bf-f83ddfc97c9a, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:34:29 compute-0 systemd[1]: Started libpod-conmon-b8e57c5e923cd0a4cd41293d0307cb9f35c40982a2498209277dd0c953b38c4f.scope.
Jan 20 14:34:29 compute-0 podman[279453]: 2026-01-20 14:34:29.485873555 +0000 UTC m=+0.024953993 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 14:34:29 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:34:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ab8394c3905ecfa04dc9ae2ef607e2a0b3adb6c8a312b398996d8cf8955ad33/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 14:34:29 compute-0 podman[279453]: 2026-01-20 14:34:29.693822346 +0000 UTC m=+0.232902784 container init b8e57c5e923cd0a4cd41293d0307cb9f35c40982a2498209277dd0c953b38c4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-33c9a20a-d976-42a8-b8bf-f83ddfc97c9a, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team)
Jan 20 14:34:29 compute-0 podman[279453]: 2026-01-20 14:34:29.701485273 +0000 UTC m=+0.240565681 container start b8e57c5e923cd0a4cd41293d0307cb9f35c40982a2498209277dd0c953b38c4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-33c9a20a-d976-42a8-b8bf-f83ddfc97c9a, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 20 14:34:29 compute-0 neutron-haproxy-ovnmeta-33c9a20a-d976-42a8-b8bf-f83ddfc97c9a[279472]: [NOTICE]   (279514) : New worker (279517) forked
Jan 20 14:34:29 compute-0 neutron-haproxy-ovnmeta-33c9a20a-d976-42a8-b8bf-f83ddfc97c9a[279472]: [NOTICE]   (279514) : Loading success.
Jan 20 14:34:29 compute-0 nova_compute[250018]: 2026-01-20 14:34:29.777 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768919669.7762744, 280e5549-64d8-4573-a271-a210145a151d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:34:29 compute-0 nova_compute[250018]: 2026-01-20 14:34:29.777 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 280e5549-64d8-4573-a271-a210145a151d] VM Started (Lifecycle Event)
Jan 20 14:34:29 compute-0 nova_compute[250018]: 2026-01-20 14:34:29.779 250022 DEBUG nova.compute.manager [None req-77ec8232-ff3d-4ff9-85ce-092db0fcf1b7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: 280e5549-64d8-4573-a271-a210145a151d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 14:34:29 compute-0 nova_compute[250018]: 2026-01-20 14:34:29.785 250022 DEBUG nova.virt.libvirt.driver [None req-77ec8232-ff3d-4ff9-85ce-092db0fcf1b7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: 280e5549-64d8-4573-a271-a210145a151d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 14:34:29 compute-0 nova_compute[250018]: 2026-01-20 14:34:29.790 250022 INFO nova.virt.libvirt.driver [-] [instance: 280e5549-64d8-4573-a271-a210145a151d] Instance spawned successfully.
Jan 20 14:34:29 compute-0 nova_compute[250018]: 2026-01-20 14:34:29.790 250022 DEBUG nova.virt.libvirt.driver [None req-77ec8232-ff3d-4ff9-85ce-092db0fcf1b7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: 280e5549-64d8-4573-a271-a210145a151d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 14:34:29 compute-0 nova_compute[250018]: 2026-01-20 14:34:29.800 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 280e5549-64d8-4573-a271-a210145a151d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:34:29 compute-0 nova_compute[250018]: 2026-01-20 14:34:29.803 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 280e5549-64d8-4573-a271-a210145a151d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:34:29 compute-0 nova_compute[250018]: 2026-01-20 14:34:29.813 250022 DEBUG nova.virt.libvirt.driver [None req-77ec8232-ff3d-4ff9-85ce-092db0fcf1b7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: 280e5549-64d8-4573-a271-a210145a151d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:34:29 compute-0 nova_compute[250018]: 2026-01-20 14:34:29.813 250022 DEBUG nova.virt.libvirt.driver [None req-77ec8232-ff3d-4ff9-85ce-092db0fcf1b7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: 280e5549-64d8-4573-a271-a210145a151d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:34:29 compute-0 nova_compute[250018]: 2026-01-20 14:34:29.814 250022 DEBUG nova.virt.libvirt.driver [None req-77ec8232-ff3d-4ff9-85ce-092db0fcf1b7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: 280e5549-64d8-4573-a271-a210145a151d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:34:29 compute-0 nova_compute[250018]: 2026-01-20 14:34:29.814 250022 DEBUG nova.virt.libvirt.driver [None req-77ec8232-ff3d-4ff9-85ce-092db0fcf1b7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: 280e5549-64d8-4573-a271-a210145a151d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:34:29 compute-0 nova_compute[250018]: 2026-01-20 14:34:29.815 250022 DEBUG nova.virt.libvirt.driver [None req-77ec8232-ff3d-4ff9-85ce-092db0fcf1b7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: 280e5549-64d8-4573-a271-a210145a151d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:34:29 compute-0 nova_compute[250018]: 2026-01-20 14:34:29.815 250022 DEBUG nova.virt.libvirt.driver [None req-77ec8232-ff3d-4ff9-85ce-092db0fcf1b7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: 280e5549-64d8-4573-a271-a210145a151d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:34:29 compute-0 nova_compute[250018]: 2026-01-20 14:34:29.823 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 280e5549-64d8-4573-a271-a210145a151d] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:34:29 compute-0 nova_compute[250018]: 2026-01-20 14:34:29.823 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768919669.7765148, 280e5549-64d8-4573-a271-a210145a151d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:34:29 compute-0 nova_compute[250018]: 2026-01-20 14:34:29.824 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 280e5549-64d8-4573-a271-a210145a151d] VM Paused (Lifecycle Event)
Jan 20 14:34:29 compute-0 nova_compute[250018]: 2026-01-20 14:34:29.852 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 280e5549-64d8-4573-a271-a210145a151d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:34:29 compute-0 nova_compute[250018]: 2026-01-20 14:34:29.855 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768919669.7827027, 280e5549-64d8-4573-a271-a210145a151d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:34:29 compute-0 nova_compute[250018]: 2026-01-20 14:34:29.856 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 280e5549-64d8-4573-a271-a210145a151d] VM Resumed (Lifecycle Event)
Jan 20 14:34:29 compute-0 nova_compute[250018]: 2026-01-20 14:34:29.893 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 280e5549-64d8-4573-a271-a210145a151d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:34:29 compute-0 nova_compute[250018]: 2026-01-20 14:34:29.896 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 280e5549-64d8-4573-a271-a210145a151d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:34:29 compute-0 nova_compute[250018]: 2026-01-20 14:34:29.917 250022 INFO nova.compute.manager [None req-77ec8232-ff3d-4ff9-85ce-092db0fcf1b7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: 280e5549-64d8-4573-a271-a210145a151d] Took 8.45 seconds to spawn the instance on the hypervisor.
Jan 20 14:34:29 compute-0 nova_compute[250018]: 2026-01-20 14:34:29.918 250022 DEBUG nova.compute.manager [None req-77ec8232-ff3d-4ff9-85ce-092db0fcf1b7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: 280e5549-64d8-4573-a271-a210145a151d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:34:29 compute-0 nova_compute[250018]: 2026-01-20 14:34:29.919 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 280e5549-64d8-4573-a271-a210145a151d] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:34:30 compute-0 nova_compute[250018]: 2026-01-20 14:34:30.007 250022 INFO nova.compute.manager [None req-77ec8232-ff3d-4ff9-85ce-092db0fcf1b7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: 280e5549-64d8-4573-a271-a210145a151d] Took 9.50 seconds to build instance.
Jan 20 14:34:30 compute-0 nova_compute[250018]: 2026-01-20 14:34:30.031 250022 DEBUG oslo_concurrency.lockutils [None req-77ec8232-ff3d-4ff9-85ce-092db0fcf1b7 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Lock "280e5549-64d8-4573-a271-a210145a151d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.583s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:34:30 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:34:30 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:34:30 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:34:30.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:34:30 compute-0 ceph-mon[74360]: pgmap v1320: 321 pgs: 321 active+clean; 299 MiB data, 564 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 2.1 MiB/s wr, 133 op/s
Jan 20 14:34:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:34:30.748 160071 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:34:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:34:30.750 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:34:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:34:30.750 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:34:30 compute-0 ovn_controller[148666]: 2026-01-20T14:34:30Z|00128|binding|INFO|Releasing lport 90c69687-c788-4dba-881f-3ed4a5ee6007 from this chassis (sb_readonly=0)
Jan 20 14:34:30 compute-0 nova_compute[250018]: 2026-01-20 14:34:30.784 250022 DEBUG oslo_concurrency.lockutils [None req-9f84d2f0-1631-4cf1-afc5-e052d204085f 4b1ebf2d77844ef9bf7bd8a5f6359af4 70f46f2d9d2d4c4cb05ef4e71ae50d53 - - default default] Acquiring lock "refresh_cache-280e5549-64d8-4573-a271-a210145a151d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:34:30 compute-0 nova_compute[250018]: 2026-01-20 14:34:30.784 250022 DEBUG oslo_concurrency.lockutils [None req-9f84d2f0-1631-4cf1-afc5-e052d204085f 4b1ebf2d77844ef9bf7bd8a5f6359af4 70f46f2d9d2d4c4cb05ef4e71ae50d53 - - default default] Acquired lock "refresh_cache-280e5549-64d8-4573-a271-a210145a151d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:34:30 compute-0 nova_compute[250018]: 2026-01-20 14:34:30.784 250022 DEBUG nova.network.neutron [None req-9f84d2f0-1631-4cf1-afc5-e052d204085f 4b1ebf2d77844ef9bf7bd8a5f6359af4 70f46f2d9d2d4c4cb05ef4e71ae50d53 - - default default] [instance: 280e5549-64d8-4573-a271-a210145a151d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:34:30 compute-0 nova_compute[250018]: 2026-01-20 14:34:30.785 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:34:31 compute-0 nova_compute[250018]: 2026-01-20 14:34:31.064 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:34:31 compute-0 nova_compute[250018]: 2026-01-20 14:34:31.067 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:34:31 compute-0 ovn_controller[148666]: 2026-01-20T14:34:31Z|00129|binding|INFO|Releasing lport 90c69687-c788-4dba-881f-3ed4a5ee6007 from this chassis (sb_readonly=0)
Jan 20 14:34:31 compute-0 nova_compute[250018]: 2026-01-20 14:34:31.082 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:34:31 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:34:31 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:34:31 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:34:31.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:34:31 compute-0 nova_compute[250018]: 2026-01-20 14:34:31.257 250022 DEBUG nova.compute.manager [req-9c278ea3-2611-4fa6-ae73-8bf1019de696 req-60d27c3b-767f-458e-8241-42860da69063 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 280e5549-64d8-4573-a271-a210145a151d] Received event network-vif-plugged-06dd2b18-21ca-4ffc-8d34-c78b1c568f68 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:34:31 compute-0 nova_compute[250018]: 2026-01-20 14:34:31.258 250022 DEBUG oslo_concurrency.lockutils [req-9c278ea3-2611-4fa6-ae73-8bf1019de696 req-60d27c3b-767f-458e-8241-42860da69063 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "280e5549-64d8-4573-a271-a210145a151d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:34:31 compute-0 nova_compute[250018]: 2026-01-20 14:34:31.258 250022 DEBUG oslo_concurrency.lockutils [req-9c278ea3-2611-4fa6-ae73-8bf1019de696 req-60d27c3b-767f-458e-8241-42860da69063 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "280e5549-64d8-4573-a271-a210145a151d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:34:31 compute-0 nova_compute[250018]: 2026-01-20 14:34:31.258 250022 DEBUG oslo_concurrency.lockutils [req-9c278ea3-2611-4fa6-ae73-8bf1019de696 req-60d27c3b-767f-458e-8241-42860da69063 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "280e5549-64d8-4573-a271-a210145a151d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:34:31 compute-0 nova_compute[250018]: 2026-01-20 14:34:31.258 250022 DEBUG nova.compute.manager [req-9c278ea3-2611-4fa6-ae73-8bf1019de696 req-60d27c3b-767f-458e-8241-42860da69063 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 280e5549-64d8-4573-a271-a210145a151d] No waiting events found dispatching network-vif-plugged-06dd2b18-21ca-4ffc-8d34-c78b1c568f68 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:34:31 compute-0 nova_compute[250018]: 2026-01-20 14:34:31.259 250022 WARNING nova.compute.manager [req-9c278ea3-2611-4fa6-ae73-8bf1019de696 req-60d27c3b-767f-458e-8241-42860da69063 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 280e5549-64d8-4573-a271-a210145a151d] Received unexpected event network-vif-plugged-06dd2b18-21ca-4ffc-8d34-c78b1c568f68 for instance with vm_state active and task_state None.
Jan 20 14:34:31 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1321: 321 pgs: 321 active+clean; 326 MiB data, 621 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 193 op/s
Jan 20 14:34:31 compute-0 nova_compute[250018]: 2026-01-20 14:34:31.551 250022 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768919656.5508864, a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:34:31 compute-0 nova_compute[250018]: 2026-01-20 14:34:31.552 250022 INFO nova.compute.manager [-] [instance: a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174] VM Stopped (Lifecycle Event)
Jan 20 14:34:31 compute-0 nova_compute[250018]: 2026-01-20 14:34:31.571 250022 DEBUG nova.compute.manager [None req-b55fb5d5-b975-43cd-85b2-2350f1435cc1 - - - - - -] [instance: a5f0dd02-bd8a-41ed-a3cc-0b52c7e31174] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:34:31 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/1462311559' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:34:32 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:34:32 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:34:32 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:34:32.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:34:32 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:34:32 compute-0 nova_compute[250018]: 2026-01-20 14:34:32.530 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:34:32 compute-0 ceph-mon[74360]: pgmap v1321: 321 pgs: 321 active+clean; 326 MiB data, 621 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 193 op/s
Jan 20 14:34:33 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:34:33 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:34:33 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:34:33.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:34:33 compute-0 nova_compute[250018]: 2026-01-20 14:34:33.290 250022 DEBUG nova.network.neutron [None req-9f84d2f0-1631-4cf1-afc5-e052d204085f 4b1ebf2d77844ef9bf7bd8a5f6359af4 70f46f2d9d2d4c4cb05ef4e71ae50d53 - - default default] [instance: 280e5549-64d8-4573-a271-a210145a151d] Updating instance_info_cache with network_info: [{"id": "06dd2b18-21ca-4ffc-8d34-c78b1c568f68", "address": "fa:16:3e:90:6f:bd", "network": {"id": "33c9a20a-d976-42a8-b8bf-f83ddfc97c9a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-202342440-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a841e7a1434c488390475174e10bc161", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06dd2b18-21", "ovs_interfaceid": "06dd2b18-21ca-4ffc-8d34-c78b1c568f68", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:34:33 compute-0 nova_compute[250018]: 2026-01-20 14:34:33.309 250022 DEBUG oslo_concurrency.lockutils [None req-9f84d2f0-1631-4cf1-afc5-e052d204085f 4b1ebf2d77844ef9bf7bd8a5f6359af4 70f46f2d9d2d4c4cb05ef4e71ae50d53 - - default default] Releasing lock "refresh_cache-280e5549-64d8-4573-a271-a210145a151d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:34:33 compute-0 nova_compute[250018]: 2026-01-20 14:34:33.310 250022 DEBUG nova.compute.manager [None req-9f84d2f0-1631-4cf1-afc5-e052d204085f 4b1ebf2d77844ef9bf7bd8a5f6359af4 70f46f2d9d2d4c4cb05ef4e71ae50d53 - - default default] [instance: 280e5549-64d8-4573-a271-a210145a151d] Inject network info _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7144
Jan 20 14:34:33 compute-0 nova_compute[250018]: 2026-01-20 14:34:33.310 250022 DEBUG nova.compute.manager [None req-9f84d2f0-1631-4cf1-afc5-e052d204085f 4b1ebf2d77844ef9bf7bd8a5f6359af4 70f46f2d9d2d4c4cb05ef4e71ae50d53 - - default default] [instance: 280e5549-64d8-4573-a271-a210145a151d] network_info to inject: |[{"id": "06dd2b18-21ca-4ffc-8d34-c78b1c568f68", "address": "fa:16:3e:90:6f:bd", "network": {"id": "33c9a20a-d976-42a8-b8bf-f83ddfc97c9a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-202342440-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a841e7a1434c488390475174e10bc161", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06dd2b18-21", "ovs_interfaceid": "06dd2b18-21ca-4ffc-8d34-c78b1c568f68", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7145
Jan 20 14:34:33 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1322: 321 pgs: 321 active+clean; 326 MiB data, 621 MiB used, 20 GiB / 21 GiB avail; 843 KiB/s rd, 3.9 MiB/s wr, 115 op/s
Jan 20 14:34:34 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:34:34 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:34:34 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:34:34.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:34:35 compute-0 ceph-mon[74360]: pgmap v1322: 321 pgs: 321 active+clean; 326 MiB data, 621 MiB used, 20 GiB / 21 GiB avail; 843 KiB/s rd, 3.9 MiB/s wr, 115 op/s
Jan 20 14:34:35 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:34:35 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:34:35 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:34:35.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:34:35 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1323: 321 pgs: 321 active+clean; 361 MiB data, 639 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 5.4 MiB/s wr, 190 op/s
Jan 20 14:34:36 compute-0 nova_compute[250018]: 2026-01-20 14:34:36.071 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:34:36 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:34:36 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:34:36 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:34:36.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:34:36 compute-0 ceph-mon[74360]: pgmap v1323: 321 pgs: 321 active+clean; 361 MiB data, 639 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 5.4 MiB/s wr, 190 op/s
Jan 20 14:34:37 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:34:37 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:34:37 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:34:37.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:34:37 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:34:37 compute-0 podman[279532]: 2026-01-20 14:34:37.464059623 +0000 UTC m=+0.056172982 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 20 14:34:37 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1324: 321 pgs: 321 active+clean; 372 MiB data, 642 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 4.3 MiB/s wr, 167 op/s
Jan 20 14:34:37 compute-0 podman[279531]: 2026-01-20 14:34:37.525031792 +0000 UTC m=+0.116769981 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller)
Jan 20 14:34:37 compute-0 nova_compute[250018]: 2026-01-20 14:34:37.533 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:34:37 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/366852801' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:34:37 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/3690699125' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:34:38 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:34:38 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:34:38 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:34:38.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:34:38 compute-0 ceph-mon[74360]: pgmap v1324: 321 pgs: 321 active+clean; 372 MiB data, 642 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 4.3 MiB/s wr, 167 op/s
Jan 20 14:34:39 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e168 do_prune osdmap full prune enabled
Jan 20 14:34:39 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:34:39 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:34:39 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:34:39.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:34:39 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e169 e169: 3 total, 3 up, 3 in
Jan 20 14:34:39 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e169: 3 total, 3 up, 3 in
Jan 20 14:34:39 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1326: 321 pgs: 321 active+clean; 372 MiB data, 642 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 4.3 MiB/s wr, 189 op/s
Jan 20 14:34:40 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:34:40 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:34:40 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:34:40.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:34:40 compute-0 ceph-mon[74360]: osdmap e169: 3 total, 3 up, 3 in
Jan 20 14:34:40 compute-0 ceph-mon[74360]: pgmap v1326: 321 pgs: 321 active+clean; 372 MiB data, 642 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 4.3 MiB/s wr, 189 op/s
Jan 20 14:34:41 compute-0 nova_compute[250018]: 2026-01-20 14:34:41.073 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:34:41 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:34:41 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:34:41 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:34:41.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:34:41 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1327: 321 pgs: 321 active+clean; 372 MiB data, 642 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.2 MiB/s wr, 142 op/s
Jan 20 14:34:42 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:34:42 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:34:42 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:34:42.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:34:42 compute-0 ceph-mon[74360]: pgmap v1327: 321 pgs: 321 active+clean; 372 MiB data, 642 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.2 MiB/s wr, 142 op/s
Jan 20 14:34:42 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:34:42 compute-0 sudo[279574]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:34:42 compute-0 sudo[279574]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:34:42 compute-0 sudo[279574]: pam_unix(sudo:session): session closed for user root
Jan 20 14:34:42 compute-0 sudo[279599]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:34:42 compute-0 sudo[279599]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:34:42 compute-0 sudo[279599]: pam_unix(sudo:session): session closed for user root
Jan 20 14:34:42 compute-0 nova_compute[250018]: 2026-01-20 14:34:42.535 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:34:43 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:34:43 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:34:43 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:34:43.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:34:43 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1328: 321 pgs: 321 active+clean; 372 MiB data, 642 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.2 MiB/s wr, 142 op/s
Jan 20 14:34:44 compute-0 ceph-mon[74360]: pgmap v1328: 321 pgs: 321 active+clean; 372 MiB data, 642 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.2 MiB/s wr, 142 op/s
Jan 20 14:34:44 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:34:44 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:34:44 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:34:44.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:34:44 compute-0 ovn_controller[148666]: 2026-01-20T14:34:44Z|00014|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:90:6f:bd 10.100.0.5
Jan 20 14:34:44 compute-0 ovn_controller[148666]: 2026-01-20T14:34:44Z|00015|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:90:6f:bd 10.100.0.5
Jan 20 14:34:45 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:34:45 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:34:45 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:34:45.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:34:45 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1329: 321 pgs: 321 active+clean; 338 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 2.0 MiB/s wr, 197 op/s
Jan 20 14:34:46 compute-0 nova_compute[250018]: 2026-01-20 14:34:46.118 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:34:46 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:34:46 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:34:46 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:34:46.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:34:46 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e169 do_prune osdmap full prune enabled
Jan 20 14:34:46 compute-0 ceph-mon[74360]: pgmap v1329: 321 pgs: 321 active+clean; 338 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 2.0 MiB/s wr, 197 op/s
Jan 20 14:34:46 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e170 e170: 3 total, 3 up, 3 in
Jan 20 14:34:46 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e170: 3 total, 3 up, 3 in
Jan 20 14:34:47 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:34:47 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:34:47 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:34:47.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:34:47 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:34:47 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1331: 321 pgs: 321 active+clean; 322 MiB data, 620 MiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 3.1 MiB/s wr, 252 op/s
Jan 20 14:34:47 compute-0 nova_compute[250018]: 2026-01-20 14:34:47.537 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:34:47 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e170 do_prune osdmap full prune enabled
Jan 20 14:34:47 compute-0 ceph-mon[74360]: osdmap e170: 3 total, 3 up, 3 in
Jan 20 14:34:47 compute-0 ceph-mon[74360]: pgmap v1331: 321 pgs: 321 active+clean; 322 MiB data, 620 MiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 3.1 MiB/s wr, 252 op/s
Jan 20 14:34:47 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e171 e171: 3 total, 3 up, 3 in
Jan 20 14:34:47 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e171: 3 total, 3 up, 3 in
Jan 20 14:34:48 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:34:48 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:34:48 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:34:48.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:34:48 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e171 do_prune osdmap full prune enabled
Jan 20 14:34:48 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e172 e172: 3 total, 3 up, 3 in
Jan 20 14:34:48 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e172: 3 total, 3 up, 3 in
Jan 20 14:34:48 compute-0 ceph-mon[74360]: osdmap e171: 3 total, 3 up, 3 in
Jan 20 14:34:48 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/80779919' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:34:48 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/578134844' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:34:49 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:34:49 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:34:49 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:34:49.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:34:49 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1334: 321 pgs: 321 active+clean; 347 MiB data, 634 MiB used, 20 GiB / 21 GiB avail; 8.1 MiB/s rd, 6.4 MiB/s wr, 367 op/s
Jan 20 14:34:49 compute-0 ceph-mon[74360]: osdmap e172: 3 total, 3 up, 3 in
Jan 20 14:34:49 compute-0 ceph-mon[74360]: pgmap v1334: 321 pgs: 321 active+clean; 347 MiB data, 634 MiB used, 20 GiB / 21 GiB avail; 8.1 MiB/s rd, 6.4 MiB/s wr, 367 op/s
Jan 20 14:34:50 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:34:50 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:34:50 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:34:50.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:34:51 compute-0 nova_compute[250018]: 2026-01-20 14:34:51.120 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:34:51 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:34:51 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:34:51 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:34:51.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:34:51 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e172 do_prune osdmap full prune enabled
Jan 20 14:34:51 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e173 e173: 3 total, 3 up, 3 in
Jan 20 14:34:51 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e173: 3 total, 3 up, 3 in
Jan 20 14:34:51 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1336: 321 pgs: 321 active+clean; 418 MiB data, 668 MiB used, 20 GiB / 21 GiB avail; 4.8 MiB/s rd, 9.4 MiB/s wr, 298 op/s
Jan 20 14:34:51 compute-0 sudo[279629]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:34:51 compute-0 sudo[279629]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:34:51 compute-0 sudo[279629]: pam_unix(sudo:session): session closed for user root
Jan 20 14:34:52 compute-0 sudo[279654]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:34:52 compute-0 sudo[279654]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:34:52 compute-0 sudo[279654]: pam_unix(sudo:session): session closed for user root
Jan 20 14:34:52 compute-0 sudo[279679]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:34:52 compute-0 sudo[279679]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:34:52 compute-0 sudo[279679]: pam_unix(sudo:session): session closed for user root
Jan 20 14:34:52 compute-0 sudo[279704]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 14:34:52 compute-0 sudo[279704]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:34:52 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:34:52 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:34:52 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:34:52.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:34:52 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e173 do_prune osdmap full prune enabled
Jan 20 14:34:52 compute-0 ceph-mon[74360]: osdmap e173: 3 total, 3 up, 3 in
Jan 20 14:34:52 compute-0 ceph-mon[74360]: pgmap v1336: 321 pgs: 321 active+clean; 418 MiB data, 668 MiB used, 20 GiB / 21 GiB avail; 4.8 MiB/s rd, 9.4 MiB/s wr, 298 op/s
Jan 20 14:34:52 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e174 e174: 3 total, 3 up, 3 in
Jan 20 14:34:52 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e174: 3 total, 3 up, 3 in
Jan 20 14:34:52 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e174 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:34:52 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e174 do_prune osdmap full prune enabled
Jan 20 14:34:52 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e175 e175: 3 total, 3 up, 3 in
Jan 20 14:34:52 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e175: 3 total, 3 up, 3 in
Jan 20 14:34:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:34:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:34:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:34:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:34:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:34:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:34:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Optimize plan auto_2026-01-20_14:34:52
Jan 20 14:34:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 14:34:52 compute-0 ceph-mgr[74653]: [balancer INFO root] do_upmap
Jan 20 14:34:52 compute-0 ceph-mgr[74653]: [balancer INFO root] pools ['backups', 'default.rgw.log', 'default.rgw.control', 'default.rgw.meta', 'vms', '.rgw.root', '.mgr', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'volumes', 'images']
Jan 20 14:34:52 compute-0 ceph-mgr[74653]: [balancer INFO root] prepared 0/10 changes
Jan 20 14:34:52 compute-0 nova_compute[250018]: 2026-01-20 14:34:52.540 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:34:52 compute-0 sudo[279704]: pam_unix(sudo:session): session closed for user root
Jan 20 14:34:52 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 14:34:52 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:34:52 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 20 14:34:52 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:34:52 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 20 14:34:52 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:34:52 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev a3a5a97f-7a89-40f8-80fe-3dbcd63c845a does not exist
Jan 20 14:34:52 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 04e1321c-1c6d-4a81-8008-20c0ebc7434d does not exist
Jan 20 14:34:52 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev ecf64eb0-b294-4282-aa88-ff7fd27879b6 does not exist
Jan 20 14:34:52 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 20 14:34:52 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:34:52 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 20 14:34:52 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:34:52 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 14:34:52 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:34:52 compute-0 sudo[279761]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:34:52 compute-0 sudo[279761]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:34:52 compute-0 sudo[279761]: pam_unix(sudo:session): session closed for user root
Jan 20 14:34:52 compute-0 sudo[279786]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:34:52 compute-0 sudo[279786]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:34:52 compute-0 sudo[279786]: pam_unix(sudo:session): session closed for user root
Jan 20 14:34:52 compute-0 sudo[279811]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:34:52 compute-0 sudo[279811]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:34:52 compute-0 sudo[279811]: pam_unix(sudo:session): session closed for user root
Jan 20 14:34:53 compute-0 sudo[279836]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 14:34:53 compute-0 sudo[279836]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:34:53 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:34:53 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:34:53 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:34:53.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:34:53 compute-0 ceph-mon[74360]: osdmap e174: 3 total, 3 up, 3 in
Jan 20 14:34:53 compute-0 ceph-mon[74360]: osdmap e175: 3 total, 3 up, 3 in
Jan 20 14:34:53 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:34:53 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:34:53 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:34:53 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:34:53 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:34:53 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:34:53 compute-0 podman[279901]: 2026-01-20 14:34:53.348414572 +0000 UTC m=+0.039143233 container create 0c651a0b36e2c440506e53ff763fab6248d9af91f0f9bda7642632f33c9c8aa3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_dijkstra, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 14:34:53 compute-0 systemd[1]: Started libpod-conmon-0c651a0b36e2c440506e53ff763fab6248d9af91f0f9bda7642632f33c9c8aa3.scope.
Jan 20 14:34:53 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:34:53 compute-0 podman[279901]: 2026-01-20 14:34:53.331497277 +0000 UTC m=+0.022225968 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:34:53 compute-0 podman[279901]: 2026-01-20 14:34:53.435149595 +0000 UTC m=+0.125878276 container init 0c651a0b36e2c440506e53ff763fab6248d9af91f0f9bda7642632f33c9c8aa3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_dijkstra, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:34:53 compute-0 podman[279901]: 2026-01-20 14:34:53.442432981 +0000 UTC m=+0.133161642 container start 0c651a0b36e2c440506e53ff763fab6248d9af91f0f9bda7642632f33c9c8aa3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_dijkstra, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3)
Jan 20 14:34:53 compute-0 podman[279901]: 2026-01-20 14:34:53.445522784 +0000 UTC m=+0.136251445 container attach 0c651a0b36e2c440506e53ff763fab6248d9af91f0f9bda7642632f33c9c8aa3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_dijkstra, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:34:53 compute-0 friendly_dijkstra[279918]: 167 167
Jan 20 14:34:53 compute-0 systemd[1]: libpod-0c651a0b36e2c440506e53ff763fab6248d9af91f0f9bda7642632f33c9c8aa3.scope: Deactivated successfully.
Jan 20 14:34:53 compute-0 podman[279901]: 2026-01-20 14:34:53.450137768 +0000 UTC m=+0.140866429 container died 0c651a0b36e2c440506e53ff763fab6248d9af91f0f9bda7642632f33c9c8aa3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_dijkstra, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True)
Jan 20 14:34:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-27b50ba1a8fd2cf5607dc1968bc2aeab1c0a221d7182351ff389c173a52cb8a8-merged.mount: Deactivated successfully.
Jan 20 14:34:53 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1339: 321 pgs: 321 active+clean; 418 MiB data, 668 MiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 6.5 MiB/s wr, 180 op/s
Jan 20 14:34:53 compute-0 podman[279901]: 2026-01-20 14:34:53.491501001 +0000 UTC m=+0.182229662 container remove 0c651a0b36e2c440506e53ff763fab6248d9af91f0f9bda7642632f33c9c8aa3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_dijkstra, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:34:53 compute-0 systemd[1]: libpod-conmon-0c651a0b36e2c440506e53ff763fab6248d9af91f0f9bda7642632f33c9c8aa3.scope: Deactivated successfully.
Jan 20 14:34:53 compute-0 podman[279942]: 2026-01-20 14:34:53.651960785 +0000 UTC m=+0.041394183 container create decfd4bbf4db4309bf3021f7b541012272dddc04a8978efa78083f5f47d21f04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_herschel, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 20 14:34:53 compute-0 systemd[1]: Started libpod-conmon-decfd4bbf4db4309bf3021f7b541012272dddc04a8978efa78083f5f47d21f04.scope.
Jan 20 14:34:53 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:34:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cd95c57d5964e46fee299710d8d25606fb41ce3b07710be71fad723bc36b16b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:34:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cd95c57d5964e46fee299710d8d25606fb41ce3b07710be71fad723bc36b16b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:34:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cd95c57d5964e46fee299710d8d25606fb41ce3b07710be71fad723bc36b16b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:34:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cd95c57d5964e46fee299710d8d25606fb41ce3b07710be71fad723bc36b16b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:34:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cd95c57d5964e46fee299710d8d25606fb41ce3b07710be71fad723bc36b16b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 14:34:53 compute-0 podman[279942]: 2026-01-20 14:34:53.636846369 +0000 UTC m=+0.026279797 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:34:53 compute-0 podman[279942]: 2026-01-20 14:34:53.734586997 +0000 UTC m=+0.124020395 container init decfd4bbf4db4309bf3021f7b541012272dddc04a8978efa78083f5f47d21f04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_herschel, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 20 14:34:53 compute-0 podman[279942]: 2026-01-20 14:34:53.743360154 +0000 UTC m=+0.132793552 container start decfd4bbf4db4309bf3021f7b541012272dddc04a8978efa78083f5f47d21f04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_herschel, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 14:34:53 compute-0 podman[279942]: 2026-01-20 14:34:53.74657518 +0000 UTC m=+0.136008578 container attach decfd4bbf4db4309bf3021f7b541012272dddc04a8978efa78083f5f47d21f04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_herschel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef)
Jan 20 14:34:54 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:34:54 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:34:54 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:34:54.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:34:54 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e175 do_prune osdmap full prune enabled
Jan 20 14:34:54 compute-0 ceph-mon[74360]: pgmap v1339: 321 pgs: 321 active+clean; 418 MiB data, 668 MiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 6.5 MiB/s wr, 180 op/s
Jan 20 14:34:54 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e176 e176: 3 total, 3 up, 3 in
Jan 20 14:34:54 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e176: 3 total, 3 up, 3 in
Jan 20 14:34:54 compute-0 sharp_herschel[279959]: --> passed data devices: 0 physical, 1 LVM
Jan 20 14:34:54 compute-0 sharp_herschel[279959]: --> relative data size: 1.0
Jan 20 14:34:54 compute-0 sharp_herschel[279959]: --> All data devices are unavailable
Jan 20 14:34:54 compute-0 systemd[1]: libpod-decfd4bbf4db4309bf3021f7b541012272dddc04a8978efa78083f5f47d21f04.scope: Deactivated successfully.
Jan 20 14:34:54 compute-0 podman[279942]: 2026-01-20 14:34:54.658466432 +0000 UTC m=+1.047899830 container died decfd4bbf4db4309bf3021f7b541012272dddc04a8978efa78083f5f47d21f04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_herschel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:34:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-8cd95c57d5964e46fee299710d8d25606fb41ce3b07710be71fad723bc36b16b-merged.mount: Deactivated successfully.
Jan 20 14:34:55 compute-0 podman[279942]: 2026-01-20 14:34:55.034907856 +0000 UTC m=+1.424341254 container remove decfd4bbf4db4309bf3021f7b541012272dddc04a8978efa78083f5f47d21f04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_herschel, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 20 14:34:55 compute-0 systemd[1]: libpod-conmon-decfd4bbf4db4309bf3021f7b541012272dddc04a8978efa78083f5f47d21f04.scope: Deactivated successfully.
Jan 20 14:34:55 compute-0 sudo[279836]: pam_unix(sudo:session): session closed for user root
Jan 20 14:34:55 compute-0 sudo[279989]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:34:55 compute-0 sudo[279989]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:34:55 compute-0 sudo[279989]: pam_unix(sudo:session): session closed for user root
Jan 20 14:34:55 compute-0 sudo[280014]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:34:55 compute-0 sudo[280014]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:34:55 compute-0 sudo[280014]: pam_unix(sudo:session): session closed for user root
Jan 20 14:34:55 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:34:55 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:34:55 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:34:55.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:34:55 compute-0 sudo[280039]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:34:55 compute-0 sudo[280039]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:34:55 compute-0 sudo[280039]: pam_unix(sudo:session): session closed for user root
Jan 20 14:34:55 compute-0 sudo[280064]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- lvm list --format json
Jan 20 14:34:55 compute-0 sudo[280064]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:34:55 compute-0 ceph-mon[74360]: osdmap e176: 3 total, 3 up, 3 in
Jan 20 14:34:55 compute-0 sshd-session[279987]: Invalid user admin from 157.245.78.139 port 60378
Jan 20 14:34:55 compute-0 sshd-session[279987]: Connection closed by invalid user admin 157.245.78.139 port 60378 [preauth]
Jan 20 14:34:55 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1341: 321 pgs: 321 active+clean; 356 MiB data, 639 MiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 4.7 MiB/s wr, 363 op/s
Jan 20 14:34:55 compute-0 podman[280129]: 2026-01-20 14:34:55.648282401 +0000 UTC m=+0.041984240 container create 564d0c51cf7bbc65e913f8f878a32975ef3d8a0b817dc6b9196992670943a243 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_villani, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 20 14:34:55 compute-0 systemd[1]: Started libpod-conmon-564d0c51cf7bbc65e913f8f878a32975ef3d8a0b817dc6b9196992670943a243.scope.
Jan 20 14:34:55 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:34:55 compute-0 podman[280129]: 2026-01-20 14:34:55.629624269 +0000 UTC m=+0.023326138 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:34:55 compute-0 podman[280129]: 2026-01-20 14:34:55.725181919 +0000 UTC m=+0.118883788 container init 564d0c51cf7bbc65e913f8f878a32975ef3d8a0b817dc6b9196992670943a243 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_villani, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:34:55 compute-0 podman[280129]: 2026-01-20 14:34:55.733955604 +0000 UTC m=+0.127657453 container start 564d0c51cf7bbc65e913f8f878a32975ef3d8a0b817dc6b9196992670943a243 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_villani, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 20 14:34:55 compute-0 podman[280129]: 2026-01-20 14:34:55.73825659 +0000 UTC m=+0.131958459 container attach 564d0c51cf7bbc65e913f8f878a32975ef3d8a0b817dc6b9196992670943a243 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_villani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:34:55 compute-0 practical_villani[280145]: 167 167
Jan 20 14:34:55 compute-0 systemd[1]: libpod-564d0c51cf7bbc65e913f8f878a32975ef3d8a0b817dc6b9196992670943a243.scope: Deactivated successfully.
Jan 20 14:34:55 compute-0 podman[280129]: 2026-01-20 14:34:55.741405105 +0000 UTC m=+0.135106964 container died 564d0c51cf7bbc65e913f8f878a32975ef3d8a0b817dc6b9196992670943a243 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_villani, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 14:34:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-d7472f7820f5745f357d95a9d2a39eb7dad0c2a86017a00ad753309f19506fe0-merged.mount: Deactivated successfully.
Jan 20 14:34:55 compute-0 podman[280129]: 2026-01-20 14:34:55.879859768 +0000 UTC m=+0.273561617 container remove 564d0c51cf7bbc65e913f8f878a32975ef3d8a0b817dc6b9196992670943a243 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_villani, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 20 14:34:55 compute-0 systemd[1]: libpod-conmon-564d0c51cf7bbc65e913f8f878a32975ef3d8a0b817dc6b9196992670943a243.scope: Deactivated successfully.
Jan 20 14:34:56 compute-0 podman[280169]: 2026-01-20 14:34:56.039116071 +0000 UTC m=+0.040254154 container create 46648fbdc62c41c94efe43c5bba48fc7c91eb921d4ceefd3426da5d1b8db67e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_rosalind, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:34:56 compute-0 systemd[1]: Started libpod-conmon-46648fbdc62c41c94efe43c5bba48fc7c91eb921d4ceefd3426da5d1b8db67e6.scope.
Jan 20 14:34:56 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:34:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8d32fa67076e9e4407aa36809e8ded093695211f90255d64b8fabb78512ed9e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:34:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8d32fa67076e9e4407aa36809e8ded093695211f90255d64b8fabb78512ed9e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:34:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8d32fa67076e9e4407aa36809e8ded093695211f90255d64b8fabb78512ed9e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:34:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8d32fa67076e9e4407aa36809e8ded093695211f90255d64b8fabb78512ed9e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:34:56 compute-0 podman[280169]: 2026-01-20 14:34:56.022488544 +0000 UTC m=+0.023626647 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:34:56 compute-0 podman[280169]: 2026-01-20 14:34:56.118149827 +0000 UTC m=+0.119287930 container init 46648fbdc62c41c94efe43c5bba48fc7c91eb921d4ceefd3426da5d1b8db67e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_rosalind, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 20 14:34:56 compute-0 nova_compute[250018]: 2026-01-20 14:34:56.122 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:34:56 compute-0 podman[280169]: 2026-01-20 14:34:56.125965636 +0000 UTC m=+0.127103719 container start 46648fbdc62c41c94efe43c5bba48fc7c91eb921d4ceefd3426da5d1b8db67e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_rosalind, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 20 14:34:56 compute-0 podman[280169]: 2026-01-20 14:34:56.134425914 +0000 UTC m=+0.135563997 container attach 46648fbdc62c41c94efe43c5bba48fc7c91eb921d4ceefd3426da5d1b8db67e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_rosalind, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:34:56 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:34:56 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:34:56 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:34:56.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:34:56 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/2671332732' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:34:56 compute-0 ceph-mon[74360]: pgmap v1341: 321 pgs: 321 active+clean; 356 MiB data, 639 MiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 4.7 MiB/s wr, 363 op/s
Jan 20 14:34:56 compute-0 practical_rosalind[280185]: {
Jan 20 14:34:56 compute-0 practical_rosalind[280185]:     "0": [
Jan 20 14:34:56 compute-0 practical_rosalind[280185]:         {
Jan 20 14:34:56 compute-0 practical_rosalind[280185]:             "devices": [
Jan 20 14:34:56 compute-0 practical_rosalind[280185]:                 "/dev/loop3"
Jan 20 14:34:56 compute-0 practical_rosalind[280185]:             ],
Jan 20 14:34:56 compute-0 practical_rosalind[280185]:             "lv_name": "ceph_lv0",
Jan 20 14:34:56 compute-0 practical_rosalind[280185]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:34:56 compute-0 practical_rosalind[280185]:             "lv_size": "7511998464",
Jan 20 14:34:56 compute-0 practical_rosalind[280185]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e399cf45-e6b6-5393-99f1-75c601d3f188,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afc3740a-ccee-46f8-b343-22648cc89376,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 20 14:34:56 compute-0 practical_rosalind[280185]:             "lv_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 14:34:56 compute-0 practical_rosalind[280185]:             "name": "ceph_lv0",
Jan 20 14:34:56 compute-0 practical_rosalind[280185]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:34:56 compute-0 practical_rosalind[280185]:             "tags": {
Jan 20 14:34:56 compute-0 practical_rosalind[280185]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:34:56 compute-0 practical_rosalind[280185]:                 "ceph.block_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 14:34:56 compute-0 practical_rosalind[280185]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 14:34:56 compute-0 practical_rosalind[280185]:                 "ceph.cluster_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 14:34:56 compute-0 practical_rosalind[280185]:                 "ceph.cluster_name": "ceph",
Jan 20 14:34:56 compute-0 practical_rosalind[280185]:                 "ceph.crush_device_class": "",
Jan 20 14:34:56 compute-0 practical_rosalind[280185]:                 "ceph.encrypted": "0",
Jan 20 14:34:56 compute-0 practical_rosalind[280185]:                 "ceph.osd_fsid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 14:34:56 compute-0 practical_rosalind[280185]:                 "ceph.osd_id": "0",
Jan 20 14:34:56 compute-0 practical_rosalind[280185]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 14:34:56 compute-0 practical_rosalind[280185]:                 "ceph.type": "block",
Jan 20 14:34:56 compute-0 practical_rosalind[280185]:                 "ceph.vdo": "0"
Jan 20 14:34:56 compute-0 practical_rosalind[280185]:             },
Jan 20 14:34:56 compute-0 practical_rosalind[280185]:             "type": "block",
Jan 20 14:34:56 compute-0 practical_rosalind[280185]:             "vg_name": "ceph_vg0"
Jan 20 14:34:56 compute-0 practical_rosalind[280185]:         }
Jan 20 14:34:56 compute-0 practical_rosalind[280185]:     ]
Jan 20 14:34:56 compute-0 practical_rosalind[280185]: }
Jan 20 14:34:56 compute-0 systemd[1]: libpod-46648fbdc62c41c94efe43c5bba48fc7c91eb921d4ceefd3426da5d1b8db67e6.scope: Deactivated successfully.
Jan 20 14:34:56 compute-0 podman[280169]: 2026-01-20 14:34:56.921764068 +0000 UTC m=+0.922902151 container died 46648fbdc62c41c94efe43c5bba48fc7c91eb921d4ceefd3426da5d1b8db67e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_rosalind, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:34:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-b8d32fa67076e9e4407aa36809e8ded093695211f90255d64b8fabb78512ed9e-merged.mount: Deactivated successfully.
Jan 20 14:34:56 compute-0 podman[280169]: 2026-01-20 14:34:56.985772218 +0000 UTC m=+0.986910301 container remove 46648fbdc62c41c94efe43c5bba48fc7c91eb921d4ceefd3426da5d1b8db67e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_rosalind, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Jan 20 14:34:56 compute-0 systemd[1]: libpod-conmon-46648fbdc62c41c94efe43c5bba48fc7c91eb921d4ceefd3426da5d1b8db67e6.scope: Deactivated successfully.
Jan 20 14:34:57 compute-0 sudo[280064]: pam_unix(sudo:session): session closed for user root
Jan 20 14:34:57 compute-0 sudo[280208]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:34:57 compute-0 sudo[280208]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:34:57 compute-0 sudo[280208]: pam_unix(sudo:session): session closed for user root
Jan 20 14:34:57 compute-0 sudo[280233]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:34:57 compute-0 sudo[280233]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:34:57 compute-0 sudo[280233]: pam_unix(sudo:session): session closed for user root
Jan 20 14:34:57 compute-0 sudo[280258]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:34:57 compute-0 sudo[280258]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:34:57 compute-0 sudo[280258]: pam_unix(sudo:session): session closed for user root
Jan 20 14:34:57 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:34:57 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:34:57 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:34:57.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:34:57 compute-0 sudo[280283]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- raw list --format json
Jan 20 14:34:57 compute-0 sudo[280283]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:34:57 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e176 do_prune osdmap full prune enabled
Jan 20 14:34:57 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e177 e177: 3 total, 3 up, 3 in
Jan 20 14:34:57 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e177: 3 total, 3 up, 3 in
Jan 20 14:34:57 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/3288395047' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:34:57 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:34:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 14:34:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:34:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 14:34:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:34:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:34:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:34:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:34:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:34:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:34:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:34:57 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1343: 321 pgs: 321 active+clean; 326 MiB data, 625 MiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 38 KiB/s wr, 333 op/s
Jan 20 14:34:57 compute-0 nova_compute[250018]: 2026-01-20 14:34:57.542 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:34:57 compute-0 podman[280347]: 2026-01-20 14:34:57.649573249 +0000 UTC m=+0.045721050 container create 33c3cfae4d5ff9f94703d73c24475d0d4db457a5a1db5387b097f2626cfb0ffe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_mcclintock, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True)
Jan 20 14:34:57 compute-0 systemd[1]: Started libpod-conmon-33c3cfae4d5ff9f94703d73c24475d0d4db457a5a1db5387b097f2626cfb0ffe.scope.
Jan 20 14:34:57 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:34:57 compute-0 podman[280347]: 2026-01-20 14:34:57.628918984 +0000 UTC m=+0.025066815 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:34:57 compute-0 podman[280347]: 2026-01-20 14:34:57.73399333 +0000 UTC m=+0.130141151 container init 33c3cfae4d5ff9f94703d73c24475d0d4db457a5a1db5387b097f2626cfb0ffe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_mcclintock, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:34:57 compute-0 podman[280347]: 2026-01-20 14:34:57.742485878 +0000 UTC m=+0.138633679 container start 33c3cfae4d5ff9f94703d73c24475d0d4db457a5a1db5387b097f2626cfb0ffe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_mcclintock, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 20 14:34:57 compute-0 podman[280347]: 2026-01-20 14:34:57.745776947 +0000 UTC m=+0.141924778 container attach 33c3cfae4d5ff9f94703d73c24475d0d4db457a5a1db5387b097f2626cfb0ffe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_mcclintock, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 20 14:34:57 compute-0 infallible_mcclintock[280363]: 167 167
Jan 20 14:34:57 compute-0 systemd[1]: libpod-33c3cfae4d5ff9f94703d73c24475d0d4db457a5a1db5387b097f2626cfb0ffe.scope: Deactivated successfully.
Jan 20 14:34:57 compute-0 podman[280347]: 2026-01-20 14:34:57.748945301 +0000 UTC m=+0.145093102 container died 33c3cfae4d5ff9f94703d73c24475d0d4db457a5a1db5387b097f2626cfb0ffe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_mcclintock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 20 14:34:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-a591579867db9ac71c0b13322adf78274774ce09bca9840c180031eda0228f0b-merged.mount: Deactivated successfully.
Jan 20 14:34:57 compute-0 podman[280347]: 2026-01-20 14:34:57.790227472 +0000 UTC m=+0.186375273 container remove 33c3cfae4d5ff9f94703d73c24475d0d4db457a5a1db5387b097f2626cfb0ffe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_mcclintock, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 20 14:34:57 compute-0 systemd[1]: libpod-conmon-33c3cfae4d5ff9f94703d73c24475d0d4db457a5a1db5387b097f2626cfb0ffe.scope: Deactivated successfully.
Jan 20 14:34:57 compute-0 podman[280386]: 2026-01-20 14:34:57.975192466 +0000 UTC m=+0.048449814 container create 15ab16989fb6c033eaa7d454e081a5e9391a0d50bf15f6ffcd6d21a5d0b3f0b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_northcutt, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 20 14:34:58 compute-0 systemd[1]: Started libpod-conmon-15ab16989fb6c033eaa7d454e081a5e9391a0d50bf15f6ffcd6d21a5d0b3f0b4.scope.
Jan 20 14:34:58 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:34:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a63b56b5819543232c79a9c9a3898ef3f5996e42654795e03ef54d136d813b97/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:34:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a63b56b5819543232c79a9c9a3898ef3f5996e42654795e03ef54d136d813b97/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:34:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a63b56b5819543232c79a9c9a3898ef3f5996e42654795e03ef54d136d813b97/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:34:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a63b56b5819543232c79a9c9a3898ef3f5996e42654795e03ef54d136d813b97/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:34:58 compute-0 podman[280386]: 2026-01-20 14:34:57.95562889 +0000 UTC m=+0.028886268 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:34:58 compute-0 podman[280386]: 2026-01-20 14:34:58.060323736 +0000 UTC m=+0.133581124 container init 15ab16989fb6c033eaa7d454e081a5e9391a0d50bf15f6ffcd6d21a5d0b3f0b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_northcutt, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 20 14:34:58 compute-0 podman[280386]: 2026-01-20 14:34:58.066983975 +0000 UTC m=+0.140241323 container start 15ab16989fb6c033eaa7d454e081a5e9391a0d50bf15f6ffcd6d21a5d0b3f0b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_northcutt, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 20 14:34:58 compute-0 podman[280386]: 2026-01-20 14:34:58.071105635 +0000 UTC m=+0.144362983 container attach 15ab16989fb6c033eaa7d454e081a5e9391a0d50bf15f6ffcd6d21a5d0b3f0b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_northcutt, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 20 14:34:58 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:34:58 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:34:58 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:34:58.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:34:58 compute-0 ceph-mon[74360]: osdmap e177: 3 total, 3 up, 3 in
Jan 20 14:34:58 compute-0 ceph-mon[74360]: pgmap v1343: 321 pgs: 321 active+clean; 326 MiB data, 625 MiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 38 KiB/s wr, 333 op/s
Jan 20 14:34:58 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/3207990597' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:34:58 compute-0 nova_compute[250018]: 2026-01-20 14:34:58.922 250022 DEBUG oslo_concurrency.lockutils [None req-1eb06bd1-4609-4194-8cbf-d078c08f3866 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] Acquiring lock "f7ef5c6e-053e-4a03-a68f-60f399ea2fc9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:34:58 compute-0 nova_compute[250018]: 2026-01-20 14:34:58.924 250022 DEBUG oslo_concurrency.lockutils [None req-1eb06bd1-4609-4194-8cbf-d078c08f3866 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] Lock "f7ef5c6e-053e-4a03-a68f-60f399ea2fc9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:34:58 compute-0 magical_northcutt[280403]: {
Jan 20 14:34:58 compute-0 magical_northcutt[280403]:     "afc3740a-ccee-46f8-b343-22648cc89376": {
Jan 20 14:34:58 compute-0 magical_northcutt[280403]:         "ceph_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 14:34:58 compute-0 magical_northcutt[280403]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 20 14:34:58 compute-0 magical_northcutt[280403]:         "osd_id": 0,
Jan 20 14:34:58 compute-0 magical_northcutt[280403]:         "osd_uuid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 14:34:58 compute-0 magical_northcutt[280403]:         "type": "bluestore"
Jan 20 14:34:58 compute-0 magical_northcutt[280403]:     }
Jan 20 14:34:58 compute-0 magical_northcutt[280403]: }
Jan 20 14:34:58 compute-0 nova_compute[250018]: 2026-01-20 14:34:58.948 250022 DEBUG nova.compute.manager [None req-1eb06bd1-4609-4194-8cbf-d078c08f3866 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] [instance: f7ef5c6e-053e-4a03-a68f-60f399ea2fc9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 14:34:58 compute-0 systemd[1]: libpod-15ab16989fb6c033eaa7d454e081a5e9391a0d50bf15f6ffcd6d21a5d0b3f0b4.scope: Deactivated successfully.
Jan 20 14:34:58 compute-0 conmon[280403]: conmon 15ab16989fb6c033eaa7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-15ab16989fb6c033eaa7d454e081a5e9391a0d50bf15f6ffcd6d21a5d0b3f0b4.scope/container/memory.events
Jan 20 14:34:58 compute-0 podman[280386]: 2026-01-20 14:34:58.970367289 +0000 UTC m=+1.043624637 container died 15ab16989fb6c033eaa7d454e081a5e9391a0d50bf15f6ffcd6d21a5d0b3f0b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_northcutt, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 20 14:34:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-a63b56b5819543232c79a9c9a3898ef3f5996e42654795e03ef54d136d813b97-merged.mount: Deactivated successfully.
Jan 20 14:34:59 compute-0 podman[280386]: 2026-01-20 14:34:59.024634317 +0000 UTC m=+1.097891665 container remove 15ab16989fb6c033eaa7d454e081a5e9391a0d50bf15f6ffcd6d21a5d0b3f0b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_northcutt, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:34:59 compute-0 systemd[1]: libpod-conmon-15ab16989fb6c033eaa7d454e081a5e9391a0d50bf15f6ffcd6d21a5d0b3f0b4.scope: Deactivated successfully.
Jan 20 14:34:59 compute-0 nova_compute[250018]: 2026-01-20 14:34:59.034 250022 DEBUG oslo_concurrency.lockutils [None req-1eb06bd1-4609-4194-8cbf-d078c08f3866 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:34:59 compute-0 nova_compute[250018]: 2026-01-20 14:34:59.035 250022 DEBUG oslo_concurrency.lockutils [None req-1eb06bd1-4609-4194-8cbf-d078c08f3866 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:34:59 compute-0 nova_compute[250018]: 2026-01-20 14:34:59.042 250022 DEBUG nova.virt.hardware [None req-1eb06bd1-4609-4194-8cbf-d078c08f3866 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 14:34:59 compute-0 nova_compute[250018]: 2026-01-20 14:34:59.043 250022 INFO nova.compute.claims [None req-1eb06bd1-4609-4194-8cbf-d078c08f3866 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] [instance: f7ef5c6e-053e-4a03-a68f-60f399ea2fc9] Claim successful on node compute-0.ctlplane.example.com
Jan 20 14:34:59 compute-0 sudo[280283]: pam_unix(sudo:session): session closed for user root
Jan 20 14:34:59 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 14:34:59 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:34:59 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 14:34:59 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:34:59 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 5acf72ed-307b-4abc-a208-6206a1acc7d0 does not exist
Jan 20 14:34:59 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 57377209-586a-43b8-aecd-d09dfd1d4c45 does not exist
Jan 20 14:34:59 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev a8487a00-3f35-480c-9b2b-e12f92e776d3 does not exist
Jan 20 14:34:59 compute-0 sudo[280439]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:34:59 compute-0 sudo[280439]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:34:59 compute-0 sudo[280439]: pam_unix(sudo:session): session closed for user root
Jan 20 14:34:59 compute-0 nova_compute[250018]: 2026-01-20 14:34:59.187 250022 DEBUG oslo_concurrency.processutils [None req-1eb06bd1-4609-4194-8cbf-d078c08f3866 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:34:59 compute-0 sudo[280464]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 14:34:59 compute-0 sudo[280464]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:34:59 compute-0 sudo[280464]: pam_unix(sudo:session): session closed for user root
Jan 20 14:34:59 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:34:59 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:34:59 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:34:59.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:34:59 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1344: 321 pgs: 321 active+clean; 343 MiB data, 634 MiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 1.3 MiB/s wr, 280 op/s
Jan 20 14:34:59 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:34:59 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3333457106' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:34:59 compute-0 nova_compute[250018]: 2026-01-20 14:34:59.662 250022 DEBUG oslo_concurrency.processutils [None req-1eb06bd1-4609-4194-8cbf-d078c08f3866 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:34:59 compute-0 nova_compute[250018]: 2026-01-20 14:34:59.669 250022 DEBUG nova.compute.provider_tree [None req-1eb06bd1-4609-4194-8cbf-d078c08f3866 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:34:59 compute-0 nova_compute[250018]: 2026-01-20 14:34:59.697 250022 DEBUG nova.scheduler.client.report [None req-1eb06bd1-4609-4194-8cbf-d078c08f3866 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:34:59 compute-0 nova_compute[250018]: 2026-01-20 14:34:59.731 250022 DEBUG oslo_concurrency.lockutils [None req-1eb06bd1-4609-4194-8cbf-d078c08f3866 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.696s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:34:59 compute-0 nova_compute[250018]: 2026-01-20 14:34:59.732 250022 DEBUG nova.compute.manager [None req-1eb06bd1-4609-4194-8cbf-d078c08f3866 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] [instance: f7ef5c6e-053e-4a03-a68f-60f399ea2fc9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 14:34:59 compute-0 nova_compute[250018]: 2026-01-20 14:34:59.791 250022 DEBUG nova.compute.manager [None req-1eb06bd1-4609-4194-8cbf-d078c08f3866 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] [instance: f7ef5c6e-053e-4a03-a68f-60f399ea2fc9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 20 14:34:59 compute-0 nova_compute[250018]: 2026-01-20 14:34:59.792 250022 DEBUG nova.network.neutron [None req-1eb06bd1-4609-4194-8cbf-d078c08f3866 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] [instance: f7ef5c6e-053e-4a03-a68f-60f399ea2fc9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 14:34:59 compute-0 nova_compute[250018]: 2026-01-20 14:34:59.812 250022 INFO nova.virt.libvirt.driver [None req-1eb06bd1-4609-4194-8cbf-d078c08f3866 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] [instance: f7ef5c6e-053e-4a03-a68f-60f399ea2fc9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 14:34:59 compute-0 nova_compute[250018]: 2026-01-20 14:34:59.831 250022 DEBUG nova.compute.manager [None req-1eb06bd1-4609-4194-8cbf-d078c08f3866 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] [instance: f7ef5c6e-053e-4a03-a68f-60f399ea2fc9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 14:34:59 compute-0 nova_compute[250018]: 2026-01-20 14:34:59.953 250022 DEBUG nova.compute.manager [None req-1eb06bd1-4609-4194-8cbf-d078c08f3866 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] [instance: f7ef5c6e-053e-4a03-a68f-60f399ea2fc9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 14:34:59 compute-0 nova_compute[250018]: 2026-01-20 14:34:59.955 250022 DEBUG nova.virt.libvirt.driver [None req-1eb06bd1-4609-4194-8cbf-d078c08f3866 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] [instance: f7ef5c6e-053e-4a03-a68f-60f399ea2fc9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 14:34:59 compute-0 nova_compute[250018]: 2026-01-20 14:34:59.955 250022 INFO nova.virt.libvirt.driver [None req-1eb06bd1-4609-4194-8cbf-d078c08f3866 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] [instance: f7ef5c6e-053e-4a03-a68f-60f399ea2fc9] Creating image(s)
Jan 20 14:34:59 compute-0 nova_compute[250018]: 2026-01-20 14:34:59.977 250022 DEBUG nova.storage.rbd_utils [None req-1eb06bd1-4609-4194-8cbf-d078c08f3866 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] rbd image f7ef5c6e-053e-4a03-a68f-60f399ea2fc9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:35:00 compute-0 nova_compute[250018]: 2026-01-20 14:34:59.999 250022 DEBUG nova.storage.rbd_utils [None req-1eb06bd1-4609-4194-8cbf-d078c08f3866 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] rbd image f7ef5c6e-053e-4a03-a68f-60f399ea2fc9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:35:00 compute-0 nova_compute[250018]: 2026-01-20 14:35:00.021 250022 DEBUG nova.storage.rbd_utils [None req-1eb06bd1-4609-4194-8cbf-d078c08f3866 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] rbd image f7ef5c6e-053e-4a03-a68f-60f399ea2fc9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:35:00 compute-0 nova_compute[250018]: 2026-01-20 14:35:00.025 250022 DEBUG oslo_concurrency.processutils [None req-1eb06bd1-4609-4194-8cbf-d078c08f3866 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:35:00 compute-0 nova_compute[250018]: 2026-01-20 14:35:00.084 250022 DEBUG oslo_concurrency.processutils [None req-1eb06bd1-4609-4194-8cbf-d078c08f3866 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:35:00 compute-0 nova_compute[250018]: 2026-01-20 14:35:00.085 250022 DEBUG oslo_concurrency.lockutils [None req-1eb06bd1-4609-4194-8cbf-d078c08f3866 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] Acquiring lock "82d5c1918fd7c974214c7a48c1793a7a82560462" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:35:00 compute-0 nova_compute[250018]: 2026-01-20 14:35:00.085 250022 DEBUG oslo_concurrency.lockutils [None req-1eb06bd1-4609-4194-8cbf-d078c08f3866 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:35:00 compute-0 nova_compute[250018]: 2026-01-20 14:35:00.085 250022 DEBUG oslo_concurrency.lockutils [None req-1eb06bd1-4609-4194-8cbf-d078c08f3866 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:35:00 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:35:00 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:35:00 compute-0 ceph-mon[74360]: pgmap v1344: 321 pgs: 321 active+clean; 343 MiB data, 634 MiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 1.3 MiB/s wr, 280 op/s
Jan 20 14:35:00 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3333457106' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:35:00 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/2891658671' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:35:00 compute-0 nova_compute[250018]: 2026-01-20 14:35:00.170 250022 DEBUG nova.storage.rbd_utils [None req-1eb06bd1-4609-4194-8cbf-d078c08f3866 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] rbd image f7ef5c6e-053e-4a03-a68f-60f399ea2fc9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:35:00 compute-0 nova_compute[250018]: 2026-01-20 14:35:00.176 250022 DEBUG oslo_concurrency.processutils [None req-1eb06bd1-4609-4194-8cbf-d078c08f3866 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 f7ef5c6e-053e-4a03-a68f-60f399ea2fc9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:35:00 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:35:00 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:35:00 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:35:00.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:35:00 compute-0 nova_compute[250018]: 2026-01-20 14:35:00.221 250022 DEBUG nova.network.neutron [None req-1eb06bd1-4609-4194-8cbf-d078c08f3866 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] [instance: f7ef5c6e-053e-4a03-a68f-60f399ea2fc9] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Jan 20 14:35:00 compute-0 nova_compute[250018]: 2026-01-20 14:35:00.222 250022 DEBUG nova.compute.manager [None req-1eb06bd1-4609-4194-8cbf-d078c08f3866 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] [instance: f7ef5c6e-053e-4a03-a68f-60f399ea2fc9] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 20 14:35:00 compute-0 nova_compute[250018]: 2026-01-20 14:35:00.483 250022 DEBUG oslo_concurrency.processutils [None req-1eb06bd1-4609-4194-8cbf-d078c08f3866 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 f7ef5c6e-053e-4a03-a68f-60f399ea2fc9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.307s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:35:00 compute-0 nova_compute[250018]: 2026-01-20 14:35:00.554 250022 DEBUG nova.storage.rbd_utils [None req-1eb06bd1-4609-4194-8cbf-d078c08f3866 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] resizing rbd image f7ef5c6e-053e-4a03-a68f-60f399ea2fc9_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 20 14:35:00 compute-0 nova_compute[250018]: 2026-01-20 14:35:00.813 250022 DEBUG nova.objects.instance [None req-1eb06bd1-4609-4194-8cbf-d078c08f3866 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] Lazy-loading 'migration_context' on Instance uuid f7ef5c6e-053e-4a03-a68f-60f399ea2fc9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:35:00 compute-0 nova_compute[250018]: 2026-01-20 14:35:00.828 250022 DEBUG nova.virt.libvirt.driver [None req-1eb06bd1-4609-4194-8cbf-d078c08f3866 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] [instance: f7ef5c6e-053e-4a03-a68f-60f399ea2fc9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 14:35:00 compute-0 nova_compute[250018]: 2026-01-20 14:35:00.828 250022 DEBUG nova.virt.libvirt.driver [None req-1eb06bd1-4609-4194-8cbf-d078c08f3866 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] [instance: f7ef5c6e-053e-4a03-a68f-60f399ea2fc9] Ensure instance console log exists: /var/lib/nova/instances/f7ef5c6e-053e-4a03-a68f-60f399ea2fc9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 14:35:00 compute-0 nova_compute[250018]: 2026-01-20 14:35:00.829 250022 DEBUG oslo_concurrency.lockutils [None req-1eb06bd1-4609-4194-8cbf-d078c08f3866 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:35:00 compute-0 nova_compute[250018]: 2026-01-20 14:35:00.829 250022 DEBUG oslo_concurrency.lockutils [None req-1eb06bd1-4609-4194-8cbf-d078c08f3866 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:35:00 compute-0 nova_compute[250018]: 2026-01-20 14:35:00.829 250022 DEBUG oslo_concurrency.lockutils [None req-1eb06bd1-4609-4194-8cbf-d078c08f3866 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:35:00 compute-0 nova_compute[250018]: 2026-01-20 14:35:00.830 250022 DEBUG nova.virt.libvirt.driver [None req-1eb06bd1-4609-4194-8cbf-d078c08f3866 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] [instance: f7ef5c6e-053e-4a03-a68f-60f399ea2fc9] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encrypted': False, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'guest_format': None, 'size': 0, 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 14:35:00 compute-0 nova_compute[250018]: 2026-01-20 14:35:00.835 250022 WARNING nova.virt.libvirt.driver [None req-1eb06bd1-4609-4194-8cbf-d078c08f3866 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:35:00 compute-0 nova_compute[250018]: 2026-01-20 14:35:00.839 250022 DEBUG nova.virt.libvirt.host [None req-1eb06bd1-4609-4194-8cbf-d078c08f3866 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 14:35:00 compute-0 nova_compute[250018]: 2026-01-20 14:35:00.839 250022 DEBUG nova.virt.libvirt.host [None req-1eb06bd1-4609-4194-8cbf-d078c08f3866 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 14:35:00 compute-0 nova_compute[250018]: 2026-01-20 14:35:00.842 250022 DEBUG nova.virt.libvirt.host [None req-1eb06bd1-4609-4194-8cbf-d078c08f3866 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 14:35:00 compute-0 nova_compute[250018]: 2026-01-20 14:35:00.843 250022 DEBUG nova.virt.libvirt.host [None req-1eb06bd1-4609-4194-8cbf-d078c08f3866 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 14:35:00 compute-0 nova_compute[250018]: 2026-01-20 14:35:00.844 250022 DEBUG nova.virt.libvirt.driver [None req-1eb06bd1-4609-4194-8cbf-d078c08f3866 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 14:35:00 compute-0 nova_compute[250018]: 2026-01-20 14:35:00.845 250022 DEBUG nova.virt.hardware [None req-1eb06bd1-4609-4194-8cbf-d078c08f3866 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 14:35:00 compute-0 nova_compute[250018]: 2026-01-20 14:35:00.845 250022 DEBUG nova.virt.hardware [None req-1eb06bd1-4609-4194-8cbf-d078c08f3866 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 14:35:00 compute-0 nova_compute[250018]: 2026-01-20 14:35:00.845 250022 DEBUG nova.virt.hardware [None req-1eb06bd1-4609-4194-8cbf-d078c08f3866 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 14:35:00 compute-0 nova_compute[250018]: 2026-01-20 14:35:00.845 250022 DEBUG nova.virt.hardware [None req-1eb06bd1-4609-4194-8cbf-d078c08f3866 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 14:35:00 compute-0 nova_compute[250018]: 2026-01-20 14:35:00.846 250022 DEBUG nova.virt.hardware [None req-1eb06bd1-4609-4194-8cbf-d078c08f3866 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 14:35:00 compute-0 nova_compute[250018]: 2026-01-20 14:35:00.846 250022 DEBUG nova.virt.hardware [None req-1eb06bd1-4609-4194-8cbf-d078c08f3866 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 14:35:00 compute-0 nova_compute[250018]: 2026-01-20 14:35:00.846 250022 DEBUG nova.virt.hardware [None req-1eb06bd1-4609-4194-8cbf-d078c08f3866 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 14:35:00 compute-0 nova_compute[250018]: 2026-01-20 14:35:00.846 250022 DEBUG nova.virt.hardware [None req-1eb06bd1-4609-4194-8cbf-d078c08f3866 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 14:35:00 compute-0 nova_compute[250018]: 2026-01-20 14:35:00.847 250022 DEBUG nova.virt.hardware [None req-1eb06bd1-4609-4194-8cbf-d078c08f3866 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 14:35:00 compute-0 nova_compute[250018]: 2026-01-20 14:35:00.847 250022 DEBUG nova.virt.hardware [None req-1eb06bd1-4609-4194-8cbf-d078c08f3866 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 14:35:00 compute-0 nova_compute[250018]: 2026-01-20 14:35:00.847 250022 DEBUG nova.virt.hardware [None req-1eb06bd1-4609-4194-8cbf-d078c08f3866 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 14:35:00 compute-0 nova_compute[250018]: 2026-01-20 14:35:00.849 250022 DEBUG oslo_concurrency.processutils [None req-1eb06bd1-4609-4194-8cbf-d078c08f3866 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:35:01 compute-0 nova_compute[250018]: 2026-01-20 14:35:01.124 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:35:01 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/38576092' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:35:01 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:35:01 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:35:01 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:35:01.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:35:01 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:35:01 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3221613903' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:35:01 compute-0 nova_compute[250018]: 2026-01-20 14:35:01.379 250022 DEBUG oslo_concurrency.processutils [None req-1eb06bd1-4609-4194-8cbf-d078c08f3866 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.530s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:35:01 compute-0 nova_compute[250018]: 2026-01-20 14:35:01.403 250022 DEBUG nova.storage.rbd_utils [None req-1eb06bd1-4609-4194-8cbf-d078c08f3866 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] rbd image f7ef5c6e-053e-4a03-a68f-60f399ea2fc9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:35:01 compute-0 nova_compute[250018]: 2026-01-20 14:35:01.407 250022 DEBUG oslo_concurrency.processutils [None req-1eb06bd1-4609-4194-8cbf-d078c08f3866 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:35:01 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1345: 321 pgs: 321 active+clean; 429 MiB data, 678 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 6.6 MiB/s wr, 322 op/s
Jan 20 14:35:01 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:35:01 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2378119403' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:35:01 compute-0 nova_compute[250018]: 2026-01-20 14:35:01.852 250022 DEBUG oslo_concurrency.processutils [None req-1eb06bd1-4609-4194-8cbf-d078c08f3866 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:35:01 compute-0 nova_compute[250018]: 2026-01-20 14:35:01.854 250022 DEBUG nova.objects.instance [None req-1eb06bd1-4609-4194-8cbf-d078c08f3866 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] Lazy-loading 'pci_devices' on Instance uuid f7ef5c6e-053e-4a03-a68f-60f399ea2fc9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:35:01 compute-0 nova_compute[250018]: 2026-01-20 14:35:01.886 250022 DEBUG nova.virt.libvirt.driver [None req-1eb06bd1-4609-4194-8cbf-d078c08f3866 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] [instance: f7ef5c6e-053e-4a03-a68f-60f399ea2fc9] End _get_guest_xml xml=<domain type="kvm">
Jan 20 14:35:01 compute-0 nova_compute[250018]:   <uuid>f7ef5c6e-053e-4a03-a68f-60f399ea2fc9</uuid>
Jan 20 14:35:01 compute-0 nova_compute[250018]:   <name>instance-0000002f</name>
Jan 20 14:35:01 compute-0 nova_compute[250018]:   <memory>131072</memory>
Jan 20 14:35:01 compute-0 nova_compute[250018]:   <vcpu>1</vcpu>
Jan 20 14:35:01 compute-0 nova_compute[250018]:   <metadata>
Jan 20 14:35:01 compute-0 nova_compute[250018]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 14:35:01 compute-0 nova_compute[250018]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 14:35:01 compute-0 nova_compute[250018]:       <nova:name>tempest-ListImageFiltersTestJSON-server-1042132418</nova:name>
Jan 20 14:35:01 compute-0 nova_compute[250018]:       <nova:creationTime>2026-01-20 14:35:00</nova:creationTime>
Jan 20 14:35:01 compute-0 nova_compute[250018]:       <nova:flavor name="m1.nano">
Jan 20 14:35:01 compute-0 nova_compute[250018]:         <nova:memory>128</nova:memory>
Jan 20 14:35:01 compute-0 nova_compute[250018]:         <nova:disk>1</nova:disk>
Jan 20 14:35:01 compute-0 nova_compute[250018]:         <nova:swap>0</nova:swap>
Jan 20 14:35:01 compute-0 nova_compute[250018]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 14:35:01 compute-0 nova_compute[250018]:         <nova:vcpus>1</nova:vcpus>
Jan 20 14:35:01 compute-0 nova_compute[250018]:       </nova:flavor>
Jan 20 14:35:01 compute-0 nova_compute[250018]:       <nova:owner>
Jan 20 14:35:01 compute-0 nova_compute[250018]:         <nova:user uuid="72ad8e217e1348378596753eefca1452">tempest-ListImageFiltersTestJSON-1649594432-project-member</nova:user>
Jan 20 14:35:01 compute-0 nova_compute[250018]:         <nova:project uuid="9e10f687e8a14fc3bfa98df19df5befd">tempest-ListImageFiltersTestJSON-1649594432</nova:project>
Jan 20 14:35:01 compute-0 nova_compute[250018]:       </nova:owner>
Jan 20 14:35:01 compute-0 nova_compute[250018]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 14:35:01 compute-0 nova_compute[250018]:       <nova:ports/>
Jan 20 14:35:01 compute-0 nova_compute[250018]:     </nova:instance>
Jan 20 14:35:01 compute-0 nova_compute[250018]:   </metadata>
Jan 20 14:35:01 compute-0 nova_compute[250018]:   <sysinfo type="smbios">
Jan 20 14:35:01 compute-0 nova_compute[250018]:     <system>
Jan 20 14:35:01 compute-0 nova_compute[250018]:       <entry name="manufacturer">RDO</entry>
Jan 20 14:35:01 compute-0 nova_compute[250018]:       <entry name="product">OpenStack Compute</entry>
Jan 20 14:35:01 compute-0 nova_compute[250018]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 14:35:01 compute-0 nova_compute[250018]:       <entry name="serial">f7ef5c6e-053e-4a03-a68f-60f399ea2fc9</entry>
Jan 20 14:35:01 compute-0 nova_compute[250018]:       <entry name="uuid">f7ef5c6e-053e-4a03-a68f-60f399ea2fc9</entry>
Jan 20 14:35:01 compute-0 nova_compute[250018]:       <entry name="family">Virtual Machine</entry>
Jan 20 14:35:01 compute-0 nova_compute[250018]:     </system>
Jan 20 14:35:01 compute-0 nova_compute[250018]:   </sysinfo>
Jan 20 14:35:01 compute-0 nova_compute[250018]:   <os>
Jan 20 14:35:01 compute-0 nova_compute[250018]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 14:35:01 compute-0 nova_compute[250018]:     <boot dev="hd"/>
Jan 20 14:35:01 compute-0 nova_compute[250018]:     <smbios mode="sysinfo"/>
Jan 20 14:35:01 compute-0 nova_compute[250018]:   </os>
Jan 20 14:35:01 compute-0 nova_compute[250018]:   <features>
Jan 20 14:35:01 compute-0 nova_compute[250018]:     <acpi/>
Jan 20 14:35:01 compute-0 nova_compute[250018]:     <apic/>
Jan 20 14:35:01 compute-0 nova_compute[250018]:     <vmcoreinfo/>
Jan 20 14:35:01 compute-0 nova_compute[250018]:   </features>
Jan 20 14:35:01 compute-0 nova_compute[250018]:   <clock offset="utc">
Jan 20 14:35:01 compute-0 nova_compute[250018]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 14:35:01 compute-0 nova_compute[250018]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 14:35:01 compute-0 nova_compute[250018]:     <timer name="hpet" present="no"/>
Jan 20 14:35:01 compute-0 nova_compute[250018]:   </clock>
Jan 20 14:35:01 compute-0 nova_compute[250018]:   <cpu mode="custom" match="exact">
Jan 20 14:35:01 compute-0 nova_compute[250018]:     <model>Nehalem</model>
Jan 20 14:35:01 compute-0 nova_compute[250018]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 14:35:01 compute-0 nova_compute[250018]:   </cpu>
Jan 20 14:35:01 compute-0 nova_compute[250018]:   <devices>
Jan 20 14:35:01 compute-0 nova_compute[250018]:     <disk type="network" device="disk">
Jan 20 14:35:01 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 14:35:01 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/f7ef5c6e-053e-4a03-a68f-60f399ea2fc9_disk">
Jan 20 14:35:01 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:35:01 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:35:01 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:35:01 compute-0 nova_compute[250018]:       </source>
Jan 20 14:35:01 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 14:35:01 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:35:01 compute-0 nova_compute[250018]:       </auth>
Jan 20 14:35:01 compute-0 nova_compute[250018]:       <target dev="vda" bus="virtio"/>
Jan 20 14:35:01 compute-0 nova_compute[250018]:     </disk>
Jan 20 14:35:01 compute-0 nova_compute[250018]:     <disk type="network" device="cdrom">
Jan 20 14:35:01 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 14:35:01 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/f7ef5c6e-053e-4a03-a68f-60f399ea2fc9_disk.config">
Jan 20 14:35:01 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:35:01 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:35:01 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:35:01 compute-0 nova_compute[250018]:       </source>
Jan 20 14:35:01 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 14:35:01 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:35:01 compute-0 nova_compute[250018]:       </auth>
Jan 20 14:35:01 compute-0 nova_compute[250018]:       <target dev="sda" bus="sata"/>
Jan 20 14:35:01 compute-0 nova_compute[250018]:     </disk>
Jan 20 14:35:01 compute-0 nova_compute[250018]:     <serial type="pty">
Jan 20 14:35:01 compute-0 nova_compute[250018]:       <log file="/var/lib/nova/instances/f7ef5c6e-053e-4a03-a68f-60f399ea2fc9/console.log" append="off"/>
Jan 20 14:35:01 compute-0 nova_compute[250018]:     </serial>
Jan 20 14:35:01 compute-0 nova_compute[250018]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 14:35:01 compute-0 nova_compute[250018]:     <video>
Jan 20 14:35:01 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 14:35:01 compute-0 nova_compute[250018]:     </video>
Jan 20 14:35:01 compute-0 nova_compute[250018]:     <input type="tablet" bus="usb"/>
Jan 20 14:35:01 compute-0 nova_compute[250018]:     <rng model="virtio">
Jan 20 14:35:01 compute-0 nova_compute[250018]:       <backend model="random">/dev/urandom</backend>
Jan 20 14:35:01 compute-0 nova_compute[250018]:     </rng>
Jan 20 14:35:01 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root"/>
Jan 20 14:35:01 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:35:01 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:35:01 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:35:01 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:35:01 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:35:01 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:35:01 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:35:01 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:35:01 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:35:01 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:35:01 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:35:01 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:35:01 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:35:01 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:35:01 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:35:01 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:35:01 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:35:01 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:35:01 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:35:01 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:35:01 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:35:01 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:35:01 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:35:01 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:35:01 compute-0 nova_compute[250018]:     <controller type="usb" index="0"/>
Jan 20 14:35:01 compute-0 nova_compute[250018]:     <memballoon model="virtio">
Jan 20 14:35:01 compute-0 nova_compute[250018]:       <stats period="10"/>
Jan 20 14:35:01 compute-0 nova_compute[250018]:     </memballoon>
Jan 20 14:35:01 compute-0 nova_compute[250018]:   </devices>
Jan 20 14:35:01 compute-0 nova_compute[250018]: </domain>
Jan 20 14:35:01 compute-0 nova_compute[250018]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 14:35:01 compute-0 nova_compute[250018]: 2026-01-20 14:35:01.953 250022 DEBUG nova.virt.libvirt.driver [None req-1eb06bd1-4609-4194-8cbf-d078c08f3866 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:35:01 compute-0 nova_compute[250018]: 2026-01-20 14:35:01.954 250022 DEBUG nova.virt.libvirt.driver [None req-1eb06bd1-4609-4194-8cbf-d078c08f3866 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:35:01 compute-0 nova_compute[250018]: 2026-01-20 14:35:01.955 250022 INFO nova.virt.libvirt.driver [None req-1eb06bd1-4609-4194-8cbf-d078c08f3866 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] [instance: f7ef5c6e-053e-4a03-a68f-60f399ea2fc9] Using config drive
Jan 20 14:35:01 compute-0 nova_compute[250018]: 2026-01-20 14:35:01.979 250022 DEBUG nova.storage.rbd_utils [None req-1eb06bd1-4609-4194-8cbf-d078c08f3866 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] rbd image f7ef5c6e-053e-4a03-a68f-60f399ea2fc9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:35:02 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:35:02 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:35:02 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:35:02.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:35:02 compute-0 nova_compute[250018]: 2026-01-20 14:35:02.192 250022 INFO nova.virt.libvirt.driver [None req-1eb06bd1-4609-4194-8cbf-d078c08f3866 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] [instance: f7ef5c6e-053e-4a03-a68f-60f399ea2fc9] Creating config drive at /var/lib/nova/instances/f7ef5c6e-053e-4a03-a68f-60f399ea2fc9/disk.config
Jan 20 14:35:02 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/2503250766' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:35:02 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3221613903' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:35:02 compute-0 ceph-mon[74360]: pgmap v1345: 321 pgs: 321 active+clean; 429 MiB data, 678 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 6.6 MiB/s wr, 322 op/s
Jan 20 14:35:02 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/3622645078' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:35:02 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/2378119403' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:35:02 compute-0 nova_compute[250018]: 2026-01-20 14:35:02.199 250022 DEBUG oslo_concurrency.processutils [None req-1eb06bd1-4609-4194-8cbf-d078c08f3866 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f7ef5c6e-053e-4a03-a68f-60f399ea2fc9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmplz7scy5v execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:35:02 compute-0 nova_compute[250018]: 2026-01-20 14:35:02.331 250022 DEBUG oslo_concurrency.processutils [None req-1eb06bd1-4609-4194-8cbf-d078c08f3866 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f7ef5c6e-053e-4a03-a68f-60f399ea2fc9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmplz7scy5v" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:35:02 compute-0 nova_compute[250018]: 2026-01-20 14:35:02.362 250022 DEBUG nova.storage.rbd_utils [None req-1eb06bd1-4609-4194-8cbf-d078c08f3866 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] rbd image f7ef5c6e-053e-4a03-a68f-60f399ea2fc9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:35:02 compute-0 nova_compute[250018]: 2026-01-20 14:35:02.366 250022 DEBUG oslo_concurrency.processutils [None req-1eb06bd1-4609-4194-8cbf-d078c08f3866 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f7ef5c6e-053e-4a03-a68f-60f399ea2fc9/disk.config f7ef5c6e-053e-4a03-a68f-60f399ea2fc9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:35:02 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:35:02 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e177 do_prune osdmap full prune enabled
Jan 20 14:35:02 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e178 e178: 3 total, 3 up, 3 in
Jan 20 14:35:02 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e178: 3 total, 3 up, 3 in
Jan 20 14:35:02 compute-0 nova_compute[250018]: 2026-01-20 14:35:02.546 250022 DEBUG oslo_concurrency.processutils [None req-1eb06bd1-4609-4194-8cbf-d078c08f3866 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f7ef5c6e-053e-4a03-a68f-60f399ea2fc9/disk.config f7ef5c6e-053e-4a03-a68f-60f399ea2fc9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.180s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:35:02 compute-0 nova_compute[250018]: 2026-01-20 14:35:02.547 250022 INFO nova.virt.libvirt.driver [None req-1eb06bd1-4609-4194-8cbf-d078c08f3866 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] [instance: f7ef5c6e-053e-4a03-a68f-60f399ea2fc9] Deleting local config drive /var/lib/nova/instances/f7ef5c6e-053e-4a03-a68f-60f399ea2fc9/disk.config because it was imported into RBD.
Jan 20 14:35:02 compute-0 nova_compute[250018]: 2026-01-20 14:35:02.548 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:35:02 compute-0 sudo[280799]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:35:02 compute-0 sudo[280799]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:35:02 compute-0 sudo[280799]: pam_unix(sudo:session): session closed for user root
Jan 20 14:35:02 compute-0 systemd-machined[216401]: New machine qemu-21-instance-0000002f.
Jan 20 14:35:02 compute-0 systemd[1]: Started Virtual Machine qemu-21-instance-0000002f.
Jan 20 14:35:02 compute-0 sudo[280832]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:35:02 compute-0 sudo[280832]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:35:02 compute-0 sudo[280832]: pam_unix(sudo:session): session closed for user root
Jan 20 14:35:03 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:35:03 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:35:03 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:35:03.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:35:03 compute-0 nova_compute[250018]: 2026-01-20 14:35:03.277 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768919703.2769551, f7ef5c6e-053e-4a03-a68f-60f399ea2fc9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:35:03 compute-0 nova_compute[250018]: 2026-01-20 14:35:03.279 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: f7ef5c6e-053e-4a03-a68f-60f399ea2fc9] VM Resumed (Lifecycle Event)
Jan 20 14:35:03 compute-0 nova_compute[250018]: 2026-01-20 14:35:03.282 250022 DEBUG nova.compute.manager [None req-1eb06bd1-4609-4194-8cbf-d078c08f3866 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] [instance: f7ef5c6e-053e-4a03-a68f-60f399ea2fc9] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 14:35:03 compute-0 nova_compute[250018]: 2026-01-20 14:35:03.283 250022 DEBUG nova.virt.libvirt.driver [None req-1eb06bd1-4609-4194-8cbf-d078c08f3866 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] [instance: f7ef5c6e-053e-4a03-a68f-60f399ea2fc9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 14:35:03 compute-0 nova_compute[250018]: 2026-01-20 14:35:03.287 250022 INFO nova.virt.libvirt.driver [-] [instance: f7ef5c6e-053e-4a03-a68f-60f399ea2fc9] Instance spawned successfully.
Jan 20 14:35:03 compute-0 nova_compute[250018]: 2026-01-20 14:35:03.288 250022 DEBUG nova.virt.libvirt.driver [None req-1eb06bd1-4609-4194-8cbf-d078c08f3866 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] [instance: f7ef5c6e-053e-4a03-a68f-60f399ea2fc9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 14:35:03 compute-0 nova_compute[250018]: 2026-01-20 14:35:03.312 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: f7ef5c6e-053e-4a03-a68f-60f399ea2fc9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:35:03 compute-0 nova_compute[250018]: 2026-01-20 14:35:03.325 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: f7ef5c6e-053e-4a03-a68f-60f399ea2fc9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:35:03 compute-0 nova_compute[250018]: 2026-01-20 14:35:03.328 250022 DEBUG nova.virt.libvirt.driver [None req-1eb06bd1-4609-4194-8cbf-d078c08f3866 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] [instance: f7ef5c6e-053e-4a03-a68f-60f399ea2fc9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:35:03 compute-0 nova_compute[250018]: 2026-01-20 14:35:03.329 250022 DEBUG nova.virt.libvirt.driver [None req-1eb06bd1-4609-4194-8cbf-d078c08f3866 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] [instance: f7ef5c6e-053e-4a03-a68f-60f399ea2fc9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:35:03 compute-0 nova_compute[250018]: 2026-01-20 14:35:03.329 250022 DEBUG nova.virt.libvirt.driver [None req-1eb06bd1-4609-4194-8cbf-d078c08f3866 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] [instance: f7ef5c6e-053e-4a03-a68f-60f399ea2fc9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:35:03 compute-0 nova_compute[250018]: 2026-01-20 14:35:03.330 250022 DEBUG nova.virt.libvirt.driver [None req-1eb06bd1-4609-4194-8cbf-d078c08f3866 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] [instance: f7ef5c6e-053e-4a03-a68f-60f399ea2fc9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:35:03 compute-0 nova_compute[250018]: 2026-01-20 14:35:03.330 250022 DEBUG nova.virt.libvirt.driver [None req-1eb06bd1-4609-4194-8cbf-d078c08f3866 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] [instance: f7ef5c6e-053e-4a03-a68f-60f399ea2fc9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:35:03 compute-0 nova_compute[250018]: 2026-01-20 14:35:03.330 250022 DEBUG nova.virt.libvirt.driver [None req-1eb06bd1-4609-4194-8cbf-d078c08f3866 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] [instance: f7ef5c6e-053e-4a03-a68f-60f399ea2fc9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:35:03 compute-0 nova_compute[250018]: 2026-01-20 14:35:03.362 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: f7ef5c6e-053e-4a03-a68f-60f399ea2fc9] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:35:03 compute-0 nova_compute[250018]: 2026-01-20 14:35:03.363 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768919703.2782817, f7ef5c6e-053e-4a03-a68f-60f399ea2fc9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:35:03 compute-0 nova_compute[250018]: 2026-01-20 14:35:03.363 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: f7ef5c6e-053e-4a03-a68f-60f399ea2fc9] VM Started (Lifecycle Event)
Jan 20 14:35:03 compute-0 nova_compute[250018]: 2026-01-20 14:35:03.396 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: f7ef5c6e-053e-4a03-a68f-60f399ea2fc9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:35:03 compute-0 nova_compute[250018]: 2026-01-20 14:35:03.399 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: f7ef5c6e-053e-4a03-a68f-60f399ea2fc9] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:35:03 compute-0 nova_compute[250018]: 2026-01-20 14:35:03.409 250022 INFO nova.compute.manager [None req-1eb06bd1-4609-4194-8cbf-d078c08f3866 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] [instance: f7ef5c6e-053e-4a03-a68f-60f399ea2fc9] Took 3.46 seconds to spawn the instance on the hypervisor.
Jan 20 14:35:03 compute-0 nova_compute[250018]: 2026-01-20 14:35:03.410 250022 DEBUG nova.compute.manager [None req-1eb06bd1-4609-4194-8cbf-d078c08f3866 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] [instance: f7ef5c6e-053e-4a03-a68f-60f399ea2fc9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:35:03 compute-0 nova_compute[250018]: 2026-01-20 14:35:03.421 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: f7ef5c6e-053e-4a03-a68f-60f399ea2fc9] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:35:03 compute-0 ceph-mon[74360]: osdmap e178: 3 total, 3 up, 3 in
Jan 20 14:35:03 compute-0 nova_compute[250018]: 2026-01-20 14:35:03.474 250022 INFO nova.compute.manager [None req-1eb06bd1-4609-4194-8cbf-d078c08f3866 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] [instance: f7ef5c6e-053e-4a03-a68f-60f399ea2fc9] Took 4.47 seconds to build instance.
Jan 20 14:35:03 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1347: 321 pgs: 321 active+clean; 429 MiB data, 678 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 6.6 MiB/s wr, 177 op/s
Jan 20 14:35:03 compute-0 nova_compute[250018]: 2026-01-20 14:35:03.492 250022 DEBUG oslo_concurrency.lockutils [None req-1eb06bd1-4609-4194-8cbf-d078c08f3866 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] Lock "f7ef5c6e-053e-4a03-a68f-60f399ea2fc9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.568s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:35:04 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:35:04 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:35:04 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:35:04.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:35:04 compute-0 ceph-mon[74360]: pgmap v1347: 321 pgs: 321 active+clean; 429 MiB data, 678 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 6.6 MiB/s wr, 177 op/s
Jan 20 14:35:05 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:35:05 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:35:05 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:35:05.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:35:05 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1348: 321 pgs: 321 active+clean; 490 MiB data, 706 MiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 10 MiB/s wr, 327 op/s
Jan 20 14:35:06 compute-0 nova_compute[250018]: 2026-01-20 14:35:06.126 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:35:06 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:35:06 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:35:06 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:35:06.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:35:06 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e178 do_prune osdmap full prune enabled
Jan 20 14:35:06 compute-0 ceph-mon[74360]: pgmap v1348: 321 pgs: 321 active+clean; 490 MiB data, 706 MiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 10 MiB/s wr, 327 op/s
Jan 20 14:35:06 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e179 e179: 3 total, 3 up, 3 in
Jan 20 14:35:06 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e179: 3 total, 3 up, 3 in
Jan 20 14:35:07 compute-0 nova_compute[250018]: 2026-01-20 14:35:07.210 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:35:07 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:35:07.210 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=17, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=16) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:35:07 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:35:07.211 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 14:35:07 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:35:07 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:35:07 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:35:07.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:35:07 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:35:07 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1350: 321 pgs: 321 active+clean; 497 MiB data, 715 MiB used, 20 GiB / 21 GiB avail; 6.6 MiB/s rd, 10 MiB/s wr, 466 op/s
Jan 20 14:35:07 compute-0 nova_compute[250018]: 2026-01-20 14:35:07.550 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:35:07 compute-0 ceph-mon[74360]: osdmap e179: 3 total, 3 up, 3 in
Jan 20 14:35:07 compute-0 ceph-mon[74360]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 20 14:35:07 compute-0 ceph-mon[74360]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.0 total, 600.0 interval
                                           Cumulative writes: 7014 writes, 30K keys, 7013 commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s
                                           Cumulative WAL: 7014 writes, 7013 syncs, 1.00 writes per sync, written: 0.04 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1718 writes, 7369 keys, 1718 commit groups, 1.0 writes per commit group, ingest: 11.19 MB, 0.02 MB/s
                                           Interval WAL: 1718 writes, 1718 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     36.5      1.00              0.14        17    0.059       0      0       0.0       0.0
                                             L6      1/0    8.06 MB   0.0      0.2     0.0      0.1       0.1      0.0       0.0   3.7    123.4    101.8      1.31              0.43        16    0.082     80K   8891       0.0       0.0
                                            Sum      1/0    8.06 MB   0.0      0.2     0.0      0.1       0.2      0.0       0.0   4.7     70.1     73.6      2.31              0.57        33    0.070     80K   8891       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.9    119.2    120.6      0.39              0.15         8    0.049     23K   2578       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.2     0.0      0.1       0.1      0.0       0.0   0.0    123.4    101.8      1.31              0.43        16    0.082     80K   8891       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     36.6      0.99              0.14        16    0.062       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     14.7      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 2400.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.036, interval 0.009
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.17 GB write, 0.07 MB/s write, 0.16 GB read, 0.07 MB/s read, 2.3 seconds
                                           Interval compaction: 0.05 GB write, 0.08 MB/s write, 0.05 GB read, 0.08 MB/s read, 0.4 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5576af6451f0#2 capacity: 304.00 MB usage: 17.70 MB table_size: 0 occupancy: 18446744073709551615 collections: 5 last_copies: 0 last_secs: 0.000194 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1033,17.07 MB,5.61482%) FilterBlock(34,231.55 KB,0.0743816%) IndexBlock(34,410.69 KB,0.131928%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 20 14:35:08 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:35:08 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:35:08 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:35:08.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:35:08 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:35:08.213 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=367c1a2c-b16a-4828-ab5a-626bb50023b4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '17'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:35:08 compute-0 podman[280910]: 2026-01-20 14:35:08.511639952 +0000 UTC m=+0.081138633 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 20 14:35:08 compute-0 podman[280909]: 2026-01-20 14:35:08.568073211 +0000 UTC m=+0.137568822 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, 
io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 20 14:35:08 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e179 do_prune osdmap full prune enabled
Jan 20 14:35:08 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e180 e180: 3 total, 3 up, 3 in
Jan 20 14:35:08 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e180: 3 total, 3 up, 3 in
Jan 20 14:35:08 compute-0 ceph-mon[74360]: pgmap v1350: 321 pgs: 321 active+clean; 497 MiB data, 715 MiB used, 20 GiB / 21 GiB avail; 6.6 MiB/s rd, 10 MiB/s wr, 466 op/s
Jan 20 14:35:09 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:35:09 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:35:09 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:35:09.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:35:09 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1352: 321 pgs: 2 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 314 active+clean; 507 MiB data, 716 MiB used, 20 GiB / 21 GiB avail; 10 MiB/s rd, 7.0 MiB/s wr, 609 op/s
Jan 20 14:35:09 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e180 do_prune osdmap full prune enabled
Jan 20 14:35:09 compute-0 ceph-mon[74360]: osdmap e180: 3 total, 3 up, 3 in
Jan 20 14:35:09 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e181 e181: 3 total, 3 up, 3 in
Jan 20 14:35:09 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e181: 3 total, 3 up, 3 in
Jan 20 14:35:10 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:35:10 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:35:10 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:35:10.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:35:10 compute-0 ceph-mon[74360]: pgmap v1352: 321 pgs: 2 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 314 active+clean; 507 MiB data, 716 MiB used, 20 GiB / 21 GiB avail; 10 MiB/s rd, 7.0 MiB/s wr, 609 op/s
Jan 20 14:35:10 compute-0 ceph-mon[74360]: osdmap e181: 3 total, 3 up, 3 in
Jan 20 14:35:11 compute-0 nova_compute[250018]: 2026-01-20 14:35:11.128 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:35:11 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:35:11 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:35:11 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:35:11.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:35:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 14:35:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:35:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 20 14:35:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:35:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.011150588009491903 of space, bias 1.0, pg target 3.345176402847571 quantized to 32 (current 32)
Jan 20 14:35:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:35:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.398084170854272e-05 quantized to 32 (current 32)
Jan 20 14:35:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:35:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:35:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:35:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.002456946189279732 of space, bias 1.0, pg target 0.7297130182160804 quantized to 32 (current 32)
Jan 20 14:35:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:35:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001727386934673367 quantized to 16 (current 32)
Jan 20 14:35:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:35:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:35:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:35:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021592336683417087 quantized to 32 (current 32)
Jan 20 14:35:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:35:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018353486180904522 quantized to 32 (current 32)
Jan 20 14:35:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:35:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:35:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:35:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043184673366834174 quantized to 32 (current 32)
Jan 20 14:35:11 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1354: 321 pgs: 2 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 314 active+clean; 473 MiB data, 703 MiB used, 20 GiB / 21 GiB avail; 12 MiB/s rd, 5.9 MiB/s wr, 533 op/s
Jan 20 14:35:11 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/1346870109' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:35:11 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/2312094919' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:35:12 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:35:12 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:35:12 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:35:12.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:35:12 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e181 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:35:12 compute-0 nova_compute[250018]: 2026-01-20 14:35:12.552 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:35:12 compute-0 ceph-mon[74360]: pgmap v1354: 321 pgs: 2 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 314 active+clean; 473 MiB data, 703 MiB used, 20 GiB / 21 GiB avail; 12 MiB/s rd, 5.9 MiB/s wr, 533 op/s
Jan 20 14:35:13 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:35:13 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:35:13 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:35:13.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:35:13 compute-0 nova_compute[250018]: 2026-01-20 14:35:13.478 250022 DEBUG nova.compute.manager [None req-0957d8ea-23f4-45e0-968c-c25182905437 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] [instance: f7ef5c6e-053e-4a03-a68f-60f399ea2fc9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:35:13 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1355: 321 pgs: 2 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 314 active+clean; 473 MiB data, 703 MiB used, 20 GiB / 21 GiB avail; 6.2 MiB/s rd, 3.9 MiB/s wr, 273 op/s
Jan 20 14:35:13 compute-0 nova_compute[250018]: 2026-01-20 14:35:13.527 250022 INFO nova.compute.manager [None req-0957d8ea-23f4-45e0-968c-c25182905437 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] [instance: f7ef5c6e-053e-4a03-a68f-60f399ea2fc9] instance snapshotting
Jan 20 14:35:13 compute-0 nova_compute[250018]: 2026-01-20 14:35:13.826 250022 INFO nova.virt.libvirt.driver [None req-0957d8ea-23f4-45e0-968c-c25182905437 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] [instance: f7ef5c6e-053e-4a03-a68f-60f399ea2fc9] Beginning live snapshot process
Jan 20 14:35:13 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/4216979389' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:35:13 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/52686846' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:35:13 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/52686846' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:35:14 compute-0 nova_compute[250018]: 2026-01-20 14:35:14.017 250022 DEBUG nova.virt.libvirt.imagebackend [None req-0957d8ea-23f4-45e0-968c-c25182905437 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] No parent info for a32b3e07-16d8-46fd-9a7b-c242c432fcf9; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Jan 20 14:35:14 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:35:14 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:35:14 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:35:14.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:35:14 compute-0 nova_compute[250018]: 2026-01-20 14:35:14.243 250022 DEBUG nova.storage.rbd_utils [None req-0957d8ea-23f4-45e0-968c-c25182905437 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] creating snapshot(1eebb08f347d40bf802435692d409847) on rbd image(f7ef5c6e-053e-4a03-a68f-60f399ea2fc9_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 20 14:35:15 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e181 do_prune osdmap full prune enabled
Jan 20 14:35:15 compute-0 ceph-mon[74360]: pgmap v1355: 321 pgs: 2 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 314 active+clean; 473 MiB data, 703 MiB used, 20 GiB / 21 GiB avail; 6.2 MiB/s rd, 3.9 MiB/s wr, 273 op/s
Jan 20 14:35:15 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e182 e182: 3 total, 3 up, 3 in
Jan 20 14:35:15 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e182: 3 total, 3 up, 3 in
Jan 20 14:35:15 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:35:15 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:35:15 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:35:15.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:35:15 compute-0 nova_compute[250018]: 2026-01-20 14:35:15.291 250022 DEBUG nova.storage.rbd_utils [None req-0957d8ea-23f4-45e0-968c-c25182905437 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] cloning vms/f7ef5c6e-053e-4a03-a68f-60f399ea2fc9_disk@1eebb08f347d40bf802435692d409847 to images/c6ead139-3a05-4d56-8181-f09e7d084275 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Jan 20 14:35:15 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1357: 321 pgs: 321 active+clean; 562 MiB data, 741 MiB used, 20 GiB / 21 GiB avail; 7.6 MiB/s rd, 11 MiB/s wr, 478 op/s
Jan 20 14:35:15 compute-0 nova_compute[250018]: 2026-01-20 14:35:15.529 250022 DEBUG nova.storage.rbd_utils [None req-0957d8ea-23f4-45e0-968c-c25182905437 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] flattening images/c6ead139-3a05-4d56-8181-f09e7d084275 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Jan 20 14:35:16 compute-0 nova_compute[250018]: 2026-01-20 14:35:16.130 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:35:16 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:35:16 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:35:16 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:35:16.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:35:16 compute-0 ceph-mon[74360]: osdmap e182: 3 total, 3 up, 3 in
Jan 20 14:35:16 compute-0 ceph-mon[74360]: pgmap v1357: 321 pgs: 321 active+clean; 562 MiB data, 741 MiB used, 20 GiB / 21 GiB avail; 7.6 MiB/s rd, 11 MiB/s wr, 478 op/s
Jan 20 14:35:16 compute-0 nova_compute[250018]: 2026-01-20 14:35:16.549 250022 DEBUG nova.storage.rbd_utils [None req-0957d8ea-23f4-45e0-968c-c25182905437 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] removing snapshot(1eebb08f347d40bf802435692d409847) on rbd image(f7ef5c6e-053e-4a03-a68f-60f399ea2fc9_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Jan 20 14:35:17 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:35:17 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:35:17 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:35:17.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:35:17 compute-0 ceph-osd[84815]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Jan 20 14:35:17 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:35:17 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e182 do_prune osdmap full prune enabled
Jan 20 14:35:17 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e183 e183: 3 total, 3 up, 3 in
Jan 20 14:35:17 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e183: 3 total, 3 up, 3 in
Jan 20 14:35:17 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1359: 321 pgs: 321 active+clean; 631 MiB data, 782 MiB used, 20 GiB / 21 GiB avail; 6.9 MiB/s rd, 14 MiB/s wr, 352 op/s
Jan 20 14:35:17 compute-0 nova_compute[250018]: 2026-01-20 14:35:17.514 250022 DEBUG nova.storage.rbd_utils [None req-0957d8ea-23f4-45e0-968c-c25182905437 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] creating snapshot(snap) on rbd image(c6ead139-3a05-4d56-8181-f09e7d084275) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 20 14:35:17 compute-0 nova_compute[250018]: 2026-01-20 14:35:17.553 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:35:18 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:35:18 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:35:18 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:35:18.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:35:18 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e183 do_prune osdmap full prune enabled
Jan 20 14:35:18 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e184 e184: 3 total, 3 up, 3 in
Jan 20 14:35:18 compute-0 ceph-mon[74360]: osdmap e183: 3 total, 3 up, 3 in
Jan 20 14:35:18 compute-0 ceph-mon[74360]: pgmap v1359: 321 pgs: 321 active+clean; 631 MiB data, 782 MiB used, 20 GiB / 21 GiB avail; 6.9 MiB/s rd, 14 MiB/s wr, 352 op/s
Jan 20 14:35:18 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/3187377245' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:35:18 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e184: 3 total, 3 up, 3 in
Jan 20 14:35:18 compute-0 nova_compute[250018]: 2026-01-20 14:35:18.534 250022 DEBUG oslo_concurrency.lockutils [None req-fead267c-3418-41c0-bb84-ab0e119c4306 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Acquiring lock "280e5549-64d8-4573-a271-a210145a151d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:35:18 compute-0 nova_compute[250018]: 2026-01-20 14:35:18.534 250022 DEBUG oslo_concurrency.lockutils [None req-fead267c-3418-41c0-bb84-ab0e119c4306 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Lock "280e5549-64d8-4573-a271-a210145a151d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:35:18 compute-0 nova_compute[250018]: 2026-01-20 14:35:18.534 250022 DEBUG oslo_concurrency.lockutils [None req-fead267c-3418-41c0-bb84-ab0e119c4306 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Acquiring lock "280e5549-64d8-4573-a271-a210145a151d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:35:18 compute-0 nova_compute[250018]: 2026-01-20 14:35:18.535 250022 DEBUG oslo_concurrency.lockutils [None req-fead267c-3418-41c0-bb84-ab0e119c4306 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Lock "280e5549-64d8-4573-a271-a210145a151d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:35:18 compute-0 nova_compute[250018]: 2026-01-20 14:35:18.535 250022 DEBUG oslo_concurrency.lockutils [None req-fead267c-3418-41c0-bb84-ab0e119c4306 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Lock "280e5549-64d8-4573-a271-a210145a151d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:35:18 compute-0 nova_compute[250018]: 2026-01-20 14:35:18.536 250022 INFO nova.compute.manager [None req-fead267c-3418-41c0-bb84-ab0e119c4306 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: 280e5549-64d8-4573-a271-a210145a151d] Terminating instance
Jan 20 14:35:18 compute-0 nova_compute[250018]: 2026-01-20 14:35:18.537 250022 DEBUG nova.compute.manager [None req-fead267c-3418-41c0-bb84-ab0e119c4306 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: 280e5549-64d8-4573-a271-a210145a151d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 14:35:18 compute-0 kernel: tap06dd2b18-21 (unregistering): left promiscuous mode
Jan 20 14:35:18 compute-0 NetworkManager[48960]: <info>  [1768919718.6021] device (tap06dd2b18-21): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 14:35:18 compute-0 ovn_controller[148666]: 2026-01-20T14:35:18Z|00130|binding|INFO|Releasing lport 06dd2b18-21ca-4ffc-8d34-c78b1c568f68 from this chassis (sb_readonly=0)
Jan 20 14:35:18 compute-0 ovn_controller[148666]: 2026-01-20T14:35:18Z|00131|binding|INFO|Setting lport 06dd2b18-21ca-4ffc-8d34-c78b1c568f68 down in Southbound
Jan 20 14:35:18 compute-0 nova_compute[250018]: 2026-01-20 14:35:18.608 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:35:18 compute-0 ovn_controller[148666]: 2026-01-20T14:35:18Z|00132|binding|INFO|Removing iface tap06dd2b18-21 ovn-installed in OVS
Jan 20 14:35:18 compute-0 nova_compute[250018]: 2026-01-20 14:35:18.610 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:35:18 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:35:18.615 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:90:6f:bd 10.100.0.5'], port_security=['fa:16:3e:90:6f:bd 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '280e5549-64d8-4573-a271-a210145a151d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-33c9a20a-d976-42a8-b8bf-f83ddfc97c9a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a841e7a1434c488390475174e10bc161', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0bbdea05-fba7-47c7-ba4e-5dac58212a25', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c43dea88-ea55-4069-a4be-2c30a432a754, chassis=[], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=06dd2b18-21ca-4ffc-8d34-c78b1c568f68) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:35:18 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:35:18.617 160071 INFO neutron.agent.ovn.metadata.agent [-] Port 06dd2b18-21ca-4ffc-8d34-c78b1c568f68 in datapath 33c9a20a-d976-42a8-b8bf-f83ddfc97c9a unbound from our chassis
Jan 20 14:35:18 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:35:18.618 160071 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 33c9a20a-d976-42a8-b8bf-f83ddfc97c9a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 14:35:18 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:35:18.620 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[0a02548f-d758-4815-8c62-7fc1c8cbb229]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:35:18 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:35:18.622 160071 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-33c9a20a-d976-42a8-b8bf-f83ddfc97c9a namespace which is not needed anymore
Jan 20 14:35:18 compute-0 nova_compute[250018]: 2026-01-20 14:35:18.629 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:35:18 compute-0 systemd[1]: machine-qemu\x2d20\x2dinstance\x2d0000002b.scope: Deactivated successfully.
Jan 20 14:35:18 compute-0 systemd[1]: machine-qemu\x2d20\x2dinstance\x2d0000002b.scope: Consumed 15.224s CPU time.
Jan 20 14:35:18 compute-0 systemd-machined[216401]: Machine qemu-20-instance-0000002b terminated.
Jan 20 14:35:18 compute-0 nova_compute[250018]: 2026-01-20 14:35:18.755 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:35:18 compute-0 nova_compute[250018]: 2026-01-20 14:35:18.760 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:35:18 compute-0 nova_compute[250018]: 2026-01-20 14:35:18.774 250022 INFO nova.virt.libvirt.driver [-] [instance: 280e5549-64d8-4573-a271-a210145a151d] Instance destroyed successfully.
Jan 20 14:35:18 compute-0 nova_compute[250018]: 2026-01-20 14:35:18.776 250022 DEBUG nova.objects.instance [None req-fead267c-3418-41c0-bb84-ab0e119c4306 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Lazy-loading 'resources' on Instance uuid 280e5549-64d8-4573-a271-a210145a151d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:35:18 compute-0 neutron-haproxy-ovnmeta-33c9a20a-d976-42a8-b8bf-f83ddfc97c9a[279472]: [NOTICE]   (279514) : haproxy version is 2.8.14-c23fe91
Jan 20 14:35:18 compute-0 neutron-haproxy-ovnmeta-33c9a20a-d976-42a8-b8bf-f83ddfc97c9a[279472]: [NOTICE]   (279514) : path to executable is /usr/sbin/haproxy
Jan 20 14:35:18 compute-0 neutron-haproxy-ovnmeta-33c9a20a-d976-42a8-b8bf-f83ddfc97c9a[279472]: [WARNING]  (279514) : Exiting Master process...
Jan 20 14:35:18 compute-0 neutron-haproxy-ovnmeta-33c9a20a-d976-42a8-b8bf-f83ddfc97c9a[279472]: [ALERT]    (279514) : Current worker (279517) exited with code 143 (Terminated)
Jan 20 14:35:18 compute-0 neutron-haproxy-ovnmeta-33c9a20a-d976-42a8-b8bf-f83ddfc97c9a[279472]: [WARNING]  (279514) : All workers exited. Exiting... (0)
Jan 20 14:35:18 compute-0 systemd[1]: libpod-b8e57c5e923cd0a4cd41293d0307cb9f35c40982a2498209277dd0c953b38c4f.scope: Deactivated successfully.
Jan 20 14:35:18 compute-0 podman[281125]: 2026-01-20 14:35:18.796628944 +0000 UTC m=+0.076842287 container died b8e57c5e923cd0a4cd41293d0307cb9f35c40982a2498209277dd0c953b38c4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-33c9a20a-d976-42a8-b8bf-f83ddfc97c9a, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 20 14:35:18 compute-0 nova_compute[250018]: 2026-01-20 14:35:18.826 250022 DEBUG nova.virt.libvirt.vif [None req-fead267c-3418-41c0-bb84-ab0e119c4306 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:34:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-328542049',display_name='tempest-ServersAdminTestJSON-server-328542049',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-328542049',id=43,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:34:29Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a841e7a1434c488390475174e10bc161',ramdisk_id='',reservation_id='r-a4ohfxz5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk=
'1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-1261404595',owner_user_name='tempest-ServersAdminTestJSON-1261404595-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T14:34:29Z,user_data=None,user_id='f51c395107c84dbd9067113b84ff01dd',uuid=280e5549-64d8-4573-a271-a210145a151d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "06dd2b18-21ca-4ffc-8d34-c78b1c568f68", "address": "fa:16:3e:90:6f:bd", "network": {"id": "33c9a20a-d976-42a8-b8bf-f83ddfc97c9a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-202342440-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a841e7a1434c488390475174e10bc161", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06dd2b18-21", "ovs_interfaceid": "06dd2b18-21ca-4ffc-8d34-c78b1c568f68", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 14:35:18 compute-0 nova_compute[250018]: 2026-01-20 14:35:18.827 250022 DEBUG nova.network.os_vif_util [None req-fead267c-3418-41c0-bb84-ab0e119c4306 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Converting VIF {"id": "06dd2b18-21ca-4ffc-8d34-c78b1c568f68", "address": "fa:16:3e:90:6f:bd", "network": {"id": "33c9a20a-d976-42a8-b8bf-f83ddfc97c9a", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-202342440-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a841e7a1434c488390475174e10bc161", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06dd2b18-21", "ovs_interfaceid": "06dd2b18-21ca-4ffc-8d34-c78b1c568f68", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:35:18 compute-0 nova_compute[250018]: 2026-01-20 14:35:18.827 250022 DEBUG nova.network.os_vif_util [None req-fead267c-3418-41c0-bb84-ab0e119c4306 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:90:6f:bd,bridge_name='br-int',has_traffic_filtering=True,id=06dd2b18-21ca-4ffc-8d34-c78b1c568f68,network=Network(33c9a20a-d976-42a8-b8bf-f83ddfc97c9a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap06dd2b18-21') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:35:18 compute-0 nova_compute[250018]: 2026-01-20 14:35:18.828 250022 DEBUG os_vif [None req-fead267c-3418-41c0-bb84-ab0e119c4306 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:90:6f:bd,bridge_name='br-int',has_traffic_filtering=True,id=06dd2b18-21ca-4ffc-8d34-c78b1c568f68,network=Network(33c9a20a-d976-42a8-b8bf-f83ddfc97c9a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap06dd2b18-21') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 14:35:18 compute-0 nova_compute[250018]: 2026-01-20 14:35:18.829 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:35:18 compute-0 nova_compute[250018]: 2026-01-20 14:35:18.830 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap06dd2b18-21, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:35:18 compute-0 nova_compute[250018]: 2026-01-20 14:35:18.831 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:35:18 compute-0 nova_compute[250018]: 2026-01-20 14:35:18.832 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:35:18 compute-0 nova_compute[250018]: 2026-01-20 14:35:18.835 250022 INFO os_vif [None req-fead267c-3418-41c0-bb84-ab0e119c4306 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:90:6f:bd,bridge_name='br-int',has_traffic_filtering=True,id=06dd2b18-21ca-4ffc-8d34-c78b1c568f68,network=Network(33c9a20a-d976-42a8-b8bf-f83ddfc97c9a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap06dd2b18-21')
Jan 20 14:35:18 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b8e57c5e923cd0a4cd41293d0307cb9f35c40982a2498209277dd0c953b38c4f-userdata-shm.mount: Deactivated successfully.
Jan 20 14:35:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-1ab8394c3905ecfa04dc9ae2ef607e2a0b3adb6c8a312b398996d8cf8955ad33-merged.mount: Deactivated successfully.
Jan 20 14:35:18 compute-0 nova_compute[250018]: 2026-01-20 14:35:18.869 250022 DEBUG nova.compute.manager [req-79c5a8c9-aa48-4cd6-b333-cd497b1825ed req-0fe21314-c5d7-410d-90eb-80a7a95d782e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 280e5549-64d8-4573-a271-a210145a151d] Received event network-vif-unplugged-06dd2b18-21ca-4ffc-8d34-c78b1c568f68 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:35:18 compute-0 nova_compute[250018]: 2026-01-20 14:35:18.870 250022 DEBUG oslo_concurrency.lockutils [req-79c5a8c9-aa48-4cd6-b333-cd497b1825ed req-0fe21314-c5d7-410d-90eb-80a7a95d782e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "280e5549-64d8-4573-a271-a210145a151d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:35:18 compute-0 nova_compute[250018]: 2026-01-20 14:35:18.870 250022 DEBUG oslo_concurrency.lockutils [req-79c5a8c9-aa48-4cd6-b333-cd497b1825ed req-0fe21314-c5d7-410d-90eb-80a7a95d782e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "280e5549-64d8-4573-a271-a210145a151d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:35:18 compute-0 nova_compute[250018]: 2026-01-20 14:35:18.870 250022 DEBUG oslo_concurrency.lockutils [req-79c5a8c9-aa48-4cd6-b333-cd497b1825ed req-0fe21314-c5d7-410d-90eb-80a7a95d782e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "280e5549-64d8-4573-a271-a210145a151d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:35:18 compute-0 nova_compute[250018]: 2026-01-20 14:35:18.870 250022 DEBUG nova.compute.manager [req-79c5a8c9-aa48-4cd6-b333-cd497b1825ed req-0fe21314-c5d7-410d-90eb-80a7a95d782e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 280e5549-64d8-4573-a271-a210145a151d] No waiting events found dispatching network-vif-unplugged-06dd2b18-21ca-4ffc-8d34-c78b1c568f68 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:35:18 compute-0 nova_compute[250018]: 2026-01-20 14:35:18.870 250022 DEBUG nova.compute.manager [req-79c5a8c9-aa48-4cd6-b333-cd497b1825ed req-0fe21314-c5d7-410d-90eb-80a7a95d782e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 280e5549-64d8-4573-a271-a210145a151d] Received event network-vif-unplugged-06dd2b18-21ca-4ffc-8d34-c78b1c568f68 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 20 14:35:18 compute-0 podman[281125]: 2026-01-20 14:35:18.87979366 +0000 UTC m=+0.160006993 container cleanup b8e57c5e923cd0a4cd41293d0307cb9f35c40982a2498209277dd0c953b38c4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-33c9a20a-d976-42a8-b8bf-f83ddfc97c9a, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 14:35:18 compute-0 systemd[1]: libpod-conmon-b8e57c5e923cd0a4cd41293d0307cb9f35c40982a2498209277dd0c953b38c4f.scope: Deactivated successfully.
Jan 20 14:35:18 compute-0 podman[281184]: 2026-01-20 14:35:18.947044739 +0000 UTC m=+0.044760345 container remove b8e57c5e923cd0a4cd41293d0307cb9f35c40982a2498209277dd0c953b38c4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-33c9a20a-d976-42a8-b8bf-f83ddfc97c9a, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, tcib_managed=true)
Jan 20 14:35:18 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:35:18.952 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[8a5a3832-a447-42da-a2b6-a3aea492cc91]: (4, ('Tue Jan 20 02:35:18 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-33c9a20a-d976-42a8-b8bf-f83ddfc97c9a (b8e57c5e923cd0a4cd41293d0307cb9f35c40982a2498209277dd0c953b38c4f)\nb8e57c5e923cd0a4cd41293d0307cb9f35c40982a2498209277dd0c953b38c4f\nTue Jan 20 02:35:18 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-33c9a20a-d976-42a8-b8bf-f83ddfc97c9a (b8e57c5e923cd0a4cd41293d0307cb9f35c40982a2498209277dd0c953b38c4f)\nb8e57c5e923cd0a4cd41293d0307cb9f35c40982a2498209277dd0c953b38c4f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:35:18 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:35:18.954 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[55e09f25-82d0-4967-85bd-ddb1411047e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:35:18 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:35:18.955 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap33c9a20a-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:35:18 compute-0 nova_compute[250018]: 2026-01-20 14:35:18.957 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:35:18 compute-0 kernel: tap33c9a20a-d0: left promiscuous mode
Jan 20 14:35:18 compute-0 nova_compute[250018]: 2026-01-20 14:35:18.973 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:35:18 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:35:18.977 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[82c4ffb0-7d3e-4602-9dc2-c70889df8a56]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:35:18 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:35:18.995 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[f51b36eb-d3bc-46c0-b072-a28e2c114d45]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:35:18 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:35:18.997 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[907ff8f5-269a-421c-abe8-2005e2ab2100]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:35:19 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:35:19.014 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[7fce9097-c2d5-44de-9282-58a6e24ad61d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 560963, 'reachable_time': 38257, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 281199, 'error': None, 'target': 'ovnmeta-33c9a20a-d976-42a8-b8bf-f83ddfc97c9a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:35:19 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:35:19.017 160517 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-33c9a20a-d976-42a8-b8bf-f83ddfc97c9a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 14:35:19 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:35:19.017 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[e1dffb08-b62e-4dfe-95ff-83e77ce259d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:35:19 compute-0 systemd[1]: run-netns-ovnmeta\x2d33c9a20a\x2dd976\x2d42a8\x2db8bf\x2df83ddfc97c9a.mount: Deactivated successfully.
Jan 20 14:35:19 compute-0 nova_compute[250018]: 2026-01-20 14:35:19.069 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:35:19 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:35:19 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:35:19 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:35:19.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:35:19 compute-0 nova_compute[250018]: 2026-01-20 14:35:19.425 250022 INFO nova.virt.libvirt.driver [None req-fead267c-3418-41c0-bb84-ab0e119c4306 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: 280e5549-64d8-4573-a271-a210145a151d] Deleting instance files /var/lib/nova/instances/280e5549-64d8-4573-a271-a210145a151d_del
Jan 20 14:35:19 compute-0 nova_compute[250018]: 2026-01-20 14:35:19.426 250022 INFO nova.virt.libvirt.driver [None req-fead267c-3418-41c0-bb84-ab0e119c4306 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: 280e5549-64d8-4573-a271-a210145a151d] Deletion of /var/lib/nova/instances/280e5549-64d8-4573-a271-a210145a151d_del complete
Jan 20 14:35:19 compute-0 nova_compute[250018]: 2026-01-20 14:35:19.477 250022 INFO nova.compute.manager [None req-fead267c-3418-41c0-bb84-ab0e119c4306 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] [instance: 280e5549-64d8-4573-a271-a210145a151d] Took 0.94 seconds to destroy the instance on the hypervisor.
Jan 20 14:35:19 compute-0 nova_compute[250018]: 2026-01-20 14:35:19.478 250022 DEBUG oslo.service.loopingcall [None req-fead267c-3418-41c0-bb84-ab0e119c4306 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 14:35:19 compute-0 nova_compute[250018]: 2026-01-20 14:35:19.478 250022 DEBUG nova.compute.manager [-] [instance: 280e5549-64d8-4573-a271-a210145a151d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 14:35:19 compute-0 nova_compute[250018]: 2026-01-20 14:35:19.478 250022 DEBUG nova.network.neutron [-] [instance: 280e5549-64d8-4573-a271-a210145a151d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 14:35:19 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1361: 321 pgs: 321 active+clean; 662 MiB data, 835 MiB used, 20 GiB / 21 GiB avail; 8.2 MiB/s rd, 18 MiB/s wr, 562 op/s
Jan 20 14:35:19 compute-0 ceph-mon[74360]: osdmap e184: 3 total, 3 up, 3 in
Jan 20 14:35:19 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/1954847869' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:35:20 compute-0 nova_compute[250018]: 2026-01-20 14:35:20.072 250022 DEBUG nova.network.neutron [-] [instance: 280e5549-64d8-4573-a271-a210145a151d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:35:20 compute-0 nova_compute[250018]: 2026-01-20 14:35:20.093 250022 INFO nova.compute.manager [-] [instance: 280e5549-64d8-4573-a271-a210145a151d] Took 0.62 seconds to deallocate network for instance.
Jan 20 14:35:20 compute-0 nova_compute[250018]: 2026-01-20 14:35:20.144 250022 DEBUG nova.compute.manager [req-3d475b35-6b61-4d71-9d46-4198de364098 req-d49e9411-45fb-44b2-afe1-e2b592fb1014 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 280e5549-64d8-4573-a271-a210145a151d] Received event network-vif-deleted-06dd2b18-21ca-4ffc-8d34-c78b1c568f68 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:35:20 compute-0 nova_compute[250018]: 2026-01-20 14:35:20.146 250022 DEBUG oslo_concurrency.lockutils [None req-fead267c-3418-41c0-bb84-ab0e119c4306 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:35:20 compute-0 nova_compute[250018]: 2026-01-20 14:35:20.146 250022 DEBUG oslo_concurrency.lockutils [None req-fead267c-3418-41c0-bb84-ab0e119c4306 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:35:20 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:35:20 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:35:20 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:35:20.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:35:20 compute-0 nova_compute[250018]: 2026-01-20 14:35:20.278 250022 INFO nova.virt.libvirt.driver [None req-0957d8ea-23f4-45e0-968c-c25182905437 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] [instance: f7ef5c6e-053e-4a03-a68f-60f399ea2fc9] Snapshot image upload complete
Jan 20 14:35:20 compute-0 nova_compute[250018]: 2026-01-20 14:35:20.278 250022 INFO nova.compute.manager [None req-0957d8ea-23f4-45e0-968c-c25182905437 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] [instance: f7ef5c6e-053e-4a03-a68f-60f399ea2fc9] Took 6.75 seconds to snapshot the instance on the hypervisor.
Jan 20 14:35:20 compute-0 nova_compute[250018]: 2026-01-20 14:35:20.331 250022 DEBUG oslo_concurrency.processutils [None req-fead267c-3418-41c0-bb84-ab0e119c4306 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:35:20 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:35:20 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2306401299' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:35:20 compute-0 ceph-mon[74360]: pgmap v1361: 321 pgs: 321 active+clean; 662 MiB data, 835 MiB used, 20 GiB / 21 GiB avail; 8.2 MiB/s rd, 18 MiB/s wr, 562 op/s
Jan 20 14:35:20 compute-0 nova_compute[250018]: 2026-01-20 14:35:20.803 250022 DEBUG oslo_concurrency.processutils [None req-fead267c-3418-41c0-bb84-ab0e119c4306 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:35:20 compute-0 nova_compute[250018]: 2026-01-20 14:35:20.810 250022 DEBUG nova.compute.provider_tree [None req-fead267c-3418-41c0-bb84-ab0e119c4306 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:35:20 compute-0 nova_compute[250018]: 2026-01-20 14:35:20.825 250022 DEBUG nova.scheduler.client.report [None req-fead267c-3418-41c0-bb84-ab0e119c4306 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:35:20 compute-0 nova_compute[250018]: 2026-01-20 14:35:20.853 250022 DEBUG oslo_concurrency.lockutils [None req-fead267c-3418-41c0-bb84-ab0e119c4306 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.706s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:35:20 compute-0 nova_compute[250018]: 2026-01-20 14:35:20.891 250022 INFO nova.scheduler.client.report [None req-fead267c-3418-41c0-bb84-ab0e119c4306 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Deleted allocations for instance 280e5549-64d8-4573-a271-a210145a151d
Jan 20 14:35:20 compute-0 nova_compute[250018]: 2026-01-20 14:35:20.964 250022 DEBUG nova.compute.manager [req-75557df4-6eb7-4a79-af06-01fabe25dcd1 req-ae15010e-ac7b-4cda-89a4-d4741d035844 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 280e5549-64d8-4573-a271-a210145a151d] Received event network-vif-plugged-06dd2b18-21ca-4ffc-8d34-c78b1c568f68 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:35:20 compute-0 nova_compute[250018]: 2026-01-20 14:35:20.964 250022 DEBUG oslo_concurrency.lockutils [req-75557df4-6eb7-4a79-af06-01fabe25dcd1 req-ae15010e-ac7b-4cda-89a4-d4741d035844 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "280e5549-64d8-4573-a271-a210145a151d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:35:20 compute-0 nova_compute[250018]: 2026-01-20 14:35:20.965 250022 DEBUG oslo_concurrency.lockutils [req-75557df4-6eb7-4a79-af06-01fabe25dcd1 req-ae15010e-ac7b-4cda-89a4-d4741d035844 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "280e5549-64d8-4573-a271-a210145a151d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:35:20 compute-0 nova_compute[250018]: 2026-01-20 14:35:20.965 250022 DEBUG oslo_concurrency.lockutils [req-75557df4-6eb7-4a79-af06-01fabe25dcd1 req-ae15010e-ac7b-4cda-89a4-d4741d035844 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "280e5549-64d8-4573-a271-a210145a151d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:35:20 compute-0 nova_compute[250018]: 2026-01-20 14:35:20.965 250022 DEBUG nova.compute.manager [req-75557df4-6eb7-4a79-af06-01fabe25dcd1 req-ae15010e-ac7b-4cda-89a4-d4741d035844 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 280e5549-64d8-4573-a271-a210145a151d] No waiting events found dispatching network-vif-plugged-06dd2b18-21ca-4ffc-8d34-c78b1c568f68 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:35:20 compute-0 nova_compute[250018]: 2026-01-20 14:35:20.965 250022 WARNING nova.compute.manager [req-75557df4-6eb7-4a79-af06-01fabe25dcd1 req-ae15010e-ac7b-4cda-89a4-d4741d035844 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 280e5549-64d8-4573-a271-a210145a151d] Received unexpected event network-vif-plugged-06dd2b18-21ca-4ffc-8d34-c78b1c568f68 for instance with vm_state deleted and task_state None.
Jan 20 14:35:21 compute-0 nova_compute[250018]: 2026-01-20 14:35:21.000 250022 DEBUG oslo_concurrency.lockutils [None req-fead267c-3418-41c0-bb84-ab0e119c4306 f51c395107c84dbd9067113b84ff01dd a841e7a1434c488390475174e10bc161 - - default default] Lock "280e5549-64d8-4573-a271-a210145a151d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.466s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:35:21 compute-0 nova_compute[250018]: 2026-01-20 14:35:21.132 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:35:21 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:35:21 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:35:21 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:35:21.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:35:21 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1362: 321 pgs: 321 active+clean; 650 MiB data, 847 MiB used, 20 GiB / 21 GiB avail; 7.5 MiB/s rd, 14 MiB/s wr, 583 op/s
Jan 20 14:35:21 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/2306401299' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:35:22 compute-0 nova_compute[250018]: 2026-01-20 14:35:22.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:35:22 compute-0 nova_compute[250018]: 2026-01-20 14:35:22.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:35:22 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:35:22 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:35:22 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:35:22.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:35:22 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e184 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:35:22 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e184 do_prune osdmap full prune enabled
Jan 20 14:35:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:35:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:35:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:35:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:35:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:35:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:35:22 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e185 e185: 3 total, 3 up, 3 in
Jan 20 14:35:22 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e185: 3 total, 3 up, 3 in
Jan 20 14:35:22 compute-0 sudo[281224]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:35:22 compute-0 sudo[281224]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:35:22 compute-0 sudo[281224]: pam_unix(sudo:session): session closed for user root
Jan 20 14:35:22 compute-0 sudo[281249]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:35:22 compute-0 sudo[281249]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:35:22 compute-0 sudo[281249]: pam_unix(sudo:session): session closed for user root
Jan 20 14:35:22 compute-0 ceph-mon[74360]: pgmap v1362: 321 pgs: 321 active+clean; 650 MiB data, 847 MiB used, 20 GiB / 21 GiB avail; 7.5 MiB/s rd, 14 MiB/s wr, 583 op/s
Jan 20 14:35:22 compute-0 ceph-mon[74360]: osdmap e185: 3 total, 3 up, 3 in
Jan 20 14:35:23 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:35:23 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:35:23 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:35:23.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:35:23 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1364: 321 pgs: 321 active+clean; 650 MiB data, 847 MiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 7.7 MiB/s wr, 503 op/s
Jan 20 14:35:23 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e185 do_prune osdmap full prune enabled
Jan 20 14:35:23 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e186 e186: 3 total, 3 up, 3 in
Jan 20 14:35:23 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e186: 3 total, 3 up, 3 in
Jan 20 14:35:23 compute-0 nova_compute[250018]: 2026-01-20 14:35:23.833 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:35:23 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/832753989' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:35:23 compute-0 ceph-mon[74360]: pgmap v1364: 321 pgs: 321 active+clean; 650 MiB data, 847 MiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 7.7 MiB/s wr, 503 op/s
Jan 20 14:35:23 compute-0 ceph-mon[74360]: osdmap e186: 3 total, 3 up, 3 in
Jan 20 14:35:23 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/3777007206' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:35:24 compute-0 nova_compute[250018]: 2026-01-20 14:35:24.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:35:24 compute-0 nova_compute[250018]: 2026-01-20 14:35:24.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:35:24 compute-0 nova_compute[250018]: 2026-01-20 14:35:24.075 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:35:24 compute-0 nova_compute[250018]: 2026-01-20 14:35:24.075 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:35:24 compute-0 nova_compute[250018]: 2026-01-20 14:35:24.076 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:35:24 compute-0 nova_compute[250018]: 2026-01-20 14:35:24.076 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 14:35:24 compute-0 nova_compute[250018]: 2026-01-20 14:35:24.076 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:35:24 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:35:24 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:35:24 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:35:24.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:35:24 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:35:24 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4037861475' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:35:24 compute-0 nova_compute[250018]: 2026-01-20 14:35:24.494 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.417s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:35:24 compute-0 nova_compute[250018]: 2026-01-20 14:35:24.564 250022 DEBUG nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] skipping disk for instance-0000002f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:35:24 compute-0 nova_compute[250018]: 2026-01-20 14:35:24.565 250022 DEBUG nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] skipping disk for instance-0000002f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:35:24 compute-0 nova_compute[250018]: 2026-01-20 14:35:24.712 250022 WARNING nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:35:24 compute-0 nova_compute[250018]: 2026-01-20 14:35:24.714 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4425MB free_disk=20.703880310058594GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 14:35:24 compute-0 nova_compute[250018]: 2026-01-20 14:35:24.714 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:35:24 compute-0 nova_compute[250018]: 2026-01-20 14:35:24.715 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:35:24 compute-0 nova_compute[250018]: 2026-01-20 14:35:24.788 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Instance f7ef5c6e-053e-4a03-a68f-60f399ea2fc9 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 14:35:24 compute-0 nova_compute[250018]: 2026-01-20 14:35:24.788 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 14:35:24 compute-0 nova_compute[250018]: 2026-01-20 14:35:24.788 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 14:35:24 compute-0 nova_compute[250018]: 2026-01-20 14:35:24.825 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:35:24 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e186 do_prune osdmap full prune enabled
Jan 20 14:35:24 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e187 e187: 3 total, 3 up, 3 in
Jan 20 14:35:24 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e187: 3 total, 3 up, 3 in
Jan 20 14:35:24 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/685615446' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:35:24 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/4037861475' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:35:25 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:35:25 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2618122951' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:35:25 compute-0 nova_compute[250018]: 2026-01-20 14:35:25.271 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:35:25 compute-0 nova_compute[250018]: 2026-01-20 14:35:25.276 250022 DEBUG nova.compute.provider_tree [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:35:25 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:35:25 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:35:25 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:35:25.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:35:25 compute-0 nova_compute[250018]: 2026-01-20 14:35:25.293 250022 DEBUG nova.scheduler.client.report [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:35:25 compute-0 nova_compute[250018]: 2026-01-20 14:35:25.316 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 14:35:25 compute-0 nova_compute[250018]: 2026-01-20 14:35:25.316 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.602s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:35:25 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1367: 321 pgs: 321 active+clean; 546 MiB data, 790 MiB used, 20 GiB / 21 GiB avail; 8.4 MiB/s rd, 5.7 MiB/s wr, 494 op/s
Jan 20 14:35:25 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e187 do_prune osdmap full prune enabled
Jan 20 14:35:25 compute-0 ceph-mon[74360]: osdmap e187: 3 total, 3 up, 3 in
Jan 20 14:35:25 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/2618122951' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:35:25 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/979848207' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:35:25 compute-0 ceph-mon[74360]: pgmap v1367: 321 pgs: 321 active+clean; 546 MiB data, 790 MiB used, 20 GiB / 21 GiB avail; 8.4 MiB/s rd, 5.7 MiB/s wr, 494 op/s
Jan 20 14:35:25 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e188 e188: 3 total, 3 up, 3 in
Jan 20 14:35:25 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e188: 3 total, 3 up, 3 in
Jan 20 14:35:26 compute-0 nova_compute[250018]: 2026-01-20 14:35:26.179 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:35:26 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:35:26 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:35:26 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:35:26.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:35:26 compute-0 ceph-mon[74360]: osdmap e188: 3 total, 3 up, 3 in
Jan 20 14:35:27 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:35:27 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:35:27 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:35:27.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:35:27 compute-0 nova_compute[250018]: 2026-01-20 14:35:27.316 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:35:27 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:35:27 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1369: 321 pgs: 321 active+clean; 522 MiB data, 795 MiB used, 20 GiB / 21 GiB avail; 14 MiB/s rd, 7.5 MiB/s wr, 531 op/s
Jan 20 14:35:27 compute-0 ceph-mon[74360]: pgmap v1369: 321 pgs: 321 active+clean; 522 MiB data, 795 MiB used, 20 GiB / 21 GiB avail; 14 MiB/s rd, 7.5 MiB/s wr, 531 op/s
Jan 20 14:35:28 compute-0 nova_compute[250018]: 2026-01-20 14:35:28.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:35:28 compute-0 nova_compute[250018]: 2026-01-20 14:35:28.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:35:28 compute-0 nova_compute[250018]: 2026-01-20 14:35:28.051 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 14:35:28 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:35:28 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:35:28 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:35:28.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:35:28 compute-0 nova_compute[250018]: 2026-01-20 14:35:28.835 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:35:29 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/2919091173' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:35:29 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/3279341162' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:35:29 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:35:29 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:35:29 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:35:29.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:35:29 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1370: 321 pgs: 321 active+clean; 533 MiB data, 805 MiB used, 20 GiB / 21 GiB avail; 12 MiB/s rd, 10 MiB/s wr, 506 op/s
Jan 20 14:35:30 compute-0 nova_compute[250018]: 2026-01-20 14:35:30.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:35:30 compute-0 nova_compute[250018]: 2026-01-20 14:35:30.051 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 14:35:30 compute-0 nova_compute[250018]: 2026-01-20 14:35:30.051 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 14:35:30 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/3897833887' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:35:30 compute-0 ceph-mon[74360]: pgmap v1370: 321 pgs: 321 active+clean; 533 MiB data, 805 MiB used, 20 GiB / 21 GiB avail; 12 MiB/s rd, 10 MiB/s wr, 506 op/s
Jan 20 14:35:30 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:35:30 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:35:30 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:35:30.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:35:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:35:30.749 160071 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:35:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:35:30.749 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:35:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:35:30.749 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:35:30 compute-0 nova_compute[250018]: 2026-01-20 14:35:30.926 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "refresh_cache-f7ef5c6e-053e-4a03-a68f-60f399ea2fc9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:35:30 compute-0 nova_compute[250018]: 2026-01-20 14:35:30.926 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquired lock "refresh_cache-f7ef5c6e-053e-4a03-a68f-60f399ea2fc9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:35:30 compute-0 nova_compute[250018]: 2026-01-20 14:35:30.927 250022 DEBUG nova.network.neutron [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] [instance: f7ef5c6e-053e-4a03-a68f-60f399ea2fc9] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 20 14:35:30 compute-0 nova_compute[250018]: 2026-01-20 14:35:30.927 250022 DEBUG nova.objects.instance [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lazy-loading 'info_cache' on Instance uuid f7ef5c6e-053e-4a03-a68f-60f399ea2fc9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:35:31 compute-0 nova_compute[250018]: 2026-01-20 14:35:31.118 250022 DEBUG nova.network.neutron [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] [instance: f7ef5c6e-053e-4a03-a68f-60f399ea2fc9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 14:35:31 compute-0 nova_compute[250018]: 2026-01-20 14:35:31.180 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:35:31 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:35:31 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:35:31 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:35:31.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:35:31 compute-0 nova_compute[250018]: 2026-01-20 14:35:31.489 250022 DEBUG nova.network.neutron [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] [instance: f7ef5c6e-053e-4a03-a68f-60f399ea2fc9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:35:31 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1371: 321 pgs: 321 active+clean; 485 MiB data, 775 MiB used, 20 GiB / 21 GiB avail; 9.4 MiB/s rd, 9.1 MiB/s wr, 499 op/s
Jan 20 14:35:31 compute-0 nova_compute[250018]: 2026-01-20 14:35:31.502 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Releasing lock "refresh_cache-f7ef5c6e-053e-4a03-a68f-60f399ea2fc9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:35:31 compute-0 nova_compute[250018]: 2026-01-20 14:35:31.503 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] [instance: f7ef5c6e-053e-4a03-a68f-60f399ea2fc9] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 20 14:35:32 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:35:32 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:35:32 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:35:32.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:35:32 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:35:32 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e188 do_prune osdmap full prune enabled
Jan 20 14:35:32 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e189 e189: 3 total, 3 up, 3 in
Jan 20 14:35:32 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e189: 3 total, 3 up, 3 in
Jan 20 14:35:32 compute-0 ceph-mon[74360]: pgmap v1371: 321 pgs: 321 active+clean; 485 MiB data, 775 MiB used, 20 GiB / 21 GiB avail; 9.4 MiB/s rd, 9.1 MiB/s wr, 499 op/s
Jan 20 14:35:32 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/1217843012' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:35:32 compute-0 ceph-mon[74360]: osdmap e189: 3 total, 3 up, 3 in
Jan 20 14:35:33 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:35:33 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:35:33 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:35:33.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:35:33 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1373: 321 pgs: 321 active+clean; 485 MiB data, 775 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 8.3 MiB/s wr, 333 op/s
Jan 20 14:35:33 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e189 do_prune osdmap full prune enabled
Jan 20 14:35:33 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e190 e190: 3 total, 3 up, 3 in
Jan 20 14:35:33 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e190: 3 total, 3 up, 3 in
Jan 20 14:35:33 compute-0 nova_compute[250018]: 2026-01-20 14:35:33.771 250022 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768919718.77024, 280e5549-64d8-4573-a271-a210145a151d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:35:33 compute-0 nova_compute[250018]: 2026-01-20 14:35:33.771 250022 INFO nova.compute.manager [-] [instance: 280e5549-64d8-4573-a271-a210145a151d] VM Stopped (Lifecycle Event)
Jan 20 14:35:33 compute-0 nova_compute[250018]: 2026-01-20 14:35:33.792 250022 DEBUG nova.compute.manager [None req-a76d5cfd-7d03-4608-99ec-9d442c3183a8 - - - - - -] [instance: 280e5549-64d8-4573-a271-a210145a151d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:35:33 compute-0 nova_compute[250018]: 2026-01-20 14:35:33.840 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:35:34 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:35:34 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:35:34 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:35:34.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:35:34 compute-0 ceph-mon[74360]: pgmap v1373: 321 pgs: 321 active+clean; 485 MiB data, 775 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 8.3 MiB/s wr, 333 op/s
Jan 20 14:35:34 compute-0 ceph-mon[74360]: osdmap e190: 3 total, 3 up, 3 in
Jan 20 14:35:34 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e190 do_prune osdmap full prune enabled
Jan 20 14:35:34 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e191 e191: 3 total, 3 up, 3 in
Jan 20 14:35:34 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e191: 3 total, 3 up, 3 in
Jan 20 14:35:35 compute-0 nova_compute[250018]: 2026-01-20 14:35:35.043 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:35:35 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:35:35 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:35:35 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:35:35.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:35:35 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1376: 321 pgs: 321 active+clean; 470 MiB data, 750 MiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 4.3 MiB/s wr, 236 op/s
Jan 20 14:35:35 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e191 do_prune osdmap full prune enabled
Jan 20 14:35:35 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e192 e192: 3 total, 3 up, 3 in
Jan 20 14:35:35 compute-0 ceph-mon[74360]: osdmap e191: 3 total, 3 up, 3 in
Jan 20 14:35:35 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e192: 3 total, 3 up, 3 in
Jan 20 14:35:36 compute-0 nova_compute[250018]: 2026-01-20 14:35:36.182 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:35:36 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:35:36 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:35:36 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:35:36.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:35:36 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e192 do_prune osdmap full prune enabled
Jan 20 14:35:36 compute-0 ceph-mon[74360]: pgmap v1376: 321 pgs: 321 active+clean; 470 MiB data, 750 MiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 4.3 MiB/s wr, 236 op/s
Jan 20 14:35:36 compute-0 ceph-mon[74360]: osdmap e192: 3 total, 3 up, 3 in
Jan 20 14:35:36 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e193 e193: 3 total, 3 up, 3 in
Jan 20 14:35:36 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e193: 3 total, 3 up, 3 in
Jan 20 14:35:37 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:35:37 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:35:37 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:35:37.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:35:37 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e193 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:35:37 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1379: 321 pgs: 321 active+clean; 503 MiB data, 765 MiB used, 20 GiB / 21 GiB avail; 7.7 MiB/s rd, 7.0 MiB/s wr, 205 op/s
Jan 20 14:35:37 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e193 do_prune osdmap full prune enabled
Jan 20 14:35:37 compute-0 ceph-mon[74360]: osdmap e193: 3 total, 3 up, 3 in
Jan 20 14:35:37 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e194 e194: 3 total, 3 up, 3 in
Jan 20 14:35:37 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e194: 3 total, 3 up, 3 in
Jan 20 14:35:38 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:35:38 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:35:38 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:35:38.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:35:38 compute-0 nova_compute[250018]: 2026-01-20 14:35:38.844 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:35:38 compute-0 ceph-mon[74360]: pgmap v1379: 321 pgs: 321 active+clean; 503 MiB data, 765 MiB used, 20 GiB / 21 GiB avail; 7.7 MiB/s rd, 7.0 MiB/s wr, 205 op/s
Jan 20 14:35:38 compute-0 ceph-mon[74360]: osdmap e194: 3 total, 3 up, 3 in
Jan 20 14:35:38 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e194 do_prune osdmap full prune enabled
Jan 20 14:35:38 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e195 e195: 3 total, 3 up, 3 in
Jan 20 14:35:38 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e195: 3 total, 3 up, 3 in
Jan 20 14:35:39 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:35:39 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:35:39 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:35:39.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:35:39 compute-0 sshd-session[281328]: Invalid user admin from 157.245.78.139 port 45460
Jan 20 14:35:39 compute-0 nova_compute[250018]: 2026-01-20 14:35:39.440 250022 DEBUG oslo_concurrency.lockutils [None req-1d59f77f-7d30-4188-ac26-52e53cbe76d3 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] Acquiring lock "f7ef5c6e-053e-4a03-a68f-60f399ea2fc9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:35:39 compute-0 nova_compute[250018]: 2026-01-20 14:35:39.441 250022 DEBUG oslo_concurrency.lockutils [None req-1d59f77f-7d30-4188-ac26-52e53cbe76d3 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] Lock "f7ef5c6e-053e-4a03-a68f-60f399ea2fc9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:35:39 compute-0 nova_compute[250018]: 2026-01-20 14:35:39.441 250022 DEBUG oslo_concurrency.lockutils [None req-1d59f77f-7d30-4188-ac26-52e53cbe76d3 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] Acquiring lock "f7ef5c6e-053e-4a03-a68f-60f399ea2fc9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:35:39 compute-0 nova_compute[250018]: 2026-01-20 14:35:39.441 250022 DEBUG oslo_concurrency.lockutils [None req-1d59f77f-7d30-4188-ac26-52e53cbe76d3 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] Lock "f7ef5c6e-053e-4a03-a68f-60f399ea2fc9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:35:39 compute-0 nova_compute[250018]: 2026-01-20 14:35:39.441 250022 DEBUG oslo_concurrency.lockutils [None req-1d59f77f-7d30-4188-ac26-52e53cbe76d3 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] Lock "f7ef5c6e-053e-4a03-a68f-60f399ea2fc9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:35:39 compute-0 nova_compute[250018]: 2026-01-20 14:35:39.442 250022 INFO nova.compute.manager [None req-1d59f77f-7d30-4188-ac26-52e53cbe76d3 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] [instance: f7ef5c6e-053e-4a03-a68f-60f399ea2fc9] Terminating instance
Jan 20 14:35:39 compute-0 nova_compute[250018]: 2026-01-20 14:35:39.443 250022 DEBUG oslo_concurrency.lockutils [None req-1d59f77f-7d30-4188-ac26-52e53cbe76d3 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] Acquiring lock "refresh_cache-f7ef5c6e-053e-4a03-a68f-60f399ea2fc9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:35:39 compute-0 nova_compute[250018]: 2026-01-20 14:35:39.443 250022 DEBUG oslo_concurrency.lockutils [None req-1d59f77f-7d30-4188-ac26-52e53cbe76d3 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] Acquired lock "refresh_cache-f7ef5c6e-053e-4a03-a68f-60f399ea2fc9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:35:39 compute-0 nova_compute[250018]: 2026-01-20 14:35:39.443 250022 DEBUG nova.network.neutron [None req-1d59f77f-7d30-4188-ac26-52e53cbe76d3 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] [instance: f7ef5c6e-053e-4a03-a68f-60f399ea2fc9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:35:39 compute-0 podman[281331]: 2026-01-20 14:35:39.474144643 +0000 UTC m=+0.062994515 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 20 14:35:39 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1382: 321 pgs: 321 active+clean; 503 MiB data, 770 MiB used, 20 GiB / 21 GiB avail; 7.5 MiB/s rd, 8.4 MiB/s wr, 190 op/s
Jan 20 14:35:39 compute-0 podman[281330]: 2026-01-20 14:35:39.503170444 +0000 UTC m=+0.092779917 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 20 14:35:39 compute-0 sshd-session[281328]: Connection closed by invalid user admin 157.245.78.139 port 45460 [preauth]
Jan 20 14:35:39 compute-0 nova_compute[250018]: 2026-01-20 14:35:39.581 250022 DEBUG nova.network.neutron [None req-1d59f77f-7d30-4188-ac26-52e53cbe76d3 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] [instance: f7ef5c6e-053e-4a03-a68f-60f399ea2fc9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 14:35:39 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e195 do_prune osdmap full prune enabled
Jan 20 14:35:39 compute-0 nova_compute[250018]: 2026-01-20 14:35:39.877 250022 DEBUG nova.network.neutron [None req-1d59f77f-7d30-4188-ac26-52e53cbe76d3 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] [instance: f7ef5c6e-053e-4a03-a68f-60f399ea2fc9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:35:39 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e196 e196: 3 total, 3 up, 3 in
Jan 20 14:35:39 compute-0 ceph-mon[74360]: osdmap e195: 3 total, 3 up, 3 in
Jan 20 14:35:39 compute-0 ceph-mon[74360]: pgmap v1382: 321 pgs: 321 active+clean; 503 MiB data, 770 MiB used, 20 GiB / 21 GiB avail; 7.5 MiB/s rd, 8.4 MiB/s wr, 190 op/s
Jan 20 14:35:39 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e196: 3 total, 3 up, 3 in
Jan 20 14:35:39 compute-0 nova_compute[250018]: 2026-01-20 14:35:39.905 250022 DEBUG oslo_concurrency.lockutils [None req-1d59f77f-7d30-4188-ac26-52e53cbe76d3 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] Releasing lock "refresh_cache-f7ef5c6e-053e-4a03-a68f-60f399ea2fc9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:35:39 compute-0 nova_compute[250018]: 2026-01-20 14:35:39.906 250022 DEBUG nova.compute.manager [None req-1d59f77f-7d30-4188-ac26-52e53cbe76d3 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] [instance: f7ef5c6e-053e-4a03-a68f-60f399ea2fc9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 14:35:40 compute-0 systemd[1]: machine-qemu\x2d21\x2dinstance\x2d0000002f.scope: Deactivated successfully.
Jan 20 14:35:40 compute-0 systemd[1]: machine-qemu\x2d21\x2dinstance\x2d0000002f.scope: Consumed 14.161s CPU time.
Jan 20 14:35:40 compute-0 systemd-machined[216401]: Machine qemu-21-instance-0000002f terminated.
Jan 20 14:35:40 compute-0 nova_compute[250018]: 2026-01-20 14:35:40.123 250022 INFO nova.virt.libvirt.driver [-] [instance: f7ef5c6e-053e-4a03-a68f-60f399ea2fc9] Instance destroyed successfully.
Jan 20 14:35:40 compute-0 nova_compute[250018]: 2026-01-20 14:35:40.124 250022 DEBUG nova.objects.instance [None req-1d59f77f-7d30-4188-ac26-52e53cbe76d3 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] Lazy-loading 'resources' on Instance uuid f7ef5c6e-053e-4a03-a68f-60f399ea2fc9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:35:40 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:35:40 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:35:40 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:35:40.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:35:40 compute-0 ceph-mon[74360]: osdmap e196: 3 total, 3 up, 3 in
Jan 20 14:35:40 compute-0 nova_compute[250018]: 2026-01-20 14:35:40.969 250022 INFO nova.virt.libvirt.driver [None req-1d59f77f-7d30-4188-ac26-52e53cbe76d3 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] [instance: f7ef5c6e-053e-4a03-a68f-60f399ea2fc9] Deleting instance files /var/lib/nova/instances/f7ef5c6e-053e-4a03-a68f-60f399ea2fc9_del
Jan 20 14:35:40 compute-0 nova_compute[250018]: 2026-01-20 14:35:40.970 250022 INFO nova.virt.libvirt.driver [None req-1d59f77f-7d30-4188-ac26-52e53cbe76d3 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] [instance: f7ef5c6e-053e-4a03-a68f-60f399ea2fc9] Deletion of /var/lib/nova/instances/f7ef5c6e-053e-4a03-a68f-60f399ea2fc9_del complete
Jan 20 14:35:41 compute-0 nova_compute[250018]: 2026-01-20 14:35:41.038 250022 INFO nova.compute.manager [None req-1d59f77f-7d30-4188-ac26-52e53cbe76d3 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] [instance: f7ef5c6e-053e-4a03-a68f-60f399ea2fc9] Took 1.13 seconds to destroy the instance on the hypervisor.
Jan 20 14:35:41 compute-0 nova_compute[250018]: 2026-01-20 14:35:41.038 250022 DEBUG oslo.service.loopingcall [None req-1d59f77f-7d30-4188-ac26-52e53cbe76d3 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 14:35:41 compute-0 nova_compute[250018]: 2026-01-20 14:35:41.039 250022 DEBUG nova.compute.manager [-] [instance: f7ef5c6e-053e-4a03-a68f-60f399ea2fc9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 14:35:41 compute-0 nova_compute[250018]: 2026-01-20 14:35:41.039 250022 DEBUG nova.network.neutron [-] [instance: f7ef5c6e-053e-4a03-a68f-60f399ea2fc9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 14:35:41 compute-0 nova_compute[250018]: 2026-01-20 14:35:41.183 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:35:41 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:35:41 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:35:41 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:35:41.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:35:41 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1384: 321 pgs: 2 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 314 active+clean; 283 MiB data, 667 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 4.0 MiB/s wr, 275 op/s
Jan 20 14:35:41 compute-0 nova_compute[250018]: 2026-01-20 14:35:41.924 250022 DEBUG nova.network.neutron [-] [instance: f7ef5c6e-053e-4a03-a68f-60f399ea2fc9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 14:35:41 compute-0 ceph-mon[74360]: pgmap v1384: 321 pgs: 2 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 314 active+clean; 283 MiB data, 667 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 4.0 MiB/s wr, 275 op/s
Jan 20 14:35:41 compute-0 nova_compute[250018]: 2026-01-20 14:35:41.948 250022 DEBUG nova.network.neutron [-] [instance: f7ef5c6e-053e-4a03-a68f-60f399ea2fc9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:35:41 compute-0 nova_compute[250018]: 2026-01-20 14:35:41.961 250022 INFO nova.compute.manager [-] [instance: f7ef5c6e-053e-4a03-a68f-60f399ea2fc9] Took 0.92 seconds to deallocate network for instance.
Jan 20 14:35:42 compute-0 nova_compute[250018]: 2026-01-20 14:35:42.006 250022 DEBUG oslo_concurrency.lockutils [None req-1d59f77f-7d30-4188-ac26-52e53cbe76d3 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:35:42 compute-0 nova_compute[250018]: 2026-01-20 14:35:42.007 250022 DEBUG oslo_concurrency.lockutils [None req-1d59f77f-7d30-4188-ac26-52e53cbe76d3 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:35:42 compute-0 nova_compute[250018]: 2026-01-20 14:35:42.068 250022 DEBUG oslo_concurrency.processutils [None req-1d59f77f-7d30-4188-ac26-52e53cbe76d3 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:35:42 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:35:42 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:35:42 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:35:42.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:35:42 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e196 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:35:42 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e196 do_prune osdmap full prune enabled
Jan 20 14:35:42 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:35:42 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4269044680' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:35:42 compute-0 nova_compute[250018]: 2026-01-20 14:35:42.567 250022 DEBUG oslo_concurrency.processutils [None req-1d59f77f-7d30-4188-ac26-52e53cbe76d3 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:35:42 compute-0 nova_compute[250018]: 2026-01-20 14:35:42.572 250022 DEBUG nova.compute.provider_tree [None req-1d59f77f-7d30-4188-ac26-52e53cbe76d3 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:35:42 compute-0 nova_compute[250018]: 2026-01-20 14:35:42.587 250022 DEBUG nova.scheduler.client.report [None req-1d59f77f-7d30-4188-ac26-52e53cbe76d3 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:35:42 compute-0 nova_compute[250018]: 2026-01-20 14:35:42.605 250022 DEBUG oslo_concurrency.lockutils [None req-1d59f77f-7d30-4188-ac26-52e53cbe76d3 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.599s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:35:42 compute-0 nova_compute[250018]: 2026-01-20 14:35:42.637 250022 INFO nova.scheduler.client.report [None req-1d59f77f-7d30-4188-ac26-52e53cbe76d3 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] Deleted allocations for instance f7ef5c6e-053e-4a03-a68f-60f399ea2fc9
Jan 20 14:35:42 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e197 e197: 3 total, 3 up, 3 in
Jan 20 14:35:42 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e197: 3 total, 3 up, 3 in
Jan 20 14:35:42 compute-0 nova_compute[250018]: 2026-01-20 14:35:42.710 250022 DEBUG oslo_concurrency.lockutils [None req-1d59f77f-7d30-4188-ac26-52e53cbe76d3 72ad8e217e1348378596753eefca1452 9e10f687e8a14fc3bfa98df19df5befd - - default default] Lock "f7ef5c6e-053e-4a03-a68f-60f399ea2fc9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.269s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:35:42 compute-0 sudo[281419]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:35:42 compute-0 sudo[281419]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:35:42 compute-0 sudo[281419]: pam_unix(sudo:session): session closed for user root
Jan 20 14:35:42 compute-0 sudo[281444]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:35:42 compute-0 sudo[281444]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:35:42 compute-0 sudo[281444]: pam_unix(sudo:session): session closed for user root
Jan 20 14:35:43 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:35:43 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:35:43 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:35:43.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:35:43 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/4269044680' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:35:43 compute-0 ceph-mon[74360]: osdmap e197: 3 total, 3 up, 3 in
Jan 20 14:35:43 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1386: 321 pgs: 2 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 314 active+clean; 283 MiB data, 667 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 3.3 MiB/s wr, 227 op/s
Jan 20 14:35:43 compute-0 nova_compute[250018]: 2026-01-20 14:35:43.846 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:35:44 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:35:44 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:35:44 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:35:44.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:35:44 compute-0 ceph-mon[74360]: pgmap v1386: 321 pgs: 2 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 314 active+clean; 283 MiB data, 667 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 3.3 MiB/s wr, 227 op/s
Jan 20 14:35:44 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/1152927409' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:35:44 compute-0 nova_compute[250018]: 2026-01-20 14:35:44.925 250022 DEBUG oslo_concurrency.lockutils [None req-7e2d09a8-0b11-4182-a2c3-04985cfa3573 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] Acquiring lock "1b3763f0-b328-4db2-844b-7f56cc13c19e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:35:44 compute-0 nova_compute[250018]: 2026-01-20 14:35:44.925 250022 DEBUG oslo_concurrency.lockutils [None req-7e2d09a8-0b11-4182-a2c3-04985cfa3573 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] Lock "1b3763f0-b328-4db2-844b-7f56cc13c19e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:35:44 compute-0 nova_compute[250018]: 2026-01-20 14:35:44.966 250022 DEBUG oslo_concurrency.lockutils [None req-76fb2ae9-c4fb-4be1-b0bb-b58de89b0ed4 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Acquiring lock "a529906d-6908-4a37-ac57-db4384de2893" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:35:44 compute-0 nova_compute[250018]: 2026-01-20 14:35:44.967 250022 DEBUG oslo_concurrency.lockutils [None req-76fb2ae9-c4fb-4be1-b0bb-b58de89b0ed4 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Lock "a529906d-6908-4a37-ac57-db4384de2893" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:35:45 compute-0 nova_compute[250018]: 2026-01-20 14:35:45.009 250022 DEBUG nova.compute.manager [None req-7e2d09a8-0b11-4182-a2c3-04985cfa3573 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] [instance: 1b3763f0-b328-4db2-844b-7f56cc13c19e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 14:35:45 compute-0 nova_compute[250018]: 2026-01-20 14:35:45.028 250022 DEBUG nova.compute.manager [None req-76fb2ae9-c4fb-4be1-b0bb-b58de89b0ed4 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] [instance: a529906d-6908-4a37-ac57-db4384de2893] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 14:35:45 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:35:45 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:35:45 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:35:45.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:35:45 compute-0 nova_compute[250018]: 2026-01-20 14:35:45.370 250022 DEBUG oslo_concurrency.lockutils [None req-7e2d09a8-0b11-4182-a2c3-04985cfa3573 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:35:45 compute-0 nova_compute[250018]: 2026-01-20 14:35:45.371 250022 DEBUG oslo_concurrency.lockutils [None req-7e2d09a8-0b11-4182-a2c3-04985cfa3573 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:35:45 compute-0 nova_compute[250018]: 2026-01-20 14:35:45.377 250022 DEBUG nova.virt.hardware [None req-7e2d09a8-0b11-4182-a2c3-04985cfa3573 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 14:35:45 compute-0 nova_compute[250018]: 2026-01-20 14:35:45.378 250022 INFO nova.compute.claims [None req-7e2d09a8-0b11-4182-a2c3-04985cfa3573 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] [instance: 1b3763f0-b328-4db2-844b-7f56cc13c19e] Claim successful on node compute-0.ctlplane.example.com
Jan 20 14:35:45 compute-0 nova_compute[250018]: 2026-01-20 14:35:45.402 250022 DEBUG oslo_concurrency.lockutils [None req-76fb2ae9-c4fb-4be1-b0bb-b58de89b0ed4 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:35:45 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1387: 321 pgs: 321 active+clean; 121 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 173 KiB/s rd, 9.9 KiB/s wr, 252 op/s
Jan 20 14:35:45 compute-0 nova_compute[250018]: 2026-01-20 14:35:45.757 250022 DEBUG oslo_concurrency.processutils [None req-7e2d09a8-0b11-4182-a2c3-04985cfa3573 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:35:46 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:35:46 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1925318780' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:35:46 compute-0 nova_compute[250018]: 2026-01-20 14:35:46.177 250022 DEBUG oslo_concurrency.processutils [None req-7e2d09a8-0b11-4182-a2c3-04985cfa3573 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:35:46 compute-0 nova_compute[250018]: 2026-01-20 14:35:46.183 250022 DEBUG nova.compute.provider_tree [None req-7e2d09a8-0b11-4182-a2c3-04985cfa3573 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:35:46 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:35:46 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:35:46 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:35:46.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:35:46 compute-0 nova_compute[250018]: 2026-01-20 14:35:46.235 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:35:46 compute-0 nova_compute[250018]: 2026-01-20 14:35:46.341 250022 DEBUG nova.scheduler.client.report [None req-7e2d09a8-0b11-4182-a2c3-04985cfa3573 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:35:46 compute-0 nova_compute[250018]: 2026-01-20 14:35:46.422 250022 DEBUG oslo_concurrency.lockutils [None req-7e2d09a8-0b11-4182-a2c3-04985cfa3573 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.051s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:35:46 compute-0 nova_compute[250018]: 2026-01-20 14:35:46.423 250022 DEBUG nova.compute.manager [None req-7e2d09a8-0b11-4182-a2c3-04985cfa3573 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] [instance: 1b3763f0-b328-4db2-844b-7f56cc13c19e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 14:35:46 compute-0 nova_compute[250018]: 2026-01-20 14:35:46.426 250022 DEBUG oslo_concurrency.lockutils [None req-76fb2ae9-c4fb-4be1-b0bb-b58de89b0ed4 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 1.024s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:35:46 compute-0 nova_compute[250018]: 2026-01-20 14:35:46.434 250022 DEBUG nova.virt.hardware [None req-76fb2ae9-c4fb-4be1-b0bb-b58de89b0ed4 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 14:35:46 compute-0 nova_compute[250018]: 2026-01-20 14:35:46.435 250022 INFO nova.compute.claims [None req-76fb2ae9-c4fb-4be1-b0bb-b58de89b0ed4 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] [instance: a529906d-6908-4a37-ac57-db4384de2893] Claim successful on node compute-0.ctlplane.example.com
Jan 20 14:35:46 compute-0 ceph-mon[74360]: pgmap v1387: 321 pgs: 321 active+clean; 121 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 173 KiB/s rd, 9.9 KiB/s wr, 252 op/s
Jan 20 14:35:46 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/1925318780' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:35:46 compute-0 nova_compute[250018]: 2026-01-20 14:35:46.623 250022 DEBUG nova.compute.manager [None req-7e2d09a8-0b11-4182-a2c3-04985cfa3573 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] [instance: 1b3763f0-b328-4db2-844b-7f56cc13c19e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 20 14:35:46 compute-0 nova_compute[250018]: 2026-01-20 14:35:46.624 250022 DEBUG nova.network.neutron [None req-7e2d09a8-0b11-4182-a2c3-04985cfa3573 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] [instance: 1b3763f0-b328-4db2-844b-7f56cc13c19e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 14:35:46 compute-0 nova_compute[250018]: 2026-01-20 14:35:46.700 250022 INFO nova.virt.libvirt.driver [None req-7e2d09a8-0b11-4182-a2c3-04985cfa3573 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] [instance: 1b3763f0-b328-4db2-844b-7f56cc13c19e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 14:35:46 compute-0 nova_compute[250018]: 2026-01-20 14:35:46.718 250022 DEBUG nova.compute.manager [None req-7e2d09a8-0b11-4182-a2c3-04985cfa3573 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] [instance: 1b3763f0-b328-4db2-844b-7f56cc13c19e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 14:35:46 compute-0 nova_compute[250018]: 2026-01-20 14:35:46.787 250022 DEBUG oslo_concurrency.processutils [None req-76fb2ae9-c4fb-4be1-b0bb-b58de89b0ed4 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:35:46 compute-0 nova_compute[250018]: 2026-01-20 14:35:46.959 250022 DEBUG nova.policy [None req-7e2d09a8-0b11-4182-a2c3-04985cfa3573 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '00cec8cbb72b489da46855f8b3b4c42c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '79601368a3db41e0aacec93e8fd7f1d4', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 20 14:35:47 compute-0 nova_compute[250018]: 2026-01-20 14:35:47.000 250022 DEBUG nova.compute.manager [None req-7e2d09a8-0b11-4182-a2c3-04985cfa3573 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] [instance: 1b3763f0-b328-4db2-844b-7f56cc13c19e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 14:35:47 compute-0 nova_compute[250018]: 2026-01-20 14:35:47.002 250022 DEBUG nova.virt.libvirt.driver [None req-7e2d09a8-0b11-4182-a2c3-04985cfa3573 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] [instance: 1b3763f0-b328-4db2-844b-7f56cc13c19e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 14:35:47 compute-0 nova_compute[250018]: 2026-01-20 14:35:47.002 250022 INFO nova.virt.libvirt.driver [None req-7e2d09a8-0b11-4182-a2c3-04985cfa3573 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] [instance: 1b3763f0-b328-4db2-844b-7f56cc13c19e] Creating image(s)
Jan 20 14:35:47 compute-0 nova_compute[250018]: 2026-01-20 14:35:47.029 250022 DEBUG nova.storage.rbd_utils [None req-7e2d09a8-0b11-4182-a2c3-04985cfa3573 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] rbd image 1b3763f0-b328-4db2-844b-7f56cc13c19e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:35:47 compute-0 nova_compute[250018]: 2026-01-20 14:35:47.055 250022 DEBUG nova.storage.rbd_utils [None req-7e2d09a8-0b11-4182-a2c3-04985cfa3573 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] rbd image 1b3763f0-b328-4db2-844b-7f56cc13c19e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:35:47 compute-0 nova_compute[250018]: 2026-01-20 14:35:47.091 250022 DEBUG nova.storage.rbd_utils [None req-7e2d09a8-0b11-4182-a2c3-04985cfa3573 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] rbd image 1b3763f0-b328-4db2-844b-7f56cc13c19e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:35:47 compute-0 nova_compute[250018]: 2026-01-20 14:35:47.096 250022 DEBUG oslo_concurrency.processutils [None req-7e2d09a8-0b11-4182-a2c3-04985cfa3573 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:35:47 compute-0 nova_compute[250018]: 2026-01-20 14:35:47.166 250022 DEBUG oslo_concurrency.processutils [None req-7e2d09a8-0b11-4182-a2c3-04985cfa3573 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:35:47 compute-0 nova_compute[250018]: 2026-01-20 14:35:47.168 250022 DEBUG oslo_concurrency.lockutils [None req-7e2d09a8-0b11-4182-a2c3-04985cfa3573 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] Acquiring lock "82d5c1918fd7c974214c7a48c1793a7a82560462" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:35:47 compute-0 nova_compute[250018]: 2026-01-20 14:35:47.168 250022 DEBUG oslo_concurrency.lockutils [None req-7e2d09a8-0b11-4182-a2c3-04985cfa3573 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:35:47 compute-0 nova_compute[250018]: 2026-01-20 14:35:47.169 250022 DEBUG oslo_concurrency.lockutils [None req-7e2d09a8-0b11-4182-a2c3-04985cfa3573 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:35:47 compute-0 nova_compute[250018]: 2026-01-20 14:35:47.200 250022 DEBUG nova.storage.rbd_utils [None req-7e2d09a8-0b11-4182-a2c3-04985cfa3573 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] rbd image 1b3763f0-b328-4db2-844b-7f56cc13c19e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:35:47 compute-0 nova_compute[250018]: 2026-01-20 14:35:47.206 250022 DEBUG oslo_concurrency.processutils [None req-7e2d09a8-0b11-4182-a2c3-04985cfa3573 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 1b3763f0-b328-4db2-844b-7f56cc13c19e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:35:47 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:35:47 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3882842628' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:35:47 compute-0 nova_compute[250018]: 2026-01-20 14:35:47.280 250022 DEBUG oslo_concurrency.processutils [None req-76fb2ae9-c4fb-4be1-b0bb-b58de89b0ed4 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:35:47 compute-0 nova_compute[250018]: 2026-01-20 14:35:47.289 250022 DEBUG nova.compute.provider_tree [None req-76fb2ae9-c4fb-4be1-b0bb-b58de89b0ed4 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:35:47 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:35:47 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:35:47 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:35:47.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:35:47 compute-0 nova_compute[250018]: 2026-01-20 14:35:47.419 250022 DEBUG nova.scheduler.client.report [None req-76fb2ae9-c4fb-4be1-b0bb-b58de89b0ed4 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:35:47 compute-0 nova_compute[250018]: 2026-01-20 14:35:47.475 250022 DEBUG oslo_concurrency.lockutils [None req-76fb2ae9-c4fb-4be1-b0bb-b58de89b0ed4 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.049s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:35:47 compute-0 nova_compute[250018]: 2026-01-20 14:35:47.476 250022 DEBUG nova.compute.manager [None req-76fb2ae9-c4fb-4be1-b0bb-b58de89b0ed4 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] [instance: a529906d-6908-4a37-ac57-db4384de2893] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 14:35:47 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:35:47 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e197 do_prune osdmap full prune enabled
Jan 20 14:35:47 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1388: 321 pgs: 321 active+clean; 97 MiB data, 547 MiB used, 20 GiB / 21 GiB avail; 148 KiB/s rd, 9.5 KiB/s wr, 218 op/s
Jan 20 14:35:47 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e198 e198: 3 total, 3 up, 3 in
Jan 20 14:35:47 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e198: 3 total, 3 up, 3 in
Jan 20 14:35:47 compute-0 ceph-mon[74360]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #66. Immutable memtables: 0.
Jan 20 14:35:47 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:35:47.555671) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 14:35:47 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:856] [default] [JOB 35] Flushing memtable with next log file: 66
Jan 20 14:35:47 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768919747555810, "job": 35, "event": "flush_started", "num_memtables": 1, "num_entries": 2308, "num_deletes": 263, "total_data_size": 3731026, "memory_usage": 3786960, "flush_reason": "Manual Compaction"}
Jan 20 14:35:47 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:885] [default] [JOB 35] Level-0 flush table #67: started
Jan 20 14:35:47 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768919747581555, "cf_name": "default", "job": 35, "event": "table_file_creation", "file_number": 67, "file_size": 3669107, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 29167, "largest_seqno": 31474, "table_properties": {"data_size": 3658662, "index_size": 6683, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2693, "raw_key_size": 22357, "raw_average_key_size": 21, "raw_value_size": 3637541, "raw_average_value_size": 3434, "num_data_blocks": 288, "num_entries": 1059, "num_filter_entries": 1059, "num_deletions": 263, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768919562, "oldest_key_time": 1768919562, "file_creation_time": 1768919747, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1688fb6f-f397-4aac-a0e8-874ff91d97a7", "db_session_id": "5V3N2TVXYZBCXP55EZHK", "orig_file_number": 67, "seqno_to_time_mapping": "N/A"}}
Jan 20 14:35:47 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 35] Flush lasted 25934 microseconds, and 7374 cpu microseconds.
Jan 20 14:35:47 compute-0 ceph-mon[74360]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 14:35:47 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:35:47.581605) [db/flush_job.cc:967] [default] [JOB 35] Level-0 flush table #67: 3669107 bytes OK
Jan 20 14:35:47 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:35:47.581627) [db/memtable_list.cc:519] [default] Level-0 commit table #67 started
Jan 20 14:35:47 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:35:47.586852) [db/memtable_list.cc:722] [default] Level-0 commit table #67: memtable #1 done
Jan 20 14:35:47 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:35:47.586889) EVENT_LOG_v1 {"time_micros": 1768919747586880, "job": 35, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 14:35:47 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:35:47.586911) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 14:35:47 compute-0 ceph-mon[74360]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 35] Try to delete WAL files size 3721471, prev total WAL file size 3723170, number of live WAL files 2.
Jan 20 14:35:47 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000063.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:35:47 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:35:47.587917) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032353130' seq:72057594037927935, type:22 .. '7061786F730032373632' seq:0, type:0; will stop at (end)
Jan 20 14:35:47 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 36] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 14:35:47 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 35 Base level 0, inputs: [67(3583KB)], [65(8254KB)]
Jan 20 14:35:47 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768919747587967, "job": 36, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [67], "files_L6": [65], "score": -1, "input_data_size": 12121226, "oldest_snapshot_seqno": -1}
Jan 20 14:35:47 compute-0 nova_compute[250018]: 2026-01-20 14:35:47.621 250022 DEBUG nova.compute.manager [None req-76fb2ae9-c4fb-4be1-b0bb-b58de89b0ed4 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] [instance: a529906d-6908-4a37-ac57-db4384de2893] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 20 14:35:47 compute-0 nova_compute[250018]: 2026-01-20 14:35:47.621 250022 DEBUG nova.network.neutron [None req-76fb2ae9-c4fb-4be1-b0bb-b58de89b0ed4 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] [instance: a529906d-6908-4a37-ac57-db4384de2893] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 14:35:47 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 36] Generated table #68: 5965 keys, 10180483 bytes, temperature: kUnknown
Jan 20 14:35:47 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768919747675655, "cf_name": "default", "job": 36, "event": "table_file_creation", "file_number": 68, "file_size": 10180483, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10139621, "index_size": 24816, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14981, "raw_key_size": 151166, "raw_average_key_size": 25, "raw_value_size": 10031449, "raw_average_value_size": 1681, "num_data_blocks": 1003, "num_entries": 5965, "num_filter_entries": 5965, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768917305, "oldest_key_time": 0, "file_creation_time": 1768919747, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1688fb6f-f397-4aac-a0e8-874ff91d97a7", "db_session_id": "5V3N2TVXYZBCXP55EZHK", "orig_file_number": 68, "seqno_to_time_mapping": "N/A"}}
Jan 20 14:35:47 compute-0 ceph-mon[74360]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 14:35:47 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/4051494342' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:35:47 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3882842628' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:35:47 compute-0 ceph-mon[74360]: osdmap e198: 3 total, 3 up, 3 in
Jan 20 14:35:47 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:35:47.675876) [db/compaction/compaction_job.cc:1663] [default] [JOB 36] Compacted 1@0 + 1@6 files to L6 => 10180483 bytes
Jan 20 14:35:47 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:35:47.678367) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 138.1 rd, 116.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.5, 8.1 +0.0 blob) out(9.7 +0.0 blob), read-write-amplify(6.1) write-amplify(2.8) OK, records in: 6501, records dropped: 536 output_compression: NoCompression
Jan 20 14:35:47 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:35:47.678400) EVENT_LOG_v1 {"time_micros": 1768919747678380, "job": 36, "event": "compaction_finished", "compaction_time_micros": 87756, "compaction_time_cpu_micros": 21651, "output_level": 6, "num_output_files": 1, "total_output_size": 10180483, "num_input_records": 6501, "num_output_records": 5965, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 14:35:47 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000067.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:35:47 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768919747679153, "job": 36, "event": "table_file_deletion", "file_number": 67}
Jan 20 14:35:47 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000065.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:35:47 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768919747680977, "job": 36, "event": "table_file_deletion", "file_number": 65}
Jan 20 14:35:47 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:35:47.587832) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:35:47 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:35:47.681125) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:35:47 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:35:47.681129) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:35:47 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:35:47.681130) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:35:47 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:35:47.681132) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:35:47 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:35:47.681134) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:35:47 compute-0 nova_compute[250018]: 2026-01-20 14:35:47.815 250022 INFO nova.virt.libvirt.driver [None req-76fb2ae9-c4fb-4be1-b0bb-b58de89b0ed4 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] [instance: a529906d-6908-4a37-ac57-db4384de2893] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 14:35:47 compute-0 nova_compute[250018]: 2026-01-20 14:35:47.892 250022 DEBUG nova.compute.manager [None req-76fb2ae9-c4fb-4be1-b0bb-b58de89b0ed4 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] [instance: a529906d-6908-4a37-ac57-db4384de2893] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 14:35:47 compute-0 nova_compute[250018]: 2026-01-20 14:35:47.905 250022 DEBUG oslo_concurrency.processutils [None req-7e2d09a8-0b11-4182-a2c3-04985cfa3573 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 1b3763f0-b328-4db2-844b-7f56cc13c19e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.698s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:35:47 compute-0 nova_compute[250018]: 2026-01-20 14:35:47.948 250022 DEBUG nova.policy [None req-76fb2ae9-c4fb-4be1-b0bb-b58de89b0ed4 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '56e2959629114d3d8a48e7a80ed96c4b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3750c56415134773aa9d9880038f1749', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 20 14:35:47 compute-0 nova_compute[250018]: 2026-01-20 14:35:47.995 250022 DEBUG nova.storage.rbd_utils [None req-7e2d09a8-0b11-4182-a2c3-04985cfa3573 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] resizing rbd image 1b3763f0-b328-4db2-844b-7f56cc13c19e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 20 14:35:48 compute-0 nova_compute[250018]: 2026-01-20 14:35:48.107 250022 DEBUG nova.objects.instance [None req-7e2d09a8-0b11-4182-a2c3-04985cfa3573 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] Lazy-loading 'migration_context' on Instance uuid 1b3763f0-b328-4db2-844b-7f56cc13c19e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:35:48 compute-0 nova_compute[250018]: 2026-01-20 14:35:48.148 250022 DEBUG nova.compute.manager [None req-76fb2ae9-c4fb-4be1-b0bb-b58de89b0ed4 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] [instance: a529906d-6908-4a37-ac57-db4384de2893] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 14:35:48 compute-0 nova_compute[250018]: 2026-01-20 14:35:48.150 250022 DEBUG nova.virt.libvirt.driver [None req-76fb2ae9-c4fb-4be1-b0bb-b58de89b0ed4 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] [instance: a529906d-6908-4a37-ac57-db4384de2893] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 14:35:48 compute-0 nova_compute[250018]: 2026-01-20 14:35:48.150 250022 INFO nova.virt.libvirt.driver [None req-76fb2ae9-c4fb-4be1-b0bb-b58de89b0ed4 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] [instance: a529906d-6908-4a37-ac57-db4384de2893] Creating image(s)
Jan 20 14:35:48 compute-0 nova_compute[250018]: 2026-01-20 14:35:48.175 250022 DEBUG nova.storage.rbd_utils [None req-76fb2ae9-c4fb-4be1-b0bb-b58de89b0ed4 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] rbd image a529906d-6908-4a37-ac57-db4384de2893_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:35:48 compute-0 nova_compute[250018]: 2026-01-20 14:35:48.205 250022 DEBUG nova.storage.rbd_utils [None req-76fb2ae9-c4fb-4be1-b0bb-b58de89b0ed4 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] rbd image a529906d-6908-4a37-ac57-db4384de2893_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:35:48 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:35:48 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:35:48 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:35:48.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:35:48 compute-0 nova_compute[250018]: 2026-01-20 14:35:48.279 250022 DEBUG nova.storage.rbd_utils [None req-76fb2ae9-c4fb-4be1-b0bb-b58de89b0ed4 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] rbd image a529906d-6908-4a37-ac57-db4384de2893_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:35:48 compute-0 nova_compute[250018]: 2026-01-20 14:35:48.282 250022 DEBUG oslo_concurrency.processutils [None req-76fb2ae9-c4fb-4be1-b0bb-b58de89b0ed4 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:35:48 compute-0 nova_compute[250018]: 2026-01-20 14:35:48.309 250022 DEBUG nova.virt.libvirt.driver [None req-7e2d09a8-0b11-4182-a2c3-04985cfa3573 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] [instance: 1b3763f0-b328-4db2-844b-7f56cc13c19e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 14:35:48 compute-0 nova_compute[250018]: 2026-01-20 14:35:48.310 250022 DEBUG nova.virt.libvirt.driver [None req-7e2d09a8-0b11-4182-a2c3-04985cfa3573 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] [instance: 1b3763f0-b328-4db2-844b-7f56cc13c19e] Ensure instance console log exists: /var/lib/nova/instances/1b3763f0-b328-4db2-844b-7f56cc13c19e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 14:35:48 compute-0 nova_compute[250018]: 2026-01-20 14:35:48.311 250022 DEBUG oslo_concurrency.lockutils [None req-7e2d09a8-0b11-4182-a2c3-04985cfa3573 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:35:48 compute-0 nova_compute[250018]: 2026-01-20 14:35:48.312 250022 DEBUG oslo_concurrency.lockutils [None req-7e2d09a8-0b11-4182-a2c3-04985cfa3573 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:35:48 compute-0 nova_compute[250018]: 2026-01-20 14:35:48.312 250022 DEBUG oslo_concurrency.lockutils [None req-7e2d09a8-0b11-4182-a2c3-04985cfa3573 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:35:48 compute-0 nova_compute[250018]: 2026-01-20 14:35:48.361 250022 DEBUG oslo_concurrency.processutils [None req-76fb2ae9-c4fb-4be1-b0bb-b58de89b0ed4 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:35:48 compute-0 nova_compute[250018]: 2026-01-20 14:35:48.362 250022 DEBUG oslo_concurrency.lockutils [None req-76fb2ae9-c4fb-4be1-b0bb-b58de89b0ed4 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Acquiring lock "82d5c1918fd7c974214c7a48c1793a7a82560462" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:35:48 compute-0 nova_compute[250018]: 2026-01-20 14:35:48.363 250022 DEBUG oslo_concurrency.lockutils [None req-76fb2ae9-c4fb-4be1-b0bb-b58de89b0ed4 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:35:48 compute-0 nova_compute[250018]: 2026-01-20 14:35:48.363 250022 DEBUG oslo_concurrency.lockutils [None req-76fb2ae9-c4fb-4be1-b0bb-b58de89b0ed4 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:35:48 compute-0 nova_compute[250018]: 2026-01-20 14:35:48.394 250022 DEBUG nova.storage.rbd_utils [None req-76fb2ae9-c4fb-4be1-b0bb-b58de89b0ed4 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] rbd image a529906d-6908-4a37-ac57-db4384de2893_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:35:48 compute-0 nova_compute[250018]: 2026-01-20 14:35:48.398 250022 DEBUG oslo_concurrency.processutils [None req-76fb2ae9-c4fb-4be1-b0bb-b58de89b0ed4 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 a529906d-6908-4a37-ac57-db4384de2893_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:35:48 compute-0 nova_compute[250018]: 2026-01-20 14:35:48.424 250022 DEBUG nova.network.neutron [None req-7e2d09a8-0b11-4182-a2c3-04985cfa3573 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] [instance: 1b3763f0-b328-4db2-844b-7f56cc13c19e] Successfully created port: dea207a0-1a8b-4f40-8fdc-1a5e76999db8 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 20 14:35:48 compute-0 nova_compute[250018]: 2026-01-20 14:35:48.750 250022 DEBUG oslo_concurrency.processutils [None req-76fb2ae9-c4fb-4be1-b0bb-b58de89b0ed4 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 a529906d-6908-4a37-ac57-db4384de2893_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.351s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:35:48 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e198 do_prune osdmap full prune enabled
Jan 20 14:35:48 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e199 e199: 3 total, 3 up, 3 in
Jan 20 14:35:48 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e199: 3 total, 3 up, 3 in
Jan 20 14:35:48 compute-0 ceph-mon[74360]: pgmap v1388: 321 pgs: 321 active+clean; 97 MiB data, 547 MiB used, 20 GiB / 21 GiB avail; 148 KiB/s rd, 9.5 KiB/s wr, 218 op/s
Jan 20 14:35:48 compute-0 nova_compute[250018]: 2026-01-20 14:35:48.849 250022 DEBUG nova.storage.rbd_utils [None req-76fb2ae9-c4fb-4be1-b0bb-b58de89b0ed4 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] resizing rbd image a529906d-6908-4a37-ac57-db4384de2893_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 20 14:35:48 compute-0 nova_compute[250018]: 2026-01-20 14:35:48.931 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:35:49 compute-0 nova_compute[250018]: 2026-01-20 14:35:49.010 250022 DEBUG nova.objects.instance [None req-76fb2ae9-c4fb-4be1-b0bb-b58de89b0ed4 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Lazy-loading 'migration_context' on Instance uuid a529906d-6908-4a37-ac57-db4384de2893 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:35:49 compute-0 nova_compute[250018]: 2026-01-20 14:35:49.013 250022 DEBUG nova.network.neutron [None req-76fb2ae9-c4fb-4be1-b0bb-b58de89b0ed4 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] [instance: a529906d-6908-4a37-ac57-db4384de2893] Successfully created port: f2330816-cb0c-4408-9178-8d732ae1a45b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 20 14:35:49 compute-0 nova_compute[250018]: 2026-01-20 14:35:49.042 250022 DEBUG nova.virt.libvirt.driver [None req-76fb2ae9-c4fb-4be1-b0bb-b58de89b0ed4 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] [instance: a529906d-6908-4a37-ac57-db4384de2893] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 14:35:49 compute-0 nova_compute[250018]: 2026-01-20 14:35:49.042 250022 DEBUG nova.virt.libvirt.driver [None req-76fb2ae9-c4fb-4be1-b0bb-b58de89b0ed4 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] [instance: a529906d-6908-4a37-ac57-db4384de2893] Ensure instance console log exists: /var/lib/nova/instances/a529906d-6908-4a37-ac57-db4384de2893/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 14:35:49 compute-0 nova_compute[250018]: 2026-01-20 14:35:49.043 250022 DEBUG oslo_concurrency.lockutils [None req-76fb2ae9-c4fb-4be1-b0bb-b58de89b0ed4 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:35:49 compute-0 nova_compute[250018]: 2026-01-20 14:35:49.043 250022 DEBUG oslo_concurrency.lockutils [None req-76fb2ae9-c4fb-4be1-b0bb-b58de89b0ed4 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:35:49 compute-0 nova_compute[250018]: 2026-01-20 14:35:49.044 250022 DEBUG oslo_concurrency.lockutils [None req-76fb2ae9-c4fb-4be1-b0bb-b58de89b0ed4 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:35:49 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:35:49 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:35:49 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:35:49.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:35:49 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1391: 321 pgs: 4 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 315 active+clean; 89 MiB data, 534 MiB used, 20 GiB / 21 GiB avail; 119 KiB/s rd, 780 KiB/s wr, 172 op/s
Jan 20 14:35:49 compute-0 nova_compute[250018]: 2026-01-20 14:35:49.563 250022 DEBUG nova.network.neutron [None req-7e2d09a8-0b11-4182-a2c3-04985cfa3573 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] [instance: 1b3763f0-b328-4db2-844b-7f56cc13c19e] Successfully updated port: dea207a0-1a8b-4f40-8fdc-1a5e76999db8 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 20 14:35:49 compute-0 nova_compute[250018]: 2026-01-20 14:35:49.575 250022 DEBUG oslo_concurrency.lockutils [None req-7e2d09a8-0b11-4182-a2c3-04985cfa3573 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] Acquiring lock "refresh_cache-1b3763f0-b328-4db2-844b-7f56cc13c19e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:35:49 compute-0 nova_compute[250018]: 2026-01-20 14:35:49.576 250022 DEBUG oslo_concurrency.lockutils [None req-7e2d09a8-0b11-4182-a2c3-04985cfa3573 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] Acquired lock "refresh_cache-1b3763f0-b328-4db2-844b-7f56cc13c19e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:35:49 compute-0 nova_compute[250018]: 2026-01-20 14:35:49.576 250022 DEBUG nova.network.neutron [None req-7e2d09a8-0b11-4182-a2c3-04985cfa3573 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] [instance: 1b3763f0-b328-4db2-844b-7f56cc13c19e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:35:49 compute-0 nova_compute[250018]: 2026-01-20 14:35:49.678 250022 DEBUG nova.compute.manager [req-1133b8dd-1a19-417a-b912-234b71184c47 req-54d31335-8c6d-41fd-949d-d78dc55774fe 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 1b3763f0-b328-4db2-844b-7f56cc13c19e] Received event network-changed-dea207a0-1a8b-4f40-8fdc-1a5e76999db8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:35:49 compute-0 nova_compute[250018]: 2026-01-20 14:35:49.679 250022 DEBUG nova.compute.manager [req-1133b8dd-1a19-417a-b912-234b71184c47 req-54d31335-8c6d-41fd-949d-d78dc55774fe 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 1b3763f0-b328-4db2-844b-7f56cc13c19e] Refreshing instance network info cache due to event network-changed-dea207a0-1a8b-4f40-8fdc-1a5e76999db8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:35:49 compute-0 nova_compute[250018]: 2026-01-20 14:35:49.679 250022 DEBUG oslo_concurrency.lockutils [req-1133b8dd-1a19-417a-b912-234b71184c47 req-54d31335-8c6d-41fd-949d-d78dc55774fe 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-1b3763f0-b328-4db2-844b-7f56cc13c19e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:35:49 compute-0 nova_compute[250018]: 2026-01-20 14:35:49.781 250022 DEBUG nova.network.neutron [None req-7e2d09a8-0b11-4182-a2c3-04985cfa3573 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] [instance: 1b3763f0-b328-4db2-844b-7f56cc13c19e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 14:35:49 compute-0 ceph-mon[74360]: osdmap e199: 3 total, 3 up, 3 in
Jan 20 14:35:50 compute-0 nova_compute[250018]: 2026-01-20 14:35:50.018 250022 DEBUG nova.network.neutron [None req-76fb2ae9-c4fb-4be1-b0bb-b58de89b0ed4 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] [instance: a529906d-6908-4a37-ac57-db4384de2893] Successfully updated port: f2330816-cb0c-4408-9178-8d732ae1a45b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 20 14:35:50 compute-0 nova_compute[250018]: 2026-01-20 14:35:50.082 250022 DEBUG oslo_concurrency.lockutils [None req-76fb2ae9-c4fb-4be1-b0bb-b58de89b0ed4 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Acquiring lock "refresh_cache-a529906d-6908-4a37-ac57-db4384de2893" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:35:50 compute-0 nova_compute[250018]: 2026-01-20 14:35:50.082 250022 DEBUG oslo_concurrency.lockutils [None req-76fb2ae9-c4fb-4be1-b0bb-b58de89b0ed4 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Acquired lock "refresh_cache-a529906d-6908-4a37-ac57-db4384de2893" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:35:50 compute-0 nova_compute[250018]: 2026-01-20 14:35:50.082 250022 DEBUG nova.network.neutron [None req-76fb2ae9-c4fb-4be1-b0bb-b58de89b0ed4 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] [instance: a529906d-6908-4a37-ac57-db4384de2893] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:35:50 compute-0 nova_compute[250018]: 2026-01-20 14:35:50.158 250022 DEBUG nova.compute.manager [req-cc4bc527-8ce7-4658-b8fd-5c37acb0acc8 req-a2fcba0b-fb98-4b0d-b1c6-2ec03f1ce857 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: a529906d-6908-4a37-ac57-db4384de2893] Received event network-changed-f2330816-cb0c-4408-9178-8d732ae1a45b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:35:50 compute-0 nova_compute[250018]: 2026-01-20 14:35:50.159 250022 DEBUG nova.compute.manager [req-cc4bc527-8ce7-4658-b8fd-5c37acb0acc8 req-a2fcba0b-fb98-4b0d-b1c6-2ec03f1ce857 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: a529906d-6908-4a37-ac57-db4384de2893] Refreshing instance network info cache due to event network-changed-f2330816-cb0c-4408-9178-8d732ae1a45b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:35:50 compute-0 nova_compute[250018]: 2026-01-20 14:35:50.159 250022 DEBUG oslo_concurrency.lockutils [req-cc4bc527-8ce7-4658-b8fd-5c37acb0acc8 req-a2fcba0b-fb98-4b0d-b1c6-2ec03f1ce857 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-a529906d-6908-4a37-ac57-db4384de2893" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:35:50 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:35:50 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:35:50 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:35:50.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:35:50 compute-0 nova_compute[250018]: 2026-01-20 14:35:50.296 250022 DEBUG nova.network.neutron [None req-76fb2ae9-c4fb-4be1-b0bb-b58de89b0ed4 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] [instance: a529906d-6908-4a37-ac57-db4384de2893] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 14:35:50 compute-0 nova_compute[250018]: 2026-01-20 14:35:50.489 250022 DEBUG nova.network.neutron [None req-7e2d09a8-0b11-4182-a2c3-04985cfa3573 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] [instance: 1b3763f0-b328-4db2-844b-7f56cc13c19e] Updating instance_info_cache with network_info: [{"id": "dea207a0-1a8b-4f40-8fdc-1a5e76999db8", "address": "fa:16:3e:01:06:ce", "network": {"id": "38acf72f-2b62-44ac-86b7-15a313b89179", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-242703573-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "79601368a3db41e0aacec93e8fd7f1d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdea207a0-1a", "ovs_interfaceid": "dea207a0-1a8b-4f40-8fdc-1a5e76999db8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:35:50 compute-0 nova_compute[250018]: 2026-01-20 14:35:50.526 250022 DEBUG oslo_concurrency.lockutils [None req-7e2d09a8-0b11-4182-a2c3-04985cfa3573 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] Releasing lock "refresh_cache-1b3763f0-b328-4db2-844b-7f56cc13c19e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:35:50 compute-0 nova_compute[250018]: 2026-01-20 14:35:50.526 250022 DEBUG nova.compute.manager [None req-7e2d09a8-0b11-4182-a2c3-04985cfa3573 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] [instance: 1b3763f0-b328-4db2-844b-7f56cc13c19e] Instance network_info: |[{"id": "dea207a0-1a8b-4f40-8fdc-1a5e76999db8", "address": "fa:16:3e:01:06:ce", "network": {"id": "38acf72f-2b62-44ac-86b7-15a313b89179", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-242703573-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "79601368a3db41e0aacec93e8fd7f1d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdea207a0-1a", "ovs_interfaceid": "dea207a0-1a8b-4f40-8fdc-1a5e76999db8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 20 14:35:50 compute-0 nova_compute[250018]: 2026-01-20 14:35:50.527 250022 DEBUG oslo_concurrency.lockutils [req-1133b8dd-1a19-417a-b912-234b71184c47 req-54d31335-8c6d-41fd-949d-d78dc55774fe 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-1b3763f0-b328-4db2-844b-7f56cc13c19e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:35:50 compute-0 nova_compute[250018]: 2026-01-20 14:35:50.527 250022 DEBUG nova.network.neutron [req-1133b8dd-1a19-417a-b912-234b71184c47 req-54d31335-8c6d-41fd-949d-d78dc55774fe 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 1b3763f0-b328-4db2-844b-7f56cc13c19e] Refreshing network info cache for port dea207a0-1a8b-4f40-8fdc-1a5e76999db8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:35:50 compute-0 nova_compute[250018]: 2026-01-20 14:35:50.531 250022 DEBUG nova.virt.libvirt.driver [None req-7e2d09a8-0b11-4182-a2c3-04985cfa3573 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] [instance: 1b3763f0-b328-4db2-844b-7f56cc13c19e] Start _get_guest_xml network_info=[{"id": "dea207a0-1a8b-4f40-8fdc-1a5e76999db8", "address": "fa:16:3e:01:06:ce", "network": {"id": "38acf72f-2b62-44ac-86b7-15a313b89179", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-242703573-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "79601368a3db41e0aacec93e8fd7f1d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdea207a0-1a", "ovs_interfaceid": "dea207a0-1a8b-4f40-8fdc-1a5e76999db8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encrypted': False, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'guest_format': None, 'size': 0, 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 14:35:50 compute-0 nova_compute[250018]: 2026-01-20 14:35:50.537 250022 WARNING nova.virt.libvirt.driver [None req-7e2d09a8-0b11-4182-a2c3-04985cfa3573 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:35:50 compute-0 nova_compute[250018]: 2026-01-20 14:35:50.544 250022 DEBUG nova.virt.libvirt.host [None req-7e2d09a8-0b11-4182-a2c3-04985cfa3573 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 14:35:50 compute-0 nova_compute[250018]: 2026-01-20 14:35:50.545 250022 DEBUG nova.virt.libvirt.host [None req-7e2d09a8-0b11-4182-a2c3-04985cfa3573 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 14:35:50 compute-0 nova_compute[250018]: 2026-01-20 14:35:50.553 250022 DEBUG nova.virt.libvirt.host [None req-7e2d09a8-0b11-4182-a2c3-04985cfa3573 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 14:35:50 compute-0 nova_compute[250018]: 2026-01-20 14:35:50.554 250022 DEBUG nova.virt.libvirt.host [None req-7e2d09a8-0b11-4182-a2c3-04985cfa3573 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 14:35:50 compute-0 nova_compute[250018]: 2026-01-20 14:35:50.556 250022 DEBUG nova.virt.libvirt.driver [None req-7e2d09a8-0b11-4182-a2c3-04985cfa3573 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 14:35:50 compute-0 nova_compute[250018]: 2026-01-20 14:35:50.556 250022 DEBUG nova.virt.hardware [None req-7e2d09a8-0b11-4182-a2c3-04985cfa3573 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 14:35:50 compute-0 nova_compute[250018]: 2026-01-20 14:35:50.556 250022 DEBUG nova.virt.hardware [None req-7e2d09a8-0b11-4182-a2c3-04985cfa3573 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 14:35:50 compute-0 nova_compute[250018]: 2026-01-20 14:35:50.557 250022 DEBUG nova.virt.hardware [None req-7e2d09a8-0b11-4182-a2c3-04985cfa3573 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 14:35:50 compute-0 nova_compute[250018]: 2026-01-20 14:35:50.557 250022 DEBUG nova.virt.hardware [None req-7e2d09a8-0b11-4182-a2c3-04985cfa3573 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 14:35:50 compute-0 nova_compute[250018]: 2026-01-20 14:35:50.557 250022 DEBUG nova.virt.hardware [None req-7e2d09a8-0b11-4182-a2c3-04985cfa3573 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 14:35:50 compute-0 nova_compute[250018]: 2026-01-20 14:35:50.558 250022 DEBUG nova.virt.hardware [None req-7e2d09a8-0b11-4182-a2c3-04985cfa3573 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 14:35:50 compute-0 nova_compute[250018]: 2026-01-20 14:35:50.558 250022 DEBUG nova.virt.hardware [None req-7e2d09a8-0b11-4182-a2c3-04985cfa3573 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 14:35:50 compute-0 nova_compute[250018]: 2026-01-20 14:35:50.558 250022 DEBUG nova.virt.hardware [None req-7e2d09a8-0b11-4182-a2c3-04985cfa3573 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 14:35:50 compute-0 nova_compute[250018]: 2026-01-20 14:35:50.559 250022 DEBUG nova.virt.hardware [None req-7e2d09a8-0b11-4182-a2c3-04985cfa3573 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 14:35:50 compute-0 nova_compute[250018]: 2026-01-20 14:35:50.559 250022 DEBUG nova.virt.hardware [None req-7e2d09a8-0b11-4182-a2c3-04985cfa3573 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 14:35:50 compute-0 nova_compute[250018]: 2026-01-20 14:35:50.559 250022 DEBUG nova.virt.hardware [None req-7e2d09a8-0b11-4182-a2c3-04985cfa3573 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 14:35:50 compute-0 nova_compute[250018]: 2026-01-20 14:35:50.563 250022 DEBUG oslo_concurrency.processutils [None req-7e2d09a8-0b11-4182-a2c3-04985cfa3573 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:35:50 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e199 do_prune osdmap full prune enabled
Jan 20 14:35:50 compute-0 ceph-mon[74360]: pgmap v1391: 321 pgs: 4 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 315 active+clean; 89 MiB data, 534 MiB used, 20 GiB / 21 GiB avail; 119 KiB/s rd, 780 KiB/s wr, 172 op/s
Jan 20 14:35:50 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/260268613' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:35:50 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e200 e200: 3 total, 3 up, 3 in
Jan 20 14:35:50 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e200: 3 total, 3 up, 3 in
Jan 20 14:35:50 compute-0 nova_compute[250018]: 2026-01-20 14:35:50.998 250022 DEBUG nova.network.neutron [None req-76fb2ae9-c4fb-4be1-b0bb-b58de89b0ed4 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] [instance: a529906d-6908-4a37-ac57-db4384de2893] Updating instance_info_cache with network_info: [{"id": "f2330816-cb0c-4408-9178-8d732ae1a45b", "address": "fa:16:3e:de:d2:be", "network": {"id": "abb83e3e-0b12-431b-ad86-a1d271b5b46a", "bridge": "br-int", "label": "tempest-ImagesTestJSON-766235638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3750c56415134773aa9d9880038f1749", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2330816-cb", "ovs_interfaceid": "f2330816-cb0c-4408-9178-8d732ae1a45b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:35:51 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:35:51 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2648038281' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:35:51 compute-0 nova_compute[250018]: 2026-01-20 14:35:51.032 250022 DEBUG oslo_concurrency.processutils [None req-7e2d09a8-0b11-4182-a2c3-04985cfa3573 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:35:51 compute-0 nova_compute[250018]: 2026-01-20 14:35:51.060 250022 DEBUG nova.storage.rbd_utils [None req-7e2d09a8-0b11-4182-a2c3-04985cfa3573 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] rbd image 1b3763f0-b328-4db2-844b-7f56cc13c19e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:35:51 compute-0 nova_compute[250018]: 2026-01-20 14:35:51.066 250022 DEBUG oslo_concurrency.processutils [None req-7e2d09a8-0b11-4182-a2c3-04985cfa3573 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:35:51 compute-0 nova_compute[250018]: 2026-01-20 14:35:51.101 250022 DEBUG oslo_concurrency.lockutils [None req-76fb2ae9-c4fb-4be1-b0bb-b58de89b0ed4 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Releasing lock "refresh_cache-a529906d-6908-4a37-ac57-db4384de2893" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:35:51 compute-0 nova_compute[250018]: 2026-01-20 14:35:51.102 250022 DEBUG nova.compute.manager [None req-76fb2ae9-c4fb-4be1-b0bb-b58de89b0ed4 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] [instance: a529906d-6908-4a37-ac57-db4384de2893] Instance network_info: |[{"id": "f2330816-cb0c-4408-9178-8d732ae1a45b", "address": "fa:16:3e:de:d2:be", "network": {"id": "abb83e3e-0b12-431b-ad86-a1d271b5b46a", "bridge": "br-int", "label": "tempest-ImagesTestJSON-766235638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3750c56415134773aa9d9880038f1749", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2330816-cb", "ovs_interfaceid": "f2330816-cb0c-4408-9178-8d732ae1a45b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 20 14:35:51 compute-0 nova_compute[250018]: 2026-01-20 14:35:51.102 250022 DEBUG oslo_concurrency.lockutils [req-cc4bc527-8ce7-4658-b8fd-5c37acb0acc8 req-a2fcba0b-fb98-4b0d-b1c6-2ec03f1ce857 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-a529906d-6908-4a37-ac57-db4384de2893" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:35:51 compute-0 nova_compute[250018]: 2026-01-20 14:35:51.103 250022 DEBUG nova.network.neutron [req-cc4bc527-8ce7-4658-b8fd-5c37acb0acc8 req-a2fcba0b-fb98-4b0d-b1c6-2ec03f1ce857 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: a529906d-6908-4a37-ac57-db4384de2893] Refreshing network info cache for port f2330816-cb0c-4408-9178-8d732ae1a45b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:35:51 compute-0 nova_compute[250018]: 2026-01-20 14:35:51.106 250022 DEBUG nova.virt.libvirt.driver [None req-76fb2ae9-c4fb-4be1-b0bb-b58de89b0ed4 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] [instance: a529906d-6908-4a37-ac57-db4384de2893] Start _get_guest_xml network_info=[{"id": "f2330816-cb0c-4408-9178-8d732ae1a45b", "address": "fa:16:3e:de:d2:be", "network": {"id": "abb83e3e-0b12-431b-ad86-a1d271b5b46a", "bridge": "br-int", "label": "tempest-ImagesTestJSON-766235638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3750c56415134773aa9d9880038f1749", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2330816-cb", "ovs_interfaceid": "f2330816-cb0c-4408-9178-8d732ae1a45b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encrypted': False, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'guest_format': None, 'size': 0, 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 14:35:51 compute-0 nova_compute[250018]: 2026-01-20 14:35:51.110 250022 WARNING nova.virt.libvirt.driver [None req-76fb2ae9-c4fb-4be1-b0bb-b58de89b0ed4 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:35:51 compute-0 nova_compute[250018]: 2026-01-20 14:35:51.115 250022 DEBUG nova.virt.libvirt.host [None req-76fb2ae9-c4fb-4be1-b0bb-b58de89b0ed4 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 14:35:51 compute-0 nova_compute[250018]: 2026-01-20 14:35:51.116 250022 DEBUG nova.virt.libvirt.host [None req-76fb2ae9-c4fb-4be1-b0bb-b58de89b0ed4 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 14:35:51 compute-0 nova_compute[250018]: 2026-01-20 14:35:51.119 250022 DEBUG nova.virt.libvirt.host [None req-76fb2ae9-c4fb-4be1-b0bb-b58de89b0ed4 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 14:35:51 compute-0 nova_compute[250018]: 2026-01-20 14:35:51.119 250022 DEBUG nova.virt.libvirt.host [None req-76fb2ae9-c4fb-4be1-b0bb-b58de89b0ed4 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 14:35:51 compute-0 nova_compute[250018]: 2026-01-20 14:35:51.120 250022 DEBUG nova.virt.libvirt.driver [None req-76fb2ae9-c4fb-4be1-b0bb-b58de89b0ed4 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 14:35:51 compute-0 nova_compute[250018]: 2026-01-20 14:35:51.120 250022 DEBUG nova.virt.hardware [None req-76fb2ae9-c4fb-4be1-b0bb-b58de89b0ed4 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 14:35:51 compute-0 nova_compute[250018]: 2026-01-20 14:35:51.121 250022 DEBUG nova.virt.hardware [None req-76fb2ae9-c4fb-4be1-b0bb-b58de89b0ed4 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 14:35:51 compute-0 nova_compute[250018]: 2026-01-20 14:35:51.121 250022 DEBUG nova.virt.hardware [None req-76fb2ae9-c4fb-4be1-b0bb-b58de89b0ed4 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 14:35:51 compute-0 nova_compute[250018]: 2026-01-20 14:35:51.121 250022 DEBUG nova.virt.hardware [None req-76fb2ae9-c4fb-4be1-b0bb-b58de89b0ed4 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 14:35:51 compute-0 nova_compute[250018]: 2026-01-20 14:35:51.121 250022 DEBUG nova.virt.hardware [None req-76fb2ae9-c4fb-4be1-b0bb-b58de89b0ed4 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 14:35:51 compute-0 nova_compute[250018]: 2026-01-20 14:35:51.121 250022 DEBUG nova.virt.hardware [None req-76fb2ae9-c4fb-4be1-b0bb-b58de89b0ed4 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 14:35:51 compute-0 nova_compute[250018]: 2026-01-20 14:35:51.122 250022 DEBUG nova.virt.hardware [None req-76fb2ae9-c4fb-4be1-b0bb-b58de89b0ed4 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 14:35:51 compute-0 nova_compute[250018]: 2026-01-20 14:35:51.122 250022 DEBUG nova.virt.hardware [None req-76fb2ae9-c4fb-4be1-b0bb-b58de89b0ed4 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 14:35:51 compute-0 nova_compute[250018]: 2026-01-20 14:35:51.122 250022 DEBUG nova.virt.hardware [None req-76fb2ae9-c4fb-4be1-b0bb-b58de89b0ed4 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 14:35:51 compute-0 nova_compute[250018]: 2026-01-20 14:35:51.122 250022 DEBUG nova.virt.hardware [None req-76fb2ae9-c4fb-4be1-b0bb-b58de89b0ed4 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 14:35:51 compute-0 nova_compute[250018]: 2026-01-20 14:35:51.122 250022 DEBUG nova.virt.hardware [None req-76fb2ae9-c4fb-4be1-b0bb-b58de89b0ed4 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 14:35:51 compute-0 nova_compute[250018]: 2026-01-20 14:35:51.125 250022 DEBUG oslo_concurrency.processutils [None req-76fb2ae9-c4fb-4be1-b0bb-b58de89b0ed4 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:35:51 compute-0 nova_compute[250018]: 2026-01-20 14:35:51.236 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:35:51 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:35:51 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:35:51 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:35:51.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:35:51 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1393: 321 pgs: 4 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 315 active+clean; 134 MiB data, 550 MiB used, 20 GiB / 21 GiB avail; 150 KiB/s rd, 7.1 MiB/s wr, 225 op/s
Jan 20 14:35:51 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:35:51 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/659460142' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:35:51 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:35:51 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2898319916' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:35:51 compute-0 nova_compute[250018]: 2026-01-20 14:35:51.535 250022 DEBUG oslo_concurrency.processutils [None req-76fb2ae9-c4fb-4be1-b0bb-b58de89b0ed4 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.410s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:35:51 compute-0 nova_compute[250018]: 2026-01-20 14:35:51.559 250022 DEBUG nova.storage.rbd_utils [None req-76fb2ae9-c4fb-4be1-b0bb-b58de89b0ed4 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] rbd image a529906d-6908-4a37-ac57-db4384de2893_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:35:51 compute-0 nova_compute[250018]: 2026-01-20 14:35:51.563 250022 DEBUG oslo_concurrency.processutils [None req-76fb2ae9-c4fb-4be1-b0bb-b58de89b0ed4 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:35:51 compute-0 nova_compute[250018]: 2026-01-20 14:35:51.585 250022 DEBUG oslo_concurrency.processutils [None req-7e2d09a8-0b11-4182-a2c3-04985cfa3573 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:35:51 compute-0 nova_compute[250018]: 2026-01-20 14:35:51.587 250022 DEBUG nova.virt.libvirt.vif [None req-7e2d09a8-0b11-4182-a2c3-04985cfa3573 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:35:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerTestJSON-server-84805515',display_name='tempest-ImagesOneServerTestJSON-server-84805515',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservertestjson-server-84805515',id=49,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='79601368a3db41e0aacec93e8fd7f1d4',ramdisk_id='',reservation_id='r-pl8tjcab',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerTestJSON-749029286',owner_user_name='tempest-ImagesOneServerTestJSON-749029286-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:35:46Z,user_data=None,user_id='00cec8cbb72b489da46855f8b3b4c42c',uuid=1b3763f0-b328-4db2-844b-7f56cc13c19e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "dea207a0-1a8b-4f40-8fdc-1a5e76999db8", "address": "fa:16:3e:01:06:ce", "network": {"id": "38acf72f-2b62-44ac-86b7-15a313b89179", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-242703573-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "79601368a3db41e0aacec93e8fd7f1d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdea207a0-1a", "ovs_interfaceid": "dea207a0-1a8b-4f40-8fdc-1a5e76999db8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 14:35:51 compute-0 nova_compute[250018]: 2026-01-20 14:35:51.587 250022 DEBUG nova.network.os_vif_util [None req-7e2d09a8-0b11-4182-a2c3-04985cfa3573 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] Converting VIF {"id": "dea207a0-1a8b-4f40-8fdc-1a5e76999db8", "address": "fa:16:3e:01:06:ce", "network": {"id": "38acf72f-2b62-44ac-86b7-15a313b89179", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-242703573-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "79601368a3db41e0aacec93e8fd7f1d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdea207a0-1a", "ovs_interfaceid": "dea207a0-1a8b-4f40-8fdc-1a5e76999db8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:35:51 compute-0 nova_compute[250018]: 2026-01-20 14:35:51.588 250022 DEBUG nova.network.os_vif_util [None req-7e2d09a8-0b11-4182-a2c3-04985cfa3573 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:01:06:ce,bridge_name='br-int',has_traffic_filtering=True,id=dea207a0-1a8b-4f40-8fdc-1a5e76999db8,network=Network(38acf72f-2b62-44ac-86b7-15a313b89179),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdea207a0-1a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:35:51 compute-0 nova_compute[250018]: 2026-01-20 14:35:51.589 250022 DEBUG nova.objects.instance [None req-7e2d09a8-0b11-4182-a2c3-04985cfa3573 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] Lazy-loading 'pci_devices' on Instance uuid 1b3763f0-b328-4db2-844b-7f56cc13c19e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:35:51 compute-0 nova_compute[250018]: 2026-01-20 14:35:51.629 250022 DEBUG nova.network.neutron [req-1133b8dd-1a19-417a-b912-234b71184c47 req-54d31335-8c6d-41fd-949d-d78dc55774fe 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 1b3763f0-b328-4db2-844b-7f56cc13c19e] Updated VIF entry in instance network info cache for port dea207a0-1a8b-4f40-8fdc-1a5e76999db8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 14:35:51 compute-0 nova_compute[250018]: 2026-01-20 14:35:51.630 250022 DEBUG nova.network.neutron [req-1133b8dd-1a19-417a-b912-234b71184c47 req-54d31335-8c6d-41fd-949d-d78dc55774fe 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 1b3763f0-b328-4db2-844b-7f56cc13c19e] Updating instance_info_cache with network_info: [{"id": "dea207a0-1a8b-4f40-8fdc-1a5e76999db8", "address": "fa:16:3e:01:06:ce", "network": {"id": "38acf72f-2b62-44ac-86b7-15a313b89179", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-242703573-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "79601368a3db41e0aacec93e8fd7f1d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdea207a0-1a", "ovs_interfaceid": "dea207a0-1a8b-4f40-8fdc-1a5e76999db8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:35:51 compute-0 nova_compute[250018]: 2026-01-20 14:35:51.676 250022 DEBUG nova.virt.libvirt.driver [None req-7e2d09a8-0b11-4182-a2c3-04985cfa3573 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] [instance: 1b3763f0-b328-4db2-844b-7f56cc13c19e] End _get_guest_xml xml=<domain type="kvm">
Jan 20 14:35:51 compute-0 nova_compute[250018]:   <uuid>1b3763f0-b328-4db2-844b-7f56cc13c19e</uuid>
Jan 20 14:35:51 compute-0 nova_compute[250018]:   <name>instance-00000031</name>
Jan 20 14:35:51 compute-0 nova_compute[250018]:   <memory>131072</memory>
Jan 20 14:35:51 compute-0 nova_compute[250018]:   <vcpu>1</vcpu>
Jan 20 14:35:51 compute-0 nova_compute[250018]:   <metadata>
Jan 20 14:35:51 compute-0 nova_compute[250018]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 14:35:51 compute-0 nova_compute[250018]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 14:35:51 compute-0 nova_compute[250018]:       <nova:name>tempest-ImagesOneServerTestJSON-server-84805515</nova:name>
Jan 20 14:35:51 compute-0 nova_compute[250018]:       <nova:creationTime>2026-01-20 14:35:50</nova:creationTime>
Jan 20 14:35:51 compute-0 nova_compute[250018]:       <nova:flavor name="m1.nano">
Jan 20 14:35:51 compute-0 nova_compute[250018]:         <nova:memory>128</nova:memory>
Jan 20 14:35:51 compute-0 nova_compute[250018]:         <nova:disk>1</nova:disk>
Jan 20 14:35:51 compute-0 nova_compute[250018]:         <nova:swap>0</nova:swap>
Jan 20 14:35:51 compute-0 nova_compute[250018]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 14:35:51 compute-0 nova_compute[250018]:         <nova:vcpus>1</nova:vcpus>
Jan 20 14:35:51 compute-0 nova_compute[250018]:       </nova:flavor>
Jan 20 14:35:51 compute-0 nova_compute[250018]:       <nova:owner>
Jan 20 14:35:51 compute-0 nova_compute[250018]:         <nova:user uuid="00cec8cbb72b489da46855f8b3b4c42c">tempest-ImagesOneServerTestJSON-749029286-project-member</nova:user>
Jan 20 14:35:51 compute-0 nova_compute[250018]:         <nova:project uuid="79601368a3db41e0aacec93e8fd7f1d4">tempest-ImagesOneServerTestJSON-749029286</nova:project>
Jan 20 14:35:51 compute-0 nova_compute[250018]:       </nova:owner>
Jan 20 14:35:51 compute-0 nova_compute[250018]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 14:35:51 compute-0 nova_compute[250018]:       <nova:ports>
Jan 20 14:35:51 compute-0 nova_compute[250018]:         <nova:port uuid="dea207a0-1a8b-4f40-8fdc-1a5e76999db8">
Jan 20 14:35:51 compute-0 nova_compute[250018]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Jan 20 14:35:51 compute-0 nova_compute[250018]:         </nova:port>
Jan 20 14:35:51 compute-0 nova_compute[250018]:       </nova:ports>
Jan 20 14:35:51 compute-0 nova_compute[250018]:     </nova:instance>
Jan 20 14:35:51 compute-0 nova_compute[250018]:   </metadata>
Jan 20 14:35:51 compute-0 nova_compute[250018]:   <sysinfo type="smbios">
Jan 20 14:35:51 compute-0 nova_compute[250018]:     <system>
Jan 20 14:35:51 compute-0 nova_compute[250018]:       <entry name="manufacturer">RDO</entry>
Jan 20 14:35:51 compute-0 nova_compute[250018]:       <entry name="product">OpenStack Compute</entry>
Jan 20 14:35:51 compute-0 nova_compute[250018]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 14:35:51 compute-0 nova_compute[250018]:       <entry name="serial">1b3763f0-b328-4db2-844b-7f56cc13c19e</entry>
Jan 20 14:35:51 compute-0 nova_compute[250018]:       <entry name="uuid">1b3763f0-b328-4db2-844b-7f56cc13c19e</entry>
Jan 20 14:35:51 compute-0 nova_compute[250018]:       <entry name="family">Virtual Machine</entry>
Jan 20 14:35:51 compute-0 nova_compute[250018]:     </system>
Jan 20 14:35:51 compute-0 nova_compute[250018]:   </sysinfo>
Jan 20 14:35:51 compute-0 nova_compute[250018]:   <os>
Jan 20 14:35:51 compute-0 nova_compute[250018]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 14:35:51 compute-0 nova_compute[250018]:     <boot dev="hd"/>
Jan 20 14:35:51 compute-0 nova_compute[250018]:     <smbios mode="sysinfo"/>
Jan 20 14:35:51 compute-0 nova_compute[250018]:   </os>
Jan 20 14:35:51 compute-0 nova_compute[250018]:   <features>
Jan 20 14:35:51 compute-0 nova_compute[250018]:     <acpi/>
Jan 20 14:35:51 compute-0 nova_compute[250018]:     <apic/>
Jan 20 14:35:51 compute-0 nova_compute[250018]:     <vmcoreinfo/>
Jan 20 14:35:51 compute-0 nova_compute[250018]:   </features>
Jan 20 14:35:51 compute-0 nova_compute[250018]:   <clock offset="utc">
Jan 20 14:35:51 compute-0 nova_compute[250018]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 14:35:51 compute-0 nova_compute[250018]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 14:35:51 compute-0 nova_compute[250018]:     <timer name="hpet" present="no"/>
Jan 20 14:35:51 compute-0 nova_compute[250018]:   </clock>
Jan 20 14:35:51 compute-0 nova_compute[250018]:   <cpu mode="custom" match="exact">
Jan 20 14:35:51 compute-0 nova_compute[250018]:     <model>Nehalem</model>
Jan 20 14:35:51 compute-0 nova_compute[250018]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 14:35:51 compute-0 nova_compute[250018]:   </cpu>
Jan 20 14:35:51 compute-0 nova_compute[250018]:   <devices>
Jan 20 14:35:51 compute-0 nova_compute[250018]:     <disk type="network" device="disk">
Jan 20 14:35:51 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 14:35:51 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/1b3763f0-b328-4db2-844b-7f56cc13c19e_disk">
Jan 20 14:35:51 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:35:51 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:35:51 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:35:51 compute-0 nova_compute[250018]:       </source>
Jan 20 14:35:51 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 14:35:51 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:35:51 compute-0 nova_compute[250018]:       </auth>
Jan 20 14:35:51 compute-0 nova_compute[250018]:       <target dev="vda" bus="virtio"/>
Jan 20 14:35:51 compute-0 nova_compute[250018]:     </disk>
Jan 20 14:35:51 compute-0 nova_compute[250018]:     <disk type="network" device="cdrom">
Jan 20 14:35:51 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 14:35:51 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/1b3763f0-b328-4db2-844b-7f56cc13c19e_disk.config">
Jan 20 14:35:51 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:35:51 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:35:51 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:35:51 compute-0 nova_compute[250018]:       </source>
Jan 20 14:35:51 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 14:35:51 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:35:51 compute-0 nova_compute[250018]:       </auth>
Jan 20 14:35:51 compute-0 nova_compute[250018]:       <target dev="sda" bus="sata"/>
Jan 20 14:35:51 compute-0 nova_compute[250018]:     </disk>
Jan 20 14:35:51 compute-0 nova_compute[250018]:     <interface type="ethernet">
Jan 20 14:35:51 compute-0 nova_compute[250018]:       <mac address="fa:16:3e:01:06:ce"/>
Jan 20 14:35:51 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 14:35:51 compute-0 nova_compute[250018]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 14:35:51 compute-0 nova_compute[250018]:       <mtu size="1442"/>
Jan 20 14:35:51 compute-0 nova_compute[250018]:       <target dev="tapdea207a0-1a"/>
Jan 20 14:35:51 compute-0 nova_compute[250018]:     </interface>
Jan 20 14:35:51 compute-0 nova_compute[250018]:     <serial type="pty">
Jan 20 14:35:51 compute-0 nova_compute[250018]:       <log file="/var/lib/nova/instances/1b3763f0-b328-4db2-844b-7f56cc13c19e/console.log" append="off"/>
Jan 20 14:35:51 compute-0 nova_compute[250018]:     </serial>
Jan 20 14:35:51 compute-0 nova_compute[250018]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 14:35:51 compute-0 nova_compute[250018]:     <video>
Jan 20 14:35:51 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 14:35:51 compute-0 nova_compute[250018]:     </video>
Jan 20 14:35:51 compute-0 nova_compute[250018]:     <input type="tablet" bus="usb"/>
Jan 20 14:35:51 compute-0 nova_compute[250018]:     <rng model="virtio">
Jan 20 14:35:51 compute-0 nova_compute[250018]:       <backend model="random">/dev/urandom</backend>
Jan 20 14:35:51 compute-0 nova_compute[250018]:     </rng>
Jan 20 14:35:51 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root"/>
Jan 20 14:35:51 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:35:51 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:35:51 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:35:51 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:35:51 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:35:51 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:35:51 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:35:51 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:35:51 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:35:51 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:35:51 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:35:51 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:35:51 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:35:51 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:35:51 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:35:51 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:35:51 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:35:51 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:35:51 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:35:51 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:35:51 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:35:51 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:35:51 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:35:51 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:35:51 compute-0 nova_compute[250018]:     <controller type="usb" index="0"/>
Jan 20 14:35:51 compute-0 nova_compute[250018]:     <memballoon model="virtio">
Jan 20 14:35:51 compute-0 nova_compute[250018]:       <stats period="10"/>
Jan 20 14:35:51 compute-0 nova_compute[250018]:     </memballoon>
Jan 20 14:35:51 compute-0 nova_compute[250018]:   </devices>
Jan 20 14:35:51 compute-0 nova_compute[250018]: </domain>
Jan 20 14:35:51 compute-0 nova_compute[250018]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 14:35:51 compute-0 nova_compute[250018]: 2026-01-20 14:35:51.678 250022 DEBUG nova.compute.manager [None req-7e2d09a8-0b11-4182-a2c3-04985cfa3573 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] [instance: 1b3763f0-b328-4db2-844b-7f56cc13c19e] Preparing to wait for external event network-vif-plugged-dea207a0-1a8b-4f40-8fdc-1a5e76999db8 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 20 14:35:51 compute-0 nova_compute[250018]: 2026-01-20 14:35:51.678 250022 DEBUG oslo_concurrency.lockutils [None req-7e2d09a8-0b11-4182-a2c3-04985cfa3573 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] Acquiring lock "1b3763f0-b328-4db2-844b-7f56cc13c19e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:35:51 compute-0 nova_compute[250018]: 2026-01-20 14:35:51.678 250022 DEBUG oslo_concurrency.lockutils [None req-7e2d09a8-0b11-4182-a2c3-04985cfa3573 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] Lock "1b3763f0-b328-4db2-844b-7f56cc13c19e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:35:51 compute-0 nova_compute[250018]: 2026-01-20 14:35:51.679 250022 DEBUG oslo_concurrency.lockutils [None req-7e2d09a8-0b11-4182-a2c3-04985cfa3573 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] Lock "1b3763f0-b328-4db2-844b-7f56cc13c19e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:35:51 compute-0 nova_compute[250018]: 2026-01-20 14:35:51.679 250022 DEBUG nova.virt.libvirt.vif [None req-7e2d09a8-0b11-4182-a2c3-04985cfa3573 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:35:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerTestJSON-server-84805515',display_name='tempest-ImagesOneServerTestJSON-server-84805515',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservertestjson-server-84805515',id=49,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='79601368a3db41e0aacec93e8fd7f1d4',ramdisk_id='',reservation_id='r-pl8tjcab',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerTestJSON-749029286',owner_user_name='tempest-ImagesOneServerTestJSON-749029286-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:35:46Z,user_data=None,user_id='00cec8cbb72b489da46855f8b3b4c42c',uuid=1b3763f0-b328-4db2-844b-7f56cc13c19e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "dea207a0-1a8b-4f40-8fdc-1a5e76999db8", "address": "fa:16:3e:01:06:ce", "network": {"id": "38acf72f-2b62-44ac-86b7-15a313b89179", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-242703573-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "79601368a3db41e0aacec93e8fd7f1d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdea207a0-1a", "ovs_interfaceid": "dea207a0-1a8b-4f40-8fdc-1a5e76999db8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 14:35:51 compute-0 nova_compute[250018]: 2026-01-20 14:35:51.680 250022 DEBUG nova.network.os_vif_util [None req-7e2d09a8-0b11-4182-a2c3-04985cfa3573 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] Converting VIF {"id": "dea207a0-1a8b-4f40-8fdc-1a5e76999db8", "address": "fa:16:3e:01:06:ce", "network": {"id": "38acf72f-2b62-44ac-86b7-15a313b89179", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-242703573-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "79601368a3db41e0aacec93e8fd7f1d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdea207a0-1a", "ovs_interfaceid": "dea207a0-1a8b-4f40-8fdc-1a5e76999db8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:35:51 compute-0 nova_compute[250018]: 2026-01-20 14:35:51.680 250022 DEBUG nova.network.os_vif_util [None req-7e2d09a8-0b11-4182-a2c3-04985cfa3573 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:01:06:ce,bridge_name='br-int',has_traffic_filtering=True,id=dea207a0-1a8b-4f40-8fdc-1a5e76999db8,network=Network(38acf72f-2b62-44ac-86b7-15a313b89179),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdea207a0-1a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:35:51 compute-0 nova_compute[250018]: 2026-01-20 14:35:51.681 250022 DEBUG os_vif [None req-7e2d09a8-0b11-4182-a2c3-04985cfa3573 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:01:06:ce,bridge_name='br-int',has_traffic_filtering=True,id=dea207a0-1a8b-4f40-8fdc-1a5e76999db8,network=Network(38acf72f-2b62-44ac-86b7-15a313b89179),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdea207a0-1a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 14:35:51 compute-0 nova_compute[250018]: 2026-01-20 14:35:51.682 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:35:51 compute-0 nova_compute[250018]: 2026-01-20 14:35:51.683 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:35:51 compute-0 nova_compute[250018]: 2026-01-20 14:35:51.684 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:35:51 compute-0 nova_compute[250018]: 2026-01-20 14:35:51.687 250022 DEBUG oslo_concurrency.lockutils [req-1133b8dd-1a19-417a-b912-234b71184c47 req-54d31335-8c6d-41fd-949d-d78dc55774fe 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-1b3763f0-b328-4db2-844b-7f56cc13c19e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:35:51 compute-0 nova_compute[250018]: 2026-01-20 14:35:51.692 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:35:51 compute-0 nova_compute[250018]: 2026-01-20 14:35:51.692 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdea207a0-1a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:35:51 compute-0 nova_compute[250018]: 2026-01-20 14:35:51.693 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapdea207a0-1a, col_values=(('external_ids', {'iface-id': 'dea207a0-1a8b-4f40-8fdc-1a5e76999db8', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:01:06:ce', 'vm-uuid': '1b3763f0-b328-4db2-844b-7f56cc13c19e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:35:51 compute-0 nova_compute[250018]: 2026-01-20 14:35:51.694 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:35:51 compute-0 NetworkManager[48960]: <info>  [1768919751.6956] manager: (tapdea207a0-1a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/74)
Jan 20 14:35:51 compute-0 nova_compute[250018]: 2026-01-20 14:35:51.696 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:35:51 compute-0 nova_compute[250018]: 2026-01-20 14:35:51.702 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:35:51 compute-0 nova_compute[250018]: 2026-01-20 14:35:51.703 250022 INFO os_vif [None req-7e2d09a8-0b11-4182-a2c3-04985cfa3573 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:01:06:ce,bridge_name='br-int',has_traffic_filtering=True,id=dea207a0-1a8b-4f40-8fdc-1a5e76999db8,network=Network(38acf72f-2b62-44ac-86b7-15a313b89179),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdea207a0-1a')
Jan 20 14:35:51 compute-0 nova_compute[250018]: 2026-01-20 14:35:51.761 250022 DEBUG nova.virt.libvirt.driver [None req-7e2d09a8-0b11-4182-a2c3-04985cfa3573 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:35:51 compute-0 nova_compute[250018]: 2026-01-20 14:35:51.761 250022 DEBUG nova.virt.libvirt.driver [None req-7e2d09a8-0b11-4182-a2c3-04985cfa3573 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:35:51 compute-0 nova_compute[250018]: 2026-01-20 14:35:51.761 250022 DEBUG nova.virt.libvirt.driver [None req-7e2d09a8-0b11-4182-a2c3-04985cfa3573 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] No VIF found with MAC fa:16:3e:01:06:ce, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 14:35:51 compute-0 nova_compute[250018]: 2026-01-20 14:35:51.762 250022 INFO nova.virt.libvirt.driver [None req-7e2d09a8-0b11-4182-a2c3-04985cfa3573 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] [instance: 1b3763f0-b328-4db2-844b-7f56cc13c19e] Using config drive
Jan 20 14:35:51 compute-0 nova_compute[250018]: 2026-01-20 14:35:51.789 250022 DEBUG nova.storage.rbd_utils [None req-7e2d09a8-0b11-4182-a2c3-04985cfa3573 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] rbd image 1b3763f0-b328-4db2-844b-7f56cc13c19e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:35:51 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e200 do_prune osdmap full prune enabled
Jan 20 14:35:51 compute-0 ceph-mon[74360]: osdmap e200: 3 total, 3 up, 3 in
Jan 20 14:35:51 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/2648038281' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:35:51 compute-0 ceph-mon[74360]: pgmap v1393: 321 pgs: 4 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 315 active+clean; 134 MiB data, 550 MiB used, 20 GiB / 21 GiB avail; 150 KiB/s rd, 7.1 MiB/s wr, 225 op/s
Jan 20 14:35:51 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/659460142' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:35:51 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/2898319916' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:35:51 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e201 e201: 3 total, 3 up, 3 in
Jan 20 14:35:51 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e201: 3 total, 3 up, 3 in
Jan 20 14:35:51 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:35:51 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3689223483' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:35:52 compute-0 nova_compute[250018]: 2026-01-20 14:35:52.003 250022 DEBUG oslo_concurrency.processutils [None req-76fb2ae9-c4fb-4be1-b0bb-b58de89b0ed4 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:35:52 compute-0 nova_compute[250018]: 2026-01-20 14:35:52.005 250022 DEBUG nova.virt.libvirt.vif [None req-76fb2ae9-c4fb-4be1-b0bb-b58de89b0ed4 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:35:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-207767720',display_name='tempest-ImagesTestJSON-server-207767720',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-207767720',id=50,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3750c56415134773aa9d9880038f1749',ramdisk_id='',reservation_id='r-0a86zg6l',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-338390217',owner_user_name='tempest-ImagesTestJSON-338390217-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:35:47Z,user_data=None,user_id='56e2959629114d3d8a48e7a80ed96c4b',uuid=a529906d-6908-4a37-ac57-db4384de2893,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f2330816-cb0c-4408-9178-8d732ae1a45b", "address": "fa:16:3e:de:d2:be", "network": {"id": "abb83e3e-0b12-431b-ad86-a1d271b5b46a", "bridge": "br-int", "label": "tempest-ImagesTestJSON-766235638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3750c56415134773aa9d9880038f1749", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2330816-cb", "ovs_interfaceid": "f2330816-cb0c-4408-9178-8d732ae1a45b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 14:35:52 compute-0 nova_compute[250018]: 2026-01-20 14:35:52.005 250022 DEBUG nova.network.os_vif_util [None req-76fb2ae9-c4fb-4be1-b0bb-b58de89b0ed4 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Converting VIF {"id": "f2330816-cb0c-4408-9178-8d732ae1a45b", "address": "fa:16:3e:de:d2:be", "network": {"id": "abb83e3e-0b12-431b-ad86-a1d271b5b46a", "bridge": "br-int", "label": "tempest-ImagesTestJSON-766235638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3750c56415134773aa9d9880038f1749", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2330816-cb", "ovs_interfaceid": "f2330816-cb0c-4408-9178-8d732ae1a45b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:35:52 compute-0 nova_compute[250018]: 2026-01-20 14:35:52.005 250022 DEBUG nova.network.os_vif_util [None req-76fb2ae9-c4fb-4be1-b0bb-b58de89b0ed4 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:de:d2:be,bridge_name='br-int',has_traffic_filtering=True,id=f2330816-cb0c-4408-9178-8d732ae1a45b,network=Network(abb83e3e-0b12-431b-ad86-a1d271b5b46a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf2330816-cb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:35:52 compute-0 nova_compute[250018]: 2026-01-20 14:35:52.006 250022 DEBUG nova.objects.instance [None req-76fb2ae9-c4fb-4be1-b0bb-b58de89b0ed4 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Lazy-loading 'pci_devices' on Instance uuid a529906d-6908-4a37-ac57-db4384de2893 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:35:52 compute-0 nova_compute[250018]: 2026-01-20 14:35:52.061 250022 DEBUG nova.virt.libvirt.driver [None req-76fb2ae9-c4fb-4be1-b0bb-b58de89b0ed4 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] [instance: a529906d-6908-4a37-ac57-db4384de2893] End _get_guest_xml xml=<domain type="kvm">
Jan 20 14:35:52 compute-0 nova_compute[250018]:   <uuid>a529906d-6908-4a37-ac57-db4384de2893</uuid>
Jan 20 14:35:52 compute-0 nova_compute[250018]:   <name>instance-00000032</name>
Jan 20 14:35:52 compute-0 nova_compute[250018]:   <memory>131072</memory>
Jan 20 14:35:52 compute-0 nova_compute[250018]:   <vcpu>1</vcpu>
Jan 20 14:35:52 compute-0 nova_compute[250018]:   <metadata>
Jan 20 14:35:52 compute-0 nova_compute[250018]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 14:35:52 compute-0 nova_compute[250018]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 14:35:52 compute-0 nova_compute[250018]:       <nova:name>tempest-ImagesTestJSON-server-207767720</nova:name>
Jan 20 14:35:52 compute-0 nova_compute[250018]:       <nova:creationTime>2026-01-20 14:35:51</nova:creationTime>
Jan 20 14:35:52 compute-0 nova_compute[250018]:       <nova:flavor name="m1.nano">
Jan 20 14:35:52 compute-0 nova_compute[250018]:         <nova:memory>128</nova:memory>
Jan 20 14:35:52 compute-0 nova_compute[250018]:         <nova:disk>1</nova:disk>
Jan 20 14:35:52 compute-0 nova_compute[250018]:         <nova:swap>0</nova:swap>
Jan 20 14:35:52 compute-0 nova_compute[250018]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 14:35:52 compute-0 nova_compute[250018]:         <nova:vcpus>1</nova:vcpus>
Jan 20 14:35:52 compute-0 nova_compute[250018]:       </nova:flavor>
Jan 20 14:35:52 compute-0 nova_compute[250018]:       <nova:owner>
Jan 20 14:35:52 compute-0 nova_compute[250018]:         <nova:user uuid="56e2959629114d3d8a48e7a80ed96c4b">tempest-ImagesTestJSON-338390217-project-member</nova:user>
Jan 20 14:35:52 compute-0 nova_compute[250018]:         <nova:project uuid="3750c56415134773aa9d9880038f1749">tempest-ImagesTestJSON-338390217</nova:project>
Jan 20 14:35:52 compute-0 nova_compute[250018]:       </nova:owner>
Jan 20 14:35:52 compute-0 nova_compute[250018]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 14:35:52 compute-0 nova_compute[250018]:       <nova:ports>
Jan 20 14:35:52 compute-0 nova_compute[250018]:         <nova:port uuid="f2330816-cb0c-4408-9178-8d732ae1a45b">
Jan 20 14:35:52 compute-0 nova_compute[250018]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Jan 20 14:35:52 compute-0 nova_compute[250018]:         </nova:port>
Jan 20 14:35:52 compute-0 nova_compute[250018]:       </nova:ports>
Jan 20 14:35:52 compute-0 nova_compute[250018]:     </nova:instance>
Jan 20 14:35:52 compute-0 nova_compute[250018]:   </metadata>
Jan 20 14:35:52 compute-0 nova_compute[250018]:   <sysinfo type="smbios">
Jan 20 14:35:52 compute-0 nova_compute[250018]:     <system>
Jan 20 14:35:52 compute-0 nova_compute[250018]:       <entry name="manufacturer">RDO</entry>
Jan 20 14:35:52 compute-0 nova_compute[250018]:       <entry name="product">OpenStack Compute</entry>
Jan 20 14:35:52 compute-0 nova_compute[250018]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 14:35:52 compute-0 nova_compute[250018]:       <entry name="serial">a529906d-6908-4a37-ac57-db4384de2893</entry>
Jan 20 14:35:52 compute-0 nova_compute[250018]:       <entry name="uuid">a529906d-6908-4a37-ac57-db4384de2893</entry>
Jan 20 14:35:52 compute-0 nova_compute[250018]:       <entry name="family">Virtual Machine</entry>
Jan 20 14:35:52 compute-0 nova_compute[250018]:     </system>
Jan 20 14:35:52 compute-0 nova_compute[250018]:   </sysinfo>
Jan 20 14:35:52 compute-0 nova_compute[250018]:   <os>
Jan 20 14:35:52 compute-0 nova_compute[250018]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 14:35:52 compute-0 nova_compute[250018]:     <boot dev="hd"/>
Jan 20 14:35:52 compute-0 nova_compute[250018]:     <smbios mode="sysinfo"/>
Jan 20 14:35:52 compute-0 nova_compute[250018]:   </os>
Jan 20 14:35:52 compute-0 nova_compute[250018]:   <features>
Jan 20 14:35:52 compute-0 nova_compute[250018]:     <acpi/>
Jan 20 14:35:52 compute-0 nova_compute[250018]:     <apic/>
Jan 20 14:35:52 compute-0 nova_compute[250018]:     <vmcoreinfo/>
Jan 20 14:35:52 compute-0 nova_compute[250018]:   </features>
Jan 20 14:35:52 compute-0 nova_compute[250018]:   <clock offset="utc">
Jan 20 14:35:52 compute-0 nova_compute[250018]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 14:35:52 compute-0 nova_compute[250018]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 14:35:52 compute-0 nova_compute[250018]:     <timer name="hpet" present="no"/>
Jan 20 14:35:52 compute-0 nova_compute[250018]:   </clock>
Jan 20 14:35:52 compute-0 nova_compute[250018]:   <cpu mode="custom" match="exact">
Jan 20 14:35:52 compute-0 nova_compute[250018]:     <model>Nehalem</model>
Jan 20 14:35:52 compute-0 nova_compute[250018]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 14:35:52 compute-0 nova_compute[250018]:   </cpu>
Jan 20 14:35:52 compute-0 nova_compute[250018]:   <devices>
Jan 20 14:35:52 compute-0 nova_compute[250018]:     <disk type="network" device="disk">
Jan 20 14:35:52 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 14:35:52 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/a529906d-6908-4a37-ac57-db4384de2893_disk">
Jan 20 14:35:52 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:35:52 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:35:52 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:35:52 compute-0 nova_compute[250018]:       </source>
Jan 20 14:35:52 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 14:35:52 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:35:52 compute-0 nova_compute[250018]:       </auth>
Jan 20 14:35:52 compute-0 nova_compute[250018]:       <target dev="vda" bus="virtio"/>
Jan 20 14:35:52 compute-0 nova_compute[250018]:     </disk>
Jan 20 14:35:52 compute-0 nova_compute[250018]:     <disk type="network" device="cdrom">
Jan 20 14:35:52 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 14:35:52 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/a529906d-6908-4a37-ac57-db4384de2893_disk.config">
Jan 20 14:35:52 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:35:52 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:35:52 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:35:52 compute-0 nova_compute[250018]:       </source>
Jan 20 14:35:52 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 14:35:52 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:35:52 compute-0 nova_compute[250018]:       </auth>
Jan 20 14:35:52 compute-0 nova_compute[250018]:       <target dev="sda" bus="sata"/>
Jan 20 14:35:52 compute-0 nova_compute[250018]:     </disk>
Jan 20 14:35:52 compute-0 nova_compute[250018]:     <interface type="ethernet">
Jan 20 14:35:52 compute-0 nova_compute[250018]:       <mac address="fa:16:3e:de:d2:be"/>
Jan 20 14:35:52 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 14:35:52 compute-0 nova_compute[250018]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 14:35:52 compute-0 nova_compute[250018]:       <mtu size="1442"/>
Jan 20 14:35:52 compute-0 nova_compute[250018]:       <target dev="tapf2330816-cb"/>
Jan 20 14:35:52 compute-0 nova_compute[250018]:     </interface>
Jan 20 14:35:52 compute-0 nova_compute[250018]:     <serial type="pty">
Jan 20 14:35:52 compute-0 nova_compute[250018]:       <log file="/var/lib/nova/instances/a529906d-6908-4a37-ac57-db4384de2893/console.log" append="off"/>
Jan 20 14:35:52 compute-0 nova_compute[250018]:     </serial>
Jan 20 14:35:52 compute-0 nova_compute[250018]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 14:35:52 compute-0 nova_compute[250018]:     <video>
Jan 20 14:35:52 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 14:35:52 compute-0 nova_compute[250018]:     </video>
Jan 20 14:35:52 compute-0 nova_compute[250018]:     <input type="tablet" bus="usb"/>
Jan 20 14:35:52 compute-0 nova_compute[250018]:     <rng model="virtio">
Jan 20 14:35:52 compute-0 nova_compute[250018]:       <backend model="random">/dev/urandom</backend>
Jan 20 14:35:52 compute-0 nova_compute[250018]:     </rng>
Jan 20 14:35:52 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root"/>
Jan 20 14:35:52 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:35:52 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:35:52 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:35:52 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:35:52 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:35:52 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:35:52 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:35:52 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:35:52 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:35:52 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:35:52 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:35:52 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:35:52 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:35:52 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:35:52 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:35:52 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:35:52 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:35:52 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:35:52 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:35:52 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:35:52 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:35:52 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:35:52 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:35:52 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:35:52 compute-0 nova_compute[250018]:     <controller type="usb" index="0"/>
Jan 20 14:35:52 compute-0 nova_compute[250018]:     <memballoon model="virtio">
Jan 20 14:35:52 compute-0 nova_compute[250018]:       <stats period="10"/>
Jan 20 14:35:52 compute-0 nova_compute[250018]:     </memballoon>
Jan 20 14:35:52 compute-0 nova_compute[250018]:   </devices>
Jan 20 14:35:52 compute-0 nova_compute[250018]: </domain>
Jan 20 14:35:52 compute-0 nova_compute[250018]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 14:35:52 compute-0 nova_compute[250018]: 2026-01-20 14:35:52.061 250022 DEBUG nova.compute.manager [None req-76fb2ae9-c4fb-4be1-b0bb-b58de89b0ed4 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] [instance: a529906d-6908-4a37-ac57-db4384de2893] Preparing to wait for external event network-vif-plugged-f2330816-cb0c-4408-9178-8d732ae1a45b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 20 14:35:52 compute-0 nova_compute[250018]: 2026-01-20 14:35:52.062 250022 DEBUG oslo_concurrency.lockutils [None req-76fb2ae9-c4fb-4be1-b0bb-b58de89b0ed4 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Acquiring lock "a529906d-6908-4a37-ac57-db4384de2893-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:35:52 compute-0 nova_compute[250018]: 2026-01-20 14:35:52.062 250022 DEBUG oslo_concurrency.lockutils [None req-76fb2ae9-c4fb-4be1-b0bb-b58de89b0ed4 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Lock "a529906d-6908-4a37-ac57-db4384de2893-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:35:52 compute-0 nova_compute[250018]: 2026-01-20 14:35:52.062 250022 DEBUG oslo_concurrency.lockutils [None req-76fb2ae9-c4fb-4be1-b0bb-b58de89b0ed4 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Lock "a529906d-6908-4a37-ac57-db4384de2893-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:35:52 compute-0 nova_compute[250018]: 2026-01-20 14:35:52.063 250022 DEBUG nova.virt.libvirt.vif [None req-76fb2ae9-c4fb-4be1-b0bb-b58de89b0ed4 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:35:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-207767720',display_name='tempest-ImagesTestJSON-server-207767720',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-207767720',id=50,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3750c56415134773aa9d9880038f1749',ramdisk_id='',reservation_id='r-0a86zg6l',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-338390217',owner_user_name='tempest-ImagesTestJSON-338390217-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:35:47Z,user_data=None,user_id='56e2959629114d3d8a48e7a80ed96c4b',uuid=a529906d-6908-4a37-ac57-db4384de2893,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f2330816-cb0c-4408-9178-8d732ae1a45b", "address": "fa:16:3e:de:d2:be", "network": {"id": "abb83e3e-0b12-431b-ad86-a1d271b5b46a", "bridge": "br-int", "label": "tempest-ImagesTestJSON-766235638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3750c56415134773aa9d9880038f1749", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2330816-cb", "ovs_interfaceid": "f2330816-cb0c-4408-9178-8d732ae1a45b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 14:35:52 compute-0 nova_compute[250018]: 2026-01-20 14:35:52.064 250022 DEBUG nova.network.os_vif_util [None req-76fb2ae9-c4fb-4be1-b0bb-b58de89b0ed4 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Converting VIF {"id": "f2330816-cb0c-4408-9178-8d732ae1a45b", "address": "fa:16:3e:de:d2:be", "network": {"id": "abb83e3e-0b12-431b-ad86-a1d271b5b46a", "bridge": "br-int", "label": "tempest-ImagesTestJSON-766235638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3750c56415134773aa9d9880038f1749", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2330816-cb", "ovs_interfaceid": "f2330816-cb0c-4408-9178-8d732ae1a45b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:35:52 compute-0 nova_compute[250018]: 2026-01-20 14:35:52.065 250022 DEBUG nova.network.os_vif_util [None req-76fb2ae9-c4fb-4be1-b0bb-b58de89b0ed4 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:de:d2:be,bridge_name='br-int',has_traffic_filtering=True,id=f2330816-cb0c-4408-9178-8d732ae1a45b,network=Network(abb83e3e-0b12-431b-ad86-a1d271b5b46a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf2330816-cb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:35:52 compute-0 nova_compute[250018]: 2026-01-20 14:35:52.065 250022 DEBUG os_vif [None req-76fb2ae9-c4fb-4be1-b0bb-b58de89b0ed4 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:de:d2:be,bridge_name='br-int',has_traffic_filtering=True,id=f2330816-cb0c-4408-9178-8d732ae1a45b,network=Network(abb83e3e-0b12-431b-ad86-a1d271b5b46a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf2330816-cb') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 14:35:52 compute-0 nova_compute[250018]: 2026-01-20 14:35:52.066 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:35:52 compute-0 nova_compute[250018]: 2026-01-20 14:35:52.066 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:35:52 compute-0 nova_compute[250018]: 2026-01-20 14:35:52.066 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:35:52 compute-0 nova_compute[250018]: 2026-01-20 14:35:52.069 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:35:52 compute-0 nova_compute[250018]: 2026-01-20 14:35:52.069 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf2330816-cb, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:35:52 compute-0 nova_compute[250018]: 2026-01-20 14:35:52.070 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf2330816-cb, col_values=(('external_ids', {'iface-id': 'f2330816-cb0c-4408-9178-8d732ae1a45b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:de:d2:be', 'vm-uuid': 'a529906d-6908-4a37-ac57-db4384de2893'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:35:52 compute-0 NetworkManager[48960]: <info>  [1768919752.0728] manager: (tapf2330816-cb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/75)
Jan 20 14:35:52 compute-0 nova_compute[250018]: 2026-01-20 14:35:52.072 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:35:52 compute-0 nova_compute[250018]: 2026-01-20 14:35:52.078 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:35:52 compute-0 nova_compute[250018]: 2026-01-20 14:35:52.080 250022 INFO os_vif [None req-76fb2ae9-c4fb-4be1-b0bb-b58de89b0ed4 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:de:d2:be,bridge_name='br-int',has_traffic_filtering=True,id=f2330816-cb0c-4408-9178-8d732ae1a45b,network=Network(abb83e3e-0b12-431b-ad86-a1d271b5b46a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf2330816-cb')
Jan 20 14:35:52 compute-0 nova_compute[250018]: 2026-01-20 14:35:52.202 250022 DEBUG nova.virt.libvirt.driver [None req-76fb2ae9-c4fb-4be1-b0bb-b58de89b0ed4 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:35:52 compute-0 nova_compute[250018]: 2026-01-20 14:35:52.203 250022 DEBUG nova.virt.libvirt.driver [None req-76fb2ae9-c4fb-4be1-b0bb-b58de89b0ed4 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:35:52 compute-0 nova_compute[250018]: 2026-01-20 14:35:52.203 250022 DEBUG nova.virt.libvirt.driver [None req-76fb2ae9-c4fb-4be1-b0bb-b58de89b0ed4 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] No VIF found with MAC fa:16:3e:de:d2:be, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 14:35:52 compute-0 nova_compute[250018]: 2026-01-20 14:35:52.203 250022 INFO nova.virt.libvirt.driver [None req-76fb2ae9-c4fb-4be1-b0bb-b58de89b0ed4 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] [instance: a529906d-6908-4a37-ac57-db4384de2893] Using config drive
Jan 20 14:35:52 compute-0 nova_compute[250018]: 2026-01-20 14:35:52.225 250022 DEBUG nova.storage.rbd_utils [None req-76fb2ae9-c4fb-4be1-b0bb-b58de89b0ed4 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] rbd image a529906d-6908-4a37-ac57-db4384de2893_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:35:52 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:35:52 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:35:52 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:35:52.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:35:52 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e201 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:35:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:35:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:35:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:35:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:35:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:35:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:35:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Optimize plan auto_2026-01-20_14:35:52
Jan 20 14:35:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 14:35:52 compute-0 ceph-mgr[74653]: [balancer INFO root] do_upmap
Jan 20 14:35:52 compute-0 ceph-mgr[74653]: [balancer INFO root] pools ['images', '.mgr', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.log', '.rgw.root', 'default.rgw.control', 'volumes', 'vms', 'default.rgw.meta', 'backups']
Jan 20 14:35:52 compute-0 ceph-mgr[74653]: [balancer INFO root] prepared 0/10 changes
Jan 20 14:35:52 compute-0 nova_compute[250018]: 2026-01-20 14:35:52.678 250022 INFO nova.virt.libvirt.driver [None req-7e2d09a8-0b11-4182-a2c3-04985cfa3573 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] [instance: 1b3763f0-b328-4db2-844b-7f56cc13c19e] Creating config drive at /var/lib/nova/instances/1b3763f0-b328-4db2-844b-7f56cc13c19e/disk.config
Jan 20 14:35:52 compute-0 nova_compute[250018]: 2026-01-20 14:35:52.685 250022 DEBUG oslo_concurrency.processutils [None req-7e2d09a8-0b11-4182-a2c3-04985cfa3573 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1b3763f0-b328-4db2-844b-7f56cc13c19e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpnz_ic31b execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:35:52 compute-0 nova_compute[250018]: 2026-01-20 14:35:52.818 250022 DEBUG oslo_concurrency.processutils [None req-7e2d09a8-0b11-4182-a2c3-04985cfa3573 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1b3763f0-b328-4db2-844b-7f56cc13c19e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpnz_ic31b" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:35:52 compute-0 nova_compute[250018]: 2026-01-20 14:35:52.845 250022 DEBUG nova.storage.rbd_utils [None req-7e2d09a8-0b11-4182-a2c3-04985cfa3573 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] rbd image 1b3763f0-b328-4db2-844b-7f56cc13c19e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:35:52 compute-0 nova_compute[250018]: 2026-01-20 14:35:52.848 250022 DEBUG oslo_concurrency.processutils [None req-7e2d09a8-0b11-4182-a2c3-04985cfa3573 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1b3763f0-b328-4db2-844b-7f56cc13c19e/disk.config 1b3763f0-b328-4db2-844b-7f56cc13c19e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:35:52 compute-0 nova_compute[250018]: 2026-01-20 14:35:52.939 250022 INFO nova.virt.libvirt.driver [None req-76fb2ae9-c4fb-4be1-b0bb-b58de89b0ed4 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] [instance: a529906d-6908-4a37-ac57-db4384de2893] Creating config drive at /var/lib/nova/instances/a529906d-6908-4a37-ac57-db4384de2893/disk.config
Jan 20 14:35:52 compute-0 nova_compute[250018]: 2026-01-20 14:35:52.944 250022 DEBUG oslo_concurrency.processutils [None req-76fb2ae9-c4fb-4be1-b0bb-b58de89b0ed4 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a529906d-6908-4a37-ac57-db4384de2893/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpw3emigjt execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:35:52 compute-0 ceph-mon[74360]: osdmap e201: 3 total, 3 up, 3 in
Jan 20 14:35:52 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3689223483' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:35:52 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/635606860' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:35:53 compute-0 nova_compute[250018]: 2026-01-20 14:35:53.024 250022 DEBUG oslo_concurrency.processutils [None req-7e2d09a8-0b11-4182-a2c3-04985cfa3573 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1b3763f0-b328-4db2-844b-7f56cc13c19e/disk.config 1b3763f0-b328-4db2-844b-7f56cc13c19e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.176s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:35:53 compute-0 nova_compute[250018]: 2026-01-20 14:35:53.025 250022 INFO nova.virt.libvirt.driver [None req-7e2d09a8-0b11-4182-a2c3-04985cfa3573 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] [instance: 1b3763f0-b328-4db2-844b-7f56cc13c19e] Deleting local config drive /var/lib/nova/instances/1b3763f0-b328-4db2-844b-7f56cc13c19e/disk.config because it was imported into RBD.
Jan 20 14:35:53 compute-0 nova_compute[250018]: 2026-01-20 14:35:53.074 250022 DEBUG oslo_concurrency.processutils [None req-76fb2ae9-c4fb-4be1-b0bb-b58de89b0ed4 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a529906d-6908-4a37-ac57-db4384de2893/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpw3emigjt" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:35:53 compute-0 NetworkManager[48960]: <info>  [1768919753.0776] manager: (tapdea207a0-1a): new Tun device (/org/freedesktop/NetworkManager/Devices/76)
Jan 20 14:35:53 compute-0 kernel: tapdea207a0-1a: entered promiscuous mode
Jan 20 14:35:53 compute-0 ovn_controller[148666]: 2026-01-20T14:35:53Z|00133|binding|INFO|Claiming lport dea207a0-1a8b-4f40-8fdc-1a5e76999db8 for this chassis.
Jan 20 14:35:53 compute-0 ovn_controller[148666]: 2026-01-20T14:35:53Z|00134|binding|INFO|dea207a0-1a8b-4f40-8fdc-1a5e76999db8: Claiming fa:16:3e:01:06:ce 10.100.0.4
Jan 20 14:35:53 compute-0 systemd-machined[216401]: New machine qemu-22-instance-00000031.
Jan 20 14:35:53 compute-0 nova_compute[250018]: 2026-01-20 14:35:53.107 250022 DEBUG nova.storage.rbd_utils [None req-76fb2ae9-c4fb-4be1-b0bb-b58de89b0ed4 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] rbd image a529906d-6908-4a37-ac57-db4384de2893_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:35:53 compute-0 nova_compute[250018]: 2026-01-20 14:35:53.111 250022 DEBUG oslo_concurrency.processutils [None req-76fb2ae9-c4fb-4be1-b0bb-b58de89b0ed4 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a529906d-6908-4a37-ac57-db4384de2893/disk.config a529906d-6908-4a37-ac57-db4384de2893_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:35:53 compute-0 nova_compute[250018]: 2026-01-20 14:35:53.139 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:35:53 compute-0 ovn_controller[148666]: 2026-01-20T14:35:53Z|00135|binding|INFO|Setting lport dea207a0-1a8b-4f40-8fdc-1a5e76999db8 ovn-installed in OVS
Jan 20 14:35:53 compute-0 systemd[1]: Started Virtual Machine qemu-22-instance-00000031.
Jan 20 14:35:53 compute-0 ovn_controller[148666]: 2026-01-20T14:35:53Z|00136|binding|INFO|Setting lport dea207a0-1a8b-4f40-8fdc-1a5e76999db8 up in Southbound
Jan 20 14:35:53 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:35:53.178 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:01:06:ce 10.100.0.4'], port_security=['fa:16:3e:01:06:ce 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '1b3763f0-b328-4db2-844b-7f56cc13c19e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-38acf72f-2b62-44ac-86b7-15a313b89179', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '79601368a3db41e0aacec93e8fd7f1d4', 'neutron:revision_number': '2', 'neutron:security_group_ids': '7191c78b-0eaa-45d2-b9f9-4ac3c533ddac', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=053fc91c-5a1e-4be0-b640-862275343a36, chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=dea207a0-1a8b-4f40-8fdc-1a5e76999db8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:35:53 compute-0 systemd-udevd[282094]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:35:53 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:35:53.180 160071 INFO neutron.agent.ovn.metadata.agent [-] Port dea207a0-1a8b-4f40-8fdc-1a5e76999db8 in datapath 38acf72f-2b62-44ac-86b7-15a313b89179 bound to our chassis
Jan 20 14:35:53 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:35:53.181 160071 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 38acf72f-2b62-44ac-86b7-15a313b89179
Jan 20 14:35:53 compute-0 NetworkManager[48960]: <info>  [1768919753.1938] device (tapdea207a0-1a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 14:35:53 compute-0 NetworkManager[48960]: <info>  [1768919753.1964] device (tapdea207a0-1a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 14:35:53 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:35:53.197 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[52465f85-acba-4006-9d9c-6fa0a24ac6d4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:35:53 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:35:53.198 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap38acf72f-21 in ovnmeta-38acf72f-2b62-44ac-86b7-15a313b89179 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 14:35:53 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:35:53.200 257604 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap38acf72f-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 14:35:53 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:35:53.200 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[54e9f5f2-03c7-401a-b203-936279e4533a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:35:53 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:35:53.201 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[a9b25708-ce5e-4f3c-9b4c-16126995cf8a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:35:53 compute-0 nova_compute[250018]: 2026-01-20 14:35:53.212 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:35:53 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:35:53.215 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[75c35eaf-eb28-4a3f-88fb-66f3772093ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:35:53 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:35:53.230 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[a6d1cfb4-6df9-49d4-abe1-c81bdc2c307f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:35:53 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:35:53.263 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[bab0e92b-ec1c-4614-bc8a-9d00138db918]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:35:53 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:35:53.268 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[0a7afbf6-d129-4e73-ae88-bd66ff4a3999]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:35:53 compute-0 NetworkManager[48960]: <info>  [1768919753.2698] manager: (tap38acf72f-20): new Veth device (/org/freedesktop/NetworkManager/Devices/77)
Jan 20 14:35:53 compute-0 systemd-udevd[282111]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:35:53 compute-0 nova_compute[250018]: 2026-01-20 14:35:53.278 250022 DEBUG nova.network.neutron [req-cc4bc527-8ce7-4658-b8fd-5c37acb0acc8 req-a2fcba0b-fb98-4b0d-b1c6-2ec03f1ce857 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: a529906d-6908-4a37-ac57-db4384de2893] Updated VIF entry in instance network info cache for port f2330816-cb0c-4408-9178-8d732ae1a45b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 14:35:53 compute-0 nova_compute[250018]: 2026-01-20 14:35:53.279 250022 DEBUG nova.network.neutron [req-cc4bc527-8ce7-4658-b8fd-5c37acb0acc8 req-a2fcba0b-fb98-4b0d-b1c6-2ec03f1ce857 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: a529906d-6908-4a37-ac57-db4384de2893] Updating instance_info_cache with network_info: [{"id": "f2330816-cb0c-4408-9178-8d732ae1a45b", "address": "fa:16:3e:de:d2:be", "network": {"id": "abb83e3e-0b12-431b-ad86-a1d271b5b46a", "bridge": "br-int", "label": "tempest-ImagesTestJSON-766235638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3750c56415134773aa9d9880038f1749", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2330816-cb", "ovs_interfaceid": "f2330816-cb0c-4408-9178-8d732ae1a45b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:35:53 compute-0 nova_compute[250018]: 2026-01-20 14:35:53.290 250022 DEBUG oslo_concurrency.processutils [None req-76fb2ae9-c4fb-4be1-b0bb-b58de89b0ed4 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a529906d-6908-4a37-ac57-db4384de2893/disk.config a529906d-6908-4a37-ac57-db4384de2893_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.179s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:35:53 compute-0 nova_compute[250018]: 2026-01-20 14:35:53.291 250022 INFO nova.virt.libvirt.driver [None req-76fb2ae9-c4fb-4be1-b0bb-b58de89b0ed4 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] [instance: a529906d-6908-4a37-ac57-db4384de2893] Deleting local config drive /var/lib/nova/instances/a529906d-6908-4a37-ac57-db4384de2893/disk.config because it was imported into RBD.
Jan 20 14:35:53 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:35:53.306 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[d4612d0c-bd82-4845-a927-4f3897debc4b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:35:53 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:35:53.311 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[6057f99d-1224-4dac-9191-497c61651355]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:35:53 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:35:53 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:35:53 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:35:53.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:35:53 compute-0 NetworkManager[48960]: <info>  [1768919753.3407] device (tap38acf72f-20): carrier: link connected
Jan 20 14:35:53 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:35:53.346 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[b51e316f-4a57-4a59-9b4a-795dfc5d9ecd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:35:53 compute-0 kernel: tapf2330816-cb: entered promiscuous mode
Jan 20 14:35:53 compute-0 NetworkManager[48960]: <info>  [1768919753.3604] manager: (tapf2330816-cb): new Tun device (/org/freedesktop/NetworkManager/Devices/78)
Jan 20 14:35:53 compute-0 ovn_controller[148666]: 2026-01-20T14:35:53Z|00137|binding|INFO|Claiming lport f2330816-cb0c-4408-9178-8d732ae1a45b for this chassis.
Jan 20 14:35:53 compute-0 ovn_controller[148666]: 2026-01-20T14:35:53Z|00138|binding|INFO|f2330816-cb0c-4408-9178-8d732ae1a45b: Claiming fa:16:3e:de:d2:be 10.100.0.11
Jan 20 14:35:53 compute-0 nova_compute[250018]: 2026-01-20 14:35:53.361 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:35:53 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:35:53.370 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[5bc09ce2-30ca-4ac4-8cf9-a2876bb98092]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap38acf72f-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8e:54:a4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 43], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 569406, 'reachable_time': 35069, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 282155, 'error': None, 'target': 'ovnmeta-38acf72f-2b62-44ac-86b7-15a313b89179', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:35:53 compute-0 NetworkManager[48960]: <info>  [1768919753.3739] device (tapf2330816-cb): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 14:35:53 compute-0 NetworkManager[48960]: <info>  [1768919753.3749] device (tapf2330816-cb): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 14:35:53 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:35:53.390 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[763865a2-a89a-4704-8353-9454d2d423ea]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe8e:54a4'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 569406, 'tstamp': 569406}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 282161, 'error': None, 'target': 'ovnmeta-38acf72f-2b62-44ac-86b7-15a313b89179', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:35:53 compute-0 systemd-machined[216401]: New machine qemu-23-instance-00000032.
Jan 20 14:35:53 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:35:53.402 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:de:d2:be 10.100.0.11'], port_security=['fa:16:3e:de:d2:be 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'a529906d-6908-4a37-ac57-db4384de2893', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-abb83e3e-0b12-431b-ad86-a1d271b5b46a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3750c56415134773aa9d9880038f1749', 'neutron:revision_number': '2', 'neutron:security_group_ids': '2e302063-2ccd-4f7c-8835-ef521762a486', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4125934e-1dea-4e34-a38d-5291c850f0b2, chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=f2330816-cb0c-4408-9178-8d732ae1a45b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:35:53 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:35:53.411 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[7e2cc0b8-283f-4109-8126-b015a258bbe6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap38acf72f-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8e:54:a4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 43], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 569406, 'reachable_time': 35069, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 282163, 'error': None, 'target': 'ovnmeta-38acf72f-2b62-44ac-86b7-15a313b89179', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:35:53 compute-0 nova_compute[250018]: 2026-01-20 14:35:53.416 250022 DEBUG oslo_concurrency.lockutils [req-cc4bc527-8ce7-4658-b8fd-5c37acb0acc8 req-a2fcba0b-fb98-4b0d-b1c6-2ec03f1ce857 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-a529906d-6908-4a37-ac57-db4384de2893" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:35:53 compute-0 nova_compute[250018]: 2026-01-20 14:35:53.438 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:35:53 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:35:53.443 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[17655395-afd3-4b85-9c90-8323acc55965]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:35:53 compute-0 ovn_controller[148666]: 2026-01-20T14:35:53Z|00139|binding|INFO|Setting lport f2330816-cb0c-4408-9178-8d732ae1a45b ovn-installed in OVS
Jan 20 14:35:53 compute-0 ovn_controller[148666]: 2026-01-20T14:35:53Z|00140|binding|INFO|Setting lport f2330816-cb0c-4408-9178-8d732ae1a45b up in Southbound
Jan 20 14:35:53 compute-0 nova_compute[250018]: 2026-01-20 14:35:53.445 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:35:53 compute-0 systemd[1]: Started Virtual Machine qemu-23-instance-00000032.
Jan 20 14:35:53 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:35:53.504 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[e53f7ab6-3f77-4a9b-a762-622f1f770d4c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:35:53 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:35:53.505 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap38acf72f-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:35:53 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:35:53.506 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:35:53 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:35:53.507 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap38acf72f-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:35:53 compute-0 nova_compute[250018]: 2026-01-20 14:35:53.508 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:35:53 compute-0 kernel: tap38acf72f-20: entered promiscuous mode
Jan 20 14:35:53 compute-0 NetworkManager[48960]: <info>  [1768919753.5097] manager: (tap38acf72f-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/79)
Jan 20 14:35:53 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1395: 321 pgs: 4 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 315 active+clean; 134 MiB data, 550 MiB used, 20 GiB / 21 GiB avail; 144 KiB/s rd, 7.1 MiB/s wr, 214 op/s
Jan 20 14:35:53 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:35:53.516 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap38acf72f-20, col_values=(('external_ids', {'iface-id': '8022d77a-5026-427d-8008-411a7ad1e269'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:35:53 compute-0 nova_compute[250018]: 2026-01-20 14:35:53.517 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:35:53 compute-0 ovn_controller[148666]: 2026-01-20T14:35:53Z|00141|binding|INFO|Releasing lport 8022d77a-5026-427d-8008-411a7ad1e269 from this chassis (sb_readonly=0)
Jan 20 14:35:53 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:35:53.519 160071 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/38acf72f-2b62-44ac-86b7-15a313b89179.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/38acf72f-2b62-44ac-86b7-15a313b89179.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 14:35:53 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:35:53.519 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[ddf13abc-a894-4ea3-aae9-23e75a6c4c2c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:35:53 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:35:53.520 160071 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 14:35:53 compute-0 ovn_metadata_agent[160049]: global
Jan 20 14:35:53 compute-0 ovn_metadata_agent[160049]:     log         /dev/log local0 debug
Jan 20 14:35:53 compute-0 ovn_metadata_agent[160049]:     log-tag     haproxy-metadata-proxy-38acf72f-2b62-44ac-86b7-15a313b89179
Jan 20 14:35:53 compute-0 ovn_metadata_agent[160049]:     user        root
Jan 20 14:35:53 compute-0 ovn_metadata_agent[160049]:     group       root
Jan 20 14:35:53 compute-0 ovn_metadata_agent[160049]:     maxconn     1024
Jan 20 14:35:53 compute-0 ovn_metadata_agent[160049]:     pidfile     /var/lib/neutron/external/pids/38acf72f-2b62-44ac-86b7-15a313b89179.pid.haproxy
Jan 20 14:35:53 compute-0 ovn_metadata_agent[160049]:     daemon
Jan 20 14:35:53 compute-0 ovn_metadata_agent[160049]: 
Jan 20 14:35:53 compute-0 ovn_metadata_agent[160049]: defaults
Jan 20 14:35:53 compute-0 ovn_metadata_agent[160049]:     log global
Jan 20 14:35:53 compute-0 ovn_metadata_agent[160049]:     mode http
Jan 20 14:35:53 compute-0 ovn_metadata_agent[160049]:     option httplog
Jan 20 14:35:53 compute-0 ovn_metadata_agent[160049]:     option dontlognull
Jan 20 14:35:53 compute-0 ovn_metadata_agent[160049]:     option http-server-close
Jan 20 14:35:53 compute-0 ovn_metadata_agent[160049]:     option forwardfor
Jan 20 14:35:53 compute-0 ovn_metadata_agent[160049]:     retries                 3
Jan 20 14:35:53 compute-0 ovn_metadata_agent[160049]:     timeout http-request    30s
Jan 20 14:35:53 compute-0 ovn_metadata_agent[160049]:     timeout connect         30s
Jan 20 14:35:53 compute-0 ovn_metadata_agent[160049]:     timeout client          32s
Jan 20 14:35:53 compute-0 ovn_metadata_agent[160049]:     timeout server          32s
Jan 20 14:35:53 compute-0 ovn_metadata_agent[160049]:     timeout http-keep-alive 30s
Jan 20 14:35:53 compute-0 ovn_metadata_agent[160049]: 
Jan 20 14:35:53 compute-0 ovn_metadata_agent[160049]: 
Jan 20 14:35:53 compute-0 ovn_metadata_agent[160049]: listen listener
Jan 20 14:35:53 compute-0 ovn_metadata_agent[160049]:     bind 169.254.169.254:80
Jan 20 14:35:53 compute-0 ovn_metadata_agent[160049]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 14:35:53 compute-0 ovn_metadata_agent[160049]:     http-request add-header X-OVN-Network-ID 38acf72f-2b62-44ac-86b7-15a313b89179
Jan 20 14:35:53 compute-0 ovn_metadata_agent[160049]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 14:35:53 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:35:53.522 160071 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-38acf72f-2b62-44ac-86b7-15a313b89179', 'env', 'PROCESS_TAG=haproxy-38acf72f-2b62-44ac-86b7-15a313b89179', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/38acf72f-2b62-44ac-86b7-15a313b89179.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 14:35:53 compute-0 nova_compute[250018]: 2026-01-20 14:35:53.534 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:35:53 compute-0 nova_compute[250018]: 2026-01-20 14:35:53.600 250022 DEBUG nova.compute.manager [req-ff6168f2-a4e3-422e-9913-9f2e7c839385 req-0d066c85-1a25-4a41-9865-b9a6a2da1edd 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 1b3763f0-b328-4db2-844b-7f56cc13c19e] Received event network-vif-plugged-dea207a0-1a8b-4f40-8fdc-1a5e76999db8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:35:53 compute-0 nova_compute[250018]: 2026-01-20 14:35:53.600 250022 DEBUG oslo_concurrency.lockutils [req-ff6168f2-a4e3-422e-9913-9f2e7c839385 req-0d066c85-1a25-4a41-9865-b9a6a2da1edd 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "1b3763f0-b328-4db2-844b-7f56cc13c19e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:35:53 compute-0 nova_compute[250018]: 2026-01-20 14:35:53.600 250022 DEBUG oslo_concurrency.lockutils [req-ff6168f2-a4e3-422e-9913-9f2e7c839385 req-0d066c85-1a25-4a41-9865-b9a6a2da1edd 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "1b3763f0-b328-4db2-844b-7f56cc13c19e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:35:53 compute-0 nova_compute[250018]: 2026-01-20 14:35:53.601 250022 DEBUG oslo_concurrency.lockutils [req-ff6168f2-a4e3-422e-9913-9f2e7c839385 req-0d066c85-1a25-4a41-9865-b9a6a2da1edd 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "1b3763f0-b328-4db2-844b-7f56cc13c19e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:35:53 compute-0 nova_compute[250018]: 2026-01-20 14:35:53.601 250022 DEBUG nova.compute.manager [req-ff6168f2-a4e3-422e-9913-9f2e7c839385 req-0d066c85-1a25-4a41-9865-b9a6a2da1edd 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 1b3763f0-b328-4db2-844b-7f56cc13c19e] Processing event network-vif-plugged-dea207a0-1a8b-4f40-8fdc-1a5e76999db8 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 20 14:35:53 compute-0 nova_compute[250018]: 2026-01-20 14:35:53.692 250022 DEBUG nova.compute.manager [req-1ee9c192-dd74-4c60-aa25-029c2e7305f7 req-cf8d712f-fefe-45c4-a17c-28cead6e337a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: a529906d-6908-4a37-ac57-db4384de2893] Received event network-vif-plugged-f2330816-cb0c-4408-9178-8d732ae1a45b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:35:53 compute-0 nova_compute[250018]: 2026-01-20 14:35:53.692 250022 DEBUG oslo_concurrency.lockutils [req-1ee9c192-dd74-4c60-aa25-029c2e7305f7 req-cf8d712f-fefe-45c4-a17c-28cead6e337a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "a529906d-6908-4a37-ac57-db4384de2893-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:35:53 compute-0 nova_compute[250018]: 2026-01-20 14:35:53.692 250022 DEBUG oslo_concurrency.lockutils [req-1ee9c192-dd74-4c60-aa25-029c2e7305f7 req-cf8d712f-fefe-45c4-a17c-28cead6e337a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "a529906d-6908-4a37-ac57-db4384de2893-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:35:53 compute-0 nova_compute[250018]: 2026-01-20 14:35:53.692 250022 DEBUG oslo_concurrency.lockutils [req-1ee9c192-dd74-4c60-aa25-029c2e7305f7 req-cf8d712f-fefe-45c4-a17c-28cead6e337a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "a529906d-6908-4a37-ac57-db4384de2893-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:35:53 compute-0 nova_compute[250018]: 2026-01-20 14:35:53.693 250022 DEBUG nova.compute.manager [req-1ee9c192-dd74-4c60-aa25-029c2e7305f7 req-cf8d712f-fefe-45c4-a17c-28cead6e337a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: a529906d-6908-4a37-ac57-db4384de2893] Processing event network-vif-plugged-f2330816-cb0c-4408-9178-8d732ae1a45b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 20 14:35:53 compute-0 nova_compute[250018]: 2026-01-20 14:35:53.880 250022 DEBUG nova.compute.manager [None req-76fb2ae9-c4fb-4be1-b0bb-b58de89b0ed4 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] [instance: a529906d-6908-4a37-ac57-db4384de2893] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 14:35:53 compute-0 nova_compute[250018]: 2026-01-20 14:35:53.881 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768919753.8794188, a529906d-6908-4a37-ac57-db4384de2893 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:35:53 compute-0 nova_compute[250018]: 2026-01-20 14:35:53.881 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: a529906d-6908-4a37-ac57-db4384de2893] VM Started (Lifecycle Event)
Jan 20 14:35:53 compute-0 nova_compute[250018]: 2026-01-20 14:35:53.889 250022 DEBUG nova.virt.libvirt.driver [None req-76fb2ae9-c4fb-4be1-b0bb-b58de89b0ed4 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] [instance: a529906d-6908-4a37-ac57-db4384de2893] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 14:35:53 compute-0 nova_compute[250018]: 2026-01-20 14:35:53.895 250022 INFO nova.virt.libvirt.driver [-] [instance: a529906d-6908-4a37-ac57-db4384de2893] Instance spawned successfully.
Jan 20 14:35:53 compute-0 nova_compute[250018]: 2026-01-20 14:35:53.896 250022 DEBUG nova.virt.libvirt.driver [None req-76fb2ae9-c4fb-4be1-b0bb-b58de89b0ed4 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] [instance: a529906d-6908-4a37-ac57-db4384de2893] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 14:35:53 compute-0 podman[282261]: 2026-01-20 14:35:53.954425516 +0000 UTC m=+0.054071575 container create ba13c0318e056dc685f771f16cc69ff79557a1f3313642a283e33c7988ff9453 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-38acf72f-2b62-44ac-86b7-15a313b89179, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Jan 20 14:35:53 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/1932494090' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:35:53 compute-0 ceph-mon[74360]: pgmap v1395: 321 pgs: 4 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 315 active+clean; 134 MiB data, 550 MiB used, 20 GiB / 21 GiB avail; 144 KiB/s rd, 7.1 MiB/s wr, 214 op/s
Jan 20 14:35:53 compute-0 systemd[1]: Started libpod-conmon-ba13c0318e056dc685f771f16cc69ff79557a1f3313642a283e33c7988ff9453.scope.
Jan 20 14:35:54 compute-0 podman[282261]: 2026-01-20 14:35:53.926962757 +0000 UTC m=+0.026608836 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 14:35:54 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:35:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdbed832f1a2490809affa67d189fc748d9e559eb1a5fb3cb5390ff626246e25/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 14:35:54 compute-0 nova_compute[250018]: 2026-01-20 14:35:54.037 250022 DEBUG nova.compute.manager [None req-7e2d09a8-0b11-4182-a2c3-04985cfa3573 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] [instance: 1b3763f0-b328-4db2-844b-7f56cc13c19e] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 14:35:54 compute-0 nova_compute[250018]: 2026-01-20 14:35:54.041 250022 DEBUG nova.virt.libvirt.driver [None req-7e2d09a8-0b11-4182-a2c3-04985cfa3573 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] [instance: 1b3763f0-b328-4db2-844b-7f56cc13c19e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 14:35:54 compute-0 podman[282261]: 2026-01-20 14:35:54.04267627 +0000 UTC m=+0.142322349 container init ba13c0318e056dc685f771f16cc69ff79557a1f3313642a283e33c7988ff9453 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-38acf72f-2b62-44ac-86b7-15a313b89179, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2)
Jan 20 14:35:54 compute-0 nova_compute[250018]: 2026-01-20 14:35:54.043 250022 INFO nova.virt.libvirt.driver [-] [instance: 1b3763f0-b328-4db2-844b-7f56cc13c19e] Instance spawned successfully.
Jan 20 14:35:54 compute-0 nova_compute[250018]: 2026-01-20 14:35:54.044 250022 DEBUG nova.virt.libvirt.driver [None req-7e2d09a8-0b11-4182-a2c3-04985cfa3573 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] [instance: 1b3763f0-b328-4db2-844b-7f56cc13c19e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 14:35:54 compute-0 podman[282261]: 2026-01-20 14:35:54.047794928 +0000 UTC m=+0.147440987 container start ba13c0318e056dc685f771f16cc69ff79557a1f3313642a283e33c7988ff9453 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-38acf72f-2b62-44ac-86b7-15a313b89179, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2)
Jan 20 14:35:54 compute-0 neutron-haproxy-ovnmeta-38acf72f-2b62-44ac-86b7-15a313b89179[282300]: [NOTICE]   (282305) : New worker (282307) forked
Jan 20 14:35:54 compute-0 neutron-haproxy-ovnmeta-38acf72f-2b62-44ac-86b7-15a313b89179[282300]: [NOTICE]   (282305) : Loading success.
Jan 20 14:35:54 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:35:54.103 160071 INFO neutron.agent.ovn.metadata.agent [-] Port f2330816-cb0c-4408-9178-8d732ae1a45b in datapath abb83e3e-0b12-431b-ad86-a1d271b5b46a unbound from our chassis
Jan 20 14:35:54 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:35:54.105 160071 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network abb83e3e-0b12-431b-ad86-a1d271b5b46a
Jan 20 14:35:54 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:35:54.117 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[4d065c3c-acc6-4247-b43b-fbadd15d506d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:35:54 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:35:54.119 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapabb83e3e-01 in ovnmeta-abb83e3e-0b12-431b-ad86-a1d271b5b46a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 14:35:54 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:35:54.120 257604 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapabb83e3e-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 14:35:54 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:35:54.121 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[e3154a98-b582-4dea-8645-1610ae33371d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:35:54 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:35:54.122 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[8d82e38b-6d57-42e3-9683-3aba9626778b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:35:54 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:35:54.134 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[6182f644-a489-40af-b99b-55507c23c18e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:35:54 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:35:54.161 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[7493583a-a9e3-41bb-be0b-959cda2a78a7]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:35:54 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:35:54.189 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[2384e816-d6c5-45a6-b804-79b3ae9f3f32]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:35:54 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:35:54.195 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[5847fb14-f016-48f0-878b-5748444d0cf3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:35:54 compute-0 NetworkManager[48960]: <info>  [1768919754.1962] manager: (tapabb83e3e-00): new Veth device (/org/freedesktop/NetworkManager/Devices/80)
Jan 20 14:35:54 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:35:54.226 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[6920e48a-f17d-4c80-b845-cd4c57f0477b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:35:54 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:35:54.229 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[7e403207-fa3f-43c6-8187-11b8c3b3b00d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:35:54 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:35:54 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:35:54 compute-0 NetworkManager[48960]: <info>  [1768919754.2572] device (tapabb83e3e-00): carrier: link connected
Jan 20 14:35:54 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:35:54.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:35:54 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:35:54.263 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[aed834fc-56c1-4c21-8e6f-e48afc2119d6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:35:54 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:35:54.280 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[88458788-5ecc-4b46-93c1-b02c7e606ffc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapabb83e3e-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fd:0b:d2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 45], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 569498, 'reachable_time': 23430, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 282326, 'error': None, 'target': 'ovnmeta-abb83e3e-0b12-431b-ad86-a1d271b5b46a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:35:54 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:35:54.297 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[5767deb9-2163-4530-ac13-10bd78efc46a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fefd:bd2'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 569498, 'tstamp': 569498}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 282327, 'error': None, 'target': 'ovnmeta-abb83e3e-0b12-431b-ad86-a1d271b5b46a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:35:54 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:35:54.317 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[d6e0b036-191d-49d4-adb8-af34605435ad]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapabb83e3e-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fd:0b:d2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 45], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 569498, 'reachable_time': 23430, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 282328, 'error': None, 'target': 'ovnmeta-abb83e3e-0b12-431b-ad86-a1d271b5b46a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:35:54 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:35:54.345 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[22b7e487-81f7-4bba-835c-e4aa4d89f442]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:35:54 compute-0 nova_compute[250018]: 2026-01-20 14:35:54.397 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: a529906d-6908-4a37-ac57-db4384de2893] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:35:54 compute-0 nova_compute[250018]: 2026-01-20 14:35:54.405 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: a529906d-6908-4a37-ac57-db4384de2893] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:35:54 compute-0 nova_compute[250018]: 2026-01-20 14:35:54.408 250022 DEBUG nova.virt.libvirt.driver [None req-7e2d09a8-0b11-4182-a2c3-04985cfa3573 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] [instance: 1b3763f0-b328-4db2-844b-7f56cc13c19e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:35:54 compute-0 nova_compute[250018]: 2026-01-20 14:35:54.409 250022 DEBUG nova.virt.libvirt.driver [None req-7e2d09a8-0b11-4182-a2c3-04985cfa3573 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] [instance: 1b3763f0-b328-4db2-844b-7f56cc13c19e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:35:54 compute-0 nova_compute[250018]: 2026-01-20 14:35:54.409 250022 DEBUG nova.virt.libvirt.driver [None req-7e2d09a8-0b11-4182-a2c3-04985cfa3573 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] [instance: 1b3763f0-b328-4db2-844b-7f56cc13c19e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:35:54 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:35:54.409 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[a749611f-9025-437e-a90a-ce96bcb9a196]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:35:54 compute-0 nova_compute[250018]: 2026-01-20 14:35:54.410 250022 DEBUG nova.virt.libvirt.driver [None req-7e2d09a8-0b11-4182-a2c3-04985cfa3573 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] [instance: 1b3763f0-b328-4db2-844b-7f56cc13c19e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:35:54 compute-0 nova_compute[250018]: 2026-01-20 14:35:54.410 250022 DEBUG nova.virt.libvirt.driver [None req-7e2d09a8-0b11-4182-a2c3-04985cfa3573 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] [instance: 1b3763f0-b328-4db2-844b-7f56cc13c19e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:35:54 compute-0 nova_compute[250018]: 2026-01-20 14:35:54.410 250022 DEBUG nova.virt.libvirt.driver [None req-7e2d09a8-0b11-4182-a2c3-04985cfa3573 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] [instance: 1b3763f0-b328-4db2-844b-7f56cc13c19e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:35:54 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:35:54.411 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapabb83e3e-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:35:54 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:35:54.411 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:35:54 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:35:54.412 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapabb83e3e-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:35:54 compute-0 nova_compute[250018]: 2026-01-20 14:35:54.414 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:35:54 compute-0 NetworkManager[48960]: <info>  [1768919754.4148] manager: (tapabb83e3e-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/81)
Jan 20 14:35:54 compute-0 kernel: tapabb83e3e-00: entered promiscuous mode
Jan 20 14:35:54 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:35:54.417 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapabb83e3e-00, col_values=(('external_ids', {'iface-id': 'dfacaf19-f896-4c13-a7ad-47b57cf03fc1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:35:54 compute-0 nova_compute[250018]: 2026-01-20 14:35:54.417 250022 DEBUG nova.virt.libvirt.driver [None req-76fb2ae9-c4fb-4be1-b0bb-b58de89b0ed4 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] [instance: a529906d-6908-4a37-ac57-db4384de2893] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:35:54 compute-0 nova_compute[250018]: 2026-01-20 14:35:54.418 250022 DEBUG nova.virt.libvirt.driver [None req-76fb2ae9-c4fb-4be1-b0bb-b58de89b0ed4 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] [instance: a529906d-6908-4a37-ac57-db4384de2893] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:35:54 compute-0 nova_compute[250018]: 2026-01-20 14:35:54.418 250022 DEBUG nova.virt.libvirt.driver [None req-76fb2ae9-c4fb-4be1-b0bb-b58de89b0ed4 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] [instance: a529906d-6908-4a37-ac57-db4384de2893] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:35:54 compute-0 nova_compute[250018]: 2026-01-20 14:35:54.418 250022 DEBUG nova.virt.libvirt.driver [None req-76fb2ae9-c4fb-4be1-b0bb-b58de89b0ed4 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] [instance: a529906d-6908-4a37-ac57-db4384de2893] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:35:54 compute-0 ovn_controller[148666]: 2026-01-20T14:35:54Z|00142|binding|INFO|Releasing lport dfacaf19-f896-4c13-a7ad-47b57cf03fc1 from this chassis (sb_readonly=0)
Jan 20 14:35:54 compute-0 nova_compute[250018]: 2026-01-20 14:35:54.419 250022 DEBUG nova.virt.libvirt.driver [None req-76fb2ae9-c4fb-4be1-b0bb-b58de89b0ed4 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] [instance: a529906d-6908-4a37-ac57-db4384de2893] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:35:54 compute-0 nova_compute[250018]: 2026-01-20 14:35:54.419 250022 DEBUG nova.virt.libvirt.driver [None req-76fb2ae9-c4fb-4be1-b0bb-b58de89b0ed4 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] [instance: a529906d-6908-4a37-ac57-db4384de2893] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:35:54 compute-0 nova_compute[250018]: 2026-01-20 14:35:54.421 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:35:54 compute-0 nova_compute[250018]: 2026-01-20 14:35:54.437 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:35:54 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:35:54.438 160071 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/abb83e3e-0b12-431b-ad86-a1d271b5b46a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/abb83e3e-0b12-431b-ad86-a1d271b5b46a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 14:35:54 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:35:54.439 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[0c5db562-f385-43ec-8b19-78be4dd21b6b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:35:54 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:35:54.440 160071 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 14:35:54 compute-0 ovn_metadata_agent[160049]: global
Jan 20 14:35:54 compute-0 ovn_metadata_agent[160049]:     log         /dev/log local0 debug
Jan 20 14:35:54 compute-0 ovn_metadata_agent[160049]:     log-tag     haproxy-metadata-proxy-abb83e3e-0b12-431b-ad86-a1d271b5b46a
Jan 20 14:35:54 compute-0 ovn_metadata_agent[160049]:     user        root
Jan 20 14:35:54 compute-0 ovn_metadata_agent[160049]:     group       root
Jan 20 14:35:54 compute-0 ovn_metadata_agent[160049]:     maxconn     1024
Jan 20 14:35:54 compute-0 ovn_metadata_agent[160049]:     pidfile     /var/lib/neutron/external/pids/abb83e3e-0b12-431b-ad86-a1d271b5b46a.pid.haproxy
Jan 20 14:35:54 compute-0 ovn_metadata_agent[160049]:     daemon
Jan 20 14:35:54 compute-0 ovn_metadata_agent[160049]: 
Jan 20 14:35:54 compute-0 ovn_metadata_agent[160049]: defaults
Jan 20 14:35:54 compute-0 ovn_metadata_agent[160049]:     log global
Jan 20 14:35:54 compute-0 ovn_metadata_agent[160049]:     mode http
Jan 20 14:35:54 compute-0 ovn_metadata_agent[160049]:     option httplog
Jan 20 14:35:54 compute-0 ovn_metadata_agent[160049]:     option dontlognull
Jan 20 14:35:54 compute-0 ovn_metadata_agent[160049]:     option http-server-close
Jan 20 14:35:54 compute-0 ovn_metadata_agent[160049]:     option forwardfor
Jan 20 14:35:54 compute-0 ovn_metadata_agent[160049]:     retries                 3
Jan 20 14:35:54 compute-0 ovn_metadata_agent[160049]:     timeout http-request    30s
Jan 20 14:35:54 compute-0 ovn_metadata_agent[160049]:     timeout connect         30s
Jan 20 14:35:54 compute-0 ovn_metadata_agent[160049]:     timeout client          32s
Jan 20 14:35:54 compute-0 ovn_metadata_agent[160049]:     timeout server          32s
Jan 20 14:35:54 compute-0 ovn_metadata_agent[160049]:     timeout http-keep-alive 30s
Jan 20 14:35:54 compute-0 ovn_metadata_agent[160049]: 
Jan 20 14:35:54 compute-0 ovn_metadata_agent[160049]: 
Jan 20 14:35:54 compute-0 ovn_metadata_agent[160049]: listen listener
Jan 20 14:35:54 compute-0 ovn_metadata_agent[160049]:     bind 169.254.169.254:80
Jan 20 14:35:54 compute-0 ovn_metadata_agent[160049]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 14:35:54 compute-0 ovn_metadata_agent[160049]:     http-request add-header X-OVN-Network-ID abb83e3e-0b12-431b-ad86-a1d271b5b46a
Jan 20 14:35:54 compute-0 ovn_metadata_agent[160049]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 14:35:54 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:35:54.441 160071 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-abb83e3e-0b12-431b-ad86-a1d271b5b46a', 'env', 'PROCESS_TAG=haproxy-abb83e3e-0b12-431b-ad86-a1d271b5b46a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/abb83e3e-0b12-431b-ad86-a1d271b5b46a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 14:35:54 compute-0 nova_compute[250018]: 2026-01-20 14:35:54.444 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: a529906d-6908-4a37-ac57-db4384de2893] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:35:54 compute-0 nova_compute[250018]: 2026-01-20 14:35:54.445 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768919753.8805606, a529906d-6908-4a37-ac57-db4384de2893 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:35:54 compute-0 nova_compute[250018]: 2026-01-20 14:35:54.445 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: a529906d-6908-4a37-ac57-db4384de2893] VM Paused (Lifecycle Event)
Jan 20 14:35:54 compute-0 nova_compute[250018]: 2026-01-20 14:35:54.485 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: a529906d-6908-4a37-ac57-db4384de2893] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:35:54 compute-0 nova_compute[250018]: 2026-01-20 14:35:54.492 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768919753.887011, a529906d-6908-4a37-ac57-db4384de2893 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:35:54 compute-0 nova_compute[250018]: 2026-01-20 14:35:54.492 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: a529906d-6908-4a37-ac57-db4384de2893] VM Resumed (Lifecycle Event)
Jan 20 14:35:54 compute-0 nova_compute[250018]: 2026-01-20 14:35:54.553 250022 INFO nova.compute.manager [None req-76fb2ae9-c4fb-4be1-b0bb-b58de89b0ed4 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] [instance: a529906d-6908-4a37-ac57-db4384de2893] Took 6.40 seconds to spawn the instance on the hypervisor.
Jan 20 14:35:54 compute-0 nova_compute[250018]: 2026-01-20 14:35:54.553 250022 DEBUG nova.compute.manager [None req-76fb2ae9-c4fb-4be1-b0bb-b58de89b0ed4 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] [instance: a529906d-6908-4a37-ac57-db4384de2893] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:35:54 compute-0 nova_compute[250018]: 2026-01-20 14:35:54.554 250022 INFO nova.compute.manager [None req-7e2d09a8-0b11-4182-a2c3-04985cfa3573 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] [instance: 1b3763f0-b328-4db2-844b-7f56cc13c19e] Took 7.55 seconds to spawn the instance on the hypervisor.
Jan 20 14:35:54 compute-0 nova_compute[250018]: 2026-01-20 14:35:54.555 250022 DEBUG nova.compute.manager [None req-7e2d09a8-0b11-4182-a2c3-04985cfa3573 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] [instance: 1b3763f0-b328-4db2-844b-7f56cc13c19e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:35:54 compute-0 nova_compute[250018]: 2026-01-20 14:35:54.557 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: a529906d-6908-4a37-ac57-db4384de2893] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:35:54 compute-0 nova_compute[250018]: 2026-01-20 14:35:54.567 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: a529906d-6908-4a37-ac57-db4384de2893] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:35:54 compute-0 nova_compute[250018]: 2026-01-20 14:35:54.614 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: a529906d-6908-4a37-ac57-db4384de2893] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:35:54 compute-0 nova_compute[250018]: 2026-01-20 14:35:54.615 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768919754.036998, 1b3763f0-b328-4db2-844b-7f56cc13c19e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:35:54 compute-0 nova_compute[250018]: 2026-01-20 14:35:54.615 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 1b3763f0-b328-4db2-844b-7f56cc13c19e] VM Started (Lifecycle Event)
Jan 20 14:35:54 compute-0 nova_compute[250018]: 2026-01-20 14:35:54.655 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 1b3763f0-b328-4db2-844b-7f56cc13c19e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:35:54 compute-0 nova_compute[250018]: 2026-01-20 14:35:54.671 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 1b3763f0-b328-4db2-844b-7f56cc13c19e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:35:54 compute-0 nova_compute[250018]: 2026-01-20 14:35:54.796 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768919754.0371609, 1b3763f0-b328-4db2-844b-7f56cc13c19e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:35:54 compute-0 nova_compute[250018]: 2026-01-20 14:35:54.796 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 1b3763f0-b328-4db2-844b-7f56cc13c19e] VM Paused (Lifecycle Event)
Jan 20 14:35:54 compute-0 nova_compute[250018]: 2026-01-20 14:35:54.800 250022 INFO nova.compute.manager [None req-7e2d09a8-0b11-4182-a2c3-04985cfa3573 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] [instance: 1b3763f0-b328-4db2-844b-7f56cc13c19e] Took 9.46 seconds to build instance.
Jan 20 14:35:54 compute-0 nova_compute[250018]: 2026-01-20 14:35:54.801 250022 INFO nova.compute.manager [None req-76fb2ae9-c4fb-4be1-b0bb-b58de89b0ed4 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] [instance: a529906d-6908-4a37-ac57-db4384de2893] Took 9.44 seconds to build instance.
Jan 20 14:35:54 compute-0 podman[282361]: 2026-01-20 14:35:54.830802284 +0000 UTC m=+0.050153710 container create 3c1e16fba58896421ce4206215e420b3179cd561b7eec19d665f169052bf00b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-abb83e3e-0b12-431b-ad86-a1d271b5b46a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 20 14:35:54 compute-0 nova_compute[250018]: 2026-01-20 14:35:54.856 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 1b3763f0-b328-4db2-844b-7f56cc13c19e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:35:54 compute-0 nova_compute[250018]: 2026-01-20 14:35:54.859 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768919754.0419686, 1b3763f0-b328-4db2-844b-7f56cc13c19e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:35:54 compute-0 nova_compute[250018]: 2026-01-20 14:35:54.859 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 1b3763f0-b328-4db2-844b-7f56cc13c19e] VM Resumed (Lifecycle Event)
Jan 20 14:35:54 compute-0 nova_compute[250018]: 2026-01-20 14:35:54.861 250022 DEBUG oslo_concurrency.lockutils [None req-7e2d09a8-0b11-4182-a2c3-04985cfa3573 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] Lock "1b3763f0-b328-4db2-844b-7f56cc13c19e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.936s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:35:54 compute-0 nova_compute[250018]: 2026-01-20 14:35:54.862 250022 DEBUG oslo_concurrency.lockutils [None req-76fb2ae9-c4fb-4be1-b0bb-b58de89b0ed4 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Lock "a529906d-6908-4a37-ac57-db4384de2893" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.895s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:35:54 compute-0 systemd[1]: Started libpod-conmon-3c1e16fba58896421ce4206215e420b3179cd561b7eec19d665f169052bf00b3.scope.
Jan 20 14:35:54 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:35:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c92a063ca7f020be5bb3d3149024585106a46ad855561a703720c5a36ed481b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 14:35:54 compute-0 podman[282361]: 2026-01-20 14:35:54.804221439 +0000 UTC m=+0.023572895 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 14:35:54 compute-0 podman[282361]: 2026-01-20 14:35:54.905711018 +0000 UTC m=+0.125062484 container init 3c1e16fba58896421ce4206215e420b3179cd561b7eec19d665f169052bf00b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-abb83e3e-0b12-431b-ad86-a1d271b5b46a, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2)
Jan 20 14:35:54 compute-0 podman[282361]: 2026-01-20 14:35:54.911884644 +0000 UTC m=+0.131236080 container start 3c1e16fba58896421ce4206215e420b3179cd561b7eec19d665f169052bf00b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-abb83e3e-0b12-431b-ad86-a1d271b5b46a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 14:35:54 compute-0 neutron-haproxy-ovnmeta-abb83e3e-0b12-431b-ad86-a1d271b5b46a[282376]: [NOTICE]   (282380) : New worker (282382) forked
Jan 20 14:35:54 compute-0 neutron-haproxy-ovnmeta-abb83e3e-0b12-431b-ad86-a1d271b5b46a[282376]: [NOTICE]   (282380) : Loading success.
Jan 20 14:35:55 compute-0 nova_compute[250018]: 2026-01-20 14:35:55.060 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 1b3763f0-b328-4db2-844b-7f56cc13c19e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:35:55 compute-0 nova_compute[250018]: 2026-01-20 14:35:55.063 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 1b3763f0-b328-4db2-844b-7f56cc13c19e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:35:55 compute-0 nova_compute[250018]: 2026-01-20 14:35:55.122 250022 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768919740.1213083, f7ef5c6e-053e-4a03-a68f-60f399ea2fc9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:35:55 compute-0 nova_compute[250018]: 2026-01-20 14:35:55.123 250022 INFO nova.compute.manager [-] [instance: f7ef5c6e-053e-4a03-a68f-60f399ea2fc9] VM Stopped (Lifecycle Event)
Jan 20 14:35:55 compute-0 nova_compute[250018]: 2026-01-20 14:35:55.232 250022 DEBUG nova.compute.manager [None req-b5039925-cb0f-4430-892a-85bba5d8c386 - - - - - -] [instance: f7ef5c6e-053e-4a03-a68f-60f399ea2fc9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:35:55 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:35:55 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:35:55 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:35:55.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:35:55 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1396: 321 pgs: 321 active+clean; 165 MiB data, 567 MiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 8.7 MiB/s wr, 313 op/s
Jan 20 14:35:55 compute-0 nova_compute[250018]: 2026-01-20 14:35:55.719 250022 DEBUG nova.compute.manager [req-05a7968d-697e-4f54-99ed-bdc0d494d51d req-dfbcc3fc-dc38-4544-901f-a1e51a8811b7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 1b3763f0-b328-4db2-844b-7f56cc13c19e] Received event network-vif-plugged-dea207a0-1a8b-4f40-8fdc-1a5e76999db8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:35:55 compute-0 nova_compute[250018]: 2026-01-20 14:35:55.719 250022 DEBUG oslo_concurrency.lockutils [req-05a7968d-697e-4f54-99ed-bdc0d494d51d req-dfbcc3fc-dc38-4544-901f-a1e51a8811b7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "1b3763f0-b328-4db2-844b-7f56cc13c19e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:35:55 compute-0 nova_compute[250018]: 2026-01-20 14:35:55.720 250022 DEBUG oslo_concurrency.lockutils [req-05a7968d-697e-4f54-99ed-bdc0d494d51d req-dfbcc3fc-dc38-4544-901f-a1e51a8811b7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "1b3763f0-b328-4db2-844b-7f56cc13c19e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:35:55 compute-0 nova_compute[250018]: 2026-01-20 14:35:55.720 250022 DEBUG oslo_concurrency.lockutils [req-05a7968d-697e-4f54-99ed-bdc0d494d51d req-dfbcc3fc-dc38-4544-901f-a1e51a8811b7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "1b3763f0-b328-4db2-844b-7f56cc13c19e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:35:55 compute-0 nova_compute[250018]: 2026-01-20 14:35:55.721 250022 DEBUG nova.compute.manager [req-05a7968d-697e-4f54-99ed-bdc0d494d51d req-dfbcc3fc-dc38-4544-901f-a1e51a8811b7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 1b3763f0-b328-4db2-844b-7f56cc13c19e] No waiting events found dispatching network-vif-plugged-dea207a0-1a8b-4f40-8fdc-1a5e76999db8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:35:55 compute-0 nova_compute[250018]: 2026-01-20 14:35:55.721 250022 WARNING nova.compute.manager [req-05a7968d-697e-4f54-99ed-bdc0d494d51d req-dfbcc3fc-dc38-4544-901f-a1e51a8811b7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 1b3763f0-b328-4db2-844b-7f56cc13c19e] Received unexpected event network-vif-plugged-dea207a0-1a8b-4f40-8fdc-1a5e76999db8 for instance with vm_state active and task_state None.
Jan 20 14:35:55 compute-0 nova_compute[250018]: 2026-01-20 14:35:55.934 250022 DEBUG nova.compute.manager [req-57e813d8-04e4-4ca7-b7a7-964627a64d9d req-f0c1091e-5260-4276-8db3-7ae68005b139 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: a529906d-6908-4a37-ac57-db4384de2893] Received event network-vif-plugged-f2330816-cb0c-4408-9178-8d732ae1a45b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:35:55 compute-0 nova_compute[250018]: 2026-01-20 14:35:55.934 250022 DEBUG oslo_concurrency.lockutils [req-57e813d8-04e4-4ca7-b7a7-964627a64d9d req-f0c1091e-5260-4276-8db3-7ae68005b139 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "a529906d-6908-4a37-ac57-db4384de2893-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:35:55 compute-0 nova_compute[250018]: 2026-01-20 14:35:55.934 250022 DEBUG oslo_concurrency.lockutils [req-57e813d8-04e4-4ca7-b7a7-964627a64d9d req-f0c1091e-5260-4276-8db3-7ae68005b139 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "a529906d-6908-4a37-ac57-db4384de2893-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:35:55 compute-0 nova_compute[250018]: 2026-01-20 14:35:55.934 250022 DEBUG oslo_concurrency.lockutils [req-57e813d8-04e4-4ca7-b7a7-964627a64d9d req-f0c1091e-5260-4276-8db3-7ae68005b139 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "a529906d-6908-4a37-ac57-db4384de2893-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:35:55 compute-0 nova_compute[250018]: 2026-01-20 14:35:55.935 250022 DEBUG nova.compute.manager [req-57e813d8-04e4-4ca7-b7a7-964627a64d9d req-f0c1091e-5260-4276-8db3-7ae68005b139 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: a529906d-6908-4a37-ac57-db4384de2893] No waiting events found dispatching network-vif-plugged-f2330816-cb0c-4408-9178-8d732ae1a45b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:35:55 compute-0 nova_compute[250018]: 2026-01-20 14:35:55.935 250022 WARNING nova.compute.manager [req-57e813d8-04e4-4ca7-b7a7-964627a64d9d req-f0c1091e-5260-4276-8db3-7ae68005b139 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: a529906d-6908-4a37-ac57-db4384de2893] Received unexpected event network-vif-plugged-f2330816-cb0c-4408-9178-8d732ae1a45b for instance with vm_state active and task_state None.
Jan 20 14:35:56 compute-0 nova_compute[250018]: 2026-01-20 14:35:56.049 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:35:56 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:35:56.049 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=18, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=17) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:35:56 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:35:56.050 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 14:35:56 compute-0 nova_compute[250018]: 2026-01-20 14:35:56.238 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:35:56 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:35:56 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:35:56 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:35:56.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:35:56 compute-0 ceph-mon[74360]: pgmap v1396: 321 pgs: 321 active+clean; 165 MiB data, 567 MiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 8.7 MiB/s wr, 313 op/s
Jan 20 14:35:57 compute-0 nova_compute[250018]: 2026-01-20 14:35:57.073 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:35:57 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:35:57 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:35:57 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:35:57.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:35:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 14:35:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:35:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 14:35:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:35:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:35:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:35:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:35:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:35:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:35:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:35:57 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e201 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:35:57 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e201 do_prune osdmap full prune enabled
Jan 20 14:35:57 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e202 e202: 3 total, 3 up, 3 in
Jan 20 14:35:57 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e202: 3 total, 3 up, 3 in
Jan 20 14:35:57 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1398: 321 pgs: 321 active+clean; 181 MiB data, 572 MiB used, 20 GiB / 21 GiB avail; 5.7 MiB/s rd, 6.6 MiB/s wr, 438 op/s
Jan 20 14:35:58 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:35:58 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:35:58 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:35:58.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:35:58 compute-0 ceph-mon[74360]: osdmap e202: 3 total, 3 up, 3 in
Jan 20 14:35:58 compute-0 ceph-mon[74360]: pgmap v1398: 321 pgs: 321 active+clean; 181 MiB data, 572 MiB used, 20 GiB / 21 GiB avail; 5.7 MiB/s rd, 6.6 MiB/s wr, 438 op/s
Jan 20 14:35:59 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:35:59 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:35:59 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:35:59.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:35:59 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1399: 321 pgs: 321 active+clean; 181 MiB data, 572 MiB used, 20 GiB / 21 GiB avail; 6.1 MiB/s rd, 2.7 MiB/s wr, 335 op/s
Jan 20 14:35:59 compute-0 nova_compute[250018]: 2026-01-20 14:35:59.536 250022 DEBUG nova.compute.manager [None req-e6b189ea-48e9-48de-96f0-3d6a6a4f594a 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] [instance: 1b3763f0-b328-4db2-844b-7f56cc13c19e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:35:59 compute-0 sudo[282393]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:35:59 compute-0 sudo[282393]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:35:59 compute-0 sudo[282393]: pam_unix(sudo:session): session closed for user root
Jan 20 14:35:59 compute-0 nova_compute[250018]: 2026-01-20 14:35:59.635 250022 DEBUG nova.objects.instance [None req-cfaf53e0-e99d-4b52-b345-d3230907331b 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Lazy-loading 'pci_devices' on Instance uuid a529906d-6908-4a37-ac57-db4384de2893 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:35:59 compute-0 sudo[282418]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:35:59 compute-0 sudo[282418]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:35:59 compute-0 sudo[282418]: pam_unix(sudo:session): session closed for user root
Jan 20 14:35:59 compute-0 nova_compute[250018]: 2026-01-20 14:35:59.678 250022 INFO nova.compute.manager [None req-e6b189ea-48e9-48de-96f0-3d6a6a4f594a 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] [instance: 1b3763f0-b328-4db2-844b-7f56cc13c19e] instance snapshotting
Jan 20 14:35:59 compute-0 sudo[282443]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:35:59 compute-0 sudo[282443]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:35:59 compute-0 sudo[282443]: pam_unix(sudo:session): session closed for user root
Jan 20 14:35:59 compute-0 nova_compute[250018]: 2026-01-20 14:35:59.742 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768919759.738039, a529906d-6908-4a37-ac57-db4384de2893 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:35:59 compute-0 nova_compute[250018]: 2026-01-20 14:35:59.743 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: a529906d-6908-4a37-ac57-db4384de2893] VM Paused (Lifecycle Event)
Jan 20 14:35:59 compute-0 sudo[282468]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 14:35:59 compute-0 sudo[282468]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:35:59 compute-0 nova_compute[250018]: 2026-01-20 14:35:59.785 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: a529906d-6908-4a37-ac57-db4384de2893] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:35:59 compute-0 nova_compute[250018]: 2026-01-20 14:35:59.793 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: a529906d-6908-4a37-ac57-db4384de2893] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:35:59 compute-0 nova_compute[250018]: 2026-01-20 14:35:59.851 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: a529906d-6908-4a37-ac57-db4384de2893] During sync_power_state the instance has a pending task (suspending). Skip.
Jan 20 14:36:00 compute-0 nova_compute[250018]: 2026-01-20 14:36:00.180 250022 INFO nova.virt.libvirt.driver [None req-e6b189ea-48e9-48de-96f0-3d6a6a4f594a 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] [instance: 1b3763f0-b328-4db2-844b-7f56cc13c19e] Beginning live snapshot process
Jan 20 14:36:00 compute-0 sudo[282468]: pam_unix(sudo:session): session closed for user root
Jan 20 14:36:00 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:36:00 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:36:00 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:36:00.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:36:00 compute-0 kernel: tapf2330816-cb (unregistering): left promiscuous mode
Jan 20 14:36:00 compute-0 NetworkManager[48960]: <info>  [1768919760.3125] device (tapf2330816-cb): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 14:36:00 compute-0 ovn_controller[148666]: 2026-01-20T14:36:00Z|00143|binding|INFO|Releasing lport f2330816-cb0c-4408-9178-8d732ae1a45b from this chassis (sb_readonly=0)
Jan 20 14:36:00 compute-0 ovn_controller[148666]: 2026-01-20T14:36:00Z|00144|binding|INFO|Setting lport f2330816-cb0c-4408-9178-8d732ae1a45b down in Southbound
Jan 20 14:36:00 compute-0 nova_compute[250018]: 2026-01-20 14:36:00.320 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:36:00 compute-0 ovn_controller[148666]: 2026-01-20T14:36:00Z|00145|binding|INFO|Removing iface tapf2330816-cb ovn-installed in OVS
Jan 20 14:36:00 compute-0 nova_compute[250018]: 2026-01-20 14:36:00.323 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:36:00 compute-0 nova_compute[250018]: 2026-01-20 14:36:00.341 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:36:00 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:36:00.345 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:de:d2:be 10.100.0.11'], port_security=['fa:16:3e:de:d2:be 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'a529906d-6908-4a37-ac57-db4384de2893', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-abb83e3e-0b12-431b-ad86-a1d271b5b46a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3750c56415134773aa9d9880038f1749', 'neutron:revision_number': '4', 'neutron:security_group_ids': '2e302063-2ccd-4f7c-8835-ef521762a486', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4125934e-1dea-4e34-a38d-5291c850f0b2, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=f2330816-cb0c-4408-9178-8d732ae1a45b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:36:00 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:36:00.347 160071 INFO neutron.agent.ovn.metadata.agent [-] Port f2330816-cb0c-4408-9178-8d732ae1a45b in datapath abb83e3e-0b12-431b-ad86-a1d271b5b46a unbound from our chassis
Jan 20 14:36:00 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:36:00.348 160071 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network abb83e3e-0b12-431b-ad86-a1d271b5b46a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 14:36:00 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:36:00.353 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[4e15293c-95c3-404a-b7b9-27ce1e18c689]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:36:00 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:36:00.354 160071 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-abb83e3e-0b12-431b-ad86-a1d271b5b46a namespace which is not needed anymore
Jan 20 14:36:00 compute-0 systemd[1]: machine-qemu\x2d23\x2dinstance\x2d00000032.scope: Deactivated successfully.
Jan 20 14:36:00 compute-0 systemd[1]: machine-qemu\x2d23\x2dinstance\x2d00000032.scope: Consumed 6.469s CPU time.
Jan 20 14:36:00 compute-0 systemd-machined[216401]: Machine qemu-23-instance-00000032 terminated.
Jan 20 14:36:00 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 14:36:00 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:36:00 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 20 14:36:00 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:36:00 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 20 14:36:00 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:36:00 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 60960044-6033-4cd4-9a6b-519224ce4af6 does not exist
Jan 20 14:36:00 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 73728d48-5af0-49c4-bc40-bb15c78aaf87 does not exist
Jan 20 14:36:00 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev a6e2cc4c-00cf-4ec8-81de-83517d42fccb does not exist
Jan 20 14:36:00 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 20 14:36:00 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:36:00 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 20 14:36:00 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:36:00 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 14:36:00 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:36:00 compute-0 nova_compute[250018]: 2026-01-20 14:36:00.484 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:36:00 compute-0 nova_compute[250018]: 2026-01-20 14:36:00.490 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:36:00 compute-0 nova_compute[250018]: 2026-01-20 14:36:00.498 250022 DEBUG nova.compute.manager [None req-cfaf53e0-e99d-4b52-b345-d3230907331b 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] [instance: a529906d-6908-4a37-ac57-db4384de2893] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:36:00 compute-0 neutron-haproxy-ovnmeta-abb83e3e-0b12-431b-ad86-a1d271b5b46a[282376]: [NOTICE]   (282380) : haproxy version is 2.8.14-c23fe91
Jan 20 14:36:00 compute-0 neutron-haproxy-ovnmeta-abb83e3e-0b12-431b-ad86-a1d271b5b46a[282376]: [NOTICE]   (282380) : path to executable is /usr/sbin/haproxy
Jan 20 14:36:00 compute-0 neutron-haproxy-ovnmeta-abb83e3e-0b12-431b-ad86-a1d271b5b46a[282376]: [WARNING]  (282380) : Exiting Master process...
Jan 20 14:36:00 compute-0 neutron-haproxy-ovnmeta-abb83e3e-0b12-431b-ad86-a1d271b5b46a[282376]: [WARNING]  (282380) : Exiting Master process...
Jan 20 14:36:00 compute-0 neutron-haproxy-ovnmeta-abb83e3e-0b12-431b-ad86-a1d271b5b46a[282376]: [ALERT]    (282380) : Current worker (282382) exited with code 143 (Terminated)
Jan 20 14:36:00 compute-0 neutron-haproxy-ovnmeta-abb83e3e-0b12-431b-ad86-a1d271b5b46a[282376]: [WARNING]  (282380) : All workers exited. Exiting... (0)
Jan 20 14:36:00 compute-0 systemd[1]: libpod-3c1e16fba58896421ce4206215e420b3179cd561b7eec19d665f169052bf00b3.scope: Deactivated successfully.
Jan 20 14:36:00 compute-0 sudo[282552]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:36:00 compute-0 podman[282551]: 2026-01-20 14:36:00.515091566 +0000 UTC m=+0.052232946 container died 3c1e16fba58896421ce4206215e420b3179cd561b7eec19d665f169052bf00b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-abb83e3e-0b12-431b-ad86-a1d271b5b46a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 14:36:00 compute-0 sudo[282552]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:36:00 compute-0 sudo[282552]: pam_unix(sudo:session): session closed for user root
Jan 20 14:36:00 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3c1e16fba58896421ce4206215e420b3179cd561b7eec19d665f169052bf00b3-userdata-shm.mount: Deactivated successfully.
Jan 20 14:36:00 compute-0 ceph-mon[74360]: pgmap v1399: 321 pgs: 321 active+clean; 181 MiB data, 572 MiB used, 20 GiB / 21 GiB avail; 6.1 MiB/s rd, 2.7 MiB/s wr, 335 op/s
Jan 20 14:36:00 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:36:00 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:36:00 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:36:00 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:36:00 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:36:00 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:36:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-5c92a063ca7f020be5bb3d3149024585106a46ad855561a703720c5a36ed481b-merged.mount: Deactivated successfully.
Jan 20 14:36:00 compute-0 podman[282551]: 2026-01-20 14:36:00.557719842 +0000 UTC m=+0.094861182 container cleanup 3c1e16fba58896421ce4206215e420b3179cd561b7eec19d665f169052bf00b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-abb83e3e-0b12-431b-ad86-a1d271b5b46a, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2)
Jan 20 14:36:00 compute-0 systemd[1]: libpod-conmon-3c1e16fba58896421ce4206215e420b3179cd561b7eec19d665f169052bf00b3.scope: Deactivated successfully.
Jan 20 14:36:00 compute-0 sudo[282605]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:36:00 compute-0 sudo[282605]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:36:00 compute-0 sudo[282605]: pam_unix(sudo:session): session closed for user root
Jan 20 14:36:00 compute-0 sudo[282649]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:36:00 compute-0 podman[282634]: 2026-01-20 14:36:00.638356211 +0000 UTC m=+0.051627470 container remove 3c1e16fba58896421ce4206215e420b3179cd561b7eec19d665f169052bf00b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-abb83e3e-0b12-431b-ad86-a1d271b5b46a, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 20 14:36:00 compute-0 sudo[282649]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:36:00 compute-0 sudo[282649]: pam_unix(sudo:session): session closed for user root
Jan 20 14:36:00 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:36:00.645 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[1d9f5c0e-2762-49a6-adda-853f5babe229]: (4, ('Tue Jan 20 02:36:00 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-abb83e3e-0b12-431b-ad86-a1d271b5b46a (3c1e16fba58896421ce4206215e420b3179cd561b7eec19d665f169052bf00b3)\n3c1e16fba58896421ce4206215e420b3179cd561b7eec19d665f169052bf00b3\nTue Jan 20 02:36:00 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-abb83e3e-0b12-431b-ad86-a1d271b5b46a (3c1e16fba58896421ce4206215e420b3179cd561b7eec19d665f169052bf00b3)\n3c1e16fba58896421ce4206215e420b3179cd561b7eec19d665f169052bf00b3\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:36:00 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:36:00.647 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[8435f840-fcd0-4e2a-963a-d6f7e79ba488]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:36:00 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:36:00.648 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapabb83e3e-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:36:00 compute-0 kernel: tapabb83e3e-00: left promiscuous mode
Jan 20 14:36:00 compute-0 nova_compute[250018]: 2026-01-20 14:36:00.649 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:36:00 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:36:00.673 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[2acd39a2-13b5-4e23-b397-2877b13d9e4f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:36:00 compute-0 nova_compute[250018]: 2026-01-20 14:36:00.676 250022 DEBUG nova.virt.libvirt.imagebackend [None req-e6b189ea-48e9-48de-96f0-3d6a6a4f594a 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] No parent info for a32b3e07-16d8-46fd-9a7b-c242c432fcf9; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Jan 20 14:36:00 compute-0 nova_compute[250018]: 2026-01-20 14:36:00.678 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:36:00 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:36:00.684 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[0a4385b5-0d35-4822-a7c7-e7ced7293d69]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:36:00 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:36:00.684 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[3ef56350-c1b7-4d5a-9f18-a8772da01436]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:36:00 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:36:00.701 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[27782116-904e-412e-8502-de3869cd8eb2]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 569490, 'reachable_time': 44415, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 282734, 'error': None, 'target': 'ovnmeta-abb83e3e-0b12-431b-ad86-a1d271b5b46a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:36:00 compute-0 systemd[1]: run-netns-ovnmeta\x2dabb83e3e\x2d0b12\x2d431b\x2dad86\x2da1d271b5b46a.mount: Deactivated successfully.
Jan 20 14:36:00 compute-0 sudo[282705]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 14:36:00 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:36:00.705 160517 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-abb83e3e-0b12-431b-ad86-a1d271b5b46a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 14:36:00 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:36:00.705 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[32c10ea1-2d3c-4214-91fc-88e1016f54b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:36:00 compute-0 sudo[282705]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:36:00 compute-0 nova_compute[250018]: 2026-01-20 14:36:00.897 250022 DEBUG nova.compute.manager [req-acd59d07-42b2-4e7c-bf6c-718284cded74 req-2414bca0-661f-4fc7-85c4-f6cc258e4fee 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: a529906d-6908-4a37-ac57-db4384de2893] Received event network-vif-unplugged-f2330816-cb0c-4408-9178-8d732ae1a45b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:36:00 compute-0 nova_compute[250018]: 2026-01-20 14:36:00.898 250022 DEBUG oslo_concurrency.lockutils [req-acd59d07-42b2-4e7c-bf6c-718284cded74 req-2414bca0-661f-4fc7-85c4-f6cc258e4fee 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "a529906d-6908-4a37-ac57-db4384de2893-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:36:00 compute-0 nova_compute[250018]: 2026-01-20 14:36:00.899 250022 DEBUG oslo_concurrency.lockutils [req-acd59d07-42b2-4e7c-bf6c-718284cded74 req-2414bca0-661f-4fc7-85c4-f6cc258e4fee 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "a529906d-6908-4a37-ac57-db4384de2893-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:36:00 compute-0 nova_compute[250018]: 2026-01-20 14:36:00.899 250022 DEBUG oslo_concurrency.lockutils [req-acd59d07-42b2-4e7c-bf6c-718284cded74 req-2414bca0-661f-4fc7-85c4-f6cc258e4fee 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "a529906d-6908-4a37-ac57-db4384de2893-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:36:00 compute-0 nova_compute[250018]: 2026-01-20 14:36:00.900 250022 DEBUG nova.compute.manager [req-acd59d07-42b2-4e7c-bf6c-718284cded74 req-2414bca0-661f-4fc7-85c4-f6cc258e4fee 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: a529906d-6908-4a37-ac57-db4384de2893] No waiting events found dispatching network-vif-unplugged-f2330816-cb0c-4408-9178-8d732ae1a45b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:36:00 compute-0 nova_compute[250018]: 2026-01-20 14:36:00.900 250022 WARNING nova.compute.manager [req-acd59d07-42b2-4e7c-bf6c-718284cded74 req-2414bca0-661f-4fc7-85c4-f6cc258e4fee 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: a529906d-6908-4a37-ac57-db4384de2893] Received unexpected event network-vif-unplugged-f2330816-cb0c-4408-9178-8d732ae1a45b for instance with vm_state suspended and task_state None.
Jan 20 14:36:00 compute-0 nova_compute[250018]: 2026-01-20 14:36:00.991 250022 DEBUG nova.storage.rbd_utils [None req-e6b189ea-48e9-48de-96f0-3d6a6a4f594a 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] creating snapshot(2c6f2f06a7ac46629a5c923e9f18d4c9) on rbd image(1b3763f0-b328-4db2-844b-7f56cc13c19e_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 20 14:36:01 compute-0 podman[282780]: 2026-01-20 14:36:01.050621557 +0000 UTC m=+0.041482586 container create e579cbd27edbff79063d4e759ecf93a33bbcfccb9f84113cdb3c4577b254a7d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_varahamihira, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 20 14:36:01 compute-0 systemd[1]: Started libpod-conmon-e579cbd27edbff79063d4e759ecf93a33bbcfccb9f84113cdb3c4577b254a7d3.scope.
Jan 20 14:36:01 compute-0 podman[282780]: 2026-01-20 14:36:01.030197318 +0000 UTC m=+0.021058367 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:36:01 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:36:01 compute-0 podman[282780]: 2026-01-20 14:36:01.145596031 +0000 UTC m=+0.136457090 container init e579cbd27edbff79063d4e759ecf93a33bbcfccb9f84113cdb3c4577b254a7d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_varahamihira, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 20 14:36:01 compute-0 awesome_varahamihira[282813]: 167 167
Jan 20 14:36:01 compute-0 systemd[1]: libpod-e579cbd27edbff79063d4e759ecf93a33bbcfccb9f84113cdb3c4577b254a7d3.scope: Deactivated successfully.
Jan 20 14:36:01 compute-0 podman[282780]: 2026-01-20 14:36:01.155422135 +0000 UTC m=+0.146283164 container start e579cbd27edbff79063d4e759ecf93a33bbcfccb9f84113cdb3c4577b254a7d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_varahamihira, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default)
Jan 20 14:36:01 compute-0 podman[282780]: 2026-01-20 14:36:01.169597607 +0000 UTC m=+0.160458666 container attach e579cbd27edbff79063d4e759ecf93a33bbcfccb9f84113cdb3c4577b254a7d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_varahamihira, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:36:01 compute-0 podman[282780]: 2026-01-20 14:36:01.170627724 +0000 UTC m=+0.161488753 container died e579cbd27edbff79063d4e759ecf93a33bbcfccb9f84113cdb3c4577b254a7d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_varahamihira, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 20 14:36:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-b197752a4d4e6c2712504e503757720a15b9e42405a02fae9cf320169d4704a6-merged.mount: Deactivated successfully.
Jan 20 14:36:01 compute-0 podman[282780]: 2026-01-20 14:36:01.209338346 +0000 UTC m=+0.200199375 container remove e579cbd27edbff79063d4e759ecf93a33bbcfccb9f84113cdb3c4577b254a7d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_varahamihira, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:36:01 compute-0 systemd[1]: libpod-conmon-e579cbd27edbff79063d4e759ecf93a33bbcfccb9f84113cdb3c4577b254a7d3.scope: Deactivated successfully.
Jan 20 14:36:01 compute-0 nova_compute[250018]: 2026-01-20 14:36:01.240 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:36:01 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:36:01 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:36:01 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:36:01.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:36:01 compute-0 podman[282838]: 2026-01-20 14:36:01.419547779 +0000 UTC m=+0.043931683 container create def6e154041bdb927ec621a0622d654a3dffe1c7feaa22de9178590103797023 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_galois, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 20 14:36:01 compute-0 systemd[1]: Started libpod-conmon-def6e154041bdb927ec621a0622d654a3dffe1c7feaa22de9178590103797023.scope.
Jan 20 14:36:01 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:36:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/364647466d7ac516582e7012aa48b5f01254e987bd8a4d8cf5cbb6c1711af265/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:36:01 compute-0 podman[282838]: 2026-01-20 14:36:01.402515561 +0000 UTC m=+0.026899485 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:36:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/364647466d7ac516582e7012aa48b5f01254e987bd8a4d8cf5cbb6c1711af265/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:36:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/364647466d7ac516582e7012aa48b5f01254e987bd8a4d8cf5cbb6c1711af265/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:36:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/364647466d7ac516582e7012aa48b5f01254e987bd8a4d8cf5cbb6c1711af265/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:36:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/364647466d7ac516582e7012aa48b5f01254e987bd8a4d8cf5cbb6c1711af265/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 14:36:01 compute-0 podman[282838]: 2026-01-20 14:36:01.509157939 +0000 UTC m=+0.133541863 container init def6e154041bdb927ec621a0622d654a3dffe1c7feaa22de9178590103797023 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_galois, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:36:01 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1400: 321 pgs: 321 active+clean; 181 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 7.3 MiB/s rd, 2.3 MiB/s wr, 351 op/s
Jan 20 14:36:01 compute-0 podman[282838]: 2026-01-20 14:36:01.517591755 +0000 UTC m=+0.141975669 container start def6e154041bdb927ec621a0622d654a3dffe1c7feaa22de9178590103797023 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_galois, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:36:01 compute-0 podman[282838]: 2026-01-20 14:36:01.521819339 +0000 UTC m=+0.146203243 container attach def6e154041bdb927ec621a0622d654a3dffe1c7feaa22de9178590103797023 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_galois, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 14:36:01 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e202 do_prune osdmap full prune enabled
Jan 20 14:36:01 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e203 e203: 3 total, 3 up, 3 in
Jan 20 14:36:01 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e203: 3 total, 3 up, 3 in
Jan 20 14:36:01 compute-0 nova_compute[250018]: 2026-01-20 14:36:01.629 250022 DEBUG nova.storage.rbd_utils [None req-e6b189ea-48e9-48de-96f0-3d6a6a4f594a 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] cloning vms/1b3763f0-b328-4db2-844b-7f56cc13c19e_disk@2c6f2f06a7ac46629a5c923e9f18d4c9 to images/fba3fdca-b283-4008-bac5-0575a6416bc7 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Jan 20 14:36:01 compute-0 nova_compute[250018]: 2026-01-20 14:36:01.742 250022 DEBUG nova.storage.rbd_utils [None req-e6b189ea-48e9-48de-96f0-3d6a6a4f594a 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] flattening images/fba3fdca-b283-4008-bac5-0575a6416bc7 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Jan 20 14:36:02 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:36:02.052 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=367c1a2c-b16a-4828-ab5a-626bb50023b4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '18'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:36:02 compute-0 nova_compute[250018]: 2026-01-20 14:36:02.101 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:36:02 compute-0 nova_compute[250018]: 2026-01-20 14:36:02.113 250022 DEBUG nova.storage.rbd_utils [None req-e6b189ea-48e9-48de-96f0-3d6a6a4f594a 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] removing snapshot(2c6f2f06a7ac46629a5c923e9f18d4c9) on rbd image(1b3763f0-b328-4db2-844b-7f56cc13c19e_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Jan 20 14:36:02 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:36:02 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:36:02 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:36:02.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:36:02 compute-0 hopeful_galois[282854]: --> passed data devices: 0 physical, 1 LVM
Jan 20 14:36:02 compute-0 hopeful_galois[282854]: --> relative data size: 1.0
Jan 20 14:36:02 compute-0 hopeful_galois[282854]: --> All data devices are unavailable
Jan 20 14:36:02 compute-0 systemd[1]: libpod-def6e154041bdb927ec621a0622d654a3dffe1c7feaa22de9178590103797023.scope: Deactivated successfully.
Jan 20 14:36:02 compute-0 podman[282838]: 2026-01-20 14:36:02.359417323 +0000 UTC m=+0.983801237 container died def6e154041bdb927ec621a0622d654a3dffe1c7feaa22de9178590103797023 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_galois, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 20 14:36:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-364647466d7ac516582e7012aa48b5f01254e987bd8a4d8cf5cbb6c1711af265-merged.mount: Deactivated successfully.
Jan 20 14:36:02 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:36:02 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e203 do_prune osdmap full prune enabled
Jan 20 14:36:02 compute-0 podman[282838]: 2026-01-20 14:36:02.573953112 +0000 UTC m=+1.198337016 container remove def6e154041bdb927ec621a0622d654a3dffe1c7feaa22de9178590103797023 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_galois, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:36:02 compute-0 systemd[1]: libpod-conmon-def6e154041bdb927ec621a0622d654a3dffe1c7feaa22de9178590103797023.scope: Deactivated successfully.
Jan 20 14:36:02 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e204 e204: 3 total, 3 up, 3 in
Jan 20 14:36:02 compute-0 sudo[282705]: pam_unix(sudo:session): session closed for user root
Jan 20 14:36:02 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e204: 3 total, 3 up, 3 in
Jan 20 14:36:02 compute-0 ceph-mon[74360]: pgmap v1400: 321 pgs: 321 active+clean; 181 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 7.3 MiB/s rd, 2.3 MiB/s wr, 351 op/s
Jan 20 14:36:02 compute-0 ceph-mon[74360]: osdmap e203: 3 total, 3 up, 3 in
Jan 20 14:36:02 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/3097242567' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:36:02 compute-0 sudo[282952]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:36:02 compute-0 nova_compute[250018]: 2026-01-20 14:36:02.676 250022 DEBUG nova.storage.rbd_utils [None req-e6b189ea-48e9-48de-96f0-3d6a6a4f594a 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] creating snapshot(snap) on rbd image(fba3fdca-b283-4008-bac5-0575a6416bc7) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 20 14:36:02 compute-0 sudo[282952]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:36:02 compute-0 sudo[282952]: pam_unix(sudo:session): session closed for user root
Jan 20 14:36:02 compute-0 sudo[282987]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:36:02 compute-0 sudo[282987]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:36:02 compute-0 sudo[282987]: pam_unix(sudo:session): session closed for user root
Jan 20 14:36:02 compute-0 sudo[283020]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:36:02 compute-0 sudo[283020]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:36:02 compute-0 sudo[283020]: pam_unix(sudo:session): session closed for user root
Jan 20 14:36:02 compute-0 sudo[283046]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- lvm list --format json
Jan 20 14:36:02 compute-0 sudo[283046]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:36:03 compute-0 sudo[283090]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:36:03 compute-0 sudo[283090]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:36:03 compute-0 sudo[283090]: pam_unix(sudo:session): session closed for user root
Jan 20 14:36:03 compute-0 sudo[283127]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:36:03 compute-0 sudo[283127]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:36:03 compute-0 sudo[283127]: pam_unix(sudo:session): session closed for user root
Jan 20 14:36:03 compute-0 podman[283157]: 2026-01-20 14:36:03.157521256 +0000 UTC m=+0.039589835 container create 5fa06da367c78b507939da1f29e5baec4e940e715d3de1c96736a42983665609 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_feynman, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:36:03 compute-0 systemd[1]: Started libpod-conmon-5fa06da367c78b507939da1f29e5baec4e940e715d3de1c96736a42983665609.scope.
Jan 20 14:36:03 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:36:03 compute-0 podman[283157]: 2026-01-20 14:36:03.230907939 +0000 UTC m=+0.112976528 container init 5fa06da367c78b507939da1f29e5baec4e940e715d3de1c96736a42983665609 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_feynman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 20 14:36:03 compute-0 podman[283157]: 2026-01-20 14:36:03.238167165 +0000 UTC m=+0.120235744 container start 5fa06da367c78b507939da1f29e5baec4e940e715d3de1c96736a42983665609 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_feynman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 20 14:36:03 compute-0 podman[283157]: 2026-01-20 14:36:03.14352796 +0000 UTC m=+0.025596549 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:36:03 compute-0 podman[283157]: 2026-01-20 14:36:03.241910445 +0000 UTC m=+0.123979054 container attach 5fa06da367c78b507939da1f29e5baec4e940e715d3de1c96736a42983665609 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_feynman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 20 14:36:03 compute-0 systemd[1]: libpod-5fa06da367c78b507939da1f29e5baec4e940e715d3de1c96736a42983665609.scope: Deactivated successfully.
Jan 20 14:36:03 compute-0 zen_feynman[283176]: 167 167
Jan 20 14:36:03 compute-0 conmon[283176]: conmon 5fa06da367c78b507939 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5fa06da367c78b507939da1f29e5baec4e940e715d3de1c96736a42983665609.scope/container/memory.events
Jan 20 14:36:03 compute-0 podman[283157]: 2026-01-20 14:36:03.244822643 +0000 UTC m=+0.126891222 container died 5fa06da367c78b507939da1f29e5baec4e940e715d3de1c96736a42983665609 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_feynman, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 20 14:36:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-6d28d9447e0b54485925d3a3543a0e7de7c9bd2a4fc384372d99f451bfe1713d-merged.mount: Deactivated successfully.
Jan 20 14:36:03 compute-0 podman[283157]: 2026-01-20 14:36:03.286856474 +0000 UTC m=+0.168925053 container remove 5fa06da367c78b507939da1f29e5baec4e940e715d3de1c96736a42983665609 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_feynman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:36:03 compute-0 nova_compute[250018]: 2026-01-20 14:36:03.286 250022 DEBUG nova.compute.manager [req-6e001630-5e1a-477c-b2b6-8148524b50d1 req-c966c997-fdb6-4e78-8744-28e51f58c876 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: a529906d-6908-4a37-ac57-db4384de2893] Received event network-vif-plugged-f2330816-cb0c-4408-9178-8d732ae1a45b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:36:03 compute-0 nova_compute[250018]: 2026-01-20 14:36:03.292 250022 DEBUG oslo_concurrency.lockutils [req-6e001630-5e1a-477c-b2b6-8148524b50d1 req-c966c997-fdb6-4e78-8744-28e51f58c876 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "a529906d-6908-4a37-ac57-db4384de2893-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:36:03 compute-0 nova_compute[250018]: 2026-01-20 14:36:03.292 250022 DEBUG oslo_concurrency.lockutils [req-6e001630-5e1a-477c-b2b6-8148524b50d1 req-c966c997-fdb6-4e78-8744-28e51f58c876 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "a529906d-6908-4a37-ac57-db4384de2893-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:36:03 compute-0 nova_compute[250018]: 2026-01-20 14:36:03.293 250022 DEBUG oslo_concurrency.lockutils [req-6e001630-5e1a-477c-b2b6-8148524b50d1 req-c966c997-fdb6-4e78-8744-28e51f58c876 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "a529906d-6908-4a37-ac57-db4384de2893-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:36:03 compute-0 nova_compute[250018]: 2026-01-20 14:36:03.293 250022 DEBUG nova.compute.manager [req-6e001630-5e1a-477c-b2b6-8148524b50d1 req-c966c997-fdb6-4e78-8744-28e51f58c876 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: a529906d-6908-4a37-ac57-db4384de2893] No waiting events found dispatching network-vif-plugged-f2330816-cb0c-4408-9178-8d732ae1a45b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:36:03 compute-0 nova_compute[250018]: 2026-01-20 14:36:03.293 250022 WARNING nova.compute.manager [req-6e001630-5e1a-477c-b2b6-8148524b50d1 req-c966c997-fdb6-4e78-8744-28e51f58c876 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: a529906d-6908-4a37-ac57-db4384de2893] Received unexpected event network-vif-plugged-f2330816-cb0c-4408-9178-8d732ae1a45b for instance with vm_state suspended and task_state None.
Jan 20 14:36:03 compute-0 systemd[1]: libpod-conmon-5fa06da367c78b507939da1f29e5baec4e940e715d3de1c96736a42983665609.scope: Deactivated successfully.
Jan 20 14:36:03 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:36:03 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:36:03 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:36:03.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:36:03 compute-0 podman[283200]: 2026-01-20 14:36:03.457203235 +0000 UTC m=+0.041173028 container create b0ca93db1f01b4f5bba229f8ee4fe3bad16099990d79b09d7efa993b1e75a4d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_jepsen, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:36:03 compute-0 nova_compute[250018]: 2026-01-20 14:36:03.501 250022 DEBUG nova.compute.manager [None req-fd54b769-df54-46a0-8b63-6685d544ef7e 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] [instance: a529906d-6908-4a37-ac57-db4384de2893] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:36:03 compute-0 systemd[1]: Started libpod-conmon-b0ca93db1f01b4f5bba229f8ee4fe3bad16099990d79b09d7efa993b1e75a4d4.scope.
Jan 20 14:36:03 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1403: 321 pgs: 321 active+clean; 181 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 5.3 MiB/s rd, 170 B/s wr, 177 op/s
Jan 20 14:36:03 compute-0 podman[283200]: 2026-01-20 14:36:03.439291983 +0000 UTC m=+0.023261796 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:36:03 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:36:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/878b3c1430b19bd5cb2f7becdab57496c61809559585bb17d9495166a19f106a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:36:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/878b3c1430b19bd5cb2f7becdab57496c61809559585bb17d9495166a19f106a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:36:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/878b3c1430b19bd5cb2f7becdab57496c61809559585bb17d9495166a19f106a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:36:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/878b3c1430b19bd5cb2f7becdab57496c61809559585bb17d9495166a19f106a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:36:03 compute-0 podman[283200]: 2026-01-20 14:36:03.556449354 +0000 UTC m=+0.140419177 container init b0ca93db1f01b4f5bba229f8ee4fe3bad16099990d79b09d7efa993b1e75a4d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_jepsen, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:36:03 compute-0 nova_compute[250018]: 2026-01-20 14:36:03.561 250022 INFO nova.compute.manager [None req-fd54b769-df54-46a0-8b63-6685d544ef7e 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] [instance: a529906d-6908-4a37-ac57-db4384de2893] instance snapshotting
Jan 20 14:36:03 compute-0 nova_compute[250018]: 2026-01-20 14:36:03.562 250022 WARNING nova.compute.manager [None req-fd54b769-df54-46a0-8b63-6685d544ef7e 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] [instance: a529906d-6908-4a37-ac57-db4384de2893] trying to snapshot a non-running instance: (state: 4 expected: 1)
Jan 20 14:36:03 compute-0 podman[283200]: 2026-01-20 14:36:03.563833402 +0000 UTC m=+0.147803195 container start b0ca93db1f01b4f5bba229f8ee4fe3bad16099990d79b09d7efa993b1e75a4d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_jepsen, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:36:03 compute-0 podman[283200]: 2026-01-20 14:36:03.572122535 +0000 UTC m=+0.156092348 container attach b0ca93db1f01b4f5bba229f8ee4fe3bad16099990d79b09d7efa993b1e75a4d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_jepsen, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:36:03 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e204 do_prune osdmap full prune enabled
Jan 20 14:36:03 compute-0 ceph-mon[74360]: osdmap e204: 3 total, 3 up, 3 in
Jan 20 14:36:03 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/791379787' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:36:03 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e205 e205: 3 total, 3 up, 3 in
Jan 20 14:36:03 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e205: 3 total, 3 up, 3 in
Jan 20 14:36:03 compute-0 nova_compute[250018]: 2026-01-20 14:36:03.922 250022 INFO nova.virt.libvirt.driver [None req-fd54b769-df54-46a0-8b63-6685d544ef7e 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] [instance: a529906d-6908-4a37-ac57-db4384de2893] Beginning cold snapshot process
Jan 20 14:36:04 compute-0 nova_compute[250018]: 2026-01-20 14:36:04.047 250022 DEBUG nova.virt.libvirt.imagebackend [None req-fd54b769-df54-46a0-8b63-6685d544ef7e 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] No parent info for a32b3e07-16d8-46fd-9a7b-c242c432fcf9; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Jan 20 14:36:04 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:36:04 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:36:04 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:36:04.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:36:04 compute-0 condescending_jepsen[283217]: {
Jan 20 14:36:04 compute-0 condescending_jepsen[283217]:     "0": [
Jan 20 14:36:04 compute-0 condescending_jepsen[283217]:         {
Jan 20 14:36:04 compute-0 condescending_jepsen[283217]:             "devices": [
Jan 20 14:36:04 compute-0 condescending_jepsen[283217]:                 "/dev/loop3"
Jan 20 14:36:04 compute-0 condescending_jepsen[283217]:             ],
Jan 20 14:36:04 compute-0 condescending_jepsen[283217]:             "lv_name": "ceph_lv0",
Jan 20 14:36:04 compute-0 condescending_jepsen[283217]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:36:04 compute-0 condescending_jepsen[283217]:             "lv_size": "7511998464",
Jan 20 14:36:04 compute-0 condescending_jepsen[283217]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e399cf45-e6b6-5393-99f1-75c601d3f188,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afc3740a-ccee-46f8-b343-22648cc89376,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 20 14:36:04 compute-0 condescending_jepsen[283217]:             "lv_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 14:36:04 compute-0 condescending_jepsen[283217]:             "name": "ceph_lv0",
Jan 20 14:36:04 compute-0 condescending_jepsen[283217]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:36:04 compute-0 condescending_jepsen[283217]:             "tags": {
Jan 20 14:36:04 compute-0 condescending_jepsen[283217]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:36:04 compute-0 condescending_jepsen[283217]:                 "ceph.block_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 14:36:04 compute-0 condescending_jepsen[283217]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 14:36:04 compute-0 condescending_jepsen[283217]:                 "ceph.cluster_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 14:36:04 compute-0 condescending_jepsen[283217]:                 "ceph.cluster_name": "ceph",
Jan 20 14:36:04 compute-0 condescending_jepsen[283217]:                 "ceph.crush_device_class": "",
Jan 20 14:36:04 compute-0 condescending_jepsen[283217]:                 "ceph.encrypted": "0",
Jan 20 14:36:04 compute-0 condescending_jepsen[283217]:                 "ceph.osd_fsid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 14:36:04 compute-0 condescending_jepsen[283217]:                 "ceph.osd_id": "0",
Jan 20 14:36:04 compute-0 condescending_jepsen[283217]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 14:36:04 compute-0 condescending_jepsen[283217]:                 "ceph.type": "block",
Jan 20 14:36:04 compute-0 condescending_jepsen[283217]:                 "ceph.vdo": "0"
Jan 20 14:36:04 compute-0 condescending_jepsen[283217]:             },
Jan 20 14:36:04 compute-0 condescending_jepsen[283217]:             "type": "block",
Jan 20 14:36:04 compute-0 condescending_jepsen[283217]:             "vg_name": "ceph_vg0"
Jan 20 14:36:04 compute-0 condescending_jepsen[283217]:         }
Jan 20 14:36:04 compute-0 condescending_jepsen[283217]:     ]
Jan 20 14:36:04 compute-0 condescending_jepsen[283217]: }
Jan 20 14:36:04 compute-0 nova_compute[250018]: 2026-01-20 14:36:04.339 250022 DEBUG nova.storage.rbd_utils [None req-fd54b769-df54-46a0-8b63-6685d544ef7e 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] creating snapshot(49c2c360b53445b19c1d801423a15695) on rbd image(a529906d-6908-4a37-ac57-db4384de2893_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 20 14:36:04 compute-0 systemd[1]: libpod-b0ca93db1f01b4f5bba229f8ee4fe3bad16099990d79b09d7efa993b1e75a4d4.scope: Deactivated successfully.
Jan 20 14:36:04 compute-0 podman[283200]: 2026-01-20 14:36:04.345532853 +0000 UTC m=+0.929502646 container died b0ca93db1f01b4f5bba229f8ee4fe3bad16099990d79b09d7efa993b1e75a4d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_jepsen, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:36:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-878b3c1430b19bd5cb2f7becdab57496c61809559585bb17d9495166a19f106a-merged.mount: Deactivated successfully.
Jan 20 14:36:04 compute-0 podman[283200]: 2026-01-20 14:36:04.398459277 +0000 UTC m=+0.982429070 container remove b0ca93db1f01b4f5bba229f8ee4fe3bad16099990d79b09d7efa993b1e75a4d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_jepsen, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:36:04 compute-0 sudo[283046]: pam_unix(sudo:session): session closed for user root
Jan 20 14:36:04 compute-0 systemd[1]: libpod-conmon-b0ca93db1f01b4f5bba229f8ee4fe3bad16099990d79b09d7efa993b1e75a4d4.scope: Deactivated successfully.
Jan 20 14:36:04 compute-0 sudo[283291]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:36:04 compute-0 sudo[283291]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:36:04 compute-0 sudo[283291]: pam_unix(sudo:session): session closed for user root
Jan 20 14:36:04 compute-0 sudo[283316]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:36:04 compute-0 sudo[283316]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:36:04 compute-0 sudo[283316]: pam_unix(sudo:session): session closed for user root
Jan 20 14:36:04 compute-0 sudo[283341]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:36:04 compute-0 sudo[283341]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:36:04 compute-0 sudo[283341]: pam_unix(sudo:session): session closed for user root
Jan 20 14:36:04 compute-0 sudo[283366]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- raw list --format json
Jan 20 14:36:04 compute-0 sudo[283366]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:36:04 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e205 do_prune osdmap full prune enabled
Jan 20 14:36:05 compute-0 podman[283432]: 2026-01-20 14:36:05.003990041 +0000 UTC m=+0.047296303 container create 4b342458e071f65aabcfb3588aa8ea96c39e4748592be0ef67782a94ed670801 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_northcutt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:36:05 compute-0 systemd[1]: Started libpod-conmon-4b342458e071f65aabcfb3588aa8ea96c39e4748592be0ef67782a94ed670801.scope.
Jan 20 14:36:05 compute-0 ceph-mon[74360]: pgmap v1403: 321 pgs: 321 active+clean; 181 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 5.3 MiB/s rd, 170 B/s wr, 177 op/s
Jan 20 14:36:05 compute-0 ceph-mon[74360]: osdmap e205: 3 total, 3 up, 3 in
Jan 20 14:36:05 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/2849367976' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:36:05 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:36:05 compute-0 podman[283432]: 2026-01-20 14:36:04.990270772 +0000 UTC m=+0.033577054 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:36:05 compute-0 podman[283432]: 2026-01-20 14:36:05.082164174 +0000 UTC m=+0.125470466 container init 4b342458e071f65aabcfb3588aa8ea96c39e4748592be0ef67782a94ed670801 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_northcutt, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Jan 20 14:36:05 compute-0 podman[283432]: 2026-01-20 14:36:05.090870378 +0000 UTC m=+0.134176640 container start 4b342458e071f65aabcfb3588aa8ea96c39e4748592be0ef67782a94ed670801 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_northcutt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:36:05 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e206 e206: 3 total, 3 up, 3 in
Jan 20 14:36:05 compute-0 reverent_northcutt[283449]: 167 167
Jan 20 14:36:05 compute-0 podman[283432]: 2026-01-20 14:36:05.096066027 +0000 UTC m=+0.139372289 container attach 4b342458e071f65aabcfb3588aa8ea96c39e4748592be0ef67782a94ed670801 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_northcutt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 20 14:36:05 compute-0 systemd[1]: libpod-4b342458e071f65aabcfb3588aa8ea96c39e4748592be0ef67782a94ed670801.scope: Deactivated successfully.
Jan 20 14:36:05 compute-0 podman[283432]: 2026-01-20 14:36:05.096792917 +0000 UTC m=+0.140099179 container died 4b342458e071f65aabcfb3588aa8ea96c39e4748592be0ef67782a94ed670801 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_northcutt, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:36:05 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e206: 3 total, 3 up, 3 in
Jan 20 14:36:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-4c085b32378b1506ddf9ee5e595fa5217302bdbe9656dd9dd55df6e291d0f690-merged.mount: Deactivated successfully.
Jan 20 14:36:05 compute-0 podman[283432]: 2026-01-20 14:36:05.1378436 +0000 UTC m=+0.181149862 container remove 4b342458e071f65aabcfb3588aa8ea96c39e4748592be0ef67782a94ed670801 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_northcutt, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507)
Jan 20 14:36:05 compute-0 systemd[1]: libpod-conmon-4b342458e071f65aabcfb3588aa8ea96c39e4748592be0ef67782a94ed670801.scope: Deactivated successfully.
Jan 20 14:36:05 compute-0 nova_compute[250018]: 2026-01-20 14:36:05.168 250022 DEBUG nova.storage.rbd_utils [None req-fd54b769-df54-46a0-8b63-6685d544ef7e 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] cloning vms/a529906d-6908-4a37-ac57-db4384de2893_disk@49c2c360b53445b19c1d801423a15695 to images/71863f13-4d5e-4ff6-bbc9-d5be0e9690ff clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Jan 20 14:36:05 compute-0 podman[283506]: 2026-01-20 14:36:05.300976608 +0000 UTC m=+0.043003348 container create afb714f2b4eca953c24121adf7aa3995d1ba6d12c0b6f7adacdb4e18c4905003 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_yalow, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Jan 20 14:36:05 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:36:05 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:36:05 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:36:05.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:36:05 compute-0 systemd[1]: Started libpod-conmon-afb714f2b4eca953c24121adf7aa3995d1ba6d12c0b6f7adacdb4e18c4905003.scope.
Jan 20 14:36:05 compute-0 nova_compute[250018]: 2026-01-20 14:36:05.353 250022 DEBUG nova.storage.rbd_utils [None req-fd54b769-df54-46a0-8b63-6685d544ef7e 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] flattening images/71863f13-4d5e-4ff6-bbc9-d5be0e9690ff flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Jan 20 14:36:05 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:36:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3dcc74428dbdf0a47ee9f4f7ae57e959e83c5068192b490a65fa5a6a7856a07/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:36:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3dcc74428dbdf0a47ee9f4f7ae57e959e83c5068192b490a65fa5a6a7856a07/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:36:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3dcc74428dbdf0a47ee9f4f7ae57e959e83c5068192b490a65fa5a6a7856a07/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:36:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3dcc74428dbdf0a47ee9f4f7ae57e959e83c5068192b490a65fa5a6a7856a07/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:36:05 compute-0 podman[283506]: 2026-01-20 14:36:05.280895678 +0000 UTC m=+0.022922448 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:36:05 compute-0 podman[283506]: 2026-01-20 14:36:05.385894271 +0000 UTC m=+0.127921051 container init afb714f2b4eca953c24121adf7aa3995d1ba6d12c0b6f7adacdb4e18c4905003 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_yalow, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0)
Jan 20 14:36:05 compute-0 podman[283506]: 2026-01-20 14:36:05.394577245 +0000 UTC m=+0.136603995 container start afb714f2b4eca953c24121adf7aa3995d1ba6d12c0b6f7adacdb4e18c4905003 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_yalow, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 20 14:36:05 compute-0 podman[283506]: 2026-01-20 14:36:05.398080329 +0000 UTC m=+0.140107089 container attach afb714f2b4eca953c24121adf7aa3995d1ba6d12c0b6f7adacdb4e18c4905003 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_yalow, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:36:05 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1406: 321 pgs: 321 active+clean; 247 MiB data, 611 MiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 9.6 MiB/s wr, 188 op/s
Jan 20 14:36:05 compute-0 nova_compute[250018]: 2026-01-20 14:36:05.840 250022 DEBUG nova.storage.rbd_utils [None req-fd54b769-df54-46a0-8b63-6685d544ef7e 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] removing snapshot(49c2c360b53445b19c1d801423a15695) on rbd image(a529906d-6908-4a37-ac57-db4384de2893_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Jan 20 14:36:06 compute-0 nova_compute[250018]: 2026-01-20 14:36:06.013 250022 INFO nova.virt.libvirt.driver [None req-e6b189ea-48e9-48de-96f0-3d6a6a4f594a 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] [instance: 1b3763f0-b328-4db2-844b-7f56cc13c19e] Snapshot image upload complete
Jan 20 14:36:06 compute-0 nova_compute[250018]: 2026-01-20 14:36:06.014 250022 INFO nova.compute.manager [None req-e6b189ea-48e9-48de-96f0-3d6a6a4f594a 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] [instance: 1b3763f0-b328-4db2-844b-7f56cc13c19e] Took 6.33 seconds to snapshot the instance on the hypervisor.
Jan 20 14:36:06 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e206 do_prune osdmap full prune enabled
Jan 20 14:36:06 compute-0 ceph-mon[74360]: osdmap e206: 3 total, 3 up, 3 in
Jan 20 14:36:06 compute-0 ceph-mon[74360]: pgmap v1406: 321 pgs: 321 active+clean; 247 MiB data, 611 MiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 9.6 MiB/s wr, 188 op/s
Jan 20 14:36:06 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e207 e207: 3 total, 3 up, 3 in
Jan 20 14:36:06 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e207: 3 total, 3 up, 3 in
Jan 20 14:36:06 compute-0 nova_compute[250018]: 2026-01-20 14:36:06.173 250022 DEBUG nova.storage.rbd_utils [None req-fd54b769-df54-46a0-8b63-6685d544ef7e 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] creating snapshot(snap) on rbd image(71863f13-4d5e-4ff6-bbc9-d5be0e9690ff) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 20 14:36:06 compute-0 sleepy_yalow[283525]: {
Jan 20 14:36:06 compute-0 sleepy_yalow[283525]:     "afc3740a-ccee-46f8-b343-22648cc89376": {
Jan 20 14:36:06 compute-0 sleepy_yalow[283525]:         "ceph_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 14:36:06 compute-0 sleepy_yalow[283525]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 20 14:36:06 compute-0 sleepy_yalow[283525]:         "osd_id": 0,
Jan 20 14:36:06 compute-0 sleepy_yalow[283525]:         "osd_uuid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 14:36:06 compute-0 sleepy_yalow[283525]:         "type": "bluestore"
Jan 20 14:36:06 compute-0 sleepy_yalow[283525]:     }
Jan 20 14:36:06 compute-0 sleepy_yalow[283525]: }
Jan 20 14:36:06 compute-0 systemd[1]: libpod-afb714f2b4eca953c24121adf7aa3995d1ba6d12c0b6f7adacdb4e18c4905003.scope: Deactivated successfully.
Jan 20 14:36:06 compute-0 podman[283506]: 2026-01-20 14:36:06.210288711 +0000 UTC m=+0.952315481 container died afb714f2b4eca953c24121adf7aa3995d1ba6d12c0b6f7adacdb4e18c4905003 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_yalow, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 20 14:36:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-e3dcc74428dbdf0a47ee9f4f7ae57e959e83c5068192b490a65fa5a6a7856a07-merged.mount: Deactivated successfully.
Jan 20 14:36:06 compute-0 nova_compute[250018]: 2026-01-20 14:36:06.241 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:36:06 compute-0 podman[283506]: 2026-01-20 14:36:06.268537018 +0000 UTC m=+1.010563758 container remove afb714f2b4eca953c24121adf7aa3995d1ba6d12c0b6f7adacdb4e18c4905003 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_yalow, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:36:06 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:36:06 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:36:06 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:36:06.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:36:06 compute-0 systemd[1]: libpod-conmon-afb714f2b4eca953c24121adf7aa3995d1ba6d12c0b6f7adacdb4e18c4905003.scope: Deactivated successfully.
Jan 20 14:36:06 compute-0 sudo[283366]: pam_unix(sudo:session): session closed for user root
Jan 20 14:36:06 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 14:36:06 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:36:06 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 14:36:06 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:36:06 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev a265e03f-52ad-4a9f-bbf6-e146bb04f3a5 does not exist
Jan 20 14:36:06 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 017676cc-206e-48cd-be7b-db9c819051e2 does not exist
Jan 20 14:36:06 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 7cb305a0-daf5-45cb-a7d2-786b301ae831 does not exist
Jan 20 14:36:06 compute-0 sudo[283615]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:36:06 compute-0 sudo[283615]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:36:06 compute-0 sudo[283615]: pam_unix(sudo:session): session closed for user root
Jan 20 14:36:06 compute-0 sudo[283640]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 14:36:06 compute-0 sudo[283640]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:36:06 compute-0 sudo[283640]: pam_unix(sudo:session): session closed for user root
Jan 20 14:36:07 compute-0 nova_compute[250018]: 2026-01-20 14:36:07.104 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:36:07 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e207 do_prune osdmap full prune enabled
Jan 20 14:36:07 compute-0 ceph-mon[74360]: osdmap e207: 3 total, 3 up, 3 in
Jan 20 14:36:07 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:36:07 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:36:07 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e208 e208: 3 total, 3 up, 3 in
Jan 20 14:36:07 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e208: 3 total, 3 up, 3 in
Jan 20 14:36:07 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:36:07 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:36:07 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:36:07.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:36:07 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1409: 321 pgs: 321 active+clean; 310 MiB data, 636 MiB used, 20 GiB / 21 GiB avail; 9.3 MiB/s rd, 17 MiB/s wr, 336 op/s
Jan 20 14:36:07 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e208 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:36:07 compute-0 ovn_controller[148666]: 2026-01-20T14:36:07Z|00016|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:01:06:ce 10.100.0.4
Jan 20 14:36:07 compute-0 ovn_controller[148666]: 2026-01-20T14:36:07Z|00017|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:01:06:ce 10.100.0.4
Jan 20 14:36:08 compute-0 ceph-mon[74360]: osdmap e208: 3 total, 3 up, 3 in
Jan 20 14:36:08 compute-0 ceph-mon[74360]: pgmap v1409: 321 pgs: 321 active+clean; 310 MiB data, 636 MiB used, 20 GiB / 21 GiB avail; 9.3 MiB/s rd, 17 MiB/s wr, 336 op/s
Jan 20 14:36:08 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:36:08 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:36:08 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:36:08.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:36:09 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e208 do_prune osdmap full prune enabled
Jan 20 14:36:09 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:36:09 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:36:09 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:36:09.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:36:09 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1410: 321 pgs: 321 active+clean; 329 MiB data, 649 MiB used, 20 GiB / 21 GiB avail; 9.1 MiB/s rd, 12 MiB/s wr, 398 op/s
Jan 20 14:36:09 compute-0 nova_compute[250018]: 2026-01-20 14:36:09.519 250022 INFO nova.virt.libvirt.driver [None req-fd54b769-df54-46a0-8b63-6685d544ef7e 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] [instance: a529906d-6908-4a37-ac57-db4384de2893] Snapshot image upload complete
Jan 20 14:36:09 compute-0 nova_compute[250018]: 2026-01-20 14:36:09.520 250022 INFO nova.compute.manager [None req-fd54b769-df54-46a0-8b63-6685d544ef7e 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] [instance: a529906d-6908-4a37-ac57-db4384de2893] Took 5.96 seconds to snapshot the instance on the hypervisor.
Jan 20 14:36:09 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e209 e209: 3 total, 3 up, 3 in
Jan 20 14:36:09 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e209: 3 total, 3 up, 3 in
Jan 20 14:36:10 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:36:10 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:36:10 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:36:10.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:36:10 compute-0 podman[283668]: 2026-01-20 14:36:10.511275632 +0000 UTC m=+0.095333614 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 20 14:36:10 compute-0 podman[283667]: 2026-01-20 14:36:10.518098366 +0000 UTC m=+0.106461694 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, 
tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller)
Jan 20 14:36:10 compute-0 ceph-mon[74360]: pgmap v1410: 321 pgs: 321 active+clean; 329 MiB data, 649 MiB used, 20 GiB / 21 GiB avail; 9.1 MiB/s rd, 12 MiB/s wr, 398 op/s
Jan 20 14:36:10 compute-0 ceph-mon[74360]: osdmap e209: 3 total, 3 up, 3 in
Jan 20 14:36:11 compute-0 nova_compute[250018]: 2026-01-20 14:36:11.243 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:36:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 14:36:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:36:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 20 14:36:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:36:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.005043482749395123 of space, bias 1.0, pg target 1.513044824818537 quantized to 32 (current 32)
Jan 20 14:36:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:36:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4344349060115393e-05 quantized to 32 (current 32)
Jan 20 14:36:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:36:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:36:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:36:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0037328569932998327 of space, bias 1.0, pg target 1.1161242409966499 quantized to 32 (current 32)
Jan 20 14:36:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:36:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017332030522985297 quantized to 16 (current 32)
Jan 20 14:36:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:36:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:36:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:36:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.0002166503815373162 quantized to 32 (current 32)
Jan 20 14:36:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:36:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018415282430671877 quantized to 32 (current 32)
Jan 20 14:36:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:36:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:36:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:36:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004333007630746324 quantized to 32 (current 32)
Jan 20 14:36:11 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:36:11 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:36:11 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:36:11.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:36:11 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1412: 321 pgs: 321 active+clean; 367 MiB data, 708 MiB used, 20 GiB / 21 GiB avail; 10 MiB/s rd, 13 MiB/s wr, 594 op/s
Jan 20 14:36:11 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e209 do_prune osdmap full prune enabled
Jan 20 14:36:11 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e210 e210: 3 total, 3 up, 3 in
Jan 20 14:36:11 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e210: 3 total, 3 up, 3 in
Jan 20 14:36:12 compute-0 nova_compute[250018]: 2026-01-20 14:36:12.106 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:36:12 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:36:12 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:36:12 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:36:12.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:36:12 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e210 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:36:12 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e210 do_prune osdmap full prune enabled
Jan 20 14:36:12 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e211 e211: 3 total, 3 up, 3 in
Jan 20 14:36:12 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e211: 3 total, 3 up, 3 in
Jan 20 14:36:12 compute-0 ceph-mon[74360]: pgmap v1412: 321 pgs: 321 active+clean; 367 MiB data, 708 MiB used, 20 GiB / 21 GiB avail; 10 MiB/s rd, 13 MiB/s wr, 594 op/s
Jan 20 14:36:12 compute-0 ceph-mon[74360]: osdmap e210: 3 total, 3 up, 3 in
Jan 20 14:36:12 compute-0 ceph-mon[74360]: osdmap e211: 3 total, 3 up, 3 in
Jan 20 14:36:13 compute-0 nova_compute[250018]: 2026-01-20 14:36:13.314 250022 DEBUG nova.compute.manager [None req-6342b2cb-acf7-4532-aad9-80c255b93334 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] [instance: 1b3763f0-b328-4db2-844b-7f56cc13c19e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:36:13 compute-0 nova_compute[250018]: 2026-01-20 14:36:13.347 250022 DEBUG oslo_concurrency.lockutils [None req-d869e8e0-c747-40b5-9c67-aba8c048dc4f 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Acquiring lock "a529906d-6908-4a37-ac57-db4384de2893" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:36:13 compute-0 nova_compute[250018]: 2026-01-20 14:36:13.348 250022 DEBUG oslo_concurrency.lockutils [None req-d869e8e0-c747-40b5-9c67-aba8c048dc4f 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Lock "a529906d-6908-4a37-ac57-db4384de2893" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:36:13 compute-0 nova_compute[250018]: 2026-01-20 14:36:13.348 250022 DEBUG oslo_concurrency.lockutils [None req-d869e8e0-c747-40b5-9c67-aba8c048dc4f 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Acquiring lock "a529906d-6908-4a37-ac57-db4384de2893-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:36:13 compute-0 nova_compute[250018]: 2026-01-20 14:36:13.348 250022 DEBUG oslo_concurrency.lockutils [None req-d869e8e0-c747-40b5-9c67-aba8c048dc4f 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Lock "a529906d-6908-4a37-ac57-db4384de2893-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:36:13 compute-0 nova_compute[250018]: 2026-01-20 14:36:13.349 250022 DEBUG oslo_concurrency.lockutils [None req-d869e8e0-c747-40b5-9c67-aba8c048dc4f 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Lock "a529906d-6908-4a37-ac57-db4384de2893-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:36:13 compute-0 nova_compute[250018]: 2026-01-20 14:36:13.350 250022 INFO nova.compute.manager [None req-d869e8e0-c747-40b5-9c67-aba8c048dc4f 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] [instance: a529906d-6908-4a37-ac57-db4384de2893] Terminating instance
Jan 20 14:36:13 compute-0 nova_compute[250018]: 2026-01-20 14:36:13.351 250022 DEBUG nova.compute.manager [None req-d869e8e0-c747-40b5-9c67-aba8c048dc4f 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] [instance: a529906d-6908-4a37-ac57-db4384de2893] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 14:36:13 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:36:13 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:36:13 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:36:13.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:36:13 compute-0 nova_compute[250018]: 2026-01-20 14:36:13.356 250022 INFO nova.virt.libvirt.driver [-] [instance: a529906d-6908-4a37-ac57-db4384de2893] Instance destroyed successfully.
Jan 20 14:36:13 compute-0 nova_compute[250018]: 2026-01-20 14:36:13.356 250022 DEBUG nova.objects.instance [None req-d869e8e0-c747-40b5-9c67-aba8c048dc4f 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Lazy-loading 'resources' on Instance uuid a529906d-6908-4a37-ac57-db4384de2893 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:36:13 compute-0 nova_compute[250018]: 2026-01-20 14:36:13.382 250022 INFO nova.compute.manager [None req-6342b2cb-acf7-4532-aad9-80c255b93334 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] [instance: 1b3763f0-b328-4db2-844b-7f56cc13c19e] instance snapshotting
Jan 20 14:36:13 compute-0 nova_compute[250018]: 2026-01-20 14:36:13.440 250022 DEBUG nova.virt.libvirt.vif [None req-d869e8e0-c747-40b5-9c67-aba8c048dc4f 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:35:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-207767720',display_name='tempest-ImagesTestJSON-server-207767720',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-207767720',id=50,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:35:54Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='3750c56415134773aa9d9880038f1749',ramdisk_id='',reservation_id='r-0a86zg6l',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ImagesTestJSON-338390217',owner_user_name='tempest-ImagesTestJSON-338390217-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T14:36:09Z,user_data=None,user_id='56e2959629114d3d8a48e7a80ed96c4b',uuid=a529906d-6908-4a37-ac57-db4384de2893,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "f2330816-cb0c-4408-9178-8d732ae1a45b", "address": "fa:16:3e:de:d2:be", "network": {"id": "abb83e3e-0b12-431b-ad86-a1d271b5b46a", "bridge": "br-int", "label": "tempest-ImagesTestJSON-766235638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3750c56415134773aa9d9880038f1749", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2330816-cb", "ovs_interfaceid": "f2330816-cb0c-4408-9178-8d732ae1a45b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 14:36:13 compute-0 nova_compute[250018]: 2026-01-20 14:36:13.440 250022 DEBUG nova.network.os_vif_util [None req-d869e8e0-c747-40b5-9c67-aba8c048dc4f 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Converting VIF {"id": "f2330816-cb0c-4408-9178-8d732ae1a45b", "address": "fa:16:3e:de:d2:be", "network": {"id": "abb83e3e-0b12-431b-ad86-a1d271b5b46a", "bridge": "br-int", "label": "tempest-ImagesTestJSON-766235638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3750c56415134773aa9d9880038f1749", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2330816-cb", "ovs_interfaceid": "f2330816-cb0c-4408-9178-8d732ae1a45b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:36:13 compute-0 nova_compute[250018]: 2026-01-20 14:36:13.441 250022 DEBUG nova.network.os_vif_util [None req-d869e8e0-c747-40b5-9c67-aba8c048dc4f 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:de:d2:be,bridge_name='br-int',has_traffic_filtering=True,id=f2330816-cb0c-4408-9178-8d732ae1a45b,network=Network(abb83e3e-0b12-431b-ad86-a1d271b5b46a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf2330816-cb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:36:13 compute-0 nova_compute[250018]: 2026-01-20 14:36:13.441 250022 DEBUG os_vif [None req-d869e8e0-c747-40b5-9c67-aba8c048dc4f 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:de:d2:be,bridge_name='br-int',has_traffic_filtering=True,id=f2330816-cb0c-4408-9178-8d732ae1a45b,network=Network(abb83e3e-0b12-431b-ad86-a1d271b5b46a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf2330816-cb') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 14:36:13 compute-0 nova_compute[250018]: 2026-01-20 14:36:13.443 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:36:13 compute-0 nova_compute[250018]: 2026-01-20 14:36:13.444 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf2330816-cb, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:36:13 compute-0 nova_compute[250018]: 2026-01-20 14:36:13.445 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:36:13 compute-0 nova_compute[250018]: 2026-01-20 14:36:13.447 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:36:13 compute-0 nova_compute[250018]: 2026-01-20 14:36:13.450 250022 INFO os_vif [None req-d869e8e0-c747-40b5-9c67-aba8c048dc4f 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:de:d2:be,bridge_name='br-int',has_traffic_filtering=True,id=f2330816-cb0c-4408-9178-8d732ae1a45b,network=Network(abb83e3e-0b12-431b-ad86-a1d271b5b46a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf2330816-cb')
Jan 20 14:36:13 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1415: 321 pgs: 321 active+clean; 367 MiB data, 708 MiB used, 20 GiB / 21 GiB avail; 6.3 MiB/s rd, 7.8 MiB/s wr, 495 op/s
Jan 20 14:36:13 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/920590240' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:36:13 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/920590240' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:36:13 compute-0 nova_compute[250018]: 2026-01-20 14:36:13.748 250022 INFO nova.virt.libvirt.driver [None req-6342b2cb-acf7-4532-aad9-80c255b93334 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] [instance: 1b3763f0-b328-4db2-844b-7f56cc13c19e] Beginning live snapshot process
Jan 20 14:36:13 compute-0 nova_compute[250018]: 2026-01-20 14:36:13.960 250022 INFO nova.virt.libvirt.driver [None req-d869e8e0-c747-40b5-9c67-aba8c048dc4f 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] [instance: a529906d-6908-4a37-ac57-db4384de2893] Deleting instance files /var/lib/nova/instances/a529906d-6908-4a37-ac57-db4384de2893_del
Jan 20 14:36:13 compute-0 nova_compute[250018]: 2026-01-20 14:36:13.960 250022 INFO nova.virt.libvirt.driver [None req-d869e8e0-c747-40b5-9c67-aba8c048dc4f 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] [instance: a529906d-6908-4a37-ac57-db4384de2893] Deletion of /var/lib/nova/instances/a529906d-6908-4a37-ac57-db4384de2893_del complete
Jan 20 14:36:14 compute-0 nova_compute[250018]: 2026-01-20 14:36:14.238 250022 INFO nova.compute.manager [None req-d869e8e0-c747-40b5-9c67-aba8c048dc4f 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] [instance: a529906d-6908-4a37-ac57-db4384de2893] Took 0.89 seconds to destroy the instance on the hypervisor.
Jan 20 14:36:14 compute-0 nova_compute[250018]: 2026-01-20 14:36:14.238 250022 DEBUG oslo.service.loopingcall [None req-d869e8e0-c747-40b5-9c67-aba8c048dc4f 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 14:36:14 compute-0 nova_compute[250018]: 2026-01-20 14:36:14.239 250022 DEBUG nova.compute.manager [-] [instance: a529906d-6908-4a37-ac57-db4384de2893] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 14:36:14 compute-0 nova_compute[250018]: 2026-01-20 14:36:14.240 250022 DEBUG nova.network.neutron [-] [instance: a529906d-6908-4a37-ac57-db4384de2893] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 14:36:14 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:36:14 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:36:14 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:36:14.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:36:14 compute-0 nova_compute[250018]: 2026-01-20 14:36:14.573 250022 DEBUG nova.virt.libvirt.imagebackend [None req-6342b2cb-acf7-4532-aad9-80c255b93334 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] No parent info for a32b3e07-16d8-46fd-9a7b-c242c432fcf9; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Jan 20 14:36:14 compute-0 ceph-mon[74360]: pgmap v1415: 321 pgs: 321 active+clean; 367 MiB data, 708 MiB used, 20 GiB / 21 GiB avail; 6.3 MiB/s rd, 7.8 MiB/s wr, 495 op/s
Jan 20 14:36:14 compute-0 nova_compute[250018]: 2026-01-20 14:36:14.835 250022 DEBUG nova.storage.rbd_utils [None req-6342b2cb-acf7-4532-aad9-80c255b93334 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] creating snapshot(f36dcda2888748eb99c7777e9ccf1bb5) on rbd image(1b3763f0-b328-4db2-844b-7f56cc13c19e_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 20 14:36:15 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:36:15 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:36:15 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:36:15.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:36:15 compute-0 nova_compute[250018]: 2026-01-20 14:36:15.499 250022 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768919760.4982479, a529906d-6908-4a37-ac57-db4384de2893 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:36:15 compute-0 nova_compute[250018]: 2026-01-20 14:36:15.500 250022 INFO nova.compute.manager [-] [instance: a529906d-6908-4a37-ac57-db4384de2893] VM Stopped (Lifecycle Event)
Jan 20 14:36:15 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1416: 321 pgs: 321 active+clean; 298 MiB data, 679 MiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 5.5 MiB/s wr, 436 op/s
Jan 20 14:36:15 compute-0 nova_compute[250018]: 2026-01-20 14:36:15.519 250022 DEBUG nova.compute.manager [None req-1fb5e034-9ec6-49a8-b797-f2835d1a0868 - - - - - -] [instance: a529906d-6908-4a37-ac57-db4384de2893] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:36:15 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e211 do_prune osdmap full prune enabled
Jan 20 14:36:15 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e212 e212: 3 total, 3 up, 3 in
Jan 20 14:36:15 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e212: 3 total, 3 up, 3 in
Jan 20 14:36:15 compute-0 nova_compute[250018]: 2026-01-20 14:36:15.820 250022 DEBUG nova.storage.rbd_utils [None req-6342b2cb-acf7-4532-aad9-80c255b93334 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] cloning vms/1b3763f0-b328-4db2-844b-7f56cc13c19e_disk@f36dcda2888748eb99c7777e9ccf1bb5 to images/17b86a15-4cdb-41f2-82a9-c343b2420f69 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Jan 20 14:36:15 compute-0 nova_compute[250018]: 2026-01-20 14:36:15.958 250022 DEBUG nova.storage.rbd_utils [None req-6342b2cb-acf7-4532-aad9-80c255b93334 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] flattening images/17b86a15-4cdb-41f2-82a9-c343b2420f69 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Jan 20 14:36:16 compute-0 nova_compute[250018]: 2026-01-20 14:36:16.279 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:36:16 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:36:16 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:36:16 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:36:16.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:36:16 compute-0 nova_compute[250018]: 2026-01-20 14:36:16.481 250022 DEBUG nova.storage.rbd_utils [None req-6342b2cb-acf7-4532-aad9-80c255b93334 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] removing snapshot(f36dcda2888748eb99c7777e9ccf1bb5) on rbd image(1b3763f0-b328-4db2-844b-7f56cc13c19e_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Jan 20 14:36:16 compute-0 nova_compute[250018]: 2026-01-20 14:36:16.625 250022 DEBUG nova.network.neutron [-] [instance: a529906d-6908-4a37-ac57-db4384de2893] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:36:16 compute-0 nova_compute[250018]: 2026-01-20 14:36:16.643 250022 INFO nova.compute.manager [-] [instance: a529906d-6908-4a37-ac57-db4384de2893] Took 2.40 seconds to deallocate network for instance.
Jan 20 14:36:16 compute-0 nova_compute[250018]: 2026-01-20 14:36:16.685 250022 DEBUG oslo_concurrency.lockutils [None req-d869e8e0-c747-40b5-9c67-aba8c048dc4f 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:36:16 compute-0 nova_compute[250018]: 2026-01-20 14:36:16.686 250022 DEBUG oslo_concurrency.lockutils [None req-d869e8e0-c747-40b5-9c67-aba8c048dc4f 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:36:16 compute-0 nova_compute[250018]: 2026-01-20 14:36:16.738 250022 DEBUG nova.compute.manager [req-51645e02-e32c-4ea4-aa05-2e61fa4d5baa req-a127975e-8e7e-4db8-99c2-d7c4fb0d0df5 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: a529906d-6908-4a37-ac57-db4384de2893] Received event network-vif-deleted-f2330816-cb0c-4408-9178-8d732ae1a45b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:36:16 compute-0 nova_compute[250018]: 2026-01-20 14:36:16.780 250022 DEBUG oslo_concurrency.processutils [None req-d869e8e0-c747-40b5-9c67-aba8c048dc4f 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:36:16 compute-0 ceph-mon[74360]: pgmap v1416: 321 pgs: 321 active+clean; 298 MiB data, 679 MiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 5.5 MiB/s wr, 436 op/s
Jan 20 14:36:16 compute-0 ceph-mon[74360]: osdmap e212: 3 total, 3 up, 3 in
Jan 20 14:36:16 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/717322656' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:36:16 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e212 do_prune osdmap full prune enabled
Jan 20 14:36:16 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e213 e213: 3 total, 3 up, 3 in
Jan 20 14:36:16 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e213: 3 total, 3 up, 3 in
Jan 20 14:36:16 compute-0 nova_compute[250018]: 2026-01-20 14:36:16.825 250022 DEBUG nova.storage.rbd_utils [None req-6342b2cb-acf7-4532-aad9-80c255b93334 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] creating snapshot(snap) on rbd image(17b86a15-4cdb-41f2-82a9-c343b2420f69) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 20 14:36:17 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:36:17 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2591651344' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:36:17 compute-0 nova_compute[250018]: 2026-01-20 14:36:17.198 250022 DEBUG oslo_concurrency.processutils [None req-d869e8e0-c747-40b5-9c67-aba8c048dc4f 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:36:17 compute-0 nova_compute[250018]: 2026-01-20 14:36:17.203 250022 DEBUG nova.compute.provider_tree [None req-d869e8e0-c747-40b5-9c67-aba8c048dc4f 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:36:17 compute-0 nova_compute[250018]: 2026-01-20 14:36:17.291 250022 DEBUG nova.scheduler.client.report [None req-d869e8e0-c747-40b5-9c67-aba8c048dc4f 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:36:17 compute-0 nova_compute[250018]: 2026-01-20 14:36:17.315 250022 DEBUG oslo_concurrency.lockutils [None req-d869e8e0-c747-40b5-9c67-aba8c048dc4f 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.630s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:36:17 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:36:17 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:36:17 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:36:17.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:36:17 compute-0 nova_compute[250018]: 2026-01-20 14:36:17.362 250022 INFO nova.scheduler.client.report [None req-d869e8e0-c747-40b5-9c67-aba8c048dc4f 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Deleted allocations for instance a529906d-6908-4a37-ac57-db4384de2893
Jan 20 14:36:17 compute-0 nova_compute[250018]: 2026-01-20 14:36:17.433 250022 DEBUG oslo_concurrency.lockutils [None req-d869e8e0-c747-40b5-9c67-aba8c048dc4f 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Lock "a529906d-6908-4a37-ac57-db4384de2893" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.085s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:36:17 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1419: 321 pgs: 321 active+clean; 270 MiB data, 644 MiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 3.8 MiB/s wr, 233 op/s
Jan 20 14:36:17 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:36:17 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e213 do_prune osdmap full prune enabled
Jan 20 14:36:17 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e214 e214: 3 total, 3 up, 3 in
Jan 20 14:36:17 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e214: 3 total, 3 up, 3 in
Jan 20 14:36:17 compute-0 ceph-mon[74360]: osdmap e213: 3 total, 3 up, 3 in
Jan 20 14:36:17 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/2591651344' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:36:17 compute-0 ceph-mon[74360]: osdmap e214: 3 total, 3 up, 3 in
Jan 20 14:36:18 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:36:18 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:36:18 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:36:18.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:36:18 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-crash-compute-0[81580]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
Jan 20 14:36:18 compute-0 nova_compute[250018]: 2026-01-20 14:36:18.488 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:36:18 compute-0 ceph-mon[74360]: pgmap v1419: 321 pgs: 321 active+clean; 270 MiB data, 644 MiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 3.8 MiB/s wr, 233 op/s
Jan 20 14:36:18 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/169692220' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:36:19 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:36:19 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:36:19 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:36:19.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:36:19 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1421: 321 pgs: 321 active+clean; 261 MiB data, 643 MiB used, 20 GiB / 21 GiB avail; 5.1 MiB/s rd, 6.0 MiB/s wr, 254 op/s
Jan 20 14:36:19 compute-0 nova_compute[250018]: 2026-01-20 14:36:19.670 250022 INFO nova.virt.libvirt.driver [None req-6342b2cb-acf7-4532-aad9-80c255b93334 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] [instance: 1b3763f0-b328-4db2-844b-7f56cc13c19e] Snapshot image upload complete
Jan 20 14:36:19 compute-0 nova_compute[250018]: 2026-01-20 14:36:19.671 250022 INFO nova.compute.manager [None req-6342b2cb-acf7-4532-aad9-80c255b93334 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] [instance: 1b3763f0-b328-4db2-844b-7f56cc13c19e] Took 6.29 seconds to snapshot the instance on the hypervisor.
Jan 20 14:36:20 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:36:20 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:36:20 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:36:20.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:36:20 compute-0 nova_compute[250018]: 2026-01-20 14:36:20.497 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:36:20 compute-0 ceph-mon[74360]: pgmap v1421: 321 pgs: 321 active+clean; 261 MiB data, 643 MiB used, 20 GiB / 21 GiB avail; 5.1 MiB/s rd, 6.0 MiB/s wr, 254 op/s
Jan 20 14:36:20 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/3748006424' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:36:21 compute-0 nova_compute[250018]: 2026-01-20 14:36:21.282 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:36:21 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:36:21 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:36:21 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:36:21.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:36:21 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1422: 321 pgs: 321 active+clean; 263 MiB data, 697 MiB used, 20 GiB / 21 GiB avail; 7.9 MiB/s rd, 11 MiB/s wr, 340 op/s
Jan 20 14:36:21 compute-0 ceph-mon[74360]: pgmap v1422: 321 pgs: 321 active+clean; 263 MiB data, 697 MiB used, 20 GiB / 21 GiB avail; 7.9 MiB/s rd, 11 MiB/s wr, 340 op/s
Jan 20 14:36:22 compute-0 nova_compute[250018]: 2026-01-20 14:36:22.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:36:22 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:36:22 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:36:22 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:36:22.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:36:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:36:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:36:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:36:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:36:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:36:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:36:22 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e214 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:36:22 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e214 do_prune osdmap full prune enabled
Jan 20 14:36:22 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e215 e215: 3 total, 3 up, 3 in
Jan 20 14:36:22 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e215: 3 total, 3 up, 3 in
Jan 20 14:36:22 compute-0 sshd-session[283902]: Invalid user admin from 157.245.78.139 port 50790
Jan 20 14:36:22 compute-0 sshd-session[283902]: Connection closed by invalid user admin 157.245.78.139 port 50790 [preauth]
Jan 20 14:36:23 compute-0 nova_compute[250018]: 2026-01-20 14:36:23.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:36:23 compute-0 sudo[283905]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:36:23 compute-0 sudo[283905]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:36:23 compute-0 sudo[283905]: pam_unix(sudo:session): session closed for user root
Jan 20 14:36:23 compute-0 sudo[283930]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:36:23 compute-0 sudo[283930]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:36:23 compute-0 sudo[283930]: pam_unix(sudo:session): session closed for user root
Jan 20 14:36:23 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:36:23 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:36:23 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:36:23.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:36:23 compute-0 nova_compute[250018]: 2026-01-20 14:36:23.492 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:36:23 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1424: 321 pgs: 321 active+clean; 263 MiB data, 697 MiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 6.7 MiB/s wr, 219 op/s
Jan 20 14:36:23 compute-0 ceph-mon[74360]: osdmap e215: 3 total, 3 up, 3 in
Jan 20 14:36:23 compute-0 nova_compute[250018]: 2026-01-20 14:36:23.819 250022 DEBUG oslo_concurrency.lockutils [None req-a6df1922-7ac5-4a44-b7ea-c042b6ec579a 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] Acquiring lock "1b3763f0-b328-4db2-844b-7f56cc13c19e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:36:23 compute-0 nova_compute[250018]: 2026-01-20 14:36:23.820 250022 DEBUG oslo_concurrency.lockutils [None req-a6df1922-7ac5-4a44-b7ea-c042b6ec579a 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] Lock "1b3763f0-b328-4db2-844b-7f56cc13c19e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:36:23 compute-0 nova_compute[250018]: 2026-01-20 14:36:23.820 250022 DEBUG oslo_concurrency.lockutils [None req-a6df1922-7ac5-4a44-b7ea-c042b6ec579a 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] Acquiring lock "1b3763f0-b328-4db2-844b-7f56cc13c19e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:36:23 compute-0 nova_compute[250018]: 2026-01-20 14:36:23.821 250022 DEBUG oslo_concurrency.lockutils [None req-a6df1922-7ac5-4a44-b7ea-c042b6ec579a 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] Lock "1b3763f0-b328-4db2-844b-7f56cc13c19e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:36:23 compute-0 nova_compute[250018]: 2026-01-20 14:36:23.821 250022 DEBUG oslo_concurrency.lockutils [None req-a6df1922-7ac5-4a44-b7ea-c042b6ec579a 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] Lock "1b3763f0-b328-4db2-844b-7f56cc13c19e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:36:23 compute-0 nova_compute[250018]: 2026-01-20 14:36:23.822 250022 INFO nova.compute.manager [None req-a6df1922-7ac5-4a44-b7ea-c042b6ec579a 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] [instance: 1b3763f0-b328-4db2-844b-7f56cc13c19e] Terminating instance
Jan 20 14:36:23 compute-0 nova_compute[250018]: 2026-01-20 14:36:23.823 250022 DEBUG nova.compute.manager [None req-a6df1922-7ac5-4a44-b7ea-c042b6ec579a 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] [instance: 1b3763f0-b328-4db2-844b-7f56cc13c19e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 14:36:23 compute-0 kernel: tapdea207a0-1a (unregistering): left promiscuous mode
Jan 20 14:36:23 compute-0 NetworkManager[48960]: <info>  [1768919783.9310] device (tapdea207a0-1a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 14:36:23 compute-0 ovn_controller[148666]: 2026-01-20T14:36:23Z|00146|binding|INFO|Releasing lport dea207a0-1a8b-4f40-8fdc-1a5e76999db8 from this chassis (sb_readonly=0)
Jan 20 14:36:23 compute-0 ovn_controller[148666]: 2026-01-20T14:36:23Z|00147|binding|INFO|Setting lport dea207a0-1a8b-4f40-8fdc-1a5e76999db8 down in Southbound
Jan 20 14:36:23 compute-0 nova_compute[250018]: 2026-01-20 14:36:23.939 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:36:23 compute-0 ovn_controller[148666]: 2026-01-20T14:36:23Z|00148|binding|INFO|Removing iface tapdea207a0-1a ovn-installed in OVS
Jan 20 14:36:23 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:36:23.945 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:01:06:ce 10.100.0.4'], port_security=['fa:16:3e:01:06:ce 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '1b3763f0-b328-4db2-844b-7f56cc13c19e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-38acf72f-2b62-44ac-86b7-15a313b89179', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '79601368a3db41e0aacec93e8fd7f1d4', 'neutron:revision_number': '4', 'neutron:security_group_ids': '7191c78b-0eaa-45d2-b9f9-4ac3c533ddac', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=053fc91c-5a1e-4be0-b640-862275343a36, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=dea207a0-1a8b-4f40-8fdc-1a5e76999db8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:36:23 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:36:23.946 160071 INFO neutron.agent.ovn.metadata.agent [-] Port dea207a0-1a8b-4f40-8fdc-1a5e76999db8 in datapath 38acf72f-2b62-44ac-86b7-15a313b89179 unbound from our chassis
Jan 20 14:36:23 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:36:23.947 160071 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 38acf72f-2b62-44ac-86b7-15a313b89179, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 14:36:23 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:36:23.948 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[d4822455-a78f-430d-b488-211d7dacd441]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:36:23 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:36:23.949 160071 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-38acf72f-2b62-44ac-86b7-15a313b89179 namespace which is not needed anymore
Jan 20 14:36:23 compute-0 nova_compute[250018]: 2026-01-20 14:36:23.961 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:36:23 compute-0 systemd[1]: machine-qemu\x2d22\x2dinstance\x2d00000031.scope: Deactivated successfully.
Jan 20 14:36:23 compute-0 systemd[1]: machine-qemu\x2d22\x2dinstance\x2d00000031.scope: Consumed 14.488s CPU time.
Jan 20 14:36:24 compute-0 systemd-machined[216401]: Machine qemu-22-instance-00000031 terminated.
Jan 20 14:36:24 compute-0 nova_compute[250018]: 2026-01-20 14:36:24.052 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:36:24 compute-0 nova_compute[250018]: 2026-01-20 14:36:24.059 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:36:24 compute-0 nova_compute[250018]: 2026-01-20 14:36:24.068 250022 INFO nova.virt.libvirt.driver [-] [instance: 1b3763f0-b328-4db2-844b-7f56cc13c19e] Instance destroyed successfully.
Jan 20 14:36:24 compute-0 nova_compute[250018]: 2026-01-20 14:36:24.068 250022 DEBUG nova.objects.instance [None req-a6df1922-7ac5-4a44-b7ea-c042b6ec579a 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] Lazy-loading 'resources' on Instance uuid 1b3763f0-b328-4db2-844b-7f56cc13c19e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:36:24 compute-0 neutron-haproxy-ovnmeta-38acf72f-2b62-44ac-86b7-15a313b89179[282300]: [NOTICE]   (282305) : haproxy version is 2.8.14-c23fe91
Jan 20 14:36:24 compute-0 neutron-haproxy-ovnmeta-38acf72f-2b62-44ac-86b7-15a313b89179[282300]: [NOTICE]   (282305) : path to executable is /usr/sbin/haproxy
Jan 20 14:36:24 compute-0 neutron-haproxy-ovnmeta-38acf72f-2b62-44ac-86b7-15a313b89179[282300]: [WARNING]  (282305) : Exiting Master process...
Jan 20 14:36:24 compute-0 neutron-haproxy-ovnmeta-38acf72f-2b62-44ac-86b7-15a313b89179[282300]: [ALERT]    (282305) : Current worker (282307) exited with code 143 (Terminated)
Jan 20 14:36:24 compute-0 neutron-haproxy-ovnmeta-38acf72f-2b62-44ac-86b7-15a313b89179[282300]: [WARNING]  (282305) : All workers exited. Exiting... (0)
Jan 20 14:36:24 compute-0 systemd[1]: libpod-ba13c0318e056dc685f771f16cc69ff79557a1f3313642a283e33c7988ff9453.scope: Deactivated successfully.
Jan 20 14:36:24 compute-0 podman[283979]: 2026-01-20 14:36:24.092656633 +0000 UTC m=+0.053267627 container died ba13c0318e056dc685f771f16cc69ff79557a1f3313642a283e33c7988ff9453 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-38acf72f-2b62-44ac-86b7-15a313b89179, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:36:24 compute-0 nova_compute[250018]: 2026-01-20 14:36:24.093 250022 DEBUG nova.virt.libvirt.vif [None req-a6df1922-7ac5-4a44-b7ea-c042b6ec579a 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:35:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesOneServerTestJSON-server-84805515',display_name='tempest-ImagesOneServerTestJSON-server-84805515',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservertestjson-server-84805515',id=49,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:35:54Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='79601368a3db41e0aacec93e8fd7f1d4',ramdisk_id='',reservation_id='r-pl8tjcab',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min
_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesOneServerTestJSON-749029286',owner_user_name='tempest-ImagesOneServerTestJSON-749029286-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T14:36:19Z,user_data=None,user_id='00cec8cbb72b489da46855f8b3b4c42c',uuid=1b3763f0-b328-4db2-844b-7f56cc13c19e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "dea207a0-1a8b-4f40-8fdc-1a5e76999db8", "address": "fa:16:3e:01:06:ce", "network": {"id": "38acf72f-2b62-44ac-86b7-15a313b89179", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-242703573-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "79601368a3db41e0aacec93e8fd7f1d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdea207a0-1a", "ovs_interfaceid": "dea207a0-1a8b-4f40-8fdc-1a5e76999db8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 14:36:24 compute-0 nova_compute[250018]: 2026-01-20 14:36:24.094 250022 DEBUG nova.network.os_vif_util [None req-a6df1922-7ac5-4a44-b7ea-c042b6ec579a 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] Converting VIF {"id": "dea207a0-1a8b-4f40-8fdc-1a5e76999db8", "address": "fa:16:3e:01:06:ce", "network": {"id": "38acf72f-2b62-44ac-86b7-15a313b89179", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-242703573-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "79601368a3db41e0aacec93e8fd7f1d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdea207a0-1a", "ovs_interfaceid": "dea207a0-1a8b-4f40-8fdc-1a5e76999db8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:36:24 compute-0 nova_compute[250018]: 2026-01-20 14:36:24.094 250022 DEBUG nova.network.os_vif_util [None req-a6df1922-7ac5-4a44-b7ea-c042b6ec579a 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:01:06:ce,bridge_name='br-int',has_traffic_filtering=True,id=dea207a0-1a8b-4f40-8fdc-1a5e76999db8,network=Network(38acf72f-2b62-44ac-86b7-15a313b89179),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdea207a0-1a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:36:24 compute-0 nova_compute[250018]: 2026-01-20 14:36:24.095 250022 DEBUG os_vif [None req-a6df1922-7ac5-4a44-b7ea-c042b6ec579a 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:01:06:ce,bridge_name='br-int',has_traffic_filtering=True,id=dea207a0-1a8b-4f40-8fdc-1a5e76999db8,network=Network(38acf72f-2b62-44ac-86b7-15a313b89179),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdea207a0-1a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 14:36:24 compute-0 nova_compute[250018]: 2026-01-20 14:36:24.096 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:36:24 compute-0 nova_compute[250018]: 2026-01-20 14:36:24.096 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdea207a0-1a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:36:24 compute-0 nova_compute[250018]: 2026-01-20 14:36:24.099 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:36:24 compute-0 nova_compute[250018]: 2026-01-20 14:36:24.101 250022 INFO os_vif [None req-a6df1922-7ac5-4a44-b7ea-c042b6ec579a 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:01:06:ce,bridge_name='br-int',has_traffic_filtering=True,id=dea207a0-1a8b-4f40-8fdc-1a5e76999db8,network=Network(38acf72f-2b62-44ac-86b7-15a313b89179),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdea207a0-1a')
Jan 20 14:36:24 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ba13c0318e056dc685f771f16cc69ff79557a1f3313642a283e33c7988ff9453-userdata-shm.mount: Deactivated successfully.
Jan 20 14:36:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-cdbed832f1a2490809affa67d189fc748d9e559eb1a5fb3cb5390ff626246e25-merged.mount: Deactivated successfully.
Jan 20 14:36:24 compute-0 podman[283979]: 2026-01-20 14:36:24.200306376 +0000 UTC m=+0.160917370 container cleanup ba13c0318e056dc685f771f16cc69ff79557a1f3313642a283e33c7988ff9453 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-38acf72f-2b62-44ac-86b7-15a313b89179, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 20 14:36:24 compute-0 systemd[1]: libpod-conmon-ba13c0318e056dc685f771f16cc69ff79557a1f3313642a283e33c7988ff9453.scope: Deactivated successfully.
Jan 20 14:36:24 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:36:24 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:36:24 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:36:24.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:36:24 compute-0 nova_compute[250018]: 2026-01-20 14:36:24.317 250022 DEBUG nova.compute.manager [req-29b17652-caeb-49cd-b5ad-da522290c9ff req-4d318f0d-b87b-4d3f-a97a-473e5de4b3e6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 1b3763f0-b328-4db2-844b-7f56cc13c19e] Received event network-vif-unplugged-dea207a0-1a8b-4f40-8fdc-1a5e76999db8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:36:24 compute-0 nova_compute[250018]: 2026-01-20 14:36:24.317 250022 DEBUG oslo_concurrency.lockutils [req-29b17652-caeb-49cd-b5ad-da522290c9ff req-4d318f0d-b87b-4d3f-a97a-473e5de4b3e6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "1b3763f0-b328-4db2-844b-7f56cc13c19e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:36:24 compute-0 nova_compute[250018]: 2026-01-20 14:36:24.319 250022 DEBUG oslo_concurrency.lockutils [req-29b17652-caeb-49cd-b5ad-da522290c9ff req-4d318f0d-b87b-4d3f-a97a-473e5de4b3e6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "1b3763f0-b328-4db2-844b-7f56cc13c19e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:36:24 compute-0 nova_compute[250018]: 2026-01-20 14:36:24.319 250022 DEBUG oslo_concurrency.lockutils [req-29b17652-caeb-49cd-b5ad-da522290c9ff req-4d318f0d-b87b-4d3f-a97a-473e5de4b3e6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "1b3763f0-b328-4db2-844b-7f56cc13c19e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:36:24 compute-0 nova_compute[250018]: 2026-01-20 14:36:24.320 250022 DEBUG nova.compute.manager [req-29b17652-caeb-49cd-b5ad-da522290c9ff req-4d318f0d-b87b-4d3f-a97a-473e5de4b3e6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 1b3763f0-b328-4db2-844b-7f56cc13c19e] No waiting events found dispatching network-vif-unplugged-dea207a0-1a8b-4f40-8fdc-1a5e76999db8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:36:24 compute-0 nova_compute[250018]: 2026-01-20 14:36:24.320 250022 DEBUG nova.compute.manager [req-29b17652-caeb-49cd-b5ad-da522290c9ff req-4d318f0d-b87b-4d3f-a97a-473e5de4b3e6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 1b3763f0-b328-4db2-844b-7f56cc13c19e] Received event network-vif-unplugged-dea207a0-1a8b-4f40-8fdc-1a5e76999db8 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 20 14:36:24 compute-0 podman[284036]: 2026-01-20 14:36:24.378440329 +0000 UTC m=+0.157456966 container remove ba13c0318e056dc685f771f16cc69ff79557a1f3313642a283e33c7988ff9453 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-38acf72f-2b62-44ac-86b7-15a313b89179, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 20 14:36:24 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:36:24.384 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[ae2bec66-0c6a-4bbf-8b35-dab6a20475e1]: (4, ('Tue Jan 20 02:36:24 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-38acf72f-2b62-44ac-86b7-15a313b89179 (ba13c0318e056dc685f771f16cc69ff79557a1f3313642a283e33c7988ff9453)\nba13c0318e056dc685f771f16cc69ff79557a1f3313642a283e33c7988ff9453\nTue Jan 20 02:36:24 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-38acf72f-2b62-44ac-86b7-15a313b89179 (ba13c0318e056dc685f771f16cc69ff79557a1f3313642a283e33c7988ff9453)\nba13c0318e056dc685f771f16cc69ff79557a1f3313642a283e33c7988ff9453\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:36:24 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:36:24.386 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[366a6830-ced2-486d-aa8a-97a453ef60ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:36:24 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:36:24.387 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap38acf72f-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:36:24 compute-0 kernel: tap38acf72f-20: left promiscuous mode
Jan 20 14:36:24 compute-0 nova_compute[250018]: 2026-01-20 14:36:24.388 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:36:24 compute-0 nova_compute[250018]: 2026-01-20 14:36:24.404 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:36:24 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:36:24.406 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[644be4e3-06da-4424-8a7f-e401b830b77e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:36:24 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:36:24.431 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[d39f933d-ef06-481f-a8b6-920c4fc7ac2c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:36:24 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:36:24.433 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[78d80b3d-5856-46d6-acb4-cecb539acca2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:36:24 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:36:24.449 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[b493733a-9f45-4e93-a989-5943ec7094da]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 569398, 'reachable_time': 17778, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 284052, 'error': None, 'target': 'ovnmeta-38acf72f-2b62-44ac-86b7-15a313b89179', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:36:24 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:36:24.451 160517 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-38acf72f-2b62-44ac-86b7-15a313b89179 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 14:36:24 compute-0 systemd[1]: run-netns-ovnmeta\x2d38acf72f\x2d2b62\x2d44ac\x2d86b7\x2d15a313b89179.mount: Deactivated successfully.
Jan 20 14:36:24 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:36:24.451 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[1c072f2e-5748-4f91-986c-38bb54e0422b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:36:24 compute-0 ceph-mon[74360]: pgmap v1424: 321 pgs: 321 active+clean; 263 MiB data, 697 MiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 6.7 MiB/s wr, 219 op/s
Jan 20 14:36:24 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/240091767' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:36:24 compute-0 nova_compute[250018]: 2026-01-20 14:36:24.757 250022 INFO nova.virt.libvirt.driver [None req-a6df1922-7ac5-4a44-b7ea-c042b6ec579a 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] [instance: 1b3763f0-b328-4db2-844b-7f56cc13c19e] Deleting instance files /var/lib/nova/instances/1b3763f0-b328-4db2-844b-7f56cc13c19e_del
Jan 20 14:36:24 compute-0 nova_compute[250018]: 2026-01-20 14:36:24.758 250022 INFO nova.virt.libvirt.driver [None req-a6df1922-7ac5-4a44-b7ea-c042b6ec579a 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] [instance: 1b3763f0-b328-4db2-844b-7f56cc13c19e] Deletion of /var/lib/nova/instances/1b3763f0-b328-4db2-844b-7f56cc13c19e_del complete
Jan 20 14:36:24 compute-0 nova_compute[250018]: 2026-01-20 14:36:24.810 250022 INFO nova.compute.manager [None req-a6df1922-7ac5-4a44-b7ea-c042b6ec579a 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] [instance: 1b3763f0-b328-4db2-844b-7f56cc13c19e] Took 0.99 seconds to destroy the instance on the hypervisor.
Jan 20 14:36:24 compute-0 nova_compute[250018]: 2026-01-20 14:36:24.812 250022 DEBUG oslo.service.loopingcall [None req-a6df1922-7ac5-4a44-b7ea-c042b6ec579a 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 14:36:24 compute-0 nova_compute[250018]: 2026-01-20 14:36:24.812 250022 DEBUG nova.compute.manager [-] [instance: 1b3763f0-b328-4db2-844b-7f56cc13c19e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 14:36:24 compute-0 nova_compute[250018]: 2026-01-20 14:36:24.813 250022 DEBUG nova.network.neutron [-] [instance: 1b3763f0-b328-4db2-844b-7f56cc13c19e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 14:36:25 compute-0 nova_compute[250018]: 2026-01-20 14:36:25.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:36:25 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:36:25 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:36:25 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:36:25.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:36:25 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1425: 321 pgs: 321 active+clean; 168 MiB data, 632 MiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 5.7 MiB/s wr, 231 op/s
Jan 20 14:36:25 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/3193668944' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:36:25 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/1706595247' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:36:25 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/2085895492' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:36:26 compute-0 nova_compute[250018]: 2026-01-20 14:36:26.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:36:26 compute-0 nova_compute[250018]: 2026-01-20 14:36:26.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:36:26 compute-0 nova_compute[250018]: 2026-01-20 14:36:26.070 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:36:26 compute-0 nova_compute[250018]: 2026-01-20 14:36:26.071 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:36:26 compute-0 nova_compute[250018]: 2026-01-20 14:36:26.071 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:36:26 compute-0 nova_compute[250018]: 2026-01-20 14:36:26.071 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 14:36:26 compute-0 nova_compute[250018]: 2026-01-20 14:36:26.071 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:36:26 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:36:26 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:36:26 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:36:26.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:36:26 compute-0 nova_compute[250018]: 2026-01-20 14:36:26.317 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:36:26 compute-0 nova_compute[250018]: 2026-01-20 14:36:26.357 250022 DEBUG nova.network.neutron [-] [instance: 1b3763f0-b328-4db2-844b-7f56cc13c19e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:36:26 compute-0 nova_compute[250018]: 2026-01-20 14:36:26.392 250022 INFO nova.compute.manager [-] [instance: 1b3763f0-b328-4db2-844b-7f56cc13c19e] Took 1.58 seconds to deallocate network for instance.
Jan 20 14:36:26 compute-0 nova_compute[250018]: 2026-01-20 14:36:26.436 250022 DEBUG oslo_concurrency.lockutils [None req-a6df1922-7ac5-4a44-b7ea-c042b6ec579a 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:36:26 compute-0 nova_compute[250018]: 2026-01-20 14:36:26.437 250022 DEBUG oslo_concurrency.lockutils [None req-a6df1922-7ac5-4a44-b7ea-c042b6ec579a 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:36:26 compute-0 nova_compute[250018]: 2026-01-20 14:36:26.441 250022 DEBUG nova.compute.manager [req-f0369a2c-8e8f-4902-80bb-71dc571d429a req-0c967d4a-bf7c-439d-af2b-8d2dd893878b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 1b3763f0-b328-4db2-844b-7f56cc13c19e] Received event network-vif-plugged-dea207a0-1a8b-4f40-8fdc-1a5e76999db8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:36:26 compute-0 nova_compute[250018]: 2026-01-20 14:36:26.442 250022 DEBUG oslo_concurrency.lockutils [req-f0369a2c-8e8f-4902-80bb-71dc571d429a req-0c967d4a-bf7c-439d-af2b-8d2dd893878b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "1b3763f0-b328-4db2-844b-7f56cc13c19e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:36:26 compute-0 nova_compute[250018]: 2026-01-20 14:36:26.442 250022 DEBUG oslo_concurrency.lockutils [req-f0369a2c-8e8f-4902-80bb-71dc571d429a req-0c967d4a-bf7c-439d-af2b-8d2dd893878b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "1b3763f0-b328-4db2-844b-7f56cc13c19e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:36:26 compute-0 nova_compute[250018]: 2026-01-20 14:36:26.443 250022 DEBUG oslo_concurrency.lockutils [req-f0369a2c-8e8f-4902-80bb-71dc571d429a req-0c967d4a-bf7c-439d-af2b-8d2dd893878b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "1b3763f0-b328-4db2-844b-7f56cc13c19e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:36:26 compute-0 nova_compute[250018]: 2026-01-20 14:36:26.443 250022 DEBUG nova.compute.manager [req-f0369a2c-8e8f-4902-80bb-71dc571d429a req-0c967d4a-bf7c-439d-af2b-8d2dd893878b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 1b3763f0-b328-4db2-844b-7f56cc13c19e] No waiting events found dispatching network-vif-plugged-dea207a0-1a8b-4f40-8fdc-1a5e76999db8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:36:26 compute-0 nova_compute[250018]: 2026-01-20 14:36:26.443 250022 WARNING nova.compute.manager [req-f0369a2c-8e8f-4902-80bb-71dc571d429a req-0c967d4a-bf7c-439d-af2b-8d2dd893878b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 1b3763f0-b328-4db2-844b-7f56cc13c19e] Received unexpected event network-vif-plugged-dea207a0-1a8b-4f40-8fdc-1a5e76999db8 for instance with vm_state active and task_state deleting.
Jan 20 14:36:26 compute-0 nova_compute[250018]: 2026-01-20 14:36:26.445 250022 DEBUG nova.compute.manager [req-f6e1ae43-2f47-45ef-a04c-52417da84a21 req-1a33e9de-0409-4ec6-9051-ccce0b825789 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 1b3763f0-b328-4db2-844b-7f56cc13c19e] Received event network-vif-deleted-dea207a0-1a8b-4f40-8fdc-1a5e76999db8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:36:26 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:36:26 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/430201608' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:36:26 compute-0 nova_compute[250018]: 2026-01-20 14:36:26.492 250022 DEBUG oslo_concurrency.processutils [None req-a6df1922-7ac5-4a44-b7ea-c042b6ec579a 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:36:26 compute-0 nova_compute[250018]: 2026-01-20 14:36:26.518 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:36:26 compute-0 nova_compute[250018]: 2026-01-20 14:36:26.706 250022 WARNING nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:36:26 compute-0 nova_compute[250018]: 2026-01-20 14:36:26.707 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4563MB free_disk=20.938636779785156GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 14:36:26 compute-0 nova_compute[250018]: 2026-01-20 14:36:26.708 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:36:26 compute-0 ceph-mon[74360]: pgmap v1425: 321 pgs: 321 active+clean; 168 MiB data, 632 MiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 5.7 MiB/s wr, 231 op/s
Jan 20 14:36:26 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/430201608' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:36:26 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:36:26 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2077652599' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:36:26 compute-0 nova_compute[250018]: 2026-01-20 14:36:26.925 250022 DEBUG oslo_concurrency.processutils [None req-a6df1922-7ac5-4a44-b7ea-c042b6ec579a 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:36:26 compute-0 nova_compute[250018]: 2026-01-20 14:36:26.929 250022 DEBUG nova.compute.provider_tree [None req-a6df1922-7ac5-4a44-b7ea-c042b6ec579a 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:36:26 compute-0 nova_compute[250018]: 2026-01-20 14:36:26.946 250022 DEBUG nova.scheduler.client.report [None req-a6df1922-7ac5-4a44-b7ea-c042b6ec579a 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:36:26 compute-0 nova_compute[250018]: 2026-01-20 14:36:26.963 250022 DEBUG oslo_concurrency.lockutils [None req-a6df1922-7ac5-4a44-b7ea-c042b6ec579a 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.526s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:36:26 compute-0 nova_compute[250018]: 2026-01-20 14:36:26.965 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.257s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:36:26 compute-0 nova_compute[250018]: 2026-01-20 14:36:26.993 250022 INFO nova.scheduler.client.report [None req-a6df1922-7ac5-4a44-b7ea-c042b6ec579a 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] Deleted allocations for instance 1b3763f0-b328-4db2-844b-7f56cc13c19e
Jan 20 14:36:27 compute-0 nova_compute[250018]: 2026-01-20 14:36:27.025 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 14:36:27 compute-0 nova_compute[250018]: 2026-01-20 14:36:27.025 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 14:36:27 compute-0 nova_compute[250018]: 2026-01-20 14:36:27.042 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:36:27 compute-0 nova_compute[250018]: 2026-01-20 14:36:27.076 250022 DEBUG oslo_concurrency.lockutils [None req-a6df1922-7ac5-4a44-b7ea-c042b6ec579a 00cec8cbb72b489da46855f8b3b4c42c 79601368a3db41e0aacec93e8fd7f1d4 - - default default] Lock "1b3763f0-b328-4db2-844b-7f56cc13c19e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.256s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:36:27 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:36:27 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:36:27 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:36:27.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:36:27 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:36:27 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3674391804' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:36:27 compute-0 nova_compute[250018]: 2026-01-20 14:36:27.485 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:36:27 compute-0 nova_compute[250018]: 2026-01-20 14:36:27.492 250022 DEBUG nova.compute.provider_tree [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:36:27 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1426: 321 pgs: 321 active+clean; 99 MiB data, 586 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 4.6 MiB/s wr, 210 op/s
Jan 20 14:36:27 compute-0 nova_compute[250018]: 2026-01-20 14:36:27.525 250022 DEBUG nova.scheduler.client.report [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:36:27 compute-0 nova_compute[250018]: 2026-01-20 14:36:27.557 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 14:36:27 compute-0 nova_compute[250018]: 2026-01-20 14:36:27.557 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.592s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:36:27 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e215 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:36:27 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/2077652599' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:36:27 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3674391804' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:36:27 compute-0 ceph-mon[74360]: pgmap v1426: 321 pgs: 321 active+clean; 99 MiB data, 586 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 4.6 MiB/s wr, 210 op/s
Jan 20 14:36:28 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:36:28 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:36:28 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:36:28.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:36:28 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/2774003200' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:36:28 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/3876039807' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:36:29 compute-0 nova_compute[250018]: 2026-01-20 14:36:29.098 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:36:29 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:36:29 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:36:29 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:36:29.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:36:29 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1427: 321 pgs: 321 active+clean; 88 MiB data, 575 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 3.2 MiB/s wr, 199 op/s
Jan 20 14:36:29 compute-0 nova_compute[250018]: 2026-01-20 14:36:29.558 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:36:29 compute-0 nova_compute[250018]: 2026-01-20 14:36:29.558 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:36:29 compute-0 nova_compute[250018]: 2026-01-20 14:36:29.559 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 14:36:29 compute-0 nova_compute[250018]: 2026-01-20 14:36:29.574 250022 DEBUG oslo_concurrency.lockutils [None req-f83e8dc1-beed-4b89-b0b5-6566b3133027 af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] Acquiring lock "64830e28-fd5b-41c5-ba24-3f203f4d4b10" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:36:29 compute-0 nova_compute[250018]: 2026-01-20 14:36:29.575 250022 DEBUG oslo_concurrency.lockutils [None req-f83e8dc1-beed-4b89-b0b5-6566b3133027 af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] Lock "64830e28-fd5b-41c5-ba24-3f203f4d4b10" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:36:29 compute-0 nova_compute[250018]: 2026-01-20 14:36:29.596 250022 DEBUG nova.compute.manager [None req-f83e8dc1-beed-4b89-b0b5-6566b3133027 af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] [instance: 64830e28-fd5b-41c5-ba24-3f203f4d4b10] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 14:36:29 compute-0 nova_compute[250018]: 2026-01-20 14:36:29.660 250022 DEBUG oslo_concurrency.lockutils [None req-f83e8dc1-beed-4b89-b0b5-6566b3133027 af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:36:29 compute-0 nova_compute[250018]: 2026-01-20 14:36:29.661 250022 DEBUG oslo_concurrency.lockutils [None req-f83e8dc1-beed-4b89-b0b5-6566b3133027 af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:36:29 compute-0 nova_compute[250018]: 2026-01-20 14:36:29.667 250022 DEBUG nova.virt.hardware [None req-f83e8dc1-beed-4b89-b0b5-6566b3133027 af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 14:36:29 compute-0 nova_compute[250018]: 2026-01-20 14:36:29.667 250022 INFO nova.compute.claims [None req-f83e8dc1-beed-4b89-b0b5-6566b3133027 af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] [instance: 64830e28-fd5b-41c5-ba24-3f203f4d4b10] Claim successful on node compute-0.ctlplane.example.com
Jan 20 14:36:29 compute-0 nova_compute[250018]: 2026-01-20 14:36:29.780 250022 DEBUG oslo_concurrency.processutils [None req-f83e8dc1-beed-4b89-b0b5-6566b3133027 af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:36:29 compute-0 ceph-mon[74360]: pgmap v1427: 321 pgs: 321 active+clean; 88 MiB data, 575 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 3.2 MiB/s wr, 199 op/s
Jan 20 14:36:30 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:36:30 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3016561653' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:36:30 compute-0 nova_compute[250018]: 2026-01-20 14:36:30.232 250022 DEBUG oslo_concurrency.processutils [None req-f83e8dc1-beed-4b89-b0b5-6566b3133027 af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:36:30 compute-0 nova_compute[250018]: 2026-01-20 14:36:30.238 250022 DEBUG nova.compute.provider_tree [None req-f83e8dc1-beed-4b89-b0b5-6566b3133027 af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:36:30 compute-0 nova_compute[250018]: 2026-01-20 14:36:30.262 250022 DEBUG nova.scheduler.client.report [None req-f83e8dc1-beed-4b89-b0b5-6566b3133027 af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:36:30 compute-0 nova_compute[250018]: 2026-01-20 14:36:30.292 250022 DEBUG oslo_concurrency.lockutils [None req-f83e8dc1-beed-4b89-b0b5-6566b3133027 af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.632s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:36:30 compute-0 nova_compute[250018]: 2026-01-20 14:36:30.294 250022 DEBUG nova.compute.manager [None req-f83e8dc1-beed-4b89-b0b5-6566b3133027 af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] [instance: 64830e28-fd5b-41c5-ba24-3f203f4d4b10] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 14:36:30 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:36:30 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:36:30 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:36:30.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:36:30 compute-0 nova_compute[250018]: 2026-01-20 14:36:30.356 250022 DEBUG nova.compute.manager [None req-f83e8dc1-beed-4b89-b0b5-6566b3133027 af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] [instance: 64830e28-fd5b-41c5-ba24-3f203f4d4b10] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 20 14:36:30 compute-0 nova_compute[250018]: 2026-01-20 14:36:30.356 250022 DEBUG nova.network.neutron [None req-f83e8dc1-beed-4b89-b0b5-6566b3133027 af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] [instance: 64830e28-fd5b-41c5-ba24-3f203f4d4b10] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 14:36:30 compute-0 nova_compute[250018]: 2026-01-20 14:36:30.382 250022 INFO nova.virt.libvirt.driver [None req-f83e8dc1-beed-4b89-b0b5-6566b3133027 af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] [instance: 64830e28-fd5b-41c5-ba24-3f203f4d4b10] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 14:36:30 compute-0 nova_compute[250018]: 2026-01-20 14:36:30.419 250022 DEBUG nova.compute.manager [None req-f83e8dc1-beed-4b89-b0b5-6566b3133027 af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] [instance: 64830e28-fd5b-41c5-ba24-3f203f4d4b10] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 14:36:30 compute-0 nova_compute[250018]: 2026-01-20 14:36:30.572 250022 DEBUG nova.compute.manager [None req-f83e8dc1-beed-4b89-b0b5-6566b3133027 af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] [instance: 64830e28-fd5b-41c5-ba24-3f203f4d4b10] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 14:36:30 compute-0 nova_compute[250018]: 2026-01-20 14:36:30.573 250022 DEBUG nova.virt.libvirt.driver [None req-f83e8dc1-beed-4b89-b0b5-6566b3133027 af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] [instance: 64830e28-fd5b-41c5-ba24-3f203f4d4b10] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 14:36:30 compute-0 nova_compute[250018]: 2026-01-20 14:36:30.573 250022 INFO nova.virt.libvirt.driver [None req-f83e8dc1-beed-4b89-b0b5-6566b3133027 af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] [instance: 64830e28-fd5b-41c5-ba24-3f203f4d4b10] Creating image(s)
Jan 20 14:36:30 compute-0 nova_compute[250018]: 2026-01-20 14:36:30.595 250022 DEBUG nova.storage.rbd_utils [None req-f83e8dc1-beed-4b89-b0b5-6566b3133027 af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] rbd image 64830e28-fd5b-41c5-ba24-3f203f4d4b10_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:36:30 compute-0 nova_compute[250018]: 2026-01-20 14:36:30.618 250022 DEBUG nova.storage.rbd_utils [None req-f83e8dc1-beed-4b89-b0b5-6566b3133027 af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] rbd image 64830e28-fd5b-41c5-ba24-3f203f4d4b10_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:36:30 compute-0 nova_compute[250018]: 2026-01-20 14:36:30.642 250022 DEBUG nova.storage.rbd_utils [None req-f83e8dc1-beed-4b89-b0b5-6566b3133027 af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] rbd image 64830e28-fd5b-41c5-ba24-3f203f4d4b10_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:36:30 compute-0 nova_compute[250018]: 2026-01-20 14:36:30.646 250022 DEBUG oslo_concurrency.processutils [None req-f83e8dc1-beed-4b89-b0b5-6566b3133027 af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:36:30 compute-0 nova_compute[250018]: 2026-01-20 14:36:30.673 250022 DEBUG nova.policy [None req-f83e8dc1-beed-4b89-b0b5-6566b3133027 af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'af0dc50e860c4144ab2ecc679760d941', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3b81c28ccb1340c9bb2d254088a5793b', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 20 14:36:30 compute-0 nova_compute[250018]: 2026-01-20 14:36:30.680 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:36:30 compute-0 nova_compute[250018]: 2026-01-20 14:36:30.709 250022 DEBUG oslo_concurrency.processutils [None req-f83e8dc1-beed-4b89-b0b5-6566b3133027 af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:36:30 compute-0 nova_compute[250018]: 2026-01-20 14:36:30.710 250022 DEBUG oslo_concurrency.lockutils [None req-f83e8dc1-beed-4b89-b0b5-6566b3133027 af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] Acquiring lock "82d5c1918fd7c974214c7a48c1793a7a82560462" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:36:30 compute-0 nova_compute[250018]: 2026-01-20 14:36:30.711 250022 DEBUG oslo_concurrency.lockutils [None req-f83e8dc1-beed-4b89-b0b5-6566b3133027 af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:36:30 compute-0 nova_compute[250018]: 2026-01-20 14:36:30.711 250022 DEBUG oslo_concurrency.lockutils [None req-f83e8dc1-beed-4b89-b0b5-6566b3133027 af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:36:30 compute-0 nova_compute[250018]: 2026-01-20 14:36:30.734 250022 DEBUG nova.storage.rbd_utils [None req-f83e8dc1-beed-4b89-b0b5-6566b3133027 af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] rbd image 64830e28-fd5b-41c5-ba24-3f203f4d4b10_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:36:30 compute-0 nova_compute[250018]: 2026-01-20 14:36:30.737 250022 DEBUG oslo_concurrency.processutils [None req-f83e8dc1-beed-4b89-b0b5-6566b3133027 af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 64830e28-fd5b-41c5-ba24-3f203f4d4b10_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:36:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:36:30.750 160071 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:36:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:36:30.750 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:36:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:36:30.751 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:36:31 compute-0 nova_compute[250018]: 2026-01-20 14:36:31.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:36:31 compute-0 nova_compute[250018]: 2026-01-20 14:36:31.052 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 14:36:31 compute-0 nova_compute[250018]: 2026-01-20 14:36:31.052 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 14:36:31 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3016561653' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:36:31 compute-0 nova_compute[250018]: 2026-01-20 14:36:31.182 250022 DEBUG oslo_concurrency.processutils [None req-f83e8dc1-beed-4b89-b0b5-6566b3133027 af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 64830e28-fd5b-41c5-ba24-3f203f4d4b10_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:36:31 compute-0 nova_compute[250018]: 2026-01-20 14:36:31.245 250022 DEBUG nova.storage.rbd_utils [None req-f83e8dc1-beed-4b89-b0b5-6566b3133027 af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] resizing rbd image 64830e28-fd5b-41c5-ba24-3f203f4d4b10_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 20 14:36:31 compute-0 nova_compute[250018]: 2026-01-20 14:36:31.279 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] [instance: 64830e28-fd5b-41c5-ba24-3f203f4d4b10] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 20 14:36:31 compute-0 nova_compute[250018]: 2026-01-20 14:36:31.280 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 20 14:36:31 compute-0 nova_compute[250018]: 2026-01-20 14:36:31.318 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:36:31 compute-0 nova_compute[250018]: 2026-01-20 14:36:31.353 250022 DEBUG nova.objects.instance [None req-f83e8dc1-beed-4b89-b0b5-6566b3133027 af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] Lazy-loading 'migration_context' on Instance uuid 64830e28-fd5b-41c5-ba24-3f203f4d4b10 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:36:31 compute-0 nova_compute[250018]: 2026-01-20 14:36:31.376 250022 DEBUG nova.virt.libvirt.driver [None req-f83e8dc1-beed-4b89-b0b5-6566b3133027 af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] [instance: 64830e28-fd5b-41c5-ba24-3f203f4d4b10] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 14:36:31 compute-0 nova_compute[250018]: 2026-01-20 14:36:31.376 250022 DEBUG nova.virt.libvirt.driver [None req-f83e8dc1-beed-4b89-b0b5-6566b3133027 af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] [instance: 64830e28-fd5b-41c5-ba24-3f203f4d4b10] Ensure instance console log exists: /var/lib/nova/instances/64830e28-fd5b-41c5-ba24-3f203f4d4b10/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 14:36:31 compute-0 nova_compute[250018]: 2026-01-20 14:36:31.377 250022 DEBUG oslo_concurrency.lockutils [None req-f83e8dc1-beed-4b89-b0b5-6566b3133027 af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:36:31 compute-0 nova_compute[250018]: 2026-01-20 14:36:31.378 250022 DEBUG oslo_concurrency.lockutils [None req-f83e8dc1-beed-4b89-b0b5-6566b3133027 af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:36:31 compute-0 nova_compute[250018]: 2026-01-20 14:36:31.378 250022 DEBUG oslo_concurrency.lockutils [None req-f83e8dc1-beed-4b89-b0b5-6566b3133027 af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:36:31 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:36:31 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:36:31 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:36:31.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:36:31 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1428: 321 pgs: 321 active+clean; 93 MiB data, 574 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 308 KiB/s wr, 148 op/s
Jan 20 14:36:31 compute-0 nova_compute[250018]: 2026-01-20 14:36:31.632 250022 DEBUG nova.network.neutron [None req-f83e8dc1-beed-4b89-b0b5-6566b3133027 af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] [instance: 64830e28-fd5b-41c5-ba24-3f203f4d4b10] Successfully created port: d9d285b2-4ab9-4091-aa07-a9972a7e1031 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 20 14:36:32 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e215 do_prune osdmap full prune enabled
Jan 20 14:36:32 compute-0 ceph-mon[74360]: pgmap v1428: 321 pgs: 321 active+clean; 93 MiB data, 574 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 308 KiB/s wr, 148 op/s
Jan 20 14:36:32 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e216 e216: 3 total, 3 up, 3 in
Jan 20 14:36:32 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e216: 3 total, 3 up, 3 in
Jan 20 14:36:32 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:36:32 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:36:32 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:36:32.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:36:32 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:36:32 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e216 do_prune osdmap full prune enabled
Jan 20 14:36:32 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e217 e217: 3 total, 3 up, 3 in
Jan 20 14:36:32 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e217: 3 total, 3 up, 3 in
Jan 20 14:36:33 compute-0 nova_compute[250018]: 2026-01-20 14:36:33.097 250022 DEBUG nova.network.neutron [None req-f83e8dc1-beed-4b89-b0b5-6566b3133027 af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] [instance: 64830e28-fd5b-41c5-ba24-3f203f4d4b10] Successfully updated port: d9d285b2-4ab9-4091-aa07-a9972a7e1031 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 20 14:36:33 compute-0 nova_compute[250018]: 2026-01-20 14:36:33.128 250022 DEBUG oslo_concurrency.lockutils [None req-f83e8dc1-beed-4b89-b0b5-6566b3133027 af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] Acquiring lock "refresh_cache-64830e28-fd5b-41c5-ba24-3f203f4d4b10" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:36:33 compute-0 nova_compute[250018]: 2026-01-20 14:36:33.128 250022 DEBUG oslo_concurrency.lockutils [None req-f83e8dc1-beed-4b89-b0b5-6566b3133027 af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] Acquired lock "refresh_cache-64830e28-fd5b-41c5-ba24-3f203f4d4b10" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:36:33 compute-0 nova_compute[250018]: 2026-01-20 14:36:33.128 250022 DEBUG nova.network.neutron [None req-f83e8dc1-beed-4b89-b0b5-6566b3133027 af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] [instance: 64830e28-fd5b-41c5-ba24-3f203f4d4b10] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:36:33 compute-0 nova_compute[250018]: 2026-01-20 14:36:33.334 250022 DEBUG nova.compute.manager [req-9c6a8dda-a9af-40dc-b05c-9d4d301fdc62 req-5fca5540-ba85-43ac-869f-c8cae35f2399 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 64830e28-fd5b-41c5-ba24-3f203f4d4b10] Received event network-changed-d9d285b2-4ab9-4091-aa07-a9972a7e1031 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:36:33 compute-0 nova_compute[250018]: 2026-01-20 14:36:33.334 250022 DEBUG nova.compute.manager [req-9c6a8dda-a9af-40dc-b05c-9d4d301fdc62 req-5fca5540-ba85-43ac-869f-c8cae35f2399 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 64830e28-fd5b-41c5-ba24-3f203f4d4b10] Refreshing instance network info cache due to event network-changed-d9d285b2-4ab9-4091-aa07-a9972a7e1031. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:36:33 compute-0 nova_compute[250018]: 2026-01-20 14:36:33.334 250022 DEBUG oslo_concurrency.lockutils [req-9c6a8dda-a9af-40dc-b05c-9d4d301fdc62 req-5fca5540-ba85-43ac-869f-c8cae35f2399 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-64830e28-fd5b-41c5-ba24-3f203f4d4b10" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:36:33 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:36:33 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:36:33 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:36:33.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:36:33 compute-0 nova_compute[250018]: 2026-01-20 14:36:33.436 250022 DEBUG nova.network.neutron [None req-f83e8dc1-beed-4b89-b0b5-6566b3133027 af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] [instance: 64830e28-fd5b-41c5-ba24-3f203f4d4b10] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 14:36:33 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1431: 321 pgs: 321 active+clean; 93 MiB data, 574 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 283 KiB/s wr, 137 op/s
Jan 20 14:36:33 compute-0 ceph-mon[74360]: osdmap e216: 3 total, 3 up, 3 in
Jan 20 14:36:33 compute-0 ceph-mon[74360]: osdmap e217: 3 total, 3 up, 3 in
Jan 20 14:36:34 compute-0 nova_compute[250018]: 2026-01-20 14:36:34.100 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:36:34 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:36:34 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:36:34 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:36:34.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:36:34 compute-0 nova_compute[250018]: 2026-01-20 14:36:34.514 250022 DEBUG nova.network.neutron [None req-f83e8dc1-beed-4b89-b0b5-6566b3133027 af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] [instance: 64830e28-fd5b-41c5-ba24-3f203f4d4b10] Updating instance_info_cache with network_info: [{"id": "d9d285b2-4ab9-4091-aa07-a9972a7e1031", "address": "fa:16:3e:45:c6:51", "network": {"id": "f4d9ccb1-41fe-4463-aceb-85602ee3a20f", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1786213854-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b81c28ccb1340c9bb2d254088a5793b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9d285b2-4a", "ovs_interfaceid": "d9d285b2-4ab9-4091-aa07-a9972a7e1031", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:36:34 compute-0 nova_compute[250018]: 2026-01-20 14:36:34.532 250022 DEBUG oslo_concurrency.lockutils [None req-f83e8dc1-beed-4b89-b0b5-6566b3133027 af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] Releasing lock "refresh_cache-64830e28-fd5b-41c5-ba24-3f203f4d4b10" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:36:34 compute-0 nova_compute[250018]: 2026-01-20 14:36:34.532 250022 DEBUG nova.compute.manager [None req-f83e8dc1-beed-4b89-b0b5-6566b3133027 af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] [instance: 64830e28-fd5b-41c5-ba24-3f203f4d4b10] Instance network_info: |[{"id": "d9d285b2-4ab9-4091-aa07-a9972a7e1031", "address": "fa:16:3e:45:c6:51", "network": {"id": "f4d9ccb1-41fe-4463-aceb-85602ee3a20f", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1786213854-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b81c28ccb1340c9bb2d254088a5793b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9d285b2-4a", "ovs_interfaceid": "d9d285b2-4ab9-4091-aa07-a9972a7e1031", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 20 14:36:34 compute-0 nova_compute[250018]: 2026-01-20 14:36:34.533 250022 DEBUG oslo_concurrency.lockutils [req-9c6a8dda-a9af-40dc-b05c-9d4d301fdc62 req-5fca5540-ba85-43ac-869f-c8cae35f2399 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-64830e28-fd5b-41c5-ba24-3f203f4d4b10" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:36:34 compute-0 nova_compute[250018]: 2026-01-20 14:36:34.533 250022 DEBUG nova.network.neutron [req-9c6a8dda-a9af-40dc-b05c-9d4d301fdc62 req-5fca5540-ba85-43ac-869f-c8cae35f2399 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 64830e28-fd5b-41c5-ba24-3f203f4d4b10] Refreshing network info cache for port d9d285b2-4ab9-4091-aa07-a9972a7e1031 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:36:34 compute-0 nova_compute[250018]: 2026-01-20 14:36:34.536 250022 DEBUG nova.virt.libvirt.driver [None req-f83e8dc1-beed-4b89-b0b5-6566b3133027 af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] [instance: 64830e28-fd5b-41c5-ba24-3f203f4d4b10] Start _get_guest_xml network_info=[{"id": "d9d285b2-4ab9-4091-aa07-a9972a7e1031", "address": "fa:16:3e:45:c6:51", "network": {"id": "f4d9ccb1-41fe-4463-aceb-85602ee3a20f", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1786213854-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b81c28ccb1340c9bb2d254088a5793b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9d285b2-4a", "ovs_interfaceid": "d9d285b2-4ab9-4091-aa07-a9972a7e1031", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encrypted': False, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'guest_format': None, 'size': 0, 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 14:36:34 compute-0 nova_compute[250018]: 2026-01-20 14:36:34.539 250022 WARNING nova.virt.libvirt.driver [None req-f83e8dc1-beed-4b89-b0b5-6566b3133027 af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:36:34 compute-0 nova_compute[250018]: 2026-01-20 14:36:34.543 250022 DEBUG nova.virt.libvirt.host [None req-f83e8dc1-beed-4b89-b0b5-6566b3133027 af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 14:36:34 compute-0 nova_compute[250018]: 2026-01-20 14:36:34.543 250022 DEBUG nova.virt.libvirt.host [None req-f83e8dc1-beed-4b89-b0b5-6566b3133027 af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 14:36:34 compute-0 nova_compute[250018]: 2026-01-20 14:36:34.546 250022 DEBUG nova.virt.libvirt.host [None req-f83e8dc1-beed-4b89-b0b5-6566b3133027 af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 14:36:34 compute-0 nova_compute[250018]: 2026-01-20 14:36:34.546 250022 DEBUG nova.virt.libvirt.host [None req-f83e8dc1-beed-4b89-b0b5-6566b3133027 af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 14:36:34 compute-0 nova_compute[250018]: 2026-01-20 14:36:34.547 250022 DEBUG nova.virt.libvirt.driver [None req-f83e8dc1-beed-4b89-b0b5-6566b3133027 af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 14:36:34 compute-0 nova_compute[250018]: 2026-01-20 14:36:34.548 250022 DEBUG nova.virt.hardware [None req-f83e8dc1-beed-4b89-b0b5-6566b3133027 af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 14:36:34 compute-0 nova_compute[250018]: 2026-01-20 14:36:34.548 250022 DEBUG nova.virt.hardware [None req-f83e8dc1-beed-4b89-b0b5-6566b3133027 af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 14:36:34 compute-0 nova_compute[250018]: 2026-01-20 14:36:34.548 250022 DEBUG nova.virt.hardware [None req-f83e8dc1-beed-4b89-b0b5-6566b3133027 af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 14:36:34 compute-0 nova_compute[250018]: 2026-01-20 14:36:34.549 250022 DEBUG nova.virt.hardware [None req-f83e8dc1-beed-4b89-b0b5-6566b3133027 af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 14:36:34 compute-0 nova_compute[250018]: 2026-01-20 14:36:34.549 250022 DEBUG nova.virt.hardware [None req-f83e8dc1-beed-4b89-b0b5-6566b3133027 af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 14:36:34 compute-0 nova_compute[250018]: 2026-01-20 14:36:34.549 250022 DEBUG nova.virt.hardware [None req-f83e8dc1-beed-4b89-b0b5-6566b3133027 af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 14:36:34 compute-0 nova_compute[250018]: 2026-01-20 14:36:34.549 250022 DEBUG nova.virt.hardware [None req-f83e8dc1-beed-4b89-b0b5-6566b3133027 af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 14:36:34 compute-0 nova_compute[250018]: 2026-01-20 14:36:34.550 250022 DEBUG nova.virt.hardware [None req-f83e8dc1-beed-4b89-b0b5-6566b3133027 af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 14:36:34 compute-0 nova_compute[250018]: 2026-01-20 14:36:34.550 250022 DEBUG nova.virt.hardware [None req-f83e8dc1-beed-4b89-b0b5-6566b3133027 af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 14:36:34 compute-0 nova_compute[250018]: 2026-01-20 14:36:34.550 250022 DEBUG nova.virt.hardware [None req-f83e8dc1-beed-4b89-b0b5-6566b3133027 af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 14:36:34 compute-0 nova_compute[250018]: 2026-01-20 14:36:34.550 250022 DEBUG nova.virt.hardware [None req-f83e8dc1-beed-4b89-b0b5-6566b3133027 af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 14:36:34 compute-0 nova_compute[250018]: 2026-01-20 14:36:34.553 250022 DEBUG oslo_concurrency.processutils [None req-f83e8dc1-beed-4b89-b0b5-6566b3133027 af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:36:34 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e217 do_prune osdmap full prune enabled
Jan 20 14:36:34 compute-0 ceph-mon[74360]: pgmap v1431: 321 pgs: 321 active+clean; 93 MiB data, 574 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 283 KiB/s wr, 137 op/s
Jan 20 14:36:34 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e218 e218: 3 total, 3 up, 3 in
Jan 20 14:36:34 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e218: 3 total, 3 up, 3 in
Jan 20 14:36:35 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:36:35 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1394839658' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:36:35 compute-0 nova_compute[250018]: 2026-01-20 14:36:35.048 250022 DEBUG oslo_concurrency.processutils [None req-f83e8dc1-beed-4b89-b0b5-6566b3133027 af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:36:35 compute-0 nova_compute[250018]: 2026-01-20 14:36:35.070 250022 DEBUG nova.storage.rbd_utils [None req-f83e8dc1-beed-4b89-b0b5-6566b3133027 af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] rbd image 64830e28-fd5b-41c5-ba24-3f203f4d4b10_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:36:35 compute-0 nova_compute[250018]: 2026-01-20 14:36:35.074 250022 DEBUG oslo_concurrency.processutils [None req-f83e8dc1-beed-4b89-b0b5-6566b3133027 af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:36:35 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:36:35 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:36:35 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:36:35.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:36:35 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:36:35 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3940471981' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:36:35 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1433: 321 pgs: 321 active+clean; 168 MiB data, 611 MiB used, 20 GiB / 21 GiB avail; 6.9 MiB/s rd, 6.6 MiB/s wr, 233 op/s
Jan 20 14:36:35 compute-0 nova_compute[250018]: 2026-01-20 14:36:35.537 250022 DEBUG oslo_concurrency.processutils [None req-f83e8dc1-beed-4b89-b0b5-6566b3133027 af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:36:35 compute-0 nova_compute[250018]: 2026-01-20 14:36:35.539 250022 DEBUG nova.virt.libvirt.vif [None req-f83e8dc1-beed-4b89-b0b5-6566b3133027 af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:36:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-2115140388',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-2115140388',id=54,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3b81c28ccb1340c9bb2d254088a5793b',ramdisk_id='',reservation_id='r-ctd26x1q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesV270Test-1034738423',owner_user_name='tempest-AttachInterfacesV270Test-1034738423-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:36:30Z,user_data=None,user_id='af0dc50e860c4144ab2ecc679760d941',uuid=64830e28-fd5b-41c5-ba24-3f203f4d4b10,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d9d285b2-4ab9-4091-aa07-a9972a7e1031", "address": "fa:16:3e:45:c6:51", "network": {"id": "f4d9ccb1-41fe-4463-aceb-85602ee3a20f", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1786213854-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b81c28ccb1340c9bb2d254088a5793b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9d285b2-4a", "ovs_interfaceid": "d9d285b2-4ab9-4091-aa07-a9972a7e1031", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 14:36:35 compute-0 nova_compute[250018]: 2026-01-20 14:36:35.539 250022 DEBUG nova.network.os_vif_util [None req-f83e8dc1-beed-4b89-b0b5-6566b3133027 af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] Converting VIF {"id": "d9d285b2-4ab9-4091-aa07-a9972a7e1031", "address": "fa:16:3e:45:c6:51", "network": {"id": "f4d9ccb1-41fe-4463-aceb-85602ee3a20f", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1786213854-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b81c28ccb1340c9bb2d254088a5793b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9d285b2-4a", "ovs_interfaceid": "d9d285b2-4ab9-4091-aa07-a9972a7e1031", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:36:35 compute-0 nova_compute[250018]: 2026-01-20 14:36:35.540 250022 DEBUG nova.network.os_vif_util [None req-f83e8dc1-beed-4b89-b0b5-6566b3133027 af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:45:c6:51,bridge_name='br-int',has_traffic_filtering=True,id=d9d285b2-4ab9-4091-aa07-a9972a7e1031,network=Network(f4d9ccb1-41fe-4463-aceb-85602ee3a20f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd9d285b2-4a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:36:35 compute-0 nova_compute[250018]: 2026-01-20 14:36:35.541 250022 DEBUG nova.objects.instance [None req-f83e8dc1-beed-4b89-b0b5-6566b3133027 af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] Lazy-loading 'pci_devices' on Instance uuid 64830e28-fd5b-41c5-ba24-3f203f4d4b10 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:36:35 compute-0 nova_compute[250018]: 2026-01-20 14:36:35.563 250022 DEBUG nova.virt.libvirt.driver [None req-f83e8dc1-beed-4b89-b0b5-6566b3133027 af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] [instance: 64830e28-fd5b-41c5-ba24-3f203f4d4b10] End _get_guest_xml xml=<domain type="kvm">
Jan 20 14:36:35 compute-0 nova_compute[250018]:   <uuid>64830e28-fd5b-41c5-ba24-3f203f4d4b10</uuid>
Jan 20 14:36:35 compute-0 nova_compute[250018]:   <name>instance-00000036</name>
Jan 20 14:36:35 compute-0 nova_compute[250018]:   <memory>131072</memory>
Jan 20 14:36:35 compute-0 nova_compute[250018]:   <vcpu>1</vcpu>
Jan 20 14:36:35 compute-0 nova_compute[250018]:   <metadata>
Jan 20 14:36:35 compute-0 nova_compute[250018]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 14:36:35 compute-0 nova_compute[250018]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 14:36:35 compute-0 nova_compute[250018]:       <nova:name>tempest-AttachInterfacesV270Test-server-2115140388</nova:name>
Jan 20 14:36:35 compute-0 nova_compute[250018]:       <nova:creationTime>2026-01-20 14:36:34</nova:creationTime>
Jan 20 14:36:35 compute-0 nova_compute[250018]:       <nova:flavor name="m1.nano">
Jan 20 14:36:35 compute-0 nova_compute[250018]:         <nova:memory>128</nova:memory>
Jan 20 14:36:35 compute-0 nova_compute[250018]:         <nova:disk>1</nova:disk>
Jan 20 14:36:35 compute-0 nova_compute[250018]:         <nova:swap>0</nova:swap>
Jan 20 14:36:35 compute-0 nova_compute[250018]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 14:36:35 compute-0 nova_compute[250018]:         <nova:vcpus>1</nova:vcpus>
Jan 20 14:36:35 compute-0 nova_compute[250018]:       </nova:flavor>
Jan 20 14:36:35 compute-0 nova_compute[250018]:       <nova:owner>
Jan 20 14:36:35 compute-0 nova_compute[250018]:         <nova:user uuid="af0dc50e860c4144ab2ecc679760d941">tempest-AttachInterfacesV270Test-1034738423-project-member</nova:user>
Jan 20 14:36:35 compute-0 nova_compute[250018]:         <nova:project uuid="3b81c28ccb1340c9bb2d254088a5793b">tempest-AttachInterfacesV270Test-1034738423</nova:project>
Jan 20 14:36:35 compute-0 nova_compute[250018]:       </nova:owner>
Jan 20 14:36:35 compute-0 nova_compute[250018]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 14:36:35 compute-0 nova_compute[250018]:       <nova:ports>
Jan 20 14:36:35 compute-0 nova_compute[250018]:         <nova:port uuid="d9d285b2-4ab9-4091-aa07-a9972a7e1031">
Jan 20 14:36:35 compute-0 nova_compute[250018]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Jan 20 14:36:35 compute-0 nova_compute[250018]:         </nova:port>
Jan 20 14:36:35 compute-0 nova_compute[250018]:       </nova:ports>
Jan 20 14:36:35 compute-0 nova_compute[250018]:     </nova:instance>
Jan 20 14:36:35 compute-0 nova_compute[250018]:   </metadata>
Jan 20 14:36:35 compute-0 nova_compute[250018]:   <sysinfo type="smbios">
Jan 20 14:36:35 compute-0 nova_compute[250018]:     <system>
Jan 20 14:36:35 compute-0 nova_compute[250018]:       <entry name="manufacturer">RDO</entry>
Jan 20 14:36:35 compute-0 nova_compute[250018]:       <entry name="product">OpenStack Compute</entry>
Jan 20 14:36:35 compute-0 nova_compute[250018]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 14:36:35 compute-0 nova_compute[250018]:       <entry name="serial">64830e28-fd5b-41c5-ba24-3f203f4d4b10</entry>
Jan 20 14:36:35 compute-0 nova_compute[250018]:       <entry name="uuid">64830e28-fd5b-41c5-ba24-3f203f4d4b10</entry>
Jan 20 14:36:35 compute-0 nova_compute[250018]:       <entry name="family">Virtual Machine</entry>
Jan 20 14:36:35 compute-0 nova_compute[250018]:     </system>
Jan 20 14:36:35 compute-0 nova_compute[250018]:   </sysinfo>
Jan 20 14:36:35 compute-0 nova_compute[250018]:   <os>
Jan 20 14:36:35 compute-0 nova_compute[250018]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 14:36:35 compute-0 nova_compute[250018]:     <boot dev="hd"/>
Jan 20 14:36:35 compute-0 nova_compute[250018]:     <smbios mode="sysinfo"/>
Jan 20 14:36:35 compute-0 nova_compute[250018]:   </os>
Jan 20 14:36:35 compute-0 nova_compute[250018]:   <features>
Jan 20 14:36:35 compute-0 nova_compute[250018]:     <acpi/>
Jan 20 14:36:35 compute-0 nova_compute[250018]:     <apic/>
Jan 20 14:36:35 compute-0 nova_compute[250018]:     <vmcoreinfo/>
Jan 20 14:36:35 compute-0 nova_compute[250018]:   </features>
Jan 20 14:36:35 compute-0 nova_compute[250018]:   <clock offset="utc">
Jan 20 14:36:35 compute-0 nova_compute[250018]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 14:36:35 compute-0 nova_compute[250018]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 14:36:35 compute-0 nova_compute[250018]:     <timer name="hpet" present="no"/>
Jan 20 14:36:35 compute-0 nova_compute[250018]:   </clock>
Jan 20 14:36:35 compute-0 nova_compute[250018]:   <cpu mode="custom" match="exact">
Jan 20 14:36:35 compute-0 nova_compute[250018]:     <model>Nehalem</model>
Jan 20 14:36:35 compute-0 nova_compute[250018]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 14:36:35 compute-0 nova_compute[250018]:   </cpu>
Jan 20 14:36:35 compute-0 nova_compute[250018]:   <devices>
Jan 20 14:36:35 compute-0 nova_compute[250018]:     <disk type="network" device="disk">
Jan 20 14:36:35 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 14:36:35 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/64830e28-fd5b-41c5-ba24-3f203f4d4b10_disk">
Jan 20 14:36:35 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:36:35 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:36:35 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:36:35 compute-0 nova_compute[250018]:       </source>
Jan 20 14:36:35 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 14:36:35 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:36:35 compute-0 nova_compute[250018]:       </auth>
Jan 20 14:36:35 compute-0 nova_compute[250018]:       <target dev="vda" bus="virtio"/>
Jan 20 14:36:35 compute-0 nova_compute[250018]:     </disk>
Jan 20 14:36:35 compute-0 nova_compute[250018]:     <disk type="network" device="cdrom">
Jan 20 14:36:35 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 14:36:35 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/64830e28-fd5b-41c5-ba24-3f203f4d4b10_disk.config">
Jan 20 14:36:35 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:36:35 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:36:35 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:36:35 compute-0 nova_compute[250018]:       </source>
Jan 20 14:36:35 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 14:36:35 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:36:35 compute-0 nova_compute[250018]:       </auth>
Jan 20 14:36:35 compute-0 nova_compute[250018]:       <target dev="sda" bus="sata"/>
Jan 20 14:36:35 compute-0 nova_compute[250018]:     </disk>
Jan 20 14:36:35 compute-0 nova_compute[250018]:     <interface type="ethernet">
Jan 20 14:36:35 compute-0 nova_compute[250018]:       <mac address="fa:16:3e:45:c6:51"/>
Jan 20 14:36:35 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 14:36:35 compute-0 nova_compute[250018]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 14:36:35 compute-0 nova_compute[250018]:       <mtu size="1442"/>
Jan 20 14:36:35 compute-0 nova_compute[250018]:       <target dev="tapd9d285b2-4a"/>
Jan 20 14:36:35 compute-0 nova_compute[250018]:     </interface>
Jan 20 14:36:35 compute-0 nova_compute[250018]:     <serial type="pty">
Jan 20 14:36:35 compute-0 nova_compute[250018]:       <log file="/var/lib/nova/instances/64830e28-fd5b-41c5-ba24-3f203f4d4b10/console.log" append="off"/>
Jan 20 14:36:35 compute-0 nova_compute[250018]:     </serial>
Jan 20 14:36:35 compute-0 nova_compute[250018]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 14:36:35 compute-0 nova_compute[250018]:     <video>
Jan 20 14:36:35 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 14:36:35 compute-0 nova_compute[250018]:     </video>
Jan 20 14:36:35 compute-0 nova_compute[250018]:     <input type="tablet" bus="usb"/>
Jan 20 14:36:35 compute-0 nova_compute[250018]:     <rng model="virtio">
Jan 20 14:36:35 compute-0 nova_compute[250018]:       <backend model="random">/dev/urandom</backend>
Jan 20 14:36:35 compute-0 nova_compute[250018]:     </rng>
Jan 20 14:36:35 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root"/>
Jan 20 14:36:35 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:36:35 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:36:35 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:36:35 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:36:35 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:36:35 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:36:35 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:36:35 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:36:35 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:36:35 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:36:35 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:36:35 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:36:35 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:36:35 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:36:35 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:36:35 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:36:35 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:36:35 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:36:35 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:36:35 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:36:35 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:36:35 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:36:35 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:36:35 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:36:35 compute-0 nova_compute[250018]:     <controller type="usb" index="0"/>
Jan 20 14:36:35 compute-0 nova_compute[250018]:     <memballoon model="virtio">
Jan 20 14:36:35 compute-0 nova_compute[250018]:       <stats period="10"/>
Jan 20 14:36:35 compute-0 nova_compute[250018]:     </memballoon>
Jan 20 14:36:35 compute-0 nova_compute[250018]:   </devices>
Jan 20 14:36:35 compute-0 nova_compute[250018]: </domain>
Jan 20 14:36:35 compute-0 nova_compute[250018]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 14:36:35 compute-0 nova_compute[250018]: 2026-01-20 14:36:35.565 250022 DEBUG nova.compute.manager [None req-f83e8dc1-beed-4b89-b0b5-6566b3133027 af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] [instance: 64830e28-fd5b-41c5-ba24-3f203f4d4b10] Preparing to wait for external event network-vif-plugged-d9d285b2-4ab9-4091-aa07-a9972a7e1031 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 20 14:36:35 compute-0 nova_compute[250018]: 2026-01-20 14:36:35.565 250022 DEBUG oslo_concurrency.lockutils [None req-f83e8dc1-beed-4b89-b0b5-6566b3133027 af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] Acquiring lock "64830e28-fd5b-41c5-ba24-3f203f4d4b10-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:36:35 compute-0 nova_compute[250018]: 2026-01-20 14:36:35.565 250022 DEBUG oslo_concurrency.lockutils [None req-f83e8dc1-beed-4b89-b0b5-6566b3133027 af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] Lock "64830e28-fd5b-41c5-ba24-3f203f4d4b10-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:36:35 compute-0 nova_compute[250018]: 2026-01-20 14:36:35.565 250022 DEBUG oslo_concurrency.lockutils [None req-f83e8dc1-beed-4b89-b0b5-6566b3133027 af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] Lock "64830e28-fd5b-41c5-ba24-3f203f4d4b10-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:36:35 compute-0 nova_compute[250018]: 2026-01-20 14:36:35.566 250022 DEBUG nova.virt.libvirt.vif [None req-f83e8dc1-beed-4b89-b0b5-6566b3133027 af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:36:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-2115140388',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-2115140388',id=54,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3b81c28ccb1340c9bb2d254088a5793b',ramdisk_id='',reservation_id='r-ctd26x1q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesV270Test-1034738423',owner_user_name='tempest-AttachInterfacesV270Test-1034738423-project-memb
er'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:36:30Z,user_data=None,user_id='af0dc50e860c4144ab2ecc679760d941',uuid=64830e28-fd5b-41c5-ba24-3f203f4d4b10,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d9d285b2-4ab9-4091-aa07-a9972a7e1031", "address": "fa:16:3e:45:c6:51", "network": {"id": "f4d9ccb1-41fe-4463-aceb-85602ee3a20f", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1786213854-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b81c28ccb1340c9bb2d254088a5793b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9d285b2-4a", "ovs_interfaceid": "d9d285b2-4ab9-4091-aa07-a9972a7e1031", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 14:36:35 compute-0 nova_compute[250018]: 2026-01-20 14:36:35.566 250022 DEBUG nova.network.os_vif_util [None req-f83e8dc1-beed-4b89-b0b5-6566b3133027 af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] Converting VIF {"id": "d9d285b2-4ab9-4091-aa07-a9972a7e1031", "address": "fa:16:3e:45:c6:51", "network": {"id": "f4d9ccb1-41fe-4463-aceb-85602ee3a20f", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1786213854-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b81c28ccb1340c9bb2d254088a5793b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9d285b2-4a", "ovs_interfaceid": "d9d285b2-4ab9-4091-aa07-a9972a7e1031", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:36:35 compute-0 nova_compute[250018]: 2026-01-20 14:36:35.567 250022 DEBUG nova.network.os_vif_util [None req-f83e8dc1-beed-4b89-b0b5-6566b3133027 af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:45:c6:51,bridge_name='br-int',has_traffic_filtering=True,id=d9d285b2-4ab9-4091-aa07-a9972a7e1031,network=Network(f4d9ccb1-41fe-4463-aceb-85602ee3a20f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd9d285b2-4a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:36:35 compute-0 nova_compute[250018]: 2026-01-20 14:36:35.567 250022 DEBUG os_vif [None req-f83e8dc1-beed-4b89-b0b5-6566b3133027 af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:45:c6:51,bridge_name='br-int',has_traffic_filtering=True,id=d9d285b2-4ab9-4091-aa07-a9972a7e1031,network=Network(f4d9ccb1-41fe-4463-aceb-85602ee3a20f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd9d285b2-4a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 14:36:35 compute-0 nova_compute[250018]: 2026-01-20 14:36:35.568 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:36:35 compute-0 nova_compute[250018]: 2026-01-20 14:36:35.568 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:36:35 compute-0 nova_compute[250018]: 2026-01-20 14:36:35.569 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:36:35 compute-0 nova_compute[250018]: 2026-01-20 14:36:35.574 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:36:35 compute-0 nova_compute[250018]: 2026-01-20 14:36:35.574 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd9d285b2-4a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:36:35 compute-0 nova_compute[250018]: 2026-01-20 14:36:35.575 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd9d285b2-4a, col_values=(('external_ids', {'iface-id': 'd9d285b2-4ab9-4091-aa07-a9972a7e1031', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:45:c6:51', 'vm-uuid': '64830e28-fd5b-41c5-ba24-3f203f4d4b10'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:36:35 compute-0 nova_compute[250018]: 2026-01-20 14:36:35.577 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:36:35 compute-0 NetworkManager[48960]: <info>  [1768919795.5785] manager: (tapd9d285b2-4a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/82)
Jan 20 14:36:35 compute-0 nova_compute[250018]: 2026-01-20 14:36:35.579 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:36:35 compute-0 nova_compute[250018]: 2026-01-20 14:36:35.584 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:36:35 compute-0 nova_compute[250018]: 2026-01-20 14:36:35.585 250022 INFO os_vif [None req-f83e8dc1-beed-4b89-b0b5-6566b3133027 af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:45:c6:51,bridge_name='br-int',has_traffic_filtering=True,id=d9d285b2-4ab9-4091-aa07-a9972a7e1031,network=Network(f4d9ccb1-41fe-4463-aceb-85602ee3a20f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd9d285b2-4a')
Jan 20 14:36:35 compute-0 nova_compute[250018]: 2026-01-20 14:36:35.631 250022 DEBUG nova.virt.libvirt.driver [None req-f83e8dc1-beed-4b89-b0b5-6566b3133027 af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:36:35 compute-0 nova_compute[250018]: 2026-01-20 14:36:35.631 250022 DEBUG nova.virt.libvirt.driver [None req-f83e8dc1-beed-4b89-b0b5-6566b3133027 af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:36:35 compute-0 nova_compute[250018]: 2026-01-20 14:36:35.632 250022 DEBUG nova.virt.libvirt.driver [None req-f83e8dc1-beed-4b89-b0b5-6566b3133027 af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] No VIF found with MAC fa:16:3e:45:c6:51, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 14:36:35 compute-0 nova_compute[250018]: 2026-01-20 14:36:35.632 250022 INFO nova.virt.libvirt.driver [None req-f83e8dc1-beed-4b89-b0b5-6566b3133027 af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] [instance: 64830e28-fd5b-41c5-ba24-3f203f4d4b10] Using config drive
Jan 20 14:36:35 compute-0 nova_compute[250018]: 2026-01-20 14:36:35.655 250022 DEBUG nova.storage.rbd_utils [None req-f83e8dc1-beed-4b89-b0b5-6566b3133027 af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] rbd image 64830e28-fd5b-41c5-ba24-3f203f4d4b10_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:36:35 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e218 do_prune osdmap full prune enabled
Jan 20 14:36:35 compute-0 ceph-mon[74360]: osdmap e218: 3 total, 3 up, 3 in
Jan 20 14:36:35 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/1394839658' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:36:35 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3940471981' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:36:35 compute-0 ceph-mon[74360]: pgmap v1433: 321 pgs: 321 active+clean; 168 MiB data, 611 MiB used, 20 GiB / 21 GiB avail; 6.9 MiB/s rd, 6.6 MiB/s wr, 233 op/s
Jan 20 14:36:36 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e219 e219: 3 total, 3 up, 3 in
Jan 20 14:36:36 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e219: 3 total, 3 up, 3 in
Jan 20 14:36:36 compute-0 nova_compute[250018]: 2026-01-20 14:36:36.273 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:36:36 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:36:36 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:36:36 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:36:36.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:36:36 compute-0 nova_compute[250018]: 2026-01-20 14:36:36.315 250022 DEBUG nova.network.neutron [req-9c6a8dda-a9af-40dc-b05c-9d4d301fdc62 req-5fca5540-ba85-43ac-869f-c8cae35f2399 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 64830e28-fd5b-41c5-ba24-3f203f4d4b10] Updated VIF entry in instance network info cache for port d9d285b2-4ab9-4091-aa07-a9972a7e1031. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 14:36:36 compute-0 nova_compute[250018]: 2026-01-20 14:36:36.316 250022 DEBUG nova.network.neutron [req-9c6a8dda-a9af-40dc-b05c-9d4d301fdc62 req-5fca5540-ba85-43ac-869f-c8cae35f2399 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 64830e28-fd5b-41c5-ba24-3f203f4d4b10] Updating instance_info_cache with network_info: [{"id": "d9d285b2-4ab9-4091-aa07-a9972a7e1031", "address": "fa:16:3e:45:c6:51", "network": {"id": "f4d9ccb1-41fe-4463-aceb-85602ee3a20f", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1786213854-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b81c28ccb1340c9bb2d254088a5793b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9d285b2-4a", "ovs_interfaceid": "d9d285b2-4ab9-4091-aa07-a9972a7e1031", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:36:36 compute-0 nova_compute[250018]: 2026-01-20 14:36:36.323 250022 INFO nova.virt.libvirt.driver [None req-f83e8dc1-beed-4b89-b0b5-6566b3133027 af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] [instance: 64830e28-fd5b-41c5-ba24-3f203f4d4b10] Creating config drive at /var/lib/nova/instances/64830e28-fd5b-41c5-ba24-3f203f4d4b10/disk.config
Jan 20 14:36:36 compute-0 nova_compute[250018]: 2026-01-20 14:36:36.333 250022 DEBUG oslo_concurrency.processutils [None req-f83e8dc1-beed-4b89-b0b5-6566b3133027 af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/64830e28-fd5b-41c5-ba24-3f203f4d4b10/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpc5soycn4 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:36:36 compute-0 nova_compute[250018]: 2026-01-20 14:36:36.358 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:36:36 compute-0 nova_compute[250018]: 2026-01-20 14:36:36.466 250022 DEBUG oslo_concurrency.processutils [None req-f83e8dc1-beed-4b89-b0b5-6566b3133027 af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/64830e28-fd5b-41c5-ba24-3f203f4d4b10/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpc5soycn4" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:36:36 compute-0 nova_compute[250018]: 2026-01-20 14:36:36.490 250022 DEBUG nova.storage.rbd_utils [None req-f83e8dc1-beed-4b89-b0b5-6566b3133027 af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] rbd image 64830e28-fd5b-41c5-ba24-3f203f4d4b10_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:36:36 compute-0 nova_compute[250018]: 2026-01-20 14:36:36.493 250022 DEBUG oslo_concurrency.processutils [None req-f83e8dc1-beed-4b89-b0b5-6566b3133027 af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/64830e28-fd5b-41c5-ba24-3f203f4d4b10/disk.config 64830e28-fd5b-41c5-ba24-3f203f4d4b10_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:36:36 compute-0 nova_compute[250018]: 2026-01-20 14:36:36.562 250022 DEBUG oslo_concurrency.lockutils [req-9c6a8dda-a9af-40dc-b05c-9d4d301fdc62 req-5fca5540-ba85-43ac-869f-c8cae35f2399 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-64830e28-fd5b-41c5-ba24-3f203f4d4b10" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:36:36 compute-0 nova_compute[250018]: 2026-01-20 14:36:36.671 250022 DEBUG oslo_concurrency.processutils [None req-f83e8dc1-beed-4b89-b0b5-6566b3133027 af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/64830e28-fd5b-41c5-ba24-3f203f4d4b10/disk.config 64830e28-fd5b-41c5-ba24-3f203f4d4b10_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.177s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:36:36 compute-0 nova_compute[250018]: 2026-01-20 14:36:36.671 250022 INFO nova.virt.libvirt.driver [None req-f83e8dc1-beed-4b89-b0b5-6566b3133027 af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] [instance: 64830e28-fd5b-41c5-ba24-3f203f4d4b10] Deleting local config drive /var/lib/nova/instances/64830e28-fd5b-41c5-ba24-3f203f4d4b10/disk.config because it was imported into RBD.
Jan 20 14:36:36 compute-0 kernel: tapd9d285b2-4a: entered promiscuous mode
Jan 20 14:36:36 compute-0 NetworkManager[48960]: <info>  [1768919796.7151] manager: (tapd9d285b2-4a): new Tun device (/org/freedesktop/NetworkManager/Devices/83)
Jan 20 14:36:36 compute-0 nova_compute[250018]: 2026-01-20 14:36:36.715 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:36:36 compute-0 ovn_controller[148666]: 2026-01-20T14:36:36Z|00149|binding|INFO|Claiming lport d9d285b2-4ab9-4091-aa07-a9972a7e1031 for this chassis.
Jan 20 14:36:36 compute-0 ovn_controller[148666]: 2026-01-20T14:36:36Z|00150|binding|INFO|d9d285b2-4ab9-4091-aa07-a9972a7e1031: Claiming fa:16:3e:45:c6:51 10.100.0.6
Jan 20 14:36:36 compute-0 nova_compute[250018]: 2026-01-20 14:36:36.718 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:36:36 compute-0 nova_compute[250018]: 2026-01-20 14:36:36.721 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:36:36 compute-0 systemd-udevd[284449]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:36:36 compute-0 systemd-machined[216401]: New machine qemu-24-instance-00000036.
Jan 20 14:36:36 compute-0 NetworkManager[48960]: <info>  [1768919796.7833] device (tapd9d285b2-4a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 14:36:36 compute-0 NetworkManager[48960]: <info>  [1768919796.7846] device (tapd9d285b2-4a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 14:36:36 compute-0 systemd[1]: Started Virtual Machine qemu-24-instance-00000036.
Jan 20 14:36:36 compute-0 nova_compute[250018]: 2026-01-20 14:36:36.824 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:36:36 compute-0 ovn_controller[148666]: 2026-01-20T14:36:36Z|00151|binding|INFO|Setting lport d9d285b2-4ab9-4091-aa07-a9972a7e1031 ovn-installed in OVS
Jan 20 14:36:36 compute-0 nova_compute[250018]: 2026-01-20 14:36:36.827 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:36:36 compute-0 ovn_controller[148666]: 2026-01-20T14:36:36Z|00152|binding|INFO|Setting lport d9d285b2-4ab9-4091-aa07-a9972a7e1031 up in Southbound
Jan 20 14:36:36 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:36:36.936 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:45:c6:51 10.100.0.6'], port_security=['fa:16:3e:45:c6:51 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '64830e28-fd5b-41c5-ba24-3f203f4d4b10', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f4d9ccb1-41fe-4463-aceb-85602ee3a20f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3b81c28ccb1340c9bb2d254088a5793b', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'bfd63a86-3b8f-4e92-886c-1c808edda567', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=180c0e39-8d10-4e06-9164-434ccb2cb4a5, chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=d9d285b2-4ab9-4091-aa07-a9972a7e1031) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:36:36 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:36:36.938 160071 INFO neutron.agent.ovn.metadata.agent [-] Port d9d285b2-4ab9-4091-aa07-a9972a7e1031 in datapath f4d9ccb1-41fe-4463-aceb-85602ee3a20f bound to our chassis
Jan 20 14:36:36 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:36:36.940 160071 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f4d9ccb1-41fe-4463-aceb-85602ee3a20f
Jan 20 14:36:36 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:36:36.956 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[d1fee78e-499e-4b1c-ba70-714636417002]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:36:36 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:36:36.958 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf4d9ccb1-41 in ovnmeta-f4d9ccb1-41fe-4463-aceb-85602ee3a20f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 14:36:36 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:36:36.961 257604 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf4d9ccb1-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 14:36:36 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:36:36.961 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[4077fb30-4fd7-4506-b4e9-61797744fa58]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:36:36 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:36:36.963 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[6509a3be-c9d3-486d-a177-f8dfe911c75b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:36:36 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:36:36.976 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[4df846e3-4a6a-429f-abcb-b57dd2ffb5b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:36:36 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:36:36.992 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[f0b06bf0-85e4-4a5b-bf44-d36f91a14677]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:36:37 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:36:37.026 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[d349519b-a52a-463d-a81e-dc8ea1d52465]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:36:37 compute-0 NetworkManager[48960]: <info>  [1768919797.0393] manager: (tapf4d9ccb1-40): new Veth device (/org/freedesktop/NetworkManager/Devices/84)
Jan 20 14:36:37 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:36:37.038 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[d74127ca-1a29-42d7-b3cc-d993ec2b9bbe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:36:37 compute-0 ceph-mon[74360]: osdmap e219: 3 total, 3 up, 3 in
Jan 20 14:36:37 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:36:37.091 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[6fd7d92d-bfd5-4c4c-ac27-efe29981ca02]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:36:37 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:36:37.094 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[def07a52-2efc-40f5-ba35-bca4af465a10]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:36:37 compute-0 NetworkManager[48960]: <info>  [1768919797.1167] device (tapf4d9ccb1-40): carrier: link connected
Jan 20 14:36:37 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:36:37.122 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[cc7bd262-8b09-44f6-b29d-9ffeeb3db7bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:36:37 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:36:37.139 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[eacaadaa-9d52-4c89-94ee-ec4bdcd16b4f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf4d9ccb1-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:10:fe:2a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 49], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 573783, 'reachable_time': 38336, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 284483, 'error': None, 'target': 'ovnmeta-f4d9ccb1-41fe-4463-aceb-85602ee3a20f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:36:37 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:36:37.154 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[4ce392b6-0164-423a-b115-92644a9034fd]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe10:fe2a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 573783, 'tstamp': 573783}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 284484, 'error': None, 'target': 'ovnmeta-f4d9ccb1-41fe-4463-aceb-85602ee3a20f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:36:37 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:36:37.170 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[2f78ece0-5adf-49fa-bb35-c8f5550691df]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf4d9ccb1-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:10:fe:2a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 49], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 573783, 'reachable_time': 38336, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 284485, 'error': None, 'target': 'ovnmeta-f4d9ccb1-41fe-4463-aceb-85602ee3a20f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:36:37 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:36:37.199 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[d1eab269-d31c-4380-9d10-4d60738cc0bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:36:37 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:36:37.264 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[6fd2466f-d924-4564-8c08-2c9bdd3ab313]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:36:37 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:36:37.265 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf4d9ccb1-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:36:37 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:36:37.266 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:36:37 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:36:37.266 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf4d9ccb1-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:36:37 compute-0 kernel: tapf4d9ccb1-40: entered promiscuous mode
Jan 20 14:36:37 compute-0 nova_compute[250018]: 2026-01-20 14:36:37.268 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:36:37 compute-0 nova_compute[250018]: 2026-01-20 14:36:37.269 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:36:37 compute-0 NetworkManager[48960]: <info>  [1768919797.2701] manager: (tapf4d9ccb1-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/85)
Jan 20 14:36:37 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:36:37.270 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf4d9ccb1-40, col_values=(('external_ids', {'iface-id': '0c5d26c5-0fe3-416e-8ce4-aadab785c96d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:36:37 compute-0 nova_compute[250018]: 2026-01-20 14:36:37.271 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:36:37 compute-0 ovn_controller[148666]: 2026-01-20T14:36:37Z|00153|binding|INFO|Releasing lport 0c5d26c5-0fe3-416e-8ce4-aadab785c96d from this chassis (sb_readonly=0)
Jan 20 14:36:37 compute-0 nova_compute[250018]: 2026-01-20 14:36:37.291 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:36:37 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:36:37.292 160071 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f4d9ccb1-41fe-4463-aceb-85602ee3a20f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f4d9ccb1-41fe-4463-aceb-85602ee3a20f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 14:36:37 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:36:37.293 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[88b4ae97-459b-48b0-a312-63182ecbac6d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:36:37 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:36:37.294 160071 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 14:36:37 compute-0 ovn_metadata_agent[160049]: global
Jan 20 14:36:37 compute-0 ovn_metadata_agent[160049]:     log         /dev/log local0 debug
Jan 20 14:36:37 compute-0 ovn_metadata_agent[160049]:     log-tag     haproxy-metadata-proxy-f4d9ccb1-41fe-4463-aceb-85602ee3a20f
Jan 20 14:36:37 compute-0 ovn_metadata_agent[160049]:     user        root
Jan 20 14:36:37 compute-0 ovn_metadata_agent[160049]:     group       root
Jan 20 14:36:37 compute-0 ovn_metadata_agent[160049]:     maxconn     1024
Jan 20 14:36:37 compute-0 ovn_metadata_agent[160049]:     pidfile     /var/lib/neutron/external/pids/f4d9ccb1-41fe-4463-aceb-85602ee3a20f.pid.haproxy
Jan 20 14:36:37 compute-0 ovn_metadata_agent[160049]:     daemon
Jan 20 14:36:37 compute-0 ovn_metadata_agent[160049]: 
Jan 20 14:36:37 compute-0 ovn_metadata_agent[160049]: defaults
Jan 20 14:36:37 compute-0 ovn_metadata_agent[160049]:     log global
Jan 20 14:36:37 compute-0 ovn_metadata_agent[160049]:     mode http
Jan 20 14:36:37 compute-0 ovn_metadata_agent[160049]:     option httplog
Jan 20 14:36:37 compute-0 ovn_metadata_agent[160049]:     option dontlognull
Jan 20 14:36:37 compute-0 ovn_metadata_agent[160049]:     option http-server-close
Jan 20 14:36:37 compute-0 ovn_metadata_agent[160049]:     option forwardfor
Jan 20 14:36:37 compute-0 ovn_metadata_agent[160049]:     retries                 3
Jan 20 14:36:37 compute-0 ovn_metadata_agent[160049]:     timeout http-request    30s
Jan 20 14:36:37 compute-0 ovn_metadata_agent[160049]:     timeout connect         30s
Jan 20 14:36:37 compute-0 ovn_metadata_agent[160049]:     timeout client          32s
Jan 20 14:36:37 compute-0 ovn_metadata_agent[160049]:     timeout server          32s
Jan 20 14:36:37 compute-0 ovn_metadata_agent[160049]:     timeout http-keep-alive 30s
Jan 20 14:36:37 compute-0 ovn_metadata_agent[160049]: 
Jan 20 14:36:37 compute-0 ovn_metadata_agent[160049]: 
Jan 20 14:36:37 compute-0 ovn_metadata_agent[160049]: listen listener
Jan 20 14:36:37 compute-0 ovn_metadata_agent[160049]:     bind 169.254.169.254:80
Jan 20 14:36:37 compute-0 ovn_metadata_agent[160049]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 14:36:37 compute-0 ovn_metadata_agent[160049]:     http-request add-header X-OVN-Network-ID f4d9ccb1-41fe-4463-aceb-85602ee3a20f
Jan 20 14:36:37 compute-0 ovn_metadata_agent[160049]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 14:36:37 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:36:37.295 160071 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f4d9ccb1-41fe-4463-aceb-85602ee3a20f', 'env', 'PROCESS_TAG=haproxy-f4d9ccb1-41fe-4463-aceb-85602ee3a20f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f4d9ccb1-41fe-4463-aceb-85602ee3a20f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 14:36:37 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:36:37 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:36:37 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:36:37.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:36:37 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1435: 321 pgs: 321 active+clean; 180 MiB data, 616 MiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 7.7 MiB/s wr, 189 op/s
Jan 20 14:36:37 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e219 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:36:37 compute-0 podman[284517]: 2026-01-20 14:36:37.722819892 +0000 UTC m=+0.050354658 container create 929cc713e7605af3d02caa3ac6e8924ce3cd5c5f35be5a7c970aabaea7637385 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f4d9ccb1-41fe-4463-aceb-85602ee3a20f, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 20 14:36:37 compute-0 systemd[1]: Started libpod-conmon-929cc713e7605af3d02caa3ac6e8924ce3cd5c5f35be5a7c970aabaea7637385.scope.
Jan 20 14:36:37 compute-0 podman[284517]: 2026-01-20 14:36:37.694316144 +0000 UTC m=+0.021850910 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 14:36:37 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:36:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/177259cf7066ad0268f71d329b2758efec6613e0b4d00ec4ae3a0ad94905ae14/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 14:36:37 compute-0 podman[284517]: 2026-01-20 14:36:37.815694397 +0000 UTC m=+0.143229143 container init 929cc713e7605af3d02caa3ac6e8924ce3cd5c5f35be5a7c970aabaea7637385 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f4d9ccb1-41fe-4463-aceb-85602ee3a20f, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS)
Jan 20 14:36:37 compute-0 podman[284517]: 2026-01-20 14:36:37.82545254 +0000 UTC m=+0.152987286 container start 929cc713e7605af3d02caa3ac6e8924ce3cd5c5f35be5a7c970aabaea7637385 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f4d9ccb1-41fe-4463-aceb-85602ee3a20f, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 20 14:36:37 compute-0 neutron-haproxy-ovnmeta-f4d9ccb1-41fe-4463-aceb-85602ee3a20f[284547]: [NOTICE]   (284571) : New worker (284576) forked
Jan 20 14:36:37 compute-0 neutron-haproxy-ovnmeta-f4d9ccb1-41fe-4463-aceb-85602ee3a20f[284547]: [NOTICE]   (284571) : Loading success.
Jan 20 14:36:37 compute-0 nova_compute[250018]: 2026-01-20 14:36:37.950 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768919797.9499607, 64830e28-fd5b-41c5-ba24-3f203f4d4b10 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:36:37 compute-0 nova_compute[250018]: 2026-01-20 14:36:37.951 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 64830e28-fd5b-41c5-ba24-3f203f4d4b10] VM Started (Lifecycle Event)
Jan 20 14:36:37 compute-0 nova_compute[250018]: 2026-01-20 14:36:37.970 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 64830e28-fd5b-41c5-ba24-3f203f4d4b10] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:36:37 compute-0 nova_compute[250018]: 2026-01-20 14:36:37.973 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768919797.9510484, 64830e28-fd5b-41c5-ba24-3f203f4d4b10 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:36:37 compute-0 nova_compute[250018]: 2026-01-20 14:36:37.974 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 64830e28-fd5b-41c5-ba24-3f203f4d4b10] VM Paused (Lifecycle Event)
Jan 20 14:36:37 compute-0 nova_compute[250018]: 2026-01-20 14:36:37.989 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 64830e28-fd5b-41c5-ba24-3f203f4d4b10] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:36:37 compute-0 nova_compute[250018]: 2026-01-20 14:36:37.992 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 64830e28-fd5b-41c5-ba24-3f203f4d4b10] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:36:38 compute-0 nova_compute[250018]: 2026-01-20 14:36:38.013 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 64830e28-fd5b-41c5-ba24-3f203f4d4b10] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:36:38 compute-0 ceph-mon[74360]: pgmap v1435: 321 pgs: 321 active+clean; 180 MiB data, 616 MiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 7.7 MiB/s wr, 189 op/s
Jan 20 14:36:38 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:36:38 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:36:38 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:36:38.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:36:39 compute-0 nova_compute[250018]: 2026-01-20 14:36:39.066 250022 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768919784.064966, 1b3763f0-b328-4db2-844b-7f56cc13c19e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:36:39 compute-0 nova_compute[250018]: 2026-01-20 14:36:39.066 250022 INFO nova.compute.manager [-] [instance: 1b3763f0-b328-4db2-844b-7f56cc13c19e] VM Stopped (Lifecycle Event)
Jan 20 14:36:39 compute-0 nova_compute[250018]: 2026-01-20 14:36:39.098 250022 DEBUG nova.compute.manager [None req-05436832-b3f5-4552-882d-d2acef68bde7 - - - - - -] [instance: 1b3763f0-b328-4db2-844b-7f56cc13c19e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:36:39 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:36:39 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:36:39 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:36:39.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:36:39 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1436: 321 pgs: 321 active+clean; 181 MiB data, 616 MiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 6.0 MiB/s wr, 154 op/s
Jan 20 14:36:40 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:36:40 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:36:40 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:36:40.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:36:40 compute-0 nova_compute[250018]: 2026-01-20 14:36:40.578 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:36:40 compute-0 ceph-mon[74360]: pgmap v1436: 321 pgs: 321 active+clean; 181 MiB data, 616 MiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 6.0 MiB/s wr, 154 op/s
Jan 20 14:36:41 compute-0 nova_compute[250018]: 2026-01-20 14:36:41.321 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:36:41 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:36:41 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:36:41 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:36:41.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:36:41 compute-0 podman[284593]: 2026-01-20 14:36:41.456292917 +0000 UTC m=+0.051793048 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Jan 20 14:36:41 compute-0 podman[284592]: 2026-01-20 14:36:41.514481716 +0000 UTC m=+0.111496838 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, 
org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:36:41 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1437: 321 pgs: 321 active+clean; 187 MiB data, 617 MiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 6.2 MiB/s wr, 194 op/s
Jan 20 14:36:42 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:36:42 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:36:42 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:36:42.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:36:42 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e219 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:36:42 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e219 do_prune osdmap full prune enabled
Jan 20 14:36:42 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e220 e220: 3 total, 3 up, 3 in
Jan 20 14:36:42 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e220: 3 total, 3 up, 3 in
Jan 20 14:36:42 compute-0 ceph-mon[74360]: pgmap v1437: 321 pgs: 321 active+clean; 187 MiB data, 617 MiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 6.2 MiB/s wr, 194 op/s
Jan 20 14:36:43 compute-0 sudo[284638]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:36:43 compute-0 sudo[284638]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:36:43 compute-0 sudo[284638]: pam_unix(sudo:session): session closed for user root
Jan 20 14:36:43 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:36:43 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:36:43 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:36:43.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:36:43 compute-0 sudo[284663]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:36:43 compute-0 sudo[284663]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:36:43 compute-0 sudo[284663]: pam_unix(sudo:session): session closed for user root
Jan 20 14:36:43 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1439: 321 pgs: 321 active+clean; 187 MiB data, 617 MiB used, 20 GiB / 21 GiB avail; 593 KiB/s rd, 1.5 MiB/s wr, 119 op/s
Jan 20 14:36:43 compute-0 nova_compute[250018]: 2026-01-20 14:36:43.776 250022 DEBUG nova.compute.manager [req-2ad46439-87da-458c-b8cf-f517b9ae8e03 req-0406855e-83ac-4bd5-b92b-d232a87f946b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 64830e28-fd5b-41c5-ba24-3f203f4d4b10] Received event network-vif-plugged-d9d285b2-4ab9-4091-aa07-a9972a7e1031 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:36:43 compute-0 nova_compute[250018]: 2026-01-20 14:36:43.776 250022 DEBUG oslo_concurrency.lockutils [req-2ad46439-87da-458c-b8cf-f517b9ae8e03 req-0406855e-83ac-4bd5-b92b-d232a87f946b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "64830e28-fd5b-41c5-ba24-3f203f4d4b10-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:36:43 compute-0 nova_compute[250018]: 2026-01-20 14:36:43.776 250022 DEBUG oslo_concurrency.lockutils [req-2ad46439-87da-458c-b8cf-f517b9ae8e03 req-0406855e-83ac-4bd5-b92b-d232a87f946b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "64830e28-fd5b-41c5-ba24-3f203f4d4b10-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:36:43 compute-0 nova_compute[250018]: 2026-01-20 14:36:43.776 250022 DEBUG oslo_concurrency.lockutils [req-2ad46439-87da-458c-b8cf-f517b9ae8e03 req-0406855e-83ac-4bd5-b92b-d232a87f946b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "64830e28-fd5b-41c5-ba24-3f203f4d4b10-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:36:43 compute-0 nova_compute[250018]: 2026-01-20 14:36:43.776 250022 DEBUG nova.compute.manager [req-2ad46439-87da-458c-b8cf-f517b9ae8e03 req-0406855e-83ac-4bd5-b92b-d232a87f946b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 64830e28-fd5b-41c5-ba24-3f203f4d4b10] Processing event network-vif-plugged-d9d285b2-4ab9-4091-aa07-a9972a7e1031 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 20 14:36:43 compute-0 nova_compute[250018]: 2026-01-20 14:36:43.777 250022 DEBUG nova.compute.manager [None req-f83e8dc1-beed-4b89-b0b5-6566b3133027 af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] [instance: 64830e28-fd5b-41c5-ba24-3f203f4d4b10] Instance event wait completed in 5 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 14:36:43 compute-0 nova_compute[250018]: 2026-01-20 14:36:43.780 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768919803.7806048, 64830e28-fd5b-41c5-ba24-3f203f4d4b10 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:36:43 compute-0 nova_compute[250018]: 2026-01-20 14:36:43.780 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 64830e28-fd5b-41c5-ba24-3f203f4d4b10] VM Resumed (Lifecycle Event)
Jan 20 14:36:43 compute-0 nova_compute[250018]: 2026-01-20 14:36:43.782 250022 DEBUG nova.virt.libvirt.driver [None req-f83e8dc1-beed-4b89-b0b5-6566b3133027 af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] [instance: 64830e28-fd5b-41c5-ba24-3f203f4d4b10] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 14:36:43 compute-0 nova_compute[250018]: 2026-01-20 14:36:43.785 250022 INFO nova.virt.libvirt.driver [-] [instance: 64830e28-fd5b-41c5-ba24-3f203f4d4b10] Instance spawned successfully.
Jan 20 14:36:43 compute-0 nova_compute[250018]: 2026-01-20 14:36:43.785 250022 DEBUG nova.virt.libvirt.driver [None req-f83e8dc1-beed-4b89-b0b5-6566b3133027 af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] [instance: 64830e28-fd5b-41c5-ba24-3f203f4d4b10] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 14:36:43 compute-0 nova_compute[250018]: 2026-01-20 14:36:43.812 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 64830e28-fd5b-41c5-ba24-3f203f4d4b10] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:36:43 compute-0 nova_compute[250018]: 2026-01-20 14:36:43.817 250022 DEBUG nova.virt.libvirt.driver [None req-f83e8dc1-beed-4b89-b0b5-6566b3133027 af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] [instance: 64830e28-fd5b-41c5-ba24-3f203f4d4b10] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:36:43 compute-0 nova_compute[250018]: 2026-01-20 14:36:43.817 250022 DEBUG nova.virt.libvirt.driver [None req-f83e8dc1-beed-4b89-b0b5-6566b3133027 af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] [instance: 64830e28-fd5b-41c5-ba24-3f203f4d4b10] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:36:43 compute-0 nova_compute[250018]: 2026-01-20 14:36:43.817 250022 DEBUG nova.virt.libvirt.driver [None req-f83e8dc1-beed-4b89-b0b5-6566b3133027 af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] [instance: 64830e28-fd5b-41c5-ba24-3f203f4d4b10] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:36:43 compute-0 nova_compute[250018]: 2026-01-20 14:36:43.818 250022 DEBUG nova.virt.libvirt.driver [None req-f83e8dc1-beed-4b89-b0b5-6566b3133027 af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] [instance: 64830e28-fd5b-41c5-ba24-3f203f4d4b10] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:36:43 compute-0 nova_compute[250018]: 2026-01-20 14:36:43.818 250022 DEBUG nova.virt.libvirt.driver [None req-f83e8dc1-beed-4b89-b0b5-6566b3133027 af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] [instance: 64830e28-fd5b-41c5-ba24-3f203f4d4b10] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:36:43 compute-0 nova_compute[250018]: 2026-01-20 14:36:43.818 250022 DEBUG nova.virt.libvirt.driver [None req-f83e8dc1-beed-4b89-b0b5-6566b3133027 af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] [instance: 64830e28-fd5b-41c5-ba24-3f203f4d4b10] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:36:43 compute-0 nova_compute[250018]: 2026-01-20 14:36:43.823 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 64830e28-fd5b-41c5-ba24-3f203f4d4b10] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:36:43 compute-0 nova_compute[250018]: 2026-01-20 14:36:43.873 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 64830e28-fd5b-41c5-ba24-3f203f4d4b10] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:36:43 compute-0 nova_compute[250018]: 2026-01-20 14:36:43.963 250022 INFO nova.compute.manager [None req-f83e8dc1-beed-4b89-b0b5-6566b3133027 af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] [instance: 64830e28-fd5b-41c5-ba24-3f203f4d4b10] Took 13.39 seconds to spawn the instance on the hypervisor.
Jan 20 14:36:43 compute-0 nova_compute[250018]: 2026-01-20 14:36:43.964 250022 DEBUG nova.compute.manager [None req-f83e8dc1-beed-4b89-b0b5-6566b3133027 af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] [instance: 64830e28-fd5b-41c5-ba24-3f203f4d4b10] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:36:44 compute-0 ceph-mon[74360]: osdmap e220: 3 total, 3 up, 3 in
Jan 20 14:36:44 compute-0 nova_compute[250018]: 2026-01-20 14:36:44.067 250022 INFO nova.compute.manager [None req-f83e8dc1-beed-4b89-b0b5-6566b3133027 af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] [instance: 64830e28-fd5b-41c5-ba24-3f203f4d4b10] Took 14.43 seconds to build instance.
Jan 20 14:36:44 compute-0 nova_compute[250018]: 2026-01-20 14:36:44.085 250022 DEBUG oslo_concurrency.lockutils [None req-f83e8dc1-beed-4b89-b0b5-6566b3133027 af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] Lock "64830e28-fd5b-41c5-ba24-3f203f4d4b10" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.510s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:36:44 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:36:44 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:36:44 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:36:44.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:36:45 compute-0 ceph-mon[74360]: pgmap v1439: 321 pgs: 321 active+clean; 187 MiB data, 617 MiB used, 20 GiB / 21 GiB avail; 593 KiB/s rd, 1.5 MiB/s wr, 119 op/s
Jan 20 14:36:45 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/658670013' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:36:45 compute-0 nova_compute[250018]: 2026-01-20 14:36:45.151 250022 DEBUG oslo_concurrency.lockutils [None req-78c9e2f3-35ea-4fd3-a323-41f9ed7e56e0 af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] Acquiring lock "interface-64830e28-fd5b-41c5-ba24-3f203f4d4b10-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:36:45 compute-0 nova_compute[250018]: 2026-01-20 14:36:45.152 250022 DEBUG oslo_concurrency.lockutils [None req-78c9e2f3-35ea-4fd3-a323-41f9ed7e56e0 af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] Lock "interface-64830e28-fd5b-41c5-ba24-3f203f4d4b10-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:36:45 compute-0 nova_compute[250018]: 2026-01-20 14:36:45.153 250022 DEBUG nova.objects.instance [None req-78c9e2f3-35ea-4fd3-a323-41f9ed7e56e0 af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] Lazy-loading 'flavor' on Instance uuid 64830e28-fd5b-41c5-ba24-3f203f4d4b10 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:36:45 compute-0 nova_compute[250018]: 2026-01-20 14:36:45.181 250022 DEBUG nova.objects.instance [None req-78c9e2f3-35ea-4fd3-a323-41f9ed7e56e0 af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] Lazy-loading 'pci_requests' on Instance uuid 64830e28-fd5b-41c5-ba24-3f203f4d4b10 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:36:45 compute-0 nova_compute[250018]: 2026-01-20 14:36:45.202 250022 DEBUG nova.network.neutron [None req-78c9e2f3-35ea-4fd3-a323-41f9ed7e56e0 af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] [instance: 64830e28-fd5b-41c5-ba24-3f203f4d4b10] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 14:36:45 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:36:45 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:36:45 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:36:45.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:36:45 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1440: 321 pgs: 321 active+clean; 210 MiB data, 641 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 3.0 MiB/s wr, 182 op/s
Jan 20 14:36:45 compute-0 nova_compute[250018]: 2026-01-20 14:36:45.581 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:36:45 compute-0 nova_compute[250018]: 2026-01-20 14:36:45.873 250022 DEBUG nova.compute.manager [req-244beeb5-0115-43aa-8b71-76162e342033 req-10e21fdb-b821-4b07-8643-3f1f90092671 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 64830e28-fd5b-41c5-ba24-3f203f4d4b10] Received event network-vif-plugged-d9d285b2-4ab9-4091-aa07-a9972a7e1031 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:36:45 compute-0 nova_compute[250018]: 2026-01-20 14:36:45.873 250022 DEBUG oslo_concurrency.lockutils [req-244beeb5-0115-43aa-8b71-76162e342033 req-10e21fdb-b821-4b07-8643-3f1f90092671 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "64830e28-fd5b-41c5-ba24-3f203f4d4b10-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:36:45 compute-0 nova_compute[250018]: 2026-01-20 14:36:45.874 250022 DEBUG oslo_concurrency.lockutils [req-244beeb5-0115-43aa-8b71-76162e342033 req-10e21fdb-b821-4b07-8643-3f1f90092671 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "64830e28-fd5b-41c5-ba24-3f203f4d4b10-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:36:45 compute-0 nova_compute[250018]: 2026-01-20 14:36:45.874 250022 DEBUG oslo_concurrency.lockutils [req-244beeb5-0115-43aa-8b71-76162e342033 req-10e21fdb-b821-4b07-8643-3f1f90092671 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "64830e28-fd5b-41c5-ba24-3f203f4d4b10-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:36:45 compute-0 nova_compute[250018]: 2026-01-20 14:36:45.875 250022 DEBUG nova.compute.manager [req-244beeb5-0115-43aa-8b71-76162e342033 req-10e21fdb-b821-4b07-8643-3f1f90092671 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 64830e28-fd5b-41c5-ba24-3f203f4d4b10] No waiting events found dispatching network-vif-plugged-d9d285b2-4ab9-4091-aa07-a9972a7e1031 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:36:45 compute-0 nova_compute[250018]: 2026-01-20 14:36:45.875 250022 WARNING nova.compute.manager [req-244beeb5-0115-43aa-8b71-76162e342033 req-10e21fdb-b821-4b07-8643-3f1f90092671 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 64830e28-fd5b-41c5-ba24-3f203f4d4b10] Received unexpected event network-vif-plugged-d9d285b2-4ab9-4091-aa07-a9972a7e1031 for instance with vm_state active and task_state None.
Jan 20 14:36:46 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/1444865890' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:36:46 compute-0 ceph-mon[74360]: pgmap v1440: 321 pgs: 321 active+clean; 210 MiB data, 641 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 3.0 MiB/s wr, 182 op/s
Jan 20 14:36:46 compute-0 nova_compute[250018]: 2026-01-20 14:36:46.162 250022 DEBUG nova.policy [None req-78c9e2f3-35ea-4fd3-a323-41f9ed7e56e0 af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'af0dc50e860c4144ab2ecc679760d941', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3b81c28ccb1340c9bb2d254088a5793b', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 20 14:36:46 compute-0 nova_compute[250018]: 2026-01-20 14:36:46.189 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:36:46 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:36:46.188 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=19, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=18) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:36:46 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:36:46.190 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 14:36:46 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:36:46 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:36:46 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:36:46.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:36:46 compute-0 nova_compute[250018]: 2026-01-20 14:36:46.322 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:36:47 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:36:47 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:36:47 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:36:47.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:36:47 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1441: 321 pgs: 321 active+clean; 238 MiB data, 651 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 3.5 MiB/s wr, 166 op/s
Jan 20 14:36:47 compute-0 nova_compute[250018]: 2026-01-20 14:36:47.760 250022 DEBUG nova.network.neutron [None req-78c9e2f3-35ea-4fd3-a323-41f9ed7e56e0 af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] [instance: 64830e28-fd5b-41c5-ba24-3f203f4d4b10] Successfully created port: ea943387-322a-4fe7-975b-216aaed162a3 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 20 14:36:47 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e220 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:36:48 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:36:48 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:36:48 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:36:48.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:36:48 compute-0 ceph-mon[74360]: pgmap v1441: 321 pgs: 321 active+clean; 238 MiB data, 651 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 3.5 MiB/s wr, 166 op/s
Jan 20 14:36:49 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:36:49 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:36:49 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:36:49.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:36:49 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1442: 321 pgs: 321 active+clean; 248 MiB data, 658 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 4.2 MiB/s wr, 201 op/s
Jan 20 14:36:49 compute-0 nova_compute[250018]: 2026-01-20 14:36:49.914 250022 DEBUG nova.network.neutron [None req-78c9e2f3-35ea-4fd3-a323-41f9ed7e56e0 af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] [instance: 64830e28-fd5b-41c5-ba24-3f203f4d4b10] Successfully updated port: ea943387-322a-4fe7-975b-216aaed162a3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 20 14:36:49 compute-0 nova_compute[250018]: 2026-01-20 14:36:49.932 250022 DEBUG oslo_concurrency.lockutils [None req-78c9e2f3-35ea-4fd3-a323-41f9ed7e56e0 af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] Acquiring lock "refresh_cache-64830e28-fd5b-41c5-ba24-3f203f4d4b10" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:36:49 compute-0 nova_compute[250018]: 2026-01-20 14:36:49.933 250022 DEBUG oslo_concurrency.lockutils [None req-78c9e2f3-35ea-4fd3-a323-41f9ed7e56e0 af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] Acquired lock "refresh_cache-64830e28-fd5b-41c5-ba24-3f203f4d4b10" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:36:49 compute-0 nova_compute[250018]: 2026-01-20 14:36:49.934 250022 DEBUG nova.network.neutron [None req-78c9e2f3-35ea-4fd3-a323-41f9ed7e56e0 af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] [instance: 64830e28-fd5b-41c5-ba24-3f203f4d4b10] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:36:50 compute-0 nova_compute[250018]: 2026-01-20 14:36:50.117 250022 WARNING nova.network.neutron [None req-78c9e2f3-35ea-4fd3-a323-41f9ed7e56e0 af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] [instance: 64830e28-fd5b-41c5-ba24-3f203f4d4b10] f4d9ccb1-41fe-4463-aceb-85602ee3a20f already exists in list: networks containing: ['f4d9ccb1-41fe-4463-aceb-85602ee3a20f']. ignoring it
Jan 20 14:36:50 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:36:50.192 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=367c1a2c-b16a-4828-ab5a-626bb50023b4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '19'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:36:50 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:36:50 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:36:50 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:36:50.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:36:50 compute-0 nova_compute[250018]: 2026-01-20 14:36:50.585 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:36:50 compute-0 ceph-mon[74360]: pgmap v1442: 321 pgs: 321 active+clean; 248 MiB data, 658 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 4.2 MiB/s wr, 201 op/s
Jan 20 14:36:50 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/347259012' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:36:50 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/3157347975' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:36:51 compute-0 nova_compute[250018]: 2026-01-20 14:36:51.325 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:36:51 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:36:51 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:36:51 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:36:51.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:36:51 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1443: 321 pgs: 321 active+clean; 260 MiB data, 647 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 3.8 MiB/s wr, 190 op/s
Jan 20 14:36:51 compute-0 nova_compute[250018]: 2026-01-20 14:36:51.945 250022 DEBUG nova.compute.manager [req-69da4539-c908-431a-ac54-40017ef6915b req-b6083da9-1922-46e7-b2a0-936b1b6225fd 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 64830e28-fd5b-41c5-ba24-3f203f4d4b10] Received event network-changed-ea943387-322a-4fe7-975b-216aaed162a3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:36:51 compute-0 nova_compute[250018]: 2026-01-20 14:36:51.945 250022 DEBUG nova.compute.manager [req-69da4539-c908-431a-ac54-40017ef6915b req-b6083da9-1922-46e7-b2a0-936b1b6225fd 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 64830e28-fd5b-41c5-ba24-3f203f4d4b10] Refreshing instance network info cache due to event network-changed-ea943387-322a-4fe7-975b-216aaed162a3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:36:51 compute-0 nova_compute[250018]: 2026-01-20 14:36:51.946 250022 DEBUG oslo_concurrency.lockutils [req-69da4539-c908-431a-ac54-40017ef6915b req-b6083da9-1922-46e7-b2a0-936b1b6225fd 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-64830e28-fd5b-41c5-ba24-3f203f4d4b10" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:36:52 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:36:52 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:36:52 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:36:52.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:36:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:36:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:36:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:36:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:36:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:36:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:36:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Optimize plan auto_2026-01-20_14:36:52
Jan 20 14:36:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 14:36:52 compute-0 ceph-mgr[74653]: [balancer INFO root] do_upmap
Jan 20 14:36:52 compute-0 ceph-mgr[74653]: [balancer INFO root] pools ['images', 'cephfs.cephfs.meta', 'volumes', 'backups', 'default.rgw.log', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.meta', '.mgr', 'vms', 'default.rgw.control']
Jan 20 14:36:52 compute-0 ceph-mgr[74653]: [balancer INFO root] prepared 0/10 changes
Jan 20 14:36:52 compute-0 ceph-mon[74360]: pgmap v1443: 321 pgs: 321 active+clean; 260 MiB data, 647 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 3.8 MiB/s wr, 190 op/s
Jan 20 14:36:52 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/3250829967' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:36:52 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/3598022065' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:36:52 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e220 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:36:53 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:36:53 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:36:53 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:36:53.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:36:53 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1444: 321 pgs: 321 active+clean; 260 MiB data, 647 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 3.6 MiB/s wr, 178 op/s
Jan 20 14:36:54 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:36:54 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:36:54 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:36:54.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:36:54 compute-0 ceph-mon[74360]: pgmap v1444: 321 pgs: 321 active+clean; 260 MiB data, 647 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 3.6 MiB/s wr, 178 op/s
Jan 20 14:36:55 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:36:55 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:36:55 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:36:55.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:36:55 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1445: 321 pgs: 321 active+clean; 260 MiB data, 647 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 3.2 MiB/s wr, 165 op/s
Jan 20 14:36:55 compute-0 nova_compute[250018]: 2026-01-20 14:36:55.588 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:36:55 compute-0 ceph-mon[74360]: pgmap v1445: 321 pgs: 321 active+clean; 260 MiB data, 647 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 3.2 MiB/s wr, 165 op/s
Jan 20 14:36:56 compute-0 nova_compute[250018]: 2026-01-20 14:36:56.239 250022 DEBUG nova.network.neutron [None req-78c9e2f3-35ea-4fd3-a323-41f9ed7e56e0 af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] [instance: 64830e28-fd5b-41c5-ba24-3f203f4d4b10] Updating instance_info_cache with network_info: [{"id": "d9d285b2-4ab9-4091-aa07-a9972a7e1031", "address": "fa:16:3e:45:c6:51", "network": {"id": "f4d9ccb1-41fe-4463-aceb-85602ee3a20f", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1786213854-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b81c28ccb1340c9bb2d254088a5793b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9d285b2-4a", "ovs_interfaceid": "d9d285b2-4ab9-4091-aa07-a9972a7e1031", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "ea943387-322a-4fe7-975b-216aaed162a3", "address": "fa:16:3e:29:d5:67", "network": {"id": "f4d9ccb1-41fe-4463-aceb-85602ee3a20f", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1786213854-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b81c28ccb1340c9bb2d254088a5793b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea943387-32", "ovs_interfaceid": "ea943387-322a-4fe7-975b-216aaed162a3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:36:56 compute-0 nova_compute[250018]: 2026-01-20 14:36:56.301 250022 DEBUG oslo_concurrency.lockutils [None req-78c9e2f3-35ea-4fd3-a323-41f9ed7e56e0 af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] Releasing lock "refresh_cache-64830e28-fd5b-41c5-ba24-3f203f4d4b10" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:36:56 compute-0 nova_compute[250018]: 2026-01-20 14:36:56.302 250022 DEBUG oslo_concurrency.lockutils [req-69da4539-c908-431a-ac54-40017ef6915b req-b6083da9-1922-46e7-b2a0-936b1b6225fd 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-64830e28-fd5b-41c5-ba24-3f203f4d4b10" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:36:56 compute-0 nova_compute[250018]: 2026-01-20 14:36:56.302 250022 DEBUG nova.network.neutron [req-69da4539-c908-431a-ac54-40017ef6915b req-b6083da9-1922-46e7-b2a0-936b1b6225fd 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 64830e28-fd5b-41c5-ba24-3f203f4d4b10] Refreshing network info cache for port ea943387-322a-4fe7-975b-216aaed162a3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:36:56 compute-0 nova_compute[250018]: 2026-01-20 14:36:56.305 250022 DEBUG nova.virt.libvirt.vif [None req-78c9e2f3-35ea-4fd3-a323-41f9ed7e56e0 af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:36:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-2115140388',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-2115140388',id=54,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:36:43Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='3b81c28ccb1340c9bb2d254088a5793b',ramdisk_id='',reservation_id='r-ctd26x1q',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesV270Test-1034738423',owner_user_name='tempest-AttachInterfacesV270Test-1034738423-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T14:36:44Z,user_data=None,user_id='af0dc50e860c4144ab2ecc679760d941',uuid=64830e28-fd5b-41c5-ba24-3f203f4d4b10,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ea943387-322a-4fe7-975b-216aaed162a3", "address": "fa:16:3e:29:d5:67", "network": {"id": "f4d9ccb1-41fe-4463-aceb-85602ee3a20f", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1786213854-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b81c28ccb1340c9bb2d254088a5793b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea943387-32", "ovs_interfaceid": "ea943387-322a-4fe7-975b-216aaed162a3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 14:36:56 compute-0 nova_compute[250018]: 2026-01-20 14:36:56.305 250022 DEBUG nova.network.os_vif_util [None req-78c9e2f3-35ea-4fd3-a323-41f9ed7e56e0 af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] Converting VIF {"id": "ea943387-322a-4fe7-975b-216aaed162a3", "address": "fa:16:3e:29:d5:67", "network": {"id": "f4d9ccb1-41fe-4463-aceb-85602ee3a20f", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1786213854-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b81c28ccb1340c9bb2d254088a5793b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea943387-32", "ovs_interfaceid": "ea943387-322a-4fe7-975b-216aaed162a3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:36:56 compute-0 nova_compute[250018]: 2026-01-20 14:36:56.306 250022 DEBUG nova.network.os_vif_util [None req-78c9e2f3-35ea-4fd3-a323-41f9ed7e56e0 af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:29:d5:67,bridge_name='br-int',has_traffic_filtering=True,id=ea943387-322a-4fe7-975b-216aaed162a3,network=Network(f4d9ccb1-41fe-4463-aceb-85602ee3a20f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapea943387-32') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:36:56 compute-0 nova_compute[250018]: 2026-01-20 14:36:56.306 250022 DEBUG os_vif [None req-78c9e2f3-35ea-4fd3-a323-41f9ed7e56e0 af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:29:d5:67,bridge_name='br-int',has_traffic_filtering=True,id=ea943387-322a-4fe7-975b-216aaed162a3,network=Network(f4d9ccb1-41fe-4463-aceb-85602ee3a20f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapea943387-32') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 14:36:56 compute-0 nova_compute[250018]: 2026-01-20 14:36:56.306 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:36:56 compute-0 nova_compute[250018]: 2026-01-20 14:36:56.307 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:36:56 compute-0 nova_compute[250018]: 2026-01-20 14:36:56.307 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:36:56 compute-0 nova_compute[250018]: 2026-01-20 14:36:56.309 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:36:56 compute-0 nova_compute[250018]: 2026-01-20 14:36:56.309 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapea943387-32, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:36:56 compute-0 nova_compute[250018]: 2026-01-20 14:36:56.310 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapea943387-32, col_values=(('external_ids', {'iface-id': 'ea943387-322a-4fe7-975b-216aaed162a3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:29:d5:67', 'vm-uuid': '64830e28-fd5b-41c5-ba24-3f203f4d4b10'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:36:56 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:36:56 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:36:56 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:36:56.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:36:56 compute-0 NetworkManager[48960]: <info>  [1768919816.3453] manager: (tapea943387-32): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/86)
Jan 20 14:36:56 compute-0 nova_compute[250018]: 2026-01-20 14:36:56.346 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:36:56 compute-0 nova_compute[250018]: 2026-01-20 14:36:56.350 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:36:56 compute-0 nova_compute[250018]: 2026-01-20 14:36:56.351 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:36:56 compute-0 nova_compute[250018]: 2026-01-20 14:36:56.352 250022 INFO os_vif [None req-78c9e2f3-35ea-4fd3-a323-41f9ed7e56e0 af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:29:d5:67,bridge_name='br-int',has_traffic_filtering=True,id=ea943387-322a-4fe7-975b-216aaed162a3,network=Network(f4d9ccb1-41fe-4463-aceb-85602ee3a20f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapea943387-32')
Jan 20 14:36:56 compute-0 nova_compute[250018]: 2026-01-20 14:36:56.353 250022 DEBUG nova.virt.libvirt.vif [None req-78c9e2f3-35ea-4fd3-a323-41f9ed7e56e0 af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:36:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-2115140388',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-2115140388',id=54,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:36:43Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='3b81c28ccb1340c9bb2d254088a5793b',ramdisk_id='',reservation_id='r-ctd26x1q',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesV270Test-1034738423',owner_user_name='tempest-AttachInterfacesV270Test-1034738423-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T14:36:44Z,user_data=None,user_id='af0dc50e860c4144ab2ecc679760d941',uuid=64830e28-fd5b-41c5-ba24-3f203f4d4b10,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ea943387-322a-4fe7-975b-216aaed162a3", "address": "fa:16:3e:29:d5:67", "network": {"id": "f4d9ccb1-41fe-4463-aceb-85602ee3a20f", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1786213854-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b81c28ccb1340c9bb2d254088a5793b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea943387-32", "ovs_interfaceid": "ea943387-322a-4fe7-975b-216aaed162a3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 14:36:56 compute-0 nova_compute[250018]: 2026-01-20 14:36:56.353 250022 DEBUG nova.network.os_vif_util [None req-78c9e2f3-35ea-4fd3-a323-41f9ed7e56e0 af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] Converting VIF {"id": "ea943387-322a-4fe7-975b-216aaed162a3", "address": "fa:16:3e:29:d5:67", "network": {"id": "f4d9ccb1-41fe-4463-aceb-85602ee3a20f", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1786213854-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b81c28ccb1340c9bb2d254088a5793b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea943387-32", "ovs_interfaceid": "ea943387-322a-4fe7-975b-216aaed162a3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:36:56 compute-0 nova_compute[250018]: 2026-01-20 14:36:56.354 250022 DEBUG nova.network.os_vif_util [None req-78c9e2f3-35ea-4fd3-a323-41f9ed7e56e0 af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:29:d5:67,bridge_name='br-int',has_traffic_filtering=True,id=ea943387-322a-4fe7-975b-216aaed162a3,network=Network(f4d9ccb1-41fe-4463-aceb-85602ee3a20f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapea943387-32') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:36:56 compute-0 nova_compute[250018]: 2026-01-20 14:36:56.355 250022 DEBUG nova.virt.libvirt.guest [None req-78c9e2f3-35ea-4fd3-a323-41f9ed7e56e0 af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] attach device xml: <interface type="ethernet">
Jan 20 14:36:56 compute-0 nova_compute[250018]:   <mac address="fa:16:3e:29:d5:67"/>
Jan 20 14:36:56 compute-0 nova_compute[250018]:   <model type="virtio"/>
Jan 20 14:36:56 compute-0 nova_compute[250018]:   <driver name="vhost" rx_queue_size="512"/>
Jan 20 14:36:56 compute-0 nova_compute[250018]:   <mtu size="1442"/>
Jan 20 14:36:56 compute-0 nova_compute[250018]:   <target dev="tapea943387-32"/>
Jan 20 14:36:56 compute-0 nova_compute[250018]: </interface>
Jan 20 14:36:56 compute-0 nova_compute[250018]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Jan 20 14:36:56 compute-0 kernel: tapea943387-32: entered promiscuous mode
Jan 20 14:36:56 compute-0 ovn_controller[148666]: 2026-01-20T14:36:56Z|00154|binding|INFO|Claiming lport ea943387-322a-4fe7-975b-216aaed162a3 for this chassis.
Jan 20 14:36:56 compute-0 ovn_controller[148666]: 2026-01-20T14:36:56Z|00155|binding|INFO|ea943387-322a-4fe7-975b-216aaed162a3: Claiming fa:16:3e:29:d5:67 10.100.0.11
Jan 20 14:36:56 compute-0 NetworkManager[48960]: <info>  [1768919816.3691] manager: (tapea943387-32): new Tun device (/org/freedesktop/NetworkManager/Devices/87)
Jan 20 14:36:56 compute-0 nova_compute[250018]: 2026-01-20 14:36:56.370 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:36:56 compute-0 ovn_controller[148666]: 2026-01-20T14:36:56Z|00156|binding|INFO|Setting lport ea943387-322a-4fe7-975b-216aaed162a3 ovn-installed in OVS
Jan 20 14:36:56 compute-0 nova_compute[250018]: 2026-01-20 14:36:56.392 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:36:56 compute-0 systemd-udevd[284701]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:36:56 compute-0 ovn_controller[148666]: 2026-01-20T14:36:56Z|00157|binding|INFO|Setting lport ea943387-322a-4fe7-975b-216aaed162a3 up in Southbound
Jan 20 14:36:56 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:36:56.415 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:29:d5:67 10.100.0.11'], port_security=['fa:16:3e:29:d5:67 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '64830e28-fd5b-41c5-ba24-3f203f4d4b10', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f4d9ccb1-41fe-4463-aceb-85602ee3a20f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3b81c28ccb1340c9bb2d254088a5793b', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'bfd63a86-3b8f-4e92-886c-1c808edda567', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=180c0e39-8d10-4e06-9164-434ccb2cb4a5, chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=ea943387-322a-4fe7-975b-216aaed162a3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:36:56 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:36:56.416 160071 INFO neutron.agent.ovn.metadata.agent [-] Port ea943387-322a-4fe7-975b-216aaed162a3 in datapath f4d9ccb1-41fe-4463-aceb-85602ee3a20f bound to our chassis
Jan 20 14:36:56 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:36:56.418 160071 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f4d9ccb1-41fe-4463-aceb-85602ee3a20f
Jan 20 14:36:56 compute-0 NetworkManager[48960]: <info>  [1768919816.4349] device (tapea943387-32): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 14:36:56 compute-0 NetworkManager[48960]: <info>  [1768919816.4358] device (tapea943387-32): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 14:36:56 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:36:56.439 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[7c66117a-216f-4afb-9244-f362352257d5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:36:56 compute-0 nova_compute[250018]: 2026-01-20 14:36:56.465 250022 DEBUG nova.virt.libvirt.driver [None req-78c9e2f3-35ea-4fd3-a323-41f9ed7e56e0 af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:36:56 compute-0 nova_compute[250018]: 2026-01-20 14:36:56.466 250022 DEBUG nova.virt.libvirt.driver [None req-78c9e2f3-35ea-4fd3-a323-41f9ed7e56e0 af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:36:56 compute-0 nova_compute[250018]: 2026-01-20 14:36:56.466 250022 DEBUG nova.virt.libvirt.driver [None req-78c9e2f3-35ea-4fd3-a323-41f9ed7e56e0 af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] No VIF found with MAC fa:16:3e:45:c6:51, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 14:36:56 compute-0 nova_compute[250018]: 2026-01-20 14:36:56.467 250022 DEBUG nova.virt.libvirt.driver [None req-78c9e2f3-35ea-4fd3-a323-41f9ed7e56e0 af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] No VIF found with MAC fa:16:3e:29:d5:67, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 14:36:56 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:36:56.471 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[0f103491-2b17-46ba-b90b-4477d75ad14e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:36:56 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:36:56.474 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[9e20dd6f-8168-409b-aba7-531e68572442]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:36:56 compute-0 nova_compute[250018]: 2026-01-20 14:36:56.505 250022 DEBUG nova.virt.libvirt.guest [None req-78c9e2f3-35ea-4fd3-a323-41f9ed7e56e0 af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 14:36:56 compute-0 nova_compute[250018]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 14:36:56 compute-0 nova_compute[250018]:   <nova:name>tempest-AttachInterfacesV270Test-server-2115140388</nova:name>
Jan 20 14:36:56 compute-0 nova_compute[250018]:   <nova:creationTime>2026-01-20 14:36:56</nova:creationTime>
Jan 20 14:36:56 compute-0 nova_compute[250018]:   <nova:flavor name="m1.nano">
Jan 20 14:36:56 compute-0 nova_compute[250018]:     <nova:memory>128</nova:memory>
Jan 20 14:36:56 compute-0 nova_compute[250018]:     <nova:disk>1</nova:disk>
Jan 20 14:36:56 compute-0 nova_compute[250018]:     <nova:swap>0</nova:swap>
Jan 20 14:36:56 compute-0 nova_compute[250018]:     <nova:ephemeral>0</nova:ephemeral>
Jan 20 14:36:56 compute-0 nova_compute[250018]:     <nova:vcpus>1</nova:vcpus>
Jan 20 14:36:56 compute-0 nova_compute[250018]:   </nova:flavor>
Jan 20 14:36:56 compute-0 nova_compute[250018]:   <nova:owner>
Jan 20 14:36:56 compute-0 nova_compute[250018]:     <nova:user uuid="af0dc50e860c4144ab2ecc679760d941">tempest-AttachInterfacesV270Test-1034738423-project-member</nova:user>
Jan 20 14:36:56 compute-0 nova_compute[250018]:     <nova:project uuid="3b81c28ccb1340c9bb2d254088a5793b">tempest-AttachInterfacesV270Test-1034738423</nova:project>
Jan 20 14:36:56 compute-0 nova_compute[250018]:   </nova:owner>
Jan 20 14:36:56 compute-0 nova_compute[250018]:   <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 14:36:56 compute-0 nova_compute[250018]:   <nova:ports>
Jan 20 14:36:56 compute-0 nova_compute[250018]:     <nova:port uuid="d9d285b2-4ab9-4091-aa07-a9972a7e1031">
Jan 20 14:36:56 compute-0 nova_compute[250018]:       <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Jan 20 14:36:56 compute-0 nova_compute[250018]:     </nova:port>
Jan 20 14:36:56 compute-0 nova_compute[250018]:     <nova:port uuid="ea943387-322a-4fe7-975b-216aaed162a3">
Jan 20 14:36:56 compute-0 nova_compute[250018]:       <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Jan 20 14:36:56 compute-0 nova_compute[250018]:     </nova:port>
Jan 20 14:36:56 compute-0 nova_compute[250018]:   </nova:ports>
Jan 20 14:36:56 compute-0 nova_compute[250018]: </nova:instance>
Jan 20 14:36:56 compute-0 nova_compute[250018]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Jan 20 14:36:56 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:36:56.508 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[22ce15ba-0104-492e-a0a3-a17ed982f5c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:36:56 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:36:56.537 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[45c12adf-7d0e-442a-b990-f455bdbffeca]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf4d9ccb1-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:10:fe:2a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 49], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 573783, 'reachable_time': 38336, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 284709, 'error': None, 'target': 'ovnmeta-f4d9ccb1-41fe-4463-aceb-85602ee3a20f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:36:56 compute-0 nova_compute[250018]: 2026-01-20 14:36:56.547 250022 DEBUG oslo_concurrency.lockutils [None req-78c9e2f3-35ea-4fd3-a323-41f9ed7e56e0 af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] Lock "interface-64830e28-fd5b-41c5-ba24-3f203f4d4b10-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 11.395s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:36:56 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:36:56.559 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[93d8f472-5027-40e7-9e5b-d7f39022f507]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapf4d9ccb1-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 573794, 'tstamp': 573794}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 284710, 'error': None, 'target': 'ovnmeta-f4d9ccb1-41fe-4463-aceb-85602ee3a20f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapf4d9ccb1-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 573798, 'tstamp': 573798}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 284710, 'error': None, 'target': 'ovnmeta-f4d9ccb1-41fe-4463-aceb-85602ee3a20f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:36:56 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:36:56.561 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf4d9ccb1-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:36:56 compute-0 nova_compute[250018]: 2026-01-20 14:36:56.562 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:36:56 compute-0 nova_compute[250018]: 2026-01-20 14:36:56.564 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:36:56 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:36:56.566 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf4d9ccb1-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:36:56 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:36:56.566 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:36:56 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:36:56.566 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf4d9ccb1-40, col_values=(('external_ids', {'iface-id': '0c5d26c5-0fe3-416e-8ce4-aadab785c96d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:36:56 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:36:56.566 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:36:56 compute-0 ovn_controller[148666]: 2026-01-20T14:36:56Z|00018|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:45:c6:51 10.100.0.6
Jan 20 14:36:56 compute-0 ovn_controller[148666]: 2026-01-20T14:36:56Z|00019|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:45:c6:51 10.100.0.6
Jan 20 14:36:56 compute-0 nova_compute[250018]: 2026-01-20 14:36:56.961 250022 DEBUG nova.compute.manager [req-b199df54-a00c-4e12-aa45-5fff55c4bd98 req-4eb163db-bf76-405d-9116-7e4312185883 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 64830e28-fd5b-41c5-ba24-3f203f4d4b10] Received event network-vif-plugged-ea943387-322a-4fe7-975b-216aaed162a3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:36:56 compute-0 nova_compute[250018]: 2026-01-20 14:36:56.961 250022 DEBUG oslo_concurrency.lockutils [req-b199df54-a00c-4e12-aa45-5fff55c4bd98 req-4eb163db-bf76-405d-9116-7e4312185883 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "64830e28-fd5b-41c5-ba24-3f203f4d4b10-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:36:56 compute-0 nova_compute[250018]: 2026-01-20 14:36:56.961 250022 DEBUG oslo_concurrency.lockutils [req-b199df54-a00c-4e12-aa45-5fff55c4bd98 req-4eb163db-bf76-405d-9116-7e4312185883 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "64830e28-fd5b-41c5-ba24-3f203f4d4b10-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:36:56 compute-0 nova_compute[250018]: 2026-01-20 14:36:56.962 250022 DEBUG oslo_concurrency.lockutils [req-b199df54-a00c-4e12-aa45-5fff55c4bd98 req-4eb163db-bf76-405d-9116-7e4312185883 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "64830e28-fd5b-41c5-ba24-3f203f4d4b10-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:36:56 compute-0 nova_compute[250018]: 2026-01-20 14:36:56.962 250022 DEBUG nova.compute.manager [req-b199df54-a00c-4e12-aa45-5fff55c4bd98 req-4eb163db-bf76-405d-9116-7e4312185883 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 64830e28-fd5b-41c5-ba24-3f203f4d4b10] No waiting events found dispatching network-vif-plugged-ea943387-322a-4fe7-975b-216aaed162a3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:36:56 compute-0 nova_compute[250018]: 2026-01-20 14:36:56.962 250022 WARNING nova.compute.manager [req-b199df54-a00c-4e12-aa45-5fff55c4bd98 req-4eb163db-bf76-405d-9116-7e4312185883 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 64830e28-fd5b-41c5-ba24-3f203f4d4b10] Received unexpected event network-vif-plugged-ea943387-322a-4fe7-975b-216aaed162a3 for instance with vm_state active and task_state None.
Jan 20 14:36:57 compute-0 ovn_controller[148666]: 2026-01-20T14:36:57Z|00020|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:29:d5:67 10.100.0.11
Jan 20 14:36:57 compute-0 ovn_controller[148666]: 2026-01-20T14:36:57Z|00021|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:29:d5:67 10.100.0.11
Jan 20 14:36:57 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:36:57 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:36:57 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:36:57.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:36:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 14:36:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:36:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 14:36:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:36:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:36:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:36:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:36:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:36:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:36:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:36:57 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1446: 321 pgs: 321 active+clean; 262 MiB data, 649 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 2.0 MiB/s wr, 131 op/s
Jan 20 14:36:57 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e220 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:36:58 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:36:58 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:36:58 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:36:58.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:36:58 compute-0 ceph-mon[74360]: pgmap v1446: 321 pgs: 321 active+clean; 262 MiB data, 649 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 2.0 MiB/s wr, 131 op/s
Jan 20 14:36:59 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:36:59 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:36:59 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:36:59.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:36:59 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1447: 321 pgs: 321 active+clean; 274 MiB data, 661 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 2.2 MiB/s wr, 166 op/s
Jan 20 14:36:59 compute-0 nova_compute[250018]: 2026-01-20 14:36:59.861 250022 DEBUG nova.compute.manager [req-8c9c14c0-e869-47b6-b384-cd1e43ef6624 req-09b4ef49-77b8-48ee-b5c2-9c5d15c5adfa 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 64830e28-fd5b-41c5-ba24-3f203f4d4b10] Received event network-vif-plugged-ea943387-322a-4fe7-975b-216aaed162a3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:36:59 compute-0 nova_compute[250018]: 2026-01-20 14:36:59.861 250022 DEBUG oslo_concurrency.lockutils [req-8c9c14c0-e869-47b6-b384-cd1e43ef6624 req-09b4ef49-77b8-48ee-b5c2-9c5d15c5adfa 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "64830e28-fd5b-41c5-ba24-3f203f4d4b10-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:36:59 compute-0 nova_compute[250018]: 2026-01-20 14:36:59.862 250022 DEBUG oslo_concurrency.lockutils [req-8c9c14c0-e869-47b6-b384-cd1e43ef6624 req-09b4ef49-77b8-48ee-b5c2-9c5d15c5adfa 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "64830e28-fd5b-41c5-ba24-3f203f4d4b10-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:36:59 compute-0 nova_compute[250018]: 2026-01-20 14:36:59.862 250022 DEBUG oslo_concurrency.lockutils [req-8c9c14c0-e869-47b6-b384-cd1e43ef6624 req-09b4ef49-77b8-48ee-b5c2-9c5d15c5adfa 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "64830e28-fd5b-41c5-ba24-3f203f4d4b10-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:36:59 compute-0 nova_compute[250018]: 2026-01-20 14:36:59.862 250022 DEBUG nova.compute.manager [req-8c9c14c0-e869-47b6-b384-cd1e43ef6624 req-09b4ef49-77b8-48ee-b5c2-9c5d15c5adfa 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 64830e28-fd5b-41c5-ba24-3f203f4d4b10] No waiting events found dispatching network-vif-plugged-ea943387-322a-4fe7-975b-216aaed162a3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:36:59 compute-0 nova_compute[250018]: 2026-01-20 14:36:59.862 250022 WARNING nova.compute.manager [req-8c9c14c0-e869-47b6-b384-cd1e43ef6624 req-09b4ef49-77b8-48ee-b5c2-9c5d15c5adfa 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 64830e28-fd5b-41c5-ba24-3f203f4d4b10] Received unexpected event network-vif-plugged-ea943387-322a-4fe7-975b-216aaed162a3 for instance with vm_state active and task_state None.
Jan 20 14:36:59 compute-0 ceph-mon[74360]: pgmap v1447: 321 pgs: 321 active+clean; 274 MiB data, 661 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 2.2 MiB/s wr, 166 op/s
Jan 20 14:36:59 compute-0 nova_compute[250018]: 2026-01-20 14:36:59.935 250022 DEBUG nova.network.neutron [req-69da4539-c908-431a-ac54-40017ef6915b req-b6083da9-1922-46e7-b2a0-936b1b6225fd 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 64830e28-fd5b-41c5-ba24-3f203f4d4b10] Updated VIF entry in instance network info cache for port ea943387-322a-4fe7-975b-216aaed162a3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 14:36:59 compute-0 nova_compute[250018]: 2026-01-20 14:36:59.935 250022 DEBUG nova.network.neutron [req-69da4539-c908-431a-ac54-40017ef6915b req-b6083da9-1922-46e7-b2a0-936b1b6225fd 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 64830e28-fd5b-41c5-ba24-3f203f4d4b10] Updating instance_info_cache with network_info: [{"id": "d9d285b2-4ab9-4091-aa07-a9972a7e1031", "address": "fa:16:3e:45:c6:51", "network": {"id": "f4d9ccb1-41fe-4463-aceb-85602ee3a20f", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1786213854-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b81c28ccb1340c9bb2d254088a5793b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9d285b2-4a", "ovs_interfaceid": "d9d285b2-4ab9-4091-aa07-a9972a7e1031", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "ea943387-322a-4fe7-975b-216aaed162a3", "address": "fa:16:3e:29:d5:67", "network": {"id": "f4d9ccb1-41fe-4463-aceb-85602ee3a20f", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1786213854-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"3b81c28ccb1340c9bb2d254088a5793b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea943387-32", "ovs_interfaceid": "ea943387-322a-4fe7-975b-216aaed162a3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:36:59 compute-0 nova_compute[250018]: 2026-01-20 14:36:59.962 250022 DEBUG oslo_concurrency.lockutils [req-69da4539-c908-431a-ac54-40017ef6915b req-b6083da9-1922-46e7-b2a0-936b1b6225fd 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-64830e28-fd5b-41c5-ba24-3f203f4d4b10" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:37:00 compute-0 nova_compute[250018]: 2026-01-20 14:37:00.085 250022 DEBUG oslo_concurrency.lockutils [None req-c9d31fca-fc38-44a4-8f09-e8a21cb4248d af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] Acquiring lock "64830e28-fd5b-41c5-ba24-3f203f4d4b10" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:37:00 compute-0 nova_compute[250018]: 2026-01-20 14:37:00.086 250022 DEBUG oslo_concurrency.lockutils [None req-c9d31fca-fc38-44a4-8f09-e8a21cb4248d af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] Lock "64830e28-fd5b-41c5-ba24-3f203f4d4b10" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:37:00 compute-0 nova_compute[250018]: 2026-01-20 14:37:00.086 250022 DEBUG oslo_concurrency.lockutils [None req-c9d31fca-fc38-44a4-8f09-e8a21cb4248d af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] Acquiring lock "64830e28-fd5b-41c5-ba24-3f203f4d4b10-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:37:00 compute-0 nova_compute[250018]: 2026-01-20 14:37:00.086 250022 DEBUG oslo_concurrency.lockutils [None req-c9d31fca-fc38-44a4-8f09-e8a21cb4248d af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] Lock "64830e28-fd5b-41c5-ba24-3f203f4d4b10-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:37:00 compute-0 nova_compute[250018]: 2026-01-20 14:37:00.086 250022 DEBUG oslo_concurrency.lockutils [None req-c9d31fca-fc38-44a4-8f09-e8a21cb4248d af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] Lock "64830e28-fd5b-41c5-ba24-3f203f4d4b10-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:37:00 compute-0 nova_compute[250018]: 2026-01-20 14:37:00.088 250022 INFO nova.compute.manager [None req-c9d31fca-fc38-44a4-8f09-e8a21cb4248d af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] [instance: 64830e28-fd5b-41c5-ba24-3f203f4d4b10] Terminating instance
Jan 20 14:37:00 compute-0 nova_compute[250018]: 2026-01-20 14:37:00.088 250022 DEBUG nova.compute.manager [None req-c9d31fca-fc38-44a4-8f09-e8a21cb4248d af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] [instance: 64830e28-fd5b-41c5-ba24-3f203f4d4b10] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 14:37:00 compute-0 kernel: tapd9d285b2-4a (unregistering): left promiscuous mode
Jan 20 14:37:00 compute-0 NetworkManager[48960]: <info>  [1768919820.1304] device (tapd9d285b2-4a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 14:37:00 compute-0 ovn_controller[148666]: 2026-01-20T14:37:00Z|00158|binding|INFO|Releasing lport d9d285b2-4ab9-4091-aa07-a9972a7e1031 from this chassis (sb_readonly=0)
Jan 20 14:37:00 compute-0 ovn_controller[148666]: 2026-01-20T14:37:00Z|00159|binding|INFO|Setting lport d9d285b2-4ab9-4091-aa07-a9972a7e1031 down in Southbound
Jan 20 14:37:00 compute-0 ovn_controller[148666]: 2026-01-20T14:37:00Z|00160|binding|INFO|Removing iface tapd9d285b2-4a ovn-installed in OVS
Jan 20 14:37:00 compute-0 nova_compute[250018]: 2026-01-20 14:37:00.207 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:37:00 compute-0 nova_compute[250018]: 2026-01-20 14:37:00.209 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:37:00 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:37:00.218 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:45:c6:51 10.100.0.6'], port_security=['fa:16:3e:45:c6:51 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '64830e28-fd5b-41c5-ba24-3f203f4d4b10', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f4d9ccb1-41fe-4463-aceb-85602ee3a20f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3b81c28ccb1340c9bb2d254088a5793b', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'bfd63a86-3b8f-4e92-886c-1c808edda567', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=180c0e39-8d10-4e06-9164-434ccb2cb4a5, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=d9d285b2-4ab9-4091-aa07-a9972a7e1031) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:37:00 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:37:00.220 160071 INFO neutron.agent.ovn.metadata.agent [-] Port d9d285b2-4ab9-4091-aa07-a9972a7e1031 in datapath f4d9ccb1-41fe-4463-aceb-85602ee3a20f unbound from our chassis
Jan 20 14:37:00 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:37:00.221 160071 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f4d9ccb1-41fe-4463-aceb-85602ee3a20f
Jan 20 14:37:00 compute-0 kernel: tapea943387-32 (unregistering): left promiscuous mode
Jan 20 14:37:00 compute-0 nova_compute[250018]: 2026-01-20 14:37:00.227 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:37:00 compute-0 NetworkManager[48960]: <info>  [1768919820.2305] device (tapea943387-32): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 14:37:00 compute-0 nova_compute[250018]: 2026-01-20 14:37:00.230 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:37:00 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:37:00.235 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[a9a63c65-f7c9-4765-844f-0c263be0ec2c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:37:00 compute-0 ovn_controller[148666]: 2026-01-20T14:37:00Z|00161|binding|INFO|Releasing lport ea943387-322a-4fe7-975b-216aaed162a3 from this chassis (sb_readonly=0)
Jan 20 14:37:00 compute-0 ovn_controller[148666]: 2026-01-20T14:37:00Z|00162|binding|INFO|Setting lport ea943387-322a-4fe7-975b-216aaed162a3 down in Southbound
Jan 20 14:37:00 compute-0 ovn_controller[148666]: 2026-01-20T14:37:00Z|00163|binding|INFO|Removing iface tapea943387-32 ovn-installed in OVS
Jan 20 14:37:00 compute-0 nova_compute[250018]: 2026-01-20 14:37:00.240 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:37:00 compute-0 nova_compute[250018]: 2026-01-20 14:37:00.241 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:37:00 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:37:00.247 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:29:d5:67 10.100.0.11'], port_security=['fa:16:3e:29:d5:67 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '64830e28-fd5b-41c5-ba24-3f203f4d4b10', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f4d9ccb1-41fe-4463-aceb-85602ee3a20f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3b81c28ccb1340c9bb2d254088a5793b', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'bfd63a86-3b8f-4e92-886c-1c808edda567', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=180c0e39-8d10-4e06-9164-434ccb2cb4a5, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=ea943387-322a-4fe7-975b-216aaed162a3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:37:00 compute-0 nova_compute[250018]: 2026-01-20 14:37:00.256 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:37:00 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:37:00.270 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[fcc7554e-8ecf-4ba5-bab4-961c2d6b0394]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:37:00 compute-0 systemd[1]: machine-qemu\x2d24\x2dinstance\x2d00000036.scope: Deactivated successfully.
Jan 20 14:37:00 compute-0 systemd[1]: machine-qemu\x2d24\x2dinstance\x2d00000036.scope: Consumed 14.830s CPU time.
Jan 20 14:37:00 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:37:00.272 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[8f558d50-214c-4d9f-b39d-4a02e495d855]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:37:00 compute-0 systemd-machined[216401]: Machine qemu-24-instance-00000036 terminated.
Jan 20 14:37:00 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:37:00.301 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[e3bd0769-fb1e-4405-8316-c2c1eb97f8f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:37:00 compute-0 NetworkManager[48960]: <info>  [1768919820.3190] manager: (tapea943387-32): new Tun device (/org/freedesktop/NetworkManager/Devices/88)
Jan 20 14:37:00 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:37:00.320 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[b7d922b6-1c87-4605-a0ac-691931d02bf0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf4d9ccb1-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:10:fe:2a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 49], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 573783, 'reachable_time': 38336, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 284735, 'error': None, 'target': 'ovnmeta-f4d9ccb1-41fe-4463-aceb-85602ee3a20f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:37:00 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:37:00 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:37:00 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:37:00.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:37:00 compute-0 nova_compute[250018]: 2026-01-20 14:37:00.336 250022 INFO nova.virt.libvirt.driver [-] [instance: 64830e28-fd5b-41c5-ba24-3f203f4d4b10] Instance destroyed successfully.
Jan 20 14:37:00 compute-0 nova_compute[250018]: 2026-01-20 14:37:00.337 250022 DEBUG nova.objects.instance [None req-c9d31fca-fc38-44a4-8f09-e8a21cb4248d af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] Lazy-loading 'resources' on Instance uuid 64830e28-fd5b-41c5-ba24-3f203f4d4b10 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:37:00 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:37:00.339 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[7e21daa9-f193-4625-8a32-a5bd5c2ddf8f]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapf4d9ccb1-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 573794, 'tstamp': 573794}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 284748, 'error': None, 'target': 'ovnmeta-f4d9ccb1-41fe-4463-aceb-85602ee3a20f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapf4d9ccb1-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 573798, 'tstamp': 573798}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 284748, 'error': None, 'target': 'ovnmeta-f4d9ccb1-41fe-4463-aceb-85602ee3a20f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:37:00 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:37:00.340 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf4d9ccb1-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:37:00 compute-0 nova_compute[250018]: 2026-01-20 14:37:00.342 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:37:00 compute-0 nova_compute[250018]: 2026-01-20 14:37:00.349 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:37:00 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:37:00.350 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf4d9ccb1-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:37:00 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:37:00.350 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:37:00 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:37:00.351 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf4d9ccb1-40, col_values=(('external_ids', {'iface-id': '0c5d26c5-0fe3-416e-8ce4-aadab785c96d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:37:00 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:37:00.351 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:37:00 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:37:00.352 160071 INFO neutron.agent.ovn.metadata.agent [-] Port ea943387-322a-4fe7-975b-216aaed162a3 in datapath f4d9ccb1-41fe-4463-aceb-85602ee3a20f unbound from our chassis
Jan 20 14:37:00 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:37:00.353 160071 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f4d9ccb1-41fe-4463-aceb-85602ee3a20f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 14:37:00 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:37:00.354 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[abcf6bfa-f18f-4498-932d-72208c9d1c6c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:37:00 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:37:00.354 160071 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f4d9ccb1-41fe-4463-aceb-85602ee3a20f namespace which is not needed anymore
Jan 20 14:37:00 compute-0 nova_compute[250018]: 2026-01-20 14:37:00.355 250022 DEBUG nova.virt.libvirt.vif [None req-c9d31fca-fc38-44a4-8f09-e8a21cb4248d af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:36:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-2115140388',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-2115140388',id=54,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:36:43Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3b81c28ccb1340c9bb2d254088a5793b',ramdisk_id='',reservation_id='r-ctd26x1q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_proje
ct_name='tempest-AttachInterfacesV270Test-1034738423',owner_user_name='tempest-AttachInterfacesV270Test-1034738423-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T14:36:44Z,user_data=None,user_id='af0dc50e860c4144ab2ecc679760d941',uuid=64830e28-fd5b-41c5-ba24-3f203f4d4b10,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d9d285b2-4ab9-4091-aa07-a9972a7e1031", "address": "fa:16:3e:45:c6:51", "network": {"id": "f4d9ccb1-41fe-4463-aceb-85602ee3a20f", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1786213854-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b81c28ccb1340c9bb2d254088a5793b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9d285b2-4a", "ovs_interfaceid": "d9d285b2-4ab9-4091-aa07-a9972a7e1031", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 14:37:00 compute-0 nova_compute[250018]: 2026-01-20 14:37:00.355 250022 DEBUG nova.network.os_vif_util [None req-c9d31fca-fc38-44a4-8f09-e8a21cb4248d af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] Converting VIF {"id": "d9d285b2-4ab9-4091-aa07-a9972a7e1031", "address": "fa:16:3e:45:c6:51", "network": {"id": "f4d9ccb1-41fe-4463-aceb-85602ee3a20f", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1786213854-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b81c28ccb1340c9bb2d254088a5793b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9d285b2-4a", "ovs_interfaceid": "d9d285b2-4ab9-4091-aa07-a9972a7e1031", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:37:00 compute-0 nova_compute[250018]: 2026-01-20 14:37:00.356 250022 DEBUG nova.network.os_vif_util [None req-c9d31fca-fc38-44a4-8f09-e8a21cb4248d af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:45:c6:51,bridge_name='br-int',has_traffic_filtering=True,id=d9d285b2-4ab9-4091-aa07-a9972a7e1031,network=Network(f4d9ccb1-41fe-4463-aceb-85602ee3a20f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd9d285b2-4a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:37:00 compute-0 nova_compute[250018]: 2026-01-20 14:37:00.356 250022 DEBUG os_vif [None req-c9d31fca-fc38-44a4-8f09-e8a21cb4248d af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:45:c6:51,bridge_name='br-int',has_traffic_filtering=True,id=d9d285b2-4ab9-4091-aa07-a9972a7e1031,network=Network(f4d9ccb1-41fe-4463-aceb-85602ee3a20f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd9d285b2-4a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 14:37:00 compute-0 nova_compute[250018]: 2026-01-20 14:37:00.358 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:37:00 compute-0 nova_compute[250018]: 2026-01-20 14:37:00.358 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd9d285b2-4a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:37:00 compute-0 nova_compute[250018]: 2026-01-20 14:37:00.359 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:37:00 compute-0 nova_compute[250018]: 2026-01-20 14:37:00.361 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:37:00 compute-0 nova_compute[250018]: 2026-01-20 14:37:00.363 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:37:00 compute-0 nova_compute[250018]: 2026-01-20 14:37:00.365 250022 INFO os_vif [None req-c9d31fca-fc38-44a4-8f09-e8a21cb4248d af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:45:c6:51,bridge_name='br-int',has_traffic_filtering=True,id=d9d285b2-4ab9-4091-aa07-a9972a7e1031,network=Network(f4d9ccb1-41fe-4463-aceb-85602ee3a20f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd9d285b2-4a')
Jan 20 14:37:00 compute-0 nova_compute[250018]: 2026-01-20 14:37:00.366 250022 DEBUG nova.virt.libvirt.vif [None req-c9d31fca-fc38-44a4-8f09-e8a21cb4248d af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:36:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-2115140388',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-2115140388',id=54,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:36:43Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3b81c28ccb1340c9bb2d254088a5793b',ramdisk_id='',reservation_id='r-ctd26x1q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_proje
ct_name='tempest-AttachInterfacesV270Test-1034738423',owner_user_name='tempest-AttachInterfacesV270Test-1034738423-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T14:36:44Z,user_data=None,user_id='af0dc50e860c4144ab2ecc679760d941',uuid=64830e28-fd5b-41c5-ba24-3f203f4d4b10,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ea943387-322a-4fe7-975b-216aaed162a3", "address": "fa:16:3e:29:d5:67", "network": {"id": "f4d9ccb1-41fe-4463-aceb-85602ee3a20f", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1786213854-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b81c28ccb1340c9bb2d254088a5793b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea943387-32", "ovs_interfaceid": "ea943387-322a-4fe7-975b-216aaed162a3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 14:37:00 compute-0 nova_compute[250018]: 2026-01-20 14:37:00.366 250022 DEBUG nova.network.os_vif_util [None req-c9d31fca-fc38-44a4-8f09-e8a21cb4248d af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] Converting VIF {"id": "ea943387-322a-4fe7-975b-216aaed162a3", "address": "fa:16:3e:29:d5:67", "network": {"id": "f4d9ccb1-41fe-4463-aceb-85602ee3a20f", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1786213854-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b81c28ccb1340c9bb2d254088a5793b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea943387-32", "ovs_interfaceid": "ea943387-322a-4fe7-975b-216aaed162a3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:37:00 compute-0 nova_compute[250018]: 2026-01-20 14:37:00.367 250022 DEBUG nova.network.os_vif_util [None req-c9d31fca-fc38-44a4-8f09-e8a21cb4248d af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:29:d5:67,bridge_name='br-int',has_traffic_filtering=True,id=ea943387-322a-4fe7-975b-216aaed162a3,network=Network(f4d9ccb1-41fe-4463-aceb-85602ee3a20f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapea943387-32') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:37:00 compute-0 nova_compute[250018]: 2026-01-20 14:37:00.367 250022 DEBUG os_vif [None req-c9d31fca-fc38-44a4-8f09-e8a21cb4248d af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:29:d5:67,bridge_name='br-int',has_traffic_filtering=True,id=ea943387-322a-4fe7-975b-216aaed162a3,network=Network(f4d9ccb1-41fe-4463-aceb-85602ee3a20f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapea943387-32') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 14:37:00 compute-0 nova_compute[250018]: 2026-01-20 14:37:00.368 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:37:00 compute-0 nova_compute[250018]: 2026-01-20 14:37:00.369 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapea943387-32, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:37:00 compute-0 nova_compute[250018]: 2026-01-20 14:37:00.370 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:37:00 compute-0 nova_compute[250018]: 2026-01-20 14:37:00.371 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:37:00 compute-0 nova_compute[250018]: 2026-01-20 14:37:00.372 250022 INFO os_vif [None req-c9d31fca-fc38-44a4-8f09-e8a21cb4248d af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:29:d5:67,bridge_name='br-int',has_traffic_filtering=True,id=ea943387-322a-4fe7-975b-216aaed162a3,network=Network(f4d9ccb1-41fe-4463-aceb-85602ee3a20f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapea943387-32')
Jan 20 14:37:00 compute-0 neutron-haproxy-ovnmeta-f4d9ccb1-41fe-4463-aceb-85602ee3a20f[284547]: [NOTICE]   (284571) : haproxy version is 2.8.14-c23fe91
Jan 20 14:37:00 compute-0 neutron-haproxy-ovnmeta-f4d9ccb1-41fe-4463-aceb-85602ee3a20f[284547]: [NOTICE]   (284571) : path to executable is /usr/sbin/haproxy
Jan 20 14:37:00 compute-0 neutron-haproxy-ovnmeta-f4d9ccb1-41fe-4463-aceb-85602ee3a20f[284547]: [WARNING]  (284571) : Exiting Master process...
Jan 20 14:37:00 compute-0 neutron-haproxy-ovnmeta-f4d9ccb1-41fe-4463-aceb-85602ee3a20f[284547]: [ALERT]    (284571) : Current worker (284576) exited with code 143 (Terminated)
Jan 20 14:37:00 compute-0 neutron-haproxy-ovnmeta-f4d9ccb1-41fe-4463-aceb-85602ee3a20f[284547]: [WARNING]  (284571) : All workers exited. Exiting... (0)
Jan 20 14:37:00 compute-0 systemd[1]: libpod-929cc713e7605af3d02caa3ac6e8924ce3cd5c5f35be5a7c970aabaea7637385.scope: Deactivated successfully.
Jan 20 14:37:00 compute-0 podman[284792]: 2026-01-20 14:37:00.485515745 +0000 UTC m=+0.045664082 container died 929cc713e7605af3d02caa3ac6e8924ce3cd5c5f35be5a7c970aabaea7637385 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f4d9ccb1-41fe-4463-aceb-85602ee3a20f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Jan 20 14:37:00 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-929cc713e7605af3d02caa3ac6e8924ce3cd5c5f35be5a7c970aabaea7637385-userdata-shm.mount: Deactivated successfully.
Jan 20 14:37:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-177259cf7066ad0268f71d329b2758efec6613e0b4d00ec4ae3a0ad94905ae14-merged.mount: Deactivated successfully.
Jan 20 14:37:00 compute-0 podman[284792]: 2026-01-20 14:37:00.529358067 +0000 UTC m=+0.089506384 container cleanup 929cc713e7605af3d02caa3ac6e8924ce3cd5c5f35be5a7c970aabaea7637385 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f4d9ccb1-41fe-4463-aceb-85602ee3a20f, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 20 14:37:00 compute-0 systemd[1]: libpod-conmon-929cc713e7605af3d02caa3ac6e8924ce3cd5c5f35be5a7c970aabaea7637385.scope: Deactivated successfully.
Jan 20 14:37:00 compute-0 podman[284823]: 2026-01-20 14:37:00.594286668 +0000 UTC m=+0.043508604 container remove 929cc713e7605af3d02caa3ac6e8924ce3cd5c5f35be5a7c970aabaea7637385 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f4d9ccb1-41fe-4463-aceb-85602ee3a20f, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:37:00 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:37:00.601 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[0c0017a8-435a-40a0-a2cb-7980ad3b39ce]: (4, ('Tue Jan 20 02:37:00 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-f4d9ccb1-41fe-4463-aceb-85602ee3a20f (929cc713e7605af3d02caa3ac6e8924ce3cd5c5f35be5a7c970aabaea7637385)\n929cc713e7605af3d02caa3ac6e8924ce3cd5c5f35be5a7c970aabaea7637385\nTue Jan 20 02:37:00 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-f4d9ccb1-41fe-4463-aceb-85602ee3a20f (929cc713e7605af3d02caa3ac6e8924ce3cd5c5f35be5a7c970aabaea7637385)\n929cc713e7605af3d02caa3ac6e8924ce3cd5c5f35be5a7c970aabaea7637385\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:37:00 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:37:00.603 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[75697206-9aa4-47b4-9c66-fddfe8e33e50]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:37:00 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:37:00.604 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf4d9ccb1-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:37:00 compute-0 nova_compute[250018]: 2026-01-20 14:37:00.606 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:37:00 compute-0 kernel: tapf4d9ccb1-40: left promiscuous mode
Jan 20 14:37:00 compute-0 nova_compute[250018]: 2026-01-20 14:37:00.621 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:37:00 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:37:00.625 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[25d0545c-c5d9-4853-8d21-945a035bc890]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:37:00 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:37:00.645 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[e1d6881f-e932-4488-8a0f-87b5c9ef1244]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:37:00 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:37:00.646 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[4b219d62-e00a-47c7-9f96-cd1ae4070cc8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:37:00 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:37:00.663 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[f4be275a-4ee9-4c13-ae06-faf7da0eb432]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 573774, 'reachable_time': 40427, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 284839, 'error': None, 'target': 'ovnmeta-f4d9ccb1-41fe-4463-aceb-85602ee3a20f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:37:00 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:37:00.665 160517 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f4d9ccb1-41fe-4463-aceb-85602ee3a20f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 14:37:00 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:37:00.665 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[11ace527-66c3-4429-97cb-68bc0dc2182e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:37:00 compute-0 systemd[1]: run-netns-ovnmeta\x2df4d9ccb1\x2d41fe\x2d4463\x2daceb\x2d85602ee3a20f.mount: Deactivated successfully.
Jan 20 14:37:01 compute-0 nova_compute[250018]: 2026-01-20 14:37:01.388 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:37:01 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:37:01 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:37:01 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:37:01.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:37:01 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1448: 321 pgs: 321 active+clean; 283 MiB data, 685 MiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 2.6 MiB/s wr, 228 op/s
Jan 20 14:37:01 compute-0 nova_compute[250018]: 2026-01-20 14:37:01.606 250022 INFO nova.virt.libvirt.driver [None req-c9d31fca-fc38-44a4-8f09-e8a21cb4248d af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] [instance: 64830e28-fd5b-41c5-ba24-3f203f4d4b10] Deleting instance files /var/lib/nova/instances/64830e28-fd5b-41c5-ba24-3f203f4d4b10_del
Jan 20 14:37:01 compute-0 nova_compute[250018]: 2026-01-20 14:37:01.606 250022 INFO nova.virt.libvirt.driver [None req-c9d31fca-fc38-44a4-8f09-e8a21cb4248d af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] [instance: 64830e28-fd5b-41c5-ba24-3f203f4d4b10] Deletion of /var/lib/nova/instances/64830e28-fd5b-41c5-ba24-3f203f4d4b10_del complete
Jan 20 14:37:02 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:37:02 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:37:02 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:37:02.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:37:02 compute-0 ceph-mon[74360]: pgmap v1448: 321 pgs: 321 active+clean; 283 MiB data, 685 MiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 2.6 MiB/s wr, 228 op/s
Jan 20 14:37:02 compute-0 nova_compute[250018]: 2026-01-20 14:37:02.758 250022 DEBUG nova.compute.manager [req-436dbdc8-04c8-4bdc-957a-a9a4b2001068 req-00a5201b-aea0-4bbe-9b91-aa908f340a14 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 64830e28-fd5b-41c5-ba24-3f203f4d4b10] Received event network-vif-unplugged-ea943387-322a-4fe7-975b-216aaed162a3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:37:02 compute-0 nova_compute[250018]: 2026-01-20 14:37:02.758 250022 DEBUG oslo_concurrency.lockutils [req-436dbdc8-04c8-4bdc-957a-a9a4b2001068 req-00a5201b-aea0-4bbe-9b91-aa908f340a14 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "64830e28-fd5b-41c5-ba24-3f203f4d4b10-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:37:02 compute-0 nova_compute[250018]: 2026-01-20 14:37:02.758 250022 DEBUG oslo_concurrency.lockutils [req-436dbdc8-04c8-4bdc-957a-a9a4b2001068 req-00a5201b-aea0-4bbe-9b91-aa908f340a14 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "64830e28-fd5b-41c5-ba24-3f203f4d4b10-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:37:02 compute-0 nova_compute[250018]: 2026-01-20 14:37:02.759 250022 DEBUG oslo_concurrency.lockutils [req-436dbdc8-04c8-4bdc-957a-a9a4b2001068 req-00a5201b-aea0-4bbe-9b91-aa908f340a14 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "64830e28-fd5b-41c5-ba24-3f203f4d4b10-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:37:02 compute-0 nova_compute[250018]: 2026-01-20 14:37:02.759 250022 DEBUG nova.compute.manager [req-436dbdc8-04c8-4bdc-957a-a9a4b2001068 req-00a5201b-aea0-4bbe-9b91-aa908f340a14 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 64830e28-fd5b-41c5-ba24-3f203f4d4b10] No waiting events found dispatching network-vif-unplugged-ea943387-322a-4fe7-975b-216aaed162a3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:37:02 compute-0 nova_compute[250018]: 2026-01-20 14:37:02.759 250022 DEBUG nova.compute.manager [req-436dbdc8-04c8-4bdc-957a-a9a4b2001068 req-00a5201b-aea0-4bbe-9b91-aa908f340a14 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 64830e28-fd5b-41c5-ba24-3f203f4d4b10] Received event network-vif-unplugged-ea943387-322a-4fe7-975b-216aaed162a3 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 20 14:37:02 compute-0 nova_compute[250018]: 2026-01-20 14:37:02.760 250022 DEBUG nova.compute.manager [req-436dbdc8-04c8-4bdc-957a-a9a4b2001068 req-00a5201b-aea0-4bbe-9b91-aa908f340a14 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 64830e28-fd5b-41c5-ba24-3f203f4d4b10] Received event network-vif-plugged-ea943387-322a-4fe7-975b-216aaed162a3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:37:02 compute-0 nova_compute[250018]: 2026-01-20 14:37:02.760 250022 DEBUG oslo_concurrency.lockutils [req-436dbdc8-04c8-4bdc-957a-a9a4b2001068 req-00a5201b-aea0-4bbe-9b91-aa908f340a14 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "64830e28-fd5b-41c5-ba24-3f203f4d4b10-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:37:02 compute-0 nova_compute[250018]: 2026-01-20 14:37:02.760 250022 DEBUG oslo_concurrency.lockutils [req-436dbdc8-04c8-4bdc-957a-a9a4b2001068 req-00a5201b-aea0-4bbe-9b91-aa908f340a14 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "64830e28-fd5b-41c5-ba24-3f203f4d4b10-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:37:02 compute-0 nova_compute[250018]: 2026-01-20 14:37:02.761 250022 DEBUG oslo_concurrency.lockutils [req-436dbdc8-04c8-4bdc-957a-a9a4b2001068 req-00a5201b-aea0-4bbe-9b91-aa908f340a14 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "64830e28-fd5b-41c5-ba24-3f203f4d4b10-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:37:02 compute-0 nova_compute[250018]: 2026-01-20 14:37:02.761 250022 DEBUG nova.compute.manager [req-436dbdc8-04c8-4bdc-957a-a9a4b2001068 req-00a5201b-aea0-4bbe-9b91-aa908f340a14 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 64830e28-fd5b-41c5-ba24-3f203f4d4b10] No waiting events found dispatching network-vif-plugged-ea943387-322a-4fe7-975b-216aaed162a3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:37:02 compute-0 nova_compute[250018]: 2026-01-20 14:37:02.761 250022 WARNING nova.compute.manager [req-436dbdc8-04c8-4bdc-957a-a9a4b2001068 req-00a5201b-aea0-4bbe-9b91-aa908f340a14 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 64830e28-fd5b-41c5-ba24-3f203f4d4b10] Received unexpected event network-vif-plugged-ea943387-322a-4fe7-975b-216aaed162a3 for instance with vm_state active and task_state deleting.
Jan 20 14:37:02 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e220 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:37:03 compute-0 nova_compute[250018]: 2026-01-20 14:37:03.048 250022 INFO nova.compute.manager [None req-c9d31fca-fc38-44a4-8f09-e8a21cb4248d af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] [instance: 64830e28-fd5b-41c5-ba24-3f203f4d4b10] Took 2.96 seconds to destroy the instance on the hypervisor.
Jan 20 14:37:03 compute-0 nova_compute[250018]: 2026-01-20 14:37:03.049 250022 DEBUG oslo.service.loopingcall [None req-c9d31fca-fc38-44a4-8f09-e8a21cb4248d af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 14:37:03 compute-0 nova_compute[250018]: 2026-01-20 14:37:03.050 250022 DEBUG nova.compute.manager [-] [instance: 64830e28-fd5b-41c5-ba24-3f203f4d4b10] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 14:37:03 compute-0 nova_compute[250018]: 2026-01-20 14:37:03.050 250022 DEBUG nova.network.neutron [-] [instance: 64830e28-fd5b-41c5-ba24-3f203f4d4b10] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 14:37:03 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:37:03 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:37:03 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:37:03.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:37:03 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1449: 321 pgs: 321 active+clean; 283 MiB data, 685 MiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 2.2 MiB/s wr, 195 op/s
Jan 20 14:37:03 compute-0 sudo[284842]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:37:03 compute-0 sudo[284842]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:37:03 compute-0 sudo[284842]: pam_unix(sudo:session): session closed for user root
Jan 20 14:37:03 compute-0 sudo[284867]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:37:03 compute-0 sudo[284867]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:37:03 compute-0 sudo[284867]: pam_unix(sudo:session): session closed for user root
Jan 20 14:37:03 compute-0 nova_compute[250018]: 2026-01-20 14:37:03.993 250022 DEBUG nova.compute.manager [req-1acebad5-a9d0-4701-aa5f-24b40a09f5c5 req-0a7434d6-edfa-4007-a2f0-c574bca55857 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 64830e28-fd5b-41c5-ba24-3f203f4d4b10] Received event network-vif-unplugged-d9d285b2-4ab9-4091-aa07-a9972a7e1031 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:37:03 compute-0 nova_compute[250018]: 2026-01-20 14:37:03.994 250022 DEBUG oslo_concurrency.lockutils [req-1acebad5-a9d0-4701-aa5f-24b40a09f5c5 req-0a7434d6-edfa-4007-a2f0-c574bca55857 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "64830e28-fd5b-41c5-ba24-3f203f4d4b10-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:37:03 compute-0 nova_compute[250018]: 2026-01-20 14:37:03.994 250022 DEBUG oslo_concurrency.lockutils [req-1acebad5-a9d0-4701-aa5f-24b40a09f5c5 req-0a7434d6-edfa-4007-a2f0-c574bca55857 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "64830e28-fd5b-41c5-ba24-3f203f4d4b10-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:37:03 compute-0 nova_compute[250018]: 2026-01-20 14:37:03.994 250022 DEBUG oslo_concurrency.lockutils [req-1acebad5-a9d0-4701-aa5f-24b40a09f5c5 req-0a7434d6-edfa-4007-a2f0-c574bca55857 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "64830e28-fd5b-41c5-ba24-3f203f4d4b10-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:37:03 compute-0 nova_compute[250018]: 2026-01-20 14:37:03.995 250022 DEBUG nova.compute.manager [req-1acebad5-a9d0-4701-aa5f-24b40a09f5c5 req-0a7434d6-edfa-4007-a2f0-c574bca55857 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 64830e28-fd5b-41c5-ba24-3f203f4d4b10] No waiting events found dispatching network-vif-unplugged-d9d285b2-4ab9-4091-aa07-a9972a7e1031 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:37:03 compute-0 nova_compute[250018]: 2026-01-20 14:37:03.995 250022 DEBUG nova.compute.manager [req-1acebad5-a9d0-4701-aa5f-24b40a09f5c5 req-0a7434d6-edfa-4007-a2f0-c574bca55857 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 64830e28-fd5b-41c5-ba24-3f203f4d4b10] Received event network-vif-unplugged-d9d285b2-4ab9-4091-aa07-a9972a7e1031 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 20 14:37:03 compute-0 nova_compute[250018]: 2026-01-20 14:37:03.995 250022 DEBUG nova.compute.manager [req-1acebad5-a9d0-4701-aa5f-24b40a09f5c5 req-0a7434d6-edfa-4007-a2f0-c574bca55857 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 64830e28-fd5b-41c5-ba24-3f203f4d4b10] Received event network-vif-plugged-d9d285b2-4ab9-4091-aa07-a9972a7e1031 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:37:03 compute-0 nova_compute[250018]: 2026-01-20 14:37:03.995 250022 DEBUG oslo_concurrency.lockutils [req-1acebad5-a9d0-4701-aa5f-24b40a09f5c5 req-0a7434d6-edfa-4007-a2f0-c574bca55857 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "64830e28-fd5b-41c5-ba24-3f203f4d4b10-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:37:03 compute-0 nova_compute[250018]: 2026-01-20 14:37:03.996 250022 DEBUG oslo_concurrency.lockutils [req-1acebad5-a9d0-4701-aa5f-24b40a09f5c5 req-0a7434d6-edfa-4007-a2f0-c574bca55857 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "64830e28-fd5b-41c5-ba24-3f203f4d4b10-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:37:03 compute-0 nova_compute[250018]: 2026-01-20 14:37:03.996 250022 DEBUG oslo_concurrency.lockutils [req-1acebad5-a9d0-4701-aa5f-24b40a09f5c5 req-0a7434d6-edfa-4007-a2f0-c574bca55857 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "64830e28-fd5b-41c5-ba24-3f203f4d4b10-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:37:03 compute-0 nova_compute[250018]: 2026-01-20 14:37:03.996 250022 DEBUG nova.compute.manager [req-1acebad5-a9d0-4701-aa5f-24b40a09f5c5 req-0a7434d6-edfa-4007-a2f0-c574bca55857 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 64830e28-fd5b-41c5-ba24-3f203f4d4b10] No waiting events found dispatching network-vif-plugged-d9d285b2-4ab9-4091-aa07-a9972a7e1031 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:37:03 compute-0 nova_compute[250018]: 2026-01-20 14:37:03.997 250022 WARNING nova.compute.manager [req-1acebad5-a9d0-4701-aa5f-24b40a09f5c5 req-0a7434d6-edfa-4007-a2f0-c574bca55857 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 64830e28-fd5b-41c5-ba24-3f203f4d4b10] Received unexpected event network-vif-plugged-d9d285b2-4ab9-4091-aa07-a9972a7e1031 for instance with vm_state active and task_state deleting.
Jan 20 14:37:04 compute-0 ceph-mon[74360]: pgmap v1449: 321 pgs: 321 active+clean; 283 MiB data, 685 MiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 2.2 MiB/s wr, 195 op/s
Jan 20 14:37:04 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:37:04 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:37:04 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:37:04.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:37:05 compute-0 nova_compute[250018]: 2026-01-20 14:37:05.103 250022 DEBUG nova.compute.manager [req-c509bfd2-0c6d-4d26-935b-5ed0a32bb55b req-df70bdc7-6f78-4abb-baf0-ca8208d339b6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 64830e28-fd5b-41c5-ba24-3f203f4d4b10] Received event network-vif-deleted-ea943387-322a-4fe7-975b-216aaed162a3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:37:05 compute-0 nova_compute[250018]: 2026-01-20 14:37:05.103 250022 INFO nova.compute.manager [req-c509bfd2-0c6d-4d26-935b-5ed0a32bb55b req-df70bdc7-6f78-4abb-baf0-ca8208d339b6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 64830e28-fd5b-41c5-ba24-3f203f4d4b10] Neutron deleted interface ea943387-322a-4fe7-975b-216aaed162a3; detaching it from the instance and deleting it from the info cache
Jan 20 14:37:05 compute-0 nova_compute[250018]: 2026-01-20 14:37:05.103 250022 DEBUG nova.network.neutron [req-c509bfd2-0c6d-4d26-935b-5ed0a32bb55b req-df70bdc7-6f78-4abb-baf0-ca8208d339b6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 64830e28-fd5b-41c5-ba24-3f203f4d4b10] Updating instance_info_cache with network_info: [{"id": "d9d285b2-4ab9-4091-aa07-a9972a7e1031", "address": "fa:16:3e:45:c6:51", "network": {"id": "f4d9ccb1-41fe-4463-aceb-85602ee3a20f", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1786213854-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b81c28ccb1340c9bb2d254088a5793b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd9d285b2-4a", "ovs_interfaceid": "d9d285b2-4ab9-4091-aa07-a9972a7e1031", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:37:05 compute-0 nova_compute[250018]: 2026-01-20 14:37:05.127 250022 DEBUG nova.compute.manager [req-c509bfd2-0c6d-4d26-935b-5ed0a32bb55b req-df70bdc7-6f78-4abb-baf0-ca8208d339b6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 64830e28-fd5b-41c5-ba24-3f203f4d4b10] Detach interface failed, port_id=ea943387-322a-4fe7-975b-216aaed162a3, reason: Instance 64830e28-fd5b-41c5-ba24-3f203f4d4b10 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Jan 20 14:37:05 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/287032635' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:37:05 compute-0 nova_compute[250018]: 2026-01-20 14:37:05.371 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:37:05 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:37:05 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:37:05 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:37:05.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:37:05 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1450: 321 pgs: 321 active+clean; 213 MiB data, 647 MiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 2.2 MiB/s wr, 242 op/s
Jan 20 14:37:06 compute-0 ceph-mon[74360]: pgmap v1450: 321 pgs: 321 active+clean; 213 MiB data, 647 MiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 2.2 MiB/s wr, 242 op/s
Jan 20 14:37:06 compute-0 nova_compute[250018]: 2026-01-20 14:37:06.317 250022 DEBUG nova.network.neutron [-] [instance: 64830e28-fd5b-41c5-ba24-3f203f4d4b10] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:37:06 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:37:06 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:37:06 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:37:06.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:37:06 compute-0 nova_compute[250018]: 2026-01-20 14:37:06.389 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:37:06 compute-0 nova_compute[250018]: 2026-01-20 14:37:06.406 250022 INFO nova.compute.manager [-] [instance: 64830e28-fd5b-41c5-ba24-3f203f4d4b10] Took 3.36 seconds to deallocate network for instance.
Jan 20 14:37:06 compute-0 nova_compute[250018]: 2026-01-20 14:37:06.493 250022 DEBUG oslo_concurrency.lockutils [None req-c9d31fca-fc38-44a4-8f09-e8a21cb4248d af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:37:06 compute-0 nova_compute[250018]: 2026-01-20 14:37:06.493 250022 DEBUG oslo_concurrency.lockutils [None req-c9d31fca-fc38-44a4-8f09-e8a21cb4248d af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:37:06 compute-0 sshd-session[284893]: Invalid user ubuntu from 157.245.78.139 port 44116
Jan 20 14:37:06 compute-0 nova_compute[250018]: 2026-01-20 14:37:06.563 250022 DEBUG oslo_concurrency.processutils [None req-c9d31fca-fc38-44a4-8f09-e8a21cb4248d af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:37:06 compute-0 sshd-session[284893]: Connection closed by invalid user ubuntu 157.245.78.139 port 44116 [preauth]
Jan 20 14:37:06 compute-0 sudo[284916]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:37:06 compute-0 sudo[284916]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:37:06 compute-0 sudo[284916]: pam_unix(sudo:session): session closed for user root
Jan 20 14:37:06 compute-0 sudo[284941]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:37:06 compute-0 sudo[284941]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:37:06 compute-0 sudo[284941]: pam_unix(sudo:session): session closed for user root
Jan 20 14:37:06 compute-0 sudo[284966]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:37:06 compute-0 sudo[284966]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:37:06 compute-0 sudo[284966]: pam_unix(sudo:session): session closed for user root
Jan 20 14:37:07 compute-0 sudo[284991]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 14:37:07 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:37:07 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3114007443' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:37:07 compute-0 sudo[284991]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:37:07 compute-0 nova_compute[250018]: 2026-01-20 14:37:07.036 250022 DEBUG oslo_concurrency.processutils [None req-c9d31fca-fc38-44a4-8f09-e8a21cb4248d af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:37:07 compute-0 nova_compute[250018]: 2026-01-20 14:37:07.044 250022 DEBUG nova.compute.provider_tree [None req-c9d31fca-fc38-44a4-8f09-e8a21cb4248d af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:37:07 compute-0 nova_compute[250018]: 2026-01-20 14:37:07.070 250022 DEBUG nova.scheduler.client.report [None req-c9d31fca-fc38-44a4-8f09-e8a21cb4248d af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:37:07 compute-0 nova_compute[250018]: 2026-01-20 14:37:07.098 250022 DEBUG oslo_concurrency.lockutils [None req-c9d31fca-fc38-44a4-8f09-e8a21cb4248d af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.605s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:37:07 compute-0 nova_compute[250018]: 2026-01-20 14:37:07.130 250022 INFO nova.scheduler.client.report [None req-c9d31fca-fc38-44a4-8f09-e8a21cb4248d af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] Deleted allocations for instance 64830e28-fd5b-41c5-ba24-3f203f4d4b10
Jan 20 14:37:07 compute-0 nova_compute[250018]: 2026-01-20 14:37:07.217 250022 DEBUG nova.compute.manager [req-b73eafa3-3abc-4b56-a5b7-190c7b72b52e req-fdd38bf7-e1b8-4615-be7b-e2637cb971fd 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 64830e28-fd5b-41c5-ba24-3f203f4d4b10] Received event network-vif-deleted-d9d285b2-4ab9-4091-aa07-a9972a7e1031 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:37:07 compute-0 nova_compute[250018]: 2026-01-20 14:37:07.219 250022 DEBUG oslo_concurrency.lockutils [None req-c9d31fca-fc38-44a4-8f09-e8a21cb4248d af0dc50e860c4144ab2ecc679760d941 3b81c28ccb1340c9bb2d254088a5793b - - default default] Lock "64830e28-fd5b-41c5-ba24-3f203f4d4b10" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.134s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:37:07 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e220 do_prune osdmap full prune enabled
Jan 20 14:37:07 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3114007443' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:37:07 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e221 e221: 3 total, 3 up, 3 in
Jan 20 14:37:07 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e221: 3 total, 3 up, 3 in
Jan 20 14:37:07 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:37:07 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:37:07 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:37:07.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:37:07 compute-0 sudo[284991]: pam_unix(sudo:session): session closed for user root
Jan 20 14:37:07 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1452: 321 pgs: 321 active+clean; 213 MiB data, 647 MiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 2.4 MiB/s wr, 246 op/s
Jan 20 14:37:07 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 14:37:07 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:37:07 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 20 14:37:07 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:37:07 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 20 14:37:07 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:37:07 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev a6b9b769-9b2e-401e-88a5-8bdab965bb92 does not exist
Jan 20 14:37:07 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev c43ecb5f-c703-41ba-a377-839c09606b27 does not exist
Jan 20 14:37:07 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 0f802899-296a-4645-8e92-847e40d14bb8 does not exist
Jan 20 14:37:07 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 20 14:37:07 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:37:07 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 20 14:37:07 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:37:07 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 14:37:07 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:37:07 compute-0 sudo[285049]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:37:07 compute-0 sudo[285049]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:37:07 compute-0 sudo[285049]: pam_unix(sudo:session): session closed for user root
Jan 20 14:37:07 compute-0 sudo[285074]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:37:07 compute-0 sudo[285074]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:37:07 compute-0 sudo[285074]: pam_unix(sudo:session): session closed for user root
Jan 20 14:37:07 compute-0 sudo[285099]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:37:07 compute-0 sudo[285099]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:37:07 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e221 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:37:07 compute-0 sudo[285099]: pam_unix(sudo:session): session closed for user root
Jan 20 14:37:07 compute-0 sudo[285124]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 14:37:07 compute-0 sudo[285124]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:37:08 compute-0 podman[285187]: 2026-01-20 14:37:08.171433379 +0000 UTC m=+0.038614333 container create 9406ca2c16143f491b779945540c148265278418829f5ba30d5df14693c5f411 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_chaum, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:37:08 compute-0 systemd[1]: Started libpod-conmon-9406ca2c16143f491b779945540c148265278418829f5ba30d5df14693c5f411.scope.
Jan 20 14:37:08 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:37:08 compute-0 podman[285187]: 2026-01-20 14:37:08.248043014 +0000 UTC m=+0.115223998 container init 9406ca2c16143f491b779945540c148265278418829f5ba30d5df14693c5f411 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_chaum, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:37:08 compute-0 podman[285187]: 2026-01-20 14:37:08.155776447 +0000 UTC m=+0.022957411 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:37:08 compute-0 podman[285187]: 2026-01-20 14:37:08.255929427 +0000 UTC m=+0.123110371 container start 9406ca2c16143f491b779945540c148265278418829f5ba30d5df14693c5f411 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_chaum, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 20 14:37:08 compute-0 podman[285187]: 2026-01-20 14:37:08.260941823 +0000 UTC m=+0.128122787 container attach 9406ca2c16143f491b779945540c148265278418829f5ba30d5df14693c5f411 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_chaum, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 20 14:37:08 compute-0 wizardly_chaum[285203]: 167 167
Jan 20 14:37:08 compute-0 podman[285187]: 2026-01-20 14:37:08.262997768 +0000 UTC m=+0.130178722 container died 9406ca2c16143f491b779945540c148265278418829f5ba30d5df14693c5f411 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_chaum, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:37:08 compute-0 systemd[1]: libpod-9406ca2c16143f491b779945540c148265278418829f5ba30d5df14693c5f411.scope: Deactivated successfully.
Jan 20 14:37:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-24d3edfecaa4a1866d929e4e8dfaeab101d4c7576c9ce5bdcb7d48d1069271c6-merged.mount: Deactivated successfully.
Jan 20 14:37:08 compute-0 ceph-mon[74360]: osdmap e221: 3 total, 3 up, 3 in
Jan 20 14:37:08 compute-0 ceph-mon[74360]: pgmap v1452: 321 pgs: 321 active+clean; 213 MiB data, 647 MiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 2.4 MiB/s wr, 246 op/s
Jan 20 14:37:08 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:37:08 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:37:08 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:37:08 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:37:08 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:37:08 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:37:08 compute-0 podman[285187]: 2026-01-20 14:37:08.298849715 +0000 UTC m=+0.166030669 container remove 9406ca2c16143f491b779945540c148265278418829f5ba30d5df14693c5f411 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_chaum, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:37:08 compute-0 systemd[1]: libpod-conmon-9406ca2c16143f491b779945540c148265278418829f5ba30d5df14693c5f411.scope: Deactivated successfully.
Jan 20 14:37:08 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:37:08 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:37:08 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:37:08.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:37:08 compute-0 podman[285227]: 2026-01-20 14:37:08.443940967 +0000 UTC m=+0.038194991 container create d744953f96d1c6a98fb45edb9c9c2c623672352f88d2cc9f0a3d29d46ebc79ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_maxwell, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 20 14:37:08 compute-0 systemd[1]: Started libpod-conmon-d744953f96d1c6a98fb45edb9c9c2c623672352f88d2cc9f0a3d29d46ebc79ae.scope.
Jan 20 14:37:08 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:37:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0067c0e50062d62e0d49d779d67ca2ce12d2f3a5182d6776eb7f878b961bc4d7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:37:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0067c0e50062d62e0d49d779d67ca2ce12d2f3a5182d6776eb7f878b961bc4d7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:37:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0067c0e50062d62e0d49d779d67ca2ce12d2f3a5182d6776eb7f878b961bc4d7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:37:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0067c0e50062d62e0d49d779d67ca2ce12d2f3a5182d6776eb7f878b961bc4d7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:37:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0067c0e50062d62e0d49d779d67ca2ce12d2f3a5182d6776eb7f878b961bc4d7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 14:37:08 compute-0 podman[285227]: 2026-01-20 14:37:08.513270547 +0000 UTC m=+0.107524581 container init d744953f96d1c6a98fb45edb9c9c2c623672352f88d2cc9f0a3d29d46ebc79ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_maxwell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:37:08 compute-0 podman[285227]: 2026-01-20 14:37:08.523179674 +0000 UTC m=+0.117433698 container start d744953f96d1c6a98fb45edb9c9c2c623672352f88d2cc9f0a3d29d46ebc79ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_maxwell, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:37:08 compute-0 podman[285227]: 2026-01-20 14:37:08.427957546 +0000 UTC m=+0.022211590 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:37:08 compute-0 podman[285227]: 2026-01-20 14:37:08.526817412 +0000 UTC m=+0.121071466 container attach d744953f96d1c6a98fb45edb9c9c2c623672352f88d2cc9f0a3d29d46ebc79ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_maxwell, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 20 14:37:09 compute-0 vibrant_maxwell[285244]: --> passed data devices: 0 physical, 1 LVM
Jan 20 14:37:09 compute-0 vibrant_maxwell[285244]: --> relative data size: 1.0
Jan 20 14:37:09 compute-0 vibrant_maxwell[285244]: --> All data devices are unavailable
Jan 20 14:37:09 compute-0 systemd[1]: libpod-d744953f96d1c6a98fb45edb9c9c2c623672352f88d2cc9f0a3d29d46ebc79ae.scope: Deactivated successfully.
Jan 20 14:37:09 compute-0 podman[285227]: 2026-01-20 14:37:09.333218426 +0000 UTC m=+0.927472450 container died d744953f96d1c6a98fb45edb9c9c2c623672352f88d2cc9f0a3d29d46ebc79ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_maxwell, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 20 14:37:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-0067c0e50062d62e0d49d779d67ca2ce12d2f3a5182d6776eb7f878b961bc4d7-merged.mount: Deactivated successfully.
Jan 20 14:37:09 compute-0 podman[285227]: 2026-01-20 14:37:09.393112972 +0000 UTC m=+0.987366996 container remove d744953f96d1c6a98fb45edb9c9c2c623672352f88d2cc9f0a3d29d46ebc79ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_maxwell, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 20 14:37:09 compute-0 systemd[1]: libpod-conmon-d744953f96d1c6a98fb45edb9c9c2c623672352f88d2cc9f0a3d29d46ebc79ae.scope: Deactivated successfully.
Jan 20 14:37:09 compute-0 sudo[285124]: pam_unix(sudo:session): session closed for user root
Jan 20 14:37:09 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:37:09 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:37:09 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:37:09.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:37:09 compute-0 sudo[285274]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:37:09 compute-0 sudo[285274]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:37:09 compute-0 sudo[285274]: pam_unix(sudo:session): session closed for user root
Jan 20 14:37:09 compute-0 sudo[285299]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:37:09 compute-0 sudo[285299]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:37:09 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1453: 321 pgs: 321 active+clean; 185 MiB data, 637 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.4 MiB/s wr, 196 op/s
Jan 20 14:37:09 compute-0 sudo[285299]: pam_unix(sudo:session): session closed for user root
Jan 20 14:37:09 compute-0 sudo[285324]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:37:09 compute-0 sudo[285324]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:37:09 compute-0 sudo[285324]: pam_unix(sudo:session): session closed for user root
Jan 20 14:37:09 compute-0 sudo[285349]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- lvm list --format json
Jan 20 14:37:09 compute-0 sudo[285349]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:37:09 compute-0 podman[285414]: 2026-01-20 14:37:09.92778344 +0000 UTC m=+0.036070204 container create 7f981619e1afb7ca6349f965c7f9dc273efcd4ea0fced64eec6efa77f7e88b07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_benz, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 20 14:37:09 compute-0 systemd[1]: Started libpod-conmon-7f981619e1afb7ca6349f965c7f9dc273efcd4ea0fced64eec6efa77f7e88b07.scope.
Jan 20 14:37:09 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:37:10 compute-0 podman[285414]: 2026-01-20 14:37:10.00864758 +0000 UTC m=+0.116934364 container init 7f981619e1afb7ca6349f965c7f9dc273efcd4ea0fced64eec6efa77f7e88b07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_benz, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:37:10 compute-0 podman[285414]: 2026-01-20 14:37:09.912643241 +0000 UTC m=+0.020930025 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:37:10 compute-0 podman[285414]: 2026-01-20 14:37:10.015887945 +0000 UTC m=+0.124174719 container start 7f981619e1afb7ca6349f965c7f9dc273efcd4ea0fced64eec6efa77f7e88b07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_benz, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 20 14:37:10 compute-0 podman[285414]: 2026-01-20 14:37:10.020003866 +0000 UTC m=+0.128290650 container attach 7f981619e1afb7ca6349f965c7f9dc273efcd4ea0fced64eec6efa77f7e88b07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_benz, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 20 14:37:10 compute-0 tender_benz[285430]: 167 167
Jan 20 14:37:10 compute-0 systemd[1]: libpod-7f981619e1afb7ca6349f965c7f9dc273efcd4ea0fced64eec6efa77f7e88b07.scope: Deactivated successfully.
Jan 20 14:37:10 compute-0 podman[285414]: 2026-01-20 14:37:10.022197866 +0000 UTC m=+0.130484620 container died 7f981619e1afb7ca6349f965c7f9dc273efcd4ea0fced64eec6efa77f7e88b07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_benz, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:37:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-612dafa7126e3346d87c6c3212d9fa114e1d865e8f1bfa1730b11f8a844f8cb1-merged.mount: Deactivated successfully.
Jan 20 14:37:10 compute-0 podman[285414]: 2026-01-20 14:37:10.05981714 +0000 UTC m=+0.168103904 container remove 7f981619e1afb7ca6349f965c7f9dc273efcd4ea0fced64eec6efa77f7e88b07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_benz, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 20 14:37:10 compute-0 systemd[1]: libpod-conmon-7f981619e1afb7ca6349f965c7f9dc273efcd4ea0fced64eec6efa77f7e88b07.scope: Deactivated successfully.
Jan 20 14:37:10 compute-0 podman[285454]: 2026-01-20 14:37:10.224795738 +0000 UTC m=+0.041914201 container create d46911bebc22fac83fb23bffa087f81223391819bf0687a0fa9cfb5431c47dfd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_mccarthy, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:37:10 compute-0 systemd[1]: Started libpod-conmon-d46911bebc22fac83fb23bffa087f81223391819bf0687a0fa9cfb5431c47dfd.scope.
Jan 20 14:37:10 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:37:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63b14d645fa18b21987655e67d98ad09fb994d8998e9348d7e8cc3d4ffd52d53/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:37:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63b14d645fa18b21987655e67d98ad09fb994d8998e9348d7e8cc3d4ffd52d53/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:37:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63b14d645fa18b21987655e67d98ad09fb994d8998e9348d7e8cc3d4ffd52d53/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:37:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63b14d645fa18b21987655e67d98ad09fb994d8998e9348d7e8cc3d4ffd52d53/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:37:10 compute-0 podman[285454]: 2026-01-20 14:37:10.289959276 +0000 UTC m=+0.107077759 container init d46911bebc22fac83fb23bffa087f81223391819bf0687a0fa9cfb5431c47dfd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_mccarthy, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:37:10 compute-0 podman[285454]: 2026-01-20 14:37:10.298859456 +0000 UTC m=+0.115977919 container start d46911bebc22fac83fb23bffa087f81223391819bf0687a0fa9cfb5431c47dfd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_mccarthy, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:37:10 compute-0 podman[285454]: 2026-01-20 14:37:10.302426322 +0000 UTC m=+0.119544785 container attach d46911bebc22fac83fb23bffa087f81223391819bf0687a0fa9cfb5431c47dfd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_mccarthy, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:37:10 compute-0 podman[285454]: 2026-01-20 14:37:10.209153457 +0000 UTC m=+0.026271960 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:37:10 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:37:10 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:37:10 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:37:10.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:37:10 compute-0 nova_compute[250018]: 2026-01-20 14:37:10.375 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:37:10 compute-0 ceph-mon[74360]: pgmap v1453: 321 pgs: 321 active+clean; 185 MiB data, 637 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.4 MiB/s wr, 196 op/s
Jan 20 14:37:11 compute-0 awesome_mccarthy[285471]: {
Jan 20 14:37:11 compute-0 awesome_mccarthy[285471]:     "0": [
Jan 20 14:37:11 compute-0 awesome_mccarthy[285471]:         {
Jan 20 14:37:11 compute-0 awesome_mccarthy[285471]:             "devices": [
Jan 20 14:37:11 compute-0 awesome_mccarthy[285471]:                 "/dev/loop3"
Jan 20 14:37:11 compute-0 awesome_mccarthy[285471]:             ],
Jan 20 14:37:11 compute-0 awesome_mccarthy[285471]:             "lv_name": "ceph_lv0",
Jan 20 14:37:11 compute-0 awesome_mccarthy[285471]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:37:11 compute-0 awesome_mccarthy[285471]:             "lv_size": "7511998464",
Jan 20 14:37:11 compute-0 awesome_mccarthy[285471]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e399cf45-e6b6-5393-99f1-75c601d3f188,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afc3740a-ccee-46f8-b343-22648cc89376,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 20 14:37:11 compute-0 awesome_mccarthy[285471]:             "lv_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 14:37:11 compute-0 awesome_mccarthy[285471]:             "name": "ceph_lv0",
Jan 20 14:37:11 compute-0 awesome_mccarthy[285471]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:37:11 compute-0 awesome_mccarthy[285471]:             "tags": {
Jan 20 14:37:11 compute-0 awesome_mccarthy[285471]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:37:11 compute-0 awesome_mccarthy[285471]:                 "ceph.block_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 14:37:11 compute-0 awesome_mccarthy[285471]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 14:37:11 compute-0 awesome_mccarthy[285471]:                 "ceph.cluster_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 14:37:11 compute-0 awesome_mccarthy[285471]:                 "ceph.cluster_name": "ceph",
Jan 20 14:37:11 compute-0 awesome_mccarthy[285471]:                 "ceph.crush_device_class": "",
Jan 20 14:37:11 compute-0 awesome_mccarthy[285471]:                 "ceph.encrypted": "0",
Jan 20 14:37:11 compute-0 awesome_mccarthy[285471]:                 "ceph.osd_fsid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 14:37:11 compute-0 awesome_mccarthy[285471]:                 "ceph.osd_id": "0",
Jan 20 14:37:11 compute-0 awesome_mccarthy[285471]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 14:37:11 compute-0 awesome_mccarthy[285471]:                 "ceph.type": "block",
Jan 20 14:37:11 compute-0 awesome_mccarthy[285471]:                 "ceph.vdo": "0"
Jan 20 14:37:11 compute-0 awesome_mccarthy[285471]:             },
Jan 20 14:37:11 compute-0 awesome_mccarthy[285471]:             "type": "block",
Jan 20 14:37:11 compute-0 awesome_mccarthy[285471]:             "vg_name": "ceph_vg0"
Jan 20 14:37:11 compute-0 awesome_mccarthy[285471]:         }
Jan 20 14:37:11 compute-0 awesome_mccarthy[285471]:     ]
Jan 20 14:37:11 compute-0 awesome_mccarthy[285471]: }
Jan 20 14:37:11 compute-0 systemd[1]: libpod-d46911bebc22fac83fb23bffa087f81223391819bf0687a0fa9cfb5431c47dfd.scope: Deactivated successfully.
Jan 20 14:37:11 compute-0 podman[285454]: 2026-01-20 14:37:11.063936647 +0000 UTC m=+0.881055120 container died d46911bebc22fac83fb23bffa087f81223391819bf0687a0fa9cfb5431c47dfd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_mccarthy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:37:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-63b14d645fa18b21987655e67d98ad09fb994d8998e9348d7e8cc3d4ffd52d53-merged.mount: Deactivated successfully.
Jan 20 14:37:11 compute-0 podman[285454]: 2026-01-20 14:37:11.120547853 +0000 UTC m=+0.937666316 container remove d46911bebc22fac83fb23bffa087f81223391819bf0687a0fa9cfb5431c47dfd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_mccarthy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:37:11 compute-0 systemd[1]: libpod-conmon-d46911bebc22fac83fb23bffa087f81223391819bf0687a0fa9cfb5431c47dfd.scope: Deactivated successfully.
Jan 20 14:37:11 compute-0 sudo[285349]: pam_unix(sudo:session): session closed for user root
Jan 20 14:37:11 compute-0 sudo[285491]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:37:11 compute-0 sudo[285491]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:37:11 compute-0 sudo[285491]: pam_unix(sudo:session): session closed for user root
Jan 20 14:37:11 compute-0 sudo[285516]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:37:11 compute-0 sudo[285516]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:37:11 compute-0 sudo[285516]: pam_unix(sudo:session): session closed for user root
Jan 20 14:37:11 compute-0 nova_compute[250018]: 2026-01-20 14:37:11.264 250022 DEBUG oslo_concurrency.lockutils [None req-f4983e58-f941-4b78-8c6c-624d400382b0 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Acquiring lock "a6c080ba-dcec-4724-ac6c-12c69f617401" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:37:11 compute-0 nova_compute[250018]: 2026-01-20 14:37:11.266 250022 DEBUG oslo_concurrency.lockutils [None req-f4983e58-f941-4b78-8c6c-624d400382b0 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Lock "a6c080ba-dcec-4724-ac6c-12c69f617401" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:37:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 14:37:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:37:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 20 14:37:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:37:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.002930778022054718 of space, bias 1.0, pg target 0.8792334066164155 quantized to 32 (current 32)
Jan 20 14:37:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:37:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 20 14:37:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:37:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:37:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:37:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0025269213544574726 of space, bias 1.0, pg target 0.7580764063372418 quantized to 32 (current 32)
Jan 20 14:37:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:37:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 20 14:37:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:37:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:37:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:37:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 20 14:37:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:37:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 20 14:37:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:37:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:37:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:37:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 20 14:37:11 compute-0 nova_compute[250018]: 2026-01-20 14:37:11.307 250022 DEBUG nova.compute.manager [None req-f4983e58-f941-4b78-8c6c-624d400382b0 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] [instance: a6c080ba-dcec-4724-ac6c-12c69f617401] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 14:37:11 compute-0 sudo[285541]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:37:11 compute-0 sudo[285541]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:37:11 compute-0 sudo[285541]: pam_unix(sudo:session): session closed for user root
Jan 20 14:37:11 compute-0 sudo[285566]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- raw list --format json
Jan 20 14:37:11 compute-0 sudo[285566]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:37:11 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:37:11 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:37:11 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:37:11.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:37:11 compute-0 nova_compute[250018]: 2026-01-20 14:37:11.449 250022 DEBUG oslo_concurrency.lockutils [None req-f4983e58-f941-4b78-8c6c-624d400382b0 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:37:11 compute-0 nova_compute[250018]: 2026-01-20 14:37:11.451 250022 DEBUG oslo_concurrency.lockutils [None req-f4983e58-f941-4b78-8c6c-624d400382b0 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:37:11 compute-0 nova_compute[250018]: 2026-01-20 14:37:11.452 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:37:11 compute-0 nova_compute[250018]: 2026-01-20 14:37:11.459 250022 DEBUG nova.virt.hardware [None req-f4983e58-f941-4b78-8c6c-624d400382b0 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 14:37:11 compute-0 nova_compute[250018]: 2026-01-20 14:37:11.460 250022 INFO nova.compute.claims [None req-f4983e58-f941-4b78-8c6c-624d400382b0 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] [instance: a6c080ba-dcec-4724-ac6c-12c69f617401] Claim successful on node compute-0.ctlplane.example.com
Jan 20 14:37:11 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1454: 321 pgs: 321 active+clean; 121 MiB data, 612 MiB used, 20 GiB / 21 GiB avail; 471 KiB/s rd, 2.6 MiB/s wr, 194 op/s
Jan 20 14:37:11 compute-0 nova_compute[250018]: 2026-01-20 14:37:11.598 250022 DEBUG oslo_concurrency.processutils [None req-f4983e58-f941-4b78-8c6c-624d400382b0 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:37:11 compute-0 podman[285629]: 2026-01-20 14:37:11.647512913 +0000 UTC m=+0.036808573 container create 19c44af15c5de4aa047fc25942e02e44462b576dfce5848fd3a77210d4bcbda1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_jang, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True)
Jan 20 14:37:11 compute-0 systemd[1]: Started libpod-conmon-19c44af15c5de4aa047fc25942e02e44462b576dfce5848fd3a77210d4bcbda1.scope.
Jan 20 14:37:11 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:37:11 compute-0 podman[285629]: 2026-01-20 14:37:11.705012343 +0000 UTC m=+0.094308033 container init 19c44af15c5de4aa047fc25942e02e44462b576dfce5848fd3a77210d4bcbda1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_jang, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:37:11 compute-0 podman[285629]: 2026-01-20 14:37:11.714207361 +0000 UTC m=+0.103503031 container start 19c44af15c5de4aa047fc25942e02e44462b576dfce5848fd3a77210d4bcbda1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_jang, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:37:11 compute-0 podman[285629]: 2026-01-20 14:37:11.717615133 +0000 UTC m=+0.106910793 container attach 19c44af15c5de4aa047fc25942e02e44462b576dfce5848fd3a77210d4bcbda1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_jang, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 20 14:37:11 compute-0 naughty_jang[285648]: 167 167
Jan 20 14:37:11 compute-0 systemd[1]: libpod-19c44af15c5de4aa047fc25942e02e44462b576dfce5848fd3a77210d4bcbda1.scope: Deactivated successfully.
Jan 20 14:37:11 compute-0 conmon[285648]: conmon 19c44af15c5de4aa047f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-19c44af15c5de4aa047fc25942e02e44462b576dfce5848fd3a77210d4bcbda1.scope/container/memory.events
Jan 20 14:37:11 compute-0 podman[285629]: 2026-01-20 14:37:11.721224271 +0000 UTC m=+0.110519941 container died 19c44af15c5de4aa047fc25942e02e44462b576dfce5848fd3a77210d4bcbda1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_jang, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 20 14:37:11 compute-0 podman[285629]: 2026-01-20 14:37:11.631136322 +0000 UTC m=+0.020432012 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:37:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-68d16e9df903d08f18e37d7becb0b4b044ec87d9b95631a473dc2cd7637ae35a-merged.mount: Deactivated successfully.
Jan 20 14:37:11 compute-0 podman[285647]: 2026-01-20 14:37:11.751106816 +0000 UTC m=+0.070928163 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202)
Jan 20 14:37:11 compute-0 podman[285629]: 2026-01-20 14:37:11.765015131 +0000 UTC m=+0.154310791 container remove 19c44af15c5de4aa047fc25942e02e44462b576dfce5848fd3a77210d4bcbda1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_jang, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 20 14:37:11 compute-0 systemd[1]: libpod-conmon-19c44af15c5de4aa047fc25942e02e44462b576dfce5848fd3a77210d4bcbda1.scope: Deactivated successfully.
Jan 20 14:37:11 compute-0 podman[285644]: 2026-01-20 14:37:11.799785749 +0000 UTC m=+0.118712112 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Jan 20 14:37:11 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e221 do_prune osdmap full prune enabled
Jan 20 14:37:11 compute-0 podman[285730]: 2026-01-20 14:37:11.913521186 +0000 UTC m=+0.037596305 container create 16cd909f16ee3b779ce67c8d5accfd351947c919cd137e3b11b9b47d76ba5b49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_albattani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True)
Jan 20 14:37:11 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e222 e222: 3 total, 3 up, 3 in
Jan 20 14:37:11 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/998884099' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:37:11 compute-0 ceph-mon[74360]: pgmap v1454: 321 pgs: 321 active+clean; 121 MiB data, 612 MiB used, 20 GiB / 21 GiB avail; 471 KiB/s rd, 2.6 MiB/s wr, 194 op/s
Jan 20 14:37:11 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e222: 3 total, 3 up, 3 in
Jan 20 14:37:11 compute-0 systemd[1]: Started libpod-conmon-16cd909f16ee3b779ce67c8d5accfd351947c919cd137e3b11b9b47d76ba5b49.scope.
Jan 20 14:37:11 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:37:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74e0640db24e3711d92d3ab79a57d35dac7580268986ecfaae15e16d86e7211a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:37:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74e0640db24e3711d92d3ab79a57d35dac7580268986ecfaae15e16d86e7211a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:37:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74e0640db24e3711d92d3ab79a57d35dac7580268986ecfaae15e16d86e7211a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:37:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74e0640db24e3711d92d3ab79a57d35dac7580268986ecfaae15e16d86e7211a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:37:11 compute-0 podman[285730]: 2026-01-20 14:37:11.897866494 +0000 UTC m=+0.021941633 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:37:12 compute-0 podman[285730]: 2026-01-20 14:37:12.001491368 +0000 UTC m=+0.125566517 container init 16cd909f16ee3b779ce67c8d5accfd351947c919cd137e3b11b9b47d76ba5b49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_albattani, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:37:12 compute-0 podman[285730]: 2026-01-20 14:37:12.011338394 +0000 UTC m=+0.135413513 container start 16cd909f16ee3b779ce67c8d5accfd351947c919cd137e3b11b9b47d76ba5b49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_albattani, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 20 14:37:12 compute-0 podman[285730]: 2026-01-20 14:37:12.014708945 +0000 UTC m=+0.138784094 container attach 16cd909f16ee3b779ce67c8d5accfd351947c919cd137e3b11b9b47d76ba5b49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_albattani, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:37:12 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:37:12 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3321828056' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:37:12 compute-0 nova_compute[250018]: 2026-01-20 14:37:12.055 250022 DEBUG oslo_concurrency.processutils [None req-f4983e58-f941-4b78-8c6c-624d400382b0 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:37:12 compute-0 nova_compute[250018]: 2026-01-20 14:37:12.062 250022 DEBUG nova.compute.provider_tree [None req-f4983e58-f941-4b78-8c6c-624d400382b0 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:37:12 compute-0 nova_compute[250018]: 2026-01-20 14:37:12.079 250022 DEBUG nova.scheduler.client.report [None req-f4983e58-f941-4b78-8c6c-624d400382b0 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:37:12 compute-0 nova_compute[250018]: 2026-01-20 14:37:12.107 250022 DEBUG oslo_concurrency.lockutils [None req-f4983e58-f941-4b78-8c6c-624d400382b0 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.656s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:37:12 compute-0 nova_compute[250018]: 2026-01-20 14:37:12.108 250022 DEBUG nova.compute.manager [None req-f4983e58-f941-4b78-8c6c-624d400382b0 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] [instance: a6c080ba-dcec-4724-ac6c-12c69f617401] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 14:37:12 compute-0 nova_compute[250018]: 2026-01-20 14:37:12.186 250022 DEBUG nova.compute.manager [None req-f4983e58-f941-4b78-8c6c-624d400382b0 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] [instance: a6c080ba-dcec-4724-ac6c-12c69f617401] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 20 14:37:12 compute-0 nova_compute[250018]: 2026-01-20 14:37:12.187 250022 DEBUG nova.network.neutron [None req-f4983e58-f941-4b78-8c6c-624d400382b0 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] [instance: a6c080ba-dcec-4724-ac6c-12c69f617401] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 14:37:12 compute-0 nova_compute[250018]: 2026-01-20 14:37:12.206 250022 INFO nova.virt.libvirt.driver [None req-f4983e58-f941-4b78-8c6c-624d400382b0 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] [instance: a6c080ba-dcec-4724-ac6c-12c69f617401] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 14:37:12 compute-0 nova_compute[250018]: 2026-01-20 14:37:12.224 250022 DEBUG nova.compute.manager [None req-f4983e58-f941-4b78-8c6c-624d400382b0 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] [instance: a6c080ba-dcec-4724-ac6c-12c69f617401] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 14:37:12 compute-0 nova_compute[250018]: 2026-01-20 14:37:12.331 250022 DEBUG nova.compute.manager [None req-f4983e58-f941-4b78-8c6c-624d400382b0 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] [instance: a6c080ba-dcec-4724-ac6c-12c69f617401] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 14:37:12 compute-0 nova_compute[250018]: 2026-01-20 14:37:12.334 250022 DEBUG nova.virt.libvirt.driver [None req-f4983e58-f941-4b78-8c6c-624d400382b0 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] [instance: a6c080ba-dcec-4724-ac6c-12c69f617401] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 14:37:12 compute-0 nova_compute[250018]: 2026-01-20 14:37:12.334 250022 INFO nova.virt.libvirt.driver [None req-f4983e58-f941-4b78-8c6c-624d400382b0 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] [instance: a6c080ba-dcec-4724-ac6c-12c69f617401] Creating image(s)
Jan 20 14:37:12 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:37:12 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:37:12 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:37:12.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:37:12 compute-0 nova_compute[250018]: 2026-01-20 14:37:12.374 250022 DEBUG nova.storage.rbd_utils [None req-f4983e58-f941-4b78-8c6c-624d400382b0 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] rbd image a6c080ba-dcec-4724-ac6c-12c69f617401_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:37:12 compute-0 nova_compute[250018]: 2026-01-20 14:37:12.402 250022 DEBUG nova.storage.rbd_utils [None req-f4983e58-f941-4b78-8c6c-624d400382b0 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] rbd image a6c080ba-dcec-4724-ac6c-12c69f617401_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:37:12 compute-0 nova_compute[250018]: 2026-01-20 14:37:12.426 250022 DEBUG nova.storage.rbd_utils [None req-f4983e58-f941-4b78-8c6c-624d400382b0 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] rbd image a6c080ba-dcec-4724-ac6c-12c69f617401_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:37:12 compute-0 nova_compute[250018]: 2026-01-20 14:37:12.431 250022 DEBUG oslo_concurrency.processutils [None req-f4983e58-f941-4b78-8c6c-624d400382b0 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:37:12 compute-0 nova_compute[250018]: 2026-01-20 14:37:12.457 250022 DEBUG nova.policy [None req-f4983e58-f941-4b78-8c6c-624d400382b0 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '56e2959629114d3d8a48e7a80ed96c4b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3750c56415134773aa9d9880038f1749', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 20 14:37:12 compute-0 nova_compute[250018]: 2026-01-20 14:37:12.495 250022 DEBUG oslo_concurrency.processutils [None req-f4983e58-f941-4b78-8c6c-624d400382b0 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:37:12 compute-0 nova_compute[250018]: 2026-01-20 14:37:12.496 250022 DEBUG oslo_concurrency.lockutils [None req-f4983e58-f941-4b78-8c6c-624d400382b0 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Acquiring lock "82d5c1918fd7c974214c7a48c1793a7a82560462" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:37:12 compute-0 nova_compute[250018]: 2026-01-20 14:37:12.497 250022 DEBUG oslo_concurrency.lockutils [None req-f4983e58-f941-4b78-8c6c-624d400382b0 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:37:12 compute-0 nova_compute[250018]: 2026-01-20 14:37:12.498 250022 DEBUG oslo_concurrency.lockutils [None req-f4983e58-f941-4b78-8c6c-624d400382b0 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:37:12 compute-0 nova_compute[250018]: 2026-01-20 14:37:12.525 250022 DEBUG nova.storage.rbd_utils [None req-f4983e58-f941-4b78-8c6c-624d400382b0 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] rbd image a6c080ba-dcec-4724-ac6c-12c69f617401_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:37:12 compute-0 nova_compute[250018]: 2026-01-20 14:37:12.530 250022 DEBUG oslo_concurrency.processutils [None req-f4983e58-f941-4b78-8c6c-624d400382b0 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 a6c080ba-dcec-4724-ac6c-12c69f617401_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:37:12 compute-0 nova_compute[250018]: 2026-01-20 14:37:12.817 250022 DEBUG oslo_concurrency.processutils [None req-f4983e58-f941-4b78-8c6c-624d400382b0 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 a6c080ba-dcec-4724-ac6c-12c69f617401_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.287s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:37:12 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e222 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:37:12 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e222 do_prune osdmap full prune enabled
Jan 20 14:37:12 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e223 e223: 3 total, 3 up, 3 in
Jan 20 14:37:12 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e223: 3 total, 3 up, 3 in
Jan 20 14:37:12 compute-0 happy_albattani[285746]: {
Jan 20 14:37:12 compute-0 happy_albattani[285746]:     "afc3740a-ccee-46f8-b343-22648cc89376": {
Jan 20 14:37:12 compute-0 happy_albattani[285746]:         "ceph_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 14:37:12 compute-0 happy_albattani[285746]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 20 14:37:12 compute-0 happy_albattani[285746]:         "osd_id": 0,
Jan 20 14:37:12 compute-0 happy_albattani[285746]:         "osd_uuid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 14:37:12 compute-0 happy_albattani[285746]:         "type": "bluestore"
Jan 20 14:37:12 compute-0 happy_albattani[285746]:     }
Jan 20 14:37:12 compute-0 happy_albattani[285746]: }
Jan 20 14:37:12 compute-0 systemd[1]: libpod-16cd909f16ee3b779ce67c8d5accfd351947c919cd137e3b11b9b47d76ba5b49.scope: Deactivated successfully.
Jan 20 14:37:12 compute-0 podman[285730]: 2026-01-20 14:37:12.893329747 +0000 UTC m=+1.017404856 container died 16cd909f16ee3b779ce67c8d5accfd351947c919cd137e3b11b9b47d76ba5b49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_albattani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 20 14:37:12 compute-0 nova_compute[250018]: 2026-01-20 14:37:12.907 250022 DEBUG nova.storage.rbd_utils [None req-f4983e58-f941-4b78-8c6c-624d400382b0 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] resizing rbd image a6c080ba-dcec-4724-ac6c-12c69f617401_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 20 14:37:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-74e0640db24e3711d92d3ab79a57d35dac7580268986ecfaae15e16d86e7211a-merged.mount: Deactivated successfully.
Jan 20 14:37:12 compute-0 ceph-mon[74360]: osdmap e222: 3 total, 3 up, 3 in
Jan 20 14:37:12 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3321828056' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:37:12 compute-0 ceph-mon[74360]: osdmap e223: 3 total, 3 up, 3 in
Jan 20 14:37:12 compute-0 podman[285730]: 2026-01-20 14:37:12.951617439 +0000 UTC m=+1.075692558 container remove 16cd909f16ee3b779ce67c8d5accfd351947c919cd137e3b11b9b47d76ba5b49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_albattani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 20 14:37:12 compute-0 systemd[1]: libpod-conmon-16cd909f16ee3b779ce67c8d5accfd351947c919cd137e3b11b9b47d76ba5b49.scope: Deactivated successfully.
Jan 20 14:37:12 compute-0 sudo[285566]: pam_unix(sudo:session): session closed for user root
Jan 20 14:37:12 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 14:37:12 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:37:12 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 14:37:13 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:37:13 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 74f23c34-addf-4a40-8ef2-16561d0579e6 does not exist
Jan 20 14:37:13 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 75517e39-d94c-457b-9322-032d7fce5b05 does not exist
Jan 20 14:37:13 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 135d95cc-a899-4d51-ad58-effa31e11407 does not exist
Jan 20 14:37:13 compute-0 nova_compute[250018]: 2026-01-20 14:37:13.025 250022 DEBUG nova.objects.instance [None req-f4983e58-f941-4b78-8c6c-624d400382b0 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Lazy-loading 'migration_context' on Instance uuid a6c080ba-dcec-4724-ac6c-12c69f617401 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:37:13 compute-0 nova_compute[250018]: 2026-01-20 14:37:13.044 250022 DEBUG nova.virt.libvirt.driver [None req-f4983e58-f941-4b78-8c6c-624d400382b0 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] [instance: a6c080ba-dcec-4724-ac6c-12c69f617401] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 14:37:13 compute-0 nova_compute[250018]: 2026-01-20 14:37:13.045 250022 DEBUG nova.virt.libvirt.driver [None req-f4983e58-f941-4b78-8c6c-624d400382b0 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] [instance: a6c080ba-dcec-4724-ac6c-12c69f617401] Ensure instance console log exists: /var/lib/nova/instances/a6c080ba-dcec-4724-ac6c-12c69f617401/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 14:37:13 compute-0 nova_compute[250018]: 2026-01-20 14:37:13.045 250022 DEBUG oslo_concurrency.lockutils [None req-f4983e58-f941-4b78-8c6c-624d400382b0 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:37:13 compute-0 nova_compute[250018]: 2026-01-20 14:37:13.046 250022 DEBUG oslo_concurrency.lockutils [None req-f4983e58-f941-4b78-8c6c-624d400382b0 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:37:13 compute-0 nova_compute[250018]: 2026-01-20 14:37:13.046 250022 DEBUG oslo_concurrency.lockutils [None req-f4983e58-f941-4b78-8c6c-624d400382b0 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:37:13 compute-0 sudo[285948]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:37:13 compute-0 sudo[285948]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:37:13 compute-0 sudo[285948]: pam_unix(sudo:session): session closed for user root
Jan 20 14:37:13 compute-0 sudo[285975]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 14:37:13 compute-0 sudo[285975]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:37:13 compute-0 sudo[285975]: pam_unix(sudo:session): session closed for user root
Jan 20 14:37:13 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:37:13 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:37:13 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:37:13.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:37:13 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1457: 321 pgs: 321 active+clean; 121 MiB data, 612 MiB used, 20 GiB / 21 GiB avail; 683 KiB/s rd, 4.1 MiB/s wr, 219 op/s
Jan 20 14:37:13 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e223 do_prune osdmap full prune enabled
Jan 20 14:37:13 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e224 e224: 3 total, 3 up, 3 in
Jan 20 14:37:13 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e224: 3 total, 3 up, 3 in
Jan 20 14:37:13 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:37:13 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:37:13 compute-0 ceph-mon[74360]: pgmap v1457: 321 pgs: 321 active+clean; 121 MiB data, 612 MiB used, 20 GiB / 21 GiB avail; 683 KiB/s rd, 4.1 MiB/s wr, 219 op/s
Jan 20 14:37:13 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/1783961933' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:37:13 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/1783961933' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:37:13 compute-0 ceph-mon[74360]: osdmap e224: 3 total, 3 up, 3 in
Jan 20 14:37:14 compute-0 nova_compute[250018]: 2026-01-20 14:37:14.008 250022 DEBUG nova.network.neutron [None req-f4983e58-f941-4b78-8c6c-624d400382b0 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] [instance: a6c080ba-dcec-4724-ac6c-12c69f617401] Successfully created port: 95e5f35e-7004-4dfe-a8a2-38f433519d9c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 20 14:37:14 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:37:14 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:37:14 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:37:14.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:37:15 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e224 do_prune osdmap full prune enabled
Jan 20 14:37:15 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e225 e225: 3 total, 3 up, 3 in
Jan 20 14:37:15 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e225: 3 total, 3 up, 3 in
Jan 20 14:37:15 compute-0 nova_compute[250018]: 2026-01-20 14:37:15.335 250022 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768919820.3346362, 64830e28-fd5b-41c5-ba24-3f203f4d4b10 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:37:15 compute-0 nova_compute[250018]: 2026-01-20 14:37:15.336 250022 INFO nova.compute.manager [-] [instance: 64830e28-fd5b-41c5-ba24-3f203f4d4b10] VM Stopped (Lifecycle Event)
Jan 20 14:37:15 compute-0 nova_compute[250018]: 2026-01-20 14:37:15.363 250022 DEBUG nova.compute.manager [None req-a3411d8f-6b7f-4d74-a9f3-97e12610d38b - - - - - -] [instance: 64830e28-fd5b-41c5-ba24-3f203f4d4b10] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:37:15 compute-0 nova_compute[250018]: 2026-01-20 14:37:15.379 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:37:15 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:37:15 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:37:15 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:37:15.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:37:15 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1460: 321 pgs: 321 active+clean; 180 MiB data, 638 MiB used, 20 GiB / 21 GiB avail; 6.2 MiB/s rd, 9.3 MiB/s wr, 142 op/s
Jan 20 14:37:16 compute-0 ceph-mon[74360]: osdmap e225: 3 total, 3 up, 3 in
Jan 20 14:37:16 compute-0 ceph-mon[74360]: pgmap v1460: 321 pgs: 321 active+clean; 180 MiB data, 638 MiB used, 20 GiB / 21 GiB avail; 6.2 MiB/s rd, 9.3 MiB/s wr, 142 op/s
Jan 20 14:37:16 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:37:16 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:37:16 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:37:16.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:37:16 compute-0 nova_compute[250018]: 2026-01-20 14:37:16.408 250022 DEBUG nova.network.neutron [None req-f4983e58-f941-4b78-8c6c-624d400382b0 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] [instance: a6c080ba-dcec-4724-ac6c-12c69f617401] Successfully updated port: 95e5f35e-7004-4dfe-a8a2-38f433519d9c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 20 14:37:16 compute-0 nova_compute[250018]: 2026-01-20 14:37:16.423 250022 DEBUG oslo_concurrency.lockutils [None req-f4983e58-f941-4b78-8c6c-624d400382b0 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Acquiring lock "refresh_cache-a6c080ba-dcec-4724-ac6c-12c69f617401" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:37:16 compute-0 nova_compute[250018]: 2026-01-20 14:37:16.423 250022 DEBUG oslo_concurrency.lockutils [None req-f4983e58-f941-4b78-8c6c-624d400382b0 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Acquired lock "refresh_cache-a6c080ba-dcec-4724-ac6c-12c69f617401" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:37:16 compute-0 nova_compute[250018]: 2026-01-20 14:37:16.423 250022 DEBUG nova.network.neutron [None req-f4983e58-f941-4b78-8c6c-624d400382b0 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] [instance: a6c080ba-dcec-4724-ac6c-12c69f617401] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:37:16 compute-0 nova_compute[250018]: 2026-01-20 14:37:16.473 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:37:16 compute-0 nova_compute[250018]: 2026-01-20 14:37:16.498 250022 DEBUG nova.compute.manager [req-b5962002-2400-495a-bf3e-26a95f85a858 req-e6619df6-f08d-4156-bcca-f580af07074b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: a6c080ba-dcec-4724-ac6c-12c69f617401] Received event network-changed-95e5f35e-7004-4dfe-a8a2-38f433519d9c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:37:16 compute-0 nova_compute[250018]: 2026-01-20 14:37:16.498 250022 DEBUG nova.compute.manager [req-b5962002-2400-495a-bf3e-26a95f85a858 req-e6619df6-f08d-4156-bcca-f580af07074b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: a6c080ba-dcec-4724-ac6c-12c69f617401] Refreshing instance network info cache due to event network-changed-95e5f35e-7004-4dfe-a8a2-38f433519d9c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:37:16 compute-0 nova_compute[250018]: 2026-01-20 14:37:16.498 250022 DEBUG oslo_concurrency.lockutils [req-b5962002-2400-495a-bf3e-26a95f85a858 req-e6619df6-f08d-4156-bcca-f580af07074b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-a6c080ba-dcec-4724-ac6c-12c69f617401" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:37:16 compute-0 ceph-osd[84815]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 20 14:37:16 compute-0 ceph-osd[84815]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.0 total, 600.0 interval
                                           Cumulative writes: 21K writes, 82K keys, 21K commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.03 MB/s
                                           Cumulative WAL: 21K writes, 6864 syncs, 3.14 writes per sync, written: 0.07 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 9579 writes, 36K keys, 9579 commit groups, 1.0 writes per commit group, ingest: 36.29 MB, 0.06 MB/s
                                           Interval WAL: 9579 writes, 3772 syncs, 2.54 writes per sync, written: 0.04 GB, 0.06 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 20 14:37:17 compute-0 nova_compute[250018]: 2026-01-20 14:37:17.202 250022 DEBUG nova.network.neutron [None req-f4983e58-f941-4b78-8c6c-624d400382b0 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] [instance: a6c080ba-dcec-4724-ac6c-12c69f617401] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 14:37:17 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:37:17 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:37:17 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:37:17.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:37:17 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1461: 321 pgs: 321 active+clean; 212 MiB data, 661 MiB used, 20 GiB / 21 GiB avail; 8.4 MiB/s rd, 12 MiB/s wr, 262 op/s
Jan 20 14:37:17 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e225 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:37:17 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e225 do_prune osdmap full prune enabled
Jan 20 14:37:17 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e226 e226: 3 total, 3 up, 3 in
Jan 20 14:37:17 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e226: 3 total, 3 up, 3 in
Jan 20 14:37:18 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:37:18 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:37:18 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:37:18.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:37:18 compute-0 nova_compute[250018]: 2026-01-20 14:37:18.598 250022 DEBUG nova.network.neutron [None req-f4983e58-f941-4b78-8c6c-624d400382b0 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] [instance: a6c080ba-dcec-4724-ac6c-12c69f617401] Updating instance_info_cache with network_info: [{"id": "95e5f35e-7004-4dfe-a8a2-38f433519d9c", "address": "fa:16:3e:4d:56:b3", "network": {"id": "abb83e3e-0b12-431b-ad86-a1d271b5b46a", "bridge": "br-int", "label": "tempest-ImagesTestJSON-766235638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3750c56415134773aa9d9880038f1749", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap95e5f35e-70", "ovs_interfaceid": "95e5f35e-7004-4dfe-a8a2-38f433519d9c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:37:18 compute-0 nova_compute[250018]: 2026-01-20 14:37:18.622 250022 DEBUG oslo_concurrency.lockutils [None req-f4983e58-f941-4b78-8c6c-624d400382b0 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Releasing lock "refresh_cache-a6c080ba-dcec-4724-ac6c-12c69f617401" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:37:18 compute-0 nova_compute[250018]: 2026-01-20 14:37:18.622 250022 DEBUG nova.compute.manager [None req-f4983e58-f941-4b78-8c6c-624d400382b0 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] [instance: a6c080ba-dcec-4724-ac6c-12c69f617401] Instance network_info: |[{"id": "95e5f35e-7004-4dfe-a8a2-38f433519d9c", "address": "fa:16:3e:4d:56:b3", "network": {"id": "abb83e3e-0b12-431b-ad86-a1d271b5b46a", "bridge": "br-int", "label": "tempest-ImagesTestJSON-766235638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3750c56415134773aa9d9880038f1749", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap95e5f35e-70", "ovs_interfaceid": "95e5f35e-7004-4dfe-a8a2-38f433519d9c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 20 14:37:18 compute-0 nova_compute[250018]: 2026-01-20 14:37:18.622 250022 DEBUG oslo_concurrency.lockutils [req-b5962002-2400-495a-bf3e-26a95f85a858 req-e6619df6-f08d-4156-bcca-f580af07074b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-a6c080ba-dcec-4724-ac6c-12c69f617401" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:37:18 compute-0 nova_compute[250018]: 2026-01-20 14:37:18.623 250022 DEBUG nova.network.neutron [req-b5962002-2400-495a-bf3e-26a95f85a858 req-e6619df6-f08d-4156-bcca-f580af07074b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: a6c080ba-dcec-4724-ac6c-12c69f617401] Refreshing network info cache for port 95e5f35e-7004-4dfe-a8a2-38f433519d9c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:37:18 compute-0 nova_compute[250018]: 2026-01-20 14:37:18.625 250022 DEBUG nova.virt.libvirt.driver [None req-f4983e58-f941-4b78-8c6c-624d400382b0 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] [instance: a6c080ba-dcec-4724-ac6c-12c69f617401] Start _get_guest_xml network_info=[{"id": "95e5f35e-7004-4dfe-a8a2-38f433519d9c", "address": "fa:16:3e:4d:56:b3", "network": {"id": "abb83e3e-0b12-431b-ad86-a1d271b5b46a", "bridge": "br-int", "label": "tempest-ImagesTestJSON-766235638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3750c56415134773aa9d9880038f1749", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap95e5f35e-70", "ovs_interfaceid": "95e5f35e-7004-4dfe-a8a2-38f433519d9c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encrypted': False, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'guest_format': None, 'size': 0, 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 14:37:18 compute-0 nova_compute[250018]: 2026-01-20 14:37:18.630 250022 WARNING nova.virt.libvirt.driver [None req-f4983e58-f941-4b78-8c6c-624d400382b0 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:37:18 compute-0 nova_compute[250018]: 2026-01-20 14:37:18.635 250022 DEBUG nova.virt.libvirt.host [None req-f4983e58-f941-4b78-8c6c-624d400382b0 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 14:37:18 compute-0 nova_compute[250018]: 2026-01-20 14:37:18.636 250022 DEBUG nova.virt.libvirt.host [None req-f4983e58-f941-4b78-8c6c-624d400382b0 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 14:37:18 compute-0 nova_compute[250018]: 2026-01-20 14:37:18.640 250022 DEBUG nova.virt.libvirt.host [None req-f4983e58-f941-4b78-8c6c-624d400382b0 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 14:37:18 compute-0 nova_compute[250018]: 2026-01-20 14:37:18.641 250022 DEBUG nova.virt.libvirt.host [None req-f4983e58-f941-4b78-8c6c-624d400382b0 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 14:37:18 compute-0 nova_compute[250018]: 2026-01-20 14:37:18.642 250022 DEBUG nova.virt.libvirt.driver [None req-f4983e58-f941-4b78-8c6c-624d400382b0 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 14:37:18 compute-0 nova_compute[250018]: 2026-01-20 14:37:18.643 250022 DEBUG nova.virt.hardware [None req-f4983e58-f941-4b78-8c6c-624d400382b0 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 14:37:18 compute-0 nova_compute[250018]: 2026-01-20 14:37:18.643 250022 DEBUG nova.virt.hardware [None req-f4983e58-f941-4b78-8c6c-624d400382b0 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 14:37:18 compute-0 nova_compute[250018]: 2026-01-20 14:37:18.643 250022 DEBUG nova.virt.hardware [None req-f4983e58-f941-4b78-8c6c-624d400382b0 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 14:37:18 compute-0 nova_compute[250018]: 2026-01-20 14:37:18.644 250022 DEBUG nova.virt.hardware [None req-f4983e58-f941-4b78-8c6c-624d400382b0 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 14:37:18 compute-0 nova_compute[250018]: 2026-01-20 14:37:18.644 250022 DEBUG nova.virt.hardware [None req-f4983e58-f941-4b78-8c6c-624d400382b0 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 14:37:18 compute-0 nova_compute[250018]: 2026-01-20 14:37:18.644 250022 DEBUG nova.virt.hardware [None req-f4983e58-f941-4b78-8c6c-624d400382b0 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 14:37:18 compute-0 nova_compute[250018]: 2026-01-20 14:37:18.644 250022 DEBUG nova.virt.hardware [None req-f4983e58-f941-4b78-8c6c-624d400382b0 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 14:37:18 compute-0 nova_compute[250018]: 2026-01-20 14:37:18.644 250022 DEBUG nova.virt.hardware [None req-f4983e58-f941-4b78-8c6c-624d400382b0 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 14:37:18 compute-0 nova_compute[250018]: 2026-01-20 14:37:18.645 250022 DEBUG nova.virt.hardware [None req-f4983e58-f941-4b78-8c6c-624d400382b0 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 14:37:18 compute-0 nova_compute[250018]: 2026-01-20 14:37:18.645 250022 DEBUG nova.virt.hardware [None req-f4983e58-f941-4b78-8c6c-624d400382b0 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 14:37:18 compute-0 nova_compute[250018]: 2026-01-20 14:37:18.645 250022 DEBUG nova.virt.hardware [None req-f4983e58-f941-4b78-8c6c-624d400382b0 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 14:37:18 compute-0 nova_compute[250018]: 2026-01-20 14:37:18.648 250022 DEBUG oslo_concurrency.processutils [None req-f4983e58-f941-4b78-8c6c-624d400382b0 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:37:18 compute-0 ceph-mon[74360]: pgmap v1461: 321 pgs: 321 active+clean; 212 MiB data, 661 MiB used, 20 GiB / 21 GiB avail; 8.4 MiB/s rd, 12 MiB/s wr, 262 op/s
Jan 20 14:37:18 compute-0 ceph-mon[74360]: osdmap e226: 3 total, 3 up, 3 in
Jan 20 14:37:19 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:37:19 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1595490804' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:37:19 compute-0 nova_compute[250018]: 2026-01-20 14:37:19.109 250022 DEBUG oslo_concurrency.processutils [None req-f4983e58-f941-4b78-8c6c-624d400382b0 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:37:19 compute-0 nova_compute[250018]: 2026-01-20 14:37:19.140 250022 DEBUG nova.storage.rbd_utils [None req-f4983e58-f941-4b78-8c6c-624d400382b0 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] rbd image a6c080ba-dcec-4724-ac6c-12c69f617401_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:37:19 compute-0 nova_compute[250018]: 2026-01-20 14:37:19.144 250022 DEBUG oslo_concurrency.processutils [None req-f4983e58-f941-4b78-8c6c-624d400382b0 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:37:19 compute-0 nova_compute[250018]: 2026-01-20 14:37:19.335 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:37:19 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:37:19 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:37:19 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:37:19.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:37:19 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1463: 321 pgs: 321 active+clean; 185 MiB data, 648 MiB used, 20 GiB / 21 GiB avail; 7.9 MiB/s rd, 11 MiB/s wr, 250 op/s
Jan 20 14:37:19 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:37:19 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3192988678' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:37:19 compute-0 nova_compute[250018]: 2026-01-20 14:37:19.637 250022 DEBUG oslo_concurrency.processutils [None req-f4983e58-f941-4b78-8c6c-624d400382b0 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:37:19 compute-0 nova_compute[250018]: 2026-01-20 14:37:19.638 250022 DEBUG nova.virt.libvirt.vif [None req-f4983e58-f941-4b78-8c6c-624d400382b0 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:37:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-783077744',display_name='tempest-ImagesTestJSON-server-783077744',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-783077744',id=57,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3750c56415134773aa9d9880038f1749',ramdisk_id='',reservation_id='r-u7s20h7n',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-338390217',owner_user_name='tempest-ImagesTestJSON-338390217-project-member'},tags=TagList
,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:37:12Z,user_data=None,user_id='56e2959629114d3d8a48e7a80ed96c4b',uuid=a6c080ba-dcec-4724-ac6c-12c69f617401,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "95e5f35e-7004-4dfe-a8a2-38f433519d9c", "address": "fa:16:3e:4d:56:b3", "network": {"id": "abb83e3e-0b12-431b-ad86-a1d271b5b46a", "bridge": "br-int", "label": "tempest-ImagesTestJSON-766235638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3750c56415134773aa9d9880038f1749", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap95e5f35e-70", "ovs_interfaceid": "95e5f35e-7004-4dfe-a8a2-38f433519d9c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 14:37:19 compute-0 nova_compute[250018]: 2026-01-20 14:37:19.639 250022 DEBUG nova.network.os_vif_util [None req-f4983e58-f941-4b78-8c6c-624d400382b0 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Converting VIF {"id": "95e5f35e-7004-4dfe-a8a2-38f433519d9c", "address": "fa:16:3e:4d:56:b3", "network": {"id": "abb83e3e-0b12-431b-ad86-a1d271b5b46a", "bridge": "br-int", "label": "tempest-ImagesTestJSON-766235638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3750c56415134773aa9d9880038f1749", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap95e5f35e-70", "ovs_interfaceid": "95e5f35e-7004-4dfe-a8a2-38f433519d9c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:37:19 compute-0 nova_compute[250018]: 2026-01-20 14:37:19.640 250022 DEBUG nova.network.os_vif_util [None req-f4983e58-f941-4b78-8c6c-624d400382b0 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4d:56:b3,bridge_name='br-int',has_traffic_filtering=True,id=95e5f35e-7004-4dfe-a8a2-38f433519d9c,network=Network(abb83e3e-0b12-431b-ad86-a1d271b5b46a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap95e5f35e-70') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:37:19 compute-0 nova_compute[250018]: 2026-01-20 14:37:19.641 250022 DEBUG nova.objects.instance [None req-f4983e58-f941-4b78-8c6c-624d400382b0 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Lazy-loading 'pci_devices' on Instance uuid a6c080ba-dcec-4724-ac6c-12c69f617401 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:37:19 compute-0 nova_compute[250018]: 2026-01-20 14:37:19.656 250022 DEBUG nova.virt.libvirt.driver [None req-f4983e58-f941-4b78-8c6c-624d400382b0 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] [instance: a6c080ba-dcec-4724-ac6c-12c69f617401] End _get_guest_xml xml=<domain type="kvm">
Jan 20 14:37:19 compute-0 nova_compute[250018]:   <uuid>a6c080ba-dcec-4724-ac6c-12c69f617401</uuid>
Jan 20 14:37:19 compute-0 nova_compute[250018]:   <name>instance-00000039</name>
Jan 20 14:37:19 compute-0 nova_compute[250018]:   <memory>131072</memory>
Jan 20 14:37:19 compute-0 nova_compute[250018]:   <vcpu>1</vcpu>
Jan 20 14:37:19 compute-0 nova_compute[250018]:   <metadata>
Jan 20 14:37:19 compute-0 nova_compute[250018]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 14:37:19 compute-0 nova_compute[250018]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 14:37:19 compute-0 nova_compute[250018]:       <nova:name>tempest-ImagesTestJSON-server-783077744</nova:name>
Jan 20 14:37:19 compute-0 nova_compute[250018]:       <nova:creationTime>2026-01-20 14:37:18</nova:creationTime>
Jan 20 14:37:19 compute-0 nova_compute[250018]:       <nova:flavor name="m1.nano">
Jan 20 14:37:19 compute-0 nova_compute[250018]:         <nova:memory>128</nova:memory>
Jan 20 14:37:19 compute-0 nova_compute[250018]:         <nova:disk>1</nova:disk>
Jan 20 14:37:19 compute-0 nova_compute[250018]:         <nova:swap>0</nova:swap>
Jan 20 14:37:19 compute-0 nova_compute[250018]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 14:37:19 compute-0 nova_compute[250018]:         <nova:vcpus>1</nova:vcpus>
Jan 20 14:37:19 compute-0 nova_compute[250018]:       </nova:flavor>
Jan 20 14:37:19 compute-0 nova_compute[250018]:       <nova:owner>
Jan 20 14:37:19 compute-0 nova_compute[250018]:         <nova:user uuid="56e2959629114d3d8a48e7a80ed96c4b">tempest-ImagesTestJSON-338390217-project-member</nova:user>
Jan 20 14:37:19 compute-0 nova_compute[250018]:         <nova:project uuid="3750c56415134773aa9d9880038f1749">tempest-ImagesTestJSON-338390217</nova:project>
Jan 20 14:37:19 compute-0 nova_compute[250018]:       </nova:owner>
Jan 20 14:37:19 compute-0 nova_compute[250018]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 14:37:19 compute-0 nova_compute[250018]:       <nova:ports>
Jan 20 14:37:19 compute-0 nova_compute[250018]:         <nova:port uuid="95e5f35e-7004-4dfe-a8a2-38f433519d9c">
Jan 20 14:37:19 compute-0 nova_compute[250018]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Jan 20 14:37:19 compute-0 nova_compute[250018]:         </nova:port>
Jan 20 14:37:19 compute-0 nova_compute[250018]:       </nova:ports>
Jan 20 14:37:19 compute-0 nova_compute[250018]:     </nova:instance>
Jan 20 14:37:19 compute-0 nova_compute[250018]:   </metadata>
Jan 20 14:37:19 compute-0 nova_compute[250018]:   <sysinfo type="smbios">
Jan 20 14:37:19 compute-0 nova_compute[250018]:     <system>
Jan 20 14:37:19 compute-0 nova_compute[250018]:       <entry name="manufacturer">RDO</entry>
Jan 20 14:37:19 compute-0 nova_compute[250018]:       <entry name="product">OpenStack Compute</entry>
Jan 20 14:37:19 compute-0 nova_compute[250018]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 14:37:19 compute-0 nova_compute[250018]:       <entry name="serial">a6c080ba-dcec-4724-ac6c-12c69f617401</entry>
Jan 20 14:37:19 compute-0 nova_compute[250018]:       <entry name="uuid">a6c080ba-dcec-4724-ac6c-12c69f617401</entry>
Jan 20 14:37:19 compute-0 nova_compute[250018]:       <entry name="family">Virtual Machine</entry>
Jan 20 14:37:19 compute-0 nova_compute[250018]:     </system>
Jan 20 14:37:19 compute-0 nova_compute[250018]:   </sysinfo>
Jan 20 14:37:19 compute-0 nova_compute[250018]:   <os>
Jan 20 14:37:19 compute-0 nova_compute[250018]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 14:37:19 compute-0 nova_compute[250018]:     <boot dev="hd"/>
Jan 20 14:37:19 compute-0 nova_compute[250018]:     <smbios mode="sysinfo"/>
Jan 20 14:37:19 compute-0 nova_compute[250018]:   </os>
Jan 20 14:37:19 compute-0 nova_compute[250018]:   <features>
Jan 20 14:37:19 compute-0 nova_compute[250018]:     <acpi/>
Jan 20 14:37:19 compute-0 nova_compute[250018]:     <apic/>
Jan 20 14:37:19 compute-0 nova_compute[250018]:     <vmcoreinfo/>
Jan 20 14:37:19 compute-0 nova_compute[250018]:   </features>
Jan 20 14:37:19 compute-0 nova_compute[250018]:   <clock offset="utc">
Jan 20 14:37:19 compute-0 nova_compute[250018]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 14:37:19 compute-0 nova_compute[250018]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 14:37:19 compute-0 nova_compute[250018]:     <timer name="hpet" present="no"/>
Jan 20 14:37:19 compute-0 nova_compute[250018]:   </clock>
Jan 20 14:37:19 compute-0 nova_compute[250018]:   <cpu mode="custom" match="exact">
Jan 20 14:37:19 compute-0 nova_compute[250018]:     <model>Nehalem</model>
Jan 20 14:37:19 compute-0 nova_compute[250018]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 14:37:19 compute-0 nova_compute[250018]:   </cpu>
Jan 20 14:37:19 compute-0 nova_compute[250018]:   <devices>
Jan 20 14:37:19 compute-0 nova_compute[250018]:     <disk type="network" device="disk">
Jan 20 14:37:19 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 14:37:19 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/a6c080ba-dcec-4724-ac6c-12c69f617401_disk">
Jan 20 14:37:19 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:37:19 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:37:19 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:37:19 compute-0 nova_compute[250018]:       </source>
Jan 20 14:37:19 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 14:37:19 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:37:19 compute-0 nova_compute[250018]:       </auth>
Jan 20 14:37:19 compute-0 nova_compute[250018]:       <target dev="vda" bus="virtio"/>
Jan 20 14:37:19 compute-0 nova_compute[250018]:     </disk>
Jan 20 14:37:19 compute-0 nova_compute[250018]:     <disk type="network" device="cdrom">
Jan 20 14:37:19 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 14:37:19 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/a6c080ba-dcec-4724-ac6c-12c69f617401_disk.config">
Jan 20 14:37:19 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:37:19 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:37:19 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:37:19 compute-0 nova_compute[250018]:       </source>
Jan 20 14:37:19 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 14:37:19 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:37:19 compute-0 nova_compute[250018]:       </auth>
Jan 20 14:37:19 compute-0 nova_compute[250018]:       <target dev="sda" bus="sata"/>
Jan 20 14:37:19 compute-0 nova_compute[250018]:     </disk>
Jan 20 14:37:19 compute-0 nova_compute[250018]:     <interface type="ethernet">
Jan 20 14:37:19 compute-0 nova_compute[250018]:       <mac address="fa:16:3e:4d:56:b3"/>
Jan 20 14:37:19 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 14:37:19 compute-0 nova_compute[250018]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 14:37:19 compute-0 nova_compute[250018]:       <mtu size="1442"/>
Jan 20 14:37:19 compute-0 nova_compute[250018]:       <target dev="tap95e5f35e-70"/>
Jan 20 14:37:19 compute-0 nova_compute[250018]:     </interface>
Jan 20 14:37:19 compute-0 nova_compute[250018]:     <serial type="pty">
Jan 20 14:37:19 compute-0 nova_compute[250018]:       <log file="/var/lib/nova/instances/a6c080ba-dcec-4724-ac6c-12c69f617401/console.log" append="off"/>
Jan 20 14:37:19 compute-0 nova_compute[250018]:     </serial>
Jan 20 14:37:19 compute-0 nova_compute[250018]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 14:37:19 compute-0 nova_compute[250018]:     <video>
Jan 20 14:37:19 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 14:37:19 compute-0 nova_compute[250018]:     </video>
Jan 20 14:37:19 compute-0 nova_compute[250018]:     <input type="tablet" bus="usb"/>
Jan 20 14:37:19 compute-0 nova_compute[250018]:     <rng model="virtio">
Jan 20 14:37:19 compute-0 nova_compute[250018]:       <backend model="random">/dev/urandom</backend>
Jan 20 14:37:19 compute-0 nova_compute[250018]:     </rng>
Jan 20 14:37:19 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root"/>
Jan 20 14:37:19 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:37:19 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:37:19 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:37:19 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:37:19 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:37:19 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:37:19 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:37:19 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:37:19 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:37:19 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:37:19 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:37:19 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:37:19 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:37:19 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:37:19 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:37:19 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:37:19 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:37:19 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:37:19 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:37:19 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:37:19 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:37:19 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:37:19 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:37:19 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:37:19 compute-0 nova_compute[250018]:     <controller type="usb" index="0"/>
Jan 20 14:37:19 compute-0 nova_compute[250018]:     <memballoon model="virtio">
Jan 20 14:37:19 compute-0 nova_compute[250018]:       <stats period="10"/>
Jan 20 14:37:19 compute-0 nova_compute[250018]:     </memballoon>
Jan 20 14:37:19 compute-0 nova_compute[250018]:   </devices>
Jan 20 14:37:19 compute-0 nova_compute[250018]: </domain>
Jan 20 14:37:19 compute-0 nova_compute[250018]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 14:37:19 compute-0 nova_compute[250018]: 2026-01-20 14:37:19.658 250022 DEBUG nova.compute.manager [None req-f4983e58-f941-4b78-8c6c-624d400382b0 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] [instance: a6c080ba-dcec-4724-ac6c-12c69f617401] Preparing to wait for external event network-vif-plugged-95e5f35e-7004-4dfe-a8a2-38f433519d9c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 20 14:37:19 compute-0 nova_compute[250018]: 2026-01-20 14:37:19.658 250022 DEBUG oslo_concurrency.lockutils [None req-f4983e58-f941-4b78-8c6c-624d400382b0 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Acquiring lock "a6c080ba-dcec-4724-ac6c-12c69f617401-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:37:19 compute-0 nova_compute[250018]: 2026-01-20 14:37:19.658 250022 DEBUG oslo_concurrency.lockutils [None req-f4983e58-f941-4b78-8c6c-624d400382b0 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Lock "a6c080ba-dcec-4724-ac6c-12c69f617401-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:37:19 compute-0 nova_compute[250018]: 2026-01-20 14:37:19.658 250022 DEBUG oslo_concurrency.lockutils [None req-f4983e58-f941-4b78-8c6c-624d400382b0 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Lock "a6c080ba-dcec-4724-ac6c-12c69f617401-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:37:19 compute-0 nova_compute[250018]: 2026-01-20 14:37:19.659 250022 DEBUG nova.virt.libvirt.vif [None req-f4983e58-f941-4b78-8c6c-624d400382b0 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:37:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-783077744',display_name='tempest-ImagesTestJSON-server-783077744',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-783077744',id=57,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3750c56415134773aa9d9880038f1749',ramdisk_id='',reservation_id='r-u7s20h7n',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-338390217',owner_user_name='tempest-ImagesTestJSON-338390217-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:37:12Z,user_data=None,user_id='56e2959629114d3d8a48e7a80ed96c4b',uuid=a6c080ba-dcec-4724-ac6c-12c69f617401,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "95e5f35e-7004-4dfe-a8a2-38f433519d9c", "address": "fa:16:3e:4d:56:b3", "network": {"id": "abb83e3e-0b12-431b-ad86-a1d271b5b46a", "bridge": "br-int", "label": "tempest-ImagesTestJSON-766235638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3750c56415134773aa9d9880038f1749", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap95e5f35e-70", "ovs_interfaceid": "95e5f35e-7004-4dfe-a8a2-38f433519d9c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 14:37:19 compute-0 nova_compute[250018]: 2026-01-20 14:37:19.659 250022 DEBUG nova.network.os_vif_util [None req-f4983e58-f941-4b78-8c6c-624d400382b0 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Converting VIF {"id": "95e5f35e-7004-4dfe-a8a2-38f433519d9c", "address": "fa:16:3e:4d:56:b3", "network": {"id": "abb83e3e-0b12-431b-ad86-a1d271b5b46a", "bridge": "br-int", "label": "tempest-ImagesTestJSON-766235638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3750c56415134773aa9d9880038f1749", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap95e5f35e-70", "ovs_interfaceid": "95e5f35e-7004-4dfe-a8a2-38f433519d9c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:37:19 compute-0 nova_compute[250018]: 2026-01-20 14:37:19.660 250022 DEBUG nova.network.os_vif_util [None req-f4983e58-f941-4b78-8c6c-624d400382b0 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4d:56:b3,bridge_name='br-int',has_traffic_filtering=True,id=95e5f35e-7004-4dfe-a8a2-38f433519d9c,network=Network(abb83e3e-0b12-431b-ad86-a1d271b5b46a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap95e5f35e-70') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:37:19 compute-0 nova_compute[250018]: 2026-01-20 14:37:19.660 250022 DEBUG os_vif [None req-f4983e58-f941-4b78-8c6c-624d400382b0 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4d:56:b3,bridge_name='br-int',has_traffic_filtering=True,id=95e5f35e-7004-4dfe-a8a2-38f433519d9c,network=Network(abb83e3e-0b12-431b-ad86-a1d271b5b46a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap95e5f35e-70') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 14:37:19 compute-0 nova_compute[250018]: 2026-01-20 14:37:19.661 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:37:19 compute-0 nova_compute[250018]: 2026-01-20 14:37:19.661 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:37:19 compute-0 nova_compute[250018]: 2026-01-20 14:37:19.662 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:37:19 compute-0 nova_compute[250018]: 2026-01-20 14:37:19.665 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:37:19 compute-0 nova_compute[250018]: 2026-01-20 14:37:19.666 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap95e5f35e-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:37:19 compute-0 nova_compute[250018]: 2026-01-20 14:37:19.666 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap95e5f35e-70, col_values=(('external_ids', {'iface-id': '95e5f35e-7004-4dfe-a8a2-38f433519d9c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:4d:56:b3', 'vm-uuid': 'a6c080ba-dcec-4724-ac6c-12c69f617401'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:37:19 compute-0 nova_compute[250018]: 2026-01-20 14:37:19.667 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:37:19 compute-0 NetworkManager[48960]: <info>  [1768919839.6687] manager: (tap95e5f35e-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/89)
Jan 20 14:37:19 compute-0 nova_compute[250018]: 2026-01-20 14:37:19.669 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:37:19 compute-0 nova_compute[250018]: 2026-01-20 14:37:19.674 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:37:19 compute-0 nova_compute[250018]: 2026-01-20 14:37:19.674 250022 INFO os_vif [None req-f4983e58-f941-4b78-8c6c-624d400382b0 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4d:56:b3,bridge_name='br-int',has_traffic_filtering=True,id=95e5f35e-7004-4dfe-a8a2-38f433519d9c,network=Network(abb83e3e-0b12-431b-ad86-a1d271b5b46a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap95e5f35e-70')
Jan 20 14:37:19 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/1595490804' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:37:19 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3192988678' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:37:19 compute-0 nova_compute[250018]: 2026-01-20 14:37:19.912 250022 DEBUG nova.virt.libvirt.driver [None req-f4983e58-f941-4b78-8c6c-624d400382b0 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:37:19 compute-0 nova_compute[250018]: 2026-01-20 14:37:19.912 250022 DEBUG nova.virt.libvirt.driver [None req-f4983e58-f941-4b78-8c6c-624d400382b0 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:37:19 compute-0 nova_compute[250018]: 2026-01-20 14:37:19.912 250022 DEBUG nova.virt.libvirt.driver [None req-f4983e58-f941-4b78-8c6c-624d400382b0 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] No VIF found with MAC fa:16:3e:4d:56:b3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 14:37:19 compute-0 nova_compute[250018]: 2026-01-20 14:37:19.913 250022 INFO nova.virt.libvirt.driver [None req-f4983e58-f941-4b78-8c6c-624d400382b0 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] [instance: a6c080ba-dcec-4724-ac6c-12c69f617401] Using config drive
Jan 20 14:37:19 compute-0 nova_compute[250018]: 2026-01-20 14:37:19.937 250022 DEBUG nova.storage.rbd_utils [None req-f4983e58-f941-4b78-8c6c-624d400382b0 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] rbd image a6c080ba-dcec-4724-ac6c-12c69f617401_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:37:20 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:37:20 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:37:20 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:37:20.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:37:20 compute-0 nova_compute[250018]: 2026-01-20 14:37:20.850 250022 INFO nova.virt.libvirt.driver [None req-f4983e58-f941-4b78-8c6c-624d400382b0 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] [instance: a6c080ba-dcec-4724-ac6c-12c69f617401] Creating config drive at /var/lib/nova/instances/a6c080ba-dcec-4724-ac6c-12c69f617401/disk.config
Jan 20 14:37:20 compute-0 nova_compute[250018]: 2026-01-20 14:37:20.857 250022 DEBUG oslo_concurrency.processutils [None req-f4983e58-f941-4b78-8c6c-624d400382b0 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a6c080ba-dcec-4724-ac6c-12c69f617401/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpe0qi9fbn execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:37:20 compute-0 ceph-mon[74360]: pgmap v1463: 321 pgs: 321 active+clean; 185 MiB data, 648 MiB used, 20 GiB / 21 GiB avail; 7.9 MiB/s rd, 11 MiB/s wr, 250 op/s
Jan 20 14:37:20 compute-0 nova_compute[250018]: 2026-01-20 14:37:20.988 250022 DEBUG oslo_concurrency.processutils [None req-f4983e58-f941-4b78-8c6c-624d400382b0 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a6c080ba-dcec-4724-ac6c-12c69f617401/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpe0qi9fbn" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:37:21 compute-0 nova_compute[250018]: 2026-01-20 14:37:21.019 250022 DEBUG nova.storage.rbd_utils [None req-f4983e58-f941-4b78-8c6c-624d400382b0 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] rbd image a6c080ba-dcec-4724-ac6c-12c69f617401_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:37:21 compute-0 nova_compute[250018]: 2026-01-20 14:37:21.022 250022 DEBUG oslo_concurrency.processutils [None req-f4983e58-f941-4b78-8c6c-624d400382b0 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a6c080ba-dcec-4724-ac6c-12c69f617401/disk.config a6c080ba-dcec-4724-ac6c-12c69f617401_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:37:21 compute-0 nova_compute[250018]: 2026-01-20 14:37:21.175 250022 DEBUG oslo_concurrency.processutils [None req-f4983e58-f941-4b78-8c6c-624d400382b0 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a6c080ba-dcec-4724-ac6c-12c69f617401/disk.config a6c080ba-dcec-4724-ac6c-12c69f617401_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:37:21 compute-0 nova_compute[250018]: 2026-01-20 14:37:21.176 250022 INFO nova.virt.libvirt.driver [None req-f4983e58-f941-4b78-8c6c-624d400382b0 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] [instance: a6c080ba-dcec-4724-ac6c-12c69f617401] Deleting local config drive /var/lib/nova/instances/a6c080ba-dcec-4724-ac6c-12c69f617401/disk.config because it was imported into RBD.
Jan 20 14:37:21 compute-0 nova_compute[250018]: 2026-01-20 14:37:21.205 250022 DEBUG nova.network.neutron [req-b5962002-2400-495a-bf3e-26a95f85a858 req-e6619df6-f08d-4156-bcca-f580af07074b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: a6c080ba-dcec-4724-ac6c-12c69f617401] Updated VIF entry in instance network info cache for port 95e5f35e-7004-4dfe-a8a2-38f433519d9c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 14:37:21 compute-0 nova_compute[250018]: 2026-01-20 14:37:21.206 250022 DEBUG nova.network.neutron [req-b5962002-2400-495a-bf3e-26a95f85a858 req-e6619df6-f08d-4156-bcca-f580af07074b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: a6c080ba-dcec-4724-ac6c-12c69f617401] Updating instance_info_cache with network_info: [{"id": "95e5f35e-7004-4dfe-a8a2-38f433519d9c", "address": "fa:16:3e:4d:56:b3", "network": {"id": "abb83e3e-0b12-431b-ad86-a1d271b5b46a", "bridge": "br-int", "label": "tempest-ImagesTestJSON-766235638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3750c56415134773aa9d9880038f1749", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap95e5f35e-70", "ovs_interfaceid": "95e5f35e-7004-4dfe-a8a2-38f433519d9c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:37:21 compute-0 nova_compute[250018]: 2026-01-20 14:37:21.224 250022 DEBUG oslo_concurrency.lockutils [req-b5962002-2400-495a-bf3e-26a95f85a858 req-e6619df6-f08d-4156-bcca-f580af07074b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-a6c080ba-dcec-4724-ac6c-12c69f617401" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:37:21 compute-0 kernel: tap95e5f35e-70: entered promiscuous mode
Jan 20 14:37:21 compute-0 NetworkManager[48960]: <info>  [1768919841.2317] manager: (tap95e5f35e-70): new Tun device (/org/freedesktop/NetworkManager/Devices/90)
Jan 20 14:37:21 compute-0 ovn_controller[148666]: 2026-01-20T14:37:21Z|00164|binding|INFO|Claiming lport 95e5f35e-7004-4dfe-a8a2-38f433519d9c for this chassis.
Jan 20 14:37:21 compute-0 ovn_controller[148666]: 2026-01-20T14:37:21Z|00165|binding|INFO|95e5f35e-7004-4dfe-a8a2-38f433519d9c: Claiming fa:16:3e:4d:56:b3 10.100.0.14
Jan 20 14:37:21 compute-0 nova_compute[250018]: 2026-01-20 14:37:21.232 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:37:21 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:37:21.242 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4d:56:b3 10.100.0.14'], port_security=['fa:16:3e:4d:56:b3 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'a6c080ba-dcec-4724-ac6c-12c69f617401', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-abb83e3e-0b12-431b-ad86-a1d271b5b46a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3750c56415134773aa9d9880038f1749', 'neutron:revision_number': '2', 'neutron:security_group_ids': '2e302063-2ccd-4f7c-8835-ef521762a486', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4125934e-1dea-4e34-a38d-5291c850f0b2, chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=95e5f35e-7004-4dfe-a8a2-38f433519d9c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:37:21 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:37:21.244 160071 INFO neutron.agent.ovn.metadata.agent [-] Port 95e5f35e-7004-4dfe-a8a2-38f433519d9c in datapath abb83e3e-0b12-431b-ad86-a1d271b5b46a bound to our chassis
Jan 20 14:37:21 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:37:21.245 160071 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network abb83e3e-0b12-431b-ad86-a1d271b5b46a
Jan 20 14:37:21 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:37:21.256 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[630c4dba-d018-48f2-8556-0076fc17f269]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:37:21 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:37:21.257 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapabb83e3e-01 in ovnmeta-abb83e3e-0b12-431b-ad86-a1d271b5b46a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 14:37:21 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:37:21.259 257604 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapabb83e3e-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 14:37:21 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:37:21.259 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[a7d1584d-566a-4e2f-b7b5-2e33f7985ecd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:37:21 compute-0 systemd-udevd[286141]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:37:21 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:37:21.260 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[b8833b22-7e2b-4f41-a37a-0d0157d71a05]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:37:21 compute-0 systemd-machined[216401]: New machine qemu-25-instance-00000039.
Jan 20 14:37:21 compute-0 NetworkManager[48960]: <info>  [1768919841.2714] device (tap95e5f35e-70): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 14:37:21 compute-0 NetworkManager[48960]: <info>  [1768919841.2723] device (tap95e5f35e-70): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 14:37:21 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:37:21.273 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[ba531610-d123-4d6d-8dd8-4836e2c0688f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:37:21 compute-0 systemd[1]: Started Virtual Machine qemu-25-instance-00000039.
Jan 20 14:37:21 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:37:21.299 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[6da0b228-62ea-4fb0-9b02-fcae0270423c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:37:21 compute-0 ovn_controller[148666]: 2026-01-20T14:37:21Z|00166|binding|INFO|Setting lport 95e5f35e-7004-4dfe-a8a2-38f433519d9c ovn-installed in OVS
Jan 20 14:37:21 compute-0 ovn_controller[148666]: 2026-01-20T14:37:21Z|00167|binding|INFO|Setting lport 95e5f35e-7004-4dfe-a8a2-38f433519d9c up in Southbound
Jan 20 14:37:21 compute-0 nova_compute[250018]: 2026-01-20 14:37:21.362 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:37:21 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:37:21.366 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[3b839d27-52b4-43cf-b542-6ecc7d57b570]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:37:21 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:37:21.370 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[e7a63509-0983-42fb-a592-3fb4a96198a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:37:21 compute-0 NetworkManager[48960]: <info>  [1768919841.3717] manager: (tapabb83e3e-00): new Veth device (/org/freedesktop/NetworkManager/Devices/91)
Jan 20 14:37:21 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:37:21.403 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[da4dcc75-2867-45c7-876f-fe36d3a7aecc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:37:21 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:37:21.405 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[641f6834-a643-424c-9449-4d6b9368f23f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:37:21 compute-0 NetworkManager[48960]: <info>  [1768919841.4269] device (tapabb83e3e-00): carrier: link connected
Jan 20 14:37:21 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:37:21.432 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[7dbb9399-8984-40cf-95c9-e30d9d9221df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:37:21 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:37:21.448 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[3d655c7f-a4a8-48f3-b93d-ac31354aedd5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapabb83e3e-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fd:0b:d2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 54], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 578215, 'reachable_time': 27508, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 286173, 'error': None, 'target': 'ovnmeta-abb83e3e-0b12-431b-ad86-a1d271b5b46a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:37:21 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:37:21 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:37:21 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:37:21.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:37:21 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:37:21.467 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[0af79ff8-a93c-4e5e-8ca7-6e842de06edf]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fefd:bd2'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 578215, 'tstamp': 578215}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 286174, 'error': None, 'target': 'ovnmeta-abb83e3e-0b12-431b-ad86-a1d271b5b46a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:37:21 compute-0 nova_compute[250018]: 2026-01-20 14:37:21.475 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:37:21 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:37:21.487 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[82bd17d4-d0d3-4ddb-9d43-3800cf821751]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapabb83e3e-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fd:0b:d2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 54], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 578215, 'reachable_time': 27508, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 286175, 'error': None, 'target': 'ovnmeta-abb83e3e-0b12-431b-ad86-a1d271b5b46a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:37:21 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:37:21.517 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[4cf240de-ec0d-459d-9322-1bff9fa18825]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:37:21 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1464: 321 pgs: 321 active+clean; 101 MiB data, 589 MiB used, 20 GiB / 21 GiB avail; 4.8 MiB/s rd, 6.5 MiB/s wr, 206 op/s
Jan 20 14:37:21 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:37:21.568 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[50dea4ae-99c8-44ae-b078-e197469f1142]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:37:21 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:37:21.569 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapabb83e3e-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:37:21 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:37:21.569 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:37:21 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:37:21.570 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapabb83e3e-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:37:21 compute-0 nova_compute[250018]: 2026-01-20 14:37:21.571 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:37:21 compute-0 NetworkManager[48960]: <info>  [1768919841.5723] manager: (tapabb83e3e-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/92)
Jan 20 14:37:21 compute-0 kernel: tapabb83e3e-00: entered promiscuous mode
Jan 20 14:37:21 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:37:21.584 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapabb83e3e-00, col_values=(('external_ids', {'iface-id': 'dfacaf19-f896-4c13-a7ad-47b57cf03fc1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:37:21 compute-0 ovn_controller[148666]: 2026-01-20T14:37:21Z|00168|binding|INFO|Releasing lport dfacaf19-f896-4c13-a7ad-47b57cf03fc1 from this chassis (sb_readonly=0)
Jan 20 14:37:21 compute-0 nova_compute[250018]: 2026-01-20 14:37:21.585 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:37:21 compute-0 nova_compute[250018]: 2026-01-20 14:37:21.601 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:37:21 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:37:21.602 160071 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/abb83e3e-0b12-431b-ad86-a1d271b5b46a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/abb83e3e-0b12-431b-ad86-a1d271b5b46a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 14:37:21 compute-0 nova_compute[250018]: 2026-01-20 14:37:21.696 250022 DEBUG nova.compute.manager [req-5a5fac00-fad5-4866-8518-dabbd3c99752 req-13ae6dda-156b-43d5-a594-90d900ab6632 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: a6c080ba-dcec-4724-ac6c-12c69f617401] Received event network-vif-plugged-95e5f35e-7004-4dfe-a8a2-38f433519d9c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:37:21 compute-0 nova_compute[250018]: 2026-01-20 14:37:21.697 250022 DEBUG oslo_concurrency.lockutils [req-5a5fac00-fad5-4866-8518-dabbd3c99752 req-13ae6dda-156b-43d5-a594-90d900ab6632 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "a6c080ba-dcec-4724-ac6c-12c69f617401-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:37:21 compute-0 nova_compute[250018]: 2026-01-20 14:37:21.697 250022 DEBUG oslo_concurrency.lockutils [req-5a5fac00-fad5-4866-8518-dabbd3c99752 req-13ae6dda-156b-43d5-a594-90d900ab6632 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "a6c080ba-dcec-4724-ac6c-12c69f617401-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:37:21 compute-0 nova_compute[250018]: 2026-01-20 14:37:21.697 250022 DEBUG oslo_concurrency.lockutils [req-5a5fac00-fad5-4866-8518-dabbd3c99752 req-13ae6dda-156b-43d5-a594-90d900ab6632 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "a6c080ba-dcec-4724-ac6c-12c69f617401-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:37:21 compute-0 nova_compute[250018]: 2026-01-20 14:37:21.698 250022 DEBUG nova.compute.manager [req-5a5fac00-fad5-4866-8518-dabbd3c99752 req-13ae6dda-156b-43d5-a594-90d900ab6632 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: a6c080ba-dcec-4724-ac6c-12c69f617401] Processing event network-vif-plugged-95e5f35e-7004-4dfe-a8a2-38f433519d9c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 20 14:37:22 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:37:22.143 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[885333f7-2925-47a8-9154-18a3d609af43]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:37:22 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:37:22.145 160071 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 14:37:22 compute-0 ovn_metadata_agent[160049]: global
Jan 20 14:37:22 compute-0 ovn_metadata_agent[160049]:     log         /dev/log local0 debug
Jan 20 14:37:22 compute-0 ovn_metadata_agent[160049]:     log-tag     haproxy-metadata-proxy-abb83e3e-0b12-431b-ad86-a1d271b5b46a
Jan 20 14:37:22 compute-0 ovn_metadata_agent[160049]:     user        root
Jan 20 14:37:22 compute-0 ovn_metadata_agent[160049]:     group       root
Jan 20 14:37:22 compute-0 ovn_metadata_agent[160049]:     maxconn     1024
Jan 20 14:37:22 compute-0 ovn_metadata_agent[160049]:     pidfile     /var/lib/neutron/external/pids/abb83e3e-0b12-431b-ad86-a1d271b5b46a.pid.haproxy
Jan 20 14:37:22 compute-0 ovn_metadata_agent[160049]:     daemon
Jan 20 14:37:22 compute-0 ovn_metadata_agent[160049]: 
Jan 20 14:37:22 compute-0 ovn_metadata_agent[160049]: defaults
Jan 20 14:37:22 compute-0 ovn_metadata_agent[160049]:     log global
Jan 20 14:37:22 compute-0 ovn_metadata_agent[160049]:     mode http
Jan 20 14:37:22 compute-0 ovn_metadata_agent[160049]:     option httplog
Jan 20 14:37:22 compute-0 ovn_metadata_agent[160049]:     option dontlognull
Jan 20 14:37:22 compute-0 ovn_metadata_agent[160049]:     option http-server-close
Jan 20 14:37:22 compute-0 ovn_metadata_agent[160049]:     option forwardfor
Jan 20 14:37:22 compute-0 ovn_metadata_agent[160049]:     retries                 3
Jan 20 14:37:22 compute-0 ovn_metadata_agent[160049]:     timeout http-request    30s
Jan 20 14:37:22 compute-0 ovn_metadata_agent[160049]:     timeout connect         30s
Jan 20 14:37:22 compute-0 ovn_metadata_agent[160049]:     timeout client          32s
Jan 20 14:37:22 compute-0 ovn_metadata_agent[160049]:     timeout server          32s
Jan 20 14:37:22 compute-0 ovn_metadata_agent[160049]:     timeout http-keep-alive 30s
Jan 20 14:37:22 compute-0 ovn_metadata_agent[160049]: 
Jan 20 14:37:22 compute-0 ovn_metadata_agent[160049]: 
Jan 20 14:37:22 compute-0 ovn_metadata_agent[160049]: listen listener
Jan 20 14:37:22 compute-0 ovn_metadata_agent[160049]:     bind 169.254.169.254:80
Jan 20 14:37:22 compute-0 ovn_metadata_agent[160049]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 14:37:22 compute-0 ovn_metadata_agent[160049]:     http-request add-header X-OVN-Network-ID abb83e3e-0b12-431b-ad86-a1d271b5b46a
Jan 20 14:37:22 compute-0 ovn_metadata_agent[160049]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 14:37:22 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:37:22.146 160071 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-abb83e3e-0b12-431b-ad86-a1d271b5b46a', 'env', 'PROCESS_TAG=haproxy-abb83e3e-0b12-431b-ad86-a1d271b5b46a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/abb83e3e-0b12-431b-ad86-a1d271b5b46a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 14:37:22 compute-0 ceph-mon[74360]: pgmap v1464: 321 pgs: 321 active+clean; 101 MiB data, 589 MiB used, 20 GiB / 21 GiB avail; 4.8 MiB/s rd, 6.5 MiB/s wr, 206 op/s
Jan 20 14:37:22 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/1752266427' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:37:22 compute-0 nova_compute[250018]: 2026-01-20 14:37:22.265 250022 DEBUG nova.compute.manager [None req-f4983e58-f941-4b78-8c6c-624d400382b0 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] [instance: a6c080ba-dcec-4724-ac6c-12c69f617401] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 14:37:22 compute-0 nova_compute[250018]: 2026-01-20 14:37:22.266 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768919842.2651684, a6c080ba-dcec-4724-ac6c-12c69f617401 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:37:22 compute-0 nova_compute[250018]: 2026-01-20 14:37:22.267 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: a6c080ba-dcec-4724-ac6c-12c69f617401] VM Started (Lifecycle Event)
Jan 20 14:37:22 compute-0 nova_compute[250018]: 2026-01-20 14:37:22.269 250022 DEBUG nova.virt.libvirt.driver [None req-f4983e58-f941-4b78-8c6c-624d400382b0 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] [instance: a6c080ba-dcec-4724-ac6c-12c69f617401] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 14:37:22 compute-0 nova_compute[250018]: 2026-01-20 14:37:22.272 250022 INFO nova.virt.libvirt.driver [-] [instance: a6c080ba-dcec-4724-ac6c-12c69f617401] Instance spawned successfully.
Jan 20 14:37:22 compute-0 nova_compute[250018]: 2026-01-20 14:37:22.273 250022 DEBUG nova.virt.libvirt.driver [None req-f4983e58-f941-4b78-8c6c-624d400382b0 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] [instance: a6c080ba-dcec-4724-ac6c-12c69f617401] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 14:37:22 compute-0 nova_compute[250018]: 2026-01-20 14:37:22.302 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: a6c080ba-dcec-4724-ac6c-12c69f617401] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:37:22 compute-0 nova_compute[250018]: 2026-01-20 14:37:22.307 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: a6c080ba-dcec-4724-ac6c-12c69f617401] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:37:22 compute-0 nova_compute[250018]: 2026-01-20 14:37:22.310 250022 DEBUG nova.virt.libvirt.driver [None req-f4983e58-f941-4b78-8c6c-624d400382b0 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] [instance: a6c080ba-dcec-4724-ac6c-12c69f617401] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:37:22 compute-0 nova_compute[250018]: 2026-01-20 14:37:22.310 250022 DEBUG nova.virt.libvirt.driver [None req-f4983e58-f941-4b78-8c6c-624d400382b0 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] [instance: a6c080ba-dcec-4724-ac6c-12c69f617401] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:37:22 compute-0 nova_compute[250018]: 2026-01-20 14:37:22.311 250022 DEBUG nova.virt.libvirt.driver [None req-f4983e58-f941-4b78-8c6c-624d400382b0 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] [instance: a6c080ba-dcec-4724-ac6c-12c69f617401] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:37:22 compute-0 nova_compute[250018]: 2026-01-20 14:37:22.311 250022 DEBUG nova.virt.libvirt.driver [None req-f4983e58-f941-4b78-8c6c-624d400382b0 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] [instance: a6c080ba-dcec-4724-ac6c-12c69f617401] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:37:22 compute-0 nova_compute[250018]: 2026-01-20 14:37:22.312 250022 DEBUG nova.virt.libvirt.driver [None req-f4983e58-f941-4b78-8c6c-624d400382b0 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] [instance: a6c080ba-dcec-4724-ac6c-12c69f617401] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:37:22 compute-0 nova_compute[250018]: 2026-01-20 14:37:22.312 250022 DEBUG nova.virt.libvirt.driver [None req-f4983e58-f941-4b78-8c6c-624d400382b0 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] [instance: a6c080ba-dcec-4724-ac6c-12c69f617401] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:37:22 compute-0 nova_compute[250018]: 2026-01-20 14:37:22.349 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: a6c080ba-dcec-4724-ac6c-12c69f617401] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:37:22 compute-0 nova_compute[250018]: 2026-01-20 14:37:22.350 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768919842.2653282, a6c080ba-dcec-4724-ac6c-12c69f617401 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:37:22 compute-0 nova_compute[250018]: 2026-01-20 14:37:22.350 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: a6c080ba-dcec-4724-ac6c-12c69f617401] VM Paused (Lifecycle Event)
Jan 20 14:37:22 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:37:22 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:37:22 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:37:22.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:37:22 compute-0 nova_compute[250018]: 2026-01-20 14:37:22.371 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: a6c080ba-dcec-4724-ac6c-12c69f617401] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:37:22 compute-0 nova_compute[250018]: 2026-01-20 14:37:22.374 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768919842.2686746, a6c080ba-dcec-4724-ac6c-12c69f617401 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:37:22 compute-0 nova_compute[250018]: 2026-01-20 14:37:22.375 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: a6c080ba-dcec-4724-ac6c-12c69f617401] VM Resumed (Lifecycle Event)
Jan 20 14:37:22 compute-0 nova_compute[250018]: 2026-01-20 14:37:22.392 250022 INFO nova.compute.manager [None req-f4983e58-f941-4b78-8c6c-624d400382b0 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] [instance: a6c080ba-dcec-4724-ac6c-12c69f617401] Took 10.06 seconds to spawn the instance on the hypervisor.
Jan 20 14:37:22 compute-0 nova_compute[250018]: 2026-01-20 14:37:22.393 250022 DEBUG nova.compute.manager [None req-f4983e58-f941-4b78-8c6c-624d400382b0 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] [instance: a6c080ba-dcec-4724-ac6c-12c69f617401] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:37:22 compute-0 nova_compute[250018]: 2026-01-20 14:37:22.403 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: a6c080ba-dcec-4724-ac6c-12c69f617401] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:37:22 compute-0 nova_compute[250018]: 2026-01-20 14:37:22.406 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: a6c080ba-dcec-4724-ac6c-12c69f617401] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:37:22 compute-0 nova_compute[250018]: 2026-01-20 14:37:22.435 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: a6c080ba-dcec-4724-ac6c-12c69f617401] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:37:22 compute-0 nova_compute[250018]: 2026-01-20 14:37:22.456 250022 INFO nova.compute.manager [None req-f4983e58-f941-4b78-8c6c-624d400382b0 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] [instance: a6c080ba-dcec-4724-ac6c-12c69f617401] Took 11.07 seconds to build instance.
Jan 20 14:37:22 compute-0 nova_compute[250018]: 2026-01-20 14:37:22.471 250022 DEBUG oslo_concurrency.lockutils [None req-f4983e58-f941-4b78-8c6c-624d400382b0 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Lock "a6c080ba-dcec-4724-ac6c-12c69f617401" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.205s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:37:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:37:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:37:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:37:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:37:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:37:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:37:22 compute-0 podman[286250]: 2026-01-20 14:37:22.518992877 +0000 UTC m=+0.045812196 container create 979720845e632051c3a395f26edb8309cd6625b94fc2337a6fad7426b4a3199c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-abb83e3e-0b12-431b-ad86-a1d271b5b46a, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 20 14:37:22 compute-0 systemd[1]: Started libpod-conmon-979720845e632051c3a395f26edb8309cd6625b94fc2337a6fad7426b4a3199c.scope.
Jan 20 14:37:22 compute-0 podman[286250]: 2026-01-20 14:37:22.495333409 +0000 UTC m=+0.022152758 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 14:37:22 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:37:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1e2e65e46e0ea5d1729f9babc55609aab389931da6e622d4c10b0b5fef1df2f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 14:37:22 compute-0 podman[286250]: 2026-01-20 14:37:22.650708969 +0000 UTC m=+0.177528288 container init 979720845e632051c3a395f26edb8309cd6625b94fc2337a6fad7426b4a3199c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-abb83e3e-0b12-431b-ad86-a1d271b5b46a, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:37:22 compute-0 podman[286250]: 2026-01-20 14:37:22.657506983 +0000 UTC m=+0.184326302 container start 979720845e632051c3a395f26edb8309cd6625b94fc2337a6fad7426b4a3199c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-abb83e3e-0b12-431b-ad86-a1d271b5b46a, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:37:22 compute-0 neutron-haproxy-ovnmeta-abb83e3e-0b12-431b-ad86-a1d271b5b46a[286265]: [NOTICE]   (286269) : New worker (286271) forked
Jan 20 14:37:22 compute-0 neutron-haproxy-ovnmeta-abb83e3e-0b12-431b-ad86-a1d271b5b46a[286265]: [NOTICE]   (286269) : Loading success.
Jan 20 14:37:22 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e226 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:37:22 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e226 do_prune osdmap full prune enabled
Jan 20 14:37:22 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e227 e227: 3 total, 3 up, 3 in
Jan 20 14:37:22 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e227: 3 total, 3 up, 3 in
Jan 20 14:37:23 compute-0 nova_compute[250018]: 2026-01-20 14:37:23.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:37:23 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:37:23 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:37:23 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:37:23.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:37:23 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1466: 321 pgs: 321 active+clean; 101 MiB data, 589 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 3.8 MiB/s wr, 151 op/s
Jan 20 14:37:23 compute-0 sudo[286281]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:37:23 compute-0 sudo[286281]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:37:23 compute-0 sudo[286281]: pam_unix(sudo:session): session closed for user root
Jan 20 14:37:23 compute-0 nova_compute[250018]: 2026-01-20 14:37:23.838 250022 DEBUG nova.compute.manager [req-bddf8b95-e630-43df-aac5-b0c5d10e5602 req-c7a4df6a-eeb6-4315-8308-b4ab17c6efa1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: a6c080ba-dcec-4724-ac6c-12c69f617401] Received event network-vif-plugged-95e5f35e-7004-4dfe-a8a2-38f433519d9c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:37:23 compute-0 nova_compute[250018]: 2026-01-20 14:37:23.840 250022 DEBUG oslo_concurrency.lockutils [req-bddf8b95-e630-43df-aac5-b0c5d10e5602 req-c7a4df6a-eeb6-4315-8308-b4ab17c6efa1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "a6c080ba-dcec-4724-ac6c-12c69f617401-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:37:23 compute-0 nova_compute[250018]: 2026-01-20 14:37:23.841 250022 DEBUG oslo_concurrency.lockutils [req-bddf8b95-e630-43df-aac5-b0c5d10e5602 req-c7a4df6a-eeb6-4315-8308-b4ab17c6efa1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "a6c080ba-dcec-4724-ac6c-12c69f617401-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:37:23 compute-0 nova_compute[250018]: 2026-01-20 14:37:23.841 250022 DEBUG oslo_concurrency.lockutils [req-bddf8b95-e630-43df-aac5-b0c5d10e5602 req-c7a4df6a-eeb6-4315-8308-b4ab17c6efa1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "a6c080ba-dcec-4724-ac6c-12c69f617401-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:37:23 compute-0 nova_compute[250018]: 2026-01-20 14:37:23.842 250022 DEBUG nova.compute.manager [req-bddf8b95-e630-43df-aac5-b0c5d10e5602 req-c7a4df6a-eeb6-4315-8308-b4ab17c6efa1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: a6c080ba-dcec-4724-ac6c-12c69f617401] No waiting events found dispatching network-vif-plugged-95e5f35e-7004-4dfe-a8a2-38f433519d9c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:37:23 compute-0 nova_compute[250018]: 2026-01-20 14:37:23.842 250022 WARNING nova.compute.manager [req-bddf8b95-e630-43df-aac5-b0c5d10e5602 req-c7a4df6a-eeb6-4315-8308-b4ab17c6efa1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: a6c080ba-dcec-4724-ac6c-12c69f617401] Received unexpected event network-vif-plugged-95e5f35e-7004-4dfe-a8a2-38f433519d9c for instance with vm_state active and task_state None.
Jan 20 14:37:23 compute-0 sudo[286306]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:37:23 compute-0 sudo[286306]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:37:23 compute-0 ceph-mon[74360]: osdmap e227: 3 total, 3 up, 3 in
Jan 20 14:37:23 compute-0 sudo[286306]: pam_unix(sudo:session): session closed for user root
Jan 20 14:37:24 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:37:24 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:37:24 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:37:24.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:37:24 compute-0 nova_compute[250018]: 2026-01-20 14:37:24.668 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:37:24 compute-0 ceph-mon[74360]: pgmap v1466: 321 pgs: 321 active+clean; 101 MiB data, 589 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 3.8 MiB/s wr, 151 op/s
Jan 20 14:37:25 compute-0 nova_compute[250018]: 2026-01-20 14:37:25.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:37:25 compute-0 nova_compute[250018]: 2026-01-20 14:37:25.072 250022 DEBUG nova.compute.manager [None req-cccf13ed-31c6-49e4-810f-96ab6540e282 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] [instance: a6c080ba-dcec-4724-ac6c-12c69f617401] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:37:25 compute-0 nova_compute[250018]: 2026-01-20 14:37:25.154 250022 INFO nova.compute.manager [None req-cccf13ed-31c6-49e4-810f-96ab6540e282 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] [instance: a6c080ba-dcec-4724-ac6c-12c69f617401] instance snapshotting
Jan 20 14:37:25 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:37:25 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:37:25 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:37:25.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:37:25 compute-0 nova_compute[250018]: 2026-01-20 14:37:25.541 250022 INFO nova.virt.libvirt.driver [None req-cccf13ed-31c6-49e4-810f-96ab6540e282 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] [instance: a6c080ba-dcec-4724-ac6c-12c69f617401] Beginning live snapshot process
Jan 20 14:37:25 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1467: 321 pgs: 321 active+clean; 88 MiB data, 579 MiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 21 KiB/s wr, 92 op/s
Jan 20 14:37:25 compute-0 nova_compute[250018]: 2026-01-20 14:37:25.696 250022 DEBUG nova.virt.libvirt.imagebackend [None req-cccf13ed-31c6-49e4-810f-96ab6540e282 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] No parent info for a32b3e07-16d8-46fd-9a7b-c242c432fcf9; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Jan 20 14:37:25 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/754849335' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:37:25 compute-0 ceph-mon[74360]: pgmap v1467: 321 pgs: 321 active+clean; 88 MiB data, 579 MiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 21 KiB/s wr, 92 op/s
Jan 20 14:37:25 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/4289147269' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:37:26 compute-0 nova_compute[250018]: 2026-01-20 14:37:26.019 250022 DEBUG nova.storage.rbd_utils [None req-cccf13ed-31c6-49e4-810f-96ab6540e282 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] creating snapshot(918a454c9b24415795f5da59457fa8e5) on rbd image(a6c080ba-dcec-4724-ac6c-12c69f617401_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 20 14:37:26 compute-0 nova_compute[250018]: 2026-01-20 14:37:26.053 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:37:26 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:37:26 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:37:26 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:37:26.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:37:26 compute-0 nova_compute[250018]: 2026-01-20 14:37:26.477 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:37:26 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e227 do_prune osdmap full prune enabled
Jan 20 14:37:26 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e228 e228: 3 total, 3 up, 3 in
Jan 20 14:37:26 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e228: 3 total, 3 up, 3 in
Jan 20 14:37:27 compute-0 nova_compute[250018]: 2026-01-20 14:37:27.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:37:27 compute-0 nova_compute[250018]: 2026-01-20 14:37:27.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:37:27 compute-0 nova_compute[250018]: 2026-01-20 14:37:27.056 250022 DEBUG nova.storage.rbd_utils [None req-cccf13ed-31c6-49e4-810f-96ab6540e282 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] cloning vms/a6c080ba-dcec-4724-ac6c-12c69f617401_disk@918a454c9b24415795f5da59457fa8e5 to images/a619a5a5-7b38-4a60-af81-76a595f37563 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Jan 20 14:37:27 compute-0 nova_compute[250018]: 2026-01-20 14:37:27.112 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:37:27 compute-0 nova_compute[250018]: 2026-01-20 14:37:27.113 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:37:27 compute-0 nova_compute[250018]: 2026-01-20 14:37:27.113 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:37:27 compute-0 nova_compute[250018]: 2026-01-20 14:37:27.113 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 14:37:27 compute-0 nova_compute[250018]: 2026-01-20 14:37:27.113 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:37:27 compute-0 nova_compute[250018]: 2026-01-20 14:37:27.209 250022 DEBUG nova.storage.rbd_utils [None req-cccf13ed-31c6-49e4-810f-96ab6540e282 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] flattening images/a619a5a5-7b38-4a60-af81-76a595f37563 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Jan 20 14:37:27 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:37:27 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:37:27 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:37:27.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:37:27 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:37:27 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2516451502' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:37:27 compute-0 nova_compute[250018]: 2026-01-20 14:37:27.534 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:37:27 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1469: 321 pgs: 321 active+clean; 88 MiB data, 579 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 20 KiB/s wr, 160 op/s
Jan 20 14:37:27 compute-0 nova_compute[250018]: 2026-01-20 14:37:27.617 250022 DEBUG nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] skipping disk for instance-00000039 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:37:27 compute-0 nova_compute[250018]: 2026-01-20 14:37:27.617 250022 DEBUG nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] skipping disk for instance-00000039 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:37:27 compute-0 nova_compute[250018]: 2026-01-20 14:37:27.770 250022 WARNING nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:37:27 compute-0 nova_compute[250018]: 2026-01-20 14:37:27.771 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4402MB free_disk=20.96738052368164GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 14:37:27 compute-0 nova_compute[250018]: 2026-01-20 14:37:27.772 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:37:27 compute-0 nova_compute[250018]: 2026-01-20 14:37:27.772 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:37:27 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:37:27 compute-0 nova_compute[250018]: 2026-01-20 14:37:27.852 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Instance a6c080ba-dcec-4724-ac6c-12c69f617401 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 14:37:27 compute-0 nova_compute[250018]: 2026-01-20 14:37:27.853 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 14:37:27 compute-0 nova_compute[250018]: 2026-01-20 14:37:27.853 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 14:37:27 compute-0 nova_compute[250018]: 2026-01-20 14:37:27.949 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:37:27 compute-0 nova_compute[250018]: 2026-01-20 14:37:27.973 250022 DEBUG nova.storage.rbd_utils [None req-cccf13ed-31c6-49e4-810f-96ab6540e282 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] removing snapshot(918a454c9b24415795f5da59457fa8e5) on rbd image(a6c080ba-dcec-4724-ac6c-12c69f617401_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Jan 20 14:37:28 compute-0 ceph-mon[74360]: osdmap e228: 3 total, 3 up, 3 in
Jan 20 14:37:28 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/2516451502' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:37:28 compute-0 ceph-mon[74360]: pgmap v1469: 321 pgs: 321 active+clean; 88 MiB data, 579 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 20 KiB/s wr, 160 op/s
Jan 20 14:37:28 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:37:28 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:37:28 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:37:28.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:37:28 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:37:28 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/907367584' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:37:28 compute-0 nova_compute[250018]: 2026-01-20 14:37:28.387 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:37:28 compute-0 nova_compute[250018]: 2026-01-20 14:37:28.392 250022 DEBUG nova.compute.provider_tree [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:37:28 compute-0 nova_compute[250018]: 2026-01-20 14:37:28.413 250022 DEBUG nova.scheduler.client.report [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:37:28 compute-0 nova_compute[250018]: 2026-01-20 14:37:28.450 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 14:37:28 compute-0 nova_compute[250018]: 2026-01-20 14:37:28.450 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.678s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:37:29 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e228 do_prune osdmap full prune enabled
Jan 20 14:37:29 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/907367584' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:37:29 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/22471952' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:37:29 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e229 e229: 3 total, 3 up, 3 in
Jan 20 14:37:29 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e229: 3 total, 3 up, 3 in
Jan 20 14:37:29 compute-0 nova_compute[250018]: 2026-01-20 14:37:29.255 250022 DEBUG nova.storage.rbd_utils [None req-cccf13ed-31c6-49e4-810f-96ab6540e282 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] creating snapshot(snap) on rbd image(a619a5a5-7b38-4a60-af81-76a595f37563) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 20 14:37:29 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:37:29 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:37:29 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:37:29.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:37:29 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1471: 321 pgs: 321 active+clean; 105 MiB data, 589 MiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 1.4 MiB/s wr, 173 op/s
Jan 20 14:37:29 compute-0 nova_compute[250018]: 2026-01-20 14:37:29.670 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:37:30 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e229 do_prune osdmap full prune enabled
Jan 20 14:37:30 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e230 e230: 3 total, 3 up, 3 in
Jan 20 14:37:30 compute-0 ceph-mon[74360]: osdmap e229: 3 total, 3 up, 3 in
Jan 20 14:37:30 compute-0 ceph-mon[74360]: pgmap v1471: 321 pgs: 321 active+clean; 105 MiB data, 589 MiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 1.4 MiB/s wr, 173 op/s
Jan 20 14:37:30 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/1863201075' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:37:30 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e230: 3 total, 3 up, 3 in
Jan 20 14:37:30 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:37:30 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:37:30 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:37:30.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:37:30 compute-0 nova_compute[250018]: 2026-01-20 14:37:30.450 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:37:30 compute-0 nova_compute[250018]: 2026-01-20 14:37:30.450 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:37:30 compute-0 nova_compute[250018]: 2026-01-20 14:37:30.451 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 14:37:30 compute-0 nova_compute[250018]: 2026-01-20 14:37:30.477 250022 ERROR nova.virt.libvirt.driver [None req-cccf13ed-31c6-49e4-810f-96ab6540e282 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Failed to snapshot image: nova.exception.ImageNotFound: Image a619a5a5-7b38-4a60-af81-76a595f37563 could not be found.
Jan 20 14:37:30 compute-0 nova_compute[250018]: 2026-01-20 14:37:30.477 250022 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Jan 20 14:37:30 compute-0 nova_compute[250018]: 2026-01-20 14:37:30.477 250022 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 691, in update
Jan 20 14:37:30 compute-0 nova_compute[250018]: 2026-01-20 14:37:30.477 250022 ERROR nova.virt.libvirt.driver     image = self._update_v2(context, sent_service_image_meta, data)
Jan 20 14:37:30 compute-0 nova_compute[250018]: 2026-01-20 14:37:30.477 250022 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 700, in _update_v2
Jan 20 14:37:30 compute-0 nova_compute[250018]: 2026-01-20 14:37:30.477 250022 ERROR nova.virt.libvirt.driver     image = self._client.call(
Jan 20 14:37:30 compute-0 nova_compute[250018]: 2026-01-20 14:37:30.477 250022 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 191, in call
Jan 20 14:37:30 compute-0 nova_compute[250018]: 2026-01-20 14:37:30.477 250022 ERROR nova.virt.libvirt.driver     result = getattr(controller, method)(*args, **kwargs)
Jan 20 14:37:30 compute-0 nova_compute[250018]: 2026-01-20 14:37:30.477 250022 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 440, in update
Jan 20 14:37:30 compute-0 nova_compute[250018]: 2026-01-20 14:37:30.477 250022 ERROR nova.virt.libvirt.driver     unvalidated_image = self.get(image_id)
Jan 20 14:37:30 compute-0 nova_compute[250018]: 2026-01-20 14:37:30.477 250022 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 197, in get
Jan 20 14:37:30 compute-0 nova_compute[250018]: 2026-01-20 14:37:30.477 250022 ERROR nova.virt.libvirt.driver     return self._get(image_id)
Jan 20 14:37:30 compute-0 nova_compute[250018]: 2026-01-20 14:37:30.477 250022 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/utils.py", line 649, in inner
Jan 20 14:37:30 compute-0 nova_compute[250018]: 2026-01-20 14:37:30.477 250022 ERROR nova.virt.libvirt.driver     return RequestIdProxy(wrapped(*args, **kwargs))
Jan 20 14:37:30 compute-0 nova_compute[250018]: 2026-01-20 14:37:30.477 250022 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 190, in _get
Jan 20 14:37:30 compute-0 nova_compute[250018]: 2026-01-20 14:37:30.477 250022 ERROR nova.virt.libvirt.driver     resp, body = self.http_client.get(url, headers=header)
Jan 20 14:37:30 compute-0 nova_compute[250018]: 2026-01-20 14:37:30.477 250022 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/keystoneauth1/adapter.py", line 395, in get
Jan 20 14:37:30 compute-0 nova_compute[250018]: 2026-01-20 14:37:30.477 250022 ERROR nova.virt.libvirt.driver     return self.request(url, 'GET', **kwargs)
Jan 20 14:37:30 compute-0 nova_compute[250018]: 2026-01-20 14:37:30.477 250022 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 380, in request
Jan 20 14:37:30 compute-0 nova_compute[250018]: 2026-01-20 14:37:30.477 250022 ERROR nova.virt.libvirt.driver     return self._handle_response(resp)
Jan 20 14:37:30 compute-0 nova_compute[250018]: 2026-01-20 14:37:30.477 250022 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 120, in _handle_response
Jan 20 14:37:30 compute-0 nova_compute[250018]: 2026-01-20 14:37:30.477 250022 ERROR nova.virt.libvirt.driver     raise exc.from_response(resp, resp.content)
Jan 20 14:37:30 compute-0 nova_compute[250018]: 2026-01-20 14:37:30.477 250022 ERROR nova.virt.libvirt.driver glanceclient.exc.HTTPNotFound: HTTP 404 Not Found: No image found with ID a619a5a5-7b38-4a60-af81-76a595f37563
Jan 20 14:37:30 compute-0 nova_compute[250018]: 2026-01-20 14:37:30.477 250022 ERROR nova.virt.libvirt.driver 
Jan 20 14:37:30 compute-0 nova_compute[250018]: 2026-01-20 14:37:30.477 250022 ERROR nova.virt.libvirt.driver During handling of the above exception, another exception occurred:
Jan 20 14:37:30 compute-0 nova_compute[250018]: 2026-01-20 14:37:30.477 250022 ERROR nova.virt.libvirt.driver 
Jan 20 14:37:30 compute-0 nova_compute[250018]: 2026-01-20 14:37:30.477 250022 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Jan 20 14:37:30 compute-0 nova_compute[250018]: 2026-01-20 14:37:30.477 250022 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3082, in snapshot
Jan 20 14:37:30 compute-0 nova_compute[250018]: 2026-01-20 14:37:30.477 250022 ERROR nova.virt.libvirt.driver     self._image_api.update(context, image_id, metadata,
Jan 20 14:37:30 compute-0 nova_compute[250018]: 2026-01-20 14:37:30.477 250022 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 1243, in update
Jan 20 14:37:30 compute-0 nova_compute[250018]: 2026-01-20 14:37:30.477 250022 ERROR nova.virt.libvirt.driver     return session.update(context, image_id, image_info, data=data,
Jan 20 14:37:30 compute-0 nova_compute[250018]: 2026-01-20 14:37:30.477 250022 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 693, in update
Jan 20 14:37:30 compute-0 nova_compute[250018]: 2026-01-20 14:37:30.477 250022 ERROR nova.virt.libvirt.driver     _reraise_translated_image_exception(image_id)
Jan 20 14:37:30 compute-0 nova_compute[250018]: 2026-01-20 14:37:30.477 250022 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 1031, in _reraise_translated_image_exception
Jan 20 14:37:30 compute-0 nova_compute[250018]: 2026-01-20 14:37:30.477 250022 ERROR nova.virt.libvirt.driver     raise new_exc.with_traceback(exc_trace)
Jan 20 14:37:30 compute-0 nova_compute[250018]: 2026-01-20 14:37:30.477 250022 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 691, in update
Jan 20 14:37:30 compute-0 nova_compute[250018]: 2026-01-20 14:37:30.477 250022 ERROR nova.virt.libvirt.driver     image = self._update_v2(context, sent_service_image_meta, data)
Jan 20 14:37:30 compute-0 nova_compute[250018]: 2026-01-20 14:37:30.477 250022 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 700, in _update_v2
Jan 20 14:37:30 compute-0 nova_compute[250018]: 2026-01-20 14:37:30.477 250022 ERROR nova.virt.libvirt.driver     image = self._client.call(
Jan 20 14:37:30 compute-0 nova_compute[250018]: 2026-01-20 14:37:30.477 250022 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 191, in call
Jan 20 14:37:30 compute-0 nova_compute[250018]: 2026-01-20 14:37:30.477 250022 ERROR nova.virt.libvirt.driver     result = getattr(controller, method)(*args, **kwargs)
Jan 20 14:37:30 compute-0 nova_compute[250018]: 2026-01-20 14:37:30.477 250022 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 440, in update
Jan 20 14:37:30 compute-0 nova_compute[250018]: 2026-01-20 14:37:30.477 250022 ERROR nova.virt.libvirt.driver     unvalidated_image = self.get(image_id)
Jan 20 14:37:30 compute-0 nova_compute[250018]: 2026-01-20 14:37:30.477 250022 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 197, in get
Jan 20 14:37:30 compute-0 nova_compute[250018]: 2026-01-20 14:37:30.477 250022 ERROR nova.virt.libvirt.driver     return self._get(image_id)
Jan 20 14:37:30 compute-0 nova_compute[250018]: 2026-01-20 14:37:30.477 250022 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/utils.py", line 649, in inner
Jan 20 14:37:30 compute-0 nova_compute[250018]: 2026-01-20 14:37:30.477 250022 ERROR nova.virt.libvirt.driver     return RequestIdProxy(wrapped(*args, **kwargs))
Jan 20 14:37:30 compute-0 nova_compute[250018]: 2026-01-20 14:37:30.477 250022 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 190, in _get
Jan 20 14:37:30 compute-0 nova_compute[250018]: 2026-01-20 14:37:30.477 250022 ERROR nova.virt.libvirt.driver     resp, body = self.http_client.get(url, headers=header)
Jan 20 14:37:30 compute-0 nova_compute[250018]: 2026-01-20 14:37:30.477 250022 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/keystoneauth1/adapter.py", line 395, in get
Jan 20 14:37:30 compute-0 nova_compute[250018]: 2026-01-20 14:37:30.477 250022 ERROR nova.virt.libvirt.driver     return self.request(url, 'GET', **kwargs)
Jan 20 14:37:30 compute-0 nova_compute[250018]: 2026-01-20 14:37:30.477 250022 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 380, in request
Jan 20 14:37:30 compute-0 nova_compute[250018]: 2026-01-20 14:37:30.477 250022 ERROR nova.virt.libvirt.driver     return self._handle_response(resp)
Jan 20 14:37:30 compute-0 nova_compute[250018]: 2026-01-20 14:37:30.477 250022 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 120, in _handle_response
Jan 20 14:37:30 compute-0 nova_compute[250018]: 2026-01-20 14:37:30.477 250022 ERROR nova.virt.libvirt.driver     raise exc.from_response(resp, resp.content)
Jan 20 14:37:30 compute-0 nova_compute[250018]: 2026-01-20 14:37:30.477 250022 ERROR nova.virt.libvirt.driver nova.exception.ImageNotFound: Image a619a5a5-7b38-4a60-af81-76a595f37563 could not be found.
Jan 20 14:37:30 compute-0 nova_compute[250018]: 2026-01-20 14:37:30.477 250022 ERROR nova.virt.libvirt.driver 
Jan 20 14:37:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:37:30.533 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=20, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=19) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:37:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:37:30.534 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 14:37:30 compute-0 nova_compute[250018]: 2026-01-20 14:37:30.534 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:37:30 compute-0 nova_compute[250018]: 2026-01-20 14:37:30.633 250022 DEBUG nova.storage.rbd_utils [None req-cccf13ed-31c6-49e4-810f-96ab6540e282 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] removing snapshot(snap) on rbd image(a619a5a5-7b38-4a60-af81-76a595f37563) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Jan 20 14:37:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:37:30.750 160071 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:37:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:37:30.751 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:37:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:37:30.751 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:37:31 compute-0 nova_compute[250018]: 2026-01-20 14:37:31.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:37:31 compute-0 nova_compute[250018]: 2026-01-20 14:37:31.051 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 14:37:31 compute-0 nova_compute[250018]: 2026-01-20 14:37:31.052 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 14:37:31 compute-0 nova_compute[250018]: 2026-01-20 14:37:31.077 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "refresh_cache-a6c080ba-dcec-4724-ac6c-12c69f617401" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:37:31 compute-0 nova_compute[250018]: 2026-01-20 14:37:31.078 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquired lock "refresh_cache-a6c080ba-dcec-4724-ac6c-12c69f617401" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:37:31 compute-0 nova_compute[250018]: 2026-01-20 14:37:31.078 250022 DEBUG nova.network.neutron [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] [instance: a6c080ba-dcec-4724-ac6c-12c69f617401] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 20 14:37:31 compute-0 nova_compute[250018]: 2026-01-20 14:37:31.078 250022 DEBUG nova.objects.instance [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lazy-loading 'info_cache' on Instance uuid a6c080ba-dcec-4724-ac6c-12c69f617401 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:37:31 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e230 do_prune osdmap full prune enabled
Jan 20 14:37:31 compute-0 ceph-mon[74360]: osdmap e230: 3 total, 3 up, 3 in
Jan 20 14:37:31 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/2696979408' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:37:31 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e231 e231: 3 total, 3 up, 3 in
Jan 20 14:37:31 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e231: 3 total, 3 up, 3 in
Jan 20 14:37:31 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:37:31 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:37:31 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:37:31.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:37:31 compute-0 nova_compute[250018]: 2026-01-20 14:37:31.519 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:37:31 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1474: 321 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 316 active+clean; 150 MiB data, 618 MiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 6.3 MiB/s wr, 156 op/s
Jan 20 14:37:31 compute-0 nova_compute[250018]: 2026-01-20 14:37:31.699 250022 WARNING nova.compute.manager [None req-cccf13ed-31c6-49e4-810f-96ab6540e282 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] [instance: a6c080ba-dcec-4724-ac6c-12c69f617401] Image not found during snapshot: nova.exception.ImageNotFound: Image a619a5a5-7b38-4a60-af81-76a595f37563 could not be found.
Jan 20 14:37:31 compute-0 ceph-mgr[74653]: [devicehealth INFO root] Check health
Jan 20 14:37:32 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:37:32 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:37:32 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:37:32.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:37:32 compute-0 ceph-mon[74360]: osdmap e231: 3 total, 3 up, 3 in
Jan 20 14:37:32 compute-0 ceph-mon[74360]: pgmap v1474: 321 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 316 active+clean; 150 MiB data, 618 MiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 6.3 MiB/s wr, 156 op/s
Jan 20 14:37:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:37:32.535 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=367c1a2c-b16a-4828-ab5a-626bb50023b4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '20'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:37:32 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:37:33 compute-0 nova_compute[250018]: 2026-01-20 14:37:33.169 250022 DEBUG oslo_concurrency.lockutils [None req-0d7c8c7d-a54c-4c20-a7fd-9fea1a34cacf 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Acquiring lock "a6c080ba-dcec-4724-ac6c-12c69f617401" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:37:33 compute-0 nova_compute[250018]: 2026-01-20 14:37:33.170 250022 DEBUG oslo_concurrency.lockutils [None req-0d7c8c7d-a54c-4c20-a7fd-9fea1a34cacf 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Lock "a6c080ba-dcec-4724-ac6c-12c69f617401" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:37:33 compute-0 nova_compute[250018]: 2026-01-20 14:37:33.171 250022 DEBUG oslo_concurrency.lockutils [None req-0d7c8c7d-a54c-4c20-a7fd-9fea1a34cacf 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Acquiring lock "a6c080ba-dcec-4724-ac6c-12c69f617401-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:37:33 compute-0 nova_compute[250018]: 2026-01-20 14:37:33.172 250022 DEBUG oslo_concurrency.lockutils [None req-0d7c8c7d-a54c-4c20-a7fd-9fea1a34cacf 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Lock "a6c080ba-dcec-4724-ac6c-12c69f617401-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:37:33 compute-0 nova_compute[250018]: 2026-01-20 14:37:33.172 250022 DEBUG oslo_concurrency.lockutils [None req-0d7c8c7d-a54c-4c20-a7fd-9fea1a34cacf 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Lock "a6c080ba-dcec-4724-ac6c-12c69f617401-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:37:33 compute-0 nova_compute[250018]: 2026-01-20 14:37:33.175 250022 INFO nova.compute.manager [None req-0d7c8c7d-a54c-4c20-a7fd-9fea1a34cacf 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] [instance: a6c080ba-dcec-4724-ac6c-12c69f617401] Terminating instance
Jan 20 14:37:33 compute-0 nova_compute[250018]: 2026-01-20 14:37:33.177 250022 DEBUG nova.compute.manager [None req-0d7c8c7d-a54c-4c20-a7fd-9fea1a34cacf 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] [instance: a6c080ba-dcec-4724-ac6c-12c69f617401] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 14:37:33 compute-0 kernel: tap95e5f35e-70 (unregistering): left promiscuous mode
Jan 20 14:37:33 compute-0 NetworkManager[48960]: <info>  [1768919853.2141] device (tap95e5f35e-70): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 14:37:33 compute-0 ovn_controller[148666]: 2026-01-20T14:37:33Z|00169|binding|INFO|Releasing lport 95e5f35e-7004-4dfe-a8a2-38f433519d9c from this chassis (sb_readonly=0)
Jan 20 14:37:33 compute-0 ovn_controller[148666]: 2026-01-20T14:37:33Z|00170|binding|INFO|Setting lport 95e5f35e-7004-4dfe-a8a2-38f433519d9c down in Southbound
Jan 20 14:37:33 compute-0 nova_compute[250018]: 2026-01-20 14:37:33.223 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:37:33 compute-0 ovn_controller[148666]: 2026-01-20T14:37:33Z|00171|binding|INFO|Removing iface tap95e5f35e-70 ovn-installed in OVS
Jan 20 14:37:33 compute-0 nova_compute[250018]: 2026-01-20 14:37:33.225 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:37:33 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:37:33.236 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4d:56:b3 10.100.0.14'], port_security=['fa:16:3e:4d:56:b3 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'a6c080ba-dcec-4724-ac6c-12c69f617401', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-abb83e3e-0b12-431b-ad86-a1d271b5b46a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3750c56415134773aa9d9880038f1749', 'neutron:revision_number': '4', 'neutron:security_group_ids': '2e302063-2ccd-4f7c-8835-ef521762a486', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4125934e-1dea-4e34-a38d-5291c850f0b2, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=95e5f35e-7004-4dfe-a8a2-38f433519d9c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:37:33 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:37:33.238 160071 INFO neutron.agent.ovn.metadata.agent [-] Port 95e5f35e-7004-4dfe-a8a2-38f433519d9c in datapath abb83e3e-0b12-431b-ad86-a1d271b5b46a unbound from our chassis
Jan 20 14:37:33 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:37:33.239 160071 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network abb83e3e-0b12-431b-ad86-a1d271b5b46a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 14:37:33 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:37:33.240 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[35f7da23-525d-4e0f-a609-5730c5ea11ea]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:37:33 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:37:33.241 160071 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-abb83e3e-0b12-431b-ad86-a1d271b5b46a namespace which is not needed anymore
Jan 20 14:37:33 compute-0 nova_compute[250018]: 2026-01-20 14:37:33.252 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:37:33 compute-0 systemd[1]: machine-qemu\x2d25\x2dinstance\x2d00000039.scope: Deactivated successfully.
Jan 20 14:37:33 compute-0 systemd[1]: machine-qemu\x2d25\x2dinstance\x2d00000039.scope: Consumed 11.531s CPU time.
Jan 20 14:37:33 compute-0 systemd-machined[216401]: Machine qemu-25-instance-00000039 terminated.
Jan 20 14:37:33 compute-0 nova_compute[250018]: 2026-01-20 14:37:33.299 250022 DEBUG nova.network.neutron [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] [instance: a6c080ba-dcec-4724-ac6c-12c69f617401] Updating instance_info_cache with network_info: [{"id": "95e5f35e-7004-4dfe-a8a2-38f433519d9c", "address": "fa:16:3e:4d:56:b3", "network": {"id": "abb83e3e-0b12-431b-ad86-a1d271b5b46a", "bridge": "br-int", "label": "tempest-ImagesTestJSON-766235638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3750c56415134773aa9d9880038f1749", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap95e5f35e-70", "ovs_interfaceid": "95e5f35e-7004-4dfe-a8a2-38f433519d9c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:37:33 compute-0 nova_compute[250018]: 2026-01-20 14:37:33.330 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Releasing lock "refresh_cache-a6c080ba-dcec-4724-ac6c-12c69f617401" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:37:33 compute-0 nova_compute[250018]: 2026-01-20 14:37:33.330 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] [instance: a6c080ba-dcec-4724-ac6c-12c69f617401] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 20 14:37:33 compute-0 neutron-haproxy-ovnmeta-abb83e3e-0b12-431b-ad86-a1d271b5b46a[286265]: [NOTICE]   (286269) : haproxy version is 2.8.14-c23fe91
Jan 20 14:37:33 compute-0 neutron-haproxy-ovnmeta-abb83e3e-0b12-431b-ad86-a1d271b5b46a[286265]: [NOTICE]   (286269) : path to executable is /usr/sbin/haproxy
Jan 20 14:37:33 compute-0 neutron-haproxy-ovnmeta-abb83e3e-0b12-431b-ad86-a1d271b5b46a[286265]: [WARNING]  (286269) : Exiting Master process...
Jan 20 14:37:33 compute-0 neutron-haproxy-ovnmeta-abb83e3e-0b12-431b-ad86-a1d271b5b46a[286265]: [ALERT]    (286269) : Current worker (286271) exited with code 143 (Terminated)
Jan 20 14:37:33 compute-0 neutron-haproxy-ovnmeta-abb83e3e-0b12-431b-ad86-a1d271b5b46a[286265]: [WARNING]  (286269) : All workers exited. Exiting... (0)
Jan 20 14:37:33 compute-0 systemd[1]: libpod-979720845e632051c3a395f26edb8309cd6625b94fc2337a6fad7426b4a3199c.scope: Deactivated successfully.
Jan 20 14:37:33 compute-0 nova_compute[250018]: 2026-01-20 14:37:33.397 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:37:33 compute-0 nova_compute[250018]: 2026-01-20 14:37:33.400 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:37:33 compute-0 podman[286584]: 2026-01-20 14:37:33.401611369 +0000 UTC m=+0.048662982 container died 979720845e632051c3a395f26edb8309cd6625b94fc2337a6fad7426b4a3199c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-abb83e3e-0b12-431b-ad86-a1d271b5b46a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.schema-version=1.0)
Jan 20 14:37:33 compute-0 nova_compute[250018]: 2026-01-20 14:37:33.415 250022 INFO nova.virt.libvirt.driver [-] [instance: a6c080ba-dcec-4724-ac6c-12c69f617401] Instance destroyed successfully.
Jan 20 14:37:33 compute-0 nova_compute[250018]: 2026-01-20 14:37:33.417 250022 DEBUG nova.objects.instance [None req-0d7c8c7d-a54c-4c20-a7fd-9fea1a34cacf 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Lazy-loading 'resources' on Instance uuid a6c080ba-dcec-4724-ac6c-12c69f617401 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:37:33 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-979720845e632051c3a395f26edb8309cd6625b94fc2337a6fad7426b4a3199c-userdata-shm.mount: Deactivated successfully.
Jan 20 14:37:33 compute-0 nova_compute[250018]: 2026-01-20 14:37:33.431 250022 DEBUG nova.virt.libvirt.vif [None req-0d7c8c7d-a54c-4c20-a7fd-9fea1a34cacf 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:37:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-783077744',display_name='tempest-ImagesTestJSON-server-783077744',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-783077744',id=57,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:37:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3750c56415134773aa9d9880038f1749',ramdisk_id='',reservation_id='r-u7s20h7n',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesTestJSON-338390217',owner_user_name='tempest-ImagesTestJSON-338390217-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T14:37:31Z,user_data=None,user_id='56e2959629114d3d8a48e7a80ed96c4b',uuid=a6c080ba-dcec-4724-ac6c-12c69f617401,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "95e5f35e-7004-4dfe-a8a2-38f433519d9c", "address": "fa:16:3e:4d:56:b3", "network": {"id": "abb83e3e-0b12-431b-ad86-a1d271b5b46a", "bridge": "br-int", "label": "tempest-ImagesTestJSON-766235638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3750c56415134773aa9d9880038f1749", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap95e5f35e-70", "ovs_interfaceid": "95e5f35e-7004-4dfe-a8a2-38f433519d9c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 14:37:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-a1e2e65e46e0ea5d1729f9babc55609aab389931da6e622d4c10b0b5fef1df2f-merged.mount: Deactivated successfully.
Jan 20 14:37:33 compute-0 nova_compute[250018]: 2026-01-20 14:37:33.432 250022 DEBUG nova.network.os_vif_util [None req-0d7c8c7d-a54c-4c20-a7fd-9fea1a34cacf 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Converting VIF {"id": "95e5f35e-7004-4dfe-a8a2-38f433519d9c", "address": "fa:16:3e:4d:56:b3", "network": {"id": "abb83e3e-0b12-431b-ad86-a1d271b5b46a", "bridge": "br-int", "label": "tempest-ImagesTestJSON-766235638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3750c56415134773aa9d9880038f1749", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap95e5f35e-70", "ovs_interfaceid": "95e5f35e-7004-4dfe-a8a2-38f433519d9c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:37:33 compute-0 nova_compute[250018]: 2026-01-20 14:37:33.433 250022 DEBUG nova.network.os_vif_util [None req-0d7c8c7d-a54c-4c20-a7fd-9fea1a34cacf 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4d:56:b3,bridge_name='br-int',has_traffic_filtering=True,id=95e5f35e-7004-4dfe-a8a2-38f433519d9c,network=Network(abb83e3e-0b12-431b-ad86-a1d271b5b46a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap95e5f35e-70') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:37:33 compute-0 nova_compute[250018]: 2026-01-20 14:37:33.433 250022 DEBUG os_vif [None req-0d7c8c7d-a54c-4c20-a7fd-9fea1a34cacf 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4d:56:b3,bridge_name='br-int',has_traffic_filtering=True,id=95e5f35e-7004-4dfe-a8a2-38f433519d9c,network=Network(abb83e3e-0b12-431b-ad86-a1d271b5b46a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap95e5f35e-70') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 14:37:33 compute-0 nova_compute[250018]: 2026-01-20 14:37:33.435 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:37:33 compute-0 nova_compute[250018]: 2026-01-20 14:37:33.436 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap95e5f35e-70, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:37:33 compute-0 nova_compute[250018]: 2026-01-20 14:37:33.437 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:37:33 compute-0 nova_compute[250018]: 2026-01-20 14:37:33.440 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:37:33 compute-0 nova_compute[250018]: 2026-01-20 14:37:33.442 250022 INFO os_vif [None req-0d7c8c7d-a54c-4c20-a7fd-9fea1a34cacf 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4d:56:b3,bridge_name='br-int',has_traffic_filtering=True,id=95e5f35e-7004-4dfe-a8a2-38f433519d9c,network=Network(abb83e3e-0b12-431b-ad86-a1d271b5b46a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap95e5f35e-70')
Jan 20 14:37:33 compute-0 podman[286584]: 2026-01-20 14:37:33.445109222 +0000 UTC m=+0.092160845 container cleanup 979720845e632051c3a395f26edb8309cd6625b94fc2337a6fad7426b4a3199c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-abb83e3e-0b12-431b-ad86-a1d271b5b46a, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:37:33 compute-0 systemd[1]: libpod-conmon-979720845e632051c3a395f26edb8309cd6625b94fc2337a6fad7426b4a3199c.scope: Deactivated successfully.
Jan 20 14:37:33 compute-0 podman[286631]: 2026-01-20 14:37:33.506489468 +0000 UTC m=+0.039791094 container remove 979720845e632051c3a395f26edb8309cd6625b94fc2337a6fad7426b4a3199c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-abb83e3e-0b12-431b-ad86-a1d271b5b46a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 20 14:37:33 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:37:33.511 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[ac1c9f28-7e80-4436-80ae-4243a3b41644]: (4, ('Tue Jan 20 02:37:33 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-abb83e3e-0b12-431b-ad86-a1d271b5b46a (979720845e632051c3a395f26edb8309cd6625b94fc2337a6fad7426b4a3199c)\n979720845e632051c3a395f26edb8309cd6625b94fc2337a6fad7426b4a3199c\nTue Jan 20 02:37:33 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-abb83e3e-0b12-431b-ad86-a1d271b5b46a (979720845e632051c3a395f26edb8309cd6625b94fc2337a6fad7426b4a3199c)\n979720845e632051c3a395f26edb8309cd6625b94fc2337a6fad7426b4a3199c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:37:33 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:37:33.513 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[14984d41-9008-4df6-b9a7-6e074454b373]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:37:33 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:37:33.514 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapabb83e3e-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:37:33 compute-0 kernel: tapabb83e3e-00: left promiscuous mode
Jan 20 14:37:33 compute-0 nova_compute[250018]: 2026-01-20 14:37:33.517 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:37:33 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:37:33 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:37:33 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:37:33.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:37:33 compute-0 nova_compute[250018]: 2026-01-20 14:37:33.520 250022 DEBUG nova.compute.manager [req-fe8a8172-7b6e-4a5e-bc45-be6055caf61a req-f16fe266-69ea-4c3c-8be6-9ad94c9ceed8 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: a6c080ba-dcec-4724-ac6c-12c69f617401] Received event network-vif-unplugged-95e5f35e-7004-4dfe-a8a2-38f433519d9c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:37:33 compute-0 nova_compute[250018]: 2026-01-20 14:37:33.521 250022 DEBUG oslo_concurrency.lockutils [req-fe8a8172-7b6e-4a5e-bc45-be6055caf61a req-f16fe266-69ea-4c3c-8be6-9ad94c9ceed8 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "a6c080ba-dcec-4724-ac6c-12c69f617401-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:37:33 compute-0 nova_compute[250018]: 2026-01-20 14:37:33.521 250022 DEBUG oslo_concurrency.lockutils [req-fe8a8172-7b6e-4a5e-bc45-be6055caf61a req-f16fe266-69ea-4c3c-8be6-9ad94c9ceed8 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "a6c080ba-dcec-4724-ac6c-12c69f617401-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:37:33 compute-0 nova_compute[250018]: 2026-01-20 14:37:33.521 250022 DEBUG oslo_concurrency.lockutils [req-fe8a8172-7b6e-4a5e-bc45-be6055caf61a req-f16fe266-69ea-4c3c-8be6-9ad94c9ceed8 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "a6c080ba-dcec-4724-ac6c-12c69f617401-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:37:33 compute-0 nova_compute[250018]: 2026-01-20 14:37:33.521 250022 DEBUG nova.compute.manager [req-fe8a8172-7b6e-4a5e-bc45-be6055caf61a req-f16fe266-69ea-4c3c-8be6-9ad94c9ceed8 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: a6c080ba-dcec-4724-ac6c-12c69f617401] No waiting events found dispatching network-vif-unplugged-95e5f35e-7004-4dfe-a8a2-38f433519d9c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:37:33 compute-0 nova_compute[250018]: 2026-01-20 14:37:33.522 250022 DEBUG nova.compute.manager [req-fe8a8172-7b6e-4a5e-bc45-be6055caf61a req-f16fe266-69ea-4c3c-8be6-9ad94c9ceed8 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: a6c080ba-dcec-4724-ac6c-12c69f617401] Received event network-vif-unplugged-95e5f35e-7004-4dfe-a8a2-38f433519d9c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 20 14:37:33 compute-0 nova_compute[250018]: 2026-01-20 14:37:33.531 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:37:33 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:37:33.535 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[b9e1a7d8-467e-4848-968d-322eeca8bcd3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:37:33 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1475: 321 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 316 active+clean; 150 MiB data, 618 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 4.8 MiB/s wr, 119 op/s
Jan 20 14:37:33 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:37:33.550 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[ad3bd365-6095-44e3-afb2-728b7e2eeca7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:37:33 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:37:33.551 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[59e3fbdc-a206-4a0f-9cbe-95cd47100a5d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:37:33 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:37:33.565 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[ad35156d-c374-4059-9b2f-c6a96574244e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 578208, 'reachable_time': 28306, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 286655, 'error': None, 'target': 'ovnmeta-abb83e3e-0b12-431b-ad86-a1d271b5b46a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:37:33 compute-0 systemd[1]: run-netns-ovnmeta\x2dabb83e3e\x2d0b12\x2d431b\x2dad86\x2da1d271b5b46a.mount: Deactivated successfully.
Jan 20 14:37:33 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:37:33.568 160517 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-abb83e3e-0b12-431b-ad86-a1d271b5b46a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 14:37:33 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:37:33.568 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[96cf8f7b-d7b9-4be5-8c51-f8e56efc98ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:37:34 compute-0 nova_compute[250018]: 2026-01-20 14:37:34.023 250022 INFO nova.virt.libvirt.driver [None req-0d7c8c7d-a54c-4c20-a7fd-9fea1a34cacf 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] [instance: a6c080ba-dcec-4724-ac6c-12c69f617401] Deleting instance files /var/lib/nova/instances/a6c080ba-dcec-4724-ac6c-12c69f617401_del
Jan 20 14:37:34 compute-0 nova_compute[250018]: 2026-01-20 14:37:34.024 250022 INFO nova.virt.libvirt.driver [None req-0d7c8c7d-a54c-4c20-a7fd-9fea1a34cacf 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] [instance: a6c080ba-dcec-4724-ac6c-12c69f617401] Deletion of /var/lib/nova/instances/a6c080ba-dcec-4724-ac6c-12c69f617401_del complete
Jan 20 14:37:34 compute-0 nova_compute[250018]: 2026-01-20 14:37:34.070 250022 INFO nova.compute.manager [None req-0d7c8c7d-a54c-4c20-a7fd-9fea1a34cacf 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] [instance: a6c080ba-dcec-4724-ac6c-12c69f617401] Took 0.89 seconds to destroy the instance on the hypervisor.
Jan 20 14:37:34 compute-0 nova_compute[250018]: 2026-01-20 14:37:34.071 250022 DEBUG oslo.service.loopingcall [None req-0d7c8c7d-a54c-4c20-a7fd-9fea1a34cacf 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 14:37:34 compute-0 nova_compute[250018]: 2026-01-20 14:37:34.071 250022 DEBUG nova.compute.manager [-] [instance: a6c080ba-dcec-4724-ac6c-12c69f617401] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 14:37:34 compute-0 nova_compute[250018]: 2026-01-20 14:37:34.071 250022 DEBUG nova.network.neutron [-] [instance: a6c080ba-dcec-4724-ac6c-12c69f617401] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 14:37:34 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:37:34 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:37:34 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:37:34.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:37:34 compute-0 ceph-mon[74360]: pgmap v1475: 321 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 316 active+clean; 150 MiB data, 618 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 4.8 MiB/s wr, 119 op/s
Jan 20 14:37:35 compute-0 nova_compute[250018]: 2026-01-20 14:37:35.186 250022 DEBUG nova.network.neutron [-] [instance: a6c080ba-dcec-4724-ac6c-12c69f617401] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:37:35 compute-0 nova_compute[250018]: 2026-01-20 14:37:35.212 250022 INFO nova.compute.manager [-] [instance: a6c080ba-dcec-4724-ac6c-12c69f617401] Took 1.14 seconds to deallocate network for instance.
Jan 20 14:37:35 compute-0 nova_compute[250018]: 2026-01-20 14:37:35.262 250022 DEBUG oslo_concurrency.lockutils [None req-0d7c8c7d-a54c-4c20-a7fd-9fea1a34cacf 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:37:35 compute-0 nova_compute[250018]: 2026-01-20 14:37:35.262 250022 DEBUG oslo_concurrency.lockutils [None req-0d7c8c7d-a54c-4c20-a7fd-9fea1a34cacf 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:37:35 compute-0 nova_compute[250018]: 2026-01-20 14:37:35.330 250022 DEBUG oslo_concurrency.processutils [None req-0d7c8c7d-a54c-4c20-a7fd-9fea1a34cacf 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:37:35 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:37:35 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:37:35 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:37:35.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:37:35 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1476: 321 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 316 active+clean; 133 MiB data, 610 MiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 5.3 MiB/s wr, 178 op/s
Jan 20 14:37:35 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/471447498' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:37:35 compute-0 nova_compute[250018]: 2026-01-20 14:37:35.729 250022 DEBUG nova.compute.manager [req-5e1056e2-61bf-455f-9ec3-e8efe844018b req-b17be2c4-8002-4766-9d62-7b29c1b07582 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: a6c080ba-dcec-4724-ac6c-12c69f617401] Received event network-vif-plugged-95e5f35e-7004-4dfe-a8a2-38f433519d9c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:37:35 compute-0 nova_compute[250018]: 2026-01-20 14:37:35.730 250022 DEBUG oslo_concurrency.lockutils [req-5e1056e2-61bf-455f-9ec3-e8efe844018b req-b17be2c4-8002-4766-9d62-7b29c1b07582 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "a6c080ba-dcec-4724-ac6c-12c69f617401-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:37:35 compute-0 nova_compute[250018]: 2026-01-20 14:37:35.730 250022 DEBUG oslo_concurrency.lockutils [req-5e1056e2-61bf-455f-9ec3-e8efe844018b req-b17be2c4-8002-4766-9d62-7b29c1b07582 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "a6c080ba-dcec-4724-ac6c-12c69f617401-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:37:35 compute-0 nova_compute[250018]: 2026-01-20 14:37:35.731 250022 DEBUG oslo_concurrency.lockutils [req-5e1056e2-61bf-455f-9ec3-e8efe844018b req-b17be2c4-8002-4766-9d62-7b29c1b07582 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "a6c080ba-dcec-4724-ac6c-12c69f617401-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:37:35 compute-0 nova_compute[250018]: 2026-01-20 14:37:35.732 250022 DEBUG nova.compute.manager [req-5e1056e2-61bf-455f-9ec3-e8efe844018b req-b17be2c4-8002-4766-9d62-7b29c1b07582 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: a6c080ba-dcec-4724-ac6c-12c69f617401] No waiting events found dispatching network-vif-plugged-95e5f35e-7004-4dfe-a8a2-38f433519d9c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:37:35 compute-0 nova_compute[250018]: 2026-01-20 14:37:35.732 250022 WARNING nova.compute.manager [req-5e1056e2-61bf-455f-9ec3-e8efe844018b req-b17be2c4-8002-4766-9d62-7b29c1b07582 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: a6c080ba-dcec-4724-ac6c-12c69f617401] Received unexpected event network-vif-plugged-95e5f35e-7004-4dfe-a8a2-38f433519d9c for instance with vm_state deleted and task_state None.
Jan 20 14:37:35 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:37:35 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/160214948' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:37:35 compute-0 nova_compute[250018]: 2026-01-20 14:37:35.756 250022 DEBUG oslo_concurrency.processutils [None req-0d7c8c7d-a54c-4c20-a7fd-9fea1a34cacf 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:37:35 compute-0 nova_compute[250018]: 2026-01-20 14:37:35.762 250022 DEBUG nova.compute.provider_tree [None req-0d7c8c7d-a54c-4c20-a7fd-9fea1a34cacf 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:37:35 compute-0 nova_compute[250018]: 2026-01-20 14:37:35.785 250022 DEBUG nova.scheduler.client.report [None req-0d7c8c7d-a54c-4c20-a7fd-9fea1a34cacf 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:37:35 compute-0 nova_compute[250018]: 2026-01-20 14:37:35.821 250022 DEBUG oslo_concurrency.lockutils [None req-0d7c8c7d-a54c-4c20-a7fd-9fea1a34cacf 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.558s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:37:35 compute-0 nova_compute[250018]: 2026-01-20 14:37:35.893 250022 INFO nova.scheduler.client.report [None req-0d7c8c7d-a54c-4c20-a7fd-9fea1a34cacf 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Deleted allocations for instance a6c080ba-dcec-4724-ac6c-12c69f617401
Jan 20 14:37:35 compute-0 nova_compute[250018]: 2026-01-20 14:37:35.985 250022 DEBUG oslo_concurrency.lockutils [None req-0d7c8c7d-a54c-4c20-a7fd-9fea1a34cacf 56e2959629114d3d8a48e7a80ed96c4b 3750c56415134773aa9d9880038f1749 - - default default] Lock "a6c080ba-dcec-4724-ac6c-12c69f617401" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.814s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:37:36 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:37:36 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:37:36 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:37:36.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:37:36 compute-0 nova_compute[250018]: 2026-01-20 14:37:36.561 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:37:36 compute-0 nova_compute[250018]: 2026-01-20 14:37:36.633 250022 DEBUG nova.compute.manager [req-7195a5e0-3ba6-4f15-b446-30c4a0d727f0 req-94a7c9dc-cf32-4e20-88cb-22db5002057c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: a6c080ba-dcec-4724-ac6c-12c69f617401] Received event network-vif-deleted-95e5f35e-7004-4dfe-a8a2-38f433519d9c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:37:36 compute-0 ceph-mon[74360]: pgmap v1476: 321 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 316 active+clean; 133 MiB data, 610 MiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 5.3 MiB/s wr, 178 op/s
Jan 20 14:37:36 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/160214948' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:37:36 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/2472477865' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:37:36 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/3403050555' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:37:37 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:37:37 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:37:37 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:37:37.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:37:37 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1477: 321 pgs: 321 active+clean; 107 MiB data, 598 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 5.5 MiB/s wr, 197 op/s
Jan 20 14:37:37 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:37:37 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e231 do_prune osdmap full prune enabled
Jan 20 14:37:37 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e232 e232: 3 total, 3 up, 3 in
Jan 20 14:37:37 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e232: 3 total, 3 up, 3 in
Jan 20 14:37:38 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:37:38 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:37:38 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:37:38.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:37:38 compute-0 nova_compute[250018]: 2026-01-20 14:37:38.438 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:37:38 compute-0 ceph-mon[74360]: pgmap v1477: 321 pgs: 321 active+clean; 107 MiB data, 598 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 5.5 MiB/s wr, 197 op/s
Jan 20 14:37:38 compute-0 ceph-mon[74360]: osdmap e232: 3 total, 3 up, 3 in
Jan 20 14:37:39 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:37:39 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:37:39 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:37:39.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:37:39 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1479: 321 pgs: 321 active+clean; 112 MiB data, 599 MiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 5.0 MiB/s wr, 174 op/s
Jan 20 14:37:40 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:37:40 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:37:40 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:37:40.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:37:40 compute-0 ceph-mon[74360]: pgmap v1479: 321 pgs: 321 active+clean; 112 MiB data, 599 MiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 5.0 MiB/s wr, 174 op/s
Jan 20 14:37:41 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:37:41 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:37:41 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:37:41.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:37:41 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1480: 321 pgs: 321 active+clean; 134 MiB data, 618 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 3.5 MiB/s wr, 184 op/s
Jan 20 14:37:41 compute-0 nova_compute[250018]: 2026-01-20 14:37:41.608 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:37:41 compute-0 ceph-mon[74360]: pgmap v1480: 321 pgs: 321 active+clean; 134 MiB data, 618 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 3.5 MiB/s wr, 184 op/s
Jan 20 14:37:42 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:37:42 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:37:42 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:37:42.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:37:42 compute-0 podman[286684]: 2026-01-20 14:37:42.456629141 +0000 UTC m=+0.048105908 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 20 14:37:42 compute-0 podman[286683]: 2026-01-20 14:37:42.486139457 +0000 UTC m=+0.079415923 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 20 14:37:42 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:37:42 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/3670143803' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:37:42 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/3145099478' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:37:43 compute-0 nova_compute[250018]: 2026-01-20 14:37:43.440 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:37:43 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:37:43 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:37:43 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:37:43.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:37:43 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1481: 321 pgs: 321 active+clean; 134 MiB data, 618 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 3.5 MiB/s wr, 184 op/s
Jan 20 14:37:43 compute-0 nova_compute[250018]: 2026-01-20 14:37:43.768 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:37:43 compute-0 sudo[286726]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:37:43 compute-0 sudo[286726]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:37:43 compute-0 sudo[286726]: pam_unix(sudo:session): session closed for user root
Jan 20 14:37:43 compute-0 ceph-mon[74360]: pgmap v1481: 321 pgs: 321 active+clean; 134 MiB data, 618 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 3.5 MiB/s wr, 184 op/s
Jan 20 14:37:43 compute-0 sudo[286751]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:37:43 compute-0 sudo[286751]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:37:43 compute-0 sudo[286751]: pam_unix(sudo:session): session closed for user root
Jan 20 14:37:44 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:37:44 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:37:44 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:37:44.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:37:45 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:37:45 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:37:45 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:37:45.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:37:45 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1482: 321 pgs: 321 active+clean; 104 MiB data, 608 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 160 op/s
Jan 20 14:37:45 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/1773620489' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:37:46 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:37:46 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:37:46 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:37:46.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:37:46 compute-0 nova_compute[250018]: 2026-01-20 14:37:46.618 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:37:46 compute-0 ceph-mon[74360]: pgmap v1482: 321 pgs: 321 active+clean; 104 MiB data, 608 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 160 op/s
Jan 20 14:37:47 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:37:47 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:37:47 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:37:47.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:37:47 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1483: 321 pgs: 321 active+clean; 88 MiB data, 602 MiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 1.0 MiB/s wr, 177 op/s
Jan 20 14:37:47 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:37:47 compute-0 sshd-session[286778]: Invalid user ubuntu from 157.245.78.139 port 53438
Jan 20 14:37:48 compute-0 sshd-session[286778]: Connection closed by invalid user ubuntu 157.245.78.139 port 53438 [preauth]
Jan 20 14:37:48 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:37:48 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:37:48 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:37:48.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:37:48 compute-0 nova_compute[250018]: 2026-01-20 14:37:48.414 250022 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768919853.4120605, a6c080ba-dcec-4724-ac6c-12c69f617401 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:37:48 compute-0 nova_compute[250018]: 2026-01-20 14:37:48.414 250022 INFO nova.compute.manager [-] [instance: a6c080ba-dcec-4724-ac6c-12c69f617401] VM Stopped (Lifecycle Event)
Jan 20 14:37:48 compute-0 nova_compute[250018]: 2026-01-20 14:37:48.443 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:37:48 compute-0 nova_compute[250018]: 2026-01-20 14:37:48.497 250022 DEBUG nova.compute.manager [None req-32656bc6-a15b-4897-a30e-c7f89e4886cf - - - - - -] [instance: a6c080ba-dcec-4724-ac6c-12c69f617401] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:37:48 compute-0 ceph-mon[74360]: pgmap v1483: 321 pgs: 321 active+clean; 88 MiB data, 602 MiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 1.0 MiB/s wr, 177 op/s
Jan 20 14:37:49 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:37:49 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:37:49 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:37:49.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:37:49 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1484: 321 pgs: 321 active+clean; 88 MiB data, 599 MiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 915 KiB/s wr, 159 op/s
Jan 20 14:37:50 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:37:50 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:37:50 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:37:50.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:37:50 compute-0 ceph-mon[74360]: pgmap v1484: 321 pgs: 321 active+clean; 88 MiB data, 599 MiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 915 KiB/s wr, 159 op/s
Jan 20 14:37:51 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:37:51 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:37:51 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:37:51.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:37:51 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1485: 321 pgs: 321 active+clean; 88 MiB data, 599 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 717 KiB/s wr, 179 op/s
Jan 20 14:37:51 compute-0 nova_compute[250018]: 2026-01-20 14:37:51.620 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:37:52 compute-0 ceph-mon[74360]: pgmap v1485: 321 pgs: 321 active+clean; 88 MiB data, 599 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 717 KiB/s wr, 179 op/s
Jan 20 14:37:52 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:37:52 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:37:52 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:37:52.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:37:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:37:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:37:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:37:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:37:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:37:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:37:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Optimize plan auto_2026-01-20_14:37:52
Jan 20 14:37:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 14:37:52 compute-0 ceph-mgr[74653]: [balancer INFO root] do_upmap
Jan 20 14:37:52 compute-0 ceph-mgr[74653]: [balancer INFO root] pools ['default.rgw.meta', 'volumes', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'vms', 'default.rgw.control', '.mgr', 'backups', '.rgw.root', 'images', 'default.rgw.log']
Jan 20 14:37:52 compute-0 ceph-mgr[74653]: [balancer INFO root] prepared 0/10 changes
Jan 20 14:37:52 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:37:53 compute-0 nova_compute[250018]: 2026-01-20 14:37:53.011 250022 DEBUG oslo_concurrency.lockutils [None req-2138753e-e042-4e43-bb62-af6157adf8a6 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] Acquiring lock "2a1e0954-326e-4ad2-b212-3980d6e9513b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:37:53 compute-0 nova_compute[250018]: 2026-01-20 14:37:53.012 250022 DEBUG oslo_concurrency.lockutils [None req-2138753e-e042-4e43-bb62-af6157adf8a6 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] Lock "2a1e0954-326e-4ad2-b212-3980d6e9513b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:37:53 compute-0 nova_compute[250018]: 2026-01-20 14:37:53.116 250022 DEBUG nova.compute.manager [None req-2138753e-e042-4e43-bb62-af6157adf8a6 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] [instance: 2a1e0954-326e-4ad2-b212-3980d6e9513b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 14:37:53 compute-0 nova_compute[250018]: 2026-01-20 14:37:53.264 250022 DEBUG oslo_concurrency.lockutils [None req-2138753e-e042-4e43-bb62-af6157adf8a6 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:37:53 compute-0 nova_compute[250018]: 2026-01-20 14:37:53.265 250022 DEBUG oslo_concurrency.lockutils [None req-2138753e-e042-4e43-bb62-af6157adf8a6 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:37:53 compute-0 nova_compute[250018]: 2026-01-20 14:37:53.273 250022 DEBUG nova.virt.hardware [None req-2138753e-e042-4e43-bb62-af6157adf8a6 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 14:37:53 compute-0 nova_compute[250018]: 2026-01-20 14:37:53.274 250022 INFO nova.compute.claims [None req-2138753e-e042-4e43-bb62-af6157adf8a6 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] [instance: 2a1e0954-326e-4ad2-b212-3980d6e9513b] Claim successful on node compute-0.ctlplane.example.com
Jan 20 14:37:53 compute-0 nova_compute[250018]: 2026-01-20 14:37:53.446 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:37:53 compute-0 nova_compute[250018]: 2026-01-20 14:37:53.504 250022 DEBUG oslo_concurrency.processutils [None req-2138753e-e042-4e43-bb62-af6157adf8a6 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:37:53 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:37:53 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:37:53 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:37:53.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:37:53 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1486: 321 pgs: 321 active+clean; 88 MiB data, 599 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 16 KiB/s wr, 113 op/s
Jan 20 14:37:53 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:37:53 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1198049895' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:37:53 compute-0 nova_compute[250018]: 2026-01-20 14:37:53.962 250022 DEBUG oslo_concurrency.processutils [None req-2138753e-e042-4e43-bb62-af6157adf8a6 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:37:53 compute-0 nova_compute[250018]: 2026-01-20 14:37:53.968 250022 DEBUG nova.compute.provider_tree [None req-2138753e-e042-4e43-bb62-af6157adf8a6 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:37:54 compute-0 ceph-mon[74360]: pgmap v1486: 321 pgs: 321 active+clean; 88 MiB data, 599 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 16 KiB/s wr, 113 op/s
Jan 20 14:37:54 compute-0 nova_compute[250018]: 2026-01-20 14:37:54.193 250022 DEBUG nova.scheduler.client.report [None req-2138753e-e042-4e43-bb62-af6157adf8a6 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:37:54 compute-0 nova_compute[250018]: 2026-01-20 14:37:54.255 250022 DEBUG oslo_concurrency.lockutils [None req-2138753e-e042-4e43-bb62-af6157adf8a6 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.990s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:37:54 compute-0 nova_compute[250018]: 2026-01-20 14:37:54.256 250022 DEBUG nova.compute.manager [None req-2138753e-e042-4e43-bb62-af6157adf8a6 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] [instance: 2a1e0954-326e-4ad2-b212-3980d6e9513b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 14:37:54 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:37:54 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:37:54 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:37:54.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:37:54 compute-0 nova_compute[250018]: 2026-01-20 14:37:54.470 250022 DEBUG nova.compute.manager [None req-2138753e-e042-4e43-bb62-af6157adf8a6 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] [instance: 2a1e0954-326e-4ad2-b212-3980d6e9513b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 20 14:37:54 compute-0 nova_compute[250018]: 2026-01-20 14:37:54.471 250022 DEBUG nova.network.neutron [None req-2138753e-e042-4e43-bb62-af6157adf8a6 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] [instance: 2a1e0954-326e-4ad2-b212-3980d6e9513b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 14:37:54 compute-0 nova_compute[250018]: 2026-01-20 14:37:54.901 250022 INFO nova.virt.libvirt.driver [None req-2138753e-e042-4e43-bb62-af6157adf8a6 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] [instance: 2a1e0954-326e-4ad2-b212-3980d6e9513b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 14:37:54 compute-0 nova_compute[250018]: 2026-01-20 14:37:54.953 250022 DEBUG nova.compute.manager [None req-2138753e-e042-4e43-bb62-af6157adf8a6 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] [instance: 2a1e0954-326e-4ad2-b212-3980d6e9513b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 14:37:54 compute-0 nova_compute[250018]: 2026-01-20 14:37:54.958 250022 DEBUG nova.policy [None req-2138753e-e042-4e43-bb62-af6157adf8a6 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '592a0204f38a4596ab1ab81774214a6d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '7d78990d13704d629a8a3e8910d005c5', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 20 14:37:55 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/1198049895' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:37:55 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:37:55 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:37:55 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:37:55.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:37:55 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1487: 321 pgs: 321 active+clean; 88 MiB data, 599 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 16 KiB/s wr, 114 op/s
Jan 20 14:37:55 compute-0 nova_compute[250018]: 2026-01-20 14:37:55.579 250022 DEBUG nova.compute.manager [None req-2138753e-e042-4e43-bb62-af6157adf8a6 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] [instance: 2a1e0954-326e-4ad2-b212-3980d6e9513b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 14:37:55 compute-0 nova_compute[250018]: 2026-01-20 14:37:55.580 250022 DEBUG nova.virt.libvirt.driver [None req-2138753e-e042-4e43-bb62-af6157adf8a6 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] [instance: 2a1e0954-326e-4ad2-b212-3980d6e9513b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 14:37:55 compute-0 nova_compute[250018]: 2026-01-20 14:37:55.581 250022 INFO nova.virt.libvirt.driver [None req-2138753e-e042-4e43-bb62-af6157adf8a6 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] [instance: 2a1e0954-326e-4ad2-b212-3980d6e9513b] Creating image(s)
Jan 20 14:37:55 compute-0 nova_compute[250018]: 2026-01-20 14:37:55.612 250022 DEBUG nova.storage.rbd_utils [None req-2138753e-e042-4e43-bb62-af6157adf8a6 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] rbd image 2a1e0954-326e-4ad2-b212-3980d6e9513b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:37:55 compute-0 nova_compute[250018]: 2026-01-20 14:37:55.637 250022 DEBUG nova.storage.rbd_utils [None req-2138753e-e042-4e43-bb62-af6157adf8a6 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] rbd image 2a1e0954-326e-4ad2-b212-3980d6e9513b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:37:55 compute-0 nova_compute[250018]: 2026-01-20 14:37:55.663 250022 DEBUG nova.storage.rbd_utils [None req-2138753e-e042-4e43-bb62-af6157adf8a6 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] rbd image 2a1e0954-326e-4ad2-b212-3980d6e9513b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:37:55 compute-0 nova_compute[250018]: 2026-01-20 14:37:55.667 250022 DEBUG oslo_concurrency.processutils [None req-2138753e-e042-4e43-bb62-af6157adf8a6 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:37:55 compute-0 nova_compute[250018]: 2026-01-20 14:37:55.741 250022 DEBUG oslo_concurrency.processutils [None req-2138753e-e042-4e43-bb62-af6157adf8a6 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:37:55 compute-0 nova_compute[250018]: 2026-01-20 14:37:55.742 250022 DEBUG oslo_concurrency.lockutils [None req-2138753e-e042-4e43-bb62-af6157adf8a6 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] Acquiring lock "82d5c1918fd7c974214c7a48c1793a7a82560462" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:37:55 compute-0 nova_compute[250018]: 2026-01-20 14:37:55.743 250022 DEBUG oslo_concurrency.lockutils [None req-2138753e-e042-4e43-bb62-af6157adf8a6 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:37:55 compute-0 nova_compute[250018]: 2026-01-20 14:37:55.743 250022 DEBUG oslo_concurrency.lockutils [None req-2138753e-e042-4e43-bb62-af6157adf8a6 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:37:55 compute-0 nova_compute[250018]: 2026-01-20 14:37:55.768 250022 DEBUG nova.storage.rbd_utils [None req-2138753e-e042-4e43-bb62-af6157adf8a6 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] rbd image 2a1e0954-326e-4ad2-b212-3980d6e9513b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:37:55 compute-0 nova_compute[250018]: 2026-01-20 14:37:55.772 250022 DEBUG oslo_concurrency.processutils [None req-2138753e-e042-4e43-bb62-af6157adf8a6 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 2a1e0954-326e-4ad2-b212-3980d6e9513b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:37:55 compute-0 nova_compute[250018]: 2026-01-20 14:37:55.918 250022 DEBUG nova.network.neutron [None req-2138753e-e042-4e43-bb62-af6157adf8a6 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] [instance: 2a1e0954-326e-4ad2-b212-3980d6e9513b] Successfully created port: 8ceed7a1-dcfe-40ae-ba8d-a963f6e0668f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 20 14:37:56 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:37:56 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:37:56 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:37:56.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:37:56 compute-0 nova_compute[250018]: 2026-01-20 14:37:56.664 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:37:56 compute-0 nova_compute[250018]: 2026-01-20 14:37:56.955 250022 DEBUG nova.network.neutron [None req-2138753e-e042-4e43-bb62-af6157adf8a6 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] [instance: 2a1e0954-326e-4ad2-b212-3980d6e9513b] Successfully updated port: 8ceed7a1-dcfe-40ae-ba8d-a963f6e0668f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 20 14:37:56 compute-0 nova_compute[250018]: 2026-01-20 14:37:56.970 250022 DEBUG oslo_concurrency.lockutils [None req-2138753e-e042-4e43-bb62-af6157adf8a6 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] Acquiring lock "refresh_cache-2a1e0954-326e-4ad2-b212-3980d6e9513b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:37:56 compute-0 nova_compute[250018]: 2026-01-20 14:37:56.971 250022 DEBUG oslo_concurrency.lockutils [None req-2138753e-e042-4e43-bb62-af6157adf8a6 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] Acquired lock "refresh_cache-2a1e0954-326e-4ad2-b212-3980d6e9513b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:37:56 compute-0 nova_compute[250018]: 2026-01-20 14:37:56.971 250022 DEBUG nova.network.neutron [None req-2138753e-e042-4e43-bb62-af6157adf8a6 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] [instance: 2a1e0954-326e-4ad2-b212-3980d6e9513b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:37:57 compute-0 nova_compute[250018]: 2026-01-20 14:37:57.058 250022 DEBUG nova.compute.manager [req-52886d98-848a-4d7e-ad2f-03e3e5695a26 req-a0d15038-d015-410d-a1fb-d2fba5d7140c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 2a1e0954-326e-4ad2-b212-3980d6e9513b] Received event network-changed-8ceed7a1-dcfe-40ae-ba8d-a963f6e0668f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:37:57 compute-0 nova_compute[250018]: 2026-01-20 14:37:57.059 250022 DEBUG nova.compute.manager [req-52886d98-848a-4d7e-ad2f-03e3e5695a26 req-a0d15038-d015-410d-a1fb-d2fba5d7140c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 2a1e0954-326e-4ad2-b212-3980d6e9513b] Refreshing instance network info cache due to event network-changed-8ceed7a1-dcfe-40ae-ba8d-a963f6e0668f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:37:57 compute-0 nova_compute[250018]: 2026-01-20 14:37:57.060 250022 DEBUG oslo_concurrency.lockutils [req-52886d98-848a-4d7e-ad2f-03e3e5695a26 req-a0d15038-d015-410d-a1fb-d2fba5d7140c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-2a1e0954-326e-4ad2-b212-3980d6e9513b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:37:57 compute-0 nova_compute[250018]: 2026-01-20 14:37:57.179 250022 DEBUG nova.network.neutron [None req-2138753e-e042-4e43-bb62-af6157adf8a6 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] [instance: 2a1e0954-326e-4ad2-b212-3980d6e9513b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 14:37:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 14:37:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:37:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 14:37:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:37:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:37:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:37:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:37:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:37:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:37:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:37:57 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:37:57 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:37:57 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:37:57.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:37:57 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1488: 321 pgs: 321 active+clean; 88 MiB data, 602 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 148 KiB/s wr, 89 op/s
Jan 20 14:37:57 compute-0 ceph-mon[74360]: pgmap v1487: 321 pgs: 321 active+clean; 88 MiB data, 599 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 16 KiB/s wr, 114 op/s
Jan 20 14:37:57 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:37:58 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:37:58 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:37:58 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:37:58.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:37:58 compute-0 nova_compute[250018]: 2026-01-20 14:37:58.448 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:37:58 compute-0 nova_compute[250018]: 2026-01-20 14:37:58.465 250022 DEBUG nova.network.neutron [None req-2138753e-e042-4e43-bb62-af6157adf8a6 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] [instance: 2a1e0954-326e-4ad2-b212-3980d6e9513b] Updating instance_info_cache with network_info: [{"id": "8ceed7a1-dcfe-40ae-ba8d-a963f6e0668f", "address": "fa:16:3e:18:da:fd", "network": {"id": "b1f372f9-fbd1-4ef7-9be7-ace7ce14bb23", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-462971735-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d78990d13704d629a8a3e8910d005c5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ceed7a1-dc", "ovs_interfaceid": "8ceed7a1-dcfe-40ae-ba8d-a963f6e0668f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:37:58 compute-0 nova_compute[250018]: 2026-01-20 14:37:58.488 250022 DEBUG oslo_concurrency.lockutils [None req-2138753e-e042-4e43-bb62-af6157adf8a6 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] Releasing lock "refresh_cache-2a1e0954-326e-4ad2-b212-3980d6e9513b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:37:58 compute-0 nova_compute[250018]: 2026-01-20 14:37:58.489 250022 DEBUG nova.compute.manager [None req-2138753e-e042-4e43-bb62-af6157adf8a6 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] [instance: 2a1e0954-326e-4ad2-b212-3980d6e9513b] Instance network_info: |[{"id": "8ceed7a1-dcfe-40ae-ba8d-a963f6e0668f", "address": "fa:16:3e:18:da:fd", "network": {"id": "b1f372f9-fbd1-4ef7-9be7-ace7ce14bb23", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-462971735-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d78990d13704d629a8a3e8910d005c5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ceed7a1-dc", "ovs_interfaceid": "8ceed7a1-dcfe-40ae-ba8d-a963f6e0668f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 20 14:37:58 compute-0 nova_compute[250018]: 2026-01-20 14:37:58.489 250022 DEBUG oslo_concurrency.lockutils [req-52886d98-848a-4d7e-ad2f-03e3e5695a26 req-a0d15038-d015-410d-a1fb-d2fba5d7140c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-2a1e0954-326e-4ad2-b212-3980d6e9513b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:37:58 compute-0 nova_compute[250018]: 2026-01-20 14:37:58.489 250022 DEBUG nova.network.neutron [req-52886d98-848a-4d7e-ad2f-03e3e5695a26 req-a0d15038-d015-410d-a1fb-d2fba5d7140c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 2a1e0954-326e-4ad2-b212-3980d6e9513b] Refreshing network info cache for port 8ceed7a1-dcfe-40ae-ba8d-a963f6e0668f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:37:59 compute-0 ceph-mon[74360]: pgmap v1488: 321 pgs: 321 active+clean; 88 MiB data, 602 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 148 KiB/s wr, 89 op/s
Jan 20 14:37:59 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:37:59 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:37:59 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:37:59.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:37:59 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1489: 321 pgs: 321 active+clean; 99 MiB data, 611 MiB used, 20 GiB / 21 GiB avail; 962 KiB/s rd, 938 KiB/s wr, 42 op/s
Jan 20 14:38:00 compute-0 nova_compute[250018]: 2026-01-20 14:38:00.316 250022 DEBUG oslo_concurrency.processutils [None req-2138753e-e042-4e43-bb62-af6157adf8a6 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 2a1e0954-326e-4ad2-b212-3980d6e9513b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 4.544s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:38:00 compute-0 ceph-mon[74360]: pgmap v1489: 321 pgs: 321 active+clean; 99 MiB data, 611 MiB used, 20 GiB / 21 GiB avail; 962 KiB/s rd, 938 KiB/s wr, 42 op/s
Jan 20 14:38:00 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:38:00 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:38:00 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:38:00.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:38:00 compute-0 nova_compute[250018]: 2026-01-20 14:38:00.434 250022 DEBUG nova.storage.rbd_utils [None req-2138753e-e042-4e43-bb62-af6157adf8a6 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] resizing rbd image 2a1e0954-326e-4ad2-b212-3980d6e9513b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 20 14:38:00 compute-0 nova_compute[250018]: 2026-01-20 14:38:00.579 250022 DEBUG nova.objects.instance [None req-2138753e-e042-4e43-bb62-af6157adf8a6 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] Lazy-loading 'migration_context' on Instance uuid 2a1e0954-326e-4ad2-b212-3980d6e9513b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:38:00 compute-0 nova_compute[250018]: 2026-01-20 14:38:00.609 250022 DEBUG nova.virt.libvirt.driver [None req-2138753e-e042-4e43-bb62-af6157adf8a6 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] [instance: 2a1e0954-326e-4ad2-b212-3980d6e9513b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 14:38:00 compute-0 nova_compute[250018]: 2026-01-20 14:38:00.610 250022 DEBUG nova.virt.libvirt.driver [None req-2138753e-e042-4e43-bb62-af6157adf8a6 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] [instance: 2a1e0954-326e-4ad2-b212-3980d6e9513b] Ensure instance console log exists: /var/lib/nova/instances/2a1e0954-326e-4ad2-b212-3980d6e9513b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 14:38:00 compute-0 nova_compute[250018]: 2026-01-20 14:38:00.610 250022 DEBUG oslo_concurrency.lockutils [None req-2138753e-e042-4e43-bb62-af6157adf8a6 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:38:00 compute-0 nova_compute[250018]: 2026-01-20 14:38:00.611 250022 DEBUG oslo_concurrency.lockutils [None req-2138753e-e042-4e43-bb62-af6157adf8a6 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:38:00 compute-0 nova_compute[250018]: 2026-01-20 14:38:00.611 250022 DEBUG oslo_concurrency.lockutils [None req-2138753e-e042-4e43-bb62-af6157adf8a6 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:38:00 compute-0 nova_compute[250018]: 2026-01-20 14:38:00.613 250022 DEBUG nova.virt.libvirt.driver [None req-2138753e-e042-4e43-bb62-af6157adf8a6 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] [instance: 2a1e0954-326e-4ad2-b212-3980d6e9513b] Start _get_guest_xml network_info=[{"id": "8ceed7a1-dcfe-40ae-ba8d-a963f6e0668f", "address": "fa:16:3e:18:da:fd", "network": {"id": "b1f372f9-fbd1-4ef7-9be7-ace7ce14bb23", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-462971735-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d78990d13704d629a8a3e8910d005c5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ceed7a1-dc", "ovs_interfaceid": "8ceed7a1-dcfe-40ae-ba8d-a963f6e0668f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encrypted': False, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'guest_format': None, 'size': 0, 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 14:38:00 compute-0 nova_compute[250018]: 2026-01-20 14:38:00.617 250022 WARNING nova.virt.libvirt.driver [None req-2138753e-e042-4e43-bb62-af6157adf8a6 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:38:00 compute-0 nova_compute[250018]: 2026-01-20 14:38:00.621 250022 DEBUG nova.virt.libvirt.host [None req-2138753e-e042-4e43-bb62-af6157adf8a6 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 14:38:00 compute-0 nova_compute[250018]: 2026-01-20 14:38:00.622 250022 DEBUG nova.virt.libvirt.host [None req-2138753e-e042-4e43-bb62-af6157adf8a6 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 14:38:00 compute-0 nova_compute[250018]: 2026-01-20 14:38:00.626 250022 DEBUG nova.virt.libvirt.host [None req-2138753e-e042-4e43-bb62-af6157adf8a6 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 14:38:00 compute-0 nova_compute[250018]: 2026-01-20 14:38:00.627 250022 DEBUG nova.virt.libvirt.host [None req-2138753e-e042-4e43-bb62-af6157adf8a6 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 14:38:00 compute-0 nova_compute[250018]: 2026-01-20 14:38:00.628 250022 DEBUG nova.virt.libvirt.driver [None req-2138753e-e042-4e43-bb62-af6157adf8a6 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 14:38:00 compute-0 nova_compute[250018]: 2026-01-20 14:38:00.628 250022 DEBUG nova.virt.hardware [None req-2138753e-e042-4e43-bb62-af6157adf8a6 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 14:38:00 compute-0 nova_compute[250018]: 2026-01-20 14:38:00.629 250022 DEBUG nova.virt.hardware [None req-2138753e-e042-4e43-bb62-af6157adf8a6 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 14:38:00 compute-0 nova_compute[250018]: 2026-01-20 14:38:00.629 250022 DEBUG nova.virt.hardware [None req-2138753e-e042-4e43-bb62-af6157adf8a6 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 14:38:00 compute-0 nova_compute[250018]: 2026-01-20 14:38:00.629 250022 DEBUG nova.virt.hardware [None req-2138753e-e042-4e43-bb62-af6157adf8a6 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 14:38:00 compute-0 nova_compute[250018]: 2026-01-20 14:38:00.630 250022 DEBUG nova.virt.hardware [None req-2138753e-e042-4e43-bb62-af6157adf8a6 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 14:38:00 compute-0 nova_compute[250018]: 2026-01-20 14:38:00.630 250022 DEBUG nova.virt.hardware [None req-2138753e-e042-4e43-bb62-af6157adf8a6 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 14:38:00 compute-0 nova_compute[250018]: 2026-01-20 14:38:00.630 250022 DEBUG nova.virt.hardware [None req-2138753e-e042-4e43-bb62-af6157adf8a6 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 14:38:00 compute-0 nova_compute[250018]: 2026-01-20 14:38:00.631 250022 DEBUG nova.virt.hardware [None req-2138753e-e042-4e43-bb62-af6157adf8a6 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 14:38:00 compute-0 nova_compute[250018]: 2026-01-20 14:38:00.631 250022 DEBUG nova.virt.hardware [None req-2138753e-e042-4e43-bb62-af6157adf8a6 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 14:38:00 compute-0 nova_compute[250018]: 2026-01-20 14:38:00.631 250022 DEBUG nova.virt.hardware [None req-2138753e-e042-4e43-bb62-af6157adf8a6 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 14:38:00 compute-0 nova_compute[250018]: 2026-01-20 14:38:00.631 250022 DEBUG nova.virt.hardware [None req-2138753e-e042-4e43-bb62-af6157adf8a6 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 14:38:00 compute-0 nova_compute[250018]: 2026-01-20 14:38:00.635 250022 DEBUG oslo_concurrency.processutils [None req-2138753e-e042-4e43-bb62-af6157adf8a6 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:38:01 compute-0 nova_compute[250018]: 2026-01-20 14:38:01.029 250022 DEBUG nova.network.neutron [req-52886d98-848a-4d7e-ad2f-03e3e5695a26 req-a0d15038-d015-410d-a1fb-d2fba5d7140c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 2a1e0954-326e-4ad2-b212-3980d6e9513b] Updated VIF entry in instance network info cache for port 8ceed7a1-dcfe-40ae-ba8d-a963f6e0668f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 14:38:01 compute-0 nova_compute[250018]: 2026-01-20 14:38:01.030 250022 DEBUG nova.network.neutron [req-52886d98-848a-4d7e-ad2f-03e3e5695a26 req-a0d15038-d015-410d-a1fb-d2fba5d7140c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 2a1e0954-326e-4ad2-b212-3980d6e9513b] Updating instance_info_cache with network_info: [{"id": "8ceed7a1-dcfe-40ae-ba8d-a963f6e0668f", "address": "fa:16:3e:18:da:fd", "network": {"id": "b1f372f9-fbd1-4ef7-9be7-ace7ce14bb23", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-462971735-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d78990d13704d629a8a3e8910d005c5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ceed7a1-dc", "ovs_interfaceid": "8ceed7a1-dcfe-40ae-ba8d-a963f6e0668f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:38:01 compute-0 nova_compute[250018]: 2026-01-20 14:38:01.067 250022 DEBUG oslo_concurrency.lockutils [req-52886d98-848a-4d7e-ad2f-03e3e5695a26 req-a0d15038-d015-410d-a1fb-d2fba5d7140c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-2a1e0954-326e-4ad2-b212-3980d6e9513b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:38:01 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:38:01 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2702146314' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:38:01 compute-0 nova_compute[250018]: 2026-01-20 14:38:01.095 250022 DEBUG oslo_concurrency.processutils [None req-2138753e-e042-4e43-bb62-af6157adf8a6 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:38:01 compute-0 nova_compute[250018]: 2026-01-20 14:38:01.118 250022 DEBUG nova.storage.rbd_utils [None req-2138753e-e042-4e43-bb62-af6157adf8a6 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] rbd image 2a1e0954-326e-4ad2-b212-3980d6e9513b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:38:01 compute-0 nova_compute[250018]: 2026-01-20 14:38:01.123 250022 DEBUG oslo_concurrency.processutils [None req-2138753e-e042-4e43-bb62-af6157adf8a6 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:38:01 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/2702146314' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:38:01 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:38:01 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2126701600' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:38:01 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:38:01 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:38:01 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:38:01.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:38:01 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1490: 321 pgs: 321 active+clean; 111 MiB data, 626 MiB used, 20 GiB / 21 GiB avail; 804 KiB/s rd, 1.7 MiB/s wr, 48 op/s
Jan 20 14:38:01 compute-0 nova_compute[250018]: 2026-01-20 14:38:01.558 250022 DEBUG oslo_concurrency.processutils [None req-2138753e-e042-4e43-bb62-af6157adf8a6 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:38:01 compute-0 nova_compute[250018]: 2026-01-20 14:38:01.560 250022 DEBUG nova.virt.libvirt.vif [None req-2138753e-e042-4e43-bb62-af6157adf8a6 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:37:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-636458037',display_name='tempest-ImagesOneServerNegativeTestJSON-server-636458037',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-636458037',id=60,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7d78990d13704d629a8a3e8910d005c5',ramdisk_id='',reservation_id='r-jn19jtk2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-866315696',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-866315696-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:37:54Z,user_data=None,user_id='592a0204f38a4596ab1ab81774214a6d',uuid=2a1e0954-326e-4ad2-b212-3980d6e9513b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8ceed7a1-dcfe-40ae-ba8d-a963f6e0668f", "address": "fa:16:3e:18:da:fd", "network": {"id": "b1f372f9-fbd1-4ef7-9be7-ace7ce14bb23", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-462971735-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d78990d13704d629a8a3e8910d005c5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ceed7a1-dc", "ovs_interfaceid": "8ceed7a1-dcfe-40ae-ba8d-a963f6e0668f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 14:38:01 compute-0 nova_compute[250018]: 2026-01-20 14:38:01.560 250022 DEBUG nova.network.os_vif_util [None req-2138753e-e042-4e43-bb62-af6157adf8a6 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] Converting VIF {"id": "8ceed7a1-dcfe-40ae-ba8d-a963f6e0668f", "address": "fa:16:3e:18:da:fd", "network": {"id": "b1f372f9-fbd1-4ef7-9be7-ace7ce14bb23", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-462971735-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d78990d13704d629a8a3e8910d005c5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ceed7a1-dc", "ovs_interfaceid": "8ceed7a1-dcfe-40ae-ba8d-a963f6e0668f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:38:01 compute-0 nova_compute[250018]: 2026-01-20 14:38:01.561 250022 DEBUG nova.network.os_vif_util [None req-2138753e-e042-4e43-bb62-af6157adf8a6 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:18:da:fd,bridge_name='br-int',has_traffic_filtering=True,id=8ceed7a1-dcfe-40ae-ba8d-a963f6e0668f,network=Network(b1f372f9-fbd1-4ef7-9be7-ace7ce14bb23),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8ceed7a1-dc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:38:01 compute-0 nova_compute[250018]: 2026-01-20 14:38:01.562 250022 DEBUG nova.objects.instance [None req-2138753e-e042-4e43-bb62-af6157adf8a6 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] Lazy-loading 'pci_devices' on Instance uuid 2a1e0954-326e-4ad2-b212-3980d6e9513b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:38:01 compute-0 nova_compute[250018]: 2026-01-20 14:38:01.595 250022 DEBUG nova.virt.libvirt.driver [None req-2138753e-e042-4e43-bb62-af6157adf8a6 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] [instance: 2a1e0954-326e-4ad2-b212-3980d6e9513b] End _get_guest_xml xml=<domain type="kvm">
Jan 20 14:38:01 compute-0 nova_compute[250018]:   <uuid>2a1e0954-326e-4ad2-b212-3980d6e9513b</uuid>
Jan 20 14:38:01 compute-0 nova_compute[250018]:   <name>instance-0000003c</name>
Jan 20 14:38:01 compute-0 nova_compute[250018]:   <memory>131072</memory>
Jan 20 14:38:01 compute-0 nova_compute[250018]:   <vcpu>1</vcpu>
Jan 20 14:38:01 compute-0 nova_compute[250018]:   <metadata>
Jan 20 14:38:01 compute-0 nova_compute[250018]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 14:38:01 compute-0 nova_compute[250018]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 14:38:01 compute-0 nova_compute[250018]:       <nova:name>tempest-ImagesOneServerNegativeTestJSON-server-636458037</nova:name>
Jan 20 14:38:01 compute-0 nova_compute[250018]:       <nova:creationTime>2026-01-20 14:38:00</nova:creationTime>
Jan 20 14:38:01 compute-0 nova_compute[250018]:       <nova:flavor name="m1.nano">
Jan 20 14:38:01 compute-0 nova_compute[250018]:         <nova:memory>128</nova:memory>
Jan 20 14:38:01 compute-0 nova_compute[250018]:         <nova:disk>1</nova:disk>
Jan 20 14:38:01 compute-0 nova_compute[250018]:         <nova:swap>0</nova:swap>
Jan 20 14:38:01 compute-0 nova_compute[250018]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 14:38:01 compute-0 nova_compute[250018]:         <nova:vcpus>1</nova:vcpus>
Jan 20 14:38:01 compute-0 nova_compute[250018]:       </nova:flavor>
Jan 20 14:38:01 compute-0 nova_compute[250018]:       <nova:owner>
Jan 20 14:38:01 compute-0 nova_compute[250018]:         <nova:user uuid="592a0204f38a4596ab1ab81774214a6d">tempest-ImagesOneServerNegativeTestJSON-866315696-project-member</nova:user>
Jan 20 14:38:01 compute-0 nova_compute[250018]:         <nova:project uuid="7d78990d13704d629a8a3e8910d005c5">tempest-ImagesOneServerNegativeTestJSON-866315696</nova:project>
Jan 20 14:38:01 compute-0 nova_compute[250018]:       </nova:owner>
Jan 20 14:38:01 compute-0 nova_compute[250018]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 14:38:01 compute-0 nova_compute[250018]:       <nova:ports>
Jan 20 14:38:01 compute-0 nova_compute[250018]:         <nova:port uuid="8ceed7a1-dcfe-40ae-ba8d-a963f6e0668f">
Jan 20 14:38:01 compute-0 nova_compute[250018]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Jan 20 14:38:01 compute-0 nova_compute[250018]:         </nova:port>
Jan 20 14:38:01 compute-0 nova_compute[250018]:       </nova:ports>
Jan 20 14:38:01 compute-0 nova_compute[250018]:     </nova:instance>
Jan 20 14:38:01 compute-0 nova_compute[250018]:   </metadata>
Jan 20 14:38:01 compute-0 nova_compute[250018]:   <sysinfo type="smbios">
Jan 20 14:38:01 compute-0 nova_compute[250018]:     <system>
Jan 20 14:38:01 compute-0 nova_compute[250018]:       <entry name="manufacturer">RDO</entry>
Jan 20 14:38:01 compute-0 nova_compute[250018]:       <entry name="product">OpenStack Compute</entry>
Jan 20 14:38:01 compute-0 nova_compute[250018]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 14:38:01 compute-0 nova_compute[250018]:       <entry name="serial">2a1e0954-326e-4ad2-b212-3980d6e9513b</entry>
Jan 20 14:38:01 compute-0 nova_compute[250018]:       <entry name="uuid">2a1e0954-326e-4ad2-b212-3980d6e9513b</entry>
Jan 20 14:38:01 compute-0 nova_compute[250018]:       <entry name="family">Virtual Machine</entry>
Jan 20 14:38:01 compute-0 nova_compute[250018]:     </system>
Jan 20 14:38:01 compute-0 nova_compute[250018]:   </sysinfo>
Jan 20 14:38:01 compute-0 nova_compute[250018]:   <os>
Jan 20 14:38:01 compute-0 nova_compute[250018]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 14:38:01 compute-0 nova_compute[250018]:     <boot dev="hd"/>
Jan 20 14:38:01 compute-0 nova_compute[250018]:     <smbios mode="sysinfo"/>
Jan 20 14:38:01 compute-0 nova_compute[250018]:   </os>
Jan 20 14:38:01 compute-0 nova_compute[250018]:   <features>
Jan 20 14:38:01 compute-0 nova_compute[250018]:     <acpi/>
Jan 20 14:38:01 compute-0 nova_compute[250018]:     <apic/>
Jan 20 14:38:01 compute-0 nova_compute[250018]:     <vmcoreinfo/>
Jan 20 14:38:01 compute-0 nova_compute[250018]:   </features>
Jan 20 14:38:01 compute-0 nova_compute[250018]:   <clock offset="utc">
Jan 20 14:38:01 compute-0 nova_compute[250018]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 14:38:01 compute-0 nova_compute[250018]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 14:38:01 compute-0 nova_compute[250018]:     <timer name="hpet" present="no"/>
Jan 20 14:38:01 compute-0 nova_compute[250018]:   </clock>
Jan 20 14:38:01 compute-0 nova_compute[250018]:   <cpu mode="custom" match="exact">
Jan 20 14:38:01 compute-0 nova_compute[250018]:     <model>Nehalem</model>
Jan 20 14:38:01 compute-0 nova_compute[250018]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 14:38:01 compute-0 nova_compute[250018]:   </cpu>
Jan 20 14:38:01 compute-0 nova_compute[250018]:   <devices>
Jan 20 14:38:01 compute-0 nova_compute[250018]:     <disk type="network" device="disk">
Jan 20 14:38:01 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 14:38:01 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/2a1e0954-326e-4ad2-b212-3980d6e9513b_disk">
Jan 20 14:38:01 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:38:01 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:38:01 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:38:01 compute-0 nova_compute[250018]:       </source>
Jan 20 14:38:01 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 14:38:01 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:38:01 compute-0 nova_compute[250018]:       </auth>
Jan 20 14:38:01 compute-0 nova_compute[250018]:       <target dev="vda" bus="virtio"/>
Jan 20 14:38:01 compute-0 nova_compute[250018]:     </disk>
Jan 20 14:38:01 compute-0 nova_compute[250018]:     <disk type="network" device="cdrom">
Jan 20 14:38:01 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 14:38:01 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/2a1e0954-326e-4ad2-b212-3980d6e9513b_disk.config">
Jan 20 14:38:01 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:38:01 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:38:01 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:38:01 compute-0 nova_compute[250018]:       </source>
Jan 20 14:38:01 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 14:38:01 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:38:01 compute-0 nova_compute[250018]:       </auth>
Jan 20 14:38:01 compute-0 nova_compute[250018]:       <target dev="sda" bus="sata"/>
Jan 20 14:38:01 compute-0 nova_compute[250018]:     </disk>
Jan 20 14:38:01 compute-0 nova_compute[250018]:     <interface type="ethernet">
Jan 20 14:38:01 compute-0 nova_compute[250018]:       <mac address="fa:16:3e:18:da:fd"/>
Jan 20 14:38:01 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 14:38:01 compute-0 nova_compute[250018]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 14:38:01 compute-0 nova_compute[250018]:       <mtu size="1442"/>
Jan 20 14:38:01 compute-0 nova_compute[250018]:       <target dev="tap8ceed7a1-dc"/>
Jan 20 14:38:01 compute-0 nova_compute[250018]:     </interface>
Jan 20 14:38:01 compute-0 nova_compute[250018]:     <serial type="pty">
Jan 20 14:38:01 compute-0 nova_compute[250018]:       <log file="/var/lib/nova/instances/2a1e0954-326e-4ad2-b212-3980d6e9513b/console.log" append="off"/>
Jan 20 14:38:01 compute-0 nova_compute[250018]:     </serial>
Jan 20 14:38:01 compute-0 nova_compute[250018]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 14:38:01 compute-0 nova_compute[250018]:     <video>
Jan 20 14:38:01 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 14:38:01 compute-0 nova_compute[250018]:     </video>
Jan 20 14:38:01 compute-0 nova_compute[250018]:     <input type="tablet" bus="usb"/>
Jan 20 14:38:01 compute-0 nova_compute[250018]:     <rng model="virtio">
Jan 20 14:38:01 compute-0 nova_compute[250018]:       <backend model="random">/dev/urandom</backend>
Jan 20 14:38:01 compute-0 nova_compute[250018]:     </rng>
Jan 20 14:38:01 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root"/>
Jan 20 14:38:01 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:38:01 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:38:01 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:38:01 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:38:01 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:38:01 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:38:01 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:38:01 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:38:01 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:38:01 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:38:01 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:38:01 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:38:01 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:38:01 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:38:01 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:38:01 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:38:01 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:38:01 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:38:01 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:38:01 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:38:01 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:38:01 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:38:01 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:38:01 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:38:01 compute-0 nova_compute[250018]:     <controller type="usb" index="0"/>
Jan 20 14:38:01 compute-0 nova_compute[250018]:     <memballoon model="virtio">
Jan 20 14:38:01 compute-0 nova_compute[250018]:       <stats period="10"/>
Jan 20 14:38:01 compute-0 nova_compute[250018]:     </memballoon>
Jan 20 14:38:01 compute-0 nova_compute[250018]:   </devices>
Jan 20 14:38:01 compute-0 nova_compute[250018]: </domain>
Jan 20 14:38:01 compute-0 nova_compute[250018]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 14:38:01 compute-0 nova_compute[250018]: 2026-01-20 14:38:01.597 250022 DEBUG nova.compute.manager [None req-2138753e-e042-4e43-bb62-af6157adf8a6 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] [instance: 2a1e0954-326e-4ad2-b212-3980d6e9513b] Preparing to wait for external event network-vif-plugged-8ceed7a1-dcfe-40ae-ba8d-a963f6e0668f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 20 14:38:01 compute-0 nova_compute[250018]: 2026-01-20 14:38:01.597 250022 DEBUG oslo_concurrency.lockutils [None req-2138753e-e042-4e43-bb62-af6157adf8a6 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] Acquiring lock "2a1e0954-326e-4ad2-b212-3980d6e9513b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:38:01 compute-0 nova_compute[250018]: 2026-01-20 14:38:01.597 250022 DEBUG oslo_concurrency.lockutils [None req-2138753e-e042-4e43-bb62-af6157adf8a6 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] Lock "2a1e0954-326e-4ad2-b212-3980d6e9513b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:38:01 compute-0 nova_compute[250018]: 2026-01-20 14:38:01.597 250022 DEBUG oslo_concurrency.lockutils [None req-2138753e-e042-4e43-bb62-af6157adf8a6 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] Lock "2a1e0954-326e-4ad2-b212-3980d6e9513b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:38:01 compute-0 nova_compute[250018]: 2026-01-20 14:38:01.598 250022 DEBUG nova.virt.libvirt.vif [None req-2138753e-e042-4e43-bb62-af6157adf8a6 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:37:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-636458037',display_name='tempest-ImagesOneServerNegativeTestJSON-server-636458037',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-636458037',id=60,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7d78990d13704d629a8a3e8910d005c5',ramdisk_id='',reservation_id='r-jn19jtk2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-866315696',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-866315696-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:37:54Z,user_data=None,user_id='592a0204f38a4596ab1ab81774214a6d',uuid=2a1e0954-326e-4ad2-b212-3980d6e9513b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8ceed7a1-dcfe-40ae-ba8d-a963f6e0668f", "address": "fa:16:3e:18:da:fd", "network": {"id": "b1f372f9-fbd1-4ef7-9be7-ace7ce14bb23", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-462971735-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d78990d13704d629a8a3e8910d005c5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ceed7a1-dc", "ovs_interfaceid": "8ceed7a1-dcfe-40ae-ba8d-a963f6e0668f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 14:38:01 compute-0 nova_compute[250018]: 2026-01-20 14:38:01.598 250022 DEBUG nova.network.os_vif_util [None req-2138753e-e042-4e43-bb62-af6157adf8a6 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] Converting VIF {"id": "8ceed7a1-dcfe-40ae-ba8d-a963f6e0668f", "address": "fa:16:3e:18:da:fd", "network": {"id": "b1f372f9-fbd1-4ef7-9be7-ace7ce14bb23", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-462971735-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d78990d13704d629a8a3e8910d005c5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ceed7a1-dc", "ovs_interfaceid": "8ceed7a1-dcfe-40ae-ba8d-a963f6e0668f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:38:01 compute-0 nova_compute[250018]: 2026-01-20 14:38:01.599 250022 DEBUG nova.network.os_vif_util [None req-2138753e-e042-4e43-bb62-af6157adf8a6 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:18:da:fd,bridge_name='br-int',has_traffic_filtering=True,id=8ceed7a1-dcfe-40ae-ba8d-a963f6e0668f,network=Network(b1f372f9-fbd1-4ef7-9be7-ace7ce14bb23),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8ceed7a1-dc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:38:01 compute-0 nova_compute[250018]: 2026-01-20 14:38:01.599 250022 DEBUG os_vif [None req-2138753e-e042-4e43-bb62-af6157adf8a6 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:18:da:fd,bridge_name='br-int',has_traffic_filtering=True,id=8ceed7a1-dcfe-40ae-ba8d-a963f6e0668f,network=Network(b1f372f9-fbd1-4ef7-9be7-ace7ce14bb23),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8ceed7a1-dc') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 14:38:01 compute-0 nova_compute[250018]: 2026-01-20 14:38:01.600 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:38:01 compute-0 nova_compute[250018]: 2026-01-20 14:38:01.600 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:38:01 compute-0 nova_compute[250018]: 2026-01-20 14:38:01.601 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:38:01 compute-0 nova_compute[250018]: 2026-01-20 14:38:01.604 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:38:01 compute-0 nova_compute[250018]: 2026-01-20 14:38:01.604 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8ceed7a1-dc, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:38:01 compute-0 nova_compute[250018]: 2026-01-20 14:38:01.605 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8ceed7a1-dc, col_values=(('external_ids', {'iface-id': '8ceed7a1-dcfe-40ae-ba8d-a963f6e0668f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:18:da:fd', 'vm-uuid': '2a1e0954-326e-4ad2-b212-3980d6e9513b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:38:01 compute-0 nova_compute[250018]: 2026-01-20 14:38:01.606 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:38:01 compute-0 NetworkManager[48960]: <info>  [1768919881.6075] manager: (tap8ceed7a1-dc): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/93)
Jan 20 14:38:01 compute-0 nova_compute[250018]: 2026-01-20 14:38:01.609 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:38:01 compute-0 nova_compute[250018]: 2026-01-20 14:38:01.612 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:38:01 compute-0 nova_compute[250018]: 2026-01-20 14:38:01.613 250022 INFO os_vif [None req-2138753e-e042-4e43-bb62-af6157adf8a6 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:18:da:fd,bridge_name='br-int',has_traffic_filtering=True,id=8ceed7a1-dcfe-40ae-ba8d-a963f6e0668f,network=Network(b1f372f9-fbd1-4ef7-9be7-ace7ce14bb23),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8ceed7a1-dc')
Jan 20 14:38:01 compute-0 nova_compute[250018]: 2026-01-20 14:38:01.666 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:38:01 compute-0 nova_compute[250018]: 2026-01-20 14:38:01.689 250022 DEBUG nova.virt.libvirt.driver [None req-2138753e-e042-4e43-bb62-af6157adf8a6 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:38:01 compute-0 nova_compute[250018]: 2026-01-20 14:38:01.690 250022 DEBUG nova.virt.libvirt.driver [None req-2138753e-e042-4e43-bb62-af6157adf8a6 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:38:01 compute-0 nova_compute[250018]: 2026-01-20 14:38:01.690 250022 DEBUG nova.virt.libvirt.driver [None req-2138753e-e042-4e43-bb62-af6157adf8a6 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] No VIF found with MAC fa:16:3e:18:da:fd, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 14:38:01 compute-0 nova_compute[250018]: 2026-01-20 14:38:01.690 250022 INFO nova.virt.libvirt.driver [None req-2138753e-e042-4e43-bb62-af6157adf8a6 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] [instance: 2a1e0954-326e-4ad2-b212-3980d6e9513b] Using config drive
Jan 20 14:38:01 compute-0 nova_compute[250018]: 2026-01-20 14:38:01.713 250022 DEBUG nova.storage.rbd_utils [None req-2138753e-e042-4e43-bb62-af6157adf8a6 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] rbd image 2a1e0954-326e-4ad2-b212-3980d6e9513b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:38:02 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:38:02 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:38:02 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:38:02.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:38:02 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:38:03 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/2126701600' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:38:03 compute-0 ceph-mon[74360]: pgmap v1490: 321 pgs: 321 active+clean; 111 MiB data, 626 MiB used, 20 GiB / 21 GiB avail; 804 KiB/s rd, 1.7 MiB/s wr, 48 op/s
Jan 20 14:38:03 compute-0 nova_compute[250018]: 2026-01-20 14:38:03.187 250022 INFO nova.virt.libvirt.driver [None req-2138753e-e042-4e43-bb62-af6157adf8a6 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] [instance: 2a1e0954-326e-4ad2-b212-3980d6e9513b] Creating config drive at /var/lib/nova/instances/2a1e0954-326e-4ad2-b212-3980d6e9513b/disk.config
Jan 20 14:38:03 compute-0 nova_compute[250018]: 2026-01-20 14:38:03.193 250022 DEBUG oslo_concurrency.processutils [None req-2138753e-e042-4e43-bb62-af6157adf8a6 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2a1e0954-326e-4ad2-b212-3980d6e9513b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpydvdsmru execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:38:03 compute-0 nova_compute[250018]: 2026-01-20 14:38:03.328 250022 DEBUG oslo_concurrency.processutils [None req-2138753e-e042-4e43-bb62-af6157adf8a6 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2a1e0954-326e-4ad2-b212-3980d6e9513b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpydvdsmru" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:38:03 compute-0 nova_compute[250018]: 2026-01-20 14:38:03.353 250022 DEBUG nova.storage.rbd_utils [None req-2138753e-e042-4e43-bb62-af6157adf8a6 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] rbd image 2a1e0954-326e-4ad2-b212-3980d6e9513b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:38:03 compute-0 nova_compute[250018]: 2026-01-20 14:38:03.357 250022 DEBUG oslo_concurrency.processutils [None req-2138753e-e042-4e43-bb62-af6157adf8a6 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2a1e0954-326e-4ad2-b212-3980d6e9513b/disk.config 2a1e0954-326e-4ad2-b212-3980d6e9513b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:38:03 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1491: 321 pgs: 321 active+clean; 129 MiB data, 635 MiB used, 20 GiB / 21 GiB avail; 139 KiB/s rd, 2.7 MiB/s wr, 51 op/s
Jan 20 14:38:03 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:38:03 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:38:03 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:38:03.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:38:03 compute-0 nova_compute[250018]: 2026-01-20 14:38:03.691 250022 DEBUG oslo_concurrency.processutils [None req-2138753e-e042-4e43-bb62-af6157adf8a6 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2a1e0954-326e-4ad2-b212-3980d6e9513b/disk.config 2a1e0954-326e-4ad2-b212-3980d6e9513b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.334s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:38:03 compute-0 nova_compute[250018]: 2026-01-20 14:38:03.692 250022 INFO nova.virt.libvirt.driver [None req-2138753e-e042-4e43-bb62-af6157adf8a6 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] [instance: 2a1e0954-326e-4ad2-b212-3980d6e9513b] Deleting local config drive /var/lib/nova/instances/2a1e0954-326e-4ad2-b212-3980d6e9513b/disk.config because it was imported into RBD.
Jan 20 14:38:03 compute-0 virtqemud[249565]: End of file while reading data: Input/output error
Jan 20 14:38:03 compute-0 kernel: tap8ceed7a1-dc: entered promiscuous mode
Jan 20 14:38:03 compute-0 NetworkManager[48960]: <info>  [1768919883.7520] manager: (tap8ceed7a1-dc): new Tun device (/org/freedesktop/NetworkManager/Devices/94)
Jan 20 14:38:03 compute-0 nova_compute[250018]: 2026-01-20 14:38:03.751 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:38:03 compute-0 ovn_controller[148666]: 2026-01-20T14:38:03Z|00172|binding|INFO|Claiming lport 8ceed7a1-dcfe-40ae-ba8d-a963f6e0668f for this chassis.
Jan 20 14:38:03 compute-0 ovn_controller[148666]: 2026-01-20T14:38:03Z|00173|binding|INFO|8ceed7a1-dcfe-40ae-ba8d-a963f6e0668f: Claiming fa:16:3e:18:da:fd 10.100.0.6
Jan 20 14:38:03 compute-0 nova_compute[250018]: 2026-01-20 14:38:03.755 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:38:03 compute-0 systemd-machined[216401]: New machine qemu-26-instance-0000003c.
Jan 20 14:38:03 compute-0 systemd-udevd[287113]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:38:03 compute-0 NetworkManager[48960]: <info>  [1768919883.8024] device (tap8ceed7a1-dc): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 14:38:03 compute-0 NetworkManager[48960]: <info>  [1768919883.8034] device (tap8ceed7a1-dc): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 14:38:03 compute-0 systemd[1]: Started Virtual Machine qemu-26-instance-0000003c.
Jan 20 14:38:03 compute-0 nova_compute[250018]: 2026-01-20 14:38:03.823 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:38:03 compute-0 ovn_controller[148666]: 2026-01-20T14:38:03Z|00174|binding|INFO|Setting lport 8ceed7a1-dcfe-40ae-ba8d-a963f6e0668f ovn-installed in OVS
Jan 20 14:38:03 compute-0 nova_compute[250018]: 2026-01-20 14:38:03.831 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:38:03 compute-0 ovn_controller[148666]: 2026-01-20T14:38:03Z|00175|binding|INFO|Setting lport 8ceed7a1-dcfe-40ae-ba8d-a963f6e0668f up in Southbound
Jan 20 14:38:03 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:38:03.932 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:18:da:fd 10.100.0.6'], port_security=['fa:16:3e:18:da:fd 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '2a1e0954-326e-4ad2-b212-3980d6e9513b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b1f372f9-fbd1-4ef7-9be7-ace7ce14bb23', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7d78990d13704d629a8a3e8910d005c5', 'neutron:revision_number': '2', 'neutron:security_group_ids': '3763ece7-c739-40ca-8e07-6dde1584ba85', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a613141e-df34-49c4-9712-c3d232327d6b, chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=8ceed7a1-dcfe-40ae-ba8d-a963f6e0668f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:38:03 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:38:03.933 160071 INFO neutron.agent.ovn.metadata.agent [-] Port 8ceed7a1-dcfe-40ae-ba8d-a963f6e0668f in datapath b1f372f9-fbd1-4ef7-9be7-ace7ce14bb23 bound to our chassis
Jan 20 14:38:03 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:38:03.934 160071 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b1f372f9-fbd1-4ef7-9be7-ace7ce14bb23
Jan 20 14:38:03 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:38:03.946 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[ee855f14-79c4-4375-aca5-1aa6f170e8e3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:38:03 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:38:03.947 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb1f372f9-f1 in ovnmeta-b1f372f9-fbd1-4ef7-9be7-ace7ce14bb23 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 14:38:03 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:38:03.949 257604 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb1f372f9-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 14:38:03 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:38:03.949 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[4a4809e8-197a-44d1-89bb-b163f3b01bf8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:38:03 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:38:03.950 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[a307d4a8-2fcd-41f6-90c7-9ccb609b2469]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:38:03 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:38:03.960 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[7ae5dd1d-a2e9-4db2-a11c-eb9ee13dbd65]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:38:03 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:38:03.973 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[4372df40-1753-4da6-809e-239cb1776f9c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:38:04 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:38:04.001 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[9d0e8793-f3aa-413b-af86-49dfa588906f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:38:04 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:38:04.006 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[59010382-d458-47d8-b160-42366cccfb8d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:38:04 compute-0 NetworkManager[48960]: <info>  [1768919884.0068] manager: (tapb1f372f9-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/95)
Jan 20 14:38:04 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:38:04.041 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[2bce0e6a-3138-42bb-8ff3-2a692c80893a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:38:04 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:38:04.044 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[e5b0716c-d1a4-4746-a093-3be8017ce749]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:38:04 compute-0 NetworkManager[48960]: <info>  [1768919884.0699] device (tapb1f372f9-f0): carrier: link connected
Jan 20 14:38:04 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:38:04.076 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[dbf66af5-88ba-4318-af63-4d03842c7d3c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:38:04 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:38:04.093 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[fbe06830-4621-48f7-b51a-814f0db3bb77]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb1f372f9-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:52:d0:c5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 57], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 582479, 'reachable_time': 44525, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 287174, 'error': None, 'target': 'ovnmeta-b1f372f9-fbd1-4ef7-9be7-ace7ce14bb23', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:38:04 compute-0 sudo[287145]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:38:04 compute-0 sudo[287145]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:38:04 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:38:04.111 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[6f73a865-79c4-408b-9be2-90af2bd863e2]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe52:d0c5'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 582479, 'tstamp': 582479}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 287185, 'error': None, 'target': 'ovnmeta-b1f372f9-fbd1-4ef7-9be7-ace7ce14bb23', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:38:04 compute-0 sudo[287145]: pam_unix(sudo:session): session closed for user root
Jan 20 14:38:04 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:38:04.130 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[2ace942a-4119-40bc-9811-7e696c75b3e7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb1f372f9-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:52:d0:c5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 57], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 582479, 'reachable_time': 44525, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 287191, 'error': None, 'target': 'ovnmeta-b1f372f9-fbd1-4ef7-9be7-ace7ce14bb23', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:38:04 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:38:04.164 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[3fc3fad2-a4c5-4400-9884-030aa7a44e1d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:38:04 compute-0 sudo[287192]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:38:04 compute-0 sudo[287192]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:38:04 compute-0 sudo[287192]: pam_unix(sudo:session): session closed for user root
Jan 20 14:38:04 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:38:04.225 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[57e10522-a0d9-469d-bbe1-ed95c0502ba7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:38:04 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:38:04.227 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb1f372f9-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:38:04 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:38:04.228 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:38:04 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:38:04.228 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb1f372f9-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:38:04 compute-0 nova_compute[250018]: 2026-01-20 14:38:04.230 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:38:04 compute-0 kernel: tapb1f372f9-f0: entered promiscuous mode
Jan 20 14:38:04 compute-0 NetworkManager[48960]: <info>  [1768919884.2310] manager: (tapb1f372f9-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/96)
Jan 20 14:38:04 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:38:04.233 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb1f372f9-f0, col_values=(('external_ids', {'iface-id': 'f0137d70-4bff-4646-9f70-7e0c82ac1e88'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:38:04 compute-0 ovn_controller[148666]: 2026-01-20T14:38:04Z|00176|binding|INFO|Releasing lport f0137d70-4bff-4646-9f70-7e0c82ac1e88 from this chassis (sb_readonly=0)
Jan 20 14:38:04 compute-0 nova_compute[250018]: 2026-01-20 14:38:04.234 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:38:04 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:38:04.236 160071 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b1f372f9-fbd1-4ef7-9be7-ace7ce14bb23.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b1f372f9-fbd1-4ef7-9be7-ace7ce14bb23.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 14:38:04 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:38:04.237 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[f220d603-7ab5-4046-91da-6c63fe4ec827]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:38:04 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:38:04.237 160071 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 14:38:04 compute-0 ovn_metadata_agent[160049]: global
Jan 20 14:38:04 compute-0 ovn_metadata_agent[160049]:     log         /dev/log local0 debug
Jan 20 14:38:04 compute-0 ovn_metadata_agent[160049]:     log-tag     haproxy-metadata-proxy-b1f372f9-fbd1-4ef7-9be7-ace7ce14bb23
Jan 20 14:38:04 compute-0 ovn_metadata_agent[160049]:     user        root
Jan 20 14:38:04 compute-0 ovn_metadata_agent[160049]:     group       root
Jan 20 14:38:04 compute-0 ovn_metadata_agent[160049]:     maxconn     1024
Jan 20 14:38:04 compute-0 ovn_metadata_agent[160049]:     pidfile     /var/lib/neutron/external/pids/b1f372f9-fbd1-4ef7-9be7-ace7ce14bb23.pid.haproxy
Jan 20 14:38:04 compute-0 ovn_metadata_agent[160049]:     daemon
Jan 20 14:38:04 compute-0 ovn_metadata_agent[160049]: 
Jan 20 14:38:04 compute-0 ovn_metadata_agent[160049]: defaults
Jan 20 14:38:04 compute-0 ovn_metadata_agent[160049]:     log global
Jan 20 14:38:04 compute-0 ovn_metadata_agent[160049]:     mode http
Jan 20 14:38:04 compute-0 ovn_metadata_agent[160049]:     option httplog
Jan 20 14:38:04 compute-0 ovn_metadata_agent[160049]:     option dontlognull
Jan 20 14:38:04 compute-0 ovn_metadata_agent[160049]:     option http-server-close
Jan 20 14:38:04 compute-0 ovn_metadata_agent[160049]:     option forwardfor
Jan 20 14:38:04 compute-0 ovn_metadata_agent[160049]:     retries                 3
Jan 20 14:38:04 compute-0 ovn_metadata_agent[160049]:     timeout http-request    30s
Jan 20 14:38:04 compute-0 ovn_metadata_agent[160049]:     timeout connect         30s
Jan 20 14:38:04 compute-0 ovn_metadata_agent[160049]:     timeout client          32s
Jan 20 14:38:04 compute-0 ovn_metadata_agent[160049]:     timeout server          32s
Jan 20 14:38:04 compute-0 ovn_metadata_agent[160049]:     timeout http-keep-alive 30s
Jan 20 14:38:04 compute-0 ovn_metadata_agent[160049]: 
Jan 20 14:38:04 compute-0 ovn_metadata_agent[160049]: 
Jan 20 14:38:04 compute-0 ovn_metadata_agent[160049]: listen listener
Jan 20 14:38:04 compute-0 ovn_metadata_agent[160049]:     bind 169.254.169.254:80
Jan 20 14:38:04 compute-0 ovn_metadata_agent[160049]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 14:38:04 compute-0 ovn_metadata_agent[160049]:     http-request add-header X-OVN-Network-ID b1f372f9-fbd1-4ef7-9be7-ace7ce14bb23
Jan 20 14:38:04 compute-0 ovn_metadata_agent[160049]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 14:38:04 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:38:04.240 160071 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b1f372f9-fbd1-4ef7-9be7-ace7ce14bb23', 'env', 'PROCESS_TAG=haproxy-b1f372f9-fbd1-4ef7-9be7-ace7ce14bb23', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b1f372f9-fbd1-4ef7-9be7-ace7ce14bb23.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 14:38:04 compute-0 nova_compute[250018]: 2026-01-20 14:38:04.249 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:38:04 compute-0 ceph-mon[74360]: pgmap v1491: 321 pgs: 321 active+clean; 129 MiB data, 635 MiB used, 20 GiB / 21 GiB avail; 139 KiB/s rd, 2.7 MiB/s wr, 51 op/s
Jan 20 14:38:04 compute-0 nova_compute[250018]: 2026-01-20 14:38:04.298 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768919884.2981844, 2a1e0954-326e-4ad2-b212-3980d6e9513b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:38:04 compute-0 nova_compute[250018]: 2026-01-20 14:38:04.299 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 2a1e0954-326e-4ad2-b212-3980d6e9513b] VM Started (Lifecycle Event)
Jan 20 14:38:04 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:38:04 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:38:04 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:38:04.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:38:04 compute-0 nova_compute[250018]: 2026-01-20 14:38:04.583 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 2a1e0954-326e-4ad2-b212-3980d6e9513b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:38:04 compute-0 nova_compute[250018]: 2026-01-20 14:38:04.586 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768919884.3008394, 2a1e0954-326e-4ad2-b212-3980d6e9513b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:38:04 compute-0 nova_compute[250018]: 2026-01-20 14:38:04.587 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 2a1e0954-326e-4ad2-b212-3980d6e9513b] VM Paused (Lifecycle Event)
Jan 20 14:38:04 compute-0 nova_compute[250018]: 2026-01-20 14:38:04.615 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 2a1e0954-326e-4ad2-b212-3980d6e9513b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:38:04 compute-0 nova_compute[250018]: 2026-01-20 14:38:04.619 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 2a1e0954-326e-4ad2-b212-3980d6e9513b] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:38:04 compute-0 nova_compute[250018]: 2026-01-20 14:38:04.639 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 2a1e0954-326e-4ad2-b212-3980d6e9513b] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:38:04 compute-0 podman[287272]: 2026-01-20 14:38:04.706466456 +0000 UTC m=+0.095944648 container create 1fe4d14ff3700bd3da00f226b408e9e66e7df81a9563c11ccbaa036ef803e60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b1f372f9-fbd1-4ef7-9be7-ace7ce14bb23, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202)
Jan 20 14:38:04 compute-0 podman[287272]: 2026-01-20 14:38:04.631424532 +0000 UTC m=+0.020902754 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 14:38:04 compute-0 systemd[1]: Started libpod-conmon-1fe4d14ff3700bd3da00f226b408e9e66e7df81a9563c11ccbaa036ef803e60b.scope.
Jan 20 14:38:04 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:38:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed353e1b66a0c454e7b8cf8e8117c451241b66f480eb0c3d521ce00bcc2a8530/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 14:38:04 compute-0 podman[287272]: 2026-01-20 14:38:04.807403598 +0000 UTC m=+0.196881810 container init 1fe4d14ff3700bd3da00f226b408e9e66e7df81a9563c11ccbaa036ef803e60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b1f372f9-fbd1-4ef7-9be7-ace7ce14bb23, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2)
Jan 20 14:38:04 compute-0 podman[287272]: 2026-01-20 14:38:04.814787047 +0000 UTC m=+0.204265249 container start 1fe4d14ff3700bd3da00f226b408e9e66e7df81a9563c11ccbaa036ef803e60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b1f372f9-fbd1-4ef7-9be7-ace7ce14bb23, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS)
Jan 20 14:38:04 compute-0 neutron-haproxy-ovnmeta-b1f372f9-fbd1-4ef7-9be7-ace7ce14bb23[287288]: [NOTICE]   (287293) : New worker (287295) forked
Jan 20 14:38:04 compute-0 neutron-haproxy-ovnmeta-b1f372f9-fbd1-4ef7-9be7-ace7ce14bb23[287288]: [NOTICE]   (287293) : Loading success.
Jan 20 14:38:04 compute-0 nova_compute[250018]: 2026-01-20 14:38:04.898 250022 DEBUG nova.compute.manager [req-cb1e8639-3457-441c-8bb0-3b1254439231 req-0f1bbbbc-4df2-4e1e-8766-02cb402c8b49 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 2a1e0954-326e-4ad2-b212-3980d6e9513b] Received event network-vif-plugged-8ceed7a1-dcfe-40ae-ba8d-a963f6e0668f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:38:04 compute-0 nova_compute[250018]: 2026-01-20 14:38:04.899 250022 DEBUG oslo_concurrency.lockutils [req-cb1e8639-3457-441c-8bb0-3b1254439231 req-0f1bbbbc-4df2-4e1e-8766-02cb402c8b49 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "2a1e0954-326e-4ad2-b212-3980d6e9513b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:38:04 compute-0 nova_compute[250018]: 2026-01-20 14:38:04.899 250022 DEBUG oslo_concurrency.lockutils [req-cb1e8639-3457-441c-8bb0-3b1254439231 req-0f1bbbbc-4df2-4e1e-8766-02cb402c8b49 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "2a1e0954-326e-4ad2-b212-3980d6e9513b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:38:04 compute-0 nova_compute[250018]: 2026-01-20 14:38:04.900 250022 DEBUG oslo_concurrency.lockutils [req-cb1e8639-3457-441c-8bb0-3b1254439231 req-0f1bbbbc-4df2-4e1e-8766-02cb402c8b49 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "2a1e0954-326e-4ad2-b212-3980d6e9513b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:38:04 compute-0 nova_compute[250018]: 2026-01-20 14:38:04.900 250022 DEBUG nova.compute.manager [req-cb1e8639-3457-441c-8bb0-3b1254439231 req-0f1bbbbc-4df2-4e1e-8766-02cb402c8b49 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 2a1e0954-326e-4ad2-b212-3980d6e9513b] Processing event network-vif-plugged-8ceed7a1-dcfe-40ae-ba8d-a963f6e0668f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 20 14:38:04 compute-0 nova_compute[250018]: 2026-01-20 14:38:04.901 250022 DEBUG nova.compute.manager [None req-2138753e-e042-4e43-bb62-af6157adf8a6 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] [instance: 2a1e0954-326e-4ad2-b212-3980d6e9513b] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 14:38:04 compute-0 nova_compute[250018]: 2026-01-20 14:38:04.905 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768919884.9055185, 2a1e0954-326e-4ad2-b212-3980d6e9513b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:38:04 compute-0 nova_compute[250018]: 2026-01-20 14:38:04.906 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 2a1e0954-326e-4ad2-b212-3980d6e9513b] VM Resumed (Lifecycle Event)
Jan 20 14:38:04 compute-0 nova_compute[250018]: 2026-01-20 14:38:04.908 250022 DEBUG nova.virt.libvirt.driver [None req-2138753e-e042-4e43-bb62-af6157adf8a6 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] [instance: 2a1e0954-326e-4ad2-b212-3980d6e9513b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 14:38:04 compute-0 nova_compute[250018]: 2026-01-20 14:38:04.911 250022 INFO nova.virt.libvirt.driver [-] [instance: 2a1e0954-326e-4ad2-b212-3980d6e9513b] Instance spawned successfully.
Jan 20 14:38:04 compute-0 nova_compute[250018]: 2026-01-20 14:38:04.911 250022 DEBUG nova.virt.libvirt.driver [None req-2138753e-e042-4e43-bb62-af6157adf8a6 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] [instance: 2a1e0954-326e-4ad2-b212-3980d6e9513b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 14:38:04 compute-0 nova_compute[250018]: 2026-01-20 14:38:04.926 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 2a1e0954-326e-4ad2-b212-3980d6e9513b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:38:04 compute-0 nova_compute[250018]: 2026-01-20 14:38:04.933 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 2a1e0954-326e-4ad2-b212-3980d6e9513b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:38:04 compute-0 nova_compute[250018]: 2026-01-20 14:38:04.936 250022 DEBUG nova.virt.libvirt.driver [None req-2138753e-e042-4e43-bb62-af6157adf8a6 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] [instance: 2a1e0954-326e-4ad2-b212-3980d6e9513b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:38:04 compute-0 nova_compute[250018]: 2026-01-20 14:38:04.937 250022 DEBUG nova.virt.libvirt.driver [None req-2138753e-e042-4e43-bb62-af6157adf8a6 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] [instance: 2a1e0954-326e-4ad2-b212-3980d6e9513b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:38:04 compute-0 nova_compute[250018]: 2026-01-20 14:38:04.937 250022 DEBUG nova.virt.libvirt.driver [None req-2138753e-e042-4e43-bb62-af6157adf8a6 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] [instance: 2a1e0954-326e-4ad2-b212-3980d6e9513b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:38:04 compute-0 nova_compute[250018]: 2026-01-20 14:38:04.938 250022 DEBUG nova.virt.libvirt.driver [None req-2138753e-e042-4e43-bb62-af6157adf8a6 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] [instance: 2a1e0954-326e-4ad2-b212-3980d6e9513b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:38:04 compute-0 nova_compute[250018]: 2026-01-20 14:38:04.938 250022 DEBUG nova.virt.libvirt.driver [None req-2138753e-e042-4e43-bb62-af6157adf8a6 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] [instance: 2a1e0954-326e-4ad2-b212-3980d6e9513b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:38:04 compute-0 nova_compute[250018]: 2026-01-20 14:38:04.939 250022 DEBUG nova.virt.libvirt.driver [None req-2138753e-e042-4e43-bb62-af6157adf8a6 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] [instance: 2a1e0954-326e-4ad2-b212-3980d6e9513b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:38:04 compute-0 nova_compute[250018]: 2026-01-20 14:38:04.965 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 2a1e0954-326e-4ad2-b212-3980d6e9513b] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:38:05 compute-0 nova_compute[250018]: 2026-01-20 14:38:05.138 250022 INFO nova.compute.manager [None req-2138753e-e042-4e43-bb62-af6157adf8a6 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] [instance: 2a1e0954-326e-4ad2-b212-3980d6e9513b] Took 9.56 seconds to spawn the instance on the hypervisor.
Jan 20 14:38:05 compute-0 nova_compute[250018]: 2026-01-20 14:38:05.139 250022 DEBUG nova.compute.manager [None req-2138753e-e042-4e43-bb62-af6157adf8a6 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] [instance: 2a1e0954-326e-4ad2-b212-3980d6e9513b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:38:05 compute-0 nova_compute[250018]: 2026-01-20 14:38:05.248 250022 INFO nova.compute.manager [None req-2138753e-e042-4e43-bb62-af6157adf8a6 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] [instance: 2a1e0954-326e-4ad2-b212-3980d6e9513b] Took 12.02 seconds to build instance.
Jan 20 14:38:05 compute-0 nova_compute[250018]: 2026-01-20 14:38:05.420 250022 DEBUG oslo_concurrency.lockutils [None req-2138753e-e042-4e43-bb62-af6157adf8a6 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] Lock "2a1e0954-326e-4ad2-b212-3980d6e9513b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.408s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:38:05 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1492: 321 pgs: 321 active+clean; 167 MiB data, 644 MiB used, 20 GiB / 21 GiB avail; 371 KiB/s rd, 3.9 MiB/s wr, 100 op/s
Jan 20 14:38:05 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:38:05 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:38:05 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:38:05.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:38:06 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:38:06 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:38:06 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:38:06.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:38:06 compute-0 nova_compute[250018]: 2026-01-20 14:38:06.609 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:38:06 compute-0 nova_compute[250018]: 2026-01-20 14:38:06.668 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:38:06 compute-0 ceph-mon[74360]: pgmap v1492: 321 pgs: 321 active+clean; 167 MiB data, 644 MiB used, 20 GiB / 21 GiB avail; 371 KiB/s rd, 3.9 MiB/s wr, 100 op/s
Jan 20 14:38:07 compute-0 nova_compute[250018]: 2026-01-20 14:38:07.010 250022 DEBUG nova.compute.manager [req-70c070c6-f6ed-4c40-baa7-82a9d3ef2068 req-fe747c49-5b8e-4b73-8cff-21152a9ffe1f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 2a1e0954-326e-4ad2-b212-3980d6e9513b] Received event network-vif-plugged-8ceed7a1-dcfe-40ae-ba8d-a963f6e0668f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:38:07 compute-0 nova_compute[250018]: 2026-01-20 14:38:07.010 250022 DEBUG oslo_concurrency.lockutils [req-70c070c6-f6ed-4c40-baa7-82a9d3ef2068 req-fe747c49-5b8e-4b73-8cff-21152a9ffe1f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "2a1e0954-326e-4ad2-b212-3980d6e9513b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:38:07 compute-0 nova_compute[250018]: 2026-01-20 14:38:07.011 250022 DEBUG oslo_concurrency.lockutils [req-70c070c6-f6ed-4c40-baa7-82a9d3ef2068 req-fe747c49-5b8e-4b73-8cff-21152a9ffe1f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "2a1e0954-326e-4ad2-b212-3980d6e9513b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:38:07 compute-0 nova_compute[250018]: 2026-01-20 14:38:07.011 250022 DEBUG oslo_concurrency.lockutils [req-70c070c6-f6ed-4c40-baa7-82a9d3ef2068 req-fe747c49-5b8e-4b73-8cff-21152a9ffe1f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "2a1e0954-326e-4ad2-b212-3980d6e9513b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:38:07 compute-0 nova_compute[250018]: 2026-01-20 14:38:07.011 250022 DEBUG nova.compute.manager [req-70c070c6-f6ed-4c40-baa7-82a9d3ef2068 req-fe747c49-5b8e-4b73-8cff-21152a9ffe1f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 2a1e0954-326e-4ad2-b212-3980d6e9513b] No waiting events found dispatching network-vif-plugged-8ceed7a1-dcfe-40ae-ba8d-a963f6e0668f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:38:07 compute-0 nova_compute[250018]: 2026-01-20 14:38:07.012 250022 WARNING nova.compute.manager [req-70c070c6-f6ed-4c40-baa7-82a9d3ef2068 req-fe747c49-5b8e-4b73-8cff-21152a9ffe1f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 2a1e0954-326e-4ad2-b212-3980d6e9513b] Received unexpected event network-vif-plugged-8ceed7a1-dcfe-40ae-ba8d-a963f6e0668f for instance with vm_state active and task_state None.
Jan 20 14:38:07 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1493: 321 pgs: 321 active+clean; 167 MiB data, 644 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 3.9 MiB/s wr, 125 op/s
Jan 20 14:38:07 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:38:07 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:38:07 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:38:07.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:38:07 compute-0 nova_compute[250018]: 2026-01-20 14:38:07.587 250022 DEBUG oslo_concurrency.lockutils [None req-6cb643ad-83d4-4bf2-be6c-6af086eb3376 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] Acquiring lock "2a1e0954-326e-4ad2-b212-3980d6e9513b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:38:07 compute-0 nova_compute[250018]: 2026-01-20 14:38:07.588 250022 DEBUG oslo_concurrency.lockutils [None req-6cb643ad-83d4-4bf2-be6c-6af086eb3376 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] Lock "2a1e0954-326e-4ad2-b212-3980d6e9513b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:38:07 compute-0 nova_compute[250018]: 2026-01-20 14:38:07.588 250022 DEBUG oslo_concurrency.lockutils [None req-6cb643ad-83d4-4bf2-be6c-6af086eb3376 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] Acquiring lock "2a1e0954-326e-4ad2-b212-3980d6e9513b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:38:07 compute-0 nova_compute[250018]: 2026-01-20 14:38:07.588 250022 DEBUG oslo_concurrency.lockutils [None req-6cb643ad-83d4-4bf2-be6c-6af086eb3376 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] Lock "2a1e0954-326e-4ad2-b212-3980d6e9513b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:38:07 compute-0 nova_compute[250018]: 2026-01-20 14:38:07.589 250022 DEBUG oslo_concurrency.lockutils [None req-6cb643ad-83d4-4bf2-be6c-6af086eb3376 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] Lock "2a1e0954-326e-4ad2-b212-3980d6e9513b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:38:07 compute-0 nova_compute[250018]: 2026-01-20 14:38:07.590 250022 INFO nova.compute.manager [None req-6cb643ad-83d4-4bf2-be6c-6af086eb3376 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] [instance: 2a1e0954-326e-4ad2-b212-3980d6e9513b] Terminating instance
Jan 20 14:38:07 compute-0 nova_compute[250018]: 2026-01-20 14:38:07.591 250022 DEBUG nova.compute.manager [None req-6cb643ad-83d4-4bf2-be6c-6af086eb3376 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] [instance: 2a1e0954-326e-4ad2-b212-3980d6e9513b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 14:38:07 compute-0 kernel: tap8ceed7a1-dc (unregistering): left promiscuous mode
Jan 20 14:38:07 compute-0 NetworkManager[48960]: <info>  [1768919887.6939] device (tap8ceed7a1-dc): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 14:38:07 compute-0 ovn_controller[148666]: 2026-01-20T14:38:07Z|00177|binding|INFO|Releasing lport 8ceed7a1-dcfe-40ae-ba8d-a963f6e0668f from this chassis (sb_readonly=0)
Jan 20 14:38:07 compute-0 ovn_controller[148666]: 2026-01-20T14:38:07Z|00178|binding|INFO|Setting lport 8ceed7a1-dcfe-40ae-ba8d-a963f6e0668f down in Southbound
Jan 20 14:38:07 compute-0 ovn_controller[148666]: 2026-01-20T14:38:07Z|00179|binding|INFO|Removing iface tap8ceed7a1-dc ovn-installed in OVS
Jan 20 14:38:07 compute-0 nova_compute[250018]: 2026-01-20 14:38:07.740 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:38:07 compute-0 nova_compute[250018]: 2026-01-20 14:38:07.753 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:38:07 compute-0 systemd[1]: machine-qemu\x2d26\x2dinstance\x2d0000003c.scope: Deactivated successfully.
Jan 20 14:38:07 compute-0 systemd[1]: machine-qemu\x2d26\x2dinstance\x2d0000003c.scope: Consumed 3.162s CPU time.
Jan 20 14:38:07 compute-0 systemd-machined[216401]: Machine qemu-26-instance-0000003c terminated.
Jan 20 14:38:07 compute-0 nova_compute[250018]: 2026-01-20 14:38:07.809 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:38:07 compute-0 nova_compute[250018]: 2026-01-20 14:38:07.814 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:38:07 compute-0 nova_compute[250018]: 2026-01-20 14:38:07.822 250022 INFO nova.virt.libvirt.driver [-] [instance: 2a1e0954-326e-4ad2-b212-3980d6e9513b] Instance destroyed successfully.
Jan 20 14:38:07 compute-0 nova_compute[250018]: 2026-01-20 14:38:07.822 250022 DEBUG nova.objects.instance [None req-6cb643ad-83d4-4bf2-be6c-6af086eb3376 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] Lazy-loading 'resources' on Instance uuid 2a1e0954-326e-4ad2-b212-3980d6e9513b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:38:07 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:38:07 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:38:07.891 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:18:da:fd 10.100.0.6'], port_security=['fa:16:3e:18:da:fd 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '2a1e0954-326e-4ad2-b212-3980d6e9513b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b1f372f9-fbd1-4ef7-9be7-ace7ce14bb23', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7d78990d13704d629a8a3e8910d005c5', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3763ece7-c739-40ca-8e07-6dde1584ba85', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a613141e-df34-49c4-9712-c3d232327d6b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=8ceed7a1-dcfe-40ae-ba8d-a963f6e0668f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:38:07 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:38:07.893 160071 INFO neutron.agent.ovn.metadata.agent [-] Port 8ceed7a1-dcfe-40ae-ba8d-a963f6e0668f in datapath b1f372f9-fbd1-4ef7-9be7-ace7ce14bb23 unbound from our chassis
Jan 20 14:38:07 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:38:07.894 160071 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b1f372f9-fbd1-4ef7-9be7-ace7ce14bb23, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 14:38:07 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:38:07.895 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[f6786958-0b4f-4e23-b73b-687f4e10eb46]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:38:07 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:38:07.896 160071 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b1f372f9-fbd1-4ef7-9be7-ace7ce14bb23 namespace which is not needed anymore
Jan 20 14:38:07 compute-0 nova_compute[250018]: 2026-01-20 14:38:07.901 250022 DEBUG nova.virt.libvirt.vif [None req-6cb643ad-83d4-4bf2-be6c-6af086eb3376 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:37:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-636458037',display_name='tempest-ImagesOneServerNegativeTestJSON-server-636458037',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-636458037',id=60,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:38:05Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='7d78990d13704d629a8a3e8910d005c5',ramdisk_id='',reservation_id='r-jn19jtk2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-866315696',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-866315696-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T14:38:05Z,user_data=None,user_id='592a0204f38a4596ab1ab81774214a6d',uuid=2a1e0954-326e-4ad2-b212-3980d6e9513b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8ceed7a1-dcfe-40ae-ba8d-a963f6e0668f", "address": "fa:16:3e:18:da:fd", "network": {"id": "b1f372f9-fbd1-4ef7-9be7-ace7ce14bb23", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-462971735-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d78990d13704d629a8a3e8910d005c5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ceed7a1-dc", "ovs_interfaceid": "8ceed7a1-dcfe-40ae-ba8d-a963f6e0668f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 14:38:07 compute-0 nova_compute[250018]: 2026-01-20 14:38:07.902 250022 DEBUG nova.network.os_vif_util [None req-6cb643ad-83d4-4bf2-be6c-6af086eb3376 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] Converting VIF {"id": "8ceed7a1-dcfe-40ae-ba8d-a963f6e0668f", "address": "fa:16:3e:18:da:fd", "network": {"id": "b1f372f9-fbd1-4ef7-9be7-ace7ce14bb23", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-462971735-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7d78990d13704d629a8a3e8910d005c5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ceed7a1-dc", "ovs_interfaceid": "8ceed7a1-dcfe-40ae-ba8d-a963f6e0668f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:38:07 compute-0 nova_compute[250018]: 2026-01-20 14:38:07.902 250022 DEBUG nova.network.os_vif_util [None req-6cb643ad-83d4-4bf2-be6c-6af086eb3376 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:18:da:fd,bridge_name='br-int',has_traffic_filtering=True,id=8ceed7a1-dcfe-40ae-ba8d-a963f6e0668f,network=Network(b1f372f9-fbd1-4ef7-9be7-ace7ce14bb23),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8ceed7a1-dc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:38:07 compute-0 nova_compute[250018]: 2026-01-20 14:38:07.903 250022 DEBUG os_vif [None req-6cb643ad-83d4-4bf2-be6c-6af086eb3376 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:18:da:fd,bridge_name='br-int',has_traffic_filtering=True,id=8ceed7a1-dcfe-40ae-ba8d-a963f6e0668f,network=Network(b1f372f9-fbd1-4ef7-9be7-ace7ce14bb23),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8ceed7a1-dc') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 14:38:07 compute-0 nova_compute[250018]: 2026-01-20 14:38:07.905 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:38:07 compute-0 nova_compute[250018]: 2026-01-20 14:38:07.905 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8ceed7a1-dc, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:38:07 compute-0 nova_compute[250018]: 2026-01-20 14:38:07.907 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:38:07 compute-0 nova_compute[250018]: 2026-01-20 14:38:07.908 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:38:07 compute-0 nova_compute[250018]: 2026-01-20 14:38:07.911 250022 INFO os_vif [None req-6cb643ad-83d4-4bf2-be6c-6af086eb3376 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:18:da:fd,bridge_name='br-int',has_traffic_filtering=True,id=8ceed7a1-dcfe-40ae-ba8d-a963f6e0668f,network=Network(b1f372f9-fbd1-4ef7-9be7-ace7ce14bb23),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8ceed7a1-dc')
Jan 20 14:38:08 compute-0 neutron-haproxy-ovnmeta-b1f372f9-fbd1-4ef7-9be7-ace7ce14bb23[287288]: [NOTICE]   (287293) : haproxy version is 2.8.14-c23fe91
Jan 20 14:38:08 compute-0 neutron-haproxy-ovnmeta-b1f372f9-fbd1-4ef7-9be7-ace7ce14bb23[287288]: [NOTICE]   (287293) : path to executable is /usr/sbin/haproxy
Jan 20 14:38:08 compute-0 neutron-haproxy-ovnmeta-b1f372f9-fbd1-4ef7-9be7-ace7ce14bb23[287288]: [WARNING]  (287293) : Exiting Master process...
Jan 20 14:38:08 compute-0 neutron-haproxy-ovnmeta-b1f372f9-fbd1-4ef7-9be7-ace7ce14bb23[287288]: [WARNING]  (287293) : Exiting Master process...
Jan 20 14:38:08 compute-0 neutron-haproxy-ovnmeta-b1f372f9-fbd1-4ef7-9be7-ace7ce14bb23[287288]: [ALERT]    (287293) : Current worker (287295) exited with code 143 (Terminated)
Jan 20 14:38:08 compute-0 neutron-haproxy-ovnmeta-b1f372f9-fbd1-4ef7-9be7-ace7ce14bb23[287288]: [WARNING]  (287293) : All workers exited. Exiting... (0)
Jan 20 14:38:08 compute-0 systemd[1]: libpod-1fe4d14ff3700bd3da00f226b408e9e66e7df81a9563c11ccbaa036ef803e60b.scope: Deactivated successfully.
Jan 20 14:38:08 compute-0 conmon[287288]: conmon 1fe4d14ff3700bd3da00 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1fe4d14ff3700bd3da00f226b408e9e66e7df81a9563c11ccbaa036ef803e60b.scope/container/memory.events
Jan 20 14:38:08 compute-0 podman[287357]: 2026-01-20 14:38:08.033398148 +0000 UTC m=+0.043041532 container died 1fe4d14ff3700bd3da00f226b408e9e66e7df81a9563c11ccbaa036ef803e60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b1f372f9-fbd1-4ef7-9be7-ace7ce14bb23, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 20 14:38:08 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1fe4d14ff3700bd3da00f226b408e9e66e7df81a9563c11ccbaa036ef803e60b-userdata-shm.mount: Deactivated successfully.
Jan 20 14:38:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-ed353e1b66a0c454e7b8cf8e8117c451241b66f480eb0c3d521ce00bcc2a8530-merged.mount: Deactivated successfully.
Jan 20 14:38:08 compute-0 podman[287357]: 2026-01-20 14:38:08.075153084 +0000 UTC m=+0.084796458 container cleanup 1fe4d14ff3700bd3da00f226b408e9e66e7df81a9563c11ccbaa036ef803e60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b1f372f9-fbd1-4ef7-9be7-ace7ce14bb23, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Jan 20 14:38:08 compute-0 systemd[1]: libpod-conmon-1fe4d14ff3700bd3da00f226b408e9e66e7df81a9563c11ccbaa036ef803e60b.scope: Deactivated successfully.
Jan 20 14:38:08 compute-0 podman[287388]: 2026-01-20 14:38:08.131634266 +0000 UTC m=+0.034367938 container remove 1fe4d14ff3700bd3da00f226b408e9e66e7df81a9563c11ccbaa036ef803e60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b1f372f9-fbd1-4ef7-9be7-ace7ce14bb23, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true)
Jan 20 14:38:08 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:38:08.136 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[fff18dfe-b41d-4dfd-9dce-4e575f622d67]: (4, ('Tue Jan 20 02:38:07 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-b1f372f9-fbd1-4ef7-9be7-ace7ce14bb23 (1fe4d14ff3700bd3da00f226b408e9e66e7df81a9563c11ccbaa036ef803e60b)\n1fe4d14ff3700bd3da00f226b408e9e66e7df81a9563c11ccbaa036ef803e60b\nTue Jan 20 02:38:08 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-b1f372f9-fbd1-4ef7-9be7-ace7ce14bb23 (1fe4d14ff3700bd3da00f226b408e9e66e7df81a9563c11ccbaa036ef803e60b)\n1fe4d14ff3700bd3da00f226b408e9e66e7df81a9563c11ccbaa036ef803e60b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:38:08 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:38:08.138 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[9c8a765c-bc12-459b-8ba9-9a6282e01bf0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:38:08 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:38:08.139 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb1f372f9-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:38:08 compute-0 nova_compute[250018]: 2026-01-20 14:38:08.140 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:38:08 compute-0 kernel: tapb1f372f9-f0: left promiscuous mode
Jan 20 14:38:08 compute-0 nova_compute[250018]: 2026-01-20 14:38:08.154 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:38:08 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:38:08.156 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[b2b273c2-8142-4217-8d68-62b10655bce0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:38:08 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:38:08.172 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[3eacba4e-2f16-4ebf-b621-5dd162eacdf1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:38:08 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:38:08.172 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[961c8c2f-3b61-46d7-b97b-bd8614ffe140]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:38:08 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:38:08.189 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[21978c97-f4d9-47c7-b374-019c9c5338cd]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 582471, 'reachable_time': 16714, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 287403, 'error': None, 'target': 'ovnmeta-b1f372f9-fbd1-4ef7-9be7-ace7ce14bb23', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:38:08 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:38:08.191 160517 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b1f372f9-fbd1-4ef7-9be7-ace7ce14bb23 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 14:38:08 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:38:08.191 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[d6183153-2e17-4528-a4ec-acad5b48e9a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:38:08 compute-0 systemd[1]: run-netns-ovnmeta\x2db1f372f9\x2dfbd1\x2d4ef7\x2d9be7\x2dace7ce14bb23.mount: Deactivated successfully.
Jan 20 14:38:08 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:38:08 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:38:08 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:38:08.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:38:08 compute-0 nova_compute[250018]: 2026-01-20 14:38:08.714 250022 INFO nova.virt.libvirt.driver [None req-6cb643ad-83d4-4bf2-be6c-6af086eb3376 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] [instance: 2a1e0954-326e-4ad2-b212-3980d6e9513b] Deleting instance files /var/lib/nova/instances/2a1e0954-326e-4ad2-b212-3980d6e9513b_del
Jan 20 14:38:08 compute-0 nova_compute[250018]: 2026-01-20 14:38:08.715 250022 INFO nova.virt.libvirt.driver [None req-6cb643ad-83d4-4bf2-be6c-6af086eb3376 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] [instance: 2a1e0954-326e-4ad2-b212-3980d6e9513b] Deletion of /var/lib/nova/instances/2a1e0954-326e-4ad2-b212-3980d6e9513b_del complete
Jan 20 14:38:08 compute-0 nova_compute[250018]: 2026-01-20 14:38:08.761 250022 INFO nova.compute.manager [None req-6cb643ad-83d4-4bf2-be6c-6af086eb3376 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] [instance: 2a1e0954-326e-4ad2-b212-3980d6e9513b] Took 1.17 seconds to destroy the instance on the hypervisor.
Jan 20 14:38:08 compute-0 nova_compute[250018]: 2026-01-20 14:38:08.762 250022 DEBUG oslo.service.loopingcall [None req-6cb643ad-83d4-4bf2-be6c-6af086eb3376 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 14:38:08 compute-0 nova_compute[250018]: 2026-01-20 14:38:08.763 250022 DEBUG nova.compute.manager [-] [instance: 2a1e0954-326e-4ad2-b212-3980d6e9513b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 14:38:08 compute-0 nova_compute[250018]: 2026-01-20 14:38:08.763 250022 DEBUG nova.network.neutron [-] [instance: 2a1e0954-326e-4ad2-b212-3980d6e9513b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 14:38:08 compute-0 ceph-mon[74360]: pgmap v1493: 321 pgs: 321 active+clean; 167 MiB data, 644 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 3.9 MiB/s wr, 125 op/s
Jan 20 14:38:09 compute-0 nova_compute[250018]: 2026-01-20 14:38:09.133 250022 DEBUG nova.compute.manager [req-468e9cf6-6c5b-4e11-ab99-054063c28d9d req-5b033333-1f45-47dd-afe1-7cd252712d03 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 2a1e0954-326e-4ad2-b212-3980d6e9513b] Received event network-vif-unplugged-8ceed7a1-dcfe-40ae-ba8d-a963f6e0668f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:38:09 compute-0 nova_compute[250018]: 2026-01-20 14:38:09.134 250022 DEBUG oslo_concurrency.lockutils [req-468e9cf6-6c5b-4e11-ab99-054063c28d9d req-5b033333-1f45-47dd-afe1-7cd252712d03 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "2a1e0954-326e-4ad2-b212-3980d6e9513b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:38:09 compute-0 nova_compute[250018]: 2026-01-20 14:38:09.135 250022 DEBUG oslo_concurrency.lockutils [req-468e9cf6-6c5b-4e11-ab99-054063c28d9d req-5b033333-1f45-47dd-afe1-7cd252712d03 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "2a1e0954-326e-4ad2-b212-3980d6e9513b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:38:09 compute-0 nova_compute[250018]: 2026-01-20 14:38:09.135 250022 DEBUG oslo_concurrency.lockutils [req-468e9cf6-6c5b-4e11-ab99-054063c28d9d req-5b033333-1f45-47dd-afe1-7cd252712d03 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "2a1e0954-326e-4ad2-b212-3980d6e9513b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:38:09 compute-0 nova_compute[250018]: 2026-01-20 14:38:09.136 250022 DEBUG nova.compute.manager [req-468e9cf6-6c5b-4e11-ab99-054063c28d9d req-5b033333-1f45-47dd-afe1-7cd252712d03 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 2a1e0954-326e-4ad2-b212-3980d6e9513b] No waiting events found dispatching network-vif-unplugged-8ceed7a1-dcfe-40ae-ba8d-a963f6e0668f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:38:09 compute-0 nova_compute[250018]: 2026-01-20 14:38:09.136 250022 DEBUG nova.compute.manager [req-468e9cf6-6c5b-4e11-ab99-054063c28d9d req-5b033333-1f45-47dd-afe1-7cd252712d03 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 2a1e0954-326e-4ad2-b212-3980d6e9513b] Received event network-vif-unplugged-8ceed7a1-dcfe-40ae-ba8d-a963f6e0668f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 20 14:38:09 compute-0 nova_compute[250018]: 2026-01-20 14:38:09.137 250022 DEBUG nova.compute.manager [req-468e9cf6-6c5b-4e11-ab99-054063c28d9d req-5b033333-1f45-47dd-afe1-7cd252712d03 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 2a1e0954-326e-4ad2-b212-3980d6e9513b] Received event network-vif-plugged-8ceed7a1-dcfe-40ae-ba8d-a963f6e0668f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:38:09 compute-0 nova_compute[250018]: 2026-01-20 14:38:09.137 250022 DEBUG oslo_concurrency.lockutils [req-468e9cf6-6c5b-4e11-ab99-054063c28d9d req-5b033333-1f45-47dd-afe1-7cd252712d03 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "2a1e0954-326e-4ad2-b212-3980d6e9513b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:38:09 compute-0 nova_compute[250018]: 2026-01-20 14:38:09.138 250022 DEBUG oslo_concurrency.lockutils [req-468e9cf6-6c5b-4e11-ab99-054063c28d9d req-5b033333-1f45-47dd-afe1-7cd252712d03 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "2a1e0954-326e-4ad2-b212-3980d6e9513b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:38:09 compute-0 nova_compute[250018]: 2026-01-20 14:38:09.139 250022 DEBUG oslo_concurrency.lockutils [req-468e9cf6-6c5b-4e11-ab99-054063c28d9d req-5b033333-1f45-47dd-afe1-7cd252712d03 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "2a1e0954-326e-4ad2-b212-3980d6e9513b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:38:09 compute-0 nova_compute[250018]: 2026-01-20 14:38:09.139 250022 DEBUG nova.compute.manager [req-468e9cf6-6c5b-4e11-ab99-054063c28d9d req-5b033333-1f45-47dd-afe1-7cd252712d03 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 2a1e0954-326e-4ad2-b212-3980d6e9513b] No waiting events found dispatching network-vif-plugged-8ceed7a1-dcfe-40ae-ba8d-a963f6e0668f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:38:09 compute-0 nova_compute[250018]: 2026-01-20 14:38:09.140 250022 WARNING nova.compute.manager [req-468e9cf6-6c5b-4e11-ab99-054063c28d9d req-5b033333-1f45-47dd-afe1-7cd252712d03 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 2a1e0954-326e-4ad2-b212-3980d6e9513b] Received unexpected event network-vif-plugged-8ceed7a1-dcfe-40ae-ba8d-a963f6e0668f for instance with vm_state active and task_state deleting.
Jan 20 14:38:09 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1494: 321 pgs: 321 active+clean; 150 MiB data, 644 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 3.8 MiB/s wr, 160 op/s
Jan 20 14:38:09 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:38:09 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:38:09 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:38:09.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:38:09 compute-0 nova_compute[250018]: 2026-01-20 14:38:09.899 250022 DEBUG nova.network.neutron [-] [instance: 2a1e0954-326e-4ad2-b212-3980d6e9513b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:38:09 compute-0 nova_compute[250018]: 2026-01-20 14:38:09.919 250022 INFO nova.compute.manager [-] [instance: 2a1e0954-326e-4ad2-b212-3980d6e9513b] Took 1.16 seconds to deallocate network for instance.
Jan 20 14:38:09 compute-0 nova_compute[250018]: 2026-01-20 14:38:09.973 250022 DEBUG oslo_concurrency.lockutils [None req-6cb643ad-83d4-4bf2-be6c-6af086eb3376 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:38:09 compute-0 nova_compute[250018]: 2026-01-20 14:38:09.974 250022 DEBUG oslo_concurrency.lockutils [None req-6cb643ad-83d4-4bf2-be6c-6af086eb3376 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:38:09 compute-0 nova_compute[250018]: 2026-01-20 14:38:09.987 250022 DEBUG nova.compute.manager [req-2c1db15e-e050-4f18-8c91-892d3853e4aa req-1869c21f-15be-497e-9ed9-a5b02638d581 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 2a1e0954-326e-4ad2-b212-3980d6e9513b] Received event network-vif-deleted-8ceed7a1-dcfe-40ae-ba8d-a963f6e0668f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:38:10 compute-0 nova_compute[250018]: 2026-01-20 14:38:10.010 250022 DEBUG nova.scheduler.client.report [None req-6cb643ad-83d4-4bf2-be6c-6af086eb3376 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] Refreshing inventories for resource provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 20 14:38:10 compute-0 nova_compute[250018]: 2026-01-20 14:38:10.025 250022 DEBUG nova.scheduler.client.report [None req-6cb643ad-83d4-4bf2-be6c-6af086eb3376 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] Updating ProviderTree inventory for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 20 14:38:10 compute-0 nova_compute[250018]: 2026-01-20 14:38:10.026 250022 DEBUG nova.compute.provider_tree [None req-6cb643ad-83d4-4bf2-be6c-6af086eb3376 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] Updating inventory in ProviderTree for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 20 14:38:10 compute-0 nova_compute[250018]: 2026-01-20 14:38:10.039 250022 DEBUG nova.scheduler.client.report [None req-6cb643ad-83d4-4bf2-be6c-6af086eb3376 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] Refreshing aggregate associations for resource provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 20 14:38:10 compute-0 nova_compute[250018]: 2026-01-20 14:38:10.077 250022 DEBUG nova.scheduler.client.report [None req-6cb643ad-83d4-4bf2-be6c-6af086eb3376 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] Refreshing trait associations for resource provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed, traits: COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_TRUSTED_CERTS,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NODE,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_RESCUE_BFV,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SSE2,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VOLUME_EXTEND,COMPUTE_ACCELERATORS,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_SATA,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE41,HW_CPU_X86_MMX,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_IDE,COMPUTE_DEVICE_TAGGING _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 20 14:38:10 compute-0 nova_compute[250018]: 2026-01-20 14:38:10.119 250022 DEBUG oslo_concurrency.processutils [None req-6cb643ad-83d4-4bf2-be6c-6af086eb3376 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:38:10 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:38:10 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:38:10 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:38:10.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:38:10 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:38:10 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3441600990' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:38:10 compute-0 nova_compute[250018]: 2026-01-20 14:38:10.558 250022 DEBUG oslo_concurrency.processutils [None req-6cb643ad-83d4-4bf2-be6c-6af086eb3376 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:38:10 compute-0 nova_compute[250018]: 2026-01-20 14:38:10.564 250022 DEBUG nova.compute.provider_tree [None req-6cb643ad-83d4-4bf2-be6c-6af086eb3376 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:38:10 compute-0 nova_compute[250018]: 2026-01-20 14:38:10.581 250022 DEBUG nova.scheduler.client.report [None req-6cb643ad-83d4-4bf2-be6c-6af086eb3376 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:38:10 compute-0 nova_compute[250018]: 2026-01-20 14:38:10.603 250022 DEBUG oslo_concurrency.lockutils [None req-6cb643ad-83d4-4bf2-be6c-6af086eb3376 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.629s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:38:10 compute-0 nova_compute[250018]: 2026-01-20 14:38:10.736 250022 INFO nova.scheduler.client.report [None req-6cb643ad-83d4-4bf2-be6c-6af086eb3376 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] Deleted allocations for instance 2a1e0954-326e-4ad2-b212-3980d6e9513b
Jan 20 14:38:10 compute-0 nova_compute[250018]: 2026-01-20 14:38:10.841 250022 DEBUG oslo_concurrency.lockutils [None req-6cb643ad-83d4-4bf2-be6c-6af086eb3376 592a0204f38a4596ab1ab81774214a6d 7d78990d13704d629a8a3e8910d005c5 - - default default] Lock "2a1e0954-326e-4ad2-b212-3980d6e9513b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.253s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:38:10 compute-0 ceph-mon[74360]: pgmap v1494: 321 pgs: 321 active+clean; 150 MiB data, 644 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 3.8 MiB/s wr, 160 op/s
Jan 20 14:38:10 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3441600990' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:38:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 14:38:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:38:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 20 14:38:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:38:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.002816091452633538 of space, bias 1.0, pg target 0.8448274357900614 quantized to 32 (current 32)
Jan 20 14:38:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:38:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 20 14:38:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:38:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:38:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:38:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 20 14:38:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:38:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 20 14:38:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:38:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:38:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:38:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 20 14:38:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:38:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 20 14:38:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:38:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:38:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:38:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 20 14:38:11 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1495: 321 pgs: 321 active+clean; 136 MiB data, 637 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.0 MiB/s wr, 180 op/s
Jan 20 14:38:11 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:38:11 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:38:11 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:38:11.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:38:11 compute-0 nova_compute[250018]: 2026-01-20 14:38:11.687 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:38:12 compute-0 ceph-mon[74360]: pgmap v1495: 321 pgs: 321 active+clean; 136 MiB data, 637 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.0 MiB/s wr, 180 op/s
Jan 20 14:38:12 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:38:12 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:38:12 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:38:12.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:38:12 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:38:12 compute-0 nova_compute[250018]: 2026-01-20 14:38:12.908 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:38:13 compute-0 podman[287431]: 2026-01-20 14:38:13.457354856 +0000 UTC m=+0.052718563 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true)
Jan 20 14:38:13 compute-0 podman[287430]: 2026-01-20 14:38:13.501341962 +0000 UTC m=+0.091872369 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 20 14:38:13 compute-0 sudo[287468]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:38:13 compute-0 sudo[287468]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:38:13 compute-0 sudo[287468]: pam_unix(sudo:session): session closed for user root
Jan 20 14:38:13 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1496: 321 pgs: 321 active+clean; 121 MiB data, 634 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 172 op/s
Jan 20 14:38:13 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:38:13 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:38:13 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:38:13.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:38:13 compute-0 sudo[287500]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:38:13 compute-0 sudo[287500]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:38:13 compute-0 sudo[287500]: pam_unix(sudo:session): session closed for user root
Jan 20 14:38:13 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/1969300733' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:38:13 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/1969300733' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:38:13 compute-0 sudo[287525]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:38:13 compute-0 sudo[287525]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:38:13 compute-0 sudo[287525]: pam_unix(sudo:session): session closed for user root
Jan 20 14:38:13 compute-0 sudo[287550]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Jan 20 14:38:13 compute-0 sudo[287550]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:38:13 compute-0 sudo[287550]: pam_unix(sudo:session): session closed for user root
Jan 20 14:38:13 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 14:38:13 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:38:13 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 14:38:13 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:38:14 compute-0 sudo[287595]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:38:14 compute-0 sudo[287595]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:38:14 compute-0 sudo[287595]: pam_unix(sudo:session): session closed for user root
Jan 20 14:38:14 compute-0 sudo[287620]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:38:14 compute-0 sudo[287620]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:38:14 compute-0 sudo[287620]: pam_unix(sudo:session): session closed for user root
Jan 20 14:38:14 compute-0 sudo[287645]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:38:14 compute-0 sudo[287645]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:38:14 compute-0 sudo[287645]: pam_unix(sudo:session): session closed for user root
Jan 20 14:38:14 compute-0 sudo[287670]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 14:38:14 compute-0 sudo[287670]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:38:14 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:38:14 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:38:14 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:38:14.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:38:14 compute-0 sudo[287670]: pam_unix(sudo:session): session closed for user root
Jan 20 14:38:14 compute-0 ceph-mon[74360]: pgmap v1496: 321 pgs: 321 active+clean; 121 MiB data, 634 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 172 op/s
Jan 20 14:38:14 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:38:14 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:38:14 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 14:38:14 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:38:14 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 20 14:38:14 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:38:14 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 20 14:38:14 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:38:14 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 1ed618fe-9934-4c1b-9e84-1ffe7b599f2d does not exist
Jan 20 14:38:14 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 1cc34ce7-e1f7-4d50-9949-333a1769d5eb does not exist
Jan 20 14:38:14 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 905437c8-17e2-46a1-98f0-527fed3f1eaa does not exist
Jan 20 14:38:14 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 20 14:38:14 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:38:14 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 20 14:38:14 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:38:14 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 14:38:14 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:38:14 compute-0 sudo[287727]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:38:14 compute-0 sudo[287727]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:38:14 compute-0 sudo[287727]: pam_unix(sudo:session): session closed for user root
Jan 20 14:38:14 compute-0 sudo[287752]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:38:14 compute-0 sudo[287752]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:38:14 compute-0 sudo[287752]: pam_unix(sudo:session): session closed for user root
Jan 20 14:38:14 compute-0 sudo[287778]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:38:14 compute-0 sudo[287778]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:38:14 compute-0 sudo[287778]: pam_unix(sudo:session): session closed for user root
Jan 20 14:38:14 compute-0 sudo[287803]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 14:38:14 compute-0 sudo[287803]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:38:15 compute-0 podman[287868]: 2026-01-20 14:38:15.30879698 +0000 UTC m=+0.107248593 container create 71b8cd36c779441471dbaf385f9b31e468122646426a2f7d17ddc0bc06a642ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_wilbur, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 20 14:38:15 compute-0 podman[287868]: 2026-01-20 14:38:15.22313075 +0000 UTC m=+0.021582373 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:38:15 compute-0 systemd[1]: Started libpod-conmon-71b8cd36c779441471dbaf385f9b31e468122646426a2f7d17ddc0bc06a642ed.scope.
Jan 20 14:38:15 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:38:15 compute-0 podman[287868]: 2026-01-20 14:38:15.405659342 +0000 UTC m=+0.204110975 container init 71b8cd36c779441471dbaf385f9b31e468122646426a2f7d17ddc0bc06a642ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_wilbur, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:38:15 compute-0 podman[287868]: 2026-01-20 14:38:15.412921198 +0000 UTC m=+0.211372821 container start 71b8cd36c779441471dbaf385f9b31e468122646426a2f7d17ddc0bc06a642ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_wilbur, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 14:38:15 compute-0 podman[287868]: 2026-01-20 14:38:15.417506982 +0000 UTC m=+0.215958625 container attach 71b8cd36c779441471dbaf385f9b31e468122646426a2f7d17ddc0bc06a642ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_wilbur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:38:15 compute-0 dreamy_wilbur[287885]: 167 167
Jan 20 14:38:15 compute-0 systemd[1]: libpod-71b8cd36c779441471dbaf385f9b31e468122646426a2f7d17ddc0bc06a642ed.scope: Deactivated successfully.
Jan 20 14:38:15 compute-0 podman[287868]: 2026-01-20 14:38:15.420221545 +0000 UTC m=+0.218673168 container died 71b8cd36c779441471dbaf385f9b31e468122646426a2f7d17ddc0bc06a642ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_wilbur, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:38:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-cd98cd5de69ae1e23ed5f9f605bb22d9fd5027ded8964c4a042ffe7f93426e7e-merged.mount: Deactivated successfully.
Jan 20 14:38:15 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1497: 321 pgs: 321 active+clean; 121 MiB data, 626 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.2 MiB/s wr, 143 op/s
Jan 20 14:38:15 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:38:15 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:38:15 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:38:15.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:38:15 compute-0 podman[287868]: 2026-01-20 14:38:15.674648736 +0000 UTC m=+0.473100379 container remove 71b8cd36c779441471dbaf385f9b31e468122646426a2f7d17ddc0bc06a642ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_wilbur, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 20 14:38:15 compute-0 systemd[1]: libpod-conmon-71b8cd36c779441471dbaf385f9b31e468122646426a2f7d17ddc0bc06a642ed.scope: Deactivated successfully.
Jan 20 14:38:15 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:38:15 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:38:15 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:38:15 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:38:15 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:38:15 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:38:15 compute-0 podman[287908]: 2026-01-20 14:38:15.846999453 +0000 UTC m=+0.044639574 container create 68f5bbe005021902d13650fdd36312b14047871e7fb23fc5ee8007510cbece16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_poitras, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:38:15 compute-0 systemd[1]: Started libpod-conmon-68f5bbe005021902d13650fdd36312b14047871e7fb23fc5ee8007510cbece16.scope.
Jan 20 14:38:15 compute-0 podman[287908]: 2026-01-20 14:38:15.826351467 +0000 UTC m=+0.023991608 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:38:15 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:38:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e7b31a55465c6737fadcc79f0f71288d9ae1c7d17e383be7f9f7e56d6e45ca2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:38:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e7b31a55465c6737fadcc79f0f71288d9ae1c7d17e383be7f9f7e56d6e45ca2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:38:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e7b31a55465c6737fadcc79f0f71288d9ae1c7d17e383be7f9f7e56d6e45ca2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:38:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e7b31a55465c6737fadcc79f0f71288d9ae1c7d17e383be7f9f7e56d6e45ca2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:38:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e7b31a55465c6737fadcc79f0f71288d9ae1c7d17e383be7f9f7e56d6e45ca2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 14:38:16 compute-0 podman[287908]: 2026-01-20 14:38:16.341997071 +0000 UTC m=+0.539637212 container init 68f5bbe005021902d13650fdd36312b14047871e7fb23fc5ee8007510cbece16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_poitras, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 14:38:16 compute-0 podman[287908]: 2026-01-20 14:38:16.349940955 +0000 UTC m=+0.547581076 container start 68f5bbe005021902d13650fdd36312b14047871e7fb23fc5ee8007510cbece16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_poitras, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 20 14:38:16 compute-0 podman[287908]: 2026-01-20 14:38:16.37499042 +0000 UTC m=+0.572630571 container attach 68f5bbe005021902d13650fdd36312b14047871e7fb23fc5ee8007510cbece16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_poitras, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:38:16 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:38:16 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:38:16 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:38:16.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:38:16 compute-0 nova_compute[250018]: 2026-01-20 14:38:16.689 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:38:17 compute-0 ceph-mon[74360]: pgmap v1497: 321 pgs: 321 active+clean; 121 MiB data, 626 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.2 MiB/s wr, 143 op/s
Jan 20 14:38:17 compute-0 kind_poitras[287925]: --> passed data devices: 0 physical, 1 LVM
Jan 20 14:38:17 compute-0 kind_poitras[287925]: --> relative data size: 1.0
Jan 20 14:38:17 compute-0 kind_poitras[287925]: --> All data devices are unavailable
Jan 20 14:38:17 compute-0 systemd[1]: libpod-68f5bbe005021902d13650fdd36312b14047871e7fb23fc5ee8007510cbece16.scope: Deactivated successfully.
Jan 20 14:38:17 compute-0 podman[287941]: 2026-01-20 14:38:17.269233134 +0000 UTC m=+0.027114312 container died 68f5bbe005021902d13650fdd36312b14047871e7fb23fc5ee8007510cbece16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_poitras, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:38:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-0e7b31a55465c6737fadcc79f0f71288d9ae1c7d17e383be7f9f7e56d6e45ca2-merged.mount: Deactivated successfully.
Jan 20 14:38:17 compute-0 podman[287941]: 2026-01-20 14:38:17.316730634 +0000 UTC m=+0.074611802 container remove 68f5bbe005021902d13650fdd36312b14047871e7fb23fc5ee8007510cbece16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_poitras, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:38:17 compute-0 systemd[1]: libpod-conmon-68f5bbe005021902d13650fdd36312b14047871e7fb23fc5ee8007510cbece16.scope: Deactivated successfully.
Jan 20 14:38:17 compute-0 sudo[287803]: pam_unix(sudo:session): session closed for user root
Jan 20 14:38:17 compute-0 sudo[287956]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:38:17 compute-0 sudo[287956]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:38:17 compute-0 sudo[287956]: pam_unix(sudo:session): session closed for user root
Jan 20 14:38:17 compute-0 sudo[287981]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:38:17 compute-0 sudo[287981]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:38:17 compute-0 sudo[287981]: pam_unix(sudo:session): session closed for user root
Jan 20 14:38:17 compute-0 sudo[288006]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:38:17 compute-0 sudo[288006]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:38:17 compute-0 sudo[288006]: pam_unix(sudo:session): session closed for user root
Jan 20 14:38:17 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1498: 321 pgs: 321 active+clean; 121 MiB data, 626 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 17 KiB/s wr, 94 op/s
Jan 20 14:38:17 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:38:17 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:38:17 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:38:17.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:38:17 compute-0 sudo[288031]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- lvm list --format json
Jan 20 14:38:17 compute-0 sudo[288031]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:38:17 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:38:17 compute-0 nova_compute[250018]: 2026-01-20 14:38:17.912 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:38:17 compute-0 podman[288097]: 2026-01-20 14:38:17.949885437 +0000 UTC m=+0.051172760 container create 22d4fa55e2fcc4d1c6b4036f65e3e5c50fcabe66f1ef0024638947f17296ab5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_cray, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True)
Jan 20 14:38:17 compute-0 systemd[1]: Started libpod-conmon-22d4fa55e2fcc4d1c6b4036f65e3e5c50fcabe66f1ef0024638947f17296ab5f.scope.
Jan 20 14:38:18 compute-0 podman[288097]: 2026-01-20 14:38:17.930551496 +0000 UTC m=+0.031838839 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:38:18 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:38:18 compute-0 podman[288097]: 2026-01-20 14:38:18.049588537 +0000 UTC m=+0.150875880 container init 22d4fa55e2fcc4d1c6b4036f65e3e5c50fcabe66f1ef0024638947f17296ab5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_cray, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 20 14:38:18 compute-0 podman[288097]: 2026-01-20 14:38:18.059515544 +0000 UTC m=+0.160802867 container start 22d4fa55e2fcc4d1c6b4036f65e3e5c50fcabe66f1ef0024638947f17296ab5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_cray, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:38:18 compute-0 podman[288097]: 2026-01-20 14:38:18.0630733 +0000 UTC m=+0.164360643 container attach 22d4fa55e2fcc4d1c6b4036f65e3e5c50fcabe66f1ef0024638947f17296ab5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_cray, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default)
Jan 20 14:38:18 compute-0 ceph-mon[74360]: pgmap v1498: 321 pgs: 321 active+clean; 121 MiB data, 626 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 17 KiB/s wr, 94 op/s
Jan 20 14:38:18 compute-0 nostalgic_cray[288113]: 167 167
Jan 20 14:38:18 compute-0 systemd[1]: libpod-22d4fa55e2fcc4d1c6b4036f65e3e5c50fcabe66f1ef0024638947f17296ab5f.scope: Deactivated successfully.
Jan 20 14:38:18 compute-0 podman[288097]: 2026-01-20 14:38:18.068543237 +0000 UTC m=+0.169830580 container died 22d4fa55e2fcc4d1c6b4036f65e3e5c50fcabe66f1ef0024638947f17296ab5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_cray, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:38:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-d6d0cf9616e35cda145f87f6ddc99cc76603a81f2876b6fe9bcb7fb9c554963f-merged.mount: Deactivated successfully.
Jan 20 14:38:18 compute-0 podman[288097]: 2026-01-20 14:38:18.110425757 +0000 UTC m=+0.211713080 container remove 22d4fa55e2fcc4d1c6b4036f65e3e5c50fcabe66f1ef0024638947f17296ab5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_cray, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:38:18 compute-0 systemd[1]: libpod-conmon-22d4fa55e2fcc4d1c6b4036f65e3e5c50fcabe66f1ef0024638947f17296ab5f.scope: Deactivated successfully.
Jan 20 14:38:18 compute-0 podman[288136]: 2026-01-20 14:38:18.276963557 +0000 UTC m=+0.039695261 container create 8c06f6798d581cedd456709b0db237b0608ce36a021c7a8b65854be0b17a3ff3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_poitras, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 20 14:38:18 compute-0 systemd[1]: Started libpod-conmon-8c06f6798d581cedd456709b0db237b0608ce36a021c7a8b65854be0b17a3ff3.scope.
Jan 20 14:38:18 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:38:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2a83f6d26bdb32e86891934f164c38d4c65c3775695aa7b89f98f91f4dff1f7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:38:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2a83f6d26bdb32e86891934f164c38d4c65c3775695aa7b89f98f91f4dff1f7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:38:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2a83f6d26bdb32e86891934f164c38d4c65c3775695aa7b89f98f91f4dff1f7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:38:18 compute-0 podman[288136]: 2026-01-20 14:38:18.25889337 +0000 UTC m=+0.021625094 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:38:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2a83f6d26bdb32e86891934f164c38d4c65c3775695aa7b89f98f91f4dff1f7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:38:18 compute-0 podman[288136]: 2026-01-20 14:38:18.368755952 +0000 UTC m=+0.131487676 container init 8c06f6798d581cedd456709b0db237b0608ce36a021c7a8b65854be0b17a3ff3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_poitras, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 20 14:38:18 compute-0 podman[288136]: 2026-01-20 14:38:18.377074006 +0000 UTC m=+0.139805710 container start 8c06f6798d581cedd456709b0db237b0608ce36a021c7a8b65854be0b17a3ff3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_poitras, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 20 14:38:18 compute-0 podman[288136]: 2026-01-20 14:38:18.381191628 +0000 UTC m=+0.143923332 container attach 8c06f6798d581cedd456709b0db237b0608ce36a021c7a8b65854be0b17a3ff3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_poitras, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:38:18 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:38:18 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:38:18 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:38:18.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:38:18 compute-0 nova_compute[250018]: 2026-01-20 14:38:18.712 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:38:19 compute-0 nostalgic_poitras[288153]: {
Jan 20 14:38:19 compute-0 nostalgic_poitras[288153]:     "0": [
Jan 20 14:38:19 compute-0 nostalgic_poitras[288153]:         {
Jan 20 14:38:19 compute-0 nostalgic_poitras[288153]:             "devices": [
Jan 20 14:38:19 compute-0 nostalgic_poitras[288153]:                 "/dev/loop3"
Jan 20 14:38:19 compute-0 nostalgic_poitras[288153]:             ],
Jan 20 14:38:19 compute-0 nostalgic_poitras[288153]:             "lv_name": "ceph_lv0",
Jan 20 14:38:19 compute-0 nostalgic_poitras[288153]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:38:19 compute-0 nostalgic_poitras[288153]:             "lv_size": "7511998464",
Jan 20 14:38:19 compute-0 nostalgic_poitras[288153]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e399cf45-e6b6-5393-99f1-75c601d3f188,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afc3740a-ccee-46f8-b343-22648cc89376,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 20 14:38:19 compute-0 nostalgic_poitras[288153]:             "lv_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 14:38:19 compute-0 nostalgic_poitras[288153]:             "name": "ceph_lv0",
Jan 20 14:38:19 compute-0 nostalgic_poitras[288153]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:38:19 compute-0 nostalgic_poitras[288153]:             "tags": {
Jan 20 14:38:19 compute-0 nostalgic_poitras[288153]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:38:19 compute-0 nostalgic_poitras[288153]:                 "ceph.block_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 14:38:19 compute-0 nostalgic_poitras[288153]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 14:38:19 compute-0 nostalgic_poitras[288153]:                 "ceph.cluster_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 14:38:19 compute-0 nostalgic_poitras[288153]:                 "ceph.cluster_name": "ceph",
Jan 20 14:38:19 compute-0 nostalgic_poitras[288153]:                 "ceph.crush_device_class": "",
Jan 20 14:38:19 compute-0 nostalgic_poitras[288153]:                 "ceph.encrypted": "0",
Jan 20 14:38:19 compute-0 nostalgic_poitras[288153]:                 "ceph.osd_fsid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 14:38:19 compute-0 nostalgic_poitras[288153]:                 "ceph.osd_id": "0",
Jan 20 14:38:19 compute-0 nostalgic_poitras[288153]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 14:38:19 compute-0 nostalgic_poitras[288153]:                 "ceph.type": "block",
Jan 20 14:38:19 compute-0 nostalgic_poitras[288153]:                 "ceph.vdo": "0"
Jan 20 14:38:19 compute-0 nostalgic_poitras[288153]:             },
Jan 20 14:38:19 compute-0 nostalgic_poitras[288153]:             "type": "block",
Jan 20 14:38:19 compute-0 nostalgic_poitras[288153]:             "vg_name": "ceph_vg0"
Jan 20 14:38:19 compute-0 nostalgic_poitras[288153]:         }
Jan 20 14:38:19 compute-0 nostalgic_poitras[288153]:     ]
Jan 20 14:38:19 compute-0 nostalgic_poitras[288153]: }
Jan 20 14:38:19 compute-0 systemd[1]: libpod-8c06f6798d581cedd456709b0db237b0608ce36a021c7a8b65854be0b17a3ff3.scope: Deactivated successfully.
Jan 20 14:38:19 compute-0 podman[288163]: 2026-01-20 14:38:19.311054011 +0000 UTC m=+0.028515789 container died 8c06f6798d581cedd456709b0db237b0608ce36a021c7a8b65854be0b17a3ff3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_poitras, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 20 14:38:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-c2a83f6d26bdb32e86891934f164c38d4c65c3775695aa7b89f98f91f4dff1f7-merged.mount: Deactivated successfully.
Jan 20 14:38:19 compute-0 podman[288163]: 2026-01-20 14:38:19.38698464 +0000 UTC m=+0.104446368 container remove 8c06f6798d581cedd456709b0db237b0608ce36a021c7a8b65854be0b17a3ff3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_poitras, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:38:19 compute-0 systemd[1]: libpod-conmon-8c06f6798d581cedd456709b0db237b0608ce36a021c7a8b65854be0b17a3ff3.scope: Deactivated successfully.
Jan 20 14:38:19 compute-0 sudo[288031]: pam_unix(sudo:session): session closed for user root
Jan 20 14:38:19 compute-0 sudo[288179]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:38:19 compute-0 sudo[288179]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:38:19 compute-0 sudo[288179]: pam_unix(sudo:session): session closed for user root
Jan 20 14:38:19 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1499: 321 pgs: 321 active+clean; 121 MiB data, 626 MiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 16 KiB/s wr, 68 op/s
Jan 20 14:38:19 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:38:19 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:38:19 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:38:19.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:38:19 compute-0 sudo[288204]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:38:19 compute-0 sudo[288204]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:38:19 compute-0 sudo[288204]: pam_unix(sudo:session): session closed for user root
Jan 20 14:38:19 compute-0 sudo[288229]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:38:19 compute-0 sudo[288229]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:38:19 compute-0 sudo[288229]: pam_unix(sudo:session): session closed for user root
Jan 20 14:38:19 compute-0 sudo[288254]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- raw list --format json
Jan 20 14:38:19 compute-0 sudo[288254]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:38:20 compute-0 nova_compute[250018]: 2026-01-20 14:38:20.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:38:20 compute-0 nova_compute[250018]: 2026-01-20 14:38:20.053 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:38:20 compute-0 nova_compute[250018]: 2026-01-20 14:38:20.053 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 20 14:38:20 compute-0 podman[288320]: 2026-01-20 14:38:20.146407627 +0000 UTC m=+0.047122772 container create 52e2fe6309d76dc598f94f7721ae01414404ffac76e792d3d4f97f04a2bb0f4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_bouman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:38:20 compute-0 systemd[1]: Started libpod-conmon-52e2fe6309d76dc598f94f7721ae01414404ffac76e792d3d4f97f04a2bb0f4b.scope.
Jan 20 14:38:20 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:38:20 compute-0 podman[288320]: 2026-01-20 14:38:20.12757104 +0000 UTC m=+0.028286195 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:38:20 compute-0 podman[288320]: 2026-01-20 14:38:20.235231473 +0000 UTC m=+0.135946618 container init 52e2fe6309d76dc598f94f7721ae01414404ffac76e792d3d4f97f04a2bb0f4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_bouman, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 14:38:20 compute-0 podman[288320]: 2026-01-20 14:38:20.243873666 +0000 UTC m=+0.144588791 container start 52e2fe6309d76dc598f94f7721ae01414404ffac76e792d3d4f97f04a2bb0f4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_bouman, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 20 14:38:20 compute-0 podman[288320]: 2026-01-20 14:38:20.247605856 +0000 UTC m=+0.148321001 container attach 52e2fe6309d76dc598f94f7721ae01414404ffac76e792d3d4f97f04a2bb0f4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_bouman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:38:20 compute-0 condescending_bouman[288336]: 167 167
Jan 20 14:38:20 compute-0 systemd[1]: libpod-52e2fe6309d76dc598f94f7721ae01414404ffac76e792d3d4f97f04a2bb0f4b.scope: Deactivated successfully.
Jan 20 14:38:20 compute-0 conmon[288336]: conmon 52e2fe6309d76dc598f9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-52e2fe6309d76dc598f94f7721ae01414404ffac76e792d3d4f97f04a2bb0f4b.scope/container/memory.events
Jan 20 14:38:20 compute-0 podman[288320]: 2026-01-20 14:38:20.252741675 +0000 UTC m=+0.153456800 container died 52e2fe6309d76dc598f94f7721ae01414404ffac76e792d3d4f97f04a2bb0f4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_bouman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 20 14:38:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-cb7b6ad6f9f0241b3b63092a940210a28b3c205d45302924110e8bed465ccf6d-merged.mount: Deactivated successfully.
Jan 20 14:38:20 compute-0 podman[288320]: 2026-01-20 14:38:20.292961459 +0000 UTC m=+0.193676584 container remove 52e2fe6309d76dc598f94f7721ae01414404ffac76e792d3d4f97f04a2bb0f4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_bouman, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 14:38:20 compute-0 systemd[1]: libpod-conmon-52e2fe6309d76dc598f94f7721ae01414404ffac76e792d3d4f97f04a2bb0f4b.scope: Deactivated successfully.
Jan 20 14:38:20 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:38:20 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:38:20 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:38:20.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:38:20 compute-0 podman[288359]: 2026-01-20 14:38:20.487726651 +0000 UTC m=+0.055078816 container create 41ca04f11f8c91538a0137e90a160da975c6525855c31e1fa7b0fa660898b7f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_ramanujan, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:38:20 compute-0 systemd[1]: Started libpod-conmon-41ca04f11f8c91538a0137e90a160da975c6525855c31e1fa7b0fa660898b7f6.scope.
Jan 20 14:38:20 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:38:20 compute-0 podman[288359]: 2026-01-20 14:38:20.463146268 +0000 UTC m=+0.030498513 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:38:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/750846c940de1a9c4164a250c54d449cb582e1e5eadf98fafb7483949fdbf74c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:38:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/750846c940de1a9c4164a250c54d449cb582e1e5eadf98fafb7483949fdbf74c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:38:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/750846c940de1a9c4164a250c54d449cb582e1e5eadf98fafb7483949fdbf74c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:38:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/750846c940de1a9c4164a250c54d449cb582e1e5eadf98fafb7483949fdbf74c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:38:20 compute-0 podman[288359]: 2026-01-20 14:38:20.575080596 +0000 UTC m=+0.142432781 container init 41ca04f11f8c91538a0137e90a160da975c6525855c31e1fa7b0fa660898b7f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_ramanujan, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:38:20 compute-0 podman[288359]: 2026-01-20 14:38:20.583687368 +0000 UTC m=+0.151039533 container start 41ca04f11f8c91538a0137e90a160da975c6525855c31e1fa7b0fa660898b7f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_ramanujan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 20 14:38:20 compute-0 podman[288359]: 2026-01-20 14:38:20.587512822 +0000 UTC m=+0.154865087 container attach 41ca04f11f8c91538a0137e90a160da975c6525855c31e1fa7b0fa660898b7f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_ramanujan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:38:20 compute-0 ceph-mon[74360]: pgmap v1499: 321 pgs: 321 active+clean; 121 MiB data, 626 MiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 16 KiB/s wr, 68 op/s
Jan 20 14:38:21 compute-0 pedantic_ramanujan[288375]: {
Jan 20 14:38:21 compute-0 pedantic_ramanujan[288375]:     "afc3740a-ccee-46f8-b343-22648cc89376": {
Jan 20 14:38:21 compute-0 pedantic_ramanujan[288375]:         "ceph_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 14:38:21 compute-0 pedantic_ramanujan[288375]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 20 14:38:21 compute-0 pedantic_ramanujan[288375]:         "osd_id": 0,
Jan 20 14:38:21 compute-0 pedantic_ramanujan[288375]:         "osd_uuid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 14:38:21 compute-0 pedantic_ramanujan[288375]:         "type": "bluestore"
Jan 20 14:38:21 compute-0 pedantic_ramanujan[288375]:     }
Jan 20 14:38:21 compute-0 pedantic_ramanujan[288375]: }
Jan 20 14:38:21 compute-0 systemd[1]: libpod-41ca04f11f8c91538a0137e90a160da975c6525855c31e1fa7b0fa660898b7f6.scope: Deactivated successfully.
Jan 20 14:38:21 compute-0 podman[288359]: 2026-01-20 14:38:21.397459563 +0000 UTC m=+0.964811758 container died 41ca04f11f8c91538a0137e90a160da975c6525855c31e1fa7b0fa660898b7f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_ramanujan, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:38:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-750846c940de1a9c4164a250c54d449cb582e1e5eadf98fafb7483949fdbf74c-merged.mount: Deactivated successfully.
Jan 20 14:38:21 compute-0 podman[288359]: 2026-01-20 14:38:21.449560887 +0000 UTC m=+1.016913062 container remove 41ca04f11f8c91538a0137e90a160da975c6525855c31e1fa7b0fa660898b7f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_ramanujan, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:38:21 compute-0 systemd[1]: libpod-conmon-41ca04f11f8c91538a0137e90a160da975c6525855c31e1fa7b0fa660898b7f6.scope: Deactivated successfully.
Jan 20 14:38:21 compute-0 sudo[288254]: pam_unix(sudo:session): session closed for user root
Jan 20 14:38:21 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 14:38:21 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:38:21 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 14:38:21 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:38:21 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 0f34d9a5-ac65-44e9-82d0-90328134a2a6 does not exist
Jan 20 14:38:21 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 69574c51-761a-4040-b9ab-d43c4aea770b does not exist
Jan 20 14:38:21 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 70d6212e-481c-4ce1-b9f6-4c5e46069f0e does not exist
Jan 20 14:38:21 compute-0 sudo[288410]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:38:21 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1500: 321 pgs: 321 active+clean; 121 MiB data, 626 MiB used, 20 GiB / 21 GiB avail; 507 KiB/s rd, 17 KiB/s wr, 31 op/s
Jan 20 14:38:21 compute-0 sudo[288410]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:38:21 compute-0 sudo[288410]: pam_unix(sudo:session): session closed for user root
Jan 20 14:38:21 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:38:21 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:38:21 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:38:21.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:38:21 compute-0 sudo[288435]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 14:38:21 compute-0 sudo[288435]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:38:21 compute-0 sudo[288435]: pam_unix(sudo:session): session closed for user root
Jan 20 14:38:21 compute-0 nova_compute[250018]: 2026-01-20 14:38:21.691 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:38:22 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:38:22 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:38:22 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:38:22.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:38:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:38:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:38:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:38:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:38:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:38:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:38:22 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:38:22 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:38:22 compute-0 ceph-mon[74360]: pgmap v1500: 321 pgs: 321 active+clean; 121 MiB data, 626 MiB used, 20 GiB / 21 GiB avail; 507 KiB/s rd, 17 KiB/s wr, 31 op/s
Jan 20 14:38:22 compute-0 nova_compute[250018]: 2026-01-20 14:38:22.821 250022 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768919887.820475, 2a1e0954-326e-4ad2-b212-3980d6e9513b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:38:22 compute-0 nova_compute[250018]: 2026-01-20 14:38:22.822 250022 INFO nova.compute.manager [-] [instance: 2a1e0954-326e-4ad2-b212-3980d6e9513b] VM Stopped (Lifecycle Event)
Jan 20 14:38:22 compute-0 nova_compute[250018]: 2026-01-20 14:38:22.841 250022 DEBUG nova.compute.manager [None req-f72bad34-521d-4dba-9f49-66e73b3c4b73 - - - - - -] [instance: 2a1e0954-326e-4ad2-b212-3980d6e9513b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:38:22 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:38:22 compute-0 nova_compute[250018]: 2026-01-20 14:38:22.915 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:38:23 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1501: 321 pgs: 321 active+clean; 121 MiB data, 626 MiB used, 20 GiB / 21 GiB avail; 3.3 KiB/s rd, 5.5 KiB/s wr, 6 op/s
Jan 20 14:38:23 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:38:23 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:38:23 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:38:23.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:38:24 compute-0 nova_compute[250018]: 2026-01-20 14:38:24.067 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:38:24 compute-0 sudo[288461]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:38:24 compute-0 sudo[288461]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:38:24 compute-0 sudo[288461]: pam_unix(sudo:session): session closed for user root
Jan 20 14:38:24 compute-0 sudo[288486]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:38:24 compute-0 sudo[288486]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:38:24 compute-0 sudo[288486]: pam_unix(sudo:session): session closed for user root
Jan 20 14:38:24 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:38:24 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:38:24 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:38:24.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:38:24 compute-0 ceph-mon[74360]: pgmap v1501: 321 pgs: 321 active+clean; 121 MiB data, 626 MiB used, 20 GiB / 21 GiB avail; 3.3 KiB/s rd, 5.5 KiB/s wr, 6 op/s
Jan 20 14:38:25 compute-0 nova_compute[250018]: 2026-01-20 14:38:25.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:38:25 compute-0 nova_compute[250018]: 2026-01-20 14:38:25.050 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 20 14:38:25 compute-0 nova_compute[250018]: 2026-01-20 14:38:25.069 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 20 14:38:25 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1502: 321 pgs: 321 active+clean; 121 MiB data, 626 MiB used, 20 GiB / 21 GiB avail; 1023 B/s rd, 4.7 KiB/s wr, 0 op/s
Jan 20 14:38:25 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:38:25 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:38:25 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:38:25.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:38:26 compute-0 nova_compute[250018]: 2026-01-20 14:38:26.069 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:38:26 compute-0 nova_compute[250018]: 2026-01-20 14:38:26.069 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:38:26 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:38:26 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:38:26 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:38:26.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:38:26 compute-0 ceph-mon[74360]: pgmap v1502: 321 pgs: 321 active+clean; 121 MiB data, 626 MiB used, 20 GiB / 21 GiB avail; 1023 B/s rd, 4.7 KiB/s wr, 0 op/s
Jan 20 14:38:26 compute-0 nova_compute[250018]: 2026-01-20 14:38:26.693 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:38:27 compute-0 nova_compute[250018]: 2026-01-20 14:38:27.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:38:27 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1503: 321 pgs: 321 active+clean; 99 MiB data, 610 MiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 KiB/s wr, 22 op/s
Jan 20 14:38:27 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:38:27 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:38:27 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:38:27.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:38:27 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/2281049994' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:38:27 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:38:27 compute-0 nova_compute[250018]: 2026-01-20 14:38:27.917 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:38:28 compute-0 nova_compute[250018]: 2026-01-20 14:38:28.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:38:28 compute-0 nova_compute[250018]: 2026-01-20 14:38:28.089 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:38:28 compute-0 nova_compute[250018]: 2026-01-20 14:38:28.090 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:38:28 compute-0 nova_compute[250018]: 2026-01-20 14:38:28.090 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:38:28 compute-0 nova_compute[250018]: 2026-01-20 14:38:28.090 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 14:38:28 compute-0 nova_compute[250018]: 2026-01-20 14:38:28.091 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:38:28 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:38:28 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:38:28 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:38:28.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:38:28 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:38:28 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1204876683' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:38:28 compute-0 nova_compute[250018]: 2026-01-20 14:38:28.533 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:38:28 compute-0 nova_compute[250018]: 2026-01-20 14:38:28.698 250022 WARNING nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:38:28 compute-0 nova_compute[250018]: 2026-01-20 14:38:28.699 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4553MB free_disk=20.959110260009766GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 14:38:28 compute-0 nova_compute[250018]: 2026-01-20 14:38:28.699 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:38:28 compute-0 nova_compute[250018]: 2026-01-20 14:38:28.699 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:38:28 compute-0 ceph-mon[74360]: pgmap v1503: 321 pgs: 321 active+clean; 99 MiB data, 610 MiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 KiB/s wr, 22 op/s
Jan 20 14:38:28 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/1324154779' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:38:28 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/297649950' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:38:28 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/1204876683' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:38:28 compute-0 nova_compute[250018]: 2026-01-20 14:38:28.785 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 14:38:28 compute-0 nova_compute[250018]: 2026-01-20 14:38:28.786 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 14:38:28 compute-0 nova_compute[250018]: 2026-01-20 14:38:28.993 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:38:29 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:38:29 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3156133779' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:38:29 compute-0 nova_compute[250018]: 2026-01-20 14:38:29.423 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:38:29 compute-0 nova_compute[250018]: 2026-01-20 14:38:29.428 250022 DEBUG nova.compute.provider_tree [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:38:29 compute-0 nova_compute[250018]: 2026-01-20 14:38:29.449 250022 DEBUG nova.scheduler.client.report [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:38:29 compute-0 nova_compute[250018]: 2026-01-20 14:38:29.475 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 14:38:29 compute-0 nova_compute[250018]: 2026-01-20 14:38:29.475 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.776s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:38:29 compute-0 nova_compute[250018]: 2026-01-20 14:38:29.476 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:38:29 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1504: 321 pgs: 321 active+clean; 78 MiB data, 598 MiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 2.2 KiB/s wr, 25 op/s
Jan 20 14:38:29 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:38:29 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:38:29 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:38:29.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:38:29 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/416411339' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:38:29 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3156133779' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:38:30 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:38:30 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:38:30 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:38:30.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:38:30 compute-0 nova_compute[250018]: 2026-01-20 14:38:30.489 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:38:30 compute-0 nova_compute[250018]: 2026-01-20 14:38:30.489 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:38:30 compute-0 nova_compute[250018]: 2026-01-20 14:38:30.489 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 14:38:30 compute-0 sshd-session[288559]: Invalid user ubuntu from 157.245.78.139 port 55234
Jan 20 14:38:30 compute-0 sshd-session[288559]: Connection closed by invalid user ubuntu 157.245.78.139 port 55234 [preauth]
Jan 20 14:38:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:38:30.751 160071 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:38:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:38:30.751 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:38:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:38:30.752 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:38:30 compute-0 ceph-mon[74360]: pgmap v1504: 321 pgs: 321 active+clean; 78 MiB data, 598 MiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 2.2 KiB/s wr, 25 op/s
Jan 20 14:38:30 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/3805276064' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:38:31 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:38:31.005 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=21, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=20) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:38:31 compute-0 nova_compute[250018]: 2026-01-20 14:38:31.006 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:38:31 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:38:31.007 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 14:38:31 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1505: 321 pgs: 321 active+clean; 41 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 2.2 KiB/s wr, 28 op/s
Jan 20 14:38:31 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:38:31 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:38:31 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:38:31.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:38:31 compute-0 nova_compute[250018]: 2026-01-20 14:38:31.696 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:38:32 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:38:32 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:38:32 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:38:32.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:38:32 compute-0 ceph-mon[74360]: pgmap v1505: 321 pgs: 321 active+clean; 41 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 2.2 KiB/s wr, 28 op/s
Jan 20 14:38:32 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:38:32 compute-0 nova_compute[250018]: 2026-01-20 14:38:32.920 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:38:33 compute-0 nova_compute[250018]: 2026-01-20 14:38:33.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:38:33 compute-0 nova_compute[250018]: 2026-01-20 14:38:33.051 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 14:38:33 compute-0 nova_compute[250018]: 2026-01-20 14:38:33.052 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 14:38:33 compute-0 nova_compute[250018]: 2026-01-20 14:38:33.075 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 20 14:38:33 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1506: 321 pgs: 321 active+clean; 41 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 20 14:38:33 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:38:33 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:38:33 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:38:33.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:38:34 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:38:34 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:38:34 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:38:34.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:38:34 compute-0 ceph-mon[74360]: pgmap v1506: 321 pgs: 321 active+clean; 41 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 20 14:38:35 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1507: 321 pgs: 321 active+clean; 41 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 25 KiB/s rd, 1.3 KiB/s wr, 34 op/s
Jan 20 14:38:35 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:38:35 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:38:35 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:38:35.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:38:35 compute-0 ceph-mon[74360]: pgmap v1507: 321 pgs: 321 active+clean; 41 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 25 KiB/s rd, 1.3 KiB/s wr, 34 op/s
Jan 20 14:38:36 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:38:36 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:38:36 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:38:36.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:38:36 compute-0 nova_compute[250018]: 2026-01-20 14:38:36.699 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:38:37 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1508: 321 pgs: 321 active+clean; 41 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.5 KiB/s wr, 35 op/s
Jan 20 14:38:37 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:38:37 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:38:37 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:38:37.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:38:37 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:38:37 compute-0 nova_compute[250018]: 2026-01-20 14:38:37.923 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:38:38 compute-0 ceph-mon[74360]: pgmap v1508: 321 pgs: 321 active+clean; 41 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.5 KiB/s wr, 35 op/s
Jan 20 14:38:38 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:38:38 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:38:38 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:38:38.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:38:39 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:38:39.009 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=367c1a2c-b16a-4828-ab5a-626bb50023b4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '21'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:38:39 compute-0 nova_compute[250018]: 2026-01-20 14:38:39.070 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:38:39 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1509: 321 pgs: 321 active+clean; 62 MiB data, 581 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 678 KiB/s wr, 21 op/s
Jan 20 14:38:39 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:38:39 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:38:39 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:38:39.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:38:40 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:38:40 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:38:40 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:38:40.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:38:40 compute-0 ceph-mon[74360]: pgmap v1509: 321 pgs: 321 active+clean; 62 MiB data, 581 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 678 KiB/s wr, 21 op/s
Jan 20 14:38:41 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1510: 321 pgs: 321 active+clean; 84 MiB data, 594 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 41 op/s
Jan 20 14:38:41 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:38:41 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:38:41 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:38:41.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:38:41 compute-0 nova_compute[250018]: 2026-01-20 14:38:41.702 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:38:41 compute-0 ceph-mon[74360]: pgmap v1510: 321 pgs: 321 active+clean; 84 MiB data, 594 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 41 op/s
Jan 20 14:38:42 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:38:42 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:38:42 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:38:42.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:38:42 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:38:42 compute-0 nova_compute[250018]: 2026-01-20 14:38:42.926 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:38:43 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1511: 321 pgs: 321 active+clean; 88 MiB data, 595 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 42 op/s
Jan 20 14:38:43 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:38:43 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:38:43 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:38:43.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:38:43 compute-0 nova_compute[250018]: 2026-01-20 14:38:43.783 250022 DEBUG oslo_concurrency.lockutils [None req-82de21ab-deb3-4ecc-b554-14cc43af7efb a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Acquiring lock "0c6693f6-c588-4a64-86ee-cf44a6a36260" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:38:43 compute-0 nova_compute[250018]: 2026-01-20 14:38:43.783 250022 DEBUG oslo_concurrency.lockutils [None req-82de21ab-deb3-4ecc-b554-14cc43af7efb a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Lock "0c6693f6-c588-4a64-86ee-cf44a6a36260" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:38:43 compute-0 nova_compute[250018]: 2026-01-20 14:38:43.806 250022 DEBUG nova.compute.manager [None req-82de21ab-deb3-4ecc-b554-14cc43af7efb a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] [instance: 0c6693f6-c588-4a64-86ee-cf44a6a36260] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 14:38:43 compute-0 nova_compute[250018]: 2026-01-20 14:38:43.916 250022 DEBUG oslo_concurrency.lockutils [None req-82de21ab-deb3-4ecc-b554-14cc43af7efb a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:38:43 compute-0 nova_compute[250018]: 2026-01-20 14:38:43.918 250022 DEBUG oslo_concurrency.lockutils [None req-82de21ab-deb3-4ecc-b554-14cc43af7efb a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:38:43 compute-0 nova_compute[250018]: 2026-01-20 14:38:43.926 250022 DEBUG nova.virt.hardware [None req-82de21ab-deb3-4ecc-b554-14cc43af7efb a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 14:38:43 compute-0 nova_compute[250018]: 2026-01-20 14:38:43.927 250022 INFO nova.compute.claims [None req-82de21ab-deb3-4ecc-b554-14cc43af7efb a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] [instance: 0c6693f6-c588-4a64-86ee-cf44a6a36260] Claim successful on node compute-0.ctlplane.example.com
Jan 20 14:38:44 compute-0 nova_compute[250018]: 2026-01-20 14:38:44.124 250022 DEBUG oslo_concurrency.processutils [None req-82de21ab-deb3-4ecc-b554-14cc43af7efb a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:38:44 compute-0 sudo[288588]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:38:44 compute-0 sudo[288588]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:38:44 compute-0 sudo[288588]: pam_unix(sudo:session): session closed for user root
Jan 20 14:38:44 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:38:44 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:38:44 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:38:44.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:38:44 compute-0 sudo[288615]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:38:44 compute-0 sudo[288615]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:38:44 compute-0 sudo[288615]: pam_unix(sudo:session): session closed for user root
Jan 20 14:38:44 compute-0 podman[288613]: 2026-01-20 14:38:44.521013291 +0000 UTC m=+0.058808137 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 20 14:38:44 compute-0 podman[288612]: 2026-01-20 14:38:44.576212399 +0000 UTC m=+0.116090462 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, 
managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 20 14:38:44 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:38:44 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2224742800' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:38:44 compute-0 nova_compute[250018]: 2026-01-20 14:38:44.604 250022 DEBUG oslo_concurrency.processutils [None req-82de21ab-deb3-4ecc-b554-14cc43af7efb a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:38:44 compute-0 nova_compute[250018]: 2026-01-20 14:38:44.610 250022 DEBUG nova.compute.provider_tree [None req-82de21ab-deb3-4ecc-b554-14cc43af7efb a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:38:44 compute-0 nova_compute[250018]: 2026-01-20 14:38:44.634 250022 DEBUG nova.scheduler.client.report [None req-82de21ab-deb3-4ecc-b554-14cc43af7efb a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:38:44 compute-0 nova_compute[250018]: 2026-01-20 14:38:44.662 250022 DEBUG oslo_concurrency.lockutils [None req-82de21ab-deb3-4ecc-b554-14cc43af7efb a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.744s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:38:44 compute-0 nova_compute[250018]: 2026-01-20 14:38:44.663 250022 DEBUG nova.compute.manager [None req-82de21ab-deb3-4ecc-b554-14cc43af7efb a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] [instance: 0c6693f6-c588-4a64-86ee-cf44a6a36260] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 14:38:44 compute-0 nova_compute[250018]: 2026-01-20 14:38:44.716 250022 DEBUG nova.compute.manager [None req-82de21ab-deb3-4ecc-b554-14cc43af7efb a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] [instance: 0c6693f6-c588-4a64-86ee-cf44a6a36260] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 20 14:38:44 compute-0 nova_compute[250018]: 2026-01-20 14:38:44.717 250022 DEBUG nova.network.neutron [None req-82de21ab-deb3-4ecc-b554-14cc43af7efb a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] [instance: 0c6693f6-c588-4a64-86ee-cf44a6a36260] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 14:38:44 compute-0 nova_compute[250018]: 2026-01-20 14:38:44.735 250022 INFO nova.virt.libvirt.driver [None req-82de21ab-deb3-4ecc-b554-14cc43af7efb a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] [instance: 0c6693f6-c588-4a64-86ee-cf44a6a36260] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 14:38:44 compute-0 nova_compute[250018]: 2026-01-20 14:38:44.755 250022 DEBUG nova.compute.manager [None req-82de21ab-deb3-4ecc-b554-14cc43af7efb a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] [instance: 0c6693f6-c588-4a64-86ee-cf44a6a36260] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 14:38:44 compute-0 ceph-mon[74360]: pgmap v1511: 321 pgs: 321 active+clean; 88 MiB data, 595 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 42 op/s
Jan 20 14:38:44 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/2224742800' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:38:45 compute-0 nova_compute[250018]: 2026-01-20 14:38:45.010 250022 DEBUG nova.compute.manager [None req-82de21ab-deb3-4ecc-b554-14cc43af7efb a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] [instance: 0c6693f6-c588-4a64-86ee-cf44a6a36260] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 14:38:45 compute-0 nova_compute[250018]: 2026-01-20 14:38:45.011 250022 DEBUG nova.virt.libvirt.driver [None req-82de21ab-deb3-4ecc-b554-14cc43af7efb a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] [instance: 0c6693f6-c588-4a64-86ee-cf44a6a36260] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 14:38:45 compute-0 nova_compute[250018]: 2026-01-20 14:38:45.011 250022 INFO nova.virt.libvirt.driver [None req-82de21ab-deb3-4ecc-b554-14cc43af7efb a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] [instance: 0c6693f6-c588-4a64-86ee-cf44a6a36260] Creating image(s)
Jan 20 14:38:45 compute-0 nova_compute[250018]: 2026-01-20 14:38:45.040 250022 DEBUG nova.storage.rbd_utils [None req-82de21ab-deb3-4ecc-b554-14cc43af7efb a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] rbd image 0c6693f6-c588-4a64-86ee-cf44a6a36260_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:38:45 compute-0 nova_compute[250018]: 2026-01-20 14:38:45.065 250022 DEBUG nova.storage.rbd_utils [None req-82de21ab-deb3-4ecc-b554-14cc43af7efb a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] rbd image 0c6693f6-c588-4a64-86ee-cf44a6a36260_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:38:45 compute-0 nova_compute[250018]: 2026-01-20 14:38:45.091 250022 DEBUG nova.storage.rbd_utils [None req-82de21ab-deb3-4ecc-b554-14cc43af7efb a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] rbd image 0c6693f6-c588-4a64-86ee-cf44a6a36260_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:38:45 compute-0 nova_compute[250018]: 2026-01-20 14:38:45.095 250022 DEBUG oslo_concurrency.processutils [None req-82de21ab-deb3-4ecc-b554-14cc43af7efb a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:38:45 compute-0 nova_compute[250018]: 2026-01-20 14:38:45.167 250022 DEBUG oslo_concurrency.processutils [None req-82de21ab-deb3-4ecc-b554-14cc43af7efb a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:38:45 compute-0 nova_compute[250018]: 2026-01-20 14:38:45.169 250022 DEBUG oslo_concurrency.lockutils [None req-82de21ab-deb3-4ecc-b554-14cc43af7efb a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Acquiring lock "82d5c1918fd7c974214c7a48c1793a7a82560462" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:38:45 compute-0 nova_compute[250018]: 2026-01-20 14:38:45.169 250022 DEBUG oslo_concurrency.lockutils [None req-82de21ab-deb3-4ecc-b554-14cc43af7efb a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:38:45 compute-0 nova_compute[250018]: 2026-01-20 14:38:45.170 250022 DEBUG oslo_concurrency.lockutils [None req-82de21ab-deb3-4ecc-b554-14cc43af7efb a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:38:45 compute-0 nova_compute[250018]: 2026-01-20 14:38:45.303 250022 DEBUG nova.storage.rbd_utils [None req-82de21ab-deb3-4ecc-b554-14cc43af7efb a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] rbd image 0c6693f6-c588-4a64-86ee-cf44a6a36260_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:38:45 compute-0 nova_compute[250018]: 2026-01-20 14:38:45.307 250022 DEBUG oslo_concurrency.processutils [None req-82de21ab-deb3-4ecc-b554-14cc43af7efb a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 0c6693f6-c588-4a64-86ee-cf44a6a36260_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:38:45 compute-0 nova_compute[250018]: 2026-01-20 14:38:45.340 250022 DEBUG nova.policy [None req-82de21ab-deb3-4ecc-b554-14cc43af7efb a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a3274e8540014ffa8cd910526cd964f7', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ebadba01dc3642a9a3e39469ff5d4708', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 20 14:38:45 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1512: 321 pgs: 321 active+clean; 88 MiB data, 601 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 43 op/s
Jan 20 14:38:45 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:38:45 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:38:45 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:38:45.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:38:45 compute-0 nova_compute[250018]: 2026-01-20 14:38:45.882 250022 DEBUG oslo_concurrency.processutils [None req-82de21ab-deb3-4ecc-b554-14cc43af7efb a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 0c6693f6-c588-4a64-86ee-cf44a6a36260_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.574s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:38:45 compute-0 nova_compute[250018]: 2026-01-20 14:38:45.963 250022 DEBUG nova.storage.rbd_utils [None req-82de21ab-deb3-4ecc-b554-14cc43af7efb a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] resizing rbd image 0c6693f6-c588-4a64-86ee-cf44a6a36260_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 20 14:38:46 compute-0 nova_compute[250018]: 2026-01-20 14:38:46.070 250022 DEBUG nova.objects.instance [None req-82de21ab-deb3-4ecc-b554-14cc43af7efb a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Lazy-loading 'migration_context' on Instance uuid 0c6693f6-c588-4a64-86ee-cf44a6a36260 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:38:46 compute-0 nova_compute[250018]: 2026-01-20 14:38:46.098 250022 DEBUG nova.virt.libvirt.driver [None req-82de21ab-deb3-4ecc-b554-14cc43af7efb a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] [instance: 0c6693f6-c588-4a64-86ee-cf44a6a36260] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 14:38:46 compute-0 nova_compute[250018]: 2026-01-20 14:38:46.099 250022 DEBUG nova.virt.libvirt.driver [None req-82de21ab-deb3-4ecc-b554-14cc43af7efb a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] [instance: 0c6693f6-c588-4a64-86ee-cf44a6a36260] Ensure instance console log exists: /var/lib/nova/instances/0c6693f6-c588-4a64-86ee-cf44a6a36260/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 14:38:46 compute-0 nova_compute[250018]: 2026-01-20 14:38:46.100 250022 DEBUG oslo_concurrency.lockutils [None req-82de21ab-deb3-4ecc-b554-14cc43af7efb a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:38:46 compute-0 nova_compute[250018]: 2026-01-20 14:38:46.100 250022 DEBUG oslo_concurrency.lockutils [None req-82de21ab-deb3-4ecc-b554-14cc43af7efb a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:38:46 compute-0 nova_compute[250018]: 2026-01-20 14:38:46.101 250022 DEBUG oslo_concurrency.lockutils [None req-82de21ab-deb3-4ecc-b554-14cc43af7efb a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:38:46 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:38:46 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:38:46 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:38:46.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:38:46 compute-0 nova_compute[250018]: 2026-01-20 14:38:46.705 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:38:46 compute-0 ceph-mon[74360]: pgmap v1512: 321 pgs: 321 active+clean; 88 MiB data, 601 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 43 op/s
Jan 20 14:38:47 compute-0 nova_compute[250018]: 2026-01-20 14:38:47.006 250022 DEBUG nova.network.neutron [None req-82de21ab-deb3-4ecc-b554-14cc43af7efb a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] [instance: 0c6693f6-c588-4a64-86ee-cf44a6a36260] Successfully created port: 66a65e38-61c1-414e-8a99-7a5480b4b97b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 20 14:38:47 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1513: 321 pgs: 321 active+clean; 97 MiB data, 605 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 2.1 MiB/s wr, 40 op/s
Jan 20 14:38:47 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:38:47 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:38:47 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:38:47.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:38:47 compute-0 ceph-mon[74360]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #69. Immutable memtables: 0.
Jan 20 14:38:47 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:38:47.884298) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 14:38:47 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:856] [default] [JOB 37] Flushing memtable with next log file: 69
Jan 20 14:38:47 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768919927884356, "job": 37, "event": "flush_started", "num_memtables": 1, "num_entries": 2587, "num_deletes": 526, "total_data_size": 3822830, "memory_usage": 3896000, "flush_reason": "Manual Compaction"}
Jan 20 14:38:47 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:885] [default] [JOB 37] Level-0 flush table #70: started
Jan 20 14:38:47 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:38:47 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768919927920781, "cf_name": "default", "job": 37, "event": "table_file_creation", "file_number": 70, "file_size": 3737600, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 31475, "largest_seqno": 34061, "table_properties": {"data_size": 3726364, "index_size": 6834, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3333, "raw_key_size": 26949, "raw_average_key_size": 20, "raw_value_size": 3701618, "raw_average_value_size": 2791, "num_data_blocks": 296, "num_entries": 1326, "num_filter_entries": 1326, "num_deletions": 526, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768919747, "oldest_key_time": 1768919747, "file_creation_time": 1768919927, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1688fb6f-f397-4aac-a0e8-874ff91d97a7", "db_session_id": "5V3N2TVXYZBCXP55EZHK", "orig_file_number": 70, "seqno_to_time_mapping": "N/A"}}
Jan 20 14:38:47 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 37] Flush lasted 38499 microseconds, and 10233 cpu microseconds.
Jan 20 14:38:47 compute-0 ceph-mon[74360]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 14:38:47 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:38:47.920960) [db/flush_job.cc:967] [default] [JOB 37] Level-0 flush table #70: 3737600 bytes OK
Jan 20 14:38:47 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:38:47.922866) [db/memtable_list.cc:519] [default] Level-0 commit table #70 started
Jan 20 14:38:47 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:38:47.926066) [db/memtable_list.cc:722] [default] Level-0 commit table #70: memtable #1 done
Jan 20 14:38:47 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:38:47.926094) EVENT_LOG_v1 {"time_micros": 1768919927926085, "job": 37, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 14:38:47 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:38:47.926115) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 14:38:47 compute-0 ceph-mon[74360]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 37] Try to delete WAL files size 3811059, prev total WAL file size 3811059, number of live WAL files 2.
Jan 20 14:38:47 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000066.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:38:47 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:38:47.927461) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032373631' seq:72057594037927935, type:22 .. '7061786F730033303133' seq:0, type:0; will stop at (end)
Jan 20 14:38:47 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 38] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 14:38:47 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 37 Base level 0, inputs: [70(3650KB)], [68(9941KB)]
Jan 20 14:38:47 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768919927927534, "job": 38, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [70], "files_L6": [68], "score": -1, "input_data_size": 13918083, "oldest_snapshot_seqno": -1}
Jan 20 14:38:47 compute-0 nova_compute[250018]: 2026-01-20 14:38:47.929 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:38:48 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 38] Generated table #71: 6235 keys, 11936284 bytes, temperature: kUnknown
Jan 20 14:38:48 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768919928031710, "cf_name": "default", "job": 38, "event": "table_file_creation", "file_number": 71, "file_size": 11936284, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11890960, "index_size": 28638, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15621, "raw_key_size": 159185, "raw_average_key_size": 25, "raw_value_size": 11775373, "raw_average_value_size": 1888, "num_data_blocks": 1154, "num_entries": 6235, "num_filter_entries": 6235, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768917305, "oldest_key_time": 0, "file_creation_time": 1768919927, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1688fb6f-f397-4aac-a0e8-874ff91d97a7", "db_session_id": "5V3N2TVXYZBCXP55EZHK", "orig_file_number": 71, "seqno_to_time_mapping": "N/A"}}
Jan 20 14:38:48 compute-0 ceph-mon[74360]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 14:38:48 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:38:48.032367) [db/compaction/compaction_job.cc:1663] [default] [JOB 38] Compacted 1@0 + 1@6 files to L6 => 11936284 bytes
Jan 20 14:38:48 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:38:48.035740) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 133.1 rd, 114.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.6, 9.7 +0.0 blob) out(11.4 +0.0 blob), read-write-amplify(6.9) write-amplify(3.2) OK, records in: 7291, records dropped: 1056 output_compression: NoCompression
Jan 20 14:38:48 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:38:48.035770) EVENT_LOG_v1 {"time_micros": 1768919928035758, "job": 38, "event": "compaction_finished", "compaction_time_micros": 104585, "compaction_time_cpu_micros": 40150, "output_level": 6, "num_output_files": 1, "total_output_size": 11936284, "num_input_records": 7291, "num_output_records": 6235, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 14:38:48 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000070.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:38:48 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768919928036555, "job": 38, "event": "table_file_deletion", "file_number": 70}
Jan 20 14:38:48 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000068.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:38:48 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768919928038325, "job": 38, "event": "table_file_deletion", "file_number": 68}
Jan 20 14:38:48 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:38:47.927349) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:38:48 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:38:48.038532) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:38:48 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:38:48.038542) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:38:48 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:38:48.038544) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:38:48 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:38:48.038546) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:38:48 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:38:48.038547) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:38:48 compute-0 nova_compute[250018]: 2026-01-20 14:38:48.325 250022 DEBUG nova.network.neutron [None req-82de21ab-deb3-4ecc-b554-14cc43af7efb a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] [instance: 0c6693f6-c588-4a64-86ee-cf44a6a36260] Successfully updated port: 66a65e38-61c1-414e-8a99-7a5480b4b97b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 20 14:38:48 compute-0 nova_compute[250018]: 2026-01-20 14:38:48.341 250022 DEBUG oslo_concurrency.lockutils [None req-82de21ab-deb3-4ecc-b554-14cc43af7efb a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Acquiring lock "refresh_cache-0c6693f6-c588-4a64-86ee-cf44a6a36260" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:38:48 compute-0 nova_compute[250018]: 2026-01-20 14:38:48.342 250022 DEBUG oslo_concurrency.lockutils [None req-82de21ab-deb3-4ecc-b554-14cc43af7efb a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Acquired lock "refresh_cache-0c6693f6-c588-4a64-86ee-cf44a6a36260" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:38:48 compute-0 nova_compute[250018]: 2026-01-20 14:38:48.342 250022 DEBUG nova.network.neutron [None req-82de21ab-deb3-4ecc-b554-14cc43af7efb a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] [instance: 0c6693f6-c588-4a64-86ee-cf44a6a36260] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:38:48 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:38:48 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:38:48 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:38:48.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:38:48 compute-0 nova_compute[250018]: 2026-01-20 14:38:48.468 250022 DEBUG nova.compute.manager [req-0cfbd6e1-8221-4dce-a8ec-f85a5db5d02b req-a1a0bdb5-7c23-42d7-8708-00567f6a8058 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 0c6693f6-c588-4a64-86ee-cf44a6a36260] Received event network-changed-66a65e38-61c1-414e-8a99-7a5480b4b97b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:38:48 compute-0 nova_compute[250018]: 2026-01-20 14:38:48.469 250022 DEBUG nova.compute.manager [req-0cfbd6e1-8221-4dce-a8ec-f85a5db5d02b req-a1a0bdb5-7c23-42d7-8708-00567f6a8058 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 0c6693f6-c588-4a64-86ee-cf44a6a36260] Refreshing instance network info cache due to event network-changed-66a65e38-61c1-414e-8a99-7a5480b4b97b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:38:48 compute-0 nova_compute[250018]: 2026-01-20 14:38:48.469 250022 DEBUG oslo_concurrency.lockutils [req-0cfbd6e1-8221-4dce-a8ec-f85a5db5d02b req-a1a0bdb5-7c23-42d7-8708-00567f6a8058 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-0c6693f6-c588-4a64-86ee-cf44a6a36260" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:38:48 compute-0 ceph-mon[74360]: pgmap v1513: 321 pgs: 321 active+clean; 97 MiB data, 605 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 2.1 MiB/s wr, 40 op/s
Jan 20 14:38:49 compute-0 nova_compute[250018]: 2026-01-20 14:38:49.286 250022 DEBUG nova.network.neutron [None req-82de21ab-deb3-4ecc-b554-14cc43af7efb a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] [instance: 0c6693f6-c588-4a64-86ee-cf44a6a36260] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 14:38:49 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1514: 321 pgs: 321 active+clean; 114 MiB data, 614 MiB used, 20 GiB / 21 GiB avail; 23 KiB/s rd, 2.9 MiB/s wr, 39 op/s
Jan 20 14:38:49 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:38:49 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:38:49 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:38:49.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:38:49 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/3403050049' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:38:49 compute-0 ceph-mon[74360]: pgmap v1514: 321 pgs: 321 active+clean; 114 MiB data, 614 MiB used, 20 GiB / 21 GiB avail; 23 KiB/s rd, 2.9 MiB/s wr, 39 op/s
Jan 20 14:38:50 compute-0 nova_compute[250018]: 2026-01-20 14:38:50.362 250022 DEBUG nova.network.neutron [None req-82de21ab-deb3-4ecc-b554-14cc43af7efb a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] [instance: 0c6693f6-c588-4a64-86ee-cf44a6a36260] Updating instance_info_cache with network_info: [{"id": "66a65e38-61c1-414e-8a99-7a5480b4b97b", "address": "fa:16:3e:ff:49:85", "network": {"id": "dece58f4-881e-47c7-961b-4367a4c3c21a", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1623251841-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ebadba01dc3642a9a3e39469ff5d4708", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66a65e38-61", "ovs_interfaceid": "66a65e38-61c1-414e-8a99-7a5480b4b97b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:38:50 compute-0 nova_compute[250018]: 2026-01-20 14:38:50.384 250022 DEBUG oslo_concurrency.lockutils [None req-82de21ab-deb3-4ecc-b554-14cc43af7efb a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Releasing lock "refresh_cache-0c6693f6-c588-4a64-86ee-cf44a6a36260" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:38:50 compute-0 nova_compute[250018]: 2026-01-20 14:38:50.384 250022 DEBUG nova.compute.manager [None req-82de21ab-deb3-4ecc-b554-14cc43af7efb a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] [instance: 0c6693f6-c588-4a64-86ee-cf44a6a36260] Instance network_info: |[{"id": "66a65e38-61c1-414e-8a99-7a5480b4b97b", "address": "fa:16:3e:ff:49:85", "network": {"id": "dece58f4-881e-47c7-961b-4367a4c3c21a", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1623251841-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ebadba01dc3642a9a3e39469ff5d4708", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66a65e38-61", "ovs_interfaceid": "66a65e38-61c1-414e-8a99-7a5480b4b97b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 20 14:38:50 compute-0 nova_compute[250018]: 2026-01-20 14:38:50.385 250022 DEBUG oslo_concurrency.lockutils [req-0cfbd6e1-8221-4dce-a8ec-f85a5db5d02b req-a1a0bdb5-7c23-42d7-8708-00567f6a8058 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-0c6693f6-c588-4a64-86ee-cf44a6a36260" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:38:50 compute-0 nova_compute[250018]: 2026-01-20 14:38:50.385 250022 DEBUG nova.network.neutron [req-0cfbd6e1-8221-4dce-a8ec-f85a5db5d02b req-a1a0bdb5-7c23-42d7-8708-00567f6a8058 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 0c6693f6-c588-4a64-86ee-cf44a6a36260] Refreshing network info cache for port 66a65e38-61c1-414e-8a99-7a5480b4b97b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:38:50 compute-0 nova_compute[250018]: 2026-01-20 14:38:50.387 250022 DEBUG nova.virt.libvirt.driver [None req-82de21ab-deb3-4ecc-b554-14cc43af7efb a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] [instance: 0c6693f6-c588-4a64-86ee-cf44a6a36260] Start _get_guest_xml network_info=[{"id": "66a65e38-61c1-414e-8a99-7a5480b4b97b", "address": "fa:16:3e:ff:49:85", "network": {"id": "dece58f4-881e-47c7-961b-4367a4c3c21a", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1623251841-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ebadba01dc3642a9a3e39469ff5d4708", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66a65e38-61", "ovs_interfaceid": "66a65e38-61c1-414e-8a99-7a5480b4b97b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encrypted': False, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'guest_format': None, 'size': 0, 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 14:38:50 compute-0 nova_compute[250018]: 2026-01-20 14:38:50.391 250022 WARNING nova.virt.libvirt.driver [None req-82de21ab-deb3-4ecc-b554-14cc43af7efb a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:38:50 compute-0 nova_compute[250018]: 2026-01-20 14:38:50.397 250022 DEBUG nova.virt.libvirt.host [None req-82de21ab-deb3-4ecc-b554-14cc43af7efb a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 14:38:50 compute-0 nova_compute[250018]: 2026-01-20 14:38:50.398 250022 DEBUG nova.virt.libvirt.host [None req-82de21ab-deb3-4ecc-b554-14cc43af7efb a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 14:38:50 compute-0 nova_compute[250018]: 2026-01-20 14:38:50.402 250022 DEBUG nova.virt.libvirt.host [None req-82de21ab-deb3-4ecc-b554-14cc43af7efb a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 14:38:50 compute-0 nova_compute[250018]: 2026-01-20 14:38:50.402 250022 DEBUG nova.virt.libvirt.host [None req-82de21ab-deb3-4ecc-b554-14cc43af7efb a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 14:38:50 compute-0 nova_compute[250018]: 2026-01-20 14:38:50.404 250022 DEBUG nova.virt.libvirt.driver [None req-82de21ab-deb3-4ecc-b554-14cc43af7efb a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 14:38:50 compute-0 nova_compute[250018]: 2026-01-20 14:38:50.404 250022 DEBUG nova.virt.hardware [None req-82de21ab-deb3-4ecc-b554-14cc43af7efb a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 14:38:50 compute-0 nova_compute[250018]: 2026-01-20 14:38:50.404 250022 DEBUG nova.virt.hardware [None req-82de21ab-deb3-4ecc-b554-14cc43af7efb a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 14:38:50 compute-0 nova_compute[250018]: 2026-01-20 14:38:50.404 250022 DEBUG nova.virt.hardware [None req-82de21ab-deb3-4ecc-b554-14cc43af7efb a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 14:38:50 compute-0 nova_compute[250018]: 2026-01-20 14:38:50.404 250022 DEBUG nova.virt.hardware [None req-82de21ab-deb3-4ecc-b554-14cc43af7efb a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 14:38:50 compute-0 nova_compute[250018]: 2026-01-20 14:38:50.405 250022 DEBUG nova.virt.hardware [None req-82de21ab-deb3-4ecc-b554-14cc43af7efb a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 14:38:50 compute-0 nova_compute[250018]: 2026-01-20 14:38:50.405 250022 DEBUG nova.virt.hardware [None req-82de21ab-deb3-4ecc-b554-14cc43af7efb a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 14:38:50 compute-0 nova_compute[250018]: 2026-01-20 14:38:50.405 250022 DEBUG nova.virt.hardware [None req-82de21ab-deb3-4ecc-b554-14cc43af7efb a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 14:38:50 compute-0 nova_compute[250018]: 2026-01-20 14:38:50.405 250022 DEBUG nova.virt.hardware [None req-82de21ab-deb3-4ecc-b554-14cc43af7efb a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 14:38:50 compute-0 nova_compute[250018]: 2026-01-20 14:38:50.405 250022 DEBUG nova.virt.hardware [None req-82de21ab-deb3-4ecc-b554-14cc43af7efb a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 14:38:50 compute-0 nova_compute[250018]: 2026-01-20 14:38:50.405 250022 DEBUG nova.virt.hardware [None req-82de21ab-deb3-4ecc-b554-14cc43af7efb a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 14:38:50 compute-0 nova_compute[250018]: 2026-01-20 14:38:50.406 250022 DEBUG nova.virt.hardware [None req-82de21ab-deb3-4ecc-b554-14cc43af7efb a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 14:38:50 compute-0 nova_compute[250018]: 2026-01-20 14:38:50.408 250022 DEBUG oslo_concurrency.processutils [None req-82de21ab-deb3-4ecc-b554-14cc43af7efb a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:38:50 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:38:50 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:38:50 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:38:50.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:38:50 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:38:50 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4123654074' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:38:50 compute-0 nova_compute[250018]: 2026-01-20 14:38:50.826 250022 DEBUG oslo_concurrency.processutils [None req-82de21ab-deb3-4ecc-b554-14cc43af7efb a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:38:50 compute-0 nova_compute[250018]: 2026-01-20 14:38:50.852 250022 DEBUG nova.storage.rbd_utils [None req-82de21ab-deb3-4ecc-b554-14cc43af7efb a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] rbd image 0c6693f6-c588-4a64-86ee-cf44a6a36260_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:38:50 compute-0 nova_compute[250018]: 2026-01-20 14:38:50.855 250022 DEBUG oslo_concurrency.processutils [None req-82de21ab-deb3-4ecc-b554-14cc43af7efb a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:38:50 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/4123654074' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:38:51 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:38:51 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4161260471' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:38:51 compute-0 nova_compute[250018]: 2026-01-20 14:38:51.281 250022 DEBUG oslo_concurrency.processutils [None req-82de21ab-deb3-4ecc-b554-14cc43af7efb a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:38:51 compute-0 nova_compute[250018]: 2026-01-20 14:38:51.284 250022 DEBUG nova.virt.libvirt.vif [None req-82de21ab-deb3-4ecc-b554-14cc43af7efb a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:38:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-1175148693',display_name='tempest-SecurityGroupsTestJSON-server-1175148693',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-1175148693',id=61,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ebadba01dc3642a9a3e39469ff5d4708',ramdisk_id='',reservation_id='r-e05ceji0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-SecurityGroupsTestJSON-1750338240',owner_user_name='tempest-SecurityGroupsTest
JSON-1750338240-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:38:44Z,user_data=None,user_id='a3274e8540014ffa8cd910526cd964f7',uuid=0c6693f6-c588-4a64-86ee-cf44a6a36260,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "66a65e38-61c1-414e-8a99-7a5480b4b97b", "address": "fa:16:3e:ff:49:85", "network": {"id": "dece58f4-881e-47c7-961b-4367a4c3c21a", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1623251841-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ebadba01dc3642a9a3e39469ff5d4708", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66a65e38-61", "ovs_interfaceid": "66a65e38-61c1-414e-8a99-7a5480b4b97b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 14:38:51 compute-0 nova_compute[250018]: 2026-01-20 14:38:51.285 250022 DEBUG nova.network.os_vif_util [None req-82de21ab-deb3-4ecc-b554-14cc43af7efb a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Converting VIF {"id": "66a65e38-61c1-414e-8a99-7a5480b4b97b", "address": "fa:16:3e:ff:49:85", "network": {"id": "dece58f4-881e-47c7-961b-4367a4c3c21a", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1623251841-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ebadba01dc3642a9a3e39469ff5d4708", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66a65e38-61", "ovs_interfaceid": "66a65e38-61c1-414e-8a99-7a5480b4b97b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:38:51 compute-0 nova_compute[250018]: 2026-01-20 14:38:51.287 250022 DEBUG nova.network.os_vif_util [None req-82de21ab-deb3-4ecc-b554-14cc43af7efb a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ff:49:85,bridge_name='br-int',has_traffic_filtering=True,id=66a65e38-61c1-414e-8a99-7a5480b4b97b,network=Network(dece58f4-881e-47c7-961b-4367a4c3c21a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap66a65e38-61') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:38:51 compute-0 nova_compute[250018]: 2026-01-20 14:38:51.289 250022 DEBUG nova.objects.instance [None req-82de21ab-deb3-4ecc-b554-14cc43af7efb a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0c6693f6-c588-4a64-86ee-cf44a6a36260 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:38:51 compute-0 nova_compute[250018]: 2026-01-20 14:38:51.322 250022 DEBUG nova.virt.libvirt.driver [None req-82de21ab-deb3-4ecc-b554-14cc43af7efb a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] [instance: 0c6693f6-c588-4a64-86ee-cf44a6a36260] End _get_guest_xml xml=<domain type="kvm">
Jan 20 14:38:51 compute-0 nova_compute[250018]:   <uuid>0c6693f6-c588-4a64-86ee-cf44a6a36260</uuid>
Jan 20 14:38:51 compute-0 nova_compute[250018]:   <name>instance-0000003d</name>
Jan 20 14:38:51 compute-0 nova_compute[250018]:   <memory>131072</memory>
Jan 20 14:38:51 compute-0 nova_compute[250018]:   <vcpu>1</vcpu>
Jan 20 14:38:51 compute-0 nova_compute[250018]:   <metadata>
Jan 20 14:38:51 compute-0 nova_compute[250018]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 14:38:51 compute-0 nova_compute[250018]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 14:38:51 compute-0 nova_compute[250018]:       <nova:name>tempest-SecurityGroupsTestJSON-server-1175148693</nova:name>
Jan 20 14:38:51 compute-0 nova_compute[250018]:       <nova:creationTime>2026-01-20 14:38:50</nova:creationTime>
Jan 20 14:38:51 compute-0 nova_compute[250018]:       <nova:flavor name="m1.nano">
Jan 20 14:38:51 compute-0 nova_compute[250018]:         <nova:memory>128</nova:memory>
Jan 20 14:38:51 compute-0 nova_compute[250018]:         <nova:disk>1</nova:disk>
Jan 20 14:38:51 compute-0 nova_compute[250018]:         <nova:swap>0</nova:swap>
Jan 20 14:38:51 compute-0 nova_compute[250018]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 14:38:51 compute-0 nova_compute[250018]:         <nova:vcpus>1</nova:vcpus>
Jan 20 14:38:51 compute-0 nova_compute[250018]:       </nova:flavor>
Jan 20 14:38:51 compute-0 nova_compute[250018]:       <nova:owner>
Jan 20 14:38:51 compute-0 nova_compute[250018]:         <nova:user uuid="a3274e8540014ffa8cd910526cd964f7">tempest-SecurityGroupsTestJSON-1750338240-project-member</nova:user>
Jan 20 14:38:51 compute-0 nova_compute[250018]:         <nova:project uuid="ebadba01dc3642a9a3e39469ff5d4708">tempest-SecurityGroupsTestJSON-1750338240</nova:project>
Jan 20 14:38:51 compute-0 nova_compute[250018]:       </nova:owner>
Jan 20 14:38:51 compute-0 nova_compute[250018]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 14:38:51 compute-0 nova_compute[250018]:       <nova:ports>
Jan 20 14:38:51 compute-0 nova_compute[250018]:         <nova:port uuid="66a65e38-61c1-414e-8a99-7a5480b4b97b">
Jan 20 14:38:51 compute-0 nova_compute[250018]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Jan 20 14:38:51 compute-0 nova_compute[250018]:         </nova:port>
Jan 20 14:38:51 compute-0 nova_compute[250018]:       </nova:ports>
Jan 20 14:38:51 compute-0 nova_compute[250018]:     </nova:instance>
Jan 20 14:38:51 compute-0 nova_compute[250018]:   </metadata>
Jan 20 14:38:51 compute-0 nova_compute[250018]:   <sysinfo type="smbios">
Jan 20 14:38:51 compute-0 nova_compute[250018]:     <system>
Jan 20 14:38:51 compute-0 nova_compute[250018]:       <entry name="manufacturer">RDO</entry>
Jan 20 14:38:51 compute-0 nova_compute[250018]:       <entry name="product">OpenStack Compute</entry>
Jan 20 14:38:51 compute-0 nova_compute[250018]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 14:38:51 compute-0 nova_compute[250018]:       <entry name="serial">0c6693f6-c588-4a64-86ee-cf44a6a36260</entry>
Jan 20 14:38:51 compute-0 nova_compute[250018]:       <entry name="uuid">0c6693f6-c588-4a64-86ee-cf44a6a36260</entry>
Jan 20 14:38:51 compute-0 nova_compute[250018]:       <entry name="family">Virtual Machine</entry>
Jan 20 14:38:51 compute-0 nova_compute[250018]:     </system>
Jan 20 14:38:51 compute-0 nova_compute[250018]:   </sysinfo>
Jan 20 14:38:51 compute-0 nova_compute[250018]:   <os>
Jan 20 14:38:51 compute-0 nova_compute[250018]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 14:38:51 compute-0 nova_compute[250018]:     <boot dev="hd"/>
Jan 20 14:38:51 compute-0 nova_compute[250018]:     <smbios mode="sysinfo"/>
Jan 20 14:38:51 compute-0 nova_compute[250018]:   </os>
Jan 20 14:38:51 compute-0 nova_compute[250018]:   <features>
Jan 20 14:38:51 compute-0 nova_compute[250018]:     <acpi/>
Jan 20 14:38:51 compute-0 nova_compute[250018]:     <apic/>
Jan 20 14:38:51 compute-0 nova_compute[250018]:     <vmcoreinfo/>
Jan 20 14:38:51 compute-0 nova_compute[250018]:   </features>
Jan 20 14:38:51 compute-0 nova_compute[250018]:   <clock offset="utc">
Jan 20 14:38:51 compute-0 nova_compute[250018]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 14:38:51 compute-0 nova_compute[250018]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 14:38:51 compute-0 nova_compute[250018]:     <timer name="hpet" present="no"/>
Jan 20 14:38:51 compute-0 nova_compute[250018]:   </clock>
Jan 20 14:38:51 compute-0 nova_compute[250018]:   <cpu mode="custom" match="exact">
Jan 20 14:38:51 compute-0 nova_compute[250018]:     <model>Nehalem</model>
Jan 20 14:38:51 compute-0 nova_compute[250018]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 14:38:51 compute-0 nova_compute[250018]:   </cpu>
Jan 20 14:38:51 compute-0 nova_compute[250018]:   <devices>
Jan 20 14:38:51 compute-0 nova_compute[250018]:     <disk type="network" device="disk">
Jan 20 14:38:51 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 14:38:51 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/0c6693f6-c588-4a64-86ee-cf44a6a36260_disk">
Jan 20 14:38:51 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:38:51 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:38:51 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:38:51 compute-0 nova_compute[250018]:       </source>
Jan 20 14:38:51 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 14:38:51 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:38:51 compute-0 nova_compute[250018]:       </auth>
Jan 20 14:38:51 compute-0 nova_compute[250018]:       <target dev="vda" bus="virtio"/>
Jan 20 14:38:51 compute-0 nova_compute[250018]:     </disk>
Jan 20 14:38:51 compute-0 nova_compute[250018]:     <disk type="network" device="cdrom">
Jan 20 14:38:51 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 14:38:51 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/0c6693f6-c588-4a64-86ee-cf44a6a36260_disk.config">
Jan 20 14:38:51 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:38:51 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:38:51 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:38:51 compute-0 nova_compute[250018]:       </source>
Jan 20 14:38:51 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 14:38:51 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:38:51 compute-0 nova_compute[250018]:       </auth>
Jan 20 14:38:51 compute-0 nova_compute[250018]:       <target dev="sda" bus="sata"/>
Jan 20 14:38:51 compute-0 nova_compute[250018]:     </disk>
Jan 20 14:38:51 compute-0 nova_compute[250018]:     <interface type="ethernet">
Jan 20 14:38:51 compute-0 nova_compute[250018]:       <mac address="fa:16:3e:ff:49:85"/>
Jan 20 14:38:51 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 14:38:51 compute-0 nova_compute[250018]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 14:38:51 compute-0 nova_compute[250018]:       <mtu size="1442"/>
Jan 20 14:38:51 compute-0 nova_compute[250018]:       <target dev="tap66a65e38-61"/>
Jan 20 14:38:51 compute-0 nova_compute[250018]:     </interface>
Jan 20 14:38:51 compute-0 nova_compute[250018]:     <serial type="pty">
Jan 20 14:38:51 compute-0 nova_compute[250018]:       <log file="/var/lib/nova/instances/0c6693f6-c588-4a64-86ee-cf44a6a36260/console.log" append="off"/>
Jan 20 14:38:51 compute-0 nova_compute[250018]:     </serial>
Jan 20 14:38:51 compute-0 nova_compute[250018]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 14:38:51 compute-0 nova_compute[250018]:     <video>
Jan 20 14:38:51 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 14:38:51 compute-0 nova_compute[250018]:     </video>
Jan 20 14:38:51 compute-0 nova_compute[250018]:     <input type="tablet" bus="usb"/>
Jan 20 14:38:51 compute-0 nova_compute[250018]:     <rng model="virtio">
Jan 20 14:38:51 compute-0 nova_compute[250018]:       <backend model="random">/dev/urandom</backend>
Jan 20 14:38:51 compute-0 nova_compute[250018]:     </rng>
Jan 20 14:38:51 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root"/>
Jan 20 14:38:51 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:38:51 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:38:51 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:38:51 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:38:51 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:38:51 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:38:51 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:38:51 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:38:51 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:38:51 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:38:51 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:38:51 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:38:51 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:38:51 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:38:51 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:38:51 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:38:51 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:38:51 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:38:51 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:38:51 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:38:51 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:38:51 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:38:51 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:38:51 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:38:51 compute-0 nova_compute[250018]:     <controller type="usb" index="0"/>
Jan 20 14:38:51 compute-0 nova_compute[250018]:     <memballoon model="virtio">
Jan 20 14:38:51 compute-0 nova_compute[250018]:       <stats period="10"/>
Jan 20 14:38:51 compute-0 nova_compute[250018]:     </memballoon>
Jan 20 14:38:51 compute-0 nova_compute[250018]:   </devices>
Jan 20 14:38:51 compute-0 nova_compute[250018]: </domain>
Jan 20 14:38:51 compute-0 nova_compute[250018]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 14:38:51 compute-0 nova_compute[250018]: 2026-01-20 14:38:51.324 250022 DEBUG nova.compute.manager [None req-82de21ab-deb3-4ecc-b554-14cc43af7efb a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] [instance: 0c6693f6-c588-4a64-86ee-cf44a6a36260] Preparing to wait for external event network-vif-plugged-66a65e38-61c1-414e-8a99-7a5480b4b97b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 20 14:38:51 compute-0 nova_compute[250018]: 2026-01-20 14:38:51.324 250022 DEBUG oslo_concurrency.lockutils [None req-82de21ab-deb3-4ecc-b554-14cc43af7efb a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Acquiring lock "0c6693f6-c588-4a64-86ee-cf44a6a36260-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:38:51 compute-0 nova_compute[250018]: 2026-01-20 14:38:51.324 250022 DEBUG oslo_concurrency.lockutils [None req-82de21ab-deb3-4ecc-b554-14cc43af7efb a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Lock "0c6693f6-c588-4a64-86ee-cf44a6a36260-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:38:51 compute-0 nova_compute[250018]: 2026-01-20 14:38:51.325 250022 DEBUG oslo_concurrency.lockutils [None req-82de21ab-deb3-4ecc-b554-14cc43af7efb a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Lock "0c6693f6-c588-4a64-86ee-cf44a6a36260-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:38:51 compute-0 nova_compute[250018]: 2026-01-20 14:38:51.325 250022 DEBUG nova.virt.libvirt.vif [None req-82de21ab-deb3-4ecc-b554-14cc43af7efb a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:38:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-1175148693',display_name='tempest-SecurityGroupsTestJSON-server-1175148693',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-1175148693',id=61,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ebadba01dc3642a9a3e39469ff5d4708',ramdisk_id='',reservation_id='r-e05ceji0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-SecurityGroupsTestJSON-1750338240',owner_user_name='tempest-Security
GroupsTestJSON-1750338240-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:38:44Z,user_data=None,user_id='a3274e8540014ffa8cd910526cd964f7',uuid=0c6693f6-c588-4a64-86ee-cf44a6a36260,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "66a65e38-61c1-414e-8a99-7a5480b4b97b", "address": "fa:16:3e:ff:49:85", "network": {"id": "dece58f4-881e-47c7-961b-4367a4c3c21a", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1623251841-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ebadba01dc3642a9a3e39469ff5d4708", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66a65e38-61", "ovs_interfaceid": "66a65e38-61c1-414e-8a99-7a5480b4b97b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 14:38:51 compute-0 nova_compute[250018]: 2026-01-20 14:38:51.325 250022 DEBUG nova.network.os_vif_util [None req-82de21ab-deb3-4ecc-b554-14cc43af7efb a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Converting VIF {"id": "66a65e38-61c1-414e-8a99-7a5480b4b97b", "address": "fa:16:3e:ff:49:85", "network": {"id": "dece58f4-881e-47c7-961b-4367a4c3c21a", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1623251841-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ebadba01dc3642a9a3e39469ff5d4708", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66a65e38-61", "ovs_interfaceid": "66a65e38-61c1-414e-8a99-7a5480b4b97b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:38:51 compute-0 nova_compute[250018]: 2026-01-20 14:38:51.326 250022 DEBUG nova.network.os_vif_util [None req-82de21ab-deb3-4ecc-b554-14cc43af7efb a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ff:49:85,bridge_name='br-int',has_traffic_filtering=True,id=66a65e38-61c1-414e-8a99-7a5480b4b97b,network=Network(dece58f4-881e-47c7-961b-4367a4c3c21a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap66a65e38-61') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:38:51 compute-0 nova_compute[250018]: 2026-01-20 14:38:51.326 250022 DEBUG os_vif [None req-82de21ab-deb3-4ecc-b554-14cc43af7efb a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ff:49:85,bridge_name='br-int',has_traffic_filtering=True,id=66a65e38-61c1-414e-8a99-7a5480b4b97b,network=Network(dece58f4-881e-47c7-961b-4367a4c3c21a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap66a65e38-61') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 14:38:51 compute-0 nova_compute[250018]: 2026-01-20 14:38:51.327 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:38:51 compute-0 nova_compute[250018]: 2026-01-20 14:38:51.327 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:38:51 compute-0 nova_compute[250018]: 2026-01-20 14:38:51.328 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:38:51 compute-0 nova_compute[250018]: 2026-01-20 14:38:51.331 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:38:51 compute-0 nova_compute[250018]: 2026-01-20 14:38:51.331 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap66a65e38-61, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:38:51 compute-0 nova_compute[250018]: 2026-01-20 14:38:51.332 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap66a65e38-61, col_values=(('external_ids', {'iface-id': '66a65e38-61c1-414e-8a99-7a5480b4b97b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ff:49:85', 'vm-uuid': '0c6693f6-c588-4a64-86ee-cf44a6a36260'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:38:51 compute-0 nova_compute[250018]: 2026-01-20 14:38:51.334 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:38:51 compute-0 NetworkManager[48960]: <info>  [1768919931.3358] manager: (tap66a65e38-61): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/97)
Jan 20 14:38:51 compute-0 nova_compute[250018]: 2026-01-20 14:38:51.336 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:38:51 compute-0 nova_compute[250018]: 2026-01-20 14:38:51.341 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:38:51 compute-0 nova_compute[250018]: 2026-01-20 14:38:51.343 250022 INFO os_vif [None req-82de21ab-deb3-4ecc-b554-14cc43af7efb a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ff:49:85,bridge_name='br-int',has_traffic_filtering=True,id=66a65e38-61c1-414e-8a99-7a5480b4b97b,network=Network(dece58f4-881e-47c7-961b-4367a4c3c21a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap66a65e38-61')
Jan 20 14:38:51 compute-0 nova_compute[250018]: 2026-01-20 14:38:51.392 250022 DEBUG nova.virt.libvirt.driver [None req-82de21ab-deb3-4ecc-b554-14cc43af7efb a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:38:51 compute-0 nova_compute[250018]: 2026-01-20 14:38:51.392 250022 DEBUG nova.virt.libvirt.driver [None req-82de21ab-deb3-4ecc-b554-14cc43af7efb a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:38:51 compute-0 nova_compute[250018]: 2026-01-20 14:38:51.393 250022 DEBUG nova.virt.libvirt.driver [None req-82de21ab-deb3-4ecc-b554-14cc43af7efb a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] No VIF found with MAC fa:16:3e:ff:49:85, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 14:38:51 compute-0 nova_compute[250018]: 2026-01-20 14:38:51.393 250022 INFO nova.virt.libvirt.driver [None req-82de21ab-deb3-4ecc-b554-14cc43af7efb a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] [instance: 0c6693f6-c588-4a64-86ee-cf44a6a36260] Using config drive
Jan 20 14:38:51 compute-0 nova_compute[250018]: 2026-01-20 14:38:51.422 250022 DEBUG nova.storage.rbd_utils [None req-82de21ab-deb3-4ecc-b554-14cc43af7efb a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] rbd image 0c6693f6-c588-4a64-86ee-cf44a6a36260_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:38:51 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1515: 321 pgs: 321 active+clean; 147 MiB data, 622 MiB used, 20 GiB / 21 GiB avail; 46 KiB/s rd, 3.1 MiB/s wr, 74 op/s
Jan 20 14:38:51 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:38:51 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:38:51 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:38:51.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:38:51 compute-0 nova_compute[250018]: 2026-01-20 14:38:51.706 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:38:51 compute-0 nova_compute[250018]: 2026-01-20 14:38:51.897 250022 DEBUG nova.network.neutron [req-0cfbd6e1-8221-4dce-a8ec-f85a5db5d02b req-a1a0bdb5-7c23-42d7-8708-00567f6a8058 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 0c6693f6-c588-4a64-86ee-cf44a6a36260] Updated VIF entry in instance network info cache for port 66a65e38-61c1-414e-8a99-7a5480b4b97b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 14:38:51 compute-0 nova_compute[250018]: 2026-01-20 14:38:51.898 250022 DEBUG nova.network.neutron [req-0cfbd6e1-8221-4dce-a8ec-f85a5db5d02b req-a1a0bdb5-7c23-42d7-8708-00567f6a8058 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 0c6693f6-c588-4a64-86ee-cf44a6a36260] Updating instance_info_cache with network_info: [{"id": "66a65e38-61c1-414e-8a99-7a5480b4b97b", "address": "fa:16:3e:ff:49:85", "network": {"id": "dece58f4-881e-47c7-961b-4367a4c3c21a", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1623251841-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ebadba01dc3642a9a3e39469ff5d4708", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66a65e38-61", "ovs_interfaceid": "66a65e38-61c1-414e-8a99-7a5480b4b97b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:38:51 compute-0 nova_compute[250018]: 2026-01-20 14:38:51.909 250022 INFO nova.virt.libvirt.driver [None req-82de21ab-deb3-4ecc-b554-14cc43af7efb a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] [instance: 0c6693f6-c588-4a64-86ee-cf44a6a36260] Creating config drive at /var/lib/nova/instances/0c6693f6-c588-4a64-86ee-cf44a6a36260/disk.config
Jan 20 14:38:51 compute-0 nova_compute[250018]: 2026-01-20 14:38:51.914 250022 DEBUG oslo_concurrency.processutils [None req-82de21ab-deb3-4ecc-b554-14cc43af7efb a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0c6693f6-c588-4a64-86ee-cf44a6a36260/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8jrl1mrf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:38:51 compute-0 nova_compute[250018]: 2026-01-20 14:38:51.937 250022 DEBUG oslo_concurrency.lockutils [req-0cfbd6e1-8221-4dce-a8ec-f85a5db5d02b req-a1a0bdb5-7c23-42d7-8708-00567f6a8058 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-0c6693f6-c588-4a64-86ee-cf44a6a36260" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:38:51 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/4161260471' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:38:51 compute-0 ceph-mon[74360]: pgmap v1515: 321 pgs: 321 active+clean; 147 MiB data, 622 MiB used, 20 GiB / 21 GiB avail; 46 KiB/s rd, 3.1 MiB/s wr, 74 op/s
Jan 20 14:38:52 compute-0 nova_compute[250018]: 2026-01-20 14:38:52.044 250022 DEBUG oslo_concurrency.processutils [None req-82de21ab-deb3-4ecc-b554-14cc43af7efb a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0c6693f6-c588-4a64-86ee-cf44a6a36260/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8jrl1mrf" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:38:52 compute-0 nova_compute[250018]: 2026-01-20 14:38:52.079 250022 DEBUG nova.storage.rbd_utils [None req-82de21ab-deb3-4ecc-b554-14cc43af7efb a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] rbd image 0c6693f6-c588-4a64-86ee-cf44a6a36260_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:38:52 compute-0 nova_compute[250018]: 2026-01-20 14:38:52.083 250022 DEBUG oslo_concurrency.processutils [None req-82de21ab-deb3-4ecc-b554-14cc43af7efb a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0c6693f6-c588-4a64-86ee-cf44a6a36260/disk.config 0c6693f6-c588-4a64-86ee-cf44a6a36260_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:38:52 compute-0 nova_compute[250018]: 2026-01-20 14:38:52.259 250022 DEBUG oslo_concurrency.processutils [None req-82de21ab-deb3-4ecc-b554-14cc43af7efb a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0c6693f6-c588-4a64-86ee-cf44a6a36260/disk.config 0c6693f6-c588-4a64-86ee-cf44a6a36260_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.175s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:38:52 compute-0 nova_compute[250018]: 2026-01-20 14:38:52.260 250022 INFO nova.virt.libvirt.driver [None req-82de21ab-deb3-4ecc-b554-14cc43af7efb a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] [instance: 0c6693f6-c588-4a64-86ee-cf44a6a36260] Deleting local config drive /var/lib/nova/instances/0c6693f6-c588-4a64-86ee-cf44a6a36260/disk.config because it was imported into RBD.
Jan 20 14:38:52 compute-0 NetworkManager[48960]: <info>  [1768919932.3355] manager: (tap66a65e38-61): new Tun device (/org/freedesktop/NetworkManager/Devices/98)
Jan 20 14:38:52 compute-0 kernel: tap66a65e38-61: entered promiscuous mode
Jan 20 14:38:52 compute-0 ovn_controller[148666]: 2026-01-20T14:38:52Z|00180|binding|INFO|Claiming lport 66a65e38-61c1-414e-8a99-7a5480b4b97b for this chassis.
Jan 20 14:38:52 compute-0 nova_compute[250018]: 2026-01-20 14:38:52.337 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:38:52 compute-0 ovn_controller[148666]: 2026-01-20T14:38:52Z|00181|binding|INFO|66a65e38-61c1-414e-8a99-7a5480b4b97b: Claiming fa:16:3e:ff:49:85 10.100.0.13
Jan 20 14:38:52 compute-0 nova_compute[250018]: 2026-01-20 14:38:52.347 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:38:52 compute-0 nova_compute[250018]: 2026-01-20 14:38:52.351 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:38:52 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:38:52.364 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ff:49:85 10.100.0.13'], port_security=['fa:16:3e:ff:49:85 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '0c6693f6-c588-4a64-86ee-cf44a6a36260', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-dece58f4-881e-47c7-961b-4367a4c3c21a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ebadba01dc3642a9a3e39469ff5d4708', 'neutron:revision_number': '2', 'neutron:security_group_ids': '5567989a-fdc9-4eba-934a-717bea3108fe', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2446ce7a-4f1f-4833-83c5-907fbc775260, chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=66a65e38-61c1-414e-8a99-7a5480b4b97b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:38:52 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:38:52.366 160071 INFO neutron.agent.ovn.metadata.agent [-] Port 66a65e38-61c1-414e-8a99-7a5480b4b97b in datapath dece58f4-881e-47c7-961b-4367a4c3c21a bound to our chassis
Jan 20 14:38:52 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:38:52.367 160071 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network dece58f4-881e-47c7-961b-4367a4c3c21a
Jan 20 14:38:52 compute-0 systemd-udevd[288988]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:38:52 compute-0 systemd-machined[216401]: New machine qemu-27-instance-0000003d.
Jan 20 14:38:52 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:38:52.381 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[f602134d-d4b1-4143-a663-3e829f534ba1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:38:52 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:38:52.383 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapdece58f4-81 in ovnmeta-dece58f4-881e-47c7-961b-4367a4c3c21a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 14:38:52 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:38:52.384 257604 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapdece58f4-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 14:38:52 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:38:52.384 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[1788f653-4bab-4ea7-b035-9bebd94703bb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:38:52 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:38:52.385 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[628fbac6-1b55-47e0-9895-8d61ac036a69]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:38:52 compute-0 NetworkManager[48960]: <info>  [1768919932.3919] device (tap66a65e38-61): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 14:38:52 compute-0 NetworkManager[48960]: <info>  [1768919932.3931] device (tap66a65e38-61): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 14:38:52 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:38:52.400 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[7d8fffa3-1843-4ca3-a441-abc971c156db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:38:52 compute-0 ovn_controller[148666]: 2026-01-20T14:38:52Z|00182|binding|INFO|Setting lport 66a65e38-61c1-414e-8a99-7a5480b4b97b ovn-installed in OVS
Jan 20 14:38:52 compute-0 ovn_controller[148666]: 2026-01-20T14:38:52Z|00183|binding|INFO|Setting lport 66a65e38-61c1-414e-8a99-7a5480b4b97b up in Southbound
Jan 20 14:38:52 compute-0 systemd[1]: Started Virtual Machine qemu-27-instance-0000003d.
Jan 20 14:38:52 compute-0 nova_compute[250018]: 2026-01-20 14:38:52.421 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:38:52 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:38:52.426 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[c6560cba-1d9d-42b0-bf3a-6d023a5cc264]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:38:52 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:38:52 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:38:52 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:38:52.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:38:52 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:38:52.472 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[b20d8375-4204-43a1-b05f-210091f98d3b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:38:52 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:38:52.478 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[61f063b3-c602-456b-b045-41f631b493bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:38:52 compute-0 systemd-udevd[288992]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:38:52 compute-0 NetworkManager[48960]: <info>  [1768919932.4799] manager: (tapdece58f4-80): new Veth device (/org/freedesktop/NetworkManager/Devices/99)
Jan 20 14:38:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:38:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:38:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:38:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:38:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:38:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:38:52 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:38:52.508 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[4063cadb-1f03-4ab7-afc0-fb5f0ffb66c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:38:52 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:38:52.511 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[cdfa2900-fcef-4a41-a9f7-72b5c30a968f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:38:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Optimize plan auto_2026-01-20_14:38:52
Jan 20 14:38:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 14:38:52 compute-0 ceph-mgr[74653]: [balancer INFO root] do_upmap
Jan 20 14:38:52 compute-0 ceph-mgr[74653]: [balancer INFO root] pools ['images', 'volumes', 'backups', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.control', '.mgr', 'default.rgw.meta', 'cephfs.cephfs.data', 'vms', 'default.rgw.log']
Jan 20 14:38:52 compute-0 ceph-mgr[74653]: [balancer INFO root] prepared 0/10 changes
Jan 20 14:38:52 compute-0 NetworkManager[48960]: <info>  [1768919932.5316] device (tapdece58f4-80): carrier: link connected
Jan 20 14:38:52 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:38:52.536 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[541af4bf-d8ef-447c-b3fa-5d68a0de7d95]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:38:52 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:38:52.552 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[622e4bfc-d8ef-480b-be6d-21624d2289cb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdece58f4-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9a:e0:be'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 60], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 587325, 'reachable_time': 37173, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 289021, 'error': None, 'target': 'ovnmeta-dece58f4-881e-47c7-961b-4367a4c3c21a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:38:52 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:38:52.566 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[3f9d2df4-8c3a-424a-abbe-0cbfa948c2cd]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe9a:e0be'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 587325, 'tstamp': 587325}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 289022, 'error': None, 'target': 'ovnmeta-dece58f4-881e-47c7-961b-4367a4c3c21a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:38:52 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:38:52.582 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[fbb83acc-6010-4dec-8c28-5ab6bc80396f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdece58f4-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9a:e0:be'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 60], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 587325, 'reachable_time': 37173, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 289023, 'error': None, 'target': 'ovnmeta-dece58f4-881e-47c7-961b-4367a4c3c21a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:38:52 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:38:52.613 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[8fbbc8ea-6bc0-4075-aed3-c7d2bfdfe8c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:38:52 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:38:52.672 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[7ed4063f-41d9-4ae7-a5a6-615bb2692951]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:38:52 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:38:52.673 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdece58f4-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:38:52 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:38:52.673 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:38:52 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:38:52.674 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdece58f4-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:38:52 compute-0 kernel: tapdece58f4-80: entered promiscuous mode
Jan 20 14:38:52 compute-0 NetworkManager[48960]: <info>  [1768919932.6763] manager: (tapdece58f4-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/100)
Jan 20 14:38:52 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:38:52.678 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapdece58f4-80, col_values=(('external_ids', {'iface-id': 'e7065b67-177e-47d4-a18c-921ae1c77ad4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:38:52 compute-0 nova_compute[250018]: 2026-01-20 14:38:52.675 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:38:52 compute-0 nova_compute[250018]: 2026-01-20 14:38:52.679 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:38:52 compute-0 ovn_controller[148666]: 2026-01-20T14:38:52Z|00184|binding|INFO|Releasing lport e7065b67-177e-47d4-a18c-921ae1c77ad4 from this chassis (sb_readonly=0)
Jan 20 14:38:52 compute-0 nova_compute[250018]: 2026-01-20 14:38:52.698 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:38:52 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:38:52.699 160071 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/dece58f4-881e-47c7-961b-4367a4c3c21a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/dece58f4-881e-47c7-961b-4367a4c3c21a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 14:38:52 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:38:52.700 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[ea75b446-a748-4231-8507-4a84072e0597]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:38:52 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:38:52.701 160071 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 14:38:52 compute-0 ovn_metadata_agent[160049]: global
Jan 20 14:38:52 compute-0 ovn_metadata_agent[160049]:     log         /dev/log local0 debug
Jan 20 14:38:52 compute-0 ovn_metadata_agent[160049]:     log-tag     haproxy-metadata-proxy-dece58f4-881e-47c7-961b-4367a4c3c21a
Jan 20 14:38:52 compute-0 ovn_metadata_agent[160049]:     user        root
Jan 20 14:38:52 compute-0 ovn_metadata_agent[160049]:     group       root
Jan 20 14:38:52 compute-0 ovn_metadata_agent[160049]:     maxconn     1024
Jan 20 14:38:52 compute-0 ovn_metadata_agent[160049]:     pidfile     /var/lib/neutron/external/pids/dece58f4-881e-47c7-961b-4367a4c3c21a.pid.haproxy
Jan 20 14:38:52 compute-0 ovn_metadata_agent[160049]:     daemon
Jan 20 14:38:52 compute-0 ovn_metadata_agent[160049]: 
Jan 20 14:38:52 compute-0 ovn_metadata_agent[160049]: defaults
Jan 20 14:38:52 compute-0 ovn_metadata_agent[160049]:     log global
Jan 20 14:38:52 compute-0 ovn_metadata_agent[160049]:     mode http
Jan 20 14:38:52 compute-0 ovn_metadata_agent[160049]:     option httplog
Jan 20 14:38:52 compute-0 ovn_metadata_agent[160049]:     option dontlognull
Jan 20 14:38:52 compute-0 ovn_metadata_agent[160049]:     option http-server-close
Jan 20 14:38:52 compute-0 ovn_metadata_agent[160049]:     option forwardfor
Jan 20 14:38:52 compute-0 ovn_metadata_agent[160049]:     retries                 3
Jan 20 14:38:52 compute-0 ovn_metadata_agent[160049]:     timeout http-request    30s
Jan 20 14:38:52 compute-0 ovn_metadata_agent[160049]:     timeout connect         30s
Jan 20 14:38:52 compute-0 ovn_metadata_agent[160049]:     timeout client          32s
Jan 20 14:38:52 compute-0 ovn_metadata_agent[160049]:     timeout server          32s
Jan 20 14:38:52 compute-0 ovn_metadata_agent[160049]:     timeout http-keep-alive 30s
Jan 20 14:38:52 compute-0 ovn_metadata_agent[160049]: 
Jan 20 14:38:52 compute-0 ovn_metadata_agent[160049]: 
Jan 20 14:38:52 compute-0 ovn_metadata_agent[160049]: listen listener
Jan 20 14:38:52 compute-0 ovn_metadata_agent[160049]:     bind 169.254.169.254:80
Jan 20 14:38:52 compute-0 ovn_metadata_agent[160049]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 14:38:52 compute-0 ovn_metadata_agent[160049]:     http-request add-header X-OVN-Network-ID dece58f4-881e-47c7-961b-4367a4c3c21a
Jan 20 14:38:52 compute-0 ovn_metadata_agent[160049]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 14:38:52 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:38:52.701 160071 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-dece58f4-881e-47c7-961b-4367a4c3c21a', 'env', 'PROCESS_TAG=haproxy-dece58f4-881e-47c7-961b-4367a4c3c21a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/dece58f4-881e-47c7-961b-4367a4c3c21a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 14:38:52 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:38:53 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/3938428519' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:38:53 compute-0 podman[289090]: 2026-01-20 14:38:53.053971724 +0000 UTC m=+0.048930850 container create 59ab5ad8fc10dba8c806e4f97844b83d32d8f1f307304abe848fbbd3d49ebfdf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dece58f4-881e-47c7-961b-4367a4c3c21a, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 14:38:53 compute-0 systemd[1]: Started libpod-conmon-59ab5ad8fc10dba8c806e4f97844b83d32d8f1f307304abe848fbbd3d49ebfdf.scope.
Jan 20 14:38:53 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:38:53 compute-0 nova_compute[250018]: 2026-01-20 14:38:53.108 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768919933.1081672, 0c6693f6-c588-4a64-86ee-cf44a6a36260 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:38:53 compute-0 nova_compute[250018]: 2026-01-20 14:38:53.109 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 0c6693f6-c588-4a64-86ee-cf44a6a36260] VM Started (Lifecycle Event)
Jan 20 14:38:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4aafcd99a0d846c5b7c615907921ee0851372153f968a1df0105ef43c1c4749b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 14:38:53 compute-0 podman[289090]: 2026-01-20 14:38:53.029234157 +0000 UTC m=+0.024193303 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 14:38:53 compute-0 podman[289090]: 2026-01-20 14:38:53.125412651 +0000 UTC m=+0.120371797 container init 59ab5ad8fc10dba8c806e4f97844b83d32d8f1f307304abe848fbbd3d49ebfdf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dece58f4-881e-47c7-961b-4367a4c3c21a, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 20 14:38:53 compute-0 podman[289090]: 2026-01-20 14:38:53.132703847 +0000 UTC m=+0.127662963 container start 59ab5ad8fc10dba8c806e4f97844b83d32d8f1f307304abe848fbbd3d49ebfdf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dece58f4-881e-47c7-961b-4367a4c3c21a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 20 14:38:53 compute-0 nova_compute[250018]: 2026-01-20 14:38:53.132 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 0c6693f6-c588-4a64-86ee-cf44a6a36260] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:38:53 compute-0 nova_compute[250018]: 2026-01-20 14:38:53.136 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768919933.1083224, 0c6693f6-c588-4a64-86ee-cf44a6a36260 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:38:53 compute-0 nova_compute[250018]: 2026-01-20 14:38:53.136 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 0c6693f6-c588-4a64-86ee-cf44a6a36260] VM Paused (Lifecycle Event)
Jan 20 14:38:53 compute-0 neutron-haproxy-ovnmeta-dece58f4-881e-47c7-961b-4367a4c3c21a[289114]: [NOTICE]   (289118) : New worker (289120) forked
Jan 20 14:38:53 compute-0 neutron-haproxy-ovnmeta-dece58f4-881e-47c7-961b-4367a4c3c21a[289114]: [NOTICE]   (289118) : Loading success.
Jan 20 14:38:53 compute-0 nova_compute[250018]: 2026-01-20 14:38:53.164 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 0c6693f6-c588-4a64-86ee-cf44a6a36260] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:38:53 compute-0 nova_compute[250018]: 2026-01-20 14:38:53.167 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 0c6693f6-c588-4a64-86ee-cf44a6a36260] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:38:53 compute-0 nova_compute[250018]: 2026-01-20 14:38:53.188 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 0c6693f6-c588-4a64-86ee-cf44a6a36260] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:38:53 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1516: 321 pgs: 321 active+clean; 166 MiB data, 635 MiB used, 20 GiB / 21 GiB avail; 66 KiB/s rd, 3.1 MiB/s wr, 106 op/s
Jan 20 14:38:53 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:38:53 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:38:53 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:38:53.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:38:53 compute-0 nova_compute[250018]: 2026-01-20 14:38:53.642 250022 DEBUG nova.compute.manager [req-1f419c85-58d3-4b85-8af9-765708a0848f req-8bd7f827-4144-4656-85cb-72c611d39da3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 0c6693f6-c588-4a64-86ee-cf44a6a36260] Received event network-vif-plugged-66a65e38-61c1-414e-8a99-7a5480b4b97b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:38:53 compute-0 nova_compute[250018]: 2026-01-20 14:38:53.642 250022 DEBUG oslo_concurrency.lockutils [req-1f419c85-58d3-4b85-8af9-765708a0848f req-8bd7f827-4144-4656-85cb-72c611d39da3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "0c6693f6-c588-4a64-86ee-cf44a6a36260-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:38:53 compute-0 nova_compute[250018]: 2026-01-20 14:38:53.643 250022 DEBUG oslo_concurrency.lockutils [req-1f419c85-58d3-4b85-8af9-765708a0848f req-8bd7f827-4144-4656-85cb-72c611d39da3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "0c6693f6-c588-4a64-86ee-cf44a6a36260-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:38:53 compute-0 nova_compute[250018]: 2026-01-20 14:38:53.644 250022 DEBUG oslo_concurrency.lockutils [req-1f419c85-58d3-4b85-8af9-765708a0848f req-8bd7f827-4144-4656-85cb-72c611d39da3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "0c6693f6-c588-4a64-86ee-cf44a6a36260-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:38:53 compute-0 nova_compute[250018]: 2026-01-20 14:38:53.644 250022 DEBUG nova.compute.manager [req-1f419c85-58d3-4b85-8af9-765708a0848f req-8bd7f827-4144-4656-85cb-72c611d39da3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 0c6693f6-c588-4a64-86ee-cf44a6a36260] Processing event network-vif-plugged-66a65e38-61c1-414e-8a99-7a5480b4b97b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 20 14:38:53 compute-0 nova_compute[250018]: 2026-01-20 14:38:53.645 250022 DEBUG nova.compute.manager [None req-82de21ab-deb3-4ecc-b554-14cc43af7efb a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] [instance: 0c6693f6-c588-4a64-86ee-cf44a6a36260] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 14:38:53 compute-0 nova_compute[250018]: 2026-01-20 14:38:53.648 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768919933.6484387, 0c6693f6-c588-4a64-86ee-cf44a6a36260 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:38:53 compute-0 nova_compute[250018]: 2026-01-20 14:38:53.649 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 0c6693f6-c588-4a64-86ee-cf44a6a36260] VM Resumed (Lifecycle Event)
Jan 20 14:38:53 compute-0 nova_compute[250018]: 2026-01-20 14:38:53.653 250022 DEBUG nova.virt.libvirt.driver [None req-82de21ab-deb3-4ecc-b554-14cc43af7efb a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] [instance: 0c6693f6-c588-4a64-86ee-cf44a6a36260] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 14:38:53 compute-0 nova_compute[250018]: 2026-01-20 14:38:53.659 250022 INFO nova.virt.libvirt.driver [-] [instance: 0c6693f6-c588-4a64-86ee-cf44a6a36260] Instance spawned successfully.
Jan 20 14:38:53 compute-0 nova_compute[250018]: 2026-01-20 14:38:53.660 250022 DEBUG nova.virt.libvirt.driver [None req-82de21ab-deb3-4ecc-b554-14cc43af7efb a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] [instance: 0c6693f6-c588-4a64-86ee-cf44a6a36260] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 14:38:53 compute-0 nova_compute[250018]: 2026-01-20 14:38:53.671 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 0c6693f6-c588-4a64-86ee-cf44a6a36260] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:38:53 compute-0 nova_compute[250018]: 2026-01-20 14:38:53.676 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 0c6693f6-c588-4a64-86ee-cf44a6a36260] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:38:53 compute-0 nova_compute[250018]: 2026-01-20 14:38:53.687 250022 DEBUG nova.virt.libvirt.driver [None req-82de21ab-deb3-4ecc-b554-14cc43af7efb a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] [instance: 0c6693f6-c588-4a64-86ee-cf44a6a36260] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:38:53 compute-0 nova_compute[250018]: 2026-01-20 14:38:53.688 250022 DEBUG nova.virt.libvirt.driver [None req-82de21ab-deb3-4ecc-b554-14cc43af7efb a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] [instance: 0c6693f6-c588-4a64-86ee-cf44a6a36260] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:38:53 compute-0 nova_compute[250018]: 2026-01-20 14:38:53.689 250022 DEBUG nova.virt.libvirt.driver [None req-82de21ab-deb3-4ecc-b554-14cc43af7efb a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] [instance: 0c6693f6-c588-4a64-86ee-cf44a6a36260] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:38:53 compute-0 nova_compute[250018]: 2026-01-20 14:38:53.690 250022 DEBUG nova.virt.libvirt.driver [None req-82de21ab-deb3-4ecc-b554-14cc43af7efb a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] [instance: 0c6693f6-c588-4a64-86ee-cf44a6a36260] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:38:53 compute-0 nova_compute[250018]: 2026-01-20 14:38:53.691 250022 DEBUG nova.virt.libvirt.driver [None req-82de21ab-deb3-4ecc-b554-14cc43af7efb a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] [instance: 0c6693f6-c588-4a64-86ee-cf44a6a36260] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:38:53 compute-0 nova_compute[250018]: 2026-01-20 14:38:53.692 250022 DEBUG nova.virt.libvirt.driver [None req-82de21ab-deb3-4ecc-b554-14cc43af7efb a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] [instance: 0c6693f6-c588-4a64-86ee-cf44a6a36260] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:38:53 compute-0 nova_compute[250018]: 2026-01-20 14:38:53.699 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 0c6693f6-c588-4a64-86ee-cf44a6a36260] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:38:53 compute-0 nova_compute[250018]: 2026-01-20 14:38:53.744 250022 INFO nova.compute.manager [None req-82de21ab-deb3-4ecc-b554-14cc43af7efb a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] [instance: 0c6693f6-c588-4a64-86ee-cf44a6a36260] Took 8.73 seconds to spawn the instance on the hypervisor.
Jan 20 14:38:53 compute-0 nova_compute[250018]: 2026-01-20 14:38:53.745 250022 DEBUG nova.compute.manager [None req-82de21ab-deb3-4ecc-b554-14cc43af7efb a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] [instance: 0c6693f6-c588-4a64-86ee-cf44a6a36260] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:38:54 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/971674263' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:38:54 compute-0 ceph-mon[74360]: pgmap v1516: 321 pgs: 321 active+clean; 166 MiB data, 635 MiB used, 20 GiB / 21 GiB avail; 66 KiB/s rd, 3.1 MiB/s wr, 106 op/s
Jan 20 14:38:54 compute-0 nova_compute[250018]: 2026-01-20 14:38:54.181 250022 INFO nova.compute.manager [None req-82de21ab-deb3-4ecc-b554-14cc43af7efb a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] [instance: 0c6693f6-c588-4a64-86ee-cf44a6a36260] Took 10.32 seconds to build instance.
Jan 20 14:38:54 compute-0 nova_compute[250018]: 2026-01-20 14:38:54.211 250022 DEBUG oslo_concurrency.lockutils [None req-82de21ab-deb3-4ecc-b554-14cc43af7efb a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Lock "0c6693f6-c588-4a64-86ee-cf44a6a36260" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.428s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:38:54 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:38:54 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:38:54 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:38:54.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:38:55 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1517: 321 pgs: 321 active+clean; 181 MiB data, 648 MiB used, 20 GiB / 21 GiB avail; 858 KiB/s rd, 3.6 MiB/s wr, 243 op/s
Jan 20 14:38:55 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:38:55 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:38:55 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:38:55.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:38:56 compute-0 nova_compute[250018]: 2026-01-20 14:38:56.336 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:38:56 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:38:56 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:38:56 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:38:56.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:38:56 compute-0 ceph-mon[74360]: pgmap v1517: 321 pgs: 321 active+clean; 181 MiB data, 648 MiB used, 20 GiB / 21 GiB avail; 858 KiB/s rd, 3.6 MiB/s wr, 243 op/s
Jan 20 14:38:56 compute-0 nova_compute[250018]: 2026-01-20 14:38:56.708 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:38:57 compute-0 nova_compute[250018]: 2026-01-20 14:38:57.410 250022 DEBUG nova.compute.manager [req-797bf830-9a8a-444b-bfb7-163bff8a386f req-f7b9c4ae-5267-4c0b-82b5-9032c4247450 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 0c6693f6-c588-4a64-86ee-cf44a6a36260] Received event network-vif-plugged-66a65e38-61c1-414e-8a99-7a5480b4b97b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:38:57 compute-0 nova_compute[250018]: 2026-01-20 14:38:57.411 250022 DEBUG oslo_concurrency.lockutils [req-797bf830-9a8a-444b-bfb7-163bff8a386f req-f7b9c4ae-5267-4c0b-82b5-9032c4247450 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "0c6693f6-c588-4a64-86ee-cf44a6a36260-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:38:57 compute-0 nova_compute[250018]: 2026-01-20 14:38:57.411 250022 DEBUG oslo_concurrency.lockutils [req-797bf830-9a8a-444b-bfb7-163bff8a386f req-f7b9c4ae-5267-4c0b-82b5-9032c4247450 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "0c6693f6-c588-4a64-86ee-cf44a6a36260-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:38:57 compute-0 nova_compute[250018]: 2026-01-20 14:38:57.411 250022 DEBUG oslo_concurrency.lockutils [req-797bf830-9a8a-444b-bfb7-163bff8a386f req-f7b9c4ae-5267-4c0b-82b5-9032c4247450 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "0c6693f6-c588-4a64-86ee-cf44a6a36260-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:38:57 compute-0 nova_compute[250018]: 2026-01-20 14:38:57.411 250022 DEBUG nova.compute.manager [req-797bf830-9a8a-444b-bfb7-163bff8a386f req-f7b9c4ae-5267-4c0b-82b5-9032c4247450 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 0c6693f6-c588-4a64-86ee-cf44a6a36260] No waiting events found dispatching network-vif-plugged-66a65e38-61c1-414e-8a99-7a5480b4b97b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:38:57 compute-0 nova_compute[250018]: 2026-01-20 14:38:57.412 250022 WARNING nova.compute.manager [req-797bf830-9a8a-444b-bfb7-163bff8a386f req-f7b9c4ae-5267-4c0b-82b5-9032c4247450 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 0c6693f6-c588-4a64-86ee-cf44a6a36260] Received unexpected event network-vif-plugged-66a65e38-61c1-414e-8a99-7a5480b4b97b for instance with vm_state active and task_state None.
Jan 20 14:38:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 14:38:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:38:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 14:38:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:38:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:38:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:38:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:38:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:38:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:38:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:38:57 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1518: 321 pgs: 321 active+clean; 181 MiB data, 632 MiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 3.6 MiB/s wr, 272 op/s
Jan 20 14:38:57 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:38:57 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:38:57 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:38:57.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:38:57 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:38:58 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:38:58 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:38:58 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:38:58.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:38:58 compute-0 ceph-mon[74360]: pgmap v1518: 321 pgs: 321 active+clean; 181 MiB data, 632 MiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 3.6 MiB/s wr, 272 op/s
Jan 20 14:38:59 compute-0 nova_compute[250018]: 2026-01-20 14:38:59.491 250022 DEBUG nova.compute.manager [req-a55ab625-dcdd-4cb7-b96a-86d0b1206907 req-ddec0577-a807-4687-b93d-dfe9afd89005 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 0c6693f6-c588-4a64-86ee-cf44a6a36260] Received event network-changed-66a65e38-61c1-414e-8a99-7a5480b4b97b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:38:59 compute-0 nova_compute[250018]: 2026-01-20 14:38:59.492 250022 DEBUG nova.compute.manager [req-a55ab625-dcdd-4cb7-b96a-86d0b1206907 req-ddec0577-a807-4687-b93d-dfe9afd89005 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 0c6693f6-c588-4a64-86ee-cf44a6a36260] Refreshing instance network info cache due to event network-changed-66a65e38-61c1-414e-8a99-7a5480b4b97b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:38:59 compute-0 nova_compute[250018]: 2026-01-20 14:38:59.492 250022 DEBUG oslo_concurrency.lockutils [req-a55ab625-dcdd-4cb7-b96a-86d0b1206907 req-ddec0577-a807-4687-b93d-dfe9afd89005 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-0c6693f6-c588-4a64-86ee-cf44a6a36260" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:38:59 compute-0 nova_compute[250018]: 2026-01-20 14:38:59.492 250022 DEBUG oslo_concurrency.lockutils [req-a55ab625-dcdd-4cb7-b96a-86d0b1206907 req-ddec0577-a807-4687-b93d-dfe9afd89005 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-0c6693f6-c588-4a64-86ee-cf44a6a36260" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:38:59 compute-0 nova_compute[250018]: 2026-01-20 14:38:59.492 250022 DEBUG nova.network.neutron [req-a55ab625-dcdd-4cb7-b96a-86d0b1206907 req-ddec0577-a807-4687-b93d-dfe9afd89005 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 0c6693f6-c588-4a64-86ee-cf44a6a36260] Refreshing network info cache for port 66a65e38-61c1-414e-8a99-7a5480b4b97b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:38:59 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1519: 321 pgs: 321 active+clean; 181 MiB data, 632 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 3.2 MiB/s wr, 312 op/s
Jan 20 14:38:59 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:38:59 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:38:59 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:38:59.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:39:00 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:39:00 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:39:00 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:39:00.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:39:00 compute-0 ceph-mon[74360]: pgmap v1519: 321 pgs: 321 active+clean; 181 MiB data, 632 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 3.2 MiB/s wr, 312 op/s
Jan 20 14:39:00 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/616156782' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:39:01 compute-0 nova_compute[250018]: 2026-01-20 14:39:01.340 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:39:01 compute-0 nova_compute[250018]: 2026-01-20 14:39:01.361 250022 DEBUG nova.network.neutron [req-a55ab625-dcdd-4cb7-b96a-86d0b1206907 req-ddec0577-a807-4687-b93d-dfe9afd89005 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 0c6693f6-c588-4a64-86ee-cf44a6a36260] Updated VIF entry in instance network info cache for port 66a65e38-61c1-414e-8a99-7a5480b4b97b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 14:39:01 compute-0 nova_compute[250018]: 2026-01-20 14:39:01.362 250022 DEBUG nova.network.neutron [req-a55ab625-dcdd-4cb7-b96a-86d0b1206907 req-ddec0577-a807-4687-b93d-dfe9afd89005 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 0c6693f6-c588-4a64-86ee-cf44a6a36260] Updating instance_info_cache with network_info: [{"id": "66a65e38-61c1-414e-8a99-7a5480b4b97b", "address": "fa:16:3e:ff:49:85", "network": {"id": "dece58f4-881e-47c7-961b-4367a4c3c21a", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1623251841-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ebadba01dc3642a9a3e39469ff5d4708", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66a65e38-61", "ovs_interfaceid": "66a65e38-61c1-414e-8a99-7a5480b4b97b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:39:01 compute-0 nova_compute[250018]: 2026-01-20 14:39:01.390 250022 DEBUG oslo_concurrency.lockutils [req-a55ab625-dcdd-4cb7-b96a-86d0b1206907 req-ddec0577-a807-4687-b93d-dfe9afd89005 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-0c6693f6-c588-4a64-86ee-cf44a6a36260" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:39:01 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1520: 321 pgs: 321 active+clean; 181 MiB data, 632 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 2.5 MiB/s wr, 323 op/s
Jan 20 14:39:01 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:39:01 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:39:01 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:39:01.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:39:01 compute-0 nova_compute[250018]: 2026-01-20 14:39:01.694 250022 DEBUG nova.compute.manager [req-4dafaaf4-7f7e-40d3-87d4-91e29c748178 req-750f33d6-5bfa-4fb9-8c27-68176be83505 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 0c6693f6-c588-4a64-86ee-cf44a6a36260] Received event network-changed-66a65e38-61c1-414e-8a99-7a5480b4b97b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:39:01 compute-0 nova_compute[250018]: 2026-01-20 14:39:01.694 250022 DEBUG nova.compute.manager [req-4dafaaf4-7f7e-40d3-87d4-91e29c748178 req-750f33d6-5bfa-4fb9-8c27-68176be83505 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 0c6693f6-c588-4a64-86ee-cf44a6a36260] Refreshing instance network info cache due to event network-changed-66a65e38-61c1-414e-8a99-7a5480b4b97b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:39:01 compute-0 nova_compute[250018]: 2026-01-20 14:39:01.694 250022 DEBUG oslo_concurrency.lockutils [req-4dafaaf4-7f7e-40d3-87d4-91e29c748178 req-750f33d6-5bfa-4fb9-8c27-68176be83505 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-0c6693f6-c588-4a64-86ee-cf44a6a36260" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:39:01 compute-0 nova_compute[250018]: 2026-01-20 14:39:01.695 250022 DEBUG oslo_concurrency.lockutils [req-4dafaaf4-7f7e-40d3-87d4-91e29c748178 req-750f33d6-5bfa-4fb9-8c27-68176be83505 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-0c6693f6-c588-4a64-86ee-cf44a6a36260" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:39:01 compute-0 nova_compute[250018]: 2026-01-20 14:39:01.695 250022 DEBUG nova.network.neutron [req-4dafaaf4-7f7e-40d3-87d4-91e29c748178 req-750f33d6-5bfa-4fb9-8c27-68176be83505 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 0c6693f6-c588-4a64-86ee-cf44a6a36260] Refreshing network info cache for port 66a65e38-61c1-414e-8a99-7a5480b4b97b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:39:01 compute-0 nova_compute[250018]: 2026-01-20 14:39:01.761 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:39:02 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:39:02 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:39:02 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:39:02.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:39:02 compute-0 ceph-mon[74360]: pgmap v1520: 321 pgs: 321 active+clean; 181 MiB data, 632 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 2.5 MiB/s wr, 323 op/s
Jan 20 14:39:02 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:39:03 compute-0 nova_compute[250018]: 2026-01-20 14:39:03.452 250022 DEBUG nova.network.neutron [req-4dafaaf4-7f7e-40d3-87d4-91e29c748178 req-750f33d6-5bfa-4fb9-8c27-68176be83505 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 0c6693f6-c588-4a64-86ee-cf44a6a36260] Updated VIF entry in instance network info cache for port 66a65e38-61c1-414e-8a99-7a5480b4b97b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 14:39:03 compute-0 nova_compute[250018]: 2026-01-20 14:39:03.453 250022 DEBUG nova.network.neutron [req-4dafaaf4-7f7e-40d3-87d4-91e29c748178 req-750f33d6-5bfa-4fb9-8c27-68176be83505 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 0c6693f6-c588-4a64-86ee-cf44a6a36260] Updating instance_info_cache with network_info: [{"id": "66a65e38-61c1-414e-8a99-7a5480b4b97b", "address": "fa:16:3e:ff:49:85", "network": {"id": "dece58f4-881e-47c7-961b-4367a4c3c21a", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1623251841-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ebadba01dc3642a9a3e39469ff5d4708", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66a65e38-61", "ovs_interfaceid": "66a65e38-61c1-414e-8a99-7a5480b4b97b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:39:03 compute-0 nova_compute[250018]: 2026-01-20 14:39:03.479 250022 DEBUG oslo_concurrency.lockutils [req-4dafaaf4-7f7e-40d3-87d4-91e29c748178 req-750f33d6-5bfa-4fb9-8c27-68176be83505 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-0c6693f6-c588-4a64-86ee-cf44a6a36260" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:39:03 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1521: 321 pgs: 321 active+clean; 197 MiB data, 637 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 2.0 MiB/s wr, 330 op/s
Jan 20 14:39:03 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:39:03 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:39:03 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:39:03.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:39:04 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:39:04 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:39:04 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:39:04.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:39:04 compute-0 sudo[289135]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:39:04 compute-0 sudo[289135]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:39:04 compute-0 sudo[289135]: pam_unix(sudo:session): session closed for user root
Jan 20 14:39:04 compute-0 sudo[289160]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:39:04 compute-0 sudo[289160]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:39:04 compute-0 sudo[289160]: pam_unix(sudo:session): session closed for user root
Jan 20 14:39:04 compute-0 ceph-mon[74360]: pgmap v1521: 321 pgs: 321 active+clean; 197 MiB data, 637 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 2.0 MiB/s wr, 330 op/s
Jan 20 14:39:05 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1522: 321 pgs: 321 active+clean; 227 MiB data, 653 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 2.3 MiB/s wr, 290 op/s
Jan 20 14:39:05 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:39:05 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:39:05 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:39:05.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:39:05 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/3372781718' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:39:05 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/2327636316' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:39:06 compute-0 nova_compute[250018]: 2026-01-20 14:39:06.343 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:39:06 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:39:06 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:39:06 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:39:06.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:39:06 compute-0 ceph-mon[74360]: pgmap v1522: 321 pgs: 321 active+clean; 227 MiB data, 653 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 2.3 MiB/s wr, 290 op/s
Jan 20 14:39:06 compute-0 nova_compute[250018]: 2026-01-20 14:39:06.762 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:39:07 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1523: 321 pgs: 321 active+clean; 232 MiB data, 659 MiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 2.3 MiB/s wr, 169 op/s
Jan 20 14:39:07 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:39:07 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:39:07 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:39:07.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:39:07 compute-0 ovn_controller[148666]: 2026-01-20T14:39:07Z|00022|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ff:49:85 10.100.0.13
Jan 20 14:39:07 compute-0 ovn_controller[148666]: 2026-01-20T14:39:07Z|00023|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ff:49:85 10.100.0.13
Jan 20 14:39:07 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/1592924687' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:39:07 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:39:08 compute-0 ceph-mon[74360]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #72. Immutable memtables: 0.
Jan 20 14:39:08 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:39:08.047838) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 14:39:08 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:856] [default] [JOB 39] Flushing memtable with next log file: 72
Jan 20 14:39:08 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768919948047867, "job": 39, "event": "flush_started", "num_memtables": 1, "num_entries": 435, "num_deletes": 251, "total_data_size": 351076, "memory_usage": 359008, "flush_reason": "Manual Compaction"}
Jan 20 14:39:08 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:885] [default] [JOB 39] Level-0 flush table #73: started
Jan 20 14:39:08 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768919948051744, "cf_name": "default", "job": 39, "event": "table_file_creation", "file_number": 73, "file_size": 285520, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 34062, "largest_seqno": 34496, "table_properties": {"data_size": 283148, "index_size": 472, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 6627, "raw_average_key_size": 20, "raw_value_size": 278274, "raw_average_value_size": 856, "num_data_blocks": 21, "num_entries": 325, "num_filter_entries": 325, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768919928, "oldest_key_time": 1768919928, "file_creation_time": 1768919948, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1688fb6f-f397-4aac-a0e8-874ff91d97a7", "db_session_id": "5V3N2TVXYZBCXP55EZHK", "orig_file_number": 73, "seqno_to_time_mapping": "N/A"}}
Jan 20 14:39:08 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 39] Flush lasted 3953 microseconds, and 1482 cpu microseconds.
Jan 20 14:39:08 compute-0 ceph-mon[74360]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 14:39:08 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:39:08.051789) [db/flush_job.cc:967] [default] [JOB 39] Level-0 flush table #73: 285520 bytes OK
Jan 20 14:39:08 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:39:08.051804) [db/memtable_list.cc:519] [default] Level-0 commit table #73 started
Jan 20 14:39:08 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:39:08.053343) [db/memtable_list.cc:722] [default] Level-0 commit table #73: memtable #1 done
Jan 20 14:39:08 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:39:08.053355) EVENT_LOG_v1 {"time_micros": 1768919948053351, "job": 39, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 14:39:08 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:39:08.053370) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 14:39:08 compute-0 ceph-mon[74360]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 39] Try to delete WAL files size 348428, prev total WAL file size 348428, number of live WAL files 2.
Jan 20 14:39:08 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000069.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:39:08 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:39:08.053758) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031303032' seq:72057594037927935, type:22 .. '6D6772737461740031323534' seq:0, type:0; will stop at (end)
Jan 20 14:39:08 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 40] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 14:39:08 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 39 Base level 0, inputs: [73(278KB)], [71(11MB)]
Jan 20 14:39:08 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768919948053789, "job": 40, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [73], "files_L6": [71], "score": -1, "input_data_size": 12221804, "oldest_snapshot_seqno": -1}
Jan 20 14:39:08 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 40] Generated table #74: 6053 keys, 8440002 bytes, temperature: kUnknown
Jan 20 14:39:08 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768919948108119, "cf_name": "default", "job": 40, "event": "table_file_creation", "file_number": 74, "file_size": 8440002, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8400592, "index_size": 23172, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15173, "raw_key_size": 155608, "raw_average_key_size": 25, "raw_value_size": 8292845, "raw_average_value_size": 1370, "num_data_blocks": 926, "num_entries": 6053, "num_filter_entries": 6053, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768917305, "oldest_key_time": 0, "file_creation_time": 1768919948, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1688fb6f-f397-4aac-a0e8-874ff91d97a7", "db_session_id": "5V3N2TVXYZBCXP55EZHK", "orig_file_number": 74, "seqno_to_time_mapping": "N/A"}}
Jan 20 14:39:08 compute-0 ceph-mon[74360]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 14:39:08 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:39:08.108539) [db/compaction/compaction_job.cc:1663] [default] [JOB 40] Compacted 1@0 + 1@6 files to L6 => 8440002 bytes
Jan 20 14:39:08 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:39:08.110227) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 224.0 rd, 154.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 11.4 +0.0 blob) out(8.0 +0.0 blob), read-write-amplify(72.4) write-amplify(29.6) OK, records in: 6560, records dropped: 507 output_compression: NoCompression
Jan 20 14:39:08 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:39:08.110249) EVENT_LOG_v1 {"time_micros": 1768919948110239, "job": 40, "event": "compaction_finished", "compaction_time_micros": 54552, "compaction_time_cpu_micros": 20055, "output_level": 6, "num_output_files": 1, "total_output_size": 8440002, "num_input_records": 6560, "num_output_records": 6053, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 14:39:08 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000073.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:39:08 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768919948111031, "job": 40, "event": "table_file_deletion", "file_number": 73}
Jan 20 14:39:08 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000071.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:39:08 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768919948113116, "job": 40, "event": "table_file_deletion", "file_number": 71}
Jan 20 14:39:08 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:39:08.053685) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:39:08 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:39:08.113308) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:39:08 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:39:08.113313) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:39:08 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:39:08.113316) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:39:08 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:39:08.113318) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:39:08 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:39:08.113319) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:39:08 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:39:08 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:39:08 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:39:08.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:39:08 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:39:08 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1148979419' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:39:08 compute-0 ceph-mon[74360]: pgmap v1523: 321 pgs: 321 active+clean; 232 MiB data, 659 MiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 2.3 MiB/s wr, 169 op/s
Jan 20 14:39:08 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/1148979419' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:39:09 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1524: 321 pgs: 321 active+clean; 245 MiB data, 668 MiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 3.6 MiB/s wr, 174 op/s
Jan 20 14:39:09 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:39:09 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:39:09 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:39:09.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:39:09 compute-0 ceph-mon[74360]: pgmap v1524: 321 pgs: 321 active+clean; 245 MiB data, 668 MiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 3.6 MiB/s wr, 174 op/s
Jan 20 14:39:09 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/3539387829' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:39:10 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:39:10 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:39:10 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:39:10.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:39:11 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/1234018470' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:39:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 14:39:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:39:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 20 14:39:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:39:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.004023299367206403 of space, bias 1.0, pg target 1.2069898101619208 quantized to 32 (current 32)
Jan 20 14:39:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:39:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0009896487646566163 of space, bias 1.0, pg target 0.2959049806323283 quantized to 32 (current 32)
Jan 20 14:39:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:39:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:39:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:39:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Jan 20 14:39:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:39:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 32)
Jan 20 14:39:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:39:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:39:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:39:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021737739624046157 quantized to 32 (current 32)
Jan 20 14:39:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:39:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Jan 20 14:39:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:39:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:39:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:39:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Jan 20 14:39:11 compute-0 nova_compute[250018]: 2026-01-20 14:39:11.347 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:39:11 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1525: 321 pgs: 321 active+clean; 268 MiB data, 683 MiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 5.2 MiB/s wr, 224 op/s
Jan 20 14:39:11 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:39:11 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:39:11 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:39:11.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:39:11 compute-0 nova_compute[250018]: 2026-01-20 14:39:11.786 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:39:12 compute-0 nova_compute[250018]: 2026-01-20 14:39:12.109 250022 DEBUG oslo_concurrency.lockutils [None req-97008fd9-e9b8-40e6-a7ac-3fb4fd61c679 a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Acquiring lock "472d2aca-ddcf-4c68-87d9-c9fe623fae5e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:39:12 compute-0 nova_compute[250018]: 2026-01-20 14:39:12.109 250022 DEBUG oslo_concurrency.lockutils [None req-97008fd9-e9b8-40e6-a7ac-3fb4fd61c679 a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Lock "472d2aca-ddcf-4c68-87d9-c9fe623fae5e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:39:12 compute-0 nova_compute[250018]: 2026-01-20 14:39:12.130 250022 DEBUG nova.compute.manager [None req-97008fd9-e9b8-40e6-a7ac-3fb4fd61c679 a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] [instance: 472d2aca-ddcf-4c68-87d9-c9fe623fae5e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 14:39:12 compute-0 nova_compute[250018]: 2026-01-20 14:39:12.237 250022 DEBUG oslo_concurrency.lockutils [None req-97008fd9-e9b8-40e6-a7ac-3fb4fd61c679 a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:39:12 compute-0 nova_compute[250018]: 2026-01-20 14:39:12.238 250022 DEBUG oslo_concurrency.lockutils [None req-97008fd9-e9b8-40e6-a7ac-3fb4fd61c679 a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:39:12 compute-0 nova_compute[250018]: 2026-01-20 14:39:12.245 250022 DEBUG nova.virt.hardware [None req-97008fd9-e9b8-40e6-a7ac-3fb4fd61c679 a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 14:39:12 compute-0 nova_compute[250018]: 2026-01-20 14:39:12.246 250022 INFO nova.compute.claims [None req-97008fd9-e9b8-40e6-a7ac-3fb4fd61c679 a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] [instance: 472d2aca-ddcf-4c68-87d9-c9fe623fae5e] Claim successful on node compute-0.ctlplane.example.com
Jan 20 14:39:12 compute-0 ceph-mon[74360]: pgmap v1525: 321 pgs: 321 active+clean; 268 MiB data, 683 MiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 5.2 MiB/s wr, 224 op/s
Jan 20 14:39:12 compute-0 nova_compute[250018]: 2026-01-20 14:39:12.374 250022 DEBUG oslo_concurrency.processutils [None req-97008fd9-e9b8-40e6-a7ac-3fb4fd61c679 a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:39:12 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:39:12 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:39:12 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:39:12.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:39:12 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:39:12 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3028375435' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:39:12 compute-0 nova_compute[250018]: 2026-01-20 14:39:12.811 250022 DEBUG oslo_concurrency.processutils [None req-97008fd9-e9b8-40e6-a7ac-3fb4fd61c679 a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:39:12 compute-0 nova_compute[250018]: 2026-01-20 14:39:12.817 250022 DEBUG nova.compute.provider_tree [None req-97008fd9-e9b8-40e6-a7ac-3fb4fd61c679 a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:39:12 compute-0 nova_compute[250018]: 2026-01-20 14:39:12.834 250022 DEBUG nova.scheduler.client.report [None req-97008fd9-e9b8-40e6-a7ac-3fb4fd61c679 a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:39:12 compute-0 nova_compute[250018]: 2026-01-20 14:39:12.852 250022 DEBUG oslo_concurrency.lockutils [None req-97008fd9-e9b8-40e6-a7ac-3fb4fd61c679 a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.614s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:39:12 compute-0 nova_compute[250018]: 2026-01-20 14:39:12.853 250022 DEBUG nova.compute.manager [None req-97008fd9-e9b8-40e6-a7ac-3fb4fd61c679 a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] [instance: 472d2aca-ddcf-4c68-87d9-c9fe623fae5e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 14:39:12 compute-0 nova_compute[250018]: 2026-01-20 14:39:12.904 250022 DEBUG nova.compute.manager [None req-97008fd9-e9b8-40e6-a7ac-3fb4fd61c679 a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] [instance: 472d2aca-ddcf-4c68-87d9-c9fe623fae5e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 20 14:39:12 compute-0 nova_compute[250018]: 2026-01-20 14:39:12.905 250022 DEBUG nova.network.neutron [None req-97008fd9-e9b8-40e6-a7ac-3fb4fd61c679 a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] [instance: 472d2aca-ddcf-4c68-87d9-c9fe623fae5e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 14:39:12 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:39:12 compute-0 nova_compute[250018]: 2026-01-20 14:39:12.926 250022 INFO nova.virt.libvirt.driver [None req-97008fd9-e9b8-40e6-a7ac-3fb4fd61c679 a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] [instance: 472d2aca-ddcf-4c68-87d9-c9fe623fae5e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 14:39:12 compute-0 nova_compute[250018]: 2026-01-20 14:39:12.944 250022 DEBUG nova.compute.manager [None req-97008fd9-e9b8-40e6-a7ac-3fb4fd61c679 a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] [instance: 472d2aca-ddcf-4c68-87d9-c9fe623fae5e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 14:39:13 compute-0 nova_compute[250018]: 2026-01-20 14:39:13.032 250022 DEBUG nova.compute.manager [None req-97008fd9-e9b8-40e6-a7ac-3fb4fd61c679 a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] [instance: 472d2aca-ddcf-4c68-87d9-c9fe623fae5e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 14:39:13 compute-0 nova_compute[250018]: 2026-01-20 14:39:13.034 250022 DEBUG nova.virt.libvirt.driver [None req-97008fd9-e9b8-40e6-a7ac-3fb4fd61c679 a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] [instance: 472d2aca-ddcf-4c68-87d9-c9fe623fae5e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 14:39:13 compute-0 nova_compute[250018]: 2026-01-20 14:39:13.034 250022 INFO nova.virt.libvirt.driver [None req-97008fd9-e9b8-40e6-a7ac-3fb4fd61c679 a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] [instance: 472d2aca-ddcf-4c68-87d9-c9fe623fae5e] Creating image(s)
Jan 20 14:39:13 compute-0 nova_compute[250018]: 2026-01-20 14:39:13.058 250022 DEBUG nova.storage.rbd_utils [None req-97008fd9-e9b8-40e6-a7ac-3fb4fd61c679 a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] rbd image 472d2aca-ddcf-4c68-87d9-c9fe623fae5e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:39:13 compute-0 nova_compute[250018]: 2026-01-20 14:39:13.083 250022 DEBUG nova.storage.rbd_utils [None req-97008fd9-e9b8-40e6-a7ac-3fb4fd61c679 a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] rbd image 472d2aca-ddcf-4c68-87d9-c9fe623fae5e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:39:13 compute-0 nova_compute[250018]: 2026-01-20 14:39:13.107 250022 DEBUG nova.storage.rbd_utils [None req-97008fd9-e9b8-40e6-a7ac-3fb4fd61c679 a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] rbd image 472d2aca-ddcf-4c68-87d9-c9fe623fae5e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:39:13 compute-0 nova_compute[250018]: 2026-01-20 14:39:13.111 250022 DEBUG oslo_concurrency.processutils [None req-97008fd9-e9b8-40e6-a7ac-3fb4fd61c679 a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:39:13 compute-0 nova_compute[250018]: 2026-01-20 14:39:13.200 250022 DEBUG oslo_concurrency.processutils [None req-97008fd9-e9b8-40e6-a7ac-3fb4fd61c679 a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:39:13 compute-0 nova_compute[250018]: 2026-01-20 14:39:13.201 250022 DEBUG oslo_concurrency.lockutils [None req-97008fd9-e9b8-40e6-a7ac-3fb4fd61c679 a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Acquiring lock "82d5c1918fd7c974214c7a48c1793a7a82560462" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:39:13 compute-0 nova_compute[250018]: 2026-01-20 14:39:13.202 250022 DEBUG oslo_concurrency.lockutils [None req-97008fd9-e9b8-40e6-a7ac-3fb4fd61c679 a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:39:13 compute-0 nova_compute[250018]: 2026-01-20 14:39:13.202 250022 DEBUG oslo_concurrency.lockutils [None req-97008fd9-e9b8-40e6-a7ac-3fb4fd61c679 a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:39:13 compute-0 nova_compute[250018]: 2026-01-20 14:39:13.231 250022 DEBUG nova.storage.rbd_utils [None req-97008fd9-e9b8-40e6-a7ac-3fb4fd61c679 a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] rbd image 472d2aca-ddcf-4c68-87d9-c9fe623fae5e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:39:13 compute-0 nova_compute[250018]: 2026-01-20 14:39:13.234 250022 DEBUG oslo_concurrency.processutils [None req-97008fd9-e9b8-40e6-a7ac-3fb4fd61c679 a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 472d2aca-ddcf-4c68-87d9-c9fe623fae5e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:39:13 compute-0 nova_compute[250018]: 2026-01-20 14:39:13.258 250022 DEBUG nova.policy [None req-97008fd9-e9b8-40e6-a7ac-3fb4fd61c679 a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a3274e8540014ffa8cd910526cd964f7', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ebadba01dc3642a9a3e39469ff5d4708', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 20 14:39:13 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3028375435' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:39:13 compute-0 nova_compute[250018]: 2026-01-20 14:39:13.543 250022 DEBUG oslo_concurrency.processutils [None req-97008fd9-e9b8-40e6-a7ac-3fb4fd61c679 a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 472d2aca-ddcf-4c68-87d9-c9fe623fae5e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.310s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:39:13 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1526: 321 pgs: 321 active+clean; 281 MiB data, 690 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 5.8 MiB/s wr, 246 op/s
Jan 20 14:39:13 compute-0 nova_compute[250018]: 2026-01-20 14:39:13.613 250022 DEBUG nova.storage.rbd_utils [None req-97008fd9-e9b8-40e6-a7ac-3fb4fd61c679 a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] resizing rbd image 472d2aca-ddcf-4c68-87d9-c9fe623fae5e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 20 14:39:13 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:39:13 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:39:13 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:39:13.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:39:13 compute-0 nova_compute[250018]: 2026-01-20 14:39:13.751 250022 DEBUG nova.objects.instance [None req-97008fd9-e9b8-40e6-a7ac-3fb4fd61c679 a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Lazy-loading 'migration_context' on Instance uuid 472d2aca-ddcf-4c68-87d9-c9fe623fae5e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:39:13 compute-0 nova_compute[250018]: 2026-01-20 14:39:13.767 250022 DEBUG nova.virt.libvirt.driver [None req-97008fd9-e9b8-40e6-a7ac-3fb4fd61c679 a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] [instance: 472d2aca-ddcf-4c68-87d9-c9fe623fae5e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 14:39:13 compute-0 nova_compute[250018]: 2026-01-20 14:39:13.767 250022 DEBUG nova.virt.libvirt.driver [None req-97008fd9-e9b8-40e6-a7ac-3fb4fd61c679 a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] [instance: 472d2aca-ddcf-4c68-87d9-c9fe623fae5e] Ensure instance console log exists: /var/lib/nova/instances/472d2aca-ddcf-4c68-87d9-c9fe623fae5e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 14:39:13 compute-0 nova_compute[250018]: 2026-01-20 14:39:13.768 250022 DEBUG oslo_concurrency.lockutils [None req-97008fd9-e9b8-40e6-a7ac-3fb4fd61c679 a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:39:13 compute-0 nova_compute[250018]: 2026-01-20 14:39:13.768 250022 DEBUG oslo_concurrency.lockutils [None req-97008fd9-e9b8-40e6-a7ac-3fb4fd61c679 a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:39:13 compute-0 nova_compute[250018]: 2026-01-20 14:39:13.768 250022 DEBUG oslo_concurrency.lockutils [None req-97008fd9-e9b8-40e6-a7ac-3fb4fd61c679 a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:39:14 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/2354170290' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:39:14 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/2354170290' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:39:14 compute-0 ceph-mon[74360]: pgmap v1526: 321 pgs: 321 active+clean; 281 MiB data, 690 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 5.8 MiB/s wr, 246 op/s
Jan 20 14:39:14 compute-0 nova_compute[250018]: 2026-01-20 14:39:14.430 250022 DEBUG nova.network.neutron [None req-97008fd9-e9b8-40e6-a7ac-3fb4fd61c679 a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] [instance: 472d2aca-ddcf-4c68-87d9-c9fe623fae5e] Successfully created port: 100a89bb-a5c7-4e59-a9d5-500f5781d5ee _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 20 14:39:14 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:39:14 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:39:14 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:39:14.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:39:14 compute-0 sshd-session[289378]: Invalid user ubuntu from 157.245.78.139 port 34464
Jan 20 14:39:14 compute-0 podman[289381]: 2026-01-20 14:39:14.680243083 +0000 UTC m=+0.055158719 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 14:39:14 compute-0 podman[289380]: 2026-01-20 14:39:14.707119438 +0000 UTC m=+0.081933651 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:39:14 compute-0 sshd-session[289378]: Connection closed by invalid user ubuntu 157.245.78.139 port 34464 [preauth]
Jan 20 14:39:14 compute-0 nova_compute[250018]: 2026-01-20 14:39:14.890 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:39:14 compute-0 nova_compute[250018]: 2026-01-20 14:39:14.917 250022 WARNING nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] While synchronizing instance power states, found 2 instances in the database and 1 instances on the hypervisor.
Jan 20 14:39:14 compute-0 nova_compute[250018]: 2026-01-20 14:39:14.917 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Triggering sync for uuid 0c6693f6-c588-4a64-86ee-cf44a6a36260 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Jan 20 14:39:14 compute-0 nova_compute[250018]: 2026-01-20 14:39:14.917 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Triggering sync for uuid 472d2aca-ddcf-4c68-87d9-c9fe623fae5e _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Jan 20 14:39:14 compute-0 nova_compute[250018]: 2026-01-20 14:39:14.917 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "0c6693f6-c588-4a64-86ee-cf44a6a36260" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:39:14 compute-0 nova_compute[250018]: 2026-01-20 14:39:14.917 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "0c6693f6-c588-4a64-86ee-cf44a6a36260" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:39:14 compute-0 nova_compute[250018]: 2026-01-20 14:39:14.918 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "472d2aca-ddcf-4c68-87d9-c9fe623fae5e" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:39:14 compute-0 nova_compute[250018]: 2026-01-20 14:39:14.939 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "0c6693f6-c588-4a64-86ee-cf44a6a36260" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.022s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:39:15 compute-0 nova_compute[250018]: 2026-01-20 14:39:15.496 250022 DEBUG nova.network.neutron [None req-97008fd9-e9b8-40e6-a7ac-3fb4fd61c679 a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] [instance: 472d2aca-ddcf-4c68-87d9-c9fe623fae5e] Successfully updated port: 100a89bb-a5c7-4e59-a9d5-500f5781d5ee _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 20 14:39:15 compute-0 nova_compute[250018]: 2026-01-20 14:39:15.519 250022 DEBUG oslo_concurrency.lockutils [None req-97008fd9-e9b8-40e6-a7ac-3fb4fd61c679 a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Acquiring lock "refresh_cache-472d2aca-ddcf-4c68-87d9-c9fe623fae5e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:39:15 compute-0 nova_compute[250018]: 2026-01-20 14:39:15.519 250022 DEBUG oslo_concurrency.lockutils [None req-97008fd9-e9b8-40e6-a7ac-3fb4fd61c679 a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Acquired lock "refresh_cache-472d2aca-ddcf-4c68-87d9-c9fe623fae5e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:39:15 compute-0 nova_compute[250018]: 2026-01-20 14:39:15.519 250022 DEBUG nova.network.neutron [None req-97008fd9-e9b8-40e6-a7ac-3fb4fd61c679 a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] [instance: 472d2aca-ddcf-4c68-87d9-c9fe623fae5e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:39:15 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1527: 321 pgs: 321 active+clean; 329 MiB data, 717 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 6.8 MiB/s wr, 232 op/s
Jan 20 14:39:15 compute-0 nova_compute[250018]: 2026-01-20 14:39:15.616 250022 DEBUG nova.compute.manager [req-55015b74-261d-494b-a55b-5d4450c9ed8c req-b38eca3d-74ab-4de7-8f9e-a90f0481b6c6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 472d2aca-ddcf-4c68-87d9-c9fe623fae5e] Received event network-changed-100a89bb-a5c7-4e59-a9d5-500f5781d5ee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:39:15 compute-0 nova_compute[250018]: 2026-01-20 14:39:15.617 250022 DEBUG nova.compute.manager [req-55015b74-261d-494b-a55b-5d4450c9ed8c req-b38eca3d-74ab-4de7-8f9e-a90f0481b6c6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 472d2aca-ddcf-4c68-87d9-c9fe623fae5e] Refreshing instance network info cache due to event network-changed-100a89bb-a5c7-4e59-a9d5-500f5781d5ee. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:39:15 compute-0 nova_compute[250018]: 2026-01-20 14:39:15.617 250022 DEBUG oslo_concurrency.lockutils [req-55015b74-261d-494b-a55b-5d4450c9ed8c req-b38eca3d-74ab-4de7-8f9e-a90f0481b6c6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-472d2aca-ddcf-4c68-87d9-c9fe623fae5e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:39:15 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:39:15 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:39:15 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:39:15.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:39:15 compute-0 nova_compute[250018]: 2026-01-20 14:39:15.693 250022 DEBUG nova.network.neutron [None req-97008fd9-e9b8-40e6-a7ac-3fb4fd61c679 a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] [instance: 472d2aca-ddcf-4c68-87d9-c9fe623fae5e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 14:39:16 compute-0 nova_compute[250018]: 2026-01-20 14:39:16.349 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:39:16 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:39:16 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:39:16 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:39:16.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:39:16 compute-0 nova_compute[250018]: 2026-01-20 14:39:16.646 250022 DEBUG nova.network.neutron [None req-97008fd9-e9b8-40e6-a7ac-3fb4fd61c679 a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] [instance: 472d2aca-ddcf-4c68-87d9-c9fe623fae5e] Updating instance_info_cache with network_info: [{"id": "100a89bb-a5c7-4e59-a9d5-500f5781d5ee", "address": "fa:16:3e:3f:7c:5d", "network": {"id": "dece58f4-881e-47c7-961b-4367a4c3c21a", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1623251841-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ebadba01dc3642a9a3e39469ff5d4708", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap100a89bb-a5", "ovs_interfaceid": "100a89bb-a5c7-4e59-a9d5-500f5781d5ee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:39:16 compute-0 nova_compute[250018]: 2026-01-20 14:39:16.662 250022 DEBUG oslo_concurrency.lockutils [None req-97008fd9-e9b8-40e6-a7ac-3fb4fd61c679 a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Releasing lock "refresh_cache-472d2aca-ddcf-4c68-87d9-c9fe623fae5e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:39:16 compute-0 nova_compute[250018]: 2026-01-20 14:39:16.662 250022 DEBUG nova.compute.manager [None req-97008fd9-e9b8-40e6-a7ac-3fb4fd61c679 a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] [instance: 472d2aca-ddcf-4c68-87d9-c9fe623fae5e] Instance network_info: |[{"id": "100a89bb-a5c7-4e59-a9d5-500f5781d5ee", "address": "fa:16:3e:3f:7c:5d", "network": {"id": "dece58f4-881e-47c7-961b-4367a4c3c21a", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1623251841-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ebadba01dc3642a9a3e39469ff5d4708", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap100a89bb-a5", "ovs_interfaceid": "100a89bb-a5c7-4e59-a9d5-500f5781d5ee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 20 14:39:16 compute-0 nova_compute[250018]: 2026-01-20 14:39:16.663 250022 DEBUG oslo_concurrency.lockutils [req-55015b74-261d-494b-a55b-5d4450c9ed8c req-b38eca3d-74ab-4de7-8f9e-a90f0481b6c6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-472d2aca-ddcf-4c68-87d9-c9fe623fae5e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:39:16 compute-0 nova_compute[250018]: 2026-01-20 14:39:16.663 250022 DEBUG nova.network.neutron [req-55015b74-261d-494b-a55b-5d4450c9ed8c req-b38eca3d-74ab-4de7-8f9e-a90f0481b6c6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 472d2aca-ddcf-4c68-87d9-c9fe623fae5e] Refreshing network info cache for port 100a89bb-a5c7-4e59-a9d5-500f5781d5ee _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:39:16 compute-0 nova_compute[250018]: 2026-01-20 14:39:16.665 250022 DEBUG nova.virt.libvirt.driver [None req-97008fd9-e9b8-40e6-a7ac-3fb4fd61c679 a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] [instance: 472d2aca-ddcf-4c68-87d9-c9fe623fae5e] Start _get_guest_xml network_info=[{"id": "100a89bb-a5c7-4e59-a9d5-500f5781d5ee", "address": "fa:16:3e:3f:7c:5d", "network": {"id": "dece58f4-881e-47c7-961b-4367a4c3c21a", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1623251841-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ebadba01dc3642a9a3e39469ff5d4708", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap100a89bb-a5", "ovs_interfaceid": "100a89bb-a5c7-4e59-a9d5-500f5781d5ee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encrypted': False, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'guest_format': None, 'size': 0, 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 14:39:16 compute-0 nova_compute[250018]: 2026-01-20 14:39:16.669 250022 WARNING nova.virt.libvirt.driver [None req-97008fd9-e9b8-40e6-a7ac-3fb4fd61c679 a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:39:16 compute-0 nova_compute[250018]: 2026-01-20 14:39:16.674 250022 DEBUG nova.virt.libvirt.host [None req-97008fd9-e9b8-40e6-a7ac-3fb4fd61c679 a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 14:39:16 compute-0 nova_compute[250018]: 2026-01-20 14:39:16.675 250022 DEBUG nova.virt.libvirt.host [None req-97008fd9-e9b8-40e6-a7ac-3fb4fd61c679 a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 14:39:16 compute-0 nova_compute[250018]: 2026-01-20 14:39:16.681 250022 DEBUG nova.virt.libvirt.host [None req-97008fd9-e9b8-40e6-a7ac-3fb4fd61c679 a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 14:39:16 compute-0 nova_compute[250018]: 2026-01-20 14:39:16.681 250022 DEBUG nova.virt.libvirt.host [None req-97008fd9-e9b8-40e6-a7ac-3fb4fd61c679 a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 14:39:16 compute-0 nova_compute[250018]: 2026-01-20 14:39:16.682 250022 DEBUG nova.virt.libvirt.driver [None req-97008fd9-e9b8-40e6-a7ac-3fb4fd61c679 a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 14:39:16 compute-0 nova_compute[250018]: 2026-01-20 14:39:16.682 250022 DEBUG nova.virt.hardware [None req-97008fd9-e9b8-40e6-a7ac-3fb4fd61c679 a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 14:39:16 compute-0 nova_compute[250018]: 2026-01-20 14:39:16.683 250022 DEBUG nova.virt.hardware [None req-97008fd9-e9b8-40e6-a7ac-3fb4fd61c679 a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 14:39:16 compute-0 nova_compute[250018]: 2026-01-20 14:39:16.683 250022 DEBUG nova.virt.hardware [None req-97008fd9-e9b8-40e6-a7ac-3fb4fd61c679 a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 14:39:16 compute-0 nova_compute[250018]: 2026-01-20 14:39:16.683 250022 DEBUG nova.virt.hardware [None req-97008fd9-e9b8-40e6-a7ac-3fb4fd61c679 a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 14:39:16 compute-0 nova_compute[250018]: 2026-01-20 14:39:16.683 250022 DEBUG nova.virt.hardware [None req-97008fd9-e9b8-40e6-a7ac-3fb4fd61c679 a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 14:39:16 compute-0 nova_compute[250018]: 2026-01-20 14:39:16.684 250022 DEBUG nova.virt.hardware [None req-97008fd9-e9b8-40e6-a7ac-3fb4fd61c679 a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 14:39:16 compute-0 nova_compute[250018]: 2026-01-20 14:39:16.684 250022 DEBUG nova.virt.hardware [None req-97008fd9-e9b8-40e6-a7ac-3fb4fd61c679 a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 14:39:16 compute-0 nova_compute[250018]: 2026-01-20 14:39:16.684 250022 DEBUG nova.virt.hardware [None req-97008fd9-e9b8-40e6-a7ac-3fb4fd61c679 a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 14:39:16 compute-0 nova_compute[250018]: 2026-01-20 14:39:16.684 250022 DEBUG nova.virt.hardware [None req-97008fd9-e9b8-40e6-a7ac-3fb4fd61c679 a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 14:39:16 compute-0 nova_compute[250018]: 2026-01-20 14:39:16.685 250022 DEBUG nova.virt.hardware [None req-97008fd9-e9b8-40e6-a7ac-3fb4fd61c679 a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 14:39:16 compute-0 nova_compute[250018]: 2026-01-20 14:39:16.685 250022 DEBUG nova.virt.hardware [None req-97008fd9-e9b8-40e6-a7ac-3fb4fd61c679 a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 14:39:16 compute-0 nova_compute[250018]: 2026-01-20 14:39:16.687 250022 DEBUG oslo_concurrency.processutils [None req-97008fd9-e9b8-40e6-a7ac-3fb4fd61c679 a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:39:16 compute-0 ceph-mon[74360]: pgmap v1527: 321 pgs: 321 active+clean; 329 MiB data, 717 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 6.8 MiB/s wr, 232 op/s
Jan 20 14:39:16 compute-0 nova_compute[250018]: 2026-01-20 14:39:16.821 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:39:17 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:39:17 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3294190681' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:39:17 compute-0 nova_compute[250018]: 2026-01-20 14:39:17.134 250022 DEBUG oslo_concurrency.processutils [None req-97008fd9-e9b8-40e6-a7ac-3fb4fd61c679 a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:39:17 compute-0 nova_compute[250018]: 2026-01-20 14:39:17.156 250022 DEBUG nova.storage.rbd_utils [None req-97008fd9-e9b8-40e6-a7ac-3fb4fd61c679 a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] rbd image 472d2aca-ddcf-4c68-87d9-c9fe623fae5e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:39:17 compute-0 nova_compute[250018]: 2026-01-20 14:39:17.160 250022 DEBUG oslo_concurrency.processutils [None req-97008fd9-e9b8-40e6-a7ac-3fb4fd61c679 a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:39:17 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:39:17 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/248444674' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:39:17 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1528: 321 pgs: 321 active+clean; 339 MiB data, 725 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 6.1 MiB/s wr, 230 op/s
Jan 20 14:39:17 compute-0 nova_compute[250018]: 2026-01-20 14:39:17.596 250022 DEBUG oslo_concurrency.processutils [None req-97008fd9-e9b8-40e6-a7ac-3fb4fd61c679 a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:39:17 compute-0 nova_compute[250018]: 2026-01-20 14:39:17.597 250022 DEBUG nova.virt.libvirt.vif [None req-97008fd9-e9b8-40e6-a7ac-3fb4fd61c679 a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:39:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-611554348',display_name='tempest-SecurityGroupsTestJSON-server-611554348',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-611554348',id=65,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ebadba01dc3642a9a3e39469ff5d4708',ramdisk_id='',reservation_id='r-q6l3neim',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-SecurityGroupsTestJSON-1750338240',owner_user_name='tempest-SecurityGroupsTestJSON-1750338240-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:39:12Z,user_data=None,user_id='a3274e8540014ffa8cd910526cd964f7',uuid=472d2aca-ddcf-4c68-87d9-c9fe623fae5e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "100a89bb-a5c7-4e59-a9d5-500f5781d5ee", "address": "fa:16:3e:3f:7c:5d", "network": {"id": "dece58f4-881e-47c7-961b-4367a4c3c21a", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1623251841-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ebadba01dc3642a9a3e39469ff5d4708", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap100a89bb-a5", "ovs_interfaceid": "100a89bb-a5c7-4e59-a9d5-500f5781d5ee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 14:39:17 compute-0 nova_compute[250018]: 2026-01-20 14:39:17.598 250022 DEBUG nova.network.os_vif_util [None req-97008fd9-e9b8-40e6-a7ac-3fb4fd61c679 a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Converting VIF {"id": "100a89bb-a5c7-4e59-a9d5-500f5781d5ee", "address": "fa:16:3e:3f:7c:5d", "network": {"id": "dece58f4-881e-47c7-961b-4367a4c3c21a", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1623251841-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ebadba01dc3642a9a3e39469ff5d4708", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap100a89bb-a5", "ovs_interfaceid": "100a89bb-a5c7-4e59-a9d5-500f5781d5ee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:39:17 compute-0 nova_compute[250018]: 2026-01-20 14:39:17.598 250022 DEBUG nova.network.os_vif_util [None req-97008fd9-e9b8-40e6-a7ac-3fb4fd61c679 a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3f:7c:5d,bridge_name='br-int',has_traffic_filtering=True,id=100a89bb-a5c7-4e59-a9d5-500f5781d5ee,network=Network(dece58f4-881e-47c7-961b-4367a4c3c21a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap100a89bb-a5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:39:17 compute-0 nova_compute[250018]: 2026-01-20 14:39:17.599 250022 DEBUG nova.objects.instance [None req-97008fd9-e9b8-40e6-a7ac-3fb4fd61c679 a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Lazy-loading 'pci_devices' on Instance uuid 472d2aca-ddcf-4c68-87d9-c9fe623fae5e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:39:17 compute-0 nova_compute[250018]: 2026-01-20 14:39:17.613 250022 DEBUG nova.virt.libvirt.driver [None req-97008fd9-e9b8-40e6-a7ac-3fb4fd61c679 a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] [instance: 472d2aca-ddcf-4c68-87d9-c9fe623fae5e] End _get_guest_xml xml=<domain type="kvm">
Jan 20 14:39:17 compute-0 nova_compute[250018]:   <uuid>472d2aca-ddcf-4c68-87d9-c9fe623fae5e</uuid>
Jan 20 14:39:17 compute-0 nova_compute[250018]:   <name>instance-00000041</name>
Jan 20 14:39:17 compute-0 nova_compute[250018]:   <memory>131072</memory>
Jan 20 14:39:17 compute-0 nova_compute[250018]:   <vcpu>1</vcpu>
Jan 20 14:39:17 compute-0 nova_compute[250018]:   <metadata>
Jan 20 14:39:17 compute-0 nova_compute[250018]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 14:39:17 compute-0 nova_compute[250018]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 14:39:17 compute-0 nova_compute[250018]:       <nova:name>tempest-SecurityGroupsTestJSON-server-611554348</nova:name>
Jan 20 14:39:17 compute-0 nova_compute[250018]:       <nova:creationTime>2026-01-20 14:39:16</nova:creationTime>
Jan 20 14:39:17 compute-0 nova_compute[250018]:       <nova:flavor name="m1.nano">
Jan 20 14:39:17 compute-0 nova_compute[250018]:         <nova:memory>128</nova:memory>
Jan 20 14:39:17 compute-0 nova_compute[250018]:         <nova:disk>1</nova:disk>
Jan 20 14:39:17 compute-0 nova_compute[250018]:         <nova:swap>0</nova:swap>
Jan 20 14:39:17 compute-0 nova_compute[250018]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 14:39:17 compute-0 nova_compute[250018]:         <nova:vcpus>1</nova:vcpus>
Jan 20 14:39:17 compute-0 nova_compute[250018]:       </nova:flavor>
Jan 20 14:39:17 compute-0 nova_compute[250018]:       <nova:owner>
Jan 20 14:39:17 compute-0 nova_compute[250018]:         <nova:user uuid="a3274e8540014ffa8cd910526cd964f7">tempest-SecurityGroupsTestJSON-1750338240-project-member</nova:user>
Jan 20 14:39:17 compute-0 nova_compute[250018]:         <nova:project uuid="ebadba01dc3642a9a3e39469ff5d4708">tempest-SecurityGroupsTestJSON-1750338240</nova:project>
Jan 20 14:39:17 compute-0 nova_compute[250018]:       </nova:owner>
Jan 20 14:39:17 compute-0 nova_compute[250018]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 14:39:17 compute-0 nova_compute[250018]:       <nova:ports>
Jan 20 14:39:17 compute-0 nova_compute[250018]:         <nova:port uuid="100a89bb-a5c7-4e59-a9d5-500f5781d5ee">
Jan 20 14:39:17 compute-0 nova_compute[250018]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Jan 20 14:39:17 compute-0 nova_compute[250018]:         </nova:port>
Jan 20 14:39:17 compute-0 nova_compute[250018]:       </nova:ports>
Jan 20 14:39:17 compute-0 nova_compute[250018]:     </nova:instance>
Jan 20 14:39:17 compute-0 nova_compute[250018]:   </metadata>
Jan 20 14:39:17 compute-0 nova_compute[250018]:   <sysinfo type="smbios">
Jan 20 14:39:17 compute-0 nova_compute[250018]:     <system>
Jan 20 14:39:17 compute-0 nova_compute[250018]:       <entry name="manufacturer">RDO</entry>
Jan 20 14:39:17 compute-0 nova_compute[250018]:       <entry name="product">OpenStack Compute</entry>
Jan 20 14:39:17 compute-0 nova_compute[250018]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 14:39:17 compute-0 nova_compute[250018]:       <entry name="serial">472d2aca-ddcf-4c68-87d9-c9fe623fae5e</entry>
Jan 20 14:39:17 compute-0 nova_compute[250018]:       <entry name="uuid">472d2aca-ddcf-4c68-87d9-c9fe623fae5e</entry>
Jan 20 14:39:17 compute-0 nova_compute[250018]:       <entry name="family">Virtual Machine</entry>
Jan 20 14:39:17 compute-0 nova_compute[250018]:     </system>
Jan 20 14:39:17 compute-0 nova_compute[250018]:   </sysinfo>
Jan 20 14:39:17 compute-0 nova_compute[250018]:   <os>
Jan 20 14:39:17 compute-0 nova_compute[250018]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 14:39:17 compute-0 nova_compute[250018]:     <boot dev="hd"/>
Jan 20 14:39:17 compute-0 nova_compute[250018]:     <smbios mode="sysinfo"/>
Jan 20 14:39:17 compute-0 nova_compute[250018]:   </os>
Jan 20 14:39:17 compute-0 nova_compute[250018]:   <features>
Jan 20 14:39:17 compute-0 nova_compute[250018]:     <acpi/>
Jan 20 14:39:17 compute-0 nova_compute[250018]:     <apic/>
Jan 20 14:39:17 compute-0 nova_compute[250018]:     <vmcoreinfo/>
Jan 20 14:39:17 compute-0 nova_compute[250018]:   </features>
Jan 20 14:39:17 compute-0 nova_compute[250018]:   <clock offset="utc">
Jan 20 14:39:17 compute-0 nova_compute[250018]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 14:39:17 compute-0 nova_compute[250018]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 14:39:17 compute-0 nova_compute[250018]:     <timer name="hpet" present="no"/>
Jan 20 14:39:17 compute-0 nova_compute[250018]:   </clock>
Jan 20 14:39:17 compute-0 nova_compute[250018]:   <cpu mode="custom" match="exact">
Jan 20 14:39:17 compute-0 nova_compute[250018]:     <model>Nehalem</model>
Jan 20 14:39:17 compute-0 nova_compute[250018]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 14:39:17 compute-0 nova_compute[250018]:   </cpu>
Jan 20 14:39:17 compute-0 nova_compute[250018]:   <devices>
Jan 20 14:39:17 compute-0 nova_compute[250018]:     <disk type="network" device="disk">
Jan 20 14:39:17 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 14:39:17 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/472d2aca-ddcf-4c68-87d9-c9fe623fae5e_disk">
Jan 20 14:39:17 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:39:17 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:39:17 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:39:17 compute-0 nova_compute[250018]:       </source>
Jan 20 14:39:17 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 14:39:17 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:39:17 compute-0 nova_compute[250018]:       </auth>
Jan 20 14:39:17 compute-0 nova_compute[250018]:       <target dev="vda" bus="virtio"/>
Jan 20 14:39:17 compute-0 nova_compute[250018]:     </disk>
Jan 20 14:39:17 compute-0 nova_compute[250018]:     <disk type="network" device="cdrom">
Jan 20 14:39:17 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 14:39:17 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/472d2aca-ddcf-4c68-87d9-c9fe623fae5e_disk.config">
Jan 20 14:39:17 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:39:17 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:39:17 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:39:17 compute-0 nova_compute[250018]:       </source>
Jan 20 14:39:17 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 14:39:17 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:39:17 compute-0 nova_compute[250018]:       </auth>
Jan 20 14:39:17 compute-0 nova_compute[250018]:       <target dev="sda" bus="sata"/>
Jan 20 14:39:17 compute-0 nova_compute[250018]:     </disk>
Jan 20 14:39:17 compute-0 nova_compute[250018]:     <interface type="ethernet">
Jan 20 14:39:17 compute-0 nova_compute[250018]:       <mac address="fa:16:3e:3f:7c:5d"/>
Jan 20 14:39:17 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 14:39:17 compute-0 nova_compute[250018]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 14:39:17 compute-0 nova_compute[250018]:       <mtu size="1442"/>
Jan 20 14:39:17 compute-0 nova_compute[250018]:       <target dev="tap100a89bb-a5"/>
Jan 20 14:39:17 compute-0 nova_compute[250018]:     </interface>
Jan 20 14:39:17 compute-0 nova_compute[250018]:     <serial type="pty">
Jan 20 14:39:17 compute-0 nova_compute[250018]:       <log file="/var/lib/nova/instances/472d2aca-ddcf-4c68-87d9-c9fe623fae5e/console.log" append="off"/>
Jan 20 14:39:17 compute-0 nova_compute[250018]:     </serial>
Jan 20 14:39:17 compute-0 nova_compute[250018]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 14:39:17 compute-0 nova_compute[250018]:     <video>
Jan 20 14:39:17 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 14:39:17 compute-0 nova_compute[250018]:     </video>
Jan 20 14:39:17 compute-0 nova_compute[250018]:     <input type="tablet" bus="usb"/>
Jan 20 14:39:17 compute-0 nova_compute[250018]:     <rng model="virtio">
Jan 20 14:39:17 compute-0 nova_compute[250018]:       <backend model="random">/dev/urandom</backend>
Jan 20 14:39:17 compute-0 nova_compute[250018]:     </rng>
Jan 20 14:39:17 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root"/>
Jan 20 14:39:17 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:39:17 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:39:17 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:39:17 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:39:17 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:39:17 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:39:17 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:39:17 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:39:17 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:39:17 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:39:17 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:39:17 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:39:17 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:39:17 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:39:17 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:39:17 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:39:17 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:39:17 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:39:17 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:39:17 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:39:17 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:39:17 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:39:17 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:39:17 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:39:17 compute-0 nova_compute[250018]:     <controller type="usb" index="0"/>
Jan 20 14:39:17 compute-0 nova_compute[250018]:     <memballoon model="virtio">
Jan 20 14:39:17 compute-0 nova_compute[250018]:       <stats period="10"/>
Jan 20 14:39:17 compute-0 nova_compute[250018]:     </memballoon>
Jan 20 14:39:17 compute-0 nova_compute[250018]:   </devices>
Jan 20 14:39:17 compute-0 nova_compute[250018]: </domain>
Jan 20 14:39:17 compute-0 nova_compute[250018]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 14:39:17 compute-0 nova_compute[250018]: 2026-01-20 14:39:17.614 250022 DEBUG nova.compute.manager [None req-97008fd9-e9b8-40e6-a7ac-3fb4fd61c679 a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] [instance: 472d2aca-ddcf-4c68-87d9-c9fe623fae5e] Preparing to wait for external event network-vif-plugged-100a89bb-a5c7-4e59-a9d5-500f5781d5ee prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 20 14:39:17 compute-0 nova_compute[250018]: 2026-01-20 14:39:17.615 250022 DEBUG oslo_concurrency.lockutils [None req-97008fd9-e9b8-40e6-a7ac-3fb4fd61c679 a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Acquiring lock "472d2aca-ddcf-4c68-87d9-c9fe623fae5e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:39:17 compute-0 nova_compute[250018]: 2026-01-20 14:39:17.615 250022 DEBUG oslo_concurrency.lockutils [None req-97008fd9-e9b8-40e6-a7ac-3fb4fd61c679 a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Lock "472d2aca-ddcf-4c68-87d9-c9fe623fae5e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:39:17 compute-0 nova_compute[250018]: 2026-01-20 14:39:17.615 250022 DEBUG oslo_concurrency.lockutils [None req-97008fd9-e9b8-40e6-a7ac-3fb4fd61c679 a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Lock "472d2aca-ddcf-4c68-87d9-c9fe623fae5e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:39:17 compute-0 nova_compute[250018]: 2026-01-20 14:39:17.616 250022 DEBUG nova.virt.libvirt.vif [None req-97008fd9-e9b8-40e6-a7ac-3fb4fd61c679 a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:39:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-611554348',display_name='tempest-SecurityGroupsTestJSON-server-611554348',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-611554348',id=65,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ebadba01dc3642a9a3e39469ff5d4708',ramdisk_id='',reservation_id='r-q6l3neim',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-SecurityGroupsTestJSON-1750338240',owner_user_name='tempest-SecurityGroupsTestJSON-1750338240-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:39:12Z,user_data=None,user_id='a3274e8540014ffa8cd910526cd964f7',uuid=472d2aca-ddcf-4c68-87d9-c9fe623fae5e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "100a89bb-a5c7-4e59-a9d5-500f5781d5ee", "address": "fa:16:3e:3f:7c:5d", "network": {"id": "dece58f4-881e-47c7-961b-4367a4c3c21a", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1623251841-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ebadba01dc3642a9a3e39469ff5d4708", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap100a89bb-a5", "ovs_interfaceid": "100a89bb-a5c7-4e59-a9d5-500f5781d5ee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 14:39:17 compute-0 nova_compute[250018]: 2026-01-20 14:39:17.616 250022 DEBUG nova.network.os_vif_util [None req-97008fd9-e9b8-40e6-a7ac-3fb4fd61c679 a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Converting VIF {"id": "100a89bb-a5c7-4e59-a9d5-500f5781d5ee", "address": "fa:16:3e:3f:7c:5d", "network": {"id": "dece58f4-881e-47c7-961b-4367a4c3c21a", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1623251841-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ebadba01dc3642a9a3e39469ff5d4708", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap100a89bb-a5", "ovs_interfaceid": "100a89bb-a5c7-4e59-a9d5-500f5781d5ee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:39:17 compute-0 nova_compute[250018]: 2026-01-20 14:39:17.616 250022 DEBUG nova.network.os_vif_util [None req-97008fd9-e9b8-40e6-a7ac-3fb4fd61c679 a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3f:7c:5d,bridge_name='br-int',has_traffic_filtering=True,id=100a89bb-a5c7-4e59-a9d5-500f5781d5ee,network=Network(dece58f4-881e-47c7-961b-4367a4c3c21a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap100a89bb-a5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:39:17 compute-0 nova_compute[250018]: 2026-01-20 14:39:17.617 250022 DEBUG os_vif [None req-97008fd9-e9b8-40e6-a7ac-3fb4fd61c679 a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3f:7c:5d,bridge_name='br-int',has_traffic_filtering=True,id=100a89bb-a5c7-4e59-a9d5-500f5781d5ee,network=Network(dece58f4-881e-47c7-961b-4367a4c3c21a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap100a89bb-a5') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 14:39:17 compute-0 nova_compute[250018]: 2026-01-20 14:39:17.617 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:39:17 compute-0 nova_compute[250018]: 2026-01-20 14:39:17.618 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:39:17 compute-0 nova_compute[250018]: 2026-01-20 14:39:17.618 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:39:17 compute-0 nova_compute[250018]: 2026-01-20 14:39:17.620 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:39:17 compute-0 nova_compute[250018]: 2026-01-20 14:39:17.620 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap100a89bb-a5, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:39:17 compute-0 nova_compute[250018]: 2026-01-20 14:39:17.621 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap100a89bb-a5, col_values=(('external_ids', {'iface-id': '100a89bb-a5c7-4e59-a9d5-500f5781d5ee', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3f:7c:5d', 'vm-uuid': '472d2aca-ddcf-4c68-87d9-c9fe623fae5e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:39:17 compute-0 nova_compute[250018]: 2026-01-20 14:39:17.622 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:39:17 compute-0 NetworkManager[48960]: <info>  [1768919957.6231] manager: (tap100a89bb-a5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/101)
Jan 20 14:39:17 compute-0 nova_compute[250018]: 2026-01-20 14:39:17.625 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:39:17 compute-0 nova_compute[250018]: 2026-01-20 14:39:17.628 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:39:17 compute-0 nova_compute[250018]: 2026-01-20 14:39:17.629 250022 INFO os_vif [None req-97008fd9-e9b8-40e6-a7ac-3fb4fd61c679 a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3f:7c:5d,bridge_name='br-int',has_traffic_filtering=True,id=100a89bb-a5c7-4e59-a9d5-500f5781d5ee,network=Network(dece58f4-881e-47c7-961b-4367a4c3c21a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap100a89bb-a5')
Jan 20 14:39:17 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:39:17 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:39:17 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:39:17.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:39:17 compute-0 nova_compute[250018]: 2026-01-20 14:39:17.692 250022 DEBUG nova.virt.libvirt.driver [None req-97008fd9-e9b8-40e6-a7ac-3fb4fd61c679 a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:39:17 compute-0 nova_compute[250018]: 2026-01-20 14:39:17.693 250022 DEBUG nova.virt.libvirt.driver [None req-97008fd9-e9b8-40e6-a7ac-3fb4fd61c679 a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:39:17 compute-0 nova_compute[250018]: 2026-01-20 14:39:17.693 250022 DEBUG nova.virt.libvirt.driver [None req-97008fd9-e9b8-40e6-a7ac-3fb4fd61c679 a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] No VIF found with MAC fa:16:3e:3f:7c:5d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 14:39:17 compute-0 nova_compute[250018]: 2026-01-20 14:39:17.694 250022 INFO nova.virt.libvirt.driver [None req-97008fd9-e9b8-40e6-a7ac-3fb4fd61c679 a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] [instance: 472d2aca-ddcf-4c68-87d9-c9fe623fae5e] Using config drive
Jan 20 14:39:17 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3294190681' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:39:17 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/248444674' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:39:17 compute-0 nova_compute[250018]: 2026-01-20 14:39:17.725 250022 DEBUG nova.storage.rbd_utils [None req-97008fd9-e9b8-40e6-a7ac-3fb4fd61c679 a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] rbd image 472d2aca-ddcf-4c68-87d9-c9fe623fae5e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:39:17 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:39:18 compute-0 nova_compute[250018]: 2026-01-20 14:39:18.080 250022 INFO nova.virt.libvirt.driver [None req-97008fd9-e9b8-40e6-a7ac-3fb4fd61c679 a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] [instance: 472d2aca-ddcf-4c68-87d9-c9fe623fae5e] Creating config drive at /var/lib/nova/instances/472d2aca-ddcf-4c68-87d9-c9fe623fae5e/disk.config
Jan 20 14:39:18 compute-0 nova_compute[250018]: 2026-01-20 14:39:18.084 250022 DEBUG oslo_concurrency.processutils [None req-97008fd9-e9b8-40e6-a7ac-3fb4fd61c679 a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/472d2aca-ddcf-4c68-87d9-c9fe623fae5e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6s7otr2n execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:39:18 compute-0 nova_compute[250018]: 2026-01-20 14:39:18.106 250022 DEBUG nova.network.neutron [req-55015b74-261d-494b-a55b-5d4450c9ed8c req-b38eca3d-74ab-4de7-8f9e-a90f0481b6c6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 472d2aca-ddcf-4c68-87d9-c9fe623fae5e] Updated VIF entry in instance network info cache for port 100a89bb-a5c7-4e59-a9d5-500f5781d5ee. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 14:39:18 compute-0 nova_compute[250018]: 2026-01-20 14:39:18.107 250022 DEBUG nova.network.neutron [req-55015b74-261d-494b-a55b-5d4450c9ed8c req-b38eca3d-74ab-4de7-8f9e-a90f0481b6c6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 472d2aca-ddcf-4c68-87d9-c9fe623fae5e] Updating instance_info_cache with network_info: [{"id": "100a89bb-a5c7-4e59-a9d5-500f5781d5ee", "address": "fa:16:3e:3f:7c:5d", "network": {"id": "dece58f4-881e-47c7-961b-4367a4c3c21a", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1623251841-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ebadba01dc3642a9a3e39469ff5d4708", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap100a89bb-a5", "ovs_interfaceid": "100a89bb-a5c7-4e59-a9d5-500f5781d5ee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:39:18 compute-0 nova_compute[250018]: 2026-01-20 14:39:18.126 250022 DEBUG oslo_concurrency.lockutils [req-55015b74-261d-494b-a55b-5d4450c9ed8c req-b38eca3d-74ab-4de7-8f9e-a90f0481b6c6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-472d2aca-ddcf-4c68-87d9-c9fe623fae5e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:39:18 compute-0 nova_compute[250018]: 2026-01-20 14:39:18.212 250022 DEBUG oslo_concurrency.processutils [None req-97008fd9-e9b8-40e6-a7ac-3fb4fd61c679 a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/472d2aca-ddcf-4c68-87d9-c9fe623fae5e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6s7otr2n" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:39:18 compute-0 nova_compute[250018]: 2026-01-20 14:39:18.237 250022 DEBUG nova.storage.rbd_utils [None req-97008fd9-e9b8-40e6-a7ac-3fb4fd61c679 a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] rbd image 472d2aca-ddcf-4c68-87d9-c9fe623fae5e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:39:18 compute-0 nova_compute[250018]: 2026-01-20 14:39:18.240 250022 DEBUG oslo_concurrency.processutils [None req-97008fd9-e9b8-40e6-a7ac-3fb4fd61c679 a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/472d2aca-ddcf-4c68-87d9-c9fe623fae5e/disk.config 472d2aca-ddcf-4c68-87d9-c9fe623fae5e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:39:18 compute-0 nova_compute[250018]: 2026-01-20 14:39:18.380 250022 DEBUG oslo_concurrency.processutils [None req-97008fd9-e9b8-40e6-a7ac-3fb4fd61c679 a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/472d2aca-ddcf-4c68-87d9-c9fe623fae5e/disk.config 472d2aca-ddcf-4c68-87d9-c9fe623fae5e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:39:18 compute-0 nova_compute[250018]: 2026-01-20 14:39:18.381 250022 INFO nova.virt.libvirt.driver [None req-97008fd9-e9b8-40e6-a7ac-3fb4fd61c679 a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] [instance: 472d2aca-ddcf-4c68-87d9-c9fe623fae5e] Deleting local config drive /var/lib/nova/instances/472d2aca-ddcf-4c68-87d9-c9fe623fae5e/disk.config because it was imported into RBD.
Jan 20 14:39:18 compute-0 kernel: tap100a89bb-a5: entered promiscuous mode
Jan 20 14:39:18 compute-0 NetworkManager[48960]: <info>  [1768919958.4406] manager: (tap100a89bb-a5): new Tun device (/org/freedesktop/NetworkManager/Devices/102)
Jan 20 14:39:18 compute-0 ovn_controller[148666]: 2026-01-20T14:39:18Z|00185|binding|INFO|Claiming lport 100a89bb-a5c7-4e59-a9d5-500f5781d5ee for this chassis.
Jan 20 14:39:18 compute-0 ovn_controller[148666]: 2026-01-20T14:39:18Z|00186|binding|INFO|100a89bb-a5c7-4e59-a9d5-500f5781d5ee: Claiming fa:16:3e:3f:7c:5d 10.100.0.5
Jan 20 14:39:18 compute-0 nova_compute[250018]: 2026-01-20 14:39:18.442 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:39:18 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:39:18.451 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3f:7c:5d 10.100.0.5'], port_security=['fa:16:3e:3f:7c:5d 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '472d2aca-ddcf-4c68-87d9-c9fe623fae5e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-dece58f4-881e-47c7-961b-4367a4c3c21a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ebadba01dc3642a9a3e39469ff5d4708', 'neutron:revision_number': '2', 'neutron:security_group_ids': '5567989a-fdc9-4eba-934a-717bea3108fe', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2446ce7a-4f1f-4833-83c5-907fbc775260, chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=100a89bb-a5c7-4e59-a9d5-500f5781d5ee) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:39:18 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:39:18.452 160071 INFO neutron.agent.ovn.metadata.agent [-] Port 100a89bb-a5c7-4e59-a9d5-500f5781d5ee in datapath dece58f4-881e-47c7-961b-4367a4c3c21a bound to our chassis
Jan 20 14:39:18 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:39:18.454 160071 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network dece58f4-881e-47c7-961b-4367a4c3c21a
Jan 20 14:39:18 compute-0 ovn_controller[148666]: 2026-01-20T14:39:18Z|00187|binding|INFO|Setting lport 100a89bb-a5c7-4e59-a9d5-500f5781d5ee ovn-installed in OVS
Jan 20 14:39:18 compute-0 ovn_controller[148666]: 2026-01-20T14:39:18Z|00188|binding|INFO|Setting lport 100a89bb-a5c7-4e59-a9d5-500f5781d5ee up in Southbound
Jan 20 14:39:18 compute-0 nova_compute[250018]: 2026-01-20 14:39:18.457 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:39:18 compute-0 nova_compute[250018]: 2026-01-20 14:39:18.460 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:39:18 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:39:18.470 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[c471fbf6-7e03-48f9-a2b5-71d3ed41237a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:39:18 compute-0 systemd-udevd[289562]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:39:18 compute-0 systemd-machined[216401]: New machine qemu-28-instance-00000041.
Jan 20 14:39:18 compute-0 NetworkManager[48960]: <info>  [1768919958.4867] device (tap100a89bb-a5): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 14:39:18 compute-0 NetworkManager[48960]: <info>  [1768919958.4872] device (tap100a89bb-a5): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 14:39:18 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:39:18 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:39:18 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:39:18.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:39:18 compute-0 systemd[1]: Started Virtual Machine qemu-28-instance-00000041.
Jan 20 14:39:18 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:39:18.503 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[6ea1f3e7-4336-4f36-8d0d-f1d1a33340cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:39:18 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:39:18.506 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[f1e2f80a-85c2-4041-b082-d91ef8f2c348]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:39:18 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:39:18.536 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[0f676fd2-afba-44b9-b271-53e5f1248fb9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:39:18 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:39:18.552 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[1e92fc7e-5ccc-4c48-b480-a75966f5748c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdece58f4-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9a:e0:be'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 616, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 616, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 60], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 587325, 'reachable_time': 37173, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 289572, 'error': None, 'target': 'ovnmeta-dece58f4-881e-47c7-961b-4367a4c3c21a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:39:18 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:39:18.571 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[21a82b38-503d-4577-9ad4-50cb143aa51d]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapdece58f4-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 587336, 'tstamp': 587336}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 289575, 'error': None, 'target': 'ovnmeta-dece58f4-881e-47c7-961b-4367a4c3c21a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapdece58f4-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 587338, 'tstamp': 587338}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 289575, 'error': None, 'target': 'ovnmeta-dece58f4-881e-47c7-961b-4367a4c3c21a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:39:18 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:39:18.573 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdece58f4-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:39:18 compute-0 nova_compute[250018]: 2026-01-20 14:39:18.575 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:39:18 compute-0 nova_compute[250018]: 2026-01-20 14:39:18.576 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:39:18 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:39:18.576 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdece58f4-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:39:18 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:39:18.577 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:39:18 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:39:18.577 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapdece58f4-80, col_values=(('external_ids', {'iface-id': 'e7065b67-177e-47d4-a18c-921ae1c77ad4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:39:18 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:39:18.577 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:39:18 compute-0 nova_compute[250018]: 2026-01-20 14:39:18.692 250022 DEBUG nova.compute.manager [req-d8876bcb-3612-4073-9675-2114478feaaa req-80c5fa1e-97dd-4f8c-8bbe-70da81e24f75 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 472d2aca-ddcf-4c68-87d9-c9fe623fae5e] Received event network-vif-plugged-100a89bb-a5c7-4e59-a9d5-500f5781d5ee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:39:18 compute-0 nova_compute[250018]: 2026-01-20 14:39:18.692 250022 DEBUG oslo_concurrency.lockutils [req-d8876bcb-3612-4073-9675-2114478feaaa req-80c5fa1e-97dd-4f8c-8bbe-70da81e24f75 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "472d2aca-ddcf-4c68-87d9-c9fe623fae5e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:39:18 compute-0 nova_compute[250018]: 2026-01-20 14:39:18.693 250022 DEBUG oslo_concurrency.lockutils [req-d8876bcb-3612-4073-9675-2114478feaaa req-80c5fa1e-97dd-4f8c-8bbe-70da81e24f75 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "472d2aca-ddcf-4c68-87d9-c9fe623fae5e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:39:18 compute-0 nova_compute[250018]: 2026-01-20 14:39:18.693 250022 DEBUG oslo_concurrency.lockutils [req-d8876bcb-3612-4073-9675-2114478feaaa req-80c5fa1e-97dd-4f8c-8bbe-70da81e24f75 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "472d2aca-ddcf-4c68-87d9-c9fe623fae5e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:39:18 compute-0 nova_compute[250018]: 2026-01-20 14:39:18.693 250022 DEBUG nova.compute.manager [req-d8876bcb-3612-4073-9675-2114478feaaa req-80c5fa1e-97dd-4f8c-8bbe-70da81e24f75 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 472d2aca-ddcf-4c68-87d9-c9fe623fae5e] Processing event network-vif-plugged-100a89bb-a5c7-4e59-a9d5-500f5781d5ee _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 20 14:39:18 compute-0 ceph-mon[74360]: pgmap v1528: 321 pgs: 321 active+clean; 339 MiB data, 725 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 6.1 MiB/s wr, 230 op/s
Jan 20 14:39:19 compute-0 nova_compute[250018]: 2026-01-20 14:39:19.180 250022 DEBUG nova.compute.manager [None req-97008fd9-e9b8-40e6-a7ac-3fb4fd61c679 a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] [instance: 472d2aca-ddcf-4c68-87d9-c9fe623fae5e] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 14:39:19 compute-0 nova_compute[250018]: 2026-01-20 14:39:19.181 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768919959.1798532, 472d2aca-ddcf-4c68-87d9-c9fe623fae5e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:39:19 compute-0 nova_compute[250018]: 2026-01-20 14:39:19.181 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 472d2aca-ddcf-4c68-87d9-c9fe623fae5e] VM Started (Lifecycle Event)
Jan 20 14:39:19 compute-0 nova_compute[250018]: 2026-01-20 14:39:19.183 250022 DEBUG nova.virt.libvirt.driver [None req-97008fd9-e9b8-40e6-a7ac-3fb4fd61c679 a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] [instance: 472d2aca-ddcf-4c68-87d9-c9fe623fae5e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 14:39:19 compute-0 nova_compute[250018]: 2026-01-20 14:39:19.186 250022 INFO nova.virt.libvirt.driver [-] [instance: 472d2aca-ddcf-4c68-87d9-c9fe623fae5e] Instance spawned successfully.
Jan 20 14:39:19 compute-0 nova_compute[250018]: 2026-01-20 14:39:19.186 250022 DEBUG nova.virt.libvirt.driver [None req-97008fd9-e9b8-40e6-a7ac-3fb4fd61c679 a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] [instance: 472d2aca-ddcf-4c68-87d9-c9fe623fae5e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 14:39:19 compute-0 nova_compute[250018]: 2026-01-20 14:39:19.204 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 472d2aca-ddcf-4c68-87d9-c9fe623fae5e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:39:19 compute-0 nova_compute[250018]: 2026-01-20 14:39:19.210 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 472d2aca-ddcf-4c68-87d9-c9fe623fae5e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:39:19 compute-0 nova_compute[250018]: 2026-01-20 14:39:19.213 250022 DEBUG nova.virt.libvirt.driver [None req-97008fd9-e9b8-40e6-a7ac-3fb4fd61c679 a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] [instance: 472d2aca-ddcf-4c68-87d9-c9fe623fae5e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:39:19 compute-0 nova_compute[250018]: 2026-01-20 14:39:19.214 250022 DEBUG nova.virt.libvirt.driver [None req-97008fd9-e9b8-40e6-a7ac-3fb4fd61c679 a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] [instance: 472d2aca-ddcf-4c68-87d9-c9fe623fae5e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:39:19 compute-0 nova_compute[250018]: 2026-01-20 14:39:19.214 250022 DEBUG nova.virt.libvirt.driver [None req-97008fd9-e9b8-40e6-a7ac-3fb4fd61c679 a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] [instance: 472d2aca-ddcf-4c68-87d9-c9fe623fae5e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:39:19 compute-0 nova_compute[250018]: 2026-01-20 14:39:19.214 250022 DEBUG nova.virt.libvirt.driver [None req-97008fd9-e9b8-40e6-a7ac-3fb4fd61c679 a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] [instance: 472d2aca-ddcf-4c68-87d9-c9fe623fae5e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:39:19 compute-0 nova_compute[250018]: 2026-01-20 14:39:19.215 250022 DEBUG nova.virt.libvirt.driver [None req-97008fd9-e9b8-40e6-a7ac-3fb4fd61c679 a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] [instance: 472d2aca-ddcf-4c68-87d9-c9fe623fae5e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:39:19 compute-0 nova_compute[250018]: 2026-01-20 14:39:19.215 250022 DEBUG nova.virt.libvirt.driver [None req-97008fd9-e9b8-40e6-a7ac-3fb4fd61c679 a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] [instance: 472d2aca-ddcf-4c68-87d9-c9fe623fae5e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:39:19 compute-0 nova_compute[250018]: 2026-01-20 14:39:19.240 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 472d2aca-ddcf-4c68-87d9-c9fe623fae5e] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:39:19 compute-0 nova_compute[250018]: 2026-01-20 14:39:19.240 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768919959.1812446, 472d2aca-ddcf-4c68-87d9-c9fe623fae5e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:39:19 compute-0 nova_compute[250018]: 2026-01-20 14:39:19.241 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 472d2aca-ddcf-4c68-87d9-c9fe623fae5e] VM Paused (Lifecycle Event)
Jan 20 14:39:19 compute-0 nova_compute[250018]: 2026-01-20 14:39:19.270 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 472d2aca-ddcf-4c68-87d9-c9fe623fae5e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:39:19 compute-0 nova_compute[250018]: 2026-01-20 14:39:19.273 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768919959.183151, 472d2aca-ddcf-4c68-87d9-c9fe623fae5e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:39:19 compute-0 nova_compute[250018]: 2026-01-20 14:39:19.273 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 472d2aca-ddcf-4c68-87d9-c9fe623fae5e] VM Resumed (Lifecycle Event)
Jan 20 14:39:19 compute-0 nova_compute[250018]: 2026-01-20 14:39:19.342 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 472d2aca-ddcf-4c68-87d9-c9fe623fae5e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:39:19 compute-0 nova_compute[250018]: 2026-01-20 14:39:19.343 250022 INFO nova.compute.manager [None req-97008fd9-e9b8-40e6-a7ac-3fb4fd61c679 a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] [instance: 472d2aca-ddcf-4c68-87d9-c9fe623fae5e] Took 6.31 seconds to spawn the instance on the hypervisor.
Jan 20 14:39:19 compute-0 nova_compute[250018]: 2026-01-20 14:39:19.344 250022 DEBUG nova.compute.manager [None req-97008fd9-e9b8-40e6-a7ac-3fb4fd61c679 a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] [instance: 472d2aca-ddcf-4c68-87d9-c9fe623fae5e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:39:19 compute-0 nova_compute[250018]: 2026-01-20 14:39:19.346 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 472d2aca-ddcf-4c68-87d9-c9fe623fae5e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:39:19 compute-0 nova_compute[250018]: 2026-01-20 14:39:19.398 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 472d2aca-ddcf-4c68-87d9-c9fe623fae5e] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:39:19 compute-0 nova_compute[250018]: 2026-01-20 14:39:19.419 250022 INFO nova.compute.manager [None req-97008fd9-e9b8-40e6-a7ac-3fb4fd61c679 a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] [instance: 472d2aca-ddcf-4c68-87d9-c9fe623fae5e] Took 7.21 seconds to build instance.
Jan 20 14:39:19 compute-0 nova_compute[250018]: 2026-01-20 14:39:19.438 250022 DEBUG oslo_concurrency.lockutils [None req-97008fd9-e9b8-40e6-a7ac-3fb4fd61c679 a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Lock "472d2aca-ddcf-4c68-87d9-c9fe623fae5e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.328s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:39:19 compute-0 nova_compute[250018]: 2026-01-20 14:39:19.438 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "472d2aca-ddcf-4c68-87d9-c9fe623fae5e" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 4.520s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:39:19 compute-0 nova_compute[250018]: 2026-01-20 14:39:19.438 250022 INFO nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] [instance: 472d2aca-ddcf-4c68-87d9-c9fe623fae5e] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:39:19 compute-0 nova_compute[250018]: 2026-01-20 14:39:19.438 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "472d2aca-ddcf-4c68-87d9-c9fe623fae5e" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:39:19 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1529: 321 pgs: 321 active+clean; 348 MiB data, 725 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 6.3 MiB/s wr, 217 op/s
Jan 20 14:39:19 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:39:19 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:39:19 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:39:19.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:39:20 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:39:20 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:39:20 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:39:20.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:39:20 compute-0 ceph-mon[74360]: pgmap v1529: 321 pgs: 321 active+clean; 348 MiB data, 725 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 6.3 MiB/s wr, 217 op/s
Jan 20 14:39:20 compute-0 nova_compute[250018]: 2026-01-20 14:39:20.781 250022 DEBUG nova.compute.manager [req-fb0259cb-b463-4bf0-a053-de92b3e2f30c req-55cdc6e5-e146-4efe-a756-d65394eda7cc 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 472d2aca-ddcf-4c68-87d9-c9fe623fae5e] Received event network-vif-plugged-100a89bb-a5c7-4e59-a9d5-500f5781d5ee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:39:20 compute-0 nova_compute[250018]: 2026-01-20 14:39:20.782 250022 DEBUG oslo_concurrency.lockutils [req-fb0259cb-b463-4bf0-a053-de92b3e2f30c req-55cdc6e5-e146-4efe-a756-d65394eda7cc 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "472d2aca-ddcf-4c68-87d9-c9fe623fae5e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:39:20 compute-0 nova_compute[250018]: 2026-01-20 14:39:20.782 250022 DEBUG oslo_concurrency.lockutils [req-fb0259cb-b463-4bf0-a053-de92b3e2f30c req-55cdc6e5-e146-4efe-a756-d65394eda7cc 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "472d2aca-ddcf-4c68-87d9-c9fe623fae5e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:39:20 compute-0 nova_compute[250018]: 2026-01-20 14:39:20.782 250022 DEBUG oslo_concurrency.lockutils [req-fb0259cb-b463-4bf0-a053-de92b3e2f30c req-55cdc6e5-e146-4efe-a756-d65394eda7cc 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "472d2aca-ddcf-4c68-87d9-c9fe623fae5e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:39:20 compute-0 nova_compute[250018]: 2026-01-20 14:39:20.782 250022 DEBUG nova.compute.manager [req-fb0259cb-b463-4bf0-a053-de92b3e2f30c req-55cdc6e5-e146-4efe-a756-d65394eda7cc 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 472d2aca-ddcf-4c68-87d9-c9fe623fae5e] No waiting events found dispatching network-vif-plugged-100a89bb-a5c7-4e59-a9d5-500f5781d5ee pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:39:20 compute-0 nova_compute[250018]: 2026-01-20 14:39:20.782 250022 WARNING nova.compute.manager [req-fb0259cb-b463-4bf0-a053-de92b3e2f30c req-55cdc6e5-e146-4efe-a756-d65394eda7cc 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 472d2aca-ddcf-4c68-87d9-c9fe623fae5e] Received unexpected event network-vif-plugged-100a89bb-a5c7-4e59-a9d5-500f5781d5ee for instance with vm_state active and task_state None.
Jan 20 14:39:20 compute-0 nova_compute[250018]: 2026-01-20 14:39:20.783 250022 DEBUG nova.compute.manager [req-fb0259cb-b463-4bf0-a053-de92b3e2f30c req-55cdc6e5-e146-4efe-a756-d65394eda7cc 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 472d2aca-ddcf-4c68-87d9-c9fe623fae5e] Received event network-changed-100a89bb-a5c7-4e59-a9d5-500f5781d5ee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:39:20 compute-0 nova_compute[250018]: 2026-01-20 14:39:20.783 250022 DEBUG nova.compute.manager [req-fb0259cb-b463-4bf0-a053-de92b3e2f30c req-55cdc6e5-e146-4efe-a756-d65394eda7cc 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 472d2aca-ddcf-4c68-87d9-c9fe623fae5e] Refreshing instance network info cache due to event network-changed-100a89bb-a5c7-4e59-a9d5-500f5781d5ee. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:39:20 compute-0 nova_compute[250018]: 2026-01-20 14:39:20.783 250022 DEBUG oslo_concurrency.lockutils [req-fb0259cb-b463-4bf0-a053-de92b3e2f30c req-55cdc6e5-e146-4efe-a756-d65394eda7cc 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-472d2aca-ddcf-4c68-87d9-c9fe623fae5e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:39:20 compute-0 nova_compute[250018]: 2026-01-20 14:39:20.783 250022 DEBUG oslo_concurrency.lockutils [req-fb0259cb-b463-4bf0-a053-de92b3e2f30c req-55cdc6e5-e146-4efe-a756-d65394eda7cc 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-472d2aca-ddcf-4c68-87d9-c9fe623fae5e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:39:20 compute-0 nova_compute[250018]: 2026-01-20 14:39:20.784 250022 DEBUG nova.network.neutron [req-fb0259cb-b463-4bf0-a053-de92b3e2f30c req-55cdc6e5-e146-4efe-a756-d65394eda7cc 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 472d2aca-ddcf-4c68-87d9-c9fe623fae5e] Refreshing network info cache for port 100a89bb-a5c7-4e59-a9d5-500f5781d5ee _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:39:20 compute-0 nova_compute[250018]: 2026-01-20 14:39:20.941 250022 DEBUG oslo_concurrency.lockutils [None req-4b6de632-ab4b-4a82-8e05-9d1ab2d5feda a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Acquiring lock "472d2aca-ddcf-4c68-87d9-c9fe623fae5e" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:39:20 compute-0 nova_compute[250018]: 2026-01-20 14:39:20.941 250022 DEBUG oslo_concurrency.lockutils [None req-4b6de632-ab4b-4a82-8e05-9d1ab2d5feda a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Lock "472d2aca-ddcf-4c68-87d9-c9fe623fae5e" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:39:20 compute-0 nova_compute[250018]: 2026-01-20 14:39:20.942 250022 INFO nova.compute.manager [None req-4b6de632-ab4b-4a82-8e05-9d1ab2d5feda a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] [instance: 472d2aca-ddcf-4c68-87d9-c9fe623fae5e] Rebooting instance
Jan 20 14:39:20 compute-0 nova_compute[250018]: 2026-01-20 14:39:20.957 250022 DEBUG oslo_concurrency.lockutils [None req-4b6de632-ab4b-4a82-8e05-9d1ab2d5feda a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Acquiring lock "refresh_cache-472d2aca-ddcf-4c68-87d9-c9fe623fae5e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:39:21 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1530: 321 pgs: 321 active+clean; 352 MiB data, 732 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 5.5 MiB/s wr, 223 op/s
Jan 20 14:39:21 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:39:21 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:39:21 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:39:21.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:39:21 compute-0 nova_compute[250018]: 2026-01-20 14:39:21.846 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:39:21 compute-0 ceph-mon[74360]: pgmap v1530: 321 pgs: 321 active+clean; 352 MiB data, 732 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 5.5 MiB/s wr, 223 op/s
Jan 20 14:39:21 compute-0 sudo[289621]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:39:21 compute-0 sudo[289621]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:39:21 compute-0 sudo[289621]: pam_unix(sudo:session): session closed for user root
Jan 20 14:39:22 compute-0 sudo[289646]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:39:22 compute-0 sudo[289646]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:39:22 compute-0 sudo[289646]: pam_unix(sudo:session): session closed for user root
Jan 20 14:39:22 compute-0 nova_compute[250018]: 2026-01-20 14:39:22.072 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:39:22 compute-0 sudo[289671]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:39:22 compute-0 sudo[289671]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:39:22 compute-0 sudo[289671]: pam_unix(sudo:session): session closed for user root
Jan 20 14:39:22 compute-0 sudo[289696]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 14:39:22 compute-0 sudo[289696]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:39:22 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:39:22 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:39:22 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:39:22.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:39:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:39:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:39:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:39:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:39:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:39:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:39:22 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 20 14:39:22 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:39:22 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 20 14:39:22 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:39:22 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 20 14:39:22 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:39:22 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 20 14:39:22 compute-0 nova_compute[250018]: 2026-01-20 14:39:22.622 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:39:22 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:39:22 compute-0 sudo[289696]: pam_unix(sudo:session): session closed for user root
Jan 20 14:39:22 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Jan 20 14:39:22 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 20 14:39:22 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:39:23 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Jan 20 14:39:23 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 20 14:39:23 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:39:23 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:39:23 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:39:23 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:39:23 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 20 14:39:23 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 20 14:39:23 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 14:39:23 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:39:23 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 20 14:39:23 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:39:23 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 20 14:39:23 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:39:23 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 67bf052d-083f-4ead-a8e9-be4269d677cd does not exist
Jan 20 14:39:23 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 9795d9f0-9e5f-4cca-a692-06fe5e1493ae does not exist
Jan 20 14:39:23 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev f0b3cdfc-1435-4ecf-9069-d651e004ad2f does not exist
Jan 20 14:39:23 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 20 14:39:23 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:39:23 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 20 14:39:23 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:39:23 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 14:39:23 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:39:23 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1531: 321 pgs: 321 active+clean; 363 MiB data, 741 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 4.7 MiB/s wr, 171 op/s
Jan 20 14:39:23 compute-0 sudo[289753]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:39:23 compute-0 sudo[289753]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:39:23 compute-0 sudo[289753]: pam_unix(sudo:session): session closed for user root
Jan 20 14:39:23 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:39:23 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:39:23 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:39:23.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:39:23 compute-0 sudo[289778]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:39:23 compute-0 sudo[289778]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:39:23 compute-0 sudo[289778]: pam_unix(sudo:session): session closed for user root
Jan 20 14:39:23 compute-0 sudo[289803]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:39:23 compute-0 sudo[289803]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:39:23 compute-0 sudo[289803]: pam_unix(sudo:session): session closed for user root
Jan 20 14:39:23 compute-0 sudo[289828]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 14:39:23 compute-0 sudo[289828]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:39:23 compute-0 nova_compute[250018]: 2026-01-20 14:39:23.860 250022 DEBUG nova.network.neutron [req-fb0259cb-b463-4bf0-a053-de92b3e2f30c req-55cdc6e5-e146-4efe-a756-d65394eda7cc 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 472d2aca-ddcf-4c68-87d9-c9fe623fae5e] Updated VIF entry in instance network info cache for port 100a89bb-a5c7-4e59-a9d5-500f5781d5ee. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 14:39:23 compute-0 nova_compute[250018]: 2026-01-20 14:39:23.860 250022 DEBUG nova.network.neutron [req-fb0259cb-b463-4bf0-a053-de92b3e2f30c req-55cdc6e5-e146-4efe-a756-d65394eda7cc 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 472d2aca-ddcf-4c68-87d9-c9fe623fae5e] Updating instance_info_cache with network_info: [{"id": "100a89bb-a5c7-4e59-a9d5-500f5781d5ee", "address": "fa:16:3e:3f:7c:5d", "network": {"id": "dece58f4-881e-47c7-961b-4367a4c3c21a", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1623251841-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ebadba01dc3642a9a3e39469ff5d4708", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap100a89bb-a5", "ovs_interfaceid": "100a89bb-a5c7-4e59-a9d5-500f5781d5ee", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:39:23 compute-0 nova_compute[250018]: 2026-01-20 14:39:23.886 250022 DEBUG oslo_concurrency.lockutils [req-fb0259cb-b463-4bf0-a053-de92b3e2f30c req-55cdc6e5-e146-4efe-a756-d65394eda7cc 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-472d2aca-ddcf-4c68-87d9-c9fe623fae5e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:39:23 compute-0 nova_compute[250018]: 2026-01-20 14:39:23.886 250022 DEBUG oslo_concurrency.lockutils [None req-4b6de632-ab4b-4a82-8e05-9d1ab2d5feda a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Acquired lock "refresh_cache-472d2aca-ddcf-4c68-87d9-c9fe623fae5e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:39:23 compute-0 nova_compute[250018]: 2026-01-20 14:39:23.886 250022 DEBUG nova.network.neutron [None req-4b6de632-ab4b-4a82-8e05-9d1ab2d5feda a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] [instance: 472d2aca-ddcf-4c68-87d9-c9fe623fae5e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:39:24 compute-0 nova_compute[250018]: 2026-01-20 14:39:24.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:39:24 compute-0 podman[289892]: 2026-01-20 14:39:24.087173492 +0000 UTC m=+0.037494451 container create 610357035b6828bc0385df7d4511e344e8a9d25e60d63cdc044337dec3dacca5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_tharp, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:39:24 compute-0 systemd[1]: Started libpod-conmon-610357035b6828bc0385df7d4511e344e8a9d25e60d63cdc044337dec3dacca5.scope.
Jan 20 14:39:24 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:39:24 compute-0 podman[289892]: 2026-01-20 14:39:24.16089504 +0000 UTC m=+0.111216019 container init 610357035b6828bc0385df7d4511e344e8a9d25e60d63cdc044337dec3dacca5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_tharp, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2)
Jan 20 14:39:24 compute-0 podman[289892]: 2026-01-20 14:39:24.071043707 +0000 UTC m=+0.021364686 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:39:24 compute-0 podman[289892]: 2026-01-20 14:39:24.169095541 +0000 UTC m=+0.119416500 container start 610357035b6828bc0385df7d4511e344e8a9d25e60d63cdc044337dec3dacca5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_tharp, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:39:24 compute-0 podman[289892]: 2026-01-20 14:39:24.172106292 +0000 UTC m=+0.122427261 container attach 610357035b6828bc0385df7d4511e344e8a9d25e60d63cdc044337dec3dacca5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_tharp, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:39:24 compute-0 vigorous_tharp[289908]: 167 167
Jan 20 14:39:24 compute-0 systemd[1]: libpod-610357035b6828bc0385df7d4511e344e8a9d25e60d63cdc044337dec3dacca5.scope: Deactivated successfully.
Jan 20 14:39:24 compute-0 podman[289892]: 2026-01-20 14:39:24.174207029 +0000 UTC m=+0.124527998 container died 610357035b6828bc0385df7d4511e344e8a9d25e60d63cdc044337dec3dacca5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_tharp, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 20 14:39:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-2218ff6b27d9ffadfc944396dd4caa3c12f2230268ca93d43f7e2c88554b8607-merged.mount: Deactivated successfully.
Jan 20 14:39:24 compute-0 podman[289892]: 2026-01-20 14:39:24.211807013 +0000 UTC m=+0.162127972 container remove 610357035b6828bc0385df7d4511e344e8a9d25e60d63cdc044337dec3dacca5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_tharp, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 20 14:39:24 compute-0 systemd[1]: libpod-conmon-610357035b6828bc0385df7d4511e344e8a9d25e60d63cdc044337dec3dacca5.scope: Deactivated successfully.
Jan 20 14:39:24 compute-0 podman[289932]: 2026-01-20 14:39:24.371210751 +0000 UTC m=+0.036196747 container create dfc93c87ed8a5fa23f474bc3cef3be4b6e2268de8fd0f6554ccf410fa061c96a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_archimedes, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 20 14:39:24 compute-0 systemd[1]: Started libpod-conmon-dfc93c87ed8a5fa23f474bc3cef3be4b6e2268de8fd0f6554ccf410fa061c96a.scope.
Jan 20 14:39:24 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:39:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a6bfa218635d4ff999d7385e0dc4f51aaaaa546dc22fe5b76555f2e57359537/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:39:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a6bfa218635d4ff999d7385e0dc4f51aaaaa546dc22fe5b76555f2e57359537/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:39:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a6bfa218635d4ff999d7385e0dc4f51aaaaa546dc22fe5b76555f2e57359537/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:39:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a6bfa218635d4ff999d7385e0dc4f51aaaaa546dc22fe5b76555f2e57359537/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:39:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a6bfa218635d4ff999d7385e0dc4f51aaaaa546dc22fe5b76555f2e57359537/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 14:39:24 compute-0 podman[289932]: 2026-01-20 14:39:24.355488107 +0000 UTC m=+0.020474133 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:39:24 compute-0 podman[289932]: 2026-01-20 14:39:24.466057309 +0000 UTC m=+0.131043325 container init dfc93c87ed8a5fa23f474bc3cef3be4b6e2268de8fd0f6554ccf410fa061c96a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_archimedes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 20 14:39:24 compute-0 podman[289932]: 2026-01-20 14:39:24.472114042 +0000 UTC m=+0.137100038 container start dfc93c87ed8a5fa23f474bc3cef3be4b6e2268de8fd0f6554ccf410fa061c96a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_archimedes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 20 14:39:24 compute-0 podman[289932]: 2026-01-20 14:39:24.475322909 +0000 UTC m=+0.140308935 container attach dfc93c87ed8a5fa23f474bc3cef3be4b6e2268de8fd0f6554ccf410fa061c96a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_archimedes, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:39:24 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:39:24 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:39:24 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:39:24.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:39:24 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:39:24 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:39:24 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:39:24 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:39:24 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:39:24 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:39:24 compute-0 ceph-mon[74360]: pgmap v1531: 321 pgs: 321 active+clean; 363 MiB data, 741 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 4.7 MiB/s wr, 171 op/s
Jan 20 14:39:24 compute-0 sudo[289954]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:39:24 compute-0 sudo[289954]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:39:24 compute-0 sudo[289954]: pam_unix(sudo:session): session closed for user root
Jan 20 14:39:24 compute-0 sudo[289979]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:39:24 compute-0 sudo[289979]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:39:24 compute-0 sudo[289979]: pam_unix(sudo:session): session closed for user root
Jan 20 14:39:25 compute-0 condescending_archimedes[289949]: --> passed data devices: 0 physical, 1 LVM
Jan 20 14:39:25 compute-0 condescending_archimedes[289949]: --> relative data size: 1.0
Jan 20 14:39:25 compute-0 condescending_archimedes[289949]: --> All data devices are unavailable
Jan 20 14:39:25 compute-0 systemd[1]: libpod-dfc93c87ed8a5fa23f474bc3cef3be4b6e2268de8fd0f6554ccf410fa061c96a.scope: Deactivated successfully.
Jan 20 14:39:25 compute-0 podman[289932]: 2026-01-20 14:39:25.314892758 +0000 UTC m=+0.979878754 container died dfc93c87ed8a5fa23f474bc3cef3be4b6e2268de8fd0f6554ccf410fa061c96a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_archimedes, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:39:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-4a6bfa218635d4ff999d7385e0dc4f51aaaaa546dc22fe5b76555f2e57359537-merged.mount: Deactivated successfully.
Jan 20 14:39:25 compute-0 podman[289932]: 2026-01-20 14:39:25.409650083 +0000 UTC m=+1.074636089 container remove dfc93c87ed8a5fa23f474bc3cef3be4b6e2268de8fd0f6554ccf410fa061c96a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_archimedes, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 20 14:39:25 compute-0 systemd[1]: libpod-conmon-dfc93c87ed8a5fa23f474bc3cef3be4b6e2268de8fd0f6554ccf410fa061c96a.scope: Deactivated successfully.
Jan 20 14:39:25 compute-0 sudo[289828]: pam_unix(sudo:session): session closed for user root
Jan 20 14:39:25 compute-0 sudo[290027]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:39:25 compute-0 sudo[290027]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:39:25 compute-0 sudo[290027]: pam_unix(sudo:session): session closed for user root
Jan 20 14:39:25 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1532: 321 pgs: 321 active+clean; 372 MiB data, 750 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 4.2 MiB/s wr, 189 op/s
Jan 20 14:39:25 compute-0 sudo[290052]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:39:25 compute-0 sudo[290052]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:39:25 compute-0 sudo[290052]: pam_unix(sudo:session): session closed for user root
Jan 20 14:39:25 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:39:25 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:39:25 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:39:25.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:39:25 compute-0 sudo[290077]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:39:25 compute-0 sudo[290077]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:39:25 compute-0 sudo[290077]: pam_unix(sudo:session): session closed for user root
Jan 20 14:39:25 compute-0 sudo[290102]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- lvm list --format json
Jan 20 14:39:25 compute-0 sudo[290102]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:39:26 compute-0 podman[290167]: 2026-01-20 14:39:26.058348205 +0000 UTC m=+0.039273820 container create 0f72c1f2557cb5b6334b1d0e4dbdfe0224e2a44f3d96089774ad19a5080508b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_yalow, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 20 14:39:26 compute-0 systemd[1]: Started libpod-conmon-0f72c1f2557cb5b6334b1d0e4dbdfe0224e2a44f3d96089774ad19a5080508b5.scope.
Jan 20 14:39:26 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:39:26 compute-0 podman[290167]: 2026-01-20 14:39:26.040982267 +0000 UTC m=+0.021907892 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:39:26 compute-0 podman[290167]: 2026-01-20 14:39:26.147954861 +0000 UTC m=+0.128880556 container init 0f72c1f2557cb5b6334b1d0e4dbdfe0224e2a44f3d96089774ad19a5080508b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_yalow, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:39:26 compute-0 podman[290167]: 2026-01-20 14:39:26.15606143 +0000 UTC m=+0.136987035 container start 0f72c1f2557cb5b6334b1d0e4dbdfe0224e2a44f3d96089774ad19a5080508b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_yalow, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 20 14:39:26 compute-0 podman[290167]: 2026-01-20 14:39:26.159546744 +0000 UTC m=+0.140472379 container attach 0f72c1f2557cb5b6334b1d0e4dbdfe0224e2a44f3d96089774ad19a5080508b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_yalow, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:39:26 compute-0 silly_yalow[290183]: 167 167
Jan 20 14:39:26 compute-0 systemd[1]: libpod-0f72c1f2557cb5b6334b1d0e4dbdfe0224e2a44f3d96089774ad19a5080508b5.scope: Deactivated successfully.
Jan 20 14:39:26 compute-0 podman[290167]: 2026-01-20 14:39:26.161878967 +0000 UTC m=+0.142804602 container died 0f72c1f2557cb5b6334b1d0e4dbdfe0224e2a44f3d96089774ad19a5080508b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_yalow, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 20 14:39:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-8b967b992114e24267c0447bb82e54af64785c8cfd796264b6d0c38375733954-merged.mount: Deactivated successfully.
Jan 20 14:39:26 compute-0 podman[290167]: 2026-01-20 14:39:26.209639955 +0000 UTC m=+0.190565560 container remove 0f72c1f2557cb5b6334b1d0e4dbdfe0224e2a44f3d96089774ad19a5080508b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_yalow, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 20 14:39:26 compute-0 systemd[1]: libpod-conmon-0f72c1f2557cb5b6334b1d0e4dbdfe0224e2a44f3d96089774ad19a5080508b5.scope: Deactivated successfully.
Jan 20 14:39:26 compute-0 podman[290207]: 2026-01-20 14:39:26.406836392 +0000 UTC m=+0.056648888 container create 9cc36af080a78bdaddda3c80eff242be50c7075c6d4400d8a1d05b0cde63676e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_heyrovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:39:26 compute-0 systemd[1]: Started libpod-conmon-9cc36af080a78bdaddda3c80eff242be50c7075c6d4400d8a1d05b0cde63676e.scope.
Jan 20 14:39:26 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:39:26 compute-0 podman[290207]: 2026-01-20 14:39:26.372144077 +0000 UTC m=+0.021956603 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:39:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/996ae73db4f6ab9971c8f02314fc7fd60c0bf5d049d1fcff11ca77ceec16e038/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:39:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/996ae73db4f6ab9971c8f02314fc7fd60c0bf5d049d1fcff11ca77ceec16e038/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:39:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/996ae73db4f6ab9971c8f02314fc7fd60c0bf5d049d1fcff11ca77ceec16e038/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:39:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/996ae73db4f6ab9971c8f02314fc7fd60c0bf5d049d1fcff11ca77ceec16e038/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:39:26 compute-0 podman[290207]: 2026-01-20 14:39:26.486214263 +0000 UTC m=+0.136026779 container init 9cc36af080a78bdaddda3c80eff242be50c7075c6d4400d8a1d05b0cde63676e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_heyrovsky, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:39:26 compute-0 podman[290207]: 2026-01-20 14:39:26.49313003 +0000 UTC m=+0.142942526 container start 9cc36af080a78bdaddda3c80eff242be50c7075c6d4400d8a1d05b0cde63676e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_heyrovsky, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:39:26 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:39:26 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:39:26 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:39:26.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:39:26 compute-0 podman[290207]: 2026-01-20 14:39:26.497425305 +0000 UTC m=+0.147237801 container attach 9cc36af080a78bdaddda3c80eff242be50c7075c6d4400d8a1d05b0cde63676e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_heyrovsky, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:39:26 compute-0 ceph-mon[74360]: pgmap v1532: 321 pgs: 321 active+clean; 372 MiB data, 750 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 4.2 MiB/s wr, 189 op/s
Jan 20 14:39:26 compute-0 nova_compute[250018]: 2026-01-20 14:39:26.847 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:39:27 compute-0 nova_compute[250018]: 2026-01-20 14:39:27.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:39:27 compute-0 nova_compute[250018]: 2026-01-20 14:39:27.052 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:39:27 compute-0 unruffled_heyrovsky[290223]: {
Jan 20 14:39:27 compute-0 unruffled_heyrovsky[290223]:     "0": [
Jan 20 14:39:27 compute-0 unruffled_heyrovsky[290223]:         {
Jan 20 14:39:27 compute-0 unruffled_heyrovsky[290223]:             "devices": [
Jan 20 14:39:27 compute-0 unruffled_heyrovsky[290223]:                 "/dev/loop3"
Jan 20 14:39:27 compute-0 unruffled_heyrovsky[290223]:             ],
Jan 20 14:39:27 compute-0 unruffled_heyrovsky[290223]:             "lv_name": "ceph_lv0",
Jan 20 14:39:27 compute-0 unruffled_heyrovsky[290223]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:39:27 compute-0 unruffled_heyrovsky[290223]:             "lv_size": "7511998464",
Jan 20 14:39:27 compute-0 unruffled_heyrovsky[290223]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e399cf45-e6b6-5393-99f1-75c601d3f188,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afc3740a-ccee-46f8-b343-22648cc89376,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 20 14:39:27 compute-0 unruffled_heyrovsky[290223]:             "lv_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 14:39:27 compute-0 unruffled_heyrovsky[290223]:             "name": "ceph_lv0",
Jan 20 14:39:27 compute-0 unruffled_heyrovsky[290223]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:39:27 compute-0 unruffled_heyrovsky[290223]:             "tags": {
Jan 20 14:39:27 compute-0 unruffled_heyrovsky[290223]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:39:27 compute-0 unruffled_heyrovsky[290223]:                 "ceph.block_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 14:39:27 compute-0 unruffled_heyrovsky[290223]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 14:39:27 compute-0 unruffled_heyrovsky[290223]:                 "ceph.cluster_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 14:39:27 compute-0 unruffled_heyrovsky[290223]:                 "ceph.cluster_name": "ceph",
Jan 20 14:39:27 compute-0 unruffled_heyrovsky[290223]:                 "ceph.crush_device_class": "",
Jan 20 14:39:27 compute-0 unruffled_heyrovsky[290223]:                 "ceph.encrypted": "0",
Jan 20 14:39:27 compute-0 unruffled_heyrovsky[290223]:                 "ceph.osd_fsid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 14:39:27 compute-0 unruffled_heyrovsky[290223]:                 "ceph.osd_id": "0",
Jan 20 14:39:27 compute-0 unruffled_heyrovsky[290223]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 14:39:27 compute-0 unruffled_heyrovsky[290223]:                 "ceph.type": "block",
Jan 20 14:39:27 compute-0 unruffled_heyrovsky[290223]:                 "ceph.vdo": "0"
Jan 20 14:39:27 compute-0 unruffled_heyrovsky[290223]:             },
Jan 20 14:39:27 compute-0 unruffled_heyrovsky[290223]:             "type": "block",
Jan 20 14:39:27 compute-0 unruffled_heyrovsky[290223]:             "vg_name": "ceph_vg0"
Jan 20 14:39:27 compute-0 unruffled_heyrovsky[290223]:         }
Jan 20 14:39:27 compute-0 unruffled_heyrovsky[290223]:     ]
Jan 20 14:39:27 compute-0 unruffled_heyrovsky[290223]: }
Jan 20 14:39:27 compute-0 systemd[1]: libpod-9cc36af080a78bdaddda3c80eff242be50c7075c6d4400d8a1d05b0cde63676e.scope: Deactivated successfully.
Jan 20 14:39:27 compute-0 podman[290207]: 2026-01-20 14:39:27.310621794 +0000 UTC m=+0.960434300 container died 9cc36af080a78bdaddda3c80eff242be50c7075c6d4400d8a1d05b0cde63676e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_heyrovsky, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:39:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-996ae73db4f6ab9971c8f02314fc7fd60c0bf5d049d1fcff11ca77ceec16e038-merged.mount: Deactivated successfully.
Jan 20 14:39:27 compute-0 nova_compute[250018]: 2026-01-20 14:39:27.348 250022 DEBUG nova.network.neutron [None req-4b6de632-ab4b-4a82-8e05-9d1ab2d5feda a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] [instance: 472d2aca-ddcf-4c68-87d9-c9fe623fae5e] Updating instance_info_cache with network_info: [{"id": "100a89bb-a5c7-4e59-a9d5-500f5781d5ee", "address": "fa:16:3e:3f:7c:5d", "network": {"id": "dece58f4-881e-47c7-961b-4367a4c3c21a", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1623251841-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ebadba01dc3642a9a3e39469ff5d4708", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap100a89bb-a5", "ovs_interfaceid": "100a89bb-a5c7-4e59-a9d5-500f5781d5ee", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:39:27 compute-0 nova_compute[250018]: 2026-01-20 14:39:27.367 250022 DEBUG oslo_concurrency.lockutils [None req-4b6de632-ab4b-4a82-8e05-9d1ab2d5feda a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Releasing lock "refresh_cache-472d2aca-ddcf-4c68-87d9-c9fe623fae5e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:39:27 compute-0 nova_compute[250018]: 2026-01-20 14:39:27.369 250022 DEBUG nova.compute.manager [None req-4b6de632-ab4b-4a82-8e05-9d1ab2d5feda a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] [instance: 472d2aca-ddcf-4c68-87d9-c9fe623fae5e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:39:27 compute-0 podman[290207]: 2026-01-20 14:39:27.418521353 +0000 UTC m=+1.068333849 container remove 9cc36af080a78bdaddda3c80eff242be50c7075c6d4400d8a1d05b0cde63676e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_heyrovsky, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:39:27 compute-0 systemd[1]: libpod-conmon-9cc36af080a78bdaddda3c80eff242be50c7075c6d4400d8a1d05b0cde63676e.scope: Deactivated successfully.
Jan 20 14:39:27 compute-0 sudo[290102]: pam_unix(sudo:session): session closed for user root
Jan 20 14:39:27 compute-0 kernel: tap100a89bb-a5 (unregistering): left promiscuous mode
Jan 20 14:39:27 compute-0 NetworkManager[48960]: <info>  [1768919967.5124] device (tap100a89bb-a5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 14:39:27 compute-0 sudo[290248]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:39:27 compute-0 nova_compute[250018]: 2026-01-20 14:39:27.521 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:39:27 compute-0 ovn_controller[148666]: 2026-01-20T14:39:27Z|00189|binding|INFO|Releasing lport 100a89bb-a5c7-4e59-a9d5-500f5781d5ee from this chassis (sb_readonly=0)
Jan 20 14:39:27 compute-0 ovn_controller[148666]: 2026-01-20T14:39:27Z|00190|binding|INFO|Setting lport 100a89bb-a5c7-4e59-a9d5-500f5781d5ee down in Southbound
Jan 20 14:39:27 compute-0 sudo[290248]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:39:27 compute-0 ovn_controller[148666]: 2026-01-20T14:39:27Z|00191|binding|INFO|Removing iface tap100a89bb-a5 ovn-installed in OVS
Jan 20 14:39:27 compute-0 nova_compute[250018]: 2026-01-20 14:39:27.524 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:39:27 compute-0 sudo[290248]: pam_unix(sudo:session): session closed for user root
Jan 20 14:39:27 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:39:27.530 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3f:7c:5d 10.100.0.5'], port_security=['fa:16:3e:3f:7c:5d 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '472d2aca-ddcf-4c68-87d9-c9fe623fae5e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-dece58f4-881e-47c7-961b-4367a4c3c21a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ebadba01dc3642a9a3e39469ff5d4708', 'neutron:revision_number': '5', 'neutron:security_group_ids': '5567989a-fdc9-4eba-934a-717bea3108fe be541bb3-3f03-4ac6-a46b-7651e33995f4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2446ce7a-4f1f-4833-83c5-907fbc775260, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=100a89bb-a5c7-4e59-a9d5-500f5781d5ee) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:39:27 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:39:27.531 160071 INFO neutron.agent.ovn.metadata.agent [-] Port 100a89bb-a5c7-4e59-a9d5-500f5781d5ee in datapath dece58f4-881e-47c7-961b-4367a4c3c21a unbound from our chassis
Jan 20 14:39:27 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:39:27.532 160071 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network dece58f4-881e-47c7-961b-4367a4c3c21a
Jan 20 14:39:27 compute-0 nova_compute[250018]: 2026-01-20 14:39:27.544 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:39:27 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:39:27.547 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[19954ae4-9e34-4430-8936-aa45c941b48f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:39:27 compute-0 systemd[1]: machine-qemu\x2d28\x2dinstance\x2d00000041.scope: Deactivated successfully.
Jan 20 14:39:27 compute-0 systemd[1]: machine-qemu\x2d28\x2dinstance\x2d00000041.scope: Consumed 9.166s CPU time.
Jan 20 14:39:27 compute-0 systemd-machined[216401]: Machine qemu-28-instance-00000041 terminated.
Jan 20 14:39:27 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:39:27.575 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[18815f77-3ef0-4fee-813c-32f0f67272ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:39:27 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:39:27.579 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[4c9cb7aa-bd80-42f8-8468-703b5dda3c62]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:39:27 compute-0 sudo[290275]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:39:27 compute-0 sudo[290275]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:39:27 compute-0 sudo[290275]: pam_unix(sudo:session): session closed for user root
Jan 20 14:39:27 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1533: 321 pgs: 321 active+clean; 372 MiB data, 750 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.8 MiB/s wr, 153 op/s
Jan 20 14:39:27 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:39:27.605 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[c409972c-815c-41a4-8f30-11e810e3fd34]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:39:27 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:39:27.621 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[c7b2ba6e-3065-4292-8210-2c8b074faaf5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdece58f4-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9a:e0:be'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 8, 'rx_bytes': 616, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 8, 'rx_bytes': 616, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 60], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 587325, 'reachable_time': 37173, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 290327, 'error': None, 'target': 'ovnmeta-dece58f4-881e-47c7-961b-4367a4c3c21a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:39:27 compute-0 nova_compute[250018]: 2026-01-20 14:39:27.624 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:39:27 compute-0 sudo[290310]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:39:27 compute-0 sudo[290310]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:39:27 compute-0 sudo[290310]: pam_unix(sudo:session): session closed for user root
Jan 20 14:39:27 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:39:27.638 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[1dd314e8-62d1-4021-8e32-5d142a928c6b]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapdece58f4-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 587336, 'tstamp': 587336}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 290334, 'error': None, 'target': 'ovnmeta-dece58f4-881e-47c7-961b-4367a4c3c21a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapdece58f4-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 587338, 'tstamp': 587338}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 290334, 'error': None, 'target': 'ovnmeta-dece58f4-881e-47c7-961b-4367a4c3c21a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:39:27 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:39:27.640 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdece58f4-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:39:27 compute-0 nova_compute[250018]: 2026-01-20 14:39:27.641 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:39:27 compute-0 nova_compute[250018]: 2026-01-20 14:39:27.646 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:39:27 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:39:27.647 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdece58f4-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:39:27 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:39:27.647 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:39:27 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:39:27.647 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapdece58f4-80, col_values=(('external_ids', {'iface-id': 'e7065b67-177e-47d4-a18c-921ae1c77ad4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:39:27 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:39:27.648 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:39:27 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:39:27 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:39:27 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:39:27.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:39:27 compute-0 nova_compute[250018]: 2026-01-20 14:39:27.690 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:39:27 compute-0 sudo[290338]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- raw list --format json
Jan 20 14:39:27 compute-0 nova_compute[250018]: 2026-01-20 14:39:27.694 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:39:27 compute-0 sudo[290338]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:39:27 compute-0 nova_compute[250018]: 2026-01-20 14:39:27.701 250022 INFO nova.virt.libvirt.driver [-] [instance: 472d2aca-ddcf-4c68-87d9-c9fe623fae5e] Instance destroyed successfully.
Jan 20 14:39:27 compute-0 nova_compute[250018]: 2026-01-20 14:39:27.701 250022 DEBUG nova.objects.instance [None req-4b6de632-ab4b-4a82-8e05-9d1ab2d5feda a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Lazy-loading 'resources' on Instance uuid 472d2aca-ddcf-4c68-87d9-c9fe623fae5e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:39:27 compute-0 nova_compute[250018]: 2026-01-20 14:39:27.723 250022 DEBUG nova.virt.libvirt.vif [None req-4b6de632-ab4b-4a82-8e05-9d1ab2d5feda a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:39:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-611554348',display_name='tempest-SecurityGroupsTestJSON-server-611554348',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-611554348',id=65,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:39:19Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ebadba01dc3642a9a3e39469ff5d4708',ramdisk_id='',reservation_id='r-q6l3neim',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min
_disk='1',image_min_ram='0',owner_project_name='tempest-SecurityGroupsTestJSON-1750338240',owner_user_name='tempest-SecurityGroupsTestJSON-1750338240-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T14:39:27Z,user_data=None,user_id='a3274e8540014ffa8cd910526cd964f7',uuid=472d2aca-ddcf-4c68-87d9-c9fe623fae5e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "100a89bb-a5c7-4e59-a9d5-500f5781d5ee", "address": "fa:16:3e:3f:7c:5d", "network": {"id": "dece58f4-881e-47c7-961b-4367a4c3c21a", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1623251841-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ebadba01dc3642a9a3e39469ff5d4708", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap100a89bb-a5", "ovs_interfaceid": "100a89bb-a5c7-4e59-a9d5-500f5781d5ee", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 14:39:27 compute-0 nova_compute[250018]: 2026-01-20 14:39:27.723 250022 DEBUG nova.network.os_vif_util [None req-4b6de632-ab4b-4a82-8e05-9d1ab2d5feda a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Converting VIF {"id": "100a89bb-a5c7-4e59-a9d5-500f5781d5ee", "address": "fa:16:3e:3f:7c:5d", "network": {"id": "dece58f4-881e-47c7-961b-4367a4c3c21a", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1623251841-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ebadba01dc3642a9a3e39469ff5d4708", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap100a89bb-a5", "ovs_interfaceid": "100a89bb-a5c7-4e59-a9d5-500f5781d5ee", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:39:27 compute-0 nova_compute[250018]: 2026-01-20 14:39:27.724 250022 DEBUG nova.network.os_vif_util [None req-4b6de632-ab4b-4a82-8e05-9d1ab2d5feda a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:3f:7c:5d,bridge_name='br-int',has_traffic_filtering=True,id=100a89bb-a5c7-4e59-a9d5-500f5781d5ee,network=Network(dece58f4-881e-47c7-961b-4367a4c3c21a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap100a89bb-a5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:39:27 compute-0 nova_compute[250018]: 2026-01-20 14:39:27.724 250022 DEBUG os_vif [None req-4b6de632-ab4b-4a82-8e05-9d1ab2d5feda a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:3f:7c:5d,bridge_name='br-int',has_traffic_filtering=True,id=100a89bb-a5c7-4e59-a9d5-500f5781d5ee,network=Network(dece58f4-881e-47c7-961b-4367a4c3c21a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap100a89bb-a5') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 14:39:27 compute-0 nova_compute[250018]: 2026-01-20 14:39:27.727 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:39:27 compute-0 nova_compute[250018]: 2026-01-20 14:39:27.727 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap100a89bb-a5, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:39:27 compute-0 nova_compute[250018]: 2026-01-20 14:39:27.730 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:39:27 compute-0 nova_compute[250018]: 2026-01-20 14:39:27.733 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:39:27 compute-0 nova_compute[250018]: 2026-01-20 14:39:27.735 250022 INFO os_vif [None req-4b6de632-ab4b-4a82-8e05-9d1ab2d5feda a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:3f:7c:5d,bridge_name='br-int',has_traffic_filtering=True,id=100a89bb-a5c7-4e59-a9d5-500f5781d5ee,network=Network(dece58f4-881e-47c7-961b-4367a4c3c21a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap100a89bb-a5')
Jan 20 14:39:27 compute-0 nova_compute[250018]: 2026-01-20 14:39:27.743 250022 DEBUG nova.virt.libvirt.driver [None req-4b6de632-ab4b-4a82-8e05-9d1ab2d5feda a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] [instance: 472d2aca-ddcf-4c68-87d9-c9fe623fae5e] Start _get_guest_xml network_info=[{"id": "100a89bb-a5c7-4e59-a9d5-500f5781d5ee", "address": "fa:16:3e:3f:7c:5d", "network": {"id": "dece58f4-881e-47c7-961b-4367a4c3c21a", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1623251841-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ebadba01dc3642a9a3e39469ff5d4708", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap100a89bb-a5", "ovs_interfaceid": "100a89bb-a5c7-4e59-a9d5-500f5781d5ee", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None 
block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encrypted': False, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'guest_format': None, 'size': 0, 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 14:39:27 compute-0 nova_compute[250018]: 2026-01-20 14:39:27.748 250022 WARNING nova.virt.libvirt.driver [None req-4b6de632-ab4b-4a82-8e05-9d1ab2d5feda a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:39:27 compute-0 nova_compute[250018]: 2026-01-20 14:39:27.753 250022 DEBUG nova.virt.libvirt.host [None req-4b6de632-ab4b-4a82-8e05-9d1ab2d5feda a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 14:39:27 compute-0 nova_compute[250018]: 2026-01-20 14:39:27.754 250022 DEBUG nova.virt.libvirt.host [None req-4b6de632-ab4b-4a82-8e05-9d1ab2d5feda a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 14:39:27 compute-0 nova_compute[250018]: 2026-01-20 14:39:27.757 250022 DEBUG nova.virt.libvirt.host [None req-4b6de632-ab4b-4a82-8e05-9d1ab2d5feda a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 14:39:27 compute-0 nova_compute[250018]: 2026-01-20 14:39:27.757 250022 DEBUG nova.virt.libvirt.host [None req-4b6de632-ab4b-4a82-8e05-9d1ab2d5feda a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 14:39:27 compute-0 nova_compute[250018]: 2026-01-20 14:39:27.758 250022 DEBUG nova.virt.libvirt.driver [None req-4b6de632-ab4b-4a82-8e05-9d1ab2d5feda a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 14:39:27 compute-0 nova_compute[250018]: 2026-01-20 14:39:27.758 250022 DEBUG nova.virt.hardware [None req-4b6de632-ab4b-4a82-8e05-9d1ab2d5feda a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 14:39:27 compute-0 nova_compute[250018]: 2026-01-20 14:39:27.759 250022 DEBUG nova.virt.hardware [None req-4b6de632-ab4b-4a82-8e05-9d1ab2d5feda a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 14:39:27 compute-0 nova_compute[250018]: 2026-01-20 14:39:27.759 250022 DEBUG nova.virt.hardware [None req-4b6de632-ab4b-4a82-8e05-9d1ab2d5feda a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 14:39:27 compute-0 nova_compute[250018]: 2026-01-20 14:39:27.759 250022 DEBUG nova.virt.hardware [None req-4b6de632-ab4b-4a82-8e05-9d1ab2d5feda a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 14:39:27 compute-0 nova_compute[250018]: 2026-01-20 14:39:27.760 250022 DEBUG nova.virt.hardware [None req-4b6de632-ab4b-4a82-8e05-9d1ab2d5feda a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 14:39:27 compute-0 nova_compute[250018]: 2026-01-20 14:39:27.760 250022 DEBUG nova.virt.hardware [None req-4b6de632-ab4b-4a82-8e05-9d1ab2d5feda a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 14:39:27 compute-0 nova_compute[250018]: 2026-01-20 14:39:27.760 250022 DEBUG nova.virt.hardware [None req-4b6de632-ab4b-4a82-8e05-9d1ab2d5feda a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 14:39:27 compute-0 nova_compute[250018]: 2026-01-20 14:39:27.760 250022 DEBUG nova.virt.hardware [None req-4b6de632-ab4b-4a82-8e05-9d1ab2d5feda a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 14:39:27 compute-0 nova_compute[250018]: 2026-01-20 14:39:27.760 250022 DEBUG nova.virt.hardware [None req-4b6de632-ab4b-4a82-8e05-9d1ab2d5feda a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 14:39:27 compute-0 nova_compute[250018]: 2026-01-20 14:39:27.761 250022 DEBUG nova.virt.hardware [None req-4b6de632-ab4b-4a82-8e05-9d1ab2d5feda a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 14:39:27 compute-0 nova_compute[250018]: 2026-01-20 14:39:27.761 250022 DEBUG nova.virt.hardware [None req-4b6de632-ab4b-4a82-8e05-9d1ab2d5feda a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 14:39:27 compute-0 nova_compute[250018]: 2026-01-20 14:39:27.761 250022 DEBUG nova.objects.instance [None req-4b6de632-ab4b-4a82-8e05-9d1ab2d5feda a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 472d2aca-ddcf-4c68-87d9-c9fe623fae5e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:39:27 compute-0 nova_compute[250018]: 2026-01-20 14:39:27.775 250022 DEBUG oslo_concurrency.processutils [None req-4b6de632-ab4b-4a82-8e05-9d1ab2d5feda a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:39:27 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:39:28 compute-0 podman[290432]: 2026-01-20 14:39:28.046313411 +0000 UTC m=+0.041746057 container create c4fd6d62c90d9ed2f7ef0749de7436bfa395f9979122e46dc3f2d0ab703ceba9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_elbakyan, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:39:28 compute-0 nova_compute[250018]: 2026-01-20 14:39:28.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:39:28 compute-0 systemd[1]: Started libpod-conmon-c4fd6d62c90d9ed2f7ef0749de7436bfa395f9979122e46dc3f2d0ab703ceba9.scope.
Jan 20 14:39:28 compute-0 nova_compute[250018]: 2026-01-20 14:39:28.078 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:39:28 compute-0 nova_compute[250018]: 2026-01-20 14:39:28.079 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:39:28 compute-0 nova_compute[250018]: 2026-01-20 14:39:28.080 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:39:28 compute-0 nova_compute[250018]: 2026-01-20 14:39:28.080 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 14:39:28 compute-0 nova_compute[250018]: 2026-01-20 14:39:28.080 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:39:28 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:39:28 compute-0 podman[290432]: 2026-01-20 14:39:28.119427553 +0000 UTC m=+0.114860229 container init c4fd6d62c90d9ed2f7ef0749de7436bfa395f9979122e46dc3f2d0ab703ceba9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_elbakyan, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:39:28 compute-0 podman[290432]: 2026-01-20 14:39:28.026660201 +0000 UTC m=+0.022092867 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:39:28 compute-0 podman[290432]: 2026-01-20 14:39:28.127910091 +0000 UTC m=+0.123342737 container start c4fd6d62c90d9ed2f7ef0749de7436bfa395f9979122e46dc3f2d0ab703ceba9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_elbakyan, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 20 14:39:28 compute-0 podman[290432]: 2026-01-20 14:39:28.132075274 +0000 UTC m=+0.127507910 container attach c4fd6d62c90d9ed2f7ef0749de7436bfa395f9979122e46dc3f2d0ab703ceba9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_elbakyan, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 20 14:39:28 compute-0 gifted_elbakyan[290448]: 167 167
Jan 20 14:39:28 compute-0 systemd[1]: libpod-c4fd6d62c90d9ed2f7ef0749de7436bfa395f9979122e46dc3f2d0ab703ceba9.scope: Deactivated successfully.
Jan 20 14:39:28 compute-0 podman[290432]: 2026-01-20 14:39:28.135277011 +0000 UTC m=+0.130709657 container died c4fd6d62c90d9ed2f7ef0749de7436bfa395f9979122e46dc3f2d0ab703ceba9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_elbakyan, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 14:39:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-dbc06e95c7ab392fe7fa30601616c489fb376681460e0c5a20b71c1690b9531f-merged.mount: Deactivated successfully.
Jan 20 14:39:28 compute-0 podman[290432]: 2026-01-20 14:39:28.174134388 +0000 UTC m=+0.169567034 container remove c4fd6d62c90d9ed2f7ef0749de7436bfa395f9979122e46dc3f2d0ab703ceba9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_elbakyan, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 20 14:39:28 compute-0 systemd[1]: libpod-conmon-c4fd6d62c90d9ed2f7ef0749de7436bfa395f9979122e46dc3f2d0ab703ceba9.scope: Deactivated successfully.
Jan 20 14:39:28 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:39:28 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/148360181' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:39:28 compute-0 nova_compute[250018]: 2026-01-20 14:39:28.249 250022 DEBUG oslo_concurrency.processutils [None req-4b6de632-ab4b-4a82-8e05-9d1ab2d5feda a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:39:28 compute-0 nova_compute[250018]: 2026-01-20 14:39:28.294 250022 DEBUG oslo_concurrency.processutils [None req-4b6de632-ab4b-4a82-8e05-9d1ab2d5feda a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:39:28 compute-0 podman[290512]: 2026-01-20 14:39:28.337838012 +0000 UTC m=+0.038177080 container create cacbbefb5224bab271e7513705051145a6983a45835b43731c1d889942263258 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_hertz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:39:28 compute-0 systemd[1]: Started libpod-conmon-cacbbefb5224bab271e7513705051145a6983a45835b43731c1d889942263258.scope.
Jan 20 14:39:28 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:39:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d4299318adf539ebc022c98ab52e0aaf187f3d7b88fcb0dcd8c976ccd654155/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:39:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d4299318adf539ebc022c98ab52e0aaf187f3d7b88fcb0dcd8c976ccd654155/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:39:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d4299318adf539ebc022c98ab52e0aaf187f3d7b88fcb0dcd8c976ccd654155/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:39:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d4299318adf539ebc022c98ab52e0aaf187f3d7b88fcb0dcd8c976ccd654155/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:39:28 compute-0 podman[290512]: 2026-01-20 14:39:28.321277105 +0000 UTC m=+0.021616193 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:39:28 compute-0 podman[290512]: 2026-01-20 14:39:28.424980102 +0000 UTC m=+0.125319190 container init cacbbefb5224bab271e7513705051145a6983a45835b43731c1d889942263258 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_hertz, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2)
Jan 20 14:39:28 compute-0 podman[290512]: 2026-01-20 14:39:28.430257204 +0000 UTC m=+0.130596272 container start cacbbefb5224bab271e7513705051145a6983a45835b43731c1d889942263258 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_hertz, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 20 14:39:28 compute-0 podman[290512]: 2026-01-20 14:39:28.433937714 +0000 UTC m=+0.134276802 container attach cacbbefb5224bab271e7513705051145a6983a45835b43731c1d889942263258 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_hertz, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:39:28 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:39:28 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:39:28 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:39:28.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:39:28 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:39:28 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/148624311' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:39:28 compute-0 nova_compute[250018]: 2026-01-20 14:39:28.556 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:39:28 compute-0 nova_compute[250018]: 2026-01-20 14:39:28.628 250022 DEBUG nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] skipping disk for instance-0000003d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:39:28 compute-0 nova_compute[250018]: 2026-01-20 14:39:28.629 250022 DEBUG nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] skipping disk for instance-0000003d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:39:28 compute-0 nova_compute[250018]: 2026-01-20 14:39:28.762 250022 WARNING nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:39:28 compute-0 nova_compute[250018]: 2026-01-20 14:39:28.763 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4237MB free_disk=20.83075714111328GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 14:39:28 compute-0 nova_compute[250018]: 2026-01-20 14:39:28.764 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:39:28 compute-0 nova_compute[250018]: 2026-01-20 14:39:28.764 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:39:28 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:39:28 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1337503018' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:39:28 compute-0 ceph-mon[74360]: pgmap v1533: 321 pgs: 321 active+clean; 372 MiB data, 750 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.8 MiB/s wr, 153 op/s
Jan 20 14:39:28 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/148360181' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:39:28 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/148624311' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:39:28 compute-0 nova_compute[250018]: 2026-01-20 14:39:28.797 250022 DEBUG oslo_concurrency.processutils [None req-4b6de632-ab4b-4a82-8e05-9d1ab2d5feda a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:39:28 compute-0 nova_compute[250018]: 2026-01-20 14:39:28.798 250022 DEBUG nova.virt.libvirt.vif [None req-4b6de632-ab4b-4a82-8e05-9d1ab2d5feda a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:39:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-611554348',display_name='tempest-SecurityGroupsTestJSON-server-611554348',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-611554348',id=65,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:39:19Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ebadba01dc3642a9a3e39469ff5d4708',ramdisk_id='',reservation_id='r-q6l3neim',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-SecurityGroupsTestJSON-1750338240',owner_user_name='tempest-SecurityGroupsTestJSON-1750338240-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T14:39:27Z,user_data=None,user_id='a3274e8540014ffa8cd910526cd964f7',uuid=472d2aca-ddcf-4c68-87d9-c9fe623fae5e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "100a89bb-a5c7-4e59-a9d5-500f5781d5ee", "address": "fa:16:3e:3f:7c:5d", "network": {"id": "dece58f4-881e-47c7-961b-4367a4c3c21a", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1623251841-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ebadba01dc3642a9a3e39469ff5d4708", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap100a89bb-a5", "ovs_interfaceid": "100a89bb-a5c7-4e59-a9d5-500f5781d5ee", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 14:39:28 compute-0 nova_compute[250018]: 2026-01-20 14:39:28.798 250022 DEBUG nova.network.os_vif_util [None req-4b6de632-ab4b-4a82-8e05-9d1ab2d5feda a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Converting VIF {"id": "100a89bb-a5c7-4e59-a9d5-500f5781d5ee", "address": "fa:16:3e:3f:7c:5d", "network": {"id": "dece58f4-881e-47c7-961b-4367a4c3c21a", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1623251841-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ebadba01dc3642a9a3e39469ff5d4708", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap100a89bb-a5", "ovs_interfaceid": "100a89bb-a5c7-4e59-a9d5-500f5781d5ee", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:39:28 compute-0 nova_compute[250018]: 2026-01-20 14:39:28.799 250022 DEBUG nova.network.os_vif_util [None req-4b6de632-ab4b-4a82-8e05-9d1ab2d5feda a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:3f:7c:5d,bridge_name='br-int',has_traffic_filtering=True,id=100a89bb-a5c7-4e59-a9d5-500f5781d5ee,network=Network(dece58f4-881e-47c7-961b-4367a4c3c21a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap100a89bb-a5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:39:28 compute-0 nova_compute[250018]: 2026-01-20 14:39:28.800 250022 DEBUG nova.objects.instance [None req-4b6de632-ab4b-4a82-8e05-9d1ab2d5feda a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Lazy-loading 'pci_devices' on Instance uuid 472d2aca-ddcf-4c68-87d9-c9fe623fae5e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:39:28 compute-0 nova_compute[250018]: 2026-01-20 14:39:28.838 250022 DEBUG nova.virt.libvirt.driver [None req-4b6de632-ab4b-4a82-8e05-9d1ab2d5feda a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] [instance: 472d2aca-ddcf-4c68-87d9-c9fe623fae5e] End _get_guest_xml xml=<domain type="kvm">
Jan 20 14:39:28 compute-0 nova_compute[250018]:   <uuid>472d2aca-ddcf-4c68-87d9-c9fe623fae5e</uuid>
Jan 20 14:39:28 compute-0 nova_compute[250018]:   <name>instance-00000041</name>
Jan 20 14:39:28 compute-0 nova_compute[250018]:   <memory>131072</memory>
Jan 20 14:39:28 compute-0 nova_compute[250018]:   <vcpu>1</vcpu>
Jan 20 14:39:28 compute-0 nova_compute[250018]:   <metadata>
Jan 20 14:39:28 compute-0 nova_compute[250018]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 14:39:28 compute-0 nova_compute[250018]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 14:39:28 compute-0 nova_compute[250018]:       <nova:name>tempest-SecurityGroupsTestJSON-server-611554348</nova:name>
Jan 20 14:39:28 compute-0 nova_compute[250018]:       <nova:creationTime>2026-01-20 14:39:27</nova:creationTime>
Jan 20 14:39:28 compute-0 nova_compute[250018]:       <nova:flavor name="m1.nano">
Jan 20 14:39:28 compute-0 nova_compute[250018]:         <nova:memory>128</nova:memory>
Jan 20 14:39:28 compute-0 nova_compute[250018]:         <nova:disk>1</nova:disk>
Jan 20 14:39:28 compute-0 nova_compute[250018]:         <nova:swap>0</nova:swap>
Jan 20 14:39:28 compute-0 nova_compute[250018]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 14:39:28 compute-0 nova_compute[250018]:         <nova:vcpus>1</nova:vcpus>
Jan 20 14:39:28 compute-0 nova_compute[250018]:       </nova:flavor>
Jan 20 14:39:28 compute-0 nova_compute[250018]:       <nova:owner>
Jan 20 14:39:28 compute-0 nova_compute[250018]:         <nova:user uuid="a3274e8540014ffa8cd910526cd964f7">tempest-SecurityGroupsTestJSON-1750338240-project-member</nova:user>
Jan 20 14:39:28 compute-0 nova_compute[250018]:         <nova:project uuid="ebadba01dc3642a9a3e39469ff5d4708">tempest-SecurityGroupsTestJSON-1750338240</nova:project>
Jan 20 14:39:28 compute-0 nova_compute[250018]:       </nova:owner>
Jan 20 14:39:28 compute-0 nova_compute[250018]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 14:39:28 compute-0 nova_compute[250018]:       <nova:ports>
Jan 20 14:39:28 compute-0 nova_compute[250018]:         <nova:port uuid="100a89bb-a5c7-4e59-a9d5-500f5781d5ee">
Jan 20 14:39:28 compute-0 nova_compute[250018]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Jan 20 14:39:28 compute-0 nova_compute[250018]:         </nova:port>
Jan 20 14:39:28 compute-0 nova_compute[250018]:       </nova:ports>
Jan 20 14:39:28 compute-0 nova_compute[250018]:     </nova:instance>
Jan 20 14:39:28 compute-0 nova_compute[250018]:   </metadata>
Jan 20 14:39:28 compute-0 nova_compute[250018]:   <sysinfo type="smbios">
Jan 20 14:39:28 compute-0 nova_compute[250018]:     <system>
Jan 20 14:39:28 compute-0 nova_compute[250018]:       <entry name="manufacturer">RDO</entry>
Jan 20 14:39:28 compute-0 nova_compute[250018]:       <entry name="product">OpenStack Compute</entry>
Jan 20 14:39:28 compute-0 nova_compute[250018]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 14:39:28 compute-0 nova_compute[250018]:       <entry name="serial">472d2aca-ddcf-4c68-87d9-c9fe623fae5e</entry>
Jan 20 14:39:28 compute-0 nova_compute[250018]:       <entry name="uuid">472d2aca-ddcf-4c68-87d9-c9fe623fae5e</entry>
Jan 20 14:39:28 compute-0 nova_compute[250018]:       <entry name="family">Virtual Machine</entry>
Jan 20 14:39:28 compute-0 nova_compute[250018]:     </system>
Jan 20 14:39:28 compute-0 nova_compute[250018]:   </sysinfo>
Jan 20 14:39:28 compute-0 nova_compute[250018]:   <os>
Jan 20 14:39:28 compute-0 nova_compute[250018]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 14:39:28 compute-0 nova_compute[250018]:     <boot dev="hd"/>
Jan 20 14:39:28 compute-0 nova_compute[250018]:     <smbios mode="sysinfo"/>
Jan 20 14:39:28 compute-0 nova_compute[250018]:   </os>
Jan 20 14:39:28 compute-0 nova_compute[250018]:   <features>
Jan 20 14:39:28 compute-0 nova_compute[250018]:     <acpi/>
Jan 20 14:39:28 compute-0 nova_compute[250018]:     <apic/>
Jan 20 14:39:28 compute-0 nova_compute[250018]:     <vmcoreinfo/>
Jan 20 14:39:28 compute-0 nova_compute[250018]:   </features>
Jan 20 14:39:28 compute-0 nova_compute[250018]:   <clock offset="utc">
Jan 20 14:39:28 compute-0 nova_compute[250018]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 14:39:28 compute-0 nova_compute[250018]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 14:39:28 compute-0 nova_compute[250018]:     <timer name="hpet" present="no"/>
Jan 20 14:39:28 compute-0 nova_compute[250018]:   </clock>
Jan 20 14:39:28 compute-0 nova_compute[250018]:   <cpu mode="custom" match="exact">
Jan 20 14:39:28 compute-0 nova_compute[250018]:     <model>Nehalem</model>
Jan 20 14:39:28 compute-0 nova_compute[250018]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 14:39:28 compute-0 nova_compute[250018]:   </cpu>
Jan 20 14:39:28 compute-0 nova_compute[250018]:   <devices>
Jan 20 14:39:28 compute-0 nova_compute[250018]:     <disk type="network" device="disk">
Jan 20 14:39:28 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 14:39:28 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/472d2aca-ddcf-4c68-87d9-c9fe623fae5e_disk">
Jan 20 14:39:28 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:39:28 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:39:28 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:39:28 compute-0 nova_compute[250018]:       </source>
Jan 20 14:39:28 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 14:39:28 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:39:28 compute-0 nova_compute[250018]:       </auth>
Jan 20 14:39:28 compute-0 nova_compute[250018]:       <target dev="vda" bus="virtio"/>
Jan 20 14:39:28 compute-0 nova_compute[250018]:     </disk>
Jan 20 14:39:28 compute-0 nova_compute[250018]:     <disk type="network" device="cdrom">
Jan 20 14:39:28 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 14:39:28 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/472d2aca-ddcf-4c68-87d9-c9fe623fae5e_disk.config">
Jan 20 14:39:28 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:39:28 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:39:28 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:39:28 compute-0 nova_compute[250018]:       </source>
Jan 20 14:39:28 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 14:39:28 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:39:28 compute-0 nova_compute[250018]:       </auth>
Jan 20 14:39:28 compute-0 nova_compute[250018]:       <target dev="sda" bus="sata"/>
Jan 20 14:39:28 compute-0 nova_compute[250018]:     </disk>
Jan 20 14:39:28 compute-0 nova_compute[250018]:     <interface type="ethernet">
Jan 20 14:39:28 compute-0 nova_compute[250018]:       <mac address="fa:16:3e:3f:7c:5d"/>
Jan 20 14:39:28 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 14:39:28 compute-0 nova_compute[250018]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 14:39:28 compute-0 nova_compute[250018]:       <mtu size="1442"/>
Jan 20 14:39:28 compute-0 nova_compute[250018]:       <target dev="tap100a89bb-a5"/>
Jan 20 14:39:28 compute-0 nova_compute[250018]:     </interface>
Jan 20 14:39:28 compute-0 nova_compute[250018]:     <serial type="pty">
Jan 20 14:39:28 compute-0 nova_compute[250018]:       <log file="/var/lib/nova/instances/472d2aca-ddcf-4c68-87d9-c9fe623fae5e/console.log" append="off"/>
Jan 20 14:39:28 compute-0 nova_compute[250018]:     </serial>
Jan 20 14:39:28 compute-0 nova_compute[250018]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 14:39:28 compute-0 nova_compute[250018]:     <video>
Jan 20 14:39:28 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 14:39:28 compute-0 nova_compute[250018]:     </video>
Jan 20 14:39:28 compute-0 nova_compute[250018]:     <input type="tablet" bus="usb"/>
Jan 20 14:39:28 compute-0 nova_compute[250018]:     <input type="keyboard" bus="usb"/>
Jan 20 14:39:28 compute-0 nova_compute[250018]:     <rng model="virtio">
Jan 20 14:39:28 compute-0 nova_compute[250018]:       <backend model="random">/dev/urandom</backend>
Jan 20 14:39:28 compute-0 nova_compute[250018]:     </rng>
Jan 20 14:39:28 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root"/>
Jan 20 14:39:28 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:39:28 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:39:28 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:39:28 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:39:28 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:39:28 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:39:28 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:39:28 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:39:28 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:39:28 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:39:28 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:39:28 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:39:28 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:39:28 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:39:28 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:39:28 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:39:28 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:39:28 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:39:28 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:39:28 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:39:28 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:39:28 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:39:28 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:39:28 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:39:28 compute-0 nova_compute[250018]:     <controller type="usb" index="0"/>
Jan 20 14:39:28 compute-0 nova_compute[250018]:     <memballoon model="virtio">
Jan 20 14:39:28 compute-0 nova_compute[250018]:       <stats period="10"/>
Jan 20 14:39:28 compute-0 nova_compute[250018]:     </memballoon>
Jan 20 14:39:28 compute-0 nova_compute[250018]:   </devices>
Jan 20 14:39:28 compute-0 nova_compute[250018]: </domain>
Jan 20 14:39:28 compute-0 nova_compute[250018]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 14:39:28 compute-0 nova_compute[250018]: 2026-01-20 14:39:28.844 250022 DEBUG nova.virt.libvirt.driver [None req-4b6de632-ab4b-4a82-8e05-9d1ab2d5feda a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] skipping disk for instance-00000041 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:39:28 compute-0 nova_compute[250018]: 2026-01-20 14:39:28.844 250022 DEBUG nova.virt.libvirt.driver [None req-4b6de632-ab4b-4a82-8e05-9d1ab2d5feda a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] skipping disk for instance-00000041 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:39:28 compute-0 nova_compute[250018]: 2026-01-20 14:39:28.845 250022 DEBUG nova.virt.libvirt.vif [None req-4b6de632-ab4b-4a82-8e05-9d1ab2d5feda a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:39:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-611554348',display_name='tempest-SecurityGroupsTestJSON-server-611554348',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-611554348',id=65,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:39:19Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=1,progress=0,project_id='ebadba01dc3642a9a3e39469ff5d4708',ramdisk_id='',reservation_id='r-q6l3neim',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio'
,image_min_disk='1',image_min_ram='0',owner_project_name='tempest-SecurityGroupsTestJSON-1750338240',owner_user_name='tempest-SecurityGroupsTestJSON-1750338240-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T14:39:27Z,user_data=None,user_id='a3274e8540014ffa8cd910526cd964f7',uuid=472d2aca-ddcf-4c68-87d9-c9fe623fae5e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "100a89bb-a5c7-4e59-a9d5-500f5781d5ee", "address": "fa:16:3e:3f:7c:5d", "network": {"id": "dece58f4-881e-47c7-961b-4367a4c3c21a", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1623251841-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ebadba01dc3642a9a3e39469ff5d4708", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap100a89bb-a5", "ovs_interfaceid": "100a89bb-a5c7-4e59-a9d5-500f5781d5ee", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 14:39:28 compute-0 nova_compute[250018]: 2026-01-20 14:39:28.846 250022 DEBUG nova.network.os_vif_util [None req-4b6de632-ab4b-4a82-8e05-9d1ab2d5feda a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Converting VIF {"id": "100a89bb-a5c7-4e59-a9d5-500f5781d5ee", "address": "fa:16:3e:3f:7c:5d", "network": {"id": "dece58f4-881e-47c7-961b-4367a4c3c21a", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1623251841-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ebadba01dc3642a9a3e39469ff5d4708", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap100a89bb-a5", "ovs_interfaceid": "100a89bb-a5c7-4e59-a9d5-500f5781d5ee", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:39:28 compute-0 nova_compute[250018]: 2026-01-20 14:39:28.846 250022 DEBUG nova.network.os_vif_util [None req-4b6de632-ab4b-4a82-8e05-9d1ab2d5feda a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:3f:7c:5d,bridge_name='br-int',has_traffic_filtering=True,id=100a89bb-a5c7-4e59-a9d5-500f5781d5ee,network=Network(dece58f4-881e-47c7-961b-4367a4c3c21a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap100a89bb-a5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:39:28 compute-0 nova_compute[250018]: 2026-01-20 14:39:28.847 250022 DEBUG os_vif [None req-4b6de632-ab4b-4a82-8e05-9d1ab2d5feda a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:3f:7c:5d,bridge_name='br-int',has_traffic_filtering=True,id=100a89bb-a5c7-4e59-a9d5-500f5781d5ee,network=Network(dece58f4-881e-47c7-961b-4367a4c3c21a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap100a89bb-a5') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 14:39:28 compute-0 nova_compute[250018]: 2026-01-20 14:39:28.848 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:39:28 compute-0 nova_compute[250018]: 2026-01-20 14:39:28.848 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:39:28 compute-0 nova_compute[250018]: 2026-01-20 14:39:28.849 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:39:28 compute-0 nova_compute[250018]: 2026-01-20 14:39:28.855 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:39:28 compute-0 nova_compute[250018]: 2026-01-20 14:39:28.855 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap100a89bb-a5, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:39:28 compute-0 nova_compute[250018]: 2026-01-20 14:39:28.856 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap100a89bb-a5, col_values=(('external_ids', {'iface-id': '100a89bb-a5c7-4e59-a9d5-500f5781d5ee', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3f:7c:5d', 'vm-uuid': '472d2aca-ddcf-4c68-87d9-c9fe623fae5e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:39:28 compute-0 NetworkManager[48960]: <info>  [1768919968.8583] manager: (tap100a89bb-a5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/103)
Jan 20 14:39:28 compute-0 nova_compute[250018]: 2026-01-20 14:39:28.859 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:39:28 compute-0 nova_compute[250018]: 2026-01-20 14:39:28.861 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:39:28 compute-0 nova_compute[250018]: 2026-01-20 14:39:28.866 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:39:28 compute-0 nova_compute[250018]: 2026-01-20 14:39:28.867 250022 INFO os_vif [None req-4b6de632-ab4b-4a82-8e05-9d1ab2d5feda a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:3f:7c:5d,bridge_name='br-int',has_traffic_filtering=True,id=100a89bb-a5c7-4e59-a9d5-500f5781d5ee,network=Network(dece58f4-881e-47c7-961b-4367a4c3c21a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap100a89bb-a5')
Jan 20 14:39:28 compute-0 nova_compute[250018]: 2026-01-20 14:39:28.899 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Instance 0c6693f6-c588-4a64-86ee-cf44a6a36260 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 14:39:28 compute-0 nova_compute[250018]: 2026-01-20 14:39:28.900 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Instance 472d2aca-ddcf-4c68-87d9-c9fe623fae5e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 14:39:28 compute-0 nova_compute[250018]: 2026-01-20 14:39:28.900 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 14:39:28 compute-0 nova_compute[250018]: 2026-01-20 14:39:28.900 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 14:39:28 compute-0 kernel: tap100a89bb-a5: entered promiscuous mode
Jan 20 14:39:28 compute-0 NetworkManager[48960]: <info>  [1768919968.9252] manager: (tap100a89bb-a5): new Tun device (/org/freedesktop/NetworkManager/Devices/104)
Jan 20 14:39:28 compute-0 systemd-udevd[290284]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:39:28 compute-0 NetworkManager[48960]: <info>  [1768919968.9450] device (tap100a89bb-a5): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 14:39:28 compute-0 NetworkManager[48960]: <info>  [1768919968.9457] device (tap100a89bb-a5): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 14:39:28 compute-0 ovn_controller[148666]: 2026-01-20T14:39:28Z|00192|binding|INFO|Claiming lport 100a89bb-a5c7-4e59-a9d5-500f5781d5ee for this chassis.
Jan 20 14:39:28 compute-0 ovn_controller[148666]: 2026-01-20T14:39:28Z|00193|binding|INFO|100a89bb-a5c7-4e59-a9d5-500f5781d5ee: Claiming fa:16:3e:3f:7c:5d 10.100.0.5
Jan 20 14:39:28 compute-0 nova_compute[250018]: 2026-01-20 14:39:28.948 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:39:28 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:39:28.953 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3f:7c:5d 10.100.0.5'], port_security=['fa:16:3e:3f:7c:5d 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '472d2aca-ddcf-4c68-87d9-c9fe623fae5e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-dece58f4-881e-47c7-961b-4367a4c3c21a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ebadba01dc3642a9a3e39469ff5d4708', 'neutron:revision_number': '6', 'neutron:security_group_ids': '5567989a-fdc9-4eba-934a-717bea3108fe be541bb3-3f03-4ac6-a46b-7651e33995f4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2446ce7a-4f1f-4833-83c5-907fbc775260, chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=100a89bb-a5c7-4e59-a9d5-500f5781d5ee) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:39:28 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:39:28.954 160071 INFO neutron.agent.ovn.metadata.agent [-] Port 100a89bb-a5c7-4e59-a9d5-500f5781d5ee in datapath dece58f4-881e-47c7-961b-4367a4c3c21a bound to our chassis
Jan 20 14:39:28 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:39:28.956 160071 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network dece58f4-881e-47c7-961b-4367a4c3c21a
Jan 20 14:39:28 compute-0 ovn_controller[148666]: 2026-01-20T14:39:28Z|00194|binding|INFO|Setting lport 100a89bb-a5c7-4e59-a9d5-500f5781d5ee ovn-installed in OVS
Jan 20 14:39:28 compute-0 ovn_controller[148666]: 2026-01-20T14:39:28Z|00195|binding|INFO|Setting lport 100a89bb-a5c7-4e59-a9d5-500f5781d5ee up in Southbound
Jan 20 14:39:28 compute-0 nova_compute[250018]: 2026-01-20 14:39:28.967 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:39:28 compute-0 nova_compute[250018]: 2026-01-20 14:39:28.971 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:39:28 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:39:28.973 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[2e8af046-19b3-4a0a-afd8-916941778322]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:39:28 compute-0 systemd-machined[216401]: New machine qemu-29-instance-00000041.
Jan 20 14:39:29 compute-0 systemd[1]: Started Virtual Machine qemu-29-instance-00000041.
Jan 20 14:39:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:39:29.006 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[8b374060-b0b8-451d-8806-801621c8aaa3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:39:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:39:29.010 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[f2da95dd-9edc-4cd1-9166-7d553eb7f624]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:39:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:39:29.042 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[f76d193c-b279-4b7c-b51c-1886c3801f46]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:39:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:39:29.076 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[ebf0b0f2-fffb-4a0c-90b8-76ad7e37fd9d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdece58f4-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9a:e0:be'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 10, 'rx_bytes': 616, 'tx_bytes': 612, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 10, 'rx_bytes': 616, 'tx_bytes': 612, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 60], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 587325, 'reachable_time': 37173, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 290587, 'error': None, 'target': 'ovnmeta-dece58f4-881e-47c7-961b-4367a4c3c21a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:39:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:39:29.092 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[631973b0-4388-4b48-a8b1-fe5b89df00f2]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapdece58f4-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 587336, 'tstamp': 587336}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 290589, 'error': None, 'target': 'ovnmeta-dece58f4-881e-47c7-961b-4367a4c3c21a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapdece58f4-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 587338, 'tstamp': 587338}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 290589, 'error': None, 'target': 'ovnmeta-dece58f4-881e-47c7-961b-4367a4c3c21a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:39:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:39:29.094 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdece58f4-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:39:29 compute-0 nova_compute[250018]: 2026-01-20 14:39:29.096 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:39:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:39:29.099 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdece58f4-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:39:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:39:29.099 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:39:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:39:29.099 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapdece58f4-80, col_values=(('external_ids', {'iface-id': 'e7065b67-177e-47d4-a18c-921ae1c77ad4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:39:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:39:29.100 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:39:29 compute-0 nova_compute[250018]: 2026-01-20 14:39:29.102 250022 DEBUG nova.compute.manager [req-8bd753c7-e52d-4a45-8854-7a456e5c115e req-6c91e4c9-8af2-4725-8b69-074e2c4cfff4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 472d2aca-ddcf-4c68-87d9-c9fe623fae5e] Received event network-vif-unplugged-100a89bb-a5c7-4e59-a9d5-500f5781d5ee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:39:29 compute-0 nova_compute[250018]: 2026-01-20 14:39:29.102 250022 DEBUG oslo_concurrency.lockutils [req-8bd753c7-e52d-4a45-8854-7a456e5c115e req-6c91e4c9-8af2-4725-8b69-074e2c4cfff4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "472d2aca-ddcf-4c68-87d9-c9fe623fae5e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:39:29 compute-0 nova_compute[250018]: 2026-01-20 14:39:29.103 250022 DEBUG oslo_concurrency.lockutils [req-8bd753c7-e52d-4a45-8854-7a456e5c115e req-6c91e4c9-8af2-4725-8b69-074e2c4cfff4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "472d2aca-ddcf-4c68-87d9-c9fe623fae5e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:39:29 compute-0 nova_compute[250018]: 2026-01-20 14:39:29.103 250022 DEBUG oslo_concurrency.lockutils [req-8bd753c7-e52d-4a45-8854-7a456e5c115e req-6c91e4c9-8af2-4725-8b69-074e2c4cfff4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "472d2aca-ddcf-4c68-87d9-c9fe623fae5e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:39:29 compute-0 nova_compute[250018]: 2026-01-20 14:39:29.103 250022 DEBUG nova.compute.manager [req-8bd753c7-e52d-4a45-8854-7a456e5c115e req-6c91e4c9-8af2-4725-8b69-074e2c4cfff4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 472d2aca-ddcf-4c68-87d9-c9fe623fae5e] No waiting events found dispatching network-vif-unplugged-100a89bb-a5c7-4e59-a9d5-500f5781d5ee pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:39:29 compute-0 nova_compute[250018]: 2026-01-20 14:39:29.103 250022 WARNING nova.compute.manager [req-8bd753c7-e52d-4a45-8854-7a456e5c115e req-6c91e4c9-8af2-4725-8b69-074e2c4cfff4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 472d2aca-ddcf-4c68-87d9-c9fe623fae5e] Received unexpected event network-vif-unplugged-100a89bb-a5c7-4e59-a9d5-500f5781d5ee for instance with vm_state active and task_state reboot_started_hard.
Jan 20 14:39:29 compute-0 wonderful_hertz[290531]: {
Jan 20 14:39:29 compute-0 wonderful_hertz[290531]:     "afc3740a-ccee-46f8-b343-22648cc89376": {
Jan 20 14:39:29 compute-0 wonderful_hertz[290531]:         "ceph_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 14:39:29 compute-0 wonderful_hertz[290531]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 20 14:39:29 compute-0 wonderful_hertz[290531]:         "osd_id": 0,
Jan 20 14:39:29 compute-0 wonderful_hertz[290531]:         "osd_uuid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 14:39:29 compute-0 wonderful_hertz[290531]:         "type": "bluestore"
Jan 20 14:39:29 compute-0 wonderful_hertz[290531]:     }
Jan 20 14:39:29 compute-0 wonderful_hertz[290531]: }
Jan 20 14:39:29 compute-0 systemd[1]: libpod-cacbbefb5224bab271e7513705051145a6983a45835b43731c1d889942263258.scope: Deactivated successfully.
Jan 20 14:39:29 compute-0 podman[290512]: 2026-01-20 14:39:29.3683449 +0000 UTC m=+1.068683968 container died cacbbefb5224bab271e7513705051145a6983a45835b43731c1d889942263258 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_hertz, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 20 14:39:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-5d4299318adf539ebc022c98ab52e0aaf187f3d7b88fcb0dcd8c976ccd654155-merged.mount: Deactivated successfully.
Jan 20 14:39:29 compute-0 podman[290512]: 2026-01-20 14:39:29.436133599 +0000 UTC m=+1.136472667 container remove cacbbefb5224bab271e7513705051145a6983a45835b43731c1d889942263258 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_hertz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:39:29 compute-0 systemd[1]: libpod-conmon-cacbbefb5224bab271e7513705051145a6983a45835b43731c1d889942263258.scope: Deactivated successfully.
Jan 20 14:39:29 compute-0 sudo[290338]: pam_unix(sudo:session): session closed for user root
Jan 20 14:39:29 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 14:39:29 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:39:29 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 14:39:29 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:39:29 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 292955e3-ec12-4739-ace6-ae65d884ed81 does not exist
Jan 20 14:39:29 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev f2d1d0cc-8c6b-445a-979f-1af616f4b8ed does not exist
Jan 20 14:39:29 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:39:29 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1254287090' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:39:29 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 171fbfee-d5b7-4846-9533-bfe44cd570e6 does not exist
Jan 20 14:39:29 compute-0 nova_compute[250018]: 2026-01-20 14:39:29.516 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.546s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:39:29 compute-0 nova_compute[250018]: 2026-01-20 14:39:29.525 250022 DEBUG nova.compute.provider_tree [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:39:29 compute-0 nova_compute[250018]: 2026-01-20 14:39:29.551 250022 DEBUG nova.scheduler.client.report [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:39:29 compute-0 sudo[290637]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:39:29 compute-0 sudo[290637]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:39:29 compute-0 sudo[290637]: pam_unix(sudo:session): session closed for user root
Jan 20 14:39:29 compute-0 nova_compute[250018]: 2026-01-20 14:39:29.584 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 14:39:29 compute-0 nova_compute[250018]: 2026-01-20 14:39:29.585 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.820s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:39:29 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1534: 321 pgs: 321 active+clean; 372 MiB data, 750 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 143 op/s
Jan 20 14:39:29 compute-0 sudo[290663]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 14:39:29 compute-0 sudo[290663]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:39:29 compute-0 sudo[290663]: pam_unix(sudo:session): session closed for user root
Jan 20 14:39:29 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:39:29 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:39:29 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:39:29.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:39:29 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/1337503018' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:39:29 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/3331275778' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:39:29 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:39:29 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:39:29 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/1254287090' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:39:29 compute-0 nova_compute[250018]: 2026-01-20 14:39:29.792 250022 DEBUG nova.virt.libvirt.host [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Removed pending event for 472d2aca-ddcf-4c68-87d9-c9fe623fae5e due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Jan 20 14:39:29 compute-0 nova_compute[250018]: 2026-01-20 14:39:29.793 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768919969.7920709, 472d2aca-ddcf-4c68-87d9-c9fe623fae5e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:39:29 compute-0 nova_compute[250018]: 2026-01-20 14:39:29.793 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 472d2aca-ddcf-4c68-87d9-c9fe623fae5e] VM Resumed (Lifecycle Event)
Jan 20 14:39:29 compute-0 nova_compute[250018]: 2026-01-20 14:39:29.795 250022 DEBUG nova.compute.manager [None req-4b6de632-ab4b-4a82-8e05-9d1ab2d5feda a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] [instance: 472d2aca-ddcf-4c68-87d9-c9fe623fae5e] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 14:39:29 compute-0 nova_compute[250018]: 2026-01-20 14:39:29.800 250022 INFO nova.virt.libvirt.driver [-] [instance: 472d2aca-ddcf-4c68-87d9-c9fe623fae5e] Instance rebooted successfully.
Jan 20 14:39:29 compute-0 nova_compute[250018]: 2026-01-20 14:39:29.800 250022 DEBUG nova.compute.manager [None req-4b6de632-ab4b-4a82-8e05-9d1ab2d5feda a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] [instance: 472d2aca-ddcf-4c68-87d9-c9fe623fae5e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:39:29 compute-0 nova_compute[250018]: 2026-01-20 14:39:29.811 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 472d2aca-ddcf-4c68-87d9-c9fe623fae5e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:39:29 compute-0 nova_compute[250018]: 2026-01-20 14:39:29.814 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 472d2aca-ddcf-4c68-87d9-c9fe623fae5e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:39:29 compute-0 nova_compute[250018]: 2026-01-20 14:39:29.838 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 472d2aca-ddcf-4c68-87d9-c9fe623fae5e] During sync_power_state the instance has a pending task (reboot_started_hard). Skip.
Jan 20 14:39:29 compute-0 nova_compute[250018]: 2026-01-20 14:39:29.838 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768919969.7950308, 472d2aca-ddcf-4c68-87d9-c9fe623fae5e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:39:29 compute-0 nova_compute[250018]: 2026-01-20 14:39:29.839 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 472d2aca-ddcf-4c68-87d9-c9fe623fae5e] VM Started (Lifecycle Event)
Jan 20 14:39:29 compute-0 nova_compute[250018]: 2026-01-20 14:39:29.870 250022 DEBUG oslo_concurrency.lockutils [None req-4b6de632-ab4b-4a82-8e05-9d1ab2d5feda a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Lock "472d2aca-ddcf-4c68-87d9-c9fe623fae5e" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 8.928s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:39:29 compute-0 nova_compute[250018]: 2026-01-20 14:39:29.872 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 472d2aca-ddcf-4c68-87d9-c9fe623fae5e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:39:29 compute-0 nova_compute[250018]: 2026-01-20 14:39:29.875 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 472d2aca-ddcf-4c68-87d9-c9fe623fae5e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:39:30 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:39:30 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:39:30 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:39:30.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:39:30 compute-0 nova_compute[250018]: 2026-01-20 14:39:30.586 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:39:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:39:30.752 160071 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:39:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:39:30.752 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:39:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:39:30.753 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:39:30 compute-0 ceph-mon[74360]: pgmap v1534: 321 pgs: 321 active+clean; 372 MiB data, 750 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 143 op/s
Jan 20 14:39:30 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/837437866' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:39:31 compute-0 nova_compute[250018]: 2026-01-20 14:39:31.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:39:31 compute-0 nova_compute[250018]: 2026-01-20 14:39:31.051 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 14:39:31 compute-0 nova_compute[250018]: 2026-01-20 14:39:31.239 250022 DEBUG nova.compute.manager [req-5567d833-541e-4ddc-9254-13c63f724321 req-2f09011f-d27c-4140-a389-8f7136e7da16 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 472d2aca-ddcf-4c68-87d9-c9fe623fae5e] Received event network-vif-plugged-100a89bb-a5c7-4e59-a9d5-500f5781d5ee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:39:31 compute-0 nova_compute[250018]: 2026-01-20 14:39:31.239 250022 DEBUG oslo_concurrency.lockutils [req-5567d833-541e-4ddc-9254-13c63f724321 req-2f09011f-d27c-4140-a389-8f7136e7da16 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "472d2aca-ddcf-4c68-87d9-c9fe623fae5e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:39:31 compute-0 nova_compute[250018]: 2026-01-20 14:39:31.239 250022 DEBUG oslo_concurrency.lockutils [req-5567d833-541e-4ddc-9254-13c63f724321 req-2f09011f-d27c-4140-a389-8f7136e7da16 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "472d2aca-ddcf-4c68-87d9-c9fe623fae5e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:39:31 compute-0 nova_compute[250018]: 2026-01-20 14:39:31.239 250022 DEBUG oslo_concurrency.lockutils [req-5567d833-541e-4ddc-9254-13c63f724321 req-2f09011f-d27c-4140-a389-8f7136e7da16 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "472d2aca-ddcf-4c68-87d9-c9fe623fae5e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:39:31 compute-0 nova_compute[250018]: 2026-01-20 14:39:31.240 250022 DEBUG nova.compute.manager [req-5567d833-541e-4ddc-9254-13c63f724321 req-2f09011f-d27c-4140-a389-8f7136e7da16 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 472d2aca-ddcf-4c68-87d9-c9fe623fae5e] No waiting events found dispatching network-vif-plugged-100a89bb-a5c7-4e59-a9d5-500f5781d5ee pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:39:31 compute-0 nova_compute[250018]: 2026-01-20 14:39:31.240 250022 WARNING nova.compute.manager [req-5567d833-541e-4ddc-9254-13c63f724321 req-2f09011f-d27c-4140-a389-8f7136e7da16 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 472d2aca-ddcf-4c68-87d9-c9fe623fae5e] Received unexpected event network-vif-plugged-100a89bb-a5c7-4e59-a9d5-500f5781d5ee for instance with vm_state active and task_state None.
Jan 20 14:39:31 compute-0 nova_compute[250018]: 2026-01-20 14:39:31.240 250022 DEBUG nova.compute.manager [req-5567d833-541e-4ddc-9254-13c63f724321 req-2f09011f-d27c-4140-a389-8f7136e7da16 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 472d2aca-ddcf-4c68-87d9-c9fe623fae5e] Received event network-vif-plugged-100a89bb-a5c7-4e59-a9d5-500f5781d5ee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:39:31 compute-0 nova_compute[250018]: 2026-01-20 14:39:31.240 250022 DEBUG oslo_concurrency.lockutils [req-5567d833-541e-4ddc-9254-13c63f724321 req-2f09011f-d27c-4140-a389-8f7136e7da16 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "472d2aca-ddcf-4c68-87d9-c9fe623fae5e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:39:31 compute-0 nova_compute[250018]: 2026-01-20 14:39:31.240 250022 DEBUG oslo_concurrency.lockutils [req-5567d833-541e-4ddc-9254-13c63f724321 req-2f09011f-d27c-4140-a389-8f7136e7da16 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "472d2aca-ddcf-4c68-87d9-c9fe623fae5e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:39:31 compute-0 nova_compute[250018]: 2026-01-20 14:39:31.241 250022 DEBUG oslo_concurrency.lockutils [req-5567d833-541e-4ddc-9254-13c63f724321 req-2f09011f-d27c-4140-a389-8f7136e7da16 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "472d2aca-ddcf-4c68-87d9-c9fe623fae5e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:39:31 compute-0 nova_compute[250018]: 2026-01-20 14:39:31.241 250022 DEBUG nova.compute.manager [req-5567d833-541e-4ddc-9254-13c63f724321 req-2f09011f-d27c-4140-a389-8f7136e7da16 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 472d2aca-ddcf-4c68-87d9-c9fe623fae5e] No waiting events found dispatching network-vif-plugged-100a89bb-a5c7-4e59-a9d5-500f5781d5ee pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:39:31 compute-0 nova_compute[250018]: 2026-01-20 14:39:31.241 250022 WARNING nova.compute.manager [req-5567d833-541e-4ddc-9254-13c63f724321 req-2f09011f-d27c-4140-a389-8f7136e7da16 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 472d2aca-ddcf-4c68-87d9-c9fe623fae5e] Received unexpected event network-vif-plugged-100a89bb-a5c7-4e59-a9d5-500f5781d5ee for instance with vm_state active and task_state None.
Jan 20 14:39:31 compute-0 nova_compute[250018]: 2026-01-20 14:39:31.241 250022 DEBUG nova.compute.manager [req-5567d833-541e-4ddc-9254-13c63f724321 req-2f09011f-d27c-4140-a389-8f7136e7da16 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 472d2aca-ddcf-4c68-87d9-c9fe623fae5e] Received event network-vif-plugged-100a89bb-a5c7-4e59-a9d5-500f5781d5ee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:39:31 compute-0 nova_compute[250018]: 2026-01-20 14:39:31.241 250022 DEBUG oslo_concurrency.lockutils [req-5567d833-541e-4ddc-9254-13c63f724321 req-2f09011f-d27c-4140-a389-8f7136e7da16 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "472d2aca-ddcf-4c68-87d9-c9fe623fae5e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:39:31 compute-0 nova_compute[250018]: 2026-01-20 14:39:31.242 250022 DEBUG oslo_concurrency.lockutils [req-5567d833-541e-4ddc-9254-13c63f724321 req-2f09011f-d27c-4140-a389-8f7136e7da16 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "472d2aca-ddcf-4c68-87d9-c9fe623fae5e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:39:31 compute-0 nova_compute[250018]: 2026-01-20 14:39:31.242 250022 DEBUG oslo_concurrency.lockutils [req-5567d833-541e-4ddc-9254-13c63f724321 req-2f09011f-d27c-4140-a389-8f7136e7da16 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "472d2aca-ddcf-4c68-87d9-c9fe623fae5e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:39:31 compute-0 nova_compute[250018]: 2026-01-20 14:39:31.242 250022 DEBUG nova.compute.manager [req-5567d833-541e-4ddc-9254-13c63f724321 req-2f09011f-d27c-4140-a389-8f7136e7da16 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 472d2aca-ddcf-4c68-87d9-c9fe623fae5e] No waiting events found dispatching network-vif-plugged-100a89bb-a5c7-4e59-a9d5-500f5781d5ee pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:39:31 compute-0 nova_compute[250018]: 2026-01-20 14:39:31.242 250022 WARNING nova.compute.manager [req-5567d833-541e-4ddc-9254-13c63f724321 req-2f09011f-d27c-4140-a389-8f7136e7da16 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 472d2aca-ddcf-4c68-87d9-c9fe623fae5e] Received unexpected event network-vif-plugged-100a89bb-a5c7-4e59-a9d5-500f5781d5ee for instance with vm_state active and task_state None.
Jan 20 14:39:31 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1535: 321 pgs: 321 active+clean; 337 MiB data, 750 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.5 MiB/s wr, 147 op/s
Jan 20 14:39:31 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:39:31 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:39:31 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:39:31.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:39:31 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/219743222' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:39:31 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/722968545' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:39:31 compute-0 nova_compute[250018]: 2026-01-20 14:39:31.849 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:39:32 compute-0 nova_compute[250018]: 2026-01-20 14:39:32.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:39:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:39:32.079 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=22, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=21) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:39:32 compute-0 nova_compute[250018]: 2026-01-20 14:39:32.079 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:39:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:39:32.080 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 14:39:32 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:39:32 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:39:32 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:39:32.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:39:32 compute-0 ceph-mon[74360]: pgmap v1535: 321 pgs: 321 active+clean; 337 MiB data, 750 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.5 MiB/s wr, 147 op/s
Jan 20 14:39:32 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/2096865750' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:39:32 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:39:33 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1536: 321 pgs: 321 active+clean; 282 MiB data, 717 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 919 KiB/s wr, 154 op/s
Jan 20 14:39:33 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:39:33 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:39:33 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:39:33.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:39:33 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/2502978402' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:39:33 compute-0 nova_compute[250018]: 2026-01-20 14:39:33.858 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:39:34 compute-0 nova_compute[250018]: 2026-01-20 14:39:34.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:39:34 compute-0 nova_compute[250018]: 2026-01-20 14:39:34.051 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 14:39:34 compute-0 nova_compute[250018]: 2026-01-20 14:39:34.051 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 14:39:34 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:39:34.081 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=367c1a2c-b16a-4828-ab5a-626bb50023b4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '22'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:39:34 compute-0 nova_compute[250018]: 2026-01-20 14:39:34.270 250022 DEBUG oslo_concurrency.lockutils [None req-a01abc9a-1006-413b-88bd-5e1d42febfc9 a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Acquiring lock "472d2aca-ddcf-4c68-87d9-c9fe623fae5e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:39:34 compute-0 nova_compute[250018]: 2026-01-20 14:39:34.270 250022 DEBUG oslo_concurrency.lockutils [None req-a01abc9a-1006-413b-88bd-5e1d42febfc9 a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Lock "472d2aca-ddcf-4c68-87d9-c9fe623fae5e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:39:34 compute-0 nova_compute[250018]: 2026-01-20 14:39:34.270 250022 DEBUG oslo_concurrency.lockutils [None req-a01abc9a-1006-413b-88bd-5e1d42febfc9 a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Acquiring lock "472d2aca-ddcf-4c68-87d9-c9fe623fae5e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:39:34 compute-0 nova_compute[250018]: 2026-01-20 14:39:34.270 250022 DEBUG oslo_concurrency.lockutils [None req-a01abc9a-1006-413b-88bd-5e1d42febfc9 a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Lock "472d2aca-ddcf-4c68-87d9-c9fe623fae5e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:39:34 compute-0 nova_compute[250018]: 2026-01-20 14:39:34.271 250022 DEBUG oslo_concurrency.lockutils [None req-a01abc9a-1006-413b-88bd-5e1d42febfc9 a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Lock "472d2aca-ddcf-4c68-87d9-c9fe623fae5e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:39:34 compute-0 nova_compute[250018]: 2026-01-20 14:39:34.272 250022 INFO nova.compute.manager [None req-a01abc9a-1006-413b-88bd-5e1d42febfc9 a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] [instance: 472d2aca-ddcf-4c68-87d9-c9fe623fae5e] Terminating instance
Jan 20 14:39:34 compute-0 nova_compute[250018]: 2026-01-20 14:39:34.272 250022 DEBUG nova.compute.manager [None req-a01abc9a-1006-413b-88bd-5e1d42febfc9 a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] [instance: 472d2aca-ddcf-4c68-87d9-c9fe623fae5e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 14:39:34 compute-0 nova_compute[250018]: 2026-01-20 14:39:34.282 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "refresh_cache-0c6693f6-c588-4a64-86ee-cf44a6a36260" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:39:34 compute-0 nova_compute[250018]: 2026-01-20 14:39:34.282 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquired lock "refresh_cache-0c6693f6-c588-4a64-86ee-cf44a6a36260" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:39:34 compute-0 nova_compute[250018]: 2026-01-20 14:39:34.282 250022 DEBUG nova.network.neutron [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] [instance: 0c6693f6-c588-4a64-86ee-cf44a6a36260] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 20 14:39:34 compute-0 nova_compute[250018]: 2026-01-20 14:39:34.282 250022 DEBUG nova.objects.instance [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 0c6693f6-c588-4a64-86ee-cf44a6a36260 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:39:34 compute-0 kernel: tap100a89bb-a5 (unregistering): left promiscuous mode
Jan 20 14:39:34 compute-0 NetworkManager[48960]: <info>  [1768919974.3086] device (tap100a89bb-a5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 14:39:34 compute-0 ovn_controller[148666]: 2026-01-20T14:39:34Z|00196|binding|INFO|Releasing lport 100a89bb-a5c7-4e59-a9d5-500f5781d5ee from this chassis (sb_readonly=0)
Jan 20 14:39:34 compute-0 ovn_controller[148666]: 2026-01-20T14:39:34Z|00197|binding|INFO|Setting lport 100a89bb-a5c7-4e59-a9d5-500f5781d5ee down in Southbound
Jan 20 14:39:34 compute-0 ovn_controller[148666]: 2026-01-20T14:39:34Z|00198|binding|INFO|Removing iface tap100a89bb-a5 ovn-installed in OVS
Jan 20 14:39:34 compute-0 nova_compute[250018]: 2026-01-20 14:39:34.313 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:39:34 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:39:34.319 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3f:7c:5d 10.100.0.5'], port_security=['fa:16:3e:3f:7c:5d 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '472d2aca-ddcf-4c68-87d9-c9fe623fae5e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-dece58f4-881e-47c7-961b-4367a4c3c21a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ebadba01dc3642a9a3e39469ff5d4708', 'neutron:revision_number': '8', 'neutron:security_group_ids': '5567989a-fdc9-4eba-934a-717bea3108fe be541bb3-3f03-4ac6-a46b-7651e33995f4 f3ce9a57-2f58-47f5-a885-26879a5662f0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2446ce7a-4f1f-4833-83c5-907fbc775260, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=100a89bb-a5c7-4e59-a9d5-500f5781d5ee) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:39:34 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:39:34.321 160071 INFO neutron.agent.ovn.metadata.agent [-] Port 100a89bb-a5c7-4e59-a9d5-500f5781d5ee in datapath dece58f4-881e-47c7-961b-4367a4c3c21a unbound from our chassis
Jan 20 14:39:34 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:39:34.323 160071 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network dece58f4-881e-47c7-961b-4367a4c3c21a
Jan 20 14:39:34 compute-0 nova_compute[250018]: 2026-01-20 14:39:34.334 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:39:34 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:39:34.336 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[5133c030-2228-432c-9db1-ec655bbe829c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:39:34 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:39:34.367 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[09ca19e2-9660-4d0c-9960-abb7606b9eec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:39:34 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:39:34.370 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[ec410da7-b7b5-41e2-b331-96cbbbb8e349]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:39:34 compute-0 systemd[1]: machine-qemu\x2d29\x2dinstance\x2d00000041.scope: Deactivated successfully.
Jan 20 14:39:34 compute-0 systemd[1]: machine-qemu\x2d29\x2dinstance\x2d00000041.scope: Consumed 5.466s CPU time.
Jan 20 14:39:34 compute-0 systemd-machined[216401]: Machine qemu-29-instance-00000041 terminated.
Jan 20 14:39:34 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:39:34.395 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[74e6bb2f-1c8e-47a1-b750-8b8e41eb9591]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:39:34 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:39:34.410 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[33ccc95a-e66f-4a8f-9cda-228a7b2eb82c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdece58f4-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9a:e0:be'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 12, 'rx_bytes': 616, 'tx_bytes': 696, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 12, 'rx_bytes': 616, 'tx_bytes': 696, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 60], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 587325, 'reachable_time': 37173, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 290744, 'error': None, 'target': 'ovnmeta-dece58f4-881e-47c7-961b-4367a4c3c21a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:39:34 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:39:34.426 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[101f0b61-c9ea-4bcc-8469-5890bc70dcb4]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapdece58f4-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 587336, 'tstamp': 587336}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 290745, 'error': None, 'target': 'ovnmeta-dece58f4-881e-47c7-961b-4367a4c3c21a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapdece58f4-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 587338, 'tstamp': 587338}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 290745, 'error': None, 'target': 'ovnmeta-dece58f4-881e-47c7-961b-4367a4c3c21a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:39:34 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:39:34.427 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdece58f4-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:39:34 compute-0 nova_compute[250018]: 2026-01-20 14:39:34.429 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:39:34 compute-0 nova_compute[250018]: 2026-01-20 14:39:34.433 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:39:34 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:39:34.434 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdece58f4-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:39:34 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:39:34.434 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:39:34 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:39:34.435 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapdece58f4-80, col_values=(('external_ids', {'iface-id': 'e7065b67-177e-47d4-a18c-921ae1c77ad4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:39:34 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:39:34.435 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:39:34 compute-0 nova_compute[250018]: 2026-01-20 14:39:34.489 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:39:34 compute-0 nova_compute[250018]: 2026-01-20 14:39:34.494 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:39:34 compute-0 nova_compute[250018]: 2026-01-20 14:39:34.502 250022 INFO nova.virt.libvirt.driver [-] [instance: 472d2aca-ddcf-4c68-87d9-c9fe623fae5e] Instance destroyed successfully.
Jan 20 14:39:34 compute-0 nova_compute[250018]: 2026-01-20 14:39:34.503 250022 DEBUG nova.objects.instance [None req-a01abc9a-1006-413b-88bd-5e1d42febfc9 a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Lazy-loading 'resources' on Instance uuid 472d2aca-ddcf-4c68-87d9-c9fe623fae5e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:39:34 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:39:34 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:39:34 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:39:34.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:39:34 compute-0 nova_compute[250018]: 2026-01-20 14:39:34.513 250022 DEBUG nova.virt.libvirt.vif [None req-a01abc9a-1006-413b-88bd-5e1d42febfc9 a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:39:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-611554348',display_name='tempest-SecurityGroupsTestJSON-server-611554348',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-611554348',id=65,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:39:19Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ebadba01dc3642a9a3e39469ff5d4708',ramdisk_id='',reservation_id='r-q6l3neim',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-SecurityGroupsTestJSON-1750338240',owner_user_name='tempest-SecurityGroupsTestJSON-1750338240-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T14:39:29Z,user_data=None,user_id='a3274e8540014ffa8cd910526cd964f7',uuid=472d2aca-ddcf-4c68-87d9-c9fe623fae5e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "100a89bb-a5c7-4e59-a9d5-500f5781d5ee", "address": "fa:16:3e:3f:7c:5d", "network": {"id": "dece58f4-881e-47c7-961b-4367a4c3c21a", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1623251841-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ebadba01dc3642a9a3e39469ff5d4708", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap100a89bb-a5", "ovs_interfaceid": "100a89bb-a5c7-4e59-a9d5-500f5781d5ee", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 14:39:34 compute-0 nova_compute[250018]: 2026-01-20 14:39:34.514 250022 DEBUG nova.network.os_vif_util [None req-a01abc9a-1006-413b-88bd-5e1d42febfc9 a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Converting VIF {"id": "100a89bb-a5c7-4e59-a9d5-500f5781d5ee", "address": "fa:16:3e:3f:7c:5d", "network": {"id": "dece58f4-881e-47c7-961b-4367a4c3c21a", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1623251841-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ebadba01dc3642a9a3e39469ff5d4708", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap100a89bb-a5", "ovs_interfaceid": "100a89bb-a5c7-4e59-a9d5-500f5781d5ee", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:39:34 compute-0 nova_compute[250018]: 2026-01-20 14:39:34.514 250022 DEBUG nova.network.os_vif_util [None req-a01abc9a-1006-413b-88bd-5e1d42febfc9 a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:3f:7c:5d,bridge_name='br-int',has_traffic_filtering=True,id=100a89bb-a5c7-4e59-a9d5-500f5781d5ee,network=Network(dece58f4-881e-47c7-961b-4367a4c3c21a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap100a89bb-a5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:39:34 compute-0 nova_compute[250018]: 2026-01-20 14:39:34.514 250022 DEBUG os_vif [None req-a01abc9a-1006-413b-88bd-5e1d42febfc9 a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:3f:7c:5d,bridge_name='br-int',has_traffic_filtering=True,id=100a89bb-a5c7-4e59-a9d5-500f5781d5ee,network=Network(dece58f4-881e-47c7-961b-4367a4c3c21a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap100a89bb-a5') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 14:39:34 compute-0 nova_compute[250018]: 2026-01-20 14:39:34.516 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:39:34 compute-0 nova_compute[250018]: 2026-01-20 14:39:34.516 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap100a89bb-a5, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:39:34 compute-0 nova_compute[250018]: 2026-01-20 14:39:34.519 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:39:34 compute-0 nova_compute[250018]: 2026-01-20 14:39:34.523 250022 INFO os_vif [None req-a01abc9a-1006-413b-88bd-5e1d42febfc9 a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:3f:7c:5d,bridge_name='br-int',has_traffic_filtering=True,id=100a89bb-a5c7-4e59-a9d5-500f5781d5ee,network=Network(dece58f4-881e-47c7-961b-4367a4c3c21a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap100a89bb-a5')
Jan 20 14:39:34 compute-0 ceph-mon[74360]: pgmap v1536: 321 pgs: 321 active+clean; 282 MiB data, 717 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 919 KiB/s wr, 154 op/s
Jan 20 14:39:34 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/1735536733' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:39:34 compute-0 nova_compute[250018]: 2026-01-20 14:39:34.872 250022 INFO nova.virt.libvirt.driver [None req-a01abc9a-1006-413b-88bd-5e1d42febfc9 a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] [instance: 472d2aca-ddcf-4c68-87d9-c9fe623fae5e] Deleting instance files /var/lib/nova/instances/472d2aca-ddcf-4c68-87d9-c9fe623fae5e_del
Jan 20 14:39:34 compute-0 nova_compute[250018]: 2026-01-20 14:39:34.873 250022 INFO nova.virt.libvirt.driver [None req-a01abc9a-1006-413b-88bd-5e1d42febfc9 a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] [instance: 472d2aca-ddcf-4c68-87d9-c9fe623fae5e] Deletion of /var/lib/nova/instances/472d2aca-ddcf-4c68-87d9-c9fe623fae5e_del complete
Jan 20 14:39:34 compute-0 nova_compute[250018]: 2026-01-20 14:39:34.941 250022 INFO nova.compute.manager [None req-a01abc9a-1006-413b-88bd-5e1d42febfc9 a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] [instance: 472d2aca-ddcf-4c68-87d9-c9fe623fae5e] Took 0.67 seconds to destroy the instance on the hypervisor.
Jan 20 14:39:34 compute-0 nova_compute[250018]: 2026-01-20 14:39:34.942 250022 DEBUG oslo.service.loopingcall [None req-a01abc9a-1006-413b-88bd-5e1d42febfc9 a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 14:39:34 compute-0 nova_compute[250018]: 2026-01-20 14:39:34.942 250022 DEBUG nova.compute.manager [-] [instance: 472d2aca-ddcf-4c68-87d9-c9fe623fae5e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 14:39:34 compute-0 nova_compute[250018]: 2026-01-20 14:39:34.942 250022 DEBUG nova.network.neutron [-] [instance: 472d2aca-ddcf-4c68-87d9-c9fe623fae5e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 14:39:35 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1537: 321 pgs: 321 active+clean; 177 MiB data, 657 MiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 81 KiB/s wr, 218 op/s
Jan 20 14:39:35 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:39:35 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:39:35 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:39:35.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:39:35 compute-0 nova_compute[250018]: 2026-01-20 14:39:35.735 250022 DEBUG nova.compute.manager [req-6c8da1a6-41dd-4209-8fd8-d5c55831e5f3 req-ce0dae49-5f05-42dc-981d-81d27269fe96 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 472d2aca-ddcf-4c68-87d9-c9fe623fae5e] Received event network-changed-100a89bb-a5c7-4e59-a9d5-500f5781d5ee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:39:35 compute-0 nova_compute[250018]: 2026-01-20 14:39:35.736 250022 DEBUG nova.compute.manager [req-6c8da1a6-41dd-4209-8fd8-d5c55831e5f3 req-ce0dae49-5f05-42dc-981d-81d27269fe96 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 472d2aca-ddcf-4c68-87d9-c9fe623fae5e] Refreshing instance network info cache due to event network-changed-100a89bb-a5c7-4e59-a9d5-500f5781d5ee. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:39:35 compute-0 nova_compute[250018]: 2026-01-20 14:39:35.736 250022 DEBUG oslo_concurrency.lockutils [req-6c8da1a6-41dd-4209-8fd8-d5c55831e5f3 req-ce0dae49-5f05-42dc-981d-81d27269fe96 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-472d2aca-ddcf-4c68-87d9-c9fe623fae5e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:39:35 compute-0 nova_compute[250018]: 2026-01-20 14:39:35.736 250022 DEBUG oslo_concurrency.lockutils [req-6c8da1a6-41dd-4209-8fd8-d5c55831e5f3 req-ce0dae49-5f05-42dc-981d-81d27269fe96 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-472d2aca-ddcf-4c68-87d9-c9fe623fae5e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:39:35 compute-0 nova_compute[250018]: 2026-01-20 14:39:35.736 250022 DEBUG nova.network.neutron [req-6c8da1a6-41dd-4209-8fd8-d5c55831e5f3 req-ce0dae49-5f05-42dc-981d-81d27269fe96 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 472d2aca-ddcf-4c68-87d9-c9fe623fae5e] Refreshing network info cache for port 100a89bb-a5c7-4e59-a9d5-500f5781d5ee _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:39:35 compute-0 nova_compute[250018]: 2026-01-20 14:39:35.776 250022 DEBUG nova.network.neutron [-] [instance: 472d2aca-ddcf-4c68-87d9-c9fe623fae5e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:39:35 compute-0 nova_compute[250018]: 2026-01-20 14:39:35.791 250022 INFO nova.compute.manager [-] [instance: 472d2aca-ddcf-4c68-87d9-c9fe623fae5e] Took 0.85 seconds to deallocate network for instance.
Jan 20 14:39:35 compute-0 nova_compute[250018]: 2026-01-20 14:39:35.840 250022 DEBUG oslo_concurrency.lockutils [None req-a01abc9a-1006-413b-88bd-5e1d42febfc9 a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:39:35 compute-0 nova_compute[250018]: 2026-01-20 14:39:35.840 250022 DEBUG oslo_concurrency.lockutils [None req-a01abc9a-1006-413b-88bd-5e1d42febfc9 a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:39:35 compute-0 nova_compute[250018]: 2026-01-20 14:39:35.914 250022 DEBUG oslo_concurrency.processutils [None req-a01abc9a-1006-413b-88bd-5e1d42febfc9 a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:39:35 compute-0 nova_compute[250018]: 2026-01-20 14:39:35.942 250022 INFO nova.network.neutron [req-6c8da1a6-41dd-4209-8fd8-d5c55831e5f3 req-ce0dae49-5f05-42dc-981d-81d27269fe96 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 472d2aca-ddcf-4c68-87d9-c9fe623fae5e] Port 100a89bb-a5c7-4e59-a9d5-500f5781d5ee from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.
Jan 20 14:39:35 compute-0 nova_compute[250018]: 2026-01-20 14:39:35.943 250022 DEBUG nova.network.neutron [req-6c8da1a6-41dd-4209-8fd8-d5c55831e5f3 req-ce0dae49-5f05-42dc-981d-81d27269fe96 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 472d2aca-ddcf-4c68-87d9-c9fe623fae5e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:39:35 compute-0 nova_compute[250018]: 2026-01-20 14:39:35.959 250022 DEBUG oslo_concurrency.lockutils [req-6c8da1a6-41dd-4209-8fd8-d5c55831e5f3 req-ce0dae49-5f05-42dc-981d-81d27269fe96 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-472d2aca-ddcf-4c68-87d9-c9fe623fae5e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:39:35 compute-0 nova_compute[250018]: 2026-01-20 14:39:35.959 250022 DEBUG nova.compute.manager [req-6c8da1a6-41dd-4209-8fd8-d5c55831e5f3 req-ce0dae49-5f05-42dc-981d-81d27269fe96 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 472d2aca-ddcf-4c68-87d9-c9fe623fae5e] Received event network-vif-unplugged-100a89bb-a5c7-4e59-a9d5-500f5781d5ee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:39:35 compute-0 nova_compute[250018]: 2026-01-20 14:39:35.959 250022 DEBUG oslo_concurrency.lockutils [req-6c8da1a6-41dd-4209-8fd8-d5c55831e5f3 req-ce0dae49-5f05-42dc-981d-81d27269fe96 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "472d2aca-ddcf-4c68-87d9-c9fe623fae5e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:39:35 compute-0 nova_compute[250018]: 2026-01-20 14:39:35.960 250022 DEBUG oslo_concurrency.lockutils [req-6c8da1a6-41dd-4209-8fd8-d5c55831e5f3 req-ce0dae49-5f05-42dc-981d-81d27269fe96 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "472d2aca-ddcf-4c68-87d9-c9fe623fae5e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:39:35 compute-0 nova_compute[250018]: 2026-01-20 14:39:35.960 250022 DEBUG oslo_concurrency.lockutils [req-6c8da1a6-41dd-4209-8fd8-d5c55831e5f3 req-ce0dae49-5f05-42dc-981d-81d27269fe96 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "472d2aca-ddcf-4c68-87d9-c9fe623fae5e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:39:35 compute-0 nova_compute[250018]: 2026-01-20 14:39:35.960 250022 DEBUG nova.compute.manager [req-6c8da1a6-41dd-4209-8fd8-d5c55831e5f3 req-ce0dae49-5f05-42dc-981d-81d27269fe96 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 472d2aca-ddcf-4c68-87d9-c9fe623fae5e] No waiting events found dispatching network-vif-unplugged-100a89bb-a5c7-4e59-a9d5-500f5781d5ee pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:39:35 compute-0 nova_compute[250018]: 2026-01-20 14:39:35.960 250022 DEBUG nova.compute.manager [req-6c8da1a6-41dd-4209-8fd8-d5c55831e5f3 req-ce0dae49-5f05-42dc-981d-81d27269fe96 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 472d2aca-ddcf-4c68-87d9-c9fe623fae5e] Received event network-vif-unplugged-100a89bb-a5c7-4e59-a9d5-500f5781d5ee for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 20 14:39:35 compute-0 nova_compute[250018]: 2026-01-20 14:39:35.960 250022 DEBUG nova.compute.manager [req-6c8da1a6-41dd-4209-8fd8-d5c55831e5f3 req-ce0dae49-5f05-42dc-981d-81d27269fe96 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 472d2aca-ddcf-4c68-87d9-c9fe623fae5e] Received event network-vif-plugged-100a89bb-a5c7-4e59-a9d5-500f5781d5ee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:39:35 compute-0 nova_compute[250018]: 2026-01-20 14:39:35.960 250022 DEBUG oslo_concurrency.lockutils [req-6c8da1a6-41dd-4209-8fd8-d5c55831e5f3 req-ce0dae49-5f05-42dc-981d-81d27269fe96 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "472d2aca-ddcf-4c68-87d9-c9fe623fae5e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:39:35 compute-0 nova_compute[250018]: 2026-01-20 14:39:35.960 250022 DEBUG oslo_concurrency.lockutils [req-6c8da1a6-41dd-4209-8fd8-d5c55831e5f3 req-ce0dae49-5f05-42dc-981d-81d27269fe96 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "472d2aca-ddcf-4c68-87d9-c9fe623fae5e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:39:35 compute-0 nova_compute[250018]: 2026-01-20 14:39:35.961 250022 DEBUG oslo_concurrency.lockutils [req-6c8da1a6-41dd-4209-8fd8-d5c55831e5f3 req-ce0dae49-5f05-42dc-981d-81d27269fe96 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "472d2aca-ddcf-4c68-87d9-c9fe623fae5e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:39:35 compute-0 nova_compute[250018]: 2026-01-20 14:39:35.961 250022 DEBUG nova.compute.manager [req-6c8da1a6-41dd-4209-8fd8-d5c55831e5f3 req-ce0dae49-5f05-42dc-981d-81d27269fe96 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 472d2aca-ddcf-4c68-87d9-c9fe623fae5e] No waiting events found dispatching network-vif-plugged-100a89bb-a5c7-4e59-a9d5-500f5781d5ee pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:39:35 compute-0 nova_compute[250018]: 2026-01-20 14:39:35.961 250022 WARNING nova.compute.manager [req-6c8da1a6-41dd-4209-8fd8-d5c55831e5f3 req-ce0dae49-5f05-42dc-981d-81d27269fe96 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 472d2aca-ddcf-4c68-87d9-c9fe623fae5e] Received unexpected event network-vif-plugged-100a89bb-a5c7-4e59-a9d5-500f5781d5ee for instance with vm_state active and task_state deleting.
Jan 20 14:39:36 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:39:36 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3025398922' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:39:36 compute-0 nova_compute[250018]: 2026-01-20 14:39:36.347 250022 DEBUG oslo_concurrency.processutils [None req-a01abc9a-1006-413b-88bd-5e1d42febfc9 a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:39:36 compute-0 nova_compute[250018]: 2026-01-20 14:39:36.353 250022 DEBUG nova.compute.provider_tree [None req-a01abc9a-1006-413b-88bd-5e1d42febfc9 a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:39:36 compute-0 nova_compute[250018]: 2026-01-20 14:39:36.378 250022 DEBUG nova.scheduler.client.report [None req-a01abc9a-1006-413b-88bd-5e1d42febfc9 a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:39:36 compute-0 nova_compute[250018]: 2026-01-20 14:39:36.411 250022 DEBUG oslo_concurrency.lockutils [None req-a01abc9a-1006-413b-88bd-5e1d42febfc9 a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.571s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:39:36 compute-0 nova_compute[250018]: 2026-01-20 14:39:36.431 250022 INFO nova.scheduler.client.report [None req-a01abc9a-1006-413b-88bd-5e1d42febfc9 a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Deleted allocations for instance 472d2aca-ddcf-4c68-87d9-c9fe623fae5e
Jan 20 14:39:36 compute-0 nova_compute[250018]: 2026-01-20 14:39:36.492 250022 DEBUG oslo_concurrency.lockutils [None req-a01abc9a-1006-413b-88bd-5e1d42febfc9 a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Lock "472d2aca-ddcf-4c68-87d9-c9fe623fae5e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.222s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:39:36 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:39:36 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:39:36 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:39:36.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:39:36 compute-0 nova_compute[250018]: 2026-01-20 14:39:36.773 250022 DEBUG nova.network.neutron [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] [instance: 0c6693f6-c588-4a64-86ee-cf44a6a36260] Updating instance_info_cache with network_info: [{"id": "66a65e38-61c1-414e-8a99-7a5480b4b97b", "address": "fa:16:3e:ff:49:85", "network": {"id": "dece58f4-881e-47c7-961b-4367a4c3c21a", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1623251841-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ebadba01dc3642a9a3e39469ff5d4708", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66a65e38-61", "ovs_interfaceid": "66a65e38-61c1-414e-8a99-7a5480b4b97b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:39:36 compute-0 nova_compute[250018]: 2026-01-20 14:39:36.789 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Releasing lock "refresh_cache-0c6693f6-c588-4a64-86ee-cf44a6a36260" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:39:36 compute-0 nova_compute[250018]: 2026-01-20 14:39:36.790 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] [instance: 0c6693f6-c588-4a64-86ee-cf44a6a36260] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 20 14:39:36 compute-0 nova_compute[250018]: 2026-01-20 14:39:36.850 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:39:36 compute-0 ceph-mon[74360]: pgmap v1537: 321 pgs: 321 active+clean; 177 MiB data, 657 MiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 81 KiB/s wr, 218 op/s
Jan 20 14:39:36 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3025398922' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:39:37 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1538: 321 pgs: 321 active+clean; 167 MiB data, 650 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 34 KiB/s wr, 172 op/s
Jan 20 14:39:37 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:39:37 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:39:37 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:39:37.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:39:37 compute-0 nova_compute[250018]: 2026-01-20 14:39:37.805 250022 DEBUG nova.compute.manager [req-bbc60d46-96cd-480b-ba63-ed9d295eb3b1 req-9d29ece7-aa3b-4bff-a01c-a35f47f93dc9 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 472d2aca-ddcf-4c68-87d9-c9fe623fae5e] Received event network-vif-deleted-100a89bb-a5c7-4e59-a9d5-500f5781d5ee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:39:37 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:39:38 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:39:38 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:39:38 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:39:38.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:39:38 compute-0 ceph-mon[74360]: pgmap v1538: 321 pgs: 321 active+clean; 167 MiB data, 650 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 34 KiB/s wr, 172 op/s
Jan 20 14:39:39 compute-0 nova_compute[250018]: 2026-01-20 14:39:39.519 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:39:39 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1539: 321 pgs: 321 active+clean; 167 MiB data, 650 MiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 22 KiB/s wr, 206 op/s
Jan 20 14:39:39 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:39:39 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:39:39 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:39:39.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:39:40 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:39:40 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:39:40 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:39:40.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:39:40 compute-0 ceph-mon[74360]: pgmap v1539: 321 pgs: 321 active+clean; 167 MiB data, 650 MiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 22 KiB/s wr, 206 op/s
Jan 20 14:39:41 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1540: 321 pgs: 321 active+clean; 167 MiB data, 636 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 21 KiB/s wr, 229 op/s
Jan 20 14:39:41 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:39:41 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:39:41 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:39:41.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:39:41 compute-0 nova_compute[250018]: 2026-01-20 14:39:41.853 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:39:42 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:39:42 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:39:42 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:39:42.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:39:42 compute-0 ceph-mon[74360]: pgmap v1540: 321 pgs: 321 active+clean; 167 MiB data, 636 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 21 KiB/s wr, 229 op/s
Jan 20 14:39:42 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:39:43 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1541: 321 pgs: 321 active+clean; 167 MiB data, 636 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 22 KiB/s wr, 219 op/s
Jan 20 14:39:43 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:39:43 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:39:43 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:39:43.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:39:43 compute-0 nova_compute[250018]: 2026-01-20 14:39:43.957 250022 DEBUG oslo_concurrency.lockutils [None req-3e03a2ab-9b36-4668-ac2b-6db320c9b08c a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Acquiring lock "0c6693f6-c588-4a64-86ee-cf44a6a36260" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:39:43 compute-0 nova_compute[250018]: 2026-01-20 14:39:43.958 250022 DEBUG oslo_concurrency.lockutils [None req-3e03a2ab-9b36-4668-ac2b-6db320c9b08c a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Lock "0c6693f6-c588-4a64-86ee-cf44a6a36260" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:39:43 compute-0 nova_compute[250018]: 2026-01-20 14:39:43.958 250022 DEBUG oslo_concurrency.lockutils [None req-3e03a2ab-9b36-4668-ac2b-6db320c9b08c a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Acquiring lock "0c6693f6-c588-4a64-86ee-cf44a6a36260-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:39:43 compute-0 nova_compute[250018]: 2026-01-20 14:39:43.958 250022 DEBUG oslo_concurrency.lockutils [None req-3e03a2ab-9b36-4668-ac2b-6db320c9b08c a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Lock "0c6693f6-c588-4a64-86ee-cf44a6a36260-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:39:43 compute-0 nova_compute[250018]: 2026-01-20 14:39:43.958 250022 DEBUG oslo_concurrency.lockutils [None req-3e03a2ab-9b36-4668-ac2b-6db320c9b08c a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Lock "0c6693f6-c588-4a64-86ee-cf44a6a36260-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:39:43 compute-0 nova_compute[250018]: 2026-01-20 14:39:43.959 250022 INFO nova.compute.manager [None req-3e03a2ab-9b36-4668-ac2b-6db320c9b08c a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] [instance: 0c6693f6-c588-4a64-86ee-cf44a6a36260] Terminating instance
Jan 20 14:39:43 compute-0 nova_compute[250018]: 2026-01-20 14:39:43.960 250022 DEBUG nova.compute.manager [None req-3e03a2ab-9b36-4668-ac2b-6db320c9b08c a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] [instance: 0c6693f6-c588-4a64-86ee-cf44a6a36260] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 14:39:44 compute-0 kernel: tap66a65e38-61 (unregistering): left promiscuous mode
Jan 20 14:39:44 compute-0 NetworkManager[48960]: <info>  [1768919984.0146] device (tap66a65e38-61): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 14:39:44 compute-0 nova_compute[250018]: 2026-01-20 14:39:44.053 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:39:44 compute-0 ovn_controller[148666]: 2026-01-20T14:39:44Z|00199|binding|INFO|Releasing lport 66a65e38-61c1-414e-8a99-7a5480b4b97b from this chassis (sb_readonly=0)
Jan 20 14:39:44 compute-0 ovn_controller[148666]: 2026-01-20T14:39:44Z|00200|binding|INFO|Setting lport 66a65e38-61c1-414e-8a99-7a5480b4b97b down in Southbound
Jan 20 14:39:44 compute-0 ovn_controller[148666]: 2026-01-20T14:39:44Z|00201|binding|INFO|Removing iface tap66a65e38-61 ovn-installed in OVS
Jan 20 14:39:44 compute-0 nova_compute[250018]: 2026-01-20 14:39:44.055 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:39:44 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:39:44.060 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ff:49:85 10.100.0.13'], port_security=['fa:16:3e:ff:49:85 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '0c6693f6-c588-4a64-86ee-cf44a6a36260', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-dece58f4-881e-47c7-961b-4367a4c3c21a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ebadba01dc3642a9a3e39469ff5d4708', 'neutron:revision_number': '6', 'neutron:security_group_ids': '12cb97f2-202b-4966-9d91-8eda8dfa8dc4 5567989a-fdc9-4eba-934a-717bea3108fe 92e30278-22dc-47a3-90d7-324e7d26d9f5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2446ce7a-4f1f-4833-83c5-907fbc775260, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=66a65e38-61c1-414e-8a99-7a5480b4b97b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:39:44 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:39:44.061 160071 INFO neutron.agent.ovn.metadata.agent [-] Port 66a65e38-61c1-414e-8a99-7a5480b4b97b in datapath dece58f4-881e-47c7-961b-4367a4c3c21a unbound from our chassis
Jan 20 14:39:44 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:39:44.062 160071 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network dece58f4-881e-47c7-961b-4367a4c3c21a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 14:39:44 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:39:44.063 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[6d6b1129-7c38-4321-8774-ef4e8c6ad985]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:39:44 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:39:44.063 160071 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-dece58f4-881e-47c7-961b-4367a4c3c21a namespace which is not needed anymore
Jan 20 14:39:44 compute-0 nova_compute[250018]: 2026-01-20 14:39:44.070 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:39:44 compute-0 systemd[1]: machine-qemu\x2d27\x2dinstance\x2d0000003d.scope: Deactivated successfully.
Jan 20 14:39:44 compute-0 systemd[1]: machine-qemu\x2d27\x2dinstance\x2d0000003d.scope: Consumed 15.393s CPU time.
Jan 20 14:39:44 compute-0 systemd-machined[216401]: Machine qemu-27-instance-0000003d terminated.
Jan 20 14:39:44 compute-0 nova_compute[250018]: 2026-01-20 14:39:44.177 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:39:44 compute-0 nova_compute[250018]: 2026-01-20 14:39:44.181 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:39:44 compute-0 nova_compute[250018]: 2026-01-20 14:39:44.193 250022 INFO nova.virt.libvirt.driver [-] [instance: 0c6693f6-c588-4a64-86ee-cf44a6a36260] Instance destroyed successfully.
Jan 20 14:39:44 compute-0 nova_compute[250018]: 2026-01-20 14:39:44.193 250022 DEBUG nova.objects.instance [None req-3e03a2ab-9b36-4668-ac2b-6db320c9b08c a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Lazy-loading 'resources' on Instance uuid 0c6693f6-c588-4a64-86ee-cf44a6a36260 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:39:44 compute-0 neutron-haproxy-ovnmeta-dece58f4-881e-47c7-961b-4367a4c3c21a[289114]: [NOTICE]   (289118) : haproxy version is 2.8.14-c23fe91
Jan 20 14:39:44 compute-0 neutron-haproxy-ovnmeta-dece58f4-881e-47c7-961b-4367a4c3c21a[289114]: [NOTICE]   (289118) : path to executable is /usr/sbin/haproxy
Jan 20 14:39:44 compute-0 neutron-haproxy-ovnmeta-dece58f4-881e-47c7-961b-4367a4c3c21a[289114]: [WARNING]  (289118) : Exiting Master process...
Jan 20 14:39:44 compute-0 neutron-haproxy-ovnmeta-dece58f4-881e-47c7-961b-4367a4c3c21a[289114]: [WARNING]  (289118) : Exiting Master process...
Jan 20 14:39:44 compute-0 neutron-haproxy-ovnmeta-dece58f4-881e-47c7-961b-4367a4c3c21a[289114]: [ALERT]    (289118) : Current worker (289120) exited with code 143 (Terminated)
Jan 20 14:39:44 compute-0 neutron-haproxy-ovnmeta-dece58f4-881e-47c7-961b-4367a4c3c21a[289114]: [WARNING]  (289118) : All workers exited. Exiting... (0)
Jan 20 14:39:44 compute-0 systemd[1]: libpod-59ab5ad8fc10dba8c806e4f97844b83d32d8f1f307304abe848fbbd3d49ebfdf.scope: Deactivated successfully.
Jan 20 14:39:44 compute-0 podman[290827]: 2026-01-20 14:39:44.20926848 +0000 UTC m=+0.050932135 container died 59ab5ad8fc10dba8c806e4f97844b83d32d8f1f307304abe848fbbd3d49ebfdf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dece58f4-881e-47c7-961b-4367a4c3c21a, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:39:44 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-59ab5ad8fc10dba8c806e4f97844b83d32d8f1f307304abe848fbbd3d49ebfdf-userdata-shm.mount: Deactivated successfully.
Jan 20 14:39:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-4aafcd99a0d846c5b7c615907921ee0851372153f968a1df0105ef43c1c4749b-merged.mount: Deactivated successfully.
Jan 20 14:39:44 compute-0 podman[290827]: 2026-01-20 14:39:44.245552398 +0000 UTC m=+0.087216043 container cleanup 59ab5ad8fc10dba8c806e4f97844b83d32d8f1f307304abe848fbbd3d49ebfdf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dece58f4-881e-47c7-961b-4367a4c3c21a, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 20 14:39:44 compute-0 systemd[1]: libpod-conmon-59ab5ad8fc10dba8c806e4f97844b83d32d8f1f307304abe848fbbd3d49ebfdf.scope: Deactivated successfully.
Jan 20 14:39:44 compute-0 podman[290865]: 2026-01-20 14:39:44.310657113 +0000 UTC m=+0.044992974 container remove 59ab5ad8fc10dba8c806e4f97844b83d32d8f1f307304abe848fbbd3d49ebfdf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dece58f4-881e-47c7-961b-4367a4c3c21a, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 14:39:44 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:39:44.316 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[56ad55b7-e90f-4ccf-b71f-0437e160e11b]: (4, ('Tue Jan 20 02:39:44 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-dece58f4-881e-47c7-961b-4367a4c3c21a (59ab5ad8fc10dba8c806e4f97844b83d32d8f1f307304abe848fbbd3d49ebfdf)\n59ab5ad8fc10dba8c806e4f97844b83d32d8f1f307304abe848fbbd3d49ebfdf\nTue Jan 20 02:39:44 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-dece58f4-881e-47c7-961b-4367a4c3c21a (59ab5ad8fc10dba8c806e4f97844b83d32d8f1f307304abe848fbbd3d49ebfdf)\n59ab5ad8fc10dba8c806e4f97844b83d32d8f1f307304abe848fbbd3d49ebfdf\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:39:44 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:39:44.317 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[6ac9ac03-69fe-47cf-b5b7-c00a420b7eec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:39:44 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:39:44.318 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdece58f4-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:39:44 compute-0 nova_compute[250018]: 2026-01-20 14:39:44.320 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:39:44 compute-0 kernel: tapdece58f4-80: left promiscuous mode
Jan 20 14:39:44 compute-0 nova_compute[250018]: 2026-01-20 14:39:44.340 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:39:44 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:39:44.342 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[5a99ab57-b6ed-4c44-86bd-a2cf14c824f6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:39:44 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:39:44.358 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[1a2a1c64-9631-4d99-bac6-b3e1194ae678]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:39:44 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:39:44.358 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[cc6568c4-e9c4-4f9d-b53f-875926bfae4b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:39:44 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:39:44.371 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[970960d3-ae43-4785-abb9-fc482280eb16]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 587318, 'reachable_time': 17358, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 290884, 'error': None, 'target': 'ovnmeta-dece58f4-881e-47c7-961b-4367a4c3c21a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:39:44 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:39:44.374 160517 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-dece58f4-881e-47c7-961b-4367a4c3c21a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 14:39:44 compute-0 systemd[1]: run-netns-ovnmeta\x2ddece58f4\x2d881e\x2d47c7\x2d961b\x2d4367a4c3c21a.mount: Deactivated successfully.
Jan 20 14:39:44 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:39:44.375 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[e0c58672-9e71-44cf-a480-12d197349f6c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:39:44 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:39:44 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:39:44 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:39:44.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:39:44 compute-0 nova_compute[250018]: 2026-01-20 14:39:44.520 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:39:44 compute-0 nova_compute[250018]: 2026-01-20 14:39:44.731 250022 DEBUG nova.virt.libvirt.vif [None req-3e03a2ab-9b36-4668-ac2b-6db320c9b08c a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:38:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-1175148693',display_name='tempest-SecurityGroupsTestJSON-server-1175148693',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-1175148693',id=61,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:38:53Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ebadba01dc3642a9a3e39469ff5d4708',ramdisk_id='',reservation_id='r-e05ceji0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_
min_disk='1',image_min_ram='0',owner_project_name='tempest-SecurityGroupsTestJSON-1750338240',owner_user_name='tempest-SecurityGroupsTestJSON-1750338240-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T14:38:53Z,user_data=None,user_id='a3274e8540014ffa8cd910526cd964f7',uuid=0c6693f6-c588-4a64-86ee-cf44a6a36260,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "66a65e38-61c1-414e-8a99-7a5480b4b97b", "address": "fa:16:3e:ff:49:85", "network": {"id": "dece58f4-881e-47c7-961b-4367a4c3c21a", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1623251841-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ebadba01dc3642a9a3e39469ff5d4708", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66a65e38-61", "ovs_interfaceid": "66a65e38-61c1-414e-8a99-7a5480b4b97b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 14:39:44 compute-0 nova_compute[250018]: 2026-01-20 14:39:44.732 250022 DEBUG nova.network.os_vif_util [None req-3e03a2ab-9b36-4668-ac2b-6db320c9b08c a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Converting VIF {"id": "66a65e38-61c1-414e-8a99-7a5480b4b97b", "address": "fa:16:3e:ff:49:85", "network": {"id": "dece58f4-881e-47c7-961b-4367a4c3c21a", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1623251841-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ebadba01dc3642a9a3e39469ff5d4708", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66a65e38-61", "ovs_interfaceid": "66a65e38-61c1-414e-8a99-7a5480b4b97b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:39:44 compute-0 nova_compute[250018]: 2026-01-20 14:39:44.733 250022 DEBUG nova.network.os_vif_util [None req-3e03a2ab-9b36-4668-ac2b-6db320c9b08c a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ff:49:85,bridge_name='br-int',has_traffic_filtering=True,id=66a65e38-61c1-414e-8a99-7a5480b4b97b,network=Network(dece58f4-881e-47c7-961b-4367a4c3c21a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap66a65e38-61') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:39:44 compute-0 nova_compute[250018]: 2026-01-20 14:39:44.733 250022 DEBUG os_vif [None req-3e03a2ab-9b36-4668-ac2b-6db320c9b08c a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ff:49:85,bridge_name='br-int',has_traffic_filtering=True,id=66a65e38-61c1-414e-8a99-7a5480b4b97b,network=Network(dece58f4-881e-47c7-961b-4367a4c3c21a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap66a65e38-61') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 14:39:44 compute-0 nova_compute[250018]: 2026-01-20 14:39:44.734 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:39:44 compute-0 nova_compute[250018]: 2026-01-20 14:39:44.735 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap66a65e38-61, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:39:44 compute-0 nova_compute[250018]: 2026-01-20 14:39:44.738 250022 DEBUG nova.compute.manager [req-a8850201-2f4b-4939-a2a2-8ef84f7184c2 req-801cadf7-9f9f-430f-bd9a-d32633376741 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 0c6693f6-c588-4a64-86ee-cf44a6a36260] Received event network-vif-unplugged-66a65e38-61c1-414e-8a99-7a5480b4b97b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:39:44 compute-0 nova_compute[250018]: 2026-01-20 14:39:44.738 250022 DEBUG oslo_concurrency.lockutils [req-a8850201-2f4b-4939-a2a2-8ef84f7184c2 req-801cadf7-9f9f-430f-bd9a-d32633376741 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "0c6693f6-c588-4a64-86ee-cf44a6a36260-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:39:44 compute-0 nova_compute[250018]: 2026-01-20 14:39:44.738 250022 DEBUG oslo_concurrency.lockutils [req-a8850201-2f4b-4939-a2a2-8ef84f7184c2 req-801cadf7-9f9f-430f-bd9a-d32633376741 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "0c6693f6-c588-4a64-86ee-cf44a6a36260-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:39:44 compute-0 nova_compute[250018]: 2026-01-20 14:39:44.739 250022 DEBUG oslo_concurrency.lockutils [req-a8850201-2f4b-4939-a2a2-8ef84f7184c2 req-801cadf7-9f9f-430f-bd9a-d32633376741 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "0c6693f6-c588-4a64-86ee-cf44a6a36260-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:39:44 compute-0 nova_compute[250018]: 2026-01-20 14:39:44.739 250022 DEBUG nova.compute.manager [req-a8850201-2f4b-4939-a2a2-8ef84f7184c2 req-801cadf7-9f9f-430f-bd9a-d32633376741 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 0c6693f6-c588-4a64-86ee-cf44a6a36260] No waiting events found dispatching network-vif-unplugged-66a65e38-61c1-414e-8a99-7a5480b4b97b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:39:44 compute-0 nova_compute[250018]: 2026-01-20 14:39:44.739 250022 DEBUG nova.compute.manager [req-a8850201-2f4b-4939-a2a2-8ef84f7184c2 req-801cadf7-9f9f-430f-bd9a-d32633376741 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 0c6693f6-c588-4a64-86ee-cf44a6a36260] Received event network-vif-unplugged-66a65e38-61c1-414e-8a99-7a5480b4b97b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 20 14:39:44 compute-0 nova_compute[250018]: 2026-01-20 14:39:44.739 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:39:44 compute-0 nova_compute[250018]: 2026-01-20 14:39:44.740 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:39:44 compute-0 nova_compute[250018]: 2026-01-20 14:39:44.742 250022 INFO os_vif [None req-3e03a2ab-9b36-4668-ac2b-6db320c9b08c a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ff:49:85,bridge_name='br-int',has_traffic_filtering=True,id=66a65e38-61c1-414e-8a99-7a5480b4b97b,network=Network(dece58f4-881e-47c7-961b-4367a4c3c21a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap66a65e38-61')
Jan 20 14:39:44 compute-0 sudo[290905]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:39:44 compute-0 sudo[290905]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:39:44 compute-0 sudo[290905]: pam_unix(sudo:session): session closed for user root
Jan 20 14:39:44 compute-0 ceph-mon[74360]: pgmap v1541: 321 pgs: 321 active+clean; 167 MiB data, 636 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 22 KiB/s wr, 219 op/s
Jan 20 14:39:44 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/1901219903' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:39:44 compute-0 sudo[290942]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:39:44 compute-0 sudo[290942]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:39:44 compute-0 podman[290930]: 2026-01-20 14:39:44.94915973 +0000 UTC m=+0.052561067 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:39:44 compute-0 sudo[290942]: pam_unix(sudo:session): session closed for user root
Jan 20 14:39:44 compute-0 podman[290929]: 2026-01-20 14:39:44.972610773 +0000 UTC m=+0.079785982 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, 
org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:39:45 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1542: 321 pgs: 321 active+clean; 167 MiB data, 636 MiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 19 KiB/s wr, 175 op/s
Jan 20 14:39:45 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:39:45 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:39:45 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:39:45.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:39:45 compute-0 nova_compute[250018]: 2026-01-20 14:39:45.798 250022 INFO nova.virt.libvirt.driver [None req-3e03a2ab-9b36-4668-ac2b-6db320c9b08c a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] [instance: 0c6693f6-c588-4a64-86ee-cf44a6a36260] Deleting instance files /var/lib/nova/instances/0c6693f6-c588-4a64-86ee-cf44a6a36260_del
Jan 20 14:39:45 compute-0 nova_compute[250018]: 2026-01-20 14:39:45.799 250022 INFO nova.virt.libvirt.driver [None req-3e03a2ab-9b36-4668-ac2b-6db320c9b08c a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] [instance: 0c6693f6-c588-4a64-86ee-cf44a6a36260] Deletion of /var/lib/nova/instances/0c6693f6-c588-4a64-86ee-cf44a6a36260_del complete
Jan 20 14:39:45 compute-0 nova_compute[250018]: 2026-01-20 14:39:45.894 250022 INFO nova.compute.manager [None req-3e03a2ab-9b36-4668-ac2b-6db320c9b08c a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] [instance: 0c6693f6-c588-4a64-86ee-cf44a6a36260] Took 1.93 seconds to destroy the instance on the hypervisor.
Jan 20 14:39:45 compute-0 nova_compute[250018]: 2026-01-20 14:39:45.895 250022 DEBUG oslo.service.loopingcall [None req-3e03a2ab-9b36-4668-ac2b-6db320c9b08c a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 14:39:45 compute-0 nova_compute[250018]: 2026-01-20 14:39:45.896 250022 DEBUG nova.compute.manager [-] [instance: 0c6693f6-c588-4a64-86ee-cf44a6a36260] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 14:39:45 compute-0 nova_compute[250018]: 2026-01-20 14:39:45.896 250022 DEBUG nova.network.neutron [-] [instance: 0c6693f6-c588-4a64-86ee-cf44a6a36260] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 14:39:46 compute-0 ceph-mon[74360]: pgmap v1542: 321 pgs: 321 active+clean; 167 MiB data, 636 MiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 19 KiB/s wr, 175 op/s
Jan 20 14:39:46 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:39:46 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:39:46 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:39:46.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:39:46 compute-0 nova_compute[250018]: 2026-01-20 14:39:46.854 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:39:46 compute-0 nova_compute[250018]: 2026-01-20 14:39:46.909 250022 DEBUG nova.compute.manager [req-b7f7f0ee-b91b-48b7-8a37-95dda06fa273 req-85a5a08c-faa0-45df-afab-71cbef38dbea 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 0c6693f6-c588-4a64-86ee-cf44a6a36260] Received event network-vif-plugged-66a65e38-61c1-414e-8a99-7a5480b4b97b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:39:46 compute-0 nova_compute[250018]: 2026-01-20 14:39:46.910 250022 DEBUG oslo_concurrency.lockutils [req-b7f7f0ee-b91b-48b7-8a37-95dda06fa273 req-85a5a08c-faa0-45df-afab-71cbef38dbea 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "0c6693f6-c588-4a64-86ee-cf44a6a36260-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:39:46 compute-0 nova_compute[250018]: 2026-01-20 14:39:46.910 250022 DEBUG oslo_concurrency.lockutils [req-b7f7f0ee-b91b-48b7-8a37-95dda06fa273 req-85a5a08c-faa0-45df-afab-71cbef38dbea 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "0c6693f6-c588-4a64-86ee-cf44a6a36260-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:39:46 compute-0 nova_compute[250018]: 2026-01-20 14:39:46.910 250022 DEBUG oslo_concurrency.lockutils [req-b7f7f0ee-b91b-48b7-8a37-95dda06fa273 req-85a5a08c-faa0-45df-afab-71cbef38dbea 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "0c6693f6-c588-4a64-86ee-cf44a6a36260-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:39:46 compute-0 nova_compute[250018]: 2026-01-20 14:39:46.910 250022 DEBUG nova.compute.manager [req-b7f7f0ee-b91b-48b7-8a37-95dda06fa273 req-85a5a08c-faa0-45df-afab-71cbef38dbea 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 0c6693f6-c588-4a64-86ee-cf44a6a36260] No waiting events found dispatching network-vif-plugged-66a65e38-61c1-414e-8a99-7a5480b4b97b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:39:46 compute-0 nova_compute[250018]: 2026-01-20 14:39:46.910 250022 WARNING nova.compute.manager [req-b7f7f0ee-b91b-48b7-8a37-95dda06fa273 req-85a5a08c-faa0-45df-afab-71cbef38dbea 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 0c6693f6-c588-4a64-86ee-cf44a6a36260] Received unexpected event network-vif-plugged-66a65e38-61c1-414e-8a99-7a5480b4b97b for instance with vm_state active and task_state deleting.
Jan 20 14:39:47 compute-0 nova_compute[250018]: 2026-01-20 14:39:47.027 250022 DEBUG nova.network.neutron [-] [instance: 0c6693f6-c588-4a64-86ee-cf44a6a36260] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:39:47 compute-0 nova_compute[250018]: 2026-01-20 14:39:47.135 250022 DEBUG nova.compute.manager [req-b20b6750-43bc-4625-9377-de3220c50ee9 req-379d8c68-c655-4380-aaf2-1e87935c9a8c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 0c6693f6-c588-4a64-86ee-cf44a6a36260] Received event network-vif-deleted-66a65e38-61c1-414e-8a99-7a5480b4b97b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:39:47 compute-0 nova_compute[250018]: 2026-01-20 14:39:47.136 250022 INFO nova.compute.manager [req-b20b6750-43bc-4625-9377-de3220c50ee9 req-379d8c68-c655-4380-aaf2-1e87935c9a8c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 0c6693f6-c588-4a64-86ee-cf44a6a36260] Neutron deleted interface 66a65e38-61c1-414e-8a99-7a5480b4b97b; detaching it from the instance and deleting it from the info cache
Jan 20 14:39:47 compute-0 nova_compute[250018]: 2026-01-20 14:39:47.136 250022 DEBUG nova.network.neutron [req-b20b6750-43bc-4625-9377-de3220c50ee9 req-379d8c68-c655-4380-aaf2-1e87935c9a8c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 0c6693f6-c588-4a64-86ee-cf44a6a36260] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:39:47 compute-0 nova_compute[250018]: 2026-01-20 14:39:47.138 250022 INFO nova.compute.manager [-] [instance: 0c6693f6-c588-4a64-86ee-cf44a6a36260] Took 1.24 seconds to deallocate network for instance.
Jan 20 14:39:47 compute-0 nova_compute[250018]: 2026-01-20 14:39:47.192 250022 DEBUG nova.compute.manager [req-b20b6750-43bc-4625-9377-de3220c50ee9 req-379d8c68-c655-4380-aaf2-1e87935c9a8c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 0c6693f6-c588-4a64-86ee-cf44a6a36260] Detach interface failed, port_id=66a65e38-61c1-414e-8a99-7a5480b4b97b, reason: Instance 0c6693f6-c588-4a64-86ee-cf44a6a36260 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Jan 20 14:39:47 compute-0 nova_compute[250018]: 2026-01-20 14:39:47.212 250022 DEBUG oslo_concurrency.lockutils [None req-3e03a2ab-9b36-4668-ac2b-6db320c9b08c a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:39:47 compute-0 nova_compute[250018]: 2026-01-20 14:39:47.212 250022 DEBUG oslo_concurrency.lockutils [None req-3e03a2ab-9b36-4668-ac2b-6db320c9b08c a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:39:47 compute-0 nova_compute[250018]: 2026-01-20 14:39:47.310 250022 DEBUG oslo_concurrency.processutils [None req-3e03a2ab-9b36-4668-ac2b-6db320c9b08c a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:39:47 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1543: 321 pgs: 321 active+clean; 163 MiB data, 635 MiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 930 KiB/s wr, 83 op/s
Jan 20 14:39:47 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:39:47 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:39:47 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:39:47.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:39:47 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:39:47 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1926474629' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:39:47 compute-0 nova_compute[250018]: 2026-01-20 14:39:47.760 250022 DEBUG oslo_concurrency.processutils [None req-3e03a2ab-9b36-4668-ac2b-6db320c9b08c a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:39:47 compute-0 nova_compute[250018]: 2026-01-20 14:39:47.766 250022 DEBUG nova.compute.provider_tree [None req-3e03a2ab-9b36-4668-ac2b-6db320c9b08c a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:39:47 compute-0 nova_compute[250018]: 2026-01-20 14:39:47.783 250022 DEBUG nova.scheduler.client.report [None req-3e03a2ab-9b36-4668-ac2b-6db320c9b08c a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:39:47 compute-0 nova_compute[250018]: 2026-01-20 14:39:47.807 250022 DEBUG oslo_concurrency.lockutils [None req-3e03a2ab-9b36-4668-ac2b-6db320c9b08c a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.595s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:39:47 compute-0 nova_compute[250018]: 2026-01-20 14:39:47.834 250022 INFO nova.scheduler.client.report [None req-3e03a2ab-9b36-4668-ac2b-6db320c9b08c a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Deleted allocations for instance 0c6693f6-c588-4a64-86ee-cf44a6a36260
Jan 20 14:39:47 compute-0 nova_compute[250018]: 2026-01-20 14:39:47.907 250022 DEBUG oslo_concurrency.lockutils [None req-3e03a2ab-9b36-4668-ac2b-6db320c9b08c a3274e8540014ffa8cd910526cd964f7 ebadba01dc3642a9a3e39469ff5d4708 - - default default] Lock "0c6693f6-c588-4a64-86ee-cf44a6a36260" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.949s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:39:47 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:39:48 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:39:48 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:39:48 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:39:48.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:39:48 compute-0 ceph-mon[74360]: pgmap v1543: 321 pgs: 321 active+clean; 163 MiB data, 635 MiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 930 KiB/s wr, 83 op/s
Jan 20 14:39:48 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/1926474629' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:39:49 compute-0 nova_compute[250018]: 2026-01-20 14:39:49.500 250022 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768919974.4995346, 472d2aca-ddcf-4c68-87d9-c9fe623fae5e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:39:49 compute-0 nova_compute[250018]: 2026-01-20 14:39:49.501 250022 INFO nova.compute.manager [-] [instance: 472d2aca-ddcf-4c68-87d9-c9fe623fae5e] VM Stopped (Lifecycle Event)
Jan 20 14:39:49 compute-0 nova_compute[250018]: 2026-01-20 14:39:49.523 250022 DEBUG nova.compute.manager [None req-8c52f21c-48c3-445f-9360-0ef251bfeab7 - - - - - -] [instance: 472d2aca-ddcf-4c68-87d9-c9fe623fae5e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:39:49 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1544: 321 pgs: 321 active+clean; 166 MiB data, 634 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 2.4 MiB/s wr, 132 op/s
Jan 20 14:39:49 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:39:49 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:39:49 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:39:49.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:39:49 compute-0 nova_compute[250018]: 2026-01-20 14:39:49.737 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:39:50 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:39:50 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:39:50 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:39:50.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:39:50 compute-0 ceph-mon[74360]: pgmap v1544: 321 pgs: 321 active+clean; 166 MiB data, 634 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 2.4 MiB/s wr, 132 op/s
Jan 20 14:39:50 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/4224668871' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:39:51 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1545: 321 pgs: 321 active+clean; 165 MiB data, 634 MiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 3.9 MiB/s wr, 147 op/s
Jan 20 14:39:51 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:39:51 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:39:51 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:39:51.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:39:51 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/2535468181' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:39:51 compute-0 nova_compute[250018]: 2026-01-20 14:39:51.856 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:39:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:39:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:39:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:39:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:39:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:39:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:39:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Optimize plan auto_2026-01-20_14:39:52
Jan 20 14:39:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 14:39:52 compute-0 ceph-mgr[74653]: [balancer INFO root] do_upmap
Jan 20 14:39:52 compute-0 ceph-mgr[74653]: [balancer INFO root] pools ['cephfs.cephfs.data', 'backups', 'default.rgw.control', 'default.rgw.log', '.mgr', 'default.rgw.meta', 'images', 'vms', 'cephfs.cephfs.meta', 'volumes', '.rgw.root']
Jan 20 14:39:52 compute-0 ceph-mgr[74653]: [balancer INFO root] prepared 0/10 changes
Jan 20 14:39:52 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:39:52 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:39:52 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:39:52.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:39:52 compute-0 ceph-mon[74360]: pgmap v1545: 321 pgs: 321 active+clean; 165 MiB data, 634 MiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 3.9 MiB/s wr, 147 op/s
Jan 20 14:39:52 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:39:53 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1546: 321 pgs: 321 active+clean; 167 MiB data, 635 MiB used, 20 GiB / 21 GiB avail; 466 KiB/s rd, 3.9 MiB/s wr, 123 op/s
Jan 20 14:39:53 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:39:53 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:39:53 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:39:53.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:39:53 compute-0 ceph-mon[74360]: pgmap v1546: 321 pgs: 321 active+clean; 167 MiB data, 635 MiB used, 20 GiB / 21 GiB avail; 466 KiB/s rd, 3.9 MiB/s wr, 123 op/s
Jan 20 14:39:54 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:39:54 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:39:54 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:39:54.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:39:54 compute-0 nova_compute[250018]: 2026-01-20 14:39:54.738 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:39:55 compute-0 sshd-session[291030]: Invalid user ubuntu from 157.245.78.139 port 36868
Jan 20 14:39:55 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1547: 321 pgs: 321 active+clean; 167 MiB data, 636 MiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 3.9 MiB/s wr, 150 op/s
Jan 20 14:39:55 compute-0 sshd-session[291030]: Connection closed by invalid user ubuntu 157.245.78.139 port 36868 [preauth]
Jan 20 14:39:55 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:39:55 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:39:55 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:39:55.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:39:56 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:39:56 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:39:56 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:39:56.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:39:56 compute-0 ceph-mon[74360]: pgmap v1547: 321 pgs: 321 active+clean; 167 MiB data, 636 MiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 3.9 MiB/s wr, 150 op/s
Jan 20 14:39:56 compute-0 nova_compute[250018]: 2026-01-20 14:39:56.858 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:39:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 14:39:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 14:39:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:39:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:39:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:39:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:39:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:39:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:39:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:39:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:39:57 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1548: 321 pgs: 321 active+clean; 167 MiB data, 636 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 3.9 MiB/s wr, 177 op/s
Jan 20 14:39:57 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:39:57 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:39:57 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:39:57.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:39:57 compute-0 nova_compute[250018]: 2026-01-20 14:39:57.814 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:39:57 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:39:58 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:39:58 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:39:58 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:39:58.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:39:58 compute-0 ceph-mon[74360]: pgmap v1548: 321 pgs: 321 active+clean; 167 MiB data, 636 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 3.9 MiB/s wr, 177 op/s
Jan 20 14:39:59 compute-0 nova_compute[250018]: 2026-01-20 14:39:59.192 250022 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768919984.1912093, 0c6693f6-c588-4a64-86ee-cf44a6a36260 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:39:59 compute-0 nova_compute[250018]: 2026-01-20 14:39:59.192 250022 INFO nova.compute.manager [-] [instance: 0c6693f6-c588-4a64-86ee-cf44a6a36260] VM Stopped (Lifecycle Event)
Jan 20 14:39:59 compute-0 nova_compute[250018]: 2026-01-20 14:39:59.215 250022 DEBUG nova.compute.manager [None req-5f6bd64b-23d8-4397-8f4f-1cf1d71927e0 - - - - - -] [instance: 0c6693f6-c588-4a64-86ee-cf44a6a36260] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:39:59 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1549: 321 pgs: 321 active+clean; 167 MiB data, 636 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 3.0 MiB/s wr, 180 op/s
Jan 20 14:39:59 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:39:59 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:39:59 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:39:59.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:39:59 compute-0 nova_compute[250018]: 2026-01-20 14:39:59.739 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:00 compute-0 ceph-mon[74360]: log_channel(cluster) log [INF] : overall HEALTH_OK
Jan 20 14:40:00 compute-0 ceph-mon[74360]: pgmap v1549: 321 pgs: 321 active+clean; 167 MiB data, 636 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 3.0 MiB/s wr, 180 op/s
Jan 20 14:40:00 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:40:00 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:40:00 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:40:00.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:40:01 compute-0 ceph-mon[74360]: overall HEALTH_OK
Jan 20 14:40:01 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1550: 321 pgs: 321 active+clean; 167 MiB data, 636 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.5 MiB/s wr, 125 op/s
Jan 20 14:40:01 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:40:01 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:40:01 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:40:01.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:40:01 compute-0 nova_compute[250018]: 2026-01-20 14:40:01.860 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:02 compute-0 ceph-mon[74360]: pgmap v1550: 321 pgs: 321 active+clean; 167 MiB data, 636 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.5 MiB/s wr, 125 op/s
Jan 20 14:40:02 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:40:02 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:40:02 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:40:02.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:40:02 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:40:03 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1551: 321 pgs: 321 active+clean; 167 MiB data, 636 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 35 KiB/s wr, 75 op/s
Jan 20 14:40:03 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:40:03 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:40:03 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:40:03.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:40:04 compute-0 nova_compute[250018]: 2026-01-20 14:40:04.015 250022 DEBUG oslo_concurrency.lockutils [None req-868918d5-027e-4dcf-8caf-21142f133a24 1145324e6a8d44f28828a922ee70933a 8e4d1d7361c94c429f75bf58a2dd432e - - default default] Acquiring lock "9d050141-940b-4c59-8731-ca9d572d1127" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:40:04 compute-0 nova_compute[250018]: 2026-01-20 14:40:04.015 250022 DEBUG oslo_concurrency.lockutils [None req-868918d5-027e-4dcf-8caf-21142f133a24 1145324e6a8d44f28828a922ee70933a 8e4d1d7361c94c429f75bf58a2dd432e - - default default] Lock "9d050141-940b-4c59-8731-ca9d572d1127" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:40:04 compute-0 nova_compute[250018]: 2026-01-20 14:40:04.035 250022 DEBUG nova.compute.manager [None req-868918d5-027e-4dcf-8caf-21142f133a24 1145324e6a8d44f28828a922ee70933a 8e4d1d7361c94c429f75bf58a2dd432e - - default default] [instance: 9d050141-940b-4c59-8731-ca9d572d1127] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 14:40:04 compute-0 nova_compute[250018]: 2026-01-20 14:40:04.121 250022 DEBUG oslo_concurrency.lockutils [None req-868918d5-027e-4dcf-8caf-21142f133a24 1145324e6a8d44f28828a922ee70933a 8e4d1d7361c94c429f75bf58a2dd432e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:40:04 compute-0 nova_compute[250018]: 2026-01-20 14:40:04.121 250022 DEBUG oslo_concurrency.lockutils [None req-868918d5-027e-4dcf-8caf-21142f133a24 1145324e6a8d44f28828a922ee70933a 8e4d1d7361c94c429f75bf58a2dd432e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:40:04 compute-0 nova_compute[250018]: 2026-01-20 14:40:04.130 250022 DEBUG nova.virt.hardware [None req-868918d5-027e-4dcf-8caf-21142f133a24 1145324e6a8d44f28828a922ee70933a 8e4d1d7361c94c429f75bf58a2dd432e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 14:40:04 compute-0 nova_compute[250018]: 2026-01-20 14:40:04.130 250022 INFO nova.compute.claims [None req-868918d5-027e-4dcf-8caf-21142f133a24 1145324e6a8d44f28828a922ee70933a 8e4d1d7361c94c429f75bf58a2dd432e - - default default] [instance: 9d050141-940b-4c59-8731-ca9d572d1127] Claim successful on node compute-0.ctlplane.example.com
Jan 20 14:40:04 compute-0 nova_compute[250018]: 2026-01-20 14:40:04.230 250022 DEBUG oslo_concurrency.processutils [None req-868918d5-027e-4dcf-8caf-21142f133a24 1145324e6a8d44f28828a922ee70933a 8e4d1d7361c94c429f75bf58a2dd432e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:40:04 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:40:04 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:40:04 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:40:04.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:40:04 compute-0 nova_compute[250018]: 2026-01-20 14:40:04.740 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:04 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:40:04 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3907619847' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:40:04 compute-0 ceph-mon[74360]: pgmap v1551: 321 pgs: 321 active+clean; 167 MiB data, 636 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 35 KiB/s wr, 75 op/s
Jan 20 14:40:04 compute-0 nova_compute[250018]: 2026-01-20 14:40:04.822 250022 DEBUG oslo_concurrency.processutils [None req-868918d5-027e-4dcf-8caf-21142f133a24 1145324e6a8d44f28828a922ee70933a 8e4d1d7361c94c429f75bf58a2dd432e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.591s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:40:04 compute-0 nova_compute[250018]: 2026-01-20 14:40:04.828 250022 DEBUG nova.compute.provider_tree [None req-868918d5-027e-4dcf-8caf-21142f133a24 1145324e6a8d44f28828a922ee70933a 8e4d1d7361c94c429f75bf58a2dd432e - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:40:04 compute-0 nova_compute[250018]: 2026-01-20 14:40:04.849 250022 DEBUG nova.scheduler.client.report [None req-868918d5-027e-4dcf-8caf-21142f133a24 1145324e6a8d44f28828a922ee70933a 8e4d1d7361c94c429f75bf58a2dd432e - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:40:04 compute-0 nova_compute[250018]: 2026-01-20 14:40:04.871 250022 DEBUG oslo_concurrency.lockutils [None req-868918d5-027e-4dcf-8caf-21142f133a24 1145324e6a8d44f28828a922ee70933a 8e4d1d7361c94c429f75bf58a2dd432e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.749s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:40:04 compute-0 nova_compute[250018]: 2026-01-20 14:40:04.871 250022 DEBUG nova.compute.manager [None req-868918d5-027e-4dcf-8caf-21142f133a24 1145324e6a8d44f28828a922ee70933a 8e4d1d7361c94c429f75bf58a2dd432e - - default default] [instance: 9d050141-940b-4c59-8731-ca9d572d1127] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 14:40:04 compute-0 nova_compute[250018]: 2026-01-20 14:40:04.911 250022 DEBUG nova.compute.manager [None req-868918d5-027e-4dcf-8caf-21142f133a24 1145324e6a8d44f28828a922ee70933a 8e4d1d7361c94c429f75bf58a2dd432e - - default default] [instance: 9d050141-940b-4c59-8731-ca9d572d1127] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 20 14:40:04 compute-0 nova_compute[250018]: 2026-01-20 14:40:04.912 250022 DEBUG nova.network.neutron [None req-868918d5-027e-4dcf-8caf-21142f133a24 1145324e6a8d44f28828a922ee70933a 8e4d1d7361c94c429f75bf58a2dd432e - - default default] [instance: 9d050141-940b-4c59-8731-ca9d572d1127] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 14:40:04 compute-0 nova_compute[250018]: 2026-01-20 14:40:04.931 250022 INFO nova.virt.libvirt.driver [None req-868918d5-027e-4dcf-8caf-21142f133a24 1145324e6a8d44f28828a922ee70933a 8e4d1d7361c94c429f75bf58a2dd432e - - default default] [instance: 9d050141-940b-4c59-8731-ca9d572d1127] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 14:40:04 compute-0 nova_compute[250018]: 2026-01-20 14:40:04.950 250022 DEBUG nova.compute.manager [None req-868918d5-027e-4dcf-8caf-21142f133a24 1145324e6a8d44f28828a922ee70933a 8e4d1d7361c94c429f75bf58a2dd432e - - default default] [instance: 9d050141-940b-4c59-8731-ca9d572d1127] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 14:40:05 compute-0 sudo[291059]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:40:05 compute-0 sudo[291059]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:40:05 compute-0 sudo[291059]: pam_unix(sudo:session): session closed for user root
Jan 20 14:40:05 compute-0 nova_compute[250018]: 2026-01-20 14:40:05.031 250022 DEBUG nova.compute.manager [None req-868918d5-027e-4dcf-8caf-21142f133a24 1145324e6a8d44f28828a922ee70933a 8e4d1d7361c94c429f75bf58a2dd432e - - default default] [instance: 9d050141-940b-4c59-8731-ca9d572d1127] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 14:40:05 compute-0 nova_compute[250018]: 2026-01-20 14:40:05.033 250022 DEBUG nova.virt.libvirt.driver [None req-868918d5-027e-4dcf-8caf-21142f133a24 1145324e6a8d44f28828a922ee70933a 8e4d1d7361c94c429f75bf58a2dd432e - - default default] [instance: 9d050141-940b-4c59-8731-ca9d572d1127] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 14:40:05 compute-0 nova_compute[250018]: 2026-01-20 14:40:05.033 250022 INFO nova.virt.libvirt.driver [None req-868918d5-027e-4dcf-8caf-21142f133a24 1145324e6a8d44f28828a922ee70933a 8e4d1d7361c94c429f75bf58a2dd432e - - default default] [instance: 9d050141-940b-4c59-8731-ca9d572d1127] Creating image(s)
Jan 20 14:40:05 compute-0 nova_compute[250018]: 2026-01-20 14:40:05.066 250022 DEBUG nova.storage.rbd_utils [None req-868918d5-027e-4dcf-8caf-21142f133a24 1145324e6a8d44f28828a922ee70933a 8e4d1d7361c94c429f75bf58a2dd432e - - default default] rbd image 9d050141-940b-4c59-8731-ca9d572d1127_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:40:05 compute-0 sudo[291084]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:40:05 compute-0 sudo[291084]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:40:05 compute-0 sudo[291084]: pam_unix(sudo:session): session closed for user root
Jan 20 14:40:05 compute-0 nova_compute[250018]: 2026-01-20 14:40:05.104 250022 DEBUG nova.storage.rbd_utils [None req-868918d5-027e-4dcf-8caf-21142f133a24 1145324e6a8d44f28828a922ee70933a 8e4d1d7361c94c429f75bf58a2dd432e - - default default] rbd image 9d050141-940b-4c59-8731-ca9d572d1127_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:40:05 compute-0 nova_compute[250018]: 2026-01-20 14:40:05.139 250022 DEBUG nova.storage.rbd_utils [None req-868918d5-027e-4dcf-8caf-21142f133a24 1145324e6a8d44f28828a922ee70933a 8e4d1d7361c94c429f75bf58a2dd432e - - default default] rbd image 9d050141-940b-4c59-8731-ca9d572d1127_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:40:05 compute-0 nova_compute[250018]: 2026-01-20 14:40:05.143 250022 DEBUG oslo_concurrency.processutils [None req-868918d5-027e-4dcf-8caf-21142f133a24 1145324e6a8d44f28828a922ee70933a 8e4d1d7361c94c429f75bf58a2dd432e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:40:05 compute-0 nova_compute[250018]: 2026-01-20 14:40:05.211 250022 DEBUG oslo_concurrency.processutils [None req-868918d5-027e-4dcf-8caf-21142f133a24 1145324e6a8d44f28828a922ee70933a 8e4d1d7361c94c429f75bf58a2dd432e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:40:05 compute-0 nova_compute[250018]: 2026-01-20 14:40:05.212 250022 DEBUG oslo_concurrency.lockutils [None req-868918d5-027e-4dcf-8caf-21142f133a24 1145324e6a8d44f28828a922ee70933a 8e4d1d7361c94c429f75bf58a2dd432e - - default default] Acquiring lock "82d5c1918fd7c974214c7a48c1793a7a82560462" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:40:05 compute-0 nova_compute[250018]: 2026-01-20 14:40:05.213 250022 DEBUG oslo_concurrency.lockutils [None req-868918d5-027e-4dcf-8caf-21142f133a24 1145324e6a8d44f28828a922ee70933a 8e4d1d7361c94c429f75bf58a2dd432e - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:40:05 compute-0 nova_compute[250018]: 2026-01-20 14:40:05.213 250022 DEBUG oslo_concurrency.lockutils [None req-868918d5-027e-4dcf-8caf-21142f133a24 1145324e6a8d44f28828a922ee70933a 8e4d1d7361c94c429f75bf58a2dd432e - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:40:05 compute-0 nova_compute[250018]: 2026-01-20 14:40:05.304 250022 DEBUG nova.storage.rbd_utils [None req-868918d5-027e-4dcf-8caf-21142f133a24 1145324e6a8d44f28828a922ee70933a 8e4d1d7361c94c429f75bf58a2dd432e - - default default] rbd image 9d050141-940b-4c59-8731-ca9d572d1127_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:40:05 compute-0 nova_compute[250018]: 2026-01-20 14:40:05.308 250022 DEBUG oslo_concurrency.processutils [None req-868918d5-027e-4dcf-8caf-21142f133a24 1145324e6a8d44f28828a922ee70933a 8e4d1d7361c94c429f75bf58a2dd432e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 9d050141-940b-4c59-8731-ca9d572d1127_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:40:05 compute-0 nova_compute[250018]: 2026-01-20 14:40:05.349 250022 DEBUG nova.policy [None req-868918d5-027e-4dcf-8caf-21142f133a24 1145324e6a8d44f28828a922ee70933a 8e4d1d7361c94c429f75bf58a2dd432e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '1145324e6a8d44f28828a922ee70933a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8e4d1d7361c94c429f75bf58a2dd432e', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 20 14:40:05 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1552: 321 pgs: 321 active+clean; 167 MiB data, 636 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 32 KiB/s wr, 81 op/s
Jan 20 14:40:05 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:40:05 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:40:05 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:40:05.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:40:05 compute-0 nova_compute[250018]: 2026-01-20 14:40:05.749 250022 DEBUG oslo_concurrency.processutils [None req-868918d5-027e-4dcf-8caf-21142f133a24 1145324e6a8d44f28828a922ee70933a 8e4d1d7361c94c429f75bf58a2dd432e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 9d050141-940b-4c59-8731-ca9d572d1127_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:40:05 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3907619847' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:40:05 compute-0 nova_compute[250018]: 2026-01-20 14:40:05.850 250022 DEBUG nova.storage.rbd_utils [None req-868918d5-027e-4dcf-8caf-21142f133a24 1145324e6a8d44f28828a922ee70933a 8e4d1d7361c94c429f75bf58a2dd432e - - default default] resizing rbd image 9d050141-940b-4c59-8731-ca9d572d1127_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 20 14:40:05 compute-0 nova_compute[250018]: 2026-01-20 14:40:05.947 250022 DEBUG nova.objects.instance [None req-868918d5-027e-4dcf-8caf-21142f133a24 1145324e6a8d44f28828a922ee70933a 8e4d1d7361c94c429f75bf58a2dd432e - - default default] Lazy-loading 'migration_context' on Instance uuid 9d050141-940b-4c59-8731-ca9d572d1127 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:40:05 compute-0 nova_compute[250018]: 2026-01-20 14:40:05.963 250022 DEBUG nova.virt.libvirt.driver [None req-868918d5-027e-4dcf-8caf-21142f133a24 1145324e6a8d44f28828a922ee70933a 8e4d1d7361c94c429f75bf58a2dd432e - - default default] [instance: 9d050141-940b-4c59-8731-ca9d572d1127] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 14:40:05 compute-0 nova_compute[250018]: 2026-01-20 14:40:05.964 250022 DEBUG nova.virt.libvirt.driver [None req-868918d5-027e-4dcf-8caf-21142f133a24 1145324e6a8d44f28828a922ee70933a 8e4d1d7361c94c429f75bf58a2dd432e - - default default] [instance: 9d050141-940b-4c59-8731-ca9d572d1127] Ensure instance console log exists: /var/lib/nova/instances/9d050141-940b-4c59-8731-ca9d572d1127/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 14:40:05 compute-0 nova_compute[250018]: 2026-01-20 14:40:05.965 250022 DEBUG oslo_concurrency.lockutils [None req-868918d5-027e-4dcf-8caf-21142f133a24 1145324e6a8d44f28828a922ee70933a 8e4d1d7361c94c429f75bf58a2dd432e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:40:05 compute-0 nova_compute[250018]: 2026-01-20 14:40:05.965 250022 DEBUG oslo_concurrency.lockutils [None req-868918d5-027e-4dcf-8caf-21142f133a24 1145324e6a8d44f28828a922ee70933a 8e4d1d7361c94c429f75bf58a2dd432e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:40:05 compute-0 nova_compute[250018]: 2026-01-20 14:40:05.965 250022 DEBUG oslo_concurrency.lockutils [None req-868918d5-027e-4dcf-8caf-21142f133a24 1145324e6a8d44f28828a922ee70933a 8e4d1d7361c94c429f75bf58a2dd432e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:40:06 compute-0 nova_compute[250018]: 2026-01-20 14:40:06.256 250022 DEBUG nova.network.neutron [None req-868918d5-027e-4dcf-8caf-21142f133a24 1145324e6a8d44f28828a922ee70933a 8e4d1d7361c94c429f75bf58a2dd432e - - default default] [instance: 9d050141-940b-4c59-8731-ca9d572d1127] Successfully created port: ed987e23-05ae-4b7e-a376-14db3bab9659 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 20 14:40:06 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:40:06 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:40:06 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:40:06.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:40:06 compute-0 ceph-mon[74360]: pgmap v1552: 321 pgs: 321 active+clean; 167 MiB data, 636 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 32 KiB/s wr, 81 op/s
Jan 20 14:40:06 compute-0 nova_compute[250018]: 2026-01-20 14:40:06.862 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:07 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1553: 321 pgs: 321 active+clean; 197 MiB data, 654 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 1.6 MiB/s wr, 60 op/s
Jan 20 14:40:07 compute-0 nova_compute[250018]: 2026-01-20 14:40:07.700 250022 DEBUG nova.network.neutron [None req-868918d5-027e-4dcf-8caf-21142f133a24 1145324e6a8d44f28828a922ee70933a 8e4d1d7361c94c429f75bf58a2dd432e - - default default] [instance: 9d050141-940b-4c59-8731-ca9d572d1127] Successfully updated port: ed987e23-05ae-4b7e-a376-14db3bab9659 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 20 14:40:07 compute-0 nova_compute[250018]: 2026-01-20 14:40:07.713 250022 DEBUG oslo_concurrency.lockutils [None req-868918d5-027e-4dcf-8caf-21142f133a24 1145324e6a8d44f28828a922ee70933a 8e4d1d7361c94c429f75bf58a2dd432e - - default default] Acquiring lock "refresh_cache-9d050141-940b-4c59-8731-ca9d572d1127" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:40:07 compute-0 nova_compute[250018]: 2026-01-20 14:40:07.714 250022 DEBUG oslo_concurrency.lockutils [None req-868918d5-027e-4dcf-8caf-21142f133a24 1145324e6a8d44f28828a922ee70933a 8e4d1d7361c94c429f75bf58a2dd432e - - default default] Acquired lock "refresh_cache-9d050141-940b-4c59-8731-ca9d572d1127" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:40:07 compute-0 nova_compute[250018]: 2026-01-20 14:40:07.714 250022 DEBUG nova.network.neutron [None req-868918d5-027e-4dcf-8caf-21142f133a24 1145324e6a8d44f28828a922ee70933a 8e4d1d7361c94c429f75bf58a2dd432e - - default default] [instance: 9d050141-940b-4c59-8731-ca9d572d1127] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:40:07 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:40:07 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:40:07 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:40:07.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:40:07 compute-0 nova_compute[250018]: 2026-01-20 14:40:07.814 250022 DEBUG nova.compute.manager [req-a7242606-25e3-4c10-9c9d-8ee97cadf045 req-fdb2b28c-6e8a-485e-b558-93745357ee8b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 9d050141-940b-4c59-8731-ca9d572d1127] Received event network-changed-ed987e23-05ae-4b7e-a376-14db3bab9659 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:40:07 compute-0 nova_compute[250018]: 2026-01-20 14:40:07.815 250022 DEBUG nova.compute.manager [req-a7242606-25e3-4c10-9c9d-8ee97cadf045 req-fdb2b28c-6e8a-485e-b558-93745357ee8b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 9d050141-940b-4c59-8731-ca9d572d1127] Refreshing instance network info cache due to event network-changed-ed987e23-05ae-4b7e-a376-14db3bab9659. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:40:07 compute-0 nova_compute[250018]: 2026-01-20 14:40:07.815 250022 DEBUG oslo_concurrency.lockutils [req-a7242606-25e3-4c10-9c9d-8ee97cadf045 req-fdb2b28c-6e8a-485e-b558-93745357ee8b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-9d050141-940b-4c59-8731-ca9d572d1127" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:40:07 compute-0 nova_compute[250018]: 2026-01-20 14:40:07.905 250022 DEBUG nova.network.neutron [None req-868918d5-027e-4dcf-8caf-21142f133a24 1145324e6a8d44f28828a922ee70933a 8e4d1d7361c94c429f75bf58a2dd432e - - default default] [instance: 9d050141-940b-4c59-8731-ca9d572d1127] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 14:40:07 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:40:08 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:40:08 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:40:08 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:40:08.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:40:08 compute-0 ceph-mon[74360]: pgmap v1553: 321 pgs: 321 active+clean; 197 MiB data, 654 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 1.6 MiB/s wr, 60 op/s
Jan 20 14:40:09 compute-0 nova_compute[250018]: 2026-01-20 14:40:09.087 250022 DEBUG nova.network.neutron [None req-868918d5-027e-4dcf-8caf-21142f133a24 1145324e6a8d44f28828a922ee70933a 8e4d1d7361c94c429f75bf58a2dd432e - - default default] [instance: 9d050141-940b-4c59-8731-ca9d572d1127] Updating instance_info_cache with network_info: [{"id": "ed987e23-05ae-4b7e-a376-14db3bab9659", "address": "fa:16:3e:fb:16:3a", "network": {"id": "7e8c4393-f543-4223-8342-f22c66d7df5a", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-1526085917-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8e4d1d7361c94c429f75bf58a2dd432e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "taped987e23-05", "ovs_interfaceid": "ed987e23-05ae-4b7e-a376-14db3bab9659", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:40:09 compute-0 nova_compute[250018]: 2026-01-20 14:40:09.124 250022 DEBUG oslo_concurrency.lockutils [None req-868918d5-027e-4dcf-8caf-21142f133a24 1145324e6a8d44f28828a922ee70933a 8e4d1d7361c94c429f75bf58a2dd432e - - default default] Releasing lock "refresh_cache-9d050141-940b-4c59-8731-ca9d572d1127" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:40:09 compute-0 nova_compute[250018]: 2026-01-20 14:40:09.124 250022 DEBUG nova.compute.manager [None req-868918d5-027e-4dcf-8caf-21142f133a24 1145324e6a8d44f28828a922ee70933a 8e4d1d7361c94c429f75bf58a2dd432e - - default default] [instance: 9d050141-940b-4c59-8731-ca9d572d1127] Instance network_info: |[{"id": "ed987e23-05ae-4b7e-a376-14db3bab9659", "address": "fa:16:3e:fb:16:3a", "network": {"id": "7e8c4393-f543-4223-8342-f22c66d7df5a", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-1526085917-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8e4d1d7361c94c429f75bf58a2dd432e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "taped987e23-05", "ovs_interfaceid": "ed987e23-05ae-4b7e-a376-14db3bab9659", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 20 14:40:09 compute-0 nova_compute[250018]: 2026-01-20 14:40:09.125 250022 DEBUG oslo_concurrency.lockutils [req-a7242606-25e3-4c10-9c9d-8ee97cadf045 req-fdb2b28c-6e8a-485e-b558-93745357ee8b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-9d050141-940b-4c59-8731-ca9d572d1127" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:40:09 compute-0 nova_compute[250018]: 2026-01-20 14:40:09.125 250022 DEBUG nova.network.neutron [req-a7242606-25e3-4c10-9c9d-8ee97cadf045 req-fdb2b28c-6e8a-485e-b558-93745357ee8b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 9d050141-940b-4c59-8731-ca9d572d1127] Refreshing network info cache for port ed987e23-05ae-4b7e-a376-14db3bab9659 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:40:09 compute-0 nova_compute[250018]: 2026-01-20 14:40:09.128 250022 DEBUG nova.virt.libvirt.driver [None req-868918d5-027e-4dcf-8caf-21142f133a24 1145324e6a8d44f28828a922ee70933a 8e4d1d7361c94c429f75bf58a2dd432e - - default default] [instance: 9d050141-940b-4c59-8731-ca9d572d1127] Start _get_guest_xml network_info=[{"id": "ed987e23-05ae-4b7e-a376-14db3bab9659", "address": "fa:16:3e:fb:16:3a", "network": {"id": "7e8c4393-f543-4223-8342-f22c66d7df5a", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-1526085917-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8e4d1d7361c94c429f75bf58a2dd432e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "taped987e23-05", "ovs_interfaceid": "ed987e23-05ae-4b7e-a376-14db3bab9659", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encrypted': False, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'guest_format': None, 'size': 0, 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 14:40:09 compute-0 nova_compute[250018]: 2026-01-20 14:40:09.135 250022 WARNING nova.virt.libvirt.driver [None req-868918d5-027e-4dcf-8caf-21142f133a24 1145324e6a8d44f28828a922ee70933a 8e4d1d7361c94c429f75bf58a2dd432e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:40:09 compute-0 nova_compute[250018]: 2026-01-20 14:40:09.143 250022 DEBUG nova.virt.libvirt.host [None req-868918d5-027e-4dcf-8caf-21142f133a24 1145324e6a8d44f28828a922ee70933a 8e4d1d7361c94c429f75bf58a2dd432e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 14:40:09 compute-0 nova_compute[250018]: 2026-01-20 14:40:09.144 250022 DEBUG nova.virt.libvirt.host [None req-868918d5-027e-4dcf-8caf-21142f133a24 1145324e6a8d44f28828a922ee70933a 8e4d1d7361c94c429f75bf58a2dd432e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 14:40:09 compute-0 nova_compute[250018]: 2026-01-20 14:40:09.152 250022 DEBUG nova.virt.libvirt.host [None req-868918d5-027e-4dcf-8caf-21142f133a24 1145324e6a8d44f28828a922ee70933a 8e4d1d7361c94c429f75bf58a2dd432e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 14:40:09 compute-0 nova_compute[250018]: 2026-01-20 14:40:09.152 250022 DEBUG nova.virt.libvirt.host [None req-868918d5-027e-4dcf-8caf-21142f133a24 1145324e6a8d44f28828a922ee70933a 8e4d1d7361c94c429f75bf58a2dd432e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 14:40:09 compute-0 nova_compute[250018]: 2026-01-20 14:40:09.154 250022 DEBUG nova.virt.libvirt.driver [None req-868918d5-027e-4dcf-8caf-21142f133a24 1145324e6a8d44f28828a922ee70933a 8e4d1d7361c94c429f75bf58a2dd432e - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 14:40:09 compute-0 nova_compute[250018]: 2026-01-20 14:40:09.154 250022 DEBUG nova.virt.hardware [None req-868918d5-027e-4dcf-8caf-21142f133a24 1145324e6a8d44f28828a922ee70933a 8e4d1d7361c94c429f75bf58a2dd432e - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 14:40:09 compute-0 nova_compute[250018]: 2026-01-20 14:40:09.155 250022 DEBUG nova.virt.hardware [None req-868918d5-027e-4dcf-8caf-21142f133a24 1145324e6a8d44f28828a922ee70933a 8e4d1d7361c94c429f75bf58a2dd432e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 14:40:09 compute-0 nova_compute[250018]: 2026-01-20 14:40:09.155 250022 DEBUG nova.virt.hardware [None req-868918d5-027e-4dcf-8caf-21142f133a24 1145324e6a8d44f28828a922ee70933a 8e4d1d7361c94c429f75bf58a2dd432e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 14:40:09 compute-0 nova_compute[250018]: 2026-01-20 14:40:09.155 250022 DEBUG nova.virt.hardware [None req-868918d5-027e-4dcf-8caf-21142f133a24 1145324e6a8d44f28828a922ee70933a 8e4d1d7361c94c429f75bf58a2dd432e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 14:40:09 compute-0 nova_compute[250018]: 2026-01-20 14:40:09.155 250022 DEBUG nova.virt.hardware [None req-868918d5-027e-4dcf-8caf-21142f133a24 1145324e6a8d44f28828a922ee70933a 8e4d1d7361c94c429f75bf58a2dd432e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 14:40:09 compute-0 nova_compute[250018]: 2026-01-20 14:40:09.156 250022 DEBUG nova.virt.hardware [None req-868918d5-027e-4dcf-8caf-21142f133a24 1145324e6a8d44f28828a922ee70933a 8e4d1d7361c94c429f75bf58a2dd432e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 14:40:09 compute-0 nova_compute[250018]: 2026-01-20 14:40:09.156 250022 DEBUG nova.virt.hardware [None req-868918d5-027e-4dcf-8caf-21142f133a24 1145324e6a8d44f28828a922ee70933a 8e4d1d7361c94c429f75bf58a2dd432e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 14:40:09 compute-0 nova_compute[250018]: 2026-01-20 14:40:09.156 250022 DEBUG nova.virt.hardware [None req-868918d5-027e-4dcf-8caf-21142f133a24 1145324e6a8d44f28828a922ee70933a 8e4d1d7361c94c429f75bf58a2dd432e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 14:40:09 compute-0 nova_compute[250018]: 2026-01-20 14:40:09.157 250022 DEBUG nova.virt.hardware [None req-868918d5-027e-4dcf-8caf-21142f133a24 1145324e6a8d44f28828a922ee70933a 8e4d1d7361c94c429f75bf58a2dd432e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 14:40:09 compute-0 nova_compute[250018]: 2026-01-20 14:40:09.157 250022 DEBUG nova.virt.hardware [None req-868918d5-027e-4dcf-8caf-21142f133a24 1145324e6a8d44f28828a922ee70933a 8e4d1d7361c94c429f75bf58a2dd432e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 14:40:09 compute-0 nova_compute[250018]: 2026-01-20 14:40:09.157 250022 DEBUG nova.virt.hardware [None req-868918d5-027e-4dcf-8caf-21142f133a24 1145324e6a8d44f28828a922ee70933a 8e4d1d7361c94c429f75bf58a2dd432e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 14:40:09 compute-0 nova_compute[250018]: 2026-01-20 14:40:09.160 250022 DEBUG oslo_concurrency.processutils [None req-868918d5-027e-4dcf-8caf-21142f133a24 1145324e6a8d44f28828a922ee70933a 8e4d1d7361c94c429f75bf58a2dd432e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:40:09 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:40:09 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3459109517' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:40:09 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1554: 321 pgs: 321 active+clean; 207 MiB data, 677 MiB used, 20 GiB / 21 GiB avail; 658 KiB/s rd, 2.0 MiB/s wr, 44 op/s
Jan 20 14:40:09 compute-0 nova_compute[250018]: 2026-01-20 14:40:09.614 250022 DEBUG oslo_concurrency.processutils [None req-868918d5-027e-4dcf-8caf-21142f133a24 1145324e6a8d44f28828a922ee70933a 8e4d1d7361c94c429f75bf58a2dd432e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:40:09 compute-0 nova_compute[250018]: 2026-01-20 14:40:09.643 250022 DEBUG nova.storage.rbd_utils [None req-868918d5-027e-4dcf-8caf-21142f133a24 1145324e6a8d44f28828a922ee70933a 8e4d1d7361c94c429f75bf58a2dd432e - - default default] rbd image 9d050141-940b-4c59-8731-ca9d572d1127_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:40:09 compute-0 nova_compute[250018]: 2026-01-20 14:40:09.648 250022 DEBUG oslo_concurrency.processutils [None req-868918d5-027e-4dcf-8caf-21142f133a24 1145324e6a8d44f28828a922ee70933a 8e4d1d7361c94c429f75bf58a2dd432e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:40:09 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:40:09 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:40:09 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:40:09.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:40:09 compute-0 nova_compute[250018]: 2026-01-20 14:40:09.742 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:09 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3459109517' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:40:10 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:40:10 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1895402904' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:40:10 compute-0 nova_compute[250018]: 2026-01-20 14:40:10.089 250022 DEBUG oslo_concurrency.processutils [None req-868918d5-027e-4dcf-8caf-21142f133a24 1145324e6a8d44f28828a922ee70933a 8e4d1d7361c94c429f75bf58a2dd432e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:40:10 compute-0 nova_compute[250018]: 2026-01-20 14:40:10.091 250022 DEBUG nova.virt.libvirt.vif [None req-868918d5-027e-4dcf-8caf-21142f133a24 1145324e6a8d44f28828a922ee70933a 8e4d1d7361c94c429f75bf58a2dd432e - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:40:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='guest-instance-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx-guest-test.domaintest.com',id=67,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCFJgU7mCPKfkFgr/oKcVmlRRhJZj0QWIlBMfbPg1uRVsyziaMY8yTLUMmvLSt4t4kN6gHRbpZRaphoWHbFjBsO45c+rkSRuECrUvq9pyMwJAvZ1U5YGbNmZ5iOi/oGHAg==',key_name='tempest-keypair-1460970199',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8e4d1d7361c94c429f75bf58a2dd432e',ramdisk_id='',reservation_id='r-0c7303nr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersV294TestFqdnHostnames-698951391',owner_user_name='tempest-ServersV294TestFqdnHostnames-698951391-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:40:04Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1145324e6a8d44f28828a922ee70933a',uuid=9d050141-940b-4c59-8731-ca9d572d1127,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ed987e23-05ae-4b7e-a376-14db3bab9659", "address": "fa:16:3e:fb:16:3a", "network": {"id": "7e8c4393-f543-4223-8342-f22c66d7df5a", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-1526085917-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8e4d1d7361c94c429f75bf58a2dd432e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "taped987e23-05", "ovs_interfaceid": "ed987e23-05ae-4b7e-a376-14db3bab9659", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 14:40:10 compute-0 nova_compute[250018]: 2026-01-20 14:40:10.091 250022 DEBUG nova.network.os_vif_util [None req-868918d5-027e-4dcf-8caf-21142f133a24 1145324e6a8d44f28828a922ee70933a 8e4d1d7361c94c429f75bf58a2dd432e - - default default] Converting VIF {"id": "ed987e23-05ae-4b7e-a376-14db3bab9659", "address": "fa:16:3e:fb:16:3a", "network": {"id": "7e8c4393-f543-4223-8342-f22c66d7df5a", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-1526085917-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8e4d1d7361c94c429f75bf58a2dd432e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "taped987e23-05", "ovs_interfaceid": "ed987e23-05ae-4b7e-a376-14db3bab9659", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:40:10 compute-0 nova_compute[250018]: 2026-01-20 14:40:10.092 250022 DEBUG nova.network.os_vif_util [None req-868918d5-027e-4dcf-8caf-21142f133a24 1145324e6a8d44f28828a922ee70933a 8e4d1d7361c94c429f75bf58a2dd432e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fb:16:3a,bridge_name='br-int',has_traffic_filtering=True,id=ed987e23-05ae-4b7e-a376-14db3bab9659,network=Network(7e8c4393-f543-4223-8342-f22c66d7df5a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='taped987e23-05') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:40:10 compute-0 nova_compute[250018]: 2026-01-20 14:40:10.093 250022 DEBUG nova.objects.instance [None req-868918d5-027e-4dcf-8caf-21142f133a24 1145324e6a8d44f28828a922ee70933a 8e4d1d7361c94c429f75bf58a2dd432e - - default default] Lazy-loading 'pci_devices' on Instance uuid 9d050141-940b-4c59-8731-ca9d572d1127 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:40:10 compute-0 nova_compute[250018]: 2026-01-20 14:40:10.106 250022 DEBUG nova.virt.libvirt.driver [None req-868918d5-027e-4dcf-8caf-21142f133a24 1145324e6a8d44f28828a922ee70933a 8e4d1d7361c94c429f75bf58a2dd432e - - default default] [instance: 9d050141-940b-4c59-8731-ca9d572d1127] End _get_guest_xml xml=<domain type="kvm">
Jan 20 14:40:10 compute-0 nova_compute[250018]:   <uuid>9d050141-940b-4c59-8731-ca9d572d1127</uuid>
Jan 20 14:40:10 compute-0 nova_compute[250018]:   <name>instance-00000043</name>
Jan 20 14:40:10 compute-0 nova_compute[250018]:   <memory>131072</memory>
Jan 20 14:40:10 compute-0 nova_compute[250018]:   <vcpu>1</vcpu>
Jan 20 14:40:10 compute-0 nova_compute[250018]:   <metadata>
Jan 20 14:40:10 compute-0 nova_compute[250018]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 14:40:10 compute-0 nova_compute[250018]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 14:40:10 compute-0 nova_compute[250018]:       <nova:name>guest-instance-1</nova:name>
Jan 20 14:40:10 compute-0 nova_compute[250018]:       <nova:creationTime>2026-01-20 14:40:09</nova:creationTime>
Jan 20 14:40:10 compute-0 nova_compute[250018]:       <nova:flavor name="m1.nano">
Jan 20 14:40:10 compute-0 nova_compute[250018]:         <nova:memory>128</nova:memory>
Jan 20 14:40:10 compute-0 nova_compute[250018]:         <nova:disk>1</nova:disk>
Jan 20 14:40:10 compute-0 nova_compute[250018]:         <nova:swap>0</nova:swap>
Jan 20 14:40:10 compute-0 nova_compute[250018]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 14:40:10 compute-0 nova_compute[250018]:         <nova:vcpus>1</nova:vcpus>
Jan 20 14:40:10 compute-0 nova_compute[250018]:       </nova:flavor>
Jan 20 14:40:10 compute-0 nova_compute[250018]:       <nova:owner>
Jan 20 14:40:10 compute-0 nova_compute[250018]:         <nova:user uuid="1145324e6a8d44f28828a922ee70933a">tempest-ServersV294TestFqdnHostnames-698951391-project-member</nova:user>
Jan 20 14:40:10 compute-0 nova_compute[250018]:         <nova:project uuid="8e4d1d7361c94c429f75bf58a2dd432e">tempest-ServersV294TestFqdnHostnames-698951391</nova:project>
Jan 20 14:40:10 compute-0 nova_compute[250018]:       </nova:owner>
Jan 20 14:40:10 compute-0 nova_compute[250018]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 14:40:10 compute-0 nova_compute[250018]:       <nova:ports>
Jan 20 14:40:10 compute-0 nova_compute[250018]:         <nova:port uuid="ed987e23-05ae-4b7e-a376-14db3bab9659">
Jan 20 14:40:10 compute-0 nova_compute[250018]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Jan 20 14:40:10 compute-0 nova_compute[250018]:         </nova:port>
Jan 20 14:40:10 compute-0 nova_compute[250018]:       </nova:ports>
Jan 20 14:40:10 compute-0 nova_compute[250018]:     </nova:instance>
Jan 20 14:40:10 compute-0 nova_compute[250018]:   </metadata>
Jan 20 14:40:10 compute-0 nova_compute[250018]:   <sysinfo type="smbios">
Jan 20 14:40:10 compute-0 nova_compute[250018]:     <system>
Jan 20 14:40:10 compute-0 nova_compute[250018]:       <entry name="manufacturer">RDO</entry>
Jan 20 14:40:10 compute-0 nova_compute[250018]:       <entry name="product">OpenStack Compute</entry>
Jan 20 14:40:10 compute-0 nova_compute[250018]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 14:40:10 compute-0 nova_compute[250018]:       <entry name="serial">9d050141-940b-4c59-8731-ca9d572d1127</entry>
Jan 20 14:40:10 compute-0 nova_compute[250018]:       <entry name="uuid">9d050141-940b-4c59-8731-ca9d572d1127</entry>
Jan 20 14:40:10 compute-0 nova_compute[250018]:       <entry name="family">Virtual Machine</entry>
Jan 20 14:40:10 compute-0 nova_compute[250018]:     </system>
Jan 20 14:40:10 compute-0 nova_compute[250018]:   </sysinfo>
Jan 20 14:40:10 compute-0 nova_compute[250018]:   <os>
Jan 20 14:40:10 compute-0 nova_compute[250018]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 14:40:10 compute-0 nova_compute[250018]:     <boot dev="hd"/>
Jan 20 14:40:10 compute-0 nova_compute[250018]:     <smbios mode="sysinfo"/>
Jan 20 14:40:10 compute-0 nova_compute[250018]:   </os>
Jan 20 14:40:10 compute-0 nova_compute[250018]:   <features>
Jan 20 14:40:10 compute-0 nova_compute[250018]:     <acpi/>
Jan 20 14:40:10 compute-0 nova_compute[250018]:     <apic/>
Jan 20 14:40:10 compute-0 nova_compute[250018]:     <vmcoreinfo/>
Jan 20 14:40:10 compute-0 nova_compute[250018]:   </features>
Jan 20 14:40:10 compute-0 nova_compute[250018]:   <clock offset="utc">
Jan 20 14:40:10 compute-0 nova_compute[250018]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 14:40:10 compute-0 nova_compute[250018]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 14:40:10 compute-0 nova_compute[250018]:     <timer name="hpet" present="no"/>
Jan 20 14:40:10 compute-0 nova_compute[250018]:   </clock>
Jan 20 14:40:10 compute-0 nova_compute[250018]:   <cpu mode="custom" match="exact">
Jan 20 14:40:10 compute-0 nova_compute[250018]:     <model>Nehalem</model>
Jan 20 14:40:10 compute-0 nova_compute[250018]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 14:40:10 compute-0 nova_compute[250018]:   </cpu>
Jan 20 14:40:10 compute-0 nova_compute[250018]:   <devices>
Jan 20 14:40:10 compute-0 nova_compute[250018]:     <disk type="network" device="disk">
Jan 20 14:40:10 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 14:40:10 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/9d050141-940b-4c59-8731-ca9d572d1127_disk">
Jan 20 14:40:10 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:40:10 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:40:10 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:40:10 compute-0 nova_compute[250018]:       </source>
Jan 20 14:40:10 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 14:40:10 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:40:10 compute-0 nova_compute[250018]:       </auth>
Jan 20 14:40:10 compute-0 nova_compute[250018]:       <target dev="vda" bus="virtio"/>
Jan 20 14:40:10 compute-0 nova_compute[250018]:     </disk>
Jan 20 14:40:10 compute-0 nova_compute[250018]:     <disk type="network" device="cdrom">
Jan 20 14:40:10 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 14:40:10 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/9d050141-940b-4c59-8731-ca9d572d1127_disk.config">
Jan 20 14:40:10 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:40:10 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:40:10 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:40:10 compute-0 nova_compute[250018]:       </source>
Jan 20 14:40:10 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 14:40:10 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:40:10 compute-0 nova_compute[250018]:       </auth>
Jan 20 14:40:10 compute-0 nova_compute[250018]:       <target dev="sda" bus="sata"/>
Jan 20 14:40:10 compute-0 nova_compute[250018]:     </disk>
Jan 20 14:40:10 compute-0 nova_compute[250018]:     <interface type="ethernet">
Jan 20 14:40:10 compute-0 nova_compute[250018]:       <mac address="fa:16:3e:fb:16:3a"/>
Jan 20 14:40:10 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 14:40:10 compute-0 nova_compute[250018]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 14:40:10 compute-0 nova_compute[250018]:       <mtu size="1442"/>
Jan 20 14:40:10 compute-0 nova_compute[250018]:       <target dev="taped987e23-05"/>
Jan 20 14:40:10 compute-0 nova_compute[250018]:     </interface>
Jan 20 14:40:10 compute-0 nova_compute[250018]:     <serial type="pty">
Jan 20 14:40:10 compute-0 nova_compute[250018]:       <log file="/var/lib/nova/instances/9d050141-940b-4c59-8731-ca9d572d1127/console.log" append="off"/>
Jan 20 14:40:10 compute-0 nova_compute[250018]:     </serial>
Jan 20 14:40:10 compute-0 nova_compute[250018]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 14:40:10 compute-0 nova_compute[250018]:     <video>
Jan 20 14:40:10 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 14:40:10 compute-0 nova_compute[250018]:     </video>
Jan 20 14:40:10 compute-0 nova_compute[250018]:     <input type="tablet" bus="usb"/>
Jan 20 14:40:10 compute-0 nova_compute[250018]:     <rng model="virtio">
Jan 20 14:40:10 compute-0 nova_compute[250018]:       <backend model="random">/dev/urandom</backend>
Jan 20 14:40:10 compute-0 nova_compute[250018]:     </rng>
Jan 20 14:40:10 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root"/>
Jan 20 14:40:10 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:40:10 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:40:10 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:40:10 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:40:10 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:40:10 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:40:10 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:40:10 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:40:10 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:40:10 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:40:10 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:40:10 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:40:10 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:40:10 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:40:10 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:40:10 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:40:10 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:40:10 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:40:10 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:40:10 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:40:10 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:40:10 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:40:10 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:40:10 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:40:10 compute-0 nova_compute[250018]:     <controller type="usb" index="0"/>
Jan 20 14:40:10 compute-0 nova_compute[250018]:     <memballoon model="virtio">
Jan 20 14:40:10 compute-0 nova_compute[250018]:       <stats period="10"/>
Jan 20 14:40:10 compute-0 nova_compute[250018]:     </memballoon>
Jan 20 14:40:10 compute-0 nova_compute[250018]:   </devices>
Jan 20 14:40:10 compute-0 nova_compute[250018]: </domain>
Jan 20 14:40:10 compute-0 nova_compute[250018]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 14:40:10 compute-0 nova_compute[250018]: 2026-01-20 14:40:10.108 250022 DEBUG nova.compute.manager [None req-868918d5-027e-4dcf-8caf-21142f133a24 1145324e6a8d44f28828a922ee70933a 8e4d1d7361c94c429f75bf58a2dd432e - - default default] [instance: 9d050141-940b-4c59-8731-ca9d572d1127] Preparing to wait for external event network-vif-plugged-ed987e23-05ae-4b7e-a376-14db3bab9659 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 20 14:40:10 compute-0 nova_compute[250018]: 2026-01-20 14:40:10.108 250022 DEBUG oslo_concurrency.lockutils [None req-868918d5-027e-4dcf-8caf-21142f133a24 1145324e6a8d44f28828a922ee70933a 8e4d1d7361c94c429f75bf58a2dd432e - - default default] Acquiring lock "9d050141-940b-4c59-8731-ca9d572d1127-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:40:10 compute-0 nova_compute[250018]: 2026-01-20 14:40:10.108 250022 DEBUG oslo_concurrency.lockutils [None req-868918d5-027e-4dcf-8caf-21142f133a24 1145324e6a8d44f28828a922ee70933a 8e4d1d7361c94c429f75bf58a2dd432e - - default default] Lock "9d050141-940b-4c59-8731-ca9d572d1127-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:40:10 compute-0 nova_compute[250018]: 2026-01-20 14:40:10.108 250022 DEBUG oslo_concurrency.lockutils [None req-868918d5-027e-4dcf-8caf-21142f133a24 1145324e6a8d44f28828a922ee70933a 8e4d1d7361c94c429f75bf58a2dd432e - - default default] Lock "9d050141-940b-4c59-8731-ca9d572d1127-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:40:10 compute-0 nova_compute[250018]: 2026-01-20 14:40:10.109 250022 DEBUG nova.virt.libvirt.vif [None req-868918d5-027e-4dcf-8caf-21142f133a24 1145324e6a8d44f28828a922ee70933a 8e4d1d7361c94c429f75bf58a2dd432e - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:40:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='guest-instance-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx-guest-test.domaintest.com',id=67,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCFJgU7mCPKfkFgr/oKcVmlRRhJZj0QWIlBMfbPg1uRVsyziaMY8yTLUMmvLSt4t4kN6gHRbpZRaphoWHbFjBsO45c+rkSRuECrUvq9pyMwJAvZ1U5YGbNmZ5iOi/oGHAg==',key_name='tempest-keypair-1460970199',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8e4d1d7361c94c429f75bf58a2dd432e',ramdisk_id='',reservation_id='r-0c7303nr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersV294TestFqdnHostnames-698951391',owner_user_name='tempest-ServersV294TestFqdnHostnames-698951391-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:40:04Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1145324e6a8d44f28828a922ee70933a',uuid=9d050141-940b-4c59-8731-ca9d572d1127,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ed987e23-05ae-4b7e-a376-14db3bab9659", "address": "fa:16:3e:fb:16:3a", "network": {"id": "7e8c4393-f543-4223-8342-f22c66d7df5a", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-1526085917-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8e4d1d7361c94c429f75bf58a2dd432e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "taped987e23-05", "ovs_interfaceid": "ed987e23-05ae-4b7e-a376-14db3bab9659", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 14:40:10 compute-0 nova_compute[250018]: 2026-01-20 14:40:10.109 250022 DEBUG nova.network.os_vif_util [None req-868918d5-027e-4dcf-8caf-21142f133a24 1145324e6a8d44f28828a922ee70933a 8e4d1d7361c94c429f75bf58a2dd432e - - default default] Converting VIF {"id": "ed987e23-05ae-4b7e-a376-14db3bab9659", "address": "fa:16:3e:fb:16:3a", "network": {"id": "7e8c4393-f543-4223-8342-f22c66d7df5a", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-1526085917-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8e4d1d7361c94c429f75bf58a2dd432e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "taped987e23-05", "ovs_interfaceid": "ed987e23-05ae-4b7e-a376-14db3bab9659", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:40:10 compute-0 nova_compute[250018]: 2026-01-20 14:40:10.110 250022 DEBUG nova.network.os_vif_util [None req-868918d5-027e-4dcf-8caf-21142f133a24 1145324e6a8d44f28828a922ee70933a 8e4d1d7361c94c429f75bf58a2dd432e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fb:16:3a,bridge_name='br-int',has_traffic_filtering=True,id=ed987e23-05ae-4b7e-a376-14db3bab9659,network=Network(7e8c4393-f543-4223-8342-f22c66d7df5a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='taped987e23-05') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:40:10 compute-0 nova_compute[250018]: 2026-01-20 14:40:10.110 250022 DEBUG os_vif [None req-868918d5-027e-4dcf-8caf-21142f133a24 1145324e6a8d44f28828a922ee70933a 8e4d1d7361c94c429f75bf58a2dd432e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fb:16:3a,bridge_name='br-int',has_traffic_filtering=True,id=ed987e23-05ae-4b7e-a376-14db3bab9659,network=Network(7e8c4393-f543-4223-8342-f22c66d7df5a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='taped987e23-05') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 14:40:10 compute-0 nova_compute[250018]: 2026-01-20 14:40:10.111 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:10 compute-0 nova_compute[250018]: 2026-01-20 14:40:10.111 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:40:10 compute-0 nova_compute[250018]: 2026-01-20 14:40:10.112 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:40:10 compute-0 nova_compute[250018]: 2026-01-20 14:40:10.114 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:10 compute-0 nova_compute[250018]: 2026-01-20 14:40:10.114 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=taped987e23-05, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:40:10 compute-0 nova_compute[250018]: 2026-01-20 14:40:10.114 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=taped987e23-05, col_values=(('external_ids', {'iface-id': 'ed987e23-05ae-4b7e-a376-14db3bab9659', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:fb:16:3a', 'vm-uuid': '9d050141-940b-4c59-8731-ca9d572d1127'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:40:10 compute-0 nova_compute[250018]: 2026-01-20 14:40:10.116 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:10 compute-0 NetworkManager[48960]: <info>  [1768920010.1170] manager: (taped987e23-05): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/105)
Jan 20 14:40:10 compute-0 nova_compute[250018]: 2026-01-20 14:40:10.118 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:40:10 compute-0 nova_compute[250018]: 2026-01-20 14:40:10.121 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:10 compute-0 nova_compute[250018]: 2026-01-20 14:40:10.122 250022 INFO os_vif [None req-868918d5-027e-4dcf-8caf-21142f133a24 1145324e6a8d44f28828a922ee70933a 8e4d1d7361c94c429f75bf58a2dd432e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fb:16:3a,bridge_name='br-int',has_traffic_filtering=True,id=ed987e23-05ae-4b7e-a376-14db3bab9659,network=Network(7e8c4393-f543-4223-8342-f22c66d7df5a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='taped987e23-05')
Jan 20 14:40:10 compute-0 nova_compute[250018]: 2026-01-20 14:40:10.179 250022 DEBUG nova.virt.libvirt.driver [None req-868918d5-027e-4dcf-8caf-21142f133a24 1145324e6a8d44f28828a922ee70933a 8e4d1d7361c94c429f75bf58a2dd432e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:40:10 compute-0 nova_compute[250018]: 2026-01-20 14:40:10.180 250022 DEBUG nova.virt.libvirt.driver [None req-868918d5-027e-4dcf-8caf-21142f133a24 1145324e6a8d44f28828a922ee70933a 8e4d1d7361c94c429f75bf58a2dd432e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:40:10 compute-0 nova_compute[250018]: 2026-01-20 14:40:10.180 250022 DEBUG nova.virt.libvirt.driver [None req-868918d5-027e-4dcf-8caf-21142f133a24 1145324e6a8d44f28828a922ee70933a 8e4d1d7361c94c429f75bf58a2dd432e - - default default] No VIF found with MAC fa:16:3e:fb:16:3a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 14:40:10 compute-0 nova_compute[250018]: 2026-01-20 14:40:10.180 250022 INFO nova.virt.libvirt.driver [None req-868918d5-027e-4dcf-8caf-21142f133a24 1145324e6a8d44f28828a922ee70933a 8e4d1d7361c94c429f75bf58a2dd432e - - default default] [instance: 9d050141-940b-4c59-8731-ca9d572d1127] Using config drive
Jan 20 14:40:10 compute-0 nova_compute[250018]: 2026-01-20 14:40:10.202 250022 DEBUG nova.storage.rbd_utils [None req-868918d5-027e-4dcf-8caf-21142f133a24 1145324e6a8d44f28828a922ee70933a 8e4d1d7361c94c429f75bf58a2dd432e - - default default] rbd image 9d050141-940b-4c59-8731-ca9d572d1127_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:40:10 compute-0 nova_compute[250018]: 2026-01-20 14:40:10.461 250022 DEBUG nova.network.neutron [req-a7242606-25e3-4c10-9c9d-8ee97cadf045 req-fdb2b28c-6e8a-485e-b558-93745357ee8b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 9d050141-940b-4c59-8731-ca9d572d1127] Updated VIF entry in instance network info cache for port ed987e23-05ae-4b7e-a376-14db3bab9659. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 14:40:10 compute-0 nova_compute[250018]: 2026-01-20 14:40:10.462 250022 DEBUG nova.network.neutron [req-a7242606-25e3-4c10-9c9d-8ee97cadf045 req-fdb2b28c-6e8a-485e-b558-93745357ee8b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 9d050141-940b-4c59-8731-ca9d572d1127] Updating instance_info_cache with network_info: [{"id": "ed987e23-05ae-4b7e-a376-14db3bab9659", "address": "fa:16:3e:fb:16:3a", "network": {"id": "7e8c4393-f543-4223-8342-f22c66d7df5a", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-1526085917-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8e4d1d7361c94c429f75bf58a2dd432e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "taped987e23-05", "ovs_interfaceid": "ed987e23-05ae-4b7e-a376-14db3bab9659", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:40:10 compute-0 nova_compute[250018]: 2026-01-20 14:40:10.475 250022 DEBUG oslo_concurrency.lockutils [req-a7242606-25e3-4c10-9c9d-8ee97cadf045 req-fdb2b28c-6e8a-485e-b558-93745357ee8b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-9d050141-940b-4c59-8731-ca9d572d1127" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:40:10 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:40:10 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:40:10 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:40:10.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:40:10 compute-0 nova_compute[250018]: 2026-01-20 14:40:10.575 250022 INFO nova.virt.libvirt.driver [None req-868918d5-027e-4dcf-8caf-21142f133a24 1145324e6a8d44f28828a922ee70933a 8e4d1d7361c94c429f75bf58a2dd432e - - default default] [instance: 9d050141-940b-4c59-8731-ca9d572d1127] Creating config drive at /var/lib/nova/instances/9d050141-940b-4c59-8731-ca9d572d1127/disk.config
Jan 20 14:40:10 compute-0 nova_compute[250018]: 2026-01-20 14:40:10.581 250022 DEBUG oslo_concurrency.processutils [None req-868918d5-027e-4dcf-8caf-21142f133a24 1145324e6a8d44f28828a922ee70933a 8e4d1d7361c94c429f75bf58a2dd432e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9d050141-940b-4c59-8731-ca9d572d1127/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxk6a1257 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:40:10 compute-0 nova_compute[250018]: 2026-01-20 14:40:10.718 250022 DEBUG oslo_concurrency.processutils [None req-868918d5-027e-4dcf-8caf-21142f133a24 1145324e6a8d44f28828a922ee70933a 8e4d1d7361c94c429f75bf58a2dd432e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9d050141-940b-4c59-8731-ca9d572d1127/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxk6a1257" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:40:10 compute-0 nova_compute[250018]: 2026-01-20 14:40:10.753 250022 DEBUG nova.storage.rbd_utils [None req-868918d5-027e-4dcf-8caf-21142f133a24 1145324e6a8d44f28828a922ee70933a 8e4d1d7361c94c429f75bf58a2dd432e - - default default] rbd image 9d050141-940b-4c59-8731-ca9d572d1127_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:40:10 compute-0 nova_compute[250018]: 2026-01-20 14:40:10.757 250022 DEBUG oslo_concurrency.processutils [None req-868918d5-027e-4dcf-8caf-21142f133a24 1145324e6a8d44f28828a922ee70933a 8e4d1d7361c94c429f75bf58a2dd432e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9d050141-940b-4c59-8731-ca9d572d1127/disk.config 9d050141-940b-4c59-8731-ca9d572d1127_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:40:10 compute-0 ceph-mon[74360]: pgmap v1554: 321 pgs: 321 active+clean; 207 MiB data, 677 MiB used, 20 GiB / 21 GiB avail; 658 KiB/s rd, 2.0 MiB/s wr, 44 op/s
Jan 20 14:40:10 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/1895402904' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:40:10 compute-0 nova_compute[250018]: 2026-01-20 14:40:10.915 250022 DEBUG oslo_concurrency.processutils [None req-868918d5-027e-4dcf-8caf-21142f133a24 1145324e6a8d44f28828a922ee70933a 8e4d1d7361c94c429f75bf58a2dd432e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9d050141-940b-4c59-8731-ca9d572d1127/disk.config 9d050141-940b-4c59-8731-ca9d572d1127_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.158s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:40:10 compute-0 nova_compute[250018]: 2026-01-20 14:40:10.916 250022 INFO nova.virt.libvirt.driver [None req-868918d5-027e-4dcf-8caf-21142f133a24 1145324e6a8d44f28828a922ee70933a 8e4d1d7361c94c429f75bf58a2dd432e - - default default] [instance: 9d050141-940b-4c59-8731-ca9d572d1127] Deleting local config drive /var/lib/nova/instances/9d050141-940b-4c59-8731-ca9d572d1127/disk.config because it was imported into RBD.
Jan 20 14:40:10 compute-0 kernel: taped987e23-05: entered promiscuous mode
Jan 20 14:40:10 compute-0 NetworkManager[48960]: <info>  [1768920010.9713] manager: (taped987e23-05): new Tun device (/org/freedesktop/NetworkManager/Devices/106)
Jan 20 14:40:10 compute-0 ovn_controller[148666]: 2026-01-20T14:40:10Z|00202|binding|INFO|Claiming lport ed987e23-05ae-4b7e-a376-14db3bab9659 for this chassis.
Jan 20 14:40:10 compute-0 ovn_controller[148666]: 2026-01-20T14:40:10Z|00203|binding|INFO|ed987e23-05ae-4b7e-a376-14db3bab9659: Claiming fa:16:3e:fb:16:3a 10.100.0.9
Jan 20 14:40:10 compute-0 nova_compute[250018]: 2026-01-20 14:40:10.971 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:10 compute-0 nova_compute[250018]: 2026-01-20 14:40:10.974 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:10 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:40:10.985 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fb:16:3a 10.100.0.9'], port_security=['fa:16:3e:fb:16:3a 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '9d050141-940b-4c59-8731-ca9d572d1127', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7e8c4393-f543-4223-8342-f22c66d7df5a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8e4d1d7361c94c429f75bf58a2dd432e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '67e9e5ff-fd97-4967-97b8-4831a398af6b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3a42eeb6-0fa0-44f1-b75d-f87ed89575c3, chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=ed987e23-05ae-4b7e-a376-14db3bab9659) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:40:10 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:40:10.986 160071 INFO neutron.agent.ovn.metadata.agent [-] Port ed987e23-05ae-4b7e-a376-14db3bab9659 in datapath 7e8c4393-f543-4223-8342-f22c66d7df5a bound to our chassis
Jan 20 14:40:10 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:40:10.987 160071 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7e8c4393-f543-4223-8342-f22c66d7df5a
Jan 20 14:40:10 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:40:10.997 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[0d22bd69-9512-4ded-b8ec-edadf21ffbcf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:40:10 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:40:10.998 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7e8c4393-f1 in ovnmeta-7e8c4393-f543-4223-8342-f22c66d7df5a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 14:40:11 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:40:11.000 257604 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7e8c4393-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 14:40:11 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:40:11.000 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[542c6095-c729-48e2-a037-b592a5ce01a5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:40:11 compute-0 systemd-udevd[291414]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:40:11 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:40:11.001 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[3a7a8331-9a0f-445c-98eb-ca222e6d5cc1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:40:11 compute-0 systemd-machined[216401]: New machine qemu-30-instance-00000043.
Jan 20 14:40:11 compute-0 NetworkManager[48960]: <info>  [1768920011.0122] device (taped987e23-05): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 14:40:11 compute-0 NetworkManager[48960]: <info>  [1768920011.0133] device (taped987e23-05): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 14:40:11 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:40:11.013 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[aa94b231-c78b-4272-8098-f0b6886c31ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:40:11 compute-0 systemd[1]: Started Virtual Machine qemu-30-instance-00000043.
Jan 20 14:40:11 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:40:11.037 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[bf50055e-181e-4805-bbf8-e82255500246]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:40:11 compute-0 nova_compute[250018]: 2026-01-20 14:40:11.044 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:11 compute-0 ovn_controller[148666]: 2026-01-20T14:40:11Z|00204|binding|INFO|Setting lport ed987e23-05ae-4b7e-a376-14db3bab9659 ovn-installed in OVS
Jan 20 14:40:11 compute-0 ovn_controller[148666]: 2026-01-20T14:40:11Z|00205|binding|INFO|Setting lport ed987e23-05ae-4b7e-a376-14db3bab9659 up in Southbound
Jan 20 14:40:11 compute-0 nova_compute[250018]: 2026-01-20 14:40:11.050 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:11 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:40:11.064 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[84da9d5c-c36c-49ce-ac80-c4fdd0e29ee2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:40:11 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:40:11.068 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[12c8055c-e4aa-4d52-a2c7-6b1d827f0fcd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:40:11 compute-0 NetworkManager[48960]: <info>  [1768920011.0699] manager: (tap7e8c4393-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/107)
Jan 20 14:40:11 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:40:11.097 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[43b96247-2b7f-438f-ba04-8d5a80b77c93]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:40:11 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:40:11.100 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[50c09560-3778-4509-90bc-7d22fcb49423]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:40:11 compute-0 NetworkManager[48960]: <info>  [1768920011.1191] device (tap7e8c4393-f0): carrier: link connected
Jan 20 14:40:11 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:40:11.123 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[517d6e70-2b3d-4069-b8a7-d037cc197208]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:40:11 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:40:11.139 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[9a0acddf-7753-41c0-9ada-b14d2ede6cfc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7e8c4393-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:71:85:a0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 67], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 595184, 'reachable_time': 34322, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 291447, 'error': None, 'target': 'ovnmeta-7e8c4393-f543-4223-8342-f22c66d7df5a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:40:11 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:40:11.151 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[e6af0ae3-77d6-4876-bdac-fedd84969386]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe71:85a0'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 595184, 'tstamp': 595184}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 291448, 'error': None, 'target': 'ovnmeta-7e8c4393-f543-4223-8342-f22c66d7df5a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:40:11 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:40:11.167 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[d9d4709f-e018-47d7-ac8e-4114b0d9a81a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7e8c4393-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:71:85:a0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 67], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 595184, 'reachable_time': 34322, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 291449, 'error': None, 'target': 'ovnmeta-7e8c4393-f543-4223-8342-f22c66d7df5a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:40:11 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:40:11.194 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[9edda5eb-bb06-4518-ab7b-1520c1058e78]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:40:11 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:40:11.260 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[b70c1883-f7b4-4626-b189-559998892e8b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:40:11 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:40:11.262 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7e8c4393-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:40:11 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:40:11.263 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:40:11 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:40:11.263 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7e8c4393-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:40:11 compute-0 NetworkManager[48960]: <info>  [1768920011.2668] manager: (tap7e8c4393-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/108)
Jan 20 14:40:11 compute-0 nova_compute[250018]: 2026-01-20 14:40:11.266 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:11 compute-0 kernel: tap7e8c4393-f0: entered promiscuous mode
Jan 20 14:40:11 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:40:11.270 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7e8c4393-f0, col_values=(('external_ids', {'iface-id': 'ab4d1a7f-56b0-43b5-9bdc-d6249fe4c0d6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:40:11 compute-0 nova_compute[250018]: 2026-01-20 14:40:11.271 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:11 compute-0 ovn_controller[148666]: 2026-01-20T14:40:11Z|00206|binding|INFO|Releasing lport ab4d1a7f-56b0-43b5-9bdc-d6249fe4c0d6 from this chassis (sb_readonly=0)
Jan 20 14:40:11 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:40:11.274 160071 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7e8c4393-f543-4223-8342-f22c66d7df5a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7e8c4393-f543-4223-8342-f22c66d7df5a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 14:40:11 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:40:11.275 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[97314734-1d0b-4188-bc65-9f9ce343dbb4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:40:11 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:40:11.276 160071 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 14:40:11 compute-0 ovn_metadata_agent[160049]: global
Jan 20 14:40:11 compute-0 ovn_metadata_agent[160049]:     log         /dev/log local0 debug
Jan 20 14:40:11 compute-0 ovn_metadata_agent[160049]:     log-tag     haproxy-metadata-proxy-7e8c4393-f543-4223-8342-f22c66d7df5a
Jan 20 14:40:11 compute-0 ovn_metadata_agent[160049]:     user        root
Jan 20 14:40:11 compute-0 ovn_metadata_agent[160049]:     group       root
Jan 20 14:40:11 compute-0 ovn_metadata_agent[160049]:     maxconn     1024
Jan 20 14:40:11 compute-0 ovn_metadata_agent[160049]:     pidfile     /var/lib/neutron/external/pids/7e8c4393-f543-4223-8342-f22c66d7df5a.pid.haproxy
Jan 20 14:40:11 compute-0 ovn_metadata_agent[160049]:     daemon
Jan 20 14:40:11 compute-0 ovn_metadata_agent[160049]: 
Jan 20 14:40:11 compute-0 ovn_metadata_agent[160049]: defaults
Jan 20 14:40:11 compute-0 ovn_metadata_agent[160049]:     log global
Jan 20 14:40:11 compute-0 ovn_metadata_agent[160049]:     mode http
Jan 20 14:40:11 compute-0 ovn_metadata_agent[160049]:     option httplog
Jan 20 14:40:11 compute-0 ovn_metadata_agent[160049]:     option dontlognull
Jan 20 14:40:11 compute-0 ovn_metadata_agent[160049]:     option http-server-close
Jan 20 14:40:11 compute-0 ovn_metadata_agent[160049]:     option forwardfor
Jan 20 14:40:11 compute-0 ovn_metadata_agent[160049]:     retries                 3
Jan 20 14:40:11 compute-0 ovn_metadata_agent[160049]:     timeout http-request    30s
Jan 20 14:40:11 compute-0 ovn_metadata_agent[160049]:     timeout connect         30s
Jan 20 14:40:11 compute-0 ovn_metadata_agent[160049]:     timeout client          32s
Jan 20 14:40:11 compute-0 ovn_metadata_agent[160049]:     timeout server          32s
Jan 20 14:40:11 compute-0 ovn_metadata_agent[160049]:     timeout http-keep-alive 30s
Jan 20 14:40:11 compute-0 ovn_metadata_agent[160049]: 
Jan 20 14:40:11 compute-0 ovn_metadata_agent[160049]: 
Jan 20 14:40:11 compute-0 ovn_metadata_agent[160049]: listen listener
Jan 20 14:40:11 compute-0 ovn_metadata_agent[160049]:     bind 169.254.169.254:80
Jan 20 14:40:11 compute-0 ovn_metadata_agent[160049]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 14:40:11 compute-0 ovn_metadata_agent[160049]:     http-request add-header X-OVN-Network-ID 7e8c4393-f543-4223-8342-f22c66d7df5a
Jan 20 14:40:11 compute-0 ovn_metadata_agent[160049]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 14:40:11 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:40:11.277 160071 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7e8c4393-f543-4223-8342-f22c66d7df5a', 'env', 'PROCESS_TAG=haproxy-7e8c4393-f543-4223-8342-f22c66d7df5a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7e8c4393-f543-4223-8342-f22c66d7df5a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 14:40:11 compute-0 nova_compute[250018]: 2026-01-20 14:40:11.285 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 14:40:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:40:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 20 14:40:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:40:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.002109614914852038 of space, bias 1.0, pg target 0.6328844744556114 quantized to 32 (current 32)
Jan 20 14:40:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:40:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002169048366834171 of space, bias 1.0, pg target 0.6507145100502513 quantized to 32 (current 32)
Jan 20 14:40:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:40:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:40:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:40:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 20 14:40:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:40:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 20 14:40:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:40:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:40:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:40:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 20 14:40:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:40:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 20 14:40:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:40:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:40:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:40:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 20 14:40:11 compute-0 nova_compute[250018]: 2026-01-20 14:40:11.423 250022 DEBUG nova.compute.manager [req-38039385-6de6-411d-930f-e29c0309a3ff req-c1851ef0-675f-4016-b0d0-594f344d4c38 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 9d050141-940b-4c59-8731-ca9d572d1127] Received event network-vif-plugged-ed987e23-05ae-4b7e-a376-14db3bab9659 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:40:11 compute-0 nova_compute[250018]: 2026-01-20 14:40:11.424 250022 DEBUG oslo_concurrency.lockutils [req-38039385-6de6-411d-930f-e29c0309a3ff req-c1851ef0-675f-4016-b0d0-594f344d4c38 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "9d050141-940b-4c59-8731-ca9d572d1127-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:40:11 compute-0 nova_compute[250018]: 2026-01-20 14:40:11.424 250022 DEBUG oslo_concurrency.lockutils [req-38039385-6de6-411d-930f-e29c0309a3ff req-c1851ef0-675f-4016-b0d0-594f344d4c38 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "9d050141-940b-4c59-8731-ca9d572d1127-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:40:11 compute-0 nova_compute[250018]: 2026-01-20 14:40:11.425 250022 DEBUG oslo_concurrency.lockutils [req-38039385-6de6-411d-930f-e29c0309a3ff req-c1851ef0-675f-4016-b0d0-594f344d4c38 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "9d050141-940b-4c59-8731-ca9d572d1127-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:40:11 compute-0 nova_compute[250018]: 2026-01-20 14:40:11.425 250022 DEBUG nova.compute.manager [req-38039385-6de6-411d-930f-e29c0309a3ff req-c1851ef0-675f-4016-b0d0-594f344d4c38 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 9d050141-940b-4c59-8731-ca9d572d1127] Processing event network-vif-plugged-ed987e23-05ae-4b7e-a376-14db3bab9659 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 20 14:40:11 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1555: 321 pgs: 321 active+clean; 247 MiB data, 700 MiB used, 20 GiB / 21 GiB avail; 346 KiB/s rd, 3.9 MiB/s wr, 92 op/s
Jan 20 14:40:11 compute-0 nova_compute[250018]: 2026-01-20 14:40:11.651 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768920011.6508436, 9d050141-940b-4c59-8731-ca9d572d1127 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:40:11 compute-0 nova_compute[250018]: 2026-01-20 14:40:11.652 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 9d050141-940b-4c59-8731-ca9d572d1127] VM Started (Lifecycle Event)
Jan 20 14:40:11 compute-0 nova_compute[250018]: 2026-01-20 14:40:11.654 250022 DEBUG nova.compute.manager [None req-868918d5-027e-4dcf-8caf-21142f133a24 1145324e6a8d44f28828a922ee70933a 8e4d1d7361c94c429f75bf58a2dd432e - - default default] [instance: 9d050141-940b-4c59-8731-ca9d572d1127] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 14:40:11 compute-0 nova_compute[250018]: 2026-01-20 14:40:11.658 250022 DEBUG nova.virt.libvirt.driver [None req-868918d5-027e-4dcf-8caf-21142f133a24 1145324e6a8d44f28828a922ee70933a 8e4d1d7361c94c429f75bf58a2dd432e - - default default] [instance: 9d050141-940b-4c59-8731-ca9d572d1127] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 14:40:11 compute-0 nova_compute[250018]: 2026-01-20 14:40:11.661 250022 INFO nova.virt.libvirt.driver [-] [instance: 9d050141-940b-4c59-8731-ca9d572d1127] Instance spawned successfully.
Jan 20 14:40:11 compute-0 nova_compute[250018]: 2026-01-20 14:40:11.661 250022 DEBUG nova.virt.libvirt.driver [None req-868918d5-027e-4dcf-8caf-21142f133a24 1145324e6a8d44f28828a922ee70933a 8e4d1d7361c94c429f75bf58a2dd432e - - default default] [instance: 9d050141-940b-4c59-8731-ca9d572d1127] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 14:40:11 compute-0 nova_compute[250018]: 2026-01-20 14:40:11.676 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 9d050141-940b-4c59-8731-ca9d572d1127] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:40:11 compute-0 nova_compute[250018]: 2026-01-20 14:40:11.683 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 9d050141-940b-4c59-8731-ca9d572d1127] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:40:11 compute-0 podman[291521]: 2026-01-20 14:40:11.684809985 +0000 UTC m=+0.069938487 container create 503d8ad0a17c019f03d94939f37ed9cb1e2eea004c1be28f16da1227568498eb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7e8c4393-f543-4223-8342-f22c66d7df5a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202)
Jan 20 14:40:11 compute-0 nova_compute[250018]: 2026-01-20 14:40:11.688 250022 DEBUG nova.virt.libvirt.driver [None req-868918d5-027e-4dcf-8caf-21142f133a24 1145324e6a8d44f28828a922ee70933a 8e4d1d7361c94c429f75bf58a2dd432e - - default default] [instance: 9d050141-940b-4c59-8731-ca9d572d1127] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:40:11 compute-0 nova_compute[250018]: 2026-01-20 14:40:11.689 250022 DEBUG nova.virt.libvirt.driver [None req-868918d5-027e-4dcf-8caf-21142f133a24 1145324e6a8d44f28828a922ee70933a 8e4d1d7361c94c429f75bf58a2dd432e - - default default] [instance: 9d050141-940b-4c59-8731-ca9d572d1127] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:40:11 compute-0 nova_compute[250018]: 2026-01-20 14:40:11.690 250022 DEBUG nova.virt.libvirt.driver [None req-868918d5-027e-4dcf-8caf-21142f133a24 1145324e6a8d44f28828a922ee70933a 8e4d1d7361c94c429f75bf58a2dd432e - - default default] [instance: 9d050141-940b-4c59-8731-ca9d572d1127] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:40:11 compute-0 nova_compute[250018]: 2026-01-20 14:40:11.690 250022 DEBUG nova.virt.libvirt.driver [None req-868918d5-027e-4dcf-8caf-21142f133a24 1145324e6a8d44f28828a922ee70933a 8e4d1d7361c94c429f75bf58a2dd432e - - default default] [instance: 9d050141-940b-4c59-8731-ca9d572d1127] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:40:11 compute-0 nova_compute[250018]: 2026-01-20 14:40:11.691 250022 DEBUG nova.virt.libvirt.driver [None req-868918d5-027e-4dcf-8caf-21142f133a24 1145324e6a8d44f28828a922ee70933a 8e4d1d7361c94c429f75bf58a2dd432e - - default default] [instance: 9d050141-940b-4c59-8731-ca9d572d1127] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:40:11 compute-0 nova_compute[250018]: 2026-01-20 14:40:11.691 250022 DEBUG nova.virt.libvirt.driver [None req-868918d5-027e-4dcf-8caf-21142f133a24 1145324e6a8d44f28828a922ee70933a 8e4d1d7361c94c429f75bf58a2dd432e - - default default] [instance: 9d050141-940b-4c59-8731-ca9d572d1127] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:40:11 compute-0 systemd[1]: Started libpod-conmon-503d8ad0a17c019f03d94939f37ed9cb1e2eea004c1be28f16da1227568498eb.scope.
Jan 20 14:40:11 compute-0 nova_compute[250018]: 2026-01-20 14:40:11.721 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 9d050141-940b-4c59-8731-ca9d572d1127] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:40:11 compute-0 nova_compute[250018]: 2026-01-20 14:40:11.723 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768920011.6521006, 9d050141-940b-4c59-8731-ca9d572d1127 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:40:11 compute-0 nova_compute[250018]: 2026-01-20 14:40:11.723 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 9d050141-940b-4c59-8731-ca9d572d1127] VM Paused (Lifecycle Event)
Jan 20 14:40:11 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:40:11 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:40:11 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:40:11.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:40:11 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:40:11 compute-0 podman[291521]: 2026-01-20 14:40:11.658512785 +0000 UTC m=+0.043641307 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 14:40:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b5f2644a3d84926e7af3695ff69e9e26de1c20af79b9f83342a039e5f2ea102/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 14:40:11 compute-0 nova_compute[250018]: 2026-01-20 14:40:11.760 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 9d050141-940b-4c59-8731-ca9d572d1127] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:40:11 compute-0 nova_compute[250018]: 2026-01-20 14:40:11.767 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768920011.6571338, 9d050141-940b-4c59-8731-ca9d572d1127 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:40:11 compute-0 nova_compute[250018]: 2026-01-20 14:40:11.767 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 9d050141-940b-4c59-8731-ca9d572d1127] VM Resumed (Lifecycle Event)
Jan 20 14:40:11 compute-0 podman[291521]: 2026-01-20 14:40:11.772920421 +0000 UTC m=+0.158048943 container init 503d8ad0a17c019f03d94939f37ed9cb1e2eea004c1be28f16da1227568498eb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7e8c4393-f543-4223-8342-f22c66d7df5a, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 20 14:40:11 compute-0 podman[291521]: 2026-01-20 14:40:11.778580533 +0000 UTC m=+0.163709035 container start 503d8ad0a17c019f03d94939f37ed9cb1e2eea004c1be28f16da1227568498eb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7e8c4393-f543-4223-8342-f22c66d7df5a, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 20 14:40:11 compute-0 nova_compute[250018]: 2026-01-20 14:40:11.787 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 9d050141-940b-4c59-8731-ca9d572d1127] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:40:11 compute-0 nova_compute[250018]: 2026-01-20 14:40:11.789 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 9d050141-940b-4c59-8731-ca9d572d1127] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:40:11 compute-0 neutron-haproxy-ovnmeta-7e8c4393-f543-4223-8342-f22c66d7df5a[291536]: [NOTICE]   (291540) : New worker (291542) forked
Jan 20 14:40:11 compute-0 neutron-haproxy-ovnmeta-7e8c4393-f543-4223-8342-f22c66d7df5a[291536]: [NOTICE]   (291540) : Loading success.
Jan 20 14:40:11 compute-0 nova_compute[250018]: 2026-01-20 14:40:11.803 250022 INFO nova.compute.manager [None req-868918d5-027e-4dcf-8caf-21142f133a24 1145324e6a8d44f28828a922ee70933a 8e4d1d7361c94c429f75bf58a2dd432e - - default default] [instance: 9d050141-940b-4c59-8731-ca9d572d1127] Took 6.77 seconds to spawn the instance on the hypervisor.
Jan 20 14:40:11 compute-0 nova_compute[250018]: 2026-01-20 14:40:11.804 250022 DEBUG nova.compute.manager [None req-868918d5-027e-4dcf-8caf-21142f133a24 1145324e6a8d44f28828a922ee70933a 8e4d1d7361c94c429f75bf58a2dd432e - - default default] [instance: 9d050141-940b-4c59-8731-ca9d572d1127] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:40:11 compute-0 nova_compute[250018]: 2026-01-20 14:40:11.818 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 9d050141-940b-4c59-8731-ca9d572d1127] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:40:11 compute-0 nova_compute[250018]: 2026-01-20 14:40:11.880 250022 INFO nova.compute.manager [None req-868918d5-027e-4dcf-8caf-21142f133a24 1145324e6a8d44f28828a922ee70933a 8e4d1d7361c94c429f75bf58a2dd432e - - default default] [instance: 9d050141-940b-4c59-8731-ca9d572d1127] Took 7.79 seconds to build instance.
Jan 20 14:40:11 compute-0 nova_compute[250018]: 2026-01-20 14:40:11.887 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:11 compute-0 nova_compute[250018]: 2026-01-20 14:40:11.898 250022 DEBUG oslo_concurrency.lockutils [None req-868918d5-027e-4dcf-8caf-21142f133a24 1145324e6a8d44f28828a922ee70933a 8e4d1d7361c94c429f75bf58a2dd432e - - default default] Lock "9d050141-940b-4c59-8731-ca9d572d1127" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.883s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:40:12 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:40:12 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:40:12 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:40:12.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:40:12 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:40:13 compute-0 ceph-mon[74360]: pgmap v1555: 321 pgs: 321 active+clean; 247 MiB data, 700 MiB used, 20 GiB / 21 GiB avail; 346 KiB/s rd, 3.9 MiB/s wr, 92 op/s
Jan 20 14:40:13 compute-0 NetworkManager[48960]: <info>  [1768920013.1815] manager: (patch-br-int-to-provnet-b62c391b-f7a3-4a38-a0df-72ac0383ca74): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/109)
Jan 20 14:40:13 compute-0 NetworkManager[48960]: <info>  [1768920013.1828] manager: (patch-provnet-b62c391b-f7a3-4a38-a0df-72ac0383ca74-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/110)
Jan 20 14:40:13 compute-0 nova_compute[250018]: 2026-01-20 14:40:13.181 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:13 compute-0 nova_compute[250018]: 2026-01-20 14:40:13.337 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:13 compute-0 ovn_controller[148666]: 2026-01-20T14:40:13Z|00207|binding|INFO|Releasing lport ab4d1a7f-56b0-43b5-9bdc-d6249fe4c0d6 from this chassis (sb_readonly=0)
Jan 20 14:40:13 compute-0 nova_compute[250018]: 2026-01-20 14:40:13.359 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:13 compute-0 nova_compute[250018]: 2026-01-20 14:40:13.385 250022 DEBUG nova.compute.manager [req-6c3ac60a-fb1f-4bad-9468-3af328f5af54 req-43253ee1-05d8-4389-925d-90028406fdde 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 9d050141-940b-4c59-8731-ca9d572d1127] Received event network-changed-ed987e23-05ae-4b7e-a376-14db3bab9659 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:40:13 compute-0 nova_compute[250018]: 2026-01-20 14:40:13.386 250022 DEBUG nova.compute.manager [req-6c3ac60a-fb1f-4bad-9468-3af328f5af54 req-43253ee1-05d8-4389-925d-90028406fdde 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 9d050141-940b-4c59-8731-ca9d572d1127] Refreshing instance network info cache due to event network-changed-ed987e23-05ae-4b7e-a376-14db3bab9659. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:40:13 compute-0 nova_compute[250018]: 2026-01-20 14:40:13.386 250022 DEBUG oslo_concurrency.lockutils [req-6c3ac60a-fb1f-4bad-9468-3af328f5af54 req-43253ee1-05d8-4389-925d-90028406fdde 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-9d050141-940b-4c59-8731-ca9d572d1127" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:40:13 compute-0 nova_compute[250018]: 2026-01-20 14:40:13.386 250022 DEBUG oslo_concurrency.lockutils [req-6c3ac60a-fb1f-4bad-9468-3af328f5af54 req-43253ee1-05d8-4389-925d-90028406fdde 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-9d050141-940b-4c59-8731-ca9d572d1127" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:40:13 compute-0 nova_compute[250018]: 2026-01-20 14:40:13.386 250022 DEBUG nova.network.neutron [req-6c3ac60a-fb1f-4bad-9468-3af328f5af54 req-43253ee1-05d8-4389-925d-90028406fdde 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 9d050141-940b-4c59-8731-ca9d572d1127] Refreshing network info cache for port ed987e23-05ae-4b7e-a376-14db3bab9659 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:40:13 compute-0 nova_compute[250018]: 2026-01-20 14:40:13.564 250022 DEBUG nova.compute.manager [req-330c2b58-c9b5-4b7f-bc89-87e7c8b02371 req-d41275c3-bd14-4990-8182-61f15b987e1c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 9d050141-940b-4c59-8731-ca9d572d1127] Received event network-vif-plugged-ed987e23-05ae-4b7e-a376-14db3bab9659 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:40:13 compute-0 nova_compute[250018]: 2026-01-20 14:40:13.564 250022 DEBUG oslo_concurrency.lockutils [req-330c2b58-c9b5-4b7f-bc89-87e7c8b02371 req-d41275c3-bd14-4990-8182-61f15b987e1c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "9d050141-940b-4c59-8731-ca9d572d1127-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:40:13 compute-0 nova_compute[250018]: 2026-01-20 14:40:13.564 250022 DEBUG oslo_concurrency.lockutils [req-330c2b58-c9b5-4b7f-bc89-87e7c8b02371 req-d41275c3-bd14-4990-8182-61f15b987e1c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "9d050141-940b-4c59-8731-ca9d572d1127-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:40:13 compute-0 nova_compute[250018]: 2026-01-20 14:40:13.565 250022 DEBUG oslo_concurrency.lockutils [req-330c2b58-c9b5-4b7f-bc89-87e7c8b02371 req-d41275c3-bd14-4990-8182-61f15b987e1c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "9d050141-940b-4c59-8731-ca9d572d1127-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:40:13 compute-0 nova_compute[250018]: 2026-01-20 14:40:13.565 250022 DEBUG nova.compute.manager [req-330c2b58-c9b5-4b7f-bc89-87e7c8b02371 req-d41275c3-bd14-4990-8182-61f15b987e1c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 9d050141-940b-4c59-8731-ca9d572d1127] No waiting events found dispatching network-vif-plugged-ed987e23-05ae-4b7e-a376-14db3bab9659 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:40:13 compute-0 nova_compute[250018]: 2026-01-20 14:40:13.565 250022 WARNING nova.compute.manager [req-330c2b58-c9b5-4b7f-bc89-87e7c8b02371 req-d41275c3-bd14-4990-8182-61f15b987e1c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 9d050141-940b-4c59-8731-ca9d572d1127] Received unexpected event network-vif-plugged-ed987e23-05ae-4b7e-a376-14db3bab9659 for instance with vm_state active and task_state None.
Jan 20 14:40:13 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1556: 321 pgs: 321 active+clean; 247 MiB data, 700 MiB used, 20 GiB / 21 GiB avail; 346 KiB/s rd, 3.9 MiB/s wr, 93 op/s
Jan 20 14:40:13 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:40:13 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:40:13 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:40:13.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:40:14 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/1177646554' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:40:14 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/1177646554' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:40:14 compute-0 ceph-mon[74360]: pgmap v1556: 321 pgs: 321 active+clean; 247 MiB data, 700 MiB used, 20 GiB / 21 GiB avail; 346 KiB/s rd, 3.9 MiB/s wr, 93 op/s
Jan 20 14:40:14 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:40:14 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:40:14 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:40:14.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:40:15 compute-0 nova_compute[250018]: 2026-01-20 14:40:15.103 250022 DEBUG nova.network.neutron [req-6c3ac60a-fb1f-4bad-9468-3af328f5af54 req-43253ee1-05d8-4389-925d-90028406fdde 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 9d050141-940b-4c59-8731-ca9d572d1127] Updated VIF entry in instance network info cache for port ed987e23-05ae-4b7e-a376-14db3bab9659. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 14:40:15 compute-0 nova_compute[250018]: 2026-01-20 14:40:15.103 250022 DEBUG nova.network.neutron [req-6c3ac60a-fb1f-4bad-9468-3af328f5af54 req-43253ee1-05d8-4389-925d-90028406fdde 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 9d050141-940b-4c59-8731-ca9d572d1127] Updating instance_info_cache with network_info: [{"id": "ed987e23-05ae-4b7e-a376-14db3bab9659", "address": "fa:16:3e:fb:16:3a", "network": {"id": "7e8c4393-f543-4223-8342-f22c66d7df5a", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-1526085917-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.184", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8e4d1d7361c94c429f75bf58a2dd432e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "taped987e23-05", "ovs_interfaceid": "ed987e23-05ae-4b7e-a376-14db3bab9659", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:40:15 compute-0 nova_compute[250018]: 2026-01-20 14:40:15.116 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:15 compute-0 nova_compute[250018]: 2026-01-20 14:40:15.139 250022 DEBUG oslo_concurrency.lockutils [req-6c3ac60a-fb1f-4bad-9468-3af328f5af54 req-43253ee1-05d8-4389-925d-90028406fdde 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-9d050141-940b-4c59-8731-ca9d572d1127" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:40:15 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/2441771306' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:40:15 compute-0 podman[291555]: 2026-01-20 14:40:15.468202615 +0000 UTC m=+0.052129337 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:40:15 compute-0 podman[291554]: 2026-01-20 14:40:15.489713855 +0000 UTC m=+0.077743917 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 20 14:40:15 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1557: 321 pgs: 321 active+clean; 247 MiB data, 700 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 3.9 MiB/s wr, 147 op/s
Jan 20 14:40:15 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:40:15 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:40:15 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:40:15.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:40:16 compute-0 ceph-mon[74360]: pgmap v1557: 321 pgs: 321 active+clean; 247 MiB data, 700 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 3.9 MiB/s wr, 147 op/s
Jan 20 14:40:16 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:40:16 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:40:16 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:40:16.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:40:16 compute-0 nova_compute[250018]: 2026-01-20 14:40:16.892 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:17 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1558: 321 pgs: 321 active+clean; 268 MiB data, 710 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 4.8 MiB/s wr, 177 op/s
Jan 20 14:40:17 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:40:17 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:40:17 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:40:17.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:40:17 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:40:18 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:40:18 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:40:18 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:40:18.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:40:18 compute-0 ceph-mon[74360]: pgmap v1558: 321 pgs: 321 active+clean; 268 MiB data, 710 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 4.8 MiB/s wr, 177 op/s
Jan 20 14:40:19 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1559: 321 pgs: 321 active+clean; 278 MiB data, 716 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 3.7 MiB/s wr, 171 op/s
Jan 20 14:40:19 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:40:19 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:40:19 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:40:19.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:40:20 compute-0 nova_compute[250018]: 2026-01-20 14:40:20.118 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:20 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:40:20 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:40:20 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:40:20.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:40:20 compute-0 ceph-mon[74360]: pgmap v1559: 321 pgs: 321 active+clean; 278 MiB data, 716 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 3.7 MiB/s wr, 171 op/s
Jan 20 14:40:20 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/2413042588' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:40:20 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/564781149' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:40:21 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1560: 321 pgs: 321 active+clean; 293 MiB data, 721 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 3.7 MiB/s wr, 182 op/s
Jan 20 14:40:21 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:40:21 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:40:21 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:40:21.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:40:21 compute-0 nova_compute[250018]: 2026-01-20 14:40:21.923 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:40:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:40:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:40:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:40:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:40:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:40:22 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:40:22 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:40:22 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:40:22.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:40:22 compute-0 ceph-mon[74360]: pgmap v1560: 321 pgs: 321 active+clean; 293 MiB data, 721 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 3.7 MiB/s wr, 182 op/s
Jan 20 14:40:22 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/488662490' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:40:22 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:40:23 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1561: 321 pgs: 321 active+clean; 293 MiB data, 721 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 117 op/s
Jan 20 14:40:23 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:40:23 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:40:23 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:40:23.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:40:24 compute-0 nova_compute[250018]: 2026-01-20 14:40:24.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:40:24 compute-0 nova_compute[250018]: 2026-01-20 14:40:24.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:40:24 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:40:24 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:40:24 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:40:24.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:40:24 compute-0 ceph-mon[74360]: pgmap v1561: 321 pgs: 321 active+clean; 293 MiB data, 721 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 117 op/s
Jan 20 14:40:25 compute-0 nova_compute[250018]: 2026-01-20 14:40:25.120 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:25 compute-0 sudo[291605]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:40:25 compute-0 sudo[291605]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:40:25 compute-0 sudo[291605]: pam_unix(sudo:session): session closed for user root
Jan 20 14:40:25 compute-0 sudo[291630]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:40:25 compute-0 sudo[291630]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:40:25 compute-0 sudo[291630]: pam_unix(sudo:session): session closed for user root
Jan 20 14:40:25 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1562: 321 pgs: 321 active+clean; 299 MiB data, 728 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 2.5 MiB/s wr, 157 op/s
Jan 20 14:40:25 compute-0 ovn_controller[148666]: 2026-01-20T14:40:25Z|00024|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:fb:16:3a 10.100.0.9
Jan 20 14:40:25 compute-0 ovn_controller[148666]: 2026-01-20T14:40:25Z|00025|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:fb:16:3a 10.100.0.9
Jan 20 14:40:25 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:40:25 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:40:25 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:40:25.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:40:25 compute-0 ceph-mon[74360]: pgmap v1562: 321 pgs: 321 active+clean; 299 MiB data, 728 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 2.5 MiB/s wr, 157 op/s
Jan 20 14:40:26 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:40:26 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:40:26 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:40:26.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:40:26 compute-0 nova_compute[250018]: 2026-01-20 14:40:26.969 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:27 compute-0 nova_compute[250018]: 2026-01-20 14:40:27.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:40:27 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1563: 321 pgs: 321 active+clean; 315 MiB data, 739 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 3.4 MiB/s wr, 165 op/s
Jan 20 14:40:27 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:40:27 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:40:27 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:40:27.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:40:27 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:40:28 compute-0 nova_compute[250018]: 2026-01-20 14:40:28.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:40:28 compute-0 nova_compute[250018]: 2026-01-20 14:40:28.089 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:40:28 compute-0 nova_compute[250018]: 2026-01-20 14:40:28.090 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:40:28 compute-0 nova_compute[250018]: 2026-01-20 14:40:28.090 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:40:28 compute-0 nova_compute[250018]: 2026-01-20 14:40:28.091 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 14:40:28 compute-0 nova_compute[250018]: 2026-01-20 14:40:28.091 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:40:28 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:40:28 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3936017395' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:40:28 compute-0 nova_compute[250018]: 2026-01-20 14:40:28.521 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:40:28 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:40:28 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:40:28 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:40:28.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:40:28 compute-0 nova_compute[250018]: 2026-01-20 14:40:28.621 250022 DEBUG nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] skipping disk for instance-00000043 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:40:28 compute-0 nova_compute[250018]: 2026-01-20 14:40:28.622 250022 DEBUG nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] skipping disk for instance-00000043 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:40:28 compute-0 ceph-mon[74360]: pgmap v1563: 321 pgs: 321 active+clean; 315 MiB data, 739 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 3.4 MiB/s wr, 165 op/s
Jan 20 14:40:28 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3936017395' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:40:28 compute-0 nova_compute[250018]: 2026-01-20 14:40:28.804 250022 WARNING nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:40:28 compute-0 nova_compute[250018]: 2026-01-20 14:40:28.805 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4355MB free_disk=20.882659912109375GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 14:40:28 compute-0 nova_compute[250018]: 2026-01-20 14:40:28.805 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:40:28 compute-0 nova_compute[250018]: 2026-01-20 14:40:28.806 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:40:28 compute-0 nova_compute[250018]: 2026-01-20 14:40:28.978 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Instance 9d050141-940b-4c59-8731-ca9d572d1127 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 14:40:28 compute-0 nova_compute[250018]: 2026-01-20 14:40:28.978 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 14:40:28 compute-0 nova_compute[250018]: 2026-01-20 14:40:28.978 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 14:40:29 compute-0 nova_compute[250018]: 2026-01-20 14:40:29.018 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:40:29 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:40:29 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1861985885' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:40:29 compute-0 nova_compute[250018]: 2026-01-20 14:40:29.489 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:40:29 compute-0 nova_compute[250018]: 2026-01-20 14:40:29.494 250022 DEBUG nova.compute.provider_tree [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:40:29 compute-0 nova_compute[250018]: 2026-01-20 14:40:29.514 250022 DEBUG nova.scheduler.client.report [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:40:29 compute-0 nova_compute[250018]: 2026-01-20 14:40:29.541 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 14:40:29 compute-0 nova_compute[250018]: 2026-01-20 14:40:29.542 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.736s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:40:29 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1564: 321 pgs: 321 active+clean; 321 MiB data, 745 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 3.0 MiB/s wr, 144 op/s
Jan 20 14:40:29 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:40:29 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:40:29 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:40:29.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:40:29 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/1861985885' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:40:30 compute-0 sudo[291702]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:40:30 compute-0 sudo[291702]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:40:30 compute-0 sudo[291702]: pam_unix(sudo:session): session closed for user root
Jan 20 14:40:30 compute-0 nova_compute[250018]: 2026-01-20 14:40:30.122 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:30 compute-0 sudo[291727]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:40:30 compute-0 sudo[291727]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:40:30 compute-0 sudo[291727]: pam_unix(sudo:session): session closed for user root
Jan 20 14:40:30 compute-0 sudo[291752]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:40:30 compute-0 sudo[291752]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:40:30 compute-0 sudo[291752]: pam_unix(sudo:session): session closed for user root
Jan 20 14:40:30 compute-0 sudo[291777]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 14:40:30 compute-0 sudo[291777]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:40:30 compute-0 nova_compute[250018]: 2026-01-20 14:40:30.544 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:40:30 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:40:30 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:40:30 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:40:30.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:40:30 compute-0 sudo[291777]: pam_unix(sudo:session): session closed for user root
Jan 20 14:40:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:40:30.753 160071 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:40:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:40:30.754 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:40:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:40:30.754 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:40:30 compute-0 ceph-mon[74360]: pgmap v1564: 321 pgs: 321 active+clean; 321 MiB data, 745 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 3.0 MiB/s wr, 144 op/s
Jan 20 14:40:30 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Jan 20 14:40:30 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 20 14:40:30 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 14:40:30 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:40:30 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 20 14:40:30 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:40:30 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 20 14:40:30 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:40:30 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 38ec034b-3f32-43ac-a718-1ed320c2881b does not exist
Jan 20 14:40:30 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev f7c98d3b-7bdf-42dc-8a4d-66427736f722 does not exist
Jan 20 14:40:30 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 8fc6b51d-80af-4377-9c2b-1efe6171d2fe does not exist
Jan 20 14:40:30 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 20 14:40:30 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:40:30 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 20 14:40:30 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:40:30 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 14:40:30 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:40:30 compute-0 sudo[291835]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:40:30 compute-0 sudo[291835]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:40:30 compute-0 sudo[291835]: pam_unix(sudo:session): session closed for user root
Jan 20 14:40:30 compute-0 sudo[291860]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:40:30 compute-0 sudo[291860]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:40:30 compute-0 sudo[291860]: pam_unix(sudo:session): session closed for user root
Jan 20 14:40:31 compute-0 sudo[291885]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:40:31 compute-0 sudo[291885]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:40:31 compute-0 sudo[291885]: pam_unix(sudo:session): session closed for user root
Jan 20 14:40:31 compute-0 nova_compute[250018]: 2026-01-20 14:40:31.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:40:31 compute-0 sudo[291910]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 14:40:31 compute-0 sudo[291910]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:40:31 compute-0 podman[291976]: 2026-01-20 14:40:31.41995062 +0000 UTC m=+0.039105346 container create 18ce3ad22459ab6b0c79f208a9aa74376b4201bf8fc11a057c0f93f22989744e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_carson, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 20 14:40:31 compute-0 systemd[1]: Started libpod-conmon-18ce3ad22459ab6b0c79f208a9aa74376b4201bf8fc11a057c0f93f22989744e.scope.
Jan 20 14:40:31 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:40:31 compute-0 podman[291976]: 2026-01-20 14:40:31.402207852 +0000 UTC m=+0.021362598 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:40:31 compute-0 podman[291976]: 2026-01-20 14:40:31.509955706 +0000 UTC m=+0.129110482 container init 18ce3ad22459ab6b0c79f208a9aa74376b4201bf8fc11a057c0f93f22989744e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_carson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True)
Jan 20 14:40:31 compute-0 podman[291976]: 2026-01-20 14:40:31.519269328 +0000 UTC m=+0.138424074 container start 18ce3ad22459ab6b0c79f208a9aa74376b4201bf8fc11a057c0f93f22989744e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_carson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 20 14:40:31 compute-0 podman[291976]: 2026-01-20 14:40:31.523610895 +0000 UTC m=+0.142765641 container attach 18ce3ad22459ab6b0c79f208a9aa74376b4201bf8fc11a057c0f93f22989744e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_carson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 20 14:40:31 compute-0 upbeat_carson[291992]: 167 167
Jan 20 14:40:31 compute-0 systemd[1]: libpod-18ce3ad22459ab6b0c79f208a9aa74376b4201bf8fc11a057c0f93f22989744e.scope: Deactivated successfully.
Jan 20 14:40:31 compute-0 conmon[291992]: conmon 18ce3ad22459ab6b0c79 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-18ce3ad22459ab6b0c79f208a9aa74376b4201bf8fc11a057c0f93f22989744e.scope/container/memory.events
Jan 20 14:40:31 compute-0 podman[291976]: 2026-01-20 14:40:31.52859682 +0000 UTC m=+0.147751536 container died 18ce3ad22459ab6b0c79f208a9aa74376b4201bf8fc11a057c0f93f22989744e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_carson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:40:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-d4182666df125378dd73ab73254ea59af74da7e88686e456da8ecd3257d42fd1-merged.mount: Deactivated successfully.
Jan 20 14:40:31 compute-0 podman[291976]: 2026-01-20 14:40:31.569122963 +0000 UTC m=+0.188277679 container remove 18ce3ad22459ab6b0c79f208a9aa74376b4201bf8fc11a057c0f93f22989744e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_carson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 20 14:40:31 compute-0 systemd[1]: libpod-conmon-18ce3ad22459ab6b0c79f208a9aa74376b4201bf8fc11a057c0f93f22989744e.scope: Deactivated successfully.
Jan 20 14:40:31 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1565: 321 pgs: 321 active+clean; 326 MiB data, 746 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.6 MiB/s wr, 156 op/s
Jan 20 14:40:31 compute-0 podman[292016]: 2026-01-20 14:40:31.736746912 +0000 UTC m=+0.046826433 container create e81b0d3ec992046de72984d4aadfb3bde31b5c85b13fdaa64ac9c68e7e4bbeae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_tu, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 20 14:40:31 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:40:31 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:40:31 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:40:31.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:40:31 compute-0 systemd[1]: Started libpod-conmon-e81b0d3ec992046de72984d4aadfb3bde31b5c85b13fdaa64ac9c68e7e4bbeae.scope.
Jan 20 14:40:31 compute-0 podman[292016]: 2026-01-20 14:40:31.717136073 +0000 UTC m=+0.027215604 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:40:31 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:40:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fdf289976fc0ae5d0bc5fd598d42237c6ae2f955f0c80468a0cdf951367e89a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:40:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fdf289976fc0ae5d0bc5fd598d42237c6ae2f955f0c80468a0cdf951367e89a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:40:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fdf289976fc0ae5d0bc5fd598d42237c6ae2f955f0c80468a0cdf951367e89a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:40:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fdf289976fc0ae5d0bc5fd598d42237c6ae2f955f0c80468a0cdf951367e89a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:40:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fdf289976fc0ae5d0bc5fd598d42237c6ae2f955f0c80468a0cdf951367e89a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 14:40:31 compute-0 podman[292016]: 2026-01-20 14:40:31.839640457 +0000 UTC m=+0.149719998 container init e81b0d3ec992046de72984d4aadfb3bde31b5c85b13fdaa64ac9c68e7e4bbeae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_tu, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 14:40:31 compute-0 podman[292016]: 2026-01-20 14:40:31.847296293 +0000 UTC m=+0.157375804 container start e81b0d3ec992046de72984d4aadfb3bde31b5c85b13fdaa64ac9c68e7e4bbeae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_tu, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 20 14:40:31 compute-0 podman[292016]: 2026-01-20 14:40:31.851342422 +0000 UTC m=+0.161421943 container attach e81b0d3ec992046de72984d4aadfb3bde31b5c85b13fdaa64ac9c68e7e4bbeae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_tu, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 20 14:40:31 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 20 14:40:31 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:40:31 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:40:31 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:40:31 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:40:31 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:40:31 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:40:31 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/1270905783' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:40:31 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/2878926089' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:40:31 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/2878926089' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:40:31 compute-0 nova_compute[250018]: 2026-01-20 14:40:31.972 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:32 compute-0 nova_compute[250018]: 2026-01-20 14:40:32.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:40:32 compute-0 nova_compute[250018]: 2026-01-20 14:40:32.051 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 14:40:32 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:40:32 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:40:32 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:40:32.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:40:32 compute-0 modest_tu[292031]: --> passed data devices: 0 physical, 1 LVM
Jan 20 14:40:32 compute-0 modest_tu[292031]: --> relative data size: 1.0
Jan 20 14:40:32 compute-0 modest_tu[292031]: --> All data devices are unavailable
Jan 20 14:40:32 compute-0 systemd[1]: libpod-e81b0d3ec992046de72984d4aadfb3bde31b5c85b13fdaa64ac9c68e7e4bbeae.scope: Deactivated successfully.
Jan 20 14:40:32 compute-0 podman[292016]: 2026-01-20 14:40:32.684164703 +0000 UTC m=+0.994244214 container died e81b0d3ec992046de72984d4aadfb3bde31b5c85b13fdaa64ac9c68e7e4bbeae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_tu, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Jan 20 14:40:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-5fdf289976fc0ae5d0bc5fd598d42237c6ae2f955f0c80468a0cdf951367e89a-merged.mount: Deactivated successfully.
Jan 20 14:40:32 compute-0 podman[292016]: 2026-01-20 14:40:32.784905803 +0000 UTC m=+1.094985314 container remove e81b0d3ec992046de72984d4aadfb3bde31b5c85b13fdaa64ac9c68e7e4bbeae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_tu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:40:32 compute-0 systemd[1]: libpod-conmon-e81b0d3ec992046de72984d4aadfb3bde31b5c85b13fdaa64ac9c68e7e4bbeae.scope: Deactivated successfully.
Jan 20 14:40:32 compute-0 sudo[291910]: pam_unix(sudo:session): session closed for user root
Jan 20 14:40:32 compute-0 sudo[292061]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:40:32 compute-0 sudo[292061]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:40:32 compute-0 sudo[292061]: pam_unix(sudo:session): session closed for user root
Jan 20 14:40:32 compute-0 sudo[292086]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:40:32 compute-0 sudo[292086]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:40:32 compute-0 sudo[292086]: pam_unix(sudo:session): session closed for user root
Jan 20 14:40:32 compute-0 sudo[292111]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:40:33 compute-0 sudo[292111]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:40:33 compute-0 sudo[292111]: pam_unix(sudo:session): session closed for user root
Jan 20 14:40:33 compute-0 sudo[292136]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- lvm list --format json
Jan 20 14:40:33 compute-0 sudo[292136]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:40:33 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:40:33 compute-0 ceph-mon[74360]: pgmap v1565: 321 pgs: 321 active+clean; 326 MiB data, 746 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.6 MiB/s wr, 156 op/s
Jan 20 14:40:33 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/2590034195' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:40:33 compute-0 podman[292202]: 2026-01-20 14:40:33.4025406 +0000 UTC m=+0.051514587 container create 8f9a151fa67424a45344b4ca0c7101459b641cc5fe0798d8ac0c2a5db5de20be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_bhabha, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 20 14:40:33 compute-0 systemd[1]: Started libpod-conmon-8f9a151fa67424a45344b4ca0c7101459b641cc5fe0798d8ac0c2a5db5de20be.scope.
Jan 20 14:40:33 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:40:33 compute-0 podman[292202]: 2026-01-20 14:40:33.372929187 +0000 UTC m=+0.021903194 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:40:33 compute-0 podman[292202]: 2026-01-20 14:40:33.511160264 +0000 UTC m=+0.160134341 container init 8f9a151fa67424a45344b4ca0c7101459b641cc5fe0798d8ac0c2a5db5de20be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_bhabha, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:40:33 compute-0 podman[292202]: 2026-01-20 14:40:33.519612523 +0000 UTC m=+0.168586520 container start 8f9a151fa67424a45344b4ca0c7101459b641cc5fe0798d8ac0c2a5db5de20be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_bhabha, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:40:33 compute-0 laughing_bhabha[292219]: 167 167
Jan 20 14:40:33 compute-0 systemd[1]: libpod-8f9a151fa67424a45344b4ca0c7101459b641cc5fe0798d8ac0c2a5db5de20be.scope: Deactivated successfully.
Jan 20 14:40:33 compute-0 podman[292202]: 2026-01-20 14:40:33.542134873 +0000 UTC m=+0.191108870 container attach 8f9a151fa67424a45344b4ca0c7101459b641cc5fe0798d8ac0c2a5db5de20be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_bhabha, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:40:33 compute-0 podman[292202]: 2026-01-20 14:40:33.542757849 +0000 UTC m=+0.191731846 container died 8f9a151fa67424a45344b4ca0c7101459b641cc5fe0798d8ac0c2a5db5de20be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_bhabha, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 14:40:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-56193d217e40895b3c74d0ac70a1d207d82b23ca3404a24f523f01bc991fa9f0-merged.mount: Deactivated successfully.
Jan 20 14:40:33 compute-0 podman[292202]: 2026-01-20 14:40:33.595988902 +0000 UTC m=+0.244962879 container remove 8f9a151fa67424a45344b4ca0c7101459b641cc5fe0798d8ac0c2a5db5de20be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_bhabha, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:40:33 compute-0 systemd[1]: libpod-conmon-8f9a151fa67424a45344b4ca0c7101459b641cc5fe0798d8ac0c2a5db5de20be.scope: Deactivated successfully.
Jan 20 14:40:33 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1566: 321 pgs: 321 active+clean; 326 MiB data, 747 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 139 op/s
Jan 20 14:40:33 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:40:33 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:40:33 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:40:33.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:40:33 compute-0 podman[292242]: 2026-01-20 14:40:33.76897397 +0000 UTC m=+0.023413756 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:40:33 compute-0 podman[292242]: 2026-01-20 14:40:33.960459369 +0000 UTC m=+0.214899125 container create a945fb9e3274abc50c1624b7d352e26a2bee5cc0e1ca4300a15ddcd590158c78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_goldstine, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:40:34 compute-0 systemd[1]: Started libpod-conmon-a945fb9e3274abc50c1624b7d352e26a2bee5cc0e1ca4300a15ddcd590158c78.scope.
Jan 20 14:40:34 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:40:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38561d4a42c150b5d563a8b0c79c9524d79a22196e805ad54c72e0cf936e2136/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:40:34 compute-0 nova_compute[250018]: 2026-01-20 14:40:34.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:40:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38561d4a42c150b5d563a8b0c79c9524d79a22196e805ad54c72e0cf936e2136/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:40:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38561d4a42c150b5d563a8b0c79c9524d79a22196e805ad54c72e0cf936e2136/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:40:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38561d4a42c150b5d563a8b0c79c9524d79a22196e805ad54c72e0cf936e2136/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:40:34 compute-0 podman[292242]: 2026-01-20 14:40:34.072948207 +0000 UTC m=+0.327388003 container init a945fb9e3274abc50c1624b7d352e26a2bee5cc0e1ca4300a15ddcd590158c78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_goldstine, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:40:34 compute-0 podman[292242]: 2026-01-20 14:40:34.080681447 +0000 UTC m=+0.335121203 container start a945fb9e3274abc50c1624b7d352e26a2bee5cc0e1ca4300a15ddcd590158c78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_goldstine, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:40:34 compute-0 podman[292242]: 2026-01-20 14:40:34.085036095 +0000 UTC m=+0.339475891 container attach a945fb9e3274abc50c1624b7d352e26a2bee5cc0e1ca4300a15ddcd590158c78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_goldstine, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 20 14:40:34 compute-0 ceph-mon[74360]: pgmap v1566: 321 pgs: 321 active+clean; 326 MiB data, 747 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 139 op/s
Jan 20 14:40:34 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/3830778880' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:40:34 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/3830778880' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:40:34 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:40:34 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:40:34 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:40:34.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:40:34 compute-0 nostalgic_goldstine[292258]: {
Jan 20 14:40:34 compute-0 nostalgic_goldstine[292258]:     "0": [
Jan 20 14:40:34 compute-0 nostalgic_goldstine[292258]:         {
Jan 20 14:40:34 compute-0 nostalgic_goldstine[292258]:             "devices": [
Jan 20 14:40:34 compute-0 nostalgic_goldstine[292258]:                 "/dev/loop3"
Jan 20 14:40:34 compute-0 nostalgic_goldstine[292258]:             ],
Jan 20 14:40:34 compute-0 nostalgic_goldstine[292258]:             "lv_name": "ceph_lv0",
Jan 20 14:40:34 compute-0 nostalgic_goldstine[292258]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:40:34 compute-0 nostalgic_goldstine[292258]:             "lv_size": "7511998464",
Jan 20 14:40:34 compute-0 nostalgic_goldstine[292258]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e399cf45-e6b6-5393-99f1-75c601d3f188,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afc3740a-ccee-46f8-b343-22648cc89376,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 20 14:40:34 compute-0 nostalgic_goldstine[292258]:             "lv_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 14:40:34 compute-0 nostalgic_goldstine[292258]:             "name": "ceph_lv0",
Jan 20 14:40:34 compute-0 nostalgic_goldstine[292258]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:40:34 compute-0 nostalgic_goldstine[292258]:             "tags": {
Jan 20 14:40:34 compute-0 nostalgic_goldstine[292258]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:40:34 compute-0 nostalgic_goldstine[292258]:                 "ceph.block_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 14:40:34 compute-0 nostalgic_goldstine[292258]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 14:40:34 compute-0 nostalgic_goldstine[292258]:                 "ceph.cluster_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 14:40:34 compute-0 nostalgic_goldstine[292258]:                 "ceph.cluster_name": "ceph",
Jan 20 14:40:34 compute-0 nostalgic_goldstine[292258]:                 "ceph.crush_device_class": "",
Jan 20 14:40:34 compute-0 nostalgic_goldstine[292258]:                 "ceph.encrypted": "0",
Jan 20 14:40:34 compute-0 nostalgic_goldstine[292258]:                 "ceph.osd_fsid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 14:40:34 compute-0 nostalgic_goldstine[292258]:                 "ceph.osd_id": "0",
Jan 20 14:40:34 compute-0 nostalgic_goldstine[292258]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 14:40:34 compute-0 nostalgic_goldstine[292258]:                 "ceph.type": "block",
Jan 20 14:40:34 compute-0 nostalgic_goldstine[292258]:                 "ceph.vdo": "0"
Jan 20 14:40:34 compute-0 nostalgic_goldstine[292258]:             },
Jan 20 14:40:34 compute-0 nostalgic_goldstine[292258]:             "type": "block",
Jan 20 14:40:34 compute-0 nostalgic_goldstine[292258]:             "vg_name": "ceph_vg0"
Jan 20 14:40:34 compute-0 nostalgic_goldstine[292258]:         }
Jan 20 14:40:34 compute-0 nostalgic_goldstine[292258]:     ]
Jan 20 14:40:34 compute-0 nostalgic_goldstine[292258]: }
Jan 20 14:40:34 compute-0 systemd[1]: libpod-a945fb9e3274abc50c1624b7d352e26a2bee5cc0e1ca4300a15ddcd590158c78.scope: Deactivated successfully.
Jan 20 14:40:34 compute-0 podman[292242]: 2026-01-20 14:40:34.862779821 +0000 UTC m=+1.117219597 container died a945fb9e3274abc50c1624b7d352e26a2bee5cc0e1ca4300a15ddcd590158c78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_goldstine, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 20 14:40:35 compute-0 nova_compute[250018]: 2026-01-20 14:40:35.124 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-38561d4a42c150b5d563a8b0c79c9524d79a22196e805ad54c72e0cf936e2136-merged.mount: Deactivated successfully.
Jan 20 14:40:35 compute-0 podman[292242]: 2026-01-20 14:40:35.528934992 +0000 UTC m=+1.783374778 container remove a945fb9e3274abc50c1624b7d352e26a2bee5cc0e1ca4300a15ddcd590158c78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_goldstine, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:40:35 compute-0 sudo[292136]: pam_unix(sudo:session): session closed for user root
Jan 20 14:40:35 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1567: 321 pgs: 321 active+clean; 304 MiB data, 747 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.1 MiB/s wr, 175 op/s
Jan 20 14:40:35 compute-0 systemd[1]: libpod-conmon-a945fb9e3274abc50c1624b7d352e26a2bee5cc0e1ca4300a15ddcd590158c78.scope: Deactivated successfully.
Jan 20 14:40:35 compute-0 sudo[292281]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:40:35 compute-0 sudo[292281]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:40:35 compute-0 sudo[292281]: pam_unix(sudo:session): session closed for user root
Jan 20 14:40:35 compute-0 sudo[292306]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:40:35 compute-0 sudo[292306]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:40:35 compute-0 sudo[292306]: pam_unix(sudo:session): session closed for user root
Jan 20 14:40:35 compute-0 sudo[292331]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:40:35 compute-0 sudo[292331]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:40:35 compute-0 sudo[292331]: pam_unix(sudo:session): session closed for user root
Jan 20 14:40:35 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:40:35 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:40:35 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:40:35.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:40:35 compute-0 sudo[292356]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- raw list --format json
Jan 20 14:40:35 compute-0 sudo[292356]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:40:35 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/164278318' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:40:36 compute-0 nova_compute[250018]: 2026-01-20 14:40:36.053 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:40:36 compute-0 nova_compute[250018]: 2026-01-20 14:40:36.054 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 14:40:36 compute-0 nova_compute[250018]: 2026-01-20 14:40:36.074 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 20 14:40:36 compute-0 podman[292421]: 2026-01-20 14:40:36.121334805 +0000 UTC m=+0.021802502 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:40:36 compute-0 podman[292421]: 2026-01-20 14:40:36.246572059 +0000 UTC m=+0.147039716 container create 945794bc4082b14d411ccd2b5421571e2b582a2a596e8f71765fd14fecc9c1b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_noyce, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:40:36 compute-0 systemd[1]: Started libpod-conmon-945794bc4082b14d411ccd2b5421571e2b582a2a596e8f71765fd14fecc9c1b2.scope.
Jan 20 14:40:36 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:40:36 compute-0 podman[292421]: 2026-01-20 14:40:36.394354084 +0000 UTC m=+0.294821771 container init 945794bc4082b14d411ccd2b5421571e2b582a2a596e8f71765fd14fecc9c1b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_noyce, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:40:36 compute-0 podman[292421]: 2026-01-20 14:40:36.402250418 +0000 UTC m=+0.302718075 container start 945794bc4082b14d411ccd2b5421571e2b582a2a596e8f71765fd14fecc9c1b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_noyce, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:40:36 compute-0 brave_noyce[292437]: 167 167
Jan 20 14:40:36 compute-0 systemd[1]: libpod-945794bc4082b14d411ccd2b5421571e2b582a2a596e8f71765fd14fecc9c1b2.scope: Deactivated successfully.
Jan 20 14:40:36 compute-0 podman[292421]: 2026-01-20 14:40:36.428963332 +0000 UTC m=+0.329431019 container attach 945794bc4082b14d411ccd2b5421571e2b582a2a596e8f71765fd14fecc9c1b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_noyce, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 20 14:40:36 compute-0 podman[292421]: 2026-01-20 14:40:36.429374763 +0000 UTC m=+0.329842430 container died 945794bc4082b14d411ccd2b5421571e2b582a2a596e8f71765fd14fecc9c1b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_noyce, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:40:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-a026b60a4e75367ff26fcc0d68d0c591a2fb9ef46ba48db545c9c1c6687825f2-merged.mount: Deactivated successfully.
Jan 20 14:40:36 compute-0 podman[292421]: 2026-01-20 14:40:36.472361568 +0000 UTC m=+0.372829235 container remove 945794bc4082b14d411ccd2b5421571e2b582a2a596e8f71765fd14fecc9c1b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_noyce, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:40:36 compute-0 systemd[1]: libpod-conmon-945794bc4082b14d411ccd2b5421571e2b582a2a596e8f71765fd14fecc9c1b2.scope: Deactivated successfully.
Jan 20 14:40:36 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:40:36 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:40:36 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:40:36.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:40:36 compute-0 podman[292461]: 2026-01-20 14:40:36.620817621 +0000 UTC m=+0.021858974 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:40:36 compute-0 podman[292461]: 2026-01-20 14:40:36.688861724 +0000 UTC m=+0.089903127 container create 818f65d01a61e682e65287c7ed89daee0cf595fe66c1699244f97de5cadf20ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_cori, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 20 14:40:36 compute-0 systemd[1]: Started libpod-conmon-818f65d01a61e682e65287c7ed89daee0cf595fe66c1699244f97de5cadf20ff.scope.
Jan 20 14:40:36 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:40:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d579c5e25391156ae89bf0e50fb0c5dc071ebe326c971ba54253054559f5732/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:40:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d579c5e25391156ae89bf0e50fb0c5dc071ebe326c971ba54253054559f5732/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:40:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d579c5e25391156ae89bf0e50fb0c5dc071ebe326c971ba54253054559f5732/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:40:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d579c5e25391156ae89bf0e50fb0c5dc071ebe326c971ba54253054559f5732/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:40:36 compute-0 podman[292461]: 2026-01-20 14:40:36.972944172 +0000 UTC m=+0.373985525 container init 818f65d01a61e682e65287c7ed89daee0cf595fe66c1699244f97de5cadf20ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_cori, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:40:36 compute-0 nova_compute[250018]: 2026-01-20 14:40:36.975 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:36 compute-0 ceph-mon[74360]: pgmap v1567: 321 pgs: 321 active+clean; 304 MiB data, 747 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.1 MiB/s wr, 175 op/s
Jan 20 14:40:36 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/2698970955' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:40:36 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/1847975642' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:40:36 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/1847975642' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:40:36 compute-0 podman[292461]: 2026-01-20 14:40:36.982151012 +0000 UTC m=+0.383192355 container start 818f65d01a61e682e65287c7ed89daee0cf595fe66c1699244f97de5cadf20ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_cori, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 20 14:40:37 compute-0 podman[292461]: 2026-01-20 14:40:37.036594808 +0000 UTC m=+0.437636121 container attach 818f65d01a61e682e65287c7ed89daee0cf595fe66c1699244f97de5cadf20ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_cori, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:40:37 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1568: 321 pgs: 321 active+clean; 301 MiB data, 752 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 2.9 MiB/s wr, 153 op/s
Jan 20 14:40:37 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:40:37 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:40:37 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:40:37.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:40:37 compute-0 friendly_cori[292478]: {
Jan 20 14:40:37 compute-0 friendly_cori[292478]:     "afc3740a-ccee-46f8-b343-22648cc89376": {
Jan 20 14:40:37 compute-0 friendly_cori[292478]:         "ceph_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 14:40:37 compute-0 friendly_cori[292478]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 20 14:40:37 compute-0 friendly_cori[292478]:         "osd_id": 0,
Jan 20 14:40:37 compute-0 friendly_cori[292478]:         "osd_uuid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 14:40:37 compute-0 friendly_cori[292478]:         "type": "bluestore"
Jan 20 14:40:37 compute-0 friendly_cori[292478]:     }
Jan 20 14:40:37 compute-0 friendly_cori[292478]: }
Jan 20 14:40:37 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:40:37.854 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=23, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=22) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:40:37 compute-0 nova_compute[250018]: 2026-01-20 14:40:37.855 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:37 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:40:37.857 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 14:40:37 compute-0 systemd[1]: libpod-818f65d01a61e682e65287c7ed89daee0cf595fe66c1699244f97de5cadf20ff.scope: Deactivated successfully.
Jan 20 14:40:37 compute-0 podman[292461]: 2026-01-20 14:40:37.864077811 +0000 UTC m=+1.265119114 container died 818f65d01a61e682e65287c7ed89daee0cf595fe66c1699244f97de5cadf20ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_cori, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 20 14:40:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-1d579c5e25391156ae89bf0e50fb0c5dc071ebe326c971ba54253054559f5732-merged.mount: Deactivated successfully.
Jan 20 14:40:37 compute-0 podman[292461]: 2026-01-20 14:40:37.942080654 +0000 UTC m=+1.343121977 container remove 818f65d01a61e682e65287c7ed89daee0cf595fe66c1699244f97de5cadf20ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_cori, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 20 14:40:37 compute-0 systemd[1]: libpod-conmon-818f65d01a61e682e65287c7ed89daee0cf595fe66c1699244f97de5cadf20ff.scope: Deactivated successfully.
Jan 20 14:40:37 compute-0 sudo[292356]: pam_unix(sudo:session): session closed for user root
Jan 20 14:40:37 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 14:40:38 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:40:38 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:40:38 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 14:40:38 compute-0 ceph-mon[74360]: pgmap v1568: 321 pgs: 321 active+clean; 301 MiB data, 752 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 2.9 MiB/s wr, 153 op/s
Jan 20 14:40:38 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:40:38 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 86e3a5a4-c364-424e-8ddd-a546b90c0172 does not exist
Jan 20 14:40:38 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 10203ae6-ed60-423d-94e1-86b00357bcca does not exist
Jan 20 14:40:38 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev f93881ce-2a75-4755-8e92-5e58f4fea00b does not exist
Jan 20 14:40:38 compute-0 sudo[292516]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:40:38 compute-0 sudo[292516]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:40:38 compute-0 sudo[292516]: pam_unix(sudo:session): session closed for user root
Jan 20 14:40:38 compute-0 sudo[292541]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 14:40:38 compute-0 sudo[292541]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:40:38 compute-0 sudo[292541]: pam_unix(sudo:session): session closed for user root
Jan 20 14:40:38 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:40:38 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:40:38 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:40:38.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:40:38 compute-0 sshd-session[292514]: Invalid user ubuntu from 157.245.78.139 port 38548
Jan 20 14:40:38 compute-0 sshd-session[292514]: Connection closed by invalid user ubuntu 157.245.78.139 port 38548 [preauth]
Jan 20 14:40:39 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:40:39 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:40:39 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1569: 321 pgs: 321 active+clean; 278 MiB data, 739 MiB used, 20 GiB / 21 GiB avail; 760 KiB/s rd, 2.6 MiB/s wr, 123 op/s
Jan 20 14:40:39 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:40:39 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:40:39 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:40:39.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:40:40 compute-0 ovn_controller[148666]: 2026-01-20T14:40:40Z|00208|binding|INFO|Releasing lport ab4d1a7f-56b0-43b5-9bdc-d6249fe4c0d6 from this chassis (sb_readonly=0)
Jan 20 14:40:40 compute-0 nova_compute[250018]: 2026-01-20 14:40:40.127 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:40 compute-0 nova_compute[250018]: 2026-01-20 14:40:40.149 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:40 compute-0 ceph-mon[74360]: pgmap v1569: 321 pgs: 321 active+clean; 278 MiB data, 739 MiB used, 20 GiB / 21 GiB avail; 760 KiB/s rd, 2.6 MiB/s wr, 123 op/s
Jan 20 14:40:40 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:40:40 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:40:40 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:40:40.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:40:41 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1570: 321 pgs: 321 active+clean; 279 MiB data, 725 MiB used, 20 GiB / 21 GiB avail; 402 KiB/s rd, 2.2 MiB/s wr, 123 op/s
Jan 20 14:40:41 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:40:41 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:40:41 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:40:41.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:40:41 compute-0 nova_compute[250018]: 2026-01-20 14:40:41.976 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:42 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:40:42 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:40:42 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:40:42.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:40:42 compute-0 ceph-mon[74360]: pgmap v1570: 321 pgs: 321 active+clean; 279 MiB data, 725 MiB used, 20 GiB / 21 GiB avail; 402 KiB/s rd, 2.2 MiB/s wr, 123 op/s
Jan 20 14:40:43 compute-0 nova_compute[250018]: 2026-01-20 14:40:43.067 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:40:43 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:40:43 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1571: 321 pgs: 321 active+clean; 279 MiB data, 725 MiB used, 20 GiB / 21 GiB avail; 385 KiB/s rd, 2.2 MiB/s wr, 112 op/s
Jan 20 14:40:43 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:40:43 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:40:43 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:40:43.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:40:44 compute-0 ovn_controller[148666]: 2026-01-20T14:40:44Z|00209|binding|INFO|Releasing lport ab4d1a7f-56b0-43b5-9bdc-d6249fe4c0d6 from this chassis (sb_readonly=0)
Jan 20 14:40:44 compute-0 nova_compute[250018]: 2026-01-20 14:40:44.119 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:44 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:40:44 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:40:44 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:40:44.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:40:44 compute-0 ceph-mon[74360]: pgmap v1571: 321 pgs: 321 active+clean; 279 MiB data, 725 MiB used, 20 GiB / 21 GiB avail; 385 KiB/s rd, 2.2 MiB/s wr, 112 op/s
Jan 20 14:40:45 compute-0 nova_compute[250018]: 2026-01-20 14:40:45.129 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:45 compute-0 sudo[292570]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:40:45 compute-0 sudo[292570]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:40:45 compute-0 sudo[292570]: pam_unix(sudo:session): session closed for user root
Jan 20 14:40:45 compute-0 sudo[292595]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:40:45 compute-0 sudo[292595]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:40:45 compute-0 sudo[292595]: pam_unix(sudo:session): session closed for user root
Jan 20 14:40:45 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1572: 321 pgs: 321 active+clean; 280 MiB data, 725 MiB used, 20 GiB / 21 GiB avail; 359 KiB/s rd, 2.2 MiB/s wr, 108 op/s
Jan 20 14:40:45 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:40:45 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:40:45 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:40:45.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:40:45 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:40:45.913 160402 DEBUG eventlet.wsgi.server [-] (160402) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004
Jan 20 14:40:45 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:40:45.915 160402 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /openstack/latest/meta_data.json HTTP/1.0
Jan 20 14:40:45 compute-0 ovn_metadata_agent[160049]: Accept: */*
Jan 20 14:40:45 compute-0 ovn_metadata_agent[160049]: Connection: close
Jan 20 14:40:45 compute-0 ovn_metadata_agent[160049]: Content-Type: text/plain
Jan 20 14:40:45 compute-0 ovn_metadata_agent[160049]: Host: 169.254.169.254
Jan 20 14:40:45 compute-0 ovn_metadata_agent[160049]: User-Agent: curl/7.84.0
Jan 20 14:40:45 compute-0 ovn_metadata_agent[160049]: X-Forwarded-For: 10.100.0.9
Jan 20 14:40:45 compute-0 ovn_metadata_agent[160049]: X-Ovn-Network-Id: 7e8c4393-f543-4223-8342-f22c66d7df5a __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82
Jan 20 14:40:45 compute-0 ceph-mon[74360]: pgmap v1572: 321 pgs: 321 active+clean; 280 MiB data, 725 MiB used, 20 GiB / 21 GiB avail; 359 KiB/s rd, 2.2 MiB/s wr, 108 op/s
Jan 20 14:40:46 compute-0 podman[292620]: 2026-01-20 14:40:46.495284785 +0000 UTC m=+0.081077938 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Jan 20 14:40:46 compute-0 podman[292621]: 2026-01-20 14:40:46.496012045 +0000 UTC m=+0.081808569 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:40:46 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:40:46 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:40:46 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:40:46.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:40:46 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:40:46.858 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=367c1a2c-b16a-4828-ab5a-626bb50023b4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '23'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:40:46 compute-0 nova_compute[250018]: 2026-01-20 14:40:46.979 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:47 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:40:47.303 160402 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161
Jan 20 14:40:47 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:40:47.304 160402 INFO eventlet.wsgi.server [-] 10.100.0.9,<local> "GET /openstack/latest/meta_data.json HTTP/1.1" status: 200  len: 1673 time: 1.3885477
Jan 20 14:40:47 compute-0 haproxy-metadata-proxy-7e8c4393-f543-4223-8342-f22c66d7df5a[291542]: 10.100.0.9:59354 [20/Jan/2026:14:40:45.912] listener listener/metadata 0/0/0/1391/1391 200 1657 - - ---- 1/1/0/0/0 0/0 "GET /openstack/latest/meta_data.json HTTP/1.1"
Jan 20 14:40:47 compute-0 nova_compute[250018]: 2026-01-20 14:40:47.488 250022 DEBUG oslo_concurrency.lockutils [None req-63214daf-54fe-4539-a80c-bd04ec4f6e78 1145324e6a8d44f28828a922ee70933a 8e4d1d7361c94c429f75bf58a2dd432e - - default default] Acquiring lock "9d050141-940b-4c59-8731-ca9d572d1127" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:40:47 compute-0 nova_compute[250018]: 2026-01-20 14:40:47.489 250022 DEBUG oslo_concurrency.lockutils [None req-63214daf-54fe-4539-a80c-bd04ec4f6e78 1145324e6a8d44f28828a922ee70933a 8e4d1d7361c94c429f75bf58a2dd432e - - default default] Lock "9d050141-940b-4c59-8731-ca9d572d1127" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:40:47 compute-0 nova_compute[250018]: 2026-01-20 14:40:47.489 250022 DEBUG oslo_concurrency.lockutils [None req-63214daf-54fe-4539-a80c-bd04ec4f6e78 1145324e6a8d44f28828a922ee70933a 8e4d1d7361c94c429f75bf58a2dd432e - - default default] Acquiring lock "9d050141-940b-4c59-8731-ca9d572d1127-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:40:47 compute-0 nova_compute[250018]: 2026-01-20 14:40:47.489 250022 DEBUG oslo_concurrency.lockutils [None req-63214daf-54fe-4539-a80c-bd04ec4f6e78 1145324e6a8d44f28828a922ee70933a 8e4d1d7361c94c429f75bf58a2dd432e - - default default] Lock "9d050141-940b-4c59-8731-ca9d572d1127-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:40:47 compute-0 nova_compute[250018]: 2026-01-20 14:40:47.489 250022 DEBUG oslo_concurrency.lockutils [None req-63214daf-54fe-4539-a80c-bd04ec4f6e78 1145324e6a8d44f28828a922ee70933a 8e4d1d7361c94c429f75bf58a2dd432e - - default default] Lock "9d050141-940b-4c59-8731-ca9d572d1127-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:40:47 compute-0 nova_compute[250018]: 2026-01-20 14:40:47.491 250022 INFO nova.compute.manager [None req-63214daf-54fe-4539-a80c-bd04ec4f6e78 1145324e6a8d44f28828a922ee70933a 8e4d1d7361c94c429f75bf58a2dd432e - - default default] [instance: 9d050141-940b-4c59-8731-ca9d572d1127] Terminating instance
Jan 20 14:40:47 compute-0 nova_compute[250018]: 2026-01-20 14:40:47.491 250022 DEBUG nova.compute.manager [None req-63214daf-54fe-4539-a80c-bd04ec4f6e78 1145324e6a8d44f28828a922ee70933a 8e4d1d7361c94c429f75bf58a2dd432e - - default default] [instance: 9d050141-940b-4c59-8731-ca9d572d1127] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 14:40:47 compute-0 kernel: taped987e23-05 (unregistering): left promiscuous mode
Jan 20 14:40:47 compute-0 NetworkManager[48960]: <info>  [1768920047.5457] device (taped987e23-05): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 14:40:47 compute-0 ovn_controller[148666]: 2026-01-20T14:40:47Z|00210|binding|INFO|Releasing lport ed987e23-05ae-4b7e-a376-14db3bab9659 from this chassis (sb_readonly=0)
Jan 20 14:40:47 compute-0 nova_compute[250018]: 2026-01-20 14:40:47.553 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:47 compute-0 ovn_controller[148666]: 2026-01-20T14:40:47Z|00211|binding|INFO|Setting lport ed987e23-05ae-4b7e-a376-14db3bab9659 down in Southbound
Jan 20 14:40:47 compute-0 ovn_controller[148666]: 2026-01-20T14:40:47Z|00212|binding|INFO|Removing iface taped987e23-05 ovn-installed in OVS
Jan 20 14:40:47 compute-0 nova_compute[250018]: 2026-01-20 14:40:47.555 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:47 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:40:47.565 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fb:16:3a 10.100.0.9'], port_security=['fa:16:3e:fb:16:3a 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '9d050141-940b-4c59-8731-ca9d572d1127', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7e8c4393-f543-4223-8342-f22c66d7df5a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8e4d1d7361c94c429f75bf58a2dd432e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '67e9e5ff-fd97-4967-97b8-4831a398af6b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.184'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3a42eeb6-0fa0-44f1-b75d-f87ed89575c3, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=ed987e23-05ae-4b7e-a376-14db3bab9659) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:40:47 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:40:47.566 160071 INFO neutron.agent.ovn.metadata.agent [-] Port ed987e23-05ae-4b7e-a376-14db3bab9659 in datapath 7e8c4393-f543-4223-8342-f22c66d7df5a unbound from our chassis
Jan 20 14:40:47 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:40:47.567 160071 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7e8c4393-f543-4223-8342-f22c66d7df5a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 14:40:47 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:40:47.568 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[b91179e3-376a-4c33-a2b0-5397ddc46107]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:40:47 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:40:47.569 160071 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7e8c4393-f543-4223-8342-f22c66d7df5a namespace which is not needed anymore
Jan 20 14:40:47 compute-0 nova_compute[250018]: 2026-01-20 14:40:47.574 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:47 compute-0 systemd[1]: machine-qemu\x2d30\x2dinstance\x2d00000043.scope: Deactivated successfully.
Jan 20 14:40:47 compute-0 systemd[1]: machine-qemu\x2d30\x2dinstance\x2d00000043.scope: Consumed 15.241s CPU time.
Jan 20 14:40:47 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1573: 321 pgs: 321 active+clean; 252 MiB data, 710 MiB used, 20 GiB / 21 GiB avail; 322 KiB/s rd, 1.2 MiB/s wr, 75 op/s
Jan 20 14:40:47 compute-0 systemd-machined[216401]: Machine qemu-30-instance-00000043 terminated.
Jan 20 14:40:47 compute-0 neutron-haproxy-ovnmeta-7e8c4393-f543-4223-8342-f22c66d7df5a[291536]: [NOTICE]   (291540) : haproxy version is 2.8.14-c23fe91
Jan 20 14:40:47 compute-0 neutron-haproxy-ovnmeta-7e8c4393-f543-4223-8342-f22c66d7df5a[291536]: [NOTICE]   (291540) : path to executable is /usr/sbin/haproxy
Jan 20 14:40:47 compute-0 neutron-haproxy-ovnmeta-7e8c4393-f543-4223-8342-f22c66d7df5a[291536]: [WARNING]  (291540) : Exiting Master process...
Jan 20 14:40:47 compute-0 neutron-haproxy-ovnmeta-7e8c4393-f543-4223-8342-f22c66d7df5a[291536]: [ALERT]    (291540) : Current worker (291542) exited with code 143 (Terminated)
Jan 20 14:40:47 compute-0 neutron-haproxy-ovnmeta-7e8c4393-f543-4223-8342-f22c66d7df5a[291536]: [WARNING]  (291540) : All workers exited. Exiting... (0)
Jan 20 14:40:47 compute-0 systemd[1]: libpod-503d8ad0a17c019f03d94939f37ed9cb1e2eea004c1be28f16da1227568498eb.scope: Deactivated successfully.
Jan 20 14:40:47 compute-0 podman[292689]: 2026-01-20 14:40:47.707685589 +0000 UTC m=+0.056605654 container died 503d8ad0a17c019f03d94939f37ed9cb1e2eea004c1be28f16da1227568498eb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7e8c4393-f543-4223-8342-f22c66d7df5a, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 14:40:47 compute-0 nova_compute[250018]: 2026-01-20 14:40:47.710 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:47 compute-0 nova_compute[250018]: 2026-01-20 14:40:47.714 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:47 compute-0 nova_compute[250018]: 2026-01-20 14:40:47.728 250022 INFO nova.virt.libvirt.driver [-] [instance: 9d050141-940b-4c59-8731-ca9d572d1127] Instance destroyed successfully.
Jan 20 14:40:47 compute-0 nova_compute[250018]: 2026-01-20 14:40:47.728 250022 DEBUG nova.objects.instance [None req-63214daf-54fe-4539-a80c-bd04ec4f6e78 1145324e6a8d44f28828a922ee70933a 8e4d1d7361c94c429f75bf58a2dd432e - - default default] Lazy-loading 'resources' on Instance uuid 9d050141-940b-4c59-8731-ca9d572d1127 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:40:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-9b5f2644a3d84926e7af3695ff69e9e26de1c20af79b9f83342a039e5f2ea102-merged.mount: Deactivated successfully.
Jan 20 14:40:47 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-503d8ad0a17c019f03d94939f37ed9cb1e2eea004c1be28f16da1227568498eb-userdata-shm.mount: Deactivated successfully.
Jan 20 14:40:47 compute-0 podman[292689]: 2026-01-20 14:40:47.751700993 +0000 UTC m=+0.100621048 container cleanup 503d8ad0a17c019f03d94939f37ed9cb1e2eea004c1be28f16da1227568498eb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7e8c4393-f543-4223-8342-f22c66d7df5a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 20 14:40:47 compute-0 nova_compute[250018]: 2026-01-20 14:40:47.750 250022 DEBUG nova.virt.libvirt.vif [None req-63214daf-54fe-4539-a80c-bd04ec4f6e78 1145324e6a8d44f28828a922ee70933a 8e4d1d7361c94c429f75bf58a2dd432e - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:40:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='guest-instance-1',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx-guest-test.domaintest.com',id=67,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCFJgU7mCPKfkFgr/oKcVmlRRhJZj0QWIlBMfbPg1uRVsyziaMY8yTLUMmvLSt4t4kN6gHRbpZRaphoWHbFjBsO45c+rkSRuECrUvq9pyMwJAvZ1U5YGbNmZ5iOi/oGHAg==',key_name='tempest-keypair-1460970199',keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:40:11Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8e4d1d7361c94c429f75bf58a2dd432e',ramdisk_id='',reservation_id='r-0c7303nr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersV294TestFqdnHostnames-698951391',owner_user_name='tempest-ServersV294TestFqdnHostnames-698951391-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T14:40:11Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1145324e6a8d44f28828a922ee70933a',uuid=9d050141-940b-4c59-8731-ca9d572d1127,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ed987e23-05ae-4b7e-a376-14db3bab9659", "address": "fa:16:3e:fb:16:3a", "network": {"id": "7e8c4393-f543-4223-8342-f22c66d7df5a", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-1526085917-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.184", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8e4d1d7361c94c429f75bf58a2dd432e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "taped987e23-05", "ovs_interfaceid": "ed987e23-05ae-4b7e-a376-14db3bab9659", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 14:40:47 compute-0 nova_compute[250018]: 2026-01-20 14:40:47.752 250022 DEBUG nova.network.os_vif_util [None req-63214daf-54fe-4539-a80c-bd04ec4f6e78 1145324e6a8d44f28828a922ee70933a 8e4d1d7361c94c429f75bf58a2dd432e - - default default] Converting VIF {"id": "ed987e23-05ae-4b7e-a376-14db3bab9659", "address": "fa:16:3e:fb:16:3a", "network": {"id": "7e8c4393-f543-4223-8342-f22c66d7df5a", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-1526085917-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.184", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8e4d1d7361c94c429f75bf58a2dd432e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "taped987e23-05", "ovs_interfaceid": "ed987e23-05ae-4b7e-a376-14db3bab9659", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:40:47 compute-0 nova_compute[250018]: 2026-01-20 14:40:47.753 250022 DEBUG nova.network.os_vif_util [None req-63214daf-54fe-4539-a80c-bd04ec4f6e78 1145324e6a8d44f28828a922ee70933a 8e4d1d7361c94c429f75bf58a2dd432e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:fb:16:3a,bridge_name='br-int',has_traffic_filtering=True,id=ed987e23-05ae-4b7e-a376-14db3bab9659,network=Network(7e8c4393-f543-4223-8342-f22c66d7df5a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='taped987e23-05') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:40:47 compute-0 nova_compute[250018]: 2026-01-20 14:40:47.753 250022 DEBUG os_vif [None req-63214daf-54fe-4539-a80c-bd04ec4f6e78 1145324e6a8d44f28828a922ee70933a 8e4d1d7361c94c429f75bf58a2dd432e - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:fb:16:3a,bridge_name='br-int',has_traffic_filtering=True,id=ed987e23-05ae-4b7e-a376-14db3bab9659,network=Network(7e8c4393-f543-4223-8342-f22c66d7df5a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='taped987e23-05') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 14:40:47 compute-0 nova_compute[250018]: 2026-01-20 14:40:47.755 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:47 compute-0 nova_compute[250018]: 2026-01-20 14:40:47.756 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=taped987e23-05, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:40:47 compute-0 nova_compute[250018]: 2026-01-20 14:40:47.757 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:47 compute-0 systemd[1]: libpod-conmon-503d8ad0a17c019f03d94939f37ed9cb1e2eea004c1be28f16da1227568498eb.scope: Deactivated successfully.
Jan 20 14:40:47 compute-0 nova_compute[250018]: 2026-01-20 14:40:47.758 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:47 compute-0 nova_compute[250018]: 2026-01-20 14:40:47.761 250022 INFO os_vif [None req-63214daf-54fe-4539-a80c-bd04ec4f6e78 1145324e6a8d44f28828a922ee70933a 8e4d1d7361c94c429f75bf58a2dd432e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:fb:16:3a,bridge_name='br-int',has_traffic_filtering=True,id=ed987e23-05ae-4b7e-a376-14db3bab9659,network=Network(7e8c4393-f543-4223-8342-f22c66d7df5a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='taped987e23-05')
Jan 20 14:40:47 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:40:47 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:40:47 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:40:47.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:40:47 compute-0 podman[292728]: 2026-01-20 14:40:47.818631085 +0000 UTC m=+0.043518010 container remove 503d8ad0a17c019f03d94939f37ed9cb1e2eea004c1be28f16da1227568498eb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7e8c4393-f543-4223-8342-f22c66d7df5a, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 14:40:47 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:40:47.824 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[a81ebde3-7999-4ca6-8d24-642d3c1a2bf2]: (4, ('Tue Jan 20 02:40:47 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-7e8c4393-f543-4223-8342-f22c66d7df5a (503d8ad0a17c019f03d94939f37ed9cb1e2eea004c1be28f16da1227568498eb)\n503d8ad0a17c019f03d94939f37ed9cb1e2eea004c1be28f16da1227568498eb\nTue Jan 20 02:40:47 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-7e8c4393-f543-4223-8342-f22c66d7df5a (503d8ad0a17c019f03d94939f37ed9cb1e2eea004c1be28f16da1227568498eb)\n503d8ad0a17c019f03d94939f37ed9cb1e2eea004c1be28f16da1227568498eb\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:40:47 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:40:47.826 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[8ec29214-4d79-42c0-a0ff-7005be396af4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:40:47 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:40:47.827 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7e8c4393-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:40:47 compute-0 nova_compute[250018]: 2026-01-20 14:40:47.828 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:47 compute-0 kernel: tap7e8c4393-f0: left promiscuous mode
Jan 20 14:40:47 compute-0 nova_compute[250018]: 2026-01-20 14:40:47.842 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:47 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:40:47.845 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[39d99240-fadf-4c3c-b7f6-357a4fb213d9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:40:47 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:40:47.856 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[13d239e9-82eb-4584-8b19-a9c9c546a059]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:40:47 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:40:47.857 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[8b262eaf-2bc7-41dc-becd-224cf60463b4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:40:47 compute-0 nova_compute[250018]: 2026-01-20 14:40:47.867 250022 DEBUG nova.compute.manager [req-bb73c577-0dc2-4399-8cc0-6bbd6bb12f44 req-6858d37f-e8fb-423f-89a3-520c49e012f0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 9d050141-940b-4c59-8731-ca9d572d1127] Received event network-vif-unplugged-ed987e23-05ae-4b7e-a376-14db3bab9659 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:40:47 compute-0 nova_compute[250018]: 2026-01-20 14:40:47.868 250022 DEBUG oslo_concurrency.lockutils [req-bb73c577-0dc2-4399-8cc0-6bbd6bb12f44 req-6858d37f-e8fb-423f-89a3-520c49e012f0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "9d050141-940b-4c59-8731-ca9d572d1127-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:40:47 compute-0 nova_compute[250018]: 2026-01-20 14:40:47.868 250022 DEBUG oslo_concurrency.lockutils [req-bb73c577-0dc2-4399-8cc0-6bbd6bb12f44 req-6858d37f-e8fb-423f-89a3-520c49e012f0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "9d050141-940b-4c59-8731-ca9d572d1127-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:40:47 compute-0 nova_compute[250018]: 2026-01-20 14:40:47.868 250022 DEBUG oslo_concurrency.lockutils [req-bb73c577-0dc2-4399-8cc0-6bbd6bb12f44 req-6858d37f-e8fb-423f-89a3-520c49e012f0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "9d050141-940b-4c59-8731-ca9d572d1127-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:40:47 compute-0 nova_compute[250018]: 2026-01-20 14:40:47.869 250022 DEBUG nova.compute.manager [req-bb73c577-0dc2-4399-8cc0-6bbd6bb12f44 req-6858d37f-e8fb-423f-89a3-520c49e012f0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 9d050141-940b-4c59-8731-ca9d572d1127] No waiting events found dispatching network-vif-unplugged-ed987e23-05ae-4b7e-a376-14db3bab9659 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:40:47 compute-0 nova_compute[250018]: 2026-01-20 14:40:47.869 250022 DEBUG nova.compute.manager [req-bb73c577-0dc2-4399-8cc0-6bbd6bb12f44 req-6858d37f-e8fb-423f-89a3-520c49e012f0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 9d050141-940b-4c59-8731-ca9d572d1127] Received event network-vif-unplugged-ed987e23-05ae-4b7e-a376-14db3bab9659 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 20 14:40:47 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:40:47.873 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[14944564-cc09-4a7c-9b37-215680798da7]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 595178, 'reachable_time': 29842, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 292761, 'error': None, 'target': 'ovnmeta-7e8c4393-f543-4223-8342-f22c66d7df5a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:40:47 compute-0 systemd[1]: run-netns-ovnmeta\x2d7e8c4393\x2df543\x2d4223\x2d8342\x2df22c66d7df5a.mount: Deactivated successfully.
Jan 20 14:40:47 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:40:47.875 160517 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7e8c4393-f543-4223-8342-f22c66d7df5a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 14:40:47 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:40:47.875 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[417a5b45-7737-4bd0-8e2e-0bbeeb9ce91e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:40:48 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:40:48 compute-0 nova_compute[250018]: 2026-01-20 14:40:48.142 250022 INFO nova.virt.libvirt.driver [None req-63214daf-54fe-4539-a80c-bd04ec4f6e78 1145324e6a8d44f28828a922ee70933a 8e4d1d7361c94c429f75bf58a2dd432e - - default default] [instance: 9d050141-940b-4c59-8731-ca9d572d1127] Deleting instance files /var/lib/nova/instances/9d050141-940b-4c59-8731-ca9d572d1127_del
Jan 20 14:40:48 compute-0 nova_compute[250018]: 2026-01-20 14:40:48.143 250022 INFO nova.virt.libvirt.driver [None req-63214daf-54fe-4539-a80c-bd04ec4f6e78 1145324e6a8d44f28828a922ee70933a 8e4d1d7361c94c429f75bf58a2dd432e - - default default] [instance: 9d050141-940b-4c59-8731-ca9d572d1127] Deletion of /var/lib/nova/instances/9d050141-940b-4c59-8731-ca9d572d1127_del complete
Jan 20 14:40:48 compute-0 nova_compute[250018]: 2026-01-20 14:40:48.205 250022 INFO nova.compute.manager [None req-63214daf-54fe-4539-a80c-bd04ec4f6e78 1145324e6a8d44f28828a922ee70933a 8e4d1d7361c94c429f75bf58a2dd432e - - default default] [instance: 9d050141-940b-4c59-8731-ca9d572d1127] Took 0.71 seconds to destroy the instance on the hypervisor.
Jan 20 14:40:48 compute-0 nova_compute[250018]: 2026-01-20 14:40:48.205 250022 DEBUG oslo.service.loopingcall [None req-63214daf-54fe-4539-a80c-bd04ec4f6e78 1145324e6a8d44f28828a922ee70933a 8e4d1d7361c94c429f75bf58a2dd432e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 14:40:48 compute-0 nova_compute[250018]: 2026-01-20 14:40:48.206 250022 DEBUG nova.compute.manager [-] [instance: 9d050141-940b-4c59-8731-ca9d572d1127] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 14:40:48 compute-0 nova_compute[250018]: 2026-01-20 14:40:48.206 250022 DEBUG nova.network.neutron [-] [instance: 9d050141-940b-4c59-8731-ca9d572d1127] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 14:40:48 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:40:48 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:40:48 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:40:48.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:40:48 compute-0 ceph-mon[74360]: pgmap v1573: 321 pgs: 321 active+clean; 252 MiB data, 710 MiB used, 20 GiB / 21 GiB avail; 322 KiB/s rd, 1.2 MiB/s wr, 75 op/s
Jan 20 14:40:48 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/2116215996' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:40:49 compute-0 nova_compute[250018]: 2026-01-20 14:40:49.420 250022 DEBUG nova.network.neutron [-] [instance: 9d050141-940b-4c59-8731-ca9d572d1127] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:40:49 compute-0 nova_compute[250018]: 2026-01-20 14:40:49.434 250022 INFO nova.compute.manager [-] [instance: 9d050141-940b-4c59-8731-ca9d572d1127] Took 1.23 seconds to deallocate network for instance.
Jan 20 14:40:49 compute-0 nova_compute[250018]: 2026-01-20 14:40:49.474 250022 DEBUG oslo_concurrency.lockutils [None req-63214daf-54fe-4539-a80c-bd04ec4f6e78 1145324e6a8d44f28828a922ee70933a 8e4d1d7361c94c429f75bf58a2dd432e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:40:49 compute-0 nova_compute[250018]: 2026-01-20 14:40:49.475 250022 DEBUG oslo_concurrency.lockutils [None req-63214daf-54fe-4539-a80c-bd04ec4f6e78 1145324e6a8d44f28828a922ee70933a 8e4d1d7361c94c429f75bf58a2dd432e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:40:49 compute-0 nova_compute[250018]: 2026-01-20 14:40:49.557 250022 DEBUG oslo_concurrency.processutils [None req-63214daf-54fe-4539-a80c-bd04ec4f6e78 1145324e6a8d44f28828a922ee70933a 8e4d1d7361c94c429f75bf58a2dd432e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:40:49 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1574: 321 pgs: 321 active+clean; 221 MiB data, 697 MiB used, 20 GiB / 21 GiB avail; 182 KiB/s rd, 759 KiB/s wr, 69 op/s
Jan 20 14:40:49 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:40:49 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:40:49 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:40:49.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:40:49 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:40:49 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/695930938' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:40:49 compute-0 nova_compute[250018]: 2026-01-20 14:40:49.946 250022 DEBUG nova.compute.manager [req-6a7bb0f9-1ca1-4279-9138-831b40a1cf11 req-c31fc706-4cdf-484f-9ef4-f1af8f5272d3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 9d050141-940b-4c59-8731-ca9d572d1127] Received event network-vif-plugged-ed987e23-05ae-4b7e-a376-14db3bab9659 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:40:49 compute-0 nova_compute[250018]: 2026-01-20 14:40:49.946 250022 DEBUG oslo_concurrency.lockutils [req-6a7bb0f9-1ca1-4279-9138-831b40a1cf11 req-c31fc706-4cdf-484f-9ef4-f1af8f5272d3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "9d050141-940b-4c59-8731-ca9d572d1127-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:40:49 compute-0 nova_compute[250018]: 2026-01-20 14:40:49.947 250022 DEBUG oslo_concurrency.lockutils [req-6a7bb0f9-1ca1-4279-9138-831b40a1cf11 req-c31fc706-4cdf-484f-9ef4-f1af8f5272d3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "9d050141-940b-4c59-8731-ca9d572d1127-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:40:49 compute-0 nova_compute[250018]: 2026-01-20 14:40:49.947 250022 DEBUG oslo_concurrency.lockutils [req-6a7bb0f9-1ca1-4279-9138-831b40a1cf11 req-c31fc706-4cdf-484f-9ef4-f1af8f5272d3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "9d050141-940b-4c59-8731-ca9d572d1127-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:40:49 compute-0 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 20 14:40:49 compute-0 nova_compute[250018]: 2026-01-20 14:40:49.947 250022 DEBUG nova.compute.manager [req-6a7bb0f9-1ca1-4279-9138-831b40a1cf11 req-c31fc706-4cdf-484f-9ef4-f1af8f5272d3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 9d050141-940b-4c59-8731-ca9d572d1127] No waiting events found dispatching network-vif-plugged-ed987e23-05ae-4b7e-a376-14db3bab9659 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:40:49 compute-0 nova_compute[250018]: 2026-01-20 14:40:49.947 250022 WARNING nova.compute.manager [req-6a7bb0f9-1ca1-4279-9138-831b40a1cf11 req-c31fc706-4cdf-484f-9ef4-f1af8f5272d3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 9d050141-940b-4c59-8731-ca9d572d1127] Received unexpected event network-vif-plugged-ed987e23-05ae-4b7e-a376-14db3bab9659 for instance with vm_state deleted and task_state None.
Jan 20 14:40:49 compute-0 nova_compute[250018]: 2026-01-20 14:40:49.966 250022 DEBUG oslo_concurrency.processutils [None req-63214daf-54fe-4539-a80c-bd04ec4f6e78 1145324e6a8d44f28828a922ee70933a 8e4d1d7361c94c429f75bf58a2dd432e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.409s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:40:49 compute-0 nova_compute[250018]: 2026-01-20 14:40:49.971 250022 DEBUG nova.compute.provider_tree [None req-63214daf-54fe-4539-a80c-bd04ec4f6e78 1145324e6a8d44f28828a922ee70933a 8e4d1d7361c94c429f75bf58a2dd432e - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:40:49 compute-0 nova_compute[250018]: 2026-01-20 14:40:49.991 250022 DEBUG nova.scheduler.client.report [None req-63214daf-54fe-4539-a80c-bd04ec4f6e78 1145324e6a8d44f28828a922ee70933a 8e4d1d7361c94c429f75bf58a2dd432e - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:40:50 compute-0 nova_compute[250018]: 2026-01-20 14:40:50.013 250022 DEBUG oslo_concurrency.lockutils [None req-63214daf-54fe-4539-a80c-bd04ec4f6e78 1145324e6a8d44f28828a922ee70933a 8e4d1d7361c94c429f75bf58a2dd432e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.538s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:40:50 compute-0 nova_compute[250018]: 2026-01-20 14:40:50.094 250022 INFO nova.scheduler.client.report [None req-63214daf-54fe-4539-a80c-bd04ec4f6e78 1145324e6a8d44f28828a922ee70933a 8e4d1d7361c94c429f75bf58a2dd432e - - default default] Deleted allocations for instance 9d050141-940b-4c59-8731-ca9d572d1127
Jan 20 14:40:50 compute-0 nova_compute[250018]: 2026-01-20 14:40:50.160 250022 DEBUG oslo_concurrency.lockutils [None req-63214daf-54fe-4539-a80c-bd04ec4f6e78 1145324e6a8d44f28828a922ee70933a 8e4d1d7361c94c429f75bf58a2dd432e - - default default] Lock "9d050141-940b-4c59-8731-ca9d572d1127" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.672s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:40:50 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:40:50 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:40:50 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:40:50.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:40:50 compute-0 ceph-mon[74360]: pgmap v1574: 321 pgs: 321 active+clean; 221 MiB data, 697 MiB used, 20 GiB / 21 GiB avail; 182 KiB/s rd, 759 KiB/s wr, 69 op/s
Jan 20 14:40:50 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/695930938' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:40:51 compute-0 nova_compute[250018]: 2026-01-20 14:40:51.431 250022 DEBUG nova.compute.manager [req-43f17028-057b-4cb0-892d-593b288e18ba req-2028d6b3-671f-4c5d-ac7c-4d0565a3c4a7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 9d050141-940b-4c59-8731-ca9d572d1127] Received event network-vif-deleted-ed987e23-05ae-4b7e-a376-14db3bab9659 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:40:51 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1575: 321 pgs: 321 active+clean; 151 MiB data, 663 MiB used, 20 GiB / 21 GiB avail; 82 KiB/s rd, 73 KiB/s wr, 72 op/s
Jan 20 14:40:51 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:40:51 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:40:51 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:40:51.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:40:52 compute-0 nova_compute[250018]: 2026-01-20 14:40:52.035 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:40:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:40:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:40:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:40:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:40:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:40:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Optimize plan auto_2026-01-20_14:40:52
Jan 20 14:40:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 14:40:52 compute-0 ceph-mgr[74653]: [balancer INFO root] do_upmap
Jan 20 14:40:52 compute-0 ceph-mgr[74653]: [balancer INFO root] pools ['volumes', 'default.rgw.control', 'vms', 'images', '.rgw.root', '.mgr', 'cephfs.cephfs.meta', 'backups', 'default.rgw.meta', 'default.rgw.log', 'cephfs.cephfs.data']
Jan 20 14:40:52 compute-0 ceph-mgr[74653]: [balancer INFO root] prepared 0/10 changes
Jan 20 14:40:52 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:40:52 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:40:52 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:40:52.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:40:52 compute-0 nova_compute[250018]: 2026-01-20 14:40:52.757 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:52 compute-0 ceph-mon[74360]: pgmap v1575: 321 pgs: 321 active+clean; 151 MiB data, 663 MiB used, 20 GiB / 21 GiB avail; 82 KiB/s rd, 73 KiB/s wr, 72 op/s
Jan 20 14:40:53 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:40:53 compute-0 nova_compute[250018]: 2026-01-20 14:40:53.360 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:53 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1576: 321 pgs: 321 active+clean; 121 MiB data, 639 MiB used, 20 GiB / 21 GiB avail; 58 KiB/s rd, 66 KiB/s wr, 59 op/s
Jan 20 14:40:53 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:40:53 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:40:53 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:40:53.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:40:54 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:40:54 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:40:54 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:40:54.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:40:54 compute-0 ceph-mon[74360]: pgmap v1576: 321 pgs: 321 active+clean; 121 MiB data, 639 MiB used, 20 GiB / 21 GiB avail; 58 KiB/s rd, 66 KiB/s wr, 59 op/s
Jan 20 14:40:55 compute-0 nova_compute[250018]: 2026-01-20 14:40:55.130 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:55 compute-0 nova_compute[250018]: 2026-01-20 14:40:55.324 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:55 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1577: 321 pgs: 321 active+clean; 121 MiB data, 639 MiB used, 20 GiB / 21 GiB avail; 38 KiB/s rd, 19 KiB/s wr, 56 op/s
Jan 20 14:40:55 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:40:55 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:40:55 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:40:55.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:40:55 compute-0 nova_compute[250018]: 2026-01-20 14:40:55.883 250022 DEBUG oslo_concurrency.lockutils [None req-64cde76b-cb70-4e52-87b4-50b8f363e087 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Acquiring lock "86aa2fb7-c532-46b4-a02b-8070608dfe6b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:40:55 compute-0 nova_compute[250018]: 2026-01-20 14:40:55.884 250022 DEBUG oslo_concurrency.lockutils [None req-64cde76b-cb70-4e52-87b4-50b8f363e087 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Lock "86aa2fb7-c532-46b4-a02b-8070608dfe6b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:40:55 compute-0 nova_compute[250018]: 2026-01-20 14:40:55.900 250022 DEBUG nova.compute.manager [None req-64cde76b-cb70-4e52-87b4-50b8f363e087 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: 86aa2fb7-c532-46b4-a02b-8070608dfe6b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 14:40:55 compute-0 nova_compute[250018]: 2026-01-20 14:40:55.996 250022 DEBUG oslo_concurrency.lockutils [None req-64cde76b-cb70-4e52-87b4-50b8f363e087 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:40:55 compute-0 nova_compute[250018]: 2026-01-20 14:40:55.997 250022 DEBUG oslo_concurrency.lockutils [None req-64cde76b-cb70-4e52-87b4-50b8f363e087 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:40:56 compute-0 nova_compute[250018]: 2026-01-20 14:40:56.002 250022 DEBUG nova.virt.hardware [None req-64cde76b-cb70-4e52-87b4-50b8f363e087 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 14:40:56 compute-0 nova_compute[250018]: 2026-01-20 14:40:56.002 250022 INFO nova.compute.claims [None req-64cde76b-cb70-4e52-87b4-50b8f363e087 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: 86aa2fb7-c532-46b4-a02b-8070608dfe6b] Claim successful on node compute-0.ctlplane.example.com
Jan 20 14:40:56 compute-0 nova_compute[250018]: 2026-01-20 14:40:56.117 250022 DEBUG oslo_concurrency.processutils [None req-64cde76b-cb70-4e52-87b4-50b8f363e087 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:40:56 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:40:56 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2659128134' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:40:56 compute-0 nova_compute[250018]: 2026-01-20 14:40:56.539 250022 DEBUG oslo_concurrency.processutils [None req-64cde76b-cb70-4e52-87b4-50b8f363e087 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:40:56 compute-0 nova_compute[250018]: 2026-01-20 14:40:56.544 250022 DEBUG nova.compute.provider_tree [None req-64cde76b-cb70-4e52-87b4-50b8f363e087 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:40:56 compute-0 nova_compute[250018]: 2026-01-20 14:40:56.562 250022 DEBUG nova.scheduler.client.report [None req-64cde76b-cb70-4e52-87b4-50b8f363e087 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:40:56 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:40:56 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:40:56 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:40:56.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:40:56 compute-0 nova_compute[250018]: 2026-01-20 14:40:56.593 250022 DEBUG oslo_concurrency.lockutils [None req-64cde76b-cb70-4e52-87b4-50b8f363e087 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.596s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:40:56 compute-0 nova_compute[250018]: 2026-01-20 14:40:56.594 250022 DEBUG nova.compute.manager [None req-64cde76b-cb70-4e52-87b4-50b8f363e087 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: 86aa2fb7-c532-46b4-a02b-8070608dfe6b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 14:40:56 compute-0 nova_compute[250018]: 2026-01-20 14:40:56.640 250022 DEBUG nova.compute.manager [None req-64cde76b-cb70-4e52-87b4-50b8f363e087 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: 86aa2fb7-c532-46b4-a02b-8070608dfe6b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 20 14:40:56 compute-0 nova_compute[250018]: 2026-01-20 14:40:56.641 250022 DEBUG nova.network.neutron [None req-64cde76b-cb70-4e52-87b4-50b8f363e087 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: 86aa2fb7-c532-46b4-a02b-8070608dfe6b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 14:40:56 compute-0 nova_compute[250018]: 2026-01-20 14:40:56.662 250022 INFO nova.virt.libvirt.driver [None req-64cde76b-cb70-4e52-87b4-50b8f363e087 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: 86aa2fb7-c532-46b4-a02b-8070608dfe6b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 14:40:56 compute-0 nova_compute[250018]: 2026-01-20 14:40:56.680 250022 DEBUG nova.compute.manager [None req-64cde76b-cb70-4e52-87b4-50b8f363e087 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: 86aa2fb7-c532-46b4-a02b-8070608dfe6b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 14:40:56 compute-0 nova_compute[250018]: 2026-01-20 14:40:56.802 250022 DEBUG nova.compute.manager [None req-64cde76b-cb70-4e52-87b4-50b8f363e087 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: 86aa2fb7-c532-46b4-a02b-8070608dfe6b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 14:40:56 compute-0 nova_compute[250018]: 2026-01-20 14:40:56.803 250022 DEBUG nova.virt.libvirt.driver [None req-64cde76b-cb70-4e52-87b4-50b8f363e087 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: 86aa2fb7-c532-46b4-a02b-8070608dfe6b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 14:40:56 compute-0 nova_compute[250018]: 2026-01-20 14:40:56.803 250022 INFO nova.virt.libvirt.driver [None req-64cde76b-cb70-4e52-87b4-50b8f363e087 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: 86aa2fb7-c532-46b4-a02b-8070608dfe6b] Creating image(s)
Jan 20 14:40:56 compute-0 ceph-mon[74360]: pgmap v1577: 321 pgs: 321 active+clean; 121 MiB data, 639 MiB used, 20 GiB / 21 GiB avail; 38 KiB/s rd, 19 KiB/s wr, 56 op/s
Jan 20 14:40:56 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/2659128134' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:40:56 compute-0 nova_compute[250018]: 2026-01-20 14:40:56.831 250022 DEBUG nova.storage.rbd_utils [None req-64cde76b-cb70-4e52-87b4-50b8f363e087 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] rbd image 86aa2fb7-c532-46b4-a02b-8070608dfe6b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:40:56 compute-0 nova_compute[250018]: 2026-01-20 14:40:56.855 250022 DEBUG nova.storage.rbd_utils [None req-64cde76b-cb70-4e52-87b4-50b8f363e087 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] rbd image 86aa2fb7-c532-46b4-a02b-8070608dfe6b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:40:56 compute-0 nova_compute[250018]: 2026-01-20 14:40:56.877 250022 DEBUG nova.storage.rbd_utils [None req-64cde76b-cb70-4e52-87b4-50b8f363e087 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] rbd image 86aa2fb7-c532-46b4-a02b-8070608dfe6b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:40:56 compute-0 nova_compute[250018]: 2026-01-20 14:40:56.880 250022 DEBUG oslo_concurrency.processutils [None req-64cde76b-cb70-4e52-87b4-50b8f363e087 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:40:56 compute-0 nova_compute[250018]: 2026-01-20 14:40:56.936 250022 DEBUG oslo_concurrency.processutils [None req-64cde76b-cb70-4e52-87b4-50b8f363e087 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:40:56 compute-0 nova_compute[250018]: 2026-01-20 14:40:56.937 250022 DEBUG oslo_concurrency.lockutils [None req-64cde76b-cb70-4e52-87b4-50b8f363e087 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Acquiring lock "82d5c1918fd7c974214c7a48c1793a7a82560462" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:40:56 compute-0 nova_compute[250018]: 2026-01-20 14:40:56.938 250022 DEBUG oslo_concurrency.lockutils [None req-64cde76b-cb70-4e52-87b4-50b8f363e087 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:40:56 compute-0 nova_compute[250018]: 2026-01-20 14:40:56.938 250022 DEBUG oslo_concurrency.lockutils [None req-64cde76b-cb70-4e52-87b4-50b8f363e087 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:40:56 compute-0 nova_compute[250018]: 2026-01-20 14:40:56.960 250022 DEBUG nova.storage.rbd_utils [None req-64cde76b-cb70-4e52-87b4-50b8f363e087 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] rbd image 86aa2fb7-c532-46b4-a02b-8070608dfe6b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:40:56 compute-0 nova_compute[250018]: 2026-01-20 14:40:56.963 250022 DEBUG oslo_concurrency.processutils [None req-64cde76b-cb70-4e52-87b4-50b8f363e087 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 86aa2fb7-c532-46b4-a02b-8070608dfe6b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:40:57 compute-0 nova_compute[250018]: 2026-01-20 14:40:57.037 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:57 compute-0 nova_compute[250018]: 2026-01-20 14:40:57.373 250022 DEBUG nova.policy [None req-64cde76b-cb70-4e52-87b4-50b8f363e087 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c8a9fb458d27434495a77a94827b6097', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e3f93fd4b2154dda9f38e62334904303', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 20 14:40:57 compute-0 nova_compute[250018]: 2026-01-20 14:40:57.401 250022 DEBUG oslo_concurrency.processutils [None req-64cde76b-cb70-4e52-87b4-50b8f363e087 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 86aa2fb7-c532-46b4-a02b-8070608dfe6b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:40:57 compute-0 nova_compute[250018]: 2026-01-20 14:40:57.473 250022 DEBUG nova.storage.rbd_utils [None req-64cde76b-cb70-4e52-87b4-50b8f363e087 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] resizing rbd image 86aa2fb7-c532-46b4-a02b-8070608dfe6b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 20 14:40:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 14:40:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 14:40:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:40:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:40:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:40:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:40:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:40:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:40:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:40:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:40:57 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1578: 321 pgs: 321 active+clean; 121 MiB data, 640 MiB used, 20 GiB / 21 GiB avail; 38 KiB/s rd, 15 KiB/s wr, 55 op/s
Jan 20 14:40:57 compute-0 nova_compute[250018]: 2026-01-20 14:40:57.759 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:40:57 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:40:57 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:40:57 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:40:57.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:40:57 compute-0 nova_compute[250018]: 2026-01-20 14:40:57.880 250022 DEBUG nova.objects.instance [None req-64cde76b-cb70-4e52-87b4-50b8f363e087 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Lazy-loading 'migration_context' on Instance uuid 86aa2fb7-c532-46b4-a02b-8070608dfe6b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:40:57 compute-0 nova_compute[250018]: 2026-01-20 14:40:57.896 250022 DEBUG nova.virt.libvirt.driver [None req-64cde76b-cb70-4e52-87b4-50b8f363e087 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: 86aa2fb7-c532-46b4-a02b-8070608dfe6b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 14:40:57 compute-0 nova_compute[250018]: 2026-01-20 14:40:57.897 250022 DEBUG nova.virt.libvirt.driver [None req-64cde76b-cb70-4e52-87b4-50b8f363e087 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: 86aa2fb7-c532-46b4-a02b-8070608dfe6b] Ensure instance console log exists: /var/lib/nova/instances/86aa2fb7-c532-46b4-a02b-8070608dfe6b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 14:40:57 compute-0 nova_compute[250018]: 2026-01-20 14:40:57.897 250022 DEBUG oslo_concurrency.lockutils [None req-64cde76b-cb70-4e52-87b4-50b8f363e087 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:40:57 compute-0 nova_compute[250018]: 2026-01-20 14:40:57.898 250022 DEBUG oslo_concurrency.lockutils [None req-64cde76b-cb70-4e52-87b4-50b8f363e087 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:40:57 compute-0 nova_compute[250018]: 2026-01-20 14:40:57.898 250022 DEBUG oslo_concurrency.lockutils [None req-64cde76b-cb70-4e52-87b4-50b8f363e087 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:40:58 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:40:58 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:40:58 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:40:58 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:40:58.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:40:58 compute-0 nova_compute[250018]: 2026-01-20 14:40:58.768 250022 DEBUG nova.network.neutron [None req-64cde76b-cb70-4e52-87b4-50b8f363e087 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: 86aa2fb7-c532-46b4-a02b-8070608dfe6b] Successfully created port: 2a804e48-646b-4b9e-8a59-155024ec39ac _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 20 14:40:59 compute-0 ceph-mon[74360]: pgmap v1578: 321 pgs: 321 active+clean; 121 MiB data, 640 MiB used, 20 GiB / 21 GiB avail; 38 KiB/s rd, 15 KiB/s wr, 55 op/s
Jan 20 14:40:59 compute-0 nova_compute[250018]: 2026-01-20 14:40:59.594 250022 DEBUG nova.network.neutron [None req-64cde76b-cb70-4e52-87b4-50b8f363e087 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: 86aa2fb7-c532-46b4-a02b-8070608dfe6b] Successfully updated port: 2a804e48-646b-4b9e-8a59-155024ec39ac _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 20 14:40:59 compute-0 nova_compute[250018]: 2026-01-20 14:40:59.618 250022 DEBUG oslo_concurrency.lockutils [None req-64cde76b-cb70-4e52-87b4-50b8f363e087 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Acquiring lock "refresh_cache-86aa2fb7-c532-46b4-a02b-8070608dfe6b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:40:59 compute-0 nova_compute[250018]: 2026-01-20 14:40:59.619 250022 DEBUG oslo_concurrency.lockutils [None req-64cde76b-cb70-4e52-87b4-50b8f363e087 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Acquired lock "refresh_cache-86aa2fb7-c532-46b4-a02b-8070608dfe6b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:40:59 compute-0 nova_compute[250018]: 2026-01-20 14:40:59.619 250022 DEBUG nova.network.neutron [None req-64cde76b-cb70-4e52-87b4-50b8f363e087 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: 86aa2fb7-c532-46b4-a02b-8070608dfe6b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:40:59 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1579: 321 pgs: 321 active+clean; 124 MiB data, 642 MiB used, 20 GiB / 21 GiB avail; 43 KiB/s rd, 181 KiB/s wr, 64 op/s
Jan 20 14:40:59 compute-0 nova_compute[250018]: 2026-01-20 14:40:59.734 250022 DEBUG nova.compute.manager [req-cf1ab8b8-d125-43d4-b8fa-c1d56921971b req-e750fa5e-e12b-4970-9455-d9a9b2114e77 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 86aa2fb7-c532-46b4-a02b-8070608dfe6b] Received event network-changed-2a804e48-646b-4b9e-8a59-155024ec39ac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:40:59 compute-0 nova_compute[250018]: 2026-01-20 14:40:59.734 250022 DEBUG nova.compute.manager [req-cf1ab8b8-d125-43d4-b8fa-c1d56921971b req-e750fa5e-e12b-4970-9455-d9a9b2114e77 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 86aa2fb7-c532-46b4-a02b-8070608dfe6b] Refreshing instance network info cache due to event network-changed-2a804e48-646b-4b9e-8a59-155024ec39ac. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:40:59 compute-0 nova_compute[250018]: 2026-01-20 14:40:59.735 250022 DEBUG oslo_concurrency.lockutils [req-cf1ab8b8-d125-43d4-b8fa-c1d56921971b req-e750fa5e-e12b-4970-9455-d9a9b2114e77 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-86aa2fb7-c532-46b4-a02b-8070608dfe6b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:40:59 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:40:59 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:40:59 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:40:59.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:40:59 compute-0 nova_compute[250018]: 2026-01-20 14:40:59.860 250022 DEBUG nova.network.neutron [None req-64cde76b-cb70-4e52-87b4-50b8f363e087 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: 86aa2fb7-c532-46b4-a02b-8070608dfe6b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 14:41:00 compute-0 ceph-mon[74360]: pgmap v1579: 321 pgs: 321 active+clean; 124 MiB data, 642 MiB used, 20 GiB / 21 GiB avail; 43 KiB/s rd, 181 KiB/s wr, 64 op/s
Jan 20 14:41:00 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:41:00 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:41:00 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:41:00.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:41:01 compute-0 nova_compute[250018]: 2026-01-20 14:41:01.326 250022 DEBUG nova.network.neutron [None req-64cde76b-cb70-4e52-87b4-50b8f363e087 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: 86aa2fb7-c532-46b4-a02b-8070608dfe6b] Updating instance_info_cache with network_info: [{"id": "2a804e48-646b-4b9e-8a59-155024ec39ac", "address": "fa:16:3e:48:06:d8", "network": {"id": "fc21b99b-4e34-422c-be05-0a440009dac4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-808285772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3f93fd4b2154dda9f38e62334904303", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a804e48-64", "ovs_interfaceid": "2a804e48-646b-4b9e-8a59-155024ec39ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:41:01 compute-0 nova_compute[250018]: 2026-01-20 14:41:01.370 250022 DEBUG oslo_concurrency.lockutils [None req-64cde76b-cb70-4e52-87b4-50b8f363e087 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Releasing lock "refresh_cache-86aa2fb7-c532-46b4-a02b-8070608dfe6b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:41:01 compute-0 nova_compute[250018]: 2026-01-20 14:41:01.371 250022 DEBUG nova.compute.manager [None req-64cde76b-cb70-4e52-87b4-50b8f363e087 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: 86aa2fb7-c532-46b4-a02b-8070608dfe6b] Instance network_info: |[{"id": "2a804e48-646b-4b9e-8a59-155024ec39ac", "address": "fa:16:3e:48:06:d8", "network": {"id": "fc21b99b-4e34-422c-be05-0a440009dac4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-808285772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3f93fd4b2154dda9f38e62334904303", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a804e48-64", "ovs_interfaceid": "2a804e48-646b-4b9e-8a59-155024ec39ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 20 14:41:01 compute-0 nova_compute[250018]: 2026-01-20 14:41:01.371 250022 DEBUG oslo_concurrency.lockutils [req-cf1ab8b8-d125-43d4-b8fa-c1d56921971b req-e750fa5e-e12b-4970-9455-d9a9b2114e77 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-86aa2fb7-c532-46b4-a02b-8070608dfe6b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:41:01 compute-0 nova_compute[250018]: 2026-01-20 14:41:01.372 250022 DEBUG nova.network.neutron [req-cf1ab8b8-d125-43d4-b8fa-c1d56921971b req-e750fa5e-e12b-4970-9455-d9a9b2114e77 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 86aa2fb7-c532-46b4-a02b-8070608dfe6b] Refreshing network info cache for port 2a804e48-646b-4b9e-8a59-155024ec39ac _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:41:01 compute-0 nova_compute[250018]: 2026-01-20 14:41:01.375 250022 DEBUG nova.virt.libvirt.driver [None req-64cde76b-cb70-4e52-87b4-50b8f363e087 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: 86aa2fb7-c532-46b4-a02b-8070608dfe6b] Start _get_guest_xml network_info=[{"id": "2a804e48-646b-4b9e-8a59-155024ec39ac", "address": "fa:16:3e:48:06:d8", "network": {"id": "fc21b99b-4e34-422c-be05-0a440009dac4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-808285772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3f93fd4b2154dda9f38e62334904303", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a804e48-64", "ovs_interfaceid": "2a804e48-646b-4b9e-8a59-155024ec39ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encrypted': False, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'guest_format': None, 'size': 0, 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 14:41:01 compute-0 nova_compute[250018]: 2026-01-20 14:41:01.379 250022 WARNING nova.virt.libvirt.driver [None req-64cde76b-cb70-4e52-87b4-50b8f363e087 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:41:01 compute-0 nova_compute[250018]: 2026-01-20 14:41:01.386 250022 DEBUG nova.virt.libvirt.host [None req-64cde76b-cb70-4e52-87b4-50b8f363e087 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 14:41:01 compute-0 nova_compute[250018]: 2026-01-20 14:41:01.387 250022 DEBUG nova.virt.libvirt.host [None req-64cde76b-cb70-4e52-87b4-50b8f363e087 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 14:41:01 compute-0 nova_compute[250018]: 2026-01-20 14:41:01.389 250022 DEBUG nova.virt.libvirt.host [None req-64cde76b-cb70-4e52-87b4-50b8f363e087 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 14:41:01 compute-0 nova_compute[250018]: 2026-01-20 14:41:01.390 250022 DEBUG nova.virt.libvirt.host [None req-64cde76b-cb70-4e52-87b4-50b8f363e087 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 14:41:01 compute-0 nova_compute[250018]: 2026-01-20 14:41:01.391 250022 DEBUG nova.virt.libvirt.driver [None req-64cde76b-cb70-4e52-87b4-50b8f363e087 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 14:41:01 compute-0 nova_compute[250018]: 2026-01-20 14:41:01.391 250022 DEBUG nova.virt.hardware [None req-64cde76b-cb70-4e52-87b4-50b8f363e087 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 14:41:01 compute-0 nova_compute[250018]: 2026-01-20 14:41:01.391 250022 DEBUG nova.virt.hardware [None req-64cde76b-cb70-4e52-87b4-50b8f363e087 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 14:41:01 compute-0 nova_compute[250018]: 2026-01-20 14:41:01.392 250022 DEBUG nova.virt.hardware [None req-64cde76b-cb70-4e52-87b4-50b8f363e087 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 14:41:01 compute-0 nova_compute[250018]: 2026-01-20 14:41:01.392 250022 DEBUG nova.virt.hardware [None req-64cde76b-cb70-4e52-87b4-50b8f363e087 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 14:41:01 compute-0 nova_compute[250018]: 2026-01-20 14:41:01.392 250022 DEBUG nova.virt.hardware [None req-64cde76b-cb70-4e52-87b4-50b8f363e087 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 14:41:01 compute-0 nova_compute[250018]: 2026-01-20 14:41:01.392 250022 DEBUG nova.virt.hardware [None req-64cde76b-cb70-4e52-87b4-50b8f363e087 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 14:41:01 compute-0 nova_compute[250018]: 2026-01-20 14:41:01.393 250022 DEBUG nova.virt.hardware [None req-64cde76b-cb70-4e52-87b4-50b8f363e087 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 14:41:01 compute-0 nova_compute[250018]: 2026-01-20 14:41:01.393 250022 DEBUG nova.virt.hardware [None req-64cde76b-cb70-4e52-87b4-50b8f363e087 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 14:41:01 compute-0 nova_compute[250018]: 2026-01-20 14:41:01.393 250022 DEBUG nova.virt.hardware [None req-64cde76b-cb70-4e52-87b4-50b8f363e087 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 14:41:01 compute-0 nova_compute[250018]: 2026-01-20 14:41:01.393 250022 DEBUG nova.virt.hardware [None req-64cde76b-cb70-4e52-87b4-50b8f363e087 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 14:41:01 compute-0 nova_compute[250018]: 2026-01-20 14:41:01.394 250022 DEBUG nova.virt.hardware [None req-64cde76b-cb70-4e52-87b4-50b8f363e087 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 14:41:01 compute-0 nova_compute[250018]: 2026-01-20 14:41:01.396 250022 DEBUG oslo_concurrency.processutils [None req-64cde76b-cb70-4e52-87b4-50b8f363e087 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:41:01 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1580: 321 pgs: 321 active+clean; 114 MiB data, 630 MiB used, 20 GiB / 21 GiB avail; 50 KiB/s rd, 948 KiB/s wr, 75 op/s
Jan 20 14:41:01 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:41:01 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:41:01 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:41:01.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:41:01 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:41:01 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1601011872' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:41:01 compute-0 nova_compute[250018]: 2026-01-20 14:41:01.851 250022 DEBUG oslo_concurrency.processutils [None req-64cde76b-cb70-4e52-87b4-50b8f363e087 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:41:01 compute-0 nova_compute[250018]: 2026-01-20 14:41:01.879 250022 DEBUG nova.storage.rbd_utils [None req-64cde76b-cb70-4e52-87b4-50b8f363e087 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] rbd image 86aa2fb7-c532-46b4-a02b-8070608dfe6b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:41:01 compute-0 nova_compute[250018]: 2026-01-20 14:41:01.883 250022 DEBUG oslo_concurrency.processutils [None req-64cde76b-cb70-4e52-87b4-50b8f363e087 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:41:01 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/1609353612' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:41:02 compute-0 nova_compute[250018]: 2026-01-20 14:41:02.038 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:41:02 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:41:02 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2739494036' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:41:02 compute-0 nova_compute[250018]: 2026-01-20 14:41:02.320 250022 DEBUG oslo_concurrency.processutils [None req-64cde76b-cb70-4e52-87b4-50b8f363e087 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:41:02 compute-0 nova_compute[250018]: 2026-01-20 14:41:02.323 250022 DEBUG nova.virt.libvirt.vif [None req-64cde76b-cb70-4e52-87b4-50b8f363e087 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:40:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-857473992',display_name='tempest-tempest.common.compute-instance-857473992',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-857473992',id=69,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDk9SkW5N7MhrGaZslG18EJ7xoBof9PQa4upjUw+XxfbO5rNOjJYMJtJMRGPfgbl1pwAZZD7LHjNNMRFKVo+T+C8Rnr+HXWsPYQmvPGwjjZ++NXvRdqES1LIbRDiwaFMJQ==',key_name='tempest-keypair-1970360297',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e3f93fd4b2154dda9f38e62334904303',ramdisk_id='',reservation_id='r-2xutoy4e',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-305746947',owner_user_name='tempest-AttachInterfacesTestJSON-305746947-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:40:56Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c8a9fb458d27434495a77a94827b6097',uuid=86aa2fb7-c532-46b4-a02b-8070608dfe6b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2a804e48-646b-4b9e-8a59-155024ec39ac", "address": "fa:16:3e:48:06:d8", "network": {"id": "fc21b99b-4e34-422c-be05-0a440009dac4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-808285772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3f93fd4b2154dda9f38e62334904303", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a804e48-64", "ovs_interfaceid": "2a804e48-646b-4b9e-8a59-155024ec39ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 14:41:02 compute-0 nova_compute[250018]: 2026-01-20 14:41:02.323 250022 DEBUG nova.network.os_vif_util [None req-64cde76b-cb70-4e52-87b4-50b8f363e087 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Converting VIF {"id": "2a804e48-646b-4b9e-8a59-155024ec39ac", "address": "fa:16:3e:48:06:d8", "network": {"id": "fc21b99b-4e34-422c-be05-0a440009dac4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-808285772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3f93fd4b2154dda9f38e62334904303", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a804e48-64", "ovs_interfaceid": "2a804e48-646b-4b9e-8a59-155024ec39ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:41:02 compute-0 nova_compute[250018]: 2026-01-20 14:41:02.325 250022 DEBUG nova.network.os_vif_util [None req-64cde76b-cb70-4e52-87b4-50b8f363e087 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:48:06:d8,bridge_name='br-int',has_traffic_filtering=True,id=2a804e48-646b-4b9e-8a59-155024ec39ac,network=Network(fc21b99b-4e34-422c-be05-0a440009dac4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a804e48-64') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:41:02 compute-0 nova_compute[250018]: 2026-01-20 14:41:02.326 250022 DEBUG nova.objects.instance [None req-64cde76b-cb70-4e52-87b4-50b8f363e087 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Lazy-loading 'pci_devices' on Instance uuid 86aa2fb7-c532-46b4-a02b-8070608dfe6b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:41:02 compute-0 nova_compute[250018]: 2026-01-20 14:41:02.343 250022 DEBUG nova.virt.libvirt.driver [None req-64cde76b-cb70-4e52-87b4-50b8f363e087 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: 86aa2fb7-c532-46b4-a02b-8070608dfe6b] End _get_guest_xml xml=<domain type="kvm">
Jan 20 14:41:02 compute-0 nova_compute[250018]:   <uuid>86aa2fb7-c532-46b4-a02b-8070608dfe6b</uuid>
Jan 20 14:41:02 compute-0 nova_compute[250018]:   <name>instance-00000045</name>
Jan 20 14:41:02 compute-0 nova_compute[250018]:   <memory>131072</memory>
Jan 20 14:41:02 compute-0 nova_compute[250018]:   <vcpu>1</vcpu>
Jan 20 14:41:02 compute-0 nova_compute[250018]:   <metadata>
Jan 20 14:41:02 compute-0 nova_compute[250018]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 14:41:02 compute-0 nova_compute[250018]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 14:41:02 compute-0 nova_compute[250018]:       <nova:name>tempest-tempest.common.compute-instance-857473992</nova:name>
Jan 20 14:41:02 compute-0 nova_compute[250018]:       <nova:creationTime>2026-01-20 14:41:01</nova:creationTime>
Jan 20 14:41:02 compute-0 nova_compute[250018]:       <nova:flavor name="m1.nano">
Jan 20 14:41:02 compute-0 nova_compute[250018]:         <nova:memory>128</nova:memory>
Jan 20 14:41:02 compute-0 nova_compute[250018]:         <nova:disk>1</nova:disk>
Jan 20 14:41:02 compute-0 nova_compute[250018]:         <nova:swap>0</nova:swap>
Jan 20 14:41:02 compute-0 nova_compute[250018]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 14:41:02 compute-0 nova_compute[250018]:         <nova:vcpus>1</nova:vcpus>
Jan 20 14:41:02 compute-0 nova_compute[250018]:       </nova:flavor>
Jan 20 14:41:02 compute-0 nova_compute[250018]:       <nova:owner>
Jan 20 14:41:02 compute-0 nova_compute[250018]:         <nova:user uuid="c8a9fb458d27434495a77a94827b6097">tempest-AttachInterfacesTestJSON-305746947-project-member</nova:user>
Jan 20 14:41:02 compute-0 nova_compute[250018]:         <nova:project uuid="e3f93fd4b2154dda9f38e62334904303">tempest-AttachInterfacesTestJSON-305746947</nova:project>
Jan 20 14:41:02 compute-0 nova_compute[250018]:       </nova:owner>
Jan 20 14:41:02 compute-0 nova_compute[250018]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 14:41:02 compute-0 nova_compute[250018]:       <nova:ports>
Jan 20 14:41:02 compute-0 nova_compute[250018]:         <nova:port uuid="2a804e48-646b-4b9e-8a59-155024ec39ac">
Jan 20 14:41:02 compute-0 nova_compute[250018]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Jan 20 14:41:02 compute-0 nova_compute[250018]:         </nova:port>
Jan 20 14:41:02 compute-0 nova_compute[250018]:       </nova:ports>
Jan 20 14:41:02 compute-0 nova_compute[250018]:     </nova:instance>
Jan 20 14:41:02 compute-0 nova_compute[250018]:   </metadata>
Jan 20 14:41:02 compute-0 nova_compute[250018]:   <sysinfo type="smbios">
Jan 20 14:41:02 compute-0 nova_compute[250018]:     <system>
Jan 20 14:41:02 compute-0 nova_compute[250018]:       <entry name="manufacturer">RDO</entry>
Jan 20 14:41:02 compute-0 nova_compute[250018]:       <entry name="product">OpenStack Compute</entry>
Jan 20 14:41:02 compute-0 nova_compute[250018]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 14:41:02 compute-0 nova_compute[250018]:       <entry name="serial">86aa2fb7-c532-46b4-a02b-8070608dfe6b</entry>
Jan 20 14:41:02 compute-0 nova_compute[250018]:       <entry name="uuid">86aa2fb7-c532-46b4-a02b-8070608dfe6b</entry>
Jan 20 14:41:02 compute-0 nova_compute[250018]:       <entry name="family">Virtual Machine</entry>
Jan 20 14:41:02 compute-0 nova_compute[250018]:     </system>
Jan 20 14:41:02 compute-0 nova_compute[250018]:   </sysinfo>
Jan 20 14:41:02 compute-0 nova_compute[250018]:   <os>
Jan 20 14:41:02 compute-0 nova_compute[250018]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 14:41:02 compute-0 nova_compute[250018]:     <boot dev="hd"/>
Jan 20 14:41:02 compute-0 nova_compute[250018]:     <smbios mode="sysinfo"/>
Jan 20 14:41:02 compute-0 nova_compute[250018]:   </os>
Jan 20 14:41:02 compute-0 nova_compute[250018]:   <features>
Jan 20 14:41:02 compute-0 nova_compute[250018]:     <acpi/>
Jan 20 14:41:02 compute-0 nova_compute[250018]:     <apic/>
Jan 20 14:41:02 compute-0 nova_compute[250018]:     <vmcoreinfo/>
Jan 20 14:41:02 compute-0 nova_compute[250018]:   </features>
Jan 20 14:41:02 compute-0 nova_compute[250018]:   <clock offset="utc">
Jan 20 14:41:02 compute-0 nova_compute[250018]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 14:41:02 compute-0 nova_compute[250018]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 14:41:02 compute-0 nova_compute[250018]:     <timer name="hpet" present="no"/>
Jan 20 14:41:02 compute-0 nova_compute[250018]:   </clock>
Jan 20 14:41:02 compute-0 nova_compute[250018]:   <cpu mode="custom" match="exact">
Jan 20 14:41:02 compute-0 nova_compute[250018]:     <model>Nehalem</model>
Jan 20 14:41:02 compute-0 nova_compute[250018]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 14:41:02 compute-0 nova_compute[250018]:   </cpu>
Jan 20 14:41:02 compute-0 nova_compute[250018]:   <devices>
Jan 20 14:41:02 compute-0 nova_compute[250018]:     <disk type="network" device="disk">
Jan 20 14:41:02 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 14:41:02 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/86aa2fb7-c532-46b4-a02b-8070608dfe6b_disk">
Jan 20 14:41:02 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:41:02 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:41:02 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:41:02 compute-0 nova_compute[250018]:       </source>
Jan 20 14:41:02 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 14:41:02 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:41:02 compute-0 nova_compute[250018]:       </auth>
Jan 20 14:41:02 compute-0 nova_compute[250018]:       <target dev="vda" bus="virtio"/>
Jan 20 14:41:02 compute-0 nova_compute[250018]:     </disk>
Jan 20 14:41:02 compute-0 nova_compute[250018]:     <disk type="network" device="cdrom">
Jan 20 14:41:02 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 14:41:02 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/86aa2fb7-c532-46b4-a02b-8070608dfe6b_disk.config">
Jan 20 14:41:02 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:41:02 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:41:02 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:41:02 compute-0 nova_compute[250018]:       </source>
Jan 20 14:41:02 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 14:41:02 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:41:02 compute-0 nova_compute[250018]:       </auth>
Jan 20 14:41:02 compute-0 nova_compute[250018]:       <target dev="sda" bus="sata"/>
Jan 20 14:41:02 compute-0 nova_compute[250018]:     </disk>
Jan 20 14:41:02 compute-0 nova_compute[250018]:     <interface type="ethernet">
Jan 20 14:41:02 compute-0 nova_compute[250018]:       <mac address="fa:16:3e:48:06:d8"/>
Jan 20 14:41:02 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 14:41:02 compute-0 nova_compute[250018]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 14:41:02 compute-0 nova_compute[250018]:       <mtu size="1442"/>
Jan 20 14:41:02 compute-0 nova_compute[250018]:       <target dev="tap2a804e48-64"/>
Jan 20 14:41:02 compute-0 nova_compute[250018]:     </interface>
Jan 20 14:41:02 compute-0 nova_compute[250018]:     <serial type="pty">
Jan 20 14:41:02 compute-0 nova_compute[250018]:       <log file="/var/lib/nova/instances/86aa2fb7-c532-46b4-a02b-8070608dfe6b/console.log" append="off"/>
Jan 20 14:41:02 compute-0 nova_compute[250018]:     </serial>
Jan 20 14:41:02 compute-0 nova_compute[250018]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 14:41:02 compute-0 nova_compute[250018]:     <video>
Jan 20 14:41:02 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 14:41:02 compute-0 nova_compute[250018]:     </video>
Jan 20 14:41:02 compute-0 nova_compute[250018]:     <input type="tablet" bus="usb"/>
Jan 20 14:41:02 compute-0 nova_compute[250018]:     <rng model="virtio">
Jan 20 14:41:02 compute-0 nova_compute[250018]:       <backend model="random">/dev/urandom</backend>
Jan 20 14:41:02 compute-0 nova_compute[250018]:     </rng>
Jan 20 14:41:02 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root"/>
Jan 20 14:41:02 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:41:02 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:41:02 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:41:02 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:41:02 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:41:02 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:41:02 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:41:02 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:41:02 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:41:02 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:41:02 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:41:02 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:41:02 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:41:02 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:41:02 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:41:02 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:41:02 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:41:02 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:41:02 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:41:02 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:41:02 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:41:02 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:41:02 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:41:02 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:41:02 compute-0 nova_compute[250018]:     <controller type="usb" index="0"/>
Jan 20 14:41:02 compute-0 nova_compute[250018]:     <memballoon model="virtio">
Jan 20 14:41:02 compute-0 nova_compute[250018]:       <stats period="10"/>
Jan 20 14:41:02 compute-0 nova_compute[250018]:     </memballoon>
Jan 20 14:41:02 compute-0 nova_compute[250018]:   </devices>
Jan 20 14:41:02 compute-0 nova_compute[250018]: </domain>
Jan 20 14:41:02 compute-0 nova_compute[250018]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 14:41:02 compute-0 nova_compute[250018]: 2026-01-20 14:41:02.344 250022 DEBUG nova.compute.manager [None req-64cde76b-cb70-4e52-87b4-50b8f363e087 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: 86aa2fb7-c532-46b4-a02b-8070608dfe6b] Preparing to wait for external event network-vif-plugged-2a804e48-646b-4b9e-8a59-155024ec39ac prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 20 14:41:02 compute-0 nova_compute[250018]: 2026-01-20 14:41:02.345 250022 DEBUG oslo_concurrency.lockutils [None req-64cde76b-cb70-4e52-87b4-50b8f363e087 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Acquiring lock "86aa2fb7-c532-46b4-a02b-8070608dfe6b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:41:02 compute-0 nova_compute[250018]: 2026-01-20 14:41:02.345 250022 DEBUG oslo_concurrency.lockutils [None req-64cde76b-cb70-4e52-87b4-50b8f363e087 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Lock "86aa2fb7-c532-46b4-a02b-8070608dfe6b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:41:02 compute-0 nova_compute[250018]: 2026-01-20 14:41:02.345 250022 DEBUG oslo_concurrency.lockutils [None req-64cde76b-cb70-4e52-87b4-50b8f363e087 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Lock "86aa2fb7-c532-46b4-a02b-8070608dfe6b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:41:02 compute-0 nova_compute[250018]: 2026-01-20 14:41:02.346 250022 DEBUG nova.virt.libvirt.vif [None req-64cde76b-cb70-4e52-87b4-50b8f363e087 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:40:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-857473992',display_name='tempest-tempest.common.compute-instance-857473992',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-857473992',id=69,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDk9SkW5N7MhrGaZslG18EJ7xoBof9PQa4upjUw+XxfbO5rNOjJYMJtJMRGPfgbl1pwAZZD7LHjNNMRFKVo+T+C8Rnr+HXWsPYQmvPGwjjZ++NXvRdqES1LIbRDiwaFMJQ==',key_name='tempest-keypair-1970360297',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e3f93fd4b2154dda9f38e62334904303',ramdisk_id='',reservation_id='r-2xutoy4e',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-305746947',owner_user_name='tempest-AttachInterfacesTestJSON-305746947-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:40:56Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c8a9fb458d27434495a77a94827b6097',uuid=86aa2fb7-c532-46b4-a02b-8070608dfe6b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2a804e48-646b-4b9e-8a59-155024ec39ac", "address": "fa:16:3e:48:06:d8", "network": {"id": "fc21b99b-4e34-422c-be05-0a440009dac4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-808285772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3f93fd4b2154dda9f38e62334904303", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a804e48-64", "ovs_interfaceid": "2a804e48-646b-4b9e-8a59-155024ec39ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 14:41:02 compute-0 nova_compute[250018]: 2026-01-20 14:41:02.347 250022 DEBUG nova.network.os_vif_util [None req-64cde76b-cb70-4e52-87b4-50b8f363e087 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Converting VIF {"id": "2a804e48-646b-4b9e-8a59-155024ec39ac", "address": "fa:16:3e:48:06:d8", "network": {"id": "fc21b99b-4e34-422c-be05-0a440009dac4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-808285772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3f93fd4b2154dda9f38e62334904303", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a804e48-64", "ovs_interfaceid": "2a804e48-646b-4b9e-8a59-155024ec39ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:41:02 compute-0 nova_compute[250018]: 2026-01-20 14:41:02.347 250022 DEBUG nova.network.os_vif_util [None req-64cde76b-cb70-4e52-87b4-50b8f363e087 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:48:06:d8,bridge_name='br-int',has_traffic_filtering=True,id=2a804e48-646b-4b9e-8a59-155024ec39ac,network=Network(fc21b99b-4e34-422c-be05-0a440009dac4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a804e48-64') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:41:02 compute-0 nova_compute[250018]: 2026-01-20 14:41:02.348 250022 DEBUG os_vif [None req-64cde76b-cb70-4e52-87b4-50b8f363e087 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:48:06:d8,bridge_name='br-int',has_traffic_filtering=True,id=2a804e48-646b-4b9e-8a59-155024ec39ac,network=Network(fc21b99b-4e34-422c-be05-0a440009dac4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a804e48-64') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 14:41:02 compute-0 nova_compute[250018]: 2026-01-20 14:41:02.349 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:41:02 compute-0 nova_compute[250018]: 2026-01-20 14:41:02.349 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:41:02 compute-0 nova_compute[250018]: 2026-01-20 14:41:02.350 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:41:02 compute-0 nova_compute[250018]: 2026-01-20 14:41:02.352 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:41:02 compute-0 nova_compute[250018]: 2026-01-20 14:41:02.353 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2a804e48-64, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:41:02 compute-0 nova_compute[250018]: 2026-01-20 14:41:02.353 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2a804e48-64, col_values=(('external_ids', {'iface-id': '2a804e48-646b-4b9e-8a59-155024ec39ac', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:48:06:d8', 'vm-uuid': '86aa2fb7-c532-46b4-a02b-8070608dfe6b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:41:02 compute-0 NetworkManager[48960]: <info>  [1768920062.3561] manager: (tap2a804e48-64): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/111)
Jan 20 14:41:02 compute-0 nova_compute[250018]: 2026-01-20 14:41:02.356 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:41:02 compute-0 nova_compute[250018]: 2026-01-20 14:41:02.360 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:41:02 compute-0 nova_compute[250018]: 2026-01-20 14:41:02.361 250022 INFO os_vif [None req-64cde76b-cb70-4e52-87b4-50b8f363e087 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:48:06:d8,bridge_name='br-int',has_traffic_filtering=True,id=2a804e48-646b-4b9e-8a59-155024ec39ac,network=Network(fc21b99b-4e34-422c-be05-0a440009dac4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a804e48-64')
Jan 20 14:41:02 compute-0 nova_compute[250018]: 2026-01-20 14:41:02.435 250022 DEBUG nova.virt.libvirt.driver [None req-64cde76b-cb70-4e52-87b4-50b8f363e087 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:41:02 compute-0 nova_compute[250018]: 2026-01-20 14:41:02.435 250022 DEBUG nova.virt.libvirt.driver [None req-64cde76b-cb70-4e52-87b4-50b8f363e087 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:41:02 compute-0 nova_compute[250018]: 2026-01-20 14:41:02.435 250022 DEBUG nova.virt.libvirt.driver [None req-64cde76b-cb70-4e52-87b4-50b8f363e087 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] No VIF found with MAC fa:16:3e:48:06:d8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 14:41:02 compute-0 nova_compute[250018]: 2026-01-20 14:41:02.435 250022 INFO nova.virt.libvirt.driver [None req-64cde76b-cb70-4e52-87b4-50b8f363e087 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: 86aa2fb7-c532-46b4-a02b-8070608dfe6b] Using config drive
Jan 20 14:41:02 compute-0 nova_compute[250018]: 2026-01-20 14:41:02.461 250022 DEBUG nova.storage.rbd_utils [None req-64cde76b-cb70-4e52-87b4-50b8f363e087 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] rbd image 86aa2fb7-c532-46b4-a02b-8070608dfe6b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:41:02 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:41:02 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:41:02 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:41:02.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:41:02 compute-0 nova_compute[250018]: 2026-01-20 14:41:02.726 250022 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768920047.7243612, 9d050141-940b-4c59-8731-ca9d572d1127 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:41:02 compute-0 nova_compute[250018]: 2026-01-20 14:41:02.726 250022 INFO nova.compute.manager [-] [instance: 9d050141-940b-4c59-8731-ca9d572d1127] VM Stopped (Lifecycle Event)
Jan 20 14:41:02 compute-0 nova_compute[250018]: 2026-01-20 14:41:02.760 250022 DEBUG nova.compute.manager [None req-568dfda1-69b2-4809-bd8e-fe5ba6a76a07 - - - - - -] [instance: 9d050141-940b-4c59-8731-ca9d572d1127] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:41:02 compute-0 nova_compute[250018]: 2026-01-20 14:41:02.840 250022 DEBUG nova.network.neutron [req-cf1ab8b8-d125-43d4-b8fa-c1d56921971b req-e750fa5e-e12b-4970-9455-d9a9b2114e77 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 86aa2fb7-c532-46b4-a02b-8070608dfe6b] Updated VIF entry in instance network info cache for port 2a804e48-646b-4b9e-8a59-155024ec39ac. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 14:41:02 compute-0 nova_compute[250018]: 2026-01-20 14:41:02.841 250022 DEBUG nova.network.neutron [req-cf1ab8b8-d125-43d4-b8fa-c1d56921971b req-e750fa5e-e12b-4970-9455-d9a9b2114e77 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 86aa2fb7-c532-46b4-a02b-8070608dfe6b] Updating instance_info_cache with network_info: [{"id": "2a804e48-646b-4b9e-8a59-155024ec39ac", "address": "fa:16:3e:48:06:d8", "network": {"id": "fc21b99b-4e34-422c-be05-0a440009dac4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-808285772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3f93fd4b2154dda9f38e62334904303", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a804e48-64", "ovs_interfaceid": "2a804e48-646b-4b9e-8a59-155024ec39ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:41:02 compute-0 nova_compute[250018]: 2026-01-20 14:41:02.855 250022 DEBUG oslo_concurrency.lockutils [req-cf1ab8b8-d125-43d4-b8fa-c1d56921971b req-e750fa5e-e12b-4970-9455-d9a9b2114e77 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-86aa2fb7-c532-46b4-a02b-8070608dfe6b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:41:03 compute-0 nova_compute[250018]: 2026-01-20 14:41:03.018 250022 INFO nova.virt.libvirt.driver [None req-64cde76b-cb70-4e52-87b4-50b8f363e087 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: 86aa2fb7-c532-46b4-a02b-8070608dfe6b] Creating config drive at /var/lib/nova/instances/86aa2fb7-c532-46b4-a02b-8070608dfe6b/disk.config
Jan 20 14:41:03 compute-0 nova_compute[250018]: 2026-01-20 14:41:03.023 250022 DEBUG oslo_concurrency.processutils [None req-64cde76b-cb70-4e52-87b4-50b8f363e087 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/86aa2fb7-c532-46b4-a02b-8070608dfe6b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmplpa2pd3g execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:41:03 compute-0 ceph-mon[74360]: pgmap v1580: 321 pgs: 321 active+clean; 114 MiB data, 630 MiB used, 20 GiB / 21 GiB avail; 50 KiB/s rd, 948 KiB/s wr, 75 op/s
Jan 20 14:41:03 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/1601011872' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:41:03 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/3514088597' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:41:03 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/2739494036' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:41:03 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:41:03 compute-0 nova_compute[250018]: 2026-01-20 14:41:03.156 250022 DEBUG oslo_concurrency.processutils [None req-64cde76b-cb70-4e52-87b4-50b8f363e087 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/86aa2fb7-c532-46b4-a02b-8070608dfe6b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmplpa2pd3g" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:41:03 compute-0 nova_compute[250018]: 2026-01-20 14:41:03.180 250022 DEBUG nova.storage.rbd_utils [None req-64cde76b-cb70-4e52-87b4-50b8f363e087 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] rbd image 86aa2fb7-c532-46b4-a02b-8070608dfe6b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:41:03 compute-0 nova_compute[250018]: 2026-01-20 14:41:03.183 250022 DEBUG oslo_concurrency.processutils [None req-64cde76b-cb70-4e52-87b4-50b8f363e087 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/86aa2fb7-c532-46b4-a02b-8070608dfe6b/disk.config 86aa2fb7-c532-46b4-a02b-8070608dfe6b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:41:03 compute-0 nova_compute[250018]: 2026-01-20 14:41:03.431 250022 DEBUG oslo_concurrency.processutils [None req-64cde76b-cb70-4e52-87b4-50b8f363e087 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/86aa2fb7-c532-46b4-a02b-8070608dfe6b/disk.config 86aa2fb7-c532-46b4-a02b-8070608dfe6b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.247s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:41:03 compute-0 nova_compute[250018]: 2026-01-20 14:41:03.432 250022 INFO nova.virt.libvirt.driver [None req-64cde76b-cb70-4e52-87b4-50b8f363e087 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: 86aa2fb7-c532-46b4-a02b-8070608dfe6b] Deleting local config drive /var/lib/nova/instances/86aa2fb7-c532-46b4-a02b-8070608dfe6b/disk.config because it was imported into RBD.
Jan 20 14:41:03 compute-0 kernel: tap2a804e48-64: entered promiscuous mode
Jan 20 14:41:03 compute-0 NetworkManager[48960]: <info>  [1768920063.4825] manager: (tap2a804e48-64): new Tun device (/org/freedesktop/NetworkManager/Devices/112)
Jan 20 14:41:03 compute-0 ovn_controller[148666]: 2026-01-20T14:41:03Z|00213|binding|INFO|Claiming lport 2a804e48-646b-4b9e-8a59-155024ec39ac for this chassis.
Jan 20 14:41:03 compute-0 ovn_controller[148666]: 2026-01-20T14:41:03Z|00214|binding|INFO|2a804e48-646b-4b9e-8a59-155024ec39ac: Claiming fa:16:3e:48:06:d8 10.100.0.9
Jan 20 14:41:03 compute-0 nova_compute[250018]: 2026-01-20 14:41:03.484 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:41:03 compute-0 nova_compute[250018]: 2026-01-20 14:41:03.487 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:41:03 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:41:03.497 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:48:06:d8 10.100.0.9'], port_security=['fa:16:3e:48:06:d8 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '86aa2fb7-c532-46b4-a02b-8070608dfe6b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fc21b99b-4e34-422c-be05-0a440009dac4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e3f93fd4b2154dda9f38e62334904303', 'neutron:revision_number': '2', 'neutron:security_group_ids': '52b08fd6-6aa8-4470-b89c-ece04e1c959e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7af6b6bc-3cbd-48be-9f10-23ec011e0426, chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=2a804e48-646b-4b9e-8a59-155024ec39ac) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:41:03 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:41:03.499 160071 INFO neutron.agent.ovn.metadata.agent [-] Port 2a804e48-646b-4b9e-8a59-155024ec39ac in datapath fc21b99b-4e34-422c-be05-0a440009dac4 bound to our chassis
Jan 20 14:41:03 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:41:03.501 160071 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fc21b99b-4e34-422c-be05-0a440009dac4
Jan 20 14:41:03 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:41:03.511 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[f6205ba1-6382-47ca-8106-6b353ecb40a8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:41:03 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:41:03.512 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapfc21b99b-41 in ovnmeta-fc21b99b-4e34-422c-be05-0a440009dac4 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 14:41:03 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:41:03.514 257604 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapfc21b99b-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 14:41:03 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:41:03.514 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[2d561c59-be60-4697-9548-cb4a376e3d46]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:41:03 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:41:03.515 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[d48b9aeb-28fd-433e-987e-3792dfda7fe5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:41:03 compute-0 systemd-udevd[293121]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:41:03 compute-0 systemd-machined[216401]: New machine qemu-31-instance-00000045.
Jan 20 14:41:03 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:41:03.529 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[fcc93cd9-60e0-417d-9378-9d514d0555bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:41:03 compute-0 NetworkManager[48960]: <info>  [1768920063.5380] device (tap2a804e48-64): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 14:41:03 compute-0 NetworkManager[48960]: <info>  [1768920063.5401] device (tap2a804e48-64): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 14:41:03 compute-0 nova_compute[250018]: 2026-01-20 14:41:03.553 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:41:03 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:41:03.554 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[4dccd842-7225-4455-91c8-0c0d93bc6039]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:41:03 compute-0 systemd[1]: Started Virtual Machine qemu-31-instance-00000045.
Jan 20 14:41:03 compute-0 ovn_controller[148666]: 2026-01-20T14:41:03Z|00215|binding|INFO|Setting lport 2a804e48-646b-4b9e-8a59-155024ec39ac ovn-installed in OVS
Jan 20 14:41:03 compute-0 ovn_controller[148666]: 2026-01-20T14:41:03Z|00216|binding|INFO|Setting lport 2a804e48-646b-4b9e-8a59-155024ec39ac up in Southbound
Jan 20 14:41:03 compute-0 nova_compute[250018]: 2026-01-20 14:41:03.561 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:41:03 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:41:03.579 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[8fdbe06e-5fe5-4fec-bf50-748cb5daa44b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:41:03 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:41:03.584 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[3abb003d-7cfa-4ed4-9431-0243423f2894]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:41:03 compute-0 NetworkManager[48960]: <info>  [1768920063.5854] manager: (tapfc21b99b-40): new Veth device (/org/freedesktop/NetworkManager/Devices/113)
Jan 20 14:41:03 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:41:03.612 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[7a054d48-dbda-4d8c-b2b1-d55c0ecd4160]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:41:03 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:41:03.615 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[2edd53ca-16f6-43db-8c5b-5262db4327d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:41:03 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1581: 321 pgs: 321 active+clean; 111 MiB data, 624 MiB used, 20 GiB / 21 GiB avail; 38 KiB/s rd, 1.8 MiB/s wr, 57 op/s
Jan 20 14:41:03 compute-0 NetworkManager[48960]: <info>  [1768920063.6451] device (tapfc21b99b-40): carrier: link connected
Jan 20 14:41:03 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:41:03.650 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[e4b5f093-d7db-487e-80a8-4d86bbcf7093]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:41:03 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:41:03.672 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[0b353b86-6a0c-4d18-92d6-82c0edbd2eb0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfc21b99b-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b1:5b:d2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 70], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 600436, 'reachable_time': 37412, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 293153, 'error': None, 'target': 'ovnmeta-fc21b99b-4e34-422c-be05-0a440009dac4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:41:03 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:41:03.692 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[21018dee-9853-45f5-89b2-821c8f750048]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb1:5bd2'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 600436, 'tstamp': 600436}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 293154, 'error': None, 'target': 'ovnmeta-fc21b99b-4e34-422c-be05-0a440009dac4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:41:03 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:41:03.710 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[f7d8c6b0-aa31-496b-893d-b805faf9e1c2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfc21b99b-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b1:5b:d2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 70], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 600436, 'reachable_time': 37412, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 293155, 'error': None, 'target': 'ovnmeta-fc21b99b-4e34-422c-be05-0a440009dac4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:41:03 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:41:03.735 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[a6074a61-1033-4cd8-8509-4be77ad49b7e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:41:03 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:41:03.787 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[d46b5342-1055-4a53-bae4-f5ae6f64204a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:41:03 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:41:03.788 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfc21b99b-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:41:03 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:41:03.789 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:41:03 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:41:03.789 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfc21b99b-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:41:03 compute-0 nova_compute[250018]: 2026-01-20 14:41:03.791 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:41:03 compute-0 NetworkManager[48960]: <info>  [1768920063.7917] manager: (tapfc21b99b-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/114)
Jan 20 14:41:03 compute-0 kernel: tapfc21b99b-40: entered promiscuous mode
Jan 20 14:41:03 compute-0 nova_compute[250018]: 2026-01-20 14:41:03.792 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:41:03 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:41:03.793 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfc21b99b-40, col_values=(('external_ids', {'iface-id': '583df905-1d9f-49c1-b209-4b7fad1599f6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:41:03 compute-0 nova_compute[250018]: 2026-01-20 14:41:03.794 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:41:03 compute-0 ovn_controller[148666]: 2026-01-20T14:41:03Z|00217|binding|INFO|Releasing lport 583df905-1d9f-49c1-b209-4b7fad1599f6 from this chassis (sb_readonly=0)
Jan 20 14:41:03 compute-0 nova_compute[250018]: 2026-01-20 14:41:03.809 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:41:03 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:41:03.810 160071 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/fc21b99b-4e34-422c-be05-0a440009dac4.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/fc21b99b-4e34-422c-be05-0a440009dac4.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 14:41:03 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:41:03.811 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[225b6906-3507-43ef-abfd-867571c102e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:41:03 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:41:03.812 160071 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 14:41:03 compute-0 ovn_metadata_agent[160049]: global
Jan 20 14:41:03 compute-0 ovn_metadata_agent[160049]:     log         /dev/log local0 debug
Jan 20 14:41:03 compute-0 ovn_metadata_agent[160049]:     log-tag     haproxy-metadata-proxy-fc21b99b-4e34-422c-be05-0a440009dac4
Jan 20 14:41:03 compute-0 ovn_metadata_agent[160049]:     user        root
Jan 20 14:41:03 compute-0 ovn_metadata_agent[160049]:     group       root
Jan 20 14:41:03 compute-0 ovn_metadata_agent[160049]:     maxconn     1024
Jan 20 14:41:03 compute-0 ovn_metadata_agent[160049]:     pidfile     /var/lib/neutron/external/pids/fc21b99b-4e34-422c-be05-0a440009dac4.pid.haproxy
Jan 20 14:41:03 compute-0 ovn_metadata_agent[160049]:     daemon
Jan 20 14:41:03 compute-0 ovn_metadata_agent[160049]: 
Jan 20 14:41:03 compute-0 ovn_metadata_agent[160049]: defaults
Jan 20 14:41:03 compute-0 ovn_metadata_agent[160049]:     log global
Jan 20 14:41:03 compute-0 ovn_metadata_agent[160049]:     mode http
Jan 20 14:41:03 compute-0 ovn_metadata_agent[160049]:     option httplog
Jan 20 14:41:03 compute-0 ovn_metadata_agent[160049]:     option dontlognull
Jan 20 14:41:03 compute-0 ovn_metadata_agent[160049]:     option http-server-close
Jan 20 14:41:03 compute-0 ovn_metadata_agent[160049]:     option forwardfor
Jan 20 14:41:03 compute-0 ovn_metadata_agent[160049]:     retries                 3
Jan 20 14:41:03 compute-0 ovn_metadata_agent[160049]:     timeout http-request    30s
Jan 20 14:41:03 compute-0 ovn_metadata_agent[160049]:     timeout connect         30s
Jan 20 14:41:03 compute-0 ovn_metadata_agent[160049]:     timeout client          32s
Jan 20 14:41:03 compute-0 ovn_metadata_agent[160049]:     timeout server          32s
Jan 20 14:41:03 compute-0 ovn_metadata_agent[160049]:     timeout http-keep-alive 30s
Jan 20 14:41:03 compute-0 ovn_metadata_agent[160049]: 
Jan 20 14:41:03 compute-0 ovn_metadata_agent[160049]: 
Jan 20 14:41:03 compute-0 ovn_metadata_agent[160049]: listen listener
Jan 20 14:41:03 compute-0 ovn_metadata_agent[160049]:     bind 169.254.169.254:80
Jan 20 14:41:03 compute-0 ovn_metadata_agent[160049]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 14:41:03 compute-0 ovn_metadata_agent[160049]:     http-request add-header X-OVN-Network-ID fc21b99b-4e34-422c-be05-0a440009dac4
Jan 20 14:41:03 compute-0 ovn_metadata_agent[160049]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 14:41:03 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:41:03.812 160071 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-fc21b99b-4e34-422c-be05-0a440009dac4', 'env', 'PROCESS_TAG=haproxy-fc21b99b-4e34-422c-be05-0a440009dac4', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/fc21b99b-4e34-422c-be05-0a440009dac4.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 14:41:03 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:41:03 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:41:03 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:41:03.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:41:03 compute-0 nova_compute[250018]: 2026-01-20 14:41:03.932 250022 DEBUG nova.compute.manager [req-85f7c3a7-df9b-43cf-8c98-b0d463fe7ecf req-13d1d1d1-7a1a-4c3b-ad52-b06de9c3eb60 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 86aa2fb7-c532-46b4-a02b-8070608dfe6b] Received event network-vif-plugged-2a804e48-646b-4b9e-8a59-155024ec39ac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:41:03 compute-0 nova_compute[250018]: 2026-01-20 14:41:03.934 250022 DEBUG oslo_concurrency.lockutils [req-85f7c3a7-df9b-43cf-8c98-b0d463fe7ecf req-13d1d1d1-7a1a-4c3b-ad52-b06de9c3eb60 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "86aa2fb7-c532-46b4-a02b-8070608dfe6b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:41:03 compute-0 nova_compute[250018]: 2026-01-20 14:41:03.934 250022 DEBUG oslo_concurrency.lockutils [req-85f7c3a7-df9b-43cf-8c98-b0d463fe7ecf req-13d1d1d1-7a1a-4c3b-ad52-b06de9c3eb60 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "86aa2fb7-c532-46b4-a02b-8070608dfe6b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:41:03 compute-0 nova_compute[250018]: 2026-01-20 14:41:03.935 250022 DEBUG oslo_concurrency.lockutils [req-85f7c3a7-df9b-43cf-8c98-b0d463fe7ecf req-13d1d1d1-7a1a-4c3b-ad52-b06de9c3eb60 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "86aa2fb7-c532-46b4-a02b-8070608dfe6b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:41:03 compute-0 nova_compute[250018]: 2026-01-20 14:41:03.935 250022 DEBUG nova.compute.manager [req-85f7c3a7-df9b-43cf-8c98-b0d463fe7ecf req-13d1d1d1-7a1a-4c3b-ad52-b06de9c3eb60 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 86aa2fb7-c532-46b4-a02b-8070608dfe6b] Processing event network-vif-plugged-2a804e48-646b-4b9e-8a59-155024ec39ac _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 20 14:41:04 compute-0 ceph-mon[74360]: pgmap v1581: 321 pgs: 321 active+clean; 111 MiB data, 624 MiB used, 20 GiB / 21 GiB avail; 38 KiB/s rd, 1.8 MiB/s wr, 57 op/s
Jan 20 14:41:04 compute-0 nova_compute[250018]: 2026-01-20 14:41:04.179 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768920064.1794415, 86aa2fb7-c532-46b4-a02b-8070608dfe6b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:41:04 compute-0 nova_compute[250018]: 2026-01-20 14:41:04.181 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 86aa2fb7-c532-46b4-a02b-8070608dfe6b] VM Started (Lifecycle Event)
Jan 20 14:41:04 compute-0 nova_compute[250018]: 2026-01-20 14:41:04.183 250022 DEBUG nova.compute.manager [None req-64cde76b-cb70-4e52-87b4-50b8f363e087 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: 86aa2fb7-c532-46b4-a02b-8070608dfe6b] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 14:41:04 compute-0 nova_compute[250018]: 2026-01-20 14:41:04.186 250022 DEBUG nova.virt.libvirt.driver [None req-64cde76b-cb70-4e52-87b4-50b8f363e087 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: 86aa2fb7-c532-46b4-a02b-8070608dfe6b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 14:41:04 compute-0 nova_compute[250018]: 2026-01-20 14:41:04.190 250022 INFO nova.virt.libvirt.driver [-] [instance: 86aa2fb7-c532-46b4-a02b-8070608dfe6b] Instance spawned successfully.
Jan 20 14:41:04 compute-0 nova_compute[250018]: 2026-01-20 14:41:04.191 250022 DEBUG nova.virt.libvirt.driver [None req-64cde76b-cb70-4e52-87b4-50b8f363e087 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: 86aa2fb7-c532-46b4-a02b-8070608dfe6b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 14:41:04 compute-0 nova_compute[250018]: 2026-01-20 14:41:04.207 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 86aa2fb7-c532-46b4-a02b-8070608dfe6b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:41:04 compute-0 nova_compute[250018]: 2026-01-20 14:41:04.212 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 86aa2fb7-c532-46b4-a02b-8070608dfe6b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:41:04 compute-0 nova_compute[250018]: 2026-01-20 14:41:04.219 250022 DEBUG nova.virt.libvirt.driver [None req-64cde76b-cb70-4e52-87b4-50b8f363e087 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: 86aa2fb7-c532-46b4-a02b-8070608dfe6b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:41:04 compute-0 nova_compute[250018]: 2026-01-20 14:41:04.220 250022 DEBUG nova.virt.libvirt.driver [None req-64cde76b-cb70-4e52-87b4-50b8f363e087 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: 86aa2fb7-c532-46b4-a02b-8070608dfe6b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:41:04 compute-0 nova_compute[250018]: 2026-01-20 14:41:04.220 250022 DEBUG nova.virt.libvirt.driver [None req-64cde76b-cb70-4e52-87b4-50b8f363e087 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: 86aa2fb7-c532-46b4-a02b-8070608dfe6b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:41:04 compute-0 nova_compute[250018]: 2026-01-20 14:41:04.220 250022 DEBUG nova.virt.libvirt.driver [None req-64cde76b-cb70-4e52-87b4-50b8f363e087 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: 86aa2fb7-c532-46b4-a02b-8070608dfe6b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:41:04 compute-0 nova_compute[250018]: 2026-01-20 14:41:04.221 250022 DEBUG nova.virt.libvirt.driver [None req-64cde76b-cb70-4e52-87b4-50b8f363e087 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: 86aa2fb7-c532-46b4-a02b-8070608dfe6b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:41:04 compute-0 nova_compute[250018]: 2026-01-20 14:41:04.221 250022 DEBUG nova.virt.libvirt.driver [None req-64cde76b-cb70-4e52-87b4-50b8f363e087 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: 86aa2fb7-c532-46b4-a02b-8070608dfe6b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:41:04 compute-0 nova_compute[250018]: 2026-01-20 14:41:04.227 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 86aa2fb7-c532-46b4-a02b-8070608dfe6b] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:41:04 compute-0 nova_compute[250018]: 2026-01-20 14:41:04.227 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768920064.1804836, 86aa2fb7-c532-46b4-a02b-8070608dfe6b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:41:04 compute-0 nova_compute[250018]: 2026-01-20 14:41:04.227 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 86aa2fb7-c532-46b4-a02b-8070608dfe6b] VM Paused (Lifecycle Event)
Jan 20 14:41:04 compute-0 podman[293228]: 2026-01-20 14:41:04.233904938 +0000 UTC m=+0.092685103 container create a737407032544be2be1f823ba2008e20f85dbfc3beab995281daa68c539911b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fc21b99b-4e34-422c-be05-0a440009dac4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 14:41:04 compute-0 podman[293228]: 2026-01-20 14:41:04.169083892 +0000 UTC m=+0.027864047 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 14:41:04 compute-0 nova_compute[250018]: 2026-01-20 14:41:04.264 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 86aa2fb7-c532-46b4-a02b-8070608dfe6b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:41:04 compute-0 nova_compute[250018]: 2026-01-20 14:41:04.267 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768920064.1861358, 86aa2fb7-c532-46b4-a02b-8070608dfe6b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:41:04 compute-0 nova_compute[250018]: 2026-01-20 14:41:04.268 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 86aa2fb7-c532-46b4-a02b-8070608dfe6b] VM Resumed (Lifecycle Event)
Jan 20 14:41:04 compute-0 systemd[1]: Started libpod-conmon-a737407032544be2be1f823ba2008e20f85dbfc3beab995281daa68c539911b1.scope.
Jan 20 14:41:04 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:41:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a8cede45f0fd54d3f12aa6d101bb0edb6fb9da91ec64a2e1694006f0904857a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 14:41:04 compute-0 nova_compute[250018]: 2026-01-20 14:41:04.307 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 86aa2fb7-c532-46b4-a02b-8070608dfe6b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:41:04 compute-0 nova_compute[250018]: 2026-01-20 14:41:04.310 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 86aa2fb7-c532-46b4-a02b-8070608dfe6b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:41:04 compute-0 podman[293228]: 2026-01-20 14:41:04.317073622 +0000 UTC m=+0.175853797 container init a737407032544be2be1f823ba2008e20f85dbfc3beab995281daa68c539911b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fc21b99b-4e34-422c-be05-0a440009dac4, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 20 14:41:04 compute-0 podman[293228]: 2026-01-20 14:41:04.322293713 +0000 UTC m=+0.181073868 container start a737407032544be2be1f823ba2008e20f85dbfc3beab995281daa68c539911b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fc21b99b-4e34-422c-be05-0a440009dac4, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202)
Jan 20 14:41:04 compute-0 nova_compute[250018]: 2026-01-20 14:41:04.338 250022 INFO nova.compute.manager [None req-64cde76b-cb70-4e52-87b4-50b8f363e087 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: 86aa2fb7-c532-46b4-a02b-8070608dfe6b] Took 7.54 seconds to spawn the instance on the hypervisor.
Jan 20 14:41:04 compute-0 nova_compute[250018]: 2026-01-20 14:41:04.338 250022 DEBUG nova.compute.manager [None req-64cde76b-cb70-4e52-87b4-50b8f363e087 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: 86aa2fb7-c532-46b4-a02b-8070608dfe6b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:41:04 compute-0 neutron-haproxy-ovnmeta-fc21b99b-4e34-422c-be05-0a440009dac4[293244]: [NOTICE]   (293248) : New worker (293250) forked
Jan 20 14:41:04 compute-0 neutron-haproxy-ovnmeta-fc21b99b-4e34-422c-be05-0a440009dac4[293244]: [NOTICE]   (293248) : Loading success.
Jan 20 14:41:04 compute-0 nova_compute[250018]: 2026-01-20 14:41:04.341 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 86aa2fb7-c532-46b4-a02b-8070608dfe6b] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:41:04 compute-0 nova_compute[250018]: 2026-01-20 14:41:04.420 250022 INFO nova.compute.manager [None req-64cde76b-cb70-4e52-87b4-50b8f363e087 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: 86aa2fb7-c532-46b4-a02b-8070608dfe6b] Took 8.45 seconds to build instance.
Jan 20 14:41:04 compute-0 nova_compute[250018]: 2026-01-20 14:41:04.439 250022 DEBUG oslo_concurrency.lockutils [None req-64cde76b-cb70-4e52-87b4-50b8f363e087 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Lock "86aa2fb7-c532-46b4-a02b-8070608dfe6b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.555s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:41:04 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:41:04 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:41:04 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:41:04.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:41:05 compute-0 sudo[293260]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:41:05 compute-0 sudo[293260]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:41:05 compute-0 sudo[293260]: pam_unix(sudo:session): session closed for user root
Jan 20 14:41:05 compute-0 sudo[293285]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:41:05 compute-0 sudo[293285]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:41:05 compute-0 sudo[293285]: pam_unix(sudo:session): session closed for user root
Jan 20 14:41:05 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1582: 321 pgs: 321 active+clean; 114 MiB data, 622 MiB used, 20 GiB / 21 GiB avail; 170 KiB/s rd, 2.7 MiB/s wr, 90 op/s
Jan 20 14:41:05 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:41:05 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:41:05 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:41:05.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:41:06 compute-0 nova_compute[250018]: 2026-01-20 14:41:06.117 250022 DEBUG nova.compute.manager [req-87403f35-c713-43ed-bfae-db4d182cb2fa req-96aec9ea-b236-4614-b7d4-56cc7ba5b3ad 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 86aa2fb7-c532-46b4-a02b-8070608dfe6b] Received event network-vif-plugged-2a804e48-646b-4b9e-8a59-155024ec39ac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:41:06 compute-0 nova_compute[250018]: 2026-01-20 14:41:06.117 250022 DEBUG oslo_concurrency.lockutils [req-87403f35-c713-43ed-bfae-db4d182cb2fa req-96aec9ea-b236-4614-b7d4-56cc7ba5b3ad 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "86aa2fb7-c532-46b4-a02b-8070608dfe6b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:41:06 compute-0 nova_compute[250018]: 2026-01-20 14:41:06.118 250022 DEBUG oslo_concurrency.lockutils [req-87403f35-c713-43ed-bfae-db4d182cb2fa req-96aec9ea-b236-4614-b7d4-56cc7ba5b3ad 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "86aa2fb7-c532-46b4-a02b-8070608dfe6b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:41:06 compute-0 nova_compute[250018]: 2026-01-20 14:41:06.118 250022 DEBUG oslo_concurrency.lockutils [req-87403f35-c713-43ed-bfae-db4d182cb2fa req-96aec9ea-b236-4614-b7d4-56cc7ba5b3ad 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "86aa2fb7-c532-46b4-a02b-8070608dfe6b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:41:06 compute-0 nova_compute[250018]: 2026-01-20 14:41:06.118 250022 DEBUG nova.compute.manager [req-87403f35-c713-43ed-bfae-db4d182cb2fa req-96aec9ea-b236-4614-b7d4-56cc7ba5b3ad 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 86aa2fb7-c532-46b4-a02b-8070608dfe6b] No waiting events found dispatching network-vif-plugged-2a804e48-646b-4b9e-8a59-155024ec39ac pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:41:06 compute-0 nova_compute[250018]: 2026-01-20 14:41:06.118 250022 WARNING nova.compute.manager [req-87403f35-c713-43ed-bfae-db4d182cb2fa req-96aec9ea-b236-4614-b7d4-56cc7ba5b3ad 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 86aa2fb7-c532-46b4-a02b-8070608dfe6b] Received unexpected event network-vif-plugged-2a804e48-646b-4b9e-8a59-155024ec39ac for instance with vm_state active and task_state None.
Jan 20 14:41:06 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:41:06 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:41:06 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:41:06.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:41:06 compute-0 ceph-mon[74360]: pgmap v1582: 321 pgs: 321 active+clean; 114 MiB data, 622 MiB used, 20 GiB / 21 GiB avail; 170 KiB/s rd, 2.7 MiB/s wr, 90 op/s
Jan 20 14:41:06 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/1173221353' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:41:07 compute-0 nova_compute[250018]: 2026-01-20 14:41:07.041 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:41:07 compute-0 nova_compute[250018]: 2026-01-20 14:41:07.355 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:41:07 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1583: 321 pgs: 321 active+clean; 134 MiB data, 632 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 3.6 MiB/s wr, 126 op/s
Jan 20 14:41:07 compute-0 nova_compute[250018]: 2026-01-20 14:41:07.714 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:41:07 compute-0 NetworkManager[48960]: <info>  [1768920067.7150] manager: (patch-br-int-to-provnet-b62c391b-f7a3-4a38-a0df-72ac0383ca74): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/115)
Jan 20 14:41:07 compute-0 NetworkManager[48960]: <info>  [1768920067.7160] manager: (patch-provnet-b62c391b-f7a3-4a38-a0df-72ac0383ca74-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/116)
Jan 20 14:41:07 compute-0 nova_compute[250018]: 2026-01-20 14:41:07.787 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:41:07 compute-0 ovn_controller[148666]: 2026-01-20T14:41:07Z|00218|binding|INFO|Releasing lport 583df905-1d9f-49c1-b209-4b7fad1599f6 from this chassis (sb_readonly=0)
Jan 20 14:41:07 compute-0 nova_compute[250018]: 2026-01-20 14:41:07.801 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:41:07 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:41:07 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:41:07 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:41:07.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:41:07 compute-0 nova_compute[250018]: 2026-01-20 14:41:07.934 250022 DEBUG nova.compute.manager [req-9ef0bcc7-348e-4406-87e9-9dbfd130ddec req-b7bcee24-a816-483e-9022-16769ac026b6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 86aa2fb7-c532-46b4-a02b-8070608dfe6b] Received event network-changed-2a804e48-646b-4b9e-8a59-155024ec39ac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:41:07 compute-0 nova_compute[250018]: 2026-01-20 14:41:07.934 250022 DEBUG nova.compute.manager [req-9ef0bcc7-348e-4406-87e9-9dbfd130ddec req-b7bcee24-a816-483e-9022-16769ac026b6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 86aa2fb7-c532-46b4-a02b-8070608dfe6b] Refreshing instance network info cache due to event network-changed-2a804e48-646b-4b9e-8a59-155024ec39ac. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:41:07 compute-0 nova_compute[250018]: 2026-01-20 14:41:07.934 250022 DEBUG oslo_concurrency.lockutils [req-9ef0bcc7-348e-4406-87e9-9dbfd130ddec req-b7bcee24-a816-483e-9022-16769ac026b6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-86aa2fb7-c532-46b4-a02b-8070608dfe6b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:41:07 compute-0 nova_compute[250018]: 2026-01-20 14:41:07.934 250022 DEBUG oslo_concurrency.lockutils [req-9ef0bcc7-348e-4406-87e9-9dbfd130ddec req-b7bcee24-a816-483e-9022-16769ac026b6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-86aa2fb7-c532-46b4-a02b-8070608dfe6b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:41:07 compute-0 nova_compute[250018]: 2026-01-20 14:41:07.935 250022 DEBUG nova.network.neutron [req-9ef0bcc7-348e-4406-87e9-9dbfd130ddec req-b7bcee24-a816-483e-9022-16769ac026b6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 86aa2fb7-c532-46b4-a02b-8070608dfe6b] Refreshing network info cache for port 2a804e48-646b-4b9e-8a59-155024ec39ac _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:41:08 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/2638918246' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:41:08 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:41:08 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:41:08 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:41:08 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:41:08.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:41:08 compute-0 ovn_controller[148666]: 2026-01-20T14:41:08Z|00219|binding|INFO|Releasing lport 583df905-1d9f-49c1-b209-4b7fad1599f6 from this chassis (sb_readonly=0)
Jan 20 14:41:08 compute-0 nova_compute[250018]: 2026-01-20 14:41:08.683 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:41:09 compute-0 ceph-mon[74360]: pgmap v1583: 321 pgs: 321 active+clean; 134 MiB data, 632 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 3.6 MiB/s wr, 126 op/s
Jan 20 14:41:09 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1584: 321 pgs: 321 active+clean; 134 MiB data, 632 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 3.6 MiB/s wr, 138 op/s
Jan 20 14:41:09 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:41:09 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:41:09 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:41:09.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:41:10 compute-0 nova_compute[250018]: 2026-01-20 14:41:10.049 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:41:10 compute-0 ceph-mon[74360]: pgmap v1584: 321 pgs: 321 active+clean; 134 MiB data, 632 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 3.6 MiB/s wr, 138 op/s
Jan 20 14:41:10 compute-0 nova_compute[250018]: 2026-01-20 14:41:10.375 250022 DEBUG nova.network.neutron [req-9ef0bcc7-348e-4406-87e9-9dbfd130ddec req-b7bcee24-a816-483e-9022-16769ac026b6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 86aa2fb7-c532-46b4-a02b-8070608dfe6b] Updated VIF entry in instance network info cache for port 2a804e48-646b-4b9e-8a59-155024ec39ac. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 14:41:10 compute-0 nova_compute[250018]: 2026-01-20 14:41:10.376 250022 DEBUG nova.network.neutron [req-9ef0bcc7-348e-4406-87e9-9dbfd130ddec req-b7bcee24-a816-483e-9022-16769ac026b6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 86aa2fb7-c532-46b4-a02b-8070608dfe6b] Updating instance_info_cache with network_info: [{"id": "2a804e48-646b-4b9e-8a59-155024ec39ac", "address": "fa:16:3e:48:06:d8", "network": {"id": "fc21b99b-4e34-422c-be05-0a440009dac4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-808285772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3f93fd4b2154dda9f38e62334904303", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a804e48-64", "ovs_interfaceid": "2a804e48-646b-4b9e-8a59-155024ec39ac", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:41:10 compute-0 nova_compute[250018]: 2026-01-20 14:41:10.399 250022 DEBUG oslo_concurrency.lockutils [req-9ef0bcc7-348e-4406-87e9-9dbfd130ddec req-b7bcee24-a816-483e-9022-16769ac026b6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-86aa2fb7-c532-46b4-a02b-8070608dfe6b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:41:10 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:41:10 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:41:10 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:41:10.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:41:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 14:41:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:41:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 20 14:41:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:41:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.001985840661641541 of space, bias 1.0, pg target 0.5957521984924623 quantized to 32 (current 32)
Jan 20 14:41:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:41:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 20 14:41:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:41:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:41:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:41:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 20 14:41:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:41:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 20 14:41:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:41:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:41:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:41:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 20 14:41:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:41:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 20 14:41:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:41:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:41:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:41:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 20 14:41:11 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1585: 321 pgs: 321 active+clean; 134 MiB data, 632 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 3.4 MiB/s wr, 157 op/s
Jan 20 14:41:11 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:41:11 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:41:11 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:41:11.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:41:12 compute-0 nova_compute[250018]: 2026-01-20 14:41:12.043 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:41:12 compute-0 nova_compute[250018]: 2026-01-20 14:41:12.358 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:41:12 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:41:12 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:41:12 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:41:12.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:41:12 compute-0 ceph-mon[74360]: pgmap v1585: 321 pgs: 321 active+clean; 134 MiB data, 632 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 3.4 MiB/s wr, 157 op/s
Jan 20 14:41:13 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:41:13 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1586: 321 pgs: 321 active+clean; 134 MiB data, 632 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 2.6 MiB/s wr, 163 op/s
Jan 20 14:41:13 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:41:13 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:41:13 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:41:13.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:41:14 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/228072690' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:41:14 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/3266538832' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:41:14 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/3266538832' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:41:14 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:41:14 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:41:14 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:41:14.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:41:15 compute-0 ceph-mon[74360]: pgmap v1586: 321 pgs: 321 active+clean; 134 MiB data, 632 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 2.6 MiB/s wr, 163 op/s
Jan 20 14:41:15 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/3966429821' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:41:15 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/2521530103' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:41:15 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/2225346245' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:41:15 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1587: 321 pgs: 321 active+clean; 153 MiB data, 638 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 2.3 MiB/s wr, 181 op/s
Jan 20 14:41:15 compute-0 ceph-osd[84815]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #46. Immutable memtables: 3.
Jan 20 14:41:15 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:41:15 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:41:15 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:41:15.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:41:16 compute-0 ceph-mon[74360]: pgmap v1587: 321 pgs: 321 active+clean; 153 MiB data, 638 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 2.3 MiB/s wr, 181 op/s
Jan 20 14:41:16 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:41:16 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:41:16 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:41:16.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:41:17 compute-0 nova_compute[250018]: 2026-01-20 14:41:17.044 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:41:17 compute-0 nova_compute[250018]: 2026-01-20 14:41:17.359 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:41:17 compute-0 nova_compute[250018]: 2026-01-20 14:41:17.413 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:41:17 compute-0 podman[293318]: 2026-01-20 14:41:17.49774286 +0000 UTC m=+0.061886159 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 20 14:41:17 compute-0 podman[293317]: 2026-01-20 14:41:17.527818184 +0000 UTC m=+0.091962233 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller)
Jan 20 14:41:17 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1588: 321 pgs: 321 active+clean; 195 MiB data, 659 MiB used, 20 GiB / 21 GiB avail; 4.8 MiB/s rd, 3.9 MiB/s wr, 206 op/s
Jan 20 14:41:17 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/3958666232' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:41:17 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:41:17 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:41:17 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:41:17.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:41:18 compute-0 ovn_controller[148666]: 2026-01-20T14:41:18Z|00026|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:48:06:d8 10.100.0.9
Jan 20 14:41:18 compute-0 ovn_controller[148666]: 2026-01-20T14:41:18Z|00027|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:48:06:d8 10.100.0.9
Jan 20 14:41:18 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:41:18 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:41:18 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:41:18 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:41:18.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:41:19 compute-0 ceph-mon[74360]: pgmap v1588: 321 pgs: 321 active+clean; 195 MiB data, 659 MiB used, 20 GiB / 21 GiB avail; 4.8 MiB/s rd, 3.9 MiB/s wr, 206 op/s
Jan 20 14:41:19 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/3232206658' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:41:19 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/2318344812' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:41:19 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1589: 321 pgs: 321 active+clean; 216 MiB data, 689 MiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 4.5 MiB/s wr, 245 op/s
Jan 20 14:41:19 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:41:19 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:41:19 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:41:19.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:41:20 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/906151994' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:41:20 compute-0 ceph-mon[74360]: pgmap v1589: 321 pgs: 321 active+clean; 216 MiB data, 689 MiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 4.5 MiB/s wr, 245 op/s
Jan 20 14:41:20 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:41:20 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:41:20 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:41:20.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:41:21 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1590: 321 pgs: 321 active+clean; 232 MiB data, 707 MiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 5.7 MiB/s wr, 294 op/s
Jan 20 14:41:21 compute-0 nova_compute[250018]: 2026-01-20 14:41:21.757 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:41:21 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:41:21 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:41:21 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:41:21.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:41:22 compute-0 nova_compute[250018]: 2026-01-20 14:41:22.046 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:41:22 compute-0 sshd-session[293364]: Invalid user ubuntu from 157.245.78.139 port 51646
Jan 20 14:41:22 compute-0 ceph-mon[74360]: pgmap v1590: 321 pgs: 321 active+clean; 232 MiB data, 707 MiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 5.7 MiB/s wr, 294 op/s
Jan 20 14:41:22 compute-0 sshd-session[293364]: Connection closed by invalid user ubuntu 157.245.78.139 port 51646 [preauth]
Jan 20 14:41:22 compute-0 nova_compute[250018]: 2026-01-20 14:41:22.362 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:41:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:41:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:41:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:41:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:41:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:41:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:41:22 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:41:22 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:41:22 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:41:22.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:41:23 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:41:23 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/1467856985' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:41:23 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1591: 321 pgs: 321 active+clean; 213 MiB data, 706 MiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 5.7 MiB/s wr, 277 op/s
Jan 20 14:41:23 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:41:23 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:41:23 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:41:23.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:41:24 compute-0 nova_compute[250018]: 2026-01-20 14:41:24.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:41:24 compute-0 nova_compute[250018]: 2026-01-20 14:41:24.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:41:24 compute-0 ceph-mon[74360]: pgmap v1591: 321 pgs: 321 active+clean; 213 MiB data, 706 MiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 5.7 MiB/s wr, 277 op/s
Jan 20 14:41:24 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:41:24 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:41:24 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:41:24.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:41:25 compute-0 ovn_controller[148666]: 2026-01-20T14:41:25Z|00220|binding|INFO|Releasing lport 583df905-1d9f-49c1-b209-4b7fad1599f6 from this chassis (sb_readonly=0)
Jan 20 14:41:25 compute-0 nova_compute[250018]: 2026-01-20 14:41:25.149 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:41:25 compute-0 nova_compute[250018]: 2026-01-20 14:41:25.319 250022 DEBUG nova.compute.manager [req-0e5c7cb9-35af-45af-b16b-2fead5e8ef58 req-da55313a-609c-4ac5-825e-0795b996968f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 86aa2fb7-c532-46b4-a02b-8070608dfe6b] Received event network-changed-2a804e48-646b-4b9e-8a59-155024ec39ac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:41:25 compute-0 nova_compute[250018]: 2026-01-20 14:41:25.319 250022 DEBUG nova.compute.manager [req-0e5c7cb9-35af-45af-b16b-2fead5e8ef58 req-da55313a-609c-4ac5-825e-0795b996968f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 86aa2fb7-c532-46b4-a02b-8070608dfe6b] Refreshing instance network info cache due to event network-changed-2a804e48-646b-4b9e-8a59-155024ec39ac. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:41:25 compute-0 nova_compute[250018]: 2026-01-20 14:41:25.320 250022 DEBUG oslo_concurrency.lockutils [req-0e5c7cb9-35af-45af-b16b-2fead5e8ef58 req-da55313a-609c-4ac5-825e-0795b996968f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-86aa2fb7-c532-46b4-a02b-8070608dfe6b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:41:25 compute-0 nova_compute[250018]: 2026-01-20 14:41:25.320 250022 DEBUG oslo_concurrency.lockutils [req-0e5c7cb9-35af-45af-b16b-2fead5e8ef58 req-da55313a-609c-4ac5-825e-0795b996968f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-86aa2fb7-c532-46b4-a02b-8070608dfe6b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:41:25 compute-0 nova_compute[250018]: 2026-01-20 14:41:25.320 250022 DEBUG nova.network.neutron [req-0e5c7cb9-35af-45af-b16b-2fead5e8ef58 req-da55313a-609c-4ac5-825e-0795b996968f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 86aa2fb7-c532-46b4-a02b-8070608dfe6b] Refreshing network info cache for port 2a804e48-646b-4b9e-8a59-155024ec39ac _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:41:25 compute-0 sudo[293367]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:41:25 compute-0 sudo[293367]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:41:25 compute-0 sudo[293367]: pam_unix(sudo:session): session closed for user root
Jan 20 14:41:25 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1592: 321 pgs: 321 active+clean; 214 MiB data, 684 MiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 5.7 MiB/s wr, 294 op/s
Jan 20 14:41:25 compute-0 sudo[293392]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:41:25 compute-0 sudo[293392]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:41:25 compute-0 sudo[293392]: pam_unix(sudo:session): session closed for user root
Jan 20 14:41:25 compute-0 ceph-mon[74360]: pgmap v1592: 321 pgs: 321 active+clean; 214 MiB data, 684 MiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 5.7 MiB/s wr, 294 op/s
Jan 20 14:41:25 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:41:25 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:41:25 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:41:25.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:41:26 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:41:26 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:41:26 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:41:26.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:41:27 compute-0 nova_compute[250018]: 2026-01-20 14:41:27.048 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:41:27 compute-0 nova_compute[250018]: 2026-01-20 14:41:27.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:41:27 compute-0 nova_compute[250018]: 2026-01-20 14:41:27.365 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:41:27 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1593: 321 pgs: 321 active+clean; 214 MiB data, 684 MiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 5.2 MiB/s wr, 291 op/s
Jan 20 14:41:27 compute-0 ceph-mon[74360]: pgmap v1593: 321 pgs: 321 active+clean; 214 MiB data, 684 MiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 5.2 MiB/s wr, 291 op/s
Jan 20 14:41:27 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:41:27 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:41:27 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:41:27.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:41:28 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:41:28 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:41:28 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:41:28 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:41:28.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:41:29 compute-0 nova_compute[250018]: 2026-01-20 14:41:29.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:41:29 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1594: 321 pgs: 321 active+clean; 214 MiB data, 684 MiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 2.7 MiB/s wr, 242 op/s
Jan 20 14:41:29 compute-0 ceph-mon[74360]: pgmap v1594: 321 pgs: 321 active+clean; 214 MiB data, 684 MiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 2.7 MiB/s wr, 242 op/s
Jan 20 14:41:29 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:41:29 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:41:29 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:41:29.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:41:30 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:41:30 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:41:30 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:41:30.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:41:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:41:30.754 160071 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:41:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:41:30.755 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:41:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:41:30.755 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:41:31 compute-0 nova_compute[250018]: 2026-01-20 14:41:31.161 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:41:31 compute-0 nova_compute[250018]: 2026-01-20 14:41:31.162 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:41:31 compute-0 nova_compute[250018]: 2026-01-20 14:41:31.163 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:41:31 compute-0 nova_compute[250018]: 2026-01-20 14:41:31.163 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 14:41:31 compute-0 nova_compute[250018]: 2026-01-20 14:41:31.163 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:41:31 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:41:31 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3340845056' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:41:31 compute-0 nova_compute[250018]: 2026-01-20 14:41:31.611 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:41:31 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1595: 321 pgs: 321 active+clean; 214 MiB data, 684 MiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 1.3 MiB/s wr, 188 op/s
Jan 20 14:41:31 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3340845056' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:41:31 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/2992877323' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:41:31 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:41:31 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:41:31 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:41:31.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:41:31 compute-0 nova_compute[250018]: 2026-01-20 14:41:31.920 250022 DEBUG nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] skipping disk for instance-00000045 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:41:31 compute-0 nova_compute[250018]: 2026-01-20 14:41:31.921 250022 DEBUG nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] skipping disk for instance-00000045 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:41:32 compute-0 nova_compute[250018]: 2026-01-20 14:41:32.049 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:41:32 compute-0 nova_compute[250018]: 2026-01-20 14:41:32.095 250022 WARNING nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:41:32 compute-0 nova_compute[250018]: 2026-01-20 14:41:32.096 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4346MB free_disk=20.900901794433594GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 14:41:32 compute-0 nova_compute[250018]: 2026-01-20 14:41:32.097 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:41:32 compute-0 nova_compute[250018]: 2026-01-20 14:41:32.097 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:41:32 compute-0 nova_compute[250018]: 2026-01-20 14:41:32.201 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Instance 86aa2fb7-c532-46b4-a02b-8070608dfe6b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 14:41:32 compute-0 nova_compute[250018]: 2026-01-20 14:41:32.202 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 14:41:32 compute-0 nova_compute[250018]: 2026-01-20 14:41:32.202 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 14:41:32 compute-0 nova_compute[250018]: 2026-01-20 14:41:32.250 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:41:32 compute-0 nova_compute[250018]: 2026-01-20 14:41:32.367 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:41:32 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:41:32 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:41:32 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:41:32.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:41:32 compute-0 ceph-mon[74360]: pgmap v1595: 321 pgs: 321 active+clean; 214 MiB data, 684 MiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 1.3 MiB/s wr, 188 op/s
Jan 20 14:41:32 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:41:32 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/54478405' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:41:32 compute-0 nova_compute[250018]: 2026-01-20 14:41:32.693 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:41:32 compute-0 nova_compute[250018]: 2026-01-20 14:41:32.701 250022 DEBUG nova.compute.provider_tree [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:41:33 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:41:33 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1596: 321 pgs: 321 active+clean; 214 MiB data, 684 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 70 KiB/s wr, 159 op/s
Jan 20 14:41:33 compute-0 nova_compute[250018]: 2026-01-20 14:41:33.660 250022 DEBUG nova.network.neutron [req-0e5c7cb9-35af-45af-b16b-2fead5e8ef58 req-da55313a-609c-4ac5-825e-0795b996968f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 86aa2fb7-c532-46b4-a02b-8070608dfe6b] Updated VIF entry in instance network info cache for port 2a804e48-646b-4b9e-8a59-155024ec39ac. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 14:41:33 compute-0 nova_compute[250018]: 2026-01-20 14:41:33.661 250022 DEBUG nova.network.neutron [req-0e5c7cb9-35af-45af-b16b-2fead5e8ef58 req-da55313a-609c-4ac5-825e-0795b996968f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 86aa2fb7-c532-46b4-a02b-8070608dfe6b] Updating instance_info_cache with network_info: [{"id": "2a804e48-646b-4b9e-8a59-155024ec39ac", "address": "fa:16:3e:48:06:d8", "network": {"id": "fc21b99b-4e34-422c-be05-0a440009dac4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-808285772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3f93fd4b2154dda9f38e62334904303", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a804e48-64", "ovs_interfaceid": "2a804e48-646b-4b9e-8a59-155024ec39ac", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:41:33 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/54478405' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:41:33 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/3706431175' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:41:33 compute-0 nova_compute[250018]: 2026-01-20 14:41:33.786 250022 DEBUG nova.scheduler.client.report [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:41:33 compute-0 nova_compute[250018]: 2026-01-20 14:41:33.795 250022 DEBUG oslo_concurrency.lockutils [req-0e5c7cb9-35af-45af-b16b-2fead5e8ef58 req-da55313a-609c-4ac5-825e-0795b996968f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-86aa2fb7-c532-46b4-a02b-8070608dfe6b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:41:33 compute-0 nova_compute[250018]: 2026-01-20 14:41:33.822 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 14:41:33 compute-0 nova_compute[250018]: 2026-01-20 14:41:33.823 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.725s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:41:33 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:41:33 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:41:33 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:41:33.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:41:34 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:41:34 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:41:34 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:41:34.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:41:34 compute-0 ceph-mon[74360]: pgmap v1596: 321 pgs: 321 active+clean; 214 MiB data, 684 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 70 KiB/s wr, 159 op/s
Jan 20 14:41:34 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/2759836805' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:41:34 compute-0 nova_compute[250018]: 2026-01-20 14:41:34.824 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:41:34 compute-0 nova_compute[250018]: 2026-01-20 14:41:34.824 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:41:34 compute-0 nova_compute[250018]: 2026-01-20 14:41:34.824 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:41:34 compute-0 nova_compute[250018]: 2026-01-20 14:41:34.825 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 14:41:35 compute-0 nova_compute[250018]: 2026-01-20 14:41:35.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:41:35 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1597: 321 pgs: 321 active+clean; 241 MiB data, 693 MiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 1.6 MiB/s wr, 177 op/s
Jan 20 14:41:35 compute-0 ceph-mon[74360]: pgmap v1597: 321 pgs: 321 active+clean; 241 MiB data, 693 MiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 1.6 MiB/s wr, 177 op/s
Jan 20 14:41:35 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:41:35 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:41:35 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:41:35.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:41:36 compute-0 nova_compute[250018]: 2026-01-20 14:41:36.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:41:36 compute-0 nova_compute[250018]: 2026-01-20 14:41:36.051 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 14:41:36 compute-0 nova_compute[250018]: 2026-01-20 14:41:36.052 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 14:41:36 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:41:36 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:41:36 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:41:36.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:41:37 compute-0 nova_compute[250018]: 2026-01-20 14:41:37.051 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:41:37 compute-0 nova_compute[250018]: 2026-01-20 14:41:37.370 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:41:37 compute-0 nova_compute[250018]: 2026-01-20 14:41:37.468 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "refresh_cache-86aa2fb7-c532-46b4-a02b-8070608dfe6b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:41:37 compute-0 nova_compute[250018]: 2026-01-20 14:41:37.468 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquired lock "refresh_cache-86aa2fb7-c532-46b4-a02b-8070608dfe6b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:41:37 compute-0 nova_compute[250018]: 2026-01-20 14:41:37.469 250022 DEBUG nova.network.neutron [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] [instance: 86aa2fb7-c532-46b4-a02b-8070608dfe6b] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 20 14:41:37 compute-0 nova_compute[250018]: 2026-01-20 14:41:37.470 250022 DEBUG nova.objects.instance [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 86aa2fb7-c532-46b4-a02b-8070608dfe6b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:41:37 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1598: 321 pgs: 321 active+clean; 246 MiB data, 721 MiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 2.1 MiB/s wr, 156 op/s
Jan 20 14:41:37 compute-0 ceph-mon[74360]: pgmap v1598: 321 pgs: 321 active+clean; 246 MiB data, 721 MiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 2.1 MiB/s wr, 156 op/s
Jan 20 14:41:37 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:41:37 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:41:37 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:41:37.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:41:38 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:41:38 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:41:38 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:41:38 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:41:38.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:41:38 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/3010398542' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:41:38 compute-0 sudo[293469]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:41:38 compute-0 sudo[293469]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:41:38 compute-0 sudo[293469]: pam_unix(sudo:session): session closed for user root
Jan 20 14:41:38 compute-0 sudo[293494]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:41:38 compute-0 sudo[293494]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:41:38 compute-0 sudo[293494]: pam_unix(sudo:session): session closed for user root
Jan 20 14:41:39 compute-0 sudo[293519]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:41:39 compute-0 sudo[293519]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:41:39 compute-0 sudo[293519]: pam_unix(sudo:session): session closed for user root
Jan 20 14:41:39 compute-0 sudo[293544]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Jan 20 14:41:39 compute-0 sudo[293544]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:41:39 compute-0 podman[293643]: 2026-01-20 14:41:39.57779409 +0000 UTC m=+0.052077702 container exec a602f19ce9ef2d4922e558036fcbd51fac25abd19d28d7df857e5fbe8f959ba3 (image=quay.io/ceph/ceph:v18, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mon-compute-0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:41:39 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1599: 321 pgs: 321 active+clean; 247 MiB data, 731 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 128 op/s
Jan 20 14:41:39 compute-0 podman[293643]: 2026-01-20 14:41:39.675928049 +0000 UTC m=+0.150211651 container exec_died a602f19ce9ef2d4922e558036fcbd51fac25abd19d28d7df857e5fbe8f959ba3 (image=quay.io/ceph/ceph:v18, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mon-compute-0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 20 14:41:39 compute-0 ceph-mon[74360]: pgmap v1599: 321 pgs: 321 active+clean; 247 MiB data, 731 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 128 op/s
Jan 20 14:41:39 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:41:39 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:41:39 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:41:39.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:41:40 compute-0 podman[293798]: 2026-01-20 14:41:40.29442339 +0000 UTC m=+0.050248843 container exec a2973a514c852ff316e6ad2ff84585210b4ad01d86cdf2de06f634d9390a88e8 (image=quay.io/ceph/haproxy:2.3, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-haproxy-rgw-default-compute-0-nqkboe)
Jan 20 14:41:40 compute-0 podman[293798]: 2026-01-20 14:41:40.30475425 +0000 UTC m=+0.060579683 container exec_died a2973a514c852ff316e6ad2ff84585210b4ad01d86cdf2de06f634d9390a88e8 (image=quay.io/ceph/haproxy:2.3, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-haproxy-rgw-default-compute-0-nqkboe)
Jan 20 14:41:40 compute-0 podman[293865]: 2026-01-20 14:41:40.516745925 +0000 UTC m=+0.049978276 container exec 0c2042652fe8d88ae47b6333c235a533de4d966f44da3b69a5fc82baeacb10bf (image=quay.io/ceph/keepalived:2.2.4, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-keepalived-rgw-default-compute-0-gcjsxe, version=2.2.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=keepalived for Ceph, name=keepalived, vcs-type=git, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc., release=1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=keepalived-container, io.buildah.version=1.28.2, io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph., distribution-scope=public, build-date=2023-02-22T09:23:20)
Jan 20 14:41:40 compute-0 podman[293865]: 2026-01-20 14:41:40.534164126 +0000 UTC m=+0.067396467 container exec_died 0c2042652fe8d88ae47b6333c235a533de4d966f44da3b69a5fc82baeacb10bf (image=quay.io/ceph/keepalived:2.2.4, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-keepalived-rgw-default-compute-0-gcjsxe, io.openshift.expose-services=, build-date=2023-02-22T09:23:20, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.component=keepalived-container, vendor=Red Hat, Inc., distribution-scope=public, release=1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, version=2.2.4, description=keepalived for Ceph, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, architecture=x86_64, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2, io.openshift.tags=Ceph keepalived, io.k8s.display-name=Keepalived on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived)
Jan 20 14:41:40 compute-0 sudo[293544]: pam_unix(sudo:session): session closed for user root
Jan 20 14:41:40 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 14:41:40 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:41:40 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 14:41:40 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:41:40 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 20 14:41:40 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:41:40 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 20 14:41:40 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:41:40 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:41:40 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:41:40.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:41:40 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:41:40 compute-0 sudo[293898]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:41:40 compute-0 sudo[293898]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:41:40 compute-0 sudo[293898]: pam_unix(sudo:session): session closed for user root
Jan 20 14:41:40 compute-0 sudo[293923]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:41:40 compute-0 sudo[293923]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:41:40 compute-0 sudo[293923]: pam_unix(sudo:session): session closed for user root
Jan 20 14:41:40 compute-0 sudo[293948]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:41:40 compute-0 sudo[293948]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:41:40 compute-0 sudo[293948]: pam_unix(sudo:session): session closed for user root
Jan 20 14:41:40 compute-0 sudo[293973]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 14:41:40 compute-0 sudo[293973]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:41:41 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 20 14:41:41 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:41:41 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 20 14:41:41 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:41:41 compute-0 sudo[293973]: pam_unix(sudo:session): session closed for user root
Jan 20 14:41:41 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1600: 321 pgs: 321 active+clean; 261 MiB data, 746 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 3.4 MiB/s wr, 143 op/s
Jan 20 14:41:41 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:41:41 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:41:41 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:41:41 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:41:41 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:41:41 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:41:41 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:41:41 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:41:41 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:41:41.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:41:41 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 14:41:41 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:41:41 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 20 14:41:41 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:41:41 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 20 14:41:42 compute-0 nova_compute[250018]: 2026-01-20 14:41:42.052 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:41:42 compute-0 nova_compute[250018]: 2026-01-20 14:41:42.239 250022 DEBUG oslo_concurrency.lockutils [None req-33a64a8c-384e-497b-bff9-c56508784f51 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Acquiring lock "interface-86aa2fb7-c532-46b4-a02b-8070608dfe6b-e2648ead-7162-4661-94e1-755faa8f1fd1" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:41:42 compute-0 nova_compute[250018]: 2026-01-20 14:41:42.240 250022 DEBUG oslo_concurrency.lockutils [None req-33a64a8c-384e-497b-bff9-c56508784f51 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Lock "interface-86aa2fb7-c532-46b4-a02b-8070608dfe6b-e2648ead-7162-4661-94e1-755faa8f1fd1" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:41:42 compute-0 nova_compute[250018]: 2026-01-20 14:41:42.241 250022 DEBUG nova.objects.instance [None req-33a64a8c-384e-497b-bff9-c56508784f51 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Lazy-loading 'flavor' on Instance uuid 86aa2fb7-c532-46b4-a02b-8070608dfe6b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:41:42 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:41:42 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev ca62c166-b1fa-4540-87be-f473e3b850c3 does not exist
Jan 20 14:41:42 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 7657f2c1-8347-4171-866e-7a59856b4b5d does not exist
Jan 20 14:41:42 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 71e938a8-7391-46dd-b38e-a0f6fdab9499 does not exist
Jan 20 14:41:42 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 20 14:41:42 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:41:42 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 20 14:41:42 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:41:42 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 14:41:42 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:41:42 compute-0 sudo[294030]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:41:42 compute-0 sudo[294030]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:41:42 compute-0 nova_compute[250018]: 2026-01-20 14:41:42.372 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:41:42 compute-0 sudo[294030]: pam_unix(sudo:session): session closed for user root
Jan 20 14:41:42 compute-0 sudo[294055]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:41:42 compute-0 sudo[294055]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:41:42 compute-0 sudo[294055]: pam_unix(sudo:session): session closed for user root
Jan 20 14:41:42 compute-0 sudo[294080]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:41:42 compute-0 sudo[294080]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:41:42 compute-0 sudo[294080]: pam_unix(sudo:session): session closed for user root
Jan 20 14:41:42 compute-0 sudo[294105]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 14:41:42 compute-0 sudo[294105]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:41:42 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:41:42 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:41:42 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:41:42.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:41:42 compute-0 ceph-mon[74360]: pgmap v1600: 321 pgs: 321 active+clean; 261 MiB data, 746 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 3.4 MiB/s wr, 143 op/s
Jan 20 14:41:42 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:41:42 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:41:42 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:41:42 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:41:42 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:41:42 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:41:42 compute-0 podman[294170]: 2026-01-20 14:41:42.875158363 +0000 UTC m=+0.046421508 container create f6988a6022f5a372d7fcdccf0d94c43af0a48a35cea51f3bbacad03248fb63bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_perlman, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:41:42 compute-0 systemd[1]: Started libpod-conmon-f6988a6022f5a372d7fcdccf0d94c43af0a48a35cea51f3bbacad03248fb63bf.scope.
Jan 20 14:41:42 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:41:42 compute-0 podman[294170]: 2026-01-20 14:41:42.855332166 +0000 UTC m=+0.026595271 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:41:42 compute-0 podman[294170]: 2026-01-20 14:41:42.966679984 +0000 UTC m=+0.137943109 container init f6988a6022f5a372d7fcdccf0d94c43af0a48a35cea51f3bbacad03248fb63bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_perlman, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:41:42 compute-0 podman[294170]: 2026-01-20 14:41:42.97282384 +0000 UTC m=+0.144086955 container start f6988a6022f5a372d7fcdccf0d94c43af0a48a35cea51f3bbacad03248fb63bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_perlman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:41:42 compute-0 podman[294170]: 2026-01-20 14:41:42.976185372 +0000 UTC m=+0.147448497 container attach f6988a6022f5a372d7fcdccf0d94c43af0a48a35cea51f3bbacad03248fb63bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_perlman, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:41:42 compute-0 systemd[1]: libpod-f6988a6022f5a372d7fcdccf0d94c43af0a48a35cea51f3bbacad03248fb63bf.scope: Deactivated successfully.
Jan 20 14:41:42 compute-0 interesting_perlman[294186]: 167 167
Jan 20 14:41:42 compute-0 conmon[294186]: conmon f6988a6022f5a372d7fc <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f6988a6022f5a372d7fcdccf0d94c43af0a48a35cea51f3bbacad03248fb63bf.scope/container/memory.events
Jan 20 14:41:42 compute-0 podman[294170]: 2026-01-20 14:41:42.980621722 +0000 UTC m=+0.151884827 container died f6988a6022f5a372d7fcdccf0d94c43af0a48a35cea51f3bbacad03248fb63bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_perlman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True)
Jan 20 14:41:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-27ace74b1edcc4954e5c16362ddf8bd9c458b71965e79feaa9b02d9857c677d3-merged.mount: Deactivated successfully.
Jan 20 14:41:43 compute-0 podman[294170]: 2026-01-20 14:41:43.020284787 +0000 UTC m=+0.191547892 container remove f6988a6022f5a372d7fcdccf0d94c43af0a48a35cea51f3bbacad03248fb63bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_perlman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:41:43 compute-0 systemd[1]: libpod-conmon-f6988a6022f5a372d7fcdccf0d94c43af0a48a35cea51f3bbacad03248fb63bf.scope: Deactivated successfully.
Jan 20 14:41:43 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:41:43 compute-0 nova_compute[250018]: 2026-01-20 14:41:43.150 250022 DEBUG nova.network.neutron [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] [instance: 86aa2fb7-c532-46b4-a02b-8070608dfe6b] Updating instance_info_cache with network_info: [{"id": "2a804e48-646b-4b9e-8a59-155024ec39ac", "address": "fa:16:3e:48:06:d8", "network": {"id": "fc21b99b-4e34-422c-be05-0a440009dac4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-808285772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3f93fd4b2154dda9f38e62334904303", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a804e48-64", "ovs_interfaceid": "2a804e48-646b-4b9e-8a59-155024ec39ac", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:41:43 compute-0 podman[294211]: 2026-01-20 14:41:43.16213654 +0000 UTC m=+0.021936024 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:41:43 compute-0 podman[294211]: 2026-01-20 14:41:43.347595696 +0000 UTC m=+0.207395160 container create ed64327da2ce03c86f4c7b25206bb5c261f94454b587dacfee0f62bb37059db2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_williams, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 20 14:41:43 compute-0 systemd[1]: Started libpod-conmon-ed64327da2ce03c86f4c7b25206bb5c261f94454b587dacfee0f62bb37059db2.scope.
Jan 20 14:41:43 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:41:43 compute-0 nova_compute[250018]: 2026-01-20 14:41:43.396 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Releasing lock "refresh_cache-86aa2fb7-c532-46b4-a02b-8070608dfe6b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:41:43 compute-0 nova_compute[250018]: 2026-01-20 14:41:43.396 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] [instance: 86aa2fb7-c532-46b4-a02b-8070608dfe6b] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 20 14:41:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3221de287dd59a0842b8b4d1d6ceaf703cfbe189429295cb74b34bbd6609b938/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:41:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3221de287dd59a0842b8b4d1d6ceaf703cfbe189429295cb74b34bbd6609b938/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:41:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3221de287dd59a0842b8b4d1d6ceaf703cfbe189429295cb74b34bbd6609b938/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:41:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3221de287dd59a0842b8b4d1d6ceaf703cfbe189429295cb74b34bbd6609b938/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:41:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3221de287dd59a0842b8b4d1d6ceaf703cfbe189429295cb74b34bbd6609b938/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 14:41:43 compute-0 podman[294211]: 2026-01-20 14:41:43.411708874 +0000 UTC m=+0.271508338 container init ed64327da2ce03c86f4c7b25206bb5c261f94454b587dacfee0f62bb37059db2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_williams, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:41:43 compute-0 podman[294211]: 2026-01-20 14:41:43.420601355 +0000 UTC m=+0.280400819 container start ed64327da2ce03c86f4c7b25206bb5c261f94454b587dacfee0f62bb37059db2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_williams, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default)
Jan 20 14:41:43 compute-0 podman[294211]: 2026-01-20 14:41:43.424336926 +0000 UTC m=+0.284136380 container attach ed64327da2ce03c86f4c7b25206bb5c261f94454b587dacfee0f62bb37059db2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_williams, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 20 14:41:43 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1601: 321 pgs: 321 active+clean; 270 MiB data, 753 MiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 4.0 MiB/s wr, 146 op/s
Jan 20 14:41:43 compute-0 nova_compute[250018]: 2026-01-20 14:41:43.687 250022 DEBUG nova.objects.instance [None req-33a64a8c-384e-497b-bff9-c56508784f51 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Lazy-loading 'pci_requests' on Instance uuid 86aa2fb7-c532-46b4-a02b-8070608dfe6b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:41:43 compute-0 ceph-mon[74360]: pgmap v1601: 321 pgs: 321 active+clean; 270 MiB data, 753 MiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 4.0 MiB/s wr, 146 op/s
Jan 20 14:41:43 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:41:43 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:41:43 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:41:43.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:41:44 compute-0 zen_williams[294228]: --> passed data devices: 0 physical, 1 LVM
Jan 20 14:41:44 compute-0 zen_williams[294228]: --> relative data size: 1.0
Jan 20 14:41:44 compute-0 zen_williams[294228]: --> All data devices are unavailable
Jan 20 14:41:44 compute-0 systemd[1]: libpod-ed64327da2ce03c86f4c7b25206bb5c261f94454b587dacfee0f62bb37059db2.scope: Deactivated successfully.
Jan 20 14:41:44 compute-0 podman[294211]: 2026-01-20 14:41:44.227665384 +0000 UTC m=+1.087464848 container died ed64327da2ce03c86f4c7b25206bb5c261f94454b587dacfee0f62bb37059db2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_williams, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:41:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-3221de287dd59a0842b8b4d1d6ceaf703cfbe189429295cb74b34bbd6609b938-merged.mount: Deactivated successfully.
Jan 20 14:41:44 compute-0 podman[294211]: 2026-01-20 14:41:44.290262571 +0000 UTC m=+1.150062035 container remove ed64327da2ce03c86f4c7b25206bb5c261f94454b587dacfee0f62bb37059db2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_williams, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Jan 20 14:41:44 compute-0 systemd[1]: libpod-conmon-ed64327da2ce03c86f4c7b25206bb5c261f94454b587dacfee0f62bb37059db2.scope: Deactivated successfully.
Jan 20 14:41:44 compute-0 sudo[294105]: pam_unix(sudo:session): session closed for user root
Jan 20 14:41:44 compute-0 sudo[294255]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:41:44 compute-0 sudo[294255]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:41:44 compute-0 sudo[294255]: pam_unix(sudo:session): session closed for user root
Jan 20 14:41:44 compute-0 sudo[294280]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:41:44 compute-0 sudo[294280]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:41:44 compute-0 sudo[294280]: pam_unix(sudo:session): session closed for user root
Jan 20 14:41:44 compute-0 sudo[294305]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:41:44 compute-0 sudo[294305]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:41:44 compute-0 sudo[294305]: pam_unix(sudo:session): session closed for user root
Jan 20 14:41:44 compute-0 sudo[294330]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- lvm list --format json
Jan 20 14:41:44 compute-0 sudo[294330]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:41:44 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:41:44 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:41:44 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:41:44.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:41:44 compute-0 podman[294394]: 2026-01-20 14:41:44.837399248 +0000 UTC m=+0.038843494 container create 995874be12ecc5d5035697e64148fd65c521c04d08d48f8cae3497049fb213a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_banach, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 14:41:44 compute-0 systemd[1]: Started libpod-conmon-995874be12ecc5d5035697e64148fd65c521c04d08d48f8cae3497049fb213a2.scope.
Jan 20 14:41:44 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:41:44 compute-0 podman[294394]: 2026-01-20 14:41:44.908467314 +0000 UTC m=+0.109911580 container init 995874be12ecc5d5035697e64148fd65c521c04d08d48f8cae3497049fb213a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_banach, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:41:44 compute-0 podman[294394]: 2026-01-20 14:41:44.914524368 +0000 UTC m=+0.115968614 container start 995874be12ecc5d5035697e64148fd65c521c04d08d48f8cae3497049fb213a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_banach, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:41:44 compute-0 podman[294394]: 2026-01-20 14:41:44.822238927 +0000 UTC m=+0.023683193 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:41:44 compute-0 podman[294394]: 2026-01-20 14:41:44.917724564 +0000 UTC m=+0.119168830 container attach 995874be12ecc5d5035697e64148fd65c521c04d08d48f8cae3497049fb213a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_banach, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 20 14:41:44 compute-0 kind_banach[294410]: 167 167
Jan 20 14:41:44 compute-0 systemd[1]: libpod-995874be12ecc5d5035697e64148fd65c521c04d08d48f8cae3497049fb213a2.scope: Deactivated successfully.
Jan 20 14:41:44 compute-0 podman[294394]: 2026-01-20 14:41:44.920307555 +0000 UTC m=+0.121751801 container died 995874be12ecc5d5035697e64148fd65c521c04d08d48f8cae3497049fb213a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_banach, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 20 14:41:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-cd077fd02005cef47fe5a573bdcd92532bdfcb8cad9cbeb32c434569972c01b0-merged.mount: Deactivated successfully.
Jan 20 14:41:44 compute-0 podman[294394]: 2026-01-20 14:41:44.952832236 +0000 UTC m=+0.154276482 container remove 995874be12ecc5d5035697e64148fd65c521c04d08d48f8cae3497049fb213a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_banach, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef)
Jan 20 14:41:44 compute-0 systemd[1]: libpod-conmon-995874be12ecc5d5035697e64148fd65c521c04d08d48f8cae3497049fb213a2.scope: Deactivated successfully.
Jan 20 14:41:45 compute-0 podman[294434]: 2026-01-20 14:41:45.133596924 +0000 UTC m=+0.039047349 container create 324bc5369e98d2db13c92ff910ed7f6a4af89ae53eb1b77bf9b1033de70af88b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_pascal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Jan 20 14:41:45 compute-0 systemd[1]: Started libpod-conmon-324bc5369e98d2db13c92ff910ed7f6a4af89ae53eb1b77bf9b1033de70af88b.scope.
Jan 20 14:41:45 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:41:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d09bd65c00c82918e66a3599b8ba2a5f3fbe6f4f7e8e31782cc5815e4c07861/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:41:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d09bd65c00c82918e66a3599b8ba2a5f3fbe6f4f7e8e31782cc5815e4c07861/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:41:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d09bd65c00c82918e66a3599b8ba2a5f3fbe6f4f7e8e31782cc5815e4c07861/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:41:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d09bd65c00c82918e66a3599b8ba2a5f3fbe6f4f7e8e31782cc5815e4c07861/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:41:45 compute-0 podman[294434]: 2026-01-20 14:41:45.187208498 +0000 UTC m=+0.092658923 container init 324bc5369e98d2db13c92ff910ed7f6a4af89ae53eb1b77bf9b1033de70af88b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_pascal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 20 14:41:45 compute-0 podman[294434]: 2026-01-20 14:41:45.193573309 +0000 UTC m=+0.099023724 container start 324bc5369e98d2db13c92ff910ed7f6a4af89ae53eb1b77bf9b1033de70af88b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_pascal, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 20 14:41:45 compute-0 podman[294434]: 2026-01-20 14:41:45.197127356 +0000 UTC m=+0.102577801 container attach 324bc5369e98d2db13c92ff910ed7f6a4af89ae53eb1b77bf9b1033de70af88b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_pascal, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:41:45 compute-0 podman[294434]: 2026-01-20 14:41:45.116994575 +0000 UTC m=+0.022445020 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:41:45 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1602: 321 pgs: 321 active+clean; 231 MiB data, 729 MiB used, 20 GiB / 21 GiB avail; 735 KiB/s rd, 4.3 MiB/s wr, 155 op/s
Jan 20 14:41:45 compute-0 sudo[294456]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:41:45 compute-0 ceph-mon[74360]: pgmap v1602: 321 pgs: 321 active+clean; 231 MiB data, 729 MiB used, 20 GiB / 21 GiB avail; 735 KiB/s rd, 4.3 MiB/s wr, 155 op/s
Jan 20 14:41:45 compute-0 sudo[294456]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:41:45 compute-0 sudo[294456]: pam_unix(sudo:session): session closed for user root
Jan 20 14:41:45 compute-0 sudo[294481]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:41:45 compute-0 sudo[294481]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:41:45 compute-0 sudo[294481]: pam_unix(sudo:session): session closed for user root
Jan 20 14:41:45 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:41:45 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:41:45 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:41:45.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:41:45 compute-0 naughty_pascal[294450]: {
Jan 20 14:41:45 compute-0 naughty_pascal[294450]:     "0": [
Jan 20 14:41:45 compute-0 naughty_pascal[294450]:         {
Jan 20 14:41:45 compute-0 naughty_pascal[294450]:             "devices": [
Jan 20 14:41:45 compute-0 naughty_pascal[294450]:                 "/dev/loop3"
Jan 20 14:41:45 compute-0 naughty_pascal[294450]:             ],
Jan 20 14:41:45 compute-0 naughty_pascal[294450]:             "lv_name": "ceph_lv0",
Jan 20 14:41:45 compute-0 naughty_pascal[294450]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:41:45 compute-0 naughty_pascal[294450]:             "lv_size": "7511998464",
Jan 20 14:41:45 compute-0 naughty_pascal[294450]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e399cf45-e6b6-5393-99f1-75c601d3f188,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afc3740a-ccee-46f8-b343-22648cc89376,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 20 14:41:45 compute-0 naughty_pascal[294450]:             "lv_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 14:41:45 compute-0 naughty_pascal[294450]:             "name": "ceph_lv0",
Jan 20 14:41:45 compute-0 naughty_pascal[294450]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:41:45 compute-0 naughty_pascal[294450]:             "tags": {
Jan 20 14:41:45 compute-0 naughty_pascal[294450]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:41:45 compute-0 naughty_pascal[294450]:                 "ceph.block_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 14:41:45 compute-0 naughty_pascal[294450]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 14:41:45 compute-0 naughty_pascal[294450]:                 "ceph.cluster_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 14:41:45 compute-0 naughty_pascal[294450]:                 "ceph.cluster_name": "ceph",
Jan 20 14:41:45 compute-0 naughty_pascal[294450]:                 "ceph.crush_device_class": "",
Jan 20 14:41:45 compute-0 naughty_pascal[294450]:                 "ceph.encrypted": "0",
Jan 20 14:41:45 compute-0 naughty_pascal[294450]:                 "ceph.osd_fsid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 14:41:45 compute-0 naughty_pascal[294450]:                 "ceph.osd_id": "0",
Jan 20 14:41:45 compute-0 naughty_pascal[294450]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 14:41:45 compute-0 naughty_pascal[294450]:                 "ceph.type": "block",
Jan 20 14:41:45 compute-0 naughty_pascal[294450]:                 "ceph.vdo": "0"
Jan 20 14:41:45 compute-0 naughty_pascal[294450]:             },
Jan 20 14:41:45 compute-0 naughty_pascal[294450]:             "type": "block",
Jan 20 14:41:45 compute-0 naughty_pascal[294450]:             "vg_name": "ceph_vg0"
Jan 20 14:41:45 compute-0 naughty_pascal[294450]:         }
Jan 20 14:41:45 compute-0 naughty_pascal[294450]:     ]
Jan 20 14:41:45 compute-0 naughty_pascal[294450]: }
Jan 20 14:41:45 compute-0 systemd[1]: libpod-324bc5369e98d2db13c92ff910ed7f6a4af89ae53eb1b77bf9b1033de70af88b.scope: Deactivated successfully.
Jan 20 14:41:45 compute-0 podman[294434]: 2026-01-20 14:41:45.957959433 +0000 UTC m=+0.863409858 container died 324bc5369e98d2db13c92ff910ed7f6a4af89ae53eb1b77bf9b1033de70af88b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_pascal, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 20 14:41:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-4d09bd65c00c82918e66a3599b8ba2a5f3fbe6f4f7e8e31782cc5815e4c07861-merged.mount: Deactivated successfully.
Jan 20 14:41:46 compute-0 podman[294434]: 2026-01-20 14:41:46.008439861 +0000 UTC m=+0.913890286 container remove 324bc5369e98d2db13c92ff910ed7f6a4af89ae53eb1b77bf9b1033de70af88b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_pascal, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:41:46 compute-0 systemd[1]: libpod-conmon-324bc5369e98d2db13c92ff910ed7f6a4af89ae53eb1b77bf9b1033de70af88b.scope: Deactivated successfully.
Jan 20 14:41:46 compute-0 sudo[294330]: pam_unix(sudo:session): session closed for user root
Jan 20 14:41:46 compute-0 sudo[294524]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:41:46 compute-0 sudo[294524]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:41:46 compute-0 sudo[294524]: pam_unix(sudo:session): session closed for user root
Jan 20 14:41:46 compute-0 nova_compute[250018]: 2026-01-20 14:41:46.135 250022 DEBUG nova.network.neutron [None req-33a64a8c-384e-497b-bff9-c56508784f51 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: 86aa2fb7-c532-46b4-a02b-8070608dfe6b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 14:41:46 compute-0 sudo[294549]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:41:46 compute-0 sudo[294549]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:41:46 compute-0 sudo[294549]: pam_unix(sudo:session): session closed for user root
Jan 20 14:41:46 compute-0 sudo[294574]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:41:46 compute-0 sudo[294574]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:41:46 compute-0 sudo[294574]: pam_unix(sudo:session): session closed for user root
Jan 20 14:41:46 compute-0 sudo[294599]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- raw list --format json
Jan 20 14:41:46 compute-0 sudo[294599]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:41:46 compute-0 podman[294664]: 2026-01-20 14:41:46.584573004 +0000 UTC m=+0.038210977 container create da4300300ffcee965d1732517819dc73287d010e6d5a6077bc4304f94ff529b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_matsumoto, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 20 14:41:46 compute-0 systemd[1]: Started libpod-conmon-da4300300ffcee965d1732517819dc73287d010e6d5a6077bc4304f94ff529b7.scope.
Jan 20 14:41:46 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:41:46 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:41:46 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:41:46.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:41:46 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:41:46 compute-0 podman[294664]: 2026-01-20 14:41:46.56782721 +0000 UTC m=+0.021465193 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:41:46 compute-0 podman[294664]: 2026-01-20 14:41:46.666101573 +0000 UTC m=+0.119739566 container init da4300300ffcee965d1732517819dc73287d010e6d5a6077bc4304f94ff529b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_matsumoto, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 20 14:41:46 compute-0 podman[294664]: 2026-01-20 14:41:46.672493036 +0000 UTC m=+0.126131009 container start da4300300ffcee965d1732517819dc73287d010e6d5a6077bc4304f94ff529b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_matsumoto, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:41:46 compute-0 podman[294664]: 2026-01-20 14:41:46.675670663 +0000 UTC m=+0.129308666 container attach da4300300ffcee965d1732517819dc73287d010e6d5a6077bc4304f94ff529b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_matsumoto, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True)
Jan 20 14:41:46 compute-0 trusting_matsumoto[294681]: 167 167
Jan 20 14:41:46 compute-0 systemd[1]: libpod-da4300300ffcee965d1732517819dc73287d010e6d5a6077bc4304f94ff529b7.scope: Deactivated successfully.
Jan 20 14:41:46 compute-0 podman[294664]: 2026-01-20 14:41:46.677697257 +0000 UTC m=+0.131335230 container died da4300300ffcee965d1732517819dc73287d010e6d5a6077bc4304f94ff529b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_matsumoto, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:41:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-f68863f3489ebcbb90e781c86a1d4ab821bbd6b601fd363e68db5b3bfa651954-merged.mount: Deactivated successfully.
Jan 20 14:41:46 compute-0 podman[294664]: 2026-01-20 14:41:46.710256949 +0000 UTC m=+0.163894912 container remove da4300300ffcee965d1732517819dc73287d010e6d5a6077bc4304f94ff529b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_matsumoto, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 20 14:41:46 compute-0 systemd[1]: libpod-conmon-da4300300ffcee965d1732517819dc73287d010e6d5a6077bc4304f94ff529b7.scope: Deactivated successfully.
Jan 20 14:41:46 compute-0 podman[294703]: 2026-01-20 14:41:46.88185295 +0000 UTC m=+0.043038848 container create 9c8c2c1e84fed4eb7d27b0f4dc4068d1a23870316a8a2009b5855fee92daa079 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_moser, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 20 14:41:46 compute-0 systemd[1]: Started libpod-conmon-9c8c2c1e84fed4eb7d27b0f4dc4068d1a23870316a8a2009b5855fee92daa079.scope.
Jan 20 14:41:46 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:41:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d7598cae0c6fb8e690bf071ca59c9775b9689af749d4e7147edeed00bd12b0a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:41:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d7598cae0c6fb8e690bf071ca59c9775b9689af749d4e7147edeed00bd12b0a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:41:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d7598cae0c6fb8e690bf071ca59c9775b9689af749d4e7147edeed00bd12b0a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:41:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d7598cae0c6fb8e690bf071ca59c9775b9689af749d4e7147edeed00bd12b0a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:41:46 compute-0 podman[294703]: 2026-01-20 14:41:46.861360794 +0000 UTC m=+0.022546712 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:41:46 compute-0 podman[294703]: 2026-01-20 14:41:46.959352899 +0000 UTC m=+0.120538807 container init 9c8c2c1e84fed4eb7d27b0f4dc4068d1a23870316a8a2009b5855fee92daa079 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_moser, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:41:46 compute-0 podman[294703]: 2026-01-20 14:41:46.966703369 +0000 UTC m=+0.127889267 container start 9c8c2c1e84fed4eb7d27b0f4dc4068d1a23870316a8a2009b5855fee92daa079 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_moser, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 20 14:41:46 compute-0 podman[294703]: 2026-01-20 14:41:46.970282636 +0000 UTC m=+0.131468534 container attach 9c8c2c1e84fed4eb7d27b0f4dc4068d1a23870316a8a2009b5855fee92daa079 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_moser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:41:47 compute-0 nova_compute[250018]: 2026-01-20 14:41:47.055 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:41:47 compute-0 nova_compute[250018]: 2026-01-20 14:41:47.102 250022 DEBUG nova.compute.manager [req-88e68e13-6b59-4301-a544-8e53784eb4f0 req-009984ff-89ab-4717-a317-821ea4a553b3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 86aa2fb7-c532-46b4-a02b-8070608dfe6b] Received event network-changed-2a804e48-646b-4b9e-8a59-155024ec39ac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:41:47 compute-0 nova_compute[250018]: 2026-01-20 14:41:47.102 250022 DEBUG nova.compute.manager [req-88e68e13-6b59-4301-a544-8e53784eb4f0 req-009984ff-89ab-4717-a317-821ea4a553b3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 86aa2fb7-c532-46b4-a02b-8070608dfe6b] Refreshing instance network info cache due to event network-changed-2a804e48-646b-4b9e-8a59-155024ec39ac. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:41:47 compute-0 nova_compute[250018]: 2026-01-20 14:41:47.102 250022 DEBUG oslo_concurrency.lockutils [req-88e68e13-6b59-4301-a544-8e53784eb4f0 req-009984ff-89ab-4717-a317-821ea4a553b3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-86aa2fb7-c532-46b4-a02b-8070608dfe6b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:41:47 compute-0 nova_compute[250018]: 2026-01-20 14:41:47.103 250022 DEBUG oslo_concurrency.lockutils [req-88e68e13-6b59-4301-a544-8e53784eb4f0 req-009984ff-89ab-4717-a317-821ea4a553b3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-86aa2fb7-c532-46b4-a02b-8070608dfe6b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:41:47 compute-0 nova_compute[250018]: 2026-01-20 14:41:47.103 250022 DEBUG nova.network.neutron [req-88e68e13-6b59-4301-a544-8e53784eb4f0 req-009984ff-89ab-4717-a317-821ea4a553b3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 86aa2fb7-c532-46b4-a02b-8070608dfe6b] Refreshing network info cache for port 2a804e48-646b-4b9e-8a59-155024ec39ac _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:41:47 compute-0 nova_compute[250018]: 2026-01-20 14:41:47.413 250022 DEBUG nova.policy [None req-33a64a8c-384e-497b-bff9-c56508784f51 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c8a9fb458d27434495a77a94827b6097', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e3f93fd4b2154dda9f38e62334904303', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 20 14:41:47 compute-0 nova_compute[250018]: 2026-01-20 14:41:47.433 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:41:47 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1603: 321 pgs: 321 active+clean; 200 MiB data, 715 MiB used, 20 GiB / 21 GiB avail; 559 KiB/s rd, 2.7 MiB/s wr, 126 op/s
Jan 20 14:41:47 compute-0 ceph-mon[74360]: pgmap v1603: 321 pgs: 321 active+clean; 200 MiB data, 715 MiB used, 20 GiB / 21 GiB avail; 559 KiB/s rd, 2.7 MiB/s wr, 126 op/s
Jan 20 14:41:47 compute-0 elastic_moser[294719]: {
Jan 20 14:41:47 compute-0 elastic_moser[294719]:     "afc3740a-ccee-46f8-b343-22648cc89376": {
Jan 20 14:41:47 compute-0 elastic_moser[294719]:         "ceph_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 14:41:47 compute-0 elastic_moser[294719]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 20 14:41:47 compute-0 elastic_moser[294719]:         "osd_id": 0,
Jan 20 14:41:47 compute-0 elastic_moser[294719]:         "osd_uuid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 14:41:47 compute-0 elastic_moser[294719]:         "type": "bluestore"
Jan 20 14:41:47 compute-0 elastic_moser[294719]:     }
Jan 20 14:41:47 compute-0 elastic_moser[294719]: }
Jan 20 14:41:47 compute-0 systemd[1]: libpod-9c8c2c1e84fed4eb7d27b0f4dc4068d1a23870316a8a2009b5855fee92daa079.scope: Deactivated successfully.
Jan 20 14:41:47 compute-0 podman[294703]: 2026-01-20 14:41:47.79483019 +0000 UTC m=+0.956016098 container died 9c8c2c1e84fed4eb7d27b0f4dc4068d1a23870316a8a2009b5855fee92daa079 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_moser, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 20 14:41:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-5d7598cae0c6fb8e690bf071ca59c9775b9689af749d4e7147edeed00bd12b0a-merged.mount: Deactivated successfully.
Jan 20 14:41:47 compute-0 podman[294703]: 2026-01-20 14:41:47.848788763 +0000 UTC m=+1.009974661 container remove 9c8c2c1e84fed4eb7d27b0f4dc4068d1a23870316a8a2009b5855fee92daa079 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_moser, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:41:47 compute-0 systemd[1]: libpod-conmon-9c8c2c1e84fed4eb7d27b0f4dc4068d1a23870316a8a2009b5855fee92daa079.scope: Deactivated successfully.
Jan 20 14:41:47 compute-0 sudo[294599]: pam_unix(sudo:session): session closed for user root
Jan 20 14:41:47 compute-0 podman[294749]: 2026-01-20 14:41:47.884612763 +0000 UTC m=+0.057649963 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Jan 20 14:41:47 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 14:41:47 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:41:47 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 14:41:47 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:41:47 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:41:47 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:41:47.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:41:47 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:41:47 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 0f1e573f-a284-49d3-a0c3-be0b2908aaa1 does not exist
Jan 20 14:41:47 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 0a3da196-a35a-45b7-8267-cb6c43b1398d does not exist
Jan 20 14:41:47 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev c59ee07d-437e-4f14-95fe-3f9da959ea82 does not exist
Jan 20 14:41:47 compute-0 podman[294742]: 2026-01-20 14:41:47.927404883 +0000 UTC m=+0.101621025 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 14:41:47 compute-0 sudo[294795]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:41:47 compute-0 sudo[294795]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:41:47 compute-0 sudo[294795]: pam_unix(sudo:session): session closed for user root
Jan 20 14:41:48 compute-0 nova_compute[250018]: 2026-01-20 14:41:48.001 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:41:48 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:41:48.001 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=24, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=23) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:41:48 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:41:48.002 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 14:41:48 compute-0 sudo[294821]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 14:41:48 compute-0 sudo[294821]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:41:48 compute-0 sudo[294821]: pam_unix(sudo:session): session closed for user root
Jan 20 14:41:48 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:41:48 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:41:48 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:41:48 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:41:48.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:41:48 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:41:48 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:41:48 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/3821528778' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:41:49 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1604: 321 pgs: 321 active+clean; 200 MiB data, 715 MiB used, 20 GiB / 21 GiB avail; 445 KiB/s rd, 2.2 MiB/s wr, 98 op/s
Jan 20 14:41:49 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:41:49 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:41:49 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:41:49.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:41:49 compute-0 ceph-mon[74360]: pgmap v1604: 321 pgs: 321 active+clean; 200 MiB data, 715 MiB used, 20 GiB / 21 GiB avail; 445 KiB/s rd, 2.2 MiB/s wr, 98 op/s
Jan 20 14:41:50 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:41:50 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:41:50 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:41:50.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:41:51 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1605: 321 pgs: 321 active+clean; 200 MiB data, 715 MiB used, 20 GiB / 21 GiB avail; 406 KiB/s rd, 2.1 MiB/s wr, 93 op/s
Jan 20 14:41:51 compute-0 ceph-mon[74360]: pgmap v1605: 321 pgs: 321 active+clean; 200 MiB data, 715 MiB used, 20 GiB / 21 GiB avail; 406 KiB/s rd, 2.1 MiB/s wr, 93 op/s
Jan 20 14:41:51 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:41:51 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:41:51 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:41:51.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:41:52 compute-0 nova_compute[250018]: 2026-01-20 14:41:52.056 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:41:52 compute-0 nova_compute[250018]: 2026-01-20 14:41:52.435 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:41:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:41:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:41:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:41:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:41:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:41:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:41:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Optimize plan auto_2026-01-20_14:41:52
Jan 20 14:41:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 14:41:52 compute-0 ceph-mgr[74653]: [balancer INFO root] do_upmap
Jan 20 14:41:52 compute-0 ceph-mgr[74653]: [balancer INFO root] pools ['.mgr', 'volumes', 'default.rgw.meta', 'cephfs.cephfs.data', 'vms', 'backups', '.rgw.root', 'default.rgw.log', 'default.rgw.control', 'images', 'cephfs.cephfs.meta']
Jan 20 14:41:52 compute-0 ceph-mgr[74653]: [balancer INFO root] prepared 0/10 changes
Jan 20 14:41:52 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:41:52 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:41:52 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:41:52.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:41:53 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:41:53 compute-0 nova_compute[250018]: 2026-01-20 14:41:53.214 250022 DEBUG nova.network.neutron [req-88e68e13-6b59-4301-a544-8e53784eb4f0 req-009984ff-89ab-4717-a317-821ea4a553b3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 86aa2fb7-c532-46b4-a02b-8070608dfe6b] Updated VIF entry in instance network info cache for port 2a804e48-646b-4b9e-8a59-155024ec39ac. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 14:41:53 compute-0 nova_compute[250018]: 2026-01-20 14:41:53.215 250022 DEBUG nova.network.neutron [req-88e68e13-6b59-4301-a544-8e53784eb4f0 req-009984ff-89ab-4717-a317-821ea4a553b3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 86aa2fb7-c532-46b4-a02b-8070608dfe6b] Updating instance_info_cache with network_info: [{"id": "2a804e48-646b-4b9e-8a59-155024ec39ac", "address": "fa:16:3e:48:06:d8", "network": {"id": "fc21b99b-4e34-422c-be05-0a440009dac4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-808285772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3f93fd4b2154dda9f38e62334904303", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a804e48-64", "ovs_interfaceid": "2a804e48-646b-4b9e-8a59-155024ec39ac", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:41:53 compute-0 nova_compute[250018]: 2026-01-20 14:41:53.231 250022 DEBUG nova.network.neutron [None req-33a64a8c-384e-497b-bff9-c56508784f51 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: 86aa2fb7-c532-46b4-a02b-8070608dfe6b] Successfully updated port: e2648ead-7162-4661-94e1-755faa8f1fd1 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 20 14:41:53 compute-0 nova_compute[250018]: 2026-01-20 14:41:53.260 250022 DEBUG oslo_concurrency.lockutils [None req-33a64a8c-384e-497b-bff9-c56508784f51 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Acquiring lock "refresh_cache-86aa2fb7-c532-46b4-a02b-8070608dfe6b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:41:53 compute-0 nova_compute[250018]: 2026-01-20 14:41:53.261 250022 DEBUG oslo_concurrency.lockutils [req-88e68e13-6b59-4301-a544-8e53784eb4f0 req-009984ff-89ab-4717-a317-821ea4a553b3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-86aa2fb7-c532-46b4-a02b-8070608dfe6b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:41:53 compute-0 nova_compute[250018]: 2026-01-20 14:41:53.261 250022 DEBUG oslo_concurrency.lockutils [None req-33a64a8c-384e-497b-bff9-c56508784f51 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Acquired lock "refresh_cache-86aa2fb7-c532-46b4-a02b-8070608dfe6b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:41:53 compute-0 nova_compute[250018]: 2026-01-20 14:41:53.262 250022 DEBUG nova.network.neutron [None req-33a64a8c-384e-497b-bff9-c56508784f51 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: 86aa2fb7-c532-46b4-a02b-8070608dfe6b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:41:53 compute-0 nova_compute[250018]: 2026-01-20 14:41:53.402 250022 DEBUG nova.compute.manager [req-f2df8128-98a4-4681-b3e9-840983f5f814 req-1bdce282-90a1-418c-a88a-d4be95965428 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 86aa2fb7-c532-46b4-a02b-8070608dfe6b] Received event network-changed-e2648ead-7162-4661-94e1-755faa8f1fd1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:41:53 compute-0 nova_compute[250018]: 2026-01-20 14:41:53.404 250022 DEBUG nova.compute.manager [req-f2df8128-98a4-4681-b3e9-840983f5f814 req-1bdce282-90a1-418c-a88a-d4be95965428 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 86aa2fb7-c532-46b4-a02b-8070608dfe6b] Refreshing instance network info cache due to event network-changed-e2648ead-7162-4661-94e1-755faa8f1fd1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:41:53 compute-0 nova_compute[250018]: 2026-01-20 14:41:53.405 250022 DEBUG oslo_concurrency.lockutils [req-f2df8128-98a4-4681-b3e9-840983f5f814 req-1bdce282-90a1-418c-a88a-d4be95965428 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-86aa2fb7-c532-46b4-a02b-8070608dfe6b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:41:53 compute-0 nova_compute[250018]: 2026-01-20 14:41:53.461 250022 WARNING nova.network.neutron [None req-33a64a8c-384e-497b-bff9-c56508784f51 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: 86aa2fb7-c532-46b4-a02b-8070608dfe6b] fc21b99b-4e34-422c-be05-0a440009dac4 already exists in list: networks containing: ['fc21b99b-4e34-422c-be05-0a440009dac4']. ignoring it
Jan 20 14:41:53 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1606: 321 pgs: 321 active+clean; 200 MiB data, 712 MiB used, 20 GiB / 21 GiB avail; 213 KiB/s rd, 904 KiB/s wr, 67 op/s
Jan 20 14:41:53 compute-0 ceph-mon[74360]: pgmap v1606: 321 pgs: 321 active+clean; 200 MiB data, 712 MiB used, 20 GiB / 21 GiB avail; 213 KiB/s rd, 904 KiB/s wr, 67 op/s
Jan 20 14:41:53 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:41:53 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:41:53 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:41:53.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:41:54 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:41:54 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:41:54 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:41:54.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:41:55 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1607: 321 pgs: 321 active+clean; 200 MiB data, 712 MiB used, 20 GiB / 21 GiB avail; 49 KiB/s rd, 298 KiB/s wr, 44 op/s
Jan 20 14:41:55 compute-0 ceph-mon[74360]: pgmap v1607: 321 pgs: 321 active+clean; 200 MiB data, 712 MiB used, 20 GiB / 21 GiB avail; 49 KiB/s rd, 298 KiB/s wr, 44 op/s
Jan 20 14:41:55 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:41:55 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:41:55 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:41:55.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:41:56 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:41:56 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:41:56 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:41:56.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:41:57 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:41:57.004 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=367c1a2c-b16a-4828-ab5a-626bb50023b4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '24'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:41:57 compute-0 nova_compute[250018]: 2026-01-20 14:41:57.058 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:41:57 compute-0 nova_compute[250018]: 2026-01-20 14:41:57.437 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:41:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 14:41:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:41:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 14:41:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:41:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:41:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:41:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:41:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:41:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:41:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:41:57 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1608: 321 pgs: 321 active+clean; 200 MiB data, 712 MiB used, 20 GiB / 21 GiB avail; 1.7 KiB/s rd, 3.8 KiB/s wr, 4 op/s
Jan 20 14:41:57 compute-0 ceph-mon[74360]: pgmap v1608: 321 pgs: 321 active+clean; 200 MiB data, 712 MiB used, 20 GiB / 21 GiB avail; 1.7 KiB/s rd, 3.8 KiB/s wr, 4 op/s
Jan 20 14:41:57 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:41:57 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:41:57 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:41:57.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:41:58 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:41:58 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:41:58 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:41:58 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:41:58.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:41:59 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1609: 321 pgs: 321 active+clean; 200 MiB data, 710 MiB used, 20 GiB / 21 GiB avail; 3.3 KiB/s wr, 0 op/s
Jan 20 14:41:59 compute-0 ceph-mon[74360]: pgmap v1609: 321 pgs: 321 active+clean; 200 MiB data, 710 MiB used, 20 GiB / 21 GiB avail; 3.3 KiB/s wr, 0 op/s
Jan 20 14:41:59 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:41:59 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:41:59 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:41:59.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:42:00 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:42:00 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:42:00 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:42:00.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:42:01 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1610: 321 pgs: 321 active+clean; 200 MiB data, 710 MiB used, 20 GiB / 21 GiB avail; 3.3 KiB/s wr, 0 op/s
Jan 20 14:42:01 compute-0 ceph-mon[74360]: pgmap v1610: 321 pgs: 321 active+clean; 200 MiB data, 710 MiB used, 20 GiB / 21 GiB avail; 3.3 KiB/s wr, 0 op/s
Jan 20 14:42:01 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:42:01 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:42:01 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:42:01.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:42:02 compute-0 nova_compute[250018]: 2026-01-20 14:42:02.062 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:02 compute-0 sshd-session[294854]: Invalid user ubuntu from 157.245.78.139 port 37666
Jan 20 14:42:02 compute-0 nova_compute[250018]: 2026-01-20 14:42:02.439 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:02 compute-0 sshd-session[294854]: Connection closed by invalid user ubuntu 157.245.78.139 port 37666 [preauth]
Jan 20 14:42:02 compute-0 nova_compute[250018]: 2026-01-20 14:42:02.650 250022 DEBUG nova.network.neutron [None req-33a64a8c-384e-497b-bff9-c56508784f51 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: 86aa2fb7-c532-46b4-a02b-8070608dfe6b] Updating instance_info_cache with network_info: [{"id": "2a804e48-646b-4b9e-8a59-155024ec39ac", "address": "fa:16:3e:48:06:d8", "network": {"id": "fc21b99b-4e34-422c-be05-0a440009dac4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-808285772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3f93fd4b2154dda9f38e62334904303", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a804e48-64", "ovs_interfaceid": "2a804e48-646b-4b9e-8a59-155024ec39ac", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "e2648ead-7162-4661-94e1-755faa8f1fd1", "address": "fa:16:3e:80:35:48", "network": {"id": "fc21b99b-4e34-422c-be05-0a440009dac4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-808285772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3f93fd4b2154dda9f38e62334904303", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape2648ead-71", "ovs_interfaceid": "e2648ead-7162-4661-94e1-755faa8f1fd1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:42:02 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:42:02 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:42:02 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:42:02.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:42:03 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:42:03 compute-0 nova_compute[250018]: 2026-01-20 14:42:03.515 250022 DEBUG oslo_concurrency.lockutils [None req-33a64a8c-384e-497b-bff9-c56508784f51 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Releasing lock "refresh_cache-86aa2fb7-c532-46b4-a02b-8070608dfe6b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:42:03 compute-0 nova_compute[250018]: 2026-01-20 14:42:03.516 250022 DEBUG oslo_concurrency.lockutils [req-f2df8128-98a4-4681-b3e9-840983f5f814 req-1bdce282-90a1-418c-a88a-d4be95965428 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-86aa2fb7-c532-46b4-a02b-8070608dfe6b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:42:03 compute-0 nova_compute[250018]: 2026-01-20 14:42:03.516 250022 DEBUG nova.network.neutron [req-f2df8128-98a4-4681-b3e9-840983f5f814 req-1bdce282-90a1-418c-a88a-d4be95965428 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 86aa2fb7-c532-46b4-a02b-8070608dfe6b] Refreshing network info cache for port e2648ead-7162-4661-94e1-755faa8f1fd1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:42:03 compute-0 nova_compute[250018]: 2026-01-20 14:42:03.519 250022 DEBUG nova.virt.libvirt.vif [None req-33a64a8c-384e-497b-bff9-c56508784f51 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:40:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-857473992',display_name='tempest-tempest.common.compute-instance-857473992',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-857473992',id=69,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDk9SkW5N7MhrGaZslG18EJ7xoBof9PQa4upjUw+XxfbO5rNOjJYMJtJMRGPfgbl1pwAZZD7LHjNNMRFKVo+T+C8Rnr+HXWsPYQmvPGwjjZ++NXvRdqES1LIbRDiwaFMJQ==',key_name='tempest-keypair-1970360297',keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:41:04Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='e3f93fd4b2154dda9f38e62334904303',ramdisk_id='',reservation_id='r-2xutoy4e',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-305746947',owner_user_name='tempest-AttachInterfacesTestJSON-305746947-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T14:41:04Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c8a9fb458d27434495a77a94827b6097',uuid=86aa2fb7-c532-46b4-a02b-8070608dfe6b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e2648ead-7162-4661-94e1-755faa8f1fd1", "address": "fa:16:3e:80:35:48", "network": {"id": "fc21b99b-4e34-422c-be05-0a440009dac4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-808285772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3f93fd4b2154dda9f38e62334904303", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape2648ead-71", "ovs_interfaceid": "e2648ead-7162-4661-94e1-755faa8f1fd1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 14:42:03 compute-0 nova_compute[250018]: 2026-01-20 14:42:03.519 250022 DEBUG nova.network.os_vif_util [None req-33a64a8c-384e-497b-bff9-c56508784f51 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Converting VIF {"id": "e2648ead-7162-4661-94e1-755faa8f1fd1", "address": "fa:16:3e:80:35:48", "network": {"id": "fc21b99b-4e34-422c-be05-0a440009dac4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-808285772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3f93fd4b2154dda9f38e62334904303", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape2648ead-71", "ovs_interfaceid": "e2648ead-7162-4661-94e1-755faa8f1fd1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:42:03 compute-0 nova_compute[250018]: 2026-01-20 14:42:03.520 250022 DEBUG nova.network.os_vif_util [None req-33a64a8c-384e-497b-bff9-c56508784f51 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:80:35:48,bridge_name='br-int',has_traffic_filtering=True,id=e2648ead-7162-4661-94e1-755faa8f1fd1,network=Network(fc21b99b-4e34-422c-be05-0a440009dac4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tape2648ead-71') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:42:03 compute-0 nova_compute[250018]: 2026-01-20 14:42:03.520 250022 DEBUG os_vif [None req-33a64a8c-384e-497b-bff9-c56508784f51 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:80:35:48,bridge_name='br-int',has_traffic_filtering=True,id=e2648ead-7162-4661-94e1-755faa8f1fd1,network=Network(fc21b99b-4e34-422c-be05-0a440009dac4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tape2648ead-71') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 14:42:03 compute-0 nova_compute[250018]: 2026-01-20 14:42:03.521 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:03 compute-0 nova_compute[250018]: 2026-01-20 14:42:03.521 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:42:03 compute-0 nova_compute[250018]: 2026-01-20 14:42:03.521 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:42:03 compute-0 nova_compute[250018]: 2026-01-20 14:42:03.525 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:03 compute-0 nova_compute[250018]: 2026-01-20 14:42:03.525 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape2648ead-71, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:42:03 compute-0 nova_compute[250018]: 2026-01-20 14:42:03.526 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape2648ead-71, col_values=(('external_ids', {'iface-id': 'e2648ead-7162-4661-94e1-755faa8f1fd1', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:80:35:48', 'vm-uuid': '86aa2fb7-c532-46b4-a02b-8070608dfe6b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:42:03 compute-0 nova_compute[250018]: 2026-01-20 14:42:03.527 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:03 compute-0 NetworkManager[48960]: <info>  [1768920123.5286] manager: (tape2648ead-71): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/117)
Jan 20 14:42:03 compute-0 nova_compute[250018]: 2026-01-20 14:42:03.529 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:42:03 compute-0 nova_compute[250018]: 2026-01-20 14:42:03.535 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:03 compute-0 nova_compute[250018]: 2026-01-20 14:42:03.536 250022 INFO os_vif [None req-33a64a8c-384e-497b-bff9-c56508784f51 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:80:35:48,bridge_name='br-int',has_traffic_filtering=True,id=e2648ead-7162-4661-94e1-755faa8f1fd1,network=Network(fc21b99b-4e34-422c-be05-0a440009dac4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tape2648ead-71')
Jan 20 14:42:03 compute-0 nova_compute[250018]: 2026-01-20 14:42:03.537 250022 DEBUG nova.virt.libvirt.vif [None req-33a64a8c-384e-497b-bff9-c56508784f51 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:40:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-857473992',display_name='tempest-tempest.common.compute-instance-857473992',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-857473992',id=69,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDk9SkW5N7MhrGaZslG18EJ7xoBof9PQa4upjUw+XxfbO5rNOjJYMJtJMRGPfgbl1pwAZZD7LHjNNMRFKVo+T+C8Rnr+HXWsPYQmvPGwjjZ++NXvRdqES1LIbRDiwaFMJQ==',key_name='tempest-keypair-1970360297',keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:41:04Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='e3f93fd4b2154dda9f38e62334904303',ramdisk_id='',reservation_id='r-2xutoy4e',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-305746947',owner_user_name='tempest-AttachInterfacesTestJSON-305746947-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T14:41:04Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c8a9fb458d27434495a77a94827b6097',uuid=86aa2fb7-c532-46b4-a02b-8070608dfe6b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e2648ead-7162-4661-94e1-755faa8f1fd1", "address": "fa:16:3e:80:35:48", "network": {"id": "fc21b99b-4e34-422c-be05-0a440009dac4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-808285772-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3f93fd4b2154dda9f38e62334904303", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape2648ead-71", "ovs_interfaceid": "e2648ead-7162-4661-94e1-755faa8f1fd1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 14:42:03 compute-0 nova_compute[250018]: 2026-01-20 14:42:03.538 250022 DEBUG nova.network.os_vif_util [None req-33a64a8c-384e-497b-bff9-c56508784f51 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Converting VIF {"id": "e2648ead-7162-4661-94e1-755faa8f1fd1", "address": "fa:16:3e:80:35:48", "network": {"id": "fc21b99b-4e34-422c-be05-0a440009dac4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-808285772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3f93fd4b2154dda9f38e62334904303", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape2648ead-71", "ovs_interfaceid": "e2648ead-7162-4661-94e1-755faa8f1fd1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:42:03 compute-0 nova_compute[250018]: 2026-01-20 14:42:03.538 250022 DEBUG nova.network.os_vif_util [None req-33a64a8c-384e-497b-bff9-c56508784f51 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:80:35:48,bridge_name='br-int',has_traffic_filtering=True,id=e2648ead-7162-4661-94e1-755faa8f1fd1,network=Network(fc21b99b-4e34-422c-be05-0a440009dac4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tape2648ead-71') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:42:03 compute-0 nova_compute[250018]: 2026-01-20 14:42:03.544 250022 DEBUG nova.virt.libvirt.guest [None req-33a64a8c-384e-497b-bff9-c56508784f51 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] attach device xml: <interface type="ethernet">
Jan 20 14:42:03 compute-0 nova_compute[250018]:   <mac address="fa:16:3e:80:35:48"/>
Jan 20 14:42:03 compute-0 nova_compute[250018]:   <model type="virtio"/>
Jan 20 14:42:03 compute-0 nova_compute[250018]:   <driver name="vhost" rx_queue_size="512"/>
Jan 20 14:42:03 compute-0 nova_compute[250018]:   <mtu size="1442"/>
Jan 20 14:42:03 compute-0 nova_compute[250018]:   <target dev="tape2648ead-71"/>
Jan 20 14:42:03 compute-0 nova_compute[250018]: </interface>
Jan 20 14:42:03 compute-0 nova_compute[250018]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Jan 20 14:42:03 compute-0 kernel: tape2648ead-71: entered promiscuous mode
Jan 20 14:42:03 compute-0 NetworkManager[48960]: <info>  [1768920123.5601] manager: (tape2648ead-71): new Tun device (/org/freedesktop/NetworkManager/Devices/118)
Jan 20 14:42:03 compute-0 nova_compute[250018]: 2026-01-20 14:42:03.559 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:03 compute-0 ovn_controller[148666]: 2026-01-20T14:42:03Z|00221|binding|INFO|Claiming lport e2648ead-7162-4661-94e1-755faa8f1fd1 for this chassis.
Jan 20 14:42:03 compute-0 ovn_controller[148666]: 2026-01-20T14:42:03Z|00222|binding|INFO|e2648ead-7162-4661-94e1-755faa8f1fd1: Claiming fa:16:3e:80:35:48 10.100.0.6
Jan 20 14:42:03 compute-0 nova_compute[250018]: 2026-01-20 14:42:03.577 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:03 compute-0 ovn_controller[148666]: 2026-01-20T14:42:03Z|00223|binding|INFO|Setting lport e2648ead-7162-4661-94e1-755faa8f1fd1 ovn-installed in OVS
Jan 20 14:42:03 compute-0 nova_compute[250018]: 2026-01-20 14:42:03.580 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:03 compute-0 systemd-udevd[294862]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:42:03 compute-0 NetworkManager[48960]: <info>  [1768920123.5977] device (tape2648ead-71): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 14:42:03 compute-0 NetworkManager[48960]: <info>  [1768920123.5991] device (tape2648ead-71): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 14:42:03 compute-0 ovn_controller[148666]: 2026-01-20T14:42:03Z|00224|binding|INFO|Setting lport e2648ead-7162-4661-94e1-755faa8f1fd1 up in Southbound
Jan 20 14:42:03 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:03.605 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:80:35:48 10.100.0.6'], port_security=['fa:16:3e:80:35:48 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-381806613', 'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '86aa2fb7-c532-46b4-a02b-8070608dfe6b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fc21b99b-4e34-422c-be05-0a440009dac4', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-381806613', 'neutron:project_id': 'e3f93fd4b2154dda9f38e62334904303', 'neutron:revision_number': '2', 'neutron:security_group_ids': '52cb9fb4-4318-4f53-9b5a-002d95792517', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7af6b6bc-3cbd-48be-9f10-23ec011e0426, chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=e2648ead-7162-4661-94e1-755faa8f1fd1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:42:03 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:03.606 160071 INFO neutron.agent.ovn.metadata.agent [-] Port e2648ead-7162-4661-94e1-755faa8f1fd1 in datapath fc21b99b-4e34-422c-be05-0a440009dac4 bound to our chassis
Jan 20 14:42:03 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:03.607 160071 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fc21b99b-4e34-422c-be05-0a440009dac4
Jan 20 14:42:03 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:03.622 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[62e8e52f-9559-4471-85a0-d28cfb1fb46c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:42:03 compute-0 nova_compute[250018]: 2026-01-20 14:42:03.649 250022 DEBUG nova.virt.libvirt.driver [None req-33a64a8c-384e-497b-bff9-c56508784f51 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:42:03 compute-0 nova_compute[250018]: 2026-01-20 14:42:03.650 250022 DEBUG nova.virt.libvirt.driver [None req-33a64a8c-384e-497b-bff9-c56508784f51 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:42:03 compute-0 nova_compute[250018]: 2026-01-20 14:42:03.650 250022 DEBUG nova.virt.libvirt.driver [None req-33a64a8c-384e-497b-bff9-c56508784f51 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] No VIF found with MAC fa:16:3e:48:06:d8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 14:42:03 compute-0 nova_compute[250018]: 2026-01-20 14:42:03.650 250022 DEBUG nova.virt.libvirt.driver [None req-33a64a8c-384e-497b-bff9-c56508784f51 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] No VIF found with MAC fa:16:3e:80:35:48, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 14:42:03 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:03.653 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[129c76a6-598a-4d5c-8e51-384e39e08725]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:42:03 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1611: 321 pgs: 321 active+clean; 200 MiB data, 710 MiB used, 20 GiB / 21 GiB avail; 2.0 KiB/s wr, 0 op/s
Jan 20 14:42:03 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:03.656 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[dfc8da46-637a-442e-9966-c58f22886ce3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:42:03 compute-0 nova_compute[250018]: 2026-01-20 14:42:03.690 250022 DEBUG nova.virt.libvirt.guest [None req-33a64a8c-384e-497b-bff9-c56508784f51 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 14:42:03 compute-0 nova_compute[250018]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 14:42:03 compute-0 nova_compute[250018]:   <nova:name>tempest-tempest.common.compute-instance-857473992</nova:name>
Jan 20 14:42:03 compute-0 nova_compute[250018]:   <nova:creationTime>2026-01-20 14:42:03</nova:creationTime>
Jan 20 14:42:03 compute-0 nova_compute[250018]:   <nova:flavor name="m1.nano">
Jan 20 14:42:03 compute-0 nova_compute[250018]:     <nova:memory>128</nova:memory>
Jan 20 14:42:03 compute-0 nova_compute[250018]:     <nova:disk>1</nova:disk>
Jan 20 14:42:03 compute-0 nova_compute[250018]:     <nova:swap>0</nova:swap>
Jan 20 14:42:03 compute-0 nova_compute[250018]:     <nova:ephemeral>0</nova:ephemeral>
Jan 20 14:42:03 compute-0 nova_compute[250018]:     <nova:vcpus>1</nova:vcpus>
Jan 20 14:42:03 compute-0 nova_compute[250018]:   </nova:flavor>
Jan 20 14:42:03 compute-0 nova_compute[250018]:   <nova:owner>
Jan 20 14:42:03 compute-0 nova_compute[250018]:     <nova:user uuid="c8a9fb458d27434495a77a94827b6097">tempest-AttachInterfacesTestJSON-305746947-project-member</nova:user>
Jan 20 14:42:03 compute-0 nova_compute[250018]:     <nova:project uuid="e3f93fd4b2154dda9f38e62334904303">tempest-AttachInterfacesTestJSON-305746947</nova:project>
Jan 20 14:42:03 compute-0 nova_compute[250018]:   </nova:owner>
Jan 20 14:42:03 compute-0 nova_compute[250018]:   <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 14:42:03 compute-0 nova_compute[250018]:   <nova:ports>
Jan 20 14:42:03 compute-0 nova_compute[250018]:     <nova:port uuid="2a804e48-646b-4b9e-8a59-155024ec39ac">
Jan 20 14:42:03 compute-0 nova_compute[250018]:       <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Jan 20 14:42:03 compute-0 nova_compute[250018]:     </nova:port>
Jan 20 14:42:03 compute-0 nova_compute[250018]:     <nova:port uuid="e2648ead-7162-4661-94e1-755faa8f1fd1">
Jan 20 14:42:03 compute-0 nova_compute[250018]:       <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Jan 20 14:42:03 compute-0 nova_compute[250018]:     </nova:port>
Jan 20 14:42:03 compute-0 nova_compute[250018]:   </nova:ports>
Jan 20 14:42:03 compute-0 nova_compute[250018]: </nova:instance>
Jan 20 14:42:03 compute-0 nova_compute[250018]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Jan 20 14:42:03 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:03.691 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[122eb5e0-09de-4591-92a6-1e4cc7d4b9d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:42:03 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:03.715 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[d9a3f2ba-e401-44ba-b5cb-2b8851f7538c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfc21b99b-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b1:5b:d2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 70], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 600436, 'reachable_time': 19501, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 294871, 'error': None, 'target': 'ovnmeta-fc21b99b-4e34-422c-be05-0a440009dac4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:42:03 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:03.735 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[b07a3e42-91a3-4e33-98b1-54b7e7712725]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapfc21b99b-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 600447, 'tstamp': 600447}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 294872, 'error': None, 'target': 'ovnmeta-fc21b99b-4e34-422c-be05-0a440009dac4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfc21b99b-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 600450, 'tstamp': 600450}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 294872, 'error': None, 'target': 'ovnmeta-fc21b99b-4e34-422c-be05-0a440009dac4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:42:03 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:03.739 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfc21b99b-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:42:03 compute-0 nova_compute[250018]: 2026-01-20 14:42:03.741 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:03 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:03.744 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfc21b99b-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:42:03 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:03.744 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:42:03 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:03.745 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfc21b99b-40, col_values=(('external_ids', {'iface-id': '583df905-1d9f-49c1-b209-4b7fad1599f6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:42:03 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:03.745 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:42:03 compute-0 nova_compute[250018]: 2026-01-20 14:42:03.752 250022 DEBUG oslo_concurrency.lockutils [None req-33a64a8c-384e-497b-bff9-c56508784f51 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Lock "interface-86aa2fb7-c532-46b4-a02b-8070608dfe6b-e2648ead-7162-4661-94e1-755faa8f1fd1" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 21.512s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:42:03 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:42:03 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:42:03 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:42:03.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:42:04 compute-0 nova_compute[250018]: 2026-01-20 14:42:04.215 250022 DEBUG nova.compute.manager [req-8077f0f4-b74e-4d39-889a-7d5079c56787 req-3aad2067-2020-432d-8e72-d74c95800d6a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 86aa2fb7-c532-46b4-a02b-8070608dfe6b] Received event network-vif-plugged-e2648ead-7162-4661-94e1-755faa8f1fd1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:42:04 compute-0 nova_compute[250018]: 2026-01-20 14:42:04.216 250022 DEBUG oslo_concurrency.lockutils [req-8077f0f4-b74e-4d39-889a-7d5079c56787 req-3aad2067-2020-432d-8e72-d74c95800d6a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "86aa2fb7-c532-46b4-a02b-8070608dfe6b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:42:04 compute-0 nova_compute[250018]: 2026-01-20 14:42:04.216 250022 DEBUG oslo_concurrency.lockutils [req-8077f0f4-b74e-4d39-889a-7d5079c56787 req-3aad2067-2020-432d-8e72-d74c95800d6a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "86aa2fb7-c532-46b4-a02b-8070608dfe6b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:42:04 compute-0 nova_compute[250018]: 2026-01-20 14:42:04.216 250022 DEBUG oslo_concurrency.lockutils [req-8077f0f4-b74e-4d39-889a-7d5079c56787 req-3aad2067-2020-432d-8e72-d74c95800d6a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "86aa2fb7-c532-46b4-a02b-8070608dfe6b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:42:04 compute-0 nova_compute[250018]: 2026-01-20 14:42:04.217 250022 DEBUG nova.compute.manager [req-8077f0f4-b74e-4d39-889a-7d5079c56787 req-3aad2067-2020-432d-8e72-d74c95800d6a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 86aa2fb7-c532-46b4-a02b-8070608dfe6b] No waiting events found dispatching network-vif-plugged-e2648ead-7162-4661-94e1-755faa8f1fd1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:42:04 compute-0 nova_compute[250018]: 2026-01-20 14:42:04.217 250022 WARNING nova.compute.manager [req-8077f0f4-b74e-4d39-889a-7d5079c56787 req-3aad2067-2020-432d-8e72-d74c95800d6a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 86aa2fb7-c532-46b4-a02b-8070608dfe6b] Received unexpected event network-vif-plugged-e2648ead-7162-4661-94e1-755faa8f1fd1 for instance with vm_state active and task_state None.
Jan 20 14:42:04 compute-0 ceph-mon[74360]: pgmap v1611: 321 pgs: 321 active+clean; 200 MiB data, 710 MiB used, 20 GiB / 21 GiB avail; 2.0 KiB/s wr, 0 op/s
Jan 20 14:42:04 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:42:04 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:42:04 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:42:04.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:42:04 compute-0 ovn_controller[148666]: 2026-01-20T14:42:04Z|00028|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:80:35:48 10.100.0.6
Jan 20 14:42:04 compute-0 ovn_controller[148666]: 2026-01-20T14:42:04Z|00029|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:80:35:48 10.100.0.6
Jan 20 14:42:05 compute-0 nova_compute[250018]: 2026-01-20 14:42:05.210 250022 DEBUG oslo_concurrency.lockutils [None req-1d843a85-989a-41f1-a0de-e173c8b549a8 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Acquiring lock "interface-86aa2fb7-c532-46b4-a02b-8070608dfe6b-e2648ead-7162-4661-94e1-755faa8f1fd1" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:42:05 compute-0 nova_compute[250018]: 2026-01-20 14:42:05.210 250022 DEBUG oslo_concurrency.lockutils [None req-1d843a85-989a-41f1-a0de-e173c8b549a8 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Lock "interface-86aa2fb7-c532-46b4-a02b-8070608dfe6b-e2648ead-7162-4661-94e1-755faa8f1fd1" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:42:05 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/4238600245' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:42:05 compute-0 nova_compute[250018]: 2026-01-20 14:42:05.365 250022 DEBUG nova.objects.instance [None req-1d843a85-989a-41f1-a0de-e173c8b549a8 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Lazy-loading 'flavor' on Instance uuid 86aa2fb7-c532-46b4-a02b-8070608dfe6b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:42:05 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1612: 321 pgs: 321 active+clean; 201 MiB data, 710 MiB used, 20 GiB / 21 GiB avail; 4.0 KiB/s rd, 5.0 KiB/s wr, 7 op/s
Jan 20 14:42:05 compute-0 sudo[294874]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:42:05 compute-0 sudo[294874]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:42:05 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:42:05 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:42:05 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:42:05.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:42:05 compute-0 sudo[294874]: pam_unix(sudo:session): session closed for user root
Jan 20 14:42:05 compute-0 sudo[294899]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:42:05 compute-0 sudo[294899]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:42:06 compute-0 sudo[294899]: pam_unix(sudo:session): session closed for user root
Jan 20 14:42:06 compute-0 ceph-mon[74360]: pgmap v1612: 321 pgs: 321 active+clean; 201 MiB data, 710 MiB used, 20 GiB / 21 GiB avail; 4.0 KiB/s rd, 5.0 KiB/s wr, 7 op/s
Jan 20 14:42:06 compute-0 nova_compute[250018]: 2026-01-20 14:42:06.488 250022 DEBUG nova.network.neutron [req-f2df8128-98a4-4681-b3e9-840983f5f814 req-1bdce282-90a1-418c-a88a-d4be95965428 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 86aa2fb7-c532-46b4-a02b-8070608dfe6b] Updated VIF entry in instance network info cache for port e2648ead-7162-4661-94e1-755faa8f1fd1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 14:42:06 compute-0 nova_compute[250018]: 2026-01-20 14:42:06.488 250022 DEBUG nova.network.neutron [req-f2df8128-98a4-4681-b3e9-840983f5f814 req-1bdce282-90a1-418c-a88a-d4be95965428 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 86aa2fb7-c532-46b4-a02b-8070608dfe6b] Updating instance_info_cache with network_info: [{"id": "2a804e48-646b-4b9e-8a59-155024ec39ac", "address": "fa:16:3e:48:06:d8", "network": {"id": "fc21b99b-4e34-422c-be05-0a440009dac4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-808285772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3f93fd4b2154dda9f38e62334904303", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a804e48-64", "ovs_interfaceid": "2a804e48-646b-4b9e-8a59-155024ec39ac", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "e2648ead-7162-4661-94e1-755faa8f1fd1", "address": "fa:16:3e:80:35:48", "network": {"id": "fc21b99b-4e34-422c-be05-0a440009dac4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-808285772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3f93fd4b2154dda9f38e62334904303", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape2648ead-71", "ovs_interfaceid": "e2648ead-7162-4661-94e1-755faa8f1fd1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:42:06 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:42:06 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:42:06 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:42:06.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:42:07 compute-0 nova_compute[250018]: 2026-01-20 14:42:07.064 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:07 compute-0 nova_compute[250018]: 2026-01-20 14:42:07.626 250022 DEBUG nova.compute.manager [req-886cb885-50cf-480b-a661-c82d26811641 req-b4f33183-fd2d-4b89-b0e3-e28147599aee 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 86aa2fb7-c532-46b4-a02b-8070608dfe6b] Received event network-vif-plugged-e2648ead-7162-4661-94e1-755faa8f1fd1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:42:07 compute-0 nova_compute[250018]: 2026-01-20 14:42:07.627 250022 DEBUG oslo_concurrency.lockutils [req-886cb885-50cf-480b-a661-c82d26811641 req-b4f33183-fd2d-4b89-b0e3-e28147599aee 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "86aa2fb7-c532-46b4-a02b-8070608dfe6b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:42:07 compute-0 nova_compute[250018]: 2026-01-20 14:42:07.627 250022 DEBUG oslo_concurrency.lockutils [req-886cb885-50cf-480b-a661-c82d26811641 req-b4f33183-fd2d-4b89-b0e3-e28147599aee 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "86aa2fb7-c532-46b4-a02b-8070608dfe6b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:42:07 compute-0 nova_compute[250018]: 2026-01-20 14:42:07.628 250022 DEBUG oslo_concurrency.lockutils [req-886cb885-50cf-480b-a661-c82d26811641 req-b4f33183-fd2d-4b89-b0e3-e28147599aee 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "86aa2fb7-c532-46b4-a02b-8070608dfe6b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:42:07 compute-0 nova_compute[250018]: 2026-01-20 14:42:07.628 250022 DEBUG nova.compute.manager [req-886cb885-50cf-480b-a661-c82d26811641 req-b4f33183-fd2d-4b89-b0e3-e28147599aee 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 86aa2fb7-c532-46b4-a02b-8070608dfe6b] No waiting events found dispatching network-vif-plugged-e2648ead-7162-4661-94e1-755faa8f1fd1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:42:07 compute-0 nova_compute[250018]: 2026-01-20 14:42:07.628 250022 WARNING nova.compute.manager [req-886cb885-50cf-480b-a661-c82d26811641 req-b4f33183-fd2d-4b89-b0e3-e28147599aee 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 86aa2fb7-c532-46b4-a02b-8070608dfe6b] Received unexpected event network-vif-plugged-e2648ead-7162-4661-94e1-755faa8f1fd1 for instance with vm_state active and task_state None.
Jan 20 14:42:07 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1613: 321 pgs: 321 active+clean; 214 MiB data, 715 MiB used, 20 GiB / 21 GiB avail; 14 KiB/s rd, 441 KiB/s wr, 21 op/s
Jan 20 14:42:07 compute-0 nova_compute[250018]: 2026-01-20 14:42:07.700 250022 DEBUG nova.virt.libvirt.vif [None req-1d843a85-989a-41f1-a0de-e173c8b549a8 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:40:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-857473992',display_name='tempest-tempest.common.compute-instance-857473992',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-857473992',id=69,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDk9SkW5N7MhrGaZslG18EJ7xoBof9PQa4upjUw+XxfbO5rNOjJYMJtJMRGPfgbl1pwAZZD7LHjNNMRFKVo+T+C8Rnr+HXWsPYQmvPGwjjZ++NXvRdqES1LIbRDiwaFMJQ==',key_name='tempest-keypair-1970360297',keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:41:04Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e3f93fd4b2154dda9f38e62334904303',ramdisk_id='',reservation_id='r-2xutoy4e',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-305746947',owner_user_name='tempest-AttachInterfacesTestJSON-305746947-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T14:41:04Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c8a9fb458d27434495a77a94827b6097',uuid=86aa2fb7-c532-46b4-a02b-8070608dfe6b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e2648ead-7162-4661-94e1-755faa8f1fd1", "address": "fa:16:3e:80:35:48", "network": {"id": "fc21b99b-4e34-422c-be05-0a440009dac4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-808285772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3f93fd4b2154dda9f38e62334904303", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape2648ead-71", "ovs_interfaceid": "e2648ead-7162-4661-94e1-755faa8f1fd1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 14:42:07 compute-0 nova_compute[250018]: 2026-01-20 14:42:07.700 250022 DEBUG nova.network.os_vif_util [None req-1d843a85-989a-41f1-a0de-e173c8b549a8 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Converting VIF {"id": "e2648ead-7162-4661-94e1-755faa8f1fd1", "address": "fa:16:3e:80:35:48", "network": {"id": "fc21b99b-4e34-422c-be05-0a440009dac4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-808285772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3f93fd4b2154dda9f38e62334904303", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape2648ead-71", "ovs_interfaceid": "e2648ead-7162-4661-94e1-755faa8f1fd1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:42:07 compute-0 nova_compute[250018]: 2026-01-20 14:42:07.701 250022 DEBUG nova.network.os_vif_util [None req-1d843a85-989a-41f1-a0de-e173c8b549a8 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:80:35:48,bridge_name='br-int',has_traffic_filtering=True,id=e2648ead-7162-4661-94e1-755faa8f1fd1,network=Network(fc21b99b-4e34-422c-be05-0a440009dac4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tape2648ead-71') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:42:07 compute-0 nova_compute[250018]: 2026-01-20 14:42:07.702 250022 DEBUG oslo_concurrency.lockutils [req-f2df8128-98a4-4681-b3e9-840983f5f814 req-1bdce282-90a1-418c-a88a-d4be95965428 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-86aa2fb7-c532-46b4-a02b-8070608dfe6b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:42:07 compute-0 nova_compute[250018]: 2026-01-20 14:42:07.704 250022 DEBUG nova.virt.libvirt.guest [None req-1d843a85-989a-41f1-a0de-e173c8b549a8 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:80:35:48"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tape2648ead-71"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Jan 20 14:42:07 compute-0 nova_compute[250018]: 2026-01-20 14:42:07.706 250022 DEBUG nova.virt.libvirt.guest [None req-1d843a85-989a-41f1-a0de-e173c8b549a8 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:80:35:48"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tape2648ead-71"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Jan 20 14:42:07 compute-0 nova_compute[250018]: 2026-01-20 14:42:07.708 250022 DEBUG nova.virt.libvirt.driver [None req-1d843a85-989a-41f1-a0de-e173c8b549a8 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Attempting to detach device tape2648ead-71 from instance 86aa2fb7-c532-46b4-a02b-8070608dfe6b from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Jan 20 14:42:07 compute-0 nova_compute[250018]: 2026-01-20 14:42:07.708 250022 DEBUG nova.virt.libvirt.guest [None req-1d843a85-989a-41f1-a0de-e173c8b549a8 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] detach device xml: <interface type="ethernet">
Jan 20 14:42:07 compute-0 nova_compute[250018]:   <mac address="fa:16:3e:80:35:48"/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:   <model type="virtio"/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:   <driver name="vhost" rx_queue_size="512"/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:   <mtu size="1442"/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:   <target dev="tape2648ead-71"/>
Jan 20 14:42:07 compute-0 nova_compute[250018]: </interface>
Jan 20 14:42:07 compute-0 nova_compute[250018]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 20 14:42:07 compute-0 nova_compute[250018]: 2026-01-20 14:42:07.713 250022 DEBUG nova.virt.libvirt.guest [None req-1d843a85-989a-41f1-a0de-e173c8b549a8 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:80:35:48"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tape2648ead-71"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Jan 20 14:42:07 compute-0 nova_compute[250018]: 2026-01-20 14:42:07.716 250022 DEBUG nova.virt.libvirt.guest [None req-1d843a85-989a-41f1-a0de-e173c8b549a8 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:80:35:48"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tape2648ead-71"/></interface>not found in domain: <domain type='kvm' id='31'>
Jan 20 14:42:07 compute-0 nova_compute[250018]:   <name>instance-00000045</name>
Jan 20 14:42:07 compute-0 nova_compute[250018]:   <uuid>86aa2fb7-c532-46b4-a02b-8070608dfe6b</uuid>
Jan 20 14:42:07 compute-0 nova_compute[250018]:   <metadata>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 14:42:07 compute-0 nova_compute[250018]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:   <nova:name>tempest-tempest.common.compute-instance-857473992</nova:name>
Jan 20 14:42:07 compute-0 nova_compute[250018]:   <nova:creationTime>2026-01-20 14:42:03</nova:creationTime>
Jan 20 14:42:07 compute-0 nova_compute[250018]:   <nova:flavor name="m1.nano">
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <nova:memory>128</nova:memory>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <nova:disk>1</nova:disk>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <nova:swap>0</nova:swap>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <nova:ephemeral>0</nova:ephemeral>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <nova:vcpus>1</nova:vcpus>
Jan 20 14:42:07 compute-0 nova_compute[250018]:   </nova:flavor>
Jan 20 14:42:07 compute-0 nova_compute[250018]:   <nova:owner>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <nova:user uuid="c8a9fb458d27434495a77a94827b6097">tempest-AttachInterfacesTestJSON-305746947-project-member</nova:user>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <nova:project uuid="e3f93fd4b2154dda9f38e62334904303">tempest-AttachInterfacesTestJSON-305746947</nova:project>
Jan 20 14:42:07 compute-0 nova_compute[250018]:   </nova:owner>
Jan 20 14:42:07 compute-0 nova_compute[250018]:   <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:   <nova:ports>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <nova:port uuid="2a804e48-646b-4b9e-8a59-155024ec39ac">
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     </nova:port>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <nova:port uuid="e2648ead-7162-4661-94e1-755faa8f1fd1">
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     </nova:port>
Jan 20 14:42:07 compute-0 nova_compute[250018]:   </nova:ports>
Jan 20 14:42:07 compute-0 nova_compute[250018]: </nova:instance>
Jan 20 14:42:07 compute-0 nova_compute[250018]:   </metadata>
Jan 20 14:42:07 compute-0 nova_compute[250018]:   <memory unit='KiB'>131072</memory>
Jan 20 14:42:07 compute-0 nova_compute[250018]:   <currentMemory unit='KiB'>131072</currentMemory>
Jan 20 14:42:07 compute-0 nova_compute[250018]:   <vcpu placement='static'>1</vcpu>
Jan 20 14:42:07 compute-0 nova_compute[250018]:   <resource>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <partition>/machine</partition>
Jan 20 14:42:07 compute-0 nova_compute[250018]:   </resource>
Jan 20 14:42:07 compute-0 nova_compute[250018]:   <sysinfo type='smbios'>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <system>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <entry name='manufacturer'>RDO</entry>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <entry name='product'>OpenStack Compute</entry>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <entry name='serial'>86aa2fb7-c532-46b4-a02b-8070608dfe6b</entry>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <entry name='uuid'>86aa2fb7-c532-46b4-a02b-8070608dfe6b</entry>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <entry name='family'>Virtual Machine</entry>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     </system>
Jan 20 14:42:07 compute-0 nova_compute[250018]:   </sysinfo>
Jan 20 14:42:07 compute-0 nova_compute[250018]:   <os>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <boot dev='hd'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <smbios mode='sysinfo'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:   </os>
Jan 20 14:42:07 compute-0 nova_compute[250018]:   <features>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <acpi/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <apic/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <vmcoreinfo state='on'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:   </features>
Jan 20 14:42:07 compute-0 nova_compute[250018]:   <cpu mode='custom' match='exact' check='full'>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <model fallback='forbid'>Nehalem</model>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <feature policy='require' name='x2apic'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <feature policy='require' name='hypervisor'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <feature policy='require' name='vme'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:   </cpu>
Jan 20 14:42:07 compute-0 nova_compute[250018]:   <clock offset='utc'>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <timer name='pit' tickpolicy='delay'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <timer name='rtc' tickpolicy='catchup'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <timer name='hpet' present='no'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:   </clock>
Jan 20 14:42:07 compute-0 nova_compute[250018]:   <on_poweroff>destroy</on_poweroff>
Jan 20 14:42:07 compute-0 nova_compute[250018]:   <on_reboot>restart</on_reboot>
Jan 20 14:42:07 compute-0 nova_compute[250018]:   <on_crash>destroy</on_crash>
Jan 20 14:42:07 compute-0 nova_compute[250018]:   <devices>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <disk type='network' device='disk'>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <driver name='qemu' type='raw' cache='none'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <auth username='openstack'>
Jan 20 14:42:07 compute-0 nova_compute[250018]:         <secret type='ceph' uuid='e399cf45-e6b6-5393-99f1-75c601d3f188'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       </auth>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <source protocol='rbd' name='vms/86aa2fb7-c532-46b4-a02b-8070608dfe6b_disk' index='2'>
Jan 20 14:42:07 compute-0 nova_compute[250018]:         <host name='192.168.122.100' port='6789'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:         <host name='192.168.122.102' port='6789'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:         <host name='192.168.122.101' port='6789'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       </source>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <target dev='vda' bus='virtio'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <alias name='virtio-disk0'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     </disk>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <disk type='network' device='cdrom'>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <driver name='qemu' type='raw' cache='none'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <auth username='openstack'>
Jan 20 14:42:07 compute-0 nova_compute[250018]:         <secret type='ceph' uuid='e399cf45-e6b6-5393-99f1-75c601d3f188'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       </auth>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <source protocol='rbd' name='vms/86aa2fb7-c532-46b4-a02b-8070608dfe6b_disk.config' index='1'>
Jan 20 14:42:07 compute-0 nova_compute[250018]:         <host name='192.168.122.100' port='6789'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:         <host name='192.168.122.102' port='6789'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:         <host name='192.168.122.101' port='6789'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       </source>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <target dev='sda' bus='sata'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <readonly/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <alias name='sata0-0-0'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     </disk>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <controller type='pci' index='0' model='pcie-root'>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <alias name='pcie.0'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     </controller>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <controller type='pci' index='1' model='pcie-root-port'>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <model name='pcie-root-port'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <target chassis='1' port='0x10'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <alias name='pci.1'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     </controller>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <controller type='pci' index='2' model='pcie-root-port'>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <model name='pcie-root-port'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <target chassis='2' port='0x11'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <alias name='pci.2'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     </controller>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <controller type='pci' index='3' model='pcie-root-port'>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <model name='pcie-root-port'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <target chassis='3' port='0x12'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <alias name='pci.3'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     </controller>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <controller type='pci' index='4' model='pcie-root-port'>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <model name='pcie-root-port'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <target chassis='4' port='0x13'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <alias name='pci.4'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     </controller>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <controller type='pci' index='5' model='pcie-root-port'>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <model name='pcie-root-port'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <target chassis='5' port='0x14'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <alias name='pci.5'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     </controller>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <controller type='pci' index='6' model='pcie-root-port'>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <model name='pcie-root-port'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <target chassis='6' port='0x15'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <alias name='pci.6'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     </controller>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <controller type='pci' index='7' model='pcie-root-port'>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <model name='pcie-root-port'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <target chassis='7' port='0x16'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <alias name='pci.7'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     </controller>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <controller type='pci' index='8' model='pcie-root-port'>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <model name='pcie-root-port'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <target chassis='8' port='0x17'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <alias name='pci.8'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     </controller>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <controller type='pci' index='9' model='pcie-root-port'>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <model name='pcie-root-port'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <target chassis='9' port='0x18'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <alias name='pci.9'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     </controller>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <controller type='pci' index='10' model='pcie-root-port'>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <model name='pcie-root-port'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <target chassis='10' port='0x19'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <alias name='pci.10'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     </controller>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <controller type='pci' index='11' model='pcie-root-port'>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <model name='pcie-root-port'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <target chassis='11' port='0x1a'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <alias name='pci.11'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     </controller>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <controller type='pci' index='12' model='pcie-root-port'>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <model name='pcie-root-port'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <target chassis='12' port='0x1b'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <alias name='pci.12'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     </controller>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <controller type='pci' index='13' model='pcie-root-port'>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <model name='pcie-root-port'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <target chassis='13' port='0x1c'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <alias name='pci.13'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     </controller>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <controller type='pci' index='14' model='pcie-root-port'>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <model name='pcie-root-port'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <target chassis='14' port='0x1d'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <alias name='pci.14'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     </controller>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <controller type='pci' index='15' model='pcie-root-port'>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <model name='pcie-root-port'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <target chassis='15' port='0x1e'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <alias name='pci.15'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     </controller>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <controller type='pci' index='16' model='pcie-root-port'>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <model name='pcie-root-port'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <target chassis='16' port='0x1f'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <alias name='pci.16'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     </controller>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <controller type='pci' index='17' model='pcie-root-port'>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <model name='pcie-root-port'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <target chassis='17' port='0x20'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <alias name='pci.17'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     </controller>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <controller type='pci' index='18' model='pcie-root-port'>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <model name='pcie-root-port'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <target chassis='18' port='0x21'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <alias name='pci.18'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     </controller>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <controller type='pci' index='19' model='pcie-root-port'>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <model name='pcie-root-port'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <target chassis='19' port='0x22'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <alias name='pci.19'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     </controller>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <controller type='pci' index='20' model='pcie-root-port'>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <model name='pcie-root-port'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <target chassis='20' port='0x23'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <alias name='pci.20'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     </controller>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <controller type='pci' index='21' model='pcie-root-port'>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <model name='pcie-root-port'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <target chassis='21' port='0x24'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <alias name='pci.21'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     </controller>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <controller type='pci' index='22' model='pcie-root-port'>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <model name='pcie-root-port'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <target chassis='22' port='0x25'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <alias name='pci.22'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     </controller>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <controller type='pci' index='23' model='pcie-root-port'>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <model name='pcie-root-port'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <target chassis='23' port='0x26'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <alias name='pci.23'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     </controller>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <controller type='pci' index='24' model='pcie-root-port'>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <model name='pcie-root-port'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <target chassis='24' port='0x27'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <alias name='pci.24'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     </controller>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <controller type='pci' index='25' model='pcie-root-port'>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <model name='pcie-root-port'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <target chassis='25' port='0x28'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <alias name='pci.25'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     </controller>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <model name='pcie-pci-bridge'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <alias name='pci.26'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     </controller>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <controller type='usb' index='0' model='piix3-uhci'>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <alias name='usb'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     </controller>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <controller type='sata' index='0'>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <alias name='ide'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     </controller>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <interface type='ethernet'>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <mac address='fa:16:3e:48:06:d8'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <target dev='tap2a804e48-64'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <model type='virtio'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <driver name='vhost' rx_queue_size='512'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <mtu size='1442'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <alias name='net0'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     </interface>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <interface type='ethernet'>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <mac address='fa:16:3e:80:35:48'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <target dev='tape2648ead-71'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <model type='virtio'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <driver name='vhost' rx_queue_size='512'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <mtu size='1442'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <alias name='net1'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     </interface>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <serial type='pty'>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <source path='/dev/pts/0'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <log file='/var/lib/nova/instances/86aa2fb7-c532-46b4-a02b-8070608dfe6b/console.log' append='off'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <target type='isa-serial' port='0'>
Jan 20 14:42:07 compute-0 nova_compute[250018]:         <model name='isa-serial'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       </target>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <alias name='serial0'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     </serial>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <console type='pty' tty='/dev/pts/0'>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <source path='/dev/pts/0'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <log file='/var/lib/nova/instances/86aa2fb7-c532-46b4-a02b-8070608dfe6b/console.log' append='off'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <target type='serial' port='0'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <alias name='serial0'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     </console>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <input type='tablet' bus='usb'>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <alias name='input0'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <address type='usb' bus='0' port='1'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     </input>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <input type='mouse' bus='ps2'>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <alias name='input1'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     </input>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <input type='keyboard' bus='ps2'>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <alias name='input2'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     </input>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <listen type='address' address='::0'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     </graphics>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <audio id='1' type='none'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <video>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <model type='virtio' heads='1' primary='yes'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <alias name='video0'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     </video>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <watchdog model='itco' action='reset'>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <alias name='watchdog0'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     </watchdog>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <memballoon model='virtio'>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <stats period='10'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <alias name='balloon0'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     </memballoon>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <rng model='virtio'>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <backend model='random'>/dev/urandom</backend>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <alias name='rng0'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     </rng>
Jan 20 14:42:07 compute-0 nova_compute[250018]:   </devices>
Jan 20 14:42:07 compute-0 nova_compute[250018]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <label>system_u:system_r:svirt_t:s0:c89,c251</label>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c89,c251</imagelabel>
Jan 20 14:42:07 compute-0 nova_compute[250018]:   </seclabel>
Jan 20 14:42:07 compute-0 nova_compute[250018]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <label>+107:+107</label>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <imagelabel>+107:+107</imagelabel>
Jan 20 14:42:07 compute-0 nova_compute[250018]:   </seclabel>
Jan 20 14:42:07 compute-0 nova_compute[250018]: </domain>
Jan 20 14:42:07 compute-0 nova_compute[250018]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Jan 20 14:42:07 compute-0 nova_compute[250018]: 2026-01-20 14:42:07.717 250022 INFO nova.virt.libvirt.driver [None req-1d843a85-989a-41f1-a0de-e173c8b549a8 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Successfully detached device tape2648ead-71 from instance 86aa2fb7-c532-46b4-a02b-8070608dfe6b from the persistent domain config.
Jan 20 14:42:07 compute-0 nova_compute[250018]: 2026-01-20 14:42:07.718 250022 DEBUG nova.virt.libvirt.driver [None req-1d843a85-989a-41f1-a0de-e173c8b549a8 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] (1/8): Attempting to detach device tape2648ead-71 with device alias net1 from instance 86aa2fb7-c532-46b4-a02b-8070608dfe6b from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Jan 20 14:42:07 compute-0 nova_compute[250018]: 2026-01-20 14:42:07.718 250022 DEBUG nova.virt.libvirt.guest [None req-1d843a85-989a-41f1-a0de-e173c8b549a8 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] detach device xml: <interface type="ethernet">
Jan 20 14:42:07 compute-0 nova_compute[250018]:   <mac address="fa:16:3e:80:35:48"/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:   <model type="virtio"/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:   <driver name="vhost" rx_queue_size="512"/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:   <mtu size="1442"/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:   <target dev="tape2648ead-71"/>
Jan 20 14:42:07 compute-0 nova_compute[250018]: </interface>
Jan 20 14:42:07 compute-0 nova_compute[250018]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 20 14:42:07 compute-0 ceph-mon[74360]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #75. Immutable memtables: 0.
Jan 20 14:42:07 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:42:07.728558) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 14:42:07 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:856] [default] [JOB 41] Flushing memtable with next log file: 75
Jan 20 14:42:07 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768920127728602, "job": 41, "event": "flush_started", "num_memtables": 1, "num_entries": 1958, "num_deletes": 251, "total_data_size": 3400694, "memory_usage": 3457584, "flush_reason": "Manual Compaction"}
Jan 20 14:42:07 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:885] [default] [JOB 41] Level-0 flush table #76: started
Jan 20 14:42:07 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768920127744926, "cf_name": "default", "job": 41, "event": "table_file_creation", "file_number": 76, "file_size": 3307047, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 34497, "largest_seqno": 36454, "table_properties": {"data_size": 3298227, "index_size": 5378, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18936, "raw_average_key_size": 20, "raw_value_size": 3280465, "raw_average_value_size": 3554, "num_data_blocks": 234, "num_entries": 923, "num_filter_entries": 923, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768919948, "oldest_key_time": 1768919948, "file_creation_time": 1768920127, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1688fb6f-f397-4aac-a0e8-874ff91d97a7", "db_session_id": "5V3N2TVXYZBCXP55EZHK", "orig_file_number": 76, "seqno_to_time_mapping": "N/A"}}
Jan 20 14:42:07 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 41] Flush lasted 16483 microseconds, and 6465 cpu microseconds.
Jan 20 14:42:07 compute-0 ceph-mon[74360]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 14:42:07 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:42:07.745043) [db/flush_job.cc:967] [default] [JOB 41] Level-0 flush table #76: 3307047 bytes OK
Jan 20 14:42:07 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:42:07.745091) [db/memtable_list.cc:519] [default] Level-0 commit table #76 started
Jan 20 14:42:07 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:42:07.746645) [db/memtable_list.cc:722] [default] Level-0 commit table #76: memtable #1 done
Jan 20 14:42:07 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:42:07.746655) EVENT_LOG_v1 {"time_micros": 1768920127746652, "job": 41, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 14:42:07 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:42:07.746670) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 14:42:07 compute-0 ceph-mon[74360]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 41] Try to delete WAL files size 3392593, prev total WAL file size 3392593, number of live WAL files 2.
Jan 20 14:42:07 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000072.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:42:07 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:42:07.747771) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033303132' seq:72057594037927935, type:22 .. '7061786F730033323634' seq:0, type:0; will stop at (end)
Jan 20 14:42:07 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 42] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 14:42:07 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 41 Base level 0, inputs: [76(3229KB)], [74(8242KB)]
Jan 20 14:42:07 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768920127747809, "job": 42, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [76], "files_L6": [74], "score": -1, "input_data_size": 11747049, "oldest_snapshot_seqno": -1}
Jan 20 14:42:07 compute-0 ceph-mon[74360]: pgmap v1613: 321 pgs: 321 active+clean; 214 MiB data, 715 MiB used, 20 GiB / 21 GiB avail; 14 KiB/s rd, 441 KiB/s wr, 21 op/s
Jan 20 14:42:07 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 42] Generated table #77: 6457 keys, 9845812 bytes, temperature: kUnknown
Jan 20 14:42:07 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768920127801683, "cf_name": "default", "job": 42, "event": "table_file_creation", "file_number": 77, "file_size": 9845812, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9802848, "index_size": 25702, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16197, "raw_key_size": 164866, "raw_average_key_size": 25, "raw_value_size": 9687245, "raw_average_value_size": 1500, "num_data_blocks": 1030, "num_entries": 6457, "num_filter_entries": 6457, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768917305, "oldest_key_time": 0, "file_creation_time": 1768920127, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1688fb6f-f397-4aac-a0e8-874ff91d97a7", "db_session_id": "5V3N2TVXYZBCXP55EZHK", "orig_file_number": 77, "seqno_to_time_mapping": "N/A"}}
Jan 20 14:42:07 compute-0 ceph-mon[74360]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 14:42:07 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:42:07.801884) [db/compaction/compaction_job.cc:1663] [default] [JOB 42] Compacted 1@0 + 1@6 files to L6 => 9845812 bytes
Jan 20 14:42:07 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:42:07.803206) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 217.8 rd, 182.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 8.0 +0.0 blob) out(9.4 +0.0 blob), read-write-amplify(6.5) write-amplify(3.0) OK, records in: 6976, records dropped: 519 output_compression: NoCompression
Jan 20 14:42:07 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:42:07.803222) EVENT_LOG_v1 {"time_micros": 1768920127803214, "job": 42, "event": "compaction_finished", "compaction_time_micros": 53935, "compaction_time_cpu_micros": 22792, "output_level": 6, "num_output_files": 1, "total_output_size": 9845812, "num_input_records": 6976, "num_output_records": 6457, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 14:42:07 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000076.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:42:07 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768920127803808, "job": 42, "event": "table_file_deletion", "file_number": 76}
Jan 20 14:42:07 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000074.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:42:07 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768920127805087, "job": 42, "event": "table_file_deletion", "file_number": 74}
Jan 20 14:42:07 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:42:07.747708) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:42:07 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:42:07.805111) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:42:07 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:42:07.805115) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:42:07 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:42:07.805116) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:42:07 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:42:07.805118) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:42:07 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:42:07.805119) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:42:07 compute-0 kernel: tape2648ead-71 (unregistering): left promiscuous mode
Jan 20 14:42:07 compute-0 NetworkManager[48960]: <info>  [1768920127.8196] device (tape2648ead-71): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 14:42:07 compute-0 nova_compute[250018]: 2026-01-20 14:42:07.842 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:07 compute-0 ovn_controller[148666]: 2026-01-20T14:42:07Z|00225|binding|INFO|Releasing lport e2648ead-7162-4661-94e1-755faa8f1fd1 from this chassis (sb_readonly=0)
Jan 20 14:42:07 compute-0 ovn_controller[148666]: 2026-01-20T14:42:07Z|00226|binding|INFO|Setting lport e2648ead-7162-4661-94e1-755faa8f1fd1 down in Southbound
Jan 20 14:42:07 compute-0 ovn_controller[148666]: 2026-01-20T14:42:07Z|00227|binding|INFO|Removing iface tape2648ead-71 ovn-installed in OVS
Jan 20 14:42:07 compute-0 nova_compute[250018]: 2026-01-20 14:42:07.844 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:07 compute-0 nova_compute[250018]: 2026-01-20 14:42:07.851 250022 DEBUG nova.virt.libvirt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Received event <DeviceRemovedEvent: 1768920127.8511171, 86aa2fb7-c532-46b4-a02b-8070608dfe6b => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Jan 20 14:42:07 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:07.849 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:80:35:48 10.100.0.6'], port_security=['fa:16:3e:80:35:48 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-381806613', 'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '86aa2fb7-c532-46b4-a02b-8070608dfe6b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fc21b99b-4e34-422c-be05-0a440009dac4', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-381806613', 'neutron:project_id': 'e3f93fd4b2154dda9f38e62334904303', 'neutron:revision_number': '4', 'neutron:security_group_ids': '52cb9fb4-4318-4f53-9b5a-002d95792517', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7af6b6bc-3cbd-48be-9f10-23ec011e0426, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=e2648ead-7162-4661-94e1-755faa8f1fd1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:42:07 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:07.850 160071 INFO neutron.agent.ovn.metadata.agent [-] Port e2648ead-7162-4661-94e1-755faa8f1fd1 in datapath fc21b99b-4e34-422c-be05-0a440009dac4 unbound from our chassis
Jan 20 14:42:07 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:07.852 160071 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fc21b99b-4e34-422c-be05-0a440009dac4
Jan 20 14:42:07 compute-0 nova_compute[250018]: 2026-01-20 14:42:07.856 250022 DEBUG nova.virt.libvirt.driver [None req-1d843a85-989a-41f1-a0de-e173c8b549a8 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Start waiting for the detach event from libvirt for device tape2648ead-71 with device alias net1 for instance 86aa2fb7-c532-46b4-a02b-8070608dfe6b _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Jan 20 14:42:07 compute-0 nova_compute[250018]: 2026-01-20 14:42:07.856 250022 DEBUG nova.virt.libvirt.guest [None req-1d843a85-989a-41f1-a0de-e173c8b549a8 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:80:35:48"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tape2648ead-71"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Jan 20 14:42:07 compute-0 nova_compute[250018]: 2026-01-20 14:42:07.857 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:07 compute-0 nova_compute[250018]: 2026-01-20 14:42:07.860 250022 DEBUG nova.virt.libvirt.guest [None req-1d843a85-989a-41f1-a0de-e173c8b549a8 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:80:35:48"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tape2648ead-71"/></interface>not found in domain: <domain type='kvm' id='31'>
Jan 20 14:42:07 compute-0 nova_compute[250018]:   <name>instance-00000045</name>
Jan 20 14:42:07 compute-0 nova_compute[250018]:   <uuid>86aa2fb7-c532-46b4-a02b-8070608dfe6b</uuid>
Jan 20 14:42:07 compute-0 nova_compute[250018]:   <metadata>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 14:42:07 compute-0 nova_compute[250018]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:   <nova:name>tempest-tempest.common.compute-instance-857473992</nova:name>
Jan 20 14:42:07 compute-0 nova_compute[250018]:   <nova:creationTime>2026-01-20 14:42:03</nova:creationTime>
Jan 20 14:42:07 compute-0 nova_compute[250018]:   <nova:flavor name="m1.nano">
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <nova:memory>128</nova:memory>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <nova:disk>1</nova:disk>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <nova:swap>0</nova:swap>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <nova:ephemeral>0</nova:ephemeral>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <nova:vcpus>1</nova:vcpus>
Jan 20 14:42:07 compute-0 nova_compute[250018]:   </nova:flavor>
Jan 20 14:42:07 compute-0 nova_compute[250018]:   <nova:owner>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <nova:user uuid="c8a9fb458d27434495a77a94827b6097">tempest-AttachInterfacesTestJSON-305746947-project-member</nova:user>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <nova:project uuid="e3f93fd4b2154dda9f38e62334904303">tempest-AttachInterfacesTestJSON-305746947</nova:project>
Jan 20 14:42:07 compute-0 nova_compute[250018]:   </nova:owner>
Jan 20 14:42:07 compute-0 nova_compute[250018]:   <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:   <nova:ports>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <nova:port uuid="2a804e48-646b-4b9e-8a59-155024ec39ac">
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     </nova:port>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <nova:port uuid="e2648ead-7162-4661-94e1-755faa8f1fd1">
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     </nova:port>
Jan 20 14:42:07 compute-0 nova_compute[250018]:   </nova:ports>
Jan 20 14:42:07 compute-0 nova_compute[250018]: </nova:instance>
Jan 20 14:42:07 compute-0 nova_compute[250018]:   </metadata>
Jan 20 14:42:07 compute-0 nova_compute[250018]:   <memory unit='KiB'>131072</memory>
Jan 20 14:42:07 compute-0 nova_compute[250018]:   <currentMemory unit='KiB'>131072</currentMemory>
Jan 20 14:42:07 compute-0 nova_compute[250018]:   <vcpu placement='static'>1</vcpu>
Jan 20 14:42:07 compute-0 nova_compute[250018]:   <resource>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <partition>/machine</partition>
Jan 20 14:42:07 compute-0 nova_compute[250018]:   </resource>
Jan 20 14:42:07 compute-0 nova_compute[250018]:   <sysinfo type='smbios'>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <system>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <entry name='manufacturer'>RDO</entry>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <entry name='product'>OpenStack Compute</entry>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <entry name='serial'>86aa2fb7-c532-46b4-a02b-8070608dfe6b</entry>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <entry name='uuid'>86aa2fb7-c532-46b4-a02b-8070608dfe6b</entry>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <entry name='family'>Virtual Machine</entry>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     </system>
Jan 20 14:42:07 compute-0 nova_compute[250018]:   </sysinfo>
Jan 20 14:42:07 compute-0 nova_compute[250018]:   <os>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <boot dev='hd'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <smbios mode='sysinfo'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:   </os>
Jan 20 14:42:07 compute-0 nova_compute[250018]:   <features>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <acpi/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <apic/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <vmcoreinfo state='on'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:   </features>
Jan 20 14:42:07 compute-0 nova_compute[250018]:   <cpu mode='custom' match='exact' check='full'>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <model fallback='forbid'>Nehalem</model>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <feature policy='require' name='x2apic'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <feature policy='require' name='hypervisor'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <feature policy='require' name='vme'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:   </cpu>
Jan 20 14:42:07 compute-0 nova_compute[250018]:   <clock offset='utc'>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <timer name='pit' tickpolicy='delay'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <timer name='rtc' tickpolicy='catchup'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <timer name='hpet' present='no'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:   </clock>
Jan 20 14:42:07 compute-0 nova_compute[250018]:   <on_poweroff>destroy</on_poweroff>
Jan 20 14:42:07 compute-0 nova_compute[250018]:   <on_reboot>restart</on_reboot>
Jan 20 14:42:07 compute-0 nova_compute[250018]:   <on_crash>destroy</on_crash>
Jan 20 14:42:07 compute-0 nova_compute[250018]:   <devices>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <disk type='network' device='disk'>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <driver name='qemu' type='raw' cache='none'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <auth username='openstack'>
Jan 20 14:42:07 compute-0 nova_compute[250018]:         <secret type='ceph' uuid='e399cf45-e6b6-5393-99f1-75c601d3f188'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       </auth>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <source protocol='rbd' name='vms/86aa2fb7-c532-46b4-a02b-8070608dfe6b_disk' index='2'>
Jan 20 14:42:07 compute-0 nova_compute[250018]:         <host name='192.168.122.100' port='6789'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:         <host name='192.168.122.102' port='6789'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:         <host name='192.168.122.101' port='6789'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       </source>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <target dev='vda' bus='virtio'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <alias name='virtio-disk0'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     </disk>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <disk type='network' device='cdrom'>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <driver name='qemu' type='raw' cache='none'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <auth username='openstack'>
Jan 20 14:42:07 compute-0 nova_compute[250018]:         <secret type='ceph' uuid='e399cf45-e6b6-5393-99f1-75c601d3f188'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       </auth>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <source protocol='rbd' name='vms/86aa2fb7-c532-46b4-a02b-8070608dfe6b_disk.config' index='1'>
Jan 20 14:42:07 compute-0 nova_compute[250018]:         <host name='192.168.122.100' port='6789'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:         <host name='192.168.122.102' port='6789'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:         <host name='192.168.122.101' port='6789'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       </source>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <target dev='sda' bus='sata'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <readonly/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <alias name='sata0-0-0'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     </disk>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <controller type='pci' index='0' model='pcie-root'>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <alias name='pcie.0'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     </controller>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <controller type='pci' index='1' model='pcie-root-port'>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <model name='pcie-root-port'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <target chassis='1' port='0x10'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <alias name='pci.1'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     </controller>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <controller type='pci' index='2' model='pcie-root-port'>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <model name='pcie-root-port'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <target chassis='2' port='0x11'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <alias name='pci.2'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     </controller>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <controller type='pci' index='3' model='pcie-root-port'>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <model name='pcie-root-port'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <target chassis='3' port='0x12'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <alias name='pci.3'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     </controller>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <controller type='pci' index='4' model='pcie-root-port'>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <model name='pcie-root-port'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <target chassis='4' port='0x13'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <alias name='pci.4'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     </controller>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <controller type='pci' index='5' model='pcie-root-port'>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <model name='pcie-root-port'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <target chassis='5' port='0x14'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <alias name='pci.5'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     </controller>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <controller type='pci' index='6' model='pcie-root-port'>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <model name='pcie-root-port'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <target chassis='6' port='0x15'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <alias name='pci.6'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     </controller>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <controller type='pci' index='7' model='pcie-root-port'>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <model name='pcie-root-port'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <target chassis='7' port='0x16'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <alias name='pci.7'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     </controller>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <controller type='pci' index='8' model='pcie-root-port'>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <model name='pcie-root-port'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <target chassis='8' port='0x17'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <alias name='pci.8'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     </controller>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <controller type='pci' index='9' model='pcie-root-port'>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <model name='pcie-root-port'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <target chassis='9' port='0x18'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <alias name='pci.9'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     </controller>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <controller type='pci' index='10' model='pcie-root-port'>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <model name='pcie-root-port'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <target chassis='10' port='0x19'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <alias name='pci.10'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     </controller>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <controller type='pci' index='11' model='pcie-root-port'>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <model name='pcie-root-port'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <target chassis='11' port='0x1a'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <alias name='pci.11'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     </controller>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <controller type='pci' index='12' model='pcie-root-port'>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <model name='pcie-root-port'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <target chassis='12' port='0x1b'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <alias name='pci.12'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     </controller>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <controller type='pci' index='13' model='pcie-root-port'>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <model name='pcie-root-port'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <target chassis='13' port='0x1c'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <alias name='pci.13'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     </controller>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <controller type='pci' index='14' model='pcie-root-port'>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <model name='pcie-root-port'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <target chassis='14' port='0x1d'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <alias name='pci.14'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     </controller>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <controller type='pci' index='15' model='pcie-root-port'>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <model name='pcie-root-port'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <target chassis='15' port='0x1e'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <alias name='pci.15'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     </controller>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <controller type='pci' index='16' model='pcie-root-port'>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <model name='pcie-root-port'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <target chassis='16' port='0x1f'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <alias name='pci.16'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     </controller>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <controller type='pci' index='17' model='pcie-root-port'>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <model name='pcie-root-port'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <target chassis='17' port='0x20'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <alias name='pci.17'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     </controller>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <controller type='pci' index='18' model='pcie-root-port'>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <model name='pcie-root-port'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <target chassis='18' port='0x21'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <alias name='pci.18'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     </controller>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <controller type='pci' index='19' model='pcie-root-port'>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <model name='pcie-root-port'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <target chassis='19' port='0x22'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <alias name='pci.19'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     </controller>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <controller type='pci' index='20' model='pcie-root-port'>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <model name='pcie-root-port'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <target chassis='20' port='0x23'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <alias name='pci.20'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     </controller>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <controller type='pci' index='21' model='pcie-root-port'>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <model name='pcie-root-port'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <target chassis='21' port='0x24'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <alias name='pci.21'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     </controller>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <controller type='pci' index='22' model='pcie-root-port'>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <model name='pcie-root-port'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <target chassis='22' port='0x25'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <alias name='pci.22'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     </controller>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <controller type='pci' index='23' model='pcie-root-port'>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <model name='pcie-root-port'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <target chassis='23' port='0x26'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <alias name='pci.23'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     </controller>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <controller type='pci' index='24' model='pcie-root-port'>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <model name='pcie-root-port'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <target chassis='24' port='0x27'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <alias name='pci.24'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     </controller>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <controller type='pci' index='25' model='pcie-root-port'>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <model name='pcie-root-port'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <target chassis='25' port='0x28'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <alias name='pci.25'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     </controller>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <model name='pcie-pci-bridge'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <alias name='pci.26'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     </controller>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <controller type='usb' index='0' model='piix3-uhci'>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <alias name='usb'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     </controller>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <controller type='sata' index='0'>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <alias name='ide'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     </controller>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <interface type='ethernet'>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <mac address='fa:16:3e:48:06:d8'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <target dev='tap2a804e48-64'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <model type='virtio'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <driver name='vhost' rx_queue_size='512'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <mtu size='1442'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <alias name='net0'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     </interface>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <serial type='pty'>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <source path='/dev/pts/0'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <log file='/var/lib/nova/instances/86aa2fb7-c532-46b4-a02b-8070608dfe6b/console.log' append='off'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <target type='isa-serial' port='0'>
Jan 20 14:42:07 compute-0 nova_compute[250018]:         <model name='isa-serial'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       </target>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <alias name='serial0'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     </serial>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <console type='pty' tty='/dev/pts/0'>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <source path='/dev/pts/0'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <log file='/var/lib/nova/instances/86aa2fb7-c532-46b4-a02b-8070608dfe6b/console.log' append='off'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <target type='serial' port='0'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <alias name='serial0'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     </console>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <input type='tablet' bus='usb'>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <alias name='input0'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <address type='usb' bus='0' port='1'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     </input>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <input type='mouse' bus='ps2'>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <alias name='input1'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     </input>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <input type='keyboard' bus='ps2'>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <alias name='input2'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     </input>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <listen type='address' address='::0'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     </graphics>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <audio id='1' type='none'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <video>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <model type='virtio' heads='1' primary='yes'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <alias name='video0'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     </video>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <watchdog model='itco' action='reset'>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <alias name='watchdog0'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     </watchdog>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <memballoon model='virtio'>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <stats period='10'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <alias name='balloon0'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     </memballoon>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <rng model='virtio'>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <backend model='random'>/dev/urandom</backend>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <alias name='rng0'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     </rng>
Jan 20 14:42:07 compute-0 nova_compute[250018]:   </devices>
Jan 20 14:42:07 compute-0 nova_compute[250018]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <label>system_u:system_r:svirt_t:s0:c89,c251</label>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c89,c251</imagelabel>
Jan 20 14:42:07 compute-0 nova_compute[250018]:   </seclabel>
Jan 20 14:42:07 compute-0 nova_compute[250018]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <label>+107:+107</label>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <imagelabel>+107:+107</imagelabel>
Jan 20 14:42:07 compute-0 nova_compute[250018]:   </seclabel>
Jan 20 14:42:07 compute-0 nova_compute[250018]: </domain>
Jan 20 14:42:07 compute-0 nova_compute[250018]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Jan 20 14:42:07 compute-0 nova_compute[250018]: 2026-01-20 14:42:07.860 250022 INFO nova.virt.libvirt.driver [None req-1d843a85-989a-41f1-a0de-e173c8b549a8 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Successfully detached device tape2648ead-71 from instance 86aa2fb7-c532-46b4-a02b-8070608dfe6b from the live domain config.
Jan 20 14:42:07 compute-0 nova_compute[250018]: 2026-01-20 14:42:07.860 250022 DEBUG nova.virt.libvirt.vif [None req-1d843a85-989a-41f1-a0de-e173c8b549a8 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:40:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-857473992',display_name='tempest-tempest.common.compute-instance-857473992',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-857473992',id=69,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDk9SkW5N7MhrGaZslG18EJ7xoBof9PQa4upjUw+XxfbO5rNOjJYMJtJMRGPfgbl1pwAZZD7LHjNNMRFKVo+T+C8Rnr+HXWsPYQmvPGwjjZ++NXvRdqES1LIbRDiwaFMJQ==',key_name='tempest-keypair-1970360297',keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:41:04Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e3f93fd4b2154dda9f38e62334904303',ramdisk_id='',reservation_id='r-2xutoy4e',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-305746947',owner_user_name='tempest-AttachInterfacesTestJSON-305746947-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T14:41:04Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c8a9fb458d27434495a77a94827b6097',uuid=86aa2fb7-c532-46b4-a02b-8070608dfe6b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e2648ead-7162-4661-94e1-755faa8f1fd1", "address": "fa:16:3e:80:35:48", "network": {"id": "fc21b99b-4e34-422c-be05-0a440009dac4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-808285772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3f93fd4b2154dda9f38e62334904303", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape2648ead-71", "ovs_interfaceid": "e2648ead-7162-4661-94e1-755faa8f1fd1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 14:42:07 compute-0 nova_compute[250018]: 2026-01-20 14:42:07.861 250022 DEBUG nova.network.os_vif_util [None req-1d843a85-989a-41f1-a0de-e173c8b549a8 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Converting VIF {"id": "e2648ead-7162-4661-94e1-755faa8f1fd1", "address": "fa:16:3e:80:35:48", "network": {"id": "fc21b99b-4e34-422c-be05-0a440009dac4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-808285772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3f93fd4b2154dda9f38e62334904303", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape2648ead-71", "ovs_interfaceid": "e2648ead-7162-4661-94e1-755faa8f1fd1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:42:07 compute-0 nova_compute[250018]: 2026-01-20 14:42:07.861 250022 DEBUG nova.network.os_vif_util [None req-1d843a85-989a-41f1-a0de-e173c8b549a8 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:80:35:48,bridge_name='br-int',has_traffic_filtering=True,id=e2648ead-7162-4661-94e1-755faa8f1fd1,network=Network(fc21b99b-4e34-422c-be05-0a440009dac4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tape2648ead-71') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:42:07 compute-0 nova_compute[250018]: 2026-01-20 14:42:07.861 250022 DEBUG os_vif [None req-1d843a85-989a-41f1-a0de-e173c8b549a8 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:80:35:48,bridge_name='br-int',has_traffic_filtering=True,id=e2648ead-7162-4661-94e1-755faa8f1fd1,network=Network(fc21b99b-4e34-422c-be05-0a440009dac4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tape2648ead-71') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 14:42:07 compute-0 nova_compute[250018]: 2026-01-20 14:42:07.862 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:07 compute-0 nova_compute[250018]: 2026-01-20 14:42:07.863 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape2648ead-71, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:42:07 compute-0 nova_compute[250018]: 2026-01-20 14:42:07.864 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:07 compute-0 nova_compute[250018]: 2026-01-20 14:42:07.865 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:07 compute-0 nova_compute[250018]: 2026-01-20 14:42:07.866 250022 INFO os_vif [None req-1d843a85-989a-41f1-a0de-e173c8b549a8 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:80:35:48,bridge_name='br-int',has_traffic_filtering=True,id=e2648ead-7162-4661-94e1-755faa8f1fd1,network=Network(fc21b99b-4e34-422c-be05-0a440009dac4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tape2648ead-71')
Jan 20 14:42:07 compute-0 nova_compute[250018]: 2026-01-20 14:42:07.867 250022 DEBUG nova.virt.libvirt.guest [None req-1d843a85-989a-41f1-a0de-e173c8b549a8 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 14:42:07 compute-0 nova_compute[250018]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:   <nova:name>tempest-tempest.common.compute-instance-857473992</nova:name>
Jan 20 14:42:07 compute-0 nova_compute[250018]:   <nova:creationTime>2026-01-20 14:42:07</nova:creationTime>
Jan 20 14:42:07 compute-0 nova_compute[250018]:   <nova:flavor name="m1.nano">
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <nova:memory>128</nova:memory>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <nova:disk>1</nova:disk>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <nova:swap>0</nova:swap>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <nova:ephemeral>0</nova:ephemeral>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <nova:vcpus>1</nova:vcpus>
Jan 20 14:42:07 compute-0 nova_compute[250018]:   </nova:flavor>
Jan 20 14:42:07 compute-0 nova_compute[250018]:   <nova:owner>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <nova:user uuid="c8a9fb458d27434495a77a94827b6097">tempest-AttachInterfacesTestJSON-305746947-project-member</nova:user>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <nova:project uuid="e3f93fd4b2154dda9f38e62334904303">tempest-AttachInterfacesTestJSON-305746947</nova:project>
Jan 20 14:42:07 compute-0 nova_compute[250018]:   </nova:owner>
Jan 20 14:42:07 compute-0 nova_compute[250018]:   <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:   <nova:ports>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     <nova:port uuid="2a804e48-646b-4b9e-8a59-155024ec39ac">
Jan 20 14:42:07 compute-0 nova_compute[250018]:       <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Jan 20 14:42:07 compute-0 nova_compute[250018]:     </nova:port>
Jan 20 14:42:07 compute-0 nova_compute[250018]:   </nova:ports>
Jan 20 14:42:07 compute-0 nova_compute[250018]: </nova:instance>
Jan 20 14:42:07 compute-0 nova_compute[250018]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Jan 20 14:42:07 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:07.867 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[89d07028-7d07-4803-ab30-7298d93266ec]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:42:07 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:07.895 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[ca987b8e-598e-4c09-9545-12f4c49cc5ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:42:07 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:07.898 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[0f826906-7e4c-4bbf-aca0-2c60dd8c4e85]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:42:07 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:07.923 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[42d4c6e3-97d0-46a3-bb2f-a7167c753e80]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:42:07 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:42:07 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:42:07 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:42:07.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:42:07 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:07.940 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[1be11adf-504b-432d-ac50-a986f32b55ef]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfc21b99b-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b1:5b:d2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 70], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 600436, 'reachable_time': 19501, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 294937, 'error': None, 'target': 'ovnmeta-fc21b99b-4e34-422c-be05-0a440009dac4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:42:07 compute-0 nova_compute[250018]: 2026-01-20 14:42:07.955 250022 DEBUG oslo_concurrency.lockutils [None req-3afb6d30-acbf-4973-a945-53b1950deb1a ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Acquiring lock "1951432b-2c0c-4d1b-90df-d94dcf9fc32e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:42:07 compute-0 nova_compute[250018]: 2026-01-20 14:42:07.955 250022 DEBUG oslo_concurrency.lockutils [None req-3afb6d30-acbf-4973-a945-53b1950deb1a ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Lock "1951432b-2c0c-4d1b-90df-d94dcf9fc32e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:42:07 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:07.956 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[20ae3081-a6c4-4988-b591-1b978d917b06]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapfc21b99b-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 600447, 'tstamp': 600447}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 294938, 'error': None, 'target': 'ovnmeta-fc21b99b-4e34-422c-be05-0a440009dac4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfc21b99b-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 600450, 'tstamp': 600450}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 294938, 'error': None, 'target': 'ovnmeta-fc21b99b-4e34-422c-be05-0a440009dac4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:42:07 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:07.958 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfc21b99b-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:42:07 compute-0 nova_compute[250018]: 2026-01-20 14:42:07.960 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:07 compute-0 nova_compute[250018]: 2026-01-20 14:42:07.961 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:07 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:07.962 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfc21b99b-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:42:07 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:07.962 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:42:07 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:07.962 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfc21b99b-40, col_values=(('external_ids', {'iface-id': '583df905-1d9f-49c1-b209-4b7fad1599f6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:42:07 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:07.963 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:42:07 compute-0 nova_compute[250018]: 2026-01-20 14:42:07.973 250022 DEBUG nova.compute.manager [None req-3afb6d30-acbf-4973-a945-53b1950deb1a ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] [instance: 1951432b-2c0c-4d1b-90df-d94dcf9fc32e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 14:42:08 compute-0 nova_compute[250018]: 2026-01-20 14:42:08.102 250022 DEBUG oslo_concurrency.lockutils [None req-3afb6d30-acbf-4973-a945-53b1950deb1a ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:42:08 compute-0 nova_compute[250018]: 2026-01-20 14:42:08.103 250022 DEBUG oslo_concurrency.lockutils [None req-3afb6d30-acbf-4973-a945-53b1950deb1a ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:42:08 compute-0 nova_compute[250018]: 2026-01-20 14:42:08.109 250022 DEBUG nova.virt.hardware [None req-3afb6d30-acbf-4973-a945-53b1950deb1a ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 14:42:08 compute-0 nova_compute[250018]: 2026-01-20 14:42:08.109 250022 INFO nova.compute.claims [None req-3afb6d30-acbf-4973-a945-53b1950deb1a ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] [instance: 1951432b-2c0c-4d1b-90df-d94dcf9fc32e] Claim successful on node compute-0.ctlplane.example.com
Jan 20 14:42:08 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:42:08 compute-0 ovn_controller[148666]: 2026-01-20T14:42:08Z|00228|binding|INFO|Releasing lport 583df905-1d9f-49c1-b209-4b7fad1599f6 from this chassis (sb_readonly=0)
Jan 20 14:42:08 compute-0 nova_compute[250018]: 2026-01-20 14:42:08.252 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:08 compute-0 nova_compute[250018]: 2026-01-20 14:42:08.352 250022 DEBUG oslo_concurrency.processutils [None req-3afb6d30-acbf-4973-a945-53b1950deb1a ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:42:08 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:42:08 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:42:08 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:42:08.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:42:08 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:42:08 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2832464759' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:42:08 compute-0 nova_compute[250018]: 2026-01-20 14:42:08.778 250022 DEBUG oslo_concurrency.processutils [None req-3afb6d30-acbf-4973-a945-53b1950deb1a ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:42:08 compute-0 nova_compute[250018]: 2026-01-20 14:42:08.784 250022 DEBUG nova.compute.provider_tree [None req-3afb6d30-acbf-4973-a945-53b1950deb1a ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:42:08 compute-0 nova_compute[250018]: 2026-01-20 14:42:08.831 250022 DEBUG nova.scheduler.client.report [None req-3afb6d30-acbf-4973-a945-53b1950deb1a ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:42:08 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/2832464759' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:42:08 compute-0 nova_compute[250018]: 2026-01-20 14:42:08.896 250022 DEBUG oslo_concurrency.lockutils [None req-3afb6d30-acbf-4973-a945-53b1950deb1a ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.794s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:42:08 compute-0 nova_compute[250018]: 2026-01-20 14:42:08.897 250022 DEBUG nova.compute.manager [None req-3afb6d30-acbf-4973-a945-53b1950deb1a ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] [instance: 1951432b-2c0c-4d1b-90df-d94dcf9fc32e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 14:42:08 compute-0 nova_compute[250018]: 2026-01-20 14:42:08.955 250022 DEBUG nova.compute.manager [None req-3afb6d30-acbf-4973-a945-53b1950deb1a ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] [instance: 1951432b-2c0c-4d1b-90df-d94dcf9fc32e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 20 14:42:08 compute-0 nova_compute[250018]: 2026-01-20 14:42:08.955 250022 DEBUG nova.network.neutron [None req-3afb6d30-acbf-4973-a945-53b1950deb1a ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] [instance: 1951432b-2c0c-4d1b-90df-d94dcf9fc32e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 14:42:09 compute-0 nova_compute[250018]: 2026-01-20 14:42:09.002 250022 INFO nova.virt.libvirt.driver [None req-3afb6d30-acbf-4973-a945-53b1950deb1a ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] [instance: 1951432b-2c0c-4d1b-90df-d94dcf9fc32e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 14:42:09 compute-0 nova_compute[250018]: 2026-01-20 14:42:09.042 250022 DEBUG nova.compute.manager [None req-3afb6d30-acbf-4973-a945-53b1950deb1a ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] [instance: 1951432b-2c0c-4d1b-90df-d94dcf9fc32e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 14:42:09 compute-0 nova_compute[250018]: 2026-01-20 14:42:09.304 250022 DEBUG nova.compute.manager [None req-3afb6d30-acbf-4973-a945-53b1950deb1a ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] [instance: 1951432b-2c0c-4d1b-90df-d94dcf9fc32e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 14:42:09 compute-0 nova_compute[250018]: 2026-01-20 14:42:09.305 250022 DEBUG nova.virt.libvirt.driver [None req-3afb6d30-acbf-4973-a945-53b1950deb1a ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] [instance: 1951432b-2c0c-4d1b-90df-d94dcf9fc32e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 14:42:09 compute-0 nova_compute[250018]: 2026-01-20 14:42:09.306 250022 INFO nova.virt.libvirt.driver [None req-3afb6d30-acbf-4973-a945-53b1950deb1a ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] [instance: 1951432b-2c0c-4d1b-90df-d94dcf9fc32e] Creating image(s)
Jan 20 14:42:09 compute-0 nova_compute[250018]: 2026-01-20 14:42:09.346 250022 DEBUG nova.storage.rbd_utils [None req-3afb6d30-acbf-4973-a945-53b1950deb1a ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] rbd image 1951432b-2c0c-4d1b-90df-d94dcf9fc32e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:42:09 compute-0 nova_compute[250018]: 2026-01-20 14:42:09.379 250022 DEBUG nova.storage.rbd_utils [None req-3afb6d30-acbf-4973-a945-53b1950deb1a ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] rbd image 1951432b-2c0c-4d1b-90df-d94dcf9fc32e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:42:09 compute-0 nova_compute[250018]: 2026-01-20 14:42:09.410 250022 DEBUG nova.storage.rbd_utils [None req-3afb6d30-acbf-4973-a945-53b1950deb1a ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] rbd image 1951432b-2c0c-4d1b-90df-d94dcf9fc32e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:42:09 compute-0 nova_compute[250018]: 2026-01-20 14:42:09.414 250022 DEBUG oslo_concurrency.processutils [None req-3afb6d30-acbf-4973-a945-53b1950deb1a ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:42:09 compute-0 nova_compute[250018]: 2026-01-20 14:42:09.506 250022 DEBUG oslo_concurrency.processutils [None req-3afb6d30-acbf-4973-a945-53b1950deb1a ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:42:09 compute-0 nova_compute[250018]: 2026-01-20 14:42:09.507 250022 DEBUG oslo_concurrency.lockutils [None req-3afb6d30-acbf-4973-a945-53b1950deb1a ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Acquiring lock "82d5c1918fd7c974214c7a48c1793a7a82560462" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:42:09 compute-0 nova_compute[250018]: 2026-01-20 14:42:09.508 250022 DEBUG oslo_concurrency.lockutils [None req-3afb6d30-acbf-4973-a945-53b1950deb1a ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:42:09 compute-0 nova_compute[250018]: 2026-01-20 14:42:09.508 250022 DEBUG oslo_concurrency.lockutils [None req-3afb6d30-acbf-4973-a945-53b1950deb1a ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:42:09 compute-0 nova_compute[250018]: 2026-01-20 14:42:09.540 250022 DEBUG nova.storage.rbd_utils [None req-3afb6d30-acbf-4973-a945-53b1950deb1a ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] rbd image 1951432b-2c0c-4d1b-90df-d94dcf9fc32e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:42:09 compute-0 nova_compute[250018]: 2026-01-20 14:42:09.545 250022 DEBUG oslo_concurrency.processutils [None req-3afb6d30-acbf-4973-a945-53b1950deb1a ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 1951432b-2c0c-4d1b-90df-d94dcf9fc32e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:42:09 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1614: 321 pgs: 321 active+clean; 233 MiB data, 722 MiB used, 20 GiB / 21 GiB avail; 15 KiB/s rd, 1.0 MiB/s wr, 24 op/s
Jan 20 14:42:09 compute-0 ceph-mon[74360]: pgmap v1614: 321 pgs: 321 active+clean; 233 MiB data, 722 MiB used, 20 GiB / 21 GiB avail; 15 KiB/s rd, 1.0 MiB/s wr, 24 op/s
Jan 20 14:42:09 compute-0 nova_compute[250018]: 2026-01-20 14:42:09.846 250022 DEBUG oslo_concurrency.processutils [None req-3afb6d30-acbf-4973-a945-53b1950deb1a ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 1951432b-2c0c-4d1b-90df-d94dcf9fc32e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.301s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:42:09 compute-0 nova_compute[250018]: 2026-01-20 14:42:09.921 250022 DEBUG nova.storage.rbd_utils [None req-3afb6d30-acbf-4973-a945-53b1950deb1a ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] resizing rbd image 1951432b-2c0c-4d1b-90df-d94dcf9fc32e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 20 14:42:09 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:42:09 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:42:09 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:42:09.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:42:10 compute-0 nova_compute[250018]: 2026-01-20 14:42:10.031 250022 DEBUG nova.objects.instance [None req-3afb6d30-acbf-4973-a945-53b1950deb1a ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Lazy-loading 'migration_context' on Instance uuid 1951432b-2c0c-4d1b-90df-d94dcf9fc32e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:42:10 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:42:10 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:42:10 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:42:10.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:42:11 compute-0 nova_compute[250018]: 2026-01-20 14:42:11.109 250022 DEBUG nova.compute.manager [req-5431cd9b-fa29-4672-9749-e33ebaefd75d req-61baa803-815f-4eab-9ed1-d0fc58b9ceed 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 86aa2fb7-c532-46b4-a02b-8070608dfe6b] Received event network-vif-unplugged-e2648ead-7162-4661-94e1-755faa8f1fd1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:42:11 compute-0 nova_compute[250018]: 2026-01-20 14:42:11.110 250022 DEBUG oslo_concurrency.lockutils [req-5431cd9b-fa29-4672-9749-e33ebaefd75d req-61baa803-815f-4eab-9ed1-d0fc58b9ceed 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "86aa2fb7-c532-46b4-a02b-8070608dfe6b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:42:11 compute-0 nova_compute[250018]: 2026-01-20 14:42:11.110 250022 DEBUG oslo_concurrency.lockutils [req-5431cd9b-fa29-4672-9749-e33ebaefd75d req-61baa803-815f-4eab-9ed1-d0fc58b9ceed 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "86aa2fb7-c532-46b4-a02b-8070608dfe6b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:42:11 compute-0 nova_compute[250018]: 2026-01-20 14:42:11.111 250022 DEBUG oslo_concurrency.lockutils [req-5431cd9b-fa29-4672-9749-e33ebaefd75d req-61baa803-815f-4eab-9ed1-d0fc58b9ceed 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "86aa2fb7-c532-46b4-a02b-8070608dfe6b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:42:11 compute-0 nova_compute[250018]: 2026-01-20 14:42:11.111 250022 DEBUG nova.compute.manager [req-5431cd9b-fa29-4672-9749-e33ebaefd75d req-61baa803-815f-4eab-9ed1-d0fc58b9ceed 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 86aa2fb7-c532-46b4-a02b-8070608dfe6b] No waiting events found dispatching network-vif-unplugged-e2648ead-7162-4661-94e1-755faa8f1fd1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:42:11 compute-0 nova_compute[250018]: 2026-01-20 14:42:11.112 250022 WARNING nova.compute.manager [req-5431cd9b-fa29-4672-9749-e33ebaefd75d req-61baa803-815f-4eab-9ed1-d0fc58b9ceed 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 86aa2fb7-c532-46b4-a02b-8070608dfe6b] Received unexpected event network-vif-unplugged-e2648ead-7162-4661-94e1-755faa8f1fd1 for instance with vm_state active and task_state None.
Jan 20 14:42:11 compute-0 nova_compute[250018]: 2026-01-20 14:42:11.112 250022 DEBUG nova.compute.manager [req-5431cd9b-fa29-4672-9749-e33ebaefd75d req-61baa803-815f-4eab-9ed1-d0fc58b9ceed 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 86aa2fb7-c532-46b4-a02b-8070608dfe6b] Received event network-vif-plugged-e2648ead-7162-4661-94e1-755faa8f1fd1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:42:11 compute-0 nova_compute[250018]: 2026-01-20 14:42:11.112 250022 DEBUG oslo_concurrency.lockutils [req-5431cd9b-fa29-4672-9749-e33ebaefd75d req-61baa803-815f-4eab-9ed1-d0fc58b9ceed 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "86aa2fb7-c532-46b4-a02b-8070608dfe6b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:42:11 compute-0 nova_compute[250018]: 2026-01-20 14:42:11.112 250022 DEBUG oslo_concurrency.lockutils [req-5431cd9b-fa29-4672-9749-e33ebaefd75d req-61baa803-815f-4eab-9ed1-d0fc58b9ceed 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "86aa2fb7-c532-46b4-a02b-8070608dfe6b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:42:11 compute-0 nova_compute[250018]: 2026-01-20 14:42:11.113 250022 DEBUG oslo_concurrency.lockutils [req-5431cd9b-fa29-4672-9749-e33ebaefd75d req-61baa803-815f-4eab-9ed1-d0fc58b9ceed 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "86aa2fb7-c532-46b4-a02b-8070608dfe6b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:42:11 compute-0 nova_compute[250018]: 2026-01-20 14:42:11.113 250022 DEBUG nova.compute.manager [req-5431cd9b-fa29-4672-9749-e33ebaefd75d req-61baa803-815f-4eab-9ed1-d0fc58b9ceed 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 86aa2fb7-c532-46b4-a02b-8070608dfe6b] No waiting events found dispatching network-vif-plugged-e2648ead-7162-4661-94e1-755faa8f1fd1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:42:11 compute-0 nova_compute[250018]: 2026-01-20 14:42:11.113 250022 WARNING nova.compute.manager [req-5431cd9b-fa29-4672-9749-e33ebaefd75d req-61baa803-815f-4eab-9ed1-d0fc58b9ceed 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 86aa2fb7-c532-46b4-a02b-8070608dfe6b] Received unexpected event network-vif-plugged-e2648ead-7162-4661-94e1-755faa8f1fd1 for instance with vm_state active and task_state None.
Jan 20 14:42:11 compute-0 nova_compute[250018]: 2026-01-20 14:42:11.119 250022 DEBUG nova.virt.libvirt.driver [None req-3afb6d30-acbf-4973-a945-53b1950deb1a ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] [instance: 1951432b-2c0c-4d1b-90df-d94dcf9fc32e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 14:42:11 compute-0 nova_compute[250018]: 2026-01-20 14:42:11.119 250022 DEBUG nova.virt.libvirt.driver [None req-3afb6d30-acbf-4973-a945-53b1950deb1a ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] [instance: 1951432b-2c0c-4d1b-90df-d94dcf9fc32e] Ensure instance console log exists: /var/lib/nova/instances/1951432b-2c0c-4d1b-90df-d94dcf9fc32e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 14:42:11 compute-0 nova_compute[250018]: 2026-01-20 14:42:11.120 250022 DEBUG oslo_concurrency.lockutils [None req-3afb6d30-acbf-4973-a945-53b1950deb1a ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:42:11 compute-0 nova_compute[250018]: 2026-01-20 14:42:11.120 250022 DEBUG oslo_concurrency.lockutils [None req-3afb6d30-acbf-4973-a945-53b1950deb1a ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:42:11 compute-0 nova_compute[250018]: 2026-01-20 14:42:11.120 250022 DEBUG oslo_concurrency.lockutils [None req-3afb6d30-acbf-4973-a945-53b1950deb1a ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:42:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 14:42:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:42:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 20 14:42:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:42:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.004921707786618277 of space, bias 1.0, pg target 1.476512335985483 quantized to 32 (current 32)
Jan 20 14:42:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:42:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4344349060115393e-05 quantized to 32 (current 32)
Jan 20 14:42:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:42:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:42:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:42:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Jan 20 14:42:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:42:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 32)
Jan 20 14:42:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:42:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:42:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:42:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021737739624046157 quantized to 32 (current 32)
Jan 20 14:42:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:42:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Jan 20 14:42:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:42:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:42:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:42:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Jan 20 14:42:11 compute-0 nova_compute[250018]: 2026-01-20 14:42:11.608 250022 DEBUG nova.policy [None req-3afb6d30-acbf-4973-a945-53b1950deb1a ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ff99fc8eda0640928c6e82981dacb266', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4b95747114ab4043b93a260387199c91', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 20 14:42:11 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1615: 321 pgs: 321 active+clean; 268 MiB data, 731 MiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 2.3 MiB/s wr, 30 op/s
Jan 20 14:42:11 compute-0 ceph-mon[74360]: pgmap v1615: 321 pgs: 321 active+clean; 268 MiB data, 731 MiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 2.3 MiB/s wr, 30 op/s
Jan 20 14:42:11 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:42:11 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:42:11 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:42:11.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:42:12 compute-0 nova_compute[250018]: 2026-01-20 14:42:12.067 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:12 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:42:12 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:42:12 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:42:12.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:42:12 compute-0 nova_compute[250018]: 2026-01-20 14:42:12.865 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:13 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:42:13 compute-0 ceph-mon[74360]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #78. Immutable memtables: 0.
Jan 20 14:42:13 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:42:13.155929) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 14:42:13 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:856] [default] [JOB 43] Flushing memtable with next log file: 78
Jan 20 14:42:13 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768920133156018, "job": 43, "event": "flush_started", "num_memtables": 1, "num_entries": 303, "num_deletes": 255, "total_data_size": 80466, "memory_usage": 86688, "flush_reason": "Manual Compaction"}
Jan 20 14:42:13 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:885] [default] [JOB 43] Level-0 flush table #79: started
Jan 20 14:42:13 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768920133159574, "cf_name": "default", "job": 43, "event": "table_file_creation", "file_number": 79, "file_size": 79974, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 36455, "largest_seqno": 36757, "table_properties": {"data_size": 78047, "index_size": 155, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 4925, "raw_average_key_size": 17, "raw_value_size": 74148, "raw_average_value_size": 264, "num_data_blocks": 7, "num_entries": 280, "num_filter_entries": 280, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768920127, "oldest_key_time": 1768920127, "file_creation_time": 1768920133, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1688fb6f-f397-4aac-a0e8-874ff91d97a7", "db_session_id": "5V3N2TVXYZBCXP55EZHK", "orig_file_number": 79, "seqno_to_time_mapping": "N/A"}}
Jan 20 14:42:13 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 43] Flush lasted 3695 microseconds, and 1411 cpu microseconds.
Jan 20 14:42:13 compute-0 ceph-mon[74360]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 14:42:13 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:42:13.159637) [db/flush_job.cc:967] [default] [JOB 43] Level-0 flush table #79: 79974 bytes OK
Jan 20 14:42:13 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:42:13.159657) [db/memtable_list.cc:519] [default] Level-0 commit table #79 started
Jan 20 14:42:13 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:42:13.161302) [db/memtable_list.cc:722] [default] Level-0 commit table #79: memtable #1 done
Jan 20 14:42:13 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:42:13.161346) EVENT_LOG_v1 {"time_micros": 1768920133161338, "job": 43, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 14:42:13 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:42:13.161368) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 14:42:13 compute-0 ceph-mon[74360]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 43] Try to delete WAL files size 78270, prev total WAL file size 78270, number of live WAL files 2.
Jan 20 14:42:13 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000075.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:42:13 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:42:13.161859) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031303131' seq:72057594037927935, type:22 .. '6C6F676D0031323632' seq:0, type:0; will stop at (end)
Jan 20 14:42:13 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 44] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 14:42:13 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 43 Base level 0, inputs: [79(78KB)], [77(9615KB)]
Jan 20 14:42:13 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768920133161901, "job": 44, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [79], "files_L6": [77], "score": -1, "input_data_size": 9925786, "oldest_snapshot_seqno": -1}
Jan 20 14:42:13 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 44] Generated table #80: 6219 keys, 9792293 bytes, temperature: kUnknown
Jan 20 14:42:13 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768920133212565, "cf_name": "default", "job": 44, "event": "table_file_creation", "file_number": 80, "file_size": 9792293, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9750464, "index_size": 25207, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15557, "raw_key_size": 160854, "raw_average_key_size": 25, "raw_value_size": 9638541, "raw_average_value_size": 1549, "num_data_blocks": 1005, "num_entries": 6219, "num_filter_entries": 6219, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768917305, "oldest_key_time": 0, "file_creation_time": 1768920133, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1688fb6f-f397-4aac-a0e8-874ff91d97a7", "db_session_id": "5V3N2TVXYZBCXP55EZHK", "orig_file_number": 80, "seqno_to_time_mapping": "N/A"}}
Jan 20 14:42:13 compute-0 ceph-mon[74360]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 14:42:13 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:42:13.212796) [db/compaction/compaction_job.cc:1663] [default] [JOB 44] Compacted 1@0 + 1@6 files to L6 => 9792293 bytes
Jan 20 14:42:13 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:42:13.214423) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 195.6 rd, 193.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 9.4 +0.0 blob) out(9.3 +0.0 blob), read-write-amplify(246.6) write-amplify(122.4) OK, records in: 6737, records dropped: 518 output_compression: NoCompression
Jan 20 14:42:13 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:42:13.214441) EVENT_LOG_v1 {"time_micros": 1768920133214432, "job": 44, "event": "compaction_finished", "compaction_time_micros": 50735, "compaction_time_cpu_micros": 21368, "output_level": 6, "num_output_files": 1, "total_output_size": 9792293, "num_input_records": 6737, "num_output_records": 6219, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 14:42:13 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000079.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:42:13 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768920133214560, "job": 44, "event": "table_file_deletion", "file_number": 79}
Jan 20 14:42:13 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000077.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:42:13 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768920133216567, "job": 44, "event": "table_file_deletion", "file_number": 77}
Jan 20 14:42:13 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:42:13.161780) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:42:13 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:42:13.216632) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:42:13 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:42:13.216638) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:42:13 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:42:13.216640) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:42:13 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:42:13.216642) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:42:13 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:42:13.216643) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:42:13 compute-0 nova_compute[250018]: 2026-01-20 14:42:13.395 250022 DEBUG oslo_concurrency.lockutils [None req-1d843a85-989a-41f1-a0de-e173c8b549a8 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Acquiring lock "refresh_cache-86aa2fb7-c532-46b4-a02b-8070608dfe6b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:42:13 compute-0 nova_compute[250018]: 2026-01-20 14:42:13.396 250022 DEBUG oslo_concurrency.lockutils [None req-1d843a85-989a-41f1-a0de-e173c8b549a8 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Acquired lock "refresh_cache-86aa2fb7-c532-46b4-a02b-8070608dfe6b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:42:13 compute-0 nova_compute[250018]: 2026-01-20 14:42:13.397 250022 DEBUG nova.network.neutron [None req-1d843a85-989a-41f1-a0de-e173c8b549a8 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: 86aa2fb7-c532-46b4-a02b-8070608dfe6b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:42:13 compute-0 nova_compute[250018]: 2026-01-20 14:42:13.610 250022 DEBUG nova.network.neutron [None req-3afb6d30-acbf-4973-a945-53b1950deb1a ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] [instance: 1951432b-2c0c-4d1b-90df-d94dcf9fc32e] Successfully created port: f6e1030d-5508-4e83-92ce-0a723132eb45 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 20 14:42:13 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 20 14:42:13 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2620006287' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:42:13 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 20 14:42:13 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2620006287' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:42:13 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1616: 321 pgs: 321 active+clean; 286 MiB data, 740 MiB used, 20 GiB / 21 GiB avail; 18 KiB/s rd, 3.1 MiB/s wr, 33 op/s
Jan 20 14:42:13 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:42:13 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:42:13 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:42:13.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:42:14 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/2810974689' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:42:14 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/2620006287' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:42:14 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/2620006287' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:42:14 compute-0 ceph-mon[74360]: pgmap v1616: 321 pgs: 321 active+clean; 286 MiB data, 740 MiB used, 20 GiB / 21 GiB avail; 18 KiB/s rd, 3.1 MiB/s wr, 33 op/s
Jan 20 14:42:14 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:42:14 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:42:14 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:42:14.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:42:15 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1617: 321 pgs: 321 active+clean; 307 MiB data, 757 MiB used, 20 GiB / 21 GiB avail; 44 KiB/s rd, 4.0 MiB/s wr, 68 op/s
Jan 20 14:42:15 compute-0 ceph-mon[74360]: pgmap v1617: 321 pgs: 321 active+clean; 307 MiB data, 757 MiB used, 20 GiB / 21 GiB avail; 44 KiB/s rd, 4.0 MiB/s wr, 68 op/s
Jan 20 14:42:15 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:42:15 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:42:15 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:42:15.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:42:16 compute-0 nova_compute[250018]: 2026-01-20 14:42:16.189 250022 DEBUG nova.network.neutron [None req-3afb6d30-acbf-4973-a945-53b1950deb1a ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] [instance: 1951432b-2c0c-4d1b-90df-d94dcf9fc32e] Successfully updated port: f6e1030d-5508-4e83-92ce-0a723132eb45 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 20 14:42:16 compute-0 nova_compute[250018]: 2026-01-20 14:42:16.239 250022 DEBUG oslo_concurrency.lockutils [None req-3afb6d30-acbf-4973-a945-53b1950deb1a ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Acquiring lock "refresh_cache-1951432b-2c0c-4d1b-90df-d94dcf9fc32e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:42:16 compute-0 nova_compute[250018]: 2026-01-20 14:42:16.240 250022 DEBUG oslo_concurrency.lockutils [None req-3afb6d30-acbf-4973-a945-53b1950deb1a ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Acquired lock "refresh_cache-1951432b-2c0c-4d1b-90df-d94dcf9fc32e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:42:16 compute-0 nova_compute[250018]: 2026-01-20 14:42:16.240 250022 DEBUG nova.network.neutron [None req-3afb6d30-acbf-4973-a945-53b1950deb1a ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] [instance: 1951432b-2c0c-4d1b-90df-d94dcf9fc32e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:42:16 compute-0 nova_compute[250018]: 2026-01-20 14:42:16.403 250022 DEBUG nova.compute.manager [req-7f5b271e-90d2-4d10-8db5-7d42d3525769 req-c997855d-6970-4c78-baf3-3100bf818feb 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 1951432b-2c0c-4d1b-90df-d94dcf9fc32e] Received event network-changed-f6e1030d-5508-4e83-92ce-0a723132eb45 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:42:16 compute-0 nova_compute[250018]: 2026-01-20 14:42:16.404 250022 DEBUG nova.compute.manager [req-7f5b271e-90d2-4d10-8db5-7d42d3525769 req-c997855d-6970-4c78-baf3-3100bf818feb 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 1951432b-2c0c-4d1b-90df-d94dcf9fc32e] Refreshing instance network info cache due to event network-changed-f6e1030d-5508-4e83-92ce-0a723132eb45. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:42:16 compute-0 nova_compute[250018]: 2026-01-20 14:42:16.404 250022 DEBUG oslo_concurrency.lockutils [req-7f5b271e-90d2-4d10-8db5-7d42d3525769 req-c997855d-6970-4c78-baf3-3100bf818feb 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-1951432b-2c0c-4d1b-90df-d94dcf9fc32e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:42:16 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:42:16 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:42:16 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:42:16.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:42:16 compute-0 nova_compute[250018]: 2026-01-20 14:42:16.847 250022 DEBUG nova.network.neutron [None req-3afb6d30-acbf-4973-a945-53b1950deb1a ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] [instance: 1951432b-2c0c-4d1b-90df-d94dcf9fc32e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 14:42:17 compute-0 nova_compute[250018]: 2026-01-20 14:42:17.070 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:17 compute-0 nova_compute[250018]: 2026-01-20 14:42:17.376 250022 DEBUG nova.compute.manager [req-dadd2fb8-2123-418b-a3c4-414e028a8c97 req-d8616b3d-53df-4778-b7d2-bf0fddbc2c51 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 86aa2fb7-c532-46b4-a02b-8070608dfe6b] Received event network-changed-2a804e48-646b-4b9e-8a59-155024ec39ac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:42:17 compute-0 nova_compute[250018]: 2026-01-20 14:42:17.376 250022 DEBUG nova.compute.manager [req-dadd2fb8-2123-418b-a3c4-414e028a8c97 req-d8616b3d-53df-4778-b7d2-bf0fddbc2c51 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 86aa2fb7-c532-46b4-a02b-8070608dfe6b] Refreshing instance network info cache due to event network-changed-2a804e48-646b-4b9e-8a59-155024ec39ac. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:42:17 compute-0 nova_compute[250018]: 2026-01-20 14:42:17.377 250022 DEBUG oslo_concurrency.lockutils [req-dadd2fb8-2123-418b-a3c4-414e028a8c97 req-d8616b3d-53df-4778-b7d2-bf0fddbc2c51 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-86aa2fb7-c532-46b4-a02b-8070608dfe6b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:42:17 compute-0 nova_compute[250018]: 2026-01-20 14:42:17.512 250022 INFO nova.network.neutron [None req-1d843a85-989a-41f1-a0de-e173c8b549a8 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: 86aa2fb7-c532-46b4-a02b-8070608dfe6b] Port e2648ead-7162-4661-94e1-755faa8f1fd1 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.
Jan 20 14:42:17 compute-0 nova_compute[250018]: 2026-01-20 14:42:17.513 250022 DEBUG nova.network.neutron [None req-1d843a85-989a-41f1-a0de-e173c8b549a8 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: 86aa2fb7-c532-46b4-a02b-8070608dfe6b] Updating instance_info_cache with network_info: [{"id": "2a804e48-646b-4b9e-8a59-155024ec39ac", "address": "fa:16:3e:48:06:d8", "network": {"id": "fc21b99b-4e34-422c-be05-0a440009dac4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-808285772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3f93fd4b2154dda9f38e62334904303", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a804e48-64", "ovs_interfaceid": "2a804e48-646b-4b9e-8a59-155024ec39ac", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:42:17 compute-0 nova_compute[250018]: 2026-01-20 14:42:17.553 250022 DEBUG oslo_concurrency.lockutils [None req-1d843a85-989a-41f1-a0de-e173c8b549a8 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Releasing lock "refresh_cache-86aa2fb7-c532-46b4-a02b-8070608dfe6b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:42:17 compute-0 nova_compute[250018]: 2026-01-20 14:42:17.555 250022 DEBUG oslo_concurrency.lockutils [req-dadd2fb8-2123-418b-a3c4-414e028a8c97 req-d8616b3d-53df-4778-b7d2-bf0fddbc2c51 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-86aa2fb7-c532-46b4-a02b-8070608dfe6b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:42:17 compute-0 nova_compute[250018]: 2026-01-20 14:42:17.556 250022 DEBUG nova.network.neutron [req-dadd2fb8-2123-418b-a3c4-414e028a8c97 req-d8616b3d-53df-4778-b7d2-bf0fddbc2c51 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 86aa2fb7-c532-46b4-a02b-8070608dfe6b] Refreshing network info cache for port 2a804e48-646b-4b9e-8a59-155024ec39ac _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:42:17 compute-0 nova_compute[250018]: 2026-01-20 14:42:17.600 250022 DEBUG oslo_concurrency.lockutils [None req-1d843a85-989a-41f1-a0de-e173c8b549a8 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Lock "interface-86aa2fb7-c532-46b4-a02b-8070608dfe6b-e2648ead-7162-4661-94e1-755faa8f1fd1" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 12.390s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:42:17 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1618: 321 pgs: 321 active+clean; 324 MiB data, 769 MiB used, 20 GiB / 21 GiB avail; 41 KiB/s rd, 5.0 MiB/s wr, 63 op/s
Jan 20 14:42:17 compute-0 ceph-mon[74360]: pgmap v1618: 321 pgs: 321 active+clean; 324 MiB data, 769 MiB used, 20 GiB / 21 GiB avail; 41 KiB/s rd, 5.0 MiB/s wr, 63 op/s
Jan 20 14:42:17 compute-0 nova_compute[250018]: 2026-01-20 14:42:17.866 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:17 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:42:17 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:42:17 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:42:17.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:42:18 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:42:18 compute-0 podman[295133]: 2026-01-20 14:42:18.460154649 +0000 UTC m=+0.051565848 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 14:42:18 compute-0 podman[295132]: 2026-01-20 14:42:18.48712248 +0000 UTC m=+0.078606561 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2)
Jan 20 14:42:18 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:42:18 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:42:18 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:42:18.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:42:18 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/3967705155' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:42:19 compute-0 nova_compute[250018]: 2026-01-20 14:42:19.567 250022 DEBUG nova.network.neutron [None req-3afb6d30-acbf-4973-a945-53b1950deb1a ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] [instance: 1951432b-2c0c-4d1b-90df-d94dcf9fc32e] Updating instance_info_cache with network_info: [{"id": "f6e1030d-5508-4e83-92ce-0a723132eb45", "address": "fa:16:3e:8d:b9:dd", "network": {"id": "b36e9cab-12c6-4a09-9aab-ef2679d875ba", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-432532406-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b95747114ab4043b93a260387199c91", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf6e1030d-55", "ovs_interfaceid": "f6e1030d-5508-4e83-92ce-0a723132eb45", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:42:19 compute-0 nova_compute[250018]: 2026-01-20 14:42:19.601 250022 DEBUG oslo_concurrency.lockutils [None req-3afb6d30-acbf-4973-a945-53b1950deb1a ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Releasing lock "refresh_cache-1951432b-2c0c-4d1b-90df-d94dcf9fc32e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:42:19 compute-0 nova_compute[250018]: 2026-01-20 14:42:19.602 250022 DEBUG nova.compute.manager [None req-3afb6d30-acbf-4973-a945-53b1950deb1a ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] [instance: 1951432b-2c0c-4d1b-90df-d94dcf9fc32e] Instance network_info: |[{"id": "f6e1030d-5508-4e83-92ce-0a723132eb45", "address": "fa:16:3e:8d:b9:dd", "network": {"id": "b36e9cab-12c6-4a09-9aab-ef2679d875ba", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-432532406-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b95747114ab4043b93a260387199c91", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf6e1030d-55", "ovs_interfaceid": "f6e1030d-5508-4e83-92ce-0a723132eb45", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 20 14:42:19 compute-0 nova_compute[250018]: 2026-01-20 14:42:19.602 250022 DEBUG oslo_concurrency.lockutils [req-7f5b271e-90d2-4d10-8db5-7d42d3525769 req-c997855d-6970-4c78-baf3-3100bf818feb 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-1951432b-2c0c-4d1b-90df-d94dcf9fc32e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:42:19 compute-0 nova_compute[250018]: 2026-01-20 14:42:19.602 250022 DEBUG nova.network.neutron [req-7f5b271e-90d2-4d10-8db5-7d42d3525769 req-c997855d-6970-4c78-baf3-3100bf818feb 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 1951432b-2c0c-4d1b-90df-d94dcf9fc32e] Refreshing network info cache for port f6e1030d-5508-4e83-92ce-0a723132eb45 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:42:19 compute-0 nova_compute[250018]: 2026-01-20 14:42:19.604 250022 DEBUG nova.virt.libvirt.driver [None req-3afb6d30-acbf-4973-a945-53b1950deb1a ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] [instance: 1951432b-2c0c-4d1b-90df-d94dcf9fc32e] Start _get_guest_xml network_info=[{"id": "f6e1030d-5508-4e83-92ce-0a723132eb45", "address": "fa:16:3e:8d:b9:dd", "network": {"id": "b36e9cab-12c6-4a09-9aab-ef2679d875ba", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-432532406-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b95747114ab4043b93a260387199c91", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf6e1030d-55", "ovs_interfaceid": "f6e1030d-5508-4e83-92ce-0a723132eb45", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encrypted': False, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'guest_format': None, 'size': 0, 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 14:42:19 compute-0 nova_compute[250018]: 2026-01-20 14:42:19.609 250022 WARNING nova.virt.libvirt.driver [None req-3afb6d30-acbf-4973-a945-53b1950deb1a ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:42:19 compute-0 nova_compute[250018]: 2026-01-20 14:42:19.615 250022 DEBUG nova.virt.libvirt.host [None req-3afb6d30-acbf-4973-a945-53b1950deb1a ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 14:42:19 compute-0 nova_compute[250018]: 2026-01-20 14:42:19.615 250022 DEBUG nova.virt.libvirt.host [None req-3afb6d30-acbf-4973-a945-53b1950deb1a ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 14:42:19 compute-0 nova_compute[250018]: 2026-01-20 14:42:19.619 250022 DEBUG nova.virt.libvirt.host [None req-3afb6d30-acbf-4973-a945-53b1950deb1a ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 14:42:19 compute-0 nova_compute[250018]: 2026-01-20 14:42:19.620 250022 DEBUG nova.virt.libvirt.host [None req-3afb6d30-acbf-4973-a945-53b1950deb1a ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 14:42:19 compute-0 nova_compute[250018]: 2026-01-20 14:42:19.620 250022 DEBUG nova.virt.libvirt.driver [None req-3afb6d30-acbf-4973-a945-53b1950deb1a ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 14:42:19 compute-0 nova_compute[250018]: 2026-01-20 14:42:19.621 250022 DEBUG nova.virt.hardware [None req-3afb6d30-acbf-4973-a945-53b1950deb1a ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 14:42:19 compute-0 nova_compute[250018]: 2026-01-20 14:42:19.621 250022 DEBUG nova.virt.hardware [None req-3afb6d30-acbf-4973-a945-53b1950deb1a ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 14:42:19 compute-0 nova_compute[250018]: 2026-01-20 14:42:19.621 250022 DEBUG nova.virt.hardware [None req-3afb6d30-acbf-4973-a945-53b1950deb1a ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 14:42:19 compute-0 nova_compute[250018]: 2026-01-20 14:42:19.622 250022 DEBUG nova.virt.hardware [None req-3afb6d30-acbf-4973-a945-53b1950deb1a ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 14:42:19 compute-0 nova_compute[250018]: 2026-01-20 14:42:19.622 250022 DEBUG nova.virt.hardware [None req-3afb6d30-acbf-4973-a945-53b1950deb1a ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 14:42:19 compute-0 nova_compute[250018]: 2026-01-20 14:42:19.622 250022 DEBUG nova.virt.hardware [None req-3afb6d30-acbf-4973-a945-53b1950deb1a ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 14:42:19 compute-0 nova_compute[250018]: 2026-01-20 14:42:19.622 250022 DEBUG nova.virt.hardware [None req-3afb6d30-acbf-4973-a945-53b1950deb1a ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 14:42:19 compute-0 nova_compute[250018]: 2026-01-20 14:42:19.622 250022 DEBUG nova.virt.hardware [None req-3afb6d30-acbf-4973-a945-53b1950deb1a ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 14:42:19 compute-0 nova_compute[250018]: 2026-01-20 14:42:19.623 250022 DEBUG nova.virt.hardware [None req-3afb6d30-acbf-4973-a945-53b1950deb1a ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 14:42:19 compute-0 nova_compute[250018]: 2026-01-20 14:42:19.623 250022 DEBUG nova.virt.hardware [None req-3afb6d30-acbf-4973-a945-53b1950deb1a ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 14:42:19 compute-0 nova_compute[250018]: 2026-01-20 14:42:19.623 250022 DEBUG nova.virt.hardware [None req-3afb6d30-acbf-4973-a945-53b1950deb1a ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 14:42:19 compute-0 nova_compute[250018]: 2026-01-20 14:42:19.625 250022 DEBUG oslo_concurrency.processutils [None req-3afb6d30-acbf-4973-a945-53b1950deb1a ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:42:19 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1619: 321 pgs: 321 active+clean; 344 MiB data, 774 MiB used, 20 GiB / 21 GiB avail; 37 KiB/s rd, 4.9 MiB/s wr, 62 op/s
Jan 20 14:42:19 compute-0 ceph-mon[74360]: pgmap v1619: 321 pgs: 321 active+clean; 344 MiB data, 774 MiB used, 20 GiB / 21 GiB avail; 37 KiB/s rd, 4.9 MiB/s wr, 62 op/s
Jan 20 14:42:19 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:42:19 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:42:19 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:42:19.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:42:20 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:42:20 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3571315860' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:42:20 compute-0 nova_compute[250018]: 2026-01-20 14:42:20.027 250022 DEBUG oslo_concurrency.processutils [None req-3afb6d30-acbf-4973-a945-53b1950deb1a ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.401s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:42:20 compute-0 nova_compute[250018]: 2026-01-20 14:42:20.064 250022 DEBUG nova.storage.rbd_utils [None req-3afb6d30-acbf-4973-a945-53b1950deb1a ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] rbd image 1951432b-2c0c-4d1b-90df-d94dcf9fc32e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:42:20 compute-0 nova_compute[250018]: 2026-01-20 14:42:20.068 250022 DEBUG oslo_concurrency.processutils [None req-3afb6d30-acbf-4973-a945-53b1950deb1a ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:42:20 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:42:20 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/751689202' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:42:20 compute-0 nova_compute[250018]: 2026-01-20 14:42:20.478 250022 DEBUG oslo_concurrency.processutils [None req-3afb6d30-acbf-4973-a945-53b1950deb1a ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.409s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:42:20 compute-0 nova_compute[250018]: 2026-01-20 14:42:20.480 250022 DEBUG nova.virt.libvirt.vif [None req-3afb6d30-acbf-4973-a945-53b1950deb1a ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:42:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-164859619',display_name='tempest-ListServerFiltersTestJSON-instance-164859619',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-164859619',id=74,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4b95747114ab4043b93a260387199c91',ramdisk_id='',reservation_id='r-0ikbhbdb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServerFiltersTestJSON-2126845308',owner_user_name='tempest-ListServerFiltersTestJSON-2126845308-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:42:09Z,user_data=None,user_id='ff99fc8eda0640928c6e82981dacb266',uuid=1951432b-2c0c-4d1b-90df-d94dcf9fc32e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f6e1030d-5508-4e83-92ce-0a723132eb45", "address": "fa:16:3e:8d:b9:dd", "network": {"id": "b36e9cab-12c6-4a09-9aab-ef2679d875ba", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-432532406-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b95747114ab4043b93a260387199c91", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf6e1030d-55", "ovs_interfaceid": "f6e1030d-5508-4e83-92ce-0a723132eb45", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 14:42:20 compute-0 nova_compute[250018]: 2026-01-20 14:42:20.480 250022 DEBUG nova.network.os_vif_util [None req-3afb6d30-acbf-4973-a945-53b1950deb1a ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Converting VIF {"id": "f6e1030d-5508-4e83-92ce-0a723132eb45", "address": "fa:16:3e:8d:b9:dd", "network": {"id": "b36e9cab-12c6-4a09-9aab-ef2679d875ba", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-432532406-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b95747114ab4043b93a260387199c91", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf6e1030d-55", "ovs_interfaceid": "f6e1030d-5508-4e83-92ce-0a723132eb45", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:42:20 compute-0 nova_compute[250018]: 2026-01-20 14:42:20.481 250022 DEBUG nova.network.os_vif_util [None req-3afb6d30-acbf-4973-a945-53b1950deb1a ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8d:b9:dd,bridge_name='br-int',has_traffic_filtering=True,id=f6e1030d-5508-4e83-92ce-0a723132eb45,network=Network(b36e9cab-12c6-4a09-9aab-ef2679d875ba),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf6e1030d-55') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:42:20 compute-0 nova_compute[250018]: 2026-01-20 14:42:20.482 250022 DEBUG nova.objects.instance [None req-3afb6d30-acbf-4973-a945-53b1950deb1a ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Lazy-loading 'pci_devices' on Instance uuid 1951432b-2c0c-4d1b-90df-d94dcf9fc32e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:42:20 compute-0 nova_compute[250018]: 2026-01-20 14:42:20.517 250022 DEBUG nova.virt.libvirt.driver [None req-3afb6d30-acbf-4973-a945-53b1950deb1a ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] [instance: 1951432b-2c0c-4d1b-90df-d94dcf9fc32e] End _get_guest_xml xml=<domain type="kvm">
Jan 20 14:42:20 compute-0 nova_compute[250018]:   <uuid>1951432b-2c0c-4d1b-90df-d94dcf9fc32e</uuid>
Jan 20 14:42:20 compute-0 nova_compute[250018]:   <name>instance-0000004a</name>
Jan 20 14:42:20 compute-0 nova_compute[250018]:   <memory>131072</memory>
Jan 20 14:42:20 compute-0 nova_compute[250018]:   <vcpu>1</vcpu>
Jan 20 14:42:20 compute-0 nova_compute[250018]:   <metadata>
Jan 20 14:42:20 compute-0 nova_compute[250018]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 14:42:20 compute-0 nova_compute[250018]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 14:42:20 compute-0 nova_compute[250018]:       <nova:name>tempest-ListServerFiltersTestJSON-instance-164859619</nova:name>
Jan 20 14:42:20 compute-0 nova_compute[250018]:       <nova:creationTime>2026-01-20 14:42:19</nova:creationTime>
Jan 20 14:42:20 compute-0 nova_compute[250018]:       <nova:flavor name="m1.nano">
Jan 20 14:42:20 compute-0 nova_compute[250018]:         <nova:memory>128</nova:memory>
Jan 20 14:42:20 compute-0 nova_compute[250018]:         <nova:disk>1</nova:disk>
Jan 20 14:42:20 compute-0 nova_compute[250018]:         <nova:swap>0</nova:swap>
Jan 20 14:42:20 compute-0 nova_compute[250018]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 14:42:20 compute-0 nova_compute[250018]:         <nova:vcpus>1</nova:vcpus>
Jan 20 14:42:20 compute-0 nova_compute[250018]:       </nova:flavor>
Jan 20 14:42:20 compute-0 nova_compute[250018]:       <nova:owner>
Jan 20 14:42:20 compute-0 nova_compute[250018]:         <nova:user uuid="ff99fc8eda0640928c6e82981dacb266">tempest-ListServerFiltersTestJSON-2126845308-project-member</nova:user>
Jan 20 14:42:20 compute-0 nova_compute[250018]:         <nova:project uuid="4b95747114ab4043b93a260387199c91">tempest-ListServerFiltersTestJSON-2126845308</nova:project>
Jan 20 14:42:20 compute-0 nova_compute[250018]:       </nova:owner>
Jan 20 14:42:20 compute-0 nova_compute[250018]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 14:42:20 compute-0 nova_compute[250018]:       <nova:ports>
Jan 20 14:42:20 compute-0 nova_compute[250018]:         <nova:port uuid="f6e1030d-5508-4e83-92ce-0a723132eb45">
Jan 20 14:42:20 compute-0 nova_compute[250018]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Jan 20 14:42:20 compute-0 nova_compute[250018]:         </nova:port>
Jan 20 14:42:20 compute-0 nova_compute[250018]:       </nova:ports>
Jan 20 14:42:20 compute-0 nova_compute[250018]:     </nova:instance>
Jan 20 14:42:20 compute-0 nova_compute[250018]:   </metadata>
Jan 20 14:42:20 compute-0 nova_compute[250018]:   <sysinfo type="smbios">
Jan 20 14:42:20 compute-0 nova_compute[250018]:     <system>
Jan 20 14:42:20 compute-0 nova_compute[250018]:       <entry name="manufacturer">RDO</entry>
Jan 20 14:42:20 compute-0 nova_compute[250018]:       <entry name="product">OpenStack Compute</entry>
Jan 20 14:42:20 compute-0 nova_compute[250018]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 14:42:20 compute-0 nova_compute[250018]:       <entry name="serial">1951432b-2c0c-4d1b-90df-d94dcf9fc32e</entry>
Jan 20 14:42:20 compute-0 nova_compute[250018]:       <entry name="uuid">1951432b-2c0c-4d1b-90df-d94dcf9fc32e</entry>
Jan 20 14:42:20 compute-0 nova_compute[250018]:       <entry name="family">Virtual Machine</entry>
Jan 20 14:42:20 compute-0 nova_compute[250018]:     </system>
Jan 20 14:42:20 compute-0 nova_compute[250018]:   </sysinfo>
Jan 20 14:42:20 compute-0 nova_compute[250018]:   <os>
Jan 20 14:42:20 compute-0 nova_compute[250018]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 14:42:20 compute-0 nova_compute[250018]:     <boot dev="hd"/>
Jan 20 14:42:20 compute-0 nova_compute[250018]:     <smbios mode="sysinfo"/>
Jan 20 14:42:20 compute-0 nova_compute[250018]:   </os>
Jan 20 14:42:20 compute-0 nova_compute[250018]:   <features>
Jan 20 14:42:20 compute-0 nova_compute[250018]:     <acpi/>
Jan 20 14:42:20 compute-0 nova_compute[250018]:     <apic/>
Jan 20 14:42:20 compute-0 nova_compute[250018]:     <vmcoreinfo/>
Jan 20 14:42:20 compute-0 nova_compute[250018]:   </features>
Jan 20 14:42:20 compute-0 nova_compute[250018]:   <clock offset="utc">
Jan 20 14:42:20 compute-0 nova_compute[250018]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 14:42:20 compute-0 nova_compute[250018]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 14:42:20 compute-0 nova_compute[250018]:     <timer name="hpet" present="no"/>
Jan 20 14:42:20 compute-0 nova_compute[250018]:   </clock>
Jan 20 14:42:20 compute-0 nova_compute[250018]:   <cpu mode="custom" match="exact">
Jan 20 14:42:20 compute-0 nova_compute[250018]:     <model>Nehalem</model>
Jan 20 14:42:20 compute-0 nova_compute[250018]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 14:42:20 compute-0 nova_compute[250018]:   </cpu>
Jan 20 14:42:20 compute-0 nova_compute[250018]:   <devices>
Jan 20 14:42:20 compute-0 nova_compute[250018]:     <disk type="network" device="disk">
Jan 20 14:42:20 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 14:42:20 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/1951432b-2c0c-4d1b-90df-d94dcf9fc32e_disk">
Jan 20 14:42:20 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:42:20 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:42:20 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:42:20 compute-0 nova_compute[250018]:       </source>
Jan 20 14:42:20 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 14:42:20 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:42:20 compute-0 nova_compute[250018]:       </auth>
Jan 20 14:42:20 compute-0 nova_compute[250018]:       <target dev="vda" bus="virtio"/>
Jan 20 14:42:20 compute-0 nova_compute[250018]:     </disk>
Jan 20 14:42:20 compute-0 nova_compute[250018]:     <disk type="network" device="cdrom">
Jan 20 14:42:20 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 14:42:20 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/1951432b-2c0c-4d1b-90df-d94dcf9fc32e_disk.config">
Jan 20 14:42:20 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:42:20 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:42:20 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:42:20 compute-0 nova_compute[250018]:       </source>
Jan 20 14:42:20 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 14:42:20 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:42:20 compute-0 nova_compute[250018]:       </auth>
Jan 20 14:42:20 compute-0 nova_compute[250018]:       <target dev="sda" bus="sata"/>
Jan 20 14:42:20 compute-0 nova_compute[250018]:     </disk>
Jan 20 14:42:20 compute-0 nova_compute[250018]:     <interface type="ethernet">
Jan 20 14:42:20 compute-0 nova_compute[250018]:       <mac address="fa:16:3e:8d:b9:dd"/>
Jan 20 14:42:20 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 14:42:20 compute-0 nova_compute[250018]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 14:42:20 compute-0 nova_compute[250018]:       <mtu size="1442"/>
Jan 20 14:42:20 compute-0 nova_compute[250018]:       <target dev="tapf6e1030d-55"/>
Jan 20 14:42:20 compute-0 nova_compute[250018]:     </interface>
Jan 20 14:42:20 compute-0 nova_compute[250018]:     <serial type="pty">
Jan 20 14:42:20 compute-0 nova_compute[250018]:       <log file="/var/lib/nova/instances/1951432b-2c0c-4d1b-90df-d94dcf9fc32e/console.log" append="off"/>
Jan 20 14:42:20 compute-0 nova_compute[250018]:     </serial>
Jan 20 14:42:20 compute-0 nova_compute[250018]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 14:42:20 compute-0 nova_compute[250018]:     <video>
Jan 20 14:42:20 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 14:42:20 compute-0 nova_compute[250018]:     </video>
Jan 20 14:42:20 compute-0 nova_compute[250018]:     <input type="tablet" bus="usb"/>
Jan 20 14:42:20 compute-0 nova_compute[250018]:     <rng model="virtio">
Jan 20 14:42:20 compute-0 nova_compute[250018]:       <backend model="random">/dev/urandom</backend>
Jan 20 14:42:20 compute-0 nova_compute[250018]:     </rng>
Jan 20 14:42:20 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root"/>
Jan 20 14:42:20 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:42:20 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:42:20 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:42:20 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:42:20 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:42:20 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:42:20 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:42:20 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:42:20 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:42:20 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:42:20 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:42:20 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:42:20 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:42:20 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:42:20 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:42:20 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:42:20 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:42:20 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:42:20 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:42:20 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:42:20 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:42:20 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:42:20 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:42:20 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:42:20 compute-0 nova_compute[250018]:     <controller type="usb" index="0"/>
Jan 20 14:42:20 compute-0 nova_compute[250018]:     <memballoon model="virtio">
Jan 20 14:42:20 compute-0 nova_compute[250018]:       <stats period="10"/>
Jan 20 14:42:20 compute-0 nova_compute[250018]:     </memballoon>
Jan 20 14:42:20 compute-0 nova_compute[250018]:   </devices>
Jan 20 14:42:20 compute-0 nova_compute[250018]: </domain>
Jan 20 14:42:20 compute-0 nova_compute[250018]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 14:42:20 compute-0 nova_compute[250018]: 2026-01-20 14:42:20.518 250022 DEBUG nova.compute.manager [None req-3afb6d30-acbf-4973-a945-53b1950deb1a ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] [instance: 1951432b-2c0c-4d1b-90df-d94dcf9fc32e] Preparing to wait for external event network-vif-plugged-f6e1030d-5508-4e83-92ce-0a723132eb45 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 20 14:42:20 compute-0 nova_compute[250018]: 2026-01-20 14:42:20.518 250022 DEBUG oslo_concurrency.lockutils [None req-3afb6d30-acbf-4973-a945-53b1950deb1a ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Acquiring lock "1951432b-2c0c-4d1b-90df-d94dcf9fc32e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:42:20 compute-0 nova_compute[250018]: 2026-01-20 14:42:20.519 250022 DEBUG oslo_concurrency.lockutils [None req-3afb6d30-acbf-4973-a945-53b1950deb1a ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Lock "1951432b-2c0c-4d1b-90df-d94dcf9fc32e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:42:20 compute-0 nova_compute[250018]: 2026-01-20 14:42:20.519 250022 DEBUG oslo_concurrency.lockutils [None req-3afb6d30-acbf-4973-a945-53b1950deb1a ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Lock "1951432b-2c0c-4d1b-90df-d94dcf9fc32e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:42:20 compute-0 nova_compute[250018]: 2026-01-20 14:42:20.520 250022 DEBUG nova.virt.libvirt.vif [None req-3afb6d30-acbf-4973-a945-53b1950deb1a ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:42:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-164859619',display_name='tempest-ListServerFiltersTestJSON-instance-164859619',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-164859619',id=74,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4b95747114ab4043b93a260387199c91',ramdisk_id='',reservation_id='r-0ikbhbdb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServerFiltersTestJSON-2126845308',owner_user_name='tempest-ListServerFiltersTestJSON-2126845308-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:42:09Z,user_data=None,user_id='ff99fc8eda0640928c6e82981dacb266',uuid=1951432b-2c0c-4d1b-90df-d94dcf9fc32e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f6e1030d-5508-4e83-92ce-0a723132eb45", "address": "fa:16:3e:8d:b9:dd", "network": {"id": "b36e9cab-12c6-4a09-9aab-ef2679d875ba", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-432532406-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b95747114ab4043b93a260387199c91", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf6e1030d-55", "ovs_interfaceid": "f6e1030d-5508-4e83-92ce-0a723132eb45", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 14:42:20 compute-0 nova_compute[250018]: 2026-01-20 14:42:20.520 250022 DEBUG nova.network.os_vif_util [None req-3afb6d30-acbf-4973-a945-53b1950deb1a ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Converting VIF {"id": "f6e1030d-5508-4e83-92ce-0a723132eb45", "address": "fa:16:3e:8d:b9:dd", "network": {"id": "b36e9cab-12c6-4a09-9aab-ef2679d875ba", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-432532406-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b95747114ab4043b93a260387199c91", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf6e1030d-55", "ovs_interfaceid": "f6e1030d-5508-4e83-92ce-0a723132eb45", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:42:20 compute-0 nova_compute[250018]: 2026-01-20 14:42:20.521 250022 DEBUG nova.network.os_vif_util [None req-3afb6d30-acbf-4973-a945-53b1950deb1a ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8d:b9:dd,bridge_name='br-int',has_traffic_filtering=True,id=f6e1030d-5508-4e83-92ce-0a723132eb45,network=Network(b36e9cab-12c6-4a09-9aab-ef2679d875ba),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf6e1030d-55') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:42:20 compute-0 nova_compute[250018]: 2026-01-20 14:42:20.521 250022 DEBUG os_vif [None req-3afb6d30-acbf-4973-a945-53b1950deb1a ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8d:b9:dd,bridge_name='br-int',has_traffic_filtering=True,id=f6e1030d-5508-4e83-92ce-0a723132eb45,network=Network(b36e9cab-12c6-4a09-9aab-ef2679d875ba),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf6e1030d-55') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 14:42:20 compute-0 nova_compute[250018]: 2026-01-20 14:42:20.521 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:20 compute-0 nova_compute[250018]: 2026-01-20 14:42:20.522 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:42:20 compute-0 nova_compute[250018]: 2026-01-20 14:42:20.522 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:42:20 compute-0 nova_compute[250018]: 2026-01-20 14:42:20.524 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:20 compute-0 nova_compute[250018]: 2026-01-20 14:42:20.524 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf6e1030d-55, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:42:20 compute-0 nova_compute[250018]: 2026-01-20 14:42:20.525 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf6e1030d-55, col_values=(('external_ids', {'iface-id': 'f6e1030d-5508-4e83-92ce-0a723132eb45', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8d:b9:dd', 'vm-uuid': '1951432b-2c0c-4d1b-90df-d94dcf9fc32e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:42:20 compute-0 NetworkManager[48960]: <info>  [1768920140.5271] manager: (tapf6e1030d-55): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/119)
Jan 20 14:42:20 compute-0 nova_compute[250018]: 2026-01-20 14:42:20.528 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:42:20 compute-0 nova_compute[250018]: 2026-01-20 14:42:20.531 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:20 compute-0 nova_compute[250018]: 2026-01-20 14:42:20.531 250022 INFO os_vif [None req-3afb6d30-acbf-4973-a945-53b1950deb1a ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8d:b9:dd,bridge_name='br-int',has_traffic_filtering=True,id=f6e1030d-5508-4e83-92ce-0a723132eb45,network=Network(b36e9cab-12c6-4a09-9aab-ef2679d875ba),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf6e1030d-55')
Jan 20 14:42:20 compute-0 nova_compute[250018]: 2026-01-20 14:42:20.594 250022 DEBUG nova.virt.libvirt.driver [None req-3afb6d30-acbf-4973-a945-53b1950deb1a ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:42:20 compute-0 nova_compute[250018]: 2026-01-20 14:42:20.594 250022 DEBUG nova.virt.libvirt.driver [None req-3afb6d30-acbf-4973-a945-53b1950deb1a ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:42:20 compute-0 nova_compute[250018]: 2026-01-20 14:42:20.594 250022 DEBUG nova.virt.libvirt.driver [None req-3afb6d30-acbf-4973-a945-53b1950deb1a ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] No VIF found with MAC fa:16:3e:8d:b9:dd, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 14:42:20 compute-0 nova_compute[250018]: 2026-01-20 14:42:20.595 250022 INFO nova.virt.libvirt.driver [None req-3afb6d30-acbf-4973-a945-53b1950deb1a ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] [instance: 1951432b-2c0c-4d1b-90df-d94dcf9fc32e] Using config drive
Jan 20 14:42:20 compute-0 nova_compute[250018]: 2026-01-20 14:42:20.619 250022 DEBUG nova.storage.rbd_utils [None req-3afb6d30-acbf-4973-a945-53b1950deb1a ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] rbd image 1951432b-2c0c-4d1b-90df-d94dcf9fc32e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:42:20 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:42:20 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:42:20 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:42:20.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:42:20 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3571315860' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:42:20 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/751689202' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:42:21 compute-0 nova_compute[250018]: 2026-01-20 14:42:21.185 250022 DEBUG nova.network.neutron [req-dadd2fb8-2123-418b-a3c4-414e028a8c97 req-d8616b3d-53df-4778-b7d2-bf0fddbc2c51 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 86aa2fb7-c532-46b4-a02b-8070608dfe6b] Updated VIF entry in instance network info cache for port 2a804e48-646b-4b9e-8a59-155024ec39ac. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 14:42:21 compute-0 nova_compute[250018]: 2026-01-20 14:42:21.186 250022 DEBUG nova.network.neutron [req-dadd2fb8-2123-418b-a3c4-414e028a8c97 req-d8616b3d-53df-4778-b7d2-bf0fddbc2c51 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 86aa2fb7-c532-46b4-a02b-8070608dfe6b] Updating instance_info_cache with network_info: [{"id": "2a804e48-646b-4b9e-8a59-155024ec39ac", "address": "fa:16:3e:48:06:d8", "network": {"id": "fc21b99b-4e34-422c-be05-0a440009dac4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-808285772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3f93fd4b2154dda9f38e62334904303", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a804e48-64", "ovs_interfaceid": "2a804e48-646b-4b9e-8a59-155024ec39ac", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:42:21 compute-0 nova_compute[250018]: 2026-01-20 14:42:21.265 250022 DEBUG oslo_concurrency.lockutils [req-dadd2fb8-2123-418b-a3c4-414e028a8c97 req-d8616b3d-53df-4778-b7d2-bf0fddbc2c51 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-86aa2fb7-c532-46b4-a02b-8070608dfe6b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:42:21 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1620: 321 pgs: 321 active+clean; 358 MiB data, 780 MiB used, 20 GiB / 21 GiB avail; 37 KiB/s rd, 4.8 MiB/s wr, 61 op/s
Jan 20 14:42:21 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:42:21 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:42:21 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:42:21.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:42:21 compute-0 nova_compute[250018]: 2026-01-20 14:42:21.976 250022 INFO nova.virt.libvirt.driver [None req-3afb6d30-acbf-4973-a945-53b1950deb1a ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] [instance: 1951432b-2c0c-4d1b-90df-d94dcf9fc32e] Creating config drive at /var/lib/nova/instances/1951432b-2c0c-4d1b-90df-d94dcf9fc32e/disk.config
Jan 20 14:42:21 compute-0 nova_compute[250018]: 2026-01-20 14:42:21.983 250022 DEBUG oslo_concurrency.processutils [None req-3afb6d30-acbf-4973-a945-53b1950deb1a ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1951432b-2c0c-4d1b-90df-d94dcf9fc32e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgf_2ohzb execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:42:22 compute-0 nova_compute[250018]: 2026-01-20 14:42:22.072 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:22 compute-0 nova_compute[250018]: 2026-01-20 14:42:22.120 250022 DEBUG oslo_concurrency.processutils [None req-3afb6d30-acbf-4973-a945-53b1950deb1a ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1951432b-2c0c-4d1b-90df-d94dcf9fc32e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgf_2ohzb" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:42:22 compute-0 ceph-mon[74360]: pgmap v1620: 321 pgs: 321 active+clean; 358 MiB data, 780 MiB used, 20 GiB / 21 GiB avail; 37 KiB/s rd, 4.8 MiB/s wr, 61 op/s
Jan 20 14:42:22 compute-0 nova_compute[250018]: 2026-01-20 14:42:22.156 250022 DEBUG nova.storage.rbd_utils [None req-3afb6d30-acbf-4973-a945-53b1950deb1a ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] rbd image 1951432b-2c0c-4d1b-90df-d94dcf9fc32e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:42:22 compute-0 nova_compute[250018]: 2026-01-20 14:42:22.160 250022 DEBUG oslo_concurrency.processutils [None req-3afb6d30-acbf-4973-a945-53b1950deb1a ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1951432b-2c0c-4d1b-90df-d94dcf9fc32e/disk.config 1951432b-2c0c-4d1b-90df-d94dcf9fc32e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:42:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:42:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:42:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:42:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:42:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:42:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:42:22 compute-0 nova_compute[250018]: 2026-01-20 14:42:22.568 250022 DEBUG nova.network.neutron [req-7f5b271e-90d2-4d10-8db5-7d42d3525769 req-c997855d-6970-4c78-baf3-3100bf818feb 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 1951432b-2c0c-4d1b-90df-d94dcf9fc32e] Updated VIF entry in instance network info cache for port f6e1030d-5508-4e83-92ce-0a723132eb45. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 14:42:22 compute-0 nova_compute[250018]: 2026-01-20 14:42:22.569 250022 DEBUG nova.network.neutron [req-7f5b271e-90d2-4d10-8db5-7d42d3525769 req-c997855d-6970-4c78-baf3-3100bf818feb 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 1951432b-2c0c-4d1b-90df-d94dcf9fc32e] Updating instance_info_cache with network_info: [{"id": "f6e1030d-5508-4e83-92ce-0a723132eb45", "address": "fa:16:3e:8d:b9:dd", "network": {"id": "b36e9cab-12c6-4a09-9aab-ef2679d875ba", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-432532406-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b95747114ab4043b93a260387199c91", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf6e1030d-55", "ovs_interfaceid": "f6e1030d-5508-4e83-92ce-0a723132eb45", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:42:22 compute-0 nova_compute[250018]: 2026-01-20 14:42:22.601 250022 DEBUG oslo_concurrency.lockutils [req-7f5b271e-90d2-4d10-8db5-7d42d3525769 req-c997855d-6970-4c78-baf3-3100bf818feb 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-1951432b-2c0c-4d1b-90df-d94dcf9fc32e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:42:22 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:42:22 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:42:22 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:42:22.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:42:22 compute-0 nova_compute[250018]: 2026-01-20 14:42:22.734 250022 DEBUG oslo_concurrency.processutils [None req-3afb6d30-acbf-4973-a945-53b1950deb1a ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1951432b-2c0c-4d1b-90df-d94dcf9fc32e/disk.config 1951432b-2c0c-4d1b-90df-d94dcf9fc32e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.574s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:42:22 compute-0 nova_compute[250018]: 2026-01-20 14:42:22.735 250022 INFO nova.virt.libvirt.driver [None req-3afb6d30-acbf-4973-a945-53b1950deb1a ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] [instance: 1951432b-2c0c-4d1b-90df-d94dcf9fc32e] Deleting local config drive /var/lib/nova/instances/1951432b-2c0c-4d1b-90df-d94dcf9fc32e/disk.config because it was imported into RBD.
Jan 20 14:42:22 compute-0 kernel: tapf6e1030d-55: entered promiscuous mode
Jan 20 14:42:22 compute-0 NetworkManager[48960]: <info>  [1768920142.7764] manager: (tapf6e1030d-55): new Tun device (/org/freedesktop/NetworkManager/Devices/120)
Jan 20 14:42:22 compute-0 ovn_controller[148666]: 2026-01-20T14:42:22Z|00229|binding|INFO|Claiming lport f6e1030d-5508-4e83-92ce-0a723132eb45 for this chassis.
Jan 20 14:42:22 compute-0 ovn_controller[148666]: 2026-01-20T14:42:22Z|00230|binding|INFO|f6e1030d-5508-4e83-92ce-0a723132eb45: Claiming fa:16:3e:8d:b9:dd 10.100.0.4
Jan 20 14:42:22 compute-0 nova_compute[250018]: 2026-01-20 14:42:22.778 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:22 compute-0 systemd-udevd[295313]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:42:22 compute-0 systemd-machined[216401]: New machine qemu-32-instance-0000004a.
Jan 20 14:42:22 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:22.810 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8d:b9:dd 10.100.0.4'], port_security=['fa:16:3e:8d:b9:dd 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '1951432b-2c0c-4d1b-90df-d94dcf9fc32e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b36e9cab-12c6-4a09-9aab-ef2679d875ba', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4b95747114ab4043b93a260387199c91', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f18b0222-78a5-4c37-8065-772dbe5c63e1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=80e2aa5b-ecb8-4e93-992f-baaef718dd34, chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=f6e1030d-5508-4e83-92ce-0a723132eb45) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:42:22 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:22.811 160071 INFO neutron.agent.ovn.metadata.agent [-] Port f6e1030d-5508-4e83-92ce-0a723132eb45 in datapath b36e9cab-12c6-4a09-9aab-ef2679d875ba bound to our chassis
Jan 20 14:42:22 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:22.813 160071 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b36e9cab-12c6-4a09-9aab-ef2679d875ba
Jan 20 14:42:22 compute-0 nova_compute[250018]: 2026-01-20 14:42:22.818 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:22 compute-0 ovn_controller[148666]: 2026-01-20T14:42:22Z|00231|binding|INFO|Setting lport f6e1030d-5508-4e83-92ce-0a723132eb45 ovn-installed in OVS
Jan 20 14:42:22 compute-0 ovn_controller[148666]: 2026-01-20T14:42:22Z|00232|binding|INFO|Setting lport f6e1030d-5508-4e83-92ce-0a723132eb45 up in Southbound
Jan 20 14:42:22 compute-0 NetworkManager[48960]: <info>  [1768920142.8217] device (tapf6e1030d-55): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 14:42:22 compute-0 nova_compute[250018]: 2026-01-20 14:42:22.821 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:22 compute-0 NetworkManager[48960]: <info>  [1768920142.8225] device (tapf6e1030d-55): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 14:42:22 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:22.823 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[c7be2a2c-60da-419c-a3b3-30fc9b0369f8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:42:22 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:22.824 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb36e9cab-11 in ovnmeta-b36e9cab-12c6-4a09-9aab-ef2679d875ba namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 14:42:22 compute-0 systemd[1]: Started Virtual Machine qemu-32-instance-0000004a.
Jan 20 14:42:22 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:22.826 257604 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb36e9cab-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 14:42:22 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:22.826 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[3438f4fe-1245-4bdc-96df-3c5e999223bf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:42:22 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:22.827 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[7a5e5e28-c06d-41a5-8873-ad5e0e04220a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:42:22 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:22.838 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[8df0c0b1-3d07-4641-8134-7c126d0da2b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:42:22 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:22.851 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[2948afe5-e63f-4530-8b2d-9982ccd2044d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:42:22 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:22.876 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[036aa2bc-670e-42c9-aa0b-92f6bba225dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:42:22 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:22.881 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[317036df-ea09-4024-a34c-4a7a1fed8159]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:42:22 compute-0 NetworkManager[48960]: <info>  [1768920142.8830] manager: (tapb36e9cab-10): new Veth device (/org/freedesktop/NetworkManager/Devices/121)
Jan 20 14:42:22 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:22.909 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[961269f4-b994-4b2b-8eb8-0a5d694d730e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:42:22 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:22.911 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[9618fbe0-a2ed-4a46-96fa-90333cf73ed1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:42:22 compute-0 NetworkManager[48960]: <info>  [1768920142.9304] device (tapb36e9cab-10): carrier: link connected
Jan 20 14:42:22 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:22.936 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[242a7e63-4a61-4975-b304-911498061423]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:42:22 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:22.950 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[bcacf5b7-2e4c-41e5-82ee-67b2fb96a916]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb36e9cab-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:08:c2:52'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 73], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 608365, 'reachable_time': 19060, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 295346, 'error': None, 'target': 'ovnmeta-b36e9cab-12c6-4a09-9aab-ef2679d875ba', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:42:22 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:22.966 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[f18abbf0-7043-443e-b108-3f55e1e7fc04]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe08:c252'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 608365, 'tstamp': 608365}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 295347, 'error': None, 'target': 'ovnmeta-b36e9cab-12c6-4a09-9aab-ef2679d875ba', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:42:22 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:22.980 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[e70f5396-0e0d-47b6-b2fd-d9b01ddc35c5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb36e9cab-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:08:c2:52'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 73], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 608365, 'reachable_time': 19060, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 295348, 'error': None, 'target': 'ovnmeta-b36e9cab-12c6-4a09-9aab-ef2679d875ba', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:42:23 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:23.006 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[f42a2c90-479c-42a0-8a50-daa1abd7e9e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:42:23 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:23.058 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[9a34fa93-0f6e-43a6-a115-04c04933508d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:42:23 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:23.059 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb36e9cab-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:42:23 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:23.059 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:42:23 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:23.060 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb36e9cab-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:42:23 compute-0 kernel: tapb36e9cab-10: entered promiscuous mode
Jan 20 14:42:23 compute-0 NetworkManager[48960]: <info>  [1768920143.0622] manager: (tapb36e9cab-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/122)
Jan 20 14:42:23 compute-0 nova_compute[250018]: 2026-01-20 14:42:23.061 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:23 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:23.065 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb36e9cab-10, col_values=(('external_ids', {'iface-id': '5dcae274-b8f4-440a-a3eb-5c1a5a044346'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:42:23 compute-0 ovn_controller[148666]: 2026-01-20T14:42:23Z|00233|binding|INFO|Releasing lport 5dcae274-b8f4-440a-a3eb-5c1a5a044346 from this chassis (sb_readonly=0)
Jan 20 14:42:23 compute-0 nova_compute[250018]: 2026-01-20 14:42:23.066 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:23 compute-0 nova_compute[250018]: 2026-01-20 14:42:23.067 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:23 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:23.068 160071 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b36e9cab-12c6-4a09-9aab-ef2679d875ba.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b36e9cab-12c6-4a09-9aab-ef2679d875ba.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 14:42:23 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:23.069 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[998658c4-6132-4080-9420-c01791c147c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:42:23 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:23.069 160071 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 14:42:23 compute-0 ovn_metadata_agent[160049]: global
Jan 20 14:42:23 compute-0 ovn_metadata_agent[160049]:     log         /dev/log local0 debug
Jan 20 14:42:23 compute-0 ovn_metadata_agent[160049]:     log-tag     haproxy-metadata-proxy-b36e9cab-12c6-4a09-9aab-ef2679d875ba
Jan 20 14:42:23 compute-0 ovn_metadata_agent[160049]:     user        root
Jan 20 14:42:23 compute-0 ovn_metadata_agent[160049]:     group       root
Jan 20 14:42:23 compute-0 ovn_metadata_agent[160049]:     maxconn     1024
Jan 20 14:42:23 compute-0 ovn_metadata_agent[160049]:     pidfile     /var/lib/neutron/external/pids/b36e9cab-12c6-4a09-9aab-ef2679d875ba.pid.haproxy
Jan 20 14:42:23 compute-0 ovn_metadata_agent[160049]:     daemon
Jan 20 14:42:23 compute-0 ovn_metadata_agent[160049]: 
Jan 20 14:42:23 compute-0 ovn_metadata_agent[160049]: defaults
Jan 20 14:42:23 compute-0 ovn_metadata_agent[160049]:     log global
Jan 20 14:42:23 compute-0 ovn_metadata_agent[160049]:     mode http
Jan 20 14:42:23 compute-0 ovn_metadata_agent[160049]:     option httplog
Jan 20 14:42:23 compute-0 ovn_metadata_agent[160049]:     option dontlognull
Jan 20 14:42:23 compute-0 ovn_metadata_agent[160049]:     option http-server-close
Jan 20 14:42:23 compute-0 ovn_metadata_agent[160049]:     option forwardfor
Jan 20 14:42:23 compute-0 ovn_metadata_agent[160049]:     retries                 3
Jan 20 14:42:23 compute-0 ovn_metadata_agent[160049]:     timeout http-request    30s
Jan 20 14:42:23 compute-0 ovn_metadata_agent[160049]:     timeout connect         30s
Jan 20 14:42:23 compute-0 ovn_metadata_agent[160049]:     timeout client          32s
Jan 20 14:42:23 compute-0 ovn_metadata_agent[160049]:     timeout server          32s
Jan 20 14:42:23 compute-0 ovn_metadata_agent[160049]:     timeout http-keep-alive 30s
Jan 20 14:42:23 compute-0 ovn_metadata_agent[160049]: 
Jan 20 14:42:23 compute-0 ovn_metadata_agent[160049]: 
Jan 20 14:42:23 compute-0 ovn_metadata_agent[160049]: listen listener
Jan 20 14:42:23 compute-0 ovn_metadata_agent[160049]:     bind 169.254.169.254:80
Jan 20 14:42:23 compute-0 ovn_metadata_agent[160049]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 14:42:23 compute-0 ovn_metadata_agent[160049]:     http-request add-header X-OVN-Network-ID b36e9cab-12c6-4a09-9aab-ef2679d875ba
Jan 20 14:42:23 compute-0 ovn_metadata_agent[160049]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 14:42:23 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:23.070 160071 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b36e9cab-12c6-4a09-9aab-ef2679d875ba', 'env', 'PROCESS_TAG=haproxy-b36e9cab-12c6-4a09-9aab-ef2679d875ba', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b36e9cab-12c6-4a09-9aab-ef2679d875ba.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 14:42:23 compute-0 nova_compute[250018]: 2026-01-20 14:42:23.079 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:23 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/3110567127' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:42:23 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:42:23 compute-0 nova_compute[250018]: 2026-01-20 14:42:23.335 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768920143.3347416, 1951432b-2c0c-4d1b-90df-d94dcf9fc32e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:42:23 compute-0 nova_compute[250018]: 2026-01-20 14:42:23.336 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 1951432b-2c0c-4d1b-90df-d94dcf9fc32e] VM Started (Lifecycle Event)
Jan 20 14:42:23 compute-0 nova_compute[250018]: 2026-01-20 14:42:23.364 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 1951432b-2c0c-4d1b-90df-d94dcf9fc32e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:42:23 compute-0 nova_compute[250018]: 2026-01-20 14:42:23.368 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768920143.3356223, 1951432b-2c0c-4d1b-90df-d94dcf9fc32e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:42:23 compute-0 nova_compute[250018]: 2026-01-20 14:42:23.368 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 1951432b-2c0c-4d1b-90df-d94dcf9fc32e] VM Paused (Lifecycle Event)
Jan 20 14:42:23 compute-0 podman[295421]: 2026-01-20 14:42:23.415606754 +0000 UTC m=+0.046781159 container create fbc44418705e94b57f8598626d1cffa2df5f779fd0c211c71772c96d355152a3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b36e9cab-12c6-4a09-9aab-ef2679d875ba, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Jan 20 14:42:23 compute-0 nova_compute[250018]: 2026-01-20 14:42:23.424 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 1951432b-2c0c-4d1b-90df-d94dcf9fc32e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:42:23 compute-0 nova_compute[250018]: 2026-01-20 14:42:23.429 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 1951432b-2c0c-4d1b-90df-d94dcf9fc32e] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:42:23 compute-0 systemd[1]: Started libpod-conmon-fbc44418705e94b57f8598626d1cffa2df5f779fd0c211c71772c96d355152a3.scope.
Jan 20 14:42:23 compute-0 nova_compute[250018]: 2026-01-20 14:42:23.472 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 1951432b-2c0c-4d1b-90df-d94dcf9fc32e] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:42:23 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:42:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb2dfc1d76a0df60ee13772466ea990fe9c1584aa944601416485f89fa85b885/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 14:42:23 compute-0 podman[295421]: 2026-01-20 14:42:23.389221749 +0000 UTC m=+0.020396174 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 14:42:23 compute-0 podman[295421]: 2026-01-20 14:42:23.497637576 +0000 UTC m=+0.128811991 container init fbc44418705e94b57f8598626d1cffa2df5f779fd0c211c71772c96d355152a3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b36e9cab-12c6-4a09-9aab-ef2679d875ba, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Jan 20 14:42:23 compute-0 podman[295421]: 2026-01-20 14:42:23.502426997 +0000 UTC m=+0.133601422 container start fbc44418705e94b57f8598626d1cffa2df5f779fd0c211c71772c96d355152a3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b36e9cab-12c6-4a09-9aab-ef2679d875ba, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:42:23 compute-0 neutron-haproxy-ovnmeta-b36e9cab-12c6-4a09-9aab-ef2679d875ba[295436]: [NOTICE]   (295440) : New worker (295442) forked
Jan 20 14:42:23 compute-0 neutron-haproxy-ovnmeta-b36e9cab-12c6-4a09-9aab-ef2679d875ba[295436]: [NOTICE]   (295440) : Loading success.
Jan 20 14:42:23 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1621: 321 pgs: 321 active+clean; 381 MiB data, 794 MiB used, 20 GiB / 21 GiB avail; 52 KiB/s rd, 4.8 MiB/s wr, 79 op/s
Jan 20 14:42:23 compute-0 nova_compute[250018]: 2026-01-20 14:42:23.796 250022 DEBUG nova.compute.manager [req-7f6d7c67-4c16-43f6-9ca8-9a15675d5362 req-26fdda3d-dab5-41df-a577-fcfb141c84de 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 1951432b-2c0c-4d1b-90df-d94dcf9fc32e] Received event network-vif-plugged-f6e1030d-5508-4e83-92ce-0a723132eb45 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:42:23 compute-0 nova_compute[250018]: 2026-01-20 14:42:23.796 250022 DEBUG oslo_concurrency.lockutils [req-7f6d7c67-4c16-43f6-9ca8-9a15675d5362 req-26fdda3d-dab5-41df-a577-fcfb141c84de 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "1951432b-2c0c-4d1b-90df-d94dcf9fc32e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:42:23 compute-0 nova_compute[250018]: 2026-01-20 14:42:23.797 250022 DEBUG oslo_concurrency.lockutils [req-7f6d7c67-4c16-43f6-9ca8-9a15675d5362 req-26fdda3d-dab5-41df-a577-fcfb141c84de 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "1951432b-2c0c-4d1b-90df-d94dcf9fc32e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:42:23 compute-0 nova_compute[250018]: 2026-01-20 14:42:23.797 250022 DEBUG oslo_concurrency.lockutils [req-7f6d7c67-4c16-43f6-9ca8-9a15675d5362 req-26fdda3d-dab5-41df-a577-fcfb141c84de 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "1951432b-2c0c-4d1b-90df-d94dcf9fc32e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:42:23 compute-0 nova_compute[250018]: 2026-01-20 14:42:23.797 250022 DEBUG nova.compute.manager [req-7f6d7c67-4c16-43f6-9ca8-9a15675d5362 req-26fdda3d-dab5-41df-a577-fcfb141c84de 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 1951432b-2c0c-4d1b-90df-d94dcf9fc32e] Processing event network-vif-plugged-f6e1030d-5508-4e83-92ce-0a723132eb45 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 20 14:42:23 compute-0 nova_compute[250018]: 2026-01-20 14:42:23.798 250022 DEBUG nova.compute.manager [None req-3afb6d30-acbf-4973-a945-53b1950deb1a ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] [instance: 1951432b-2c0c-4d1b-90df-d94dcf9fc32e] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 14:42:23 compute-0 nova_compute[250018]: 2026-01-20 14:42:23.802 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768920143.8018332, 1951432b-2c0c-4d1b-90df-d94dcf9fc32e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:42:23 compute-0 nova_compute[250018]: 2026-01-20 14:42:23.802 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 1951432b-2c0c-4d1b-90df-d94dcf9fc32e] VM Resumed (Lifecycle Event)
Jan 20 14:42:23 compute-0 nova_compute[250018]: 2026-01-20 14:42:23.805 250022 DEBUG nova.virt.libvirt.driver [None req-3afb6d30-acbf-4973-a945-53b1950deb1a ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] [instance: 1951432b-2c0c-4d1b-90df-d94dcf9fc32e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 14:42:23 compute-0 nova_compute[250018]: 2026-01-20 14:42:23.808 250022 INFO nova.virt.libvirt.driver [-] [instance: 1951432b-2c0c-4d1b-90df-d94dcf9fc32e] Instance spawned successfully.
Jan 20 14:42:23 compute-0 nova_compute[250018]: 2026-01-20 14:42:23.809 250022 DEBUG nova.virt.libvirt.driver [None req-3afb6d30-acbf-4973-a945-53b1950deb1a ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] [instance: 1951432b-2c0c-4d1b-90df-d94dcf9fc32e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 14:42:23 compute-0 nova_compute[250018]: 2026-01-20 14:42:23.834 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 1951432b-2c0c-4d1b-90df-d94dcf9fc32e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:42:23 compute-0 nova_compute[250018]: 2026-01-20 14:42:23.841 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 1951432b-2c0c-4d1b-90df-d94dcf9fc32e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:42:23 compute-0 nova_compute[250018]: 2026-01-20 14:42:23.847 250022 DEBUG nova.virt.libvirt.driver [None req-3afb6d30-acbf-4973-a945-53b1950deb1a ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] [instance: 1951432b-2c0c-4d1b-90df-d94dcf9fc32e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:42:23 compute-0 nova_compute[250018]: 2026-01-20 14:42:23.848 250022 DEBUG nova.virt.libvirt.driver [None req-3afb6d30-acbf-4973-a945-53b1950deb1a ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] [instance: 1951432b-2c0c-4d1b-90df-d94dcf9fc32e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:42:23 compute-0 nova_compute[250018]: 2026-01-20 14:42:23.848 250022 DEBUG nova.virt.libvirt.driver [None req-3afb6d30-acbf-4973-a945-53b1950deb1a ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] [instance: 1951432b-2c0c-4d1b-90df-d94dcf9fc32e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:42:23 compute-0 nova_compute[250018]: 2026-01-20 14:42:23.849 250022 DEBUG nova.virt.libvirt.driver [None req-3afb6d30-acbf-4973-a945-53b1950deb1a ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] [instance: 1951432b-2c0c-4d1b-90df-d94dcf9fc32e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:42:23 compute-0 nova_compute[250018]: 2026-01-20 14:42:23.850 250022 DEBUG nova.virt.libvirt.driver [None req-3afb6d30-acbf-4973-a945-53b1950deb1a ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] [instance: 1951432b-2c0c-4d1b-90df-d94dcf9fc32e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:42:23 compute-0 nova_compute[250018]: 2026-01-20 14:42:23.851 250022 DEBUG nova.virt.libvirt.driver [None req-3afb6d30-acbf-4973-a945-53b1950deb1a ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] [instance: 1951432b-2c0c-4d1b-90df-d94dcf9fc32e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:42:23 compute-0 nova_compute[250018]: 2026-01-20 14:42:23.916 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 1951432b-2c0c-4d1b-90df-d94dcf9fc32e] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:42:23 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:42:23 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:42:23 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:42:23.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:42:23 compute-0 nova_compute[250018]: 2026-01-20 14:42:23.969 250022 INFO nova.compute.manager [None req-3afb6d30-acbf-4973-a945-53b1950deb1a ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] [instance: 1951432b-2c0c-4d1b-90df-d94dcf9fc32e] Took 14.66 seconds to spawn the instance on the hypervisor.
Jan 20 14:42:23 compute-0 nova_compute[250018]: 2026-01-20 14:42:23.970 250022 DEBUG nova.compute.manager [None req-3afb6d30-acbf-4973-a945-53b1950deb1a ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] [instance: 1951432b-2c0c-4d1b-90df-d94dcf9fc32e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:42:24 compute-0 nova_compute[250018]: 2026-01-20 14:42:24.052 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:42:24 compute-0 nova_compute[250018]: 2026-01-20 14:42:24.055 250022 INFO nova.compute.manager [None req-3afb6d30-acbf-4973-a945-53b1950deb1a ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] [instance: 1951432b-2c0c-4d1b-90df-d94dcf9fc32e] Took 15.98 seconds to build instance.
Jan 20 14:42:24 compute-0 nova_compute[250018]: 2026-01-20 14:42:24.071 250022 DEBUG oslo_concurrency.lockutils [None req-3afb6d30-acbf-4973-a945-53b1950deb1a ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Lock "1951432b-2c0c-4d1b-90df-d94dcf9fc32e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.116s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:42:24 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/2974363094' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:42:24 compute-0 ceph-mon[74360]: pgmap v1621: 321 pgs: 321 active+clean; 381 MiB data, 794 MiB used, 20 GiB / 21 GiB avail; 52 KiB/s rd, 4.8 MiB/s wr, 79 op/s
Jan 20 14:42:24 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:42:24 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:42:24 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:42:24.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:42:25 compute-0 nova_compute[250018]: 2026-01-20 14:42:25.045 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:42:25 compute-0 nova_compute[250018]: 2026-01-20 14:42:25.526 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:25 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1622: 321 pgs: 321 active+clean; 386 MiB data, 795 MiB used, 20 GiB / 21 GiB avail; 561 KiB/s rd, 4.1 MiB/s wr, 103 op/s
Jan 20 14:42:25 compute-0 ceph-mon[74360]: pgmap v1622: 321 pgs: 321 active+clean; 386 MiB data, 795 MiB used, 20 GiB / 21 GiB avail; 561 KiB/s rd, 4.1 MiB/s wr, 103 op/s
Jan 20 14:42:25 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:42:25 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:42:25 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:42:25.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:42:25 compute-0 nova_compute[250018]: 2026-01-20 14:42:25.996 250022 DEBUG nova.compute.manager [req-8ef92bfb-02d7-498f-8926-44bb7d094b03 req-78a51f7d-ef54-4f32-9f98-3b92d1220ed7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 1951432b-2c0c-4d1b-90df-d94dcf9fc32e] Received event network-vif-plugged-f6e1030d-5508-4e83-92ce-0a723132eb45 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:42:25 compute-0 nova_compute[250018]: 2026-01-20 14:42:25.996 250022 DEBUG oslo_concurrency.lockutils [req-8ef92bfb-02d7-498f-8926-44bb7d094b03 req-78a51f7d-ef54-4f32-9f98-3b92d1220ed7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "1951432b-2c0c-4d1b-90df-d94dcf9fc32e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:42:25 compute-0 nova_compute[250018]: 2026-01-20 14:42:25.997 250022 DEBUG oslo_concurrency.lockutils [req-8ef92bfb-02d7-498f-8926-44bb7d094b03 req-78a51f7d-ef54-4f32-9f98-3b92d1220ed7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "1951432b-2c0c-4d1b-90df-d94dcf9fc32e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:42:25 compute-0 nova_compute[250018]: 2026-01-20 14:42:25.997 250022 DEBUG oslo_concurrency.lockutils [req-8ef92bfb-02d7-498f-8926-44bb7d094b03 req-78a51f7d-ef54-4f32-9f98-3b92d1220ed7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "1951432b-2c0c-4d1b-90df-d94dcf9fc32e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:42:25 compute-0 nova_compute[250018]: 2026-01-20 14:42:25.997 250022 DEBUG nova.compute.manager [req-8ef92bfb-02d7-498f-8926-44bb7d094b03 req-78a51f7d-ef54-4f32-9f98-3b92d1220ed7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 1951432b-2c0c-4d1b-90df-d94dcf9fc32e] No waiting events found dispatching network-vif-plugged-f6e1030d-5508-4e83-92ce-0a723132eb45 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:42:25 compute-0 nova_compute[250018]: 2026-01-20 14:42:25.998 250022 WARNING nova.compute.manager [req-8ef92bfb-02d7-498f-8926-44bb7d094b03 req-78a51f7d-ef54-4f32-9f98-3b92d1220ed7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 1951432b-2c0c-4d1b-90df-d94dcf9fc32e] Received unexpected event network-vif-plugged-f6e1030d-5508-4e83-92ce-0a723132eb45 for instance with vm_state active and task_state None.
Jan 20 14:42:26 compute-0 sudo[295453]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:42:26 compute-0 sudo[295453]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:42:26 compute-0 sudo[295453]: pam_unix(sudo:session): session closed for user root
Jan 20 14:42:26 compute-0 sudo[295478]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:42:26 compute-0 sudo[295478]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:42:26 compute-0 sudo[295478]: pam_unix(sudo:session): session closed for user root
Jan 20 14:42:26 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:42:26 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:42:26 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:42:26.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:42:27 compute-0 nova_compute[250018]: 2026-01-20 14:42:27.073 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:27 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1623: 321 pgs: 321 active+clean; 386 MiB data, 795 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 3.2 MiB/s wr, 116 op/s
Jan 20 14:42:27 compute-0 ceph-mon[74360]: pgmap v1623: 321 pgs: 321 active+clean; 386 MiB data, 795 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 3.2 MiB/s wr, 116 op/s
Jan 20 14:42:27 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:42:27 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:42:27 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:42:27.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:42:28 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:42:28 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:42:28 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:42:28 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:42:28.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:42:29 compute-0 nova_compute[250018]: 2026-01-20 14:42:29.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:42:29 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/311125799' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:42:29 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1624: 321 pgs: 321 active+clean; 386 MiB data, 795 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 2.1 MiB/s wr, 148 op/s
Jan 20 14:42:29 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:42:29 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:42:29 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:42:29.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:42:30 compute-0 nova_compute[250018]: 2026-01-20 14:42:30.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:42:30 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/4237730022' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:42:30 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/3877248440' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:42:30 compute-0 ceph-mon[74360]: pgmap v1624: 321 pgs: 321 active+clean; 386 MiB data, 795 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 2.1 MiB/s wr, 148 op/s
Jan 20 14:42:30 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/2794071667' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:42:30 compute-0 nova_compute[250018]: 2026-01-20 14:42:30.528 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:30 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:42:30 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:42:30 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:42:30.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:42:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:30.754 160071 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:42:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:30.755 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:42:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:30.756 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:42:31 compute-0 nova_compute[250018]: 2026-01-20 14:42:31.020 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:31 compute-0 nova_compute[250018]: 2026-01-20 14:42:31.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:42:31 compute-0 nova_compute[250018]: 2026-01-20 14:42:31.077 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:42:31 compute-0 nova_compute[250018]: 2026-01-20 14:42:31.078 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:42:31 compute-0 nova_compute[250018]: 2026-01-20 14:42:31.078 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:42:31 compute-0 nova_compute[250018]: 2026-01-20 14:42:31.078 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 14:42:31 compute-0 nova_compute[250018]: 2026-01-20 14:42:31.079 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:42:31 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:42:31 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/553078605' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:42:31 compute-0 nova_compute[250018]: 2026-01-20 14:42:31.484 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.405s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:42:31 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/553078605' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:42:31 compute-0 nova_compute[250018]: 2026-01-20 14:42:31.581 250022 DEBUG nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] skipping disk for instance-00000045 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:42:31 compute-0 nova_compute[250018]: 2026-01-20 14:42:31.582 250022 DEBUG nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] skipping disk for instance-00000045 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:42:31 compute-0 nova_compute[250018]: 2026-01-20 14:42:31.586 250022 DEBUG nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] skipping disk for instance-0000004a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:42:31 compute-0 nova_compute[250018]: 2026-01-20 14:42:31.586 250022 DEBUG nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] skipping disk for instance-0000004a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:42:31 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1625: 321 pgs: 321 active+clean; 386 MiB data, 795 MiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 1.8 MiB/s wr, 155 op/s
Jan 20 14:42:31 compute-0 nova_compute[250018]: 2026-01-20 14:42:31.755 250022 WARNING nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:42:31 compute-0 nova_compute[250018]: 2026-01-20 14:42:31.757 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4224MB free_disk=20.81378173828125GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 14:42:31 compute-0 nova_compute[250018]: 2026-01-20 14:42:31.758 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:42:31 compute-0 nova_compute[250018]: 2026-01-20 14:42:31.758 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:42:31 compute-0 nova_compute[250018]: 2026-01-20 14:42:31.909 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Instance 86aa2fb7-c532-46b4-a02b-8070608dfe6b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 14:42:31 compute-0 nova_compute[250018]: 2026-01-20 14:42:31.909 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Instance 1951432b-2c0c-4d1b-90df-d94dcf9fc32e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 14:42:31 compute-0 nova_compute[250018]: 2026-01-20 14:42:31.909 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 14:42:31 compute-0 nova_compute[250018]: 2026-01-20 14:42:31.910 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 14:42:31 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:42:31 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:42:31 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:42:31.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:42:31 compute-0 nova_compute[250018]: 2026-01-20 14:42:31.989 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:42:32 compute-0 nova_compute[250018]: 2026-01-20 14:42:32.126 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:32 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:42:32 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/391396376' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:42:32 compute-0 nova_compute[250018]: 2026-01-20 14:42:32.468 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:42:32 compute-0 nova_compute[250018]: 2026-01-20 14:42:32.475 250022 DEBUG nova.compute.provider_tree [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:42:32 compute-0 ceph-mon[74360]: pgmap v1625: 321 pgs: 321 active+clean; 386 MiB data, 795 MiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 1.8 MiB/s wr, 155 op/s
Jan 20 14:42:32 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/391396376' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:42:32 compute-0 nova_compute[250018]: 2026-01-20 14:42:32.627 250022 DEBUG nova.scheduler.client.report [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:42:32 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:42:32 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:42:32 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:42:32.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:42:32 compute-0 nova_compute[250018]: 2026-01-20 14:42:32.716 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 14:42:32 compute-0 nova_compute[250018]: 2026-01-20 14:42:32.721 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.962s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:42:33 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:42:33 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/3354975338' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:42:33 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1626: 321 pgs: 321 active+clean; 386 MiB data, 795 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 1.3 MiB/s wr, 175 op/s
Jan 20 14:42:33 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:42:33 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:42:33 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:42:33.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:42:34 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:42:34 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:42:34 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:42:34.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:42:34 compute-0 nova_compute[250018]: 2026-01-20 14:42:34.722 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:42:34 compute-0 nova_compute[250018]: 2026-01-20 14:42:34.723 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:42:34 compute-0 nova_compute[250018]: 2026-01-20 14:42:34.723 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 14:42:35 compute-0 nova_compute[250018]: 2026-01-20 14:42:35.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:42:35 compute-0 ceph-mon[74360]: pgmap v1626: 321 pgs: 321 active+clean; 386 MiB data, 795 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 1.3 MiB/s wr, 175 op/s
Jan 20 14:42:35 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/3903070332' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:42:35 compute-0 nova_compute[250018]: 2026-01-20 14:42:35.530 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:35 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1627: 321 pgs: 321 active+clean; 386 MiB data, 795 MiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 82 KiB/s wr, 179 op/s
Jan 20 14:42:35 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:42:35 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:42:35 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:42:35.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:42:36 compute-0 ceph-mon[74360]: pgmap v1627: 321 pgs: 321 active+clean; 386 MiB data, 795 MiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 82 KiB/s wr, 179 op/s
Jan 20 14:42:36 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:42:36 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:42:36 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:42:36.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:42:37 compute-0 nova_compute[250018]: 2026-01-20 14:42:37.052 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:42:37 compute-0 nova_compute[250018]: 2026-01-20 14:42:37.052 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 14:42:37 compute-0 nova_compute[250018]: 2026-01-20 14:42:37.052 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 14:42:37 compute-0 nova_compute[250018]: 2026-01-20 14:42:37.128 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:37 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/2052746815' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:42:37 compute-0 nova_compute[250018]: 2026-01-20 14:42:37.515 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:37 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:37.516 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=25, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=24) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:42:37 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:37.518 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 14:42:37 compute-0 nova_compute[250018]: 2026-01-20 14:42:37.525 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "refresh_cache-86aa2fb7-c532-46b4-a02b-8070608dfe6b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:42:37 compute-0 nova_compute[250018]: 2026-01-20 14:42:37.525 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquired lock "refresh_cache-86aa2fb7-c532-46b4-a02b-8070608dfe6b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:42:37 compute-0 nova_compute[250018]: 2026-01-20 14:42:37.526 250022 DEBUG nova.network.neutron [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] [instance: 86aa2fb7-c532-46b4-a02b-8070608dfe6b] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 20 14:42:37 compute-0 nova_compute[250018]: 2026-01-20 14:42:37.527 250022 DEBUG nova.objects.instance [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 86aa2fb7-c532-46b4-a02b-8070608dfe6b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:42:37 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1628: 321 pgs: 321 active+clean; 372 MiB data, 803 MiB used, 20 GiB / 21 GiB avail; 5.1 MiB/s rd, 627 KiB/s wr, 216 op/s
Jan 20 14:42:37 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:42:37 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:42:37 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:42:37.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:42:38 compute-0 ovn_controller[148666]: 2026-01-20T14:42:38Z|00030|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:8d:b9:dd 10.100.0.4
Jan 20 14:42:38 compute-0 ovn_controller[148666]: 2026-01-20T14:42:38Z|00031|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:8d:b9:dd 10.100.0.4
Jan 20 14:42:38 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:42:38 compute-0 ceph-mon[74360]: pgmap v1628: 321 pgs: 321 active+clean; 372 MiB data, 803 MiB used, 20 GiB / 21 GiB avail; 5.1 MiB/s rd, 627 KiB/s wr, 216 op/s
Jan 20 14:42:38 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/2781861380' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:42:38 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:42:38 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:42:38 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:42:38.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:42:39 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1629: 321 pgs: 321 active+clean; 362 MiB data, 800 MiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 1.5 MiB/s wr, 249 op/s
Jan 20 14:42:39 compute-0 ceph-mon[74360]: pgmap v1629: 321 pgs: 321 active+clean; 362 MiB data, 800 MiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 1.5 MiB/s wr, 249 op/s
Jan 20 14:42:39 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:42:39 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:42:39 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:42:39.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:42:40 compute-0 nova_compute[250018]: 2026-01-20 14:42:40.038 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:40 compute-0 nova_compute[250018]: 2026-01-20 14:42:40.192 250022 DEBUG nova.network.neutron [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] [instance: 86aa2fb7-c532-46b4-a02b-8070608dfe6b] Updating instance_info_cache with network_info: [{"id": "2a804e48-646b-4b9e-8a59-155024ec39ac", "address": "fa:16:3e:48:06:d8", "network": {"id": "fc21b99b-4e34-422c-be05-0a440009dac4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-808285772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3f93fd4b2154dda9f38e62334904303", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a804e48-64", "ovs_interfaceid": "2a804e48-646b-4b9e-8a59-155024ec39ac", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:42:40 compute-0 nova_compute[250018]: 2026-01-20 14:42:40.216 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Releasing lock "refresh_cache-86aa2fb7-c532-46b4-a02b-8070608dfe6b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:42:40 compute-0 nova_compute[250018]: 2026-01-20 14:42:40.216 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] [instance: 86aa2fb7-c532-46b4-a02b-8070608dfe6b] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 20 14:42:40 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:40.519 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=367c1a2c-b16a-4828-ab5a-626bb50023b4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '25'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:42:40 compute-0 nova_compute[250018]: 2026-01-20 14:42:40.532 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:40 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:42:40 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:42:40 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:42:40.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:42:41 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/2242138520' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:42:41 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1630: 321 pgs: 321 active+clean; 354 MiB data, 825 MiB used, 20 GiB / 21 GiB avail; 5.1 MiB/s rd, 3.4 MiB/s wr, 311 op/s
Jan 20 14:42:41 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:42:41 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:42:41 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:42:41.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:42:42 compute-0 nova_compute[250018]: 2026-01-20 14:42:42.057 250022 DEBUG oslo_concurrency.lockutils [None req-be222477-d768-4a32-8bb5-84b5160141b0 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Acquiring lock "86aa2fb7-c532-46b4-a02b-8070608dfe6b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:42:42 compute-0 nova_compute[250018]: 2026-01-20 14:42:42.058 250022 DEBUG oslo_concurrency.lockutils [None req-be222477-d768-4a32-8bb5-84b5160141b0 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Lock "86aa2fb7-c532-46b4-a02b-8070608dfe6b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:42:42 compute-0 nova_compute[250018]: 2026-01-20 14:42:42.059 250022 DEBUG oslo_concurrency.lockutils [None req-be222477-d768-4a32-8bb5-84b5160141b0 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Acquiring lock "86aa2fb7-c532-46b4-a02b-8070608dfe6b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:42:42 compute-0 nova_compute[250018]: 2026-01-20 14:42:42.059 250022 DEBUG oslo_concurrency.lockutils [None req-be222477-d768-4a32-8bb5-84b5160141b0 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Lock "86aa2fb7-c532-46b4-a02b-8070608dfe6b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:42:42 compute-0 nova_compute[250018]: 2026-01-20 14:42:42.060 250022 DEBUG oslo_concurrency.lockutils [None req-be222477-d768-4a32-8bb5-84b5160141b0 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Lock "86aa2fb7-c532-46b4-a02b-8070608dfe6b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:42:42 compute-0 nova_compute[250018]: 2026-01-20 14:42:42.062 250022 INFO nova.compute.manager [None req-be222477-d768-4a32-8bb5-84b5160141b0 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: 86aa2fb7-c532-46b4-a02b-8070608dfe6b] Terminating instance
Jan 20 14:42:42 compute-0 nova_compute[250018]: 2026-01-20 14:42:42.064 250022 DEBUG nova.compute.manager [None req-be222477-d768-4a32-8bb5-84b5160141b0 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: 86aa2fb7-c532-46b4-a02b-8070608dfe6b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 14:42:42 compute-0 nova_compute[250018]: 2026-01-20 14:42:42.130 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:42 compute-0 kernel: tap2a804e48-64 (unregistering): left promiscuous mode
Jan 20 14:42:42 compute-0 NetworkManager[48960]: <info>  [1768920162.1389] device (tap2a804e48-64): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 14:42:42 compute-0 ovn_controller[148666]: 2026-01-20T14:42:42Z|00234|binding|INFO|Releasing lport 2a804e48-646b-4b9e-8a59-155024ec39ac from this chassis (sb_readonly=0)
Jan 20 14:42:42 compute-0 ovn_controller[148666]: 2026-01-20T14:42:42Z|00235|binding|INFO|Setting lport 2a804e48-646b-4b9e-8a59-155024ec39ac down in Southbound
Jan 20 14:42:42 compute-0 nova_compute[250018]: 2026-01-20 14:42:42.151 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:42 compute-0 ovn_controller[148666]: 2026-01-20T14:42:42Z|00236|binding|INFO|Removing iface tap2a804e48-64 ovn-installed in OVS
Jan 20 14:42:42 compute-0 nova_compute[250018]: 2026-01-20 14:42:42.153 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:42 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:42.160 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:48:06:d8 10.100.0.9'], port_security=['fa:16:3e:48:06:d8 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '86aa2fb7-c532-46b4-a02b-8070608dfe6b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fc21b99b-4e34-422c-be05-0a440009dac4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e3f93fd4b2154dda9f38e62334904303', 'neutron:revision_number': '4', 'neutron:security_group_ids': '52b08fd6-6aa8-4470-b89c-ece04e1c959e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7af6b6bc-3cbd-48be-9f10-23ec011e0426, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=2a804e48-646b-4b9e-8a59-155024ec39ac) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:42:42 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:42.162 160071 INFO neutron.agent.ovn.metadata.agent [-] Port 2a804e48-646b-4b9e-8a59-155024ec39ac in datapath fc21b99b-4e34-422c-be05-0a440009dac4 unbound from our chassis
Jan 20 14:42:42 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:42.164 160071 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network fc21b99b-4e34-422c-be05-0a440009dac4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 14:42:42 compute-0 nova_compute[250018]: 2026-01-20 14:42:42.166 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:42 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:42.165 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[976a9a3b-af6a-4b64-b02b-b589b8aedce1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:42:42 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:42.166 160071 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-fc21b99b-4e34-422c-be05-0a440009dac4 namespace which is not needed anymore
Jan 20 14:42:42 compute-0 systemd[1]: machine-qemu\x2d31\x2dinstance\x2d00000045.scope: Deactivated successfully.
Jan 20 14:42:42 compute-0 systemd[1]: machine-qemu\x2d31\x2dinstance\x2d00000045.scope: Consumed 17.340s CPU time.
Jan 20 14:42:42 compute-0 systemd-machined[216401]: Machine qemu-31-instance-00000045 terminated.
Jan 20 14:42:42 compute-0 nova_compute[250018]: 2026-01-20 14:42:42.286 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:42 compute-0 nova_compute[250018]: 2026-01-20 14:42:42.290 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:42 compute-0 nova_compute[250018]: 2026-01-20 14:42:42.300 250022 INFO nova.virt.libvirt.driver [-] [instance: 86aa2fb7-c532-46b4-a02b-8070608dfe6b] Instance destroyed successfully.
Jan 20 14:42:42 compute-0 nova_compute[250018]: 2026-01-20 14:42:42.300 250022 DEBUG nova.objects.instance [None req-be222477-d768-4a32-8bb5-84b5160141b0 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Lazy-loading 'resources' on Instance uuid 86aa2fb7-c532-46b4-a02b-8070608dfe6b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:42:42 compute-0 neutron-haproxy-ovnmeta-fc21b99b-4e34-422c-be05-0a440009dac4[293244]: [NOTICE]   (293248) : haproxy version is 2.8.14-c23fe91
Jan 20 14:42:42 compute-0 neutron-haproxy-ovnmeta-fc21b99b-4e34-422c-be05-0a440009dac4[293244]: [NOTICE]   (293248) : path to executable is /usr/sbin/haproxy
Jan 20 14:42:42 compute-0 neutron-haproxy-ovnmeta-fc21b99b-4e34-422c-be05-0a440009dac4[293244]: [WARNING]  (293248) : Exiting Master process...
Jan 20 14:42:42 compute-0 neutron-haproxy-ovnmeta-fc21b99b-4e34-422c-be05-0a440009dac4[293244]: [ALERT]    (293248) : Current worker (293250) exited with code 143 (Terminated)
Jan 20 14:42:42 compute-0 neutron-haproxy-ovnmeta-fc21b99b-4e34-422c-be05-0a440009dac4[293244]: [WARNING]  (293248) : All workers exited. Exiting... (0)
Jan 20 14:42:42 compute-0 systemd[1]: libpod-a737407032544be2be1f823ba2008e20f85dbfc3beab995281daa68c539911b1.scope: Deactivated successfully.
Jan 20 14:42:42 compute-0 podman[295580]: 2026-01-20 14:42:42.328768506 +0000 UTC m=+0.056421330 container died a737407032544be2be1f823ba2008e20f85dbfc3beab995281daa68c539911b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fc21b99b-4e34-422c-be05-0a440009dac4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 20 14:42:42 compute-0 nova_compute[250018]: 2026-01-20 14:42:42.329 250022 DEBUG nova.virt.libvirt.vif [None req-be222477-d768-4a32-8bb5-84b5160141b0 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:40:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-857473992',display_name='tempest-tempest.common.compute-instance-857473992',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-857473992',id=69,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDk9SkW5N7MhrGaZslG18EJ7xoBof9PQa4upjUw+XxfbO5rNOjJYMJtJMRGPfgbl1pwAZZD7LHjNNMRFKVo+T+C8Rnr+HXWsPYQmvPGwjjZ++NXvRdqES1LIbRDiwaFMJQ==',key_name='tempest-keypair-1970360297',keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:41:04Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e3f93fd4b2154dda9f38e62334904303',ramdisk_id='',reservation_id='r-2xutoy4e',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-305746947',owner_user_name='tempest-AttachInterfacesTestJSON-305746947-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T14:41:04Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c8a9fb458d27434495a77a94827b6097',uuid=86aa2fb7-c532-46b4-a02b-8070608dfe6b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2a804e48-646b-4b9e-8a59-155024ec39ac", "address": "fa:16:3e:48:06:d8", "network": {"id": "fc21b99b-4e34-422c-be05-0a440009dac4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-808285772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3f93fd4b2154dda9f38e62334904303", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a804e48-64", "ovs_interfaceid": "2a804e48-646b-4b9e-8a59-155024ec39ac", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 14:42:42 compute-0 nova_compute[250018]: 2026-01-20 14:42:42.329 250022 DEBUG nova.network.os_vif_util [None req-be222477-d768-4a32-8bb5-84b5160141b0 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Converting VIF {"id": "2a804e48-646b-4b9e-8a59-155024ec39ac", "address": "fa:16:3e:48:06:d8", "network": {"id": "fc21b99b-4e34-422c-be05-0a440009dac4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-808285772-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3f93fd4b2154dda9f38e62334904303", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a804e48-64", "ovs_interfaceid": "2a804e48-646b-4b9e-8a59-155024ec39ac", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:42:42 compute-0 nova_compute[250018]: 2026-01-20 14:42:42.330 250022 DEBUG nova.network.os_vif_util [None req-be222477-d768-4a32-8bb5-84b5160141b0 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:48:06:d8,bridge_name='br-int',has_traffic_filtering=True,id=2a804e48-646b-4b9e-8a59-155024ec39ac,network=Network(fc21b99b-4e34-422c-be05-0a440009dac4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a804e48-64') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:42:42 compute-0 nova_compute[250018]: 2026-01-20 14:42:42.330 250022 DEBUG os_vif [None req-be222477-d768-4a32-8bb5-84b5160141b0 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:48:06:d8,bridge_name='br-int',has_traffic_filtering=True,id=2a804e48-646b-4b9e-8a59-155024ec39ac,network=Network(fc21b99b-4e34-422c-be05-0a440009dac4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a804e48-64') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 14:42:42 compute-0 nova_compute[250018]: 2026-01-20 14:42:42.332 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:42 compute-0 nova_compute[250018]: 2026-01-20 14:42:42.332 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2a804e48-64, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:42:42 compute-0 nova_compute[250018]: 2026-01-20 14:42:42.335 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:42:42 compute-0 nova_compute[250018]: 2026-01-20 14:42:42.336 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:42 compute-0 nova_compute[250018]: 2026-01-20 14:42:42.338 250022 INFO os_vif [None req-be222477-d768-4a32-8bb5-84b5160141b0 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:48:06:d8,bridge_name='br-int',has_traffic_filtering=True,id=2a804e48-646b-4b9e-8a59-155024ec39ac,network=Network(fc21b99b-4e34-422c-be05-0a440009dac4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a804e48-64')
Jan 20 14:42:42 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a737407032544be2be1f823ba2008e20f85dbfc3beab995281daa68c539911b1-userdata-shm.mount: Deactivated successfully.
Jan 20 14:42:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-3a8cede45f0fd54d3f12aa6d101bb0edb6fb9da91ec64a2e1694006f0904857a-merged.mount: Deactivated successfully.
Jan 20 14:42:42 compute-0 podman[295580]: 2026-01-20 14:42:42.376349485 +0000 UTC m=+0.104002299 container cleanup a737407032544be2be1f823ba2008e20f85dbfc3beab995281daa68c539911b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fc21b99b-4e34-422c-be05-0a440009dac4, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 20 14:42:42 compute-0 systemd[1]: libpod-conmon-a737407032544be2be1f823ba2008e20f85dbfc3beab995281daa68c539911b1.scope: Deactivated successfully.
Jan 20 14:42:42 compute-0 ceph-mon[74360]: pgmap v1630: 321 pgs: 321 active+clean; 354 MiB data, 825 MiB used, 20 GiB / 21 GiB avail; 5.1 MiB/s rd, 3.4 MiB/s wr, 311 op/s
Jan 20 14:42:42 compute-0 podman[295634]: 2026-01-20 14:42:42.451545743 +0000 UTC m=+0.050683025 container remove a737407032544be2be1f823ba2008e20f85dbfc3beab995281daa68c539911b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fc21b99b-4e34-422c-be05-0a440009dac4, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 14:42:42 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:42.457 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[e12af5ef-0f6d-444f-93b9-7cbaced1b19b]: (4, ('Tue Jan 20 02:42:42 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-fc21b99b-4e34-422c-be05-0a440009dac4 (a737407032544be2be1f823ba2008e20f85dbfc3beab995281daa68c539911b1)\na737407032544be2be1f823ba2008e20f85dbfc3beab995281daa68c539911b1\nTue Jan 20 02:42:42 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-fc21b99b-4e34-422c-be05-0a440009dac4 (a737407032544be2be1f823ba2008e20f85dbfc3beab995281daa68c539911b1)\na737407032544be2be1f823ba2008e20f85dbfc3beab995281daa68c539911b1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:42:42 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:42.460 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[364193e6-2659-4dc4-bcd1-8c44b2cb63a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:42:42 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:42.461 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfc21b99b-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:42:42 compute-0 nova_compute[250018]: 2026-01-20 14:42:42.463 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:42 compute-0 kernel: tapfc21b99b-40: left promiscuous mode
Jan 20 14:42:42 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:42.467 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[a25cd28a-0d5c-478d-81e8-de2e13268434]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:42:42 compute-0 nova_compute[250018]: 2026-01-20 14:42:42.479 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:42 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:42.486 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[5cf0a725-4198-4024-b03b-8ee67ab4fbb4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:42:42 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:42.488 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[23edefd0-c891-426a-8dd4-dcc50a6cd729]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:42:42 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:42.503 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[0320c32c-3967-48ec-b785-0539bf072db0]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 600429, 'reachable_time': 42794, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 295649, 'error': None, 'target': 'ovnmeta-fc21b99b-4e34-422c-be05-0a440009dac4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:42:42 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:42.506 160517 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-fc21b99b-4e34-422c-be05-0a440009dac4 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 14:42:42 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:42.506 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[7059144c-61c5-4d62-ac2e-3078b14586f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:42:42 compute-0 systemd[1]: run-netns-ovnmeta\x2dfc21b99b\x2d4e34\x2d422c\x2dbe05\x2d0a440009dac4.mount: Deactivated successfully.
Jan 20 14:42:42 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:42:42 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:42:42 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:42:42.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:42:42 compute-0 nova_compute[250018]: 2026-01-20 14:42:42.753 250022 INFO nova.virt.libvirt.driver [None req-be222477-d768-4a32-8bb5-84b5160141b0 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: 86aa2fb7-c532-46b4-a02b-8070608dfe6b] Deleting instance files /var/lib/nova/instances/86aa2fb7-c532-46b4-a02b-8070608dfe6b_del
Jan 20 14:42:42 compute-0 nova_compute[250018]: 2026-01-20 14:42:42.754 250022 INFO nova.virt.libvirt.driver [None req-be222477-d768-4a32-8bb5-84b5160141b0 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: 86aa2fb7-c532-46b4-a02b-8070608dfe6b] Deletion of /var/lib/nova/instances/86aa2fb7-c532-46b4-a02b-8070608dfe6b_del complete
Jan 20 14:42:42 compute-0 nova_compute[250018]: 2026-01-20 14:42:42.924 250022 INFO nova.compute.manager [None req-be222477-d768-4a32-8bb5-84b5160141b0 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] [instance: 86aa2fb7-c532-46b4-a02b-8070608dfe6b] Took 0.86 seconds to destroy the instance on the hypervisor.
Jan 20 14:42:42 compute-0 nova_compute[250018]: 2026-01-20 14:42:42.925 250022 DEBUG oslo.service.loopingcall [None req-be222477-d768-4a32-8bb5-84b5160141b0 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 14:42:42 compute-0 nova_compute[250018]: 2026-01-20 14:42:42.925 250022 DEBUG nova.compute.manager [-] [instance: 86aa2fb7-c532-46b4-a02b-8070608dfe6b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 14:42:42 compute-0 nova_compute[250018]: 2026-01-20 14:42:42.925 250022 DEBUG nova.network.neutron [-] [instance: 86aa2fb7-c532-46b4-a02b-8070608dfe6b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 14:42:43 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:42:43 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/1080104359' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:42:43 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1631: 321 pgs: 321 active+clean; 357 MiB data, 832 MiB used, 20 GiB / 21 GiB avail; 5.0 MiB/s rd, 4.3 MiB/s wr, 328 op/s
Jan 20 14:42:43 compute-0 nova_compute[250018]: 2026-01-20 14:42:43.922 250022 DEBUG nova.network.neutron [-] [instance: 86aa2fb7-c532-46b4-a02b-8070608dfe6b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:42:43 compute-0 nova_compute[250018]: 2026-01-20 14:42:43.981 250022 INFO nova.compute.manager [-] [instance: 86aa2fb7-c532-46b4-a02b-8070608dfe6b] Took 1.06 seconds to deallocate network for instance.
Jan 20 14:42:43 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:42:43 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:42:43 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:42:43.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:42:44 compute-0 nova_compute[250018]: 2026-01-20 14:42:44.034 250022 DEBUG oslo_concurrency.lockutils [None req-be222477-d768-4a32-8bb5-84b5160141b0 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:42:44 compute-0 nova_compute[250018]: 2026-01-20 14:42:44.035 250022 DEBUG oslo_concurrency.lockutils [None req-be222477-d768-4a32-8bb5-84b5160141b0 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:42:44 compute-0 nova_compute[250018]: 2026-01-20 14:42:44.129 250022 DEBUG oslo_concurrency.processutils [None req-be222477-d768-4a32-8bb5-84b5160141b0 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:42:44 compute-0 nova_compute[250018]: 2026-01-20 14:42:44.447 250022 DEBUG nova.compute.manager [req-cbca2c07-3c0c-459e-ba9c-3cbcd66be508 req-bc2e1d81-e6b6-4811-8439-898bec189b71 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 86aa2fb7-c532-46b4-a02b-8070608dfe6b] Received event network-vif-deleted-2a804e48-646b-4b9e-8a59-155024ec39ac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:42:44 compute-0 sshd-session[295652]: Invalid user ubuntu from 157.245.78.139 port 56556
Jan 20 14:42:44 compute-0 sshd-session[295652]: Connection closed by invalid user ubuntu 157.245.78.139 port 56556 [preauth]
Jan 20 14:42:44 compute-0 nova_compute[250018]: 2026-01-20 14:42:44.557 250022 DEBUG oslo_concurrency.lockutils [None req-14c8e0ab-d1cd-4169-8c7a-7d41dae46379 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Acquiring lock "1951432b-2c0c-4d1b-90df-d94dcf9fc32e" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:42:44 compute-0 nova_compute[250018]: 2026-01-20 14:42:44.557 250022 DEBUG oslo_concurrency.lockutils [None req-14c8e0ab-d1cd-4169-8c7a-7d41dae46379 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Lock "1951432b-2c0c-4d1b-90df-d94dcf9fc32e" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:42:44 compute-0 nova_compute[250018]: 2026-01-20 14:42:44.558 250022 DEBUG nova.compute.manager [None req-14c8e0ab-d1cd-4169-8c7a-7d41dae46379 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] [instance: 1951432b-2c0c-4d1b-90df-d94dcf9fc32e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:42:44 compute-0 nova_compute[250018]: 2026-01-20 14:42:44.562 250022 DEBUG nova.compute.manager [None req-14c8e0ab-d1cd-4169-8c7a-7d41dae46379 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] [instance: 1951432b-2c0c-4d1b-90df-d94dcf9fc32e] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Jan 20 14:42:44 compute-0 nova_compute[250018]: 2026-01-20 14:42:44.563 250022 DEBUG nova.objects.instance [None req-14c8e0ab-d1cd-4169-8c7a-7d41dae46379 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Lazy-loading 'flavor' on Instance uuid 1951432b-2c0c-4d1b-90df-d94dcf9fc32e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:42:44 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:42:44 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3388504398' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:42:44 compute-0 nova_compute[250018]: 2026-01-20 14:42:44.591 250022 DEBUG oslo_concurrency.processutils [None req-be222477-d768-4a32-8bb5-84b5160141b0 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:42:44 compute-0 nova_compute[250018]: 2026-01-20 14:42:44.594 250022 DEBUG nova.virt.libvirt.driver [None req-14c8e0ab-d1cd-4169-8c7a-7d41dae46379 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] [instance: 1951432b-2c0c-4d1b-90df-d94dcf9fc32e] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Jan 20 14:42:44 compute-0 nova_compute[250018]: 2026-01-20 14:42:44.597 250022 DEBUG nova.compute.provider_tree [None req-be222477-d768-4a32-8bb5-84b5160141b0 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:42:44 compute-0 nova_compute[250018]: 2026-01-20 14:42:44.607 250022 DEBUG nova.compute.manager [req-b614dd09-406c-47d7-a218-578e6ef5bb1a req-69fc1dbf-f1ba-4a78-b6b2-a2df4196d958 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 86aa2fb7-c532-46b4-a02b-8070608dfe6b] Received event network-vif-unplugged-2a804e48-646b-4b9e-8a59-155024ec39ac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:42:44 compute-0 nova_compute[250018]: 2026-01-20 14:42:44.607 250022 DEBUG oslo_concurrency.lockutils [req-b614dd09-406c-47d7-a218-578e6ef5bb1a req-69fc1dbf-f1ba-4a78-b6b2-a2df4196d958 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "86aa2fb7-c532-46b4-a02b-8070608dfe6b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:42:44 compute-0 nova_compute[250018]: 2026-01-20 14:42:44.607 250022 DEBUG oslo_concurrency.lockutils [req-b614dd09-406c-47d7-a218-578e6ef5bb1a req-69fc1dbf-f1ba-4a78-b6b2-a2df4196d958 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "86aa2fb7-c532-46b4-a02b-8070608dfe6b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:42:44 compute-0 nova_compute[250018]: 2026-01-20 14:42:44.608 250022 DEBUG oslo_concurrency.lockutils [req-b614dd09-406c-47d7-a218-578e6ef5bb1a req-69fc1dbf-f1ba-4a78-b6b2-a2df4196d958 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "86aa2fb7-c532-46b4-a02b-8070608dfe6b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:42:44 compute-0 nova_compute[250018]: 2026-01-20 14:42:44.608 250022 DEBUG nova.compute.manager [req-b614dd09-406c-47d7-a218-578e6ef5bb1a req-69fc1dbf-f1ba-4a78-b6b2-a2df4196d958 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 86aa2fb7-c532-46b4-a02b-8070608dfe6b] No waiting events found dispatching network-vif-unplugged-2a804e48-646b-4b9e-8a59-155024ec39ac pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:42:44 compute-0 nova_compute[250018]: 2026-01-20 14:42:44.608 250022 WARNING nova.compute.manager [req-b614dd09-406c-47d7-a218-578e6ef5bb1a req-69fc1dbf-f1ba-4a78-b6b2-a2df4196d958 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 86aa2fb7-c532-46b4-a02b-8070608dfe6b] Received unexpected event network-vif-unplugged-2a804e48-646b-4b9e-8a59-155024ec39ac for instance with vm_state deleted and task_state None.
Jan 20 14:42:44 compute-0 nova_compute[250018]: 2026-01-20 14:42:44.608 250022 DEBUG nova.compute.manager [req-b614dd09-406c-47d7-a218-578e6ef5bb1a req-69fc1dbf-f1ba-4a78-b6b2-a2df4196d958 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 86aa2fb7-c532-46b4-a02b-8070608dfe6b] Received event network-vif-plugged-2a804e48-646b-4b9e-8a59-155024ec39ac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:42:44 compute-0 nova_compute[250018]: 2026-01-20 14:42:44.608 250022 DEBUG oslo_concurrency.lockutils [req-b614dd09-406c-47d7-a218-578e6ef5bb1a req-69fc1dbf-f1ba-4a78-b6b2-a2df4196d958 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "86aa2fb7-c532-46b4-a02b-8070608dfe6b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:42:44 compute-0 nova_compute[250018]: 2026-01-20 14:42:44.609 250022 DEBUG oslo_concurrency.lockutils [req-b614dd09-406c-47d7-a218-578e6ef5bb1a req-69fc1dbf-f1ba-4a78-b6b2-a2df4196d958 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "86aa2fb7-c532-46b4-a02b-8070608dfe6b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:42:44 compute-0 nova_compute[250018]: 2026-01-20 14:42:44.609 250022 DEBUG oslo_concurrency.lockutils [req-b614dd09-406c-47d7-a218-578e6ef5bb1a req-69fc1dbf-f1ba-4a78-b6b2-a2df4196d958 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "86aa2fb7-c532-46b4-a02b-8070608dfe6b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:42:44 compute-0 nova_compute[250018]: 2026-01-20 14:42:44.609 250022 DEBUG nova.compute.manager [req-b614dd09-406c-47d7-a218-578e6ef5bb1a req-69fc1dbf-f1ba-4a78-b6b2-a2df4196d958 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 86aa2fb7-c532-46b4-a02b-8070608dfe6b] No waiting events found dispatching network-vif-plugged-2a804e48-646b-4b9e-8a59-155024ec39ac pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:42:44 compute-0 nova_compute[250018]: 2026-01-20 14:42:44.609 250022 WARNING nova.compute.manager [req-b614dd09-406c-47d7-a218-578e6ef5bb1a req-69fc1dbf-f1ba-4a78-b6b2-a2df4196d958 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 86aa2fb7-c532-46b4-a02b-8070608dfe6b] Received unexpected event network-vif-plugged-2a804e48-646b-4b9e-8a59-155024ec39ac for instance with vm_state deleted and task_state None.
Jan 20 14:42:44 compute-0 nova_compute[250018]: 2026-01-20 14:42:44.621 250022 DEBUG nova.scheduler.client.report [None req-be222477-d768-4a32-8bb5-84b5160141b0 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:42:44 compute-0 nova_compute[250018]: 2026-01-20 14:42:44.649 250022 DEBUG oslo_concurrency.lockutils [None req-be222477-d768-4a32-8bb5-84b5160141b0 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.614s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:42:44 compute-0 ceph-mon[74360]: pgmap v1631: 321 pgs: 321 active+clean; 357 MiB data, 832 MiB used, 20 GiB / 21 GiB avail; 5.0 MiB/s rd, 4.3 MiB/s wr, 328 op/s
Jan 20 14:42:44 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3388504398' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:42:44 compute-0 nova_compute[250018]: 2026-01-20 14:42:44.694 250022 INFO nova.scheduler.client.report [None req-be222477-d768-4a32-8bb5-84b5160141b0 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Deleted allocations for instance 86aa2fb7-c532-46b4-a02b-8070608dfe6b
Jan 20 14:42:44 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:42:44 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:42:44 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:42:44.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:42:44 compute-0 nova_compute[250018]: 2026-01-20 14:42:44.800 250022 DEBUG oslo_concurrency.lockutils [None req-be222477-d768-4a32-8bb5-84b5160141b0 c8a9fb458d27434495a77a94827b6097 e3f93fd4b2154dda9f38e62334904303 - - default default] Lock "86aa2fb7-c532-46b4-a02b-8070608dfe6b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.742s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:42:45 compute-0 nova_compute[250018]: 2026-01-20 14:42:45.421 250022 DEBUG oslo_concurrency.lockutils [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Acquiring lock "62bbc690-cb71-4cdc-93e1-cdf395abbae4" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:42:45 compute-0 nova_compute[250018]: 2026-01-20 14:42:45.422 250022 DEBUG oslo_concurrency.lockutils [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Lock "62bbc690-cb71-4cdc-93e1-cdf395abbae4" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:42:45 compute-0 nova_compute[250018]: 2026-01-20 14:42:45.445 250022 DEBUG nova.compute.manager [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: 62bbc690-cb71-4cdc-93e1-cdf395abbae4] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 14:42:45 compute-0 nova_compute[250018]: 2026-01-20 14:42:45.515 250022 DEBUG oslo_concurrency.lockutils [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:42:45 compute-0 nova_compute[250018]: 2026-01-20 14:42:45.515 250022 DEBUG oslo_concurrency.lockutils [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:42:45 compute-0 nova_compute[250018]: 2026-01-20 14:42:45.520 250022 DEBUG nova.virt.hardware [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 14:42:45 compute-0 nova_compute[250018]: 2026-01-20 14:42:45.520 250022 INFO nova.compute.claims [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: 62bbc690-cb71-4cdc-93e1-cdf395abbae4] Claim successful on node compute-0.ctlplane.example.com
Jan 20 14:42:45 compute-0 nova_compute[250018]: 2026-01-20 14:42:45.656 250022 DEBUG oslo_concurrency.processutils [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:42:45 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1632: 321 pgs: 321 active+clean; 273 MiB data, 792 MiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 4.3 MiB/s wr, 346 op/s
Jan 20 14:42:45 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:42:45 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:42:45 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:42:45.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:42:46 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:42:46 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3378286606' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:42:46 compute-0 nova_compute[250018]: 2026-01-20 14:42:46.113 250022 DEBUG oslo_concurrency.processutils [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:42:46 compute-0 nova_compute[250018]: 2026-01-20 14:42:46.119 250022 DEBUG nova.compute.provider_tree [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:42:46 compute-0 nova_compute[250018]: 2026-01-20 14:42:46.142 250022 DEBUG nova.scheduler.client.report [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:42:46 compute-0 nova_compute[250018]: 2026-01-20 14:42:46.195 250022 DEBUG oslo_concurrency.lockutils [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.680s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:42:46 compute-0 nova_compute[250018]: 2026-01-20 14:42:46.196 250022 DEBUG nova.compute.manager [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: 62bbc690-cb71-4cdc-93e1-cdf395abbae4] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 14:42:46 compute-0 sudo[295699]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:42:46 compute-0 nova_compute[250018]: 2026-01-20 14:42:46.210 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:42:46 compute-0 sudo[295699]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:42:46 compute-0 sudo[295699]: pam_unix(sudo:session): session closed for user root
Jan 20 14:42:46 compute-0 sudo[295724]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:42:46 compute-0 sudo[295724]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:42:46 compute-0 nova_compute[250018]: 2026-01-20 14:42:46.270 250022 DEBUG nova.compute.manager [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: 62bbc690-cb71-4cdc-93e1-cdf395abbae4] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 20 14:42:46 compute-0 nova_compute[250018]: 2026-01-20 14:42:46.271 250022 DEBUG nova.network.neutron [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: 62bbc690-cb71-4cdc-93e1-cdf395abbae4] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 14:42:46 compute-0 sudo[295724]: pam_unix(sudo:session): session closed for user root
Jan 20 14:42:46 compute-0 nova_compute[250018]: 2026-01-20 14:42:46.290 250022 INFO nova.virt.libvirt.driver [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: 62bbc690-cb71-4cdc-93e1-cdf395abbae4] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 14:42:46 compute-0 nova_compute[250018]: 2026-01-20 14:42:46.316 250022 DEBUG nova.compute.manager [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: 62bbc690-cb71-4cdc-93e1-cdf395abbae4] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 14:42:46 compute-0 nova_compute[250018]: 2026-01-20 14:42:46.512 250022 DEBUG nova.compute.manager [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: 62bbc690-cb71-4cdc-93e1-cdf395abbae4] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 14:42:46 compute-0 nova_compute[250018]: 2026-01-20 14:42:46.513 250022 DEBUG nova.virt.libvirt.driver [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: 62bbc690-cb71-4cdc-93e1-cdf395abbae4] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 14:42:46 compute-0 nova_compute[250018]: 2026-01-20 14:42:46.514 250022 INFO nova.virt.libvirt.driver [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: 62bbc690-cb71-4cdc-93e1-cdf395abbae4] Creating image(s)
Jan 20 14:42:46 compute-0 nova_compute[250018]: 2026-01-20 14:42:46.540 250022 DEBUG nova.storage.rbd_utils [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] rbd image 62bbc690-cb71-4cdc-93e1-cdf395abbae4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:42:46 compute-0 nova_compute[250018]: 2026-01-20 14:42:46.566 250022 DEBUG nova.storage.rbd_utils [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] rbd image 62bbc690-cb71-4cdc-93e1-cdf395abbae4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:42:46 compute-0 nova_compute[250018]: 2026-01-20 14:42:46.590 250022 DEBUG nova.storage.rbd_utils [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] rbd image 62bbc690-cb71-4cdc-93e1-cdf395abbae4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:42:46 compute-0 nova_compute[250018]: 2026-01-20 14:42:46.593 250022 DEBUG oslo_concurrency.processutils [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:42:46 compute-0 nova_compute[250018]: 2026-01-20 14:42:46.640 250022 DEBUG nova.policy [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'aa2e7857e85f483eb0d162e2ee8c2e2c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a3e022a35f604df2bbc885e498b1e206', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 20 14:42:46 compute-0 nova_compute[250018]: 2026-01-20 14:42:46.672 250022 DEBUG oslo_concurrency.processutils [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:42:46 compute-0 nova_compute[250018]: 2026-01-20 14:42:46.673 250022 DEBUG oslo_concurrency.lockutils [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Acquiring lock "82d5c1918fd7c974214c7a48c1793a7a82560462" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:42:46 compute-0 nova_compute[250018]: 2026-01-20 14:42:46.674 250022 DEBUG oslo_concurrency.lockutils [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:42:46 compute-0 nova_compute[250018]: 2026-01-20 14:42:46.674 250022 DEBUG oslo_concurrency.lockutils [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:42:46 compute-0 nova_compute[250018]: 2026-01-20 14:42:46.699 250022 DEBUG nova.storage.rbd_utils [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] rbd image 62bbc690-cb71-4cdc-93e1-cdf395abbae4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:42:46 compute-0 nova_compute[250018]: 2026-01-20 14:42:46.703 250022 DEBUG oslo_concurrency.processutils [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 62bbc690-cb71-4cdc-93e1-cdf395abbae4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:42:46 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:42:46 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:42:46 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:42:46.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:42:46 compute-0 ceph-mon[74360]: pgmap v1632: 321 pgs: 321 active+clean; 273 MiB data, 792 MiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 4.3 MiB/s wr, 346 op/s
Jan 20 14:42:46 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3378286606' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:42:46 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/265236336' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:42:46 compute-0 kernel: tapf6e1030d-55 (unregistering): left promiscuous mode
Jan 20 14:42:46 compute-0 NetworkManager[48960]: <info>  [1768920166.9680] device (tapf6e1030d-55): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 14:42:46 compute-0 ovn_controller[148666]: 2026-01-20T14:42:46Z|00237|binding|INFO|Releasing lport f6e1030d-5508-4e83-92ce-0a723132eb45 from this chassis (sb_readonly=0)
Jan 20 14:42:46 compute-0 ovn_controller[148666]: 2026-01-20T14:42:46Z|00238|binding|INFO|Setting lport f6e1030d-5508-4e83-92ce-0a723132eb45 down in Southbound
Jan 20 14:42:46 compute-0 nova_compute[250018]: 2026-01-20 14:42:46.977 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:46 compute-0 ovn_controller[148666]: 2026-01-20T14:42:46Z|00239|binding|INFO|Removing iface tapf6e1030d-55 ovn-installed in OVS
Jan 20 14:42:46 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:46.984 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8d:b9:dd 10.100.0.4'], port_security=['fa:16:3e:8d:b9:dd 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '1951432b-2c0c-4d1b-90df-d94dcf9fc32e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b36e9cab-12c6-4a09-9aab-ef2679d875ba', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4b95747114ab4043b93a260387199c91', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f18b0222-78a5-4c37-8065-772dbe5c63e1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=80e2aa5b-ecb8-4e93-992f-baaef718dd34, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=f6e1030d-5508-4e83-92ce-0a723132eb45) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:42:46 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:46.985 160071 INFO neutron.agent.ovn.metadata.agent [-] Port f6e1030d-5508-4e83-92ce-0a723132eb45 in datapath b36e9cab-12c6-4a09-9aab-ef2679d875ba unbound from our chassis
Jan 20 14:42:46 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:46.987 160071 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b36e9cab-12c6-4a09-9aab-ef2679d875ba, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 14:42:46 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:46.988 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[df26e0ca-07c9-43be-b704-7a23ecf56a35]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:42:46 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:46.988 160071 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b36e9cab-12c6-4a09-9aab-ef2679d875ba namespace which is not needed anymore
Jan 20 14:42:46 compute-0 nova_compute[250018]: 2026-01-20 14:42:46.998 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:47 compute-0 systemd[1]: machine-qemu\x2d32\x2dinstance\x2d0000004a.scope: Deactivated successfully.
Jan 20 14:42:47 compute-0 systemd[1]: machine-qemu\x2d32\x2dinstance\x2d0000004a.scope: Consumed 14.489s CPU time.
Jan 20 14:42:47 compute-0 systemd-machined[216401]: Machine qemu-32-instance-0000004a terminated.
Jan 20 14:42:47 compute-0 nova_compute[250018]: 2026-01-20 14:42:47.131 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:47 compute-0 nova_compute[250018]: 2026-01-20 14:42:47.193 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:47 compute-0 nova_compute[250018]: 2026-01-20 14:42:47.197 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:47 compute-0 neutron-haproxy-ovnmeta-b36e9cab-12c6-4a09-9aab-ef2679d875ba[295436]: [NOTICE]   (295440) : haproxy version is 2.8.14-c23fe91
Jan 20 14:42:47 compute-0 neutron-haproxy-ovnmeta-b36e9cab-12c6-4a09-9aab-ef2679d875ba[295436]: [NOTICE]   (295440) : path to executable is /usr/sbin/haproxy
Jan 20 14:42:47 compute-0 neutron-haproxy-ovnmeta-b36e9cab-12c6-4a09-9aab-ef2679d875ba[295436]: [WARNING]  (295440) : Exiting Master process...
Jan 20 14:42:47 compute-0 neutron-haproxy-ovnmeta-b36e9cab-12c6-4a09-9aab-ef2679d875ba[295436]: [WARNING]  (295440) : Exiting Master process...
Jan 20 14:42:47 compute-0 neutron-haproxy-ovnmeta-b36e9cab-12c6-4a09-9aab-ef2679d875ba[295436]: [ALERT]    (295440) : Current worker (295442) exited with code 143 (Terminated)
Jan 20 14:42:47 compute-0 neutron-haproxy-ovnmeta-b36e9cab-12c6-4a09-9aab-ef2679d875ba[295436]: [WARNING]  (295440) : All workers exited. Exiting... (0)
Jan 20 14:42:47 compute-0 systemd[1]: libpod-fbc44418705e94b57f8598626d1cffa2df5f779fd0c211c71772c96d355152a3.scope: Deactivated successfully.
Jan 20 14:42:47 compute-0 podman[295868]: 2026-01-20 14:42:47.22812098 +0000 UTC m=+0.134477545 container died fbc44418705e94b57f8598626d1cffa2df5f779fd0c211c71772c96d355152a3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b36e9cab-12c6-4a09-9aab-ef2679d875ba, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 20 14:42:47 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-fbc44418705e94b57f8598626d1cffa2df5f779fd0c211c71772c96d355152a3-userdata-shm.mount: Deactivated successfully.
Jan 20 14:42:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-eb2dfc1d76a0df60ee13772466ea990fe9c1584aa944601416485f89fa85b885-merged.mount: Deactivated successfully.
Jan 20 14:42:47 compute-0 podman[295868]: 2026-01-20 14:42:47.286718808 +0000 UTC m=+0.193075373 container cleanup fbc44418705e94b57f8598626d1cffa2df5f779fd0c211c71772c96d355152a3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b36e9cab-12c6-4a09-9aab-ef2679d875ba, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Jan 20 14:42:47 compute-0 nova_compute[250018]: 2026-01-20 14:42:47.287 250022 DEBUG oslo_concurrency.processutils [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 62bbc690-cb71-4cdc-93e1-cdf395abbae4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.584s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:42:47 compute-0 systemd[1]: libpod-conmon-fbc44418705e94b57f8598626d1cffa2df5f779fd0c211c71772c96d355152a3.scope: Deactivated successfully.
Jan 20 14:42:47 compute-0 nova_compute[250018]: 2026-01-20 14:42:47.350 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:47 compute-0 nova_compute[250018]: 2026-01-20 14:42:47.353 250022 DEBUG nova.storage.rbd_utils [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] resizing rbd image 62bbc690-cb71-4cdc-93e1-cdf395abbae4_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 20 14:42:47 compute-0 nova_compute[250018]: 2026-01-20 14:42:47.454 250022 DEBUG nova.objects.instance [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Lazy-loading 'migration_context' on Instance uuid 62bbc690-cb71-4cdc-93e1-cdf395abbae4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:42:47 compute-0 nova_compute[250018]: 2026-01-20 14:42:47.475 250022 DEBUG nova.virt.libvirt.driver [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: 62bbc690-cb71-4cdc-93e1-cdf395abbae4] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 14:42:47 compute-0 nova_compute[250018]: 2026-01-20 14:42:47.475 250022 DEBUG nova.virt.libvirt.driver [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: 62bbc690-cb71-4cdc-93e1-cdf395abbae4] Ensure instance console log exists: /var/lib/nova/instances/62bbc690-cb71-4cdc-93e1-cdf395abbae4/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 14:42:47 compute-0 nova_compute[250018]: 2026-01-20 14:42:47.476 250022 DEBUG oslo_concurrency.lockutils [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:42:47 compute-0 nova_compute[250018]: 2026-01-20 14:42:47.476 250022 DEBUG oslo_concurrency.lockutils [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:42:47 compute-0 nova_compute[250018]: 2026-01-20 14:42:47.477 250022 DEBUG oslo_concurrency.lockutils [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:42:47 compute-0 podman[295914]: 2026-01-20 14:42:47.539077466 +0000 UTC m=+0.229808828 container remove fbc44418705e94b57f8598626d1cffa2df5f779fd0c211c71772c96d355152a3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b36e9cab-12c6-4a09-9aab-ef2679d875ba, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 20 14:42:47 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:47.545 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[bbd2fef9-ae07-4ab4-b983-60506883bd93]: (4, ('Tue Jan 20 02:42:47 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-b36e9cab-12c6-4a09-9aab-ef2679d875ba (fbc44418705e94b57f8598626d1cffa2df5f779fd0c211c71772c96d355152a3)\nfbc44418705e94b57f8598626d1cffa2df5f779fd0c211c71772c96d355152a3\nTue Jan 20 02:42:47 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-b36e9cab-12c6-4a09-9aab-ef2679d875ba (fbc44418705e94b57f8598626d1cffa2df5f779fd0c211c71772c96d355152a3)\nfbc44418705e94b57f8598626d1cffa2df5f779fd0c211c71772c96d355152a3\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:42:47 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:47.546 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[c909bcd3-1b30-4deb-a5e5-18749cb44237]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:42:47 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:47.547 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb36e9cab-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:42:47 compute-0 kernel: tapb36e9cab-10: left promiscuous mode
Jan 20 14:42:47 compute-0 nova_compute[250018]: 2026-01-20 14:42:47.549 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:47 compute-0 nova_compute[250018]: 2026-01-20 14:42:47.565 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:47 compute-0 nova_compute[250018]: 2026-01-20 14:42:47.566 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:47 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:47.568 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[346da1d7-38e1-4d1e-b277-f868c0361f8b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:42:47 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:47.576 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[48197b64-29cc-4e04-9ef1-2aedcf7db4bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:42:47 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:47.577 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[dd2de9c9-5d41-435d-967f-c7a2a31cec0a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:42:47 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:47.594 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[d86730c9-3948-4bfc-bda6-29d1408bc527]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 608359, 'reachable_time': 42856, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 295998, 'error': None, 'target': 'ovnmeta-b36e9cab-12c6-4a09-9aab-ef2679d875ba', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:42:47 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:47.596 160517 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b36e9cab-12c6-4a09-9aab-ef2679d875ba deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 14:42:47 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:47.596 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[b1080bc6-125c-4bf3-9de9-4b83b29cb410]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:42:47 compute-0 systemd[1]: run-netns-ovnmeta\x2db36e9cab\x2d12c6\x2d4a09\x2d9aab\x2def2679d875ba.mount: Deactivated successfully.
Jan 20 14:42:47 compute-0 nova_compute[250018]: 2026-01-20 14:42:47.629 250022 INFO nova.virt.libvirt.driver [None req-14c8e0ab-d1cd-4169-8c7a-7d41dae46379 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] [instance: 1951432b-2c0c-4d1b-90df-d94dcf9fc32e] Instance shutdown successfully after 3 seconds.
Jan 20 14:42:47 compute-0 nova_compute[250018]: 2026-01-20 14:42:47.636 250022 INFO nova.virt.libvirt.driver [-] [instance: 1951432b-2c0c-4d1b-90df-d94dcf9fc32e] Instance destroyed successfully.
Jan 20 14:42:47 compute-0 nova_compute[250018]: 2026-01-20 14:42:47.636 250022 DEBUG nova.objects.instance [None req-14c8e0ab-d1cd-4169-8c7a-7d41dae46379 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Lazy-loading 'numa_topology' on Instance uuid 1951432b-2c0c-4d1b-90df-d94dcf9fc32e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:42:47 compute-0 nova_compute[250018]: 2026-01-20 14:42:47.648 250022 DEBUG nova.compute.manager [None req-14c8e0ab-d1cd-4169-8c7a-7d41dae46379 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] [instance: 1951432b-2c0c-4d1b-90df-d94dcf9fc32e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:42:47 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1633: 321 pgs: 321 active+clean; 273 MiB data, 784 MiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 5.5 MiB/s wr, 352 op/s
Jan 20 14:42:47 compute-0 nova_compute[250018]: 2026-01-20 14:42:47.763 250022 DEBUG oslo_concurrency.lockutils [None req-14c8e0ab-d1cd-4169-8c7a-7d41dae46379 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Lock "1951432b-2c0c-4d1b-90df-d94dcf9fc32e" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 3.206s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:42:47 compute-0 nova_compute[250018]: 2026-01-20 14:42:47.851 250022 DEBUG nova.compute.manager [req-2771e3d4-85cb-4c1f-950a-0d0e1e32b4b4 req-4e64d22c-667e-4695-9a12-fb0fe9119498 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 1951432b-2c0c-4d1b-90df-d94dcf9fc32e] Received event network-vif-unplugged-f6e1030d-5508-4e83-92ce-0a723132eb45 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:42:47 compute-0 nova_compute[250018]: 2026-01-20 14:42:47.852 250022 DEBUG oslo_concurrency.lockutils [req-2771e3d4-85cb-4c1f-950a-0d0e1e32b4b4 req-4e64d22c-667e-4695-9a12-fb0fe9119498 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "1951432b-2c0c-4d1b-90df-d94dcf9fc32e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:42:47 compute-0 nova_compute[250018]: 2026-01-20 14:42:47.852 250022 DEBUG oslo_concurrency.lockutils [req-2771e3d4-85cb-4c1f-950a-0d0e1e32b4b4 req-4e64d22c-667e-4695-9a12-fb0fe9119498 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "1951432b-2c0c-4d1b-90df-d94dcf9fc32e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:42:47 compute-0 nova_compute[250018]: 2026-01-20 14:42:47.852 250022 DEBUG oslo_concurrency.lockutils [req-2771e3d4-85cb-4c1f-950a-0d0e1e32b4b4 req-4e64d22c-667e-4695-9a12-fb0fe9119498 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "1951432b-2c0c-4d1b-90df-d94dcf9fc32e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:42:47 compute-0 nova_compute[250018]: 2026-01-20 14:42:47.853 250022 DEBUG nova.compute.manager [req-2771e3d4-85cb-4c1f-950a-0d0e1e32b4b4 req-4e64d22c-667e-4695-9a12-fb0fe9119498 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 1951432b-2c0c-4d1b-90df-d94dcf9fc32e] No waiting events found dispatching network-vif-unplugged-f6e1030d-5508-4e83-92ce-0a723132eb45 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:42:47 compute-0 nova_compute[250018]: 2026-01-20 14:42:47.853 250022 WARNING nova.compute.manager [req-2771e3d4-85cb-4c1f-950a-0d0e1e32b4b4 req-4e64d22c-667e-4695-9a12-fb0fe9119498 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 1951432b-2c0c-4d1b-90df-d94dcf9fc32e] Received unexpected event network-vif-unplugged-f6e1030d-5508-4e83-92ce-0a723132eb45 for instance with vm_state stopped and task_state None.
Jan 20 14:42:47 compute-0 ceph-mon[74360]: pgmap v1633: 321 pgs: 321 active+clean; 273 MiB data, 784 MiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 5.5 MiB/s wr, 352 op/s
Jan 20 14:42:47 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:42:47 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:42:47 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:42:47.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:42:48 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:42:48 compute-0 nova_compute[250018]: 2026-01-20 14:42:48.229 250022 DEBUG nova.network.neutron [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: 62bbc690-cb71-4cdc-93e1-cdf395abbae4] Successfully created port: d6c7f812-24ee-4acf-9603-942a2c4658f7 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 20 14:42:48 compute-0 sudo[296000]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:42:48 compute-0 sudo[296000]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:42:48 compute-0 sudo[296000]: pam_unix(sudo:session): session closed for user root
Jan 20 14:42:48 compute-0 sudo[296025]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:42:48 compute-0 sudo[296025]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:42:48 compute-0 sudo[296025]: pam_unix(sudo:session): session closed for user root
Jan 20 14:42:48 compute-0 sudo[296050]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:42:48 compute-0 sudo[296050]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:42:48 compute-0 sudo[296050]: pam_unix(sudo:session): session closed for user root
Jan 20 14:42:48 compute-0 sudo[296087]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 14:42:48 compute-0 sudo[296087]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:42:48 compute-0 podman[296075]: 2026-01-20 14:42:48.596600065 +0000 UTC m=+0.057935961 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Jan 20 14:42:48 compute-0 podman[296074]: 2026-01-20 14:42:48.661434873 +0000 UTC m=+0.122752128 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_controller, 
org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202)
Jan 20 14:42:48 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:42:48 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:42:48 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:42:48.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:42:49 compute-0 sudo[296087]: pam_unix(sudo:session): session closed for user root
Jan 20 14:42:49 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 14:42:49 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:42:49 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 20 14:42:49 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:42:49 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 20 14:42:49 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:42:49 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev b03a5c19-1e8f-4665-8508-9e975fdcb281 does not exist
Jan 20 14:42:49 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 3e2f0b95-235b-4ecf-be04-d1a6f455cb3d does not exist
Jan 20 14:42:49 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 76e565b7-5f2b-49da-afd0-174a88c32fd0 does not exist
Jan 20 14:42:49 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 20 14:42:49 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:42:49 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 20 14:42:49 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:42:49 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 14:42:49 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:42:49 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:42:49 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:42:49 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:42:49 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:42:49 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:42:49 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:42:49 compute-0 sudo[296176]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:42:49 compute-0 sudo[296176]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:42:49 compute-0 sudo[296176]: pam_unix(sudo:session): session closed for user root
Jan 20 14:42:49 compute-0 sudo[296201]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:42:49 compute-0 sudo[296201]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:42:49 compute-0 sudo[296201]: pam_unix(sudo:session): session closed for user root
Jan 20 14:42:49 compute-0 sudo[296226]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:42:49 compute-0 sudo[296226]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:42:49 compute-0 sudo[296226]: pam_unix(sudo:session): session closed for user root
Jan 20 14:42:49 compute-0 sudo[296251]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 14:42:49 compute-0 sudo[296251]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:42:49 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1634: 321 pgs: 321 active+clean; 289 MiB data, 796 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 6.2 MiB/s wr, 305 op/s
Jan 20 14:42:49 compute-0 nova_compute[250018]: 2026-01-20 14:42:49.873 250022 DEBUG nova.objects.instance [None req-adf85e7f-ce10-4e45-b349-2a278b3cfd53 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Lazy-loading 'flavor' on Instance uuid 1951432b-2c0c-4d1b-90df-d94dcf9fc32e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:42:49 compute-0 podman[296319]: 2026-01-20 14:42:49.878142384 +0000 UTC m=+0.048548477 container create 4b0da2a31b81fd57d08fba7984aac534ed670ad56fe1aa203da1683146b32d5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_ride, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 20 14:42:49 compute-0 nova_compute[250018]: 2026-01-20 14:42:49.889 250022 DEBUG nova.network.neutron [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: 62bbc690-cb71-4cdc-93e1-cdf395abbae4] Successfully updated port: d6c7f812-24ee-4acf-9603-942a2c4658f7 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 20 14:42:49 compute-0 nova_compute[250018]: 2026-01-20 14:42:49.903 250022 DEBUG oslo_concurrency.lockutils [None req-adf85e7f-ce10-4e45-b349-2a278b3cfd53 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Acquiring lock "refresh_cache-1951432b-2c0c-4d1b-90df-d94dcf9fc32e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:42:49 compute-0 nova_compute[250018]: 2026-01-20 14:42:49.903 250022 DEBUG oslo_concurrency.lockutils [None req-adf85e7f-ce10-4e45-b349-2a278b3cfd53 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Acquired lock "refresh_cache-1951432b-2c0c-4d1b-90df-d94dcf9fc32e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:42:49 compute-0 nova_compute[250018]: 2026-01-20 14:42:49.903 250022 DEBUG nova.network.neutron [None req-adf85e7f-ce10-4e45-b349-2a278b3cfd53 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] [instance: 1951432b-2c0c-4d1b-90df-d94dcf9fc32e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:42:49 compute-0 nova_compute[250018]: 2026-01-20 14:42:49.904 250022 DEBUG nova.objects.instance [None req-adf85e7f-ce10-4e45-b349-2a278b3cfd53 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Lazy-loading 'info_cache' on Instance uuid 1951432b-2c0c-4d1b-90df-d94dcf9fc32e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:42:49 compute-0 systemd[1]: Started libpod-conmon-4b0da2a31b81fd57d08fba7984aac534ed670ad56fe1aa203da1683146b32d5e.scope.
Jan 20 14:42:49 compute-0 nova_compute[250018]: 2026-01-20 14:42:49.915 250022 DEBUG oslo_concurrency.lockutils [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Acquiring lock "refresh_cache-62bbc690-cb71-4cdc-93e1-cdf395abbae4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:42:49 compute-0 nova_compute[250018]: 2026-01-20 14:42:49.915 250022 DEBUG oslo_concurrency.lockutils [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Acquired lock "refresh_cache-62bbc690-cb71-4cdc-93e1-cdf395abbae4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:42:49 compute-0 nova_compute[250018]: 2026-01-20 14:42:49.915 250022 DEBUG nova.network.neutron [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: 62bbc690-cb71-4cdc-93e1-cdf395abbae4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:42:49 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:42:49 compute-0 podman[296319]: 2026-01-20 14:42:49.856015324 +0000 UTC m=+0.026421407 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:42:49 compute-0 podman[296319]: 2026-01-20 14:42:49.968077781 +0000 UTC m=+0.138483834 container init 4b0da2a31b81fd57d08fba7984aac534ed670ad56fe1aa203da1683146b32d5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_ride, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:42:49 compute-0 podman[296319]: 2026-01-20 14:42:49.97503865 +0000 UTC m=+0.145444693 container start 4b0da2a31b81fd57d08fba7984aac534ed670ad56fe1aa203da1683146b32d5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_ride, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default)
Jan 20 14:42:49 compute-0 podman[296319]: 2026-01-20 14:42:49.978305208 +0000 UTC m=+0.148711261 container attach 4b0da2a31b81fd57d08fba7984aac534ed670ad56fe1aa203da1683146b32d5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_ride, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True)
Jan 20 14:42:49 compute-0 keen_ride[296335]: 167 167
Jan 20 14:42:49 compute-0 systemd[1]: libpod-4b0da2a31b81fd57d08fba7984aac534ed670ad56fe1aa203da1683146b32d5e.scope: Deactivated successfully.
Jan 20 14:42:49 compute-0 podman[296319]: 2026-01-20 14:42:49.981745682 +0000 UTC m=+0.152151725 container died 4b0da2a31b81fd57d08fba7984aac534ed670ad56fe1aa203da1683146b32d5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_ride, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 20 14:42:49 compute-0 nova_compute[250018]: 2026-01-20 14:42:49.987 250022 DEBUG nova.compute.manager [req-5eb84557-040a-4e00-9cfc-cff3b82fa8b5 req-47ff4803-423c-486f-bb1c-b194ce681663 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 1951432b-2c0c-4d1b-90df-d94dcf9fc32e] Received event network-vif-plugged-f6e1030d-5508-4e83-92ce-0a723132eb45 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:42:49 compute-0 nova_compute[250018]: 2026-01-20 14:42:49.988 250022 DEBUG oslo_concurrency.lockutils [req-5eb84557-040a-4e00-9cfc-cff3b82fa8b5 req-47ff4803-423c-486f-bb1c-b194ce681663 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "1951432b-2c0c-4d1b-90df-d94dcf9fc32e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:42:49 compute-0 nova_compute[250018]: 2026-01-20 14:42:49.989 250022 DEBUG oslo_concurrency.lockutils [req-5eb84557-040a-4e00-9cfc-cff3b82fa8b5 req-47ff4803-423c-486f-bb1c-b194ce681663 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "1951432b-2c0c-4d1b-90df-d94dcf9fc32e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:42:49 compute-0 nova_compute[250018]: 2026-01-20 14:42:49.989 250022 DEBUG oslo_concurrency.lockutils [req-5eb84557-040a-4e00-9cfc-cff3b82fa8b5 req-47ff4803-423c-486f-bb1c-b194ce681663 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "1951432b-2c0c-4d1b-90df-d94dcf9fc32e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:42:49 compute-0 nova_compute[250018]: 2026-01-20 14:42:49.989 250022 DEBUG nova.compute.manager [req-5eb84557-040a-4e00-9cfc-cff3b82fa8b5 req-47ff4803-423c-486f-bb1c-b194ce681663 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 1951432b-2c0c-4d1b-90df-d94dcf9fc32e] No waiting events found dispatching network-vif-plugged-f6e1030d-5508-4e83-92ce-0a723132eb45 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:42:49 compute-0 nova_compute[250018]: 2026-01-20 14:42:49.989 250022 WARNING nova.compute.manager [req-5eb84557-040a-4e00-9cfc-cff3b82fa8b5 req-47ff4803-423c-486f-bb1c-b194ce681663 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 1951432b-2c0c-4d1b-90df-d94dcf9fc32e] Received unexpected event network-vif-plugged-f6e1030d-5508-4e83-92ce-0a723132eb45 for instance with vm_state stopped and task_state powering-on.
Jan 20 14:42:49 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:42:49 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:42:49 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:42:49.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:42:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-96855ec8f02d437463ba758f0d1a7d474a84c0646a82917d9c3ce1764ef1e06c-merged.mount: Deactivated successfully.
Jan 20 14:42:50 compute-0 podman[296319]: 2026-01-20 14:42:50.023107822 +0000 UTC m=+0.193513855 container remove 4b0da2a31b81fd57d08fba7984aac534ed670ad56fe1aa203da1683146b32d5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_ride, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 20 14:42:50 compute-0 systemd[1]: libpod-conmon-4b0da2a31b81fd57d08fba7984aac534ed670ad56fe1aa203da1683146b32d5e.scope: Deactivated successfully.
Jan 20 14:42:50 compute-0 nova_compute[250018]: 2026-01-20 14:42:50.172 250022 DEBUG nova.network.neutron [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: 62bbc690-cb71-4cdc-93e1-cdf395abbae4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 14:42:50 compute-0 nova_compute[250018]: 2026-01-20 14:42:50.176 250022 DEBUG nova.compute.manager [req-749ba855-7718-4bb5-9649-12251d0a2e9c req-c7074c69-789c-4f35-8b7e-bc92a33461a3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 62bbc690-cb71-4cdc-93e1-cdf395abbae4] Received event network-changed-d6c7f812-24ee-4acf-9603-942a2c4658f7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:42:50 compute-0 nova_compute[250018]: 2026-01-20 14:42:50.177 250022 DEBUG nova.compute.manager [req-749ba855-7718-4bb5-9649-12251d0a2e9c req-c7074c69-789c-4f35-8b7e-bc92a33461a3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 62bbc690-cb71-4cdc-93e1-cdf395abbae4] Refreshing instance network info cache due to event network-changed-d6c7f812-24ee-4acf-9603-942a2c4658f7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:42:50 compute-0 nova_compute[250018]: 2026-01-20 14:42:50.177 250022 DEBUG oslo_concurrency.lockutils [req-749ba855-7718-4bb5-9649-12251d0a2e9c req-c7074c69-789c-4f35-8b7e-bc92a33461a3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-62bbc690-cb71-4cdc-93e1-cdf395abbae4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:42:50 compute-0 podman[296357]: 2026-01-20 14:42:50.188236317 +0000 UTC m=+0.052776232 container create 91015a3f9e71de6ef8541a4f7072dc4c5692e1c0d199e78e6beeee39e149129d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_rubin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 20 14:42:50 compute-0 systemd[1]: Started libpod-conmon-91015a3f9e71de6ef8541a4f7072dc4c5692e1c0d199e78e6beeee39e149129d.scope.
Jan 20 14:42:50 compute-0 podman[296357]: 2026-01-20 14:42:50.160747233 +0000 UTC m=+0.025287238 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:42:50 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:42:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36d618ecf5a8e4f02ab9be0dc7ef17773e0a2613b398040ff00254c065e969ca/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:42:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36d618ecf5a8e4f02ab9be0dc7ef17773e0a2613b398040ff00254c065e969ca/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:42:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36d618ecf5a8e4f02ab9be0dc7ef17773e0a2613b398040ff00254c065e969ca/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:42:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36d618ecf5a8e4f02ab9be0dc7ef17773e0a2613b398040ff00254c065e969ca/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:42:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36d618ecf5a8e4f02ab9be0dc7ef17773e0a2613b398040ff00254c065e969ca/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 14:42:50 compute-0 podman[296357]: 2026-01-20 14:42:50.287744933 +0000 UTC m=+0.152284858 container init 91015a3f9e71de6ef8541a4f7072dc4c5692e1c0d199e78e6beeee39e149129d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_rubin, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 14:42:50 compute-0 podman[296357]: 2026-01-20 14:42:50.296400778 +0000 UTC m=+0.160940693 container start 91015a3f9e71de6ef8541a4f7072dc4c5692e1c0d199e78e6beeee39e149129d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_rubin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 20 14:42:50 compute-0 podman[296357]: 2026-01-20 14:42:50.29939934 +0000 UTC m=+0.163939285 container attach 91015a3f9e71de6ef8541a4f7072dc4c5692e1c0d199e78e6beeee39e149129d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_rubin, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 20 14:42:50 compute-0 ceph-mon[74360]: pgmap v1634: 321 pgs: 321 active+clean; 289 MiB data, 796 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 6.2 MiB/s wr, 305 op/s
Jan 20 14:42:50 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:42:50 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:42:50 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:42:50.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:42:51 compute-0 ecstatic_rubin[296373]: --> passed data devices: 0 physical, 1 LVM
Jan 20 14:42:51 compute-0 ecstatic_rubin[296373]: --> relative data size: 1.0
Jan 20 14:42:51 compute-0 ecstatic_rubin[296373]: --> All data devices are unavailable
Jan 20 14:42:51 compute-0 systemd[1]: libpod-91015a3f9e71de6ef8541a4f7072dc4c5692e1c0d199e78e6beeee39e149129d.scope: Deactivated successfully.
Jan 20 14:42:51 compute-0 podman[296357]: 2026-01-20 14:42:51.115313629 +0000 UTC m=+0.979853544 container died 91015a3f9e71de6ef8541a4f7072dc4c5692e1c0d199e78e6beeee39e149129d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_rubin, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 20 14:42:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-36d618ecf5a8e4f02ab9be0dc7ef17773e0a2613b398040ff00254c065e969ca-merged.mount: Deactivated successfully.
Jan 20 14:42:51 compute-0 podman[296357]: 2026-01-20 14:42:51.166465186 +0000 UTC m=+1.031005101 container remove 91015a3f9e71de6ef8541a4f7072dc4c5692e1c0d199e78e6beeee39e149129d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_rubin, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:42:51 compute-0 systemd[1]: libpod-conmon-91015a3f9e71de6ef8541a4f7072dc4c5692e1c0d199e78e6beeee39e149129d.scope: Deactivated successfully.
Jan 20 14:42:51 compute-0 sudo[296251]: pam_unix(sudo:session): session closed for user root
Jan 20 14:42:51 compute-0 sudo[296400]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:42:51 compute-0 sudo[296400]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:42:51 compute-0 sudo[296400]: pam_unix(sudo:session): session closed for user root
Jan 20 14:42:51 compute-0 sudo[296425]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:42:51 compute-0 sudo[296425]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:42:51 compute-0 sudo[296425]: pam_unix(sudo:session): session closed for user root
Jan 20 14:42:51 compute-0 sudo[296450]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:42:51 compute-0 sudo[296450]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:42:51 compute-0 sudo[296450]: pam_unix(sudo:session): session closed for user root
Jan 20 14:42:51 compute-0 sudo[296475]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- lvm list --format json
Jan 20 14:42:51 compute-0 sudo[296475]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:42:51 compute-0 nova_compute[250018]: 2026-01-20 14:42:51.581 250022 DEBUG nova.network.neutron [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: 62bbc690-cb71-4cdc-93e1-cdf395abbae4] Updating instance_info_cache with network_info: [{"id": "d6c7f812-24ee-4acf-9603-942a2c4658f7", "address": "fa:16:3e:6d:65:2c", "network": {"id": "3e260ad9-fcf1-432b-b71b-b943d4249b65", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1425882684-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3e022a35f604df2bbc885e498b1e206", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6c7f812-24", "ovs_interfaceid": "d6c7f812-24ee-4acf-9603-942a2c4658f7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:42:51 compute-0 nova_compute[250018]: 2026-01-20 14:42:51.613 250022 DEBUG oslo_concurrency.lockutils [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Releasing lock "refresh_cache-62bbc690-cb71-4cdc-93e1-cdf395abbae4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:42:51 compute-0 nova_compute[250018]: 2026-01-20 14:42:51.613 250022 DEBUG nova.compute.manager [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: 62bbc690-cb71-4cdc-93e1-cdf395abbae4] Instance network_info: |[{"id": "d6c7f812-24ee-4acf-9603-942a2c4658f7", "address": "fa:16:3e:6d:65:2c", "network": {"id": "3e260ad9-fcf1-432b-b71b-b943d4249b65", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1425882684-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3e022a35f604df2bbc885e498b1e206", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6c7f812-24", "ovs_interfaceid": "d6c7f812-24ee-4acf-9603-942a2c4658f7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 20 14:42:51 compute-0 nova_compute[250018]: 2026-01-20 14:42:51.614 250022 DEBUG oslo_concurrency.lockutils [req-749ba855-7718-4bb5-9649-12251d0a2e9c req-c7074c69-789c-4f35-8b7e-bc92a33461a3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-62bbc690-cb71-4cdc-93e1-cdf395abbae4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:42:51 compute-0 nova_compute[250018]: 2026-01-20 14:42:51.614 250022 DEBUG nova.network.neutron [req-749ba855-7718-4bb5-9649-12251d0a2e9c req-c7074c69-789c-4f35-8b7e-bc92a33461a3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 62bbc690-cb71-4cdc-93e1-cdf395abbae4] Refreshing network info cache for port d6c7f812-24ee-4acf-9603-942a2c4658f7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:42:51 compute-0 nova_compute[250018]: 2026-01-20 14:42:51.617 250022 DEBUG nova.virt.libvirt.driver [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: 62bbc690-cb71-4cdc-93e1-cdf395abbae4] Start _get_guest_xml network_info=[{"id": "d6c7f812-24ee-4acf-9603-942a2c4658f7", "address": "fa:16:3e:6d:65:2c", "network": {"id": "3e260ad9-fcf1-432b-b71b-b943d4249b65", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1425882684-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3e022a35f604df2bbc885e498b1e206", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6c7f812-24", "ovs_interfaceid": "d6c7f812-24ee-4acf-9603-942a2c4658f7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encrypted': False, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'guest_format': None, 'size': 0, 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 14:42:51 compute-0 nova_compute[250018]: 2026-01-20 14:42:51.622 250022 WARNING nova.virt.libvirt.driver [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:42:51 compute-0 nova_compute[250018]: 2026-01-20 14:42:51.628 250022 DEBUG nova.virt.libvirt.host [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 14:42:51 compute-0 nova_compute[250018]: 2026-01-20 14:42:51.629 250022 DEBUG nova.virt.libvirt.host [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 14:42:51 compute-0 nova_compute[250018]: 2026-01-20 14:42:51.632 250022 DEBUG nova.virt.libvirt.host [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 14:42:51 compute-0 nova_compute[250018]: 2026-01-20 14:42:51.633 250022 DEBUG nova.virt.libvirt.host [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 14:42:51 compute-0 nova_compute[250018]: 2026-01-20 14:42:51.634 250022 DEBUG nova.virt.libvirt.driver [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 14:42:51 compute-0 nova_compute[250018]: 2026-01-20 14:42:51.634 250022 DEBUG nova.virt.hardware [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 14:42:51 compute-0 nova_compute[250018]: 2026-01-20 14:42:51.634 250022 DEBUG nova.virt.hardware [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 14:42:51 compute-0 nova_compute[250018]: 2026-01-20 14:42:51.634 250022 DEBUG nova.virt.hardware [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 14:42:51 compute-0 nova_compute[250018]: 2026-01-20 14:42:51.634 250022 DEBUG nova.virt.hardware [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 14:42:51 compute-0 nova_compute[250018]: 2026-01-20 14:42:51.635 250022 DEBUG nova.virt.hardware [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 14:42:51 compute-0 nova_compute[250018]: 2026-01-20 14:42:51.635 250022 DEBUG nova.virt.hardware [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 14:42:51 compute-0 nova_compute[250018]: 2026-01-20 14:42:51.635 250022 DEBUG nova.virt.hardware [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 14:42:51 compute-0 nova_compute[250018]: 2026-01-20 14:42:51.635 250022 DEBUG nova.virt.hardware [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 14:42:51 compute-0 nova_compute[250018]: 2026-01-20 14:42:51.635 250022 DEBUG nova.virt.hardware [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 14:42:51 compute-0 nova_compute[250018]: 2026-01-20 14:42:51.635 250022 DEBUG nova.virt.hardware [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 14:42:51 compute-0 nova_compute[250018]: 2026-01-20 14:42:51.636 250022 DEBUG nova.virt.hardware [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 14:42:51 compute-0 nova_compute[250018]: 2026-01-20 14:42:51.638 250022 DEBUG oslo_concurrency.processutils [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:42:51 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1635: 321 pgs: 321 active+clean; 349 MiB data, 845 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 8.0 MiB/s wr, 272 op/s
Jan 20 14:42:51 compute-0 podman[296538]: 2026-01-20 14:42:51.696815348 +0000 UTC m=+0.036360227 container create 5ba16cc794bd2ff3c4aa2ab7b1ee3c239adb73d45ee3713deec043953c924d5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_lehmann, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Jan 20 14:42:51 compute-0 systemd[1]: Started libpod-conmon-5ba16cc794bd2ff3c4aa2ab7b1ee3c239adb73d45ee3713deec043953c924d5f.scope.
Jan 20 14:42:51 compute-0 podman[296538]: 2026-01-20 14:42:51.680993538 +0000 UTC m=+0.020538437 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:42:51 compute-0 ceph-mon[74360]: pgmap v1635: 321 pgs: 321 active+clean; 349 MiB data, 845 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 8.0 MiB/s wr, 272 op/s
Jan 20 14:42:51 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:42:51 compute-0 podman[296538]: 2026-01-20 14:42:51.800748364 +0000 UTC m=+0.140293263 container init 5ba16cc794bd2ff3c4aa2ab7b1ee3c239adb73d45ee3713deec043953c924d5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_lehmann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:42:51 compute-0 podman[296538]: 2026-01-20 14:42:51.806684145 +0000 UTC m=+0.146229034 container start 5ba16cc794bd2ff3c4aa2ab7b1ee3c239adb73d45ee3713deec043953c924d5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_lehmann, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 20 14:42:51 compute-0 podman[296538]: 2026-01-20 14:42:51.811070963 +0000 UTC m=+0.150615852 container attach 5ba16cc794bd2ff3c4aa2ab7b1ee3c239adb73d45ee3713deec043953c924d5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_lehmann, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 20 14:42:51 compute-0 frosty_lehmann[296555]: 167 167
Jan 20 14:42:51 compute-0 systemd[1]: libpod-5ba16cc794bd2ff3c4aa2ab7b1ee3c239adb73d45ee3713deec043953c924d5f.scope: Deactivated successfully.
Jan 20 14:42:51 compute-0 conmon[296555]: conmon 5ba16cc794bd2ff3c4aa <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5ba16cc794bd2ff3c4aa2ab7b1ee3c239adb73d45ee3713deec043953c924d5f.scope/container/memory.events
Jan 20 14:42:51 compute-0 podman[296538]: 2026-01-20 14:42:51.81498766 +0000 UTC m=+0.154532539 container died 5ba16cc794bd2ff3c4aa2ab7b1ee3c239adb73d45ee3713deec043953c924d5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_lehmann, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 20 14:42:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-2d1715c6e94f6b08b86fa81db14929bbb1c531bb3df2ec28656786dd0ee36931-merged.mount: Deactivated successfully.
Jan 20 14:42:51 compute-0 podman[296538]: 2026-01-20 14:42:51.849529516 +0000 UTC m=+0.189074415 container remove 5ba16cc794bd2ff3c4aa2ab7b1ee3c239adb73d45ee3713deec043953c924d5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_lehmann, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 20 14:42:51 compute-0 systemd[1]: libpod-conmon-5ba16cc794bd2ff3c4aa2ab7b1ee3c239adb73d45ee3713deec043953c924d5f.scope: Deactivated successfully.
Jan 20 14:42:51 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:42:51 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:42:51 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:42:51.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:42:52 compute-0 podman[296598]: 2026-01-20 14:42:52.028881686 +0000 UTC m=+0.052281738 container create 96f91ebee2c94beea1a8438104d5eeaa30913474ad431adb450001b518e3d72d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_mclean, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 20 14:42:52 compute-0 systemd[1]: Started libpod-conmon-96f91ebee2c94beea1a8438104d5eeaa30913474ad431adb450001b518e3d72d.scope.
Jan 20 14:42:52 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:42:52 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/27707950' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:42:52 compute-0 nova_compute[250018]: 2026-01-20 14:42:52.091 250022 DEBUG oslo_concurrency.processutils [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:42:52 compute-0 podman[296598]: 2026-01-20 14:42:52.006196442 +0000 UTC m=+0.029596544 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:42:52 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:42:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b7fc687419d269d4f52645ab0d9cc2b38b91885477ffb1102c1d7d4021a6ef3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:42:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b7fc687419d269d4f52645ab0d9cc2b38b91885477ffb1102c1d7d4021a6ef3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:42:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b7fc687419d269d4f52645ab0d9cc2b38b91885477ffb1102c1d7d4021a6ef3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:42:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b7fc687419d269d4f52645ab0d9cc2b38b91885477ffb1102c1d7d4021a6ef3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:42:52 compute-0 podman[296598]: 2026-01-20 14:42:52.118614868 +0000 UTC m=+0.142014920 container init 96f91ebee2c94beea1a8438104d5eeaa30913474ad431adb450001b518e3d72d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_mclean, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 20 14:42:52 compute-0 nova_compute[250018]: 2026-01-20 14:42:52.121 250022 DEBUG nova.storage.rbd_utils [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] rbd image 62bbc690-cb71-4cdc-93e1-cdf395abbae4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:42:52 compute-0 nova_compute[250018]: 2026-01-20 14:42:52.125 250022 DEBUG oslo_concurrency.processutils [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:42:52 compute-0 podman[296598]: 2026-01-20 14:42:52.127778955 +0000 UTC m=+0.151179017 container start 96f91ebee2c94beea1a8438104d5eeaa30913474ad431adb450001b518e3d72d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_mclean, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:42:52 compute-0 podman[296598]: 2026-01-20 14:42:52.131659051 +0000 UTC m=+0.155059133 container attach 96f91ebee2c94beea1a8438104d5eeaa30913474ad431adb450001b518e3d72d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_mclean, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:42:52 compute-0 nova_compute[250018]: 2026-01-20 14:42:52.152 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:52 compute-0 nova_compute[250018]: 2026-01-20 14:42:52.382 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:42:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:42:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:42:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:42:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:42:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:42:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Optimize plan auto_2026-01-20_14:42:52
Jan 20 14:42:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 14:42:52 compute-0 ceph-mgr[74653]: [balancer INFO root] do_upmap
Jan 20 14:42:52 compute-0 ceph-mgr[74653]: [balancer INFO root] pools ['default.rgw.meta', '.rgw.root', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'volumes', 'backups', 'vms', '.mgr', 'default.rgw.log', 'default.rgw.control', 'images']
Jan 20 14:42:52 compute-0 ceph-mgr[74653]: [balancer INFO root] prepared 0/10 changes
Jan 20 14:42:52 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:42:52 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3230279258' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:42:52 compute-0 nova_compute[250018]: 2026-01-20 14:42:52.570 250022 DEBUG oslo_concurrency.processutils [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:42:52 compute-0 nova_compute[250018]: 2026-01-20 14:42:52.573 250022 DEBUG nova.virt.libvirt.vif [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:42:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1574727229',display_name='tempest-tempest.common.compute-instance-1574727229-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1574727229-2',id=78,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a3e022a35f604df2bbc885e498b1e206',ramdisk_id='',reservation_id='r-6khz0z7o',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-164394330',owner_user_name='tempest-MultipleCreateTestJSON-164394330-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:42:46Z,user_data=None,user_id='aa2e7857e85f483eb0d162e2ee8c2e2c',uuid=62bbc690-cb71-4cdc-93e1-cdf395abbae4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d6c7f812-24ee-4acf-9603-942a2c4658f7", "address": "fa:16:3e:6d:65:2c", "network": {"id": "3e260ad9-fcf1-432b-b71b-b943d4249b65", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1425882684-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3e022a35f604df2bbc885e498b1e206", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6c7f812-24", "ovs_interfaceid": "d6c7f812-24ee-4acf-9603-942a2c4658f7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 14:42:52 compute-0 nova_compute[250018]: 2026-01-20 14:42:52.573 250022 DEBUG nova.network.os_vif_util [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Converting VIF {"id": "d6c7f812-24ee-4acf-9603-942a2c4658f7", "address": "fa:16:3e:6d:65:2c", "network": {"id": "3e260ad9-fcf1-432b-b71b-b943d4249b65", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1425882684-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3e022a35f604df2bbc885e498b1e206", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6c7f812-24", "ovs_interfaceid": "d6c7f812-24ee-4acf-9603-942a2c4658f7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:42:52 compute-0 nova_compute[250018]: 2026-01-20 14:42:52.576 250022 DEBUG nova.network.os_vif_util [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6d:65:2c,bridge_name='br-int',has_traffic_filtering=True,id=d6c7f812-24ee-4acf-9603-942a2c4658f7,network=Network(3e260ad9-fcf1-432b-b71b-b943d4249b65),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd6c7f812-24') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:42:52 compute-0 nova_compute[250018]: 2026-01-20 14:42:52.579 250022 DEBUG nova.objects.instance [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Lazy-loading 'pci_devices' on Instance uuid 62bbc690-cb71-4cdc-93e1-cdf395abbae4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:42:52 compute-0 nova_compute[250018]: 2026-01-20 14:42:52.599 250022 DEBUG nova.virt.libvirt.driver [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: 62bbc690-cb71-4cdc-93e1-cdf395abbae4] End _get_guest_xml xml=<domain type="kvm">
Jan 20 14:42:52 compute-0 nova_compute[250018]:   <uuid>62bbc690-cb71-4cdc-93e1-cdf395abbae4</uuid>
Jan 20 14:42:52 compute-0 nova_compute[250018]:   <name>instance-0000004e</name>
Jan 20 14:42:52 compute-0 nova_compute[250018]:   <memory>131072</memory>
Jan 20 14:42:52 compute-0 nova_compute[250018]:   <vcpu>1</vcpu>
Jan 20 14:42:52 compute-0 nova_compute[250018]:   <metadata>
Jan 20 14:42:52 compute-0 nova_compute[250018]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 14:42:52 compute-0 nova_compute[250018]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 14:42:52 compute-0 nova_compute[250018]:       <nova:name>tempest-tempest.common.compute-instance-1574727229-2</nova:name>
Jan 20 14:42:52 compute-0 nova_compute[250018]:       <nova:creationTime>2026-01-20 14:42:51</nova:creationTime>
Jan 20 14:42:52 compute-0 nova_compute[250018]:       <nova:flavor name="m1.nano">
Jan 20 14:42:52 compute-0 nova_compute[250018]:         <nova:memory>128</nova:memory>
Jan 20 14:42:52 compute-0 nova_compute[250018]:         <nova:disk>1</nova:disk>
Jan 20 14:42:52 compute-0 nova_compute[250018]:         <nova:swap>0</nova:swap>
Jan 20 14:42:52 compute-0 nova_compute[250018]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 14:42:52 compute-0 nova_compute[250018]:         <nova:vcpus>1</nova:vcpus>
Jan 20 14:42:52 compute-0 nova_compute[250018]:       </nova:flavor>
Jan 20 14:42:52 compute-0 nova_compute[250018]:       <nova:owner>
Jan 20 14:42:52 compute-0 nova_compute[250018]:         <nova:user uuid="aa2e7857e85f483eb0d162e2ee8c2e2c">tempest-MultipleCreateTestJSON-164394330-project-member</nova:user>
Jan 20 14:42:52 compute-0 nova_compute[250018]:         <nova:project uuid="a3e022a35f604df2bbc885e498b1e206">tempest-MultipleCreateTestJSON-164394330</nova:project>
Jan 20 14:42:52 compute-0 nova_compute[250018]:       </nova:owner>
Jan 20 14:42:52 compute-0 nova_compute[250018]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 14:42:52 compute-0 nova_compute[250018]:       <nova:ports>
Jan 20 14:42:52 compute-0 nova_compute[250018]:         <nova:port uuid="d6c7f812-24ee-4acf-9603-942a2c4658f7">
Jan 20 14:42:52 compute-0 nova_compute[250018]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Jan 20 14:42:52 compute-0 nova_compute[250018]:         </nova:port>
Jan 20 14:42:52 compute-0 nova_compute[250018]:       </nova:ports>
Jan 20 14:42:52 compute-0 nova_compute[250018]:     </nova:instance>
Jan 20 14:42:52 compute-0 nova_compute[250018]:   </metadata>
Jan 20 14:42:52 compute-0 nova_compute[250018]:   <sysinfo type="smbios">
Jan 20 14:42:52 compute-0 nova_compute[250018]:     <system>
Jan 20 14:42:52 compute-0 nova_compute[250018]:       <entry name="manufacturer">RDO</entry>
Jan 20 14:42:52 compute-0 nova_compute[250018]:       <entry name="product">OpenStack Compute</entry>
Jan 20 14:42:52 compute-0 nova_compute[250018]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 14:42:52 compute-0 nova_compute[250018]:       <entry name="serial">62bbc690-cb71-4cdc-93e1-cdf395abbae4</entry>
Jan 20 14:42:52 compute-0 nova_compute[250018]:       <entry name="uuid">62bbc690-cb71-4cdc-93e1-cdf395abbae4</entry>
Jan 20 14:42:52 compute-0 nova_compute[250018]:       <entry name="family">Virtual Machine</entry>
Jan 20 14:42:52 compute-0 nova_compute[250018]:     </system>
Jan 20 14:42:52 compute-0 nova_compute[250018]:   </sysinfo>
Jan 20 14:42:52 compute-0 nova_compute[250018]:   <os>
Jan 20 14:42:52 compute-0 nova_compute[250018]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 14:42:52 compute-0 nova_compute[250018]:     <boot dev="hd"/>
Jan 20 14:42:52 compute-0 nova_compute[250018]:     <smbios mode="sysinfo"/>
Jan 20 14:42:52 compute-0 nova_compute[250018]:   </os>
Jan 20 14:42:52 compute-0 nova_compute[250018]:   <features>
Jan 20 14:42:52 compute-0 nova_compute[250018]:     <acpi/>
Jan 20 14:42:52 compute-0 nova_compute[250018]:     <apic/>
Jan 20 14:42:52 compute-0 nova_compute[250018]:     <vmcoreinfo/>
Jan 20 14:42:52 compute-0 nova_compute[250018]:   </features>
Jan 20 14:42:52 compute-0 nova_compute[250018]:   <clock offset="utc">
Jan 20 14:42:52 compute-0 nova_compute[250018]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 14:42:52 compute-0 nova_compute[250018]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 14:42:52 compute-0 nova_compute[250018]:     <timer name="hpet" present="no"/>
Jan 20 14:42:52 compute-0 nova_compute[250018]:   </clock>
Jan 20 14:42:52 compute-0 nova_compute[250018]:   <cpu mode="custom" match="exact">
Jan 20 14:42:52 compute-0 nova_compute[250018]:     <model>Nehalem</model>
Jan 20 14:42:52 compute-0 nova_compute[250018]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 14:42:52 compute-0 nova_compute[250018]:   </cpu>
Jan 20 14:42:52 compute-0 nova_compute[250018]:   <devices>
Jan 20 14:42:52 compute-0 nova_compute[250018]:     <disk type="network" device="disk">
Jan 20 14:42:52 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 14:42:52 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/62bbc690-cb71-4cdc-93e1-cdf395abbae4_disk">
Jan 20 14:42:52 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:42:52 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:42:52 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:42:52 compute-0 nova_compute[250018]:       </source>
Jan 20 14:42:52 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 14:42:52 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:42:52 compute-0 nova_compute[250018]:       </auth>
Jan 20 14:42:52 compute-0 nova_compute[250018]:       <target dev="vda" bus="virtio"/>
Jan 20 14:42:52 compute-0 nova_compute[250018]:     </disk>
Jan 20 14:42:52 compute-0 nova_compute[250018]:     <disk type="network" device="cdrom">
Jan 20 14:42:52 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 14:42:52 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/62bbc690-cb71-4cdc-93e1-cdf395abbae4_disk.config">
Jan 20 14:42:52 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:42:52 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:42:52 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:42:52 compute-0 nova_compute[250018]:       </source>
Jan 20 14:42:52 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 14:42:52 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:42:52 compute-0 nova_compute[250018]:       </auth>
Jan 20 14:42:52 compute-0 nova_compute[250018]:       <target dev="sda" bus="sata"/>
Jan 20 14:42:52 compute-0 nova_compute[250018]:     </disk>
Jan 20 14:42:52 compute-0 nova_compute[250018]:     <interface type="ethernet">
Jan 20 14:42:52 compute-0 nova_compute[250018]:       <mac address="fa:16:3e:6d:65:2c"/>
Jan 20 14:42:52 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 14:42:52 compute-0 nova_compute[250018]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 14:42:52 compute-0 nova_compute[250018]:       <mtu size="1442"/>
Jan 20 14:42:52 compute-0 nova_compute[250018]:       <target dev="tapd6c7f812-24"/>
Jan 20 14:42:52 compute-0 nova_compute[250018]:     </interface>
Jan 20 14:42:52 compute-0 nova_compute[250018]:     <serial type="pty">
Jan 20 14:42:52 compute-0 nova_compute[250018]:       <log file="/var/lib/nova/instances/62bbc690-cb71-4cdc-93e1-cdf395abbae4/console.log" append="off"/>
Jan 20 14:42:52 compute-0 nova_compute[250018]:     </serial>
Jan 20 14:42:52 compute-0 nova_compute[250018]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 14:42:52 compute-0 nova_compute[250018]:     <video>
Jan 20 14:42:52 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 14:42:52 compute-0 nova_compute[250018]:     </video>
Jan 20 14:42:52 compute-0 nova_compute[250018]:     <input type="tablet" bus="usb"/>
Jan 20 14:42:52 compute-0 nova_compute[250018]:     <rng model="virtio">
Jan 20 14:42:52 compute-0 nova_compute[250018]:       <backend model="random">/dev/urandom</backend>
Jan 20 14:42:52 compute-0 nova_compute[250018]:     </rng>
Jan 20 14:42:52 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root"/>
Jan 20 14:42:52 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:42:52 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:42:52 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:42:52 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:42:52 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:42:52 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:42:52 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:42:52 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:42:52 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:42:52 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:42:52 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:42:52 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:42:52 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:42:52 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:42:52 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:42:52 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:42:52 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:42:52 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:42:52 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:42:52 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:42:52 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:42:52 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:42:52 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:42:52 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:42:52 compute-0 nova_compute[250018]:     <controller type="usb" index="0"/>
Jan 20 14:42:52 compute-0 nova_compute[250018]:     <memballoon model="virtio">
Jan 20 14:42:52 compute-0 nova_compute[250018]:       <stats period="10"/>
Jan 20 14:42:52 compute-0 nova_compute[250018]:     </memballoon>
Jan 20 14:42:52 compute-0 nova_compute[250018]:   </devices>
Jan 20 14:42:52 compute-0 nova_compute[250018]: </domain>
Jan 20 14:42:52 compute-0 nova_compute[250018]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 14:42:52 compute-0 nova_compute[250018]: 2026-01-20 14:42:52.614 250022 DEBUG nova.compute.manager [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: 62bbc690-cb71-4cdc-93e1-cdf395abbae4] Preparing to wait for external event network-vif-plugged-d6c7f812-24ee-4acf-9603-942a2c4658f7 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 20 14:42:52 compute-0 nova_compute[250018]: 2026-01-20 14:42:52.615 250022 DEBUG oslo_concurrency.lockutils [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Acquiring lock "62bbc690-cb71-4cdc-93e1-cdf395abbae4-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:42:52 compute-0 nova_compute[250018]: 2026-01-20 14:42:52.615 250022 DEBUG oslo_concurrency.lockutils [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Lock "62bbc690-cb71-4cdc-93e1-cdf395abbae4-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:42:52 compute-0 nova_compute[250018]: 2026-01-20 14:42:52.616 250022 DEBUG oslo_concurrency.lockutils [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Lock "62bbc690-cb71-4cdc-93e1-cdf395abbae4-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:42:52 compute-0 nova_compute[250018]: 2026-01-20 14:42:52.618 250022 DEBUG nova.virt.libvirt.vif [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:42:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1574727229',display_name='tempest-tempest.common.compute-instance-1574727229-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1574727229-2',id=78,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a3e022a35f604df2bbc885e498b1e206',ramdisk_id='',reservation_id='r-6khz0z7o',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-164394330',owner_user_name='tempest-MultipleCreateTestJSON-164394330-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:42:46Z,user_data=None,user_id='aa2e7857e85f483eb0d162e2ee8c2e2c',uuid=62bbc690-cb71-4cdc-93e1-cdf395abbae4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d6c7f812-24ee-4acf-9603-942a2c4658f7", "address": "fa:16:3e:6d:65:2c", "network": {"id": "3e260ad9-fcf1-432b-b71b-b943d4249b65", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1425882684-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3e022a35f604df2bbc885e498b1e206", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6c7f812-24", "ovs_interfaceid": "d6c7f812-24ee-4acf-9603-942a2c4658f7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 14:42:52 compute-0 nova_compute[250018]: 2026-01-20 14:42:52.619 250022 DEBUG nova.network.os_vif_util [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Converting VIF {"id": "d6c7f812-24ee-4acf-9603-942a2c4658f7", "address": "fa:16:3e:6d:65:2c", "network": {"id": "3e260ad9-fcf1-432b-b71b-b943d4249b65", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1425882684-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3e022a35f604df2bbc885e498b1e206", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6c7f812-24", "ovs_interfaceid": "d6c7f812-24ee-4acf-9603-942a2c4658f7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:42:52 compute-0 nova_compute[250018]: 2026-01-20 14:42:52.621 250022 DEBUG nova.network.os_vif_util [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6d:65:2c,bridge_name='br-int',has_traffic_filtering=True,id=d6c7f812-24ee-4acf-9603-942a2c4658f7,network=Network(3e260ad9-fcf1-432b-b71b-b943d4249b65),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd6c7f812-24') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:42:52 compute-0 nova_compute[250018]: 2026-01-20 14:42:52.622 250022 DEBUG os_vif [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6d:65:2c,bridge_name='br-int',has_traffic_filtering=True,id=d6c7f812-24ee-4acf-9603-942a2c4658f7,network=Network(3e260ad9-fcf1-432b-b71b-b943d4249b65),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd6c7f812-24') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 14:42:52 compute-0 nova_compute[250018]: 2026-01-20 14:42:52.629 250022 DEBUG nova.network.neutron [None req-adf85e7f-ce10-4e45-b349-2a278b3cfd53 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] [instance: 1951432b-2c0c-4d1b-90df-d94dcf9fc32e] Updating instance_info_cache with network_info: [{"id": "f6e1030d-5508-4e83-92ce-0a723132eb45", "address": "fa:16:3e:8d:b9:dd", "network": {"id": "b36e9cab-12c6-4a09-9aab-ef2679d875ba", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-432532406-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b95747114ab4043b93a260387199c91", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf6e1030d-55", "ovs_interfaceid": "f6e1030d-5508-4e83-92ce-0a723132eb45", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:42:52 compute-0 nova_compute[250018]: 2026-01-20 14:42:52.631 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:52 compute-0 nova_compute[250018]: 2026-01-20 14:42:52.631 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:42:52 compute-0 nova_compute[250018]: 2026-01-20 14:42:52.632 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:42:52 compute-0 nova_compute[250018]: 2026-01-20 14:42:52.636 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:52 compute-0 nova_compute[250018]: 2026-01-20 14:42:52.636 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd6c7f812-24, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:42:52 compute-0 nova_compute[250018]: 2026-01-20 14:42:52.637 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd6c7f812-24, col_values=(('external_ids', {'iface-id': 'd6c7f812-24ee-4acf-9603-942a2c4658f7', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6d:65:2c', 'vm-uuid': '62bbc690-cb71-4cdc-93e1-cdf395abbae4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:42:52 compute-0 NetworkManager[48960]: <info>  [1768920172.6402] manager: (tapd6c7f812-24): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/123)
Jan 20 14:42:52 compute-0 nova_compute[250018]: 2026-01-20 14:42:52.639 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:52 compute-0 nova_compute[250018]: 2026-01-20 14:42:52.644 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:42:52 compute-0 nova_compute[250018]: 2026-01-20 14:42:52.650 250022 DEBUG oslo_concurrency.lockutils [None req-adf85e7f-ce10-4e45-b349-2a278b3cfd53 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Releasing lock "refresh_cache-1951432b-2c0c-4d1b-90df-d94dcf9fc32e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:42:52 compute-0 nova_compute[250018]: 2026-01-20 14:42:52.652 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:52 compute-0 nova_compute[250018]: 2026-01-20 14:42:52.654 250022 INFO os_vif [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6d:65:2c,bridge_name='br-int',has_traffic_filtering=True,id=d6c7f812-24ee-4acf-9603-942a2c4658f7,network=Network(3e260ad9-fcf1-432b-b71b-b943d4249b65),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd6c7f812-24')
Jan 20 14:42:52 compute-0 nova_compute[250018]: 2026-01-20 14:42:52.688 250022 INFO nova.virt.libvirt.driver [-] [instance: 1951432b-2c0c-4d1b-90df-d94dcf9fc32e] Instance destroyed successfully.
Jan 20 14:42:52 compute-0 nova_compute[250018]: 2026-01-20 14:42:52.689 250022 DEBUG nova.objects.instance [None req-adf85e7f-ce10-4e45-b349-2a278b3cfd53 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Lazy-loading 'numa_topology' on Instance uuid 1951432b-2c0c-4d1b-90df-d94dcf9fc32e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:42:52 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:42:52 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:42:52 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:42:52.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:42:52 compute-0 nova_compute[250018]: 2026-01-20 14:42:52.723 250022 DEBUG nova.objects.instance [None req-adf85e7f-ce10-4e45-b349-2a278b3cfd53 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Lazy-loading 'resources' on Instance uuid 1951432b-2c0c-4d1b-90df-d94dcf9fc32e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:42:52 compute-0 nova_compute[250018]: 2026-01-20 14:42:52.747 250022 DEBUG nova.virt.libvirt.driver [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:42:52 compute-0 nova_compute[250018]: 2026-01-20 14:42:52.747 250022 DEBUG nova.virt.libvirt.driver [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:42:52 compute-0 nova_compute[250018]: 2026-01-20 14:42:52.748 250022 DEBUG nova.virt.libvirt.driver [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] No VIF found with MAC fa:16:3e:6d:65:2c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 14:42:52 compute-0 nova_compute[250018]: 2026-01-20 14:42:52.748 250022 INFO nova.virt.libvirt.driver [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: 62bbc690-cb71-4cdc-93e1-cdf395abbae4] Using config drive
Jan 20 14:42:52 compute-0 nova_compute[250018]: 2026-01-20 14:42:52.773 250022 DEBUG nova.storage.rbd_utils [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] rbd image 62bbc690-cb71-4cdc-93e1-cdf395abbae4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:42:52 compute-0 nova_compute[250018]: 2026-01-20 14:42:52.779 250022 DEBUG nova.virt.libvirt.vif [None req-adf85e7f-ce10-4e45-b349-2a278b3cfd53 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:42:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-164859619',display_name='tempest-ListServerFiltersTestJSON-instance-164859619',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-164859619',id=74,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:42:23Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='4b95747114ab4043b93a260387199c91',ramdisk_id='',reservation_id='r-0ikbhbdb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-2126845308',owner_user_name='tempest-ListServerFiltersTestJSON-2126845308-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T14:42:47Z,user_data=None,user_id='ff99fc8eda0640928c6e82981dacb266',uuid=1951432b-2c0c-4d1b-90df-d94dcf9fc32e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "f6e1030d-5508-4e83-92ce-0a723132eb45", "address": "fa:16:3e:8d:b9:dd", "network": {"id": "b36e9cab-12c6-4a09-9aab-ef2679d875ba", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-432532406-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b95747114ab4043b93a260387199c91", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf6e1030d-55", "ovs_interfaceid": "f6e1030d-5508-4e83-92ce-0a723132eb45", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 14:42:52 compute-0 nova_compute[250018]: 2026-01-20 14:42:52.780 250022 DEBUG nova.network.os_vif_util [None req-adf85e7f-ce10-4e45-b349-2a278b3cfd53 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Converting VIF {"id": "f6e1030d-5508-4e83-92ce-0a723132eb45", "address": "fa:16:3e:8d:b9:dd", "network": {"id": "b36e9cab-12c6-4a09-9aab-ef2679d875ba", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-432532406-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b95747114ab4043b93a260387199c91", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf6e1030d-55", "ovs_interfaceid": "f6e1030d-5508-4e83-92ce-0a723132eb45", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:42:52 compute-0 nova_compute[250018]: 2026-01-20 14:42:52.781 250022 DEBUG nova.network.os_vif_util [None req-adf85e7f-ce10-4e45-b349-2a278b3cfd53 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8d:b9:dd,bridge_name='br-int',has_traffic_filtering=True,id=f6e1030d-5508-4e83-92ce-0a723132eb45,network=Network(b36e9cab-12c6-4a09-9aab-ef2679d875ba),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf6e1030d-55') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:42:52 compute-0 nova_compute[250018]: 2026-01-20 14:42:52.781 250022 DEBUG os_vif [None req-adf85e7f-ce10-4e45-b349-2a278b3cfd53 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8d:b9:dd,bridge_name='br-int',has_traffic_filtering=True,id=f6e1030d-5508-4e83-92ce-0a723132eb45,network=Network(b36e9cab-12c6-4a09-9aab-ef2679d875ba),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf6e1030d-55') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 14:42:52 compute-0 nova_compute[250018]: 2026-01-20 14:42:52.783 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:52 compute-0 nova_compute[250018]: 2026-01-20 14:42:52.783 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf6e1030d-55, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:42:52 compute-0 nova_compute[250018]: 2026-01-20 14:42:52.786 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:52 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/27707950' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:42:52 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/3459419744' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:42:52 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3230279258' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:42:52 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/1767237631' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:42:52 compute-0 nova_compute[250018]: 2026-01-20 14:42:52.789 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:42:52 compute-0 nova_compute[250018]: 2026-01-20 14:42:52.791 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:52 compute-0 nova_compute[250018]: 2026-01-20 14:42:52.796 250022 INFO os_vif [None req-adf85e7f-ce10-4e45-b349-2a278b3cfd53 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8d:b9:dd,bridge_name='br-int',has_traffic_filtering=True,id=f6e1030d-5508-4e83-92ce-0a723132eb45,network=Network(b36e9cab-12c6-4a09-9aab-ef2679d875ba),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf6e1030d-55')
Jan 20 14:42:52 compute-0 nova_compute[250018]: 2026-01-20 14:42:52.803 250022 DEBUG nova.virt.libvirt.driver [None req-adf85e7f-ce10-4e45-b349-2a278b3cfd53 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] [instance: 1951432b-2c0c-4d1b-90df-d94dcf9fc32e] Start _get_guest_xml network_info=[{"id": "f6e1030d-5508-4e83-92ce-0a723132eb45", "address": "fa:16:3e:8d:b9:dd", "network": {"id": "b36e9cab-12c6-4a09-9aab-ef2679d875ba", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-432532406-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b95747114ab4043b93a260387199c91", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf6e1030d-55", "ovs_interfaceid": "f6e1030d-5508-4e83-92ce-0a723132eb45", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encrypted': False, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'guest_format': None, 'size': 0, 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 14:42:52 compute-0 nova_compute[250018]: 2026-01-20 14:42:52.807 250022 WARNING nova.virt.libvirt.driver [None req-adf85e7f-ce10-4e45-b349-2a278b3cfd53 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:42:52 compute-0 nova_compute[250018]: 2026-01-20 14:42:52.818 250022 DEBUG nova.virt.libvirt.host [None req-adf85e7f-ce10-4e45-b349-2a278b3cfd53 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 14:42:52 compute-0 nova_compute[250018]: 2026-01-20 14:42:52.819 250022 DEBUG nova.virt.libvirt.host [None req-adf85e7f-ce10-4e45-b349-2a278b3cfd53 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 14:42:52 compute-0 nova_compute[250018]: 2026-01-20 14:42:52.822 250022 DEBUG nova.virt.libvirt.host [None req-adf85e7f-ce10-4e45-b349-2a278b3cfd53 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 14:42:52 compute-0 nova_compute[250018]: 2026-01-20 14:42:52.823 250022 DEBUG nova.virt.libvirt.host [None req-adf85e7f-ce10-4e45-b349-2a278b3cfd53 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 14:42:52 compute-0 nova_compute[250018]: 2026-01-20 14:42:52.824 250022 DEBUG nova.virt.libvirt.driver [None req-adf85e7f-ce10-4e45-b349-2a278b3cfd53 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 14:42:52 compute-0 nova_compute[250018]: 2026-01-20 14:42:52.824 250022 DEBUG nova.virt.hardware [None req-adf85e7f-ce10-4e45-b349-2a278b3cfd53 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 14:42:52 compute-0 nova_compute[250018]: 2026-01-20 14:42:52.824 250022 DEBUG nova.virt.hardware [None req-adf85e7f-ce10-4e45-b349-2a278b3cfd53 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 14:42:52 compute-0 nova_compute[250018]: 2026-01-20 14:42:52.824 250022 DEBUG nova.virt.hardware [None req-adf85e7f-ce10-4e45-b349-2a278b3cfd53 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 14:42:52 compute-0 nova_compute[250018]: 2026-01-20 14:42:52.825 250022 DEBUG nova.virt.hardware [None req-adf85e7f-ce10-4e45-b349-2a278b3cfd53 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 14:42:52 compute-0 nova_compute[250018]: 2026-01-20 14:42:52.825 250022 DEBUG nova.virt.hardware [None req-adf85e7f-ce10-4e45-b349-2a278b3cfd53 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 14:42:52 compute-0 nova_compute[250018]: 2026-01-20 14:42:52.825 250022 DEBUG nova.virt.hardware [None req-adf85e7f-ce10-4e45-b349-2a278b3cfd53 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 14:42:52 compute-0 nova_compute[250018]: 2026-01-20 14:42:52.825 250022 DEBUG nova.virt.hardware [None req-adf85e7f-ce10-4e45-b349-2a278b3cfd53 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 14:42:52 compute-0 nova_compute[250018]: 2026-01-20 14:42:52.825 250022 DEBUG nova.virt.hardware [None req-adf85e7f-ce10-4e45-b349-2a278b3cfd53 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 14:42:52 compute-0 nova_compute[250018]: 2026-01-20 14:42:52.826 250022 DEBUG nova.virt.hardware [None req-adf85e7f-ce10-4e45-b349-2a278b3cfd53 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 14:42:52 compute-0 nova_compute[250018]: 2026-01-20 14:42:52.826 250022 DEBUG nova.virt.hardware [None req-adf85e7f-ce10-4e45-b349-2a278b3cfd53 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 14:42:52 compute-0 nova_compute[250018]: 2026-01-20 14:42:52.826 250022 DEBUG nova.virt.hardware [None req-adf85e7f-ce10-4e45-b349-2a278b3cfd53 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 14:42:52 compute-0 nova_compute[250018]: 2026-01-20 14:42:52.826 250022 DEBUG nova.objects.instance [None req-adf85e7f-ce10-4e45-b349-2a278b3cfd53 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 1951432b-2c0c-4d1b-90df-d94dcf9fc32e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:42:52 compute-0 nova_compute[250018]: 2026-01-20 14:42:52.839 250022 DEBUG oslo_concurrency.processutils [None req-adf85e7f-ce10-4e45-b349-2a278b3cfd53 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:42:52 compute-0 heuristic_mclean[296615]: {
Jan 20 14:42:52 compute-0 heuristic_mclean[296615]:     "0": [
Jan 20 14:42:52 compute-0 heuristic_mclean[296615]:         {
Jan 20 14:42:52 compute-0 heuristic_mclean[296615]:             "devices": [
Jan 20 14:42:52 compute-0 heuristic_mclean[296615]:                 "/dev/loop3"
Jan 20 14:42:52 compute-0 heuristic_mclean[296615]:             ],
Jan 20 14:42:52 compute-0 heuristic_mclean[296615]:             "lv_name": "ceph_lv0",
Jan 20 14:42:52 compute-0 heuristic_mclean[296615]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:42:52 compute-0 heuristic_mclean[296615]:             "lv_size": "7511998464",
Jan 20 14:42:52 compute-0 heuristic_mclean[296615]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e399cf45-e6b6-5393-99f1-75c601d3f188,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afc3740a-ccee-46f8-b343-22648cc89376,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 20 14:42:52 compute-0 heuristic_mclean[296615]:             "lv_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 14:42:52 compute-0 heuristic_mclean[296615]:             "name": "ceph_lv0",
Jan 20 14:42:52 compute-0 heuristic_mclean[296615]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:42:52 compute-0 heuristic_mclean[296615]:             "tags": {
Jan 20 14:42:52 compute-0 heuristic_mclean[296615]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:42:52 compute-0 heuristic_mclean[296615]:                 "ceph.block_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 14:42:52 compute-0 heuristic_mclean[296615]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 14:42:52 compute-0 heuristic_mclean[296615]:                 "ceph.cluster_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 14:42:52 compute-0 heuristic_mclean[296615]:                 "ceph.cluster_name": "ceph",
Jan 20 14:42:52 compute-0 heuristic_mclean[296615]:                 "ceph.crush_device_class": "",
Jan 20 14:42:52 compute-0 heuristic_mclean[296615]:                 "ceph.encrypted": "0",
Jan 20 14:42:52 compute-0 heuristic_mclean[296615]:                 "ceph.osd_fsid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 14:42:52 compute-0 heuristic_mclean[296615]:                 "ceph.osd_id": "0",
Jan 20 14:42:52 compute-0 heuristic_mclean[296615]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 14:42:52 compute-0 heuristic_mclean[296615]:                 "ceph.type": "block",
Jan 20 14:42:52 compute-0 heuristic_mclean[296615]:                 "ceph.vdo": "0"
Jan 20 14:42:52 compute-0 heuristic_mclean[296615]:             },
Jan 20 14:42:52 compute-0 heuristic_mclean[296615]:             "type": "block",
Jan 20 14:42:52 compute-0 heuristic_mclean[296615]:             "vg_name": "ceph_vg0"
Jan 20 14:42:52 compute-0 heuristic_mclean[296615]:         }
Jan 20 14:42:52 compute-0 heuristic_mclean[296615]:     ]
Jan 20 14:42:52 compute-0 heuristic_mclean[296615]: }
Jan 20 14:42:52 compute-0 systemd[1]: libpod-96f91ebee2c94beea1a8438104d5eeaa30913474ad431adb450001b518e3d72d.scope: Deactivated successfully.
Jan 20 14:42:52 compute-0 podman[296598]: 2026-01-20 14:42:52.884575664 +0000 UTC m=+0.907975726 container died 96f91ebee2c94beea1a8438104d5eeaa30913474ad431adb450001b518e3d72d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_mclean, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:42:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-1b7fc687419d269d4f52645ab0d9cc2b38b91885477ffb1102c1d7d4021a6ef3-merged.mount: Deactivated successfully.
Jan 20 14:42:52 compute-0 podman[296598]: 2026-01-20 14:42:52.945695191 +0000 UTC m=+0.969095243 container remove 96f91ebee2c94beea1a8438104d5eeaa30913474ad431adb450001b518e3d72d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_mclean, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 20 14:42:52 compute-0 systemd[1]: libpod-conmon-96f91ebee2c94beea1a8438104d5eeaa30913474ad431adb450001b518e3d72d.scope: Deactivated successfully.
Jan 20 14:42:52 compute-0 sudo[296475]: pam_unix(sudo:session): session closed for user root
Jan 20 14:42:53 compute-0 sudo[296721]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:42:53 compute-0 sudo[296721]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:42:53 compute-0 sudo[296721]: pam_unix(sudo:session): session closed for user root
Jan 20 14:42:53 compute-0 sudo[296746]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:42:53 compute-0 sudo[296746]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:42:53 compute-0 sudo[296746]: pam_unix(sudo:session): session closed for user root
Jan 20 14:42:53 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:42:53 compute-0 sudo[296771]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:42:53 compute-0 sudo[296771]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:42:53 compute-0 sudo[296771]: pam_unix(sudo:session): session closed for user root
Jan 20 14:42:53 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:42:53 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4025460005' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:42:53 compute-0 sudo[296796]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- raw list --format json
Jan 20 14:42:53 compute-0 sudo[296796]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:42:53 compute-0 nova_compute[250018]: 2026-01-20 14:42:53.277 250022 DEBUG oslo_concurrency.processutils [None req-adf85e7f-ce10-4e45-b349-2a278b3cfd53 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:42:53 compute-0 nova_compute[250018]: 2026-01-20 14:42:53.310 250022 DEBUG oslo_concurrency.processutils [None req-adf85e7f-ce10-4e45-b349-2a278b3cfd53 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:42:53 compute-0 nova_compute[250018]: 2026-01-20 14:42:53.574 250022 INFO nova.virt.libvirt.driver [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: 62bbc690-cb71-4cdc-93e1-cdf395abbae4] Creating config drive at /var/lib/nova/instances/62bbc690-cb71-4cdc-93e1-cdf395abbae4/disk.config
Jan 20 14:42:53 compute-0 nova_compute[250018]: 2026-01-20 14:42:53.579 250022 DEBUG oslo_concurrency.processutils [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/62bbc690-cb71-4cdc-93e1-cdf395abbae4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqb3njfnv execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:42:53 compute-0 podman[296897]: 2026-01-20 14:42:53.653607234 +0000 UTC m=+0.085997962 container create 02d04d2a822763ebebc41e7eb9beb5cce1ef2f464955cad35523a57fda2f6c64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_mestorf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 20 14:42:53 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1636: 321 pgs: 321 active+clean; 372 MiB data, 852 MiB used, 20 GiB / 21 GiB avail; 708 KiB/s rd, 6.6 MiB/s wr, 198 op/s
Jan 20 14:42:53 compute-0 systemd[1]: Started libpod-conmon-02d04d2a822763ebebc41e7eb9beb5cce1ef2f464955cad35523a57fda2f6c64.scope.
Jan 20 14:42:53 compute-0 podman[296897]: 2026-01-20 14:42:53.624497254 +0000 UTC m=+0.056888042 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:42:53 compute-0 nova_compute[250018]: 2026-01-20 14:42:53.717 250022 DEBUG oslo_concurrency.processutils [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/62bbc690-cb71-4cdc-93e1-cdf395abbae4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqb3njfnv" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:42:53 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:42:53 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:42:53 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3234445789' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:42:53 compute-0 podman[296897]: 2026-01-20 14:42:53.747260131 +0000 UTC m=+0.179650839 container init 02d04d2a822763ebebc41e7eb9beb5cce1ef2f464955cad35523a57fda2f6c64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_mestorf, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:42:53 compute-0 nova_compute[250018]: 2026-01-20 14:42:53.747 250022 DEBUG nova.storage.rbd_utils [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] rbd image 62bbc690-cb71-4cdc-93e1-cdf395abbae4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:42:53 compute-0 nova_compute[250018]: 2026-01-20 14:42:53.754 250022 DEBUG oslo_concurrency.processutils [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/62bbc690-cb71-4cdc-93e1-cdf395abbae4/disk.config 62bbc690-cb71-4cdc-93e1-cdf395abbae4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:42:53 compute-0 podman[296897]: 2026-01-20 14:42:53.75863842 +0000 UTC m=+0.191029118 container start 02d04d2a822763ebebc41e7eb9beb5cce1ef2f464955cad35523a57fda2f6c64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_mestorf, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 20 14:42:53 compute-0 podman[296897]: 2026-01-20 14:42:53.762856804 +0000 UTC m=+0.195247492 container attach 02d04d2a822763ebebc41e7eb9beb5cce1ef2f464955cad35523a57fda2f6c64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_mestorf, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 20 14:42:53 compute-0 condescending_mestorf[296917]: 167 167
Jan 20 14:42:53 compute-0 systemd[1]: libpod-02d04d2a822763ebebc41e7eb9beb5cce1ef2f464955cad35523a57fda2f6c64.scope: Deactivated successfully.
Jan 20 14:42:53 compute-0 podman[296897]: 2026-01-20 14:42:53.765705861 +0000 UTC m=+0.198096559 container died 02d04d2a822763ebebc41e7eb9beb5cce1ef2f464955cad35523a57fda2f6c64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_mestorf, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:42:53 compute-0 nova_compute[250018]: 2026-01-20 14:42:53.782 250022 DEBUG oslo_concurrency.processutils [None req-adf85e7f-ce10-4e45-b349-2a278b3cfd53 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:42:53 compute-0 nova_compute[250018]: 2026-01-20 14:42:53.785 250022 DEBUG nova.virt.libvirt.vif [None req-adf85e7f-ce10-4e45-b349-2a278b3cfd53 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:42:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-164859619',display_name='tempest-ListServerFiltersTestJSON-instance-164859619',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-164859619',id=74,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:42:23Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='4b95747114ab4043b93a260387199c91',ramdisk_id='',reservation_id='r-0ikbhbdb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-2126845308',owner_user_name='tempest-ListServerFiltersTestJSON-2126845308-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T14:42:47Z,user_data=None,user_id='ff99fc8eda0640928c6e82981dacb266',uuid=1951432b-2c0c-4d1b-90df-d94dcf9fc32e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "f6e1030d-5508-4e83-92ce-0a723132eb45", "address": "fa:16:3e:8d:b9:dd", "network": {"id": "b36e9cab-12c6-4a09-9aab-ef2679d875ba", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-432532406-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b95747114ab4043b93a260387199c91", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf6e1030d-55", "ovs_interfaceid": "f6e1030d-5508-4e83-92ce-0a723132eb45", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 14:42:53 compute-0 nova_compute[250018]: 2026-01-20 14:42:53.786 250022 DEBUG nova.network.os_vif_util [None req-adf85e7f-ce10-4e45-b349-2a278b3cfd53 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Converting VIF {"id": "f6e1030d-5508-4e83-92ce-0a723132eb45", "address": "fa:16:3e:8d:b9:dd", "network": {"id": "b36e9cab-12c6-4a09-9aab-ef2679d875ba", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-432532406-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b95747114ab4043b93a260387199c91", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf6e1030d-55", "ovs_interfaceid": "f6e1030d-5508-4e83-92ce-0a723132eb45", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:42:53 compute-0 nova_compute[250018]: 2026-01-20 14:42:53.787 250022 DEBUG nova.network.os_vif_util [None req-adf85e7f-ce10-4e45-b349-2a278b3cfd53 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8d:b9:dd,bridge_name='br-int',has_traffic_filtering=True,id=f6e1030d-5508-4e83-92ce-0a723132eb45,network=Network(b36e9cab-12c6-4a09-9aab-ef2679d875ba),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf6e1030d-55') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:42:53 compute-0 nova_compute[250018]: 2026-01-20 14:42:53.789 250022 DEBUG nova.objects.instance [None req-adf85e7f-ce10-4e45-b349-2a278b3cfd53 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Lazy-loading 'pci_devices' on Instance uuid 1951432b-2c0c-4d1b-90df-d94dcf9fc32e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:42:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-1cbb967631e99d56d2f414bafe6b0bbf71b281fbc80a44f0d057c27f2347c4d4-merged.mount: Deactivated successfully.
Jan 20 14:42:53 compute-0 podman[296897]: 2026-01-20 14:42:53.805691345 +0000 UTC m=+0.238082043 container remove 02d04d2a822763ebebc41e7eb9beb5cce1ef2f464955cad35523a57fda2f6c64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_mestorf, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:42:53 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/4025460005' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:42:53 compute-0 ceph-mon[74360]: pgmap v1636: 321 pgs: 321 active+clean; 372 MiB data, 852 MiB used, 20 GiB / 21 GiB avail; 708 KiB/s rd, 6.6 MiB/s wr, 198 op/s
Jan 20 14:42:53 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3234445789' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:42:53 compute-0 systemd[1]: libpod-conmon-02d04d2a822763ebebc41e7eb9beb5cce1ef2f464955cad35523a57fda2f6c64.scope: Deactivated successfully.
Jan 20 14:42:53 compute-0 nova_compute[250018]: 2026-01-20 14:42:53.832 250022 DEBUG nova.virt.libvirt.driver [None req-adf85e7f-ce10-4e45-b349-2a278b3cfd53 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] [instance: 1951432b-2c0c-4d1b-90df-d94dcf9fc32e] End _get_guest_xml xml=<domain type="kvm">
Jan 20 14:42:53 compute-0 nova_compute[250018]:   <uuid>1951432b-2c0c-4d1b-90df-d94dcf9fc32e</uuid>
Jan 20 14:42:53 compute-0 nova_compute[250018]:   <name>instance-0000004a</name>
Jan 20 14:42:53 compute-0 nova_compute[250018]:   <memory>131072</memory>
Jan 20 14:42:53 compute-0 nova_compute[250018]:   <vcpu>1</vcpu>
Jan 20 14:42:53 compute-0 nova_compute[250018]:   <metadata>
Jan 20 14:42:53 compute-0 nova_compute[250018]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 14:42:53 compute-0 nova_compute[250018]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 14:42:53 compute-0 nova_compute[250018]:       <nova:name>tempest-ListServerFiltersTestJSON-instance-164859619</nova:name>
Jan 20 14:42:53 compute-0 nova_compute[250018]:       <nova:creationTime>2026-01-20 14:42:52</nova:creationTime>
Jan 20 14:42:53 compute-0 nova_compute[250018]:       <nova:flavor name="m1.nano">
Jan 20 14:42:53 compute-0 nova_compute[250018]:         <nova:memory>128</nova:memory>
Jan 20 14:42:53 compute-0 nova_compute[250018]:         <nova:disk>1</nova:disk>
Jan 20 14:42:53 compute-0 nova_compute[250018]:         <nova:swap>0</nova:swap>
Jan 20 14:42:53 compute-0 nova_compute[250018]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 14:42:53 compute-0 nova_compute[250018]:         <nova:vcpus>1</nova:vcpus>
Jan 20 14:42:53 compute-0 nova_compute[250018]:       </nova:flavor>
Jan 20 14:42:53 compute-0 nova_compute[250018]:       <nova:owner>
Jan 20 14:42:53 compute-0 nova_compute[250018]:         <nova:user uuid="ff99fc8eda0640928c6e82981dacb266">tempest-ListServerFiltersTestJSON-2126845308-project-member</nova:user>
Jan 20 14:42:53 compute-0 nova_compute[250018]:         <nova:project uuid="4b95747114ab4043b93a260387199c91">tempest-ListServerFiltersTestJSON-2126845308</nova:project>
Jan 20 14:42:53 compute-0 nova_compute[250018]:       </nova:owner>
Jan 20 14:42:53 compute-0 nova_compute[250018]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 14:42:53 compute-0 nova_compute[250018]:       <nova:ports>
Jan 20 14:42:53 compute-0 nova_compute[250018]:         <nova:port uuid="f6e1030d-5508-4e83-92ce-0a723132eb45">
Jan 20 14:42:53 compute-0 nova_compute[250018]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Jan 20 14:42:53 compute-0 nova_compute[250018]:         </nova:port>
Jan 20 14:42:53 compute-0 nova_compute[250018]:       </nova:ports>
Jan 20 14:42:53 compute-0 nova_compute[250018]:     </nova:instance>
Jan 20 14:42:53 compute-0 nova_compute[250018]:   </metadata>
Jan 20 14:42:53 compute-0 nova_compute[250018]:   <sysinfo type="smbios">
Jan 20 14:42:53 compute-0 nova_compute[250018]:     <system>
Jan 20 14:42:53 compute-0 nova_compute[250018]:       <entry name="manufacturer">RDO</entry>
Jan 20 14:42:53 compute-0 nova_compute[250018]:       <entry name="product">OpenStack Compute</entry>
Jan 20 14:42:53 compute-0 nova_compute[250018]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 14:42:53 compute-0 nova_compute[250018]:       <entry name="serial">1951432b-2c0c-4d1b-90df-d94dcf9fc32e</entry>
Jan 20 14:42:53 compute-0 nova_compute[250018]:       <entry name="uuid">1951432b-2c0c-4d1b-90df-d94dcf9fc32e</entry>
Jan 20 14:42:53 compute-0 nova_compute[250018]:       <entry name="family">Virtual Machine</entry>
Jan 20 14:42:53 compute-0 nova_compute[250018]:     </system>
Jan 20 14:42:53 compute-0 nova_compute[250018]:   </sysinfo>
Jan 20 14:42:53 compute-0 nova_compute[250018]:   <os>
Jan 20 14:42:53 compute-0 nova_compute[250018]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 14:42:53 compute-0 nova_compute[250018]:     <boot dev="hd"/>
Jan 20 14:42:53 compute-0 nova_compute[250018]:     <smbios mode="sysinfo"/>
Jan 20 14:42:53 compute-0 nova_compute[250018]:   </os>
Jan 20 14:42:53 compute-0 nova_compute[250018]:   <features>
Jan 20 14:42:53 compute-0 nova_compute[250018]:     <acpi/>
Jan 20 14:42:53 compute-0 nova_compute[250018]:     <apic/>
Jan 20 14:42:53 compute-0 nova_compute[250018]:     <vmcoreinfo/>
Jan 20 14:42:53 compute-0 nova_compute[250018]:   </features>
Jan 20 14:42:53 compute-0 nova_compute[250018]:   <clock offset="utc">
Jan 20 14:42:53 compute-0 nova_compute[250018]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 14:42:53 compute-0 nova_compute[250018]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 14:42:53 compute-0 nova_compute[250018]:     <timer name="hpet" present="no"/>
Jan 20 14:42:53 compute-0 nova_compute[250018]:   </clock>
Jan 20 14:42:53 compute-0 nova_compute[250018]:   <cpu mode="custom" match="exact">
Jan 20 14:42:53 compute-0 nova_compute[250018]:     <model>Nehalem</model>
Jan 20 14:42:53 compute-0 nova_compute[250018]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 14:42:53 compute-0 nova_compute[250018]:   </cpu>
Jan 20 14:42:53 compute-0 nova_compute[250018]:   <devices>
Jan 20 14:42:53 compute-0 nova_compute[250018]:     <disk type="network" device="disk">
Jan 20 14:42:53 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 14:42:53 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/1951432b-2c0c-4d1b-90df-d94dcf9fc32e_disk">
Jan 20 14:42:53 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:42:53 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:42:53 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:42:53 compute-0 nova_compute[250018]:       </source>
Jan 20 14:42:53 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 14:42:53 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:42:53 compute-0 nova_compute[250018]:       </auth>
Jan 20 14:42:53 compute-0 nova_compute[250018]:       <target dev="vda" bus="virtio"/>
Jan 20 14:42:53 compute-0 nova_compute[250018]:     </disk>
Jan 20 14:42:53 compute-0 nova_compute[250018]:     <disk type="network" device="cdrom">
Jan 20 14:42:53 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 14:42:53 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/1951432b-2c0c-4d1b-90df-d94dcf9fc32e_disk.config">
Jan 20 14:42:53 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:42:53 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:42:53 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:42:53 compute-0 nova_compute[250018]:       </source>
Jan 20 14:42:53 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 14:42:53 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:42:53 compute-0 nova_compute[250018]:       </auth>
Jan 20 14:42:53 compute-0 nova_compute[250018]:       <target dev="sda" bus="sata"/>
Jan 20 14:42:53 compute-0 nova_compute[250018]:     </disk>
Jan 20 14:42:53 compute-0 nova_compute[250018]:     <interface type="ethernet">
Jan 20 14:42:53 compute-0 nova_compute[250018]:       <mac address="fa:16:3e:8d:b9:dd"/>
Jan 20 14:42:53 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 14:42:53 compute-0 nova_compute[250018]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 14:42:53 compute-0 nova_compute[250018]:       <mtu size="1442"/>
Jan 20 14:42:53 compute-0 nova_compute[250018]:       <target dev="tapf6e1030d-55"/>
Jan 20 14:42:53 compute-0 nova_compute[250018]:     </interface>
Jan 20 14:42:53 compute-0 nova_compute[250018]:     <serial type="pty">
Jan 20 14:42:53 compute-0 nova_compute[250018]:       <log file="/var/lib/nova/instances/1951432b-2c0c-4d1b-90df-d94dcf9fc32e/console.log" append="off"/>
Jan 20 14:42:53 compute-0 nova_compute[250018]:     </serial>
Jan 20 14:42:53 compute-0 nova_compute[250018]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 14:42:53 compute-0 nova_compute[250018]:     <video>
Jan 20 14:42:53 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 14:42:53 compute-0 nova_compute[250018]:     </video>
Jan 20 14:42:53 compute-0 nova_compute[250018]:     <input type="tablet" bus="usb"/>
Jan 20 14:42:53 compute-0 nova_compute[250018]:     <input type="keyboard" bus="usb"/>
Jan 20 14:42:53 compute-0 nova_compute[250018]:     <rng model="virtio">
Jan 20 14:42:53 compute-0 nova_compute[250018]:       <backend model="random">/dev/urandom</backend>
Jan 20 14:42:53 compute-0 nova_compute[250018]:     </rng>
Jan 20 14:42:53 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root"/>
Jan 20 14:42:53 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:42:53 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:42:53 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:42:53 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:42:53 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:42:53 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:42:53 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:42:53 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:42:53 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:42:53 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:42:53 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:42:53 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:42:53 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:42:53 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:42:53 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:42:53 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:42:53 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:42:53 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:42:53 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:42:53 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:42:53 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:42:53 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:42:53 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:42:53 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:42:53 compute-0 nova_compute[250018]:     <controller type="usb" index="0"/>
Jan 20 14:42:53 compute-0 nova_compute[250018]:     <memballoon model="virtio">
Jan 20 14:42:53 compute-0 nova_compute[250018]:       <stats period="10"/>
Jan 20 14:42:53 compute-0 nova_compute[250018]:     </memballoon>
Jan 20 14:42:53 compute-0 nova_compute[250018]:   </devices>
Jan 20 14:42:53 compute-0 nova_compute[250018]: </domain>
Jan 20 14:42:53 compute-0 nova_compute[250018]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 14:42:53 compute-0 nova_compute[250018]: 2026-01-20 14:42:53.836 250022 DEBUG nova.virt.libvirt.driver [None req-adf85e7f-ce10-4e45-b349-2a278b3cfd53 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] skipping disk for instance-0000004a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:42:53 compute-0 nova_compute[250018]: 2026-01-20 14:42:53.837 250022 DEBUG nova.virt.libvirt.driver [None req-adf85e7f-ce10-4e45-b349-2a278b3cfd53 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] skipping disk for instance-0000004a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:42:53 compute-0 nova_compute[250018]: 2026-01-20 14:42:53.839 250022 DEBUG nova.virt.libvirt.vif [None req-adf85e7f-ce10-4e45-b349-2a278b3cfd53 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:42:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-164859619',display_name='tempest-ListServerFiltersTestJSON-instance-164859619',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-164859619',id=74,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:42:23Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=4,progress=0,project_id='4b95747114ab4043b93a260387199c91',ramdisk_id='',reservation_id='r-0ikbhbdb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-2126845308',owner_user_name='tempest-ListServerFiltersTestJSON-2126845308-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T14:42:47Z,user_data=None,user_id='ff99fc8eda0640928c6e82981dacb266',uuid=1951432b-2c0c-4d1b-90df-d94dcf9fc32e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "f6e1030d-5508-4e83-92ce-0a723132eb45", "address": "fa:16:3e:8d:b9:dd", "network": {"id": "b36e9cab-12c6-4a09-9aab-ef2679d875ba", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-432532406-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b95747114ab4043b93a260387199c91", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf6e1030d-55", "ovs_interfaceid": "f6e1030d-5508-4e83-92ce-0a723132eb45", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 14:42:53 compute-0 nova_compute[250018]: 2026-01-20 14:42:53.839 250022 DEBUG nova.network.os_vif_util [None req-adf85e7f-ce10-4e45-b349-2a278b3cfd53 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Converting VIF {"id": "f6e1030d-5508-4e83-92ce-0a723132eb45", "address": "fa:16:3e:8d:b9:dd", "network": {"id": "b36e9cab-12c6-4a09-9aab-ef2679d875ba", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-432532406-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b95747114ab4043b93a260387199c91", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf6e1030d-55", "ovs_interfaceid": "f6e1030d-5508-4e83-92ce-0a723132eb45", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:42:53 compute-0 nova_compute[250018]: 2026-01-20 14:42:53.840 250022 DEBUG nova.network.os_vif_util [None req-adf85e7f-ce10-4e45-b349-2a278b3cfd53 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8d:b9:dd,bridge_name='br-int',has_traffic_filtering=True,id=f6e1030d-5508-4e83-92ce-0a723132eb45,network=Network(b36e9cab-12c6-4a09-9aab-ef2679d875ba),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf6e1030d-55') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:42:53 compute-0 nova_compute[250018]: 2026-01-20 14:42:53.841 250022 DEBUG os_vif [None req-adf85e7f-ce10-4e45-b349-2a278b3cfd53 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8d:b9:dd,bridge_name='br-int',has_traffic_filtering=True,id=f6e1030d-5508-4e83-92ce-0a723132eb45,network=Network(b36e9cab-12c6-4a09-9aab-ef2679d875ba),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf6e1030d-55') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 14:42:53 compute-0 nova_compute[250018]: 2026-01-20 14:42:53.842 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:53 compute-0 nova_compute[250018]: 2026-01-20 14:42:53.844 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:42:53 compute-0 nova_compute[250018]: 2026-01-20 14:42:53.844 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:42:53 compute-0 nova_compute[250018]: 2026-01-20 14:42:53.849 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:53 compute-0 nova_compute[250018]: 2026-01-20 14:42:53.850 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf6e1030d-55, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:42:53 compute-0 nova_compute[250018]: 2026-01-20 14:42:53.851 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf6e1030d-55, col_values=(('external_ids', {'iface-id': 'f6e1030d-5508-4e83-92ce-0a723132eb45', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8d:b9:dd', 'vm-uuid': '1951432b-2c0c-4d1b-90df-d94dcf9fc32e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:42:53 compute-0 nova_compute[250018]: 2026-01-20 14:42:53.854 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:53 compute-0 NetworkManager[48960]: <info>  [1768920173.8553] manager: (tapf6e1030d-55): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/124)
Jan 20 14:42:53 compute-0 nova_compute[250018]: 2026-01-20 14:42:53.857 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:42:53 compute-0 nova_compute[250018]: 2026-01-20 14:42:53.863 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:53 compute-0 nova_compute[250018]: 2026-01-20 14:42:53.864 250022 INFO os_vif [None req-adf85e7f-ce10-4e45-b349-2a278b3cfd53 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8d:b9:dd,bridge_name='br-int',has_traffic_filtering=True,id=f6e1030d-5508-4e83-92ce-0a723132eb45,network=Network(b36e9cab-12c6-4a09-9aab-ef2679d875ba),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf6e1030d-55')
Jan 20 14:42:53 compute-0 nova_compute[250018]: 2026-01-20 14:42:53.935 250022 DEBUG oslo_concurrency.processutils [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/62bbc690-cb71-4cdc-93e1-cdf395abbae4/disk.config 62bbc690-cb71-4cdc-93e1-cdf395abbae4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.181s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:42:53 compute-0 nova_compute[250018]: 2026-01-20 14:42:53.936 250022 INFO nova.virt.libvirt.driver [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: 62bbc690-cb71-4cdc-93e1-cdf395abbae4] Deleting local config drive /var/lib/nova/instances/62bbc690-cb71-4cdc-93e1-cdf395abbae4/disk.config because it was imported into RBD.
Jan 20 14:42:53 compute-0 kernel: tapf6e1030d-55: entered promiscuous mode
Jan 20 14:42:53 compute-0 NetworkManager[48960]: <info>  [1768920173.9470] manager: (tapf6e1030d-55): new Tun device (/org/freedesktop/NetworkManager/Devices/125)
Jan 20 14:42:53 compute-0 nova_compute[250018]: 2026-01-20 14:42:53.947 250022 DEBUG nova.network.neutron [req-749ba855-7718-4bb5-9649-12251d0a2e9c req-c7074c69-789c-4f35-8b7e-bc92a33461a3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 62bbc690-cb71-4cdc-93e1-cdf395abbae4] Updated VIF entry in instance network info cache for port d6c7f812-24ee-4acf-9603-942a2c4658f7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 14:42:53 compute-0 nova_compute[250018]: 2026-01-20 14:42:53.948 250022 DEBUG nova.network.neutron [req-749ba855-7718-4bb5-9649-12251d0a2e9c req-c7074c69-789c-4f35-8b7e-bc92a33461a3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 62bbc690-cb71-4cdc-93e1-cdf395abbae4] Updating instance_info_cache with network_info: [{"id": "d6c7f812-24ee-4acf-9603-942a2c4658f7", "address": "fa:16:3e:6d:65:2c", "network": {"id": "3e260ad9-fcf1-432b-b71b-b943d4249b65", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1425882684-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3e022a35f604df2bbc885e498b1e206", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6c7f812-24", "ovs_interfaceid": "d6c7f812-24ee-4acf-9603-942a2c4658f7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:42:53 compute-0 nova_compute[250018]: 2026-01-20 14:42:53.950 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:53 compute-0 ovn_controller[148666]: 2026-01-20T14:42:53Z|00240|binding|INFO|Claiming lport f6e1030d-5508-4e83-92ce-0a723132eb45 for this chassis.
Jan 20 14:42:53 compute-0 ovn_controller[148666]: 2026-01-20T14:42:53Z|00241|binding|INFO|f6e1030d-5508-4e83-92ce-0a723132eb45: Claiming fa:16:3e:8d:b9:dd 10.100.0.4
Jan 20 14:42:53 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:53.958 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8d:b9:dd 10.100.0.4'], port_security=['fa:16:3e:8d:b9:dd 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '1951432b-2c0c-4d1b-90df-d94dcf9fc32e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b36e9cab-12c6-4a09-9aab-ef2679d875ba', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4b95747114ab4043b93a260387199c91', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'f18b0222-78a5-4c37-8065-772dbe5c63e1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=80e2aa5b-ecb8-4e93-992f-baaef718dd34, chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=f6e1030d-5508-4e83-92ce-0a723132eb45) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:42:53 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:53.959 160071 INFO neutron.agent.ovn.metadata.agent [-] Port f6e1030d-5508-4e83-92ce-0a723132eb45 in datapath b36e9cab-12c6-4a09-9aab-ef2679d875ba bound to our chassis
Jan 20 14:42:53 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:53.960 160071 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b36e9cab-12c6-4a09-9aab-ef2679d875ba
Jan 20 14:42:53 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:53.974 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[5547c9d7-f553-49f2-8f38-044ecba57b87]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:42:53 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:53.974 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb36e9cab-11 in ovnmeta-b36e9cab-12c6-4a09-9aab-ef2679d875ba namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 14:42:53 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:53.977 257604 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb36e9cab-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 14:42:53 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:53.978 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[682b7f60-2d31-4e6e-8cbb-4ce073d05b03]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:42:53 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:53.979 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[2285666c-9119-4957-a36a-16a4d93dc6a5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:42:53 compute-0 ovn_controller[148666]: 2026-01-20T14:42:53Z|00242|binding|INFO|Setting lport f6e1030d-5508-4e83-92ce-0a723132eb45 ovn-installed in OVS
Jan 20 14:42:53 compute-0 ovn_controller[148666]: 2026-01-20T14:42:53Z|00243|binding|INFO|Setting lport f6e1030d-5508-4e83-92ce-0a723132eb45 up in Southbound
Jan 20 14:42:53 compute-0 nova_compute[250018]: 2026-01-20 14:42:53.982 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:53 compute-0 nova_compute[250018]: 2026-01-20 14:42:53.990 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:53 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:53.991 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[d9f70ee2-886f-4e9d-862c-bbb590774f38]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:42:53 compute-0 nova_compute[250018]: 2026-01-20 14:42:53.992 250022 DEBUG oslo_concurrency.lockutils [req-749ba855-7718-4bb5-9649-12251d0a2e9c req-c7074c69-789c-4f35-8b7e-bc92a33461a3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-62bbc690-cb71-4cdc-93e1-cdf395abbae4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:42:53 compute-0 podman[296987]: 2026-01-20 14:42:53.998206622 +0000 UTC m=+0.069535525 container create c9455a986056056b9b51890e9b59ab984eeffc48435888f28614a6dabc4a1663 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_mclaren, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 14:42:54 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:42:54 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:42:54 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:42:53.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:42:54 compute-0 systemd-udevd[297018]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:42:54 compute-0 systemd-machined[216401]: New machine qemu-33-instance-0000004a.
Jan 20 14:42:54 compute-0 NetworkManager[48960]: <info>  [1768920174.0146] device (tapf6e1030d-55): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 14:42:54 compute-0 systemd[1]: Started Virtual Machine qemu-33-instance-0000004a.
Jan 20 14:42:54 compute-0 NetworkManager[48960]: <info>  [1768920174.0156] device (tapf6e1030d-55): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 14:42:54 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:54.016 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[ff6609d8-5037-4486-b3cb-73a9bbca9caa]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:42:54 compute-0 kernel: tapd6c7f812-24: entered promiscuous mode
Jan 20 14:42:54 compute-0 NetworkManager[48960]: <info>  [1768920174.0280] manager: (tapd6c7f812-24): new Tun device (/org/freedesktop/NetworkManager/Devices/126)
Jan 20 14:42:54 compute-0 ovn_controller[148666]: 2026-01-20T14:42:54Z|00244|binding|INFO|Claiming lport d6c7f812-24ee-4acf-9603-942a2c4658f7 for this chassis.
Jan 20 14:42:54 compute-0 ovn_controller[148666]: 2026-01-20T14:42:54Z|00245|binding|INFO|d6c7f812-24ee-4acf-9603-942a2c4658f7: Claiming fa:16:3e:6d:65:2c 10.100.0.10
Jan 20 14:42:54 compute-0 nova_compute[250018]: 2026-01-20 14:42:54.030 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:54 compute-0 ovn_controller[148666]: 2026-01-20T14:42:54Z|00246|binding|INFO|Setting lport d6c7f812-24ee-4acf-9603-942a2c4658f7 ovn-installed in OVS
Jan 20 14:42:54 compute-0 ovn_controller[148666]: 2026-01-20T14:42:54Z|00247|binding|INFO|Setting lport d6c7f812-24ee-4acf-9603-942a2c4658f7 up in Southbound
Jan 20 14:42:54 compute-0 nova_compute[250018]: 2026-01-20 14:42:54.047 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:54 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:54.047 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6d:65:2c 10.100.0.10'], port_security=['fa:16:3e:6d:65:2c 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '62bbc690-cb71-4cdc-93e1-cdf395abbae4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3e260ad9-fcf1-432b-b71b-b943d4249b65', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a3e022a35f604df2bbc885e498b1e206', 'neutron:revision_number': '2', 'neutron:security_group_ids': '885819b7-5060-4b73-ad54-3f31f821195c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=89e607b1-9e39-47f0-8180-8aaef3a2a0e9, chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=d6c7f812-24ee-4acf-9603-942a2c4658f7) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:42:54 compute-0 systemd[1]: Started libpod-conmon-c9455a986056056b9b51890e9b59ab984eeffc48435888f28614a6dabc4a1663.scope.
Jan 20 14:42:54 compute-0 NetworkManager[48960]: <info>  [1768920174.0507] device (tapd6c7f812-24): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 14:42:54 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:54.049 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[a21c1c40-ce1e-4921-a98a-0ceabd088a75]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:42:54 compute-0 NetworkManager[48960]: <info>  [1768920174.0518] device (tapd6c7f812-24): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 14:42:54 compute-0 nova_compute[250018]: 2026-01-20 14:42:54.053 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:54 compute-0 NetworkManager[48960]: <info>  [1768920174.0605] manager: (tapb36e9cab-10): new Veth device (/org/freedesktop/NetworkManager/Devices/127)
Jan 20 14:42:54 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:54.061 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[1e1e751a-c35e-42e7-b98f-b152d0b03ca6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:42:54 compute-0 podman[296987]: 2026-01-20 14:42:53.973810551 +0000 UTC m=+0.045139504 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:42:54 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:42:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea30d5f00eaa3b1307b5667ad6b043c8a1bf5872ebbb1ca25802dbd4ac29a266/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:42:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea30d5f00eaa3b1307b5667ad6b043c8a1bf5872ebbb1ca25802dbd4ac29a266/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:42:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea30d5f00eaa3b1307b5667ad6b043c8a1bf5872ebbb1ca25802dbd4ac29a266/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:42:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea30d5f00eaa3b1307b5667ad6b043c8a1bf5872ebbb1ca25802dbd4ac29a266/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:42:54 compute-0 systemd-machined[216401]: New machine qemu-34-instance-0000004e.
Jan 20 14:42:54 compute-0 systemd[1]: Started Virtual Machine qemu-34-instance-0000004e.
Jan 20 14:42:54 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:54.095 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[a8d33d3c-5218-492d-917d-e0b57373fd70]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:42:54 compute-0 podman[296987]: 2026-01-20 14:42:54.097157753 +0000 UTC m=+0.168486666 container init c9455a986056056b9b51890e9b59ab984eeffc48435888f28614a6dabc4a1663 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_mclaren, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:42:54 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:54.100 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[ff4b9b78-ef61-4e92-a02c-07a321a360ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:42:54 compute-0 podman[296987]: 2026-01-20 14:42:54.104921644 +0000 UTC m=+0.176250547 container start c9455a986056056b9b51890e9b59ab984eeffc48435888f28614a6dabc4a1663 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_mclaren, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 20 14:42:54 compute-0 podman[296987]: 2026-01-20 14:42:54.108686275 +0000 UTC m=+0.180015198 container attach c9455a986056056b9b51890e9b59ab984eeffc48435888f28614a6dabc4a1663 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_mclaren, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:42:54 compute-0 NetworkManager[48960]: <info>  [1768920174.1309] device (tapb36e9cab-10): carrier: link connected
Jan 20 14:42:54 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:54.139 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[0a7f44dc-7a5c-4263-9d6b-b67d60d012b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:42:54 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:54.157 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[4f83a9b6-7d70-4c8e-8396-5742827d26db]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb36e9cab-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:08:c2:52'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 78], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 611485, 'reachable_time': 21739, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 297070, 'error': None, 'target': 'ovnmeta-b36e9cab-12c6-4a09-9aab-ef2679d875ba', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:42:54 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:54.180 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[0054b1ff-76ab-4252-8a10-cb0b373183a8]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe08:c252'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 611485, 'tstamp': 611485}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 297071, 'error': None, 'target': 'ovnmeta-b36e9cab-12c6-4a09-9aab-ef2679d875ba', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:42:54 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:54.204 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[0eb3e4ef-5971-4c04-92bc-18f5e618c971]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb36e9cab-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:08:c2:52'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 78], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 611485, 'reachable_time': 21739, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 297073, 'error': None, 'target': 'ovnmeta-b36e9cab-12c6-4a09-9aab-ef2679d875ba', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:42:54 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:54.251 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[5d0932b7-0636-4647-aba5-f47d8b4de02f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:42:54 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:54.313 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[ecc663f2-45ba-4f49-b3e7-2d68bddbf372]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:42:54 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:54.315 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb36e9cab-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:42:54 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:54.315 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:42:54 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:54.316 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb36e9cab-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:42:54 compute-0 kernel: tapb36e9cab-10: entered promiscuous mode
Jan 20 14:42:54 compute-0 nova_compute[250018]: 2026-01-20 14:42:54.317 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:54 compute-0 NetworkManager[48960]: <info>  [1768920174.3185] manager: (tapb36e9cab-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/128)
Jan 20 14:42:54 compute-0 nova_compute[250018]: 2026-01-20 14:42:54.319 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:54 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:54.323 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb36e9cab-10, col_values=(('external_ids', {'iface-id': '5dcae274-b8f4-440a-a3eb-5c1a5a044346'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:42:54 compute-0 nova_compute[250018]: 2026-01-20 14:42:54.324 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:54 compute-0 ovn_controller[148666]: 2026-01-20T14:42:54Z|00248|binding|INFO|Releasing lport 5dcae274-b8f4-440a-a3eb-5c1a5a044346 from this chassis (sb_readonly=0)
Jan 20 14:42:54 compute-0 nova_compute[250018]: 2026-01-20 14:42:54.325 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:54 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:54.327 160071 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b36e9cab-12c6-4a09-9aab-ef2679d875ba.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b36e9cab-12c6-4a09-9aab-ef2679d875ba.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 14:42:54 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:54.328 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[7438398e-71ac-4af4-bac0-fd4159f6d116]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:42:54 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:54.328 160071 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 14:42:54 compute-0 ovn_metadata_agent[160049]: global
Jan 20 14:42:54 compute-0 ovn_metadata_agent[160049]:     log         /dev/log local0 debug
Jan 20 14:42:54 compute-0 ovn_metadata_agent[160049]:     log-tag     haproxy-metadata-proxy-b36e9cab-12c6-4a09-9aab-ef2679d875ba
Jan 20 14:42:54 compute-0 ovn_metadata_agent[160049]:     user        root
Jan 20 14:42:54 compute-0 ovn_metadata_agent[160049]:     group       root
Jan 20 14:42:54 compute-0 ovn_metadata_agent[160049]:     maxconn     1024
Jan 20 14:42:54 compute-0 ovn_metadata_agent[160049]:     pidfile     /var/lib/neutron/external/pids/b36e9cab-12c6-4a09-9aab-ef2679d875ba.pid.haproxy
Jan 20 14:42:54 compute-0 ovn_metadata_agent[160049]:     daemon
Jan 20 14:42:54 compute-0 ovn_metadata_agent[160049]: 
Jan 20 14:42:54 compute-0 ovn_metadata_agent[160049]: defaults
Jan 20 14:42:54 compute-0 ovn_metadata_agent[160049]:     log global
Jan 20 14:42:54 compute-0 ovn_metadata_agent[160049]:     mode http
Jan 20 14:42:54 compute-0 ovn_metadata_agent[160049]:     option httplog
Jan 20 14:42:54 compute-0 ovn_metadata_agent[160049]:     option dontlognull
Jan 20 14:42:54 compute-0 ovn_metadata_agent[160049]:     option http-server-close
Jan 20 14:42:54 compute-0 ovn_metadata_agent[160049]:     option forwardfor
Jan 20 14:42:54 compute-0 ovn_metadata_agent[160049]:     retries                 3
Jan 20 14:42:54 compute-0 ovn_metadata_agent[160049]:     timeout http-request    30s
Jan 20 14:42:54 compute-0 ovn_metadata_agent[160049]:     timeout connect         30s
Jan 20 14:42:54 compute-0 ovn_metadata_agent[160049]:     timeout client          32s
Jan 20 14:42:54 compute-0 ovn_metadata_agent[160049]:     timeout server          32s
Jan 20 14:42:54 compute-0 ovn_metadata_agent[160049]:     timeout http-keep-alive 30s
Jan 20 14:42:54 compute-0 ovn_metadata_agent[160049]: 
Jan 20 14:42:54 compute-0 ovn_metadata_agent[160049]: 
Jan 20 14:42:54 compute-0 ovn_metadata_agent[160049]: listen listener
Jan 20 14:42:54 compute-0 ovn_metadata_agent[160049]:     bind 169.254.169.254:80
Jan 20 14:42:54 compute-0 ovn_metadata_agent[160049]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 14:42:54 compute-0 ovn_metadata_agent[160049]:     http-request add-header X-OVN-Network-ID b36e9cab-12c6-4a09-9aab-ef2679d875ba
Jan 20 14:42:54 compute-0 ovn_metadata_agent[160049]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 14:42:54 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:54.330 160071 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b36e9cab-12c6-4a09-9aab-ef2679d875ba', 'env', 'PROCESS_TAG=haproxy-b36e9cab-12c6-4a09-9aab-ef2679d875ba', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b36e9cab-12c6-4a09-9aab-ef2679d875ba.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 14:42:54 compute-0 nova_compute[250018]: 2026-01-20 14:42:54.339 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:54 compute-0 nova_compute[250018]: 2026-01-20 14:42:54.516 250022 DEBUG nova.compute.manager [req-188c0439-8e44-4e6c-88f9-1ee25b598f45 req-0dadf345-a793-4ead-bd72-68b6af2ba75b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 62bbc690-cb71-4cdc-93e1-cdf395abbae4] Received event network-vif-plugged-d6c7f812-24ee-4acf-9603-942a2c4658f7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:42:54 compute-0 nova_compute[250018]: 2026-01-20 14:42:54.516 250022 DEBUG oslo_concurrency.lockutils [req-188c0439-8e44-4e6c-88f9-1ee25b598f45 req-0dadf345-a793-4ead-bd72-68b6af2ba75b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "62bbc690-cb71-4cdc-93e1-cdf395abbae4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:42:54 compute-0 nova_compute[250018]: 2026-01-20 14:42:54.516 250022 DEBUG oslo_concurrency.lockutils [req-188c0439-8e44-4e6c-88f9-1ee25b598f45 req-0dadf345-a793-4ead-bd72-68b6af2ba75b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "62bbc690-cb71-4cdc-93e1-cdf395abbae4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:42:54 compute-0 nova_compute[250018]: 2026-01-20 14:42:54.517 250022 DEBUG oslo_concurrency.lockutils [req-188c0439-8e44-4e6c-88f9-1ee25b598f45 req-0dadf345-a793-4ead-bd72-68b6af2ba75b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "62bbc690-cb71-4cdc-93e1-cdf395abbae4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:42:54 compute-0 nova_compute[250018]: 2026-01-20 14:42:54.517 250022 DEBUG nova.compute.manager [req-188c0439-8e44-4e6c-88f9-1ee25b598f45 req-0dadf345-a793-4ead-bd72-68b6af2ba75b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 62bbc690-cb71-4cdc-93e1-cdf395abbae4] Processing event network-vif-plugged-d6c7f812-24ee-4acf-9603-942a2c4658f7 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 20 14:42:54 compute-0 nova_compute[250018]: 2026-01-20 14:42:54.582 250022 DEBUG nova.compute.manager [req-132f57c5-217d-4313-8047-b0cb12d0985a req-625c6434-3b6f-43e0-8d36-954f9ea56575 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 1951432b-2c0c-4d1b-90df-d94dcf9fc32e] Received event network-vif-plugged-f6e1030d-5508-4e83-92ce-0a723132eb45 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:42:54 compute-0 nova_compute[250018]: 2026-01-20 14:42:54.582 250022 DEBUG oslo_concurrency.lockutils [req-132f57c5-217d-4313-8047-b0cb12d0985a req-625c6434-3b6f-43e0-8d36-954f9ea56575 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "1951432b-2c0c-4d1b-90df-d94dcf9fc32e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:42:54 compute-0 nova_compute[250018]: 2026-01-20 14:42:54.583 250022 DEBUG oslo_concurrency.lockutils [req-132f57c5-217d-4313-8047-b0cb12d0985a req-625c6434-3b6f-43e0-8d36-954f9ea56575 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "1951432b-2c0c-4d1b-90df-d94dcf9fc32e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:42:54 compute-0 nova_compute[250018]: 2026-01-20 14:42:54.583 250022 DEBUG oslo_concurrency.lockutils [req-132f57c5-217d-4313-8047-b0cb12d0985a req-625c6434-3b6f-43e0-8d36-954f9ea56575 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "1951432b-2c0c-4d1b-90df-d94dcf9fc32e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:42:54 compute-0 nova_compute[250018]: 2026-01-20 14:42:54.584 250022 DEBUG nova.compute.manager [req-132f57c5-217d-4313-8047-b0cb12d0985a req-625c6434-3b6f-43e0-8d36-954f9ea56575 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 1951432b-2c0c-4d1b-90df-d94dcf9fc32e] No waiting events found dispatching network-vif-plugged-f6e1030d-5508-4e83-92ce-0a723132eb45 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:42:54 compute-0 nova_compute[250018]: 2026-01-20 14:42:54.584 250022 WARNING nova.compute.manager [req-132f57c5-217d-4313-8047-b0cb12d0985a req-625c6434-3b6f-43e0-8d36-954f9ea56575 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 1951432b-2c0c-4d1b-90df-d94dcf9fc32e] Received unexpected event network-vif-plugged-f6e1030d-5508-4e83-92ce-0a723132eb45 for instance with vm_state stopped and task_state powering-on.
Jan 20 14:42:54 compute-0 nova_compute[250018]: 2026-01-20 14:42:54.649 250022 DEBUG nova.virt.libvirt.host [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Removed pending event for 1951432b-2c0c-4d1b-90df-d94dcf9fc32e due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Jan 20 14:42:54 compute-0 nova_compute[250018]: 2026-01-20 14:42:54.650 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768920174.6487873, 1951432b-2c0c-4d1b-90df-d94dcf9fc32e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:42:54 compute-0 nova_compute[250018]: 2026-01-20 14:42:54.650 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 1951432b-2c0c-4d1b-90df-d94dcf9fc32e] VM Resumed (Lifecycle Event)
Jan 20 14:42:54 compute-0 nova_compute[250018]: 2026-01-20 14:42:54.652 250022 DEBUG nova.compute.manager [None req-adf85e7f-ce10-4e45-b349-2a278b3cfd53 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] [instance: 1951432b-2c0c-4d1b-90df-d94dcf9fc32e] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 14:42:54 compute-0 nova_compute[250018]: 2026-01-20 14:42:54.655 250022 INFO nova.virt.libvirt.driver [-] [instance: 1951432b-2c0c-4d1b-90df-d94dcf9fc32e] Instance rebooted successfully.
Jan 20 14:42:54 compute-0 nova_compute[250018]: 2026-01-20 14:42:54.656 250022 DEBUG nova.compute.manager [None req-adf85e7f-ce10-4e45-b349-2a278b3cfd53 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] [instance: 1951432b-2c0c-4d1b-90df-d94dcf9fc32e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:42:54 compute-0 nova_compute[250018]: 2026-01-20 14:42:54.682 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 1951432b-2c0c-4d1b-90df-d94dcf9fc32e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:42:54 compute-0 nova_compute[250018]: 2026-01-20 14:42:54.685 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 1951432b-2c0c-4d1b-90df-d94dcf9fc32e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:42:54 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:42:54 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:42:54 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:42:54.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:42:54 compute-0 podman[297182]: 2026-01-20 14:42:54.722921271 +0000 UTC m=+0.047912789 container create 8df81f7c6031850bd65b855da51c408c39551b94c180a66807b33fe518490460 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b36e9cab-12c6-4a09-9aab-ef2679d875ba, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true)
Jan 20 14:42:54 compute-0 nova_compute[250018]: 2026-01-20 14:42:54.746 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 1951432b-2c0c-4d1b-90df-d94dcf9fc32e] During sync_power_state the instance has a pending task (powering-on). Skip.
Jan 20 14:42:54 compute-0 nova_compute[250018]: 2026-01-20 14:42:54.747 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768920174.6524277, 1951432b-2c0c-4d1b-90df-d94dcf9fc32e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:42:54 compute-0 nova_compute[250018]: 2026-01-20 14:42:54.747 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 1951432b-2c0c-4d1b-90df-d94dcf9fc32e] VM Started (Lifecycle Event)
Jan 20 14:42:54 compute-0 systemd[1]: Started libpod-conmon-8df81f7c6031850bd65b855da51c408c39551b94c180a66807b33fe518490460.scope.
Jan 20 14:42:54 compute-0 nova_compute[250018]: 2026-01-20 14:42:54.770 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 1951432b-2c0c-4d1b-90df-d94dcf9fc32e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:42:54 compute-0 nova_compute[250018]: 2026-01-20 14:42:54.773 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 1951432b-2c0c-4d1b-90df-d94dcf9fc32e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:42:54 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:42:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63fde834a0badb2a04c6fe9d3921fb2afc420bda1e9546d06901fab47a52f211/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 14:42:54 compute-0 podman[297182]: 2026-01-20 14:42:54.699964929 +0000 UTC m=+0.024956467 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 14:42:54 compute-0 podman[297182]: 2026-01-20 14:42:54.805887559 +0000 UTC m=+0.130879077 container init 8df81f7c6031850bd65b855da51c408c39551b94c180a66807b33fe518490460 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b36e9cab-12c6-4a09-9aab-ef2679d875ba, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 20 14:42:54 compute-0 podman[297182]: 2026-01-20 14:42:54.812021496 +0000 UTC m=+0.137013014 container start 8df81f7c6031850bd65b855da51c408c39551b94c180a66807b33fe518490460 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b36e9cab-12c6-4a09-9aab-ef2679d875ba, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 20 14:42:54 compute-0 neutron-haproxy-ovnmeta-b36e9cab-12c6-4a09-9aab-ef2679d875ba[297198]: [NOTICE]   (297206) : New worker (297211) forked
Jan 20 14:42:54 compute-0 neutron-haproxy-ovnmeta-b36e9cab-12c6-4a09-9aab-ef2679d875ba[297198]: [NOTICE]   (297206) : Loading success.
Jan 20 14:42:54 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:54.869 160071 INFO neutron.agent.ovn.metadata.agent [-] Port d6c7f812-24ee-4acf-9603-942a2c4658f7 in datapath 3e260ad9-fcf1-432b-b71b-b943d4249b65 unbound from our chassis
Jan 20 14:42:54 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:54.871 160071 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3e260ad9-fcf1-432b-b71b-b943d4249b65
Jan 20 14:42:54 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:54.882 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[fd638624-ad8f-46ed-a066-085502341e70]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:42:54 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:54.883 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3e260ad9-f1 in ovnmeta-3e260ad9-fcf1-432b-b71b-b943d4249b65 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 14:42:54 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:54.885 257604 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3e260ad9-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 14:42:54 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:54.885 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[0bf6c670-55f0-4772-84bc-f8787a29ce41]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:42:54 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:54.886 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[519db581-3119-4cc5-b4c3-777150e4fee5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:42:54 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:54.898 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[14af2fc6-3487-4986-ad7a-28fbd3d84e93]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:42:54 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:54.919 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[daee36e5-e9dd-4d7d-8d98-ea6256001a5a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:42:54 compute-0 practical_mclaren[297032]: {
Jan 20 14:42:54 compute-0 practical_mclaren[297032]:     "afc3740a-ccee-46f8-b343-22648cc89376": {
Jan 20 14:42:54 compute-0 practical_mclaren[297032]:         "ceph_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 14:42:54 compute-0 practical_mclaren[297032]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 20 14:42:54 compute-0 practical_mclaren[297032]:         "osd_id": 0,
Jan 20 14:42:54 compute-0 practical_mclaren[297032]:         "osd_uuid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 14:42:54 compute-0 practical_mclaren[297032]:         "type": "bluestore"
Jan 20 14:42:54 compute-0 practical_mclaren[297032]:     }
Jan 20 14:42:54 compute-0 practical_mclaren[297032]: }
Jan 20 14:42:54 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:54.945 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[b95291ec-70fc-499e-a3f3-73e8239a72c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:42:54 compute-0 systemd[1]: libpod-c9455a986056056b9b51890e9b59ab984eeffc48435888f28614a6dabc4a1663.scope: Deactivated successfully.
Jan 20 14:42:54 compute-0 conmon[297032]: conmon c9455a986056056b9b51 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c9455a986056056b9b51890e9b59ab984eeffc48435888f28614a6dabc4a1663.scope/container/memory.events
Jan 20 14:42:54 compute-0 podman[296987]: 2026-01-20 14:42:54.948260457 +0000 UTC m=+1.019589370 container died c9455a986056056b9b51890e9b59ab984eeffc48435888f28614a6dabc4a1663 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_mclaren, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Jan 20 14:42:54 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:54.951 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[42d99a64-c6a1-4358-b752-029fde2e3183]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:42:54 compute-0 NetworkManager[48960]: <info>  [1768920174.9521] manager: (tap3e260ad9-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/129)
Jan 20 14:42:54 compute-0 systemd-udevd[297044]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:42:54 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:54.985 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[cda7f119-9966-4a00-b2e3-bce4e8c57a71]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:42:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-ea30d5f00eaa3b1307b5667ad6b043c8a1bf5872ebbb1ca25802dbd4ac29a266-merged.mount: Deactivated successfully.
Jan 20 14:42:54 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:54.990 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[69c8367d-886b-4b90-a64c-da0b50177190]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:42:55 compute-0 podman[296987]: 2026-01-20 14:42:55.005307843 +0000 UTC m=+1.076636746 container remove c9455a986056056b9b51890e9b59ab984eeffc48435888f28614a6dabc4a1663 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_mclaren, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 20 14:42:55 compute-0 systemd[1]: libpod-conmon-c9455a986056056b9b51890e9b59ab984eeffc48435888f28614a6dabc4a1663.scope: Deactivated successfully.
Jan 20 14:42:55 compute-0 NetworkManager[48960]: <info>  [1768920175.0228] device (tap3e260ad9-f0): carrier: link connected
Jan 20 14:42:55 compute-0 sudo[296796]: pam_unix(sudo:session): session closed for user root
Jan 20 14:42:55 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:55.039 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[d653e510-88d9-484e-a99c-a6c8b64ebb90]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:42:55 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 14:42:55 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:55.058 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[a3ff5d3c-acb1-4501-8dab-67e602bb24de]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3e260ad9-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:88:13:4a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 79], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 611574, 'reachable_time': 25361, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 297251, 'error': None, 'target': 'ovnmeta-3e260ad9-fcf1-432b-b71b-b943d4249b65', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:42:55 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:55.078 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[5bc0ddd8-0b7e-436c-85e9-93ebfd953c4a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe88:134a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 611574, 'tstamp': 611574}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 297252, 'error': None, 'target': 'ovnmeta-3e260ad9-fcf1-432b-b71b-b943d4249b65', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:42:55 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:55.094 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[a951773c-f027-432a-bec0-845f4f34080c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3e260ad9-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:88:13:4a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 79], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 611574, 'reachable_time': 25361, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 297258, 'error': None, 'target': 'ovnmeta-3e260ad9-fcf1-432b-b71b-b943d4249b65', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:42:55 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:55.122 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[41367830-d178-4979-b056-68d5447f6cbc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:42:55 compute-0 nova_compute[250018]: 2026-01-20 14:42:55.162 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768920175.1618369, 62bbc690-cb71-4cdc-93e1-cdf395abbae4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:42:55 compute-0 nova_compute[250018]: 2026-01-20 14:42:55.162 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 62bbc690-cb71-4cdc-93e1-cdf395abbae4] VM Started (Lifecycle Event)
Jan 20 14:42:55 compute-0 nova_compute[250018]: 2026-01-20 14:42:55.164 250022 DEBUG nova.compute.manager [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: 62bbc690-cb71-4cdc-93e1-cdf395abbae4] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 14:42:55 compute-0 nova_compute[250018]: 2026-01-20 14:42:55.167 250022 DEBUG nova.virt.libvirt.driver [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: 62bbc690-cb71-4cdc-93e1-cdf395abbae4] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 14:42:55 compute-0 nova_compute[250018]: 2026-01-20 14:42:55.174 250022 INFO nova.virt.libvirt.driver [-] [instance: 62bbc690-cb71-4cdc-93e1-cdf395abbae4] Instance spawned successfully.
Jan 20 14:42:55 compute-0 nova_compute[250018]: 2026-01-20 14:42:55.175 250022 DEBUG nova.virt.libvirt.driver [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: 62bbc690-cb71-4cdc-93e1-cdf395abbae4] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 14:42:55 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:55.183 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[1b9f3431-2558-4bb6-83d6-9abad38794e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:42:55 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:55.185 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3e260ad9-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:42:55 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:55.185 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:42:55 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:55.185 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3e260ad9-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:42:55 compute-0 NetworkManager[48960]: <info>  [1768920175.1877] manager: (tap3e260ad9-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/130)
Jan 20 14:42:55 compute-0 kernel: tap3e260ad9-f0: entered promiscuous mode
Jan 20 14:42:55 compute-0 nova_compute[250018]: 2026-01-20 14:42:55.189 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:55 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:55.189 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3e260ad9-f0, col_values=(('external_ids', {'iface-id': '2b7c295d-f074-4cfb-aca0-08946126ddbc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:42:55 compute-0 ovn_controller[148666]: 2026-01-20T14:42:55Z|00249|binding|INFO|Releasing lport 2b7c295d-f074-4cfb-aca0-08946126ddbc from this chassis (sb_readonly=0)
Jan 20 14:42:55 compute-0 nova_compute[250018]: 2026-01-20 14:42:55.205 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:55 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:55.206 160071 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3e260ad9-fcf1-432b-b71b-b943d4249b65.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3e260ad9-fcf1-432b-b71b-b943d4249b65.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 14:42:55 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:55.207 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[c193d47a-0f48-4ffd-8b51-e9ee9c312b5d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:42:55 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:55.208 160071 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 14:42:55 compute-0 ovn_metadata_agent[160049]: global
Jan 20 14:42:55 compute-0 ovn_metadata_agent[160049]:     log         /dev/log local0 debug
Jan 20 14:42:55 compute-0 ovn_metadata_agent[160049]:     log-tag     haproxy-metadata-proxy-3e260ad9-fcf1-432b-b71b-b943d4249b65
Jan 20 14:42:55 compute-0 ovn_metadata_agent[160049]:     user        root
Jan 20 14:42:55 compute-0 ovn_metadata_agent[160049]:     group       root
Jan 20 14:42:55 compute-0 ovn_metadata_agent[160049]:     maxconn     1024
Jan 20 14:42:55 compute-0 ovn_metadata_agent[160049]:     pidfile     /var/lib/neutron/external/pids/3e260ad9-fcf1-432b-b71b-b943d4249b65.pid.haproxy
Jan 20 14:42:55 compute-0 ovn_metadata_agent[160049]:     daemon
Jan 20 14:42:55 compute-0 ovn_metadata_agent[160049]: 
Jan 20 14:42:55 compute-0 ovn_metadata_agent[160049]: defaults
Jan 20 14:42:55 compute-0 ovn_metadata_agent[160049]:     log global
Jan 20 14:42:55 compute-0 ovn_metadata_agent[160049]:     mode http
Jan 20 14:42:55 compute-0 ovn_metadata_agent[160049]:     option httplog
Jan 20 14:42:55 compute-0 ovn_metadata_agent[160049]:     option dontlognull
Jan 20 14:42:55 compute-0 ovn_metadata_agent[160049]:     option http-server-close
Jan 20 14:42:55 compute-0 ovn_metadata_agent[160049]:     option forwardfor
Jan 20 14:42:55 compute-0 ovn_metadata_agent[160049]:     retries                 3
Jan 20 14:42:55 compute-0 ovn_metadata_agent[160049]:     timeout http-request    30s
Jan 20 14:42:55 compute-0 ovn_metadata_agent[160049]:     timeout connect         30s
Jan 20 14:42:55 compute-0 ovn_metadata_agent[160049]:     timeout client          32s
Jan 20 14:42:55 compute-0 ovn_metadata_agent[160049]:     timeout server          32s
Jan 20 14:42:55 compute-0 ovn_metadata_agent[160049]:     timeout http-keep-alive 30s
Jan 20 14:42:55 compute-0 ovn_metadata_agent[160049]: 
Jan 20 14:42:55 compute-0 ovn_metadata_agent[160049]: 
Jan 20 14:42:55 compute-0 ovn_metadata_agent[160049]: listen listener
Jan 20 14:42:55 compute-0 ovn_metadata_agent[160049]:     bind 169.254.169.254:80
Jan 20 14:42:55 compute-0 ovn_metadata_agent[160049]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 14:42:55 compute-0 ovn_metadata_agent[160049]:     http-request add-header X-OVN-Network-ID 3e260ad9-fcf1-432b-b71b-b943d4249b65
Jan 20 14:42:55 compute-0 ovn_metadata_agent[160049]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 14:42:55 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:55.209 160071 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-3e260ad9-fcf1-432b-b71b-b943d4249b65', 'env', 'PROCESS_TAG=haproxy-3e260ad9-fcf1-432b-b71b-b943d4249b65', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3e260ad9-fcf1-432b-b71b-b943d4249b65.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 14:42:55 compute-0 nova_compute[250018]: 2026-01-20 14:42:55.333 250022 DEBUG nova.virt.libvirt.driver [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: 62bbc690-cb71-4cdc-93e1-cdf395abbae4] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:42:55 compute-0 nova_compute[250018]: 2026-01-20 14:42:55.337 250022 DEBUG nova.virt.libvirt.driver [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: 62bbc690-cb71-4cdc-93e1-cdf395abbae4] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:42:55 compute-0 nova_compute[250018]: 2026-01-20 14:42:55.338 250022 DEBUG nova.virt.libvirt.driver [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: 62bbc690-cb71-4cdc-93e1-cdf395abbae4] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:42:55 compute-0 nova_compute[250018]: 2026-01-20 14:42:55.338 250022 DEBUG nova.virt.libvirt.driver [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: 62bbc690-cb71-4cdc-93e1-cdf395abbae4] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:42:55 compute-0 nova_compute[250018]: 2026-01-20 14:42:55.339 250022 DEBUG nova.virt.libvirt.driver [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: 62bbc690-cb71-4cdc-93e1-cdf395abbae4] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:42:55 compute-0 nova_compute[250018]: 2026-01-20 14:42:55.339 250022 DEBUG nova.virt.libvirt.driver [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: 62bbc690-cb71-4cdc-93e1-cdf395abbae4] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:42:55 compute-0 nova_compute[250018]: 2026-01-20 14:42:55.360 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 62bbc690-cb71-4cdc-93e1-cdf395abbae4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:42:55 compute-0 nova_compute[250018]: 2026-01-20 14:42:55.362 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 62bbc690-cb71-4cdc-93e1-cdf395abbae4] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:42:55 compute-0 nova_compute[250018]: 2026-01-20 14:42:55.391 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 62bbc690-cb71-4cdc-93e1-cdf395abbae4] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:42:55 compute-0 nova_compute[250018]: 2026-01-20 14:42:55.391 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768920175.1626694, 62bbc690-cb71-4cdc-93e1-cdf395abbae4 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:42:55 compute-0 nova_compute[250018]: 2026-01-20 14:42:55.392 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 62bbc690-cb71-4cdc-93e1-cdf395abbae4] VM Paused (Lifecycle Event)
Jan 20 14:42:55 compute-0 nova_compute[250018]: 2026-01-20 14:42:55.398 250022 INFO nova.compute.manager [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: 62bbc690-cb71-4cdc-93e1-cdf395abbae4] Took 8.89 seconds to spawn the instance on the hypervisor.
Jan 20 14:42:55 compute-0 nova_compute[250018]: 2026-01-20 14:42:55.399 250022 DEBUG nova.compute.manager [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: 62bbc690-cb71-4cdc-93e1-cdf395abbae4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:42:55 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:42:55 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 14:42:55 compute-0 nova_compute[250018]: 2026-01-20 14:42:55.431 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 62bbc690-cb71-4cdc-93e1-cdf395abbae4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:42:55 compute-0 nova_compute[250018]: 2026-01-20 14:42:55.435 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768920175.1667268, 62bbc690-cb71-4cdc-93e1-cdf395abbae4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:42:55 compute-0 nova_compute[250018]: 2026-01-20 14:42:55.435 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 62bbc690-cb71-4cdc-93e1-cdf395abbae4] VM Resumed (Lifecycle Event)
Jan 20 14:42:55 compute-0 nova_compute[250018]: 2026-01-20 14:42:55.458 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 62bbc690-cb71-4cdc-93e1-cdf395abbae4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:42:55 compute-0 nova_compute[250018]: 2026-01-20 14:42:55.462 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 62bbc690-cb71-4cdc-93e1-cdf395abbae4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:42:55 compute-0 nova_compute[250018]: 2026-01-20 14:42:55.471 250022 INFO nova.compute.manager [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: 62bbc690-cb71-4cdc-93e1-cdf395abbae4] Took 9.97 seconds to build instance.
Jan 20 14:42:55 compute-0 nova_compute[250018]: 2026-01-20 14:42:55.508 250022 DEBUG oslo_concurrency.lockutils [None req-70aa2a94-0c30-4957-9bb8-59360dd40b1d aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Lock "62bbc690-cb71-4cdc-93e1-cdf395abbae4" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.087s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:42:55 compute-0 podman[297290]: 2026-01-20 14:42:55.595115626 +0000 UTC m=+0.055282510 container create 51fefaf468df0878fd44770f23f4e1b33b300fbc2a5bd2c2fa6376f38317272b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3e260ad9-fcf1-432b-b71b-b943d4249b65, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 20 14:42:55 compute-0 systemd[1]: Started libpod-conmon-51fefaf468df0878fd44770f23f4e1b33b300fbc2a5bd2c2fa6376f38317272b.scope.
Jan 20 14:42:55 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:42:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60742e7e3a42c758b2aeb765883b0d2d9fc5dd2bed53e571edaac32b26eb78da/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 14:42:55 compute-0 podman[297290]: 2026-01-20 14:42:55.568619328 +0000 UTC m=+0.028786232 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 14:42:55 compute-0 podman[297290]: 2026-01-20 14:42:55.66795056 +0000 UTC m=+0.128117454 container init 51fefaf468df0878fd44770f23f4e1b33b300fbc2a5bd2c2fa6376f38317272b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3e260ad9-fcf1-432b-b71b-b943d4249b65, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202)
Jan 20 14:42:55 compute-0 podman[297290]: 2026-01-20 14:42:55.6742704 +0000 UTC m=+0.134437284 container start 51fefaf468df0878fd44770f23f4e1b33b300fbc2a5bd2c2fa6376f38317272b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3e260ad9-fcf1-432b-b71b-b943d4249b65, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Jan 20 14:42:55 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1637: 321 pgs: 321 active+clean; 372 MiB data, 852 MiB used, 20 GiB / 21 GiB avail; 245 KiB/s rd, 5.7 MiB/s wr, 168 op/s
Jan 20 14:42:55 compute-0 ovn_controller[148666]: 2026-01-20T14:42:55Z|00250|binding|INFO|Releasing lport 5dcae274-b8f4-440a-a3eb-5c1a5a044346 from this chassis (sb_readonly=0)
Jan 20 14:42:55 compute-0 ovn_controller[148666]: 2026-01-20T14:42:55Z|00251|binding|INFO|Releasing lport 2b7c295d-f074-4cfb-aca0-08946126ddbc from this chassis (sb_readonly=0)
Jan 20 14:42:55 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:55.725 160071 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3e260ad9-fcf1-432b-b71b-b943d4249b65.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3e260ad9-fcf1-432b-b71b-b943d4249b65.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 14:42:55 compute-0 nova_compute[250018]: 2026-01-20 14:42:55.746 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:55 compute-0 neutron-haproxy-ovnmeta-3e260ad9-fcf1-432b-b71b-b943d4249b65[297305]: [NOTICE]   (297309) : New worker (297312) forked
Jan 20 14:42:55 compute-0 neutron-haproxy-ovnmeta-3e260ad9-fcf1-432b-b71b-b943d4249b65[297305]: [NOTICE]   (297309) : Loading success.
Jan 20 14:42:55 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:42:55 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 5d33498b-cc64-4103-a132-13a75305fddf does not exist
Jan 20 14:42:55 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev ba2d5eb2-33ab-49cf-a478-1257b66dde13 does not exist
Jan 20 14:42:55 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 668bb2b4-cb56-44dc-b122-ab6fd78e701e does not exist
Jan 20 14:42:55 compute-0 ovn_controller[148666]: 2026-01-20T14:42:55Z|00252|binding|INFO|Releasing lport 5dcae274-b8f4-440a-a3eb-5c1a5a044346 from this chassis (sb_readonly=0)
Jan 20 14:42:55 compute-0 ovn_controller[148666]: 2026-01-20T14:42:55Z|00253|binding|INFO|Releasing lport 2b7c295d-f074-4cfb-aca0-08946126ddbc from this chassis (sb_readonly=0)
Jan 20 14:42:55 compute-0 nova_compute[250018]: 2026-01-20 14:42:55.873 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:55 compute-0 sudo[297321]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:42:55 compute-0 sudo[297321]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:42:55 compute-0 sudo[297321]: pam_unix(sudo:session): session closed for user root
Jan 20 14:42:55 compute-0 sudo[297346]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 14:42:55 compute-0 sudo[297346]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:42:55 compute-0 sudo[297346]: pam_unix(sudo:session): session closed for user root
Jan 20 14:42:56 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:42:56 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:42:56 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:42:56.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:42:56 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:42:56 compute-0 ceph-mon[74360]: pgmap v1637: 321 pgs: 321 active+clean; 372 MiB data, 852 MiB used, 20 GiB / 21 GiB avail; 245 KiB/s rd, 5.7 MiB/s wr, 168 op/s
Jan 20 14:42:56 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:42:56 compute-0 nova_compute[250018]: 2026-01-20 14:42:56.706 250022 DEBUG nova.compute.manager [req-8ca94524-948a-4645-a611-8e75c9a950af req-ceda37e2-46df-49e2-9fcb-663510362e15 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 1951432b-2c0c-4d1b-90df-d94dcf9fc32e] Received event network-vif-plugged-f6e1030d-5508-4e83-92ce-0a723132eb45 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:42:56 compute-0 nova_compute[250018]: 2026-01-20 14:42:56.707 250022 DEBUG oslo_concurrency.lockutils [req-8ca94524-948a-4645-a611-8e75c9a950af req-ceda37e2-46df-49e2-9fcb-663510362e15 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "1951432b-2c0c-4d1b-90df-d94dcf9fc32e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:42:56 compute-0 nova_compute[250018]: 2026-01-20 14:42:56.708 250022 DEBUG oslo_concurrency.lockutils [req-8ca94524-948a-4645-a611-8e75c9a950af req-ceda37e2-46df-49e2-9fcb-663510362e15 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "1951432b-2c0c-4d1b-90df-d94dcf9fc32e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:42:56 compute-0 nova_compute[250018]: 2026-01-20 14:42:56.708 250022 DEBUG oslo_concurrency.lockutils [req-8ca94524-948a-4645-a611-8e75c9a950af req-ceda37e2-46df-49e2-9fcb-663510362e15 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "1951432b-2c0c-4d1b-90df-d94dcf9fc32e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:42:56 compute-0 nova_compute[250018]: 2026-01-20 14:42:56.708 250022 DEBUG nova.compute.manager [req-8ca94524-948a-4645-a611-8e75c9a950af req-ceda37e2-46df-49e2-9fcb-663510362e15 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 1951432b-2c0c-4d1b-90df-d94dcf9fc32e] No waiting events found dispatching network-vif-plugged-f6e1030d-5508-4e83-92ce-0a723132eb45 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:42:56 compute-0 nova_compute[250018]: 2026-01-20 14:42:56.709 250022 WARNING nova.compute.manager [req-8ca94524-948a-4645-a611-8e75c9a950af req-ceda37e2-46df-49e2-9fcb-663510362e15 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 1951432b-2c0c-4d1b-90df-d94dcf9fc32e] Received unexpected event network-vif-plugged-f6e1030d-5508-4e83-92ce-0a723132eb45 for instance with vm_state active and task_state None.
Jan 20 14:42:56 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:42:56 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:42:56 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:42:56.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:42:56 compute-0 nova_compute[250018]: 2026-01-20 14:42:56.969 250022 DEBUG nova.compute.manager [req-6a51510d-5c03-405d-b89d-086dbf64c51a req-8cea4802-5602-4275-b860-dcbe1534a5d2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 62bbc690-cb71-4cdc-93e1-cdf395abbae4] Received event network-vif-plugged-d6c7f812-24ee-4acf-9603-942a2c4658f7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:42:56 compute-0 nova_compute[250018]: 2026-01-20 14:42:56.970 250022 DEBUG oslo_concurrency.lockutils [req-6a51510d-5c03-405d-b89d-086dbf64c51a req-8cea4802-5602-4275-b860-dcbe1534a5d2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "62bbc690-cb71-4cdc-93e1-cdf395abbae4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:42:56 compute-0 nova_compute[250018]: 2026-01-20 14:42:56.970 250022 DEBUG oslo_concurrency.lockutils [req-6a51510d-5c03-405d-b89d-086dbf64c51a req-8cea4802-5602-4275-b860-dcbe1534a5d2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "62bbc690-cb71-4cdc-93e1-cdf395abbae4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:42:56 compute-0 nova_compute[250018]: 2026-01-20 14:42:56.971 250022 DEBUG oslo_concurrency.lockutils [req-6a51510d-5c03-405d-b89d-086dbf64c51a req-8cea4802-5602-4275-b860-dcbe1534a5d2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "62bbc690-cb71-4cdc-93e1-cdf395abbae4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:42:56 compute-0 nova_compute[250018]: 2026-01-20 14:42:56.971 250022 DEBUG nova.compute.manager [req-6a51510d-5c03-405d-b89d-086dbf64c51a req-8cea4802-5602-4275-b860-dcbe1534a5d2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 62bbc690-cb71-4cdc-93e1-cdf395abbae4] No waiting events found dispatching network-vif-plugged-d6c7f812-24ee-4acf-9603-942a2c4658f7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:42:56 compute-0 nova_compute[250018]: 2026-01-20 14:42:56.971 250022 WARNING nova.compute.manager [req-6a51510d-5c03-405d-b89d-086dbf64c51a req-8cea4802-5602-4275-b860-dcbe1534a5d2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 62bbc690-cb71-4cdc-93e1-cdf395abbae4] Received unexpected event network-vif-plugged-d6c7f812-24ee-4acf-9603-942a2c4658f7 for instance with vm_state active and task_state None.
Jan 20 14:42:57 compute-0 nova_compute[250018]: 2026-01-20 14:42:57.136 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:57 compute-0 nova_compute[250018]: 2026-01-20 14:42:57.298 250022 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768920162.297201, 86aa2fb7-c532-46b4-a02b-8070608dfe6b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:42:57 compute-0 nova_compute[250018]: 2026-01-20 14:42:57.299 250022 INFO nova.compute.manager [-] [instance: 86aa2fb7-c532-46b4-a02b-8070608dfe6b] VM Stopped (Lifecycle Event)
Jan 20 14:42:57 compute-0 nova_compute[250018]: 2026-01-20 14:42:57.324 250022 DEBUG nova.compute.manager [None req-8e6da879-1beb-44e8-9f97-a5b154539269 - - - - - -] [instance: 86aa2fb7-c532-46b4-a02b-8070608dfe6b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:42:57 compute-0 nova_compute[250018]: 2026-01-20 14:42:57.362 250022 DEBUG oslo_concurrency.lockutils [None req-5a66a57e-a82b-46fd-be22-e7fc74957927 aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Acquiring lock "62bbc690-cb71-4cdc-93e1-cdf395abbae4" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:42:57 compute-0 nova_compute[250018]: 2026-01-20 14:42:57.364 250022 DEBUG oslo_concurrency.lockutils [None req-5a66a57e-a82b-46fd-be22-e7fc74957927 aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Lock "62bbc690-cb71-4cdc-93e1-cdf395abbae4" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:42:57 compute-0 nova_compute[250018]: 2026-01-20 14:42:57.365 250022 DEBUG oslo_concurrency.lockutils [None req-5a66a57e-a82b-46fd-be22-e7fc74957927 aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Acquiring lock "62bbc690-cb71-4cdc-93e1-cdf395abbae4-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:42:57 compute-0 nova_compute[250018]: 2026-01-20 14:42:57.366 250022 DEBUG oslo_concurrency.lockutils [None req-5a66a57e-a82b-46fd-be22-e7fc74957927 aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Lock "62bbc690-cb71-4cdc-93e1-cdf395abbae4-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:42:57 compute-0 nova_compute[250018]: 2026-01-20 14:42:57.367 250022 DEBUG oslo_concurrency.lockutils [None req-5a66a57e-a82b-46fd-be22-e7fc74957927 aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Lock "62bbc690-cb71-4cdc-93e1-cdf395abbae4-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:42:57 compute-0 nova_compute[250018]: 2026-01-20 14:42:57.369 250022 INFO nova.compute.manager [None req-5a66a57e-a82b-46fd-be22-e7fc74957927 aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: 62bbc690-cb71-4cdc-93e1-cdf395abbae4] Terminating instance
Jan 20 14:42:57 compute-0 nova_compute[250018]: 2026-01-20 14:42:57.371 250022 DEBUG nova.compute.manager [None req-5a66a57e-a82b-46fd-be22-e7fc74957927 aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: 62bbc690-cb71-4cdc-93e1-cdf395abbae4] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 14:42:57 compute-0 kernel: tapd6c7f812-24 (unregistering): left promiscuous mode
Jan 20 14:42:57 compute-0 NetworkManager[48960]: <info>  [1768920177.4124] device (tapd6c7f812-24): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 14:42:57 compute-0 nova_compute[250018]: 2026-01-20 14:42:57.423 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:57 compute-0 nova_compute[250018]: 2026-01-20 14:42:57.424 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:57 compute-0 ovn_controller[148666]: 2026-01-20T14:42:57Z|00254|binding|INFO|Releasing lport d6c7f812-24ee-4acf-9603-942a2c4658f7 from this chassis (sb_readonly=0)
Jan 20 14:42:57 compute-0 ovn_controller[148666]: 2026-01-20T14:42:57Z|00255|binding|INFO|Setting lport d6c7f812-24ee-4acf-9603-942a2c4658f7 down in Southbound
Jan 20 14:42:57 compute-0 ovn_controller[148666]: 2026-01-20T14:42:57Z|00256|binding|INFO|Removing iface tapd6c7f812-24 ovn-installed in OVS
Jan 20 14:42:57 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:57.429 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6d:65:2c 10.100.0.10'], port_security=['fa:16:3e:6d:65:2c 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '62bbc690-cb71-4cdc-93e1-cdf395abbae4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3e260ad9-fcf1-432b-b71b-b943d4249b65', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a3e022a35f604df2bbc885e498b1e206', 'neutron:revision_number': '4', 'neutron:security_group_ids': '885819b7-5060-4b73-ad54-3f31f821195c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=89e607b1-9e39-47f0-8180-8aaef3a2a0e9, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=d6c7f812-24ee-4acf-9603-942a2c4658f7) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:42:57 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:57.430 160071 INFO neutron.agent.ovn.metadata.agent [-] Port d6c7f812-24ee-4acf-9603-942a2c4658f7 in datapath 3e260ad9-fcf1-432b-b71b-b943d4249b65 unbound from our chassis
Jan 20 14:42:57 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:57.431 160071 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3e260ad9-fcf1-432b-b71b-b943d4249b65, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 14:42:57 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:57.433 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[4858189c-acb1-4c45-898b-b768ae8f4d06]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:42:57 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:57.433 160071 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3e260ad9-fcf1-432b-b71b-b943d4249b65 namespace which is not needed anymore
Jan 20 14:42:57 compute-0 nova_compute[250018]: 2026-01-20 14:42:57.449 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:57 compute-0 systemd[1]: machine-qemu\x2d34\x2dinstance\x2d0000004e.scope: Deactivated successfully.
Jan 20 14:42:57 compute-0 systemd[1]: machine-qemu\x2d34\x2dinstance\x2d0000004e.scope: Consumed 2.630s CPU time.
Jan 20 14:42:57 compute-0 systemd-machined[216401]: Machine qemu-34-instance-0000004e terminated.
Jan 20 14:42:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 14:42:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:42:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 14:42:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:42:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:42:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:42:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:42:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:42:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:42:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:42:57 compute-0 neutron-haproxy-ovnmeta-3e260ad9-fcf1-432b-b71b-b943d4249b65[297305]: [NOTICE]   (297309) : haproxy version is 2.8.14-c23fe91
Jan 20 14:42:57 compute-0 neutron-haproxy-ovnmeta-3e260ad9-fcf1-432b-b71b-b943d4249b65[297305]: [NOTICE]   (297309) : path to executable is /usr/sbin/haproxy
Jan 20 14:42:57 compute-0 neutron-haproxy-ovnmeta-3e260ad9-fcf1-432b-b71b-b943d4249b65[297305]: [ALERT]    (297309) : Current worker (297312) exited with code 143 (Terminated)
Jan 20 14:42:57 compute-0 neutron-haproxy-ovnmeta-3e260ad9-fcf1-432b-b71b-b943d4249b65[297305]: [WARNING]  (297309) : All workers exited. Exiting... (0)
Jan 20 14:42:57 compute-0 systemd[1]: libpod-51fefaf468df0878fd44770f23f4e1b33b300fbc2a5bd2c2fa6376f38317272b.scope: Deactivated successfully.
Jan 20 14:42:57 compute-0 podman[297395]: 2026-01-20 14:42:57.575986925 +0000 UTC m=+0.047029296 container died 51fefaf468df0878fd44770f23f4e1b33b300fbc2a5bd2c2fa6376f38317272b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3e260ad9-fcf1-432b-b71b-b943d4249b65, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 20 14:42:57 compute-0 nova_compute[250018]: 2026-01-20 14:42:57.605 250022 INFO nova.virt.libvirt.driver [-] [instance: 62bbc690-cb71-4cdc-93e1-cdf395abbae4] Instance destroyed successfully.
Jan 20 14:42:57 compute-0 nova_compute[250018]: 2026-01-20 14:42:57.606 250022 DEBUG nova.objects.instance [None req-5a66a57e-a82b-46fd-be22-e7fc74957927 aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Lazy-loading 'resources' on Instance uuid 62bbc690-cb71-4cdc-93e1-cdf395abbae4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:42:57 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-51fefaf468df0878fd44770f23f4e1b33b300fbc2a5bd2c2fa6376f38317272b-userdata-shm.mount: Deactivated successfully.
Jan 20 14:42:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-60742e7e3a42c758b2aeb765883b0d2d9fc5dd2bed53e571edaac32b26eb78da-merged.mount: Deactivated successfully.
Jan 20 14:42:57 compute-0 nova_compute[250018]: 2026-01-20 14:42:57.618 250022 DEBUG nova.virt.libvirt.vif [None req-5a66a57e-a82b-46fd-be22-e7fc74957927 aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:42:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1574727229',display_name='tempest-tempest.common.compute-instance-1574727229-2',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1574727229-2',id=78,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=1,launched_at=2026-01-20T14:42:55Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a3e022a35f604df2bbc885e498b1e206',ramdisk_id='',reservation_id='r-6khz0z7o',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-MultipleCreateTestJSON-164394330',owner_user_name='tempest-MultipleCreateTestJSON-164394330-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T14:42:55Z,user_data=None,user_id='aa2e7857e85f483eb0d162e2ee8c2e2c',uuid=62bbc690-cb71-4cdc-93e1-cdf395abbae4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d6c7f812-24ee-4acf-9603-942a2c4658f7", "address": "fa:16:3e:6d:65:2c", "network": {"id": "3e260ad9-fcf1-432b-b71b-b943d4249b65", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1425882684-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3e022a35f604df2bbc885e498b1e206", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6c7f812-24", "ovs_interfaceid": "d6c7f812-24ee-4acf-9603-942a2c4658f7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 14:42:57 compute-0 nova_compute[250018]: 2026-01-20 14:42:57.618 250022 DEBUG nova.network.os_vif_util [None req-5a66a57e-a82b-46fd-be22-e7fc74957927 aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Converting VIF {"id": "d6c7f812-24ee-4acf-9603-942a2c4658f7", "address": "fa:16:3e:6d:65:2c", "network": {"id": "3e260ad9-fcf1-432b-b71b-b943d4249b65", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1425882684-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3e022a35f604df2bbc885e498b1e206", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6c7f812-24", "ovs_interfaceid": "d6c7f812-24ee-4acf-9603-942a2c4658f7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:42:57 compute-0 nova_compute[250018]: 2026-01-20 14:42:57.618 250022 DEBUG nova.network.os_vif_util [None req-5a66a57e-a82b-46fd-be22-e7fc74957927 aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6d:65:2c,bridge_name='br-int',has_traffic_filtering=True,id=d6c7f812-24ee-4acf-9603-942a2c4658f7,network=Network(3e260ad9-fcf1-432b-b71b-b943d4249b65),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd6c7f812-24') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:42:57 compute-0 nova_compute[250018]: 2026-01-20 14:42:57.619 250022 DEBUG os_vif [None req-5a66a57e-a82b-46fd-be22-e7fc74957927 aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6d:65:2c,bridge_name='br-int',has_traffic_filtering=True,id=d6c7f812-24ee-4acf-9603-942a2c4658f7,network=Network(3e260ad9-fcf1-432b-b71b-b943d4249b65),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd6c7f812-24') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 14:42:57 compute-0 nova_compute[250018]: 2026-01-20 14:42:57.620 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:57 compute-0 nova_compute[250018]: 2026-01-20 14:42:57.620 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd6c7f812-24, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:42:57 compute-0 nova_compute[250018]: 2026-01-20 14:42:57.623 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:57 compute-0 nova_compute[250018]: 2026-01-20 14:42:57.624 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:42:57 compute-0 nova_compute[250018]: 2026-01-20 14:42:57.626 250022 INFO os_vif [None req-5a66a57e-a82b-46fd-be22-e7fc74957927 aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6d:65:2c,bridge_name='br-int',has_traffic_filtering=True,id=d6c7f812-24ee-4acf-9603-942a2c4658f7,network=Network(3e260ad9-fcf1-432b-b71b-b943d4249b65),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd6c7f812-24')
Jan 20 14:42:57 compute-0 podman[297395]: 2026-01-20 14:42:57.629527316 +0000 UTC m=+0.100569697 container cleanup 51fefaf468df0878fd44770f23f4e1b33b300fbc2a5bd2c2fa6376f38317272b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3e260ad9-fcf1-432b-b71b-b943d4249b65, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 20 14:42:57 compute-0 systemd[1]: libpod-conmon-51fefaf468df0878fd44770f23f4e1b33b300fbc2a5bd2c2fa6376f38317272b.scope: Deactivated successfully.
Jan 20 14:42:57 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1638: 321 pgs: 321 active+clean; 372 MiB data, 852 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 5.7 MiB/s wr, 214 op/s
Jan 20 14:42:57 compute-0 podman[297452]: 2026-01-20 14:42:57.706096801 +0000 UTC m=+0.046077240 container remove 51fefaf468df0878fd44770f23f4e1b33b300fbc2a5bd2c2fa6376f38317272b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3e260ad9-fcf1-432b-b71b-b943d4249b65, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 20 14:42:57 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:57.716 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[b7da5a99-4a2d-436a-9c02-badb0061db53]: (4, ('Tue Jan 20 02:42:57 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-3e260ad9-fcf1-432b-b71b-b943d4249b65 (51fefaf468df0878fd44770f23f4e1b33b300fbc2a5bd2c2fa6376f38317272b)\n51fefaf468df0878fd44770f23f4e1b33b300fbc2a5bd2c2fa6376f38317272b\nTue Jan 20 02:42:57 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-3e260ad9-fcf1-432b-b71b-b943d4249b65 (51fefaf468df0878fd44770f23f4e1b33b300fbc2a5bd2c2fa6376f38317272b)\n51fefaf468df0878fd44770f23f4e1b33b300fbc2a5bd2c2fa6376f38317272b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:42:57 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:57.718 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[09b161a5-64c6-4b0c-a61f-cf1201e8cb12]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:42:57 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:57.719 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3e260ad9-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:42:57 compute-0 nova_compute[250018]: 2026-01-20 14:42:57.720 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:57 compute-0 kernel: tap3e260ad9-f0: left promiscuous mode
Jan 20 14:42:57 compute-0 nova_compute[250018]: 2026-01-20 14:42:57.737 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:42:57 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:57.741 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[27101139-78f5-4aa3-a0b9-e2d701660173]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:42:57 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:57.758 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[c19b29d6-598c-49f7-8a9b-3e7adb461623]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:42:57 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:57.760 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[c67aae78-506a-4477-977f-81b4d8e95d34]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:42:57 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:57.775 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[022bbfdf-ca10-4064-b288-df6ddaa953af]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 611566, 'reachable_time': 36552, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 297470, 'error': None, 'target': 'ovnmeta-3e260ad9-fcf1-432b-b71b-b943d4249b65', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:42:57 compute-0 systemd[1]: run-netns-ovnmeta\x2d3e260ad9\x2dfcf1\x2d432b\x2db71b\x2db943d4249b65.mount: Deactivated successfully.
Jan 20 14:42:57 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:57.781 160517 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3e260ad9-fcf1-432b-b71b-b943d4249b65 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 14:42:57 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:42:57.781 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[6cc9524b-7db3-4d99-bb22-28037ecaeb60]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:42:58 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:42:58 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:42:58 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:42:58.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:42:58 compute-0 nova_compute[250018]: 2026-01-20 14:42:58.092 250022 INFO nova.virt.libvirt.driver [None req-5a66a57e-a82b-46fd-be22-e7fc74957927 aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: 62bbc690-cb71-4cdc-93e1-cdf395abbae4] Deleting instance files /var/lib/nova/instances/62bbc690-cb71-4cdc-93e1-cdf395abbae4_del
Jan 20 14:42:58 compute-0 nova_compute[250018]: 2026-01-20 14:42:58.094 250022 INFO nova.virt.libvirt.driver [None req-5a66a57e-a82b-46fd-be22-e7fc74957927 aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: 62bbc690-cb71-4cdc-93e1-cdf395abbae4] Deletion of /var/lib/nova/instances/62bbc690-cb71-4cdc-93e1-cdf395abbae4_del complete
Jan 20 14:42:58 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:42:58 compute-0 nova_compute[250018]: 2026-01-20 14:42:58.172 250022 INFO nova.compute.manager [None req-5a66a57e-a82b-46fd-be22-e7fc74957927 aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: 62bbc690-cb71-4cdc-93e1-cdf395abbae4] Took 0.80 seconds to destroy the instance on the hypervisor.
Jan 20 14:42:58 compute-0 nova_compute[250018]: 2026-01-20 14:42:58.172 250022 DEBUG oslo.service.loopingcall [None req-5a66a57e-a82b-46fd-be22-e7fc74957927 aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 14:42:58 compute-0 nova_compute[250018]: 2026-01-20 14:42:58.172 250022 DEBUG nova.compute.manager [-] [instance: 62bbc690-cb71-4cdc-93e1-cdf395abbae4] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 14:42:58 compute-0 nova_compute[250018]: 2026-01-20 14:42:58.173 250022 DEBUG nova.network.neutron [-] [instance: 62bbc690-cb71-4cdc-93e1-cdf395abbae4] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 14:42:58 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:42:58 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:42:58 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:42:58.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:42:58 compute-0 nova_compute[250018]: 2026-01-20 14:42:58.940 250022 DEBUG nova.network.neutron [-] [instance: 62bbc690-cb71-4cdc-93e1-cdf395abbae4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:42:58 compute-0 nova_compute[250018]: 2026-01-20 14:42:58.959 250022 INFO nova.compute.manager [-] [instance: 62bbc690-cb71-4cdc-93e1-cdf395abbae4] Took 0.79 seconds to deallocate network for instance.
Jan 20 14:42:59 compute-0 nova_compute[250018]: 2026-01-20 14:42:59.006 250022 DEBUG oslo_concurrency.lockutils [None req-5a66a57e-a82b-46fd-be22-e7fc74957927 aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:42:59 compute-0 nova_compute[250018]: 2026-01-20 14:42:59.007 250022 DEBUG oslo_concurrency.lockutils [None req-5a66a57e-a82b-46fd-be22-e7fc74957927 aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:42:59 compute-0 nova_compute[250018]: 2026-01-20 14:42:59.084 250022 DEBUG oslo_concurrency.processutils [None req-5a66a57e-a82b-46fd-be22-e7fc74957927 aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:42:59 compute-0 ceph-mon[74360]: pgmap v1638: 321 pgs: 321 active+clean; 372 MiB data, 852 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 5.7 MiB/s wr, 214 op/s
Jan 20 14:42:59 compute-0 nova_compute[250018]: 2026-01-20 14:42:59.148 250022 DEBUG nova.compute.manager [req-148864b2-f935-459c-b2b3-de9637b6ec49 req-b86a7a25-2ec7-480e-bb09-71cba47fce3b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 62bbc690-cb71-4cdc-93e1-cdf395abbae4] Received event network-vif-deleted-d6c7f812-24ee-4acf-9603-942a2c4658f7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:42:59 compute-0 nova_compute[250018]: 2026-01-20 14:42:59.150 250022 DEBUG nova.compute.manager [req-796c909c-43b5-4104-9d0e-d7c77175fb79 req-37d672fb-06a0-451e-8669-4844f80ba327 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 62bbc690-cb71-4cdc-93e1-cdf395abbae4] Received event network-vif-unplugged-d6c7f812-24ee-4acf-9603-942a2c4658f7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:42:59 compute-0 nova_compute[250018]: 2026-01-20 14:42:59.150 250022 DEBUG oslo_concurrency.lockutils [req-796c909c-43b5-4104-9d0e-d7c77175fb79 req-37d672fb-06a0-451e-8669-4844f80ba327 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "62bbc690-cb71-4cdc-93e1-cdf395abbae4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:42:59 compute-0 nova_compute[250018]: 2026-01-20 14:42:59.151 250022 DEBUG oslo_concurrency.lockutils [req-796c909c-43b5-4104-9d0e-d7c77175fb79 req-37d672fb-06a0-451e-8669-4844f80ba327 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "62bbc690-cb71-4cdc-93e1-cdf395abbae4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:42:59 compute-0 nova_compute[250018]: 2026-01-20 14:42:59.151 250022 DEBUG oslo_concurrency.lockutils [req-796c909c-43b5-4104-9d0e-d7c77175fb79 req-37d672fb-06a0-451e-8669-4844f80ba327 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "62bbc690-cb71-4cdc-93e1-cdf395abbae4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:42:59 compute-0 nova_compute[250018]: 2026-01-20 14:42:59.151 250022 DEBUG nova.compute.manager [req-796c909c-43b5-4104-9d0e-d7c77175fb79 req-37d672fb-06a0-451e-8669-4844f80ba327 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 62bbc690-cb71-4cdc-93e1-cdf395abbae4] No waiting events found dispatching network-vif-unplugged-d6c7f812-24ee-4acf-9603-942a2c4658f7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:42:59 compute-0 nova_compute[250018]: 2026-01-20 14:42:59.152 250022 WARNING nova.compute.manager [req-796c909c-43b5-4104-9d0e-d7c77175fb79 req-37d672fb-06a0-451e-8669-4844f80ba327 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 62bbc690-cb71-4cdc-93e1-cdf395abbae4] Received unexpected event network-vif-unplugged-d6c7f812-24ee-4acf-9603-942a2c4658f7 for instance with vm_state deleted and task_state None.
Jan 20 14:42:59 compute-0 nova_compute[250018]: 2026-01-20 14:42:59.152 250022 DEBUG nova.compute.manager [req-796c909c-43b5-4104-9d0e-d7c77175fb79 req-37d672fb-06a0-451e-8669-4844f80ba327 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 62bbc690-cb71-4cdc-93e1-cdf395abbae4] Received event network-vif-plugged-d6c7f812-24ee-4acf-9603-942a2c4658f7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:42:59 compute-0 nova_compute[250018]: 2026-01-20 14:42:59.152 250022 DEBUG oslo_concurrency.lockutils [req-796c909c-43b5-4104-9d0e-d7c77175fb79 req-37d672fb-06a0-451e-8669-4844f80ba327 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "62bbc690-cb71-4cdc-93e1-cdf395abbae4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:42:59 compute-0 nova_compute[250018]: 2026-01-20 14:42:59.153 250022 DEBUG oslo_concurrency.lockutils [req-796c909c-43b5-4104-9d0e-d7c77175fb79 req-37d672fb-06a0-451e-8669-4844f80ba327 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "62bbc690-cb71-4cdc-93e1-cdf395abbae4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:42:59 compute-0 nova_compute[250018]: 2026-01-20 14:42:59.153 250022 DEBUG oslo_concurrency.lockutils [req-796c909c-43b5-4104-9d0e-d7c77175fb79 req-37d672fb-06a0-451e-8669-4844f80ba327 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "62bbc690-cb71-4cdc-93e1-cdf395abbae4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:42:59 compute-0 nova_compute[250018]: 2026-01-20 14:42:59.153 250022 DEBUG nova.compute.manager [req-796c909c-43b5-4104-9d0e-d7c77175fb79 req-37d672fb-06a0-451e-8669-4844f80ba327 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 62bbc690-cb71-4cdc-93e1-cdf395abbae4] No waiting events found dispatching network-vif-plugged-d6c7f812-24ee-4acf-9603-942a2c4658f7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:42:59 compute-0 nova_compute[250018]: 2026-01-20 14:42:59.154 250022 WARNING nova.compute.manager [req-796c909c-43b5-4104-9d0e-d7c77175fb79 req-37d672fb-06a0-451e-8669-4844f80ba327 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 62bbc690-cb71-4cdc-93e1-cdf395abbae4] Received unexpected event network-vif-plugged-d6c7f812-24ee-4acf-9603-942a2c4658f7 for instance with vm_state deleted and task_state None.
Jan 20 14:42:59 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:42:59 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4109341430' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:42:59 compute-0 nova_compute[250018]: 2026-01-20 14:42:59.522 250022 DEBUG oslo_concurrency.processutils [None req-5a66a57e-a82b-46fd-be22-e7fc74957927 aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:42:59 compute-0 nova_compute[250018]: 2026-01-20 14:42:59.532 250022 DEBUG nova.compute.provider_tree [None req-5a66a57e-a82b-46fd-be22-e7fc74957927 aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:42:59 compute-0 nova_compute[250018]: 2026-01-20 14:42:59.550 250022 DEBUG nova.scheduler.client.report [None req-5a66a57e-a82b-46fd-be22-e7fc74957927 aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:42:59 compute-0 nova_compute[250018]: 2026-01-20 14:42:59.570 250022 DEBUG oslo_concurrency.lockutils [None req-5a66a57e-a82b-46fd-be22-e7fc74957927 aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.563s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:42:59 compute-0 nova_compute[250018]: 2026-01-20 14:42:59.602 250022 INFO nova.scheduler.client.report [None req-5a66a57e-a82b-46fd-be22-e7fc74957927 aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Deleted allocations for instance 62bbc690-cb71-4cdc-93e1-cdf395abbae4
Jan 20 14:42:59 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1639: 321 pgs: 321 active+clean; 362 MiB data, 844 MiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 4.5 MiB/s wr, 219 op/s
Jan 20 14:42:59 compute-0 nova_compute[250018]: 2026-01-20 14:42:59.725 250022 DEBUG oslo_concurrency.lockutils [None req-5a66a57e-a82b-46fd-be22-e7fc74957927 aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Lock "62bbc690-cb71-4cdc-93e1-cdf395abbae4" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.360s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:43:00 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:43:00 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:43:00 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:43:00.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:43:00 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/4040533622' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:43:00 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/306700131' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:43:00 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/4109341430' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:43:00 compute-0 ceph-mon[74360]: pgmap v1639: 321 pgs: 321 active+clean; 362 MiB data, 844 MiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 4.5 MiB/s wr, 219 op/s
Jan 20 14:43:00 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:43:00 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:43:00 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:43:00.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:43:01 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1640: 321 pgs: 321 active+clean; 314 MiB data, 826 MiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 3.2 MiB/s wr, 315 op/s
Jan 20 14:43:01 compute-0 ceph-mon[74360]: pgmap v1640: 321 pgs: 321 active+clean; 314 MiB data, 826 MiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 3.2 MiB/s wr, 315 op/s
Jan 20 14:43:02 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:43:02 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:43:02 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:43:02.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:43:02 compute-0 nova_compute[250018]: 2026-01-20 14:43:02.138 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:43:02 compute-0 nova_compute[250018]: 2026-01-20 14:43:02.622 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:43:02 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:43:02 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:43:02 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:43:02.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:43:03 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:43:03 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1641: 321 pgs: 321 active+clean; 295 MiB data, 821 MiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 1.2 MiB/s wr, 295 op/s
Jan 20 14:43:03 compute-0 ceph-mon[74360]: pgmap v1641: 321 pgs: 321 active+clean; 295 MiB data, 821 MiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 1.2 MiB/s wr, 295 op/s
Jan 20 14:43:04 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:43:04 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:43:04 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:43:04.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:43:04 compute-0 nova_compute[250018]: 2026-01-20 14:43:04.604 250022 DEBUG oslo_concurrency.lockutils [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Acquiring lock "52f99263-c471-4724-813b-98b8a7c3c301" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:43:04 compute-0 nova_compute[250018]: 2026-01-20 14:43:04.605 250022 DEBUG oslo_concurrency.lockutils [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Lock "52f99263-c471-4724-813b-98b8a7c3c301" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:43:04 compute-0 nova_compute[250018]: 2026-01-20 14:43:04.630 250022 DEBUG nova.compute.manager [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: 52f99263-c471-4724-813b-98b8a7c3c301] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 14:43:04 compute-0 nova_compute[250018]: 2026-01-20 14:43:04.716 250022 DEBUG oslo_concurrency.lockutils [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:43:04 compute-0 nova_compute[250018]: 2026-01-20 14:43:04.716 250022 DEBUG oslo_concurrency.lockutils [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:43:04 compute-0 nova_compute[250018]: 2026-01-20 14:43:04.721 250022 DEBUG nova.virt.hardware [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 14:43:04 compute-0 nova_compute[250018]: 2026-01-20 14:43:04.721 250022 INFO nova.compute.claims [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: 52f99263-c471-4724-813b-98b8a7c3c301] Claim successful on node compute-0.ctlplane.example.com
Jan 20 14:43:04 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:43:04 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:43:04 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:43:04.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:43:04 compute-0 nova_compute[250018]: 2026-01-20 14:43:04.870 250022 DEBUG oslo_concurrency.processutils [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:43:05 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:43:05 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3418317096' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:43:05 compute-0 nova_compute[250018]: 2026-01-20 14:43:05.290 250022 DEBUG oslo_concurrency.processutils [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:43:05 compute-0 nova_compute[250018]: 2026-01-20 14:43:05.294 250022 DEBUG nova.compute.provider_tree [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:43:05 compute-0 nova_compute[250018]: 2026-01-20 14:43:05.315 250022 DEBUG nova.scheduler.client.report [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:43:05 compute-0 nova_compute[250018]: 2026-01-20 14:43:05.340 250022 DEBUG oslo_concurrency.lockutils [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.624s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:43:05 compute-0 nova_compute[250018]: 2026-01-20 14:43:05.341 250022 DEBUG nova.compute.manager [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: 52f99263-c471-4724-813b-98b8a7c3c301] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 14:43:05 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/2608020822' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:43:05 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3418317096' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:43:05 compute-0 nova_compute[250018]: 2026-01-20 14:43:05.408 250022 DEBUG nova.compute.manager [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: 52f99263-c471-4724-813b-98b8a7c3c301] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 20 14:43:05 compute-0 nova_compute[250018]: 2026-01-20 14:43:05.409 250022 DEBUG nova.network.neutron [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: 52f99263-c471-4724-813b-98b8a7c3c301] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 14:43:05 compute-0 nova_compute[250018]: 2026-01-20 14:43:05.440 250022 INFO nova.virt.libvirt.driver [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: 52f99263-c471-4724-813b-98b8a7c3c301] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 14:43:05 compute-0 nova_compute[250018]: 2026-01-20 14:43:05.463 250022 DEBUG nova.compute.manager [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: 52f99263-c471-4724-813b-98b8a7c3c301] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 14:43:05 compute-0 nova_compute[250018]: 2026-01-20 14:43:05.612 250022 DEBUG nova.compute.manager [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: 52f99263-c471-4724-813b-98b8a7c3c301] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 14:43:05 compute-0 nova_compute[250018]: 2026-01-20 14:43:05.616 250022 DEBUG nova.virt.libvirt.driver [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: 52f99263-c471-4724-813b-98b8a7c3c301] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 14:43:05 compute-0 nova_compute[250018]: 2026-01-20 14:43:05.617 250022 INFO nova.virt.libvirt.driver [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: 52f99263-c471-4724-813b-98b8a7c3c301] Creating image(s)
Jan 20 14:43:05 compute-0 nova_compute[250018]: 2026-01-20 14:43:05.649 250022 DEBUG nova.storage.rbd_utils [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] rbd image 52f99263-c471-4724-813b-98b8a7c3c301_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:43:05 compute-0 nova_compute[250018]: 2026-01-20 14:43:05.681 250022 DEBUG nova.storage.rbd_utils [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] rbd image 52f99263-c471-4724-813b-98b8a7c3c301_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:43:05 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1642: 321 pgs: 321 active+clean; 283 MiB data, 814 MiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 1.8 MiB/s wr, 312 op/s
Jan 20 14:43:05 compute-0 nova_compute[250018]: 2026-01-20 14:43:05.716 250022 DEBUG nova.storage.rbd_utils [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] rbd image 52f99263-c471-4724-813b-98b8a7c3c301_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:43:05 compute-0 nova_compute[250018]: 2026-01-20 14:43:05.721 250022 DEBUG oslo_concurrency.processutils [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:43:05 compute-0 nova_compute[250018]: 2026-01-20 14:43:05.754 250022 DEBUG nova.policy [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'aa2e7857e85f483eb0d162e2ee8c2e2c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a3e022a35f604df2bbc885e498b1e206', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 20 14:43:05 compute-0 nova_compute[250018]: 2026-01-20 14:43:05.787 250022 DEBUG oslo_concurrency.processutils [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:43:05 compute-0 nova_compute[250018]: 2026-01-20 14:43:05.788 250022 DEBUG oslo_concurrency.lockutils [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Acquiring lock "82d5c1918fd7c974214c7a48c1793a7a82560462" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:43:05 compute-0 nova_compute[250018]: 2026-01-20 14:43:05.789 250022 DEBUG oslo_concurrency.lockutils [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:43:05 compute-0 nova_compute[250018]: 2026-01-20 14:43:05.789 250022 DEBUG oslo_concurrency.lockutils [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:43:05 compute-0 nova_compute[250018]: 2026-01-20 14:43:05.818 250022 DEBUG nova.storage.rbd_utils [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] rbd image 52f99263-c471-4724-813b-98b8a7c3c301_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:43:05 compute-0 nova_compute[250018]: 2026-01-20 14:43:05.822 250022 DEBUG oslo_concurrency.processutils [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 52f99263-c471-4724-813b-98b8a7c3c301_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:43:06 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:43:06 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:43:06 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:43:06.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:43:06 compute-0 nova_compute[250018]: 2026-01-20 14:43:06.271 250022 DEBUG oslo_concurrency.processutils [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 52f99263-c471-4724-813b-98b8a7c3c301_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:43:06 compute-0 nova_compute[250018]: 2026-01-20 14:43:06.383 250022 DEBUG nova.storage.rbd_utils [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] resizing rbd image 52f99263-c471-4724-813b-98b8a7c3c301_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 20 14:43:06 compute-0 sudo[297630]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:43:06 compute-0 sudo[297630]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:43:06 compute-0 sudo[297630]: pam_unix(sudo:session): session closed for user root
Jan 20 14:43:06 compute-0 ceph-mon[74360]: pgmap v1642: 321 pgs: 321 active+clean; 283 MiB data, 814 MiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 1.8 MiB/s wr, 312 op/s
Jan 20 14:43:06 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/2177212168' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:43:06 compute-0 sudo[297691]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:43:06 compute-0 sudo[297691]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:43:06 compute-0 sudo[297691]: pam_unix(sudo:session): session closed for user root
Jan 20 14:43:06 compute-0 nova_compute[250018]: 2026-01-20 14:43:06.476 250022 DEBUG nova.objects.instance [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Lazy-loading 'migration_context' on Instance uuid 52f99263-c471-4724-813b-98b8a7c3c301 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:43:06 compute-0 nova_compute[250018]: 2026-01-20 14:43:06.489 250022 DEBUG nova.virt.libvirt.driver [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: 52f99263-c471-4724-813b-98b8a7c3c301] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 14:43:06 compute-0 nova_compute[250018]: 2026-01-20 14:43:06.489 250022 DEBUG nova.virt.libvirt.driver [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: 52f99263-c471-4724-813b-98b8a7c3c301] Ensure instance console log exists: /var/lib/nova/instances/52f99263-c471-4724-813b-98b8a7c3c301/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 14:43:06 compute-0 nova_compute[250018]: 2026-01-20 14:43:06.490 250022 DEBUG oslo_concurrency.lockutils [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:43:06 compute-0 nova_compute[250018]: 2026-01-20 14:43:06.490 250022 DEBUG oslo_concurrency.lockutils [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:43:06 compute-0 nova_compute[250018]: 2026-01-20 14:43:06.490 250022 DEBUG oslo_concurrency.lockutils [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:43:06 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:43:06 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:43:06 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:43:06.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:43:07 compute-0 nova_compute[250018]: 2026-01-20 14:43:07.140 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:43:07 compute-0 nova_compute[250018]: 2026-01-20 14:43:07.624 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:43:07 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1643: 321 pgs: 321 active+clean; 294 MiB data, 812 MiB used, 20 GiB / 21 GiB avail; 5.9 MiB/s rd, 3.4 MiB/s wr, 349 op/s
Jan 20 14:43:07 compute-0 ceph-mon[74360]: pgmap v1643: 321 pgs: 321 active+clean; 294 MiB data, 812 MiB used, 20 GiB / 21 GiB avail; 5.9 MiB/s rd, 3.4 MiB/s wr, 349 op/s
Jan 20 14:43:07 compute-0 nova_compute[250018]: 2026-01-20 14:43:07.776 250022 DEBUG nova.network.neutron [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: 52f99263-c471-4724-813b-98b8a7c3c301] Successfully created port: c967163a-0504-48ab-88e1-29cb7a413387 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 20 14:43:08 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:43:08 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:43:08 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:43:08.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:43:08 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:43:08 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:43:08 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:43:08 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:43:08.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:43:09 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/1530503608' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:43:09 compute-0 ovn_controller[148666]: 2026-01-20T14:43:09Z|00032|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:8d:b9:dd 10.100.0.4
Jan 20 14:43:09 compute-0 nova_compute[250018]: 2026-01-20 14:43:09.357 250022 DEBUG nova.network.neutron [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: 52f99263-c471-4724-813b-98b8a7c3c301] Successfully updated port: c967163a-0504-48ab-88e1-29cb7a413387 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 20 14:43:09 compute-0 nova_compute[250018]: 2026-01-20 14:43:09.371 250022 DEBUG oslo_concurrency.lockutils [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Acquiring lock "refresh_cache-52f99263-c471-4724-813b-98b8a7c3c301" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:43:09 compute-0 nova_compute[250018]: 2026-01-20 14:43:09.372 250022 DEBUG oslo_concurrency.lockutils [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Acquired lock "refresh_cache-52f99263-c471-4724-813b-98b8a7c3c301" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:43:09 compute-0 nova_compute[250018]: 2026-01-20 14:43:09.372 250022 DEBUG nova.network.neutron [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: 52f99263-c471-4724-813b-98b8a7c3c301] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:43:09 compute-0 nova_compute[250018]: 2026-01-20 14:43:09.510 250022 DEBUG nova.compute.manager [req-9f50f3f6-b7da-4aa7-8ad8-55b3cea2142b req-45372e99-a758-4387-b243-b274a0003771 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 52f99263-c471-4724-813b-98b8a7c3c301] Received event network-changed-c967163a-0504-48ab-88e1-29cb7a413387 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:43:09 compute-0 nova_compute[250018]: 2026-01-20 14:43:09.510 250022 DEBUG nova.compute.manager [req-9f50f3f6-b7da-4aa7-8ad8-55b3cea2142b req-45372e99-a758-4387-b243-b274a0003771 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 52f99263-c471-4724-813b-98b8a7c3c301] Refreshing instance network info cache due to event network-changed-c967163a-0504-48ab-88e1-29cb7a413387. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:43:09 compute-0 nova_compute[250018]: 2026-01-20 14:43:09.511 250022 DEBUG oslo_concurrency.lockutils [req-9f50f3f6-b7da-4aa7-8ad8-55b3cea2142b req-45372e99-a758-4387-b243-b274a0003771 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-52f99263-c471-4724-813b-98b8a7c3c301" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:43:09 compute-0 nova_compute[250018]: 2026-01-20 14:43:09.639 250022 DEBUG nova.network.neutron [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: 52f99263-c471-4724-813b-98b8a7c3c301] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 14:43:09 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1644: 321 pgs: 321 active+clean; 295 MiB data, 815 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 4.4 MiB/s wr, 301 op/s
Jan 20 14:43:10 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:43:10 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:43:10 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:43:10.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:43:10 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/1857567897' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:43:10 compute-0 ceph-mon[74360]: pgmap v1644: 321 pgs: 321 active+clean; 295 MiB data, 815 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 4.4 MiB/s wr, 301 op/s
Jan 20 14:43:10 compute-0 nova_compute[250018]: 2026-01-20 14:43:10.659 250022 DEBUG nova.network.neutron [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: 52f99263-c471-4724-813b-98b8a7c3c301] Updating instance_info_cache with network_info: [{"id": "c967163a-0504-48ab-88e1-29cb7a413387", "address": "fa:16:3e:53:7b:54", "network": {"id": "3e260ad9-fcf1-432b-b71b-b943d4249b65", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1425882684-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3e022a35f604df2bbc885e498b1e206", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc967163a-05", "ovs_interfaceid": "c967163a-0504-48ab-88e1-29cb7a413387", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:43:10 compute-0 nova_compute[250018]: 2026-01-20 14:43:10.691 250022 DEBUG oslo_concurrency.lockutils [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Releasing lock "refresh_cache-52f99263-c471-4724-813b-98b8a7c3c301" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:43:10 compute-0 nova_compute[250018]: 2026-01-20 14:43:10.692 250022 DEBUG nova.compute.manager [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: 52f99263-c471-4724-813b-98b8a7c3c301] Instance network_info: |[{"id": "c967163a-0504-48ab-88e1-29cb7a413387", "address": "fa:16:3e:53:7b:54", "network": {"id": "3e260ad9-fcf1-432b-b71b-b943d4249b65", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1425882684-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3e022a35f604df2bbc885e498b1e206", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc967163a-05", "ovs_interfaceid": "c967163a-0504-48ab-88e1-29cb7a413387", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 20 14:43:10 compute-0 nova_compute[250018]: 2026-01-20 14:43:10.693 250022 DEBUG oslo_concurrency.lockutils [req-9f50f3f6-b7da-4aa7-8ad8-55b3cea2142b req-45372e99-a758-4387-b243-b274a0003771 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-52f99263-c471-4724-813b-98b8a7c3c301" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:43:10 compute-0 nova_compute[250018]: 2026-01-20 14:43:10.693 250022 DEBUG nova.network.neutron [req-9f50f3f6-b7da-4aa7-8ad8-55b3cea2142b req-45372e99-a758-4387-b243-b274a0003771 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 52f99263-c471-4724-813b-98b8a7c3c301] Refreshing network info cache for port c967163a-0504-48ab-88e1-29cb7a413387 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:43:10 compute-0 nova_compute[250018]: 2026-01-20 14:43:10.699 250022 DEBUG nova.virt.libvirt.driver [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: 52f99263-c471-4724-813b-98b8a7c3c301] Start _get_guest_xml network_info=[{"id": "c967163a-0504-48ab-88e1-29cb7a413387", "address": "fa:16:3e:53:7b:54", "network": {"id": "3e260ad9-fcf1-432b-b71b-b943d4249b65", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1425882684-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3e022a35f604df2bbc885e498b1e206", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc967163a-05", "ovs_interfaceid": "c967163a-0504-48ab-88e1-29cb7a413387", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encrypted': False, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'guest_format': None, 'size': 0, 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 14:43:10 compute-0 nova_compute[250018]: 2026-01-20 14:43:10.707 250022 WARNING nova.virt.libvirt.driver [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:43:10 compute-0 nova_compute[250018]: 2026-01-20 14:43:10.717 250022 DEBUG nova.virt.libvirt.host [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 14:43:10 compute-0 nova_compute[250018]: 2026-01-20 14:43:10.718 250022 DEBUG nova.virt.libvirt.host [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 14:43:10 compute-0 nova_compute[250018]: 2026-01-20 14:43:10.723 250022 DEBUG nova.virt.libvirt.host [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 14:43:10 compute-0 nova_compute[250018]: 2026-01-20 14:43:10.724 250022 DEBUG nova.virt.libvirt.host [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 14:43:10 compute-0 nova_compute[250018]: 2026-01-20 14:43:10.726 250022 DEBUG nova.virt.libvirt.driver [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 14:43:10 compute-0 nova_compute[250018]: 2026-01-20 14:43:10.726 250022 DEBUG nova.virt.hardware [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 14:43:10 compute-0 nova_compute[250018]: 2026-01-20 14:43:10.727 250022 DEBUG nova.virt.hardware [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 14:43:10 compute-0 nova_compute[250018]: 2026-01-20 14:43:10.728 250022 DEBUG nova.virt.hardware [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 14:43:10 compute-0 nova_compute[250018]: 2026-01-20 14:43:10.728 250022 DEBUG nova.virt.hardware [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 14:43:10 compute-0 nova_compute[250018]: 2026-01-20 14:43:10.729 250022 DEBUG nova.virt.hardware [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 14:43:10 compute-0 nova_compute[250018]: 2026-01-20 14:43:10.729 250022 DEBUG nova.virt.hardware [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 14:43:10 compute-0 nova_compute[250018]: 2026-01-20 14:43:10.730 250022 DEBUG nova.virt.hardware [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 14:43:10 compute-0 nova_compute[250018]: 2026-01-20 14:43:10.730 250022 DEBUG nova.virt.hardware [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 14:43:10 compute-0 nova_compute[250018]: 2026-01-20 14:43:10.731 250022 DEBUG nova.virt.hardware [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 14:43:10 compute-0 nova_compute[250018]: 2026-01-20 14:43:10.731 250022 DEBUG nova.virt.hardware [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 14:43:10 compute-0 nova_compute[250018]: 2026-01-20 14:43:10.732 250022 DEBUG nova.virt.hardware [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 14:43:10 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:43:10 compute-0 nova_compute[250018]: 2026-01-20 14:43:10.737 250022 DEBUG oslo_concurrency.processutils [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:43:10 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:43:10 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:43:10.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:43:11 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/1201707774' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:43:11 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:43:11 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/507388340' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:43:11 compute-0 nova_compute[250018]: 2026-01-20 14:43:11.206 250022 DEBUG oslo_concurrency.processutils [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:43:11 compute-0 nova_compute[250018]: 2026-01-20 14:43:11.231 250022 DEBUG nova.storage.rbd_utils [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] rbd image 52f99263-c471-4724-813b-98b8a7c3c301_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:43:11 compute-0 nova_compute[250018]: 2026-01-20 14:43:11.235 250022 DEBUG oslo_concurrency.processutils [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:43:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 14:43:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:43:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 20 14:43:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:43:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0063322980643960546 of space, bias 1.0, pg target 1.8996894193188163 quantized to 32 (current 32)
Jan 20 14:43:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:43:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4344349060115393e-05 quantized to 32 (current 32)
Jan 20 14:43:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:43:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:43:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:43:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Jan 20 14:43:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:43:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 32)
Jan 20 14:43:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:43:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:43:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:43:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021737739624046157 quantized to 32 (current 32)
Jan 20 14:43:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:43:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Jan 20 14:43:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:43:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:43:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:43:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Jan 20 14:43:11 compute-0 nova_compute[250018]: 2026-01-20 14:43:11.647 250022 DEBUG oslo_concurrency.lockutils [None req-96c0cd42-92e0-42b5-9c4c-1cac2220f408 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Acquiring lock "1951432b-2c0c-4d1b-90df-d94dcf9fc32e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:43:11 compute-0 nova_compute[250018]: 2026-01-20 14:43:11.648 250022 DEBUG oslo_concurrency.lockutils [None req-96c0cd42-92e0-42b5-9c4c-1cac2220f408 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Lock "1951432b-2c0c-4d1b-90df-d94dcf9fc32e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:43:11 compute-0 nova_compute[250018]: 2026-01-20 14:43:11.649 250022 DEBUG oslo_concurrency.lockutils [None req-96c0cd42-92e0-42b5-9c4c-1cac2220f408 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Acquiring lock "1951432b-2c0c-4d1b-90df-d94dcf9fc32e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:43:11 compute-0 nova_compute[250018]: 2026-01-20 14:43:11.650 250022 DEBUG oslo_concurrency.lockutils [None req-96c0cd42-92e0-42b5-9c4c-1cac2220f408 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Lock "1951432b-2c0c-4d1b-90df-d94dcf9fc32e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:43:11 compute-0 nova_compute[250018]: 2026-01-20 14:43:11.650 250022 DEBUG oslo_concurrency.lockutils [None req-96c0cd42-92e0-42b5-9c4c-1cac2220f408 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Lock "1951432b-2c0c-4d1b-90df-d94dcf9fc32e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:43:11 compute-0 nova_compute[250018]: 2026-01-20 14:43:11.653 250022 INFO nova.compute.manager [None req-96c0cd42-92e0-42b5-9c4c-1cac2220f408 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] [instance: 1951432b-2c0c-4d1b-90df-d94dcf9fc32e] Terminating instance
Jan 20 14:43:11 compute-0 nova_compute[250018]: 2026-01-20 14:43:11.656 250022 DEBUG nova.compute.manager [None req-96c0cd42-92e0-42b5-9c4c-1cac2220f408 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] [instance: 1951432b-2c0c-4d1b-90df-d94dcf9fc32e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 14:43:11 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1645: 321 pgs: 321 active+clean; 288 MiB data, 816 MiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 5.3 MiB/s wr, 308 op/s
Jan 20 14:43:11 compute-0 kernel: tapf6e1030d-55 (unregistering): left promiscuous mode
Jan 20 14:43:11 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:43:11 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/85495823' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:43:11 compute-0 NetworkManager[48960]: <info>  [1768920191.7132] device (tapf6e1030d-55): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 14:43:11 compute-0 ovn_controller[148666]: 2026-01-20T14:43:11Z|00257|binding|INFO|Releasing lport f6e1030d-5508-4e83-92ce-0a723132eb45 from this chassis (sb_readonly=0)
Jan 20 14:43:11 compute-0 ovn_controller[148666]: 2026-01-20T14:43:11Z|00258|binding|INFO|Setting lport f6e1030d-5508-4e83-92ce-0a723132eb45 down in Southbound
Jan 20 14:43:11 compute-0 ovn_controller[148666]: 2026-01-20T14:43:11Z|00259|binding|INFO|Removing iface tapf6e1030d-55 ovn-installed in OVS
Jan 20 14:43:11 compute-0 nova_compute[250018]: 2026-01-20 14:43:11.725 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:43:11 compute-0 nova_compute[250018]: 2026-01-20 14:43:11.735 250022 DEBUG oslo_concurrency.processutils [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:43:11 compute-0 nova_compute[250018]: 2026-01-20 14:43:11.738 250022 DEBUG nova.virt.libvirt.vif [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:43:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-704404998',display_name='tempest-MultipleCreateTestJSON-server-704404998-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-704404998-2',id=81,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a3e022a35f604df2bbc885e498b1e206',ramdisk_id='',reservation_id='r-0enj9v31',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-164394330',owner_user_name='tempest-MultipleCreateTestJSON-164394330-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:43:05Z,user_data=None,user_id='aa2e7857e85f483eb0d162e2ee8c2e2c',uuid=52f99263-c471-4724-813b-98b8a7c3c301,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c967163a-0504-48ab-88e1-29cb7a413387", "address": "fa:16:3e:53:7b:54", "network": {"id": "3e260ad9-fcf1-432b-b71b-b943d4249b65", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1425882684-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3e022a35f604df2bbc885e498b1e206", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc967163a-05", "ovs_interfaceid": "c967163a-0504-48ab-88e1-29cb7a413387", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 14:43:11 compute-0 nova_compute[250018]: 2026-01-20 14:43:11.738 250022 DEBUG nova.network.os_vif_util [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Converting VIF {"id": "c967163a-0504-48ab-88e1-29cb7a413387", "address": "fa:16:3e:53:7b:54", "network": {"id": "3e260ad9-fcf1-432b-b71b-b943d4249b65", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1425882684-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3e022a35f604df2bbc885e498b1e206", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc967163a-05", "ovs_interfaceid": "c967163a-0504-48ab-88e1-29cb7a413387", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:43:11 compute-0 nova_compute[250018]: 2026-01-20 14:43:11.740 250022 DEBUG nova.network.os_vif_util [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:53:7b:54,bridge_name='br-int',has_traffic_filtering=True,id=c967163a-0504-48ab-88e1-29cb7a413387,network=Network(3e260ad9-fcf1-432b-b71b-b943d4249b65),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc967163a-05') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:43:11 compute-0 nova_compute[250018]: 2026-01-20 14:43:11.742 250022 DEBUG nova.objects.instance [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Lazy-loading 'pci_devices' on Instance uuid 52f99263-c471-4724-813b-98b8a7c3c301 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:43:11 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:43:11.745 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8d:b9:dd 10.100.0.4'], port_security=['fa:16:3e:8d:b9:dd 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '1951432b-2c0c-4d1b-90df-d94dcf9fc32e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b36e9cab-12c6-4a09-9aab-ef2679d875ba', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4b95747114ab4043b93a260387199c91', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'f18b0222-78a5-4c37-8065-772dbe5c63e1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=80e2aa5b-ecb8-4e93-992f-baaef718dd34, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=f6e1030d-5508-4e83-92ce-0a723132eb45) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:43:11 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:43:11.747 160071 INFO neutron.agent.ovn.metadata.agent [-] Port f6e1030d-5508-4e83-92ce-0a723132eb45 in datapath b36e9cab-12c6-4a09-9aab-ef2679d875ba unbound from our chassis
Jan 20 14:43:11 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:43:11.750 160071 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b36e9cab-12c6-4a09-9aab-ef2679d875ba, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 14:43:11 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:43:11.751 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[11e201db-c05d-4a7c-99a7-adb8c230170a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:43:11 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:43:11.751 160071 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b36e9cab-12c6-4a09-9aab-ef2679d875ba namespace which is not needed anymore
Jan 20 14:43:11 compute-0 nova_compute[250018]: 2026-01-20 14:43:11.787 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:43:11 compute-0 nova_compute[250018]: 2026-01-20 14:43:11.797 250022 DEBUG nova.virt.libvirt.driver [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: 52f99263-c471-4724-813b-98b8a7c3c301] End _get_guest_xml xml=<domain type="kvm">
Jan 20 14:43:11 compute-0 nova_compute[250018]:   <uuid>52f99263-c471-4724-813b-98b8a7c3c301</uuid>
Jan 20 14:43:11 compute-0 nova_compute[250018]:   <name>instance-00000051</name>
Jan 20 14:43:11 compute-0 nova_compute[250018]:   <memory>131072</memory>
Jan 20 14:43:11 compute-0 nova_compute[250018]:   <vcpu>1</vcpu>
Jan 20 14:43:11 compute-0 nova_compute[250018]:   <metadata>
Jan 20 14:43:11 compute-0 nova_compute[250018]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 14:43:11 compute-0 nova_compute[250018]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 14:43:11 compute-0 nova_compute[250018]:       <nova:name>tempest-MultipleCreateTestJSON-server-704404998-2</nova:name>
Jan 20 14:43:11 compute-0 nova_compute[250018]:       <nova:creationTime>2026-01-20 14:43:10</nova:creationTime>
Jan 20 14:43:11 compute-0 nova_compute[250018]:       <nova:flavor name="m1.nano">
Jan 20 14:43:11 compute-0 nova_compute[250018]:         <nova:memory>128</nova:memory>
Jan 20 14:43:11 compute-0 nova_compute[250018]:         <nova:disk>1</nova:disk>
Jan 20 14:43:11 compute-0 nova_compute[250018]:         <nova:swap>0</nova:swap>
Jan 20 14:43:11 compute-0 nova_compute[250018]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 14:43:11 compute-0 nova_compute[250018]:         <nova:vcpus>1</nova:vcpus>
Jan 20 14:43:11 compute-0 nova_compute[250018]:       </nova:flavor>
Jan 20 14:43:11 compute-0 nova_compute[250018]:       <nova:owner>
Jan 20 14:43:11 compute-0 nova_compute[250018]:         <nova:user uuid="aa2e7857e85f483eb0d162e2ee8c2e2c">tempest-MultipleCreateTestJSON-164394330-project-member</nova:user>
Jan 20 14:43:11 compute-0 nova_compute[250018]:         <nova:project uuid="a3e022a35f604df2bbc885e498b1e206">tempest-MultipleCreateTestJSON-164394330</nova:project>
Jan 20 14:43:11 compute-0 nova_compute[250018]:       </nova:owner>
Jan 20 14:43:11 compute-0 nova_compute[250018]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 14:43:11 compute-0 nova_compute[250018]:       <nova:ports>
Jan 20 14:43:11 compute-0 nova_compute[250018]:         <nova:port uuid="c967163a-0504-48ab-88e1-29cb7a413387">
Jan 20 14:43:11 compute-0 nova_compute[250018]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Jan 20 14:43:11 compute-0 nova_compute[250018]:         </nova:port>
Jan 20 14:43:11 compute-0 nova_compute[250018]:       </nova:ports>
Jan 20 14:43:11 compute-0 nova_compute[250018]:     </nova:instance>
Jan 20 14:43:11 compute-0 nova_compute[250018]:   </metadata>
Jan 20 14:43:11 compute-0 nova_compute[250018]:   <sysinfo type="smbios">
Jan 20 14:43:11 compute-0 nova_compute[250018]:     <system>
Jan 20 14:43:11 compute-0 nova_compute[250018]:       <entry name="manufacturer">RDO</entry>
Jan 20 14:43:11 compute-0 nova_compute[250018]:       <entry name="product">OpenStack Compute</entry>
Jan 20 14:43:11 compute-0 nova_compute[250018]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 14:43:11 compute-0 nova_compute[250018]:       <entry name="serial">52f99263-c471-4724-813b-98b8a7c3c301</entry>
Jan 20 14:43:11 compute-0 nova_compute[250018]:       <entry name="uuid">52f99263-c471-4724-813b-98b8a7c3c301</entry>
Jan 20 14:43:11 compute-0 nova_compute[250018]:       <entry name="family">Virtual Machine</entry>
Jan 20 14:43:11 compute-0 nova_compute[250018]:     </system>
Jan 20 14:43:11 compute-0 nova_compute[250018]:   </sysinfo>
Jan 20 14:43:11 compute-0 nova_compute[250018]:   <os>
Jan 20 14:43:11 compute-0 nova_compute[250018]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 14:43:11 compute-0 nova_compute[250018]:     <boot dev="hd"/>
Jan 20 14:43:11 compute-0 nova_compute[250018]:     <smbios mode="sysinfo"/>
Jan 20 14:43:11 compute-0 nova_compute[250018]:   </os>
Jan 20 14:43:11 compute-0 nova_compute[250018]:   <features>
Jan 20 14:43:11 compute-0 nova_compute[250018]:     <acpi/>
Jan 20 14:43:11 compute-0 nova_compute[250018]:     <apic/>
Jan 20 14:43:11 compute-0 nova_compute[250018]:     <vmcoreinfo/>
Jan 20 14:43:11 compute-0 nova_compute[250018]:   </features>
Jan 20 14:43:11 compute-0 nova_compute[250018]:   <clock offset="utc">
Jan 20 14:43:11 compute-0 nova_compute[250018]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 14:43:11 compute-0 nova_compute[250018]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 14:43:11 compute-0 nova_compute[250018]:     <timer name="hpet" present="no"/>
Jan 20 14:43:11 compute-0 nova_compute[250018]:   </clock>
Jan 20 14:43:11 compute-0 nova_compute[250018]:   <cpu mode="custom" match="exact">
Jan 20 14:43:11 compute-0 nova_compute[250018]:     <model>Nehalem</model>
Jan 20 14:43:11 compute-0 nova_compute[250018]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 14:43:11 compute-0 nova_compute[250018]:   </cpu>
Jan 20 14:43:11 compute-0 nova_compute[250018]:   <devices>
Jan 20 14:43:11 compute-0 nova_compute[250018]:     <disk type="network" device="disk">
Jan 20 14:43:11 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 14:43:11 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/52f99263-c471-4724-813b-98b8a7c3c301_disk">
Jan 20 14:43:11 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:43:11 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:43:11 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:43:11 compute-0 nova_compute[250018]:       </source>
Jan 20 14:43:11 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 14:43:11 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:43:11 compute-0 nova_compute[250018]:       </auth>
Jan 20 14:43:11 compute-0 nova_compute[250018]:       <target dev="vda" bus="virtio"/>
Jan 20 14:43:11 compute-0 nova_compute[250018]:     </disk>
Jan 20 14:43:11 compute-0 nova_compute[250018]:     <disk type="network" device="cdrom">
Jan 20 14:43:11 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 14:43:11 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/52f99263-c471-4724-813b-98b8a7c3c301_disk.config">
Jan 20 14:43:11 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:43:11 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:43:11 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:43:11 compute-0 nova_compute[250018]:       </source>
Jan 20 14:43:11 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 14:43:11 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:43:11 compute-0 nova_compute[250018]:       </auth>
Jan 20 14:43:11 compute-0 nova_compute[250018]:       <target dev="sda" bus="sata"/>
Jan 20 14:43:11 compute-0 nova_compute[250018]:     </disk>
Jan 20 14:43:11 compute-0 nova_compute[250018]:     <interface type="ethernet">
Jan 20 14:43:11 compute-0 nova_compute[250018]:       <mac address="fa:16:3e:53:7b:54"/>
Jan 20 14:43:11 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 14:43:11 compute-0 nova_compute[250018]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 14:43:11 compute-0 nova_compute[250018]:       <mtu size="1442"/>
Jan 20 14:43:11 compute-0 nova_compute[250018]:       <target dev="tapc967163a-05"/>
Jan 20 14:43:11 compute-0 nova_compute[250018]:     </interface>
Jan 20 14:43:11 compute-0 nova_compute[250018]:     <serial type="pty">
Jan 20 14:43:11 compute-0 nova_compute[250018]:       <log file="/var/lib/nova/instances/52f99263-c471-4724-813b-98b8a7c3c301/console.log" append="off"/>
Jan 20 14:43:11 compute-0 nova_compute[250018]:     </serial>
Jan 20 14:43:11 compute-0 nova_compute[250018]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 14:43:11 compute-0 nova_compute[250018]:     <video>
Jan 20 14:43:11 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 14:43:11 compute-0 nova_compute[250018]:     </video>
Jan 20 14:43:11 compute-0 nova_compute[250018]:     <input type="tablet" bus="usb"/>
Jan 20 14:43:11 compute-0 nova_compute[250018]:     <rng model="virtio">
Jan 20 14:43:11 compute-0 nova_compute[250018]:       <backend model="random">/dev/urandom</backend>
Jan 20 14:43:11 compute-0 nova_compute[250018]:     </rng>
Jan 20 14:43:11 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root"/>
Jan 20 14:43:11 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:43:11 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:43:11 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:43:11 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:43:11 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:43:11 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:43:11 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:43:11 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:43:11 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:43:11 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:43:11 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:43:11 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:43:11 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:43:11 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:43:11 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:43:11 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:43:11 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:43:11 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:43:11 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:43:11 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:43:11 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:43:11 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:43:11 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:43:11 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:43:11 compute-0 nova_compute[250018]:     <controller type="usb" index="0"/>
Jan 20 14:43:11 compute-0 nova_compute[250018]:     <memballoon model="virtio">
Jan 20 14:43:11 compute-0 nova_compute[250018]:       <stats period="10"/>
Jan 20 14:43:11 compute-0 nova_compute[250018]:     </memballoon>
Jan 20 14:43:11 compute-0 nova_compute[250018]:   </devices>
Jan 20 14:43:11 compute-0 nova_compute[250018]: </domain>
Jan 20 14:43:11 compute-0 nova_compute[250018]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 14:43:11 compute-0 nova_compute[250018]: 2026-01-20 14:43:11.799 250022 DEBUG nova.compute.manager [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: 52f99263-c471-4724-813b-98b8a7c3c301] Preparing to wait for external event network-vif-plugged-c967163a-0504-48ab-88e1-29cb7a413387 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 20 14:43:11 compute-0 nova_compute[250018]: 2026-01-20 14:43:11.799 250022 DEBUG oslo_concurrency.lockutils [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Acquiring lock "52f99263-c471-4724-813b-98b8a7c3c301-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:43:11 compute-0 nova_compute[250018]: 2026-01-20 14:43:11.799 250022 DEBUG oslo_concurrency.lockutils [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Lock "52f99263-c471-4724-813b-98b8a7c3c301-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:43:11 compute-0 nova_compute[250018]: 2026-01-20 14:43:11.799 250022 DEBUG oslo_concurrency.lockutils [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Lock "52f99263-c471-4724-813b-98b8a7c3c301-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:43:11 compute-0 nova_compute[250018]: 2026-01-20 14:43:11.800 250022 DEBUG nova.virt.libvirt.vif [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:43:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-704404998',display_name='tempest-MultipleCreateTestJSON-server-704404998-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-704404998-2',id=81,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a3e022a35f604df2bbc885e498b1e206',ramdisk_id='',reservation_id='r-0enj9v31',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-164394330',owner_user_name='tempest-MultipleCreateTestJSON-164394330-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:43:05Z,user_data=None,user_id='aa2e7857e85f483eb0d162e2ee8c2e2c',uuid=52f99263-c471-4724-813b-98b8a7c3c301,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c967163a-0504-48ab-88e1-29cb7a413387", "address": "fa:16:3e:53:7b:54", "network": {"id": "3e260ad9-fcf1-432b-b71b-b943d4249b65", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1425882684-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3e022a35f604df2bbc885e498b1e206", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc967163a-05", "ovs_interfaceid": "c967163a-0504-48ab-88e1-29cb7a413387", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 14:43:11 compute-0 nova_compute[250018]: 2026-01-20 14:43:11.800 250022 DEBUG nova.network.os_vif_util [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Converting VIF {"id": "c967163a-0504-48ab-88e1-29cb7a413387", "address": "fa:16:3e:53:7b:54", "network": {"id": "3e260ad9-fcf1-432b-b71b-b943d4249b65", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1425882684-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3e022a35f604df2bbc885e498b1e206", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc967163a-05", "ovs_interfaceid": "c967163a-0504-48ab-88e1-29cb7a413387", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:43:11 compute-0 nova_compute[250018]: 2026-01-20 14:43:11.801 250022 DEBUG nova.network.os_vif_util [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:53:7b:54,bridge_name='br-int',has_traffic_filtering=True,id=c967163a-0504-48ab-88e1-29cb7a413387,network=Network(3e260ad9-fcf1-432b-b71b-b943d4249b65),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc967163a-05') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:43:11 compute-0 nova_compute[250018]: 2026-01-20 14:43:11.801 250022 DEBUG os_vif [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:53:7b:54,bridge_name='br-int',has_traffic_filtering=True,id=c967163a-0504-48ab-88e1-29cb7a413387,network=Network(3e260ad9-fcf1-432b-b71b-b943d4249b65),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc967163a-05') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 14:43:11 compute-0 nova_compute[250018]: 2026-01-20 14:43:11.801 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:43:11 compute-0 nova_compute[250018]: 2026-01-20 14:43:11.802 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:43:11 compute-0 nova_compute[250018]: 2026-01-20 14:43:11.802 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:43:11 compute-0 nova_compute[250018]: 2026-01-20 14:43:11.806 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:43:11 compute-0 nova_compute[250018]: 2026-01-20 14:43:11.806 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc967163a-05, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:43:11 compute-0 nova_compute[250018]: 2026-01-20 14:43:11.807 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc967163a-05, col_values=(('external_ids', {'iface-id': 'c967163a-0504-48ab-88e1-29cb7a413387', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:53:7b:54', 'vm-uuid': '52f99263-c471-4724-813b-98b8a7c3c301'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:43:11 compute-0 NetworkManager[48960]: <info>  [1768920191.8093] manager: (tapc967163a-05): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/131)
Jan 20 14:43:11 compute-0 systemd[1]: machine-qemu\x2d33\x2dinstance\x2d0000004a.scope: Deactivated successfully.
Jan 20 14:43:11 compute-0 nova_compute[250018]: 2026-01-20 14:43:11.812 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:43:11 compute-0 systemd[1]: machine-qemu\x2d33\x2dinstance\x2d0000004a.scope: Consumed 14.470s CPU time.
Jan 20 14:43:11 compute-0 systemd-machined[216401]: Machine qemu-33-instance-0000004a terminated.
Jan 20 14:43:11 compute-0 nova_compute[250018]: 2026-01-20 14:43:11.819 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:43:11 compute-0 nova_compute[250018]: 2026-01-20 14:43:11.821 250022 INFO os_vif [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:53:7b:54,bridge_name='br-int',has_traffic_filtering=True,id=c967163a-0504-48ab-88e1-29cb7a413387,network=Network(3e260ad9-fcf1-432b-b71b-b943d4249b65),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc967163a-05')
Jan 20 14:43:11 compute-0 neutron-haproxy-ovnmeta-b36e9cab-12c6-4a09-9aab-ef2679d875ba[297198]: [NOTICE]   (297206) : haproxy version is 2.8.14-c23fe91
Jan 20 14:43:11 compute-0 neutron-haproxy-ovnmeta-b36e9cab-12c6-4a09-9aab-ef2679d875ba[297198]: [NOTICE]   (297206) : path to executable is /usr/sbin/haproxy
Jan 20 14:43:11 compute-0 neutron-haproxy-ovnmeta-b36e9cab-12c6-4a09-9aab-ef2679d875ba[297198]: [WARNING]  (297206) : Exiting Master process...
Jan 20 14:43:11 compute-0 nova_compute[250018]: 2026-01-20 14:43:11.879 250022 DEBUG nova.virt.libvirt.driver [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:43:11 compute-0 nova_compute[250018]: 2026-01-20 14:43:11.880 250022 DEBUG nova.virt.libvirt.driver [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:43:11 compute-0 nova_compute[250018]: 2026-01-20 14:43:11.880 250022 DEBUG nova.virt.libvirt.driver [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] No VIF found with MAC fa:16:3e:53:7b:54, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 14:43:11 compute-0 neutron-haproxy-ovnmeta-b36e9cab-12c6-4a09-9aab-ef2679d875ba[297198]: [WARNING]  (297206) : Exiting Master process...
Jan 20 14:43:11 compute-0 nova_compute[250018]: 2026-01-20 14:43:11.880 250022 INFO nova.virt.libvirt.driver [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: 52f99263-c471-4724-813b-98b8a7c3c301] Using config drive
Jan 20 14:43:11 compute-0 neutron-haproxy-ovnmeta-b36e9cab-12c6-4a09-9aab-ef2679d875ba[297198]: [ALERT]    (297206) : Current worker (297211) exited with code 143 (Terminated)
Jan 20 14:43:11 compute-0 neutron-haproxy-ovnmeta-b36e9cab-12c6-4a09-9aab-ef2679d875ba[297198]: [WARNING]  (297206) : All workers exited. Exiting... (0)
Jan 20 14:43:11 compute-0 systemd[1]: libpod-8df81f7c6031850bd65b855da51c408c39551b94c180a66807b33fe518490460.scope: Deactivated successfully.
Jan 20 14:43:11 compute-0 podman[297828]: 2026-01-20 14:43:11.891170458 +0000 UTC m=+0.048885236 container died 8df81f7c6031850bd65b855da51c408c39551b94c180a66807b33fe518490460 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b36e9cab-12c6-4a09-9aab-ef2679d875ba, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Jan 20 14:43:11 compute-0 nova_compute[250018]: 2026-01-20 14:43:11.908 250022 DEBUG nova.storage.rbd_utils [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] rbd image 52f99263-c471-4724-813b-98b8a7c3c301_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:43:11 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8df81f7c6031850bd65b855da51c408c39551b94c180a66807b33fe518490460-userdata-shm.mount: Deactivated successfully.
Jan 20 14:43:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-63fde834a0badb2a04c6fe9d3921fb2afc420bda1e9546d06901fab47a52f211-merged.mount: Deactivated successfully.
Jan 20 14:43:11 compute-0 podman[297828]: 2026-01-20 14:43:11.939002464 +0000 UTC m=+0.096717242 container cleanup 8df81f7c6031850bd65b855da51c408c39551b94c180a66807b33fe518490460 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b36e9cab-12c6-4a09-9aab-ef2679d875ba, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 20 14:43:11 compute-0 nova_compute[250018]: 2026-01-20 14:43:11.939 250022 INFO nova.virt.libvirt.driver [-] [instance: 1951432b-2c0c-4d1b-90df-d94dcf9fc32e] Instance destroyed successfully.
Jan 20 14:43:11 compute-0 nova_compute[250018]: 2026-01-20 14:43:11.940 250022 DEBUG nova.objects.instance [None req-96c0cd42-92e0-42b5-9c4c-1cac2220f408 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Lazy-loading 'resources' on Instance uuid 1951432b-2c0c-4d1b-90df-d94dcf9fc32e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:43:11 compute-0 systemd[1]: libpod-conmon-8df81f7c6031850bd65b855da51c408c39551b94c180a66807b33fe518490460.scope: Deactivated successfully.
Jan 20 14:43:11 compute-0 nova_compute[250018]: 2026-01-20 14:43:11.956 250022 DEBUG nova.virt.libvirt.vif [None req-96c0cd42-92e0-42b5-9c4c-1cac2220f408 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:42:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-164859619',display_name='tempest-ListServerFiltersTestJSON-instance-164859619',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-164859619',id=74,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:42:23Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4b95747114ab4043b93a260387199c91',ramdisk_id='',reservation_id='r-0ikbhbdb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-2126845308',owner_user_name='tempest-ListServerFiltersTestJSON-2126845308-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T14:42:54Z,user_data=None,user_id='ff99fc8eda0640928c6e82981dacb266',uuid=1951432b-2c0c-4d1b-90df-d94dcf9fc32e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f6e1030d-5508-4e83-92ce-0a723132eb45", "address": "fa:16:3e:8d:b9:dd", "network": {"id": "b36e9cab-12c6-4a09-9aab-ef2679d875ba", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-432532406-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b95747114ab4043b93a260387199c91", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf6e1030d-55", "ovs_interfaceid": "f6e1030d-5508-4e83-92ce-0a723132eb45", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 14:43:11 compute-0 nova_compute[250018]: 2026-01-20 14:43:11.957 250022 DEBUG nova.network.os_vif_util [None req-96c0cd42-92e0-42b5-9c4c-1cac2220f408 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Converting VIF {"id": "f6e1030d-5508-4e83-92ce-0a723132eb45", "address": "fa:16:3e:8d:b9:dd", "network": {"id": "b36e9cab-12c6-4a09-9aab-ef2679d875ba", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-432532406-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b95747114ab4043b93a260387199c91", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf6e1030d-55", "ovs_interfaceid": "f6e1030d-5508-4e83-92ce-0a723132eb45", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:43:11 compute-0 nova_compute[250018]: 2026-01-20 14:43:11.958 250022 DEBUG nova.network.os_vif_util [None req-96c0cd42-92e0-42b5-9c4c-1cac2220f408 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8d:b9:dd,bridge_name='br-int',has_traffic_filtering=True,id=f6e1030d-5508-4e83-92ce-0a723132eb45,network=Network(b36e9cab-12c6-4a09-9aab-ef2679d875ba),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf6e1030d-55') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:43:11 compute-0 nova_compute[250018]: 2026-01-20 14:43:11.958 250022 DEBUG os_vif [None req-96c0cd42-92e0-42b5-9c4c-1cac2220f408 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8d:b9:dd,bridge_name='br-int',has_traffic_filtering=True,id=f6e1030d-5508-4e83-92ce-0a723132eb45,network=Network(b36e9cab-12c6-4a09-9aab-ef2679d875ba),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf6e1030d-55') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 14:43:11 compute-0 nova_compute[250018]: 2026-01-20 14:43:11.966 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:43:11 compute-0 nova_compute[250018]: 2026-01-20 14:43:11.966 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf6e1030d-55, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:43:11 compute-0 nova_compute[250018]: 2026-01-20 14:43:11.970 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:43:11 compute-0 nova_compute[250018]: 2026-01-20 14:43:11.972 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:43:11 compute-0 nova_compute[250018]: 2026-01-20 14:43:11.973 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:43:11 compute-0 nova_compute[250018]: 2026-01-20 14:43:11.979 250022 INFO os_vif [None req-96c0cd42-92e0-42b5-9c4c-1cac2220f408 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8d:b9:dd,bridge_name='br-int',has_traffic_filtering=True,id=f6e1030d-5508-4e83-92ce-0a723132eb45,network=Network(b36e9cab-12c6-4a09-9aab-ef2679d875ba),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf6e1030d-55')
Jan 20 14:43:12 compute-0 podman[297889]: 2026-01-20 14:43:12.007332236 +0000 UTC m=+0.047759926 container remove 8df81f7c6031850bd65b855da51c408c39551b94c180a66807b33fe518490460 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b36e9cab-12c6-4a09-9aab-ef2679d875ba, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 20 14:43:12 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:43:12.014 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[4b9adb82-043e-40da-8148-7981b1bac814]: (4, ('Tue Jan 20 02:43:11 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-b36e9cab-12c6-4a09-9aab-ef2679d875ba (8df81f7c6031850bd65b855da51c408c39551b94c180a66807b33fe518490460)\n8df81f7c6031850bd65b855da51c408c39551b94c180a66807b33fe518490460\nTue Jan 20 02:43:11 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-b36e9cab-12c6-4a09-9aab-ef2679d875ba (8df81f7c6031850bd65b855da51c408c39551b94c180a66807b33fe518490460)\n8df81f7c6031850bd65b855da51c408c39551b94c180a66807b33fe518490460\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:43:12 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:43:12.015 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[de557e98-032d-4bec-933c-ccb99589d470]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:43:12 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:43:12.016 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb36e9cab-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:43:12 compute-0 nova_compute[250018]: 2026-01-20 14:43:12.018 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:43:12 compute-0 kernel: tapb36e9cab-10: left promiscuous mode
Jan 20 14:43:12 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:43:12 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:43:12 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:43:12.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:43:12 compute-0 nova_compute[250018]: 2026-01-20 14:43:12.044 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:43:12 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:43:12.047 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[23d1cc2e-6a82-4019-856a-77a663489421]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:43:12 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:43:12.060 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[a400d3fd-5e67-408e-b483-69ffe315df33]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:43:12 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:43:12.061 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[5bb760e2-0228-49ea-9a26-b9f6a2c11995]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:43:12 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:43:12.079 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[65769ae4-9fb8-4c2c-ae29-6f930254790b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 611476, 'reachable_time': 42442, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 297927, 'error': None, 'target': 'ovnmeta-b36e9cab-12c6-4a09-9aab-ef2679d875ba', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:43:12 compute-0 systemd[1]: run-netns-ovnmeta\x2db36e9cab\x2d12c6\x2d4a09\x2d9aab\x2def2679d875ba.mount: Deactivated successfully.
Jan 20 14:43:12 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:43:12.083 160517 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b36e9cab-12c6-4a09-9aab-ef2679d875ba deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 14:43:12 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:43:12.083 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[cf781309-e60f-4488-963e-7b225ac033f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:43:12 compute-0 nova_compute[250018]: 2026-01-20 14:43:12.142 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:43:12 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/507388340' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:43:12 compute-0 ceph-mon[74360]: pgmap v1645: 321 pgs: 321 active+clean; 288 MiB data, 816 MiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 5.3 MiB/s wr, 308 op/s
Jan 20 14:43:12 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/85495823' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:43:12 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/3879070571' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:43:12 compute-0 nova_compute[250018]: 2026-01-20 14:43:12.476 250022 INFO nova.virt.libvirt.driver [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: 52f99263-c471-4724-813b-98b8a7c3c301] Creating config drive at /var/lib/nova/instances/52f99263-c471-4724-813b-98b8a7c3c301/disk.config
Jan 20 14:43:12 compute-0 nova_compute[250018]: 2026-01-20 14:43:12.481 250022 DEBUG oslo_concurrency.processutils [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/52f99263-c471-4724-813b-98b8a7c3c301/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp33iaak24 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:43:12 compute-0 nova_compute[250018]: 2026-01-20 14:43:12.604 250022 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768920177.6030834, 62bbc690-cb71-4cdc-93e1-cdf395abbae4 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:43:12 compute-0 nova_compute[250018]: 2026-01-20 14:43:12.605 250022 INFO nova.compute.manager [-] [instance: 62bbc690-cb71-4cdc-93e1-cdf395abbae4] VM Stopped (Lifecycle Event)
Jan 20 14:43:12 compute-0 nova_compute[250018]: 2026-01-20 14:43:12.614 250022 DEBUG nova.compute.manager [req-1c1a303d-266b-43d8-ba6e-07d16209eeaf req-ee8aacef-b97e-4cfe-a516-541dc56549fc 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 1951432b-2c0c-4d1b-90df-d94dcf9fc32e] Received event network-vif-unplugged-f6e1030d-5508-4e83-92ce-0a723132eb45 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:43:12 compute-0 nova_compute[250018]: 2026-01-20 14:43:12.615 250022 DEBUG oslo_concurrency.lockutils [req-1c1a303d-266b-43d8-ba6e-07d16209eeaf req-ee8aacef-b97e-4cfe-a516-541dc56549fc 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "1951432b-2c0c-4d1b-90df-d94dcf9fc32e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:43:12 compute-0 nova_compute[250018]: 2026-01-20 14:43:12.615 250022 DEBUG oslo_concurrency.lockutils [req-1c1a303d-266b-43d8-ba6e-07d16209eeaf req-ee8aacef-b97e-4cfe-a516-541dc56549fc 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "1951432b-2c0c-4d1b-90df-d94dcf9fc32e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:43:12 compute-0 nova_compute[250018]: 2026-01-20 14:43:12.616 250022 DEBUG oslo_concurrency.lockutils [req-1c1a303d-266b-43d8-ba6e-07d16209eeaf req-ee8aacef-b97e-4cfe-a516-541dc56549fc 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "1951432b-2c0c-4d1b-90df-d94dcf9fc32e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:43:12 compute-0 nova_compute[250018]: 2026-01-20 14:43:12.616 250022 DEBUG nova.compute.manager [req-1c1a303d-266b-43d8-ba6e-07d16209eeaf req-ee8aacef-b97e-4cfe-a516-541dc56549fc 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 1951432b-2c0c-4d1b-90df-d94dcf9fc32e] No waiting events found dispatching network-vif-unplugged-f6e1030d-5508-4e83-92ce-0a723132eb45 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:43:12 compute-0 nova_compute[250018]: 2026-01-20 14:43:12.617 250022 DEBUG nova.compute.manager [req-1c1a303d-266b-43d8-ba6e-07d16209eeaf req-ee8aacef-b97e-4cfe-a516-541dc56549fc 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 1951432b-2c0c-4d1b-90df-d94dcf9fc32e] Received event network-vif-unplugged-f6e1030d-5508-4e83-92ce-0a723132eb45 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 20 14:43:12 compute-0 nova_compute[250018]: 2026-01-20 14:43:12.617 250022 DEBUG nova.compute.manager [req-1c1a303d-266b-43d8-ba6e-07d16209eeaf req-ee8aacef-b97e-4cfe-a516-541dc56549fc 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 1951432b-2c0c-4d1b-90df-d94dcf9fc32e] Received event network-vif-plugged-f6e1030d-5508-4e83-92ce-0a723132eb45 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:43:12 compute-0 nova_compute[250018]: 2026-01-20 14:43:12.618 250022 DEBUG oslo_concurrency.lockutils [req-1c1a303d-266b-43d8-ba6e-07d16209eeaf req-ee8aacef-b97e-4cfe-a516-541dc56549fc 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "1951432b-2c0c-4d1b-90df-d94dcf9fc32e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:43:12 compute-0 nova_compute[250018]: 2026-01-20 14:43:12.618 250022 DEBUG oslo_concurrency.lockutils [req-1c1a303d-266b-43d8-ba6e-07d16209eeaf req-ee8aacef-b97e-4cfe-a516-541dc56549fc 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "1951432b-2c0c-4d1b-90df-d94dcf9fc32e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:43:12 compute-0 nova_compute[250018]: 2026-01-20 14:43:12.618 250022 DEBUG oslo_concurrency.lockutils [req-1c1a303d-266b-43d8-ba6e-07d16209eeaf req-ee8aacef-b97e-4cfe-a516-541dc56549fc 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "1951432b-2c0c-4d1b-90df-d94dcf9fc32e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:43:12 compute-0 nova_compute[250018]: 2026-01-20 14:43:12.619 250022 DEBUG nova.compute.manager [req-1c1a303d-266b-43d8-ba6e-07d16209eeaf req-ee8aacef-b97e-4cfe-a516-541dc56549fc 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 1951432b-2c0c-4d1b-90df-d94dcf9fc32e] No waiting events found dispatching network-vif-plugged-f6e1030d-5508-4e83-92ce-0a723132eb45 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:43:12 compute-0 nova_compute[250018]: 2026-01-20 14:43:12.619 250022 WARNING nova.compute.manager [req-1c1a303d-266b-43d8-ba6e-07d16209eeaf req-ee8aacef-b97e-4cfe-a516-541dc56549fc 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 1951432b-2c0c-4d1b-90df-d94dcf9fc32e] Received unexpected event network-vif-plugged-f6e1030d-5508-4e83-92ce-0a723132eb45 for instance with vm_state active and task_state deleting.
Jan 20 14:43:12 compute-0 nova_compute[250018]: 2026-01-20 14:43:12.630 250022 DEBUG nova.compute.manager [None req-18f3f7d4-fbdc-4ca4-9a0d-5062ab42d526 - - - - - -] [instance: 62bbc690-cb71-4cdc-93e1-cdf395abbae4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:43:12 compute-0 nova_compute[250018]: 2026-01-20 14:43:12.632 250022 DEBUG oslo_concurrency.processutils [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/52f99263-c471-4724-813b-98b8a7c3c301/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp33iaak24" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:43:12 compute-0 nova_compute[250018]: 2026-01-20 14:43:12.673 250022 DEBUG nova.storage.rbd_utils [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] rbd image 52f99263-c471-4724-813b-98b8a7c3c301_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:43:12 compute-0 nova_compute[250018]: 2026-01-20 14:43:12.677 250022 DEBUG oslo_concurrency.processutils [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/52f99263-c471-4724-813b-98b8a7c3c301/disk.config 52f99263-c471-4724-813b-98b8a7c3c301_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:43:12 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:43:12 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:43:12 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:43:12.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:43:13 compute-0 nova_compute[250018]: 2026-01-20 14:43:13.117 250022 DEBUG oslo_concurrency.processutils [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/52f99263-c471-4724-813b-98b8a7c3c301/disk.config 52f99263-c471-4724-813b-98b8a7c3c301_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:43:13 compute-0 nova_compute[250018]: 2026-01-20 14:43:13.118 250022 INFO nova.virt.libvirt.driver [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: 52f99263-c471-4724-813b-98b8a7c3c301] Deleting local config drive /var/lib/nova/instances/52f99263-c471-4724-813b-98b8a7c3c301/disk.config because it was imported into RBD.
Jan 20 14:43:13 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:43:13 compute-0 kernel: tapc967163a-05: entered promiscuous mode
Jan 20 14:43:13 compute-0 systemd-udevd[297804]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:43:13 compute-0 NetworkManager[48960]: <info>  [1768920193.2008] manager: (tapc967163a-05): new Tun device (/org/freedesktop/NetworkManager/Devices/132)
Jan 20 14:43:13 compute-0 nova_compute[250018]: 2026-01-20 14:43:13.200 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:43:13 compute-0 ovn_controller[148666]: 2026-01-20T14:43:13Z|00260|binding|INFO|Claiming lport c967163a-0504-48ab-88e1-29cb7a413387 for this chassis.
Jan 20 14:43:13 compute-0 ovn_controller[148666]: 2026-01-20T14:43:13Z|00261|binding|INFO|c967163a-0504-48ab-88e1-29cb7a413387: Claiming fa:16:3e:53:7b:54 10.100.0.4
Jan 20 14:43:13 compute-0 NetworkManager[48960]: <info>  [1768920193.2115] device (tapc967163a-05): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 14:43:13 compute-0 NetworkManager[48960]: <info>  [1768920193.2146] device (tapc967163a-05): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 14:43:13 compute-0 nova_compute[250018]: 2026-01-20 14:43:13.218 250022 DEBUG nova.network.neutron [req-9f50f3f6-b7da-4aa7-8ad8-55b3cea2142b req-45372e99-a758-4387-b243-b274a0003771 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 52f99263-c471-4724-813b-98b8a7c3c301] Updated VIF entry in instance network info cache for port c967163a-0504-48ab-88e1-29cb7a413387. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 14:43:13 compute-0 nova_compute[250018]: 2026-01-20 14:43:13.219 250022 DEBUG nova.network.neutron [req-9f50f3f6-b7da-4aa7-8ad8-55b3cea2142b req-45372e99-a758-4387-b243-b274a0003771 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 52f99263-c471-4724-813b-98b8a7c3c301] Updating instance_info_cache with network_info: [{"id": "c967163a-0504-48ab-88e1-29cb7a413387", "address": "fa:16:3e:53:7b:54", "network": {"id": "3e260ad9-fcf1-432b-b71b-b943d4249b65", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1425882684-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3e022a35f604df2bbc885e498b1e206", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc967163a-05", "ovs_interfaceid": "c967163a-0504-48ab-88e1-29cb7a413387", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:43:13 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:43:13.220 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:53:7b:54 10.100.0.4'], port_security=['fa:16:3e:53:7b:54 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '52f99263-c471-4724-813b-98b8a7c3c301', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3e260ad9-fcf1-432b-b71b-b943d4249b65', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a3e022a35f604df2bbc885e498b1e206', 'neutron:revision_number': '2', 'neutron:security_group_ids': '885819b7-5060-4b73-ad54-3f31f821195c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=89e607b1-9e39-47f0-8180-8aaef3a2a0e9, chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=c967163a-0504-48ab-88e1-29cb7a413387) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:43:13 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:43:13.223 160071 INFO neutron.agent.ovn.metadata.agent [-] Port c967163a-0504-48ab-88e1-29cb7a413387 in datapath 3e260ad9-fcf1-432b-b71b-b943d4249b65 bound to our chassis
Jan 20 14:43:13 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:43:13.226 160071 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3e260ad9-fcf1-432b-b71b-b943d4249b65
Jan 20 14:43:13 compute-0 ovn_controller[148666]: 2026-01-20T14:43:13Z|00262|binding|INFO|Setting lport c967163a-0504-48ab-88e1-29cb7a413387 up in Southbound
Jan 20 14:43:13 compute-0 ovn_controller[148666]: 2026-01-20T14:43:13Z|00263|binding|INFO|Setting lport c967163a-0504-48ab-88e1-29cb7a413387 ovn-installed in OVS
Jan 20 14:43:13 compute-0 nova_compute[250018]: 2026-01-20 14:43:13.233 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:43:13 compute-0 systemd-machined[216401]: New machine qemu-35-instance-00000051.
Jan 20 14:43:13 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:43:13.242 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[d07c713a-e435-4780-8ade-fa51ca14ea51]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:43:13 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:43:13.243 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3e260ad9-f1 in ovnmeta-3e260ad9-fcf1-432b-b71b-b943d4249b65 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 14:43:13 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:43:13.245 257604 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3e260ad9-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 14:43:13 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:43:13.245 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[abf3dda4-8fa1-4913-9a58-f7b24b181773]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:43:13 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:43:13.246 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[c982ff0f-06a6-44f2-8cf1-cfaec0f4cc0f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:43:13 compute-0 systemd[1]: Started Virtual Machine qemu-35-instance-00000051.
Jan 20 14:43:13 compute-0 nova_compute[250018]: 2026-01-20 14:43:13.259 250022 DEBUG oslo_concurrency.lockutils [req-9f50f3f6-b7da-4aa7-8ad8-55b3cea2142b req-45372e99-a758-4387-b243-b274a0003771 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-52f99263-c471-4724-813b-98b8a7c3c301" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:43:13 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:43:13.259 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[4d3424fb-d7e3-47c8-a45c-59d0c683583f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:43:13 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:43:13.344 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[41e4adfe-eb2f-46c6-922e-a337d6e24a29]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:43:13 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:43:13.376 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[6d495b3b-fb98-4a94-9e95-0e6c4cfb88b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:43:13 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:43:13.382 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[3440efe8-b88a-4a85-8d56-215ecd6b7e29]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:43:13 compute-0 NetworkManager[48960]: <info>  [1768920193.3839] manager: (tap3e260ad9-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/133)
Jan 20 14:43:13 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:43:13.416 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[40c45e72-3095-44d4-9125-ad2f72d2877d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:43:13 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:43:13.420 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[998dc264-64e4-4f4b-b84c-27b46c8572e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:43:13 compute-0 nova_compute[250018]: 2026-01-20 14:43:13.425 250022 INFO nova.virt.libvirt.driver [None req-96c0cd42-92e0-42b5-9c4c-1cac2220f408 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] [instance: 1951432b-2c0c-4d1b-90df-d94dcf9fc32e] Deleting instance files /var/lib/nova/instances/1951432b-2c0c-4d1b-90df-d94dcf9fc32e_del
Jan 20 14:43:13 compute-0 nova_compute[250018]: 2026-01-20 14:43:13.426 250022 INFO nova.virt.libvirt.driver [None req-96c0cd42-92e0-42b5-9c4c-1cac2220f408 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] [instance: 1951432b-2c0c-4d1b-90df-d94dcf9fc32e] Deletion of /var/lib/nova/instances/1951432b-2c0c-4d1b-90df-d94dcf9fc32e_del complete
Jan 20 14:43:13 compute-0 NetworkManager[48960]: <info>  [1768920193.4510] device (tap3e260ad9-f0): carrier: link connected
Jan 20 14:43:13 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:43:13.459 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[580221c0-2368-458d-8c2f-af1eb6415a36]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:43:13 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:43:13.473 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[36370f5e-58ab-404b-a69a-e7f8b7a53190]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3e260ad9-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:88:13:4a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 83], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 613417, 'reachable_time': 37130, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 298013, 'error': None, 'target': 'ovnmeta-3e260ad9-fcf1-432b-b71b-b943d4249b65', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:43:13 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:43:13.499 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[a6cf7887-e81c-49a8-b2ac-8bdd474a84d3]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe88:134a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 613417, 'tstamp': 613417}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 298014, 'error': None, 'target': 'ovnmeta-3e260ad9-fcf1-432b-b71b-b943d4249b65', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:43:13 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:43:13.514 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[5b6c152b-2ae5-48ff-87ae-c1b78ca33d6c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3e260ad9-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:88:13:4a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 83], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 613417, 'reachable_time': 37130, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 298015, 'error': None, 'target': 'ovnmeta-3e260ad9-fcf1-432b-b71b-b943d4249b65', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:43:13 compute-0 nova_compute[250018]: 2026-01-20 14:43:13.515 250022 INFO nova.compute.manager [None req-96c0cd42-92e0-42b5-9c4c-1cac2220f408 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] [instance: 1951432b-2c0c-4d1b-90df-d94dcf9fc32e] Took 1.86 seconds to destroy the instance on the hypervisor.
Jan 20 14:43:13 compute-0 nova_compute[250018]: 2026-01-20 14:43:13.515 250022 DEBUG oslo.service.loopingcall [None req-96c0cd42-92e0-42b5-9c4c-1cac2220f408 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 14:43:13 compute-0 nova_compute[250018]: 2026-01-20 14:43:13.516 250022 DEBUG nova.compute.manager [-] [instance: 1951432b-2c0c-4d1b-90df-d94dcf9fc32e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 14:43:13 compute-0 nova_compute[250018]: 2026-01-20 14:43:13.516 250022 DEBUG nova.network.neutron [-] [instance: 1951432b-2c0c-4d1b-90df-d94dcf9fc32e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 14:43:13 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:43:13.560 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[d7b66284-99d1-4976-af5a-ad03d1238a81]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:43:13 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:43:13.621 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[b531cb1d-6aeb-4da2-948f-0ff8b1002d14]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:43:13 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:43:13.623 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3e260ad9-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:43:13 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:43:13.623 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:43:13 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:43:13.624 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3e260ad9-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:43:13 compute-0 kernel: tap3e260ad9-f0: entered promiscuous mode
Jan 20 14:43:13 compute-0 NetworkManager[48960]: <info>  [1768920193.6271] manager: (tap3e260ad9-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/134)
Jan 20 14:43:13 compute-0 nova_compute[250018]: 2026-01-20 14:43:13.627 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:43:13 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:43:13.635 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3e260ad9-f0, col_values=(('external_ids', {'iface-id': '2b7c295d-f074-4cfb-aca0-08946126ddbc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:43:13 compute-0 ovn_controller[148666]: 2026-01-20T14:43:13Z|00264|binding|INFO|Releasing lport 2b7c295d-f074-4cfb-aca0-08946126ddbc from this chassis (sb_readonly=0)
Jan 20 14:43:13 compute-0 nova_compute[250018]: 2026-01-20 14:43:13.637 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:43:13 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:43:13.642 160071 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3e260ad9-fcf1-432b-b71b-b943d4249b65.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3e260ad9-fcf1-432b-b71b-b943d4249b65.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 14:43:13 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:43:13.643 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[4d58a861-f285-4991-a895-4f63bcbeac97]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:43:13 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:43:13.644 160071 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 14:43:13 compute-0 ovn_metadata_agent[160049]: global
Jan 20 14:43:13 compute-0 ovn_metadata_agent[160049]:     log         /dev/log local0 debug
Jan 20 14:43:13 compute-0 ovn_metadata_agent[160049]:     log-tag     haproxy-metadata-proxy-3e260ad9-fcf1-432b-b71b-b943d4249b65
Jan 20 14:43:13 compute-0 ovn_metadata_agent[160049]:     user        root
Jan 20 14:43:13 compute-0 ovn_metadata_agent[160049]:     group       root
Jan 20 14:43:13 compute-0 ovn_metadata_agent[160049]:     maxconn     1024
Jan 20 14:43:13 compute-0 ovn_metadata_agent[160049]:     pidfile     /var/lib/neutron/external/pids/3e260ad9-fcf1-432b-b71b-b943d4249b65.pid.haproxy
Jan 20 14:43:13 compute-0 ovn_metadata_agent[160049]:     daemon
Jan 20 14:43:13 compute-0 ovn_metadata_agent[160049]: 
Jan 20 14:43:13 compute-0 ovn_metadata_agent[160049]: defaults
Jan 20 14:43:13 compute-0 ovn_metadata_agent[160049]:     log global
Jan 20 14:43:13 compute-0 ovn_metadata_agent[160049]:     mode http
Jan 20 14:43:13 compute-0 ovn_metadata_agent[160049]:     option httplog
Jan 20 14:43:13 compute-0 ovn_metadata_agent[160049]:     option dontlognull
Jan 20 14:43:13 compute-0 ovn_metadata_agent[160049]:     option http-server-close
Jan 20 14:43:13 compute-0 ovn_metadata_agent[160049]:     option forwardfor
Jan 20 14:43:13 compute-0 ovn_metadata_agent[160049]:     retries                 3
Jan 20 14:43:13 compute-0 ovn_metadata_agent[160049]:     timeout http-request    30s
Jan 20 14:43:13 compute-0 ovn_metadata_agent[160049]:     timeout connect         30s
Jan 20 14:43:13 compute-0 ovn_metadata_agent[160049]:     timeout client          32s
Jan 20 14:43:13 compute-0 ovn_metadata_agent[160049]:     timeout server          32s
Jan 20 14:43:13 compute-0 ovn_metadata_agent[160049]:     timeout http-keep-alive 30s
Jan 20 14:43:13 compute-0 ovn_metadata_agent[160049]: 
Jan 20 14:43:13 compute-0 ovn_metadata_agent[160049]: 
Jan 20 14:43:13 compute-0 ovn_metadata_agent[160049]: listen listener
Jan 20 14:43:13 compute-0 ovn_metadata_agent[160049]:     bind 169.254.169.254:80
Jan 20 14:43:13 compute-0 ovn_metadata_agent[160049]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 14:43:13 compute-0 ovn_metadata_agent[160049]:     http-request add-header X-OVN-Network-ID 3e260ad9-fcf1-432b-b71b-b943d4249b65
Jan 20 14:43:13 compute-0 ovn_metadata_agent[160049]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 14:43:13 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:43:13.645 160071 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-3e260ad9-fcf1-432b-b71b-b943d4249b65', 'env', 'PROCESS_TAG=haproxy-3e260ad9-fcf1-432b-b71b-b943d4249b65', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3e260ad9-fcf1-432b-b71b-b943d4249b65.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 14:43:13 compute-0 nova_compute[250018]: 2026-01-20 14:43:13.651 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:43:13 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1646: 321 pgs: 321 active+clean; 260 MiB data, 795 MiB used, 20 GiB / 21 GiB avail; 636 KiB/s rd, 5.3 MiB/s wr, 205 op/s
Jan 20 14:43:14 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:43:14 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.002000053s ======
Jan 20 14:43:14 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:43:14.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Jan 20 14:43:14 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/1554951386' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:43:14 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/1120863099' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:43:14 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/1120863099' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:43:14 compute-0 ceph-mon[74360]: pgmap v1646: 321 pgs: 321 active+clean; 260 MiB data, 795 MiB used, 20 GiB / 21 GiB avail; 636 KiB/s rd, 5.3 MiB/s wr, 205 op/s
Jan 20 14:43:14 compute-0 podman[298055]: 2026-01-20 14:43:14.046905664 +0000 UTC m=+0.067726726 container create 7f5bd66fa6a67c8bb6396083178aab8d334e1f7ceb174d6c2a50c93d9378bb8f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3e260ad9-fcf1-432b-b71b-b943d4249b65, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 20 14:43:14 compute-0 systemd[1]: Started libpod-conmon-7f5bd66fa6a67c8bb6396083178aab8d334e1f7ceb174d6c2a50c93d9378bb8f.scope.
Jan 20 14:43:14 compute-0 podman[298055]: 2026-01-20 14:43:14.013710845 +0000 UTC m=+0.034531927 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 14:43:14 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:43:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e92ea9b39b8d58c47f15468f9a5374250be3485b3477ea03e106f82a7e33cacb/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 14:43:14 compute-0 podman[298055]: 2026-01-20 14:43:14.184637677 +0000 UTC m=+0.205458739 container init 7f5bd66fa6a67c8bb6396083178aab8d334e1f7ceb174d6c2a50c93d9378bb8f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3e260ad9-fcf1-432b-b71b-b943d4249b65, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 20 14:43:14 compute-0 podman[298055]: 2026-01-20 14:43:14.19066233 +0000 UTC m=+0.211483432 container start 7f5bd66fa6a67c8bb6396083178aab8d334e1f7ceb174d6c2a50c93d9378bb8f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3e260ad9-fcf1-432b-b71b-b943d4249b65, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Jan 20 14:43:14 compute-0 neutron-haproxy-ovnmeta-3e260ad9-fcf1-432b-b71b-b943d4249b65[298099]: [NOTICE]   (298108) : New worker (298111) forked
Jan 20 14:43:14 compute-0 neutron-haproxy-ovnmeta-3e260ad9-fcf1-432b-b71b-b943d4249b65[298099]: [NOTICE]   (298108) : Loading success.
Jan 20 14:43:14 compute-0 nova_compute[250018]: 2026-01-20 14:43:14.243 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768920194.2432423, 52f99263-c471-4724-813b-98b8a7c3c301 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:43:14 compute-0 nova_compute[250018]: 2026-01-20 14:43:14.244 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 52f99263-c471-4724-813b-98b8a7c3c301] VM Started (Lifecycle Event)
Jan 20 14:43:14 compute-0 nova_compute[250018]: 2026-01-20 14:43:14.283 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 52f99263-c471-4724-813b-98b8a7c3c301] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:43:14 compute-0 nova_compute[250018]: 2026-01-20 14:43:14.291 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768920194.2434099, 52f99263-c471-4724-813b-98b8a7c3c301 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:43:14 compute-0 nova_compute[250018]: 2026-01-20 14:43:14.291 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 52f99263-c471-4724-813b-98b8a7c3c301] VM Paused (Lifecycle Event)
Jan 20 14:43:14 compute-0 nova_compute[250018]: 2026-01-20 14:43:14.301 250022 DEBUG nova.network.neutron [-] [instance: 1951432b-2c0c-4d1b-90df-d94dcf9fc32e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:43:14 compute-0 nova_compute[250018]: 2026-01-20 14:43:14.325 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 52f99263-c471-4724-813b-98b8a7c3c301] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:43:14 compute-0 nova_compute[250018]: 2026-01-20 14:43:14.330 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 52f99263-c471-4724-813b-98b8a7c3c301] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:43:14 compute-0 nova_compute[250018]: 2026-01-20 14:43:14.334 250022 INFO nova.compute.manager [-] [instance: 1951432b-2c0c-4d1b-90df-d94dcf9fc32e] Took 0.82 seconds to deallocate network for instance.
Jan 20 14:43:14 compute-0 nova_compute[250018]: 2026-01-20 14:43:14.370 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 52f99263-c471-4724-813b-98b8a7c3c301] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:43:14 compute-0 nova_compute[250018]: 2026-01-20 14:43:14.438 250022 DEBUG oslo_concurrency.lockutils [None req-96c0cd42-92e0-42b5-9c4c-1cac2220f408 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:43:14 compute-0 nova_compute[250018]: 2026-01-20 14:43:14.439 250022 DEBUG oslo_concurrency.lockutils [None req-96c0cd42-92e0-42b5-9c4c-1cac2220f408 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:43:14 compute-0 nova_compute[250018]: 2026-01-20 14:43:14.456 250022 DEBUG nova.compute.manager [req-ce0a2de5-0b57-41c1-a654-32e6ceecd462 req-62f28204-b719-411b-b26c-695ea647bfa2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 1951432b-2c0c-4d1b-90df-d94dcf9fc32e] Received event network-vif-deleted-f6e1030d-5508-4e83-92ce-0a723132eb45 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:43:14 compute-0 nova_compute[250018]: 2026-01-20 14:43:14.478 250022 DEBUG nova.scheduler.client.report [None req-96c0cd42-92e0-42b5-9c4c-1cac2220f408 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Refreshing inventories for resource provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 20 14:43:14 compute-0 nova_compute[250018]: 2026-01-20 14:43:14.504 250022 DEBUG nova.scheduler.client.report [None req-96c0cd42-92e0-42b5-9c4c-1cac2220f408 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Updating ProviderTree inventory for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 20 14:43:14 compute-0 nova_compute[250018]: 2026-01-20 14:43:14.505 250022 DEBUG nova.compute.provider_tree [None req-96c0cd42-92e0-42b5-9c4c-1cac2220f408 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Updating inventory in ProviderTree for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 20 14:43:14 compute-0 nova_compute[250018]: 2026-01-20 14:43:14.524 250022 DEBUG nova.scheduler.client.report [None req-96c0cd42-92e0-42b5-9c4c-1cac2220f408 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Refreshing aggregate associations for resource provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 20 14:43:14 compute-0 nova_compute[250018]: 2026-01-20 14:43:14.551 250022 DEBUG nova.scheduler.client.report [None req-96c0cd42-92e0-42b5-9c4c-1cac2220f408 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Refreshing trait associations for resource provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed, traits: COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_TRUSTED_CERTS,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NODE,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_RESCUE_BFV,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SSE2,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VOLUME_EXTEND,COMPUTE_ACCELERATORS,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_SATA,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE41,HW_CPU_X86_MMX,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_IDE,COMPUTE_DEVICE_TAGGING _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 20 14:43:14 compute-0 nova_compute[250018]: 2026-01-20 14:43:14.621 250022 DEBUG oslo_concurrency.processutils [None req-96c0cd42-92e0-42b5-9c4c-1cac2220f408 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:43:14 compute-0 nova_compute[250018]: 2026-01-20 14:43:14.721 250022 DEBUG nova.compute.manager [req-574f4a1f-765d-44fa-8b09-01c1c5be9a03 req-976d640a-57e5-4146-9da6-ecadb28276e3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 52f99263-c471-4724-813b-98b8a7c3c301] Received event network-vif-plugged-c967163a-0504-48ab-88e1-29cb7a413387 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:43:14 compute-0 nova_compute[250018]: 2026-01-20 14:43:14.723 250022 DEBUG oslo_concurrency.lockutils [req-574f4a1f-765d-44fa-8b09-01c1c5be9a03 req-976d640a-57e5-4146-9da6-ecadb28276e3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "52f99263-c471-4724-813b-98b8a7c3c301-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:43:14 compute-0 nova_compute[250018]: 2026-01-20 14:43:14.723 250022 DEBUG oslo_concurrency.lockutils [req-574f4a1f-765d-44fa-8b09-01c1c5be9a03 req-976d640a-57e5-4146-9da6-ecadb28276e3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "52f99263-c471-4724-813b-98b8a7c3c301-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:43:14 compute-0 nova_compute[250018]: 2026-01-20 14:43:14.724 250022 DEBUG oslo_concurrency.lockutils [req-574f4a1f-765d-44fa-8b09-01c1c5be9a03 req-976d640a-57e5-4146-9da6-ecadb28276e3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "52f99263-c471-4724-813b-98b8a7c3c301-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:43:14 compute-0 nova_compute[250018]: 2026-01-20 14:43:14.724 250022 DEBUG nova.compute.manager [req-574f4a1f-765d-44fa-8b09-01c1c5be9a03 req-976d640a-57e5-4146-9da6-ecadb28276e3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 52f99263-c471-4724-813b-98b8a7c3c301] Processing event network-vif-plugged-c967163a-0504-48ab-88e1-29cb7a413387 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 20 14:43:14 compute-0 nova_compute[250018]: 2026-01-20 14:43:14.725 250022 DEBUG nova.compute.manager [req-574f4a1f-765d-44fa-8b09-01c1c5be9a03 req-976d640a-57e5-4146-9da6-ecadb28276e3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 52f99263-c471-4724-813b-98b8a7c3c301] Received event network-vif-plugged-c967163a-0504-48ab-88e1-29cb7a413387 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:43:14 compute-0 nova_compute[250018]: 2026-01-20 14:43:14.725 250022 DEBUG oslo_concurrency.lockutils [req-574f4a1f-765d-44fa-8b09-01c1c5be9a03 req-976d640a-57e5-4146-9da6-ecadb28276e3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "52f99263-c471-4724-813b-98b8a7c3c301-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:43:14 compute-0 nova_compute[250018]: 2026-01-20 14:43:14.726 250022 DEBUG oslo_concurrency.lockutils [req-574f4a1f-765d-44fa-8b09-01c1c5be9a03 req-976d640a-57e5-4146-9da6-ecadb28276e3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "52f99263-c471-4724-813b-98b8a7c3c301-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:43:14 compute-0 nova_compute[250018]: 2026-01-20 14:43:14.726 250022 DEBUG oslo_concurrency.lockutils [req-574f4a1f-765d-44fa-8b09-01c1c5be9a03 req-976d640a-57e5-4146-9da6-ecadb28276e3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "52f99263-c471-4724-813b-98b8a7c3c301-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:43:14 compute-0 nova_compute[250018]: 2026-01-20 14:43:14.727 250022 DEBUG nova.compute.manager [req-574f4a1f-765d-44fa-8b09-01c1c5be9a03 req-976d640a-57e5-4146-9da6-ecadb28276e3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 52f99263-c471-4724-813b-98b8a7c3c301] No waiting events found dispatching network-vif-plugged-c967163a-0504-48ab-88e1-29cb7a413387 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:43:14 compute-0 nova_compute[250018]: 2026-01-20 14:43:14.728 250022 WARNING nova.compute.manager [req-574f4a1f-765d-44fa-8b09-01c1c5be9a03 req-976d640a-57e5-4146-9da6-ecadb28276e3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 52f99263-c471-4724-813b-98b8a7c3c301] Received unexpected event network-vif-plugged-c967163a-0504-48ab-88e1-29cb7a413387 for instance with vm_state building and task_state spawning.
Jan 20 14:43:14 compute-0 nova_compute[250018]: 2026-01-20 14:43:14.729 250022 DEBUG nova.compute.manager [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: 52f99263-c471-4724-813b-98b8a7c3c301] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 14:43:14 compute-0 nova_compute[250018]: 2026-01-20 14:43:14.734 250022 DEBUG nova.virt.libvirt.driver [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: 52f99263-c471-4724-813b-98b8a7c3c301] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 14:43:14 compute-0 nova_compute[250018]: 2026-01-20 14:43:14.736 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768920194.7342238, 52f99263-c471-4724-813b-98b8a7c3c301 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:43:14 compute-0 nova_compute[250018]: 2026-01-20 14:43:14.736 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 52f99263-c471-4724-813b-98b8a7c3c301] VM Resumed (Lifecycle Event)
Jan 20 14:43:14 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:43:14 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:43:14 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:43:14.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:43:14 compute-0 nova_compute[250018]: 2026-01-20 14:43:14.757 250022 INFO nova.virt.libvirt.driver [-] [instance: 52f99263-c471-4724-813b-98b8a7c3c301] Instance spawned successfully.
Jan 20 14:43:14 compute-0 nova_compute[250018]: 2026-01-20 14:43:14.758 250022 DEBUG nova.virt.libvirt.driver [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: 52f99263-c471-4724-813b-98b8a7c3c301] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 14:43:14 compute-0 nova_compute[250018]: 2026-01-20 14:43:14.770 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 52f99263-c471-4724-813b-98b8a7c3c301] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:43:14 compute-0 nova_compute[250018]: 2026-01-20 14:43:14.780 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 52f99263-c471-4724-813b-98b8a7c3c301] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:43:14 compute-0 nova_compute[250018]: 2026-01-20 14:43:14.786 250022 DEBUG nova.virt.libvirt.driver [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: 52f99263-c471-4724-813b-98b8a7c3c301] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:43:14 compute-0 nova_compute[250018]: 2026-01-20 14:43:14.787 250022 DEBUG nova.virt.libvirt.driver [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: 52f99263-c471-4724-813b-98b8a7c3c301] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:43:14 compute-0 nova_compute[250018]: 2026-01-20 14:43:14.788 250022 DEBUG nova.virt.libvirt.driver [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: 52f99263-c471-4724-813b-98b8a7c3c301] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:43:14 compute-0 nova_compute[250018]: 2026-01-20 14:43:14.789 250022 DEBUG nova.virt.libvirt.driver [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: 52f99263-c471-4724-813b-98b8a7c3c301] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:43:14 compute-0 nova_compute[250018]: 2026-01-20 14:43:14.790 250022 DEBUG nova.virt.libvirt.driver [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: 52f99263-c471-4724-813b-98b8a7c3c301] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:43:14 compute-0 nova_compute[250018]: 2026-01-20 14:43:14.791 250022 DEBUG nova.virt.libvirt.driver [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: 52f99263-c471-4724-813b-98b8a7c3c301] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:43:14 compute-0 nova_compute[250018]: 2026-01-20 14:43:14.800 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 52f99263-c471-4724-813b-98b8a7c3c301] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:43:14 compute-0 nova_compute[250018]: 2026-01-20 14:43:14.843 250022 INFO nova.compute.manager [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: 52f99263-c471-4724-813b-98b8a7c3c301] Took 9.23 seconds to spawn the instance on the hypervisor.
Jan 20 14:43:14 compute-0 nova_compute[250018]: 2026-01-20 14:43:14.843 250022 DEBUG nova.compute.manager [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: 52f99263-c471-4724-813b-98b8a7c3c301] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:43:14 compute-0 nova_compute[250018]: 2026-01-20 14:43:14.905 250022 INFO nova.compute.manager [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: 52f99263-c471-4724-813b-98b8a7c3c301] Took 10.23 seconds to build instance.
Jan 20 14:43:14 compute-0 nova_compute[250018]: 2026-01-20 14:43:14.932 250022 DEBUG oslo_concurrency.lockutils [None req-1d74c129-e009-471c-8543-d5e591c353cb aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Lock "52f99263-c471-4724-813b-98b8a7c3c301" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.326s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:43:15 compute-0 nova_compute[250018]: 2026-01-20 14:43:15.028 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:43:15 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:43:15.029 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=26, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=25) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:43:15 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:43:15.030 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 14:43:15 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:43:15 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/123347246' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:43:15 compute-0 nova_compute[250018]: 2026-01-20 14:43:15.122 250022 DEBUG oslo_concurrency.processutils [None req-96c0cd42-92e0-42b5-9c4c-1cac2220f408 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:43:15 compute-0 nova_compute[250018]: 2026-01-20 14:43:15.128 250022 DEBUG nova.compute.provider_tree [None req-96c0cd42-92e0-42b5-9c4c-1cac2220f408 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:43:15 compute-0 nova_compute[250018]: 2026-01-20 14:43:15.141 250022 DEBUG nova.scheduler.client.report [None req-96c0cd42-92e0-42b5-9c4c-1cac2220f408 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:43:15 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/123347246' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:43:15 compute-0 nova_compute[250018]: 2026-01-20 14:43:15.166 250022 DEBUG oslo_concurrency.lockutils [None req-96c0cd42-92e0-42b5-9c4c-1cac2220f408 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.727s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:43:15 compute-0 nova_compute[250018]: 2026-01-20 14:43:15.203 250022 INFO nova.scheduler.client.report [None req-96c0cd42-92e0-42b5-9c4c-1cac2220f408 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Deleted allocations for instance 1951432b-2c0c-4d1b-90df-d94dcf9fc32e
Jan 20 14:43:15 compute-0 nova_compute[250018]: 2026-01-20 14:43:15.279 250022 DEBUG oslo_concurrency.lockutils [None req-96c0cd42-92e0-42b5-9c4c-1cac2220f408 ff99fc8eda0640928c6e82981dacb266 4b95747114ab4043b93a260387199c91 - - default default] Lock "1951432b-2c0c-4d1b-90df-d94dcf9fc32e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.630s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:43:15 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1647: 321 pgs: 321 active+clean; 185 MiB data, 739 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 4.6 MiB/s wr, 273 op/s
Jan 20 14:43:16 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:43:16 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:43:16 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:43:16.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:43:16 compute-0 ceph-mon[74360]: pgmap v1647: 321 pgs: 321 active+clean; 185 MiB data, 739 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 4.6 MiB/s wr, 273 op/s
Jan 20 14:43:16 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:43:16 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:43:16 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:43:16.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:43:16 compute-0 nova_compute[250018]: 2026-01-20 14:43:16.970 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:43:17 compute-0 nova_compute[250018]: 2026-01-20 14:43:17.145 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:43:17 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1648: 321 pgs: 321 active+clean; 141 MiB data, 718 MiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 3.6 MiB/s wr, 327 op/s
Jan 20 14:43:17 compute-0 ceph-mon[74360]: pgmap v1648: 321 pgs: 321 active+clean; 141 MiB data, 718 MiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 3.6 MiB/s wr, 327 op/s
Jan 20 14:43:18 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:43:18 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:43:18 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:43:18.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:43:18 compute-0 nova_compute[250018]: 2026-01-20 14:43:18.042 250022 DEBUG oslo_concurrency.lockutils [None req-ef2fa57b-1b9d-4fa2-b4e1-837639bd88ba 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] Acquiring lock "50a3e5fd-193e-4c31-a7ce-b29b4ff10849" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:43:18 compute-0 nova_compute[250018]: 2026-01-20 14:43:18.043 250022 DEBUG oslo_concurrency.lockutils [None req-ef2fa57b-1b9d-4fa2-b4e1-837639bd88ba 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] Lock "50a3e5fd-193e-4c31-a7ce-b29b4ff10849" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:43:18 compute-0 nova_compute[250018]: 2026-01-20 14:43:18.061 250022 DEBUG nova.compute.manager [None req-ef2fa57b-1b9d-4fa2-b4e1-837639bd88ba 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] [instance: 50a3e5fd-193e-4c31-a7ce-b29b4ff10849] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 14:43:18 compute-0 nova_compute[250018]: 2026-01-20 14:43:18.132 250022 DEBUG oslo_concurrency.lockutils [None req-ef2fa57b-1b9d-4fa2-b4e1-837639bd88ba 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:43:18 compute-0 nova_compute[250018]: 2026-01-20 14:43:18.133 250022 DEBUG oslo_concurrency.lockutils [None req-ef2fa57b-1b9d-4fa2-b4e1-837639bd88ba 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:43:18 compute-0 nova_compute[250018]: 2026-01-20 14:43:18.138 250022 DEBUG nova.virt.hardware [None req-ef2fa57b-1b9d-4fa2-b4e1-837639bd88ba 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 14:43:18 compute-0 nova_compute[250018]: 2026-01-20 14:43:18.138 250022 INFO nova.compute.claims [None req-ef2fa57b-1b9d-4fa2-b4e1-837639bd88ba 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] [instance: 50a3e5fd-193e-4c31-a7ce-b29b4ff10849] Claim successful on node compute-0.ctlplane.example.com
Jan 20 14:43:18 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:43:18 compute-0 nova_compute[250018]: 2026-01-20 14:43:18.297 250022 DEBUG oslo_concurrency.processutils [None req-ef2fa57b-1b9d-4fa2-b4e1-837639bd88ba 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:43:18 compute-0 ovn_controller[148666]: 2026-01-20T14:43:18Z|00265|binding|INFO|Releasing lport 2b7c295d-f074-4cfb-aca0-08946126ddbc from this chassis (sb_readonly=0)
Jan 20 14:43:18 compute-0 nova_compute[250018]: 2026-01-20 14:43:18.461 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:43:18 compute-0 nova_compute[250018]: 2026-01-20 14:43:18.510 250022 DEBUG oslo_concurrency.lockutils [None req-5d041f0c-b01f-44f9-8e2e-faf00defc6dd aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Acquiring lock "52f99263-c471-4724-813b-98b8a7c3c301" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:43:18 compute-0 nova_compute[250018]: 2026-01-20 14:43:18.511 250022 DEBUG oslo_concurrency.lockutils [None req-5d041f0c-b01f-44f9-8e2e-faf00defc6dd aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Lock "52f99263-c471-4724-813b-98b8a7c3c301" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:43:18 compute-0 nova_compute[250018]: 2026-01-20 14:43:18.511 250022 DEBUG oslo_concurrency.lockutils [None req-5d041f0c-b01f-44f9-8e2e-faf00defc6dd aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Acquiring lock "52f99263-c471-4724-813b-98b8a7c3c301-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:43:18 compute-0 nova_compute[250018]: 2026-01-20 14:43:18.512 250022 DEBUG oslo_concurrency.lockutils [None req-5d041f0c-b01f-44f9-8e2e-faf00defc6dd aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Lock "52f99263-c471-4724-813b-98b8a7c3c301-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:43:18 compute-0 nova_compute[250018]: 2026-01-20 14:43:18.512 250022 DEBUG oslo_concurrency.lockutils [None req-5d041f0c-b01f-44f9-8e2e-faf00defc6dd aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Lock "52f99263-c471-4724-813b-98b8a7c3c301-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:43:18 compute-0 nova_compute[250018]: 2026-01-20 14:43:18.514 250022 INFO nova.compute.manager [None req-5d041f0c-b01f-44f9-8e2e-faf00defc6dd aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: 52f99263-c471-4724-813b-98b8a7c3c301] Terminating instance
Jan 20 14:43:18 compute-0 nova_compute[250018]: 2026-01-20 14:43:18.515 250022 DEBUG nova.compute.manager [None req-5d041f0c-b01f-44f9-8e2e-faf00defc6dd aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: 52f99263-c471-4724-813b-98b8a7c3c301] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 14:43:18 compute-0 kernel: tapc967163a-05 (unregistering): left promiscuous mode
Jan 20 14:43:18 compute-0 NetworkManager[48960]: <info>  [1768920198.5750] device (tapc967163a-05): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 14:43:18 compute-0 ovn_controller[148666]: 2026-01-20T14:43:18Z|00266|binding|INFO|Releasing lport c967163a-0504-48ab-88e1-29cb7a413387 from this chassis (sb_readonly=0)
Jan 20 14:43:18 compute-0 ovn_controller[148666]: 2026-01-20T14:43:18Z|00267|binding|INFO|Setting lport c967163a-0504-48ab-88e1-29cb7a413387 down in Southbound
Jan 20 14:43:18 compute-0 ovn_controller[148666]: 2026-01-20T14:43:18Z|00268|binding|INFO|Removing iface tapc967163a-05 ovn-installed in OVS
Jan 20 14:43:18 compute-0 nova_compute[250018]: 2026-01-20 14:43:18.624 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:43:18 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:43:18.629 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:53:7b:54 10.100.0.4'], port_security=['fa:16:3e:53:7b:54 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '52f99263-c471-4724-813b-98b8a7c3c301', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3e260ad9-fcf1-432b-b71b-b943d4249b65', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a3e022a35f604df2bbc885e498b1e206', 'neutron:revision_number': '4', 'neutron:security_group_ids': '885819b7-5060-4b73-ad54-3f31f821195c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=89e607b1-9e39-47f0-8180-8aaef3a2a0e9, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=c967163a-0504-48ab-88e1-29cb7a413387) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:43:18 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:43:18.631 160071 INFO neutron.agent.ovn.metadata.agent [-] Port c967163a-0504-48ab-88e1-29cb7a413387 in datapath 3e260ad9-fcf1-432b-b71b-b943d4249b65 unbound from our chassis
Jan 20 14:43:18 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:43:18.633 160071 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3e260ad9-fcf1-432b-b71b-b943d4249b65, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 14:43:18 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:43:18.634 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[104ec577-05fc-4268-ad2b-57f9187ec830]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:43:18 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:43:18.637 160071 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3e260ad9-fcf1-432b-b71b-b943d4249b65 namespace which is not needed anymore
Jan 20 14:43:18 compute-0 nova_compute[250018]: 2026-01-20 14:43:18.641 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:43:18 compute-0 systemd[1]: machine-qemu\x2d35\x2dinstance\x2d00000051.scope: Deactivated successfully.
Jan 20 14:43:18 compute-0 systemd[1]: machine-qemu\x2d35\x2dinstance\x2d00000051.scope: Consumed 4.745s CPU time.
Jan 20 14:43:18 compute-0 systemd-machined[216401]: Machine qemu-35-instance-00000051 terminated.
Jan 20 14:43:18 compute-0 podman[298167]: 2026-01-20 14:43:18.705934577 +0000 UTC m=+0.056049049 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:43:18 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:43:18 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:43:18 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:43:18.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:43:18 compute-0 nova_compute[250018]: 2026-01-20 14:43:18.751 250022 INFO nova.virt.libvirt.driver [-] [instance: 52f99263-c471-4724-813b-98b8a7c3c301] Instance destroyed successfully.
Jan 20 14:43:18 compute-0 nova_compute[250018]: 2026-01-20 14:43:18.751 250022 DEBUG nova.objects.instance [None req-5d041f0c-b01f-44f9-8e2e-faf00defc6dd aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Lazy-loading 'resources' on Instance uuid 52f99263-c471-4724-813b-98b8a7c3c301 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:43:18 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:43:18 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3555294503' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:43:18 compute-0 neutron-haproxy-ovnmeta-3e260ad9-fcf1-432b-b71b-b943d4249b65[298099]: [NOTICE]   (298108) : haproxy version is 2.8.14-c23fe91
Jan 20 14:43:18 compute-0 neutron-haproxy-ovnmeta-3e260ad9-fcf1-432b-b71b-b943d4249b65[298099]: [NOTICE]   (298108) : path to executable is /usr/sbin/haproxy
Jan 20 14:43:18 compute-0 neutron-haproxy-ovnmeta-3e260ad9-fcf1-432b-b71b-b943d4249b65[298099]: [WARNING]  (298108) : Exiting Master process...
Jan 20 14:43:18 compute-0 neutron-haproxy-ovnmeta-3e260ad9-fcf1-432b-b71b-b943d4249b65[298099]: [ALERT]    (298108) : Current worker (298111) exited with code 143 (Terminated)
Jan 20 14:43:18 compute-0 neutron-haproxy-ovnmeta-3e260ad9-fcf1-432b-b71b-b943d4249b65[298099]: [WARNING]  (298108) : All workers exited. Exiting... (0)
Jan 20 14:43:18 compute-0 systemd[1]: libpod-7f5bd66fa6a67c8bb6396083178aab8d334e1f7ceb174d6c2a50c93d9378bb8f.scope: Deactivated successfully.
Jan 20 14:43:18 compute-0 podman[298207]: 2026-01-20 14:43:18.787697343 +0000 UTC m=+0.050487619 container died 7f5bd66fa6a67c8bb6396083178aab8d334e1f7ceb174d6c2a50c93d9378bb8f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3e260ad9-fcf1-432b-b71b-b943d4249b65, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 20 14:43:18 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/3490191056' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:43:18 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3555294503' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:43:18 compute-0 nova_compute[250018]: 2026-01-20 14:43:18.791 250022 DEBUG oslo_concurrency.processutils [None req-ef2fa57b-1b9d-4fa2-b4e1-837639bd88ba 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:43:18 compute-0 nova_compute[250018]: 2026-01-20 14:43:18.800 250022 DEBUG nova.compute.provider_tree [None req-ef2fa57b-1b9d-4fa2-b4e1-837639bd88ba 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:43:18 compute-0 nova_compute[250018]: 2026-01-20 14:43:18.810 250022 DEBUG nova.virt.libvirt.vif [None req-5d041f0c-b01f-44f9-8e2e-faf00defc6dd aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:43:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-704404998',display_name='tempest-MultipleCreateTestJSON-server-704404998-2',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-704404998-2',id=81,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=1,launched_at=2026-01-20T14:43:14Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a3e022a35f604df2bbc885e498b1e206',ramdisk_id='',reservation_id='r-0enj9v31',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-MultipleCreateTestJSON-164394330',owner_user_name='tempest-MultipleCreateTestJSON-164394330-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T14:43:14Z,user_data=None,user_id='aa2e7857e85f483eb0d162e2ee8c2e2c',uuid=52f99263-c471-4724-813b-98b8a7c3c301,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c967163a-0504-48ab-88e1-29cb7a413387", "address": "fa:16:3e:53:7b:54", "network": {"id": "3e260ad9-fcf1-432b-b71b-b943d4249b65", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1425882684-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3e022a35f604df2bbc885e498b1e206", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc967163a-05", "ovs_interfaceid": "c967163a-0504-48ab-88e1-29cb7a413387", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 14:43:18 compute-0 nova_compute[250018]: 2026-01-20 14:43:18.811 250022 DEBUG nova.network.os_vif_util [None req-5d041f0c-b01f-44f9-8e2e-faf00defc6dd aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Converting VIF {"id": "c967163a-0504-48ab-88e1-29cb7a413387", "address": "fa:16:3e:53:7b:54", "network": {"id": "3e260ad9-fcf1-432b-b71b-b943d4249b65", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1425882684-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a3e022a35f604df2bbc885e498b1e206", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc967163a-05", "ovs_interfaceid": "c967163a-0504-48ab-88e1-29cb7a413387", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:43:18 compute-0 nova_compute[250018]: 2026-01-20 14:43:18.812 250022 DEBUG nova.network.os_vif_util [None req-5d041f0c-b01f-44f9-8e2e-faf00defc6dd aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:53:7b:54,bridge_name='br-int',has_traffic_filtering=True,id=c967163a-0504-48ab-88e1-29cb7a413387,network=Network(3e260ad9-fcf1-432b-b71b-b943d4249b65),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc967163a-05') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:43:18 compute-0 nova_compute[250018]: 2026-01-20 14:43:18.813 250022 DEBUG os_vif [None req-5d041f0c-b01f-44f9-8e2e-faf00defc6dd aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:53:7b:54,bridge_name='br-int',has_traffic_filtering=True,id=c967163a-0504-48ab-88e1-29cb7a413387,network=Network(3e260ad9-fcf1-432b-b71b-b943d4249b65),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc967163a-05') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 14:43:18 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7f5bd66fa6a67c8bb6396083178aab8d334e1f7ceb174d6c2a50c93d9378bb8f-userdata-shm.mount: Deactivated successfully.
Jan 20 14:43:18 compute-0 nova_compute[250018]: 2026-01-20 14:43:18.816 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:43:18 compute-0 nova_compute[250018]: 2026-01-20 14:43:18.816 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc967163a-05, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:43:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-e92ea9b39b8d58c47f15468f9a5374250be3485b3477ea03e106f82a7e33cacb-merged.mount: Deactivated successfully.
Jan 20 14:43:18 compute-0 nova_compute[250018]: 2026-01-20 14:43:18.818 250022 DEBUG nova.scheduler.client.report [None req-ef2fa57b-1b9d-4fa2-b4e1-837639bd88ba 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:43:18 compute-0 nova_compute[250018]: 2026-01-20 14:43:18.823 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:43:18 compute-0 nova_compute[250018]: 2026-01-20 14:43:18.824 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:43:18 compute-0 nova_compute[250018]: 2026-01-20 14:43:18.827 250022 INFO os_vif [None req-5d041f0c-b01f-44f9-8e2e-faf00defc6dd aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:53:7b:54,bridge_name='br-int',has_traffic_filtering=True,id=c967163a-0504-48ab-88e1-29cb7a413387,network=Network(3e260ad9-fcf1-432b-b71b-b943d4249b65),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc967163a-05')
Jan 20 14:43:18 compute-0 podman[298207]: 2026-01-20 14:43:18.830651947 +0000 UTC m=+0.093442223 container cleanup 7f5bd66fa6a67c8bb6396083178aab8d334e1f7ceb174d6c2a50c93d9378bb8f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3e260ad9-fcf1-432b-b71b-b943d4249b65, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true)
Jan 20 14:43:18 compute-0 systemd[1]: libpod-conmon-7f5bd66fa6a67c8bb6396083178aab8d334e1f7ceb174d6c2a50c93d9378bb8f.scope: Deactivated successfully.
Jan 20 14:43:18 compute-0 nova_compute[250018]: 2026-01-20 14:43:18.850 250022 DEBUG oslo_concurrency.lockutils [None req-ef2fa57b-1b9d-4fa2-b4e1-837639bd88ba 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.717s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:43:18 compute-0 nova_compute[250018]: 2026-01-20 14:43:18.851 250022 DEBUG nova.compute.manager [None req-ef2fa57b-1b9d-4fa2-b4e1-837639bd88ba 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] [instance: 50a3e5fd-193e-4c31-a7ce-b29b4ff10849] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 14:43:18 compute-0 podman[298205]: 2026-01-20 14:43:18.865749988 +0000 UTC m=+0.128583296 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, 
tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Jan 20 14:43:18 compute-0 podman[298286]: 2026-01-20 14:43:18.904862728 +0000 UTC m=+0.052589796 container remove 7f5bd66fa6a67c8bb6396083178aab8d334e1f7ceb174d6c2a50c93d9378bb8f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3e260ad9-fcf1-432b-b71b-b943d4249b65, org.label-schema.build-date=20251202, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:43:18 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:43:18.910 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[0f6f90de-855c-443f-8891-1419123a576e]: (4, ('Tue Jan 20 02:43:18 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-3e260ad9-fcf1-432b-b71b-b943d4249b65 (7f5bd66fa6a67c8bb6396083178aab8d334e1f7ceb174d6c2a50c93d9378bb8f)\n7f5bd66fa6a67c8bb6396083178aab8d334e1f7ceb174d6c2a50c93d9378bb8f\nTue Jan 20 02:43:18 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-3e260ad9-fcf1-432b-b71b-b943d4249b65 (7f5bd66fa6a67c8bb6396083178aab8d334e1f7ceb174d6c2a50c93d9378bb8f)\n7f5bd66fa6a67c8bb6396083178aab8d334e1f7ceb174d6c2a50c93d9378bb8f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:43:18 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:43:18.911 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[d5290dad-0fd8-47b4-a473-cf5512cd0be3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:43:18 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:43:18.912 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3e260ad9-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:43:18 compute-0 nova_compute[250018]: 2026-01-20 14:43:18.914 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:43:18 compute-0 kernel: tap3e260ad9-f0: left promiscuous mode
Jan 20 14:43:18 compute-0 nova_compute[250018]: 2026-01-20 14:43:18.916 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:43:18 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:43:18.919 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[6f6af0bf-af72-47bb-8d2d-439cd3f16819]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:43:18 compute-0 nova_compute[250018]: 2026-01-20 14:43:18.927 250022 DEBUG nova.compute.manager [None req-ef2fa57b-1b9d-4fa2-b4e1-837639bd88ba 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] [instance: 50a3e5fd-193e-4c31-a7ce-b29b4ff10849] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 20 14:43:18 compute-0 nova_compute[250018]: 2026-01-20 14:43:18.927 250022 DEBUG nova.network.neutron [None req-ef2fa57b-1b9d-4fa2-b4e1-837639bd88ba 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] [instance: 50a3e5fd-193e-4c31-a7ce-b29b4ff10849] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 14:43:18 compute-0 nova_compute[250018]: 2026-01-20 14:43:18.933 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:43:18 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:43:18.935 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[59fdf969-c0df-43e4-8564-ecda72a30dfc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:43:18 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:43:18.936 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[0be0e691-abca-4df0-b8d1-11d4eb4eb49a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:43:18 compute-0 nova_compute[250018]: 2026-01-20 14:43:18.945 250022 INFO nova.virt.libvirt.driver [None req-ef2fa57b-1b9d-4fa2-b4e1-837639bd88ba 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] [instance: 50a3e5fd-193e-4c31-a7ce-b29b4ff10849] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 14:43:18 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:43:18.952 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[bf091044-bf77-407e-b772-ab971e31909c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 613409, 'reachable_time': 20931, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 298306, 'error': None, 'target': 'ovnmeta-3e260ad9-fcf1-432b-b71b-b943d4249b65', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:43:18 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:43:18.954 160517 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3e260ad9-fcf1-432b-b71b-b943d4249b65 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 14:43:18 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:43:18.955 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[94f89672-19f1-4e12-bf04-94a30b888097]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:43:18 compute-0 systemd[1]: run-netns-ovnmeta\x2d3e260ad9\x2dfcf1\x2d432b\x2db71b\x2db943d4249b65.mount: Deactivated successfully.
Jan 20 14:43:18 compute-0 nova_compute[250018]: 2026-01-20 14:43:18.960 250022 DEBUG nova.compute.manager [None req-ef2fa57b-1b9d-4fa2-b4e1-837639bd88ba 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] [instance: 50a3e5fd-193e-4c31-a7ce-b29b4ff10849] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 14:43:19 compute-0 nova_compute[250018]: 2026-01-20 14:43:19.053 250022 DEBUG nova.compute.manager [req-cc60b2e4-1f82-4aff-9198-1641d2dd8cf1 req-d5bcb852-3b53-48bf-a499-168961abc425 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 52f99263-c471-4724-813b-98b8a7c3c301] Received event network-vif-unplugged-c967163a-0504-48ab-88e1-29cb7a413387 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:43:19 compute-0 nova_compute[250018]: 2026-01-20 14:43:19.054 250022 DEBUG oslo_concurrency.lockutils [req-cc60b2e4-1f82-4aff-9198-1641d2dd8cf1 req-d5bcb852-3b53-48bf-a499-168961abc425 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "52f99263-c471-4724-813b-98b8a7c3c301-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:43:19 compute-0 nova_compute[250018]: 2026-01-20 14:43:19.055 250022 DEBUG oslo_concurrency.lockutils [req-cc60b2e4-1f82-4aff-9198-1641d2dd8cf1 req-d5bcb852-3b53-48bf-a499-168961abc425 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "52f99263-c471-4724-813b-98b8a7c3c301-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:43:19 compute-0 nova_compute[250018]: 2026-01-20 14:43:19.055 250022 DEBUG oslo_concurrency.lockutils [req-cc60b2e4-1f82-4aff-9198-1641d2dd8cf1 req-d5bcb852-3b53-48bf-a499-168961abc425 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "52f99263-c471-4724-813b-98b8a7c3c301-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:43:19 compute-0 nova_compute[250018]: 2026-01-20 14:43:19.055 250022 DEBUG nova.compute.manager [req-cc60b2e4-1f82-4aff-9198-1641d2dd8cf1 req-d5bcb852-3b53-48bf-a499-168961abc425 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 52f99263-c471-4724-813b-98b8a7c3c301] No waiting events found dispatching network-vif-unplugged-c967163a-0504-48ab-88e1-29cb7a413387 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:43:19 compute-0 nova_compute[250018]: 2026-01-20 14:43:19.055 250022 DEBUG nova.compute.manager [req-cc60b2e4-1f82-4aff-9198-1641d2dd8cf1 req-d5bcb852-3b53-48bf-a499-168961abc425 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 52f99263-c471-4724-813b-98b8a7c3c301] Received event network-vif-unplugged-c967163a-0504-48ab-88e1-29cb7a413387 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 20 14:43:19 compute-0 nova_compute[250018]: 2026-01-20 14:43:19.082 250022 DEBUG nova.compute.manager [None req-ef2fa57b-1b9d-4fa2-b4e1-837639bd88ba 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] [instance: 50a3e5fd-193e-4c31-a7ce-b29b4ff10849] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 14:43:19 compute-0 nova_compute[250018]: 2026-01-20 14:43:19.084 250022 DEBUG nova.virt.libvirt.driver [None req-ef2fa57b-1b9d-4fa2-b4e1-837639bd88ba 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] [instance: 50a3e5fd-193e-4c31-a7ce-b29b4ff10849] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 14:43:19 compute-0 nova_compute[250018]: 2026-01-20 14:43:19.085 250022 INFO nova.virt.libvirt.driver [None req-ef2fa57b-1b9d-4fa2-b4e1-837639bd88ba 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] [instance: 50a3e5fd-193e-4c31-a7ce-b29b4ff10849] Creating image(s)
Jan 20 14:43:19 compute-0 nova_compute[250018]: 2026-01-20 14:43:19.118 250022 DEBUG nova.storage.rbd_utils [None req-ef2fa57b-1b9d-4fa2-b4e1-837639bd88ba 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] rbd image 50a3e5fd-193e-4c31-a7ce-b29b4ff10849_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:43:19 compute-0 nova_compute[250018]: 2026-01-20 14:43:19.153 250022 DEBUG nova.storage.rbd_utils [None req-ef2fa57b-1b9d-4fa2-b4e1-837639bd88ba 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] rbd image 50a3e5fd-193e-4c31-a7ce-b29b4ff10849_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:43:19 compute-0 nova_compute[250018]: 2026-01-20 14:43:19.190 250022 DEBUG nova.storage.rbd_utils [None req-ef2fa57b-1b9d-4fa2-b4e1-837639bd88ba 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] rbd image 50a3e5fd-193e-4c31-a7ce-b29b4ff10849_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:43:19 compute-0 nova_compute[250018]: 2026-01-20 14:43:19.196 250022 DEBUG oslo_concurrency.processutils [None req-ef2fa57b-1b9d-4fa2-b4e1-837639bd88ba 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:43:19 compute-0 nova_compute[250018]: 2026-01-20 14:43:19.298 250022 DEBUG oslo_concurrency.processutils [None req-ef2fa57b-1b9d-4fa2-b4e1-837639bd88ba 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json" returned: 0 in 0.103s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:43:19 compute-0 nova_compute[250018]: 2026-01-20 14:43:19.301 250022 DEBUG oslo_concurrency.lockutils [None req-ef2fa57b-1b9d-4fa2-b4e1-837639bd88ba 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] Acquiring lock "82d5c1918fd7c974214c7a48c1793a7a82560462" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:43:19 compute-0 nova_compute[250018]: 2026-01-20 14:43:19.302 250022 DEBUG oslo_concurrency.lockutils [None req-ef2fa57b-1b9d-4fa2-b4e1-837639bd88ba 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:43:19 compute-0 nova_compute[250018]: 2026-01-20 14:43:19.303 250022 DEBUG oslo_concurrency.lockutils [None req-ef2fa57b-1b9d-4fa2-b4e1-837639bd88ba 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:43:19 compute-0 nova_compute[250018]: 2026-01-20 14:43:19.342 250022 DEBUG nova.storage.rbd_utils [None req-ef2fa57b-1b9d-4fa2-b4e1-837639bd88ba 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] rbd image 50a3e5fd-193e-4c31-a7ce-b29b4ff10849_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:43:19 compute-0 nova_compute[250018]: 2026-01-20 14:43:19.347 250022 DEBUG oslo_concurrency.processutils [None req-ef2fa57b-1b9d-4fa2-b4e1-837639bd88ba 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 50a3e5fd-193e-4c31-a7ce-b29b4ff10849_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:43:19 compute-0 nova_compute[250018]: 2026-01-20 14:43:19.461 250022 INFO nova.virt.libvirt.driver [None req-5d041f0c-b01f-44f9-8e2e-faf00defc6dd aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: 52f99263-c471-4724-813b-98b8a7c3c301] Deleting instance files /var/lib/nova/instances/52f99263-c471-4724-813b-98b8a7c3c301_del
Jan 20 14:43:19 compute-0 nova_compute[250018]: 2026-01-20 14:43:19.464 250022 INFO nova.virt.libvirt.driver [None req-5d041f0c-b01f-44f9-8e2e-faf00defc6dd aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: 52f99263-c471-4724-813b-98b8a7c3c301] Deletion of /var/lib/nova/instances/52f99263-c471-4724-813b-98b8a7c3c301_del complete
Jan 20 14:43:19 compute-0 nova_compute[250018]: 2026-01-20 14:43:19.562 250022 INFO nova.compute.manager [None req-5d041f0c-b01f-44f9-8e2e-faf00defc6dd aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] [instance: 52f99263-c471-4724-813b-98b8a7c3c301] Took 1.05 seconds to destroy the instance on the hypervisor.
Jan 20 14:43:19 compute-0 nova_compute[250018]: 2026-01-20 14:43:19.563 250022 DEBUG oslo.service.loopingcall [None req-5d041f0c-b01f-44f9-8e2e-faf00defc6dd aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 14:43:19 compute-0 nova_compute[250018]: 2026-01-20 14:43:19.564 250022 DEBUG nova.compute.manager [-] [instance: 52f99263-c471-4724-813b-98b8a7c3c301] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 14:43:19 compute-0 nova_compute[250018]: 2026-01-20 14:43:19.565 250022 DEBUG nova.network.neutron [-] [instance: 52f99263-c471-4724-813b-98b8a7c3c301] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 14:43:19 compute-0 nova_compute[250018]: 2026-01-20 14:43:19.598 250022 DEBUG nova.policy [None req-ef2fa57b-1b9d-4fa2-b4e1-837639bd88ba 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7382dbb4dfeb47a08ece33c6f113d77c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '9fbbc94ed3bb41b1a060e5a7e1099ccf', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 20 14:43:19 compute-0 nova_compute[250018]: 2026-01-20 14:43:19.684 250022 DEBUG oslo_concurrency.processutils [None req-ef2fa57b-1b9d-4fa2-b4e1-837639bd88ba 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 50a3e5fd-193e-4c31-a7ce-b29b4ff10849_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.336s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:43:19 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1649: 321 pgs: 321 active+clean; 121 MiB data, 717 MiB used, 20 GiB / 21 GiB avail; 4.9 MiB/s rd, 1.9 MiB/s wr, 332 op/s
Jan 20 14:43:19 compute-0 nova_compute[250018]: 2026-01-20 14:43:19.765 250022 DEBUG nova.storage.rbd_utils [None req-ef2fa57b-1b9d-4fa2-b4e1-837639bd88ba 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] resizing rbd image 50a3e5fd-193e-4c31-a7ce-b29b4ff10849_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 20 14:43:19 compute-0 ceph-mon[74360]: pgmap v1649: 321 pgs: 321 active+clean; 121 MiB data, 717 MiB used, 20 GiB / 21 GiB avail; 4.9 MiB/s rd, 1.9 MiB/s wr, 332 op/s
Jan 20 14:43:19 compute-0 nova_compute[250018]: 2026-01-20 14:43:19.876 250022 DEBUG nova.objects.instance [None req-ef2fa57b-1b9d-4fa2-b4e1-837639bd88ba 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] Lazy-loading 'migration_context' on Instance uuid 50a3e5fd-193e-4c31-a7ce-b29b4ff10849 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:43:19 compute-0 nova_compute[250018]: 2026-01-20 14:43:19.902 250022 DEBUG nova.virt.libvirt.driver [None req-ef2fa57b-1b9d-4fa2-b4e1-837639bd88ba 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] [instance: 50a3e5fd-193e-4c31-a7ce-b29b4ff10849] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 14:43:19 compute-0 nova_compute[250018]: 2026-01-20 14:43:19.902 250022 DEBUG nova.virt.libvirt.driver [None req-ef2fa57b-1b9d-4fa2-b4e1-837639bd88ba 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] [instance: 50a3e5fd-193e-4c31-a7ce-b29b4ff10849] Ensure instance console log exists: /var/lib/nova/instances/50a3e5fd-193e-4c31-a7ce-b29b4ff10849/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 14:43:19 compute-0 nova_compute[250018]: 2026-01-20 14:43:19.903 250022 DEBUG oslo_concurrency.lockutils [None req-ef2fa57b-1b9d-4fa2-b4e1-837639bd88ba 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:43:19 compute-0 nova_compute[250018]: 2026-01-20 14:43:19.903 250022 DEBUG oslo_concurrency.lockutils [None req-ef2fa57b-1b9d-4fa2-b4e1-837639bd88ba 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:43:19 compute-0 nova_compute[250018]: 2026-01-20 14:43:19.903 250022 DEBUG oslo_concurrency.lockutils [None req-ef2fa57b-1b9d-4fa2-b4e1-837639bd88ba 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:43:20 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:43:20 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:43:20 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:43:20.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:43:20 compute-0 nova_compute[250018]: 2026-01-20 14:43:20.564 250022 DEBUG nova.network.neutron [-] [instance: 52f99263-c471-4724-813b-98b8a7c3c301] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:43:20 compute-0 nova_compute[250018]: 2026-01-20 14:43:20.631 250022 INFO nova.compute.manager [-] [instance: 52f99263-c471-4724-813b-98b8a7c3c301] Took 1.07 seconds to deallocate network for instance.
Jan 20 14:43:20 compute-0 nova_compute[250018]: 2026-01-20 14:43:20.718 250022 DEBUG oslo_concurrency.lockutils [None req-5d041f0c-b01f-44f9-8e2e-faf00defc6dd aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:43:20 compute-0 nova_compute[250018]: 2026-01-20 14:43:20.719 250022 DEBUG oslo_concurrency.lockutils [None req-5d041f0c-b01f-44f9-8e2e-faf00defc6dd aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:43:20 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:43:20 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:43:20 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:43:20.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:43:20 compute-0 nova_compute[250018]: 2026-01-20 14:43:20.886 250022 DEBUG oslo_concurrency.processutils [None req-5d041f0c-b01f-44f9-8e2e-faf00defc6dd aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:43:20 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/4067799863' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:43:21 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:43:21.034 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=367c1a2c-b16a-4828-ab5a-626bb50023b4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '26'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:43:21 compute-0 nova_compute[250018]: 2026-01-20 14:43:21.189 250022 DEBUG nova.compute.manager [req-c1f09e79-16b3-41d7-a040-a4c70663fee7 req-44d3589c-0214-4cfc-a858-2194c8f5bb7b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 52f99263-c471-4724-813b-98b8a7c3c301] Received event network-vif-plugged-c967163a-0504-48ab-88e1-29cb7a413387 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:43:21 compute-0 nova_compute[250018]: 2026-01-20 14:43:21.190 250022 DEBUG oslo_concurrency.lockutils [req-c1f09e79-16b3-41d7-a040-a4c70663fee7 req-44d3589c-0214-4cfc-a858-2194c8f5bb7b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "52f99263-c471-4724-813b-98b8a7c3c301-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:43:21 compute-0 nova_compute[250018]: 2026-01-20 14:43:21.190 250022 DEBUG oslo_concurrency.lockutils [req-c1f09e79-16b3-41d7-a040-a4c70663fee7 req-44d3589c-0214-4cfc-a858-2194c8f5bb7b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "52f99263-c471-4724-813b-98b8a7c3c301-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:43:21 compute-0 nova_compute[250018]: 2026-01-20 14:43:21.191 250022 DEBUG oslo_concurrency.lockutils [req-c1f09e79-16b3-41d7-a040-a4c70663fee7 req-44d3589c-0214-4cfc-a858-2194c8f5bb7b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "52f99263-c471-4724-813b-98b8a7c3c301-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:43:21 compute-0 nova_compute[250018]: 2026-01-20 14:43:21.191 250022 DEBUG nova.compute.manager [req-c1f09e79-16b3-41d7-a040-a4c70663fee7 req-44d3589c-0214-4cfc-a858-2194c8f5bb7b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 52f99263-c471-4724-813b-98b8a7c3c301] No waiting events found dispatching network-vif-plugged-c967163a-0504-48ab-88e1-29cb7a413387 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:43:21 compute-0 nova_compute[250018]: 2026-01-20 14:43:21.192 250022 WARNING nova.compute.manager [req-c1f09e79-16b3-41d7-a040-a4c70663fee7 req-44d3589c-0214-4cfc-a858-2194c8f5bb7b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 52f99263-c471-4724-813b-98b8a7c3c301] Received unexpected event network-vif-plugged-c967163a-0504-48ab-88e1-29cb7a413387 for instance with vm_state deleted and task_state None.
Jan 20 14:43:21 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:43:21 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3531891016' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:43:21 compute-0 nova_compute[250018]: 2026-01-20 14:43:21.355 250022 DEBUG oslo_concurrency.processutils [None req-5d041f0c-b01f-44f9-8e2e-faf00defc6dd aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:43:21 compute-0 nova_compute[250018]: 2026-01-20 14:43:21.361 250022 DEBUG nova.compute.provider_tree [None req-5d041f0c-b01f-44f9-8e2e-faf00defc6dd aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:43:21 compute-0 nova_compute[250018]: 2026-01-20 14:43:21.381 250022 DEBUG nova.scheduler.client.report [None req-5d041f0c-b01f-44f9-8e2e-faf00defc6dd aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:43:21 compute-0 nova_compute[250018]: 2026-01-20 14:43:21.388 250022 DEBUG nova.network.neutron [None req-ef2fa57b-1b9d-4fa2-b4e1-837639bd88ba 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] [instance: 50a3e5fd-193e-4c31-a7ce-b29b4ff10849] Successfully created port: 09356ee8-6584-422c-b052-c6e0aedc7ab4 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 20 14:43:21 compute-0 nova_compute[250018]: 2026-01-20 14:43:21.400 250022 DEBUG oslo_concurrency.lockutils [None req-5d041f0c-b01f-44f9-8e2e-faf00defc6dd aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.682s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:43:21 compute-0 nova_compute[250018]: 2026-01-20 14:43:21.445 250022 INFO nova.scheduler.client.report [None req-5d041f0c-b01f-44f9-8e2e-faf00defc6dd aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Deleted allocations for instance 52f99263-c471-4724-813b-98b8a7c3c301
Jan 20 14:43:21 compute-0 nova_compute[250018]: 2026-01-20 14:43:21.542 250022 DEBUG oslo_concurrency.lockutils [None req-5d041f0c-b01f-44f9-8e2e-faf00defc6dd aa2e7857e85f483eb0d162e2ee8c2e2c a3e022a35f604df2bbc885e498b1e206 - - default default] Lock "52f99263-c471-4724-813b-98b8a7c3c301" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.031s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:43:21 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1650: 321 pgs: 321 active+clean; 113 MiB data, 706 MiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 1.5 MiB/s wr, 353 op/s
Jan 20 14:43:21 compute-0 nova_compute[250018]: 2026-01-20 14:43:21.781 250022 DEBUG nova.compute.manager [req-f159c058-4572-454d-9211-ca155aa9e431 req-8264804c-8a1e-42d9-9532-b0ec81087194 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 52f99263-c471-4724-813b-98b8a7c3c301] Received event network-vif-deleted-c967163a-0504-48ab-88e1-29cb7a413387 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:43:21 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3531891016' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:43:21 compute-0 ceph-mon[74360]: pgmap v1650: 321 pgs: 321 active+clean; 113 MiB data, 706 MiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 1.5 MiB/s wr, 353 op/s
Jan 20 14:43:22 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:43:22 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:43:22 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:43:22.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:43:22 compute-0 nova_compute[250018]: 2026-01-20 14:43:22.185 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:43:22 compute-0 nova_compute[250018]: 2026-01-20 14:43:22.485 250022 DEBUG nova.network.neutron [None req-ef2fa57b-1b9d-4fa2-b4e1-837639bd88ba 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] [instance: 50a3e5fd-193e-4c31-a7ce-b29b4ff10849] Successfully updated port: 09356ee8-6584-422c-b052-c6e0aedc7ab4 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 20 14:43:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:43:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:43:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:43:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:43:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:43:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:43:22 compute-0 nova_compute[250018]: 2026-01-20 14:43:22.512 250022 DEBUG oslo_concurrency.lockutils [None req-ef2fa57b-1b9d-4fa2-b4e1-837639bd88ba 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] Acquiring lock "refresh_cache-50a3e5fd-193e-4c31-a7ce-b29b4ff10849" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:43:22 compute-0 nova_compute[250018]: 2026-01-20 14:43:22.512 250022 DEBUG oslo_concurrency.lockutils [None req-ef2fa57b-1b9d-4fa2-b4e1-837639bd88ba 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] Acquired lock "refresh_cache-50a3e5fd-193e-4c31-a7ce-b29b4ff10849" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:43:22 compute-0 nova_compute[250018]: 2026-01-20 14:43:22.512 250022 DEBUG nova.network.neutron [None req-ef2fa57b-1b9d-4fa2-b4e1-837639bd88ba 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] [instance: 50a3e5fd-193e-4c31-a7ce-b29b4ff10849] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:43:22 compute-0 nova_compute[250018]: 2026-01-20 14:43:22.625 250022 DEBUG nova.compute.manager [req-447c1acb-c7cb-4844-bbc9-a580ed5ce96b req-3b95dd5b-5bfd-4a57-90dc-4f2d02d200ae 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 50a3e5fd-193e-4c31-a7ce-b29b4ff10849] Received event network-changed-09356ee8-6584-422c-b052-c6e0aedc7ab4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:43:22 compute-0 nova_compute[250018]: 2026-01-20 14:43:22.625 250022 DEBUG nova.compute.manager [req-447c1acb-c7cb-4844-bbc9-a580ed5ce96b req-3b95dd5b-5bfd-4a57-90dc-4f2d02d200ae 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 50a3e5fd-193e-4c31-a7ce-b29b4ff10849] Refreshing instance network info cache due to event network-changed-09356ee8-6584-422c-b052-c6e0aedc7ab4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:43:22 compute-0 nova_compute[250018]: 2026-01-20 14:43:22.625 250022 DEBUG oslo_concurrency.lockutils [req-447c1acb-c7cb-4844-bbc9-a580ed5ce96b req-3b95dd5b-5bfd-4a57-90dc-4f2d02d200ae 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-50a3e5fd-193e-4c31-a7ce-b29b4ff10849" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:43:22 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:43:22 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:43:22 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:43:22.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:43:22 compute-0 nova_compute[250018]: 2026-01-20 14:43:22.764 250022 DEBUG nova.network.neutron [None req-ef2fa57b-1b9d-4fa2-b4e1-837639bd88ba 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] [instance: 50a3e5fd-193e-4c31-a7ce-b29b4ff10849] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 14:43:23 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:43:23 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1651: 321 pgs: 321 active+clean; 70 MiB data, 685 MiB used, 20 GiB / 21 GiB avail; 5.5 MiB/s rd, 816 KiB/s wr, 334 op/s
Jan 20 14:43:23 compute-0 nova_compute[250018]: 2026-01-20 14:43:23.820 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:43:23 compute-0 ceph-mon[74360]: pgmap v1651: 321 pgs: 321 active+clean; 70 MiB data, 685 MiB used, 20 GiB / 21 GiB avail; 5.5 MiB/s rd, 816 KiB/s wr, 334 op/s
Jan 20 14:43:24 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:43:24 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:43:24 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:43:24.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:43:24 compute-0 nova_compute[250018]: 2026-01-20 14:43:24.191 250022 DEBUG nova.network.neutron [None req-ef2fa57b-1b9d-4fa2-b4e1-837639bd88ba 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] [instance: 50a3e5fd-193e-4c31-a7ce-b29b4ff10849] Updating instance_info_cache with network_info: [{"id": "09356ee8-6584-422c-b052-c6e0aedc7ab4", "address": "fa:16:3e:4b:70:28", "network": {"id": "b9346c24-c2ec-4e94-9b82-f911ab82abc7", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-504372403-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9fbbc94ed3bb41b1a060e5a7e1099ccf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09356ee8-65", "ovs_interfaceid": "09356ee8-6584-422c-b052-c6e0aedc7ab4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:43:24 compute-0 nova_compute[250018]: 2026-01-20 14:43:24.220 250022 DEBUG oslo_concurrency.lockutils [None req-ef2fa57b-1b9d-4fa2-b4e1-837639bd88ba 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] Releasing lock "refresh_cache-50a3e5fd-193e-4c31-a7ce-b29b4ff10849" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:43:24 compute-0 nova_compute[250018]: 2026-01-20 14:43:24.220 250022 DEBUG nova.compute.manager [None req-ef2fa57b-1b9d-4fa2-b4e1-837639bd88ba 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] [instance: 50a3e5fd-193e-4c31-a7ce-b29b4ff10849] Instance network_info: |[{"id": "09356ee8-6584-422c-b052-c6e0aedc7ab4", "address": "fa:16:3e:4b:70:28", "network": {"id": "b9346c24-c2ec-4e94-9b82-f911ab82abc7", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-504372403-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9fbbc94ed3bb41b1a060e5a7e1099ccf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09356ee8-65", "ovs_interfaceid": "09356ee8-6584-422c-b052-c6e0aedc7ab4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 20 14:43:24 compute-0 nova_compute[250018]: 2026-01-20 14:43:24.220 250022 DEBUG oslo_concurrency.lockutils [req-447c1acb-c7cb-4844-bbc9-a580ed5ce96b req-3b95dd5b-5bfd-4a57-90dc-4f2d02d200ae 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-50a3e5fd-193e-4c31-a7ce-b29b4ff10849" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:43:24 compute-0 nova_compute[250018]: 2026-01-20 14:43:24.221 250022 DEBUG nova.network.neutron [req-447c1acb-c7cb-4844-bbc9-a580ed5ce96b req-3b95dd5b-5bfd-4a57-90dc-4f2d02d200ae 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 50a3e5fd-193e-4c31-a7ce-b29b4ff10849] Refreshing network info cache for port 09356ee8-6584-422c-b052-c6e0aedc7ab4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:43:24 compute-0 nova_compute[250018]: 2026-01-20 14:43:24.224 250022 DEBUG nova.virt.libvirt.driver [None req-ef2fa57b-1b9d-4fa2-b4e1-837639bd88ba 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] [instance: 50a3e5fd-193e-4c31-a7ce-b29b4ff10849] Start _get_guest_xml network_info=[{"id": "09356ee8-6584-422c-b052-c6e0aedc7ab4", "address": "fa:16:3e:4b:70:28", "network": {"id": "b9346c24-c2ec-4e94-9b82-f911ab82abc7", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-504372403-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9fbbc94ed3bb41b1a060e5a7e1099ccf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09356ee8-65", "ovs_interfaceid": "09356ee8-6584-422c-b052-c6e0aedc7ab4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encrypted': False, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'guest_format': None, 'size': 0, 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 14:43:24 compute-0 nova_compute[250018]: 2026-01-20 14:43:24.230 250022 WARNING nova.virt.libvirt.driver [None req-ef2fa57b-1b9d-4fa2-b4e1-837639bd88ba 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:43:24 compute-0 nova_compute[250018]: 2026-01-20 14:43:24.242 250022 DEBUG nova.virt.libvirt.host [None req-ef2fa57b-1b9d-4fa2-b4e1-837639bd88ba 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 14:43:24 compute-0 nova_compute[250018]: 2026-01-20 14:43:24.243 250022 DEBUG nova.virt.libvirt.host [None req-ef2fa57b-1b9d-4fa2-b4e1-837639bd88ba 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 14:43:24 compute-0 nova_compute[250018]: 2026-01-20 14:43:24.247 250022 DEBUG nova.virt.libvirt.host [None req-ef2fa57b-1b9d-4fa2-b4e1-837639bd88ba 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 14:43:24 compute-0 nova_compute[250018]: 2026-01-20 14:43:24.249 250022 DEBUG nova.virt.libvirt.host [None req-ef2fa57b-1b9d-4fa2-b4e1-837639bd88ba 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 14:43:24 compute-0 nova_compute[250018]: 2026-01-20 14:43:24.250 250022 DEBUG nova.virt.libvirt.driver [None req-ef2fa57b-1b9d-4fa2-b4e1-837639bd88ba 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 14:43:24 compute-0 nova_compute[250018]: 2026-01-20 14:43:24.251 250022 DEBUG nova.virt.hardware [None req-ef2fa57b-1b9d-4fa2-b4e1-837639bd88ba 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 14:43:24 compute-0 nova_compute[250018]: 2026-01-20 14:43:24.252 250022 DEBUG nova.virt.hardware [None req-ef2fa57b-1b9d-4fa2-b4e1-837639bd88ba 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 14:43:24 compute-0 nova_compute[250018]: 2026-01-20 14:43:24.253 250022 DEBUG nova.virt.hardware [None req-ef2fa57b-1b9d-4fa2-b4e1-837639bd88ba 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 14:43:24 compute-0 nova_compute[250018]: 2026-01-20 14:43:24.253 250022 DEBUG nova.virt.hardware [None req-ef2fa57b-1b9d-4fa2-b4e1-837639bd88ba 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 14:43:24 compute-0 nova_compute[250018]: 2026-01-20 14:43:24.254 250022 DEBUG nova.virt.hardware [None req-ef2fa57b-1b9d-4fa2-b4e1-837639bd88ba 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 14:43:24 compute-0 nova_compute[250018]: 2026-01-20 14:43:24.254 250022 DEBUG nova.virt.hardware [None req-ef2fa57b-1b9d-4fa2-b4e1-837639bd88ba 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 14:43:24 compute-0 nova_compute[250018]: 2026-01-20 14:43:24.255 250022 DEBUG nova.virt.hardware [None req-ef2fa57b-1b9d-4fa2-b4e1-837639bd88ba 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 14:43:24 compute-0 nova_compute[250018]: 2026-01-20 14:43:24.255 250022 DEBUG nova.virt.hardware [None req-ef2fa57b-1b9d-4fa2-b4e1-837639bd88ba 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 14:43:24 compute-0 nova_compute[250018]: 2026-01-20 14:43:24.256 250022 DEBUG nova.virt.hardware [None req-ef2fa57b-1b9d-4fa2-b4e1-837639bd88ba 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 14:43:24 compute-0 nova_compute[250018]: 2026-01-20 14:43:24.257 250022 DEBUG nova.virt.hardware [None req-ef2fa57b-1b9d-4fa2-b4e1-837639bd88ba 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 14:43:24 compute-0 nova_compute[250018]: 2026-01-20 14:43:24.257 250022 DEBUG nova.virt.hardware [None req-ef2fa57b-1b9d-4fa2-b4e1-837639bd88ba 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 14:43:24 compute-0 nova_compute[250018]: 2026-01-20 14:43:24.263 250022 DEBUG oslo_concurrency.processutils [None req-ef2fa57b-1b9d-4fa2-b4e1-837639bd88ba 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:43:24 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:43:24 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1454149014' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:43:24 compute-0 nova_compute[250018]: 2026-01-20 14:43:24.735 250022 DEBUG oslo_concurrency.processutils [None req-ef2fa57b-1b9d-4fa2-b4e1-837639bd88ba 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:43:24 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:43:24 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:43:24 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:43:24.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:43:24 compute-0 nova_compute[250018]: 2026-01-20 14:43:24.763 250022 DEBUG nova.storage.rbd_utils [None req-ef2fa57b-1b9d-4fa2-b4e1-837639bd88ba 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] rbd image 50a3e5fd-193e-4c31-a7ce-b29b4ff10849_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:43:24 compute-0 nova_compute[250018]: 2026-01-20 14:43:24.767 250022 DEBUG oslo_concurrency.processutils [None req-ef2fa57b-1b9d-4fa2-b4e1-837639bd88ba 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:43:24 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/1454149014' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:43:25 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:43:25 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/516531942' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:43:25 compute-0 nova_compute[250018]: 2026-01-20 14:43:25.197 250022 DEBUG oslo_concurrency.processutils [None req-ef2fa57b-1b9d-4fa2-b4e1-837639bd88ba 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:43:25 compute-0 nova_compute[250018]: 2026-01-20 14:43:25.199 250022 DEBUG nova.virt.libvirt.vif [None req-ef2fa57b-1b9d-4fa2-b4e1-837639bd88ba 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:43:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-1638959585',display_name='tempest-AttachInterfacesUnderV243Test-server-1638959585',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-1638959585',id=82,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKHBLhvwQD8pfuy81t4/uX9vNVmbmzfhtV4k/aMtROKxZHBT5Av/A3g1Hv5GBLLgGJzoauzbTsv3KqXvd+dGceMvL4+iryR2sjrFK7Zz/EMWrOCYAILIyDu16zdgqhbQwg==',key_name='tempest-keypair-1449064549',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9fbbc94ed3bb41b1a060e5a7e1099ccf',ramdisk_id='',reservation_id='r-zhkvfssa',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesUnderV243Test-743380895',owner_user_name='tempest-AttachInterfacesUnderV243Test-743380895-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:43:19Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='7382dbb4dfeb47a08ece33c6f113d77c',uuid=50a3e5fd-193e-4c31-a7ce-b29b4ff10849,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "09356ee8-6584-422c-b052-c6e0aedc7ab4", "address": "fa:16:3e:4b:70:28", "network": {"id": "b9346c24-c2ec-4e94-9b82-f911ab82abc7", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-504372403-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9fbbc94ed3bb41b1a060e5a7e1099ccf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09356ee8-65", "ovs_interfaceid": "09356ee8-6584-422c-b052-c6e0aedc7ab4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 14:43:25 compute-0 nova_compute[250018]: 2026-01-20 14:43:25.199 250022 DEBUG nova.network.os_vif_util [None req-ef2fa57b-1b9d-4fa2-b4e1-837639bd88ba 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] Converting VIF {"id": "09356ee8-6584-422c-b052-c6e0aedc7ab4", "address": "fa:16:3e:4b:70:28", "network": {"id": "b9346c24-c2ec-4e94-9b82-f911ab82abc7", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-504372403-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9fbbc94ed3bb41b1a060e5a7e1099ccf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09356ee8-65", "ovs_interfaceid": "09356ee8-6584-422c-b052-c6e0aedc7ab4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:43:25 compute-0 nova_compute[250018]: 2026-01-20 14:43:25.200 250022 DEBUG nova.network.os_vif_util [None req-ef2fa57b-1b9d-4fa2-b4e1-837639bd88ba 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4b:70:28,bridge_name='br-int',has_traffic_filtering=True,id=09356ee8-6584-422c-b052-c6e0aedc7ab4,network=Network(b9346c24-c2ec-4e94-9b82-f911ab82abc7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap09356ee8-65') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:43:25 compute-0 nova_compute[250018]: 2026-01-20 14:43:25.201 250022 DEBUG nova.objects.instance [None req-ef2fa57b-1b9d-4fa2-b4e1-837639bd88ba 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] Lazy-loading 'pci_devices' on Instance uuid 50a3e5fd-193e-4c31-a7ce-b29b4ff10849 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:43:25 compute-0 nova_compute[250018]: 2026-01-20 14:43:25.217 250022 DEBUG nova.virt.libvirt.driver [None req-ef2fa57b-1b9d-4fa2-b4e1-837639bd88ba 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] [instance: 50a3e5fd-193e-4c31-a7ce-b29b4ff10849] End _get_guest_xml xml=<domain type="kvm">
Jan 20 14:43:25 compute-0 nova_compute[250018]:   <uuid>50a3e5fd-193e-4c31-a7ce-b29b4ff10849</uuid>
Jan 20 14:43:25 compute-0 nova_compute[250018]:   <name>instance-00000052</name>
Jan 20 14:43:25 compute-0 nova_compute[250018]:   <memory>131072</memory>
Jan 20 14:43:25 compute-0 nova_compute[250018]:   <vcpu>1</vcpu>
Jan 20 14:43:25 compute-0 nova_compute[250018]:   <metadata>
Jan 20 14:43:25 compute-0 nova_compute[250018]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 14:43:25 compute-0 nova_compute[250018]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 14:43:25 compute-0 nova_compute[250018]:       <nova:name>tempest-AttachInterfacesUnderV243Test-server-1638959585</nova:name>
Jan 20 14:43:25 compute-0 nova_compute[250018]:       <nova:creationTime>2026-01-20 14:43:24</nova:creationTime>
Jan 20 14:43:25 compute-0 nova_compute[250018]:       <nova:flavor name="m1.nano">
Jan 20 14:43:25 compute-0 nova_compute[250018]:         <nova:memory>128</nova:memory>
Jan 20 14:43:25 compute-0 nova_compute[250018]:         <nova:disk>1</nova:disk>
Jan 20 14:43:25 compute-0 nova_compute[250018]:         <nova:swap>0</nova:swap>
Jan 20 14:43:25 compute-0 nova_compute[250018]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 14:43:25 compute-0 nova_compute[250018]:         <nova:vcpus>1</nova:vcpus>
Jan 20 14:43:25 compute-0 nova_compute[250018]:       </nova:flavor>
Jan 20 14:43:25 compute-0 nova_compute[250018]:       <nova:owner>
Jan 20 14:43:25 compute-0 nova_compute[250018]:         <nova:user uuid="7382dbb4dfeb47a08ece33c6f113d77c">tempest-AttachInterfacesUnderV243Test-743380895-project-member</nova:user>
Jan 20 14:43:25 compute-0 nova_compute[250018]:         <nova:project uuid="9fbbc94ed3bb41b1a060e5a7e1099ccf">tempest-AttachInterfacesUnderV243Test-743380895</nova:project>
Jan 20 14:43:25 compute-0 nova_compute[250018]:       </nova:owner>
Jan 20 14:43:25 compute-0 nova_compute[250018]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 14:43:25 compute-0 nova_compute[250018]:       <nova:ports>
Jan 20 14:43:25 compute-0 nova_compute[250018]:         <nova:port uuid="09356ee8-6584-422c-b052-c6e0aedc7ab4">
Jan 20 14:43:25 compute-0 nova_compute[250018]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Jan 20 14:43:25 compute-0 nova_compute[250018]:         </nova:port>
Jan 20 14:43:25 compute-0 nova_compute[250018]:       </nova:ports>
Jan 20 14:43:25 compute-0 nova_compute[250018]:     </nova:instance>
Jan 20 14:43:25 compute-0 nova_compute[250018]:   </metadata>
Jan 20 14:43:25 compute-0 nova_compute[250018]:   <sysinfo type="smbios">
Jan 20 14:43:25 compute-0 nova_compute[250018]:     <system>
Jan 20 14:43:25 compute-0 nova_compute[250018]:       <entry name="manufacturer">RDO</entry>
Jan 20 14:43:25 compute-0 nova_compute[250018]:       <entry name="product">OpenStack Compute</entry>
Jan 20 14:43:25 compute-0 nova_compute[250018]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 14:43:25 compute-0 nova_compute[250018]:       <entry name="serial">50a3e5fd-193e-4c31-a7ce-b29b4ff10849</entry>
Jan 20 14:43:25 compute-0 nova_compute[250018]:       <entry name="uuid">50a3e5fd-193e-4c31-a7ce-b29b4ff10849</entry>
Jan 20 14:43:25 compute-0 nova_compute[250018]:       <entry name="family">Virtual Machine</entry>
Jan 20 14:43:25 compute-0 nova_compute[250018]:     </system>
Jan 20 14:43:25 compute-0 nova_compute[250018]:   </sysinfo>
Jan 20 14:43:25 compute-0 nova_compute[250018]:   <os>
Jan 20 14:43:25 compute-0 nova_compute[250018]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 14:43:25 compute-0 nova_compute[250018]:     <boot dev="hd"/>
Jan 20 14:43:25 compute-0 nova_compute[250018]:     <smbios mode="sysinfo"/>
Jan 20 14:43:25 compute-0 nova_compute[250018]:   </os>
Jan 20 14:43:25 compute-0 nova_compute[250018]:   <features>
Jan 20 14:43:25 compute-0 nova_compute[250018]:     <acpi/>
Jan 20 14:43:25 compute-0 nova_compute[250018]:     <apic/>
Jan 20 14:43:25 compute-0 nova_compute[250018]:     <vmcoreinfo/>
Jan 20 14:43:25 compute-0 nova_compute[250018]:   </features>
Jan 20 14:43:25 compute-0 nova_compute[250018]:   <clock offset="utc">
Jan 20 14:43:25 compute-0 nova_compute[250018]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 14:43:25 compute-0 nova_compute[250018]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 14:43:25 compute-0 nova_compute[250018]:     <timer name="hpet" present="no"/>
Jan 20 14:43:25 compute-0 nova_compute[250018]:   </clock>
Jan 20 14:43:25 compute-0 nova_compute[250018]:   <cpu mode="custom" match="exact">
Jan 20 14:43:25 compute-0 nova_compute[250018]:     <model>Nehalem</model>
Jan 20 14:43:25 compute-0 nova_compute[250018]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 14:43:25 compute-0 nova_compute[250018]:   </cpu>
Jan 20 14:43:25 compute-0 nova_compute[250018]:   <devices>
Jan 20 14:43:25 compute-0 nova_compute[250018]:     <disk type="network" device="disk">
Jan 20 14:43:25 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 14:43:25 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/50a3e5fd-193e-4c31-a7ce-b29b4ff10849_disk">
Jan 20 14:43:25 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:43:25 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:43:25 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:43:25 compute-0 nova_compute[250018]:       </source>
Jan 20 14:43:25 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 14:43:25 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:43:25 compute-0 nova_compute[250018]:       </auth>
Jan 20 14:43:25 compute-0 nova_compute[250018]:       <target dev="vda" bus="virtio"/>
Jan 20 14:43:25 compute-0 nova_compute[250018]:     </disk>
Jan 20 14:43:25 compute-0 nova_compute[250018]:     <disk type="network" device="cdrom">
Jan 20 14:43:25 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 14:43:25 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/50a3e5fd-193e-4c31-a7ce-b29b4ff10849_disk.config">
Jan 20 14:43:25 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:43:25 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:43:25 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:43:25 compute-0 nova_compute[250018]:       </source>
Jan 20 14:43:25 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 14:43:25 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:43:25 compute-0 nova_compute[250018]:       </auth>
Jan 20 14:43:25 compute-0 nova_compute[250018]:       <target dev="sda" bus="sata"/>
Jan 20 14:43:25 compute-0 nova_compute[250018]:     </disk>
Jan 20 14:43:25 compute-0 nova_compute[250018]:     <interface type="ethernet">
Jan 20 14:43:25 compute-0 nova_compute[250018]:       <mac address="fa:16:3e:4b:70:28"/>
Jan 20 14:43:25 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 14:43:25 compute-0 nova_compute[250018]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 14:43:25 compute-0 nova_compute[250018]:       <mtu size="1442"/>
Jan 20 14:43:25 compute-0 nova_compute[250018]:       <target dev="tap09356ee8-65"/>
Jan 20 14:43:25 compute-0 nova_compute[250018]:     </interface>
Jan 20 14:43:25 compute-0 nova_compute[250018]:     <serial type="pty">
Jan 20 14:43:25 compute-0 nova_compute[250018]:       <log file="/var/lib/nova/instances/50a3e5fd-193e-4c31-a7ce-b29b4ff10849/console.log" append="off"/>
Jan 20 14:43:25 compute-0 nova_compute[250018]:     </serial>
Jan 20 14:43:25 compute-0 nova_compute[250018]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 14:43:25 compute-0 nova_compute[250018]:     <video>
Jan 20 14:43:25 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 14:43:25 compute-0 nova_compute[250018]:     </video>
Jan 20 14:43:25 compute-0 nova_compute[250018]:     <input type="tablet" bus="usb"/>
Jan 20 14:43:25 compute-0 nova_compute[250018]:     <rng model="virtio">
Jan 20 14:43:25 compute-0 nova_compute[250018]:       <backend model="random">/dev/urandom</backend>
Jan 20 14:43:25 compute-0 nova_compute[250018]:     </rng>
Jan 20 14:43:25 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root"/>
Jan 20 14:43:25 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:43:25 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:43:25 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:43:25 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:43:25 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:43:25 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:43:25 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:43:25 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:43:25 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:43:25 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:43:25 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:43:25 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:43:25 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:43:25 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:43:25 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:43:25 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:43:25 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:43:25 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:43:25 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:43:25 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:43:25 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:43:25 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:43:25 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:43:25 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:43:25 compute-0 nova_compute[250018]:     <controller type="usb" index="0"/>
Jan 20 14:43:25 compute-0 nova_compute[250018]:     <memballoon model="virtio">
Jan 20 14:43:25 compute-0 nova_compute[250018]:       <stats period="10"/>
Jan 20 14:43:25 compute-0 nova_compute[250018]:     </memballoon>
Jan 20 14:43:25 compute-0 nova_compute[250018]:   </devices>
Jan 20 14:43:25 compute-0 nova_compute[250018]: </domain>
Jan 20 14:43:25 compute-0 nova_compute[250018]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 14:43:25 compute-0 nova_compute[250018]: 2026-01-20 14:43:25.219 250022 DEBUG nova.compute.manager [None req-ef2fa57b-1b9d-4fa2-b4e1-837639bd88ba 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] [instance: 50a3e5fd-193e-4c31-a7ce-b29b4ff10849] Preparing to wait for external event network-vif-plugged-09356ee8-6584-422c-b052-c6e0aedc7ab4 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 20 14:43:25 compute-0 nova_compute[250018]: 2026-01-20 14:43:25.219 250022 DEBUG oslo_concurrency.lockutils [None req-ef2fa57b-1b9d-4fa2-b4e1-837639bd88ba 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] Acquiring lock "50a3e5fd-193e-4c31-a7ce-b29b4ff10849-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:43:25 compute-0 nova_compute[250018]: 2026-01-20 14:43:25.220 250022 DEBUG oslo_concurrency.lockutils [None req-ef2fa57b-1b9d-4fa2-b4e1-837639bd88ba 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] Lock "50a3e5fd-193e-4c31-a7ce-b29b4ff10849-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:43:25 compute-0 nova_compute[250018]: 2026-01-20 14:43:25.220 250022 DEBUG oslo_concurrency.lockutils [None req-ef2fa57b-1b9d-4fa2-b4e1-837639bd88ba 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] Lock "50a3e5fd-193e-4c31-a7ce-b29b4ff10849-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:43:25 compute-0 nova_compute[250018]: 2026-01-20 14:43:25.221 250022 DEBUG nova.virt.libvirt.vif [None req-ef2fa57b-1b9d-4fa2-b4e1-837639bd88ba 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:43:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-1638959585',display_name='tempest-AttachInterfacesUnderV243Test-server-1638959585',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-1638959585',id=82,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKHBLhvwQD8pfuy81t4/uX9vNVmbmzfhtV4k/aMtROKxZHBT5Av/A3g1Hv5GBLLgGJzoauzbTsv3KqXvd+dGceMvL4+iryR2sjrFK7Zz/EMWrOCYAILIyDu16zdgqhbQwg==',key_name='tempest-keypair-1449064549',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9fbbc94ed3bb41b1a060e5a7e1099ccf',ramdisk_id='',reservation_id='r-zhkvfssa',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesUnderV243Test-743380895',owner_user_name='tempest-AttachInterfacesUnderV243Test-743380895-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:43:19Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='7382dbb4dfeb47a08ece33c6f113d77c',uuid=50a3e5fd-193e-4c31-a7ce-b29b4ff10849,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "09356ee8-6584-422c-b052-c6e0aedc7ab4", "address": "fa:16:3e:4b:70:28", "network": {"id": "b9346c24-c2ec-4e94-9b82-f911ab82abc7", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-504372403-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9fbbc94ed3bb41b1a060e5a7e1099ccf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09356ee8-65", "ovs_interfaceid": "09356ee8-6584-422c-b052-c6e0aedc7ab4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 14:43:25 compute-0 nova_compute[250018]: 2026-01-20 14:43:25.221 250022 DEBUG nova.network.os_vif_util [None req-ef2fa57b-1b9d-4fa2-b4e1-837639bd88ba 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] Converting VIF {"id": "09356ee8-6584-422c-b052-c6e0aedc7ab4", "address": "fa:16:3e:4b:70:28", "network": {"id": "b9346c24-c2ec-4e94-9b82-f911ab82abc7", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-504372403-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9fbbc94ed3bb41b1a060e5a7e1099ccf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09356ee8-65", "ovs_interfaceid": "09356ee8-6584-422c-b052-c6e0aedc7ab4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:43:25 compute-0 nova_compute[250018]: 2026-01-20 14:43:25.221 250022 DEBUG nova.network.os_vif_util [None req-ef2fa57b-1b9d-4fa2-b4e1-837639bd88ba 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4b:70:28,bridge_name='br-int',has_traffic_filtering=True,id=09356ee8-6584-422c-b052-c6e0aedc7ab4,network=Network(b9346c24-c2ec-4e94-9b82-f911ab82abc7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap09356ee8-65') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:43:25 compute-0 nova_compute[250018]: 2026-01-20 14:43:25.222 250022 DEBUG os_vif [None req-ef2fa57b-1b9d-4fa2-b4e1-837639bd88ba 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4b:70:28,bridge_name='br-int',has_traffic_filtering=True,id=09356ee8-6584-422c-b052-c6e0aedc7ab4,network=Network(b9346c24-c2ec-4e94-9b82-f911ab82abc7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap09356ee8-65') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 14:43:25 compute-0 nova_compute[250018]: 2026-01-20 14:43:25.222 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:43:25 compute-0 nova_compute[250018]: 2026-01-20 14:43:25.223 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:43:25 compute-0 nova_compute[250018]: 2026-01-20 14:43:25.223 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:43:25 compute-0 nova_compute[250018]: 2026-01-20 14:43:25.225 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:43:25 compute-0 nova_compute[250018]: 2026-01-20 14:43:25.226 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap09356ee8-65, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:43:25 compute-0 nova_compute[250018]: 2026-01-20 14:43:25.226 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap09356ee8-65, col_values=(('external_ids', {'iface-id': '09356ee8-6584-422c-b052-c6e0aedc7ab4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:4b:70:28', 'vm-uuid': '50a3e5fd-193e-4c31-a7ce-b29b4ff10849'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:43:25 compute-0 nova_compute[250018]: 2026-01-20 14:43:25.227 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:43:25 compute-0 nova_compute[250018]: 2026-01-20 14:43:25.230 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:43:25 compute-0 NetworkManager[48960]: <info>  [1768920205.2294] manager: (tap09356ee8-65): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/135)
Jan 20 14:43:25 compute-0 nova_compute[250018]: 2026-01-20 14:43:25.233 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:43:25 compute-0 nova_compute[250018]: 2026-01-20 14:43:25.236 250022 INFO os_vif [None req-ef2fa57b-1b9d-4fa2-b4e1-837639bd88ba 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4b:70:28,bridge_name='br-int',has_traffic_filtering=True,id=09356ee8-6584-422c-b052-c6e0aedc7ab4,network=Network(b9346c24-c2ec-4e94-9b82-f911ab82abc7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap09356ee8-65')
Jan 20 14:43:25 compute-0 nova_compute[250018]: 2026-01-20 14:43:25.299 250022 DEBUG nova.virt.libvirt.driver [None req-ef2fa57b-1b9d-4fa2-b4e1-837639bd88ba 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:43:25 compute-0 nova_compute[250018]: 2026-01-20 14:43:25.299 250022 DEBUG nova.virt.libvirt.driver [None req-ef2fa57b-1b9d-4fa2-b4e1-837639bd88ba 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:43:25 compute-0 nova_compute[250018]: 2026-01-20 14:43:25.300 250022 DEBUG nova.virt.libvirt.driver [None req-ef2fa57b-1b9d-4fa2-b4e1-837639bd88ba 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] No VIF found with MAC fa:16:3e:4b:70:28, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 14:43:25 compute-0 nova_compute[250018]: 2026-01-20 14:43:25.300 250022 INFO nova.virt.libvirt.driver [None req-ef2fa57b-1b9d-4fa2-b4e1-837639bd88ba 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] [instance: 50a3e5fd-193e-4c31-a7ce-b29b4ff10849] Using config drive
Jan 20 14:43:25 compute-0 nova_compute[250018]: 2026-01-20 14:43:25.328 250022 DEBUG nova.storage.rbd_utils [None req-ef2fa57b-1b9d-4fa2-b4e1-837639bd88ba 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] rbd image 50a3e5fd-193e-4c31-a7ce-b29b4ff10849_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:43:25 compute-0 sshd-session[298559]: Invalid user ubuntu from 157.245.78.139 port 33972
Jan 20 14:43:25 compute-0 sshd-session[298559]: Connection closed by invalid user ubuntu 157.245.78.139 port 33972 [preauth]
Jan 20 14:43:25 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1652: 321 pgs: 321 active+clean; 88 MiB data, 691 MiB used, 20 GiB / 21 GiB avail; 5.4 MiB/s rd, 1.8 MiB/s wr, 339 op/s
Jan 20 14:43:25 compute-0 nova_compute[250018]: 2026-01-20 14:43:25.797 250022 INFO nova.virt.libvirt.driver [None req-ef2fa57b-1b9d-4fa2-b4e1-837639bd88ba 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] [instance: 50a3e5fd-193e-4c31-a7ce-b29b4ff10849] Creating config drive at /var/lib/nova/instances/50a3e5fd-193e-4c31-a7ce-b29b4ff10849/disk.config
Jan 20 14:43:25 compute-0 nova_compute[250018]: 2026-01-20 14:43:25.807 250022 DEBUG oslo_concurrency.processutils [None req-ef2fa57b-1b9d-4fa2-b4e1-837639bd88ba 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/50a3e5fd-193e-4c31-a7ce-b29b4ff10849/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5_h00m2t execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:43:25 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/516531942' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:43:25 compute-0 ceph-mon[74360]: pgmap v1652: 321 pgs: 321 active+clean; 88 MiB data, 691 MiB used, 20 GiB / 21 GiB avail; 5.4 MiB/s rd, 1.8 MiB/s wr, 339 op/s
Jan 20 14:43:25 compute-0 nova_compute[250018]: 2026-01-20 14:43:25.972 250022 DEBUG oslo_concurrency.processutils [None req-ef2fa57b-1b9d-4fa2-b4e1-837639bd88ba 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/50a3e5fd-193e-4c31-a7ce-b29b4ff10849/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5_h00m2t" returned: 0 in 0.165s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:43:26 compute-0 nova_compute[250018]: 2026-01-20 14:43:26.020 250022 DEBUG nova.storage.rbd_utils [None req-ef2fa57b-1b9d-4fa2-b4e1-837639bd88ba 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] rbd image 50a3e5fd-193e-4c31-a7ce-b29b4ff10849_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:43:26 compute-0 nova_compute[250018]: 2026-01-20 14:43:26.025 250022 DEBUG oslo_concurrency.processutils [None req-ef2fa57b-1b9d-4fa2-b4e1-837639bd88ba 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/50a3e5fd-193e-4c31-a7ce-b29b4ff10849/disk.config 50a3e5fd-193e-4c31-a7ce-b29b4ff10849_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:43:26 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:43:26 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:43:26 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:43:26.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:43:26 compute-0 nova_compute[250018]: 2026-01-20 14:43:26.063 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:43:26 compute-0 nova_compute[250018]: 2026-01-20 14:43:26.064 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:43:26 compute-0 nova_compute[250018]: 2026-01-20 14:43:26.065 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:43:26 compute-0 nova_compute[250018]: 2026-01-20 14:43:26.065 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 20 14:43:26 compute-0 nova_compute[250018]: 2026-01-20 14:43:26.078 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 20 14:43:26 compute-0 nova_compute[250018]: 2026-01-20 14:43:26.190 250022 DEBUG oslo_concurrency.processutils [None req-ef2fa57b-1b9d-4fa2-b4e1-837639bd88ba 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/50a3e5fd-193e-4c31-a7ce-b29b4ff10849/disk.config 50a3e5fd-193e-4c31-a7ce-b29b4ff10849_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.164s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:43:26 compute-0 nova_compute[250018]: 2026-01-20 14:43:26.190 250022 INFO nova.virt.libvirt.driver [None req-ef2fa57b-1b9d-4fa2-b4e1-837639bd88ba 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] [instance: 50a3e5fd-193e-4c31-a7ce-b29b4ff10849] Deleting local config drive /var/lib/nova/instances/50a3e5fd-193e-4c31-a7ce-b29b4ff10849/disk.config because it was imported into RBD.
Jan 20 14:43:26 compute-0 kernel: tap09356ee8-65: entered promiscuous mode
Jan 20 14:43:26 compute-0 NetworkManager[48960]: <info>  [1768920206.2544] manager: (tap09356ee8-65): new Tun device (/org/freedesktop/NetworkManager/Devices/136)
Jan 20 14:43:26 compute-0 ovn_controller[148666]: 2026-01-20T14:43:26Z|00269|binding|INFO|Claiming lport 09356ee8-6584-422c-b052-c6e0aedc7ab4 for this chassis.
Jan 20 14:43:26 compute-0 ovn_controller[148666]: 2026-01-20T14:43:26Z|00270|binding|INFO|09356ee8-6584-422c-b052-c6e0aedc7ab4: Claiming fa:16:3e:4b:70:28 10.100.0.5
Jan 20 14:43:26 compute-0 nova_compute[250018]: 2026-01-20 14:43:26.301 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:43:26 compute-0 nova_compute[250018]: 2026-01-20 14:43:26.308 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:43:26 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:43:26.322 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4b:70:28 10.100.0.5'], port_security=['fa:16:3e:4b:70:28 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '50a3e5fd-193e-4c31-a7ce-b29b4ff10849', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b9346c24-c2ec-4e94-9b82-f911ab82abc7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9fbbc94ed3bb41b1a060e5a7e1099ccf', 'neutron:revision_number': '2', 'neutron:security_group_ids': '074276d6-bd54-418a-a9d1-e949772a1489', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d6902607-52d1-4bb5-82d0-ef4c4182165b, chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=09356ee8-6584-422c-b052-c6e0aedc7ab4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:43:26 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:43:26.323 160071 INFO neutron.agent.ovn.metadata.agent [-] Port 09356ee8-6584-422c-b052-c6e0aedc7ab4 in datapath b9346c24-c2ec-4e94-9b82-f911ab82abc7 bound to our chassis
Jan 20 14:43:26 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:43:26.323 160071 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b9346c24-c2ec-4e94-9b82-f911ab82abc7
Jan 20 14:43:26 compute-0 systemd-machined[216401]: New machine qemu-36-instance-00000052.
Jan 20 14:43:26 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:43:26.339 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[698a3f3a-bcf5-41ef-b36c-865a9b4a93d0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:43:26 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:43:26.340 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb9346c24-c1 in ovnmeta-b9346c24-c2ec-4e94-9b82-f911ab82abc7 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 14:43:26 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:43:26.343 257604 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb9346c24-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 14:43:26 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:43:26.343 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[c0252c42-6204-47a4-95d1-899da550512b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:43:26 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:43:26.344 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[6d38360c-ab30-4778-b180-7864984a4fb3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:43:26 compute-0 systemd[1]: Started Virtual Machine qemu-36-instance-00000052.
Jan 20 14:43:26 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:43:26.364 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[b879cf08-7d39-4055-8e3a-01bfb1bfc852]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:43:26 compute-0 systemd-udevd[298640]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:43:26 compute-0 NetworkManager[48960]: <info>  [1768920206.3775] device (tap09356ee8-65): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 14:43:26 compute-0 NetworkManager[48960]: <info>  [1768920206.3784] device (tap09356ee8-65): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 14:43:26 compute-0 ovn_controller[148666]: 2026-01-20T14:43:26Z|00271|binding|INFO|Setting lport 09356ee8-6584-422c-b052-c6e0aedc7ab4 ovn-installed in OVS
Jan 20 14:43:26 compute-0 ovn_controller[148666]: 2026-01-20T14:43:26Z|00272|binding|INFO|Setting lport 09356ee8-6584-422c-b052-c6e0aedc7ab4 up in Southbound
Jan 20 14:43:26 compute-0 nova_compute[250018]: 2026-01-20 14:43:26.398 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:43:26 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:43:26.398 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[21f6318f-f5cc-4eb1-ab95-1b4c2637e942]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:43:26 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:43:26.441 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[ba26bfd4-e356-4c55-833d-ce73b6e4410f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:43:26 compute-0 NetworkManager[48960]: <info>  [1768920206.4529] manager: (tapb9346c24-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/137)
Jan 20 14:43:26 compute-0 systemd-udevd[298643]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:43:26 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:43:26.454 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[9f0c13e7-dd49-4724-8e03-23f4a61971c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:43:26 compute-0 nova_compute[250018]: 2026-01-20 14:43:26.509 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:43:26 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:43:26.517 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[4d1d0891-bdc0-46c4-bbbe-f6532005117f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:43:26 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:43:26.520 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[2cad94e4-4b57-49f2-8bb5-8e1b0bb767dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:43:26 compute-0 nova_compute[250018]: 2026-01-20 14:43:26.537 250022 DEBUG nova.network.neutron [req-447c1acb-c7cb-4844-bbc9-a580ed5ce96b req-3b95dd5b-5bfd-4a57-90dc-4f2d02d200ae 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 50a3e5fd-193e-4c31-a7ce-b29b4ff10849] Updated VIF entry in instance network info cache for port 09356ee8-6584-422c-b052-c6e0aedc7ab4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 14:43:26 compute-0 nova_compute[250018]: 2026-01-20 14:43:26.538 250022 DEBUG nova.network.neutron [req-447c1acb-c7cb-4844-bbc9-a580ed5ce96b req-3b95dd5b-5bfd-4a57-90dc-4f2d02d200ae 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 50a3e5fd-193e-4c31-a7ce-b29b4ff10849] Updating instance_info_cache with network_info: [{"id": "09356ee8-6584-422c-b052-c6e0aedc7ab4", "address": "fa:16:3e:4b:70:28", "network": {"id": "b9346c24-c2ec-4e94-9b82-f911ab82abc7", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-504372403-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9fbbc94ed3bb41b1a060e5a7e1099ccf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09356ee8-65", "ovs_interfaceid": "09356ee8-6584-422c-b052-c6e0aedc7ab4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:43:26 compute-0 sudo[298653]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:43:26 compute-0 NetworkManager[48960]: <info>  [1768920206.5552] device (tapb9346c24-c0): carrier: link connected
Jan 20 14:43:26 compute-0 sudo[298653]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:43:26 compute-0 sudo[298653]: pam_unix(sudo:session): session closed for user root
Jan 20 14:43:26 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:43:26.568 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[8b13309b-f76e-4904-ac8b-596485e28854]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:43:26 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:43:26.589 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[b5a79cb9-d114-406d-b284-acb3e20a2376]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb9346c24-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:57:83:f8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 86], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 614727, 'reachable_time': 36518, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 298699, 'error': None, 'target': 'ovnmeta-b9346c24-c2ec-4e94-9b82-f911ab82abc7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:43:26 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:43:26.606 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[44b59796-6067-4cee-9e8c-b016e67a793c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe57:83f8'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 614727, 'tstamp': 614727}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 298715, 'error': None, 'target': 'ovnmeta-b9346c24-c2ec-4e94-9b82-f911ab82abc7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:43:26 compute-0 sudo[298696]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:43:26 compute-0 sudo[298696]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:43:26 compute-0 sudo[298696]: pam_unix(sudo:session): session closed for user root
Jan 20 14:43:26 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:43:26.625 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[727df3d0-9f80-4c14-be82-2089e7ec2bbb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb9346c24-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:57:83:f8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 86], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 614727, 'reachable_time': 36518, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 298721, 'error': None, 'target': 'ovnmeta-b9346c24-c2ec-4e94-9b82-f911ab82abc7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:43:26 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:43:26.652 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[6590f066-dfa6-4d3c-86a9-0f778befc552]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:43:26 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:43:26.706 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[038ebbd6-ef80-43da-aaf3-1f394bae70d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:43:26 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:43:26.707 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb9346c24-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:43:26 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:43:26.707 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:43:26 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:43:26.708 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb9346c24-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:43:26 compute-0 nova_compute[250018]: 2026-01-20 14:43:26.709 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:43:26 compute-0 NetworkManager[48960]: <info>  [1768920206.7100] manager: (tapb9346c24-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/138)
Jan 20 14:43:26 compute-0 kernel: tapb9346c24-c0: entered promiscuous mode
Jan 20 14:43:26 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:43:26.712 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb9346c24-c0, col_values=(('external_ids', {'iface-id': '2ebe49d9-8184-4676-9d89-c4994a985ed3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:43:26 compute-0 ovn_controller[148666]: 2026-01-20T14:43:26Z|00273|binding|INFO|Releasing lport 2ebe49d9-8184-4676-9d89-c4994a985ed3 from this chassis (sb_readonly=0)
Jan 20 14:43:26 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:43:26.715 160071 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b9346c24-c2ec-4e94-9b82-f911ab82abc7.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b9346c24-c2ec-4e94-9b82-f911ab82abc7.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 14:43:26 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:43:26.716 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[9114f143-93fe-433e-bf55-46e7e6314b19]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:43:26 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:43:26.716 160071 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 14:43:26 compute-0 ovn_metadata_agent[160049]: global
Jan 20 14:43:26 compute-0 ovn_metadata_agent[160049]:     log         /dev/log local0 debug
Jan 20 14:43:26 compute-0 ovn_metadata_agent[160049]:     log-tag     haproxy-metadata-proxy-b9346c24-c2ec-4e94-9b82-f911ab82abc7
Jan 20 14:43:26 compute-0 ovn_metadata_agent[160049]:     user        root
Jan 20 14:43:26 compute-0 ovn_metadata_agent[160049]:     group       root
Jan 20 14:43:26 compute-0 ovn_metadata_agent[160049]:     maxconn     1024
Jan 20 14:43:26 compute-0 ovn_metadata_agent[160049]:     pidfile     /var/lib/neutron/external/pids/b9346c24-c2ec-4e94-9b82-f911ab82abc7.pid.haproxy
Jan 20 14:43:26 compute-0 ovn_metadata_agent[160049]:     daemon
Jan 20 14:43:26 compute-0 ovn_metadata_agent[160049]: 
Jan 20 14:43:26 compute-0 ovn_metadata_agent[160049]: defaults
Jan 20 14:43:26 compute-0 ovn_metadata_agent[160049]:     log global
Jan 20 14:43:26 compute-0 ovn_metadata_agent[160049]:     mode http
Jan 20 14:43:26 compute-0 ovn_metadata_agent[160049]:     option httplog
Jan 20 14:43:26 compute-0 ovn_metadata_agent[160049]:     option dontlognull
Jan 20 14:43:26 compute-0 ovn_metadata_agent[160049]:     option http-server-close
Jan 20 14:43:26 compute-0 ovn_metadata_agent[160049]:     option forwardfor
Jan 20 14:43:26 compute-0 ovn_metadata_agent[160049]:     retries                 3
Jan 20 14:43:26 compute-0 ovn_metadata_agent[160049]:     timeout http-request    30s
Jan 20 14:43:26 compute-0 ovn_metadata_agent[160049]:     timeout connect         30s
Jan 20 14:43:26 compute-0 ovn_metadata_agent[160049]:     timeout client          32s
Jan 20 14:43:26 compute-0 ovn_metadata_agent[160049]:     timeout server          32s
Jan 20 14:43:26 compute-0 ovn_metadata_agent[160049]:     timeout http-keep-alive 30s
Jan 20 14:43:26 compute-0 ovn_metadata_agent[160049]: 
Jan 20 14:43:26 compute-0 ovn_metadata_agent[160049]: 
Jan 20 14:43:26 compute-0 ovn_metadata_agent[160049]: listen listener
Jan 20 14:43:26 compute-0 ovn_metadata_agent[160049]:     bind 169.254.169.254:80
Jan 20 14:43:26 compute-0 ovn_metadata_agent[160049]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 14:43:26 compute-0 ovn_metadata_agent[160049]:     http-request add-header X-OVN-Network-ID b9346c24-c2ec-4e94-9b82-f911ab82abc7
Jan 20 14:43:26 compute-0 ovn_metadata_agent[160049]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 14:43:26 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:43:26.717 160071 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b9346c24-c2ec-4e94-9b82-f911ab82abc7', 'env', 'PROCESS_TAG=haproxy-b9346c24-c2ec-4e94-9b82-f911ab82abc7', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b9346c24-c2ec-4e94-9b82-f911ab82abc7.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 14:43:26 compute-0 nova_compute[250018]: 2026-01-20 14:43:26.714 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:43:26 compute-0 nova_compute[250018]: 2026-01-20 14:43:26.730 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:43:26 compute-0 nova_compute[250018]: 2026-01-20 14:43:26.731 250022 DEBUG oslo_concurrency.lockutils [req-447c1acb-c7cb-4844-bbc9-a580ed5ce96b req-3b95dd5b-5bfd-4a57-90dc-4f2d02d200ae 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-50a3e5fd-193e-4c31-a7ce-b29b4ff10849" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:43:26 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:43:26 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:43:26 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:43:26.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:43:26 compute-0 nova_compute[250018]: 2026-01-20 14:43:26.935 250022 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768920191.8869796, 1951432b-2c0c-4d1b-90df-d94dcf9fc32e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:43:26 compute-0 nova_compute[250018]: 2026-01-20 14:43:26.936 250022 INFO nova.compute.manager [-] [instance: 1951432b-2c0c-4d1b-90df-d94dcf9fc32e] VM Stopped (Lifecycle Event)
Jan 20 14:43:26 compute-0 nova_compute[250018]: 2026-01-20 14:43:26.950 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768920206.9506404, 50a3e5fd-193e-4c31-a7ce-b29b4ff10849 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:43:26 compute-0 nova_compute[250018]: 2026-01-20 14:43:26.951 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 50a3e5fd-193e-4c31-a7ce-b29b4ff10849] VM Started (Lifecycle Event)
Jan 20 14:43:26 compute-0 nova_compute[250018]: 2026-01-20 14:43:26.956 250022 DEBUG nova.compute.manager [None req-4caa6288-1e51-4e5a-9189-460bf0cdc7f6 - - - - - -] [instance: 1951432b-2c0c-4d1b-90df-d94dcf9fc32e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:43:26 compute-0 nova_compute[250018]: 2026-01-20 14:43:26.972 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 50a3e5fd-193e-4c31-a7ce-b29b4ff10849] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:43:26 compute-0 nova_compute[250018]: 2026-01-20 14:43:26.976 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768920206.951327, 50a3e5fd-193e-4c31-a7ce-b29b4ff10849 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:43:26 compute-0 nova_compute[250018]: 2026-01-20 14:43:26.976 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 50a3e5fd-193e-4c31-a7ce-b29b4ff10849] VM Paused (Lifecycle Event)
Jan 20 14:43:26 compute-0 nova_compute[250018]: 2026-01-20 14:43:26.984 250022 DEBUG nova.compute.manager [req-b08f9e85-edd3-43fb-b94b-a677e7cfad9b req-a712c41d-36e0-46cb-8e27-9a4a90c18352 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 50a3e5fd-193e-4c31-a7ce-b29b4ff10849] Received event network-vif-plugged-09356ee8-6584-422c-b052-c6e0aedc7ab4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:43:26 compute-0 nova_compute[250018]: 2026-01-20 14:43:26.985 250022 DEBUG oslo_concurrency.lockutils [req-b08f9e85-edd3-43fb-b94b-a677e7cfad9b req-a712c41d-36e0-46cb-8e27-9a4a90c18352 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "50a3e5fd-193e-4c31-a7ce-b29b4ff10849-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:43:26 compute-0 nova_compute[250018]: 2026-01-20 14:43:26.985 250022 DEBUG oslo_concurrency.lockutils [req-b08f9e85-edd3-43fb-b94b-a677e7cfad9b req-a712c41d-36e0-46cb-8e27-9a4a90c18352 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "50a3e5fd-193e-4c31-a7ce-b29b4ff10849-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:43:26 compute-0 nova_compute[250018]: 2026-01-20 14:43:26.985 250022 DEBUG oslo_concurrency.lockutils [req-b08f9e85-edd3-43fb-b94b-a677e7cfad9b req-a712c41d-36e0-46cb-8e27-9a4a90c18352 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "50a3e5fd-193e-4c31-a7ce-b29b4ff10849-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:43:26 compute-0 nova_compute[250018]: 2026-01-20 14:43:26.986 250022 DEBUG nova.compute.manager [req-b08f9e85-edd3-43fb-b94b-a677e7cfad9b req-a712c41d-36e0-46cb-8e27-9a4a90c18352 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 50a3e5fd-193e-4c31-a7ce-b29b4ff10849] Processing event network-vif-plugged-09356ee8-6584-422c-b052-c6e0aedc7ab4 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 20 14:43:26 compute-0 nova_compute[250018]: 2026-01-20 14:43:26.986 250022 DEBUG nova.compute.manager [None req-ef2fa57b-1b9d-4fa2-b4e1-837639bd88ba 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] [instance: 50a3e5fd-193e-4c31-a7ce-b29b4ff10849] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 14:43:26 compute-0 nova_compute[250018]: 2026-01-20 14:43:26.990 250022 DEBUG nova.virt.libvirt.driver [None req-ef2fa57b-1b9d-4fa2-b4e1-837639bd88ba 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] [instance: 50a3e5fd-193e-4c31-a7ce-b29b4ff10849] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 14:43:26 compute-0 nova_compute[250018]: 2026-01-20 14:43:26.993 250022 INFO nova.virt.libvirt.driver [-] [instance: 50a3e5fd-193e-4c31-a7ce-b29b4ff10849] Instance spawned successfully.
Jan 20 14:43:26 compute-0 nova_compute[250018]: 2026-01-20 14:43:26.993 250022 DEBUG nova.virt.libvirt.driver [None req-ef2fa57b-1b9d-4fa2-b4e1-837639bd88ba 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] [instance: 50a3e5fd-193e-4c31-a7ce-b29b4ff10849] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 14:43:26 compute-0 nova_compute[250018]: 2026-01-20 14:43:26.997 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 50a3e5fd-193e-4c31-a7ce-b29b4ff10849] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:43:27 compute-0 nova_compute[250018]: 2026-01-20 14:43:27.001 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768920206.989692, 50a3e5fd-193e-4c31-a7ce-b29b4ff10849 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:43:27 compute-0 nova_compute[250018]: 2026-01-20 14:43:27.001 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 50a3e5fd-193e-4c31-a7ce-b29b4ff10849] VM Resumed (Lifecycle Event)
Jan 20 14:43:27 compute-0 nova_compute[250018]: 2026-01-20 14:43:27.008 250022 DEBUG nova.virt.libvirt.driver [None req-ef2fa57b-1b9d-4fa2-b4e1-837639bd88ba 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] [instance: 50a3e5fd-193e-4c31-a7ce-b29b4ff10849] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:43:27 compute-0 nova_compute[250018]: 2026-01-20 14:43:27.009 250022 DEBUG nova.virt.libvirt.driver [None req-ef2fa57b-1b9d-4fa2-b4e1-837639bd88ba 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] [instance: 50a3e5fd-193e-4c31-a7ce-b29b4ff10849] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:43:27 compute-0 nova_compute[250018]: 2026-01-20 14:43:27.009 250022 DEBUG nova.virt.libvirt.driver [None req-ef2fa57b-1b9d-4fa2-b4e1-837639bd88ba 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] [instance: 50a3e5fd-193e-4c31-a7ce-b29b4ff10849] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:43:27 compute-0 nova_compute[250018]: 2026-01-20 14:43:27.010 250022 DEBUG nova.virt.libvirt.driver [None req-ef2fa57b-1b9d-4fa2-b4e1-837639bd88ba 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] [instance: 50a3e5fd-193e-4c31-a7ce-b29b4ff10849] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:43:27 compute-0 nova_compute[250018]: 2026-01-20 14:43:27.010 250022 DEBUG nova.virt.libvirt.driver [None req-ef2fa57b-1b9d-4fa2-b4e1-837639bd88ba 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] [instance: 50a3e5fd-193e-4c31-a7ce-b29b4ff10849] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:43:27 compute-0 nova_compute[250018]: 2026-01-20 14:43:27.011 250022 DEBUG nova.virt.libvirt.driver [None req-ef2fa57b-1b9d-4fa2-b4e1-837639bd88ba 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] [instance: 50a3e5fd-193e-4c31-a7ce-b29b4ff10849] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:43:27 compute-0 nova_compute[250018]: 2026-01-20 14:43:27.020 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 50a3e5fd-193e-4c31-a7ce-b29b4ff10849] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:43:27 compute-0 nova_compute[250018]: 2026-01-20 14:43:27.023 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 50a3e5fd-193e-4c31-a7ce-b29b4ff10849] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:43:27 compute-0 nova_compute[250018]: 2026-01-20 14:43:27.051 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 50a3e5fd-193e-4c31-a7ce-b29b4ff10849] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:43:27 compute-0 podman[298797]: 2026-01-20 14:43:27.059545908 +0000 UTC m=+0.048319850 container create f1b38fae35d9447fd601d87aee208b210a5e3817976b2fb0704ddf81f7fb2176 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b9346c24-c2ec-4e94-9b82-f911ab82abc7, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:43:27 compute-0 nova_compute[250018]: 2026-01-20 14:43:27.087 250022 INFO nova.compute.manager [None req-ef2fa57b-1b9d-4fa2-b4e1-837639bd88ba 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] [instance: 50a3e5fd-193e-4c31-a7ce-b29b4ff10849] Took 8.00 seconds to spawn the instance on the hypervisor.
Jan 20 14:43:27 compute-0 nova_compute[250018]: 2026-01-20 14:43:27.088 250022 DEBUG nova.compute.manager [None req-ef2fa57b-1b9d-4fa2-b4e1-837639bd88ba 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] [instance: 50a3e5fd-193e-4c31-a7ce-b29b4ff10849] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:43:27 compute-0 systemd[1]: Started libpod-conmon-f1b38fae35d9447fd601d87aee208b210a5e3817976b2fb0704ddf81f7fb2176.scope.
Jan 20 14:43:27 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:43:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c4150b8f4b6bdf0925e398e6a4c0242f7601e52e673b8736f534b543f5c8c6a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 14:43:27 compute-0 podman[298797]: 2026-01-20 14:43:27.03523918 +0000 UTC m=+0.024013132 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 14:43:27 compute-0 nova_compute[250018]: 2026-01-20 14:43:27.139 250022 INFO nova.compute.manager [None req-ef2fa57b-1b9d-4fa2-b4e1-837639bd88ba 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] [instance: 50a3e5fd-193e-4c31-a7ce-b29b4ff10849] Took 9.03 seconds to build instance.
Jan 20 14:43:27 compute-0 podman[298797]: 2026-01-20 14:43:27.146750091 +0000 UTC m=+0.135524043 container init f1b38fae35d9447fd601d87aee208b210a5e3817976b2fb0704ddf81f7fb2176 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b9346c24-c2ec-4e94-9b82-f911ab82abc7, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Jan 20 14:43:27 compute-0 podman[298797]: 2026-01-20 14:43:27.151248124 +0000 UTC m=+0.140022056 container start f1b38fae35d9447fd601d87aee208b210a5e3817976b2fb0704ddf81f7fb2176 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b9346c24-c2ec-4e94-9b82-f911ab82abc7, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:43:27 compute-0 nova_compute[250018]: 2026-01-20 14:43:27.158 250022 DEBUG oslo_concurrency.lockutils [None req-ef2fa57b-1b9d-4fa2-b4e1-837639bd88ba 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] Lock "50a3e5fd-193e-4c31-a7ce-b29b4ff10849" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.116s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:43:27 compute-0 neutron-haproxy-ovnmeta-b9346c24-c2ec-4e94-9b82-f911ab82abc7[298812]: [NOTICE]   (298816) : New worker (298818) forked
Jan 20 14:43:27 compute-0 neutron-haproxy-ovnmeta-b9346c24-c2ec-4e94-9b82-f911ab82abc7[298812]: [NOTICE]   (298816) : Loading success.
Jan 20 14:43:27 compute-0 nova_compute[250018]: 2026-01-20 14:43:27.188 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:43:27 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1653: 321 pgs: 321 active+clean; 88 MiB data, 691 MiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 1.8 MiB/s wr, 250 op/s
Jan 20 14:43:27 compute-0 ceph-mon[74360]: pgmap v1653: 321 pgs: 321 active+clean; 88 MiB data, 691 MiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 1.8 MiB/s wr, 250 op/s
Jan 20 14:43:28 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:43:28 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:43:28 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:43:28.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:43:28 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:43:28 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:43:28 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:43:28 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:43:28.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:43:29 compute-0 nova_compute[250018]: 2026-01-20 14:43:29.064 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:43:29 compute-0 nova_compute[250018]: 2026-01-20 14:43:29.157 250022 DEBUG nova.compute.manager [req-7d04e27a-5a34-4a7f-b387-5cb5e1ecb86c req-054c9bb5-6c92-4f07-ac6f-1377e85f591f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 50a3e5fd-193e-4c31-a7ce-b29b4ff10849] Received event network-vif-plugged-09356ee8-6584-422c-b052-c6e0aedc7ab4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:43:29 compute-0 nova_compute[250018]: 2026-01-20 14:43:29.157 250022 DEBUG oslo_concurrency.lockutils [req-7d04e27a-5a34-4a7f-b387-5cb5e1ecb86c req-054c9bb5-6c92-4f07-ac6f-1377e85f591f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "50a3e5fd-193e-4c31-a7ce-b29b4ff10849-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:43:29 compute-0 nova_compute[250018]: 2026-01-20 14:43:29.158 250022 DEBUG oslo_concurrency.lockutils [req-7d04e27a-5a34-4a7f-b387-5cb5e1ecb86c req-054c9bb5-6c92-4f07-ac6f-1377e85f591f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "50a3e5fd-193e-4c31-a7ce-b29b4ff10849-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:43:29 compute-0 nova_compute[250018]: 2026-01-20 14:43:29.158 250022 DEBUG oslo_concurrency.lockutils [req-7d04e27a-5a34-4a7f-b387-5cb5e1ecb86c req-054c9bb5-6c92-4f07-ac6f-1377e85f591f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "50a3e5fd-193e-4c31-a7ce-b29b4ff10849-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:43:29 compute-0 nova_compute[250018]: 2026-01-20 14:43:29.158 250022 DEBUG nova.compute.manager [req-7d04e27a-5a34-4a7f-b387-5cb5e1ecb86c req-054c9bb5-6c92-4f07-ac6f-1377e85f591f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 50a3e5fd-193e-4c31-a7ce-b29b4ff10849] No waiting events found dispatching network-vif-plugged-09356ee8-6584-422c-b052-c6e0aedc7ab4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:43:29 compute-0 nova_compute[250018]: 2026-01-20 14:43:29.158 250022 WARNING nova.compute.manager [req-7d04e27a-5a34-4a7f-b387-5cb5e1ecb86c req-054c9bb5-6c92-4f07-ac6f-1377e85f591f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 50a3e5fd-193e-4c31-a7ce-b29b4ff10849] Received unexpected event network-vif-plugged-09356ee8-6584-422c-b052-c6e0aedc7ab4 for instance with vm_state active and task_state None.
Jan 20 14:43:29 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1654: 321 pgs: 321 active+clean; 88 MiB data, 691 MiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 1.8 MiB/s wr, 198 op/s
Jan 20 14:43:29 compute-0 ceph-mon[74360]: pgmap v1654: 321 pgs: 321 active+clean; 88 MiB data, 691 MiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 1.8 MiB/s wr, 198 op/s
Jan 20 14:43:30 compute-0 nova_compute[250018]: 2026-01-20 14:43:30.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:43:30 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:43:30 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:43:30 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:43:30.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:43:30 compute-0 nova_compute[250018]: 2026-01-20 14:43:30.227 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:43:30 compute-0 NetworkManager[48960]: <info>  [1768920210.2545] manager: (patch-br-int-to-provnet-b62c391b-f7a3-4a38-a0df-72ac0383ca74): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/139)
Jan 20 14:43:30 compute-0 NetworkManager[48960]: <info>  [1768920210.2555] manager: (patch-provnet-b62c391b-f7a3-4a38-a0df-72ac0383ca74-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/140)
Jan 20 14:43:30 compute-0 nova_compute[250018]: 2026-01-20 14:43:30.253 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:43:30 compute-0 ovn_controller[148666]: 2026-01-20T14:43:30Z|00274|binding|INFO|Releasing lport 2ebe49d9-8184-4676-9d89-c4994a985ed3 from this chassis (sb_readonly=0)
Jan 20 14:43:30 compute-0 nova_compute[250018]: 2026-01-20 14:43:30.313 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:43:30 compute-0 nova_compute[250018]: 2026-01-20 14:43:30.321 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:43:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:43:30.756 160071 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:43:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:43:30.757 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:43:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:43:30.757 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:43:30 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:43:30 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:43:30 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:43:30.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:43:31 compute-0 nova_compute[250018]: 2026-01-20 14:43:31.007 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:43:31 compute-0 nova_compute[250018]: 2026-01-20 14:43:31.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:43:31 compute-0 nova_compute[250018]: 2026-01-20 14:43:31.071 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:43:31 compute-0 nova_compute[250018]: 2026-01-20 14:43:31.071 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:43:31 compute-0 nova_compute[250018]: 2026-01-20 14:43:31.071 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:43:31 compute-0 nova_compute[250018]: 2026-01-20 14:43:31.072 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 14:43:31 compute-0 nova_compute[250018]: 2026-01-20 14:43:31.072 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:43:31 compute-0 nova_compute[250018]: 2026-01-20 14:43:31.358 250022 DEBUG nova.compute.manager [req-d6c7165f-6aa8-4c62-961f-0bf9e5181d24 req-30a16add-6126-4b8b-9a6c-c632437ea7ea 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 50a3e5fd-193e-4c31-a7ce-b29b4ff10849] Received event network-changed-09356ee8-6584-422c-b052-c6e0aedc7ab4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:43:31 compute-0 nova_compute[250018]: 2026-01-20 14:43:31.359 250022 DEBUG nova.compute.manager [req-d6c7165f-6aa8-4c62-961f-0bf9e5181d24 req-30a16add-6126-4b8b-9a6c-c632437ea7ea 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 50a3e5fd-193e-4c31-a7ce-b29b4ff10849] Refreshing instance network info cache due to event network-changed-09356ee8-6584-422c-b052-c6e0aedc7ab4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:43:31 compute-0 nova_compute[250018]: 2026-01-20 14:43:31.360 250022 DEBUG oslo_concurrency.lockutils [req-d6c7165f-6aa8-4c62-961f-0bf9e5181d24 req-30a16add-6126-4b8b-9a6c-c632437ea7ea 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-50a3e5fd-193e-4c31-a7ce-b29b4ff10849" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:43:31 compute-0 nova_compute[250018]: 2026-01-20 14:43:31.360 250022 DEBUG oslo_concurrency.lockutils [req-d6c7165f-6aa8-4c62-961f-0bf9e5181d24 req-30a16add-6126-4b8b-9a6c-c632437ea7ea 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-50a3e5fd-193e-4c31-a7ce-b29b4ff10849" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:43:31 compute-0 nova_compute[250018]: 2026-01-20 14:43:31.360 250022 DEBUG nova.network.neutron [req-d6c7165f-6aa8-4c62-961f-0bf9e5181d24 req-30a16add-6126-4b8b-9a6c-c632437ea7ea 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 50a3e5fd-193e-4c31-a7ce-b29b4ff10849] Refreshing network info cache for port 09356ee8-6584-422c-b052-c6e0aedc7ab4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:43:31 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:43:31 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2958872977' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:43:31 compute-0 nova_compute[250018]: 2026-01-20 14:43:31.494 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:43:31 compute-0 nova_compute[250018]: 2026-01-20 14:43:31.583 250022 DEBUG nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:43:31 compute-0 nova_compute[250018]: 2026-01-20 14:43:31.583 250022 DEBUG nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:43:31 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/2958872977' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:43:31 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1655: 321 pgs: 321 active+clean; 88 MiB data, 691 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 1.8 MiB/s wr, 164 op/s
Jan 20 14:43:31 compute-0 nova_compute[250018]: 2026-01-20 14:43:31.762 250022 WARNING nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:43:31 compute-0 nova_compute[250018]: 2026-01-20 14:43:31.763 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4383MB free_disk=20.96752166748047GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 14:43:31 compute-0 nova_compute[250018]: 2026-01-20 14:43:31.764 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:43:31 compute-0 nova_compute[250018]: 2026-01-20 14:43:31.764 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:43:31 compute-0 nova_compute[250018]: 2026-01-20 14:43:31.965 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Instance 50a3e5fd-193e-4c31-a7ce-b29b4ff10849 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 14:43:31 compute-0 nova_compute[250018]: 2026-01-20 14:43:31.966 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 14:43:31 compute-0 nova_compute[250018]: 2026-01-20 14:43:31.966 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 14:43:32 compute-0 nova_compute[250018]: 2026-01-20 14:43:32.002 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:43:32 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:43:32 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:43:32 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:43:32.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:43:32 compute-0 nova_compute[250018]: 2026-01-20 14:43:32.191 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:43:32 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:43:32 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/526759509' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:43:32 compute-0 nova_compute[250018]: 2026-01-20 14:43:32.501 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:43:32 compute-0 nova_compute[250018]: 2026-01-20 14:43:32.507 250022 DEBUG nova.compute.provider_tree [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:43:32 compute-0 nova_compute[250018]: 2026-01-20 14:43:32.544 250022 DEBUG nova.scheduler.client.report [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:43:32 compute-0 nova_compute[250018]: 2026-01-20 14:43:32.579 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 14:43:32 compute-0 nova_compute[250018]: 2026-01-20 14:43:32.580 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.816s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:43:32 compute-0 nova_compute[250018]: 2026-01-20 14:43:32.581 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:43:32 compute-0 nova_compute[250018]: 2026-01-20 14:43:32.581 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 20 14:43:32 compute-0 ceph-mon[74360]: pgmap v1655: 321 pgs: 321 active+clean; 88 MiB data, 691 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 1.8 MiB/s wr, 164 op/s
Jan 20 14:43:32 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/526759509' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:43:32 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:43:32 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:43:32 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:43:32.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:43:33 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:43:33 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1656: 321 pgs: 321 active+clean; 88 MiB data, 691 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 119 op/s
Jan 20 14:43:33 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/543489734' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:43:33 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/23827069' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:43:33 compute-0 nova_compute[250018]: 2026-01-20 14:43:33.748 250022 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768920198.7471347, 52f99263-c471-4724-813b-98b8a7c3c301 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:43:33 compute-0 nova_compute[250018]: 2026-01-20 14:43:33.749 250022 INFO nova.compute.manager [-] [instance: 52f99263-c471-4724-813b-98b8a7c3c301] VM Stopped (Lifecycle Event)
Jan 20 14:43:33 compute-0 nova_compute[250018]: 2026-01-20 14:43:33.811 250022 DEBUG nova.compute.manager [None req-16d23ea2-6765-41e6-b638-0f83a33ce0bf - - - - - -] [instance: 52f99263-c471-4724-813b-98b8a7c3c301] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:43:33 compute-0 nova_compute[250018]: 2026-01-20 14:43:33.993 250022 DEBUG nova.network.neutron [req-d6c7165f-6aa8-4c62-961f-0bf9e5181d24 req-30a16add-6126-4b8b-9a6c-c632437ea7ea 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 50a3e5fd-193e-4c31-a7ce-b29b4ff10849] Updated VIF entry in instance network info cache for port 09356ee8-6584-422c-b052-c6e0aedc7ab4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 14:43:33 compute-0 nova_compute[250018]: 2026-01-20 14:43:33.994 250022 DEBUG nova.network.neutron [req-d6c7165f-6aa8-4c62-961f-0bf9e5181d24 req-30a16add-6126-4b8b-9a6c-c632437ea7ea 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 50a3e5fd-193e-4c31-a7ce-b29b4ff10849] Updating instance_info_cache with network_info: [{"id": "09356ee8-6584-422c-b052-c6e0aedc7ab4", "address": "fa:16:3e:4b:70:28", "network": {"id": "b9346c24-c2ec-4e94-9b82-f911ab82abc7", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-504372403-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9fbbc94ed3bb41b1a060e5a7e1099ccf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09356ee8-65", "ovs_interfaceid": "09356ee8-6584-422c-b052-c6e0aedc7ab4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:43:34 compute-0 nova_compute[250018]: 2026-01-20 14:43:34.058 250022 DEBUG oslo_concurrency.lockutils [req-d6c7165f-6aa8-4c62-961f-0bf9e5181d24 req-30a16add-6126-4b8b-9a6c-c632437ea7ea 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-50a3e5fd-193e-4c31-a7ce-b29b4ff10849" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:43:34 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:43:34 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:43:34 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:43:34.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:43:34 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:43:34 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:43:34 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:43:34.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:43:34 compute-0 ceph-mon[74360]: pgmap v1656: 321 pgs: 321 active+clean; 88 MiB data, 691 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 119 op/s
Jan 20 14:43:34 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/332974038' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:43:34 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/2644878239' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:43:35 compute-0 nova_compute[250018]: 2026-01-20 14:43:35.228 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:43:35 compute-0 nova_compute[250018]: 2026-01-20 14:43:35.600 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:43:35 compute-0 nova_compute[250018]: 2026-01-20 14:43:35.601 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:43:35 compute-0 nova_compute[250018]: 2026-01-20 14:43:35.602 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 14:43:35 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1657: 321 pgs: 321 active+clean; 88 MiB data, 691 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.0 MiB/s wr, 89 op/s
Jan 20 14:43:35 compute-0 ceph-mon[74360]: pgmap v1657: 321 pgs: 321 active+clean; 88 MiB data, 691 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.0 MiB/s wr, 89 op/s
Jan 20 14:43:36 compute-0 nova_compute[250018]: 2026-01-20 14:43:36.052 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:43:36 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:43:36 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:43:36 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:43:36.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:43:36 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:43:36 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:43:36 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:43:36.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:43:37 compute-0 nova_compute[250018]: 2026-01-20 14:43:37.195 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:43:37 compute-0 nova_compute[250018]: 2026-01-20 14:43:37.622 250022 DEBUG oslo_concurrency.lockutils [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] Acquiring lock "ad4801ff-a246-41b7-af59-709b9a78601e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:43:37 compute-0 nova_compute[250018]: 2026-01-20 14:43:37.623 250022 DEBUG oslo_concurrency.lockutils [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] Lock "ad4801ff-a246-41b7-af59-709b9a78601e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:43:37 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1658: 321 pgs: 321 active+clean; 88 MiB data, 691 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Jan 20 14:43:37 compute-0 nova_compute[250018]: 2026-01-20 14:43:37.709 250022 DEBUG nova.compute.manager [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] [instance: ad4801ff-a246-41b7-af59-709b9a78601e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 14:43:37 compute-0 ceph-mon[74360]: pgmap v1658: 321 pgs: 321 active+clean; 88 MiB data, 691 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Jan 20 14:43:37 compute-0 nova_compute[250018]: 2026-01-20 14:43:37.848 250022 DEBUG oslo_concurrency.lockutils [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:43:37 compute-0 nova_compute[250018]: 2026-01-20 14:43:37.848 250022 DEBUG oslo_concurrency.lockutils [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:43:37 compute-0 nova_compute[250018]: 2026-01-20 14:43:37.854 250022 DEBUG nova.virt.hardware [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 14:43:37 compute-0 nova_compute[250018]: 2026-01-20 14:43:37.854 250022 INFO nova.compute.claims [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] [instance: ad4801ff-a246-41b7-af59-709b9a78601e] Claim successful on node compute-0.ctlplane.example.com
Jan 20 14:43:38 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:43:38 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:43:38 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:43:38.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:43:38 compute-0 nova_compute[250018]: 2026-01-20 14:43:38.120 250022 DEBUG oslo_concurrency.processutils [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:43:38 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:43:38 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:43:38 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/132858433' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:43:38 compute-0 nova_compute[250018]: 2026-01-20 14:43:38.651 250022 DEBUG oslo_concurrency.processutils [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.531s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:43:38 compute-0 nova_compute[250018]: 2026-01-20 14:43:38.659 250022 DEBUG nova.compute.provider_tree [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:43:38 compute-0 nova_compute[250018]: 2026-01-20 14:43:38.710 250022 DEBUG nova.scheduler.client.report [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:43:38 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:43:38 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:43:38 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:43:38.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:43:38 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/4057377900' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:43:38 compute-0 nova_compute[250018]: 2026-01-20 14:43:38.793 250022 DEBUG oslo_concurrency.lockutils [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.945s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:43:38 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/23348796' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:43:38 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/132858433' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:43:38 compute-0 nova_compute[250018]: 2026-01-20 14:43:38.794 250022 DEBUG nova.compute.manager [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] [instance: ad4801ff-a246-41b7-af59-709b9a78601e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 14:43:38 compute-0 nova_compute[250018]: 2026-01-20 14:43:38.913 250022 DEBUG nova.compute.manager [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] [instance: ad4801ff-a246-41b7-af59-709b9a78601e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 20 14:43:38 compute-0 nova_compute[250018]: 2026-01-20 14:43:38.913 250022 DEBUG nova.network.neutron [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] [instance: ad4801ff-a246-41b7-af59-709b9a78601e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 14:43:38 compute-0 nova_compute[250018]: 2026-01-20 14:43:38.950 250022 INFO nova.virt.libvirt.driver [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] [instance: ad4801ff-a246-41b7-af59-709b9a78601e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 14:43:38 compute-0 nova_compute[250018]: 2026-01-20 14:43:38.980 250022 DEBUG nova.compute.manager [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] [instance: ad4801ff-a246-41b7-af59-709b9a78601e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 14:43:39 compute-0 nova_compute[250018]: 2026-01-20 14:43:39.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:43:39 compute-0 nova_compute[250018]: 2026-01-20 14:43:39.050 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 14:43:39 compute-0 nova_compute[250018]: 2026-01-20 14:43:39.108 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 20 14:43:39 compute-0 nova_compute[250018]: 2026-01-20 14:43:39.134 250022 DEBUG nova.compute.manager [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] [instance: ad4801ff-a246-41b7-af59-709b9a78601e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 14:43:39 compute-0 nova_compute[250018]: 2026-01-20 14:43:39.135 250022 DEBUG nova.virt.libvirt.driver [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] [instance: ad4801ff-a246-41b7-af59-709b9a78601e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 14:43:39 compute-0 nova_compute[250018]: 2026-01-20 14:43:39.136 250022 INFO nova.virt.libvirt.driver [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] [instance: ad4801ff-a246-41b7-af59-709b9a78601e] Creating image(s)
Jan 20 14:43:39 compute-0 nova_compute[250018]: 2026-01-20 14:43:39.159 250022 DEBUG nova.storage.rbd_utils [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] rbd image ad4801ff-a246-41b7-af59-709b9a78601e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:43:39 compute-0 nova_compute[250018]: 2026-01-20 14:43:39.184 250022 DEBUG nova.storage.rbd_utils [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] rbd image ad4801ff-a246-41b7-af59-709b9a78601e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:43:39 compute-0 nova_compute[250018]: 2026-01-20 14:43:39.216 250022 DEBUG nova.storage.rbd_utils [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] rbd image ad4801ff-a246-41b7-af59-709b9a78601e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:43:39 compute-0 nova_compute[250018]: 2026-01-20 14:43:39.222 250022 DEBUG oslo_concurrency.processutils [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:43:39 compute-0 ovn_controller[148666]: 2026-01-20T14:43:39Z|00033|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:4b:70:28 10.100.0.5
Jan 20 14:43:39 compute-0 ovn_controller[148666]: 2026-01-20T14:43:39Z|00034|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:4b:70:28 10.100.0.5
Jan 20 14:43:39 compute-0 nova_compute[250018]: 2026-01-20 14:43:39.304 250022 DEBUG oslo_concurrency.processutils [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:43:39 compute-0 nova_compute[250018]: 2026-01-20 14:43:39.305 250022 DEBUG oslo_concurrency.lockutils [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] Acquiring lock "82d5c1918fd7c974214c7a48c1793a7a82560462" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:43:39 compute-0 nova_compute[250018]: 2026-01-20 14:43:39.306 250022 DEBUG oslo_concurrency.lockutils [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:43:39 compute-0 nova_compute[250018]: 2026-01-20 14:43:39.306 250022 DEBUG oslo_concurrency.lockutils [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:43:39 compute-0 nova_compute[250018]: 2026-01-20 14:43:39.336 250022 DEBUG nova.storage.rbd_utils [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] rbd image ad4801ff-a246-41b7-af59-709b9a78601e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:43:39 compute-0 nova_compute[250018]: 2026-01-20 14:43:39.340 250022 DEBUG oslo_concurrency.processutils [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 ad4801ff-a246-41b7-af59-709b9a78601e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:43:39 compute-0 nova_compute[250018]: 2026-01-20 14:43:39.663 250022 DEBUG oslo_concurrency.processutils [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 ad4801ff-a246-41b7-af59-709b9a78601e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.323s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:43:39 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1659: 321 pgs: 321 active+clean; 92 MiB data, 694 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 431 KiB/s wr, 81 op/s
Jan 20 14:43:39 compute-0 nova_compute[250018]: 2026-01-20 14:43:39.750 250022 DEBUG nova.policy [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2975742546164cad937d13671d17108a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '28a523cfe06042ff96554913a78e1e3a', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 20 14:43:39 compute-0 nova_compute[250018]: 2026-01-20 14:43:39.759 250022 DEBUG nova.storage.rbd_utils [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] resizing rbd image ad4801ff-a246-41b7-af59-709b9a78601e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 20 14:43:39 compute-0 ceph-mon[74360]: pgmap v1659: 321 pgs: 321 active+clean; 92 MiB data, 694 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 431 KiB/s wr, 81 op/s
Jan 20 14:43:39 compute-0 nova_compute[250018]: 2026-01-20 14:43:39.856 250022 DEBUG nova.objects.instance [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] Lazy-loading 'migration_context' on Instance uuid ad4801ff-a246-41b7-af59-709b9a78601e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:43:39 compute-0 nova_compute[250018]: 2026-01-20 14:43:39.880 250022 DEBUG nova.virt.libvirt.driver [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] [instance: ad4801ff-a246-41b7-af59-709b9a78601e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 14:43:39 compute-0 nova_compute[250018]: 2026-01-20 14:43:39.880 250022 DEBUG nova.virt.libvirt.driver [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] [instance: ad4801ff-a246-41b7-af59-709b9a78601e] Ensure instance console log exists: /var/lib/nova/instances/ad4801ff-a246-41b7-af59-709b9a78601e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 14:43:39 compute-0 nova_compute[250018]: 2026-01-20 14:43:39.881 250022 DEBUG oslo_concurrency.lockutils [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:43:39 compute-0 nova_compute[250018]: 2026-01-20 14:43:39.881 250022 DEBUG oslo_concurrency.lockutils [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:43:39 compute-0 nova_compute[250018]: 2026-01-20 14:43:39.881 250022 DEBUG oslo_concurrency.lockutils [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:43:40 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:43:40 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:43:40 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:43:40.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:43:40 compute-0 nova_compute[250018]: 2026-01-20 14:43:40.231 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:43:40 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:43:40 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:43:40 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:43:40.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:43:41 compute-0 nova_compute[250018]: 2026-01-20 14:43:41.462 250022 DEBUG nova.network.neutron [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] [instance: ad4801ff-a246-41b7-af59-709b9a78601e] Successfully created port: 052dfa44-9329-474e-bd60-601248d6420b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 20 14:43:41 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1660: 321 pgs: 321 active+clean; 176 MiB data, 738 MiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 4.1 MiB/s wr, 80 op/s
Jan 20 14:43:41 compute-0 ceph-mon[74360]: pgmap v1660: 321 pgs: 321 active+clean; 176 MiB data, 738 MiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 4.1 MiB/s wr, 80 op/s
Jan 20 14:43:42 compute-0 nova_compute[250018]: 2026-01-20 14:43:42.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:43:42 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:43:42 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:43:42 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:43:42.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:43:42 compute-0 nova_compute[250018]: 2026-01-20 14:43:42.196 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:43:42 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:43:42 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:43:42 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:43:42.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:43:42 compute-0 nova_compute[250018]: 2026-01-20 14:43:42.865 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:43:43 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:43:43 compute-0 nova_compute[250018]: 2026-01-20 14:43:43.490 250022 DEBUG nova.network.neutron [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] [instance: ad4801ff-a246-41b7-af59-709b9a78601e] Successfully updated port: 052dfa44-9329-474e-bd60-601248d6420b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 20 14:43:43 compute-0 nova_compute[250018]: 2026-01-20 14:43:43.511 250022 DEBUG oslo_concurrency.lockutils [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] Acquiring lock "refresh_cache-ad4801ff-a246-41b7-af59-709b9a78601e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:43:43 compute-0 nova_compute[250018]: 2026-01-20 14:43:43.511 250022 DEBUG oslo_concurrency.lockutils [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] Acquired lock "refresh_cache-ad4801ff-a246-41b7-af59-709b9a78601e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:43:43 compute-0 nova_compute[250018]: 2026-01-20 14:43:43.511 250022 DEBUG nova.network.neutron [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] [instance: ad4801ff-a246-41b7-af59-709b9a78601e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:43:43 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1661: 321 pgs: 321 active+clean; 235 MiB data, 766 MiB used, 20 GiB / 21 GiB avail; 686 KiB/s rd, 6.4 MiB/s wr, 107 op/s
Jan 20 14:43:43 compute-0 nova_compute[250018]: 2026-01-20 14:43:43.713 250022 DEBUG nova.compute.manager [req-f5ca6f31-1ac7-46e4-bd63-a894623e031f req-25f94120-6a0d-42d6-971e-cd91cea5a606 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: ad4801ff-a246-41b7-af59-709b9a78601e] Received event network-changed-052dfa44-9329-474e-bd60-601248d6420b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:43:43 compute-0 nova_compute[250018]: 2026-01-20 14:43:43.714 250022 DEBUG nova.compute.manager [req-f5ca6f31-1ac7-46e4-bd63-a894623e031f req-25f94120-6a0d-42d6-971e-cd91cea5a606 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: ad4801ff-a246-41b7-af59-709b9a78601e] Refreshing instance network info cache due to event network-changed-052dfa44-9329-474e-bd60-601248d6420b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:43:43 compute-0 nova_compute[250018]: 2026-01-20 14:43:43.714 250022 DEBUG oslo_concurrency.lockutils [req-f5ca6f31-1ac7-46e4-bd63-a894623e031f req-25f94120-6a0d-42d6-971e-cd91cea5a606 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-ad4801ff-a246-41b7-af59-709b9a78601e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:43:43 compute-0 nova_compute[250018]: 2026-01-20 14:43:43.790 250022 DEBUG nova.network.neutron [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] [instance: ad4801ff-a246-41b7-af59-709b9a78601e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 14:43:43 compute-0 ceph-mon[74360]: pgmap v1661: 321 pgs: 321 active+clean; 235 MiB data, 766 MiB used, 20 GiB / 21 GiB avail; 686 KiB/s rd, 6.4 MiB/s wr, 107 op/s
Jan 20 14:43:44 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:43:44 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:43:44 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:43:44.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:43:44 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:43:44 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:43:44 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:43:44.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:43:45 compute-0 nova_compute[250018]: 2026-01-20 14:43:45.130 250022 DEBUG nova.network.neutron [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] [instance: ad4801ff-a246-41b7-af59-709b9a78601e] Updating instance_info_cache with network_info: [{"id": "052dfa44-9329-474e-bd60-601248d6420b", "address": "fa:16:3e:2b:5f:52", "network": {"id": "53d0b281-776f-4682-8aaf-098e1d364008", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1516883251-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28a523cfe06042ff96554913a78e1e3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap052dfa44-93", "ovs_interfaceid": "052dfa44-9329-474e-bd60-601248d6420b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:43:45 compute-0 nova_compute[250018]: 2026-01-20 14:43:45.163 250022 DEBUG oslo_concurrency.lockutils [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] Releasing lock "refresh_cache-ad4801ff-a246-41b7-af59-709b9a78601e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:43:45 compute-0 nova_compute[250018]: 2026-01-20 14:43:45.164 250022 DEBUG nova.compute.manager [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] [instance: ad4801ff-a246-41b7-af59-709b9a78601e] Instance network_info: |[{"id": "052dfa44-9329-474e-bd60-601248d6420b", "address": "fa:16:3e:2b:5f:52", "network": {"id": "53d0b281-776f-4682-8aaf-098e1d364008", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1516883251-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28a523cfe06042ff96554913a78e1e3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap052dfa44-93", "ovs_interfaceid": "052dfa44-9329-474e-bd60-601248d6420b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 20 14:43:45 compute-0 nova_compute[250018]: 2026-01-20 14:43:45.165 250022 DEBUG oslo_concurrency.lockutils [req-f5ca6f31-1ac7-46e4-bd63-a894623e031f req-25f94120-6a0d-42d6-971e-cd91cea5a606 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-ad4801ff-a246-41b7-af59-709b9a78601e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:43:45 compute-0 nova_compute[250018]: 2026-01-20 14:43:45.166 250022 DEBUG nova.network.neutron [req-f5ca6f31-1ac7-46e4-bd63-a894623e031f req-25f94120-6a0d-42d6-971e-cd91cea5a606 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: ad4801ff-a246-41b7-af59-709b9a78601e] Refreshing network info cache for port 052dfa44-9329-474e-bd60-601248d6420b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:43:45 compute-0 nova_compute[250018]: 2026-01-20 14:43:45.171 250022 DEBUG nova.virt.libvirt.driver [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] [instance: ad4801ff-a246-41b7-af59-709b9a78601e] Start _get_guest_xml network_info=[{"id": "052dfa44-9329-474e-bd60-601248d6420b", "address": "fa:16:3e:2b:5f:52", "network": {"id": "53d0b281-776f-4682-8aaf-098e1d364008", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1516883251-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28a523cfe06042ff96554913a78e1e3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap052dfa44-93", "ovs_interfaceid": "052dfa44-9329-474e-bd60-601248d6420b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encrypted': False, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'guest_format': None, 'size': 0, 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 14:43:45 compute-0 nova_compute[250018]: 2026-01-20 14:43:45.177 250022 WARNING nova.virt.libvirt.driver [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:43:45 compute-0 nova_compute[250018]: 2026-01-20 14:43:45.185 250022 DEBUG nova.virt.libvirt.host [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 14:43:45 compute-0 nova_compute[250018]: 2026-01-20 14:43:45.186 250022 DEBUG nova.virt.libvirt.host [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 14:43:45 compute-0 nova_compute[250018]: 2026-01-20 14:43:45.194 250022 DEBUG nova.virt.libvirt.host [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 14:43:45 compute-0 nova_compute[250018]: 2026-01-20 14:43:45.195 250022 DEBUG nova.virt.libvirt.host [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 14:43:45 compute-0 nova_compute[250018]: 2026-01-20 14:43:45.197 250022 DEBUG nova.virt.libvirt.driver [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 14:43:45 compute-0 nova_compute[250018]: 2026-01-20 14:43:45.198 250022 DEBUG nova.virt.hardware [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 14:43:45 compute-0 nova_compute[250018]: 2026-01-20 14:43:45.198 250022 DEBUG nova.virt.hardware [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 14:43:45 compute-0 nova_compute[250018]: 2026-01-20 14:43:45.199 250022 DEBUG nova.virt.hardware [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 14:43:45 compute-0 nova_compute[250018]: 2026-01-20 14:43:45.199 250022 DEBUG nova.virt.hardware [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 14:43:45 compute-0 nova_compute[250018]: 2026-01-20 14:43:45.200 250022 DEBUG nova.virt.hardware [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 14:43:45 compute-0 nova_compute[250018]: 2026-01-20 14:43:45.200 250022 DEBUG nova.virt.hardware [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 14:43:45 compute-0 nova_compute[250018]: 2026-01-20 14:43:45.200 250022 DEBUG nova.virt.hardware [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 14:43:45 compute-0 nova_compute[250018]: 2026-01-20 14:43:45.201 250022 DEBUG nova.virt.hardware [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 14:43:45 compute-0 nova_compute[250018]: 2026-01-20 14:43:45.201 250022 DEBUG nova.virt.hardware [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 14:43:45 compute-0 nova_compute[250018]: 2026-01-20 14:43:45.202 250022 DEBUG nova.virt.hardware [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 14:43:45 compute-0 nova_compute[250018]: 2026-01-20 14:43:45.202 250022 DEBUG nova.virt.hardware [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 14:43:45 compute-0 nova_compute[250018]: 2026-01-20 14:43:45.207 250022 DEBUG oslo_concurrency.processutils [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:43:45 compute-0 nova_compute[250018]: 2026-01-20 14:43:45.246 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:43:45 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:43:45 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4047181053' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:43:45 compute-0 nova_compute[250018]: 2026-01-20 14:43:45.694 250022 DEBUG oslo_concurrency.processutils [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:43:45 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1662: 321 pgs: 321 active+clean; 260 MiB data, 780 MiB used, 20 GiB / 21 GiB avail; 443 KiB/s rd, 7.4 MiB/s wr, 148 op/s
Jan 20 14:43:45 compute-0 nova_compute[250018]: 2026-01-20 14:43:45.722 250022 DEBUG nova.storage.rbd_utils [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] rbd image ad4801ff-a246-41b7-af59-709b9a78601e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:43:45 compute-0 nova_compute[250018]: 2026-01-20 14:43:45.727 250022 DEBUG oslo_concurrency.processutils [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:43:45 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/4047181053' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:43:46 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:43:46 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:43:46 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:43:46.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:43:46 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:43:46 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2985602347' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:43:46 compute-0 nova_compute[250018]: 2026-01-20 14:43:46.169 250022 DEBUG oslo_concurrency.processutils [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:43:46 compute-0 nova_compute[250018]: 2026-01-20 14:43:46.171 250022 DEBUG nova.virt.libvirt.vif [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:43:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-1818827013',display_name='tempest-ListServersNegativeTestJSON-server-1818827013-3',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-1818827013-3',id=85,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=2,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='28a523cfe06042ff96554913a78e1e3a',ramdisk_id='',reservation_id='r-s8qopxwt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-1080060493',owner_user_name='tempest-ListServersNegativeTestJSON-1080060493-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:43:39Z,user_data=None,user_id='2975742546164cad937d13671d17108a',uuid=ad4801ff-a246-41b7-af59-709b9a78601e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "052dfa44-9329-474e-bd60-601248d6420b", "address": "fa:16:3e:2b:5f:52", "network": {"id": "53d0b281-776f-4682-8aaf-098e1d364008", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1516883251-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28a523cfe06042ff96554913a78e1e3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap052dfa44-93", "ovs_interfaceid": "052dfa44-9329-474e-bd60-601248d6420b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 14:43:46 compute-0 nova_compute[250018]: 2026-01-20 14:43:46.171 250022 DEBUG nova.network.os_vif_util [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] Converting VIF {"id": "052dfa44-9329-474e-bd60-601248d6420b", "address": "fa:16:3e:2b:5f:52", "network": {"id": "53d0b281-776f-4682-8aaf-098e1d364008", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1516883251-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28a523cfe06042ff96554913a78e1e3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap052dfa44-93", "ovs_interfaceid": "052dfa44-9329-474e-bd60-601248d6420b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:43:46 compute-0 nova_compute[250018]: 2026-01-20 14:43:46.172 250022 DEBUG nova.network.os_vif_util [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2b:5f:52,bridge_name='br-int',has_traffic_filtering=True,id=052dfa44-9329-474e-bd60-601248d6420b,network=Network(53d0b281-776f-4682-8aaf-098e1d364008),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap052dfa44-93') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:43:46 compute-0 nova_compute[250018]: 2026-01-20 14:43:46.173 250022 DEBUG nova.objects.instance [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] Lazy-loading 'pci_devices' on Instance uuid ad4801ff-a246-41b7-af59-709b9a78601e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:43:46 compute-0 nova_compute[250018]: 2026-01-20 14:43:46.190 250022 DEBUG nova.virt.libvirt.driver [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] [instance: ad4801ff-a246-41b7-af59-709b9a78601e] End _get_guest_xml xml=<domain type="kvm">
Jan 20 14:43:46 compute-0 nova_compute[250018]:   <uuid>ad4801ff-a246-41b7-af59-709b9a78601e</uuid>
Jan 20 14:43:46 compute-0 nova_compute[250018]:   <name>instance-00000055</name>
Jan 20 14:43:46 compute-0 nova_compute[250018]:   <memory>131072</memory>
Jan 20 14:43:46 compute-0 nova_compute[250018]:   <vcpu>1</vcpu>
Jan 20 14:43:46 compute-0 nova_compute[250018]:   <metadata>
Jan 20 14:43:46 compute-0 nova_compute[250018]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 14:43:46 compute-0 nova_compute[250018]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 14:43:46 compute-0 nova_compute[250018]:       <nova:name>tempest-ListServersNegativeTestJSON-server-1818827013-3</nova:name>
Jan 20 14:43:46 compute-0 nova_compute[250018]:       <nova:creationTime>2026-01-20 14:43:45</nova:creationTime>
Jan 20 14:43:46 compute-0 nova_compute[250018]:       <nova:flavor name="m1.nano">
Jan 20 14:43:46 compute-0 nova_compute[250018]:         <nova:memory>128</nova:memory>
Jan 20 14:43:46 compute-0 nova_compute[250018]:         <nova:disk>1</nova:disk>
Jan 20 14:43:46 compute-0 nova_compute[250018]:         <nova:swap>0</nova:swap>
Jan 20 14:43:46 compute-0 nova_compute[250018]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 14:43:46 compute-0 nova_compute[250018]:         <nova:vcpus>1</nova:vcpus>
Jan 20 14:43:46 compute-0 nova_compute[250018]:       </nova:flavor>
Jan 20 14:43:46 compute-0 nova_compute[250018]:       <nova:owner>
Jan 20 14:43:46 compute-0 nova_compute[250018]:         <nova:user uuid="2975742546164cad937d13671d17108a">tempest-ListServersNegativeTestJSON-1080060493-project-member</nova:user>
Jan 20 14:43:46 compute-0 nova_compute[250018]:         <nova:project uuid="28a523cfe06042ff96554913a78e1e3a">tempest-ListServersNegativeTestJSON-1080060493</nova:project>
Jan 20 14:43:46 compute-0 nova_compute[250018]:       </nova:owner>
Jan 20 14:43:46 compute-0 nova_compute[250018]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 14:43:46 compute-0 nova_compute[250018]:       <nova:ports>
Jan 20 14:43:46 compute-0 nova_compute[250018]:         <nova:port uuid="052dfa44-9329-474e-bd60-601248d6420b">
Jan 20 14:43:46 compute-0 nova_compute[250018]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Jan 20 14:43:46 compute-0 nova_compute[250018]:         </nova:port>
Jan 20 14:43:46 compute-0 nova_compute[250018]:       </nova:ports>
Jan 20 14:43:46 compute-0 nova_compute[250018]:     </nova:instance>
Jan 20 14:43:46 compute-0 nova_compute[250018]:   </metadata>
Jan 20 14:43:46 compute-0 nova_compute[250018]:   <sysinfo type="smbios">
Jan 20 14:43:46 compute-0 nova_compute[250018]:     <system>
Jan 20 14:43:46 compute-0 nova_compute[250018]:       <entry name="manufacturer">RDO</entry>
Jan 20 14:43:46 compute-0 nova_compute[250018]:       <entry name="product">OpenStack Compute</entry>
Jan 20 14:43:46 compute-0 nova_compute[250018]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 14:43:46 compute-0 nova_compute[250018]:       <entry name="serial">ad4801ff-a246-41b7-af59-709b9a78601e</entry>
Jan 20 14:43:46 compute-0 nova_compute[250018]:       <entry name="uuid">ad4801ff-a246-41b7-af59-709b9a78601e</entry>
Jan 20 14:43:46 compute-0 nova_compute[250018]:       <entry name="family">Virtual Machine</entry>
Jan 20 14:43:46 compute-0 nova_compute[250018]:     </system>
Jan 20 14:43:46 compute-0 nova_compute[250018]:   </sysinfo>
Jan 20 14:43:46 compute-0 nova_compute[250018]:   <os>
Jan 20 14:43:46 compute-0 nova_compute[250018]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 14:43:46 compute-0 nova_compute[250018]:     <boot dev="hd"/>
Jan 20 14:43:46 compute-0 nova_compute[250018]:     <smbios mode="sysinfo"/>
Jan 20 14:43:46 compute-0 nova_compute[250018]:   </os>
Jan 20 14:43:46 compute-0 nova_compute[250018]:   <features>
Jan 20 14:43:46 compute-0 nova_compute[250018]:     <acpi/>
Jan 20 14:43:46 compute-0 nova_compute[250018]:     <apic/>
Jan 20 14:43:46 compute-0 nova_compute[250018]:     <vmcoreinfo/>
Jan 20 14:43:46 compute-0 nova_compute[250018]:   </features>
Jan 20 14:43:46 compute-0 nova_compute[250018]:   <clock offset="utc">
Jan 20 14:43:46 compute-0 nova_compute[250018]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 14:43:46 compute-0 nova_compute[250018]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 14:43:46 compute-0 nova_compute[250018]:     <timer name="hpet" present="no"/>
Jan 20 14:43:46 compute-0 nova_compute[250018]:   </clock>
Jan 20 14:43:46 compute-0 nova_compute[250018]:   <cpu mode="custom" match="exact">
Jan 20 14:43:46 compute-0 nova_compute[250018]:     <model>Nehalem</model>
Jan 20 14:43:46 compute-0 nova_compute[250018]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 14:43:46 compute-0 nova_compute[250018]:   </cpu>
Jan 20 14:43:46 compute-0 nova_compute[250018]:   <devices>
Jan 20 14:43:46 compute-0 nova_compute[250018]:     <disk type="network" device="disk">
Jan 20 14:43:46 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 14:43:46 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/ad4801ff-a246-41b7-af59-709b9a78601e_disk">
Jan 20 14:43:46 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:43:46 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:43:46 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:43:46 compute-0 nova_compute[250018]:       </source>
Jan 20 14:43:46 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 14:43:46 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:43:46 compute-0 nova_compute[250018]:       </auth>
Jan 20 14:43:46 compute-0 nova_compute[250018]:       <target dev="vda" bus="virtio"/>
Jan 20 14:43:46 compute-0 nova_compute[250018]:     </disk>
Jan 20 14:43:46 compute-0 nova_compute[250018]:     <disk type="network" device="cdrom">
Jan 20 14:43:46 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 14:43:46 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/ad4801ff-a246-41b7-af59-709b9a78601e_disk.config">
Jan 20 14:43:46 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:43:46 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:43:46 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:43:46 compute-0 nova_compute[250018]:       </source>
Jan 20 14:43:46 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 14:43:46 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:43:46 compute-0 nova_compute[250018]:       </auth>
Jan 20 14:43:46 compute-0 nova_compute[250018]:       <target dev="sda" bus="sata"/>
Jan 20 14:43:46 compute-0 nova_compute[250018]:     </disk>
Jan 20 14:43:46 compute-0 nova_compute[250018]:     <interface type="ethernet">
Jan 20 14:43:46 compute-0 nova_compute[250018]:       <mac address="fa:16:3e:2b:5f:52"/>
Jan 20 14:43:46 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 14:43:46 compute-0 nova_compute[250018]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 14:43:46 compute-0 nova_compute[250018]:       <mtu size="1442"/>
Jan 20 14:43:46 compute-0 nova_compute[250018]:       <target dev="tap052dfa44-93"/>
Jan 20 14:43:46 compute-0 nova_compute[250018]:     </interface>
Jan 20 14:43:46 compute-0 nova_compute[250018]:     <serial type="pty">
Jan 20 14:43:46 compute-0 nova_compute[250018]:       <log file="/var/lib/nova/instances/ad4801ff-a246-41b7-af59-709b9a78601e/console.log" append="off"/>
Jan 20 14:43:46 compute-0 nova_compute[250018]:     </serial>
Jan 20 14:43:46 compute-0 nova_compute[250018]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 14:43:46 compute-0 nova_compute[250018]:     <video>
Jan 20 14:43:46 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 14:43:46 compute-0 nova_compute[250018]:     </video>
Jan 20 14:43:46 compute-0 nova_compute[250018]:     <input type="tablet" bus="usb"/>
Jan 20 14:43:46 compute-0 nova_compute[250018]:     <rng model="virtio">
Jan 20 14:43:46 compute-0 nova_compute[250018]:       <backend model="random">/dev/urandom</backend>
Jan 20 14:43:46 compute-0 nova_compute[250018]:     </rng>
Jan 20 14:43:46 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root"/>
Jan 20 14:43:46 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:43:46 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:43:46 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:43:46 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:43:46 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:43:46 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:43:46 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:43:46 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:43:46 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:43:46 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:43:46 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:43:46 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:43:46 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:43:46 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:43:46 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:43:46 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:43:46 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:43:46 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:43:46 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:43:46 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:43:46 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:43:46 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:43:46 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:43:46 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:43:46 compute-0 nova_compute[250018]:     <controller type="usb" index="0"/>
Jan 20 14:43:46 compute-0 nova_compute[250018]:     <memballoon model="virtio">
Jan 20 14:43:46 compute-0 nova_compute[250018]:       <stats period="10"/>
Jan 20 14:43:46 compute-0 nova_compute[250018]:     </memballoon>
Jan 20 14:43:46 compute-0 nova_compute[250018]:   </devices>
Jan 20 14:43:46 compute-0 nova_compute[250018]: </domain>
Jan 20 14:43:46 compute-0 nova_compute[250018]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 14:43:46 compute-0 nova_compute[250018]: 2026-01-20 14:43:46.192 250022 DEBUG nova.compute.manager [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] [instance: ad4801ff-a246-41b7-af59-709b9a78601e] Preparing to wait for external event network-vif-plugged-052dfa44-9329-474e-bd60-601248d6420b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 20 14:43:46 compute-0 nova_compute[250018]: 2026-01-20 14:43:46.192 250022 DEBUG oslo_concurrency.lockutils [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] Acquiring lock "ad4801ff-a246-41b7-af59-709b9a78601e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:43:46 compute-0 nova_compute[250018]: 2026-01-20 14:43:46.193 250022 DEBUG oslo_concurrency.lockutils [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] Lock "ad4801ff-a246-41b7-af59-709b9a78601e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:43:46 compute-0 nova_compute[250018]: 2026-01-20 14:43:46.193 250022 DEBUG oslo_concurrency.lockutils [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] Lock "ad4801ff-a246-41b7-af59-709b9a78601e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:43:46 compute-0 nova_compute[250018]: 2026-01-20 14:43:46.194 250022 DEBUG nova.virt.libvirt.vif [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:43:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-1818827013',display_name='tempest-ListServersNegativeTestJSON-server-1818827013-3',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-1818827013-3',id=85,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=2,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='28a523cfe06042ff96554913a78e1e3a',ramdisk_id='',reservation_id='r-s8qopxwt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-1080060493',owner_user_name='tempest-ListServersNegativeTestJSON-1080060493-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:43:39Z,user_data=None,user_id='2975742546164cad937d13671d17108a',uuid=ad4801ff-a246-41b7-af59-709b9a78601e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "052dfa44-9329-474e-bd60-601248d6420b", "address": "fa:16:3e:2b:5f:52", "network": {"id": "53d0b281-776f-4682-8aaf-098e1d364008", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1516883251-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28a523cfe06042ff96554913a78e1e3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap052dfa44-93", "ovs_interfaceid": "052dfa44-9329-474e-bd60-601248d6420b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 14:43:46 compute-0 nova_compute[250018]: 2026-01-20 14:43:46.194 250022 DEBUG nova.network.os_vif_util [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] Converting VIF {"id": "052dfa44-9329-474e-bd60-601248d6420b", "address": "fa:16:3e:2b:5f:52", "network": {"id": "53d0b281-776f-4682-8aaf-098e1d364008", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1516883251-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28a523cfe06042ff96554913a78e1e3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap052dfa44-93", "ovs_interfaceid": "052dfa44-9329-474e-bd60-601248d6420b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:43:46 compute-0 nova_compute[250018]: 2026-01-20 14:43:46.196 250022 DEBUG nova.network.os_vif_util [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2b:5f:52,bridge_name='br-int',has_traffic_filtering=True,id=052dfa44-9329-474e-bd60-601248d6420b,network=Network(53d0b281-776f-4682-8aaf-098e1d364008),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap052dfa44-93') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:43:46 compute-0 nova_compute[250018]: 2026-01-20 14:43:46.197 250022 DEBUG os_vif [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2b:5f:52,bridge_name='br-int',has_traffic_filtering=True,id=052dfa44-9329-474e-bd60-601248d6420b,network=Network(53d0b281-776f-4682-8aaf-098e1d364008),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap052dfa44-93') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 14:43:46 compute-0 nova_compute[250018]: 2026-01-20 14:43:46.197 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:43:46 compute-0 nova_compute[250018]: 2026-01-20 14:43:46.198 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:43:46 compute-0 nova_compute[250018]: 2026-01-20 14:43:46.198 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:43:46 compute-0 nova_compute[250018]: 2026-01-20 14:43:46.201 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:43:46 compute-0 nova_compute[250018]: 2026-01-20 14:43:46.201 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap052dfa44-93, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:43:46 compute-0 nova_compute[250018]: 2026-01-20 14:43:46.202 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap052dfa44-93, col_values=(('external_ids', {'iface-id': '052dfa44-9329-474e-bd60-601248d6420b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:2b:5f:52', 'vm-uuid': 'ad4801ff-a246-41b7-af59-709b9a78601e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:43:46 compute-0 NetworkManager[48960]: <info>  [1768920226.2394] manager: (tap052dfa44-93): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/141)
Jan 20 14:43:46 compute-0 nova_compute[250018]: 2026-01-20 14:43:46.239 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:43:46 compute-0 nova_compute[250018]: 2026-01-20 14:43:46.242 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:43:46 compute-0 nova_compute[250018]: 2026-01-20 14:43:46.246 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:43:46 compute-0 nova_compute[250018]: 2026-01-20 14:43:46.247 250022 INFO os_vif [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2b:5f:52,bridge_name='br-int',has_traffic_filtering=True,id=052dfa44-9329-474e-bd60-601248d6420b,network=Network(53d0b281-776f-4682-8aaf-098e1d364008),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap052dfa44-93')
Jan 20 14:43:46 compute-0 nova_compute[250018]: 2026-01-20 14:43:46.327 250022 DEBUG nova.virt.libvirt.driver [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:43:46 compute-0 nova_compute[250018]: 2026-01-20 14:43:46.327 250022 DEBUG nova.virt.libvirt.driver [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:43:46 compute-0 nova_compute[250018]: 2026-01-20 14:43:46.328 250022 DEBUG nova.virt.libvirt.driver [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] No VIF found with MAC fa:16:3e:2b:5f:52, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 14:43:46 compute-0 nova_compute[250018]: 2026-01-20 14:43:46.328 250022 INFO nova.virt.libvirt.driver [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] [instance: ad4801ff-a246-41b7-af59-709b9a78601e] Using config drive
Jan 20 14:43:46 compute-0 nova_compute[250018]: 2026-01-20 14:43:46.349 250022 DEBUG nova.storage.rbd_utils [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] rbd image ad4801ff-a246-41b7-af59-709b9a78601e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:43:46 compute-0 sudo[299153]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:43:46 compute-0 sudo[299153]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:43:46 compute-0 sudo[299153]: pam_unix(sudo:session): session closed for user root
Jan 20 14:43:46 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:43:46 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:43:46 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:43:46.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:43:46 compute-0 sudo[299178]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:43:46 compute-0 sudo[299178]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:43:46 compute-0 sudo[299178]: pam_unix(sudo:session): session closed for user root
Jan 20 14:43:46 compute-0 nova_compute[250018]: 2026-01-20 14:43:46.968 250022 INFO nova.virt.libvirt.driver [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] [instance: ad4801ff-a246-41b7-af59-709b9a78601e] Creating config drive at /var/lib/nova/instances/ad4801ff-a246-41b7-af59-709b9a78601e/disk.config
Jan 20 14:43:46 compute-0 nova_compute[250018]: 2026-01-20 14:43:46.973 250022 DEBUG oslo_concurrency.processutils [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ad4801ff-a246-41b7-af59-709b9a78601e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpo1qepvcj execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:43:47 compute-0 ceph-mon[74360]: pgmap v1662: 321 pgs: 321 active+clean; 260 MiB data, 780 MiB used, 20 GiB / 21 GiB avail; 443 KiB/s rd, 7.4 MiB/s wr, 148 op/s
Jan 20 14:43:47 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/2985602347' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:43:47 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/3855425524' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:43:47 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/1131127473' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:43:47 compute-0 nova_compute[250018]: 2026-01-20 14:43:47.107 250022 DEBUG oslo_concurrency.processutils [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ad4801ff-a246-41b7-af59-709b9a78601e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpo1qepvcj" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:43:47 compute-0 nova_compute[250018]: 2026-01-20 14:43:47.144 250022 DEBUG nova.storage.rbd_utils [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] rbd image ad4801ff-a246-41b7-af59-709b9a78601e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:43:47 compute-0 nova_compute[250018]: 2026-01-20 14:43:47.148 250022 DEBUG oslo_concurrency.processutils [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ad4801ff-a246-41b7-af59-709b9a78601e/disk.config ad4801ff-a246-41b7-af59-709b9a78601e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:43:47 compute-0 nova_compute[250018]: 2026-01-20 14:43:47.198 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:43:47 compute-0 nova_compute[250018]: 2026-01-20 14:43:47.395 250022 DEBUG oslo_concurrency.processutils [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ad4801ff-a246-41b7-af59-709b9a78601e/disk.config ad4801ff-a246-41b7-af59-709b9a78601e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.247s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:43:47 compute-0 nova_compute[250018]: 2026-01-20 14:43:47.395 250022 INFO nova.virt.libvirt.driver [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] [instance: ad4801ff-a246-41b7-af59-709b9a78601e] Deleting local config drive /var/lib/nova/instances/ad4801ff-a246-41b7-af59-709b9a78601e/disk.config because it was imported into RBD.
Jan 20 14:43:47 compute-0 kernel: tap052dfa44-93: entered promiscuous mode
Jan 20 14:43:47 compute-0 NetworkManager[48960]: <info>  [1768920227.4451] manager: (tap052dfa44-93): new Tun device (/org/freedesktop/NetworkManager/Devices/142)
Jan 20 14:43:47 compute-0 systemd-udevd[299253]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:43:47 compute-0 nova_compute[250018]: 2026-01-20 14:43:47.502 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:43:47 compute-0 ovn_controller[148666]: 2026-01-20T14:43:47Z|00275|binding|INFO|Claiming lport 052dfa44-9329-474e-bd60-601248d6420b for this chassis.
Jan 20 14:43:47 compute-0 ovn_controller[148666]: 2026-01-20T14:43:47Z|00276|binding|INFO|052dfa44-9329-474e-bd60-601248d6420b: Claiming fa:16:3e:2b:5f:52 10.100.0.9
Jan 20 14:43:47 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:43:47.508 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2b:5f:52 10.100.0.9'], port_security=['fa:16:3e:2b:5f:52 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'ad4801ff-a246-41b7-af59-709b9a78601e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-53d0b281-776f-4682-8aaf-098e1d364008', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '28a523cfe06042ff96554913a78e1e3a', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1879c269-0854-40a3-8eb9-b61f97d38545', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a1afefec-2060-4dfb-acbb-1ce14c3a663c, chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=052dfa44-9329-474e-bd60-601248d6420b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:43:47 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:43:47.509 160071 INFO neutron.agent.ovn.metadata.agent [-] Port 052dfa44-9329-474e-bd60-601248d6420b in datapath 53d0b281-776f-4682-8aaf-098e1d364008 bound to our chassis
Jan 20 14:43:47 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:43:47.510 160071 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 53d0b281-776f-4682-8aaf-098e1d364008
Jan 20 14:43:47 compute-0 NetworkManager[48960]: <info>  [1768920227.5146] device (tap052dfa44-93): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 14:43:47 compute-0 NetworkManager[48960]: <info>  [1768920227.5155] device (tap052dfa44-93): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 14:43:47 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:43:47.521 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[a7dd2107-7ab2-454a-94a1-51e3fcc1700d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:43:47 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:43:47.522 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap53d0b281-71 in ovnmeta-53d0b281-776f-4682-8aaf-098e1d364008 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 14:43:47 compute-0 ovn_controller[148666]: 2026-01-20T14:43:47Z|00277|binding|INFO|Setting lport 052dfa44-9329-474e-bd60-601248d6420b ovn-installed in OVS
Jan 20 14:43:47 compute-0 ovn_controller[148666]: 2026-01-20T14:43:47Z|00278|binding|INFO|Setting lport 052dfa44-9329-474e-bd60-601248d6420b up in Southbound
Jan 20 14:43:47 compute-0 systemd-machined[216401]: New machine qemu-37-instance-00000055.
Jan 20 14:43:47 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:43:47.526 257604 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap53d0b281-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 14:43:47 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:43:47.526 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[17946ab5-a8cf-43e0-8781-d69f583a3bdf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:43:47 compute-0 nova_compute[250018]: 2026-01-20 14:43:47.527 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:43:47 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:43:47.527 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[8b01a32f-95d7-416f-a3ac-570384283138]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:43:47 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:43:47.540 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[26e57c5b-e595-442f-b112-725d18a252ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:43:47 compute-0 systemd[1]: Started Virtual Machine qemu-37-instance-00000055.
Jan 20 14:43:47 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:43:47.565 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[d65a78b1-4f12-4a81-87c6-bc2869e2de8b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:43:47 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:43:47.595 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[c36a3f55-86b7-4b21-99bd-df513b674a74]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:43:47 compute-0 systemd-udevd[299257]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:43:47 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:43:47.602 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[80a5c056-3b1b-4e15-b4a9-b2ddb018abb6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:43:47 compute-0 NetworkManager[48960]: <info>  [1768920227.6034] manager: (tap53d0b281-70): new Veth device (/org/freedesktop/NetworkManager/Devices/143)
Jan 20 14:43:47 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:43:47.632 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[6cb6d437-c4b4-48ee-b275-f9c4a21e8e56]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:43:47 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:43:47.635 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[adb77a4b-3575-4ad0-8f03-e07730594774]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:43:47 compute-0 NetworkManager[48960]: <info>  [1768920227.6556] device (tap53d0b281-70): carrier: link connected
Jan 20 14:43:47 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:43:47.661 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[8c99e6cc-6158-4256-bc19-ebc56f21592a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:43:47 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:43:47.679 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[33c8f441-b1da-47f1-9e36-6805f7df3817]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap53d0b281-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5c:be:fe'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 88], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 616837, 'reachable_time': 23896, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 299289, 'error': None, 'target': 'ovnmeta-53d0b281-776f-4682-8aaf-098e1d364008', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:43:47 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:43:47.696 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[eaae1936-e691-4f45-a810-e8d98667f6d6]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe5c:befe'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 616837, 'tstamp': 616837}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 299290, 'error': None, 'target': 'ovnmeta-53d0b281-776f-4682-8aaf-098e1d364008', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:43:47 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1663: 321 pgs: 321 active+clean; 260 MiB data, 780 MiB used, 20 GiB / 21 GiB avail; 443 KiB/s rd, 7.4 MiB/s wr, 148 op/s
Jan 20 14:43:47 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:43:47.715 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[fe2938c5-68ae-4ff6-9e27-267b2073a2bf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap53d0b281-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5c:be:fe'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 88], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 616837, 'reachable_time': 23896, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 299291, 'error': None, 'target': 'ovnmeta-53d0b281-776f-4682-8aaf-098e1d364008', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:43:47 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:43:47.745 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[4eb63ed1-3306-4d2b-a5bd-61c447fb5b83]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:43:47 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:43:47.802 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[80379d98-34f3-45b9-9ad2-ce12736c57c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:43:47 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:43:47.803 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap53d0b281-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:43:47 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:43:47.804 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:43:47 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:43:47.804 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap53d0b281-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:43:47 compute-0 NetworkManager[48960]: <info>  [1768920227.8066] manager: (tap53d0b281-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/144)
Jan 20 14:43:47 compute-0 kernel: tap53d0b281-70: entered promiscuous mode
Jan 20 14:43:47 compute-0 nova_compute[250018]: 2026-01-20 14:43:47.807 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:43:47 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:43:47.808 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap53d0b281-70, col_values=(('external_ids', {'iface-id': '2ea34810-4753-414f-ae43-b7b379fc432c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:43:47 compute-0 nova_compute[250018]: 2026-01-20 14:43:47.809 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:43:47 compute-0 ovn_controller[148666]: 2026-01-20T14:43:47Z|00279|binding|INFO|Releasing lport 2ea34810-4753-414f-ae43-b7b379fc432c from this chassis (sb_readonly=0)
Jan 20 14:43:47 compute-0 nova_compute[250018]: 2026-01-20 14:43:47.831 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:43:47 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:43:47.832 160071 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/53d0b281-776f-4682-8aaf-098e1d364008.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/53d0b281-776f-4682-8aaf-098e1d364008.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 14:43:47 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:43:47.833 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[206bc6d6-810f-494c-a931-fb3c0cb503b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:43:47 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:43:47.834 160071 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 14:43:47 compute-0 ovn_metadata_agent[160049]: global
Jan 20 14:43:47 compute-0 ovn_metadata_agent[160049]:     log         /dev/log local0 debug
Jan 20 14:43:47 compute-0 ovn_metadata_agent[160049]:     log-tag     haproxy-metadata-proxy-53d0b281-776f-4682-8aaf-098e1d364008
Jan 20 14:43:47 compute-0 ovn_metadata_agent[160049]:     user        root
Jan 20 14:43:47 compute-0 ovn_metadata_agent[160049]:     group       root
Jan 20 14:43:47 compute-0 ovn_metadata_agent[160049]:     maxconn     1024
Jan 20 14:43:47 compute-0 ovn_metadata_agent[160049]:     pidfile     /var/lib/neutron/external/pids/53d0b281-776f-4682-8aaf-098e1d364008.pid.haproxy
Jan 20 14:43:47 compute-0 ovn_metadata_agent[160049]:     daemon
Jan 20 14:43:47 compute-0 ovn_metadata_agent[160049]: 
Jan 20 14:43:47 compute-0 ovn_metadata_agent[160049]: defaults
Jan 20 14:43:47 compute-0 ovn_metadata_agent[160049]:     log global
Jan 20 14:43:47 compute-0 ovn_metadata_agent[160049]:     mode http
Jan 20 14:43:47 compute-0 ovn_metadata_agent[160049]:     option httplog
Jan 20 14:43:47 compute-0 ovn_metadata_agent[160049]:     option dontlognull
Jan 20 14:43:47 compute-0 ovn_metadata_agent[160049]:     option http-server-close
Jan 20 14:43:47 compute-0 ovn_metadata_agent[160049]:     option forwardfor
Jan 20 14:43:47 compute-0 ovn_metadata_agent[160049]:     retries                 3
Jan 20 14:43:47 compute-0 ovn_metadata_agent[160049]:     timeout http-request    30s
Jan 20 14:43:47 compute-0 ovn_metadata_agent[160049]:     timeout connect         30s
Jan 20 14:43:47 compute-0 ovn_metadata_agent[160049]:     timeout client          32s
Jan 20 14:43:47 compute-0 ovn_metadata_agent[160049]:     timeout server          32s
Jan 20 14:43:47 compute-0 ovn_metadata_agent[160049]:     timeout http-keep-alive 30s
Jan 20 14:43:47 compute-0 ovn_metadata_agent[160049]: 
Jan 20 14:43:47 compute-0 ovn_metadata_agent[160049]: 
Jan 20 14:43:47 compute-0 ovn_metadata_agent[160049]: listen listener
Jan 20 14:43:47 compute-0 ovn_metadata_agent[160049]:     bind 169.254.169.254:80
Jan 20 14:43:47 compute-0 ovn_metadata_agent[160049]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 14:43:47 compute-0 ovn_metadata_agent[160049]:     http-request add-header X-OVN-Network-ID 53d0b281-776f-4682-8aaf-098e1d364008
Jan 20 14:43:47 compute-0 ovn_metadata_agent[160049]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 14:43:47 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:43:47.834 160071 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-53d0b281-776f-4682-8aaf-098e1d364008', 'env', 'PROCESS_TAG=haproxy-53d0b281-776f-4682-8aaf-098e1d364008', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/53d0b281-776f-4682-8aaf-098e1d364008.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 14:43:48 compute-0 nova_compute[250018]: 2026-01-20 14:43:48.015 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768920228.015149, ad4801ff-a246-41b7-af59-709b9a78601e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:43:48 compute-0 nova_compute[250018]: 2026-01-20 14:43:48.016 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: ad4801ff-a246-41b7-af59-709b9a78601e] VM Started (Lifecycle Event)
Jan 20 14:43:48 compute-0 nova_compute[250018]: 2026-01-20 14:43:48.048 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: ad4801ff-a246-41b7-af59-709b9a78601e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:43:48 compute-0 nova_compute[250018]: 2026-01-20 14:43:48.054 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768920228.016202, ad4801ff-a246-41b7-af59-709b9a78601e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:43:48 compute-0 nova_compute[250018]: 2026-01-20 14:43:48.054 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: ad4801ff-a246-41b7-af59-709b9a78601e] VM Paused (Lifecycle Event)
Jan 20 14:43:48 compute-0 nova_compute[250018]: 2026-01-20 14:43:48.075 250022 DEBUG nova.network.neutron [req-f5ca6f31-1ac7-46e4-bd63-a894623e031f req-25f94120-6a0d-42d6-971e-cd91cea5a606 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: ad4801ff-a246-41b7-af59-709b9a78601e] Updated VIF entry in instance network info cache for port 052dfa44-9329-474e-bd60-601248d6420b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 14:43:48 compute-0 nova_compute[250018]: 2026-01-20 14:43:48.075 250022 DEBUG nova.network.neutron [req-f5ca6f31-1ac7-46e4-bd63-a894623e031f req-25f94120-6a0d-42d6-971e-cd91cea5a606 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: ad4801ff-a246-41b7-af59-709b9a78601e] Updating instance_info_cache with network_info: [{"id": "052dfa44-9329-474e-bd60-601248d6420b", "address": "fa:16:3e:2b:5f:52", "network": {"id": "53d0b281-776f-4682-8aaf-098e1d364008", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1516883251-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28a523cfe06042ff96554913a78e1e3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap052dfa44-93", "ovs_interfaceid": "052dfa44-9329-474e-bd60-601248d6420b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:43:48 compute-0 nova_compute[250018]: 2026-01-20 14:43:48.082 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: ad4801ff-a246-41b7-af59-709b9a78601e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:43:48 compute-0 nova_compute[250018]: 2026-01-20 14:43:48.085 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: ad4801ff-a246-41b7-af59-709b9a78601e] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:43:48 compute-0 ceph-mon[74360]: pgmap v1663: 321 pgs: 321 active+clean; 260 MiB data, 780 MiB used, 20 GiB / 21 GiB avail; 443 KiB/s rd, 7.4 MiB/s wr, 148 op/s
Jan 20 14:43:48 compute-0 nova_compute[250018]: 2026-01-20 14:43:48.109 250022 DEBUG oslo_concurrency.lockutils [req-f5ca6f31-1ac7-46e4-bd63-a894623e031f req-25f94120-6a0d-42d6-971e-cd91cea5a606 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-ad4801ff-a246-41b7-af59-709b9a78601e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:43:48 compute-0 nova_compute[250018]: 2026-01-20 14:43:48.110 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: ad4801ff-a246-41b7-af59-709b9a78601e] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:43:48 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:43:48 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:43:48 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:43:48.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:43:48 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:43:48 compute-0 podman[299366]: 2026-01-20 14:43:48.206409661 +0000 UTC m=+0.068880608 container create 32c9186f66ae97e98fb5be2110cea78e4411149975837976a405ee089b009f0c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-53d0b281-776f-4682-8aaf-098e1d364008, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:43:48 compute-0 systemd[1]: Started libpod-conmon-32c9186f66ae97e98fb5be2110cea78e4411149975837976a405ee089b009f0c.scope.
Jan 20 14:43:48 compute-0 podman[299366]: 2026-01-20 14:43:48.16837591 +0000 UTC m=+0.030846957 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 14:43:48 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:43:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b037c6d909b77ac0b5af88dbc1accee593f578bf3b11830ebce9d8ca071ce31/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 14:43:48 compute-0 podman[299366]: 2026-01-20 14:43:48.279183733 +0000 UTC m=+0.141654710 container init 32c9186f66ae97e98fb5be2110cea78e4411149975837976a405ee089b009f0c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-53d0b281-776f-4682-8aaf-098e1d364008, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 20 14:43:48 compute-0 podman[299366]: 2026-01-20 14:43:48.28462299 +0000 UTC m=+0.147093947 container start 32c9186f66ae97e98fb5be2110cea78e4411149975837976a405ee089b009f0c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-53d0b281-776f-4682-8aaf-098e1d364008, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202)
Jan 20 14:43:48 compute-0 neutron-haproxy-ovnmeta-53d0b281-776f-4682-8aaf-098e1d364008[299382]: [NOTICE]   (299386) : New worker (299388) forked
Jan 20 14:43:48 compute-0 neutron-haproxy-ovnmeta-53d0b281-776f-4682-8aaf-098e1d364008[299382]: [NOTICE]   (299386) : Loading success.
Jan 20 14:43:48 compute-0 nova_compute[250018]: 2026-01-20 14:43:48.590 250022 DEBUG nova.compute.manager [req-ab189c5b-a8b4-4ea8-9f17-27b95b9a12d3 req-8b26023d-4205-492b-85e0-32ff1b2503db 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: ad4801ff-a246-41b7-af59-709b9a78601e] Received event network-vif-plugged-052dfa44-9329-474e-bd60-601248d6420b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:43:48 compute-0 nova_compute[250018]: 2026-01-20 14:43:48.590 250022 DEBUG oslo_concurrency.lockutils [req-ab189c5b-a8b4-4ea8-9f17-27b95b9a12d3 req-8b26023d-4205-492b-85e0-32ff1b2503db 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "ad4801ff-a246-41b7-af59-709b9a78601e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:43:48 compute-0 nova_compute[250018]: 2026-01-20 14:43:48.590 250022 DEBUG oslo_concurrency.lockutils [req-ab189c5b-a8b4-4ea8-9f17-27b95b9a12d3 req-8b26023d-4205-492b-85e0-32ff1b2503db 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "ad4801ff-a246-41b7-af59-709b9a78601e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:43:48 compute-0 nova_compute[250018]: 2026-01-20 14:43:48.590 250022 DEBUG oslo_concurrency.lockutils [req-ab189c5b-a8b4-4ea8-9f17-27b95b9a12d3 req-8b26023d-4205-492b-85e0-32ff1b2503db 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "ad4801ff-a246-41b7-af59-709b9a78601e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:43:48 compute-0 nova_compute[250018]: 2026-01-20 14:43:48.591 250022 DEBUG nova.compute.manager [req-ab189c5b-a8b4-4ea8-9f17-27b95b9a12d3 req-8b26023d-4205-492b-85e0-32ff1b2503db 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: ad4801ff-a246-41b7-af59-709b9a78601e] Processing event network-vif-plugged-052dfa44-9329-474e-bd60-601248d6420b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 20 14:43:48 compute-0 nova_compute[250018]: 2026-01-20 14:43:48.591 250022 DEBUG nova.compute.manager [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] [instance: ad4801ff-a246-41b7-af59-709b9a78601e] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 14:43:48 compute-0 nova_compute[250018]: 2026-01-20 14:43:48.594 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768920228.594738, ad4801ff-a246-41b7-af59-709b9a78601e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:43:48 compute-0 nova_compute[250018]: 2026-01-20 14:43:48.595 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: ad4801ff-a246-41b7-af59-709b9a78601e] VM Resumed (Lifecycle Event)
Jan 20 14:43:48 compute-0 nova_compute[250018]: 2026-01-20 14:43:48.596 250022 DEBUG nova.virt.libvirt.driver [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] [instance: ad4801ff-a246-41b7-af59-709b9a78601e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 14:43:48 compute-0 nova_compute[250018]: 2026-01-20 14:43:48.599 250022 INFO nova.virt.libvirt.driver [-] [instance: ad4801ff-a246-41b7-af59-709b9a78601e] Instance spawned successfully.
Jan 20 14:43:48 compute-0 nova_compute[250018]: 2026-01-20 14:43:48.599 250022 DEBUG nova.virt.libvirt.driver [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] [instance: ad4801ff-a246-41b7-af59-709b9a78601e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 14:43:48 compute-0 nova_compute[250018]: 2026-01-20 14:43:48.623 250022 DEBUG nova.virt.libvirt.driver [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] [instance: ad4801ff-a246-41b7-af59-709b9a78601e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:43:48 compute-0 nova_compute[250018]: 2026-01-20 14:43:48.624 250022 DEBUG nova.virt.libvirt.driver [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] [instance: ad4801ff-a246-41b7-af59-709b9a78601e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:43:48 compute-0 nova_compute[250018]: 2026-01-20 14:43:48.624 250022 DEBUG nova.virt.libvirt.driver [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] [instance: ad4801ff-a246-41b7-af59-709b9a78601e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:43:48 compute-0 nova_compute[250018]: 2026-01-20 14:43:48.625 250022 DEBUG nova.virt.libvirt.driver [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] [instance: ad4801ff-a246-41b7-af59-709b9a78601e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:43:48 compute-0 nova_compute[250018]: 2026-01-20 14:43:48.625 250022 DEBUG nova.virt.libvirt.driver [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] [instance: ad4801ff-a246-41b7-af59-709b9a78601e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:43:48 compute-0 nova_compute[250018]: 2026-01-20 14:43:48.625 250022 DEBUG nova.virt.libvirt.driver [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] [instance: ad4801ff-a246-41b7-af59-709b9a78601e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:43:48 compute-0 nova_compute[250018]: 2026-01-20 14:43:48.638 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: ad4801ff-a246-41b7-af59-709b9a78601e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:43:48 compute-0 nova_compute[250018]: 2026-01-20 14:43:48.641 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: ad4801ff-a246-41b7-af59-709b9a78601e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:43:48 compute-0 nova_compute[250018]: 2026-01-20 14:43:48.673 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: ad4801ff-a246-41b7-af59-709b9a78601e] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:43:48 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:43:48 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:43:48 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:43:48.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:43:48 compute-0 nova_compute[250018]: 2026-01-20 14:43:48.797 250022 INFO nova.compute.manager [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] [instance: ad4801ff-a246-41b7-af59-709b9a78601e] Took 9.66 seconds to spawn the instance on the hypervisor.
Jan 20 14:43:48 compute-0 nova_compute[250018]: 2026-01-20 14:43:48.797 250022 DEBUG nova.compute.manager [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] [instance: ad4801ff-a246-41b7-af59-709b9a78601e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:43:48 compute-0 nova_compute[250018]: 2026-01-20 14:43:48.911 250022 INFO nova.compute.manager [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] [instance: ad4801ff-a246-41b7-af59-709b9a78601e] Took 11.11 seconds to build instance.
Jan 20 14:43:48 compute-0 nova_compute[250018]: 2026-01-20 14:43:48.946 250022 DEBUG oslo_concurrency.lockutils [None req-8c56c911-dd71-4ce2-be61-541eba597933 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] Lock "ad4801ff-a246-41b7-af59-709b9a78601e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.323s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:43:48 compute-0 nova_compute[250018]: 2026-01-20 14:43:48.961 250022 DEBUG nova.objects.instance [None req-f54f832f-ecba-46d6-a82d-14b370d440e3 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] Lazy-loading 'flavor' on Instance uuid 50a3e5fd-193e-4c31-a7ce-b29b4ff10849 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:43:48 compute-0 nova_compute[250018]: 2026-01-20 14:43:48.988 250022 DEBUG oslo_concurrency.lockutils [None req-f54f832f-ecba-46d6-a82d-14b370d440e3 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] Acquiring lock "refresh_cache-50a3e5fd-193e-4c31-a7ce-b29b4ff10849" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:43:48 compute-0 nova_compute[250018]: 2026-01-20 14:43:48.988 250022 DEBUG oslo_concurrency.lockutils [None req-f54f832f-ecba-46d6-a82d-14b370d440e3 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] Acquired lock "refresh_cache-50a3e5fd-193e-4c31-a7ce-b29b4ff10849" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:43:49 compute-0 nova_compute[250018]: 2026-01-20 14:43:49.100 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:43:49 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/2437304830' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:43:49 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/3926692694' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:43:49 compute-0 podman[299398]: 2026-01-20 14:43:49.488142624 +0000 UTC m=+0.074263783 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 14:43:49 compute-0 podman[299397]: 2026-01-20 14:43:49.494247599 +0000 UTC m=+0.081693944 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 20 14:43:49 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1664: 321 pgs: 321 active+clean; 260 MiB data, 780 MiB used, 20 GiB / 21 GiB avail; 445 KiB/s rd, 7.5 MiB/s wr, 151 op/s
Jan 20 14:43:50 compute-0 ceph-mon[74360]: pgmap v1664: 321 pgs: 321 active+clean; 260 MiB data, 780 MiB used, 20 GiB / 21 GiB avail; 445 KiB/s rd, 7.5 MiB/s wr, 151 op/s
Jan 20 14:43:50 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:43:50 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:43:50 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:43:50.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:43:50 compute-0 nova_compute[250018]: 2026-01-20 14:43:50.317 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:43:50 compute-0 nova_compute[250018]: 2026-01-20 14:43:50.617 250022 DEBUG nova.network.neutron [None req-f54f832f-ecba-46d6-a82d-14b370d440e3 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] [instance: 50a3e5fd-193e-4c31-a7ce-b29b4ff10849] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:43:50 compute-0 nova_compute[250018]: 2026-01-20 14:43:50.734 250022 DEBUG nova.compute.manager [req-32e2c550-f1f1-4bd4-99cd-c920efb2609d req-1b5073b9-ee96-466f-9e4a-7d0c87398e76 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: ad4801ff-a246-41b7-af59-709b9a78601e] Received event network-vif-plugged-052dfa44-9329-474e-bd60-601248d6420b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:43:50 compute-0 nova_compute[250018]: 2026-01-20 14:43:50.735 250022 DEBUG oslo_concurrency.lockutils [req-32e2c550-f1f1-4bd4-99cd-c920efb2609d req-1b5073b9-ee96-466f-9e4a-7d0c87398e76 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "ad4801ff-a246-41b7-af59-709b9a78601e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:43:50 compute-0 nova_compute[250018]: 2026-01-20 14:43:50.736 250022 DEBUG oslo_concurrency.lockutils [req-32e2c550-f1f1-4bd4-99cd-c920efb2609d req-1b5073b9-ee96-466f-9e4a-7d0c87398e76 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "ad4801ff-a246-41b7-af59-709b9a78601e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:43:50 compute-0 nova_compute[250018]: 2026-01-20 14:43:50.736 250022 DEBUG oslo_concurrency.lockutils [req-32e2c550-f1f1-4bd4-99cd-c920efb2609d req-1b5073b9-ee96-466f-9e4a-7d0c87398e76 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "ad4801ff-a246-41b7-af59-709b9a78601e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:43:50 compute-0 nova_compute[250018]: 2026-01-20 14:43:50.737 250022 DEBUG nova.compute.manager [req-32e2c550-f1f1-4bd4-99cd-c920efb2609d req-1b5073b9-ee96-466f-9e4a-7d0c87398e76 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: ad4801ff-a246-41b7-af59-709b9a78601e] No waiting events found dispatching network-vif-plugged-052dfa44-9329-474e-bd60-601248d6420b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:43:50 compute-0 nova_compute[250018]: 2026-01-20 14:43:50.737 250022 WARNING nova.compute.manager [req-32e2c550-f1f1-4bd4-99cd-c920efb2609d req-1b5073b9-ee96-466f-9e4a-7d0c87398e76 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: ad4801ff-a246-41b7-af59-709b9a78601e] Received unexpected event network-vif-plugged-052dfa44-9329-474e-bd60-601248d6420b for instance with vm_state active and task_state None.
Jan 20 14:43:50 compute-0 nova_compute[250018]: 2026-01-20 14:43:50.749 250022 DEBUG nova.compute.manager [req-eb0d7ee5-9290-467e-9425-753769a7d8f0 req-16b0bc40-d60f-4640-bbdf-5717155b455c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 50a3e5fd-193e-4c31-a7ce-b29b4ff10849] Received event network-changed-09356ee8-6584-422c-b052-c6e0aedc7ab4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:43:50 compute-0 nova_compute[250018]: 2026-01-20 14:43:50.750 250022 DEBUG nova.compute.manager [req-eb0d7ee5-9290-467e-9425-753769a7d8f0 req-16b0bc40-d60f-4640-bbdf-5717155b455c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 50a3e5fd-193e-4c31-a7ce-b29b4ff10849] Refreshing instance network info cache due to event network-changed-09356ee8-6584-422c-b052-c6e0aedc7ab4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:43:50 compute-0 nova_compute[250018]: 2026-01-20 14:43:50.750 250022 DEBUG oslo_concurrency.lockutils [req-eb0d7ee5-9290-467e-9425-753769a7d8f0 req-16b0bc40-d60f-4640-bbdf-5717155b455c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-50a3e5fd-193e-4c31-a7ce-b29b4ff10849" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:43:50 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:43:50 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:43:50 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:43:50.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:43:51 compute-0 nova_compute[250018]: 2026-01-20 14:43:51.240 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:43:51 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1665: 321 pgs: 321 active+clean; 260 MiB data, 780 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 7.1 MiB/s wr, 194 op/s
Jan 20 14:43:51 compute-0 ceph-mon[74360]: pgmap v1665: 321 pgs: 321 active+clean; 260 MiB data, 780 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 7.1 MiB/s wr, 194 op/s
Jan 20 14:43:52 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:43:52 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:43:52 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:43:52.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:43:52 compute-0 nova_compute[250018]: 2026-01-20 14:43:52.248 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:43:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:43:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:43:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:43:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:43:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:43:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:43:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Optimize plan auto_2026-01-20_14:43:52
Jan 20 14:43:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 14:43:52 compute-0 ceph-mgr[74653]: [balancer INFO root] do_upmap
Jan 20 14:43:52 compute-0 ceph-mgr[74653]: [balancer INFO root] pools ['default.rgw.meta', '.rgw.root', 'images', 'cephfs.cephfs.meta', 'backups', '.mgr', 'default.rgw.control', 'default.rgw.log', 'cephfs.cephfs.data', 'vms', 'volumes']
Jan 20 14:43:52 compute-0 ceph-mgr[74653]: [balancer INFO root] prepared 0/10 changes
Jan 20 14:43:52 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:43:52 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:43:52 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:43:52.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:43:52 compute-0 nova_compute[250018]: 2026-01-20 14:43:52.999 250022 DEBUG nova.network.neutron [None req-f54f832f-ecba-46d6-a82d-14b370d440e3 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] [instance: 50a3e5fd-193e-4c31-a7ce-b29b4ff10849] Updating instance_info_cache with network_info: [{"id": "09356ee8-6584-422c-b052-c6e0aedc7ab4", "address": "fa:16:3e:4b:70:28", "network": {"id": "b9346c24-c2ec-4e94-9b82-f911ab82abc7", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-504372403-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}, {"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9fbbc94ed3bb41b1a060e5a7e1099ccf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09356ee8-65", "ovs_interfaceid": "09356ee8-6584-422c-b052-c6e0aedc7ab4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:43:53 compute-0 nova_compute[250018]: 2026-01-20 14:43:53.031 250022 DEBUG oslo_concurrency.lockutils [None req-f54f832f-ecba-46d6-a82d-14b370d440e3 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] Releasing lock "refresh_cache-50a3e5fd-193e-4c31-a7ce-b29b4ff10849" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:43:53 compute-0 nova_compute[250018]: 2026-01-20 14:43:53.031 250022 DEBUG nova.compute.manager [None req-f54f832f-ecba-46d6-a82d-14b370d440e3 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] [instance: 50a3e5fd-193e-4c31-a7ce-b29b4ff10849] Inject network info _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7144
Jan 20 14:43:53 compute-0 nova_compute[250018]: 2026-01-20 14:43:53.032 250022 DEBUG nova.compute.manager [None req-f54f832f-ecba-46d6-a82d-14b370d440e3 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] [instance: 50a3e5fd-193e-4c31-a7ce-b29b4ff10849] network_info to inject: |[{"id": "09356ee8-6584-422c-b052-c6e0aedc7ab4", "address": "fa:16:3e:4b:70:28", "network": {"id": "b9346c24-c2ec-4e94-9b82-f911ab82abc7", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-504372403-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}, {"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9fbbc94ed3bb41b1a060e5a7e1099ccf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09356ee8-65", "ovs_interfaceid": "09356ee8-6584-422c-b052-c6e0aedc7ab4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7145
Jan 20 14:43:53 compute-0 nova_compute[250018]: 2026-01-20 14:43:53.034 250022 DEBUG oslo_concurrency.lockutils [req-eb0d7ee5-9290-467e-9425-753769a7d8f0 req-16b0bc40-d60f-4640-bbdf-5717155b455c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-50a3e5fd-193e-4c31-a7ce-b29b4ff10849" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:43:53 compute-0 nova_compute[250018]: 2026-01-20 14:43:53.035 250022 DEBUG nova.network.neutron [req-eb0d7ee5-9290-467e-9425-753769a7d8f0 req-16b0bc40-d60f-4640-bbdf-5717155b455c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 50a3e5fd-193e-4c31-a7ce-b29b4ff10849] Refreshing network info cache for port 09356ee8-6584-422c-b052-c6e0aedc7ab4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:43:53 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:43:53 compute-0 nova_compute[250018]: 2026-01-20 14:43:53.547 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:43:53 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1666: 321 pgs: 321 active+clean; 260 MiB data, 780 MiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 3.4 MiB/s wr, 227 op/s
Jan 20 14:43:53 compute-0 ceph-mon[74360]: pgmap v1666: 321 pgs: 321 active+clean; 260 MiB data, 780 MiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 3.4 MiB/s wr, 227 op/s
Jan 20 14:43:54 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:43:54 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:43:54 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:43:54.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:43:54 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:43:54 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:43:54 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:43:54.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:43:55 compute-0 nova_compute[250018]: 2026-01-20 14:43:55.494 250022 DEBUG nova.network.neutron [req-eb0d7ee5-9290-467e-9425-753769a7d8f0 req-16b0bc40-d60f-4640-bbdf-5717155b455c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 50a3e5fd-193e-4c31-a7ce-b29b4ff10849] Updated VIF entry in instance network info cache for port 09356ee8-6584-422c-b052-c6e0aedc7ab4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 14:43:55 compute-0 nova_compute[250018]: 2026-01-20 14:43:55.495 250022 DEBUG nova.network.neutron [req-eb0d7ee5-9290-467e-9425-753769a7d8f0 req-16b0bc40-d60f-4640-bbdf-5717155b455c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 50a3e5fd-193e-4c31-a7ce-b29b4ff10849] Updating instance_info_cache with network_info: [{"id": "09356ee8-6584-422c-b052-c6e0aedc7ab4", "address": "fa:16:3e:4b:70:28", "network": {"id": "b9346c24-c2ec-4e94-9b82-f911ab82abc7", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-504372403-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}, {"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9fbbc94ed3bb41b1a060e5a7e1099ccf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09356ee8-65", "ovs_interfaceid": "09356ee8-6584-422c-b052-c6e0aedc7ab4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:43:55 compute-0 nova_compute[250018]: 2026-01-20 14:43:55.519 250022 DEBUG oslo_concurrency.lockutils [req-eb0d7ee5-9290-467e-9425-753769a7d8f0 req-16b0bc40-d60f-4640-bbdf-5717155b455c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-50a3e5fd-193e-4c31-a7ce-b29b4ff10849" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:43:55 compute-0 nova_compute[250018]: 2026-01-20 14:43:55.597 250022 DEBUG nova.objects.instance [None req-1ad7bc42-0b73-4dde-8393-d483231f78af 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] Lazy-loading 'flavor' on Instance uuid 50a3e5fd-193e-4c31-a7ce-b29b4ff10849 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:43:55 compute-0 nova_compute[250018]: 2026-01-20 14:43:55.632 250022 DEBUG oslo_concurrency.lockutils [None req-1ad7bc42-0b73-4dde-8393-d483231f78af 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] Acquiring lock "refresh_cache-50a3e5fd-193e-4c31-a7ce-b29b4ff10849" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:43:55 compute-0 nova_compute[250018]: 2026-01-20 14:43:55.633 250022 DEBUG oslo_concurrency.lockutils [None req-1ad7bc42-0b73-4dde-8393-d483231f78af 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] Acquired lock "refresh_cache-50a3e5fd-193e-4c31-a7ce-b29b4ff10849" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:43:55 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1667: 321 pgs: 321 active+clean; 227 MiB data, 778 MiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 1.1 MiB/s wr, 283 op/s
Jan 20 14:43:56 compute-0 ceph-mon[74360]: pgmap v1667: 321 pgs: 321 active+clean; 227 MiB data, 778 MiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 1.1 MiB/s wr, 283 op/s
Jan 20 14:43:56 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:43:56 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.003000079s ======
Jan 20 14:43:56 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:43:56.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000079s
Jan 20 14:43:56 compute-0 nova_compute[250018]: 2026-01-20 14:43:56.302 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:43:56 compute-0 sudo[299445]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:43:56 compute-0 sudo[299445]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:43:56 compute-0 sudo[299445]: pam_unix(sudo:session): session closed for user root
Jan 20 14:43:56 compute-0 sudo[299470]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:43:56 compute-0 sudo[299470]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:43:56 compute-0 sudo[299470]: pam_unix(sudo:session): session closed for user root
Jan 20 14:43:56 compute-0 sudo[299495]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:43:56 compute-0 sudo[299495]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:43:56 compute-0 sudo[299495]: pam_unix(sudo:session): session closed for user root
Jan 20 14:43:56 compute-0 sudo[299520]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 14:43:56 compute-0 sudo[299520]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:43:56 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:43:56 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:43:56 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:43:56.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:43:57 compute-0 nova_compute[250018]: 2026-01-20 14:43:57.012 250022 DEBUG nova.network.neutron [None req-1ad7bc42-0b73-4dde-8393-d483231f78af 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] [instance: 50a3e5fd-193e-4c31-a7ce-b29b4ff10849] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:43:57 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/278798223' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:43:57 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/3948072205' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:43:57 compute-0 sudo[299520]: pam_unix(sudo:session): session closed for user root
Jan 20 14:43:57 compute-0 nova_compute[250018]: 2026-01-20 14:43:57.124 250022 DEBUG nova.compute.manager [req-a0da50bf-39f1-4969-84bb-e9d24dbbb2b2 req-2a5bbcf7-83a7-4972-8d01-b1ac6354007a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 50a3e5fd-193e-4c31-a7ce-b29b4ff10849] Received event network-changed-09356ee8-6584-422c-b052-c6e0aedc7ab4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:43:57 compute-0 nova_compute[250018]: 2026-01-20 14:43:57.125 250022 DEBUG nova.compute.manager [req-a0da50bf-39f1-4969-84bb-e9d24dbbb2b2 req-2a5bbcf7-83a7-4972-8d01-b1ac6354007a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 50a3e5fd-193e-4c31-a7ce-b29b4ff10849] Refreshing instance network info cache due to event network-changed-09356ee8-6584-422c-b052-c6e0aedc7ab4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:43:57 compute-0 nova_compute[250018]: 2026-01-20 14:43:57.125 250022 DEBUG oslo_concurrency.lockutils [req-a0da50bf-39f1-4969-84bb-e9d24dbbb2b2 req-2a5bbcf7-83a7-4972-8d01-b1ac6354007a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-50a3e5fd-193e-4c31-a7ce-b29b4ff10849" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:43:57 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 14:43:57 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:43:57 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 20 14:43:57 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:43:57 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 20 14:43:57 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:43:57 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 20d54854-c3a6-45a5-80dd-37109def2a45 does not exist
Jan 20 14:43:57 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 1d3b6b96-ef66-4fe5-ab90-adb01c66006a does not exist
Jan 20 14:43:57 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 8be6208f-7078-410e-a05b-fde8636fdc68 does not exist
Jan 20 14:43:57 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 20 14:43:57 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:43:57 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 20 14:43:57 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:43:57 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 14:43:57 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:43:57 compute-0 sudo[299576]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:43:57 compute-0 sudo[299576]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:43:57 compute-0 sudo[299576]: pam_unix(sudo:session): session closed for user root
Jan 20 14:43:57 compute-0 nova_compute[250018]: 2026-01-20 14:43:57.250 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:43:57 compute-0 sudo[299601]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:43:57 compute-0 sudo[299601]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:43:57 compute-0 sudo[299601]: pam_unix(sudo:session): session closed for user root
Jan 20 14:43:57 compute-0 sudo[299626]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:43:57 compute-0 sudo[299626]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:43:57 compute-0 sudo[299626]: pam_unix(sudo:session): session closed for user root
Jan 20 14:43:57 compute-0 sudo[299651]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 14:43:57 compute-0 sudo[299651]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:43:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 14:43:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:43:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 14:43:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:43:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:43:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:43:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:43:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:43:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:43:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:43:57 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1668: 321 pgs: 321 active+clean; 230 MiB data, 768 MiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 901 KiB/s wr, 248 op/s
Jan 20 14:43:57 compute-0 podman[299718]: 2026-01-20 14:43:57.705353859 +0000 UTC m=+0.041885266 container create 7e6c3bfcf706df8229718d3cd39892a7e2bb8b77c2f95d5378ad9a1803b7558d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_pike, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 20 14:43:57 compute-0 systemd[1]: Started libpod-conmon-7e6c3bfcf706df8229718d3cd39892a7e2bb8b77c2f95d5378ad9a1803b7558d.scope.
Jan 20 14:43:57 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:43:57 compute-0 podman[299718]: 2026-01-20 14:43:57.686426716 +0000 UTC m=+0.022958143 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:43:57 compute-0 podman[299718]: 2026-01-20 14:43:57.798951376 +0000 UTC m=+0.135482813 container init 7e6c3bfcf706df8229718d3cd39892a7e2bb8b77c2f95d5378ad9a1803b7558d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_pike, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 20 14:43:57 compute-0 podman[299718]: 2026-01-20 14:43:57.805311448 +0000 UTC m=+0.141842855 container start 7e6c3bfcf706df8229718d3cd39892a7e2bb8b77c2f95d5378ad9a1803b7558d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_pike, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 20 14:43:57 compute-0 podman[299718]: 2026-01-20 14:43:57.808170055 +0000 UTC m=+0.144701542 container attach 7e6c3bfcf706df8229718d3cd39892a7e2bb8b77c2f95d5378ad9a1803b7558d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_pike, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:43:57 compute-0 dazzling_pike[299735]: 167 167
Jan 20 14:43:57 compute-0 systemd[1]: libpod-7e6c3bfcf706df8229718d3cd39892a7e2bb8b77c2f95d5378ad9a1803b7558d.scope: Deactivated successfully.
Jan 20 14:43:57 compute-0 podman[299718]: 2026-01-20 14:43:57.81312781 +0000 UTC m=+0.149659217 container died 7e6c3bfcf706df8229718d3cd39892a7e2bb8b77c2f95d5378ad9a1803b7558d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_pike, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 20 14:43:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-71fd2ff4dba73302ef745bee0491cec3e20888df4a6c51f78db34269e62b3aa8-merged.mount: Deactivated successfully.
Jan 20 14:43:57 compute-0 podman[299718]: 2026-01-20 14:43:57.862032025 +0000 UTC m=+0.198563432 container remove 7e6c3bfcf706df8229718d3cd39892a7e2bb8b77c2f95d5378ad9a1803b7558d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_pike, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 14:43:57 compute-0 systemd[1]: libpod-conmon-7e6c3bfcf706df8229718d3cd39892a7e2bb8b77c2f95d5378ad9a1803b7558d.scope: Deactivated successfully.
Jan 20 14:43:58 compute-0 podman[299759]: 2026-01-20 14:43:58.034115268 +0000 UTC m=+0.043874160 container create bca633c5c8646fc34f2181996fecbda6c67307ea42e96a888e26ef0057b7ac32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_volhard, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 20 14:43:58 compute-0 systemd[1]: Started libpod-conmon-bca633c5c8646fc34f2181996fecbda6c67307ea42e96a888e26ef0057b7ac32.scope.
Jan 20 14:43:58 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:43:58 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:43:58 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:43:58 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:43:58 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:43:58 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:43:58 compute-0 ceph-mon[74360]: pgmap v1668: 321 pgs: 321 active+clean; 230 MiB data, 768 MiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 901 KiB/s wr, 248 op/s
Jan 20 14:43:58 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:43:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7dd5f17e435030488b8415e45776eb10abc32244eba488284816085007610f9f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:43:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7dd5f17e435030488b8415e45776eb10abc32244eba488284816085007610f9f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:43:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7dd5f17e435030488b8415e45776eb10abc32244eba488284816085007610f9f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:43:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7dd5f17e435030488b8415e45776eb10abc32244eba488284816085007610f9f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:43:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7dd5f17e435030488b8415e45776eb10abc32244eba488284816085007610f9f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 14:43:58 compute-0 podman[299759]: 2026-01-20 14:43:58.016232304 +0000 UTC m=+0.025991216 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:43:58 compute-0 podman[299759]: 2026-01-20 14:43:58.126337037 +0000 UTC m=+0.136095949 container init bca633c5c8646fc34f2181996fecbda6c67307ea42e96a888e26ef0057b7ac32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_volhard, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:43:58 compute-0 podman[299759]: 2026-01-20 14:43:58.133936523 +0000 UTC m=+0.143695415 container start bca633c5c8646fc34f2181996fecbda6c67307ea42e96a888e26ef0057b7ac32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_volhard, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 20 14:43:58 compute-0 podman[299759]: 2026-01-20 14:43:58.137491109 +0000 UTC m=+0.147250001 container attach bca633c5c8646fc34f2181996fecbda6c67307ea42e96a888e26ef0057b7ac32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_volhard, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:43:58 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:43:58 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:43:58 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:43:58.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:43:58 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:43:58 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:43:58 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:43:58 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:43:58.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:43:58 compute-0 nova_compute[250018]: 2026-01-20 14:43:58.884 250022 DEBUG nova.network.neutron [None req-1ad7bc42-0b73-4dde-8393-d483231f78af 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] [instance: 50a3e5fd-193e-4c31-a7ce-b29b4ff10849] Updating instance_info_cache with network_info: [{"id": "09356ee8-6584-422c-b052-c6e0aedc7ab4", "address": "fa:16:3e:4b:70:28", "network": {"id": "b9346c24-c2ec-4e94-9b82-f911ab82abc7", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-504372403-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9fbbc94ed3bb41b1a060e5a7e1099ccf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09356ee8-65", "ovs_interfaceid": "09356ee8-6584-422c-b052-c6e0aedc7ab4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:43:58 compute-0 elegant_volhard[299776]: --> passed data devices: 0 physical, 1 LVM
Jan 20 14:43:58 compute-0 elegant_volhard[299776]: --> relative data size: 1.0
Jan 20 14:43:58 compute-0 elegant_volhard[299776]: --> All data devices are unavailable
Jan 20 14:43:58 compute-0 nova_compute[250018]: 2026-01-20 14:43:58.919 250022 DEBUG oslo_concurrency.lockutils [None req-1ad7bc42-0b73-4dde-8393-d483231f78af 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] Releasing lock "refresh_cache-50a3e5fd-193e-4c31-a7ce-b29b4ff10849" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:43:58 compute-0 nova_compute[250018]: 2026-01-20 14:43:58.920 250022 DEBUG nova.compute.manager [None req-1ad7bc42-0b73-4dde-8393-d483231f78af 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] [instance: 50a3e5fd-193e-4c31-a7ce-b29b4ff10849] Inject network info _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7144
Jan 20 14:43:58 compute-0 nova_compute[250018]: 2026-01-20 14:43:58.921 250022 DEBUG nova.compute.manager [None req-1ad7bc42-0b73-4dde-8393-d483231f78af 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] [instance: 50a3e5fd-193e-4c31-a7ce-b29b4ff10849] network_info to inject: |[{"id": "09356ee8-6584-422c-b052-c6e0aedc7ab4", "address": "fa:16:3e:4b:70:28", "network": {"id": "b9346c24-c2ec-4e94-9b82-f911ab82abc7", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-504372403-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9fbbc94ed3bb41b1a060e5a7e1099ccf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09356ee8-65", "ovs_interfaceid": "09356ee8-6584-422c-b052-c6e0aedc7ab4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7145
Jan 20 14:43:58 compute-0 systemd[1]: libpod-bca633c5c8646fc34f2181996fecbda6c67307ea42e96a888e26ef0057b7ac32.scope: Deactivated successfully.
Jan 20 14:43:58 compute-0 podman[299759]: 2026-01-20 14:43:58.923244843 +0000 UTC m=+0.933003735 container died bca633c5c8646fc34f2181996fecbda6c67307ea42e96a888e26ef0057b7ac32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_volhard, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:43:58 compute-0 nova_compute[250018]: 2026-01-20 14:43:58.924 250022 DEBUG oslo_concurrency.lockutils [req-a0da50bf-39f1-4969-84bb-e9d24dbbb2b2 req-2a5bbcf7-83a7-4972-8d01-b1ac6354007a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-50a3e5fd-193e-4c31-a7ce-b29b4ff10849" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:43:58 compute-0 nova_compute[250018]: 2026-01-20 14:43:58.924 250022 DEBUG nova.network.neutron [req-a0da50bf-39f1-4969-84bb-e9d24dbbb2b2 req-2a5bbcf7-83a7-4972-8d01-b1ac6354007a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 50a3e5fd-193e-4c31-a7ce-b29b4ff10849] Refreshing network info cache for port 09356ee8-6584-422c-b052-c6e0aedc7ab4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:43:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-7dd5f17e435030488b8415e45776eb10abc32244eba488284816085007610f9f-merged.mount: Deactivated successfully.
Jan 20 14:43:59 compute-0 podman[299759]: 2026-01-20 14:43:59.00842905 +0000 UTC m=+1.018187942 container remove bca633c5c8646fc34f2181996fecbda6c67307ea42e96a888e26ef0057b7ac32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_volhard, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:43:59 compute-0 systemd[1]: libpod-conmon-bca633c5c8646fc34f2181996fecbda6c67307ea42e96a888e26ef0057b7ac32.scope: Deactivated successfully.
Jan 20 14:43:59 compute-0 sudo[299651]: pam_unix(sudo:session): session closed for user root
Jan 20 14:43:59 compute-0 sudo[299806]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:43:59 compute-0 sudo[299806]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:43:59 compute-0 sudo[299806]: pam_unix(sudo:session): session closed for user root
Jan 20 14:43:59 compute-0 sudo[299831]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:43:59 compute-0 sudo[299831]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:43:59 compute-0 sudo[299831]: pam_unix(sudo:session): session closed for user root
Jan 20 14:43:59 compute-0 sudo[299856]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:43:59 compute-0 sudo[299856]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:43:59 compute-0 sudo[299856]: pam_unix(sudo:session): session closed for user root
Jan 20 14:43:59 compute-0 sudo[299881]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- lvm list --format json
Jan 20 14:43:59 compute-0 sudo[299881]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:43:59 compute-0 nova_compute[250018]: 2026-01-20 14:43:59.500 250022 DEBUG oslo_concurrency.lockutils [None req-1570069f-8b84-4255-96e2-f12a4f8b518c 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] Acquiring lock "50a3e5fd-193e-4c31-a7ce-b29b4ff10849" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:43:59 compute-0 nova_compute[250018]: 2026-01-20 14:43:59.502 250022 DEBUG oslo_concurrency.lockutils [None req-1570069f-8b84-4255-96e2-f12a4f8b518c 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] Lock "50a3e5fd-193e-4c31-a7ce-b29b4ff10849" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:43:59 compute-0 nova_compute[250018]: 2026-01-20 14:43:59.503 250022 DEBUG oslo_concurrency.lockutils [None req-1570069f-8b84-4255-96e2-f12a4f8b518c 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] Acquiring lock "50a3e5fd-193e-4c31-a7ce-b29b4ff10849-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:43:59 compute-0 nova_compute[250018]: 2026-01-20 14:43:59.503 250022 DEBUG oslo_concurrency.lockutils [None req-1570069f-8b84-4255-96e2-f12a4f8b518c 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] Lock "50a3e5fd-193e-4c31-a7ce-b29b4ff10849-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:43:59 compute-0 nova_compute[250018]: 2026-01-20 14:43:59.503 250022 DEBUG oslo_concurrency.lockutils [None req-1570069f-8b84-4255-96e2-f12a4f8b518c 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] Lock "50a3e5fd-193e-4c31-a7ce-b29b4ff10849-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:43:59 compute-0 nova_compute[250018]: 2026-01-20 14:43:59.504 250022 INFO nova.compute.manager [None req-1570069f-8b84-4255-96e2-f12a4f8b518c 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] [instance: 50a3e5fd-193e-4c31-a7ce-b29b4ff10849] Terminating instance
Jan 20 14:43:59 compute-0 nova_compute[250018]: 2026-01-20 14:43:59.505 250022 DEBUG nova.compute.manager [None req-1570069f-8b84-4255-96e2-f12a4f8b518c 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] [instance: 50a3e5fd-193e-4c31-a7ce-b29b4ff10849] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 14:43:59 compute-0 kernel: tap09356ee8-65 (unregistering): left promiscuous mode
Jan 20 14:43:59 compute-0 NetworkManager[48960]: <info>  [1768920239.5619] device (tap09356ee8-65): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 14:43:59 compute-0 ovn_controller[148666]: 2026-01-20T14:43:59Z|00280|binding|INFO|Releasing lport 09356ee8-6584-422c-b052-c6e0aedc7ab4 from this chassis (sb_readonly=0)
Jan 20 14:43:59 compute-0 ovn_controller[148666]: 2026-01-20T14:43:59Z|00281|binding|INFO|Setting lport 09356ee8-6584-422c-b052-c6e0aedc7ab4 down in Southbound
Jan 20 14:43:59 compute-0 nova_compute[250018]: 2026-01-20 14:43:59.578 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:43:59 compute-0 ovn_controller[148666]: 2026-01-20T14:43:59Z|00282|binding|INFO|Removing iface tap09356ee8-65 ovn-installed in OVS
Jan 20 14:43:59 compute-0 nova_compute[250018]: 2026-01-20 14:43:59.579 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:43:59 compute-0 podman[299947]: 2026-01-20 14:43:59.584983454 +0000 UTC m=+0.044648851 container create 904354989b666792e95819d75a2ac2fc84162ac5498b161262ca81f43b7038ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_davinci, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True)
Jan 20 14:43:59 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:43:59.589 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4b:70:28 10.100.0.5'], port_security=['fa:16:3e:4b:70:28 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '50a3e5fd-193e-4c31-a7ce-b29b4ff10849', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b9346c24-c2ec-4e94-9b82-f911ab82abc7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9fbbc94ed3bb41b1a060e5a7e1099ccf', 'neutron:revision_number': '6', 'neutron:security_group_ids': '074276d6-bd54-418a-a9d1-e949772a1489', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.196'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d6902607-52d1-4bb5-82d0-ef4c4182165b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=09356ee8-6584-422c-b052-c6e0aedc7ab4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:43:59 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:43:59.591 160071 INFO neutron.agent.ovn.metadata.agent [-] Port 09356ee8-6584-422c-b052-c6e0aedc7ab4 in datapath b9346c24-c2ec-4e94-9b82-f911ab82abc7 unbound from our chassis
Jan 20 14:43:59 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:43:59.593 160071 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b9346c24-c2ec-4e94-9b82-f911ab82abc7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 14:43:59 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:43:59.597 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[9a428b0f-a450-4ca3-bbcd-a5a9fb7abd0b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:43:59 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:43:59.598 160071 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b9346c24-c2ec-4e94-9b82-f911ab82abc7 namespace which is not needed anymore
Jan 20 14:43:59 compute-0 nova_compute[250018]: 2026-01-20 14:43:59.609 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:43:59 compute-0 systemd[1]: Started libpod-conmon-904354989b666792e95819d75a2ac2fc84162ac5498b161262ca81f43b7038ef.scope.
Jan 20 14:43:59 compute-0 systemd[1]: machine-qemu\x2d36\x2dinstance\x2d00000052.scope: Deactivated successfully.
Jan 20 14:43:59 compute-0 systemd[1]: machine-qemu\x2d36\x2dinstance\x2d00000052.scope: Consumed 14.276s CPU time.
Jan 20 14:43:59 compute-0 systemd-machined[216401]: Machine qemu-36-instance-00000052 terminated.
Jan 20 14:43:59 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:43:59 compute-0 podman[299947]: 2026-01-20 14:43:59.564342455 +0000 UTC m=+0.024007872 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:43:59 compute-0 podman[299947]: 2026-01-20 14:43:59.667350156 +0000 UTC m=+0.127015573 container init 904354989b666792e95819d75a2ac2fc84162ac5498b161262ca81f43b7038ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_davinci, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 20 14:43:59 compute-0 podman[299947]: 2026-01-20 14:43:59.674611903 +0000 UTC m=+0.134277300 container start 904354989b666792e95819d75a2ac2fc84162ac5498b161262ca81f43b7038ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_davinci, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:43:59 compute-0 wizardly_davinci[299972]: 167 167
Jan 20 14:43:59 compute-0 systemd[1]: libpod-904354989b666792e95819d75a2ac2fc84162ac5498b161262ca81f43b7038ef.scope: Deactivated successfully.
Jan 20 14:43:59 compute-0 podman[299947]: 2026-01-20 14:43:59.679752163 +0000 UTC m=+0.139417580 container attach 904354989b666792e95819d75a2ac2fc84162ac5498b161262ca81f43b7038ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_davinci, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:43:59 compute-0 conmon[299972]: conmon 904354989b666792e958 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-904354989b666792e95819d75a2ac2fc84162ac5498b161262ca81f43b7038ef.scope/container/memory.events
Jan 20 14:43:59 compute-0 podman[299947]: 2026-01-20 14:43:59.681674634 +0000 UTC m=+0.141340031 container died 904354989b666792e95819d75a2ac2fc84162ac5498b161262ca81f43b7038ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_davinci, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:43:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-2a372d409b595bf0f373cd47f857e608a8d3e954463f6332f63a74249079dd0e-merged.mount: Deactivated successfully.
Jan 20 14:43:59 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1669: 321 pgs: 321 active+clean; 238 MiB data, 767 MiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 1.2 MiB/s wr, 251 op/s
Jan 20 14:43:59 compute-0 podman[299947]: 2026-01-20 14:43:59.718314657 +0000 UTC m=+0.177980054 container remove 904354989b666792e95819d75a2ac2fc84162ac5498b161262ca81f43b7038ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_davinci, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 20 14:43:59 compute-0 nova_compute[250018]: 2026-01-20 14:43:59.727 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:43:59 compute-0 nova_compute[250018]: 2026-01-20 14:43:59.734 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:43:59 compute-0 systemd[1]: libpod-conmon-904354989b666792e95819d75a2ac2fc84162ac5498b161262ca81f43b7038ef.scope: Deactivated successfully.
Jan 20 14:43:59 compute-0 nova_compute[250018]: 2026-01-20 14:43:59.742 250022 INFO nova.virt.libvirt.driver [-] [instance: 50a3e5fd-193e-4c31-a7ce-b29b4ff10849] Instance destroyed successfully.
Jan 20 14:43:59 compute-0 nova_compute[250018]: 2026-01-20 14:43:59.742 250022 DEBUG nova.objects.instance [None req-1570069f-8b84-4255-96e2-f12a4f8b518c 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] Lazy-loading 'resources' on Instance uuid 50a3e5fd-193e-4c31-a7ce-b29b4ff10849 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:43:59 compute-0 neutron-haproxy-ovnmeta-b9346c24-c2ec-4e94-9b82-f911ab82abc7[298812]: [NOTICE]   (298816) : haproxy version is 2.8.14-c23fe91
Jan 20 14:43:59 compute-0 neutron-haproxy-ovnmeta-b9346c24-c2ec-4e94-9b82-f911ab82abc7[298812]: [NOTICE]   (298816) : path to executable is /usr/sbin/haproxy
Jan 20 14:43:59 compute-0 neutron-haproxy-ovnmeta-b9346c24-c2ec-4e94-9b82-f911ab82abc7[298812]: [WARNING]  (298816) : Exiting Master process...
Jan 20 14:43:59 compute-0 neutron-haproxy-ovnmeta-b9346c24-c2ec-4e94-9b82-f911ab82abc7[298812]: [ALERT]    (298816) : Current worker (298818) exited with code 143 (Terminated)
Jan 20 14:43:59 compute-0 neutron-haproxy-ovnmeta-b9346c24-c2ec-4e94-9b82-f911ab82abc7[298812]: [WARNING]  (298816) : All workers exited. Exiting... (0)
Jan 20 14:43:59 compute-0 systemd[1]: libpod-f1b38fae35d9447fd601d87aee208b210a5e3817976b2fb0704ddf81f7fb2176.scope: Deactivated successfully.
Jan 20 14:43:59 compute-0 podman[299991]: 2026-01-20 14:43:59.753622085 +0000 UTC m=+0.056077721 container died f1b38fae35d9447fd601d87aee208b210a5e3817976b2fb0704ddf81f7fb2176 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b9346c24-c2ec-4e94-9b82-f911ab82abc7, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 20 14:43:59 compute-0 nova_compute[250018]: 2026-01-20 14:43:59.782 250022 DEBUG nova.virt.libvirt.vif [None req-1570069f-8b84-4255-96e2-f12a4f8b518c 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:43:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-1638959585',display_name='tempest-AttachInterfacesUnderV243Test-server-1638959585',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-1638959585',id=82,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKHBLhvwQD8pfuy81t4/uX9vNVmbmzfhtV4k/aMtROKxZHBT5Av/A3g1Hv5GBLLgGJzoauzbTsv3KqXvd+dGceMvL4+iryR2sjrFK7Zz/EMWrOCYAILIyDu16zdgqhbQwg==',key_name='tempest-keypair-1449064549',keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:43:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9fbbc94ed3bb41b1a060e5a7e1099ccf',ramdisk_id='',reservation_id='r-zhkvfssa',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesUnderV243Test-743380895',owner_user_name='tempest-AttachInterfacesUnderV243Test-743380895-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T14:43:58Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='7382dbb4dfeb47a08ece33c6f113d77c',uuid=50a3e5fd-193e-4c31-a7ce-b29b4ff10849,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "09356ee8-6584-422c-b052-c6e0aedc7ab4", "address": "fa:16:3e:4b:70:28", "network": {"id": "b9346c24-c2ec-4e94-9b82-f911ab82abc7", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-504372403-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9fbbc94ed3bb41b1a060e5a7e1099ccf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09356ee8-65", "ovs_interfaceid": "09356ee8-6584-422c-b052-c6e0aedc7ab4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 14:43:59 compute-0 nova_compute[250018]: 2026-01-20 14:43:59.782 250022 DEBUG nova.network.os_vif_util [None req-1570069f-8b84-4255-96e2-f12a4f8b518c 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] Converting VIF {"id": "09356ee8-6584-422c-b052-c6e0aedc7ab4", "address": "fa:16:3e:4b:70:28", "network": {"id": "b9346c24-c2ec-4e94-9b82-f911ab82abc7", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-504372403-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9fbbc94ed3bb41b1a060e5a7e1099ccf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09356ee8-65", "ovs_interfaceid": "09356ee8-6584-422c-b052-c6e0aedc7ab4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:43:59 compute-0 nova_compute[250018]: 2026-01-20 14:43:59.783 250022 DEBUG nova.network.os_vif_util [None req-1570069f-8b84-4255-96e2-f12a4f8b518c 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:4b:70:28,bridge_name='br-int',has_traffic_filtering=True,id=09356ee8-6584-422c-b052-c6e0aedc7ab4,network=Network(b9346c24-c2ec-4e94-9b82-f911ab82abc7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap09356ee8-65') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:43:59 compute-0 nova_compute[250018]: 2026-01-20 14:43:59.783 250022 DEBUG os_vif [None req-1570069f-8b84-4255-96e2-f12a4f8b518c 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:4b:70:28,bridge_name='br-int',has_traffic_filtering=True,id=09356ee8-6584-422c-b052-c6e0aedc7ab4,network=Network(b9346c24-c2ec-4e94-9b82-f911ab82abc7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap09356ee8-65') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 14:43:59 compute-0 nova_compute[250018]: 2026-01-20 14:43:59.785 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:43:59 compute-0 nova_compute[250018]: 2026-01-20 14:43:59.785 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap09356ee8-65, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:43:59 compute-0 ceph-mon[74360]: pgmap v1669: 321 pgs: 321 active+clean; 238 MiB data, 767 MiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 1.2 MiB/s wr, 251 op/s
Jan 20 14:43:59 compute-0 nova_compute[250018]: 2026-01-20 14:43:59.788 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:43:59 compute-0 nova_compute[250018]: 2026-01-20 14:43:59.789 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:43:59 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f1b38fae35d9447fd601d87aee208b210a5e3817976b2fb0704ddf81f7fb2176-userdata-shm.mount: Deactivated successfully.
Jan 20 14:43:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-7c4150b8f4b6bdf0925e398e6a4c0242f7601e52e673b8736f534b543f5c8c6a-merged.mount: Deactivated successfully.
Jan 20 14:43:59 compute-0 nova_compute[250018]: 2026-01-20 14:43:59.794 250022 INFO os_vif [None req-1570069f-8b84-4255-96e2-f12a4f8b518c 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:4b:70:28,bridge_name='br-int',has_traffic_filtering=True,id=09356ee8-6584-422c-b052-c6e0aedc7ab4,network=Network(b9346c24-c2ec-4e94-9b82-f911ab82abc7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap09356ee8-65')
Jan 20 14:43:59 compute-0 podman[299991]: 2026-01-20 14:43:59.806278481 +0000 UTC m=+0.108734107 container cleanup f1b38fae35d9447fd601d87aee208b210a5e3817976b2fb0704ddf81f7fb2176 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b9346c24-c2ec-4e94-9b82-f911ab82abc7, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 20 14:43:59 compute-0 systemd[1]: libpod-conmon-f1b38fae35d9447fd601d87aee208b210a5e3817976b2fb0704ddf81f7fb2176.scope: Deactivated successfully.
Jan 20 14:43:59 compute-0 podman[300056]: 2026-01-20 14:43:59.871005615 +0000 UTC m=+0.041319201 container remove f1b38fae35d9447fd601d87aee208b210a5e3817976b2fb0704ddf81f7fb2176 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b9346c24-c2ec-4e94-9b82-f911ab82abc7, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2)
Jan 20 14:43:59 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:43:59.876 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[e00a1e0c-45c9-4ade-8097-00e723faf110]: (4, ('Tue Jan 20 02:43:59 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-b9346c24-c2ec-4e94-9b82-f911ab82abc7 (f1b38fae35d9447fd601d87aee208b210a5e3817976b2fb0704ddf81f7fb2176)\nf1b38fae35d9447fd601d87aee208b210a5e3817976b2fb0704ddf81f7fb2176\nTue Jan 20 02:43:59 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-b9346c24-c2ec-4e94-9b82-f911ab82abc7 (f1b38fae35d9447fd601d87aee208b210a5e3817976b2fb0704ddf81f7fb2176)\nf1b38fae35d9447fd601d87aee208b210a5e3817976b2fb0704ddf81f7fb2176\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:43:59 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:43:59.878 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[6ce44f83-9a8c-4734-9ca1-244a4a1cc814]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:43:59 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:43:59.879 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb9346c24-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:43:59 compute-0 nova_compute[250018]: 2026-01-20 14:43:59.928 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:43:59 compute-0 kernel: tapb9346c24-c0: left promiscuous mode
Jan 20 14:43:59 compute-0 nova_compute[250018]: 2026-01-20 14:43:59.932 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:43:59 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:43:59.934 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[d776043c-be1d-44fa-8eb6-bbceb6ba0410]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:43:59 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:43:59.949 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[97899225-9926-4494-b442-0d816883a07d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:43:59 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:43:59.951 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[773b2fbc-3fe3-474e-a9cc-4bec262162a1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:43:59 compute-0 nova_compute[250018]: 2026-01-20 14:43:59.952 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:43:59 compute-0 podman[300074]: 2026-01-20 14:43:59.964039066 +0000 UTC m=+0.097732479 container create 45e6b20c06ed62f40f5066ba25b865cc540b0d7dc7262192f6cbdeb2c5872c89 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_burnell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:43:59 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:43:59.971 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[edf70fa7-d9d0-4cc4-aa48-8a7771604b7c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 614715, 'reachable_time': 30098, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 300092, 'error': None, 'target': 'ovnmeta-b9346c24-c2ec-4e94-9b82-f911ab82abc7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:43:59 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:43:59.974 160517 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b9346c24-c2ec-4e94-9b82-f911ab82abc7 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 14:43:59 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:43:59.974 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[43e18993-3b00-44e1-aa7d-79049927f1ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:43:59 compute-0 systemd[1]: run-netns-ovnmeta\x2db9346c24\x2dc2ec\x2d4e94\x2d9b82\x2df911ab82abc7.mount: Deactivated successfully.
Jan 20 14:44:00 compute-0 systemd[1]: Started libpod-conmon-45e6b20c06ed62f40f5066ba25b865cc540b0d7dc7262192f6cbdeb2c5872c89.scope.
Jan 20 14:44:00 compute-0 podman[300074]: 2026-01-20 14:43:59.939732798 +0000 UTC m=+0.073426231 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:44:00 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:44:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40aa8ea5ffc05f504801913fb5c3217d5195e4883bd476a700234a2a4f7e8d00/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:44:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40aa8ea5ffc05f504801913fb5c3217d5195e4883bd476a700234a2a4f7e8d00/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:44:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40aa8ea5ffc05f504801913fb5c3217d5195e4883bd476a700234a2a4f7e8d00/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:44:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40aa8ea5ffc05f504801913fb5c3217d5195e4883bd476a700234a2a4f7e8d00/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:44:00 compute-0 podman[300074]: 2026-01-20 14:44:00.068903958 +0000 UTC m=+0.202597431 container init 45e6b20c06ed62f40f5066ba25b865cc540b0d7dc7262192f6cbdeb2c5872c89 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_burnell, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 20 14:44:00 compute-0 podman[300074]: 2026-01-20 14:44:00.076215656 +0000 UTC m=+0.209909089 container start 45e6b20c06ed62f40f5066ba25b865cc540b0d7dc7262192f6cbdeb2c5872c89 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_burnell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 20 14:44:00 compute-0 podman[300074]: 2026-01-20 14:44:00.081788917 +0000 UTC m=+0.215482350 container attach 45e6b20c06ed62f40f5066ba25b865cc540b0d7dc7262192f6cbdeb2c5872c89 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_burnell, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 20 14:44:00 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:44:00 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:44:00 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:44:00.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:44:00 compute-0 nova_compute[250018]: 2026-01-20 14:44:00.238 250022 INFO nova.virt.libvirt.driver [None req-1570069f-8b84-4255-96e2-f12a4f8b518c 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] [instance: 50a3e5fd-193e-4c31-a7ce-b29b4ff10849] Deleting instance files /var/lib/nova/instances/50a3e5fd-193e-4c31-a7ce-b29b4ff10849_del
Jan 20 14:44:00 compute-0 nova_compute[250018]: 2026-01-20 14:44:00.240 250022 INFO nova.virt.libvirt.driver [None req-1570069f-8b84-4255-96e2-f12a4f8b518c 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] [instance: 50a3e5fd-193e-4c31-a7ce-b29b4ff10849] Deletion of /var/lib/nova/instances/50a3e5fd-193e-4c31-a7ce-b29b4ff10849_del complete
Jan 20 14:44:00 compute-0 nova_compute[250018]: 2026-01-20 14:44:00.311 250022 INFO nova.compute.manager [None req-1570069f-8b84-4255-96e2-f12a4f8b518c 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] [instance: 50a3e5fd-193e-4c31-a7ce-b29b4ff10849] Took 0.81 seconds to destroy the instance on the hypervisor.
Jan 20 14:44:00 compute-0 nova_compute[250018]: 2026-01-20 14:44:00.311 250022 DEBUG oslo.service.loopingcall [None req-1570069f-8b84-4255-96e2-f12a4f8b518c 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 14:44:00 compute-0 nova_compute[250018]: 2026-01-20 14:44:00.311 250022 DEBUG nova.compute.manager [-] [instance: 50a3e5fd-193e-4c31-a7ce-b29b4ff10849] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 14:44:00 compute-0 nova_compute[250018]: 2026-01-20 14:44:00.312 250022 DEBUG nova.network.neutron [-] [instance: 50a3e5fd-193e-4c31-a7ce-b29b4ff10849] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 14:44:00 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:44:00 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:44:00 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:44:00.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:44:00 compute-0 kind_burnell[300095]: {
Jan 20 14:44:00 compute-0 kind_burnell[300095]:     "0": [
Jan 20 14:44:00 compute-0 kind_burnell[300095]:         {
Jan 20 14:44:00 compute-0 kind_burnell[300095]:             "devices": [
Jan 20 14:44:00 compute-0 kind_burnell[300095]:                 "/dev/loop3"
Jan 20 14:44:00 compute-0 kind_burnell[300095]:             ],
Jan 20 14:44:00 compute-0 kind_burnell[300095]:             "lv_name": "ceph_lv0",
Jan 20 14:44:00 compute-0 kind_burnell[300095]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:44:00 compute-0 kind_burnell[300095]:             "lv_size": "7511998464",
Jan 20 14:44:00 compute-0 kind_burnell[300095]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e399cf45-e6b6-5393-99f1-75c601d3f188,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afc3740a-ccee-46f8-b343-22648cc89376,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 20 14:44:00 compute-0 kind_burnell[300095]:             "lv_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 14:44:00 compute-0 kind_burnell[300095]:             "name": "ceph_lv0",
Jan 20 14:44:00 compute-0 kind_burnell[300095]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:44:00 compute-0 kind_burnell[300095]:             "tags": {
Jan 20 14:44:00 compute-0 kind_burnell[300095]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:44:00 compute-0 kind_burnell[300095]:                 "ceph.block_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 14:44:00 compute-0 kind_burnell[300095]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 14:44:00 compute-0 kind_burnell[300095]:                 "ceph.cluster_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 14:44:00 compute-0 kind_burnell[300095]:                 "ceph.cluster_name": "ceph",
Jan 20 14:44:00 compute-0 kind_burnell[300095]:                 "ceph.crush_device_class": "",
Jan 20 14:44:00 compute-0 kind_burnell[300095]:                 "ceph.encrypted": "0",
Jan 20 14:44:00 compute-0 kind_burnell[300095]:                 "ceph.osd_fsid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 14:44:00 compute-0 kind_burnell[300095]:                 "ceph.osd_id": "0",
Jan 20 14:44:00 compute-0 kind_burnell[300095]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 14:44:00 compute-0 kind_burnell[300095]:                 "ceph.type": "block",
Jan 20 14:44:00 compute-0 kind_burnell[300095]:                 "ceph.vdo": "0"
Jan 20 14:44:00 compute-0 kind_burnell[300095]:             },
Jan 20 14:44:00 compute-0 kind_burnell[300095]:             "type": "block",
Jan 20 14:44:00 compute-0 kind_burnell[300095]:             "vg_name": "ceph_vg0"
Jan 20 14:44:00 compute-0 kind_burnell[300095]:         }
Jan 20 14:44:00 compute-0 kind_burnell[300095]:     ]
Jan 20 14:44:00 compute-0 kind_burnell[300095]: }
Jan 20 14:44:00 compute-0 systemd[1]: libpod-45e6b20c06ed62f40f5066ba25b865cc540b0d7dc7262192f6cbdeb2c5872c89.scope: Deactivated successfully.
Jan 20 14:44:00 compute-0 podman[300074]: 2026-01-20 14:44:00.853841618 +0000 UTC m=+0.987535061 container died 45e6b20c06ed62f40f5066ba25b865cc540b0d7dc7262192f6cbdeb2c5872c89 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_burnell, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 20 14:44:00 compute-0 nova_compute[250018]: 2026-01-20 14:44:00.877 250022 DEBUG nova.network.neutron [req-a0da50bf-39f1-4969-84bb-e9d24dbbb2b2 req-2a5bbcf7-83a7-4972-8d01-b1ac6354007a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 50a3e5fd-193e-4c31-a7ce-b29b4ff10849] Updated VIF entry in instance network info cache for port 09356ee8-6584-422c-b052-c6e0aedc7ab4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 14:44:00 compute-0 nova_compute[250018]: 2026-01-20 14:44:00.878 250022 DEBUG nova.network.neutron [req-a0da50bf-39f1-4969-84bb-e9d24dbbb2b2 req-2a5bbcf7-83a7-4972-8d01-b1ac6354007a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 50a3e5fd-193e-4c31-a7ce-b29b4ff10849] Updating instance_info_cache with network_info: [{"id": "09356ee8-6584-422c-b052-c6e0aedc7ab4", "address": "fa:16:3e:4b:70:28", "network": {"id": "b9346c24-c2ec-4e94-9b82-f911ab82abc7", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-504372403-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9fbbc94ed3bb41b1a060e5a7e1099ccf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09356ee8-65", "ovs_interfaceid": "09356ee8-6584-422c-b052-c6e0aedc7ab4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:44:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-40aa8ea5ffc05f504801913fb5c3217d5195e4883bd476a700234a2a4f7e8d00-merged.mount: Deactivated successfully.
Jan 20 14:44:00 compute-0 nova_compute[250018]: 2026-01-20 14:44:00.904 250022 DEBUG oslo_concurrency.lockutils [req-a0da50bf-39f1-4969-84bb-e9d24dbbb2b2 req-2a5bbcf7-83a7-4972-8d01-b1ac6354007a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-50a3e5fd-193e-4c31-a7ce-b29b4ff10849" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:44:00 compute-0 podman[300074]: 2026-01-20 14:44:00.936727005 +0000 UTC m=+1.070420408 container remove 45e6b20c06ed62f40f5066ba25b865cc540b0d7dc7262192f6cbdeb2c5872c89 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_burnell, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Jan 20 14:44:00 compute-0 systemd[1]: libpod-conmon-45e6b20c06ed62f40f5066ba25b865cc540b0d7dc7262192f6cbdeb2c5872c89.scope: Deactivated successfully.
Jan 20 14:44:00 compute-0 sudo[299881]: pam_unix(sudo:session): session closed for user root
Jan 20 14:44:01 compute-0 sudo[300117]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:44:01 compute-0 sudo[300117]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:44:01 compute-0 sudo[300117]: pam_unix(sudo:session): session closed for user root
Jan 20 14:44:01 compute-0 sudo[300142]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:44:01 compute-0 sudo[300142]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:44:01 compute-0 sudo[300142]: pam_unix(sudo:session): session closed for user root
Jan 20 14:44:01 compute-0 sudo[300167]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:44:01 compute-0 sudo[300167]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:44:01 compute-0 sudo[300167]: pam_unix(sudo:session): session closed for user root
Jan 20 14:44:01 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/1574054384' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:44:01 compute-0 sudo[300192]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- raw list --format json
Jan 20 14:44:01 compute-0 sudo[300192]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:44:01 compute-0 nova_compute[250018]: 2026-01-20 14:44:01.248 250022 DEBUG oslo_concurrency.lockutils [None req-7614a1a0-87a7-4e97-a5da-9e888b4c5c1e 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] Acquiring lock "ad4801ff-a246-41b7-af59-709b9a78601e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:44:01 compute-0 nova_compute[250018]: 2026-01-20 14:44:01.249 250022 DEBUG oslo_concurrency.lockutils [None req-7614a1a0-87a7-4e97-a5da-9e888b4c5c1e 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] Lock "ad4801ff-a246-41b7-af59-709b9a78601e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:44:01 compute-0 nova_compute[250018]: 2026-01-20 14:44:01.249 250022 DEBUG oslo_concurrency.lockutils [None req-7614a1a0-87a7-4e97-a5da-9e888b4c5c1e 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] Acquiring lock "ad4801ff-a246-41b7-af59-709b9a78601e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:44:01 compute-0 nova_compute[250018]: 2026-01-20 14:44:01.249 250022 DEBUG oslo_concurrency.lockutils [None req-7614a1a0-87a7-4e97-a5da-9e888b4c5c1e 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] Lock "ad4801ff-a246-41b7-af59-709b9a78601e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:44:01 compute-0 nova_compute[250018]: 2026-01-20 14:44:01.249 250022 DEBUG oslo_concurrency.lockutils [None req-7614a1a0-87a7-4e97-a5da-9e888b4c5c1e 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] Lock "ad4801ff-a246-41b7-af59-709b9a78601e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:44:01 compute-0 nova_compute[250018]: 2026-01-20 14:44:01.250 250022 INFO nova.compute.manager [None req-7614a1a0-87a7-4e97-a5da-9e888b4c5c1e 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] [instance: ad4801ff-a246-41b7-af59-709b9a78601e] Terminating instance
Jan 20 14:44:01 compute-0 nova_compute[250018]: 2026-01-20 14:44:01.251 250022 DEBUG nova.compute.manager [None req-7614a1a0-87a7-4e97-a5da-9e888b4c5c1e 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] [instance: ad4801ff-a246-41b7-af59-709b9a78601e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 14:44:01 compute-0 kernel: tap052dfa44-93 (unregistering): left promiscuous mode
Jan 20 14:44:01 compute-0 NetworkManager[48960]: <info>  [1768920241.3184] device (tap052dfa44-93): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 14:44:01 compute-0 ovn_controller[148666]: 2026-01-20T14:44:01Z|00283|binding|INFO|Releasing lport 052dfa44-9329-474e-bd60-601248d6420b from this chassis (sb_readonly=0)
Jan 20 14:44:01 compute-0 ovn_controller[148666]: 2026-01-20T14:44:01Z|00284|binding|INFO|Setting lport 052dfa44-9329-474e-bd60-601248d6420b down in Southbound
Jan 20 14:44:01 compute-0 nova_compute[250018]: 2026-01-20 14:44:01.330 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:44:01 compute-0 ovn_controller[148666]: 2026-01-20T14:44:01Z|00285|binding|INFO|Removing iface tap052dfa44-93 ovn-installed in OVS
Jan 20 14:44:01 compute-0 nova_compute[250018]: 2026-01-20 14:44:01.332 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:44:01 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:44:01.350 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2b:5f:52 10.100.0.9'], port_security=['fa:16:3e:2b:5f:52 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'ad4801ff-a246-41b7-af59-709b9a78601e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-53d0b281-776f-4682-8aaf-098e1d364008', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '28a523cfe06042ff96554913a78e1e3a', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1879c269-0854-40a3-8eb9-b61f97d38545', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a1afefec-2060-4dfb-acbb-1ce14c3a663c, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=052dfa44-9329-474e-bd60-601248d6420b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:44:01 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:44:01.351 160071 INFO neutron.agent.ovn.metadata.agent [-] Port 052dfa44-9329-474e-bd60-601248d6420b in datapath 53d0b281-776f-4682-8aaf-098e1d364008 unbound from our chassis
Jan 20 14:44:01 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:44:01.352 160071 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 53d0b281-776f-4682-8aaf-098e1d364008, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 14:44:01 compute-0 nova_compute[250018]: 2026-01-20 14:44:01.354 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:44:01 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:44:01.353 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[aafabdf9-ebb5-48da-af4b-0248b79bf8d6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:44:01 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:44:01.355 160071 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-53d0b281-776f-4682-8aaf-098e1d364008 namespace which is not needed anymore
Jan 20 14:44:01 compute-0 nova_compute[250018]: 2026-01-20 14:44:01.366 250022 DEBUG nova.compute.manager [req-a87757dd-82ad-4e46-8dc8-308e111a9c0d req-4cfc3f08-df00-4386-a4fd-49b92283fc4a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 50a3e5fd-193e-4c31-a7ce-b29b4ff10849] Received event network-vif-unplugged-09356ee8-6584-422c-b052-c6e0aedc7ab4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:44:01 compute-0 nova_compute[250018]: 2026-01-20 14:44:01.367 250022 DEBUG oslo_concurrency.lockutils [req-a87757dd-82ad-4e46-8dc8-308e111a9c0d req-4cfc3f08-df00-4386-a4fd-49b92283fc4a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "50a3e5fd-193e-4c31-a7ce-b29b4ff10849-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:44:01 compute-0 nova_compute[250018]: 2026-01-20 14:44:01.367 250022 DEBUG oslo_concurrency.lockutils [req-a87757dd-82ad-4e46-8dc8-308e111a9c0d req-4cfc3f08-df00-4386-a4fd-49b92283fc4a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "50a3e5fd-193e-4c31-a7ce-b29b4ff10849-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:44:01 compute-0 nova_compute[250018]: 2026-01-20 14:44:01.367 250022 DEBUG oslo_concurrency.lockutils [req-a87757dd-82ad-4e46-8dc8-308e111a9c0d req-4cfc3f08-df00-4386-a4fd-49b92283fc4a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "50a3e5fd-193e-4c31-a7ce-b29b4ff10849-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:44:01 compute-0 nova_compute[250018]: 2026-01-20 14:44:01.367 250022 DEBUG nova.compute.manager [req-a87757dd-82ad-4e46-8dc8-308e111a9c0d req-4cfc3f08-df00-4386-a4fd-49b92283fc4a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 50a3e5fd-193e-4c31-a7ce-b29b4ff10849] No waiting events found dispatching network-vif-unplugged-09356ee8-6584-422c-b052-c6e0aedc7ab4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:44:01 compute-0 nova_compute[250018]: 2026-01-20 14:44:01.368 250022 DEBUG nova.compute.manager [req-a87757dd-82ad-4e46-8dc8-308e111a9c0d req-4cfc3f08-df00-4386-a4fd-49b92283fc4a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 50a3e5fd-193e-4c31-a7ce-b29b4ff10849] Received event network-vif-unplugged-09356ee8-6584-422c-b052-c6e0aedc7ab4 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 20 14:44:01 compute-0 nova_compute[250018]: 2026-01-20 14:44:01.368 250022 DEBUG nova.compute.manager [req-a87757dd-82ad-4e46-8dc8-308e111a9c0d req-4cfc3f08-df00-4386-a4fd-49b92283fc4a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 50a3e5fd-193e-4c31-a7ce-b29b4ff10849] Received event network-vif-plugged-09356ee8-6584-422c-b052-c6e0aedc7ab4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:44:01 compute-0 nova_compute[250018]: 2026-01-20 14:44:01.368 250022 DEBUG oslo_concurrency.lockutils [req-a87757dd-82ad-4e46-8dc8-308e111a9c0d req-4cfc3f08-df00-4386-a4fd-49b92283fc4a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "50a3e5fd-193e-4c31-a7ce-b29b4ff10849-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:44:01 compute-0 nova_compute[250018]: 2026-01-20 14:44:01.368 250022 DEBUG oslo_concurrency.lockutils [req-a87757dd-82ad-4e46-8dc8-308e111a9c0d req-4cfc3f08-df00-4386-a4fd-49b92283fc4a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "50a3e5fd-193e-4c31-a7ce-b29b4ff10849-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:44:01 compute-0 nova_compute[250018]: 2026-01-20 14:44:01.368 250022 DEBUG oslo_concurrency.lockutils [req-a87757dd-82ad-4e46-8dc8-308e111a9c0d req-4cfc3f08-df00-4386-a4fd-49b92283fc4a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "50a3e5fd-193e-4c31-a7ce-b29b4ff10849-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:44:01 compute-0 nova_compute[250018]: 2026-01-20 14:44:01.368 250022 DEBUG nova.compute.manager [req-a87757dd-82ad-4e46-8dc8-308e111a9c0d req-4cfc3f08-df00-4386-a4fd-49b92283fc4a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 50a3e5fd-193e-4c31-a7ce-b29b4ff10849] No waiting events found dispatching network-vif-plugged-09356ee8-6584-422c-b052-c6e0aedc7ab4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:44:01 compute-0 nova_compute[250018]: 2026-01-20 14:44:01.369 250022 WARNING nova.compute.manager [req-a87757dd-82ad-4e46-8dc8-308e111a9c0d req-4cfc3f08-df00-4386-a4fd-49b92283fc4a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 50a3e5fd-193e-4c31-a7ce-b29b4ff10849] Received unexpected event network-vif-plugged-09356ee8-6584-422c-b052-c6e0aedc7ab4 for instance with vm_state active and task_state deleting.
Jan 20 14:44:01 compute-0 systemd[1]: machine-qemu\x2d37\x2dinstance\x2d00000055.scope: Deactivated successfully.
Jan 20 14:44:01 compute-0 systemd[1]: machine-qemu\x2d37\x2dinstance\x2d00000055.scope: Consumed 12.621s CPU time.
Jan 20 14:44:01 compute-0 systemd-machined[216401]: Machine qemu-37-instance-00000055 terminated.
Jan 20 14:44:01 compute-0 nova_compute[250018]: 2026-01-20 14:44:01.474 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:44:01 compute-0 neutron-haproxy-ovnmeta-53d0b281-776f-4682-8aaf-098e1d364008[299382]: [NOTICE]   (299386) : haproxy version is 2.8.14-c23fe91
Jan 20 14:44:01 compute-0 neutron-haproxy-ovnmeta-53d0b281-776f-4682-8aaf-098e1d364008[299382]: [NOTICE]   (299386) : path to executable is /usr/sbin/haproxy
Jan 20 14:44:01 compute-0 neutron-haproxy-ovnmeta-53d0b281-776f-4682-8aaf-098e1d364008[299382]: [WARNING]  (299386) : Exiting Master process...
Jan 20 14:44:01 compute-0 neutron-haproxy-ovnmeta-53d0b281-776f-4682-8aaf-098e1d364008[299382]: [ALERT]    (299386) : Current worker (299388) exited with code 143 (Terminated)
Jan 20 14:44:01 compute-0 neutron-haproxy-ovnmeta-53d0b281-776f-4682-8aaf-098e1d364008[299382]: [WARNING]  (299386) : All workers exited. Exiting... (0)
Jan 20 14:44:01 compute-0 nova_compute[250018]: 2026-01-20 14:44:01.487 250022 INFO nova.virt.libvirt.driver [-] [instance: ad4801ff-a246-41b7-af59-709b9a78601e] Instance destroyed successfully.
Jan 20 14:44:01 compute-0 nova_compute[250018]: 2026-01-20 14:44:01.490 250022 DEBUG nova.objects.instance [None req-7614a1a0-87a7-4e97-a5da-9e888b4c5c1e 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] Lazy-loading 'resources' on Instance uuid ad4801ff-a246-41b7-af59-709b9a78601e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:44:01 compute-0 systemd[1]: libpod-32c9186f66ae97e98fb5be2110cea78e4411149975837976a405ee089b009f0c.scope: Deactivated successfully.
Jan 20 14:44:01 compute-0 podman[300270]: 2026-01-20 14:44:01.499039752 +0000 UTC m=+0.047102996 container died 32c9186f66ae97e98fb5be2110cea78e4411149975837976a405ee089b009f0c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-53d0b281-776f-4682-8aaf-098e1d364008, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 20 14:44:01 compute-0 nova_compute[250018]: 2026-01-20 14:44:01.522 250022 DEBUG nova.virt.libvirt.vif [None req-7614a1a0-87a7-4e97-a5da-9e888b4c5c1e 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:43:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-1818827013',display_name='tempest-ListServersNegativeTestJSON-server-1818827013-3',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-1818827013-3',id=85,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=2,launched_at=2026-01-20T14:43:48Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='28a523cfe06042ff96554913a78e1e3a',ramdisk_id='',reservation_id='r-s8qopxwt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServersNegativeTestJSON-1080060493',owner_user_name='tempest-ListServersNegativeTestJSON-1080060493-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T14:43:48Z,user_data=None,user_id='2975742546164cad937d13671d17108a',uuid=ad4801ff-a246-41b7-af59-709b9a78601e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "052dfa44-9329-474e-bd60-601248d6420b", "address": "fa:16:3e:2b:5f:52", "network": {"id": "53d0b281-776f-4682-8aaf-098e1d364008", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1516883251-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28a523cfe06042ff96554913a78e1e3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap052dfa44-93", "ovs_interfaceid": "052dfa44-9329-474e-bd60-601248d6420b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 14:44:01 compute-0 nova_compute[250018]: 2026-01-20 14:44:01.522 250022 DEBUG nova.network.os_vif_util [None req-7614a1a0-87a7-4e97-a5da-9e888b4c5c1e 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] Converting VIF {"id": "052dfa44-9329-474e-bd60-601248d6420b", "address": "fa:16:3e:2b:5f:52", "network": {"id": "53d0b281-776f-4682-8aaf-098e1d364008", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1516883251-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28a523cfe06042ff96554913a78e1e3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap052dfa44-93", "ovs_interfaceid": "052dfa44-9329-474e-bd60-601248d6420b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:44:01 compute-0 nova_compute[250018]: 2026-01-20 14:44:01.523 250022 DEBUG nova.network.os_vif_util [None req-7614a1a0-87a7-4e97-a5da-9e888b4c5c1e 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2b:5f:52,bridge_name='br-int',has_traffic_filtering=True,id=052dfa44-9329-474e-bd60-601248d6420b,network=Network(53d0b281-776f-4682-8aaf-098e1d364008),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap052dfa44-93') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:44:01 compute-0 nova_compute[250018]: 2026-01-20 14:44:01.523 250022 DEBUG os_vif [None req-7614a1a0-87a7-4e97-a5da-9e888b4c5c1e 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2b:5f:52,bridge_name='br-int',has_traffic_filtering=True,id=052dfa44-9329-474e-bd60-601248d6420b,network=Network(53d0b281-776f-4682-8aaf-098e1d364008),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap052dfa44-93') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 14:44:01 compute-0 nova_compute[250018]: 2026-01-20 14:44:01.525 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:44:01 compute-0 nova_compute[250018]: 2026-01-20 14:44:01.525 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap052dfa44-93, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:44:01 compute-0 nova_compute[250018]: 2026-01-20 14:44:01.526 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:44:01 compute-0 nova_compute[250018]: 2026-01-20 14:44:01.529 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:44:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-5b037c6d909b77ac0b5af88dbc1accee593f578bf3b11830ebce9d8ca071ce31-merged.mount: Deactivated successfully.
Jan 20 14:44:01 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-32c9186f66ae97e98fb5be2110cea78e4411149975837976a405ee089b009f0c-userdata-shm.mount: Deactivated successfully.
Jan 20 14:44:01 compute-0 nova_compute[250018]: 2026-01-20 14:44:01.532 250022 INFO os_vif [None req-7614a1a0-87a7-4e97-a5da-9e888b4c5c1e 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2b:5f:52,bridge_name='br-int',has_traffic_filtering=True,id=052dfa44-9329-474e-bd60-601248d6420b,network=Network(53d0b281-776f-4682-8aaf-098e1d364008),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap052dfa44-93')
Jan 20 14:44:01 compute-0 podman[300292]: 2026-01-20 14:44:01.538721108 +0000 UTC m=+0.047501628 container create e513b7b24c397cc66de56f7cc867f9fef9e71a6c7a53184de560c8db80d9a440 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_wilbur, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 20 14:44:01 compute-0 podman[300270]: 2026-01-20 14:44:01.546797526 +0000 UTC m=+0.094860770 container cleanup 32c9186f66ae97e98fb5be2110cea78e4411149975837976a405ee089b009f0c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-53d0b281-776f-4682-8aaf-098e1d364008, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 20 14:44:01 compute-0 systemd[1]: Started libpod-conmon-e513b7b24c397cc66de56f7cc867f9fef9e71a6c7a53184de560c8db80d9a440.scope.
Jan 20 14:44:01 compute-0 systemd[1]: libpod-conmon-32c9186f66ae97e98fb5be2110cea78e4411149975837976a405ee089b009f0c.scope: Deactivated successfully.
Jan 20 14:44:01 compute-0 podman[300292]: 2026-01-20 14:44:01.515246282 +0000 UTC m=+0.024026812 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:44:01 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:44:01 compute-0 podman[300292]: 2026-01-20 14:44:01.632418747 +0000 UTC m=+0.141199247 container init e513b7b24c397cc66de56f7cc867f9fef9e71a6c7a53184de560c8db80d9a440 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_wilbur, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 20 14:44:01 compute-0 nova_compute[250018]: 2026-01-20 14:44:01.636 250022 DEBUG nova.network.neutron [-] [instance: 50a3e5fd-193e-4c31-a7ce-b29b4ff10849] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:44:01 compute-0 podman[300292]: 2026-01-20 14:44:01.643034615 +0000 UTC m=+0.151815115 container start e513b7b24c397cc66de56f7cc867f9fef9e71a6c7a53184de560c8db80d9a440 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_wilbur, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 20 14:44:01 compute-0 podman[300341]: 2026-01-20 14:44:01.64581918 +0000 UTC m=+0.059384320 container remove 32c9186f66ae97e98fb5be2110cea78e4411149975837976a405ee089b009f0c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-53d0b281-776f-4682-8aaf-098e1d364008, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 20 14:44:01 compute-0 objective_wilbur[300345]: 167 167
Jan 20 14:44:01 compute-0 systemd[1]: libpod-e513b7b24c397cc66de56f7cc867f9fef9e71a6c7a53184de560c8db80d9a440.scope: Deactivated successfully.
Jan 20 14:44:01 compute-0 conmon[300345]: conmon e513b7b24c397cc66de5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e513b7b24c397cc66de56f7cc867f9fef9e71a6c7a53184de560c8db80d9a440.scope/container/memory.events
Jan 20 14:44:01 compute-0 podman[300292]: 2026-01-20 14:44:01.650771595 +0000 UTC m=+0.159552105 container attach e513b7b24c397cc66de56f7cc867f9fef9e71a6c7a53184de560c8db80d9a440 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_wilbur, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:44:01 compute-0 podman[300292]: 2026-01-20 14:44:01.65207295 +0000 UTC m=+0.160853470 container died e513b7b24c397cc66de56f7cc867f9fef9e71a6c7a53184de560c8db80d9a440 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_wilbur, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 14:44:01 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:44:01.655 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[983b7cbe-bfe1-4adc-8735-5de16997cc8b]: (4, ('Tue Jan 20 02:44:01 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-53d0b281-776f-4682-8aaf-098e1d364008 (32c9186f66ae97e98fb5be2110cea78e4411149975837976a405ee089b009f0c)\n32c9186f66ae97e98fb5be2110cea78e4411149975837976a405ee089b009f0c\nTue Jan 20 02:44:01 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-53d0b281-776f-4682-8aaf-098e1d364008 (32c9186f66ae97e98fb5be2110cea78e4411149975837976a405ee089b009f0c)\n32c9186f66ae97e98fb5be2110cea78e4411149975837976a405ee089b009f0c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:44:01 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:44:01.658 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[4f10c0aa-592c-4edc-84a0-437766e87793]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:44:01 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:44:01.659 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap53d0b281-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:44:01 compute-0 nova_compute[250018]: 2026-01-20 14:44:01.661 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:44:01 compute-0 kernel: tap53d0b281-70: left promiscuous mode
Jan 20 14:44:01 compute-0 nova_compute[250018]: 2026-01-20 14:44:01.663 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:44:01 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:44:01.671 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[c8bd88fb-be14-49ac-a344-b49f7709a9f7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:44:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-292780000f0989852892d2d02137a8167d2fdcb066d20898efe17ff3210da289-merged.mount: Deactivated successfully.
Jan 20 14:44:01 compute-0 nova_compute[250018]: 2026-01-20 14:44:01.684 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:44:01 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:44:01.691 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[59d8b32b-0914-427b-a118-1b8c576d4dee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:44:01 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:44:01.692 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[65209922-8e0d-4e65-a3c7-098b5adca924]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:44:01 compute-0 podman[300292]: 2026-01-20 14:44:01.699518805 +0000 UTC m=+0.208299305 container remove e513b7b24c397cc66de56f7cc867f9fef9e71a6c7a53184de560c8db80d9a440 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_wilbur, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 20 14:44:01 compute-0 systemd[1]: libpod-conmon-e513b7b24c397cc66de56f7cc867f9fef9e71a6c7a53184de560c8db80d9a440.scope: Deactivated successfully.
Jan 20 14:44:01 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1670: 321 pgs: 321 active+clean; 260 MiB data, 774 MiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 1.8 MiB/s wr, 273 op/s
Jan 20 14:44:01 compute-0 nova_compute[250018]: 2026-01-20 14:44:01.710 250022 INFO nova.compute.manager [-] [instance: 50a3e5fd-193e-4c31-a7ce-b29b4ff10849] Took 1.40 seconds to deallocate network for instance.
Jan 20 14:44:01 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:44:01.711 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[ce8712e2-7d7d-430a-91fe-61d27cec2185]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 616831, 'reachable_time': 33326, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 300374, 'error': None, 'target': 'ovnmeta-53d0b281-776f-4682-8aaf-098e1d364008', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:44:01 compute-0 systemd[1]: run-netns-ovnmeta\x2d53d0b281\x2d776f\x2d4682\x2d8aaf\x2d098e1d364008.mount: Deactivated successfully.
Jan 20 14:44:01 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:44:01.716 160517 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-53d0b281-776f-4682-8aaf-098e1d364008 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 14:44:01 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:44:01.716 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[a2540d57-890b-4d72-9402-c86a906c6d90]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:44:01 compute-0 nova_compute[250018]: 2026-01-20 14:44:01.725 250022 DEBUG nova.compute.manager [req-4f2bcee2-3e5f-4538-928d-4ae4823729af req-c07c756c-689e-4c6b-ab48-474c37b69ff2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 50a3e5fd-193e-4c31-a7ce-b29b4ff10849] Received event network-vif-deleted-09356ee8-6584-422c-b052-c6e0aedc7ab4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:44:01 compute-0 nova_compute[250018]: 2026-01-20 14:44:01.805 250022 DEBUG oslo_concurrency.lockutils [None req-1570069f-8b84-4255-96e2-f12a4f8b518c 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:44:01 compute-0 nova_compute[250018]: 2026-01-20 14:44:01.806 250022 DEBUG oslo_concurrency.lockutils [None req-1570069f-8b84-4255-96e2-f12a4f8b518c 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:44:01 compute-0 podman[300384]: 2026-01-20 14:44:01.864028263 +0000 UTC m=+0.045809372 container create b75f5b498e83c687dda609d8a45a46b371e790556c06b132123335a1489af056 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_kowalevski, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:44:01 compute-0 systemd[1]: Started libpod-conmon-b75f5b498e83c687dda609d8a45a46b371e790556c06b132123335a1489af056.scope.
Jan 20 14:44:01 compute-0 podman[300384]: 2026-01-20 14:44:01.842686595 +0000 UTC m=+0.024467714 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:44:01 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:44:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05629201e1a07f00e0e75bee89ece0a69cfad6405e77f29d25e7b347bf065834/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:44:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05629201e1a07f00e0e75bee89ece0a69cfad6405e77f29d25e7b347bf065834/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:44:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05629201e1a07f00e0e75bee89ece0a69cfad6405e77f29d25e7b347bf065834/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:44:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05629201e1a07f00e0e75bee89ece0a69cfad6405e77f29d25e7b347bf065834/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:44:01 compute-0 podman[300384]: 2026-01-20 14:44:01.956690134 +0000 UTC m=+0.138471283 container init b75f5b498e83c687dda609d8a45a46b371e790556c06b132123335a1489af056 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_kowalevski, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 14:44:01 compute-0 podman[300384]: 2026-01-20 14:44:01.966780937 +0000 UTC m=+0.148562046 container start b75f5b498e83c687dda609d8a45a46b371e790556c06b132123335a1489af056 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_kowalevski, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True)
Jan 20 14:44:01 compute-0 podman[300384]: 2026-01-20 14:44:01.970667613 +0000 UTC m=+0.152448772 container attach b75f5b498e83c687dda609d8a45a46b371e790556c06b132123335a1489af056 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_kowalevski, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:44:02 compute-0 nova_compute[250018]: 2026-01-20 14:44:02.051 250022 INFO nova.virt.libvirt.driver [None req-7614a1a0-87a7-4e97-a5da-9e888b4c5c1e 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] [instance: ad4801ff-a246-41b7-af59-709b9a78601e] Deleting instance files /var/lib/nova/instances/ad4801ff-a246-41b7-af59-709b9a78601e_del
Jan 20 14:44:02 compute-0 nova_compute[250018]: 2026-01-20 14:44:02.052 250022 INFO nova.virt.libvirt.driver [None req-7614a1a0-87a7-4e97-a5da-9e888b4c5c1e 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] [instance: ad4801ff-a246-41b7-af59-709b9a78601e] Deletion of /var/lib/nova/instances/ad4801ff-a246-41b7-af59-709b9a78601e_del complete
Jan 20 14:44:02 compute-0 nova_compute[250018]: 2026-01-20 14:44:02.098 250022 DEBUG oslo_concurrency.processutils [None req-1570069f-8b84-4255-96e2-f12a4f8b518c 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:44:02 compute-0 nova_compute[250018]: 2026-01-20 14:44:02.131 250022 INFO nova.compute.manager [None req-7614a1a0-87a7-4e97-a5da-9e888b4c5c1e 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] [instance: ad4801ff-a246-41b7-af59-709b9a78601e] Took 0.88 seconds to destroy the instance on the hypervisor.
Jan 20 14:44:02 compute-0 nova_compute[250018]: 2026-01-20 14:44:02.132 250022 DEBUG oslo.service.loopingcall [None req-7614a1a0-87a7-4e97-a5da-9e888b4c5c1e 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 14:44:02 compute-0 nova_compute[250018]: 2026-01-20 14:44:02.132 250022 DEBUG nova.compute.manager [-] [instance: ad4801ff-a246-41b7-af59-709b9a78601e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 14:44:02 compute-0 nova_compute[250018]: 2026-01-20 14:44:02.132 250022 DEBUG nova.network.neutron [-] [instance: ad4801ff-a246-41b7-af59-709b9a78601e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 14:44:02 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:44:02 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:44:02 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:44:02.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:44:02 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/4262766353' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:44:02 compute-0 ceph-mon[74360]: pgmap v1670: 321 pgs: 321 active+clean; 260 MiB data, 774 MiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 1.8 MiB/s wr, 273 op/s
Jan 20 14:44:02 compute-0 nova_compute[250018]: 2026-01-20 14:44:02.253 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:44:02 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:44:02 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3245543980' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:44:02 compute-0 nova_compute[250018]: 2026-01-20 14:44:02.545 250022 DEBUG oslo_concurrency.processutils [None req-1570069f-8b84-4255-96e2-f12a4f8b518c 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:44:02 compute-0 nova_compute[250018]: 2026-01-20 14:44:02.552 250022 DEBUG nova.compute.provider_tree [None req-1570069f-8b84-4255-96e2-f12a4f8b518c 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:44:02 compute-0 nova_compute[250018]: 2026-01-20 14:44:02.571 250022 DEBUG nova.scheduler.client.report [None req-1570069f-8b84-4255-96e2-f12a4f8b518c 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:44:02 compute-0 nova_compute[250018]: 2026-01-20 14:44:02.602 250022 DEBUG oslo_concurrency.lockutils [None req-1570069f-8b84-4255-96e2-f12a4f8b518c 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.796s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:44:02 compute-0 nova_compute[250018]: 2026-01-20 14:44:02.644 250022 INFO nova.scheduler.client.report [None req-1570069f-8b84-4255-96e2-f12a4f8b518c 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] Deleted allocations for instance 50a3e5fd-193e-4c31-a7ce-b29b4ff10849
Jan 20 14:44:02 compute-0 objective_kowalevski[300401]: {
Jan 20 14:44:02 compute-0 objective_kowalevski[300401]:     "afc3740a-ccee-46f8-b343-22648cc89376": {
Jan 20 14:44:02 compute-0 objective_kowalevski[300401]:         "ceph_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 14:44:02 compute-0 objective_kowalevski[300401]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 20 14:44:02 compute-0 objective_kowalevski[300401]:         "osd_id": 0,
Jan 20 14:44:02 compute-0 objective_kowalevski[300401]:         "osd_uuid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 14:44:02 compute-0 objective_kowalevski[300401]:         "type": "bluestore"
Jan 20 14:44:02 compute-0 objective_kowalevski[300401]:     }
Jan 20 14:44:02 compute-0 objective_kowalevski[300401]: }
Jan 20 14:44:02 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:44:02 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:44:02 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:44:02.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:44:02 compute-0 systemd[1]: libpod-b75f5b498e83c687dda609d8a45a46b371e790556c06b132123335a1489af056.scope: Deactivated successfully.
Jan 20 14:44:02 compute-0 podman[300384]: 2026-01-20 14:44:02.80529852 +0000 UTC m=+0.987079619 container died b75f5b498e83c687dda609d8a45a46b371e790556c06b132123335a1489af056 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_kowalevski, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:44:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-05629201e1a07f00e0e75bee89ece0a69cfad6405e77f29d25e7b347bf065834-merged.mount: Deactivated successfully.
Jan 20 14:44:02 compute-0 nova_compute[250018]: 2026-01-20 14:44:02.840 250022 DEBUG oslo_concurrency.lockutils [None req-1570069f-8b84-4255-96e2-f12a4f8b518c 7382dbb4dfeb47a08ece33c6f113d77c 9fbbc94ed3bb41b1a060e5a7e1099ccf - - default default] Lock "50a3e5fd-193e-4c31-a7ce-b29b4ff10849" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.337s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:44:02 compute-0 podman[300384]: 2026-01-20 14:44:02.859692804 +0000 UTC m=+1.041473903 container remove b75f5b498e83c687dda609d8a45a46b371e790556c06b132123335a1489af056 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_kowalevski, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 20 14:44:02 compute-0 systemd[1]: libpod-conmon-b75f5b498e83c687dda609d8a45a46b371e790556c06b132123335a1489af056.scope: Deactivated successfully.
Jan 20 14:44:02 compute-0 sudo[300192]: pam_unix(sudo:session): session closed for user root
Jan 20 14:44:02 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 14:44:02 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:44:02 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 14:44:02 compute-0 nova_compute[250018]: 2026-01-20 14:44:02.915 250022 DEBUG nova.network.neutron [-] [instance: ad4801ff-a246-41b7-af59-709b9a78601e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:44:02 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:44:02 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 4ad13081-ffe6-4672-9f26-34b0c7b939ec does not exist
Jan 20 14:44:02 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev b74ba8b8-12bf-457e-8441-084823791e92 does not exist
Jan 20 14:44:02 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 33cc083e-6c0b-4643-862e-ffb8dbe7ec96 does not exist
Jan 20 14:44:02 compute-0 nova_compute[250018]: 2026-01-20 14:44:02.937 250022 INFO nova.compute.manager [-] [instance: ad4801ff-a246-41b7-af59-709b9a78601e] Took 0.81 seconds to deallocate network for instance.
Jan 20 14:44:02 compute-0 sudo[300456]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:44:02 compute-0 sudo[300456]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:44:02 compute-0 nova_compute[250018]: 2026-01-20 14:44:02.995 250022 DEBUG oslo_concurrency.lockutils [None req-7614a1a0-87a7-4e97-a5da-9e888b4c5c1e 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:44:02 compute-0 nova_compute[250018]: 2026-01-20 14:44:02.995 250022 DEBUG oslo_concurrency.lockutils [None req-7614a1a0-87a7-4e97-a5da-9e888b4c5c1e 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:44:02 compute-0 sudo[300456]: pam_unix(sudo:session): session closed for user root
Jan 20 14:44:03 compute-0 sudo[300481]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 14:44:03 compute-0 sudo[300481]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:44:03 compute-0 sudo[300481]: pam_unix(sudo:session): session closed for user root
Jan 20 14:44:03 compute-0 nova_compute[250018]: 2026-01-20 14:44:03.090 250022 DEBUG oslo_concurrency.processutils [None req-7614a1a0-87a7-4e97-a5da-9e888b4c5c1e 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:44:03 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:44:03 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/14759032' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:44:03 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3245543980' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:44:03 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:44:03 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:44:03 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:44:03 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/736157011' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:44:03 compute-0 nova_compute[250018]: 2026-01-20 14:44:03.519 250022 DEBUG oslo_concurrency.processutils [None req-7614a1a0-87a7-4e97-a5da-9e888b4c5c1e 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:44:03 compute-0 nova_compute[250018]: 2026-01-20 14:44:03.526 250022 DEBUG nova.compute.provider_tree [None req-7614a1a0-87a7-4e97-a5da-9e888b4c5c1e 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:44:03 compute-0 nova_compute[250018]: 2026-01-20 14:44:03.558 250022 DEBUG nova.scheduler.client.report [None req-7614a1a0-87a7-4e97-a5da-9e888b4c5c1e 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:44:03 compute-0 nova_compute[250018]: 2026-01-20 14:44:03.592 250022 DEBUG oslo_concurrency.lockutils [None req-7614a1a0-87a7-4e97-a5da-9e888b4c5c1e 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.597s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:44:03 compute-0 nova_compute[250018]: 2026-01-20 14:44:03.636 250022 INFO nova.scheduler.client.report [None req-7614a1a0-87a7-4e97-a5da-9e888b4c5c1e 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] Deleted allocations for instance ad4801ff-a246-41b7-af59-709b9a78601e
Jan 20 14:44:03 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1671: 321 pgs: 321 active+clean; 209 MiB data, 770 MiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 2.0 MiB/s wr, 253 op/s
Jan 20 14:44:03 compute-0 nova_compute[250018]: 2026-01-20 14:44:03.772 250022 DEBUG oslo_concurrency.lockutils [None req-7614a1a0-87a7-4e97-a5da-9e888b4c5c1e 2975742546164cad937d13671d17108a 28a523cfe06042ff96554913a78e1e3a - - default default] Lock "ad4801ff-a246-41b7-af59-709b9a78601e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.523s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:44:03 compute-0 nova_compute[250018]: 2026-01-20 14:44:03.986 250022 DEBUG nova.compute.manager [req-47298a8d-7e95-4330-bf6f-3d037c225e6f req-88010da4-4e19-4f50-826f-7f315d6886ad 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: ad4801ff-a246-41b7-af59-709b9a78601e] Received event network-vif-deleted-052dfa44-9329-474e-bd60-601248d6420b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:44:04 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:44:04 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:44:04 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:44:04.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:44:04 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/736157011' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:44:04 compute-0 ceph-mon[74360]: pgmap v1671: 321 pgs: 321 active+clean; 209 MiB data, 770 MiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 2.0 MiB/s wr, 253 op/s
Jan 20 14:44:04 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:44:04 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:44:04 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:44:04.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:44:05 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/3605756288' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:44:05 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1672: 321 pgs: 321 active+clean; 123 MiB data, 731 MiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 5.2 MiB/s wr, 292 op/s
Jan 20 14:44:06 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:44:06 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:44:06 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:44:06.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:44:06 compute-0 ceph-mon[74360]: pgmap v1672: 321 pgs: 321 active+clean; 123 MiB data, 731 MiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 5.2 MiB/s wr, 292 op/s
Jan 20 14:44:06 compute-0 nova_compute[250018]: 2026-01-20 14:44:06.529 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:44:06 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:44:06 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:44:06 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:44:06.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:44:06 compute-0 sudo[300530]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:44:06 compute-0 sudo[300530]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:44:06 compute-0 sudo[300530]: pam_unix(sudo:session): session closed for user root
Jan 20 14:44:06 compute-0 sudo[300555]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:44:06 compute-0 sudo[300555]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:44:06 compute-0 sudo[300555]: pam_unix(sudo:session): session closed for user root
Jan 20 14:44:07 compute-0 nova_compute[250018]: 2026-01-20 14:44:07.254 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:44:07 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1673: 321 pgs: 321 active+clean; 134 MiB data, 729 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 5.6 MiB/s wr, 238 op/s
Jan 20 14:44:07 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/847151595' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:44:07 compute-0 sshd-session[300580]: Invalid user test from 157.245.78.139 port 41368
Jan 20 14:44:08 compute-0 sshd-session[300580]: Connection closed by invalid user test 157.245.78.139 port 41368 [preauth]
Jan 20 14:44:08 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:44:08 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:44:08 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:44:08.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:44:08 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:44:08 compute-0 nova_compute[250018]: 2026-01-20 14:44:08.527 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:44:08 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:44:08 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:44:08 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:44:08.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:44:08 compute-0 nova_compute[250018]: 2026-01-20 14:44:08.807 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:44:08 compute-0 ceph-mon[74360]: pgmap v1673: 321 pgs: 321 active+clean; 134 MiB data, 729 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 5.6 MiB/s wr, 238 op/s
Jan 20 14:44:08 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/674612077' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:44:09 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1674: 321 pgs: 321 active+clean; 126 MiB data, 717 MiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 4.8 MiB/s wr, 230 op/s
Jan 20 14:44:09 compute-0 ceph-mon[74360]: pgmap v1674: 321 pgs: 321 active+clean; 126 MiB data, 717 MiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 4.8 MiB/s wr, 230 op/s
Jan 20 14:44:10 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:44:10 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:44:10 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:44:10.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:44:10 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:44:10 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:44:10 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:44:10.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:44:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 14:44:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:44:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 20 14:44:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:44:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0018215353387306904 of space, bias 1.0, pg target 0.5464606016192072 quantized to 32 (current 32)
Jan 20 14:44:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:44:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 20 14:44:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:44:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:44:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:44:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 20 14:44:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:44:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 20 14:44:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:44:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:44:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:44:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 20 14:44:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:44:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 20 14:44:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:44:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:44:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:44:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 20 14:44:11 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/1216611789' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:44:11 compute-0 nova_compute[250018]: 2026-01-20 14:44:11.533 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:44:11 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1675: 321 pgs: 321 active+clean; 104 MiB data, 710 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 4.5 MiB/s wr, 269 op/s
Jan 20 14:44:12 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:44:12 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:44:12 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:44:12.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:44:12 compute-0 nova_compute[250018]: 2026-01-20 14:44:12.309 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:44:12 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:44:12 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:44:12 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:44:12.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:44:12 compute-0 ceph-mon[74360]: pgmap v1675: 321 pgs: 321 active+clean; 104 MiB data, 710 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 4.5 MiB/s wr, 269 op/s
Jan 20 14:44:13 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:44:13 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1676: 321 pgs: 321 active+clean; 88 MiB data, 700 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 3.8 MiB/s wr, 250 op/s
Jan 20 14:44:14 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:44:14 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:44:14 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:44:14.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:44:14 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/1650611202' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:44:14 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/1650611202' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:44:14 compute-0 ceph-mon[74360]: pgmap v1676: 321 pgs: 321 active+clean; 88 MiB data, 700 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 3.8 MiB/s wr, 250 op/s
Jan 20 14:44:14 compute-0 nova_compute[250018]: 2026-01-20 14:44:14.741 250022 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768920239.737268, 50a3e5fd-193e-4c31-a7ce-b29b4ff10849 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:44:14 compute-0 nova_compute[250018]: 2026-01-20 14:44:14.741 250022 INFO nova.compute.manager [-] [instance: 50a3e5fd-193e-4c31-a7ce-b29b4ff10849] VM Stopped (Lifecycle Event)
Jan 20 14:44:14 compute-0 nova_compute[250018]: 2026-01-20 14:44:14.779 250022 DEBUG nova.compute.manager [None req-07b4d940-1724-4606-966b-6c758cbd8781 - - - - - -] [instance: 50a3e5fd-193e-4c31-a7ce-b29b4ff10849] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:44:14 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:44:14 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:44:14 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:44:14.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:44:15 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1677: 321 pgs: 321 active+clean; 88 MiB data, 700 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 3.6 MiB/s wr, 263 op/s
Jan 20 14:44:16 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:44:16 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:44:16 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:44:16.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:44:16 compute-0 ceph-mon[74360]: pgmap v1677: 321 pgs: 321 active+clean; 88 MiB data, 700 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 3.6 MiB/s wr, 263 op/s
Jan 20 14:44:16 compute-0 nova_compute[250018]: 2026-01-20 14:44:16.487 250022 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768920241.4845333, ad4801ff-a246-41b7-af59-709b9a78601e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:44:16 compute-0 nova_compute[250018]: 2026-01-20 14:44:16.487 250022 INFO nova.compute.manager [-] [instance: ad4801ff-a246-41b7-af59-709b9a78601e] VM Stopped (Lifecycle Event)
Jan 20 14:44:16 compute-0 nova_compute[250018]: 2026-01-20 14:44:16.524 250022 DEBUG nova.compute.manager [None req-2747de3d-3d5e-4cae-8a59-a89c4ca9d844 - - - - - -] [instance: ad4801ff-a246-41b7-af59-709b9a78601e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:44:16 compute-0 nova_compute[250018]: 2026-01-20 14:44:16.535 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:44:16 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:44:16 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:44:16 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:44:16.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:44:16 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:44:16.950 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=27, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=26) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:44:16 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:44:16.951 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 14:44:16 compute-0 nova_compute[250018]: 2026-01-20 14:44:16.984 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:44:17 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/2953342997' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:44:17 compute-0 nova_compute[250018]: 2026-01-20 14:44:17.310 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:44:17 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1678: 321 pgs: 321 active+clean; 88 MiB data, 700 MiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 455 KiB/s wr, 174 op/s
Jan 20 14:44:18 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:44:18 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:44:18 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:44:18.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:44:18 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:44:18 compute-0 ceph-mon[74360]: pgmap v1678: 321 pgs: 321 active+clean; 88 MiB data, 700 MiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 455 KiB/s wr, 174 op/s
Jan 20 14:44:18 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:44:18 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:44:18 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:44:18.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:44:19 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1679: 321 pgs: 321 active+clean; 124 MiB data, 716 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 1.3 MiB/s wr, 141 op/s
Jan 20 14:44:20 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:44:20 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:44:20 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:44:20.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:44:20 compute-0 podman[300591]: 2026-01-20 14:44:20.479599302 +0000 UTC m=+0.060361507 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:44:20 compute-0 podman[300590]: 2026-01-20 14:44:20.514863448 +0000 UTC m=+0.095515490 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS)
Jan 20 14:44:20 compute-0 ceph-mon[74360]: pgmap v1679: 321 pgs: 321 active+clean; 124 MiB data, 716 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 1.3 MiB/s wr, 141 op/s
Jan 20 14:44:20 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:44:20 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:44:20 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:44:20.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:44:21 compute-0 nova_compute[250018]: 2026-01-20 14:44:21.539 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:44:21 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/805797725' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:44:21 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1680: 321 pgs: 321 active+clean; 129 MiB data, 718 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 1.6 MiB/s wr, 143 op/s
Jan 20 14:44:22 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:44:22 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:44:22 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:44:22.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:44:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:44:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:44:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:44:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:44:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:44:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:44:22 compute-0 nova_compute[250018]: 2026-01-20 14:44:22.603 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:44:22 compute-0 ceph-mon[74360]: pgmap v1680: 321 pgs: 321 active+clean; 129 MiB data, 718 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 1.6 MiB/s wr, 143 op/s
Jan 20 14:44:22 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/723143886' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:44:22 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:44:22 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:44:22 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:44:22.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:44:22 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:44:22.952 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=367c1a2c-b16a-4828-ab5a-626bb50023b4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '27'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:44:23 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:44:23 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1681: 321 pgs: 321 active+clean; 134 MiB data, 720 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 103 op/s
Jan 20 14:44:23 compute-0 ceph-mon[74360]: pgmap v1681: 321 pgs: 321 active+clean; 134 MiB data, 720 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 103 op/s
Jan 20 14:44:24 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:44:24 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:44:24 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:44:24.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:44:24 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:44:24 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:44:24 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:44:24.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:44:25 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1682: 321 pgs: 321 active+clean; 145 MiB data, 730 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.7 MiB/s wr, 135 op/s
Jan 20 14:44:26 compute-0 nova_compute[250018]: 2026-01-20 14:44:26.066 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:44:26 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:44:26 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:44:26 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:44:26.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:44:26 compute-0 ceph-mon[74360]: pgmap v1682: 321 pgs: 321 active+clean; 145 MiB data, 730 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.7 MiB/s wr, 135 op/s
Jan 20 14:44:26 compute-0 nova_compute[250018]: 2026-01-20 14:44:26.541 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:44:26 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:44:26 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:44:26 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:44:26.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:44:27 compute-0 sudo[300633]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:44:27 compute-0 sudo[300633]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:44:27 compute-0 sudo[300633]: pam_unix(sudo:session): session closed for user root
Jan 20 14:44:27 compute-0 nova_compute[250018]: 2026-01-20 14:44:27.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:44:27 compute-0 sudo[300658]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:44:27 compute-0 sudo[300658]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:44:27 compute-0 sudo[300658]: pam_unix(sudo:session): session closed for user root
Jan 20 14:44:27 compute-0 nova_compute[250018]: 2026-01-20 14:44:27.657 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:44:27 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1683: 321 pgs: 321 active+clean; 149 MiB data, 733 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 3.0 MiB/s wr, 111 op/s
Jan 20 14:44:27 compute-0 ceph-mon[74360]: pgmap v1683: 321 pgs: 321 active+clean; 149 MiB data, 733 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 3.0 MiB/s wr, 111 op/s
Jan 20 14:44:28 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:44:28 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:44:28 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:44:28 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:44:28.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:44:28 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:44:28 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:44:28 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:44:28.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:44:29 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1684: 321 pgs: 321 active+clean; 161 MiB data, 745 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 155 op/s
Jan 20 14:44:29 compute-0 ceph-mon[74360]: pgmap v1684: 321 pgs: 321 active+clean; 161 MiB data, 745 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 155 op/s
Jan 20 14:44:30 compute-0 nova_compute[250018]: 2026-01-20 14:44:30.164 250022 DEBUG oslo_concurrency.lockutils [None req-a793309f-6546-4a6f-be3a-6f4fc990ee50 5c3328d5d79641b0bc4d2f97582ca3cf 56a999e2b19840cab7a889876343a7db - - default default] Acquiring lock "f9373561-17a5-410b-aa05-07c34a291c53" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:44:30 compute-0 nova_compute[250018]: 2026-01-20 14:44:30.164 250022 DEBUG oslo_concurrency.lockutils [None req-a793309f-6546-4a6f-be3a-6f4fc990ee50 5c3328d5d79641b0bc4d2f97582ca3cf 56a999e2b19840cab7a889876343a7db - - default default] Lock "f9373561-17a5-410b-aa05-07c34a291c53" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:44:30 compute-0 nova_compute[250018]: 2026-01-20 14:44:30.182 250022 DEBUG nova.compute.manager [None req-a793309f-6546-4a6f-be3a-6f4fc990ee50 5c3328d5d79641b0bc4d2f97582ca3cf 56a999e2b19840cab7a889876343a7db - - default default] [instance: f9373561-17a5-410b-aa05-07c34a291c53] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 14:44:30 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:44:30 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:44:30 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:44:30.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:44:30 compute-0 nova_compute[250018]: 2026-01-20 14:44:30.262 250022 DEBUG oslo_concurrency.lockutils [None req-a793309f-6546-4a6f-be3a-6f4fc990ee50 5c3328d5d79641b0bc4d2f97582ca3cf 56a999e2b19840cab7a889876343a7db - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:44:30 compute-0 nova_compute[250018]: 2026-01-20 14:44:30.263 250022 DEBUG oslo_concurrency.lockutils [None req-a793309f-6546-4a6f-be3a-6f4fc990ee50 5c3328d5d79641b0bc4d2f97582ca3cf 56a999e2b19840cab7a889876343a7db - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:44:30 compute-0 nova_compute[250018]: 2026-01-20 14:44:30.270 250022 DEBUG nova.virt.hardware [None req-a793309f-6546-4a6f-be3a-6f4fc990ee50 5c3328d5d79641b0bc4d2f97582ca3cf 56a999e2b19840cab7a889876343a7db - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 14:44:30 compute-0 nova_compute[250018]: 2026-01-20 14:44:30.270 250022 INFO nova.compute.claims [None req-a793309f-6546-4a6f-be3a-6f4fc990ee50 5c3328d5d79641b0bc4d2f97582ca3cf 56a999e2b19840cab7a889876343a7db - - default default] [instance: f9373561-17a5-410b-aa05-07c34a291c53] Claim successful on node compute-0.ctlplane.example.com
Jan 20 14:44:30 compute-0 nova_compute[250018]: 2026-01-20 14:44:30.374 250022 DEBUG oslo_concurrency.processutils [None req-a793309f-6546-4a6f-be3a-6f4fc990ee50 5c3328d5d79641b0bc4d2f97582ca3cf 56a999e2b19840cab7a889876343a7db - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:44:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:44:30.756 160071 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:44:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:44:30.757 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:44:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:44:30.757 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:44:30 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:44:30 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3258013465' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:44:30 compute-0 nova_compute[250018]: 2026-01-20 14:44:30.811 250022 DEBUG oslo_concurrency.processutils [None req-a793309f-6546-4a6f-be3a-6f4fc990ee50 5c3328d5d79641b0bc4d2f97582ca3cf 56a999e2b19840cab7a889876343a7db - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:44:30 compute-0 nova_compute[250018]: 2026-01-20 14:44:30.818 250022 DEBUG nova.compute.provider_tree [None req-a793309f-6546-4a6f-be3a-6f4fc990ee50 5c3328d5d79641b0bc4d2f97582ca3cf 56a999e2b19840cab7a889876343a7db - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:44:30 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/3024534450' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:44:30 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3258013465' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:44:30 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:44:30 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:44:30 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:44:30.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:44:30 compute-0 nova_compute[250018]: 2026-01-20 14:44:30.837 250022 DEBUG nova.scheduler.client.report [None req-a793309f-6546-4a6f-be3a-6f4fc990ee50 5c3328d5d79641b0bc4d2f97582ca3cf 56a999e2b19840cab7a889876343a7db - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:44:30 compute-0 nova_compute[250018]: 2026-01-20 14:44:30.861 250022 DEBUG oslo_concurrency.lockutils [None req-a793309f-6546-4a6f-be3a-6f4fc990ee50 5c3328d5d79641b0bc4d2f97582ca3cf 56a999e2b19840cab7a889876343a7db - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.598s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:44:30 compute-0 nova_compute[250018]: 2026-01-20 14:44:30.862 250022 DEBUG nova.compute.manager [None req-a793309f-6546-4a6f-be3a-6f4fc990ee50 5c3328d5d79641b0bc4d2f97582ca3cf 56a999e2b19840cab7a889876343a7db - - default default] [instance: f9373561-17a5-410b-aa05-07c34a291c53] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 14:44:30 compute-0 nova_compute[250018]: 2026-01-20 14:44:30.919 250022 DEBUG nova.compute.manager [None req-a793309f-6546-4a6f-be3a-6f4fc990ee50 5c3328d5d79641b0bc4d2f97582ca3cf 56a999e2b19840cab7a889876343a7db - - default default] [instance: f9373561-17a5-410b-aa05-07c34a291c53] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 20 14:44:30 compute-0 nova_compute[250018]: 2026-01-20 14:44:30.920 250022 DEBUG nova.network.neutron [None req-a793309f-6546-4a6f-be3a-6f4fc990ee50 5c3328d5d79641b0bc4d2f97582ca3cf 56a999e2b19840cab7a889876343a7db - - default default] [instance: f9373561-17a5-410b-aa05-07c34a291c53] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 14:44:30 compute-0 nova_compute[250018]: 2026-01-20 14:44:30.934 250022 INFO nova.virt.libvirt.driver [None req-a793309f-6546-4a6f-be3a-6f4fc990ee50 5c3328d5d79641b0bc4d2f97582ca3cf 56a999e2b19840cab7a889876343a7db - - default default] [instance: f9373561-17a5-410b-aa05-07c34a291c53] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 14:44:30 compute-0 nova_compute[250018]: 2026-01-20 14:44:30.951 250022 DEBUG nova.compute.manager [None req-a793309f-6546-4a6f-be3a-6f4fc990ee50 5c3328d5d79641b0bc4d2f97582ca3cf 56a999e2b19840cab7a889876343a7db - - default default] [instance: f9373561-17a5-410b-aa05-07c34a291c53] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 14:44:31 compute-0 nova_compute[250018]: 2026-01-20 14:44:31.041 250022 DEBUG nova.compute.manager [None req-a793309f-6546-4a6f-be3a-6f4fc990ee50 5c3328d5d79641b0bc4d2f97582ca3cf 56a999e2b19840cab7a889876343a7db - - default default] [instance: f9373561-17a5-410b-aa05-07c34a291c53] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 14:44:31 compute-0 nova_compute[250018]: 2026-01-20 14:44:31.042 250022 DEBUG nova.virt.libvirt.driver [None req-a793309f-6546-4a6f-be3a-6f4fc990ee50 5c3328d5d79641b0bc4d2f97582ca3cf 56a999e2b19840cab7a889876343a7db - - default default] [instance: f9373561-17a5-410b-aa05-07c34a291c53] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 14:44:31 compute-0 nova_compute[250018]: 2026-01-20 14:44:31.042 250022 INFO nova.virt.libvirt.driver [None req-a793309f-6546-4a6f-be3a-6f4fc990ee50 5c3328d5d79641b0bc4d2f97582ca3cf 56a999e2b19840cab7a889876343a7db - - default default] [instance: f9373561-17a5-410b-aa05-07c34a291c53] Creating image(s)
Jan 20 14:44:31 compute-0 nova_compute[250018]: 2026-01-20 14:44:31.070 250022 DEBUG nova.storage.rbd_utils [None req-a793309f-6546-4a6f-be3a-6f4fc990ee50 5c3328d5d79641b0bc4d2f97582ca3cf 56a999e2b19840cab7a889876343a7db - - default default] rbd image f9373561-17a5-410b-aa05-07c34a291c53_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:44:31 compute-0 nova_compute[250018]: 2026-01-20 14:44:31.098 250022 DEBUG nova.storage.rbd_utils [None req-a793309f-6546-4a6f-be3a-6f4fc990ee50 5c3328d5d79641b0bc4d2f97582ca3cf 56a999e2b19840cab7a889876343a7db - - default default] rbd image f9373561-17a5-410b-aa05-07c34a291c53_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:44:31 compute-0 nova_compute[250018]: 2026-01-20 14:44:31.127 250022 DEBUG nova.storage.rbd_utils [None req-a793309f-6546-4a6f-be3a-6f4fc990ee50 5c3328d5d79641b0bc4d2f97582ca3cf 56a999e2b19840cab7a889876343a7db - - default default] rbd image f9373561-17a5-410b-aa05-07c34a291c53_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:44:31 compute-0 nova_compute[250018]: 2026-01-20 14:44:31.132 250022 DEBUG oslo_concurrency.processutils [None req-a793309f-6546-4a6f-be3a-6f4fc990ee50 5c3328d5d79641b0bc4d2f97582ca3cf 56a999e2b19840cab7a889876343a7db - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:44:31 compute-0 nova_compute[250018]: 2026-01-20 14:44:31.168 250022 DEBUG nova.policy [None req-a793309f-6546-4a6f-be3a-6f4fc990ee50 5c3328d5d79641b0bc4d2f97582ca3cf 56a999e2b19840cab7a889876343a7db - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5c3328d5d79641b0bc4d2f97582ca3cf', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '56a999e2b19840cab7a889876343a7db', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 20 14:44:31 compute-0 nova_compute[250018]: 2026-01-20 14:44:31.171 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:44:31 compute-0 nova_compute[250018]: 2026-01-20 14:44:31.172 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:44:31 compute-0 nova_compute[250018]: 2026-01-20 14:44:31.202 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:44:31 compute-0 nova_compute[250018]: 2026-01-20 14:44:31.202 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:44:31 compute-0 nova_compute[250018]: 2026-01-20 14:44:31.203 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:44:31 compute-0 nova_compute[250018]: 2026-01-20 14:44:31.203 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 14:44:31 compute-0 nova_compute[250018]: 2026-01-20 14:44:31.203 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:44:31 compute-0 nova_compute[250018]: 2026-01-20 14:44:31.227 250022 DEBUG oslo_concurrency.processutils [None req-a793309f-6546-4a6f-be3a-6f4fc990ee50 5c3328d5d79641b0bc4d2f97582ca3cf 56a999e2b19840cab7a889876343a7db - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:44:31 compute-0 nova_compute[250018]: 2026-01-20 14:44:31.228 250022 DEBUG oslo_concurrency.lockutils [None req-a793309f-6546-4a6f-be3a-6f4fc990ee50 5c3328d5d79641b0bc4d2f97582ca3cf 56a999e2b19840cab7a889876343a7db - - default default] Acquiring lock "82d5c1918fd7c974214c7a48c1793a7a82560462" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:44:31 compute-0 nova_compute[250018]: 2026-01-20 14:44:31.229 250022 DEBUG oslo_concurrency.lockutils [None req-a793309f-6546-4a6f-be3a-6f4fc990ee50 5c3328d5d79641b0bc4d2f97582ca3cf 56a999e2b19840cab7a889876343a7db - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:44:31 compute-0 nova_compute[250018]: 2026-01-20 14:44:31.229 250022 DEBUG oslo_concurrency.lockutils [None req-a793309f-6546-4a6f-be3a-6f4fc990ee50 5c3328d5d79641b0bc4d2f97582ca3cf 56a999e2b19840cab7a889876343a7db - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:44:31 compute-0 nova_compute[250018]: 2026-01-20 14:44:31.256 250022 DEBUG nova.storage.rbd_utils [None req-a793309f-6546-4a6f-be3a-6f4fc990ee50 5c3328d5d79641b0bc4d2f97582ca3cf 56a999e2b19840cab7a889876343a7db - - default default] rbd image f9373561-17a5-410b-aa05-07c34a291c53_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:44:31 compute-0 nova_compute[250018]: 2026-01-20 14:44:31.261 250022 DEBUG oslo_concurrency.processutils [None req-a793309f-6546-4a6f-be3a-6f4fc990ee50 5c3328d5d79641b0bc4d2f97582ca3cf 56a999e2b19840cab7a889876343a7db - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 f9373561-17a5-410b-aa05-07c34a291c53_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:44:31 compute-0 nova_compute[250018]: 2026-01-20 14:44:31.544 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:44:31 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:44:31 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1478923842' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:44:31 compute-0 nova_compute[250018]: 2026-01-20 14:44:31.626 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:44:31 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1685: 321 pgs: 321 active+clean; 167 MiB data, 745 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.6 MiB/s wr, 151 op/s
Jan 20 14:44:31 compute-0 nova_compute[250018]: 2026-01-20 14:44:31.811 250022 WARNING nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:44:31 compute-0 nova_compute[250018]: 2026-01-20 14:44:31.812 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4518MB free_disk=20.92218017578125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 14:44:31 compute-0 nova_compute[250018]: 2026-01-20 14:44:31.813 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:44:31 compute-0 nova_compute[250018]: 2026-01-20 14:44:31.813 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:44:31 compute-0 nova_compute[250018]: 2026-01-20 14:44:31.899 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Instance f9373561-17a5-410b-aa05-07c34a291c53 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 14:44:31 compute-0 nova_compute[250018]: 2026-01-20 14:44:31.900 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 14:44:31 compute-0 nova_compute[250018]: 2026-01-20 14:44:31.901 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 14:44:31 compute-0 nova_compute[250018]: 2026-01-20 14:44:31.933 250022 DEBUG nova.network.neutron [None req-a793309f-6546-4a6f-be3a-6f4fc990ee50 5c3328d5d79641b0bc4d2f97582ca3cf 56a999e2b19840cab7a889876343a7db - - default default] [instance: f9373561-17a5-410b-aa05-07c34a291c53] Successfully created port: 62633e3d-4aea-4f14-8c3a-15b53f508832 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 20 14:44:31 compute-0 nova_compute[250018]: 2026-01-20 14:44:31.955 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:44:32 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/1478923842' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:44:32 compute-0 ceph-mon[74360]: pgmap v1685: 321 pgs: 321 active+clean; 167 MiB data, 745 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.6 MiB/s wr, 151 op/s
Jan 20 14:44:32 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:44:32 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:44:32 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:44:32.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:44:32 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:44:32 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1114195465' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:44:32 compute-0 nova_compute[250018]: 2026-01-20 14:44:32.365 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.410s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:44:32 compute-0 nova_compute[250018]: 2026-01-20 14:44:32.370 250022 DEBUG nova.compute.provider_tree [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:44:32 compute-0 nova_compute[250018]: 2026-01-20 14:44:32.387 250022 DEBUG nova.scheduler.client.report [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:44:32 compute-0 nova_compute[250018]: 2026-01-20 14:44:32.415 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 14:44:32 compute-0 nova_compute[250018]: 2026-01-20 14:44:32.415 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.602s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:44:32 compute-0 nova_compute[250018]: 2026-01-20 14:44:32.702 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:44:32 compute-0 nova_compute[250018]: 2026-01-20 14:44:32.726 250022 DEBUG oslo_concurrency.processutils [None req-a793309f-6546-4a6f-be3a-6f4fc990ee50 5c3328d5d79641b0bc4d2f97582ca3cf 56a999e2b19840cab7a889876343a7db - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 f9373561-17a5-410b-aa05-07c34a291c53_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:44:32 compute-0 nova_compute[250018]: 2026-01-20 14:44:32.792 250022 DEBUG nova.storage.rbd_utils [None req-a793309f-6546-4a6f-be3a-6f4fc990ee50 5c3328d5d79641b0bc4d2f97582ca3cf 56a999e2b19840cab7a889876343a7db - - default default] resizing rbd image f9373561-17a5-410b-aa05-07c34a291c53_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 20 14:44:32 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:44:32 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:44:32 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:44:32.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:44:33 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/1114195465' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:44:33 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:44:33 compute-0 nova_compute[250018]: 2026-01-20 14:44:33.237 250022 DEBUG nova.network.neutron [None req-a793309f-6546-4a6f-be3a-6f4fc990ee50 5c3328d5d79641b0bc4d2f97582ca3cf 56a999e2b19840cab7a889876343a7db - - default default] [instance: f9373561-17a5-410b-aa05-07c34a291c53] Successfully updated port: 62633e3d-4aea-4f14-8c3a-15b53f508832 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 20 14:44:33 compute-0 nova_compute[250018]: 2026-01-20 14:44:33.251 250022 DEBUG oslo_concurrency.lockutils [None req-a793309f-6546-4a6f-be3a-6f4fc990ee50 5c3328d5d79641b0bc4d2f97582ca3cf 56a999e2b19840cab7a889876343a7db - - default default] Acquiring lock "refresh_cache-f9373561-17a5-410b-aa05-07c34a291c53" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:44:33 compute-0 nova_compute[250018]: 2026-01-20 14:44:33.251 250022 DEBUG oslo_concurrency.lockutils [None req-a793309f-6546-4a6f-be3a-6f4fc990ee50 5c3328d5d79641b0bc4d2f97582ca3cf 56a999e2b19840cab7a889876343a7db - - default default] Acquired lock "refresh_cache-f9373561-17a5-410b-aa05-07c34a291c53" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:44:33 compute-0 nova_compute[250018]: 2026-01-20 14:44:33.251 250022 DEBUG nova.network.neutron [None req-a793309f-6546-4a6f-be3a-6f4fc990ee50 5c3328d5d79641b0bc4d2f97582ca3cf 56a999e2b19840cab7a889876343a7db - - default default] [instance: f9373561-17a5-410b-aa05-07c34a291c53] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:44:33 compute-0 nova_compute[250018]: 2026-01-20 14:44:33.329 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:44:33 compute-0 nova_compute[250018]: 2026-01-20 14:44:33.332 250022 DEBUG nova.compute.manager [req-3f6c9422-fec8-4b60-84e6-06f94b97b857 req-6398fe17-0c9f-43da-9c8e-3eb6788ccb05 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f9373561-17a5-410b-aa05-07c34a291c53] Received event network-changed-62633e3d-4aea-4f14-8c3a-15b53f508832 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:44:33 compute-0 nova_compute[250018]: 2026-01-20 14:44:33.332 250022 DEBUG nova.compute.manager [req-3f6c9422-fec8-4b60-84e6-06f94b97b857 req-6398fe17-0c9f-43da-9c8e-3eb6788ccb05 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f9373561-17a5-410b-aa05-07c34a291c53] Refreshing instance network info cache due to event network-changed-62633e3d-4aea-4f14-8c3a-15b53f508832. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:44:33 compute-0 nova_compute[250018]: 2026-01-20 14:44:33.333 250022 DEBUG oslo_concurrency.lockutils [req-3f6c9422-fec8-4b60-84e6-06f94b97b857 req-6398fe17-0c9f-43da-9c8e-3eb6788ccb05 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-f9373561-17a5-410b-aa05-07c34a291c53" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:44:33 compute-0 nova_compute[250018]: 2026-01-20 14:44:33.337 250022 DEBUG nova.objects.instance [None req-a793309f-6546-4a6f-be3a-6f4fc990ee50 5c3328d5d79641b0bc4d2f97582ca3cf 56a999e2b19840cab7a889876343a7db - - default default] Lazy-loading 'migration_context' on Instance uuid f9373561-17a5-410b-aa05-07c34a291c53 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:44:33 compute-0 nova_compute[250018]: 2026-01-20 14:44:33.349 250022 DEBUG nova.virt.libvirt.driver [None req-a793309f-6546-4a6f-be3a-6f4fc990ee50 5c3328d5d79641b0bc4d2f97582ca3cf 56a999e2b19840cab7a889876343a7db - - default default] [instance: f9373561-17a5-410b-aa05-07c34a291c53] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 14:44:33 compute-0 nova_compute[250018]: 2026-01-20 14:44:33.349 250022 DEBUG nova.virt.libvirt.driver [None req-a793309f-6546-4a6f-be3a-6f4fc990ee50 5c3328d5d79641b0bc4d2f97582ca3cf 56a999e2b19840cab7a889876343a7db - - default default] [instance: f9373561-17a5-410b-aa05-07c34a291c53] Ensure instance console log exists: /var/lib/nova/instances/f9373561-17a5-410b-aa05-07c34a291c53/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 14:44:33 compute-0 nova_compute[250018]: 2026-01-20 14:44:33.350 250022 DEBUG oslo_concurrency.lockutils [None req-a793309f-6546-4a6f-be3a-6f4fc990ee50 5c3328d5d79641b0bc4d2f97582ca3cf 56a999e2b19840cab7a889876343a7db - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:44:33 compute-0 nova_compute[250018]: 2026-01-20 14:44:33.350 250022 DEBUG oslo_concurrency.lockutils [None req-a793309f-6546-4a6f-be3a-6f4fc990ee50 5c3328d5d79641b0bc4d2f97582ca3cf 56a999e2b19840cab7a889876343a7db - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:44:33 compute-0 nova_compute[250018]: 2026-01-20 14:44:33.350 250022 DEBUG oslo_concurrency.lockutils [None req-a793309f-6546-4a6f-be3a-6f4fc990ee50 5c3328d5d79641b0bc4d2f97582ca3cf 56a999e2b19840cab7a889876343a7db - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:44:33 compute-0 nova_compute[250018]: 2026-01-20 14:44:33.442 250022 DEBUG nova.network.neutron [None req-a793309f-6546-4a6f-be3a-6f4fc990ee50 5c3328d5d79641b0bc4d2f97582ca3cf 56a999e2b19840cab7a889876343a7db - - default default] [instance: f9373561-17a5-410b-aa05-07c34a291c53] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 14:44:33 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1686: 321 pgs: 321 active+clean; 172 MiB data, 752 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 2.4 MiB/s wr, 147 op/s
Jan 20 14:44:34 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:44:34 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:44:34 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:44:34.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:44:34 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/3327996647' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:44:34 compute-0 ceph-mon[74360]: pgmap v1686: 321 pgs: 321 active+clean; 172 MiB data, 752 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 2.4 MiB/s wr, 147 op/s
Jan 20 14:44:34 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/899748910' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:44:34 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:44:34 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:44:34 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:44:34.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:44:35 compute-0 nova_compute[250018]: 2026-01-20 14:44:35.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:44:35 compute-0 nova_compute[250018]: 2026-01-20 14:44:35.052 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 14:44:35 compute-0 nova_compute[250018]: 2026-01-20 14:44:35.336 250022 DEBUG nova.network.neutron [None req-a793309f-6546-4a6f-be3a-6f4fc990ee50 5c3328d5d79641b0bc4d2f97582ca3cf 56a999e2b19840cab7a889876343a7db - - default default] [instance: f9373561-17a5-410b-aa05-07c34a291c53] Updating instance_info_cache with network_info: [{"id": "62633e3d-4aea-4f14-8c3a-15b53f508832", "address": "fa:16:3e:ac:d6:51", "network": {"id": "f26bc93b-7963-4d81-a34c-3fca60d66f94", "bridge": "br-int", "label": "tempest-NoVNCConsoleTestJSON-1480508232-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56a999e2b19840cab7a889876343a7db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62633e3d-4a", "ovs_interfaceid": "62633e3d-4aea-4f14-8c3a-15b53f508832", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:44:35 compute-0 nova_compute[250018]: 2026-01-20 14:44:35.371 250022 DEBUG oslo_concurrency.lockutils [None req-a793309f-6546-4a6f-be3a-6f4fc990ee50 5c3328d5d79641b0bc4d2f97582ca3cf 56a999e2b19840cab7a889876343a7db - - default default] Releasing lock "refresh_cache-f9373561-17a5-410b-aa05-07c34a291c53" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:44:35 compute-0 nova_compute[250018]: 2026-01-20 14:44:35.372 250022 DEBUG nova.compute.manager [None req-a793309f-6546-4a6f-be3a-6f4fc990ee50 5c3328d5d79641b0bc4d2f97582ca3cf 56a999e2b19840cab7a889876343a7db - - default default] [instance: f9373561-17a5-410b-aa05-07c34a291c53] Instance network_info: |[{"id": "62633e3d-4aea-4f14-8c3a-15b53f508832", "address": "fa:16:3e:ac:d6:51", "network": {"id": "f26bc93b-7963-4d81-a34c-3fca60d66f94", "bridge": "br-int", "label": "tempest-NoVNCConsoleTestJSON-1480508232-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56a999e2b19840cab7a889876343a7db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62633e3d-4a", "ovs_interfaceid": "62633e3d-4aea-4f14-8c3a-15b53f508832", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 20 14:44:35 compute-0 nova_compute[250018]: 2026-01-20 14:44:35.372 250022 DEBUG oslo_concurrency.lockutils [req-3f6c9422-fec8-4b60-84e6-06f94b97b857 req-6398fe17-0c9f-43da-9c8e-3eb6788ccb05 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-f9373561-17a5-410b-aa05-07c34a291c53" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:44:35 compute-0 nova_compute[250018]: 2026-01-20 14:44:35.373 250022 DEBUG nova.network.neutron [req-3f6c9422-fec8-4b60-84e6-06f94b97b857 req-6398fe17-0c9f-43da-9c8e-3eb6788ccb05 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f9373561-17a5-410b-aa05-07c34a291c53] Refreshing network info cache for port 62633e3d-4aea-4f14-8c3a-15b53f508832 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:44:35 compute-0 nova_compute[250018]: 2026-01-20 14:44:35.378 250022 DEBUG nova.virt.libvirt.driver [None req-a793309f-6546-4a6f-be3a-6f4fc990ee50 5c3328d5d79641b0bc4d2f97582ca3cf 56a999e2b19840cab7a889876343a7db - - default default] [instance: f9373561-17a5-410b-aa05-07c34a291c53] Start _get_guest_xml network_info=[{"id": "62633e3d-4aea-4f14-8c3a-15b53f508832", "address": "fa:16:3e:ac:d6:51", "network": {"id": "f26bc93b-7963-4d81-a34c-3fca60d66f94", "bridge": "br-int", "label": "tempest-NoVNCConsoleTestJSON-1480508232-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56a999e2b19840cab7a889876343a7db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62633e3d-4a", "ovs_interfaceid": "62633e3d-4aea-4f14-8c3a-15b53f508832", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encrypted': False, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'guest_format': None, 'size': 0, 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 14:44:35 compute-0 nova_compute[250018]: 2026-01-20 14:44:35.383 250022 WARNING nova.virt.libvirt.driver [None req-a793309f-6546-4a6f-be3a-6f4fc990ee50 5c3328d5d79641b0bc4d2f97582ca3cf 56a999e2b19840cab7a889876343a7db - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:44:35 compute-0 nova_compute[250018]: 2026-01-20 14:44:35.393 250022 DEBUG nova.virt.libvirt.host [None req-a793309f-6546-4a6f-be3a-6f4fc990ee50 5c3328d5d79641b0bc4d2f97582ca3cf 56a999e2b19840cab7a889876343a7db - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 14:44:35 compute-0 nova_compute[250018]: 2026-01-20 14:44:35.394 250022 DEBUG nova.virt.libvirt.host [None req-a793309f-6546-4a6f-be3a-6f4fc990ee50 5c3328d5d79641b0bc4d2f97582ca3cf 56a999e2b19840cab7a889876343a7db - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 14:44:35 compute-0 nova_compute[250018]: 2026-01-20 14:44:35.398 250022 DEBUG nova.virt.libvirt.host [None req-a793309f-6546-4a6f-be3a-6f4fc990ee50 5c3328d5d79641b0bc4d2f97582ca3cf 56a999e2b19840cab7a889876343a7db - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 14:44:35 compute-0 nova_compute[250018]: 2026-01-20 14:44:35.398 250022 DEBUG nova.virt.libvirt.host [None req-a793309f-6546-4a6f-be3a-6f4fc990ee50 5c3328d5d79641b0bc4d2f97582ca3cf 56a999e2b19840cab7a889876343a7db - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 14:44:35 compute-0 nova_compute[250018]: 2026-01-20 14:44:35.399 250022 DEBUG nova.virt.libvirt.driver [None req-a793309f-6546-4a6f-be3a-6f4fc990ee50 5c3328d5d79641b0bc4d2f97582ca3cf 56a999e2b19840cab7a889876343a7db - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 14:44:35 compute-0 nova_compute[250018]: 2026-01-20 14:44:35.399 250022 DEBUG nova.virt.hardware [None req-a793309f-6546-4a6f-be3a-6f4fc990ee50 5c3328d5d79641b0bc4d2f97582ca3cf 56a999e2b19840cab7a889876343a7db - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 14:44:35 compute-0 nova_compute[250018]: 2026-01-20 14:44:35.400 250022 DEBUG nova.virt.hardware [None req-a793309f-6546-4a6f-be3a-6f4fc990ee50 5c3328d5d79641b0bc4d2f97582ca3cf 56a999e2b19840cab7a889876343a7db - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 14:44:35 compute-0 nova_compute[250018]: 2026-01-20 14:44:35.400 250022 DEBUG nova.virt.hardware [None req-a793309f-6546-4a6f-be3a-6f4fc990ee50 5c3328d5d79641b0bc4d2f97582ca3cf 56a999e2b19840cab7a889876343a7db - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 14:44:35 compute-0 nova_compute[250018]: 2026-01-20 14:44:35.400 250022 DEBUG nova.virt.hardware [None req-a793309f-6546-4a6f-be3a-6f4fc990ee50 5c3328d5d79641b0bc4d2f97582ca3cf 56a999e2b19840cab7a889876343a7db - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 14:44:35 compute-0 nova_compute[250018]: 2026-01-20 14:44:35.401 250022 DEBUG nova.virt.hardware [None req-a793309f-6546-4a6f-be3a-6f4fc990ee50 5c3328d5d79641b0bc4d2f97582ca3cf 56a999e2b19840cab7a889876343a7db - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 14:44:35 compute-0 nova_compute[250018]: 2026-01-20 14:44:35.401 250022 DEBUG nova.virt.hardware [None req-a793309f-6546-4a6f-be3a-6f4fc990ee50 5c3328d5d79641b0bc4d2f97582ca3cf 56a999e2b19840cab7a889876343a7db - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 14:44:35 compute-0 nova_compute[250018]: 2026-01-20 14:44:35.401 250022 DEBUG nova.virt.hardware [None req-a793309f-6546-4a6f-be3a-6f4fc990ee50 5c3328d5d79641b0bc4d2f97582ca3cf 56a999e2b19840cab7a889876343a7db - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 14:44:35 compute-0 nova_compute[250018]: 2026-01-20 14:44:35.401 250022 DEBUG nova.virt.hardware [None req-a793309f-6546-4a6f-be3a-6f4fc990ee50 5c3328d5d79641b0bc4d2f97582ca3cf 56a999e2b19840cab7a889876343a7db - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 14:44:35 compute-0 nova_compute[250018]: 2026-01-20 14:44:35.401 250022 DEBUG nova.virt.hardware [None req-a793309f-6546-4a6f-be3a-6f4fc990ee50 5c3328d5d79641b0bc4d2f97582ca3cf 56a999e2b19840cab7a889876343a7db - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 14:44:35 compute-0 nova_compute[250018]: 2026-01-20 14:44:35.402 250022 DEBUG nova.virt.hardware [None req-a793309f-6546-4a6f-be3a-6f4fc990ee50 5c3328d5d79641b0bc4d2f97582ca3cf 56a999e2b19840cab7a889876343a7db - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 14:44:35 compute-0 nova_compute[250018]: 2026-01-20 14:44:35.402 250022 DEBUG nova.virt.hardware [None req-a793309f-6546-4a6f-be3a-6f4fc990ee50 5c3328d5d79641b0bc4d2f97582ca3cf 56a999e2b19840cab7a889876343a7db - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 14:44:35 compute-0 nova_compute[250018]: 2026-01-20 14:44:35.404 250022 DEBUG oslo_concurrency.processutils [None req-a793309f-6546-4a6f-be3a-6f4fc990ee50 5c3328d5d79641b0bc4d2f97582ca3cf 56a999e2b19840cab7a889876343a7db - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:44:35 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/3129307005' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:44:35 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1687: 321 pgs: 321 active+clean; 160 MiB data, 754 MiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 3.4 MiB/s wr, 190 op/s
Jan 20 14:44:35 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:44:35 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/137075845' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:44:35 compute-0 nova_compute[250018]: 2026-01-20 14:44:35.839 250022 DEBUG oslo_concurrency.processutils [None req-a793309f-6546-4a6f-be3a-6f4fc990ee50 5c3328d5d79641b0bc4d2f97582ca3cf 56a999e2b19840cab7a889876343a7db - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:44:35 compute-0 nova_compute[250018]: 2026-01-20 14:44:35.868 250022 DEBUG nova.storage.rbd_utils [None req-a793309f-6546-4a6f-be3a-6f4fc990ee50 5c3328d5d79641b0bc4d2f97582ca3cf 56a999e2b19840cab7a889876343a7db - - default default] rbd image f9373561-17a5-410b-aa05-07c34a291c53_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:44:35 compute-0 nova_compute[250018]: 2026-01-20 14:44:35.872 250022 DEBUG oslo_concurrency.processutils [None req-a793309f-6546-4a6f-be3a-6f4fc990ee50 5c3328d5d79641b0bc4d2f97582ca3cf 56a999e2b19840cab7a889876343a7db - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:44:36 compute-0 nova_compute[250018]: 2026-01-20 14:44:36.052 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:44:36 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:44:36 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:44:36 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:44:36.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:44:36 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:44:36 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2194028691' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:44:36 compute-0 nova_compute[250018]: 2026-01-20 14:44:36.296 250022 DEBUG oslo_concurrency.processutils [None req-a793309f-6546-4a6f-be3a-6f4fc990ee50 5c3328d5d79641b0bc4d2f97582ca3cf 56a999e2b19840cab7a889876343a7db - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:44:36 compute-0 nova_compute[250018]: 2026-01-20 14:44:36.298 250022 DEBUG nova.virt.libvirt.vif [None req-a793309f-6546-4a6f-be3a-6f4fc990ee50 5c3328d5d79641b0bc4d2f97582ca3cf 56a999e2b19840cab7a889876343a7db - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:44:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-NoVNCConsoleTestJSON-server-1152100167',display_name='tempest-NoVNCConsoleTestJSON-server-1152100167',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-novncconsoletestjson-server-1152100167',id=89,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='56a999e2b19840cab7a889876343a7db',ramdisk_id='',reservation_id='r-6ieuzakv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-NoVNCConsoleTestJSON-1905064762',owner_user_name='tempest-NoVNCConsoleTestJSON-1905064762-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:44:30Z,user_data=None,user_id='5c3328d5d79641b0bc4d2f97582ca3cf',uuid=f9373561-17a5-410b-aa05-07c34a291c53,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "62633e3d-4aea-4f14-8c3a-15b53f508832", "address": "fa:16:3e:ac:d6:51", "network": {"id": "f26bc93b-7963-4d81-a34c-3fca60d66f94", "bridge": "br-int", "label": "tempest-NoVNCConsoleTestJSON-1480508232-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56a999e2b19840cab7a889876343a7db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62633e3d-4a", "ovs_interfaceid": "62633e3d-4aea-4f14-8c3a-15b53f508832", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 14:44:36 compute-0 nova_compute[250018]: 2026-01-20 14:44:36.299 250022 DEBUG nova.network.os_vif_util [None req-a793309f-6546-4a6f-be3a-6f4fc990ee50 5c3328d5d79641b0bc4d2f97582ca3cf 56a999e2b19840cab7a889876343a7db - - default default] Converting VIF {"id": "62633e3d-4aea-4f14-8c3a-15b53f508832", "address": "fa:16:3e:ac:d6:51", "network": {"id": "f26bc93b-7963-4d81-a34c-3fca60d66f94", "bridge": "br-int", "label": "tempest-NoVNCConsoleTestJSON-1480508232-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56a999e2b19840cab7a889876343a7db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62633e3d-4a", "ovs_interfaceid": "62633e3d-4aea-4f14-8c3a-15b53f508832", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:44:36 compute-0 nova_compute[250018]: 2026-01-20 14:44:36.300 250022 DEBUG nova.network.os_vif_util [None req-a793309f-6546-4a6f-be3a-6f4fc990ee50 5c3328d5d79641b0bc4d2f97582ca3cf 56a999e2b19840cab7a889876343a7db - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ac:d6:51,bridge_name='br-int',has_traffic_filtering=True,id=62633e3d-4aea-4f14-8c3a-15b53f508832,network=Network(f26bc93b-7963-4d81-a34c-3fca60d66f94),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap62633e3d-4a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:44:36 compute-0 nova_compute[250018]: 2026-01-20 14:44:36.301 250022 DEBUG nova.objects.instance [None req-a793309f-6546-4a6f-be3a-6f4fc990ee50 5c3328d5d79641b0bc4d2f97582ca3cf 56a999e2b19840cab7a889876343a7db - - default default] Lazy-loading 'pci_devices' on Instance uuid f9373561-17a5-410b-aa05-07c34a291c53 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:44:36 compute-0 nova_compute[250018]: 2026-01-20 14:44:36.321 250022 DEBUG nova.virt.libvirt.driver [None req-a793309f-6546-4a6f-be3a-6f4fc990ee50 5c3328d5d79641b0bc4d2f97582ca3cf 56a999e2b19840cab7a889876343a7db - - default default] [instance: f9373561-17a5-410b-aa05-07c34a291c53] End _get_guest_xml xml=<domain type="kvm">
Jan 20 14:44:36 compute-0 nova_compute[250018]:   <uuid>f9373561-17a5-410b-aa05-07c34a291c53</uuid>
Jan 20 14:44:36 compute-0 nova_compute[250018]:   <name>instance-00000059</name>
Jan 20 14:44:36 compute-0 nova_compute[250018]:   <memory>131072</memory>
Jan 20 14:44:36 compute-0 nova_compute[250018]:   <vcpu>1</vcpu>
Jan 20 14:44:36 compute-0 nova_compute[250018]:   <metadata>
Jan 20 14:44:36 compute-0 nova_compute[250018]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 14:44:36 compute-0 nova_compute[250018]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 14:44:36 compute-0 nova_compute[250018]:       <nova:name>tempest-NoVNCConsoleTestJSON-server-1152100167</nova:name>
Jan 20 14:44:36 compute-0 nova_compute[250018]:       <nova:creationTime>2026-01-20 14:44:35</nova:creationTime>
Jan 20 14:44:36 compute-0 nova_compute[250018]:       <nova:flavor name="m1.nano">
Jan 20 14:44:36 compute-0 nova_compute[250018]:         <nova:memory>128</nova:memory>
Jan 20 14:44:36 compute-0 nova_compute[250018]:         <nova:disk>1</nova:disk>
Jan 20 14:44:36 compute-0 nova_compute[250018]:         <nova:swap>0</nova:swap>
Jan 20 14:44:36 compute-0 nova_compute[250018]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 14:44:36 compute-0 nova_compute[250018]:         <nova:vcpus>1</nova:vcpus>
Jan 20 14:44:36 compute-0 nova_compute[250018]:       </nova:flavor>
Jan 20 14:44:36 compute-0 nova_compute[250018]:       <nova:owner>
Jan 20 14:44:36 compute-0 nova_compute[250018]:         <nova:user uuid="5c3328d5d79641b0bc4d2f97582ca3cf">tempest-NoVNCConsoleTestJSON-1905064762-project-member</nova:user>
Jan 20 14:44:36 compute-0 nova_compute[250018]:         <nova:project uuid="56a999e2b19840cab7a889876343a7db">tempest-NoVNCConsoleTestJSON-1905064762</nova:project>
Jan 20 14:44:36 compute-0 nova_compute[250018]:       </nova:owner>
Jan 20 14:44:36 compute-0 nova_compute[250018]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 14:44:36 compute-0 nova_compute[250018]:       <nova:ports>
Jan 20 14:44:36 compute-0 nova_compute[250018]:         <nova:port uuid="62633e3d-4aea-4f14-8c3a-15b53f508832">
Jan 20 14:44:36 compute-0 nova_compute[250018]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Jan 20 14:44:36 compute-0 nova_compute[250018]:         </nova:port>
Jan 20 14:44:36 compute-0 nova_compute[250018]:       </nova:ports>
Jan 20 14:44:36 compute-0 nova_compute[250018]:     </nova:instance>
Jan 20 14:44:36 compute-0 nova_compute[250018]:   </metadata>
Jan 20 14:44:36 compute-0 nova_compute[250018]:   <sysinfo type="smbios">
Jan 20 14:44:36 compute-0 nova_compute[250018]:     <system>
Jan 20 14:44:36 compute-0 nova_compute[250018]:       <entry name="manufacturer">RDO</entry>
Jan 20 14:44:36 compute-0 nova_compute[250018]:       <entry name="product">OpenStack Compute</entry>
Jan 20 14:44:36 compute-0 nova_compute[250018]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 14:44:36 compute-0 nova_compute[250018]:       <entry name="serial">f9373561-17a5-410b-aa05-07c34a291c53</entry>
Jan 20 14:44:36 compute-0 nova_compute[250018]:       <entry name="uuid">f9373561-17a5-410b-aa05-07c34a291c53</entry>
Jan 20 14:44:36 compute-0 nova_compute[250018]:       <entry name="family">Virtual Machine</entry>
Jan 20 14:44:36 compute-0 nova_compute[250018]:     </system>
Jan 20 14:44:36 compute-0 nova_compute[250018]:   </sysinfo>
Jan 20 14:44:36 compute-0 nova_compute[250018]:   <os>
Jan 20 14:44:36 compute-0 nova_compute[250018]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 14:44:36 compute-0 nova_compute[250018]:     <boot dev="hd"/>
Jan 20 14:44:36 compute-0 nova_compute[250018]:     <smbios mode="sysinfo"/>
Jan 20 14:44:36 compute-0 nova_compute[250018]:   </os>
Jan 20 14:44:36 compute-0 nova_compute[250018]:   <features>
Jan 20 14:44:36 compute-0 nova_compute[250018]:     <acpi/>
Jan 20 14:44:36 compute-0 nova_compute[250018]:     <apic/>
Jan 20 14:44:36 compute-0 nova_compute[250018]:     <vmcoreinfo/>
Jan 20 14:44:36 compute-0 nova_compute[250018]:   </features>
Jan 20 14:44:36 compute-0 nova_compute[250018]:   <clock offset="utc">
Jan 20 14:44:36 compute-0 nova_compute[250018]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 14:44:36 compute-0 nova_compute[250018]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 14:44:36 compute-0 nova_compute[250018]:     <timer name="hpet" present="no"/>
Jan 20 14:44:36 compute-0 nova_compute[250018]:   </clock>
Jan 20 14:44:36 compute-0 nova_compute[250018]:   <cpu mode="custom" match="exact">
Jan 20 14:44:36 compute-0 nova_compute[250018]:     <model>Nehalem</model>
Jan 20 14:44:36 compute-0 nova_compute[250018]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 14:44:36 compute-0 nova_compute[250018]:   </cpu>
Jan 20 14:44:36 compute-0 nova_compute[250018]:   <devices>
Jan 20 14:44:36 compute-0 nova_compute[250018]:     <disk type="network" device="disk">
Jan 20 14:44:36 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 14:44:36 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/f9373561-17a5-410b-aa05-07c34a291c53_disk">
Jan 20 14:44:36 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:44:36 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:44:36 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:44:36 compute-0 nova_compute[250018]:       </source>
Jan 20 14:44:36 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 14:44:36 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:44:36 compute-0 nova_compute[250018]:       </auth>
Jan 20 14:44:36 compute-0 nova_compute[250018]:       <target dev="vda" bus="virtio"/>
Jan 20 14:44:36 compute-0 nova_compute[250018]:     </disk>
Jan 20 14:44:36 compute-0 nova_compute[250018]:     <disk type="network" device="cdrom">
Jan 20 14:44:36 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 14:44:36 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/f9373561-17a5-410b-aa05-07c34a291c53_disk.config">
Jan 20 14:44:36 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:44:36 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:44:36 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:44:36 compute-0 nova_compute[250018]:       </source>
Jan 20 14:44:36 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 14:44:36 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:44:36 compute-0 nova_compute[250018]:       </auth>
Jan 20 14:44:36 compute-0 nova_compute[250018]:       <target dev="sda" bus="sata"/>
Jan 20 14:44:36 compute-0 nova_compute[250018]:     </disk>
Jan 20 14:44:36 compute-0 nova_compute[250018]:     <interface type="ethernet">
Jan 20 14:44:36 compute-0 nova_compute[250018]:       <mac address="fa:16:3e:ac:d6:51"/>
Jan 20 14:44:36 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 14:44:36 compute-0 nova_compute[250018]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 14:44:36 compute-0 nova_compute[250018]:       <mtu size="1442"/>
Jan 20 14:44:36 compute-0 nova_compute[250018]:       <target dev="tap62633e3d-4a"/>
Jan 20 14:44:36 compute-0 nova_compute[250018]:     </interface>
Jan 20 14:44:36 compute-0 nova_compute[250018]:     <serial type="pty">
Jan 20 14:44:36 compute-0 nova_compute[250018]:       <log file="/var/lib/nova/instances/f9373561-17a5-410b-aa05-07c34a291c53/console.log" append="off"/>
Jan 20 14:44:36 compute-0 nova_compute[250018]:     </serial>
Jan 20 14:44:36 compute-0 nova_compute[250018]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 14:44:36 compute-0 nova_compute[250018]:     <video>
Jan 20 14:44:36 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 14:44:36 compute-0 nova_compute[250018]:     </video>
Jan 20 14:44:36 compute-0 nova_compute[250018]:     <input type="tablet" bus="usb"/>
Jan 20 14:44:36 compute-0 nova_compute[250018]:     <rng model="virtio">
Jan 20 14:44:36 compute-0 nova_compute[250018]:       <backend model="random">/dev/urandom</backend>
Jan 20 14:44:36 compute-0 nova_compute[250018]:     </rng>
Jan 20 14:44:36 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root"/>
Jan 20 14:44:36 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:44:36 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:44:36 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:44:36 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:44:36 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:44:36 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:44:36 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:44:36 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:44:36 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:44:36 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:44:36 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:44:36 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:44:36 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:44:36 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:44:36 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:44:36 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:44:36 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:44:36 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:44:36 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:44:36 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:44:36 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:44:36 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:44:36 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:44:36 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:44:36 compute-0 nova_compute[250018]:     <controller type="usb" index="0"/>
Jan 20 14:44:36 compute-0 nova_compute[250018]:     <memballoon model="virtio">
Jan 20 14:44:36 compute-0 nova_compute[250018]:       <stats period="10"/>
Jan 20 14:44:36 compute-0 nova_compute[250018]:     </memballoon>
Jan 20 14:44:36 compute-0 nova_compute[250018]:   </devices>
Jan 20 14:44:36 compute-0 nova_compute[250018]: </domain>
Jan 20 14:44:36 compute-0 nova_compute[250018]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 14:44:36 compute-0 nova_compute[250018]: 2026-01-20 14:44:36.323 250022 DEBUG nova.compute.manager [None req-a793309f-6546-4a6f-be3a-6f4fc990ee50 5c3328d5d79641b0bc4d2f97582ca3cf 56a999e2b19840cab7a889876343a7db - - default default] [instance: f9373561-17a5-410b-aa05-07c34a291c53] Preparing to wait for external event network-vif-plugged-62633e3d-4aea-4f14-8c3a-15b53f508832 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 20 14:44:36 compute-0 nova_compute[250018]: 2026-01-20 14:44:36.323 250022 DEBUG oslo_concurrency.lockutils [None req-a793309f-6546-4a6f-be3a-6f4fc990ee50 5c3328d5d79641b0bc4d2f97582ca3cf 56a999e2b19840cab7a889876343a7db - - default default] Acquiring lock "f9373561-17a5-410b-aa05-07c34a291c53-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:44:36 compute-0 nova_compute[250018]: 2026-01-20 14:44:36.323 250022 DEBUG oslo_concurrency.lockutils [None req-a793309f-6546-4a6f-be3a-6f4fc990ee50 5c3328d5d79641b0bc4d2f97582ca3cf 56a999e2b19840cab7a889876343a7db - - default default] Lock "f9373561-17a5-410b-aa05-07c34a291c53-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:44:36 compute-0 nova_compute[250018]: 2026-01-20 14:44:36.323 250022 DEBUG oslo_concurrency.lockutils [None req-a793309f-6546-4a6f-be3a-6f4fc990ee50 5c3328d5d79641b0bc4d2f97582ca3cf 56a999e2b19840cab7a889876343a7db - - default default] Lock "f9373561-17a5-410b-aa05-07c34a291c53-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:44:36 compute-0 nova_compute[250018]: 2026-01-20 14:44:36.324 250022 DEBUG nova.virt.libvirt.vif [None req-a793309f-6546-4a6f-be3a-6f4fc990ee50 5c3328d5d79641b0bc4d2f97582ca3cf 56a999e2b19840cab7a889876343a7db - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:44:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-NoVNCConsoleTestJSON-server-1152100167',display_name='tempest-NoVNCConsoleTestJSON-server-1152100167',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-novncconsoletestjson-server-1152100167',id=89,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='56a999e2b19840cab7a889876343a7db',ramdisk_id='',reservation_id='r-6ieuzakv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-NoVNCConsoleTestJSON-1905064762',owner_user_name='tempest-NoVNCConsoleTestJSON-1905064762-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:44:30Z,user_data=None,user_id='5c3328d5d79641b0bc4d2f97582ca3cf',uuid=f9373561-17a5-410b-aa05-07c34a291c53,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "62633e3d-4aea-4f14-8c3a-15b53f508832", "address": "fa:16:3e:ac:d6:51", "network": {"id": "f26bc93b-7963-4d81-a34c-3fca60d66f94", "bridge": "br-int", "label": "tempest-NoVNCConsoleTestJSON-1480508232-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56a999e2b19840cab7a889876343a7db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62633e3d-4a", "ovs_interfaceid": "62633e3d-4aea-4f14-8c3a-15b53f508832", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 14:44:36 compute-0 nova_compute[250018]: 2026-01-20 14:44:36.324 250022 DEBUG nova.network.os_vif_util [None req-a793309f-6546-4a6f-be3a-6f4fc990ee50 5c3328d5d79641b0bc4d2f97582ca3cf 56a999e2b19840cab7a889876343a7db - - default default] Converting VIF {"id": "62633e3d-4aea-4f14-8c3a-15b53f508832", "address": "fa:16:3e:ac:d6:51", "network": {"id": "f26bc93b-7963-4d81-a34c-3fca60d66f94", "bridge": "br-int", "label": "tempest-NoVNCConsoleTestJSON-1480508232-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56a999e2b19840cab7a889876343a7db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62633e3d-4a", "ovs_interfaceid": "62633e3d-4aea-4f14-8c3a-15b53f508832", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:44:36 compute-0 nova_compute[250018]: 2026-01-20 14:44:36.325 250022 DEBUG nova.network.os_vif_util [None req-a793309f-6546-4a6f-be3a-6f4fc990ee50 5c3328d5d79641b0bc4d2f97582ca3cf 56a999e2b19840cab7a889876343a7db - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ac:d6:51,bridge_name='br-int',has_traffic_filtering=True,id=62633e3d-4aea-4f14-8c3a-15b53f508832,network=Network(f26bc93b-7963-4d81-a34c-3fca60d66f94),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap62633e3d-4a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:44:36 compute-0 nova_compute[250018]: 2026-01-20 14:44:36.325 250022 DEBUG os_vif [None req-a793309f-6546-4a6f-be3a-6f4fc990ee50 5c3328d5d79641b0bc4d2f97582ca3cf 56a999e2b19840cab7a889876343a7db - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ac:d6:51,bridge_name='br-int',has_traffic_filtering=True,id=62633e3d-4aea-4f14-8c3a-15b53f508832,network=Network(f26bc93b-7963-4d81-a34c-3fca60d66f94),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap62633e3d-4a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 14:44:36 compute-0 nova_compute[250018]: 2026-01-20 14:44:36.325 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:44:36 compute-0 nova_compute[250018]: 2026-01-20 14:44:36.326 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:44:36 compute-0 nova_compute[250018]: 2026-01-20 14:44:36.326 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:44:36 compute-0 nova_compute[250018]: 2026-01-20 14:44:36.329 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:44:36 compute-0 nova_compute[250018]: 2026-01-20 14:44:36.329 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap62633e3d-4a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:44:36 compute-0 nova_compute[250018]: 2026-01-20 14:44:36.330 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap62633e3d-4a, col_values=(('external_ids', {'iface-id': '62633e3d-4aea-4f14-8c3a-15b53f508832', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ac:d6:51', 'vm-uuid': 'f9373561-17a5-410b-aa05-07c34a291c53'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:44:36 compute-0 NetworkManager[48960]: <info>  [1768920276.3320] manager: (tap62633e3d-4a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/145)
Jan 20 14:44:36 compute-0 nova_compute[250018]: 2026-01-20 14:44:36.331 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:44:36 compute-0 nova_compute[250018]: 2026-01-20 14:44:36.333 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:44:36 compute-0 nova_compute[250018]: 2026-01-20 14:44:36.338 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:44:36 compute-0 nova_compute[250018]: 2026-01-20 14:44:36.339 250022 INFO os_vif [None req-a793309f-6546-4a6f-be3a-6f4fc990ee50 5c3328d5d79641b0bc4d2f97582ca3cf 56a999e2b19840cab7a889876343a7db - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ac:d6:51,bridge_name='br-int',has_traffic_filtering=True,id=62633e3d-4aea-4f14-8c3a-15b53f508832,network=Network(f26bc93b-7963-4d81-a34c-3fca60d66f94),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap62633e3d-4a')
Jan 20 14:44:36 compute-0 nova_compute[250018]: 2026-01-20 14:44:36.403 250022 DEBUG nova.virt.libvirt.driver [None req-a793309f-6546-4a6f-be3a-6f4fc990ee50 5c3328d5d79641b0bc4d2f97582ca3cf 56a999e2b19840cab7a889876343a7db - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:44:36 compute-0 nova_compute[250018]: 2026-01-20 14:44:36.403 250022 DEBUG nova.virt.libvirt.driver [None req-a793309f-6546-4a6f-be3a-6f4fc990ee50 5c3328d5d79641b0bc4d2f97582ca3cf 56a999e2b19840cab7a889876343a7db - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:44:36 compute-0 nova_compute[250018]: 2026-01-20 14:44:36.403 250022 DEBUG nova.virt.libvirt.driver [None req-a793309f-6546-4a6f-be3a-6f4fc990ee50 5c3328d5d79641b0bc4d2f97582ca3cf 56a999e2b19840cab7a889876343a7db - - default default] No VIF found with MAC fa:16:3e:ac:d6:51, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 14:44:36 compute-0 nova_compute[250018]: 2026-01-20 14:44:36.404 250022 INFO nova.virt.libvirt.driver [None req-a793309f-6546-4a6f-be3a-6f4fc990ee50 5c3328d5d79641b0bc4d2f97582ca3cf 56a999e2b19840cab7a889876343a7db - - default default] [instance: f9373561-17a5-410b-aa05-07c34a291c53] Using config drive
Jan 20 14:44:36 compute-0 nova_compute[250018]: 2026-01-20 14:44:36.433 250022 DEBUG nova.storage.rbd_utils [None req-a793309f-6546-4a6f-be3a-6f4fc990ee50 5c3328d5d79641b0bc4d2f97582ca3cf 56a999e2b19840cab7a889876343a7db - - default default] rbd image f9373561-17a5-410b-aa05-07c34a291c53_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:44:36 compute-0 ceph-mon[74360]: pgmap v1687: 321 pgs: 321 active+clean; 160 MiB data, 754 MiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 3.4 MiB/s wr, 190 op/s
Jan 20 14:44:36 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/137075845' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:44:36 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/2245713301' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:44:36 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/2194028691' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:44:36 compute-0 nova_compute[250018]: 2026-01-20 14:44:36.664 250022 DEBUG nova.network.neutron [req-3f6c9422-fec8-4b60-84e6-06f94b97b857 req-6398fe17-0c9f-43da-9c8e-3eb6788ccb05 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f9373561-17a5-410b-aa05-07c34a291c53] Updated VIF entry in instance network info cache for port 62633e3d-4aea-4f14-8c3a-15b53f508832. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 14:44:36 compute-0 nova_compute[250018]: 2026-01-20 14:44:36.665 250022 DEBUG nova.network.neutron [req-3f6c9422-fec8-4b60-84e6-06f94b97b857 req-6398fe17-0c9f-43da-9c8e-3eb6788ccb05 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f9373561-17a5-410b-aa05-07c34a291c53] Updating instance_info_cache with network_info: [{"id": "62633e3d-4aea-4f14-8c3a-15b53f508832", "address": "fa:16:3e:ac:d6:51", "network": {"id": "f26bc93b-7963-4d81-a34c-3fca60d66f94", "bridge": "br-int", "label": "tempest-NoVNCConsoleTestJSON-1480508232-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56a999e2b19840cab7a889876343a7db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62633e3d-4a", "ovs_interfaceid": "62633e3d-4aea-4f14-8c3a-15b53f508832", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:44:36 compute-0 nova_compute[250018]: 2026-01-20 14:44:36.680 250022 DEBUG oslo_concurrency.lockutils [req-3f6c9422-fec8-4b60-84e6-06f94b97b857 req-6398fe17-0c9f-43da-9c8e-3eb6788ccb05 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-f9373561-17a5-410b-aa05-07c34a291c53" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:44:36 compute-0 nova_compute[250018]: 2026-01-20 14:44:36.799 250022 INFO nova.virt.libvirt.driver [None req-a793309f-6546-4a6f-be3a-6f4fc990ee50 5c3328d5d79641b0bc4d2f97582ca3cf 56a999e2b19840cab7a889876343a7db - - default default] [instance: f9373561-17a5-410b-aa05-07c34a291c53] Creating config drive at /var/lib/nova/instances/f9373561-17a5-410b-aa05-07c34a291c53/disk.config
Jan 20 14:44:36 compute-0 nova_compute[250018]: 2026-01-20 14:44:36.804 250022 DEBUG oslo_concurrency.processutils [None req-a793309f-6546-4a6f-be3a-6f4fc990ee50 5c3328d5d79641b0bc4d2f97582ca3cf 56a999e2b19840cab7a889876343a7db - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f9373561-17a5-410b-aa05-07c34a291c53/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpv7x6pk3q execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:44:36 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:44:36 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:44:36 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:44:36.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:44:36 compute-0 nova_compute[250018]: 2026-01-20 14:44:36.953 250022 DEBUG oslo_concurrency.processutils [None req-a793309f-6546-4a6f-be3a-6f4fc990ee50 5c3328d5d79641b0bc4d2f97582ca3cf 56a999e2b19840cab7a889876343a7db - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f9373561-17a5-410b-aa05-07c34a291c53/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpv7x6pk3q" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:44:36 compute-0 nova_compute[250018]: 2026-01-20 14:44:36.984 250022 DEBUG nova.storage.rbd_utils [None req-a793309f-6546-4a6f-be3a-6f4fc990ee50 5c3328d5d79641b0bc4d2f97582ca3cf 56a999e2b19840cab7a889876343a7db - - default default] rbd image f9373561-17a5-410b-aa05-07c34a291c53_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:44:36 compute-0 nova_compute[250018]: 2026-01-20 14:44:36.989 250022 DEBUG oslo_concurrency.processutils [None req-a793309f-6546-4a6f-be3a-6f4fc990ee50 5c3328d5d79641b0bc4d2f97582ca3cf 56a999e2b19840cab7a889876343a7db - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f9373561-17a5-410b-aa05-07c34a291c53/disk.config f9373561-17a5-410b-aa05-07c34a291c53_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:44:37 compute-0 nova_compute[250018]: 2026-01-20 14:44:37.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:44:37 compute-0 nova_compute[250018]: 2026-01-20 14:44:37.140 250022 DEBUG oslo_concurrency.processutils [None req-a793309f-6546-4a6f-be3a-6f4fc990ee50 5c3328d5d79641b0bc4d2f97582ca3cf 56a999e2b19840cab7a889876343a7db - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f9373561-17a5-410b-aa05-07c34a291c53/disk.config f9373561-17a5-410b-aa05-07c34a291c53_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:44:37 compute-0 nova_compute[250018]: 2026-01-20 14:44:37.141 250022 INFO nova.virt.libvirt.driver [None req-a793309f-6546-4a6f-be3a-6f4fc990ee50 5c3328d5d79641b0bc4d2f97582ca3cf 56a999e2b19840cab7a889876343a7db - - default default] [instance: f9373561-17a5-410b-aa05-07c34a291c53] Deleting local config drive /var/lib/nova/instances/f9373561-17a5-410b-aa05-07c34a291c53/disk.config because it was imported into RBD.
Jan 20 14:44:37 compute-0 kernel: tap62633e3d-4a: entered promiscuous mode
Jan 20 14:44:37 compute-0 NetworkManager[48960]: <info>  [1768920277.2038] manager: (tap62633e3d-4a): new Tun device (/org/freedesktop/NetworkManager/Devices/146)
Jan 20 14:44:37 compute-0 ovn_controller[148666]: 2026-01-20T14:44:37Z|00286|binding|INFO|Claiming lport 62633e3d-4aea-4f14-8c3a-15b53f508832 for this chassis.
Jan 20 14:44:37 compute-0 ovn_controller[148666]: 2026-01-20T14:44:37Z|00287|binding|INFO|62633e3d-4aea-4f14-8c3a-15b53f508832: Claiming fa:16:3e:ac:d6:51 10.100.0.5
Jan 20 14:44:37 compute-0 nova_compute[250018]: 2026-01-20 14:44:37.204 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:44:37 compute-0 nova_compute[250018]: 2026-01-20 14:44:37.213 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:44:37 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:44:37.224 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ac:d6:51 10.100.0.5'], port_security=['fa:16:3e:ac:d6:51 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'f9373561-17a5-410b-aa05-07c34a291c53', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f26bc93b-7963-4d81-a34c-3fca60d66f94', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '56a999e2b19840cab7a889876343a7db', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f83fc749-8eea-4fd7-b0f5-b6e18e8db925', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8300c9f3-ccdb-4750-a44f-15efa22cea8c, chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=62633e3d-4aea-4f14-8c3a-15b53f508832) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:44:37 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:44:37.225 160071 INFO neutron.agent.ovn.metadata.agent [-] Port 62633e3d-4aea-4f14-8c3a-15b53f508832 in datapath f26bc93b-7963-4d81-a34c-3fca60d66f94 bound to our chassis
Jan 20 14:44:37 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:44:37.227 160071 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f26bc93b-7963-4d81-a34c-3fca60d66f94
Jan 20 14:44:37 compute-0 systemd-machined[216401]: New machine qemu-38-instance-00000059.
Jan 20 14:44:37 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:44:37.240 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[99f817f5-f762-4dd0-b18a-55caa6e8eabe]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:44:37 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:44:37.241 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf26bc93b-71 in ovnmeta-f26bc93b-7963-4d81-a34c-3fca60d66f94 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 14:44:37 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:44:37.242 257604 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf26bc93b-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 14:44:37 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:44:37.243 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[c5819ce8-b7ea-43fc-86d1-c02e41dbb80c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:44:37 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:44:37.243 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[9e4e2dc6-da64-4b2b-851d-e9968b02f37d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:44:37 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:44:37.254 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[85928a66-5fa6-4d09-9869-0248a960d048]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:44:37 compute-0 systemd[1]: Started Virtual Machine qemu-38-instance-00000059.
Jan 20 14:44:37 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:44:37.279 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[2b485891-8ede-4611-b1db-50aa30c273d2]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:44:37 compute-0 systemd-udevd[301058]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:44:37 compute-0 NetworkManager[48960]: <info>  [1768920277.2953] device (tap62633e3d-4a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 14:44:37 compute-0 NetworkManager[48960]: <info>  [1768920277.2962] device (tap62633e3d-4a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 14:44:37 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:44:37.306 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[5ab05cc7-6662-4ec1-9464-5bc69eac2cbc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:44:37 compute-0 NetworkManager[48960]: <info>  [1768920277.3155] manager: (tapf26bc93b-70): new Veth device (/org/freedesktop/NetworkManager/Devices/147)
Jan 20 14:44:37 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:44:37.317 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[5333aa4f-94d9-4e06-aa56-7d595adca80c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:44:37 compute-0 ovn_controller[148666]: 2026-01-20T14:44:37Z|00288|binding|INFO|Setting lport 62633e3d-4aea-4f14-8c3a-15b53f508832 ovn-installed in OVS
Jan 20 14:44:37 compute-0 ovn_controller[148666]: 2026-01-20T14:44:37Z|00289|binding|INFO|Setting lport 62633e3d-4aea-4f14-8c3a-15b53f508832 up in Southbound
Jan 20 14:44:37 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:44:37.347 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[06d38438-77a3-4827-b7db-b9c776cbf03d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:44:37 compute-0 nova_compute[250018]: 2026-01-20 14:44:37.365 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:44:37 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:44:37.364 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[5dd04a96-4871-4a8c-ba3f-bd151b3627f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:44:37 compute-0 NetworkManager[48960]: <info>  [1768920277.3891] device (tapf26bc93b-70): carrier: link connected
Jan 20 14:44:37 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:44:37.396 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[efd54d43-4f40-4d83-8e89-82aff8e85d3d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:44:37 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:44:37.412 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[e821dc59-54b6-4943-b1cf-13ab7d46a236]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf26bc93b-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:51:f7:73'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 92], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 621811, 'reachable_time': 40341, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 301088, 'error': None, 'target': 'ovnmeta-f26bc93b-7963-4d81-a34c-3fca60d66f94', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:44:37 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:44:37.427 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[546d36a2-a29a-4dfb-894a-4001e85fd946]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe51:f773'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 621811, 'tstamp': 621811}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 301089, 'error': None, 'target': 'ovnmeta-f26bc93b-7963-4d81-a34c-3fca60d66f94', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:44:37 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:44:37.444 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[ff176ee8-30a2-432e-be37-e62258f74817]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf26bc93b-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:51:f7:73'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 92], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 621811, 'reachable_time': 40341, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 301090, 'error': None, 'target': 'ovnmeta-f26bc93b-7963-4d81-a34c-3fca60d66f94', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:44:37 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:44:37.474 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[cbb2f51a-583e-4de5-aa28-2e342c903582]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:44:37 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:44:37.529 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[a0a91852-7be3-46ba-98e1-2c266916e1f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:44:37 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:44:37.531 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf26bc93b-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:44:37 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:44:37.531 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:44:37 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:44:37.531 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf26bc93b-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:44:37 compute-0 nova_compute[250018]: 2026-01-20 14:44:37.533 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:44:37 compute-0 kernel: tapf26bc93b-70: entered promiscuous mode
Jan 20 14:44:37 compute-0 NetworkManager[48960]: <info>  [1768920277.5347] manager: (tapf26bc93b-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/148)
Jan 20 14:44:37 compute-0 nova_compute[250018]: 2026-01-20 14:44:37.536 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:44:37 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:44:37.537 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf26bc93b-70, col_values=(('external_ids', {'iface-id': '4276149a-5761-42b4-b681-91ebc350c84e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:44:37 compute-0 nova_compute[250018]: 2026-01-20 14:44:37.538 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:44:37 compute-0 ovn_controller[148666]: 2026-01-20T14:44:37Z|00290|binding|INFO|Releasing lport 4276149a-5761-42b4-b681-91ebc350c84e from this chassis (sb_readonly=0)
Jan 20 14:44:37 compute-0 nova_compute[250018]: 2026-01-20 14:44:37.552 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:44:37 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:44:37.553 160071 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f26bc93b-7963-4d81-a34c-3fca60d66f94.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f26bc93b-7963-4d81-a34c-3fca60d66f94.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 14:44:37 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:44:37.554 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[ee4acd6b-9e7a-4dbe-b28e-e9fe9e490ece]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:44:37 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:44:37.555 160071 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 14:44:37 compute-0 ovn_metadata_agent[160049]: global
Jan 20 14:44:37 compute-0 ovn_metadata_agent[160049]:     log         /dev/log local0 debug
Jan 20 14:44:37 compute-0 ovn_metadata_agent[160049]:     log-tag     haproxy-metadata-proxy-f26bc93b-7963-4d81-a34c-3fca60d66f94
Jan 20 14:44:37 compute-0 ovn_metadata_agent[160049]:     user        root
Jan 20 14:44:37 compute-0 ovn_metadata_agent[160049]:     group       root
Jan 20 14:44:37 compute-0 ovn_metadata_agent[160049]:     maxconn     1024
Jan 20 14:44:37 compute-0 ovn_metadata_agent[160049]:     pidfile     /var/lib/neutron/external/pids/f26bc93b-7963-4d81-a34c-3fca60d66f94.pid.haproxy
Jan 20 14:44:37 compute-0 ovn_metadata_agent[160049]:     daemon
Jan 20 14:44:37 compute-0 ovn_metadata_agent[160049]: 
Jan 20 14:44:37 compute-0 ovn_metadata_agent[160049]: defaults
Jan 20 14:44:37 compute-0 ovn_metadata_agent[160049]:     log global
Jan 20 14:44:37 compute-0 ovn_metadata_agent[160049]:     mode http
Jan 20 14:44:37 compute-0 ovn_metadata_agent[160049]:     option httplog
Jan 20 14:44:37 compute-0 ovn_metadata_agent[160049]:     option dontlognull
Jan 20 14:44:37 compute-0 ovn_metadata_agent[160049]:     option http-server-close
Jan 20 14:44:37 compute-0 ovn_metadata_agent[160049]:     option forwardfor
Jan 20 14:44:37 compute-0 ovn_metadata_agent[160049]:     retries                 3
Jan 20 14:44:37 compute-0 ovn_metadata_agent[160049]:     timeout http-request    30s
Jan 20 14:44:37 compute-0 ovn_metadata_agent[160049]:     timeout connect         30s
Jan 20 14:44:37 compute-0 ovn_metadata_agent[160049]:     timeout client          32s
Jan 20 14:44:37 compute-0 ovn_metadata_agent[160049]:     timeout server          32s
Jan 20 14:44:37 compute-0 ovn_metadata_agent[160049]:     timeout http-keep-alive 30s
Jan 20 14:44:37 compute-0 ovn_metadata_agent[160049]: 
Jan 20 14:44:37 compute-0 ovn_metadata_agent[160049]: 
Jan 20 14:44:37 compute-0 ovn_metadata_agent[160049]: listen listener
Jan 20 14:44:37 compute-0 ovn_metadata_agent[160049]:     bind 169.254.169.254:80
Jan 20 14:44:37 compute-0 ovn_metadata_agent[160049]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 14:44:37 compute-0 ovn_metadata_agent[160049]:     http-request add-header X-OVN-Network-ID f26bc93b-7963-4d81-a34c-3fca60d66f94
Jan 20 14:44:37 compute-0 ovn_metadata_agent[160049]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 14:44:37 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:44:37.555 160071 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f26bc93b-7963-4d81-a34c-3fca60d66f94', 'env', 'PROCESS_TAG=haproxy-f26bc93b-7963-4d81-a34c-3fca60d66f94', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f26bc93b-7963-4d81-a34c-3fca60d66f94.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 14:44:37 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/4240458191' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:44:37 compute-0 nova_compute[250018]: 2026-01-20 14:44:37.704 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:44:37 compute-0 nova_compute[250018]: 2026-01-20 14:44:37.705 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768920277.7037945, f9373561-17a5-410b-aa05-07c34a291c53 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:44:37 compute-0 nova_compute[250018]: 2026-01-20 14:44:37.706 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: f9373561-17a5-410b-aa05-07c34a291c53] VM Started (Lifecycle Event)
Jan 20 14:44:37 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1688: 321 pgs: 321 active+clean; 173 MiB data, 760 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 3.4 MiB/s wr, 188 op/s
Jan 20 14:44:37 compute-0 nova_compute[250018]: 2026-01-20 14:44:37.734 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: f9373561-17a5-410b-aa05-07c34a291c53] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:44:37 compute-0 nova_compute[250018]: 2026-01-20 14:44:37.739 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768920277.7038875, f9373561-17a5-410b-aa05-07c34a291c53 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:44:37 compute-0 nova_compute[250018]: 2026-01-20 14:44:37.739 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: f9373561-17a5-410b-aa05-07c34a291c53] VM Paused (Lifecycle Event)
Jan 20 14:44:37 compute-0 nova_compute[250018]: 2026-01-20 14:44:37.760 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: f9373561-17a5-410b-aa05-07c34a291c53] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:44:37 compute-0 nova_compute[250018]: 2026-01-20 14:44:37.763 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: f9373561-17a5-410b-aa05-07c34a291c53] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:44:37 compute-0 nova_compute[250018]: 2026-01-20 14:44:37.786 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: f9373561-17a5-410b-aa05-07c34a291c53] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:44:37 compute-0 podman[301164]: 2026-01-20 14:44:37.894091571 +0000 UTC m=+0.046261765 container create 8db864a93838dd72c25574890c6c88e5ed10178d74ddec82cd0290d570280174 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f26bc93b-7963-4d81-a34c-3fca60d66f94, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 14:44:37 compute-0 systemd[1]: Started libpod-conmon-8db864a93838dd72c25574890c6c88e5ed10178d74ddec82cd0290d570280174.scope.
Jan 20 14:44:37 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:44:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/114924a62fd73fc31422fa13398c023946186b840eca1178ff23f173ac7d8933/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 14:44:37 compute-0 podman[301164]: 2026-01-20 14:44:37.867663455 +0000 UTC m=+0.019833669 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 14:44:37 compute-0 podman[301164]: 2026-01-20 14:44:37.972445484 +0000 UTC m=+0.124615688 container init 8db864a93838dd72c25574890c6c88e5ed10178d74ddec82cd0290d570280174 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f26bc93b-7963-4d81-a34c-3fca60d66f94, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3)
Jan 20 14:44:37 compute-0 podman[301164]: 2026-01-20 14:44:37.977622875 +0000 UTC m=+0.129793069 container start 8db864a93838dd72c25574890c6c88e5ed10178d74ddec82cd0290d570280174 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f26bc93b-7963-4d81-a34c-3fca60d66f94, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 20 14:44:37 compute-0 neutron-haproxy-ovnmeta-f26bc93b-7963-4d81-a34c-3fca60d66f94[301179]: [NOTICE]   (301183) : New worker (301185) forked
Jan 20 14:44:37 compute-0 neutron-haproxy-ovnmeta-f26bc93b-7963-4d81-a34c-3fca60d66f94[301179]: [NOTICE]   (301183) : Loading success.
Jan 20 14:44:38 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:44:38 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:44:38 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:44:38 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:44:38.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:44:38 compute-0 nova_compute[250018]: 2026-01-20 14:44:38.635 250022 DEBUG nova.compute.manager [req-1e3003e7-88a0-482d-96de-d03c3f40fb2a req-48df5027-18a3-49fc-a311-54f7530bead5 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f9373561-17a5-410b-aa05-07c34a291c53] Received event network-vif-plugged-62633e3d-4aea-4f14-8c3a-15b53f508832 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:44:38 compute-0 nova_compute[250018]: 2026-01-20 14:44:38.636 250022 DEBUG oslo_concurrency.lockutils [req-1e3003e7-88a0-482d-96de-d03c3f40fb2a req-48df5027-18a3-49fc-a311-54f7530bead5 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "f9373561-17a5-410b-aa05-07c34a291c53-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:44:38 compute-0 nova_compute[250018]: 2026-01-20 14:44:38.636 250022 DEBUG oslo_concurrency.lockutils [req-1e3003e7-88a0-482d-96de-d03c3f40fb2a req-48df5027-18a3-49fc-a311-54f7530bead5 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "f9373561-17a5-410b-aa05-07c34a291c53-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:44:38 compute-0 nova_compute[250018]: 2026-01-20 14:44:38.637 250022 DEBUG oslo_concurrency.lockutils [req-1e3003e7-88a0-482d-96de-d03c3f40fb2a req-48df5027-18a3-49fc-a311-54f7530bead5 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "f9373561-17a5-410b-aa05-07c34a291c53-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:44:38 compute-0 nova_compute[250018]: 2026-01-20 14:44:38.637 250022 DEBUG nova.compute.manager [req-1e3003e7-88a0-482d-96de-d03c3f40fb2a req-48df5027-18a3-49fc-a311-54f7530bead5 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f9373561-17a5-410b-aa05-07c34a291c53] Processing event network-vif-plugged-62633e3d-4aea-4f14-8c3a-15b53f508832 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 20 14:44:38 compute-0 nova_compute[250018]: 2026-01-20 14:44:38.637 250022 DEBUG nova.compute.manager [req-1e3003e7-88a0-482d-96de-d03c3f40fb2a req-48df5027-18a3-49fc-a311-54f7530bead5 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f9373561-17a5-410b-aa05-07c34a291c53] Received event network-vif-plugged-62633e3d-4aea-4f14-8c3a-15b53f508832 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:44:38 compute-0 nova_compute[250018]: 2026-01-20 14:44:38.637 250022 DEBUG oslo_concurrency.lockutils [req-1e3003e7-88a0-482d-96de-d03c3f40fb2a req-48df5027-18a3-49fc-a311-54f7530bead5 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "f9373561-17a5-410b-aa05-07c34a291c53-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:44:38 compute-0 nova_compute[250018]: 2026-01-20 14:44:38.637 250022 DEBUG oslo_concurrency.lockutils [req-1e3003e7-88a0-482d-96de-d03c3f40fb2a req-48df5027-18a3-49fc-a311-54f7530bead5 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "f9373561-17a5-410b-aa05-07c34a291c53-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:44:38 compute-0 nova_compute[250018]: 2026-01-20 14:44:38.638 250022 DEBUG oslo_concurrency.lockutils [req-1e3003e7-88a0-482d-96de-d03c3f40fb2a req-48df5027-18a3-49fc-a311-54f7530bead5 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "f9373561-17a5-410b-aa05-07c34a291c53-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:44:38 compute-0 nova_compute[250018]: 2026-01-20 14:44:38.638 250022 DEBUG nova.compute.manager [req-1e3003e7-88a0-482d-96de-d03c3f40fb2a req-48df5027-18a3-49fc-a311-54f7530bead5 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f9373561-17a5-410b-aa05-07c34a291c53] No waiting events found dispatching network-vif-plugged-62633e3d-4aea-4f14-8c3a-15b53f508832 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:44:38 compute-0 nova_compute[250018]: 2026-01-20 14:44:38.638 250022 WARNING nova.compute.manager [req-1e3003e7-88a0-482d-96de-d03c3f40fb2a req-48df5027-18a3-49fc-a311-54f7530bead5 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f9373561-17a5-410b-aa05-07c34a291c53] Received unexpected event network-vif-plugged-62633e3d-4aea-4f14-8c3a-15b53f508832 for instance with vm_state building and task_state spawning.
Jan 20 14:44:38 compute-0 nova_compute[250018]: 2026-01-20 14:44:38.639 250022 DEBUG nova.compute.manager [None req-a793309f-6546-4a6f-be3a-6f4fc990ee50 5c3328d5d79641b0bc4d2f97582ca3cf 56a999e2b19840cab7a889876343a7db - - default default] [instance: f9373561-17a5-410b-aa05-07c34a291c53] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 14:44:38 compute-0 nova_compute[250018]: 2026-01-20 14:44:38.646 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768920278.6459422, f9373561-17a5-410b-aa05-07c34a291c53 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:44:38 compute-0 nova_compute[250018]: 2026-01-20 14:44:38.647 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: f9373561-17a5-410b-aa05-07c34a291c53] VM Resumed (Lifecycle Event)
Jan 20 14:44:38 compute-0 nova_compute[250018]: 2026-01-20 14:44:38.649 250022 DEBUG nova.virt.libvirt.driver [None req-a793309f-6546-4a6f-be3a-6f4fc990ee50 5c3328d5d79641b0bc4d2f97582ca3cf 56a999e2b19840cab7a889876343a7db - - default default] [instance: f9373561-17a5-410b-aa05-07c34a291c53] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 14:44:38 compute-0 nova_compute[250018]: 2026-01-20 14:44:38.653 250022 INFO nova.virt.libvirt.driver [-] [instance: f9373561-17a5-410b-aa05-07c34a291c53] Instance spawned successfully.
Jan 20 14:44:38 compute-0 nova_compute[250018]: 2026-01-20 14:44:38.653 250022 DEBUG nova.virt.libvirt.driver [None req-a793309f-6546-4a6f-be3a-6f4fc990ee50 5c3328d5d79641b0bc4d2f97582ca3cf 56a999e2b19840cab7a889876343a7db - - default default] [instance: f9373561-17a5-410b-aa05-07c34a291c53] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 14:44:38 compute-0 nova_compute[250018]: 2026-01-20 14:44:38.675 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: f9373561-17a5-410b-aa05-07c34a291c53] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:44:38 compute-0 nova_compute[250018]: 2026-01-20 14:44:38.681 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: f9373561-17a5-410b-aa05-07c34a291c53] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:44:38 compute-0 nova_compute[250018]: 2026-01-20 14:44:38.686 250022 DEBUG nova.virt.libvirt.driver [None req-a793309f-6546-4a6f-be3a-6f4fc990ee50 5c3328d5d79641b0bc4d2f97582ca3cf 56a999e2b19840cab7a889876343a7db - - default default] [instance: f9373561-17a5-410b-aa05-07c34a291c53] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:44:38 compute-0 nova_compute[250018]: 2026-01-20 14:44:38.687 250022 DEBUG nova.virt.libvirt.driver [None req-a793309f-6546-4a6f-be3a-6f4fc990ee50 5c3328d5d79641b0bc4d2f97582ca3cf 56a999e2b19840cab7a889876343a7db - - default default] [instance: f9373561-17a5-410b-aa05-07c34a291c53] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:44:38 compute-0 nova_compute[250018]: 2026-01-20 14:44:38.687 250022 DEBUG nova.virt.libvirt.driver [None req-a793309f-6546-4a6f-be3a-6f4fc990ee50 5c3328d5d79641b0bc4d2f97582ca3cf 56a999e2b19840cab7a889876343a7db - - default default] [instance: f9373561-17a5-410b-aa05-07c34a291c53] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:44:38 compute-0 nova_compute[250018]: 2026-01-20 14:44:38.687 250022 DEBUG nova.virt.libvirt.driver [None req-a793309f-6546-4a6f-be3a-6f4fc990ee50 5c3328d5d79641b0bc4d2f97582ca3cf 56a999e2b19840cab7a889876343a7db - - default default] [instance: f9373561-17a5-410b-aa05-07c34a291c53] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:44:38 compute-0 nova_compute[250018]: 2026-01-20 14:44:38.688 250022 DEBUG nova.virt.libvirt.driver [None req-a793309f-6546-4a6f-be3a-6f4fc990ee50 5c3328d5d79641b0bc4d2f97582ca3cf 56a999e2b19840cab7a889876343a7db - - default default] [instance: f9373561-17a5-410b-aa05-07c34a291c53] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:44:38 compute-0 nova_compute[250018]: 2026-01-20 14:44:38.688 250022 DEBUG nova.virt.libvirt.driver [None req-a793309f-6546-4a6f-be3a-6f4fc990ee50 5c3328d5d79641b0bc4d2f97582ca3cf 56a999e2b19840cab7a889876343a7db - - default default] [instance: f9373561-17a5-410b-aa05-07c34a291c53] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:44:38 compute-0 nova_compute[250018]: 2026-01-20 14:44:38.709 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: f9373561-17a5-410b-aa05-07c34a291c53] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:44:38 compute-0 nova_compute[250018]: 2026-01-20 14:44:38.756 250022 INFO nova.compute.manager [None req-a793309f-6546-4a6f-be3a-6f4fc990ee50 5c3328d5d79641b0bc4d2f97582ca3cf 56a999e2b19840cab7a889876343a7db - - default default] [instance: f9373561-17a5-410b-aa05-07c34a291c53] Took 7.72 seconds to spawn the instance on the hypervisor.
Jan 20 14:44:38 compute-0 nova_compute[250018]: 2026-01-20 14:44:38.756 250022 DEBUG nova.compute.manager [None req-a793309f-6546-4a6f-be3a-6f4fc990ee50 5c3328d5d79641b0bc4d2f97582ca3cf 56a999e2b19840cab7a889876343a7db - - default default] [instance: f9373561-17a5-410b-aa05-07c34a291c53] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:44:38 compute-0 ceph-mon[74360]: pgmap v1688: 321 pgs: 321 active+clean; 173 MiB data, 760 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 3.4 MiB/s wr, 188 op/s
Jan 20 14:44:38 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/1059237412' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:44:38 compute-0 nova_compute[250018]: 2026-01-20 14:44:38.821 250022 INFO nova.compute.manager [None req-a793309f-6546-4a6f-be3a-6f4fc990ee50 5c3328d5d79641b0bc4d2f97582ca3cf 56a999e2b19840cab7a889876343a7db - - default default] [instance: f9373561-17a5-410b-aa05-07c34a291c53] Took 8.59 seconds to build instance.
Jan 20 14:44:38 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:44:38 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:44:38 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:44:38.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:44:38 compute-0 nova_compute[250018]: 2026-01-20 14:44:38.846 250022 DEBUG oslo_concurrency.lockutils [None req-a793309f-6546-4a6f-be3a-6f4fc990ee50 5c3328d5d79641b0bc4d2f97582ca3cf 56a999e2b19840cab7a889876343a7db - - default default] Lock "f9373561-17a5-410b-aa05-07c34a291c53" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.681s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:44:39 compute-0 nova_compute[250018]: 2026-01-20 14:44:39.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:44:39 compute-0 nova_compute[250018]: 2026-01-20 14:44:39.052 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 14:44:39 compute-0 nova_compute[250018]: 2026-01-20 14:44:39.052 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 14:44:39 compute-0 nova_compute[250018]: 2026-01-20 14:44:39.538 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "refresh_cache-f9373561-17a5-410b-aa05-07c34a291c53" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:44:39 compute-0 nova_compute[250018]: 2026-01-20 14:44:39.538 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquired lock "refresh_cache-f9373561-17a5-410b-aa05-07c34a291c53" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:44:39 compute-0 nova_compute[250018]: 2026-01-20 14:44:39.539 250022 DEBUG nova.network.neutron [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] [instance: f9373561-17a5-410b-aa05-07c34a291c53] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 20 14:44:39 compute-0 nova_compute[250018]: 2026-01-20 14:44:39.539 250022 DEBUG nova.objects.instance [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lazy-loading 'info_cache' on Instance uuid f9373561-17a5-410b-aa05-07c34a291c53 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:44:39 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1689: 321 pgs: 321 active+clean; 193 MiB data, 762 MiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 3.8 MiB/s wr, 176 op/s
Jan 20 14:44:40 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/4220284943' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:44:40 compute-0 ceph-mon[74360]: pgmap v1689: 321 pgs: 321 active+clean; 193 MiB data, 762 MiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 3.8 MiB/s wr, 176 op/s
Jan 20 14:44:40 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:44:40 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:44:40 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:44:40.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:44:40 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:44:40 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2622766125' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:44:40 compute-0 nova_compute[250018]: 2026-01-20 14:44:40.835 250022 DEBUG nova.compute.manager [None req-3093d6c1-445a-4544-85f4-ebcc7f831c00 5c3328d5d79641b0bc4d2f97582ca3cf 56a999e2b19840cab7a889876343a7db - - default default] [instance: f9373561-17a5-410b-aa05-07c34a291c53] Getting vnc console get_vnc_console /usr/lib/python3.9/site-packages/nova/compute/manager.py:7196
Jan 20 14:44:40 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:44:40 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:44:40 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:44:40.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:44:41 compute-0 nova_compute[250018]: 2026-01-20 14:44:41.331 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:44:41 compute-0 nova_compute[250018]: 2026-01-20 14:44:41.340 250022 DEBUG nova.network.neutron [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] [instance: f9373561-17a5-410b-aa05-07c34a291c53] Updating instance_info_cache with network_info: [{"id": "62633e3d-4aea-4f14-8c3a-15b53f508832", "address": "fa:16:3e:ac:d6:51", "network": {"id": "f26bc93b-7963-4d81-a34c-3fca60d66f94", "bridge": "br-int", "label": "tempest-NoVNCConsoleTestJSON-1480508232-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56a999e2b19840cab7a889876343a7db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62633e3d-4a", "ovs_interfaceid": "62633e3d-4aea-4f14-8c3a-15b53f508832", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:44:41 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/2622766125' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:44:41 compute-0 nova_compute[250018]: 2026-01-20 14:44:41.367 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Releasing lock "refresh_cache-f9373561-17a5-410b-aa05-07c34a291c53" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:44:41 compute-0 nova_compute[250018]: 2026-01-20 14:44:41.368 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] [instance: f9373561-17a5-410b-aa05-07c34a291c53] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 20 14:44:41 compute-0 nova_compute[250018]: 2026-01-20 14:44:41.433 250022 DEBUG nova.compute.manager [None req-0080a6c6-a8ba-4c2b-b609-2d0768185e61 5c3328d5d79641b0bc4d2f97582ca3cf 56a999e2b19840cab7a889876343a7db - - default default] [instance: f9373561-17a5-410b-aa05-07c34a291c53] Getting vnc console get_vnc_console /usr/lib/python3.9/site-packages/nova/compute/manager.py:7196
Jan 20 14:44:41 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1690: 321 pgs: 321 active+clean; 227 MiB data, 777 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 4.1 MiB/s wr, 140 op/s
Jan 20 14:44:41 compute-0 nova_compute[250018]: 2026-01-20 14:44:41.926 250022 DEBUG oslo_concurrency.lockutils [None req-2906175d-3297-4fef-bf7d-e0ea1b989e18 5c3328d5d79641b0bc4d2f97582ca3cf 56a999e2b19840cab7a889876343a7db - - default default] Acquiring lock "f9373561-17a5-410b-aa05-07c34a291c53" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:44:41 compute-0 nova_compute[250018]: 2026-01-20 14:44:41.926 250022 DEBUG oslo_concurrency.lockutils [None req-2906175d-3297-4fef-bf7d-e0ea1b989e18 5c3328d5d79641b0bc4d2f97582ca3cf 56a999e2b19840cab7a889876343a7db - - default default] Lock "f9373561-17a5-410b-aa05-07c34a291c53" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:44:41 compute-0 nova_compute[250018]: 2026-01-20 14:44:41.927 250022 DEBUG oslo_concurrency.lockutils [None req-2906175d-3297-4fef-bf7d-e0ea1b989e18 5c3328d5d79641b0bc4d2f97582ca3cf 56a999e2b19840cab7a889876343a7db - - default default] Acquiring lock "f9373561-17a5-410b-aa05-07c34a291c53-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:44:41 compute-0 nova_compute[250018]: 2026-01-20 14:44:41.927 250022 DEBUG oslo_concurrency.lockutils [None req-2906175d-3297-4fef-bf7d-e0ea1b989e18 5c3328d5d79641b0bc4d2f97582ca3cf 56a999e2b19840cab7a889876343a7db - - default default] Lock "f9373561-17a5-410b-aa05-07c34a291c53-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:44:41 compute-0 nova_compute[250018]: 2026-01-20 14:44:41.927 250022 DEBUG oslo_concurrency.lockutils [None req-2906175d-3297-4fef-bf7d-e0ea1b989e18 5c3328d5d79641b0bc4d2f97582ca3cf 56a999e2b19840cab7a889876343a7db - - default default] Lock "f9373561-17a5-410b-aa05-07c34a291c53-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:44:41 compute-0 nova_compute[250018]: 2026-01-20 14:44:41.928 250022 INFO nova.compute.manager [None req-2906175d-3297-4fef-bf7d-e0ea1b989e18 5c3328d5d79641b0bc4d2f97582ca3cf 56a999e2b19840cab7a889876343a7db - - default default] [instance: f9373561-17a5-410b-aa05-07c34a291c53] Terminating instance
Jan 20 14:44:41 compute-0 nova_compute[250018]: 2026-01-20 14:44:41.929 250022 DEBUG nova.compute.manager [None req-2906175d-3297-4fef-bf7d-e0ea1b989e18 5c3328d5d79641b0bc4d2f97582ca3cf 56a999e2b19840cab7a889876343a7db - - default default] [instance: f9373561-17a5-410b-aa05-07c34a291c53] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 14:44:41 compute-0 kernel: tap62633e3d-4a (unregistering): left promiscuous mode
Jan 20 14:44:41 compute-0 NetworkManager[48960]: <info>  [1768920281.9725] device (tap62633e3d-4a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 14:44:41 compute-0 nova_compute[250018]: 2026-01-20 14:44:41.986 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:44:41 compute-0 ovn_controller[148666]: 2026-01-20T14:44:41Z|00291|binding|INFO|Releasing lport 62633e3d-4aea-4f14-8c3a-15b53f508832 from this chassis (sb_readonly=0)
Jan 20 14:44:41 compute-0 ovn_controller[148666]: 2026-01-20T14:44:41Z|00292|binding|INFO|Setting lport 62633e3d-4aea-4f14-8c3a-15b53f508832 down in Southbound
Jan 20 14:44:41 compute-0 ovn_controller[148666]: 2026-01-20T14:44:41Z|00293|binding|INFO|Removing iface tap62633e3d-4a ovn-installed in OVS
Jan 20 14:44:41 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:44:41.994 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ac:d6:51 10.100.0.5'], port_security=['fa:16:3e:ac:d6:51 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'f9373561-17a5-410b-aa05-07c34a291c53', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f26bc93b-7963-4d81-a34c-3fca60d66f94', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '56a999e2b19840cab7a889876343a7db', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f83fc749-8eea-4fd7-b0f5-b6e18e8db925', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8300c9f3-ccdb-4750-a44f-15efa22cea8c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=62633e3d-4aea-4f14-8c3a-15b53f508832) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:44:41 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:44:41.995 160071 INFO neutron.agent.ovn.metadata.agent [-] Port 62633e3d-4aea-4f14-8c3a-15b53f508832 in datapath f26bc93b-7963-4d81-a34c-3fca60d66f94 unbound from our chassis
Jan 20 14:44:41 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:44:41.997 160071 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f26bc93b-7963-4d81-a34c-3fca60d66f94, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 14:44:41 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:44:41.998 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[1ab4f25d-8305-412e-8805-c65cd049e70e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:44:41 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:44:41.999 160071 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f26bc93b-7963-4d81-a34c-3fca60d66f94 namespace which is not needed anymore
Jan 20 14:44:42 compute-0 nova_compute[250018]: 2026-01-20 14:44:42.023 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:44:42 compute-0 systemd[1]: machine-qemu\x2d38\x2dinstance\x2d00000059.scope: Deactivated successfully.
Jan 20 14:44:42 compute-0 systemd[1]: machine-qemu\x2d38\x2dinstance\x2d00000059.scope: Consumed 3.856s CPU time.
Jan 20 14:44:42 compute-0 systemd-machined[216401]: Machine qemu-38-instance-00000059 terminated.
Jan 20 14:44:42 compute-0 neutron-haproxy-ovnmeta-f26bc93b-7963-4d81-a34c-3fca60d66f94[301179]: [NOTICE]   (301183) : haproxy version is 2.8.14-c23fe91
Jan 20 14:44:42 compute-0 neutron-haproxy-ovnmeta-f26bc93b-7963-4d81-a34c-3fca60d66f94[301179]: [NOTICE]   (301183) : path to executable is /usr/sbin/haproxy
Jan 20 14:44:42 compute-0 neutron-haproxy-ovnmeta-f26bc93b-7963-4d81-a34c-3fca60d66f94[301179]: [WARNING]  (301183) : Exiting Master process...
Jan 20 14:44:42 compute-0 neutron-haproxy-ovnmeta-f26bc93b-7963-4d81-a34c-3fca60d66f94[301179]: [ALERT]    (301183) : Current worker (301185) exited with code 143 (Terminated)
Jan 20 14:44:42 compute-0 neutron-haproxy-ovnmeta-f26bc93b-7963-4d81-a34c-3fca60d66f94[301179]: [WARNING]  (301183) : All workers exited. Exiting... (0)
Jan 20 14:44:42 compute-0 systemd[1]: libpod-8db864a93838dd72c25574890c6c88e5ed10178d74ddec82cd0290d570280174.scope: Deactivated successfully.
Jan 20 14:44:42 compute-0 podman[301221]: 2026-01-20 14:44:42.137894073 +0000 UTC m=+0.046179003 container died 8db864a93838dd72c25574890c6c88e5ed10178d74ddec82cd0290d570280174 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f26bc93b-7963-4d81-a34c-3fca60d66f94, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 14:44:42 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8db864a93838dd72c25574890c6c88e5ed10178d74ddec82cd0290d570280174-userdata-shm.mount: Deactivated successfully.
Jan 20 14:44:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-114924a62fd73fc31422fa13398c023946186b840eca1178ff23f173ac7d8933-merged.mount: Deactivated successfully.
Jan 20 14:44:42 compute-0 podman[301221]: 2026-01-20 14:44:42.183946851 +0000 UTC m=+0.092231761 container cleanup 8db864a93838dd72c25574890c6c88e5ed10178d74ddec82cd0290d570280174 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f26bc93b-7963-4d81-a34c-3fca60d66f94, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:44:42 compute-0 systemd[1]: libpod-conmon-8db864a93838dd72c25574890c6c88e5ed10178d74ddec82cd0290d570280174.scope: Deactivated successfully.
Jan 20 14:44:42 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:44:42 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:44:42 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:44:42.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:44:42 compute-0 podman[301263]: 2026-01-20 14:44:42.249719353 +0000 UTC m=+0.043525650 container remove 8db864a93838dd72c25574890c6c88e5ed10178d74ddec82cd0290d570280174 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f26bc93b-7963-4d81-a34c-3fca60d66f94, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 14:44:42 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:44:42.256 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[e91389ef-1ba4-4783-89ac-df6c2f5e5d17]: (4, ('Tue Jan 20 02:44:42 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-f26bc93b-7963-4d81-a34c-3fca60d66f94 (8db864a93838dd72c25574890c6c88e5ed10178d74ddec82cd0290d570280174)\n8db864a93838dd72c25574890c6c88e5ed10178d74ddec82cd0290d570280174\nTue Jan 20 02:44:42 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-f26bc93b-7963-4d81-a34c-3fca60d66f94 (8db864a93838dd72c25574890c6c88e5ed10178d74ddec82cd0290d570280174)\n8db864a93838dd72c25574890c6c88e5ed10178d74ddec82cd0290d570280174\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:44:42 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:44:42.258 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[17daee4f-9511-45d1-8af7-436d3e21690e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:44:42 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:44:42.260 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf26bc93b-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:44:42 compute-0 kernel: tapf26bc93b-70: left promiscuous mode
Jan 20 14:44:42 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:44:42.285 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[b92b17c9-350e-4301-ba89-158e485fbec3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:44:42 compute-0 nova_compute[250018]: 2026-01-20 14:44:42.295 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:44:42 compute-0 nova_compute[250018]: 2026-01-20 14:44:42.297 250022 INFO nova.virt.libvirt.driver [-] [instance: f9373561-17a5-410b-aa05-07c34a291c53] Instance destroyed successfully.
Jan 20 14:44:42 compute-0 nova_compute[250018]: 2026-01-20 14:44:42.298 250022 DEBUG nova.objects.instance [None req-2906175d-3297-4fef-bf7d-e0ea1b989e18 5c3328d5d79641b0bc4d2f97582ca3cf 56a999e2b19840cab7a889876343a7db - - default default] Lazy-loading 'resources' on Instance uuid f9373561-17a5-410b-aa05-07c34a291c53 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:44:42 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:44:42.299 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[469ff1ea-6df7-4b4c-9914-607308b9a71b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:44:42 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:44:42.301 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[2400c26f-3cf3-400b-ae7b-918e12e6fc16]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:44:42 compute-0 nova_compute[250018]: 2026-01-20 14:44:42.313 250022 DEBUG nova.virt.libvirt.vif [None req-2906175d-3297-4fef-bf7d-e0ea1b989e18 5c3328d5d79641b0bc4d2f97582ca3cf 56a999e2b19840cab7a889876343a7db - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:44:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-NoVNCConsoleTestJSON-server-1152100167',display_name='tempest-NoVNCConsoleTestJSON-server-1152100167',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-novncconsoletestjson-server-1152100167',id=89,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:44:38Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='56a999e2b19840cab7a889876343a7db',ramdisk_id='',reservation_id='r-6ieuzakv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_di
sk='1',image_min_ram='0',owner_project_name='tempest-NoVNCConsoleTestJSON-1905064762',owner_user_name='tempest-NoVNCConsoleTestJSON-1905064762-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T14:44:38Z,user_data=None,user_id='5c3328d5d79641b0bc4d2f97582ca3cf',uuid=f9373561-17a5-410b-aa05-07c34a291c53,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "62633e3d-4aea-4f14-8c3a-15b53f508832", "address": "fa:16:3e:ac:d6:51", "network": {"id": "f26bc93b-7963-4d81-a34c-3fca60d66f94", "bridge": "br-int", "label": "tempest-NoVNCConsoleTestJSON-1480508232-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56a999e2b19840cab7a889876343a7db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62633e3d-4a", "ovs_interfaceid": "62633e3d-4aea-4f14-8c3a-15b53f508832", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 14:44:42 compute-0 nova_compute[250018]: 2026-01-20 14:44:42.313 250022 DEBUG nova.network.os_vif_util [None req-2906175d-3297-4fef-bf7d-e0ea1b989e18 5c3328d5d79641b0bc4d2f97582ca3cf 56a999e2b19840cab7a889876343a7db - - default default] Converting VIF {"id": "62633e3d-4aea-4f14-8c3a-15b53f508832", "address": "fa:16:3e:ac:d6:51", "network": {"id": "f26bc93b-7963-4d81-a34c-3fca60d66f94", "bridge": "br-int", "label": "tempest-NoVNCConsoleTestJSON-1480508232-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56a999e2b19840cab7a889876343a7db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62633e3d-4a", "ovs_interfaceid": "62633e3d-4aea-4f14-8c3a-15b53f508832", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:44:42 compute-0 nova_compute[250018]: 2026-01-20 14:44:42.314 250022 DEBUG nova.network.os_vif_util [None req-2906175d-3297-4fef-bf7d-e0ea1b989e18 5c3328d5d79641b0bc4d2f97582ca3cf 56a999e2b19840cab7a889876343a7db - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ac:d6:51,bridge_name='br-int',has_traffic_filtering=True,id=62633e3d-4aea-4f14-8c3a-15b53f508832,network=Network(f26bc93b-7963-4d81-a34c-3fca60d66f94),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap62633e3d-4a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:44:42 compute-0 nova_compute[250018]: 2026-01-20 14:44:42.314 250022 DEBUG os_vif [None req-2906175d-3297-4fef-bf7d-e0ea1b989e18 5c3328d5d79641b0bc4d2f97582ca3cf 56a999e2b19840cab7a889876343a7db - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ac:d6:51,bridge_name='br-int',has_traffic_filtering=True,id=62633e3d-4aea-4f14-8c3a-15b53f508832,network=Network(f26bc93b-7963-4d81-a34c-3fca60d66f94),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap62633e3d-4a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 14:44:42 compute-0 nova_compute[250018]: 2026-01-20 14:44:42.315 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:44:42 compute-0 nova_compute[250018]: 2026-01-20 14:44:42.315 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap62633e3d-4a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:44:42 compute-0 nova_compute[250018]: 2026-01-20 14:44:42.317 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:44:42 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:44:42.318 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[8f85aad2-5c94-4933-be0b-a2a72a707e27]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 621802, 'reachable_time': 16863, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 301283, 'error': None, 'target': 'ovnmeta-f26bc93b-7963-4d81-a34c-3fca60d66f94', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:44:42 compute-0 nova_compute[250018]: 2026-01-20 14:44:42.319 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:44:42 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:44:42.320 160517 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f26bc93b-7963-4d81-a34c-3fca60d66f94 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 14:44:42 compute-0 nova_compute[250018]: 2026-01-20 14:44:42.321 250022 INFO os_vif [None req-2906175d-3297-4fef-bf7d-e0ea1b989e18 5c3328d5d79641b0bc4d2f97582ca3cf 56a999e2b19840cab7a889876343a7db - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ac:d6:51,bridge_name='br-int',has_traffic_filtering=True,id=62633e3d-4aea-4f14-8c3a-15b53f508832,network=Network(f26bc93b-7963-4d81-a34c-3fca60d66f94),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap62633e3d-4a')
Jan 20 14:44:42 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:44:42.320 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[689468d7-11a2-46e7-bda7-0eed04967213]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:44:42 compute-0 systemd[1]: run-netns-ovnmeta\x2df26bc93b\x2d7963\x2d4d81\x2da34c\x2d3fca60d66f94.mount: Deactivated successfully.
Jan 20 14:44:42 compute-0 ceph-mon[74360]: pgmap v1690: 321 pgs: 321 active+clean; 227 MiB data, 777 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 4.1 MiB/s wr, 140 op/s
Jan 20 14:44:42 compute-0 nova_compute[250018]: 2026-01-20 14:44:42.706 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:44:42 compute-0 nova_compute[250018]: 2026-01-20 14:44:42.735 250022 INFO nova.virt.libvirt.driver [None req-2906175d-3297-4fef-bf7d-e0ea1b989e18 5c3328d5d79641b0bc4d2f97582ca3cf 56a999e2b19840cab7a889876343a7db - - default default] [instance: f9373561-17a5-410b-aa05-07c34a291c53] Deleting instance files /var/lib/nova/instances/f9373561-17a5-410b-aa05-07c34a291c53_del
Jan 20 14:44:42 compute-0 nova_compute[250018]: 2026-01-20 14:44:42.736 250022 INFO nova.virt.libvirt.driver [None req-2906175d-3297-4fef-bf7d-e0ea1b989e18 5c3328d5d79641b0bc4d2f97582ca3cf 56a999e2b19840cab7a889876343a7db - - default default] [instance: f9373561-17a5-410b-aa05-07c34a291c53] Deletion of /var/lib/nova/instances/f9373561-17a5-410b-aa05-07c34a291c53_del complete
Jan 20 14:44:42 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:44:42 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:44:42 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:44:42.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:44:43 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:44:43 compute-0 nova_compute[250018]: 2026-01-20 14:44:43.441 250022 INFO nova.compute.manager [None req-2906175d-3297-4fef-bf7d-e0ea1b989e18 5c3328d5d79641b0bc4d2f97582ca3cf 56a999e2b19840cab7a889876343a7db - - default default] [instance: f9373561-17a5-410b-aa05-07c34a291c53] Took 1.51 seconds to destroy the instance on the hypervisor.
Jan 20 14:44:43 compute-0 nova_compute[250018]: 2026-01-20 14:44:43.442 250022 DEBUG oslo.service.loopingcall [None req-2906175d-3297-4fef-bf7d-e0ea1b989e18 5c3328d5d79641b0bc4d2f97582ca3cf 56a999e2b19840cab7a889876343a7db - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 14:44:43 compute-0 nova_compute[250018]: 2026-01-20 14:44:43.443 250022 DEBUG nova.compute.manager [-] [instance: f9373561-17a5-410b-aa05-07c34a291c53] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 14:44:43 compute-0 nova_compute[250018]: 2026-01-20 14:44:43.443 250022 DEBUG nova.network.neutron [-] [instance: f9373561-17a5-410b-aa05-07c34a291c53] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 14:44:43 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1691: 321 pgs: 321 active+clean; 253 MiB data, 787 MiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 4.9 MiB/s wr, 184 op/s
Jan 20 14:44:44 compute-0 nova_compute[250018]: 2026-01-20 14:44:44.100 250022 DEBUG nova.compute.manager [req-572d480d-472c-402c-b639-7d59b405b7c3 req-baa2b056-ffbb-440c-8fdd-ff3d1fd9ddb6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f9373561-17a5-410b-aa05-07c34a291c53] Received event network-vif-unplugged-62633e3d-4aea-4f14-8c3a-15b53f508832 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:44:44 compute-0 nova_compute[250018]: 2026-01-20 14:44:44.101 250022 DEBUG oslo_concurrency.lockutils [req-572d480d-472c-402c-b639-7d59b405b7c3 req-baa2b056-ffbb-440c-8fdd-ff3d1fd9ddb6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "f9373561-17a5-410b-aa05-07c34a291c53-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:44:44 compute-0 nova_compute[250018]: 2026-01-20 14:44:44.102 250022 DEBUG oslo_concurrency.lockutils [req-572d480d-472c-402c-b639-7d59b405b7c3 req-baa2b056-ffbb-440c-8fdd-ff3d1fd9ddb6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "f9373561-17a5-410b-aa05-07c34a291c53-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:44:44 compute-0 nova_compute[250018]: 2026-01-20 14:44:44.102 250022 DEBUG oslo_concurrency.lockutils [req-572d480d-472c-402c-b639-7d59b405b7c3 req-baa2b056-ffbb-440c-8fdd-ff3d1fd9ddb6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "f9373561-17a5-410b-aa05-07c34a291c53-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:44:44 compute-0 nova_compute[250018]: 2026-01-20 14:44:44.102 250022 DEBUG nova.compute.manager [req-572d480d-472c-402c-b639-7d59b405b7c3 req-baa2b056-ffbb-440c-8fdd-ff3d1fd9ddb6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f9373561-17a5-410b-aa05-07c34a291c53] No waiting events found dispatching network-vif-unplugged-62633e3d-4aea-4f14-8c3a-15b53f508832 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:44:44 compute-0 nova_compute[250018]: 2026-01-20 14:44:44.103 250022 DEBUG nova.compute.manager [req-572d480d-472c-402c-b639-7d59b405b7c3 req-baa2b056-ffbb-440c-8fdd-ff3d1fd9ddb6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f9373561-17a5-410b-aa05-07c34a291c53] Received event network-vif-unplugged-62633e3d-4aea-4f14-8c3a-15b53f508832 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 20 14:44:44 compute-0 nova_compute[250018]: 2026-01-20 14:44:44.103 250022 DEBUG nova.compute.manager [req-572d480d-472c-402c-b639-7d59b405b7c3 req-baa2b056-ffbb-440c-8fdd-ff3d1fd9ddb6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f9373561-17a5-410b-aa05-07c34a291c53] Received event network-vif-plugged-62633e3d-4aea-4f14-8c3a-15b53f508832 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:44:44 compute-0 nova_compute[250018]: 2026-01-20 14:44:44.104 250022 DEBUG oslo_concurrency.lockutils [req-572d480d-472c-402c-b639-7d59b405b7c3 req-baa2b056-ffbb-440c-8fdd-ff3d1fd9ddb6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "f9373561-17a5-410b-aa05-07c34a291c53-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:44:44 compute-0 nova_compute[250018]: 2026-01-20 14:44:44.104 250022 DEBUG oslo_concurrency.lockutils [req-572d480d-472c-402c-b639-7d59b405b7c3 req-baa2b056-ffbb-440c-8fdd-ff3d1fd9ddb6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "f9373561-17a5-410b-aa05-07c34a291c53-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:44:44 compute-0 nova_compute[250018]: 2026-01-20 14:44:44.105 250022 DEBUG oslo_concurrency.lockutils [req-572d480d-472c-402c-b639-7d59b405b7c3 req-baa2b056-ffbb-440c-8fdd-ff3d1fd9ddb6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "f9373561-17a5-410b-aa05-07c34a291c53-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:44:44 compute-0 nova_compute[250018]: 2026-01-20 14:44:44.105 250022 DEBUG nova.compute.manager [req-572d480d-472c-402c-b639-7d59b405b7c3 req-baa2b056-ffbb-440c-8fdd-ff3d1fd9ddb6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f9373561-17a5-410b-aa05-07c34a291c53] No waiting events found dispatching network-vif-plugged-62633e3d-4aea-4f14-8c3a-15b53f508832 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:44:44 compute-0 nova_compute[250018]: 2026-01-20 14:44:44.105 250022 WARNING nova.compute.manager [req-572d480d-472c-402c-b639-7d59b405b7c3 req-baa2b056-ffbb-440c-8fdd-ff3d1fd9ddb6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f9373561-17a5-410b-aa05-07c34a291c53] Received unexpected event network-vif-plugged-62633e3d-4aea-4f14-8c3a-15b53f508832 for instance with vm_state active and task_state deleting.
Jan 20 14:44:44 compute-0 nova_compute[250018]: 2026-01-20 14:44:44.161 250022 DEBUG nova.network.neutron [-] [instance: f9373561-17a5-410b-aa05-07c34a291c53] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:44:44 compute-0 nova_compute[250018]: 2026-01-20 14:44:44.181 250022 INFO nova.compute.manager [-] [instance: f9373561-17a5-410b-aa05-07c34a291c53] Took 0.74 seconds to deallocate network for instance.
Jan 20 14:44:44 compute-0 nova_compute[250018]: 2026-01-20 14:44:44.228 250022 DEBUG oslo_concurrency.lockutils [None req-2906175d-3297-4fef-bf7d-e0ea1b989e18 5c3328d5d79641b0bc4d2f97582ca3cf 56a999e2b19840cab7a889876343a7db - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:44:44 compute-0 nova_compute[250018]: 2026-01-20 14:44:44.229 250022 DEBUG oslo_concurrency.lockutils [None req-2906175d-3297-4fef-bf7d-e0ea1b989e18 5c3328d5d79641b0bc4d2f97582ca3cf 56a999e2b19840cab7a889876343a7db - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:44:44 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:44:44 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:44:44 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:44:44.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:44:44 compute-0 nova_compute[250018]: 2026-01-20 14:44:44.312 250022 DEBUG nova.compute.manager [req-f746e2de-2be3-4736-8a6c-2cf2bfdd8ee0 req-62444325-642d-4014-9dc5-02d52a2aa369 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f9373561-17a5-410b-aa05-07c34a291c53] Received event network-vif-deleted-62633e3d-4aea-4f14-8c3a-15b53f508832 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:44:44 compute-0 nova_compute[250018]: 2026-01-20 14:44:44.320 250022 DEBUG oslo_concurrency.processutils [None req-2906175d-3297-4fef-bf7d-e0ea1b989e18 5c3328d5d79641b0bc4d2f97582ca3cf 56a999e2b19840cab7a889876343a7db - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:44:44 compute-0 ceph-mon[74360]: pgmap v1691: 321 pgs: 321 active+clean; 253 MiB data, 787 MiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 4.9 MiB/s wr, 184 op/s
Jan 20 14:44:44 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:44:44 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3417755684' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:44:44 compute-0 nova_compute[250018]: 2026-01-20 14:44:44.752 250022 DEBUG oslo_concurrency.processutils [None req-2906175d-3297-4fef-bf7d-e0ea1b989e18 5c3328d5d79641b0bc4d2f97582ca3cf 56a999e2b19840cab7a889876343a7db - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:44:44 compute-0 nova_compute[250018]: 2026-01-20 14:44:44.760 250022 DEBUG nova.compute.provider_tree [None req-2906175d-3297-4fef-bf7d-e0ea1b989e18 5c3328d5d79641b0bc4d2f97582ca3cf 56a999e2b19840cab7a889876343a7db - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:44:44 compute-0 nova_compute[250018]: 2026-01-20 14:44:44.808 250022 DEBUG nova.scheduler.client.report [None req-2906175d-3297-4fef-bf7d-e0ea1b989e18 5c3328d5d79641b0bc4d2f97582ca3cf 56a999e2b19840cab7a889876343a7db - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:44:44 compute-0 nova_compute[250018]: 2026-01-20 14:44:44.836 250022 DEBUG oslo_concurrency.lockutils [None req-2906175d-3297-4fef-bf7d-e0ea1b989e18 5c3328d5d79641b0bc4d2f97582ca3cf 56a999e2b19840cab7a889876343a7db - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.607s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:44:44 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:44:44 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:44:44 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:44:44.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:44:44 compute-0 nova_compute[250018]: 2026-01-20 14:44:44.857 250022 INFO nova.scheduler.client.report [None req-2906175d-3297-4fef-bf7d-e0ea1b989e18 5c3328d5d79641b0bc4d2f97582ca3cf 56a999e2b19840cab7a889876343a7db - - default default] Deleted allocations for instance f9373561-17a5-410b-aa05-07c34a291c53
Jan 20 14:44:44 compute-0 nova_compute[250018]: 2026-01-20 14:44:44.927 250022 DEBUG oslo_concurrency.lockutils [None req-2906175d-3297-4fef-bf7d-e0ea1b989e18 5c3328d5d79641b0bc4d2f97582ca3cf 56a999e2b19840cab7a889876343a7db - - default default] Lock "f9373561-17a5-410b-aa05-07c34a291c53" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:44:45 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/3502547636' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:44:45 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3417755684' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:44:45 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/714686278' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:44:45 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/2544638103' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:44:45 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1692: 321 pgs: 321 active+clean; 230 MiB data, 778 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 5.3 MiB/s wr, 208 op/s
Jan 20 14:44:46 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:44:46 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:44:46 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:44:46.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:44:46 compute-0 ceph-mon[74360]: pgmap v1692: 321 pgs: 321 active+clean; 230 MiB data, 778 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 5.3 MiB/s wr, 208 op/s
Jan 20 14:44:46 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/504919309' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:44:46 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:44:46 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:44:46 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:44:46.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:44:47 compute-0 sudo[301327]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:44:47 compute-0 sudo[301327]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:44:47 compute-0 sudo[301327]: pam_unix(sudo:session): session closed for user root
Jan 20 14:44:47 compute-0 sudo[301352]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:44:47 compute-0 sudo[301352]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:44:47 compute-0 sudo[301352]: pam_unix(sudo:session): session closed for user root
Jan 20 14:44:47 compute-0 nova_compute[250018]: 2026-01-20 14:44:47.317 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:44:47 compute-0 nova_compute[250018]: 2026-01-20 14:44:47.707 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:44:47 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1693: 321 pgs: 321 active+clean; 221 MiB data, 772 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 4.2 MiB/s wr, 185 op/s
Jan 20 14:44:47 compute-0 ceph-mon[74360]: pgmap v1693: 321 pgs: 321 active+clean; 221 MiB data, 772 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 4.2 MiB/s wr, 185 op/s
Jan 20 14:44:48 compute-0 sshd-session[301378]: Invalid user test from 157.245.78.139 port 54918
Jan 20 14:44:48 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:44:48 compute-0 sshd-session[301378]: Connection closed by invalid user test 157.245.78.139 port 54918 [preauth]
Jan 20 14:44:48 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:44:48 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:44:48 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:44:48.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:44:48 compute-0 nova_compute[250018]: 2026-01-20 14:44:48.362 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:44:48 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:44:48 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:44:48 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:44:48.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:44:49 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1694: 321 pgs: 321 active+clean; 253 MiB data, 791 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 4.9 MiB/s wr, 152 op/s
Jan 20 14:44:49 compute-0 ceph-mon[74360]: pgmap v1694: 321 pgs: 321 active+clean; 253 MiB data, 791 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 4.9 MiB/s wr, 152 op/s
Jan 20 14:44:50 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:44:50 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:44:50 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:44:50.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:44:50 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:44:50 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:44:50 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:44:50.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:44:51 compute-0 podman[301382]: 2026-01-20 14:44:51.527044398 +0000 UTC m=+0.096518006 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 20 14:44:51 compute-0 podman[301381]: 2026-01-20 14:44:51.527904791 +0000 UTC m=+0.102226581 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 14:44:51 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1695: 321 pgs: 321 active+clean; 260 MiB data, 792 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 4.3 MiB/s wr, 175 op/s
Jan 20 14:44:51 compute-0 ceph-mon[74360]: pgmap v1695: 321 pgs: 321 active+clean; 260 MiB data, 792 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 4.3 MiB/s wr, 175 op/s
Jan 20 14:44:52 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:44:52 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:44:52 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:44:52.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:44:52 compute-0 nova_compute[250018]: 2026-01-20 14:44:52.318 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:44:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:44:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:44:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:44:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:44:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:44:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:44:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Optimize plan auto_2026-01-20_14:44:52
Jan 20 14:44:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 14:44:52 compute-0 ceph-mgr[74653]: [balancer INFO root] do_upmap
Jan 20 14:44:52 compute-0 ceph-mgr[74653]: [balancer INFO root] pools ['.rgw.root', 'images', '.mgr', 'default.rgw.meta', 'backups', 'vms', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.control', 'cephfs.cephfs.data', 'default.rgw.log']
Jan 20 14:44:52 compute-0 ceph-mgr[74653]: [balancer INFO root] prepared 0/10 changes
Jan 20 14:44:52 compute-0 nova_compute[250018]: 2026-01-20 14:44:52.710 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:44:52 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:44:52 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:44:52 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:44:52.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:44:53 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:44:53 compute-0 nova_compute[250018]: 2026-01-20 14:44:53.418 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:44:53 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1696: 321 pgs: 321 active+clean; 260 MiB data, 792 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 3.1 MiB/s wr, 196 op/s
Jan 20 14:44:53 compute-0 ceph-mon[74360]: pgmap v1696: 321 pgs: 321 active+clean; 260 MiB data, 792 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 3.1 MiB/s wr, 196 op/s
Jan 20 14:44:54 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:44:54 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:44:54 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:44:54.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:44:54 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:44:54 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:44:54 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:44:54.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:44:55 compute-0 nova_compute[250018]: 2026-01-20 14:44:55.331 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:44:55 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:44:55.332 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=28, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=27) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:44:55 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:44:55.333 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 14:44:55 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/366812670' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:44:55 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/733160525' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:44:55 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1697: 321 pgs: 321 active+clean; 260 MiB data, 792 MiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 2.2 MiB/s wr, 216 op/s
Jan 20 14:44:56 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:44:56 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:44:56 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:44:56.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:44:56 compute-0 ceph-mon[74360]: pgmap v1697: 321 pgs: 321 active+clean; 260 MiB data, 792 MiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 2.2 MiB/s wr, 216 op/s
Jan 20 14:44:56 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:44:56 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:44:56 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:44:56.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:44:57 compute-0 nova_compute[250018]: 2026-01-20 14:44:57.296 250022 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768920282.2943418, f9373561-17a5-410b-aa05-07c34a291c53 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:44:57 compute-0 nova_compute[250018]: 2026-01-20 14:44:57.297 250022 INFO nova.compute.manager [-] [instance: f9373561-17a5-410b-aa05-07c34a291c53] VM Stopped (Lifecycle Event)
Jan 20 14:44:57 compute-0 nova_compute[250018]: 2026-01-20 14:44:57.316 250022 DEBUG nova.compute.manager [None req-b6c28f98-9b9e-4dae-8f71-5a24cfd2135d - - - - - -] [instance: f9373561-17a5-410b-aa05-07c34a291c53] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:44:57 compute-0 nova_compute[250018]: 2026-01-20 14:44:57.320 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:44:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 14:44:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 14:44:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:44:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:44:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:44:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:44:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:44:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:44:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:44:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:44:57 compute-0 nova_compute[250018]: 2026-01-20 14:44:57.712 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:44:57 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1698: 321 pgs: 321 active+clean; 260 MiB data, 792 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 190 op/s
Jan 20 14:44:57 compute-0 ceph-mon[74360]: pgmap v1698: 321 pgs: 321 active+clean; 260 MiB data, 792 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 190 op/s
Jan 20 14:44:58 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:44:58 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:44:58 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:44:58 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:44:58.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:44:58 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:44:58 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:44:58 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:44:58.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:44:59 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1699: 321 pgs: 321 active+clean; 260 MiB data, 792 MiB used, 20 GiB / 21 GiB avail; 5.0 MiB/s rd, 1.7 MiB/s wr, 205 op/s
Jan 20 14:45:00 compute-0 ceph-mon[74360]: pgmap v1699: 321 pgs: 321 active+clean; 260 MiB data, 792 MiB used, 20 GiB / 21 GiB avail; 5.0 MiB/s rd, 1.7 MiB/s wr, 205 op/s
Jan 20 14:45:00 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:45:00 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:45:00 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:45:00.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:45:00 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:45:00.335 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=367c1a2c-b16a-4828-ab5a-626bb50023b4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '28'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:45:00 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:45:00 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:45:00 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:45:00.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:45:01 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1700: 321 pgs: 321 active+clean; 260 MiB data, 792 MiB used, 20 GiB / 21 GiB avail; 5.1 MiB/s rd, 164 KiB/s wr, 211 op/s
Jan 20 14:45:02 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:45:02 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:45:02 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:45:02.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:45:02 compute-0 nova_compute[250018]: 2026-01-20 14:45:02.321 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:45:02 compute-0 ceph-mon[74360]: pgmap v1700: 321 pgs: 321 active+clean; 260 MiB data, 792 MiB used, 20 GiB / 21 GiB avail; 5.1 MiB/s rd, 164 KiB/s wr, 211 op/s
Jan 20 14:45:02 compute-0 nova_compute[250018]: 2026-01-20 14:45:02.713 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:45:02 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:45:02 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:45:02 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:45:02.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:45:03 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:45:03 compute-0 sudo[301431]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:45:03 compute-0 sudo[301431]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:45:03 compute-0 sudo[301431]: pam_unix(sudo:session): session closed for user root
Jan 20 14:45:03 compute-0 sudo[301456]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:45:03 compute-0 sudo[301456]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:45:03 compute-0 sudo[301456]: pam_unix(sudo:session): session closed for user root
Jan 20 14:45:03 compute-0 sudo[301481]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:45:03 compute-0 sudo[301481]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:45:03 compute-0 sudo[301481]: pam_unix(sudo:session): session closed for user root
Jan 20 14:45:03 compute-0 sudo[301506]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 14:45:03 compute-0 sudo[301506]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:45:03 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1701: 321 pgs: 321 active+clean; 263 MiB data, 801 MiB used, 20 GiB / 21 GiB avail; 4.8 MiB/s rd, 573 KiB/s wr, 193 op/s
Jan 20 14:45:03 compute-0 ceph-mon[74360]: pgmap v1701: 321 pgs: 321 active+clean; 263 MiB data, 801 MiB used, 20 GiB / 21 GiB avail; 4.8 MiB/s rd, 573 KiB/s wr, 193 op/s
Jan 20 14:45:04 compute-0 sudo[301506]: pam_unix(sudo:session): session closed for user root
Jan 20 14:45:04 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 14:45:04 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:45:04 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 20 14:45:04 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:45:04 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 20 14:45:04 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:45:04 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:45:04 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:45:04.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:45:04 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:45:04 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 2c1159e1-e41e-4038-a31a-f5b35a95f53c does not exist
Jan 20 14:45:04 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev cd9020e4-61be-4f4d-a733-0f3e77588a90 does not exist
Jan 20 14:45:04 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 98f33e13-15ea-47fb-a19d-c2a6ae9bf2fc does not exist
Jan 20 14:45:04 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 20 14:45:04 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:45:04 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 20 14:45:04 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:45:04 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 14:45:04 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:45:04 compute-0 sudo[301564]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:45:04 compute-0 sudo[301564]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:45:04 compute-0 sudo[301564]: pam_unix(sudo:session): session closed for user root
Jan 20 14:45:04 compute-0 sudo[301589]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:45:04 compute-0 sudo[301589]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:45:04 compute-0 sudo[301589]: pam_unix(sudo:session): session closed for user root
Jan 20 14:45:04 compute-0 sudo[301614]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:45:04 compute-0 sudo[301614]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:45:04 compute-0 sudo[301614]: pam_unix(sudo:session): session closed for user root
Jan 20 14:45:04 compute-0 sudo[301639]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 14:45:04 compute-0 sudo[301639]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:45:04 compute-0 podman[301705]: 2026-01-20 14:45:04.811226665 +0000 UTC m=+0.044211049 container create abcf0c4c4868bb1adacae912a171dc7e4030279d26a2e5fc3a6a0fcfe65ef38b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_swartz, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 20 14:45:04 compute-0 systemd[1]: Started libpod-conmon-abcf0c4c4868bb1adacae912a171dc7e4030279d26a2e5fc3a6a0fcfe65ef38b.scope.
Jan 20 14:45:04 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:45:04 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:45:04 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:45:04.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:45:04 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:45:04 compute-0 podman[301705]: 2026-01-20 14:45:04.788697794 +0000 UTC m=+0.021682138 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:45:04 compute-0 podman[301705]: 2026-01-20 14:45:04.897858142 +0000 UTC m=+0.130842536 container init abcf0c4c4868bb1adacae912a171dc7e4030279d26a2e5fc3a6a0fcfe65ef38b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_swartz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:45:04 compute-0 podman[301705]: 2026-01-20 14:45:04.907488493 +0000 UTC m=+0.140472857 container start abcf0c4c4868bb1adacae912a171dc7e4030279d26a2e5fc3a6a0fcfe65ef38b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_swartz, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 20 14:45:04 compute-0 podman[301705]: 2026-01-20 14:45:04.912367486 +0000 UTC m=+0.145351840 container attach abcf0c4c4868bb1adacae912a171dc7e4030279d26a2e5fc3a6a0fcfe65ef38b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_swartz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef)
Jan 20 14:45:04 compute-0 sad_swartz[301721]: 167 167
Jan 20 14:45:04 compute-0 systemd[1]: libpod-abcf0c4c4868bb1adacae912a171dc7e4030279d26a2e5fc3a6a0fcfe65ef38b.scope: Deactivated successfully.
Jan 20 14:45:04 compute-0 podman[301705]: 2026-01-20 14:45:04.914187015 +0000 UTC m=+0.147171349 container died abcf0c4c4868bb1adacae912a171dc7e4030279d26a2e5fc3a6a0fcfe65ef38b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_swartz, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 20 14:45:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-c6b19f0419a706157ffe0a4d998c73e1656fec527eb1224a9cd63e5aef6f51d1-merged.mount: Deactivated successfully.
Jan 20 14:45:04 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:45:04 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:45:04 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:45:04 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:45:04 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:45:04 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:45:04 compute-0 podman[301705]: 2026-01-20 14:45:04.955242547 +0000 UTC m=+0.188226891 container remove abcf0c4c4868bb1adacae912a171dc7e4030279d26a2e5fc3a6a0fcfe65ef38b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_swartz, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:45:04 compute-0 systemd[1]: libpod-conmon-abcf0c4c4868bb1adacae912a171dc7e4030279d26a2e5fc3a6a0fcfe65ef38b.scope: Deactivated successfully.
Jan 20 14:45:05 compute-0 podman[301744]: 2026-01-20 14:45:05.158805834 +0000 UTC m=+0.064375676 container create fbfcd7d0eafc982f4a18b8cfdd6b83716175d0fb9fd4b33ec31b8f8713f8defa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_noyce, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 20 14:45:05 compute-0 systemd[1]: Started libpod-conmon-fbfcd7d0eafc982f4a18b8cfdd6b83716175d0fb9fd4b33ec31b8f8713f8defa.scope.
Jan 20 14:45:05 compute-0 podman[301744]: 2026-01-20 14:45:05.133700384 +0000 UTC m=+0.039270236 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:45:05 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:45:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c860aa57acef9967729348879b0b1a6007f7940f906d7e2686b6b7416a9131d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:45:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c860aa57acef9967729348879b0b1a6007f7940f906d7e2686b6b7416a9131d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:45:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c860aa57acef9967729348879b0b1a6007f7940f906d7e2686b6b7416a9131d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:45:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c860aa57acef9967729348879b0b1a6007f7940f906d7e2686b6b7416a9131d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:45:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c860aa57acef9967729348879b0b1a6007f7940f906d7e2686b6b7416a9131d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 14:45:05 compute-0 podman[301744]: 2026-01-20 14:45:05.249761069 +0000 UTC m=+0.155330961 container init fbfcd7d0eafc982f4a18b8cfdd6b83716175d0fb9fd4b33ec31b8f8713f8defa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_noyce, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:45:05 compute-0 podman[301744]: 2026-01-20 14:45:05.258878015 +0000 UTC m=+0.164447857 container start fbfcd7d0eafc982f4a18b8cfdd6b83716175d0fb9fd4b33ec31b8f8713f8defa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_noyce, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 20 14:45:05 compute-0 podman[301744]: 2026-01-20 14:45:05.263532282 +0000 UTC m=+0.169102124 container attach fbfcd7d0eafc982f4a18b8cfdd6b83716175d0fb9fd4b33ec31b8f8713f8defa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_noyce, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 20 14:45:05 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1702: 321 pgs: 321 active+clean; 263 MiB data, 812 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 3.3 MiB/s wr, 225 op/s
Jan 20 14:45:05 compute-0 ceph-mon[74360]: pgmap v1702: 321 pgs: 321 active+clean; 263 MiB data, 812 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 3.3 MiB/s wr, 225 op/s
Jan 20 14:45:05 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/1471590964' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:45:06 compute-0 peaceful_noyce[301761]: --> passed data devices: 0 physical, 1 LVM
Jan 20 14:45:06 compute-0 peaceful_noyce[301761]: --> relative data size: 1.0
Jan 20 14:45:06 compute-0 peaceful_noyce[301761]: --> All data devices are unavailable
Jan 20 14:45:06 compute-0 systemd[1]: libpod-fbfcd7d0eafc982f4a18b8cfdd6b83716175d0fb9fd4b33ec31b8f8713f8defa.scope: Deactivated successfully.
Jan 20 14:45:06 compute-0 podman[301744]: 2026-01-20 14:45:06.074788556 +0000 UTC m=+0.980358378 container died fbfcd7d0eafc982f4a18b8cfdd6b83716175d0fb9fd4b33ec31b8f8713f8defa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_noyce, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 20 14:45:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-0c860aa57acef9967729348879b0b1a6007f7940f906d7e2686b6b7416a9131d-merged.mount: Deactivated successfully.
Jan 20 14:45:06 compute-0 podman[301744]: 2026-01-20 14:45:06.126735733 +0000 UTC m=+1.032305535 container remove fbfcd7d0eafc982f4a18b8cfdd6b83716175d0fb9fd4b33ec31b8f8713f8defa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_noyce, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 20 14:45:06 compute-0 systemd[1]: libpod-conmon-fbfcd7d0eafc982f4a18b8cfdd6b83716175d0fb9fd4b33ec31b8f8713f8defa.scope: Deactivated successfully.
Jan 20 14:45:06 compute-0 sudo[301639]: pam_unix(sudo:session): session closed for user root
Jan 20 14:45:06 compute-0 sudo[301788]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:45:06 compute-0 sudo[301788]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:45:06 compute-0 sudo[301788]: pam_unix(sudo:session): session closed for user root
Jan 20 14:45:06 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:45:06 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:45:06 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:45:06.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:45:06 compute-0 sudo[301813]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:45:06 compute-0 sudo[301813]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:45:06 compute-0 sudo[301813]: pam_unix(sudo:session): session closed for user root
Jan 20 14:45:06 compute-0 sudo[301838]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:45:06 compute-0 sudo[301838]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:45:06 compute-0 sudo[301838]: pam_unix(sudo:session): session closed for user root
Jan 20 14:45:06 compute-0 sudo[301863]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- lvm list --format json
Jan 20 14:45:06 compute-0 sudo[301863]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:45:06 compute-0 podman[301928]: 2026-01-20 14:45:06.770414156 +0000 UTC m=+0.039950764 container create 6ba13e657da03da6c21770c9352c5f255b082a626945d4c8b8f272682526c249 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_kowalevski, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:45:06 compute-0 systemd[1]: Started libpod-conmon-6ba13e657da03da6c21770c9352c5f255b082a626945d4c8b8f272682526c249.scope.
Jan 20 14:45:06 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:45:06 compute-0 podman[301928]: 2026-01-20 14:45:06.754734432 +0000 UTC m=+0.024271070 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:45:06 compute-0 podman[301928]: 2026-01-20 14:45:06.858151023 +0000 UTC m=+0.127687671 container init 6ba13e657da03da6c21770c9352c5f255b082a626945d4c8b8f272682526c249 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_kowalevski, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0)
Jan 20 14:45:06 compute-0 podman[301928]: 2026-01-20 14:45:06.86613984 +0000 UTC m=+0.135676468 container start 6ba13e657da03da6c21770c9352c5f255b082a626945d4c8b8f272682526c249 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_kowalevski, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:45:06 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:45:06 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:45:06 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:45:06.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:45:06 compute-0 podman[301928]: 2026-01-20 14:45:06.870393975 +0000 UTC m=+0.139930613 container attach 6ba13e657da03da6c21770c9352c5f255b082a626945d4c8b8f272682526c249 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_kowalevski, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:45:06 compute-0 naughty_kowalevski[301944]: 167 167
Jan 20 14:45:06 compute-0 systemd[1]: libpod-6ba13e657da03da6c21770c9352c5f255b082a626945d4c8b8f272682526c249.scope: Deactivated successfully.
Jan 20 14:45:06 compute-0 podman[301928]: 2026-01-20 14:45:06.872651457 +0000 UTC m=+0.142188085 container died 6ba13e657da03da6c21770c9352c5f255b082a626945d4c8b8f272682526c249 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_kowalevski, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:45:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-253dce1d739a3e9006880a2853f0ecc7c51cae3d551e892b36dd6e44d3670264-merged.mount: Deactivated successfully.
Jan 20 14:45:06 compute-0 podman[301928]: 2026-01-20 14:45:06.912855997 +0000 UTC m=+0.182392635 container remove 6ba13e657da03da6c21770c9352c5f255b082a626945d4c8b8f272682526c249 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_kowalevski, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 20 14:45:06 compute-0 systemd[1]: libpod-conmon-6ba13e657da03da6c21770c9352c5f255b082a626945d4c8b8f272682526c249.scope: Deactivated successfully.
Jan 20 14:45:07 compute-0 podman[301968]: 2026-01-20 14:45:07.112017823 +0000 UTC m=+0.036791347 container create c018247a839e55519f1a2a1af77559ddb6d1ef5a3a0d1d940bdd472bcee9b41c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_jemison, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:45:07 compute-0 systemd[1]: Started libpod-conmon-c018247a839e55519f1a2a1af77559ddb6d1ef5a3a0d1d940bdd472bcee9b41c.scope.
Jan 20 14:45:07 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:45:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb0a2c31907cc2ecdcc16042d915fed4f918ca7b4368898ad49985b8fcce593f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:45:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb0a2c31907cc2ecdcc16042d915fed4f918ca7b4368898ad49985b8fcce593f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:45:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb0a2c31907cc2ecdcc16042d915fed4f918ca7b4368898ad49985b8fcce593f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:45:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb0a2c31907cc2ecdcc16042d915fed4f918ca7b4368898ad49985b8fcce593f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:45:07 compute-0 podman[301968]: 2026-01-20 14:45:07.193621124 +0000 UTC m=+0.118394668 container init c018247a839e55519f1a2a1af77559ddb6d1ef5a3a0d1d940bdd472bcee9b41c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_jemison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:45:07 compute-0 podman[301968]: 2026-01-20 14:45:07.095901136 +0000 UTC m=+0.020674680 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:45:07 compute-0 podman[301968]: 2026-01-20 14:45:07.206855814 +0000 UTC m=+0.131629348 container start c018247a839e55519f1a2a1af77559ddb6d1ef5a3a0d1d940bdd472bcee9b41c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_jemison, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:45:07 compute-0 podman[301968]: 2026-01-20 14:45:07.210867021 +0000 UTC m=+0.135640545 container attach c018247a839e55519f1a2a1af77559ddb6d1ef5a3a0d1d940bdd472bcee9b41c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_jemison, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:45:07 compute-0 sudo[301990]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:45:07 compute-0 sudo[301990]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:45:07 compute-0 nova_compute[250018]: 2026-01-20 14:45:07.377 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:45:07 compute-0 sudo[301990]: pam_unix(sudo:session): session closed for user root
Jan 20 14:45:07 compute-0 sudo[302015]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:45:07 compute-0 sudo[302015]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:45:07 compute-0 sudo[302015]: pam_unix(sudo:session): session closed for user root
Jan 20 14:45:07 compute-0 nova_compute[250018]: 2026-01-20 14:45:07.714 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:45:07 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1703: 321 pgs: 321 active+clean; 271 MiB data, 821 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 4.3 MiB/s wr, 207 op/s
Jan 20 14:45:07 compute-0 ceph-mon[74360]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 20 14:45:07 compute-0 ceph-mon[74360]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3000.0 total, 600.0 interval
                                           Cumulative writes: 8776 writes, 38K keys, 8775 commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.02 MB/s
                                           Cumulative WAL: 8776 writes, 8775 syncs, 1.00 writes per sync, written: 0.05 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1762 writes, 7799 keys, 1762 commit groups, 1.0 writes per commit group, ingest: 11.13 MB, 0.02 MB/s
                                           Interval WAL: 1762 writes, 1762 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     43.3      1.09              0.17        22    0.049       0      0       0.0       0.0
                                             L6      1/0    9.34 MB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   3.9    131.7    109.1      1.66              0.56        21    0.079    114K    12K       0.0       0.0
                                            Sum      1/0    9.34 MB   0.0      0.2     0.0      0.2       0.2      0.1       0.0   4.9     79.6     83.0      2.75              0.73        43    0.064    114K    12K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   5.5    129.9    132.8      0.44              0.15        10    0.044     34K   3136       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   0.0    131.7    109.1      1.66              0.56        21    0.079    114K    12K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     43.3      1.08              0.17        21    0.052       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     14.7      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 3000.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.046, interval 0.010
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.22 GB write, 0.08 MB/s write, 0.21 GB read, 0.07 MB/s read, 2.7 seconds
                                           Interval compaction: 0.06 GB write, 0.10 MB/s write, 0.06 GB read, 0.10 MB/s read, 0.4 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5576af6451f0#2 capacity: 304.00 MB usage: 25.50 MB table_size: 0 occupancy: 18446744073709551615 collections: 6 last_copies: 0 last_secs: 0.000245 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1489,24.63 MB,8.10318%) FilterBlock(44,322.73 KB,0.103674%) IndexBlock(44,568.09 KB,0.182493%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 20 14:45:07 compute-0 ceph-mon[74360]: pgmap v1703: 321 pgs: 321 active+clean; 271 MiB data, 821 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 4.3 MiB/s wr, 207 op/s
Jan 20 14:45:07 compute-0 stupefied_jemison[301985]: {
Jan 20 14:45:07 compute-0 stupefied_jemison[301985]:     "0": [
Jan 20 14:45:07 compute-0 stupefied_jemison[301985]:         {
Jan 20 14:45:07 compute-0 stupefied_jemison[301985]:             "devices": [
Jan 20 14:45:07 compute-0 stupefied_jemison[301985]:                 "/dev/loop3"
Jan 20 14:45:07 compute-0 stupefied_jemison[301985]:             ],
Jan 20 14:45:07 compute-0 stupefied_jemison[301985]:             "lv_name": "ceph_lv0",
Jan 20 14:45:07 compute-0 stupefied_jemison[301985]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:45:07 compute-0 stupefied_jemison[301985]:             "lv_size": "7511998464",
Jan 20 14:45:07 compute-0 stupefied_jemison[301985]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e399cf45-e6b6-5393-99f1-75c601d3f188,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afc3740a-ccee-46f8-b343-22648cc89376,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 20 14:45:07 compute-0 stupefied_jemison[301985]:             "lv_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 14:45:07 compute-0 stupefied_jemison[301985]:             "name": "ceph_lv0",
Jan 20 14:45:07 compute-0 stupefied_jemison[301985]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:45:07 compute-0 stupefied_jemison[301985]:             "tags": {
Jan 20 14:45:07 compute-0 stupefied_jemison[301985]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:45:07 compute-0 stupefied_jemison[301985]:                 "ceph.block_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 14:45:07 compute-0 stupefied_jemison[301985]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 14:45:07 compute-0 stupefied_jemison[301985]:                 "ceph.cluster_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 14:45:07 compute-0 stupefied_jemison[301985]:                 "ceph.cluster_name": "ceph",
Jan 20 14:45:07 compute-0 stupefied_jemison[301985]:                 "ceph.crush_device_class": "",
Jan 20 14:45:07 compute-0 stupefied_jemison[301985]:                 "ceph.encrypted": "0",
Jan 20 14:45:07 compute-0 stupefied_jemison[301985]:                 "ceph.osd_fsid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 14:45:07 compute-0 stupefied_jemison[301985]:                 "ceph.osd_id": "0",
Jan 20 14:45:07 compute-0 stupefied_jemison[301985]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 14:45:07 compute-0 stupefied_jemison[301985]:                 "ceph.type": "block",
Jan 20 14:45:07 compute-0 stupefied_jemison[301985]:                 "ceph.vdo": "0"
Jan 20 14:45:07 compute-0 stupefied_jemison[301985]:             },
Jan 20 14:45:07 compute-0 stupefied_jemison[301985]:             "type": "block",
Jan 20 14:45:07 compute-0 stupefied_jemison[301985]:             "vg_name": "ceph_vg0"
Jan 20 14:45:07 compute-0 stupefied_jemison[301985]:         }
Jan 20 14:45:07 compute-0 stupefied_jemison[301985]:     ]
Jan 20 14:45:07 compute-0 stupefied_jemison[301985]: }
Jan 20 14:45:07 compute-0 systemd[1]: libpod-c018247a839e55519f1a2a1af77559ddb6d1ef5a3a0d1d940bdd472bcee9b41c.scope: Deactivated successfully.
Jan 20 14:45:07 compute-0 podman[301968]: 2026-01-20 14:45:07.942246461 +0000 UTC m=+0.867019985 container died c018247a839e55519f1a2a1af77559ddb6d1ef5a3a0d1d940bdd472bcee9b41c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_jemison, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:45:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-bb0a2c31907cc2ecdcc16042d915fed4f918ca7b4368898ad49985b8fcce593f-merged.mount: Deactivated successfully.
Jan 20 14:45:08 compute-0 podman[301968]: 2026-01-20 14:45:08.008117756 +0000 UTC m=+0.932891280 container remove c018247a839e55519f1a2a1af77559ddb6d1ef5a3a0d1d940bdd472bcee9b41c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_jemison, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 20 14:45:08 compute-0 systemd[1]: libpod-conmon-c018247a839e55519f1a2a1af77559ddb6d1ef5a3a0d1d940bdd472bcee9b41c.scope: Deactivated successfully.
Jan 20 14:45:08 compute-0 sudo[301863]: pam_unix(sudo:session): session closed for user root
Jan 20 14:45:08 compute-0 sudo[302058]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:45:08 compute-0 sudo[302058]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:45:08 compute-0 sudo[302058]: pam_unix(sudo:session): session closed for user root
Jan 20 14:45:08 compute-0 sudo[302083]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:45:08 compute-0 sudo[302083]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:45:08 compute-0 sudo[302083]: pam_unix(sudo:session): session closed for user root
Jan 20 14:45:08 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:45:08 compute-0 sudo[302108]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:45:08 compute-0 sudo[302108]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:45:08 compute-0 sudo[302108]: pam_unix(sudo:session): session closed for user root
Jan 20 14:45:08 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:45:08 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:45:08 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:45:08.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:45:08 compute-0 sudo[302133]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- raw list --format json
Jan 20 14:45:08 compute-0 sudo[302133]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:45:08 compute-0 podman[302200]: 2026-01-20 14:45:08.726483853 +0000 UTC m=+0.052254267 container create 6e485888070db3f5cad5ce3876229f7c3eb0c92ce3a1299f3dbb216743c1ef4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_cartwright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 20 14:45:08 compute-0 systemd[1]: Started libpod-conmon-6e485888070db3f5cad5ce3876229f7c3eb0c92ce3a1299f3dbb216743c1ef4d.scope.
Jan 20 14:45:08 compute-0 podman[302200]: 2026-01-20 14:45:08.702681098 +0000 UTC m=+0.028451592 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:45:08 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:45:08 compute-0 podman[302200]: 2026-01-20 14:45:08.827965362 +0000 UTC m=+0.153735776 container init 6e485888070db3f5cad5ce3876229f7c3eb0c92ce3a1299f3dbb216743c1ef4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_cartwright, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 20 14:45:08 compute-0 podman[302200]: 2026-01-20 14:45:08.839792034 +0000 UTC m=+0.165562488 container start 6e485888070db3f5cad5ce3876229f7c3eb0c92ce3a1299f3dbb216743c1ef4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_cartwright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 20 14:45:08 compute-0 podman[302200]: 2026-01-20 14:45:08.844327696 +0000 UTC m=+0.170098150 container attach 6e485888070db3f5cad5ce3876229f7c3eb0c92ce3a1299f3dbb216743c1ef4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_cartwright, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 20 14:45:08 compute-0 awesome_cartwright[302217]: 167 167
Jan 20 14:45:08 compute-0 systemd[1]: libpod-6e485888070db3f5cad5ce3876229f7c3eb0c92ce3a1299f3dbb216743c1ef4d.scope: Deactivated successfully.
Jan 20 14:45:08 compute-0 podman[302200]: 2026-01-20 14:45:08.846644039 +0000 UTC m=+0.172414493 container died 6e485888070db3f5cad5ce3876229f7c3eb0c92ce3a1299f3dbb216743c1ef4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_cartwright, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 20 14:45:08 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:45:08 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:45:08 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:45:08.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:45:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-e99f46aecdcbab73b26b60ca4e2dad7b0f6d8287594e720caf9479a93d156e47-merged.mount: Deactivated successfully.
Jan 20 14:45:08 compute-0 podman[302200]: 2026-01-20 14:45:08.900539879 +0000 UTC m=+0.226310333 container remove 6e485888070db3f5cad5ce3876229f7c3eb0c92ce3a1299f3dbb216743c1ef4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_cartwright, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:45:08 compute-0 systemd[1]: libpod-conmon-6e485888070db3f5cad5ce3876229f7c3eb0c92ce3a1299f3dbb216743c1ef4d.scope: Deactivated successfully.
Jan 20 14:45:08 compute-0 nova_compute[250018]: 2026-01-20 14:45:08.933 250022 DEBUG oslo_concurrency.lockutils [None req-8bb0a9d4-c16b-4f59-88bf-5bca717e9abf 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Acquiring lock "163342b0-95c5-4e11-8a6d-f7a9cd588782" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:45:08 compute-0 nova_compute[250018]: 2026-01-20 14:45:08.935 250022 DEBUG oslo_concurrency.lockutils [None req-8bb0a9d4-c16b-4f59-88bf-5bca717e9abf 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Lock "163342b0-95c5-4e11-8a6d-f7a9cd588782" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:45:08 compute-0 nova_compute[250018]: 2026-01-20 14:45:08.952 250022 DEBUG nova.compute.manager [None req-8bb0a9d4-c16b-4f59-88bf-5bca717e9abf 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: 163342b0-95c5-4e11-8a6d-f7a9cd588782] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 14:45:09 compute-0 nova_compute[250018]: 2026-01-20 14:45:09.029 250022 DEBUG oslo_concurrency.lockutils [None req-8bb0a9d4-c16b-4f59-88bf-5bca717e9abf 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:45:09 compute-0 nova_compute[250018]: 2026-01-20 14:45:09.030 250022 DEBUG oslo_concurrency.lockutils [None req-8bb0a9d4-c16b-4f59-88bf-5bca717e9abf 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:45:09 compute-0 nova_compute[250018]: 2026-01-20 14:45:09.041 250022 DEBUG nova.virt.hardware [None req-8bb0a9d4-c16b-4f59-88bf-5bca717e9abf 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 14:45:09 compute-0 nova_compute[250018]: 2026-01-20 14:45:09.042 250022 INFO nova.compute.claims [None req-8bb0a9d4-c16b-4f59-88bf-5bca717e9abf 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: 163342b0-95c5-4e11-8a6d-f7a9cd588782] Claim successful on node compute-0.ctlplane.example.com
Jan 20 14:45:09 compute-0 podman[302243]: 2026-01-20 14:45:09.115341431 +0000 UTC m=+0.054405046 container create e6b57318d05764ec663a3bfab788821ce6ac03f0ad51264c86ce384795a8ec77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_cartwright, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 14:45:09 compute-0 systemd[1]: Started libpod-conmon-e6b57318d05764ec663a3bfab788821ce6ac03f0ad51264c86ce384795a8ec77.scope.
Jan 20 14:45:09 compute-0 nova_compute[250018]: 2026-01-20 14:45:09.170 250022 DEBUG oslo_concurrency.processutils [None req-8bb0a9d4-c16b-4f59-88bf-5bca717e9abf 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:45:09 compute-0 podman[302243]: 2026-01-20 14:45:09.092902083 +0000 UTC m=+0.031965788 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:45:09 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:45:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b1cbc76c52c18284ebd9af56601824690e7282380d4c42a44ec7166246e1f0e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:45:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b1cbc76c52c18284ebd9af56601824690e7282380d4c42a44ec7166246e1f0e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:45:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b1cbc76c52c18284ebd9af56601824690e7282380d4c42a44ec7166246e1f0e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:45:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b1cbc76c52c18284ebd9af56601824690e7282380d4c42a44ec7166246e1f0e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:45:09 compute-0 podman[302243]: 2026-01-20 14:45:09.21900736 +0000 UTC m=+0.158071025 container init e6b57318d05764ec663a3bfab788821ce6ac03f0ad51264c86ce384795a8ec77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_cartwright, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS)
Jan 20 14:45:09 compute-0 podman[302243]: 2026-01-20 14:45:09.227992623 +0000 UTC m=+0.167056258 container start e6b57318d05764ec663a3bfab788821ce6ac03f0ad51264c86ce384795a8ec77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_cartwright, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:45:09 compute-0 podman[302243]: 2026-01-20 14:45:09.231892989 +0000 UTC m=+0.170956634 container attach e6b57318d05764ec663a3bfab788821ce6ac03f0ad51264c86ce384795a8ec77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_cartwright, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 14:45:09 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:45:09 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3641399765' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:45:09 compute-0 nova_compute[250018]: 2026-01-20 14:45:09.666 250022 DEBUG oslo_concurrency.processutils [None req-8bb0a9d4-c16b-4f59-88bf-5bca717e9abf 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:45:09 compute-0 nova_compute[250018]: 2026-01-20 14:45:09.675 250022 DEBUG nova.compute.provider_tree [None req-8bb0a9d4-c16b-4f59-88bf-5bca717e9abf 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:45:09 compute-0 nova_compute[250018]: 2026-01-20 14:45:09.715 250022 DEBUG nova.scheduler.client.report [None req-8bb0a9d4-c16b-4f59-88bf-5bca717e9abf 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:45:09 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1704: 321 pgs: 321 active+clean; 279 MiB data, 821 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 4.3 MiB/s wr, 205 op/s
Jan 20 14:45:09 compute-0 nova_compute[250018]: 2026-01-20 14:45:09.768 250022 DEBUG oslo_concurrency.lockutils [None req-8bb0a9d4-c16b-4f59-88bf-5bca717e9abf 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.737s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:45:09 compute-0 nova_compute[250018]: 2026-01-20 14:45:09.769 250022 DEBUG nova.compute.manager [None req-8bb0a9d4-c16b-4f59-88bf-5bca717e9abf 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: 163342b0-95c5-4e11-8a6d-f7a9cd588782] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 14:45:09 compute-0 nova_compute[250018]: 2026-01-20 14:45:09.828 250022 DEBUG nova.compute.manager [None req-8bb0a9d4-c16b-4f59-88bf-5bca717e9abf 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: 163342b0-95c5-4e11-8a6d-f7a9cd588782] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 20 14:45:09 compute-0 nova_compute[250018]: 2026-01-20 14:45:09.831 250022 DEBUG nova.network.neutron [None req-8bb0a9d4-c16b-4f59-88bf-5bca717e9abf 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: 163342b0-95c5-4e11-8a6d-f7a9cd588782] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 14:45:09 compute-0 nova_compute[250018]: 2026-01-20 14:45:09.850 250022 INFO nova.virt.libvirt.driver [None req-8bb0a9d4-c16b-4f59-88bf-5bca717e9abf 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: 163342b0-95c5-4e11-8a6d-f7a9cd588782] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 14:45:09 compute-0 nova_compute[250018]: 2026-01-20 14:45:09.869 250022 DEBUG nova.compute.manager [None req-8bb0a9d4-c16b-4f59-88bf-5bca717e9abf 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: 163342b0-95c5-4e11-8a6d-f7a9cd588782] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 14:45:09 compute-0 nova_compute[250018]: 2026-01-20 14:45:09.986 250022 DEBUG nova.compute.manager [None req-8bb0a9d4-c16b-4f59-88bf-5bca717e9abf 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: 163342b0-95c5-4e11-8a6d-f7a9cd588782] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 14:45:09 compute-0 nova_compute[250018]: 2026-01-20 14:45:09.989 250022 DEBUG nova.virt.libvirt.driver [None req-8bb0a9d4-c16b-4f59-88bf-5bca717e9abf 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: 163342b0-95c5-4e11-8a6d-f7a9cd588782] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 14:45:09 compute-0 nova_compute[250018]: 2026-01-20 14:45:09.990 250022 INFO nova.virt.libvirt.driver [None req-8bb0a9d4-c16b-4f59-88bf-5bca717e9abf 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: 163342b0-95c5-4e11-8a6d-f7a9cd588782] Creating image(s)
Jan 20 14:45:10 compute-0 nova_compute[250018]: 2026-01-20 14:45:10.031 250022 DEBUG nova.storage.rbd_utils [None req-8bb0a9d4-c16b-4f59-88bf-5bca717e9abf 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] rbd image 163342b0-95c5-4e11-8a6d-f7a9cd588782_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:45:10 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3641399765' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:45:10 compute-0 nova_compute[250018]: 2026-01-20 14:45:10.074 250022 DEBUG nova.storage.rbd_utils [None req-8bb0a9d4-c16b-4f59-88bf-5bca717e9abf 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] rbd image 163342b0-95c5-4e11-8a6d-f7a9cd588782_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:45:10 compute-0 nova_compute[250018]: 2026-01-20 14:45:10.103 250022 DEBUG nova.storage.rbd_utils [None req-8bb0a9d4-c16b-4f59-88bf-5bca717e9abf 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] rbd image 163342b0-95c5-4e11-8a6d-f7a9cd588782_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:45:10 compute-0 nova_compute[250018]: 2026-01-20 14:45:10.107 250022 DEBUG oslo_concurrency.processutils [None req-8bb0a9d4-c16b-4f59-88bf-5bca717e9abf 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:45:10 compute-0 relaxed_cartwright[302259]: {
Jan 20 14:45:10 compute-0 relaxed_cartwright[302259]:     "afc3740a-ccee-46f8-b343-22648cc89376": {
Jan 20 14:45:10 compute-0 relaxed_cartwright[302259]:         "ceph_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 14:45:10 compute-0 relaxed_cartwright[302259]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 20 14:45:10 compute-0 relaxed_cartwright[302259]:         "osd_id": 0,
Jan 20 14:45:10 compute-0 relaxed_cartwright[302259]:         "osd_uuid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 14:45:10 compute-0 relaxed_cartwright[302259]:         "type": "bluestore"
Jan 20 14:45:10 compute-0 relaxed_cartwright[302259]:     }
Jan 20 14:45:10 compute-0 relaxed_cartwright[302259]: }
Jan 20 14:45:10 compute-0 systemd[1]: libpod-e6b57318d05764ec663a3bfab788821ce6ac03f0ad51264c86ce384795a8ec77.scope: Deactivated successfully.
Jan 20 14:45:10 compute-0 podman[302243]: 2026-01-20 14:45:10.135230738 +0000 UTC m=+1.074294363 container died e6b57318d05764ec663a3bfab788821ce6ac03f0ad51264c86ce384795a8ec77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_cartwright, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 20 14:45:10 compute-0 nova_compute[250018]: 2026-01-20 14:45:10.135 250022 DEBUG nova.policy [None req-8bb0a9d4-c16b-4f59-88bf-5bca717e9abf 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '37e9ef97fbe0448e9fbe32d48b66211f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3b31139b2a4e49cba5e7048febf901c4', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 20 14:45:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-5b1cbc76c52c18284ebd9af56601824690e7282380d4c42a44ec7166246e1f0e-merged.mount: Deactivated successfully.
Jan 20 14:45:10 compute-0 nova_compute[250018]: 2026-01-20 14:45:10.184 250022 DEBUG oslo_concurrency.processutils [None req-8bb0a9d4-c16b-4f59-88bf-5bca717e9abf 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:45:10 compute-0 nova_compute[250018]: 2026-01-20 14:45:10.185 250022 DEBUG oslo_concurrency.lockutils [None req-8bb0a9d4-c16b-4f59-88bf-5bca717e9abf 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Acquiring lock "82d5c1918fd7c974214c7a48c1793a7a82560462" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:45:10 compute-0 nova_compute[250018]: 2026-01-20 14:45:10.185 250022 DEBUG oslo_concurrency.lockutils [None req-8bb0a9d4-c16b-4f59-88bf-5bca717e9abf 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:45:10 compute-0 nova_compute[250018]: 2026-01-20 14:45:10.186 250022 DEBUG oslo_concurrency.lockutils [None req-8bb0a9d4-c16b-4f59-88bf-5bca717e9abf 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:45:10 compute-0 podman[302243]: 2026-01-20 14:45:10.191504543 +0000 UTC m=+1.130568168 container remove e6b57318d05764ec663a3bfab788821ce6ac03f0ad51264c86ce384795a8ec77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_cartwright, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 20 14:45:10 compute-0 systemd[1]: libpod-conmon-e6b57318d05764ec663a3bfab788821ce6ac03f0ad51264c86ce384795a8ec77.scope: Deactivated successfully.
Jan 20 14:45:10 compute-0 sudo[302133]: pam_unix(sudo:session): session closed for user root
Jan 20 14:45:10 compute-0 nova_compute[250018]: 2026-01-20 14:45:10.221 250022 DEBUG nova.storage.rbd_utils [None req-8bb0a9d4-c16b-4f59-88bf-5bca717e9abf 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] rbd image 163342b0-95c5-4e11-8a6d-f7a9cd588782_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:45:10 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 14:45:10 compute-0 nova_compute[250018]: 2026-01-20 14:45:10.227 250022 DEBUG oslo_concurrency.processutils [None req-8bb0a9d4-c16b-4f59-88bf-5bca717e9abf 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 163342b0-95c5-4e11-8a6d-f7a9cd588782_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:45:10 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:45:10 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:45:10 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:45:10.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:45:10 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:45:10 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:45:10 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:45:10.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:45:10 compute-0 nova_compute[250018]: 2026-01-20 14:45:10.901 250022 DEBUG nova.network.neutron [None req-8bb0a9d4-c16b-4f59-88bf-5bca717e9abf 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: 163342b0-95c5-4e11-8a6d-f7a9cd588782] Successfully created port: 90244da9-daf0-4694-952a-a087ecc6c3db _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 20 14:45:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 14:45:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:45:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 20 14:45:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:45:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.004340459531453564 of space, bias 1.0, pg target 1.302137859436069 quantized to 32 (current 32)
Jan 20 14:45:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:45:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021565073632049134 of space, bias 1.0, pg target 0.6447957015982692 quantized to 32 (current 32)
Jan 20 14:45:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:45:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:45:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:45:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Jan 20 14:45:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:45:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 32)
Jan 20 14:45:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:45:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:45:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:45:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021737739624046157 quantized to 32 (current 32)
Jan 20 14:45:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:45:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Jan 20 14:45:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:45:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:45:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:45:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Jan 20 14:45:11 compute-0 ceph-mon[74360]: pgmap v1704: 321 pgs: 321 active+clean; 279 MiB data, 821 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 4.3 MiB/s wr, 205 op/s
Jan 20 14:45:11 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:45:11 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 14:45:11 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1705: 321 pgs: 321 active+clean; 279 MiB data, 821 MiB used, 20 GiB / 21 GiB avail; 976 KiB/s rd, 4.3 MiB/s wr, 167 op/s
Jan 20 14:45:12 compute-0 nova_compute[250018]: 2026-01-20 14:45:12.131 250022 DEBUG nova.network.neutron [None req-8bb0a9d4-c16b-4f59-88bf-5bca717e9abf 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: 163342b0-95c5-4e11-8a6d-f7a9cd588782] Successfully updated port: 90244da9-daf0-4694-952a-a087ecc6c3db _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 20 14:45:12 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:45:12 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev b0608284-5644-4db1-881b-accbee7dbd77 does not exist
Jan 20 14:45:12 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 8c6241ef-ce1e-4866-99fe-7629c172753e does not exist
Jan 20 14:45:12 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 2915680f-3ed2-4aa7-8d1f-f117d9f72585 does not exist
Jan 20 14:45:12 compute-0 nova_compute[250018]: 2026-01-20 14:45:12.162 250022 DEBUG oslo_concurrency.lockutils [None req-8bb0a9d4-c16b-4f59-88bf-5bca717e9abf 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Acquiring lock "refresh_cache-163342b0-95c5-4e11-8a6d-f7a9cd588782" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:45:12 compute-0 nova_compute[250018]: 2026-01-20 14:45:12.162 250022 DEBUG oslo_concurrency.lockutils [None req-8bb0a9d4-c16b-4f59-88bf-5bca717e9abf 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Acquired lock "refresh_cache-163342b0-95c5-4e11-8a6d-f7a9cd588782" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:45:12 compute-0 nova_compute[250018]: 2026-01-20 14:45:12.162 250022 DEBUG nova.network.neutron [None req-8bb0a9d4-c16b-4f59-88bf-5bca717e9abf 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: 163342b0-95c5-4e11-8a6d-f7a9cd588782] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:45:12 compute-0 sudo[302411]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:45:12 compute-0 nova_compute[250018]: 2026-01-20 14:45:12.243 250022 DEBUG nova.compute.manager [req-80144caf-4da3-4fa4-b718-d3eb536de1c9 req-4cee80e6-2289-423a-a5d8-afd209e9d088 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 163342b0-95c5-4e11-8a6d-f7a9cd588782] Received event network-changed-90244da9-daf0-4694-952a-a087ecc6c3db external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:45:12 compute-0 nova_compute[250018]: 2026-01-20 14:45:12.243 250022 DEBUG nova.compute.manager [req-80144caf-4da3-4fa4-b718-d3eb536de1c9 req-4cee80e6-2289-423a-a5d8-afd209e9d088 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 163342b0-95c5-4e11-8a6d-f7a9cd588782] Refreshing instance network info cache due to event network-changed-90244da9-daf0-4694-952a-a087ecc6c3db. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:45:12 compute-0 nova_compute[250018]: 2026-01-20 14:45:12.243 250022 DEBUG oslo_concurrency.lockutils [req-80144caf-4da3-4fa4-b718-d3eb536de1c9 req-4cee80e6-2289-423a-a5d8-afd209e9d088 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-163342b0-95c5-4e11-8a6d-f7a9cd588782" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:45:12 compute-0 sudo[302411]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:45:12 compute-0 sudo[302411]: pam_unix(sudo:session): session closed for user root
Jan 20 14:45:12 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:45:12 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:45:12 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:45:12.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:45:12 compute-0 nova_compute[250018]: 2026-01-20 14:45:12.329 250022 DEBUG nova.network.neutron [None req-8bb0a9d4-c16b-4f59-88bf-5bca717e9abf 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: 163342b0-95c5-4e11-8a6d-f7a9cd588782] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 14:45:12 compute-0 sudo[302436]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 14:45:12 compute-0 sudo[302436]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:45:12 compute-0 sudo[302436]: pam_unix(sudo:session): session closed for user root
Jan 20 14:45:12 compute-0 nova_compute[250018]: 2026-01-20 14:45:12.379 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:45:12 compute-0 nova_compute[250018]: 2026-01-20 14:45:12.715 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:45:12 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:45:12 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:45:12 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:45:12.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:45:12 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:45:12 compute-0 ceph-mon[74360]: pgmap v1705: 321 pgs: 321 active+clean; 279 MiB data, 821 MiB used, 20 GiB / 21 GiB avail; 976 KiB/s rd, 4.3 MiB/s wr, 167 op/s
Jan 20 14:45:12 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/950872874' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:45:12 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:45:13 compute-0 nova_compute[250018]: 2026-01-20 14:45:13.139 250022 DEBUG nova.network.neutron [None req-8bb0a9d4-c16b-4f59-88bf-5bca717e9abf 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: 163342b0-95c5-4e11-8a6d-f7a9cd588782] Updating instance_info_cache with network_info: [{"id": "90244da9-daf0-4694-952a-a087ecc6c3db", "address": "fa:16:3e:53:d4:48", "network": {"id": "fbd5d614-a7d3-4563-913c-104506628e59", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-60721994-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b31139b2a4e49cba5e7048febf901c4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap90244da9-da", "ovs_interfaceid": "90244da9-daf0-4694-952a-a087ecc6c3db", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:45:13 compute-0 nova_compute[250018]: 2026-01-20 14:45:13.160 250022 DEBUG oslo_concurrency.lockutils [None req-8bb0a9d4-c16b-4f59-88bf-5bca717e9abf 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Releasing lock "refresh_cache-163342b0-95c5-4e11-8a6d-f7a9cd588782" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:45:13 compute-0 nova_compute[250018]: 2026-01-20 14:45:13.160 250022 DEBUG nova.compute.manager [None req-8bb0a9d4-c16b-4f59-88bf-5bca717e9abf 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: 163342b0-95c5-4e11-8a6d-f7a9cd588782] Instance network_info: |[{"id": "90244da9-daf0-4694-952a-a087ecc6c3db", "address": "fa:16:3e:53:d4:48", "network": {"id": "fbd5d614-a7d3-4563-913c-104506628e59", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-60721994-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b31139b2a4e49cba5e7048febf901c4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap90244da9-da", "ovs_interfaceid": "90244da9-daf0-4694-952a-a087ecc6c3db", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 20 14:45:13 compute-0 nova_compute[250018]: 2026-01-20 14:45:13.160 250022 DEBUG oslo_concurrency.lockutils [req-80144caf-4da3-4fa4-b718-d3eb536de1c9 req-4cee80e6-2289-423a-a5d8-afd209e9d088 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-163342b0-95c5-4e11-8a6d-f7a9cd588782" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:45:13 compute-0 nova_compute[250018]: 2026-01-20 14:45:13.161 250022 DEBUG nova.network.neutron [req-80144caf-4da3-4fa4-b718-d3eb536de1c9 req-4cee80e6-2289-423a-a5d8-afd209e9d088 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 163342b0-95c5-4e11-8a6d-f7a9cd588782] Refreshing network info cache for port 90244da9-daf0-4694-952a-a087ecc6c3db _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:45:13 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:45:13 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 20 14:45:13 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2711257148' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:45:13 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 20 14:45:13 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2711257148' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:45:13 compute-0 nova_compute[250018]: 2026-01-20 14:45:13.717 250022 DEBUG oslo_concurrency.processutils [None req-8bb0a9d4-c16b-4f59-88bf-5bca717e9abf 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 163342b0-95c5-4e11-8a6d-f7a9cd588782_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 3.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:45:13 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1706: 321 pgs: 321 active+clean; 279 MiB data, 821 MiB used, 20 GiB / 21 GiB avail; 820 KiB/s rd, 4.3 MiB/s wr, 158 op/s
Jan 20 14:45:13 compute-0 nova_compute[250018]: 2026-01-20 14:45:13.802 250022 DEBUG nova.storage.rbd_utils [None req-8bb0a9d4-c16b-4f59-88bf-5bca717e9abf 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] resizing rbd image 163342b0-95c5-4e11-8a6d-f7a9cd588782_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 20 14:45:14 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:45:14 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:45:14 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:45:14.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:45:14 compute-0 nova_compute[250018]: 2026-01-20 14:45:14.318 250022 DEBUG nova.network.neutron [req-80144caf-4da3-4fa4-b718-d3eb536de1c9 req-4cee80e6-2289-423a-a5d8-afd209e9d088 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 163342b0-95c5-4e11-8a6d-f7a9cd588782] Updated VIF entry in instance network info cache for port 90244da9-daf0-4694-952a-a087ecc6c3db. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 14:45:14 compute-0 nova_compute[250018]: 2026-01-20 14:45:14.319 250022 DEBUG nova.network.neutron [req-80144caf-4da3-4fa4-b718-d3eb536de1c9 req-4cee80e6-2289-423a-a5d8-afd209e9d088 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 163342b0-95c5-4e11-8a6d-f7a9cd588782] Updating instance_info_cache with network_info: [{"id": "90244da9-daf0-4694-952a-a087ecc6c3db", "address": "fa:16:3e:53:d4:48", "network": {"id": "fbd5d614-a7d3-4563-913c-104506628e59", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-60721994-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b31139b2a4e49cba5e7048febf901c4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap90244da9-da", "ovs_interfaceid": "90244da9-daf0-4694-952a-a087ecc6c3db", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:45:14 compute-0 nova_compute[250018]: 2026-01-20 14:45:14.337 250022 DEBUG oslo_concurrency.lockutils [req-80144caf-4da3-4fa4-b718-d3eb536de1c9 req-4cee80e6-2289-423a-a5d8-afd209e9d088 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-163342b0-95c5-4e11-8a6d-f7a9cd588782" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:45:14 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/2711257148' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:45:14 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/2711257148' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:45:14 compute-0 ceph-mon[74360]: pgmap v1706: 321 pgs: 321 active+clean; 279 MiB data, 821 MiB used, 20 GiB / 21 GiB avail; 820 KiB/s rd, 4.3 MiB/s wr, 158 op/s
Jan 20 14:45:14 compute-0 nova_compute[250018]: 2026-01-20 14:45:14.576 250022 DEBUG nova.objects.instance [None req-8bb0a9d4-c16b-4f59-88bf-5bca717e9abf 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Lazy-loading 'migration_context' on Instance uuid 163342b0-95c5-4e11-8a6d-f7a9cd588782 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:45:14 compute-0 nova_compute[250018]: 2026-01-20 14:45:14.598 250022 DEBUG nova.virt.libvirt.driver [None req-8bb0a9d4-c16b-4f59-88bf-5bca717e9abf 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: 163342b0-95c5-4e11-8a6d-f7a9cd588782] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 14:45:14 compute-0 nova_compute[250018]: 2026-01-20 14:45:14.599 250022 DEBUG nova.virt.libvirt.driver [None req-8bb0a9d4-c16b-4f59-88bf-5bca717e9abf 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: 163342b0-95c5-4e11-8a6d-f7a9cd588782] Ensure instance console log exists: /var/lib/nova/instances/163342b0-95c5-4e11-8a6d-f7a9cd588782/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 14:45:14 compute-0 nova_compute[250018]: 2026-01-20 14:45:14.600 250022 DEBUG oslo_concurrency.lockutils [None req-8bb0a9d4-c16b-4f59-88bf-5bca717e9abf 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:45:14 compute-0 nova_compute[250018]: 2026-01-20 14:45:14.601 250022 DEBUG oslo_concurrency.lockutils [None req-8bb0a9d4-c16b-4f59-88bf-5bca717e9abf 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:45:14 compute-0 nova_compute[250018]: 2026-01-20 14:45:14.601 250022 DEBUG oslo_concurrency.lockutils [None req-8bb0a9d4-c16b-4f59-88bf-5bca717e9abf 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:45:14 compute-0 nova_compute[250018]: 2026-01-20 14:45:14.606 250022 DEBUG nova.virt.libvirt.driver [None req-8bb0a9d4-c16b-4f59-88bf-5bca717e9abf 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: 163342b0-95c5-4e11-8a6d-f7a9cd588782] Start _get_guest_xml network_info=[{"id": "90244da9-daf0-4694-952a-a087ecc6c3db", "address": "fa:16:3e:53:d4:48", "network": {"id": "fbd5d614-a7d3-4563-913c-104506628e59", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-60721994-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b31139b2a4e49cba5e7048febf901c4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap90244da9-da", "ovs_interfaceid": "90244da9-daf0-4694-952a-a087ecc6c3db", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encrypted': False, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'guest_format': None, 'size': 0, 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 14:45:14 compute-0 nova_compute[250018]: 2026-01-20 14:45:14.615 250022 WARNING nova.virt.libvirt.driver [None req-8bb0a9d4-c16b-4f59-88bf-5bca717e9abf 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:45:14 compute-0 nova_compute[250018]: 2026-01-20 14:45:14.622 250022 DEBUG nova.virt.libvirt.host [None req-8bb0a9d4-c16b-4f59-88bf-5bca717e9abf 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 14:45:14 compute-0 nova_compute[250018]: 2026-01-20 14:45:14.623 250022 DEBUG nova.virt.libvirt.host [None req-8bb0a9d4-c16b-4f59-88bf-5bca717e9abf 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 14:45:14 compute-0 nova_compute[250018]: 2026-01-20 14:45:14.627 250022 DEBUG nova.virt.libvirt.host [None req-8bb0a9d4-c16b-4f59-88bf-5bca717e9abf 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 14:45:14 compute-0 nova_compute[250018]: 2026-01-20 14:45:14.627 250022 DEBUG nova.virt.libvirt.host [None req-8bb0a9d4-c16b-4f59-88bf-5bca717e9abf 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 14:45:14 compute-0 nova_compute[250018]: 2026-01-20 14:45:14.630 250022 DEBUG nova.virt.libvirt.driver [None req-8bb0a9d4-c16b-4f59-88bf-5bca717e9abf 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 14:45:14 compute-0 nova_compute[250018]: 2026-01-20 14:45:14.630 250022 DEBUG nova.virt.hardware [None req-8bb0a9d4-c16b-4f59-88bf-5bca717e9abf 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 14:45:14 compute-0 nova_compute[250018]: 2026-01-20 14:45:14.631 250022 DEBUG nova.virt.hardware [None req-8bb0a9d4-c16b-4f59-88bf-5bca717e9abf 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 14:45:14 compute-0 nova_compute[250018]: 2026-01-20 14:45:14.632 250022 DEBUG nova.virt.hardware [None req-8bb0a9d4-c16b-4f59-88bf-5bca717e9abf 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 14:45:14 compute-0 nova_compute[250018]: 2026-01-20 14:45:14.632 250022 DEBUG nova.virt.hardware [None req-8bb0a9d4-c16b-4f59-88bf-5bca717e9abf 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 14:45:14 compute-0 nova_compute[250018]: 2026-01-20 14:45:14.633 250022 DEBUG nova.virt.hardware [None req-8bb0a9d4-c16b-4f59-88bf-5bca717e9abf 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 14:45:14 compute-0 nova_compute[250018]: 2026-01-20 14:45:14.634 250022 DEBUG nova.virt.hardware [None req-8bb0a9d4-c16b-4f59-88bf-5bca717e9abf 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 14:45:14 compute-0 nova_compute[250018]: 2026-01-20 14:45:14.634 250022 DEBUG nova.virt.hardware [None req-8bb0a9d4-c16b-4f59-88bf-5bca717e9abf 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 14:45:14 compute-0 nova_compute[250018]: 2026-01-20 14:45:14.635 250022 DEBUG nova.virt.hardware [None req-8bb0a9d4-c16b-4f59-88bf-5bca717e9abf 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 14:45:14 compute-0 nova_compute[250018]: 2026-01-20 14:45:14.635 250022 DEBUG nova.virt.hardware [None req-8bb0a9d4-c16b-4f59-88bf-5bca717e9abf 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 14:45:14 compute-0 nova_compute[250018]: 2026-01-20 14:45:14.636 250022 DEBUG nova.virt.hardware [None req-8bb0a9d4-c16b-4f59-88bf-5bca717e9abf 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 14:45:14 compute-0 nova_compute[250018]: 2026-01-20 14:45:14.638 250022 DEBUG nova.virt.hardware [None req-8bb0a9d4-c16b-4f59-88bf-5bca717e9abf 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 14:45:14 compute-0 nova_compute[250018]: 2026-01-20 14:45:14.644 250022 DEBUG oslo_concurrency.processutils [None req-8bb0a9d4-c16b-4f59-88bf-5bca717e9abf 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:45:14 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:45:14 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:45:14 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:45:14.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:45:15 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:45:15 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3957072564' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:45:15 compute-0 nova_compute[250018]: 2026-01-20 14:45:15.096 250022 DEBUG oslo_concurrency.processutils [None req-8bb0a9d4-c16b-4f59-88bf-5bca717e9abf 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:45:15 compute-0 nova_compute[250018]: 2026-01-20 14:45:15.126 250022 DEBUG nova.storage.rbd_utils [None req-8bb0a9d4-c16b-4f59-88bf-5bca717e9abf 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] rbd image 163342b0-95c5-4e11-8a6d-f7a9cd588782_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:45:15 compute-0 nova_compute[250018]: 2026-01-20 14:45:15.131 250022 DEBUG oslo_concurrency.processutils [None req-8bb0a9d4-c16b-4f59-88bf-5bca717e9abf 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:45:15 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:45:15 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/413851278' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:45:15 compute-0 nova_compute[250018]: 2026-01-20 14:45:15.553 250022 DEBUG oslo_concurrency.processutils [None req-8bb0a9d4-c16b-4f59-88bf-5bca717e9abf 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:45:15 compute-0 nova_compute[250018]: 2026-01-20 14:45:15.555 250022 DEBUG nova.virt.libvirt.vif [None req-8bb0a9d4-c16b-4f59-88bf-5bca717e9abf 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:45:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-579419372',display_name='tempest-DeleteServersTestJSON-server-579419372',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-579419372',id=93,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3b31139b2a4e49cba5e7048febf901c4',ramdisk_id='',reservation_id='r-29b1ll4p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-1162922273',owner_user_name='tempest-DeleteServersTestJSON-116
2922273-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:45:09Z,user_data=None,user_id='37e9ef97fbe0448e9fbe32d48b66211f',uuid=163342b0-95c5-4e11-8a6d-f7a9cd588782,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "90244da9-daf0-4694-952a-a087ecc6c3db", "address": "fa:16:3e:53:d4:48", "network": {"id": "fbd5d614-a7d3-4563-913c-104506628e59", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-60721994-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b31139b2a4e49cba5e7048febf901c4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap90244da9-da", "ovs_interfaceid": "90244da9-daf0-4694-952a-a087ecc6c3db", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 14:45:15 compute-0 nova_compute[250018]: 2026-01-20 14:45:15.556 250022 DEBUG nova.network.os_vif_util [None req-8bb0a9d4-c16b-4f59-88bf-5bca717e9abf 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Converting VIF {"id": "90244da9-daf0-4694-952a-a087ecc6c3db", "address": "fa:16:3e:53:d4:48", "network": {"id": "fbd5d614-a7d3-4563-913c-104506628e59", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-60721994-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b31139b2a4e49cba5e7048febf901c4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap90244da9-da", "ovs_interfaceid": "90244da9-daf0-4694-952a-a087ecc6c3db", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:45:15 compute-0 nova_compute[250018]: 2026-01-20 14:45:15.557 250022 DEBUG nova.network.os_vif_util [None req-8bb0a9d4-c16b-4f59-88bf-5bca717e9abf 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:53:d4:48,bridge_name='br-int',has_traffic_filtering=True,id=90244da9-daf0-4694-952a-a087ecc6c3db,network=Network(fbd5d614-a7d3-4563-913c-104506628e59),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap90244da9-da') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:45:15 compute-0 nova_compute[250018]: 2026-01-20 14:45:15.558 250022 DEBUG nova.objects.instance [None req-8bb0a9d4-c16b-4f59-88bf-5bca717e9abf 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Lazy-loading 'pci_devices' on Instance uuid 163342b0-95c5-4e11-8a6d-f7a9cd588782 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:45:15 compute-0 nova_compute[250018]: 2026-01-20 14:45:15.574 250022 DEBUG nova.virt.libvirt.driver [None req-8bb0a9d4-c16b-4f59-88bf-5bca717e9abf 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: 163342b0-95c5-4e11-8a6d-f7a9cd588782] End _get_guest_xml xml=<domain type="kvm">
Jan 20 14:45:15 compute-0 nova_compute[250018]:   <uuid>163342b0-95c5-4e11-8a6d-f7a9cd588782</uuid>
Jan 20 14:45:15 compute-0 nova_compute[250018]:   <name>instance-0000005d</name>
Jan 20 14:45:15 compute-0 nova_compute[250018]:   <memory>131072</memory>
Jan 20 14:45:15 compute-0 nova_compute[250018]:   <vcpu>1</vcpu>
Jan 20 14:45:15 compute-0 nova_compute[250018]:   <metadata>
Jan 20 14:45:15 compute-0 nova_compute[250018]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 14:45:15 compute-0 nova_compute[250018]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 14:45:15 compute-0 nova_compute[250018]:       <nova:name>tempest-DeleteServersTestJSON-server-579419372</nova:name>
Jan 20 14:45:15 compute-0 nova_compute[250018]:       <nova:creationTime>2026-01-20 14:45:14</nova:creationTime>
Jan 20 14:45:15 compute-0 nova_compute[250018]:       <nova:flavor name="m1.nano">
Jan 20 14:45:15 compute-0 nova_compute[250018]:         <nova:memory>128</nova:memory>
Jan 20 14:45:15 compute-0 nova_compute[250018]:         <nova:disk>1</nova:disk>
Jan 20 14:45:15 compute-0 nova_compute[250018]:         <nova:swap>0</nova:swap>
Jan 20 14:45:15 compute-0 nova_compute[250018]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 14:45:15 compute-0 nova_compute[250018]:         <nova:vcpus>1</nova:vcpus>
Jan 20 14:45:15 compute-0 nova_compute[250018]:       </nova:flavor>
Jan 20 14:45:15 compute-0 nova_compute[250018]:       <nova:owner>
Jan 20 14:45:15 compute-0 nova_compute[250018]:         <nova:user uuid="37e9ef97fbe0448e9fbe32d48b66211f">tempest-DeleteServersTestJSON-1162922273-project-member</nova:user>
Jan 20 14:45:15 compute-0 nova_compute[250018]:         <nova:project uuid="3b31139b2a4e49cba5e7048febf901c4">tempest-DeleteServersTestJSON-1162922273</nova:project>
Jan 20 14:45:15 compute-0 nova_compute[250018]:       </nova:owner>
Jan 20 14:45:15 compute-0 nova_compute[250018]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 14:45:15 compute-0 nova_compute[250018]:       <nova:ports>
Jan 20 14:45:15 compute-0 nova_compute[250018]:         <nova:port uuid="90244da9-daf0-4694-952a-a087ecc6c3db">
Jan 20 14:45:15 compute-0 nova_compute[250018]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Jan 20 14:45:15 compute-0 nova_compute[250018]:         </nova:port>
Jan 20 14:45:15 compute-0 nova_compute[250018]:       </nova:ports>
Jan 20 14:45:15 compute-0 nova_compute[250018]:     </nova:instance>
Jan 20 14:45:15 compute-0 nova_compute[250018]:   </metadata>
Jan 20 14:45:15 compute-0 nova_compute[250018]:   <sysinfo type="smbios">
Jan 20 14:45:15 compute-0 nova_compute[250018]:     <system>
Jan 20 14:45:15 compute-0 nova_compute[250018]:       <entry name="manufacturer">RDO</entry>
Jan 20 14:45:15 compute-0 nova_compute[250018]:       <entry name="product">OpenStack Compute</entry>
Jan 20 14:45:15 compute-0 nova_compute[250018]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 14:45:15 compute-0 nova_compute[250018]:       <entry name="serial">163342b0-95c5-4e11-8a6d-f7a9cd588782</entry>
Jan 20 14:45:15 compute-0 nova_compute[250018]:       <entry name="uuid">163342b0-95c5-4e11-8a6d-f7a9cd588782</entry>
Jan 20 14:45:15 compute-0 nova_compute[250018]:       <entry name="family">Virtual Machine</entry>
Jan 20 14:45:15 compute-0 nova_compute[250018]:     </system>
Jan 20 14:45:15 compute-0 nova_compute[250018]:   </sysinfo>
Jan 20 14:45:15 compute-0 nova_compute[250018]:   <os>
Jan 20 14:45:15 compute-0 nova_compute[250018]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 14:45:15 compute-0 nova_compute[250018]:     <boot dev="hd"/>
Jan 20 14:45:15 compute-0 nova_compute[250018]:     <smbios mode="sysinfo"/>
Jan 20 14:45:15 compute-0 nova_compute[250018]:   </os>
Jan 20 14:45:15 compute-0 nova_compute[250018]:   <features>
Jan 20 14:45:15 compute-0 nova_compute[250018]:     <acpi/>
Jan 20 14:45:15 compute-0 nova_compute[250018]:     <apic/>
Jan 20 14:45:15 compute-0 nova_compute[250018]:     <vmcoreinfo/>
Jan 20 14:45:15 compute-0 nova_compute[250018]:   </features>
Jan 20 14:45:15 compute-0 nova_compute[250018]:   <clock offset="utc">
Jan 20 14:45:15 compute-0 nova_compute[250018]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 14:45:15 compute-0 nova_compute[250018]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 14:45:15 compute-0 nova_compute[250018]:     <timer name="hpet" present="no"/>
Jan 20 14:45:15 compute-0 nova_compute[250018]:   </clock>
Jan 20 14:45:15 compute-0 nova_compute[250018]:   <cpu mode="custom" match="exact">
Jan 20 14:45:15 compute-0 nova_compute[250018]:     <model>Nehalem</model>
Jan 20 14:45:15 compute-0 nova_compute[250018]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 14:45:15 compute-0 nova_compute[250018]:   </cpu>
Jan 20 14:45:15 compute-0 nova_compute[250018]:   <devices>
Jan 20 14:45:15 compute-0 nova_compute[250018]:     <disk type="network" device="disk">
Jan 20 14:45:15 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 14:45:15 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/163342b0-95c5-4e11-8a6d-f7a9cd588782_disk">
Jan 20 14:45:15 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:45:15 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:45:15 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:45:15 compute-0 nova_compute[250018]:       </source>
Jan 20 14:45:15 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 14:45:15 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:45:15 compute-0 nova_compute[250018]:       </auth>
Jan 20 14:45:15 compute-0 nova_compute[250018]:       <target dev="vda" bus="virtio"/>
Jan 20 14:45:15 compute-0 nova_compute[250018]:     </disk>
Jan 20 14:45:15 compute-0 nova_compute[250018]:     <disk type="network" device="cdrom">
Jan 20 14:45:15 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 14:45:15 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/163342b0-95c5-4e11-8a6d-f7a9cd588782_disk.config">
Jan 20 14:45:15 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:45:15 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:45:15 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:45:15 compute-0 nova_compute[250018]:       </source>
Jan 20 14:45:15 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 14:45:15 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:45:15 compute-0 nova_compute[250018]:       </auth>
Jan 20 14:45:15 compute-0 nova_compute[250018]:       <target dev="sda" bus="sata"/>
Jan 20 14:45:15 compute-0 nova_compute[250018]:     </disk>
Jan 20 14:45:15 compute-0 nova_compute[250018]:     <interface type="ethernet">
Jan 20 14:45:15 compute-0 nova_compute[250018]:       <mac address="fa:16:3e:53:d4:48"/>
Jan 20 14:45:15 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 14:45:15 compute-0 nova_compute[250018]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 14:45:15 compute-0 nova_compute[250018]:       <mtu size="1442"/>
Jan 20 14:45:15 compute-0 nova_compute[250018]:       <target dev="tap90244da9-da"/>
Jan 20 14:45:15 compute-0 nova_compute[250018]:     </interface>
Jan 20 14:45:15 compute-0 nova_compute[250018]:     <serial type="pty">
Jan 20 14:45:15 compute-0 nova_compute[250018]:       <log file="/var/lib/nova/instances/163342b0-95c5-4e11-8a6d-f7a9cd588782/console.log" append="off"/>
Jan 20 14:45:15 compute-0 nova_compute[250018]:     </serial>
Jan 20 14:45:15 compute-0 nova_compute[250018]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 14:45:15 compute-0 nova_compute[250018]:     <video>
Jan 20 14:45:15 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 14:45:15 compute-0 nova_compute[250018]:     </video>
Jan 20 14:45:15 compute-0 nova_compute[250018]:     <input type="tablet" bus="usb"/>
Jan 20 14:45:15 compute-0 nova_compute[250018]:     <rng model="virtio">
Jan 20 14:45:15 compute-0 nova_compute[250018]:       <backend model="random">/dev/urandom</backend>
Jan 20 14:45:15 compute-0 nova_compute[250018]:     </rng>
Jan 20 14:45:15 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root"/>
Jan 20 14:45:15 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:45:15 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:45:15 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:45:15 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:45:15 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:45:15 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:45:15 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:45:15 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:45:15 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:45:15 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:45:15 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:45:15 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:45:15 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:45:15 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:45:15 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:45:15 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:45:15 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:45:15 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:45:15 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:45:15 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:45:15 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:45:15 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:45:15 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:45:15 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:45:15 compute-0 nova_compute[250018]:     <controller type="usb" index="0"/>
Jan 20 14:45:15 compute-0 nova_compute[250018]:     <memballoon model="virtio">
Jan 20 14:45:15 compute-0 nova_compute[250018]:       <stats period="10"/>
Jan 20 14:45:15 compute-0 nova_compute[250018]:     </memballoon>
Jan 20 14:45:15 compute-0 nova_compute[250018]:   </devices>
Jan 20 14:45:15 compute-0 nova_compute[250018]: </domain>
Jan 20 14:45:15 compute-0 nova_compute[250018]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 14:45:15 compute-0 nova_compute[250018]: 2026-01-20 14:45:15.576 250022 DEBUG nova.compute.manager [None req-8bb0a9d4-c16b-4f59-88bf-5bca717e9abf 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: 163342b0-95c5-4e11-8a6d-f7a9cd588782] Preparing to wait for external event network-vif-plugged-90244da9-daf0-4694-952a-a087ecc6c3db prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 20 14:45:15 compute-0 nova_compute[250018]: 2026-01-20 14:45:15.577 250022 DEBUG oslo_concurrency.lockutils [None req-8bb0a9d4-c16b-4f59-88bf-5bca717e9abf 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Acquiring lock "163342b0-95c5-4e11-8a6d-f7a9cd588782-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:45:15 compute-0 nova_compute[250018]: 2026-01-20 14:45:15.577 250022 DEBUG oslo_concurrency.lockutils [None req-8bb0a9d4-c16b-4f59-88bf-5bca717e9abf 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Lock "163342b0-95c5-4e11-8a6d-f7a9cd588782-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:45:15 compute-0 nova_compute[250018]: 2026-01-20 14:45:15.578 250022 DEBUG oslo_concurrency.lockutils [None req-8bb0a9d4-c16b-4f59-88bf-5bca717e9abf 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Lock "163342b0-95c5-4e11-8a6d-f7a9cd588782-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:45:15 compute-0 nova_compute[250018]: 2026-01-20 14:45:15.579 250022 DEBUG nova.virt.libvirt.vif [None req-8bb0a9d4-c16b-4f59-88bf-5bca717e9abf 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:45:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-579419372',display_name='tempest-DeleteServersTestJSON-server-579419372',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-579419372',id=93,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3b31139b2a4e49cba5e7048febf901c4',ramdisk_id='',reservation_id='r-29b1ll4p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-1162922273',owner_user_name='tempest-DeleteServersTestJSON-1162922273-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:45:09Z,user_data=None,user_id='37e9ef97fbe0448e9fbe32d48b66211f',uuid=163342b0-95c5-4e11-8a6d-f7a9cd588782,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "90244da9-daf0-4694-952a-a087ecc6c3db", "address": "fa:16:3e:53:d4:48", "network": {"id": "fbd5d614-a7d3-4563-913c-104506628e59", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-60721994-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b31139b2a4e49cba5e7048febf901c4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap90244da9-da", "ovs_interfaceid": "90244da9-daf0-4694-952a-a087ecc6c3db", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 14:45:15 compute-0 nova_compute[250018]: 2026-01-20 14:45:15.579 250022 DEBUG nova.network.os_vif_util [None req-8bb0a9d4-c16b-4f59-88bf-5bca717e9abf 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Converting VIF {"id": "90244da9-daf0-4694-952a-a087ecc6c3db", "address": "fa:16:3e:53:d4:48", "network": {"id": "fbd5d614-a7d3-4563-913c-104506628e59", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-60721994-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b31139b2a4e49cba5e7048febf901c4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap90244da9-da", "ovs_interfaceid": "90244da9-daf0-4694-952a-a087ecc6c3db", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:45:15 compute-0 nova_compute[250018]: 2026-01-20 14:45:15.580 250022 DEBUG nova.network.os_vif_util [None req-8bb0a9d4-c16b-4f59-88bf-5bca717e9abf 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:53:d4:48,bridge_name='br-int',has_traffic_filtering=True,id=90244da9-daf0-4694-952a-a087ecc6c3db,network=Network(fbd5d614-a7d3-4563-913c-104506628e59),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap90244da9-da') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:45:15 compute-0 nova_compute[250018]: 2026-01-20 14:45:15.580 250022 DEBUG os_vif [None req-8bb0a9d4-c16b-4f59-88bf-5bca717e9abf 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:53:d4:48,bridge_name='br-int',has_traffic_filtering=True,id=90244da9-daf0-4694-952a-a087ecc6c3db,network=Network(fbd5d614-a7d3-4563-913c-104506628e59),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap90244da9-da') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 14:45:15 compute-0 nova_compute[250018]: 2026-01-20 14:45:15.581 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:45:15 compute-0 nova_compute[250018]: 2026-01-20 14:45:15.582 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:45:15 compute-0 nova_compute[250018]: 2026-01-20 14:45:15.582 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:45:15 compute-0 nova_compute[250018]: 2026-01-20 14:45:15.586 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:45:15 compute-0 nova_compute[250018]: 2026-01-20 14:45:15.586 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap90244da9-da, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:45:15 compute-0 nova_compute[250018]: 2026-01-20 14:45:15.587 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap90244da9-da, col_values=(('external_ids', {'iface-id': '90244da9-daf0-4694-952a-a087ecc6c3db', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:53:d4:48', 'vm-uuid': '163342b0-95c5-4e11-8a6d-f7a9cd588782'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:45:15 compute-0 nova_compute[250018]: 2026-01-20 14:45:15.588 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:45:15 compute-0 NetworkManager[48960]: <info>  [1768920315.5894] manager: (tap90244da9-da): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/149)
Jan 20 14:45:15 compute-0 nova_compute[250018]: 2026-01-20 14:45:15.591 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:45:15 compute-0 nova_compute[250018]: 2026-01-20 14:45:15.594 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:45:15 compute-0 nova_compute[250018]: 2026-01-20 14:45:15.595 250022 INFO os_vif [None req-8bb0a9d4-c16b-4f59-88bf-5bca717e9abf 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:53:d4:48,bridge_name='br-int',has_traffic_filtering=True,id=90244da9-daf0-4694-952a-a087ecc6c3db,network=Network(fbd5d614-a7d3-4563-913c-104506628e59),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap90244da9-da')
Jan 20 14:45:15 compute-0 nova_compute[250018]: 2026-01-20 14:45:15.642 250022 DEBUG nova.virt.libvirt.driver [None req-8bb0a9d4-c16b-4f59-88bf-5bca717e9abf 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:45:15 compute-0 nova_compute[250018]: 2026-01-20 14:45:15.643 250022 DEBUG nova.virt.libvirt.driver [None req-8bb0a9d4-c16b-4f59-88bf-5bca717e9abf 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:45:15 compute-0 nova_compute[250018]: 2026-01-20 14:45:15.643 250022 DEBUG nova.virt.libvirt.driver [None req-8bb0a9d4-c16b-4f59-88bf-5bca717e9abf 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] No VIF found with MAC fa:16:3e:53:d4:48, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 14:45:15 compute-0 nova_compute[250018]: 2026-01-20 14:45:15.643 250022 INFO nova.virt.libvirt.driver [None req-8bb0a9d4-c16b-4f59-88bf-5bca717e9abf 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: 163342b0-95c5-4e11-8a6d-f7a9cd588782] Using config drive
Jan 20 14:45:15 compute-0 nova_compute[250018]: 2026-01-20 14:45:15.666 250022 DEBUG nova.storage.rbd_utils [None req-8bb0a9d4-c16b-4f59-88bf-5bca717e9abf 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] rbd image 163342b0-95c5-4e11-8a6d-f7a9cd588782_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:45:15 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1707: 321 pgs: 321 active+clean; 347 MiB data, 851 MiB used, 20 GiB / 21 GiB avail; 706 KiB/s rd, 6.2 MiB/s wr, 180 op/s
Jan 20 14:45:16 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3957072564' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:45:16 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/413851278' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:45:16 compute-0 ceph-mon[74360]: pgmap v1707: 321 pgs: 321 active+clean; 347 MiB data, 851 MiB used, 20 GiB / 21 GiB avail; 706 KiB/s rd, 6.2 MiB/s wr, 180 op/s
Jan 20 14:45:16 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:45:16 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:45:16 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:45:16.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:45:16 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:45:16 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:45:16 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:45:16.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:45:17 compute-0 nova_compute[250018]: 2026-01-20 14:45:17.544 250022 INFO nova.virt.libvirt.driver [None req-8bb0a9d4-c16b-4f59-88bf-5bca717e9abf 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: 163342b0-95c5-4e11-8a6d-f7a9cd588782] Creating config drive at /var/lib/nova/instances/163342b0-95c5-4e11-8a6d-f7a9cd588782/disk.config
Jan 20 14:45:17 compute-0 nova_compute[250018]: 2026-01-20 14:45:17.551 250022 DEBUG oslo_concurrency.processutils [None req-8bb0a9d4-c16b-4f59-88bf-5bca717e9abf 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/163342b0-95c5-4e11-8a6d-f7a9cd588782/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpaz4l0ylt execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:45:17 compute-0 nova_compute[250018]: 2026-01-20 14:45:17.707 250022 DEBUG oslo_concurrency.processutils [None req-8bb0a9d4-c16b-4f59-88bf-5bca717e9abf 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/163342b0-95c5-4e11-8a6d-f7a9cd588782/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpaz4l0ylt" returned: 0 in 0.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:45:17 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1708: 321 pgs: 321 active+clean; 368 MiB data, 863 MiB used, 20 GiB / 21 GiB avail; 327 KiB/s rd, 4.5 MiB/s wr, 113 op/s
Jan 20 14:45:17 compute-0 nova_compute[250018]: 2026-01-20 14:45:17.746 250022 DEBUG nova.storage.rbd_utils [None req-8bb0a9d4-c16b-4f59-88bf-5bca717e9abf 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] rbd image 163342b0-95c5-4e11-8a6d-f7a9cd588782_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:45:17 compute-0 nova_compute[250018]: 2026-01-20 14:45:17.750 250022 DEBUG oslo_concurrency.processutils [None req-8bb0a9d4-c16b-4f59-88bf-5bca717e9abf 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/163342b0-95c5-4e11-8a6d-f7a9cd588782/disk.config 163342b0-95c5-4e11-8a6d-f7a9cd588782_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:45:17 compute-0 nova_compute[250018]: 2026-01-20 14:45:17.781 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:45:17 compute-0 ceph-mon[74360]: pgmap v1708: 321 pgs: 321 active+clean; 368 MiB data, 863 MiB used, 20 GiB / 21 GiB avail; 327 KiB/s rd, 4.5 MiB/s wr, 113 op/s
Jan 20 14:45:17 compute-0 nova_compute[250018]: 2026-01-20 14:45:17.921 250022 DEBUG oslo_concurrency.processutils [None req-8bb0a9d4-c16b-4f59-88bf-5bca717e9abf 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/163342b0-95c5-4e11-8a6d-f7a9cd588782/disk.config 163342b0-95c5-4e11-8a6d-f7a9cd588782_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.171s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:45:17 compute-0 nova_compute[250018]: 2026-01-20 14:45:17.922 250022 INFO nova.virt.libvirt.driver [None req-8bb0a9d4-c16b-4f59-88bf-5bca717e9abf 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: 163342b0-95c5-4e11-8a6d-f7a9cd588782] Deleting local config drive /var/lib/nova/instances/163342b0-95c5-4e11-8a6d-f7a9cd588782/disk.config because it was imported into RBD.
Jan 20 14:45:17 compute-0 kernel: tap90244da9-da: entered promiscuous mode
Jan 20 14:45:17 compute-0 nova_compute[250018]: 2026-01-20 14:45:17.985 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:45:17 compute-0 ovn_controller[148666]: 2026-01-20T14:45:17Z|00294|binding|INFO|Claiming lport 90244da9-daf0-4694-952a-a087ecc6c3db for this chassis.
Jan 20 14:45:17 compute-0 ovn_controller[148666]: 2026-01-20T14:45:17Z|00295|binding|INFO|90244da9-daf0-4694-952a-a087ecc6c3db: Claiming fa:16:3e:53:d4:48 10.100.0.4
Jan 20 14:45:17 compute-0 NetworkManager[48960]: <info>  [1768920317.9879] manager: (tap90244da9-da): new Tun device (/org/freedesktop/NetworkManager/Devices/150)
Jan 20 14:45:17 compute-0 nova_compute[250018]: 2026-01-20 14:45:17.989 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:45:18 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:45:18.003 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:53:d4:48 10.100.0.4'], port_security=['fa:16:3e:53:d4:48 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '163342b0-95c5-4e11-8a6d-f7a9cd588782', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fbd5d614-a7d3-4563-913c-104506628e59', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3b31139b2a4e49cba5e7048febf901c4', 'neutron:revision_number': '2', 'neutron:security_group_ids': '117d6f57-074c-4b36-b375-42e0ab117254', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c42c6982-be52-495a-8746-42a46932572f, chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=90244da9-daf0-4694-952a-a087ecc6c3db) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:45:18 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:45:18.004 160071 INFO neutron.agent.ovn.metadata.agent [-] Port 90244da9-daf0-4694-952a-a087ecc6c3db in datapath fbd5d614-a7d3-4563-913c-104506628e59 bound to our chassis
Jan 20 14:45:18 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:45:18.006 160071 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fbd5d614-a7d3-4563-913c-104506628e59
Jan 20 14:45:18 compute-0 systemd-udevd[302673]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:45:18 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:45:18.023 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[f0d23476-14d1-4810-9a49-7a25fe6409a1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:45:18 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:45:18.024 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapfbd5d614-a1 in ovnmeta-fbd5d614-a7d3-4563-913c-104506628e59 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 14:45:18 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:45:18.027 257604 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapfbd5d614-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 14:45:18 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:45:18.027 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[d1f5c817-4d04-4e8c-bee0-13105ff20109]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:45:18 compute-0 systemd-machined[216401]: New machine qemu-39-instance-0000005d.
Jan 20 14:45:18 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:45:18.028 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[2165b4c2-3a41-4376-bad6-6a94784c8789]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:45:18 compute-0 NetworkManager[48960]: <info>  [1768920318.0364] device (tap90244da9-da): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 14:45:18 compute-0 NetworkManager[48960]: <info>  [1768920318.0383] device (tap90244da9-da): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 14:45:18 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:45:18.041 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[b7bbc8c3-800b-48bf-accb-b4010ace88ed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:45:18 compute-0 systemd[1]: Started Virtual Machine qemu-39-instance-0000005d.
Jan 20 14:45:18 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:45:18.068 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[7e46299e-824b-43e1-aa63-0510c25a25a4]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:45:18 compute-0 nova_compute[250018]: 2026-01-20 14:45:18.073 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:45:18 compute-0 ovn_controller[148666]: 2026-01-20T14:45:18Z|00296|binding|INFO|Setting lport 90244da9-daf0-4694-952a-a087ecc6c3db ovn-installed in OVS
Jan 20 14:45:18 compute-0 ovn_controller[148666]: 2026-01-20T14:45:18Z|00297|binding|INFO|Setting lport 90244da9-daf0-4694-952a-a087ecc6c3db up in Southbound
Jan 20 14:45:18 compute-0 nova_compute[250018]: 2026-01-20 14:45:18.077 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:45:18 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:45:18.098 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[78708165-05cb-454b-8db8-9ee9ac6f9091]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:45:18 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:45:18.103 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[faf33ad7-c7d4-4c75-952c-b9a1e0c2d91a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:45:18 compute-0 NetworkManager[48960]: <info>  [1768920318.1040] manager: (tapfbd5d614-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/151)
Jan 20 14:45:18 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:45:18.133 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[2af0dd6e-3e4f-4503-8edb-7d041032a069]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:45:18 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:45:18.136 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[3eba7a4f-cdd8-4a2b-8013-83b21ae9f6d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:45:18 compute-0 NetworkManager[48960]: <info>  [1768920318.1563] device (tapfbd5d614-a0): carrier: link connected
Jan 20 14:45:18 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:45:18.161 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[c5a4d75d-fa70-48b7-9557-cda48faa7b46]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:45:18 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:45:18.179 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[927ec592-8357-4cc0-a7a0-807d36e27567]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfbd5d614-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5c:38:be'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 95], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 625887, 'reachable_time': 22493, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 302705, 'error': None, 'target': 'ovnmeta-fbd5d614-a7d3-4563-913c-104506628e59', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:45:18 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:45:18 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:45:18.194 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[0b680f05-65a3-4543-bc52-e648d323472c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe5c:38be'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 625887, 'tstamp': 625887}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 302706, 'error': None, 'target': 'ovnmeta-fbd5d614-a7d3-4563-913c-104506628e59', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:45:18 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:45:18.213 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[834ba094-b53b-4c2f-b175-9b9a1c863249]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfbd5d614-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5c:38:be'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 95], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 625887, 'reachable_time': 22493, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 302707, 'error': None, 'target': 'ovnmeta-fbd5d614-a7d3-4563-913c-104506628e59', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:45:18 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:45:18.243 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[a95776e1-5b76-44b0-9758-6fe7fa7ad0e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:45:18 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:45:18 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:45:18 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:45:18.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:45:18 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:45:18.304 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[590ecc9d-7c3a-4922-8ddf-42abf094ca78]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:45:18 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:45:18.306 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfbd5d614-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:45:18 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:45:18.306 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:45:18 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:45:18.306 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfbd5d614-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:45:18 compute-0 nova_compute[250018]: 2026-01-20 14:45:18.308 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:45:18 compute-0 kernel: tapfbd5d614-a0: entered promiscuous mode
Jan 20 14:45:18 compute-0 NetworkManager[48960]: <info>  [1768920318.3093] manager: (tapfbd5d614-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/152)
Jan 20 14:45:18 compute-0 nova_compute[250018]: 2026-01-20 14:45:18.310 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:45:18 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:45:18.313 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfbd5d614-a0, col_values=(('external_ids', {'iface-id': 'b370b74e-dca0-4ff7-a96f-85b392e20721'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:45:18 compute-0 nova_compute[250018]: 2026-01-20 14:45:18.314 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:45:18 compute-0 ovn_controller[148666]: 2026-01-20T14:45:18Z|00298|binding|INFO|Releasing lport b370b74e-dca0-4ff7-a96f-85b392e20721 from this chassis (sb_readonly=0)
Jan 20 14:45:18 compute-0 nova_compute[250018]: 2026-01-20 14:45:18.314 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:45:18 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:45:18.317 160071 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/fbd5d614-a7d3-4563-913c-104506628e59.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/fbd5d614-a7d3-4563-913c-104506628e59.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 14:45:18 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:45:18.317 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[e94c045d-04d7-4876-a0b4-94194edc76fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:45:18 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:45:18.318 160071 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 14:45:18 compute-0 ovn_metadata_agent[160049]: global
Jan 20 14:45:18 compute-0 ovn_metadata_agent[160049]:     log         /dev/log local0 debug
Jan 20 14:45:18 compute-0 ovn_metadata_agent[160049]:     log-tag     haproxy-metadata-proxy-fbd5d614-a7d3-4563-913c-104506628e59
Jan 20 14:45:18 compute-0 ovn_metadata_agent[160049]:     user        root
Jan 20 14:45:18 compute-0 ovn_metadata_agent[160049]:     group       root
Jan 20 14:45:18 compute-0 ovn_metadata_agent[160049]:     maxconn     1024
Jan 20 14:45:18 compute-0 ovn_metadata_agent[160049]:     pidfile     /var/lib/neutron/external/pids/fbd5d614-a7d3-4563-913c-104506628e59.pid.haproxy
Jan 20 14:45:18 compute-0 ovn_metadata_agent[160049]:     daemon
Jan 20 14:45:18 compute-0 ovn_metadata_agent[160049]: 
Jan 20 14:45:18 compute-0 ovn_metadata_agent[160049]: defaults
Jan 20 14:45:18 compute-0 ovn_metadata_agent[160049]:     log global
Jan 20 14:45:18 compute-0 ovn_metadata_agent[160049]:     mode http
Jan 20 14:45:18 compute-0 ovn_metadata_agent[160049]:     option httplog
Jan 20 14:45:18 compute-0 ovn_metadata_agent[160049]:     option dontlognull
Jan 20 14:45:18 compute-0 ovn_metadata_agent[160049]:     option http-server-close
Jan 20 14:45:18 compute-0 ovn_metadata_agent[160049]:     option forwardfor
Jan 20 14:45:18 compute-0 ovn_metadata_agent[160049]:     retries                 3
Jan 20 14:45:18 compute-0 ovn_metadata_agent[160049]:     timeout http-request    30s
Jan 20 14:45:18 compute-0 ovn_metadata_agent[160049]:     timeout connect         30s
Jan 20 14:45:18 compute-0 ovn_metadata_agent[160049]:     timeout client          32s
Jan 20 14:45:18 compute-0 ovn_metadata_agent[160049]:     timeout server          32s
Jan 20 14:45:18 compute-0 ovn_metadata_agent[160049]:     timeout http-keep-alive 30s
Jan 20 14:45:18 compute-0 ovn_metadata_agent[160049]: 
Jan 20 14:45:18 compute-0 ovn_metadata_agent[160049]: 
Jan 20 14:45:18 compute-0 ovn_metadata_agent[160049]: listen listener
Jan 20 14:45:18 compute-0 ovn_metadata_agent[160049]:     bind 169.254.169.254:80
Jan 20 14:45:18 compute-0 ovn_metadata_agent[160049]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 14:45:18 compute-0 ovn_metadata_agent[160049]:     http-request add-header X-OVN-Network-ID fbd5d614-a7d3-4563-913c-104506628e59
Jan 20 14:45:18 compute-0 ovn_metadata_agent[160049]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 14:45:18 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:45:18.319 160071 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-fbd5d614-a7d3-4563-913c-104506628e59', 'env', 'PROCESS_TAG=haproxy-fbd5d614-a7d3-4563-913c-104506628e59', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/fbd5d614-a7d3-4563-913c-104506628e59.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 14:45:18 compute-0 nova_compute[250018]: 2026-01-20 14:45:18.328 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:45:18 compute-0 podman[302739]: 2026-01-20 14:45:18.717502125 +0000 UTC m=+0.058337402 container create 832b2a0aa7a2980c4f4b5fadaf661331f85b1ba104679371aff6bfcaf85d6a9a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fbd5d614-a7d3-4563-913c-104506628e59, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 20 14:45:18 compute-0 systemd[1]: Started libpod-conmon-832b2a0aa7a2980c4f4b5fadaf661331f85b1ba104679371aff6bfcaf85d6a9a.scope.
Jan 20 14:45:18 compute-0 podman[302739]: 2026-01-20 14:45:18.682872427 +0000 UTC m=+0.023707764 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 14:45:18 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:45:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3a8f6b132cb0bcf514a6345c7ff9d6497e014c2b1cbb95ae132142b6ab33333/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 14:45:18 compute-0 nova_compute[250018]: 2026-01-20 14:45:18.813 250022 DEBUG nova.compute.manager [req-2d7dcd48-3a7d-4bf4-860b-3c6fa103b334 req-e07e6c25-9ddf-4374-8ba0-917b3be38909 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 163342b0-95c5-4e11-8a6d-f7a9cd588782] Received event network-vif-plugged-90244da9-daf0-4694-952a-a087ecc6c3db external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:45:18 compute-0 nova_compute[250018]: 2026-01-20 14:45:18.814 250022 DEBUG oslo_concurrency.lockutils [req-2d7dcd48-3a7d-4bf4-860b-3c6fa103b334 req-e07e6c25-9ddf-4374-8ba0-917b3be38909 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "163342b0-95c5-4e11-8a6d-f7a9cd588782-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:45:18 compute-0 nova_compute[250018]: 2026-01-20 14:45:18.814 250022 DEBUG oslo_concurrency.lockutils [req-2d7dcd48-3a7d-4bf4-860b-3c6fa103b334 req-e07e6c25-9ddf-4374-8ba0-917b3be38909 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "163342b0-95c5-4e11-8a6d-f7a9cd588782-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:45:18 compute-0 nova_compute[250018]: 2026-01-20 14:45:18.815 250022 DEBUG oslo_concurrency.lockutils [req-2d7dcd48-3a7d-4bf4-860b-3c6fa103b334 req-e07e6c25-9ddf-4374-8ba0-917b3be38909 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "163342b0-95c5-4e11-8a6d-f7a9cd588782-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:45:18 compute-0 nova_compute[250018]: 2026-01-20 14:45:18.815 250022 DEBUG nova.compute.manager [req-2d7dcd48-3a7d-4bf4-860b-3c6fa103b334 req-e07e6c25-9ddf-4374-8ba0-917b3be38909 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 163342b0-95c5-4e11-8a6d-f7a9cd588782] Processing event network-vif-plugged-90244da9-daf0-4694-952a-a087ecc6c3db _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 20 14:45:18 compute-0 podman[302739]: 2026-01-20 14:45:18.816144838 +0000 UTC m=+0.156980135 container init 832b2a0aa7a2980c4f4b5fadaf661331f85b1ba104679371aff6bfcaf85d6a9a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fbd5d614-a7d3-4563-913c-104506628e59, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS)
Jan 20 14:45:18 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/3296741815' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:45:18 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/129227990' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:45:18 compute-0 podman[302739]: 2026-01-20 14:45:18.822218712 +0000 UTC m=+0.163053989 container start 832b2a0aa7a2980c4f4b5fadaf661331f85b1ba104679371aff6bfcaf85d6a9a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fbd5d614-a7d3-4563-913c-104506628e59, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 14:45:18 compute-0 neutron-haproxy-ovnmeta-fbd5d614-a7d3-4563-913c-104506628e59[302755]: [NOTICE]   (302759) : New worker (302761) forked
Jan 20 14:45:18 compute-0 neutron-haproxy-ovnmeta-fbd5d614-a7d3-4563-913c-104506628e59[302755]: [NOTICE]   (302759) : Loading success.
Jan 20 14:45:18 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:45:18 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:45:18 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:45:18.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:45:19 compute-0 nova_compute[250018]: 2026-01-20 14:45:19.035 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768920319.0348222, 163342b0-95c5-4e11-8a6d-f7a9cd588782 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:45:19 compute-0 nova_compute[250018]: 2026-01-20 14:45:19.036 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 163342b0-95c5-4e11-8a6d-f7a9cd588782] VM Started (Lifecycle Event)
Jan 20 14:45:19 compute-0 nova_compute[250018]: 2026-01-20 14:45:19.038 250022 DEBUG nova.compute.manager [None req-8bb0a9d4-c16b-4f59-88bf-5bca717e9abf 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: 163342b0-95c5-4e11-8a6d-f7a9cd588782] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 14:45:19 compute-0 nova_compute[250018]: 2026-01-20 14:45:19.041 250022 DEBUG nova.virt.libvirt.driver [None req-8bb0a9d4-c16b-4f59-88bf-5bca717e9abf 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: 163342b0-95c5-4e11-8a6d-f7a9cd588782] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 14:45:19 compute-0 nova_compute[250018]: 2026-01-20 14:45:19.044 250022 INFO nova.virt.libvirt.driver [-] [instance: 163342b0-95c5-4e11-8a6d-f7a9cd588782] Instance spawned successfully.
Jan 20 14:45:19 compute-0 nova_compute[250018]: 2026-01-20 14:45:19.044 250022 DEBUG nova.virt.libvirt.driver [None req-8bb0a9d4-c16b-4f59-88bf-5bca717e9abf 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: 163342b0-95c5-4e11-8a6d-f7a9cd588782] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 14:45:19 compute-0 nova_compute[250018]: 2026-01-20 14:45:19.055 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 163342b0-95c5-4e11-8a6d-f7a9cd588782] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:45:19 compute-0 nova_compute[250018]: 2026-01-20 14:45:19.060 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 163342b0-95c5-4e11-8a6d-f7a9cd588782] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:45:19 compute-0 nova_compute[250018]: 2026-01-20 14:45:19.063 250022 DEBUG nova.virt.libvirt.driver [None req-8bb0a9d4-c16b-4f59-88bf-5bca717e9abf 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: 163342b0-95c5-4e11-8a6d-f7a9cd588782] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:45:19 compute-0 nova_compute[250018]: 2026-01-20 14:45:19.064 250022 DEBUG nova.virt.libvirt.driver [None req-8bb0a9d4-c16b-4f59-88bf-5bca717e9abf 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: 163342b0-95c5-4e11-8a6d-f7a9cd588782] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:45:19 compute-0 nova_compute[250018]: 2026-01-20 14:45:19.064 250022 DEBUG nova.virt.libvirt.driver [None req-8bb0a9d4-c16b-4f59-88bf-5bca717e9abf 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: 163342b0-95c5-4e11-8a6d-f7a9cd588782] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:45:19 compute-0 nova_compute[250018]: 2026-01-20 14:45:19.065 250022 DEBUG nova.virt.libvirt.driver [None req-8bb0a9d4-c16b-4f59-88bf-5bca717e9abf 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: 163342b0-95c5-4e11-8a6d-f7a9cd588782] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:45:19 compute-0 nova_compute[250018]: 2026-01-20 14:45:19.065 250022 DEBUG nova.virt.libvirt.driver [None req-8bb0a9d4-c16b-4f59-88bf-5bca717e9abf 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: 163342b0-95c5-4e11-8a6d-f7a9cd588782] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:45:19 compute-0 nova_compute[250018]: 2026-01-20 14:45:19.066 250022 DEBUG nova.virt.libvirt.driver [None req-8bb0a9d4-c16b-4f59-88bf-5bca717e9abf 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: 163342b0-95c5-4e11-8a6d-f7a9cd588782] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:45:19 compute-0 nova_compute[250018]: 2026-01-20 14:45:19.088 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 163342b0-95c5-4e11-8a6d-f7a9cd588782] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:45:19 compute-0 nova_compute[250018]: 2026-01-20 14:45:19.088 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768920319.0350192, 163342b0-95c5-4e11-8a6d-f7a9cd588782 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:45:19 compute-0 nova_compute[250018]: 2026-01-20 14:45:19.089 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 163342b0-95c5-4e11-8a6d-f7a9cd588782] VM Paused (Lifecycle Event)
Jan 20 14:45:19 compute-0 nova_compute[250018]: 2026-01-20 14:45:19.112 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 163342b0-95c5-4e11-8a6d-f7a9cd588782] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:45:19 compute-0 nova_compute[250018]: 2026-01-20 14:45:19.115 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768920319.0404997, 163342b0-95c5-4e11-8a6d-f7a9cd588782 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:45:19 compute-0 nova_compute[250018]: 2026-01-20 14:45:19.115 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 163342b0-95c5-4e11-8a6d-f7a9cd588782] VM Resumed (Lifecycle Event)
Jan 20 14:45:19 compute-0 nova_compute[250018]: 2026-01-20 14:45:19.129 250022 INFO nova.compute.manager [None req-8bb0a9d4-c16b-4f59-88bf-5bca717e9abf 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: 163342b0-95c5-4e11-8a6d-f7a9cd588782] Took 9.14 seconds to spawn the instance on the hypervisor.
Jan 20 14:45:19 compute-0 nova_compute[250018]: 2026-01-20 14:45:19.129 250022 DEBUG nova.compute.manager [None req-8bb0a9d4-c16b-4f59-88bf-5bca717e9abf 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: 163342b0-95c5-4e11-8a6d-f7a9cd588782] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:45:19 compute-0 nova_compute[250018]: 2026-01-20 14:45:19.137 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 163342b0-95c5-4e11-8a6d-f7a9cd588782] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:45:19 compute-0 nova_compute[250018]: 2026-01-20 14:45:19.140 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 163342b0-95c5-4e11-8a6d-f7a9cd588782] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:45:19 compute-0 nova_compute[250018]: 2026-01-20 14:45:19.174 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 163342b0-95c5-4e11-8a6d-f7a9cd588782] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:45:19 compute-0 nova_compute[250018]: 2026-01-20 14:45:19.200 250022 INFO nova.compute.manager [None req-8bb0a9d4-c16b-4f59-88bf-5bca717e9abf 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: 163342b0-95c5-4e11-8a6d-f7a9cd588782] Took 10.20 seconds to build instance.
Jan 20 14:45:19 compute-0 nova_compute[250018]: 2026-01-20 14:45:19.217 250022 DEBUG oslo_concurrency.lockutils [None req-8bb0a9d4-c16b-4f59-88bf-5bca717e9abf 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Lock "163342b0-95c5-4e11-8a6d-f7a9cd588782" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.282s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:45:19 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1709: 321 pgs: 321 active+clean; 372 MiB data, 864 MiB used, 20 GiB / 21 GiB avail; 42 KiB/s rd, 3.6 MiB/s wr, 59 op/s
Jan 20 14:45:19 compute-0 ceph-mon[74360]: pgmap v1709: 321 pgs: 321 active+clean; 372 MiB data, 864 MiB used, 20 GiB / 21 GiB avail; 42 KiB/s rd, 3.6 MiB/s wr, 59 op/s
Jan 20 14:45:20 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:45:20 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:45:20 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:45:20.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:45:20 compute-0 nova_compute[250018]: 2026-01-20 14:45:20.588 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:45:20 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:45:20 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:45:20 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:45:20.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:45:20 compute-0 nova_compute[250018]: 2026-01-20 14:45:20.987 250022 DEBUG nova.compute.manager [req-2e220d8a-6466-4fe5-a5b6-9eb6249030ea req-022611ce-a91e-4c89-b552-9da921ef434b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 163342b0-95c5-4e11-8a6d-f7a9cd588782] Received event network-vif-plugged-90244da9-daf0-4694-952a-a087ecc6c3db external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:45:20 compute-0 nova_compute[250018]: 2026-01-20 14:45:20.988 250022 DEBUG oslo_concurrency.lockutils [req-2e220d8a-6466-4fe5-a5b6-9eb6249030ea req-022611ce-a91e-4c89-b552-9da921ef434b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "163342b0-95c5-4e11-8a6d-f7a9cd588782-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:45:20 compute-0 nova_compute[250018]: 2026-01-20 14:45:20.989 250022 DEBUG oslo_concurrency.lockutils [req-2e220d8a-6466-4fe5-a5b6-9eb6249030ea req-022611ce-a91e-4c89-b552-9da921ef434b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "163342b0-95c5-4e11-8a6d-f7a9cd588782-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:45:20 compute-0 nova_compute[250018]: 2026-01-20 14:45:20.990 250022 DEBUG oslo_concurrency.lockutils [req-2e220d8a-6466-4fe5-a5b6-9eb6249030ea req-022611ce-a91e-4c89-b552-9da921ef434b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "163342b0-95c5-4e11-8a6d-f7a9cd588782-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:45:20 compute-0 nova_compute[250018]: 2026-01-20 14:45:20.990 250022 DEBUG nova.compute.manager [req-2e220d8a-6466-4fe5-a5b6-9eb6249030ea req-022611ce-a91e-4c89-b552-9da921ef434b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 163342b0-95c5-4e11-8a6d-f7a9cd588782] No waiting events found dispatching network-vif-plugged-90244da9-daf0-4694-952a-a087ecc6c3db pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:45:20 compute-0 nova_compute[250018]: 2026-01-20 14:45:20.991 250022 WARNING nova.compute.manager [req-2e220d8a-6466-4fe5-a5b6-9eb6249030ea req-022611ce-a91e-4c89-b552-9da921ef434b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 163342b0-95c5-4e11-8a6d-f7a9cd588782] Received unexpected event network-vif-plugged-90244da9-daf0-4694-952a-a087ecc6c3db for instance with vm_state active and task_state None.
Jan 20 14:45:21 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1710: 321 pgs: 321 active+clean; 372 MiB data, 864 MiB used, 20 GiB / 21 GiB avail; 635 KiB/s rd, 3.6 MiB/s wr, 86 op/s
Jan 20 14:45:21 compute-0 ceph-mon[74360]: pgmap v1710: 321 pgs: 321 active+clean; 372 MiB data, 864 MiB used, 20 GiB / 21 GiB avail; 635 KiB/s rd, 3.6 MiB/s wr, 86 op/s
Jan 20 14:45:22 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:45:22 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:45:22 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:45:22.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:45:22 compute-0 nova_compute[250018]: 2026-01-20 14:45:22.300 250022 DEBUG oslo_concurrency.lockutils [None req-04e987f3-9952-40f8-bedd-ab998c4a4453 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Acquiring lock "163342b0-95c5-4e11-8a6d-f7a9cd588782" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:45:22 compute-0 nova_compute[250018]: 2026-01-20 14:45:22.301 250022 DEBUG oslo_concurrency.lockutils [None req-04e987f3-9952-40f8-bedd-ab998c4a4453 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Lock "163342b0-95c5-4e11-8a6d-f7a9cd588782" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:45:22 compute-0 nova_compute[250018]: 2026-01-20 14:45:22.302 250022 INFO nova.compute.manager [None req-04e987f3-9952-40f8-bedd-ab998c4a4453 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: 163342b0-95c5-4e11-8a6d-f7a9cd588782] Shelving
Jan 20 14:45:22 compute-0 nova_compute[250018]: 2026-01-20 14:45:22.332 250022 DEBUG nova.virt.libvirt.driver [None req-04e987f3-9952-40f8-bedd-ab998c4a4453 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: 163342b0-95c5-4e11-8a6d-f7a9cd588782] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Jan 20 14:45:22 compute-0 podman[302815]: 2026-01-20 14:45:22.486190202 +0000 UTC m=+0.062897945 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 20 14:45:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:45:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:45:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:45:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:45:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:45:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:45:22 compute-0 podman[302814]: 2026-01-20 14:45:22.523420471 +0000 UTC m=+0.111228175 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, 
config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Jan 20 14:45:22 compute-0 nova_compute[250018]: 2026-01-20 14:45:22.722 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:45:22 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:45:22 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:45:22 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:45:22.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:45:23 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:45:23 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1711: 321 pgs: 321 active+clean; 334 MiB data, 864 MiB used, 20 GiB / 21 GiB avail; 853 KiB/s rd, 3.6 MiB/s wr, 114 op/s
Jan 20 14:45:23 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/1557210698' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:45:24 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:45:24 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:45:24 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:45:24.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:45:24 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:45:24 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:45:24 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:45:24.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:45:25 compute-0 ceph-mon[74360]: pgmap v1711: 321 pgs: 321 active+clean; 334 MiB data, 864 MiB used, 20 GiB / 21 GiB avail; 853 KiB/s rd, 3.6 MiB/s wr, 114 op/s
Jan 20 14:45:25 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/4037203044' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:45:25 compute-0 nova_compute[250018]: 2026-01-20 14:45:25.591 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:45:25 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1712: 321 pgs: 321 active+clean; 323 MiB data, 859 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 5.0 MiB/s wr, 210 op/s
Jan 20 14:45:26 compute-0 ceph-mon[74360]: pgmap v1712: 321 pgs: 321 active+clean; 323 MiB data, 859 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 5.0 MiB/s wr, 210 op/s
Jan 20 14:45:26 compute-0 ceph-mon[74360]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #81. Immutable memtables: 0.
Jan 20 14:45:26 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:45:26.079050) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 14:45:26 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:856] [default] [JOB 45] Flushing memtable with next log file: 81
Jan 20 14:45:26 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768920326079180, "job": 45, "event": "flush_started", "num_memtables": 1, "num_entries": 2103, "num_deletes": 251, "total_data_size": 3463660, "memory_usage": 3516016, "flush_reason": "Manual Compaction"}
Jan 20 14:45:26 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:885] [default] [JOB 45] Level-0 flush table #82: started
Jan 20 14:45:26 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768920326100427, "cf_name": "default", "job": 45, "event": "table_file_creation", "file_number": 82, "file_size": 3397927, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 36758, "largest_seqno": 38860, "table_properties": {"data_size": 3388811, "index_size": 5546, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 20370, "raw_average_key_size": 20, "raw_value_size": 3370006, "raw_average_value_size": 3400, "num_data_blocks": 242, "num_entries": 991, "num_filter_entries": 991, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768920134, "oldest_key_time": 1768920134, "file_creation_time": 1768920326, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1688fb6f-f397-4aac-a0e8-874ff91d97a7", "db_session_id": "5V3N2TVXYZBCXP55EZHK", "orig_file_number": 82, "seqno_to_time_mapping": "N/A"}}
Jan 20 14:45:26 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 45] Flush lasted 21385 microseconds, and 7385 cpu microseconds.
Jan 20 14:45:26 compute-0 ceph-mon[74360]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 14:45:26 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:45:26.100460) [db/flush_job.cc:967] [default] [JOB 45] Level-0 flush table #82: 3397927 bytes OK
Jan 20 14:45:26 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:45:26.100478) [db/memtable_list.cc:519] [default] Level-0 commit table #82 started
Jan 20 14:45:26 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:45:26.102943) [db/memtable_list.cc:722] [default] Level-0 commit table #82: memtable #1 done
Jan 20 14:45:26 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:45:26.102955) EVENT_LOG_v1 {"time_micros": 1768920326102951, "job": 45, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 14:45:26 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:45:26.102970) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 14:45:26 compute-0 ceph-mon[74360]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 45] Try to delete WAL files size 3455000, prev total WAL file size 3455000, number of live WAL files 2.
Jan 20 14:45:26 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000078.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:45:26 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:45:26.103888) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033323633' seq:72057594037927935, type:22 .. '7061786F730033353135' seq:0, type:0; will stop at (end)
Jan 20 14:45:26 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 46] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 14:45:26 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 45 Base level 0, inputs: [82(3318KB)], [80(9562KB)]
Jan 20 14:45:26 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768920326104023, "job": 46, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [82], "files_L6": [80], "score": -1, "input_data_size": 13190220, "oldest_snapshot_seqno": -1}
Jan 20 14:45:26 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 46] Generated table #83: 6695 keys, 11261776 bytes, temperature: kUnknown
Jan 20 14:45:26 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768920326189729, "cf_name": "default", "job": 46, "event": "table_file_creation", "file_number": 83, "file_size": 11261776, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11215995, "index_size": 27941, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16773, "raw_key_size": 171762, "raw_average_key_size": 25, "raw_value_size": 11095103, "raw_average_value_size": 1657, "num_data_blocks": 1115, "num_entries": 6695, "num_filter_entries": 6695, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768917305, "oldest_key_time": 0, "file_creation_time": 1768920326, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1688fb6f-f397-4aac-a0e8-874ff91d97a7", "db_session_id": "5V3N2TVXYZBCXP55EZHK", "orig_file_number": 83, "seqno_to_time_mapping": "N/A"}}
Jan 20 14:45:26 compute-0 ceph-mon[74360]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 14:45:26 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:45:26.190195) [db/compaction/compaction_job.cc:1663] [default] [JOB 46] Compacted 1@0 + 1@6 files to L6 => 11261776 bytes
Jan 20 14:45:26 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:45:26.191811) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 153.5 rd, 131.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 9.3 +0.0 blob) out(10.7 +0.0 blob), read-write-amplify(7.2) write-amplify(3.3) OK, records in: 7210, records dropped: 515 output_compression: NoCompression
Jan 20 14:45:26 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:45:26.191836) EVENT_LOG_v1 {"time_micros": 1768920326191822, "job": 46, "event": "compaction_finished", "compaction_time_micros": 85903, "compaction_time_cpu_micros": 35754, "output_level": 6, "num_output_files": 1, "total_output_size": 11261776, "num_input_records": 7210, "num_output_records": 6695, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 14:45:26 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000082.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:45:26 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768920326192765, "job": 46, "event": "table_file_deletion", "file_number": 82}
Jan 20 14:45:26 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000080.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:45:26 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768920326195360, "job": 46, "event": "table_file_deletion", "file_number": 80}
Jan 20 14:45:26 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:45:26.103733) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:45:26 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:45:26.195454) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:45:26 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:45:26.195463) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:45:26 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:45:26.195466) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:45:26 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:45:26.195469) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:45:26 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:45:26.195472) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:45:26 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:45:26 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:45:26 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:45:26.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:45:26 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:45:26 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:45:26 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:45:26.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:45:27 compute-0 nova_compute[250018]: 2026-01-20 14:45:27.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:45:27 compute-0 nova_compute[250018]: 2026-01-20 14:45:27.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:45:27 compute-0 sudo[302863]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:45:27 compute-0 sudo[302863]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:45:27 compute-0 sudo[302863]: pam_unix(sudo:session): session closed for user root
Jan 20 14:45:27 compute-0 sudo[302888]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:45:27 compute-0 sudo[302888]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:45:27 compute-0 sudo[302888]: pam_unix(sudo:session): session closed for user root
Jan 20 14:45:27 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1713: 321 pgs: 321 active+clean; 339 MiB data, 839 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 2.9 MiB/s wr, 224 op/s
Jan 20 14:45:27 compute-0 nova_compute[250018]: 2026-01-20 14:45:27.747 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:45:27 compute-0 ceph-mon[74360]: pgmap v1713: 321 pgs: 321 active+clean; 339 MiB data, 839 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 2.9 MiB/s wr, 224 op/s
Jan 20 14:45:28 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:45:28 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:45:28 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:45:28 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:45:28.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:45:28 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:45:28 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:45:28 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:45:28.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:45:29 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1714: 321 pgs: 321 active+clean; 339 MiB data, 839 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.9 MiB/s wr, 217 op/s
Jan 20 14:45:29 compute-0 ceph-mon[74360]: pgmap v1714: 321 pgs: 321 active+clean; 339 MiB data, 839 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.9 MiB/s wr, 217 op/s
Jan 20 14:45:30 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:45:30 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:45:30 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:45:30.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:45:30 compute-0 nova_compute[250018]: 2026-01-20 14:45:30.593 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:45:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:45:30.758 160071 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:45:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:45:30.758 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:45:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:45:30.759 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:45:30 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/1005838163' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:45:30 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/1978237855' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:45:30 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/1978237855' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:45:30 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:45:30 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:45:30 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:45:30.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:45:31 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1715: 321 pgs: 321 active+clean; 339 MiB data, 839 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 227 op/s
Jan 20 14:45:31 compute-0 ceph-mon[74360]: pgmap v1715: 321 pgs: 321 active+clean; 339 MiB data, 839 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 227 op/s
Jan 20 14:45:32 compute-0 nova_compute[250018]: 2026-01-20 14:45:32.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:45:32 compute-0 nova_compute[250018]: 2026-01-20 14:45:32.076 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:45:32 compute-0 nova_compute[250018]: 2026-01-20 14:45:32.076 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:45:32 compute-0 nova_compute[250018]: 2026-01-20 14:45:32.077 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:45:32 compute-0 nova_compute[250018]: 2026-01-20 14:45:32.077 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 14:45:32 compute-0 nova_compute[250018]: 2026-01-20 14:45:32.077 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:45:32 compute-0 sshd-session[302916]: Invalid user test from 157.245.78.139 port 33092
Jan 20 14:45:32 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:45:32 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:45:32 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:45:32.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:45:32 compute-0 sshd-session[302916]: Connection closed by invalid user test 157.245.78.139 port 33092 [preauth]
Jan 20 14:45:32 compute-0 nova_compute[250018]: 2026-01-20 14:45:32.373 250022 DEBUG nova.virt.libvirt.driver [None req-04e987f3-9952-40f8-bedd-ab998c4a4453 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: 163342b0-95c5-4e11-8a6d-f7a9cd588782] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Jan 20 14:45:32 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:45:32 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4086620939' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:45:32 compute-0 nova_compute[250018]: 2026-01-20 14:45:32.497 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:45:32 compute-0 nova_compute[250018]: 2026-01-20 14:45:32.563 250022 DEBUG nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] skipping disk for instance-0000005d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:45:32 compute-0 nova_compute[250018]: 2026-01-20 14:45:32.564 250022 DEBUG nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] skipping disk for instance-0000005d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:45:32 compute-0 nova_compute[250018]: 2026-01-20 14:45:32.700 250022 WARNING nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:45:32 compute-0 nova_compute[250018]: 2026-01-20 14:45:32.701 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4327MB free_disk=20.879981994628906GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 14:45:32 compute-0 nova_compute[250018]: 2026-01-20 14:45:32.701 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:45:32 compute-0 nova_compute[250018]: 2026-01-20 14:45:32.701 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:45:32 compute-0 nova_compute[250018]: 2026-01-20 14:45:32.748 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:45:32 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/1381354546' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:45:32 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/4086620939' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:45:32 compute-0 nova_compute[250018]: 2026-01-20 14:45:32.877 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Instance 163342b0-95c5-4e11-8a6d-f7a9cd588782 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 14:45:32 compute-0 nova_compute[250018]: 2026-01-20 14:45:32.877 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 14:45:32 compute-0 nova_compute[250018]: 2026-01-20 14:45:32.877 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 14:45:32 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:45:32 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:45:32 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:45:32.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:45:32 compute-0 nova_compute[250018]: 2026-01-20 14:45:32.909 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:45:33 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:45:33 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:45:33 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2097302598' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:45:33 compute-0 nova_compute[250018]: 2026-01-20 14:45:33.416 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:45:33 compute-0 nova_compute[250018]: 2026-01-20 14:45:33.422 250022 DEBUG nova.compute.provider_tree [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:45:33 compute-0 nova_compute[250018]: 2026-01-20 14:45:33.442 250022 DEBUG nova.scheduler.client.report [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:45:33 compute-0 nova_compute[250018]: 2026-01-20 14:45:33.471 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 14:45:33 compute-0 nova_compute[250018]: 2026-01-20 14:45:33.471 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.770s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:45:33 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1716: 321 pgs: 321 active+clean; 316 MiB data, 832 MiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 2.3 MiB/s wr, 221 op/s
Jan 20 14:45:34 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:45:34 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:45:34 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:45:34.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:45:34 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/1341277129' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:45:34 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/2097302598' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:45:34 compute-0 ceph-mon[74360]: pgmap v1716: 321 pgs: 321 active+clean; 316 MiB data, 832 MiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 2.3 MiB/s wr, 221 op/s
Jan 20 14:45:34 compute-0 nova_compute[250018]: 2026-01-20 14:45:34.472 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:45:34 compute-0 nova_compute[250018]: 2026-01-20 14:45:34.472 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:45:34 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:45:34 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:45:34 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:45:34.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:45:35 compute-0 nova_compute[250018]: 2026-01-20 14:45:35.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:45:35 compute-0 nova_compute[250018]: 2026-01-20 14:45:35.051 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 14:45:35 compute-0 kernel: tap90244da9-da (unregistering): left promiscuous mode
Jan 20 14:45:35 compute-0 NetworkManager[48960]: <info>  [1768920335.2736] device (tap90244da9-da): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 14:45:35 compute-0 ovn_controller[148666]: 2026-01-20T14:45:35Z|00299|binding|INFO|Releasing lport 90244da9-daf0-4694-952a-a087ecc6c3db from this chassis (sb_readonly=0)
Jan 20 14:45:35 compute-0 ovn_controller[148666]: 2026-01-20T14:45:35Z|00300|binding|INFO|Setting lport 90244da9-daf0-4694-952a-a087ecc6c3db down in Southbound
Jan 20 14:45:35 compute-0 ovn_controller[148666]: 2026-01-20T14:45:35Z|00301|binding|INFO|Removing iface tap90244da9-da ovn-installed in OVS
Jan 20 14:45:35 compute-0 nova_compute[250018]: 2026-01-20 14:45:35.324 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:45:35 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:45:35.330 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:53:d4:48 10.100.0.4'], port_security=['fa:16:3e:53:d4:48 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '163342b0-95c5-4e11-8a6d-f7a9cd588782', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fbd5d614-a7d3-4563-913c-104506628e59', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3b31139b2a4e49cba5e7048febf901c4', 'neutron:revision_number': '4', 'neutron:security_group_ids': '117d6f57-074c-4b36-b375-42e0ab117254', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c42c6982-be52-495a-8746-42a46932572f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=90244da9-daf0-4694-952a-a087ecc6c3db) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:45:35 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:45:35.331 160071 INFO neutron.agent.ovn.metadata.agent [-] Port 90244da9-daf0-4694-952a-a087ecc6c3db in datapath fbd5d614-a7d3-4563-913c-104506628e59 unbound from our chassis
Jan 20 14:45:35 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:45:35.332 160071 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network fbd5d614-a7d3-4563-913c-104506628e59, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 14:45:35 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:45:35.333 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[2fbbd1d3-fd59-4e7a-996d-ca601036e6be]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:45:35 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:45:35.334 160071 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-fbd5d614-a7d3-4563-913c-104506628e59 namespace which is not needed anymore
Jan 20 14:45:35 compute-0 nova_compute[250018]: 2026-01-20 14:45:35.341 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:45:35 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/263069660' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:45:35 compute-0 systemd[1]: machine-qemu\x2d39\x2dinstance\x2d0000005d.scope: Deactivated successfully.
Jan 20 14:45:35 compute-0 systemd[1]: machine-qemu\x2d39\x2dinstance\x2d0000005d.scope: Consumed 13.954s CPU time.
Jan 20 14:45:35 compute-0 systemd-machined[216401]: Machine qemu-39-instance-0000005d terminated.
Jan 20 14:45:35 compute-0 neutron-haproxy-ovnmeta-fbd5d614-a7d3-4563-913c-104506628e59[302755]: [NOTICE]   (302759) : haproxy version is 2.8.14-c23fe91
Jan 20 14:45:35 compute-0 neutron-haproxy-ovnmeta-fbd5d614-a7d3-4563-913c-104506628e59[302755]: [NOTICE]   (302759) : path to executable is /usr/sbin/haproxy
Jan 20 14:45:35 compute-0 neutron-haproxy-ovnmeta-fbd5d614-a7d3-4563-913c-104506628e59[302755]: [WARNING]  (302759) : Exiting Master process...
Jan 20 14:45:35 compute-0 neutron-haproxy-ovnmeta-fbd5d614-a7d3-4563-913c-104506628e59[302755]: [ALERT]    (302759) : Current worker (302761) exited with code 143 (Terminated)
Jan 20 14:45:35 compute-0 neutron-haproxy-ovnmeta-fbd5d614-a7d3-4563-913c-104506628e59[302755]: [WARNING]  (302759) : All workers exited. Exiting... (0)
Jan 20 14:45:35 compute-0 systemd[1]: libpod-832b2a0aa7a2980c4f4b5fadaf661331f85b1ba104679371aff6bfcaf85d6a9a.scope: Deactivated successfully.
Jan 20 14:45:35 compute-0 podman[302989]: 2026-01-20 14:45:35.469232495 +0000 UTC m=+0.043373107 container died 832b2a0aa7a2980c4f4b5fadaf661331f85b1ba104679371aff6bfcaf85d6a9a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fbd5d614-a7d3-4563-913c-104506628e59, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 20 14:45:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-a3a8f6b132cb0bcf514a6345c7ff9d6497e014c2b1cbb95ae132142b6ab33333-merged.mount: Deactivated successfully.
Jan 20 14:45:35 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-832b2a0aa7a2980c4f4b5fadaf661331f85b1ba104679371aff6bfcaf85d6a9a-userdata-shm.mount: Deactivated successfully.
Jan 20 14:45:35 compute-0 nova_compute[250018]: 2026-01-20 14:45:35.519 250022 DEBUG nova.compute.manager [req-005c7280-ec05-48b7-a5f4-c61a21945597 req-b40713b6-237d-42b6-bf5a-86d0a9566447 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 163342b0-95c5-4e11-8a6d-f7a9cd588782] Received event network-vif-unplugged-90244da9-daf0-4694-952a-a087ecc6c3db external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:45:35 compute-0 nova_compute[250018]: 2026-01-20 14:45:35.520 250022 DEBUG oslo_concurrency.lockutils [req-005c7280-ec05-48b7-a5f4-c61a21945597 req-b40713b6-237d-42b6-bf5a-86d0a9566447 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "163342b0-95c5-4e11-8a6d-f7a9cd588782-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:45:35 compute-0 nova_compute[250018]: 2026-01-20 14:45:35.520 250022 DEBUG oslo_concurrency.lockutils [req-005c7280-ec05-48b7-a5f4-c61a21945597 req-b40713b6-237d-42b6-bf5a-86d0a9566447 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "163342b0-95c5-4e11-8a6d-f7a9cd588782-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:45:35 compute-0 nova_compute[250018]: 2026-01-20 14:45:35.520 250022 DEBUG oslo_concurrency.lockutils [req-005c7280-ec05-48b7-a5f4-c61a21945597 req-b40713b6-237d-42b6-bf5a-86d0a9566447 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "163342b0-95c5-4e11-8a6d-f7a9cd588782-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:45:35 compute-0 nova_compute[250018]: 2026-01-20 14:45:35.521 250022 DEBUG nova.compute.manager [req-005c7280-ec05-48b7-a5f4-c61a21945597 req-b40713b6-237d-42b6-bf5a-86d0a9566447 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 163342b0-95c5-4e11-8a6d-f7a9cd588782] No waiting events found dispatching network-vif-unplugged-90244da9-daf0-4694-952a-a087ecc6c3db pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:45:35 compute-0 nova_compute[250018]: 2026-01-20 14:45:35.521 250022 WARNING nova.compute.manager [req-005c7280-ec05-48b7-a5f4-c61a21945597 req-b40713b6-237d-42b6-bf5a-86d0a9566447 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 163342b0-95c5-4e11-8a6d-f7a9cd588782] Received unexpected event network-vif-unplugged-90244da9-daf0-4694-952a-a087ecc6c3db for instance with vm_state active and task_state shelving.
Jan 20 14:45:35 compute-0 podman[302989]: 2026-01-20 14:45:35.530594957 +0000 UTC m=+0.104735569 container cleanup 832b2a0aa7a2980c4f4b5fadaf661331f85b1ba104679371aff6bfcaf85d6a9a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fbd5d614-a7d3-4563-913c-104506628e59, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.schema-version=1.0)
Jan 20 14:45:35 compute-0 systemd[1]: libpod-conmon-832b2a0aa7a2980c4f4b5fadaf661331f85b1ba104679371aff6bfcaf85d6a9a.scope: Deactivated successfully.
Jan 20 14:45:35 compute-0 nova_compute[250018]: 2026-01-20 14:45:35.539 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:45:35 compute-0 nova_compute[250018]: 2026-01-20 14:45:35.543 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:45:35 compute-0 nova_compute[250018]: 2026-01-20 14:45:35.548 250022 INFO nova.virt.libvirt.driver [None req-04e987f3-9952-40f8-bedd-ab998c4a4453 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: 163342b0-95c5-4e11-8a6d-f7a9cd588782] Instance shutdown successfully after 13 seconds.
Jan 20 14:45:35 compute-0 nova_compute[250018]: 2026-01-20 14:45:35.553 250022 INFO nova.virt.libvirt.driver [-] [instance: 163342b0-95c5-4e11-8a6d-f7a9cd588782] Instance destroyed successfully.
Jan 20 14:45:35 compute-0 nova_compute[250018]: 2026-01-20 14:45:35.553 250022 DEBUG nova.objects.instance [None req-04e987f3-9952-40f8-bedd-ab998c4a4453 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Lazy-loading 'numa_topology' on Instance uuid 163342b0-95c5-4e11-8a6d-f7a9cd588782 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:45:35 compute-0 nova_compute[250018]: 2026-01-20 14:45:35.594 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:45:35 compute-0 podman[303022]: 2026-01-20 14:45:35.595481486 +0000 UTC m=+0.041795584 container remove 832b2a0aa7a2980c4f4b5fadaf661331f85b1ba104679371aff6bfcaf85d6a9a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fbd5d614-a7d3-4563-913c-104506628e59, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:45:35 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:45:35.601 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[6d4ec23e-a0a2-438b-8552-93613e61f02c]: (4, ('Tue Jan 20 02:45:35 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-fbd5d614-a7d3-4563-913c-104506628e59 (832b2a0aa7a2980c4f4b5fadaf661331f85b1ba104679371aff6bfcaf85d6a9a)\n832b2a0aa7a2980c4f4b5fadaf661331f85b1ba104679371aff6bfcaf85d6a9a\nTue Jan 20 02:45:35 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-fbd5d614-a7d3-4563-913c-104506628e59 (832b2a0aa7a2980c4f4b5fadaf661331f85b1ba104679371aff6bfcaf85d6a9a)\n832b2a0aa7a2980c4f4b5fadaf661331f85b1ba104679371aff6bfcaf85d6a9a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:45:35 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:45:35.602 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[ee5ae30e-1883-43c0-9d96-13ecd1334de1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:45:35 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:45:35.603 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfbd5d614-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:45:35 compute-0 nova_compute[250018]: 2026-01-20 14:45:35.605 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:45:35 compute-0 kernel: tapfbd5d614-a0: left promiscuous mode
Jan 20 14:45:35 compute-0 nova_compute[250018]: 2026-01-20 14:45:35.623 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:45:35 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:45:35.626 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[687a9403-cf7f-4888-9704-95ed79880cea]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:45:35 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:45:35.648 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[a5e89162-a1f6-493a-9116-62f3fe61e992]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:45:35 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:45:35.650 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[a881bfa5-be4a-49ee-b703-0d3aba6bbf94]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:45:35 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:45:35.665 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[c9e0a572-c93e-4041-ad73-4618af7612fd]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 625881, 'reachable_time': 26733, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 303047, 'error': None, 'target': 'ovnmeta-fbd5d614-a7d3-4563-913c-104506628e59', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:45:35 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:45:35.668 160517 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-fbd5d614-a7d3-4563-913c-104506628e59 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 14:45:35 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:45:35.668 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[d7c64402-6200-43df-b492-01c073760486]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:45:35 compute-0 systemd[1]: run-netns-ovnmeta\x2dfbd5d614\x2da7d3\x2d4563\x2d913c\x2d104506628e59.mount: Deactivated successfully.
Jan 20 14:45:35 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1717: 321 pgs: 321 active+clean; 252 MiB data, 807 MiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 4.8 MiB/s wr, 306 op/s
Jan 20 14:45:35 compute-0 nova_compute[250018]: 2026-01-20 14:45:35.884 250022 INFO nova.virt.libvirt.driver [None req-04e987f3-9952-40f8-bedd-ab998c4a4453 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: 163342b0-95c5-4e11-8a6d-f7a9cd588782] Beginning cold snapshot process
Jan 20 14:45:36 compute-0 nova_compute[250018]: 2026-01-20 14:45:36.027 250022 DEBUG nova.virt.libvirt.imagebackend [None req-04e987f3-9952-40f8-bedd-ab998c4a4453 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] No parent info for a32b3e07-16d8-46fd-9a7b-c242c432fcf9; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Jan 20 14:45:36 compute-0 nova_compute[250018]: 2026-01-20 14:45:36.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:45:36 compute-0 nova_compute[250018]: 2026-01-20 14:45:36.258 250022 DEBUG nova.storage.rbd_utils [None req-04e987f3-9952-40f8-bedd-ab998c4a4453 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] creating snapshot(1ca078a8afcc4d6d8078d44846df66e6) on rbd image(163342b0-95c5-4e11-8a6d-f7a9cd588782_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 20 14:45:36 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:45:36 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:45:36 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:45:36.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:45:36 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e232 do_prune osdmap full prune enabled
Jan 20 14:45:36 compute-0 ceph-mon[74360]: pgmap v1717: 321 pgs: 321 active+clean; 252 MiB data, 807 MiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 4.8 MiB/s wr, 306 op/s
Jan 20 14:45:36 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e233 e233: 3 total, 3 up, 3 in
Jan 20 14:45:36 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e233: 3 total, 3 up, 3 in
Jan 20 14:45:36 compute-0 nova_compute[250018]: 2026-01-20 14:45:36.426 250022 DEBUG nova.storage.rbd_utils [None req-04e987f3-9952-40f8-bedd-ab998c4a4453 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] cloning vms/163342b0-95c5-4e11-8a6d-f7a9cd588782_disk@1ca078a8afcc4d6d8078d44846df66e6 to images/20f02a0e-ef11-4157-9d07-37674d1e0ea8 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Jan 20 14:45:36 compute-0 nova_compute[250018]: 2026-01-20 14:45:36.524 250022 DEBUG nova.storage.rbd_utils [None req-04e987f3-9952-40f8-bedd-ab998c4a4453 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] flattening images/20f02a0e-ef11-4157-9d07-37674d1e0ea8 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Jan 20 14:45:36 compute-0 nova_compute[250018]: 2026-01-20 14:45:36.868 250022 DEBUG nova.storage.rbd_utils [None req-04e987f3-9952-40f8-bedd-ab998c4a4453 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] removing snapshot(1ca078a8afcc4d6d8078d44846df66e6) on rbd image(163342b0-95c5-4e11-8a6d-f7a9cd588782_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Jan 20 14:45:36 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:45:36 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:45:36 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:45:36.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:45:37 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e233 do_prune osdmap full prune enabled
Jan 20 14:45:37 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e234 e234: 3 total, 3 up, 3 in
Jan 20 14:45:37 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e234: 3 total, 3 up, 3 in
Jan 20 14:45:37 compute-0 nova_compute[250018]: 2026-01-20 14:45:37.728 250022 DEBUG nova.compute.manager [req-4972c1a7-a28d-4e6d-9c8f-3fd4116a6cab req-23a89dff-f70e-42df-9d20-d29ac5349cee 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 163342b0-95c5-4e11-8a6d-f7a9cd588782] Received event network-vif-plugged-90244da9-daf0-4694-952a-a087ecc6c3db external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:45:37 compute-0 nova_compute[250018]: 2026-01-20 14:45:37.731 250022 DEBUG oslo_concurrency.lockutils [req-4972c1a7-a28d-4e6d-9c8f-3fd4116a6cab req-23a89dff-f70e-42df-9d20-d29ac5349cee 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "163342b0-95c5-4e11-8a6d-f7a9cd588782-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:45:37 compute-0 nova_compute[250018]: 2026-01-20 14:45:37.732 250022 DEBUG oslo_concurrency.lockutils [req-4972c1a7-a28d-4e6d-9c8f-3fd4116a6cab req-23a89dff-f70e-42df-9d20-d29ac5349cee 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "163342b0-95c5-4e11-8a6d-f7a9cd588782-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:45:37 compute-0 nova_compute[250018]: 2026-01-20 14:45:37.733 250022 DEBUG oslo_concurrency.lockutils [req-4972c1a7-a28d-4e6d-9c8f-3fd4116a6cab req-23a89dff-f70e-42df-9d20-d29ac5349cee 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "163342b0-95c5-4e11-8a6d-f7a9cd588782-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:45:37 compute-0 nova_compute[250018]: 2026-01-20 14:45:37.733 250022 DEBUG nova.compute.manager [req-4972c1a7-a28d-4e6d-9c8f-3fd4116a6cab req-23a89dff-f70e-42df-9d20-d29ac5349cee 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 163342b0-95c5-4e11-8a6d-f7a9cd588782] No waiting events found dispatching network-vif-plugged-90244da9-daf0-4694-952a-a087ecc6c3db pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:45:37 compute-0 nova_compute[250018]: 2026-01-20 14:45:37.734 250022 WARNING nova.compute.manager [req-4972c1a7-a28d-4e6d-9c8f-3fd4116a6cab req-23a89dff-f70e-42df-9d20-d29ac5349cee 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 163342b0-95c5-4e11-8a6d-f7a9cd588782] Received unexpected event network-vif-plugged-90244da9-daf0-4694-952a-a087ecc6c3db for instance with vm_state active and task_state shelving_image_uploading.
Jan 20 14:45:37 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1720: 321 pgs: 321 active+clean; 294 MiB data, 827 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 7.3 MiB/s wr, 293 op/s
Jan 20 14:45:37 compute-0 nova_compute[250018]: 2026-01-20 14:45:37.779 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:45:37 compute-0 ceph-mon[74360]: osdmap e233: 3 total, 3 up, 3 in
Jan 20 14:45:37 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/2360956372' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:45:38 compute-0 nova_compute[250018]: 2026-01-20 14:45:38.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:45:38 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e234 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:45:38 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:45:38 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:45:38 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:45:38.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:45:38 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:45:38 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:45:38 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:45:38.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:45:39 compute-0 ceph-mon[74360]: osdmap e234: 3 total, 3 up, 3 in
Jan 20 14:45:39 compute-0 ceph-mon[74360]: pgmap v1720: 321 pgs: 321 active+clean; 294 MiB data, 827 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 7.3 MiB/s wr, 293 op/s
Jan 20 14:45:39 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/3888261964' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:45:39 compute-0 nova_compute[250018]: 2026-01-20 14:45:39.249 250022 DEBUG nova.storage.rbd_utils [None req-04e987f3-9952-40f8-bedd-ab998c4a4453 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] creating snapshot(snap) on rbd image(20f02a0e-ef11-4157-9d07-37674d1e0ea8) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 20 14:45:39 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1721: 321 pgs: 321 active+clean; 326 MiB data, 847 MiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 9.7 MiB/s wr, 310 op/s
Jan 20 14:45:40 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e234 do_prune osdmap full prune enabled
Jan 20 14:45:40 compute-0 ceph-mon[74360]: pgmap v1721: 321 pgs: 321 active+clean; 326 MiB data, 847 MiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 9.7 MiB/s wr, 310 op/s
Jan 20 14:45:40 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e235 e235: 3 total, 3 up, 3 in
Jan 20 14:45:40 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e235: 3 total, 3 up, 3 in
Jan 20 14:45:40 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:45:40 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:45:40 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:45:40.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:45:40 compute-0 nova_compute[250018]: 2026-01-20 14:45:40.596 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:45:40 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:45:40 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:45:40 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:45:40.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:45:41 compute-0 nova_compute[250018]: 2026-01-20 14:45:41.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:45:41 compute-0 nova_compute[250018]: 2026-01-20 14:45:41.051 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 14:45:41 compute-0 nova_compute[250018]: 2026-01-20 14:45:41.051 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 14:45:41 compute-0 ceph-mon[74360]: osdmap e235: 3 total, 3 up, 3 in
Jan 20 14:45:41 compute-0 nova_compute[250018]: 2026-01-20 14:45:41.221 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "refresh_cache-163342b0-95c5-4e11-8a6d-f7a9cd588782" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:45:41 compute-0 nova_compute[250018]: 2026-01-20 14:45:41.222 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquired lock "refresh_cache-163342b0-95c5-4e11-8a6d-f7a9cd588782" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:45:41 compute-0 nova_compute[250018]: 2026-01-20 14:45:41.222 250022 DEBUG nova.network.neutron [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] [instance: 163342b0-95c5-4e11-8a6d-f7a9cd588782] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 20 14:45:41 compute-0 nova_compute[250018]: 2026-01-20 14:45:41.222 250022 DEBUG nova.objects.instance [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 163342b0-95c5-4e11-8a6d-f7a9cd588782 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:45:41 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1723: 321 pgs: 321 active+clean; 358 MiB data, 867 MiB used, 20 GiB / 21 GiB avail; 8.3 MiB/s rd, 10 MiB/s wr, 217 op/s
Jan 20 14:45:42 compute-0 ceph-mon[74360]: pgmap v1723: 321 pgs: 321 active+clean; 358 MiB data, 867 MiB used, 20 GiB / 21 GiB avail; 8.3 MiB/s rd, 10 MiB/s wr, 217 op/s
Jan 20 14:45:42 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:45:42 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:45:42 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:45:42.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:45:42 compute-0 nova_compute[250018]: 2026-01-20 14:45:42.662 250022 INFO nova.virt.libvirt.driver [None req-04e987f3-9952-40f8-bedd-ab998c4a4453 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: 163342b0-95c5-4e11-8a6d-f7a9cd588782] Snapshot image upload complete
Jan 20 14:45:42 compute-0 nova_compute[250018]: 2026-01-20 14:45:42.662 250022 DEBUG nova.compute.manager [None req-04e987f3-9952-40f8-bedd-ab998c4a4453 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: 163342b0-95c5-4e11-8a6d-f7a9cd588782] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:45:42 compute-0 nova_compute[250018]: 2026-01-20 14:45:42.761 250022 INFO nova.compute.manager [None req-04e987f3-9952-40f8-bedd-ab998c4a4453 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: 163342b0-95c5-4e11-8a6d-f7a9cd588782] Shelve offloading
Jan 20 14:45:42 compute-0 nova_compute[250018]: 2026-01-20 14:45:42.772 250022 INFO nova.virt.libvirt.driver [-] [instance: 163342b0-95c5-4e11-8a6d-f7a9cd588782] Instance destroyed successfully.
Jan 20 14:45:42 compute-0 nova_compute[250018]: 2026-01-20 14:45:42.773 250022 DEBUG nova.compute.manager [None req-04e987f3-9952-40f8-bedd-ab998c4a4453 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: 163342b0-95c5-4e11-8a6d-f7a9cd588782] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:45:42 compute-0 nova_compute[250018]: 2026-01-20 14:45:42.775 250022 DEBUG oslo_concurrency.lockutils [None req-04e987f3-9952-40f8-bedd-ab998c4a4453 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Acquiring lock "refresh_cache-163342b0-95c5-4e11-8a6d-f7a9cd588782" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:45:42 compute-0 nova_compute[250018]: 2026-01-20 14:45:42.827 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:45:42 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:45:42 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:45:42 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:45:42.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:45:42 compute-0 nova_compute[250018]: 2026-01-20 14:45:42.999 250022 DEBUG nova.network.neutron [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] [instance: 163342b0-95c5-4e11-8a6d-f7a9cd588782] Updating instance_info_cache with network_info: [{"id": "90244da9-daf0-4694-952a-a087ecc6c3db", "address": "fa:16:3e:53:d4:48", "network": {"id": "fbd5d614-a7d3-4563-913c-104506628e59", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-60721994-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b31139b2a4e49cba5e7048febf901c4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap90244da9-da", "ovs_interfaceid": "90244da9-daf0-4694-952a-a087ecc6c3db", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:45:43 compute-0 nova_compute[250018]: 2026-01-20 14:45:43.021 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Releasing lock "refresh_cache-163342b0-95c5-4e11-8a6d-f7a9cd588782" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:45:43 compute-0 nova_compute[250018]: 2026-01-20 14:45:43.022 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] [instance: 163342b0-95c5-4e11-8a6d-f7a9cd588782] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 20 14:45:43 compute-0 nova_compute[250018]: 2026-01-20 14:45:43.022 250022 DEBUG oslo_concurrency.lockutils [None req-04e987f3-9952-40f8-bedd-ab998c4a4453 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Acquired lock "refresh_cache-163342b0-95c5-4e11-8a6d-f7a9cd588782" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:45:43 compute-0 nova_compute[250018]: 2026-01-20 14:45:43.023 250022 DEBUG nova.network.neutron [None req-04e987f3-9952-40f8-bedd-ab998c4a4453 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: 163342b0-95c5-4e11-8a6d-f7a9cd588782] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:45:43 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e235 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:45:43 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/1623992898' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:45:43 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1724: 321 pgs: 321 active+clean; 358 MiB data, 868 MiB used, 20 GiB / 21 GiB avail; 6.7 MiB/s rd, 8.4 MiB/s wr, 179 op/s
Jan 20 14:45:44 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:45:44 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:45:44 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:45:44.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:45:44 compute-0 ceph-mon[74360]: pgmap v1724: 321 pgs: 321 active+clean; 358 MiB data, 868 MiB used, 20 GiB / 21 GiB avail; 6.7 MiB/s rd, 8.4 MiB/s wr, 179 op/s
Jan 20 14:45:44 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:45:44 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:45:44 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:45:44.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:45:45 compute-0 nova_compute[250018]: 2026-01-20 14:45:45.349 250022 DEBUG nova.network.neutron [None req-04e987f3-9952-40f8-bedd-ab998c4a4453 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: 163342b0-95c5-4e11-8a6d-f7a9cd588782] Updating instance_info_cache with network_info: [{"id": "90244da9-daf0-4694-952a-a087ecc6c3db", "address": "fa:16:3e:53:d4:48", "network": {"id": "fbd5d614-a7d3-4563-913c-104506628e59", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-60721994-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b31139b2a4e49cba5e7048febf901c4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap90244da9-da", "ovs_interfaceid": "90244da9-daf0-4694-952a-a087ecc6c3db", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:45:45 compute-0 nova_compute[250018]: 2026-01-20 14:45:45.382 250022 DEBUG oslo_concurrency.lockutils [None req-04e987f3-9952-40f8-bedd-ab998c4a4453 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Releasing lock "refresh_cache-163342b0-95c5-4e11-8a6d-f7a9cd588782" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:45:45 compute-0 nova_compute[250018]: 2026-01-20 14:45:45.597 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:45:45 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1725: 321 pgs: 321 active+clean; 380 MiB data, 881 MiB used, 20 GiB / 21 GiB avail; 4.9 MiB/s rd, 6.4 MiB/s wr, 117 op/s
Jan 20 14:45:46 compute-0 ceph-mon[74360]: pgmap v1725: 321 pgs: 321 active+clean; 380 MiB data, 881 MiB used, 20 GiB / 21 GiB avail; 4.9 MiB/s rd, 6.4 MiB/s wr, 117 op/s
Jan 20 14:45:46 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:45:46 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:45:46 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:45:46.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:45:46 compute-0 nova_compute[250018]: 2026-01-20 14:45:46.850 250022 INFO nova.virt.libvirt.driver [-] [instance: 163342b0-95c5-4e11-8a6d-f7a9cd588782] Instance destroyed successfully.
Jan 20 14:45:46 compute-0 nova_compute[250018]: 2026-01-20 14:45:46.851 250022 DEBUG nova.objects.instance [None req-04e987f3-9952-40f8-bedd-ab998c4a4453 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Lazy-loading 'resources' on Instance uuid 163342b0-95c5-4e11-8a6d-f7a9cd588782 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:45:46 compute-0 nova_compute[250018]: 2026-01-20 14:45:46.875 250022 DEBUG nova.virt.libvirt.vif [None req-04e987f3-9952-40f8-bedd-ab998c4a4453 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:45:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-579419372',display_name='tempest-DeleteServersTestJSON-server-579419372',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-579419372',id=93,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:45:19Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='3b31139b2a4e49cba5e7048febf901c4',ramdisk_id='',reservation_id='r-29b1ll4p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_di
sk='1',image_min_ram='0',owner_project_name='tempest-DeleteServersTestJSON-1162922273',owner_user_name='tempest-DeleteServersTestJSON-1162922273-project-member',shelved_at='2026-01-20T14:45:42.662772',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='20f02a0e-ef11-4157-9d07-37674d1e0ea8'},tags=<?>,task_state='shelving_offloading',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T14:45:35Z,user_data=None,user_id='37e9ef97fbe0448e9fbe32d48b66211f',uuid=163342b0-95c5-4e11-8a6d-f7a9cd588782,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='shelved') vif={"id": "90244da9-daf0-4694-952a-a087ecc6c3db", "address": "fa:16:3e:53:d4:48", "network": {"id": "fbd5d614-a7d3-4563-913c-104506628e59", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-60721994-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b31139b2a4e49cba5e7048febf901c4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap90244da9-da", "ovs_interfaceid": "90244da9-daf0-4694-952a-a087ecc6c3db", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 14:45:46 compute-0 nova_compute[250018]: 2026-01-20 14:45:46.876 250022 DEBUG nova.network.os_vif_util [None req-04e987f3-9952-40f8-bedd-ab998c4a4453 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Converting VIF {"id": "90244da9-daf0-4694-952a-a087ecc6c3db", "address": "fa:16:3e:53:d4:48", "network": {"id": "fbd5d614-a7d3-4563-913c-104506628e59", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-60721994-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b31139b2a4e49cba5e7048febf901c4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap90244da9-da", "ovs_interfaceid": "90244da9-daf0-4694-952a-a087ecc6c3db", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:45:46 compute-0 nova_compute[250018]: 2026-01-20 14:45:46.876 250022 DEBUG nova.network.os_vif_util [None req-04e987f3-9952-40f8-bedd-ab998c4a4453 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:53:d4:48,bridge_name='br-int',has_traffic_filtering=True,id=90244da9-daf0-4694-952a-a087ecc6c3db,network=Network(fbd5d614-a7d3-4563-913c-104506628e59),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap90244da9-da') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:45:46 compute-0 nova_compute[250018]: 2026-01-20 14:45:46.877 250022 DEBUG os_vif [None req-04e987f3-9952-40f8-bedd-ab998c4a4453 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:53:d4:48,bridge_name='br-int',has_traffic_filtering=True,id=90244da9-daf0-4694-952a-a087ecc6c3db,network=Network(fbd5d614-a7d3-4563-913c-104506628e59),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap90244da9-da') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 14:45:46 compute-0 nova_compute[250018]: 2026-01-20 14:45:46.879 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:45:46 compute-0 nova_compute[250018]: 2026-01-20 14:45:46.879 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap90244da9-da, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:45:46 compute-0 nova_compute[250018]: 2026-01-20 14:45:46.880 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:45:46 compute-0 nova_compute[250018]: 2026-01-20 14:45:46.882 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:45:46 compute-0 nova_compute[250018]: 2026-01-20 14:45:46.885 250022 INFO os_vif [None req-04e987f3-9952-40f8-bedd-ab998c4a4453 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:53:d4:48,bridge_name='br-int',has_traffic_filtering=True,id=90244da9-daf0-4694-952a-a087ecc6c3db,network=Network(fbd5d614-a7d3-4563-913c-104506628e59),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap90244da9-da')
Jan 20 14:45:46 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:45:46 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:45:46 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:45:46.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:45:46 compute-0 nova_compute[250018]: 2026-01-20 14:45:46.962 250022 DEBUG nova.compute.manager [req-12ca0ebe-b1dd-4d92-8687-23dda25bf36e req-3e4c37af-5305-47b2-b825-2ada551bc0cb 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 163342b0-95c5-4e11-8a6d-f7a9cd588782] Received event network-changed-90244da9-daf0-4694-952a-a087ecc6c3db external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:45:46 compute-0 nova_compute[250018]: 2026-01-20 14:45:46.962 250022 DEBUG nova.compute.manager [req-12ca0ebe-b1dd-4d92-8687-23dda25bf36e req-3e4c37af-5305-47b2-b825-2ada551bc0cb 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 163342b0-95c5-4e11-8a6d-f7a9cd588782] Refreshing instance network info cache due to event network-changed-90244da9-daf0-4694-952a-a087ecc6c3db. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:45:46 compute-0 nova_compute[250018]: 2026-01-20 14:45:46.963 250022 DEBUG oslo_concurrency.lockutils [req-12ca0ebe-b1dd-4d92-8687-23dda25bf36e req-3e4c37af-5305-47b2-b825-2ada551bc0cb 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-163342b0-95c5-4e11-8a6d-f7a9cd588782" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:45:46 compute-0 nova_compute[250018]: 2026-01-20 14:45:46.963 250022 DEBUG oslo_concurrency.lockutils [req-12ca0ebe-b1dd-4d92-8687-23dda25bf36e req-3e4c37af-5305-47b2-b825-2ada551bc0cb 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-163342b0-95c5-4e11-8a6d-f7a9cd588782" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:45:46 compute-0 nova_compute[250018]: 2026-01-20 14:45:46.963 250022 DEBUG nova.network.neutron [req-12ca0ebe-b1dd-4d92-8687-23dda25bf36e req-3e4c37af-5305-47b2-b825-2ada551bc0cb 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 163342b0-95c5-4e11-8a6d-f7a9cd588782] Refreshing network info cache for port 90244da9-daf0-4694-952a-a087ecc6c3db _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:45:47 compute-0 sudo[303213]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:45:47 compute-0 sudo[303213]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:45:47 compute-0 sudo[303213]: pam_unix(sudo:session): session closed for user root
Jan 20 14:45:47 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1726: 321 pgs: 321 active+clean; 399 MiB data, 885 MiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 5.7 MiB/s wr, 115 op/s
Jan 20 14:45:47 compute-0 sudo[303238]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:45:47 compute-0 sudo[303238]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:45:47 compute-0 sudo[303238]: pam_unix(sudo:session): session closed for user root
Jan 20 14:45:47 compute-0 nova_compute[250018]: 2026-01-20 14:45:47.869 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:45:47 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/3346481726' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:45:47 compute-0 ceph-mon[74360]: pgmap v1726: 321 pgs: 321 active+clean; 399 MiB data, 885 MiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 5.7 MiB/s wr, 115 op/s
Jan 20 14:45:48 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e235 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:45:48 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e235 do_prune osdmap full prune enabled
Jan 20 14:45:48 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e236 e236: 3 total, 3 up, 3 in
Jan 20 14:45:48 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e236: 3 total, 3 up, 3 in
Jan 20 14:45:48 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:45:48 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:45:48 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:45:48.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:45:48 compute-0 nova_compute[250018]: 2026-01-20 14:45:48.589 250022 DEBUG nova.network.neutron [req-12ca0ebe-b1dd-4d92-8687-23dda25bf36e req-3e4c37af-5305-47b2-b825-2ada551bc0cb 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 163342b0-95c5-4e11-8a6d-f7a9cd588782] Updated VIF entry in instance network info cache for port 90244da9-daf0-4694-952a-a087ecc6c3db. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 14:45:48 compute-0 nova_compute[250018]: 2026-01-20 14:45:48.589 250022 DEBUG nova.network.neutron [req-12ca0ebe-b1dd-4d92-8687-23dda25bf36e req-3e4c37af-5305-47b2-b825-2ada551bc0cb 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 163342b0-95c5-4e11-8a6d-f7a9cd588782] Updating instance_info_cache with network_info: [{"id": "90244da9-daf0-4694-952a-a087ecc6c3db", "address": "fa:16:3e:53:d4:48", "network": {"id": "fbd5d614-a7d3-4563-913c-104506628e59", "bridge": null, "label": "tempest-DeleteServersTestJSON-60721994-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b31139b2a4e49cba5e7048febf901c4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "unbound", "details": {}, "devname": "tap90244da9-da", "ovs_interfaceid": null, "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:45:48 compute-0 nova_compute[250018]: 2026-01-20 14:45:48.619 250022 DEBUG oslo_concurrency.lockutils [req-12ca0ebe-b1dd-4d92-8687-23dda25bf36e req-3e4c37af-5305-47b2-b825-2ada551bc0cb 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-163342b0-95c5-4e11-8a6d-f7a9cd588782" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:45:48 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:45:48 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:45:48 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:45:48.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:45:49 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/703686633' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:45:49 compute-0 ceph-mon[74360]: osdmap e236: 3 total, 3 up, 3 in
Jan 20 14:45:49 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1728: 321 pgs: 321 active+clean; 376 MiB data, 872 MiB used, 20 GiB / 21 GiB avail; 30 KiB/s rd, 2.2 MiB/s wr, 49 op/s
Jan 20 14:45:50 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:45:50 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:45:50 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:45:50.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:45:50 compute-0 ceph-mon[74360]: pgmap v1728: 321 pgs: 321 active+clean; 376 MiB data, 872 MiB used, 20 GiB / 21 GiB avail; 30 KiB/s rd, 2.2 MiB/s wr, 49 op/s
Jan 20 14:45:50 compute-0 nova_compute[250018]: 2026-01-20 14:45:50.551 250022 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768920335.5482202, 163342b0-95c5-4e11-8a6d-f7a9cd588782 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:45:50 compute-0 nova_compute[250018]: 2026-01-20 14:45:50.552 250022 INFO nova.compute.manager [-] [instance: 163342b0-95c5-4e11-8a6d-f7a9cd588782] VM Stopped (Lifecycle Event)
Jan 20 14:45:50 compute-0 nova_compute[250018]: 2026-01-20 14:45:50.570 250022 DEBUG nova.compute.manager [None req-818d101d-e1a3-4d3c-a974-8d2e23e1fd27 - - - - - -] [instance: 163342b0-95c5-4e11-8a6d-f7a9cd588782] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:45:50 compute-0 nova_compute[250018]: 2026-01-20 14:45:50.575 250022 DEBUG nova.compute.manager [None req-818d101d-e1a3-4d3c-a974-8d2e23e1fd27 - - - - - -] [instance: 163342b0-95c5-4e11-8a6d-f7a9cd588782] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: shelved, current task_state: shelving_offloading, current DB power_state: 4, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:45:50 compute-0 nova_compute[250018]: 2026-01-20 14:45:50.595 250022 INFO nova.compute.manager [None req-818d101d-e1a3-4d3c-a974-8d2e23e1fd27 - - - - - -] [instance: 163342b0-95c5-4e11-8a6d-f7a9cd588782] During sync_power_state the instance has a pending task (shelving_offloading). Skip.
Jan 20 14:45:50 compute-0 nova_compute[250018]: 2026-01-20 14:45:50.825 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:45:50 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:45:50.826 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=29, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=28) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:45:50 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:45:50.828 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 14:45:50 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:45:50 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:45:50 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:45:50.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:45:51 compute-0 nova_compute[250018]: 2026-01-20 14:45:51.541 250022 INFO nova.virt.libvirt.driver [None req-04e987f3-9952-40f8-bedd-ab998c4a4453 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: 163342b0-95c5-4e11-8a6d-f7a9cd588782] Deleting instance files /var/lib/nova/instances/163342b0-95c5-4e11-8a6d-f7a9cd588782_del
Jan 20 14:45:51 compute-0 nova_compute[250018]: 2026-01-20 14:45:51.542 250022 INFO nova.virt.libvirt.driver [None req-04e987f3-9952-40f8-bedd-ab998c4a4453 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: 163342b0-95c5-4e11-8a6d-f7a9cd588782] Deletion of /var/lib/nova/instances/163342b0-95c5-4e11-8a6d-f7a9cd588782_del complete
Jan 20 14:45:51 compute-0 nova_compute[250018]: 2026-01-20 14:45:51.640 250022 INFO nova.scheduler.client.report [None req-04e987f3-9952-40f8-bedd-ab998c4a4453 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Deleted allocations for instance 163342b0-95c5-4e11-8a6d-f7a9cd588782
Jan 20 14:45:51 compute-0 nova_compute[250018]: 2026-01-20 14:45:51.700 250022 DEBUG oslo_concurrency.lockutils [None req-04e987f3-9952-40f8-bedd-ab998c4a4453 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:45:51 compute-0 nova_compute[250018]: 2026-01-20 14:45:51.700 250022 DEBUG oslo_concurrency.lockutils [None req-04e987f3-9952-40f8-bedd-ab998c4a4453 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:45:51 compute-0 nova_compute[250018]: 2026-01-20 14:45:51.741 250022 DEBUG oslo_concurrency.processutils [None req-04e987f3-9952-40f8-bedd-ab998c4a4453 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:45:51 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1729: 321 pgs: 321 active+clean; 338 MiB data, 857 MiB used, 20 GiB / 21 GiB avail; 53 KiB/s rd, 2.1 MiB/s wr, 77 op/s
Jan 20 14:45:51 compute-0 nova_compute[250018]: 2026-01-20 14:45:51.883 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:45:51 compute-0 ceph-mon[74360]: pgmap v1729: 321 pgs: 321 active+clean; 338 MiB data, 857 MiB used, 20 GiB / 21 GiB avail; 53 KiB/s rd, 2.1 MiB/s wr, 77 op/s
Jan 20 14:45:52 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:45:52 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/435567543' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:45:52 compute-0 nova_compute[250018]: 2026-01-20 14:45:52.167 250022 DEBUG oslo_concurrency.processutils [None req-04e987f3-9952-40f8-bedd-ab998c4a4453 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:45:52 compute-0 nova_compute[250018]: 2026-01-20 14:45:52.175 250022 DEBUG nova.compute.provider_tree [None req-04e987f3-9952-40f8-bedd-ab998c4a4453 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:45:52 compute-0 nova_compute[250018]: 2026-01-20 14:45:52.193 250022 DEBUG nova.scheduler.client.report [None req-04e987f3-9952-40f8-bedd-ab998c4a4453 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:45:52 compute-0 nova_compute[250018]: 2026-01-20 14:45:52.228 250022 DEBUG oslo_concurrency.lockutils [None req-04e987f3-9952-40f8-bedd-ab998c4a4453 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.528s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:45:52 compute-0 nova_compute[250018]: 2026-01-20 14:45:52.283 250022 DEBUG oslo_concurrency.lockutils [None req-04e987f3-9952-40f8-bedd-ab998c4a4453 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Lock "163342b0-95c5-4e11-8a6d-f7a9cd588782" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 29.982s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:45:52 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:45:52 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:45:52 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:45:52.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:45:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:45:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:45:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:45:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:45:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:45:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:45:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Optimize plan auto_2026-01-20_14:45:52
Jan 20 14:45:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 14:45:52 compute-0 ceph-mgr[74653]: [balancer INFO root] do_upmap
Jan 20 14:45:52 compute-0 ceph-mgr[74653]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.meta', '.mgr', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.control', 'backups', 'volumes', 'images', 'vms', 'default.rgw.log']
Jan 20 14:45:52 compute-0 ceph-mgr[74653]: [balancer INFO root] prepared 0/10 changes
Jan 20 14:45:52 compute-0 nova_compute[250018]: 2026-01-20 14:45:52.871 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:45:52 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:45:52 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:45:52 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:45:52.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:45:53 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/435567543' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:45:53 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e236 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:45:53 compute-0 podman[303290]: 2026-01-20 14:45:53.484458503 +0000 UTC m=+0.071323174 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:45:53 compute-0 podman[303289]: 2026-01-20 14:45:53.558495938 +0000 UTC m=+0.138620997 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, 
org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 20 14:45:53 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1730: 321 pgs: 321 active+clean; 326 MiB data, 850 MiB used, 20 GiB / 21 GiB avail; 59 KiB/s rd, 2.1 MiB/s wr, 84 op/s
Jan 20 14:45:54 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e236 do_prune osdmap full prune enabled
Jan 20 14:45:54 compute-0 ceph-mon[74360]: pgmap v1730: 321 pgs: 321 active+clean; 326 MiB data, 850 MiB used, 20 GiB / 21 GiB avail; 59 KiB/s rd, 2.1 MiB/s wr, 84 op/s
Jan 20 14:45:54 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:45:54 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:45:54 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:45:54.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:45:54 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e237 e237: 3 total, 3 up, 3 in
Jan 20 14:45:54 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e237: 3 total, 3 up, 3 in
Jan 20 14:45:54 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:45:54 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:45:54 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:45:54.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:45:55 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1732: 321 pgs: 321 active+clean; 326 MiB data, 843 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 611 KiB/s wr, 150 op/s
Jan 20 14:45:55 compute-0 ceph-mon[74360]: osdmap e237: 3 total, 3 up, 3 in
Jan 20 14:45:56 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:45:56 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:45:56 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:45:56.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:45:56 compute-0 nova_compute[250018]: 2026-01-20 14:45:56.887 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:45:56 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:45:56 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:45:56 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:45:56.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:45:57 compute-0 ceph-mon[74360]: pgmap v1732: 321 pgs: 321 active+clean; 326 MiB data, 843 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 611 KiB/s wr, 150 op/s
Jan 20 14:45:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 14:45:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:45:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 14:45:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:45:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:45:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:45:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:45:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:45:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:45:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:45:57 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1733: 321 pgs: 321 active+clean; 308 MiB data, 838 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 518 KiB/s wr, 153 op/s
Jan 20 14:45:57 compute-0 nova_compute[250018]: 2026-01-20 14:45:57.872 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:45:58 compute-0 ceph-mon[74360]: pgmap v1733: 321 pgs: 321 active+clean; 308 MiB data, 838 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 518 KiB/s wr, 153 op/s
Jan 20 14:45:58 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:45:58 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:45:58 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:45:58 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:45:58.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:45:58 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:45:58 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:45:58 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:45:58.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:45:59 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1734: 321 pgs: 321 active+clean; 279 MiB data, 819 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 25 KiB/s wr, 152 op/s
Jan 20 14:46:00 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:46:00 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:46:00 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:46:00.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:46:00 compute-0 ceph-mon[74360]: pgmap v1734: 321 pgs: 321 active+clean; 279 MiB data, 819 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 25 KiB/s wr, 152 op/s
Jan 20 14:46:00 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:46:00.830 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=367c1a2c-b16a-4828-ab5a-626bb50023b4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '29'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:46:00 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:46:00 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:46:00 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:46:00.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:46:01 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/2979122474' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:46:01 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1735: 321 pgs: 321 active+clean; 249 MiB data, 800 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 21 KiB/s wr, 123 op/s
Jan 20 14:46:01 compute-0 nova_compute[250018]: 2026-01-20 14:46:01.891 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:46:02 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:46:02 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:46:02 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:46:02.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:46:02 compute-0 nova_compute[250018]: 2026-01-20 14:46:02.740 250022 DEBUG oslo_concurrency.lockutils [None req-9e26ae61-2dc5-4769-a49c-95c31cf472c8 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Acquiring lock "9280bbf5-da74-42c8-b8a3-a392cec3f921" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:46:02 compute-0 nova_compute[250018]: 2026-01-20 14:46:02.741 250022 DEBUG oslo_concurrency.lockutils [None req-9e26ae61-2dc5-4769-a49c-95c31cf472c8 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Lock "9280bbf5-da74-42c8-b8a3-a392cec3f921" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:46:02 compute-0 ceph-mon[74360]: pgmap v1735: 321 pgs: 321 active+clean; 249 MiB data, 800 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 21 KiB/s wr, 123 op/s
Jan 20 14:46:02 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/113448044' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:46:02 compute-0 nova_compute[250018]: 2026-01-20 14:46:02.770 250022 DEBUG nova.compute.manager [None req-9e26ae61-2dc5-4769-a49c-95c31cf472c8 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: 9280bbf5-da74-42c8-b8a3-a392cec3f921] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 14:46:02 compute-0 nova_compute[250018]: 2026-01-20 14:46:02.871 250022 DEBUG oslo_concurrency.lockutils [None req-9e26ae61-2dc5-4769-a49c-95c31cf472c8 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:46:02 compute-0 nova_compute[250018]: 2026-01-20 14:46:02.871 250022 DEBUG oslo_concurrency.lockutils [None req-9e26ae61-2dc5-4769-a49c-95c31cf472c8 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:46:02 compute-0 nova_compute[250018]: 2026-01-20 14:46:02.874 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:46:02 compute-0 nova_compute[250018]: 2026-01-20 14:46:02.880 250022 DEBUG nova.virt.hardware [None req-9e26ae61-2dc5-4769-a49c-95c31cf472c8 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 14:46:02 compute-0 nova_compute[250018]: 2026-01-20 14:46:02.881 250022 INFO nova.compute.claims [None req-9e26ae61-2dc5-4769-a49c-95c31cf472c8 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: 9280bbf5-da74-42c8-b8a3-a392cec3f921] Claim successful on node compute-0.ctlplane.example.com
Jan 20 14:46:02 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:46:02 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:46:02 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:46:02.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:46:02 compute-0 nova_compute[250018]: 2026-01-20 14:46:02.979 250022 DEBUG oslo_concurrency.processutils [None req-9e26ae61-2dc5-4769-a49c-95c31cf472c8 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:46:03 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:46:03 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e237 do_prune osdmap full prune enabled
Jan 20 14:46:03 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:46:03 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2123534067' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:46:03 compute-0 nova_compute[250018]: 2026-01-20 14:46:03.439 250022 DEBUG oslo_concurrency.processutils [None req-9e26ae61-2dc5-4769-a49c-95c31cf472c8 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:46:03 compute-0 nova_compute[250018]: 2026-01-20 14:46:03.446 250022 DEBUG nova.compute.provider_tree [None req-9e26ae61-2dc5-4769-a49c-95c31cf472c8 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:46:03 compute-0 nova_compute[250018]: 2026-01-20 14:46:03.474 250022 DEBUG nova.scheduler.client.report [None req-9e26ae61-2dc5-4769-a49c-95c31cf472c8 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:46:03 compute-0 nova_compute[250018]: 2026-01-20 14:46:03.501 250022 DEBUG oslo_concurrency.lockutils [None req-9e26ae61-2dc5-4769-a49c-95c31cf472c8 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.629s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:46:03 compute-0 nova_compute[250018]: 2026-01-20 14:46:03.502 250022 DEBUG nova.compute.manager [None req-9e26ae61-2dc5-4769-a49c-95c31cf472c8 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: 9280bbf5-da74-42c8-b8a3-a392cec3f921] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 14:46:03 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e238 e238: 3 total, 3 up, 3 in
Jan 20 14:46:03 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e238: 3 total, 3 up, 3 in
Jan 20 14:46:03 compute-0 nova_compute[250018]: 2026-01-20 14:46:03.558 250022 DEBUG nova.compute.manager [None req-9e26ae61-2dc5-4769-a49c-95c31cf472c8 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: 9280bbf5-da74-42c8-b8a3-a392cec3f921] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 20 14:46:03 compute-0 nova_compute[250018]: 2026-01-20 14:46:03.558 250022 DEBUG nova.network.neutron [None req-9e26ae61-2dc5-4769-a49c-95c31cf472c8 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: 9280bbf5-da74-42c8-b8a3-a392cec3f921] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 14:46:03 compute-0 nova_compute[250018]: 2026-01-20 14:46:03.592 250022 INFO nova.virt.libvirt.driver [None req-9e26ae61-2dc5-4769-a49c-95c31cf472c8 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: 9280bbf5-da74-42c8-b8a3-a392cec3f921] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 14:46:03 compute-0 nova_compute[250018]: 2026-01-20 14:46:03.616 250022 DEBUG nova.compute.manager [None req-9e26ae61-2dc5-4769-a49c-95c31cf472c8 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: 9280bbf5-da74-42c8-b8a3-a392cec3f921] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 14:46:03 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1737: 321 pgs: 321 active+clean; 247 MiB data, 797 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 21 KiB/s wr, 99 op/s
Jan 20 14:46:03 compute-0 nova_compute[250018]: 2026-01-20 14:46:03.816 250022 DEBUG nova.compute.manager [None req-9e26ae61-2dc5-4769-a49c-95c31cf472c8 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: 9280bbf5-da74-42c8-b8a3-a392cec3f921] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 14:46:03 compute-0 nova_compute[250018]: 2026-01-20 14:46:03.818 250022 DEBUG nova.virt.libvirt.driver [None req-9e26ae61-2dc5-4769-a49c-95c31cf472c8 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: 9280bbf5-da74-42c8-b8a3-a392cec3f921] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 14:46:03 compute-0 nova_compute[250018]: 2026-01-20 14:46:03.819 250022 INFO nova.virt.libvirt.driver [None req-9e26ae61-2dc5-4769-a49c-95c31cf472c8 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: 9280bbf5-da74-42c8-b8a3-a392cec3f921] Creating image(s)
Jan 20 14:46:04 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:46:04 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:46:04 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:46:04.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:46:04 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:46:04 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:46:04 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:46:04.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:46:05 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/2123534067' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:46:05 compute-0 ceph-mon[74360]: osdmap e238: 3 total, 3 up, 3 in
Jan 20 14:46:05 compute-0 ceph-mon[74360]: pgmap v1737: 321 pgs: 321 active+clean; 247 MiB data, 797 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 21 KiB/s wr, 99 op/s
Jan 20 14:46:05 compute-0 nova_compute[250018]: 2026-01-20 14:46:05.017 250022 DEBUG nova.storage.rbd_utils [None req-9e26ae61-2dc5-4769-a49c-95c31cf472c8 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] rbd image 9280bbf5-da74-42c8-b8a3-a392cec3f921_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:46:05 compute-0 nova_compute[250018]: 2026-01-20 14:46:05.059 250022 DEBUG nova.storage.rbd_utils [None req-9e26ae61-2dc5-4769-a49c-95c31cf472c8 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] rbd image 9280bbf5-da74-42c8-b8a3-a392cec3f921_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:46:05 compute-0 nova_compute[250018]: 2026-01-20 14:46:05.102 250022 DEBUG nova.storage.rbd_utils [None req-9e26ae61-2dc5-4769-a49c-95c31cf472c8 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] rbd image 9280bbf5-da74-42c8-b8a3-a392cec3f921_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:46:05 compute-0 nova_compute[250018]: 2026-01-20 14:46:05.107 250022 DEBUG oslo_concurrency.processutils [None req-9e26ae61-2dc5-4769-a49c-95c31cf472c8 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:46:05 compute-0 nova_compute[250018]: 2026-01-20 14:46:05.146 250022 DEBUG nova.policy [None req-9e26ae61-2dc5-4769-a49c-95c31cf472c8 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '37e9ef97fbe0448e9fbe32d48b66211f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3b31139b2a4e49cba5e7048febf901c4', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 20 14:46:05 compute-0 nova_compute[250018]: 2026-01-20 14:46:05.198 250022 DEBUG oslo_concurrency.processutils [None req-9e26ae61-2dc5-4769-a49c-95c31cf472c8 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:46:05 compute-0 nova_compute[250018]: 2026-01-20 14:46:05.199 250022 DEBUG oslo_concurrency.lockutils [None req-9e26ae61-2dc5-4769-a49c-95c31cf472c8 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Acquiring lock "82d5c1918fd7c974214c7a48c1793a7a82560462" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:46:05 compute-0 nova_compute[250018]: 2026-01-20 14:46:05.199 250022 DEBUG oslo_concurrency.lockutils [None req-9e26ae61-2dc5-4769-a49c-95c31cf472c8 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:46:05 compute-0 nova_compute[250018]: 2026-01-20 14:46:05.200 250022 DEBUG oslo_concurrency.lockutils [None req-9e26ae61-2dc5-4769-a49c-95c31cf472c8 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:46:05 compute-0 nova_compute[250018]: 2026-01-20 14:46:05.234 250022 DEBUG nova.storage.rbd_utils [None req-9e26ae61-2dc5-4769-a49c-95c31cf472c8 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] rbd image 9280bbf5-da74-42c8-b8a3-a392cec3f921_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:46:05 compute-0 nova_compute[250018]: 2026-01-20 14:46:05.238 250022 DEBUG oslo_concurrency.processutils [None req-9e26ae61-2dc5-4769-a49c-95c31cf472c8 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 9280bbf5-da74-42c8-b8a3-a392cec3f921_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:46:05 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1738: 321 pgs: 321 active+clean; 253 MiB data, 801 MiB used, 20 GiB / 21 GiB avail; 588 KiB/s rd, 827 KiB/s wr, 58 op/s
Jan 20 14:46:06 compute-0 nova_compute[250018]: 2026-01-20 14:46:06.131 250022 DEBUG nova.network.neutron [None req-9e26ae61-2dc5-4769-a49c-95c31cf472c8 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: 9280bbf5-da74-42c8-b8a3-a392cec3f921] Successfully created port: bc4c8178-e331-4054-89ce-aae2ffa6e0ee _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 20 14:46:06 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:46:06 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:46:06 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:46:06.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:46:06 compute-0 ceph-mon[74360]: pgmap v1738: 321 pgs: 321 active+clean; 253 MiB data, 801 MiB used, 20 GiB / 21 GiB avail; 588 KiB/s rd, 827 KiB/s wr, 58 op/s
Jan 20 14:46:06 compute-0 nova_compute[250018]: 2026-01-20 14:46:06.893 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:46:06 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:46:06 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:46:06 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:46:06.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:46:07 compute-0 nova_compute[250018]: 2026-01-20 14:46:07.598 250022 DEBUG nova.network.neutron [None req-9e26ae61-2dc5-4769-a49c-95c31cf472c8 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: 9280bbf5-da74-42c8-b8a3-a392cec3f921] Successfully updated port: bc4c8178-e331-4054-89ce-aae2ffa6e0ee _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 20 14:46:07 compute-0 nova_compute[250018]: 2026-01-20 14:46:07.616 250022 DEBUG oslo_concurrency.lockutils [None req-9e26ae61-2dc5-4769-a49c-95c31cf472c8 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Acquiring lock "refresh_cache-9280bbf5-da74-42c8-b8a3-a392cec3f921" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:46:07 compute-0 nova_compute[250018]: 2026-01-20 14:46:07.616 250022 DEBUG oslo_concurrency.lockutils [None req-9e26ae61-2dc5-4769-a49c-95c31cf472c8 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Acquired lock "refresh_cache-9280bbf5-da74-42c8-b8a3-a392cec3f921" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:46:07 compute-0 nova_compute[250018]: 2026-01-20 14:46:07.616 250022 DEBUG nova.network.neutron [None req-9e26ae61-2dc5-4769-a49c-95c31cf472c8 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: 9280bbf5-da74-42c8-b8a3-a392cec3f921] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:46:07 compute-0 nova_compute[250018]: 2026-01-20 14:46:07.667 250022 DEBUG nova.compute.manager [req-152e45aa-635d-4fcb-8925-69ce4f4ac947 req-6b1dc31b-0024-4bbf-9124-0896fd486c71 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 9280bbf5-da74-42c8-b8a3-a392cec3f921] Received event network-changed-bc4c8178-e331-4054-89ce-aae2ffa6e0ee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:46:07 compute-0 nova_compute[250018]: 2026-01-20 14:46:07.667 250022 DEBUG nova.compute.manager [req-152e45aa-635d-4fcb-8925-69ce4f4ac947 req-6b1dc31b-0024-4bbf-9124-0896fd486c71 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 9280bbf5-da74-42c8-b8a3-a392cec3f921] Refreshing instance network info cache due to event network-changed-bc4c8178-e331-4054-89ce-aae2ffa6e0ee. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:46:07 compute-0 nova_compute[250018]: 2026-01-20 14:46:07.668 250022 DEBUG oslo_concurrency.lockutils [req-152e45aa-635d-4fcb-8925-69ce4f4ac947 req-6b1dc31b-0024-4bbf-9124-0896fd486c71 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-9280bbf5-da74-42c8-b8a3-a392cec3f921" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:46:07 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1739: 321 pgs: 321 active+clean; 275 MiB data, 815 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 2.3 MiB/s wr, 109 op/s
Jan 20 14:46:07 compute-0 sudo[303459]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:46:07 compute-0 sudo[303459]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:46:07 compute-0 sudo[303459]: pam_unix(sudo:session): session closed for user root
Jan 20 14:46:07 compute-0 nova_compute[250018]: 2026-01-20 14:46:07.876 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:46:07 compute-0 nova_compute[250018]: 2026-01-20 14:46:07.878 250022 DEBUG oslo_concurrency.processutils [None req-9e26ae61-2dc5-4769-a49c-95c31cf472c8 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 9280bbf5-da74-42c8-b8a3-a392cec3f921_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.640s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:46:07 compute-0 sudo[303484]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:46:07 compute-0 sudo[303484]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:46:07 compute-0 sudo[303484]: pam_unix(sudo:session): session closed for user root
Jan 20 14:46:07 compute-0 nova_compute[250018]: 2026-01-20 14:46:07.944 250022 DEBUG nova.storage.rbd_utils [None req-9e26ae61-2dc5-4769-a49c-95c31cf472c8 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] resizing rbd image 9280bbf5-da74-42c8-b8a3-a392cec3f921_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 20 14:46:08 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e238 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:46:08 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:46:08 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:46:08 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:46:08.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:46:08 compute-0 nova_compute[250018]: 2026-01-20 14:46:08.703 250022 DEBUG nova.network.neutron [None req-9e26ae61-2dc5-4769-a49c-95c31cf472c8 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: 9280bbf5-da74-42c8-b8a3-a392cec3f921] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 14:46:08 compute-0 ceph-mon[74360]: pgmap v1739: 321 pgs: 321 active+clean; 275 MiB data, 815 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 2.3 MiB/s wr, 109 op/s
Jan 20 14:46:08 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:46:08 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:46:08 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:46:08.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:46:09 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1740: 321 pgs: 321 active+clean; 287 MiB data, 825 MiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 3.0 MiB/s wr, 108 op/s
Jan 20 14:46:10 compute-0 nova_compute[250018]: 2026-01-20 14:46:10.119 250022 DEBUG nova.network.neutron [None req-9e26ae61-2dc5-4769-a49c-95c31cf472c8 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: 9280bbf5-da74-42c8-b8a3-a392cec3f921] Updating instance_info_cache with network_info: [{"id": "bc4c8178-e331-4054-89ce-aae2ffa6e0ee", "address": "fa:16:3e:3d:c6:ef", "network": {"id": "fbd5d614-a7d3-4563-913c-104506628e59", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-60721994-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b31139b2a4e49cba5e7048febf901c4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc4c8178-e3", "ovs_interfaceid": "bc4c8178-e331-4054-89ce-aae2ffa6e0ee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:46:10 compute-0 nova_compute[250018]: 2026-01-20 14:46:10.139 250022 DEBUG oslo_concurrency.lockutils [None req-9e26ae61-2dc5-4769-a49c-95c31cf472c8 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Releasing lock "refresh_cache-9280bbf5-da74-42c8-b8a3-a392cec3f921" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:46:10 compute-0 nova_compute[250018]: 2026-01-20 14:46:10.140 250022 DEBUG nova.compute.manager [None req-9e26ae61-2dc5-4769-a49c-95c31cf472c8 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: 9280bbf5-da74-42c8-b8a3-a392cec3f921] Instance network_info: |[{"id": "bc4c8178-e331-4054-89ce-aae2ffa6e0ee", "address": "fa:16:3e:3d:c6:ef", "network": {"id": "fbd5d614-a7d3-4563-913c-104506628e59", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-60721994-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b31139b2a4e49cba5e7048febf901c4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc4c8178-e3", "ovs_interfaceid": "bc4c8178-e331-4054-89ce-aae2ffa6e0ee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 20 14:46:10 compute-0 nova_compute[250018]: 2026-01-20 14:46:10.140 250022 DEBUG oslo_concurrency.lockutils [req-152e45aa-635d-4fcb-8925-69ce4f4ac947 req-6b1dc31b-0024-4bbf-9124-0896fd486c71 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-9280bbf5-da74-42c8-b8a3-a392cec3f921" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:46:10 compute-0 nova_compute[250018]: 2026-01-20 14:46:10.141 250022 DEBUG nova.network.neutron [req-152e45aa-635d-4fcb-8925-69ce4f4ac947 req-6b1dc31b-0024-4bbf-9124-0896fd486c71 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 9280bbf5-da74-42c8-b8a3-a392cec3f921] Refreshing network info cache for port bc4c8178-e331-4054-89ce-aae2ffa6e0ee _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:46:10 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:46:10 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:46:10 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:46:10.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:46:10 compute-0 ceph-mon[74360]: pgmap v1740: 321 pgs: 321 active+clean; 287 MiB data, 825 MiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 3.0 MiB/s wr, 108 op/s
Jan 20 14:46:10 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:46:10 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:46:10 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:46:10.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:46:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 14:46:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:46:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 20 14:46:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:46:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006736336485669086 of space, bias 1.0, pg target 2.020900945700726 quantized to 32 (current 32)
Jan 20 14:46:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:46:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.2722757305043737e-06 of space, bias 1.0, pg target 0.0003791381676903034 quantized to 32 (current 32)
Jan 20 14:46:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:46:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:46:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:46:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5671365362693095 quantized to 32 (current 32)
Jan 20 14:46:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:46:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017332030522985297 quantized to 16 (current 32)
Jan 20 14:46:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:46:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:46:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:46:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.0002166503815373162 quantized to 32 (current 32)
Jan 20 14:46:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:46:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018415282430671877 quantized to 32 (current 32)
Jan 20 14:46:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:46:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:46:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:46:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004333007630746324 quantized to 32 (current 32)
Jan 20 14:46:11 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/2347137275' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:46:11 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1741: 321 pgs: 321 active+clean; 314 MiB data, 842 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 4.6 MiB/s wr, 175 op/s
Jan 20 14:46:11 compute-0 nova_compute[250018]: 2026-01-20 14:46:11.896 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:46:12 compute-0 nova_compute[250018]: 2026-01-20 14:46:12.168 250022 DEBUG nova.objects.instance [None req-9e26ae61-2dc5-4769-a49c-95c31cf472c8 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Lazy-loading 'migration_context' on Instance uuid 9280bbf5-da74-42c8-b8a3-a392cec3f921 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:46:12 compute-0 nova_compute[250018]: 2026-01-20 14:46:12.195 250022 DEBUG nova.virt.libvirt.driver [None req-9e26ae61-2dc5-4769-a49c-95c31cf472c8 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: 9280bbf5-da74-42c8-b8a3-a392cec3f921] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 14:46:12 compute-0 nova_compute[250018]: 2026-01-20 14:46:12.196 250022 DEBUG nova.virt.libvirt.driver [None req-9e26ae61-2dc5-4769-a49c-95c31cf472c8 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: 9280bbf5-da74-42c8-b8a3-a392cec3f921] Ensure instance console log exists: /var/lib/nova/instances/9280bbf5-da74-42c8-b8a3-a392cec3f921/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 14:46:12 compute-0 nova_compute[250018]: 2026-01-20 14:46:12.196 250022 DEBUG oslo_concurrency.lockutils [None req-9e26ae61-2dc5-4769-a49c-95c31cf472c8 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:46:12 compute-0 nova_compute[250018]: 2026-01-20 14:46:12.197 250022 DEBUG oslo_concurrency.lockutils [None req-9e26ae61-2dc5-4769-a49c-95c31cf472c8 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:46:12 compute-0 nova_compute[250018]: 2026-01-20 14:46:12.197 250022 DEBUG oslo_concurrency.lockutils [None req-9e26ae61-2dc5-4769-a49c-95c31cf472c8 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:46:12 compute-0 nova_compute[250018]: 2026-01-20 14:46:12.199 250022 DEBUG nova.virt.libvirt.driver [None req-9e26ae61-2dc5-4769-a49c-95c31cf472c8 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: 9280bbf5-da74-42c8-b8a3-a392cec3f921] Start _get_guest_xml network_info=[{"id": "bc4c8178-e331-4054-89ce-aae2ffa6e0ee", "address": "fa:16:3e:3d:c6:ef", "network": {"id": "fbd5d614-a7d3-4563-913c-104506628e59", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-60721994-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b31139b2a4e49cba5e7048febf901c4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc4c8178-e3", "ovs_interfaceid": "bc4c8178-e331-4054-89ce-aae2ffa6e0ee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encrypted': False, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'guest_format': None, 'size': 0, 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 14:46:12 compute-0 nova_compute[250018]: 2026-01-20 14:46:12.203 250022 WARNING nova.virt.libvirt.driver [None req-9e26ae61-2dc5-4769-a49c-95c31cf472c8 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:46:12 compute-0 nova_compute[250018]: 2026-01-20 14:46:12.209 250022 DEBUG nova.virt.libvirt.host [None req-9e26ae61-2dc5-4769-a49c-95c31cf472c8 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 14:46:12 compute-0 nova_compute[250018]: 2026-01-20 14:46:12.210 250022 DEBUG nova.virt.libvirt.host [None req-9e26ae61-2dc5-4769-a49c-95c31cf472c8 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 14:46:12 compute-0 nova_compute[250018]: 2026-01-20 14:46:12.213 250022 DEBUG nova.virt.libvirt.host [None req-9e26ae61-2dc5-4769-a49c-95c31cf472c8 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 14:46:12 compute-0 nova_compute[250018]: 2026-01-20 14:46:12.213 250022 DEBUG nova.virt.libvirt.host [None req-9e26ae61-2dc5-4769-a49c-95c31cf472c8 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 14:46:12 compute-0 nova_compute[250018]: 2026-01-20 14:46:12.214 250022 DEBUG nova.virt.libvirt.driver [None req-9e26ae61-2dc5-4769-a49c-95c31cf472c8 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 14:46:12 compute-0 nova_compute[250018]: 2026-01-20 14:46:12.214 250022 DEBUG nova.virt.hardware [None req-9e26ae61-2dc5-4769-a49c-95c31cf472c8 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 14:46:12 compute-0 nova_compute[250018]: 2026-01-20 14:46:12.215 250022 DEBUG nova.virt.hardware [None req-9e26ae61-2dc5-4769-a49c-95c31cf472c8 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 14:46:12 compute-0 nova_compute[250018]: 2026-01-20 14:46:12.215 250022 DEBUG nova.virt.hardware [None req-9e26ae61-2dc5-4769-a49c-95c31cf472c8 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 14:46:12 compute-0 nova_compute[250018]: 2026-01-20 14:46:12.215 250022 DEBUG nova.virt.hardware [None req-9e26ae61-2dc5-4769-a49c-95c31cf472c8 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 14:46:12 compute-0 nova_compute[250018]: 2026-01-20 14:46:12.215 250022 DEBUG nova.virt.hardware [None req-9e26ae61-2dc5-4769-a49c-95c31cf472c8 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 14:46:12 compute-0 nova_compute[250018]: 2026-01-20 14:46:12.215 250022 DEBUG nova.virt.hardware [None req-9e26ae61-2dc5-4769-a49c-95c31cf472c8 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 14:46:12 compute-0 nova_compute[250018]: 2026-01-20 14:46:12.216 250022 DEBUG nova.virt.hardware [None req-9e26ae61-2dc5-4769-a49c-95c31cf472c8 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 14:46:12 compute-0 nova_compute[250018]: 2026-01-20 14:46:12.216 250022 DEBUG nova.virt.hardware [None req-9e26ae61-2dc5-4769-a49c-95c31cf472c8 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 14:46:12 compute-0 nova_compute[250018]: 2026-01-20 14:46:12.216 250022 DEBUG nova.virt.hardware [None req-9e26ae61-2dc5-4769-a49c-95c31cf472c8 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 14:46:12 compute-0 nova_compute[250018]: 2026-01-20 14:46:12.216 250022 DEBUG nova.virt.hardware [None req-9e26ae61-2dc5-4769-a49c-95c31cf472c8 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 14:46:12 compute-0 nova_compute[250018]: 2026-01-20 14:46:12.217 250022 DEBUG nova.virt.hardware [None req-9e26ae61-2dc5-4769-a49c-95c31cf472c8 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 14:46:12 compute-0 nova_compute[250018]: 2026-01-20 14:46:12.219 250022 DEBUG oslo_concurrency.processutils [None req-9e26ae61-2dc5-4769-a49c-95c31cf472c8 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:46:12 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:46:12 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:46:12 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:46:12.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:46:12 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:46:12 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3071100775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:46:12 compute-0 nova_compute[250018]: 2026-01-20 14:46:12.706 250022 DEBUG oslo_concurrency.processutils [None req-9e26ae61-2dc5-4769-a49c-95c31cf472c8 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:46:12 compute-0 nova_compute[250018]: 2026-01-20 14:46:12.733 250022 DEBUG nova.storage.rbd_utils [None req-9e26ae61-2dc5-4769-a49c-95c31cf472c8 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] rbd image 9280bbf5-da74-42c8-b8a3-a392cec3f921_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:46:12 compute-0 sudo[303605]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:46:12 compute-0 nova_compute[250018]: 2026-01-20 14:46:12.738 250022 DEBUG oslo_concurrency.processutils [None req-9e26ae61-2dc5-4769-a49c-95c31cf472c8 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:46:12 compute-0 sudo[303605]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:46:12 compute-0 sudo[303605]: pam_unix(sudo:session): session closed for user root
Jan 20 14:46:12 compute-0 sudo[303648]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:46:12 compute-0 sudo[303648]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:46:12 compute-0 sudo[303648]: pam_unix(sudo:session): session closed for user root
Jan 20 14:46:12 compute-0 sudo[303674]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:46:12 compute-0 sudo[303674]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:46:12 compute-0 sudo[303674]: pam_unix(sudo:session): session closed for user root
Jan 20 14:46:12 compute-0 nova_compute[250018]: 2026-01-20 14:46:12.878 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:46:12 compute-0 ceph-mon[74360]: pgmap v1741: 321 pgs: 321 active+clean; 314 MiB data, 842 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 4.6 MiB/s wr, 175 op/s
Jan 20 14:46:12 compute-0 sudo[303718]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 14:46:12 compute-0 sudo[303718]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:46:12 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:46:12 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:46:12 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:46:12.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:46:12 compute-0 nova_compute[250018]: 2026-01-20 14:46:12.958 250022 DEBUG nova.network.neutron [req-152e45aa-635d-4fcb-8925-69ce4f4ac947 req-6b1dc31b-0024-4bbf-9124-0896fd486c71 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 9280bbf5-da74-42c8-b8a3-a392cec3f921] Updated VIF entry in instance network info cache for port bc4c8178-e331-4054-89ce-aae2ffa6e0ee. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 14:46:12 compute-0 nova_compute[250018]: 2026-01-20 14:46:12.958 250022 DEBUG nova.network.neutron [req-152e45aa-635d-4fcb-8925-69ce4f4ac947 req-6b1dc31b-0024-4bbf-9124-0896fd486c71 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 9280bbf5-da74-42c8-b8a3-a392cec3f921] Updating instance_info_cache with network_info: [{"id": "bc4c8178-e331-4054-89ce-aae2ffa6e0ee", "address": "fa:16:3e:3d:c6:ef", "network": {"id": "fbd5d614-a7d3-4563-913c-104506628e59", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-60721994-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b31139b2a4e49cba5e7048febf901c4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc4c8178-e3", "ovs_interfaceid": "bc4c8178-e331-4054-89ce-aae2ffa6e0ee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:46:12 compute-0 nova_compute[250018]: 2026-01-20 14:46:12.976 250022 DEBUG oslo_concurrency.lockutils [req-152e45aa-635d-4fcb-8925-69ce4f4ac947 req-6b1dc31b-0024-4bbf-9124-0896fd486c71 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-9280bbf5-da74-42c8-b8a3-a392cec3f921" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:46:13 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:46:13 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2880388694' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:46:13 compute-0 nova_compute[250018]: 2026-01-20 14:46:13.205 250022 DEBUG oslo_concurrency.processutils [None req-9e26ae61-2dc5-4769-a49c-95c31cf472c8 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:46:13 compute-0 nova_compute[250018]: 2026-01-20 14:46:13.206 250022 DEBUG nova.virt.libvirt.vif [None req-9e26ae61-2dc5-4769-a49c-95c31cf472c8 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:46:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1057607620',display_name='tempest-DeleteServersTestJSON-server-1057607620',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1057607620',id=96,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3b31139b2a4e49cba5e7048febf901c4',ramdisk_id='',reservation_id='r-p7j50y06',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-1162922273',owner_user_name='tempest-DeleteServersTestJSON-
1162922273-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:46:03Z,user_data=None,user_id='37e9ef97fbe0448e9fbe32d48b66211f',uuid=9280bbf5-da74-42c8-b8a3-a392cec3f921,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bc4c8178-e331-4054-89ce-aae2ffa6e0ee", "address": "fa:16:3e:3d:c6:ef", "network": {"id": "fbd5d614-a7d3-4563-913c-104506628e59", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-60721994-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b31139b2a4e49cba5e7048febf901c4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc4c8178-e3", "ovs_interfaceid": "bc4c8178-e331-4054-89ce-aae2ffa6e0ee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 14:46:13 compute-0 nova_compute[250018]: 2026-01-20 14:46:13.207 250022 DEBUG nova.network.os_vif_util [None req-9e26ae61-2dc5-4769-a49c-95c31cf472c8 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Converting VIF {"id": "bc4c8178-e331-4054-89ce-aae2ffa6e0ee", "address": "fa:16:3e:3d:c6:ef", "network": {"id": "fbd5d614-a7d3-4563-913c-104506628e59", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-60721994-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b31139b2a4e49cba5e7048febf901c4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc4c8178-e3", "ovs_interfaceid": "bc4c8178-e331-4054-89ce-aae2ffa6e0ee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:46:13 compute-0 nova_compute[250018]: 2026-01-20 14:46:13.208 250022 DEBUG nova.network.os_vif_util [None req-9e26ae61-2dc5-4769-a49c-95c31cf472c8 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3d:c6:ef,bridge_name='br-int',has_traffic_filtering=True,id=bc4c8178-e331-4054-89ce-aae2ffa6e0ee,network=Network(fbd5d614-a7d3-4563-913c-104506628e59),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbc4c8178-e3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:46:13 compute-0 nova_compute[250018]: 2026-01-20 14:46:13.209 250022 DEBUG nova.objects.instance [None req-9e26ae61-2dc5-4769-a49c-95c31cf472c8 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9280bbf5-da74-42c8-b8a3-a392cec3f921 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:46:13 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e238 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:46:13 compute-0 nova_compute[250018]: 2026-01-20 14:46:13.223 250022 DEBUG nova.virt.libvirt.driver [None req-9e26ae61-2dc5-4769-a49c-95c31cf472c8 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: 9280bbf5-da74-42c8-b8a3-a392cec3f921] End _get_guest_xml xml=<domain type="kvm">
Jan 20 14:46:13 compute-0 nova_compute[250018]:   <uuid>9280bbf5-da74-42c8-b8a3-a392cec3f921</uuid>
Jan 20 14:46:13 compute-0 nova_compute[250018]:   <name>instance-00000060</name>
Jan 20 14:46:13 compute-0 nova_compute[250018]:   <memory>131072</memory>
Jan 20 14:46:13 compute-0 nova_compute[250018]:   <vcpu>1</vcpu>
Jan 20 14:46:13 compute-0 nova_compute[250018]:   <metadata>
Jan 20 14:46:13 compute-0 nova_compute[250018]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 14:46:13 compute-0 nova_compute[250018]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 14:46:13 compute-0 nova_compute[250018]:       <nova:name>tempest-DeleteServersTestJSON-server-1057607620</nova:name>
Jan 20 14:46:13 compute-0 nova_compute[250018]:       <nova:creationTime>2026-01-20 14:46:12</nova:creationTime>
Jan 20 14:46:13 compute-0 nova_compute[250018]:       <nova:flavor name="m1.nano">
Jan 20 14:46:13 compute-0 nova_compute[250018]:         <nova:memory>128</nova:memory>
Jan 20 14:46:13 compute-0 nova_compute[250018]:         <nova:disk>1</nova:disk>
Jan 20 14:46:13 compute-0 nova_compute[250018]:         <nova:swap>0</nova:swap>
Jan 20 14:46:13 compute-0 nova_compute[250018]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 14:46:13 compute-0 nova_compute[250018]:         <nova:vcpus>1</nova:vcpus>
Jan 20 14:46:13 compute-0 nova_compute[250018]:       </nova:flavor>
Jan 20 14:46:13 compute-0 nova_compute[250018]:       <nova:owner>
Jan 20 14:46:13 compute-0 nova_compute[250018]:         <nova:user uuid="37e9ef97fbe0448e9fbe32d48b66211f">tempest-DeleteServersTestJSON-1162922273-project-member</nova:user>
Jan 20 14:46:13 compute-0 nova_compute[250018]:         <nova:project uuid="3b31139b2a4e49cba5e7048febf901c4">tempest-DeleteServersTestJSON-1162922273</nova:project>
Jan 20 14:46:13 compute-0 nova_compute[250018]:       </nova:owner>
Jan 20 14:46:13 compute-0 nova_compute[250018]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 14:46:13 compute-0 nova_compute[250018]:       <nova:ports>
Jan 20 14:46:13 compute-0 nova_compute[250018]:         <nova:port uuid="bc4c8178-e331-4054-89ce-aae2ffa6e0ee">
Jan 20 14:46:13 compute-0 nova_compute[250018]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Jan 20 14:46:13 compute-0 nova_compute[250018]:         </nova:port>
Jan 20 14:46:13 compute-0 nova_compute[250018]:       </nova:ports>
Jan 20 14:46:13 compute-0 nova_compute[250018]:     </nova:instance>
Jan 20 14:46:13 compute-0 nova_compute[250018]:   </metadata>
Jan 20 14:46:13 compute-0 nova_compute[250018]:   <sysinfo type="smbios">
Jan 20 14:46:13 compute-0 nova_compute[250018]:     <system>
Jan 20 14:46:13 compute-0 nova_compute[250018]:       <entry name="manufacturer">RDO</entry>
Jan 20 14:46:13 compute-0 nova_compute[250018]:       <entry name="product">OpenStack Compute</entry>
Jan 20 14:46:13 compute-0 nova_compute[250018]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 14:46:13 compute-0 nova_compute[250018]:       <entry name="serial">9280bbf5-da74-42c8-b8a3-a392cec3f921</entry>
Jan 20 14:46:13 compute-0 nova_compute[250018]:       <entry name="uuid">9280bbf5-da74-42c8-b8a3-a392cec3f921</entry>
Jan 20 14:46:13 compute-0 nova_compute[250018]:       <entry name="family">Virtual Machine</entry>
Jan 20 14:46:13 compute-0 nova_compute[250018]:     </system>
Jan 20 14:46:13 compute-0 nova_compute[250018]:   </sysinfo>
Jan 20 14:46:13 compute-0 nova_compute[250018]:   <os>
Jan 20 14:46:13 compute-0 nova_compute[250018]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 14:46:13 compute-0 nova_compute[250018]:     <boot dev="hd"/>
Jan 20 14:46:13 compute-0 nova_compute[250018]:     <smbios mode="sysinfo"/>
Jan 20 14:46:13 compute-0 nova_compute[250018]:   </os>
Jan 20 14:46:13 compute-0 nova_compute[250018]:   <features>
Jan 20 14:46:13 compute-0 nova_compute[250018]:     <acpi/>
Jan 20 14:46:13 compute-0 nova_compute[250018]:     <apic/>
Jan 20 14:46:13 compute-0 nova_compute[250018]:     <vmcoreinfo/>
Jan 20 14:46:13 compute-0 nova_compute[250018]:   </features>
Jan 20 14:46:13 compute-0 nova_compute[250018]:   <clock offset="utc">
Jan 20 14:46:13 compute-0 nova_compute[250018]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 14:46:13 compute-0 nova_compute[250018]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 14:46:13 compute-0 nova_compute[250018]:     <timer name="hpet" present="no"/>
Jan 20 14:46:13 compute-0 nova_compute[250018]:   </clock>
Jan 20 14:46:13 compute-0 nova_compute[250018]:   <cpu mode="custom" match="exact">
Jan 20 14:46:13 compute-0 nova_compute[250018]:     <model>Nehalem</model>
Jan 20 14:46:13 compute-0 nova_compute[250018]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 14:46:13 compute-0 nova_compute[250018]:   </cpu>
Jan 20 14:46:13 compute-0 nova_compute[250018]:   <devices>
Jan 20 14:46:13 compute-0 nova_compute[250018]:     <disk type="network" device="disk">
Jan 20 14:46:13 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 14:46:13 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/9280bbf5-da74-42c8-b8a3-a392cec3f921_disk">
Jan 20 14:46:13 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:46:13 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:46:13 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:46:13 compute-0 nova_compute[250018]:       </source>
Jan 20 14:46:13 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 14:46:13 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:46:13 compute-0 nova_compute[250018]:       </auth>
Jan 20 14:46:13 compute-0 nova_compute[250018]:       <target dev="vda" bus="virtio"/>
Jan 20 14:46:13 compute-0 nova_compute[250018]:     </disk>
Jan 20 14:46:13 compute-0 nova_compute[250018]:     <disk type="network" device="cdrom">
Jan 20 14:46:13 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 14:46:13 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/9280bbf5-da74-42c8-b8a3-a392cec3f921_disk.config">
Jan 20 14:46:13 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:46:13 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:46:13 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:46:13 compute-0 nova_compute[250018]:       </source>
Jan 20 14:46:13 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 14:46:13 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:46:13 compute-0 nova_compute[250018]:       </auth>
Jan 20 14:46:13 compute-0 nova_compute[250018]:       <target dev="sda" bus="sata"/>
Jan 20 14:46:13 compute-0 nova_compute[250018]:     </disk>
Jan 20 14:46:13 compute-0 nova_compute[250018]:     <interface type="ethernet">
Jan 20 14:46:13 compute-0 nova_compute[250018]:       <mac address="fa:16:3e:3d:c6:ef"/>
Jan 20 14:46:13 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 14:46:13 compute-0 nova_compute[250018]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 14:46:13 compute-0 nova_compute[250018]:       <mtu size="1442"/>
Jan 20 14:46:13 compute-0 nova_compute[250018]:       <target dev="tapbc4c8178-e3"/>
Jan 20 14:46:13 compute-0 nova_compute[250018]:     </interface>
Jan 20 14:46:13 compute-0 nova_compute[250018]:     <serial type="pty">
Jan 20 14:46:13 compute-0 nova_compute[250018]:       <log file="/var/lib/nova/instances/9280bbf5-da74-42c8-b8a3-a392cec3f921/console.log" append="off"/>
Jan 20 14:46:13 compute-0 nova_compute[250018]:     </serial>
Jan 20 14:46:13 compute-0 nova_compute[250018]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 14:46:13 compute-0 nova_compute[250018]:     <video>
Jan 20 14:46:13 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 14:46:13 compute-0 nova_compute[250018]:     </video>
Jan 20 14:46:13 compute-0 nova_compute[250018]:     <input type="tablet" bus="usb"/>
Jan 20 14:46:13 compute-0 nova_compute[250018]:     <rng model="virtio">
Jan 20 14:46:13 compute-0 nova_compute[250018]:       <backend model="random">/dev/urandom</backend>
Jan 20 14:46:13 compute-0 nova_compute[250018]:     </rng>
Jan 20 14:46:13 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root"/>
Jan 20 14:46:13 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:46:13 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:46:13 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:46:13 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:46:13 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:46:13 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:46:13 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:46:13 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:46:13 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:46:13 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:46:13 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:46:13 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:46:13 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:46:13 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:46:13 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:46:13 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:46:13 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:46:13 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:46:13 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:46:13 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:46:13 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:46:13 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:46:13 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:46:13 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:46:13 compute-0 nova_compute[250018]:     <controller type="usb" index="0"/>
Jan 20 14:46:13 compute-0 nova_compute[250018]:     <memballoon model="virtio">
Jan 20 14:46:13 compute-0 nova_compute[250018]:       <stats period="10"/>
Jan 20 14:46:13 compute-0 nova_compute[250018]:     </memballoon>
Jan 20 14:46:13 compute-0 nova_compute[250018]:   </devices>
Jan 20 14:46:13 compute-0 nova_compute[250018]: </domain>
Jan 20 14:46:13 compute-0 nova_compute[250018]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 14:46:13 compute-0 nova_compute[250018]: 2026-01-20 14:46:13.224 250022 DEBUG nova.compute.manager [None req-9e26ae61-2dc5-4769-a49c-95c31cf472c8 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: 9280bbf5-da74-42c8-b8a3-a392cec3f921] Preparing to wait for external event network-vif-plugged-bc4c8178-e331-4054-89ce-aae2ffa6e0ee prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 20 14:46:13 compute-0 nova_compute[250018]: 2026-01-20 14:46:13.225 250022 DEBUG oslo_concurrency.lockutils [None req-9e26ae61-2dc5-4769-a49c-95c31cf472c8 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Acquiring lock "9280bbf5-da74-42c8-b8a3-a392cec3f921-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:46:13 compute-0 nova_compute[250018]: 2026-01-20 14:46:13.225 250022 DEBUG oslo_concurrency.lockutils [None req-9e26ae61-2dc5-4769-a49c-95c31cf472c8 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Lock "9280bbf5-da74-42c8-b8a3-a392cec3f921-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:46:13 compute-0 nova_compute[250018]: 2026-01-20 14:46:13.225 250022 DEBUG oslo_concurrency.lockutils [None req-9e26ae61-2dc5-4769-a49c-95c31cf472c8 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Lock "9280bbf5-da74-42c8-b8a3-a392cec3f921-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:46:13 compute-0 nova_compute[250018]: 2026-01-20 14:46:13.226 250022 DEBUG nova.virt.libvirt.vif [None req-9e26ae61-2dc5-4769-a49c-95c31cf472c8 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:46:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1057607620',display_name='tempest-DeleteServersTestJSON-server-1057607620',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1057607620',id=96,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3b31139b2a4e49cba5e7048febf901c4',ramdisk_id='',reservation_id='r-p7j50y06',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-1162922273',owner_user_name='tempest-DeleteServersTestJSON-1162922273-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:46:03Z,user_data=None,user_id='37e9ef97fbe0448e9fbe32d48b66211f',uuid=9280bbf5-da74-42c8-b8a3-a392cec3f921,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bc4c8178-e331-4054-89ce-aae2ffa6e0ee", "address": "fa:16:3e:3d:c6:ef", "network": {"id": "fbd5d614-a7d3-4563-913c-104506628e59", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-60721994-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b31139b2a4e49cba5e7048febf901c4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc4c8178-e3", "ovs_interfaceid": "bc4c8178-e331-4054-89ce-aae2ffa6e0ee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 14:46:13 compute-0 nova_compute[250018]: 2026-01-20 14:46:13.226 250022 DEBUG nova.network.os_vif_util [None req-9e26ae61-2dc5-4769-a49c-95c31cf472c8 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Converting VIF {"id": "bc4c8178-e331-4054-89ce-aae2ffa6e0ee", "address": "fa:16:3e:3d:c6:ef", "network": {"id": "fbd5d614-a7d3-4563-913c-104506628e59", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-60721994-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b31139b2a4e49cba5e7048febf901c4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc4c8178-e3", "ovs_interfaceid": "bc4c8178-e331-4054-89ce-aae2ffa6e0ee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:46:13 compute-0 nova_compute[250018]: 2026-01-20 14:46:13.227 250022 DEBUG nova.network.os_vif_util [None req-9e26ae61-2dc5-4769-a49c-95c31cf472c8 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3d:c6:ef,bridge_name='br-int',has_traffic_filtering=True,id=bc4c8178-e331-4054-89ce-aae2ffa6e0ee,network=Network(fbd5d614-a7d3-4563-913c-104506628e59),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbc4c8178-e3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:46:13 compute-0 nova_compute[250018]: 2026-01-20 14:46:13.227 250022 DEBUG os_vif [None req-9e26ae61-2dc5-4769-a49c-95c31cf472c8 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3d:c6:ef,bridge_name='br-int',has_traffic_filtering=True,id=bc4c8178-e331-4054-89ce-aae2ffa6e0ee,network=Network(fbd5d614-a7d3-4563-913c-104506628e59),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbc4c8178-e3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 14:46:13 compute-0 nova_compute[250018]: 2026-01-20 14:46:13.228 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:46:13 compute-0 nova_compute[250018]: 2026-01-20 14:46:13.228 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:46:13 compute-0 nova_compute[250018]: 2026-01-20 14:46:13.229 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:46:13 compute-0 nova_compute[250018]: 2026-01-20 14:46:13.232 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:46:13 compute-0 nova_compute[250018]: 2026-01-20 14:46:13.233 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbc4c8178-e3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:46:13 compute-0 nova_compute[250018]: 2026-01-20 14:46:13.233 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbc4c8178-e3, col_values=(('external_ids', {'iface-id': 'bc4c8178-e331-4054-89ce-aae2ffa6e0ee', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3d:c6:ef', 'vm-uuid': '9280bbf5-da74-42c8-b8a3-a392cec3f921'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:46:13 compute-0 NetworkManager[48960]: <info>  [1768920373.2365] manager: (tapbc4c8178-e3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/153)
Jan 20 14:46:13 compute-0 nova_compute[250018]: 2026-01-20 14:46:13.237 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:46:13 compute-0 nova_compute[250018]: 2026-01-20 14:46:13.241 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:46:13 compute-0 nova_compute[250018]: 2026-01-20 14:46:13.242 250022 INFO os_vif [None req-9e26ae61-2dc5-4769-a49c-95c31cf472c8 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3d:c6:ef,bridge_name='br-int',has_traffic_filtering=True,id=bc4c8178-e331-4054-89ce-aae2ffa6e0ee,network=Network(fbd5d614-a7d3-4563-913c-104506628e59),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbc4c8178-e3')
Jan 20 14:46:13 compute-0 nova_compute[250018]: 2026-01-20 14:46:13.305 250022 DEBUG nova.virt.libvirt.driver [None req-9e26ae61-2dc5-4769-a49c-95c31cf472c8 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:46:13 compute-0 nova_compute[250018]: 2026-01-20 14:46:13.305 250022 DEBUG nova.virt.libvirt.driver [None req-9e26ae61-2dc5-4769-a49c-95c31cf472c8 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:46:13 compute-0 nova_compute[250018]: 2026-01-20 14:46:13.305 250022 DEBUG nova.virt.libvirt.driver [None req-9e26ae61-2dc5-4769-a49c-95c31cf472c8 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] No VIF found with MAC fa:16:3e:3d:c6:ef, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 14:46:13 compute-0 nova_compute[250018]: 2026-01-20 14:46:13.306 250022 INFO nova.virt.libvirt.driver [None req-9e26ae61-2dc5-4769-a49c-95c31cf472c8 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: 9280bbf5-da74-42c8-b8a3-a392cec3f921] Using config drive
Jan 20 14:46:13 compute-0 nova_compute[250018]: 2026-01-20 14:46:13.329 250022 DEBUG nova.storage.rbd_utils [None req-9e26ae61-2dc5-4769-a49c-95c31cf472c8 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] rbd image 9280bbf5-da74-42c8-b8a3-a392cec3f921_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:46:13 compute-0 sudo[303718]: pam_unix(sudo:session): session closed for user root
Jan 20 14:46:13 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 14:46:13 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:46:13 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 20 14:46:13 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:46:13 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 20 14:46:13 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:46:13 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 98cf5c26-8688-4d28-8bc3-a7b3ff271b3e does not exist
Jan 20 14:46:13 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev e9af3bad-0487-4a2d-9483-a366c2b5be12 does not exist
Jan 20 14:46:13 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev dfa404fb-ad53-4618-80fc-906a8c12b5b0 does not exist
Jan 20 14:46:13 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 20 14:46:13 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:46:13 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 20 14:46:13 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:46:13 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 14:46:13 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:46:13 compute-0 sudo[303796]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:46:13 compute-0 sudo[303796]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:46:13 compute-0 sudo[303796]: pam_unix(sudo:session): session closed for user root
Jan 20 14:46:13 compute-0 sudo[303821]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:46:13 compute-0 sudo[303821]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:46:13 compute-0 sudo[303821]: pam_unix(sudo:session): session closed for user root
Jan 20 14:46:13 compute-0 sudo[303846]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:46:13 compute-0 sudo[303846]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:46:13 compute-0 sudo[303846]: pam_unix(sudo:session): session closed for user root
Jan 20 14:46:13 compute-0 sudo[303872]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 14:46:13 compute-0 sudo[303872]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:46:13 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1742: 321 pgs: 321 active+clean; 318 MiB data, 842 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 4.5 MiB/s wr, 176 op/s
Jan 20 14:46:13 compute-0 nova_compute[250018]: 2026-01-20 14:46:13.926 250022 INFO nova.virt.libvirt.driver [None req-9e26ae61-2dc5-4769-a49c-95c31cf472c8 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: 9280bbf5-da74-42c8-b8a3-a392cec3f921] Creating config drive at /var/lib/nova/instances/9280bbf5-da74-42c8-b8a3-a392cec3f921/disk.config
Jan 20 14:46:13 compute-0 nova_compute[250018]: 2026-01-20 14:46:13.934 250022 DEBUG oslo_concurrency.processutils [None req-9e26ae61-2dc5-4769-a49c-95c31cf472c8 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9280bbf5-da74-42c8-b8a3-a392cec3f921/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4xz_7zyf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:46:13 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3071100775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:46:13 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/2880388694' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:46:13 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:46:13 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:46:13 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:46:13 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:46:13 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:46:13 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:46:13 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/514796829' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:46:13 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/514796829' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:46:13 compute-0 ceph-mon[74360]: pgmap v1742: 321 pgs: 321 active+clean; 318 MiB data, 842 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 4.5 MiB/s wr, 176 op/s
Jan 20 14:46:14 compute-0 podman[303943]: 2026-01-20 14:46:14.070638937 +0000 UTC m=+0.039632704 container create 00a6b93248505c27513cf9f354efd77d3ae3e04925345c329f215af7f69542e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_chaum, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3)
Jan 20 14:46:14 compute-0 nova_compute[250018]: 2026-01-20 14:46:14.071 250022 DEBUG oslo_concurrency.processutils [None req-9e26ae61-2dc5-4769-a49c-95c31cf472c8 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9280bbf5-da74-42c8-b8a3-a392cec3f921/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4xz_7zyf" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:46:14 compute-0 nova_compute[250018]: 2026-01-20 14:46:14.100 250022 DEBUG nova.storage.rbd_utils [None req-9e26ae61-2dc5-4769-a49c-95c31cf472c8 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] rbd image 9280bbf5-da74-42c8-b8a3-a392cec3f921_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:46:14 compute-0 nova_compute[250018]: 2026-01-20 14:46:14.103 250022 DEBUG oslo_concurrency.processutils [None req-9e26ae61-2dc5-4769-a49c-95c31cf472c8 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9280bbf5-da74-42c8-b8a3-a392cec3f921/disk.config 9280bbf5-da74-42c8-b8a3-a392cec3f921_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:46:14 compute-0 systemd[1]: Started libpod-conmon-00a6b93248505c27513cf9f354efd77d3ae3e04925345c329f215af7f69542e4.scope.
Jan 20 14:46:14 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:46:14 compute-0 podman[303943]: 2026-01-20 14:46:14.051447038 +0000 UTC m=+0.020440835 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:46:14 compute-0 podman[303943]: 2026-01-20 14:46:14.152369203 +0000 UTC m=+0.121362990 container init 00a6b93248505c27513cf9f354efd77d3ae3e04925345c329f215af7f69542e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_chaum, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 20 14:46:14 compute-0 podman[303943]: 2026-01-20 14:46:14.159316531 +0000 UTC m=+0.128310298 container start 00a6b93248505c27513cf9f354efd77d3ae3e04925345c329f215af7f69542e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_chaum, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 20 14:46:14 compute-0 podman[303943]: 2026-01-20 14:46:14.162570699 +0000 UTC m=+0.131564466 container attach 00a6b93248505c27513cf9f354efd77d3ae3e04925345c329f215af7f69542e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_chaum, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:46:14 compute-0 systemd[1]: libpod-00a6b93248505c27513cf9f354efd77d3ae3e04925345c329f215af7f69542e4.scope: Deactivated successfully.
Jan 20 14:46:14 compute-0 upbeat_chaum[303978]: 167 167
Jan 20 14:46:14 compute-0 conmon[303978]: conmon 00a6b93248505c27513c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-00a6b93248505c27513cf9f354efd77d3ae3e04925345c329f215af7f69542e4.scope/container/memory.events
Jan 20 14:46:14 compute-0 podman[303943]: 2026-01-20 14:46:14.167687688 +0000 UTC m=+0.136681455 container died 00a6b93248505c27513cf9f354efd77d3ae3e04925345c329f215af7f69542e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_chaum, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 20 14:46:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-f64803589e9eeb6d4fded53f958d8ea66a83c19972b2fdd7ea4d0b23a6fb7e83-merged.mount: Deactivated successfully.
Jan 20 14:46:14 compute-0 podman[303943]: 2026-01-20 14:46:14.204516945 +0000 UTC m=+0.173510702 container remove 00a6b93248505c27513cf9f354efd77d3ae3e04925345c329f215af7f69542e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_chaum, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 20 14:46:14 compute-0 systemd[1]: libpod-conmon-00a6b93248505c27513cf9f354efd77d3ae3e04925345c329f215af7f69542e4.scope: Deactivated successfully.
Jan 20 14:46:14 compute-0 podman[304015]: 2026-01-20 14:46:14.349147435 +0000 UTC m=+0.038312759 container create d8a7477ad14f817654b5e65014a21a274f2ef69b6e737d235e1618ecf54f3939 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_hypatia, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 20 14:46:14 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:46:14 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:46:14 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:46:14.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:46:14 compute-0 systemd[1]: Started libpod-conmon-d8a7477ad14f817654b5e65014a21a274f2ef69b6e737d235e1618ecf54f3939.scope.
Jan 20 14:46:14 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:46:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05b775a4d1209651210ba91ff3352d33332b5aa94603505c8f509b65893340be/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:46:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05b775a4d1209651210ba91ff3352d33332b5aa94603505c8f509b65893340be/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:46:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05b775a4d1209651210ba91ff3352d33332b5aa94603505c8f509b65893340be/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:46:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05b775a4d1209651210ba91ff3352d33332b5aa94603505c8f509b65893340be/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:46:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05b775a4d1209651210ba91ff3352d33332b5aa94603505c8f509b65893340be/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 14:46:14 compute-0 podman[304015]: 2026-01-20 14:46:14.418547686 +0000 UTC m=+0.107713040 container init d8a7477ad14f817654b5e65014a21a274f2ef69b6e737d235e1618ecf54f3939 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_hypatia, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:46:14 compute-0 podman[304015]: 2026-01-20 14:46:14.332781542 +0000 UTC m=+0.021946896 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:46:14 compute-0 podman[304015]: 2026-01-20 14:46:14.428623608 +0000 UTC m=+0.117788932 container start d8a7477ad14f817654b5e65014a21a274f2ef69b6e737d235e1618ecf54f3939 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_hypatia, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:46:14 compute-0 podman[304015]: 2026-01-20 14:46:14.432177415 +0000 UTC m=+0.121342739 container attach d8a7477ad14f817654b5e65014a21a274f2ef69b6e737d235e1618ecf54f3939 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_hypatia, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:46:14 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:46:14 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:46:14 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:46:14.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:46:15 compute-0 keen_hypatia[304032]: --> passed data devices: 0 physical, 1 LVM
Jan 20 14:46:15 compute-0 keen_hypatia[304032]: --> relative data size: 1.0
Jan 20 14:46:15 compute-0 keen_hypatia[304032]: --> All data devices are unavailable
Jan 20 14:46:15 compute-0 systemd[1]: libpod-d8a7477ad14f817654b5e65014a21a274f2ef69b6e737d235e1618ecf54f3939.scope: Deactivated successfully.
Jan 20 14:46:15 compute-0 podman[304015]: 2026-01-20 14:46:15.230088688 +0000 UTC m=+0.919254022 container died d8a7477ad14f817654b5e65014a21a274f2ef69b6e737d235e1618ecf54f3939 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_hypatia, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 20 14:46:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-05b775a4d1209651210ba91ff3352d33332b5aa94603505c8f509b65893340be-merged.mount: Deactivated successfully.
Jan 20 14:46:15 compute-0 podman[304015]: 2026-01-20 14:46:15.286907307 +0000 UTC m=+0.976072631 container remove d8a7477ad14f817654b5e65014a21a274f2ef69b6e737d235e1618ecf54f3939 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_hypatia, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 20 14:46:15 compute-0 systemd[1]: libpod-conmon-d8a7477ad14f817654b5e65014a21a274f2ef69b6e737d235e1618ecf54f3939.scope: Deactivated successfully.
Jan 20 14:46:15 compute-0 nova_compute[250018]: 2026-01-20 14:46:15.322 250022 DEBUG oslo_concurrency.processutils [None req-9e26ae61-2dc5-4769-a49c-95c31cf472c8 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9280bbf5-da74-42c8-b8a3-a392cec3f921/disk.config 9280bbf5-da74-42c8-b8a3-a392cec3f921_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.219s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:46:15 compute-0 nova_compute[250018]: 2026-01-20 14:46:15.324 250022 INFO nova.virt.libvirt.driver [None req-9e26ae61-2dc5-4769-a49c-95c31cf472c8 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: 9280bbf5-da74-42c8-b8a3-a392cec3f921] Deleting local config drive /var/lib/nova/instances/9280bbf5-da74-42c8-b8a3-a392cec3f921/disk.config because it was imported into RBD.
Jan 20 14:46:15 compute-0 sudo[303872]: pam_unix(sudo:session): session closed for user root
Jan 20 14:46:15 compute-0 kernel: tapbc4c8178-e3: entered promiscuous mode
Jan 20 14:46:15 compute-0 NetworkManager[48960]: <info>  [1768920375.3801] manager: (tapbc4c8178-e3): new Tun device (/org/freedesktop/NetworkManager/Devices/154)
Jan 20 14:46:15 compute-0 sudo[304066]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:46:15 compute-0 systemd-udevd[304100]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:46:15 compute-0 sudo[304066]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:46:15 compute-0 ovn_controller[148666]: 2026-01-20T14:46:15Z|00302|binding|INFO|Claiming lport bc4c8178-e331-4054-89ce-aae2ffa6e0ee for this chassis.
Jan 20 14:46:15 compute-0 ovn_controller[148666]: 2026-01-20T14:46:15Z|00303|binding|INFO|bc4c8178-e331-4054-89ce-aae2ffa6e0ee: Claiming fa:16:3e:3d:c6:ef 10.100.0.4
Jan 20 14:46:15 compute-0 nova_compute[250018]: 2026-01-20 14:46:15.421 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:46:15 compute-0 sudo[304066]: pam_unix(sudo:session): session closed for user root
Jan 20 14:46:15 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:46:15.429 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3d:c6:ef 10.100.0.4'], port_security=['fa:16:3e:3d:c6:ef 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '9280bbf5-da74-42c8-b8a3-a392cec3f921', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fbd5d614-a7d3-4563-913c-104506628e59', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3b31139b2a4e49cba5e7048febf901c4', 'neutron:revision_number': '2', 'neutron:security_group_ids': '117d6f57-074c-4b36-b375-42e0ab117254', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c42c6982-be52-495a-8746-42a46932572f, chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=bc4c8178-e331-4054-89ce-aae2ffa6e0ee) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:46:15 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:46:15.431 160071 INFO neutron.agent.ovn.metadata.agent [-] Port bc4c8178-e331-4054-89ce-aae2ffa6e0ee in datapath fbd5d614-a7d3-4563-913c-104506628e59 bound to our chassis
Jan 20 14:46:15 compute-0 NetworkManager[48960]: <info>  [1768920375.4325] device (tapbc4c8178-e3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 14:46:15 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:46:15.432 160071 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fbd5d614-a7d3-4563-913c-104506628e59
Jan 20 14:46:15 compute-0 NetworkManager[48960]: <info>  [1768920375.4349] device (tapbc4c8178-e3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 14:46:15 compute-0 ovn_controller[148666]: 2026-01-20T14:46:15Z|00304|binding|INFO|Setting lport bc4c8178-e331-4054-89ce-aae2ffa6e0ee ovn-installed in OVS
Jan 20 14:46:15 compute-0 ovn_controller[148666]: 2026-01-20T14:46:15Z|00305|binding|INFO|Setting lport bc4c8178-e331-4054-89ce-aae2ffa6e0ee up in Southbound
Jan 20 14:46:15 compute-0 nova_compute[250018]: 2026-01-20 14:46:15.443 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:46:15 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:46:15.445 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[b12babf2-49cd-4341-a33b-c1f983c0c71c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:46:15 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:46:15.446 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapfbd5d614-a1 in ovnmeta-fbd5d614-a7d3-4563-913c-104506628e59 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 14:46:15 compute-0 nova_compute[250018]: 2026-01-20 14:46:15.448 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:46:15 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:46:15.448 257604 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapfbd5d614-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 14:46:15 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:46:15.448 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[a0a70cc2-e606-49e0-975d-da60b82466bb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:46:15 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:46:15.449 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[020aa173-fc43-49fc-a2e6-a0d4316d31ef]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:46:15 compute-0 systemd-machined[216401]: New machine qemu-40-instance-00000060.
Jan 20 14:46:15 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:46:15.460 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[065e8f8f-21c6-4ac4-a70c-205dba082e9a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:46:15 compute-0 systemd[1]: Started Virtual Machine qemu-40-instance-00000060.
Jan 20 14:46:15 compute-0 sudo[304103]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:46:15 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:46:15.480 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[250cea9a-6641-4b5a-8d8e-d52f77022a35]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:46:15 compute-0 sudo[304103]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:46:15 compute-0 sudo[304103]: pam_unix(sudo:session): session closed for user root
Jan 20 14:46:15 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:46:15.508 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[b627dcc7-da2d-4213-8afc-39dcc35f041e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:46:15 compute-0 NetworkManager[48960]: <info>  [1768920375.5156] manager: (tapfbd5d614-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/155)
Jan 20 14:46:15 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:46:15.515 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[bef706ec-4989-424a-b6db-ea28cf9b2451]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:46:15 compute-0 sudo[304136]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:46:15 compute-0 sudo[304136]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:46:15 compute-0 sudo[304136]: pam_unix(sudo:session): session closed for user root
Jan 20 14:46:15 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:46:15.543 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[35d7058b-c106-4401-a7f5-b8a9a9befe1f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:46:15 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:46:15.546 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[0c592b91-d1ea-48f9-86c4-7d40765983a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:46:15 compute-0 NetworkManager[48960]: <info>  [1768920375.5686] device (tapfbd5d614-a0): carrier: link connected
Jan 20 14:46:15 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:46:15.573 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[6b883aa8-2847-4dab-aa78-9e435454572a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:46:15 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:46:15.588 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[ae8e7535-6827-4cde-ad08-c070ce3dfbca]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfbd5d614-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5c:38:be'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 98], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 631629, 'reachable_time': 21095, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 304210, 'error': None, 'target': 'ovnmeta-fbd5d614-a7d3-4563-913c-104506628e59', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:46:15 compute-0 sudo[304186]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- lvm list --format json
Jan 20 14:46:15 compute-0 sudo[304186]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:46:15 compute-0 sshd-session[304051]: Invalid user test from 157.245.78.139 port 53816
Jan 20 14:46:15 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:46:15.604 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[2ce544ac-3853-4920-b150-843351641c53]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe5c:38be'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 631629, 'tstamp': 631629}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 304212, 'error': None, 'target': 'ovnmeta-fbd5d614-a7d3-4563-913c-104506628e59', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:46:15 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:46:15.622 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[6bd28192-2ee9-4fb9-8002-c790bba6cbd7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfbd5d614-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5c:38:be'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 98], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 631629, 'reachable_time': 21095, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 304214, 'error': None, 'target': 'ovnmeta-fbd5d614-a7d3-4563-913c-104506628e59', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:46:15 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:46:15.654 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[3533a43a-c56f-4556-b6d5-1d99688a8ad6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:46:15 compute-0 sshd-session[304051]: Connection closed by invalid user test 157.245.78.139 port 53816 [preauth]
Jan 20 14:46:15 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:46:15.718 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[184d9776-4887-41d7-ac9e-6a30b9d76bde]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:46:15 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:46:15.720 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfbd5d614-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:46:15 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:46:15.720 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:46:15 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:46:15.721 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfbd5d614-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:46:15 compute-0 nova_compute[250018]: 2026-01-20 14:46:15.722 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:46:15 compute-0 NetworkManager[48960]: <info>  [1768920375.7232] manager: (tapfbd5d614-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/156)
Jan 20 14:46:15 compute-0 kernel: tapfbd5d614-a0: entered promiscuous mode
Jan 20 14:46:15 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:46:15.727 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfbd5d614-a0, col_values=(('external_ids', {'iface-id': 'b370b74e-dca0-4ff7-a96f-85b392e20721'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:46:15 compute-0 ovn_controller[148666]: 2026-01-20T14:46:15Z|00306|binding|INFO|Releasing lport b370b74e-dca0-4ff7-a96f-85b392e20721 from this chassis (sb_readonly=0)
Jan 20 14:46:15 compute-0 nova_compute[250018]: 2026-01-20 14:46:15.732 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:46:15 compute-0 nova_compute[250018]: 2026-01-20 14:46:15.745 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:46:15 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:46:15.747 160071 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/fbd5d614-a7d3-4563-913c-104506628e59.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/fbd5d614-a7d3-4563-913c-104506628e59.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 14:46:15 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:46:15.748 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[2d9628f3-088e-45fe-a127-5a3ace6e5c4a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:46:15 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:46:15.749 160071 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 14:46:15 compute-0 ovn_metadata_agent[160049]: global
Jan 20 14:46:15 compute-0 ovn_metadata_agent[160049]:     log         /dev/log local0 debug
Jan 20 14:46:15 compute-0 ovn_metadata_agent[160049]:     log-tag     haproxy-metadata-proxy-fbd5d614-a7d3-4563-913c-104506628e59
Jan 20 14:46:15 compute-0 ovn_metadata_agent[160049]:     user        root
Jan 20 14:46:15 compute-0 ovn_metadata_agent[160049]:     group       root
Jan 20 14:46:15 compute-0 ovn_metadata_agent[160049]:     maxconn     1024
Jan 20 14:46:15 compute-0 ovn_metadata_agent[160049]:     pidfile     /var/lib/neutron/external/pids/fbd5d614-a7d3-4563-913c-104506628e59.pid.haproxy
Jan 20 14:46:15 compute-0 ovn_metadata_agent[160049]:     daemon
Jan 20 14:46:15 compute-0 ovn_metadata_agent[160049]: 
Jan 20 14:46:15 compute-0 ovn_metadata_agent[160049]: defaults
Jan 20 14:46:15 compute-0 ovn_metadata_agent[160049]:     log global
Jan 20 14:46:15 compute-0 ovn_metadata_agent[160049]:     mode http
Jan 20 14:46:15 compute-0 ovn_metadata_agent[160049]:     option httplog
Jan 20 14:46:15 compute-0 ovn_metadata_agent[160049]:     option dontlognull
Jan 20 14:46:15 compute-0 ovn_metadata_agent[160049]:     option http-server-close
Jan 20 14:46:15 compute-0 ovn_metadata_agent[160049]:     option forwardfor
Jan 20 14:46:15 compute-0 ovn_metadata_agent[160049]:     retries                 3
Jan 20 14:46:15 compute-0 ovn_metadata_agent[160049]:     timeout http-request    30s
Jan 20 14:46:15 compute-0 ovn_metadata_agent[160049]:     timeout connect         30s
Jan 20 14:46:15 compute-0 ovn_metadata_agent[160049]:     timeout client          32s
Jan 20 14:46:15 compute-0 ovn_metadata_agent[160049]:     timeout server          32s
Jan 20 14:46:15 compute-0 ovn_metadata_agent[160049]:     timeout http-keep-alive 30s
Jan 20 14:46:15 compute-0 ovn_metadata_agent[160049]: 
Jan 20 14:46:15 compute-0 ovn_metadata_agent[160049]: 
Jan 20 14:46:15 compute-0 ovn_metadata_agent[160049]: listen listener
Jan 20 14:46:15 compute-0 ovn_metadata_agent[160049]:     bind 169.254.169.254:80
Jan 20 14:46:15 compute-0 ovn_metadata_agent[160049]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 14:46:15 compute-0 ovn_metadata_agent[160049]:     http-request add-header X-OVN-Network-ID fbd5d614-a7d3-4563-913c-104506628e59
Jan 20 14:46:15 compute-0 ovn_metadata_agent[160049]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 14:46:15 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:46:15.751 160071 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-fbd5d614-a7d3-4563-913c-104506628e59', 'env', 'PROCESS_TAG=haproxy-fbd5d614-a7d3-4563-913c-104506628e59', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/fbd5d614-a7d3-4563-913c-104506628e59.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 14:46:15 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1743: 321 pgs: 321 active+clean; 360 MiB data, 855 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 5.3 MiB/s wr, 182 op/s
Jan 20 14:46:15 compute-0 podman[304264]: 2026-01-20 14:46:15.93983487 +0000 UTC m=+0.047775966 container create 9702b40896d801169ddb67a5b3d6c2729d95b049217cc5647bc8744a14a2d433 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_saha, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:46:15 compute-0 ceph-mon[74360]: pgmap v1743: 321 pgs: 321 active+clean; 360 MiB data, 855 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 5.3 MiB/s wr, 182 op/s
Jan 20 14:46:15 compute-0 systemd[1]: Started libpod-conmon-9702b40896d801169ddb67a5b3d6c2729d95b049217cc5647bc8744a14a2d433.scope.
Jan 20 14:46:16 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:46:16 compute-0 podman[304264]: 2026-01-20 14:46:16.013932038 +0000 UTC m=+0.121873144 container init 9702b40896d801169ddb67a5b3d6c2729d95b049217cc5647bc8744a14a2d433 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_saha, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:46:16 compute-0 podman[304264]: 2026-01-20 14:46:15.920449995 +0000 UTC m=+0.028391131 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:46:16 compute-0 podman[304264]: 2026-01-20 14:46:16.021462652 +0000 UTC m=+0.129403748 container start 9702b40896d801169ddb67a5b3d6c2729d95b049217cc5647bc8744a14a2d433 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_saha, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 20 14:46:16 compute-0 dreamy_saha[304286]: 167 167
Jan 20 14:46:16 compute-0 systemd[1]: libpod-9702b40896d801169ddb67a5b3d6c2729d95b049217cc5647bc8744a14a2d433.scope: Deactivated successfully.
Jan 20 14:46:16 compute-0 podman[304264]: 2026-01-20 14:46:16.027541507 +0000 UTC m=+0.135482603 container attach 9702b40896d801169ddb67a5b3d6c2729d95b049217cc5647bc8744a14a2d433 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_saha, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 20 14:46:16 compute-0 conmon[304286]: conmon 9702b40896d801169ddb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9702b40896d801169ddb67a5b3d6c2729d95b049217cc5647bc8744a14a2d433.scope/container/memory.events
Jan 20 14:46:16 compute-0 podman[304264]: 2026-01-20 14:46:16.027978278 +0000 UTC m=+0.135919374 container died 9702b40896d801169ddb67a5b3d6c2729d95b049217cc5647bc8744a14a2d433 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_saha, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 20 14:46:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-bbd493ad239d1b1aa648ea583a802866184ef2a4b860acd1ff4c888248018877-merged.mount: Deactivated successfully.
Jan 20 14:46:16 compute-0 podman[304264]: 2026-01-20 14:46:16.061283291 +0000 UTC m=+0.169224387 container remove 9702b40896d801169ddb67a5b3d6c2729d95b049217cc5647bc8744a14a2d433 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_saha, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 20 14:46:16 compute-0 systemd[1]: libpod-conmon-9702b40896d801169ddb67a5b3d6c2729d95b049217cc5647bc8744a14a2d433.scope: Deactivated successfully.
Jan 20 14:46:16 compute-0 podman[304308]: 2026-01-20 14:46:16.102613522 +0000 UTC m=+0.052275639 container create 7fb12ee97b7c3f959341e9f77b692c51236d098d3eff33fb6065534e0fb181c7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fbd5d614-a7d3-4563-913c-104506628e59, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS)
Jan 20 14:46:16 compute-0 systemd[1]: Started libpod-conmon-7fb12ee97b7c3f959341e9f77b692c51236d098d3eff33fb6065534e0fb181c7.scope.
Jan 20 14:46:16 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:46:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0f49adb14476b2162e2f2280cb198b12c535d762c6d239cedd45e7a5823a927/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 14:46:16 compute-0 podman[304308]: 2026-01-20 14:46:16.077500011 +0000 UTC m=+0.027162148 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 14:46:16 compute-0 podman[304308]: 2026-01-20 14:46:16.175307562 +0000 UTC m=+0.124969699 container init 7fb12ee97b7c3f959341e9f77b692c51236d098d3eff33fb6065534e0fb181c7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fbd5d614-a7d3-4563-913c-104506628e59, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 20 14:46:16 compute-0 podman[304308]: 2026-01-20 14:46:16.18043581 +0000 UTC m=+0.130097927 container start 7fb12ee97b7c3f959341e9f77b692c51236d098d3eff33fb6065534e0fb181c7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fbd5d614-a7d3-4563-913c-104506628e59, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true)
Jan 20 14:46:16 compute-0 neutron-haproxy-ovnmeta-fbd5d614-a7d3-4563-913c-104506628e59[304334]: [NOTICE]   (304344) : New worker (304355) forked
Jan 20 14:46:16 compute-0 neutron-haproxy-ovnmeta-fbd5d614-a7d3-4563-913c-104506628e59[304334]: [NOTICE]   (304344) : Loading success.
Jan 20 14:46:16 compute-0 podman[304343]: 2026-01-20 14:46:16.239708987 +0000 UTC m=+0.042215926 container create d1e40ad6463cc6ca4fc0d236cc83fcf7723ca52b97aa409110bdb4c38985203c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_williamson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:46:16 compute-0 systemd[1]: Started libpod-conmon-d1e40ad6463cc6ca4fc0d236cc83fcf7723ca52b97aa409110bdb4c38985203c.scope.
Jan 20 14:46:16 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:46:16 compute-0 nova_compute[250018]: 2026-01-20 14:46:16.310 250022 DEBUG nova.compute.manager [req-0e49300c-b6d6-4cce-a345-8f1d3e77de1f req-d1a3a693-59ae-40f8-b952-214239433265 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 9280bbf5-da74-42c8-b8a3-a392cec3f921] Received event network-vif-plugged-bc4c8178-e331-4054-89ce-aae2ffa6e0ee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:46:16 compute-0 nova_compute[250018]: 2026-01-20 14:46:16.311 250022 DEBUG oslo_concurrency.lockutils [req-0e49300c-b6d6-4cce-a345-8f1d3e77de1f req-d1a3a693-59ae-40f8-b952-214239433265 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "9280bbf5-da74-42c8-b8a3-a392cec3f921-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:46:16 compute-0 nova_compute[250018]: 2026-01-20 14:46:16.311 250022 DEBUG oslo_concurrency.lockutils [req-0e49300c-b6d6-4cce-a345-8f1d3e77de1f req-d1a3a693-59ae-40f8-b952-214239433265 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "9280bbf5-da74-42c8-b8a3-a392cec3f921-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:46:16 compute-0 nova_compute[250018]: 2026-01-20 14:46:16.312 250022 DEBUG oslo_concurrency.lockutils [req-0e49300c-b6d6-4cce-a345-8f1d3e77de1f req-d1a3a693-59ae-40f8-b952-214239433265 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "9280bbf5-da74-42c8-b8a3-a392cec3f921-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:46:16 compute-0 nova_compute[250018]: 2026-01-20 14:46:16.312 250022 DEBUG nova.compute.manager [req-0e49300c-b6d6-4cce-a345-8f1d3e77de1f req-d1a3a693-59ae-40f8-b952-214239433265 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 9280bbf5-da74-42c8-b8a3-a392cec3f921] Processing event network-vif-plugged-bc4c8178-e331-4054-89ce-aae2ffa6e0ee _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 20 14:46:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0324fa1ec3ee5f50677a917d0513a62706560e50624a47df7d92fe3ce2e935a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:46:16 compute-0 podman[304343]: 2026-01-20 14:46:16.221051941 +0000 UTC m=+0.023558890 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:46:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0324fa1ec3ee5f50677a917d0513a62706560e50624a47df7d92fe3ce2e935a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:46:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0324fa1ec3ee5f50677a917d0513a62706560e50624a47df7d92fe3ce2e935a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:46:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0324fa1ec3ee5f50677a917d0513a62706560e50624a47df7d92fe3ce2e935a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:46:16 compute-0 podman[304343]: 2026-01-20 14:46:16.335481282 +0000 UTC m=+0.137988221 container init d1e40ad6463cc6ca4fc0d236cc83fcf7723ca52b97aa409110bdb4c38985203c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_williamson, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 20 14:46:16 compute-0 podman[304343]: 2026-01-20 14:46:16.345301788 +0000 UTC m=+0.147808727 container start d1e40ad6463cc6ca4fc0d236cc83fcf7723ca52b97aa409110bdb4c38985203c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_williamson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:46:16 compute-0 podman[304343]: 2026-01-20 14:46:16.350420667 +0000 UTC m=+0.152927636 container attach d1e40ad6463cc6ca4fc0d236cc83fcf7723ca52b97aa409110bdb4c38985203c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_williamson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:46:16 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:46:16 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:46:16 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:46:16.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:46:16 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:46:16 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:46:16 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:46:16.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:46:17 compute-0 dreamy_williamson[304371]: {
Jan 20 14:46:17 compute-0 dreamy_williamson[304371]:     "0": [
Jan 20 14:46:17 compute-0 dreamy_williamson[304371]:         {
Jan 20 14:46:17 compute-0 dreamy_williamson[304371]:             "devices": [
Jan 20 14:46:17 compute-0 dreamy_williamson[304371]:                 "/dev/loop3"
Jan 20 14:46:17 compute-0 dreamy_williamson[304371]:             ],
Jan 20 14:46:17 compute-0 dreamy_williamson[304371]:             "lv_name": "ceph_lv0",
Jan 20 14:46:17 compute-0 dreamy_williamson[304371]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:46:17 compute-0 dreamy_williamson[304371]:             "lv_size": "7511998464",
Jan 20 14:46:17 compute-0 dreamy_williamson[304371]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e399cf45-e6b6-5393-99f1-75c601d3f188,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afc3740a-ccee-46f8-b343-22648cc89376,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 20 14:46:17 compute-0 dreamy_williamson[304371]:             "lv_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 14:46:17 compute-0 dreamy_williamson[304371]:             "name": "ceph_lv0",
Jan 20 14:46:17 compute-0 dreamy_williamson[304371]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:46:17 compute-0 dreamy_williamson[304371]:             "tags": {
Jan 20 14:46:17 compute-0 dreamy_williamson[304371]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:46:17 compute-0 dreamy_williamson[304371]:                 "ceph.block_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 14:46:17 compute-0 dreamy_williamson[304371]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 14:46:17 compute-0 dreamy_williamson[304371]:                 "ceph.cluster_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 14:46:17 compute-0 dreamy_williamson[304371]:                 "ceph.cluster_name": "ceph",
Jan 20 14:46:17 compute-0 dreamy_williamson[304371]:                 "ceph.crush_device_class": "",
Jan 20 14:46:17 compute-0 dreamy_williamson[304371]:                 "ceph.encrypted": "0",
Jan 20 14:46:17 compute-0 dreamy_williamson[304371]:                 "ceph.osd_fsid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 14:46:17 compute-0 dreamy_williamson[304371]:                 "ceph.osd_id": "0",
Jan 20 14:46:17 compute-0 dreamy_williamson[304371]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 14:46:17 compute-0 dreamy_williamson[304371]:                 "ceph.type": "block",
Jan 20 14:46:17 compute-0 dreamy_williamson[304371]:                 "ceph.vdo": "0"
Jan 20 14:46:17 compute-0 dreamy_williamson[304371]:             },
Jan 20 14:46:17 compute-0 dreamy_williamson[304371]:             "type": "block",
Jan 20 14:46:17 compute-0 dreamy_williamson[304371]:             "vg_name": "ceph_vg0"
Jan 20 14:46:17 compute-0 dreamy_williamson[304371]:         }
Jan 20 14:46:17 compute-0 dreamy_williamson[304371]:     ]
Jan 20 14:46:17 compute-0 dreamy_williamson[304371]: }
Jan 20 14:46:17 compute-0 systemd[1]: libpod-d1e40ad6463cc6ca4fc0d236cc83fcf7723ca52b97aa409110bdb4c38985203c.scope: Deactivated successfully.
Jan 20 14:46:17 compute-0 podman[304343]: 2026-01-20 14:46:17.160069597 +0000 UTC m=+0.962576536 container died d1e40ad6463cc6ca4fc0d236cc83fcf7723ca52b97aa409110bdb4c38985203c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_williamson, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 20 14:46:17 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/2610441827' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:46:17 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/1110613297' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:46:17 compute-0 nova_compute[250018]: 2026-01-20 14:46:17.435 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768920377.434829, 9280bbf5-da74-42c8-b8a3-a392cec3f921 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:46:17 compute-0 nova_compute[250018]: 2026-01-20 14:46:17.435 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 9280bbf5-da74-42c8-b8a3-a392cec3f921] VM Started (Lifecycle Event)
Jan 20 14:46:17 compute-0 nova_compute[250018]: 2026-01-20 14:46:17.438 250022 DEBUG nova.compute.manager [None req-9e26ae61-2dc5-4769-a49c-95c31cf472c8 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: 9280bbf5-da74-42c8-b8a3-a392cec3f921] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 14:46:17 compute-0 nova_compute[250018]: 2026-01-20 14:46:17.441 250022 DEBUG nova.virt.libvirt.driver [None req-9e26ae61-2dc5-4769-a49c-95c31cf472c8 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: 9280bbf5-da74-42c8-b8a3-a392cec3f921] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 14:46:17 compute-0 nova_compute[250018]: 2026-01-20 14:46:17.444 250022 INFO nova.virt.libvirt.driver [-] [instance: 9280bbf5-da74-42c8-b8a3-a392cec3f921] Instance spawned successfully.
Jan 20 14:46:17 compute-0 nova_compute[250018]: 2026-01-20 14:46:17.445 250022 DEBUG nova.virt.libvirt.driver [None req-9e26ae61-2dc5-4769-a49c-95c31cf472c8 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: 9280bbf5-da74-42c8-b8a3-a392cec3f921] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 14:46:17 compute-0 nova_compute[250018]: 2026-01-20 14:46:17.458 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 9280bbf5-da74-42c8-b8a3-a392cec3f921] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:46:17 compute-0 nova_compute[250018]: 2026-01-20 14:46:17.461 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 9280bbf5-da74-42c8-b8a3-a392cec3f921] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:46:17 compute-0 nova_compute[250018]: 2026-01-20 14:46:17.474 250022 DEBUG nova.virt.libvirt.driver [None req-9e26ae61-2dc5-4769-a49c-95c31cf472c8 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: 9280bbf5-da74-42c8-b8a3-a392cec3f921] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:46:17 compute-0 nova_compute[250018]: 2026-01-20 14:46:17.475 250022 DEBUG nova.virt.libvirt.driver [None req-9e26ae61-2dc5-4769-a49c-95c31cf472c8 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: 9280bbf5-da74-42c8-b8a3-a392cec3f921] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:46:17 compute-0 nova_compute[250018]: 2026-01-20 14:46:17.475 250022 DEBUG nova.virt.libvirt.driver [None req-9e26ae61-2dc5-4769-a49c-95c31cf472c8 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: 9280bbf5-da74-42c8-b8a3-a392cec3f921] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:46:17 compute-0 nova_compute[250018]: 2026-01-20 14:46:17.475 250022 DEBUG nova.virt.libvirt.driver [None req-9e26ae61-2dc5-4769-a49c-95c31cf472c8 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: 9280bbf5-da74-42c8-b8a3-a392cec3f921] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:46:17 compute-0 nova_compute[250018]: 2026-01-20 14:46:17.476 250022 DEBUG nova.virt.libvirt.driver [None req-9e26ae61-2dc5-4769-a49c-95c31cf472c8 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: 9280bbf5-da74-42c8-b8a3-a392cec3f921] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:46:17 compute-0 nova_compute[250018]: 2026-01-20 14:46:17.476 250022 DEBUG nova.virt.libvirt.driver [None req-9e26ae61-2dc5-4769-a49c-95c31cf472c8 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: 9280bbf5-da74-42c8-b8a3-a392cec3f921] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:46:17 compute-0 nova_compute[250018]: 2026-01-20 14:46:17.512 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 9280bbf5-da74-42c8-b8a3-a392cec3f921] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:46:17 compute-0 nova_compute[250018]: 2026-01-20 14:46:17.512 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768920377.4350529, 9280bbf5-da74-42c8-b8a3-a392cec3f921 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:46:17 compute-0 nova_compute[250018]: 2026-01-20 14:46:17.512 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 9280bbf5-da74-42c8-b8a3-a392cec3f921] VM Paused (Lifecycle Event)
Jan 20 14:46:17 compute-0 nova_compute[250018]: 2026-01-20 14:46:17.544 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 9280bbf5-da74-42c8-b8a3-a392cec3f921] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:46:17 compute-0 nova_compute[250018]: 2026-01-20 14:46:17.547 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768920377.4404943, 9280bbf5-da74-42c8-b8a3-a392cec3f921 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:46:17 compute-0 nova_compute[250018]: 2026-01-20 14:46:17.548 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 9280bbf5-da74-42c8-b8a3-a392cec3f921] VM Resumed (Lifecycle Event)
Jan 20 14:46:17 compute-0 nova_compute[250018]: 2026-01-20 14:46:17.556 250022 INFO nova.compute.manager [None req-9e26ae61-2dc5-4769-a49c-95c31cf472c8 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: 9280bbf5-da74-42c8-b8a3-a392cec3f921] Took 13.74 seconds to spawn the instance on the hypervisor.
Jan 20 14:46:17 compute-0 nova_compute[250018]: 2026-01-20 14:46:17.556 250022 DEBUG nova.compute.manager [None req-9e26ae61-2dc5-4769-a49c-95c31cf472c8 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: 9280bbf5-da74-42c8-b8a3-a392cec3f921] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:46:17 compute-0 nova_compute[250018]: 2026-01-20 14:46:17.564 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 9280bbf5-da74-42c8-b8a3-a392cec3f921] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:46:17 compute-0 nova_compute[250018]: 2026-01-20 14:46:17.567 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 9280bbf5-da74-42c8-b8a3-a392cec3f921] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:46:17 compute-0 nova_compute[250018]: 2026-01-20 14:46:17.594 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 9280bbf5-da74-42c8-b8a3-a392cec3f921] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:46:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-f0324fa1ec3ee5f50677a917d0513a62706560e50624a47df7d92fe3ce2e935a-merged.mount: Deactivated successfully.
Jan 20 14:46:17 compute-0 nova_compute[250018]: 2026-01-20 14:46:17.651 250022 INFO nova.compute.manager [None req-9e26ae61-2dc5-4769-a49c-95c31cf472c8 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: 9280bbf5-da74-42c8-b8a3-a392cec3f921] Took 14.81 seconds to build instance.
Jan 20 14:46:17 compute-0 podman[304343]: 2026-01-20 14:46:17.66465096 +0000 UTC m=+1.467157899 container remove d1e40ad6463cc6ca4fc0d236cc83fcf7723ca52b97aa409110bdb4c38985203c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_williamson, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 14:46:17 compute-0 nova_compute[250018]: 2026-01-20 14:46:17.687 250022 DEBUG oslo_concurrency.lockutils [None req-9e26ae61-2dc5-4769-a49c-95c31cf472c8 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Lock "9280bbf5-da74-42c8-b8a3-a392cec3f921" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.946s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:46:17 compute-0 sudo[304186]: pam_unix(sudo:session): session closed for user root
Jan 20 14:46:17 compute-0 systemd[1]: libpod-conmon-d1e40ad6463cc6ca4fc0d236cc83fcf7723ca52b97aa409110bdb4c38985203c.scope: Deactivated successfully.
Jan 20 14:46:17 compute-0 sudo[304434]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:46:17 compute-0 sudo[304434]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:46:17 compute-0 sudo[304434]: pam_unix(sudo:session): session closed for user root
Jan 20 14:46:17 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1744: 321 pgs: 321 active+clean; 370 MiB data, 859 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 5.0 MiB/s wr, 200 op/s
Jan 20 14:46:17 compute-0 sudo[304462]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:46:17 compute-0 sudo[304462]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:46:17 compute-0 sudo[304462]: pam_unix(sudo:session): session closed for user root
Jan 20 14:46:17 compute-0 sudo[304487]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:46:17 compute-0 sudo[304487]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:46:17 compute-0 sudo[304487]: pam_unix(sudo:session): session closed for user root
Jan 20 14:46:17 compute-0 nova_compute[250018]: 2026-01-20 14:46:17.883 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:46:17 compute-0 sudo[304512]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- raw list --format json
Jan 20 14:46:17 compute-0 sudo[304512]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:46:18 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e238 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:46:18 compute-0 nova_compute[250018]: 2026-01-20 14:46:18.235 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:46:18 compute-0 podman[304577]: 2026-01-20 14:46:18.265316377 +0000 UTC m=+0.038715879 container create 70087b48f8461f1274fc464630b5daa5305a780ae99b380084ed786c95b6e073 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_yalow, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 14:46:18 compute-0 systemd[1]: Started libpod-conmon-70087b48f8461f1274fc464630b5daa5305a780ae99b380084ed786c95b6e073.scope.
Jan 20 14:46:18 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:46:18 compute-0 podman[304577]: 2026-01-20 14:46:18.248127382 +0000 UTC m=+0.021526904 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:46:18 compute-0 podman[304577]: 2026-01-20 14:46:18.349065737 +0000 UTC m=+0.122465259 container init 70087b48f8461f1274fc464630b5daa5305a780ae99b380084ed786c95b6e073 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_yalow, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:46:18 compute-0 podman[304577]: 2026-01-20 14:46:18.35509494 +0000 UTC m=+0.128494442 container start 70087b48f8461f1274fc464630b5daa5305a780ae99b380084ed786c95b6e073 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_yalow, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 14:46:18 compute-0 kind_yalow[304593]: 167 167
Jan 20 14:46:18 compute-0 podman[304577]: 2026-01-20 14:46:18.358945395 +0000 UTC m=+0.132344897 container attach 70087b48f8461f1274fc464630b5daa5305a780ae99b380084ed786c95b6e073 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_yalow, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default)
Jan 20 14:46:18 compute-0 systemd[1]: libpod-70087b48f8461f1274fc464630b5daa5305a780ae99b380084ed786c95b6e073.scope: Deactivated successfully.
Jan 20 14:46:18 compute-0 podman[304577]: 2026-01-20 14:46:18.360524377 +0000 UTC m=+0.133923879 container died 70087b48f8461f1274fc464630b5daa5305a780ae99b380084ed786c95b6e073 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_yalow, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:46:18 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:46:18 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:46:18 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:46:18.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:46:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-4049b050ced2f718b9fc6bdc24da58c6c1b3f6a29cf1f3e79fd088a8cd907f4f-merged.mount: Deactivated successfully.
Jan 20 14:46:18 compute-0 podman[304577]: 2026-01-20 14:46:18.401879748 +0000 UTC m=+0.175279290 container remove 70087b48f8461f1274fc464630b5daa5305a780ae99b380084ed786c95b6e073 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_yalow, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 20 14:46:18 compute-0 systemd[1]: libpod-conmon-70087b48f8461f1274fc464630b5daa5305a780ae99b380084ed786c95b6e073.scope: Deactivated successfully.
Jan 20 14:46:18 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-crash-compute-0[81580]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
Jan 20 14:46:18 compute-0 podman[304616]: 2026-01-20 14:46:18.601677833 +0000 UTC m=+0.046109621 container create 0b4d87acec134e080a2156a9546b656994df123de5644af2d3548f0f4d1a6faf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_hertz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:46:18 compute-0 systemd[1]: Started libpod-conmon-0b4d87acec134e080a2156a9546b656994df123de5644af2d3548f0f4d1a6faf.scope.
Jan 20 14:46:18 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:46:18 compute-0 podman[304616]: 2026-01-20 14:46:18.577331162 +0000 UTC m=+0.021762970 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:46:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63582ab1080f244554df8f815a3b828d147c44dd1bd6e567f6d9acdcf9046fb6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:46:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63582ab1080f244554df8f815a3b828d147c44dd1bd6e567f6d9acdcf9046fb6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:46:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63582ab1080f244554df8f815a3b828d147c44dd1bd6e567f6d9acdcf9046fb6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:46:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63582ab1080f244554df8f815a3b828d147c44dd1bd6e567f6d9acdcf9046fb6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:46:18 compute-0 nova_compute[250018]: 2026-01-20 14:46:18.689 250022 DEBUG nova.compute.manager [req-92ca05ab-a399-4368-997c-2020323dbe94 req-5a8a090a-b053-4762-ad34-cb04d8536d0a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 9280bbf5-da74-42c8-b8a3-a392cec3f921] Received event network-vif-plugged-bc4c8178-e331-4054-89ce-aae2ffa6e0ee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:46:18 compute-0 nova_compute[250018]: 2026-01-20 14:46:18.690 250022 DEBUG oslo_concurrency.lockutils [req-92ca05ab-a399-4368-997c-2020323dbe94 req-5a8a090a-b053-4762-ad34-cb04d8536d0a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "9280bbf5-da74-42c8-b8a3-a392cec3f921-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:46:18 compute-0 nova_compute[250018]: 2026-01-20 14:46:18.691 250022 DEBUG oslo_concurrency.lockutils [req-92ca05ab-a399-4368-997c-2020323dbe94 req-5a8a090a-b053-4762-ad34-cb04d8536d0a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "9280bbf5-da74-42c8-b8a3-a392cec3f921-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:46:18 compute-0 nova_compute[250018]: 2026-01-20 14:46:18.691 250022 DEBUG oslo_concurrency.lockutils [req-92ca05ab-a399-4368-997c-2020323dbe94 req-5a8a090a-b053-4762-ad34-cb04d8536d0a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "9280bbf5-da74-42c8-b8a3-a392cec3f921-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:46:18 compute-0 nova_compute[250018]: 2026-01-20 14:46:18.691 250022 DEBUG nova.compute.manager [req-92ca05ab-a399-4368-997c-2020323dbe94 req-5a8a090a-b053-4762-ad34-cb04d8536d0a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 9280bbf5-da74-42c8-b8a3-a392cec3f921] No waiting events found dispatching network-vif-plugged-bc4c8178-e331-4054-89ce-aae2ffa6e0ee pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:46:18 compute-0 nova_compute[250018]: 2026-01-20 14:46:18.692 250022 WARNING nova.compute.manager [req-92ca05ab-a399-4368-997c-2020323dbe94 req-5a8a090a-b053-4762-ad34-cb04d8536d0a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 9280bbf5-da74-42c8-b8a3-a392cec3f921] Received unexpected event network-vif-plugged-bc4c8178-e331-4054-89ce-aae2ffa6e0ee for instance with vm_state active and task_state None.
Jan 20 14:46:18 compute-0 podman[304616]: 2026-01-20 14:46:18.697224782 +0000 UTC m=+0.141656590 container init 0b4d87acec134e080a2156a9546b656994df123de5644af2d3548f0f4d1a6faf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_hertz, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:46:18 compute-0 podman[304616]: 2026-01-20 14:46:18.706073091 +0000 UTC m=+0.150504879 container start 0b4d87acec134e080a2156a9546b656994df123de5644af2d3548f0f4d1a6faf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_hertz, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:46:18 compute-0 podman[304616]: 2026-01-20 14:46:18.709908325 +0000 UTC m=+0.154340143 container attach 0b4d87acec134e080a2156a9546b656994df123de5644af2d3548f0f4d1a6faf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_hertz, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:46:18 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:46:18 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:46:18 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:46:18.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:46:19 compute-0 vigilant_hertz[304632]: {
Jan 20 14:46:19 compute-0 vigilant_hertz[304632]:     "afc3740a-ccee-46f8-b343-22648cc89376": {
Jan 20 14:46:19 compute-0 vigilant_hertz[304632]:         "ceph_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 14:46:19 compute-0 vigilant_hertz[304632]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 20 14:46:19 compute-0 vigilant_hertz[304632]:         "osd_id": 0,
Jan 20 14:46:19 compute-0 vigilant_hertz[304632]:         "osd_uuid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 14:46:19 compute-0 vigilant_hertz[304632]:         "type": "bluestore"
Jan 20 14:46:19 compute-0 vigilant_hertz[304632]:     }
Jan 20 14:46:19 compute-0 vigilant_hertz[304632]: }
Jan 20 14:46:19 compute-0 systemd[1]: libpod-0b4d87acec134e080a2156a9546b656994df123de5644af2d3548f0f4d1a6faf.scope: Deactivated successfully.
Jan 20 14:46:19 compute-0 podman[304616]: 2026-01-20 14:46:19.551980644 +0000 UTC m=+0.996412462 container died 0b4d87acec134e080a2156a9546b656994df123de5644af2d3548f0f4d1a6faf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_hertz, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:46:19 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1745: 321 pgs: 321 active+clean; 372 MiB data, 864 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.8 MiB/s wr, 164 op/s
Jan 20 14:46:20 compute-0 ceph-mon[74360]: pgmap v1744: 321 pgs: 321 active+clean; 370 MiB data, 859 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 5.0 MiB/s wr, 200 op/s
Jan 20 14:46:20 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:46:20 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:46:20 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:46:20.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:46:20 compute-0 nova_compute[250018]: 2026-01-20 14:46:20.792 250022 DEBUG oslo_concurrency.lockutils [None req-0328f0c2-97a9-41ed-a3a4-2295c543897c 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Acquiring lock "9280bbf5-da74-42c8-b8a3-a392cec3f921" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:46:20 compute-0 nova_compute[250018]: 2026-01-20 14:46:20.794 250022 DEBUG oslo_concurrency.lockutils [None req-0328f0c2-97a9-41ed-a3a4-2295c543897c 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Lock "9280bbf5-da74-42c8-b8a3-a392cec3f921" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:46:20 compute-0 nova_compute[250018]: 2026-01-20 14:46:20.794 250022 DEBUG nova.compute.manager [None req-0328f0c2-97a9-41ed-a3a4-2295c543897c 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: 9280bbf5-da74-42c8-b8a3-a392cec3f921] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:46:20 compute-0 nova_compute[250018]: 2026-01-20 14:46:20.800 250022 DEBUG nova.compute.manager [None req-0328f0c2-97a9-41ed-a3a4-2295c543897c 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: 9280bbf5-da74-42c8-b8a3-a392cec3f921] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Jan 20 14:46:20 compute-0 nova_compute[250018]: 2026-01-20 14:46:20.801 250022 DEBUG nova.objects.instance [None req-0328f0c2-97a9-41ed-a3a4-2295c543897c 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Lazy-loading 'flavor' on Instance uuid 9280bbf5-da74-42c8-b8a3-a392cec3f921 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:46:20 compute-0 nova_compute[250018]: 2026-01-20 14:46:20.832 250022 DEBUG nova.virt.libvirt.driver [None req-0328f0c2-97a9-41ed-a3a4-2295c543897c 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: 9280bbf5-da74-42c8-b8a3-a392cec3f921] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Jan 20 14:46:20 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:46:20 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:46:20 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:46:20.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:46:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-63582ab1080f244554df8f815a3b828d147c44dd1bd6e567f6d9acdcf9046fb6-merged.mount: Deactivated successfully.
Jan 20 14:46:21 compute-0 podman[304616]: 2026-01-20 14:46:21.107211129 +0000 UTC m=+2.551642917 container remove 0b4d87acec134e080a2156a9546b656994df123de5644af2d3548f0f4d1a6faf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_hertz, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:46:21 compute-0 systemd[1]: libpod-conmon-0b4d87acec134e080a2156a9546b656994df123de5644af2d3548f0f4d1a6faf.scope: Deactivated successfully.
Jan 20 14:46:21 compute-0 sudo[304512]: pam_unix(sudo:session): session closed for user root
Jan 20 14:46:21 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 14:46:21 compute-0 ceph-mon[74360]: pgmap v1745: 321 pgs: 321 active+clean; 372 MiB data, 864 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.8 MiB/s wr, 164 op/s
Jan 20 14:46:21 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:46:21 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 14:46:21 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:46:21 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev d7379d86-136c-4bc2-aba2-f9e85bdaded6 does not exist
Jan 20 14:46:21 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev aa346e7f-9551-48c6-88e5-9b86f5ee6078 does not exist
Jan 20 14:46:21 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 5035514a-d4f2-41fe-a15b-f085a3572df7 does not exist
Jan 20 14:46:21 compute-0 sudo[304667]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:46:21 compute-0 sudo[304667]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:46:21 compute-0 sudo[304667]: pam_unix(sudo:session): session closed for user root
Jan 20 14:46:21 compute-0 sudo[304692]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 14:46:21 compute-0 sudo[304692]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:46:21 compute-0 sudo[304692]: pam_unix(sudo:session): session closed for user root
Jan 20 14:46:21 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1746: 321 pgs: 321 active+clean; 372 MiB data, 864 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 3.2 MiB/s wr, 208 op/s
Jan 20 14:46:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:46:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:46:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:46:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:46:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:46:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:46:22 compute-0 nova_compute[250018]: 2026-01-20 14:46:22.885 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:46:23 compute-0 nova_compute[250018]: 2026-01-20 14:46:23.236 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:46:23 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1747: 321 pgs: 321 active+clean; 374 MiB data, 864 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 1.9 MiB/s wr, 173 op/s
Jan 20 14:46:24 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:46:24 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a777d6f0 =====
Jan 20 14:46:24 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:46:24 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:46:24.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:46:24 compute-0 radosgw[93153]: ====== req done req=0x7fb5a777d6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:46:24 compute-0 radosgw[93153]: beast: 0x7fb5a777d6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:46:24.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:46:24 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e238 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:46:24 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:46:24 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:46:24 compute-0 ceph-mon[74360]: pgmap v1746: 321 pgs: 321 active+clean; 372 MiB data, 864 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 3.2 MiB/s wr, 208 op/s
Jan 20 14:46:24 compute-0 podman[304720]: 2026-01-20 14:46:24.462137113 +0000 UTC m=+0.052608576 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 14:46:24 compute-0 podman[304719]: 2026-01-20 14:46:24.489882795 +0000 UTC m=+0.080199874 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 20 14:46:25 compute-0 ceph-mon[74360]: pgmap v1747: 321 pgs: 321 active+clean; 374 MiB data, 864 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 1.9 MiB/s wr, 173 op/s
Jan 20 14:46:25 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1748: 321 pgs: 321 active+clean; 374 MiB data, 865 MiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 1.9 MiB/s wr, 228 op/s
Jan 20 14:46:26 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:46:26 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:46:26 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:46:26.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:46:26 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:46:26 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:46:26 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:46:26.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:46:27 compute-0 nova_compute[250018]: 2026-01-20 14:46:27.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:46:27 compute-0 ceph-mon[74360]: pgmap v1748: 321 pgs: 321 active+clean; 374 MiB data, 865 MiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 1.9 MiB/s wr, 228 op/s
Jan 20 14:46:27 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1749: 321 pgs: 321 active+clean; 374 MiB data, 865 MiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 502 KiB/s wr, 208 op/s
Jan 20 14:46:27 compute-0 nova_compute[250018]: 2026-01-20 14:46:27.888 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:46:27 compute-0 sudo[304768]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:46:27 compute-0 sudo[304768]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:46:27 compute-0 sudo[304768]: pam_unix(sudo:session): session closed for user root
Jan 20 14:46:28 compute-0 sudo[304793]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:46:28 compute-0 sudo[304793]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:46:28 compute-0 sudo[304793]: pam_unix(sudo:session): session closed for user root
Jan 20 14:46:28 compute-0 nova_compute[250018]: 2026-01-20 14:46:28.045 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:46:28 compute-0 nova_compute[250018]: 2026-01-20 14:46:28.239 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:46:28 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:46:28 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:46:28 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:46:28.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:46:28 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:46:28 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:46:28 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:46:28.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:46:29 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/3610685573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:46:29 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e238 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:46:29 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1750: 321 pgs: 321 active+clean; 374 MiB data, 865 MiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 55 KiB/s wr, 174 op/s
Jan 20 14:46:30 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:46:30 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:46:30 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:46:30.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:46:30 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:46:30 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:46:30 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:46:30.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:46:30 compute-0 ceph-mon[74360]: pgmap v1749: 321 pgs: 321 active+clean; 374 MiB data, 865 MiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 502 KiB/s wr, 208 op/s
Jan 20 14:46:30 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/3828261511' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:46:30 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/2519742879' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:46:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:46:30.759 160071 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:46:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:46:30.759 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:46:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:46:30.760 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:46:31 compute-0 ovn_controller[148666]: 2026-01-20T14:46:31Z|00035|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:3d:c6:ef 10.100.0.4
Jan 20 14:46:31 compute-0 ovn_controller[148666]: 2026-01-20T14:46:31Z|00036|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:3d:c6:ef 10.100.0.4
Jan 20 14:46:31 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1751: 321 pgs: 321 active+clean; 374 MiB data, 865 MiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 51 KiB/s wr, 153 op/s
Jan 20 14:46:31 compute-0 ceph-mon[74360]: pgmap v1750: 321 pgs: 321 active+clean; 374 MiB data, 865 MiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 55 KiB/s wr, 174 op/s
Jan 20 14:46:32 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:46:32 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:46:32 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:46:32.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:46:32 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:46:32 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:46:32 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:46:32.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:46:32 compute-0 nova_compute[250018]: 2026-01-20 14:46:32.396 250022 DEBUG nova.virt.libvirt.driver [None req-0328f0c2-97a9-41ed-a3a4-2295c543897c 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: 9280bbf5-da74-42c8-b8a3-a392cec3f921] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Jan 20 14:46:32 compute-0 nova_compute[250018]: 2026-01-20 14:46:32.889 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:46:33 compute-0 nova_compute[250018]: 2026-01-20 14:46:33.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:46:33 compute-0 ceph-mon[74360]: pgmap v1751: 321 pgs: 321 active+clean; 374 MiB data, 865 MiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 51 KiB/s wr, 153 op/s
Jan 20 14:46:33 compute-0 nova_compute[250018]: 2026-01-20 14:46:33.073 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:46:33 compute-0 nova_compute[250018]: 2026-01-20 14:46:33.073 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:46:33 compute-0 nova_compute[250018]: 2026-01-20 14:46:33.073 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:46:33 compute-0 nova_compute[250018]: 2026-01-20 14:46:33.074 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 14:46:33 compute-0 nova_compute[250018]: 2026-01-20 14:46:33.074 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:46:33 compute-0 nova_compute[250018]: 2026-01-20 14:46:33.240 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:46:33 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:46:33 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3514728644' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:46:33 compute-0 nova_compute[250018]: 2026-01-20 14:46:33.526 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:46:33 compute-0 nova_compute[250018]: 2026-01-20 14:46:33.598 250022 DEBUG nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] skipping disk for instance-00000060 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:46:33 compute-0 nova_compute[250018]: 2026-01-20 14:46:33.599 250022 DEBUG nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] skipping disk for instance-00000060 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:46:33 compute-0 nova_compute[250018]: 2026-01-20 14:46:33.753 250022 WARNING nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:46:33 compute-0 nova_compute[250018]: 2026-01-20 14:46:33.754 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4239MB free_disk=20.809669494628906GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 14:46:33 compute-0 nova_compute[250018]: 2026-01-20 14:46:33.754 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:46:33 compute-0 nova_compute[250018]: 2026-01-20 14:46:33.754 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:46:33 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1752: 321 pgs: 321 active+clean; 388 MiB data, 876 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 998 KiB/s wr, 165 op/s
Jan 20 14:46:33 compute-0 nova_compute[250018]: 2026-01-20 14:46:33.844 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Instance 9280bbf5-da74-42c8-b8a3-a392cec3f921 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 14:46:33 compute-0 nova_compute[250018]: 2026-01-20 14:46:33.844 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 14:46:33 compute-0 nova_compute[250018]: 2026-01-20 14:46:33.844 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 14:46:33 compute-0 nova_compute[250018]: 2026-01-20 14:46:33.886 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:46:34 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3514728644' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:46:34 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:46:34 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2440153925' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:46:34 compute-0 nova_compute[250018]: 2026-01-20 14:46:34.309 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:46:34 compute-0 nova_compute[250018]: 2026-01-20 14:46:34.315 250022 DEBUG nova.compute.provider_tree [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:46:34 compute-0 nova_compute[250018]: 2026-01-20 14:46:34.332 250022 DEBUG nova.scheduler.client.report [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:46:34 compute-0 nova_compute[250018]: 2026-01-20 14:46:34.354 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 14:46:34 compute-0 nova_compute[250018]: 2026-01-20 14:46:34.355 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.600s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:46:34 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:46:34 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:46:34 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:46:34.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:46:34 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:46:34 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:46:34 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:46:34.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:46:34 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e238 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:46:34 compute-0 kernel: tapbc4c8178-e3 (unregistering): left promiscuous mode
Jan 20 14:46:34 compute-0 NetworkManager[48960]: <info>  [1768920394.8324] device (tapbc4c8178-e3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 14:46:34 compute-0 ovn_controller[148666]: 2026-01-20T14:46:34Z|00307|binding|INFO|Releasing lport bc4c8178-e331-4054-89ce-aae2ffa6e0ee from this chassis (sb_readonly=0)
Jan 20 14:46:34 compute-0 ovn_controller[148666]: 2026-01-20T14:46:34Z|00308|binding|INFO|Setting lport bc4c8178-e331-4054-89ce-aae2ffa6e0ee down in Southbound
Jan 20 14:46:34 compute-0 ovn_controller[148666]: 2026-01-20T14:46:34Z|00309|binding|INFO|Removing iface tapbc4c8178-e3 ovn-installed in OVS
Jan 20 14:46:34 compute-0 nova_compute[250018]: 2026-01-20 14:46:34.842 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:46:34 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:46:34.848 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3d:c6:ef 10.100.0.4'], port_security=['fa:16:3e:3d:c6:ef 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '9280bbf5-da74-42c8-b8a3-a392cec3f921', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fbd5d614-a7d3-4563-913c-104506628e59', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3b31139b2a4e49cba5e7048febf901c4', 'neutron:revision_number': '4', 'neutron:security_group_ids': '117d6f57-074c-4b36-b375-42e0ab117254', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c42c6982-be52-495a-8746-42a46932572f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=bc4c8178-e331-4054-89ce-aae2ffa6e0ee) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:46:34 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:46:34.850 160071 INFO neutron.agent.ovn.metadata.agent [-] Port bc4c8178-e331-4054-89ce-aae2ffa6e0ee in datapath fbd5d614-a7d3-4563-913c-104506628e59 unbound from our chassis
Jan 20 14:46:34 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:46:34.852 160071 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network fbd5d614-a7d3-4563-913c-104506628e59, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 14:46:34 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:46:34.853 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[1f1d4ba8-b3e8-47ca-b7ab-8a44212f378c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:46:34 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:46:34.854 160071 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-fbd5d614-a7d3-4563-913c-104506628e59 namespace which is not needed anymore
Jan 20 14:46:34 compute-0 nova_compute[250018]: 2026-01-20 14:46:34.861 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:46:34 compute-0 systemd[1]: machine-qemu\x2d40\x2dinstance\x2d00000060.scope: Deactivated successfully.
Jan 20 14:46:34 compute-0 systemd[1]: machine-qemu\x2d40\x2dinstance\x2d00000060.scope: Consumed 14.429s CPU time.
Jan 20 14:46:34 compute-0 systemd-machined[216401]: Machine qemu-40-instance-00000060 terminated.
Jan 20 14:46:35 compute-0 nova_compute[250018]: 2026-01-20 14:46:35.134 250022 DEBUG nova.compute.manager [req-e2672bfa-34ac-48b6-b84c-c6e4696b303c req-23dd293a-b323-4a32-a891-f7bcfb9215b6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 9280bbf5-da74-42c8-b8a3-a392cec3f921] Received event network-vif-unplugged-bc4c8178-e331-4054-89ce-aae2ffa6e0ee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:46:35 compute-0 nova_compute[250018]: 2026-01-20 14:46:35.135 250022 DEBUG oslo_concurrency.lockutils [req-e2672bfa-34ac-48b6-b84c-c6e4696b303c req-23dd293a-b323-4a32-a891-f7bcfb9215b6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "9280bbf5-da74-42c8-b8a3-a392cec3f921-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:46:35 compute-0 nova_compute[250018]: 2026-01-20 14:46:35.135 250022 DEBUG oslo_concurrency.lockutils [req-e2672bfa-34ac-48b6-b84c-c6e4696b303c req-23dd293a-b323-4a32-a891-f7bcfb9215b6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "9280bbf5-da74-42c8-b8a3-a392cec3f921-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:46:35 compute-0 nova_compute[250018]: 2026-01-20 14:46:35.135 250022 DEBUG oslo_concurrency.lockutils [req-e2672bfa-34ac-48b6-b84c-c6e4696b303c req-23dd293a-b323-4a32-a891-f7bcfb9215b6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "9280bbf5-da74-42c8-b8a3-a392cec3f921-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:46:35 compute-0 nova_compute[250018]: 2026-01-20 14:46:35.136 250022 DEBUG nova.compute.manager [req-e2672bfa-34ac-48b6-b84c-c6e4696b303c req-23dd293a-b323-4a32-a891-f7bcfb9215b6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 9280bbf5-da74-42c8-b8a3-a392cec3f921] No waiting events found dispatching network-vif-unplugged-bc4c8178-e331-4054-89ce-aae2ffa6e0ee pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:46:35 compute-0 nova_compute[250018]: 2026-01-20 14:46:35.136 250022 WARNING nova.compute.manager [req-e2672bfa-34ac-48b6-b84c-c6e4696b303c req-23dd293a-b323-4a32-a891-f7bcfb9215b6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 9280bbf5-da74-42c8-b8a3-a392cec3f921] Received unexpected event network-vif-unplugged-bc4c8178-e331-4054-89ce-aae2ffa6e0ee for instance with vm_state active and task_state powering-off.
Jan 20 14:46:35 compute-0 nova_compute[250018]: 2026-01-20 14:46:35.354 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:46:35 compute-0 nova_compute[250018]: 2026-01-20 14:46:35.354 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:46:35 compute-0 neutron-haproxy-ovnmeta-fbd5d614-a7d3-4563-913c-104506628e59[304334]: [NOTICE]   (304344) : haproxy version is 2.8.14-c23fe91
Jan 20 14:46:35 compute-0 neutron-haproxy-ovnmeta-fbd5d614-a7d3-4563-913c-104506628e59[304334]: [NOTICE]   (304344) : path to executable is /usr/sbin/haproxy
Jan 20 14:46:35 compute-0 neutron-haproxy-ovnmeta-fbd5d614-a7d3-4563-913c-104506628e59[304334]: [WARNING]  (304344) : Exiting Master process...
Jan 20 14:46:35 compute-0 neutron-haproxy-ovnmeta-fbd5d614-a7d3-4563-913c-104506628e59[304334]: [WARNING]  (304344) : Exiting Master process...
Jan 20 14:46:35 compute-0 neutron-haproxy-ovnmeta-fbd5d614-a7d3-4563-913c-104506628e59[304334]: [ALERT]    (304344) : Current worker (304355) exited with code 143 (Terminated)
Jan 20 14:46:35 compute-0 neutron-haproxy-ovnmeta-fbd5d614-a7d3-4563-913c-104506628e59[304334]: [WARNING]  (304344) : All workers exited. Exiting... (0)
Jan 20 14:46:35 compute-0 systemd[1]: libpod-7fb12ee97b7c3f959341e9f77b692c51236d098d3eff33fb6065534e0fb181c7.scope: Deactivated successfully.
Jan 20 14:46:35 compute-0 nova_compute[250018]: 2026-01-20 14:46:35.408 250022 INFO nova.virt.libvirt.driver [None req-0328f0c2-97a9-41ed-a3a4-2295c543897c 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: 9280bbf5-da74-42c8-b8a3-a392cec3f921] Instance shutdown successfully after 13 seconds.
Jan 20 14:46:35 compute-0 podman[304890]: 2026-01-20 14:46:35.409502971 +0000 UTC m=+0.459761350 container died 7fb12ee97b7c3f959341e9f77b692c51236d098d3eff33fb6065534e0fb181c7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fbd5d614-a7d3-4563-913c-104506628e59, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 20 14:46:35 compute-0 nova_compute[250018]: 2026-01-20 14:46:35.412 250022 INFO nova.virt.libvirt.driver [-] [instance: 9280bbf5-da74-42c8-b8a3-a392cec3f921] Instance destroyed successfully.
Jan 20 14:46:35 compute-0 nova_compute[250018]: 2026-01-20 14:46:35.413 250022 DEBUG nova.objects.instance [None req-0328f0c2-97a9-41ed-a3a4-2295c543897c 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Lazy-loading 'numa_topology' on Instance uuid 9280bbf5-da74-42c8-b8a3-a392cec3f921 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:46:35 compute-0 nova_compute[250018]: 2026-01-20 14:46:35.424 250022 DEBUG nova.compute.manager [None req-0328f0c2-97a9-41ed-a3a4-2295c543897c 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: 9280bbf5-da74-42c8-b8a3-a392cec3f921] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:46:35 compute-0 nova_compute[250018]: 2026-01-20 14:46:35.463 250022 DEBUG oslo_concurrency.lockutils [None req-0328f0c2-97a9-41ed-a3a4-2295c543897c 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Lock "9280bbf5-da74-42c8-b8a3-a392cec3f921" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 14.668s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:46:35 compute-0 ceph-mon[74360]: pgmap v1752: 321 pgs: 321 active+clean; 388 MiB data, 876 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 998 KiB/s wr, 165 op/s
Jan 20 14:46:35 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/2440153925' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:46:35 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1753: 321 pgs: 321 active+clean; 424 MiB data, 929 MiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 4.0 MiB/s wr, 260 op/s
Jan 20 14:46:35 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7fb12ee97b7c3f959341e9f77b692c51236d098d3eff33fb6065534e0fb181c7-userdata-shm.mount: Deactivated successfully.
Jan 20 14:46:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-b0f49adb14476b2162e2f2280cb198b12c535d762c6d239cedd45e7a5823a927-merged.mount: Deactivated successfully.
Jan 20 14:46:36 compute-0 podman[304890]: 2026-01-20 14:46:36.133780068 +0000 UTC m=+1.184038447 container cleanup 7fb12ee97b7c3f959341e9f77b692c51236d098d3eff33fb6065534e0fb181c7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fbd5d614-a7d3-4563-913c-104506628e59, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 20 14:46:36 compute-0 systemd[1]: libpod-conmon-7fb12ee97b7c3f959341e9f77b692c51236d098d3eff33fb6065534e0fb181c7.scope: Deactivated successfully.
Jan 20 14:46:36 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:46:36 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:46:36 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:46:36.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:46:36 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:46:36 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:46:36 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:46:36.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:46:36 compute-0 podman[304931]: 2026-01-20 14:46:36.666860045 +0000 UTC m=+0.511793741 container remove 7fb12ee97b7c3f959341e9f77b692c51236d098d3eff33fb6065534e0fb181c7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fbd5d614-a7d3-4563-913c-104506628e59, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 20 14:46:36 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:46:36.672 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[aeb75849-b70a-4a0e-8c7e-d954759312ec]: (4, ('Tue Jan 20 02:46:34 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-fbd5d614-a7d3-4563-913c-104506628e59 (7fb12ee97b7c3f959341e9f77b692c51236d098d3eff33fb6065534e0fb181c7)\n7fb12ee97b7c3f959341e9f77b692c51236d098d3eff33fb6065534e0fb181c7\nTue Jan 20 02:46:36 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-fbd5d614-a7d3-4563-913c-104506628e59 (7fb12ee97b7c3f959341e9f77b692c51236d098d3eff33fb6065534e0fb181c7)\n7fb12ee97b7c3f959341e9f77b692c51236d098d3eff33fb6065534e0fb181c7\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:46:36 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:46:36.673 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[8e15d20e-7e48-440f-baf2-d2d5ae88c598]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:46:36 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:46:36.674 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfbd5d614-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:46:36 compute-0 kernel: tapfbd5d614-a0: left promiscuous mode
Jan 20 14:46:36 compute-0 nova_compute[250018]: 2026-01-20 14:46:36.720 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:46:36 compute-0 nova_compute[250018]: 2026-01-20 14:46:36.753 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:46:36 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:46:36.756 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[39370813-f4f8-461c-bd75-78baa842ed19]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:46:36 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:46:36.783 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[4d75d165-3753-4a6d-8aa8-30bcca9125d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:46:36 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:46:36.785 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[70360615-abf2-40f3-97ce-c1032a9576af]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:46:36 compute-0 nova_compute[250018]: 2026-01-20 14:46:36.803 250022 DEBUG oslo_concurrency.lockutils [None req-289116ba-819a-4a40-91cd-d77d0fef376d 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Acquiring lock "9280bbf5-da74-42c8-b8a3-a392cec3f921" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:46:36 compute-0 nova_compute[250018]: 2026-01-20 14:46:36.804 250022 DEBUG oslo_concurrency.lockutils [None req-289116ba-819a-4a40-91cd-d77d0fef376d 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Lock "9280bbf5-da74-42c8-b8a3-a392cec3f921" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:46:36 compute-0 nova_compute[250018]: 2026-01-20 14:46:36.804 250022 DEBUG oslo_concurrency.lockutils [None req-289116ba-819a-4a40-91cd-d77d0fef376d 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Acquiring lock "9280bbf5-da74-42c8-b8a3-a392cec3f921-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:46:36 compute-0 nova_compute[250018]: 2026-01-20 14:46:36.804 250022 DEBUG oslo_concurrency.lockutils [None req-289116ba-819a-4a40-91cd-d77d0fef376d 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Lock "9280bbf5-da74-42c8-b8a3-a392cec3f921-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:46:36 compute-0 nova_compute[250018]: 2026-01-20 14:46:36.804 250022 DEBUG oslo_concurrency.lockutils [None req-289116ba-819a-4a40-91cd-d77d0fef376d 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Lock "9280bbf5-da74-42c8-b8a3-a392cec3f921-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:46:36 compute-0 nova_compute[250018]: 2026-01-20 14:46:36.805 250022 INFO nova.compute.manager [None req-289116ba-819a-4a40-91cd-d77d0fef376d 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: 9280bbf5-da74-42c8-b8a3-a392cec3f921] Terminating instance
Jan 20 14:46:36 compute-0 nova_compute[250018]: 2026-01-20 14:46:36.806 250022 DEBUG nova.compute.manager [None req-289116ba-819a-4a40-91cd-d77d0fef376d 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: 9280bbf5-da74-42c8-b8a3-a392cec3f921] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 14:46:36 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:46:36.807 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[ca67a438-bdeb-4e3d-95e9-16469196d5d5]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 631622, 'reachable_time': 23512, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 304951, 'error': None, 'target': 'ovnmeta-fbd5d614-a7d3-4563-913c-104506628e59', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:46:36 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:46:36.811 160517 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-fbd5d614-a7d3-4563-913c-104506628e59 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 14:46:36 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:46:36.811 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[334bf4dc-6481-4f11-9881-e9399330c00f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:46:36 compute-0 systemd[1]: run-netns-ovnmeta\x2dfbd5d614\x2da7d3\x2d4563\x2d913c\x2d104506628e59.mount: Deactivated successfully.
Jan 20 14:46:36 compute-0 nova_compute[250018]: 2026-01-20 14:46:36.815 250022 INFO nova.virt.libvirt.driver [-] [instance: 9280bbf5-da74-42c8-b8a3-a392cec3f921] Instance destroyed successfully.
Jan 20 14:46:36 compute-0 nova_compute[250018]: 2026-01-20 14:46:36.815 250022 DEBUG nova.objects.instance [None req-289116ba-819a-4a40-91cd-d77d0fef376d 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Lazy-loading 'resources' on Instance uuid 9280bbf5-da74-42c8-b8a3-a392cec3f921 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:46:36 compute-0 nova_compute[250018]: 2026-01-20 14:46:36.828 250022 DEBUG nova.virt.libvirt.vif [None req-289116ba-819a-4a40-91cd-d77d0fef376d 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:46:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1057607620',display_name='tempest-DeleteServersTestJSON-server-1057607620',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1057607620',id=96,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:46:17Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='3b31139b2a4e49cba5e7048febf901c4',ramdisk_id='',reservation_id='r-p7j50y06',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-DeleteServersTestJSON-1162922273',owner_user_name='tempest-DeleteServersTestJSON-1162922273-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T14:46:35Z,user_data=None,user_id='37e9ef97fbe0448e9fbe32d48b66211f',uuid=9280bbf5-da74-42c8-b8a3-a392cec3f921,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "bc4c8178-e331-4054-89ce-aae2ffa6e0ee", "address": "fa:16:3e:3d:c6:ef", "network": {"id": "fbd5d614-a7d3-4563-913c-104506628e59", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-60721994-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b31139b2a4e49cba5e7048febf901c4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc4c8178-e3", "ovs_interfaceid": "bc4c8178-e331-4054-89ce-aae2ffa6e0ee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 14:46:36 compute-0 nova_compute[250018]: 2026-01-20 14:46:36.829 250022 DEBUG nova.network.os_vif_util [None req-289116ba-819a-4a40-91cd-d77d0fef376d 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Converting VIF {"id": "bc4c8178-e331-4054-89ce-aae2ffa6e0ee", "address": "fa:16:3e:3d:c6:ef", "network": {"id": "fbd5d614-a7d3-4563-913c-104506628e59", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-60721994-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b31139b2a4e49cba5e7048febf901c4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc4c8178-e3", "ovs_interfaceid": "bc4c8178-e331-4054-89ce-aae2ffa6e0ee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:46:36 compute-0 nova_compute[250018]: 2026-01-20 14:46:36.830 250022 DEBUG nova.network.os_vif_util [None req-289116ba-819a-4a40-91cd-d77d0fef376d 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3d:c6:ef,bridge_name='br-int',has_traffic_filtering=True,id=bc4c8178-e331-4054-89ce-aae2ffa6e0ee,network=Network(fbd5d614-a7d3-4563-913c-104506628e59),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbc4c8178-e3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:46:36 compute-0 nova_compute[250018]: 2026-01-20 14:46:36.831 250022 DEBUG os_vif [None req-289116ba-819a-4a40-91cd-d77d0fef376d 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3d:c6:ef,bridge_name='br-int',has_traffic_filtering=True,id=bc4c8178-e331-4054-89ce-aae2ffa6e0ee,network=Network(fbd5d614-a7d3-4563-913c-104506628e59),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbc4c8178-e3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 14:46:36 compute-0 nova_compute[250018]: 2026-01-20 14:46:36.832 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:46:36 compute-0 nova_compute[250018]: 2026-01-20 14:46:36.833 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbc4c8178-e3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:46:36 compute-0 nova_compute[250018]: 2026-01-20 14:46:36.834 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:46:36 compute-0 nova_compute[250018]: 2026-01-20 14:46:36.836 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:46:36 compute-0 nova_compute[250018]: 2026-01-20 14:46:36.838 250022 INFO os_vif [None req-289116ba-819a-4a40-91cd-d77d0fef376d 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3d:c6:ef,bridge_name='br-int',has_traffic_filtering=True,id=bc4c8178-e331-4054-89ce-aae2ffa6e0ee,network=Network(fbd5d614-a7d3-4563-913c-104506628e59),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbc4c8178-e3')
Jan 20 14:46:36 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/3432940160' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:46:36 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/4114297959' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:46:37 compute-0 nova_compute[250018]: 2026-01-20 14:46:37.192 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:46:37 compute-0 nova_compute[250018]: 2026-01-20 14:46:37.193 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:46:37 compute-0 nova_compute[250018]: 2026-01-20 14:46:37.193 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 14:46:37 compute-0 nova_compute[250018]: 2026-01-20 14:46:37.265 250022 DEBUG nova.compute.manager [req-bbbf1ebc-b7a9-44c6-9394-23afd4eb75dd req-78a539c9-855c-4428-9882-c1be1da3bf04 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 9280bbf5-da74-42c8-b8a3-a392cec3f921] Received event network-vif-plugged-bc4c8178-e331-4054-89ce-aae2ffa6e0ee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:46:37 compute-0 nova_compute[250018]: 2026-01-20 14:46:37.266 250022 DEBUG oslo_concurrency.lockutils [req-bbbf1ebc-b7a9-44c6-9394-23afd4eb75dd req-78a539c9-855c-4428-9882-c1be1da3bf04 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "9280bbf5-da74-42c8-b8a3-a392cec3f921-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:46:37 compute-0 nova_compute[250018]: 2026-01-20 14:46:37.266 250022 DEBUG oslo_concurrency.lockutils [req-bbbf1ebc-b7a9-44c6-9394-23afd4eb75dd req-78a539c9-855c-4428-9882-c1be1da3bf04 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "9280bbf5-da74-42c8-b8a3-a392cec3f921-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:46:37 compute-0 nova_compute[250018]: 2026-01-20 14:46:37.266 250022 DEBUG oslo_concurrency.lockutils [req-bbbf1ebc-b7a9-44c6-9394-23afd4eb75dd req-78a539c9-855c-4428-9882-c1be1da3bf04 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "9280bbf5-da74-42c8-b8a3-a392cec3f921-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:46:37 compute-0 nova_compute[250018]: 2026-01-20 14:46:37.266 250022 DEBUG nova.compute.manager [req-bbbf1ebc-b7a9-44c6-9394-23afd4eb75dd req-78a539c9-855c-4428-9882-c1be1da3bf04 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 9280bbf5-da74-42c8-b8a3-a392cec3f921] No waiting events found dispatching network-vif-plugged-bc4c8178-e331-4054-89ce-aae2ffa6e0ee pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:46:37 compute-0 nova_compute[250018]: 2026-01-20 14:46:37.267 250022 WARNING nova.compute.manager [req-bbbf1ebc-b7a9-44c6-9394-23afd4eb75dd req-78a539c9-855c-4428-9882-c1be1da3bf04 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 9280bbf5-da74-42c8-b8a3-a392cec3f921] Received unexpected event network-vif-plugged-bc4c8178-e331-4054-89ce-aae2ffa6e0ee for instance with vm_state stopped and task_state deleting.
Jan 20 14:46:37 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1754: 321 pgs: 321 active+clean; 434 MiB data, 950 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 4.3 MiB/s wr, 219 op/s
Jan 20 14:46:37 compute-0 nova_compute[250018]: 2026-01-20 14:46:37.893 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:46:38 compute-0 ceph-mon[74360]: pgmap v1753: 321 pgs: 321 active+clean; 424 MiB data, 929 MiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 4.0 MiB/s wr, 260 op/s
Jan 20 14:46:38 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/4020167201' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:46:38 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:46:38 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:46:38 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:46:38.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:46:38 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:46:38 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:46:38 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:46:38.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:46:38 compute-0 nova_compute[250018]: 2026-01-20 14:46:38.513 250022 INFO nova.virt.libvirt.driver [None req-289116ba-819a-4a40-91cd-d77d0fef376d 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: 9280bbf5-da74-42c8-b8a3-a392cec3f921] Deleting instance files /var/lib/nova/instances/9280bbf5-da74-42c8-b8a3-a392cec3f921_del
Jan 20 14:46:38 compute-0 nova_compute[250018]: 2026-01-20 14:46:38.513 250022 INFO nova.virt.libvirt.driver [None req-289116ba-819a-4a40-91cd-d77d0fef376d 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: 9280bbf5-da74-42c8-b8a3-a392cec3f921] Deletion of /var/lib/nova/instances/9280bbf5-da74-42c8-b8a3-a392cec3f921_del complete
Jan 20 14:46:38 compute-0 nova_compute[250018]: 2026-01-20 14:46:38.576 250022 INFO nova.compute.manager [None req-289116ba-819a-4a40-91cd-d77d0fef376d 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: 9280bbf5-da74-42c8-b8a3-a392cec3f921] Took 1.77 seconds to destroy the instance on the hypervisor.
Jan 20 14:46:38 compute-0 nova_compute[250018]: 2026-01-20 14:46:38.576 250022 DEBUG oslo.service.loopingcall [None req-289116ba-819a-4a40-91cd-d77d0fef376d 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 14:46:38 compute-0 nova_compute[250018]: 2026-01-20 14:46:38.577 250022 DEBUG nova.compute.manager [-] [instance: 9280bbf5-da74-42c8-b8a3-a392cec3f921] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 14:46:38 compute-0 nova_compute[250018]: 2026-01-20 14:46:38.577 250022 DEBUG nova.network.neutron [-] [instance: 9280bbf5-da74-42c8-b8a3-a392cec3f921] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 14:46:39 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e238 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:46:39 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1755: 321 pgs: 321 active+clean; 414 MiB data, 930 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 4.3 MiB/s wr, 213 op/s
Jan 20 14:46:39 compute-0 ceph-mon[74360]: pgmap v1754: 321 pgs: 321 active+clean; 434 MiB data, 950 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 4.3 MiB/s wr, 219 op/s
Jan 20 14:46:39 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/104493319' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:46:40 compute-0 nova_compute[250018]: 2026-01-20 14:46:40.052 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:46:40 compute-0 nova_compute[250018]: 2026-01-20 14:46:40.192 250022 DEBUG nova.network.neutron [-] [instance: 9280bbf5-da74-42c8-b8a3-a392cec3f921] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:46:40 compute-0 nova_compute[250018]: 2026-01-20 14:46:40.234 250022 INFO nova.compute.manager [-] [instance: 9280bbf5-da74-42c8-b8a3-a392cec3f921] Took 1.66 seconds to deallocate network for instance.
Jan 20 14:46:40 compute-0 nova_compute[250018]: 2026-01-20 14:46:40.247 250022 DEBUG nova.compute.manager [req-5ad737a8-9cd5-4a73-9dc5-77498b6dc7ef req-a6ebcbbe-a117-4dbd-b706-d53919905c07 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 9280bbf5-da74-42c8-b8a3-a392cec3f921] Received event network-vif-deleted-bc4c8178-e331-4054-89ce-aae2ffa6e0ee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:46:40 compute-0 nova_compute[250018]: 2026-01-20 14:46:40.288 250022 DEBUG oslo_concurrency.lockutils [None req-289116ba-819a-4a40-91cd-d77d0fef376d 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:46:40 compute-0 nova_compute[250018]: 2026-01-20 14:46:40.289 250022 DEBUG oslo_concurrency.lockutils [None req-289116ba-819a-4a40-91cd-d77d0fef376d 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:46:40 compute-0 nova_compute[250018]: 2026-01-20 14:46:40.346 250022 DEBUG oslo_concurrency.processutils [None req-289116ba-819a-4a40-91cd-d77d0fef376d 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:46:40 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:46:40 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:46:40 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:46:40.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:46:40 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:46:40 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:46:40 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:46:40.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:46:40 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:46:40 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1928440784' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:46:40 compute-0 nova_compute[250018]: 2026-01-20 14:46:40.824 250022 DEBUG oslo_concurrency.processutils [None req-289116ba-819a-4a40-91cd-d77d0fef376d 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:46:40 compute-0 nova_compute[250018]: 2026-01-20 14:46:40.833 250022 DEBUG nova.compute.provider_tree [None req-289116ba-819a-4a40-91cd-d77d0fef376d 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:46:40 compute-0 nova_compute[250018]: 2026-01-20 14:46:40.850 250022 DEBUG nova.scheduler.client.report [None req-289116ba-819a-4a40-91cd-d77d0fef376d 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:46:40 compute-0 nova_compute[250018]: 2026-01-20 14:46:40.875 250022 DEBUG oslo_concurrency.lockutils [None req-289116ba-819a-4a40-91cd-d77d0fef376d 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.586s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:46:40 compute-0 nova_compute[250018]: 2026-01-20 14:46:40.899 250022 INFO nova.scheduler.client.report [None req-289116ba-819a-4a40-91cd-d77d0fef376d 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Deleted allocations for instance 9280bbf5-da74-42c8-b8a3-a392cec3f921
Jan 20 14:46:40 compute-0 nova_compute[250018]: 2026-01-20 14:46:40.971 250022 DEBUG oslo_concurrency.lockutils [None req-289116ba-819a-4a40-91cd-d77d0fef376d 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Lock "9280bbf5-da74-42c8-b8a3-a392cec3f921" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.167s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:46:41 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1756: 321 pgs: 321 active+clean; 348 MiB data, 887 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 4.3 MiB/s wr, 240 op/s
Jan 20 14:46:41 compute-0 ceph-mon[74360]: pgmap v1755: 321 pgs: 321 active+clean; 414 MiB data, 930 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 4.3 MiB/s wr, 213 op/s
Jan 20 14:46:41 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/1928440784' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:46:41 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/2051009862' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:46:41 compute-0 nova_compute[250018]: 2026-01-20 14:46:41.835 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:46:42 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:46:42 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:46:42 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:46:42.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:46:42 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:46:42 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:46:42 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:46:42.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:46:42 compute-0 nova_compute[250018]: 2026-01-20 14:46:42.894 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:46:42 compute-0 ceph-mon[74360]: pgmap v1756: 321 pgs: 321 active+clean; 348 MiB data, 887 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 4.3 MiB/s wr, 240 op/s
Jan 20 14:46:43 compute-0 nova_compute[250018]: 2026-01-20 14:46:43.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:46:43 compute-0 nova_compute[250018]: 2026-01-20 14:46:43.051 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 14:46:43 compute-0 nova_compute[250018]: 2026-01-20 14:46:43.052 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 14:46:43 compute-0 nova_compute[250018]: 2026-01-20 14:46:43.079 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 20 14:46:43 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1757: 321 pgs: 321 active+clean; 281 MiB data, 857 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 4.3 MiB/s wr, 256 op/s
Jan 20 14:46:44 compute-0 nova_compute[250018]: 2026-01-20 14:46:44.181 250022 DEBUG oslo_concurrency.lockutils [None req-046445ac-1722-4413-b78e-f4e4e91d1409 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Acquiring lock "a7738b4c-0943-43c1-94f9-6777caabf0fe" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:46:44 compute-0 nova_compute[250018]: 2026-01-20 14:46:44.182 250022 DEBUG oslo_concurrency.lockutils [None req-046445ac-1722-4413-b78e-f4e4e91d1409 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Lock "a7738b4c-0943-43c1-94f9-6777caabf0fe" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:46:44 compute-0 nova_compute[250018]: 2026-01-20 14:46:44.205 250022 DEBUG nova.compute.manager [None req-046445ac-1722-4413-b78e-f4e4e91d1409 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: a7738b4c-0943-43c1-94f9-6777caabf0fe] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 14:46:44 compute-0 nova_compute[250018]: 2026-01-20 14:46:44.279 250022 DEBUG oslo_concurrency.lockutils [None req-046445ac-1722-4413-b78e-f4e4e91d1409 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:46:44 compute-0 nova_compute[250018]: 2026-01-20 14:46:44.280 250022 DEBUG oslo_concurrency.lockutils [None req-046445ac-1722-4413-b78e-f4e4e91d1409 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:46:44 compute-0 nova_compute[250018]: 2026-01-20 14:46:44.288 250022 DEBUG nova.virt.hardware [None req-046445ac-1722-4413-b78e-f4e4e91d1409 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 14:46:44 compute-0 nova_compute[250018]: 2026-01-20 14:46:44.289 250022 INFO nova.compute.claims [None req-046445ac-1722-4413-b78e-f4e4e91d1409 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: a7738b4c-0943-43c1-94f9-6777caabf0fe] Claim successful on node compute-0.ctlplane.example.com
Jan 20 14:46:44 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:46:44 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:46:44 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:46:44.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:46:44 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:46:44 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:46:44 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:46:44.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:46:44 compute-0 nova_compute[250018]: 2026-01-20 14:46:44.419 250022 DEBUG oslo_concurrency.processutils [None req-046445ac-1722-4413-b78e-f4e4e91d1409 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:46:44 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e238 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:46:44 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/2699107769' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:46:44 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:46:44 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3950123431' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:46:44 compute-0 nova_compute[250018]: 2026-01-20 14:46:44.917 250022 DEBUG oslo_concurrency.processutils [None req-046445ac-1722-4413-b78e-f4e4e91d1409 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:46:44 compute-0 nova_compute[250018]: 2026-01-20 14:46:44.926 250022 DEBUG nova.compute.provider_tree [None req-046445ac-1722-4413-b78e-f4e4e91d1409 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:46:44 compute-0 nova_compute[250018]: 2026-01-20 14:46:44.945 250022 DEBUG nova.scheduler.client.report [None req-046445ac-1722-4413-b78e-f4e4e91d1409 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:46:44 compute-0 nova_compute[250018]: 2026-01-20 14:46:44.971 250022 DEBUG oslo_concurrency.lockutils [None req-046445ac-1722-4413-b78e-f4e4e91d1409 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.691s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:46:44 compute-0 nova_compute[250018]: 2026-01-20 14:46:44.972 250022 DEBUG nova.compute.manager [None req-046445ac-1722-4413-b78e-f4e4e91d1409 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: a7738b4c-0943-43c1-94f9-6777caabf0fe] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 14:46:45 compute-0 nova_compute[250018]: 2026-01-20 14:46:45.018 250022 DEBUG nova.compute.manager [None req-046445ac-1722-4413-b78e-f4e4e91d1409 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: a7738b4c-0943-43c1-94f9-6777caabf0fe] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 20 14:46:45 compute-0 nova_compute[250018]: 2026-01-20 14:46:45.019 250022 DEBUG nova.network.neutron [None req-046445ac-1722-4413-b78e-f4e4e91d1409 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: a7738b4c-0943-43c1-94f9-6777caabf0fe] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 14:46:45 compute-0 nova_compute[250018]: 2026-01-20 14:46:45.040 250022 INFO nova.virt.libvirt.driver [None req-046445ac-1722-4413-b78e-f4e4e91d1409 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: a7738b4c-0943-43c1-94f9-6777caabf0fe] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 14:46:45 compute-0 nova_compute[250018]: 2026-01-20 14:46:45.058 250022 DEBUG nova.compute.manager [None req-046445ac-1722-4413-b78e-f4e4e91d1409 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: a7738b4c-0943-43c1-94f9-6777caabf0fe] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 14:46:45 compute-0 nova_compute[250018]: 2026-01-20 14:46:45.164 250022 DEBUG nova.compute.manager [None req-046445ac-1722-4413-b78e-f4e4e91d1409 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: a7738b4c-0943-43c1-94f9-6777caabf0fe] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 14:46:45 compute-0 nova_compute[250018]: 2026-01-20 14:46:45.165 250022 DEBUG nova.virt.libvirt.driver [None req-046445ac-1722-4413-b78e-f4e4e91d1409 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: a7738b4c-0943-43c1-94f9-6777caabf0fe] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 14:46:45 compute-0 nova_compute[250018]: 2026-01-20 14:46:45.165 250022 INFO nova.virt.libvirt.driver [None req-046445ac-1722-4413-b78e-f4e4e91d1409 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: a7738b4c-0943-43c1-94f9-6777caabf0fe] Creating image(s)
Jan 20 14:46:45 compute-0 nova_compute[250018]: 2026-01-20 14:46:45.188 250022 DEBUG nova.storage.rbd_utils [None req-046445ac-1722-4413-b78e-f4e4e91d1409 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] rbd image a7738b4c-0943-43c1-94f9-6777caabf0fe_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:46:45 compute-0 nova_compute[250018]: 2026-01-20 14:46:45.215 250022 DEBUG nova.storage.rbd_utils [None req-046445ac-1722-4413-b78e-f4e4e91d1409 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] rbd image a7738b4c-0943-43c1-94f9-6777caabf0fe_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:46:45 compute-0 nova_compute[250018]: 2026-01-20 14:46:45.245 250022 DEBUG nova.storage.rbd_utils [None req-046445ac-1722-4413-b78e-f4e4e91d1409 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] rbd image a7738b4c-0943-43c1-94f9-6777caabf0fe_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:46:45 compute-0 nova_compute[250018]: 2026-01-20 14:46:45.248 250022 DEBUG oslo_concurrency.processutils [None req-046445ac-1722-4413-b78e-f4e4e91d1409 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:46:45 compute-0 nova_compute[250018]: 2026-01-20 14:46:45.312 250022 DEBUG oslo_concurrency.processutils [None req-046445ac-1722-4413-b78e-f4e4e91d1409 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:46:45 compute-0 nova_compute[250018]: 2026-01-20 14:46:45.314 250022 DEBUG oslo_concurrency.lockutils [None req-046445ac-1722-4413-b78e-f4e4e91d1409 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Acquiring lock "82d5c1918fd7c974214c7a48c1793a7a82560462" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:46:45 compute-0 nova_compute[250018]: 2026-01-20 14:46:45.315 250022 DEBUG oslo_concurrency.lockutils [None req-046445ac-1722-4413-b78e-f4e4e91d1409 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:46:45 compute-0 nova_compute[250018]: 2026-01-20 14:46:45.315 250022 DEBUG oslo_concurrency.lockutils [None req-046445ac-1722-4413-b78e-f4e4e91d1409 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:46:45 compute-0 nova_compute[250018]: 2026-01-20 14:46:45.343 250022 DEBUG nova.storage.rbd_utils [None req-046445ac-1722-4413-b78e-f4e4e91d1409 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] rbd image a7738b4c-0943-43c1-94f9-6777caabf0fe_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:46:45 compute-0 nova_compute[250018]: 2026-01-20 14:46:45.347 250022 DEBUG oslo_concurrency.processutils [None req-046445ac-1722-4413-b78e-f4e4e91d1409 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 a7738b4c-0943-43c1-94f9-6777caabf0fe_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:46:45 compute-0 nova_compute[250018]: 2026-01-20 14:46:45.635 250022 DEBUG oslo_concurrency.processutils [None req-046445ac-1722-4413-b78e-f4e4e91d1409 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 a7738b4c-0943-43c1-94f9-6777caabf0fe_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.288s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:46:45 compute-0 nova_compute[250018]: 2026-01-20 14:46:45.669 250022 DEBUG nova.policy [None req-046445ac-1722-4413-b78e-f4e4e91d1409 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '37e9ef97fbe0448e9fbe32d48b66211f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3b31139b2a4e49cba5e7048febf901c4', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 20 14:46:45 compute-0 nova_compute[250018]: 2026-01-20 14:46:45.712 250022 DEBUG nova.storage.rbd_utils [None req-046445ac-1722-4413-b78e-f4e4e91d1409 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] resizing rbd image a7738b4c-0943-43c1-94f9-6777caabf0fe_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 20 14:46:45 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1758: 321 pgs: 321 active+clean; 324 MiB data, 878 MiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 5.1 MiB/s wr, 227 op/s
Jan 20 14:46:45 compute-0 nova_compute[250018]: 2026-01-20 14:46:45.821 250022 DEBUG nova.objects.instance [None req-046445ac-1722-4413-b78e-f4e4e91d1409 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Lazy-loading 'migration_context' on Instance uuid a7738b4c-0943-43c1-94f9-6777caabf0fe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:46:45 compute-0 ceph-mon[74360]: pgmap v1757: 321 pgs: 321 active+clean; 281 MiB data, 857 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 4.3 MiB/s wr, 256 op/s
Jan 20 14:46:45 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/3551339405' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:46:45 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3950123431' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:46:45 compute-0 nova_compute[250018]: 2026-01-20 14:46:45.967 250022 DEBUG nova.virt.libvirt.driver [None req-046445ac-1722-4413-b78e-f4e4e91d1409 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: a7738b4c-0943-43c1-94f9-6777caabf0fe] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 14:46:45 compute-0 nova_compute[250018]: 2026-01-20 14:46:45.968 250022 DEBUG nova.virt.libvirt.driver [None req-046445ac-1722-4413-b78e-f4e4e91d1409 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: a7738b4c-0943-43c1-94f9-6777caabf0fe] Ensure instance console log exists: /var/lib/nova/instances/a7738b4c-0943-43c1-94f9-6777caabf0fe/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 14:46:45 compute-0 nova_compute[250018]: 2026-01-20 14:46:45.968 250022 DEBUG oslo_concurrency.lockutils [None req-046445ac-1722-4413-b78e-f4e4e91d1409 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:46:45 compute-0 nova_compute[250018]: 2026-01-20 14:46:45.969 250022 DEBUG oslo_concurrency.lockutils [None req-046445ac-1722-4413-b78e-f4e4e91d1409 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:46:45 compute-0 nova_compute[250018]: 2026-01-20 14:46:45.969 250022 DEBUG oslo_concurrency.lockutils [None req-046445ac-1722-4413-b78e-f4e4e91d1409 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:46:46 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:46:46 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:46:46 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:46:46.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:46:46 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:46:46 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:46:46 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:46:46.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:46:46 compute-0 nova_compute[250018]: 2026-01-20 14:46:46.837 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:46:46 compute-0 nova_compute[250018]: 2026-01-20 14:46:46.845 250022 DEBUG nova.network.neutron [None req-046445ac-1722-4413-b78e-f4e4e91d1409 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: a7738b4c-0943-43c1-94f9-6777caabf0fe] Successfully created port: d746ce41-d358-47ba-a788-1e026544b773 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 20 14:46:46 compute-0 ceph-mon[74360]: pgmap v1758: 321 pgs: 321 active+clean; 324 MiB data, 878 MiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 5.1 MiB/s wr, 227 op/s
Jan 20 14:46:47 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1759: 321 pgs: 321 active+clean; 336 MiB data, 883 MiB used, 20 GiB / 21 GiB avail; 516 KiB/s rd, 2.5 MiB/s wr, 134 op/s
Jan 20 14:46:47 compute-0 nova_compute[250018]: 2026-01-20 14:46:47.869 250022 DEBUG nova.network.neutron [None req-046445ac-1722-4413-b78e-f4e4e91d1409 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: a7738b4c-0943-43c1-94f9-6777caabf0fe] Successfully updated port: d746ce41-d358-47ba-a788-1e026544b773 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 20 14:46:47 compute-0 nova_compute[250018]: 2026-01-20 14:46:47.888 250022 DEBUG oslo_concurrency.lockutils [None req-046445ac-1722-4413-b78e-f4e4e91d1409 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Acquiring lock "refresh_cache-a7738b4c-0943-43c1-94f9-6777caabf0fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:46:47 compute-0 nova_compute[250018]: 2026-01-20 14:46:47.889 250022 DEBUG oslo_concurrency.lockutils [None req-046445ac-1722-4413-b78e-f4e4e91d1409 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Acquired lock "refresh_cache-a7738b4c-0943-43c1-94f9-6777caabf0fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:46:47 compute-0 nova_compute[250018]: 2026-01-20 14:46:47.889 250022 DEBUG nova.network.neutron [None req-046445ac-1722-4413-b78e-f4e4e91d1409 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: a7738b4c-0943-43c1-94f9-6777caabf0fe] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:46:47 compute-0 nova_compute[250018]: 2026-01-20 14:46:47.896 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:46:47 compute-0 nova_compute[250018]: 2026-01-20 14:46:47.978 250022 DEBUG nova.compute.manager [req-48e4e084-2032-44e0-9bf4-e2e01813f8fe req-33444433-8c51-4020-8c40-d1f836a519a3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: a7738b4c-0943-43c1-94f9-6777caabf0fe] Received event network-changed-d746ce41-d358-47ba-a788-1e026544b773 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:46:47 compute-0 nova_compute[250018]: 2026-01-20 14:46:47.978 250022 DEBUG nova.compute.manager [req-48e4e084-2032-44e0-9bf4-e2e01813f8fe req-33444433-8c51-4020-8c40-d1f836a519a3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: a7738b4c-0943-43c1-94f9-6777caabf0fe] Refreshing instance network info cache due to event network-changed-d746ce41-d358-47ba-a788-1e026544b773. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:46:47 compute-0 nova_compute[250018]: 2026-01-20 14:46:47.979 250022 DEBUG oslo_concurrency.lockutils [req-48e4e084-2032-44e0-9bf4-e2e01813f8fe req-33444433-8c51-4020-8c40-d1f836a519a3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-a7738b4c-0943-43c1-94f9-6777caabf0fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:46:48 compute-0 nova_compute[250018]: 2026-01-20 14:46:48.073 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:46:48 compute-0 nova_compute[250018]: 2026-01-20 14:46:48.095 250022 DEBUG nova.network.neutron [None req-046445ac-1722-4413-b78e-f4e4e91d1409 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: a7738b4c-0943-43c1-94f9-6777caabf0fe] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 14:46:48 compute-0 sudo[305187]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:46:48 compute-0 sudo[305187]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:46:48 compute-0 sudo[305187]: pam_unix(sudo:session): session closed for user root
Jan 20 14:46:48 compute-0 sudo[305212]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:46:48 compute-0 sudo[305212]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:46:48 compute-0 sudo[305212]: pam_unix(sudo:session): session closed for user root
Jan 20 14:46:48 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:46:48 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:46:48 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:46:48.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:46:48 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:46:48 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:46:48 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:46:48.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:46:48 compute-0 nova_compute[250018]: 2026-01-20 14:46:48.922 250022 DEBUG nova.network.neutron [None req-046445ac-1722-4413-b78e-f4e4e91d1409 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: a7738b4c-0943-43c1-94f9-6777caabf0fe] Updating instance_info_cache with network_info: [{"id": "d746ce41-d358-47ba-a788-1e026544b773", "address": "fa:16:3e:e8:3f:c4", "network": {"id": "fbd5d614-a7d3-4563-913c-104506628e59", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-60721994-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b31139b2a4e49cba5e7048febf901c4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd746ce41-d3", "ovs_interfaceid": "d746ce41-d358-47ba-a788-1e026544b773", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:46:48 compute-0 nova_compute[250018]: 2026-01-20 14:46:48.962 250022 DEBUG oslo_concurrency.lockutils [None req-046445ac-1722-4413-b78e-f4e4e91d1409 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Releasing lock "refresh_cache-a7738b4c-0943-43c1-94f9-6777caabf0fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:46:48 compute-0 nova_compute[250018]: 2026-01-20 14:46:48.962 250022 DEBUG nova.compute.manager [None req-046445ac-1722-4413-b78e-f4e4e91d1409 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: a7738b4c-0943-43c1-94f9-6777caabf0fe] Instance network_info: |[{"id": "d746ce41-d358-47ba-a788-1e026544b773", "address": "fa:16:3e:e8:3f:c4", "network": {"id": "fbd5d614-a7d3-4563-913c-104506628e59", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-60721994-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b31139b2a4e49cba5e7048febf901c4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd746ce41-d3", "ovs_interfaceid": "d746ce41-d358-47ba-a788-1e026544b773", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 20 14:46:48 compute-0 nova_compute[250018]: 2026-01-20 14:46:48.964 250022 DEBUG oslo_concurrency.lockutils [req-48e4e084-2032-44e0-9bf4-e2e01813f8fe req-33444433-8c51-4020-8c40-d1f836a519a3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-a7738b4c-0943-43c1-94f9-6777caabf0fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:46:48 compute-0 nova_compute[250018]: 2026-01-20 14:46:48.964 250022 DEBUG nova.network.neutron [req-48e4e084-2032-44e0-9bf4-e2e01813f8fe req-33444433-8c51-4020-8c40-d1f836a519a3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: a7738b4c-0943-43c1-94f9-6777caabf0fe] Refreshing network info cache for port d746ce41-d358-47ba-a788-1e026544b773 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:46:48 compute-0 nova_compute[250018]: 2026-01-20 14:46:48.969 250022 DEBUG nova.virt.libvirt.driver [None req-046445ac-1722-4413-b78e-f4e4e91d1409 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: a7738b4c-0943-43c1-94f9-6777caabf0fe] Start _get_guest_xml network_info=[{"id": "d746ce41-d358-47ba-a788-1e026544b773", "address": "fa:16:3e:e8:3f:c4", "network": {"id": "fbd5d614-a7d3-4563-913c-104506628e59", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-60721994-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b31139b2a4e49cba5e7048febf901c4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd746ce41-d3", "ovs_interfaceid": "d746ce41-d358-47ba-a788-1e026544b773", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encrypted': False, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'guest_format': None, 'size': 0, 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 14:46:48 compute-0 nova_compute[250018]: 2026-01-20 14:46:48.976 250022 WARNING nova.virt.libvirt.driver [None req-046445ac-1722-4413-b78e-f4e4e91d1409 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:46:48 compute-0 nova_compute[250018]: 2026-01-20 14:46:48.981 250022 DEBUG nova.virt.libvirt.host [None req-046445ac-1722-4413-b78e-f4e4e91d1409 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 14:46:48 compute-0 nova_compute[250018]: 2026-01-20 14:46:48.982 250022 DEBUG nova.virt.libvirt.host [None req-046445ac-1722-4413-b78e-f4e4e91d1409 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 14:46:48 compute-0 nova_compute[250018]: 2026-01-20 14:46:48.991 250022 DEBUG nova.virt.libvirt.host [None req-046445ac-1722-4413-b78e-f4e4e91d1409 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 14:46:48 compute-0 nova_compute[250018]: 2026-01-20 14:46:48.992 250022 DEBUG nova.virt.libvirt.host [None req-046445ac-1722-4413-b78e-f4e4e91d1409 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 14:46:48 compute-0 nova_compute[250018]: 2026-01-20 14:46:48.994 250022 DEBUG nova.virt.libvirt.driver [None req-046445ac-1722-4413-b78e-f4e4e91d1409 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 14:46:48 compute-0 nova_compute[250018]: 2026-01-20 14:46:48.995 250022 DEBUG nova.virt.hardware [None req-046445ac-1722-4413-b78e-f4e4e91d1409 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 14:46:48 compute-0 nova_compute[250018]: 2026-01-20 14:46:48.996 250022 DEBUG nova.virt.hardware [None req-046445ac-1722-4413-b78e-f4e4e91d1409 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 14:46:48 compute-0 nova_compute[250018]: 2026-01-20 14:46:48.996 250022 DEBUG nova.virt.hardware [None req-046445ac-1722-4413-b78e-f4e4e91d1409 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 14:46:48 compute-0 nova_compute[250018]: 2026-01-20 14:46:48.997 250022 DEBUG nova.virt.hardware [None req-046445ac-1722-4413-b78e-f4e4e91d1409 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 14:46:48 compute-0 nova_compute[250018]: 2026-01-20 14:46:48.997 250022 DEBUG nova.virt.hardware [None req-046445ac-1722-4413-b78e-f4e4e91d1409 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 14:46:48 compute-0 nova_compute[250018]: 2026-01-20 14:46:48.998 250022 DEBUG nova.virt.hardware [None req-046445ac-1722-4413-b78e-f4e4e91d1409 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 14:46:48 compute-0 nova_compute[250018]: 2026-01-20 14:46:48.998 250022 DEBUG nova.virt.hardware [None req-046445ac-1722-4413-b78e-f4e4e91d1409 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 14:46:48 compute-0 nova_compute[250018]: 2026-01-20 14:46:48.999 250022 DEBUG nova.virt.hardware [None req-046445ac-1722-4413-b78e-f4e4e91d1409 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 14:46:49 compute-0 nova_compute[250018]: 2026-01-20 14:46:48.999 250022 DEBUG nova.virt.hardware [None req-046445ac-1722-4413-b78e-f4e4e91d1409 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 14:46:49 compute-0 nova_compute[250018]: 2026-01-20 14:46:49.000 250022 DEBUG nova.virt.hardware [None req-046445ac-1722-4413-b78e-f4e4e91d1409 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 14:46:49 compute-0 nova_compute[250018]: 2026-01-20 14:46:49.001 250022 DEBUG nova.virt.hardware [None req-046445ac-1722-4413-b78e-f4e4e91d1409 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 14:46:49 compute-0 nova_compute[250018]: 2026-01-20 14:46:49.005 250022 DEBUG oslo_concurrency.processutils [None req-046445ac-1722-4413-b78e-f4e4e91d1409 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:46:49 compute-0 ceph-mon[74360]: pgmap v1759: 321 pgs: 321 active+clean; 336 MiB data, 883 MiB used, 20 GiB / 21 GiB avail; 516 KiB/s rd, 2.5 MiB/s wr, 134 op/s
Jan 20 14:46:49 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e238 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:46:49 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:46:49 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3506616449' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:46:49 compute-0 nova_compute[250018]: 2026-01-20 14:46:49.504 250022 DEBUG oslo_concurrency.processutils [None req-046445ac-1722-4413-b78e-f4e4e91d1409 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:46:49 compute-0 nova_compute[250018]: 2026-01-20 14:46:49.531 250022 DEBUG nova.storage.rbd_utils [None req-046445ac-1722-4413-b78e-f4e4e91d1409 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] rbd image a7738b4c-0943-43c1-94f9-6777caabf0fe_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:46:49 compute-0 nova_compute[250018]: 2026-01-20 14:46:49.535 250022 DEBUG oslo_concurrency.processutils [None req-046445ac-1722-4413-b78e-f4e4e91d1409 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:46:49 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1760: 321 pgs: 321 active+clean; 347 MiB data, 891 MiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 2.9 MiB/s wr, 159 op/s
Jan 20 14:46:49 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:46:49 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3845254384' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:46:49 compute-0 nova_compute[250018]: 2026-01-20 14:46:49.975 250022 DEBUG oslo_concurrency.processutils [None req-046445ac-1722-4413-b78e-f4e4e91d1409 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:46:49 compute-0 nova_compute[250018]: 2026-01-20 14:46:49.977 250022 DEBUG nova.virt.libvirt.vif [None req-046445ac-1722-4413-b78e-f4e4e91d1409 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:46:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-261180473',display_name='tempest-DeleteServersTestJSON-server-261180473',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-261180473',id=98,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3b31139b2a4e49cba5e7048febf901c4',ramdisk_id='',reservation_id='r-j3knx1cg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-1162922273',owner_user_name='tempest-DeleteServersTestJSON-1162922273-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:46:45Z,user_data=None,user_id='37e9ef97fbe0448e9fbe32d48b66211f',uuid=a7738b4c-0943-43c1-94f9-6777caabf0fe,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d746ce41-d358-47ba-a788-1e026544b773", "address": "fa:16:3e:e8:3f:c4", "network": {"id": "fbd5d614-a7d3-4563-913c-104506628e59", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-60721994-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b31139b2a4e49cba5e7048febf901c4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd746ce41-d3", "ovs_interfaceid": "d746ce41-d358-47ba-a788-1e026544b773", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 14:46:49 compute-0 nova_compute[250018]: 2026-01-20 14:46:49.977 250022 DEBUG nova.network.os_vif_util [None req-046445ac-1722-4413-b78e-f4e4e91d1409 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Converting VIF {"id": "d746ce41-d358-47ba-a788-1e026544b773", "address": "fa:16:3e:e8:3f:c4", "network": {"id": "fbd5d614-a7d3-4563-913c-104506628e59", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-60721994-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b31139b2a4e49cba5e7048febf901c4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd746ce41-d3", "ovs_interfaceid": "d746ce41-d358-47ba-a788-1e026544b773", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:46:49 compute-0 nova_compute[250018]: 2026-01-20 14:46:49.978 250022 DEBUG nova.network.os_vif_util [None req-046445ac-1722-4413-b78e-f4e4e91d1409 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e8:3f:c4,bridge_name='br-int',has_traffic_filtering=True,id=d746ce41-d358-47ba-a788-1e026544b773,network=Network(fbd5d614-a7d3-4563-913c-104506628e59),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd746ce41-d3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:46:49 compute-0 nova_compute[250018]: 2026-01-20 14:46:49.980 250022 DEBUG nova.objects.instance [None req-046445ac-1722-4413-b78e-f4e4e91d1409 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Lazy-loading 'pci_devices' on Instance uuid a7738b4c-0943-43c1-94f9-6777caabf0fe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:46:50 compute-0 nova_compute[250018]: 2026-01-20 14:46:50.026 250022 DEBUG nova.virt.libvirt.driver [None req-046445ac-1722-4413-b78e-f4e4e91d1409 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: a7738b4c-0943-43c1-94f9-6777caabf0fe] End _get_guest_xml xml=<domain type="kvm">
Jan 20 14:46:50 compute-0 nova_compute[250018]:   <uuid>a7738b4c-0943-43c1-94f9-6777caabf0fe</uuid>
Jan 20 14:46:50 compute-0 nova_compute[250018]:   <name>instance-00000062</name>
Jan 20 14:46:50 compute-0 nova_compute[250018]:   <memory>131072</memory>
Jan 20 14:46:50 compute-0 nova_compute[250018]:   <vcpu>1</vcpu>
Jan 20 14:46:50 compute-0 nova_compute[250018]:   <metadata>
Jan 20 14:46:50 compute-0 nova_compute[250018]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 14:46:50 compute-0 nova_compute[250018]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 14:46:50 compute-0 nova_compute[250018]:       <nova:name>tempest-DeleteServersTestJSON-server-261180473</nova:name>
Jan 20 14:46:50 compute-0 nova_compute[250018]:       <nova:creationTime>2026-01-20 14:46:48</nova:creationTime>
Jan 20 14:46:50 compute-0 nova_compute[250018]:       <nova:flavor name="m1.nano">
Jan 20 14:46:50 compute-0 nova_compute[250018]:         <nova:memory>128</nova:memory>
Jan 20 14:46:50 compute-0 nova_compute[250018]:         <nova:disk>1</nova:disk>
Jan 20 14:46:50 compute-0 nova_compute[250018]:         <nova:swap>0</nova:swap>
Jan 20 14:46:50 compute-0 nova_compute[250018]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 14:46:50 compute-0 nova_compute[250018]:         <nova:vcpus>1</nova:vcpus>
Jan 20 14:46:50 compute-0 nova_compute[250018]:       </nova:flavor>
Jan 20 14:46:50 compute-0 nova_compute[250018]:       <nova:owner>
Jan 20 14:46:50 compute-0 nova_compute[250018]:         <nova:user uuid="37e9ef97fbe0448e9fbe32d48b66211f">tempest-DeleteServersTestJSON-1162922273-project-member</nova:user>
Jan 20 14:46:50 compute-0 nova_compute[250018]:         <nova:project uuid="3b31139b2a4e49cba5e7048febf901c4">tempest-DeleteServersTestJSON-1162922273</nova:project>
Jan 20 14:46:50 compute-0 nova_compute[250018]:       </nova:owner>
Jan 20 14:46:50 compute-0 nova_compute[250018]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 14:46:50 compute-0 nova_compute[250018]:       <nova:ports>
Jan 20 14:46:50 compute-0 nova_compute[250018]:         <nova:port uuid="d746ce41-d358-47ba-a788-1e026544b773">
Jan 20 14:46:50 compute-0 nova_compute[250018]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Jan 20 14:46:50 compute-0 nova_compute[250018]:         </nova:port>
Jan 20 14:46:50 compute-0 nova_compute[250018]:       </nova:ports>
Jan 20 14:46:50 compute-0 nova_compute[250018]:     </nova:instance>
Jan 20 14:46:50 compute-0 nova_compute[250018]:   </metadata>
Jan 20 14:46:50 compute-0 nova_compute[250018]:   <sysinfo type="smbios">
Jan 20 14:46:50 compute-0 nova_compute[250018]:     <system>
Jan 20 14:46:50 compute-0 nova_compute[250018]:       <entry name="manufacturer">RDO</entry>
Jan 20 14:46:50 compute-0 nova_compute[250018]:       <entry name="product">OpenStack Compute</entry>
Jan 20 14:46:50 compute-0 nova_compute[250018]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 14:46:50 compute-0 nova_compute[250018]:       <entry name="serial">a7738b4c-0943-43c1-94f9-6777caabf0fe</entry>
Jan 20 14:46:50 compute-0 nova_compute[250018]:       <entry name="uuid">a7738b4c-0943-43c1-94f9-6777caabf0fe</entry>
Jan 20 14:46:50 compute-0 nova_compute[250018]:       <entry name="family">Virtual Machine</entry>
Jan 20 14:46:50 compute-0 nova_compute[250018]:     </system>
Jan 20 14:46:50 compute-0 nova_compute[250018]:   </sysinfo>
Jan 20 14:46:50 compute-0 nova_compute[250018]:   <os>
Jan 20 14:46:50 compute-0 nova_compute[250018]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 14:46:50 compute-0 nova_compute[250018]:     <boot dev="hd"/>
Jan 20 14:46:50 compute-0 nova_compute[250018]:     <smbios mode="sysinfo"/>
Jan 20 14:46:50 compute-0 nova_compute[250018]:   </os>
Jan 20 14:46:50 compute-0 nova_compute[250018]:   <features>
Jan 20 14:46:50 compute-0 nova_compute[250018]:     <acpi/>
Jan 20 14:46:50 compute-0 nova_compute[250018]:     <apic/>
Jan 20 14:46:50 compute-0 nova_compute[250018]:     <vmcoreinfo/>
Jan 20 14:46:50 compute-0 nova_compute[250018]:   </features>
Jan 20 14:46:50 compute-0 nova_compute[250018]:   <clock offset="utc">
Jan 20 14:46:50 compute-0 nova_compute[250018]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 14:46:50 compute-0 nova_compute[250018]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 14:46:50 compute-0 nova_compute[250018]:     <timer name="hpet" present="no"/>
Jan 20 14:46:50 compute-0 nova_compute[250018]:   </clock>
Jan 20 14:46:50 compute-0 nova_compute[250018]:   <cpu mode="custom" match="exact">
Jan 20 14:46:50 compute-0 nova_compute[250018]:     <model>Nehalem</model>
Jan 20 14:46:50 compute-0 nova_compute[250018]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 14:46:50 compute-0 nova_compute[250018]:   </cpu>
Jan 20 14:46:50 compute-0 nova_compute[250018]:   <devices>
Jan 20 14:46:50 compute-0 nova_compute[250018]:     <disk type="network" device="disk">
Jan 20 14:46:50 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 14:46:50 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/a7738b4c-0943-43c1-94f9-6777caabf0fe_disk">
Jan 20 14:46:50 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:46:50 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:46:50 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:46:50 compute-0 nova_compute[250018]:       </source>
Jan 20 14:46:50 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 14:46:50 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:46:50 compute-0 nova_compute[250018]:       </auth>
Jan 20 14:46:50 compute-0 nova_compute[250018]:       <target dev="vda" bus="virtio"/>
Jan 20 14:46:50 compute-0 nova_compute[250018]:     </disk>
Jan 20 14:46:50 compute-0 nova_compute[250018]:     <disk type="network" device="cdrom">
Jan 20 14:46:50 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 14:46:50 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/a7738b4c-0943-43c1-94f9-6777caabf0fe_disk.config">
Jan 20 14:46:50 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:46:50 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:46:50 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:46:50 compute-0 nova_compute[250018]:       </source>
Jan 20 14:46:50 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 14:46:50 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:46:50 compute-0 nova_compute[250018]:       </auth>
Jan 20 14:46:50 compute-0 nova_compute[250018]:       <target dev="sda" bus="sata"/>
Jan 20 14:46:50 compute-0 nova_compute[250018]:     </disk>
Jan 20 14:46:50 compute-0 nova_compute[250018]:     <interface type="ethernet">
Jan 20 14:46:50 compute-0 nova_compute[250018]:       <mac address="fa:16:3e:e8:3f:c4"/>
Jan 20 14:46:50 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 14:46:50 compute-0 nova_compute[250018]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 14:46:50 compute-0 nova_compute[250018]:       <mtu size="1442"/>
Jan 20 14:46:50 compute-0 nova_compute[250018]:       <target dev="tapd746ce41-d3"/>
Jan 20 14:46:50 compute-0 nova_compute[250018]:     </interface>
Jan 20 14:46:50 compute-0 nova_compute[250018]:     <serial type="pty">
Jan 20 14:46:50 compute-0 nova_compute[250018]:       <log file="/var/lib/nova/instances/a7738b4c-0943-43c1-94f9-6777caabf0fe/console.log" append="off"/>
Jan 20 14:46:50 compute-0 nova_compute[250018]:     </serial>
Jan 20 14:46:50 compute-0 nova_compute[250018]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 14:46:50 compute-0 nova_compute[250018]:     <video>
Jan 20 14:46:50 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 14:46:50 compute-0 nova_compute[250018]:     </video>
Jan 20 14:46:50 compute-0 nova_compute[250018]:     <input type="tablet" bus="usb"/>
Jan 20 14:46:50 compute-0 nova_compute[250018]:     <rng model="virtio">
Jan 20 14:46:50 compute-0 nova_compute[250018]:       <backend model="random">/dev/urandom</backend>
Jan 20 14:46:50 compute-0 nova_compute[250018]:     </rng>
Jan 20 14:46:50 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root"/>
Jan 20 14:46:50 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:46:50 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:46:50 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:46:50 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:46:50 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:46:50 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:46:50 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:46:50 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:46:50 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:46:50 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:46:50 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:46:50 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:46:50 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:46:50 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:46:50 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:46:50 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:46:50 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:46:50 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:46:50 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:46:50 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:46:50 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:46:50 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:46:50 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:46:50 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:46:50 compute-0 nova_compute[250018]:     <controller type="usb" index="0"/>
Jan 20 14:46:50 compute-0 nova_compute[250018]:     <memballoon model="virtio">
Jan 20 14:46:50 compute-0 nova_compute[250018]:       <stats period="10"/>
Jan 20 14:46:50 compute-0 nova_compute[250018]:     </memballoon>
Jan 20 14:46:50 compute-0 nova_compute[250018]:   </devices>
Jan 20 14:46:50 compute-0 nova_compute[250018]: </domain>
Jan 20 14:46:50 compute-0 nova_compute[250018]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 14:46:50 compute-0 nova_compute[250018]: 2026-01-20 14:46:50.027 250022 DEBUG nova.compute.manager [None req-046445ac-1722-4413-b78e-f4e4e91d1409 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: a7738b4c-0943-43c1-94f9-6777caabf0fe] Preparing to wait for external event network-vif-plugged-d746ce41-d358-47ba-a788-1e026544b773 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 20 14:46:50 compute-0 nova_compute[250018]: 2026-01-20 14:46:50.028 250022 DEBUG oslo_concurrency.lockutils [None req-046445ac-1722-4413-b78e-f4e4e91d1409 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Acquiring lock "a7738b4c-0943-43c1-94f9-6777caabf0fe-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:46:50 compute-0 nova_compute[250018]: 2026-01-20 14:46:50.028 250022 DEBUG oslo_concurrency.lockutils [None req-046445ac-1722-4413-b78e-f4e4e91d1409 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Lock "a7738b4c-0943-43c1-94f9-6777caabf0fe-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:46:50 compute-0 nova_compute[250018]: 2026-01-20 14:46:50.028 250022 DEBUG oslo_concurrency.lockutils [None req-046445ac-1722-4413-b78e-f4e4e91d1409 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Lock "a7738b4c-0943-43c1-94f9-6777caabf0fe-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:46:50 compute-0 nova_compute[250018]: 2026-01-20 14:46:50.029 250022 DEBUG nova.virt.libvirt.vif [None req-046445ac-1722-4413-b78e-f4e4e91d1409 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:46:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-261180473',display_name='tempest-DeleteServersTestJSON-server-261180473',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-261180473',id=98,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3b31139b2a4e49cba5e7048febf901c4',ramdisk_id='',reservation_id='r-j3knx1cg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-1162922273',owner_user_name='tempest-DeleteServersTestJSON-1162922273-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:46:45Z,user_data=None,user_id='37e9ef97fbe0448e9fbe32d48b66211f',uuid=a7738b4c-0943-43c1-94f9-6777caabf0fe,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d746ce41-d358-47ba-a788-1e026544b773", "address": "fa:16:3e:e8:3f:c4", "network": {"id": "fbd5d614-a7d3-4563-913c-104506628e59", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-60721994-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b31139b2a4e49cba5e7048febf901c4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd746ce41-d3", "ovs_interfaceid": "d746ce41-d358-47ba-a788-1e026544b773", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 14:46:50 compute-0 nova_compute[250018]: 2026-01-20 14:46:50.030 250022 DEBUG nova.network.os_vif_util [None req-046445ac-1722-4413-b78e-f4e4e91d1409 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Converting VIF {"id": "d746ce41-d358-47ba-a788-1e026544b773", "address": "fa:16:3e:e8:3f:c4", "network": {"id": "fbd5d614-a7d3-4563-913c-104506628e59", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-60721994-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b31139b2a4e49cba5e7048febf901c4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd746ce41-d3", "ovs_interfaceid": "d746ce41-d358-47ba-a788-1e026544b773", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:46:50 compute-0 nova_compute[250018]: 2026-01-20 14:46:50.030 250022 DEBUG nova.network.os_vif_util [None req-046445ac-1722-4413-b78e-f4e4e91d1409 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e8:3f:c4,bridge_name='br-int',has_traffic_filtering=True,id=d746ce41-d358-47ba-a788-1e026544b773,network=Network(fbd5d614-a7d3-4563-913c-104506628e59),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd746ce41-d3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:46:50 compute-0 nova_compute[250018]: 2026-01-20 14:46:50.030 250022 DEBUG os_vif [None req-046445ac-1722-4413-b78e-f4e4e91d1409 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e8:3f:c4,bridge_name='br-int',has_traffic_filtering=True,id=d746ce41-d358-47ba-a788-1e026544b773,network=Network(fbd5d614-a7d3-4563-913c-104506628e59),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd746ce41-d3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 14:46:50 compute-0 nova_compute[250018]: 2026-01-20 14:46:50.031 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:46:50 compute-0 nova_compute[250018]: 2026-01-20 14:46:50.032 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:46:50 compute-0 nova_compute[250018]: 2026-01-20 14:46:50.032 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:46:50 compute-0 nova_compute[250018]: 2026-01-20 14:46:50.036 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:46:50 compute-0 nova_compute[250018]: 2026-01-20 14:46:50.036 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd746ce41-d3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:46:50 compute-0 nova_compute[250018]: 2026-01-20 14:46:50.037 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd746ce41-d3, col_values=(('external_ids', {'iface-id': 'd746ce41-d358-47ba-a788-1e026544b773', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e8:3f:c4', 'vm-uuid': 'a7738b4c-0943-43c1-94f9-6777caabf0fe'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:46:50 compute-0 nova_compute[250018]: 2026-01-20 14:46:50.038 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:46:50 compute-0 NetworkManager[48960]: <info>  [1768920410.0394] manager: (tapd746ce41-d3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/157)
Jan 20 14:46:50 compute-0 nova_compute[250018]: 2026-01-20 14:46:50.041 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:46:50 compute-0 nova_compute[250018]: 2026-01-20 14:46:50.046 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:46:50 compute-0 nova_compute[250018]: 2026-01-20 14:46:50.047 250022 INFO os_vif [None req-046445ac-1722-4413-b78e-f4e4e91d1409 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e8:3f:c4,bridge_name='br-int',has_traffic_filtering=True,id=d746ce41-d358-47ba-a788-1e026544b773,network=Network(fbd5d614-a7d3-4563-913c-104506628e59),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd746ce41-d3')
Jan 20 14:46:50 compute-0 nova_compute[250018]: 2026-01-20 14:46:50.104 250022 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768920395.1031003, 9280bbf5-da74-42c8-b8a3-a392cec3f921 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:46:50 compute-0 nova_compute[250018]: 2026-01-20 14:46:50.104 250022 INFO nova.compute.manager [-] [instance: 9280bbf5-da74-42c8-b8a3-a392cec3f921] VM Stopped (Lifecycle Event)
Jan 20 14:46:50 compute-0 nova_compute[250018]: 2026-01-20 14:46:50.164 250022 DEBUG nova.compute.manager [None req-9f7e44fd-1e6e-4d12-8257-475d1ebded45 - - - - - -] [instance: 9280bbf5-da74-42c8-b8a3-a392cec3f921] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:46:50 compute-0 nova_compute[250018]: 2026-01-20 14:46:50.177 250022 DEBUG nova.network.neutron [req-48e4e084-2032-44e0-9bf4-e2e01813f8fe req-33444433-8c51-4020-8c40-d1f836a519a3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: a7738b4c-0943-43c1-94f9-6777caabf0fe] Updated VIF entry in instance network info cache for port d746ce41-d358-47ba-a788-1e026544b773. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 14:46:50 compute-0 nova_compute[250018]: 2026-01-20 14:46:50.178 250022 DEBUG nova.network.neutron [req-48e4e084-2032-44e0-9bf4-e2e01813f8fe req-33444433-8c51-4020-8c40-d1f836a519a3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: a7738b4c-0943-43c1-94f9-6777caabf0fe] Updating instance_info_cache with network_info: [{"id": "d746ce41-d358-47ba-a788-1e026544b773", "address": "fa:16:3e:e8:3f:c4", "network": {"id": "fbd5d614-a7d3-4563-913c-104506628e59", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-60721994-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b31139b2a4e49cba5e7048febf901c4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd746ce41-d3", "ovs_interfaceid": "d746ce41-d358-47ba-a788-1e026544b773", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:46:50 compute-0 nova_compute[250018]: 2026-01-20 14:46:50.188 250022 DEBUG nova.virt.libvirt.driver [None req-046445ac-1722-4413-b78e-f4e4e91d1409 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:46:50 compute-0 nova_compute[250018]: 2026-01-20 14:46:50.189 250022 DEBUG nova.virt.libvirt.driver [None req-046445ac-1722-4413-b78e-f4e4e91d1409 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:46:50 compute-0 nova_compute[250018]: 2026-01-20 14:46:50.189 250022 DEBUG nova.virt.libvirt.driver [None req-046445ac-1722-4413-b78e-f4e4e91d1409 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] No VIF found with MAC fa:16:3e:e8:3f:c4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 14:46:50 compute-0 nova_compute[250018]: 2026-01-20 14:46:50.190 250022 INFO nova.virt.libvirt.driver [None req-046445ac-1722-4413-b78e-f4e4e91d1409 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: a7738b4c-0943-43c1-94f9-6777caabf0fe] Using config drive
Jan 20 14:46:50 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/2897975098' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:46:50 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3506616449' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:46:50 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3845254384' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:46:50 compute-0 nova_compute[250018]: 2026-01-20 14:46:50.229 250022 DEBUG nova.storage.rbd_utils [None req-046445ac-1722-4413-b78e-f4e4e91d1409 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] rbd image a7738b4c-0943-43c1-94f9-6777caabf0fe_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:46:50 compute-0 nova_compute[250018]: 2026-01-20 14:46:50.235 250022 DEBUG oslo_concurrency.lockutils [req-48e4e084-2032-44e0-9bf4-e2e01813f8fe req-33444433-8c51-4020-8c40-d1f836a519a3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-a7738b4c-0943-43c1-94f9-6777caabf0fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:46:50 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:46:50 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:46:50 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:46:50.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:46:50 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:46:50 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:46:50 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:46:50.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:46:50 compute-0 nova_compute[250018]: 2026-01-20 14:46:50.608 250022 INFO nova.virt.libvirt.driver [None req-046445ac-1722-4413-b78e-f4e4e91d1409 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: a7738b4c-0943-43c1-94f9-6777caabf0fe] Creating config drive at /var/lib/nova/instances/a7738b4c-0943-43c1-94f9-6777caabf0fe/disk.config
Jan 20 14:46:50 compute-0 nova_compute[250018]: 2026-01-20 14:46:50.617 250022 DEBUG oslo_concurrency.processutils [None req-046445ac-1722-4413-b78e-f4e4e91d1409 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a7738b4c-0943-43c1-94f9-6777caabf0fe/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmplxnvz3ep execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:46:50 compute-0 nova_compute[250018]: 2026-01-20 14:46:50.752 250022 DEBUG oslo_concurrency.processutils [None req-046445ac-1722-4413-b78e-f4e4e91d1409 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a7738b4c-0943-43c1-94f9-6777caabf0fe/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmplxnvz3ep" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:46:50 compute-0 nova_compute[250018]: 2026-01-20 14:46:50.778 250022 DEBUG nova.storage.rbd_utils [None req-046445ac-1722-4413-b78e-f4e4e91d1409 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] rbd image a7738b4c-0943-43c1-94f9-6777caabf0fe_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:46:50 compute-0 nova_compute[250018]: 2026-01-20 14:46:50.782 250022 DEBUG oslo_concurrency.processutils [None req-046445ac-1722-4413-b78e-f4e4e91d1409 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a7738b4c-0943-43c1-94f9-6777caabf0fe/disk.config a7738b4c-0943-43c1-94f9-6777caabf0fe_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:46:50 compute-0 nova_compute[250018]: 2026-01-20 14:46:50.980 250022 DEBUG oslo_concurrency.processutils [None req-046445ac-1722-4413-b78e-f4e4e91d1409 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a7738b4c-0943-43c1-94f9-6777caabf0fe/disk.config a7738b4c-0943-43c1-94f9-6777caabf0fe_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.198s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:46:50 compute-0 nova_compute[250018]: 2026-01-20 14:46:50.981 250022 INFO nova.virt.libvirt.driver [None req-046445ac-1722-4413-b78e-f4e4e91d1409 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: a7738b4c-0943-43c1-94f9-6777caabf0fe] Deleting local config drive /var/lib/nova/instances/a7738b4c-0943-43c1-94f9-6777caabf0fe/disk.config because it was imported into RBD.
Jan 20 14:46:51 compute-0 kernel: tapd746ce41-d3: entered promiscuous mode
Jan 20 14:46:51 compute-0 NetworkManager[48960]: <info>  [1768920411.0277] manager: (tapd746ce41-d3): new Tun device (/org/freedesktop/NetworkManager/Devices/158)
Jan 20 14:46:51 compute-0 ovn_controller[148666]: 2026-01-20T14:46:51Z|00310|binding|INFO|Claiming lport d746ce41-d358-47ba-a788-1e026544b773 for this chassis.
Jan 20 14:46:51 compute-0 ovn_controller[148666]: 2026-01-20T14:46:51Z|00311|binding|INFO|d746ce41-d358-47ba-a788-1e026544b773: Claiming fa:16:3e:e8:3f:c4 10.100.0.4
Jan 20 14:46:51 compute-0 nova_compute[250018]: 2026-01-20 14:46:51.030 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:46:51 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:46:51.040 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e8:3f:c4 10.100.0.4'], port_security=['fa:16:3e:e8:3f:c4 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'a7738b4c-0943-43c1-94f9-6777caabf0fe', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fbd5d614-a7d3-4563-913c-104506628e59', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3b31139b2a4e49cba5e7048febf901c4', 'neutron:revision_number': '2', 'neutron:security_group_ids': '117d6f57-074c-4b36-b375-42e0ab117254', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c42c6982-be52-495a-8746-42a46932572f, chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=d746ce41-d358-47ba-a788-1e026544b773) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:46:51 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:46:51.042 160071 INFO neutron.agent.ovn.metadata.agent [-] Port d746ce41-d358-47ba-a788-1e026544b773 in datapath fbd5d614-a7d3-4563-913c-104506628e59 bound to our chassis
Jan 20 14:46:51 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:46:51.044 160071 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fbd5d614-a7d3-4563-913c-104506628e59
Jan 20 14:46:51 compute-0 ovn_controller[148666]: 2026-01-20T14:46:51Z|00312|binding|INFO|Setting lport d746ce41-d358-47ba-a788-1e026544b773 ovn-installed in OVS
Jan 20 14:46:51 compute-0 ovn_controller[148666]: 2026-01-20T14:46:51Z|00313|binding|INFO|Setting lport d746ce41-d358-47ba-a788-1e026544b773 up in Southbound
Jan 20 14:46:51 compute-0 nova_compute[250018]: 2026-01-20 14:46:51.053 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:46:51 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:46:51.054 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[7f8e0440-e6b8-4d6b-b969-019b6e1ab751]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:46:51 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:46:51.058 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapfbd5d614-a1 in ovnmeta-fbd5d614-a7d3-4563-913c-104506628e59 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 14:46:51 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:46:51.061 257604 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapfbd5d614-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 14:46:51 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:46:51.062 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[fbebb21e-1ee7-422d-9d7a-15266e558340]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:46:51 compute-0 systemd-machined[216401]: New machine qemu-41-instance-00000062.
Jan 20 14:46:51 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:46:51.063 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[a6393cf2-fcfe-40df-9524-13f9526bc64e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:46:51 compute-0 systemd-udevd[305374]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:46:51 compute-0 NetworkManager[48960]: <info>  [1768920411.0789] device (tapd746ce41-d3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 14:46:51 compute-0 NetworkManager[48960]: <info>  [1768920411.0796] device (tapd746ce41-d3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 14:46:51 compute-0 systemd[1]: Started Virtual Machine qemu-41-instance-00000062.
Jan 20 14:46:51 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:46:51.081 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[ae9cc0ee-b548-48ee-bbdb-a4f57faa9dd8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:46:51 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:46:51.106 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[31d9e598-a4c9-425c-bc58-1a427ce5e08d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:46:51 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:46:51.134 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[65f2d227-1142-4bc7-82e1-a4186d4b59d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:46:51 compute-0 NetworkManager[48960]: <info>  [1768920411.1414] manager: (tapfbd5d614-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/159)
Jan 20 14:46:51 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:46:51.140 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[c93085b0-e9df-410b-9ce9-e5ccdf50fa22]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:46:51 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:46:51.172 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[42d13be9-3282-4d27-b89e-14da9d18276a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:46:51 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:46:51.174 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[e761c932-37df-4261-891d-386a3d152343]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:46:51 compute-0 NetworkManager[48960]: <info>  [1768920411.1950] device (tapfbd5d614-a0): carrier: link connected
Jan 20 14:46:51 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:46:51.201 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[b8e2219c-c490-4936-aa5f-69f59ad0be82]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:46:51 compute-0 ceph-mon[74360]: pgmap v1760: 321 pgs: 321 active+clean; 347 MiB data, 891 MiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 2.9 MiB/s wr, 159 op/s
Jan 20 14:46:51 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:46:51.219 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[83599af2-b662-4f21-b069-4d0e3ab2e2be]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfbd5d614-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5c:38:be'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 101], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 635191, 'reachable_time': 24204, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 305406, 'error': None, 'target': 'ovnmeta-fbd5d614-a7d3-4563-913c-104506628e59', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:46:51 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:46:51.235 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[d439a0f5-3213-4660-ba46-f4e000eed8db]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe5c:38be'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 635191, 'tstamp': 635191}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 305407, 'error': None, 'target': 'ovnmeta-fbd5d614-a7d3-4563-913c-104506628e59', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:46:51 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:46:51.251 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[73b69e0b-c206-40d0-bc0d-8cc3d8aa288c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfbd5d614-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5c:38:be'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 101], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 635191, 'reachable_time': 24204, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 305408, 'error': None, 'target': 'ovnmeta-fbd5d614-a7d3-4563-913c-104506628e59', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:46:51 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:46:51.282 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[475f39a6-fe07-4e6f-b5ea-f0465870e6c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:46:51 compute-0 nova_compute[250018]: 2026-01-20 14:46:51.330 250022 DEBUG nova.compute.manager [req-52517694-b8ed-419a-add6-a395bf69a748 req-64e758dc-4460-4f2c-b2b7-5c63c18754e8 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: a7738b4c-0943-43c1-94f9-6777caabf0fe] Received event network-vif-plugged-d746ce41-d358-47ba-a788-1e026544b773 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:46:51 compute-0 nova_compute[250018]: 2026-01-20 14:46:51.330 250022 DEBUG oslo_concurrency.lockutils [req-52517694-b8ed-419a-add6-a395bf69a748 req-64e758dc-4460-4f2c-b2b7-5c63c18754e8 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "a7738b4c-0943-43c1-94f9-6777caabf0fe-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:46:51 compute-0 nova_compute[250018]: 2026-01-20 14:46:51.330 250022 DEBUG oslo_concurrency.lockutils [req-52517694-b8ed-419a-add6-a395bf69a748 req-64e758dc-4460-4f2c-b2b7-5c63c18754e8 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "a7738b4c-0943-43c1-94f9-6777caabf0fe-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:46:51 compute-0 nova_compute[250018]: 2026-01-20 14:46:51.331 250022 DEBUG oslo_concurrency.lockutils [req-52517694-b8ed-419a-add6-a395bf69a748 req-64e758dc-4460-4f2c-b2b7-5c63c18754e8 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "a7738b4c-0943-43c1-94f9-6777caabf0fe-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:46:51 compute-0 nova_compute[250018]: 2026-01-20 14:46:51.331 250022 DEBUG nova.compute.manager [req-52517694-b8ed-419a-add6-a395bf69a748 req-64e758dc-4460-4f2c-b2b7-5c63c18754e8 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: a7738b4c-0943-43c1-94f9-6777caabf0fe] Processing event network-vif-plugged-d746ce41-d358-47ba-a788-1e026544b773 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 20 14:46:51 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:46:51.344 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[1bc5d959-e82d-4131-b397-066d916761cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:46:51 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:46:51.345 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfbd5d614-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:46:51 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:46:51.346 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:46:51 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:46:51.346 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfbd5d614-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:46:51 compute-0 kernel: tapfbd5d614-a0: entered promiscuous mode
Jan 20 14:46:51 compute-0 NetworkManager[48960]: <info>  [1768920411.3486] manager: (tapfbd5d614-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/160)
Jan 20 14:46:51 compute-0 nova_compute[250018]: 2026-01-20 14:46:51.347 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:46:51 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:46:51.351 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfbd5d614-a0, col_values=(('external_ids', {'iface-id': 'b370b74e-dca0-4ff7-a96f-85b392e20721'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:46:51 compute-0 ovn_controller[148666]: 2026-01-20T14:46:51Z|00314|binding|INFO|Releasing lport b370b74e-dca0-4ff7-a96f-85b392e20721 from this chassis (sb_readonly=0)
Jan 20 14:46:51 compute-0 nova_compute[250018]: 2026-01-20 14:46:51.371 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:46:51 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:46:51.372 160071 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/fbd5d614-a7d3-4563-913c-104506628e59.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/fbd5d614-a7d3-4563-913c-104506628e59.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 14:46:51 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:46:51.374 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[cd466546-02b2-4207-a8de-23e43ea53386]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:46:51 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:46:51.375 160071 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 14:46:51 compute-0 ovn_metadata_agent[160049]: global
Jan 20 14:46:51 compute-0 ovn_metadata_agent[160049]:     log         /dev/log local0 debug
Jan 20 14:46:51 compute-0 ovn_metadata_agent[160049]:     log-tag     haproxy-metadata-proxy-fbd5d614-a7d3-4563-913c-104506628e59
Jan 20 14:46:51 compute-0 ovn_metadata_agent[160049]:     user        root
Jan 20 14:46:51 compute-0 ovn_metadata_agent[160049]:     group       root
Jan 20 14:46:51 compute-0 ovn_metadata_agent[160049]:     maxconn     1024
Jan 20 14:46:51 compute-0 ovn_metadata_agent[160049]:     pidfile     /var/lib/neutron/external/pids/fbd5d614-a7d3-4563-913c-104506628e59.pid.haproxy
Jan 20 14:46:51 compute-0 ovn_metadata_agent[160049]:     daemon
Jan 20 14:46:51 compute-0 ovn_metadata_agent[160049]: 
Jan 20 14:46:51 compute-0 ovn_metadata_agent[160049]: defaults
Jan 20 14:46:51 compute-0 ovn_metadata_agent[160049]:     log global
Jan 20 14:46:51 compute-0 ovn_metadata_agent[160049]:     mode http
Jan 20 14:46:51 compute-0 ovn_metadata_agent[160049]:     option httplog
Jan 20 14:46:51 compute-0 ovn_metadata_agent[160049]:     option dontlognull
Jan 20 14:46:51 compute-0 ovn_metadata_agent[160049]:     option http-server-close
Jan 20 14:46:51 compute-0 ovn_metadata_agent[160049]:     option forwardfor
Jan 20 14:46:51 compute-0 ovn_metadata_agent[160049]:     retries                 3
Jan 20 14:46:51 compute-0 ovn_metadata_agent[160049]:     timeout http-request    30s
Jan 20 14:46:51 compute-0 ovn_metadata_agent[160049]:     timeout connect         30s
Jan 20 14:46:51 compute-0 ovn_metadata_agent[160049]:     timeout client          32s
Jan 20 14:46:51 compute-0 ovn_metadata_agent[160049]:     timeout server          32s
Jan 20 14:46:51 compute-0 ovn_metadata_agent[160049]:     timeout http-keep-alive 30s
Jan 20 14:46:51 compute-0 ovn_metadata_agent[160049]: 
Jan 20 14:46:51 compute-0 ovn_metadata_agent[160049]: 
Jan 20 14:46:51 compute-0 ovn_metadata_agent[160049]: listen listener
Jan 20 14:46:51 compute-0 ovn_metadata_agent[160049]:     bind 169.254.169.254:80
Jan 20 14:46:51 compute-0 ovn_metadata_agent[160049]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 14:46:51 compute-0 ovn_metadata_agent[160049]:     http-request add-header X-OVN-Network-ID fbd5d614-a7d3-4563-913c-104506628e59
Jan 20 14:46:51 compute-0 ovn_metadata_agent[160049]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 14:46:51 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:46:51.375 160071 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-fbd5d614-a7d3-4563-913c-104506628e59', 'env', 'PROCESS_TAG=haproxy-fbd5d614-a7d3-4563-913c-104506628e59', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/fbd5d614-a7d3-4563-913c-104506628e59.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 14:46:51 compute-0 nova_compute[250018]: 2026-01-20 14:46:51.662 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768920411.6619062, a7738b4c-0943-43c1-94f9-6777caabf0fe => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:46:51 compute-0 nova_compute[250018]: 2026-01-20 14:46:51.662 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: a7738b4c-0943-43c1-94f9-6777caabf0fe] VM Started (Lifecycle Event)
Jan 20 14:46:51 compute-0 nova_compute[250018]: 2026-01-20 14:46:51.664 250022 DEBUG nova.compute.manager [None req-046445ac-1722-4413-b78e-f4e4e91d1409 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: a7738b4c-0943-43c1-94f9-6777caabf0fe] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 14:46:51 compute-0 nova_compute[250018]: 2026-01-20 14:46:51.667 250022 DEBUG nova.virt.libvirt.driver [None req-046445ac-1722-4413-b78e-f4e4e91d1409 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: a7738b4c-0943-43c1-94f9-6777caabf0fe] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 14:46:51 compute-0 nova_compute[250018]: 2026-01-20 14:46:51.670 250022 INFO nova.virt.libvirt.driver [-] [instance: a7738b4c-0943-43c1-94f9-6777caabf0fe] Instance spawned successfully.
Jan 20 14:46:51 compute-0 nova_compute[250018]: 2026-01-20 14:46:51.670 250022 DEBUG nova.virt.libvirt.driver [None req-046445ac-1722-4413-b78e-f4e4e91d1409 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: a7738b4c-0943-43c1-94f9-6777caabf0fe] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 14:46:51 compute-0 nova_compute[250018]: 2026-01-20 14:46:51.700 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: a7738b4c-0943-43c1-94f9-6777caabf0fe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:46:51 compute-0 nova_compute[250018]: 2026-01-20 14:46:51.704 250022 DEBUG nova.virt.libvirt.driver [None req-046445ac-1722-4413-b78e-f4e4e91d1409 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: a7738b4c-0943-43c1-94f9-6777caabf0fe] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:46:51 compute-0 nova_compute[250018]: 2026-01-20 14:46:51.704 250022 DEBUG nova.virt.libvirt.driver [None req-046445ac-1722-4413-b78e-f4e4e91d1409 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: a7738b4c-0943-43c1-94f9-6777caabf0fe] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:46:51 compute-0 nova_compute[250018]: 2026-01-20 14:46:51.705 250022 DEBUG nova.virt.libvirt.driver [None req-046445ac-1722-4413-b78e-f4e4e91d1409 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: a7738b4c-0943-43c1-94f9-6777caabf0fe] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:46:51 compute-0 nova_compute[250018]: 2026-01-20 14:46:51.705 250022 DEBUG nova.virt.libvirt.driver [None req-046445ac-1722-4413-b78e-f4e4e91d1409 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: a7738b4c-0943-43c1-94f9-6777caabf0fe] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:46:51 compute-0 nova_compute[250018]: 2026-01-20 14:46:51.705 250022 DEBUG nova.virt.libvirt.driver [None req-046445ac-1722-4413-b78e-f4e4e91d1409 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: a7738b4c-0943-43c1-94f9-6777caabf0fe] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:46:51 compute-0 nova_compute[250018]: 2026-01-20 14:46:51.706 250022 DEBUG nova.virt.libvirt.driver [None req-046445ac-1722-4413-b78e-f4e4e91d1409 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: a7738b4c-0943-43c1-94f9-6777caabf0fe] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:46:51 compute-0 nova_compute[250018]: 2026-01-20 14:46:51.709 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: a7738b4c-0943-43c1-94f9-6777caabf0fe] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:46:51 compute-0 nova_compute[250018]: 2026-01-20 14:46:51.740 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: a7738b4c-0943-43c1-94f9-6777caabf0fe] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:46:51 compute-0 nova_compute[250018]: 2026-01-20 14:46:51.740 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768920411.6641665, a7738b4c-0943-43c1-94f9-6777caabf0fe => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:46:51 compute-0 nova_compute[250018]: 2026-01-20 14:46:51.740 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: a7738b4c-0943-43c1-94f9-6777caabf0fe] VM Paused (Lifecycle Event)
Jan 20 14:46:51 compute-0 podman[305482]: 2026-01-20 14:46:51.746310681 +0000 UTC m=+0.060425978 container create dae7d3f0171baf5316b0082ce23bd8824245150c5a4b3af9ed186a90b3be0cc7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fbd5d614-a7d3-4563-913c-104506628e59, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 20 14:46:51 compute-0 nova_compute[250018]: 2026-01-20 14:46:51.772 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: a7738b4c-0943-43c1-94f9-6777caabf0fe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:46:51 compute-0 nova_compute[250018]: 2026-01-20 14:46:51.780 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768920411.667444, a7738b4c-0943-43c1-94f9-6777caabf0fe => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:46:51 compute-0 nova_compute[250018]: 2026-01-20 14:46:51.780 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: a7738b4c-0943-43c1-94f9-6777caabf0fe] VM Resumed (Lifecycle Event)
Jan 20 14:46:51 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1761: 321 pgs: 321 active+clean; 374 MiB data, 900 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 3.6 MiB/s wr, 229 op/s
Jan 20 14:46:51 compute-0 nova_compute[250018]: 2026-01-20 14:46:51.782 250022 INFO nova.compute.manager [None req-046445ac-1722-4413-b78e-f4e4e91d1409 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: a7738b4c-0943-43c1-94f9-6777caabf0fe] Took 6.62 seconds to spawn the instance on the hypervisor.
Jan 20 14:46:51 compute-0 nova_compute[250018]: 2026-01-20 14:46:51.783 250022 DEBUG nova.compute.manager [None req-046445ac-1722-4413-b78e-f4e4e91d1409 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: a7738b4c-0943-43c1-94f9-6777caabf0fe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:46:51 compute-0 systemd[1]: Started libpod-conmon-dae7d3f0171baf5316b0082ce23bd8824245150c5a4b3af9ed186a90b3be0cc7.scope.
Jan 20 14:46:51 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:46:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e7199bc05d20d60150cd94178889b3e186505d089688928ebec106c81e76532/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 14:46:51 compute-0 podman[305482]: 2026-01-20 14:46:51.714950181 +0000 UTC m=+0.029065528 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 14:46:51 compute-0 nova_compute[250018]: 2026-01-20 14:46:51.811 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: a7738b4c-0943-43c1-94f9-6777caabf0fe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:46:51 compute-0 nova_compute[250018]: 2026-01-20 14:46:51.817 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: a7738b4c-0943-43c1-94f9-6777caabf0fe] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:46:51 compute-0 podman[305482]: 2026-01-20 14:46:51.821651833 +0000 UTC m=+0.135767330 container init dae7d3f0171baf5316b0082ce23bd8824245150c5a4b3af9ed186a90b3be0cc7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fbd5d614-a7d3-4563-913c-104506628e59, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 20 14:46:51 compute-0 podman[305482]: 2026-01-20 14:46:51.826561415 +0000 UTC m=+0.140676712 container start dae7d3f0171baf5316b0082ce23bd8824245150c5a4b3af9ed186a90b3be0cc7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fbd5d614-a7d3-4563-913c-104506628e59, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 20 14:46:51 compute-0 nova_compute[250018]: 2026-01-20 14:46:51.847 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: a7738b4c-0943-43c1-94f9-6777caabf0fe] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:46:51 compute-0 neutron-haproxy-ovnmeta-fbd5d614-a7d3-4563-913c-104506628e59[305499]: [NOTICE]   (305503) : New worker (305505) forked
Jan 20 14:46:51 compute-0 neutron-haproxy-ovnmeta-fbd5d614-a7d3-4563-913c-104506628e59[305499]: [NOTICE]   (305503) : Loading success.
Jan 20 14:46:51 compute-0 nova_compute[250018]: 2026-01-20 14:46:51.855 250022 INFO nova.compute.manager [None req-046445ac-1722-4413-b78e-f4e4e91d1409 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: a7738b4c-0943-43c1-94f9-6777caabf0fe] Took 7.61 seconds to build instance.
Jan 20 14:46:51 compute-0 nova_compute[250018]: 2026-01-20 14:46:51.871 250022 DEBUG oslo_concurrency.lockutils [None req-046445ac-1722-4413-b78e-f4e4e91d1409 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Lock "a7738b4c-0943-43c1-94f9-6777caabf0fe" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.689s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:46:52 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:46:52 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:46:52 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:46:52.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:46:52 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:46:52 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:46:52 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:46:52.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:46:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:46:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:46:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:46:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:46:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:46:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:46:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Optimize plan auto_2026-01-20_14:46:52
Jan 20 14:46:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 14:46:52 compute-0 ceph-mgr[74653]: [balancer INFO root] do_upmap
Jan 20 14:46:52 compute-0 ceph-mgr[74653]: [balancer INFO root] pools ['images', 'default.rgw.control', 'cephfs.cephfs.meta', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.log', 'volumes', 'vms', 'backups', '.mgr', 'default.rgw.meta']
Jan 20 14:46:52 compute-0 ceph-mgr[74653]: [balancer INFO root] prepared 0/10 changes
Jan 20 14:46:52 compute-0 nova_compute[250018]: 2026-01-20 14:46:52.899 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:46:53 compute-0 ceph-mon[74360]: pgmap v1761: 321 pgs: 321 active+clean; 374 MiB data, 900 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 3.6 MiB/s wr, 229 op/s
Jan 20 14:46:53 compute-0 nova_compute[250018]: 2026-01-20 14:46:53.280 250022 DEBUG nova.objects.instance [None req-71249a30-a4cc-4b42-acb7-a329d6a03d6f 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Lazy-loading 'pci_devices' on Instance uuid a7738b4c-0943-43c1-94f9-6777caabf0fe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:46:53 compute-0 nova_compute[250018]: 2026-01-20 14:46:53.297 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768920413.297336, a7738b4c-0943-43c1-94f9-6777caabf0fe => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:46:53 compute-0 nova_compute[250018]: 2026-01-20 14:46:53.297 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: a7738b4c-0943-43c1-94f9-6777caabf0fe] VM Paused (Lifecycle Event)
Jan 20 14:46:53 compute-0 nova_compute[250018]: 2026-01-20 14:46:53.316 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: a7738b4c-0943-43c1-94f9-6777caabf0fe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:46:53 compute-0 nova_compute[250018]: 2026-01-20 14:46:53.320 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: a7738b4c-0943-43c1-94f9-6777caabf0fe] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:46:53 compute-0 nova_compute[250018]: 2026-01-20 14:46:53.351 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: a7738b4c-0943-43c1-94f9-6777caabf0fe] During sync_power_state the instance has a pending task (suspending). Skip.
Jan 20 14:46:53 compute-0 nova_compute[250018]: 2026-01-20 14:46:53.434 250022 DEBUG nova.compute.manager [req-6fa1802d-2b2b-432b-971a-4e8d7364d24d req-d6ffbaf9-eb51-4905-8daf-5d0459a7bbb9 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: a7738b4c-0943-43c1-94f9-6777caabf0fe] Received event network-vif-plugged-d746ce41-d358-47ba-a788-1e026544b773 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:46:53 compute-0 nova_compute[250018]: 2026-01-20 14:46:53.434 250022 DEBUG oslo_concurrency.lockutils [req-6fa1802d-2b2b-432b-971a-4e8d7364d24d req-d6ffbaf9-eb51-4905-8daf-5d0459a7bbb9 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "a7738b4c-0943-43c1-94f9-6777caabf0fe-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:46:53 compute-0 nova_compute[250018]: 2026-01-20 14:46:53.435 250022 DEBUG oslo_concurrency.lockutils [req-6fa1802d-2b2b-432b-971a-4e8d7364d24d req-d6ffbaf9-eb51-4905-8daf-5d0459a7bbb9 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "a7738b4c-0943-43c1-94f9-6777caabf0fe-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:46:53 compute-0 nova_compute[250018]: 2026-01-20 14:46:53.435 250022 DEBUG oslo_concurrency.lockutils [req-6fa1802d-2b2b-432b-971a-4e8d7364d24d req-d6ffbaf9-eb51-4905-8daf-5d0459a7bbb9 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "a7738b4c-0943-43c1-94f9-6777caabf0fe-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:46:53 compute-0 nova_compute[250018]: 2026-01-20 14:46:53.435 250022 DEBUG nova.compute.manager [req-6fa1802d-2b2b-432b-971a-4e8d7364d24d req-d6ffbaf9-eb51-4905-8daf-5d0459a7bbb9 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: a7738b4c-0943-43c1-94f9-6777caabf0fe] No waiting events found dispatching network-vif-plugged-d746ce41-d358-47ba-a788-1e026544b773 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:46:53 compute-0 nova_compute[250018]: 2026-01-20 14:46:53.435 250022 WARNING nova.compute.manager [req-6fa1802d-2b2b-432b-971a-4e8d7364d24d req-d6ffbaf9-eb51-4905-8daf-5d0459a7bbb9 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: a7738b4c-0943-43c1-94f9-6777caabf0fe] Received unexpected event network-vif-plugged-d746ce41-d358-47ba-a788-1e026544b773 for instance with vm_state active and task_state suspending.
Jan 20 14:46:53 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1762: 321 pgs: 321 active+clean; 374 MiB data, 900 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 3.6 MiB/s wr, 206 op/s
Jan 20 14:46:53 compute-0 kernel: tapd746ce41-d3 (unregistering): left promiscuous mode
Jan 20 14:46:53 compute-0 NetworkManager[48960]: <info>  [1768920413.7914] device (tapd746ce41-d3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 14:46:53 compute-0 ovn_controller[148666]: 2026-01-20T14:46:53Z|00315|binding|INFO|Releasing lport d746ce41-d358-47ba-a788-1e026544b773 from this chassis (sb_readonly=0)
Jan 20 14:46:53 compute-0 nova_compute[250018]: 2026-01-20 14:46:53.802 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:46:53 compute-0 ovn_controller[148666]: 2026-01-20T14:46:53Z|00316|binding|INFO|Setting lport d746ce41-d358-47ba-a788-1e026544b773 down in Southbound
Jan 20 14:46:53 compute-0 ovn_controller[148666]: 2026-01-20T14:46:53Z|00317|binding|INFO|Removing iface tapd746ce41-d3 ovn-installed in OVS
Jan 20 14:46:53 compute-0 nova_compute[250018]: 2026-01-20 14:46:53.805 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:46:53 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:46:53.812 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e8:3f:c4 10.100.0.4'], port_security=['fa:16:3e:e8:3f:c4 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'a7738b4c-0943-43c1-94f9-6777caabf0fe', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fbd5d614-a7d3-4563-913c-104506628e59', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3b31139b2a4e49cba5e7048febf901c4', 'neutron:revision_number': '4', 'neutron:security_group_ids': '117d6f57-074c-4b36-b375-42e0ab117254', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c42c6982-be52-495a-8746-42a46932572f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=d746ce41-d358-47ba-a788-1e026544b773) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:46:53 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:46:53.814 160071 INFO neutron.agent.ovn.metadata.agent [-] Port d746ce41-d358-47ba-a788-1e026544b773 in datapath fbd5d614-a7d3-4563-913c-104506628e59 unbound from our chassis
Jan 20 14:46:53 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:46:53.817 160071 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network fbd5d614-a7d3-4563-913c-104506628e59, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 14:46:53 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:46:53.819 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[812dcf2c-f333-45c4-a692-3d39d11fc8a3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:46:53 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:46:53.820 160071 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-fbd5d614-a7d3-4563-913c-104506628e59 namespace which is not needed anymore
Jan 20 14:46:53 compute-0 nova_compute[250018]: 2026-01-20 14:46:53.837 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:46:53 compute-0 systemd[1]: machine-qemu\x2d41\x2dinstance\x2d00000062.scope: Deactivated successfully.
Jan 20 14:46:53 compute-0 systemd[1]: machine-qemu\x2d41\x2dinstance\x2d00000062.scope: Consumed 2.479s CPU time.
Jan 20 14:46:53 compute-0 systemd-machined[216401]: Machine qemu-41-instance-00000062 terminated.
Jan 20 14:46:53 compute-0 NetworkManager[48960]: <info>  [1768920413.9697] manager: (tapd746ce41-d3): new Tun device (/org/freedesktop/NetworkManager/Devices/161)
Jan 20 14:46:54 compute-0 nova_compute[250018]: 2026-01-20 14:46:54.008 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:46:54 compute-0 neutron-haproxy-ovnmeta-fbd5d614-a7d3-4563-913c-104506628e59[305499]: [NOTICE]   (305503) : haproxy version is 2.8.14-c23fe91
Jan 20 14:46:54 compute-0 neutron-haproxy-ovnmeta-fbd5d614-a7d3-4563-913c-104506628e59[305499]: [NOTICE]   (305503) : path to executable is /usr/sbin/haproxy
Jan 20 14:46:54 compute-0 neutron-haproxy-ovnmeta-fbd5d614-a7d3-4563-913c-104506628e59[305499]: [WARNING]  (305503) : Exiting Master process...
Jan 20 14:46:54 compute-0 neutron-haproxy-ovnmeta-fbd5d614-a7d3-4563-913c-104506628e59[305499]: [WARNING]  (305503) : Exiting Master process...
Jan 20 14:46:54 compute-0 neutron-haproxy-ovnmeta-fbd5d614-a7d3-4563-913c-104506628e59[305499]: [ALERT]    (305503) : Current worker (305505) exited with code 143 (Terminated)
Jan 20 14:46:54 compute-0 neutron-haproxy-ovnmeta-fbd5d614-a7d3-4563-913c-104506628e59[305499]: [WARNING]  (305503) : All workers exited. Exiting... (0)
Jan 20 14:46:54 compute-0 systemd[1]: libpod-dae7d3f0171baf5316b0082ce23bd8824245150c5a4b3af9ed186a90b3be0cc7.scope: Deactivated successfully.
Jan 20 14:46:54 compute-0 nova_compute[250018]: 2026-01-20 14:46:54.018 250022 DEBUG nova.compute.manager [None req-71249a30-a4cc-4b42-acb7-a329d6a03d6f 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: a7738b4c-0943-43c1-94f9-6777caabf0fe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:46:54 compute-0 podman[305541]: 2026-01-20 14:46:54.021214838 +0000 UTC m=+0.073000380 container died dae7d3f0171baf5316b0082ce23bd8824245150c5a4b3af9ed186a90b3be0cc7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fbd5d614-a7d3-4563-913c-104506628e59, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:46:54 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-dae7d3f0171baf5316b0082ce23bd8824245150c5a4b3af9ed186a90b3be0cc7-userdata-shm.mount: Deactivated successfully.
Jan 20 14:46:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-6e7199bc05d20d60150cd94178889b3e186505d089688928ebec106c81e76532-merged.mount: Deactivated successfully.
Jan 20 14:46:54 compute-0 podman[305541]: 2026-01-20 14:46:54.063821122 +0000 UTC m=+0.115606664 container cleanup dae7d3f0171baf5316b0082ce23bd8824245150c5a4b3af9ed186a90b3be0cc7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fbd5d614-a7d3-4563-913c-104506628e59, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:46:54 compute-0 systemd[1]: libpod-conmon-dae7d3f0171baf5316b0082ce23bd8824245150c5a4b3af9ed186a90b3be0cc7.scope: Deactivated successfully.
Jan 20 14:46:54 compute-0 podman[305580]: 2026-01-20 14:46:54.14309486 +0000 UTC m=+0.050053537 container remove dae7d3f0171baf5316b0082ce23bd8824245150c5a4b3af9ed186a90b3be0cc7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fbd5d614-a7d3-4563-913c-104506628e59, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Jan 20 14:46:54 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:46:54.149 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[b0d7b193-d7ef-493b-9b67-8616cc0069e8]: (4, ('Tue Jan 20 02:46:53 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-fbd5d614-a7d3-4563-913c-104506628e59 (dae7d3f0171baf5316b0082ce23bd8824245150c5a4b3af9ed186a90b3be0cc7)\ndae7d3f0171baf5316b0082ce23bd8824245150c5a4b3af9ed186a90b3be0cc7\nTue Jan 20 02:46:54 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-fbd5d614-a7d3-4563-913c-104506628e59 (dae7d3f0171baf5316b0082ce23bd8824245150c5a4b3af9ed186a90b3be0cc7)\ndae7d3f0171baf5316b0082ce23bd8824245150c5a4b3af9ed186a90b3be0cc7\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:46:54 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:46:54.151 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[77948079-4c0b-4ce3-b681-18f18223a25a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:46:54 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:46:54.151 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfbd5d614-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:46:54 compute-0 nova_compute[250018]: 2026-01-20 14:46:54.153 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:46:54 compute-0 kernel: tapfbd5d614-a0: left promiscuous mode
Jan 20 14:46:54 compute-0 nova_compute[250018]: 2026-01-20 14:46:54.168 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:46:54 compute-0 nova_compute[250018]: 2026-01-20 14:46:54.173 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:46:54 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:46:54.175 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[07fb9798-d446-4c84-8eb1-e4cd767bea61]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:46:54 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:46:54.187 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[bd7a743d-5c6d-4443-b5a9-8a805dd696fe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:46:54 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:46:54.188 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[3a8cfd5a-cf3c-4144-a217-1eeffc1967cc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:46:54 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:46:54.208 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[a33f7e0d-5d3d-4d4c-a0bc-027274491df6]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 635185, 'reachable_time': 37770, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 305595, 'error': None, 'target': 'ovnmeta-fbd5d614-a7d3-4563-913c-104506628e59', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:46:54 compute-0 systemd[1]: run-netns-ovnmeta\x2dfbd5d614\x2da7d3\x2d4563\x2d913c\x2d104506628e59.mount: Deactivated successfully.
Jan 20 14:46:54 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:46:54.210 160517 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-fbd5d614-a7d3-4563-913c-104506628e59 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 14:46:54 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:46:54.210 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[e44d40e8-26eb-449b-ac65-904c3a623f40]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:46:54 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:46:54 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:46:54 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:46:54.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:46:54 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:46:54 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:46:54 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:46:54.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:46:54 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e238 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:46:55 compute-0 nova_compute[250018]: 2026-01-20 14:46:55.080 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:46:55 compute-0 ceph-mon[74360]: pgmap v1762: 321 pgs: 321 active+clean; 374 MiB data, 900 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 3.6 MiB/s wr, 206 op/s
Jan 20 14:46:55 compute-0 nova_compute[250018]: 2026-01-20 14:46:55.415 250022 DEBUG oslo_concurrency.lockutils [None req-949b817f-9ea5-4826-b8c7-be585cbe62b4 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Acquiring lock "a7738b4c-0943-43c1-94f9-6777caabf0fe" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:46:55 compute-0 nova_compute[250018]: 2026-01-20 14:46:55.416 250022 DEBUG oslo_concurrency.lockutils [None req-949b817f-9ea5-4826-b8c7-be585cbe62b4 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Lock "a7738b4c-0943-43c1-94f9-6777caabf0fe" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:46:55 compute-0 nova_compute[250018]: 2026-01-20 14:46:55.416 250022 DEBUG oslo_concurrency.lockutils [None req-949b817f-9ea5-4826-b8c7-be585cbe62b4 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Acquiring lock "a7738b4c-0943-43c1-94f9-6777caabf0fe-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:46:55 compute-0 nova_compute[250018]: 2026-01-20 14:46:55.416 250022 DEBUG oslo_concurrency.lockutils [None req-949b817f-9ea5-4826-b8c7-be585cbe62b4 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Lock "a7738b4c-0943-43c1-94f9-6777caabf0fe-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:46:55 compute-0 nova_compute[250018]: 2026-01-20 14:46:55.417 250022 DEBUG oslo_concurrency.lockutils [None req-949b817f-9ea5-4826-b8c7-be585cbe62b4 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Lock "a7738b4c-0943-43c1-94f9-6777caabf0fe-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:46:55 compute-0 nova_compute[250018]: 2026-01-20 14:46:55.418 250022 INFO nova.compute.manager [None req-949b817f-9ea5-4826-b8c7-be585cbe62b4 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: a7738b4c-0943-43c1-94f9-6777caabf0fe] Terminating instance
Jan 20 14:46:55 compute-0 nova_compute[250018]: 2026-01-20 14:46:55.420 250022 DEBUG nova.compute.manager [None req-949b817f-9ea5-4826-b8c7-be585cbe62b4 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: a7738b4c-0943-43c1-94f9-6777caabf0fe] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 14:46:55 compute-0 nova_compute[250018]: 2026-01-20 14:46:55.427 250022 INFO nova.virt.libvirt.driver [-] [instance: a7738b4c-0943-43c1-94f9-6777caabf0fe] Instance destroyed successfully.
Jan 20 14:46:55 compute-0 nova_compute[250018]: 2026-01-20 14:46:55.427 250022 DEBUG nova.objects.instance [None req-949b817f-9ea5-4826-b8c7-be585cbe62b4 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Lazy-loading 'resources' on Instance uuid a7738b4c-0943-43c1-94f9-6777caabf0fe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:46:55 compute-0 nova_compute[250018]: 2026-01-20 14:46:55.443 250022 DEBUG nova.virt.libvirt.vif [None req-949b817f-9ea5-4826-b8c7-be585cbe62b4 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:46:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-261180473',display_name='tempest-DeleteServersTestJSON-server-261180473',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-261180473',id=98,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:46:51Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='3b31139b2a4e49cba5e7048febf901c4',ramdisk_id='',reservation_id='r-j3knx1cg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_di
sk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-DeleteServersTestJSON-1162922273',owner_user_name='tempest-DeleteServersTestJSON-1162922273-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T14:46:54Z,user_data=None,user_id='37e9ef97fbe0448e9fbe32d48b66211f',uuid=a7738b4c-0943-43c1-94f9-6777caabf0fe,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "d746ce41-d358-47ba-a788-1e026544b773", "address": "fa:16:3e:e8:3f:c4", "network": {"id": "fbd5d614-a7d3-4563-913c-104506628e59", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-60721994-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b31139b2a4e49cba5e7048febf901c4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd746ce41-d3", "ovs_interfaceid": "d746ce41-d358-47ba-a788-1e026544b773", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 14:46:55 compute-0 nova_compute[250018]: 2026-01-20 14:46:55.444 250022 DEBUG nova.network.os_vif_util [None req-949b817f-9ea5-4826-b8c7-be585cbe62b4 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Converting VIF {"id": "d746ce41-d358-47ba-a788-1e026544b773", "address": "fa:16:3e:e8:3f:c4", "network": {"id": "fbd5d614-a7d3-4563-913c-104506628e59", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-60721994-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b31139b2a4e49cba5e7048febf901c4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd746ce41-d3", "ovs_interfaceid": "d746ce41-d358-47ba-a788-1e026544b773", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:46:55 compute-0 nova_compute[250018]: 2026-01-20 14:46:55.445 250022 DEBUG nova.network.os_vif_util [None req-949b817f-9ea5-4826-b8c7-be585cbe62b4 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e8:3f:c4,bridge_name='br-int',has_traffic_filtering=True,id=d746ce41-d358-47ba-a788-1e026544b773,network=Network(fbd5d614-a7d3-4563-913c-104506628e59),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd746ce41-d3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:46:55 compute-0 nova_compute[250018]: 2026-01-20 14:46:55.445 250022 DEBUG os_vif [None req-949b817f-9ea5-4826-b8c7-be585cbe62b4 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e8:3f:c4,bridge_name='br-int',has_traffic_filtering=True,id=d746ce41-d358-47ba-a788-1e026544b773,network=Network(fbd5d614-a7d3-4563-913c-104506628e59),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd746ce41-d3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 14:46:55 compute-0 nova_compute[250018]: 2026-01-20 14:46:55.447 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:46:55 compute-0 nova_compute[250018]: 2026-01-20 14:46:55.447 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd746ce41-d3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:46:55 compute-0 nova_compute[250018]: 2026-01-20 14:46:55.449 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:46:55 compute-0 nova_compute[250018]: 2026-01-20 14:46:55.451 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:46:55 compute-0 nova_compute[250018]: 2026-01-20 14:46:55.455 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:46:55 compute-0 nova_compute[250018]: 2026-01-20 14:46:55.458 250022 INFO os_vif [None req-949b817f-9ea5-4826-b8c7-be585cbe62b4 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e8:3f:c4,bridge_name='br-int',has_traffic_filtering=True,id=d746ce41-d358-47ba-a788-1e026544b773,network=Network(fbd5d614-a7d3-4563-913c-104506628e59),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd746ce41-d3')
Jan 20 14:46:55 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:46:55.461 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=30, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=29) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:46:55 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:46:55.465 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 14:46:55 compute-0 nova_compute[250018]: 2026-01-20 14:46:55.477 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:46:55 compute-0 podman[305601]: 2026-01-20 14:46:55.484543492 +0000 UTC m=+0.072171857 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 20 14:46:55 compute-0 podman[305600]: 2026-01-20 14:46:55.50183426 +0000 UTC m=+0.090392870 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller)
Jan 20 14:46:55 compute-0 nova_compute[250018]: 2026-01-20 14:46:55.560 250022 DEBUG nova.compute.manager [req-75e7d446-eaae-4984-8052-8be9d3913d08 req-dee04e90-2208-490b-b5a1-51ccd1f718f7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: a7738b4c-0943-43c1-94f9-6777caabf0fe] Received event network-vif-unplugged-d746ce41-d358-47ba-a788-1e026544b773 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:46:55 compute-0 nova_compute[250018]: 2026-01-20 14:46:55.561 250022 DEBUG oslo_concurrency.lockutils [req-75e7d446-eaae-4984-8052-8be9d3913d08 req-dee04e90-2208-490b-b5a1-51ccd1f718f7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "a7738b4c-0943-43c1-94f9-6777caabf0fe-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:46:55 compute-0 nova_compute[250018]: 2026-01-20 14:46:55.561 250022 DEBUG oslo_concurrency.lockutils [req-75e7d446-eaae-4984-8052-8be9d3913d08 req-dee04e90-2208-490b-b5a1-51ccd1f718f7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "a7738b4c-0943-43c1-94f9-6777caabf0fe-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:46:55 compute-0 nova_compute[250018]: 2026-01-20 14:46:55.561 250022 DEBUG oslo_concurrency.lockutils [req-75e7d446-eaae-4984-8052-8be9d3913d08 req-dee04e90-2208-490b-b5a1-51ccd1f718f7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "a7738b4c-0943-43c1-94f9-6777caabf0fe-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:46:55 compute-0 nova_compute[250018]: 2026-01-20 14:46:55.562 250022 DEBUG nova.compute.manager [req-75e7d446-eaae-4984-8052-8be9d3913d08 req-dee04e90-2208-490b-b5a1-51ccd1f718f7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: a7738b4c-0943-43c1-94f9-6777caabf0fe] No waiting events found dispatching network-vif-unplugged-d746ce41-d358-47ba-a788-1e026544b773 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:46:55 compute-0 nova_compute[250018]: 2026-01-20 14:46:55.562 250022 DEBUG nova.compute.manager [req-75e7d446-eaae-4984-8052-8be9d3913d08 req-dee04e90-2208-490b-b5a1-51ccd1f718f7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: a7738b4c-0943-43c1-94f9-6777caabf0fe] Received event network-vif-unplugged-d746ce41-d358-47ba-a788-1e026544b773 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 20 14:46:55 compute-0 nova_compute[250018]: 2026-01-20 14:46:55.562 250022 DEBUG nova.compute.manager [req-75e7d446-eaae-4984-8052-8be9d3913d08 req-dee04e90-2208-490b-b5a1-51ccd1f718f7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: a7738b4c-0943-43c1-94f9-6777caabf0fe] Received event network-vif-plugged-d746ce41-d358-47ba-a788-1e026544b773 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:46:55 compute-0 nova_compute[250018]: 2026-01-20 14:46:55.562 250022 DEBUG oslo_concurrency.lockutils [req-75e7d446-eaae-4984-8052-8be9d3913d08 req-dee04e90-2208-490b-b5a1-51ccd1f718f7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "a7738b4c-0943-43c1-94f9-6777caabf0fe-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:46:55 compute-0 nova_compute[250018]: 2026-01-20 14:46:55.563 250022 DEBUG oslo_concurrency.lockutils [req-75e7d446-eaae-4984-8052-8be9d3913d08 req-dee04e90-2208-490b-b5a1-51ccd1f718f7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "a7738b4c-0943-43c1-94f9-6777caabf0fe-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:46:55 compute-0 nova_compute[250018]: 2026-01-20 14:46:55.563 250022 DEBUG oslo_concurrency.lockutils [req-75e7d446-eaae-4984-8052-8be9d3913d08 req-dee04e90-2208-490b-b5a1-51ccd1f718f7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "a7738b4c-0943-43c1-94f9-6777caabf0fe-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:46:55 compute-0 nova_compute[250018]: 2026-01-20 14:46:55.563 250022 DEBUG nova.compute.manager [req-75e7d446-eaae-4984-8052-8be9d3913d08 req-dee04e90-2208-490b-b5a1-51ccd1f718f7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: a7738b4c-0943-43c1-94f9-6777caabf0fe] No waiting events found dispatching network-vif-plugged-d746ce41-d358-47ba-a788-1e026544b773 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:46:55 compute-0 nova_compute[250018]: 2026-01-20 14:46:55.563 250022 WARNING nova.compute.manager [req-75e7d446-eaae-4984-8052-8be9d3913d08 req-dee04e90-2208-490b-b5a1-51ccd1f718f7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: a7738b4c-0943-43c1-94f9-6777caabf0fe] Received unexpected event network-vif-plugged-d746ce41-d358-47ba-a788-1e026544b773 for instance with vm_state suspended and task_state deleting.
Jan 20 14:46:55 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1763: 321 pgs: 321 active+clean; 356 MiB data, 918 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 246 op/s
Jan 20 14:46:55 compute-0 nova_compute[250018]: 2026-01-20 14:46:55.825 250022 INFO nova.virt.libvirt.driver [None req-949b817f-9ea5-4826-b8c7-be585cbe62b4 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: a7738b4c-0943-43c1-94f9-6777caabf0fe] Deleting instance files /var/lib/nova/instances/a7738b4c-0943-43c1-94f9-6777caabf0fe_del
Jan 20 14:46:55 compute-0 nova_compute[250018]: 2026-01-20 14:46:55.826 250022 INFO nova.virt.libvirt.driver [None req-949b817f-9ea5-4826-b8c7-be585cbe62b4 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: a7738b4c-0943-43c1-94f9-6777caabf0fe] Deletion of /var/lib/nova/instances/a7738b4c-0943-43c1-94f9-6777caabf0fe_del complete
Jan 20 14:46:56 compute-0 nova_compute[250018]: 2026-01-20 14:46:56.036 250022 INFO nova.compute.manager [None req-949b817f-9ea5-4826-b8c7-be585cbe62b4 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: a7738b4c-0943-43c1-94f9-6777caabf0fe] Took 0.62 seconds to destroy the instance on the hypervisor.
Jan 20 14:46:56 compute-0 nova_compute[250018]: 2026-01-20 14:46:56.037 250022 DEBUG oslo.service.loopingcall [None req-949b817f-9ea5-4826-b8c7-be585cbe62b4 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 14:46:56 compute-0 nova_compute[250018]: 2026-01-20 14:46:56.037 250022 DEBUG nova.compute.manager [-] [instance: a7738b4c-0943-43c1-94f9-6777caabf0fe] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 14:46:56 compute-0 nova_compute[250018]: 2026-01-20 14:46:56.037 250022 DEBUG nova.network.neutron [-] [instance: a7738b4c-0943-43c1-94f9-6777caabf0fe] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 14:46:56 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:46:56 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:46:56 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:46:56.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:46:56 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:46:56 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:46:56 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:46:56.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:46:56 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:46:56.469 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=367c1a2c-b16a-4828-ab5a-626bb50023b4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '30'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:46:57 compute-0 nova_compute[250018]: 2026-01-20 14:46:57.169 250022 DEBUG nova.network.neutron [-] [instance: a7738b4c-0943-43c1-94f9-6777caabf0fe] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:46:57 compute-0 nova_compute[250018]: 2026-01-20 14:46:57.230 250022 INFO nova.compute.manager [-] [instance: a7738b4c-0943-43c1-94f9-6777caabf0fe] Took 1.19 seconds to deallocate network for instance.
Jan 20 14:46:57 compute-0 ceph-mon[74360]: pgmap v1763: 321 pgs: 321 active+clean; 356 MiB data, 918 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 246 op/s
Jan 20 14:46:57 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/876713242' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:46:57 compute-0 nova_compute[250018]: 2026-01-20 14:46:57.302 250022 DEBUG oslo_concurrency.lockutils [None req-949b817f-9ea5-4826-b8c7-be585cbe62b4 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:46:57 compute-0 nova_compute[250018]: 2026-01-20 14:46:57.302 250022 DEBUG oslo_concurrency.lockutils [None req-949b817f-9ea5-4826-b8c7-be585cbe62b4 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:46:57 compute-0 nova_compute[250018]: 2026-01-20 14:46:57.371 250022 DEBUG oslo_concurrency.processutils [None req-949b817f-9ea5-4826-b8c7-be585cbe62b4 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:46:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 14:46:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:46:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 14:46:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:46:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:46:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:46:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:46:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:46:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:46:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:46:57 compute-0 nova_compute[250018]: 2026-01-20 14:46:57.663 250022 DEBUG nova.compute.manager [req-07a02586-fe04-4644-b47a-f663a1823ff3 req-207c0a5a-82b0-4b18-969a-25a4cd8cff9c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: a7738b4c-0943-43c1-94f9-6777caabf0fe] Received event network-vif-deleted-d746ce41-d358-47ba-a788-1e026544b773 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:46:57 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1764: 321 pgs: 321 active+clean; 344 MiB data, 913 MiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 1.9 MiB/s wr, 239 op/s
Jan 20 14:46:57 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:46:57 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1857006168' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:46:57 compute-0 nova_compute[250018]: 2026-01-20 14:46:57.877 250022 DEBUG oslo_concurrency.processutils [None req-949b817f-9ea5-4826-b8c7-be585cbe62b4 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:46:57 compute-0 nova_compute[250018]: 2026-01-20 14:46:57.885 250022 DEBUG nova.compute.provider_tree [None req-949b817f-9ea5-4826-b8c7-be585cbe62b4 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:46:57 compute-0 nova_compute[250018]: 2026-01-20 14:46:57.900 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:46:57 compute-0 nova_compute[250018]: 2026-01-20 14:46:57.904 250022 DEBUG nova.scheduler.client.report [None req-949b817f-9ea5-4826-b8c7-be585cbe62b4 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:46:57 compute-0 nova_compute[250018]: 2026-01-20 14:46:57.924 250022 DEBUG oslo_concurrency.lockutils [None req-949b817f-9ea5-4826-b8c7-be585cbe62b4 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.621s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:46:57 compute-0 nova_compute[250018]: 2026-01-20 14:46:57.953 250022 INFO nova.scheduler.client.report [None req-949b817f-9ea5-4826-b8c7-be585cbe62b4 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Deleted allocations for instance a7738b4c-0943-43c1-94f9-6777caabf0fe
Jan 20 14:46:58 compute-0 nova_compute[250018]: 2026-01-20 14:46:58.024 250022 DEBUG oslo_concurrency.lockutils [None req-949b817f-9ea5-4826-b8c7-be585cbe62b4 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Lock "a7738b4c-0943-43c1-94f9-6777caabf0fe" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.608s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:46:58 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:46:58 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:46:58 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:46:58.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:46:58 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:46:58 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:46:58 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:46:58.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:46:58 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/1857006168' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:46:59 compute-0 sshd-session[305687]: Invalid user test from 157.245.78.139 port 42774
Jan 20 14:46:59 compute-0 sshd-session[305687]: Connection closed by invalid user test 157.245.78.139 port 42774 [preauth]
Jan 20 14:46:59 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e238 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:46:59 compute-0 ceph-mon[74360]: pgmap v1764: 321 pgs: 321 active+clean; 344 MiB data, 913 MiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 1.9 MiB/s wr, 239 op/s
Jan 20 14:46:59 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1765: 321 pgs: 321 active+clean; 309 MiB data, 893 MiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 1.5 MiB/s wr, 229 op/s
Jan 20 14:47:00 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:47:00 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:47:00 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:47:00.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:47:00 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:47:00 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:47:00 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:47:00.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:47:00 compute-0 nova_compute[250018]: 2026-01-20 14:47:00.450 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:47:01 compute-0 ceph-mon[74360]: pgmap v1765: 321 pgs: 321 active+clean; 309 MiB data, 893 MiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 1.5 MiB/s wr, 229 op/s
Jan 20 14:47:01 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1766: 321 pgs: 321 active+clean; 242 MiB data, 879 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 781 KiB/s wr, 216 op/s
Jan 20 14:47:02 compute-0 nova_compute[250018]: 2026-01-20 14:47:02.403 250022 DEBUG oslo_concurrency.lockutils [None req-23b1d9a4-dfb3-4832-8f35-098755b9a582 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Acquiring lock "bf7690ac-9b5a-41e3-83bf-3c83cbacc45c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:47:02 compute-0 nova_compute[250018]: 2026-01-20 14:47:02.403 250022 DEBUG oslo_concurrency.lockutils [None req-23b1d9a4-dfb3-4832-8f35-098755b9a582 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Lock "bf7690ac-9b5a-41e3-83bf-3c83cbacc45c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:47:02 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:47:02 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:47:02 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:47:02.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:47:02 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:47:02 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:47:02 compute-0 nova_compute[250018]: 2026-01-20 14:47:02.427 250022 DEBUG nova.compute.manager [None req-23b1d9a4-dfb3-4832-8f35-098755b9a582 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: bf7690ac-9b5a-41e3-83bf-3c83cbacc45c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 14:47:02 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:47:02.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:47:02 compute-0 nova_compute[250018]: 2026-01-20 14:47:02.479 250022 DEBUG oslo_concurrency.lockutils [None req-23b1d9a4-dfb3-4832-8f35-098755b9a582 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:47:02 compute-0 nova_compute[250018]: 2026-01-20 14:47:02.480 250022 DEBUG oslo_concurrency.lockutils [None req-23b1d9a4-dfb3-4832-8f35-098755b9a582 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:47:02 compute-0 nova_compute[250018]: 2026-01-20 14:47:02.486 250022 DEBUG nova.virt.hardware [None req-23b1d9a4-dfb3-4832-8f35-098755b9a582 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 14:47:02 compute-0 nova_compute[250018]: 2026-01-20 14:47:02.486 250022 INFO nova.compute.claims [None req-23b1d9a4-dfb3-4832-8f35-098755b9a582 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: bf7690ac-9b5a-41e3-83bf-3c83cbacc45c] Claim successful on node compute-0.ctlplane.example.com
Jan 20 14:47:02 compute-0 nova_compute[250018]: 2026-01-20 14:47:02.626 250022 DEBUG oslo_concurrency.processutils [None req-23b1d9a4-dfb3-4832-8f35-098755b9a582 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:47:02 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/2640005168' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:47:02 compute-0 nova_compute[250018]: 2026-01-20 14:47:02.901 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:47:03 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:47:03 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3871043654' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:47:03 compute-0 nova_compute[250018]: 2026-01-20 14:47:03.120 250022 DEBUG oslo_concurrency.processutils [None req-23b1d9a4-dfb3-4832-8f35-098755b9a582 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:47:03 compute-0 nova_compute[250018]: 2026-01-20 14:47:03.131 250022 DEBUG nova.compute.provider_tree [None req-23b1d9a4-dfb3-4832-8f35-098755b9a582 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:47:03 compute-0 nova_compute[250018]: 2026-01-20 14:47:03.156 250022 DEBUG nova.scheduler.client.report [None req-23b1d9a4-dfb3-4832-8f35-098755b9a582 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:47:03 compute-0 nova_compute[250018]: 2026-01-20 14:47:03.178 250022 DEBUG oslo_concurrency.lockutils [None req-23b1d9a4-dfb3-4832-8f35-098755b9a582 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.698s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:47:03 compute-0 nova_compute[250018]: 2026-01-20 14:47:03.179 250022 DEBUG nova.compute.manager [None req-23b1d9a4-dfb3-4832-8f35-098755b9a582 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: bf7690ac-9b5a-41e3-83bf-3c83cbacc45c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 14:47:03 compute-0 nova_compute[250018]: 2026-01-20 14:47:03.223 250022 DEBUG nova.compute.manager [None req-23b1d9a4-dfb3-4832-8f35-098755b9a582 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: bf7690ac-9b5a-41e3-83bf-3c83cbacc45c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 20 14:47:03 compute-0 nova_compute[250018]: 2026-01-20 14:47:03.224 250022 DEBUG nova.network.neutron [None req-23b1d9a4-dfb3-4832-8f35-098755b9a582 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: bf7690ac-9b5a-41e3-83bf-3c83cbacc45c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 14:47:03 compute-0 nova_compute[250018]: 2026-01-20 14:47:03.244 250022 INFO nova.virt.libvirt.driver [None req-23b1d9a4-dfb3-4832-8f35-098755b9a582 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: bf7690ac-9b5a-41e3-83bf-3c83cbacc45c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 14:47:03 compute-0 nova_compute[250018]: 2026-01-20 14:47:03.262 250022 DEBUG nova.compute.manager [None req-23b1d9a4-dfb3-4832-8f35-098755b9a582 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: bf7690ac-9b5a-41e3-83bf-3c83cbacc45c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 14:47:03 compute-0 nova_compute[250018]: 2026-01-20 14:47:03.359 250022 DEBUG nova.compute.manager [None req-23b1d9a4-dfb3-4832-8f35-098755b9a582 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: bf7690ac-9b5a-41e3-83bf-3c83cbacc45c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 14:47:03 compute-0 nova_compute[250018]: 2026-01-20 14:47:03.360 250022 DEBUG nova.virt.libvirt.driver [None req-23b1d9a4-dfb3-4832-8f35-098755b9a582 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: bf7690ac-9b5a-41e3-83bf-3c83cbacc45c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 14:47:03 compute-0 nova_compute[250018]: 2026-01-20 14:47:03.360 250022 INFO nova.virt.libvirt.driver [None req-23b1d9a4-dfb3-4832-8f35-098755b9a582 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: bf7690ac-9b5a-41e3-83bf-3c83cbacc45c] Creating image(s)
Jan 20 14:47:03 compute-0 nova_compute[250018]: 2026-01-20 14:47:03.387 250022 DEBUG nova.storage.rbd_utils [None req-23b1d9a4-dfb3-4832-8f35-098755b9a582 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] rbd image bf7690ac-9b5a-41e3-83bf-3c83cbacc45c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:47:03 compute-0 nova_compute[250018]: 2026-01-20 14:47:03.411 250022 DEBUG nova.storage.rbd_utils [None req-23b1d9a4-dfb3-4832-8f35-098755b9a582 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] rbd image bf7690ac-9b5a-41e3-83bf-3c83cbacc45c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:47:03 compute-0 nova_compute[250018]: 2026-01-20 14:47:03.440 250022 DEBUG nova.storage.rbd_utils [None req-23b1d9a4-dfb3-4832-8f35-098755b9a582 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] rbd image bf7690ac-9b5a-41e3-83bf-3c83cbacc45c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:47:03 compute-0 nova_compute[250018]: 2026-01-20 14:47:03.445 250022 DEBUG oslo_concurrency.processutils [None req-23b1d9a4-dfb3-4832-8f35-098755b9a582 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:47:03 compute-0 nova_compute[250018]: 2026-01-20 14:47:03.526 250022 DEBUG oslo_concurrency.processutils [None req-23b1d9a4-dfb3-4832-8f35-098755b9a582 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:47:03 compute-0 nova_compute[250018]: 2026-01-20 14:47:03.527 250022 DEBUG oslo_concurrency.lockutils [None req-23b1d9a4-dfb3-4832-8f35-098755b9a582 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Acquiring lock "82d5c1918fd7c974214c7a48c1793a7a82560462" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:47:03 compute-0 nova_compute[250018]: 2026-01-20 14:47:03.527 250022 DEBUG oslo_concurrency.lockutils [None req-23b1d9a4-dfb3-4832-8f35-098755b9a582 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:47:03 compute-0 nova_compute[250018]: 2026-01-20 14:47:03.528 250022 DEBUG oslo_concurrency.lockutils [None req-23b1d9a4-dfb3-4832-8f35-098755b9a582 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:47:03 compute-0 nova_compute[250018]: 2026-01-20 14:47:03.556 250022 DEBUG nova.storage.rbd_utils [None req-23b1d9a4-dfb3-4832-8f35-098755b9a582 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] rbd image bf7690ac-9b5a-41e3-83bf-3c83cbacc45c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:47:03 compute-0 nova_compute[250018]: 2026-01-20 14:47:03.561 250022 DEBUG oslo_concurrency.processutils [None req-23b1d9a4-dfb3-4832-8f35-098755b9a582 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 bf7690ac-9b5a-41e3-83bf-3c83cbacc45c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:47:03 compute-0 nova_compute[250018]: 2026-01-20 14:47:03.707 250022 DEBUG nova.policy [None req-23b1d9a4-dfb3-4832-8f35-098755b9a582 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '37e9ef97fbe0448e9fbe32d48b66211f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3b31139b2a4e49cba5e7048febf901c4', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 20 14:47:03 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1767: 321 pgs: 321 active+clean; 228 MiB data, 873 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 44 KiB/s wr, 145 op/s
Jan 20 14:47:03 compute-0 ceph-mon[74360]: pgmap v1766: 321 pgs: 321 active+clean; 242 MiB data, 879 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 781 KiB/s wr, 216 op/s
Jan 20 14:47:03 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3871043654' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:47:03 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/1489706811' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:47:04 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:47:04 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:47:04 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:47:04.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:47:04 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:47:04 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:47:04 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:47:04.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:47:04 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e238 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:47:04 compute-0 nova_compute[250018]: 2026-01-20 14:47:04.606 250022 DEBUG oslo_concurrency.processutils [None req-23b1d9a4-dfb3-4832-8f35-098755b9a582 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 bf7690ac-9b5a-41e3-83bf-3c83cbacc45c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.045s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:47:04 compute-0 nova_compute[250018]: 2026-01-20 14:47:04.704 250022 DEBUG nova.storage.rbd_utils [None req-23b1d9a4-dfb3-4832-8f35-098755b9a582 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] resizing rbd image bf7690ac-9b5a-41e3-83bf-3c83cbacc45c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 20 14:47:04 compute-0 nova_compute[250018]: 2026-01-20 14:47:04.822 250022 DEBUG nova.network.neutron [None req-23b1d9a4-dfb3-4832-8f35-098755b9a582 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: bf7690ac-9b5a-41e3-83bf-3c83cbacc45c] Successfully created port: 5659965f-0485-4982-898c-f273d7898a5f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 20 14:47:04 compute-0 nova_compute[250018]: 2026-01-20 14:47:04.895 250022 DEBUG nova.objects.instance [None req-23b1d9a4-dfb3-4832-8f35-098755b9a582 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Lazy-loading 'migration_context' on Instance uuid bf7690ac-9b5a-41e3-83bf-3c83cbacc45c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:47:04 compute-0 nova_compute[250018]: 2026-01-20 14:47:04.908 250022 DEBUG nova.virt.libvirt.driver [None req-23b1d9a4-dfb3-4832-8f35-098755b9a582 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: bf7690ac-9b5a-41e3-83bf-3c83cbacc45c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 14:47:04 compute-0 nova_compute[250018]: 2026-01-20 14:47:04.909 250022 DEBUG nova.virt.libvirt.driver [None req-23b1d9a4-dfb3-4832-8f35-098755b9a582 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: bf7690ac-9b5a-41e3-83bf-3c83cbacc45c] Ensure instance console log exists: /var/lib/nova/instances/bf7690ac-9b5a-41e3-83bf-3c83cbacc45c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 14:47:04 compute-0 nova_compute[250018]: 2026-01-20 14:47:04.909 250022 DEBUG oslo_concurrency.lockutils [None req-23b1d9a4-dfb3-4832-8f35-098755b9a582 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:47:04 compute-0 nova_compute[250018]: 2026-01-20 14:47:04.910 250022 DEBUG oslo_concurrency.lockutils [None req-23b1d9a4-dfb3-4832-8f35-098755b9a582 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:47:04 compute-0 nova_compute[250018]: 2026-01-20 14:47:04.910 250022 DEBUG oslo_concurrency.lockutils [None req-23b1d9a4-dfb3-4832-8f35-098755b9a582 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:47:05 compute-0 ceph-mon[74360]: pgmap v1767: 321 pgs: 321 active+clean; 228 MiB data, 873 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 44 KiB/s wr, 145 op/s
Jan 20 14:47:05 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/3517147788' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:47:05 compute-0 nova_compute[250018]: 2026-01-20 14:47:05.452 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:47:05 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1768: 321 pgs: 321 active+clean; 227 MiB data, 837 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.2 MiB/s wr, 170 op/s
Jan 20 14:47:05 compute-0 nova_compute[250018]: 2026-01-20 14:47:05.819 250022 DEBUG nova.network.neutron [None req-23b1d9a4-dfb3-4832-8f35-098755b9a582 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: bf7690ac-9b5a-41e3-83bf-3c83cbacc45c] Successfully updated port: 5659965f-0485-4982-898c-f273d7898a5f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 20 14:47:05 compute-0 nova_compute[250018]: 2026-01-20 14:47:05.854 250022 DEBUG oslo_concurrency.lockutils [None req-23b1d9a4-dfb3-4832-8f35-098755b9a582 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Acquiring lock "refresh_cache-bf7690ac-9b5a-41e3-83bf-3c83cbacc45c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:47:05 compute-0 nova_compute[250018]: 2026-01-20 14:47:05.854 250022 DEBUG oslo_concurrency.lockutils [None req-23b1d9a4-dfb3-4832-8f35-098755b9a582 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Acquired lock "refresh_cache-bf7690ac-9b5a-41e3-83bf-3c83cbacc45c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:47:05 compute-0 nova_compute[250018]: 2026-01-20 14:47:05.854 250022 DEBUG nova.network.neutron [None req-23b1d9a4-dfb3-4832-8f35-098755b9a582 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: bf7690ac-9b5a-41e3-83bf-3c83cbacc45c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:47:05 compute-0 nova_compute[250018]: 2026-01-20 14:47:05.951 250022 DEBUG nova.compute.manager [req-e3ad4a53-6f93-4d3a-86c1-032ad3ebe93d req-ad54af5d-5abd-4e76-9c12-af0e27ca310c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: bf7690ac-9b5a-41e3-83bf-3c83cbacc45c] Received event network-changed-5659965f-0485-4982-898c-f273d7898a5f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:47:05 compute-0 nova_compute[250018]: 2026-01-20 14:47:05.951 250022 DEBUG nova.compute.manager [req-e3ad4a53-6f93-4d3a-86c1-032ad3ebe93d req-ad54af5d-5abd-4e76-9c12-af0e27ca310c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: bf7690ac-9b5a-41e3-83bf-3c83cbacc45c] Refreshing instance network info cache due to event network-changed-5659965f-0485-4982-898c-f273d7898a5f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:47:05 compute-0 nova_compute[250018]: 2026-01-20 14:47:05.951 250022 DEBUG oslo_concurrency.lockutils [req-e3ad4a53-6f93-4d3a-86c1-032ad3ebe93d req-ad54af5d-5abd-4e76-9c12-af0e27ca310c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-bf7690ac-9b5a-41e3-83bf-3c83cbacc45c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:47:06 compute-0 nova_compute[250018]: 2026-01-20 14:47:06.024 250022 DEBUG nova.network.neutron [None req-23b1d9a4-dfb3-4832-8f35-098755b9a582 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: bf7690ac-9b5a-41e3-83bf-3c83cbacc45c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 14:47:06 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:47:06 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:47:06 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:47:06.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:47:06 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:47:06 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:47:06 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:47:06.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:47:07 compute-0 ceph-mon[74360]: pgmap v1768: 321 pgs: 321 active+clean; 227 MiB data, 837 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.2 MiB/s wr, 170 op/s
Jan 20 14:47:07 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/3563148152' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:47:07 compute-0 nova_compute[250018]: 2026-01-20 14:47:07.385 250022 DEBUG nova.network.neutron [None req-23b1d9a4-dfb3-4832-8f35-098755b9a582 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: bf7690ac-9b5a-41e3-83bf-3c83cbacc45c] Updating instance_info_cache with network_info: [{"id": "5659965f-0485-4982-898c-f273d7898a5f", "address": "fa:16:3e:b7:cb:a9", "network": {"id": "fbd5d614-a7d3-4563-913c-104506628e59", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-60721994-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b31139b2a4e49cba5e7048febf901c4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5659965f-04", "ovs_interfaceid": "5659965f-0485-4982-898c-f273d7898a5f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:47:07 compute-0 nova_compute[250018]: 2026-01-20 14:47:07.404 250022 DEBUG oslo_concurrency.lockutils [None req-23b1d9a4-dfb3-4832-8f35-098755b9a582 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Releasing lock "refresh_cache-bf7690ac-9b5a-41e3-83bf-3c83cbacc45c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:47:07 compute-0 nova_compute[250018]: 2026-01-20 14:47:07.405 250022 DEBUG nova.compute.manager [None req-23b1d9a4-dfb3-4832-8f35-098755b9a582 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: bf7690ac-9b5a-41e3-83bf-3c83cbacc45c] Instance network_info: |[{"id": "5659965f-0485-4982-898c-f273d7898a5f", "address": "fa:16:3e:b7:cb:a9", "network": {"id": "fbd5d614-a7d3-4563-913c-104506628e59", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-60721994-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b31139b2a4e49cba5e7048febf901c4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5659965f-04", "ovs_interfaceid": "5659965f-0485-4982-898c-f273d7898a5f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 20 14:47:07 compute-0 nova_compute[250018]: 2026-01-20 14:47:07.405 250022 DEBUG oslo_concurrency.lockutils [req-e3ad4a53-6f93-4d3a-86c1-032ad3ebe93d req-ad54af5d-5abd-4e76-9c12-af0e27ca310c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-bf7690ac-9b5a-41e3-83bf-3c83cbacc45c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:47:07 compute-0 nova_compute[250018]: 2026-01-20 14:47:07.405 250022 DEBUG nova.network.neutron [req-e3ad4a53-6f93-4d3a-86c1-032ad3ebe93d req-ad54af5d-5abd-4e76-9c12-af0e27ca310c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: bf7690ac-9b5a-41e3-83bf-3c83cbacc45c] Refreshing network info cache for port 5659965f-0485-4982-898c-f273d7898a5f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:47:07 compute-0 nova_compute[250018]: 2026-01-20 14:47:07.408 250022 DEBUG nova.virt.libvirt.driver [None req-23b1d9a4-dfb3-4832-8f35-098755b9a582 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: bf7690ac-9b5a-41e3-83bf-3c83cbacc45c] Start _get_guest_xml network_info=[{"id": "5659965f-0485-4982-898c-f273d7898a5f", "address": "fa:16:3e:b7:cb:a9", "network": {"id": "fbd5d614-a7d3-4563-913c-104506628e59", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-60721994-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b31139b2a4e49cba5e7048febf901c4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5659965f-04", "ovs_interfaceid": "5659965f-0485-4982-898c-f273d7898a5f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encrypted': False, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'guest_format': None, 'size': 0, 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 14:47:07 compute-0 nova_compute[250018]: 2026-01-20 14:47:07.412 250022 WARNING nova.virt.libvirt.driver [None req-23b1d9a4-dfb3-4832-8f35-098755b9a582 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:47:07 compute-0 nova_compute[250018]: 2026-01-20 14:47:07.416 250022 DEBUG nova.virt.libvirt.host [None req-23b1d9a4-dfb3-4832-8f35-098755b9a582 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 14:47:07 compute-0 nova_compute[250018]: 2026-01-20 14:47:07.416 250022 DEBUG nova.virt.libvirt.host [None req-23b1d9a4-dfb3-4832-8f35-098755b9a582 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 14:47:07 compute-0 nova_compute[250018]: 2026-01-20 14:47:07.419 250022 DEBUG nova.virt.libvirt.host [None req-23b1d9a4-dfb3-4832-8f35-098755b9a582 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 14:47:07 compute-0 nova_compute[250018]: 2026-01-20 14:47:07.419 250022 DEBUG nova.virt.libvirt.host [None req-23b1d9a4-dfb3-4832-8f35-098755b9a582 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 14:47:07 compute-0 nova_compute[250018]: 2026-01-20 14:47:07.421 250022 DEBUG nova.virt.libvirt.driver [None req-23b1d9a4-dfb3-4832-8f35-098755b9a582 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 14:47:07 compute-0 nova_compute[250018]: 2026-01-20 14:47:07.421 250022 DEBUG nova.virt.hardware [None req-23b1d9a4-dfb3-4832-8f35-098755b9a582 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 14:47:07 compute-0 nova_compute[250018]: 2026-01-20 14:47:07.421 250022 DEBUG nova.virt.hardware [None req-23b1d9a4-dfb3-4832-8f35-098755b9a582 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 14:47:07 compute-0 nova_compute[250018]: 2026-01-20 14:47:07.422 250022 DEBUG nova.virt.hardware [None req-23b1d9a4-dfb3-4832-8f35-098755b9a582 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 14:47:07 compute-0 nova_compute[250018]: 2026-01-20 14:47:07.422 250022 DEBUG nova.virt.hardware [None req-23b1d9a4-dfb3-4832-8f35-098755b9a582 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 14:47:07 compute-0 nova_compute[250018]: 2026-01-20 14:47:07.422 250022 DEBUG nova.virt.hardware [None req-23b1d9a4-dfb3-4832-8f35-098755b9a582 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 14:47:07 compute-0 nova_compute[250018]: 2026-01-20 14:47:07.422 250022 DEBUG nova.virt.hardware [None req-23b1d9a4-dfb3-4832-8f35-098755b9a582 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 14:47:07 compute-0 nova_compute[250018]: 2026-01-20 14:47:07.423 250022 DEBUG nova.virt.hardware [None req-23b1d9a4-dfb3-4832-8f35-098755b9a582 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 14:47:07 compute-0 nova_compute[250018]: 2026-01-20 14:47:07.423 250022 DEBUG nova.virt.hardware [None req-23b1d9a4-dfb3-4832-8f35-098755b9a582 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 14:47:07 compute-0 nova_compute[250018]: 2026-01-20 14:47:07.423 250022 DEBUG nova.virt.hardware [None req-23b1d9a4-dfb3-4832-8f35-098755b9a582 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 14:47:07 compute-0 nova_compute[250018]: 2026-01-20 14:47:07.424 250022 DEBUG nova.virt.hardware [None req-23b1d9a4-dfb3-4832-8f35-098755b9a582 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 14:47:07 compute-0 nova_compute[250018]: 2026-01-20 14:47:07.424 250022 DEBUG nova.virt.hardware [None req-23b1d9a4-dfb3-4832-8f35-098755b9a582 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 14:47:07 compute-0 nova_compute[250018]: 2026-01-20 14:47:07.427 250022 DEBUG oslo_concurrency.processutils [None req-23b1d9a4-dfb3-4832-8f35-098755b9a582 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:47:07 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1769: 321 pgs: 321 active+clean; 252 MiB data, 847 MiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 2.1 MiB/s wr, 157 op/s
Jan 20 14:47:07 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:47:07 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1903933642' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:47:07 compute-0 nova_compute[250018]: 2026-01-20 14:47:07.874 250022 DEBUG oslo_concurrency.processutils [None req-23b1d9a4-dfb3-4832-8f35-098755b9a582 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:47:07 compute-0 nova_compute[250018]: 2026-01-20 14:47:07.897 250022 DEBUG nova.storage.rbd_utils [None req-23b1d9a4-dfb3-4832-8f35-098755b9a582 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] rbd image bf7690ac-9b5a-41e3-83bf-3c83cbacc45c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:47:07 compute-0 nova_compute[250018]: 2026-01-20 14:47:07.900 250022 DEBUG oslo_concurrency.processutils [None req-23b1d9a4-dfb3-4832-8f35-098755b9a582 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:47:07 compute-0 nova_compute[250018]: 2026-01-20 14:47:07.926 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:47:08 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/724914289' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:47:08 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/724914289' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:47:08 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/1903933642' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:47:08 compute-0 sudo[305944]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:47:08 compute-0 sudo[305944]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:47:08 compute-0 sudo[305944]: pam_unix(sudo:session): session closed for user root
Jan 20 14:47:08 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:47:08 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2155745966' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:47:08 compute-0 nova_compute[250018]: 2026-01-20 14:47:08.327 250022 DEBUG oslo_concurrency.processutils [None req-23b1d9a4-dfb3-4832-8f35-098755b9a582 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:47:08 compute-0 nova_compute[250018]: 2026-01-20 14:47:08.330 250022 DEBUG nova.virt.libvirt.vif [None req-23b1d9a4-dfb3-4832-8f35-098755b9a582 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:47:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1331472194',display_name='tempest-DeleteServersTestJSON-server-1331472194',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1331472194',id=99,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3b31139b2a4e49cba5e7048febf901c4',ramdisk_id='',reservation_id='r-5fm4q1fz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-1162922273',owner_user_name='tempest-DeleteServersTestJSON-
1162922273-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:47:03Z,user_data=None,user_id='37e9ef97fbe0448e9fbe32d48b66211f',uuid=bf7690ac-9b5a-41e3-83bf-3c83cbacc45c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5659965f-0485-4982-898c-f273d7898a5f", "address": "fa:16:3e:b7:cb:a9", "network": {"id": "fbd5d614-a7d3-4563-913c-104506628e59", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-60721994-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b31139b2a4e49cba5e7048febf901c4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5659965f-04", "ovs_interfaceid": "5659965f-0485-4982-898c-f273d7898a5f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 14:47:08 compute-0 nova_compute[250018]: 2026-01-20 14:47:08.331 250022 DEBUG nova.network.os_vif_util [None req-23b1d9a4-dfb3-4832-8f35-098755b9a582 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Converting VIF {"id": "5659965f-0485-4982-898c-f273d7898a5f", "address": "fa:16:3e:b7:cb:a9", "network": {"id": "fbd5d614-a7d3-4563-913c-104506628e59", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-60721994-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b31139b2a4e49cba5e7048febf901c4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5659965f-04", "ovs_interfaceid": "5659965f-0485-4982-898c-f273d7898a5f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:47:08 compute-0 nova_compute[250018]: 2026-01-20 14:47:08.332 250022 DEBUG nova.network.os_vif_util [None req-23b1d9a4-dfb3-4832-8f35-098755b9a582 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b7:cb:a9,bridge_name='br-int',has_traffic_filtering=True,id=5659965f-0485-4982-898c-f273d7898a5f,network=Network(fbd5d614-a7d3-4563-913c-104506628e59),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5659965f-04') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:47:08 compute-0 nova_compute[250018]: 2026-01-20 14:47:08.333 250022 DEBUG nova.objects.instance [None req-23b1d9a4-dfb3-4832-8f35-098755b9a582 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Lazy-loading 'pci_devices' on Instance uuid bf7690ac-9b5a-41e3-83bf-3c83cbacc45c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:47:08 compute-0 sudo[305969]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:47:08 compute-0 sudo[305969]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:47:08 compute-0 sudo[305969]: pam_unix(sudo:session): session closed for user root
Jan 20 14:47:08 compute-0 nova_compute[250018]: 2026-01-20 14:47:08.349 250022 DEBUG nova.virt.libvirt.driver [None req-23b1d9a4-dfb3-4832-8f35-098755b9a582 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: bf7690ac-9b5a-41e3-83bf-3c83cbacc45c] End _get_guest_xml xml=<domain type="kvm">
Jan 20 14:47:08 compute-0 nova_compute[250018]:   <uuid>bf7690ac-9b5a-41e3-83bf-3c83cbacc45c</uuid>
Jan 20 14:47:08 compute-0 nova_compute[250018]:   <name>instance-00000063</name>
Jan 20 14:47:08 compute-0 nova_compute[250018]:   <memory>131072</memory>
Jan 20 14:47:08 compute-0 nova_compute[250018]:   <vcpu>1</vcpu>
Jan 20 14:47:08 compute-0 nova_compute[250018]:   <metadata>
Jan 20 14:47:08 compute-0 nova_compute[250018]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 14:47:08 compute-0 nova_compute[250018]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 14:47:08 compute-0 nova_compute[250018]:       <nova:name>tempest-DeleteServersTestJSON-server-1331472194</nova:name>
Jan 20 14:47:08 compute-0 nova_compute[250018]:       <nova:creationTime>2026-01-20 14:47:07</nova:creationTime>
Jan 20 14:47:08 compute-0 nova_compute[250018]:       <nova:flavor name="m1.nano">
Jan 20 14:47:08 compute-0 nova_compute[250018]:         <nova:memory>128</nova:memory>
Jan 20 14:47:08 compute-0 nova_compute[250018]:         <nova:disk>1</nova:disk>
Jan 20 14:47:08 compute-0 nova_compute[250018]:         <nova:swap>0</nova:swap>
Jan 20 14:47:08 compute-0 nova_compute[250018]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 14:47:08 compute-0 nova_compute[250018]:         <nova:vcpus>1</nova:vcpus>
Jan 20 14:47:08 compute-0 nova_compute[250018]:       </nova:flavor>
Jan 20 14:47:08 compute-0 nova_compute[250018]:       <nova:owner>
Jan 20 14:47:08 compute-0 nova_compute[250018]:         <nova:user uuid="37e9ef97fbe0448e9fbe32d48b66211f">tempest-DeleteServersTestJSON-1162922273-project-member</nova:user>
Jan 20 14:47:08 compute-0 nova_compute[250018]:         <nova:project uuid="3b31139b2a4e49cba5e7048febf901c4">tempest-DeleteServersTestJSON-1162922273</nova:project>
Jan 20 14:47:08 compute-0 nova_compute[250018]:       </nova:owner>
Jan 20 14:47:08 compute-0 nova_compute[250018]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 14:47:08 compute-0 nova_compute[250018]:       <nova:ports>
Jan 20 14:47:08 compute-0 nova_compute[250018]:         <nova:port uuid="5659965f-0485-4982-898c-f273d7898a5f">
Jan 20 14:47:08 compute-0 nova_compute[250018]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Jan 20 14:47:08 compute-0 nova_compute[250018]:         </nova:port>
Jan 20 14:47:08 compute-0 nova_compute[250018]:       </nova:ports>
Jan 20 14:47:08 compute-0 nova_compute[250018]:     </nova:instance>
Jan 20 14:47:08 compute-0 nova_compute[250018]:   </metadata>
Jan 20 14:47:08 compute-0 nova_compute[250018]:   <sysinfo type="smbios">
Jan 20 14:47:08 compute-0 nova_compute[250018]:     <system>
Jan 20 14:47:08 compute-0 nova_compute[250018]:       <entry name="manufacturer">RDO</entry>
Jan 20 14:47:08 compute-0 nova_compute[250018]:       <entry name="product">OpenStack Compute</entry>
Jan 20 14:47:08 compute-0 nova_compute[250018]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 14:47:08 compute-0 nova_compute[250018]:       <entry name="serial">bf7690ac-9b5a-41e3-83bf-3c83cbacc45c</entry>
Jan 20 14:47:08 compute-0 nova_compute[250018]:       <entry name="uuid">bf7690ac-9b5a-41e3-83bf-3c83cbacc45c</entry>
Jan 20 14:47:08 compute-0 nova_compute[250018]:       <entry name="family">Virtual Machine</entry>
Jan 20 14:47:08 compute-0 nova_compute[250018]:     </system>
Jan 20 14:47:08 compute-0 nova_compute[250018]:   </sysinfo>
Jan 20 14:47:08 compute-0 nova_compute[250018]:   <os>
Jan 20 14:47:08 compute-0 nova_compute[250018]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 14:47:08 compute-0 nova_compute[250018]:     <boot dev="hd"/>
Jan 20 14:47:08 compute-0 nova_compute[250018]:     <smbios mode="sysinfo"/>
Jan 20 14:47:08 compute-0 nova_compute[250018]:   </os>
Jan 20 14:47:08 compute-0 nova_compute[250018]:   <features>
Jan 20 14:47:08 compute-0 nova_compute[250018]:     <acpi/>
Jan 20 14:47:08 compute-0 nova_compute[250018]:     <apic/>
Jan 20 14:47:08 compute-0 nova_compute[250018]:     <vmcoreinfo/>
Jan 20 14:47:08 compute-0 nova_compute[250018]:   </features>
Jan 20 14:47:08 compute-0 nova_compute[250018]:   <clock offset="utc">
Jan 20 14:47:08 compute-0 nova_compute[250018]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 14:47:08 compute-0 nova_compute[250018]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 14:47:08 compute-0 nova_compute[250018]:     <timer name="hpet" present="no"/>
Jan 20 14:47:08 compute-0 nova_compute[250018]:   </clock>
Jan 20 14:47:08 compute-0 nova_compute[250018]:   <cpu mode="custom" match="exact">
Jan 20 14:47:08 compute-0 nova_compute[250018]:     <model>Nehalem</model>
Jan 20 14:47:08 compute-0 nova_compute[250018]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 14:47:08 compute-0 nova_compute[250018]:   </cpu>
Jan 20 14:47:08 compute-0 nova_compute[250018]:   <devices>
Jan 20 14:47:08 compute-0 nova_compute[250018]:     <disk type="network" device="disk">
Jan 20 14:47:08 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 14:47:08 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/bf7690ac-9b5a-41e3-83bf-3c83cbacc45c_disk">
Jan 20 14:47:08 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:47:08 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:47:08 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:47:08 compute-0 nova_compute[250018]:       </source>
Jan 20 14:47:08 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 14:47:08 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:47:08 compute-0 nova_compute[250018]:       </auth>
Jan 20 14:47:08 compute-0 nova_compute[250018]:       <target dev="vda" bus="virtio"/>
Jan 20 14:47:08 compute-0 nova_compute[250018]:     </disk>
Jan 20 14:47:08 compute-0 nova_compute[250018]:     <disk type="network" device="cdrom">
Jan 20 14:47:08 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 14:47:08 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/bf7690ac-9b5a-41e3-83bf-3c83cbacc45c_disk.config">
Jan 20 14:47:08 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:47:08 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:47:08 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:47:08 compute-0 nova_compute[250018]:       </source>
Jan 20 14:47:08 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 14:47:08 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:47:08 compute-0 nova_compute[250018]:       </auth>
Jan 20 14:47:08 compute-0 nova_compute[250018]:       <target dev="sda" bus="sata"/>
Jan 20 14:47:08 compute-0 nova_compute[250018]:     </disk>
Jan 20 14:47:08 compute-0 nova_compute[250018]:     <interface type="ethernet">
Jan 20 14:47:08 compute-0 nova_compute[250018]:       <mac address="fa:16:3e:b7:cb:a9"/>
Jan 20 14:47:08 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 14:47:08 compute-0 nova_compute[250018]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 14:47:08 compute-0 nova_compute[250018]:       <mtu size="1442"/>
Jan 20 14:47:08 compute-0 nova_compute[250018]:       <target dev="tap5659965f-04"/>
Jan 20 14:47:08 compute-0 nova_compute[250018]:     </interface>
Jan 20 14:47:08 compute-0 nova_compute[250018]:     <serial type="pty">
Jan 20 14:47:08 compute-0 nova_compute[250018]:       <log file="/var/lib/nova/instances/bf7690ac-9b5a-41e3-83bf-3c83cbacc45c/console.log" append="off"/>
Jan 20 14:47:08 compute-0 nova_compute[250018]:     </serial>
Jan 20 14:47:08 compute-0 nova_compute[250018]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 14:47:08 compute-0 nova_compute[250018]:     <video>
Jan 20 14:47:08 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 14:47:08 compute-0 nova_compute[250018]:     </video>
Jan 20 14:47:08 compute-0 nova_compute[250018]:     <input type="tablet" bus="usb"/>
Jan 20 14:47:08 compute-0 nova_compute[250018]:     <rng model="virtio">
Jan 20 14:47:08 compute-0 nova_compute[250018]:       <backend model="random">/dev/urandom</backend>
Jan 20 14:47:08 compute-0 nova_compute[250018]:     </rng>
Jan 20 14:47:08 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root"/>
Jan 20 14:47:08 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:47:08 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:47:08 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:47:08 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:47:08 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:47:08 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:47:08 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:47:08 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:47:08 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:47:08 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:47:08 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:47:08 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:47:08 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:47:08 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:47:08 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:47:08 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:47:08 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:47:08 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:47:08 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:47:08 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:47:08 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:47:08 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:47:08 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:47:08 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:47:08 compute-0 nova_compute[250018]:     <controller type="usb" index="0"/>
Jan 20 14:47:08 compute-0 nova_compute[250018]:     <memballoon model="virtio">
Jan 20 14:47:08 compute-0 nova_compute[250018]:       <stats period="10"/>
Jan 20 14:47:08 compute-0 nova_compute[250018]:     </memballoon>
Jan 20 14:47:08 compute-0 nova_compute[250018]:   </devices>
Jan 20 14:47:08 compute-0 nova_compute[250018]: </domain>
Jan 20 14:47:08 compute-0 nova_compute[250018]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 14:47:08 compute-0 nova_compute[250018]: 2026-01-20 14:47:08.350 250022 DEBUG nova.compute.manager [None req-23b1d9a4-dfb3-4832-8f35-098755b9a582 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: bf7690ac-9b5a-41e3-83bf-3c83cbacc45c] Preparing to wait for external event network-vif-plugged-5659965f-0485-4982-898c-f273d7898a5f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 20 14:47:08 compute-0 nova_compute[250018]: 2026-01-20 14:47:08.351 250022 DEBUG oslo_concurrency.lockutils [None req-23b1d9a4-dfb3-4832-8f35-098755b9a582 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Acquiring lock "bf7690ac-9b5a-41e3-83bf-3c83cbacc45c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:47:08 compute-0 nova_compute[250018]: 2026-01-20 14:47:08.351 250022 DEBUG oslo_concurrency.lockutils [None req-23b1d9a4-dfb3-4832-8f35-098755b9a582 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Lock "bf7690ac-9b5a-41e3-83bf-3c83cbacc45c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:47:08 compute-0 nova_compute[250018]: 2026-01-20 14:47:08.351 250022 DEBUG oslo_concurrency.lockutils [None req-23b1d9a4-dfb3-4832-8f35-098755b9a582 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Lock "bf7690ac-9b5a-41e3-83bf-3c83cbacc45c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:47:08 compute-0 nova_compute[250018]: 2026-01-20 14:47:08.352 250022 DEBUG nova.virt.libvirt.vif [None req-23b1d9a4-dfb3-4832-8f35-098755b9a582 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:47:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1331472194',display_name='tempest-DeleteServersTestJSON-server-1331472194',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1331472194',id=99,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3b31139b2a4e49cba5e7048febf901c4',ramdisk_id='',reservation_id='r-5fm4q1fz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-1162922273',owner_user_name='tempest-DeleteServer
sTestJSON-1162922273-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:47:03Z,user_data=None,user_id='37e9ef97fbe0448e9fbe32d48b66211f',uuid=bf7690ac-9b5a-41e3-83bf-3c83cbacc45c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5659965f-0485-4982-898c-f273d7898a5f", "address": "fa:16:3e:b7:cb:a9", "network": {"id": "fbd5d614-a7d3-4563-913c-104506628e59", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-60721994-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b31139b2a4e49cba5e7048febf901c4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5659965f-04", "ovs_interfaceid": "5659965f-0485-4982-898c-f273d7898a5f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 14:47:08 compute-0 nova_compute[250018]: 2026-01-20 14:47:08.352 250022 DEBUG nova.network.os_vif_util [None req-23b1d9a4-dfb3-4832-8f35-098755b9a582 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Converting VIF {"id": "5659965f-0485-4982-898c-f273d7898a5f", "address": "fa:16:3e:b7:cb:a9", "network": {"id": "fbd5d614-a7d3-4563-913c-104506628e59", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-60721994-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b31139b2a4e49cba5e7048febf901c4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5659965f-04", "ovs_interfaceid": "5659965f-0485-4982-898c-f273d7898a5f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:47:08 compute-0 nova_compute[250018]: 2026-01-20 14:47:08.353 250022 DEBUG nova.network.os_vif_util [None req-23b1d9a4-dfb3-4832-8f35-098755b9a582 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b7:cb:a9,bridge_name='br-int',has_traffic_filtering=True,id=5659965f-0485-4982-898c-f273d7898a5f,network=Network(fbd5d614-a7d3-4563-913c-104506628e59),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5659965f-04') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:47:08 compute-0 nova_compute[250018]: 2026-01-20 14:47:08.353 250022 DEBUG os_vif [None req-23b1d9a4-dfb3-4832-8f35-098755b9a582 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b7:cb:a9,bridge_name='br-int',has_traffic_filtering=True,id=5659965f-0485-4982-898c-f273d7898a5f,network=Network(fbd5d614-a7d3-4563-913c-104506628e59),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5659965f-04') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 14:47:08 compute-0 nova_compute[250018]: 2026-01-20 14:47:08.353 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:47:08 compute-0 nova_compute[250018]: 2026-01-20 14:47:08.354 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:47:08 compute-0 nova_compute[250018]: 2026-01-20 14:47:08.354 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:47:08 compute-0 nova_compute[250018]: 2026-01-20 14:47:08.357 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:47:08 compute-0 nova_compute[250018]: 2026-01-20 14:47:08.357 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5659965f-04, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:47:08 compute-0 nova_compute[250018]: 2026-01-20 14:47:08.358 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5659965f-04, col_values=(('external_ids', {'iface-id': '5659965f-0485-4982-898c-f273d7898a5f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b7:cb:a9', 'vm-uuid': 'bf7690ac-9b5a-41e3-83bf-3c83cbacc45c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:47:08 compute-0 nova_compute[250018]: 2026-01-20 14:47:08.359 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:47:08 compute-0 NetworkManager[48960]: <info>  [1768920428.3603] manager: (tap5659965f-04): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/162)
Jan 20 14:47:08 compute-0 nova_compute[250018]: 2026-01-20 14:47:08.362 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:47:08 compute-0 nova_compute[250018]: 2026-01-20 14:47:08.365 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:47:08 compute-0 nova_compute[250018]: 2026-01-20 14:47:08.366 250022 INFO os_vif [None req-23b1d9a4-dfb3-4832-8f35-098755b9a582 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b7:cb:a9,bridge_name='br-int',has_traffic_filtering=True,id=5659965f-0485-4982-898c-f273d7898a5f,network=Network(fbd5d614-a7d3-4563-913c-104506628e59),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5659965f-04')
Jan 20 14:47:08 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:47:08 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:47:08 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:47:08.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:47:08 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:47:08 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:47:08 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:47:08.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:47:08 compute-0 nova_compute[250018]: 2026-01-20 14:47:08.522 250022 DEBUG nova.virt.libvirt.driver [None req-23b1d9a4-dfb3-4832-8f35-098755b9a582 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:47:08 compute-0 nova_compute[250018]: 2026-01-20 14:47:08.523 250022 DEBUG nova.virt.libvirt.driver [None req-23b1d9a4-dfb3-4832-8f35-098755b9a582 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:47:08 compute-0 nova_compute[250018]: 2026-01-20 14:47:08.523 250022 DEBUG nova.virt.libvirt.driver [None req-23b1d9a4-dfb3-4832-8f35-098755b9a582 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] No VIF found with MAC fa:16:3e:b7:cb:a9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 14:47:08 compute-0 nova_compute[250018]: 2026-01-20 14:47:08.524 250022 INFO nova.virt.libvirt.driver [None req-23b1d9a4-dfb3-4832-8f35-098755b9a582 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: bf7690ac-9b5a-41e3-83bf-3c83cbacc45c] Using config drive
Jan 20 14:47:08 compute-0 nova_compute[250018]: 2026-01-20 14:47:08.547 250022 DEBUG nova.storage.rbd_utils [None req-23b1d9a4-dfb3-4832-8f35-098755b9a582 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] rbd image bf7690ac-9b5a-41e3-83bf-3c83cbacc45c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:47:08 compute-0 nova_compute[250018]: 2026-01-20 14:47:08.890 250022 INFO nova.virt.libvirt.driver [None req-23b1d9a4-dfb3-4832-8f35-098755b9a582 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: bf7690ac-9b5a-41e3-83bf-3c83cbacc45c] Creating config drive at /var/lib/nova/instances/bf7690ac-9b5a-41e3-83bf-3c83cbacc45c/disk.config
Jan 20 14:47:08 compute-0 nova_compute[250018]: 2026-01-20 14:47:08.895 250022 DEBUG oslo_concurrency.processutils [None req-23b1d9a4-dfb3-4832-8f35-098755b9a582 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/bf7690ac-9b5a-41e3-83bf-3c83cbacc45c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1n2p1sao execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:47:09 compute-0 nova_compute[250018]: 2026-01-20 14:47:09.020 250022 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768920414.018239, a7738b4c-0943-43c1-94f9-6777caabf0fe => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:47:09 compute-0 nova_compute[250018]: 2026-01-20 14:47:09.020 250022 INFO nova.compute.manager [-] [instance: a7738b4c-0943-43c1-94f9-6777caabf0fe] VM Stopped (Lifecycle Event)
Jan 20 14:47:09 compute-0 nova_compute[250018]: 2026-01-20 14:47:09.027 250022 DEBUG oslo_concurrency.processutils [None req-23b1d9a4-dfb3-4832-8f35-098755b9a582 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/bf7690ac-9b5a-41e3-83bf-3c83cbacc45c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1n2p1sao" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:47:09 compute-0 nova_compute[250018]: 2026-01-20 14:47:09.052 250022 DEBUG nova.storage.rbd_utils [None req-23b1d9a4-dfb3-4832-8f35-098755b9a582 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] rbd image bf7690ac-9b5a-41e3-83bf-3c83cbacc45c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:47:09 compute-0 nova_compute[250018]: 2026-01-20 14:47:09.056 250022 DEBUG oslo_concurrency.processutils [None req-23b1d9a4-dfb3-4832-8f35-098755b9a582 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/bf7690ac-9b5a-41e3-83bf-3c83cbacc45c/disk.config bf7690ac-9b5a-41e3-83bf-3c83cbacc45c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:47:09 compute-0 nova_compute[250018]: 2026-01-20 14:47:09.092 250022 DEBUG nova.compute.manager [None req-bc0995d4-5491-47db-b3ea-0d2486d9f942 - - - - - -] [instance: a7738b4c-0943-43c1-94f9-6777caabf0fe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:47:09 compute-0 ceph-mon[74360]: pgmap v1769: 321 pgs: 321 active+clean; 252 MiB data, 847 MiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 2.1 MiB/s wr, 157 op/s
Jan 20 14:47:09 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/2155745966' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:47:09 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e238 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:47:09 compute-0 nova_compute[250018]: 2026-01-20 14:47:09.554 250022 DEBUG oslo_concurrency.processutils [None req-23b1d9a4-dfb3-4832-8f35-098755b9a582 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/bf7690ac-9b5a-41e3-83bf-3c83cbacc45c/disk.config bf7690ac-9b5a-41e3-83bf-3c83cbacc45c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:47:09 compute-0 nova_compute[250018]: 2026-01-20 14:47:09.555 250022 INFO nova.virt.libvirt.driver [None req-23b1d9a4-dfb3-4832-8f35-098755b9a582 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: bf7690ac-9b5a-41e3-83bf-3c83cbacc45c] Deleting local config drive /var/lib/nova/instances/bf7690ac-9b5a-41e3-83bf-3c83cbacc45c/disk.config because it was imported into RBD.
Jan 20 14:47:09 compute-0 kernel: tap5659965f-04: entered promiscuous mode
Jan 20 14:47:09 compute-0 NetworkManager[48960]: <info>  [1768920429.6107] manager: (tap5659965f-04): new Tun device (/org/freedesktop/NetworkManager/Devices/163)
Jan 20 14:47:09 compute-0 ovn_controller[148666]: 2026-01-20T14:47:09Z|00318|binding|INFO|Claiming lport 5659965f-0485-4982-898c-f273d7898a5f for this chassis.
Jan 20 14:47:09 compute-0 ovn_controller[148666]: 2026-01-20T14:47:09Z|00319|binding|INFO|5659965f-0485-4982-898c-f273d7898a5f: Claiming fa:16:3e:b7:cb:a9 10.100.0.4
Jan 20 14:47:09 compute-0 nova_compute[250018]: 2026-01-20 14:47:09.612 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:47:09 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:47:09.623 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b7:cb:a9 10.100.0.4'], port_security=['fa:16:3e:b7:cb:a9 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'bf7690ac-9b5a-41e3-83bf-3c83cbacc45c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fbd5d614-a7d3-4563-913c-104506628e59', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3b31139b2a4e49cba5e7048febf901c4', 'neutron:revision_number': '2', 'neutron:security_group_ids': '117d6f57-074c-4b36-b375-42e0ab117254', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c42c6982-be52-495a-8746-42a46932572f, chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=5659965f-0485-4982-898c-f273d7898a5f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:47:09 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:47:09.627 160071 INFO neutron.agent.ovn.metadata.agent [-] Port 5659965f-0485-4982-898c-f273d7898a5f in datapath fbd5d614-a7d3-4563-913c-104506628e59 bound to our chassis
Jan 20 14:47:09 compute-0 ovn_controller[148666]: 2026-01-20T14:47:09Z|00320|binding|INFO|Setting lport 5659965f-0485-4982-898c-f273d7898a5f ovn-installed in OVS
Jan 20 14:47:09 compute-0 ovn_controller[148666]: 2026-01-20T14:47:09Z|00321|binding|INFO|Setting lport 5659965f-0485-4982-898c-f273d7898a5f up in Southbound
Jan 20 14:47:09 compute-0 nova_compute[250018]: 2026-01-20 14:47:09.630 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:47:09 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:47:09.631 160071 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fbd5d614-a7d3-4563-913c-104506628e59
Jan 20 14:47:09 compute-0 nova_compute[250018]: 2026-01-20 14:47:09.631 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:47:09 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:47:09.642 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[69dddfc4-d7dd-4f5b-9319-6ac0a0d78352]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:47:09 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:47:09.643 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapfbd5d614-a1 in ovnmeta-fbd5d614-a7d3-4563-913c-104506628e59 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 14:47:09 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:47:09.644 257604 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapfbd5d614-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 14:47:09 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:47:09.644 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[742a44f6-67cd-4ed5-a85f-3163ff942d43]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:47:09 compute-0 systemd-machined[216401]: New machine qemu-42-instance-00000063.
Jan 20 14:47:09 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:47:09.645 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[5e14fa06-b0cc-4f86-aa9c-9d7cb26ca295]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:47:09 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:47:09.657 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[f9084e8d-60dc-4742-8cc5-18413866eef4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:47:09 compute-0 systemd[1]: Started Virtual Machine qemu-42-instance-00000063.
Jan 20 14:47:09 compute-0 systemd-udevd[306073]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:47:09 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:47:09.677 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[9dbc4fbf-cddb-49a5-ae65-952aca92962a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:47:09 compute-0 NetworkManager[48960]: <info>  [1768920429.6814] device (tap5659965f-04): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 14:47:09 compute-0 NetworkManager[48960]: <info>  [1768920429.6821] device (tap5659965f-04): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 14:47:09 compute-0 nova_compute[250018]: 2026-01-20 14:47:09.699 250022 DEBUG nova.network.neutron [req-e3ad4a53-6f93-4d3a-86c1-032ad3ebe93d req-ad54af5d-5abd-4e76-9c12-af0e27ca310c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: bf7690ac-9b5a-41e3-83bf-3c83cbacc45c] Updated VIF entry in instance network info cache for port 5659965f-0485-4982-898c-f273d7898a5f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 14:47:09 compute-0 nova_compute[250018]: 2026-01-20 14:47:09.700 250022 DEBUG nova.network.neutron [req-e3ad4a53-6f93-4d3a-86c1-032ad3ebe93d req-ad54af5d-5abd-4e76-9c12-af0e27ca310c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: bf7690ac-9b5a-41e3-83bf-3c83cbacc45c] Updating instance_info_cache with network_info: [{"id": "5659965f-0485-4982-898c-f273d7898a5f", "address": "fa:16:3e:b7:cb:a9", "network": {"id": "fbd5d614-a7d3-4563-913c-104506628e59", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-60721994-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b31139b2a4e49cba5e7048febf901c4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5659965f-04", "ovs_interfaceid": "5659965f-0485-4982-898c-f273d7898a5f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:47:09 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:47:09.706 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[ae631899-d1c5-4338-8e2b-e8299461ad33]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:47:09 compute-0 systemd-udevd[306075]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:47:09 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:47:09.710 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[267ed54b-c8d8-4a1c-a1f9-6d9cee3c7e87]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:47:09 compute-0 NetworkManager[48960]: <info>  [1768920429.7122] manager: (tapfbd5d614-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/164)
Jan 20 14:47:09 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:47:09.742 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[ea44ccfa-a5a9-4e5e-8671-b9bd3551c956]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:47:09 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:47:09.745 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[1a4af1fe-87e6-46c4-991a-e0f867cec5f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:47:09 compute-0 nova_compute[250018]: 2026-01-20 14:47:09.752 250022 DEBUG oslo_concurrency.lockutils [req-e3ad4a53-6f93-4d3a-86c1-032ad3ebe93d req-ad54af5d-5abd-4e76-9c12-af0e27ca310c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-bf7690ac-9b5a-41e3-83bf-3c83cbacc45c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:47:09 compute-0 NetworkManager[48960]: <info>  [1768920429.7659] device (tapfbd5d614-a0): carrier: link connected
Jan 20 14:47:09 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:47:09.770 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[cf859eab-6a4d-477b-81ad-fbf35eea1acf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:47:09 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:47:09.786 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[8dc1de93-658d-4a5a-9879-9cce6d8e248b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfbd5d614-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5c:38:be'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 104], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 637048, 'reachable_time': 43456, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 306104, 'error': None, 'target': 'ovnmeta-fbd5d614-a7d3-4563-913c-104506628e59', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:47:09 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1770: 321 pgs: 321 active+clean; 283 MiB data, 859 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 3.4 MiB/s wr, 136 op/s
Jan 20 14:47:09 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:47:09.797 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[12a35580-2e1c-4a6a-b379-08ac8d9b8317]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe5c:38be'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 637048, 'tstamp': 637048}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 306105, 'error': None, 'target': 'ovnmeta-fbd5d614-a7d3-4563-913c-104506628e59', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:47:09 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:47:09.813 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[96c17002-486d-4977-b3e8-43c2c54293d4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfbd5d614-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5c:38:be'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 104], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 637048, 'reachable_time': 43456, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 306106, 'error': None, 'target': 'ovnmeta-fbd5d614-a7d3-4563-913c-104506628e59', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:47:09 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:47:09.841 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[b33db421-9437-43c5-92d3-dd0b5d67f293]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:47:09 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:47:09.894 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[793bd0ef-0c38-439a-b6d8-87fe7be72bce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:47:09 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:47:09.895 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfbd5d614-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:47:09 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:47:09.895 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:47:09 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:47:09.895 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfbd5d614-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:47:09 compute-0 NetworkManager[48960]: <info>  [1768920429.8979] manager: (tapfbd5d614-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/165)
Jan 20 14:47:09 compute-0 nova_compute[250018]: 2026-01-20 14:47:09.898 250022 DEBUG nova.compute.manager [req-f9e99604-0bcd-4644-95e8-b27c26ce31da req-0ae8de94-5b84-4526-883b-dc4264155eab 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: bf7690ac-9b5a-41e3-83bf-3c83cbacc45c] Received event network-vif-plugged-5659965f-0485-4982-898c-f273d7898a5f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:47:09 compute-0 nova_compute[250018]: 2026-01-20 14:47:09.899 250022 DEBUG oslo_concurrency.lockutils [req-f9e99604-0bcd-4644-95e8-b27c26ce31da req-0ae8de94-5b84-4526-883b-dc4264155eab 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "bf7690ac-9b5a-41e3-83bf-3c83cbacc45c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:47:09 compute-0 kernel: tapfbd5d614-a0: entered promiscuous mode
Jan 20 14:47:09 compute-0 nova_compute[250018]: 2026-01-20 14:47:09.899 250022 DEBUG oslo_concurrency.lockutils [req-f9e99604-0bcd-4644-95e8-b27c26ce31da req-0ae8de94-5b84-4526-883b-dc4264155eab 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "bf7690ac-9b5a-41e3-83bf-3c83cbacc45c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:47:09 compute-0 nova_compute[250018]: 2026-01-20 14:47:09.900 250022 DEBUG oslo_concurrency.lockutils [req-f9e99604-0bcd-4644-95e8-b27c26ce31da req-0ae8de94-5b84-4526-883b-dc4264155eab 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "bf7690ac-9b5a-41e3-83bf-3c83cbacc45c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:47:09 compute-0 nova_compute[250018]: 2026-01-20 14:47:09.900 250022 DEBUG nova.compute.manager [req-f9e99604-0bcd-4644-95e8-b27c26ce31da req-0ae8de94-5b84-4526-883b-dc4264155eab 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: bf7690ac-9b5a-41e3-83bf-3c83cbacc45c] Processing event network-vif-plugged-5659965f-0485-4982-898c-f273d7898a5f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 20 14:47:09 compute-0 nova_compute[250018]: 2026-01-20 14:47:09.901 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:47:09 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:47:09.901 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfbd5d614-a0, col_values=(('external_ids', {'iface-id': 'b370b74e-dca0-4ff7-a96f-85b392e20721'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:47:09 compute-0 ovn_controller[148666]: 2026-01-20T14:47:09Z|00322|binding|INFO|Releasing lport b370b74e-dca0-4ff7-a96f-85b392e20721 from this chassis (sb_readonly=0)
Jan 20 14:47:09 compute-0 nova_compute[250018]: 2026-01-20 14:47:09.919 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:47:09 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:47:09.920 160071 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/fbd5d614-a7d3-4563-913c-104506628e59.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/fbd5d614-a7d3-4563-913c-104506628e59.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 14:47:09 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:47:09.921 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[19ea73e5-db0a-44eb-b28d-b20f05c7caf1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:47:09 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:47:09.922 160071 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 14:47:09 compute-0 ovn_metadata_agent[160049]: global
Jan 20 14:47:09 compute-0 ovn_metadata_agent[160049]:     log         /dev/log local0 debug
Jan 20 14:47:09 compute-0 ovn_metadata_agent[160049]:     log-tag     haproxy-metadata-proxy-fbd5d614-a7d3-4563-913c-104506628e59
Jan 20 14:47:09 compute-0 ovn_metadata_agent[160049]:     user        root
Jan 20 14:47:09 compute-0 ovn_metadata_agent[160049]:     group       root
Jan 20 14:47:09 compute-0 ovn_metadata_agent[160049]:     maxconn     1024
Jan 20 14:47:09 compute-0 ovn_metadata_agent[160049]:     pidfile     /var/lib/neutron/external/pids/fbd5d614-a7d3-4563-913c-104506628e59.pid.haproxy
Jan 20 14:47:09 compute-0 ovn_metadata_agent[160049]:     daemon
Jan 20 14:47:09 compute-0 ovn_metadata_agent[160049]: 
Jan 20 14:47:09 compute-0 ovn_metadata_agent[160049]: defaults
Jan 20 14:47:09 compute-0 ovn_metadata_agent[160049]:     log global
Jan 20 14:47:09 compute-0 ovn_metadata_agent[160049]:     mode http
Jan 20 14:47:09 compute-0 ovn_metadata_agent[160049]:     option httplog
Jan 20 14:47:09 compute-0 ovn_metadata_agent[160049]:     option dontlognull
Jan 20 14:47:09 compute-0 ovn_metadata_agent[160049]:     option http-server-close
Jan 20 14:47:09 compute-0 ovn_metadata_agent[160049]:     option forwardfor
Jan 20 14:47:09 compute-0 ovn_metadata_agent[160049]:     retries                 3
Jan 20 14:47:09 compute-0 ovn_metadata_agent[160049]:     timeout http-request    30s
Jan 20 14:47:09 compute-0 ovn_metadata_agent[160049]:     timeout connect         30s
Jan 20 14:47:09 compute-0 ovn_metadata_agent[160049]:     timeout client          32s
Jan 20 14:47:09 compute-0 ovn_metadata_agent[160049]:     timeout server          32s
Jan 20 14:47:09 compute-0 ovn_metadata_agent[160049]:     timeout http-keep-alive 30s
Jan 20 14:47:09 compute-0 ovn_metadata_agent[160049]: 
Jan 20 14:47:09 compute-0 ovn_metadata_agent[160049]: 
Jan 20 14:47:09 compute-0 ovn_metadata_agent[160049]: listen listener
Jan 20 14:47:09 compute-0 ovn_metadata_agent[160049]:     bind 169.254.169.254:80
Jan 20 14:47:09 compute-0 ovn_metadata_agent[160049]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 14:47:09 compute-0 ovn_metadata_agent[160049]:     http-request add-header X-OVN-Network-ID fbd5d614-a7d3-4563-913c-104506628e59
Jan 20 14:47:09 compute-0 ovn_metadata_agent[160049]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 14:47:09 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:47:09.922 160071 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-fbd5d614-a7d3-4563-913c-104506628e59', 'env', 'PROCESS_TAG=haproxy-fbd5d614-a7d3-4563-913c-104506628e59', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/fbd5d614-a7d3-4563-913c-104506628e59.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 14:47:10 compute-0 nova_compute[250018]: 2026-01-20 14:47:10.034 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768920430.0333252, bf7690ac-9b5a-41e3-83bf-3c83cbacc45c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:47:10 compute-0 nova_compute[250018]: 2026-01-20 14:47:10.034 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: bf7690ac-9b5a-41e3-83bf-3c83cbacc45c] VM Started (Lifecycle Event)
Jan 20 14:47:10 compute-0 nova_compute[250018]: 2026-01-20 14:47:10.037 250022 DEBUG nova.compute.manager [None req-23b1d9a4-dfb3-4832-8f35-098755b9a582 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: bf7690ac-9b5a-41e3-83bf-3c83cbacc45c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 14:47:10 compute-0 nova_compute[250018]: 2026-01-20 14:47:10.040 250022 DEBUG nova.virt.libvirt.driver [None req-23b1d9a4-dfb3-4832-8f35-098755b9a582 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: bf7690ac-9b5a-41e3-83bf-3c83cbacc45c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 14:47:10 compute-0 nova_compute[250018]: 2026-01-20 14:47:10.044 250022 INFO nova.virt.libvirt.driver [-] [instance: bf7690ac-9b5a-41e3-83bf-3c83cbacc45c] Instance spawned successfully.
Jan 20 14:47:10 compute-0 nova_compute[250018]: 2026-01-20 14:47:10.044 250022 DEBUG nova.virt.libvirt.driver [None req-23b1d9a4-dfb3-4832-8f35-098755b9a582 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: bf7690ac-9b5a-41e3-83bf-3c83cbacc45c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 14:47:10 compute-0 nova_compute[250018]: 2026-01-20 14:47:10.059 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: bf7690ac-9b5a-41e3-83bf-3c83cbacc45c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:47:10 compute-0 nova_compute[250018]: 2026-01-20 14:47:10.065 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: bf7690ac-9b5a-41e3-83bf-3c83cbacc45c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:47:10 compute-0 nova_compute[250018]: 2026-01-20 14:47:10.069 250022 DEBUG nova.virt.libvirt.driver [None req-23b1d9a4-dfb3-4832-8f35-098755b9a582 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: bf7690ac-9b5a-41e3-83bf-3c83cbacc45c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:47:10 compute-0 nova_compute[250018]: 2026-01-20 14:47:10.069 250022 DEBUG nova.virt.libvirt.driver [None req-23b1d9a4-dfb3-4832-8f35-098755b9a582 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: bf7690ac-9b5a-41e3-83bf-3c83cbacc45c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:47:10 compute-0 nova_compute[250018]: 2026-01-20 14:47:10.069 250022 DEBUG nova.virt.libvirt.driver [None req-23b1d9a4-dfb3-4832-8f35-098755b9a582 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: bf7690ac-9b5a-41e3-83bf-3c83cbacc45c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:47:10 compute-0 nova_compute[250018]: 2026-01-20 14:47:10.070 250022 DEBUG nova.virt.libvirt.driver [None req-23b1d9a4-dfb3-4832-8f35-098755b9a582 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: bf7690ac-9b5a-41e3-83bf-3c83cbacc45c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:47:10 compute-0 nova_compute[250018]: 2026-01-20 14:47:10.070 250022 DEBUG nova.virt.libvirt.driver [None req-23b1d9a4-dfb3-4832-8f35-098755b9a582 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: bf7690ac-9b5a-41e3-83bf-3c83cbacc45c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:47:10 compute-0 nova_compute[250018]: 2026-01-20 14:47:10.071 250022 DEBUG nova.virt.libvirt.driver [None req-23b1d9a4-dfb3-4832-8f35-098755b9a582 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: bf7690ac-9b5a-41e3-83bf-3c83cbacc45c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:47:10 compute-0 nova_compute[250018]: 2026-01-20 14:47:10.116 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: bf7690ac-9b5a-41e3-83bf-3c83cbacc45c] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:47:10 compute-0 nova_compute[250018]: 2026-01-20 14:47:10.117 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768920430.0334883, bf7690ac-9b5a-41e3-83bf-3c83cbacc45c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:47:10 compute-0 nova_compute[250018]: 2026-01-20 14:47:10.117 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: bf7690ac-9b5a-41e3-83bf-3c83cbacc45c] VM Paused (Lifecycle Event)
Jan 20 14:47:10 compute-0 nova_compute[250018]: 2026-01-20 14:47:10.156 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: bf7690ac-9b5a-41e3-83bf-3c83cbacc45c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:47:10 compute-0 nova_compute[250018]: 2026-01-20 14:47:10.161 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768920430.0403745, bf7690ac-9b5a-41e3-83bf-3c83cbacc45c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:47:10 compute-0 nova_compute[250018]: 2026-01-20 14:47:10.161 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: bf7690ac-9b5a-41e3-83bf-3c83cbacc45c] VM Resumed (Lifecycle Event)
Jan 20 14:47:10 compute-0 nova_compute[250018]: 2026-01-20 14:47:10.168 250022 INFO nova.compute.manager [None req-23b1d9a4-dfb3-4832-8f35-098755b9a582 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: bf7690ac-9b5a-41e3-83bf-3c83cbacc45c] Took 6.81 seconds to spawn the instance on the hypervisor.
Jan 20 14:47:10 compute-0 nova_compute[250018]: 2026-01-20 14:47:10.169 250022 DEBUG nova.compute.manager [None req-23b1d9a4-dfb3-4832-8f35-098755b9a582 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: bf7690ac-9b5a-41e3-83bf-3c83cbacc45c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:47:10 compute-0 nova_compute[250018]: 2026-01-20 14:47:10.206 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: bf7690ac-9b5a-41e3-83bf-3c83cbacc45c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:47:10 compute-0 nova_compute[250018]: 2026-01-20 14:47:10.211 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: bf7690ac-9b5a-41e3-83bf-3c83cbacc45c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:47:10 compute-0 nova_compute[250018]: 2026-01-20 14:47:10.256 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: bf7690ac-9b5a-41e3-83bf-3c83cbacc45c] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:47:10 compute-0 nova_compute[250018]: 2026-01-20 14:47:10.261 250022 INFO nova.compute.manager [None req-23b1d9a4-dfb3-4832-8f35-098755b9a582 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: bf7690ac-9b5a-41e3-83bf-3c83cbacc45c] Took 7.80 seconds to build instance.
Jan 20 14:47:10 compute-0 nova_compute[250018]: 2026-01-20 14:47:10.279 250022 DEBUG oslo_concurrency.lockutils [None req-23b1d9a4-dfb3-4832-8f35-098755b9a582 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Lock "bf7690ac-9b5a-41e3-83bf-3c83cbacc45c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.876s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:47:10 compute-0 podman[306180]: 2026-01-20 14:47:10.372707107 +0000 UTC m=+0.078948319 container create 166b3bfa97d2106bc6a4da33eeaac50c6568bcea32ab4796de6c00c196fef289 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fbd5d614-a7d3-4563-913c-104506628e59, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 20 14:47:10 compute-0 podman[306180]: 2026-01-20 14:47:10.320733309 +0000 UTC m=+0.026974541 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 14:47:10 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:47:10 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:47:10 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:47:10.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:47:10 compute-0 systemd[1]: Started libpod-conmon-166b3bfa97d2106bc6a4da33eeaac50c6568bcea32ab4796de6c00c196fef289.scope.
Jan 20 14:47:10 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:47:10 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:47:10 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:47:10.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:47:10 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:47:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7510de2d8de5dd55127ff40c1c04ec264e2c0b952177a177123af3989d5774a5/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 14:47:10 compute-0 podman[306180]: 2026-01-20 14:47:10.486297535 +0000 UTC m=+0.192538747 container init 166b3bfa97d2106bc6a4da33eeaac50c6568bcea32ab4796de6c00c196fef289 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fbd5d614-a7d3-4563-913c-104506628e59, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 20 14:47:10 compute-0 podman[306180]: 2026-01-20 14:47:10.491824275 +0000 UTC m=+0.198065467 container start 166b3bfa97d2106bc6a4da33eeaac50c6568bcea32ab4796de6c00c196fef289 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fbd5d614-a7d3-4563-913c-104506628e59, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 20 14:47:10 compute-0 neutron-haproxy-ovnmeta-fbd5d614-a7d3-4563-913c-104506628e59[306195]: [NOTICE]   (306199) : New worker (306201) forked
Jan 20 14:47:10 compute-0 neutron-haproxy-ovnmeta-fbd5d614-a7d3-4563-913c-104506628e59[306195]: [NOTICE]   (306199) : Loading success.
Jan 20 14:47:11 compute-0 ceph-mon[74360]: pgmap v1770: 321 pgs: 321 active+clean; 283 MiB data, 859 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 3.4 MiB/s wr, 136 op/s
Jan 20 14:47:11 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/3117658322' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:47:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 14:47:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:47:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 20 14:47:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:47:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006239421936069235 of space, bias 1.0, pg target 1.8718265808207706 quantized to 32 (current 32)
Jan 20 14:47:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:47:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.2722757305043737e-06 of space, bias 1.0, pg target 0.00038041044342080775 quantized to 32 (current 32)
Jan 20 14:47:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:47:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:47:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:47:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Jan 20 14:47:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:47:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 32)
Jan 20 14:47:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:47:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:47:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:47:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021737739624046157 quantized to 32 (current 32)
Jan 20 14:47:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:47:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Jan 20 14:47:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:47:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:47:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:47:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Jan 20 14:47:11 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1771: 321 pgs: 321 active+clean; 295 MiB data, 879 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 183 op/s
Jan 20 14:47:11 compute-0 nova_compute[250018]: 2026-01-20 14:47:11.985 250022 DEBUG nova.compute.manager [req-f391e60c-1067-443c-9af9-3dae26cf8b29 req-9108dfed-d9d6-438a-b800-0c743ffb27d6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: bf7690ac-9b5a-41e3-83bf-3c83cbacc45c] Received event network-vif-plugged-5659965f-0485-4982-898c-f273d7898a5f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:47:11 compute-0 nova_compute[250018]: 2026-01-20 14:47:11.986 250022 DEBUG oslo_concurrency.lockutils [req-f391e60c-1067-443c-9af9-3dae26cf8b29 req-9108dfed-d9d6-438a-b800-0c743ffb27d6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "bf7690ac-9b5a-41e3-83bf-3c83cbacc45c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:47:11 compute-0 nova_compute[250018]: 2026-01-20 14:47:11.986 250022 DEBUG oslo_concurrency.lockutils [req-f391e60c-1067-443c-9af9-3dae26cf8b29 req-9108dfed-d9d6-438a-b800-0c743ffb27d6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "bf7690ac-9b5a-41e3-83bf-3c83cbacc45c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:47:11 compute-0 nova_compute[250018]: 2026-01-20 14:47:11.987 250022 DEBUG oslo_concurrency.lockutils [req-f391e60c-1067-443c-9af9-3dae26cf8b29 req-9108dfed-d9d6-438a-b800-0c743ffb27d6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "bf7690ac-9b5a-41e3-83bf-3c83cbacc45c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:47:11 compute-0 nova_compute[250018]: 2026-01-20 14:47:11.987 250022 DEBUG nova.compute.manager [req-f391e60c-1067-443c-9af9-3dae26cf8b29 req-9108dfed-d9d6-438a-b800-0c743ffb27d6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: bf7690ac-9b5a-41e3-83bf-3c83cbacc45c] No waiting events found dispatching network-vif-plugged-5659965f-0485-4982-898c-f273d7898a5f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:47:11 compute-0 nova_compute[250018]: 2026-01-20 14:47:11.987 250022 WARNING nova.compute.manager [req-f391e60c-1067-443c-9af9-3dae26cf8b29 req-9108dfed-d9d6-438a-b800-0c743ffb27d6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: bf7690ac-9b5a-41e3-83bf-3c83cbacc45c] Received unexpected event network-vif-plugged-5659965f-0485-4982-898c-f273d7898a5f for instance with vm_state active and task_state None.
Jan 20 14:47:12 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:47:12 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:47:12 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:47:12.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:47:12 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:47:12 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:47:12 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:47:12.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:47:12 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/3074709641' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:47:12 compute-0 nova_compute[250018]: 2026-01-20 14:47:12.906 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:47:13 compute-0 nova_compute[250018]: 2026-01-20 14:47:13.359 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:47:13 compute-0 ceph-mon[74360]: pgmap v1771: 321 pgs: 321 active+clean; 295 MiB data, 879 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 183 op/s
Jan 20 14:47:13 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1772: 321 pgs: 321 active+clean; 295 MiB data, 879 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 3.6 MiB/s wr, 182 op/s
Jan 20 14:47:14 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:47:14 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:47:14 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:47:14.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:47:14 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:47:14 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:47:14 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:47:14.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:47:14 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e238 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:47:14 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/1676694116' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:47:14 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/1676694116' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:47:15 compute-0 nova_compute[250018]: 2026-01-20 14:47:15.304 250022 DEBUG oslo_concurrency.lockutils [None req-2e9a95ec-dc91-4aa4-95a4-8879707ac308 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Acquiring lock "refresh_cache-bf7690ac-9b5a-41e3-83bf-3c83cbacc45c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:47:15 compute-0 nova_compute[250018]: 2026-01-20 14:47:15.305 250022 DEBUG oslo_concurrency.lockutils [None req-2e9a95ec-dc91-4aa4-95a4-8879707ac308 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Acquired lock "refresh_cache-bf7690ac-9b5a-41e3-83bf-3c83cbacc45c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:47:15 compute-0 nova_compute[250018]: 2026-01-20 14:47:15.305 250022 DEBUG nova.network.neutron [None req-2e9a95ec-dc91-4aa4-95a4-8879707ac308 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: bf7690ac-9b5a-41e3-83bf-3c83cbacc45c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:47:15 compute-0 ceph-mon[74360]: pgmap v1772: 321 pgs: 321 active+clean; 295 MiB data, 879 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 3.6 MiB/s wr, 182 op/s
Jan 20 14:47:15 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/279901560' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:47:15 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1773: 321 pgs: 321 active+clean; 295 MiB data, 859 MiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 3.6 MiB/s wr, 244 op/s
Jan 20 14:47:16 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:47:16 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:47:16 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:47:16.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:47:16 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:47:16 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:47:16 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:47:16.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:47:16 compute-0 nova_compute[250018]: 2026-01-20 14:47:16.874 250022 DEBUG nova.network.neutron [None req-2e9a95ec-dc91-4aa4-95a4-8879707ac308 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: bf7690ac-9b5a-41e3-83bf-3c83cbacc45c] Updating instance_info_cache with network_info: [{"id": "5659965f-0485-4982-898c-f273d7898a5f", "address": "fa:16:3e:b7:cb:a9", "network": {"id": "fbd5d614-a7d3-4563-913c-104506628e59", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-60721994-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b31139b2a4e49cba5e7048febf901c4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5659965f-04", "ovs_interfaceid": "5659965f-0485-4982-898c-f273d7898a5f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:47:16 compute-0 nova_compute[250018]: 2026-01-20 14:47:16.894 250022 DEBUG oslo_concurrency.lockutils [None req-2e9a95ec-dc91-4aa4-95a4-8879707ac308 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Releasing lock "refresh_cache-bf7690ac-9b5a-41e3-83bf-3c83cbacc45c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:47:16 compute-0 ceph-osd[84815]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 20 14:47:16 compute-0 ceph-osd[84815]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3000.0 total, 600.0 interval
                                           Cumulative writes: 28K writes, 109K keys, 28K commit groups, 1.0 writes per commit group, ingest: 0.10 GB, 0.03 MB/s
                                           Cumulative WAL: 28K writes, 9755 syncs, 2.94 writes per sync, written: 0.10 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 7081 writes, 26K keys, 7081 commit groups, 1.0 writes per commit group, ingest: 28.23 MB, 0.05 MB/s
                                           Interval WAL: 7081 writes, 2891 syncs, 2.45 writes per sync, written: 0.03 GB, 0.05 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 20 14:47:17 compute-0 nova_compute[250018]: 2026-01-20 14:47:17.082 250022 DEBUG nova.virt.libvirt.driver [None req-2e9a95ec-dc91-4aa4-95a4-8879707ac308 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: bf7690ac-9b5a-41e3-83bf-3c83cbacc45c] Starting migrate_disk_and_power_off migrate_disk_and_power_off /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11511
Jan 20 14:47:17 compute-0 nova_compute[250018]: 2026-01-20 14:47:17.082 250022 DEBUG nova.virt.libvirt.volume.remotefs [None req-2e9a95ec-dc91-4aa4-95a4-8879707ac308 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Creating file /var/lib/nova/instances/bf7690ac-9b5a-41e3-83bf-3c83cbacc45c/81f1e7e9f76346c7a742fcbea74e3014.tmp on remote host 192.168.122.102 create_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:79
Jan 20 14:47:17 compute-0 nova_compute[250018]: 2026-01-20 14:47:17.083 250022 DEBUG oslo_concurrency.processutils [None req-2e9a95ec-dc91-4aa4-95a4-8879707ac308 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.102 touch /var/lib/nova/instances/bf7690ac-9b5a-41e3-83bf-3c83cbacc45c/81f1e7e9f76346c7a742fcbea74e3014.tmp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:47:17 compute-0 ceph-mon[74360]: pgmap v1773: 321 pgs: 321 active+clean; 295 MiB data, 859 MiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 3.6 MiB/s wr, 244 op/s
Jan 20 14:47:17 compute-0 nova_compute[250018]: 2026-01-20 14:47:17.593 250022 DEBUG oslo_concurrency.processutils [None req-2e9a95ec-dc91-4aa4-95a4-8879707ac308 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] CMD "ssh -o BatchMode=yes 192.168.122.102 touch /var/lib/nova/instances/bf7690ac-9b5a-41e3-83bf-3c83cbacc45c/81f1e7e9f76346c7a742fcbea74e3014.tmp" returned: 1 in 0.510s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:47:17 compute-0 nova_compute[250018]: 2026-01-20 14:47:17.594 250022 DEBUG oslo_concurrency.processutils [None req-2e9a95ec-dc91-4aa4-95a4-8879707ac308 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] 'ssh -o BatchMode=yes 192.168.122.102 touch /var/lib/nova/instances/bf7690ac-9b5a-41e3-83bf-3c83cbacc45c/81f1e7e9f76346c7a742fcbea74e3014.tmp' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Jan 20 14:47:17 compute-0 nova_compute[250018]: 2026-01-20 14:47:17.594 250022 DEBUG nova.virt.libvirt.volume.remotefs [None req-2e9a95ec-dc91-4aa4-95a4-8879707ac308 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Creating directory /var/lib/nova/instances/bf7690ac-9b5a-41e3-83bf-3c83cbacc45c on remote host 192.168.122.102 create_dir /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:91
Jan 20 14:47:17 compute-0 nova_compute[250018]: 2026-01-20 14:47:17.595 250022 DEBUG oslo_concurrency.processutils [None req-2e9a95ec-dc91-4aa4-95a4-8879707ac308 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.102 mkdir -p /var/lib/nova/instances/bf7690ac-9b5a-41e3-83bf-3c83cbacc45c execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:47:17 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1774: 321 pgs: 321 active+clean; 295 MiB data, 859 MiB used, 20 GiB / 21 GiB avail; 4.9 MiB/s rd, 2.4 MiB/s wr, 249 op/s
Jan 20 14:47:17 compute-0 nova_compute[250018]: 2026-01-20 14:47:17.817 250022 DEBUG oslo_concurrency.processutils [None req-2e9a95ec-dc91-4aa4-95a4-8879707ac308 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] CMD "ssh -o BatchMode=yes 192.168.122.102 mkdir -p /var/lib/nova/instances/bf7690ac-9b5a-41e3-83bf-3c83cbacc45c" returned: 0 in 0.222s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:47:17 compute-0 nova_compute[250018]: 2026-01-20 14:47:17.823 250022 DEBUG nova.virt.libvirt.driver [None req-2e9a95ec-dc91-4aa4-95a4-8879707ac308 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: bf7690ac-9b5a-41e3-83bf-3c83cbacc45c] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Jan 20 14:47:17 compute-0 nova_compute[250018]: 2026-01-20 14:47:17.909 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:47:18 compute-0 nova_compute[250018]: 2026-01-20 14:47:18.361 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:47:18 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:47:18 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:47:18 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:47:18.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:47:18 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:47:18 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:47:18 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:47:18.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:47:19 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e238 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:47:19 compute-0 ceph-mon[74360]: pgmap v1774: 321 pgs: 321 active+clean; 295 MiB data, 859 MiB used, 20 GiB / 21 GiB avail; 4.9 MiB/s rd, 2.4 MiB/s wr, 249 op/s
Jan 20 14:47:19 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1775: 321 pgs: 321 active+clean; 295 MiB data, 859 MiB used, 20 GiB / 21 GiB avail; 5.2 MiB/s rd, 1.5 MiB/s wr, 247 op/s
Jan 20 14:47:20 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:47:20 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:47:20 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:47:20.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:47:20 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:47:20 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:47:20 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:47:20.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:47:21 compute-0 sudo[306217]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:47:21 compute-0 sudo[306217]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:47:21 compute-0 sudo[306217]: pam_unix(sudo:session): session closed for user root
Jan 20 14:47:21 compute-0 sudo[306242]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:47:21 compute-0 sudo[306242]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:47:21 compute-0 sudo[306242]: pam_unix(sudo:session): session closed for user root
Jan 20 14:47:21 compute-0 ceph-mon[74360]: pgmap v1775: 321 pgs: 321 active+clean; 295 MiB data, 859 MiB used, 20 GiB / 21 GiB avail; 5.2 MiB/s rd, 1.5 MiB/s wr, 247 op/s
Jan 20 14:47:21 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1776: 321 pgs: 321 active+clean; 295 MiB data, 859 MiB used, 20 GiB / 21 GiB avail; 5.2 MiB/s rd, 196 KiB/s wr, 258 op/s
Jan 20 14:47:21 compute-0 sudo[306268]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:47:21 compute-0 sudo[306268]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:47:21 compute-0 sudo[306268]: pam_unix(sudo:session): session closed for user root
Jan 20 14:47:21 compute-0 sudo[306293]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 14:47:21 compute-0 sudo[306293]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:47:22 compute-0 sudo[306293]: pam_unix(sudo:session): session closed for user root
Jan 20 14:47:22 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:47:22 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:47:22 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:47:22.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:47:22 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:47:22 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:47:22 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:47:22.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:47:22 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 14:47:22 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:47:22 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 20 14:47:22 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:47:22 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 20 14:47:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:47:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:47:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:47:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:47:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:47:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:47:22 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:47:22 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 1b4f5825-bbb7-41f4-b67e-d903dd0ddc45 does not exist
Jan 20 14:47:22 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 3260b276-0c22-4dea-9e5e-7586be952468 does not exist
Jan 20 14:47:22 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 130ca391-5cd8-4414-990a-2606cfa25149 does not exist
Jan 20 14:47:22 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 20 14:47:22 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:47:22 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 20 14:47:22 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:47:22 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 14:47:22 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:47:22 compute-0 sudo[306348]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:47:22 compute-0 sudo[306348]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:47:22 compute-0 sudo[306348]: pam_unix(sudo:session): session closed for user root
Jan 20 14:47:22 compute-0 sudo[306373]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:47:22 compute-0 sudo[306373]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:47:22 compute-0 sudo[306373]: pam_unix(sudo:session): session closed for user root
Jan 20 14:47:22 compute-0 sudo[306398]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:47:22 compute-0 sudo[306398]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:47:22 compute-0 sudo[306398]: pam_unix(sudo:session): session closed for user root
Jan 20 14:47:22 compute-0 sudo[306423]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 14:47:22 compute-0 sudo[306423]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:47:22 compute-0 nova_compute[250018]: 2026-01-20 14:47:22.910 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:47:23 compute-0 ceph-mon[74360]: pgmap v1776: 321 pgs: 321 active+clean; 295 MiB data, 859 MiB used, 20 GiB / 21 GiB avail; 5.2 MiB/s rd, 196 KiB/s wr, 258 op/s
Jan 20 14:47:23 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:47:23 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:47:23 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:47:23 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:47:23 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:47:23 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:47:23 compute-0 podman[306487]: 2026-01-20 14:47:23.138821132 +0000 UTC m=+0.039249155 container create 9b1a0798fe3e329400db5b7e3678e749c9d947e93acec55a4420550ed521ae72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_burnell, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 20 14:47:23 compute-0 systemd[1]: Started libpod-conmon-9b1a0798fe3e329400db5b7e3678e749c9d947e93acec55a4420550ed521ae72.scope.
Jan 20 14:47:23 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:47:23 compute-0 podman[306487]: 2026-01-20 14:47:23.118755307 +0000 UTC m=+0.019183300 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:47:23 compute-0 podman[306487]: 2026-01-20 14:47:23.228132642 +0000 UTC m=+0.128560655 container init 9b1a0798fe3e329400db5b7e3678e749c9d947e93acec55a4420550ed521ae72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_burnell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 20 14:47:23 compute-0 podman[306487]: 2026-01-20 14:47:23.235336957 +0000 UTC m=+0.135764940 container start 9b1a0798fe3e329400db5b7e3678e749c9d947e93acec55a4420550ed521ae72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_burnell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Jan 20 14:47:23 compute-0 podman[306487]: 2026-01-20 14:47:23.239268253 +0000 UTC m=+0.139696236 container attach 9b1a0798fe3e329400db5b7e3678e749c9d947e93acec55a4420550ed521ae72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_burnell, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 20 14:47:23 compute-0 vigorous_burnell[306503]: 167 167
Jan 20 14:47:23 compute-0 systemd[1]: libpod-9b1a0798fe3e329400db5b7e3678e749c9d947e93acec55a4420550ed521ae72.scope: Deactivated successfully.
Jan 20 14:47:23 compute-0 podman[306487]: 2026-01-20 14:47:23.241594646 +0000 UTC m=+0.142022639 container died 9b1a0798fe3e329400db5b7e3678e749c9d947e93acec55a4420550ed521ae72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_burnell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True)
Jan 20 14:47:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-01cdb81fe85f448735aef858c4758429caaa9305febaa95984818a649a30028e-merged.mount: Deactivated successfully.
Jan 20 14:47:23 compute-0 podman[306487]: 2026-01-20 14:47:23.280165902 +0000 UTC m=+0.180593895 container remove 9b1a0798fe3e329400db5b7e3678e749c9d947e93acec55a4420550ed521ae72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_burnell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 20 14:47:23 compute-0 systemd[1]: libpod-conmon-9b1a0798fe3e329400db5b7e3678e749c9d947e93acec55a4420550ed521ae72.scope: Deactivated successfully.
Jan 20 14:47:23 compute-0 nova_compute[250018]: 2026-01-20 14:47:23.363 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:47:23 compute-0 podman[306527]: 2026-01-20 14:47:23.465144675 +0000 UTC m=+0.046573743 container create 95f7a3cc212fb7188846ac38f8fef6034f8df1c0276728226efee88a0ed71111 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_diffie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 20 14:47:23 compute-0 systemd[1]: Started libpod-conmon-95f7a3cc212fb7188846ac38f8fef6034f8df1c0276728226efee88a0ed71111.scope.
Jan 20 14:47:23 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:47:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1c5c76be281af33195409579d6fc8a6a8ec4a635ad8eb29bb2abbef521bade9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:47:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1c5c76be281af33195409579d6fc8a6a8ec4a635ad8eb29bb2abbef521bade9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:47:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1c5c76be281af33195409579d6fc8a6a8ec4a635ad8eb29bb2abbef521bade9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:47:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1c5c76be281af33195409579d6fc8a6a8ec4a635ad8eb29bb2abbef521bade9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:47:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1c5c76be281af33195409579d6fc8a6a8ec4a635ad8eb29bb2abbef521bade9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 14:47:23 compute-0 podman[306527]: 2026-01-20 14:47:23.443234211 +0000 UTC m=+0.024663299 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:47:23 compute-0 podman[306527]: 2026-01-20 14:47:23.547180367 +0000 UTC m=+0.128609435 container init 95f7a3cc212fb7188846ac38f8fef6034f8df1c0276728226efee88a0ed71111 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_diffie, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 20 14:47:23 compute-0 podman[306527]: 2026-01-20 14:47:23.557021974 +0000 UTC m=+0.138451042 container start 95f7a3cc212fb7188846ac38f8fef6034f8df1c0276728226efee88a0ed71111 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_diffie, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 20 14:47:23 compute-0 podman[306527]: 2026-01-20 14:47:23.561822164 +0000 UTC m=+0.143251222 container attach 95f7a3cc212fb7188846ac38f8fef6034f8df1c0276728226efee88a0ed71111 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_diffie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 20 14:47:23 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1777: 321 pgs: 321 active+clean; 295 MiB data, 859 MiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 23 KiB/s wr, 197 op/s
Jan 20 14:47:24 compute-0 ecstatic_diffie[306544]: --> passed data devices: 0 physical, 1 LVM
Jan 20 14:47:24 compute-0 ecstatic_diffie[306544]: --> relative data size: 1.0
Jan 20 14:47:24 compute-0 ecstatic_diffie[306544]: --> All data devices are unavailable
Jan 20 14:47:24 compute-0 systemd[1]: libpod-95f7a3cc212fb7188846ac38f8fef6034f8df1c0276728226efee88a0ed71111.scope: Deactivated successfully.
Jan 20 14:47:24 compute-0 podman[306527]: 2026-01-20 14:47:24.391687393 +0000 UTC m=+0.973116471 container died 95f7a3cc212fb7188846ac38f8fef6034f8df1c0276728226efee88a0ed71111 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_diffie, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 20 14:47:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-e1c5c76be281af33195409579d6fc8a6a8ec4a635ad8eb29bb2abbef521bade9-merged.mount: Deactivated successfully.
Jan 20 14:47:24 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:47:24 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:47:24 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:47:24.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:47:24 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:47:24 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:47:24 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:47:24.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:47:24 compute-0 podman[306527]: 2026-01-20 14:47:24.460272281 +0000 UTC m=+1.041701369 container remove 95f7a3cc212fb7188846ac38f8fef6034f8df1c0276728226efee88a0ed71111 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_diffie, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:47:24 compute-0 systemd[1]: libpod-conmon-95f7a3cc212fb7188846ac38f8fef6034f8df1c0276728226efee88a0ed71111.scope: Deactivated successfully.
Jan 20 14:47:24 compute-0 sudo[306423]: pam_unix(sudo:session): session closed for user root
Jan 20 14:47:24 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e238 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:47:24 compute-0 sudo[306574]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:47:24 compute-0 sudo[306574]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:47:24 compute-0 sudo[306574]: pam_unix(sudo:session): session closed for user root
Jan 20 14:47:24 compute-0 sudo[306599]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:47:24 compute-0 sudo[306599]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:47:24 compute-0 sudo[306599]: pam_unix(sudo:session): session closed for user root
Jan 20 14:47:24 compute-0 sudo[306624]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:47:24 compute-0 sudo[306624]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:47:24 compute-0 sudo[306624]: pam_unix(sudo:session): session closed for user root
Jan 20 14:47:24 compute-0 sudo[306649]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- lvm list --format json
Jan 20 14:47:24 compute-0 sudo[306649]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:47:25 compute-0 ceph-mon[74360]: pgmap v1777: 321 pgs: 321 active+clean; 295 MiB data, 859 MiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 23 KiB/s wr, 197 op/s
Jan 20 14:47:25 compute-0 podman[306714]: 2026-01-20 14:47:25.106485202 +0000 UTC m=+0.064125968 container create e04da8b823d63a46ae662f42f1b8480aae0cdf7283b3b197059b47c29ccde454 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_borg, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:47:25 compute-0 systemd[1]: Started libpod-conmon-e04da8b823d63a46ae662f42f1b8480aae0cdf7283b3b197059b47c29ccde454.scope.
Jan 20 14:47:25 compute-0 podman[306714]: 2026-01-20 14:47:25.076752097 +0000 UTC m=+0.034392903 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:47:25 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:47:25 compute-0 podman[306714]: 2026-01-20 14:47:25.191200769 +0000 UTC m=+0.148841575 container init e04da8b823d63a46ae662f42f1b8480aae0cdf7283b3b197059b47c29ccde454 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_borg, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True)
Jan 20 14:47:25 compute-0 podman[306714]: 2026-01-20 14:47:25.197199681 +0000 UTC m=+0.154840397 container start e04da8b823d63a46ae662f42f1b8480aae0cdf7283b3b197059b47c29ccde454 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_borg, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 20 14:47:25 compute-0 podman[306714]: 2026-01-20 14:47:25.201005874 +0000 UTC m=+0.158646630 container attach e04da8b823d63a46ae662f42f1b8480aae0cdf7283b3b197059b47c29ccde454 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_borg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:47:25 compute-0 youthful_borg[306731]: 167 167
Jan 20 14:47:25 compute-0 systemd[1]: libpod-e04da8b823d63a46ae662f42f1b8480aae0cdf7283b3b197059b47c29ccde454.scope: Deactivated successfully.
Jan 20 14:47:25 compute-0 conmon[306731]: conmon e04da8b823d63a46ae66 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e04da8b823d63a46ae662f42f1b8480aae0cdf7283b3b197059b47c29ccde454.scope/container/memory.events
Jan 20 14:47:25 compute-0 podman[306714]: 2026-01-20 14:47:25.203171512 +0000 UTC m=+0.160812268 container died e04da8b823d63a46ae662f42f1b8480aae0cdf7283b3b197059b47c29ccde454 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_borg, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 14:47:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-97c19d6b9bc89fc645f9f2034c2b0b58b5b6310a3894c4e436c8855a4d95bc7e-merged.mount: Deactivated successfully.
Jan 20 14:47:25 compute-0 podman[306714]: 2026-01-20 14:47:25.251718188 +0000 UTC m=+0.209358914 container remove e04da8b823d63a46ae662f42f1b8480aae0cdf7283b3b197059b47c29ccde454 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_borg, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:47:25 compute-0 systemd[1]: libpod-conmon-e04da8b823d63a46ae662f42f1b8480aae0cdf7283b3b197059b47c29ccde454.scope: Deactivated successfully.
Jan 20 14:47:25 compute-0 podman[306755]: 2026-01-20 14:47:25.484948478 +0000 UTC m=+0.049729798 container create 9132cc40247d25e2f7d45d26d22d9a7ea31e906282145b25e49ed7c45a5ad282 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_goldstine, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:47:25 compute-0 ovn_controller[148666]: 2026-01-20T14:47:25Z|00037|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b7:cb:a9 10.100.0.4
Jan 20 14:47:25 compute-0 ovn_controller[148666]: 2026-01-20T14:47:25Z|00038|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b7:cb:a9 10.100.0.4
Jan 20 14:47:25 compute-0 systemd[1]: Started libpod-conmon-9132cc40247d25e2f7d45d26d22d9a7ea31e906282145b25e49ed7c45a5ad282.scope.
Jan 20 14:47:25 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:47:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a06fa3142c6ffbc4938c9e8cde22ab071d77b2b5a09d974288450f354b02ae3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:47:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a06fa3142c6ffbc4938c9e8cde22ab071d77b2b5a09d974288450f354b02ae3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:47:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a06fa3142c6ffbc4938c9e8cde22ab071d77b2b5a09d974288450f354b02ae3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:47:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a06fa3142c6ffbc4938c9e8cde22ab071d77b2b5a09d974288450f354b02ae3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:47:25 compute-0 podman[306755]: 2026-01-20 14:47:25.458357457 +0000 UTC m=+0.023138827 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:47:25 compute-0 podman[306755]: 2026-01-20 14:47:25.553500216 +0000 UTC m=+0.118281556 container init 9132cc40247d25e2f7d45d26d22d9a7ea31e906282145b25e49ed7c45a5ad282 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_goldstine, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef)
Jan 20 14:47:25 compute-0 podman[306755]: 2026-01-20 14:47:25.560847565 +0000 UTC m=+0.125628885 container start 9132cc40247d25e2f7d45d26d22d9a7ea31e906282145b25e49ed7c45a5ad282 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_goldstine, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 20 14:47:25 compute-0 podman[306755]: 2026-01-20 14:47:25.564278148 +0000 UTC m=+0.129059488 container attach 9132cc40247d25e2f7d45d26d22d9a7ea31e906282145b25e49ed7c45a5ad282 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_goldstine, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:47:25 compute-0 podman[306775]: 2026-01-20 14:47:25.588902306 +0000 UTC m=+0.050938462 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Jan 20 14:47:25 compute-0 podman[306772]: 2026-01-20 14:47:25.623089052 +0000 UTC m=+0.084266575 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 20 14:47:25 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1778: 321 pgs: 321 active+clean; 315 MiB data, 887 MiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 1.6 MiB/s wr, 221 op/s
Jan 20 14:47:26 compute-0 quirky_goldstine[306773]: {
Jan 20 14:47:26 compute-0 quirky_goldstine[306773]:     "0": [
Jan 20 14:47:26 compute-0 quirky_goldstine[306773]:         {
Jan 20 14:47:26 compute-0 quirky_goldstine[306773]:             "devices": [
Jan 20 14:47:26 compute-0 quirky_goldstine[306773]:                 "/dev/loop3"
Jan 20 14:47:26 compute-0 quirky_goldstine[306773]:             ],
Jan 20 14:47:26 compute-0 quirky_goldstine[306773]:             "lv_name": "ceph_lv0",
Jan 20 14:47:26 compute-0 quirky_goldstine[306773]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:47:26 compute-0 quirky_goldstine[306773]:             "lv_size": "7511998464",
Jan 20 14:47:26 compute-0 quirky_goldstine[306773]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e399cf45-e6b6-5393-99f1-75c601d3f188,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afc3740a-ccee-46f8-b343-22648cc89376,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 20 14:47:26 compute-0 quirky_goldstine[306773]:             "lv_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 14:47:26 compute-0 quirky_goldstine[306773]:             "name": "ceph_lv0",
Jan 20 14:47:26 compute-0 quirky_goldstine[306773]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:47:26 compute-0 quirky_goldstine[306773]:             "tags": {
Jan 20 14:47:26 compute-0 quirky_goldstine[306773]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:47:26 compute-0 quirky_goldstine[306773]:                 "ceph.block_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 14:47:26 compute-0 quirky_goldstine[306773]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 14:47:26 compute-0 quirky_goldstine[306773]:                 "ceph.cluster_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 14:47:26 compute-0 quirky_goldstine[306773]:                 "ceph.cluster_name": "ceph",
Jan 20 14:47:26 compute-0 quirky_goldstine[306773]:                 "ceph.crush_device_class": "",
Jan 20 14:47:26 compute-0 quirky_goldstine[306773]:                 "ceph.encrypted": "0",
Jan 20 14:47:26 compute-0 quirky_goldstine[306773]:                 "ceph.osd_fsid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 14:47:26 compute-0 quirky_goldstine[306773]:                 "ceph.osd_id": "0",
Jan 20 14:47:26 compute-0 quirky_goldstine[306773]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 14:47:26 compute-0 quirky_goldstine[306773]:                 "ceph.type": "block",
Jan 20 14:47:26 compute-0 quirky_goldstine[306773]:                 "ceph.vdo": "0"
Jan 20 14:47:26 compute-0 quirky_goldstine[306773]:             },
Jan 20 14:47:26 compute-0 quirky_goldstine[306773]:             "type": "block",
Jan 20 14:47:26 compute-0 quirky_goldstine[306773]:             "vg_name": "ceph_vg0"
Jan 20 14:47:26 compute-0 quirky_goldstine[306773]:         }
Jan 20 14:47:26 compute-0 quirky_goldstine[306773]:     ]
Jan 20 14:47:26 compute-0 quirky_goldstine[306773]: }
Jan 20 14:47:26 compute-0 systemd[1]: libpod-9132cc40247d25e2f7d45d26d22d9a7ea31e906282145b25e49ed7c45a5ad282.scope: Deactivated successfully.
Jan 20 14:47:26 compute-0 podman[306827]: 2026-01-20 14:47:26.385462151 +0000 UTC m=+0.025717698 container died 9132cc40247d25e2f7d45d26d22d9a7ea31e906282145b25e49ed7c45a5ad282 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_goldstine, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 20 14:47:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-7a06fa3142c6ffbc4938c9e8cde22ab071d77b2b5a09d974288450f354b02ae3-merged.mount: Deactivated successfully.
Jan 20 14:47:26 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:47:26 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:47:26 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:47:26.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:47:26 compute-0 podman[306827]: 2026-01-20 14:47:26.448086268 +0000 UTC m=+0.088341725 container remove 9132cc40247d25e2f7d45d26d22d9a7ea31e906282145b25e49ed7c45a5ad282 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_goldstine, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 14:47:26 compute-0 systemd[1]: libpod-conmon-9132cc40247d25e2f7d45d26d22d9a7ea31e906282145b25e49ed7c45a5ad282.scope: Deactivated successfully.
Jan 20 14:47:26 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:47:26 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:47:26 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:47:26.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:47:26 compute-0 sudo[306649]: pam_unix(sudo:session): session closed for user root
Jan 20 14:47:26 compute-0 sudo[306842]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:47:26 compute-0 sudo[306842]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:47:26 compute-0 sudo[306842]: pam_unix(sudo:session): session closed for user root
Jan 20 14:47:26 compute-0 sudo[306867]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:47:26 compute-0 sudo[306867]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:47:26 compute-0 sudo[306867]: pam_unix(sudo:session): session closed for user root
Jan 20 14:47:26 compute-0 sudo[306892]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:47:26 compute-0 sudo[306892]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:47:26 compute-0 sudo[306892]: pam_unix(sudo:session): session closed for user root
Jan 20 14:47:26 compute-0 sudo[306917]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- raw list --format json
Jan 20 14:47:26 compute-0 sudo[306917]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:47:27 compute-0 podman[306984]: 2026-01-20 14:47:27.030800678 +0000 UTC m=+0.036738586 container create 572cde5947a76fa530000d9ea3a887aaff2109c90cc099a558b3ffa9d60e2097 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_mayer, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:47:27 compute-0 systemd[1]: Started libpod-conmon-572cde5947a76fa530000d9ea3a887aaff2109c90cc099a558b3ffa9d60e2097.scope.
Jan 20 14:47:27 compute-0 ceph-mon[74360]: pgmap v1778: 321 pgs: 321 active+clean; 315 MiB data, 887 MiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 1.6 MiB/s wr, 221 op/s
Jan 20 14:47:27 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:47:27 compute-0 podman[306984]: 2026-01-20 14:47:27.092145851 +0000 UTC m=+0.098083779 container init 572cde5947a76fa530000d9ea3a887aaff2109c90cc099a558b3ffa9d60e2097 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_mayer, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 20 14:47:27 compute-0 podman[306984]: 2026-01-20 14:47:27.097753563 +0000 UTC m=+0.103691481 container start 572cde5947a76fa530000d9ea3a887aaff2109c90cc099a558b3ffa9d60e2097 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_mayer, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 14:47:27 compute-0 podman[306984]: 2026-01-20 14:47:27.100644871 +0000 UTC m=+0.106582789 container attach 572cde5947a76fa530000d9ea3a887aaff2109c90cc099a558b3ffa9d60e2097 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_mayer, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:47:27 compute-0 nostalgic_mayer[307000]: 167 167
Jan 20 14:47:27 compute-0 systemd[1]: libpod-572cde5947a76fa530000d9ea3a887aaff2109c90cc099a558b3ffa9d60e2097.scope: Deactivated successfully.
Jan 20 14:47:27 compute-0 podman[306984]: 2026-01-20 14:47:27.102157642 +0000 UTC m=+0.108095560 container died 572cde5947a76fa530000d9ea3a887aaff2109c90cc099a558b3ffa9d60e2097 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_mayer, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 20 14:47:27 compute-0 podman[306984]: 2026-01-20 14:47:27.016001008 +0000 UTC m=+0.021938946 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:47:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-5d4f45194c82ee1965e741b99ae85acd25013c9fe73cdc53d27cacaecbb440ec-merged.mount: Deactivated successfully.
Jan 20 14:47:27 compute-0 podman[306984]: 2026-01-20 14:47:27.133151413 +0000 UTC m=+0.139089331 container remove 572cde5947a76fa530000d9ea3a887aaff2109c90cc099a558b3ffa9d60e2097 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_mayer, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 20 14:47:27 compute-0 systemd[1]: libpod-conmon-572cde5947a76fa530000d9ea3a887aaff2109c90cc099a558b3ffa9d60e2097.scope: Deactivated successfully.
Jan 20 14:47:27 compute-0 podman[307024]: 2026-01-20 14:47:27.285859111 +0000 UTC m=+0.035416182 container create d641cc158cb3c16af0d81a3ed94b0ad2989c767273dbbc9fad9ae46862d146dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_curie, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 20 14:47:27 compute-0 systemd[1]: Started libpod-conmon-d641cc158cb3c16af0d81a3ed94b0ad2989c767273dbbc9fad9ae46862d146dc.scope.
Jan 20 14:47:27 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:47:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9521ad794e04ad3771f6b137263392ee9d1ab5747a7f0fc7c9bcfcb14872438b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:47:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9521ad794e04ad3771f6b137263392ee9d1ab5747a7f0fc7c9bcfcb14872438b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:47:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9521ad794e04ad3771f6b137263392ee9d1ab5747a7f0fc7c9bcfcb14872438b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:47:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9521ad794e04ad3771f6b137263392ee9d1ab5747a7f0fc7c9bcfcb14872438b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:47:27 compute-0 podman[307024]: 2026-01-20 14:47:27.270833463 +0000 UTC m=+0.020390554 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:47:27 compute-0 podman[307024]: 2026-01-20 14:47:27.367542614 +0000 UTC m=+0.117099715 container init d641cc158cb3c16af0d81a3ed94b0ad2989c767273dbbc9fad9ae46862d146dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_curie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 20 14:47:27 compute-0 podman[307024]: 2026-01-20 14:47:27.372997022 +0000 UTC m=+0.122554093 container start d641cc158cb3c16af0d81a3ed94b0ad2989c767273dbbc9fad9ae46862d146dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_curie, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 14:47:27 compute-0 podman[307024]: 2026-01-20 14:47:27.375936721 +0000 UTC m=+0.125493812 container attach d641cc158cb3c16af0d81a3ed94b0ad2989c767273dbbc9fad9ae46862d146dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_curie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 20 14:47:27 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1779: 321 pgs: 321 active+clean; 366 MiB data, 930 MiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 4.6 MiB/s wr, 206 op/s
Jan 20 14:47:27 compute-0 nova_compute[250018]: 2026-01-20 14:47:27.866 250022 DEBUG nova.virt.libvirt.driver [None req-2e9a95ec-dc91-4aa4-95a4-8879707ac308 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: bf7690ac-9b5a-41e3-83bf-3c83cbacc45c] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Jan 20 14:47:27 compute-0 nova_compute[250018]: 2026-01-20 14:47:27.912 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:47:28 compute-0 strange_curie[307040]: {
Jan 20 14:47:28 compute-0 strange_curie[307040]:     "afc3740a-ccee-46f8-b343-22648cc89376": {
Jan 20 14:47:28 compute-0 strange_curie[307040]:         "ceph_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 14:47:28 compute-0 strange_curie[307040]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 20 14:47:28 compute-0 strange_curie[307040]:         "osd_id": 0,
Jan 20 14:47:28 compute-0 strange_curie[307040]:         "osd_uuid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 14:47:28 compute-0 strange_curie[307040]:         "type": "bluestore"
Jan 20 14:47:28 compute-0 strange_curie[307040]:     }
Jan 20 14:47:28 compute-0 strange_curie[307040]: }
Jan 20 14:47:28 compute-0 systemd[1]: libpod-d641cc158cb3c16af0d81a3ed94b0ad2989c767273dbbc9fad9ae46862d146dc.scope: Deactivated successfully.
Jan 20 14:47:28 compute-0 podman[307024]: 2026-01-20 14:47:28.224739793 +0000 UTC m=+0.974296864 container died d641cc158cb3c16af0d81a3ed94b0ad2989c767273dbbc9fad9ae46862d146dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_curie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 20 14:47:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-9521ad794e04ad3771f6b137263392ee9d1ab5747a7f0fc7c9bcfcb14872438b-merged.mount: Deactivated successfully.
Jan 20 14:47:28 compute-0 podman[307024]: 2026-01-20 14:47:28.276430903 +0000 UTC m=+1.025987974 container remove d641cc158cb3c16af0d81a3ed94b0ad2989c767273dbbc9fad9ae46862d146dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_curie, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:47:28 compute-0 systemd[1]: libpod-conmon-d641cc158cb3c16af0d81a3ed94b0ad2989c767273dbbc9fad9ae46862d146dc.scope: Deactivated successfully.
Jan 20 14:47:28 compute-0 sudo[306917]: pam_unix(sudo:session): session closed for user root
Jan 20 14:47:28 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 14:47:28 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:47:28 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 14:47:28 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:47:28 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev c6ff11f9-25bf-4353-9610-f5be89bfd49d does not exist
Jan 20 14:47:28 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 7c0f5e28-7e36-4d24-8b26-c2825272ebd7 does not exist
Jan 20 14:47:28 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev bec6a192-afde-471b-8820-c040ee1fc1c4 does not exist
Jan 20 14:47:28 compute-0 nova_compute[250018]: 2026-01-20 14:47:28.366 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:47:28 compute-0 sudo[307073]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:47:28 compute-0 sudo[307073]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:47:28 compute-0 sudo[307073]: pam_unix(sudo:session): session closed for user root
Jan 20 14:47:28 compute-0 sudo[307095]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:47:28 compute-0 sudo[307095]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:47:28 compute-0 sudo[307095]: pam_unix(sudo:session): session closed for user root
Jan 20 14:47:28 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:47:28 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:47:28 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:47:28.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:47:28 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:47:28 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:47:28 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:47:28.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:47:28 compute-0 sudo[307121]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 14:47:28 compute-0 sudo[307121]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:47:28 compute-0 sudo[307121]: pam_unix(sudo:session): session closed for user root
Jan 20 14:47:28 compute-0 sudo[307141]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:47:28 compute-0 sudo[307141]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:47:28 compute-0 sudo[307141]: pam_unix(sudo:session): session closed for user root
Jan 20 14:47:29 compute-0 nova_compute[250018]: 2026-01-20 14:47:29.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:47:29 compute-0 nova_compute[250018]: 2026-01-20 14:47:29.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:47:29 compute-0 ceph-mon[74360]: pgmap v1779: 321 pgs: 321 active+clean; 366 MiB data, 930 MiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 4.6 MiB/s wr, 206 op/s
Jan 20 14:47:29 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:47:29 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:47:29 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e238 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:47:29 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1780: 321 pgs: 321 active+clean; 393 MiB data, 941 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 5.5 MiB/s wr, 213 op/s
Jan 20 14:47:30 compute-0 kernel: tap5659965f-04 (unregistering): left promiscuous mode
Jan 20 14:47:30 compute-0 NetworkManager[48960]: <info>  [1768920450.1654] device (tap5659965f-04): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 14:47:30 compute-0 ovn_controller[148666]: 2026-01-20T14:47:30Z|00323|binding|INFO|Releasing lport 5659965f-0485-4982-898c-f273d7898a5f from this chassis (sb_readonly=0)
Jan 20 14:47:30 compute-0 nova_compute[250018]: 2026-01-20 14:47:30.174 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:47:30 compute-0 ovn_controller[148666]: 2026-01-20T14:47:30Z|00324|binding|INFO|Setting lport 5659965f-0485-4982-898c-f273d7898a5f down in Southbound
Jan 20 14:47:30 compute-0 ovn_controller[148666]: 2026-01-20T14:47:30Z|00325|binding|INFO|Removing iface tap5659965f-04 ovn-installed in OVS
Jan 20 14:47:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:47:30.188 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b7:cb:a9 10.100.0.4'], port_security=['fa:16:3e:b7:cb:a9 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'bf7690ac-9b5a-41e3-83bf-3c83cbacc45c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fbd5d614-a7d3-4563-913c-104506628e59', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3b31139b2a4e49cba5e7048febf901c4', 'neutron:revision_number': '4', 'neutron:security_group_ids': '117d6f57-074c-4b36-b375-42e0ab117254', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c42c6982-be52-495a-8746-42a46932572f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=5659965f-0485-4982-898c-f273d7898a5f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:47:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:47:30.190 160071 INFO neutron.agent.ovn.metadata.agent [-] Port 5659965f-0485-4982-898c-f273d7898a5f in datapath fbd5d614-a7d3-4563-913c-104506628e59 unbound from our chassis
Jan 20 14:47:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:47:30.191 160071 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network fbd5d614-a7d3-4563-913c-104506628e59, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 14:47:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:47:30.192 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[0aa1e0fa-7df5-44cc-b9d1-a9dfb2fde888]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:47:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:47:30.193 160071 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-fbd5d614-a7d3-4563-913c-104506628e59 namespace which is not needed anymore
Jan 20 14:47:30 compute-0 nova_compute[250018]: 2026-01-20 14:47:30.252 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:47:30 compute-0 systemd[1]: machine-qemu\x2d42\x2dinstance\x2d00000063.scope: Deactivated successfully.
Jan 20 14:47:30 compute-0 systemd[1]: machine-qemu\x2d42\x2dinstance\x2d00000063.scope: Consumed 14.022s CPU time.
Jan 20 14:47:30 compute-0 systemd-machined[216401]: Machine qemu-42-instance-00000063 terminated.
Jan 20 14:47:30 compute-0 neutron-haproxy-ovnmeta-fbd5d614-a7d3-4563-913c-104506628e59[306195]: [NOTICE]   (306199) : haproxy version is 2.8.14-c23fe91
Jan 20 14:47:30 compute-0 neutron-haproxy-ovnmeta-fbd5d614-a7d3-4563-913c-104506628e59[306195]: [NOTICE]   (306199) : path to executable is /usr/sbin/haproxy
Jan 20 14:47:30 compute-0 neutron-haproxy-ovnmeta-fbd5d614-a7d3-4563-913c-104506628e59[306195]: [WARNING]  (306199) : Exiting Master process...
Jan 20 14:47:30 compute-0 neutron-haproxy-ovnmeta-fbd5d614-a7d3-4563-913c-104506628e59[306195]: [ALERT]    (306199) : Current worker (306201) exited with code 143 (Terminated)
Jan 20 14:47:30 compute-0 neutron-haproxy-ovnmeta-fbd5d614-a7d3-4563-913c-104506628e59[306195]: [WARNING]  (306199) : All workers exited. Exiting... (0)
Jan 20 14:47:30 compute-0 systemd[1]: libpod-166b3bfa97d2106bc6a4da33eeaac50c6568bcea32ab4796de6c00c196fef289.scope: Deactivated successfully.
Jan 20 14:47:30 compute-0 podman[307198]: 2026-01-20 14:47:30.344978677 +0000 UTC m=+0.050872789 container died 166b3bfa97d2106bc6a4da33eeaac50c6568bcea32ab4796de6c00c196fef289 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fbd5d614-a7d3-4563-913c-104506628e59, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Jan 20 14:47:30 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-166b3bfa97d2106bc6a4da33eeaac50c6568bcea32ab4796de6c00c196fef289-userdata-shm.mount: Deactivated successfully.
Jan 20 14:47:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-7510de2d8de5dd55127ff40c1c04ec264e2c0b952177a177123af3989d5774a5-merged.mount: Deactivated successfully.
Jan 20 14:47:30 compute-0 podman[307198]: 2026-01-20 14:47:30.401784937 +0000 UTC m=+0.107679049 container cleanup 166b3bfa97d2106bc6a4da33eeaac50c6568bcea32ab4796de6c00c196fef289 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fbd5d614-a7d3-4563-913c-104506628e59, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 20 14:47:30 compute-0 systemd[1]: libpod-conmon-166b3bfa97d2106bc6a4da33eeaac50c6568bcea32ab4796de6c00c196fef289.scope: Deactivated successfully.
Jan 20 14:47:30 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:47:30 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:47:30 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:47:30.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:47:30 compute-0 podman[307237]: 2026-01-20 14:47:30.467150249 +0000 UTC m=+0.037941339 container remove 166b3bfa97d2106bc6a4da33eeaac50c6568bcea32ab4796de6c00c196fef289 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fbd5d614-a7d3-4563-913c-104506628e59, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 20 14:47:30 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:47:30 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:47:30 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:47:30.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:47:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:47:30.473 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[5c771d51-8679-4820-a15d-c43c505bf774]: (4, ('Tue Jan 20 02:47:30 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-fbd5d614-a7d3-4563-913c-104506628e59 (166b3bfa97d2106bc6a4da33eeaac50c6568bcea32ab4796de6c00c196fef289)\n166b3bfa97d2106bc6a4da33eeaac50c6568bcea32ab4796de6c00c196fef289\nTue Jan 20 02:47:30 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-fbd5d614-a7d3-4563-913c-104506628e59 (166b3bfa97d2106bc6a4da33eeaac50c6568bcea32ab4796de6c00c196fef289)\n166b3bfa97d2106bc6a4da33eeaac50c6568bcea32ab4796de6c00c196fef289\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:47:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:47:30.474 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[acc9b285-d1c8-4226-940d-1a6b167fc2e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:47:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:47:30.475 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfbd5d614-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:47:30 compute-0 nova_compute[250018]: 2026-01-20 14:47:30.477 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:47:30 compute-0 kernel: tapfbd5d614-a0: left promiscuous mode
Jan 20 14:47:30 compute-0 nova_compute[250018]: 2026-01-20 14:47:30.493 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:47:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:47:30.495 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[b8f77b37-ef10-4831-8e67-ee39cba15964]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:47:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:47:30.506 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[87d9c192-0abb-4f8b-be55-32400888be70]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:47:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:47:30.507 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[5cfb3235-ab16-428a-a643-ad19402430f6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:47:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:47:30.521 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[e568b337-29f3-4dcf-8b32-671f512caeaf]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 637042, 'reachable_time': 32713, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 307256, 'error': None, 'target': 'ovnmeta-fbd5d614-a7d3-4563-913c-104506628e59', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:47:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:47:30.523 160517 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-fbd5d614-a7d3-4563-913c-104506628e59 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 14:47:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:47:30.523 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[eb181dc6-5c93-4f44-9fe8-c0f89d357930]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:47:30 compute-0 systemd[1]: run-netns-ovnmeta\x2dfbd5d614\x2da7d3\x2d4563\x2d913c\x2d104506628e59.mount: Deactivated successfully.
Jan 20 14:47:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:47:30.759 160071 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:47:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:47:30.760 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:47:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:47:30.760 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:47:30 compute-0 nova_compute[250018]: 2026-01-20 14:47:30.882 250022 INFO nova.virt.libvirt.driver [None req-2e9a95ec-dc91-4aa4-95a4-8879707ac308 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: bf7690ac-9b5a-41e3-83bf-3c83cbacc45c] Instance shutdown successfully after 13 seconds.
Jan 20 14:47:30 compute-0 nova_compute[250018]: 2026-01-20 14:47:30.887 250022 INFO nova.virt.libvirt.driver [-] [instance: bf7690ac-9b5a-41e3-83bf-3c83cbacc45c] Instance destroyed successfully.
Jan 20 14:47:30 compute-0 nova_compute[250018]: 2026-01-20 14:47:30.887 250022 DEBUG nova.virt.libvirt.vif [None req-2e9a95ec-dc91-4aa4-95a4-8879707ac308 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:47:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1331472194',display_name='tempest-DeleteServersTestJSON-server-1331472194',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1331472194',id=99,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:47:10Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='3b31139b2a4e49cba5e7048febf901c4',ramdisk_id='',reservation_id='r-5fm4q1fz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_
model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-DeleteServersTestJSON-1162922273',owner_user_name='tempest-DeleteServersTestJSON-1162922273-project-member'},tags=<?>,task_state='resize_migrating',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T14:47:14Z,user_data=None,user_id='37e9ef97fbe0448e9fbe32d48b66211f',uuid=bf7690ac-9b5a-41e3-83bf-3c83cbacc45c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5659965f-0485-4982-898c-f273d7898a5f", "address": "fa:16:3e:b7:cb:a9", "network": {"id": "fbd5d614-a7d3-4563-913c-104506628e59", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-60721994-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-DeleteServersTestJSON-60721994-network", "vif_mac": "fa:16:3e:b7:cb:a9"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b31139b2a4e49cba5e7048febf901c4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5659965f-04", "ovs_interfaceid": "5659965f-0485-4982-898c-f273d7898a5f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 14:47:30 compute-0 nova_compute[250018]: 2026-01-20 14:47:30.888 250022 DEBUG nova.network.os_vif_util [None req-2e9a95ec-dc91-4aa4-95a4-8879707ac308 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Converting VIF {"id": "5659965f-0485-4982-898c-f273d7898a5f", "address": "fa:16:3e:b7:cb:a9", "network": {"id": "fbd5d614-a7d3-4563-913c-104506628e59", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-60721994-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-DeleteServersTestJSON-60721994-network", "vif_mac": "fa:16:3e:b7:cb:a9"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b31139b2a4e49cba5e7048febf901c4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5659965f-04", "ovs_interfaceid": "5659965f-0485-4982-898c-f273d7898a5f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:47:30 compute-0 nova_compute[250018]: 2026-01-20 14:47:30.889 250022 DEBUG nova.network.os_vif_util [None req-2e9a95ec-dc91-4aa4-95a4-8879707ac308 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b7:cb:a9,bridge_name='br-int',has_traffic_filtering=True,id=5659965f-0485-4982-898c-f273d7898a5f,network=Network(fbd5d614-a7d3-4563-913c-104506628e59),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5659965f-04') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:47:30 compute-0 nova_compute[250018]: 2026-01-20 14:47:30.889 250022 DEBUG os_vif [None req-2e9a95ec-dc91-4aa4-95a4-8879707ac308 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b7:cb:a9,bridge_name='br-int',has_traffic_filtering=True,id=5659965f-0485-4982-898c-f273d7898a5f,network=Network(fbd5d614-a7d3-4563-913c-104506628e59),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5659965f-04') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 14:47:30 compute-0 nova_compute[250018]: 2026-01-20 14:47:30.891 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:47:30 compute-0 nova_compute[250018]: 2026-01-20 14:47:30.891 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5659965f-04, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:47:30 compute-0 nova_compute[250018]: 2026-01-20 14:47:30.893 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:47:30 compute-0 nova_compute[250018]: 2026-01-20 14:47:30.896 250022 INFO os_vif [None req-2e9a95ec-dc91-4aa4-95a4-8879707ac308 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b7:cb:a9,bridge_name='br-int',has_traffic_filtering=True,id=5659965f-0485-4982-898c-f273d7898a5f,network=Network(fbd5d614-a7d3-4563-913c-104506628e59),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5659965f-04')
Jan 20 14:47:30 compute-0 nova_compute[250018]: 2026-01-20 14:47:30.899 250022 DEBUG nova.virt.libvirt.driver [None req-2e9a95ec-dc91-4aa4-95a4-8879707ac308 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] skipping disk for instance-00000063 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:47:30 compute-0 nova_compute[250018]: 2026-01-20 14:47:30.899 250022 DEBUG nova.virt.libvirt.driver [None req-2e9a95ec-dc91-4aa4-95a4-8879707ac308 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] skipping disk for instance-00000063 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:47:31 compute-0 nova_compute[250018]: 2026-01-20 14:47:31.077 250022 DEBUG neutronclient.v2_0.client [None req-2e9a95ec-dc91-4aa4-95a4-8879707ac308 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Error message: {"NeutronError": {"type": "PortBindingNotFound", "message": "Binding for port 5659965f-0485-4982-898c-f273d7898a5f for host compute-2.ctlplane.example.com could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262
Jan 20 14:47:31 compute-0 nova_compute[250018]: 2026-01-20 14:47:31.164 250022 DEBUG oslo_concurrency.lockutils [None req-2e9a95ec-dc91-4aa4-95a4-8879707ac308 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Acquiring lock "bf7690ac-9b5a-41e3-83bf-3c83cbacc45c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:47:31 compute-0 nova_compute[250018]: 2026-01-20 14:47:31.164 250022 DEBUG oslo_concurrency.lockutils [None req-2e9a95ec-dc91-4aa4-95a4-8879707ac308 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Lock "bf7690ac-9b5a-41e3-83bf-3c83cbacc45c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:47:31 compute-0 nova_compute[250018]: 2026-01-20 14:47:31.164 250022 DEBUG oslo_concurrency.lockutils [None req-2e9a95ec-dc91-4aa4-95a4-8879707ac308 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Lock "bf7690ac-9b5a-41e3-83bf-3c83cbacc45c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:47:31 compute-0 ceph-mon[74360]: pgmap v1780: 321 pgs: 321 active+clean; 393 MiB data, 941 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 5.5 MiB/s wr, 213 op/s
Jan 20 14:47:31 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1781: 321 pgs: 321 active+clean; 403 MiB data, 954 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 6.0 MiB/s wr, 190 op/s
Jan 20 14:47:31 compute-0 ceph-mgr[74653]: [devicehealth INFO root] Check health
Jan 20 14:47:31 compute-0 nova_compute[250018]: 2026-01-20 14:47:31.942 250022 DEBUG nova.compute.manager [req-f9ff71d0-fab8-4acf-ba20-f246357d4344 req-918284ae-e991-41bb-8d2b-bab258f6aeae 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: bf7690ac-9b5a-41e3-83bf-3c83cbacc45c] Received event network-vif-unplugged-5659965f-0485-4982-898c-f273d7898a5f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:47:31 compute-0 nova_compute[250018]: 2026-01-20 14:47:31.943 250022 DEBUG oslo_concurrency.lockutils [req-f9ff71d0-fab8-4acf-ba20-f246357d4344 req-918284ae-e991-41bb-8d2b-bab258f6aeae 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "bf7690ac-9b5a-41e3-83bf-3c83cbacc45c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:47:31 compute-0 nova_compute[250018]: 2026-01-20 14:47:31.943 250022 DEBUG oslo_concurrency.lockutils [req-f9ff71d0-fab8-4acf-ba20-f246357d4344 req-918284ae-e991-41bb-8d2b-bab258f6aeae 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "bf7690ac-9b5a-41e3-83bf-3c83cbacc45c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:47:31 compute-0 nova_compute[250018]: 2026-01-20 14:47:31.943 250022 DEBUG oslo_concurrency.lockutils [req-f9ff71d0-fab8-4acf-ba20-f246357d4344 req-918284ae-e991-41bb-8d2b-bab258f6aeae 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "bf7690ac-9b5a-41e3-83bf-3c83cbacc45c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:47:31 compute-0 nova_compute[250018]: 2026-01-20 14:47:31.943 250022 DEBUG nova.compute.manager [req-f9ff71d0-fab8-4acf-ba20-f246357d4344 req-918284ae-e991-41bb-8d2b-bab258f6aeae 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: bf7690ac-9b5a-41e3-83bf-3c83cbacc45c] No waiting events found dispatching network-vif-unplugged-5659965f-0485-4982-898c-f273d7898a5f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:47:31 compute-0 nova_compute[250018]: 2026-01-20 14:47:31.944 250022 WARNING nova.compute.manager [req-f9ff71d0-fab8-4acf-ba20-f246357d4344 req-918284ae-e991-41bb-8d2b-bab258f6aeae 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: bf7690ac-9b5a-41e3-83bf-3c83cbacc45c] Received unexpected event network-vif-unplugged-5659965f-0485-4982-898c-f273d7898a5f for instance with vm_state active and task_state resize_migrated.
Jan 20 14:47:32 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:47:32 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:47:32 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:47:32.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:47:32 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:47:32 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:47:32 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:47:32.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:47:32 compute-0 nova_compute[250018]: 2026-01-20 14:47:32.915 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:47:33 compute-0 ceph-mon[74360]: pgmap v1781: 321 pgs: 321 active+clean; 403 MiB data, 954 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 6.0 MiB/s wr, 190 op/s
Jan 20 14:47:33 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/3143770173' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:47:33 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1782: 321 pgs: 321 active+clean; 407 MiB data, 966 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 6.1 MiB/s wr, 173 op/s
Jan 20 14:47:33 compute-0 nova_compute[250018]: 2026-01-20 14:47:33.921 250022 DEBUG nova.compute.manager [req-bdbb8a28-8748-42d7-9f0d-ec50d6672627 req-d1fa5ba2-d9ec-4f61-a344-41746951dced 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: bf7690ac-9b5a-41e3-83bf-3c83cbacc45c] Received event network-vif-plugged-5659965f-0485-4982-898c-f273d7898a5f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:47:33 compute-0 nova_compute[250018]: 2026-01-20 14:47:33.921 250022 DEBUG oslo_concurrency.lockutils [req-bdbb8a28-8748-42d7-9f0d-ec50d6672627 req-d1fa5ba2-d9ec-4f61-a344-41746951dced 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "bf7690ac-9b5a-41e3-83bf-3c83cbacc45c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:47:33 compute-0 nova_compute[250018]: 2026-01-20 14:47:33.921 250022 DEBUG oslo_concurrency.lockutils [req-bdbb8a28-8748-42d7-9f0d-ec50d6672627 req-d1fa5ba2-d9ec-4f61-a344-41746951dced 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "bf7690ac-9b5a-41e3-83bf-3c83cbacc45c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:47:33 compute-0 nova_compute[250018]: 2026-01-20 14:47:33.921 250022 DEBUG oslo_concurrency.lockutils [req-bdbb8a28-8748-42d7-9f0d-ec50d6672627 req-d1fa5ba2-d9ec-4f61-a344-41746951dced 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "bf7690ac-9b5a-41e3-83bf-3c83cbacc45c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:47:33 compute-0 nova_compute[250018]: 2026-01-20 14:47:33.922 250022 DEBUG nova.compute.manager [req-bdbb8a28-8748-42d7-9f0d-ec50d6672627 req-d1fa5ba2-d9ec-4f61-a344-41746951dced 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: bf7690ac-9b5a-41e3-83bf-3c83cbacc45c] No waiting events found dispatching network-vif-plugged-5659965f-0485-4982-898c-f273d7898a5f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:47:33 compute-0 nova_compute[250018]: 2026-01-20 14:47:33.922 250022 WARNING nova.compute.manager [req-bdbb8a28-8748-42d7-9f0d-ec50d6672627 req-d1fa5ba2-d9ec-4f61-a344-41746951dced 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: bf7690ac-9b5a-41e3-83bf-3c83cbacc45c] Received unexpected event network-vif-plugged-5659965f-0485-4982-898c-f273d7898a5f for instance with vm_state active and task_state resize_migrated.
Jan 20 14:47:34 compute-0 nova_compute[250018]: 2026-01-20 14:47:34.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:47:34 compute-0 nova_compute[250018]: 2026-01-20 14:47:34.072 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:47:34 compute-0 nova_compute[250018]: 2026-01-20 14:47:34.072 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:47:34 compute-0 nova_compute[250018]: 2026-01-20 14:47:34.073 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:47:34 compute-0 nova_compute[250018]: 2026-01-20 14:47:34.073 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 14:47:34 compute-0 nova_compute[250018]: 2026-01-20 14:47:34.073 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:47:34 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/2578026580' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:47:34 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:47:34 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:47:34 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:47:34.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:47:34 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:47:34 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:47:34 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:47:34.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:47:34 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e238 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:47:34 compute-0 ceph-mon[74360]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #84. Immutable memtables: 0.
Jan 20 14:47:34 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:47:34.514860) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 14:47:34 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:856] [default] [JOB 47] Flushing memtable with next log file: 84
Jan 20 14:47:34 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768920454514909, "job": 47, "event": "flush_started", "num_memtables": 1, "num_entries": 1484, "num_deletes": 254, "total_data_size": 2329117, "memory_usage": 2360440, "flush_reason": "Manual Compaction"}
Jan 20 14:47:34 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:885] [default] [JOB 47] Level-0 flush table #85: started
Jan 20 14:47:34 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768920454523650, "cf_name": "default", "job": 47, "event": "table_file_creation", "file_number": 85, "file_size": 1416448, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 38862, "largest_seqno": 40344, "table_properties": {"data_size": 1411216, "index_size": 2436, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 14662, "raw_average_key_size": 21, "raw_value_size": 1399361, "raw_average_value_size": 2031, "num_data_blocks": 108, "num_entries": 689, "num_filter_entries": 689, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768920326, "oldest_key_time": 1768920326, "file_creation_time": 1768920454, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1688fb6f-f397-4aac-a0e8-874ff91d97a7", "db_session_id": "5V3N2TVXYZBCXP55EZHK", "orig_file_number": 85, "seqno_to_time_mapping": "N/A"}}
Jan 20 14:47:34 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 47] Flush lasted 8823 microseconds, and 3853 cpu microseconds.
Jan 20 14:47:34 compute-0 ceph-mon[74360]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 14:47:34 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:47:34.523685) [db/flush_job.cc:967] [default] [JOB 47] Level-0 flush table #85: 1416448 bytes OK
Jan 20 14:47:34 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:47:34.523702) [db/memtable_list.cc:519] [default] Level-0 commit table #85 started
Jan 20 14:47:34 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:47:34.524869) [db/memtable_list.cc:722] [default] Level-0 commit table #85: memtable #1 done
Jan 20 14:47:34 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:47:34.524882) EVENT_LOG_v1 {"time_micros": 1768920454524878, "job": 47, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 14:47:34 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:47:34.524896) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 14:47:34 compute-0 ceph-mon[74360]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 47] Try to delete WAL files size 2322675, prev total WAL file size 2322675, number of live WAL files 2.
Jan 20 14:47:34 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000081.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:47:34 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:47:34.525654) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031323533' seq:72057594037927935, type:22 .. '6D6772737461740031353036' seq:0, type:0; will stop at (end)
Jan 20 14:47:34 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 48] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 14:47:34 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 47 Base level 0, inputs: [85(1383KB)], [83(10MB)]
Jan 20 14:47:34 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768920454525757, "job": 48, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [85], "files_L6": [83], "score": -1, "input_data_size": 12678224, "oldest_snapshot_seqno": -1}
Jan 20 14:47:34 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:47:34 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2747494950' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:47:34 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 48] Generated table #86: 6916 keys, 9676447 bytes, temperature: kUnknown
Jan 20 14:47:34 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768920454588614, "cf_name": "default", "job": 48, "event": "table_file_creation", "file_number": 86, "file_size": 9676447, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9632259, "index_size": 25772, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17349, "raw_key_size": 176760, "raw_average_key_size": 25, "raw_value_size": 9510686, "raw_average_value_size": 1375, "num_data_blocks": 1026, "num_entries": 6916, "num_filter_entries": 6916, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768917305, "oldest_key_time": 0, "file_creation_time": 1768920454, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1688fb6f-f397-4aac-a0e8-874ff91d97a7", "db_session_id": "5V3N2TVXYZBCXP55EZHK", "orig_file_number": 86, "seqno_to_time_mapping": "N/A"}}
Jan 20 14:47:34 compute-0 ceph-mon[74360]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 14:47:34 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:47:34.588936) [db/compaction/compaction_job.cc:1663] [default] [JOB 48] Compacted 1@0 + 1@6 files to L6 => 9676447 bytes
Jan 20 14:47:34 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:47:34.590589) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 201.3 rd, 153.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.4, 10.7 +0.0 blob) out(9.2 +0.0 blob), read-write-amplify(15.8) write-amplify(6.8) OK, records in: 7384, records dropped: 468 output_compression: NoCompression
Jan 20 14:47:34 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:47:34.590611) EVENT_LOG_v1 {"time_micros": 1768920454590600, "job": 48, "event": "compaction_finished", "compaction_time_micros": 62989, "compaction_time_cpu_micros": 37280, "output_level": 6, "num_output_files": 1, "total_output_size": 9676447, "num_input_records": 7384, "num_output_records": 6916, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 14:47:34 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000085.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:47:34 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768920454591019, "job": 48, "event": "table_file_deletion", "file_number": 85}
Jan 20 14:47:34 compute-0 nova_compute[250018]: 2026-01-20 14:47:34.591 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:47:34 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000083.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:47:34 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768920454593036, "job": 48, "event": "table_file_deletion", "file_number": 83}
Jan 20 14:47:34 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:47:34.525481) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:47:34 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:47:34.593147) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:47:34 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:47:34.593153) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:47:34 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:47:34.593155) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:47:34 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:47:34.593157) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:47:34 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:47:34.593159) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:47:34 compute-0 nova_compute[250018]: 2026-01-20 14:47:34.673 250022 DEBUG nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] skipping disk for instance-00000063 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:47:34 compute-0 nova_compute[250018]: 2026-01-20 14:47:34.673 250022 DEBUG nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] skipping disk for instance-00000063 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:47:34 compute-0 nova_compute[250018]: 2026-01-20 14:47:34.798 250022 WARNING nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:47:34 compute-0 nova_compute[250018]: 2026-01-20 14:47:34.799 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4424MB free_disk=20.806148529052734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 14:47:34 compute-0 nova_compute[250018]: 2026-01-20 14:47:34.799 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:47:34 compute-0 nova_compute[250018]: 2026-01-20 14:47:34.800 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:47:34 compute-0 nova_compute[250018]: 2026-01-20 14:47:34.850 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Migration for instance bf7690ac-9b5a-41e3-83bf-3c83cbacc45c refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903
Jan 20 14:47:34 compute-0 nova_compute[250018]: 2026-01-20 14:47:34.881 250022 INFO nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] [instance: bf7690ac-9b5a-41e3-83bf-3c83cbacc45c] Updating resource usage from migration a1059fde-04ca-4e1c-8ccd-474c4cd4cbec
Jan 20 14:47:34 compute-0 nova_compute[250018]: 2026-01-20 14:47:34.882 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] [instance: bf7690ac-9b5a-41e3-83bf-3c83cbacc45c] Starting to track outgoing migration a1059fde-04ca-4e1c-8ccd-474c4cd4cbec with flavor 522deaab-a741-4dbb-932d-d8b13a211c33 _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1444
Jan 20 14:47:34 compute-0 nova_compute[250018]: 2026-01-20 14:47:34.921 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Migration a1059fde-04ca-4e1c-8ccd-474c4cd4cbec is active on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640
Jan 20 14:47:34 compute-0 nova_compute[250018]: 2026-01-20 14:47:34.947 250022 INFO nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Instance fc5c09c4-3e90-4b31-8610-2e555b7e2406 has allocations against this compute host but is not found in the database.
Jan 20 14:47:34 compute-0 nova_compute[250018]: 2026-01-20 14:47:34.948 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 14:47:34 compute-0 nova_compute[250018]: 2026-01-20 14:47:34.948 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 14:47:35 compute-0 nova_compute[250018]: 2026-01-20 14:47:35.123 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:47:35 compute-0 nova_compute[250018]: 2026-01-20 14:47:35.169 250022 DEBUG nova.compute.manager [req-f7f44aec-103d-4b07-abc4-ce3d2c461c1e req-3f3a5da4-66fd-4665-bee4-62c79d00457b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: bf7690ac-9b5a-41e3-83bf-3c83cbacc45c] Received event network-changed-5659965f-0485-4982-898c-f273d7898a5f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:47:35 compute-0 nova_compute[250018]: 2026-01-20 14:47:35.169 250022 DEBUG nova.compute.manager [req-f7f44aec-103d-4b07-abc4-ce3d2c461c1e req-3f3a5da4-66fd-4665-bee4-62c79d00457b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: bf7690ac-9b5a-41e3-83bf-3c83cbacc45c] Refreshing instance network info cache due to event network-changed-5659965f-0485-4982-898c-f273d7898a5f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:47:35 compute-0 nova_compute[250018]: 2026-01-20 14:47:35.170 250022 DEBUG oslo_concurrency.lockutils [req-f7f44aec-103d-4b07-abc4-ce3d2c461c1e req-3f3a5da4-66fd-4665-bee4-62c79d00457b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-bf7690ac-9b5a-41e3-83bf-3c83cbacc45c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:47:35 compute-0 nova_compute[250018]: 2026-01-20 14:47:35.170 250022 DEBUG oslo_concurrency.lockutils [req-f7f44aec-103d-4b07-abc4-ce3d2c461c1e req-3f3a5da4-66fd-4665-bee4-62c79d00457b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-bf7690ac-9b5a-41e3-83bf-3c83cbacc45c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:47:35 compute-0 nova_compute[250018]: 2026-01-20 14:47:35.171 250022 DEBUG nova.network.neutron [req-f7f44aec-103d-4b07-abc4-ce3d2c461c1e req-3f3a5da4-66fd-4665-bee4-62c79d00457b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: bf7690ac-9b5a-41e3-83bf-3c83cbacc45c] Refreshing network info cache for port 5659965f-0485-4982-898c-f273d7898a5f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:47:35 compute-0 nova_compute[250018]: 2026-01-20 14:47:35.182 250022 DEBUG oslo_concurrency.lockutils [None req-657641bc-afc9-41eb-81cc-683cd8b428cf a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Acquiring lock "fc5c09c4-3e90-4b31-8610-2e555b7e2406" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:47:35 compute-0 nova_compute[250018]: 2026-01-20 14:47:35.183 250022 DEBUG oslo_concurrency.lockutils [None req-657641bc-afc9-41eb-81cc-683cd8b428cf a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lock "fc5c09c4-3e90-4b31-8610-2e555b7e2406" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:47:35 compute-0 nova_compute[250018]: 2026-01-20 14:47:35.202 250022 DEBUG nova.compute.manager [None req-657641bc-afc9-41eb-81cc-683cd8b428cf a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: fc5c09c4-3e90-4b31-8610-2e555b7e2406] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 14:47:35 compute-0 nova_compute[250018]: 2026-01-20 14:47:35.297 250022 DEBUG oslo_concurrency.lockutils [None req-657641bc-afc9-41eb-81cc-683cd8b428cf a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:47:35 compute-0 ceph-mon[74360]: pgmap v1782: 321 pgs: 321 active+clean; 407 MiB data, 966 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 6.1 MiB/s wr, 173 op/s
Jan 20 14:47:35 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/2747494950' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:47:35 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:47:35 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1246613625' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:47:35 compute-0 nova_compute[250018]: 2026-01-20 14:47:35.589 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:47:35 compute-0 nova_compute[250018]: 2026-01-20 14:47:35.597 250022 DEBUG nova.compute.provider_tree [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:47:35 compute-0 nova_compute[250018]: 2026-01-20 14:47:35.620 250022 DEBUG nova.scheduler.client.report [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:47:35 compute-0 nova_compute[250018]: 2026-01-20 14:47:35.650 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 14:47:35 compute-0 nova_compute[250018]: 2026-01-20 14:47:35.650 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.851s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:47:35 compute-0 nova_compute[250018]: 2026-01-20 14:47:35.651 250022 DEBUG oslo_concurrency.lockutils [None req-657641bc-afc9-41eb-81cc-683cd8b428cf a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.355s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:47:35 compute-0 nova_compute[250018]: 2026-01-20 14:47:35.658 250022 DEBUG nova.virt.hardware [None req-657641bc-afc9-41eb-81cc-683cd8b428cf a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 14:47:35 compute-0 nova_compute[250018]: 2026-01-20 14:47:35.659 250022 INFO nova.compute.claims [None req-657641bc-afc9-41eb-81cc-683cd8b428cf a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: fc5c09c4-3e90-4b31-8610-2e555b7e2406] Claim successful on node compute-0.ctlplane.example.com
Jan 20 14:47:35 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1783: 321 pgs: 321 active+clean; 407 MiB data, 966 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 6.1 MiB/s wr, 173 op/s
Jan 20 14:47:35 compute-0 nova_compute[250018]: 2026-01-20 14:47:35.820 250022 DEBUG oslo_concurrency.processutils [None req-657641bc-afc9-41eb-81cc-683cd8b428cf a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:47:35 compute-0 nova_compute[250018]: 2026-01-20 14:47:35.893 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:47:36 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:47:36 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/589147395' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:47:36 compute-0 nova_compute[250018]: 2026-01-20 14:47:36.263 250022 DEBUG oslo_concurrency.processutils [None req-657641bc-afc9-41eb-81cc-683cd8b428cf a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:47:36 compute-0 nova_compute[250018]: 2026-01-20 14:47:36.269 250022 DEBUG nova.compute.provider_tree [None req-657641bc-afc9-41eb-81cc-683cd8b428cf a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:47:36 compute-0 nova_compute[250018]: 2026-01-20 14:47:36.294 250022 DEBUG nova.scheduler.client.report [None req-657641bc-afc9-41eb-81cc-683cd8b428cf a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:47:36 compute-0 nova_compute[250018]: 2026-01-20 14:47:36.327 250022 DEBUG oslo_concurrency.lockutils [None req-657641bc-afc9-41eb-81cc-683cd8b428cf a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.675s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:47:36 compute-0 nova_compute[250018]: 2026-01-20 14:47:36.328 250022 DEBUG nova.compute.manager [None req-657641bc-afc9-41eb-81cc-683cd8b428cf a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: fc5c09c4-3e90-4b31-8610-2e555b7e2406] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 14:47:36 compute-0 nova_compute[250018]: 2026-01-20 14:47:36.423 250022 DEBUG nova.compute.manager [None req-657641bc-afc9-41eb-81cc-683cd8b428cf a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: fc5c09c4-3e90-4b31-8610-2e555b7e2406] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 20 14:47:36 compute-0 nova_compute[250018]: 2026-01-20 14:47:36.424 250022 DEBUG nova.network.neutron [None req-657641bc-afc9-41eb-81cc-683cd8b428cf a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: fc5c09c4-3e90-4b31-8610-2e555b7e2406] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 14:47:36 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:47:36 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:47:36 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:47:36.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:47:36 compute-0 nova_compute[250018]: 2026-01-20 14:47:36.456 250022 INFO nova.virt.libvirt.driver [None req-657641bc-afc9-41eb-81cc-683cd8b428cf a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: fc5c09c4-3e90-4b31-8610-2e555b7e2406] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 14:47:36 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:47:36 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:47:36 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:47:36.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:47:36 compute-0 nova_compute[250018]: 2026-01-20 14:47:36.497 250022 DEBUG nova.compute.manager [None req-657641bc-afc9-41eb-81cc-683cd8b428cf a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: fc5c09c4-3e90-4b31-8610-2e555b7e2406] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 14:47:36 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e238 do_prune osdmap full prune enabled
Jan 20 14:47:36 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e239 e239: 3 total, 3 up, 3 in
Jan 20 14:47:36 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/1246613625' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:47:36 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/2814695511' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:47:36 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/589147395' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:47:36 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e239: 3 total, 3 up, 3 in
Jan 20 14:47:36 compute-0 nova_compute[250018]: 2026-01-20 14:47:36.590 250022 DEBUG nova.compute.manager [None req-657641bc-afc9-41eb-81cc-683cd8b428cf a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: fc5c09c4-3e90-4b31-8610-2e555b7e2406] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 14:47:36 compute-0 nova_compute[250018]: 2026-01-20 14:47:36.591 250022 DEBUG nova.virt.libvirt.driver [None req-657641bc-afc9-41eb-81cc-683cd8b428cf a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: fc5c09c4-3e90-4b31-8610-2e555b7e2406] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 14:47:36 compute-0 nova_compute[250018]: 2026-01-20 14:47:36.592 250022 INFO nova.virt.libvirt.driver [None req-657641bc-afc9-41eb-81cc-683cd8b428cf a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: fc5c09c4-3e90-4b31-8610-2e555b7e2406] Creating image(s)
Jan 20 14:47:36 compute-0 nova_compute[250018]: 2026-01-20 14:47:36.618 250022 DEBUG nova.storage.rbd_utils [None req-657641bc-afc9-41eb-81cc-683cd8b428cf a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] rbd image fc5c09c4-3e90-4b31-8610-2e555b7e2406_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:47:36 compute-0 nova_compute[250018]: 2026-01-20 14:47:36.645 250022 DEBUG nova.storage.rbd_utils [None req-657641bc-afc9-41eb-81cc-683cd8b428cf a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] rbd image fc5c09c4-3e90-4b31-8610-2e555b7e2406_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:47:36 compute-0 nova_compute[250018]: 2026-01-20 14:47:36.668 250022 DEBUG nova.storage.rbd_utils [None req-657641bc-afc9-41eb-81cc-683cd8b428cf a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] rbd image fc5c09c4-3e90-4b31-8610-2e555b7e2406_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:47:36 compute-0 nova_compute[250018]: 2026-01-20 14:47:36.672 250022 DEBUG oslo_concurrency.processutils [None req-657641bc-afc9-41eb-81cc-683cd8b428cf a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:47:36 compute-0 nova_compute[250018]: 2026-01-20 14:47:36.706 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:47:36 compute-0 nova_compute[250018]: 2026-01-20 14:47:36.708 250022 DEBUG nova.policy [None req-657641bc-afc9-41eb-81cc-683cd8b428cf a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a1bd93d04cc4468abe1d5c61f5144191', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'acb30fbc0e3749e390d7f867060b5a2a', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 20 14:47:36 compute-0 nova_compute[250018]: 2026-01-20 14:47:36.709 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:47:36 compute-0 nova_compute[250018]: 2026-01-20 14:47:36.747 250022 DEBUG oslo_concurrency.processutils [None req-657641bc-afc9-41eb-81cc-683cd8b428cf a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:47:36 compute-0 nova_compute[250018]: 2026-01-20 14:47:36.748 250022 DEBUG oslo_concurrency.lockutils [None req-657641bc-afc9-41eb-81cc-683cd8b428cf a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Acquiring lock "82d5c1918fd7c974214c7a48c1793a7a82560462" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:47:36 compute-0 nova_compute[250018]: 2026-01-20 14:47:36.748 250022 DEBUG oslo_concurrency.lockutils [None req-657641bc-afc9-41eb-81cc-683cd8b428cf a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:47:36 compute-0 nova_compute[250018]: 2026-01-20 14:47:36.748 250022 DEBUG oslo_concurrency.lockutils [None req-657641bc-afc9-41eb-81cc-683cd8b428cf a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:47:36 compute-0 nova_compute[250018]: 2026-01-20 14:47:36.775 250022 DEBUG nova.storage.rbd_utils [None req-657641bc-afc9-41eb-81cc-683cd8b428cf a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] rbd image fc5c09c4-3e90-4b31-8610-2e555b7e2406_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:47:36 compute-0 nova_compute[250018]: 2026-01-20 14:47:36.779 250022 DEBUG oslo_concurrency.processutils [None req-657641bc-afc9-41eb-81cc-683cd8b428cf a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 fc5c09c4-3e90-4b31-8610-2e555b7e2406_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:47:37 compute-0 nova_compute[250018]: 2026-01-20 14:47:37.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:47:37 compute-0 nova_compute[250018]: 2026-01-20 14:47:37.126 250022 DEBUG oslo_concurrency.processutils [None req-657641bc-afc9-41eb-81cc-683cd8b428cf a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 fc5c09c4-3e90-4b31-8610-2e555b7e2406_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.347s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:47:37 compute-0 nova_compute[250018]: 2026-01-20 14:47:37.221 250022 DEBUG nova.storage.rbd_utils [None req-657641bc-afc9-41eb-81cc-683cd8b428cf a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] resizing rbd image fc5c09c4-3e90-4b31-8610-2e555b7e2406_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 20 14:47:37 compute-0 nova_compute[250018]: 2026-01-20 14:47:37.272 250022 DEBUG nova.network.neutron [req-f7f44aec-103d-4b07-abc4-ce3d2c461c1e req-3f3a5da4-66fd-4665-bee4-62c79d00457b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: bf7690ac-9b5a-41e3-83bf-3c83cbacc45c] Updated VIF entry in instance network info cache for port 5659965f-0485-4982-898c-f273d7898a5f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 14:47:37 compute-0 nova_compute[250018]: 2026-01-20 14:47:37.273 250022 DEBUG nova.network.neutron [req-f7f44aec-103d-4b07-abc4-ce3d2c461c1e req-3f3a5da4-66fd-4665-bee4-62c79d00457b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: bf7690ac-9b5a-41e3-83bf-3c83cbacc45c] Updating instance_info_cache with network_info: [{"id": "5659965f-0485-4982-898c-f273d7898a5f", "address": "fa:16:3e:b7:cb:a9", "network": {"id": "fbd5d614-a7d3-4563-913c-104506628e59", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-60721994-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b31139b2a4e49cba5e7048febf901c4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5659965f-04", "ovs_interfaceid": "5659965f-0485-4982-898c-f273d7898a5f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:47:37 compute-0 nova_compute[250018]: 2026-01-20 14:47:37.353 250022 DEBUG oslo_concurrency.lockutils [req-f7f44aec-103d-4b07-abc4-ce3d2c461c1e req-3f3a5da4-66fd-4665-bee4-62c79d00457b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-bf7690ac-9b5a-41e3-83bf-3c83cbacc45c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:47:37 compute-0 nova_compute[250018]: 2026-01-20 14:47:37.360 250022 DEBUG nova.objects.instance [None req-657641bc-afc9-41eb-81cc-683cd8b428cf a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lazy-loading 'migration_context' on Instance uuid fc5c09c4-3e90-4b31-8610-2e555b7e2406 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:47:37 compute-0 nova_compute[250018]: 2026-01-20 14:47:37.363 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:47:37 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:47:37.363 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=31, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=30) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:47:37 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:47:37.365 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 14:47:37 compute-0 nova_compute[250018]: 2026-01-20 14:47:37.429 250022 DEBUG nova.virt.libvirt.driver [None req-657641bc-afc9-41eb-81cc-683cd8b428cf a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: fc5c09c4-3e90-4b31-8610-2e555b7e2406] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 14:47:37 compute-0 nova_compute[250018]: 2026-01-20 14:47:37.430 250022 DEBUG nova.virt.libvirt.driver [None req-657641bc-afc9-41eb-81cc-683cd8b428cf a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: fc5c09c4-3e90-4b31-8610-2e555b7e2406] Ensure instance console log exists: /var/lib/nova/instances/fc5c09c4-3e90-4b31-8610-2e555b7e2406/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 14:47:37 compute-0 nova_compute[250018]: 2026-01-20 14:47:37.430 250022 DEBUG oslo_concurrency.lockutils [None req-657641bc-afc9-41eb-81cc-683cd8b428cf a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:47:37 compute-0 nova_compute[250018]: 2026-01-20 14:47:37.431 250022 DEBUG oslo_concurrency.lockutils [None req-657641bc-afc9-41eb-81cc-683cd8b428cf a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:47:37 compute-0 nova_compute[250018]: 2026-01-20 14:47:37.431 250022 DEBUG oslo_concurrency.lockutils [None req-657641bc-afc9-41eb-81cc-683cd8b428cf a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:47:37 compute-0 ceph-mon[74360]: pgmap v1783: 321 pgs: 321 active+clean; 407 MiB data, 966 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 6.1 MiB/s wr, 173 op/s
Jan 20 14:47:37 compute-0 ceph-mon[74360]: osdmap e239: 3 total, 3 up, 3 in
Jan 20 14:47:37 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/1261327825' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:47:37 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/3809842418' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:47:37 compute-0 nova_compute[250018]: 2026-01-20 14:47:37.578 250022 DEBUG nova.network.neutron [None req-657641bc-afc9-41eb-81cc-683cd8b428cf a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: fc5c09c4-3e90-4b31-8610-2e555b7e2406] Successfully created port: 35f2fe4e-cb8e-467e-82ec-c12e870ac8a3 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 20 14:47:37 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1785: 321 pgs: 321 active+clean; 411 MiB data, 966 MiB used, 20 GiB / 21 GiB avail; 342 KiB/s rd, 1.8 MiB/s wr, 82 op/s
Jan 20 14:47:37 compute-0 nova_compute[250018]: 2026-01-20 14:47:37.916 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:47:38 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:47:38 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:47:38 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:47:38.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:47:38 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:47:38 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:47:38 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:47:38.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:47:38 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/2610479657' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:47:38 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/2919054783' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:47:39 compute-0 nova_compute[250018]: 2026-01-20 14:47:39.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:47:39 compute-0 nova_compute[250018]: 2026-01-20 14:47:39.051 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 14:47:39 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e239 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:47:39 compute-0 ceph-mon[74360]: pgmap v1785: 321 pgs: 321 active+clean; 411 MiB data, 966 MiB used, 20 GiB / 21 GiB avail; 342 KiB/s rd, 1.8 MiB/s wr, 82 op/s
Jan 20 14:47:39 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/1209793672' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:47:39 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/3930138932' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:47:39 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1786: 321 pgs: 321 active+clean; 433 MiB data, 979 MiB used, 20 GiB / 21 GiB avail; 179 KiB/s rd, 1.6 MiB/s wr, 49 op/s
Jan 20 14:47:40 compute-0 nova_compute[250018]: 2026-01-20 14:47:40.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:47:40 compute-0 nova_compute[250018]: 2026-01-20 14:47:40.071 250022 DEBUG nova.compute.manager [req-e5d6d3e2-93ab-47ad-a0ea-59b1a68ded25 req-d619e353-1830-45b0-a608-bce4a2a63b90 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: bf7690ac-9b5a-41e3-83bf-3c83cbacc45c] Received event network-vif-plugged-5659965f-0485-4982-898c-f273d7898a5f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:47:40 compute-0 nova_compute[250018]: 2026-01-20 14:47:40.072 250022 DEBUG oslo_concurrency.lockutils [req-e5d6d3e2-93ab-47ad-a0ea-59b1a68ded25 req-d619e353-1830-45b0-a608-bce4a2a63b90 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "bf7690ac-9b5a-41e3-83bf-3c83cbacc45c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:47:40 compute-0 nova_compute[250018]: 2026-01-20 14:47:40.072 250022 DEBUG oslo_concurrency.lockutils [req-e5d6d3e2-93ab-47ad-a0ea-59b1a68ded25 req-d619e353-1830-45b0-a608-bce4a2a63b90 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "bf7690ac-9b5a-41e3-83bf-3c83cbacc45c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:47:40 compute-0 nova_compute[250018]: 2026-01-20 14:47:40.072 250022 DEBUG oslo_concurrency.lockutils [req-e5d6d3e2-93ab-47ad-a0ea-59b1a68ded25 req-d619e353-1830-45b0-a608-bce4a2a63b90 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "bf7690ac-9b5a-41e3-83bf-3c83cbacc45c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:47:40 compute-0 nova_compute[250018]: 2026-01-20 14:47:40.073 250022 DEBUG nova.compute.manager [req-e5d6d3e2-93ab-47ad-a0ea-59b1a68ded25 req-d619e353-1830-45b0-a608-bce4a2a63b90 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: bf7690ac-9b5a-41e3-83bf-3c83cbacc45c] No waiting events found dispatching network-vif-plugged-5659965f-0485-4982-898c-f273d7898a5f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:47:40 compute-0 nova_compute[250018]: 2026-01-20 14:47:40.073 250022 WARNING nova.compute.manager [req-e5d6d3e2-93ab-47ad-a0ea-59b1a68ded25 req-d619e353-1830-45b0-a608-bce4a2a63b90 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: bf7690ac-9b5a-41e3-83bf-3c83cbacc45c] Received unexpected event network-vif-plugged-5659965f-0485-4982-898c-f273d7898a5f for instance with vm_state resized and task_state None.
Jan 20 14:47:40 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:47:40 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:47:40 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:47:40.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:47:40 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:47:40 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:47:40 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:47:40.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:47:40 compute-0 nova_compute[250018]: 2026-01-20 14:47:40.846 250022 DEBUG nova.network.neutron [None req-657641bc-afc9-41eb-81cc-683cd8b428cf a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: fc5c09c4-3e90-4b31-8610-2e555b7e2406] Successfully updated port: 35f2fe4e-cb8e-467e-82ec-c12e870ac8a3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 20 14:47:40 compute-0 nova_compute[250018]: 2026-01-20 14:47:40.896 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:47:41 compute-0 sshd-session[307495]: Invalid user test from 157.245.78.139 port 36250
Jan 20 14:47:41 compute-0 sshd-session[307495]: Connection closed by invalid user test 157.245.78.139 port 36250 [preauth]
Jan 20 14:47:41 compute-0 nova_compute[250018]: 2026-01-20 14:47:41.523 250022 DEBUG oslo_concurrency.lockutils [None req-657641bc-afc9-41eb-81cc-683cd8b428cf a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Acquiring lock "refresh_cache-fc5c09c4-3e90-4b31-8610-2e555b7e2406" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:47:41 compute-0 nova_compute[250018]: 2026-01-20 14:47:41.523 250022 DEBUG oslo_concurrency.lockutils [None req-657641bc-afc9-41eb-81cc-683cd8b428cf a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Acquired lock "refresh_cache-fc5c09c4-3e90-4b31-8610-2e555b7e2406" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:47:41 compute-0 nova_compute[250018]: 2026-01-20 14:47:41.524 250022 DEBUG nova.network.neutron [None req-657641bc-afc9-41eb-81cc-683cd8b428cf a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: fc5c09c4-3e90-4b31-8610-2e555b7e2406] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:47:41 compute-0 ceph-osd[84815]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #47. Immutable memtables: 4.
Jan 20 14:47:41 compute-0 ceph-mon[74360]: pgmap v1786: 321 pgs: 321 active+clean; 433 MiB data, 979 MiB used, 20 GiB / 21 GiB avail; 179 KiB/s rd, 1.6 MiB/s wr, 49 op/s
Jan 20 14:47:41 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1787: 321 pgs: 321 active+clean; 453 MiB data, 991 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 2.2 MiB/s wr, 115 op/s
Jan 20 14:47:41 compute-0 nova_compute[250018]: 2026-01-20 14:47:41.918 250022 DEBUG nova.network.neutron [None req-657641bc-afc9-41eb-81cc-683cd8b428cf a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: fc5c09c4-3e90-4b31-8610-2e555b7e2406] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 14:47:42 compute-0 nova_compute[250018]: 2026-01-20 14:47:42.198 250022 DEBUG nova.compute.manager [req-990880ae-da07-4c7a-8c79-8cae7b2be5de req-426c5b10-81e3-4502-90df-154545b3da6b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: fc5c09c4-3e90-4b31-8610-2e555b7e2406] Received event network-changed-35f2fe4e-cb8e-467e-82ec-c12e870ac8a3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:47:42 compute-0 nova_compute[250018]: 2026-01-20 14:47:42.198 250022 DEBUG nova.compute.manager [req-990880ae-da07-4c7a-8c79-8cae7b2be5de req-426c5b10-81e3-4502-90df-154545b3da6b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: fc5c09c4-3e90-4b31-8610-2e555b7e2406] Refreshing instance network info cache due to event network-changed-35f2fe4e-cb8e-467e-82ec-c12e870ac8a3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:47:42 compute-0 nova_compute[250018]: 2026-01-20 14:47:42.199 250022 DEBUG oslo_concurrency.lockutils [req-990880ae-da07-4c7a-8c79-8cae7b2be5de req-426c5b10-81e3-4502-90df-154545b3da6b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-fc5c09c4-3e90-4b31-8610-2e555b7e2406" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:47:42 compute-0 nova_compute[250018]: 2026-01-20 14:47:42.399 250022 DEBUG nova.compute.manager [req-785b2972-3136-48ad-9a7a-61ed3d572c4e req-ecca0e1c-631a-4ec2-993c-731a091d73a7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: bf7690ac-9b5a-41e3-83bf-3c83cbacc45c] Received event network-vif-plugged-5659965f-0485-4982-898c-f273d7898a5f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:47:42 compute-0 nova_compute[250018]: 2026-01-20 14:47:42.400 250022 DEBUG oslo_concurrency.lockutils [req-785b2972-3136-48ad-9a7a-61ed3d572c4e req-ecca0e1c-631a-4ec2-993c-731a091d73a7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "bf7690ac-9b5a-41e3-83bf-3c83cbacc45c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:47:42 compute-0 nova_compute[250018]: 2026-01-20 14:47:42.401 250022 DEBUG oslo_concurrency.lockutils [req-785b2972-3136-48ad-9a7a-61ed3d572c4e req-ecca0e1c-631a-4ec2-993c-731a091d73a7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "bf7690ac-9b5a-41e3-83bf-3c83cbacc45c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:47:42 compute-0 nova_compute[250018]: 2026-01-20 14:47:42.401 250022 DEBUG oslo_concurrency.lockutils [req-785b2972-3136-48ad-9a7a-61ed3d572c4e req-ecca0e1c-631a-4ec2-993c-731a091d73a7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "bf7690ac-9b5a-41e3-83bf-3c83cbacc45c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:47:42 compute-0 nova_compute[250018]: 2026-01-20 14:47:42.401 250022 DEBUG nova.compute.manager [req-785b2972-3136-48ad-9a7a-61ed3d572c4e req-ecca0e1c-631a-4ec2-993c-731a091d73a7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: bf7690ac-9b5a-41e3-83bf-3c83cbacc45c] No waiting events found dispatching network-vif-plugged-5659965f-0485-4982-898c-f273d7898a5f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:47:42 compute-0 nova_compute[250018]: 2026-01-20 14:47:42.402 250022 WARNING nova.compute.manager [req-785b2972-3136-48ad-9a7a-61ed3d572c4e req-ecca0e1c-631a-4ec2-993c-731a091d73a7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: bf7690ac-9b5a-41e3-83bf-3c83cbacc45c] Received unexpected event network-vif-plugged-5659965f-0485-4982-898c-f273d7898a5f for instance with vm_state resized and task_state None.
Jan 20 14:47:42 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:47:42 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:47:42 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:47:42.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:47:42 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:47:42 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:47:42 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:47:42.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:47:42 compute-0 nova_compute[250018]: 2026-01-20 14:47:42.571 250022 DEBUG oslo_concurrency.lockutils [None req-78c57d6e-f9b9-4f93-bbb4-c780048e481a 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Acquiring lock "bf7690ac-9b5a-41e3-83bf-3c83cbacc45c" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:47:42 compute-0 nova_compute[250018]: 2026-01-20 14:47:42.572 250022 DEBUG oslo_concurrency.lockutils [None req-78c57d6e-f9b9-4f93-bbb4-c780048e481a 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Lock "bf7690ac-9b5a-41e3-83bf-3c83cbacc45c" acquired by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:47:42 compute-0 nova_compute[250018]: 2026-01-20 14:47:42.573 250022 DEBUG nova.compute.manager [None req-78c57d6e-f9b9-4f93-bbb4-c780048e481a 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: bf7690ac-9b5a-41e3-83bf-3c83cbacc45c] Going to confirm migration 11 do_confirm_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:4679
Jan 20 14:47:42 compute-0 nova_compute[250018]: 2026-01-20 14:47:42.920 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:47:43 compute-0 nova_compute[250018]: 2026-01-20 14:47:43.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:47:43 compute-0 nova_compute[250018]: 2026-01-20 14:47:43.051 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 14:47:43 compute-0 nova_compute[250018]: 2026-01-20 14:47:43.051 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 14:47:43 compute-0 nova_compute[250018]: 2026-01-20 14:47:43.083 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] [instance: fc5c09c4-3e90-4b31-8610-2e555b7e2406] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 20 14:47:43 compute-0 nova_compute[250018]: 2026-01-20 14:47:43.083 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 20 14:47:43 compute-0 nova_compute[250018]: 2026-01-20 14:47:43.294 250022 DEBUG nova.network.neutron [None req-657641bc-afc9-41eb-81cc-683cd8b428cf a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: fc5c09c4-3e90-4b31-8610-2e555b7e2406] Updating instance_info_cache with network_info: [{"id": "35f2fe4e-cb8e-467e-82ec-c12e870ac8a3", "address": "fa:16:3e:e7:a1:c2", "network": {"id": "3379e2b3-ffb2-4391-969b-c9dc51bfbe25", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1112843240-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acb30fbc0e3749e390d7f867060b5a2a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap35f2fe4e-cb", "ovs_interfaceid": "35f2fe4e-cb8e-467e-82ec-c12e870ac8a3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:47:43 compute-0 nova_compute[250018]: 2026-01-20 14:47:43.296 250022 DEBUG neutronclient.v2_0.client [None req-78c57d6e-f9b9-4f93-bbb4-c780048e481a 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Error message: {"NeutronError": {"type": "PortBindingNotFound", "message": "Binding for port 5659965f-0485-4982-898c-f273d7898a5f for host compute-0.ctlplane.example.com could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262
Jan 20 14:47:43 compute-0 nova_compute[250018]: 2026-01-20 14:47:43.296 250022 DEBUG oslo_concurrency.lockutils [None req-78c57d6e-f9b9-4f93-bbb4-c780048e481a 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Acquiring lock "refresh_cache-bf7690ac-9b5a-41e3-83bf-3c83cbacc45c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:47:43 compute-0 nova_compute[250018]: 2026-01-20 14:47:43.297 250022 DEBUG oslo_concurrency.lockutils [None req-78c57d6e-f9b9-4f93-bbb4-c780048e481a 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Acquired lock "refresh_cache-bf7690ac-9b5a-41e3-83bf-3c83cbacc45c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:47:43 compute-0 nova_compute[250018]: 2026-01-20 14:47:43.297 250022 DEBUG nova.network.neutron [None req-78c57d6e-f9b9-4f93-bbb4-c780048e481a 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: bf7690ac-9b5a-41e3-83bf-3c83cbacc45c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:47:43 compute-0 nova_compute[250018]: 2026-01-20 14:47:43.297 250022 DEBUG nova.objects.instance [None req-78c57d6e-f9b9-4f93-bbb4-c780048e481a 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Lazy-loading 'info_cache' on Instance uuid bf7690ac-9b5a-41e3-83bf-3c83cbacc45c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:47:43 compute-0 nova_compute[250018]: 2026-01-20 14:47:43.332 250022 DEBUG oslo_concurrency.lockutils [None req-657641bc-afc9-41eb-81cc-683cd8b428cf a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Releasing lock "refresh_cache-fc5c09c4-3e90-4b31-8610-2e555b7e2406" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:47:43 compute-0 nova_compute[250018]: 2026-01-20 14:47:43.332 250022 DEBUG nova.compute.manager [None req-657641bc-afc9-41eb-81cc-683cd8b428cf a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: fc5c09c4-3e90-4b31-8610-2e555b7e2406] Instance network_info: |[{"id": "35f2fe4e-cb8e-467e-82ec-c12e870ac8a3", "address": "fa:16:3e:e7:a1:c2", "network": {"id": "3379e2b3-ffb2-4391-969b-c9dc51bfbe25", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1112843240-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acb30fbc0e3749e390d7f867060b5a2a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap35f2fe4e-cb", "ovs_interfaceid": "35f2fe4e-cb8e-467e-82ec-c12e870ac8a3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 20 14:47:43 compute-0 nova_compute[250018]: 2026-01-20 14:47:43.333 250022 DEBUG oslo_concurrency.lockutils [req-990880ae-da07-4c7a-8c79-8cae7b2be5de req-426c5b10-81e3-4502-90df-154545b3da6b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-fc5c09c4-3e90-4b31-8610-2e555b7e2406" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:47:43 compute-0 nova_compute[250018]: 2026-01-20 14:47:43.333 250022 DEBUG nova.network.neutron [req-990880ae-da07-4c7a-8c79-8cae7b2be5de req-426c5b10-81e3-4502-90df-154545b3da6b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: fc5c09c4-3e90-4b31-8610-2e555b7e2406] Refreshing network info cache for port 35f2fe4e-cb8e-467e-82ec-c12e870ac8a3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:47:43 compute-0 nova_compute[250018]: 2026-01-20 14:47:43.336 250022 DEBUG nova.virt.libvirt.driver [None req-657641bc-afc9-41eb-81cc-683cd8b428cf a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: fc5c09c4-3e90-4b31-8610-2e555b7e2406] Start _get_guest_xml network_info=[{"id": "35f2fe4e-cb8e-467e-82ec-c12e870ac8a3", "address": "fa:16:3e:e7:a1:c2", "network": {"id": "3379e2b3-ffb2-4391-969b-c9dc51bfbe25", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1112843240-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acb30fbc0e3749e390d7f867060b5a2a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap35f2fe4e-cb", "ovs_interfaceid": "35f2fe4e-cb8e-467e-82ec-c12e870ac8a3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encrypted': False, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'guest_format': None, 'size': 0, 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 14:47:43 compute-0 nova_compute[250018]: 2026-01-20 14:47:43.346 250022 WARNING nova.virt.libvirt.driver [None req-657641bc-afc9-41eb-81cc-683cd8b428cf a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:47:43 compute-0 nova_compute[250018]: 2026-01-20 14:47:43.359 250022 DEBUG nova.virt.libvirt.host [None req-657641bc-afc9-41eb-81cc-683cd8b428cf a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 14:47:43 compute-0 nova_compute[250018]: 2026-01-20 14:47:43.360 250022 DEBUG nova.virt.libvirt.host [None req-657641bc-afc9-41eb-81cc-683cd8b428cf a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 14:47:43 compute-0 nova_compute[250018]: 2026-01-20 14:47:43.363 250022 DEBUG nova.virt.libvirt.host [None req-657641bc-afc9-41eb-81cc-683cd8b428cf a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 14:47:43 compute-0 nova_compute[250018]: 2026-01-20 14:47:43.364 250022 DEBUG nova.virt.libvirt.host [None req-657641bc-afc9-41eb-81cc-683cd8b428cf a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 14:47:43 compute-0 nova_compute[250018]: 2026-01-20 14:47:43.366 250022 DEBUG nova.virt.libvirt.driver [None req-657641bc-afc9-41eb-81cc-683cd8b428cf a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 14:47:43 compute-0 nova_compute[250018]: 2026-01-20 14:47:43.366 250022 DEBUG nova.virt.hardware [None req-657641bc-afc9-41eb-81cc-683cd8b428cf a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 14:47:43 compute-0 nova_compute[250018]: 2026-01-20 14:47:43.367 250022 DEBUG nova.virt.hardware [None req-657641bc-afc9-41eb-81cc-683cd8b428cf a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 14:47:43 compute-0 nova_compute[250018]: 2026-01-20 14:47:43.367 250022 DEBUG nova.virt.hardware [None req-657641bc-afc9-41eb-81cc-683cd8b428cf a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 14:47:43 compute-0 nova_compute[250018]: 2026-01-20 14:47:43.367 250022 DEBUG nova.virt.hardware [None req-657641bc-afc9-41eb-81cc-683cd8b428cf a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 14:47:43 compute-0 nova_compute[250018]: 2026-01-20 14:47:43.368 250022 DEBUG nova.virt.hardware [None req-657641bc-afc9-41eb-81cc-683cd8b428cf a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 14:47:43 compute-0 nova_compute[250018]: 2026-01-20 14:47:43.368 250022 DEBUG nova.virt.hardware [None req-657641bc-afc9-41eb-81cc-683cd8b428cf a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 14:47:43 compute-0 nova_compute[250018]: 2026-01-20 14:47:43.368 250022 DEBUG nova.virt.hardware [None req-657641bc-afc9-41eb-81cc-683cd8b428cf a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 14:47:43 compute-0 nova_compute[250018]: 2026-01-20 14:47:43.369 250022 DEBUG nova.virt.hardware [None req-657641bc-afc9-41eb-81cc-683cd8b428cf a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 14:47:43 compute-0 nova_compute[250018]: 2026-01-20 14:47:43.369 250022 DEBUG nova.virt.hardware [None req-657641bc-afc9-41eb-81cc-683cd8b428cf a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 14:47:43 compute-0 nova_compute[250018]: 2026-01-20 14:47:43.370 250022 DEBUG nova.virt.hardware [None req-657641bc-afc9-41eb-81cc-683cd8b428cf a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 14:47:43 compute-0 nova_compute[250018]: 2026-01-20 14:47:43.370 250022 DEBUG nova.virt.hardware [None req-657641bc-afc9-41eb-81cc-683cd8b428cf a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 14:47:43 compute-0 nova_compute[250018]: 2026-01-20 14:47:43.374 250022 DEBUG oslo_concurrency.processutils [None req-657641bc-afc9-41eb-81cc-683cd8b428cf a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:47:43 compute-0 ceph-mon[74360]: pgmap v1787: 321 pgs: 321 active+clean; 453 MiB data, 991 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 2.2 MiB/s wr, 115 op/s
Jan 20 14:47:43 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:47:43 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3517018525' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:47:43 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1788: 321 pgs: 321 active+clean; 453 MiB data, 991 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 143 op/s
Jan 20 14:47:43 compute-0 nova_compute[250018]: 2026-01-20 14:47:43.805 250022 DEBUG oslo_concurrency.processutils [None req-657641bc-afc9-41eb-81cc-683cd8b428cf a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:47:43 compute-0 nova_compute[250018]: 2026-01-20 14:47:43.831 250022 DEBUG nova.storage.rbd_utils [None req-657641bc-afc9-41eb-81cc-683cd8b428cf a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] rbd image fc5c09c4-3e90-4b31-8610-2e555b7e2406_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:47:43 compute-0 nova_compute[250018]: 2026-01-20 14:47:43.835 250022 DEBUG oslo_concurrency.processutils [None req-657641bc-afc9-41eb-81cc-683cd8b428cf a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:47:44 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:47:44 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1346847683' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:47:44 compute-0 nova_compute[250018]: 2026-01-20 14:47:44.262 250022 DEBUG oslo_concurrency.processutils [None req-657641bc-afc9-41eb-81cc-683cd8b428cf a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:47:44 compute-0 nova_compute[250018]: 2026-01-20 14:47:44.263 250022 DEBUG nova.virt.libvirt.vif [None req-657641bc-afc9-41eb-81cc-683cd8b428cf a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:47:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1122585928',display_name='tempest-ServerDiskConfigTestJSON-server-1122585928',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1122585928',id=102,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='acb30fbc0e3749e390d7f867060b5a2a',ramdisk_id='',reservation_id='r-o0jf2qbe',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-1806346246',owner_user_name='tempest-ServerDis
kConfigTestJSON-1806346246-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:47:36Z,user_data=None,user_id='a1bd93d04cc4468abe1d5c61f5144191',uuid=fc5c09c4-3e90-4b31-8610-2e555b7e2406,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "35f2fe4e-cb8e-467e-82ec-c12e870ac8a3", "address": "fa:16:3e:e7:a1:c2", "network": {"id": "3379e2b3-ffb2-4391-969b-c9dc51bfbe25", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1112843240-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acb30fbc0e3749e390d7f867060b5a2a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap35f2fe4e-cb", "ovs_interfaceid": "35f2fe4e-cb8e-467e-82ec-c12e870ac8a3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 14:47:44 compute-0 nova_compute[250018]: 2026-01-20 14:47:44.264 250022 DEBUG nova.network.os_vif_util [None req-657641bc-afc9-41eb-81cc-683cd8b428cf a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Converting VIF {"id": "35f2fe4e-cb8e-467e-82ec-c12e870ac8a3", "address": "fa:16:3e:e7:a1:c2", "network": {"id": "3379e2b3-ffb2-4391-969b-c9dc51bfbe25", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1112843240-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acb30fbc0e3749e390d7f867060b5a2a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap35f2fe4e-cb", "ovs_interfaceid": "35f2fe4e-cb8e-467e-82ec-c12e870ac8a3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:47:44 compute-0 nova_compute[250018]: 2026-01-20 14:47:44.265 250022 DEBUG nova.network.os_vif_util [None req-657641bc-afc9-41eb-81cc-683cd8b428cf a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e7:a1:c2,bridge_name='br-int',has_traffic_filtering=True,id=35f2fe4e-cb8e-467e-82ec-c12e870ac8a3,network=Network(3379e2b3-ffb2-4391-969b-c9dc51bfbe25),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap35f2fe4e-cb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:47:44 compute-0 nova_compute[250018]: 2026-01-20 14:47:44.266 250022 DEBUG nova.objects.instance [None req-657641bc-afc9-41eb-81cc-683cd8b428cf a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lazy-loading 'pci_devices' on Instance uuid fc5c09c4-3e90-4b31-8610-2e555b7e2406 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:47:44 compute-0 nova_compute[250018]: 2026-01-20 14:47:44.307 250022 DEBUG nova.virt.libvirt.driver [None req-657641bc-afc9-41eb-81cc-683cd8b428cf a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: fc5c09c4-3e90-4b31-8610-2e555b7e2406] End _get_guest_xml xml=<domain type="kvm">
Jan 20 14:47:44 compute-0 nova_compute[250018]:   <uuid>fc5c09c4-3e90-4b31-8610-2e555b7e2406</uuid>
Jan 20 14:47:44 compute-0 nova_compute[250018]:   <name>instance-00000066</name>
Jan 20 14:47:44 compute-0 nova_compute[250018]:   <memory>131072</memory>
Jan 20 14:47:44 compute-0 nova_compute[250018]:   <vcpu>1</vcpu>
Jan 20 14:47:44 compute-0 nova_compute[250018]:   <metadata>
Jan 20 14:47:44 compute-0 nova_compute[250018]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 14:47:44 compute-0 nova_compute[250018]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 14:47:44 compute-0 nova_compute[250018]:       <nova:name>tempest-ServerDiskConfigTestJSON-server-1122585928</nova:name>
Jan 20 14:47:44 compute-0 nova_compute[250018]:       <nova:creationTime>2026-01-20 14:47:43</nova:creationTime>
Jan 20 14:47:44 compute-0 nova_compute[250018]:       <nova:flavor name="m1.nano">
Jan 20 14:47:44 compute-0 nova_compute[250018]:         <nova:memory>128</nova:memory>
Jan 20 14:47:44 compute-0 nova_compute[250018]:         <nova:disk>1</nova:disk>
Jan 20 14:47:44 compute-0 nova_compute[250018]:         <nova:swap>0</nova:swap>
Jan 20 14:47:44 compute-0 nova_compute[250018]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 14:47:44 compute-0 nova_compute[250018]:         <nova:vcpus>1</nova:vcpus>
Jan 20 14:47:44 compute-0 nova_compute[250018]:       </nova:flavor>
Jan 20 14:47:44 compute-0 nova_compute[250018]:       <nova:owner>
Jan 20 14:47:44 compute-0 nova_compute[250018]:         <nova:user uuid="a1bd93d04cc4468abe1d5c61f5144191">tempest-ServerDiskConfigTestJSON-1806346246-project-member</nova:user>
Jan 20 14:47:44 compute-0 nova_compute[250018]:         <nova:project uuid="acb30fbc0e3749e390d7f867060b5a2a">tempest-ServerDiskConfigTestJSON-1806346246</nova:project>
Jan 20 14:47:44 compute-0 nova_compute[250018]:       </nova:owner>
Jan 20 14:47:44 compute-0 nova_compute[250018]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 14:47:44 compute-0 nova_compute[250018]:       <nova:ports>
Jan 20 14:47:44 compute-0 nova_compute[250018]:         <nova:port uuid="35f2fe4e-cb8e-467e-82ec-c12e870ac8a3">
Jan 20 14:47:44 compute-0 nova_compute[250018]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Jan 20 14:47:44 compute-0 nova_compute[250018]:         </nova:port>
Jan 20 14:47:44 compute-0 nova_compute[250018]:       </nova:ports>
Jan 20 14:47:44 compute-0 nova_compute[250018]:     </nova:instance>
Jan 20 14:47:44 compute-0 nova_compute[250018]:   </metadata>
Jan 20 14:47:44 compute-0 nova_compute[250018]:   <sysinfo type="smbios">
Jan 20 14:47:44 compute-0 nova_compute[250018]:     <system>
Jan 20 14:47:44 compute-0 nova_compute[250018]:       <entry name="manufacturer">RDO</entry>
Jan 20 14:47:44 compute-0 nova_compute[250018]:       <entry name="product">OpenStack Compute</entry>
Jan 20 14:47:44 compute-0 nova_compute[250018]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 14:47:44 compute-0 nova_compute[250018]:       <entry name="serial">fc5c09c4-3e90-4b31-8610-2e555b7e2406</entry>
Jan 20 14:47:44 compute-0 nova_compute[250018]:       <entry name="uuid">fc5c09c4-3e90-4b31-8610-2e555b7e2406</entry>
Jan 20 14:47:44 compute-0 nova_compute[250018]:       <entry name="family">Virtual Machine</entry>
Jan 20 14:47:44 compute-0 nova_compute[250018]:     </system>
Jan 20 14:47:44 compute-0 nova_compute[250018]:   </sysinfo>
Jan 20 14:47:44 compute-0 nova_compute[250018]:   <os>
Jan 20 14:47:44 compute-0 nova_compute[250018]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 14:47:44 compute-0 nova_compute[250018]:     <boot dev="hd"/>
Jan 20 14:47:44 compute-0 nova_compute[250018]:     <smbios mode="sysinfo"/>
Jan 20 14:47:44 compute-0 nova_compute[250018]:   </os>
Jan 20 14:47:44 compute-0 nova_compute[250018]:   <features>
Jan 20 14:47:44 compute-0 nova_compute[250018]:     <acpi/>
Jan 20 14:47:44 compute-0 nova_compute[250018]:     <apic/>
Jan 20 14:47:44 compute-0 nova_compute[250018]:     <vmcoreinfo/>
Jan 20 14:47:44 compute-0 nova_compute[250018]:   </features>
Jan 20 14:47:44 compute-0 nova_compute[250018]:   <clock offset="utc">
Jan 20 14:47:44 compute-0 nova_compute[250018]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 14:47:44 compute-0 nova_compute[250018]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 14:47:44 compute-0 nova_compute[250018]:     <timer name="hpet" present="no"/>
Jan 20 14:47:44 compute-0 nova_compute[250018]:   </clock>
Jan 20 14:47:44 compute-0 nova_compute[250018]:   <cpu mode="custom" match="exact">
Jan 20 14:47:44 compute-0 nova_compute[250018]:     <model>Nehalem</model>
Jan 20 14:47:44 compute-0 nova_compute[250018]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 14:47:44 compute-0 nova_compute[250018]:   </cpu>
Jan 20 14:47:44 compute-0 nova_compute[250018]:   <devices>
Jan 20 14:47:44 compute-0 nova_compute[250018]:     <disk type="network" device="disk">
Jan 20 14:47:44 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 14:47:44 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/fc5c09c4-3e90-4b31-8610-2e555b7e2406_disk">
Jan 20 14:47:44 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:47:44 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:47:44 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:47:44 compute-0 nova_compute[250018]:       </source>
Jan 20 14:47:44 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 14:47:44 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:47:44 compute-0 nova_compute[250018]:       </auth>
Jan 20 14:47:44 compute-0 nova_compute[250018]:       <target dev="vda" bus="virtio"/>
Jan 20 14:47:44 compute-0 nova_compute[250018]:     </disk>
Jan 20 14:47:44 compute-0 nova_compute[250018]:     <disk type="network" device="cdrom">
Jan 20 14:47:44 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 14:47:44 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/fc5c09c4-3e90-4b31-8610-2e555b7e2406_disk.config">
Jan 20 14:47:44 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:47:44 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:47:44 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:47:44 compute-0 nova_compute[250018]:       </source>
Jan 20 14:47:44 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 14:47:44 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:47:44 compute-0 nova_compute[250018]:       </auth>
Jan 20 14:47:44 compute-0 nova_compute[250018]:       <target dev="sda" bus="sata"/>
Jan 20 14:47:44 compute-0 nova_compute[250018]:     </disk>
Jan 20 14:47:44 compute-0 nova_compute[250018]:     <interface type="ethernet">
Jan 20 14:47:44 compute-0 nova_compute[250018]:       <mac address="fa:16:3e:e7:a1:c2"/>
Jan 20 14:47:44 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 14:47:44 compute-0 nova_compute[250018]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 14:47:44 compute-0 nova_compute[250018]:       <mtu size="1442"/>
Jan 20 14:47:44 compute-0 nova_compute[250018]:       <target dev="tap35f2fe4e-cb"/>
Jan 20 14:47:44 compute-0 nova_compute[250018]:     </interface>
Jan 20 14:47:44 compute-0 nova_compute[250018]:     <serial type="pty">
Jan 20 14:47:44 compute-0 nova_compute[250018]:       <log file="/var/lib/nova/instances/fc5c09c4-3e90-4b31-8610-2e555b7e2406/console.log" append="off"/>
Jan 20 14:47:44 compute-0 nova_compute[250018]:     </serial>
Jan 20 14:47:44 compute-0 nova_compute[250018]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 14:47:44 compute-0 nova_compute[250018]:     <video>
Jan 20 14:47:44 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 14:47:44 compute-0 nova_compute[250018]:     </video>
Jan 20 14:47:44 compute-0 nova_compute[250018]:     <input type="tablet" bus="usb"/>
Jan 20 14:47:44 compute-0 nova_compute[250018]:     <rng model="virtio">
Jan 20 14:47:44 compute-0 nova_compute[250018]:       <backend model="random">/dev/urandom</backend>
Jan 20 14:47:44 compute-0 nova_compute[250018]:     </rng>
Jan 20 14:47:44 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root"/>
Jan 20 14:47:44 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:47:44 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:47:44 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:47:44 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:47:44 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:47:44 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:47:44 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:47:44 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:47:44 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:47:44 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:47:44 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:47:44 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:47:44 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:47:44 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:47:44 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:47:44 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:47:44 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:47:44 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:47:44 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:47:44 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:47:44 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:47:44 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:47:44 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:47:44 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:47:44 compute-0 nova_compute[250018]:     <controller type="usb" index="0"/>
Jan 20 14:47:44 compute-0 nova_compute[250018]:     <memballoon model="virtio">
Jan 20 14:47:44 compute-0 nova_compute[250018]:       <stats period="10"/>
Jan 20 14:47:44 compute-0 nova_compute[250018]:     </memballoon>
Jan 20 14:47:44 compute-0 nova_compute[250018]:   </devices>
Jan 20 14:47:44 compute-0 nova_compute[250018]: </domain>
Jan 20 14:47:44 compute-0 nova_compute[250018]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 14:47:44 compute-0 nova_compute[250018]: 2026-01-20 14:47:44.308 250022 DEBUG nova.compute.manager [None req-657641bc-afc9-41eb-81cc-683cd8b428cf a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: fc5c09c4-3e90-4b31-8610-2e555b7e2406] Preparing to wait for external event network-vif-plugged-35f2fe4e-cb8e-467e-82ec-c12e870ac8a3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 20 14:47:44 compute-0 nova_compute[250018]: 2026-01-20 14:47:44.308 250022 DEBUG oslo_concurrency.lockutils [None req-657641bc-afc9-41eb-81cc-683cd8b428cf a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Acquiring lock "fc5c09c4-3e90-4b31-8610-2e555b7e2406-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:47:44 compute-0 nova_compute[250018]: 2026-01-20 14:47:44.309 250022 DEBUG oslo_concurrency.lockutils [None req-657641bc-afc9-41eb-81cc-683cd8b428cf a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lock "fc5c09c4-3e90-4b31-8610-2e555b7e2406-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:47:44 compute-0 nova_compute[250018]: 2026-01-20 14:47:44.309 250022 DEBUG oslo_concurrency.lockutils [None req-657641bc-afc9-41eb-81cc-683cd8b428cf a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lock "fc5c09c4-3e90-4b31-8610-2e555b7e2406-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:47:44 compute-0 nova_compute[250018]: 2026-01-20 14:47:44.310 250022 DEBUG nova.virt.libvirt.vif [None req-657641bc-afc9-41eb-81cc-683cd8b428cf a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:47:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1122585928',display_name='tempest-ServerDiskConfigTestJSON-server-1122585928',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1122585928',id=102,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='acb30fbc0e3749e390d7f867060b5a2a',ramdisk_id='',reservation_id='r-o0jf2qbe',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-1806346246',owner_user_name='tempest-ServerDiskConfigTestJSON-1806346246-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:47:36Z,user_data=None,user_id='a1bd93d04cc4468abe1d5c61f5144191',uuid=fc5c09c4-3e90-4b31-8610-2e555b7e2406,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "35f2fe4e-cb8e-467e-82ec-c12e870ac8a3", "address": "fa:16:3e:e7:a1:c2", "network": {"id": "3379e2b3-ffb2-4391-969b-c9dc51bfbe25", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1112843240-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acb30fbc0e3749e390d7f867060b5a2a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap35f2fe4e-cb", "ovs_interfaceid": "35f2fe4e-cb8e-467e-82ec-c12e870ac8a3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 14:47:44 compute-0 nova_compute[250018]: 2026-01-20 14:47:44.310 250022 DEBUG nova.network.os_vif_util [None req-657641bc-afc9-41eb-81cc-683cd8b428cf a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Converting VIF {"id": "35f2fe4e-cb8e-467e-82ec-c12e870ac8a3", "address": "fa:16:3e:e7:a1:c2", "network": {"id": "3379e2b3-ffb2-4391-969b-c9dc51bfbe25", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1112843240-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acb30fbc0e3749e390d7f867060b5a2a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap35f2fe4e-cb", "ovs_interfaceid": "35f2fe4e-cb8e-467e-82ec-c12e870ac8a3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:47:44 compute-0 nova_compute[250018]: 2026-01-20 14:47:44.311 250022 DEBUG nova.network.os_vif_util [None req-657641bc-afc9-41eb-81cc-683cd8b428cf a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e7:a1:c2,bridge_name='br-int',has_traffic_filtering=True,id=35f2fe4e-cb8e-467e-82ec-c12e870ac8a3,network=Network(3379e2b3-ffb2-4391-969b-c9dc51bfbe25),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap35f2fe4e-cb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:47:44 compute-0 nova_compute[250018]: 2026-01-20 14:47:44.312 250022 DEBUG os_vif [None req-657641bc-afc9-41eb-81cc-683cd8b428cf a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e7:a1:c2,bridge_name='br-int',has_traffic_filtering=True,id=35f2fe4e-cb8e-467e-82ec-c12e870ac8a3,network=Network(3379e2b3-ffb2-4391-969b-c9dc51bfbe25),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap35f2fe4e-cb') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 14:47:44 compute-0 nova_compute[250018]: 2026-01-20 14:47:44.317 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:47:44 compute-0 nova_compute[250018]: 2026-01-20 14:47:44.317 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:47:44 compute-0 nova_compute[250018]: 2026-01-20 14:47:44.318 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:47:44 compute-0 nova_compute[250018]: 2026-01-20 14:47:44.321 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:47:44 compute-0 nova_compute[250018]: 2026-01-20 14:47:44.321 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap35f2fe4e-cb, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:47:44 compute-0 nova_compute[250018]: 2026-01-20 14:47:44.321 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap35f2fe4e-cb, col_values=(('external_ids', {'iface-id': '35f2fe4e-cb8e-467e-82ec-c12e870ac8a3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e7:a1:c2', 'vm-uuid': 'fc5c09c4-3e90-4b31-8610-2e555b7e2406'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:47:44 compute-0 NetworkManager[48960]: <info>  [1768920464.3752] manager: (tap35f2fe4e-cb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/166)
Jan 20 14:47:44 compute-0 nova_compute[250018]: 2026-01-20 14:47:44.375 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:47:44 compute-0 nova_compute[250018]: 2026-01-20 14:47:44.378 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:47:44 compute-0 nova_compute[250018]: 2026-01-20 14:47:44.379 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:47:44 compute-0 nova_compute[250018]: 2026-01-20 14:47:44.380 250022 INFO os_vif [None req-657641bc-afc9-41eb-81cc-683cd8b428cf a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e7:a1:c2,bridge_name='br-int',has_traffic_filtering=True,id=35f2fe4e-cb8e-467e-82ec-c12e870ac8a3,network=Network(3379e2b3-ffb2-4391-969b-c9dc51bfbe25),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap35f2fe4e-cb')
Jan 20 14:47:44 compute-0 nova_compute[250018]: 2026-01-20 14:47:44.449 250022 DEBUG nova.virt.libvirt.driver [None req-657641bc-afc9-41eb-81cc-683cd8b428cf a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:47:44 compute-0 nova_compute[250018]: 2026-01-20 14:47:44.450 250022 DEBUG nova.virt.libvirt.driver [None req-657641bc-afc9-41eb-81cc-683cd8b428cf a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:47:44 compute-0 nova_compute[250018]: 2026-01-20 14:47:44.451 250022 DEBUG nova.virt.libvirt.driver [None req-657641bc-afc9-41eb-81cc-683cd8b428cf a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] No VIF found with MAC fa:16:3e:e7:a1:c2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 14:47:44 compute-0 nova_compute[250018]: 2026-01-20 14:47:44.451 250022 INFO nova.virt.libvirt.driver [None req-657641bc-afc9-41eb-81cc-683cd8b428cf a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: fc5c09c4-3e90-4b31-8610-2e555b7e2406] Using config drive
Jan 20 14:47:44 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:47:44 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:47:44 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:47:44.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:47:44 compute-0 nova_compute[250018]: 2026-01-20 14:47:44.477 250022 DEBUG nova.storage.rbd_utils [None req-657641bc-afc9-41eb-81cc-683cd8b428cf a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] rbd image fc5c09c4-3e90-4b31-8610-2e555b7e2406_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:47:44 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:47:44 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:47:44 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:47:44.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:47:44 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e239 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:47:45 compute-0 nova_compute[250018]: 2026-01-20 14:47:45.147 250022 INFO nova.virt.libvirt.driver [None req-657641bc-afc9-41eb-81cc-683cd8b428cf a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: fc5c09c4-3e90-4b31-8610-2e555b7e2406] Creating config drive at /var/lib/nova/instances/fc5c09c4-3e90-4b31-8610-2e555b7e2406/disk.config
Jan 20 14:47:45 compute-0 nova_compute[250018]: 2026-01-20 14:47:45.153 250022 DEBUG oslo_concurrency.processutils [None req-657641bc-afc9-41eb-81cc-683cd8b428cf a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/fc5c09c4-3e90-4b31-8610-2e555b7e2406/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkzyteu0o execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:47:45 compute-0 nova_compute[250018]: 2026-01-20 14:47:45.284 250022 DEBUG oslo_concurrency.processutils [None req-657641bc-afc9-41eb-81cc-683cd8b428cf a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/fc5c09c4-3e90-4b31-8610-2e555b7e2406/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkzyteu0o" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:47:45 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3517018525' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:47:45 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/1346847683' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:47:45 compute-0 nova_compute[250018]: 2026-01-20 14:47:45.318 250022 DEBUG nova.storage.rbd_utils [None req-657641bc-afc9-41eb-81cc-683cd8b428cf a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] rbd image fc5c09c4-3e90-4b31-8610-2e555b7e2406_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:47:45 compute-0 nova_compute[250018]: 2026-01-20 14:47:45.323 250022 DEBUG oslo_concurrency.processutils [None req-657641bc-afc9-41eb-81cc-683cd8b428cf a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/fc5c09c4-3e90-4b31-8610-2e555b7e2406/disk.config fc5c09c4-3e90-4b31-8610-2e555b7e2406_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:47:45 compute-0 nova_compute[250018]: 2026-01-20 14:47:45.406 250022 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768920450.4017537, bf7690ac-9b5a-41e3-83bf-3c83cbacc45c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:47:45 compute-0 nova_compute[250018]: 2026-01-20 14:47:45.407 250022 INFO nova.compute.manager [-] [instance: bf7690ac-9b5a-41e3-83bf-3c83cbacc45c] VM Stopped (Lifecycle Event)
Jan 20 14:47:45 compute-0 nova_compute[250018]: 2026-01-20 14:47:45.438 250022 DEBUG nova.compute.manager [None req-4412bf7a-a066-42d6-a479-518f3304b7f5 - - - - - -] [instance: bf7690ac-9b5a-41e3-83bf-3c83cbacc45c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:47:45 compute-0 nova_compute[250018]: 2026-01-20 14:47:45.442 250022 DEBUG nova.compute.manager [None req-4412bf7a-a066-42d6-a479-518f3304b7f5 - - - - - -] [instance: bf7690ac-9b5a-41e3-83bf-3c83cbacc45c] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: resized, current task_state: deleting, current DB power_state: 1, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:47:45 compute-0 nova_compute[250018]: 2026-01-20 14:47:45.464 250022 INFO nova.compute.manager [None req-4412bf7a-a066-42d6-a479-518f3304b7f5 - - - - - -] [instance: bf7690ac-9b5a-41e3-83bf-3c83cbacc45c] During the sync_power process the instance has moved from host compute-2.ctlplane.example.com to host compute-0.ctlplane.example.com
Jan 20 14:47:45 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1789: 321 pgs: 321 active+clean; 453 MiB data, 1009 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 2.1 MiB/s wr, 190 op/s
Jan 20 14:47:45 compute-0 nova_compute[250018]: 2026-01-20 14:47:45.887 250022 DEBUG nova.network.neutron [req-990880ae-da07-4c7a-8c79-8cae7b2be5de req-426c5b10-81e3-4502-90df-154545b3da6b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: fc5c09c4-3e90-4b31-8610-2e555b7e2406] Updated VIF entry in instance network info cache for port 35f2fe4e-cb8e-467e-82ec-c12e870ac8a3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 14:47:45 compute-0 nova_compute[250018]: 2026-01-20 14:47:45.888 250022 DEBUG nova.network.neutron [req-990880ae-da07-4c7a-8c79-8cae7b2be5de req-426c5b10-81e3-4502-90df-154545b3da6b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: fc5c09c4-3e90-4b31-8610-2e555b7e2406] Updating instance_info_cache with network_info: [{"id": "35f2fe4e-cb8e-467e-82ec-c12e870ac8a3", "address": "fa:16:3e:e7:a1:c2", "network": {"id": "3379e2b3-ffb2-4391-969b-c9dc51bfbe25", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1112843240-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acb30fbc0e3749e390d7f867060b5a2a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap35f2fe4e-cb", "ovs_interfaceid": "35f2fe4e-cb8e-467e-82ec-c12e870ac8a3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:47:45 compute-0 nova_compute[250018]: 2026-01-20 14:47:45.892 250022 DEBUG nova.network.neutron [None req-78c57d6e-f9b9-4f93-bbb4-c780048e481a 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] [instance: bf7690ac-9b5a-41e3-83bf-3c83cbacc45c] Updating instance_info_cache with network_info: [{"id": "5659965f-0485-4982-898c-f273d7898a5f", "address": "fa:16:3e:b7:cb:a9", "network": {"id": "fbd5d614-a7d3-4563-913c-104506628e59", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-60721994-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b31139b2a4e49cba5e7048febf901c4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5659965f-04", "ovs_interfaceid": "5659965f-0485-4982-898c-f273d7898a5f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:47:45 compute-0 nova_compute[250018]: 2026-01-20 14:47:45.909 250022 DEBUG oslo_concurrency.lockutils [req-990880ae-da07-4c7a-8c79-8cae7b2be5de req-426c5b10-81e3-4502-90df-154545b3da6b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-fc5c09c4-3e90-4b31-8610-2e555b7e2406" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:47:45 compute-0 nova_compute[250018]: 2026-01-20 14:47:45.912 250022 DEBUG oslo_concurrency.lockutils [None req-78c57d6e-f9b9-4f93-bbb4-c780048e481a 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Releasing lock "refresh_cache-bf7690ac-9b5a-41e3-83bf-3c83cbacc45c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:47:45 compute-0 nova_compute[250018]: 2026-01-20 14:47:45.913 250022 DEBUG nova.objects.instance [None req-78c57d6e-f9b9-4f93-bbb4-c780048e481a 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Lazy-loading 'migration_context' on Instance uuid bf7690ac-9b5a-41e3-83bf-3c83cbacc45c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:47:46 compute-0 nova_compute[250018]: 2026-01-20 14:47:46.128 250022 DEBUG nova.storage.rbd_utils [None req-78c57d6e-f9b9-4f93-bbb4-c780048e481a 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] removing snapshot(nova-resize) on rbd image(bf7690ac-9b5a-41e3-83bf-3c83cbacc45c_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Jan 20 14:47:46 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:47:46 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:47:46 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:47:46.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:47:46 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:47:46 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:47:46 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:47:46.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:47:46 compute-0 ceph-mon[74360]: pgmap v1788: 321 pgs: 321 active+clean; 453 MiB data, 991 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 143 op/s
Jan 20 14:47:47 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:47:47.367 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=367c1a2c-b16a-4828-ab5a-626bb50023b4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '31'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:47:47 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1790: 321 pgs: 321 active+clean; 453 MiB data, 1013 MiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 1.9 MiB/s wr, 201 op/s
Jan 20 14:47:47 compute-0 ceph-mon[74360]: pgmap v1789: 321 pgs: 321 active+clean; 453 MiB data, 1009 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 2.1 MiB/s wr, 190 op/s
Jan 20 14:47:47 compute-0 nova_compute[250018]: 2026-01-20 14:47:47.921 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:47:48 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:47:48 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:47:48 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:47:48.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:47:48 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:47:48 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:47:48 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:47:48.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:47:48 compute-0 sudo[307659]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:47:48 compute-0 sudo[307659]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:47:48 compute-0 sudo[307659]: pam_unix(sudo:session): session closed for user root
Jan 20 14:47:48 compute-0 sudo[307684]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:47:48 compute-0 sudo[307684]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:47:48 compute-0 sudo[307684]: pam_unix(sudo:session): session closed for user root
Jan 20 14:47:48 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e239 do_prune osdmap full prune enabled
Jan 20 14:47:48 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e240 e240: 3 total, 3 up, 3 in
Jan 20 14:47:48 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e240: 3 total, 3 up, 3 in
Jan 20 14:47:48 compute-0 nova_compute[250018]: 2026-01-20 14:47:48.938 250022 DEBUG oslo_concurrency.processutils [None req-657641bc-afc9-41eb-81cc-683cd8b428cf a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/fc5c09c4-3e90-4b31-8610-2e555b7e2406/disk.config fc5c09c4-3e90-4b31-8610-2e555b7e2406_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 3.615s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:47:48 compute-0 nova_compute[250018]: 2026-01-20 14:47:48.939 250022 INFO nova.virt.libvirt.driver [None req-657641bc-afc9-41eb-81cc-683cd8b428cf a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: fc5c09c4-3e90-4b31-8610-2e555b7e2406] Deleting local config drive /var/lib/nova/instances/fc5c09c4-3e90-4b31-8610-2e555b7e2406/disk.config because it was imported into RBD.
Jan 20 14:47:48 compute-0 nova_compute[250018]: 2026-01-20 14:47:48.983 250022 DEBUG nova.virt.libvirt.vif [None req-78c57d6e-f9b9-4f93-bbb4-c780048e481a 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:47:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1331472194',display_name='tempest-DeleteServersTestJSON-server-1331472194',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1331472194',id=99,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:47:38Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-2.ctlplane.example.com',numa_topology=<?>,old_flavor=Flavor(1),os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3b31139b2a4e49cba5e7048febf901c4',ramdisk_id='',reservation_id='r-5fm4q1fz',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-DeleteServersTestJSON-1162922273',owner_user_name='tempest-DeleteServersTestJSON-1162922273-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T14:47:42Z,user_data=None,user_id='37e9ef97fbe0448e9fbe32d48b66211f',uuid=bf7690ac-9b5a-41e3-83bf-3c83cbacc45c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='resized') vif={"id": "5659965f-0485-4982-898c-f273d7898a5f", "address": "fa:16:3e:b7:cb:a9", "network": {"id": "fbd5d614-a7d3-4563-913c-104506628e59", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-60721994-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b31139b2a4e49cba5e7048febf901c4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5659965f-04", "ovs_interfaceid": "5659965f-0485-4982-898c-f273d7898a5f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 14:47:48 compute-0 nova_compute[250018]: 2026-01-20 14:47:48.984 250022 DEBUG nova.network.os_vif_util [None req-78c57d6e-f9b9-4f93-bbb4-c780048e481a 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Converting VIF {"id": "5659965f-0485-4982-898c-f273d7898a5f", "address": "fa:16:3e:b7:cb:a9", "network": {"id": "fbd5d614-a7d3-4563-913c-104506628e59", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-60721994-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b31139b2a4e49cba5e7048febf901c4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5659965f-04", "ovs_interfaceid": "5659965f-0485-4982-898c-f273d7898a5f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:47:48 compute-0 nova_compute[250018]: 2026-01-20 14:47:48.984 250022 DEBUG nova.network.os_vif_util [None req-78c57d6e-f9b9-4f93-bbb4-c780048e481a 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b7:cb:a9,bridge_name='br-int',has_traffic_filtering=True,id=5659965f-0485-4982-898c-f273d7898a5f,network=Network(fbd5d614-a7d3-4563-913c-104506628e59),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5659965f-04') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:47:48 compute-0 nova_compute[250018]: 2026-01-20 14:47:48.985 250022 DEBUG os_vif [None req-78c57d6e-f9b9-4f93-bbb4-c780048e481a 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b7:cb:a9,bridge_name='br-int',has_traffic_filtering=True,id=5659965f-0485-4982-898c-f273d7898a5f,network=Network(fbd5d614-a7d3-4563-913c-104506628e59),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5659965f-04') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 14:47:48 compute-0 nova_compute[250018]: 2026-01-20 14:47:48.986 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:47:48 compute-0 nova_compute[250018]: 2026-01-20 14:47:48.987 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5659965f-04, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:47:48 compute-0 nova_compute[250018]: 2026-01-20 14:47:48.987 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:47:48 compute-0 nova_compute[250018]: 2026-01-20 14:47:48.988 250022 INFO os_vif [None req-78c57d6e-f9b9-4f93-bbb4-c780048e481a 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b7:cb:a9,bridge_name='br-int',has_traffic_filtering=True,id=5659965f-0485-4982-898c-f273d7898a5f,network=Network(fbd5d614-a7d3-4563-913c-104506628e59),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5659965f-04')
Jan 20 14:47:48 compute-0 nova_compute[250018]: 2026-01-20 14:47:48.988 250022 DEBUG oslo_concurrency.lockutils [None req-78c57d6e-f9b9-4f93-bbb4-c780048e481a 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:47:48 compute-0 nova_compute[250018]: 2026-01-20 14:47:48.989 250022 DEBUG oslo_concurrency.lockutils [None req-78c57d6e-f9b9-4f93-bbb4-c780048e481a 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:47:49 compute-0 kernel: tap35f2fe4e-cb: entered promiscuous mode
Jan 20 14:47:49 compute-0 NetworkManager[48960]: <info>  [1768920469.0086] manager: (tap35f2fe4e-cb): new Tun device (/org/freedesktop/NetworkManager/Devices/167)
Jan 20 14:47:49 compute-0 ovn_controller[148666]: 2026-01-20T14:47:49Z|00326|binding|INFO|Claiming lport 35f2fe4e-cb8e-467e-82ec-c12e870ac8a3 for this chassis.
Jan 20 14:47:49 compute-0 ovn_controller[148666]: 2026-01-20T14:47:49Z|00327|binding|INFO|35f2fe4e-cb8e-467e-82ec-c12e870ac8a3: Claiming fa:16:3e:e7:a1:c2 10.100.0.8
Jan 20 14:47:49 compute-0 nova_compute[250018]: 2026-01-20 14:47:49.034 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:47:49 compute-0 systemd-udevd[307719]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:47:49 compute-0 nova_compute[250018]: 2026-01-20 14:47:49.037 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:47:49 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:47:49.044 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e7:a1:c2 10.100.0.8'], port_security=['fa:16:3e:e7:a1:c2 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'fc5c09c4-3e90-4b31-8610-2e555b7e2406', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3379e2b3-ffb2-4391-969b-c9dc51bfbe25', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'acb30fbc0e3749e390d7f867060b5a2a', 'neutron:revision_number': '2', 'neutron:security_group_ids': '19fab802-7db8-4c89-8f8e-8dcfc14d4627', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a0e287ba-f88b-46f5-bb7f-3cc2a74be88e, chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=35f2fe4e-cb8e-467e-82ec-c12e870ac8a3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:47:49 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:47:49.045 160071 INFO neutron.agent.ovn.metadata.agent [-] Port 35f2fe4e-cb8e-467e-82ec-c12e870ac8a3 in datapath 3379e2b3-ffb2-4391-969b-c9dc51bfbe25 bound to our chassis
Jan 20 14:47:49 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:47:49.046 160071 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3379e2b3-ffb2-4391-969b-c9dc51bfbe25
Jan 20 14:47:49 compute-0 NetworkManager[48960]: <info>  [1768920469.0569] device (tap35f2fe4e-cb): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 14:47:49 compute-0 NetworkManager[48960]: <info>  [1768920469.0582] device (tap35f2fe4e-cb): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 14:47:49 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:47:49.060 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[f19c8c75-2c05-4c8a-b9c0-e16855bfef73]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:47:49 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:47:49.061 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3379e2b3-f1 in ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 14:47:49 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:47:49.063 257604 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3379e2b3-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 14:47:49 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:47:49.063 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[304f05d1-d30c-4803-a2ef-244d61e6e537]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:47:49 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:47:49.064 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[8f3fdb00-7964-49d1-99ad-a803a1ca6ffb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:47:49 compute-0 systemd-machined[216401]: New machine qemu-43-instance-00000066.
Jan 20 14:47:49 compute-0 nova_compute[250018]: 2026-01-20 14:47:49.070 250022 DEBUG oslo_concurrency.processutils [None req-78c57d6e-f9b9-4f93-bbb4-c780048e481a 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:47:49 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:47:49.075 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[cdf03a0e-a757-4bf9-bf0a-6e3619baf386]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:47:49 compute-0 systemd[1]: Started Virtual Machine qemu-43-instance-00000066.
Jan 20 14:47:49 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:47:49.101 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[d32b54ec-aad0-4d1d-b340-ed21e8cea4e8]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:47:49 compute-0 ovn_controller[148666]: 2026-01-20T14:47:49Z|00328|binding|INFO|Setting lport 35f2fe4e-cb8e-467e-82ec-c12e870ac8a3 ovn-installed in OVS
Jan 20 14:47:49 compute-0 ovn_controller[148666]: 2026-01-20T14:47:49Z|00329|binding|INFO|Setting lport 35f2fe4e-cb8e-467e-82ec-c12e870ac8a3 up in Southbound
Jan 20 14:47:49 compute-0 nova_compute[250018]: 2026-01-20 14:47:49.127 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:47:49 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:47:49.136 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[4235b8b5-c4c3-4533-8066-162242c9dfd7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:47:49 compute-0 NetworkManager[48960]: <info>  [1768920469.1434] manager: (tap3379e2b3-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/168)
Jan 20 14:47:49 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:47:49.142 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[2055d314-7afa-4381-a64b-f7e943249d22]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:47:49 compute-0 systemd-udevd[307724]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:47:49 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:47:49.174 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[0617e51a-1440-41f0-a09a-9e265f140d28]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:47:49 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:47:49.177 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[2448dc4c-da3e-400b-a3c7-3ea24bf5ee3c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:47:49 compute-0 NetworkManager[48960]: <info>  [1768920469.1989] device (tap3379e2b3-f0): carrier: link connected
Jan 20 14:47:49 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:47:49.203 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[9233bc60-d66a-4ef9-83dc-dc48a7b4a89d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:47:49 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:47:49.218 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[675efcbf-0380-41f5-8e71-a2b6f67b0a19]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3379e2b3-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f1:86:fe'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 107], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 640992, 'reachable_time': 21119, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 307766, 'error': None, 'target': 'ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:47:49 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:47:49.233 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[0574e031-0644-4dd3-bea4-1c510d788094]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef1:86fe'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 640992, 'tstamp': 640992}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 307776, 'error': None, 'target': 'ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:47:49 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:47:49.247 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[967ab586-9d99-4797-843f-fa3839342187]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3379e2b3-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f1:86:fe'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 107], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 640992, 'reachable_time': 21119, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 307777, 'error': None, 'target': 'ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:47:49 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:47:49.275 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[5226a76b-de37-4d44-a637-8ecc13fc92c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:47:49 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:47:49.336 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[aa1e73d5-1682-485a-bef4-071c1de66bb0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:47:49 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:47:49.338 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3379e2b3-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:47:49 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:47:49.338 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:47:49 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:47:49.339 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3379e2b3-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:47:49 compute-0 nova_compute[250018]: 2026-01-20 14:47:49.341 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:47:49 compute-0 NetworkManager[48960]: <info>  [1768920469.3421] manager: (tap3379e2b3-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/169)
Jan 20 14:47:49 compute-0 kernel: tap3379e2b3-f0: entered promiscuous mode
Jan 20 14:47:49 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:47:49.344 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3379e2b3-f0, col_values=(('external_ids', {'iface-id': 'b32ddf23-a8dd-4e6d-a410-ccb24b214d35'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:47:49 compute-0 ovn_controller[148666]: 2026-01-20T14:47:49Z|00330|binding|INFO|Releasing lport b32ddf23-a8dd-4e6d-a410-ccb24b214d35 from this chassis (sb_readonly=0)
Jan 20 14:47:49 compute-0 nova_compute[250018]: 2026-01-20 14:47:49.345 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:47:49 compute-0 nova_compute[250018]: 2026-01-20 14:47:49.346 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:47:49 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:47:49.347 160071 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3379e2b3-ffb2-4391-969b-c9dc51bfbe25.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3379e2b3-ffb2-4391-969b-c9dc51bfbe25.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 14:47:49 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:47:49.348 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[87e2c5d7-84e6-4b71-a219-1c3e67c9fbfa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:47:49 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:47:49.349 160071 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 14:47:49 compute-0 ovn_metadata_agent[160049]: global
Jan 20 14:47:49 compute-0 ovn_metadata_agent[160049]:     log         /dev/log local0 debug
Jan 20 14:47:49 compute-0 ovn_metadata_agent[160049]:     log-tag     haproxy-metadata-proxy-3379e2b3-ffb2-4391-969b-c9dc51bfbe25
Jan 20 14:47:49 compute-0 ovn_metadata_agent[160049]:     user        root
Jan 20 14:47:49 compute-0 ovn_metadata_agent[160049]:     group       root
Jan 20 14:47:49 compute-0 ovn_metadata_agent[160049]:     maxconn     1024
Jan 20 14:47:49 compute-0 ovn_metadata_agent[160049]:     pidfile     /var/lib/neutron/external/pids/3379e2b3-ffb2-4391-969b-c9dc51bfbe25.pid.haproxy
Jan 20 14:47:49 compute-0 ovn_metadata_agent[160049]:     daemon
Jan 20 14:47:49 compute-0 ovn_metadata_agent[160049]: 
Jan 20 14:47:49 compute-0 ovn_metadata_agent[160049]: defaults
Jan 20 14:47:49 compute-0 ovn_metadata_agent[160049]:     log global
Jan 20 14:47:49 compute-0 ovn_metadata_agent[160049]:     mode http
Jan 20 14:47:49 compute-0 ovn_metadata_agent[160049]:     option httplog
Jan 20 14:47:49 compute-0 ovn_metadata_agent[160049]:     option dontlognull
Jan 20 14:47:49 compute-0 ovn_metadata_agent[160049]:     option http-server-close
Jan 20 14:47:49 compute-0 ovn_metadata_agent[160049]:     option forwardfor
Jan 20 14:47:49 compute-0 ovn_metadata_agent[160049]:     retries                 3
Jan 20 14:47:49 compute-0 ovn_metadata_agent[160049]:     timeout http-request    30s
Jan 20 14:47:49 compute-0 ovn_metadata_agent[160049]:     timeout connect         30s
Jan 20 14:47:49 compute-0 ovn_metadata_agent[160049]:     timeout client          32s
Jan 20 14:47:49 compute-0 ovn_metadata_agent[160049]:     timeout server          32s
Jan 20 14:47:49 compute-0 ovn_metadata_agent[160049]:     timeout http-keep-alive 30s
Jan 20 14:47:49 compute-0 ovn_metadata_agent[160049]: 
Jan 20 14:47:49 compute-0 ovn_metadata_agent[160049]: 
Jan 20 14:47:49 compute-0 ovn_metadata_agent[160049]: listen listener
Jan 20 14:47:49 compute-0 ovn_metadata_agent[160049]:     bind 169.254.169.254:80
Jan 20 14:47:49 compute-0 ovn_metadata_agent[160049]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 14:47:49 compute-0 ovn_metadata_agent[160049]:     http-request add-header X-OVN-Network-ID 3379e2b3-ffb2-4391-969b-c9dc51bfbe25
Jan 20 14:47:49 compute-0 ovn_metadata_agent[160049]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 14:47:49 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:47:49.349 160071 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25', 'env', 'PROCESS_TAG=haproxy-3379e2b3-ffb2-4391-969b-c9dc51bfbe25', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3379e2b3-ffb2-4391-969b-c9dc51bfbe25.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 14:47:49 compute-0 nova_compute[250018]: 2026-01-20 14:47:49.360 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:47:49 compute-0 nova_compute[250018]: 2026-01-20 14:47:49.374 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:47:49 compute-0 nova_compute[250018]: 2026-01-20 14:47:49.405 250022 DEBUG nova.compute.manager [req-e2a85a15-f5a7-4dd4-beb0-d4b09dddcd6b req-5fd13603-0f1b-43af-b082-f287f666f33c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: fc5c09c4-3e90-4b31-8610-2e555b7e2406] Received event network-vif-plugged-35f2fe4e-cb8e-467e-82ec-c12e870ac8a3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:47:49 compute-0 nova_compute[250018]: 2026-01-20 14:47:49.406 250022 DEBUG oslo_concurrency.lockutils [req-e2a85a15-f5a7-4dd4-beb0-d4b09dddcd6b req-5fd13603-0f1b-43af-b082-f287f666f33c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "fc5c09c4-3e90-4b31-8610-2e555b7e2406-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:47:49 compute-0 nova_compute[250018]: 2026-01-20 14:47:49.407 250022 DEBUG oslo_concurrency.lockutils [req-e2a85a15-f5a7-4dd4-beb0-d4b09dddcd6b req-5fd13603-0f1b-43af-b082-f287f666f33c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "fc5c09c4-3e90-4b31-8610-2e555b7e2406-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:47:49 compute-0 nova_compute[250018]: 2026-01-20 14:47:49.407 250022 DEBUG oslo_concurrency.lockutils [req-e2a85a15-f5a7-4dd4-beb0-d4b09dddcd6b req-5fd13603-0f1b-43af-b082-f287f666f33c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "fc5c09c4-3e90-4b31-8610-2e555b7e2406-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:47:49 compute-0 nova_compute[250018]: 2026-01-20 14:47:49.407 250022 DEBUG nova.compute.manager [req-e2a85a15-f5a7-4dd4-beb0-d4b09dddcd6b req-5fd13603-0f1b-43af-b082-f287f666f33c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: fc5c09c4-3e90-4b31-8610-2e555b7e2406] Processing event network-vif-plugged-35f2fe4e-cb8e-467e-82ec-c12e870ac8a3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 20 14:47:49 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:47:49 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3005195862' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:47:49 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:47:49 compute-0 nova_compute[250018]: 2026-01-20 14:47:49.521 250022 DEBUG oslo_concurrency.processutils [None req-78c57d6e-f9b9-4f93-bbb4-c780048e481a 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:47:49 compute-0 nova_compute[250018]: 2026-01-20 14:47:49.526 250022 DEBUG nova.compute.provider_tree [None req-78c57d6e-f9b9-4f93-bbb4-c780048e481a 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:47:49 compute-0 nova_compute[250018]: 2026-01-20 14:47:49.544 250022 DEBUG nova.scheduler.client.report [None req-78c57d6e-f9b9-4f93-bbb4-c780048e481a 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:47:49 compute-0 nova_compute[250018]: 2026-01-20 14:47:49.596 250022 DEBUG oslo_concurrency.lockutils [None req-78c57d6e-f9b9-4f93-bbb4-c780048e481a 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: held 0.607s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:47:49 compute-0 podman[307826]: 2026-01-20 14:47:49.72499316 +0000 UTC m=+0.049744680 container create 412b7b21b8d86ec3ce83658aacf575fd2466c6305d1fecaf0b1d10ced4bce71e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2)
Jan 20 14:47:49 compute-0 ceph-mon[74360]: pgmap v1790: 321 pgs: 321 active+clean; 453 MiB data, 1013 MiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 1.9 MiB/s wr, 201 op/s
Jan 20 14:47:49 compute-0 ceph-mon[74360]: osdmap e240: 3 total, 3 up, 3 in
Jan 20 14:47:49 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/551235169' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:47:49 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3005195862' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:47:49 compute-0 nova_compute[250018]: 2026-01-20 14:47:49.762 250022 INFO nova.scheduler.client.report [None req-78c57d6e-f9b9-4f93-bbb4-c780048e481a 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Deleted allocation for migration a1059fde-04ca-4e1c-8ccd-474c4cd4cbec
Jan 20 14:47:49 compute-0 systemd[1]: Started libpod-conmon-412b7b21b8d86ec3ce83658aacf575fd2466c6305d1fecaf0b1d10ced4bce71e.scope.
Jan 20 14:47:49 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:47:49 compute-0 podman[307826]: 2026-01-20 14:47:49.701669478 +0000 UTC m=+0.026421018 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 14:47:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c56ac57c872fe66e076387e68a0a296466391ed2a966f96bf04d27b837d25a82/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 14:47:49 compute-0 nova_compute[250018]: 2026-01-20 14:47:49.803 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768920469.8031318, fc5c09c4-3e90-4b31-8610-2e555b7e2406 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:47:49 compute-0 nova_compute[250018]: 2026-01-20 14:47:49.804 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: fc5c09c4-3e90-4b31-8610-2e555b7e2406] VM Started (Lifecycle Event)
Jan 20 14:47:49 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1792: 321 pgs: 2 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 314 active+clean; 454 MiB data, 1013 MiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 1.2 MiB/s wr, 208 op/s
Jan 20 14:47:49 compute-0 nova_compute[250018]: 2026-01-20 14:47:49.805 250022 DEBUG nova.compute.manager [None req-657641bc-afc9-41eb-81cc-683cd8b428cf a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: fc5c09c4-3e90-4b31-8610-2e555b7e2406] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 14:47:49 compute-0 nova_compute[250018]: 2026-01-20 14:47:49.809 250022 DEBUG nova.virt.libvirt.driver [None req-657641bc-afc9-41eb-81cc-683cd8b428cf a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: fc5c09c4-3e90-4b31-8610-2e555b7e2406] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 14:47:49 compute-0 podman[307826]: 2026-01-20 14:47:49.810513657 +0000 UTC m=+0.135265187 container init 412b7b21b8d86ec3ce83658aacf575fd2466c6305d1fecaf0b1d10ced4bce71e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 14:47:49 compute-0 nova_compute[250018]: 2026-01-20 14:47:49.812 250022 INFO nova.virt.libvirt.driver [-] [instance: fc5c09c4-3e90-4b31-8610-2e555b7e2406] Instance spawned successfully.
Jan 20 14:47:49 compute-0 nova_compute[250018]: 2026-01-20 14:47:49.813 250022 DEBUG nova.virt.libvirt.driver [None req-657641bc-afc9-41eb-81cc-683cd8b428cf a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: fc5c09c4-3e90-4b31-8610-2e555b7e2406] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 14:47:49 compute-0 podman[307826]: 2026-01-20 14:47:49.820833927 +0000 UTC m=+0.145585447 container start 412b7b21b8d86ec3ce83658aacf575fd2466c6305d1fecaf0b1d10ced4bce71e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 20 14:47:49 compute-0 nova_compute[250018]: 2026-01-20 14:47:49.830 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: fc5c09c4-3e90-4b31-8610-2e555b7e2406] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:47:49 compute-0 nova_compute[250018]: 2026-01-20 14:47:49.838 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: fc5c09c4-3e90-4b31-8610-2e555b7e2406] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:47:49 compute-0 neutron-haproxy-ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25[307866]: [NOTICE]   (307871) : New worker (307873) forked
Jan 20 14:47:49 compute-0 neutron-haproxy-ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25[307866]: [NOTICE]   (307871) : Loading success.
Jan 20 14:47:49 compute-0 nova_compute[250018]: 2026-01-20 14:47:49.843 250022 DEBUG nova.virt.libvirt.driver [None req-657641bc-afc9-41eb-81cc-683cd8b428cf a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: fc5c09c4-3e90-4b31-8610-2e555b7e2406] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:47:49 compute-0 nova_compute[250018]: 2026-01-20 14:47:49.844 250022 DEBUG nova.virt.libvirt.driver [None req-657641bc-afc9-41eb-81cc-683cd8b428cf a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: fc5c09c4-3e90-4b31-8610-2e555b7e2406] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:47:49 compute-0 nova_compute[250018]: 2026-01-20 14:47:49.845 250022 DEBUG nova.virt.libvirt.driver [None req-657641bc-afc9-41eb-81cc-683cd8b428cf a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: fc5c09c4-3e90-4b31-8610-2e555b7e2406] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:47:49 compute-0 nova_compute[250018]: 2026-01-20 14:47:49.846 250022 DEBUG nova.virt.libvirt.driver [None req-657641bc-afc9-41eb-81cc-683cd8b428cf a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: fc5c09c4-3e90-4b31-8610-2e555b7e2406] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:47:49 compute-0 nova_compute[250018]: 2026-01-20 14:47:49.847 250022 DEBUG nova.virt.libvirt.driver [None req-657641bc-afc9-41eb-81cc-683cd8b428cf a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: fc5c09c4-3e90-4b31-8610-2e555b7e2406] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:47:49 compute-0 nova_compute[250018]: 2026-01-20 14:47:49.847 250022 DEBUG nova.virt.libvirt.driver [None req-657641bc-afc9-41eb-81cc-683cd8b428cf a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: fc5c09c4-3e90-4b31-8610-2e555b7e2406] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:47:49 compute-0 nova_compute[250018]: 2026-01-20 14:47:49.856 250022 DEBUG oslo_concurrency.lockutils [None req-78c57d6e-f9b9-4f93-bbb4-c780048e481a 37e9ef97fbe0448e9fbe32d48b66211f 3b31139b2a4e49cba5e7048febf901c4 - - default default] Lock "bf7690ac-9b5a-41e3-83bf-3c83cbacc45c" "released" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: held 7.284s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:47:49 compute-0 nova_compute[250018]: 2026-01-20 14:47:49.882 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: fc5c09c4-3e90-4b31-8610-2e555b7e2406] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:47:49 compute-0 nova_compute[250018]: 2026-01-20 14:47:49.883 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768920469.8040237, fc5c09c4-3e90-4b31-8610-2e555b7e2406 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:47:49 compute-0 nova_compute[250018]: 2026-01-20 14:47:49.883 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: fc5c09c4-3e90-4b31-8610-2e555b7e2406] VM Paused (Lifecycle Event)
Jan 20 14:47:49 compute-0 nova_compute[250018]: 2026-01-20 14:47:49.912 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: fc5c09c4-3e90-4b31-8610-2e555b7e2406] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:47:49 compute-0 nova_compute[250018]: 2026-01-20 14:47:49.915 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768920469.8082688, fc5c09c4-3e90-4b31-8610-2e555b7e2406 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:47:49 compute-0 nova_compute[250018]: 2026-01-20 14:47:49.915 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: fc5c09c4-3e90-4b31-8610-2e555b7e2406] VM Resumed (Lifecycle Event)
Jan 20 14:47:49 compute-0 nova_compute[250018]: 2026-01-20 14:47:49.924 250022 INFO nova.compute.manager [None req-657641bc-afc9-41eb-81cc-683cd8b428cf a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: fc5c09c4-3e90-4b31-8610-2e555b7e2406] Took 13.33 seconds to spawn the instance on the hypervisor.
Jan 20 14:47:49 compute-0 nova_compute[250018]: 2026-01-20 14:47:49.924 250022 DEBUG nova.compute.manager [None req-657641bc-afc9-41eb-81cc-683cd8b428cf a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: fc5c09c4-3e90-4b31-8610-2e555b7e2406] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:47:49 compute-0 nova_compute[250018]: 2026-01-20 14:47:49.936 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: fc5c09c4-3e90-4b31-8610-2e555b7e2406] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:47:49 compute-0 nova_compute[250018]: 2026-01-20 14:47:49.938 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: fc5c09c4-3e90-4b31-8610-2e555b7e2406] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:47:49 compute-0 nova_compute[250018]: 2026-01-20 14:47:49.970 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: fc5c09c4-3e90-4b31-8610-2e555b7e2406] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:47:50 compute-0 nova_compute[250018]: 2026-01-20 14:47:50.007 250022 INFO nova.compute.manager [None req-657641bc-afc9-41eb-81cc-683cd8b428cf a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: fc5c09c4-3e90-4b31-8610-2e555b7e2406] Took 14.73 seconds to build instance.
Jan 20 14:47:50 compute-0 nova_compute[250018]: 2026-01-20 14:47:50.053 250022 DEBUG oslo_concurrency.lockutils [None req-657641bc-afc9-41eb-81cc-683cd8b428cf a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lock "fc5c09c4-3e90-4b31-8610-2e555b7e2406" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.871s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:47:50 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:47:50 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:47:50 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:47:50.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:47:50 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:47:50 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:47:50 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:47:50.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:47:50 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/619936034' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:47:51 compute-0 nova_compute[250018]: 2026-01-20 14:47:51.496 250022 DEBUG nova.compute.manager [req-756ff064-6829-487e-a44e-ad9f91bd3f23 req-d375696d-7685-4849-9ec3-f4e4a8a72f27 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: fc5c09c4-3e90-4b31-8610-2e555b7e2406] Received event network-vif-plugged-35f2fe4e-cb8e-467e-82ec-c12e870ac8a3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:47:51 compute-0 nova_compute[250018]: 2026-01-20 14:47:51.496 250022 DEBUG oslo_concurrency.lockutils [req-756ff064-6829-487e-a44e-ad9f91bd3f23 req-d375696d-7685-4849-9ec3-f4e4a8a72f27 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "fc5c09c4-3e90-4b31-8610-2e555b7e2406-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:47:51 compute-0 nova_compute[250018]: 2026-01-20 14:47:51.496 250022 DEBUG oslo_concurrency.lockutils [req-756ff064-6829-487e-a44e-ad9f91bd3f23 req-d375696d-7685-4849-9ec3-f4e4a8a72f27 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "fc5c09c4-3e90-4b31-8610-2e555b7e2406-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:47:51 compute-0 nova_compute[250018]: 2026-01-20 14:47:51.497 250022 DEBUG oslo_concurrency.lockutils [req-756ff064-6829-487e-a44e-ad9f91bd3f23 req-d375696d-7685-4849-9ec3-f4e4a8a72f27 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "fc5c09c4-3e90-4b31-8610-2e555b7e2406-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:47:51 compute-0 nova_compute[250018]: 2026-01-20 14:47:51.497 250022 DEBUG nova.compute.manager [req-756ff064-6829-487e-a44e-ad9f91bd3f23 req-d375696d-7685-4849-9ec3-f4e4a8a72f27 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: fc5c09c4-3e90-4b31-8610-2e555b7e2406] No waiting events found dispatching network-vif-plugged-35f2fe4e-cb8e-467e-82ec-c12e870ac8a3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:47:51 compute-0 nova_compute[250018]: 2026-01-20 14:47:51.497 250022 WARNING nova.compute.manager [req-756ff064-6829-487e-a44e-ad9f91bd3f23 req-d375696d-7685-4849-9ec3-f4e4a8a72f27 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: fc5c09c4-3e90-4b31-8610-2e555b7e2406] Received unexpected event network-vif-plugged-35f2fe4e-cb8e-467e-82ec-c12e870ac8a3 for instance with vm_state active and task_state None.
Jan 20 14:47:51 compute-0 ceph-mon[74360]: pgmap v1792: 321 pgs: 2 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 314 active+clean; 454 MiB data, 1013 MiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 1.2 MiB/s wr, 208 op/s
Jan 20 14:47:51 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1793: 321 pgs: 2 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 314 active+clean; 473 MiB data, 1021 MiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 894 KiB/s wr, 172 op/s
Jan 20 14:47:52 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:47:52 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:47:52 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:47:52.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:47:52 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:47:52 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:47:52 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:47:52.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:47:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:47:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:47:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:47:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:47:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:47:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:47:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Optimize plan auto_2026-01-20_14:47:52
Jan 20 14:47:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 14:47:52 compute-0 ceph-mgr[74653]: [balancer INFO root] do_upmap
Jan 20 14:47:52 compute-0 ceph-mgr[74653]: [balancer INFO root] pools ['.rgw.root', '.mgr', 'default.rgw.meta', 'cephfs.cephfs.data', 'images', 'backups', 'cephfs.cephfs.meta', 'default.rgw.control', 'vms', 'volumes', 'default.rgw.log']
Jan 20 14:47:52 compute-0 ceph-mgr[74653]: [balancer INFO root] prepared 0/10 changes
Jan 20 14:47:52 compute-0 nova_compute[250018]: 2026-01-20 14:47:52.923 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:47:53 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/919004951' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:47:53 compute-0 nova_compute[250018]: 2026-01-20 14:47:53.208 250022 INFO nova.compute.manager [None req-52200910-098c-45f4-bd21-1cdc3b8fa5ff a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: fc5c09c4-3e90-4b31-8610-2e555b7e2406] Rebuilding instance
Jan 20 14:47:53 compute-0 nova_compute[250018]: 2026-01-20 14:47:53.433 250022 DEBUG nova.objects.instance [None req-52200910-098c-45f4-bd21-1cdc3b8fa5ff a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lazy-loading 'trusted_certs' on Instance uuid fc5c09c4-3e90-4b31-8610-2e555b7e2406 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:47:53 compute-0 nova_compute[250018]: 2026-01-20 14:47:53.447 250022 DEBUG nova.compute.manager [None req-52200910-098c-45f4-bd21-1cdc3b8fa5ff a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: fc5c09c4-3e90-4b31-8610-2e555b7e2406] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:47:53 compute-0 nova_compute[250018]: 2026-01-20 14:47:53.485 250022 DEBUG nova.objects.instance [None req-52200910-098c-45f4-bd21-1cdc3b8fa5ff a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lazy-loading 'pci_requests' on Instance uuid fc5c09c4-3e90-4b31-8610-2e555b7e2406 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:47:53 compute-0 nova_compute[250018]: 2026-01-20 14:47:53.496 250022 DEBUG nova.objects.instance [None req-52200910-098c-45f4-bd21-1cdc3b8fa5ff a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lazy-loading 'pci_devices' on Instance uuid fc5c09c4-3e90-4b31-8610-2e555b7e2406 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:47:53 compute-0 nova_compute[250018]: 2026-01-20 14:47:53.507 250022 DEBUG nova.objects.instance [None req-52200910-098c-45f4-bd21-1cdc3b8fa5ff a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lazy-loading 'resources' on Instance uuid fc5c09c4-3e90-4b31-8610-2e555b7e2406 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:47:53 compute-0 nova_compute[250018]: 2026-01-20 14:47:53.517 250022 DEBUG nova.objects.instance [None req-52200910-098c-45f4-bd21-1cdc3b8fa5ff a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lazy-loading 'migration_context' on Instance uuid fc5c09c4-3e90-4b31-8610-2e555b7e2406 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:47:53 compute-0 nova_compute[250018]: 2026-01-20 14:47:53.528 250022 DEBUG nova.objects.instance [None req-52200910-098c-45f4-bd21-1cdc3b8fa5ff a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: fc5c09c4-3e90-4b31-8610-2e555b7e2406] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Jan 20 14:47:53 compute-0 nova_compute[250018]: 2026-01-20 14:47:53.530 250022 DEBUG nova.virt.libvirt.driver [None req-52200910-098c-45f4-bd21-1cdc3b8fa5ff a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: fc5c09c4-3e90-4b31-8610-2e555b7e2406] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Jan 20 14:47:53 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1794: 321 pgs: 2 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 314 active+clean; 456 MiB data, 1019 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 2.0 MiB/s wr, 187 op/s
Jan 20 14:47:54 compute-0 nova_compute[250018]: 2026-01-20 14:47:54.377 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:47:54 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:47:54 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:47:54 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:47:54.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:47:54 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:47:54 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:47:54 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:47:54.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:47:54 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:47:54 compute-0 ceph-mon[74360]: pgmap v1793: 321 pgs: 2 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 314 active+clean; 473 MiB data, 1021 MiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 894 KiB/s wr, 172 op/s
Jan 20 14:47:54 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/1409179445' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:47:55 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1795: 321 pgs: 321 active+clean; 421 MiB data, 1005 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 2.2 MiB/s wr, 243 op/s
Jan 20 14:47:56 compute-0 ceph-mon[74360]: pgmap v1794: 321 pgs: 2 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 314 active+clean; 456 MiB data, 1019 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 2.0 MiB/s wr, 187 op/s
Jan 20 14:47:56 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/2590787595' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:47:56 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:47:56 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:47:56 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:47:56.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:47:56 compute-0 podman[307887]: 2026-01-20 14:47:56.484855465 +0000 UTC m=+0.066934035 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 20 14:47:56 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:47:56 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:47:56 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:47:56.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:47:56 compute-0 podman[307886]: 2026-01-20 14:47:56.516100991 +0000 UTC m=+0.098624533 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, io.buildah.version=1.41.3, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 20 14:47:57 compute-0 ceph-mon[74360]: pgmap v1795: 321 pgs: 321 active+clean; 421 MiB data, 1005 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 2.2 MiB/s wr, 243 op/s
Jan 20 14:47:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 14:47:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:47:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 14:47:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:47:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:47:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:47:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:47:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:47:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:47:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:47:57 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1796: 321 pgs: 321 active+clean; 425 MiB data, 1004 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 3.0 MiB/s wr, 225 op/s
Jan 20 14:47:57 compute-0 nova_compute[250018]: 2026-01-20 14:47:57.974 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:47:58 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:47:58 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:47:58 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:47:58.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:47:58 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:47:58 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:47:58 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:47:58.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:47:58 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 20 14:47:58 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/938089309' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:47:58 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 20 14:47:58 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/938089309' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:47:58 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/938089309' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:47:58 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/938089309' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:47:59 compute-0 nova_compute[250018]: 2026-01-20 14:47:59.379 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:47:59 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:47:59 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e240 do_prune osdmap full prune enabled
Jan 20 14:47:59 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e241 e241: 3 total, 3 up, 3 in
Jan 20 14:47:59 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e241: 3 total, 3 up, 3 in
Jan 20 14:47:59 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1798: 321 pgs: 321 active+clean; 434 MiB data, 1012 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 3.8 MiB/s wr, 236 op/s
Jan 20 14:47:59 compute-0 ceph-mon[74360]: pgmap v1796: 321 pgs: 321 active+clean; 425 MiB data, 1004 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 3.0 MiB/s wr, 225 op/s
Jan 20 14:47:59 compute-0 ceph-mon[74360]: osdmap e241: 3 total, 3 up, 3 in
Jan 20 14:48:00 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:48:00 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:48:00 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:48:00.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:48:00 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:48:00 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:48:00 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:48:00.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:48:01 compute-0 anacron[102472]: Job `cron.monthly' started
Jan 20 14:48:01 compute-0 anacron[102472]: Job `cron.monthly' terminated
Jan 20 14:48:01 compute-0 anacron[102472]: Normal exit (3 jobs run)
Jan 20 14:48:01 compute-0 ceph-mon[74360]: pgmap v1798: 321 pgs: 321 active+clean; 434 MiB data, 1012 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 3.8 MiB/s wr, 236 op/s
Jan 20 14:48:01 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1799: 321 pgs: 321 active+clean; 446 MiB data, 1010 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 3.7 MiB/s wr, 197 op/s
Jan 20 14:48:02 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:48:02 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:48:02 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:48:02.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:48:02 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:48:02 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:48:02 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:48:02.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:48:02 compute-0 nova_compute[250018]: 2026-01-20 14:48:02.975 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:03 compute-0 ceph-mon[74360]: pgmap v1799: 321 pgs: 321 active+clean; 446 MiB data, 1010 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 3.7 MiB/s wr, 197 op/s
Jan 20 14:48:03 compute-0 nova_compute[250018]: 2026-01-20 14:48:03.574 250022 DEBUG nova.virt.libvirt.driver [None req-52200910-098c-45f4-bd21-1cdc3b8fa5ff a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: fc5c09c4-3e90-4b31-8610-2e555b7e2406] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Jan 20 14:48:03 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1800: 321 pgs: 321 active+clean; 448 MiB data, 1012 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 2.7 MiB/s wr, 243 op/s
Jan 20 14:48:04 compute-0 nova_compute[250018]: 2026-01-20 14:48:04.381 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:04 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:48:04 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:48:04 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:48:04.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:48:04 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:48:04 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:48:04 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:48:04.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:48:04 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:48:04 compute-0 ovn_controller[148666]: 2026-01-20T14:48:04Z|00039|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e7:a1:c2 10.100.0.8
Jan 20 14:48:04 compute-0 ovn_controller[148666]: 2026-01-20T14:48:04Z|00040|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e7:a1:c2 10.100.0.8
Jan 20 14:48:04 compute-0 ovn_controller[148666]: 2026-01-20T14:48:04Z|00331|binding|INFO|Releasing lport b32ddf23-a8dd-4e6d-a410-ccb24b214d35 from this chassis (sb_readonly=0)
Jan 20 14:48:05 compute-0 nova_compute[250018]: 2026-01-20 14:48:05.057 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:05 compute-0 ceph-mon[74360]: pgmap v1800: 321 pgs: 321 active+clean; 448 MiB data, 1012 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 2.7 MiB/s wr, 243 op/s
Jan 20 14:48:05 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1801: 321 pgs: 321 active+clean; 472 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 4.8 MiB/s wr, 234 op/s
Jan 20 14:48:06 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:48:06 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:48:06 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:48:06.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:48:06 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:48:06 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:48:06 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:48:06.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:48:07 compute-0 ceph-mon[74360]: pgmap v1801: 321 pgs: 321 active+clean; 472 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 4.8 MiB/s wr, 234 op/s
Jan 20 14:48:07 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1802: 321 pgs: 321 active+clean; 478 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 4.3 MiB/s wr, 224 op/s
Jan 20 14:48:07 compute-0 nova_compute[250018]: 2026-01-20 14:48:07.978 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:08 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:48:08 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:48:08 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:48:08.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:48:08 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:48:08 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:48:08 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:48:08.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:48:08 compute-0 sudo[307935]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:48:08 compute-0 sudo[307935]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:48:08 compute-0 sudo[307935]: pam_unix(sudo:session): session closed for user root
Jan 20 14:48:08 compute-0 sudo[307960]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:48:08 compute-0 sudo[307960]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:48:08 compute-0 sudo[307960]: pam_unix(sudo:session): session closed for user root
Jan 20 14:48:09 compute-0 ceph-mon[74360]: pgmap v1802: 321 pgs: 321 active+clean; 478 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 4.3 MiB/s wr, 224 op/s
Jan 20 14:48:09 compute-0 nova_compute[250018]: 2026-01-20 14:48:09.384 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:09 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:48:09 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1803: 321 pgs: 321 active+clean; 480 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 3.5 MiB/s wr, 229 op/s
Jan 20 14:48:10 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:48:10 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:48:10 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:48:10.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:48:10 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:48:10 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:48:10 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:48:10.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:48:11 compute-0 ceph-mon[74360]: pgmap v1803: 321 pgs: 321 active+clean; 480 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 3.5 MiB/s wr, 229 op/s
Jan 20 14:48:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 14:48:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:48:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 20 14:48:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:48:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.009678928496649915 of space, bias 1.0, pg target 2.9036785489949746 quantized to 32 (current 32)
Jan 20 14:48:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:48:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.00216414101758794 of space, bias 1.0, pg target 0.6449140232412061 quantized to 32 (current 32)
Jan 20 14:48:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:48:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:48:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:48:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5671365362693095 quantized to 32 (current 32)
Jan 20 14:48:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:48:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017332030522985297 quantized to 16 (current 32)
Jan 20 14:48:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:48:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:48:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:48:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.0002166503815373162 quantized to 32 (current 32)
Jan 20 14:48:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:48:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018415282430671877 quantized to 32 (current 32)
Jan 20 14:48:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:48:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:48:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:48:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004333007630746324 quantized to 32 (current 32)
Jan 20 14:48:11 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1804: 321 pgs: 321 active+clean; 486 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 2.9 MiB/s wr, 197 op/s
Jan 20 14:48:12 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:48:12 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:48:12 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:48:12.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:48:12 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:48:12 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:48:12 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:48:12.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:48:12 compute-0 nova_compute[250018]: 2026-01-20 14:48:12.981 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:13 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:48:13 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4075507241' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:48:13 compute-0 ceph-mon[74360]: pgmap v1804: 321 pgs: 321 active+clean; 486 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 2.9 MiB/s wr, 197 op/s
Jan 20 14:48:13 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/4075507241' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:48:13 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 20 14:48:13 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4143906486' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:48:13 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 20 14:48:13 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4143906486' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:48:13 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1805: 321 pgs: 321 active+clean; 486 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 2.3 MiB/s wr, 182 op/s
Jan 20 14:48:14 compute-0 nova_compute[250018]: 2026-01-20 14:48:14.430 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:14 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/4143906486' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:48:14 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/4143906486' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:48:14 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/3187814475' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:48:14 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:48:14 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:48:14 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:48:14.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:48:14 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:48:14 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:48:14 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:48:14 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:48:14.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:48:14 compute-0 ceph-mon[74360]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #87. Immutable memtables: 0.
Jan 20 14:48:14 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:48:14.535645) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 14:48:14 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:856] [default] [JOB 49] Flushing memtable with next log file: 87
Jan 20 14:48:14 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768920494535732, "job": 49, "event": "flush_started", "num_memtables": 1, "num_entries": 651, "num_deletes": 258, "total_data_size": 740825, "memory_usage": 752856, "flush_reason": "Manual Compaction"}
Jan 20 14:48:14 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:885] [default] [JOB 49] Level-0 flush table #88: started
Jan 20 14:48:14 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768920494544294, "cf_name": "default", "job": 49, "event": "table_file_creation", "file_number": 88, "file_size": 732123, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 40345, "largest_seqno": 40995, "table_properties": {"data_size": 728727, "index_size": 1240, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 8094, "raw_average_key_size": 19, "raw_value_size": 721713, "raw_average_value_size": 1702, "num_data_blocks": 55, "num_entries": 424, "num_filter_entries": 424, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768920455, "oldest_key_time": 1768920455, "file_creation_time": 1768920494, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1688fb6f-f397-4aac-a0e8-874ff91d97a7", "db_session_id": "5V3N2TVXYZBCXP55EZHK", "orig_file_number": 88, "seqno_to_time_mapping": "N/A"}}
Jan 20 14:48:14 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 49] Flush lasted 8674 microseconds, and 3466 cpu microseconds.
Jan 20 14:48:14 compute-0 ceph-mon[74360]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 14:48:14 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:48:14.544334) [db/flush_job.cc:967] [default] [JOB 49] Level-0 flush table #88: 732123 bytes OK
Jan 20 14:48:14 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:48:14.544350) [db/memtable_list.cc:519] [default] Level-0 commit table #88 started
Jan 20 14:48:14 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:48:14.546119) [db/memtable_list.cc:722] [default] Level-0 commit table #88: memtable #1 done
Jan 20 14:48:14 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:48:14.546132) EVENT_LOG_v1 {"time_micros": 1768920494546128, "job": 49, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 14:48:14 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:48:14.546148) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 14:48:14 compute-0 ceph-mon[74360]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 49] Try to delete WAL files size 737342, prev total WAL file size 737342, number of live WAL files 2.
Jan 20 14:48:14 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000084.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:48:14 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:48:14.546673) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031323631' seq:72057594037927935, type:22 .. '6C6F676D0031353134' seq:0, type:0; will stop at (end)
Jan 20 14:48:14 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 50] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 14:48:14 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 49 Base level 0, inputs: [88(714KB)], [86(9449KB)]
Jan 20 14:48:14 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768920494546722, "job": 50, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [88], "files_L6": [86], "score": -1, "input_data_size": 10408570, "oldest_snapshot_seqno": -1}
Jan 20 14:48:14 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 50] Generated table #89: 6811 keys, 10279301 bytes, temperature: kUnknown
Jan 20 14:48:14 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768920494603605, "cf_name": "default", "job": 50, "event": "table_file_creation", "file_number": 89, "file_size": 10279301, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10234771, "index_size": 26370, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17093, "raw_key_size": 175623, "raw_average_key_size": 25, "raw_value_size": 10113963, "raw_average_value_size": 1484, "num_data_blocks": 1049, "num_entries": 6811, "num_filter_entries": 6811, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768917305, "oldest_key_time": 0, "file_creation_time": 1768920494, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1688fb6f-f397-4aac-a0e8-874ff91d97a7", "db_session_id": "5V3N2TVXYZBCXP55EZHK", "orig_file_number": 89, "seqno_to_time_mapping": "N/A"}}
Jan 20 14:48:14 compute-0 ceph-mon[74360]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 14:48:14 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:48:14.603923) [db/compaction/compaction_job.cc:1663] [default] [JOB 50] Compacted 1@0 + 1@6 files to L6 => 10279301 bytes
Jan 20 14:48:14 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:48:14.606160) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 182.7 rd, 180.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 9.2 +0.0 blob) out(9.8 +0.0 blob), read-write-amplify(28.3) write-amplify(14.0) OK, records in: 7340, records dropped: 529 output_compression: NoCompression
Jan 20 14:48:14 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:48:14.606179) EVENT_LOG_v1 {"time_micros": 1768920494606170, "job": 50, "event": "compaction_finished", "compaction_time_micros": 56967, "compaction_time_cpu_micros": 24839, "output_level": 6, "num_output_files": 1, "total_output_size": 10279301, "num_input_records": 7340, "num_output_records": 6811, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 14:48:14 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000088.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:48:14 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768920494606451, "job": 50, "event": "table_file_deletion", "file_number": 88}
Jan 20 14:48:14 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000086.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:48:14 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768920494608234, "job": 50, "event": "table_file_deletion", "file_number": 86}
Jan 20 14:48:14 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:48:14.546545) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:48:14 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:48:14.608368) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:48:14 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:48:14.608406) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:48:14 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:48:14.608411) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:48:14 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:48:14.608414) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:48:14 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:48:14.608417) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:48:14 compute-0 nova_compute[250018]: 2026-01-20 14:48:14.624 250022 DEBUG nova.virt.libvirt.driver [None req-52200910-098c-45f4-bd21-1cdc3b8fa5ff a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: fc5c09c4-3e90-4b31-8610-2e555b7e2406] Instance in state 1 after 21 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Jan 20 14:48:15 compute-0 ceph-mon[74360]: pgmap v1805: 321 pgs: 321 active+clean; 486 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 2.3 MiB/s wr, 182 op/s
Jan 20 14:48:15 compute-0 nova_compute[250018]: 2026-01-20 14:48:15.583 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:15 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:48:15.583 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=32, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=31) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:48:15 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:48:15.585 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 14:48:15 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1806: 321 pgs: 321 active+clean; 508 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 3.9 MiB/s wr, 158 op/s
Jan 20 14:48:16 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:48:16 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:48:16 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:48:16.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:48:16 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:48:16 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:48:16 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:48:16.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:48:16 compute-0 kernel: tap35f2fe4e-cb (unregistering): left promiscuous mode
Jan 20 14:48:16 compute-0 NetworkManager[48960]: <info>  [1768920496.9251] device (tap35f2fe4e-cb): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 14:48:16 compute-0 nova_compute[250018]: 2026-01-20 14:48:16.939 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:16 compute-0 ovn_controller[148666]: 2026-01-20T14:48:16Z|00332|binding|INFO|Releasing lport 35f2fe4e-cb8e-467e-82ec-c12e870ac8a3 from this chassis (sb_readonly=0)
Jan 20 14:48:16 compute-0 ovn_controller[148666]: 2026-01-20T14:48:16Z|00333|binding|INFO|Setting lport 35f2fe4e-cb8e-467e-82ec-c12e870ac8a3 down in Southbound
Jan 20 14:48:16 compute-0 ovn_controller[148666]: 2026-01-20T14:48:16Z|00334|binding|INFO|Removing iface tap35f2fe4e-cb ovn-installed in OVS
Jan 20 14:48:16 compute-0 nova_compute[250018]: 2026-01-20 14:48:16.942 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:16 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:48:16.952 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e7:a1:c2 10.100.0.8'], port_security=['fa:16:3e:e7:a1:c2 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'fc5c09c4-3e90-4b31-8610-2e555b7e2406', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3379e2b3-ffb2-4391-969b-c9dc51bfbe25', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'acb30fbc0e3749e390d7f867060b5a2a', 'neutron:revision_number': '4', 'neutron:security_group_ids': '19fab802-7db8-4c89-8f8e-8dcfc14d4627', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a0e287ba-f88b-46f5-bb7f-3cc2a74be88e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=35f2fe4e-cb8e-467e-82ec-c12e870ac8a3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:48:16 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:48:16.954 160071 INFO neutron.agent.ovn.metadata.agent [-] Port 35f2fe4e-cb8e-467e-82ec-c12e870ac8a3 in datapath 3379e2b3-ffb2-4391-969b-c9dc51bfbe25 unbound from our chassis
Jan 20 14:48:16 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:48:16.958 160071 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3379e2b3-ffb2-4391-969b-c9dc51bfbe25, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 14:48:16 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:48:16.959 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[431efab8-9976-42c7-9df7-c22fd29fb905]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:48:16 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:48:16.961 160071 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25 namespace which is not needed anymore
Jan 20 14:48:16 compute-0 nova_compute[250018]: 2026-01-20 14:48:16.965 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:17 compute-0 systemd[1]: machine-qemu\x2d43\x2dinstance\x2d00000066.scope: Deactivated successfully.
Jan 20 14:48:17 compute-0 systemd[1]: machine-qemu\x2d43\x2dinstance\x2d00000066.scope: Consumed 15.114s CPU time.
Jan 20 14:48:17 compute-0 systemd-machined[216401]: Machine qemu-43-instance-00000066 terminated.
Jan 20 14:48:17 compute-0 neutron-haproxy-ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25[307866]: [NOTICE]   (307871) : haproxy version is 2.8.14-c23fe91
Jan 20 14:48:17 compute-0 neutron-haproxy-ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25[307866]: [NOTICE]   (307871) : path to executable is /usr/sbin/haproxy
Jan 20 14:48:17 compute-0 neutron-haproxy-ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25[307866]: [WARNING]  (307871) : Exiting Master process...
Jan 20 14:48:17 compute-0 neutron-haproxy-ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25[307866]: [ALERT]    (307871) : Current worker (307873) exited with code 143 (Terminated)
Jan 20 14:48:17 compute-0 neutron-haproxy-ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25[307866]: [WARNING]  (307871) : All workers exited. Exiting... (0)
Jan 20 14:48:17 compute-0 systemd[1]: libpod-412b7b21b8d86ec3ce83658aacf575fd2466c6305d1fecaf0b1d10ced4bce71e.scope: Deactivated successfully.
Jan 20 14:48:17 compute-0 podman[308014]: 2026-01-20 14:48:17.108936598 +0000 UTC m=+0.045710190 container died 412b7b21b8d86ec3ce83658aacf575fd2466c6305d1fecaf0b1d10ced4bce71e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 20 14:48:17 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-412b7b21b8d86ec3ce83658aacf575fd2466c6305d1fecaf0b1d10ced4bce71e-userdata-shm.mount: Deactivated successfully.
Jan 20 14:48:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-c56ac57c872fe66e076387e68a0a296466391ed2a966f96bf04d27b837d25a82-merged.mount: Deactivated successfully.
Jan 20 14:48:17 compute-0 podman[308014]: 2026-01-20 14:48:17.149339673 +0000 UTC m=+0.086113265 container cleanup 412b7b21b8d86ec3ce83658aacf575fd2466c6305d1fecaf0b1d10ced4bce71e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 20 14:48:17 compute-0 systemd[1]: libpod-conmon-412b7b21b8d86ec3ce83658aacf575fd2466c6305d1fecaf0b1d10ced4bce71e.scope: Deactivated successfully.
Jan 20 14:48:17 compute-0 podman[308046]: 2026-01-20 14:48:17.213159322 +0000 UTC m=+0.040274121 container remove 412b7b21b8d86ec3ce83658aacf575fd2466c6305d1fecaf0b1d10ced4bce71e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:48:17 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:48:17.219 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[77eaf172-fb0a-4527-9048-f1b704e8607d]: (4, ('Tue Jan 20 02:48:17 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25 (412b7b21b8d86ec3ce83658aacf575fd2466c6305d1fecaf0b1d10ced4bce71e)\n412b7b21b8d86ec3ce83658aacf575fd2466c6305d1fecaf0b1d10ced4bce71e\nTue Jan 20 02:48:17 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25 (412b7b21b8d86ec3ce83658aacf575fd2466c6305d1fecaf0b1d10ced4bce71e)\n412b7b21b8d86ec3ce83658aacf575fd2466c6305d1fecaf0b1d10ced4bce71e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:48:17 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:48:17.221 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[fb081999-42b4-427b-87b1-b46805a1a2da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:48:17 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:48:17.222 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3379e2b3-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:48:17 compute-0 nova_compute[250018]: 2026-01-20 14:48:17.223 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:17 compute-0 kernel: tap3379e2b3-f0: left promiscuous mode
Jan 20 14:48:17 compute-0 nova_compute[250018]: 2026-01-20 14:48:17.241 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:17 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:48:17.243 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[7d5664b0-dec1-4e2f-ab0b-3a3d82ceac25]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:48:17 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:48:17.256 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[ea18b0e0-6b5a-4430-9838-826dd9aaaf88]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:48:17 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:48:17.257 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[b36513ed-71c4-4b53-b69d-ca34400c08fd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:48:17 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:48:17.272 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[e6ad6891-e6f8-414f-b105-c9f03da89228]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 640985, 'reachable_time': 36202, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 308072, 'error': None, 'target': 'ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:48:17 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:48:17.274 160517 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 14:48:17 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:48:17.275 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[d1fd6eab-1f3c-4be9-8fd9-269116cf11e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:48:17 compute-0 systemd[1]: run-netns-ovnmeta\x2d3379e2b3\x2dffb2\x2d4391\x2d969b\x2dc9dc51bfbe25.mount: Deactivated successfully.
Jan 20 14:48:17 compute-0 nova_compute[250018]: 2026-01-20 14:48:17.382 250022 DEBUG nova.compute.manager [req-d4f245e4-d842-4270-9927-b435a9b9fd19 req-cce12df2-2e0f-49a7-9859-a5cb82bc878e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: fc5c09c4-3e90-4b31-8610-2e555b7e2406] Received event network-vif-unplugged-35f2fe4e-cb8e-467e-82ec-c12e870ac8a3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:48:17 compute-0 nova_compute[250018]: 2026-01-20 14:48:17.382 250022 DEBUG oslo_concurrency.lockutils [req-d4f245e4-d842-4270-9927-b435a9b9fd19 req-cce12df2-2e0f-49a7-9859-a5cb82bc878e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "fc5c09c4-3e90-4b31-8610-2e555b7e2406-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:48:17 compute-0 nova_compute[250018]: 2026-01-20 14:48:17.382 250022 DEBUG oslo_concurrency.lockutils [req-d4f245e4-d842-4270-9927-b435a9b9fd19 req-cce12df2-2e0f-49a7-9859-a5cb82bc878e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "fc5c09c4-3e90-4b31-8610-2e555b7e2406-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:48:17 compute-0 nova_compute[250018]: 2026-01-20 14:48:17.383 250022 DEBUG oslo_concurrency.lockutils [req-d4f245e4-d842-4270-9927-b435a9b9fd19 req-cce12df2-2e0f-49a7-9859-a5cb82bc878e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "fc5c09c4-3e90-4b31-8610-2e555b7e2406-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:48:17 compute-0 nova_compute[250018]: 2026-01-20 14:48:17.383 250022 DEBUG nova.compute.manager [req-d4f245e4-d842-4270-9927-b435a9b9fd19 req-cce12df2-2e0f-49a7-9859-a5cb82bc878e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: fc5c09c4-3e90-4b31-8610-2e555b7e2406] No waiting events found dispatching network-vif-unplugged-35f2fe4e-cb8e-467e-82ec-c12e870ac8a3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:48:17 compute-0 nova_compute[250018]: 2026-01-20 14:48:17.383 250022 WARNING nova.compute.manager [req-d4f245e4-d842-4270-9927-b435a9b9fd19 req-cce12df2-2e0f-49a7-9859-a5cb82bc878e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: fc5c09c4-3e90-4b31-8610-2e555b7e2406] Received unexpected event network-vif-unplugged-35f2fe4e-cb8e-467e-82ec-c12e870ac8a3 for instance with vm_state active and task_state rebuilding.
Jan 20 14:48:17 compute-0 ceph-mon[74360]: pgmap v1806: 321 pgs: 321 active+clean; 508 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 3.9 MiB/s wr, 158 op/s
Jan 20 14:48:17 compute-0 nova_compute[250018]: 2026-01-20 14:48:17.639 250022 INFO nova.virt.libvirt.driver [None req-52200910-098c-45f4-bd21-1cdc3b8fa5ff a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: fc5c09c4-3e90-4b31-8610-2e555b7e2406] Instance shutdown successfully after 24 seconds.
Jan 20 14:48:17 compute-0 nova_compute[250018]: 2026-01-20 14:48:17.645 250022 INFO nova.virt.libvirt.driver [-] [instance: fc5c09c4-3e90-4b31-8610-2e555b7e2406] Instance destroyed successfully.
Jan 20 14:48:17 compute-0 nova_compute[250018]: 2026-01-20 14:48:17.649 250022 INFO nova.virt.libvirt.driver [-] [instance: fc5c09c4-3e90-4b31-8610-2e555b7e2406] Instance destroyed successfully.
Jan 20 14:48:17 compute-0 nova_compute[250018]: 2026-01-20 14:48:17.650 250022 DEBUG nova.virt.libvirt.vif [None req-52200910-098c-45f4-bd21-1cdc3b8fa5ff a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:47:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1122585928',display_name='tempest-ServerDiskConfigTestJSON-server-1122585928',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1122585928',id=102,image_ref='26699514-f465-4b50-98b7-36f2cfc6a308',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:47:49Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='acb30fbc0e3749e390d7f867060b5a2a',ramdisk_id='',reservation_id='r-o0jf2qbe',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='26699514-f465-4b50-98b7-36f2cfc6a308',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-1806346246',owner_user_name='tempest-ServerDiskConfigTestJSON-1806346246-project-m
ember'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:47:52Z,user_data=None,user_id='a1bd93d04cc4468abe1d5c61f5144191',uuid=fc5c09c4-3e90-4b31-8610-2e555b7e2406,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "35f2fe4e-cb8e-467e-82ec-c12e870ac8a3", "address": "fa:16:3e:e7:a1:c2", "network": {"id": "3379e2b3-ffb2-4391-969b-c9dc51bfbe25", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1112843240-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acb30fbc0e3749e390d7f867060b5a2a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap35f2fe4e-cb", "ovs_interfaceid": "35f2fe4e-cb8e-467e-82ec-c12e870ac8a3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 14:48:17 compute-0 nova_compute[250018]: 2026-01-20 14:48:17.650 250022 DEBUG nova.network.os_vif_util [None req-52200910-098c-45f4-bd21-1cdc3b8fa5ff a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Converting VIF {"id": "35f2fe4e-cb8e-467e-82ec-c12e870ac8a3", "address": "fa:16:3e:e7:a1:c2", "network": {"id": "3379e2b3-ffb2-4391-969b-c9dc51bfbe25", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1112843240-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acb30fbc0e3749e390d7f867060b5a2a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap35f2fe4e-cb", "ovs_interfaceid": "35f2fe4e-cb8e-467e-82ec-c12e870ac8a3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:48:17 compute-0 nova_compute[250018]: 2026-01-20 14:48:17.651 250022 DEBUG nova.network.os_vif_util [None req-52200910-098c-45f4-bd21-1cdc3b8fa5ff a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e7:a1:c2,bridge_name='br-int',has_traffic_filtering=True,id=35f2fe4e-cb8e-467e-82ec-c12e870ac8a3,network=Network(3379e2b3-ffb2-4391-969b-c9dc51bfbe25),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap35f2fe4e-cb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:48:17 compute-0 nova_compute[250018]: 2026-01-20 14:48:17.652 250022 DEBUG os_vif [None req-52200910-098c-45f4-bd21-1cdc3b8fa5ff a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e7:a1:c2,bridge_name='br-int',has_traffic_filtering=True,id=35f2fe4e-cb8e-467e-82ec-c12e870ac8a3,network=Network(3379e2b3-ffb2-4391-969b-c9dc51bfbe25),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap35f2fe4e-cb') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 14:48:17 compute-0 nova_compute[250018]: 2026-01-20 14:48:17.653 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:17 compute-0 nova_compute[250018]: 2026-01-20 14:48:17.653 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap35f2fe4e-cb, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:48:17 compute-0 nova_compute[250018]: 2026-01-20 14:48:17.655 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:17 compute-0 nova_compute[250018]: 2026-01-20 14:48:17.656 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:17 compute-0 nova_compute[250018]: 2026-01-20 14:48:17.658 250022 INFO os_vif [None req-52200910-098c-45f4-bd21-1cdc3b8fa5ff a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e7:a1:c2,bridge_name='br-int',has_traffic_filtering=True,id=35f2fe4e-cb8e-467e-82ec-c12e870ac8a3,network=Network(3379e2b3-ffb2-4391-969b-c9dc51bfbe25),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap35f2fe4e-cb')
Jan 20 14:48:17 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1807: 321 pgs: 321 active+clean; 518 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 2.4 MiB/s wr, 121 op/s
Jan 20 14:48:17 compute-0 nova_compute[250018]: 2026-01-20 14:48:17.983 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:18 compute-0 nova_compute[250018]: 2026-01-20 14:48:18.047 250022 INFO nova.virt.libvirt.driver [None req-52200910-098c-45f4-bd21-1cdc3b8fa5ff a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: fc5c09c4-3e90-4b31-8610-2e555b7e2406] Deleting instance files /var/lib/nova/instances/fc5c09c4-3e90-4b31-8610-2e555b7e2406_del
Jan 20 14:48:18 compute-0 nova_compute[250018]: 2026-01-20 14:48:18.048 250022 INFO nova.virt.libvirt.driver [None req-52200910-098c-45f4-bd21-1cdc3b8fa5ff a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: fc5c09c4-3e90-4b31-8610-2e555b7e2406] Deletion of /var/lib/nova/instances/fc5c09c4-3e90-4b31-8610-2e555b7e2406_del complete
Jan 20 14:48:18 compute-0 nova_compute[250018]: 2026-01-20 14:48:18.203 250022 DEBUG nova.virt.libvirt.driver [None req-52200910-098c-45f4-bd21-1cdc3b8fa5ff a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: fc5c09c4-3e90-4b31-8610-2e555b7e2406] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 14:48:18 compute-0 nova_compute[250018]: 2026-01-20 14:48:18.203 250022 INFO nova.virt.libvirt.driver [None req-52200910-098c-45f4-bd21-1cdc3b8fa5ff a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: fc5c09c4-3e90-4b31-8610-2e555b7e2406] Creating image(s)
Jan 20 14:48:18 compute-0 nova_compute[250018]: 2026-01-20 14:48:18.230 250022 DEBUG nova.storage.rbd_utils [None req-52200910-098c-45f4-bd21-1cdc3b8fa5ff a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] rbd image fc5c09c4-3e90-4b31-8610-2e555b7e2406_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:48:18 compute-0 nova_compute[250018]: 2026-01-20 14:48:18.256 250022 DEBUG nova.storage.rbd_utils [None req-52200910-098c-45f4-bd21-1cdc3b8fa5ff a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] rbd image fc5c09c4-3e90-4b31-8610-2e555b7e2406_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:48:18 compute-0 nova_compute[250018]: 2026-01-20 14:48:18.280 250022 DEBUG nova.storage.rbd_utils [None req-52200910-098c-45f4-bd21-1cdc3b8fa5ff a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] rbd image fc5c09c4-3e90-4b31-8610-2e555b7e2406_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:48:18 compute-0 nova_compute[250018]: 2026-01-20 14:48:18.283 250022 DEBUG oslo_concurrency.lockutils [None req-52200910-098c-45f4-bd21-1cdc3b8fa5ff a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Acquiring lock "a4ed0d2b98aa460c005e878d78a49ccb6f511f7c" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:48:18 compute-0 nova_compute[250018]: 2026-01-20 14:48:18.285 250022 DEBUG oslo_concurrency.lockutils [None req-52200910-098c-45f4-bd21-1cdc3b8fa5ff a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lock "a4ed0d2b98aa460c005e878d78a49ccb6f511f7c" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:48:18 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:48:18 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:48:18 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:48:18.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:48:18 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:48:18 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:48:18 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:48:18.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:48:18 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:48:18.586 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=367c1a2c-b16a-4828-ab5a-626bb50023b4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '32'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:48:18 compute-0 nova_compute[250018]: 2026-01-20 14:48:18.929 250022 DEBUG nova.virt.libvirt.imagebackend [None req-52200910-098c-45f4-bd21-1cdc3b8fa5ff a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Image locations are: [{'url': 'rbd://e399cf45-e6b6-5393-99f1-75c601d3f188/images/26699514-f465-4b50-98b7-36f2cfc6a308/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://e399cf45-e6b6-5393-99f1-75c601d3f188/images/26699514-f465-4b50-98b7-36f2cfc6a308/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Jan 20 14:48:19 compute-0 nova_compute[250018]: 2026-01-20 14:48:19.472 250022 DEBUG nova.compute.manager [req-79caff00-6b9f-4f6f-b723-94e643365200 req-19d76bbe-53f2-4371-8a58-275effed69bd 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: fc5c09c4-3e90-4b31-8610-2e555b7e2406] Received event network-vif-plugged-35f2fe4e-cb8e-467e-82ec-c12e870ac8a3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:48:19 compute-0 nova_compute[250018]: 2026-01-20 14:48:19.473 250022 DEBUG oslo_concurrency.lockutils [req-79caff00-6b9f-4f6f-b723-94e643365200 req-19d76bbe-53f2-4371-8a58-275effed69bd 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "fc5c09c4-3e90-4b31-8610-2e555b7e2406-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:48:19 compute-0 nova_compute[250018]: 2026-01-20 14:48:19.473 250022 DEBUG oslo_concurrency.lockutils [req-79caff00-6b9f-4f6f-b723-94e643365200 req-19d76bbe-53f2-4371-8a58-275effed69bd 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "fc5c09c4-3e90-4b31-8610-2e555b7e2406-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:48:19 compute-0 nova_compute[250018]: 2026-01-20 14:48:19.473 250022 DEBUG oslo_concurrency.lockutils [req-79caff00-6b9f-4f6f-b723-94e643365200 req-19d76bbe-53f2-4371-8a58-275effed69bd 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "fc5c09c4-3e90-4b31-8610-2e555b7e2406-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:48:19 compute-0 nova_compute[250018]: 2026-01-20 14:48:19.473 250022 DEBUG nova.compute.manager [req-79caff00-6b9f-4f6f-b723-94e643365200 req-19d76bbe-53f2-4371-8a58-275effed69bd 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: fc5c09c4-3e90-4b31-8610-2e555b7e2406] No waiting events found dispatching network-vif-plugged-35f2fe4e-cb8e-467e-82ec-c12e870ac8a3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:48:19 compute-0 nova_compute[250018]: 2026-01-20 14:48:19.473 250022 WARNING nova.compute.manager [req-79caff00-6b9f-4f6f-b723-94e643365200 req-19d76bbe-53f2-4371-8a58-275effed69bd 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: fc5c09c4-3e90-4b31-8610-2e555b7e2406] Received unexpected event network-vif-plugged-35f2fe4e-cb8e-467e-82ec-c12e870ac8a3 for instance with vm_state active and task_state rebuild_spawning.
Jan 20 14:48:19 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:48:19 compute-0 ceph-mon[74360]: pgmap v1807: 321 pgs: 321 active+clean; 518 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 2.4 MiB/s wr, 121 op/s
Jan 20 14:48:19 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1808: 321 pgs: 321 active+clean; 464 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 2.2 MiB/s wr, 150 op/s
Jan 20 14:48:20 compute-0 nova_compute[250018]: 2026-01-20 14:48:20.239 250022 DEBUG oslo_concurrency.processutils [None req-52200910-098c-45f4-bd21-1cdc3b8fa5ff a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a4ed0d2b98aa460c005e878d78a49ccb6f511f7c.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:48:20 compute-0 nova_compute[250018]: 2026-01-20 14:48:20.317 250022 DEBUG oslo_concurrency.processutils [None req-52200910-098c-45f4-bd21-1cdc3b8fa5ff a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a4ed0d2b98aa460c005e878d78a49ccb6f511f7c.part --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:48:20 compute-0 nova_compute[250018]: 2026-01-20 14:48:20.318 250022 DEBUG nova.virt.images [None req-52200910-098c-45f4-bd21-1cdc3b8fa5ff a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] 26699514-f465-4b50-98b7-36f2cfc6a308 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Jan 20 14:48:20 compute-0 nova_compute[250018]: 2026-01-20 14:48:20.319 250022 DEBUG nova.privsep.utils [None req-52200910-098c-45f4-bd21-1cdc3b8fa5ff a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Jan 20 14:48:20 compute-0 nova_compute[250018]: 2026-01-20 14:48:20.320 250022 DEBUG oslo_concurrency.processutils [None req-52200910-098c-45f4-bd21-1cdc3b8fa5ff a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/a4ed0d2b98aa460c005e878d78a49ccb6f511f7c.part /var/lib/nova/instances/_base/a4ed0d2b98aa460c005e878d78a49ccb6f511f7c.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:48:20 compute-0 nova_compute[250018]: 2026-01-20 14:48:20.495 250022 DEBUG oslo_concurrency.processutils [None req-52200910-098c-45f4-bd21-1cdc3b8fa5ff a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/a4ed0d2b98aa460c005e878d78a49ccb6f511f7c.part /var/lib/nova/instances/_base/a4ed0d2b98aa460c005e878d78a49ccb6f511f7c.converted" returned: 0 in 0.175s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:48:20 compute-0 nova_compute[250018]: 2026-01-20 14:48:20.500 250022 DEBUG oslo_concurrency.processutils [None req-52200910-098c-45f4-bd21-1cdc3b8fa5ff a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a4ed0d2b98aa460c005e878d78a49ccb6f511f7c.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:48:20 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:48:20 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:48:20 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:48:20.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:48:20 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:48:20 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:48:20 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:48:20.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:48:20 compute-0 nova_compute[250018]: 2026-01-20 14:48:20.565 250022 DEBUG oslo_concurrency.processutils [None req-52200910-098c-45f4-bd21-1cdc3b8fa5ff a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a4ed0d2b98aa460c005e878d78a49ccb6f511f7c.converted --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:48:20 compute-0 nova_compute[250018]: 2026-01-20 14:48:20.567 250022 DEBUG oslo_concurrency.lockutils [None req-52200910-098c-45f4-bd21-1cdc3b8fa5ff a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lock "a4ed0d2b98aa460c005e878d78a49ccb6f511f7c" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 2.282s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:48:20 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/1269976554' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:48:20 compute-0 nova_compute[250018]: 2026-01-20 14:48:20.601 250022 DEBUG nova.storage.rbd_utils [None req-52200910-098c-45f4-bd21-1cdc3b8fa5ff a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] rbd image fc5c09c4-3e90-4b31-8610-2e555b7e2406_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:48:20 compute-0 nova_compute[250018]: 2026-01-20 14:48:20.605 250022 DEBUG oslo_concurrency.processutils [None req-52200910-098c-45f4-bd21-1cdc3b8fa5ff a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a4ed0d2b98aa460c005e878d78a49ccb6f511f7c fc5c09c4-3e90-4b31-8610-2e555b7e2406_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:48:21 compute-0 nova_compute[250018]: 2026-01-20 14:48:21.242 250022 DEBUG oslo_concurrency.processutils [None req-52200910-098c-45f4-bd21-1cdc3b8fa5ff a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a4ed0d2b98aa460c005e878d78a49ccb6f511f7c fc5c09c4-3e90-4b31-8610-2e555b7e2406_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.637s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:48:21 compute-0 nova_compute[250018]: 2026-01-20 14:48:21.322 250022 DEBUG nova.storage.rbd_utils [None req-52200910-098c-45f4-bd21-1cdc3b8fa5ff a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] resizing rbd image fc5c09c4-3e90-4b31-8610-2e555b7e2406_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 20 14:48:21 compute-0 nova_compute[250018]: 2026-01-20 14:48:21.444 250022 DEBUG nova.virt.libvirt.driver [None req-52200910-098c-45f4-bd21-1cdc3b8fa5ff a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: fc5c09c4-3e90-4b31-8610-2e555b7e2406] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 14:48:21 compute-0 nova_compute[250018]: 2026-01-20 14:48:21.445 250022 DEBUG nova.virt.libvirt.driver [None req-52200910-098c-45f4-bd21-1cdc3b8fa5ff a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: fc5c09c4-3e90-4b31-8610-2e555b7e2406] Ensure instance console log exists: /var/lib/nova/instances/fc5c09c4-3e90-4b31-8610-2e555b7e2406/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 14:48:21 compute-0 nova_compute[250018]: 2026-01-20 14:48:21.445 250022 DEBUG oslo_concurrency.lockutils [None req-52200910-098c-45f4-bd21-1cdc3b8fa5ff a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:48:21 compute-0 nova_compute[250018]: 2026-01-20 14:48:21.446 250022 DEBUG oslo_concurrency.lockutils [None req-52200910-098c-45f4-bd21-1cdc3b8fa5ff a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:48:21 compute-0 nova_compute[250018]: 2026-01-20 14:48:21.446 250022 DEBUG oslo_concurrency.lockutils [None req-52200910-098c-45f4-bd21-1cdc3b8fa5ff a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:48:21 compute-0 nova_compute[250018]: 2026-01-20 14:48:21.448 250022 DEBUG nova.virt.libvirt.driver [None req-52200910-098c-45f4-bd21-1cdc3b8fa5ff a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: fc5c09c4-3e90-4b31-8610-2e555b7e2406] Start _get_guest_xml network_info=[{"id": "35f2fe4e-cb8e-467e-82ec-c12e870ac8a3", "address": "fa:16:3e:e7:a1:c2", "network": {"id": "3379e2b3-ffb2-4391-969b-c9dc51bfbe25", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1112843240-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acb30fbc0e3749e390d7f867060b5a2a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap35f2fe4e-cb", "ovs_interfaceid": "35f2fe4e-cb8e-467e-82ec-c12e870ac8a3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:22:02Z,direct_url=<?>,disk_format='qcow2',id=26699514-f465-4b50-98b7-36f2cfc6a308,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:04Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encrypted': False, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'guest_format': None, 'size': 0, 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 14:48:21 compute-0 nova_compute[250018]: 2026-01-20 14:48:21.452 250022 WARNING nova.virt.libvirt.driver [None req-52200910-098c-45f4-bd21-1cdc3b8fa5ff a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Jan 20 14:48:21 compute-0 nova_compute[250018]: 2026-01-20 14:48:21.458 250022 DEBUG nova.virt.libvirt.host [None req-52200910-098c-45f4-bd21-1cdc3b8fa5ff a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 14:48:21 compute-0 nova_compute[250018]: 2026-01-20 14:48:21.459 250022 DEBUG nova.virt.libvirt.host [None req-52200910-098c-45f4-bd21-1cdc3b8fa5ff a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 14:48:21 compute-0 nova_compute[250018]: 2026-01-20 14:48:21.461 250022 DEBUG nova.virt.libvirt.host [None req-52200910-098c-45f4-bd21-1cdc3b8fa5ff a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 14:48:21 compute-0 nova_compute[250018]: 2026-01-20 14:48:21.462 250022 DEBUG nova.virt.libvirt.host [None req-52200910-098c-45f4-bd21-1cdc3b8fa5ff a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 14:48:21 compute-0 nova_compute[250018]: 2026-01-20 14:48:21.463 250022 DEBUG nova.virt.libvirt.driver [None req-52200910-098c-45f4-bd21-1cdc3b8fa5ff a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 14:48:21 compute-0 nova_compute[250018]: 2026-01-20 14:48:21.463 250022 DEBUG nova.virt.hardware [None req-52200910-098c-45f4-bd21-1cdc3b8fa5ff a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:22:02Z,direct_url=<?>,disk_format='qcow2',id=26699514-f465-4b50-98b7-36f2cfc6a308,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:04Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 14:48:21 compute-0 nova_compute[250018]: 2026-01-20 14:48:21.463 250022 DEBUG nova.virt.hardware [None req-52200910-098c-45f4-bd21-1cdc3b8fa5ff a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 14:48:21 compute-0 nova_compute[250018]: 2026-01-20 14:48:21.463 250022 DEBUG nova.virt.hardware [None req-52200910-098c-45f4-bd21-1cdc3b8fa5ff a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 14:48:21 compute-0 nova_compute[250018]: 2026-01-20 14:48:21.464 250022 DEBUG nova.virt.hardware [None req-52200910-098c-45f4-bd21-1cdc3b8fa5ff a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 14:48:21 compute-0 nova_compute[250018]: 2026-01-20 14:48:21.464 250022 DEBUG nova.virt.hardware [None req-52200910-098c-45f4-bd21-1cdc3b8fa5ff a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 14:48:21 compute-0 nova_compute[250018]: 2026-01-20 14:48:21.464 250022 DEBUG nova.virt.hardware [None req-52200910-098c-45f4-bd21-1cdc3b8fa5ff a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 14:48:21 compute-0 nova_compute[250018]: 2026-01-20 14:48:21.464 250022 DEBUG nova.virt.hardware [None req-52200910-098c-45f4-bd21-1cdc3b8fa5ff a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 14:48:21 compute-0 nova_compute[250018]: 2026-01-20 14:48:21.465 250022 DEBUG nova.virt.hardware [None req-52200910-098c-45f4-bd21-1cdc3b8fa5ff a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 14:48:21 compute-0 nova_compute[250018]: 2026-01-20 14:48:21.465 250022 DEBUG nova.virt.hardware [None req-52200910-098c-45f4-bd21-1cdc3b8fa5ff a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 14:48:21 compute-0 nova_compute[250018]: 2026-01-20 14:48:21.465 250022 DEBUG nova.virt.hardware [None req-52200910-098c-45f4-bd21-1cdc3b8fa5ff a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 14:48:21 compute-0 nova_compute[250018]: 2026-01-20 14:48:21.466 250022 DEBUG nova.virt.hardware [None req-52200910-098c-45f4-bd21-1cdc3b8fa5ff a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 14:48:21 compute-0 nova_compute[250018]: 2026-01-20 14:48:21.466 250022 DEBUG nova.objects.instance [None req-52200910-098c-45f4-bd21-1cdc3b8fa5ff a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lazy-loading 'vcpu_model' on Instance uuid fc5c09c4-3e90-4b31-8610-2e555b7e2406 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:48:21 compute-0 nova_compute[250018]: 2026-01-20 14:48:21.489 250022 DEBUG oslo_concurrency.processutils [None req-52200910-098c-45f4-bd21-1cdc3b8fa5ff a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:48:21 compute-0 ceph-mon[74360]: pgmap v1808: 321 pgs: 321 active+clean; 464 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 2.2 MiB/s wr, 150 op/s
Jan 20 14:48:21 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/3194524310' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:48:21 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1809: 321 pgs: 321 active+clean; 412 MiB data, 1005 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.6 MiB/s wr, 186 op/s
Jan 20 14:48:21 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:48:21 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1691197459' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:48:21 compute-0 nova_compute[250018]: 2026-01-20 14:48:21.923 250022 DEBUG oslo_concurrency.processutils [None req-52200910-098c-45f4-bd21-1cdc3b8fa5ff a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:48:21 compute-0 nova_compute[250018]: 2026-01-20 14:48:21.951 250022 DEBUG nova.storage.rbd_utils [None req-52200910-098c-45f4-bd21-1cdc3b8fa5ff a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] rbd image fc5c09c4-3e90-4b31-8610-2e555b7e2406_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:48:21 compute-0 nova_compute[250018]: 2026-01-20 14:48:21.955 250022 DEBUG oslo_concurrency.processutils [None req-52200910-098c-45f4-bd21-1cdc3b8fa5ff a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:48:22 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:48:22 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/163141579' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:48:22 compute-0 nova_compute[250018]: 2026-01-20 14:48:22.410 250022 DEBUG oslo_concurrency.processutils [None req-52200910-098c-45f4-bd21-1cdc3b8fa5ff a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:48:22 compute-0 nova_compute[250018]: 2026-01-20 14:48:22.412 250022 DEBUG nova.virt.libvirt.vif [None req-52200910-098c-45f4-bd21-1cdc3b8fa5ff a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-20T14:47:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1122585928',display_name='tempest-ServerDiskConfigTestJSON-server-1122585928',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1122585928',id=102,image_ref='26699514-f465-4b50-98b7-36f2cfc6a308',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:47:49Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='acb30fbc0e3749e390d7f867060b5a2a',ramdisk_id='',reservation_id='r-o0jf2qbe',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='26699514-f465-4b50-98b7-36f2cfc6a308',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-1806346246',owner_user_name='tempest-ServerDiskConfigTestJSON-1806346246-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:48:18Z,user_data=None,user_id='a1bd93d04cc4468abe1d5c61f5144191',uuid=fc5c09c4-3e90-4b31-8610-2e555b7e2406,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "35f2fe4e-cb8e-467e-82ec-c12e870ac8a3", "address": "fa:16:3e:e7:a1:c2", "network": {"id": "3379e2b3-ffb2-4391-969b-c9dc51bfbe25", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1112843240-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acb30fbc0e3749e390d7f867060b5a2a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap35f2fe4e-cb", "ovs_interfaceid": "35f2fe4e-cb8e-467e-82ec-c12e870ac8a3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 14:48:22 compute-0 nova_compute[250018]: 2026-01-20 14:48:22.412 250022 DEBUG nova.network.os_vif_util [None req-52200910-098c-45f4-bd21-1cdc3b8fa5ff a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Converting VIF {"id": "35f2fe4e-cb8e-467e-82ec-c12e870ac8a3", "address": "fa:16:3e:e7:a1:c2", "network": {"id": "3379e2b3-ffb2-4391-969b-c9dc51bfbe25", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1112843240-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acb30fbc0e3749e390d7f867060b5a2a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap35f2fe4e-cb", "ovs_interfaceid": "35f2fe4e-cb8e-467e-82ec-c12e870ac8a3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:48:22 compute-0 nova_compute[250018]: 2026-01-20 14:48:22.413 250022 DEBUG nova.network.os_vif_util [None req-52200910-098c-45f4-bd21-1cdc3b8fa5ff a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e7:a1:c2,bridge_name='br-int',has_traffic_filtering=True,id=35f2fe4e-cb8e-467e-82ec-c12e870ac8a3,network=Network(3379e2b3-ffb2-4391-969b-c9dc51bfbe25),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap35f2fe4e-cb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:48:22 compute-0 nova_compute[250018]: 2026-01-20 14:48:22.415 250022 DEBUG nova.virt.libvirt.driver [None req-52200910-098c-45f4-bd21-1cdc3b8fa5ff a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: fc5c09c4-3e90-4b31-8610-2e555b7e2406] End _get_guest_xml xml=<domain type="kvm">
Jan 20 14:48:22 compute-0 nova_compute[250018]:   <uuid>fc5c09c4-3e90-4b31-8610-2e555b7e2406</uuid>
Jan 20 14:48:22 compute-0 nova_compute[250018]:   <name>instance-00000066</name>
Jan 20 14:48:22 compute-0 nova_compute[250018]:   <memory>131072</memory>
Jan 20 14:48:22 compute-0 nova_compute[250018]:   <vcpu>1</vcpu>
Jan 20 14:48:22 compute-0 nova_compute[250018]:   <metadata>
Jan 20 14:48:22 compute-0 nova_compute[250018]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 14:48:22 compute-0 nova_compute[250018]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 14:48:22 compute-0 nova_compute[250018]:       <nova:name>tempest-ServerDiskConfigTestJSON-server-1122585928</nova:name>
Jan 20 14:48:22 compute-0 nova_compute[250018]:       <nova:creationTime>2026-01-20 14:48:21</nova:creationTime>
Jan 20 14:48:22 compute-0 nova_compute[250018]:       <nova:flavor name="m1.nano">
Jan 20 14:48:22 compute-0 nova_compute[250018]:         <nova:memory>128</nova:memory>
Jan 20 14:48:22 compute-0 nova_compute[250018]:         <nova:disk>1</nova:disk>
Jan 20 14:48:22 compute-0 nova_compute[250018]:         <nova:swap>0</nova:swap>
Jan 20 14:48:22 compute-0 nova_compute[250018]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 14:48:22 compute-0 nova_compute[250018]:         <nova:vcpus>1</nova:vcpus>
Jan 20 14:48:22 compute-0 nova_compute[250018]:       </nova:flavor>
Jan 20 14:48:22 compute-0 nova_compute[250018]:       <nova:owner>
Jan 20 14:48:22 compute-0 nova_compute[250018]:         <nova:user uuid="a1bd93d04cc4468abe1d5c61f5144191">tempest-ServerDiskConfigTestJSON-1806346246-project-member</nova:user>
Jan 20 14:48:22 compute-0 nova_compute[250018]:         <nova:project uuid="acb30fbc0e3749e390d7f867060b5a2a">tempest-ServerDiskConfigTestJSON-1806346246</nova:project>
Jan 20 14:48:22 compute-0 nova_compute[250018]:       </nova:owner>
Jan 20 14:48:22 compute-0 nova_compute[250018]:       <nova:root type="image" uuid="26699514-f465-4b50-98b7-36f2cfc6a308"/>
Jan 20 14:48:22 compute-0 nova_compute[250018]:       <nova:ports>
Jan 20 14:48:22 compute-0 nova_compute[250018]:         <nova:port uuid="35f2fe4e-cb8e-467e-82ec-c12e870ac8a3">
Jan 20 14:48:22 compute-0 nova_compute[250018]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Jan 20 14:48:22 compute-0 nova_compute[250018]:         </nova:port>
Jan 20 14:48:22 compute-0 nova_compute[250018]:       </nova:ports>
Jan 20 14:48:22 compute-0 nova_compute[250018]:     </nova:instance>
Jan 20 14:48:22 compute-0 nova_compute[250018]:   </metadata>
Jan 20 14:48:22 compute-0 nova_compute[250018]:   <sysinfo type="smbios">
Jan 20 14:48:22 compute-0 nova_compute[250018]:     <system>
Jan 20 14:48:22 compute-0 nova_compute[250018]:       <entry name="manufacturer">RDO</entry>
Jan 20 14:48:22 compute-0 nova_compute[250018]:       <entry name="product">OpenStack Compute</entry>
Jan 20 14:48:22 compute-0 nova_compute[250018]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 14:48:22 compute-0 nova_compute[250018]:       <entry name="serial">fc5c09c4-3e90-4b31-8610-2e555b7e2406</entry>
Jan 20 14:48:22 compute-0 nova_compute[250018]:       <entry name="uuid">fc5c09c4-3e90-4b31-8610-2e555b7e2406</entry>
Jan 20 14:48:22 compute-0 nova_compute[250018]:       <entry name="family">Virtual Machine</entry>
Jan 20 14:48:22 compute-0 nova_compute[250018]:     </system>
Jan 20 14:48:22 compute-0 nova_compute[250018]:   </sysinfo>
Jan 20 14:48:22 compute-0 nova_compute[250018]:   <os>
Jan 20 14:48:22 compute-0 nova_compute[250018]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 14:48:22 compute-0 nova_compute[250018]:     <boot dev="hd"/>
Jan 20 14:48:22 compute-0 nova_compute[250018]:     <smbios mode="sysinfo"/>
Jan 20 14:48:22 compute-0 nova_compute[250018]:   </os>
Jan 20 14:48:22 compute-0 nova_compute[250018]:   <features>
Jan 20 14:48:22 compute-0 nova_compute[250018]:     <acpi/>
Jan 20 14:48:22 compute-0 nova_compute[250018]:     <apic/>
Jan 20 14:48:22 compute-0 nova_compute[250018]:     <vmcoreinfo/>
Jan 20 14:48:22 compute-0 nova_compute[250018]:   </features>
Jan 20 14:48:22 compute-0 nova_compute[250018]:   <clock offset="utc">
Jan 20 14:48:22 compute-0 nova_compute[250018]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 14:48:22 compute-0 nova_compute[250018]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 14:48:22 compute-0 nova_compute[250018]:     <timer name="hpet" present="no"/>
Jan 20 14:48:22 compute-0 nova_compute[250018]:   </clock>
Jan 20 14:48:22 compute-0 nova_compute[250018]:   <cpu mode="custom" match="exact">
Jan 20 14:48:22 compute-0 nova_compute[250018]:     <model>Nehalem</model>
Jan 20 14:48:22 compute-0 nova_compute[250018]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 14:48:22 compute-0 nova_compute[250018]:   </cpu>
Jan 20 14:48:22 compute-0 nova_compute[250018]:   <devices>
Jan 20 14:48:22 compute-0 nova_compute[250018]:     <disk type="network" device="disk">
Jan 20 14:48:22 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 14:48:22 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/fc5c09c4-3e90-4b31-8610-2e555b7e2406_disk">
Jan 20 14:48:22 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:48:22 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:48:22 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:48:22 compute-0 nova_compute[250018]:       </source>
Jan 20 14:48:22 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 14:48:22 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:48:22 compute-0 nova_compute[250018]:       </auth>
Jan 20 14:48:22 compute-0 nova_compute[250018]:       <target dev="vda" bus="virtio"/>
Jan 20 14:48:22 compute-0 nova_compute[250018]:     </disk>
Jan 20 14:48:22 compute-0 nova_compute[250018]:     <disk type="network" device="cdrom">
Jan 20 14:48:22 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 14:48:22 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/fc5c09c4-3e90-4b31-8610-2e555b7e2406_disk.config">
Jan 20 14:48:22 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:48:22 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:48:22 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:48:22 compute-0 nova_compute[250018]:       </source>
Jan 20 14:48:22 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 14:48:22 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:48:22 compute-0 nova_compute[250018]:       </auth>
Jan 20 14:48:22 compute-0 nova_compute[250018]:       <target dev="sda" bus="sata"/>
Jan 20 14:48:22 compute-0 nova_compute[250018]:     </disk>
Jan 20 14:48:22 compute-0 nova_compute[250018]:     <interface type="ethernet">
Jan 20 14:48:22 compute-0 nova_compute[250018]:       <mac address="fa:16:3e:e7:a1:c2"/>
Jan 20 14:48:22 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 14:48:22 compute-0 nova_compute[250018]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 14:48:22 compute-0 nova_compute[250018]:       <mtu size="1442"/>
Jan 20 14:48:22 compute-0 nova_compute[250018]:       <target dev="tap35f2fe4e-cb"/>
Jan 20 14:48:22 compute-0 nova_compute[250018]:     </interface>
Jan 20 14:48:22 compute-0 nova_compute[250018]:     <serial type="pty">
Jan 20 14:48:22 compute-0 nova_compute[250018]:       <log file="/var/lib/nova/instances/fc5c09c4-3e90-4b31-8610-2e555b7e2406/console.log" append="off"/>
Jan 20 14:48:22 compute-0 nova_compute[250018]:     </serial>
Jan 20 14:48:22 compute-0 nova_compute[250018]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 14:48:22 compute-0 nova_compute[250018]:     <video>
Jan 20 14:48:22 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 14:48:22 compute-0 nova_compute[250018]:     </video>
Jan 20 14:48:22 compute-0 nova_compute[250018]:     <input type="tablet" bus="usb"/>
Jan 20 14:48:22 compute-0 nova_compute[250018]:     <rng model="virtio">
Jan 20 14:48:22 compute-0 nova_compute[250018]:       <backend model="random">/dev/urandom</backend>
Jan 20 14:48:22 compute-0 nova_compute[250018]:     </rng>
Jan 20 14:48:22 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root"/>
Jan 20 14:48:22 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:48:22 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:48:22 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:48:22 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:48:22 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:48:22 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:48:22 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:48:22 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:48:22 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:48:22 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:48:22 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:48:22 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:48:22 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:48:22 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:48:22 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:48:22 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:48:22 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:48:22 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:48:22 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:48:22 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:48:22 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:48:22 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:48:22 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:48:22 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:48:22 compute-0 nova_compute[250018]:     <controller type="usb" index="0"/>
Jan 20 14:48:22 compute-0 nova_compute[250018]:     <memballoon model="virtio">
Jan 20 14:48:22 compute-0 nova_compute[250018]:       <stats period="10"/>
Jan 20 14:48:22 compute-0 nova_compute[250018]:     </memballoon>
Jan 20 14:48:22 compute-0 nova_compute[250018]:   </devices>
Jan 20 14:48:22 compute-0 nova_compute[250018]: </domain>
Jan 20 14:48:22 compute-0 nova_compute[250018]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 14:48:22 compute-0 nova_compute[250018]: 2026-01-20 14:48:22.417 250022 DEBUG nova.compute.manager [None req-52200910-098c-45f4-bd21-1cdc3b8fa5ff a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: fc5c09c4-3e90-4b31-8610-2e555b7e2406] Preparing to wait for external event network-vif-plugged-35f2fe4e-cb8e-467e-82ec-c12e870ac8a3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 20 14:48:22 compute-0 nova_compute[250018]: 2026-01-20 14:48:22.417 250022 DEBUG oslo_concurrency.lockutils [None req-52200910-098c-45f4-bd21-1cdc3b8fa5ff a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Acquiring lock "fc5c09c4-3e90-4b31-8610-2e555b7e2406-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:48:22 compute-0 nova_compute[250018]: 2026-01-20 14:48:22.417 250022 DEBUG oslo_concurrency.lockutils [None req-52200910-098c-45f4-bd21-1cdc3b8fa5ff a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lock "fc5c09c4-3e90-4b31-8610-2e555b7e2406-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:48:22 compute-0 nova_compute[250018]: 2026-01-20 14:48:22.417 250022 DEBUG oslo_concurrency.lockutils [None req-52200910-098c-45f4-bd21-1cdc3b8fa5ff a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lock "fc5c09c4-3e90-4b31-8610-2e555b7e2406-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:48:22 compute-0 nova_compute[250018]: 2026-01-20 14:48:22.418 250022 DEBUG nova.virt.libvirt.vif [None req-52200910-098c-45f4-bd21-1cdc3b8fa5ff a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-20T14:47:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1122585928',display_name='tempest-ServerDiskConfigTestJSON-server-1122585928',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1122585928',id=102,image_ref='26699514-f465-4b50-98b7-36f2cfc6a308',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:47:49Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='acb30fbc0e3749e390d7f867060b5a2a',ramdisk_id='',reservation_id='r-o0jf2qbe',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='26699514-f465-4b50-98b7-36f2cfc6a308',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-1806346246',owner_user_name='tempest-ServerDiskConfigTestJSON-1806346246-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:48:18Z,user_data=None,user_id='a1bd93d04cc4468abe1d5c61f5144191',uuid=fc5c09c4-3e90-4b31-8610-2e555b7e2406,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "35f2fe4e-cb8e-467e-82ec-c12e870ac8a3", "address": "fa:16:3e:e7:a1:c2", "network": {"id": "3379e2b3-ffb2-4391-969b-c9dc51bfbe25", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1112843240-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acb30fbc0e3749e390d7f867060b5a2a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap35f2fe4e-cb", "ovs_interfaceid": "35f2fe4e-cb8e-467e-82ec-c12e870ac8a3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 14:48:22 compute-0 nova_compute[250018]: 2026-01-20 14:48:22.418 250022 DEBUG nova.network.os_vif_util [None req-52200910-098c-45f4-bd21-1cdc3b8fa5ff a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Converting VIF {"id": "35f2fe4e-cb8e-467e-82ec-c12e870ac8a3", "address": "fa:16:3e:e7:a1:c2", "network": {"id": "3379e2b3-ffb2-4391-969b-c9dc51bfbe25", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1112843240-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acb30fbc0e3749e390d7f867060b5a2a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap35f2fe4e-cb", "ovs_interfaceid": "35f2fe4e-cb8e-467e-82ec-c12e870ac8a3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:48:22 compute-0 nova_compute[250018]: 2026-01-20 14:48:22.419 250022 DEBUG nova.network.os_vif_util [None req-52200910-098c-45f4-bd21-1cdc3b8fa5ff a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e7:a1:c2,bridge_name='br-int',has_traffic_filtering=True,id=35f2fe4e-cb8e-467e-82ec-c12e870ac8a3,network=Network(3379e2b3-ffb2-4391-969b-c9dc51bfbe25),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap35f2fe4e-cb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:48:22 compute-0 nova_compute[250018]: 2026-01-20 14:48:22.419 250022 DEBUG os_vif [None req-52200910-098c-45f4-bd21-1cdc3b8fa5ff a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e7:a1:c2,bridge_name='br-int',has_traffic_filtering=True,id=35f2fe4e-cb8e-467e-82ec-c12e870ac8a3,network=Network(3379e2b3-ffb2-4391-969b-c9dc51bfbe25),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap35f2fe4e-cb') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 14:48:22 compute-0 nova_compute[250018]: 2026-01-20 14:48:22.420 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:22 compute-0 nova_compute[250018]: 2026-01-20 14:48:22.420 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:48:22 compute-0 nova_compute[250018]: 2026-01-20 14:48:22.420 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:48:22 compute-0 nova_compute[250018]: 2026-01-20 14:48:22.422 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:22 compute-0 nova_compute[250018]: 2026-01-20 14:48:22.423 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap35f2fe4e-cb, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:48:22 compute-0 nova_compute[250018]: 2026-01-20 14:48:22.423 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap35f2fe4e-cb, col_values=(('external_ids', {'iface-id': '35f2fe4e-cb8e-467e-82ec-c12e870ac8a3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e7:a1:c2', 'vm-uuid': 'fc5c09c4-3e90-4b31-8610-2e555b7e2406'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:48:22 compute-0 nova_compute[250018]: 2026-01-20 14:48:22.424 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:22 compute-0 NetworkManager[48960]: <info>  [1768920502.4258] manager: (tap35f2fe4e-cb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/170)
Jan 20 14:48:22 compute-0 nova_compute[250018]: 2026-01-20 14:48:22.427 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:48:22 compute-0 nova_compute[250018]: 2026-01-20 14:48:22.429 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:22 compute-0 nova_compute[250018]: 2026-01-20 14:48:22.430 250022 INFO os_vif [None req-52200910-098c-45f4-bd21-1cdc3b8fa5ff a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e7:a1:c2,bridge_name='br-int',has_traffic_filtering=True,id=35f2fe4e-cb8e-467e-82ec-c12e870ac8a3,network=Network(3379e2b3-ffb2-4391-969b-c9dc51bfbe25),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap35f2fe4e-cb')
Jan 20 14:48:22 compute-0 nova_compute[250018]: 2026-01-20 14:48:22.496 250022 DEBUG nova.virt.libvirt.driver [None req-52200910-098c-45f4-bd21-1cdc3b8fa5ff a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:48:22 compute-0 nova_compute[250018]: 2026-01-20 14:48:22.497 250022 DEBUG nova.virt.libvirt.driver [None req-52200910-098c-45f4-bd21-1cdc3b8fa5ff a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:48:22 compute-0 nova_compute[250018]: 2026-01-20 14:48:22.497 250022 DEBUG nova.virt.libvirt.driver [None req-52200910-098c-45f4-bd21-1cdc3b8fa5ff a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] No VIF found with MAC fa:16:3e:e7:a1:c2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 14:48:22 compute-0 nova_compute[250018]: 2026-01-20 14:48:22.497 250022 INFO nova.virt.libvirt.driver [None req-52200910-098c-45f4-bd21-1cdc3b8fa5ff a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: fc5c09c4-3e90-4b31-8610-2e555b7e2406] Using config drive
Jan 20 14:48:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:48:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:48:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:48:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:48:22 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:48:22 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:48:22 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:48:22.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:48:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:48:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:48:22 compute-0 nova_compute[250018]: 2026-01-20 14:48:22.520 250022 DEBUG nova.storage.rbd_utils [None req-52200910-098c-45f4-bd21-1cdc3b8fa5ff a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] rbd image fc5c09c4-3e90-4b31-8610-2e555b7e2406_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:48:22 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:48:22 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:48:22 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:48:22.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:48:22 compute-0 nova_compute[250018]: 2026-01-20 14:48:22.567 250022 DEBUG nova.objects.instance [None req-52200910-098c-45f4-bd21-1cdc3b8fa5ff a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lazy-loading 'ec2_ids' on Instance uuid fc5c09c4-3e90-4b31-8610-2e555b7e2406 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:48:22 compute-0 nova_compute[250018]: 2026-01-20 14:48:22.599 250022 DEBUG nova.objects.instance [None req-52200910-098c-45f4-bd21-1cdc3b8fa5ff a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lazy-loading 'keypairs' on Instance uuid fc5c09c4-3e90-4b31-8610-2e555b7e2406 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:48:22 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/1691197459' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:48:22 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/163141579' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:48:22 compute-0 nova_compute[250018]: 2026-01-20 14:48:22.984 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:23 compute-0 nova_compute[250018]: 2026-01-20 14:48:23.552 250022 INFO nova.virt.libvirt.driver [None req-52200910-098c-45f4-bd21-1cdc3b8fa5ff a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: fc5c09c4-3e90-4b31-8610-2e555b7e2406] Creating config drive at /var/lib/nova/instances/fc5c09c4-3e90-4b31-8610-2e555b7e2406/disk.config
Jan 20 14:48:23 compute-0 nova_compute[250018]: 2026-01-20 14:48:23.560 250022 DEBUG oslo_concurrency.processutils [None req-52200910-098c-45f4-bd21-1cdc3b8fa5ff a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/fc5c09c4-3e90-4b31-8610-2e555b7e2406/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8igtrizt execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:48:23 compute-0 ceph-mon[74360]: pgmap v1809: 321 pgs: 321 active+clean; 412 MiB data, 1005 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.6 MiB/s wr, 186 op/s
Jan 20 14:48:23 compute-0 nova_compute[250018]: 2026-01-20 14:48:23.696 250022 DEBUG oslo_concurrency.processutils [None req-52200910-098c-45f4-bd21-1cdc3b8fa5ff a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/fc5c09c4-3e90-4b31-8610-2e555b7e2406/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8igtrizt" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:48:23 compute-0 nova_compute[250018]: 2026-01-20 14:48:23.738 250022 DEBUG nova.storage.rbd_utils [None req-52200910-098c-45f4-bd21-1cdc3b8fa5ff a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] rbd image fc5c09c4-3e90-4b31-8610-2e555b7e2406_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:48:23 compute-0 nova_compute[250018]: 2026-01-20 14:48:23.741 250022 DEBUG oslo_concurrency.processutils [None req-52200910-098c-45f4-bd21-1cdc3b8fa5ff a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/fc5c09c4-3e90-4b31-8610-2e555b7e2406/disk.config fc5c09c4-3e90-4b31-8610-2e555b7e2406_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:48:23 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1810: 321 pgs: 321 active+clean; 414 MiB data, 999 MiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 3.9 MiB/s wr, 228 op/s
Jan 20 14:48:23 compute-0 nova_compute[250018]: 2026-01-20 14:48:23.897 250022 DEBUG oslo_concurrency.processutils [None req-52200910-098c-45f4-bd21-1cdc3b8fa5ff a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/fc5c09c4-3e90-4b31-8610-2e555b7e2406/disk.config fc5c09c4-3e90-4b31-8610-2e555b7e2406_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:48:23 compute-0 nova_compute[250018]: 2026-01-20 14:48:23.899 250022 INFO nova.virt.libvirt.driver [None req-52200910-098c-45f4-bd21-1cdc3b8fa5ff a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: fc5c09c4-3e90-4b31-8610-2e555b7e2406] Deleting local config drive /var/lib/nova/instances/fc5c09c4-3e90-4b31-8610-2e555b7e2406/disk.config because it was imported into RBD.
Jan 20 14:48:23 compute-0 kernel: tap35f2fe4e-cb: entered promiscuous mode
Jan 20 14:48:23 compute-0 ovn_controller[148666]: 2026-01-20T14:48:23Z|00335|binding|INFO|Claiming lport 35f2fe4e-cb8e-467e-82ec-c12e870ac8a3 for this chassis.
Jan 20 14:48:23 compute-0 ovn_controller[148666]: 2026-01-20T14:48:23Z|00336|binding|INFO|35f2fe4e-cb8e-467e-82ec-c12e870ac8a3: Claiming fa:16:3e:e7:a1:c2 10.100.0.8
Jan 20 14:48:23 compute-0 NetworkManager[48960]: <info>  [1768920503.9477] manager: (tap35f2fe4e-cb): new Tun device (/org/freedesktop/NetworkManager/Devices/171)
Jan 20 14:48:23 compute-0 nova_compute[250018]: 2026-01-20 14:48:23.947 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:23 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:48:23.959 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e7:a1:c2 10.100.0.8'], port_security=['fa:16:3e:e7:a1:c2 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'fc5c09c4-3e90-4b31-8610-2e555b7e2406', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3379e2b3-ffb2-4391-969b-c9dc51bfbe25', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'acb30fbc0e3749e390d7f867060b5a2a', 'neutron:revision_number': '5', 'neutron:security_group_ids': '19fab802-7db8-4c89-8f8e-8dcfc14d4627', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a0e287ba-f88b-46f5-bb7f-3cc2a74be88e, chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=35f2fe4e-cb8e-467e-82ec-c12e870ac8a3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:48:23 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:48:23.960 160071 INFO neutron.agent.ovn.metadata.agent [-] Port 35f2fe4e-cb8e-467e-82ec-c12e870ac8a3 in datapath 3379e2b3-ffb2-4391-969b-c9dc51bfbe25 bound to our chassis
Jan 20 14:48:23 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:48:23.962 160071 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3379e2b3-ffb2-4391-969b-c9dc51bfbe25
Jan 20 14:48:23 compute-0 ovn_controller[148666]: 2026-01-20T14:48:23Z|00337|binding|INFO|Setting lport 35f2fe4e-cb8e-467e-82ec-c12e870ac8a3 ovn-installed in OVS
Jan 20 14:48:23 compute-0 ovn_controller[148666]: 2026-01-20T14:48:23Z|00338|binding|INFO|Setting lport 35f2fe4e-cb8e-467e-82ec-c12e870ac8a3 up in Southbound
Jan 20 14:48:23 compute-0 nova_compute[250018]: 2026-01-20 14:48:23.966 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:23 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:48:23.973 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[fbecc84f-572b-41ef-9355-16b683babcf1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:48:23 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:48:23.974 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3379e2b3-f1 in ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 14:48:23 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:48:23.975 257604 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3379e2b3-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 14:48:23 compute-0 systemd-udevd[308410]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:48:23 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:48:23.975 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[737fccbd-a30d-4418-bf13-d05613ecd26e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:48:23 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:48:23.977 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[4356ddaa-b149-4855-a956-14a03c4b73d4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:48:23 compute-0 systemd-machined[216401]: New machine qemu-44-instance-00000066.
Jan 20 14:48:23 compute-0 systemd[1]: Started Virtual Machine qemu-44-instance-00000066.
Jan 20 14:48:23 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:48:23.987 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[b04dc371-ca0d-443b-9507-9b6d2ff87842]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:48:23 compute-0 NetworkManager[48960]: <info>  [1768920503.9894] device (tap35f2fe4e-cb): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 14:48:23 compute-0 NetworkManager[48960]: <info>  [1768920503.9904] device (tap35f2fe4e-cb): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 14:48:24 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:48:24.011 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[4af0760e-7612-4f4a-bcbe-4fea1d38ed88]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:48:24 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:48:24.040 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[5b2c5cec-304c-4b59-a48d-bb856e586232]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:48:24 compute-0 systemd-udevd[308413]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:48:24 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:48:24.046 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[ac2e88cf-b25f-4165-915e-d835e585e20b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:48:24 compute-0 NetworkManager[48960]: <info>  [1768920504.0485] manager: (tap3379e2b3-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/172)
Jan 20 14:48:24 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:48:24.079 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[a6fb3ef9-b1d9-482b-8e46-71c140df472a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:48:24 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:48:24.086 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[6ebc1094-e048-4af9-a36d-853208020365]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:48:24 compute-0 NetworkManager[48960]: <info>  [1768920504.1070] device (tap3379e2b3-f0): carrier: link connected
Jan 20 14:48:24 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:48:24.112 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[b41aa371-1d18-4208-bfa8-1bb72c48b13d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:48:24 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:48:24.128 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[891829a4-56c3-4718-b630-5af70151ad7f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3379e2b3-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f1:86:fe'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 110], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 644483, 'reachable_time': 22035, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 308442, 'error': None, 'target': 'ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:48:24 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:48:24.140 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[48ea6098-9665-41cc-9cb1-2ef9f8d43a39]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef1:86fe'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 644483, 'tstamp': 644483}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 308443, 'error': None, 'target': 'ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:48:24 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:48:24.158 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[082b75ad-9588-4ed6-b188-be65d48b8029]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3379e2b3-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f1:86:fe'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 110], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 644483, 'reachable_time': 22035, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 308444, 'error': None, 'target': 'ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:48:24 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:48:24.183 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[9e153d92-62bc-4736-82c1-85ee7ef228d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:48:24 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:48:24.242 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[9447ee6a-a717-4a94-a99f-dd21298f4f72]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:48:24 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:48:24.243 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3379e2b3-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:48:24 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:48:24.244 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:48:24 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:48:24.244 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3379e2b3-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:48:24 compute-0 kernel: tap3379e2b3-f0: entered promiscuous mode
Jan 20 14:48:24 compute-0 NetworkManager[48960]: <info>  [1768920504.2470] manager: (tap3379e2b3-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/173)
Jan 20 14:48:24 compute-0 nova_compute[250018]: 2026-01-20 14:48:24.245 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:24 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:48:24.250 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3379e2b3-f0, col_values=(('external_ids', {'iface-id': 'b32ddf23-a8dd-4e6d-a410-ccb24b214d35'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:48:24 compute-0 nova_compute[250018]: 2026-01-20 14:48:24.251 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:24 compute-0 ovn_controller[148666]: 2026-01-20T14:48:24Z|00339|binding|INFO|Releasing lport b32ddf23-a8dd-4e6d-a410-ccb24b214d35 from this chassis (sb_readonly=0)
Jan 20 14:48:24 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:48:24.252 160071 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3379e2b3-ffb2-4391-969b-c9dc51bfbe25.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3379e2b3-ffb2-4391-969b-c9dc51bfbe25.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 14:48:24 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:48:24.258 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[7ffdaa27-2914-4d9e-bdff-890c1125ba3a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:48:24 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:48:24.259 160071 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 14:48:24 compute-0 ovn_metadata_agent[160049]: global
Jan 20 14:48:24 compute-0 ovn_metadata_agent[160049]:     log         /dev/log local0 debug
Jan 20 14:48:24 compute-0 ovn_metadata_agent[160049]:     log-tag     haproxy-metadata-proxy-3379e2b3-ffb2-4391-969b-c9dc51bfbe25
Jan 20 14:48:24 compute-0 ovn_metadata_agent[160049]:     user        root
Jan 20 14:48:24 compute-0 ovn_metadata_agent[160049]:     group       root
Jan 20 14:48:24 compute-0 ovn_metadata_agent[160049]:     maxconn     1024
Jan 20 14:48:24 compute-0 ovn_metadata_agent[160049]:     pidfile     /var/lib/neutron/external/pids/3379e2b3-ffb2-4391-969b-c9dc51bfbe25.pid.haproxy
Jan 20 14:48:24 compute-0 ovn_metadata_agent[160049]:     daemon
Jan 20 14:48:24 compute-0 ovn_metadata_agent[160049]: 
Jan 20 14:48:24 compute-0 ovn_metadata_agent[160049]: defaults
Jan 20 14:48:24 compute-0 ovn_metadata_agent[160049]:     log global
Jan 20 14:48:24 compute-0 ovn_metadata_agent[160049]:     mode http
Jan 20 14:48:24 compute-0 ovn_metadata_agent[160049]:     option httplog
Jan 20 14:48:24 compute-0 ovn_metadata_agent[160049]:     option dontlognull
Jan 20 14:48:24 compute-0 ovn_metadata_agent[160049]:     option http-server-close
Jan 20 14:48:24 compute-0 ovn_metadata_agent[160049]:     option forwardfor
Jan 20 14:48:24 compute-0 ovn_metadata_agent[160049]:     retries                 3
Jan 20 14:48:24 compute-0 ovn_metadata_agent[160049]:     timeout http-request    30s
Jan 20 14:48:24 compute-0 ovn_metadata_agent[160049]:     timeout connect         30s
Jan 20 14:48:24 compute-0 ovn_metadata_agent[160049]:     timeout client          32s
Jan 20 14:48:24 compute-0 ovn_metadata_agent[160049]:     timeout server          32s
Jan 20 14:48:24 compute-0 ovn_metadata_agent[160049]:     timeout http-keep-alive 30s
Jan 20 14:48:24 compute-0 ovn_metadata_agent[160049]: 
Jan 20 14:48:24 compute-0 ovn_metadata_agent[160049]: 
Jan 20 14:48:24 compute-0 ovn_metadata_agent[160049]: listen listener
Jan 20 14:48:24 compute-0 ovn_metadata_agent[160049]:     bind 169.254.169.254:80
Jan 20 14:48:24 compute-0 ovn_metadata_agent[160049]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 14:48:24 compute-0 ovn_metadata_agent[160049]:     http-request add-header X-OVN-Network-ID 3379e2b3-ffb2-4391-969b-c9dc51bfbe25
Jan 20 14:48:24 compute-0 ovn_metadata_agent[160049]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 14:48:24 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:48:24.260 160071 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25', 'env', 'PROCESS_TAG=haproxy-3379e2b3-ffb2-4391-969b-c9dc51bfbe25', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3379e2b3-ffb2-4391-969b-c9dc51bfbe25.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 14:48:24 compute-0 nova_compute[250018]: 2026-01-20 14:48:24.267 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:24 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:48:24 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:48:24 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:48:24.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:48:24 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:48:24 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:48:24 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:48:24 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:48:24.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:48:24 compute-0 sshd-session[308449]: Invalid user test from 157.245.78.139 port 56816
Jan 20 14:48:24 compute-0 sshd-session[308449]: Connection closed by invalid user test 157.245.78.139 port 56816 [preauth]
Jan 20 14:48:24 compute-0 podman[308514]: 2026-01-20 14:48:24.649520737 +0000 UTC m=+0.057773546 container create dc7da4b75049bfa7ccfce9cf22c006735ae0276de6b4ba450525c475ab3897ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Jan 20 14:48:24 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/624077283' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:48:24 compute-0 systemd[1]: Started libpod-conmon-dc7da4b75049bfa7ccfce9cf22c006735ae0276de6b4ba450525c475ab3897ab.scope.
Jan 20 14:48:24 compute-0 nova_compute[250018]: 2026-01-20 14:48:24.705 250022 DEBUG nova.virt.libvirt.host [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Removed pending event for fc5c09c4-3e90-4b31-8610-2e555b7e2406 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Jan 20 14:48:24 compute-0 nova_compute[250018]: 2026-01-20 14:48:24.706 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768920504.704469, fc5c09c4-3e90-4b31-8610-2e555b7e2406 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:48:24 compute-0 nova_compute[250018]: 2026-01-20 14:48:24.707 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: fc5c09c4-3e90-4b31-8610-2e555b7e2406] VM Started (Lifecycle Event)
Jan 20 14:48:24 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:48:24 compute-0 podman[308514]: 2026-01-20 14:48:24.619676629 +0000 UTC m=+0.027929468 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 14:48:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbf0ca42f6cc44af01eecf7cbf38fbb5966cb9d478cdf51e2a2948ea8bb8b12a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 14:48:24 compute-0 podman[308514]: 2026-01-20 14:48:24.733803362 +0000 UTC m=+0.142056191 container init dc7da4b75049bfa7ccfce9cf22c006735ae0276de6b4ba450525c475ab3897ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 20 14:48:24 compute-0 podman[308514]: 2026-01-20 14:48:24.739693761 +0000 UTC m=+0.147946590 container start dc7da4b75049bfa7ccfce9cf22c006735ae0276de6b4ba450525c475ab3897ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 20 14:48:24 compute-0 nova_compute[250018]: 2026-01-20 14:48:24.741 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: fc5c09c4-3e90-4b31-8610-2e555b7e2406] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:48:24 compute-0 nova_compute[250018]: 2026-01-20 14:48:24.746 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768920504.7061064, fc5c09c4-3e90-4b31-8610-2e555b7e2406 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:48:24 compute-0 nova_compute[250018]: 2026-01-20 14:48:24.746 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: fc5c09c4-3e90-4b31-8610-2e555b7e2406] VM Paused (Lifecycle Event)
Jan 20 14:48:24 compute-0 neutron-haproxy-ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25[308535]: [NOTICE]   (308539) : New worker (308541) forked
Jan 20 14:48:24 compute-0 neutron-haproxy-ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25[308535]: [NOTICE]   (308539) : Loading success.
Jan 20 14:48:24 compute-0 nova_compute[250018]: 2026-01-20 14:48:24.772 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: fc5c09c4-3e90-4b31-8610-2e555b7e2406] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:48:24 compute-0 nova_compute[250018]: 2026-01-20 14:48:24.776 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: fc5c09c4-3e90-4b31-8610-2e555b7e2406] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:48:24 compute-0 nova_compute[250018]: 2026-01-20 14:48:24.807 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: fc5c09c4-3e90-4b31-8610-2e555b7e2406] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Jan 20 14:48:25 compute-0 ceph-mon[74360]: pgmap v1810: 321 pgs: 321 active+clean; 414 MiB data, 999 MiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 3.9 MiB/s wr, 228 op/s
Jan 20 14:48:25 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1811: 321 pgs: 321 active+clean; 453 MiB data, 1014 MiB used, 20 GiB / 21 GiB avail; 4.8 MiB/s rd, 5.7 MiB/s wr, 303 op/s
Jan 20 14:48:26 compute-0 nova_compute[250018]: 2026-01-20 14:48:26.472 250022 DEBUG nova.compute.manager [req-8c5677f5-3edd-49ec-969c-154846c6a5b4 req-906b6189-8152-4b93-bc2e-ac6b8d0ff1d9 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: fc5c09c4-3e90-4b31-8610-2e555b7e2406] Received event network-vif-plugged-35f2fe4e-cb8e-467e-82ec-c12e870ac8a3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:48:26 compute-0 nova_compute[250018]: 2026-01-20 14:48:26.472 250022 DEBUG oslo_concurrency.lockutils [req-8c5677f5-3edd-49ec-969c-154846c6a5b4 req-906b6189-8152-4b93-bc2e-ac6b8d0ff1d9 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "fc5c09c4-3e90-4b31-8610-2e555b7e2406-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:48:26 compute-0 nova_compute[250018]: 2026-01-20 14:48:26.473 250022 DEBUG oslo_concurrency.lockutils [req-8c5677f5-3edd-49ec-969c-154846c6a5b4 req-906b6189-8152-4b93-bc2e-ac6b8d0ff1d9 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "fc5c09c4-3e90-4b31-8610-2e555b7e2406-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:48:26 compute-0 nova_compute[250018]: 2026-01-20 14:48:26.474 250022 DEBUG oslo_concurrency.lockutils [req-8c5677f5-3edd-49ec-969c-154846c6a5b4 req-906b6189-8152-4b93-bc2e-ac6b8d0ff1d9 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "fc5c09c4-3e90-4b31-8610-2e555b7e2406-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:48:26 compute-0 nova_compute[250018]: 2026-01-20 14:48:26.474 250022 DEBUG nova.compute.manager [req-8c5677f5-3edd-49ec-969c-154846c6a5b4 req-906b6189-8152-4b93-bc2e-ac6b8d0ff1d9 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: fc5c09c4-3e90-4b31-8610-2e555b7e2406] Processing event network-vif-plugged-35f2fe4e-cb8e-467e-82ec-c12e870ac8a3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 20 14:48:26 compute-0 nova_compute[250018]: 2026-01-20 14:48:26.475 250022 DEBUG nova.compute.manager [None req-52200910-098c-45f4-bd21-1cdc3b8fa5ff a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: fc5c09c4-3e90-4b31-8610-2e555b7e2406] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 14:48:26 compute-0 nova_compute[250018]: 2026-01-20 14:48:26.481 250022 DEBUG nova.virt.libvirt.driver [None req-52200910-098c-45f4-bd21-1cdc3b8fa5ff a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: fc5c09c4-3e90-4b31-8610-2e555b7e2406] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 14:48:26 compute-0 nova_compute[250018]: 2026-01-20 14:48:26.482 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768920506.4814465, fc5c09c4-3e90-4b31-8610-2e555b7e2406 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:48:26 compute-0 nova_compute[250018]: 2026-01-20 14:48:26.483 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: fc5c09c4-3e90-4b31-8610-2e555b7e2406] VM Resumed (Lifecycle Event)
Jan 20 14:48:26 compute-0 nova_compute[250018]: 2026-01-20 14:48:26.491 250022 INFO nova.virt.libvirt.driver [-] [instance: fc5c09c4-3e90-4b31-8610-2e555b7e2406] Instance spawned successfully.
Jan 20 14:48:26 compute-0 nova_compute[250018]: 2026-01-20 14:48:26.492 250022 DEBUG nova.virt.libvirt.driver [None req-52200910-098c-45f4-bd21-1cdc3b8fa5ff a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: fc5c09c4-3e90-4b31-8610-2e555b7e2406] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 14:48:26 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:48:26 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:48:26 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:48:26.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:48:26 compute-0 nova_compute[250018]: 2026-01-20 14:48:26.514 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: fc5c09c4-3e90-4b31-8610-2e555b7e2406] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:48:26 compute-0 nova_compute[250018]: 2026-01-20 14:48:26.525 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: fc5c09c4-3e90-4b31-8610-2e555b7e2406] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:48:26 compute-0 nova_compute[250018]: 2026-01-20 14:48:26.531 250022 DEBUG nova.virt.libvirt.driver [None req-52200910-098c-45f4-bd21-1cdc3b8fa5ff a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: fc5c09c4-3e90-4b31-8610-2e555b7e2406] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:48:26 compute-0 nova_compute[250018]: 2026-01-20 14:48:26.531 250022 DEBUG nova.virt.libvirt.driver [None req-52200910-098c-45f4-bd21-1cdc3b8fa5ff a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: fc5c09c4-3e90-4b31-8610-2e555b7e2406] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:48:26 compute-0 nova_compute[250018]: 2026-01-20 14:48:26.532 250022 DEBUG nova.virt.libvirt.driver [None req-52200910-098c-45f4-bd21-1cdc3b8fa5ff a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: fc5c09c4-3e90-4b31-8610-2e555b7e2406] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:48:26 compute-0 nova_compute[250018]: 2026-01-20 14:48:26.533 250022 DEBUG nova.virt.libvirt.driver [None req-52200910-098c-45f4-bd21-1cdc3b8fa5ff a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: fc5c09c4-3e90-4b31-8610-2e555b7e2406] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:48:26 compute-0 nova_compute[250018]: 2026-01-20 14:48:26.534 250022 DEBUG nova.virt.libvirt.driver [None req-52200910-098c-45f4-bd21-1cdc3b8fa5ff a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: fc5c09c4-3e90-4b31-8610-2e555b7e2406] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:48:26 compute-0 nova_compute[250018]: 2026-01-20 14:48:26.534 250022 DEBUG nova.virt.libvirt.driver [None req-52200910-098c-45f4-bd21-1cdc3b8fa5ff a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: fc5c09c4-3e90-4b31-8610-2e555b7e2406] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:48:26 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:48:26 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:48:26 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:48:26.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:48:26 compute-0 nova_compute[250018]: 2026-01-20 14:48:26.560 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: fc5c09c4-3e90-4b31-8610-2e555b7e2406] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Jan 20 14:48:26 compute-0 nova_compute[250018]: 2026-01-20 14:48:26.638 250022 DEBUG nova.compute.manager [None req-52200910-098c-45f4-bd21-1cdc3b8fa5ff a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: fc5c09c4-3e90-4b31-8610-2e555b7e2406] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:48:26 compute-0 nova_compute[250018]: 2026-01-20 14:48:26.737 250022 DEBUG oslo_concurrency.lockutils [None req-52200910-098c-45f4-bd21-1cdc3b8fa5ff a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:48:26 compute-0 nova_compute[250018]: 2026-01-20 14:48:26.738 250022 DEBUG oslo_concurrency.lockutils [None req-52200910-098c-45f4-bd21-1cdc3b8fa5ff a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:48:26 compute-0 nova_compute[250018]: 2026-01-20 14:48:26.738 250022 DEBUG nova.objects.instance [None req-52200910-098c-45f4-bd21-1cdc3b8fa5ff a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: fc5c09c4-3e90-4b31-8610-2e555b7e2406] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Jan 20 14:48:26 compute-0 nova_compute[250018]: 2026-01-20 14:48:26.869 250022 DEBUG oslo_concurrency.lockutils [None req-52200910-098c-45f4-bd21-1cdc3b8fa5ff a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.131s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:48:27 compute-0 nova_compute[250018]: 2026-01-20 14:48:27.425 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:27 compute-0 podman[308552]: 2026-01-20 14:48:27.499094047 +0000 UTC m=+0.092075366 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:48:27 compute-0 podman[308551]: 2026-01-20 14:48:27.500205286 +0000 UTC m=+0.093653808 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Jan 20 14:48:27 compute-0 ceph-mon[74360]: pgmap v1811: 321 pgs: 321 active+clean; 453 MiB data, 1014 MiB used, 20 GiB / 21 GiB avail; 4.8 MiB/s rd, 5.7 MiB/s wr, 303 op/s
Jan 20 14:48:27 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1812: 321 pgs: 321 active+clean; 437 MiB data, 1016 MiB used, 20 GiB / 21 GiB avail; 5.4 MiB/s rd, 4.1 MiB/s wr, 297 op/s
Jan 20 14:48:27 compute-0 nova_compute[250018]: 2026-01-20 14:48:27.985 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:28 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:48:28 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:48:28 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:48:28.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:48:28 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:48:28 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:48:28 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:48:28.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:48:28 compute-0 nova_compute[250018]: 2026-01-20 14:48:28.595 250022 DEBUG nova.compute.manager [req-823f3315-22af-4516-9976-7dac2f9346e6 req-b3ca4ded-6d36-4a64-8970-efdcf33d60ac 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: fc5c09c4-3e90-4b31-8610-2e555b7e2406] Received event network-vif-plugged-35f2fe4e-cb8e-467e-82ec-c12e870ac8a3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:48:28 compute-0 nova_compute[250018]: 2026-01-20 14:48:28.595 250022 DEBUG oslo_concurrency.lockutils [req-823f3315-22af-4516-9976-7dac2f9346e6 req-b3ca4ded-6d36-4a64-8970-efdcf33d60ac 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "fc5c09c4-3e90-4b31-8610-2e555b7e2406-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:48:28 compute-0 nova_compute[250018]: 2026-01-20 14:48:28.596 250022 DEBUG oslo_concurrency.lockutils [req-823f3315-22af-4516-9976-7dac2f9346e6 req-b3ca4ded-6d36-4a64-8970-efdcf33d60ac 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "fc5c09c4-3e90-4b31-8610-2e555b7e2406-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:48:28 compute-0 nova_compute[250018]: 2026-01-20 14:48:28.596 250022 DEBUG oslo_concurrency.lockutils [req-823f3315-22af-4516-9976-7dac2f9346e6 req-b3ca4ded-6d36-4a64-8970-efdcf33d60ac 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "fc5c09c4-3e90-4b31-8610-2e555b7e2406-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:48:28 compute-0 nova_compute[250018]: 2026-01-20 14:48:28.614 250022 DEBUG nova.compute.manager [req-823f3315-22af-4516-9976-7dac2f9346e6 req-b3ca4ded-6d36-4a64-8970-efdcf33d60ac 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: fc5c09c4-3e90-4b31-8610-2e555b7e2406] No waiting events found dispatching network-vif-plugged-35f2fe4e-cb8e-467e-82ec-c12e870ac8a3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:48:28 compute-0 nova_compute[250018]: 2026-01-20 14:48:28.614 250022 WARNING nova.compute.manager [req-823f3315-22af-4516-9976-7dac2f9346e6 req-b3ca4ded-6d36-4a64-8970-efdcf33d60ac 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: fc5c09c4-3e90-4b31-8610-2e555b7e2406] Received unexpected event network-vif-plugged-35f2fe4e-cb8e-467e-82ec-c12e870ac8a3 for instance with vm_state active and task_state None.
Jan 20 14:48:28 compute-0 sudo[308596]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:48:28 compute-0 sudo[308596]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:48:28 compute-0 sudo[308596]: pam_unix(sudo:session): session closed for user root
Jan 20 14:48:28 compute-0 sudo[308621]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:48:28 compute-0 sudo[308621]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:48:28 compute-0 sudo[308621]: pam_unix(sudo:session): session closed for user root
Jan 20 14:48:28 compute-0 sudo[308635]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:48:28 compute-0 sudo[308635]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:48:28 compute-0 sudo[308635]: pam_unix(sudo:session): session closed for user root
Jan 20 14:48:28 compute-0 sudo[308670]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:48:28 compute-0 sudo[308670]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:48:28 compute-0 sudo[308670]: pam_unix(sudo:session): session closed for user root
Jan 20 14:48:28 compute-0 sudo[308679]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:48:28 compute-0 sudo[308679]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:48:28 compute-0 sudo[308679]: pam_unix(sudo:session): session closed for user root
Jan 20 14:48:28 compute-0 sudo[308721]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Jan 20 14:48:29 compute-0 sudo[308721]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:48:29 compute-0 nova_compute[250018]: 2026-01-20 14:48:29.191 250022 DEBUG oslo_concurrency.lockutils [None req-527ac950-e024-471f-86f4-b2d4f0ebd84e a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Acquiring lock "fc5c09c4-3e90-4b31-8610-2e555b7e2406" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:48:29 compute-0 nova_compute[250018]: 2026-01-20 14:48:29.195 250022 DEBUG oslo_concurrency.lockutils [None req-527ac950-e024-471f-86f4-b2d4f0ebd84e a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lock "fc5c09c4-3e90-4b31-8610-2e555b7e2406" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:48:29 compute-0 nova_compute[250018]: 2026-01-20 14:48:29.195 250022 DEBUG oslo_concurrency.lockutils [None req-527ac950-e024-471f-86f4-b2d4f0ebd84e a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Acquiring lock "fc5c09c4-3e90-4b31-8610-2e555b7e2406-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:48:29 compute-0 nova_compute[250018]: 2026-01-20 14:48:29.196 250022 DEBUG oslo_concurrency.lockutils [None req-527ac950-e024-471f-86f4-b2d4f0ebd84e a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lock "fc5c09c4-3e90-4b31-8610-2e555b7e2406-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:48:29 compute-0 nova_compute[250018]: 2026-01-20 14:48:29.196 250022 DEBUG oslo_concurrency.lockutils [None req-527ac950-e024-471f-86f4-b2d4f0ebd84e a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lock "fc5c09c4-3e90-4b31-8610-2e555b7e2406-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:48:29 compute-0 nova_compute[250018]: 2026-01-20 14:48:29.197 250022 INFO nova.compute.manager [None req-527ac950-e024-471f-86f4-b2d4f0ebd84e a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: fc5c09c4-3e90-4b31-8610-2e555b7e2406] Terminating instance
Jan 20 14:48:29 compute-0 nova_compute[250018]: 2026-01-20 14:48:29.199 250022 DEBUG nova.compute.manager [None req-527ac950-e024-471f-86f4-b2d4f0ebd84e a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: fc5c09c4-3e90-4b31-8610-2e555b7e2406] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 14:48:29 compute-0 kernel: tap35f2fe4e-cb (unregistering): left promiscuous mode
Jan 20 14:48:29 compute-0 NetworkManager[48960]: <info>  [1768920509.2379] device (tap35f2fe4e-cb): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 14:48:29 compute-0 nova_compute[250018]: 2026-01-20 14:48:29.245 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:29 compute-0 ovn_controller[148666]: 2026-01-20T14:48:29Z|00340|binding|INFO|Releasing lport 35f2fe4e-cb8e-467e-82ec-c12e870ac8a3 from this chassis (sb_readonly=0)
Jan 20 14:48:29 compute-0 ovn_controller[148666]: 2026-01-20T14:48:29Z|00341|binding|INFO|Setting lport 35f2fe4e-cb8e-467e-82ec-c12e870ac8a3 down in Southbound
Jan 20 14:48:29 compute-0 ovn_controller[148666]: 2026-01-20T14:48:29Z|00342|binding|INFO|Removing iface tap35f2fe4e-cb ovn-installed in OVS
Jan 20 14:48:29 compute-0 sudo[308721]: pam_unix(sudo:session): session closed for user root
Jan 20 14:48:29 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 14:48:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:48:29.266 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e7:a1:c2 10.100.0.8'], port_security=['fa:16:3e:e7:a1:c2 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'fc5c09c4-3e90-4b31-8610-2e555b7e2406', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3379e2b3-ffb2-4391-969b-c9dc51bfbe25', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'acb30fbc0e3749e390d7f867060b5a2a', 'neutron:revision_number': '6', 'neutron:security_group_ids': '19fab802-7db8-4c89-8f8e-8dcfc14d4627', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a0e287ba-f88b-46f5-bb7f-3cc2a74be88e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=35f2fe4e-cb8e-467e-82ec-c12e870ac8a3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:48:29 compute-0 nova_compute[250018]: 2026-01-20 14:48:29.267 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:48:29.268 160071 INFO neutron.agent.ovn.metadata.agent [-] Port 35f2fe4e-cb8e-467e-82ec-c12e870ac8a3 in datapath 3379e2b3-ffb2-4391-969b-c9dc51bfbe25 unbound from our chassis
Jan 20 14:48:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:48:29.269 160071 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3379e2b3-ffb2-4391-969b-c9dc51bfbe25, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 14:48:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:48:29.271 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[0baca09b-7b54-41b5-adeb-272d0d15c841]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:48:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:48:29.271 160071 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25 namespace which is not needed anymore
Jan 20 14:48:29 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:48:29 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 14:48:29 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:48:29 compute-0 systemd[1]: machine-qemu\x2d44\x2dinstance\x2d00000066.scope: Deactivated successfully.
Jan 20 14:48:29 compute-0 systemd[1]: machine-qemu\x2d44\x2dinstance\x2d00000066.scope: Consumed 3.552s CPU time.
Jan 20 14:48:29 compute-0 systemd-machined[216401]: Machine qemu-44-instance-00000066 terminated.
Jan 20 14:48:29 compute-0 sudo[308776]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:48:29 compute-0 sudo[308776]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:48:29 compute-0 sudo[308776]: pam_unix(sudo:session): session closed for user root
Jan 20 14:48:29 compute-0 neutron-haproxy-ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25[308535]: [NOTICE]   (308539) : haproxy version is 2.8.14-c23fe91
Jan 20 14:48:29 compute-0 neutron-haproxy-ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25[308535]: [NOTICE]   (308539) : path to executable is /usr/sbin/haproxy
Jan 20 14:48:29 compute-0 neutron-haproxy-ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25[308535]: [WARNING]  (308539) : Exiting Master process...
Jan 20 14:48:29 compute-0 neutron-haproxy-ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25[308535]: [WARNING]  (308539) : Exiting Master process...
Jan 20 14:48:29 compute-0 neutron-haproxy-ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25[308535]: [ALERT]    (308539) : Current worker (308541) exited with code 143 (Terminated)
Jan 20 14:48:29 compute-0 neutron-haproxy-ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25[308535]: [WARNING]  (308539) : All workers exited. Exiting... (0)
Jan 20 14:48:29 compute-0 sudo[308814]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:48:29 compute-0 systemd[1]: libpod-dc7da4b75049bfa7ccfce9cf22c006735ae0276de6b4ba450525c475ab3897ab.scope: Deactivated successfully.
Jan 20 14:48:29 compute-0 sudo[308814]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:48:29 compute-0 sudo[308814]: pam_unix(sudo:session): session closed for user root
Jan 20 14:48:29 compute-0 podman[308813]: 2026-01-20 14:48:29.403669238 +0000 UTC m=+0.042449991 container died dc7da4b75049bfa7ccfce9cf22c006735ae0276de6b4ba450525c475ab3897ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3)
Jan 20 14:48:29 compute-0 nova_compute[250018]: 2026-01-20 14:48:29.417 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:29 compute-0 nova_compute[250018]: 2026-01-20 14:48:29.424 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:29 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-dc7da4b75049bfa7ccfce9cf22c006735ae0276de6b4ba450525c475ab3897ab-userdata-shm.mount: Deactivated successfully.
Jan 20 14:48:29 compute-0 nova_compute[250018]: 2026-01-20 14:48:29.438 250022 INFO nova.virt.libvirt.driver [-] [instance: fc5c09c4-3e90-4b31-8610-2e555b7e2406] Instance destroyed successfully.
Jan 20 14:48:29 compute-0 nova_compute[250018]: 2026-01-20 14:48:29.439 250022 DEBUG nova.objects.instance [None req-527ac950-e024-471f-86f4-b2d4f0ebd84e a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lazy-loading 'resources' on Instance uuid fc5c09c4-3e90-4b31-8610-2e555b7e2406 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:48:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-dbf0ca42f6cc44af01eecf7cbf38fbb5966cb9d478cdf51e2a2948ea8bb8b12a-merged.mount: Deactivated successfully.
Jan 20 14:48:29 compute-0 podman[308813]: 2026-01-20 14:48:29.452194743 +0000 UTC m=+0.090975486 container cleanup dc7da4b75049bfa7ccfce9cf22c006735ae0276de6b4ba450525c475ab3897ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202)
Jan 20 14:48:29 compute-0 nova_compute[250018]: 2026-01-20 14:48:29.451 250022 DEBUG nova.virt.libvirt.vif [None req-527ac950-e024-471f-86f4-b2d4f0ebd84e a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-20T14:47:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1122585928',display_name='tempest-ServerDiskConfigTestJSON-server-1122585928',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1122585928',id=102,image_ref='26699514-f465-4b50-98b7-36f2cfc6a308',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:48:26Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='acb30fbc0e3749e390d7f867060b5a2a',ramdisk_id='',reservation_id='r-o0jf2qbe',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='26699514-f465-4b50-98b7-36f2cfc6a308',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_v
if_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-1806346246',owner_user_name='tempest-ServerDiskConfigTestJSON-1806346246-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T14:48:26Z,user_data=None,user_id='a1bd93d04cc4468abe1d5c61f5144191',uuid=fc5c09c4-3e90-4b31-8610-2e555b7e2406,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "35f2fe4e-cb8e-467e-82ec-c12e870ac8a3", "address": "fa:16:3e:e7:a1:c2", "network": {"id": "3379e2b3-ffb2-4391-969b-c9dc51bfbe25", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1112843240-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acb30fbc0e3749e390d7f867060b5a2a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap35f2fe4e-cb", "ovs_interfaceid": "35f2fe4e-cb8e-467e-82ec-c12e870ac8a3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 14:48:29 compute-0 nova_compute[250018]: 2026-01-20 14:48:29.451 250022 DEBUG nova.network.os_vif_util [None req-527ac950-e024-471f-86f4-b2d4f0ebd84e a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Converting VIF {"id": "35f2fe4e-cb8e-467e-82ec-c12e870ac8a3", "address": "fa:16:3e:e7:a1:c2", "network": {"id": "3379e2b3-ffb2-4391-969b-c9dc51bfbe25", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1112843240-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acb30fbc0e3749e390d7f867060b5a2a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap35f2fe4e-cb", "ovs_interfaceid": "35f2fe4e-cb8e-467e-82ec-c12e870ac8a3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:48:29 compute-0 nova_compute[250018]: 2026-01-20 14:48:29.452 250022 DEBUG nova.network.os_vif_util [None req-527ac950-e024-471f-86f4-b2d4f0ebd84e a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e7:a1:c2,bridge_name='br-int',has_traffic_filtering=True,id=35f2fe4e-cb8e-467e-82ec-c12e870ac8a3,network=Network(3379e2b3-ffb2-4391-969b-c9dc51bfbe25),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap35f2fe4e-cb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:48:29 compute-0 nova_compute[250018]: 2026-01-20 14:48:29.452 250022 DEBUG os_vif [None req-527ac950-e024-471f-86f4-b2d4f0ebd84e a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e7:a1:c2,bridge_name='br-int',has_traffic_filtering=True,id=35f2fe4e-cb8e-467e-82ec-c12e870ac8a3,network=Network(3379e2b3-ffb2-4391-969b-c9dc51bfbe25),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap35f2fe4e-cb') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 14:48:29 compute-0 nova_compute[250018]: 2026-01-20 14:48:29.454 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:29 compute-0 nova_compute[250018]: 2026-01-20 14:48:29.454 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap35f2fe4e-cb, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:48:29 compute-0 nova_compute[250018]: 2026-01-20 14:48:29.455 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:29 compute-0 nova_compute[250018]: 2026-01-20 14:48:29.458 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:29 compute-0 nova_compute[250018]: 2026-01-20 14:48:29.461 250022 INFO os_vif [None req-527ac950-e024-471f-86f4-b2d4f0ebd84e a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e7:a1:c2,bridge_name='br-int',has_traffic_filtering=True,id=35f2fe4e-cb8e-467e-82ec-c12e870ac8a3,network=Network(3379e2b3-ffb2-4391-969b-c9dc51bfbe25),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap35f2fe4e-cb')
Jan 20 14:48:29 compute-0 systemd[1]: libpod-conmon-dc7da4b75049bfa7ccfce9cf22c006735ae0276de6b4ba450525c475ab3897ab.scope: Deactivated successfully.
Jan 20 14:48:29 compute-0 sudo[308860]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:48:29 compute-0 sudo[308860]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:48:29 compute-0 sudo[308860]: pam_unix(sudo:session): session closed for user root
Jan 20 14:48:29 compute-0 podman[308899]: 2026-01-20 14:48:29.519958379 +0000 UTC m=+0.040125119 container remove dc7da4b75049bfa7ccfce9cf22c006735ae0276de6b4ba450525c475ab3897ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2)
Jan 20 14:48:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:48:29.525 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[d2820687-e77c-4bd6-9c98-cc99c997d318]: (4, ('Tue Jan 20 02:48:29 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25 (dc7da4b75049bfa7ccfce9cf22c006735ae0276de6b4ba450525c475ab3897ab)\ndc7da4b75049bfa7ccfce9cf22c006735ae0276de6b4ba450525c475ab3897ab\nTue Jan 20 02:48:29 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25 (dc7da4b75049bfa7ccfce9cf22c006735ae0276de6b4ba450525c475ab3897ab)\ndc7da4b75049bfa7ccfce9cf22c006735ae0276de6b4ba450525c475ab3897ab\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:48:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:48:29.527 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[d43315ce-341f-486a-91e3-4bade67e5198]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:48:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:48:29.528 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3379e2b3-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:48:29 compute-0 kernel: tap3379e2b3-f0: left promiscuous mode
Jan 20 14:48:29 compute-0 nova_compute[250018]: 2026-01-20 14:48:29.529 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:29 compute-0 nova_compute[250018]: 2026-01-20 14:48:29.531 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:48:29.534 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[f8581fec-52f4-40b2-bf6d-42478f8ce951]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:48:29 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:48:29 compute-0 nova_compute[250018]: 2026-01-20 14:48:29.548 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:48:29.550 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[79373a90-35cc-4bee-bd35-a5345780ab6f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:48:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:48:29.551 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[2687bf73-2220-435f-9790-8fe622364724]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:48:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:48:29.566 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[76b492c2-634f-4b7f-8b1b-7c0ac5e75257]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 644475, 'reachable_time': 39743, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 308934, 'error': None, 'target': 'ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:48:29 compute-0 systemd[1]: run-netns-ovnmeta\x2d3379e2b3\x2dffb2\x2d4391\x2d969b\x2dc9dc51bfbe25.mount: Deactivated successfully.
Jan 20 14:48:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:48:29.570 160517 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 14:48:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:48:29.570 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[5ff2a939-e413-4079-9d05-c87fee455141]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:48:29 compute-0 sudo[308935]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 14:48:29 compute-0 sudo[308935]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:48:29 compute-0 ceph-mon[74360]: pgmap v1812: 321 pgs: 321 active+clean; 437 MiB data, 1016 MiB used, 20 GiB / 21 GiB avail; 5.4 MiB/s rd, 4.1 MiB/s wr, 297 op/s
Jan 20 14:48:29 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/679145408' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:48:29 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:48:29 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:48:29 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1813: 321 pgs: 321 active+clean; 422 MiB data, 1006 MiB used, 20 GiB / 21 GiB avail; 5.4 MiB/s rd, 3.6 MiB/s wr, 290 op/s
Jan 20 14:48:29 compute-0 nova_compute[250018]: 2026-01-20 14:48:29.861 250022 INFO nova.virt.libvirt.driver [None req-527ac950-e024-471f-86f4-b2d4f0ebd84e a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: fc5c09c4-3e90-4b31-8610-2e555b7e2406] Deleting instance files /var/lib/nova/instances/fc5c09c4-3e90-4b31-8610-2e555b7e2406_del
Jan 20 14:48:29 compute-0 nova_compute[250018]: 2026-01-20 14:48:29.862 250022 INFO nova.virt.libvirt.driver [None req-527ac950-e024-471f-86f4-b2d4f0ebd84e a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: fc5c09c4-3e90-4b31-8610-2e555b7e2406] Deletion of /var/lib/nova/instances/fc5c09c4-3e90-4b31-8610-2e555b7e2406_del complete
Jan 20 14:48:29 compute-0 nova_compute[250018]: 2026-01-20 14:48:29.913 250022 INFO nova.compute.manager [None req-527ac950-e024-471f-86f4-b2d4f0ebd84e a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: fc5c09c4-3e90-4b31-8610-2e555b7e2406] Took 0.71 seconds to destroy the instance on the hypervisor.
Jan 20 14:48:29 compute-0 nova_compute[250018]: 2026-01-20 14:48:29.913 250022 DEBUG oslo.service.loopingcall [None req-527ac950-e024-471f-86f4-b2d4f0ebd84e a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 14:48:29 compute-0 nova_compute[250018]: 2026-01-20 14:48:29.914 250022 DEBUG nova.compute.manager [-] [instance: fc5c09c4-3e90-4b31-8610-2e555b7e2406] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 14:48:29 compute-0 nova_compute[250018]: 2026-01-20 14:48:29.914 250022 DEBUG nova.network.neutron [-] [instance: fc5c09c4-3e90-4b31-8610-2e555b7e2406] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 14:48:30 compute-0 nova_compute[250018]: 2026-01-20 14:48:30.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:48:30 compute-0 sudo[308935]: pam_unix(sudo:session): session closed for user root
Jan 20 14:48:30 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 14:48:30 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:48:30 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 20 14:48:30 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:48:30 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 20 14:48:30 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:48:30 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 4f9b2dd0-bfda-41a7-a537-d49b89e16682 does not exist
Jan 20 14:48:30 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 61c3d833-db66-41c0-84f6-d8ddd7c54cc7 does not exist
Jan 20 14:48:30 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev fa68bcdb-8606-498e-88fe-b54c529db842 does not exist
Jan 20 14:48:30 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 20 14:48:30 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:48:30 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 20 14:48:30 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:48:30 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 14:48:30 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:48:30 compute-0 sudo[308993]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:48:30 compute-0 sudo[308993]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:48:30 compute-0 sudo[308993]: pam_unix(sudo:session): session closed for user root
Jan 20 14:48:30 compute-0 sudo[309018]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:48:30 compute-0 sudo[309018]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:48:30 compute-0 sudo[309018]: pam_unix(sudo:session): session closed for user root
Jan 20 14:48:30 compute-0 sudo[309043]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:48:30 compute-0 sudo[309043]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:48:30 compute-0 sudo[309043]: pam_unix(sudo:session): session closed for user root
Jan 20 14:48:30 compute-0 sudo[309068]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 14:48:30 compute-0 sudo[309068]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:48:30 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:48:30 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:48:30 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:48:30.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:48:30 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:48:30 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:48:30 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:48:30.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:48:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:48:30.760 160071 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:48:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:48:30.761 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:48:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:48:30.761 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:48:30 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:48:30 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:48:30 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:48:30 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:48:30 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:48:30 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:48:30 compute-0 nova_compute[250018]: 2026-01-20 14:48:30.787 250022 DEBUG nova.compute.manager [req-065f45c6-4a2b-437d-830d-d18481bc55f1 req-2a350fe7-3736-4611-8869-b7cdb1801d14 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: fc5c09c4-3e90-4b31-8610-2e555b7e2406] Received event network-vif-unplugged-35f2fe4e-cb8e-467e-82ec-c12e870ac8a3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:48:30 compute-0 nova_compute[250018]: 2026-01-20 14:48:30.787 250022 DEBUG oslo_concurrency.lockutils [req-065f45c6-4a2b-437d-830d-d18481bc55f1 req-2a350fe7-3736-4611-8869-b7cdb1801d14 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "fc5c09c4-3e90-4b31-8610-2e555b7e2406-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:48:30 compute-0 nova_compute[250018]: 2026-01-20 14:48:30.788 250022 DEBUG oslo_concurrency.lockutils [req-065f45c6-4a2b-437d-830d-d18481bc55f1 req-2a350fe7-3736-4611-8869-b7cdb1801d14 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "fc5c09c4-3e90-4b31-8610-2e555b7e2406-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:48:30 compute-0 nova_compute[250018]: 2026-01-20 14:48:30.788 250022 DEBUG oslo_concurrency.lockutils [req-065f45c6-4a2b-437d-830d-d18481bc55f1 req-2a350fe7-3736-4611-8869-b7cdb1801d14 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "fc5c09c4-3e90-4b31-8610-2e555b7e2406-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:48:30 compute-0 nova_compute[250018]: 2026-01-20 14:48:30.788 250022 DEBUG nova.compute.manager [req-065f45c6-4a2b-437d-830d-d18481bc55f1 req-2a350fe7-3736-4611-8869-b7cdb1801d14 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: fc5c09c4-3e90-4b31-8610-2e555b7e2406] No waiting events found dispatching network-vif-unplugged-35f2fe4e-cb8e-467e-82ec-c12e870ac8a3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:48:30 compute-0 nova_compute[250018]: 2026-01-20 14:48:30.788 250022 DEBUG nova.compute.manager [req-065f45c6-4a2b-437d-830d-d18481bc55f1 req-2a350fe7-3736-4611-8869-b7cdb1801d14 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: fc5c09c4-3e90-4b31-8610-2e555b7e2406] Received event network-vif-unplugged-35f2fe4e-cb8e-467e-82ec-c12e870ac8a3 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 20 14:48:30 compute-0 nova_compute[250018]: 2026-01-20 14:48:30.789 250022 DEBUG nova.compute.manager [req-065f45c6-4a2b-437d-830d-d18481bc55f1 req-2a350fe7-3736-4611-8869-b7cdb1801d14 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: fc5c09c4-3e90-4b31-8610-2e555b7e2406] Received event network-vif-plugged-35f2fe4e-cb8e-467e-82ec-c12e870ac8a3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:48:30 compute-0 nova_compute[250018]: 2026-01-20 14:48:30.789 250022 DEBUG oslo_concurrency.lockutils [req-065f45c6-4a2b-437d-830d-d18481bc55f1 req-2a350fe7-3736-4611-8869-b7cdb1801d14 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "fc5c09c4-3e90-4b31-8610-2e555b7e2406-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:48:30 compute-0 nova_compute[250018]: 2026-01-20 14:48:30.789 250022 DEBUG oslo_concurrency.lockutils [req-065f45c6-4a2b-437d-830d-d18481bc55f1 req-2a350fe7-3736-4611-8869-b7cdb1801d14 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "fc5c09c4-3e90-4b31-8610-2e555b7e2406-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:48:30 compute-0 nova_compute[250018]: 2026-01-20 14:48:30.790 250022 DEBUG oslo_concurrency.lockutils [req-065f45c6-4a2b-437d-830d-d18481bc55f1 req-2a350fe7-3736-4611-8869-b7cdb1801d14 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "fc5c09c4-3e90-4b31-8610-2e555b7e2406-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:48:30 compute-0 nova_compute[250018]: 2026-01-20 14:48:30.791 250022 DEBUG nova.compute.manager [req-065f45c6-4a2b-437d-830d-d18481bc55f1 req-2a350fe7-3736-4611-8869-b7cdb1801d14 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: fc5c09c4-3e90-4b31-8610-2e555b7e2406] No waiting events found dispatching network-vif-plugged-35f2fe4e-cb8e-467e-82ec-c12e870ac8a3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:48:30 compute-0 nova_compute[250018]: 2026-01-20 14:48:30.792 250022 WARNING nova.compute.manager [req-065f45c6-4a2b-437d-830d-d18481bc55f1 req-2a350fe7-3736-4611-8869-b7cdb1801d14 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: fc5c09c4-3e90-4b31-8610-2e555b7e2406] Received unexpected event network-vif-plugged-35f2fe4e-cb8e-467e-82ec-c12e870ac8a3 for instance with vm_state active and task_state deleting.
Jan 20 14:48:30 compute-0 podman[309134]: 2026-01-20 14:48:30.806107043 +0000 UTC m=+0.052290049 container create 1605d23de9a161ad4b8a48c382152a3649455f6f0f539c61fdd84d3e9ce4393d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_elion, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 20 14:48:30 compute-0 systemd[1]: Started libpod-conmon-1605d23de9a161ad4b8a48c382152a3649455f6f0f539c61fdd84d3e9ce4393d.scope.
Jan 20 14:48:30 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:48:30 compute-0 podman[309134]: 2026-01-20 14:48:30.78905836 +0000 UTC m=+0.035241386 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:48:30 compute-0 podman[309134]: 2026-01-20 14:48:30.894981831 +0000 UTC m=+0.141164867 container init 1605d23de9a161ad4b8a48c382152a3649455f6f0f539c61fdd84d3e9ce4393d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_elion, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 14:48:30 compute-0 podman[309134]: 2026-01-20 14:48:30.9030723 +0000 UTC m=+0.149255306 container start 1605d23de9a161ad4b8a48c382152a3649455f6f0f539c61fdd84d3e9ce4393d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_elion, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:48:30 compute-0 podman[309134]: 2026-01-20 14:48:30.90710876 +0000 UTC m=+0.153291786 container attach 1605d23de9a161ad4b8a48c382152a3649455f6f0f539c61fdd84d3e9ce4393d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_elion, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 20 14:48:30 compute-0 systemd[1]: libpod-1605d23de9a161ad4b8a48c382152a3649455f6f0f539c61fdd84d3e9ce4393d.scope: Deactivated successfully.
Jan 20 14:48:30 compute-0 nifty_elion[309150]: 167 167
Jan 20 14:48:30 compute-0 conmon[309150]: conmon 1605d23de9a161ad4b8a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1605d23de9a161ad4b8a48c382152a3649455f6f0f539c61fdd84d3e9ce4393d.scope/container/memory.events
Jan 20 14:48:30 compute-0 podman[309134]: 2026-01-20 14:48:30.912784463 +0000 UTC m=+0.158967479 container died 1605d23de9a161ad4b8a48c382152a3649455f6f0f539c61fdd84d3e9ce4393d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_elion, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 20 14:48:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-3f1336e642d09441014acc573492456547495c07a4b8afb6e3964bcd1c100cb7-merged.mount: Deactivated successfully.
Jan 20 14:48:30 compute-0 podman[309134]: 2026-01-20 14:48:30.956503958 +0000 UTC m=+0.202686964 container remove 1605d23de9a161ad4b8a48c382152a3649455f6f0f539c61fdd84d3e9ce4393d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_elion, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:48:30 compute-0 systemd[1]: libpod-conmon-1605d23de9a161ad4b8a48c382152a3649455f6f0f539c61fdd84d3e9ce4393d.scope: Deactivated successfully.
Jan 20 14:48:31 compute-0 nova_compute[250018]: 2026-01-20 14:48:31.037 250022 DEBUG nova.network.neutron [-] [instance: fc5c09c4-3e90-4b31-8610-2e555b7e2406] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:48:31 compute-0 nova_compute[250018]: 2026-01-20 14:48:31.044 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:48:31 compute-0 nova_compute[250018]: 2026-01-20 14:48:31.065 250022 INFO nova.compute.manager [-] [instance: fc5c09c4-3e90-4b31-8610-2e555b7e2406] Took 1.15 seconds to deallocate network for instance.
Jan 20 14:48:31 compute-0 podman[309174]: 2026-01-20 14:48:31.134619674 +0000 UTC m=+0.043588712 container create f228317fa0f584039d5a2e7748ab448892b25dea856b2fae0b31388ddb055803 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_lalande, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 14:48:31 compute-0 nova_compute[250018]: 2026-01-20 14:48:31.157 250022 DEBUG oslo_concurrency.lockutils [None req-527ac950-e024-471f-86f4-b2d4f0ebd84e a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:48:31 compute-0 nova_compute[250018]: 2026-01-20 14:48:31.158 250022 DEBUG oslo_concurrency.lockutils [None req-527ac950-e024-471f-86f4-b2d4f0ebd84e a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:48:31 compute-0 systemd[1]: Started libpod-conmon-f228317fa0f584039d5a2e7748ab448892b25dea856b2fae0b31388ddb055803.scope.
Jan 20 14:48:31 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:48:31 compute-0 podman[309174]: 2026-01-20 14:48:31.114935881 +0000 UTC m=+0.023904959 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:48:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3457dc11894967cd0bd0ce56f0f3a54cc9453f28366259965e3710409f2c2075/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:48:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3457dc11894967cd0bd0ce56f0f3a54cc9453f28366259965e3710409f2c2075/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:48:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3457dc11894967cd0bd0ce56f0f3a54cc9453f28366259965e3710409f2c2075/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:48:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3457dc11894967cd0bd0ce56f0f3a54cc9453f28366259965e3710409f2c2075/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:48:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3457dc11894967cd0bd0ce56f0f3a54cc9453f28366259965e3710409f2c2075/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 14:48:31 compute-0 podman[309174]: 2026-01-20 14:48:31.229037664 +0000 UTC m=+0.138006722 container init f228317fa0f584039d5a2e7748ab448892b25dea856b2fae0b31388ddb055803 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_lalande, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:48:31 compute-0 podman[309174]: 2026-01-20 14:48:31.239169808 +0000 UTC m=+0.148138846 container start f228317fa0f584039d5a2e7748ab448892b25dea856b2fae0b31388ddb055803 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_lalande, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True)
Jan 20 14:48:31 compute-0 podman[309174]: 2026-01-20 14:48:31.254139164 +0000 UTC m=+0.163108302 container attach f228317fa0f584039d5a2e7748ab448892b25dea856b2fae0b31388ddb055803 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_lalande, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef)
Jan 20 14:48:31 compute-0 nova_compute[250018]: 2026-01-20 14:48:31.410 250022 DEBUG nova.scheduler.client.report [None req-527ac950-e024-471f-86f4-b2d4f0ebd84e a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Refreshing inventories for resource provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 20 14:48:31 compute-0 nova_compute[250018]: 2026-01-20 14:48:31.448 250022 DEBUG nova.scheduler.client.report [None req-527ac950-e024-471f-86f4-b2d4f0ebd84e a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Updating ProviderTree inventory for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 20 14:48:31 compute-0 nova_compute[250018]: 2026-01-20 14:48:31.448 250022 DEBUG nova.compute.provider_tree [None req-527ac950-e024-471f-86f4-b2d4f0ebd84e a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Updating inventory in ProviderTree for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 20 14:48:31 compute-0 nova_compute[250018]: 2026-01-20 14:48:31.481 250022 DEBUG nova.scheduler.client.report [None req-527ac950-e024-471f-86f4-b2d4f0ebd84e a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Refreshing aggregate associations for resource provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 20 14:48:31 compute-0 nova_compute[250018]: 2026-01-20 14:48:31.521 250022 DEBUG nova.scheduler.client.report [None req-527ac950-e024-471f-86f4-b2d4f0ebd84e a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Refreshing trait associations for resource provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed, traits: COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_TRUSTED_CERTS,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NODE,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_RESCUE_BFV,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SSE2,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VOLUME_EXTEND,COMPUTE_ACCELERATORS,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_SATA,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE41,HW_CPU_X86_MMX,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_IDE,COMPUTE_DEVICE_TAGGING _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 20 14:48:31 compute-0 nova_compute[250018]: 2026-01-20 14:48:31.605 250022 DEBUG oslo_concurrency.processutils [None req-527ac950-e024-471f-86f4-b2d4f0ebd84e a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:48:31 compute-0 ceph-mon[74360]: pgmap v1813: 321 pgs: 321 active+clean; 422 MiB data, 1006 MiB used, 20 GiB / 21 GiB avail; 5.4 MiB/s rd, 3.6 MiB/s wr, 290 op/s
Jan 20 14:48:31 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1814: 321 pgs: 321 active+clean; 388 MiB data, 1000 MiB used, 20 GiB / 21 GiB avail; 5.9 MiB/s rd, 3.6 MiB/s wr, 332 op/s
Jan 20 14:48:32 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:48:32 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/971129878' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:48:32 compute-0 nova_compute[250018]: 2026-01-20 14:48:32.030 250022 DEBUG oslo_concurrency.processutils [None req-527ac950-e024-471f-86f4-b2d4f0ebd84e a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:48:32 compute-0 nova_compute[250018]: 2026-01-20 14:48:32.036 250022 DEBUG nova.compute.provider_tree [None req-527ac950-e024-471f-86f4-b2d4f0ebd84e a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:48:32 compute-0 nova_compute[250018]: 2026-01-20 14:48:32.051 250022 DEBUG nova.scheduler.client.report [None req-527ac950-e024-471f-86f4-b2d4f0ebd84e a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:48:32 compute-0 mystifying_lalande[309191]: --> passed data devices: 0 physical, 1 LVM
Jan 20 14:48:32 compute-0 mystifying_lalande[309191]: --> relative data size: 1.0
Jan 20 14:48:32 compute-0 mystifying_lalande[309191]: --> All data devices are unavailable
Jan 20 14:48:32 compute-0 nova_compute[250018]: 2026-01-20 14:48:32.072 250022 DEBUG oslo_concurrency.lockutils [None req-527ac950-e024-471f-86f4-b2d4f0ebd84e a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.914s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:48:32 compute-0 systemd[1]: libpod-f228317fa0f584039d5a2e7748ab448892b25dea856b2fae0b31388ddb055803.scope: Deactivated successfully.
Jan 20 14:48:32 compute-0 podman[309174]: 2026-01-20 14:48:32.082289075 +0000 UTC m=+0.991258133 container died f228317fa0f584039d5a2e7748ab448892b25dea856b2fae0b31388ddb055803 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_lalande, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 20 14:48:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-3457dc11894967cd0bd0ce56f0f3a54cc9453f28366259965e3710409f2c2075-merged.mount: Deactivated successfully.
Jan 20 14:48:32 compute-0 nova_compute[250018]: 2026-01-20 14:48:32.115 250022 INFO nova.scheduler.client.report [None req-527ac950-e024-471f-86f4-b2d4f0ebd84e a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Deleted allocations for instance fc5c09c4-3e90-4b31-8610-2e555b7e2406
Jan 20 14:48:32 compute-0 podman[309174]: 2026-01-20 14:48:32.137971375 +0000 UTC m=+1.046940413 container remove f228317fa0f584039d5a2e7748ab448892b25dea856b2fae0b31388ddb055803 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_lalande, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:48:32 compute-0 systemd[1]: libpod-conmon-f228317fa0f584039d5a2e7748ab448892b25dea856b2fae0b31388ddb055803.scope: Deactivated successfully.
Jan 20 14:48:32 compute-0 sudo[309068]: pam_unix(sudo:session): session closed for user root
Jan 20 14:48:32 compute-0 nova_compute[250018]: 2026-01-20 14:48:32.188 250022 DEBUG oslo_concurrency.lockutils [None req-527ac950-e024-471f-86f4-b2d4f0ebd84e a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lock "fc5c09c4-3e90-4b31-8610-2e555b7e2406" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.994s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:48:32 compute-0 sudo[309244]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:48:32 compute-0 sudo[309244]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:48:32 compute-0 sudo[309244]: pam_unix(sudo:session): session closed for user root
Jan 20 14:48:32 compute-0 sudo[309269]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:48:32 compute-0 sudo[309269]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:48:32 compute-0 sudo[309269]: pam_unix(sudo:session): session closed for user root
Jan 20 14:48:32 compute-0 sudo[309294]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:48:32 compute-0 sudo[309294]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:48:32 compute-0 sudo[309294]: pam_unix(sudo:session): session closed for user root
Jan 20 14:48:32 compute-0 sudo[309319]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- lvm list --format json
Jan 20 14:48:32 compute-0 sudo[309319]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:48:32 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:48:32 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:48:32 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:48:32.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:48:32 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:48:32 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:48:32 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:48:32.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:48:32 compute-0 ceph-mon[74360]: pgmap v1814: 321 pgs: 321 active+clean; 388 MiB data, 1000 MiB used, 20 GiB / 21 GiB avail; 5.9 MiB/s rd, 3.6 MiB/s wr, 332 op/s
Jan 20 14:48:32 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/971129878' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:48:32 compute-0 podman[309384]: 2026-01-20 14:48:32.889039947 +0000 UTC m=+0.067415138 container create 1b9b2bb41804acdcb3de252fb1c0c9a9b72c8c377902b4513647cafaf1f0e1f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_wing, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:48:32 compute-0 nova_compute[250018]: 2026-01-20 14:48:32.899 250022 DEBUG oslo_concurrency.lockutils [None req-dda63d3b-2d38-44fc-956e-7ef4e6119b15 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Acquiring lock "f55cf124-b1d8-47e0-80c6-8b9dd2b3f743" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:48:32 compute-0 nova_compute[250018]: 2026-01-20 14:48:32.900 250022 DEBUG oslo_concurrency.lockutils [None req-dda63d3b-2d38-44fc-956e-7ef4e6119b15 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lock "f55cf124-b1d8-47e0-80c6-8b9dd2b3f743" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:48:32 compute-0 nova_compute[250018]: 2026-01-20 14:48:32.915 250022 DEBUG nova.compute.manager [req-f399417f-39ec-4112-9a50-a6e1766638d1 req-1f027a23-323e-4422-87aa-a436ef2d0139 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: fc5c09c4-3e90-4b31-8610-2e555b7e2406] Received event network-vif-deleted-35f2fe4e-cb8e-467e-82ec-c12e870ac8a3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:48:32 compute-0 nova_compute[250018]: 2026-01-20 14:48:32.920 250022 DEBUG nova.compute.manager [None req-dda63d3b-2d38-44fc-956e-7ef4e6119b15 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: f55cf124-b1d8-47e0-80c6-8b9dd2b3f743] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 14:48:32 compute-0 systemd[1]: Started libpod-conmon-1b9b2bb41804acdcb3de252fb1c0c9a9b72c8c377902b4513647cafaf1f0e1f6.scope.
Jan 20 14:48:32 compute-0 podman[309384]: 2026-01-20 14:48:32.860523894 +0000 UTC m=+0.038899155 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:48:32 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:48:32 compute-0 nova_compute[250018]: 2026-01-20 14:48:32.987 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:32 compute-0 podman[309384]: 2026-01-20 14:48:32.988979806 +0000 UTC m=+0.167355007 container init 1b9b2bb41804acdcb3de252fb1c0c9a9b72c8c377902b4513647cafaf1f0e1f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_wing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 20 14:48:32 compute-0 podman[309384]: 2026-01-20 14:48:32.996629112 +0000 UTC m=+0.175004313 container start 1b9b2bb41804acdcb3de252fb1c0c9a9b72c8c377902b4513647cafaf1f0e1f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_wing, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True)
Jan 20 14:48:33 compute-0 podman[309384]: 2026-01-20 14:48:33.000020414 +0000 UTC m=+0.178395615 container attach 1b9b2bb41804acdcb3de252fb1c0c9a9b72c8c377902b4513647cafaf1f0e1f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_wing, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 20 14:48:33 compute-0 modest_wing[309400]: 167 167
Jan 20 14:48:33 compute-0 systemd[1]: libpod-1b9b2bb41804acdcb3de252fb1c0c9a9b72c8c377902b4513647cafaf1f0e1f6.scope: Deactivated successfully.
Jan 20 14:48:33 compute-0 podman[309384]: 2026-01-20 14:48:33.003114119 +0000 UTC m=+0.181489310 container died 1b9b2bb41804acdcb3de252fb1c0c9a9b72c8c377902b4513647cafaf1f0e1f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_wing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 20 14:48:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-34b14755f90e26bea8667b5f60d02df3c1e81de0f332123298f356bf35e447e3-merged.mount: Deactivated successfully.
Jan 20 14:48:33 compute-0 podman[309384]: 2026-01-20 14:48:33.036643747 +0000 UTC m=+0.215018938 container remove 1b9b2bb41804acdcb3de252fb1c0c9a9b72c8c377902b4513647cafaf1f0e1f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_wing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 14:48:33 compute-0 systemd[1]: libpod-conmon-1b9b2bb41804acdcb3de252fb1c0c9a9b72c8c377902b4513647cafaf1f0e1f6.scope: Deactivated successfully.
Jan 20 14:48:33 compute-0 nova_compute[250018]: 2026-01-20 14:48:33.063 250022 DEBUG oslo_concurrency.lockutils [None req-dda63d3b-2d38-44fc-956e-7ef4e6119b15 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:48:33 compute-0 nova_compute[250018]: 2026-01-20 14:48:33.065 250022 DEBUG oslo_concurrency.lockutils [None req-dda63d3b-2d38-44fc-956e-7ef4e6119b15 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:48:33 compute-0 nova_compute[250018]: 2026-01-20 14:48:33.075 250022 DEBUG nova.virt.hardware [None req-dda63d3b-2d38-44fc-956e-7ef4e6119b15 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 14:48:33 compute-0 nova_compute[250018]: 2026-01-20 14:48:33.076 250022 INFO nova.compute.claims [None req-dda63d3b-2d38-44fc-956e-7ef4e6119b15 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: f55cf124-b1d8-47e0-80c6-8b9dd2b3f743] Claim successful on node compute-0.ctlplane.example.com
Jan 20 14:48:33 compute-0 podman[309422]: 2026-01-20 14:48:33.228827065 +0000 UTC m=+0.048101974 container create eae3b0f8ba54e50c4bd43821a0c3182f65b33d04332c37d19a22e4a978752969 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_germain, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 20 14:48:33 compute-0 nova_compute[250018]: 2026-01-20 14:48:33.236 250022 DEBUG oslo_concurrency.processutils [None req-dda63d3b-2d38-44fc-956e-7ef4e6119b15 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:48:33 compute-0 systemd[1]: Started libpod-conmon-eae3b0f8ba54e50c4bd43821a0c3182f65b33d04332c37d19a22e4a978752969.scope.
Jan 20 14:48:33 compute-0 podman[309422]: 2026-01-20 14:48:33.206082749 +0000 UTC m=+0.025357698 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:48:33 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:48:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3eaf9a7d99ca55ad712056e4e64d7faf51beb57f42c8cda29f3607606b603a7e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:48:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3eaf9a7d99ca55ad712056e4e64d7faf51beb57f42c8cda29f3607606b603a7e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:48:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3eaf9a7d99ca55ad712056e4e64d7faf51beb57f42c8cda29f3607606b603a7e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:48:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3eaf9a7d99ca55ad712056e4e64d7faf51beb57f42c8cda29f3607606b603a7e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:48:33 compute-0 podman[309422]: 2026-01-20 14:48:33.321293471 +0000 UTC m=+0.140568380 container init eae3b0f8ba54e50c4bd43821a0c3182f65b33d04332c37d19a22e4a978752969 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_germain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 20 14:48:33 compute-0 podman[309422]: 2026-01-20 14:48:33.335116756 +0000 UTC m=+0.154391665 container start eae3b0f8ba54e50c4bd43821a0c3182f65b33d04332c37d19a22e4a978752969 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_germain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:48:33 compute-0 podman[309422]: 2026-01-20 14:48:33.339124344 +0000 UTC m=+0.158399253 container attach eae3b0f8ba54e50c4bd43821a0c3182f65b33d04332c37d19a22e4a978752969 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_germain, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True)
Jan 20 14:48:33 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:48:33 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4144053837' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:48:33 compute-0 nova_compute[250018]: 2026-01-20 14:48:33.712 250022 DEBUG oslo_concurrency.processutils [None req-dda63d3b-2d38-44fc-956e-7ef4e6119b15 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:48:33 compute-0 nova_compute[250018]: 2026-01-20 14:48:33.719 250022 DEBUG nova.compute.provider_tree [None req-dda63d3b-2d38-44fc-956e-7ef4e6119b15 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:48:33 compute-0 nova_compute[250018]: 2026-01-20 14:48:33.792 250022 DEBUG nova.scheduler.client.report [None req-dda63d3b-2d38-44fc-956e-7ef4e6119b15 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:48:33 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1815: 321 pgs: 321 active+clean; 366 MiB data, 985 MiB used, 20 GiB / 21 GiB avail; 6.1 MiB/s rd, 3.1 MiB/s wr, 319 op/s
Jan 20 14:48:33 compute-0 nova_compute[250018]: 2026-01-20 14:48:33.825 250022 DEBUG oslo_concurrency.lockutils [None req-dda63d3b-2d38-44fc-956e-7ef4e6119b15 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.761s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:48:33 compute-0 nova_compute[250018]: 2026-01-20 14:48:33.826 250022 DEBUG nova.compute.manager [None req-dda63d3b-2d38-44fc-956e-7ef4e6119b15 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: f55cf124-b1d8-47e0-80c6-8b9dd2b3f743] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 14:48:33 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/4144053837' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:48:33 compute-0 nova_compute[250018]: 2026-01-20 14:48:33.872 250022 DEBUG nova.compute.manager [None req-dda63d3b-2d38-44fc-956e-7ef4e6119b15 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: f55cf124-b1d8-47e0-80c6-8b9dd2b3f743] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 20 14:48:33 compute-0 nova_compute[250018]: 2026-01-20 14:48:33.872 250022 DEBUG nova.network.neutron [None req-dda63d3b-2d38-44fc-956e-7ef4e6119b15 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: f55cf124-b1d8-47e0-80c6-8b9dd2b3f743] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 14:48:33 compute-0 nova_compute[250018]: 2026-01-20 14:48:33.896 250022 INFO nova.virt.libvirt.driver [None req-dda63d3b-2d38-44fc-956e-7ef4e6119b15 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: f55cf124-b1d8-47e0-80c6-8b9dd2b3f743] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 14:48:33 compute-0 nova_compute[250018]: 2026-01-20 14:48:33.915 250022 DEBUG nova.compute.manager [None req-dda63d3b-2d38-44fc-956e-7ef4e6119b15 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: f55cf124-b1d8-47e0-80c6-8b9dd2b3f743] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 14:48:34 compute-0 nova_compute[250018]: 2026-01-20 14:48:34.011 250022 DEBUG nova.compute.manager [None req-dda63d3b-2d38-44fc-956e-7ef4e6119b15 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: f55cf124-b1d8-47e0-80c6-8b9dd2b3f743] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 14:48:34 compute-0 nova_compute[250018]: 2026-01-20 14:48:34.012 250022 DEBUG nova.virt.libvirt.driver [None req-dda63d3b-2d38-44fc-956e-7ef4e6119b15 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: f55cf124-b1d8-47e0-80c6-8b9dd2b3f743] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 14:48:34 compute-0 nova_compute[250018]: 2026-01-20 14:48:34.013 250022 INFO nova.virt.libvirt.driver [None req-dda63d3b-2d38-44fc-956e-7ef4e6119b15 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: f55cf124-b1d8-47e0-80c6-8b9dd2b3f743] Creating image(s)
Jan 20 14:48:34 compute-0 nova_compute[250018]: 2026-01-20 14:48:34.041 250022 DEBUG nova.storage.rbd_utils [None req-dda63d3b-2d38-44fc-956e-7ef4e6119b15 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] rbd image f55cf124-b1d8-47e0-80c6-8b9dd2b3f743_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:48:34 compute-0 nova_compute[250018]: 2026-01-20 14:48:34.069 250022 DEBUG nova.storage.rbd_utils [None req-dda63d3b-2d38-44fc-956e-7ef4e6119b15 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] rbd image f55cf124-b1d8-47e0-80c6-8b9dd2b3f743_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:48:34 compute-0 brave_germain[309439]: {
Jan 20 14:48:34 compute-0 brave_germain[309439]:     "0": [
Jan 20 14:48:34 compute-0 brave_germain[309439]:         {
Jan 20 14:48:34 compute-0 brave_germain[309439]:             "devices": [
Jan 20 14:48:34 compute-0 brave_germain[309439]:                 "/dev/loop3"
Jan 20 14:48:34 compute-0 brave_germain[309439]:             ],
Jan 20 14:48:34 compute-0 brave_germain[309439]:             "lv_name": "ceph_lv0",
Jan 20 14:48:34 compute-0 brave_germain[309439]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:48:34 compute-0 brave_germain[309439]:             "lv_size": "7511998464",
Jan 20 14:48:34 compute-0 brave_germain[309439]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e399cf45-e6b6-5393-99f1-75c601d3f188,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afc3740a-ccee-46f8-b343-22648cc89376,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 20 14:48:34 compute-0 brave_germain[309439]:             "lv_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 14:48:34 compute-0 brave_germain[309439]:             "name": "ceph_lv0",
Jan 20 14:48:34 compute-0 brave_germain[309439]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:48:34 compute-0 brave_germain[309439]:             "tags": {
Jan 20 14:48:34 compute-0 brave_germain[309439]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:48:34 compute-0 brave_germain[309439]:                 "ceph.block_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 14:48:34 compute-0 brave_germain[309439]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 14:48:34 compute-0 brave_germain[309439]:                 "ceph.cluster_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 14:48:34 compute-0 brave_germain[309439]:                 "ceph.cluster_name": "ceph",
Jan 20 14:48:34 compute-0 brave_germain[309439]:                 "ceph.crush_device_class": "",
Jan 20 14:48:34 compute-0 brave_germain[309439]:                 "ceph.encrypted": "0",
Jan 20 14:48:34 compute-0 brave_germain[309439]:                 "ceph.osd_fsid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 14:48:34 compute-0 brave_germain[309439]:                 "ceph.osd_id": "0",
Jan 20 14:48:34 compute-0 brave_germain[309439]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 14:48:34 compute-0 brave_germain[309439]:                 "ceph.type": "block",
Jan 20 14:48:34 compute-0 brave_germain[309439]:                 "ceph.vdo": "0"
Jan 20 14:48:34 compute-0 brave_germain[309439]:             },
Jan 20 14:48:34 compute-0 brave_germain[309439]:             "type": "block",
Jan 20 14:48:34 compute-0 brave_germain[309439]:             "vg_name": "ceph_vg0"
Jan 20 14:48:34 compute-0 brave_germain[309439]:         }
Jan 20 14:48:34 compute-0 brave_germain[309439]:     ]
Jan 20 14:48:34 compute-0 brave_germain[309439]: }
Jan 20 14:48:34 compute-0 nova_compute[250018]: 2026-01-20 14:48:34.099 250022 DEBUG nova.storage.rbd_utils [None req-dda63d3b-2d38-44fc-956e-7ef4e6119b15 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] rbd image f55cf124-b1d8-47e0-80c6-8b9dd2b3f743_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:48:34 compute-0 nova_compute[250018]: 2026-01-20 14:48:34.104 250022 DEBUG oslo_concurrency.processutils [None req-dda63d3b-2d38-44fc-956e-7ef4e6119b15 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:48:34 compute-0 systemd[1]: libpod-eae3b0f8ba54e50c4bd43821a0c3182f65b33d04332c37d19a22e4a978752969.scope: Deactivated successfully.
Jan 20 14:48:34 compute-0 podman[309422]: 2026-01-20 14:48:34.114491115 +0000 UTC m=+0.933766014 container died eae3b0f8ba54e50c4bd43821a0c3182f65b33d04332c37d19a22e4a978752969 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_germain, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:48:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-3eaf9a7d99ca55ad712056e4e64d7faf51beb57f42c8cda29f3607606b603a7e-merged.mount: Deactivated successfully.
Jan 20 14:48:34 compute-0 podman[309422]: 2026-01-20 14:48:34.174305666 +0000 UTC m=+0.993580575 container remove eae3b0f8ba54e50c4bd43821a0c3182f65b33d04332c37d19a22e4a978752969 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_germain, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:48:34 compute-0 systemd[1]: libpod-conmon-eae3b0f8ba54e50c4bd43821a0c3182f65b33d04332c37d19a22e4a978752969.scope: Deactivated successfully.
Jan 20 14:48:34 compute-0 nova_compute[250018]: 2026-01-20 14:48:34.188 250022 DEBUG oslo_concurrency.processutils [None req-dda63d3b-2d38-44fc-956e-7ef4e6119b15 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:48:34 compute-0 nova_compute[250018]: 2026-01-20 14:48:34.189 250022 DEBUG oslo_concurrency.lockutils [None req-dda63d3b-2d38-44fc-956e-7ef4e6119b15 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Acquiring lock "82d5c1918fd7c974214c7a48c1793a7a82560462" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:48:34 compute-0 nova_compute[250018]: 2026-01-20 14:48:34.189 250022 DEBUG oslo_concurrency.lockutils [None req-dda63d3b-2d38-44fc-956e-7ef4e6119b15 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:48:34 compute-0 nova_compute[250018]: 2026-01-20 14:48:34.190 250022 DEBUG oslo_concurrency.lockutils [None req-dda63d3b-2d38-44fc-956e-7ef4e6119b15 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:48:34 compute-0 sudo[309319]: pam_unix(sudo:session): session closed for user root
Jan 20 14:48:34 compute-0 nova_compute[250018]: 2026-01-20 14:48:34.224 250022 DEBUG nova.storage.rbd_utils [None req-dda63d3b-2d38-44fc-956e-7ef4e6119b15 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] rbd image f55cf124-b1d8-47e0-80c6-8b9dd2b3f743_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:48:34 compute-0 nova_compute[250018]: 2026-01-20 14:48:34.228 250022 DEBUG oslo_concurrency.processutils [None req-dda63d3b-2d38-44fc-956e-7ef4e6119b15 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 f55cf124-b1d8-47e0-80c6-8b9dd2b3f743_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:48:34 compute-0 nova_compute[250018]: 2026-01-20 14:48:34.266 250022 DEBUG nova.policy [None req-dda63d3b-2d38-44fc-956e-7ef4e6119b15 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a1bd93d04cc4468abe1d5c61f5144191', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'acb30fbc0e3749e390d7f867060b5a2a', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 20 14:48:34 compute-0 sudo[309554]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:48:34 compute-0 sudo[309554]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:48:34 compute-0 sudo[309554]: pam_unix(sudo:session): session closed for user root
Jan 20 14:48:34 compute-0 sudo[309597]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:48:34 compute-0 sudo[309597]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:48:34 compute-0 sudo[309597]: pam_unix(sudo:session): session closed for user root
Jan 20 14:48:34 compute-0 sudo[309622]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:48:34 compute-0 sudo[309622]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:48:34 compute-0 sudo[309622]: pam_unix(sudo:session): session closed for user root
Jan 20 14:48:34 compute-0 nova_compute[250018]: 2026-01-20 14:48:34.457 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:34 compute-0 sudo[309647]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- raw list --format json
Jan 20 14:48:34 compute-0 sudo[309647]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:48:34 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:48:34 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:48:34 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:48:34.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:48:34 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:48:34 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:48:34 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:48:34 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:48:34.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:48:34 compute-0 podman[309712]: 2026-01-20 14:48:34.790215687 +0000 UTC m=+0.037155379 container create 1c856fd04f672711f71ab87744c1f0e858f3a65a827cedc7f1584dc5c44f549e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_beaver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 20 14:48:34 compute-0 systemd[1]: Started libpod-conmon-1c856fd04f672711f71ab87744c1f0e858f3a65a827cedc7f1584dc5c44f549e.scope.
Jan 20 14:48:34 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:48:34 compute-0 podman[309712]: 2026-01-20 14:48:34.864637843 +0000 UTC m=+0.111577555 container init 1c856fd04f672711f71ab87744c1f0e858f3a65a827cedc7f1584dc5c44f549e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_beaver, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:48:34 compute-0 podman[309712]: 2026-01-20 14:48:34.775341403 +0000 UTC m=+0.022281115 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:48:34 compute-0 podman[309712]: 2026-01-20 14:48:34.872358483 +0000 UTC m=+0.119298175 container start 1c856fd04f672711f71ab87744c1f0e858f3a65a827cedc7f1584dc5c44f549e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_beaver, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:48:34 compute-0 podman[309712]: 2026-01-20 14:48:34.875529029 +0000 UTC m=+0.122468721 container attach 1c856fd04f672711f71ab87744c1f0e858f3a65a827cedc7f1584dc5c44f549e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_beaver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 20 14:48:34 compute-0 elegant_beaver[309729]: 167 167
Jan 20 14:48:34 compute-0 systemd[1]: libpod-1c856fd04f672711f71ab87744c1f0e858f3a65a827cedc7f1584dc5c44f549e.scope: Deactivated successfully.
Jan 20 14:48:34 compute-0 conmon[309729]: conmon 1c856fd04f672711f71a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1c856fd04f672711f71ab87744c1f0e858f3a65a827cedc7f1584dc5c44f549e.scope/container/memory.events
Jan 20 14:48:34 compute-0 podman[309712]: 2026-01-20 14:48:34.878840178 +0000 UTC m=+0.125779870 container died 1c856fd04f672711f71ab87744c1f0e858f3a65a827cedc7f1584dc5c44f549e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_beaver, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 20 14:48:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-586b08a46436f8c8dfbbde7e8b077f470ea77ef470d753279d64afbe928768d4-merged.mount: Deactivated successfully.
Jan 20 14:48:34 compute-0 podman[309712]: 2026-01-20 14:48:34.916738455 +0000 UTC m=+0.163678187 container remove 1c856fd04f672711f71ab87744c1f0e858f3a65a827cedc7f1584dc5c44f549e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_beaver, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:48:34 compute-0 systemd[1]: libpod-conmon-1c856fd04f672711f71ab87744c1f0e858f3a65a827cedc7f1584dc5c44f549e.scope: Deactivated successfully.
Jan 20 14:48:34 compute-0 ceph-mon[74360]: pgmap v1815: 321 pgs: 321 active+clean; 366 MiB data, 985 MiB used, 20 GiB / 21 GiB avail; 6.1 MiB/s rd, 3.1 MiB/s wr, 319 op/s
Jan 20 14:48:35 compute-0 podman[309757]: 2026-01-20 14:48:35.108596644 +0000 UTC m=+0.043311534 container create 4e8dffe73e92c5662329b56c7629cc8868ea2640bae54e12cba1bceddfb40ea8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_mestorf, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:48:35 compute-0 systemd[1]: Started libpod-conmon-4e8dffe73e92c5662329b56c7629cc8868ea2640bae54e12cba1bceddfb40ea8.scope.
Jan 20 14:48:35 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:48:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c762b96ee36050fcb4b7ba6b251a563259246b1bfa1864dd5977102f05ada78/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:48:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c762b96ee36050fcb4b7ba6b251a563259246b1bfa1864dd5977102f05ada78/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:48:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c762b96ee36050fcb4b7ba6b251a563259246b1bfa1864dd5977102f05ada78/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:48:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c762b96ee36050fcb4b7ba6b251a563259246b1bfa1864dd5977102f05ada78/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:48:35 compute-0 podman[309757]: 2026-01-20 14:48:35.177146282 +0000 UTC m=+0.111861232 container init 4e8dffe73e92c5662329b56c7629cc8868ea2640bae54e12cba1bceddfb40ea8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_mestorf, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 20 14:48:35 compute-0 podman[309757]: 2026-01-20 14:48:35.185518189 +0000 UTC m=+0.120233089 container start 4e8dffe73e92c5662329b56c7629cc8868ea2640bae54e12cba1bceddfb40ea8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_mestorf, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:48:35 compute-0 podman[309757]: 2026-01-20 14:48:35.09292772 +0000 UTC m=+0.027642640 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:48:35 compute-0 podman[309757]: 2026-01-20 14:48:35.191058479 +0000 UTC m=+0.125773409 container attach 4e8dffe73e92c5662329b56c7629cc8868ea2640bae54e12cba1bceddfb40ea8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_mestorf, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:48:35 compute-0 nova_compute[250018]: 2026-01-20 14:48:35.199 250022 DEBUG oslo_concurrency.processutils [None req-dda63d3b-2d38-44fc-956e-7ef4e6119b15 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 f55cf124-b1d8-47e0-80c6-8b9dd2b3f743_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.971s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:48:35 compute-0 nova_compute[250018]: 2026-01-20 14:48:35.274 250022 DEBUG nova.storage.rbd_utils [None req-dda63d3b-2d38-44fc-956e-7ef4e6119b15 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] resizing rbd image f55cf124-b1d8-47e0-80c6-8b9dd2b3f743_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 20 14:48:35 compute-0 nova_compute[250018]: 2026-01-20 14:48:35.391 250022 DEBUG nova.objects.instance [None req-dda63d3b-2d38-44fc-956e-7ef4e6119b15 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lazy-loading 'migration_context' on Instance uuid f55cf124-b1d8-47e0-80c6-8b9dd2b3f743 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:48:35 compute-0 nova_compute[250018]: 2026-01-20 14:48:35.417 250022 DEBUG nova.virt.libvirt.driver [None req-dda63d3b-2d38-44fc-956e-7ef4e6119b15 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: f55cf124-b1d8-47e0-80c6-8b9dd2b3f743] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 14:48:35 compute-0 nova_compute[250018]: 2026-01-20 14:48:35.418 250022 DEBUG nova.virt.libvirt.driver [None req-dda63d3b-2d38-44fc-956e-7ef4e6119b15 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: f55cf124-b1d8-47e0-80c6-8b9dd2b3f743] Ensure instance console log exists: /var/lib/nova/instances/f55cf124-b1d8-47e0-80c6-8b9dd2b3f743/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 14:48:35 compute-0 nova_compute[250018]: 2026-01-20 14:48:35.418 250022 DEBUG oslo_concurrency.lockutils [None req-dda63d3b-2d38-44fc-956e-7ef4e6119b15 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:48:35 compute-0 nova_compute[250018]: 2026-01-20 14:48:35.418 250022 DEBUG oslo_concurrency.lockutils [None req-dda63d3b-2d38-44fc-956e-7ef4e6119b15 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:48:35 compute-0 nova_compute[250018]: 2026-01-20 14:48:35.419 250022 DEBUG oslo_concurrency.lockutils [None req-dda63d3b-2d38-44fc-956e-7ef4e6119b15 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:48:35 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1816: 321 pgs: 321 active+clean; 371 MiB data, 970 MiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 1.9 MiB/s wr, 279 op/s
Jan 20 14:48:35 compute-0 nova_compute[250018]: 2026-01-20 14:48:35.910 250022 DEBUG nova.network.neutron [None req-dda63d3b-2d38-44fc-956e-7ef4e6119b15 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: f55cf124-b1d8-47e0-80c6-8b9dd2b3f743] Successfully created port: 1c2faab3-ee0b-4878-b090-b075bbb97543 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 20 14:48:36 compute-0 nova_compute[250018]: 2026-01-20 14:48:36.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:48:36 compute-0 nova_compute[250018]: 2026-01-20 14:48:36.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:48:36 compute-0 nova_compute[250018]: 2026-01-20 14:48:36.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:48:36 compute-0 keen_mestorf[309773]: {
Jan 20 14:48:36 compute-0 keen_mestorf[309773]:     "afc3740a-ccee-46f8-b343-22648cc89376": {
Jan 20 14:48:36 compute-0 keen_mestorf[309773]:         "ceph_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 14:48:36 compute-0 keen_mestorf[309773]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 20 14:48:36 compute-0 keen_mestorf[309773]:         "osd_id": 0,
Jan 20 14:48:36 compute-0 keen_mestorf[309773]:         "osd_uuid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 14:48:36 compute-0 keen_mestorf[309773]:         "type": "bluestore"
Jan 20 14:48:36 compute-0 keen_mestorf[309773]:     }
Jan 20 14:48:36 compute-0 keen_mestorf[309773]: }
Jan 20 14:48:36 compute-0 nova_compute[250018]: 2026-01-20 14:48:36.077 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:48:36 compute-0 nova_compute[250018]: 2026-01-20 14:48:36.077 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:48:36 compute-0 nova_compute[250018]: 2026-01-20 14:48:36.077 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:48:36 compute-0 nova_compute[250018]: 2026-01-20 14:48:36.078 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 14:48:36 compute-0 nova_compute[250018]: 2026-01-20 14:48:36.078 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:48:36 compute-0 systemd[1]: libpod-4e8dffe73e92c5662329b56c7629cc8868ea2640bae54e12cba1bceddfb40ea8.scope: Deactivated successfully.
Jan 20 14:48:36 compute-0 conmon[309773]: conmon 4e8dffe73e92c5662329 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4e8dffe73e92c5662329b56c7629cc8868ea2640bae54e12cba1bceddfb40ea8.scope/container/memory.events
Jan 20 14:48:36 compute-0 podman[309757]: 2026-01-20 14:48:36.113111046 +0000 UTC m=+1.047825956 container died 4e8dffe73e92c5662329b56c7629cc8868ea2640bae54e12cba1bceddfb40ea8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_mestorf, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 20 14:48:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-6c762b96ee36050fcb4b7ba6b251a563259246b1bfa1864dd5977102f05ada78-merged.mount: Deactivated successfully.
Jan 20 14:48:36 compute-0 podman[309757]: 2026-01-20 14:48:36.171106227 +0000 UTC m=+1.105821127 container remove 4e8dffe73e92c5662329b56c7629cc8868ea2640bae54e12cba1bceddfb40ea8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_mestorf, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 20 14:48:36 compute-0 systemd[1]: libpod-conmon-4e8dffe73e92c5662329b56c7629cc8868ea2640bae54e12cba1bceddfb40ea8.scope: Deactivated successfully.
Jan 20 14:48:36 compute-0 sudo[309647]: pam_unix(sudo:session): session closed for user root
Jan 20 14:48:36 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 14:48:36 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:48:36 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 14:48:36 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:48:36 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4103562931' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:48:36 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:48:36 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:48:36 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:48:36.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:48:36 compute-0 nova_compute[250018]: 2026-01-20 14:48:36.533 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:48:36 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:48:36 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:48:36 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:48:36.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:48:36 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:48:36 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 47e9b972-a19d-4003-ab7f-d98948421b2b does not exist
Jan 20 14:48:36 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 315ab2b7-980f-4bba-9c7a-074fc3b4c598 does not exist
Jan 20 14:48:36 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev a34f75fb-c5cd-4c8b-9352-9000b49c0ac6 does not exist
Jan 20 14:48:36 compute-0 sudo[309903]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:48:36 compute-0 sudo[309903]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:48:36 compute-0 sudo[309903]: pam_unix(sudo:session): session closed for user root
Jan 20 14:48:36 compute-0 nova_compute[250018]: 2026-01-20 14:48:36.700 250022 WARNING nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:48:36 compute-0 nova_compute[250018]: 2026-01-20 14:48:36.701 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4362MB free_disk=20.849884033203125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 14:48:36 compute-0 nova_compute[250018]: 2026-01-20 14:48:36.701 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:48:36 compute-0 nova_compute[250018]: 2026-01-20 14:48:36.702 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:48:36 compute-0 sudo[309928]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 14:48:36 compute-0 sudo[309928]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:48:36 compute-0 sudo[309928]: pam_unix(sudo:session): session closed for user root
Jan 20 14:48:36 compute-0 nova_compute[250018]: 2026-01-20 14:48:36.837 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Instance f55cf124-b1d8-47e0-80c6-8b9dd2b3f743 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 14:48:36 compute-0 nova_compute[250018]: 2026-01-20 14:48:36.837 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 14:48:36 compute-0 nova_compute[250018]: 2026-01-20 14:48:36.838 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 14:48:36 compute-0 nova_compute[250018]: 2026-01-20 14:48:36.892 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:48:37 compute-0 ceph-mon[74360]: pgmap v1816: 321 pgs: 321 active+clean; 371 MiB data, 970 MiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 1.9 MiB/s wr, 279 op/s
Jan 20 14:48:37 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/851343660' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:48:37 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:48:37 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/4103562931' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:48:37 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:48:37 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:48:37 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1867248921' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:48:37 compute-0 nova_compute[250018]: 2026-01-20 14:48:37.387 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:48:37 compute-0 nova_compute[250018]: 2026-01-20 14:48:37.393 250022 DEBUG nova.compute.provider_tree [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:48:37 compute-0 nova_compute[250018]: 2026-01-20 14:48:37.412 250022 DEBUG nova.scheduler.client.report [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:48:37 compute-0 nova_compute[250018]: 2026-01-20 14:48:37.434 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 14:48:37 compute-0 nova_compute[250018]: 2026-01-20 14:48:37.434 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.732s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:48:37 compute-0 nova_compute[250018]: 2026-01-20 14:48:37.434 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:48:37 compute-0 nova_compute[250018]: 2026-01-20 14:48:37.434 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 20 14:48:37 compute-0 nova_compute[250018]: 2026-01-20 14:48:37.766 250022 DEBUG nova.network.neutron [None req-dda63d3b-2d38-44fc-956e-7ef4e6119b15 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: f55cf124-b1d8-47e0-80c6-8b9dd2b3f743] Successfully updated port: 1c2faab3-ee0b-4878-b090-b075bbb97543 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 20 14:48:37 compute-0 nova_compute[250018]: 2026-01-20 14:48:37.785 250022 DEBUG oslo_concurrency.lockutils [None req-dda63d3b-2d38-44fc-956e-7ef4e6119b15 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Acquiring lock "refresh_cache-f55cf124-b1d8-47e0-80c6-8b9dd2b3f743" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:48:37 compute-0 nova_compute[250018]: 2026-01-20 14:48:37.785 250022 DEBUG oslo_concurrency.lockutils [None req-dda63d3b-2d38-44fc-956e-7ef4e6119b15 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Acquired lock "refresh_cache-f55cf124-b1d8-47e0-80c6-8b9dd2b3f743" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:48:37 compute-0 nova_compute[250018]: 2026-01-20 14:48:37.785 250022 DEBUG nova.network.neutron [None req-dda63d3b-2d38-44fc-956e-7ef4e6119b15 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: f55cf124-b1d8-47e0-80c6-8b9dd2b3f743] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:48:37 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1817: 321 pgs: 321 active+clean; 386 MiB data, 977 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 670 KiB/s wr, 217 op/s
Jan 20 14:48:37 compute-0 nova_compute[250018]: 2026-01-20 14:48:37.878 250022 DEBUG nova.compute.manager [req-aa772249-fe3c-4687-9cf6-3009e08cea66 req-4201e242-f53e-4210-bc95-ba1e06017ece 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f55cf124-b1d8-47e0-80c6-8b9dd2b3f743] Received event network-changed-1c2faab3-ee0b-4878-b090-b075bbb97543 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:48:37 compute-0 nova_compute[250018]: 2026-01-20 14:48:37.879 250022 DEBUG nova.compute.manager [req-aa772249-fe3c-4687-9cf6-3009e08cea66 req-4201e242-f53e-4210-bc95-ba1e06017ece 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f55cf124-b1d8-47e0-80c6-8b9dd2b3f743] Refreshing instance network info cache due to event network-changed-1c2faab3-ee0b-4878-b090-b075bbb97543. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:48:37 compute-0 nova_compute[250018]: 2026-01-20 14:48:37.879 250022 DEBUG oslo_concurrency.lockutils [req-aa772249-fe3c-4687-9cf6-3009e08cea66 req-4201e242-f53e-4210-bc95-ba1e06017ece 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-f55cf124-b1d8-47e0-80c6-8b9dd2b3f743" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:48:37 compute-0 nova_compute[250018]: 2026-01-20 14:48:37.989 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:38 compute-0 nova_compute[250018]: 2026-01-20 14:48:38.055 250022 DEBUG nova.network.neutron [None req-dda63d3b-2d38-44fc-956e-7ef4e6119b15 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: f55cf124-b1d8-47e0-80c6-8b9dd2b3f743] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 14:48:38 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/1867248921' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:48:38 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/821706828' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:48:38 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/4259718524' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:48:38 compute-0 nova_compute[250018]: 2026-01-20 14:48:38.451 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:48:38 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:48:38 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:48:38 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:48:38.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:48:38 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:48:38 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:48:38 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:48:38.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:48:39 compute-0 ceph-mon[74360]: pgmap v1817: 321 pgs: 321 active+clean; 386 MiB data, 977 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 670 KiB/s wr, 217 op/s
Jan 20 14:48:39 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/3717095713' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:48:39 compute-0 nova_compute[250018]: 2026-01-20 14:48:39.460 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:39 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:48:39 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1818: 321 pgs: 321 active+clean; 392 MiB data, 981 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 1006 KiB/s wr, 184 op/s
Jan 20 14:48:40 compute-0 nova_compute[250018]: 2026-01-20 14:48:40.016 250022 DEBUG nova.network.neutron [None req-dda63d3b-2d38-44fc-956e-7ef4e6119b15 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: f55cf124-b1d8-47e0-80c6-8b9dd2b3f743] Updating instance_info_cache with network_info: [{"id": "1c2faab3-ee0b-4878-b090-b075bbb97543", "address": "fa:16:3e:a5:02:f1", "network": {"id": "3379e2b3-ffb2-4391-969b-c9dc51bfbe25", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1112843240-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acb30fbc0e3749e390d7f867060b5a2a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c2faab3-ee", "ovs_interfaceid": "1c2faab3-ee0b-4878-b090-b075bbb97543", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:48:40 compute-0 nova_compute[250018]: 2026-01-20 14:48:40.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:48:40 compute-0 nova_compute[250018]: 2026-01-20 14:48:40.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:48:40 compute-0 nova_compute[250018]: 2026-01-20 14:48:40.051 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 14:48:40 compute-0 nova_compute[250018]: 2026-01-20 14:48:40.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:48:40 compute-0 nova_compute[250018]: 2026-01-20 14:48:40.051 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 20 14:48:40 compute-0 nova_compute[250018]: 2026-01-20 14:48:40.054 250022 DEBUG oslo_concurrency.lockutils [None req-dda63d3b-2d38-44fc-956e-7ef4e6119b15 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Releasing lock "refresh_cache-f55cf124-b1d8-47e0-80c6-8b9dd2b3f743" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:48:40 compute-0 nova_compute[250018]: 2026-01-20 14:48:40.054 250022 DEBUG nova.compute.manager [None req-dda63d3b-2d38-44fc-956e-7ef4e6119b15 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: f55cf124-b1d8-47e0-80c6-8b9dd2b3f743] Instance network_info: |[{"id": "1c2faab3-ee0b-4878-b090-b075bbb97543", "address": "fa:16:3e:a5:02:f1", "network": {"id": "3379e2b3-ffb2-4391-969b-c9dc51bfbe25", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1112843240-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acb30fbc0e3749e390d7f867060b5a2a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c2faab3-ee", "ovs_interfaceid": "1c2faab3-ee0b-4878-b090-b075bbb97543", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 20 14:48:40 compute-0 nova_compute[250018]: 2026-01-20 14:48:40.055 250022 DEBUG oslo_concurrency.lockutils [req-aa772249-fe3c-4687-9cf6-3009e08cea66 req-4201e242-f53e-4210-bc95-ba1e06017ece 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-f55cf124-b1d8-47e0-80c6-8b9dd2b3f743" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:48:40 compute-0 nova_compute[250018]: 2026-01-20 14:48:40.055 250022 DEBUG nova.network.neutron [req-aa772249-fe3c-4687-9cf6-3009e08cea66 req-4201e242-f53e-4210-bc95-ba1e06017ece 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f55cf124-b1d8-47e0-80c6-8b9dd2b3f743] Refreshing network info cache for port 1c2faab3-ee0b-4878-b090-b075bbb97543 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:48:40 compute-0 nova_compute[250018]: 2026-01-20 14:48:40.058 250022 DEBUG nova.virt.libvirt.driver [None req-dda63d3b-2d38-44fc-956e-7ef4e6119b15 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: f55cf124-b1d8-47e0-80c6-8b9dd2b3f743] Start _get_guest_xml network_info=[{"id": "1c2faab3-ee0b-4878-b090-b075bbb97543", "address": "fa:16:3e:a5:02:f1", "network": {"id": "3379e2b3-ffb2-4391-969b-c9dc51bfbe25", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1112843240-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acb30fbc0e3749e390d7f867060b5a2a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c2faab3-ee", "ovs_interfaceid": "1c2faab3-ee0b-4878-b090-b075bbb97543", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encrypted': False, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'guest_format': None, 'size': 0, 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 14:48:40 compute-0 nova_compute[250018]: 2026-01-20 14:48:40.062 250022 WARNING nova.virt.libvirt.driver [None req-dda63d3b-2d38-44fc-956e-7ef4e6119b15 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:48:40 compute-0 nova_compute[250018]: 2026-01-20 14:48:40.067 250022 DEBUG nova.virt.libvirt.host [None req-dda63d3b-2d38-44fc-956e-7ef4e6119b15 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 14:48:40 compute-0 nova_compute[250018]: 2026-01-20 14:48:40.068 250022 DEBUG nova.virt.libvirt.host [None req-dda63d3b-2d38-44fc-956e-7ef4e6119b15 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 14:48:40 compute-0 nova_compute[250018]: 2026-01-20 14:48:40.069 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 20 14:48:40 compute-0 nova_compute[250018]: 2026-01-20 14:48:40.070 250022 DEBUG nova.virt.libvirt.host [None req-dda63d3b-2d38-44fc-956e-7ef4e6119b15 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 14:48:40 compute-0 nova_compute[250018]: 2026-01-20 14:48:40.070 250022 DEBUG nova.virt.libvirt.host [None req-dda63d3b-2d38-44fc-956e-7ef4e6119b15 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 14:48:40 compute-0 nova_compute[250018]: 2026-01-20 14:48:40.071 250022 DEBUG nova.virt.libvirt.driver [None req-dda63d3b-2d38-44fc-956e-7ef4e6119b15 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 14:48:40 compute-0 nova_compute[250018]: 2026-01-20 14:48:40.071 250022 DEBUG nova.virt.hardware [None req-dda63d3b-2d38-44fc-956e-7ef4e6119b15 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 14:48:40 compute-0 nova_compute[250018]: 2026-01-20 14:48:40.072 250022 DEBUG nova.virt.hardware [None req-dda63d3b-2d38-44fc-956e-7ef4e6119b15 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 14:48:40 compute-0 nova_compute[250018]: 2026-01-20 14:48:40.072 250022 DEBUG nova.virt.hardware [None req-dda63d3b-2d38-44fc-956e-7ef4e6119b15 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 14:48:40 compute-0 nova_compute[250018]: 2026-01-20 14:48:40.072 250022 DEBUG nova.virt.hardware [None req-dda63d3b-2d38-44fc-956e-7ef4e6119b15 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 14:48:40 compute-0 nova_compute[250018]: 2026-01-20 14:48:40.072 250022 DEBUG nova.virt.hardware [None req-dda63d3b-2d38-44fc-956e-7ef4e6119b15 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 14:48:40 compute-0 nova_compute[250018]: 2026-01-20 14:48:40.073 250022 DEBUG nova.virt.hardware [None req-dda63d3b-2d38-44fc-956e-7ef4e6119b15 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 14:48:40 compute-0 nova_compute[250018]: 2026-01-20 14:48:40.073 250022 DEBUG nova.virt.hardware [None req-dda63d3b-2d38-44fc-956e-7ef4e6119b15 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 14:48:40 compute-0 nova_compute[250018]: 2026-01-20 14:48:40.073 250022 DEBUG nova.virt.hardware [None req-dda63d3b-2d38-44fc-956e-7ef4e6119b15 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 14:48:40 compute-0 nova_compute[250018]: 2026-01-20 14:48:40.073 250022 DEBUG nova.virt.hardware [None req-dda63d3b-2d38-44fc-956e-7ef4e6119b15 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 14:48:40 compute-0 nova_compute[250018]: 2026-01-20 14:48:40.073 250022 DEBUG nova.virt.hardware [None req-dda63d3b-2d38-44fc-956e-7ef4e6119b15 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 14:48:40 compute-0 nova_compute[250018]: 2026-01-20 14:48:40.074 250022 DEBUG nova.virt.hardware [None req-dda63d3b-2d38-44fc-956e-7ef4e6119b15 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 14:48:40 compute-0 nova_compute[250018]: 2026-01-20 14:48:40.076 250022 DEBUG oslo_concurrency.processutils [None req-dda63d3b-2d38-44fc-956e-7ef4e6119b15 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:48:40 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/946020620' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:48:40 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:48:40 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2633351073' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:48:40 compute-0 nova_compute[250018]: 2026-01-20 14:48:40.511 250022 DEBUG oslo_concurrency.processutils [None req-dda63d3b-2d38-44fc-956e-7ef4e6119b15 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:48:40 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:48:40 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:48:40 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:48:40.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:48:40 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:48:40 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:48:40 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:48:40.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:48:40 compute-0 nova_compute[250018]: 2026-01-20 14:48:40.590 250022 DEBUG nova.storage.rbd_utils [None req-dda63d3b-2d38-44fc-956e-7ef4e6119b15 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] rbd image f55cf124-b1d8-47e0-80c6-8b9dd2b3f743_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:48:40 compute-0 nova_compute[250018]: 2026-01-20 14:48:40.598 250022 DEBUG oslo_concurrency.processutils [None req-dda63d3b-2d38-44fc-956e-7ef4e6119b15 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:48:41 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:48:41 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2563049413' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:48:41 compute-0 nova_compute[250018]: 2026-01-20 14:48:41.074 250022 DEBUG oslo_concurrency.processutils [None req-dda63d3b-2d38-44fc-956e-7ef4e6119b15 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:48:41 compute-0 nova_compute[250018]: 2026-01-20 14:48:41.076 250022 DEBUG nova.virt.libvirt.vif [None req-dda63d3b-2d38-44fc-956e-7ef4e6119b15 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:48:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-815005966',display_name='tempest-ServerDiskConfigTestJSON-server-815005966',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-815005966',id=104,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='acb30fbc0e3749e390d7f867060b5a2a',ramdisk_id='',reservation_id='r-walplx2b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-1806346246',owner_user_name='tempest-ServerDiskConfigTestJSON-1806346246-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:48:33Z,user_data=None,user_id='a1bd93d04cc4468abe1d5c61f5144191',uuid=f55cf124-b1d8-47e0-80c6-8b9dd2b3f743,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1c2faab3-ee0b-4878-b090-b075bbb97543", "address": "fa:16:3e:a5:02:f1", "network": {"id": "3379e2b3-ffb2-4391-969b-c9dc51bfbe25", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1112843240-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acb30fbc0e3749e390d7f867060b5a2a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c2faab3-ee", "ovs_interfaceid": "1c2faab3-ee0b-4878-b090-b075bbb97543", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 14:48:41 compute-0 nova_compute[250018]: 2026-01-20 14:48:41.076 250022 DEBUG nova.network.os_vif_util [None req-dda63d3b-2d38-44fc-956e-7ef4e6119b15 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Converting VIF {"id": "1c2faab3-ee0b-4878-b090-b075bbb97543", "address": "fa:16:3e:a5:02:f1", "network": {"id": "3379e2b3-ffb2-4391-969b-c9dc51bfbe25", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1112843240-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acb30fbc0e3749e390d7f867060b5a2a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c2faab3-ee", "ovs_interfaceid": "1c2faab3-ee0b-4878-b090-b075bbb97543", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:48:41 compute-0 nova_compute[250018]: 2026-01-20 14:48:41.078 250022 DEBUG nova.network.os_vif_util [None req-dda63d3b-2d38-44fc-956e-7ef4e6119b15 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a5:02:f1,bridge_name='br-int',has_traffic_filtering=True,id=1c2faab3-ee0b-4878-b090-b075bbb97543,network=Network(3379e2b3-ffb2-4391-969b-c9dc51bfbe25),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c2faab3-ee') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:48:41 compute-0 nova_compute[250018]: 2026-01-20 14:48:41.079 250022 DEBUG nova.objects.instance [None req-dda63d3b-2d38-44fc-956e-7ef4e6119b15 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lazy-loading 'pci_devices' on Instance uuid f55cf124-b1d8-47e0-80c6-8b9dd2b3f743 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:48:41 compute-0 nova_compute[250018]: 2026-01-20 14:48:41.104 250022 DEBUG nova.virt.libvirt.driver [None req-dda63d3b-2d38-44fc-956e-7ef4e6119b15 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: f55cf124-b1d8-47e0-80c6-8b9dd2b3f743] End _get_guest_xml xml=<domain type="kvm">
Jan 20 14:48:41 compute-0 nova_compute[250018]:   <uuid>f55cf124-b1d8-47e0-80c6-8b9dd2b3f743</uuid>
Jan 20 14:48:41 compute-0 nova_compute[250018]:   <name>instance-00000068</name>
Jan 20 14:48:41 compute-0 nova_compute[250018]:   <memory>131072</memory>
Jan 20 14:48:41 compute-0 nova_compute[250018]:   <vcpu>1</vcpu>
Jan 20 14:48:41 compute-0 nova_compute[250018]:   <metadata>
Jan 20 14:48:41 compute-0 nova_compute[250018]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 14:48:41 compute-0 nova_compute[250018]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 14:48:41 compute-0 nova_compute[250018]:       <nova:name>tempest-ServerDiskConfigTestJSON-server-815005966</nova:name>
Jan 20 14:48:41 compute-0 nova_compute[250018]:       <nova:creationTime>2026-01-20 14:48:40</nova:creationTime>
Jan 20 14:48:41 compute-0 nova_compute[250018]:       <nova:flavor name="m1.nano">
Jan 20 14:48:41 compute-0 nova_compute[250018]:         <nova:memory>128</nova:memory>
Jan 20 14:48:41 compute-0 nova_compute[250018]:         <nova:disk>1</nova:disk>
Jan 20 14:48:41 compute-0 nova_compute[250018]:         <nova:swap>0</nova:swap>
Jan 20 14:48:41 compute-0 nova_compute[250018]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 14:48:41 compute-0 nova_compute[250018]:         <nova:vcpus>1</nova:vcpus>
Jan 20 14:48:41 compute-0 nova_compute[250018]:       </nova:flavor>
Jan 20 14:48:41 compute-0 nova_compute[250018]:       <nova:owner>
Jan 20 14:48:41 compute-0 nova_compute[250018]:         <nova:user uuid="a1bd93d04cc4468abe1d5c61f5144191">tempest-ServerDiskConfigTestJSON-1806346246-project-member</nova:user>
Jan 20 14:48:41 compute-0 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 20 14:48:41 compute-0 nova_compute[250018]:         <nova:project uuid="acb30fbc0e3749e390d7f867060b5a2a">tempest-ServerDiskConfigTestJSON-1806346246</nova:project>
Jan 20 14:48:41 compute-0 nova_compute[250018]:       </nova:owner>
Jan 20 14:48:41 compute-0 nova_compute[250018]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 14:48:41 compute-0 nova_compute[250018]:       <nova:ports>
Jan 20 14:48:41 compute-0 nova_compute[250018]:         <nova:port uuid="1c2faab3-ee0b-4878-b090-b075bbb97543">
Jan 20 14:48:41 compute-0 nova_compute[250018]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Jan 20 14:48:41 compute-0 nova_compute[250018]:         </nova:port>
Jan 20 14:48:41 compute-0 nova_compute[250018]:       </nova:ports>
Jan 20 14:48:41 compute-0 nova_compute[250018]:     </nova:instance>
Jan 20 14:48:41 compute-0 nova_compute[250018]:   </metadata>
Jan 20 14:48:41 compute-0 nova_compute[250018]:   <sysinfo type="smbios">
Jan 20 14:48:41 compute-0 nova_compute[250018]:     <system>
Jan 20 14:48:41 compute-0 nova_compute[250018]:       <entry name="manufacturer">RDO</entry>
Jan 20 14:48:41 compute-0 nova_compute[250018]:       <entry name="product">OpenStack Compute</entry>
Jan 20 14:48:41 compute-0 nova_compute[250018]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 14:48:41 compute-0 nova_compute[250018]:       <entry name="serial">f55cf124-b1d8-47e0-80c6-8b9dd2b3f743</entry>
Jan 20 14:48:41 compute-0 nova_compute[250018]:       <entry name="uuid">f55cf124-b1d8-47e0-80c6-8b9dd2b3f743</entry>
Jan 20 14:48:41 compute-0 nova_compute[250018]:       <entry name="family">Virtual Machine</entry>
Jan 20 14:48:41 compute-0 nova_compute[250018]:     </system>
Jan 20 14:48:41 compute-0 nova_compute[250018]:   </sysinfo>
Jan 20 14:48:41 compute-0 nova_compute[250018]:   <os>
Jan 20 14:48:41 compute-0 nova_compute[250018]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 14:48:41 compute-0 nova_compute[250018]:     <boot dev="hd"/>
Jan 20 14:48:41 compute-0 nova_compute[250018]:     <smbios mode="sysinfo"/>
Jan 20 14:48:41 compute-0 nova_compute[250018]:   </os>
Jan 20 14:48:41 compute-0 nova_compute[250018]:   <features>
Jan 20 14:48:41 compute-0 nova_compute[250018]:     <acpi/>
Jan 20 14:48:41 compute-0 nova_compute[250018]:     <apic/>
Jan 20 14:48:41 compute-0 nova_compute[250018]:     <vmcoreinfo/>
Jan 20 14:48:41 compute-0 nova_compute[250018]:   </features>
Jan 20 14:48:41 compute-0 nova_compute[250018]:   <clock offset="utc">
Jan 20 14:48:41 compute-0 nova_compute[250018]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 14:48:41 compute-0 nova_compute[250018]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 14:48:41 compute-0 nova_compute[250018]:     <timer name="hpet" present="no"/>
Jan 20 14:48:41 compute-0 nova_compute[250018]:   </clock>
Jan 20 14:48:41 compute-0 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 20 14:48:41 compute-0 nova_compute[250018]:   <cpu mode="custom" match="exact">
Jan 20 14:48:41 compute-0 nova_compute[250018]:     <model>Nehalem</model>
Jan 20 14:48:41 compute-0 nova_compute[250018]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 14:48:41 compute-0 nova_compute[250018]:   </cpu>
Jan 20 14:48:41 compute-0 nova_compute[250018]:   <devices>
Jan 20 14:48:41 compute-0 nova_compute[250018]:     <disk type="network" device="disk">
Jan 20 14:48:41 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 14:48:41 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/f55cf124-b1d8-47e0-80c6-8b9dd2b3f743_disk">
Jan 20 14:48:41 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:48:41 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:48:41 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:48:41 compute-0 nova_compute[250018]:       </source>
Jan 20 14:48:41 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 14:48:41 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:48:41 compute-0 nova_compute[250018]:       </auth>
Jan 20 14:48:41 compute-0 nova_compute[250018]:       <target dev="vda" bus="virtio"/>
Jan 20 14:48:41 compute-0 nova_compute[250018]:     </disk>
Jan 20 14:48:41 compute-0 nova_compute[250018]:     <disk type="network" device="cdrom">
Jan 20 14:48:41 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 14:48:41 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/f55cf124-b1d8-47e0-80c6-8b9dd2b3f743_disk.config">
Jan 20 14:48:41 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:48:41 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:48:41 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:48:41 compute-0 nova_compute[250018]:       </source>
Jan 20 14:48:41 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 14:48:41 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:48:41 compute-0 nova_compute[250018]:       </auth>
Jan 20 14:48:41 compute-0 nova_compute[250018]:       <target dev="sda" bus="sata"/>
Jan 20 14:48:41 compute-0 nova_compute[250018]:     </disk>
Jan 20 14:48:41 compute-0 nova_compute[250018]:     <interface type="ethernet">
Jan 20 14:48:41 compute-0 nova_compute[250018]:       <mac address="fa:16:3e:a5:02:f1"/>
Jan 20 14:48:41 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 14:48:41 compute-0 nova_compute[250018]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 14:48:41 compute-0 nova_compute[250018]:       <mtu size="1442"/>
Jan 20 14:48:41 compute-0 nova_compute[250018]:       <target dev="tap1c2faab3-ee"/>
Jan 20 14:48:41 compute-0 nova_compute[250018]:     </interface>
Jan 20 14:48:41 compute-0 nova_compute[250018]:     <serial type="pty">
Jan 20 14:48:41 compute-0 nova_compute[250018]:       <log file="/var/lib/nova/instances/f55cf124-b1d8-47e0-80c6-8b9dd2b3f743/console.log" append="off"/>
Jan 20 14:48:41 compute-0 nova_compute[250018]:     </serial>
Jan 20 14:48:41 compute-0 nova_compute[250018]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 14:48:41 compute-0 nova_compute[250018]:     <video>
Jan 20 14:48:41 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 14:48:41 compute-0 nova_compute[250018]:     </video>
Jan 20 14:48:41 compute-0 nova_compute[250018]:     <input type="tablet" bus="usb"/>
Jan 20 14:48:41 compute-0 nova_compute[250018]:     <rng model="virtio">
Jan 20 14:48:41 compute-0 nova_compute[250018]:       <backend model="random">/dev/urandom</backend>
Jan 20 14:48:41 compute-0 nova_compute[250018]:     </rng>
Jan 20 14:48:41 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root"/>
Jan 20 14:48:41 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:48:41 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:48:41 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:48:41 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:48:41 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:48:41 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:48:41 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:48:41 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:48:41 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:48:41 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:48:41 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:48:41 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:48:41 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:48:41 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:48:41 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:48:41 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:48:41 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:48:41 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:48:41 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:48:41 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:48:41 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:48:41 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:48:41 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:48:41 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:48:41 compute-0 nova_compute[250018]:     <controller type="usb" index="0"/>
Jan 20 14:48:41 compute-0 nova_compute[250018]:     <memballoon model="virtio">
Jan 20 14:48:41 compute-0 nova_compute[250018]:       <stats period="10"/>
Jan 20 14:48:41 compute-0 nova_compute[250018]:     </memballoon>
Jan 20 14:48:41 compute-0 nova_compute[250018]:   </devices>
Jan 20 14:48:41 compute-0 nova_compute[250018]: </domain>
Jan 20 14:48:41 compute-0 nova_compute[250018]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 14:48:41 compute-0 nova_compute[250018]: 2026-01-20 14:48:41.106 250022 DEBUG nova.compute.manager [None req-dda63d3b-2d38-44fc-956e-7ef4e6119b15 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: f55cf124-b1d8-47e0-80c6-8b9dd2b3f743] Preparing to wait for external event network-vif-plugged-1c2faab3-ee0b-4878-b090-b075bbb97543 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 20 14:48:41 compute-0 nova_compute[250018]: 2026-01-20 14:48:41.106 250022 DEBUG oslo_concurrency.lockutils [None req-dda63d3b-2d38-44fc-956e-7ef4e6119b15 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Acquiring lock "f55cf124-b1d8-47e0-80c6-8b9dd2b3f743-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:48:41 compute-0 nova_compute[250018]: 2026-01-20 14:48:41.107 250022 DEBUG oslo_concurrency.lockutils [None req-dda63d3b-2d38-44fc-956e-7ef4e6119b15 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lock "f55cf124-b1d8-47e0-80c6-8b9dd2b3f743-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:48:41 compute-0 nova_compute[250018]: 2026-01-20 14:48:41.107 250022 DEBUG oslo_concurrency.lockutils [None req-dda63d3b-2d38-44fc-956e-7ef4e6119b15 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lock "f55cf124-b1d8-47e0-80c6-8b9dd2b3f743-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:48:41 compute-0 nova_compute[250018]: 2026-01-20 14:48:41.107 250022 DEBUG nova.virt.libvirt.vif [None req-dda63d3b-2d38-44fc-956e-7ef4e6119b15 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:48:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-815005966',display_name='tempest-ServerDiskConfigTestJSON-server-815005966',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-815005966',id=104,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='acb30fbc0e3749e390d7f867060b5a2a',ramdisk_id='',reservation_id='r-walplx2b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-1806346246',owner_user_name='tempest-ServerDiskConfigTestJSON-1806346246-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:48:33Z,user_data=None,user_id='a1bd93d04cc4468abe1d5c61f5144191',uuid=f55cf124-b1d8-47e0-80c6-8b9dd2b3f743,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1c2faab3-ee0b-4878-b090-b075bbb97543", "address": "fa:16:3e:a5:02:f1", "network": {"id": "3379e2b3-ffb2-4391-969b-c9dc51bfbe25", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1112843240-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acb30fbc0e3749e390d7f867060b5a2a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c2faab3-ee", "ovs_interfaceid": "1c2faab3-ee0b-4878-b090-b075bbb97543", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 14:48:41 compute-0 nova_compute[250018]: 2026-01-20 14:48:41.108 250022 DEBUG nova.network.os_vif_util [None req-dda63d3b-2d38-44fc-956e-7ef4e6119b15 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Converting VIF {"id": "1c2faab3-ee0b-4878-b090-b075bbb97543", "address": "fa:16:3e:a5:02:f1", "network": {"id": "3379e2b3-ffb2-4391-969b-c9dc51bfbe25", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1112843240-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acb30fbc0e3749e390d7f867060b5a2a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c2faab3-ee", "ovs_interfaceid": "1c2faab3-ee0b-4878-b090-b075bbb97543", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:48:41 compute-0 nova_compute[250018]: 2026-01-20 14:48:41.108 250022 DEBUG nova.network.os_vif_util [None req-dda63d3b-2d38-44fc-956e-7ef4e6119b15 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a5:02:f1,bridge_name='br-int',has_traffic_filtering=True,id=1c2faab3-ee0b-4878-b090-b075bbb97543,network=Network(3379e2b3-ffb2-4391-969b-c9dc51bfbe25),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c2faab3-ee') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:48:41 compute-0 nova_compute[250018]: 2026-01-20 14:48:41.109 250022 DEBUG os_vif [None req-dda63d3b-2d38-44fc-956e-7ef4e6119b15 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a5:02:f1,bridge_name='br-int',has_traffic_filtering=True,id=1c2faab3-ee0b-4878-b090-b075bbb97543,network=Network(3379e2b3-ffb2-4391-969b-c9dc51bfbe25),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c2faab3-ee') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 14:48:41 compute-0 nova_compute[250018]: 2026-01-20 14:48:41.109 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:41 compute-0 nova_compute[250018]: 2026-01-20 14:48:41.110 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:48:41 compute-0 nova_compute[250018]: 2026-01-20 14:48:41.110 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:48:41 compute-0 nova_compute[250018]: 2026-01-20 14:48:41.115 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:41 compute-0 nova_compute[250018]: 2026-01-20 14:48:41.115 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1c2faab3-ee, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:48:41 compute-0 nova_compute[250018]: 2026-01-20 14:48:41.116 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1c2faab3-ee, col_values=(('external_ids', {'iface-id': '1c2faab3-ee0b-4878-b090-b075bbb97543', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a5:02:f1', 'vm-uuid': 'f55cf124-b1d8-47e0-80c6-8b9dd2b3f743'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:48:41 compute-0 NetworkManager[48960]: <info>  [1768920521.1206] manager: (tap1c2faab3-ee): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/174)
Jan 20 14:48:41 compute-0 nova_compute[250018]: 2026-01-20 14:48:41.123 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:48:41 compute-0 ceph-mon[74360]: pgmap v1818: 321 pgs: 321 active+clean; 392 MiB data, 981 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 1006 KiB/s wr, 184 op/s
Jan 20 14:48:41 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/2633351073' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:48:41 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/4024521406' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:48:41 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/2563049413' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:48:41 compute-0 nova_compute[250018]: 2026-01-20 14:48:41.127 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:41 compute-0 nova_compute[250018]: 2026-01-20 14:48:41.128 250022 INFO os_vif [None req-dda63d3b-2d38-44fc-956e-7ef4e6119b15 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a5:02:f1,bridge_name='br-int',has_traffic_filtering=True,id=1c2faab3-ee0b-4878-b090-b075bbb97543,network=Network(3379e2b3-ffb2-4391-969b-c9dc51bfbe25),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c2faab3-ee')
Jan 20 14:48:41 compute-0 nova_compute[250018]: 2026-01-20 14:48:41.187 250022 DEBUG nova.virt.libvirt.driver [None req-dda63d3b-2d38-44fc-956e-7ef4e6119b15 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:48:41 compute-0 nova_compute[250018]: 2026-01-20 14:48:41.188 250022 DEBUG nova.virt.libvirt.driver [None req-dda63d3b-2d38-44fc-956e-7ef4e6119b15 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:48:41 compute-0 nova_compute[250018]: 2026-01-20 14:48:41.188 250022 DEBUG nova.virt.libvirt.driver [None req-dda63d3b-2d38-44fc-956e-7ef4e6119b15 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] No VIF found with MAC fa:16:3e:a5:02:f1, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 14:48:41 compute-0 nova_compute[250018]: 2026-01-20 14:48:41.188 250022 INFO nova.virt.libvirt.driver [None req-dda63d3b-2d38-44fc-956e-7ef4e6119b15 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: f55cf124-b1d8-47e0-80c6-8b9dd2b3f743] Using config drive
Jan 20 14:48:41 compute-0 nova_compute[250018]: 2026-01-20 14:48:41.214 250022 DEBUG nova.storage.rbd_utils [None req-dda63d3b-2d38-44fc-956e-7ef4e6119b15 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] rbd image f55cf124-b1d8-47e0-80c6-8b9dd2b3f743_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:48:41 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 20 14:48:41 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/140876588' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:48:41 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 20 14:48:41 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/140876588' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:48:41 compute-0 nova_compute[250018]: 2026-01-20 14:48:41.766 250022 INFO nova.virt.libvirt.driver [None req-dda63d3b-2d38-44fc-956e-7ef4e6119b15 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: f55cf124-b1d8-47e0-80c6-8b9dd2b3f743] Creating config drive at /var/lib/nova/instances/f55cf124-b1d8-47e0-80c6-8b9dd2b3f743/disk.config
Jan 20 14:48:41 compute-0 nova_compute[250018]: 2026-01-20 14:48:41.775 250022 DEBUG oslo_concurrency.processutils [None req-dda63d3b-2d38-44fc-956e-7ef4e6119b15 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f55cf124-b1d8-47e0-80c6-8b9dd2b3f743/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpry078px6 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:48:41 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1819: 321 pgs: 321 active+clean; 409 MiB data, 992 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 154 op/s
Jan 20 14:48:41 compute-0 nova_compute[250018]: 2026-01-20 14:48:41.914 250022 DEBUG oslo_concurrency.processutils [None req-dda63d3b-2d38-44fc-956e-7ef4e6119b15 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f55cf124-b1d8-47e0-80c6-8b9dd2b3f743/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpry078px6" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:48:41 compute-0 nova_compute[250018]: 2026-01-20 14:48:41.948 250022 DEBUG nova.storage.rbd_utils [None req-dda63d3b-2d38-44fc-956e-7ef4e6119b15 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] rbd image f55cf124-b1d8-47e0-80c6-8b9dd2b3f743_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:48:41 compute-0 nova_compute[250018]: 2026-01-20 14:48:41.952 250022 DEBUG oslo_concurrency.processutils [None req-dda63d3b-2d38-44fc-956e-7ef4e6119b15 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f55cf124-b1d8-47e0-80c6-8b9dd2b3f743/disk.config f55cf124-b1d8-47e0-80c6-8b9dd2b3f743_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:48:42 compute-0 nova_compute[250018]: 2026-01-20 14:48:42.096 250022 DEBUG oslo_concurrency.processutils [None req-dda63d3b-2d38-44fc-956e-7ef4e6119b15 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f55cf124-b1d8-47e0-80c6-8b9dd2b3f743/disk.config f55cf124-b1d8-47e0-80c6-8b9dd2b3f743_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:48:42 compute-0 nova_compute[250018]: 2026-01-20 14:48:42.097 250022 INFO nova.virt.libvirt.driver [None req-dda63d3b-2d38-44fc-956e-7ef4e6119b15 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: f55cf124-b1d8-47e0-80c6-8b9dd2b3f743] Deleting local config drive /var/lib/nova/instances/f55cf124-b1d8-47e0-80c6-8b9dd2b3f743/disk.config because it was imported into RBD.
Jan 20 14:48:42 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/140876588' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:48:42 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/140876588' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:48:42 compute-0 kernel: tap1c2faab3-ee: entered promiscuous mode
Jan 20 14:48:42 compute-0 NetworkManager[48960]: <info>  [1768920522.1488] manager: (tap1c2faab3-ee): new Tun device (/org/freedesktop/NetworkManager/Devices/175)
Jan 20 14:48:42 compute-0 ovn_controller[148666]: 2026-01-20T14:48:42Z|00343|binding|INFO|Claiming lport 1c2faab3-ee0b-4878-b090-b075bbb97543 for this chassis.
Jan 20 14:48:42 compute-0 ovn_controller[148666]: 2026-01-20T14:48:42Z|00344|binding|INFO|1c2faab3-ee0b-4878-b090-b075bbb97543: Claiming fa:16:3e:a5:02:f1 10.100.0.5
Jan 20 14:48:42 compute-0 nova_compute[250018]: 2026-01-20 14:48:42.153 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:42 compute-0 ceph-mon[74360]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #90. Immutable memtables: 0.
Jan 20 14:48:42 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:48:42.157109) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 14:48:42 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:856] [default] [JOB 51] Flushing memtable with next log file: 90
Jan 20 14:48:42 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768920522157171, "job": 51, "event": "flush_started", "num_memtables": 1, "num_entries": 570, "num_deletes": 251, "total_data_size": 590270, "memory_usage": 600824, "flush_reason": "Manual Compaction"}
Jan 20 14:48:42 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:885] [default] [JOB 51] Level-0 flush table #91: started
Jan 20 14:48:42 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:48:42.159 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a5:02:f1 10.100.0.5'], port_security=['fa:16:3e:a5:02:f1 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'f55cf124-b1d8-47e0-80c6-8b9dd2b3f743', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3379e2b3-ffb2-4391-969b-c9dc51bfbe25', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'acb30fbc0e3749e390d7f867060b5a2a', 'neutron:revision_number': '2', 'neutron:security_group_ids': '19fab802-7db8-4c89-8f8e-8dcfc14d4627', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a0e287ba-f88b-46f5-bb7f-3cc2a74be88e, chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=1c2faab3-ee0b-4878-b090-b075bbb97543) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:48:42 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:48:42.161 160071 INFO neutron.agent.ovn.metadata.agent [-] Port 1c2faab3-ee0b-4878-b090-b075bbb97543 in datapath 3379e2b3-ffb2-4391-969b-c9dc51bfbe25 bound to our chassis
Jan 20 14:48:42 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:48:42.163 160071 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3379e2b3-ffb2-4391-969b-c9dc51bfbe25
Jan 20 14:48:42 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768920522164605, "cf_name": "default", "job": 51, "event": "table_file_creation", "file_number": 91, "file_size": 572733, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 40996, "largest_seqno": 41565, "table_properties": {"data_size": 569651, "index_size": 990, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 7616, "raw_average_key_size": 19, "raw_value_size": 563367, "raw_average_value_size": 1444, "num_data_blocks": 43, "num_entries": 390, "num_filter_entries": 390, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768920495, "oldest_key_time": 1768920495, "file_creation_time": 1768920522, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1688fb6f-f397-4aac-a0e8-874ff91d97a7", "db_session_id": "5V3N2TVXYZBCXP55EZHK", "orig_file_number": 91, "seqno_to_time_mapping": "N/A"}}
Jan 20 14:48:42 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 51] Flush lasted 7535 microseconds, and 2402 cpu microseconds.
Jan 20 14:48:42 compute-0 ceph-mon[74360]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 14:48:42 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:48:42.164647) [db/flush_job.cc:967] [default] [JOB 51] Level-0 flush table #91: 572733 bytes OK
Jan 20 14:48:42 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:48:42.164665) [db/memtable_list.cc:519] [default] Level-0 commit table #91 started
Jan 20 14:48:42 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:48:42.166433) [db/memtable_list.cc:722] [default] Level-0 commit table #91: memtable #1 done
Jan 20 14:48:42 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:48:42.166449) EVENT_LOG_v1 {"time_micros": 1768920522166444, "job": 51, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 14:48:42 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:48:42.166465) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 14:48:42 compute-0 ceph-mon[74360]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 51] Try to delete WAL files size 587097, prev total WAL file size 587097, number of live WAL files 2.
Jan 20 14:48:42 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000087.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:48:42 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:48:42.167102) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033353134' seq:72057594037927935, type:22 .. '7061786F730033373636' seq:0, type:0; will stop at (end)
Jan 20 14:48:42 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 52] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 14:48:42 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 51 Base level 0, inputs: [91(559KB)], [89(10038KB)]
Jan 20 14:48:42 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768920522167163, "job": 52, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [91], "files_L6": [89], "score": -1, "input_data_size": 10852034, "oldest_snapshot_seqno": -1}
Jan 20 14:48:42 compute-0 ovn_controller[148666]: 2026-01-20T14:48:42Z|00345|binding|INFO|Setting lport 1c2faab3-ee0b-4878-b090-b075bbb97543 ovn-installed in OVS
Jan 20 14:48:42 compute-0 ovn_controller[148666]: 2026-01-20T14:48:42Z|00346|binding|INFO|Setting lport 1c2faab3-ee0b-4878-b090-b075bbb97543 up in Southbound
Jan 20 14:48:42 compute-0 nova_compute[250018]: 2026-01-20 14:48:42.172 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:42 compute-0 nova_compute[250018]: 2026-01-20 14:48:42.173 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:42 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:48:42.177 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[723942f4-b48c-43b7-a257-fccad2a519f8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:48:42 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:48:42.178 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3379e2b3-f1 in ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 14:48:42 compute-0 systemd-udevd[310115]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:48:42 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:48:42.180 257604 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3379e2b3-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 14:48:42 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:48:42.180 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[973bac7d-5487-471b-9a02-97486fded746]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:48:42 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:48:42.183 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[2d6f311d-0d11-4987-8a3a-0dceaac4e934]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:48:42 compute-0 systemd-machined[216401]: New machine qemu-45-instance-00000068.
Jan 20 14:48:42 compute-0 NetworkManager[48960]: <info>  [1768920522.1936] device (tap1c2faab3-ee): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 14:48:42 compute-0 NetworkManager[48960]: <info>  [1768920522.1943] device (tap1c2faab3-ee): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 14:48:42 compute-0 systemd[1]: Started Virtual Machine qemu-45-instance-00000068.
Jan 20 14:48:42 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:48:42.198 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[78701553-bd5b-44aa-8915-ea644c1bbff4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:48:42 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:48:42.221 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[7749eab6-41fd-4390-9cc2-ada3ad9319fc]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:48:42 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 52] Generated table #92: 6687 keys, 8960567 bytes, temperature: kUnknown
Jan 20 14:48:42 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768920522228267, "cf_name": "default", "job": 52, "event": "table_file_creation", "file_number": 92, "file_size": 8960567, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8917936, "index_size": 24791, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16773, "raw_key_size": 173801, "raw_average_key_size": 25, "raw_value_size": 8800327, "raw_average_value_size": 1316, "num_data_blocks": 976, "num_entries": 6687, "num_filter_entries": 6687, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768917305, "oldest_key_time": 0, "file_creation_time": 1768920522, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1688fb6f-f397-4aac-a0e8-874ff91d97a7", "db_session_id": "5V3N2TVXYZBCXP55EZHK", "orig_file_number": 92, "seqno_to_time_mapping": "N/A"}}
Jan 20 14:48:42 compute-0 ceph-mon[74360]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 14:48:42 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:48:42.228832) [db/compaction/compaction_job.cc:1663] [default] [JOB 52] Compacted 1@0 + 1@6 files to L6 => 8960567 bytes
Jan 20 14:48:42 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:48:42.230133) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 176.6 rd, 145.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 9.8 +0.0 blob) out(8.5 +0.0 blob), read-write-amplify(34.6) write-amplify(15.6) OK, records in: 7201, records dropped: 514 output_compression: NoCompression
Jan 20 14:48:42 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:48:42.230159) EVENT_LOG_v1 {"time_micros": 1768920522230147, "job": 52, "event": "compaction_finished", "compaction_time_micros": 61464, "compaction_time_cpu_micros": 27901, "output_level": 6, "num_output_files": 1, "total_output_size": 8960567, "num_input_records": 7201, "num_output_records": 6687, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 14:48:42 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000091.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:48:42 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768920522230710, "job": 52, "event": "table_file_deletion", "file_number": 91}
Jan 20 14:48:42 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000089.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:48:42 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768920522233508, "job": 52, "event": "table_file_deletion", "file_number": 89}
Jan 20 14:48:42 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:48:42.166991) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:48:42 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:48:42.233634) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:48:42 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:48:42.233640) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:48:42 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:48:42.233642) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:48:42 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:48:42.233644) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:48:42 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:48:42.233646) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:48:42 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:48:42.252 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[79ecab72-a7f2-4d46-a0f2-7c746c073e48]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:48:42 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:48:42.257 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[e6916d19-2f03-4374-945a-f8775463b776]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:48:42 compute-0 NetworkManager[48960]: <info>  [1768920522.2587] manager: (tap3379e2b3-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/176)
Jan 20 14:48:42 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:48:42.291 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[1cebb474-6f07-4444-8e98-98a1ccd32038]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:48:42 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:48:42.295 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[010f6e38-9a96-4386-8954-96a754f844d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:48:42 compute-0 nova_compute[250018]: 2026-01-20 14:48:42.301 250022 DEBUG nova.network.neutron [req-aa772249-fe3c-4687-9cf6-3009e08cea66 req-4201e242-f53e-4210-bc95-ba1e06017ece 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f55cf124-b1d8-47e0-80c6-8b9dd2b3f743] Updated VIF entry in instance network info cache for port 1c2faab3-ee0b-4878-b090-b075bbb97543. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 14:48:42 compute-0 nova_compute[250018]: 2026-01-20 14:48:42.302 250022 DEBUG nova.network.neutron [req-aa772249-fe3c-4687-9cf6-3009e08cea66 req-4201e242-f53e-4210-bc95-ba1e06017ece 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f55cf124-b1d8-47e0-80c6-8b9dd2b3f743] Updating instance_info_cache with network_info: [{"id": "1c2faab3-ee0b-4878-b090-b075bbb97543", "address": "fa:16:3e:a5:02:f1", "network": {"id": "3379e2b3-ffb2-4391-969b-c9dc51bfbe25", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1112843240-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acb30fbc0e3749e390d7f867060b5a2a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c2faab3-ee", "ovs_interfaceid": "1c2faab3-ee0b-4878-b090-b075bbb97543", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:48:42 compute-0 NetworkManager[48960]: <info>  [1768920522.3196] device (tap3379e2b3-f0): carrier: link connected
Jan 20 14:48:42 compute-0 nova_compute[250018]: 2026-01-20 14:48:42.323 250022 DEBUG oslo_concurrency.lockutils [req-aa772249-fe3c-4687-9cf6-3009e08cea66 req-4201e242-f53e-4210-bc95-ba1e06017ece 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-f55cf124-b1d8-47e0-80c6-8b9dd2b3f743" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:48:42 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:48:42.325 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[8f579313-1619-4464-a995-d0cb3044f925]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:48:42 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:48:42.342 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[8a7b239a-8a91-44c9-ab02-6b3cf58fab67]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3379e2b3-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f1:86:fe'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 113], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 646304, 'reachable_time': 17260, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 310147, 'error': None, 'target': 'ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:48:42 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:48:42.356 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[c33f1fb0-ea67-4e70-b549-32da705a08ec]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef1:86fe'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 646304, 'tstamp': 646304}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 310148, 'error': None, 'target': 'ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:48:42 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:48:42.373 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[294b6076-5778-4ec3-86ef-3037bbe4184c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3379e2b3-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f1:86:fe'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 113], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 646304, 'reachable_time': 17260, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 310149, 'error': None, 'target': 'ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:48:42 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:48:42.405 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[5bdec844-688d-4766-b393-712c138412b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:48:42 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:48:42.473 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[dd63f9a5-f052-4a66-8645-efab5ee48a32]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:48:42 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:48:42.474 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3379e2b3-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:48:42 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:48:42.475 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:48:42 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:48:42.475 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3379e2b3-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:48:42 compute-0 nova_compute[250018]: 2026-01-20 14:48:42.475 250022 DEBUG nova.compute.manager [req-74e31d80-f8e2-4bd9-8607-3f50e07d6582 req-1477eb1a-47da-48fe-bc5c-19f70f45ee56 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f55cf124-b1d8-47e0-80c6-8b9dd2b3f743] Received event network-vif-plugged-1c2faab3-ee0b-4878-b090-b075bbb97543 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:48:42 compute-0 nova_compute[250018]: 2026-01-20 14:48:42.476 250022 DEBUG oslo_concurrency.lockutils [req-74e31d80-f8e2-4bd9-8607-3f50e07d6582 req-1477eb1a-47da-48fe-bc5c-19f70f45ee56 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "f55cf124-b1d8-47e0-80c6-8b9dd2b3f743-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:48:42 compute-0 nova_compute[250018]: 2026-01-20 14:48:42.476 250022 DEBUG oslo_concurrency.lockutils [req-74e31d80-f8e2-4bd9-8607-3f50e07d6582 req-1477eb1a-47da-48fe-bc5c-19f70f45ee56 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "f55cf124-b1d8-47e0-80c6-8b9dd2b3f743-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:48:42 compute-0 nova_compute[250018]: 2026-01-20 14:48:42.477 250022 DEBUG oslo_concurrency.lockutils [req-74e31d80-f8e2-4bd9-8607-3f50e07d6582 req-1477eb1a-47da-48fe-bc5c-19f70f45ee56 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "f55cf124-b1d8-47e0-80c6-8b9dd2b3f743-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:48:42 compute-0 nova_compute[250018]: 2026-01-20 14:48:42.477 250022 DEBUG nova.compute.manager [req-74e31d80-f8e2-4bd9-8607-3f50e07d6582 req-1477eb1a-47da-48fe-bc5c-19f70f45ee56 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f55cf124-b1d8-47e0-80c6-8b9dd2b3f743] Processing event network-vif-plugged-1c2faab3-ee0b-4878-b090-b075bbb97543 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 20 14:48:42 compute-0 NetworkManager[48960]: <info>  [1768920522.4775] manager: (tap3379e2b3-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/177)
Jan 20 14:48:42 compute-0 nova_compute[250018]: 2026-01-20 14:48:42.477 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:42 compute-0 kernel: tap3379e2b3-f0: entered promiscuous mode
Jan 20 14:48:42 compute-0 nova_compute[250018]: 2026-01-20 14:48:42.480 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:42 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:48:42.481 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3379e2b3-f0, col_values=(('external_ids', {'iface-id': 'b32ddf23-a8dd-4e6d-a410-ccb24b214d35'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:48:42 compute-0 nova_compute[250018]: 2026-01-20 14:48:42.482 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:42 compute-0 ovn_controller[148666]: 2026-01-20T14:48:42Z|00347|binding|INFO|Releasing lport b32ddf23-a8dd-4e6d-a410-ccb24b214d35 from this chassis (sb_readonly=0)
Jan 20 14:48:42 compute-0 nova_compute[250018]: 2026-01-20 14:48:42.484 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:42 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:48:42.484 160071 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3379e2b3-ffb2-4391-969b-c9dc51bfbe25.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3379e2b3-ffb2-4391-969b-c9dc51bfbe25.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 14:48:42 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:48:42.486 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[6ebb22e9-9ff7-4923-a6f1-f81c9f5d71dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:48:42 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:48:42.486 160071 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 14:48:42 compute-0 ovn_metadata_agent[160049]: global
Jan 20 14:48:42 compute-0 ovn_metadata_agent[160049]:     log         /dev/log local0 debug
Jan 20 14:48:42 compute-0 ovn_metadata_agent[160049]:     log-tag     haproxy-metadata-proxy-3379e2b3-ffb2-4391-969b-c9dc51bfbe25
Jan 20 14:48:42 compute-0 ovn_metadata_agent[160049]:     user        root
Jan 20 14:48:42 compute-0 ovn_metadata_agent[160049]:     group       root
Jan 20 14:48:42 compute-0 ovn_metadata_agent[160049]:     maxconn     1024
Jan 20 14:48:42 compute-0 ovn_metadata_agent[160049]:     pidfile     /var/lib/neutron/external/pids/3379e2b3-ffb2-4391-969b-c9dc51bfbe25.pid.haproxy
Jan 20 14:48:42 compute-0 ovn_metadata_agent[160049]:     daemon
Jan 20 14:48:42 compute-0 ovn_metadata_agent[160049]: 
Jan 20 14:48:42 compute-0 ovn_metadata_agent[160049]: defaults
Jan 20 14:48:42 compute-0 ovn_metadata_agent[160049]:     log global
Jan 20 14:48:42 compute-0 ovn_metadata_agent[160049]:     mode http
Jan 20 14:48:42 compute-0 ovn_metadata_agent[160049]:     option httplog
Jan 20 14:48:42 compute-0 ovn_metadata_agent[160049]:     option dontlognull
Jan 20 14:48:42 compute-0 ovn_metadata_agent[160049]:     option http-server-close
Jan 20 14:48:42 compute-0 ovn_metadata_agent[160049]:     option forwardfor
Jan 20 14:48:42 compute-0 ovn_metadata_agent[160049]:     retries                 3
Jan 20 14:48:42 compute-0 ovn_metadata_agent[160049]:     timeout http-request    30s
Jan 20 14:48:42 compute-0 ovn_metadata_agent[160049]:     timeout connect         30s
Jan 20 14:48:42 compute-0 ovn_metadata_agent[160049]:     timeout client          32s
Jan 20 14:48:42 compute-0 ovn_metadata_agent[160049]:     timeout server          32s
Jan 20 14:48:42 compute-0 ovn_metadata_agent[160049]:     timeout http-keep-alive 30s
Jan 20 14:48:42 compute-0 ovn_metadata_agent[160049]: 
Jan 20 14:48:42 compute-0 ovn_metadata_agent[160049]: 
Jan 20 14:48:42 compute-0 ovn_metadata_agent[160049]: listen listener
Jan 20 14:48:42 compute-0 ovn_metadata_agent[160049]:     bind 169.254.169.254:80
Jan 20 14:48:42 compute-0 ovn_metadata_agent[160049]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 14:48:42 compute-0 ovn_metadata_agent[160049]:     http-request add-header X-OVN-Network-ID 3379e2b3-ffb2-4391-969b-c9dc51bfbe25
Jan 20 14:48:42 compute-0 ovn_metadata_agent[160049]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 14:48:42 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:48:42.487 160071 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25', 'env', 'PROCESS_TAG=haproxy-3379e2b3-ffb2-4391-969b-c9dc51bfbe25', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3379e2b3-ffb2-4391-969b-c9dc51bfbe25.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 14:48:42 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:48:42 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:48:42 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:48:42.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:48:42 compute-0 nova_compute[250018]: 2026-01-20 14:48:42.535 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:42 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:48:42 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:48:42 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:48:42.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:48:42 compute-0 podman[310181]: 2026-01-20 14:48:42.950556572 +0000 UTC m=+0.048482604 container create 0ed5d8351c0832c6ad512d50858121a61aa9ea15911f7e3c0a13acbbce4bcac9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 14:48:42 compute-0 systemd[1]: Started libpod-conmon-0ed5d8351c0832c6ad512d50858121a61aa9ea15911f7e3c0a13acbbce4bcac9.scope.
Jan 20 14:48:42 compute-0 nova_compute[250018]: 2026-01-20 14:48:42.990 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:43 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:48:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38f253e21d6cfc8d69db41d861d7a4ea0f059d215c210699ff3dee306c841900/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 14:48:43 compute-0 podman[310181]: 2026-01-20 14:48:42.920776455 +0000 UTC m=+0.018702517 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 14:48:43 compute-0 podman[310181]: 2026-01-20 14:48:43.024947898 +0000 UTC m=+0.122873950 container init 0ed5d8351c0832c6ad512d50858121a61aa9ea15911f7e3c0a13acbbce4bcac9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:48:43 compute-0 podman[310181]: 2026-01-20 14:48:43.029340148 +0000 UTC m=+0.127266180 container start 0ed5d8351c0832c6ad512d50858121a61aa9ea15911f7e3c0a13acbbce4bcac9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 20 14:48:43 compute-0 neutron-haproxy-ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25[310197]: [NOTICE]   (310201) : New worker (310203) forked
Jan 20 14:48:43 compute-0 neutron-haproxy-ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25[310197]: [NOTICE]   (310201) : Loading success.
Jan 20 14:48:43 compute-0 nova_compute[250018]: 2026-01-20 14:48:43.069 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:48:43 compute-0 nova_compute[250018]: 2026-01-20 14:48:43.069 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 14:48:43 compute-0 nova_compute[250018]: 2026-01-20 14:48:43.070 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 14:48:43 compute-0 nova_compute[250018]: 2026-01-20 14:48:43.089 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] [instance: f55cf124-b1d8-47e0-80c6-8b9dd2b3f743] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 20 14:48:43 compute-0 nova_compute[250018]: 2026-01-20 14:48:43.089 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 20 14:48:43 compute-0 ceph-mon[74360]: pgmap v1819: 321 pgs: 321 active+clean; 409 MiB data, 992 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 154 op/s
Jan 20 14:48:43 compute-0 nova_compute[250018]: 2026-01-20 14:48:43.479 250022 DEBUG nova.compute.manager [None req-dda63d3b-2d38-44fc-956e-7ef4e6119b15 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: f55cf124-b1d8-47e0-80c6-8b9dd2b3f743] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 14:48:43 compute-0 nova_compute[250018]: 2026-01-20 14:48:43.480 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768920523.479206, f55cf124-b1d8-47e0-80c6-8b9dd2b3f743 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:48:43 compute-0 nova_compute[250018]: 2026-01-20 14:48:43.480 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: f55cf124-b1d8-47e0-80c6-8b9dd2b3f743] VM Started (Lifecycle Event)
Jan 20 14:48:43 compute-0 nova_compute[250018]: 2026-01-20 14:48:43.484 250022 DEBUG nova.virt.libvirt.driver [None req-dda63d3b-2d38-44fc-956e-7ef4e6119b15 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: f55cf124-b1d8-47e0-80c6-8b9dd2b3f743] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 14:48:43 compute-0 nova_compute[250018]: 2026-01-20 14:48:43.487 250022 INFO nova.virt.libvirt.driver [-] [instance: f55cf124-b1d8-47e0-80c6-8b9dd2b3f743] Instance spawned successfully.
Jan 20 14:48:43 compute-0 nova_compute[250018]: 2026-01-20 14:48:43.487 250022 DEBUG nova.virt.libvirt.driver [None req-dda63d3b-2d38-44fc-956e-7ef4e6119b15 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: f55cf124-b1d8-47e0-80c6-8b9dd2b3f743] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 14:48:43 compute-0 nova_compute[250018]: 2026-01-20 14:48:43.502 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: f55cf124-b1d8-47e0-80c6-8b9dd2b3f743] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:48:43 compute-0 nova_compute[250018]: 2026-01-20 14:48:43.505 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: f55cf124-b1d8-47e0-80c6-8b9dd2b3f743] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:48:43 compute-0 nova_compute[250018]: 2026-01-20 14:48:43.529 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: f55cf124-b1d8-47e0-80c6-8b9dd2b3f743] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:48:43 compute-0 nova_compute[250018]: 2026-01-20 14:48:43.530 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768920523.4803202, f55cf124-b1d8-47e0-80c6-8b9dd2b3f743 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:48:43 compute-0 nova_compute[250018]: 2026-01-20 14:48:43.530 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: f55cf124-b1d8-47e0-80c6-8b9dd2b3f743] VM Paused (Lifecycle Event)
Jan 20 14:48:43 compute-0 nova_compute[250018]: 2026-01-20 14:48:43.537 250022 DEBUG nova.virt.libvirt.driver [None req-dda63d3b-2d38-44fc-956e-7ef4e6119b15 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: f55cf124-b1d8-47e0-80c6-8b9dd2b3f743] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:48:43 compute-0 nova_compute[250018]: 2026-01-20 14:48:43.538 250022 DEBUG nova.virt.libvirt.driver [None req-dda63d3b-2d38-44fc-956e-7ef4e6119b15 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: f55cf124-b1d8-47e0-80c6-8b9dd2b3f743] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:48:43 compute-0 nova_compute[250018]: 2026-01-20 14:48:43.539 250022 DEBUG nova.virt.libvirt.driver [None req-dda63d3b-2d38-44fc-956e-7ef4e6119b15 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: f55cf124-b1d8-47e0-80c6-8b9dd2b3f743] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:48:43 compute-0 nova_compute[250018]: 2026-01-20 14:48:43.539 250022 DEBUG nova.virt.libvirt.driver [None req-dda63d3b-2d38-44fc-956e-7ef4e6119b15 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: f55cf124-b1d8-47e0-80c6-8b9dd2b3f743] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:48:43 compute-0 nova_compute[250018]: 2026-01-20 14:48:43.540 250022 DEBUG nova.virt.libvirt.driver [None req-dda63d3b-2d38-44fc-956e-7ef4e6119b15 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: f55cf124-b1d8-47e0-80c6-8b9dd2b3f743] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:48:43 compute-0 nova_compute[250018]: 2026-01-20 14:48:43.541 250022 DEBUG nova.virt.libvirt.driver [None req-dda63d3b-2d38-44fc-956e-7ef4e6119b15 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: f55cf124-b1d8-47e0-80c6-8b9dd2b3f743] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:48:43 compute-0 nova_compute[250018]: 2026-01-20 14:48:43.585 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: f55cf124-b1d8-47e0-80c6-8b9dd2b3f743] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:48:43 compute-0 nova_compute[250018]: 2026-01-20 14:48:43.590 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768920523.4838154, f55cf124-b1d8-47e0-80c6-8b9dd2b3f743 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:48:43 compute-0 nova_compute[250018]: 2026-01-20 14:48:43.590 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: f55cf124-b1d8-47e0-80c6-8b9dd2b3f743] VM Resumed (Lifecycle Event)
Jan 20 14:48:43 compute-0 nova_compute[250018]: 2026-01-20 14:48:43.629 250022 INFO nova.compute.manager [None req-dda63d3b-2d38-44fc-956e-7ef4e6119b15 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: f55cf124-b1d8-47e0-80c6-8b9dd2b3f743] Took 9.62 seconds to spawn the instance on the hypervisor.
Jan 20 14:48:43 compute-0 nova_compute[250018]: 2026-01-20 14:48:43.630 250022 DEBUG nova.compute.manager [None req-dda63d3b-2d38-44fc-956e-7ef4e6119b15 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: f55cf124-b1d8-47e0-80c6-8b9dd2b3f743] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:48:43 compute-0 nova_compute[250018]: 2026-01-20 14:48:43.632 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: f55cf124-b1d8-47e0-80c6-8b9dd2b3f743] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:48:43 compute-0 nova_compute[250018]: 2026-01-20 14:48:43.642 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: f55cf124-b1d8-47e0-80c6-8b9dd2b3f743] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:48:43 compute-0 nova_compute[250018]: 2026-01-20 14:48:43.684 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: f55cf124-b1d8-47e0-80c6-8b9dd2b3f743] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:48:43 compute-0 nova_compute[250018]: 2026-01-20 14:48:43.722 250022 INFO nova.compute.manager [None req-dda63d3b-2d38-44fc-956e-7ef4e6119b15 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: f55cf124-b1d8-47e0-80c6-8b9dd2b3f743] Took 10.71 seconds to build instance.
Jan 20 14:48:43 compute-0 nova_compute[250018]: 2026-01-20 14:48:43.758 250022 DEBUG oslo_concurrency.lockutils [None req-dda63d3b-2d38-44fc-956e-7ef4e6119b15 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lock "f55cf124-b1d8-47e0-80c6-8b9dd2b3f743" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.858s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:48:43 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1820: 321 pgs: 321 active+clean; 382 MiB data, 979 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 1.8 MiB/s wr, 96 op/s
Jan 20 14:48:44 compute-0 nova_compute[250018]: 2026-01-20 14:48:44.430 250022 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768920509.4292147, fc5c09c4-3e90-4b31-8610-2e555b7e2406 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:48:44 compute-0 nova_compute[250018]: 2026-01-20 14:48:44.431 250022 INFO nova.compute.manager [-] [instance: fc5c09c4-3e90-4b31-8610-2e555b7e2406] VM Stopped (Lifecycle Event)
Jan 20 14:48:44 compute-0 nova_compute[250018]: 2026-01-20 14:48:44.461 250022 DEBUG nova.compute.manager [None req-d80a3f11-0e51-4958-a6fd-88ffaecbecfa - - - - - -] [instance: fc5c09c4-3e90-4b31-8610-2e555b7e2406] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:48:44 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:48:44 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:48:44 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:48:44.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:48:44 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:48:44 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:48:44 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:48:44 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:48:44.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:48:44 compute-0 nova_compute[250018]: 2026-01-20 14:48:44.635 250022 DEBUG nova.compute.manager [req-22750bee-5417-479d-a4e6-e7852409aa4d req-142d55aa-4302-4f7d-a3d3-7a7725e4d3a0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f55cf124-b1d8-47e0-80c6-8b9dd2b3f743] Received event network-vif-plugged-1c2faab3-ee0b-4878-b090-b075bbb97543 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:48:44 compute-0 nova_compute[250018]: 2026-01-20 14:48:44.635 250022 DEBUG oslo_concurrency.lockutils [req-22750bee-5417-479d-a4e6-e7852409aa4d req-142d55aa-4302-4f7d-a3d3-7a7725e4d3a0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "f55cf124-b1d8-47e0-80c6-8b9dd2b3f743-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:48:44 compute-0 nova_compute[250018]: 2026-01-20 14:48:44.636 250022 DEBUG oslo_concurrency.lockutils [req-22750bee-5417-479d-a4e6-e7852409aa4d req-142d55aa-4302-4f7d-a3d3-7a7725e4d3a0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "f55cf124-b1d8-47e0-80c6-8b9dd2b3f743-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:48:44 compute-0 nova_compute[250018]: 2026-01-20 14:48:44.636 250022 DEBUG oslo_concurrency.lockutils [req-22750bee-5417-479d-a4e6-e7852409aa4d req-142d55aa-4302-4f7d-a3d3-7a7725e4d3a0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "f55cf124-b1d8-47e0-80c6-8b9dd2b3f743-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:48:44 compute-0 nova_compute[250018]: 2026-01-20 14:48:44.637 250022 DEBUG nova.compute.manager [req-22750bee-5417-479d-a4e6-e7852409aa4d req-142d55aa-4302-4f7d-a3d3-7a7725e4d3a0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f55cf124-b1d8-47e0-80c6-8b9dd2b3f743] No waiting events found dispatching network-vif-plugged-1c2faab3-ee0b-4878-b090-b075bbb97543 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:48:44 compute-0 nova_compute[250018]: 2026-01-20 14:48:44.637 250022 WARNING nova.compute.manager [req-22750bee-5417-479d-a4e6-e7852409aa4d req-142d55aa-4302-4f7d-a3d3-7a7725e4d3a0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f55cf124-b1d8-47e0-80c6-8b9dd2b3f743] Received unexpected event network-vif-plugged-1c2faab3-ee0b-4878-b090-b075bbb97543 for instance with vm_state active and task_state None.
Jan 20 14:48:45 compute-0 ceph-mon[74360]: pgmap v1820: 321 pgs: 321 active+clean; 382 MiB data, 979 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 1.8 MiB/s wr, 96 op/s
Jan 20 14:48:45 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1821: 321 pgs: 321 active+clean; 274 MiB data, 926 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 1.8 MiB/s wr, 126 op/s
Jan 20 14:48:46 compute-0 nova_compute[250018]: 2026-01-20 14:48:46.119 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:46 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/2739244399' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:48:46 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:48:46 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:48:46 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:48:46.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:48:46 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:48:46 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:48:46 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:48:46.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:48:47 compute-0 ceph-mon[74360]: pgmap v1821: 321 pgs: 321 active+clean; 274 MiB data, 926 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 1.8 MiB/s wr, 126 op/s
Jan 20 14:48:47 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1822: 321 pgs: 321 active+clean; 258 MiB data, 909 MiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 2.2 MiB/s wr, 160 op/s
Jan 20 14:48:47 compute-0 nova_compute[250018]: 2026-01-20 14:48:47.995 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:48 compute-0 nova_compute[250018]: 2026-01-20 14:48:48.065 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:48:48 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:48:48 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:48:48 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:48:48.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:48:48 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:48:48 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:48:48 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:48:48.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:48:49 compute-0 sudo[310257]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:48:49 compute-0 sudo[310257]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:48:49 compute-0 sudo[310257]: pam_unix(sudo:session): session closed for user root
Jan 20 14:48:49 compute-0 sudo[310282]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:48:49 compute-0 sudo[310282]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:48:49 compute-0 sudo[310282]: pam_unix(sudo:session): session closed for user root
Jan 20 14:48:49 compute-0 ceph-mon[74360]: pgmap v1822: 321 pgs: 321 active+clean; 258 MiB data, 909 MiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 2.2 MiB/s wr, 160 op/s
Jan 20 14:48:49 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/1043995877' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:48:49 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/1043995877' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:48:49 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:48:49 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1823: 321 pgs: 321 active+clean; 273 MiB data, 909 MiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 2.1 MiB/s wr, 175 op/s
Jan 20 14:48:50 compute-0 nova_compute[250018]: 2026-01-20 14:48:50.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:48:50 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:48:50 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:48:50 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:48:50.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:48:50 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:48:50 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:48:50 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:48:50.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:48:50 compute-0 nova_compute[250018]: 2026-01-20 14:48:50.963 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:48:51 compute-0 nova_compute[250018]: 2026-01-20 14:48:51.121 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:51 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e241 do_prune osdmap full prune enabled
Jan 20 14:48:51 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e242 e242: 3 total, 3 up, 3 in
Jan 20 14:48:51 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e242: 3 total, 3 up, 3 in
Jan 20 14:48:51 compute-0 ceph-mon[74360]: pgmap v1823: 321 pgs: 321 active+clean; 273 MiB data, 909 MiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 2.1 MiB/s wr, 175 op/s
Jan 20 14:48:51 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1825: 321 pgs: 321 active+clean; 295 MiB data, 920 MiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 2.1 MiB/s wr, 231 op/s
Jan 20 14:48:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:48:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:48:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:48:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:48:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:48:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:48:52 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:48:52 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:48:52 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:48:52.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:48:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Optimize plan auto_2026-01-20_14:48:52
Jan 20 14:48:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 14:48:52 compute-0 ceph-mgr[74653]: [balancer INFO root] do_upmap
Jan 20 14:48:52 compute-0 ceph-mgr[74653]: [balancer INFO root] pools ['backups', 'cephfs.cephfs.meta', 'default.rgw.control', 'cephfs.cephfs.data', 'volumes', '.mgr', 'default.rgw.meta', 'vms', 'default.rgw.log', '.rgw.root', 'images']
Jan 20 14:48:52 compute-0 ceph-mgr[74653]: [balancer INFO root] prepared 0/10 changes
Jan 20 14:48:52 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:48:52 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:48:52 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:48:52.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:48:52 compute-0 nova_compute[250018]: 2026-01-20 14:48:52.606 250022 INFO nova.compute.manager [None req-0ee50b09-3218-49d0-86f0-21d46a1ab377 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: f55cf124-b1d8-47e0-80c6-8b9dd2b3f743] Rebuilding instance
Jan 20 14:48:52 compute-0 ceph-mon[74360]: osdmap e242: 3 total, 3 up, 3 in
Jan 20 14:48:52 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/1726552496' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:48:52 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/1290154729' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:48:52 compute-0 nova_compute[250018]: 2026-01-20 14:48:52.997 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:53 compute-0 nova_compute[250018]: 2026-01-20 14:48:53.018 250022 DEBUG nova.objects.instance [None req-0ee50b09-3218-49d0-86f0-21d46a1ab377 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lazy-loading 'trusted_certs' on Instance uuid f55cf124-b1d8-47e0-80c6-8b9dd2b3f743 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:48:53 compute-0 nova_compute[250018]: 2026-01-20 14:48:53.053 250022 DEBUG nova.compute.manager [None req-0ee50b09-3218-49d0-86f0-21d46a1ab377 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: f55cf124-b1d8-47e0-80c6-8b9dd2b3f743] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:48:53 compute-0 nova_compute[250018]: 2026-01-20 14:48:53.121 250022 DEBUG nova.objects.instance [None req-0ee50b09-3218-49d0-86f0-21d46a1ab377 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lazy-loading 'pci_requests' on Instance uuid f55cf124-b1d8-47e0-80c6-8b9dd2b3f743 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:48:53 compute-0 nova_compute[250018]: 2026-01-20 14:48:53.148 250022 DEBUG nova.objects.instance [None req-0ee50b09-3218-49d0-86f0-21d46a1ab377 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lazy-loading 'pci_devices' on Instance uuid f55cf124-b1d8-47e0-80c6-8b9dd2b3f743 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:48:53 compute-0 nova_compute[250018]: 2026-01-20 14:48:53.166 250022 DEBUG nova.objects.instance [None req-0ee50b09-3218-49d0-86f0-21d46a1ab377 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lazy-loading 'resources' on Instance uuid f55cf124-b1d8-47e0-80c6-8b9dd2b3f743 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:48:53 compute-0 nova_compute[250018]: 2026-01-20 14:48:53.192 250022 DEBUG nova.objects.instance [None req-0ee50b09-3218-49d0-86f0-21d46a1ab377 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lazy-loading 'migration_context' on Instance uuid f55cf124-b1d8-47e0-80c6-8b9dd2b3f743 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:48:53 compute-0 nova_compute[250018]: 2026-01-20 14:48:53.207 250022 DEBUG nova.objects.instance [None req-0ee50b09-3218-49d0-86f0-21d46a1ab377 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: f55cf124-b1d8-47e0-80c6-8b9dd2b3f743] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Jan 20 14:48:53 compute-0 nova_compute[250018]: 2026-01-20 14:48:53.210 250022 DEBUG nova.virt.libvirt.driver [None req-0ee50b09-3218-49d0-86f0-21d46a1ab377 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: f55cf124-b1d8-47e0-80c6-8b9dd2b3f743] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Jan 20 14:48:53 compute-0 ceph-mon[74360]: pgmap v1825: 321 pgs: 321 active+clean; 295 MiB data, 920 MiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 2.1 MiB/s wr, 231 op/s
Jan 20 14:48:53 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/344938960' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:48:53 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1826: 321 pgs: 321 active+clean; 280 MiB data, 911 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.1 MiB/s wr, 262 op/s
Jan 20 14:48:54 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:48:54 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:48:54 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:48:54 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:48:54.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:48:54 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:48:54 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:48:54 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:48:54.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:48:54 compute-0 ceph-mon[74360]: pgmap v1826: 321 pgs: 321 active+clean; 280 MiB data, 911 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.1 MiB/s wr, 262 op/s
Jan 20 14:48:55 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1827: 321 pgs: 321 active+clean; 215 MiB data, 877 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 369 op/s
Jan 20 14:48:56 compute-0 nova_compute[250018]: 2026-01-20 14:48:56.123 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:56 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:48:56 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:48:56 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:48:56.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:48:56 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:48:56 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:48:56 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:48:56.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:48:56 compute-0 ceph-mon[74360]: pgmap v1827: 321 pgs: 321 active+clean; 215 MiB data, 877 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 369 op/s
Jan 20 14:48:57 compute-0 ovn_controller[148666]: 2026-01-20T14:48:57Z|00041|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:a5:02:f1 10.100.0.5
Jan 20 14:48:57 compute-0 ovn_controller[148666]: 2026-01-20T14:48:57Z|00042|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a5:02:f1 10.100.0.5
Jan 20 14:48:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 14:48:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:48:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 14:48:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:48:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:48:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:48:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:48:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:48:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:48:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:48:57 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1828: 321 pgs: 321 active+clean; 222 MiB data, 886 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 2.4 MiB/s wr, 421 op/s
Jan 20 14:48:57 compute-0 nova_compute[250018]: 2026-01-20 14:48:57.998 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:48:58 compute-0 podman[310314]: 2026-01-20 14:48:58.482881287 +0000 UTC m=+0.062940736 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, container_name=ovn_metadata_agent)
Jan 20 14:48:58 compute-0 podman[310313]: 2026-01-20 14:48:58.511245416 +0000 UTC m=+0.101578814 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 20 14:48:58 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:48:58 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:48:58 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:48:58.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:48:58 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:48:58 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:48:58 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:48:58.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:48:59 compute-0 ceph-mon[74360]: pgmap v1828: 321 pgs: 321 active+clean; 222 MiB data, 886 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 2.4 MiB/s wr, 421 op/s
Jan 20 14:48:59 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:48:59 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1829: 321 pgs: 321 active+clean; 230 MiB data, 886 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 2.0 MiB/s wr, 402 op/s
Jan 20 14:49:00 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:49:00 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:49:00 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:49:00.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:49:00 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:49:00 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:49:00 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:49:00.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:49:01 compute-0 nova_compute[250018]: 2026-01-20 14:49:01.124 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:49:01 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e242 do_prune osdmap full prune enabled
Jan 20 14:49:01 compute-0 ceph-mon[74360]: pgmap v1829: 321 pgs: 321 active+clean; 230 MiB data, 886 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 2.0 MiB/s wr, 402 op/s
Jan 20 14:49:01 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e243 e243: 3 total, 3 up, 3 in
Jan 20 14:49:01 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e243: 3 total, 3 up, 3 in
Jan 20 14:49:01 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:49:01.237 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=33, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=32) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:49:01 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:49:01.238 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 14:49:01 compute-0 nova_compute[250018]: 2026-01-20 14:49:01.238 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:49:01 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1831: 321 pgs: 321 active+clean; 248 MiB data, 902 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 2.6 MiB/s wr, 401 op/s
Jan 20 14:49:02 compute-0 ceph-mon[74360]: osdmap e243: 3 total, 3 up, 3 in
Jan 20 14:49:02 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/4043656728' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:49:02 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:49:02 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:49:02 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:49:02.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:49:02 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:49:02 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:49:02 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:49:02.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:49:03 compute-0 nova_compute[250018]: 2026-01-20 14:49:03.001 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:49:03 compute-0 ceph-mon[74360]: pgmap v1831: 321 pgs: 321 active+clean; 248 MiB data, 902 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 2.6 MiB/s wr, 401 op/s
Jan 20 14:49:03 compute-0 nova_compute[250018]: 2026-01-20 14:49:03.249 250022 DEBUG nova.virt.libvirt.driver [None req-0ee50b09-3218-49d0-86f0-21d46a1ab377 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: f55cf124-b1d8-47e0-80c6-8b9dd2b3f743] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Jan 20 14:49:03 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1832: 321 pgs: 321 active+clean; 248 MiB data, 902 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 2.6 MiB/s wr, 356 op/s
Jan 20 14:49:04 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e243 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:49:04 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:49:04 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:49:04 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:49:04.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:49:04 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:49:04 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:49:04 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:49:04.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:49:05 compute-0 ceph-mon[74360]: pgmap v1832: 321 pgs: 321 active+clean; 248 MiB data, 902 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 2.6 MiB/s wr, 356 op/s
Jan 20 14:49:05 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1833: 321 pgs: 321 active+clean; 248 MiB data, 902 MiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 2.6 MiB/s wr, 167 op/s
Jan 20 14:49:06 compute-0 nova_compute[250018]: 2026-01-20 14:49:06.126 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:49:06 compute-0 sshd-session[310364]: Invalid user test from 157.245.78.139 port 33314
Jan 20 14:49:06 compute-0 sshd-session[310364]: Connection closed by invalid user test 157.245.78.139 port 33314 [preauth]
Jan 20 14:49:06 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:49:06 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:49:06 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:49:06.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:49:06 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:49:06 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:49:06 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:49:06.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:49:07 compute-0 kernel: tap1c2faab3-ee (unregistering): left promiscuous mode
Jan 20 14:49:07 compute-0 NetworkManager[48960]: <info>  [1768920547.1432] device (tap1c2faab3-ee): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 14:49:07 compute-0 ovn_controller[148666]: 2026-01-20T14:49:07Z|00348|binding|INFO|Releasing lport 1c2faab3-ee0b-4878-b090-b075bbb97543 from this chassis (sb_readonly=0)
Jan 20 14:49:07 compute-0 nova_compute[250018]: 2026-01-20 14:49:07.151 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:49:07 compute-0 ovn_controller[148666]: 2026-01-20T14:49:07Z|00349|binding|INFO|Setting lport 1c2faab3-ee0b-4878-b090-b075bbb97543 down in Southbound
Jan 20 14:49:07 compute-0 ovn_controller[148666]: 2026-01-20T14:49:07Z|00350|binding|INFO|Removing iface tap1c2faab3-ee ovn-installed in OVS
Jan 20 14:49:07 compute-0 nova_compute[250018]: 2026-01-20 14:49:07.152 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:49:07 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:49:07.169 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a5:02:f1 10.100.0.5'], port_security=['fa:16:3e:a5:02:f1 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'f55cf124-b1d8-47e0-80c6-8b9dd2b3f743', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3379e2b3-ffb2-4391-969b-c9dc51bfbe25', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'acb30fbc0e3749e390d7f867060b5a2a', 'neutron:revision_number': '4', 'neutron:security_group_ids': '19fab802-7db8-4c89-8f8e-8dcfc14d4627', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a0e287ba-f88b-46f5-bb7f-3cc2a74be88e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=1c2faab3-ee0b-4878-b090-b075bbb97543) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:49:07 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:49:07.170 160071 INFO neutron.agent.ovn.metadata.agent [-] Port 1c2faab3-ee0b-4878-b090-b075bbb97543 in datapath 3379e2b3-ffb2-4391-969b-c9dc51bfbe25 unbound from our chassis
Jan 20 14:49:07 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:49:07.172 160071 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3379e2b3-ffb2-4391-969b-c9dc51bfbe25, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 14:49:07 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:49:07.173 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[59deea78-dadd-423b-a5d9-e4895d7041c8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:49:07 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:49:07.174 160071 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25 namespace which is not needed anymore
Jan 20 14:49:07 compute-0 nova_compute[250018]: 2026-01-20 14:49:07.179 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:49:07 compute-0 systemd[1]: machine-qemu\x2d45\x2dinstance\x2d00000068.scope: Deactivated successfully.
Jan 20 14:49:07 compute-0 systemd[1]: machine-qemu\x2d45\x2dinstance\x2d00000068.scope: Consumed 14.924s CPU time.
Jan 20 14:49:07 compute-0 systemd-machined[216401]: Machine qemu-45-instance-00000068 terminated.
Jan 20 14:49:07 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:49:07.240 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=367c1a2c-b16a-4828-ab5a-626bb50023b4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '33'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:49:07 compute-0 neutron-haproxy-ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25[310197]: [NOTICE]   (310201) : haproxy version is 2.8.14-c23fe91
Jan 20 14:49:07 compute-0 neutron-haproxy-ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25[310197]: [NOTICE]   (310201) : path to executable is /usr/sbin/haproxy
Jan 20 14:49:07 compute-0 neutron-haproxy-ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25[310197]: [WARNING]  (310201) : Exiting Master process...
Jan 20 14:49:07 compute-0 neutron-haproxy-ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25[310197]: [ALERT]    (310201) : Current worker (310203) exited with code 143 (Terminated)
Jan 20 14:49:07 compute-0 neutron-haproxy-ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25[310197]: [WARNING]  (310201) : All workers exited. Exiting... (0)
Jan 20 14:49:07 compute-0 systemd[1]: libpod-0ed5d8351c0832c6ad512d50858121a61aa9ea15911f7e3c0a13acbbce4bcac9.scope: Deactivated successfully.
Jan 20 14:49:07 compute-0 podman[310389]: 2026-01-20 14:49:07.323324342 +0000 UTC m=+0.049740658 container died 0ed5d8351c0832c6ad512d50858121a61aa9ea15911f7e3c0a13acbbce4bcac9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25, tcib_managed=true, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 20 14:49:07 compute-0 ceph-mon[74360]: pgmap v1833: 321 pgs: 321 active+clean; 248 MiB data, 902 MiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 2.6 MiB/s wr, 167 op/s
Jan 20 14:49:07 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0ed5d8351c0832c6ad512d50858121a61aa9ea15911f7e3c0a13acbbce4bcac9-userdata-shm.mount: Deactivated successfully.
Jan 20 14:49:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-38f253e21d6cfc8d69db41d861d7a4ea0f059d215c210699ff3dee306c841900-merged.mount: Deactivated successfully.
Jan 20 14:49:07 compute-0 podman[310389]: 2026-01-20 14:49:07.378058934 +0000 UTC m=+0.104475260 container cleanup 0ed5d8351c0832c6ad512d50858121a61aa9ea15911f7e3c0a13acbbce4bcac9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 14:49:07 compute-0 systemd[1]: libpod-conmon-0ed5d8351c0832c6ad512d50858121a61aa9ea15911f7e3c0a13acbbce4bcac9.scope: Deactivated successfully.
Jan 20 14:49:07 compute-0 nova_compute[250018]: 2026-01-20 14:49:07.387 250022 INFO nova.virt.libvirt.driver [None req-0ee50b09-3218-49d0-86f0-21d46a1ab377 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: f55cf124-b1d8-47e0-80c6-8b9dd2b3f743] Instance shutdown successfully after 14 seconds.
Jan 20 14:49:07 compute-0 nova_compute[250018]: 2026-01-20 14:49:07.392 250022 INFO nova.virt.libvirt.driver [-] [instance: f55cf124-b1d8-47e0-80c6-8b9dd2b3f743] Instance destroyed successfully.
Jan 20 14:49:07 compute-0 nova_compute[250018]: 2026-01-20 14:49:07.396 250022 INFO nova.virt.libvirt.driver [-] [instance: f55cf124-b1d8-47e0-80c6-8b9dd2b3f743] Instance destroyed successfully.
Jan 20 14:49:07 compute-0 nova_compute[250018]: 2026-01-20 14:49:07.398 250022 DEBUG nova.virt.libvirt.vif [None req-0ee50b09-3218-49d0-86f0-21d46a1ab377 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:48:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-815005966',display_name='tempest-ServerDiskConfigTestJSON-server-815005966',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-815005966',id=104,image_ref='26699514-f465-4b50-98b7-36f2cfc6a308',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:48:43Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='acb30fbc0e3749e390d7f867060b5a2a',ramdisk_id='',reservation_id='r-walplx2b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='26699514-f465-4b50-98b7-36f2cfc6a308',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-1806346246',owner_user_name='tempest-ServerDiskConfigTestJSON-1806346246-project-mem
ber'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:48:51Z,user_data=None,user_id='a1bd93d04cc4468abe1d5c61f5144191',uuid=f55cf124-b1d8-47e0-80c6-8b9dd2b3f743,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1c2faab3-ee0b-4878-b090-b075bbb97543", "address": "fa:16:3e:a5:02:f1", "network": {"id": "3379e2b3-ffb2-4391-969b-c9dc51bfbe25", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1112843240-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acb30fbc0e3749e390d7f867060b5a2a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c2faab3-ee", "ovs_interfaceid": "1c2faab3-ee0b-4878-b090-b075bbb97543", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 14:49:07 compute-0 nova_compute[250018]: 2026-01-20 14:49:07.400 250022 DEBUG nova.network.os_vif_util [None req-0ee50b09-3218-49d0-86f0-21d46a1ab377 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Converting VIF {"id": "1c2faab3-ee0b-4878-b090-b075bbb97543", "address": "fa:16:3e:a5:02:f1", "network": {"id": "3379e2b3-ffb2-4391-969b-c9dc51bfbe25", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1112843240-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acb30fbc0e3749e390d7f867060b5a2a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c2faab3-ee", "ovs_interfaceid": "1c2faab3-ee0b-4878-b090-b075bbb97543", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:49:07 compute-0 nova_compute[250018]: 2026-01-20 14:49:07.401 250022 DEBUG nova.network.os_vif_util [None req-0ee50b09-3218-49d0-86f0-21d46a1ab377 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a5:02:f1,bridge_name='br-int',has_traffic_filtering=True,id=1c2faab3-ee0b-4878-b090-b075bbb97543,network=Network(3379e2b3-ffb2-4391-969b-c9dc51bfbe25),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c2faab3-ee') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:49:07 compute-0 nova_compute[250018]: 2026-01-20 14:49:07.401 250022 DEBUG os_vif [None req-0ee50b09-3218-49d0-86f0-21d46a1ab377 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a5:02:f1,bridge_name='br-int',has_traffic_filtering=True,id=1c2faab3-ee0b-4878-b090-b075bbb97543,network=Network(3379e2b3-ffb2-4391-969b-c9dc51bfbe25),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c2faab3-ee') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 14:49:07 compute-0 nova_compute[250018]: 2026-01-20 14:49:07.402 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:49:07 compute-0 nova_compute[250018]: 2026-01-20 14:49:07.403 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1c2faab3-ee, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:49:07 compute-0 nova_compute[250018]: 2026-01-20 14:49:07.404 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:49:07 compute-0 nova_compute[250018]: 2026-01-20 14:49:07.408 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:49:07 compute-0 nova_compute[250018]: 2026-01-20 14:49:07.411 250022 INFO os_vif [None req-0ee50b09-3218-49d0-86f0-21d46a1ab377 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a5:02:f1,bridge_name='br-int',has_traffic_filtering=True,id=1c2faab3-ee0b-4878-b090-b075bbb97543,network=Network(3379e2b3-ffb2-4391-969b-c9dc51bfbe25),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c2faab3-ee')
Jan 20 14:49:07 compute-0 podman[310430]: 2026-01-20 14:49:07.459761289 +0000 UTC m=+0.050562270 container remove 0ed5d8351c0832c6ad512d50858121a61aa9ea15911f7e3c0a13acbbce4bcac9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 20 14:49:07 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:49:07.467 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[40b79ea2-c1d9-4fe4-a264-9bc3874bb1e5]: (4, ('Tue Jan 20 02:49:07 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25 (0ed5d8351c0832c6ad512d50858121a61aa9ea15911f7e3c0a13acbbce4bcac9)\n0ed5d8351c0832c6ad512d50858121a61aa9ea15911f7e3c0a13acbbce4bcac9\nTue Jan 20 02:49:07 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25 (0ed5d8351c0832c6ad512d50858121a61aa9ea15911f7e3c0a13acbbce4bcac9)\n0ed5d8351c0832c6ad512d50858121a61aa9ea15911f7e3c0a13acbbce4bcac9\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:49:07 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:49:07.468 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[4209debf-b643-448f-9ada-c8cca68ec799]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:49:07 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:49:07.469 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3379e2b3-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:49:07 compute-0 nova_compute[250018]: 2026-01-20 14:49:07.471 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:49:07 compute-0 kernel: tap3379e2b3-f0: left promiscuous mode
Jan 20 14:49:07 compute-0 nova_compute[250018]: 2026-01-20 14:49:07.485 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:49:07 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:49:07.488 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[a97cd9ad-a0d8-49ff-98e3-2c60c9bbe01c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:49:07 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:49:07.498 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[01700486-d99f-4241-98a9-5ca066e22a65]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:49:07 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:49:07.499 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[84059c80-99b7-47a4-a34c-0ef6fb5d2922]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:49:07 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:49:07.514 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[1c965d65-1589-4a9a-b89a-d9a84f7f5be2]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 646297, 'reachable_time': 28875, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 310462, 'error': None, 'target': 'ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:49:07 compute-0 systemd[1]: run-netns-ovnmeta\x2d3379e2b3\x2dffb2\x2d4391\x2d969b\x2dc9dc51bfbe25.mount: Deactivated successfully.
Jan 20 14:49:07 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:49:07.517 160517 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 14:49:07 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:49:07.517 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[72b7fd6c-9838-44fb-99f6-0879da4b3e37]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:49:07 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1834: 321 pgs: 321 active+clean; 248 MiB data, 902 MiB used, 20 GiB / 21 GiB avail; 745 KiB/s rd, 1.7 MiB/s wr, 94 op/s
Jan 20 14:49:07 compute-0 nova_compute[250018]: 2026-01-20 14:49:07.842 250022 INFO nova.virt.libvirt.driver [None req-0ee50b09-3218-49d0-86f0-21d46a1ab377 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: f55cf124-b1d8-47e0-80c6-8b9dd2b3f743] Deleting instance files /var/lib/nova/instances/f55cf124-b1d8-47e0-80c6-8b9dd2b3f743_del
Jan 20 14:49:07 compute-0 nova_compute[250018]: 2026-01-20 14:49:07.843 250022 INFO nova.virt.libvirt.driver [None req-0ee50b09-3218-49d0-86f0-21d46a1ab377 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: f55cf124-b1d8-47e0-80c6-8b9dd2b3f743] Deletion of /var/lib/nova/instances/f55cf124-b1d8-47e0-80c6-8b9dd2b3f743_del complete
Jan 20 14:49:07 compute-0 nova_compute[250018]: 2026-01-20 14:49:07.884 250022 DEBUG nova.compute.manager [req-a30042a3-0a95-4f95-a927-990aa8b7927e req-81383c99-c5ac-4b43-92b4-3d3f1bdcfde9 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f55cf124-b1d8-47e0-80c6-8b9dd2b3f743] Received event network-vif-unplugged-1c2faab3-ee0b-4878-b090-b075bbb97543 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:49:07 compute-0 nova_compute[250018]: 2026-01-20 14:49:07.884 250022 DEBUG oslo_concurrency.lockutils [req-a30042a3-0a95-4f95-a927-990aa8b7927e req-81383c99-c5ac-4b43-92b4-3d3f1bdcfde9 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "f55cf124-b1d8-47e0-80c6-8b9dd2b3f743-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:49:07 compute-0 nova_compute[250018]: 2026-01-20 14:49:07.885 250022 DEBUG oslo_concurrency.lockutils [req-a30042a3-0a95-4f95-a927-990aa8b7927e req-81383c99-c5ac-4b43-92b4-3d3f1bdcfde9 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "f55cf124-b1d8-47e0-80c6-8b9dd2b3f743-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:49:07 compute-0 nova_compute[250018]: 2026-01-20 14:49:07.885 250022 DEBUG oslo_concurrency.lockutils [req-a30042a3-0a95-4f95-a927-990aa8b7927e req-81383c99-c5ac-4b43-92b4-3d3f1bdcfde9 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "f55cf124-b1d8-47e0-80c6-8b9dd2b3f743-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:49:07 compute-0 nova_compute[250018]: 2026-01-20 14:49:07.885 250022 DEBUG nova.compute.manager [req-a30042a3-0a95-4f95-a927-990aa8b7927e req-81383c99-c5ac-4b43-92b4-3d3f1bdcfde9 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f55cf124-b1d8-47e0-80c6-8b9dd2b3f743] No waiting events found dispatching network-vif-unplugged-1c2faab3-ee0b-4878-b090-b075bbb97543 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:49:07 compute-0 nova_compute[250018]: 2026-01-20 14:49:07.886 250022 WARNING nova.compute.manager [req-a30042a3-0a95-4f95-a927-990aa8b7927e req-81383c99-c5ac-4b43-92b4-3d3f1bdcfde9 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f55cf124-b1d8-47e0-80c6-8b9dd2b3f743] Received unexpected event network-vif-unplugged-1c2faab3-ee0b-4878-b090-b075bbb97543 for instance with vm_state active and task_state rebuilding.
Jan 20 14:49:08 compute-0 nova_compute[250018]: 2026-01-20 14:49:08.002 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:49:08 compute-0 nova_compute[250018]: 2026-01-20 14:49:08.123 250022 DEBUG nova.virt.libvirt.driver [None req-0ee50b09-3218-49d0-86f0-21d46a1ab377 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: f55cf124-b1d8-47e0-80c6-8b9dd2b3f743] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 14:49:08 compute-0 nova_compute[250018]: 2026-01-20 14:49:08.124 250022 INFO nova.virt.libvirt.driver [None req-0ee50b09-3218-49d0-86f0-21d46a1ab377 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: f55cf124-b1d8-47e0-80c6-8b9dd2b3f743] Creating image(s)
Jan 20 14:49:08 compute-0 nova_compute[250018]: 2026-01-20 14:49:08.159 250022 DEBUG nova.storage.rbd_utils [None req-0ee50b09-3218-49d0-86f0-21d46a1ab377 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] rbd image f55cf124-b1d8-47e0-80c6-8b9dd2b3f743_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:49:08 compute-0 nova_compute[250018]: 2026-01-20 14:49:08.206 250022 DEBUG nova.storage.rbd_utils [None req-0ee50b09-3218-49d0-86f0-21d46a1ab377 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] rbd image f55cf124-b1d8-47e0-80c6-8b9dd2b3f743_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:49:08 compute-0 nova_compute[250018]: 2026-01-20 14:49:08.233 250022 DEBUG nova.storage.rbd_utils [None req-0ee50b09-3218-49d0-86f0-21d46a1ab377 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] rbd image f55cf124-b1d8-47e0-80c6-8b9dd2b3f743_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:49:08 compute-0 nova_compute[250018]: 2026-01-20 14:49:08.236 250022 DEBUG oslo_concurrency.processutils [None req-0ee50b09-3218-49d0-86f0-21d46a1ab377 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a4ed0d2b98aa460c005e878d78a49ccb6f511f7c --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:49:08 compute-0 nova_compute[250018]: 2026-01-20 14:49:08.305 250022 DEBUG oslo_concurrency.processutils [None req-0ee50b09-3218-49d0-86f0-21d46a1ab377 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a4ed0d2b98aa460c005e878d78a49ccb6f511f7c --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:49:08 compute-0 nova_compute[250018]: 2026-01-20 14:49:08.306 250022 DEBUG oslo_concurrency.lockutils [None req-0ee50b09-3218-49d0-86f0-21d46a1ab377 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Acquiring lock "a4ed0d2b98aa460c005e878d78a49ccb6f511f7c" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:49:08 compute-0 nova_compute[250018]: 2026-01-20 14:49:08.307 250022 DEBUG oslo_concurrency.lockutils [None req-0ee50b09-3218-49d0-86f0-21d46a1ab377 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lock "a4ed0d2b98aa460c005e878d78a49ccb6f511f7c" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:49:08 compute-0 nova_compute[250018]: 2026-01-20 14:49:08.307 250022 DEBUG oslo_concurrency.lockutils [None req-0ee50b09-3218-49d0-86f0-21d46a1ab377 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lock "a4ed0d2b98aa460c005e878d78a49ccb6f511f7c" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:49:08 compute-0 nova_compute[250018]: 2026-01-20 14:49:08.332 250022 DEBUG nova.storage.rbd_utils [None req-0ee50b09-3218-49d0-86f0-21d46a1ab377 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] rbd image f55cf124-b1d8-47e0-80c6-8b9dd2b3f743_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:49:08 compute-0 nova_compute[250018]: 2026-01-20 14:49:08.337 250022 DEBUG oslo_concurrency.processutils [None req-0ee50b09-3218-49d0-86f0-21d46a1ab377 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a4ed0d2b98aa460c005e878d78a49ccb6f511f7c f55cf124-b1d8-47e0-80c6-8b9dd2b3f743_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:49:08 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:49:08 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:49:08 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:49:08.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:49:08 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:49:08 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:49:08 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:49:08.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:49:08 compute-0 nova_compute[250018]: 2026-01-20 14:49:08.665 250022 DEBUG oslo_concurrency.processutils [None req-0ee50b09-3218-49d0-86f0-21d46a1ab377 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a4ed0d2b98aa460c005e878d78a49ccb6f511f7c f55cf124-b1d8-47e0-80c6-8b9dd2b3f743_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.328s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:49:08 compute-0 nova_compute[250018]: 2026-01-20 14:49:08.745 250022 DEBUG nova.storage.rbd_utils [None req-0ee50b09-3218-49d0-86f0-21d46a1ab377 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] resizing rbd image f55cf124-b1d8-47e0-80c6-8b9dd2b3f743_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 20 14:49:08 compute-0 nova_compute[250018]: 2026-01-20 14:49:08.875 250022 DEBUG nova.virt.libvirt.driver [None req-0ee50b09-3218-49d0-86f0-21d46a1ab377 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: f55cf124-b1d8-47e0-80c6-8b9dd2b3f743] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 14:49:08 compute-0 nova_compute[250018]: 2026-01-20 14:49:08.876 250022 DEBUG nova.virt.libvirt.driver [None req-0ee50b09-3218-49d0-86f0-21d46a1ab377 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: f55cf124-b1d8-47e0-80c6-8b9dd2b3f743] Ensure instance console log exists: /var/lib/nova/instances/f55cf124-b1d8-47e0-80c6-8b9dd2b3f743/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 14:49:08 compute-0 nova_compute[250018]: 2026-01-20 14:49:08.877 250022 DEBUG oslo_concurrency.lockutils [None req-0ee50b09-3218-49d0-86f0-21d46a1ab377 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:49:08 compute-0 nova_compute[250018]: 2026-01-20 14:49:08.877 250022 DEBUG oslo_concurrency.lockutils [None req-0ee50b09-3218-49d0-86f0-21d46a1ab377 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:49:08 compute-0 nova_compute[250018]: 2026-01-20 14:49:08.878 250022 DEBUG oslo_concurrency.lockutils [None req-0ee50b09-3218-49d0-86f0-21d46a1ab377 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:49:08 compute-0 nova_compute[250018]: 2026-01-20 14:49:08.882 250022 DEBUG nova.virt.libvirt.driver [None req-0ee50b09-3218-49d0-86f0-21d46a1ab377 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: f55cf124-b1d8-47e0-80c6-8b9dd2b3f743] Start _get_guest_xml network_info=[{"id": "1c2faab3-ee0b-4878-b090-b075bbb97543", "address": "fa:16:3e:a5:02:f1", "network": {"id": "3379e2b3-ffb2-4391-969b-c9dc51bfbe25", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1112843240-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acb30fbc0e3749e390d7f867060b5a2a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c2faab3-ee", "ovs_interfaceid": "1c2faab3-ee0b-4878-b090-b075bbb97543", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:22:02Z,direct_url=<?>,disk_format='qcow2',id=26699514-f465-4b50-98b7-36f2cfc6a308,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:04Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encrypted': False, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'guest_format': None, 'size': 0, 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 14:49:08 compute-0 nova_compute[250018]: 2026-01-20 14:49:08.888 250022 WARNING nova.virt.libvirt.driver [None req-0ee50b09-3218-49d0-86f0-21d46a1ab377 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Jan 20 14:49:08 compute-0 nova_compute[250018]: 2026-01-20 14:49:08.898 250022 DEBUG nova.virt.libvirt.host [None req-0ee50b09-3218-49d0-86f0-21d46a1ab377 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 14:49:08 compute-0 nova_compute[250018]: 2026-01-20 14:49:08.899 250022 DEBUG nova.virt.libvirt.host [None req-0ee50b09-3218-49d0-86f0-21d46a1ab377 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 14:49:08 compute-0 nova_compute[250018]: 2026-01-20 14:49:08.905 250022 DEBUG nova.virt.libvirt.host [None req-0ee50b09-3218-49d0-86f0-21d46a1ab377 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 14:49:08 compute-0 nova_compute[250018]: 2026-01-20 14:49:08.906 250022 DEBUG nova.virt.libvirt.host [None req-0ee50b09-3218-49d0-86f0-21d46a1ab377 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 14:49:08 compute-0 nova_compute[250018]: 2026-01-20 14:49:08.908 250022 DEBUG nova.virt.libvirt.driver [None req-0ee50b09-3218-49d0-86f0-21d46a1ab377 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 14:49:08 compute-0 nova_compute[250018]: 2026-01-20 14:49:08.908 250022 DEBUG nova.virt.hardware [None req-0ee50b09-3218-49d0-86f0-21d46a1ab377 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:22:02Z,direct_url=<?>,disk_format='qcow2',id=26699514-f465-4b50-98b7-36f2cfc6a308,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:04Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 14:49:08 compute-0 nova_compute[250018]: 2026-01-20 14:49:08.909 250022 DEBUG nova.virt.hardware [None req-0ee50b09-3218-49d0-86f0-21d46a1ab377 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 14:49:08 compute-0 nova_compute[250018]: 2026-01-20 14:49:08.910 250022 DEBUG nova.virt.hardware [None req-0ee50b09-3218-49d0-86f0-21d46a1ab377 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 14:49:08 compute-0 nova_compute[250018]: 2026-01-20 14:49:08.910 250022 DEBUG nova.virt.hardware [None req-0ee50b09-3218-49d0-86f0-21d46a1ab377 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 14:49:08 compute-0 nova_compute[250018]: 2026-01-20 14:49:08.910 250022 DEBUG nova.virt.hardware [None req-0ee50b09-3218-49d0-86f0-21d46a1ab377 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 14:49:08 compute-0 nova_compute[250018]: 2026-01-20 14:49:08.911 250022 DEBUG nova.virt.hardware [None req-0ee50b09-3218-49d0-86f0-21d46a1ab377 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 14:49:08 compute-0 nova_compute[250018]: 2026-01-20 14:49:08.911 250022 DEBUG nova.virt.hardware [None req-0ee50b09-3218-49d0-86f0-21d46a1ab377 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 14:49:08 compute-0 nova_compute[250018]: 2026-01-20 14:49:08.912 250022 DEBUG nova.virt.hardware [None req-0ee50b09-3218-49d0-86f0-21d46a1ab377 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 14:49:08 compute-0 nova_compute[250018]: 2026-01-20 14:49:08.912 250022 DEBUG nova.virt.hardware [None req-0ee50b09-3218-49d0-86f0-21d46a1ab377 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 14:49:08 compute-0 nova_compute[250018]: 2026-01-20 14:49:08.913 250022 DEBUG nova.virt.hardware [None req-0ee50b09-3218-49d0-86f0-21d46a1ab377 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 14:49:08 compute-0 nova_compute[250018]: 2026-01-20 14:49:08.913 250022 DEBUG nova.virt.hardware [None req-0ee50b09-3218-49d0-86f0-21d46a1ab377 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 14:49:08 compute-0 nova_compute[250018]: 2026-01-20 14:49:08.914 250022 DEBUG nova.objects.instance [None req-0ee50b09-3218-49d0-86f0-21d46a1ab377 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lazy-loading 'vcpu_model' on Instance uuid f55cf124-b1d8-47e0-80c6-8b9dd2b3f743 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:49:08 compute-0 nova_compute[250018]: 2026-01-20 14:49:08.934 250022 DEBUG oslo_concurrency.processutils [None req-0ee50b09-3218-49d0-86f0-21d46a1ab377 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:49:09 compute-0 sudo[310651]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:49:09 compute-0 sudo[310651]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:49:09 compute-0 sudo[310651]: pam_unix(sudo:session): session closed for user root
Jan 20 14:49:09 compute-0 sudo[310676]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:49:09 compute-0 sudo[310676]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:49:09 compute-0 sudo[310676]: pam_unix(sudo:session): session closed for user root
Jan 20 14:49:09 compute-0 ceph-mon[74360]: pgmap v1834: 321 pgs: 321 active+clean; 248 MiB data, 902 MiB used, 20 GiB / 21 GiB avail; 745 KiB/s rd, 1.7 MiB/s wr, 94 op/s
Jan 20 14:49:09 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:49:09 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1889894140' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:49:09 compute-0 nova_compute[250018]: 2026-01-20 14:49:09.406 250022 DEBUG oslo_concurrency.processutils [None req-0ee50b09-3218-49d0-86f0-21d46a1ab377 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:49:09 compute-0 nova_compute[250018]: 2026-01-20 14:49:09.451 250022 DEBUG nova.storage.rbd_utils [None req-0ee50b09-3218-49d0-86f0-21d46a1ab377 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] rbd image f55cf124-b1d8-47e0-80c6-8b9dd2b3f743_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:49:09 compute-0 nova_compute[250018]: 2026-01-20 14:49:09.457 250022 DEBUG oslo_concurrency.processutils [None req-0ee50b09-3218-49d0-86f0-21d46a1ab377 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:49:09 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e243 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:49:09 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e243 do_prune osdmap full prune enabled
Jan 20 14:49:09 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e244 e244: 3 total, 3 up, 3 in
Jan 20 14:49:09 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e244: 3 total, 3 up, 3 in
Jan 20 14:49:09 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1836: 321 pgs: 321 active+clean; 237 MiB data, 898 MiB used, 20 GiB / 21 GiB avail; 587 KiB/s rd, 503 KiB/s wr, 73 op/s
Jan 20 14:49:09 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:49:09 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/236200116' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:49:09 compute-0 nova_compute[250018]: 2026-01-20 14:49:09.884 250022 DEBUG oslo_concurrency.processutils [None req-0ee50b09-3218-49d0-86f0-21d46a1ab377 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:49:09 compute-0 nova_compute[250018]: 2026-01-20 14:49:09.886 250022 DEBUG nova.virt.libvirt.vif [None req-0ee50b09-3218-49d0-86f0-21d46a1ab377 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-20T14:48:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-815005966',display_name='tempest-ServerDiskConfigTestJSON-server-815005966',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-815005966',id=104,image_ref='26699514-f465-4b50-98b7-36f2cfc6a308',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:48:43Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='acb30fbc0e3749e390d7f867060b5a2a',ramdisk_id='',reservation_id='r-walplx2b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='26699514-f465-4b50-98b7-36f2cfc6a308',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-1806346246',owner_user_name='tempest-S
erverDiskConfigTestJSON-1806346246-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:49:08Z,user_data=None,user_id='a1bd93d04cc4468abe1d5c61f5144191',uuid=f55cf124-b1d8-47e0-80c6-8b9dd2b3f743,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1c2faab3-ee0b-4878-b090-b075bbb97543", "address": "fa:16:3e:a5:02:f1", "network": {"id": "3379e2b3-ffb2-4391-969b-c9dc51bfbe25", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1112843240-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acb30fbc0e3749e390d7f867060b5a2a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c2faab3-ee", "ovs_interfaceid": "1c2faab3-ee0b-4878-b090-b075bbb97543", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 14:49:09 compute-0 nova_compute[250018]: 2026-01-20 14:49:09.887 250022 DEBUG nova.network.os_vif_util [None req-0ee50b09-3218-49d0-86f0-21d46a1ab377 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Converting VIF {"id": "1c2faab3-ee0b-4878-b090-b075bbb97543", "address": "fa:16:3e:a5:02:f1", "network": {"id": "3379e2b3-ffb2-4391-969b-c9dc51bfbe25", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1112843240-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acb30fbc0e3749e390d7f867060b5a2a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c2faab3-ee", "ovs_interfaceid": "1c2faab3-ee0b-4878-b090-b075bbb97543", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:49:09 compute-0 nova_compute[250018]: 2026-01-20 14:49:09.888 250022 DEBUG nova.network.os_vif_util [None req-0ee50b09-3218-49d0-86f0-21d46a1ab377 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a5:02:f1,bridge_name='br-int',has_traffic_filtering=True,id=1c2faab3-ee0b-4878-b090-b075bbb97543,network=Network(3379e2b3-ffb2-4391-969b-c9dc51bfbe25),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c2faab3-ee') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:49:09 compute-0 nova_compute[250018]: 2026-01-20 14:49:09.891 250022 DEBUG nova.virt.libvirt.driver [None req-0ee50b09-3218-49d0-86f0-21d46a1ab377 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: f55cf124-b1d8-47e0-80c6-8b9dd2b3f743] End _get_guest_xml xml=<domain type="kvm">
Jan 20 14:49:09 compute-0 nova_compute[250018]:   <uuid>f55cf124-b1d8-47e0-80c6-8b9dd2b3f743</uuid>
Jan 20 14:49:09 compute-0 nova_compute[250018]:   <name>instance-00000068</name>
Jan 20 14:49:09 compute-0 nova_compute[250018]:   <memory>131072</memory>
Jan 20 14:49:09 compute-0 nova_compute[250018]:   <vcpu>1</vcpu>
Jan 20 14:49:09 compute-0 nova_compute[250018]:   <metadata>
Jan 20 14:49:09 compute-0 nova_compute[250018]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 14:49:09 compute-0 nova_compute[250018]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 14:49:09 compute-0 nova_compute[250018]:       <nova:name>tempest-ServerDiskConfigTestJSON-server-815005966</nova:name>
Jan 20 14:49:09 compute-0 nova_compute[250018]:       <nova:creationTime>2026-01-20 14:49:08</nova:creationTime>
Jan 20 14:49:09 compute-0 nova_compute[250018]:       <nova:flavor name="m1.nano">
Jan 20 14:49:09 compute-0 nova_compute[250018]:         <nova:memory>128</nova:memory>
Jan 20 14:49:09 compute-0 nova_compute[250018]:         <nova:disk>1</nova:disk>
Jan 20 14:49:09 compute-0 nova_compute[250018]:         <nova:swap>0</nova:swap>
Jan 20 14:49:09 compute-0 nova_compute[250018]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 14:49:09 compute-0 nova_compute[250018]:         <nova:vcpus>1</nova:vcpus>
Jan 20 14:49:09 compute-0 nova_compute[250018]:       </nova:flavor>
Jan 20 14:49:09 compute-0 nova_compute[250018]:       <nova:owner>
Jan 20 14:49:09 compute-0 nova_compute[250018]:         <nova:user uuid="a1bd93d04cc4468abe1d5c61f5144191">tempest-ServerDiskConfigTestJSON-1806346246-project-member</nova:user>
Jan 20 14:49:09 compute-0 nova_compute[250018]:         <nova:project uuid="acb30fbc0e3749e390d7f867060b5a2a">tempest-ServerDiskConfigTestJSON-1806346246</nova:project>
Jan 20 14:49:09 compute-0 nova_compute[250018]:       </nova:owner>
Jan 20 14:49:09 compute-0 nova_compute[250018]:       <nova:root type="image" uuid="26699514-f465-4b50-98b7-36f2cfc6a308"/>
Jan 20 14:49:09 compute-0 nova_compute[250018]:       <nova:ports>
Jan 20 14:49:09 compute-0 nova_compute[250018]:         <nova:port uuid="1c2faab3-ee0b-4878-b090-b075bbb97543">
Jan 20 14:49:09 compute-0 nova_compute[250018]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Jan 20 14:49:09 compute-0 nova_compute[250018]:         </nova:port>
Jan 20 14:49:09 compute-0 nova_compute[250018]:       </nova:ports>
Jan 20 14:49:09 compute-0 nova_compute[250018]:     </nova:instance>
Jan 20 14:49:09 compute-0 nova_compute[250018]:   </metadata>
Jan 20 14:49:09 compute-0 nova_compute[250018]:   <sysinfo type="smbios">
Jan 20 14:49:09 compute-0 nova_compute[250018]:     <system>
Jan 20 14:49:09 compute-0 nova_compute[250018]:       <entry name="manufacturer">RDO</entry>
Jan 20 14:49:09 compute-0 nova_compute[250018]:       <entry name="product">OpenStack Compute</entry>
Jan 20 14:49:09 compute-0 nova_compute[250018]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 14:49:09 compute-0 nova_compute[250018]:       <entry name="serial">f55cf124-b1d8-47e0-80c6-8b9dd2b3f743</entry>
Jan 20 14:49:09 compute-0 nova_compute[250018]:       <entry name="uuid">f55cf124-b1d8-47e0-80c6-8b9dd2b3f743</entry>
Jan 20 14:49:09 compute-0 nova_compute[250018]:       <entry name="family">Virtual Machine</entry>
Jan 20 14:49:09 compute-0 nova_compute[250018]:     </system>
Jan 20 14:49:09 compute-0 nova_compute[250018]:   </sysinfo>
Jan 20 14:49:09 compute-0 nova_compute[250018]:   <os>
Jan 20 14:49:09 compute-0 nova_compute[250018]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 14:49:09 compute-0 nova_compute[250018]:     <boot dev="hd"/>
Jan 20 14:49:09 compute-0 nova_compute[250018]:     <smbios mode="sysinfo"/>
Jan 20 14:49:09 compute-0 nova_compute[250018]:   </os>
Jan 20 14:49:09 compute-0 nova_compute[250018]:   <features>
Jan 20 14:49:09 compute-0 nova_compute[250018]:     <acpi/>
Jan 20 14:49:09 compute-0 nova_compute[250018]:     <apic/>
Jan 20 14:49:09 compute-0 nova_compute[250018]:     <vmcoreinfo/>
Jan 20 14:49:09 compute-0 nova_compute[250018]:   </features>
Jan 20 14:49:09 compute-0 nova_compute[250018]:   <clock offset="utc">
Jan 20 14:49:09 compute-0 nova_compute[250018]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 14:49:09 compute-0 nova_compute[250018]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 14:49:09 compute-0 nova_compute[250018]:     <timer name="hpet" present="no"/>
Jan 20 14:49:09 compute-0 nova_compute[250018]:   </clock>
Jan 20 14:49:09 compute-0 nova_compute[250018]:   <cpu mode="custom" match="exact">
Jan 20 14:49:09 compute-0 nova_compute[250018]:     <model>Nehalem</model>
Jan 20 14:49:09 compute-0 nova_compute[250018]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 14:49:09 compute-0 nova_compute[250018]:   </cpu>
Jan 20 14:49:09 compute-0 nova_compute[250018]:   <devices>
Jan 20 14:49:09 compute-0 nova_compute[250018]:     <disk type="network" device="disk">
Jan 20 14:49:09 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 14:49:09 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/f55cf124-b1d8-47e0-80c6-8b9dd2b3f743_disk">
Jan 20 14:49:09 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:49:09 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:49:09 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:49:09 compute-0 nova_compute[250018]:       </source>
Jan 20 14:49:09 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 14:49:09 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:49:09 compute-0 nova_compute[250018]:       </auth>
Jan 20 14:49:09 compute-0 nova_compute[250018]:       <target dev="vda" bus="virtio"/>
Jan 20 14:49:09 compute-0 nova_compute[250018]:     </disk>
Jan 20 14:49:09 compute-0 nova_compute[250018]:     <disk type="network" device="cdrom">
Jan 20 14:49:09 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 14:49:09 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/f55cf124-b1d8-47e0-80c6-8b9dd2b3f743_disk.config">
Jan 20 14:49:09 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:49:09 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:49:09 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:49:09 compute-0 nova_compute[250018]:       </source>
Jan 20 14:49:09 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 14:49:09 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:49:09 compute-0 nova_compute[250018]:       </auth>
Jan 20 14:49:09 compute-0 nova_compute[250018]:       <target dev="sda" bus="sata"/>
Jan 20 14:49:09 compute-0 nova_compute[250018]:     </disk>
Jan 20 14:49:09 compute-0 nova_compute[250018]:     <interface type="ethernet">
Jan 20 14:49:09 compute-0 nova_compute[250018]:       <mac address="fa:16:3e:a5:02:f1"/>
Jan 20 14:49:09 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 14:49:09 compute-0 nova_compute[250018]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 14:49:09 compute-0 nova_compute[250018]:       <mtu size="1442"/>
Jan 20 14:49:09 compute-0 nova_compute[250018]:       <target dev="tap1c2faab3-ee"/>
Jan 20 14:49:09 compute-0 nova_compute[250018]:     </interface>
Jan 20 14:49:09 compute-0 nova_compute[250018]:     <serial type="pty">
Jan 20 14:49:09 compute-0 nova_compute[250018]:       <log file="/var/lib/nova/instances/f55cf124-b1d8-47e0-80c6-8b9dd2b3f743/console.log" append="off"/>
Jan 20 14:49:09 compute-0 nova_compute[250018]:     </serial>
Jan 20 14:49:09 compute-0 nova_compute[250018]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 14:49:09 compute-0 nova_compute[250018]:     <video>
Jan 20 14:49:09 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 14:49:09 compute-0 nova_compute[250018]:     </video>
Jan 20 14:49:09 compute-0 nova_compute[250018]:     <input type="tablet" bus="usb"/>
Jan 20 14:49:09 compute-0 nova_compute[250018]:     <rng model="virtio">
Jan 20 14:49:09 compute-0 nova_compute[250018]:       <backend model="random">/dev/urandom</backend>
Jan 20 14:49:09 compute-0 nova_compute[250018]:     </rng>
Jan 20 14:49:09 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root"/>
Jan 20 14:49:09 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:49:09 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:49:09 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:49:09 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:49:09 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:49:09 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:49:09 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:49:09 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:49:09 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:49:09 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:49:09 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:49:09 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:49:09 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:49:09 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:49:09 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:49:09 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:49:09 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:49:09 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:49:09 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:49:09 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:49:09 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:49:09 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:49:09 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:49:09 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:49:09 compute-0 nova_compute[250018]:     <controller type="usb" index="0"/>
Jan 20 14:49:09 compute-0 nova_compute[250018]:     <memballoon model="virtio">
Jan 20 14:49:09 compute-0 nova_compute[250018]:       <stats period="10"/>
Jan 20 14:49:09 compute-0 nova_compute[250018]:     </memballoon>
Jan 20 14:49:09 compute-0 nova_compute[250018]:   </devices>
Jan 20 14:49:09 compute-0 nova_compute[250018]: </domain>
Jan 20 14:49:09 compute-0 nova_compute[250018]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 14:49:09 compute-0 nova_compute[250018]: 2026-01-20 14:49:09.893 250022 DEBUG nova.compute.manager [None req-0ee50b09-3218-49d0-86f0-21d46a1ab377 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: f55cf124-b1d8-47e0-80c6-8b9dd2b3f743] Preparing to wait for external event network-vif-plugged-1c2faab3-ee0b-4878-b090-b075bbb97543 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 20 14:49:09 compute-0 nova_compute[250018]: 2026-01-20 14:49:09.894 250022 DEBUG oslo_concurrency.lockutils [None req-0ee50b09-3218-49d0-86f0-21d46a1ab377 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Acquiring lock "f55cf124-b1d8-47e0-80c6-8b9dd2b3f743-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:49:09 compute-0 nova_compute[250018]: 2026-01-20 14:49:09.894 250022 DEBUG oslo_concurrency.lockutils [None req-0ee50b09-3218-49d0-86f0-21d46a1ab377 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lock "f55cf124-b1d8-47e0-80c6-8b9dd2b3f743-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:49:09 compute-0 nova_compute[250018]: 2026-01-20 14:49:09.895 250022 DEBUG oslo_concurrency.lockutils [None req-0ee50b09-3218-49d0-86f0-21d46a1ab377 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lock "f55cf124-b1d8-47e0-80c6-8b9dd2b3f743-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:49:09 compute-0 nova_compute[250018]: 2026-01-20 14:49:09.895 250022 DEBUG nova.virt.libvirt.vif [None req-0ee50b09-3218-49d0-86f0-21d46a1ab377 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-20T14:48:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-815005966',display_name='tempest-ServerDiskConfigTestJSON-server-815005966',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-815005966',id=104,image_ref='26699514-f465-4b50-98b7-36f2cfc6a308',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:48:43Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='acb30fbc0e3749e390d7f867060b5a2a',ramdisk_id='',reservation_id='r-walplx2b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='26699514-f465-4b50-98b7-36f2cfc6a308',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-1806346246',owner_user_name='tempest-ServerDiskConfigTestJSON-1806346246-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:49:08Z,user_data=None,user_id='a1bd93d04cc4468abe1d5c61f5144191',uuid=f55cf124-b1d8-47e0-80c6-8b9dd2b3f743,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1c2faab3-ee0b-4878-b090-b075bbb97543", "address": "fa:16:3e:a5:02:f1", "network": {"id": "3379e2b3-ffb2-4391-969b-c9dc51bfbe25", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1112843240-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acb30fbc0e3749e390d7f867060b5a2a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c2faab3-ee", "ovs_interfaceid": "1c2faab3-ee0b-4878-b090-b075bbb97543", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 14:49:09 compute-0 nova_compute[250018]: 2026-01-20 14:49:09.896 250022 DEBUG nova.network.os_vif_util [None req-0ee50b09-3218-49d0-86f0-21d46a1ab377 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Converting VIF {"id": "1c2faab3-ee0b-4878-b090-b075bbb97543", "address": "fa:16:3e:a5:02:f1", "network": {"id": "3379e2b3-ffb2-4391-969b-c9dc51bfbe25", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1112843240-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acb30fbc0e3749e390d7f867060b5a2a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c2faab3-ee", "ovs_interfaceid": "1c2faab3-ee0b-4878-b090-b075bbb97543", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:49:09 compute-0 nova_compute[250018]: 2026-01-20 14:49:09.896 250022 DEBUG nova.network.os_vif_util [None req-0ee50b09-3218-49d0-86f0-21d46a1ab377 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a5:02:f1,bridge_name='br-int',has_traffic_filtering=True,id=1c2faab3-ee0b-4878-b090-b075bbb97543,network=Network(3379e2b3-ffb2-4391-969b-c9dc51bfbe25),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c2faab3-ee') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:49:09 compute-0 nova_compute[250018]: 2026-01-20 14:49:09.897 250022 DEBUG os_vif [None req-0ee50b09-3218-49d0-86f0-21d46a1ab377 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a5:02:f1,bridge_name='br-int',has_traffic_filtering=True,id=1c2faab3-ee0b-4878-b090-b075bbb97543,network=Network(3379e2b3-ffb2-4391-969b-c9dc51bfbe25),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c2faab3-ee') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 14:49:09 compute-0 nova_compute[250018]: 2026-01-20 14:49:09.897 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:49:09 compute-0 nova_compute[250018]: 2026-01-20 14:49:09.898 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:49:09 compute-0 nova_compute[250018]: 2026-01-20 14:49:09.898 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:49:09 compute-0 nova_compute[250018]: 2026-01-20 14:49:09.901 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:49:09 compute-0 nova_compute[250018]: 2026-01-20 14:49:09.901 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1c2faab3-ee, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:49:09 compute-0 nova_compute[250018]: 2026-01-20 14:49:09.901 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1c2faab3-ee, col_values=(('external_ids', {'iface-id': '1c2faab3-ee0b-4878-b090-b075bbb97543', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a5:02:f1', 'vm-uuid': 'f55cf124-b1d8-47e0-80c6-8b9dd2b3f743'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:49:09 compute-0 nova_compute[250018]: 2026-01-20 14:49:09.903 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:49:09 compute-0 NetworkManager[48960]: <info>  [1768920549.9049] manager: (tap1c2faab3-ee): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/178)
Jan 20 14:49:09 compute-0 nova_compute[250018]: 2026-01-20 14:49:09.905 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:49:09 compute-0 nova_compute[250018]: 2026-01-20 14:49:09.909 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:49:09 compute-0 nova_compute[250018]: 2026-01-20 14:49:09.910 250022 INFO os_vif [None req-0ee50b09-3218-49d0-86f0-21d46a1ab377 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a5:02:f1,bridge_name='br-int',has_traffic_filtering=True,id=1c2faab3-ee0b-4878-b090-b075bbb97543,network=Network(3379e2b3-ffb2-4391-969b-c9dc51bfbe25),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c2faab3-ee')
Jan 20 14:49:09 compute-0 nova_compute[250018]: 2026-01-20 14:49:09.977 250022 DEBUG nova.virt.libvirt.driver [None req-0ee50b09-3218-49d0-86f0-21d46a1ab377 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:49:09 compute-0 nova_compute[250018]: 2026-01-20 14:49:09.978 250022 DEBUG nova.virt.libvirt.driver [None req-0ee50b09-3218-49d0-86f0-21d46a1ab377 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:49:09 compute-0 nova_compute[250018]: 2026-01-20 14:49:09.978 250022 DEBUG nova.virt.libvirt.driver [None req-0ee50b09-3218-49d0-86f0-21d46a1ab377 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] No VIF found with MAC fa:16:3e:a5:02:f1, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 14:49:09 compute-0 nova_compute[250018]: 2026-01-20 14:49:09.979 250022 INFO nova.virt.libvirt.driver [None req-0ee50b09-3218-49d0-86f0-21d46a1ab377 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: f55cf124-b1d8-47e0-80c6-8b9dd2b3f743] Using config drive
Jan 20 14:49:10 compute-0 nova_compute[250018]: 2026-01-20 14:49:10.011 250022 DEBUG nova.storage.rbd_utils [None req-0ee50b09-3218-49d0-86f0-21d46a1ab377 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] rbd image f55cf124-b1d8-47e0-80c6-8b9dd2b3f743_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:49:10 compute-0 nova_compute[250018]: 2026-01-20 14:49:10.030 250022 DEBUG nova.objects.instance [None req-0ee50b09-3218-49d0-86f0-21d46a1ab377 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lazy-loading 'ec2_ids' on Instance uuid f55cf124-b1d8-47e0-80c6-8b9dd2b3f743 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:49:10 compute-0 nova_compute[250018]: 2026-01-20 14:49:10.036 250022 DEBUG nova.compute.manager [req-2e7cfcbd-f792-4007-8d3b-0f2280821007 req-43877e76-eef5-49c9-871b-ddb0c2f35499 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f55cf124-b1d8-47e0-80c6-8b9dd2b3f743] Received event network-vif-plugged-1c2faab3-ee0b-4878-b090-b075bbb97543 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:49:10 compute-0 nova_compute[250018]: 2026-01-20 14:49:10.036 250022 DEBUG oslo_concurrency.lockutils [req-2e7cfcbd-f792-4007-8d3b-0f2280821007 req-43877e76-eef5-49c9-871b-ddb0c2f35499 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "f55cf124-b1d8-47e0-80c6-8b9dd2b3f743-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:49:10 compute-0 nova_compute[250018]: 2026-01-20 14:49:10.036 250022 DEBUG oslo_concurrency.lockutils [req-2e7cfcbd-f792-4007-8d3b-0f2280821007 req-43877e76-eef5-49c9-871b-ddb0c2f35499 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "f55cf124-b1d8-47e0-80c6-8b9dd2b3f743-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:49:10 compute-0 nova_compute[250018]: 2026-01-20 14:49:10.036 250022 DEBUG oslo_concurrency.lockutils [req-2e7cfcbd-f792-4007-8d3b-0f2280821007 req-43877e76-eef5-49c9-871b-ddb0c2f35499 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "f55cf124-b1d8-47e0-80c6-8b9dd2b3f743-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:49:10 compute-0 nova_compute[250018]: 2026-01-20 14:49:10.037 250022 DEBUG nova.compute.manager [req-2e7cfcbd-f792-4007-8d3b-0f2280821007 req-43877e76-eef5-49c9-871b-ddb0c2f35499 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f55cf124-b1d8-47e0-80c6-8b9dd2b3f743] Processing event network-vif-plugged-1c2faab3-ee0b-4878-b090-b075bbb97543 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 20 14:49:10 compute-0 nova_compute[250018]: 2026-01-20 14:49:10.078 250022 DEBUG nova.objects.instance [None req-0ee50b09-3218-49d0-86f0-21d46a1ab377 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lazy-loading 'keypairs' on Instance uuid f55cf124-b1d8-47e0-80c6-8b9dd2b3f743 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:49:10 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/1889894140' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:49:10 compute-0 ceph-mon[74360]: osdmap e244: 3 total, 3 up, 3 in
Jan 20 14:49:10 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/236200116' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:49:10 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:49:10 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:49:10 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:49:10.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:49:10 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:49:10 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:49:10 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:49:10.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:49:10 compute-0 nova_compute[250018]: 2026-01-20 14:49:10.810 250022 INFO nova.virt.libvirt.driver [None req-0ee50b09-3218-49d0-86f0-21d46a1ab377 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: f55cf124-b1d8-47e0-80c6-8b9dd2b3f743] Creating config drive at /var/lib/nova/instances/f55cf124-b1d8-47e0-80c6-8b9dd2b3f743/disk.config
Jan 20 14:49:10 compute-0 nova_compute[250018]: 2026-01-20 14:49:10.815 250022 DEBUG oslo_concurrency.processutils [None req-0ee50b09-3218-49d0-86f0-21d46a1ab377 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f55cf124-b1d8-47e0-80c6-8b9dd2b3f743/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4ubb1twf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:49:10 compute-0 nova_compute[250018]: 2026-01-20 14:49:10.952 250022 DEBUG oslo_concurrency.processutils [None req-0ee50b09-3218-49d0-86f0-21d46a1ab377 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f55cf124-b1d8-47e0-80c6-8b9dd2b3f743/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4ubb1twf" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:49:10 compute-0 nova_compute[250018]: 2026-01-20 14:49:10.982 250022 DEBUG nova.storage.rbd_utils [None req-0ee50b09-3218-49d0-86f0-21d46a1ab377 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] rbd image f55cf124-b1d8-47e0-80c6-8b9dd2b3f743_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:49:10 compute-0 nova_compute[250018]: 2026-01-20 14:49:10.987 250022 DEBUG oslo_concurrency.processutils [None req-0ee50b09-3218-49d0-86f0-21d46a1ab377 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f55cf124-b1d8-47e0-80c6-8b9dd2b3f743/disk.config f55cf124-b1d8-47e0-80c6-8b9dd2b3f743_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:49:11 compute-0 nova_compute[250018]: 2026-01-20 14:49:11.159 250022 DEBUG oslo_concurrency.processutils [None req-0ee50b09-3218-49d0-86f0-21d46a1ab377 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f55cf124-b1d8-47e0-80c6-8b9dd2b3f743/disk.config f55cf124-b1d8-47e0-80c6-8b9dd2b3f743_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.172s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:49:11 compute-0 nova_compute[250018]: 2026-01-20 14:49:11.160 250022 INFO nova.virt.libvirt.driver [None req-0ee50b09-3218-49d0-86f0-21d46a1ab377 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: f55cf124-b1d8-47e0-80c6-8b9dd2b3f743] Deleting local config drive /var/lib/nova/instances/f55cf124-b1d8-47e0-80c6-8b9dd2b3f743/disk.config because it was imported into RBD.
Jan 20 14:49:11 compute-0 kernel: tap1c2faab3-ee: entered promiscuous mode
Jan 20 14:49:11 compute-0 ovn_controller[148666]: 2026-01-20T14:49:11Z|00351|binding|INFO|Claiming lport 1c2faab3-ee0b-4878-b090-b075bbb97543 for this chassis.
Jan 20 14:49:11 compute-0 ovn_controller[148666]: 2026-01-20T14:49:11Z|00352|binding|INFO|1c2faab3-ee0b-4878-b090-b075bbb97543: Claiming fa:16:3e:a5:02:f1 10.100.0.5
Jan 20 14:49:11 compute-0 nova_compute[250018]: 2026-01-20 14:49:11.222 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:49:11 compute-0 NetworkManager[48960]: <info>  [1768920551.2247] manager: (tap1c2faab3-ee): new Tun device (/org/freedesktop/NetworkManager/Devices/179)
Jan 20 14:49:11 compute-0 ovn_controller[148666]: 2026-01-20T14:49:11Z|00353|binding|INFO|Setting lport 1c2faab3-ee0b-4878-b090-b075bbb97543 ovn-installed in OVS
Jan 20 14:49:11 compute-0 ovn_controller[148666]: 2026-01-20T14:49:11Z|00354|binding|INFO|Setting lport 1c2faab3-ee0b-4878-b090-b075bbb97543 up in Southbound
Jan 20 14:49:11 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:49:11.246 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a5:02:f1 10.100.0.5'], port_security=['fa:16:3e:a5:02:f1 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'f55cf124-b1d8-47e0-80c6-8b9dd2b3f743', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3379e2b3-ffb2-4391-969b-c9dc51bfbe25', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'acb30fbc0e3749e390d7f867060b5a2a', 'neutron:revision_number': '5', 'neutron:security_group_ids': '19fab802-7db8-4c89-8f8e-8dcfc14d4627', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a0e287ba-f88b-46f5-bb7f-3cc2a74be88e, chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=1c2faab3-ee0b-4878-b090-b075bbb97543) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:49:11 compute-0 nova_compute[250018]: 2026-01-20 14:49:11.247 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:49:11 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:49:11.248 160071 INFO neutron.agent.ovn.metadata.agent [-] Port 1c2faab3-ee0b-4878-b090-b075bbb97543 in datapath 3379e2b3-ffb2-4391-969b-c9dc51bfbe25 bound to our chassis
Jan 20 14:49:11 compute-0 nova_compute[250018]: 2026-01-20 14:49:11.249 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:49:11 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:49:11.249 160071 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3379e2b3-ffb2-4391-969b-c9dc51bfbe25
Jan 20 14:49:11 compute-0 systemd-machined[216401]: New machine qemu-46-instance-00000068.
Jan 20 14:49:11 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:49:11.260 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[5186d2ac-ab35-4e78-992d-e0ba55e86c21]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:49:11 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:49:11.261 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3379e2b3-f1 in ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 14:49:11 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:49:11.264 257604 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3379e2b3-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 14:49:11 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:49:11.264 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[6d05dbe6-e840-4977-a277-afcc92c6c1b4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:49:11 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:49:11.265 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[faf30b7c-bacd-4c87-8d73-9d1c65afd678]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:49:11 compute-0 systemd[1]: Started Virtual Machine qemu-46-instance-00000068.
Jan 20 14:49:11 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:49:11.277 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[e7054841-8600-4706-b1a9-07dee33e144c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:49:11 compute-0 systemd-udevd[310820]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:49:11 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:49:11.290 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[f53609f2-daf8-4534-aad6-d405bed4b533]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:49:11 compute-0 NetworkManager[48960]: <info>  [1768920551.2963] device (tap1c2faab3-ee): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 14:49:11 compute-0 NetworkManager[48960]: <info>  [1768920551.2969] device (tap1c2faab3-ee): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 14:49:11 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:49:11.322 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[0d39a1a7-eef0-45ac-a957-99b65f097b7f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:49:11 compute-0 NetworkManager[48960]: <info>  [1768920551.3312] manager: (tap3379e2b3-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/180)
Jan 20 14:49:11 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:49:11.331 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[fd15827b-884c-444e-8ea1-308e272ef772]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:49:11 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:49:11.365 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[a9155658-f2e4-46a7-a797-21ced87a6faf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:49:11 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:49:11.369 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[5e8e52aa-5555-41d3-9361-6e6a2f6c3b93]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:49:11 compute-0 NetworkManager[48960]: <info>  [1768920551.3942] device (tap3379e2b3-f0): carrier: link connected
Jan 20 14:49:11 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:49:11.397 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[b4e83c45-3533-4715-8e3d-8fb75e849db3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:49:11 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:49:11.418 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[f836fd06-a4b1-462a-a41d-16b8740e0964]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3379e2b3-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f1:86:fe'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 116], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 649211, 'reachable_time': 21315, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 310851, 'error': None, 'target': 'ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:49:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 14:49:11 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:49:11.435 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[64c8bc71-3018-4a96-9ff5-530ca34ea43b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef1:86fe'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 649211, 'tstamp': 649211}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 310852, 'error': None, 'target': 'ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:49:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:49:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 20 14:49:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:49:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00412617194770147 of space, bias 1.0, pg target 1.237851584310441 quantized to 32 (current 32)
Jan 20 14:49:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:49:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0009896487646566163 of space, bias 1.0, pg target 0.2959049806323283 quantized to 32 (current 32)
Jan 20 14:49:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:49:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:49:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:49:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Jan 20 14:49:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:49:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 32)
Jan 20 14:49:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:49:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:49:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:49:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021737739624046157 quantized to 32 (current 32)
Jan 20 14:49:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:49:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Jan 20 14:49:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:49:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:49:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:49:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Jan 20 14:49:11 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:49:11.454 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[7ed86af8-6a8d-4577-8bec-9aad89376abf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3379e2b3-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f1:86:fe'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 116], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 649211, 'reachable_time': 21315, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 310853, 'error': None, 'target': 'ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:49:11 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:49:11.486 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[31c17836-c26a-4e6e-b182-8609b86efda4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:49:11 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:49:11.547 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[08bec05b-0ab4-4616-9e79-93b4a191ca29]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:49:11 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:49:11.549 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3379e2b3-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:49:11 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:49:11.549 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:49:11 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:49:11.550 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3379e2b3-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:49:11 compute-0 nova_compute[250018]: 2026-01-20 14:49:11.551 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:49:11 compute-0 NetworkManager[48960]: <info>  [1768920551.5520] manager: (tap3379e2b3-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/181)
Jan 20 14:49:11 compute-0 kernel: tap3379e2b3-f0: entered promiscuous mode
Jan 20 14:49:11 compute-0 nova_compute[250018]: 2026-01-20 14:49:11.554 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:49:11 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:49:11.555 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3379e2b3-f0, col_values=(('external_ids', {'iface-id': 'b32ddf23-a8dd-4e6d-a410-ccb24b214d35'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:49:11 compute-0 ovn_controller[148666]: 2026-01-20T14:49:11Z|00355|binding|INFO|Releasing lport b32ddf23-a8dd-4e6d-a410-ccb24b214d35 from this chassis (sb_readonly=0)
Jan 20 14:49:11 compute-0 nova_compute[250018]: 2026-01-20 14:49:11.556 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:49:11 compute-0 ceph-mon[74360]: pgmap v1836: 321 pgs: 321 active+clean; 237 MiB data, 898 MiB used, 20 GiB / 21 GiB avail; 587 KiB/s rd, 503 KiB/s wr, 73 op/s
Jan 20 14:49:11 compute-0 nova_compute[250018]: 2026-01-20 14:49:11.576 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:49:11 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:49:11.577 160071 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3379e2b3-ffb2-4391-969b-c9dc51bfbe25.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3379e2b3-ffb2-4391-969b-c9dc51bfbe25.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 14:49:11 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:49:11.578 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[0551c214-f7d5-49ec-94ef-d0acf058f50e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:49:11 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:49:11.579 160071 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 14:49:11 compute-0 ovn_metadata_agent[160049]: global
Jan 20 14:49:11 compute-0 ovn_metadata_agent[160049]:     log         /dev/log local0 debug
Jan 20 14:49:11 compute-0 ovn_metadata_agent[160049]:     log-tag     haproxy-metadata-proxy-3379e2b3-ffb2-4391-969b-c9dc51bfbe25
Jan 20 14:49:11 compute-0 ovn_metadata_agent[160049]:     user        root
Jan 20 14:49:11 compute-0 ovn_metadata_agent[160049]:     group       root
Jan 20 14:49:11 compute-0 ovn_metadata_agent[160049]:     maxconn     1024
Jan 20 14:49:11 compute-0 ovn_metadata_agent[160049]:     pidfile     /var/lib/neutron/external/pids/3379e2b3-ffb2-4391-969b-c9dc51bfbe25.pid.haproxy
Jan 20 14:49:11 compute-0 ovn_metadata_agent[160049]:     daemon
Jan 20 14:49:11 compute-0 ovn_metadata_agent[160049]: 
Jan 20 14:49:11 compute-0 ovn_metadata_agent[160049]: defaults
Jan 20 14:49:11 compute-0 ovn_metadata_agent[160049]:     log global
Jan 20 14:49:11 compute-0 ovn_metadata_agent[160049]:     mode http
Jan 20 14:49:11 compute-0 ovn_metadata_agent[160049]:     option httplog
Jan 20 14:49:11 compute-0 ovn_metadata_agent[160049]:     option dontlognull
Jan 20 14:49:11 compute-0 ovn_metadata_agent[160049]:     option http-server-close
Jan 20 14:49:11 compute-0 ovn_metadata_agent[160049]:     option forwardfor
Jan 20 14:49:11 compute-0 ovn_metadata_agent[160049]:     retries                 3
Jan 20 14:49:11 compute-0 ovn_metadata_agent[160049]:     timeout http-request    30s
Jan 20 14:49:11 compute-0 ovn_metadata_agent[160049]:     timeout connect         30s
Jan 20 14:49:11 compute-0 ovn_metadata_agent[160049]:     timeout client          32s
Jan 20 14:49:11 compute-0 ovn_metadata_agent[160049]:     timeout server          32s
Jan 20 14:49:11 compute-0 ovn_metadata_agent[160049]:     timeout http-keep-alive 30s
Jan 20 14:49:11 compute-0 ovn_metadata_agent[160049]: 
Jan 20 14:49:11 compute-0 ovn_metadata_agent[160049]: 
Jan 20 14:49:11 compute-0 ovn_metadata_agent[160049]: listen listener
Jan 20 14:49:11 compute-0 ovn_metadata_agent[160049]:     bind 169.254.169.254:80
Jan 20 14:49:11 compute-0 ovn_metadata_agent[160049]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 14:49:11 compute-0 ovn_metadata_agent[160049]:     http-request add-header X-OVN-Network-ID 3379e2b3-ffb2-4391-969b-c9dc51bfbe25
Jan 20 14:49:11 compute-0 ovn_metadata_agent[160049]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 14:49:11 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:49:11.579 160071 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25', 'env', 'PROCESS_TAG=haproxy-3379e2b3-ffb2-4391-969b-c9dc51bfbe25', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3379e2b3-ffb2-4391-969b-c9dc51bfbe25.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 14:49:11 compute-0 nova_compute[250018]: 2026-01-20 14:49:11.727 250022 DEBUG nova.virt.libvirt.host [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Removed pending event for f55cf124-b1d8-47e0-80c6-8b9dd2b3f743 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Jan 20 14:49:11 compute-0 nova_compute[250018]: 2026-01-20 14:49:11.727 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768920551.7265382, f55cf124-b1d8-47e0-80c6-8b9dd2b3f743 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:49:11 compute-0 nova_compute[250018]: 2026-01-20 14:49:11.728 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: f55cf124-b1d8-47e0-80c6-8b9dd2b3f743] VM Started (Lifecycle Event)
Jan 20 14:49:11 compute-0 nova_compute[250018]: 2026-01-20 14:49:11.731 250022 DEBUG nova.compute.manager [None req-0ee50b09-3218-49d0-86f0-21d46a1ab377 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: f55cf124-b1d8-47e0-80c6-8b9dd2b3f743] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 14:49:11 compute-0 nova_compute[250018]: 2026-01-20 14:49:11.740 250022 DEBUG nova.virt.libvirt.driver [None req-0ee50b09-3218-49d0-86f0-21d46a1ab377 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: f55cf124-b1d8-47e0-80c6-8b9dd2b3f743] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 14:49:11 compute-0 nova_compute[250018]: 2026-01-20 14:49:11.745 250022 INFO nova.virt.libvirt.driver [-] [instance: f55cf124-b1d8-47e0-80c6-8b9dd2b3f743] Instance spawned successfully.
Jan 20 14:49:11 compute-0 nova_compute[250018]: 2026-01-20 14:49:11.746 250022 DEBUG nova.virt.libvirt.driver [None req-0ee50b09-3218-49d0-86f0-21d46a1ab377 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: f55cf124-b1d8-47e0-80c6-8b9dd2b3f743] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 14:49:11 compute-0 nova_compute[250018]: 2026-01-20 14:49:11.765 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: f55cf124-b1d8-47e0-80c6-8b9dd2b3f743] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:49:11 compute-0 nova_compute[250018]: 2026-01-20 14:49:11.769 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: f55cf124-b1d8-47e0-80c6-8b9dd2b3f743] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:49:11 compute-0 nova_compute[250018]: 2026-01-20 14:49:11.805 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: f55cf124-b1d8-47e0-80c6-8b9dd2b3f743] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Jan 20 14:49:11 compute-0 nova_compute[250018]: 2026-01-20 14:49:11.805 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768920551.7274573, f55cf124-b1d8-47e0-80c6-8b9dd2b3f743 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:49:11 compute-0 nova_compute[250018]: 2026-01-20 14:49:11.805 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: f55cf124-b1d8-47e0-80c6-8b9dd2b3f743] VM Paused (Lifecycle Event)
Jan 20 14:49:11 compute-0 nova_compute[250018]: 2026-01-20 14:49:11.817 250022 DEBUG nova.virt.libvirt.driver [None req-0ee50b09-3218-49d0-86f0-21d46a1ab377 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: f55cf124-b1d8-47e0-80c6-8b9dd2b3f743] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:49:11 compute-0 nova_compute[250018]: 2026-01-20 14:49:11.818 250022 DEBUG nova.virt.libvirt.driver [None req-0ee50b09-3218-49d0-86f0-21d46a1ab377 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: f55cf124-b1d8-47e0-80c6-8b9dd2b3f743] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:49:11 compute-0 nova_compute[250018]: 2026-01-20 14:49:11.819 250022 DEBUG nova.virt.libvirt.driver [None req-0ee50b09-3218-49d0-86f0-21d46a1ab377 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: f55cf124-b1d8-47e0-80c6-8b9dd2b3f743] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:49:11 compute-0 nova_compute[250018]: 2026-01-20 14:49:11.819 250022 DEBUG nova.virt.libvirt.driver [None req-0ee50b09-3218-49d0-86f0-21d46a1ab377 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: f55cf124-b1d8-47e0-80c6-8b9dd2b3f743] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:49:11 compute-0 nova_compute[250018]: 2026-01-20 14:49:11.820 250022 DEBUG nova.virt.libvirt.driver [None req-0ee50b09-3218-49d0-86f0-21d46a1ab377 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: f55cf124-b1d8-47e0-80c6-8b9dd2b3f743] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:49:11 compute-0 nova_compute[250018]: 2026-01-20 14:49:11.821 250022 DEBUG nova.virt.libvirt.driver [None req-0ee50b09-3218-49d0-86f0-21d46a1ab377 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: f55cf124-b1d8-47e0-80c6-8b9dd2b3f743] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:49:11 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1837: 321 pgs: 321 active+clean; 188 MiB data, 869 MiB used, 20 GiB / 21 GiB avail; 692 KiB/s rd, 2.0 MiB/s wr, 137 op/s
Jan 20 14:49:11 compute-0 nova_compute[250018]: 2026-01-20 14:49:11.886 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: f55cf124-b1d8-47e0-80c6-8b9dd2b3f743] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:49:11 compute-0 nova_compute[250018]: 2026-01-20 14:49:11.889 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768920551.739459, f55cf124-b1d8-47e0-80c6-8b9dd2b3f743 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:49:11 compute-0 nova_compute[250018]: 2026-01-20 14:49:11.890 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: f55cf124-b1d8-47e0-80c6-8b9dd2b3f743] VM Resumed (Lifecycle Event)
Jan 20 14:49:11 compute-0 nova_compute[250018]: 2026-01-20 14:49:11.917 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: f55cf124-b1d8-47e0-80c6-8b9dd2b3f743] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:49:11 compute-0 nova_compute[250018]: 2026-01-20 14:49:11.920 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: f55cf124-b1d8-47e0-80c6-8b9dd2b3f743] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:49:11 compute-0 nova_compute[250018]: 2026-01-20 14:49:11.946 250022 DEBUG nova.compute.manager [None req-0ee50b09-3218-49d0-86f0-21d46a1ab377 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: f55cf124-b1d8-47e0-80c6-8b9dd2b3f743] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:49:11 compute-0 podman[310929]: 2026-01-20 14:49:11.959980448 +0000 UTC m=+0.052112471 container create ad2722c13541430dcc24306b17e0b12a427bf6486a861de5b6aaeb4fc8c50513 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 20 14:49:11 compute-0 nova_compute[250018]: 2026-01-20 14:49:11.974 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: f55cf124-b1d8-47e0-80c6-8b9dd2b3f743] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Jan 20 14:49:12 compute-0 nova_compute[250018]: 2026-01-20 14:49:12.008 250022 DEBUG oslo_concurrency.lockutils [None req-0ee50b09-3218-49d0-86f0-21d46a1ab377 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:49:12 compute-0 nova_compute[250018]: 2026-01-20 14:49:12.009 250022 DEBUG oslo_concurrency.lockutils [None req-0ee50b09-3218-49d0-86f0-21d46a1ab377 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:49:12 compute-0 nova_compute[250018]: 2026-01-20 14:49:12.009 250022 DEBUG nova.objects.instance [None req-0ee50b09-3218-49d0-86f0-21d46a1ab377 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: f55cf124-b1d8-47e0-80c6-8b9dd2b3f743] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Jan 20 14:49:12 compute-0 systemd[1]: Started libpod-conmon-ad2722c13541430dcc24306b17e0b12a427bf6486a861de5b6aaeb4fc8c50513.scope.
Jan 20 14:49:12 compute-0 podman[310929]: 2026-01-20 14:49:11.93435963 +0000 UTC m=+0.026491673 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 14:49:12 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:49:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02c38333e4f31886bc0a57507e520bed52c4513bb8f1d9d5e22740cc5340a442/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 14:49:12 compute-0 podman[310929]: 2026-01-20 14:49:12.059066512 +0000 UTC m=+0.151198585 container init ad2722c13541430dcc24306b17e0b12a427bf6486a861de5b6aaeb4fc8c50513 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202)
Jan 20 14:49:12 compute-0 podman[310929]: 2026-01-20 14:49:12.064072296 +0000 UTC m=+0.156204329 container start ad2722c13541430dcc24306b17e0b12a427bf6486a861de5b6aaeb4fc8c50513 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 20 14:49:12 compute-0 neutron-haproxy-ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25[310945]: [NOTICE]   (310949) : New worker (310951) forked
Jan 20 14:49:12 compute-0 neutron-haproxy-ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25[310945]: [NOTICE]   (310949) : Loading success.
Jan 20 14:49:12 compute-0 nova_compute[250018]: 2026-01-20 14:49:12.131 250022 DEBUG oslo_concurrency.lockutils [None req-0ee50b09-3218-49d0-86f0-21d46a1ab377 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.123s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:49:12 compute-0 nova_compute[250018]: 2026-01-20 14:49:12.248 250022 DEBUG nova.compute.manager [req-3fd3e34d-0347-4c36-84ae-4bc9ca525a52 req-d00d64c5-1d22-40ed-b77b-36938326b63e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f55cf124-b1d8-47e0-80c6-8b9dd2b3f743] Received event network-vif-plugged-1c2faab3-ee0b-4878-b090-b075bbb97543 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:49:12 compute-0 nova_compute[250018]: 2026-01-20 14:49:12.249 250022 DEBUG oslo_concurrency.lockutils [req-3fd3e34d-0347-4c36-84ae-4bc9ca525a52 req-d00d64c5-1d22-40ed-b77b-36938326b63e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "f55cf124-b1d8-47e0-80c6-8b9dd2b3f743-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:49:12 compute-0 nova_compute[250018]: 2026-01-20 14:49:12.249 250022 DEBUG oslo_concurrency.lockutils [req-3fd3e34d-0347-4c36-84ae-4bc9ca525a52 req-d00d64c5-1d22-40ed-b77b-36938326b63e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "f55cf124-b1d8-47e0-80c6-8b9dd2b3f743-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:49:12 compute-0 nova_compute[250018]: 2026-01-20 14:49:12.249 250022 DEBUG oslo_concurrency.lockutils [req-3fd3e34d-0347-4c36-84ae-4bc9ca525a52 req-d00d64c5-1d22-40ed-b77b-36938326b63e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "f55cf124-b1d8-47e0-80c6-8b9dd2b3f743-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:49:12 compute-0 nova_compute[250018]: 2026-01-20 14:49:12.250 250022 DEBUG nova.compute.manager [req-3fd3e34d-0347-4c36-84ae-4bc9ca525a52 req-d00d64c5-1d22-40ed-b77b-36938326b63e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f55cf124-b1d8-47e0-80c6-8b9dd2b3f743] No waiting events found dispatching network-vif-plugged-1c2faab3-ee0b-4878-b090-b075bbb97543 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:49:12 compute-0 nova_compute[250018]: 2026-01-20 14:49:12.250 250022 WARNING nova.compute.manager [req-3fd3e34d-0347-4c36-84ae-4bc9ca525a52 req-d00d64c5-1d22-40ed-b77b-36938326b63e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f55cf124-b1d8-47e0-80c6-8b9dd2b3f743] Received unexpected event network-vif-plugged-1c2faab3-ee0b-4878-b090-b075bbb97543 for instance with vm_state active and task_state None.
Jan 20 14:49:12 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:49:12 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:49:12 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:49:12.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:49:12 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:49:12 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:49:12 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:49:12.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:49:13 compute-0 nova_compute[250018]: 2026-01-20 14:49:13.006 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:49:13 compute-0 ceph-mon[74360]: pgmap v1837: 321 pgs: 321 active+clean; 188 MiB data, 869 MiB used, 20 GiB / 21 GiB avail; 692 KiB/s rd, 2.0 MiB/s wr, 137 op/s
Jan 20 14:49:13 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1838: 321 pgs: 321 active+clean; 134 MiB data, 837 MiB used, 20 GiB / 21 GiB avail; 709 KiB/s rd, 2.2 MiB/s wr, 165 op/s
Jan 20 14:49:13 compute-0 nova_compute[250018]: 2026-01-20 14:49:13.865 250022 DEBUG oslo_concurrency.lockutils [None req-4cc67927-e23f-4220-a06b-a16dd94bec19 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Acquiring lock "f55cf124-b1d8-47e0-80c6-8b9dd2b3f743" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:49:13 compute-0 nova_compute[250018]: 2026-01-20 14:49:13.866 250022 DEBUG oslo_concurrency.lockutils [None req-4cc67927-e23f-4220-a06b-a16dd94bec19 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lock "f55cf124-b1d8-47e0-80c6-8b9dd2b3f743" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:49:13 compute-0 nova_compute[250018]: 2026-01-20 14:49:13.866 250022 DEBUG oslo_concurrency.lockutils [None req-4cc67927-e23f-4220-a06b-a16dd94bec19 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Acquiring lock "f55cf124-b1d8-47e0-80c6-8b9dd2b3f743-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:49:13 compute-0 nova_compute[250018]: 2026-01-20 14:49:13.866 250022 DEBUG oslo_concurrency.lockutils [None req-4cc67927-e23f-4220-a06b-a16dd94bec19 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lock "f55cf124-b1d8-47e0-80c6-8b9dd2b3f743-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:49:13 compute-0 nova_compute[250018]: 2026-01-20 14:49:13.867 250022 DEBUG oslo_concurrency.lockutils [None req-4cc67927-e23f-4220-a06b-a16dd94bec19 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lock "f55cf124-b1d8-47e0-80c6-8b9dd2b3f743-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:49:13 compute-0 nova_compute[250018]: 2026-01-20 14:49:13.868 250022 INFO nova.compute.manager [None req-4cc67927-e23f-4220-a06b-a16dd94bec19 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: f55cf124-b1d8-47e0-80c6-8b9dd2b3f743] Terminating instance
Jan 20 14:49:13 compute-0 nova_compute[250018]: 2026-01-20 14:49:13.869 250022 DEBUG nova.compute.manager [None req-4cc67927-e23f-4220-a06b-a16dd94bec19 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: f55cf124-b1d8-47e0-80c6-8b9dd2b3f743] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 14:49:13 compute-0 kernel: tap1c2faab3-ee (unregistering): left promiscuous mode
Jan 20 14:49:13 compute-0 NetworkManager[48960]: <info>  [1768920553.9376] device (tap1c2faab3-ee): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 14:49:13 compute-0 nova_compute[250018]: 2026-01-20 14:49:13.944 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:49:13 compute-0 ovn_controller[148666]: 2026-01-20T14:49:13Z|00356|binding|INFO|Releasing lport 1c2faab3-ee0b-4878-b090-b075bbb97543 from this chassis (sb_readonly=0)
Jan 20 14:49:13 compute-0 ovn_controller[148666]: 2026-01-20T14:49:13Z|00357|binding|INFO|Setting lport 1c2faab3-ee0b-4878-b090-b075bbb97543 down in Southbound
Jan 20 14:49:13 compute-0 ovn_controller[148666]: 2026-01-20T14:49:13Z|00358|binding|INFO|Removing iface tap1c2faab3-ee ovn-installed in OVS
Jan 20 14:49:13 compute-0 nova_compute[250018]: 2026-01-20 14:49:13.946 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:49:13 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:49:13.957 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a5:02:f1 10.100.0.5'], port_security=['fa:16:3e:a5:02:f1 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'f55cf124-b1d8-47e0-80c6-8b9dd2b3f743', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3379e2b3-ffb2-4391-969b-c9dc51bfbe25', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'acb30fbc0e3749e390d7f867060b5a2a', 'neutron:revision_number': '6', 'neutron:security_group_ids': '19fab802-7db8-4c89-8f8e-8dcfc14d4627', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a0e287ba-f88b-46f5-bb7f-3cc2a74be88e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=1c2faab3-ee0b-4878-b090-b075bbb97543) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:49:13 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:49:13.958 160071 INFO neutron.agent.ovn.metadata.agent [-] Port 1c2faab3-ee0b-4878-b090-b075bbb97543 in datapath 3379e2b3-ffb2-4391-969b-c9dc51bfbe25 unbound from our chassis
Jan 20 14:49:13 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:49:13.959 160071 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3379e2b3-ffb2-4391-969b-c9dc51bfbe25, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 14:49:13 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:49:13.960 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[4a122e9a-86cc-49d0-9abd-bb53375a9e75]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:49:13 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:49:13.960 160071 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25 namespace which is not needed anymore
Jan 20 14:49:13 compute-0 nova_compute[250018]: 2026-01-20 14:49:13.981 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:49:14 compute-0 systemd[1]: machine-qemu\x2d46\x2dinstance\x2d00000068.scope: Deactivated successfully.
Jan 20 14:49:14 compute-0 systemd[1]: machine-qemu\x2d46\x2dinstance\x2d00000068.scope: Consumed 2.695s CPU time.
Jan 20 14:49:14 compute-0 systemd-machined[216401]: Machine qemu-46-instance-00000068 terminated.
Jan 20 14:49:14 compute-0 NetworkManager[48960]: <info>  [1768920554.0851] manager: (tap1c2faab3-ee): new Tun device (/org/freedesktop/NetworkManager/Devices/182)
Jan 20 14:49:14 compute-0 neutron-haproxy-ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25[310945]: [NOTICE]   (310949) : haproxy version is 2.8.14-c23fe91
Jan 20 14:49:14 compute-0 neutron-haproxy-ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25[310945]: [NOTICE]   (310949) : path to executable is /usr/sbin/haproxy
Jan 20 14:49:14 compute-0 neutron-haproxy-ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25[310945]: [WARNING]  (310949) : Exiting Master process...
Jan 20 14:49:14 compute-0 neutron-haproxy-ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25[310945]: [WARNING]  (310949) : Exiting Master process...
Jan 20 14:49:14 compute-0 neutron-haproxy-ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25[310945]: [ALERT]    (310949) : Current worker (310951) exited with code 143 (Terminated)
Jan 20 14:49:14 compute-0 neutron-haproxy-ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25[310945]: [WARNING]  (310949) : All workers exited. Exiting... (0)
Jan 20 14:49:14 compute-0 systemd[1]: libpod-ad2722c13541430dcc24306b17e0b12a427bf6486a861de5b6aaeb4fc8c50513.scope: Deactivated successfully.
Jan 20 14:49:14 compute-0 podman[310983]: 2026-01-20 14:49:14.100043104 +0000 UTC m=+0.048180069 container died ad2722c13541430dcc24306b17e0b12a427bf6486a861de5b6aaeb4fc8c50513 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:49:14 compute-0 nova_compute[250018]: 2026-01-20 14:49:14.107 250022 INFO nova.virt.libvirt.driver [-] [instance: f55cf124-b1d8-47e0-80c6-8b9dd2b3f743] Instance destroyed successfully.
Jan 20 14:49:14 compute-0 nova_compute[250018]: 2026-01-20 14:49:14.108 250022 DEBUG nova.objects.instance [None req-4cc67927-e23f-4220-a06b-a16dd94bec19 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lazy-loading 'resources' on Instance uuid f55cf124-b1d8-47e0-80c6-8b9dd2b3f743 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:49:14 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ad2722c13541430dcc24306b17e0b12a427bf6486a861de5b6aaeb4fc8c50513-userdata-shm.mount: Deactivated successfully.
Jan 20 14:49:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-02c38333e4f31886bc0a57507e520bed52c4513bb8f1d9d5e22740cc5340a442-merged.mount: Deactivated successfully.
Jan 20 14:49:14 compute-0 nova_compute[250018]: 2026-01-20 14:49:14.131 250022 DEBUG nova.virt.libvirt.vif [None req-4cc67927-e23f-4220-a06b-a16dd94bec19 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-20T14:48:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-815005966',display_name='tempest-ServerDiskConfigTestJSON-server-815005966',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-815005966',id=104,image_ref='26699514-f465-4b50-98b7-36f2cfc6a308',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:49:11Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='acb30fbc0e3749e390d7f867060b5a2a',ramdisk_id='',reservation_id='r-walplx2b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='26699514-f465-4b50-98b7-36f2cfc6a308',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-1806346246',owner_user_name='tempest-ServerDiskConfigTestJSON-1806346246-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T14:49:12Z,user_data=None,user_id='a1bd93d04cc4468abe1d5c61f5144191',uuid=f55cf124-b1d8-47e0-80c6-8b9dd2b3f743,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1c2faab3-ee0b-4878-b090-b075bbb97543", "address": "fa:16:3e:a5:02:f1", "network": {"id": "3379e2b3-ffb2-4391-969b-c9dc51bfbe25", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1112843240-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acb30fbc0e3749e390d7f867060b5a2a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c2faab3-ee", "ovs_interfaceid": "1c2faab3-ee0b-4878-b090-b075bbb97543", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 14:49:14 compute-0 nova_compute[250018]: 2026-01-20 14:49:14.132 250022 DEBUG nova.network.os_vif_util [None req-4cc67927-e23f-4220-a06b-a16dd94bec19 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Converting VIF {"id": "1c2faab3-ee0b-4878-b090-b075bbb97543", "address": "fa:16:3e:a5:02:f1", "network": {"id": "3379e2b3-ffb2-4391-969b-c9dc51bfbe25", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1112843240-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acb30fbc0e3749e390d7f867060b5a2a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c2faab3-ee", "ovs_interfaceid": "1c2faab3-ee0b-4878-b090-b075bbb97543", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:49:14 compute-0 nova_compute[250018]: 2026-01-20 14:49:14.133 250022 DEBUG nova.network.os_vif_util [None req-4cc67927-e23f-4220-a06b-a16dd94bec19 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a5:02:f1,bridge_name='br-int',has_traffic_filtering=True,id=1c2faab3-ee0b-4878-b090-b075bbb97543,network=Network(3379e2b3-ffb2-4391-969b-c9dc51bfbe25),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c2faab3-ee') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:49:14 compute-0 nova_compute[250018]: 2026-01-20 14:49:14.133 250022 DEBUG os_vif [None req-4cc67927-e23f-4220-a06b-a16dd94bec19 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a5:02:f1,bridge_name='br-int',has_traffic_filtering=True,id=1c2faab3-ee0b-4878-b090-b075bbb97543,network=Network(3379e2b3-ffb2-4391-969b-c9dc51bfbe25),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c2faab3-ee') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 14:49:14 compute-0 nova_compute[250018]: 2026-01-20 14:49:14.135 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:49:14 compute-0 nova_compute[250018]: 2026-01-20 14:49:14.135 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1c2faab3-ee, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:49:14 compute-0 podman[310983]: 2026-01-20 14:49:14.137135471 +0000 UTC m=+0.085272426 container cleanup ad2722c13541430dcc24306b17e0b12a427bf6486a861de5b6aaeb4fc8c50513 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 20 14:49:14 compute-0 systemd[1]: libpod-conmon-ad2722c13541430dcc24306b17e0b12a427bf6486a861de5b6aaeb4fc8c50513.scope: Deactivated successfully.
Jan 20 14:49:14 compute-0 nova_compute[250018]: 2026-01-20 14:49:14.182 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:49:14 compute-0 nova_compute[250018]: 2026-01-20 14:49:14.185 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:49:14 compute-0 nova_compute[250018]: 2026-01-20 14:49:14.187 250022 INFO os_vif [None req-4cc67927-e23f-4220-a06b-a16dd94bec19 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a5:02:f1,bridge_name='br-int',has_traffic_filtering=True,id=1c2faab3-ee0b-4878-b090-b075bbb97543,network=Network(3379e2b3-ffb2-4391-969b-c9dc51bfbe25),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c2faab3-ee')
Jan 20 14:49:14 compute-0 podman[311021]: 2026-01-20 14:49:14.360007199 +0000 UTC m=+0.156744909 container remove ad2722c13541430dcc24306b17e0b12a427bf6486a861de5b6aaeb4fc8c50513 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 20 14:49:14 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:49:14.369 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[c36f51e4-3166-4ab8-bf38-6b5bd5f517d2]: (4, ('Tue Jan 20 02:49:14 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25 (ad2722c13541430dcc24306b17e0b12a427bf6486a861de5b6aaeb4fc8c50513)\nad2722c13541430dcc24306b17e0b12a427bf6486a861de5b6aaeb4fc8c50513\nTue Jan 20 02:49:14 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25 (ad2722c13541430dcc24306b17e0b12a427bf6486a861de5b6aaeb4fc8c50513)\nad2722c13541430dcc24306b17e0b12a427bf6486a861de5b6aaeb4fc8c50513\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:49:14 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:49:14.371 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[cb8365a3-e6b8-4cc4-b41a-582714b973d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:49:14 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:49:14.372 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3379e2b3-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:49:14 compute-0 kernel: tap3379e2b3-f0: left promiscuous mode
Jan 20 14:49:14 compute-0 nova_compute[250018]: 2026-01-20 14:49:14.375 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:49:14 compute-0 nova_compute[250018]: 2026-01-20 14:49:14.388 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:49:14 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:49:14.392 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[3a77135e-bcd1-4841-b6fc-96372f564e90]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:49:14 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:49:14.412 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[af143e10-7cd3-458b-93b8-0552fa430484]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:49:14 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:49:14.414 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[13f7bb9f-898e-432f-9c24-6f352bb0971d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:49:14 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:49:14.428 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[3bf3b6d6-95de-4940-9d7d-27a8432d4546]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 649203, 'reachable_time': 41127, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 311048, 'error': None, 'target': 'ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:49:14 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:49:14.431 160517 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 14:49:14 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:49:14.432 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[48e95fa2-aefd-4b2e-a19f-ba9d2688c777]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:49:14 compute-0 systemd[1]: run-netns-ovnmeta\x2d3379e2b3\x2dffb2\x2d4391\x2d969b\x2dc9dc51bfbe25.mount: Deactivated successfully.
Jan 20 14:49:14 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e244 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:49:14 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:49:14 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:49:14 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:49:14.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:49:14 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:49:14 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:49:14 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:49:14.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:49:14 compute-0 nova_compute[250018]: 2026-01-20 14:49:14.624 250022 DEBUG nova.compute.manager [req-821c8ade-6cd7-4efb-b786-17fd537f9f55 req-b6641aee-e224-4712-bb66-fcab0b6f9afb 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f55cf124-b1d8-47e0-80c6-8b9dd2b3f743] Received event network-vif-plugged-1c2faab3-ee0b-4878-b090-b075bbb97543 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:49:14 compute-0 nova_compute[250018]: 2026-01-20 14:49:14.624 250022 DEBUG oslo_concurrency.lockutils [req-821c8ade-6cd7-4efb-b786-17fd537f9f55 req-b6641aee-e224-4712-bb66-fcab0b6f9afb 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "f55cf124-b1d8-47e0-80c6-8b9dd2b3f743-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:49:14 compute-0 nova_compute[250018]: 2026-01-20 14:49:14.624 250022 DEBUG oslo_concurrency.lockutils [req-821c8ade-6cd7-4efb-b786-17fd537f9f55 req-b6641aee-e224-4712-bb66-fcab0b6f9afb 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "f55cf124-b1d8-47e0-80c6-8b9dd2b3f743-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:49:14 compute-0 nova_compute[250018]: 2026-01-20 14:49:14.625 250022 DEBUG oslo_concurrency.lockutils [req-821c8ade-6cd7-4efb-b786-17fd537f9f55 req-b6641aee-e224-4712-bb66-fcab0b6f9afb 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "f55cf124-b1d8-47e0-80c6-8b9dd2b3f743-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:49:14 compute-0 nova_compute[250018]: 2026-01-20 14:49:14.625 250022 DEBUG nova.compute.manager [req-821c8ade-6cd7-4efb-b786-17fd537f9f55 req-b6641aee-e224-4712-bb66-fcab0b6f9afb 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f55cf124-b1d8-47e0-80c6-8b9dd2b3f743] No waiting events found dispatching network-vif-plugged-1c2faab3-ee0b-4878-b090-b075bbb97543 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:49:14 compute-0 nova_compute[250018]: 2026-01-20 14:49:14.625 250022 WARNING nova.compute.manager [req-821c8ade-6cd7-4efb-b786-17fd537f9f55 req-b6641aee-e224-4712-bb66-fcab0b6f9afb 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f55cf124-b1d8-47e0-80c6-8b9dd2b3f743] Received unexpected event network-vif-plugged-1c2faab3-ee0b-4878-b090-b075bbb97543 for instance with vm_state active and task_state deleting.
Jan 20 14:49:14 compute-0 nova_compute[250018]: 2026-01-20 14:49:14.625 250022 DEBUG nova.compute.manager [req-821c8ade-6cd7-4efb-b786-17fd537f9f55 req-b6641aee-e224-4712-bb66-fcab0b6f9afb 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f55cf124-b1d8-47e0-80c6-8b9dd2b3f743] Received event network-vif-unplugged-1c2faab3-ee0b-4878-b090-b075bbb97543 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:49:14 compute-0 nova_compute[250018]: 2026-01-20 14:49:14.626 250022 DEBUG oslo_concurrency.lockutils [req-821c8ade-6cd7-4efb-b786-17fd537f9f55 req-b6641aee-e224-4712-bb66-fcab0b6f9afb 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "f55cf124-b1d8-47e0-80c6-8b9dd2b3f743-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:49:14 compute-0 nova_compute[250018]: 2026-01-20 14:49:14.626 250022 DEBUG oslo_concurrency.lockutils [req-821c8ade-6cd7-4efb-b786-17fd537f9f55 req-b6641aee-e224-4712-bb66-fcab0b6f9afb 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "f55cf124-b1d8-47e0-80c6-8b9dd2b3f743-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:49:14 compute-0 nova_compute[250018]: 2026-01-20 14:49:14.626 250022 DEBUG oslo_concurrency.lockutils [req-821c8ade-6cd7-4efb-b786-17fd537f9f55 req-b6641aee-e224-4712-bb66-fcab0b6f9afb 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "f55cf124-b1d8-47e0-80c6-8b9dd2b3f743-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:49:14 compute-0 nova_compute[250018]: 2026-01-20 14:49:14.626 250022 DEBUG nova.compute.manager [req-821c8ade-6cd7-4efb-b786-17fd537f9f55 req-b6641aee-e224-4712-bb66-fcab0b6f9afb 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f55cf124-b1d8-47e0-80c6-8b9dd2b3f743] No waiting events found dispatching network-vif-unplugged-1c2faab3-ee0b-4878-b090-b075bbb97543 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:49:14 compute-0 nova_compute[250018]: 2026-01-20 14:49:14.626 250022 DEBUG nova.compute.manager [req-821c8ade-6cd7-4efb-b786-17fd537f9f55 req-b6641aee-e224-4712-bb66-fcab0b6f9afb 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f55cf124-b1d8-47e0-80c6-8b9dd2b3f743] Received event network-vif-unplugged-1c2faab3-ee0b-4878-b090-b075bbb97543 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 20 14:49:14 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/95823729' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:49:14 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/95823729' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:49:15 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1839: 321 pgs: 321 active+clean; 107 MiB data, 836 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 2.2 MiB/s wr, 258 op/s
Jan 20 14:49:16 compute-0 ceph-mon[74360]: pgmap v1838: 321 pgs: 321 active+clean; 134 MiB data, 837 MiB used, 20 GiB / 21 GiB avail; 709 KiB/s rd, 2.2 MiB/s wr, 165 op/s
Jan 20 14:49:16 compute-0 nova_compute[250018]: 2026-01-20 14:49:16.091 250022 INFO nova.virt.libvirt.driver [None req-4cc67927-e23f-4220-a06b-a16dd94bec19 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: f55cf124-b1d8-47e0-80c6-8b9dd2b3f743] Deleting instance files /var/lib/nova/instances/f55cf124-b1d8-47e0-80c6-8b9dd2b3f743_del
Jan 20 14:49:16 compute-0 nova_compute[250018]: 2026-01-20 14:49:16.091 250022 INFO nova.virt.libvirt.driver [None req-4cc67927-e23f-4220-a06b-a16dd94bec19 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: f55cf124-b1d8-47e0-80c6-8b9dd2b3f743] Deletion of /var/lib/nova/instances/f55cf124-b1d8-47e0-80c6-8b9dd2b3f743_del complete
Jan 20 14:49:16 compute-0 nova_compute[250018]: 2026-01-20 14:49:16.156 250022 INFO nova.compute.manager [None req-4cc67927-e23f-4220-a06b-a16dd94bec19 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: f55cf124-b1d8-47e0-80c6-8b9dd2b3f743] Took 2.29 seconds to destroy the instance on the hypervisor.
Jan 20 14:49:16 compute-0 nova_compute[250018]: 2026-01-20 14:49:16.156 250022 DEBUG oslo.service.loopingcall [None req-4cc67927-e23f-4220-a06b-a16dd94bec19 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 14:49:16 compute-0 nova_compute[250018]: 2026-01-20 14:49:16.156 250022 DEBUG nova.compute.manager [-] [instance: f55cf124-b1d8-47e0-80c6-8b9dd2b3f743] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 14:49:16 compute-0 nova_compute[250018]: 2026-01-20 14:49:16.157 250022 DEBUG nova.network.neutron [-] [instance: f55cf124-b1d8-47e0-80c6-8b9dd2b3f743] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 14:49:16 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:49:16 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:49:16 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:49:16.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:49:16 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:49:16 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:49:16 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:49:16.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:49:16 compute-0 nova_compute[250018]: 2026-01-20 14:49:16.802 250022 DEBUG nova.compute.manager [req-98c58b86-e362-44c2-ba62-bdee60d9a8b6 req-fa43af9a-7dbb-4f75-aa5e-b969b9c859d7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f55cf124-b1d8-47e0-80c6-8b9dd2b3f743] Received event network-vif-plugged-1c2faab3-ee0b-4878-b090-b075bbb97543 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:49:16 compute-0 nova_compute[250018]: 2026-01-20 14:49:16.802 250022 DEBUG oslo_concurrency.lockutils [req-98c58b86-e362-44c2-ba62-bdee60d9a8b6 req-fa43af9a-7dbb-4f75-aa5e-b969b9c859d7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "f55cf124-b1d8-47e0-80c6-8b9dd2b3f743-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:49:16 compute-0 nova_compute[250018]: 2026-01-20 14:49:16.803 250022 DEBUG oslo_concurrency.lockutils [req-98c58b86-e362-44c2-ba62-bdee60d9a8b6 req-fa43af9a-7dbb-4f75-aa5e-b969b9c859d7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "f55cf124-b1d8-47e0-80c6-8b9dd2b3f743-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:49:16 compute-0 nova_compute[250018]: 2026-01-20 14:49:16.803 250022 DEBUG oslo_concurrency.lockutils [req-98c58b86-e362-44c2-ba62-bdee60d9a8b6 req-fa43af9a-7dbb-4f75-aa5e-b969b9c859d7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "f55cf124-b1d8-47e0-80c6-8b9dd2b3f743-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:49:16 compute-0 nova_compute[250018]: 2026-01-20 14:49:16.803 250022 DEBUG nova.compute.manager [req-98c58b86-e362-44c2-ba62-bdee60d9a8b6 req-fa43af9a-7dbb-4f75-aa5e-b969b9c859d7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f55cf124-b1d8-47e0-80c6-8b9dd2b3f743] No waiting events found dispatching network-vif-plugged-1c2faab3-ee0b-4878-b090-b075bbb97543 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:49:16 compute-0 nova_compute[250018]: 2026-01-20 14:49:16.803 250022 WARNING nova.compute.manager [req-98c58b86-e362-44c2-ba62-bdee60d9a8b6 req-fa43af9a-7dbb-4f75-aa5e-b969b9c859d7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f55cf124-b1d8-47e0-80c6-8b9dd2b3f743] Received unexpected event network-vif-plugged-1c2faab3-ee0b-4878-b090-b075bbb97543 for instance with vm_state active and task_state deleting.
Jan 20 14:49:17 compute-0 ceph-mon[74360]: pgmap v1839: 321 pgs: 321 active+clean; 107 MiB data, 836 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 2.2 MiB/s wr, 258 op/s
Jan 20 14:49:17 compute-0 nova_compute[250018]: 2026-01-20 14:49:17.188 250022 DEBUG nova.network.neutron [-] [instance: f55cf124-b1d8-47e0-80c6-8b9dd2b3f743] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:49:17 compute-0 nova_compute[250018]: 2026-01-20 14:49:17.219 250022 INFO nova.compute.manager [-] [instance: f55cf124-b1d8-47e0-80c6-8b9dd2b3f743] Took 1.06 seconds to deallocate network for instance.
Jan 20 14:49:17 compute-0 nova_compute[250018]: 2026-01-20 14:49:17.334 250022 DEBUG oslo_concurrency.lockutils [None req-4cc67927-e23f-4220-a06b-a16dd94bec19 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:49:17 compute-0 nova_compute[250018]: 2026-01-20 14:49:17.335 250022 DEBUG oslo_concurrency.lockutils [None req-4cc67927-e23f-4220-a06b-a16dd94bec19 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:49:17 compute-0 nova_compute[250018]: 2026-01-20 14:49:17.345 250022 DEBUG nova.compute.manager [req-776d1b23-4b29-42e7-be85-b50b4bfdd218 req-ddee7d49-51d0-42aa-8abc-f24e1550dd97 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f55cf124-b1d8-47e0-80c6-8b9dd2b3f743] Received event network-vif-deleted-1c2faab3-ee0b-4878-b090-b075bbb97543 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:49:17 compute-0 nova_compute[250018]: 2026-01-20 14:49:17.583 250022 DEBUG oslo_concurrency.processutils [None req-4cc67927-e23f-4220-a06b-a16dd94bec19 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:49:17 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1840: 321 pgs: 321 active+clean; 101 MiB data, 829 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 2.2 MiB/s wr, 238 op/s
Jan 20 14:49:18 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:49:18 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3734497401' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:49:18 compute-0 nova_compute[250018]: 2026-01-20 14:49:18.006 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:49:18 compute-0 nova_compute[250018]: 2026-01-20 14:49:18.024 250022 DEBUG oslo_concurrency.processutils [None req-4cc67927-e23f-4220-a06b-a16dd94bec19 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:49:18 compute-0 nova_compute[250018]: 2026-01-20 14:49:18.033 250022 DEBUG nova.compute.provider_tree [None req-4cc67927-e23f-4220-a06b-a16dd94bec19 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:49:18 compute-0 nova_compute[250018]: 2026-01-20 14:49:18.063 250022 DEBUG nova.scheduler.client.report [None req-4cc67927-e23f-4220-a06b-a16dd94bec19 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:49:18 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3734497401' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:49:18 compute-0 nova_compute[250018]: 2026-01-20 14:49:18.107 250022 DEBUG oslo_concurrency.lockutils [None req-4cc67927-e23f-4220-a06b-a16dd94bec19 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.772s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:49:18 compute-0 nova_compute[250018]: 2026-01-20 14:49:18.141 250022 INFO nova.scheduler.client.report [None req-4cc67927-e23f-4220-a06b-a16dd94bec19 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Deleted allocations for instance f55cf124-b1d8-47e0-80c6-8b9dd2b3f743
Jan 20 14:49:18 compute-0 nova_compute[250018]: 2026-01-20 14:49:18.226 250022 DEBUG oslo_concurrency.lockutils [None req-4cc67927-e23f-4220-a06b-a16dd94bec19 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lock "f55cf124-b1d8-47e0-80c6-8b9dd2b3f743" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.360s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:49:18 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:49:18 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:49:18 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:49:18.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:49:18 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:49:18 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:49:18 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:49:18.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:49:19 compute-0 ceph-mon[74360]: pgmap v1840: 321 pgs: 321 active+clean; 101 MiB data, 829 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 2.2 MiB/s wr, 238 op/s
Jan 20 14:49:19 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/3509042914' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:49:19 compute-0 nova_compute[250018]: 2026-01-20 14:49:19.182 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:49:19 compute-0 nova_compute[250018]: 2026-01-20 14:49:19.474 250022 DEBUG oslo_concurrency.lockutils [None req-b38011d4-8161-4aa1-8035-0f021ff0a522 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Acquiring lock "52477e64-7989-4aa2-88e1-31600bfae2ef" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:49:19 compute-0 nova_compute[250018]: 2026-01-20 14:49:19.474 250022 DEBUG oslo_concurrency.lockutils [None req-b38011d4-8161-4aa1-8035-0f021ff0a522 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lock "52477e64-7989-4aa2-88e1-31600bfae2ef" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:49:19 compute-0 nova_compute[250018]: 2026-01-20 14:49:19.503 250022 DEBUG nova.compute.manager [None req-b38011d4-8161-4aa1-8035-0f021ff0a522 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 14:49:19 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e244 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:49:19 compute-0 nova_compute[250018]: 2026-01-20 14:49:19.604 250022 DEBUG oslo_concurrency.lockutils [None req-b38011d4-8161-4aa1-8035-0f021ff0a522 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:49:19 compute-0 nova_compute[250018]: 2026-01-20 14:49:19.605 250022 DEBUG oslo_concurrency.lockutils [None req-b38011d4-8161-4aa1-8035-0f021ff0a522 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:49:19 compute-0 nova_compute[250018]: 2026-01-20 14:49:19.610 250022 DEBUG nova.virt.hardware [None req-b38011d4-8161-4aa1-8035-0f021ff0a522 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 14:49:19 compute-0 nova_compute[250018]: 2026-01-20 14:49:19.611 250022 INFO nova.compute.claims [None req-b38011d4-8161-4aa1-8035-0f021ff0a522 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] Claim successful on node compute-0.ctlplane.example.com
Jan 20 14:49:19 compute-0 nova_compute[250018]: 2026-01-20 14:49:19.825 250022 DEBUG oslo_concurrency.processutils [None req-b38011d4-8161-4aa1-8035-0f021ff0a522 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:49:19 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1841: 321 pgs: 321 active+clean; 88 MiB data, 825 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 1.7 MiB/s wr, 208 op/s
Jan 20 14:49:20 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:49:20 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2531960273' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:49:20 compute-0 nova_compute[250018]: 2026-01-20 14:49:20.258 250022 DEBUG oslo_concurrency.processutils [None req-b38011d4-8161-4aa1-8035-0f021ff0a522 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:49:20 compute-0 nova_compute[250018]: 2026-01-20 14:49:20.263 250022 DEBUG nova.compute.provider_tree [None req-b38011d4-8161-4aa1-8035-0f021ff0a522 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:49:20 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/2531960273' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:49:20 compute-0 nova_compute[250018]: 2026-01-20 14:49:20.301 250022 DEBUG nova.scheduler.client.report [None req-b38011d4-8161-4aa1-8035-0f021ff0a522 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:49:20 compute-0 nova_compute[250018]: 2026-01-20 14:49:20.323 250022 DEBUG oslo_concurrency.lockutils [None req-b38011d4-8161-4aa1-8035-0f021ff0a522 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.719s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:49:20 compute-0 nova_compute[250018]: 2026-01-20 14:49:20.324 250022 DEBUG nova.compute.manager [None req-b38011d4-8161-4aa1-8035-0f021ff0a522 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 14:49:20 compute-0 nova_compute[250018]: 2026-01-20 14:49:20.387 250022 DEBUG nova.compute.manager [None req-b38011d4-8161-4aa1-8035-0f021ff0a522 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 20 14:49:20 compute-0 nova_compute[250018]: 2026-01-20 14:49:20.387 250022 DEBUG nova.network.neutron [None req-b38011d4-8161-4aa1-8035-0f021ff0a522 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 14:49:20 compute-0 nova_compute[250018]: 2026-01-20 14:49:20.430 250022 INFO nova.virt.libvirt.driver [None req-b38011d4-8161-4aa1-8035-0f021ff0a522 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 14:49:20 compute-0 nova_compute[250018]: 2026-01-20 14:49:20.459 250022 DEBUG nova.compute.manager [None req-b38011d4-8161-4aa1-8035-0f021ff0a522 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 14:49:20 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:49:20 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:49:20 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:49:20.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:49:20 compute-0 nova_compute[250018]: 2026-01-20 14:49:20.586 250022 DEBUG nova.compute.manager [None req-b38011d4-8161-4aa1-8035-0f021ff0a522 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 14:49:20 compute-0 nova_compute[250018]: 2026-01-20 14:49:20.588 250022 DEBUG nova.virt.libvirt.driver [None req-b38011d4-8161-4aa1-8035-0f021ff0a522 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 14:49:20 compute-0 nova_compute[250018]: 2026-01-20 14:49:20.589 250022 INFO nova.virt.libvirt.driver [None req-b38011d4-8161-4aa1-8035-0f021ff0a522 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] Creating image(s)
Jan 20 14:49:20 compute-0 nova_compute[250018]: 2026-01-20 14:49:20.625 250022 DEBUG nova.storage.rbd_utils [None req-b38011d4-8161-4aa1-8035-0f021ff0a522 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] rbd image 52477e64-7989-4aa2-88e1-31600bfae2ef_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:49:20 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:49:20 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:49:20 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:49:20.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:49:20 compute-0 nova_compute[250018]: 2026-01-20 14:49:20.654 250022 DEBUG nova.storage.rbd_utils [None req-b38011d4-8161-4aa1-8035-0f021ff0a522 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] rbd image 52477e64-7989-4aa2-88e1-31600bfae2ef_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:49:20 compute-0 nova_compute[250018]: 2026-01-20 14:49:20.686 250022 DEBUG nova.storage.rbd_utils [None req-b38011d4-8161-4aa1-8035-0f021ff0a522 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] rbd image 52477e64-7989-4aa2-88e1-31600bfae2ef_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:49:20 compute-0 nova_compute[250018]: 2026-01-20 14:49:20.690 250022 DEBUG oslo_concurrency.processutils [None req-b38011d4-8161-4aa1-8035-0f021ff0a522 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:49:20 compute-0 nova_compute[250018]: 2026-01-20 14:49:20.720 250022 DEBUG nova.policy [None req-b38011d4-8161-4aa1-8035-0f021ff0a522 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a1bd93d04cc4468abe1d5c61f5144191', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'acb30fbc0e3749e390d7f867060b5a2a', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 20 14:49:20 compute-0 nova_compute[250018]: 2026-01-20 14:49:20.752 250022 DEBUG oslo_concurrency.processutils [None req-b38011d4-8161-4aa1-8035-0f021ff0a522 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:49:20 compute-0 nova_compute[250018]: 2026-01-20 14:49:20.753 250022 DEBUG oslo_concurrency.lockutils [None req-b38011d4-8161-4aa1-8035-0f021ff0a522 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Acquiring lock "82d5c1918fd7c974214c7a48c1793a7a82560462" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:49:20 compute-0 nova_compute[250018]: 2026-01-20 14:49:20.754 250022 DEBUG oslo_concurrency.lockutils [None req-b38011d4-8161-4aa1-8035-0f021ff0a522 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:49:20 compute-0 nova_compute[250018]: 2026-01-20 14:49:20.754 250022 DEBUG oslo_concurrency.lockutils [None req-b38011d4-8161-4aa1-8035-0f021ff0a522 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:49:20 compute-0 nova_compute[250018]: 2026-01-20 14:49:20.786 250022 DEBUG nova.storage.rbd_utils [None req-b38011d4-8161-4aa1-8035-0f021ff0a522 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] rbd image 52477e64-7989-4aa2-88e1-31600bfae2ef_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:49:20 compute-0 nova_compute[250018]: 2026-01-20 14:49:20.789 250022 DEBUG oslo_concurrency.processutils [None req-b38011d4-8161-4aa1-8035-0f021ff0a522 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 52477e64-7989-4aa2-88e1-31600bfae2ef_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:49:21 compute-0 ceph-mon[74360]: pgmap v1841: 321 pgs: 321 active+clean; 88 MiB data, 825 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 1.7 MiB/s wr, 208 op/s
Jan 20 14:49:21 compute-0 nova_compute[250018]: 2026-01-20 14:49:21.359 250022 DEBUG oslo_concurrency.processutils [None req-b38011d4-8161-4aa1-8035-0f021ff0a522 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 52477e64-7989-4aa2-88e1-31600bfae2ef_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.569s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:49:21 compute-0 nova_compute[250018]: 2026-01-20 14:49:21.446 250022 DEBUG nova.storage.rbd_utils [None req-b38011d4-8161-4aa1-8035-0f021ff0a522 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] resizing rbd image 52477e64-7989-4aa2-88e1-31600bfae2ef_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 20 14:49:21 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1842: 321 pgs: 321 active+clean; 109 MiB data, 819 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.3 MiB/s wr, 191 op/s
Jan 20 14:49:22 compute-0 nova_compute[250018]: 2026-01-20 14:49:22.034 250022 DEBUG nova.network.neutron [None req-b38011d4-8161-4aa1-8035-0f021ff0a522 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] Successfully created port: 8286e975-4b57-4b5a-9018-82187a854a2d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 20 14:49:22 compute-0 nova_compute[250018]: 2026-01-20 14:49:22.263 250022 DEBUG nova.objects.instance [None req-b38011d4-8161-4aa1-8035-0f021ff0a522 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lazy-loading 'migration_context' on Instance uuid 52477e64-7989-4aa2-88e1-31600bfae2ef obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:49:22 compute-0 nova_compute[250018]: 2026-01-20 14:49:22.287 250022 DEBUG nova.virt.libvirt.driver [None req-b38011d4-8161-4aa1-8035-0f021ff0a522 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 14:49:22 compute-0 nova_compute[250018]: 2026-01-20 14:49:22.288 250022 DEBUG nova.virt.libvirt.driver [None req-b38011d4-8161-4aa1-8035-0f021ff0a522 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] Ensure instance console log exists: /var/lib/nova/instances/52477e64-7989-4aa2-88e1-31600bfae2ef/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 14:49:22 compute-0 nova_compute[250018]: 2026-01-20 14:49:22.289 250022 DEBUG oslo_concurrency.lockutils [None req-b38011d4-8161-4aa1-8035-0f021ff0a522 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:49:22 compute-0 nova_compute[250018]: 2026-01-20 14:49:22.289 250022 DEBUG oslo_concurrency.lockutils [None req-b38011d4-8161-4aa1-8035-0f021ff0a522 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:49:22 compute-0 nova_compute[250018]: 2026-01-20 14:49:22.290 250022 DEBUG oslo_concurrency.lockutils [None req-b38011d4-8161-4aa1-8035-0f021ff0a522 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:49:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:49:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:49:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:49:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:49:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:49:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:49:22 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:49:22 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:49:22 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:49:22.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:49:22 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:49:22 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:49:22 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:49:22.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:49:23 compute-0 nova_compute[250018]: 2026-01-20 14:49:23.007 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:49:23 compute-0 nova_compute[250018]: 2026-01-20 14:49:23.071 250022 DEBUG nova.network.neutron [None req-b38011d4-8161-4aa1-8035-0f021ff0a522 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] Successfully updated port: 8286e975-4b57-4b5a-9018-82187a854a2d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 20 14:49:23 compute-0 nova_compute[250018]: 2026-01-20 14:49:23.092 250022 DEBUG oslo_concurrency.lockutils [None req-b38011d4-8161-4aa1-8035-0f021ff0a522 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Acquiring lock "refresh_cache-52477e64-7989-4aa2-88e1-31600bfae2ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:49:23 compute-0 nova_compute[250018]: 2026-01-20 14:49:23.092 250022 DEBUG oslo_concurrency.lockutils [None req-b38011d4-8161-4aa1-8035-0f021ff0a522 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Acquired lock "refresh_cache-52477e64-7989-4aa2-88e1-31600bfae2ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:49:23 compute-0 nova_compute[250018]: 2026-01-20 14:49:23.093 250022 DEBUG nova.network.neutron [None req-b38011d4-8161-4aa1-8035-0f021ff0a522 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:49:23 compute-0 nova_compute[250018]: 2026-01-20 14:49:23.189 250022 DEBUG nova.compute.manager [req-63af2824-d5b2-42b0-90fb-191d56751b6a req-20c70cd8-5e83-4460-badc-1a73cd2e6738 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] Received event network-changed-8286e975-4b57-4b5a-9018-82187a854a2d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:49:23 compute-0 nova_compute[250018]: 2026-01-20 14:49:23.190 250022 DEBUG nova.compute.manager [req-63af2824-d5b2-42b0-90fb-191d56751b6a req-20c70cd8-5e83-4460-badc-1a73cd2e6738 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] Refreshing instance network info cache due to event network-changed-8286e975-4b57-4b5a-9018-82187a854a2d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:49:23 compute-0 nova_compute[250018]: 2026-01-20 14:49:23.190 250022 DEBUG oslo_concurrency.lockutils [req-63af2824-d5b2-42b0-90fb-191d56751b6a req-20c70cd8-5e83-4460-badc-1a73cd2e6738 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-52477e64-7989-4aa2-88e1-31600bfae2ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:49:23 compute-0 nova_compute[250018]: 2026-01-20 14:49:23.413 250022 DEBUG nova.network.neutron [None req-b38011d4-8161-4aa1-8035-0f021ff0a522 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 14:49:23 compute-0 ceph-mon[74360]: pgmap v1842: 321 pgs: 321 active+clean; 109 MiB data, 819 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.3 MiB/s wr, 191 op/s
Jan 20 14:49:23 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1843: 321 pgs: 321 active+clean; 138 MiB data, 830 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.9 MiB/s wr, 156 op/s
Jan 20 14:49:24 compute-0 nova_compute[250018]: 2026-01-20 14:49:24.184 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:49:24 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e244 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:49:24 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:49:24 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:49:24 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:49:24.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:49:24 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:49:24 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:49:24 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:49:24.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:49:25 compute-0 nova_compute[250018]: 2026-01-20 14:49:25.051 250022 DEBUG nova.network.neutron [None req-b38011d4-8161-4aa1-8035-0f021ff0a522 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] Updating instance_info_cache with network_info: [{"id": "8286e975-4b57-4b5a-9018-82187a854a2d", "address": "fa:16:3e:19:a9:8c", "network": {"id": "3379e2b3-ffb2-4391-969b-c9dc51bfbe25", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1112843240-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acb30fbc0e3749e390d7f867060b5a2a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8286e975-4b", "ovs_interfaceid": "8286e975-4b57-4b5a-9018-82187a854a2d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:49:25 compute-0 nova_compute[250018]: 2026-01-20 14:49:25.078 250022 DEBUG oslo_concurrency.lockutils [None req-b38011d4-8161-4aa1-8035-0f021ff0a522 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Releasing lock "refresh_cache-52477e64-7989-4aa2-88e1-31600bfae2ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:49:25 compute-0 nova_compute[250018]: 2026-01-20 14:49:25.078 250022 DEBUG nova.compute.manager [None req-b38011d4-8161-4aa1-8035-0f021ff0a522 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] Instance network_info: |[{"id": "8286e975-4b57-4b5a-9018-82187a854a2d", "address": "fa:16:3e:19:a9:8c", "network": {"id": "3379e2b3-ffb2-4391-969b-c9dc51bfbe25", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1112843240-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acb30fbc0e3749e390d7f867060b5a2a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8286e975-4b", "ovs_interfaceid": "8286e975-4b57-4b5a-9018-82187a854a2d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 20 14:49:25 compute-0 nova_compute[250018]: 2026-01-20 14:49:25.079 250022 DEBUG oslo_concurrency.lockutils [req-63af2824-d5b2-42b0-90fb-191d56751b6a req-20c70cd8-5e83-4460-badc-1a73cd2e6738 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-52477e64-7989-4aa2-88e1-31600bfae2ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:49:25 compute-0 nova_compute[250018]: 2026-01-20 14:49:25.079 250022 DEBUG nova.network.neutron [req-63af2824-d5b2-42b0-90fb-191d56751b6a req-20c70cd8-5e83-4460-badc-1a73cd2e6738 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] Refreshing network info cache for port 8286e975-4b57-4b5a-9018-82187a854a2d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:49:25 compute-0 nova_compute[250018]: 2026-01-20 14:49:25.081 250022 DEBUG nova.virt.libvirt.driver [None req-b38011d4-8161-4aa1-8035-0f021ff0a522 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] Start _get_guest_xml network_info=[{"id": "8286e975-4b57-4b5a-9018-82187a854a2d", "address": "fa:16:3e:19:a9:8c", "network": {"id": "3379e2b3-ffb2-4391-969b-c9dc51bfbe25", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1112843240-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acb30fbc0e3749e390d7f867060b5a2a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8286e975-4b", "ovs_interfaceid": "8286e975-4b57-4b5a-9018-82187a854a2d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encrypted': False, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'guest_format': None, 'size': 0, 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 14:49:25 compute-0 nova_compute[250018]: 2026-01-20 14:49:25.085 250022 WARNING nova.virt.libvirt.driver [None req-b38011d4-8161-4aa1-8035-0f021ff0a522 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:49:25 compute-0 nova_compute[250018]: 2026-01-20 14:49:25.091 250022 DEBUG nova.virt.libvirt.host [None req-b38011d4-8161-4aa1-8035-0f021ff0a522 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 14:49:25 compute-0 nova_compute[250018]: 2026-01-20 14:49:25.092 250022 DEBUG nova.virt.libvirt.host [None req-b38011d4-8161-4aa1-8035-0f021ff0a522 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 14:49:25 compute-0 nova_compute[250018]: 2026-01-20 14:49:25.099 250022 DEBUG nova.virt.libvirt.host [None req-b38011d4-8161-4aa1-8035-0f021ff0a522 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 14:49:25 compute-0 nova_compute[250018]: 2026-01-20 14:49:25.100 250022 DEBUG nova.virt.libvirt.host [None req-b38011d4-8161-4aa1-8035-0f021ff0a522 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 14:49:25 compute-0 nova_compute[250018]: 2026-01-20 14:49:25.101 250022 DEBUG nova.virt.libvirt.driver [None req-b38011d4-8161-4aa1-8035-0f021ff0a522 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 14:49:25 compute-0 nova_compute[250018]: 2026-01-20 14:49:25.101 250022 DEBUG nova.virt.hardware [None req-b38011d4-8161-4aa1-8035-0f021ff0a522 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 14:49:25 compute-0 nova_compute[250018]: 2026-01-20 14:49:25.101 250022 DEBUG nova.virt.hardware [None req-b38011d4-8161-4aa1-8035-0f021ff0a522 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 14:49:25 compute-0 nova_compute[250018]: 2026-01-20 14:49:25.101 250022 DEBUG nova.virt.hardware [None req-b38011d4-8161-4aa1-8035-0f021ff0a522 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 14:49:25 compute-0 nova_compute[250018]: 2026-01-20 14:49:25.102 250022 DEBUG nova.virt.hardware [None req-b38011d4-8161-4aa1-8035-0f021ff0a522 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 14:49:25 compute-0 nova_compute[250018]: 2026-01-20 14:49:25.102 250022 DEBUG nova.virt.hardware [None req-b38011d4-8161-4aa1-8035-0f021ff0a522 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 14:49:25 compute-0 nova_compute[250018]: 2026-01-20 14:49:25.102 250022 DEBUG nova.virt.hardware [None req-b38011d4-8161-4aa1-8035-0f021ff0a522 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 14:49:25 compute-0 nova_compute[250018]: 2026-01-20 14:49:25.102 250022 DEBUG nova.virt.hardware [None req-b38011d4-8161-4aa1-8035-0f021ff0a522 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 14:49:25 compute-0 nova_compute[250018]: 2026-01-20 14:49:25.103 250022 DEBUG nova.virt.hardware [None req-b38011d4-8161-4aa1-8035-0f021ff0a522 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 14:49:25 compute-0 nova_compute[250018]: 2026-01-20 14:49:25.103 250022 DEBUG nova.virt.hardware [None req-b38011d4-8161-4aa1-8035-0f021ff0a522 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 14:49:25 compute-0 nova_compute[250018]: 2026-01-20 14:49:25.103 250022 DEBUG nova.virt.hardware [None req-b38011d4-8161-4aa1-8035-0f021ff0a522 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 14:49:25 compute-0 nova_compute[250018]: 2026-01-20 14:49:25.103 250022 DEBUG nova.virt.hardware [None req-b38011d4-8161-4aa1-8035-0f021ff0a522 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 14:49:25 compute-0 nova_compute[250018]: 2026-01-20 14:49:25.114 250022 DEBUG oslo_concurrency.processutils [None req-b38011d4-8161-4aa1-8035-0f021ff0a522 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:49:25 compute-0 ceph-mon[74360]: pgmap v1843: 321 pgs: 321 active+clean; 138 MiB data, 830 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.9 MiB/s wr, 156 op/s
Jan 20 14:49:25 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:49:25 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/640913900' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:49:25 compute-0 nova_compute[250018]: 2026-01-20 14:49:25.558 250022 DEBUG oslo_concurrency.processutils [None req-b38011d4-8161-4aa1-8035-0f021ff0a522 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:49:25 compute-0 nova_compute[250018]: 2026-01-20 14:49:25.587 250022 DEBUG nova.storage.rbd_utils [None req-b38011d4-8161-4aa1-8035-0f021ff0a522 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] rbd image 52477e64-7989-4aa2-88e1-31600bfae2ef_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:49:25 compute-0 nova_compute[250018]: 2026-01-20 14:49:25.591 250022 DEBUG oslo_concurrency.processutils [None req-b38011d4-8161-4aa1-8035-0f021ff0a522 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:49:25 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1844: 321 pgs: 321 active+clean; 180 MiB data, 851 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 152 op/s
Jan 20 14:49:26 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:49:26 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/209954475' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:49:26 compute-0 nova_compute[250018]: 2026-01-20 14:49:26.052 250022 DEBUG oslo_concurrency.processutils [None req-b38011d4-8161-4aa1-8035-0f021ff0a522 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:49:26 compute-0 nova_compute[250018]: 2026-01-20 14:49:26.054 250022 DEBUG nova.virt.libvirt.vif [None req-b38011d4-8161-4aa1-8035-0f021ff0a522 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:49:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1663251192',display_name='tempest-ServerDiskConfigTestJSON-server-1663251192',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1663251192',id=106,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='acb30fbc0e3749e390d7f867060b5a2a',ramdisk_id='',reservation_id='r-nykd0j3m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-1806346246',owner_user_name='tempest-ServerDisConfigTestJSON-1806346246-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:49:20Z,user_data=None,user_id='a1bd93d04cc4468abe1d5c61f5144191',uuid=52477e64-7989-4aa2-88e1-31600bfae2ef,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8286e975-4b57-4b5a-9018-82187a854a2d", "address": "fa:16:3e:19:a9:8c", "network": {"id": "3379e2b3-ffb2-4391-969b-c9dc51bfbe25", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1112843240-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acb30fbc0e3749e390d7f867060b5a2a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8286e975-4b", "ovs_interfaceid": "8286e975-4b57-4b5a-9018-82187a854a2d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 14:49:26 compute-0 nova_compute[250018]: 2026-01-20 14:49:26.054 250022 DEBUG nova.network.os_vif_util [None req-b38011d4-8161-4aa1-8035-0f021ff0a522 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Converting VIF {"id": "8286e975-4b57-4b5a-9018-82187a854a2d", "address": "fa:16:3e:19:a9:8c", "network": {"id": "3379e2b3-ffb2-4391-969b-c9dc51bfbe25", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1112843240-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acb30fbc0e3749e390d7f867060b5a2a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8286e975-4b", "ovs_interfaceid": "8286e975-4b57-4b5a-9018-82187a854a2d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:49:26 compute-0 nova_compute[250018]: 2026-01-20 14:49:26.055 250022 DEBUG nova.network.os_vif_util [None req-b38011d4-8161-4aa1-8035-0f021ff0a522 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:19:a9:8c,bridge_name='br-int',has_traffic_filtering=True,id=8286e975-4b57-4b5a-9018-82187a854a2d,network=Network(3379e2b3-ffb2-4391-969b-c9dc51bfbe25),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8286e975-4b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:49:26 compute-0 nova_compute[250018]: 2026-01-20 14:49:26.056 250022 DEBUG nova.objects.instance [None req-b38011d4-8161-4aa1-8035-0f021ff0a522 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lazy-loading 'pci_devices' on Instance uuid 52477e64-7989-4aa2-88e1-31600bfae2ef obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:49:26 compute-0 nova_compute[250018]: 2026-01-20 14:49:26.079 250022 DEBUG nova.virt.libvirt.driver [None req-b38011d4-8161-4aa1-8035-0f021ff0a522 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] End _get_guest_xml xml=<domain type="kvm">
Jan 20 14:49:26 compute-0 nova_compute[250018]:   <uuid>52477e64-7989-4aa2-88e1-31600bfae2ef</uuid>
Jan 20 14:49:26 compute-0 nova_compute[250018]:   <name>instance-0000006a</name>
Jan 20 14:49:26 compute-0 nova_compute[250018]:   <memory>131072</memory>
Jan 20 14:49:26 compute-0 nova_compute[250018]:   <vcpu>1</vcpu>
Jan 20 14:49:26 compute-0 nova_compute[250018]:   <metadata>
Jan 20 14:49:26 compute-0 nova_compute[250018]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 14:49:26 compute-0 nova_compute[250018]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 14:49:26 compute-0 nova_compute[250018]:       <nova:name>tempest-ServerDiskConfigTestJSON-server-1663251192</nova:name>
Jan 20 14:49:26 compute-0 nova_compute[250018]:       <nova:creationTime>2026-01-20 14:49:25</nova:creationTime>
Jan 20 14:49:26 compute-0 nova_compute[250018]:       <nova:flavor name="m1.nano">
Jan 20 14:49:26 compute-0 nova_compute[250018]:         <nova:memory>128</nova:memory>
Jan 20 14:49:26 compute-0 nova_compute[250018]:         <nova:disk>1</nova:disk>
Jan 20 14:49:26 compute-0 nova_compute[250018]:         <nova:swap>0</nova:swap>
Jan 20 14:49:26 compute-0 nova_compute[250018]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 14:49:26 compute-0 nova_compute[250018]:         <nova:vcpus>1</nova:vcpus>
Jan 20 14:49:26 compute-0 nova_compute[250018]:       </nova:flavor>
Jan 20 14:49:26 compute-0 nova_compute[250018]:       <nova:owner>
Jan 20 14:49:26 compute-0 nova_compute[250018]:         <nova:user uuid="a1bd93d04cc4468abe1d5c61f5144191">tempest-ServerDiskConfigTestJSON-1806346246-project-member</nova:user>
Jan 20 14:49:26 compute-0 nova_compute[250018]:         <nova:project uuid="acb30fbc0e3749e390d7f867060b5a2a">tempest-ServerDiskConfigTestJSON-1806346246</nova:project>
Jan 20 14:49:26 compute-0 nova_compute[250018]:       </nova:owner>
Jan 20 14:49:26 compute-0 nova_compute[250018]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 14:49:26 compute-0 nova_compute[250018]:       <nova:ports>
Jan 20 14:49:26 compute-0 nova_compute[250018]:         <nova:port uuid="8286e975-4b57-4b5a-9018-82187a854a2d">
Jan 20 14:49:26 compute-0 nova_compute[250018]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Jan 20 14:49:26 compute-0 nova_compute[250018]:         </nova:port>
Jan 20 14:49:26 compute-0 nova_compute[250018]:       </nova:ports>
Jan 20 14:49:26 compute-0 nova_compute[250018]:     </nova:instance>
Jan 20 14:49:26 compute-0 nova_compute[250018]:   </metadata>
Jan 20 14:49:26 compute-0 nova_compute[250018]:   <sysinfo type="smbios">
Jan 20 14:49:26 compute-0 nova_compute[250018]:     <system>
Jan 20 14:49:26 compute-0 nova_compute[250018]:       <entry name="manufacturer">RDO</entry>
Jan 20 14:49:26 compute-0 nova_compute[250018]:       <entry name="product">OpenStack Compute</entry>
Jan 20 14:49:26 compute-0 nova_compute[250018]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 14:49:26 compute-0 nova_compute[250018]:       <entry name="serial">52477e64-7989-4aa2-88e1-31600bfae2ef</entry>
Jan 20 14:49:26 compute-0 nova_compute[250018]:       <entry name="uuid">52477e64-7989-4aa2-88e1-31600bfae2ef</entry>
Jan 20 14:49:26 compute-0 nova_compute[250018]:       <entry name="family">Virtual Machine</entry>
Jan 20 14:49:26 compute-0 nova_compute[250018]:     </system>
Jan 20 14:49:26 compute-0 nova_compute[250018]:   </sysinfo>
Jan 20 14:49:26 compute-0 nova_compute[250018]:   <os>
Jan 20 14:49:26 compute-0 nova_compute[250018]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 14:49:26 compute-0 nova_compute[250018]:     <boot dev="hd"/>
Jan 20 14:49:26 compute-0 nova_compute[250018]:     <smbios mode="sysinfo"/>
Jan 20 14:49:26 compute-0 nova_compute[250018]:   </os>
Jan 20 14:49:26 compute-0 nova_compute[250018]:   <features>
Jan 20 14:49:26 compute-0 nova_compute[250018]:     <acpi/>
Jan 20 14:49:26 compute-0 nova_compute[250018]:     <apic/>
Jan 20 14:49:26 compute-0 nova_compute[250018]:     <vmcoreinfo/>
Jan 20 14:49:26 compute-0 nova_compute[250018]:   </features>
Jan 20 14:49:26 compute-0 nova_compute[250018]:   <clock offset="utc">
Jan 20 14:49:26 compute-0 nova_compute[250018]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 14:49:26 compute-0 nova_compute[250018]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 14:49:26 compute-0 nova_compute[250018]:     <timer name="hpet" present="no"/>
Jan 20 14:49:26 compute-0 nova_compute[250018]:   </clock>
Jan 20 14:49:26 compute-0 nova_compute[250018]:   <cpu mode="custom" match="exact">
Jan 20 14:49:26 compute-0 nova_compute[250018]:     <model>Nehalem</model>
Jan 20 14:49:26 compute-0 nova_compute[250018]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 14:49:26 compute-0 nova_compute[250018]:   </cpu>
Jan 20 14:49:26 compute-0 nova_compute[250018]:   <devices>
Jan 20 14:49:26 compute-0 nova_compute[250018]:     <disk type="network" device="disk">
Jan 20 14:49:26 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 14:49:26 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/52477e64-7989-4aa2-88e1-31600bfae2ef_disk">
Jan 20 14:49:26 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:49:26 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:49:26 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:49:26 compute-0 nova_compute[250018]:       </source>
Jan 20 14:49:26 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 14:49:26 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:49:26 compute-0 nova_compute[250018]:       </auth>
Jan 20 14:49:26 compute-0 nova_compute[250018]:       <target dev="vda" bus="virtio"/>
Jan 20 14:49:26 compute-0 nova_compute[250018]:     </disk>
Jan 20 14:49:26 compute-0 nova_compute[250018]:     <disk type="network" device="cdrom">
Jan 20 14:49:26 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 14:49:26 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/52477e64-7989-4aa2-88e1-31600bfae2ef_disk.config">
Jan 20 14:49:26 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:49:26 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:49:26 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:49:26 compute-0 nova_compute[250018]:       </source>
Jan 20 14:49:26 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 14:49:26 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:49:26 compute-0 nova_compute[250018]:       </auth>
Jan 20 14:49:26 compute-0 nova_compute[250018]:       <target dev="sda" bus="sata"/>
Jan 20 14:49:26 compute-0 nova_compute[250018]:     </disk>
Jan 20 14:49:26 compute-0 nova_compute[250018]:     <interface type="ethernet">
Jan 20 14:49:26 compute-0 nova_compute[250018]:       <mac address="fa:16:3e:19:a9:8c"/>
Jan 20 14:49:26 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 14:49:26 compute-0 nova_compute[250018]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 14:49:26 compute-0 nova_compute[250018]:       <mtu size="1442"/>
Jan 20 14:49:26 compute-0 nova_compute[250018]:       <target dev="tap8286e975-4b"/>
Jan 20 14:49:26 compute-0 nova_compute[250018]:     </interface>
Jan 20 14:49:26 compute-0 nova_compute[250018]:     <serial type="pty">
Jan 20 14:49:26 compute-0 nova_compute[250018]:       <log file="/var/lib/nova/instances/52477e64-7989-4aa2-88e1-31600bfae2ef/console.log" append="off"/>
Jan 20 14:49:26 compute-0 nova_compute[250018]:     </serial>
Jan 20 14:49:26 compute-0 nova_compute[250018]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 14:49:26 compute-0 nova_compute[250018]:     <video>
Jan 20 14:49:26 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 14:49:26 compute-0 nova_compute[250018]:     </video>
Jan 20 14:49:26 compute-0 nova_compute[250018]:     <input type="tablet" bus="usb"/>
Jan 20 14:49:26 compute-0 nova_compute[250018]:     <rng model="virtio">
Jan 20 14:49:26 compute-0 nova_compute[250018]:       <backend model="random">/dev/urandom</backend>
Jan 20 14:49:26 compute-0 nova_compute[250018]:     </rng>
Jan 20 14:49:26 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root"/>
Jan 20 14:49:26 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:49:26 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:49:26 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:49:26 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:49:26 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:49:26 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:49:26 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:49:26 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:49:26 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:49:26 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:49:26 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:49:26 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:49:26 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:49:26 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:49:26 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:49:26 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:49:26 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:49:26 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:49:26 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:49:26 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:49:26 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:49:26 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:49:26 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:49:26 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:49:26 compute-0 nova_compute[250018]:     <controller type="usb" index="0"/>
Jan 20 14:49:26 compute-0 nova_compute[250018]:     <memballoon model="virtio">
Jan 20 14:49:26 compute-0 nova_compute[250018]:       <stats period="10"/>
Jan 20 14:49:26 compute-0 nova_compute[250018]:     </memballoon>
Jan 20 14:49:26 compute-0 nova_compute[250018]:   </devices>
Jan 20 14:49:26 compute-0 nova_compute[250018]: </domain>
Jan 20 14:49:26 compute-0 nova_compute[250018]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 14:49:26 compute-0 nova_compute[250018]: 2026-01-20 14:49:26.080 250022 DEBUG nova.compute.manager [None req-b38011d4-8161-4aa1-8035-0f021ff0a522 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] Preparing to wait for external event network-vif-plugged-8286e975-4b57-4b5a-9018-82187a854a2d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 20 14:49:26 compute-0 nova_compute[250018]: 2026-01-20 14:49:26.081 250022 DEBUG oslo_concurrency.lockutils [None req-b38011d4-8161-4aa1-8035-0f021ff0a522 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Acquiring lock "52477e64-7989-4aa2-88e1-31600bfae2ef-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:49:26 compute-0 nova_compute[250018]: 2026-01-20 14:49:26.081 250022 DEBUG oslo_concurrency.lockutils [None req-b38011d4-8161-4aa1-8035-0f021ff0a522 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lock "52477e64-7989-4aa2-88e1-31600bfae2ef-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:49:26 compute-0 nova_compute[250018]: 2026-01-20 14:49:26.081 250022 DEBUG oslo_concurrency.lockutils [None req-b38011d4-8161-4aa1-8035-0f021ff0a522 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lock "52477e64-7989-4aa2-88e1-31600bfae2ef-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:49:26 compute-0 nova_compute[250018]: 2026-01-20 14:49:26.082 250022 DEBUG nova.virt.libvirt.vif [None req-b38011d4-8161-4aa1-8035-0f021ff0a522 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:49:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1663251192',display_name='tempest-ServerDiskConfigTestJSON-server-1663251192',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1663251192',id=106,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='acb30fbc0e3749e390d7f867060b5a2a',ramdisk_id='',reservation_id='r-nykd0j3m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-1806346246',owner_user_name='tempest
-ServerDiskConfigTestJSON-1806346246-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:49:20Z,user_data=None,user_id='a1bd93d04cc4468abe1d5c61f5144191',uuid=52477e64-7989-4aa2-88e1-31600bfae2ef,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8286e975-4b57-4b5a-9018-82187a854a2d", "address": "fa:16:3e:19:a9:8c", "network": {"id": "3379e2b3-ffb2-4391-969b-c9dc51bfbe25", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1112843240-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acb30fbc0e3749e390d7f867060b5a2a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8286e975-4b", "ovs_interfaceid": "8286e975-4b57-4b5a-9018-82187a854a2d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 14:49:26 compute-0 nova_compute[250018]: 2026-01-20 14:49:26.082 250022 DEBUG nova.network.os_vif_util [None req-b38011d4-8161-4aa1-8035-0f021ff0a522 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Converting VIF {"id": "8286e975-4b57-4b5a-9018-82187a854a2d", "address": "fa:16:3e:19:a9:8c", "network": {"id": "3379e2b3-ffb2-4391-969b-c9dc51bfbe25", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1112843240-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acb30fbc0e3749e390d7f867060b5a2a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8286e975-4b", "ovs_interfaceid": "8286e975-4b57-4b5a-9018-82187a854a2d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:49:26 compute-0 nova_compute[250018]: 2026-01-20 14:49:26.083 250022 DEBUG nova.network.os_vif_util [None req-b38011d4-8161-4aa1-8035-0f021ff0a522 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:19:a9:8c,bridge_name='br-int',has_traffic_filtering=True,id=8286e975-4b57-4b5a-9018-82187a854a2d,network=Network(3379e2b3-ffb2-4391-969b-c9dc51bfbe25),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8286e975-4b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:49:26 compute-0 nova_compute[250018]: 2026-01-20 14:49:26.083 250022 DEBUG os_vif [None req-b38011d4-8161-4aa1-8035-0f021ff0a522 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:19:a9:8c,bridge_name='br-int',has_traffic_filtering=True,id=8286e975-4b57-4b5a-9018-82187a854a2d,network=Network(3379e2b3-ffb2-4391-969b-c9dc51bfbe25),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8286e975-4b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 14:49:26 compute-0 nova_compute[250018]: 2026-01-20 14:49:26.084 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:49:26 compute-0 nova_compute[250018]: 2026-01-20 14:49:26.085 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:49:26 compute-0 nova_compute[250018]: 2026-01-20 14:49:26.085 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:49:26 compute-0 nova_compute[250018]: 2026-01-20 14:49:26.088 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:49:26 compute-0 nova_compute[250018]: 2026-01-20 14:49:26.088 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8286e975-4b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:49:26 compute-0 nova_compute[250018]: 2026-01-20 14:49:26.089 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8286e975-4b, col_values=(('external_ids', {'iface-id': '8286e975-4b57-4b5a-9018-82187a854a2d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:19:a9:8c', 'vm-uuid': '52477e64-7989-4aa2-88e1-31600bfae2ef'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:49:26 compute-0 nova_compute[250018]: 2026-01-20 14:49:26.090 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:49:26 compute-0 NetworkManager[48960]: <info>  [1768920566.0917] manager: (tap8286e975-4b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/183)
Jan 20 14:49:26 compute-0 nova_compute[250018]: 2026-01-20 14:49:26.094 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:49:26 compute-0 nova_compute[250018]: 2026-01-20 14:49:26.097 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:49:26 compute-0 nova_compute[250018]: 2026-01-20 14:49:26.098 250022 INFO os_vif [None req-b38011d4-8161-4aa1-8035-0f021ff0a522 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:19:a9:8c,bridge_name='br-int',has_traffic_filtering=True,id=8286e975-4b57-4b5a-9018-82187a854a2d,network=Network(3379e2b3-ffb2-4391-969b-c9dc51bfbe25),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8286e975-4b')
Jan 20 14:49:26 compute-0 nova_compute[250018]: 2026-01-20 14:49:26.187 250022 DEBUG nova.virt.libvirt.driver [None req-b38011d4-8161-4aa1-8035-0f021ff0a522 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:49:26 compute-0 nova_compute[250018]: 2026-01-20 14:49:26.188 250022 DEBUG nova.virt.libvirt.driver [None req-b38011d4-8161-4aa1-8035-0f021ff0a522 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:49:26 compute-0 nova_compute[250018]: 2026-01-20 14:49:26.188 250022 DEBUG nova.virt.libvirt.driver [None req-b38011d4-8161-4aa1-8035-0f021ff0a522 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] No VIF found with MAC fa:16:3e:19:a9:8c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 14:49:26 compute-0 nova_compute[250018]: 2026-01-20 14:49:26.188 250022 INFO nova.virt.libvirt.driver [None req-b38011d4-8161-4aa1-8035-0f021ff0a522 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] Using config drive
Jan 20 14:49:26 compute-0 nova_compute[250018]: 2026-01-20 14:49:26.211 250022 DEBUG nova.storage.rbd_utils [None req-b38011d4-8161-4aa1-8035-0f021ff0a522 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] rbd image 52477e64-7989-4aa2-88e1-31600bfae2ef_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:49:26 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/640913900' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:49:26 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/209954475' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:49:26 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:49:26 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:49:26 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:49:26.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:49:26 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:49:26 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:49:26 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:49:26.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:49:26 compute-0 nova_compute[250018]: 2026-01-20 14:49:26.990 250022 INFO nova.virt.libvirt.driver [None req-b38011d4-8161-4aa1-8035-0f021ff0a522 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] Creating config drive at /var/lib/nova/instances/52477e64-7989-4aa2-88e1-31600bfae2ef/disk.config
Jan 20 14:49:27 compute-0 nova_compute[250018]: 2026-01-20 14:49:27.000 250022 DEBUG oslo_concurrency.processutils [None req-b38011d4-8161-4aa1-8035-0f021ff0a522 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/52477e64-7989-4aa2-88e1-31600bfae2ef/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpze5f2p28 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:49:27 compute-0 nova_compute[250018]: 2026-01-20 14:49:27.157 250022 DEBUG oslo_concurrency.processutils [None req-b38011d4-8161-4aa1-8035-0f021ff0a522 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/52477e64-7989-4aa2-88e1-31600bfae2ef/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpze5f2p28" returned: 0 in 0.157s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:49:27 compute-0 nova_compute[250018]: 2026-01-20 14:49:27.188 250022 DEBUG nova.storage.rbd_utils [None req-b38011d4-8161-4aa1-8035-0f021ff0a522 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] rbd image 52477e64-7989-4aa2-88e1-31600bfae2ef_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:49:27 compute-0 nova_compute[250018]: 2026-01-20 14:49:27.192 250022 DEBUG oslo_concurrency.processutils [None req-b38011d4-8161-4aa1-8035-0f021ff0a522 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/52477e64-7989-4aa2-88e1-31600bfae2ef/disk.config 52477e64-7989-4aa2-88e1-31600bfae2ef_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:49:27 compute-0 nova_compute[250018]: 2026-01-20 14:49:27.226 250022 DEBUG nova.network.neutron [req-63af2824-d5b2-42b0-90fb-191d56751b6a req-20c70cd8-5e83-4460-badc-1a73cd2e6738 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] Updated VIF entry in instance network info cache for port 8286e975-4b57-4b5a-9018-82187a854a2d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 14:49:27 compute-0 nova_compute[250018]: 2026-01-20 14:49:27.227 250022 DEBUG nova.network.neutron [req-63af2824-d5b2-42b0-90fb-191d56751b6a req-20c70cd8-5e83-4460-badc-1a73cd2e6738 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] Updating instance_info_cache with network_info: [{"id": "8286e975-4b57-4b5a-9018-82187a854a2d", "address": "fa:16:3e:19:a9:8c", "network": {"id": "3379e2b3-ffb2-4391-969b-c9dc51bfbe25", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1112843240-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acb30fbc0e3749e390d7f867060b5a2a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8286e975-4b", "ovs_interfaceid": "8286e975-4b57-4b5a-9018-82187a854a2d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:49:27 compute-0 nova_compute[250018]: 2026-01-20 14:49:27.255 250022 DEBUG oslo_concurrency.lockutils [req-63af2824-d5b2-42b0-90fb-191d56751b6a req-20c70cd8-5e83-4460-badc-1a73cd2e6738 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-52477e64-7989-4aa2-88e1-31600bfae2ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:49:27 compute-0 ceph-mon[74360]: pgmap v1844: 321 pgs: 321 active+clean; 180 MiB data, 851 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 152 op/s
Jan 20 14:49:27 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/617682879' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:49:27 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/4113171858' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:49:27 compute-0 nova_compute[250018]: 2026-01-20 14:49:27.530 250022 DEBUG oslo_concurrency.processutils [None req-b38011d4-8161-4aa1-8035-0f021ff0a522 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/52477e64-7989-4aa2-88e1-31600bfae2ef/disk.config 52477e64-7989-4aa2-88e1-31600bfae2ef_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.338s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:49:27 compute-0 nova_compute[250018]: 2026-01-20 14:49:27.531 250022 INFO nova.virt.libvirt.driver [None req-b38011d4-8161-4aa1-8035-0f021ff0a522 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] Deleting local config drive /var/lib/nova/instances/52477e64-7989-4aa2-88e1-31600bfae2ef/disk.config because it was imported into RBD.
Jan 20 14:49:27 compute-0 kernel: tap8286e975-4b: entered promiscuous mode
Jan 20 14:49:27 compute-0 NetworkManager[48960]: <info>  [1768920567.5991] manager: (tap8286e975-4b): new Tun device (/org/freedesktop/NetworkManager/Devices/184)
Jan 20 14:49:27 compute-0 ovn_controller[148666]: 2026-01-20T14:49:27Z|00359|binding|INFO|Claiming lport 8286e975-4b57-4b5a-9018-82187a854a2d for this chassis.
Jan 20 14:49:27 compute-0 ovn_controller[148666]: 2026-01-20T14:49:27Z|00360|binding|INFO|8286e975-4b57-4b5a-9018-82187a854a2d: Claiming fa:16:3e:19:a9:8c 10.100.0.6
Jan 20 14:49:27 compute-0 nova_compute[250018]: 2026-01-20 14:49:27.600 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:49:27 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:49:27.606 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:19:a9:8c 10.100.0.6'], port_security=['fa:16:3e:19:a9:8c 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '52477e64-7989-4aa2-88e1-31600bfae2ef', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3379e2b3-ffb2-4391-969b-c9dc51bfbe25', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'acb30fbc0e3749e390d7f867060b5a2a', 'neutron:revision_number': '2', 'neutron:security_group_ids': '19fab802-7db8-4c89-8f8e-8dcfc14d4627', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a0e287ba-f88b-46f5-bb7f-3cc2a74be88e, chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=8286e975-4b57-4b5a-9018-82187a854a2d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:49:27 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:49:27.608 160071 INFO neutron.agent.ovn.metadata.agent [-] Port 8286e975-4b57-4b5a-9018-82187a854a2d in datapath 3379e2b3-ffb2-4391-969b-c9dc51bfbe25 bound to our chassis
Jan 20 14:49:27 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:49:27.610 160071 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3379e2b3-ffb2-4391-969b-c9dc51bfbe25
Jan 20 14:49:27 compute-0 ovn_controller[148666]: 2026-01-20T14:49:27Z|00361|binding|INFO|Setting lport 8286e975-4b57-4b5a-9018-82187a854a2d ovn-installed in OVS
Jan 20 14:49:27 compute-0 ovn_controller[148666]: 2026-01-20T14:49:27Z|00362|binding|INFO|Setting lport 8286e975-4b57-4b5a-9018-82187a854a2d up in Southbound
Jan 20 14:49:27 compute-0 nova_compute[250018]: 2026-01-20 14:49:27.618 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:49:27 compute-0 nova_compute[250018]: 2026-01-20 14:49:27.620 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:49:27 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:49:27.621 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[4127def6-7dd6-4205-8c1d-f2ec5d9c99e4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:49:27 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:49:27.622 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3379e2b3-f1 in ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 14:49:27 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:49:27.624 257604 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3379e2b3-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 14:49:27 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:49:27.625 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[1eae68d7-97f2-4c85-8a90-ddb4c23dc089]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:49:27 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:49:27.626 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[4ca519f7-31ca-46e4-a06f-c2b858b27524]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:49:27 compute-0 systemd-udevd[311407]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:49:27 compute-0 systemd-machined[216401]: New machine qemu-47-instance-0000006a.
Jan 20 14:49:27 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:49:27.637 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[9c02642c-9feb-45c0-968e-5e564d455fb0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:49:27 compute-0 NetworkManager[48960]: <info>  [1768920567.6458] device (tap8286e975-4b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 14:49:27 compute-0 NetworkManager[48960]: <info>  [1768920567.6464] device (tap8286e975-4b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 14:49:27 compute-0 systemd[1]: Started Virtual Machine qemu-47-instance-0000006a.
Jan 20 14:49:27 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:49:27.661 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[7cac3925-e32a-4f3e-83ae-7bb1bf6e9c0e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:49:27 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:49:27.687 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[f084e844-dd02-400e-ab80-9f4d0af8a732]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:49:27 compute-0 NetworkManager[48960]: <info>  [1768920567.6935] manager: (tap3379e2b3-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/185)
Jan 20 14:49:27 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:49:27.692 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[26c12a8f-880e-486c-9178-723473c2dcaf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:49:27 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:49:27.722 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[a3dabef0-c17c-4c9c-af9f-1419712a10bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:49:27 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:49:27.725 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[e6fea358-0945-4d2e-ac20-514408ddb366]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:49:27 compute-0 NetworkManager[48960]: <info>  [1768920567.7465] device (tap3379e2b3-f0): carrier: link connected
Jan 20 14:49:27 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:49:27.753 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[04df9cfa-b092-41d2-b129-b1d7deb52797]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:49:27 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:49:27.770 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[d762b154-6845-431b-92ae-7cb23b0c2c5e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3379e2b3-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f1:86:fe'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 119], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 650846, 'reachable_time': 44299, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 311440, 'error': None, 'target': 'ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:49:27 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:49:27.785 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[efbe82e4-03b1-4952-bc40-5b70128bef97]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef1:86fe'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 650846, 'tstamp': 650846}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 311441, 'error': None, 'target': 'ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:49:27 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:49:27.803 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[54e226cd-eee1-47fa-be1b-6c46b0a811e2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3379e2b3-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f1:86:fe'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 119], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 650846, 'reachable_time': 44299, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 311442, 'error': None, 'target': 'ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:49:27 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:49:27.828 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[b82d69ed-7628-4ddb-b503-2be2b3f6809e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:49:27 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1845: 321 pgs: 321 active+clean; 180 MiB data, 851 MiB used, 20 GiB / 21 GiB avail; 109 KiB/s rd, 3.5 MiB/s wr, 70 op/s
Jan 20 14:49:27 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:49:27.882 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[36d2171c-fcd1-427a-8ccc-63a5a61b1a18]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:49:27 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:49:27.883 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3379e2b3-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:49:27 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:49:27.883 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:49:27 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:49:27.884 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3379e2b3-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:49:27 compute-0 nova_compute[250018]: 2026-01-20 14:49:27.885 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:49:27 compute-0 kernel: tap3379e2b3-f0: entered promiscuous mode
Jan 20 14:49:27 compute-0 NetworkManager[48960]: <info>  [1768920567.8858] manager: (tap3379e2b3-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/186)
Jan 20 14:49:27 compute-0 nova_compute[250018]: 2026-01-20 14:49:27.887 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:49:27 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:49:27.888 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3379e2b3-f0, col_values=(('external_ids', {'iface-id': 'b32ddf23-a8dd-4e6d-a410-ccb24b214d35'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:49:27 compute-0 nova_compute[250018]: 2026-01-20 14:49:27.889 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:49:27 compute-0 ovn_controller[148666]: 2026-01-20T14:49:27Z|00363|binding|INFO|Releasing lport b32ddf23-a8dd-4e6d-a410-ccb24b214d35 from this chassis (sb_readonly=0)
Jan 20 14:49:27 compute-0 nova_compute[250018]: 2026-01-20 14:49:27.909 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:49:27 compute-0 nova_compute[250018]: 2026-01-20 14:49:27.910 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:49:27 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:49:27.911 160071 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3379e2b3-ffb2-4391-969b-c9dc51bfbe25.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3379e2b3-ffb2-4391-969b-c9dc51bfbe25.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 14:49:27 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:49:27.912 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[725393cb-a9b9-41ca-9d10-9b583e2e3c1e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:49:27 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:49:27.912 160071 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 14:49:27 compute-0 ovn_metadata_agent[160049]: global
Jan 20 14:49:27 compute-0 ovn_metadata_agent[160049]:     log         /dev/log local0 debug
Jan 20 14:49:27 compute-0 ovn_metadata_agent[160049]:     log-tag     haproxy-metadata-proxy-3379e2b3-ffb2-4391-969b-c9dc51bfbe25
Jan 20 14:49:27 compute-0 ovn_metadata_agent[160049]:     user        root
Jan 20 14:49:27 compute-0 ovn_metadata_agent[160049]:     group       root
Jan 20 14:49:27 compute-0 ovn_metadata_agent[160049]:     maxconn     1024
Jan 20 14:49:27 compute-0 ovn_metadata_agent[160049]:     pidfile     /var/lib/neutron/external/pids/3379e2b3-ffb2-4391-969b-c9dc51bfbe25.pid.haproxy
Jan 20 14:49:27 compute-0 ovn_metadata_agent[160049]:     daemon
Jan 20 14:49:27 compute-0 ovn_metadata_agent[160049]: 
Jan 20 14:49:27 compute-0 ovn_metadata_agent[160049]: defaults
Jan 20 14:49:27 compute-0 ovn_metadata_agent[160049]:     log global
Jan 20 14:49:27 compute-0 ovn_metadata_agent[160049]:     mode http
Jan 20 14:49:27 compute-0 ovn_metadata_agent[160049]:     option httplog
Jan 20 14:49:27 compute-0 ovn_metadata_agent[160049]:     option dontlognull
Jan 20 14:49:27 compute-0 ovn_metadata_agent[160049]:     option http-server-close
Jan 20 14:49:27 compute-0 ovn_metadata_agent[160049]:     option forwardfor
Jan 20 14:49:27 compute-0 ovn_metadata_agent[160049]:     retries                 3
Jan 20 14:49:27 compute-0 ovn_metadata_agent[160049]:     timeout http-request    30s
Jan 20 14:49:27 compute-0 ovn_metadata_agent[160049]:     timeout connect         30s
Jan 20 14:49:27 compute-0 ovn_metadata_agent[160049]:     timeout client          32s
Jan 20 14:49:27 compute-0 ovn_metadata_agent[160049]:     timeout server          32s
Jan 20 14:49:27 compute-0 ovn_metadata_agent[160049]:     timeout http-keep-alive 30s
Jan 20 14:49:27 compute-0 ovn_metadata_agent[160049]: 
Jan 20 14:49:27 compute-0 ovn_metadata_agent[160049]: 
Jan 20 14:49:27 compute-0 ovn_metadata_agent[160049]: listen listener
Jan 20 14:49:27 compute-0 ovn_metadata_agent[160049]:     bind 169.254.169.254:80
Jan 20 14:49:27 compute-0 ovn_metadata_agent[160049]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 14:49:27 compute-0 ovn_metadata_agent[160049]:     http-request add-header X-OVN-Network-ID 3379e2b3-ffb2-4391-969b-c9dc51bfbe25
Jan 20 14:49:27 compute-0 ovn_metadata_agent[160049]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 14:49:27 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:49:27.913 160071 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25', 'env', 'PROCESS_TAG=haproxy-3379e2b3-ffb2-4391-969b-c9dc51bfbe25', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3379e2b3-ffb2-4391-969b-c9dc51bfbe25.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 14:49:28 compute-0 nova_compute[250018]: 2026-01-20 14:49:28.048 250022 DEBUG nova.compute.manager [req-de6ce028-6e65-4256-afe5-2c63747557a9 req-216e6141-2aa1-4199-9ad4-d0988283231b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] Received event network-vif-plugged-8286e975-4b57-4b5a-9018-82187a854a2d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:49:28 compute-0 nova_compute[250018]: 2026-01-20 14:49:28.049 250022 DEBUG oslo_concurrency.lockutils [req-de6ce028-6e65-4256-afe5-2c63747557a9 req-216e6141-2aa1-4199-9ad4-d0988283231b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "52477e64-7989-4aa2-88e1-31600bfae2ef-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:49:28 compute-0 nova_compute[250018]: 2026-01-20 14:49:28.049 250022 DEBUG oslo_concurrency.lockutils [req-de6ce028-6e65-4256-afe5-2c63747557a9 req-216e6141-2aa1-4199-9ad4-d0988283231b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "52477e64-7989-4aa2-88e1-31600bfae2ef-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:49:28 compute-0 nova_compute[250018]: 2026-01-20 14:49:28.050 250022 DEBUG oslo_concurrency.lockutils [req-de6ce028-6e65-4256-afe5-2c63747557a9 req-216e6141-2aa1-4199-9ad4-d0988283231b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "52477e64-7989-4aa2-88e1-31600bfae2ef-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:49:28 compute-0 nova_compute[250018]: 2026-01-20 14:49:28.050 250022 DEBUG nova.compute.manager [req-de6ce028-6e65-4256-afe5-2c63747557a9 req-216e6141-2aa1-4199-9ad4-d0988283231b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] Processing event network-vif-plugged-8286e975-4b57-4b5a-9018-82187a854a2d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 20 14:49:28 compute-0 nova_compute[250018]: 2026-01-20 14:49:28.056 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:49:28 compute-0 nova_compute[250018]: 2026-01-20 14:49:28.204 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768920568.203932, 52477e64-7989-4aa2-88e1-31600bfae2ef => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:49:28 compute-0 nova_compute[250018]: 2026-01-20 14:49:28.206 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] VM Started (Lifecycle Event)
Jan 20 14:49:28 compute-0 nova_compute[250018]: 2026-01-20 14:49:28.207 250022 DEBUG nova.compute.manager [None req-b38011d4-8161-4aa1-8035-0f021ff0a522 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 14:49:28 compute-0 nova_compute[250018]: 2026-01-20 14:49:28.211 250022 DEBUG nova.virt.libvirt.driver [None req-b38011d4-8161-4aa1-8035-0f021ff0a522 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 14:49:28 compute-0 nova_compute[250018]: 2026-01-20 14:49:28.214 250022 INFO nova.virt.libvirt.driver [-] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] Instance spawned successfully.
Jan 20 14:49:28 compute-0 nova_compute[250018]: 2026-01-20 14:49:28.214 250022 DEBUG nova.virt.libvirt.driver [None req-b38011d4-8161-4aa1-8035-0f021ff0a522 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 14:49:28 compute-0 nova_compute[250018]: 2026-01-20 14:49:28.240 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:49:28 compute-0 nova_compute[250018]: 2026-01-20 14:49:28.247 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:49:28 compute-0 nova_compute[250018]: 2026-01-20 14:49:28.252 250022 DEBUG nova.virt.libvirt.driver [None req-b38011d4-8161-4aa1-8035-0f021ff0a522 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:49:28 compute-0 nova_compute[250018]: 2026-01-20 14:49:28.252 250022 DEBUG nova.virt.libvirt.driver [None req-b38011d4-8161-4aa1-8035-0f021ff0a522 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:49:28 compute-0 nova_compute[250018]: 2026-01-20 14:49:28.253 250022 DEBUG nova.virt.libvirt.driver [None req-b38011d4-8161-4aa1-8035-0f021ff0a522 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:49:28 compute-0 nova_compute[250018]: 2026-01-20 14:49:28.253 250022 DEBUG nova.virt.libvirt.driver [None req-b38011d4-8161-4aa1-8035-0f021ff0a522 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:49:28 compute-0 nova_compute[250018]: 2026-01-20 14:49:28.254 250022 DEBUG nova.virt.libvirt.driver [None req-b38011d4-8161-4aa1-8035-0f021ff0a522 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:49:28 compute-0 nova_compute[250018]: 2026-01-20 14:49:28.254 250022 DEBUG nova.virt.libvirt.driver [None req-b38011d4-8161-4aa1-8035-0f021ff0a522 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:49:28 compute-0 nova_compute[250018]: 2026-01-20 14:49:28.295 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:49:28 compute-0 nova_compute[250018]: 2026-01-20 14:49:28.296 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768920568.2044568, 52477e64-7989-4aa2-88e1-31600bfae2ef => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:49:28 compute-0 nova_compute[250018]: 2026-01-20 14:49:28.296 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] VM Paused (Lifecycle Event)
Jan 20 14:49:28 compute-0 podman[311516]: 2026-01-20 14:49:28.302660316 +0000 UTC m=+0.044759566 container create a766701dec35d1b08742d192f283c2362f39c243cb6a5457db8b75b47cac69b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202)
Jan 20 14:49:28 compute-0 systemd[1]: Started libpod-conmon-a766701dec35d1b08742d192f283c2362f39c243cb6a5457db8b75b47cac69b3.scope.
Jan 20 14:49:28 compute-0 nova_compute[250018]: 2026-01-20 14:49:28.340 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:49:28 compute-0 nova_compute[250018]: 2026-01-20 14:49:28.344 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768920568.210333, 52477e64-7989-4aa2-88e1-31600bfae2ef => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:49:28 compute-0 nova_compute[250018]: 2026-01-20 14:49:28.344 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] VM Resumed (Lifecycle Event)
Jan 20 14:49:28 compute-0 nova_compute[250018]: 2026-01-20 14:49:28.352 250022 INFO nova.compute.manager [None req-b38011d4-8161-4aa1-8035-0f021ff0a522 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] Took 7.77 seconds to spawn the instance on the hypervisor.
Jan 20 14:49:28 compute-0 nova_compute[250018]: 2026-01-20 14:49:28.353 250022 DEBUG nova.compute.manager [None req-b38011d4-8161-4aa1-8035-0f021ff0a522 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:49:28 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:49:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2b44a806abe6f789351602e4b7da8f61db2188a7dd4c8fb9bec75342c50d71f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 14:49:28 compute-0 podman[311516]: 2026-01-20 14:49:28.279894353 +0000 UTC m=+0.021993623 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 14:49:28 compute-0 podman[311516]: 2026-01-20 14:49:28.381499118 +0000 UTC m=+0.123598388 container init a766701dec35d1b08742d192f283c2362f39c243cb6a5457db8b75b47cac69b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 20 14:49:28 compute-0 podman[311516]: 2026-01-20 14:49:28.386747539 +0000 UTC m=+0.128846789 container start a766701dec35d1b08742d192f283c2362f39c243cb6a5457db8b75b47cac69b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 20 14:49:28 compute-0 nova_compute[250018]: 2026-01-20 14:49:28.391 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:49:28 compute-0 nova_compute[250018]: 2026-01-20 14:49:28.394 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:49:28 compute-0 neutron-haproxy-ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25[311531]: [NOTICE]   (311535) : New worker (311537) forked
Jan 20 14:49:28 compute-0 neutron-haproxy-ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25[311531]: [NOTICE]   (311535) : Loading success.
Jan 20 14:49:28 compute-0 nova_compute[250018]: 2026-01-20 14:49:28.425 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:49:28 compute-0 nova_compute[250018]: 2026-01-20 14:49:28.435 250022 INFO nova.compute.manager [None req-b38011d4-8161-4aa1-8035-0f021ff0a522 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] Took 8.86 seconds to build instance.
Jan 20 14:49:28 compute-0 nova_compute[250018]: 2026-01-20 14:49:28.460 250022 DEBUG oslo_concurrency.lockutils [None req-b38011d4-8161-4aa1-8035-0f021ff0a522 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lock "52477e64-7989-4aa2-88e1-31600bfae2ef" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.986s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:49:28 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/2640238559' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:49:28 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/2854563677' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:49:28 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:49:28 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:49:28 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:49:28.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:49:28 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:49:28 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:49:28 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:49:28.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:49:29 compute-0 nova_compute[250018]: 2026-01-20 14:49:29.101 250022 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768920554.101256, f55cf124-b1d8-47e0-80c6-8b9dd2b3f743 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:49:29 compute-0 nova_compute[250018]: 2026-01-20 14:49:29.102 250022 INFO nova.compute.manager [-] [instance: f55cf124-b1d8-47e0-80c6-8b9dd2b3f743] VM Stopped (Lifecycle Event)
Jan 20 14:49:29 compute-0 nova_compute[250018]: 2026-01-20 14:49:29.132 250022 DEBUG nova.compute.manager [None req-84670197-95aa-4240-85a2-fe49f03ec460 - - - - - -] [instance: f55cf124-b1d8-47e0-80c6-8b9dd2b3f743] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:49:29 compute-0 sudo[311546]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:49:29 compute-0 sudo[311546]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:49:29 compute-0 sudo[311546]: pam_unix(sudo:session): session closed for user root
Jan 20 14:49:29 compute-0 sudo[311573]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:49:29 compute-0 sudo[311573]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:49:29 compute-0 sudo[311573]: pam_unix(sudo:session): session closed for user root
Jan 20 14:49:29 compute-0 podman[311571]: 2026-01-20 14:49:29.462108039 +0000 UTC m=+0.062666237 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 20 14:49:29 compute-0 podman[311570]: 2026-01-20 14:49:29.491170011 +0000 UTC m=+0.091736429 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Jan 20 14:49:29 compute-0 ceph-mon[74360]: pgmap v1845: 321 pgs: 321 active+clean; 180 MiB data, 851 MiB used, 20 GiB / 21 GiB avail; 109 KiB/s rd, 3.5 MiB/s wr, 70 op/s
Jan 20 14:49:29 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e244 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:49:29 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1846: 321 pgs: 321 active+clean; 180 MiB data, 851 MiB used, 20 GiB / 21 GiB avail; 141 KiB/s rd, 3.5 MiB/s wr, 73 op/s
Jan 20 14:49:30 compute-0 nova_compute[250018]: 2026-01-20 14:49:30.195 250022 DEBUG nova.compute.manager [req-930a09f0-eb38-4a96-b57f-ff8fe9c5131e req-33e33c11-d646-41fd-953b-5b232511c865 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] Received event network-vif-plugged-8286e975-4b57-4b5a-9018-82187a854a2d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:49:30 compute-0 nova_compute[250018]: 2026-01-20 14:49:30.196 250022 DEBUG oslo_concurrency.lockutils [req-930a09f0-eb38-4a96-b57f-ff8fe9c5131e req-33e33c11-d646-41fd-953b-5b232511c865 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "52477e64-7989-4aa2-88e1-31600bfae2ef-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:49:30 compute-0 nova_compute[250018]: 2026-01-20 14:49:30.196 250022 DEBUG oslo_concurrency.lockutils [req-930a09f0-eb38-4a96-b57f-ff8fe9c5131e req-33e33c11-d646-41fd-953b-5b232511c865 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "52477e64-7989-4aa2-88e1-31600bfae2ef-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:49:30 compute-0 nova_compute[250018]: 2026-01-20 14:49:30.196 250022 DEBUG oslo_concurrency.lockutils [req-930a09f0-eb38-4a96-b57f-ff8fe9c5131e req-33e33c11-d646-41fd-953b-5b232511c865 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "52477e64-7989-4aa2-88e1-31600bfae2ef-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:49:30 compute-0 nova_compute[250018]: 2026-01-20 14:49:30.196 250022 DEBUG nova.compute.manager [req-930a09f0-eb38-4a96-b57f-ff8fe9c5131e req-33e33c11-d646-41fd-953b-5b232511c865 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] No waiting events found dispatching network-vif-plugged-8286e975-4b57-4b5a-9018-82187a854a2d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:49:30 compute-0 nova_compute[250018]: 2026-01-20 14:49:30.196 250022 WARNING nova.compute.manager [req-930a09f0-eb38-4a96-b57f-ff8fe9c5131e req-33e33c11-d646-41fd-953b-5b232511c865 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] Received unexpected event network-vif-plugged-8286e975-4b57-4b5a-9018-82187a854a2d for instance with vm_state active and task_state None.
Jan 20 14:49:30 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/4003458176' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:49:30 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:49:30 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:49:30 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:49:30.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:49:30 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:49:30 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:49:30 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:49:30.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:49:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:49:30.761 160071 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:49:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:49:30.762 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:49:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:49:30.763 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:49:31 compute-0 nova_compute[250018]: 2026-01-20 14:49:31.091 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:49:31 compute-0 ceph-mon[74360]: pgmap v1846: 321 pgs: 321 active+clean; 180 MiB data, 851 MiB used, 20 GiB / 21 GiB avail; 141 KiB/s rd, 3.5 MiB/s wr, 73 op/s
Jan 20 14:49:31 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/4060347495' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:49:31 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1847: 321 pgs: 321 active+clean; 180 MiB data, 852 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 3.6 MiB/s wr, 121 op/s
Jan 20 14:49:32 compute-0 nova_compute[250018]: 2026-01-20 14:49:32.062 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:49:32 compute-0 nova_compute[250018]: 2026-01-20 14:49:32.063 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:49:32 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:49:32 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:49:32 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:49:32.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:49:32 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:49:32 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:49:32 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:49:32.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:49:32 compute-0 nova_compute[250018]: 2026-01-20 14:49:32.652 250022 DEBUG oslo_concurrency.lockutils [None req-21da2739-0d7f-4dd9-8b17-cc91d1fed7ab 38ef13d691534d06a5110be95454010f a08a6f2a8aee493980fd658fae9e7fb4 - - default default] Acquiring lock "fc3bdf75-942a-4d44-b8eb-3d83f787fad1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:49:32 compute-0 nova_compute[250018]: 2026-01-20 14:49:32.653 250022 DEBUG oslo_concurrency.lockutils [None req-21da2739-0d7f-4dd9-8b17-cc91d1fed7ab 38ef13d691534d06a5110be95454010f a08a6f2a8aee493980fd658fae9e7fb4 - - default default] Lock "fc3bdf75-942a-4d44-b8eb-3d83f787fad1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:49:32 compute-0 nova_compute[250018]: 2026-01-20 14:49:32.676 250022 DEBUG nova.compute.manager [None req-21da2739-0d7f-4dd9-8b17-cc91d1fed7ab 38ef13d691534d06a5110be95454010f a08a6f2a8aee493980fd658fae9e7fb4 - - default default] [instance: fc3bdf75-942a-4d44-b8eb-3d83f787fad1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 14:49:32 compute-0 nova_compute[250018]: 2026-01-20 14:49:32.770 250022 DEBUG oslo_concurrency.lockutils [None req-21da2739-0d7f-4dd9-8b17-cc91d1fed7ab 38ef13d691534d06a5110be95454010f a08a6f2a8aee493980fd658fae9e7fb4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:49:32 compute-0 nova_compute[250018]: 2026-01-20 14:49:32.771 250022 DEBUG oslo_concurrency.lockutils [None req-21da2739-0d7f-4dd9-8b17-cc91d1fed7ab 38ef13d691534d06a5110be95454010f a08a6f2a8aee493980fd658fae9e7fb4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:49:32 compute-0 nova_compute[250018]: 2026-01-20 14:49:32.779 250022 DEBUG nova.virt.hardware [None req-21da2739-0d7f-4dd9-8b17-cc91d1fed7ab 38ef13d691534d06a5110be95454010f a08a6f2a8aee493980fd658fae9e7fb4 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 14:49:32 compute-0 nova_compute[250018]: 2026-01-20 14:49:32.779 250022 INFO nova.compute.claims [None req-21da2739-0d7f-4dd9-8b17-cc91d1fed7ab 38ef13d691534d06a5110be95454010f a08a6f2a8aee493980fd658fae9e7fb4 - - default default] [instance: fc3bdf75-942a-4d44-b8eb-3d83f787fad1] Claim successful on node compute-0.ctlplane.example.com
Jan 20 14:49:32 compute-0 nova_compute[250018]: 2026-01-20 14:49:32.894 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:49:32 compute-0 nova_compute[250018]: 2026-01-20 14:49:32.914 250022 WARNING nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] While synchronizing instance power states, found 2 instances in the database and 1 instances on the hypervisor.
Jan 20 14:49:32 compute-0 nova_compute[250018]: 2026-01-20 14:49:32.915 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Triggering sync for uuid 52477e64-7989-4aa2-88e1-31600bfae2ef _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Jan 20 14:49:32 compute-0 nova_compute[250018]: 2026-01-20 14:49:32.915 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Triggering sync for uuid fc3bdf75-942a-4d44-b8eb-3d83f787fad1 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Jan 20 14:49:32 compute-0 nova_compute[250018]: 2026-01-20 14:49:32.916 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "52477e64-7989-4aa2-88e1-31600bfae2ef" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:49:32 compute-0 nova_compute[250018]: 2026-01-20 14:49:32.916 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "52477e64-7989-4aa2-88e1-31600bfae2ef" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:49:32 compute-0 nova_compute[250018]: 2026-01-20 14:49:32.917 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "fc3bdf75-942a-4d44-b8eb-3d83f787fad1" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:49:33 compute-0 nova_compute[250018]: 2026-01-20 14:49:33.059 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:49:33 compute-0 nova_compute[250018]: 2026-01-20 14:49:33.096 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "52477e64-7989-4aa2-88e1-31600bfae2ef" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.180s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:49:33 compute-0 nova_compute[250018]: 2026-01-20 14:49:33.202 250022 DEBUG oslo_concurrency.processutils [None req-21da2739-0d7f-4dd9-8b17-cc91d1fed7ab 38ef13d691534d06a5110be95454010f a08a6f2a8aee493980fd658fae9e7fb4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:49:33 compute-0 ceph-mon[74360]: pgmap v1847: 321 pgs: 321 active+clean; 180 MiB data, 852 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 3.6 MiB/s wr, 121 op/s
Jan 20 14:49:33 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:49:33 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/44349596' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:49:33 compute-0 nova_compute[250018]: 2026-01-20 14:49:33.672 250022 DEBUG oslo_concurrency.processutils [None req-21da2739-0d7f-4dd9-8b17-cc91d1fed7ab 38ef13d691534d06a5110be95454010f a08a6f2a8aee493980fd658fae9e7fb4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:49:33 compute-0 nova_compute[250018]: 2026-01-20 14:49:33.682 250022 DEBUG nova.compute.provider_tree [None req-21da2739-0d7f-4dd9-8b17-cc91d1fed7ab 38ef13d691534d06a5110be95454010f a08a6f2a8aee493980fd658fae9e7fb4 - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:49:33 compute-0 nova_compute[250018]: 2026-01-20 14:49:33.740 250022 DEBUG nova.scheduler.client.report [None req-21da2739-0d7f-4dd9-8b17-cc91d1fed7ab 38ef13d691534d06a5110be95454010f a08a6f2a8aee493980fd658fae9e7fb4 - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:49:33 compute-0 nova_compute[250018]: 2026-01-20 14:49:33.804 250022 DEBUG oslo_concurrency.lockutils [None req-21da2739-0d7f-4dd9-8b17-cc91d1fed7ab 38ef13d691534d06a5110be95454010f a08a6f2a8aee493980fd658fae9e7fb4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.033s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:49:33 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1848: 321 pgs: 321 active+clean; 180 MiB data, 852 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.7 MiB/s wr, 151 op/s
Jan 20 14:49:33 compute-0 nova_compute[250018]: 2026-01-20 14:49:33.856 250022 DEBUG oslo_concurrency.lockutils [None req-21da2739-0d7f-4dd9-8b17-cc91d1fed7ab 38ef13d691534d06a5110be95454010f a08a6f2a8aee493980fd658fae9e7fb4 - - default default] Acquiring lock "25dcaeca-6e10-4670-9fec-7a4571453bab" by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:49:33 compute-0 nova_compute[250018]: 2026-01-20 14:49:33.856 250022 DEBUG oslo_concurrency.lockutils [None req-21da2739-0d7f-4dd9-8b17-cc91d1fed7ab 38ef13d691534d06a5110be95454010f a08a6f2a8aee493980fd658fae9e7fb4 - - default default] Lock "25dcaeca-6e10-4670-9fec-7a4571453bab" acquired by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:49:33 compute-0 nova_compute[250018]: 2026-01-20 14:49:33.876 250022 DEBUG oslo_concurrency.lockutils [None req-21da2739-0d7f-4dd9-8b17-cc91d1fed7ab 38ef13d691534d06a5110be95454010f a08a6f2a8aee493980fd658fae9e7fb4 - - default default] Lock "25dcaeca-6e10-4670-9fec-7a4571453bab" "released" by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" :: held 0.020s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:49:33 compute-0 nova_compute[250018]: 2026-01-20 14:49:33.878 250022 DEBUG nova.compute.manager [None req-21da2739-0d7f-4dd9-8b17-cc91d1fed7ab 38ef13d691534d06a5110be95454010f a08a6f2a8aee493980fd658fae9e7fb4 - - default default] [instance: fc3bdf75-942a-4d44-b8eb-3d83f787fad1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 14:49:33 compute-0 nova_compute[250018]: 2026-01-20 14:49:33.943 250022 DEBUG nova.compute.manager [None req-21da2739-0d7f-4dd9-8b17-cc91d1fed7ab 38ef13d691534d06a5110be95454010f a08a6f2a8aee493980fd658fae9e7fb4 - - default default] [instance: fc3bdf75-942a-4d44-b8eb-3d83f787fad1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 20 14:49:33 compute-0 nova_compute[250018]: 2026-01-20 14:49:33.944 250022 DEBUG nova.network.neutron [None req-21da2739-0d7f-4dd9-8b17-cc91d1fed7ab 38ef13d691534d06a5110be95454010f a08a6f2a8aee493980fd658fae9e7fb4 - - default default] [instance: fc3bdf75-942a-4d44-b8eb-3d83f787fad1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 14:49:33 compute-0 nova_compute[250018]: 2026-01-20 14:49:33.971 250022 INFO nova.virt.libvirt.driver [None req-21da2739-0d7f-4dd9-8b17-cc91d1fed7ab 38ef13d691534d06a5110be95454010f a08a6f2a8aee493980fd658fae9e7fb4 - - default default] [instance: fc3bdf75-942a-4d44-b8eb-3d83f787fad1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 14:49:33 compute-0 nova_compute[250018]: 2026-01-20 14:49:33.998 250022 DEBUG nova.compute.manager [None req-21da2739-0d7f-4dd9-8b17-cc91d1fed7ab 38ef13d691534d06a5110be95454010f a08a6f2a8aee493980fd658fae9e7fb4 - - default default] [instance: fc3bdf75-942a-4d44-b8eb-3d83f787fad1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 14:49:34 compute-0 nova_compute[250018]: 2026-01-20 14:49:34.121 250022 DEBUG nova.compute.manager [None req-21da2739-0d7f-4dd9-8b17-cc91d1fed7ab 38ef13d691534d06a5110be95454010f a08a6f2a8aee493980fd658fae9e7fb4 - - default default] [instance: fc3bdf75-942a-4d44-b8eb-3d83f787fad1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 14:49:34 compute-0 nova_compute[250018]: 2026-01-20 14:49:34.123 250022 DEBUG nova.virt.libvirt.driver [None req-21da2739-0d7f-4dd9-8b17-cc91d1fed7ab 38ef13d691534d06a5110be95454010f a08a6f2a8aee493980fd658fae9e7fb4 - - default default] [instance: fc3bdf75-942a-4d44-b8eb-3d83f787fad1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 14:49:34 compute-0 nova_compute[250018]: 2026-01-20 14:49:34.124 250022 INFO nova.virt.libvirt.driver [None req-21da2739-0d7f-4dd9-8b17-cc91d1fed7ab 38ef13d691534d06a5110be95454010f a08a6f2a8aee493980fd658fae9e7fb4 - - default default] [instance: fc3bdf75-942a-4d44-b8eb-3d83f787fad1] Creating image(s)
Jan 20 14:49:34 compute-0 nova_compute[250018]: 2026-01-20 14:49:34.166 250022 DEBUG nova.storage.rbd_utils [None req-21da2739-0d7f-4dd9-8b17-cc91d1fed7ab 38ef13d691534d06a5110be95454010f a08a6f2a8aee493980fd658fae9e7fb4 - - default default] rbd image fc3bdf75-942a-4d44-b8eb-3d83f787fad1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:49:34 compute-0 nova_compute[250018]: 2026-01-20 14:49:34.204 250022 DEBUG nova.storage.rbd_utils [None req-21da2739-0d7f-4dd9-8b17-cc91d1fed7ab 38ef13d691534d06a5110be95454010f a08a6f2a8aee493980fd658fae9e7fb4 - - default default] rbd image fc3bdf75-942a-4d44-b8eb-3d83f787fad1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:49:34 compute-0 nova_compute[250018]: 2026-01-20 14:49:34.231 250022 DEBUG nova.storage.rbd_utils [None req-21da2739-0d7f-4dd9-8b17-cc91d1fed7ab 38ef13d691534d06a5110be95454010f a08a6f2a8aee493980fd658fae9e7fb4 - - default default] rbd image fc3bdf75-942a-4d44-b8eb-3d83f787fad1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:49:34 compute-0 nova_compute[250018]: 2026-01-20 14:49:34.234 250022 DEBUG oslo_concurrency.processutils [None req-21da2739-0d7f-4dd9-8b17-cc91d1fed7ab 38ef13d691534d06a5110be95454010f a08a6f2a8aee493980fd658fae9e7fb4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:49:34 compute-0 nova_compute[250018]: 2026-01-20 14:49:34.260 250022 DEBUG nova.policy [None req-21da2739-0d7f-4dd9-8b17-cc91d1fed7ab 38ef13d691534d06a5110be95454010f a08a6f2a8aee493980fd658fae9e7fb4 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '38ef13d691534d06a5110be95454010f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a08a6f2a8aee493980fd658fae9e7fb4', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 20 14:49:34 compute-0 nova_compute[250018]: 2026-01-20 14:49:34.296 250022 DEBUG oslo_concurrency.processutils [None req-21da2739-0d7f-4dd9-8b17-cc91d1fed7ab 38ef13d691534d06a5110be95454010f a08a6f2a8aee493980fd658fae9e7fb4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:49:34 compute-0 nova_compute[250018]: 2026-01-20 14:49:34.297 250022 DEBUG oslo_concurrency.lockutils [None req-21da2739-0d7f-4dd9-8b17-cc91d1fed7ab 38ef13d691534d06a5110be95454010f a08a6f2a8aee493980fd658fae9e7fb4 - - default default] Acquiring lock "82d5c1918fd7c974214c7a48c1793a7a82560462" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:49:34 compute-0 nova_compute[250018]: 2026-01-20 14:49:34.297 250022 DEBUG oslo_concurrency.lockutils [None req-21da2739-0d7f-4dd9-8b17-cc91d1fed7ab 38ef13d691534d06a5110be95454010f a08a6f2a8aee493980fd658fae9e7fb4 - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:49:34 compute-0 nova_compute[250018]: 2026-01-20 14:49:34.297 250022 DEBUG oslo_concurrency.lockutils [None req-21da2739-0d7f-4dd9-8b17-cc91d1fed7ab 38ef13d691534d06a5110be95454010f a08a6f2a8aee493980fd658fae9e7fb4 - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:49:34 compute-0 nova_compute[250018]: 2026-01-20 14:49:34.325 250022 DEBUG nova.storage.rbd_utils [None req-21da2739-0d7f-4dd9-8b17-cc91d1fed7ab 38ef13d691534d06a5110be95454010f a08a6f2a8aee493980fd658fae9e7fb4 - - default default] rbd image fc3bdf75-942a-4d44-b8eb-3d83f787fad1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:49:34 compute-0 nova_compute[250018]: 2026-01-20 14:49:34.329 250022 DEBUG oslo_concurrency.processutils [None req-21da2739-0d7f-4dd9-8b17-cc91d1fed7ab 38ef13d691534d06a5110be95454010f a08a6f2a8aee493980fd658fae9e7fb4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 fc3bdf75-942a-4d44-b8eb-3d83f787fad1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:49:34 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e244 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:49:34 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:49:34 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:49:34 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:49:34.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:49:34 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:49:34 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:49:34 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:49:34.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:49:34 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/44349596' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:49:34 compute-0 nova_compute[250018]: 2026-01-20 14:49:34.959 250022 DEBUG oslo_concurrency.processutils [None req-21da2739-0d7f-4dd9-8b17-cc91d1fed7ab 38ef13d691534d06a5110be95454010f a08a6f2a8aee493980fd658fae9e7fb4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 fc3bdf75-942a-4d44-b8eb-3d83f787fad1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.630s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:49:35 compute-0 nova_compute[250018]: 2026-01-20 14:49:35.040 250022 DEBUG nova.storage.rbd_utils [None req-21da2739-0d7f-4dd9-8b17-cc91d1fed7ab 38ef13d691534d06a5110be95454010f a08a6f2a8aee493980fd658fae9e7fb4 - - default default] resizing rbd image fc3bdf75-942a-4d44-b8eb-3d83f787fad1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 20 14:49:35 compute-0 nova_compute[250018]: 2026-01-20 14:49:35.167 250022 DEBUG nova.objects.instance [None req-21da2739-0d7f-4dd9-8b17-cc91d1fed7ab 38ef13d691534d06a5110be95454010f a08a6f2a8aee493980fd658fae9e7fb4 - - default default] Lazy-loading 'migration_context' on Instance uuid fc3bdf75-942a-4d44-b8eb-3d83f787fad1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:49:35 compute-0 nova_compute[250018]: 2026-01-20 14:49:35.187 250022 DEBUG nova.virt.libvirt.driver [None req-21da2739-0d7f-4dd9-8b17-cc91d1fed7ab 38ef13d691534d06a5110be95454010f a08a6f2a8aee493980fd658fae9e7fb4 - - default default] [instance: fc3bdf75-942a-4d44-b8eb-3d83f787fad1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 14:49:35 compute-0 nova_compute[250018]: 2026-01-20 14:49:35.187 250022 DEBUG nova.virt.libvirt.driver [None req-21da2739-0d7f-4dd9-8b17-cc91d1fed7ab 38ef13d691534d06a5110be95454010f a08a6f2a8aee493980fd658fae9e7fb4 - - default default] [instance: fc3bdf75-942a-4d44-b8eb-3d83f787fad1] Ensure instance console log exists: /var/lib/nova/instances/fc3bdf75-942a-4d44-b8eb-3d83f787fad1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 14:49:35 compute-0 nova_compute[250018]: 2026-01-20 14:49:35.188 250022 DEBUG oslo_concurrency.lockutils [None req-21da2739-0d7f-4dd9-8b17-cc91d1fed7ab 38ef13d691534d06a5110be95454010f a08a6f2a8aee493980fd658fae9e7fb4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:49:35 compute-0 nova_compute[250018]: 2026-01-20 14:49:35.188 250022 DEBUG oslo_concurrency.lockutils [None req-21da2739-0d7f-4dd9-8b17-cc91d1fed7ab 38ef13d691534d06a5110be95454010f a08a6f2a8aee493980fd658fae9e7fb4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:49:35 compute-0 nova_compute[250018]: 2026-01-20 14:49:35.188 250022 DEBUG oslo_concurrency.lockutils [None req-21da2739-0d7f-4dd9-8b17-cc91d1fed7ab 38ef13d691534d06a5110be95454010f a08a6f2a8aee493980fd658fae9e7fb4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:49:35 compute-0 nova_compute[250018]: 2026-01-20 14:49:35.219 250022 DEBUG nova.network.neutron [None req-21da2739-0d7f-4dd9-8b17-cc91d1fed7ab 38ef13d691534d06a5110be95454010f a08a6f2a8aee493980fd658fae9e7fb4 - - default default] [instance: fc3bdf75-942a-4d44-b8eb-3d83f787fad1] Successfully created port: 2186c94a-fb28-491f-9215-dd9a32546223 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 20 14:49:35 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1849: 321 pgs: 321 active+clean; 211 MiB data, 852 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 2.8 MiB/s wr, 182 op/s
Jan 20 14:49:35 compute-0 ceph-mon[74360]: pgmap v1848: 321 pgs: 321 active+clean; 180 MiB data, 852 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.7 MiB/s wr, 151 op/s
Jan 20 14:49:36 compute-0 nova_compute[250018]: 2026-01-20 14:49:36.092 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:49:36 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:49:36 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:49:36 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:49:36.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:49:36 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:49:36 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:49:36 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:49:36.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:49:36 compute-0 ceph-mon[74360]: pgmap v1849: 321 pgs: 321 active+clean; 211 MiB data, 852 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 2.8 MiB/s wr, 182 op/s
Jan 20 14:49:37 compute-0 nova_compute[250018]: 2026-01-20 14:49:37.075 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:49:37 compute-0 nova_compute[250018]: 2026-01-20 14:49:37.075 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:49:37 compute-0 sudo[311832]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:49:37 compute-0 sudo[311832]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:49:37 compute-0 sudo[311832]: pam_unix(sudo:session): session closed for user root
Jan 20 14:49:37 compute-0 sudo[311857]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:49:37 compute-0 sudo[311857]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:49:37 compute-0 sudo[311857]: pam_unix(sudo:session): session closed for user root
Jan 20 14:49:37 compute-0 sudo[311882]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:49:37 compute-0 sudo[311882]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:49:37 compute-0 sudo[311882]: pam_unix(sudo:session): session closed for user root
Jan 20 14:49:37 compute-0 sudo[311907]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 14:49:37 compute-0 sudo[311907]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:49:37 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 20 14:49:37 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:49:37 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 20 14:49:37 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:49:37 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 20 14:49:37 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:49:37 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 20 14:49:37 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:49:37 compute-0 sudo[311907]: pam_unix(sudo:session): session closed for user root
Jan 20 14:49:37 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Jan 20 14:49:37 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 20 14:49:37 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1850: 321 pgs: 321 active+clean; 220 MiB data, 856 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 1.3 MiB/s wr, 169 op/s
Jan 20 14:49:37 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/1764439710' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:49:37 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:49:37 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:49:37 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:49:37 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:49:37 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 20 14:49:37 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/2025183021' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:49:37 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/2922371658' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:49:38 compute-0 nova_compute[250018]: 2026-01-20 14:49:38.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:49:38 compute-0 nova_compute[250018]: 2026-01-20 14:49:38.060 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:49:38 compute-0 nova_compute[250018]: 2026-01-20 14:49:38.084 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:49:38 compute-0 nova_compute[250018]: 2026-01-20 14:49:38.084 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:49:38 compute-0 nova_compute[250018]: 2026-01-20 14:49:38.085 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:49:38 compute-0 nova_compute[250018]: 2026-01-20 14:49:38.085 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 14:49:38 compute-0 nova_compute[250018]: 2026-01-20 14:49:38.085 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:49:38 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Jan 20 14:49:38 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 20 14:49:38 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 14:49:38 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:49:38 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 20 14:49:38 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:49:38 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 20 14:49:38 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:49:38 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 57177a80-1e25-4850-ba85-c1ec29391637 does not exist
Jan 20 14:49:38 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 6a2dab49-0613-4ea4-a24b-30cc5e3644da does not exist
Jan 20 14:49:38 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 89c20f80-8e39-4fb0-b28f-b4cd0c9db427 does not exist
Jan 20 14:49:38 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 20 14:49:38 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:49:38 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 20 14:49:38 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:49:38 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 14:49:38 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:49:38 compute-0 sudo[311983]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:49:38 compute-0 sudo[311983]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:49:38 compute-0 sudo[311983]: pam_unix(sudo:session): session closed for user root
Jan 20 14:49:38 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:49:38 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3553675854' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:49:38 compute-0 nova_compute[250018]: 2026-01-20 14:49:38.543 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:49:38 compute-0 sudo[312008]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:49:38 compute-0 sudo[312008]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:49:38 compute-0 sudo[312008]: pam_unix(sudo:session): session closed for user root
Jan 20 14:49:38 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:49:38 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:49:38 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:49:38.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:49:38 compute-0 sudo[312036]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:49:38 compute-0 sudo[312036]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:49:38 compute-0 sudo[312036]: pam_unix(sudo:session): session closed for user root
Jan 20 14:49:38 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:49:38 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:49:38 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:49:38.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:49:38 compute-0 sudo[312061]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 14:49:38 compute-0 sudo[312061]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:49:38 compute-0 nova_compute[250018]: 2026-01-20 14:49:38.710 250022 DEBUG nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] skipping disk for instance-0000006a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:49:38 compute-0 nova_compute[250018]: 2026-01-20 14:49:38.712 250022 DEBUG nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] skipping disk for instance-0000006a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:49:38 compute-0 nova_compute[250018]: 2026-01-20 14:49:38.879 250022 WARNING nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:49:38 compute-0 nova_compute[250018]: 2026-01-20 14:49:38.880 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4250MB free_disk=20.935195922851562GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 14:49:38 compute-0 nova_compute[250018]: 2026-01-20 14:49:38.880 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:49:38 compute-0 nova_compute[250018]: 2026-01-20 14:49:38.880 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:49:38 compute-0 nova_compute[250018]: 2026-01-20 14:49:38.962 250022 INFO nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] Updating resource usage from migration 4a873a64-1379-4cac-913e-e81f3f300ec7
Jan 20 14:49:38 compute-0 nova_compute[250018]: 2026-01-20 14:49:38.994 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Instance fc3bdf75-942a-4d44-b8eb-3d83f787fad1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 14:49:38 compute-0 nova_compute[250018]: 2026-01-20 14:49:38.995 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Migration 4a873a64-1379-4cac-913e-e81f3f300ec7 is active on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640
Jan 20 14:49:38 compute-0 nova_compute[250018]: 2026-01-20 14:49:38.996 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 14:49:38 compute-0 nova_compute[250018]: 2026-01-20 14:49:38.996 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 14:49:38 compute-0 ceph-mon[74360]: pgmap v1850: 321 pgs: 321 active+clean; 220 MiB data, 856 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 1.3 MiB/s wr, 169 op/s
Jan 20 14:49:38 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 20 14:49:38 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:49:38 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:49:38 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:49:38 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:49:38 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:49:38 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:49:38 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3553675854' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:49:38 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/1374144073' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:49:39 compute-0 nova_compute[250018]: 2026-01-20 14:49:39.063 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:49:39 compute-0 podman[312126]: 2026-01-20 14:49:39.081276332 +0000 UTC m=+0.040695596 container create 79e44230d69d228bad157fdb2f4e33d166b3c5901f9623c71281ccc700ae17be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_turing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 20 14:49:39 compute-0 systemd[1]: Started libpod-conmon-79e44230d69d228bad157fdb2f4e33d166b3c5901f9623c71281ccc700ae17be.scope.
Jan 20 14:49:39 compute-0 podman[312126]: 2026-01-20 14:49:39.062505578 +0000 UTC m=+0.021924792 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:49:39 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:49:39 compute-0 podman[312126]: 2026-01-20 14:49:39.181319275 +0000 UTC m=+0.140738529 container init 79e44230d69d228bad157fdb2f4e33d166b3c5901f9623c71281ccc700ae17be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_turing, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 20 14:49:39 compute-0 podman[312126]: 2026-01-20 14:49:39.19522116 +0000 UTC m=+0.154640394 container start 79e44230d69d228bad157fdb2f4e33d166b3c5901f9623c71281ccc700ae17be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_turing, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 20 14:49:39 compute-0 podman[312126]: 2026-01-20 14:49:39.199624358 +0000 UTC m=+0.159043602 container attach 79e44230d69d228bad157fdb2f4e33d166b3c5901f9623c71281ccc700ae17be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_turing, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:49:39 compute-0 sleepy_turing[312143]: 167 167
Jan 20 14:49:39 compute-0 systemd[1]: libpod-79e44230d69d228bad157fdb2f4e33d166b3c5901f9623c71281ccc700ae17be.scope: Deactivated successfully.
Jan 20 14:49:39 compute-0 podman[312126]: 2026-01-20 14:49:39.206145064 +0000 UTC m=+0.165564318 container died 79e44230d69d228bad157fdb2f4e33d166b3c5901f9623c71281ccc700ae17be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_turing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:49:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-bbdcba3b9e05cdd24ca60236e7a3be77b9691b7d111ba1953bb5c04d8df16c58-merged.mount: Deactivated successfully.
Jan 20 14:49:39 compute-0 podman[312126]: 2026-01-20 14:49:39.251965866 +0000 UTC m=+0.211385110 container remove 79e44230d69d228bad157fdb2f4e33d166b3c5901f9623c71281ccc700ae17be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_turing, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:49:39 compute-0 systemd[1]: libpod-conmon-79e44230d69d228bad157fdb2f4e33d166b3c5901f9623c71281ccc700ae17be.scope: Deactivated successfully.
Jan 20 14:49:39 compute-0 podman[312187]: 2026-01-20 14:49:39.493255291 +0000 UTC m=+0.078031112 container create a3891b39aa98479aaf88984658f49d597a01d5f9a018f8fac4d03905b91e9269 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_keldysh, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:49:39 compute-0 systemd[1]: Started libpod-conmon-a3891b39aa98479aaf88984658f49d597a01d5f9a018f8fac4d03905b91e9269.scope.
Jan 20 14:49:39 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e244 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:49:39 compute-0 podman[312187]: 2026-01-20 14:49:39.464736252 +0000 UTC m=+0.049512063 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:49:39 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:49:39 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4163135150' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:49:39 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:49:39 compute-0 nova_compute[250018]: 2026-01-20 14:49:39.589 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:49:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22e3f88c9798a4a2fc2edb7bd033688ce7e2c72dc626fce6cadfa136e6ba38f0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:49:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22e3f88c9798a4a2fc2edb7bd033688ce7e2c72dc626fce6cadfa136e6ba38f0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:49:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22e3f88c9798a4a2fc2edb7bd033688ce7e2c72dc626fce6cadfa136e6ba38f0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:49:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22e3f88c9798a4a2fc2edb7bd033688ce7e2c72dc626fce6cadfa136e6ba38f0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:49:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22e3f88c9798a4a2fc2edb7bd033688ce7e2c72dc626fce6cadfa136e6ba38f0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 14:49:39 compute-0 nova_compute[250018]: 2026-01-20 14:49:39.606 250022 DEBUG nova.compute.provider_tree [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:49:39 compute-0 podman[312187]: 2026-01-20 14:49:39.61769374 +0000 UTC m=+0.202469561 container init a3891b39aa98479aaf88984658f49d597a01d5f9a018f8fac4d03905b91e9269 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_keldysh, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:49:39 compute-0 podman[312187]: 2026-01-20 14:49:39.630167654 +0000 UTC m=+0.214943465 container start a3891b39aa98479aaf88984658f49d597a01d5f9a018f8fac4d03905b91e9269 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_keldysh, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 20 14:49:39 compute-0 nova_compute[250018]: 2026-01-20 14:49:39.632 250022 DEBUG nova.scheduler.client.report [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:49:39 compute-0 podman[312187]: 2026-01-20 14:49:39.634524143 +0000 UTC m=+0.219299934 container attach a3891b39aa98479aaf88984658f49d597a01d5f9a018f8fac4d03905b91e9269 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_keldysh, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 20 14:49:39 compute-0 nova_compute[250018]: 2026-01-20 14:49:39.696 250022 DEBUG nova.network.neutron [None req-21da2739-0d7f-4dd9-8b17-cc91d1fed7ab 38ef13d691534d06a5110be95454010f a08a6f2a8aee493980fd658fae9e7fb4 - - default default] [instance: fc3bdf75-942a-4d44-b8eb-3d83f787fad1] Successfully updated port: 2186c94a-fb28-491f-9215-dd9a32546223 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 20 14:49:39 compute-0 nova_compute[250018]: 2026-01-20 14:49:39.699 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 14:49:39 compute-0 nova_compute[250018]: 2026-01-20 14:49:39.699 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.819s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:49:39 compute-0 nova_compute[250018]: 2026-01-20 14:49:39.723 250022 DEBUG oslo_concurrency.lockutils [None req-21da2739-0d7f-4dd9-8b17-cc91d1fed7ab 38ef13d691534d06a5110be95454010f a08a6f2a8aee493980fd658fae9e7fb4 - - default default] Acquiring lock "refresh_cache-fc3bdf75-942a-4d44-b8eb-3d83f787fad1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:49:39 compute-0 nova_compute[250018]: 2026-01-20 14:49:39.724 250022 DEBUG oslo_concurrency.lockutils [None req-21da2739-0d7f-4dd9-8b17-cc91d1fed7ab 38ef13d691534d06a5110be95454010f a08a6f2a8aee493980fd658fae9e7fb4 - - default default] Acquired lock "refresh_cache-fc3bdf75-942a-4d44-b8eb-3d83f787fad1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:49:39 compute-0 nova_compute[250018]: 2026-01-20 14:49:39.724 250022 DEBUG nova.network.neutron [None req-21da2739-0d7f-4dd9-8b17-cc91d1fed7ab 38ef13d691534d06a5110be95454010f a08a6f2a8aee493980fd658fae9e7fb4 - - default default] [instance: fc3bdf75-942a-4d44-b8eb-3d83f787fad1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:49:39 compute-0 nova_compute[250018]: 2026-01-20 14:49:39.839 250022 DEBUG nova.compute.manager [req-d385a94c-0fbb-45d2-8619-0589ecab317c req-9c6c0dd1-d8e8-4780-8fd3-195a863e4930 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: fc3bdf75-942a-4d44-b8eb-3d83f787fad1] Received event network-changed-2186c94a-fb28-491f-9215-dd9a32546223 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:49:39 compute-0 nova_compute[250018]: 2026-01-20 14:49:39.840 250022 DEBUG nova.compute.manager [req-d385a94c-0fbb-45d2-8619-0589ecab317c req-9c6c0dd1-d8e8-4780-8fd3-195a863e4930 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: fc3bdf75-942a-4d44-b8eb-3d83f787fad1] Refreshing instance network info cache due to event network-changed-2186c94a-fb28-491f-9215-dd9a32546223. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:49:39 compute-0 nova_compute[250018]: 2026-01-20 14:49:39.840 250022 DEBUG oslo_concurrency.lockutils [req-d385a94c-0fbb-45d2-8619-0589ecab317c req-9c6c0dd1-d8e8-4780-8fd3-195a863e4930 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-fc3bdf75-942a-4d44-b8eb-3d83f787fad1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:49:39 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1851: 321 pgs: 321 active+clean; 227 MiB data, 862 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 169 op/s
Jan 20 14:49:40 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/4163135150' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:49:40 compute-0 unruffled_keldysh[312202]: --> passed data devices: 0 physical, 1 LVM
Jan 20 14:49:40 compute-0 unruffled_keldysh[312202]: --> relative data size: 1.0
Jan 20 14:49:40 compute-0 unruffled_keldysh[312202]: --> All data devices are unavailable
Jan 20 14:49:40 compute-0 systemd[1]: libpod-a3891b39aa98479aaf88984658f49d597a01d5f9a018f8fac4d03905b91e9269.scope: Deactivated successfully.
Jan 20 14:49:40 compute-0 nova_compute[250018]: 2026-01-20 14:49:40.494 250022 DEBUG oslo_concurrency.lockutils [None req-daed7e12-e146-4686-9444-14f2e75c6ad9 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Acquiring lock "refresh_cache-52477e64-7989-4aa2-88e1-31600bfae2ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:49:40 compute-0 podman[312187]: 2026-01-20 14:49:40.495565065 +0000 UTC m=+1.080340846 container died a3891b39aa98479aaf88984658f49d597a01d5f9a018f8fac4d03905b91e9269 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_keldysh, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 20 14:49:40 compute-0 nova_compute[250018]: 2026-01-20 14:49:40.496 250022 DEBUG oslo_concurrency.lockutils [None req-daed7e12-e146-4686-9444-14f2e75c6ad9 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Acquired lock "refresh_cache-52477e64-7989-4aa2-88e1-31600bfae2ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:49:40 compute-0 nova_compute[250018]: 2026-01-20 14:49:40.496 250022 DEBUG nova.network.neutron [None req-daed7e12-e146-4686-9444-14f2e75c6ad9 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:49:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-22e3f88c9798a4a2fc2edb7bd033688ce7e2c72dc626fce6cadfa136e6ba38f0-merged.mount: Deactivated successfully.
Jan 20 14:49:40 compute-0 podman[312187]: 2026-01-20 14:49:40.557466151 +0000 UTC m=+1.142241942 container remove a3891b39aa98479aaf88984658f49d597a01d5f9a018f8fac4d03905b91e9269 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_keldysh, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 20 14:49:40 compute-0 systemd[1]: libpod-conmon-a3891b39aa98479aaf88984658f49d597a01d5f9a018f8fac4d03905b91e9269.scope: Deactivated successfully.
Jan 20 14:49:40 compute-0 sudo[312061]: pam_unix(sudo:session): session closed for user root
Jan 20 14:49:40 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:49:40 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:49:40 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:49:40.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:49:40 compute-0 sudo[312231]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:49:40 compute-0 sudo[312231]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:49:40 compute-0 sudo[312231]: pam_unix(sudo:session): session closed for user root
Jan 20 14:49:40 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:49:40 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:49:40 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:49:40.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:49:40 compute-0 nova_compute[250018]: 2026-01-20 14:49:40.699 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:49:40 compute-0 sudo[312256]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:49:40 compute-0 sudo[312256]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:49:40 compute-0 sudo[312256]: pam_unix(sudo:session): session closed for user root
Jan 20 14:49:40 compute-0 sudo[312281]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:49:40 compute-0 sudo[312281]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:49:40 compute-0 sudo[312281]: pam_unix(sudo:session): session closed for user root
Jan 20 14:49:40 compute-0 sudo[312306]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- lvm list --format json
Jan 20 14:49:40 compute-0 sudo[312306]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:49:40 compute-0 nova_compute[250018]: 2026-01-20 14:49:40.856 250022 DEBUG nova.network.neutron [None req-21da2739-0d7f-4dd9-8b17-cc91d1fed7ab 38ef13d691534d06a5110be95454010f a08a6f2a8aee493980fd658fae9e7fb4 - - default default] [instance: fc3bdf75-942a-4d44-b8eb-3d83f787fad1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 14:49:41 compute-0 ceph-mon[74360]: pgmap v1851: 321 pgs: 321 active+clean; 227 MiB data, 862 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 169 op/s
Jan 20 14:49:41 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/728206553' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:49:41 compute-0 nova_compute[250018]: 2026-01-20 14:49:41.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:49:41 compute-0 nova_compute[250018]: 2026-01-20 14:49:41.050 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 14:49:41 compute-0 nova_compute[250018]: 2026-01-20 14:49:41.094 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:49:41 compute-0 podman[312371]: 2026-01-20 14:49:41.157466068 +0000 UTC m=+0.048528577 container create 3588b0fb0233cbbce2d9696d7a6f1ad90801ec39d9a7b97c0885bbec32bf1cc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_bartik, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 20 14:49:41 compute-0 systemd[1]: Started libpod-conmon-3588b0fb0233cbbce2d9696d7a6f1ad90801ec39d9a7b97c0885bbec32bf1cc2.scope.
Jan 20 14:49:41 compute-0 podman[312371]: 2026-01-20 14:49:41.133356369 +0000 UTC m=+0.024418928 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:49:41 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:49:41 compute-0 podman[312371]: 2026-01-20 14:49:41.248551719 +0000 UTC m=+0.139614248 container init 3588b0fb0233cbbce2d9696d7a6f1ad90801ec39d9a7b97c0885bbec32bf1cc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_bartik, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 20 14:49:41 compute-0 podman[312371]: 2026-01-20 14:49:41.257777968 +0000 UTC m=+0.148840467 container start 3588b0fb0233cbbce2d9696d7a6f1ad90801ec39d9a7b97c0885bbec32bf1cc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_bartik, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:49:41 compute-0 podman[312371]: 2026-01-20 14:49:41.262199326 +0000 UTC m=+0.153261845 container attach 3588b0fb0233cbbce2d9696d7a6f1ad90801ec39d9a7b97c0885bbec32bf1cc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_bartik, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:49:41 compute-0 great_bartik[312387]: 167 167
Jan 20 14:49:41 compute-0 systemd[1]: libpod-3588b0fb0233cbbce2d9696d7a6f1ad90801ec39d9a7b97c0885bbec32bf1cc2.scope: Deactivated successfully.
Jan 20 14:49:41 compute-0 conmon[312387]: conmon 3588b0fb0233cbbce2d9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3588b0fb0233cbbce2d9696d7a6f1ad90801ec39d9a7b97c0885bbec32bf1cc2.scope/container/memory.events
Jan 20 14:49:41 compute-0 podman[312371]: 2026-01-20 14:49:41.265131215 +0000 UTC m=+0.156193724 container died 3588b0fb0233cbbce2d9696d7a6f1ad90801ec39d9a7b97c0885bbec32bf1cc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_bartik, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:49:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-fa7b57559785bd8165492895998798ef032f4fdb2c0d78325dfb87a50ad6b0e4-merged.mount: Deactivated successfully.
Jan 20 14:49:41 compute-0 podman[312371]: 2026-01-20 14:49:41.308329238 +0000 UTC m=+0.199391757 container remove 3588b0fb0233cbbce2d9696d7a6f1ad90801ec39d9a7b97c0885bbec32bf1cc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_bartik, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 20 14:49:41 compute-0 systemd[1]: libpod-conmon-3588b0fb0233cbbce2d9696d7a6f1ad90801ec39d9a7b97c0885bbec32bf1cc2.scope: Deactivated successfully.
Jan 20 14:49:41 compute-0 podman[312413]: 2026-01-20 14:49:41.47634763 +0000 UTC m=+0.040867111 container create 123c3dff0498b44f939f59c82753be49e505a9d4a2a23cfc1ec420b7ddc15745 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_raman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:49:41 compute-0 systemd[1]: Started libpod-conmon-123c3dff0498b44f939f59c82753be49e505a9d4a2a23cfc1ec420b7ddc15745.scope.
Jan 20 14:49:41 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:49:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb0ca7b1151dfc7d18a6e11b37834ada99b60390cff2bf3363ba237296ef342a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:49:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb0ca7b1151dfc7d18a6e11b37834ada99b60390cff2bf3363ba237296ef342a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:49:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb0ca7b1151dfc7d18a6e11b37834ada99b60390cff2bf3363ba237296ef342a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:49:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb0ca7b1151dfc7d18a6e11b37834ada99b60390cff2bf3363ba237296ef342a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:49:41 compute-0 podman[312413]: 2026-01-20 14:49:41.457851352 +0000 UTC m=+0.022370863 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:49:41 compute-0 podman[312413]: 2026-01-20 14:49:41.564003369 +0000 UTC m=+0.128522900 container init 123c3dff0498b44f939f59c82753be49e505a9d4a2a23cfc1ec420b7ddc15745 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_raman, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 14:49:41 compute-0 podman[312413]: 2026-01-20 14:49:41.569520508 +0000 UTC m=+0.134039989 container start 123c3dff0498b44f939f59c82753be49e505a9d4a2a23cfc1ec420b7ddc15745 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_raman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 20 14:49:41 compute-0 podman[312413]: 2026-01-20 14:49:41.572617001 +0000 UTC m=+0.137136482 container attach 123c3dff0498b44f939f59c82753be49e505a9d4a2a23cfc1ec420b7ddc15745 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_raman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 20 14:49:41 compute-0 ovn_controller[148666]: 2026-01-20T14:49:41Z|00043|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:19:a9:8c 10.100.0.6
Jan 20 14:49:41 compute-0 ovn_controller[148666]: 2026-01-20T14:49:41Z|00044|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:19:a9:8c 10.100.0.6
Jan 20 14:49:41 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1852: 321 pgs: 321 active+clean; 227 MiB data, 873 MiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 1.8 MiB/s wr, 166 op/s
Jan 20 14:49:42 compute-0 nova_compute[250018]: 2026-01-20 14:49:42.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:49:42 compute-0 sharp_raman[312429]: {
Jan 20 14:49:42 compute-0 sharp_raman[312429]:     "0": [
Jan 20 14:49:42 compute-0 sharp_raman[312429]:         {
Jan 20 14:49:42 compute-0 sharp_raman[312429]:             "devices": [
Jan 20 14:49:42 compute-0 sharp_raman[312429]:                 "/dev/loop3"
Jan 20 14:49:42 compute-0 sharp_raman[312429]:             ],
Jan 20 14:49:42 compute-0 sharp_raman[312429]:             "lv_name": "ceph_lv0",
Jan 20 14:49:42 compute-0 sharp_raman[312429]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:49:42 compute-0 sharp_raman[312429]:             "lv_size": "7511998464",
Jan 20 14:49:42 compute-0 sharp_raman[312429]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e399cf45-e6b6-5393-99f1-75c601d3f188,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afc3740a-ccee-46f8-b343-22648cc89376,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 20 14:49:42 compute-0 sharp_raman[312429]:             "lv_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 14:49:42 compute-0 sharp_raman[312429]:             "name": "ceph_lv0",
Jan 20 14:49:42 compute-0 sharp_raman[312429]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:49:42 compute-0 sharp_raman[312429]:             "tags": {
Jan 20 14:49:42 compute-0 sharp_raman[312429]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:49:42 compute-0 sharp_raman[312429]:                 "ceph.block_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 14:49:42 compute-0 sharp_raman[312429]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 14:49:42 compute-0 sharp_raman[312429]:                 "ceph.cluster_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 14:49:42 compute-0 sharp_raman[312429]:                 "ceph.cluster_name": "ceph",
Jan 20 14:49:42 compute-0 sharp_raman[312429]:                 "ceph.crush_device_class": "",
Jan 20 14:49:42 compute-0 sharp_raman[312429]:                 "ceph.encrypted": "0",
Jan 20 14:49:42 compute-0 sharp_raman[312429]:                 "ceph.osd_fsid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 14:49:42 compute-0 sharp_raman[312429]:                 "ceph.osd_id": "0",
Jan 20 14:49:42 compute-0 sharp_raman[312429]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 14:49:42 compute-0 sharp_raman[312429]:                 "ceph.type": "block",
Jan 20 14:49:42 compute-0 sharp_raman[312429]:                 "ceph.vdo": "0"
Jan 20 14:49:42 compute-0 sharp_raman[312429]:             },
Jan 20 14:49:42 compute-0 sharp_raman[312429]:             "type": "block",
Jan 20 14:49:42 compute-0 sharp_raman[312429]:             "vg_name": "ceph_vg0"
Jan 20 14:49:42 compute-0 sharp_raman[312429]:         }
Jan 20 14:49:42 compute-0 sharp_raman[312429]:     ]
Jan 20 14:49:42 compute-0 sharp_raman[312429]: }
Jan 20 14:49:42 compute-0 systemd[1]: libpod-123c3dff0498b44f939f59c82753be49e505a9d4a2a23cfc1ec420b7ddc15745.scope: Deactivated successfully.
Jan 20 14:49:42 compute-0 podman[312413]: 2026-01-20 14:49:42.367002899 +0000 UTC m=+0.931522400 container died 123c3dff0498b44f939f59c82753be49e505a9d4a2a23cfc1ec420b7ddc15745 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_raman, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:49:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-eb0ca7b1151dfc7d18a6e11b37834ada99b60390cff2bf3363ba237296ef342a-merged.mount: Deactivated successfully.
Jan 20 14:49:42 compute-0 podman[312413]: 2026-01-20 14:49:42.447232988 +0000 UTC m=+1.011752479 container remove 123c3dff0498b44f939f59c82753be49e505a9d4a2a23cfc1ec420b7ddc15745 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_raman, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 20 14:49:42 compute-0 systemd[1]: libpod-conmon-123c3dff0498b44f939f59c82753be49e505a9d4a2a23cfc1ec420b7ddc15745.scope: Deactivated successfully.
Jan 20 14:49:42 compute-0 sudo[312306]: pam_unix(sudo:session): session closed for user root
Jan 20 14:49:42 compute-0 sudo[312452]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:49:42 compute-0 sudo[312452]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:49:42 compute-0 sudo[312452]: pam_unix(sudo:session): session closed for user root
Jan 20 14:49:42 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:49:42 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:49:42 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:49:42.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:49:42 compute-0 sudo[312477]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:49:42 compute-0 sudo[312477]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:49:42 compute-0 sudo[312477]: pam_unix(sudo:session): session closed for user root
Jan 20 14:49:42 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:49:42 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:49:42 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:49:42.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:49:42 compute-0 nova_compute[250018]: 2026-01-20 14:49:42.668 250022 DEBUG nova.network.neutron [None req-21da2739-0d7f-4dd9-8b17-cc91d1fed7ab 38ef13d691534d06a5110be95454010f a08a6f2a8aee493980fd658fae9e7fb4 - - default default] [instance: fc3bdf75-942a-4d44-b8eb-3d83f787fad1] Updating instance_info_cache with network_info: [{"id": "2186c94a-fb28-491f-9215-dd9a32546223", "address": "fa:16:3e:e1:5d:2b", "network": {"id": "0acabbdd-eec1-4793-b4fa-b8460f681925", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-1538335699-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a08a6f2a8aee493980fd658fae9e7fb4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2186c94a-fb", "ovs_interfaceid": "2186c94a-fb28-491f-9215-dd9a32546223", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:49:42 compute-0 sudo[312502]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:49:42 compute-0 sudo[312502]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:49:42 compute-0 nova_compute[250018]: 2026-01-20 14:49:42.690 250022 DEBUG oslo_concurrency.lockutils [None req-21da2739-0d7f-4dd9-8b17-cc91d1fed7ab 38ef13d691534d06a5110be95454010f a08a6f2a8aee493980fd658fae9e7fb4 - - default default] Releasing lock "refresh_cache-fc3bdf75-942a-4d44-b8eb-3d83f787fad1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:49:42 compute-0 nova_compute[250018]: 2026-01-20 14:49:42.691 250022 DEBUG nova.compute.manager [None req-21da2739-0d7f-4dd9-8b17-cc91d1fed7ab 38ef13d691534d06a5110be95454010f a08a6f2a8aee493980fd658fae9e7fb4 - - default default] [instance: fc3bdf75-942a-4d44-b8eb-3d83f787fad1] Instance network_info: |[{"id": "2186c94a-fb28-491f-9215-dd9a32546223", "address": "fa:16:3e:e1:5d:2b", "network": {"id": "0acabbdd-eec1-4793-b4fa-b8460f681925", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-1538335699-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a08a6f2a8aee493980fd658fae9e7fb4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2186c94a-fb", "ovs_interfaceid": "2186c94a-fb28-491f-9215-dd9a32546223", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 20 14:49:42 compute-0 nova_compute[250018]: 2026-01-20 14:49:42.691 250022 DEBUG oslo_concurrency.lockutils [req-d385a94c-0fbb-45d2-8619-0589ecab317c req-9c6c0dd1-d8e8-4780-8fd3-195a863e4930 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-fc3bdf75-942a-4d44-b8eb-3d83f787fad1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:49:42 compute-0 nova_compute[250018]: 2026-01-20 14:49:42.691 250022 DEBUG nova.network.neutron [req-d385a94c-0fbb-45d2-8619-0589ecab317c req-9c6c0dd1-d8e8-4780-8fd3-195a863e4930 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: fc3bdf75-942a-4d44-b8eb-3d83f787fad1] Refreshing network info cache for port 2186c94a-fb28-491f-9215-dd9a32546223 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:49:42 compute-0 sudo[312502]: pam_unix(sudo:session): session closed for user root
Jan 20 14:49:42 compute-0 nova_compute[250018]: 2026-01-20 14:49:42.695 250022 DEBUG nova.virt.libvirt.driver [None req-21da2739-0d7f-4dd9-8b17-cc91d1fed7ab 38ef13d691534d06a5110be95454010f a08a6f2a8aee493980fd658fae9e7fb4 - - default default] [instance: fc3bdf75-942a-4d44-b8eb-3d83f787fad1] Start _get_guest_xml network_info=[{"id": "2186c94a-fb28-491f-9215-dd9a32546223", "address": "fa:16:3e:e1:5d:2b", "network": {"id": "0acabbdd-eec1-4793-b4fa-b8460f681925", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-1538335699-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a08a6f2a8aee493980fd658fae9e7fb4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2186c94a-fb", "ovs_interfaceid": "2186c94a-fb28-491f-9215-dd9a32546223", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encrypted': False, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'guest_format': None, 'size': 0, 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 14:49:42 compute-0 nova_compute[250018]: 2026-01-20 14:49:42.700 250022 WARNING nova.virt.libvirt.driver [None req-21da2739-0d7f-4dd9-8b17-cc91d1fed7ab 38ef13d691534d06a5110be95454010f a08a6f2a8aee493980fd658fae9e7fb4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:49:42 compute-0 nova_compute[250018]: 2026-01-20 14:49:42.707 250022 DEBUG nova.virt.libvirt.host [None req-21da2739-0d7f-4dd9-8b17-cc91d1fed7ab 38ef13d691534d06a5110be95454010f a08a6f2a8aee493980fd658fae9e7fb4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 14:49:42 compute-0 nova_compute[250018]: 2026-01-20 14:49:42.708 250022 DEBUG nova.virt.libvirt.host [None req-21da2739-0d7f-4dd9-8b17-cc91d1fed7ab 38ef13d691534d06a5110be95454010f a08a6f2a8aee493980fd658fae9e7fb4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 14:49:42 compute-0 nova_compute[250018]: 2026-01-20 14:49:42.711 250022 DEBUG nova.virt.libvirt.host [None req-21da2739-0d7f-4dd9-8b17-cc91d1fed7ab 38ef13d691534d06a5110be95454010f a08a6f2a8aee493980fd658fae9e7fb4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 14:49:42 compute-0 nova_compute[250018]: 2026-01-20 14:49:42.712 250022 DEBUG nova.virt.libvirt.host [None req-21da2739-0d7f-4dd9-8b17-cc91d1fed7ab 38ef13d691534d06a5110be95454010f a08a6f2a8aee493980fd658fae9e7fb4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 14:49:42 compute-0 nova_compute[250018]: 2026-01-20 14:49:42.713 250022 DEBUG nova.virt.libvirt.driver [None req-21da2739-0d7f-4dd9-8b17-cc91d1fed7ab 38ef13d691534d06a5110be95454010f a08a6f2a8aee493980fd658fae9e7fb4 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 14:49:42 compute-0 nova_compute[250018]: 2026-01-20 14:49:42.713 250022 DEBUG nova.virt.hardware [None req-21da2739-0d7f-4dd9-8b17-cc91d1fed7ab 38ef13d691534d06a5110be95454010f a08a6f2a8aee493980fd658fae9e7fb4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 14:49:42 compute-0 nova_compute[250018]: 2026-01-20 14:49:42.714 250022 DEBUG nova.virt.hardware [None req-21da2739-0d7f-4dd9-8b17-cc91d1fed7ab 38ef13d691534d06a5110be95454010f a08a6f2a8aee493980fd658fae9e7fb4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 14:49:42 compute-0 nova_compute[250018]: 2026-01-20 14:49:42.714 250022 DEBUG nova.virt.hardware [None req-21da2739-0d7f-4dd9-8b17-cc91d1fed7ab 38ef13d691534d06a5110be95454010f a08a6f2a8aee493980fd658fae9e7fb4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 14:49:42 compute-0 nova_compute[250018]: 2026-01-20 14:49:42.714 250022 DEBUG nova.virt.hardware [None req-21da2739-0d7f-4dd9-8b17-cc91d1fed7ab 38ef13d691534d06a5110be95454010f a08a6f2a8aee493980fd658fae9e7fb4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 14:49:42 compute-0 nova_compute[250018]: 2026-01-20 14:49:42.715 250022 DEBUG nova.virt.hardware [None req-21da2739-0d7f-4dd9-8b17-cc91d1fed7ab 38ef13d691534d06a5110be95454010f a08a6f2a8aee493980fd658fae9e7fb4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 14:49:42 compute-0 nova_compute[250018]: 2026-01-20 14:49:42.715 250022 DEBUG nova.virt.hardware [None req-21da2739-0d7f-4dd9-8b17-cc91d1fed7ab 38ef13d691534d06a5110be95454010f a08a6f2a8aee493980fd658fae9e7fb4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 14:49:42 compute-0 nova_compute[250018]: 2026-01-20 14:49:42.715 250022 DEBUG nova.virt.hardware [None req-21da2739-0d7f-4dd9-8b17-cc91d1fed7ab 38ef13d691534d06a5110be95454010f a08a6f2a8aee493980fd658fae9e7fb4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 14:49:42 compute-0 nova_compute[250018]: 2026-01-20 14:49:42.716 250022 DEBUG nova.virt.hardware [None req-21da2739-0d7f-4dd9-8b17-cc91d1fed7ab 38ef13d691534d06a5110be95454010f a08a6f2a8aee493980fd658fae9e7fb4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 14:49:42 compute-0 nova_compute[250018]: 2026-01-20 14:49:42.716 250022 DEBUG nova.virt.hardware [None req-21da2739-0d7f-4dd9-8b17-cc91d1fed7ab 38ef13d691534d06a5110be95454010f a08a6f2a8aee493980fd658fae9e7fb4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 14:49:42 compute-0 nova_compute[250018]: 2026-01-20 14:49:42.716 250022 DEBUG nova.virt.hardware [None req-21da2739-0d7f-4dd9-8b17-cc91d1fed7ab 38ef13d691534d06a5110be95454010f a08a6f2a8aee493980fd658fae9e7fb4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 14:49:42 compute-0 nova_compute[250018]: 2026-01-20 14:49:42.717 250022 DEBUG nova.virt.hardware [None req-21da2739-0d7f-4dd9-8b17-cc91d1fed7ab 38ef13d691534d06a5110be95454010f a08a6f2a8aee493980fd658fae9e7fb4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 14:49:42 compute-0 nova_compute[250018]: 2026-01-20 14:49:42.721 250022 DEBUG oslo_concurrency.processutils [None req-21da2739-0d7f-4dd9-8b17-cc91d1fed7ab 38ef13d691534d06a5110be95454010f a08a6f2a8aee493980fd658fae9e7fb4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:49:42 compute-0 sudo[312527]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- raw list --format json
Jan 20 14:49:42 compute-0 sudo[312527]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:49:43 compute-0 ceph-mon[74360]: pgmap v1852: 321 pgs: 321 active+clean; 227 MiB data, 873 MiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 1.8 MiB/s wr, 166 op/s
Jan 20 14:49:43 compute-0 nova_compute[250018]: 2026-01-20 14:49:43.062 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:49:43 compute-0 podman[312611]: 2026-01-20 14:49:43.103929482 +0000 UTC m=+0.048826615 container create 16a6ad18ddde9abccf51943aa8fdb1857d9394d7391dd967ff4a2b41df657200 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_faraday, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 20 14:49:43 compute-0 systemd[1]: Started libpod-conmon-16a6ad18ddde9abccf51943aa8fdb1857d9394d7391dd967ff4a2b41df657200.scope.
Jan 20 14:49:43 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:49:43 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/450672286' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:49:43 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:49:43 compute-0 nova_compute[250018]: 2026-01-20 14:49:43.170 250022 DEBUG oslo_concurrency.processutils [None req-21da2739-0d7f-4dd9-8b17-cc91d1fed7ab 38ef13d691534d06a5110be95454010f a08a6f2a8aee493980fd658fae9e7fb4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:49:43 compute-0 podman[312611]: 2026-01-20 14:49:43.078882748 +0000 UTC m=+0.023779971 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:49:43 compute-0 podman[312611]: 2026-01-20 14:49:43.192875695 +0000 UTC m=+0.137772858 container init 16a6ad18ddde9abccf51943aa8fdb1857d9394d7391dd967ff4a2b41df657200 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_faraday, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:49:43 compute-0 podman[312611]: 2026-01-20 14:49:43.199997317 +0000 UTC m=+0.144894460 container start 16a6ad18ddde9abccf51943aa8fdb1857d9394d7391dd967ff4a2b41df657200 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_faraday, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:49:43 compute-0 nova_compute[250018]: 2026-01-20 14:49:43.203 250022 DEBUG nova.storage.rbd_utils [None req-21da2739-0d7f-4dd9-8b17-cc91d1fed7ab 38ef13d691534d06a5110be95454010f a08a6f2a8aee493980fd658fae9e7fb4 - - default default] rbd image fc3bdf75-942a-4d44-b8eb-3d83f787fad1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:49:43 compute-0 elastic_faraday[312627]: 167 167
Jan 20 14:49:43 compute-0 systemd[1]: libpod-16a6ad18ddde9abccf51943aa8fdb1857d9394d7391dd967ff4a2b41df657200.scope: Deactivated successfully.
Jan 20 14:49:43 compute-0 podman[312611]: 2026-01-20 14:49:43.206336518 +0000 UTC m=+0.151233661 container attach 16a6ad18ddde9abccf51943aa8fdb1857d9394d7391dd967ff4a2b41df657200 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_faraday, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:49:43 compute-0 podman[312611]: 2026-01-20 14:49:43.206728408 +0000 UTC m=+0.151625551 container died 16a6ad18ddde9abccf51943aa8fdb1857d9394d7391dd967ff4a2b41df657200 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_faraday, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 20 14:49:43 compute-0 nova_compute[250018]: 2026-01-20 14:49:43.209 250022 DEBUG oslo_concurrency.processutils [None req-21da2739-0d7f-4dd9-8b17-cc91d1fed7ab 38ef13d691534d06a5110be95454010f a08a6f2a8aee493980fd658fae9e7fb4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:49:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-26d6c538214dda7e8f8e7cfd9de56d58135fbac9509bd521dc6ac17deacb9ee0-merged.mount: Deactivated successfully.
Jan 20 14:49:43 compute-0 podman[312611]: 2026-01-20 14:49:43.254212036 +0000 UTC m=+0.199109169 container remove 16a6ad18ddde9abccf51943aa8fdb1857d9394d7391dd967ff4a2b41df657200 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_faraday, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 20 14:49:43 compute-0 systemd[1]: libpod-conmon-16a6ad18ddde9abccf51943aa8fdb1857d9394d7391dd967ff4a2b41df657200.scope: Deactivated successfully.
Jan 20 14:49:43 compute-0 nova_compute[250018]: 2026-01-20 14:49:43.404 250022 DEBUG nova.network.neutron [None req-daed7e12-e146-4686-9444-14f2e75c6ad9 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] Updating instance_info_cache with network_info: [{"id": "8286e975-4b57-4b5a-9018-82187a854a2d", "address": "fa:16:3e:19:a9:8c", "network": {"id": "3379e2b3-ffb2-4391-969b-c9dc51bfbe25", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1112843240-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acb30fbc0e3749e390d7f867060b5a2a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8286e975-4b", "ovs_interfaceid": "8286e975-4b57-4b5a-9018-82187a854a2d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:49:43 compute-0 podman[312691]: 2026-01-20 14:49:43.41452205 +0000 UTC m=+0.035297291 container create b1ef9166827c914cd0e259d7fb69d872613572da804bad50c2ef448c38bc8c33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_ritchie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:49:43 compute-0 nova_compute[250018]: 2026-01-20 14:49:43.436 250022 DEBUG oslo_concurrency.lockutils [None req-daed7e12-e146-4686-9444-14f2e75c6ad9 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Releasing lock "refresh_cache-52477e64-7989-4aa2-88e1-31600bfae2ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:49:43 compute-0 systemd[1]: Started libpod-conmon-b1ef9166827c914cd0e259d7fb69d872613572da804bad50c2ef448c38bc8c33.scope.
Jan 20 14:49:43 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:49:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd73d113b15e9bbd9bc87af8cf44887522405ece24ce57a92750462c98b9eea5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:49:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd73d113b15e9bbd9bc87af8cf44887522405ece24ce57a92750462c98b9eea5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:49:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd73d113b15e9bbd9bc87af8cf44887522405ece24ce57a92750462c98b9eea5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:49:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd73d113b15e9bbd9bc87af8cf44887522405ece24ce57a92750462c98b9eea5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:49:43 compute-0 podman[312691]: 2026-01-20 14:49:43.400204645 +0000 UTC m=+0.020979906 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:49:43 compute-0 podman[312691]: 2026-01-20 14:49:43.524755327 +0000 UTC m=+0.145530608 container init b1ef9166827c914cd0e259d7fb69d872613572da804bad50c2ef448c38bc8c33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_ritchie, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:49:43 compute-0 podman[312691]: 2026-01-20 14:49:43.53082787 +0000 UTC m=+0.151603111 container start b1ef9166827c914cd0e259d7fb69d872613572da804bad50c2ef448c38bc8c33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_ritchie, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True)
Jan 20 14:49:43 compute-0 podman[312691]: 2026-01-20 14:49:43.533801641 +0000 UTC m=+0.154576932 container attach b1ef9166827c914cd0e259d7fb69d872613572da804bad50c2ef448c38bc8c33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_ritchie, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:49:43 compute-0 nova_compute[250018]: 2026-01-20 14:49:43.586 250022 DEBUG nova.virt.libvirt.driver [None req-daed7e12-e146-4686-9444-14f2e75c6ad9 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] Starting migrate_disk_and_power_off migrate_disk_and_power_off /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11511
Jan 20 14:49:43 compute-0 nova_compute[250018]: 2026-01-20 14:49:43.587 250022 DEBUG nova.virt.libvirt.volume.remotefs [None req-daed7e12-e146-4686-9444-14f2e75c6ad9 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Creating file /var/lib/nova/instances/52477e64-7989-4aa2-88e1-31600bfae2ef/a6ec928707ba4d63b1535a6ced14057c.tmp on remote host 192.168.122.101 create_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:79
Jan 20 14:49:43 compute-0 nova_compute[250018]: 2026-01-20 14:49:43.587 250022 DEBUG oslo_concurrency.processutils [None req-daed7e12-e146-4686-9444-14f2e75c6ad9 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.101 touch /var/lib/nova/instances/52477e64-7989-4aa2-88e1-31600bfae2ef/a6ec928707ba4d63b1535a6ced14057c.tmp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:49:43 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:49:43 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4171830442' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:49:43 compute-0 nova_compute[250018]: 2026-01-20 14:49:43.670 250022 DEBUG oslo_concurrency.processutils [None req-21da2739-0d7f-4dd9-8b17-cc91d1fed7ab 38ef13d691534d06a5110be95454010f a08a6f2a8aee493980fd658fae9e7fb4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:49:43 compute-0 nova_compute[250018]: 2026-01-20 14:49:43.672 250022 DEBUG nova.virt.libvirt.vif [None req-21da2739-0d7f-4dd9-8b17-cc91d1fed7ab 38ef13d691534d06a5110be95454010f a08a6f2a8aee493980fd658fae9e7fb4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:49:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerGroupTestJSON-server-1333128055',display_name='tempest-ServerGroupTestJSON-server-1333128055',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servergrouptestjson-server-1333128055',id=108,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a08a6f2a8aee493980fd658fae9e7fb4',ramdisk_id='',reservation_id='r-ejyc3vtj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerGroupTestJSON-778624312',owner_user_name='tempest-ServerGroupTestJSON-778624312-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:49:34Z,user_data=None,user_id='38ef13d691534d06a5110be95454010f',uuid=fc3bdf75-942a-4d44-b8eb-3d83f787fad1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2186c94a-fb28-491f-9215-dd9a32546223", "address": "fa:16:3e:e1:5d:2b", "network": {"id": "0acabbdd-eec1-4793-b4fa-b8460f681925", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-1538335699-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a08a6f2a8aee493980fd658fae9e7fb4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2186c94a-fb", "ovs_interfaceid": "2186c94a-fb28-491f-9215-dd9a32546223", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 14:49:43 compute-0 nova_compute[250018]: 2026-01-20 14:49:43.673 250022 DEBUG nova.network.os_vif_util [None req-21da2739-0d7f-4dd9-8b17-cc91d1fed7ab 38ef13d691534d06a5110be95454010f a08a6f2a8aee493980fd658fae9e7fb4 - - default default] Converting VIF {"id": "2186c94a-fb28-491f-9215-dd9a32546223", "address": "fa:16:3e:e1:5d:2b", "network": {"id": "0acabbdd-eec1-4793-b4fa-b8460f681925", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-1538335699-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a08a6f2a8aee493980fd658fae9e7fb4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2186c94a-fb", "ovs_interfaceid": "2186c94a-fb28-491f-9215-dd9a32546223", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:49:43 compute-0 nova_compute[250018]: 2026-01-20 14:49:43.674 250022 DEBUG nova.network.os_vif_util [None req-21da2739-0d7f-4dd9-8b17-cc91d1fed7ab 38ef13d691534d06a5110be95454010f a08a6f2a8aee493980fd658fae9e7fb4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e1:5d:2b,bridge_name='br-int',has_traffic_filtering=True,id=2186c94a-fb28-491f-9215-dd9a32546223,network=Network(0acabbdd-eec1-4793-b4fa-b8460f681925),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2186c94a-fb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:49:43 compute-0 nova_compute[250018]: 2026-01-20 14:49:43.675 250022 DEBUG nova.objects.instance [None req-21da2739-0d7f-4dd9-8b17-cc91d1fed7ab 38ef13d691534d06a5110be95454010f a08a6f2a8aee493980fd658fae9e7fb4 - - default default] Lazy-loading 'pci_devices' on Instance uuid fc3bdf75-942a-4d44-b8eb-3d83f787fad1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:49:43 compute-0 nova_compute[250018]: 2026-01-20 14:49:43.693 250022 DEBUG nova.virt.libvirt.driver [None req-21da2739-0d7f-4dd9-8b17-cc91d1fed7ab 38ef13d691534d06a5110be95454010f a08a6f2a8aee493980fd658fae9e7fb4 - - default default] [instance: fc3bdf75-942a-4d44-b8eb-3d83f787fad1] End _get_guest_xml xml=<domain type="kvm">
Jan 20 14:49:43 compute-0 nova_compute[250018]:   <uuid>fc3bdf75-942a-4d44-b8eb-3d83f787fad1</uuid>
Jan 20 14:49:43 compute-0 nova_compute[250018]:   <name>instance-0000006c</name>
Jan 20 14:49:43 compute-0 nova_compute[250018]:   <memory>131072</memory>
Jan 20 14:49:43 compute-0 nova_compute[250018]:   <vcpu>1</vcpu>
Jan 20 14:49:43 compute-0 nova_compute[250018]:   <metadata>
Jan 20 14:49:43 compute-0 nova_compute[250018]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 14:49:43 compute-0 nova_compute[250018]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 14:49:43 compute-0 nova_compute[250018]:       <nova:name>tempest-ServerGroupTestJSON-server-1333128055</nova:name>
Jan 20 14:49:43 compute-0 nova_compute[250018]:       <nova:creationTime>2026-01-20 14:49:42</nova:creationTime>
Jan 20 14:49:43 compute-0 nova_compute[250018]:       <nova:flavor name="m1.nano">
Jan 20 14:49:43 compute-0 nova_compute[250018]:         <nova:memory>128</nova:memory>
Jan 20 14:49:43 compute-0 nova_compute[250018]:         <nova:disk>1</nova:disk>
Jan 20 14:49:43 compute-0 nova_compute[250018]:         <nova:swap>0</nova:swap>
Jan 20 14:49:43 compute-0 nova_compute[250018]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 14:49:43 compute-0 nova_compute[250018]:         <nova:vcpus>1</nova:vcpus>
Jan 20 14:49:43 compute-0 nova_compute[250018]:       </nova:flavor>
Jan 20 14:49:43 compute-0 nova_compute[250018]:       <nova:owner>
Jan 20 14:49:43 compute-0 nova_compute[250018]:         <nova:user uuid="38ef13d691534d06a5110be95454010f">tempest-ServerGroupTestJSON-778624312-project-member</nova:user>
Jan 20 14:49:43 compute-0 nova_compute[250018]:         <nova:project uuid="a08a6f2a8aee493980fd658fae9e7fb4">tempest-ServerGroupTestJSON-778624312</nova:project>
Jan 20 14:49:43 compute-0 nova_compute[250018]:       </nova:owner>
Jan 20 14:49:43 compute-0 nova_compute[250018]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 14:49:43 compute-0 nova_compute[250018]:       <nova:ports>
Jan 20 14:49:43 compute-0 nova_compute[250018]:         <nova:port uuid="2186c94a-fb28-491f-9215-dd9a32546223">
Jan 20 14:49:43 compute-0 nova_compute[250018]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 20 14:49:43 compute-0 nova_compute[250018]:         </nova:port>
Jan 20 14:49:43 compute-0 nova_compute[250018]:       </nova:ports>
Jan 20 14:49:43 compute-0 nova_compute[250018]:     </nova:instance>
Jan 20 14:49:43 compute-0 nova_compute[250018]:   </metadata>
Jan 20 14:49:43 compute-0 nova_compute[250018]:   <sysinfo type="smbios">
Jan 20 14:49:43 compute-0 nova_compute[250018]:     <system>
Jan 20 14:49:43 compute-0 nova_compute[250018]:       <entry name="manufacturer">RDO</entry>
Jan 20 14:49:43 compute-0 nova_compute[250018]:       <entry name="product">OpenStack Compute</entry>
Jan 20 14:49:43 compute-0 nova_compute[250018]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 14:49:43 compute-0 nova_compute[250018]:       <entry name="serial">fc3bdf75-942a-4d44-b8eb-3d83f787fad1</entry>
Jan 20 14:49:43 compute-0 nova_compute[250018]:       <entry name="uuid">fc3bdf75-942a-4d44-b8eb-3d83f787fad1</entry>
Jan 20 14:49:43 compute-0 nova_compute[250018]:       <entry name="family">Virtual Machine</entry>
Jan 20 14:49:43 compute-0 nova_compute[250018]:     </system>
Jan 20 14:49:43 compute-0 nova_compute[250018]:   </sysinfo>
Jan 20 14:49:43 compute-0 nova_compute[250018]:   <os>
Jan 20 14:49:43 compute-0 nova_compute[250018]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 14:49:43 compute-0 nova_compute[250018]:     <boot dev="hd"/>
Jan 20 14:49:43 compute-0 nova_compute[250018]:     <smbios mode="sysinfo"/>
Jan 20 14:49:43 compute-0 nova_compute[250018]:   </os>
Jan 20 14:49:43 compute-0 nova_compute[250018]:   <features>
Jan 20 14:49:43 compute-0 nova_compute[250018]:     <acpi/>
Jan 20 14:49:43 compute-0 nova_compute[250018]:     <apic/>
Jan 20 14:49:43 compute-0 nova_compute[250018]:     <vmcoreinfo/>
Jan 20 14:49:43 compute-0 nova_compute[250018]:   </features>
Jan 20 14:49:43 compute-0 nova_compute[250018]:   <clock offset="utc">
Jan 20 14:49:43 compute-0 nova_compute[250018]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 14:49:43 compute-0 nova_compute[250018]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 14:49:43 compute-0 nova_compute[250018]:     <timer name="hpet" present="no"/>
Jan 20 14:49:43 compute-0 nova_compute[250018]:   </clock>
Jan 20 14:49:43 compute-0 nova_compute[250018]:   <cpu mode="custom" match="exact">
Jan 20 14:49:43 compute-0 nova_compute[250018]:     <model>Nehalem</model>
Jan 20 14:49:43 compute-0 nova_compute[250018]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 14:49:43 compute-0 nova_compute[250018]:   </cpu>
Jan 20 14:49:43 compute-0 nova_compute[250018]:   <devices>
Jan 20 14:49:43 compute-0 nova_compute[250018]:     <disk type="network" device="disk">
Jan 20 14:49:43 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 14:49:43 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/fc3bdf75-942a-4d44-b8eb-3d83f787fad1_disk">
Jan 20 14:49:43 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:49:43 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:49:43 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:49:43 compute-0 nova_compute[250018]:       </source>
Jan 20 14:49:43 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 14:49:43 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:49:43 compute-0 nova_compute[250018]:       </auth>
Jan 20 14:49:43 compute-0 nova_compute[250018]:       <target dev="vda" bus="virtio"/>
Jan 20 14:49:43 compute-0 nova_compute[250018]:     </disk>
Jan 20 14:49:43 compute-0 nova_compute[250018]:     <disk type="network" device="cdrom">
Jan 20 14:49:43 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 14:49:43 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/fc3bdf75-942a-4d44-b8eb-3d83f787fad1_disk.config">
Jan 20 14:49:43 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:49:43 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:49:43 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:49:43 compute-0 nova_compute[250018]:       </source>
Jan 20 14:49:43 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 14:49:43 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:49:43 compute-0 nova_compute[250018]:       </auth>
Jan 20 14:49:43 compute-0 nova_compute[250018]:       <target dev="sda" bus="sata"/>
Jan 20 14:49:43 compute-0 nova_compute[250018]:     </disk>
Jan 20 14:49:43 compute-0 nova_compute[250018]:     <interface type="ethernet">
Jan 20 14:49:43 compute-0 nova_compute[250018]:       <mac address="fa:16:3e:e1:5d:2b"/>
Jan 20 14:49:43 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 14:49:43 compute-0 nova_compute[250018]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 14:49:43 compute-0 nova_compute[250018]:       <mtu size="1442"/>
Jan 20 14:49:43 compute-0 nova_compute[250018]:       <target dev="tap2186c94a-fb"/>
Jan 20 14:49:43 compute-0 nova_compute[250018]:     </interface>
Jan 20 14:49:43 compute-0 nova_compute[250018]:     <serial type="pty">
Jan 20 14:49:43 compute-0 nova_compute[250018]:       <log file="/var/lib/nova/instances/fc3bdf75-942a-4d44-b8eb-3d83f787fad1/console.log" append="off"/>
Jan 20 14:49:43 compute-0 nova_compute[250018]:     </serial>
Jan 20 14:49:43 compute-0 nova_compute[250018]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 14:49:43 compute-0 nova_compute[250018]:     <video>
Jan 20 14:49:43 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 14:49:43 compute-0 nova_compute[250018]:     </video>
Jan 20 14:49:43 compute-0 nova_compute[250018]:     <input type="tablet" bus="usb"/>
Jan 20 14:49:43 compute-0 nova_compute[250018]:     <rng model="virtio">
Jan 20 14:49:43 compute-0 nova_compute[250018]:       <backend model="random">/dev/urandom</backend>
Jan 20 14:49:43 compute-0 nova_compute[250018]:     </rng>
Jan 20 14:49:43 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root"/>
Jan 20 14:49:43 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:49:43 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:49:43 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:49:43 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:49:43 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:49:43 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:49:43 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:49:43 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:49:43 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:49:43 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:49:43 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:49:43 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:49:43 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:49:43 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:49:43 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:49:43 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:49:43 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:49:43 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:49:43 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:49:43 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:49:43 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:49:43 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:49:43 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:49:43 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:49:43 compute-0 nova_compute[250018]:     <controller type="usb" index="0"/>
Jan 20 14:49:43 compute-0 nova_compute[250018]:     <memballoon model="virtio">
Jan 20 14:49:43 compute-0 nova_compute[250018]:       <stats period="10"/>
Jan 20 14:49:43 compute-0 nova_compute[250018]:     </memballoon>
Jan 20 14:49:43 compute-0 nova_compute[250018]:   </devices>
Jan 20 14:49:43 compute-0 nova_compute[250018]: </domain>
Jan 20 14:49:43 compute-0 nova_compute[250018]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 14:49:43 compute-0 nova_compute[250018]: 2026-01-20 14:49:43.693 250022 DEBUG nova.compute.manager [None req-21da2739-0d7f-4dd9-8b17-cc91d1fed7ab 38ef13d691534d06a5110be95454010f a08a6f2a8aee493980fd658fae9e7fb4 - - default default] [instance: fc3bdf75-942a-4d44-b8eb-3d83f787fad1] Preparing to wait for external event network-vif-plugged-2186c94a-fb28-491f-9215-dd9a32546223 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 20 14:49:43 compute-0 nova_compute[250018]: 2026-01-20 14:49:43.694 250022 DEBUG oslo_concurrency.lockutils [None req-21da2739-0d7f-4dd9-8b17-cc91d1fed7ab 38ef13d691534d06a5110be95454010f a08a6f2a8aee493980fd658fae9e7fb4 - - default default] Acquiring lock "fc3bdf75-942a-4d44-b8eb-3d83f787fad1-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:49:43 compute-0 nova_compute[250018]: 2026-01-20 14:49:43.694 250022 DEBUG oslo_concurrency.lockutils [None req-21da2739-0d7f-4dd9-8b17-cc91d1fed7ab 38ef13d691534d06a5110be95454010f a08a6f2a8aee493980fd658fae9e7fb4 - - default default] Lock "fc3bdf75-942a-4d44-b8eb-3d83f787fad1-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:49:43 compute-0 nova_compute[250018]: 2026-01-20 14:49:43.695 250022 DEBUG oslo_concurrency.lockutils [None req-21da2739-0d7f-4dd9-8b17-cc91d1fed7ab 38ef13d691534d06a5110be95454010f a08a6f2a8aee493980fd658fae9e7fb4 - - default default] Lock "fc3bdf75-942a-4d44-b8eb-3d83f787fad1-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:49:43 compute-0 nova_compute[250018]: 2026-01-20 14:49:43.696 250022 DEBUG nova.virt.libvirt.vif [None req-21da2739-0d7f-4dd9-8b17-cc91d1fed7ab 38ef13d691534d06a5110be95454010f a08a6f2a8aee493980fd658fae9e7fb4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:49:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerGroupTestJSON-server-1333128055',display_name='tempest-ServerGroupTestJSON-server-1333128055',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servergrouptestjson-server-1333128055',id=108,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a08a6f2a8aee493980fd658fae9e7fb4',ramdisk_id='',reservation_id='r-ejyc3vtj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerGroupTestJSON-778624312',owner_user_name='tempest-ServerGroupTestJSON-778624312-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:49:34Z,user_data=None,user_id='38ef13d691534d06a5110be95454010f',uuid=fc3bdf75-942a-4d44-b8eb-3d83f787fad1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2186c94a-fb28-491f-9215-dd9a32546223", "address": "fa:16:3e:e1:5d:2b", "network": {"id": "0acabbdd-eec1-4793-b4fa-b8460f681925", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-1538335699-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a08a6f2a8aee493980fd658fae9e7fb4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2186c94a-fb", "ovs_interfaceid": "2186c94a-fb28-491f-9215-dd9a32546223", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 14:49:43 compute-0 nova_compute[250018]: 2026-01-20 14:49:43.697 250022 DEBUG nova.network.os_vif_util [None req-21da2739-0d7f-4dd9-8b17-cc91d1fed7ab 38ef13d691534d06a5110be95454010f a08a6f2a8aee493980fd658fae9e7fb4 - - default default] Converting VIF {"id": "2186c94a-fb28-491f-9215-dd9a32546223", "address": "fa:16:3e:e1:5d:2b", "network": {"id": "0acabbdd-eec1-4793-b4fa-b8460f681925", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-1538335699-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a08a6f2a8aee493980fd658fae9e7fb4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2186c94a-fb", "ovs_interfaceid": "2186c94a-fb28-491f-9215-dd9a32546223", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:49:43 compute-0 nova_compute[250018]: 2026-01-20 14:49:43.697 250022 DEBUG nova.network.os_vif_util [None req-21da2739-0d7f-4dd9-8b17-cc91d1fed7ab 38ef13d691534d06a5110be95454010f a08a6f2a8aee493980fd658fae9e7fb4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e1:5d:2b,bridge_name='br-int',has_traffic_filtering=True,id=2186c94a-fb28-491f-9215-dd9a32546223,network=Network(0acabbdd-eec1-4793-b4fa-b8460f681925),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2186c94a-fb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:49:43 compute-0 nova_compute[250018]: 2026-01-20 14:49:43.698 250022 DEBUG os_vif [None req-21da2739-0d7f-4dd9-8b17-cc91d1fed7ab 38ef13d691534d06a5110be95454010f a08a6f2a8aee493980fd658fae9e7fb4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e1:5d:2b,bridge_name='br-int',has_traffic_filtering=True,id=2186c94a-fb28-491f-9215-dd9a32546223,network=Network(0acabbdd-eec1-4793-b4fa-b8460f681925),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2186c94a-fb') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 14:49:43 compute-0 nova_compute[250018]: 2026-01-20 14:49:43.699 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:49:43 compute-0 nova_compute[250018]: 2026-01-20 14:49:43.699 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:49:43 compute-0 nova_compute[250018]: 2026-01-20 14:49:43.700 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:49:43 compute-0 nova_compute[250018]: 2026-01-20 14:49:43.704 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:49:43 compute-0 nova_compute[250018]: 2026-01-20 14:49:43.704 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2186c94a-fb, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:49:43 compute-0 nova_compute[250018]: 2026-01-20 14:49:43.705 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2186c94a-fb, col_values=(('external_ids', {'iface-id': '2186c94a-fb28-491f-9215-dd9a32546223', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e1:5d:2b', 'vm-uuid': 'fc3bdf75-942a-4d44-b8eb-3d83f787fad1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:49:43 compute-0 nova_compute[250018]: 2026-01-20 14:49:43.706 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:49:43 compute-0 NetworkManager[48960]: <info>  [1768920583.7074] manager: (tap2186c94a-fb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/187)
Jan 20 14:49:43 compute-0 nova_compute[250018]: 2026-01-20 14:49:43.708 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:49:43 compute-0 nova_compute[250018]: 2026-01-20 14:49:43.714 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:49:43 compute-0 nova_compute[250018]: 2026-01-20 14:49:43.715 250022 INFO os_vif [None req-21da2739-0d7f-4dd9-8b17-cc91d1fed7ab 38ef13d691534d06a5110be95454010f a08a6f2a8aee493980fd658fae9e7fb4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e1:5d:2b,bridge_name='br-int',has_traffic_filtering=True,id=2186c94a-fb28-491f-9215-dd9a32546223,network=Network(0acabbdd-eec1-4793-b4fa-b8460f681925),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2186c94a-fb')
Jan 20 14:49:43 compute-0 nova_compute[250018]: 2026-01-20 14:49:43.792 250022 DEBUG nova.virt.libvirt.driver [None req-21da2739-0d7f-4dd9-8b17-cc91d1fed7ab 38ef13d691534d06a5110be95454010f a08a6f2a8aee493980fd658fae9e7fb4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:49:43 compute-0 nova_compute[250018]: 2026-01-20 14:49:43.793 250022 DEBUG nova.virt.libvirt.driver [None req-21da2739-0d7f-4dd9-8b17-cc91d1fed7ab 38ef13d691534d06a5110be95454010f a08a6f2a8aee493980fd658fae9e7fb4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:49:43 compute-0 nova_compute[250018]: 2026-01-20 14:49:43.793 250022 DEBUG nova.virt.libvirt.driver [None req-21da2739-0d7f-4dd9-8b17-cc91d1fed7ab 38ef13d691534d06a5110be95454010f a08a6f2a8aee493980fd658fae9e7fb4 - - default default] No VIF found with MAC fa:16:3e:e1:5d:2b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 14:49:43 compute-0 nova_compute[250018]: 2026-01-20 14:49:43.794 250022 INFO nova.virt.libvirt.driver [None req-21da2739-0d7f-4dd9-8b17-cc91d1fed7ab 38ef13d691534d06a5110be95454010f a08a6f2a8aee493980fd658fae9e7fb4 - - default default] [instance: fc3bdf75-942a-4d44-b8eb-3d83f787fad1] Using config drive
Jan 20 14:49:43 compute-0 nova_compute[250018]: 2026-01-20 14:49:43.823 250022 DEBUG nova.storage.rbd_utils [None req-21da2739-0d7f-4dd9-8b17-cc91d1fed7ab 38ef13d691534d06a5110be95454010f a08a6f2a8aee493980fd658fae9e7fb4 - - default default] rbd image fc3bdf75-942a-4d44-b8eb-3d83f787fad1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:49:43 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1853: 321 pgs: 321 active+clean; 239 MiB data, 882 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 2.8 MiB/s wr, 143 op/s
Jan 20 14:49:44 compute-0 nova_compute[250018]: 2026-01-20 14:49:44.043 250022 DEBUG oslo_concurrency.processutils [None req-daed7e12-e146-4686-9444-14f2e75c6ad9 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] CMD "ssh -o BatchMode=yes 192.168.122.101 touch /var/lib/nova/instances/52477e64-7989-4aa2-88e1-31600bfae2ef/a6ec928707ba4d63b1535a6ced14057c.tmp" returned: 1 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:49:44 compute-0 nova_compute[250018]: 2026-01-20 14:49:44.044 250022 DEBUG oslo_concurrency.processutils [None req-daed7e12-e146-4686-9444-14f2e75c6ad9 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] 'ssh -o BatchMode=yes 192.168.122.101 touch /var/lib/nova/instances/52477e64-7989-4aa2-88e1-31600bfae2ef/a6ec928707ba4d63b1535a6ced14057c.tmp' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Jan 20 14:49:44 compute-0 nova_compute[250018]: 2026-01-20 14:49:44.044 250022 DEBUG nova.virt.libvirt.volume.remotefs [None req-daed7e12-e146-4686-9444-14f2e75c6ad9 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Creating directory /var/lib/nova/instances/52477e64-7989-4aa2-88e1-31600bfae2ef on remote host 192.168.122.101 create_dir /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:91
Jan 20 14:49:44 compute-0 nova_compute[250018]: 2026-01-20 14:49:44.045 250022 DEBUG oslo_concurrency.processutils [None req-daed7e12-e146-4686-9444-14f2e75c6ad9 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.101 mkdir -p /var/lib/nova/instances/52477e64-7989-4aa2-88e1-31600bfae2ef execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:49:44 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/450672286' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:49:44 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/4171830442' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:49:44 compute-0 nova_compute[250018]: 2026-01-20 14:49:44.080 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:49:44 compute-0 nova_compute[250018]: 2026-01-20 14:49:44.080 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 14:49:44 compute-0 nova_compute[250018]: 2026-01-20 14:49:44.080 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 14:49:44 compute-0 nova_compute[250018]: 2026-01-20 14:49:44.103 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] [instance: fc3bdf75-942a-4d44-b8eb-3d83f787fad1] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 20 14:49:44 compute-0 nova_compute[250018]: 2026-01-20 14:49:44.104 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "refresh_cache-52477e64-7989-4aa2-88e1-31600bfae2ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:49:44 compute-0 nova_compute[250018]: 2026-01-20 14:49:44.104 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquired lock "refresh_cache-52477e64-7989-4aa2-88e1-31600bfae2ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:49:44 compute-0 nova_compute[250018]: 2026-01-20 14:49:44.104 250022 DEBUG nova.network.neutron [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 20 14:49:44 compute-0 nova_compute[250018]: 2026-01-20 14:49:44.104 250022 DEBUG nova.objects.instance [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 52477e64-7989-4aa2-88e1-31600bfae2ef obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:49:44 compute-0 nova_compute[250018]: 2026-01-20 14:49:44.274 250022 DEBUG oslo_concurrency.processutils [None req-daed7e12-e146-4686-9444-14f2e75c6ad9 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] CMD "ssh -o BatchMode=yes 192.168.122.101 mkdir -p /var/lib/nova/instances/52477e64-7989-4aa2-88e1-31600bfae2ef" returned: 0 in 0.229s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:49:44 compute-0 nova_compute[250018]: 2026-01-20 14:49:44.280 250022 DEBUG nova.virt.libvirt.driver [None req-daed7e12-e146-4686-9444-14f2e75c6ad9 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Jan 20 14:49:44 compute-0 sharp_ritchie[312707]: {
Jan 20 14:49:44 compute-0 sharp_ritchie[312707]:     "afc3740a-ccee-46f8-b343-22648cc89376": {
Jan 20 14:49:44 compute-0 sharp_ritchie[312707]:         "ceph_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 14:49:44 compute-0 sharp_ritchie[312707]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 20 14:49:44 compute-0 sharp_ritchie[312707]:         "osd_id": 0,
Jan 20 14:49:44 compute-0 sharp_ritchie[312707]:         "osd_uuid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 14:49:44 compute-0 sharp_ritchie[312707]:         "type": "bluestore"
Jan 20 14:49:44 compute-0 sharp_ritchie[312707]:     }
Jan 20 14:49:44 compute-0 sharp_ritchie[312707]: }
Jan 20 14:49:44 compute-0 systemd[1]: libpod-b1ef9166827c914cd0e259d7fb69d872613572da804bad50c2ef448c38bc8c33.scope: Deactivated successfully.
Jan 20 14:49:44 compute-0 podman[312691]: 2026-01-20 14:49:44.418346765 +0000 UTC m=+1.039122026 container died b1ef9166827c914cd0e259d7fb69d872613572da804bad50c2ef448c38bc8c33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_ritchie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 20 14:49:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-dd73d113b15e9bbd9bc87af8cf44887522405ece24ce57a92750462c98b9eea5-merged.mount: Deactivated successfully.
Jan 20 14:49:44 compute-0 podman[312691]: 2026-01-20 14:49:44.472246436 +0000 UTC m=+1.093021667 container remove b1ef9166827c914cd0e259d7fb69d872613572da804bad50c2ef448c38bc8c33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_ritchie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 20 14:49:44 compute-0 systemd[1]: libpod-conmon-b1ef9166827c914cd0e259d7fb69d872613572da804bad50c2ef448c38bc8c33.scope: Deactivated successfully.
Jan 20 14:49:44 compute-0 sudo[312527]: pam_unix(sudo:session): session closed for user root
Jan 20 14:49:44 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 14:49:44 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:49:44 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 14:49:44 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:49:44 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 67ab7b5e-bee5-4959-87d4-79931b782d5e does not exist
Jan 20 14:49:44 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev c56daa12-4840-496c-bb31-bdefffd6ffee does not exist
Jan 20 14:49:44 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev d9306103-3afa-4b6d-a2da-7ddbf9174de7 does not exist
Jan 20 14:49:44 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e244 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:49:44 compute-0 sudo[312766]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:49:44 compute-0 sudo[312766]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:49:44 compute-0 sudo[312766]: pam_unix(sudo:session): session closed for user root
Jan 20 14:49:44 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:49:44 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:49:44 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:49:44.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:49:44 compute-0 sudo[312791]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 14:49:44 compute-0 sudo[312791]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:49:44 compute-0 sudo[312791]: pam_unix(sudo:session): session closed for user root
Jan 20 14:49:44 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:49:44 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:49:44 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:49:44.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:49:45 compute-0 ceph-mon[74360]: pgmap v1853: 321 pgs: 321 active+clean; 239 MiB data, 882 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 2.8 MiB/s wr, 143 op/s
Jan 20 14:49:45 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:49:45 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:49:45 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1854: 321 pgs: 321 active+clean; 290 MiB data, 919 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 6.0 MiB/s wr, 190 op/s
Jan 20 14:49:45 compute-0 nova_compute[250018]: 2026-01-20 14:49:45.951 250022 INFO nova.virt.libvirt.driver [None req-21da2739-0d7f-4dd9-8b17-cc91d1fed7ab 38ef13d691534d06a5110be95454010f a08a6f2a8aee493980fd658fae9e7fb4 - - default default] [instance: fc3bdf75-942a-4d44-b8eb-3d83f787fad1] Creating config drive at /var/lib/nova/instances/fc3bdf75-942a-4d44-b8eb-3d83f787fad1/disk.config
Jan 20 14:49:45 compute-0 nova_compute[250018]: 2026-01-20 14:49:45.955 250022 DEBUG oslo_concurrency.processutils [None req-21da2739-0d7f-4dd9-8b17-cc91d1fed7ab 38ef13d691534d06a5110be95454010f a08a6f2a8aee493980fd658fae9e7fb4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/fc3bdf75-942a-4d44-b8eb-3d83f787fad1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0mbz8e_i execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:49:46 compute-0 nova_compute[250018]: 2026-01-20 14:49:46.092 250022 DEBUG oslo_concurrency.processutils [None req-21da2739-0d7f-4dd9-8b17-cc91d1fed7ab 38ef13d691534d06a5110be95454010f a08a6f2a8aee493980fd658fae9e7fb4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/fc3bdf75-942a-4d44-b8eb-3d83f787fad1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0mbz8e_i" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:49:46 compute-0 nova_compute[250018]: 2026-01-20 14:49:46.125 250022 DEBUG nova.storage.rbd_utils [None req-21da2739-0d7f-4dd9-8b17-cc91d1fed7ab 38ef13d691534d06a5110be95454010f a08a6f2a8aee493980fd658fae9e7fb4 - - default default] rbd image fc3bdf75-942a-4d44-b8eb-3d83f787fad1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:49:46 compute-0 nova_compute[250018]: 2026-01-20 14:49:46.130 250022 DEBUG oslo_concurrency.processutils [None req-21da2739-0d7f-4dd9-8b17-cc91d1fed7ab 38ef13d691534d06a5110be95454010f a08a6f2a8aee493980fd658fae9e7fb4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/fc3bdf75-942a-4d44-b8eb-3d83f787fad1/disk.config fc3bdf75-942a-4d44-b8eb-3d83f787fad1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:49:46 compute-0 nova_compute[250018]: 2026-01-20 14:49:46.310 250022 DEBUG oslo_concurrency.processutils [None req-21da2739-0d7f-4dd9-8b17-cc91d1fed7ab 38ef13d691534d06a5110be95454010f a08a6f2a8aee493980fd658fae9e7fb4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/fc3bdf75-942a-4d44-b8eb-3d83f787fad1/disk.config fc3bdf75-942a-4d44-b8eb-3d83f787fad1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.180s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:49:46 compute-0 nova_compute[250018]: 2026-01-20 14:49:46.312 250022 INFO nova.virt.libvirt.driver [None req-21da2739-0d7f-4dd9-8b17-cc91d1fed7ab 38ef13d691534d06a5110be95454010f a08a6f2a8aee493980fd658fae9e7fb4 - - default default] [instance: fc3bdf75-942a-4d44-b8eb-3d83f787fad1] Deleting local config drive /var/lib/nova/instances/fc3bdf75-942a-4d44-b8eb-3d83f787fad1/disk.config because it was imported into RBD.
Jan 20 14:49:46 compute-0 NetworkManager[48960]: <info>  [1768920586.3719] manager: (tap2186c94a-fb): new Tun device (/org/freedesktop/NetworkManager/Devices/188)
Jan 20 14:49:46 compute-0 kernel: tap2186c94a-fb: entered promiscuous mode
Jan 20 14:49:46 compute-0 ovn_controller[148666]: 2026-01-20T14:49:46Z|00364|binding|INFO|Claiming lport 2186c94a-fb28-491f-9215-dd9a32546223 for this chassis.
Jan 20 14:49:46 compute-0 ovn_controller[148666]: 2026-01-20T14:49:46Z|00365|binding|INFO|2186c94a-fb28-491f-9215-dd9a32546223: Claiming fa:16:3e:e1:5d:2b 10.100.0.7
Jan 20 14:49:46 compute-0 nova_compute[250018]: 2026-01-20 14:49:46.377 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:49:46 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:49:46.387 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e1:5d:2b 10.100.0.7'], port_security=['fa:16:3e:e1:5d:2b 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'fc3bdf75-942a-4d44-b8eb-3d83f787fad1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0acabbdd-eec1-4793-b4fa-b8460f681925', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a08a6f2a8aee493980fd658fae9e7fb4', 'neutron:revision_number': '2', 'neutron:security_group_ids': '78fa914c-4283-4b9d-8d92-448d5293ceb3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4f3cbb7f-f345-4a5e-a790-2fbc3a5594f2, chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=2186c94a-fb28-491f-9215-dd9a32546223) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:49:46 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:49:46.388 160071 INFO neutron.agent.ovn.metadata.agent [-] Port 2186c94a-fb28-491f-9215-dd9a32546223 in datapath 0acabbdd-eec1-4793-b4fa-b8460f681925 bound to our chassis
Jan 20 14:49:46 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:49:46.389 160071 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0acabbdd-eec1-4793-b4fa-b8460f681925
Jan 20 14:49:46 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:49:46.402 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[51bc915e-9016-4105-a497-ea71b053d5c5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:49:46 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:49:46.403 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap0acabbdd-e1 in ovnmeta-0acabbdd-eec1-4793-b4fa-b8460f681925 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 14:49:46 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:49:46.406 257604 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap0acabbdd-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 14:49:46 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:49:46.406 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[db458b86-0661-41c3-a542-d21f059fe4c9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:49:46 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:49:46.408 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[fb0a45e5-67ba-493b-82aa-7554884a61c6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:49:46 compute-0 systemd-udevd[312873]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:49:46 compute-0 systemd-machined[216401]: New machine qemu-48-instance-0000006c.
Jan 20 14:49:46 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:49:46.423 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[1a5255e8-1e38-4467-836f-1d8df2455717]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:49:46 compute-0 NetworkManager[48960]: <info>  [1768920586.4301] device (tap2186c94a-fb): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 14:49:46 compute-0 NetworkManager[48960]: <info>  [1768920586.4310] device (tap2186c94a-fb): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 14:49:46 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:49:46.439 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[b0c57906-126d-4cc4-ac08-ddabf04450b9]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:49:46 compute-0 systemd[1]: Started Virtual Machine qemu-48-instance-0000006c.
Jan 20 14:49:46 compute-0 sshd-session[312827]: Invalid user test from 157.245.78.139 port 36472
Jan 20 14:49:46 compute-0 nova_compute[250018]: 2026-01-20 14:49:46.469 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:49:46 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:49:46.471 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[ea4889d2-53d9-400d-b060-7378d0ae1579]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:49:46 compute-0 ovn_controller[148666]: 2026-01-20T14:49:46Z|00366|binding|INFO|Setting lport 2186c94a-fb28-491f-9215-dd9a32546223 ovn-installed in OVS
Jan 20 14:49:46 compute-0 ovn_controller[148666]: 2026-01-20T14:49:46Z|00367|binding|INFO|Setting lport 2186c94a-fb28-491f-9215-dd9a32546223 up in Southbound
Jan 20 14:49:46 compute-0 nova_compute[250018]: 2026-01-20 14:49:46.475 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:49:46 compute-0 systemd-udevd[312877]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:49:46 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:49:46.477 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[b22e92b0-3584-4961-bffb-b5a70de67b8e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:49:46 compute-0 NetworkManager[48960]: <info>  [1768920586.4817] manager: (tap0acabbdd-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/189)
Jan 20 14:49:46 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:49:46.513 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[b0702e91-2bb1-46eb-82bf-dcca9a8830a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:49:46 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:49:46.516 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[c21b5d4c-a541-4727-a6cc-ec7537bb17bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:49:46 compute-0 NetworkManager[48960]: <info>  [1768920586.5353] device (tap0acabbdd-e0): carrier: link connected
Jan 20 14:49:46 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:49:46.541 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[12e553ce-2286-44cb-b88f-0c184b056cf2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:49:46 compute-0 sshd-session[312827]: Connection closed by invalid user test 157.245.78.139 port 36472 [preauth]
Jan 20 14:49:46 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:49:46.559 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[7d233925-a580-4b33-8212-3c73e4cff229]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0acabbdd-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1c:09:e1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 121], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 652725, 'reachable_time': 18915, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 312905, 'error': None, 'target': 'ovnmeta-0acabbdd-eec1-4793-b4fa-b8460f681925', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:49:46 compute-0 kernel: tap8286e975-4b (unregistering): left promiscuous mode
Jan 20 14:49:46 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:49:46.576 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[1a7c82cd-d32d-4332-808f-40d44f440c8a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe1c:9e1'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 652725, 'tstamp': 652725}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 312906, 'error': None, 'target': 'ovnmeta-0acabbdd-eec1-4793-b4fa-b8460f681925', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:49:46 compute-0 NetworkManager[48960]: <info>  [1768920586.5805] device (tap8286e975-4b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 14:49:46 compute-0 ovn_controller[148666]: 2026-01-20T14:49:46Z|00368|binding|INFO|Releasing lport 8286e975-4b57-4b5a-9018-82187a854a2d from this chassis (sb_readonly=0)
Jan 20 14:49:46 compute-0 nova_compute[250018]: 2026-01-20 14:49:46.589 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:49:46 compute-0 ovn_controller[148666]: 2026-01-20T14:49:46Z|00369|binding|INFO|Setting lport 8286e975-4b57-4b5a-9018-82187a854a2d down in Southbound
Jan 20 14:49:46 compute-0 ovn_controller[148666]: 2026-01-20T14:49:46Z|00370|binding|INFO|Removing iface tap8286e975-4b ovn-installed in OVS
Jan 20 14:49:46 compute-0 nova_compute[250018]: 2026-01-20 14:49:46.591 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:49:46 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:49:46.596 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:19:a9:8c 10.100.0.6'], port_security=['fa:16:3e:19:a9:8c 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '52477e64-7989-4aa2-88e1-31600bfae2ef', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3379e2b3-ffb2-4391-969b-c9dc51bfbe25', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'acb30fbc0e3749e390d7f867060b5a2a', 'neutron:revision_number': '4', 'neutron:security_group_ids': '19fab802-7db8-4c89-8f8e-8dcfc14d4627', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a0e287ba-f88b-46f5-bb7f-3cc2a74be88e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=8286e975-4b57-4b5a-9018-82187a854a2d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:49:46 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:49:46.604 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[387ede29-9770-4b6d-9c8c-5ed7a29fbcb0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0acabbdd-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1c:09:e1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 121], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 652725, 'reachable_time': 18915, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 312910, 'error': None, 'target': 'ovnmeta-0acabbdd-eec1-4793-b4fa-b8460f681925', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:49:46 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:49:46 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:49:46 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:49:46.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:49:46 compute-0 nova_compute[250018]: 2026-01-20 14:49:46.620 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:49:46 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:49:46.635 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[ea15cdfd-3cdf-4417-b6e9-2517be68f270]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:49:46 compute-0 systemd[1]: machine-qemu\x2d47\x2dinstance\x2d0000006a.scope: Deactivated successfully.
Jan 20 14:49:46 compute-0 systemd[1]: machine-qemu\x2d47\x2dinstance\x2d0000006a.scope: Consumed 14.022s CPU time.
Jan 20 14:49:46 compute-0 systemd-machined[216401]: Machine qemu-47-instance-0000006a terminated.
Jan 20 14:49:46 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:49:46 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:49:46 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:49:46.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:49:46 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:49:46.695 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[9d932035-c2f4-4ba3-86d6-f5c1a9967459]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:49:46 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:49:46.696 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0acabbdd-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:49:46 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:49:46.696 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:49:46 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:49:46.697 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0acabbdd-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:49:46 compute-0 nova_compute[250018]: 2026-01-20 14:49:46.698 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:49:46 compute-0 NetworkManager[48960]: <info>  [1768920586.6991] manager: (tap0acabbdd-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/190)
Jan 20 14:49:46 compute-0 kernel: tap0acabbdd-e0: entered promiscuous mode
Jan 20 14:49:46 compute-0 nova_compute[250018]: 2026-01-20 14:49:46.703 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:49:46 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:49:46.704 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0acabbdd-e0, col_values=(('external_ids', {'iface-id': 'aefdf453-c871-4d7d-9a1d-17e0b3f32f9e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:49:46 compute-0 ovn_controller[148666]: 2026-01-20T14:49:46Z|00371|binding|INFO|Releasing lport aefdf453-c871-4d7d-9a1d-17e0b3f32f9e from this chassis (sb_readonly=0)
Jan 20 14:49:46 compute-0 nova_compute[250018]: 2026-01-20 14:49:46.706 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:49:46 compute-0 nova_compute[250018]: 2026-01-20 14:49:46.721 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:49:46 compute-0 nova_compute[250018]: 2026-01-20 14:49:46.726 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:49:46 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:49:46.726 160071 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/0acabbdd-eec1-4793-b4fa-b8460f681925.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/0acabbdd-eec1-4793-b4fa-b8460f681925.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 14:49:46 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:49:46.727 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[b08b14d2-5992-4b61-8b71-f8960e6610da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:49:46 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:49:46.728 160071 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 14:49:46 compute-0 ovn_metadata_agent[160049]: global
Jan 20 14:49:46 compute-0 ovn_metadata_agent[160049]:     log         /dev/log local0 debug
Jan 20 14:49:46 compute-0 ovn_metadata_agent[160049]:     log-tag     haproxy-metadata-proxy-0acabbdd-eec1-4793-b4fa-b8460f681925
Jan 20 14:49:46 compute-0 ovn_metadata_agent[160049]:     user        root
Jan 20 14:49:46 compute-0 ovn_metadata_agent[160049]:     group       root
Jan 20 14:49:46 compute-0 ovn_metadata_agent[160049]:     maxconn     1024
Jan 20 14:49:46 compute-0 ovn_metadata_agent[160049]:     pidfile     /var/lib/neutron/external/pids/0acabbdd-eec1-4793-b4fa-b8460f681925.pid.haproxy
Jan 20 14:49:46 compute-0 ovn_metadata_agent[160049]:     daemon
Jan 20 14:49:46 compute-0 ovn_metadata_agent[160049]: 
Jan 20 14:49:46 compute-0 ovn_metadata_agent[160049]: defaults
Jan 20 14:49:46 compute-0 ovn_metadata_agent[160049]:     log global
Jan 20 14:49:46 compute-0 ovn_metadata_agent[160049]:     mode http
Jan 20 14:49:46 compute-0 ovn_metadata_agent[160049]:     option httplog
Jan 20 14:49:46 compute-0 ovn_metadata_agent[160049]:     option dontlognull
Jan 20 14:49:46 compute-0 ovn_metadata_agent[160049]:     option http-server-close
Jan 20 14:49:46 compute-0 ovn_metadata_agent[160049]:     option forwardfor
Jan 20 14:49:46 compute-0 ovn_metadata_agent[160049]:     retries                 3
Jan 20 14:49:46 compute-0 ovn_metadata_agent[160049]:     timeout http-request    30s
Jan 20 14:49:46 compute-0 ovn_metadata_agent[160049]:     timeout connect         30s
Jan 20 14:49:46 compute-0 ovn_metadata_agent[160049]:     timeout client          32s
Jan 20 14:49:46 compute-0 ovn_metadata_agent[160049]:     timeout server          32s
Jan 20 14:49:46 compute-0 ovn_metadata_agent[160049]:     timeout http-keep-alive 30s
Jan 20 14:49:46 compute-0 ovn_metadata_agent[160049]: 
Jan 20 14:49:46 compute-0 ovn_metadata_agent[160049]: 
Jan 20 14:49:46 compute-0 ovn_metadata_agent[160049]: listen listener
Jan 20 14:49:46 compute-0 ovn_metadata_agent[160049]:     bind 169.254.169.254:80
Jan 20 14:49:46 compute-0 ovn_metadata_agent[160049]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 14:49:46 compute-0 ovn_metadata_agent[160049]:     http-request add-header X-OVN-Network-ID 0acabbdd-eec1-4793-b4fa-b8460f681925
Jan 20 14:49:46 compute-0 ovn_metadata_agent[160049]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 14:49:46 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:49:46.729 160071 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-0acabbdd-eec1-4793-b4fa-b8460f681925', 'env', 'PROCESS_TAG=haproxy-0acabbdd-eec1-4793-b4fa-b8460f681925', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/0acabbdd-eec1-4793-b4fa-b8460f681925.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 14:49:46 compute-0 NetworkManager[48960]: <info>  [1768920586.8029] manager: (tap8286e975-4b): new Tun device (/org/freedesktop/NetworkManager/Devices/191)
Jan 20 14:49:46 compute-0 nova_compute[250018]: 2026-01-20 14:49:46.820 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768920586.819287, fc3bdf75-942a-4d44-b8eb-3d83f787fad1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:49:46 compute-0 nova_compute[250018]: 2026-01-20 14:49:46.820 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: fc3bdf75-942a-4d44-b8eb-3d83f787fad1] VM Started (Lifecycle Event)
Jan 20 14:49:46 compute-0 nova_compute[250018]: 2026-01-20 14:49:46.846 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: fc3bdf75-942a-4d44-b8eb-3d83f787fad1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:49:46 compute-0 nova_compute[250018]: 2026-01-20 14:49:46.854 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768920586.8193202, fc3bdf75-942a-4d44-b8eb-3d83f787fad1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:49:46 compute-0 nova_compute[250018]: 2026-01-20 14:49:46.855 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: fc3bdf75-942a-4d44-b8eb-3d83f787fad1] VM Paused (Lifecycle Event)
Jan 20 14:49:46 compute-0 nova_compute[250018]: 2026-01-20 14:49:46.890 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: fc3bdf75-942a-4d44-b8eb-3d83f787fad1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:49:46 compute-0 nova_compute[250018]: 2026-01-20 14:49:46.907 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: fc3bdf75-942a-4d44-b8eb-3d83f787fad1] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:49:46 compute-0 nova_compute[250018]: 2026-01-20 14:49:46.955 250022 DEBUG nova.compute.manager [req-f0c3f036-6ce6-411b-9d84-a6774e7a075e req-75e29d9b-ba75-4a10-a776-13db0984508a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: fc3bdf75-942a-4d44-b8eb-3d83f787fad1] Received event network-vif-plugged-2186c94a-fb28-491f-9215-dd9a32546223 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:49:46 compute-0 nova_compute[250018]: 2026-01-20 14:49:46.956 250022 DEBUG oslo_concurrency.lockutils [req-f0c3f036-6ce6-411b-9d84-a6774e7a075e req-75e29d9b-ba75-4a10-a776-13db0984508a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "fc3bdf75-942a-4d44-b8eb-3d83f787fad1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:49:46 compute-0 nova_compute[250018]: 2026-01-20 14:49:46.957 250022 DEBUG oslo_concurrency.lockutils [req-f0c3f036-6ce6-411b-9d84-a6774e7a075e req-75e29d9b-ba75-4a10-a776-13db0984508a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "fc3bdf75-942a-4d44-b8eb-3d83f787fad1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:49:46 compute-0 nova_compute[250018]: 2026-01-20 14:49:46.957 250022 DEBUG oslo_concurrency.lockutils [req-f0c3f036-6ce6-411b-9d84-a6774e7a075e req-75e29d9b-ba75-4a10-a776-13db0984508a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "fc3bdf75-942a-4d44-b8eb-3d83f787fad1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:49:46 compute-0 nova_compute[250018]: 2026-01-20 14:49:46.957 250022 DEBUG nova.compute.manager [req-f0c3f036-6ce6-411b-9d84-a6774e7a075e req-75e29d9b-ba75-4a10-a776-13db0984508a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: fc3bdf75-942a-4d44-b8eb-3d83f787fad1] Processing event network-vif-plugged-2186c94a-fb28-491f-9215-dd9a32546223 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 20 14:49:46 compute-0 nova_compute[250018]: 2026-01-20 14:49:46.958 250022 DEBUG nova.compute.manager [None req-21da2739-0d7f-4dd9-8b17-cc91d1fed7ab 38ef13d691534d06a5110be95454010f a08a6f2a8aee493980fd658fae9e7fb4 - - default default] [instance: fc3bdf75-942a-4d44-b8eb-3d83f787fad1] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 14:49:46 compute-0 nova_compute[250018]: 2026-01-20 14:49:46.961 250022 DEBUG nova.virt.libvirt.driver [None req-21da2739-0d7f-4dd9-8b17-cc91d1fed7ab 38ef13d691534d06a5110be95454010f a08a6f2a8aee493980fd658fae9e7fb4 - - default default] [instance: fc3bdf75-942a-4d44-b8eb-3d83f787fad1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 14:49:46 compute-0 nova_compute[250018]: 2026-01-20 14:49:46.964 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: fc3bdf75-942a-4d44-b8eb-3d83f787fad1] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:49:46 compute-0 nova_compute[250018]: 2026-01-20 14:49:46.964 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768920586.9610097, fc3bdf75-942a-4d44-b8eb-3d83f787fad1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:49:46 compute-0 nova_compute[250018]: 2026-01-20 14:49:46.964 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: fc3bdf75-942a-4d44-b8eb-3d83f787fad1] VM Resumed (Lifecycle Event)
Jan 20 14:49:46 compute-0 nova_compute[250018]: 2026-01-20 14:49:46.968 250022 INFO nova.virt.libvirt.driver [-] [instance: fc3bdf75-942a-4d44-b8eb-3d83f787fad1] Instance spawned successfully.
Jan 20 14:49:46 compute-0 nova_compute[250018]: 2026-01-20 14:49:46.968 250022 DEBUG nova.virt.libvirt.driver [None req-21da2739-0d7f-4dd9-8b17-cc91d1fed7ab 38ef13d691534d06a5110be95454010f a08a6f2a8aee493980fd658fae9e7fb4 - - default default] [instance: fc3bdf75-942a-4d44-b8eb-3d83f787fad1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 14:49:46 compute-0 nova_compute[250018]: 2026-01-20 14:49:46.986 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: fc3bdf75-942a-4d44-b8eb-3d83f787fad1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:49:46 compute-0 nova_compute[250018]: 2026-01-20 14:49:46.991 250022 DEBUG nova.virt.libvirt.driver [None req-21da2739-0d7f-4dd9-8b17-cc91d1fed7ab 38ef13d691534d06a5110be95454010f a08a6f2a8aee493980fd658fae9e7fb4 - - default default] [instance: fc3bdf75-942a-4d44-b8eb-3d83f787fad1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:49:46 compute-0 nova_compute[250018]: 2026-01-20 14:49:46.992 250022 DEBUG nova.virt.libvirt.driver [None req-21da2739-0d7f-4dd9-8b17-cc91d1fed7ab 38ef13d691534d06a5110be95454010f a08a6f2a8aee493980fd658fae9e7fb4 - - default default] [instance: fc3bdf75-942a-4d44-b8eb-3d83f787fad1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:49:46 compute-0 nova_compute[250018]: 2026-01-20 14:49:46.992 250022 DEBUG nova.virt.libvirt.driver [None req-21da2739-0d7f-4dd9-8b17-cc91d1fed7ab 38ef13d691534d06a5110be95454010f a08a6f2a8aee493980fd658fae9e7fb4 - - default default] [instance: fc3bdf75-942a-4d44-b8eb-3d83f787fad1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:49:46 compute-0 nova_compute[250018]: 2026-01-20 14:49:46.993 250022 DEBUG nova.virt.libvirt.driver [None req-21da2739-0d7f-4dd9-8b17-cc91d1fed7ab 38ef13d691534d06a5110be95454010f a08a6f2a8aee493980fd658fae9e7fb4 - - default default] [instance: fc3bdf75-942a-4d44-b8eb-3d83f787fad1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:49:46 compute-0 nova_compute[250018]: 2026-01-20 14:49:46.993 250022 DEBUG nova.virt.libvirt.driver [None req-21da2739-0d7f-4dd9-8b17-cc91d1fed7ab 38ef13d691534d06a5110be95454010f a08a6f2a8aee493980fd658fae9e7fb4 - - default default] [instance: fc3bdf75-942a-4d44-b8eb-3d83f787fad1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:49:46 compute-0 nova_compute[250018]: 2026-01-20 14:49:46.994 250022 DEBUG nova.virt.libvirt.driver [None req-21da2739-0d7f-4dd9-8b17-cc91d1fed7ab 38ef13d691534d06a5110be95454010f a08a6f2a8aee493980fd658fae9e7fb4 - - default default] [instance: fc3bdf75-942a-4d44-b8eb-3d83f787fad1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:49:46 compute-0 nova_compute[250018]: 2026-01-20 14:49:46.998 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: fc3bdf75-942a-4d44-b8eb-3d83f787fad1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:49:47 compute-0 nova_compute[250018]: 2026-01-20 14:49:47.021 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: fc3bdf75-942a-4d44-b8eb-3d83f787fad1] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:49:47 compute-0 nova_compute[250018]: 2026-01-20 14:49:47.048 250022 INFO nova.compute.manager [None req-21da2739-0d7f-4dd9-8b17-cc91d1fed7ab 38ef13d691534d06a5110be95454010f a08a6f2a8aee493980fd658fae9e7fb4 - - default default] [instance: fc3bdf75-942a-4d44-b8eb-3d83f787fad1] Took 12.93 seconds to spawn the instance on the hypervisor.
Jan 20 14:49:47 compute-0 nova_compute[250018]: 2026-01-20 14:49:47.049 250022 DEBUG nova.compute.manager [None req-21da2739-0d7f-4dd9-8b17-cc91d1fed7ab 38ef13d691534d06a5110be95454010f a08a6f2a8aee493980fd658fae9e7fb4 - - default default] [instance: fc3bdf75-942a-4d44-b8eb-3d83f787fad1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:49:47 compute-0 podman[312997]: 2026-01-20 14:49:47.093038677 +0000 UTC m=+0.057158129 container create af46b8d0e202b5f86b2bc1e16358454647c49162b682a7b6c2df3a0811b60fd5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0acabbdd-eec1-4793-b4fa-b8460f681925, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Jan 20 14:49:47 compute-0 nova_compute[250018]: 2026-01-20 14:49:47.115 250022 INFO nova.compute.manager [None req-21da2739-0d7f-4dd9-8b17-cc91d1fed7ab 38ef13d691534d06a5110be95454010f a08a6f2a8aee493980fd658fae9e7fb4 - - default default] [instance: fc3bdf75-942a-4d44-b8eb-3d83f787fad1] Took 14.40 seconds to build instance.
Jan 20 14:49:47 compute-0 nova_compute[250018]: 2026-01-20 14:49:47.130 250022 DEBUG oslo_concurrency.lockutils [None req-21da2739-0d7f-4dd9-8b17-cc91d1fed7ab 38ef13d691534d06a5110be95454010f a08a6f2a8aee493980fd658fae9e7fb4 - - default default] Lock "fc3bdf75-942a-4d44-b8eb-3d83f787fad1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.477s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:49:47 compute-0 nova_compute[250018]: 2026-01-20 14:49:47.131 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "fc3bdf75-942a-4d44-b8eb-3d83f787fad1" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 14.214s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:49:47 compute-0 systemd[1]: Started libpod-conmon-af46b8d0e202b5f86b2bc1e16358454647c49162b682a7b6c2df3a0811b60fd5.scope.
Jan 20 14:49:47 compute-0 nova_compute[250018]: 2026-01-20 14:49:47.159 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "fc3bdf75-942a-4d44-b8eb-3d83f787fad1" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.028s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:49:47 compute-0 podman[312997]: 2026-01-20 14:49:47.066480462 +0000 UTC m=+0.030599934 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 14:49:47 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:49:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acbd18d6932a0b2acd6c0d25d14ea7a16d4fa04d344f7255da7f2eb2fcb1e71a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 14:49:47 compute-0 podman[312997]: 2026-01-20 14:49:47.190960073 +0000 UTC m=+0.155079545 container init af46b8d0e202b5f86b2bc1e16358454647c49162b682a7b6c2df3a0811b60fd5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0acabbdd-eec1-4793-b4fa-b8460f681925, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Jan 20 14:49:47 compute-0 podman[312997]: 2026-01-20 14:49:47.196173162 +0000 UTC m=+0.160292604 container start af46b8d0e202b5f86b2bc1e16358454647c49162b682a7b6c2df3a0811b60fd5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0acabbdd-eec1-4793-b4fa-b8460f681925, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 20 14:49:47 compute-0 neutron-haproxy-ovnmeta-0acabbdd-eec1-4793-b4fa-b8460f681925[313012]: [NOTICE]   (313016) : New worker (313018) forked
Jan 20 14:49:47 compute-0 neutron-haproxy-ovnmeta-0acabbdd-eec1-4793-b4fa-b8460f681925[313012]: [NOTICE]   (313016) : Loading success.
Jan 20 14:49:47 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:49:47.257 160071 INFO neutron.agent.ovn.metadata.agent [-] Port 8286e975-4b57-4b5a-9018-82187a854a2d in datapath 3379e2b3-ffb2-4391-969b-c9dc51bfbe25 unbound from our chassis
Jan 20 14:49:47 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:49:47.258 160071 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3379e2b3-ffb2-4391-969b-c9dc51bfbe25, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 14:49:47 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:49:47.259 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[b299c13f-078f-4754-bece-edc76617cb08]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:49:47 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:49:47.260 160071 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25 namespace which is not needed anymore
Jan 20 14:49:47 compute-0 nova_compute[250018]: 2026-01-20 14:49:47.298 250022 INFO nova.virt.libvirt.driver [None req-daed7e12-e146-4686-9444-14f2e75c6ad9 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] Instance shutdown successfully after 3 seconds.
Jan 20 14:49:47 compute-0 nova_compute[250018]: 2026-01-20 14:49:47.303 250022 INFO nova.virt.libvirt.driver [-] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] Instance destroyed successfully.
Jan 20 14:49:47 compute-0 nova_compute[250018]: 2026-01-20 14:49:47.304 250022 DEBUG nova.virt.libvirt.vif [None req-daed7e12-e146-4686-9444-14f2e75c6ad9 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:49:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1663251192',display_name='tempest-ServerDiskConfigTestJSON-server-1663251192',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1663251192',id=106,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:49:28Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='acb30fbc0e3749e390d7f867060b5a2a',ramdisk_id='',reservation_id='r-nykd0j3m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerDiskConfigTestJSON-1806346246',owner_user_name='tempest-ServerDiskConfigTestJSON-1806346246-project-member'},tags=<?>,task_state='resize_migrating',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T14:49:39Z,user_data=None,user_id='a1bd93d04cc4468abe1d5c61f5144191',uuid=52477e64-7989-4aa2-88e1-31600bfae2ef,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8286e975-4b57-4b5a-9018-82187a854a2d", "address": "fa:16:3e:19:a9:8c", "network": {"id": "3379e2b3-ffb2-4391-969b-c9dc51bfbe25", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1112843240-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerDiskConfigTestJSON-1112843240-network", "vif_mac": "fa:16:3e:19:a9:8c"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acb30fbc0e3749e390d7f867060b5a2a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8286e975-4b", "ovs_interfaceid": "8286e975-4b57-4b5a-9018-82187a854a2d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 14:49:47 compute-0 nova_compute[250018]: 2026-01-20 14:49:47.304 250022 DEBUG nova.network.os_vif_util [None req-daed7e12-e146-4686-9444-14f2e75c6ad9 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Converting VIF {"id": "8286e975-4b57-4b5a-9018-82187a854a2d", "address": "fa:16:3e:19:a9:8c", "network": {"id": "3379e2b3-ffb2-4391-969b-c9dc51bfbe25", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1112843240-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerDiskConfigTestJSON-1112843240-network", "vif_mac": "fa:16:3e:19:a9:8c"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acb30fbc0e3749e390d7f867060b5a2a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8286e975-4b", "ovs_interfaceid": "8286e975-4b57-4b5a-9018-82187a854a2d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:49:47 compute-0 nova_compute[250018]: 2026-01-20 14:49:47.305 250022 DEBUG nova.network.os_vif_util [None req-daed7e12-e146-4686-9444-14f2e75c6ad9 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:19:a9:8c,bridge_name='br-int',has_traffic_filtering=True,id=8286e975-4b57-4b5a-9018-82187a854a2d,network=Network(3379e2b3-ffb2-4391-969b-c9dc51bfbe25),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8286e975-4b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:49:47 compute-0 nova_compute[250018]: 2026-01-20 14:49:47.305 250022 DEBUG os_vif [None req-daed7e12-e146-4686-9444-14f2e75c6ad9 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:19:a9:8c,bridge_name='br-int',has_traffic_filtering=True,id=8286e975-4b57-4b5a-9018-82187a854a2d,network=Network(3379e2b3-ffb2-4391-969b-c9dc51bfbe25),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8286e975-4b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 14:49:47 compute-0 nova_compute[250018]: 2026-01-20 14:49:47.306 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:49:47 compute-0 nova_compute[250018]: 2026-01-20 14:49:47.307 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8286e975-4b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:49:47 compute-0 nova_compute[250018]: 2026-01-20 14:49:47.364 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:49:47 compute-0 nova_compute[250018]: 2026-01-20 14:49:47.365 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:49:47 compute-0 nova_compute[250018]: 2026-01-20 14:49:47.367 250022 INFO os_vif [None req-daed7e12-e146-4686-9444-14f2e75c6ad9 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:19:a9:8c,bridge_name='br-int',has_traffic_filtering=True,id=8286e975-4b57-4b5a-9018-82187a854a2d,network=Network(3379e2b3-ffb2-4391-969b-c9dc51bfbe25),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8286e975-4b')
Jan 20 14:49:47 compute-0 nova_compute[250018]: 2026-01-20 14:49:47.370 250022 DEBUG nova.virt.libvirt.driver [None req-daed7e12-e146-4686-9444-14f2e75c6ad9 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] skipping disk for instance-0000006a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:49:47 compute-0 nova_compute[250018]: 2026-01-20 14:49:47.371 250022 DEBUG nova.virt.libvirt.driver [None req-daed7e12-e146-4686-9444-14f2e75c6ad9 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] skipping disk for instance-0000006a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:49:47 compute-0 neutron-haproxy-ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25[311531]: [NOTICE]   (311535) : haproxy version is 2.8.14-c23fe91
Jan 20 14:49:47 compute-0 neutron-haproxy-ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25[311531]: [NOTICE]   (311535) : path to executable is /usr/sbin/haproxy
Jan 20 14:49:47 compute-0 neutron-haproxy-ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25[311531]: [WARNING]  (311535) : Exiting Master process...
Jan 20 14:49:47 compute-0 neutron-haproxy-ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25[311531]: [ALERT]    (311535) : Current worker (311537) exited with code 143 (Terminated)
Jan 20 14:49:47 compute-0 neutron-haproxy-ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25[311531]: [WARNING]  (311535) : All workers exited. Exiting... (0)
Jan 20 14:49:47 compute-0 systemd[1]: libpod-a766701dec35d1b08742d192f283c2362f39c243cb6a5457db8b75b47cac69b3.scope: Deactivated successfully.
Jan 20 14:49:47 compute-0 podman[313045]: 2026-01-20 14:49:47.424486617 +0000 UTC m=+0.042805763 container died a766701dec35d1b08742d192f283c2362f39c243cb6a5457db8b75b47cac69b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:49:47 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a766701dec35d1b08742d192f283c2362f39c243cb6a5457db8b75b47cac69b3-userdata-shm.mount: Deactivated successfully.
Jan 20 14:49:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-f2b44a806abe6f789351602e4b7da8f61db2188a7dd4c8fb9bec75342c50d71f-merged.mount: Deactivated successfully.
Jan 20 14:49:47 compute-0 podman[313045]: 2026-01-20 14:49:47.457248729 +0000 UTC m=+0.075567875 container cleanup a766701dec35d1b08742d192f283c2362f39c243cb6a5457db8b75b47cac69b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202)
Jan 20 14:49:47 compute-0 systemd[1]: libpod-conmon-a766701dec35d1b08742d192f283c2362f39c243cb6a5457db8b75b47cac69b3.scope: Deactivated successfully.
Jan 20 14:49:47 compute-0 podman[313074]: 2026-01-20 14:49:47.516619746 +0000 UTC m=+0.039395971 container remove a766701dec35d1b08742d192f283c2362f39c243cb6a5457db8b75b47cac69b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Jan 20 14:49:47 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:49:47.522 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[3b9c4b3e-d445-48b5-a3f8-2f224d7fb7a0]: (4, ('Tue Jan 20 02:49:47 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25 (a766701dec35d1b08742d192f283c2362f39c243cb6a5457db8b75b47cac69b3)\na766701dec35d1b08742d192f283c2362f39c243cb6a5457db8b75b47cac69b3\nTue Jan 20 02:49:47 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25 (a766701dec35d1b08742d192f283c2362f39c243cb6a5457db8b75b47cac69b3)\na766701dec35d1b08742d192f283c2362f39c243cb6a5457db8b75b47cac69b3\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:49:47 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:49:47.524 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[8a1703a1-c5cd-4141-b0bb-4d09f20268ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:49:47 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:49:47.525 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3379e2b3-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:49:47 compute-0 nova_compute[250018]: 2026-01-20 14:49:47.526 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:49:47 compute-0 kernel: tap3379e2b3-f0: left promiscuous mode
Jan 20 14:49:47 compute-0 ceph-mon[74360]: pgmap v1854: 321 pgs: 321 active+clean; 290 MiB data, 919 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 6.0 MiB/s wr, 190 op/s
Jan 20 14:49:47 compute-0 nova_compute[250018]: 2026-01-20 14:49:47.543 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:49:47 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:49:47.546 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[922091f6-b972-48b2-a7b4-4b5332509892]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:49:47 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:49:47.558 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[ee90b784-502a-488c-b5ba-c15ddf1e4b9b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:49:47 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:49:47.559 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[7e180733-f65e-471d-b811-cef59a88c18a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:49:47 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:49:47.578 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[417f2424-f66b-4fa7-8411-ebdffc52e523]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 650840, 'reachable_time': 39596, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 313090, 'error': None, 'target': 'ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:49:47 compute-0 systemd[1]: run-netns-ovnmeta\x2d3379e2b3\x2dffb2\x2d4391\x2d969b\x2dc9dc51bfbe25.mount: Deactivated successfully.
Jan 20 14:49:47 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:49:47.580 160517 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 14:49:47 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:49:47.581 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[3cf942a7-7372-4381-9c2f-aad1747a52b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:49:47 compute-0 nova_compute[250018]: 2026-01-20 14:49:47.819 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:49:47 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:49:47.820 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=34, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=33) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:49:47 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:49:47.821 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 14:49:47 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1855: 321 pgs: 321 active+clean; 293 MiB data, 928 MiB used, 20 GiB / 21 GiB avail; 640 KiB/s rd, 5.1 MiB/s wr, 153 op/s
Jan 20 14:49:48 compute-0 nova_compute[250018]: 2026-01-20 14:49:48.064 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:49:48 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:49:48 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:49:48 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:49:48.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:49:48 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:49:48 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:49:48 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:49:48.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:49:48 compute-0 nova_compute[250018]: 2026-01-20 14:49:48.861 250022 DEBUG neutronclient.v2_0.client [None req-daed7e12-e146-4686-9444-14f2e75c6ad9 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Error message: {"NeutronError": {"type": "PortBindingNotFound", "message": "Binding for port 8286e975-4b57-4b5a-9018-82187a854a2d for host compute-1.ctlplane.example.com could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262
Jan 20 14:49:48 compute-0 nova_compute[250018]: 2026-01-20 14:49:48.956 250022 DEBUG oslo_concurrency.lockutils [None req-daed7e12-e146-4686-9444-14f2e75c6ad9 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Acquiring lock "52477e64-7989-4aa2-88e1-31600bfae2ef-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:49:48 compute-0 nova_compute[250018]: 2026-01-20 14:49:48.957 250022 DEBUG oslo_concurrency.lockutils [None req-daed7e12-e146-4686-9444-14f2e75c6ad9 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lock "52477e64-7989-4aa2-88e1-31600bfae2ef-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:49:48 compute-0 nova_compute[250018]: 2026-01-20 14:49:48.957 250022 DEBUG oslo_concurrency.lockutils [None req-daed7e12-e146-4686-9444-14f2e75c6ad9 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lock "52477e64-7989-4aa2-88e1-31600bfae2ef-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:49:48 compute-0 nova_compute[250018]: 2026-01-20 14:49:48.973 250022 DEBUG nova.network.neutron [req-d385a94c-0fbb-45d2-8619-0589ecab317c req-9c6c0dd1-d8e8-4780-8fd3-195a863e4930 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: fc3bdf75-942a-4d44-b8eb-3d83f787fad1] Updated VIF entry in instance network info cache for port 2186c94a-fb28-491f-9215-dd9a32546223. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 14:49:48 compute-0 nova_compute[250018]: 2026-01-20 14:49:48.973 250022 DEBUG nova.network.neutron [req-d385a94c-0fbb-45d2-8619-0589ecab317c req-9c6c0dd1-d8e8-4780-8fd3-195a863e4930 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: fc3bdf75-942a-4d44-b8eb-3d83f787fad1] Updating instance_info_cache with network_info: [{"id": "2186c94a-fb28-491f-9215-dd9a32546223", "address": "fa:16:3e:e1:5d:2b", "network": {"id": "0acabbdd-eec1-4793-b4fa-b8460f681925", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-1538335699-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a08a6f2a8aee493980fd658fae9e7fb4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2186c94a-fb", "ovs_interfaceid": "2186c94a-fb28-491f-9215-dd9a32546223", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:49:48 compute-0 nova_compute[250018]: 2026-01-20 14:49:48.998 250022 DEBUG oslo_concurrency.lockutils [req-d385a94c-0fbb-45d2-8619-0589ecab317c req-9c6c0dd1-d8e8-4780-8fd3-195a863e4930 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-fc3bdf75-942a-4d44-b8eb-3d83f787fad1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:49:49 compute-0 nova_compute[250018]: 2026-01-20 14:49:49.085 250022 DEBUG nova.compute.manager [req-ab956100-da64-4328-aa1c-51e8ba0af36c req-e0293cd6-6f0d-403c-84eb-63a433002bb7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: fc3bdf75-942a-4d44-b8eb-3d83f787fad1] Received event network-vif-plugged-2186c94a-fb28-491f-9215-dd9a32546223 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:49:49 compute-0 nova_compute[250018]: 2026-01-20 14:49:49.085 250022 DEBUG oslo_concurrency.lockutils [req-ab956100-da64-4328-aa1c-51e8ba0af36c req-e0293cd6-6f0d-403c-84eb-63a433002bb7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "fc3bdf75-942a-4d44-b8eb-3d83f787fad1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:49:49 compute-0 nova_compute[250018]: 2026-01-20 14:49:49.085 250022 DEBUG oslo_concurrency.lockutils [req-ab956100-da64-4328-aa1c-51e8ba0af36c req-e0293cd6-6f0d-403c-84eb-63a433002bb7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "fc3bdf75-942a-4d44-b8eb-3d83f787fad1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:49:49 compute-0 nova_compute[250018]: 2026-01-20 14:49:49.086 250022 DEBUG oslo_concurrency.lockutils [req-ab956100-da64-4328-aa1c-51e8ba0af36c req-e0293cd6-6f0d-403c-84eb-63a433002bb7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "fc3bdf75-942a-4d44-b8eb-3d83f787fad1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:49:49 compute-0 nova_compute[250018]: 2026-01-20 14:49:49.086 250022 DEBUG nova.compute.manager [req-ab956100-da64-4328-aa1c-51e8ba0af36c req-e0293cd6-6f0d-403c-84eb-63a433002bb7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: fc3bdf75-942a-4d44-b8eb-3d83f787fad1] No waiting events found dispatching network-vif-plugged-2186c94a-fb28-491f-9215-dd9a32546223 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:49:49 compute-0 nova_compute[250018]: 2026-01-20 14:49:49.086 250022 WARNING nova.compute.manager [req-ab956100-da64-4328-aa1c-51e8ba0af36c req-e0293cd6-6f0d-403c-84eb-63a433002bb7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: fc3bdf75-942a-4d44-b8eb-3d83f787fad1] Received unexpected event network-vif-plugged-2186c94a-fb28-491f-9215-dd9a32546223 for instance with vm_state active and task_state None.
Jan 20 14:49:49 compute-0 nova_compute[250018]: 2026-01-20 14:49:49.086 250022 DEBUG nova.compute.manager [req-ab956100-da64-4328-aa1c-51e8ba0af36c req-e0293cd6-6f0d-403c-84eb-63a433002bb7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] Received event network-vif-unplugged-8286e975-4b57-4b5a-9018-82187a854a2d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:49:49 compute-0 nova_compute[250018]: 2026-01-20 14:49:49.086 250022 DEBUG oslo_concurrency.lockutils [req-ab956100-da64-4328-aa1c-51e8ba0af36c req-e0293cd6-6f0d-403c-84eb-63a433002bb7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "52477e64-7989-4aa2-88e1-31600bfae2ef-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:49:49 compute-0 nova_compute[250018]: 2026-01-20 14:49:49.087 250022 DEBUG oslo_concurrency.lockutils [req-ab956100-da64-4328-aa1c-51e8ba0af36c req-e0293cd6-6f0d-403c-84eb-63a433002bb7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "52477e64-7989-4aa2-88e1-31600bfae2ef-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:49:49 compute-0 nova_compute[250018]: 2026-01-20 14:49:49.087 250022 DEBUG oslo_concurrency.lockutils [req-ab956100-da64-4328-aa1c-51e8ba0af36c req-e0293cd6-6f0d-403c-84eb-63a433002bb7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "52477e64-7989-4aa2-88e1-31600bfae2ef-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:49:49 compute-0 nova_compute[250018]: 2026-01-20 14:49:49.087 250022 DEBUG nova.compute.manager [req-ab956100-da64-4328-aa1c-51e8ba0af36c req-e0293cd6-6f0d-403c-84eb-63a433002bb7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] No waiting events found dispatching network-vif-unplugged-8286e975-4b57-4b5a-9018-82187a854a2d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:49:49 compute-0 nova_compute[250018]: 2026-01-20 14:49:49.087 250022 WARNING nova.compute.manager [req-ab956100-da64-4328-aa1c-51e8ba0af36c req-e0293cd6-6f0d-403c-84eb-63a433002bb7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] Received unexpected event network-vif-unplugged-8286e975-4b57-4b5a-9018-82187a854a2d for instance with vm_state active and task_state resize_migrated.
Jan 20 14:49:49 compute-0 sudo[313092]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:49:49 compute-0 sudo[313092]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:49:49 compute-0 sudo[313092]: pam_unix(sudo:session): session closed for user root
Jan 20 14:49:49 compute-0 ceph-mon[74360]: pgmap v1855: 321 pgs: 321 active+clean; 293 MiB data, 928 MiB used, 20 GiB / 21 GiB avail; 640 KiB/s rd, 5.1 MiB/s wr, 153 op/s
Jan 20 14:49:49 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e244 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:49:49 compute-0 nova_compute[250018]: 2026-01-20 14:49:49.568 250022 DEBUG nova.network.neutron [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] Updating instance_info_cache with network_info: [{"id": "8286e975-4b57-4b5a-9018-82187a854a2d", "address": "fa:16:3e:19:a9:8c", "network": {"id": "3379e2b3-ffb2-4391-969b-c9dc51bfbe25", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1112843240-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acb30fbc0e3749e390d7f867060b5a2a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8286e975-4b", "ovs_interfaceid": "8286e975-4b57-4b5a-9018-82187a854a2d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:49:49 compute-0 sudo[313117]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:49:49 compute-0 sudo[313117]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:49:49 compute-0 sudo[313117]: pam_unix(sudo:session): session closed for user root
Jan 20 14:49:49 compute-0 nova_compute[250018]: 2026-01-20 14:49:49.589 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Releasing lock "refresh_cache-52477e64-7989-4aa2-88e1-31600bfae2ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:49:49 compute-0 nova_compute[250018]: 2026-01-20 14:49:49.590 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 20 14:49:49 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1856: 321 pgs: 321 active+clean; 293 MiB data, 928 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 4.8 MiB/s wr, 157 op/s
Jan 20 14:49:50 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:49:50 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:49:50 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:49:50.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:49:50 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:49:50 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:49:50 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:49:50.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:49:51 compute-0 nova_compute[250018]: 2026-01-20 14:49:51.510 250022 DEBUG nova.compute.manager [req-3e6b9d77-46e1-46f6-8611-3612121dda9d req-a1d4fdc6-a417-458e-a594-1368b2e84d06 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] Received event network-vif-plugged-8286e975-4b57-4b5a-9018-82187a854a2d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:49:51 compute-0 nova_compute[250018]: 2026-01-20 14:49:51.510 250022 DEBUG oslo_concurrency.lockutils [req-3e6b9d77-46e1-46f6-8611-3612121dda9d req-a1d4fdc6-a417-458e-a594-1368b2e84d06 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "52477e64-7989-4aa2-88e1-31600bfae2ef-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:49:51 compute-0 nova_compute[250018]: 2026-01-20 14:49:51.511 250022 DEBUG oslo_concurrency.lockutils [req-3e6b9d77-46e1-46f6-8611-3612121dda9d req-a1d4fdc6-a417-458e-a594-1368b2e84d06 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "52477e64-7989-4aa2-88e1-31600bfae2ef-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:49:51 compute-0 nova_compute[250018]: 2026-01-20 14:49:51.511 250022 DEBUG oslo_concurrency.lockutils [req-3e6b9d77-46e1-46f6-8611-3612121dda9d req-a1d4fdc6-a417-458e-a594-1368b2e84d06 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "52477e64-7989-4aa2-88e1-31600bfae2ef-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:49:51 compute-0 nova_compute[250018]: 2026-01-20 14:49:51.511 250022 DEBUG nova.compute.manager [req-3e6b9d77-46e1-46f6-8611-3612121dda9d req-a1d4fdc6-a417-458e-a594-1368b2e84d06 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] No waiting events found dispatching network-vif-plugged-8286e975-4b57-4b5a-9018-82187a854a2d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:49:51 compute-0 nova_compute[250018]: 2026-01-20 14:49:51.511 250022 WARNING nova.compute.manager [req-3e6b9d77-46e1-46f6-8611-3612121dda9d req-a1d4fdc6-a417-458e-a594-1368b2e84d06 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] Received unexpected event network-vif-plugged-8286e975-4b57-4b5a-9018-82187a854a2d for instance with vm_state active and task_state resize_migrated.
Jan 20 14:49:51 compute-0 ceph-mon[74360]: pgmap v1856: 321 pgs: 321 active+clean; 293 MiB data, 928 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 4.8 MiB/s wr, 157 op/s
Jan 20 14:49:51 compute-0 nova_compute[250018]: 2026-01-20 14:49:51.826 250022 DEBUG oslo_concurrency.lockutils [None req-d82b3572-0680-4d26-bd57-fa622ddac854 38ef13d691534d06a5110be95454010f a08a6f2a8aee493980fd658fae9e7fb4 - - default default] Acquiring lock "fc3bdf75-942a-4d44-b8eb-3d83f787fad1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:49:51 compute-0 nova_compute[250018]: 2026-01-20 14:49:51.827 250022 DEBUG oslo_concurrency.lockutils [None req-d82b3572-0680-4d26-bd57-fa622ddac854 38ef13d691534d06a5110be95454010f a08a6f2a8aee493980fd658fae9e7fb4 - - default default] Lock "fc3bdf75-942a-4d44-b8eb-3d83f787fad1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:49:51 compute-0 nova_compute[250018]: 2026-01-20 14:49:51.827 250022 DEBUG oslo_concurrency.lockutils [None req-d82b3572-0680-4d26-bd57-fa622ddac854 38ef13d691534d06a5110be95454010f a08a6f2a8aee493980fd658fae9e7fb4 - - default default] Acquiring lock "fc3bdf75-942a-4d44-b8eb-3d83f787fad1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:49:51 compute-0 nova_compute[250018]: 2026-01-20 14:49:51.828 250022 DEBUG oslo_concurrency.lockutils [None req-d82b3572-0680-4d26-bd57-fa622ddac854 38ef13d691534d06a5110be95454010f a08a6f2a8aee493980fd658fae9e7fb4 - - default default] Lock "fc3bdf75-942a-4d44-b8eb-3d83f787fad1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:49:51 compute-0 nova_compute[250018]: 2026-01-20 14:49:51.829 250022 DEBUG oslo_concurrency.lockutils [None req-d82b3572-0680-4d26-bd57-fa622ddac854 38ef13d691534d06a5110be95454010f a08a6f2a8aee493980fd658fae9e7fb4 - - default default] Lock "fc3bdf75-942a-4d44-b8eb-3d83f787fad1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:49:51 compute-0 nova_compute[250018]: 2026-01-20 14:49:51.830 250022 INFO nova.compute.manager [None req-d82b3572-0680-4d26-bd57-fa622ddac854 38ef13d691534d06a5110be95454010f a08a6f2a8aee493980fd658fae9e7fb4 - - default default] [instance: fc3bdf75-942a-4d44-b8eb-3d83f787fad1] Terminating instance
Jan 20 14:49:51 compute-0 nova_compute[250018]: 2026-01-20 14:49:51.832 250022 DEBUG nova.compute.manager [None req-d82b3572-0680-4d26-bd57-fa622ddac854 38ef13d691534d06a5110be95454010f a08a6f2a8aee493980fd658fae9e7fb4 - - default default] [instance: fc3bdf75-942a-4d44-b8eb-3d83f787fad1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 14:49:51 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1857: 321 pgs: 321 active+clean; 293 MiB data, 928 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 4.3 MiB/s wr, 193 op/s
Jan 20 14:49:51 compute-0 kernel: tap2186c94a-fb (unregistering): left promiscuous mode
Jan 20 14:49:51 compute-0 NetworkManager[48960]: <info>  [1768920591.8805] device (tap2186c94a-fb): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 14:49:51 compute-0 ovn_controller[148666]: 2026-01-20T14:49:51Z|00372|binding|INFO|Releasing lport 2186c94a-fb28-491f-9215-dd9a32546223 from this chassis (sb_readonly=0)
Jan 20 14:49:51 compute-0 nova_compute[250018]: 2026-01-20 14:49:51.887 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:49:51 compute-0 ovn_controller[148666]: 2026-01-20T14:49:51Z|00373|binding|INFO|Setting lport 2186c94a-fb28-491f-9215-dd9a32546223 down in Southbound
Jan 20 14:49:51 compute-0 ovn_controller[148666]: 2026-01-20T14:49:51Z|00374|binding|INFO|Removing iface tap2186c94a-fb ovn-installed in OVS
Jan 20 14:49:51 compute-0 nova_compute[250018]: 2026-01-20 14:49:51.889 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:49:51 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:49:51.893 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e1:5d:2b 10.100.0.7'], port_security=['fa:16:3e:e1:5d:2b 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'fc3bdf75-942a-4d44-b8eb-3d83f787fad1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0acabbdd-eec1-4793-b4fa-b8460f681925', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a08a6f2a8aee493980fd658fae9e7fb4', 'neutron:revision_number': '4', 'neutron:security_group_ids': '78fa914c-4283-4b9d-8d92-448d5293ceb3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4f3cbb7f-f345-4a5e-a790-2fbc3a5594f2, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=2186c94a-fb28-491f-9215-dd9a32546223) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:49:51 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:49:51.894 160071 INFO neutron.agent.ovn.metadata.agent [-] Port 2186c94a-fb28-491f-9215-dd9a32546223 in datapath 0acabbdd-eec1-4793-b4fa-b8460f681925 unbound from our chassis
Jan 20 14:49:51 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:49:51.896 160071 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0acabbdd-eec1-4793-b4fa-b8460f681925, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 14:49:51 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:49:51.896 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[a035cc49-78a5-42b0-aa70-c6c57f69ce60]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:49:51 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:49:51.897 160071 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-0acabbdd-eec1-4793-b4fa-b8460f681925 namespace which is not needed anymore
Jan 20 14:49:51 compute-0 nova_compute[250018]: 2026-01-20 14:49:51.962 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:49:51 compute-0 systemd[1]: machine-qemu\x2d48\x2dinstance\x2d0000006c.scope: Deactivated successfully.
Jan 20 14:49:51 compute-0 systemd[1]: machine-qemu\x2d48\x2dinstance\x2d0000006c.scope: Consumed 5.403s CPU time.
Jan 20 14:49:51 compute-0 systemd-machined[216401]: Machine qemu-48-instance-0000006c terminated.
Jan 20 14:49:52 compute-0 neutron-haproxy-ovnmeta-0acabbdd-eec1-4793-b4fa-b8460f681925[313012]: [NOTICE]   (313016) : haproxy version is 2.8.14-c23fe91
Jan 20 14:49:52 compute-0 neutron-haproxy-ovnmeta-0acabbdd-eec1-4793-b4fa-b8460f681925[313012]: [NOTICE]   (313016) : path to executable is /usr/sbin/haproxy
Jan 20 14:49:52 compute-0 neutron-haproxy-ovnmeta-0acabbdd-eec1-4793-b4fa-b8460f681925[313012]: [WARNING]  (313016) : Exiting Master process...
Jan 20 14:49:52 compute-0 neutron-haproxy-ovnmeta-0acabbdd-eec1-4793-b4fa-b8460f681925[313012]: [WARNING]  (313016) : Exiting Master process...
Jan 20 14:49:52 compute-0 neutron-haproxy-ovnmeta-0acabbdd-eec1-4793-b4fa-b8460f681925[313012]: [ALERT]    (313016) : Current worker (313018) exited with code 143 (Terminated)
Jan 20 14:49:52 compute-0 neutron-haproxy-ovnmeta-0acabbdd-eec1-4793-b4fa-b8460f681925[313012]: [WARNING]  (313016) : All workers exited. Exiting... (0)
Jan 20 14:49:52 compute-0 systemd[1]: libpod-af46b8d0e202b5f86b2bc1e16358454647c49162b682a7b6c2df3a0811b60fd5.scope: Deactivated successfully.
Jan 20 14:49:52 compute-0 nova_compute[250018]: 2026-01-20 14:49:52.066 250022 INFO nova.virt.libvirt.driver [-] [instance: fc3bdf75-942a-4d44-b8eb-3d83f787fad1] Instance destroyed successfully.
Jan 20 14:49:52 compute-0 nova_compute[250018]: 2026-01-20 14:49:52.067 250022 DEBUG nova.objects.instance [None req-d82b3572-0680-4d26-bd57-fa622ddac854 38ef13d691534d06a5110be95454010f a08a6f2a8aee493980fd658fae9e7fb4 - - default default] Lazy-loading 'resources' on Instance uuid fc3bdf75-942a-4d44-b8eb-3d83f787fad1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:49:52 compute-0 podman[313169]: 2026-01-20 14:49:52.067916461 +0000 UTC m=+0.045038223 container died af46b8d0e202b5f86b2bc1e16358454647c49162b682a7b6c2df3a0811b60fd5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0acabbdd-eec1-4793-b4fa-b8460f681925, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:49:52 compute-0 nova_compute[250018]: 2026-01-20 14:49:52.088 250022 DEBUG nova.virt.libvirt.vif [None req-d82b3572-0680-4d26-bd57-fa622ddac854 38ef13d691534d06a5110be95454010f a08a6f2a8aee493980fd658fae9e7fb4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:49:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerGroupTestJSON-server-1333128055',display_name='tempest-ServerGroupTestJSON-server-1333128055',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servergrouptestjson-server-1333128055',id=108,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:49:47Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a08a6f2a8aee493980fd658fae9e7fb4',ramdisk_id='',reservation_id='r-ejyc3vtj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk
='1',image_min_ram='0',owner_project_name='tempest-ServerGroupTestJSON-778624312',owner_user_name='tempest-ServerGroupTestJSON-778624312-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T14:49:47Z,user_data=None,user_id='38ef13d691534d06a5110be95454010f',uuid=fc3bdf75-942a-4d44-b8eb-3d83f787fad1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2186c94a-fb28-491f-9215-dd9a32546223", "address": "fa:16:3e:e1:5d:2b", "network": {"id": "0acabbdd-eec1-4793-b4fa-b8460f681925", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-1538335699-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a08a6f2a8aee493980fd658fae9e7fb4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2186c94a-fb", "ovs_interfaceid": "2186c94a-fb28-491f-9215-dd9a32546223", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 14:49:52 compute-0 nova_compute[250018]: 2026-01-20 14:49:52.089 250022 DEBUG nova.network.os_vif_util [None req-d82b3572-0680-4d26-bd57-fa622ddac854 38ef13d691534d06a5110be95454010f a08a6f2a8aee493980fd658fae9e7fb4 - - default default] Converting VIF {"id": "2186c94a-fb28-491f-9215-dd9a32546223", "address": "fa:16:3e:e1:5d:2b", "network": {"id": "0acabbdd-eec1-4793-b4fa-b8460f681925", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-1538335699-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a08a6f2a8aee493980fd658fae9e7fb4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2186c94a-fb", "ovs_interfaceid": "2186c94a-fb28-491f-9215-dd9a32546223", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:49:52 compute-0 nova_compute[250018]: 2026-01-20 14:49:52.089 250022 DEBUG nova.network.os_vif_util [None req-d82b3572-0680-4d26-bd57-fa622ddac854 38ef13d691534d06a5110be95454010f a08a6f2a8aee493980fd658fae9e7fb4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e1:5d:2b,bridge_name='br-int',has_traffic_filtering=True,id=2186c94a-fb28-491f-9215-dd9a32546223,network=Network(0acabbdd-eec1-4793-b4fa-b8460f681925),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2186c94a-fb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:49:52 compute-0 nova_compute[250018]: 2026-01-20 14:49:52.090 250022 DEBUG os_vif [None req-d82b3572-0680-4d26-bd57-fa622ddac854 38ef13d691534d06a5110be95454010f a08a6f2a8aee493980fd658fae9e7fb4 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e1:5d:2b,bridge_name='br-int',has_traffic_filtering=True,id=2186c94a-fb28-491f-9215-dd9a32546223,network=Network(0acabbdd-eec1-4793-b4fa-b8460f681925),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2186c94a-fb') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 14:49:52 compute-0 nova_compute[250018]: 2026-01-20 14:49:52.091 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:49:52 compute-0 nova_compute[250018]: 2026-01-20 14:49:52.092 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2186c94a-fb, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:49:52 compute-0 nova_compute[250018]: 2026-01-20 14:49:52.094 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:49:52 compute-0 nova_compute[250018]: 2026-01-20 14:49:52.096 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:49:52 compute-0 nova_compute[250018]: 2026-01-20 14:49:52.098 250022 INFO os_vif [None req-d82b3572-0680-4d26-bd57-fa622ddac854 38ef13d691534d06a5110be95454010f a08a6f2a8aee493980fd658fae9e7fb4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e1:5d:2b,bridge_name='br-int',has_traffic_filtering=True,id=2186c94a-fb28-491f-9215-dd9a32546223,network=Network(0acabbdd-eec1-4793-b4fa-b8460f681925),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2186c94a-fb')
Jan 20 14:49:52 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-af46b8d0e202b5f86b2bc1e16358454647c49162b682a7b6c2df3a0811b60fd5-userdata-shm.mount: Deactivated successfully.
Jan 20 14:49:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-acbd18d6932a0b2acd6c0d25d14ea7a16d4fa04d344f7255da7f2eb2fcb1e71a-merged.mount: Deactivated successfully.
Jan 20 14:49:52 compute-0 podman[313169]: 2026-01-20 14:49:52.107125337 +0000 UTC m=+0.084247079 container cleanup af46b8d0e202b5f86b2bc1e16358454647c49162b682a7b6c2df3a0811b60fd5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0acabbdd-eec1-4793-b4fa-b8460f681925, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:49:52 compute-0 systemd[1]: libpod-conmon-af46b8d0e202b5f86b2bc1e16358454647c49162b682a7b6c2df3a0811b60fd5.scope: Deactivated successfully.
Jan 20 14:49:52 compute-0 podman[313225]: 2026-01-20 14:49:52.177691865 +0000 UTC m=+0.048625810 container remove af46b8d0e202b5f86b2bc1e16358454647c49162b682a7b6c2df3a0811b60fd5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0acabbdd-eec1-4793-b4fa-b8460f681925, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 20 14:49:52 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:49:52.182 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[c194609b-0770-4543-b614-3a72011e18c7]: (4, ('Tue Jan 20 02:49:52 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-0acabbdd-eec1-4793-b4fa-b8460f681925 (af46b8d0e202b5f86b2bc1e16358454647c49162b682a7b6c2df3a0811b60fd5)\naf46b8d0e202b5f86b2bc1e16358454647c49162b682a7b6c2df3a0811b60fd5\nTue Jan 20 02:49:52 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-0acabbdd-eec1-4793-b4fa-b8460f681925 (af46b8d0e202b5f86b2bc1e16358454647c49162b682a7b6c2df3a0811b60fd5)\naf46b8d0e202b5f86b2bc1e16358454647c49162b682a7b6c2df3a0811b60fd5\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:49:52 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:49:52.184 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[c94ed30e-6ab3-4d8b-af02-f889d49d25a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:49:52 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:49:52.184 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0acabbdd-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:49:52 compute-0 nova_compute[250018]: 2026-01-20 14:49:52.185 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:49:52 compute-0 kernel: tap0acabbdd-e0: left promiscuous mode
Jan 20 14:49:52 compute-0 nova_compute[250018]: 2026-01-20 14:49:52.199 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:49:52 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:49:52.201 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[2b30d36d-df1b-487e-bd25-e4b1166e84e2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:49:52 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:49:52.215 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[ca65bafb-48ef-4c3f-8b89-eef4dec02f8d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:49:52 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:49:52.216 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[942481d3-42ce-42d4-b672-b07f779dc295]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:49:52 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:49:52.233 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[a8bfdaf9-ca26-4666-bc30-42a76f40e1c0]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 652719, 'reachable_time': 21291, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 313243, 'error': None, 'target': 'ovnmeta-0acabbdd-eec1-4793-b4fa-b8460f681925', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:49:52 compute-0 systemd[1]: run-netns-ovnmeta\x2d0acabbdd\x2deec1\x2d4793\x2db4fa\x2db8460f681925.mount: Deactivated successfully.
Jan 20 14:49:52 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:49:52.235 160517 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-0acabbdd-eec1-4793-b4fa-b8460f681925 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 14:49:52 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:49:52.236 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[8bd45dab-6ec1-458b-b6f7-30571074450f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:49:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:49:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:49:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:49:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:49:52 compute-0 nova_compute[250018]: 2026-01-20 14:49:52.505 250022 INFO nova.virt.libvirt.driver [None req-d82b3572-0680-4d26-bd57-fa622ddac854 38ef13d691534d06a5110be95454010f a08a6f2a8aee493980fd658fae9e7fb4 - - default default] [instance: fc3bdf75-942a-4d44-b8eb-3d83f787fad1] Deleting instance files /var/lib/nova/instances/fc3bdf75-942a-4d44-b8eb-3d83f787fad1_del
Jan 20 14:49:52 compute-0 nova_compute[250018]: 2026-01-20 14:49:52.506 250022 INFO nova.virt.libvirt.driver [None req-d82b3572-0680-4d26-bd57-fa622ddac854 38ef13d691534d06a5110be95454010f a08a6f2a8aee493980fd658fae9e7fb4 - - default default] [instance: fc3bdf75-942a-4d44-b8eb-3d83f787fad1] Deletion of /var/lib/nova/instances/fc3bdf75-942a-4d44-b8eb-3d83f787fad1_del complete
Jan 20 14:49:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:49:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:49:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Optimize plan auto_2026-01-20_14:49:52
Jan 20 14:49:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 14:49:52 compute-0 ceph-mgr[74653]: [balancer INFO root] do_upmap
Jan 20 14:49:52 compute-0 ceph-mgr[74653]: [balancer INFO root] pools ['default.rgw.control', 'images', 'default.rgw.log', 'vms', 'default.rgw.meta', '.mgr', 'cephfs.cephfs.meta', '.rgw.root', 'cephfs.cephfs.data', 'volumes', 'backups']
Jan 20 14:49:52 compute-0 ceph-mgr[74653]: [balancer INFO root] prepared 0/10 changes
Jan 20 14:49:52 compute-0 nova_compute[250018]: 2026-01-20 14:49:52.602 250022 INFO nova.compute.manager [None req-d82b3572-0680-4d26-bd57-fa622ddac854 38ef13d691534d06a5110be95454010f a08a6f2a8aee493980fd658fae9e7fb4 - - default default] [instance: fc3bdf75-942a-4d44-b8eb-3d83f787fad1] Took 0.77 seconds to destroy the instance on the hypervisor.
Jan 20 14:49:52 compute-0 nova_compute[250018]: 2026-01-20 14:49:52.603 250022 DEBUG oslo.service.loopingcall [None req-d82b3572-0680-4d26-bd57-fa622ddac854 38ef13d691534d06a5110be95454010f a08a6f2a8aee493980fd658fae9e7fb4 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 14:49:52 compute-0 nova_compute[250018]: 2026-01-20 14:49:52.603 250022 DEBUG nova.compute.manager [-] [instance: fc3bdf75-942a-4d44-b8eb-3d83f787fad1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 14:49:52 compute-0 nova_compute[250018]: 2026-01-20 14:49:52.604 250022 DEBUG nova.network.neutron [-] [instance: fc3bdf75-942a-4d44-b8eb-3d83f787fad1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 14:49:52 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:49:52 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:49:52 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:49:52.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:49:52 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:49:52 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:49:52 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:49:52.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:49:53 compute-0 nova_compute[250018]: 2026-01-20 14:49:53.113 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:49:53 compute-0 ceph-mon[74360]: pgmap v1857: 321 pgs: 321 active+clean; 293 MiB data, 928 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 4.3 MiB/s wr, 193 op/s
Jan 20 14:49:53 compute-0 nova_compute[250018]: 2026-01-20 14:49:53.661 250022 DEBUG nova.compute.manager [req-0bffe202-7fe9-492b-bcf2-9e81adca93b4 req-9be68d86-36e4-4aba-afff-4deea996d0d0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] Received event network-changed-8286e975-4b57-4b5a-9018-82187a854a2d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:49:53 compute-0 nova_compute[250018]: 2026-01-20 14:49:53.661 250022 DEBUG nova.compute.manager [req-0bffe202-7fe9-492b-bcf2-9e81adca93b4 req-9be68d86-36e4-4aba-afff-4deea996d0d0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] Refreshing instance network info cache due to event network-changed-8286e975-4b57-4b5a-9018-82187a854a2d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:49:53 compute-0 nova_compute[250018]: 2026-01-20 14:49:53.662 250022 DEBUG oslo_concurrency.lockutils [req-0bffe202-7fe9-492b-bcf2-9e81adca93b4 req-9be68d86-36e4-4aba-afff-4deea996d0d0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-52477e64-7989-4aa2-88e1-31600bfae2ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:49:53 compute-0 nova_compute[250018]: 2026-01-20 14:49:53.662 250022 DEBUG oslo_concurrency.lockutils [req-0bffe202-7fe9-492b-bcf2-9e81adca93b4 req-9be68d86-36e4-4aba-afff-4deea996d0d0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-52477e64-7989-4aa2-88e1-31600bfae2ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:49:53 compute-0 nova_compute[250018]: 2026-01-20 14:49:53.662 250022 DEBUG nova.network.neutron [req-0bffe202-7fe9-492b-bcf2-9e81adca93b4 req-9be68d86-36e4-4aba-afff-4deea996d0d0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] Refreshing network info cache for port 8286e975-4b57-4b5a-9018-82187a854a2d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:49:53 compute-0 nova_compute[250018]: 2026-01-20 14:49:53.748 250022 DEBUG nova.network.neutron [-] [instance: fc3bdf75-942a-4d44-b8eb-3d83f787fad1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:49:53 compute-0 nova_compute[250018]: 2026-01-20 14:49:53.769 250022 INFO nova.compute.manager [-] [instance: fc3bdf75-942a-4d44-b8eb-3d83f787fad1] Took 1.16 seconds to deallocate network for instance.
Jan 20 14:49:53 compute-0 nova_compute[250018]: 2026-01-20 14:49:53.842 250022 DEBUG oslo_concurrency.lockutils [None req-d82b3572-0680-4d26-bd57-fa622ddac854 38ef13d691534d06a5110be95454010f a08a6f2a8aee493980fd658fae9e7fb4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:49:53 compute-0 nova_compute[250018]: 2026-01-20 14:49:53.843 250022 DEBUG oslo_concurrency.lockutils [None req-d82b3572-0680-4d26-bd57-fa622ddac854 38ef13d691534d06a5110be95454010f a08a6f2a8aee493980fd658fae9e7fb4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:49:53 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1858: 321 pgs: 321 active+clean; 293 MiB data, 928 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 4.3 MiB/s wr, 203 op/s
Jan 20 14:49:53 compute-0 nova_compute[250018]: 2026-01-20 14:49:53.972 250022 DEBUG oslo_concurrency.processutils [None req-d82b3572-0680-4d26-bd57-fa622ddac854 38ef13d691534d06a5110be95454010f a08a6f2a8aee493980fd658fae9e7fb4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:49:54 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:49:54 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2454238869' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:49:54 compute-0 nova_compute[250018]: 2026-01-20 14:49:54.407 250022 DEBUG oslo_concurrency.processutils [None req-d82b3572-0680-4d26-bd57-fa622ddac854 38ef13d691534d06a5110be95454010f a08a6f2a8aee493980fd658fae9e7fb4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:49:54 compute-0 nova_compute[250018]: 2026-01-20 14:49:54.413 250022 DEBUG nova.compute.provider_tree [None req-d82b3572-0680-4d26-bd57-fa622ddac854 38ef13d691534d06a5110be95454010f a08a6f2a8aee493980fd658fae9e7fb4 - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:49:54 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e244 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:49:54 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e244 do_prune osdmap full prune enabled
Jan 20 14:49:54 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e245 e245: 3 total, 3 up, 3 in
Jan 20 14:49:54 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/2195194502' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:49:54 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/2454238869' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:49:54 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:49:54 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:49:54 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:49:54.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:49:54 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e245: 3 total, 3 up, 3 in
Jan 20 14:49:54 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:49:54 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:49:54 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:49:54.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:49:55 compute-0 nova_compute[250018]: 2026-01-20 14:49:55.138 250022 DEBUG nova.scheduler.client.report [None req-d82b3572-0680-4d26-bd57-fa622ddac854 38ef13d691534d06a5110be95454010f a08a6f2a8aee493980fd658fae9e7fb4 - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:49:55 compute-0 nova_compute[250018]: 2026-01-20 14:49:55.202 250022 DEBUG oslo_concurrency.lockutils [None req-d82b3572-0680-4d26-bd57-fa622ddac854 38ef13d691534d06a5110be95454010f a08a6f2a8aee493980fd658fae9e7fb4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.359s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:49:55 compute-0 nova_compute[250018]: 2026-01-20 14:49:55.243 250022 INFO nova.scheduler.client.report [None req-d82b3572-0680-4d26-bd57-fa622ddac854 38ef13d691534d06a5110be95454010f a08a6f2a8aee493980fd658fae9e7fb4 - - default default] Deleted allocations for instance fc3bdf75-942a-4d44-b8eb-3d83f787fad1
Jan 20 14:49:55 compute-0 nova_compute[250018]: 2026-01-20 14:49:55.298 250022 DEBUG nova.compute.manager [req-662b280f-6ef8-4265-bda3-3ef4cb62d301 req-b6f5ea26-3eb6-4d83-8d2b-4aa1e046782d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: fc3bdf75-942a-4d44-b8eb-3d83f787fad1] Received event network-vif-deleted-2186c94a-fb28-491f-9215-dd9a32546223 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:49:55 compute-0 nova_compute[250018]: 2026-01-20 14:49:55.341 250022 DEBUG oslo_concurrency.lockutils [None req-d82b3572-0680-4d26-bd57-fa622ddac854 38ef13d691534d06a5110be95454010f a08a6f2a8aee493980fd658fae9e7fb4 - - default default] Lock "fc3bdf75-942a-4d44-b8eb-3d83f787fad1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.514s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:49:55 compute-0 ceph-mon[74360]: pgmap v1858: 321 pgs: 321 active+clean; 293 MiB data, 928 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 4.3 MiB/s wr, 203 op/s
Jan 20 14:49:55 compute-0 ceph-mon[74360]: osdmap e245: 3 total, 3 up, 3 in
Jan 20 14:49:55 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1860: 321 pgs: 321 active+clean; 255 MiB data, 910 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 120 KiB/s wr, 124 op/s
Jan 20 14:49:56 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:49:56 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:49:56 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:49:56.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:49:56 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/87628794' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:49:56 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/848721760' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:49:56 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:49:56 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:49:56 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:49:56.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:49:56 compute-0 nova_compute[250018]: 2026-01-20 14:49:56.784 250022 DEBUG nova.network.neutron [req-0bffe202-7fe9-492b-bcf2-9e81adca93b4 req-9be68d86-36e4-4aba-afff-4deea996d0d0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] Updated VIF entry in instance network info cache for port 8286e975-4b57-4b5a-9018-82187a854a2d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 14:49:56 compute-0 nova_compute[250018]: 2026-01-20 14:49:56.784 250022 DEBUG nova.network.neutron [req-0bffe202-7fe9-492b-bcf2-9e81adca93b4 req-9be68d86-36e4-4aba-afff-4deea996d0d0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] Updating instance_info_cache with network_info: [{"id": "8286e975-4b57-4b5a-9018-82187a854a2d", "address": "fa:16:3e:19:a9:8c", "network": {"id": "3379e2b3-ffb2-4391-969b-c9dc51bfbe25", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1112843240-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acb30fbc0e3749e390d7f867060b5a2a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8286e975-4b", "ovs_interfaceid": "8286e975-4b57-4b5a-9018-82187a854a2d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:49:56 compute-0 nova_compute[250018]: 2026-01-20 14:49:56.819 250022 DEBUG oslo_concurrency.lockutils [req-0bffe202-7fe9-492b-bcf2-9e81adca93b4 req-9be68d86-36e4-4aba-afff-4deea996d0d0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-52477e64-7989-4aa2-88e1-31600bfae2ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:49:56 compute-0 nova_compute[250018]: 2026-01-20 14:49:56.819 250022 DEBUG nova.compute.manager [req-0bffe202-7fe9-492b-bcf2-9e81adca93b4 req-9be68d86-36e4-4aba-afff-4deea996d0d0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: fc3bdf75-942a-4d44-b8eb-3d83f787fad1] Received event network-vif-unplugged-2186c94a-fb28-491f-9215-dd9a32546223 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:49:56 compute-0 nova_compute[250018]: 2026-01-20 14:49:56.820 250022 DEBUG oslo_concurrency.lockutils [req-0bffe202-7fe9-492b-bcf2-9e81adca93b4 req-9be68d86-36e4-4aba-afff-4deea996d0d0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "fc3bdf75-942a-4d44-b8eb-3d83f787fad1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:49:56 compute-0 nova_compute[250018]: 2026-01-20 14:49:56.820 250022 DEBUG oslo_concurrency.lockutils [req-0bffe202-7fe9-492b-bcf2-9e81adca93b4 req-9be68d86-36e4-4aba-afff-4deea996d0d0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "fc3bdf75-942a-4d44-b8eb-3d83f787fad1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:49:56 compute-0 nova_compute[250018]: 2026-01-20 14:49:56.820 250022 DEBUG oslo_concurrency.lockutils [req-0bffe202-7fe9-492b-bcf2-9e81adca93b4 req-9be68d86-36e4-4aba-afff-4deea996d0d0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "fc3bdf75-942a-4d44-b8eb-3d83f787fad1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:49:56 compute-0 nova_compute[250018]: 2026-01-20 14:49:56.820 250022 DEBUG nova.compute.manager [req-0bffe202-7fe9-492b-bcf2-9e81adca93b4 req-9be68d86-36e4-4aba-afff-4deea996d0d0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: fc3bdf75-942a-4d44-b8eb-3d83f787fad1] No waiting events found dispatching network-vif-unplugged-2186c94a-fb28-491f-9215-dd9a32546223 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:49:56 compute-0 nova_compute[250018]: 2026-01-20 14:49:56.820 250022 DEBUG nova.compute.manager [req-0bffe202-7fe9-492b-bcf2-9e81adca93b4 req-9be68d86-36e4-4aba-afff-4deea996d0d0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: fc3bdf75-942a-4d44-b8eb-3d83f787fad1] Received event network-vif-unplugged-2186c94a-fb28-491f-9215-dd9a32546223 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 20 14:49:56 compute-0 nova_compute[250018]: 2026-01-20 14:49:56.820 250022 DEBUG nova.compute.manager [req-0bffe202-7fe9-492b-bcf2-9e81adca93b4 req-9be68d86-36e4-4aba-afff-4deea996d0d0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: fc3bdf75-942a-4d44-b8eb-3d83f787fad1] Received event network-vif-plugged-2186c94a-fb28-491f-9215-dd9a32546223 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:49:56 compute-0 nova_compute[250018]: 2026-01-20 14:49:56.820 250022 DEBUG oslo_concurrency.lockutils [req-0bffe202-7fe9-492b-bcf2-9e81adca93b4 req-9be68d86-36e4-4aba-afff-4deea996d0d0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "fc3bdf75-942a-4d44-b8eb-3d83f787fad1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:49:56 compute-0 nova_compute[250018]: 2026-01-20 14:49:56.821 250022 DEBUG oslo_concurrency.lockutils [req-0bffe202-7fe9-492b-bcf2-9e81adca93b4 req-9be68d86-36e4-4aba-afff-4deea996d0d0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "fc3bdf75-942a-4d44-b8eb-3d83f787fad1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:49:56 compute-0 nova_compute[250018]: 2026-01-20 14:49:56.821 250022 DEBUG oslo_concurrency.lockutils [req-0bffe202-7fe9-492b-bcf2-9e81adca93b4 req-9be68d86-36e4-4aba-afff-4deea996d0d0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "fc3bdf75-942a-4d44-b8eb-3d83f787fad1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:49:56 compute-0 nova_compute[250018]: 2026-01-20 14:49:56.821 250022 DEBUG nova.compute.manager [req-0bffe202-7fe9-492b-bcf2-9e81adca93b4 req-9be68d86-36e4-4aba-afff-4deea996d0d0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: fc3bdf75-942a-4d44-b8eb-3d83f787fad1] No waiting events found dispatching network-vif-plugged-2186c94a-fb28-491f-9215-dd9a32546223 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:49:56 compute-0 nova_compute[250018]: 2026-01-20 14:49:56.821 250022 WARNING nova.compute.manager [req-0bffe202-7fe9-492b-bcf2-9e81adca93b4 req-9be68d86-36e4-4aba-afff-4deea996d0d0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: fc3bdf75-942a-4d44-b8eb-3d83f787fad1] Received unexpected event network-vif-plugged-2186c94a-fb28-491f-9215-dd9a32546223 for instance with vm_state active and task_state deleting.
Jan 20 14:49:56 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:49:56.822 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=367c1a2c-b16a-4828-ab5a-626bb50023b4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '34'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:49:57 compute-0 nova_compute[250018]: 2026-01-20 14:49:57.095 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:49:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 14:49:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:49:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 14:49:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:49:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:49:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:49:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:49:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:49:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:49:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:49:57 compute-0 ceph-mon[74360]: pgmap v1860: 321 pgs: 321 active+clean; 255 MiB data, 910 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 120 KiB/s wr, 124 op/s
Jan 20 14:49:57 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1861: 321 pgs: 321 active+clean; 246 MiB data, 906 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 21 KiB/s wr, 121 op/s
Jan 20 14:49:58 compute-0 nova_compute[250018]: 2026-01-20 14:49:58.075 250022 DEBUG nova.compute.manager [req-baf2153e-8186-4fec-aa51-62f15ccad980 req-55bc6dfb-46b7-4520-8731-ddad6e0e4c46 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] Received event network-vif-plugged-8286e975-4b57-4b5a-9018-82187a854a2d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:49:58 compute-0 nova_compute[250018]: 2026-01-20 14:49:58.075 250022 DEBUG oslo_concurrency.lockutils [req-baf2153e-8186-4fec-aa51-62f15ccad980 req-55bc6dfb-46b7-4520-8731-ddad6e0e4c46 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "52477e64-7989-4aa2-88e1-31600bfae2ef-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:49:58 compute-0 nova_compute[250018]: 2026-01-20 14:49:58.076 250022 DEBUG oslo_concurrency.lockutils [req-baf2153e-8186-4fec-aa51-62f15ccad980 req-55bc6dfb-46b7-4520-8731-ddad6e0e4c46 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "52477e64-7989-4aa2-88e1-31600bfae2ef-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:49:58 compute-0 nova_compute[250018]: 2026-01-20 14:49:58.076 250022 DEBUG oslo_concurrency.lockutils [req-baf2153e-8186-4fec-aa51-62f15ccad980 req-55bc6dfb-46b7-4520-8731-ddad6e0e4c46 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "52477e64-7989-4aa2-88e1-31600bfae2ef-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:49:58 compute-0 nova_compute[250018]: 2026-01-20 14:49:58.076 250022 DEBUG nova.compute.manager [req-baf2153e-8186-4fec-aa51-62f15ccad980 req-55bc6dfb-46b7-4520-8731-ddad6e0e4c46 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] No waiting events found dispatching network-vif-plugged-8286e975-4b57-4b5a-9018-82187a854a2d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:49:58 compute-0 nova_compute[250018]: 2026-01-20 14:49:58.076 250022 WARNING nova.compute.manager [req-baf2153e-8186-4fec-aa51-62f15ccad980 req-55bc6dfb-46b7-4520-8731-ddad6e0e4c46 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] Received unexpected event network-vif-plugged-8286e975-4b57-4b5a-9018-82187a854a2d for instance with vm_state resized and task_state None.
Jan 20 14:49:58 compute-0 nova_compute[250018]: 2026-01-20 14:49:58.115 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:49:58 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:49:58 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:49:58 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:49:58.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:49:58 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:49:58 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:49:58 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:49:58.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:49:59 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:49:59 compute-0 ceph-mon[74360]: pgmap v1861: 321 pgs: 321 active+clean; 246 MiB data, 906 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 21 KiB/s wr, 121 op/s
Jan 20 14:49:59 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1862: 321 pgs: 321 active+clean; 246 MiB data, 906 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 20 KiB/s wr, 121 op/s
Jan 20 14:50:00 compute-0 ceph-mon[74360]: log_channel(cluster) log [INF] : overall HEALTH_OK
Jan 20 14:50:00 compute-0 nova_compute[250018]: 2026-01-20 14:50:00.271 250022 DEBUG nova.compute.manager [req-6439b544-5584-467c-92b7-5a2058bf7b71 req-ec47bd55-e9a1-4561-b20d-116bb2520b35 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] Received event network-vif-plugged-8286e975-4b57-4b5a-9018-82187a854a2d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:50:00 compute-0 nova_compute[250018]: 2026-01-20 14:50:00.272 250022 DEBUG oslo_concurrency.lockutils [req-6439b544-5584-467c-92b7-5a2058bf7b71 req-ec47bd55-e9a1-4561-b20d-116bb2520b35 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "52477e64-7989-4aa2-88e1-31600bfae2ef-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:50:00 compute-0 nova_compute[250018]: 2026-01-20 14:50:00.273 250022 DEBUG oslo_concurrency.lockutils [req-6439b544-5584-467c-92b7-5a2058bf7b71 req-ec47bd55-e9a1-4561-b20d-116bb2520b35 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "52477e64-7989-4aa2-88e1-31600bfae2ef-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:50:00 compute-0 nova_compute[250018]: 2026-01-20 14:50:00.273 250022 DEBUG oslo_concurrency.lockutils [req-6439b544-5584-467c-92b7-5a2058bf7b71 req-ec47bd55-e9a1-4561-b20d-116bb2520b35 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "52477e64-7989-4aa2-88e1-31600bfae2ef-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:50:00 compute-0 nova_compute[250018]: 2026-01-20 14:50:00.274 250022 DEBUG nova.compute.manager [req-6439b544-5584-467c-92b7-5a2058bf7b71 req-ec47bd55-e9a1-4561-b20d-116bb2520b35 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] No waiting events found dispatching network-vif-plugged-8286e975-4b57-4b5a-9018-82187a854a2d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:50:00 compute-0 nova_compute[250018]: 2026-01-20 14:50:00.274 250022 WARNING nova.compute.manager [req-6439b544-5584-467c-92b7-5a2058bf7b71 req-ec47bd55-e9a1-4561-b20d-116bb2520b35 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] Received unexpected event network-vif-plugged-8286e975-4b57-4b5a-9018-82187a854a2d for instance with vm_state resized and task_state None.
Jan 20 14:50:00 compute-0 podman[313273]: 2026-01-20 14:50:00.481091317 +0000 UTC m=+0.064863057 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2)
Jan 20 14:50:00 compute-0 podman[313272]: 2026-01-20 14:50:00.52651054 +0000 UTC m=+0.115582962 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 20 14:50:00 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:50:00 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:50:00 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:50:00.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:50:00 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:50:00 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:50:00 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:50:00.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:50:00 compute-0 nova_compute[250018]: 2026-01-20 14:50:00.747 250022 DEBUG oslo_concurrency.lockutils [None req-6eb985f1-306b-499b-acf4-c9f69407b056 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Acquiring lock "52477e64-7989-4aa2-88e1-31600bfae2ef" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:50:00 compute-0 nova_compute[250018]: 2026-01-20 14:50:00.747 250022 DEBUG oslo_concurrency.lockutils [None req-6eb985f1-306b-499b-acf4-c9f69407b056 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lock "52477e64-7989-4aa2-88e1-31600bfae2ef" acquired by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:50:00 compute-0 nova_compute[250018]: 2026-01-20 14:50:00.748 250022 DEBUG nova.compute.manager [None req-6eb985f1-306b-499b-acf4-c9f69407b056 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] Going to confirm migration 14 do_confirm_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:4679
Jan 20 14:50:00 compute-0 ceph-mon[74360]: overall HEALTH_OK
Jan 20 14:50:01 compute-0 ceph-mon[74360]: pgmap v1862: 321 pgs: 321 active+clean; 246 MiB data, 906 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 20 KiB/s wr, 121 op/s
Jan 20 14:50:01 compute-0 nova_compute[250018]: 2026-01-20 14:50:01.820 250022 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768920586.8190584, 52477e64-7989-4aa2-88e1-31600bfae2ef => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:50:01 compute-0 nova_compute[250018]: 2026-01-20 14:50:01.820 250022 INFO nova.compute.manager [-] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] VM Stopped (Lifecycle Event)
Jan 20 14:50:01 compute-0 nova_compute[250018]: 2026-01-20 14:50:01.844 250022 DEBUG nova.compute.manager [None req-c740b555-2d0b-4d87-9615-6838b8280950 - - - - - -] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:50:01 compute-0 nova_compute[250018]: 2026-01-20 14:50:01.847 250022 DEBUG nova.compute.manager [None req-c740b555-2d0b-4d87-9615-6838b8280950 - - - - - -] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: resized, current task_state: None, current DB power_state: 1, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:50:01 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1863: 321 pgs: 321 active+clean; 246 MiB data, 906 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 21 KiB/s wr, 142 op/s
Jan 20 14:50:01 compute-0 nova_compute[250018]: 2026-01-20 14:50:01.873 250022 INFO nova.compute.manager [None req-c740b555-2d0b-4d87-9615-6838b8280950 - - - - - -] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] During the sync_power process the instance has moved from host compute-1.ctlplane.example.com to host compute-0.ctlplane.example.com
Jan 20 14:50:02 compute-0 nova_compute[250018]: 2026-01-20 14:50:02.015 250022 DEBUG neutronclient.v2_0.client [None req-6eb985f1-306b-499b-acf4-c9f69407b056 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Error message: {"NeutronError": {"type": "PortBindingNotFound", "message": "Binding for port 8286e975-4b57-4b5a-9018-82187a854a2d for host compute-0.ctlplane.example.com could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262
Jan 20 14:50:02 compute-0 nova_compute[250018]: 2026-01-20 14:50:02.015 250022 DEBUG oslo_concurrency.lockutils [None req-6eb985f1-306b-499b-acf4-c9f69407b056 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Acquiring lock "refresh_cache-52477e64-7989-4aa2-88e1-31600bfae2ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:50:02 compute-0 nova_compute[250018]: 2026-01-20 14:50:02.016 250022 DEBUG oslo_concurrency.lockutils [None req-6eb985f1-306b-499b-acf4-c9f69407b056 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Acquired lock "refresh_cache-52477e64-7989-4aa2-88e1-31600bfae2ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:50:02 compute-0 nova_compute[250018]: 2026-01-20 14:50:02.016 250022 DEBUG nova.network.neutron [None req-6eb985f1-306b-499b-acf4-c9f69407b056 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:50:02 compute-0 nova_compute[250018]: 2026-01-20 14:50:02.016 250022 DEBUG nova.objects.instance [None req-6eb985f1-306b-499b-acf4-c9f69407b056 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lazy-loading 'info_cache' on Instance uuid 52477e64-7989-4aa2-88e1-31600bfae2ef obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:50:02 compute-0 nova_compute[250018]: 2026-01-20 14:50:02.129 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:50:02 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:50:02 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:50:02 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:50:02.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:50:02 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:50:02 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:50:02 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:50:02.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:50:02 compute-0 ceph-mon[74360]: pgmap v1863: 321 pgs: 321 active+clean; 246 MiB data, 906 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 21 KiB/s wr, 142 op/s
Jan 20 14:50:03 compute-0 nova_compute[250018]: 2026-01-20 14:50:03.116 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:50:03 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1864: 321 pgs: 321 active+clean; 246 MiB data, 906 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 25 KiB/s wr, 140 op/s
Jan 20 14:50:03 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e245 do_prune osdmap full prune enabled
Jan 20 14:50:03 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e246 e246: 3 total, 3 up, 3 in
Jan 20 14:50:03 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e246: 3 total, 3 up, 3 in
Jan 20 14:50:04 compute-0 nova_compute[250018]: 2026-01-20 14:50:04.519 250022 DEBUG nova.network.neutron [None req-6eb985f1-306b-499b-acf4-c9f69407b056 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: 52477e64-7989-4aa2-88e1-31600bfae2ef] Updating instance_info_cache with network_info: [{"id": "8286e975-4b57-4b5a-9018-82187a854a2d", "address": "fa:16:3e:19:a9:8c", "network": {"id": "3379e2b3-ffb2-4391-969b-c9dc51bfbe25", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1112843240-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acb30fbc0e3749e390d7f867060b5a2a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8286e975-4b", "ovs_interfaceid": "8286e975-4b57-4b5a-9018-82187a854a2d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:50:04 compute-0 nova_compute[250018]: 2026-01-20 14:50:04.535 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:50:04 compute-0 nova_compute[250018]: 2026-01-20 14:50:04.558 250022 DEBUG oslo_concurrency.lockutils [None req-6eb985f1-306b-499b-acf4-c9f69407b056 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Releasing lock "refresh_cache-52477e64-7989-4aa2-88e1-31600bfae2ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:50:04 compute-0 nova_compute[250018]: 2026-01-20 14:50:04.558 250022 DEBUG nova.objects.instance [None req-6eb985f1-306b-499b-acf4-c9f69407b056 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lazy-loading 'migration_context' on Instance uuid 52477e64-7989-4aa2-88e1-31600bfae2ef obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:50:04 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e246 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:50:04 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:50:04 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:50:04 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:50:04.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:50:04 compute-0 nova_compute[250018]: 2026-01-20 14:50:04.667 250022 DEBUG nova.storage.rbd_utils [None req-6eb985f1-306b-499b-acf4-c9f69407b056 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] removing snapshot(nova-resize) on rbd image(52477e64-7989-4aa2-88e1-31600bfae2ef_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Jan 20 14:50:04 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:50:04 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:50:04 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:50:04.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:50:04 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e246 do_prune osdmap full prune enabled
Jan 20 14:50:04 compute-0 ceph-mon[74360]: pgmap v1864: 321 pgs: 321 active+clean; 246 MiB data, 906 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 25 KiB/s wr, 140 op/s
Jan 20 14:50:04 compute-0 ceph-mon[74360]: osdmap e246: 3 total, 3 up, 3 in
Jan 20 14:50:04 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/2075039030' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:50:04 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e247 e247: 3 total, 3 up, 3 in
Jan 20 14:50:04 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e247: 3 total, 3 up, 3 in
Jan 20 14:50:04 compute-0 nova_compute[250018]: 2026-01-20 14:50:04.971 250022 DEBUG nova.virt.libvirt.vif [None req-6eb985f1-306b-499b-acf4-c9f69407b056 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:49:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1663251192',display_name='tempest-ServerDiskConfigTestJSON-server-1663251192',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1663251192',id=106,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:49:57Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=Flavor(1),os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='acb30fbc0e3749e390d7f867060b5a2a',ramdisk_id='',reservation_id='r-nykd0j3m',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerDiskConfigTestJSON-1806346246',owner_user_name='tempest-ServerDiskConfigTestJSON-1806346246-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T14:49:57Z,user_data=None,user_id='a1bd93d04cc4468abe1d5c61f5144191',uuid=52477e64-7989-4aa2-88e1-31600bfae2ef,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='resized') vif={"id": "8286e975-4b57-4b5a-9018-82187a854a2d", "address": "fa:16:3e:19:a9:8c", "network": {"id": "3379e2b3-ffb2-4391-969b-c9dc51bfbe25", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1112843240-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acb30fbc0e3749e390d7f867060b5a2a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8286e975-4b", "ovs_interfaceid": "8286e975-4b57-4b5a-9018-82187a854a2d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 14:50:04 compute-0 nova_compute[250018]: 2026-01-20 14:50:04.972 250022 DEBUG nova.network.os_vif_util [None req-6eb985f1-306b-499b-acf4-c9f69407b056 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Converting VIF {"id": "8286e975-4b57-4b5a-9018-82187a854a2d", "address": "fa:16:3e:19:a9:8c", "network": {"id": "3379e2b3-ffb2-4391-969b-c9dc51bfbe25", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1112843240-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acb30fbc0e3749e390d7f867060b5a2a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8286e975-4b", "ovs_interfaceid": "8286e975-4b57-4b5a-9018-82187a854a2d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:50:04 compute-0 nova_compute[250018]: 2026-01-20 14:50:04.973 250022 DEBUG nova.network.os_vif_util [None req-6eb985f1-306b-499b-acf4-c9f69407b056 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:19:a9:8c,bridge_name='br-int',has_traffic_filtering=True,id=8286e975-4b57-4b5a-9018-82187a854a2d,network=Network(3379e2b3-ffb2-4391-969b-c9dc51bfbe25),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8286e975-4b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:50:04 compute-0 nova_compute[250018]: 2026-01-20 14:50:04.974 250022 DEBUG os_vif [None req-6eb985f1-306b-499b-acf4-c9f69407b056 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:19:a9:8c,bridge_name='br-int',has_traffic_filtering=True,id=8286e975-4b57-4b5a-9018-82187a854a2d,network=Network(3379e2b3-ffb2-4391-969b-c9dc51bfbe25),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8286e975-4b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 14:50:04 compute-0 nova_compute[250018]: 2026-01-20 14:50:04.976 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:50:04 compute-0 nova_compute[250018]: 2026-01-20 14:50:04.976 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8286e975-4b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:50:04 compute-0 nova_compute[250018]: 2026-01-20 14:50:04.977 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:50:04 compute-0 nova_compute[250018]: 2026-01-20 14:50:04.979 250022 INFO os_vif [None req-6eb985f1-306b-499b-acf4-c9f69407b056 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:19:a9:8c,bridge_name='br-int',has_traffic_filtering=True,id=8286e975-4b57-4b5a-9018-82187a854a2d,network=Network(3379e2b3-ffb2-4391-969b-c9dc51bfbe25),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8286e975-4b')
Jan 20 14:50:04 compute-0 nova_compute[250018]: 2026-01-20 14:50:04.980 250022 DEBUG oslo_concurrency.lockutils [None req-6eb985f1-306b-499b-acf4-c9f69407b056 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:50:04 compute-0 nova_compute[250018]: 2026-01-20 14:50:04.980 250022 DEBUG oslo_concurrency.lockutils [None req-6eb985f1-306b-499b-acf4-c9f69407b056 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:50:05 compute-0 nova_compute[250018]: 2026-01-20 14:50:05.083 250022 DEBUG oslo_concurrency.processutils [None req-6eb985f1-306b-499b-acf4-c9f69407b056 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:50:05 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:50:05 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1811767033' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:50:05 compute-0 nova_compute[250018]: 2026-01-20 14:50:05.542 250022 DEBUG oslo_concurrency.processutils [None req-6eb985f1-306b-499b-acf4-c9f69407b056 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:50:05 compute-0 nova_compute[250018]: 2026-01-20 14:50:05.548 250022 DEBUG nova.compute.provider_tree [None req-6eb985f1-306b-499b-acf4-c9f69407b056 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:50:05 compute-0 nova_compute[250018]: 2026-01-20 14:50:05.575 250022 DEBUG nova.scheduler.client.report [None req-6eb985f1-306b-499b-acf4-c9f69407b056 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:50:05 compute-0 nova_compute[250018]: 2026-01-20 14:50:05.645 250022 DEBUG oslo_concurrency.lockutils [None req-6eb985f1-306b-499b-acf4-c9f69407b056 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: held 0.664s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:50:05 compute-0 nova_compute[250018]: 2026-01-20 14:50:05.823 250022 INFO nova.scheduler.client.report [None req-6eb985f1-306b-499b-acf4-c9f69407b056 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Deleted allocation for migration 4a873a64-1379-4cac-913e-e81f3f300ec7
Jan 20 14:50:05 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1867: 321 pgs: 2 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 311 active+clean; 246 MiB data, 906 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 26 KiB/s wr, 137 op/s
Jan 20 14:50:05 compute-0 nova_compute[250018]: 2026-01-20 14:50:05.909 250022 DEBUG oslo_concurrency.lockutils [None req-6eb985f1-306b-499b-acf4-c9f69407b056 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lock "52477e64-7989-4aa2-88e1-31600bfae2ef" "released" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: held 5.162s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:50:05 compute-0 ceph-mon[74360]: osdmap e247: 3 total, 3 up, 3 in
Jan 20 14:50:05 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/1676165450' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:50:05 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/1811767033' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:50:06 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:50:06 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:50:06 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:50:06.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:50:06 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:50:06 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:50:06 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:50:06.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:50:07 compute-0 ceph-mon[74360]: pgmap v1867: 321 pgs: 2 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 311 active+clean; 246 MiB data, 906 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 26 KiB/s wr, 137 op/s
Jan 20 14:50:07 compute-0 nova_compute[250018]: 2026-01-20 14:50:07.065 250022 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768920592.0637822, fc3bdf75-942a-4d44-b8eb-3d83f787fad1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:50:07 compute-0 nova_compute[250018]: 2026-01-20 14:50:07.065 250022 INFO nova.compute.manager [-] [instance: fc3bdf75-942a-4d44-b8eb-3d83f787fad1] VM Stopped (Lifecycle Event)
Jan 20 14:50:07 compute-0 nova_compute[250018]: 2026-01-20 14:50:07.132 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:50:07 compute-0 nova_compute[250018]: 2026-01-20 14:50:07.140 250022 DEBUG nova.compute.manager [None req-3dc12099-2ed9-408c-88a7-5bc07ee70e50 - - - - - -] [instance: fc3bdf75-942a-4d44-b8eb-3d83f787fad1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:50:07 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1868: 321 pgs: 2 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 311 active+clean; 246 MiB data, 906 MiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 25 KiB/s wr, 181 op/s
Jan 20 14:50:08 compute-0 nova_compute[250018]: 2026-01-20 14:50:08.118 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:50:08 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:50:08 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:50:08 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:50:08.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:50:08 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:50:08 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:50:08 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:50:08.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:50:09 compute-0 ceph-mon[74360]: pgmap v1868: 321 pgs: 2 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 311 active+clean; 246 MiB data, 906 MiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 25 KiB/s wr, 181 op/s
Jan 20 14:50:09 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:50:09 compute-0 sudo[313379]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:50:09 compute-0 sudo[313379]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:50:09 compute-0 sudo[313379]: pam_unix(sudo:session): session closed for user root
Jan 20 14:50:09 compute-0 sudo[313404]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:50:09 compute-0 sudo[313404]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:50:09 compute-0 sudo[313404]: pam_unix(sudo:session): session closed for user root
Jan 20 14:50:09 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1869: 321 pgs: 2 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 311 active+clean; 246 MiB data, 906 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 23 KiB/s wr, 123 op/s
Jan 20 14:50:10 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:50:10 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:50:10 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:50:10.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:50:10 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:50:10 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:50:10 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:50:10.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:50:11 compute-0 ceph-mon[74360]: pgmap v1869: 321 pgs: 2 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 311 active+clean; 246 MiB data, 906 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 23 KiB/s wr, 123 op/s
Jan 20 14:50:11 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/3567694707' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:50:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 14:50:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:50:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 20 14:50:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:50:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.004342640575563 of space, bias 1.0, pg target 1.3027921726689 quantized to 32 (current 32)
Jan 20 14:50:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:50:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0009896487646566163 of space, bias 1.0, pg target 0.2959049806323283 quantized to 32 (current 32)
Jan 20 14:50:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:50:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:50:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:50:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Jan 20 14:50:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:50:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 32)
Jan 20 14:50:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:50:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:50:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:50:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021737739624046157 quantized to 32 (current 32)
Jan 20 14:50:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:50:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Jan 20 14:50:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:50:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:50:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:50:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Jan 20 14:50:11 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1870: 321 pgs: 321 active+clean; 246 MiB data, 907 MiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 17 KiB/s wr, 193 op/s
Jan 20 14:50:12 compute-0 nova_compute[250018]: 2026-01-20 14:50:12.185 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:50:12 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:50:12 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:50:12 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:50:12.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:50:12 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:50:12 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:50:12 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:50:12.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:50:13 compute-0 nova_compute[250018]: 2026-01-20 14:50:13.120 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:50:13 compute-0 ceph-mon[74360]: pgmap v1870: 321 pgs: 321 active+clean; 246 MiB data, 907 MiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 17 KiB/s wr, 193 op/s
Jan 20 14:50:13 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1871: 321 pgs: 321 active+clean; 246 MiB data, 907 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 14 KiB/s wr, 165 op/s
Jan 20 14:50:14 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/276800458' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:50:14 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/276800458' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:50:14 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:50:14 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e247 do_prune osdmap full prune enabled
Jan 20 14:50:14 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e248 e248: 3 total, 3 up, 3 in
Jan 20 14:50:14 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e248: 3 total, 3 up, 3 in
Jan 20 14:50:14 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:50:14 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:50:14 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:50:14.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:50:14 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:50:14 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:50:14 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:50:14.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:50:15 compute-0 ceph-mon[74360]: pgmap v1871: 321 pgs: 321 active+clean; 246 MiB data, 907 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 14 KiB/s wr, 165 op/s
Jan 20 14:50:15 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/2969810977' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:50:15 compute-0 ceph-mon[74360]: osdmap e248: 3 total, 3 up, 3 in
Jan 20 14:50:15 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1873: 321 pgs: 321 active+clean; 246 MiB data, 907 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 32 KiB/s wr, 172 op/s
Jan 20 14:50:16 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:50:16 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:50:16 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:50:16.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:50:16 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:50:16 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:50:16 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:50:16.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:50:17 compute-0 ceph-mon[74360]: pgmap v1873: 321 pgs: 321 active+clean; 246 MiB data, 907 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 32 KiB/s wr, 172 op/s
Jan 20 14:50:17 compute-0 nova_compute[250018]: 2026-01-20 14:50:17.187 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:50:17 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1874: 321 pgs: 321 active+clean; 231 MiB data, 899 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 44 KiB/s wr, 133 op/s
Jan 20 14:50:18 compute-0 nova_compute[250018]: 2026-01-20 14:50:18.170 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:50:18 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:50:18 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:50:18 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:50:18.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:50:18 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:50:18 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:50:18 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:50:18.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:50:19 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e248 do_prune osdmap full prune enabled
Jan 20 14:50:19 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e249 e249: 3 total, 3 up, 3 in
Jan 20 14:50:19 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e249: 3 total, 3 up, 3 in
Jan 20 14:50:19 compute-0 ceph-mon[74360]: pgmap v1874: 321 pgs: 321 active+clean; 231 MiB data, 899 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 44 KiB/s wr, 133 op/s
Jan 20 14:50:19 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e249 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:50:19 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1876: 321 pgs: 321 active+clean; 210 MiB data, 892 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 40 KiB/s wr, 99 op/s
Jan 20 14:50:20 compute-0 ceph-mon[74360]: osdmap e249: 3 total, 3 up, 3 in
Jan 20 14:50:20 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/172413703' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:50:20 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:50:20 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:50:20 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:50:20.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:50:20 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:50:20 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:50:20 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:50:20.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:50:21 compute-0 ceph-mon[74360]: pgmap v1876: 321 pgs: 321 active+clean; 210 MiB data, 892 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 40 KiB/s wr, 99 op/s
Jan 20 14:50:21 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/4223586813' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:50:21 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/3610407819' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:50:21 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1877: 321 pgs: 321 active+clean; 167 MiB data, 866 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 39 KiB/s wr, 151 op/s
Jan 20 14:50:22 compute-0 nova_compute[250018]: 2026-01-20 14:50:22.190 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:50:22 compute-0 ceph-mon[74360]: pgmap v1877: 321 pgs: 321 active+clean; 167 MiB data, 866 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 39 KiB/s wr, 151 op/s
Jan 20 14:50:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:50:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:50:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:50:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:50:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:50:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:50:22 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:50:22 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:50:22 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:50:22.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:50:22 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:50:22 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:50:22 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:50:22.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:50:23 compute-0 nova_compute[250018]: 2026-01-20 14:50:23.173 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:50:23 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1878: 321 pgs: 321 active+clean; 184 MiB data, 856 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 955 KiB/s wr, 168 op/s
Jan 20 14:50:24 compute-0 nova_compute[250018]: 2026-01-20 14:50:24.221 250022 DEBUG oslo_concurrency.lockutils [None req-bfd27c3c-c5b9-4124-ba87-631778c9093b d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Acquiring lock "c3b4d4c6-c42f-4abc-9c01-89ec3e10c677" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:50:24 compute-0 nova_compute[250018]: 2026-01-20 14:50:24.221 250022 DEBUG oslo_concurrency.lockutils [None req-bfd27c3c-c5b9-4124-ba87-631778c9093b d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Lock "c3b4d4c6-c42f-4abc-9c01-89ec3e10c677" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:50:24 compute-0 nova_compute[250018]: 2026-01-20 14:50:24.242 250022 DEBUG nova.compute.manager [None req-bfd27c3c-c5b9-4124-ba87-631778c9093b d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 14:50:24 compute-0 nova_compute[250018]: 2026-01-20 14:50:24.342 250022 DEBUG oslo_concurrency.lockutils [None req-bfd27c3c-c5b9-4124-ba87-631778c9093b d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:50:24 compute-0 nova_compute[250018]: 2026-01-20 14:50:24.343 250022 DEBUG oslo_concurrency.lockutils [None req-bfd27c3c-c5b9-4124-ba87-631778c9093b d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:50:24 compute-0 nova_compute[250018]: 2026-01-20 14:50:24.353 250022 DEBUG nova.virt.hardware [None req-bfd27c3c-c5b9-4124-ba87-631778c9093b d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 14:50:24 compute-0 nova_compute[250018]: 2026-01-20 14:50:24.354 250022 INFO nova.compute.claims [None req-bfd27c3c-c5b9-4124-ba87-631778c9093b d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] Claim successful on node compute-0.ctlplane.example.com
Jan 20 14:50:24 compute-0 nova_compute[250018]: 2026-01-20 14:50:24.473 250022 DEBUG oslo_concurrency.processutils [None req-bfd27c3c-c5b9-4124-ba87-631778c9093b d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:50:24 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e249 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:50:24 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:50:24 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:50:24 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:50:24.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:50:24 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:50:24 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:50:24 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:50:24.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:50:24 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:50:24 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/735507930' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:50:24 compute-0 nova_compute[250018]: 2026-01-20 14:50:24.932 250022 DEBUG oslo_concurrency.processutils [None req-bfd27c3c-c5b9-4124-ba87-631778c9093b d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:50:24 compute-0 ceph-mon[74360]: pgmap v1878: 321 pgs: 321 active+clean; 184 MiB data, 856 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 955 KiB/s wr, 168 op/s
Jan 20 14:50:24 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/735507930' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:50:24 compute-0 nova_compute[250018]: 2026-01-20 14:50:24.942 250022 DEBUG nova.compute.provider_tree [None req-bfd27c3c-c5b9-4124-ba87-631778c9093b d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:50:24 compute-0 nova_compute[250018]: 2026-01-20 14:50:24.990 250022 DEBUG nova.scheduler.client.report [None req-bfd27c3c-c5b9-4124-ba87-631778c9093b d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:50:25 compute-0 nova_compute[250018]: 2026-01-20 14:50:25.019 250022 DEBUG oslo_concurrency.lockutils [None req-bfd27c3c-c5b9-4124-ba87-631778c9093b d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.676s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:50:25 compute-0 nova_compute[250018]: 2026-01-20 14:50:25.019 250022 DEBUG nova.compute.manager [None req-bfd27c3c-c5b9-4124-ba87-631778c9093b d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 14:50:25 compute-0 nova_compute[250018]: 2026-01-20 14:50:25.076 250022 DEBUG nova.compute.manager [None req-bfd27c3c-c5b9-4124-ba87-631778c9093b d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 20 14:50:25 compute-0 nova_compute[250018]: 2026-01-20 14:50:25.077 250022 DEBUG nova.network.neutron [None req-bfd27c3c-c5b9-4124-ba87-631778c9093b d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 14:50:25 compute-0 nova_compute[250018]: 2026-01-20 14:50:25.101 250022 INFO nova.virt.libvirt.driver [None req-bfd27c3c-c5b9-4124-ba87-631778c9093b d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 14:50:25 compute-0 nova_compute[250018]: 2026-01-20 14:50:25.119 250022 DEBUG nova.compute.manager [None req-bfd27c3c-c5b9-4124-ba87-631778c9093b d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 14:50:25 compute-0 nova_compute[250018]: 2026-01-20 14:50:25.237 250022 DEBUG nova.compute.manager [None req-bfd27c3c-c5b9-4124-ba87-631778c9093b d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 14:50:25 compute-0 nova_compute[250018]: 2026-01-20 14:50:25.238 250022 DEBUG nova.virt.libvirt.driver [None req-bfd27c3c-c5b9-4124-ba87-631778c9093b d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 14:50:25 compute-0 nova_compute[250018]: 2026-01-20 14:50:25.239 250022 INFO nova.virt.libvirt.driver [None req-bfd27c3c-c5b9-4124-ba87-631778c9093b d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] Creating image(s)
Jan 20 14:50:25 compute-0 nova_compute[250018]: 2026-01-20 14:50:25.272 250022 DEBUG nova.storage.rbd_utils [None req-bfd27c3c-c5b9-4124-ba87-631778c9093b d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] rbd image c3b4d4c6-c42f-4abc-9c01-89ec3e10c677_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:50:25 compute-0 nova_compute[250018]: 2026-01-20 14:50:25.311 250022 DEBUG nova.storage.rbd_utils [None req-bfd27c3c-c5b9-4124-ba87-631778c9093b d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] rbd image c3b4d4c6-c42f-4abc-9c01-89ec3e10c677_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:50:25 compute-0 nova_compute[250018]: 2026-01-20 14:50:25.345 250022 DEBUG nova.storage.rbd_utils [None req-bfd27c3c-c5b9-4124-ba87-631778c9093b d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] rbd image c3b4d4c6-c42f-4abc-9c01-89ec3e10c677_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:50:25 compute-0 nova_compute[250018]: 2026-01-20 14:50:25.349 250022 DEBUG oslo_concurrency.processutils [None req-bfd27c3c-c5b9-4124-ba87-631778c9093b d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:50:25 compute-0 nova_compute[250018]: 2026-01-20 14:50:25.381 250022 DEBUG nova.policy [None req-bfd27c3c-c5b9-4124-ba87-631778c9093b d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd85d286ce6224326a0f4a15a06afbfea', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0a29915e0dd2403fbd7b7e847696b00a', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 20 14:50:25 compute-0 nova_compute[250018]: 2026-01-20 14:50:25.416 250022 DEBUG oslo_concurrency.processutils [None req-bfd27c3c-c5b9-4124-ba87-631778c9093b d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:50:25 compute-0 nova_compute[250018]: 2026-01-20 14:50:25.417 250022 DEBUG oslo_concurrency.lockutils [None req-bfd27c3c-c5b9-4124-ba87-631778c9093b d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Acquiring lock "82d5c1918fd7c974214c7a48c1793a7a82560462" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:50:25 compute-0 nova_compute[250018]: 2026-01-20 14:50:25.418 250022 DEBUG oslo_concurrency.lockutils [None req-bfd27c3c-c5b9-4124-ba87-631778c9093b d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:50:25 compute-0 nova_compute[250018]: 2026-01-20 14:50:25.419 250022 DEBUG oslo_concurrency.lockutils [None req-bfd27c3c-c5b9-4124-ba87-631778c9093b d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:50:25 compute-0 nova_compute[250018]: 2026-01-20 14:50:25.452 250022 DEBUG nova.storage.rbd_utils [None req-bfd27c3c-c5b9-4124-ba87-631778c9093b d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] rbd image c3b4d4c6-c42f-4abc-9c01-89ec3e10c677_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:50:25 compute-0 nova_compute[250018]: 2026-01-20 14:50:25.456 250022 DEBUG oslo_concurrency.processutils [None req-bfd27c3c-c5b9-4124-ba87-631778c9093b d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 c3b4d4c6-c42f-4abc-9c01-89ec3e10c677_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:50:25 compute-0 nova_compute[250018]: 2026-01-20 14:50:25.743 250022 DEBUG oslo_concurrency.processutils [None req-bfd27c3c-c5b9-4124-ba87-631778c9093b d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 c3b4d4c6-c42f-4abc-9c01-89ec3e10c677_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.287s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:50:25 compute-0 nova_compute[250018]: 2026-01-20 14:50:25.822 250022 DEBUG nova.storage.rbd_utils [None req-bfd27c3c-c5b9-4124-ba87-631778c9093b d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] resizing rbd image c3b4d4c6-c42f-4abc-9c01-89ec3e10c677_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 20 14:50:25 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1879: 321 pgs: 321 active+clean; 213 MiB data, 869 MiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 2.1 MiB/s wr, 231 op/s
Jan 20 14:50:25 compute-0 nova_compute[250018]: 2026-01-20 14:50:25.932 250022 DEBUG nova.objects.instance [None req-bfd27c3c-c5b9-4124-ba87-631778c9093b d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Lazy-loading 'migration_context' on Instance uuid c3b4d4c6-c42f-4abc-9c01-89ec3e10c677 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:50:25 compute-0 nova_compute[250018]: 2026-01-20 14:50:25.970 250022 DEBUG nova.virt.libvirt.driver [None req-bfd27c3c-c5b9-4124-ba87-631778c9093b d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 14:50:25 compute-0 nova_compute[250018]: 2026-01-20 14:50:25.970 250022 DEBUG nova.virt.libvirt.driver [None req-bfd27c3c-c5b9-4124-ba87-631778c9093b d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] Ensure instance console log exists: /var/lib/nova/instances/c3b4d4c6-c42f-4abc-9c01-89ec3e10c677/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 14:50:25 compute-0 nova_compute[250018]: 2026-01-20 14:50:25.971 250022 DEBUG oslo_concurrency.lockutils [None req-bfd27c3c-c5b9-4124-ba87-631778c9093b d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:50:25 compute-0 nova_compute[250018]: 2026-01-20 14:50:25.971 250022 DEBUG oslo_concurrency.lockutils [None req-bfd27c3c-c5b9-4124-ba87-631778c9093b d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:50:25 compute-0 nova_compute[250018]: 2026-01-20 14:50:25.972 250022 DEBUG oslo_concurrency.lockutils [None req-bfd27c3c-c5b9-4124-ba87-631778c9093b d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:50:26 compute-0 nova_compute[250018]: 2026-01-20 14:50:26.369 250022 DEBUG nova.network.neutron [None req-bfd27c3c-c5b9-4124-ba87-631778c9093b d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] Successfully created port: 7869c4f4-4542-4bf0-aebd-eeb2d3ce94f9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 20 14:50:26 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:50:26 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:50:26 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:50:26.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:50:26 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:50:26 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:50:26 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:50:26.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:50:26 compute-0 ceph-mon[74360]: pgmap v1879: 321 pgs: 321 active+clean; 213 MiB data, 869 MiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 2.1 MiB/s wr, 231 op/s
Jan 20 14:50:26 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/2577488927' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:50:26 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/3209917238' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:50:27 compute-0 nova_compute[250018]: 2026-01-20 14:50:27.238 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:50:27 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1880: 321 pgs: 321 active+clean; 209 MiB data, 879 MiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 3.1 MiB/s wr, 259 op/s
Jan 20 14:50:27 compute-0 nova_compute[250018]: 2026-01-20 14:50:27.909 250022 DEBUG nova.network.neutron [None req-bfd27c3c-c5b9-4124-ba87-631778c9093b d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] Successfully updated port: 7869c4f4-4542-4bf0-aebd-eeb2d3ce94f9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 20 14:50:27 compute-0 nova_compute[250018]: 2026-01-20 14:50:27.925 250022 DEBUG oslo_concurrency.lockutils [None req-bfd27c3c-c5b9-4124-ba87-631778c9093b d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Acquiring lock "refresh_cache-c3b4d4c6-c42f-4abc-9c01-89ec3e10c677" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:50:27 compute-0 nova_compute[250018]: 2026-01-20 14:50:27.925 250022 DEBUG oslo_concurrency.lockutils [None req-bfd27c3c-c5b9-4124-ba87-631778c9093b d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Acquired lock "refresh_cache-c3b4d4c6-c42f-4abc-9c01-89ec3e10c677" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:50:27 compute-0 nova_compute[250018]: 2026-01-20 14:50:27.925 250022 DEBUG nova.network.neutron [None req-bfd27c3c-c5b9-4124-ba87-631778c9093b d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:50:28 compute-0 nova_compute[250018]: 2026-01-20 14:50:28.060 250022 DEBUG nova.compute.manager [req-593fc3cd-c376-4216-9b88-eba04a07a187 req-120b588d-1df2-491a-a069-0093f14c2aa1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] Received event network-changed-7869c4f4-4542-4bf0-aebd-eeb2d3ce94f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:50:28 compute-0 nova_compute[250018]: 2026-01-20 14:50:28.060 250022 DEBUG nova.compute.manager [req-593fc3cd-c376-4216-9b88-eba04a07a187 req-120b588d-1df2-491a-a069-0093f14c2aa1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] Refreshing instance network info cache due to event network-changed-7869c4f4-4542-4bf0-aebd-eeb2d3ce94f9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:50:28 compute-0 nova_compute[250018]: 2026-01-20 14:50:28.060 250022 DEBUG oslo_concurrency.lockutils [req-593fc3cd-c376-4216-9b88-eba04a07a187 req-120b588d-1df2-491a-a069-0093f14c2aa1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-c3b4d4c6-c42f-4abc-9c01-89ec3e10c677" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:50:28 compute-0 nova_compute[250018]: 2026-01-20 14:50:28.175 250022 DEBUG nova.network.neutron [None req-bfd27c3c-c5b9-4124-ba87-631778c9093b d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 14:50:28 compute-0 nova_compute[250018]: 2026-01-20 14:50:28.178 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:50:28 compute-0 sshd-session[313628]: Invalid user test from 157.245.78.139 port 34180
Jan 20 14:50:28 compute-0 sshd-session[313628]: Connection closed by invalid user test 157.245.78.139 port 34180 [preauth]
Jan 20 14:50:28 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:50:28 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:50:28 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:50:28.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:50:28 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:50:28 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:50:28 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:50:28.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:50:28 compute-0 ceph-mon[74360]: pgmap v1880: 321 pgs: 321 active+clean; 209 MiB data, 879 MiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 3.1 MiB/s wr, 259 op/s
Jan 20 14:50:29 compute-0 nova_compute[250018]: 2026-01-20 14:50:29.138 250022 DEBUG nova.network.neutron [None req-bfd27c3c-c5b9-4124-ba87-631778c9093b d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] Updating instance_info_cache with network_info: [{"id": "7869c4f4-4542-4bf0-aebd-eeb2d3ce94f9", "address": "fa:16:3e:7f:92:68", "network": {"id": "79184781-1f23-4584-87de-08e262242488", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-165460946-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a29915e0dd2403fbd7b7e847696b00a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7869c4f4-45", "ovs_interfaceid": "7869c4f4-4542-4bf0-aebd-eeb2d3ce94f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:50:29 compute-0 nova_compute[250018]: 2026-01-20 14:50:29.303 250022 DEBUG oslo_concurrency.lockutils [None req-bfd27c3c-c5b9-4124-ba87-631778c9093b d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Releasing lock "refresh_cache-c3b4d4c6-c42f-4abc-9c01-89ec3e10c677" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:50:29 compute-0 nova_compute[250018]: 2026-01-20 14:50:29.303 250022 DEBUG nova.compute.manager [None req-bfd27c3c-c5b9-4124-ba87-631778c9093b d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] Instance network_info: |[{"id": "7869c4f4-4542-4bf0-aebd-eeb2d3ce94f9", "address": "fa:16:3e:7f:92:68", "network": {"id": "79184781-1f23-4584-87de-08e262242488", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-165460946-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a29915e0dd2403fbd7b7e847696b00a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7869c4f4-45", "ovs_interfaceid": "7869c4f4-4542-4bf0-aebd-eeb2d3ce94f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 20 14:50:29 compute-0 nova_compute[250018]: 2026-01-20 14:50:29.307 250022 DEBUG oslo_concurrency.lockutils [req-593fc3cd-c376-4216-9b88-eba04a07a187 req-120b588d-1df2-491a-a069-0093f14c2aa1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-c3b4d4c6-c42f-4abc-9c01-89ec3e10c677" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:50:29 compute-0 nova_compute[250018]: 2026-01-20 14:50:29.308 250022 DEBUG nova.network.neutron [req-593fc3cd-c376-4216-9b88-eba04a07a187 req-120b588d-1df2-491a-a069-0093f14c2aa1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] Refreshing network info cache for port 7869c4f4-4542-4bf0-aebd-eeb2d3ce94f9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:50:29 compute-0 nova_compute[250018]: 2026-01-20 14:50:29.313 250022 DEBUG nova.virt.libvirt.driver [None req-bfd27c3c-c5b9-4124-ba87-631778c9093b d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] Start _get_guest_xml network_info=[{"id": "7869c4f4-4542-4bf0-aebd-eeb2d3ce94f9", "address": "fa:16:3e:7f:92:68", "network": {"id": "79184781-1f23-4584-87de-08e262242488", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-165460946-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a29915e0dd2403fbd7b7e847696b00a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7869c4f4-45", "ovs_interfaceid": "7869c4f4-4542-4bf0-aebd-eeb2d3ce94f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encrypted': False, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'guest_format': None, 'size': 0, 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 14:50:29 compute-0 nova_compute[250018]: 2026-01-20 14:50:29.319 250022 WARNING nova.virt.libvirt.driver [None req-bfd27c3c-c5b9-4124-ba87-631778c9093b d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:50:29 compute-0 nova_compute[250018]: 2026-01-20 14:50:29.325 250022 DEBUG nova.virt.libvirt.host [None req-bfd27c3c-c5b9-4124-ba87-631778c9093b d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 14:50:29 compute-0 nova_compute[250018]: 2026-01-20 14:50:29.326 250022 DEBUG nova.virt.libvirt.host [None req-bfd27c3c-c5b9-4124-ba87-631778c9093b d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 14:50:29 compute-0 nova_compute[250018]: 2026-01-20 14:50:29.329 250022 DEBUG nova.virt.libvirt.host [None req-bfd27c3c-c5b9-4124-ba87-631778c9093b d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 14:50:29 compute-0 nova_compute[250018]: 2026-01-20 14:50:29.330 250022 DEBUG nova.virt.libvirt.host [None req-bfd27c3c-c5b9-4124-ba87-631778c9093b d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 14:50:29 compute-0 nova_compute[250018]: 2026-01-20 14:50:29.332 250022 DEBUG nova.virt.libvirt.driver [None req-bfd27c3c-c5b9-4124-ba87-631778c9093b d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 14:50:29 compute-0 nova_compute[250018]: 2026-01-20 14:50:29.332 250022 DEBUG nova.virt.hardware [None req-bfd27c3c-c5b9-4124-ba87-631778c9093b d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 14:50:29 compute-0 nova_compute[250018]: 2026-01-20 14:50:29.333 250022 DEBUG nova.virt.hardware [None req-bfd27c3c-c5b9-4124-ba87-631778c9093b d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 14:50:29 compute-0 nova_compute[250018]: 2026-01-20 14:50:29.334 250022 DEBUG nova.virt.hardware [None req-bfd27c3c-c5b9-4124-ba87-631778c9093b d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 14:50:29 compute-0 nova_compute[250018]: 2026-01-20 14:50:29.334 250022 DEBUG nova.virt.hardware [None req-bfd27c3c-c5b9-4124-ba87-631778c9093b d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 14:50:29 compute-0 nova_compute[250018]: 2026-01-20 14:50:29.334 250022 DEBUG nova.virt.hardware [None req-bfd27c3c-c5b9-4124-ba87-631778c9093b d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 14:50:29 compute-0 nova_compute[250018]: 2026-01-20 14:50:29.335 250022 DEBUG nova.virt.hardware [None req-bfd27c3c-c5b9-4124-ba87-631778c9093b d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 14:50:29 compute-0 nova_compute[250018]: 2026-01-20 14:50:29.335 250022 DEBUG nova.virt.hardware [None req-bfd27c3c-c5b9-4124-ba87-631778c9093b d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 14:50:29 compute-0 nova_compute[250018]: 2026-01-20 14:50:29.336 250022 DEBUG nova.virt.hardware [None req-bfd27c3c-c5b9-4124-ba87-631778c9093b d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 14:50:29 compute-0 nova_compute[250018]: 2026-01-20 14:50:29.336 250022 DEBUG nova.virt.hardware [None req-bfd27c3c-c5b9-4124-ba87-631778c9093b d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 14:50:29 compute-0 nova_compute[250018]: 2026-01-20 14:50:29.337 250022 DEBUG nova.virt.hardware [None req-bfd27c3c-c5b9-4124-ba87-631778c9093b d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 14:50:29 compute-0 nova_compute[250018]: 2026-01-20 14:50:29.337 250022 DEBUG nova.virt.hardware [None req-bfd27c3c-c5b9-4124-ba87-631778c9093b d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 14:50:29 compute-0 nova_compute[250018]: 2026-01-20 14:50:29.342 250022 DEBUG oslo_concurrency.processutils [None req-bfd27c3c-c5b9-4124-ba87-631778c9093b d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:50:29 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e249 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:50:29 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e249 do_prune osdmap full prune enabled
Jan 20 14:50:29 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e250 e250: 3 total, 3 up, 3 in
Jan 20 14:50:29 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e250: 3 total, 3 up, 3 in
Jan 20 14:50:29 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:50:29 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3768006211' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:50:29 compute-0 nova_compute[250018]: 2026-01-20 14:50:29.814 250022 DEBUG oslo_concurrency.processutils [None req-bfd27c3c-c5b9-4124-ba87-631778c9093b d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:50:29 compute-0 nova_compute[250018]: 2026-01-20 14:50:29.845 250022 DEBUG nova.storage.rbd_utils [None req-bfd27c3c-c5b9-4124-ba87-631778c9093b d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] rbd image c3b4d4c6-c42f-4abc-9c01-89ec3e10c677_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:50:29 compute-0 nova_compute[250018]: 2026-01-20 14:50:29.850 250022 DEBUG oslo_concurrency.processutils [None req-bfd27c3c-c5b9-4124-ba87-631778c9093b d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:50:29 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1882: 321 pgs: 321 active+clean; 203 MiB data, 876 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 3.9 MiB/s wr, 249 op/s
Jan 20 14:50:29 compute-0 sudo[313653]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:50:29 compute-0 sudo[313653]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:50:29 compute-0 sudo[313653]: pam_unix(sudo:session): session closed for user root
Jan 20 14:50:29 compute-0 sudo[313697]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:50:29 compute-0 sudo[313697]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:50:29 compute-0 sudo[313697]: pam_unix(sudo:session): session closed for user root
Jan 20 14:50:30 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:50:30 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3027020217' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:50:30 compute-0 nova_compute[250018]: 2026-01-20 14:50:30.299 250022 DEBUG oslo_concurrency.processutils [None req-bfd27c3c-c5b9-4124-ba87-631778c9093b d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:50:30 compute-0 nova_compute[250018]: 2026-01-20 14:50:30.300 250022 DEBUG nova.virt.libvirt.vif [None req-bfd27c3c-c5b9-4124-ba87-631778c9093b d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:50:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerStableDeviceRescueTest-server-178332738',display_name='tempest-ServerStableDeviceRescueTest-server-178332738',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstabledevicerescuetest-server-178332738',id=110,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0a29915e0dd2403fbd7b7e847696b00a',ramdisk_id='',reservation_id='r-6cagqhwr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerStableDeviceRescueTest-129078052',owner_user_name='tempest-ServerStableDeviceRescueTest-129078052-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:50:25Z,user_data=None,user_id='d85d286ce6224326a0f4a15a06afbfea',uuid=c3b4d4c6-c42f-4abc-9c01-89ec3e10c677,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7869c4f4-4542-4bf0-aebd-eeb2d3ce94f9", "address": "fa:16:3e:7f:92:68", "network": {"id": "79184781-1f23-4584-87de-08e262242488", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-165460946-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a29915e0dd2403fbd7b7e847696b00a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7869c4f4-45", "ovs_interfaceid": "7869c4f4-4542-4bf0-aebd-eeb2d3ce94f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 14:50:30 compute-0 nova_compute[250018]: 2026-01-20 14:50:30.301 250022 DEBUG nova.network.os_vif_util [None req-bfd27c3c-c5b9-4124-ba87-631778c9093b d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Converting VIF {"id": "7869c4f4-4542-4bf0-aebd-eeb2d3ce94f9", "address": "fa:16:3e:7f:92:68", "network": {"id": "79184781-1f23-4584-87de-08e262242488", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-165460946-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a29915e0dd2403fbd7b7e847696b00a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7869c4f4-45", "ovs_interfaceid": "7869c4f4-4542-4bf0-aebd-eeb2d3ce94f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:50:30 compute-0 nova_compute[250018]: 2026-01-20 14:50:30.302 250022 DEBUG nova.network.os_vif_util [None req-bfd27c3c-c5b9-4124-ba87-631778c9093b d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7f:92:68,bridge_name='br-int',has_traffic_filtering=True,id=7869c4f4-4542-4bf0-aebd-eeb2d3ce94f9,network=Network(79184781-1f23-4584-87de-08e262242488),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7869c4f4-45') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:50:30 compute-0 nova_compute[250018]: 2026-01-20 14:50:30.303 250022 DEBUG nova.objects.instance [None req-bfd27c3c-c5b9-4124-ba87-631778c9093b d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Lazy-loading 'pci_devices' on Instance uuid c3b4d4c6-c42f-4abc-9c01-89ec3e10c677 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:50:30 compute-0 nova_compute[250018]: 2026-01-20 14:50:30.546 250022 DEBUG nova.virt.libvirt.driver [None req-bfd27c3c-c5b9-4124-ba87-631778c9093b d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] End _get_guest_xml xml=<domain type="kvm">
Jan 20 14:50:30 compute-0 nova_compute[250018]:   <uuid>c3b4d4c6-c42f-4abc-9c01-89ec3e10c677</uuid>
Jan 20 14:50:30 compute-0 nova_compute[250018]:   <name>instance-0000006e</name>
Jan 20 14:50:30 compute-0 nova_compute[250018]:   <memory>131072</memory>
Jan 20 14:50:30 compute-0 nova_compute[250018]:   <vcpu>1</vcpu>
Jan 20 14:50:30 compute-0 nova_compute[250018]:   <metadata>
Jan 20 14:50:30 compute-0 nova_compute[250018]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 14:50:30 compute-0 nova_compute[250018]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 14:50:30 compute-0 nova_compute[250018]:       <nova:name>tempest-ServerStableDeviceRescueTest-server-178332738</nova:name>
Jan 20 14:50:30 compute-0 nova_compute[250018]:       <nova:creationTime>2026-01-20 14:50:29</nova:creationTime>
Jan 20 14:50:30 compute-0 nova_compute[250018]:       <nova:flavor name="m1.nano">
Jan 20 14:50:30 compute-0 nova_compute[250018]:         <nova:memory>128</nova:memory>
Jan 20 14:50:30 compute-0 nova_compute[250018]:         <nova:disk>1</nova:disk>
Jan 20 14:50:30 compute-0 nova_compute[250018]:         <nova:swap>0</nova:swap>
Jan 20 14:50:30 compute-0 nova_compute[250018]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 14:50:30 compute-0 nova_compute[250018]:         <nova:vcpus>1</nova:vcpus>
Jan 20 14:50:30 compute-0 nova_compute[250018]:       </nova:flavor>
Jan 20 14:50:30 compute-0 nova_compute[250018]:       <nova:owner>
Jan 20 14:50:30 compute-0 nova_compute[250018]:         <nova:user uuid="d85d286ce6224326a0f4a15a06afbfea">tempest-ServerStableDeviceRescueTest-129078052-project-member</nova:user>
Jan 20 14:50:30 compute-0 nova_compute[250018]:         <nova:project uuid="0a29915e0dd2403fbd7b7e847696b00a">tempest-ServerStableDeviceRescueTest-129078052</nova:project>
Jan 20 14:50:30 compute-0 nova_compute[250018]:       </nova:owner>
Jan 20 14:50:30 compute-0 nova_compute[250018]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 14:50:30 compute-0 nova_compute[250018]:       <nova:ports>
Jan 20 14:50:30 compute-0 nova_compute[250018]:         <nova:port uuid="7869c4f4-4542-4bf0-aebd-eeb2d3ce94f9">
Jan 20 14:50:30 compute-0 nova_compute[250018]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Jan 20 14:50:30 compute-0 nova_compute[250018]:         </nova:port>
Jan 20 14:50:30 compute-0 nova_compute[250018]:       </nova:ports>
Jan 20 14:50:30 compute-0 nova_compute[250018]:     </nova:instance>
Jan 20 14:50:30 compute-0 nova_compute[250018]:   </metadata>
Jan 20 14:50:30 compute-0 nova_compute[250018]:   <sysinfo type="smbios">
Jan 20 14:50:30 compute-0 nova_compute[250018]:     <system>
Jan 20 14:50:30 compute-0 nova_compute[250018]:       <entry name="manufacturer">RDO</entry>
Jan 20 14:50:30 compute-0 nova_compute[250018]:       <entry name="product">OpenStack Compute</entry>
Jan 20 14:50:30 compute-0 nova_compute[250018]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 14:50:30 compute-0 nova_compute[250018]:       <entry name="serial">c3b4d4c6-c42f-4abc-9c01-89ec3e10c677</entry>
Jan 20 14:50:30 compute-0 nova_compute[250018]:       <entry name="uuid">c3b4d4c6-c42f-4abc-9c01-89ec3e10c677</entry>
Jan 20 14:50:30 compute-0 nova_compute[250018]:       <entry name="family">Virtual Machine</entry>
Jan 20 14:50:30 compute-0 nova_compute[250018]:     </system>
Jan 20 14:50:30 compute-0 nova_compute[250018]:   </sysinfo>
Jan 20 14:50:30 compute-0 nova_compute[250018]:   <os>
Jan 20 14:50:30 compute-0 nova_compute[250018]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 14:50:30 compute-0 nova_compute[250018]:     <boot dev="hd"/>
Jan 20 14:50:30 compute-0 nova_compute[250018]:     <smbios mode="sysinfo"/>
Jan 20 14:50:30 compute-0 nova_compute[250018]:   </os>
Jan 20 14:50:30 compute-0 nova_compute[250018]:   <features>
Jan 20 14:50:30 compute-0 nova_compute[250018]:     <acpi/>
Jan 20 14:50:30 compute-0 nova_compute[250018]:     <apic/>
Jan 20 14:50:30 compute-0 nova_compute[250018]:     <vmcoreinfo/>
Jan 20 14:50:30 compute-0 nova_compute[250018]:   </features>
Jan 20 14:50:30 compute-0 nova_compute[250018]:   <clock offset="utc">
Jan 20 14:50:30 compute-0 nova_compute[250018]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 14:50:30 compute-0 nova_compute[250018]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 14:50:30 compute-0 nova_compute[250018]:     <timer name="hpet" present="no"/>
Jan 20 14:50:30 compute-0 nova_compute[250018]:   </clock>
Jan 20 14:50:30 compute-0 nova_compute[250018]:   <cpu mode="custom" match="exact">
Jan 20 14:50:30 compute-0 nova_compute[250018]:     <model>Nehalem</model>
Jan 20 14:50:30 compute-0 nova_compute[250018]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 14:50:30 compute-0 nova_compute[250018]:   </cpu>
Jan 20 14:50:30 compute-0 nova_compute[250018]:   <devices>
Jan 20 14:50:30 compute-0 nova_compute[250018]:     <disk type="network" device="disk">
Jan 20 14:50:30 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 14:50:30 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/c3b4d4c6-c42f-4abc-9c01-89ec3e10c677_disk">
Jan 20 14:50:30 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:50:30 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:50:30 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:50:30 compute-0 nova_compute[250018]:       </source>
Jan 20 14:50:30 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 14:50:30 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:50:30 compute-0 nova_compute[250018]:       </auth>
Jan 20 14:50:30 compute-0 nova_compute[250018]:       <target dev="vda" bus="virtio"/>
Jan 20 14:50:30 compute-0 nova_compute[250018]:     </disk>
Jan 20 14:50:30 compute-0 nova_compute[250018]:     <disk type="network" device="cdrom">
Jan 20 14:50:30 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 14:50:30 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/c3b4d4c6-c42f-4abc-9c01-89ec3e10c677_disk.config">
Jan 20 14:50:30 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:50:30 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:50:30 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:50:30 compute-0 nova_compute[250018]:       </source>
Jan 20 14:50:30 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 14:50:30 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:50:30 compute-0 nova_compute[250018]:       </auth>
Jan 20 14:50:30 compute-0 nova_compute[250018]:       <target dev="sda" bus="sata"/>
Jan 20 14:50:30 compute-0 nova_compute[250018]:     </disk>
Jan 20 14:50:30 compute-0 nova_compute[250018]:     <interface type="ethernet">
Jan 20 14:50:30 compute-0 nova_compute[250018]:       <mac address="fa:16:3e:7f:92:68"/>
Jan 20 14:50:30 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 14:50:30 compute-0 nova_compute[250018]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 14:50:30 compute-0 nova_compute[250018]:       <mtu size="1442"/>
Jan 20 14:50:30 compute-0 nova_compute[250018]:       <target dev="tap7869c4f4-45"/>
Jan 20 14:50:30 compute-0 nova_compute[250018]:     </interface>
Jan 20 14:50:30 compute-0 nova_compute[250018]:     <serial type="pty">
Jan 20 14:50:30 compute-0 nova_compute[250018]:       <log file="/var/lib/nova/instances/c3b4d4c6-c42f-4abc-9c01-89ec3e10c677/console.log" append="off"/>
Jan 20 14:50:30 compute-0 nova_compute[250018]:     </serial>
Jan 20 14:50:30 compute-0 nova_compute[250018]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 14:50:30 compute-0 nova_compute[250018]:     <video>
Jan 20 14:50:30 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 14:50:30 compute-0 nova_compute[250018]:     </video>
Jan 20 14:50:30 compute-0 nova_compute[250018]:     <input type="tablet" bus="usb"/>
Jan 20 14:50:30 compute-0 nova_compute[250018]:     <rng model="virtio">
Jan 20 14:50:30 compute-0 nova_compute[250018]:       <backend model="random">/dev/urandom</backend>
Jan 20 14:50:30 compute-0 nova_compute[250018]:     </rng>
Jan 20 14:50:30 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root"/>
Jan 20 14:50:30 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:50:30 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:50:30 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:50:30 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:50:30 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:50:30 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:50:30 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:50:30 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:50:30 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:50:30 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:50:30 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:50:30 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:50:30 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:50:30 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:50:30 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:50:30 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:50:30 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:50:30 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:50:30 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:50:30 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:50:30 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:50:30 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:50:30 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:50:30 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:50:30 compute-0 nova_compute[250018]:     <controller type="usb" index="0"/>
Jan 20 14:50:30 compute-0 nova_compute[250018]:     <memballoon model="virtio">
Jan 20 14:50:30 compute-0 nova_compute[250018]:       <stats period="10"/>
Jan 20 14:50:30 compute-0 nova_compute[250018]:     </memballoon>
Jan 20 14:50:30 compute-0 nova_compute[250018]:   </devices>
Jan 20 14:50:30 compute-0 nova_compute[250018]: </domain>
Jan 20 14:50:30 compute-0 nova_compute[250018]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 14:50:30 compute-0 nova_compute[250018]: 2026-01-20 14:50:30.548 250022 DEBUG nova.compute.manager [None req-bfd27c3c-c5b9-4124-ba87-631778c9093b d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] Preparing to wait for external event network-vif-plugged-7869c4f4-4542-4bf0-aebd-eeb2d3ce94f9 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 20 14:50:30 compute-0 nova_compute[250018]: 2026-01-20 14:50:30.548 250022 DEBUG oslo_concurrency.lockutils [None req-bfd27c3c-c5b9-4124-ba87-631778c9093b d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Acquiring lock "c3b4d4c6-c42f-4abc-9c01-89ec3e10c677-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:50:30 compute-0 nova_compute[250018]: 2026-01-20 14:50:30.548 250022 DEBUG oslo_concurrency.lockutils [None req-bfd27c3c-c5b9-4124-ba87-631778c9093b d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Lock "c3b4d4c6-c42f-4abc-9c01-89ec3e10c677-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:50:30 compute-0 nova_compute[250018]: 2026-01-20 14:50:30.548 250022 DEBUG oslo_concurrency.lockutils [None req-bfd27c3c-c5b9-4124-ba87-631778c9093b d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Lock "c3b4d4c6-c42f-4abc-9c01-89ec3e10c677-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:50:30 compute-0 nova_compute[250018]: 2026-01-20 14:50:30.549 250022 DEBUG nova.virt.libvirt.vif [None req-bfd27c3c-c5b9-4124-ba87-631778c9093b d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:50:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerStableDeviceRescueTest-server-178332738',display_name='tempest-ServerStableDeviceRescueTest-server-178332738',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstabledevicerescuetest-server-178332738',id=110,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0a29915e0dd2403fbd7b7e847696b00a',ramdisk_id='',reservation_id='r-6cagqhwr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerStableDeviceRescueTest-129078052',owner_user_name='tempest-ServerStableDeviceRescueTest-129078052-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:50:25Z,user_data=None,user_id='d85d286ce6224326a0f4a15a06afbfea',uuid=c3b4d4c6-c42f-4abc-9c01-89ec3e10c677,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7869c4f4-4542-4bf0-aebd-eeb2d3ce94f9", "address": "fa:16:3e:7f:92:68", "network": {"id": "79184781-1f23-4584-87de-08e262242488", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-165460946-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a29915e0dd2403fbd7b7e847696b00a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7869c4f4-45", "ovs_interfaceid": "7869c4f4-4542-4bf0-aebd-eeb2d3ce94f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 14:50:30 compute-0 nova_compute[250018]: 2026-01-20 14:50:30.549 250022 DEBUG nova.network.os_vif_util [None req-bfd27c3c-c5b9-4124-ba87-631778c9093b d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Converting VIF {"id": "7869c4f4-4542-4bf0-aebd-eeb2d3ce94f9", "address": "fa:16:3e:7f:92:68", "network": {"id": "79184781-1f23-4584-87de-08e262242488", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-165460946-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a29915e0dd2403fbd7b7e847696b00a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7869c4f4-45", "ovs_interfaceid": "7869c4f4-4542-4bf0-aebd-eeb2d3ce94f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:50:30 compute-0 nova_compute[250018]: 2026-01-20 14:50:30.550 250022 DEBUG nova.network.os_vif_util [None req-bfd27c3c-c5b9-4124-ba87-631778c9093b d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7f:92:68,bridge_name='br-int',has_traffic_filtering=True,id=7869c4f4-4542-4bf0-aebd-eeb2d3ce94f9,network=Network(79184781-1f23-4584-87de-08e262242488),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7869c4f4-45') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:50:30 compute-0 nova_compute[250018]: 2026-01-20 14:50:30.550 250022 DEBUG os_vif [None req-bfd27c3c-c5b9-4124-ba87-631778c9093b d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7f:92:68,bridge_name='br-int',has_traffic_filtering=True,id=7869c4f4-4542-4bf0-aebd-eeb2d3ce94f9,network=Network(79184781-1f23-4584-87de-08e262242488),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7869c4f4-45') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 14:50:30 compute-0 nova_compute[250018]: 2026-01-20 14:50:30.551 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:50:30 compute-0 nova_compute[250018]: 2026-01-20 14:50:30.551 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:50:30 compute-0 nova_compute[250018]: 2026-01-20 14:50:30.551 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:50:30 compute-0 nova_compute[250018]: 2026-01-20 14:50:30.554 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:50:30 compute-0 nova_compute[250018]: 2026-01-20 14:50:30.554 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7869c4f4-45, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:50:30 compute-0 nova_compute[250018]: 2026-01-20 14:50:30.555 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7869c4f4-45, col_values=(('external_ids', {'iface-id': '7869c4f4-4542-4bf0-aebd-eeb2d3ce94f9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7f:92:68', 'vm-uuid': 'c3b4d4c6-c42f-4abc-9c01-89ec3e10c677'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:50:30 compute-0 nova_compute[250018]: 2026-01-20 14:50:30.556 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:50:30 compute-0 NetworkManager[48960]: <info>  [1768920630.5581] manager: (tap7869c4f4-45): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/192)
Jan 20 14:50:30 compute-0 nova_compute[250018]: 2026-01-20 14:50:30.559 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:50:30 compute-0 nova_compute[250018]: 2026-01-20 14:50:30.563 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:50:30 compute-0 nova_compute[250018]: 2026-01-20 14:50:30.564 250022 INFO os_vif [None req-bfd27c3c-c5b9-4124-ba87-631778c9093b d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7f:92:68,bridge_name='br-int',has_traffic_filtering=True,id=7869c4f4-4542-4bf0-aebd-eeb2d3ce94f9,network=Network(79184781-1f23-4584-87de-08e262242488),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7869c4f4-45')
Jan 20 14:50:30 compute-0 ceph-mon[74360]: osdmap e250: 3 total, 3 up, 3 in
Jan 20 14:50:30 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3768006211' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:50:30 compute-0 ceph-mon[74360]: pgmap v1882: 321 pgs: 321 active+clean; 203 MiB data, 876 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 3.9 MiB/s wr, 249 op/s
Jan 20 14:50:30 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3027020217' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:50:30 compute-0 nova_compute[250018]: 2026-01-20 14:50:30.654 250022 DEBUG nova.virt.libvirt.driver [None req-bfd27c3c-c5b9-4124-ba87-631778c9093b d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:50:30 compute-0 nova_compute[250018]: 2026-01-20 14:50:30.655 250022 DEBUG nova.virt.libvirt.driver [None req-bfd27c3c-c5b9-4124-ba87-631778c9093b d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:50:30 compute-0 nova_compute[250018]: 2026-01-20 14:50:30.655 250022 DEBUG nova.virt.libvirt.driver [None req-bfd27c3c-c5b9-4124-ba87-631778c9093b d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] No VIF found with MAC fa:16:3e:7f:92:68, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 14:50:30 compute-0 nova_compute[250018]: 2026-01-20 14:50:30.655 250022 INFO nova.virt.libvirt.driver [None req-bfd27c3c-c5b9-4124-ba87-631778c9093b d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] Using config drive
Jan 20 14:50:30 compute-0 podman[313747]: 2026-01-20 14:50:30.668077036 +0000 UTC m=+0.054480147 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 20 14:50:30 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:50:30 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:50:30 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:50:30.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:50:30 compute-0 nova_compute[250018]: 2026-01-20 14:50:30.690 250022 DEBUG nova.storage.rbd_utils [None req-bfd27c3c-c5b9-4124-ba87-631778c9093b d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] rbd image c3b4d4c6-c42f-4abc-9c01-89ec3e10c677_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:50:30 compute-0 podman[313746]: 2026-01-20 14:50:30.710892239 +0000 UTC m=+0.097042132 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, 
container_name=ovn_controller, io.buildah.version=1.41.3)
Jan 20 14:50:30 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:50:30 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:50:30 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:50:30.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:50:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:50:30.762 160071 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:50:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:50:30.764 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:50:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:50:30.765 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:50:30 compute-0 nova_compute[250018]: 2026-01-20 14:50:30.919 250022 DEBUG nova.network.neutron [req-593fc3cd-c376-4216-9b88-eba04a07a187 req-120b588d-1df2-491a-a069-0093f14c2aa1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] Updated VIF entry in instance network info cache for port 7869c4f4-4542-4bf0-aebd-eeb2d3ce94f9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 14:50:30 compute-0 nova_compute[250018]: 2026-01-20 14:50:30.920 250022 DEBUG nova.network.neutron [req-593fc3cd-c376-4216-9b88-eba04a07a187 req-120b588d-1df2-491a-a069-0093f14c2aa1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] Updating instance_info_cache with network_info: [{"id": "7869c4f4-4542-4bf0-aebd-eeb2d3ce94f9", "address": "fa:16:3e:7f:92:68", "network": {"id": "79184781-1f23-4584-87de-08e262242488", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-165460946-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a29915e0dd2403fbd7b7e847696b00a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7869c4f4-45", "ovs_interfaceid": "7869c4f4-4542-4bf0-aebd-eeb2d3ce94f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:50:30 compute-0 nova_compute[250018]: 2026-01-20 14:50:30.958 250022 DEBUG oslo_concurrency.lockutils [req-593fc3cd-c376-4216-9b88-eba04a07a187 req-120b588d-1df2-491a-a069-0093f14c2aa1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-c3b4d4c6-c42f-4abc-9c01-89ec3e10c677" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:50:30 compute-0 nova_compute[250018]: 2026-01-20 14:50:30.972 250022 INFO nova.virt.libvirt.driver [None req-bfd27c3c-c5b9-4124-ba87-631778c9093b d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] Creating config drive at /var/lib/nova/instances/c3b4d4c6-c42f-4abc-9c01-89ec3e10c677/disk.config
Jan 20 14:50:30 compute-0 nova_compute[250018]: 2026-01-20 14:50:30.978 250022 DEBUG oslo_concurrency.processutils [None req-bfd27c3c-c5b9-4124-ba87-631778c9093b d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c3b4d4c6-c42f-4abc-9c01-89ec3e10c677/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp65ptn261 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:50:31 compute-0 nova_compute[250018]: 2026-01-20 14:50:31.114 250022 DEBUG oslo_concurrency.processutils [None req-bfd27c3c-c5b9-4124-ba87-631778c9093b d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c3b4d4c6-c42f-4abc-9c01-89ec3e10c677/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp65ptn261" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:50:31 compute-0 nova_compute[250018]: 2026-01-20 14:50:31.154 250022 DEBUG nova.storage.rbd_utils [None req-bfd27c3c-c5b9-4124-ba87-631778c9093b d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] rbd image c3b4d4c6-c42f-4abc-9c01-89ec3e10c677_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:50:31 compute-0 nova_compute[250018]: 2026-01-20 14:50:31.159 250022 DEBUG oslo_concurrency.processutils [None req-bfd27c3c-c5b9-4124-ba87-631778c9093b d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c3b4d4c6-c42f-4abc-9c01-89ec3e10c677/disk.config c3b4d4c6-c42f-4abc-9c01-89ec3e10c677_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:50:31 compute-0 nova_compute[250018]: 2026-01-20 14:50:31.319 250022 DEBUG oslo_concurrency.processutils [None req-bfd27c3c-c5b9-4124-ba87-631778c9093b d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c3b4d4c6-c42f-4abc-9c01-89ec3e10c677/disk.config c3b4d4c6-c42f-4abc-9c01-89ec3e10c677_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.159s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:50:31 compute-0 nova_compute[250018]: 2026-01-20 14:50:31.320 250022 INFO nova.virt.libvirt.driver [None req-bfd27c3c-c5b9-4124-ba87-631778c9093b d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] Deleting local config drive /var/lib/nova/instances/c3b4d4c6-c42f-4abc-9c01-89ec3e10c677/disk.config because it was imported into RBD.
Jan 20 14:50:31 compute-0 kernel: tap7869c4f4-45: entered promiscuous mode
Jan 20 14:50:31 compute-0 NetworkManager[48960]: <info>  [1768920631.3726] manager: (tap7869c4f4-45): new Tun device (/org/freedesktop/NetworkManager/Devices/193)
Jan 20 14:50:31 compute-0 nova_compute[250018]: 2026-01-20 14:50:31.373 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:50:31 compute-0 ovn_controller[148666]: 2026-01-20T14:50:31Z|00375|binding|INFO|Claiming lport 7869c4f4-4542-4bf0-aebd-eeb2d3ce94f9 for this chassis.
Jan 20 14:50:31 compute-0 ovn_controller[148666]: 2026-01-20T14:50:31Z|00376|binding|INFO|7869c4f4-4542-4bf0-aebd-eeb2d3ce94f9: Claiming fa:16:3e:7f:92:68 10.100.0.11
Jan 20 14:50:31 compute-0 nova_compute[250018]: 2026-01-20 14:50:31.387 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:50:31 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:50:31.400 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7f:92:68 10.100.0.11'], port_security=['fa:16:3e:7f:92:68 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'c3b4d4c6-c42f-4abc-9c01-89ec3e10c677', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-79184781-1f23-4584-87de-08e262242488', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0a29915e0dd2403fbd7b7e847696b00a', 'neutron:revision_number': '2', 'neutron:security_group_ids': '30ec24b7-15ba-4aeb-9785-539071729f77', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6b73ab05-b29f-401a-84a5-ea1a96103f33, chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=7869c4f4-4542-4bf0-aebd-eeb2d3ce94f9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:50:31 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:50:31.402 160071 INFO neutron.agent.ovn.metadata.agent [-] Port 7869c4f4-4542-4bf0-aebd-eeb2d3ce94f9 in datapath 79184781-1f23-4584-87de-08e262242488 bound to our chassis
Jan 20 14:50:31 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:50:31.404 160071 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 79184781-1f23-4584-87de-08e262242488
Jan 20 14:50:31 compute-0 systemd-machined[216401]: New machine qemu-49-instance-0000006e.
Jan 20 14:50:31 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:50:31.417 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[07f773a1-90c8-4025-9194-c3936c363e7d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:50:31 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:50:31.418 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap79184781-11 in ovnmeta-79184781-1f23-4584-87de-08e262242488 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 14:50:31 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:50:31.419 257604 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap79184781-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 14:50:31 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:50:31.419 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[4d6c7a97-266f-4588-b2e9-b605b5e63f51]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:50:31 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:50:31.420 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[73e2e822-44d2-47b2-8784-168bdd4a7fc0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:50:31 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:50:31.433 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[0942f340-7bb6-49c7-99c3-6493906f3069]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:50:31 compute-0 systemd[1]: Started Virtual Machine qemu-49-instance-0000006e.
Jan 20 14:50:31 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:50:31.459 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[c47783dd-a015-4ca2-aa24-284287690321]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:50:31 compute-0 systemd-udevd[313862]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:50:31 compute-0 ovn_controller[148666]: 2026-01-20T14:50:31Z|00377|binding|INFO|Setting lport 7869c4f4-4542-4bf0-aebd-eeb2d3ce94f9 ovn-installed in OVS
Jan 20 14:50:31 compute-0 ovn_controller[148666]: 2026-01-20T14:50:31Z|00378|binding|INFO|Setting lport 7869c4f4-4542-4bf0-aebd-eeb2d3ce94f9 up in Southbound
Jan 20 14:50:31 compute-0 nova_compute[250018]: 2026-01-20 14:50:31.480 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:50:31 compute-0 NetworkManager[48960]: <info>  [1768920631.4847] device (tap7869c4f4-45): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 14:50:31 compute-0 NetworkManager[48960]: <info>  [1768920631.4854] device (tap7869c4f4-45): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 14:50:31 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:50:31.494 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[aaab838e-1df7-4e0d-8480-3d101a08e107]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:50:31 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:50:31.500 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[fd283a93-f706-46be-a383-3484e99c7b75]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:50:31 compute-0 systemd-udevd[313869]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:50:31 compute-0 NetworkManager[48960]: <info>  [1768920631.5017] manager: (tap79184781-10): new Veth device (/org/freedesktop/NetworkManager/Devices/194)
Jan 20 14:50:31 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:50:31.537 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[3d676be8-94a6-4e64-9a3a-4710f8ad80da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:50:31 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:50:31.540 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[e76be6f8-b70f-44cd-9032-aaeb19794158]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:50:31 compute-0 NetworkManager[48960]: <info>  [1768920631.5654] device (tap79184781-10): carrier: link connected
Jan 20 14:50:31 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:50:31.574 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[2d05b4cf-d032-4588-85d7-c8a9d437dc25]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:50:31 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:50:31.597 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[9fc12e44-0918-4f43-9a70-255a5f2efc3b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap79184781-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:38:7c:2a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 125], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 657228, 'reachable_time': 40386, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 313892, 'error': None, 'target': 'ovnmeta-79184781-1f23-4584-87de-08e262242488', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:50:31 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:50:31.612 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[faafecc6-eefb-439f-bdc3-6516cfebc4bc]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe38:7c2a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 657228, 'tstamp': 657228}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 313893, 'error': None, 'target': 'ovnmeta-79184781-1f23-4584-87de-08e262242488', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:50:31 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:50:31.631 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[f66b2e6f-7692-4c65-aba1-7032e3c586c9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap79184781-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:38:7c:2a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 125], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 657228, 'reachable_time': 40386, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 313894, 'error': None, 'target': 'ovnmeta-79184781-1f23-4584-87de-08e262242488', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:50:31 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:50:31.664 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[4f2a68ee-2476-459b-bd20-e714c13a9424]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:50:31 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:50:31.726 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[66f939c0-d15d-4920-8c07-a3256a744f41]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:50:31 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:50:31.727 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap79184781-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:50:31 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:50:31.728 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:50:31 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:50:31.728 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap79184781-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:50:31 compute-0 nova_compute[250018]: 2026-01-20 14:50:31.731 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:50:31 compute-0 NetworkManager[48960]: <info>  [1768920631.7320] manager: (tap79184781-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/195)
Jan 20 14:50:31 compute-0 kernel: tap79184781-10: entered promiscuous mode
Jan 20 14:50:31 compute-0 nova_compute[250018]: 2026-01-20 14:50:31.733 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:50:31 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:50:31.734 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap79184781-10, col_values=(('external_ids', {'iface-id': 'b033e9e6-9781-4424-a20f-7b48a14e2c80'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:50:31 compute-0 nova_compute[250018]: 2026-01-20 14:50:31.734 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:50:31 compute-0 ovn_controller[148666]: 2026-01-20T14:50:31Z|00379|binding|INFO|Releasing lport b033e9e6-9781-4424-a20f-7b48a14e2c80 from this chassis (sb_readonly=0)
Jan 20 14:50:31 compute-0 nova_compute[250018]: 2026-01-20 14:50:31.757 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:50:31 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:50:31.759 160071 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/79184781-1f23-4584-87de-08e262242488.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/79184781-1f23-4584-87de-08e262242488.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 14:50:31 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:50:31.760 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[7d7c5ff5-d0d6-40e6-9562-8577e7018a01]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:50:31 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:50:31.761 160071 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 14:50:31 compute-0 ovn_metadata_agent[160049]: global
Jan 20 14:50:31 compute-0 ovn_metadata_agent[160049]:     log         /dev/log local0 debug
Jan 20 14:50:31 compute-0 ovn_metadata_agent[160049]:     log-tag     haproxy-metadata-proxy-79184781-1f23-4584-87de-08e262242488
Jan 20 14:50:31 compute-0 ovn_metadata_agent[160049]:     user        root
Jan 20 14:50:31 compute-0 ovn_metadata_agent[160049]:     group       root
Jan 20 14:50:31 compute-0 ovn_metadata_agent[160049]:     maxconn     1024
Jan 20 14:50:31 compute-0 ovn_metadata_agent[160049]:     pidfile     /var/lib/neutron/external/pids/79184781-1f23-4584-87de-08e262242488.pid.haproxy
Jan 20 14:50:31 compute-0 ovn_metadata_agent[160049]:     daemon
Jan 20 14:50:31 compute-0 ovn_metadata_agent[160049]: 
Jan 20 14:50:31 compute-0 ovn_metadata_agent[160049]: defaults
Jan 20 14:50:31 compute-0 ovn_metadata_agent[160049]:     log global
Jan 20 14:50:31 compute-0 ovn_metadata_agent[160049]:     mode http
Jan 20 14:50:31 compute-0 ovn_metadata_agent[160049]:     option httplog
Jan 20 14:50:31 compute-0 ovn_metadata_agent[160049]:     option dontlognull
Jan 20 14:50:31 compute-0 ovn_metadata_agent[160049]:     option http-server-close
Jan 20 14:50:31 compute-0 ovn_metadata_agent[160049]:     option forwardfor
Jan 20 14:50:31 compute-0 ovn_metadata_agent[160049]:     retries                 3
Jan 20 14:50:31 compute-0 ovn_metadata_agent[160049]:     timeout http-request    30s
Jan 20 14:50:31 compute-0 ovn_metadata_agent[160049]:     timeout connect         30s
Jan 20 14:50:31 compute-0 ovn_metadata_agent[160049]:     timeout client          32s
Jan 20 14:50:31 compute-0 ovn_metadata_agent[160049]:     timeout server          32s
Jan 20 14:50:31 compute-0 ovn_metadata_agent[160049]:     timeout http-keep-alive 30s
Jan 20 14:50:31 compute-0 ovn_metadata_agent[160049]: 
Jan 20 14:50:31 compute-0 ovn_metadata_agent[160049]: 
Jan 20 14:50:31 compute-0 ovn_metadata_agent[160049]: listen listener
Jan 20 14:50:31 compute-0 ovn_metadata_agent[160049]:     bind 169.254.169.254:80
Jan 20 14:50:31 compute-0 ovn_metadata_agent[160049]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 14:50:31 compute-0 ovn_metadata_agent[160049]:     http-request add-header X-OVN-Network-ID 79184781-1f23-4584-87de-08e262242488
Jan 20 14:50:31 compute-0 ovn_metadata_agent[160049]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 14:50:31 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:50:31.761 160071 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-79184781-1f23-4584-87de-08e262242488', 'env', 'PROCESS_TAG=haproxy-79184781-1f23-4584-87de-08e262242488', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/79184781-1f23-4584-87de-08e262242488.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 14:50:31 compute-0 nova_compute[250018]: 2026-01-20 14:50:31.830 250022 DEBUG nova.compute.manager [req-6bb7b4ed-7c48-4b36-b35f-944cc20fea8c req-5d7ee089-53f3-4106-a5c2-07ccb10e792a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] Received event network-vif-plugged-7869c4f4-4542-4bf0-aebd-eeb2d3ce94f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:50:31 compute-0 nova_compute[250018]: 2026-01-20 14:50:31.830 250022 DEBUG oslo_concurrency.lockutils [req-6bb7b4ed-7c48-4b36-b35f-944cc20fea8c req-5d7ee089-53f3-4106-a5c2-07ccb10e792a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "c3b4d4c6-c42f-4abc-9c01-89ec3e10c677-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:50:31 compute-0 nova_compute[250018]: 2026-01-20 14:50:31.831 250022 DEBUG oslo_concurrency.lockutils [req-6bb7b4ed-7c48-4b36-b35f-944cc20fea8c req-5d7ee089-53f3-4106-a5c2-07ccb10e792a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "c3b4d4c6-c42f-4abc-9c01-89ec3e10c677-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:50:31 compute-0 nova_compute[250018]: 2026-01-20 14:50:31.831 250022 DEBUG oslo_concurrency.lockutils [req-6bb7b4ed-7c48-4b36-b35f-944cc20fea8c req-5d7ee089-53f3-4106-a5c2-07ccb10e792a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "c3b4d4c6-c42f-4abc-9c01-89ec3e10c677-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:50:31 compute-0 nova_compute[250018]: 2026-01-20 14:50:31.832 250022 DEBUG nova.compute.manager [req-6bb7b4ed-7c48-4b36-b35f-944cc20fea8c req-5d7ee089-53f3-4106-a5c2-07ccb10e792a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] Processing event network-vif-plugged-7869c4f4-4542-4bf0-aebd-eeb2d3ce94f9 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 20 14:50:31 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1883: 321 pgs: 321 active+clean; 186 MiB data, 886 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 5.3 MiB/s wr, 240 op/s
Jan 20 14:50:31 compute-0 nova_compute[250018]: 2026-01-20 14:50:31.902 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768920631.9015756, c3b4d4c6-c42f-4abc-9c01-89ec3e10c677 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:50:31 compute-0 nova_compute[250018]: 2026-01-20 14:50:31.902 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] VM Started (Lifecycle Event)
Jan 20 14:50:31 compute-0 nova_compute[250018]: 2026-01-20 14:50:31.905 250022 DEBUG nova.compute.manager [None req-bfd27c3c-c5b9-4124-ba87-631778c9093b d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 14:50:31 compute-0 nova_compute[250018]: 2026-01-20 14:50:31.909 250022 DEBUG nova.virt.libvirt.driver [None req-bfd27c3c-c5b9-4124-ba87-631778c9093b d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 14:50:31 compute-0 nova_compute[250018]: 2026-01-20 14:50:31.912 250022 INFO nova.virt.libvirt.driver [-] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] Instance spawned successfully.
Jan 20 14:50:31 compute-0 nova_compute[250018]: 2026-01-20 14:50:31.913 250022 DEBUG nova.virt.libvirt.driver [None req-bfd27c3c-c5b9-4124-ba87-631778c9093b d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 14:50:31 compute-0 nova_compute[250018]: 2026-01-20 14:50:31.930 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:50:31 compute-0 nova_compute[250018]: 2026-01-20 14:50:31.942 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:50:31 compute-0 nova_compute[250018]: 2026-01-20 14:50:31.948 250022 DEBUG nova.virt.libvirt.driver [None req-bfd27c3c-c5b9-4124-ba87-631778c9093b d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:50:31 compute-0 nova_compute[250018]: 2026-01-20 14:50:31.948 250022 DEBUG nova.virt.libvirt.driver [None req-bfd27c3c-c5b9-4124-ba87-631778c9093b d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:50:31 compute-0 nova_compute[250018]: 2026-01-20 14:50:31.949 250022 DEBUG nova.virt.libvirt.driver [None req-bfd27c3c-c5b9-4124-ba87-631778c9093b d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:50:31 compute-0 nova_compute[250018]: 2026-01-20 14:50:31.950 250022 DEBUG nova.virt.libvirt.driver [None req-bfd27c3c-c5b9-4124-ba87-631778c9093b d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:50:31 compute-0 nova_compute[250018]: 2026-01-20 14:50:31.951 250022 DEBUG nova.virt.libvirt.driver [None req-bfd27c3c-c5b9-4124-ba87-631778c9093b d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:50:31 compute-0 nova_compute[250018]: 2026-01-20 14:50:31.951 250022 DEBUG nova.virt.libvirt.driver [None req-bfd27c3c-c5b9-4124-ba87-631778c9093b d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:50:31 compute-0 nova_compute[250018]: 2026-01-20 14:50:31.978 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:50:31 compute-0 nova_compute[250018]: 2026-01-20 14:50:31.979 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768920631.9026048, c3b4d4c6-c42f-4abc-9c01-89ec3e10c677 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:50:31 compute-0 nova_compute[250018]: 2026-01-20 14:50:31.979 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] VM Paused (Lifecycle Event)
Jan 20 14:50:32 compute-0 nova_compute[250018]: 2026-01-20 14:50:32.023 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:50:32 compute-0 nova_compute[250018]: 2026-01-20 14:50:32.026 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768920631.907779, c3b4d4c6-c42f-4abc-9c01-89ec3e10c677 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:50:32 compute-0 nova_compute[250018]: 2026-01-20 14:50:32.026 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] VM Resumed (Lifecycle Event)
Jan 20 14:50:32 compute-0 nova_compute[250018]: 2026-01-20 14:50:32.043 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:50:32 compute-0 nova_compute[250018]: 2026-01-20 14:50:32.046 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:50:32 compute-0 nova_compute[250018]: 2026-01-20 14:50:32.049 250022 INFO nova.compute.manager [None req-bfd27c3c-c5b9-4124-ba87-631778c9093b d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] Took 6.81 seconds to spawn the instance on the hypervisor.
Jan 20 14:50:32 compute-0 nova_compute[250018]: 2026-01-20 14:50:32.050 250022 DEBUG nova.compute.manager [None req-bfd27c3c-c5b9-4124-ba87-631778c9093b d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:50:32 compute-0 nova_compute[250018]: 2026-01-20 14:50:32.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:50:32 compute-0 nova_compute[250018]: 2026-01-20 14:50:32.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:50:32 compute-0 nova_compute[250018]: 2026-01-20 14:50:32.064 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:50:32 compute-0 podman[313969]: 2026-01-20 14:50:32.108327287 +0000 UTC m=+0.041434666 container create 7866b45a9bdda5092f25f44c1000777e6bd8ae91584097deca054a90b6920641 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-79184781-1f23-4584-87de-08e262242488, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 20 14:50:32 compute-0 nova_compute[250018]: 2026-01-20 14:50:32.108 250022 INFO nova.compute.manager [None req-bfd27c3c-c5b9-4124-ba87-631778c9093b d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] Took 7.80 seconds to build instance.
Jan 20 14:50:32 compute-0 nova_compute[250018]: 2026-01-20 14:50:32.136 250022 DEBUG oslo_concurrency.lockutils [None req-bfd27c3c-c5b9-4124-ba87-631778c9093b d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Lock "c3b4d4c6-c42f-4abc-9c01-89ec3e10c677" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.915s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:50:32 compute-0 systemd[1]: Started libpod-conmon-7866b45a9bdda5092f25f44c1000777e6bd8ae91584097deca054a90b6920641.scope.
Jan 20 14:50:32 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:50:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20b0cbc9de21a864c844b5218cb858181d4219572eacadeaffcd39a810600e1f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 14:50:32 compute-0 podman[313969]: 2026-01-20 14:50:32.088194466 +0000 UTC m=+0.021301865 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 14:50:32 compute-0 podman[313969]: 2026-01-20 14:50:32.184699153 +0000 UTC m=+0.117806562 container init 7866b45a9bdda5092f25f44c1000777e6bd8ae91584097deca054a90b6920641 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-79184781-1f23-4584-87de-08e262242488, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:50:32 compute-0 podman[313969]: 2026-01-20 14:50:32.190334475 +0000 UTC m=+0.123441854 container start 7866b45a9bdda5092f25f44c1000777e6bd8ae91584097deca054a90b6920641 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-79184781-1f23-4584-87de-08e262242488, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:50:32 compute-0 neutron-haproxy-ovnmeta-79184781-1f23-4584-87de-08e262242488[313985]: [NOTICE]   (313989) : New worker (313991) forked
Jan 20 14:50:32 compute-0 neutron-haproxy-ovnmeta-79184781-1f23-4584-87de-08e262242488[313985]: [NOTICE]   (313989) : Loading success.
Jan 20 14:50:32 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:50:32 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:50:32 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:50:32.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:50:32 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:50:32 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:50:32 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:50:32.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:50:32 compute-0 ceph-mon[74360]: pgmap v1883: 321 pgs: 321 active+clean; 186 MiB data, 886 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 5.3 MiB/s wr, 240 op/s
Jan 20 14:50:32 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/2955992127' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:50:33 compute-0 nova_compute[250018]: 2026-01-20 14:50:33.177 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:50:33 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1884: 321 pgs: 321 active+clean; 195 MiB data, 878 MiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 5.1 MiB/s wr, 246 op/s
Jan 20 14:50:33 compute-0 nova_compute[250018]: 2026-01-20 14:50:33.936 250022 DEBUG nova.compute.manager [req-2c096e0f-f5d8-4357-b17e-4da32e1128f4 req-dcaa11bc-5c99-4225-8055-aef1e7ad3e01 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] Received event network-vif-plugged-7869c4f4-4542-4bf0-aebd-eeb2d3ce94f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:50:33 compute-0 nova_compute[250018]: 2026-01-20 14:50:33.936 250022 DEBUG oslo_concurrency.lockutils [req-2c096e0f-f5d8-4357-b17e-4da32e1128f4 req-dcaa11bc-5c99-4225-8055-aef1e7ad3e01 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "c3b4d4c6-c42f-4abc-9c01-89ec3e10c677-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:50:33 compute-0 nova_compute[250018]: 2026-01-20 14:50:33.937 250022 DEBUG oslo_concurrency.lockutils [req-2c096e0f-f5d8-4357-b17e-4da32e1128f4 req-dcaa11bc-5c99-4225-8055-aef1e7ad3e01 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "c3b4d4c6-c42f-4abc-9c01-89ec3e10c677-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:50:33 compute-0 nova_compute[250018]: 2026-01-20 14:50:33.937 250022 DEBUG oslo_concurrency.lockutils [req-2c096e0f-f5d8-4357-b17e-4da32e1128f4 req-dcaa11bc-5c99-4225-8055-aef1e7ad3e01 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "c3b4d4c6-c42f-4abc-9c01-89ec3e10c677-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:50:33 compute-0 nova_compute[250018]: 2026-01-20 14:50:33.938 250022 DEBUG nova.compute.manager [req-2c096e0f-f5d8-4357-b17e-4da32e1128f4 req-dcaa11bc-5c99-4225-8055-aef1e7ad3e01 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] No waiting events found dispatching network-vif-plugged-7869c4f4-4542-4bf0-aebd-eeb2d3ce94f9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:50:33 compute-0 nova_compute[250018]: 2026-01-20 14:50:33.938 250022 WARNING nova.compute.manager [req-2c096e0f-f5d8-4357-b17e-4da32e1128f4 req-dcaa11bc-5c99-4225-8055-aef1e7ad3e01 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] Received unexpected event network-vif-plugged-7869c4f4-4542-4bf0-aebd-eeb2d3ce94f9 for instance with vm_state active and task_state None.
Jan 20 14:50:34 compute-0 ceph-mon[74360]: pgmap v1884: 321 pgs: 321 active+clean; 195 MiB data, 878 MiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 5.1 MiB/s wr, 246 op/s
Jan 20 14:50:34 compute-0 nova_compute[250018]: 2026-01-20 14:50:34.150 250022 DEBUG nova.compute.manager [None req-6c56e99b-8d65-4833-9f8b-7ae8c17b9763 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:50:34 compute-0 nova_compute[250018]: 2026-01-20 14:50:34.199 250022 INFO nova.compute.manager [None req-6c56e99b-8d65-4833-9f8b-7ae8c17b9763 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] instance snapshotting
Jan 20 14:50:34 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:50:34 compute-0 nova_compute[250018]: 2026-01-20 14:50:34.618 250022 INFO nova.virt.libvirt.driver [None req-6c56e99b-8d65-4833-9f8b-7ae8c17b9763 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] Beginning live snapshot process
Jan 20 14:50:34 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:50:34 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:50:34 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:50:34.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:50:34 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:50:34 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:50:34 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:50:34.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:50:34 compute-0 nova_compute[250018]: 2026-01-20 14:50:34.776 250022 DEBUG nova.virt.libvirt.imagebackend [None req-6c56e99b-8d65-4833-9f8b-7ae8c17b9763 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] No parent info for a32b3e07-16d8-46fd-9a7b-c242c432fcf9; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Jan 20 14:50:35 compute-0 nova_compute[250018]: 2026-01-20 14:50:35.053 250022 DEBUG nova.storage.rbd_utils [None req-6c56e99b-8d65-4833-9f8b-7ae8c17b9763 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] creating snapshot(4e7b7d7713af44f78be9e28a297daded) on rbd image(c3b4d4c6-c42f-4abc-9c01-89ec3e10c677_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 20 14:50:35 compute-0 nova_compute[250018]: 2026-01-20 14:50:35.101 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:50:35 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:50:35.103 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=35, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=34) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:50:35 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:50:35.104 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 14:50:35 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:50:35.105 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=367c1a2c-b16a-4828-ab5a-626bb50023b4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '35'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:50:35 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e250 do_prune osdmap full prune enabled
Jan 20 14:50:35 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e251 e251: 3 total, 3 up, 3 in
Jan 20 14:50:35 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e251: 3 total, 3 up, 3 in
Jan 20 14:50:35 compute-0 nova_compute[250018]: 2026-01-20 14:50:35.200 250022 DEBUG nova.storage.rbd_utils [None req-6c56e99b-8d65-4833-9f8b-7ae8c17b9763 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] cloning vms/c3b4d4c6-c42f-4abc-9c01-89ec3e10c677_disk@4e7b7d7713af44f78be9e28a297daded to images/7586ccfe-36ea-40bd-b70d-ce54b80b5faa clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Jan 20 14:50:35 compute-0 nova_compute[250018]: 2026-01-20 14:50:35.332 250022 DEBUG nova.storage.rbd_utils [None req-6c56e99b-8d65-4833-9f8b-7ae8c17b9763 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] flattening images/7586ccfe-36ea-40bd-b70d-ce54b80b5faa flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Jan 20 14:50:35 compute-0 nova_compute[250018]: 2026-01-20 14:50:35.557 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:50:35 compute-0 nova_compute[250018]: 2026-01-20 14:50:35.643 250022 DEBUG nova.storage.rbd_utils [None req-6c56e99b-8d65-4833-9f8b-7ae8c17b9763 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] removing snapshot(4e7b7d7713af44f78be9e28a297daded) on rbd image(c3b4d4c6-c42f-4abc-9c01-89ec3e10c677_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Jan 20 14:50:35 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1886: 321 pgs: 321 active+clean; 242 MiB data, 900 MiB used, 20 GiB / 21 GiB avail; 5.0 MiB/s rd, 6.3 MiB/s wr, 355 op/s
Jan 20 14:50:36 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e251 do_prune osdmap full prune enabled
Jan 20 14:50:36 compute-0 ceph-mon[74360]: osdmap e251: 3 total, 3 up, 3 in
Jan 20 14:50:36 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/170477161' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:50:36 compute-0 ceph-mon[74360]: pgmap v1886: 321 pgs: 321 active+clean; 242 MiB data, 900 MiB used, 20 GiB / 21 GiB avail; 5.0 MiB/s rd, 6.3 MiB/s wr, 355 op/s
Jan 20 14:50:36 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e252 e252: 3 total, 3 up, 3 in
Jan 20 14:50:36 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e252: 3 total, 3 up, 3 in
Jan 20 14:50:36 compute-0 nova_compute[250018]: 2026-01-20 14:50:36.202 250022 DEBUG nova.storage.rbd_utils [None req-6c56e99b-8d65-4833-9f8b-7ae8c17b9763 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] creating snapshot(snap) on rbd image(7586ccfe-36ea-40bd-b70d-ce54b80b5faa) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 20 14:50:36 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:50:36 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:50:36 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:50:36.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:50:36 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:50:36 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:50:36 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:50:36.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:50:37 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e252 do_prune osdmap full prune enabled
Jan 20 14:50:37 compute-0 ceph-mon[74360]: osdmap e252: 3 total, 3 up, 3 in
Jan 20 14:50:37 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e253 e253: 3 total, 3 up, 3 in
Jan 20 14:50:37 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e253: 3 total, 3 up, 3 in
Jan 20 14:50:37 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1889: 321 pgs: 321 active+clean; 278 MiB data, 920 MiB used, 20 GiB / 21 GiB avail; 9.8 MiB/s rd, 8.1 MiB/s wr, 457 op/s
Jan 20 14:50:38 compute-0 nova_compute[250018]: 2026-01-20 14:50:38.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:50:38 compute-0 nova_compute[250018]: 2026-01-20 14:50:38.199 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:50:38 compute-0 ceph-mon[74360]: osdmap e253: 3 total, 3 up, 3 in
Jan 20 14:50:38 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/3655918105' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:50:38 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/547360188' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:50:38 compute-0 ceph-mon[74360]: pgmap v1889: 321 pgs: 321 active+clean; 278 MiB data, 920 MiB used, 20 GiB / 21 GiB avail; 9.8 MiB/s rd, 8.1 MiB/s wr, 457 op/s
Jan 20 14:50:38 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/2564209469' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:50:38 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:50:38 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:50:38 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:50:38.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:50:38 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:50:38 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:50:38 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:50:38.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:50:38 compute-0 nova_compute[250018]: 2026-01-20 14:50:38.861 250022 INFO nova.virt.libvirt.driver [None req-6c56e99b-8d65-4833-9f8b-7ae8c17b9763 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] Snapshot image upload complete
Jan 20 14:50:38 compute-0 nova_compute[250018]: 2026-01-20 14:50:38.862 250022 INFO nova.compute.manager [None req-6c56e99b-8d65-4833-9f8b-7ae8c17b9763 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] Took 4.66 seconds to snapshot the instance on the hypervisor.
Jan 20 14:50:39 compute-0 nova_compute[250018]: 2026-01-20 14:50:39.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:50:39 compute-0 nova_compute[250018]: 2026-01-20 14:50:39.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:50:39 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/2975627258' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:50:39 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:50:39 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1890: 321 pgs: 321 active+clean; 298 MiB data, 925 MiB used, 20 GiB / 21 GiB avail; 9.3 MiB/s rd, 8.0 MiB/s wr, 416 op/s
Jan 20 14:50:40 compute-0 nova_compute[250018]: 2026-01-20 14:50:40.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:50:40 compute-0 nova_compute[250018]: 2026-01-20 14:50:40.078 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:50:40 compute-0 nova_compute[250018]: 2026-01-20 14:50:40.079 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:50:40 compute-0 nova_compute[250018]: 2026-01-20 14:50:40.079 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:50:40 compute-0 nova_compute[250018]: 2026-01-20 14:50:40.080 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 14:50:40 compute-0 nova_compute[250018]: 2026-01-20 14:50:40.080 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:50:40 compute-0 ceph-mon[74360]: pgmap v1890: 321 pgs: 321 active+clean; 298 MiB data, 925 MiB used, 20 GiB / 21 GiB avail; 9.3 MiB/s rd, 8.0 MiB/s wr, 416 op/s
Jan 20 14:50:40 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:50:40 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3077486001' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:50:40 compute-0 nova_compute[250018]: 2026-01-20 14:50:40.561 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:50:40 compute-0 nova_compute[250018]: 2026-01-20 14:50:40.581 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:50:40 compute-0 nova_compute[250018]: 2026-01-20 14:50:40.646 250022 DEBUG nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] skipping disk for instance-0000006e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:50:40 compute-0 nova_compute[250018]: 2026-01-20 14:50:40.647 250022 DEBUG nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] skipping disk for instance-0000006e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:50:40 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:50:40 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:50:40 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:50:40.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:50:40 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:50:40 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:50:40 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:50:40.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:50:40 compute-0 nova_compute[250018]: 2026-01-20 14:50:40.820 250022 WARNING nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:50:40 compute-0 nova_compute[250018]: 2026-01-20 14:50:40.821 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4316MB free_disk=20.925552368164062GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 14:50:40 compute-0 nova_compute[250018]: 2026-01-20 14:50:40.822 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:50:40 compute-0 nova_compute[250018]: 2026-01-20 14:50:40.822 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:50:40 compute-0 nova_compute[250018]: 2026-01-20 14:50:40.900 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Instance c3b4d4c6-c42f-4abc-9c01-89ec3e10c677 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 14:50:40 compute-0 nova_compute[250018]: 2026-01-20 14:50:40.900 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 14:50:40 compute-0 nova_compute[250018]: 2026-01-20 14:50:40.901 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 14:50:41 compute-0 nova_compute[250018]: 2026-01-20 14:50:41.016 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:50:41 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3077486001' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:50:41 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:50:41 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4154529792' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:50:41 compute-0 nova_compute[250018]: 2026-01-20 14:50:41.482 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:50:41 compute-0 nova_compute[250018]: 2026-01-20 14:50:41.490 250022 DEBUG nova.compute.provider_tree [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:50:41 compute-0 nova_compute[250018]: 2026-01-20 14:50:41.502 250022 DEBUG nova.scheduler.client.report [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:50:41 compute-0 nova_compute[250018]: 2026-01-20 14:50:41.524 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 14:50:41 compute-0 nova_compute[250018]: 2026-01-20 14:50:41.525 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.703s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:50:41 compute-0 nova_compute[250018]: 2026-01-20 14:50:41.822 250022 INFO nova.compute.manager [None req-26ab0780-1dff-40c9-b541-cb8829153de8 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] Rescuing
Jan 20 14:50:41 compute-0 nova_compute[250018]: 2026-01-20 14:50:41.823 250022 DEBUG oslo_concurrency.lockutils [None req-26ab0780-1dff-40c9-b541-cb8829153de8 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Acquiring lock "refresh_cache-c3b4d4c6-c42f-4abc-9c01-89ec3e10c677" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:50:41 compute-0 nova_compute[250018]: 2026-01-20 14:50:41.824 250022 DEBUG oslo_concurrency.lockutils [None req-26ab0780-1dff-40c9-b541-cb8829153de8 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Acquired lock "refresh_cache-c3b4d4c6-c42f-4abc-9c01-89ec3e10c677" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:50:41 compute-0 nova_compute[250018]: 2026-01-20 14:50:41.824 250022 DEBUG nova.network.neutron [None req-26ab0780-1dff-40c9-b541-cb8829153de8 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:50:41 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1891: 321 pgs: 321 active+clean; 307 MiB data, 930 MiB used, 20 GiB / 21 GiB avail; 4.8 MiB/s rd, 4.4 MiB/s wr, 179 op/s
Jan 20 14:50:42 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/4154529792' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:50:42 compute-0 ceph-mon[74360]: pgmap v1891: 321 pgs: 321 active+clean; 307 MiB data, 930 MiB used, 20 GiB / 21 GiB avail; 4.8 MiB/s rd, 4.4 MiB/s wr, 179 op/s
Jan 20 14:50:42 compute-0 nova_compute[250018]: 2026-01-20 14:50:42.525 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:50:42 compute-0 nova_compute[250018]: 2026-01-20 14:50:42.527 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:50:42 compute-0 nova_compute[250018]: 2026-01-20 14:50:42.528 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 14:50:42 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:50:42 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:50:42 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:50:42.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:50:42 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:50:42 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:50:42 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:50:42.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:50:43 compute-0 nova_compute[250018]: 2026-01-20 14:50:43.245 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:50:43 compute-0 nova_compute[250018]: 2026-01-20 14:50:43.377 250022 DEBUG nova.network.neutron [None req-26ab0780-1dff-40c9-b541-cb8829153de8 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] Updating instance_info_cache with network_info: [{"id": "7869c4f4-4542-4bf0-aebd-eeb2d3ce94f9", "address": "fa:16:3e:7f:92:68", "network": {"id": "79184781-1f23-4584-87de-08e262242488", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-165460946-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a29915e0dd2403fbd7b7e847696b00a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7869c4f4-45", "ovs_interfaceid": "7869c4f4-4542-4bf0-aebd-eeb2d3ce94f9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:50:43 compute-0 nova_compute[250018]: 2026-01-20 14:50:43.400 250022 DEBUG oslo_concurrency.lockutils [None req-26ab0780-1dff-40c9-b541-cb8829153de8 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Releasing lock "refresh_cache-c3b4d4c6-c42f-4abc-9c01-89ec3e10c677" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:50:43 compute-0 nova_compute[250018]: 2026-01-20 14:50:43.630 250022 DEBUG nova.virt.libvirt.driver [None req-26ab0780-1dff-40c9-b541-cb8829153de8 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Jan 20 14:50:43 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1892: 321 pgs: 321 active+clean; 307 MiB data, 930 MiB used, 20 GiB / 21 GiB avail; 5.2 MiB/s rd, 4.0 MiB/s wr, 208 op/s
Jan 20 14:50:44 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:50:44 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e253 do_prune osdmap full prune enabled
Jan 20 14:50:44 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e254 e254: 3 total, 3 up, 3 in
Jan 20 14:50:44 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e254: 3 total, 3 up, 3 in
Jan 20 14:50:44 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:50:44 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:50:44 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:50:44.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:50:44 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:50:44 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:50:44 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:50:44.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:50:44 compute-0 ceph-mon[74360]: pgmap v1892: 321 pgs: 321 active+clean; 307 MiB data, 930 MiB used, 20 GiB / 21 GiB avail; 5.2 MiB/s rd, 4.0 MiB/s wr, 208 op/s
Jan 20 14:50:44 compute-0 ceph-mon[74360]: osdmap e254: 3 total, 3 up, 3 in
Jan 20 14:50:44 compute-0 sudo[314192]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:50:44 compute-0 sudo[314192]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:50:44 compute-0 sudo[314192]: pam_unix(sudo:session): session closed for user root
Jan 20 14:50:44 compute-0 ovn_controller[148666]: 2026-01-20T14:50:44Z|00045|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:7f:92:68 10.100.0.11
Jan 20 14:50:44 compute-0 ovn_controller[148666]: 2026-01-20T14:50:44Z|00046|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:7f:92:68 10.100.0.11
Jan 20 14:50:45 compute-0 sudo[314217]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:50:45 compute-0 sudo[314217]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:50:45 compute-0 sudo[314217]: pam_unix(sudo:session): session closed for user root
Jan 20 14:50:45 compute-0 sudo[314242]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:50:45 compute-0 sudo[314242]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:50:45 compute-0 sudo[314242]: pam_unix(sudo:session): session closed for user root
Jan 20 14:50:45 compute-0 sudo[314267]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 14:50:45 compute-0 sudo[314267]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:50:45 compute-0 nova_compute[250018]: 2026-01-20 14:50:45.606 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:50:45 compute-0 sudo[314267]: pam_unix(sudo:session): session closed for user root
Jan 20 14:50:45 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Jan 20 14:50:45 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 20 14:50:45 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 14:50:45 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:50:45 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 20 14:50:45 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:50:45 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 20 14:50:45 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:50:45 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev aaa15026-2ec8-42b1-b72e-e63230957089 does not exist
Jan 20 14:50:45 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 76e6c4c7-4fd7-48cb-aefa-d9be682e19e0 does not exist
Jan 20 14:50:45 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev d37e15b6-fd83-4ae7-84e4-abcd815d1b84 does not exist
Jan 20 14:50:45 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 20 14:50:45 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:50:45 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 20 14:50:45 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:50:45 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 14:50:45 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:50:45 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1894: 321 pgs: 321 active+clean; 344 MiB data, 974 MiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 4.6 MiB/s wr, 258 op/s
Jan 20 14:50:45 compute-0 sudo[314324]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:50:45 compute-0 sudo[314324]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:50:45 compute-0 sudo[314324]: pam_unix(sudo:session): session closed for user root
Jan 20 14:50:45 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 20 14:50:45 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:50:45 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:50:45 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:50:45 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:50:45 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:50:45 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:50:45 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/203403028' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:50:46 compute-0 sudo[314349]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:50:46 compute-0 sudo[314349]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:50:46 compute-0 sudo[314349]: pam_unix(sudo:session): session closed for user root
Jan 20 14:50:46 compute-0 nova_compute[250018]: 2026-01-20 14:50:46.052 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:50:46 compute-0 nova_compute[250018]: 2026-01-20 14:50:46.053 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 14:50:46 compute-0 nova_compute[250018]: 2026-01-20 14:50:46.053 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 14:50:46 compute-0 sudo[314374]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:50:46 compute-0 sudo[314374]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:50:46 compute-0 sudo[314374]: pam_unix(sudo:session): session closed for user root
Jan 20 14:50:46 compute-0 nova_compute[250018]: 2026-01-20 14:50:46.074 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "refresh_cache-c3b4d4c6-c42f-4abc-9c01-89ec3e10c677" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:50:46 compute-0 nova_compute[250018]: 2026-01-20 14:50:46.075 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquired lock "refresh_cache-c3b4d4c6-c42f-4abc-9c01-89ec3e10c677" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:50:46 compute-0 nova_compute[250018]: 2026-01-20 14:50:46.075 250022 DEBUG nova.network.neutron [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 20 14:50:46 compute-0 nova_compute[250018]: 2026-01-20 14:50:46.075 250022 DEBUG nova.objects.instance [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lazy-loading 'info_cache' on Instance uuid c3b4d4c6-c42f-4abc-9c01-89ec3e10c677 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:50:46 compute-0 sudo[314399]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 14:50:46 compute-0 sudo[314399]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:50:46 compute-0 podman[314462]: 2026-01-20 14:50:46.569948172 +0000 UTC m=+0.069941123 container create 1c7594660f76668d833f1fcfe5c859455d31a779ba68a0eb4db62d40a7f950f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_shaw, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 20 14:50:46 compute-0 systemd[1]: Started libpod-conmon-1c7594660f76668d833f1fcfe5c859455d31a779ba68a0eb4db62d40a7f950f1.scope.
Jan 20 14:50:46 compute-0 podman[314462]: 2026-01-20 14:50:46.540784447 +0000 UTC m=+0.040777498 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:50:46 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:50:46 compute-0 podman[314462]: 2026-01-20 14:50:46.664023294 +0000 UTC m=+0.164016265 container init 1c7594660f76668d833f1fcfe5c859455d31a779ba68a0eb4db62d40a7f950f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_shaw, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:50:46 compute-0 podman[314462]: 2026-01-20 14:50:46.669932773 +0000 UTC m=+0.169925724 container start 1c7594660f76668d833f1fcfe5c859455d31a779ba68a0eb4db62d40a7f950f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_shaw, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 20 14:50:46 compute-0 podman[314462]: 2026-01-20 14:50:46.67282013 +0000 UTC m=+0.172813161 container attach 1c7594660f76668d833f1fcfe5c859455d31a779ba68a0eb4db62d40a7f950f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_shaw, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Jan 20 14:50:46 compute-0 loving_shaw[314478]: 167 167
Jan 20 14:50:46 compute-0 systemd[1]: libpod-1c7594660f76668d833f1fcfe5c859455d31a779ba68a0eb4db62d40a7f950f1.scope: Deactivated successfully.
Jan 20 14:50:46 compute-0 conmon[314478]: conmon 1c7594660f76668d833f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1c7594660f76668d833f1fcfe5c859455d31a779ba68a0eb4db62d40a7f950f1.scope/container/memory.events
Jan 20 14:50:46 compute-0 podman[314462]: 2026-01-20 14:50:46.676663804 +0000 UTC m=+0.176656755 container died 1c7594660f76668d833f1fcfe5c859455d31a779ba68a0eb4db62d40a7f950f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_shaw, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:50:46 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:50:46 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:50:46 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:50:46.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:50:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-7db40cb82a3fb4b874d359dae22c9d7e07db0f85be31aa352f8e9f0574a9ebee-merged.mount: Deactivated successfully.
Jan 20 14:50:46 compute-0 podman[314462]: 2026-01-20 14:50:46.723473323 +0000 UTC m=+0.223466274 container remove 1c7594660f76668d833f1fcfe5c859455d31a779ba68a0eb4db62d40a7f950f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_shaw, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:50:46 compute-0 systemd[1]: libpod-conmon-1c7594660f76668d833f1fcfe5c859455d31a779ba68a0eb4db62d40a7f950f1.scope: Deactivated successfully.
Jan 20 14:50:46 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:50:46 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:50:46 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:50:46.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:50:46 compute-0 podman[314503]: 2026-01-20 14:50:46.896238433 +0000 UTC m=+0.046895483 container create 1a3ccb2a24cfb439c03444d020c3f30578e3a4de92d237e94e39eb34845b4cc1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_fermi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:50:46 compute-0 systemd[1]: Started libpod-conmon-1a3ccb2a24cfb439c03444d020c3f30578e3a4de92d237e94e39eb34845b4cc1.scope.
Jan 20 14:50:46 compute-0 ceph-mon[74360]: pgmap v1894: 321 pgs: 321 active+clean; 344 MiB data, 974 MiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 4.6 MiB/s wr, 258 op/s
Jan 20 14:50:46 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/2235680583' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:50:46 compute-0 podman[314503]: 2026-01-20 14:50:46.872770562 +0000 UTC m=+0.023427642 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:50:46 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:50:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5eb85d77fbb857da29dd7808909ef5b05458d4dde10a297775140ccf9007896/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:50:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5eb85d77fbb857da29dd7808909ef5b05458d4dde10a297775140ccf9007896/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:50:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5eb85d77fbb857da29dd7808909ef5b05458d4dde10a297775140ccf9007896/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:50:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5eb85d77fbb857da29dd7808909ef5b05458d4dde10a297775140ccf9007896/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:50:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5eb85d77fbb857da29dd7808909ef5b05458d4dde10a297775140ccf9007896/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 14:50:47 compute-0 podman[314503]: 2026-01-20 14:50:47.010760846 +0000 UTC m=+0.161417916 container init 1a3ccb2a24cfb439c03444d020c3f30578e3a4de92d237e94e39eb34845b4cc1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_fermi, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 20 14:50:47 compute-0 podman[314503]: 2026-01-20 14:50:47.018568785 +0000 UTC m=+0.169225835 container start 1a3ccb2a24cfb439c03444d020c3f30578e3a4de92d237e94e39eb34845b4cc1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_fermi, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:50:47 compute-0 podman[314503]: 2026-01-20 14:50:47.02208154 +0000 UTC m=+0.172738630 container attach 1a3ccb2a24cfb439c03444d020c3f30578e3a4de92d237e94e39eb34845b4cc1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_fermi, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 20 14:50:47 compute-0 nova_compute[250018]: 2026-01-20 14:50:47.266 250022 DEBUG nova.network.neutron [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] Updating instance_info_cache with network_info: [{"id": "7869c4f4-4542-4bf0-aebd-eeb2d3ce94f9", "address": "fa:16:3e:7f:92:68", "network": {"id": "79184781-1f23-4584-87de-08e262242488", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-165460946-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a29915e0dd2403fbd7b7e847696b00a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7869c4f4-45", "ovs_interfaceid": "7869c4f4-4542-4bf0-aebd-eeb2d3ce94f9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:50:47 compute-0 nova_compute[250018]: 2026-01-20 14:50:47.295 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Releasing lock "refresh_cache-c3b4d4c6-c42f-4abc-9c01-89ec3e10c677" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:50:47 compute-0 nova_compute[250018]: 2026-01-20 14:50:47.295 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 20 14:50:47 compute-0 peaceful_fermi[314519]: --> passed data devices: 0 physical, 1 LVM
Jan 20 14:50:47 compute-0 peaceful_fermi[314519]: --> relative data size: 1.0
Jan 20 14:50:47 compute-0 peaceful_fermi[314519]: --> All data devices are unavailable
Jan 20 14:50:47 compute-0 systemd[1]: libpod-1a3ccb2a24cfb439c03444d020c3f30578e3a4de92d237e94e39eb34845b4cc1.scope: Deactivated successfully.
Jan 20 14:50:47 compute-0 podman[314503]: 2026-01-20 14:50:47.84215627 +0000 UTC m=+0.992813320 container died 1a3ccb2a24cfb439c03444d020c3f30578e3a4de92d237e94e39eb34845b4cc1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_fermi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 14:50:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-a5eb85d77fbb857da29dd7808909ef5b05458d4dde10a297775140ccf9007896-merged.mount: Deactivated successfully.
Jan 20 14:50:47 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1895: 321 pgs: 321 active+clean; 366 MiB data, 993 MiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 5.7 MiB/s wr, 289 op/s
Jan 20 14:50:47 compute-0 podman[314503]: 2026-01-20 14:50:47.892755111 +0000 UTC m=+1.043412161 container remove 1a3ccb2a24cfb439c03444d020c3f30578e3a4de92d237e94e39eb34845b4cc1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_fermi, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:50:47 compute-0 systemd[1]: libpod-conmon-1a3ccb2a24cfb439c03444d020c3f30578e3a4de92d237e94e39eb34845b4cc1.scope: Deactivated successfully.
Jan 20 14:50:47 compute-0 sudo[314399]: pam_unix(sudo:session): session closed for user root
Jan 20 14:50:47 compute-0 sudo[314547]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:50:47 compute-0 sudo[314547]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:50:47 compute-0 sudo[314547]: pam_unix(sudo:session): session closed for user root
Jan 20 14:50:48 compute-0 sudo[314572]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:50:48 compute-0 sudo[314572]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:50:48 compute-0 sudo[314572]: pam_unix(sudo:session): session closed for user root
Jan 20 14:50:48 compute-0 sudo[314597]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:50:48 compute-0 sudo[314597]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:50:48 compute-0 sudo[314597]: pam_unix(sudo:session): session closed for user root
Jan 20 14:50:48 compute-0 sudo[314622]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- lvm list --format json
Jan 20 14:50:48 compute-0 sudo[314622]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:50:48 compute-0 nova_compute[250018]: 2026-01-20 14:50:48.245 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:50:48 compute-0 podman[314687]: 2026-01-20 14:50:48.510716733 +0000 UTC m=+0.048971789 container create 6e5278b1dff082f85794825c1ef6be4aac61511774064086782a793f62f599d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_matsumoto, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 20 14:50:48 compute-0 systemd[1]: Started libpod-conmon-6e5278b1dff082f85794825c1ef6be4aac61511774064086782a793f62f599d4.scope.
Jan 20 14:50:48 compute-0 podman[314687]: 2026-01-20 14:50:48.487778795 +0000 UTC m=+0.026033861 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:50:48 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:50:48 compute-0 podman[314687]: 2026-01-20 14:50:48.612165722 +0000 UTC m=+0.150420788 container init 6e5278b1dff082f85794825c1ef6be4aac61511774064086782a793f62f599d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_matsumoto, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:50:48 compute-0 podman[314687]: 2026-01-20 14:50:48.621884174 +0000 UTC m=+0.160139210 container start 6e5278b1dff082f85794825c1ef6be4aac61511774064086782a793f62f599d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_matsumoto, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:50:48 compute-0 podman[314687]: 2026-01-20 14:50:48.626368535 +0000 UTC m=+0.164623591 container attach 6e5278b1dff082f85794825c1ef6be4aac61511774064086782a793f62f599d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_matsumoto, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 20 14:50:48 compute-0 optimistic_matsumoto[314704]: 167 167
Jan 20 14:50:48 compute-0 systemd[1]: libpod-6e5278b1dff082f85794825c1ef6be4aac61511774064086782a793f62f599d4.scope: Deactivated successfully.
Jan 20 14:50:48 compute-0 conmon[314704]: conmon 6e5278b1dff082f85794 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6e5278b1dff082f85794825c1ef6be4aac61511774064086782a793f62f599d4.scope/container/memory.events
Jan 20 14:50:48 compute-0 podman[314687]: 2026-01-20 14:50:48.631192795 +0000 UTC m=+0.169447841 container died 6e5278b1dff082f85794825c1ef6be4aac61511774064086782a793f62f599d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_matsumoto, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:50:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-c25a8d9b10903b9651a8f4206c355f69e059fe49c4d571115cdbb6d97ffa6715-merged.mount: Deactivated successfully.
Jan 20 14:50:48 compute-0 podman[314687]: 2026-01-20 14:50:48.670802221 +0000 UTC m=+0.209057237 container remove 6e5278b1dff082f85794825c1ef6be4aac61511774064086782a793f62f599d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_matsumoto, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 20 14:50:48 compute-0 systemd[1]: libpod-conmon-6e5278b1dff082f85794825c1ef6be4aac61511774064086782a793f62f599d4.scope: Deactivated successfully.
Jan 20 14:50:48 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:50:48 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:50:48 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:50:48.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:50:48 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:50:48 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:50:48 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:50:48.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:50:48 compute-0 podman[314727]: 2026-01-20 14:50:48.841667599 +0000 UTC m=+0.040252325 container create d6e6450b86eff52d4b99068c4c1c4baf22e41fbc9e2149b20484d6d03d26d72e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_blackwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 20 14:50:48 compute-0 systemd[1]: Started libpod-conmon-d6e6450b86eff52d4b99068c4c1c4baf22e41fbc9e2149b20484d6d03d26d72e.scope.
Jan 20 14:50:48 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:50:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3497cfd42635fc8b14585955975780061c49e487ee558149177662972e161557/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:50:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3497cfd42635fc8b14585955975780061c49e487ee558149177662972e161557/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:50:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3497cfd42635fc8b14585955975780061c49e487ee558149177662972e161557/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:50:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3497cfd42635fc8b14585955975780061c49e487ee558149177662972e161557/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:50:48 compute-0 podman[314727]: 2026-01-20 14:50:48.82201618 +0000 UTC m=+0.020600926 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:50:48 compute-0 podman[314727]: 2026-01-20 14:50:48.923264365 +0000 UTC m=+0.121849171 container init d6e6450b86eff52d4b99068c4c1c4baf22e41fbc9e2149b20484d6d03d26d72e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_blackwell, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef)
Jan 20 14:50:48 compute-0 podman[314727]: 2026-01-20 14:50:48.933826999 +0000 UTC m=+0.132411725 container start d6e6450b86eff52d4b99068c4c1c4baf22e41fbc9e2149b20484d6d03d26d72e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_blackwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:50:48 compute-0 podman[314727]: 2026-01-20 14:50:48.937753035 +0000 UTC m=+0.136337831 container attach d6e6450b86eff52d4b99068c4c1c4baf22e41fbc9e2149b20484d6d03d26d72e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_blackwell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:50:49 compute-0 ceph-mon[74360]: pgmap v1895: 321 pgs: 321 active+clean; 366 MiB data, 993 MiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 5.7 MiB/s wr, 289 op/s
Jan 20 14:50:49 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e254 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:50:49 compute-0 quirky_blackwell[314744]: {
Jan 20 14:50:49 compute-0 quirky_blackwell[314744]:     "0": [
Jan 20 14:50:49 compute-0 quirky_blackwell[314744]:         {
Jan 20 14:50:49 compute-0 quirky_blackwell[314744]:             "devices": [
Jan 20 14:50:49 compute-0 quirky_blackwell[314744]:                 "/dev/loop3"
Jan 20 14:50:49 compute-0 quirky_blackwell[314744]:             ],
Jan 20 14:50:49 compute-0 quirky_blackwell[314744]:             "lv_name": "ceph_lv0",
Jan 20 14:50:49 compute-0 quirky_blackwell[314744]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:50:49 compute-0 quirky_blackwell[314744]:             "lv_size": "7511998464",
Jan 20 14:50:49 compute-0 quirky_blackwell[314744]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e399cf45-e6b6-5393-99f1-75c601d3f188,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afc3740a-ccee-46f8-b343-22648cc89376,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 20 14:50:49 compute-0 quirky_blackwell[314744]:             "lv_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 14:50:49 compute-0 quirky_blackwell[314744]:             "name": "ceph_lv0",
Jan 20 14:50:49 compute-0 quirky_blackwell[314744]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:50:49 compute-0 quirky_blackwell[314744]:             "tags": {
Jan 20 14:50:49 compute-0 quirky_blackwell[314744]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:50:49 compute-0 quirky_blackwell[314744]:                 "ceph.block_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 14:50:49 compute-0 quirky_blackwell[314744]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 14:50:49 compute-0 quirky_blackwell[314744]:                 "ceph.cluster_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 14:50:49 compute-0 quirky_blackwell[314744]:                 "ceph.cluster_name": "ceph",
Jan 20 14:50:49 compute-0 quirky_blackwell[314744]:                 "ceph.crush_device_class": "",
Jan 20 14:50:49 compute-0 quirky_blackwell[314744]:                 "ceph.encrypted": "0",
Jan 20 14:50:49 compute-0 quirky_blackwell[314744]:                 "ceph.osd_fsid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 14:50:49 compute-0 quirky_blackwell[314744]:                 "ceph.osd_id": "0",
Jan 20 14:50:49 compute-0 quirky_blackwell[314744]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 14:50:49 compute-0 quirky_blackwell[314744]:                 "ceph.type": "block",
Jan 20 14:50:49 compute-0 quirky_blackwell[314744]:                 "ceph.vdo": "0"
Jan 20 14:50:49 compute-0 quirky_blackwell[314744]:             },
Jan 20 14:50:49 compute-0 quirky_blackwell[314744]:             "type": "block",
Jan 20 14:50:49 compute-0 quirky_blackwell[314744]:             "vg_name": "ceph_vg0"
Jan 20 14:50:49 compute-0 quirky_blackwell[314744]:         }
Jan 20 14:50:49 compute-0 quirky_blackwell[314744]:     ]
Jan 20 14:50:49 compute-0 quirky_blackwell[314744]: }
Jan 20 14:50:49 compute-0 systemd[1]: libpod-d6e6450b86eff52d4b99068c4c1c4baf22e41fbc9e2149b20484d6d03d26d72e.scope: Deactivated successfully.
Jan 20 14:50:49 compute-0 podman[314727]: 2026-01-20 14:50:49.70004221 +0000 UTC m=+0.898626956 container died d6e6450b86eff52d4b99068c4c1c4baf22e41fbc9e2149b20484d6d03d26d72e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_blackwell, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 14:50:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-3497cfd42635fc8b14585955975780061c49e487ee558149177662972e161557-merged.mount: Deactivated successfully.
Jan 20 14:50:49 compute-0 podman[314727]: 2026-01-20 14:50:49.760353293 +0000 UTC m=+0.958938009 container remove d6e6450b86eff52d4b99068c4c1c4baf22e41fbc9e2149b20484d6d03d26d72e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_blackwell, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True)
Jan 20 14:50:49 compute-0 systemd[1]: libpod-conmon-d6e6450b86eff52d4b99068c4c1c4baf22e41fbc9e2149b20484d6d03d26d72e.scope: Deactivated successfully.
Jan 20 14:50:49 compute-0 sudo[314622]: pam_unix(sudo:session): session closed for user root
Jan 20 14:50:49 compute-0 sudo[314768]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:50:49 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1896: 321 pgs: 321 active+clean; 368 MiB data, 997 MiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 5.5 MiB/s wr, 261 op/s
Jan 20 14:50:49 compute-0 sudo[314768]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:50:49 compute-0 sudo[314768]: pam_unix(sudo:session): session closed for user root
Jan 20 14:50:49 compute-0 sudo[314793]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:50:49 compute-0 sudo[314793]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:50:49 compute-0 sudo[314793]: pam_unix(sudo:session): session closed for user root
Jan 20 14:50:50 compute-0 sudo[314817]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:50:50 compute-0 sudo[314817]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:50:50 compute-0 sudo[314817]: pam_unix(sudo:session): session closed for user root
Jan 20 14:50:50 compute-0 sudo[314826]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:50:50 compute-0 sudo[314826]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:50:50 compute-0 sudo[314826]: pam_unix(sudo:session): session closed for user root
Jan 20 14:50:50 compute-0 sudo[314868]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:50:50 compute-0 sudo[314868]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:50:50 compute-0 sudo[314868]: pam_unix(sudo:session): session closed for user root
Jan 20 14:50:50 compute-0 sudo[314872]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- raw list --format json
Jan 20 14:50:50 compute-0 sudo[314872]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:50:50 compute-0 ceph-mon[74360]: pgmap v1896: 321 pgs: 321 active+clean; 368 MiB data, 997 MiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 5.5 MiB/s wr, 261 op/s
Jan 20 14:50:50 compute-0 podman[314959]: 2026-01-20 14:50:50.465080189 +0000 UTC m=+0.036027281 container create 0ab679168bdb59417a8e1be063dd2e6fe6dabbbffee38d53698685bc4872da04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_williams, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:50:50 compute-0 systemd[1]: Started libpod-conmon-0ab679168bdb59417a8e1be063dd2e6fe6dabbbffee38d53698685bc4872da04.scope.
Jan 20 14:50:50 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:50:50 compute-0 podman[314959]: 2026-01-20 14:50:50.525267229 +0000 UTC m=+0.096214331 container init 0ab679168bdb59417a8e1be063dd2e6fe6dabbbffee38d53698685bc4872da04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_williams, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:50:50 compute-0 podman[314959]: 2026-01-20 14:50:50.531869956 +0000 UTC m=+0.102817048 container start 0ab679168bdb59417a8e1be063dd2e6fe6dabbbffee38d53698685bc4872da04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_williams, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 14:50:50 compute-0 podman[314959]: 2026-01-20 14:50:50.534988591 +0000 UTC m=+0.105935683 container attach 0ab679168bdb59417a8e1be063dd2e6fe6dabbbffee38d53698685bc4872da04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_williams, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 20 14:50:50 compute-0 systemd[1]: libpod-0ab679168bdb59417a8e1be063dd2e6fe6dabbbffee38d53698685bc4872da04.scope: Deactivated successfully.
Jan 20 14:50:50 compute-0 boring_williams[314975]: 167 167
Jan 20 14:50:50 compute-0 conmon[314975]: conmon 0ab679168bdb59417a8e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0ab679168bdb59417a8e1be063dd2e6fe6dabbbffee38d53698685bc4872da04.scope/container/memory.events
Jan 20 14:50:50 compute-0 podman[314959]: 2026-01-20 14:50:50.53871769 +0000 UTC m=+0.109664782 container died 0ab679168bdb59417a8e1be063dd2e6fe6dabbbffee38d53698685bc4872da04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_williams, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef)
Jan 20 14:50:50 compute-0 podman[314959]: 2026-01-20 14:50:50.449591942 +0000 UTC m=+0.020539054 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:50:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-d61b17d58fb75d810de384e1e22fa2adbf3eefc5465ddccb995d560c88d91693-merged.mount: Deactivated successfully.
Jan 20 14:50:50 compute-0 podman[314959]: 2026-01-20 14:50:50.574792531 +0000 UTC m=+0.145739623 container remove 0ab679168bdb59417a8e1be063dd2e6fe6dabbbffee38d53698685bc4872da04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_williams, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 20 14:50:50 compute-0 systemd[1]: libpod-conmon-0ab679168bdb59417a8e1be063dd2e6fe6dabbbffee38d53698685bc4872da04.scope: Deactivated successfully.
Jan 20 14:50:50 compute-0 nova_compute[250018]: 2026-01-20 14:50:50.608 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:50:50 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:50:50 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:50:50 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:50:50.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:50:50 compute-0 podman[314999]: 2026-01-20 14:50:50.729098524 +0000 UTC m=+0.039272258 container create 541db14719125a284b049efe4db5e4d152ae9138b96a420dd89ce3a1b921f1eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_wescoff, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 20 14:50:50 compute-0 systemd[1]: Started libpod-conmon-541db14719125a284b049efe4db5e4d152ae9138b96a420dd89ce3a1b921f1eb.scope.
Jan 20 14:50:50 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:50:50 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:50:50 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:50:50.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:50:50 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:50:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82bf14ae77c4b2615a747de34799d2ab31d8b7cfa7c242ed1b31383764510b6d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:50:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82bf14ae77c4b2615a747de34799d2ab31d8b7cfa7c242ed1b31383764510b6d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:50:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82bf14ae77c4b2615a747de34799d2ab31d8b7cfa7c242ed1b31383764510b6d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:50:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82bf14ae77c4b2615a747de34799d2ab31d8b7cfa7c242ed1b31383764510b6d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:50:50 compute-0 podman[314999]: 2026-01-20 14:50:50.802120449 +0000 UTC m=+0.112294183 container init 541db14719125a284b049efe4db5e4d152ae9138b96a420dd89ce3a1b921f1eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_wescoff, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:50:50 compute-0 podman[314999]: 2026-01-20 14:50:50.710305539 +0000 UTC m=+0.020479283 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:50:50 compute-0 podman[314999]: 2026-01-20 14:50:50.812500339 +0000 UTC m=+0.122674073 container start 541db14719125a284b049efe4db5e4d152ae9138b96a420dd89ce3a1b921f1eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_wescoff, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:50:50 compute-0 podman[314999]: 2026-01-20 14:50:50.816173647 +0000 UTC m=+0.126347381 container attach 541db14719125a284b049efe4db5e4d152ae9138b96a420dd89ce3a1b921f1eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_wescoff, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:50:51 compute-0 nova_compute[250018]: 2026-01-20 14:50:51.288 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:50:51 compute-0 lucid_wescoff[315015]: {
Jan 20 14:50:51 compute-0 lucid_wescoff[315015]:     "afc3740a-ccee-46f8-b343-22648cc89376": {
Jan 20 14:50:51 compute-0 lucid_wescoff[315015]:         "ceph_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 14:50:51 compute-0 lucid_wescoff[315015]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 20 14:50:51 compute-0 lucid_wescoff[315015]:         "osd_id": 0,
Jan 20 14:50:51 compute-0 lucid_wescoff[315015]:         "osd_uuid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 14:50:51 compute-0 lucid_wescoff[315015]:         "type": "bluestore"
Jan 20 14:50:51 compute-0 lucid_wescoff[315015]:     }
Jan 20 14:50:51 compute-0 lucid_wescoff[315015]: }
Jan 20 14:50:51 compute-0 systemd[1]: libpod-541db14719125a284b049efe4db5e4d152ae9138b96a420dd89ce3a1b921f1eb.scope: Deactivated successfully.
Jan 20 14:50:51 compute-0 podman[314999]: 2026-01-20 14:50:51.623155766 +0000 UTC m=+0.933329510 container died 541db14719125a284b049efe4db5e4d152ae9138b96a420dd89ce3a1b921f1eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_wescoff, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:50:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-82bf14ae77c4b2615a747de34799d2ab31d8b7cfa7c242ed1b31383764510b6d-merged.mount: Deactivated successfully.
Jan 20 14:50:51 compute-0 podman[314999]: 2026-01-20 14:50:51.676882171 +0000 UTC m=+0.987055905 container remove 541db14719125a284b049efe4db5e4d152ae9138b96a420dd89ce3a1b921f1eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_wescoff, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 20 14:50:51 compute-0 systemd[1]: libpod-conmon-541db14719125a284b049efe4db5e4d152ae9138b96a420dd89ce3a1b921f1eb.scope: Deactivated successfully.
Jan 20 14:50:51 compute-0 sudo[314872]: pam_unix(sudo:session): session closed for user root
Jan 20 14:50:51 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 14:50:51 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:50:51 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 14:50:51 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:50:51 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 6a76beca-fa92-4609-82da-44867681b78e does not exist
Jan 20 14:50:51 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 5defec3e-2dc4-4a1e-bcc8-5b2e1a1980d1 does not exist
Jan 20 14:50:51 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 14162ef1-9c45-4c88-8167-fff6b52bdf1b does not exist
Jan 20 14:50:51 compute-0 sudo[315048]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:50:51 compute-0 sudo[315048]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:50:51 compute-0 sudo[315048]: pam_unix(sudo:session): session closed for user root
Jan 20 14:50:51 compute-0 sudo[315073]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 14:50:51 compute-0 sudo[315073]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:50:51 compute-0 sudo[315073]: pam_unix(sudo:session): session closed for user root
Jan 20 14:50:51 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1897: 321 pgs: 321 active+clean; 372 MiB data, 998 MiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 5.1 MiB/s wr, 240 op/s
Jan 20 14:50:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:50:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:50:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:50:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:50:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:50:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:50:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Optimize plan auto_2026-01-20_14:50:52
Jan 20 14:50:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 14:50:52 compute-0 ceph-mgr[74653]: [balancer INFO root] do_upmap
Jan 20 14:50:52 compute-0 ceph-mgr[74653]: [balancer INFO root] pools ['images', 'default.rgw.control', 'volumes', 'vms', 'default.rgw.log', '.rgw.root', 'backups', 'default.rgw.meta', '.mgr', 'cephfs.cephfs.data', 'cephfs.cephfs.meta']
Jan 20 14:50:52 compute-0 ceph-mgr[74653]: [balancer INFO root] prepared 0/10 changes
Jan 20 14:50:52 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:50:52 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:50:52 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:50:52.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:50:52 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:50:52 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:50:52 compute-0 ceph-mon[74360]: pgmap v1897: 321 pgs: 321 active+clean; 372 MiB data, 998 MiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 5.1 MiB/s wr, 240 op/s
Jan 20 14:50:52 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:50:52 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:50:52 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:50:52.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:50:53 compute-0 nova_compute[250018]: 2026-01-20 14:50:53.246 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:50:53 compute-0 nova_compute[250018]: 2026-01-20 14:50:53.677 250022 DEBUG nova.virt.libvirt.driver [None req-26ab0780-1dff-40c9-b541-cb8829153de8 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Jan 20 14:50:53 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1898: 321 pgs: 321 active+clean; 378 MiB data, 998 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 5.7 MiB/s wr, 201 op/s
Jan 20 14:50:54 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e254 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:50:54 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:50:54 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:50:54 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:50:54.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:50:54 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:50:54 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:50:54 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:50:54.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:50:54 compute-0 ceph-mon[74360]: pgmap v1898: 321 pgs: 321 active+clean; 378 MiB data, 998 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 5.7 MiB/s wr, 201 op/s
Jan 20 14:50:55 compute-0 nova_compute[250018]: 2026-01-20 14:50:55.614 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:50:55 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1899: 321 pgs: 321 active+clean; 402 MiB data, 1016 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 6.1 MiB/s wr, 211 op/s
Jan 20 14:50:55 compute-0 kernel: tap7869c4f4-45 (unregistering): left promiscuous mode
Jan 20 14:50:55 compute-0 NetworkManager[48960]: <info>  [1768920655.9510] device (tap7869c4f4-45): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 14:50:55 compute-0 ovn_controller[148666]: 2026-01-20T14:50:55Z|00380|binding|INFO|Releasing lport 7869c4f4-4542-4bf0-aebd-eeb2d3ce94f9 from this chassis (sb_readonly=0)
Jan 20 14:50:55 compute-0 ovn_controller[148666]: 2026-01-20T14:50:55Z|00381|binding|INFO|Setting lport 7869c4f4-4542-4bf0-aebd-eeb2d3ce94f9 down in Southbound
Jan 20 14:50:55 compute-0 nova_compute[250018]: 2026-01-20 14:50:55.959 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:50:55 compute-0 ovn_controller[148666]: 2026-01-20T14:50:55Z|00382|binding|INFO|Removing iface tap7869c4f4-45 ovn-installed in OVS
Jan 20 14:50:55 compute-0 nova_compute[250018]: 2026-01-20 14:50:55.961 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:50:55 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:50:55.967 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7f:92:68 10.100.0.11'], port_security=['fa:16:3e:7f:92:68 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'c3b4d4c6-c42f-4abc-9c01-89ec3e10c677', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-79184781-1f23-4584-87de-08e262242488', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0a29915e0dd2403fbd7b7e847696b00a', 'neutron:revision_number': '4', 'neutron:security_group_ids': '30ec24b7-15ba-4aeb-9785-539071729f77', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6b73ab05-b29f-401a-84a5-ea1a96103f33, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=7869c4f4-4542-4bf0-aebd-eeb2d3ce94f9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:50:55 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:50:55.969 160071 INFO neutron.agent.ovn.metadata.agent [-] Port 7869c4f4-4542-4bf0-aebd-eeb2d3ce94f9 in datapath 79184781-1f23-4584-87de-08e262242488 unbound from our chassis
Jan 20 14:50:55 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:50:55.972 160071 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 79184781-1f23-4584-87de-08e262242488, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 14:50:55 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:50:55.973 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[1babfd2b-5848-4ec5-bcb6-c0b32fc4979e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:50:55 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:50:55.974 160071 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-79184781-1f23-4584-87de-08e262242488 namespace which is not needed anymore
Jan 20 14:50:55 compute-0 nova_compute[250018]: 2026-01-20 14:50:55.981 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:50:56 compute-0 systemd[1]: machine-qemu\x2d49\x2dinstance\x2d0000006e.scope: Deactivated successfully.
Jan 20 14:50:56 compute-0 systemd[1]: machine-qemu\x2d49\x2dinstance\x2d0000006e.scope: Consumed 14.917s CPU time.
Jan 20 14:50:56 compute-0 systemd-machined[216401]: Machine qemu-49-instance-0000006e terminated.
Jan 20 14:50:56 compute-0 neutron-haproxy-ovnmeta-79184781-1f23-4584-87de-08e262242488[313985]: [NOTICE]   (313989) : haproxy version is 2.8.14-c23fe91
Jan 20 14:50:56 compute-0 neutron-haproxy-ovnmeta-79184781-1f23-4584-87de-08e262242488[313985]: [NOTICE]   (313989) : path to executable is /usr/sbin/haproxy
Jan 20 14:50:56 compute-0 neutron-haproxy-ovnmeta-79184781-1f23-4584-87de-08e262242488[313985]: [WARNING]  (313989) : Exiting Master process...
Jan 20 14:50:56 compute-0 neutron-haproxy-ovnmeta-79184781-1f23-4584-87de-08e262242488[313985]: [ALERT]    (313989) : Current worker (313991) exited with code 143 (Terminated)
Jan 20 14:50:56 compute-0 neutron-haproxy-ovnmeta-79184781-1f23-4584-87de-08e262242488[313985]: [WARNING]  (313989) : All workers exited. Exiting... (0)
Jan 20 14:50:56 compute-0 systemd[1]: libpod-7866b45a9bdda5092f25f44c1000777e6bd8ae91584097deca054a90b6920641.scope: Deactivated successfully.
Jan 20 14:50:56 compute-0 podman[315126]: 2026-01-20 14:50:56.142766958 +0000 UTC m=+0.062625937 container died 7866b45a9bdda5092f25f44c1000777e6bd8ae91584097deca054a90b6920641 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-79184781-1f23-4584-87de-08e262242488, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:50:56 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7866b45a9bdda5092f25f44c1000777e6bd8ae91584097deca054a90b6920641-userdata-shm.mount: Deactivated successfully.
Jan 20 14:50:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-20b0cbc9de21a864c844b5218cb858181d4219572eacadeaffcd39a810600e1f-merged.mount: Deactivated successfully.
Jan 20 14:50:56 compute-0 NetworkManager[48960]: <info>  [1768920656.1792] manager: (tap7869c4f4-45): new Tun device (/org/freedesktop/NetworkManager/Devices/196)
Jan 20 14:50:56 compute-0 podman[315126]: 2026-01-20 14:50:56.182979819 +0000 UTC m=+0.102838788 container cleanup 7866b45a9bdda5092f25f44c1000777e6bd8ae91584097deca054a90b6920641 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-79184781-1f23-4584-87de-08e262242488, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202)
Jan 20 14:50:56 compute-0 systemd[1]: libpod-conmon-7866b45a9bdda5092f25f44c1000777e6bd8ae91584097deca054a90b6920641.scope: Deactivated successfully.
Jan 20 14:50:56 compute-0 podman[315159]: 2026-01-20 14:50:56.247248019 +0000 UTC m=+0.042104484 container remove 7866b45a9bdda5092f25f44c1000777e6bd8ae91584097deca054a90b6920641 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-79184781-1f23-4584-87de-08e262242488, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Jan 20 14:50:56 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:50:56.254 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[73e00c57-3756-4787-92c1-73b1b7678b74]: (4, ('Tue Jan 20 02:50:56 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-79184781-1f23-4584-87de-08e262242488 (7866b45a9bdda5092f25f44c1000777e6bd8ae91584097deca054a90b6920641)\n7866b45a9bdda5092f25f44c1000777e6bd8ae91584097deca054a90b6920641\nTue Jan 20 02:50:56 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-79184781-1f23-4584-87de-08e262242488 (7866b45a9bdda5092f25f44c1000777e6bd8ae91584097deca054a90b6920641)\n7866b45a9bdda5092f25f44c1000777e6bd8ae91584097deca054a90b6920641\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:50:56 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:50:56.257 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[1f1a4085-4840-4847-b42c-3940126eda43]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:50:56 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:50:56.259 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap79184781-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:50:56 compute-0 nova_compute[250018]: 2026-01-20 14:50:56.261 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:50:56 compute-0 kernel: tap79184781-10: left promiscuous mode
Jan 20 14:50:56 compute-0 nova_compute[250018]: 2026-01-20 14:50:56.282 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:50:56 compute-0 nova_compute[250018]: 2026-01-20 14:50:56.283 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:50:56 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:50:56.286 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[2d76fd58-401e-4b61-b360-c139adddf89d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:50:56 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:50:56.306 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[a5876a72-2975-4b72-b54c-9d4dc483660c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:50:56 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:50:56.308 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[073f1c14-7256-4fa9-aee5-c296044bff99]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:50:56 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:50:56.330 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[8d6d896e-acb9-4171-a106-d118995525a2]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 657221, 'reachable_time': 44420, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 315178, 'error': None, 'target': 'ovnmeta-79184781-1f23-4584-87de-08e262242488', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:50:56 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:50:56.333 160517 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-79184781-1f23-4584-87de-08e262242488 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 14:50:56 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:50:56.333 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[e0575c30-56e1-45a3-acb1-59e72dae7fc6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:50:56 compute-0 systemd[1]: run-netns-ovnmeta\x2d79184781\x2d1f23\x2d4584\x2d87de\x2d08e262242488.mount: Deactivated successfully.
Jan 20 14:50:56 compute-0 nova_compute[250018]: 2026-01-20 14:50:56.691 250022 INFO nova.virt.libvirt.driver [None req-26ab0780-1dff-40c9-b541-cb8829153de8 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] Instance shutdown successfully after 13 seconds.
Jan 20 14:50:56 compute-0 nova_compute[250018]: 2026-01-20 14:50:56.699 250022 INFO nova.virt.libvirt.driver [-] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] Instance destroyed successfully.
Jan 20 14:50:56 compute-0 nova_compute[250018]: 2026-01-20 14:50:56.699 250022 DEBUG nova.objects.instance [None req-26ab0780-1dff-40c9-b541-cb8829153de8 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Lazy-loading 'numa_topology' on Instance uuid c3b4d4c6-c42f-4abc-9c01-89ec3e10c677 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:50:56 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:50:56 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:50:56 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:50:56.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:50:56 compute-0 nova_compute[250018]: 2026-01-20 14:50:56.729 250022 INFO nova.virt.libvirt.driver [None req-26ab0780-1dff-40c9-b541-cb8829153de8 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] Attempting a stable device rescue
Jan 20 14:50:56 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:50:56 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:50:56 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:50:56.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:50:56 compute-0 ceph-mon[74360]: pgmap v1899: 321 pgs: 321 active+clean; 402 MiB data, 1016 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 6.1 MiB/s wr, 211 op/s
Jan 20 14:50:57 compute-0 nova_compute[250018]: 2026-01-20 14:50:57.073 250022 DEBUG nova.virt.libvirt.driver [None req-26ab0780-1dff-40c9-b541-cb8829153de8 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}, 'disk.rescue': {'bus': 'scsi', 'dev': 'sdb', 'type': 'disk'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314
Jan 20 14:50:57 compute-0 nova_compute[250018]: 2026-01-20 14:50:57.078 250022 DEBUG nova.virt.libvirt.driver [None req-26ab0780-1dff-40c9-b541-cb8829153de8 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Jan 20 14:50:57 compute-0 nova_compute[250018]: 2026-01-20 14:50:57.078 250022 INFO nova.virt.libvirt.driver [None req-26ab0780-1dff-40c9-b541-cb8829153de8 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] Creating image(s)
Jan 20 14:50:57 compute-0 nova_compute[250018]: 2026-01-20 14:50:57.110 250022 DEBUG nova.storage.rbd_utils [None req-26ab0780-1dff-40c9-b541-cb8829153de8 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] rbd image c3b4d4c6-c42f-4abc-9c01-89ec3e10c677_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:50:57 compute-0 nova_compute[250018]: 2026-01-20 14:50:57.114 250022 DEBUG nova.objects.instance [None req-26ab0780-1dff-40c9-b541-cb8829153de8 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Lazy-loading 'trusted_certs' on Instance uuid c3b4d4c6-c42f-4abc-9c01-89ec3e10c677 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:50:57 compute-0 nova_compute[250018]: 2026-01-20 14:50:57.178 250022 DEBUG nova.storage.rbd_utils [None req-26ab0780-1dff-40c9-b541-cb8829153de8 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] rbd image c3b4d4c6-c42f-4abc-9c01-89ec3e10c677_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:50:57 compute-0 nova_compute[250018]: 2026-01-20 14:50:57.206 250022 DEBUG nova.storage.rbd_utils [None req-26ab0780-1dff-40c9-b541-cb8829153de8 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] rbd image c3b4d4c6-c42f-4abc-9c01-89ec3e10c677_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:50:57 compute-0 nova_compute[250018]: 2026-01-20 14:50:57.209 250022 DEBUG oslo_concurrency.lockutils [None req-26ab0780-1dff-40c9-b541-cb8829153de8 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Acquiring lock "f7bde980556c52c000eb7f0e0e0fb551c7414622" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:50:57 compute-0 nova_compute[250018]: 2026-01-20 14:50:57.210 250022 DEBUG oslo_concurrency.lockutils [None req-26ab0780-1dff-40c9-b541-cb8829153de8 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Lock "f7bde980556c52c000eb7f0e0e0fb551c7414622" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:50:57 compute-0 nova_compute[250018]: 2026-01-20 14:50:57.392 250022 DEBUG nova.virt.libvirt.imagebackend [None req-26ab0780-1dff-40c9-b541-cb8829153de8 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Image locations are: [{'url': 'rbd://e399cf45-e6b6-5393-99f1-75c601d3f188/images/7586ccfe-36ea-40bd-b70d-ce54b80b5faa/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://e399cf45-e6b6-5393-99f1-75c601d3f188/images/7586ccfe-36ea-40bd-b70d-ce54b80b5faa/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Jan 20 14:50:57 compute-0 nova_compute[250018]: 2026-01-20 14:50:57.464 250022 DEBUG nova.virt.libvirt.imagebackend [None req-26ab0780-1dff-40c9-b541-cb8829153de8 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Selected location: {'url': 'rbd://e399cf45-e6b6-5393-99f1-75c601d3f188/images/7586ccfe-36ea-40bd-b70d-ce54b80b5faa/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Jan 20 14:50:57 compute-0 nova_compute[250018]: 2026-01-20 14:50:57.465 250022 DEBUG nova.storage.rbd_utils [None req-26ab0780-1dff-40c9-b541-cb8829153de8 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] cloning images/7586ccfe-36ea-40bd-b70d-ce54b80b5faa@snap to None/c3b4d4c6-c42f-4abc-9c01-89ec3e10c677_disk.rescue clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Jan 20 14:50:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 14:50:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:50:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 14:50:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:50:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:50:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:50:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:50:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:50:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:50:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:50:57 compute-0 nova_compute[250018]: 2026-01-20 14:50:57.611 250022 DEBUG oslo_concurrency.lockutils [None req-26ab0780-1dff-40c9-b541-cb8829153de8 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Lock "f7bde980556c52c000eb7f0e0e0fb551c7414622" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.401s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:50:57 compute-0 nova_compute[250018]: 2026-01-20 14:50:57.671 250022 DEBUG nova.objects.instance [None req-26ab0780-1dff-40c9-b541-cb8829153de8 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Lazy-loading 'migration_context' on Instance uuid c3b4d4c6-c42f-4abc-9c01-89ec3e10c677 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:50:57 compute-0 nova_compute[250018]: 2026-01-20 14:50:57.688 250022 DEBUG nova.virt.libvirt.driver [None req-26ab0780-1dff-40c9-b541-cb8829153de8 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 14:50:57 compute-0 nova_compute[250018]: 2026-01-20 14:50:57.693 250022 DEBUG nova.virt.libvirt.driver [None req-26ab0780-1dff-40c9-b541-cb8829153de8 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] Start _get_guest_xml network_info=[{"id": "7869c4f4-4542-4bf0-aebd-eeb2d3ce94f9", "address": "fa:16:3e:7f:92:68", "network": {"id": "79184781-1f23-4584-87de-08e262242488", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-165460946-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerStableDeviceRescueTest-165460946-network", "vif_mac": "fa:16:3e:7f:92:68"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a29915e0dd2403fbd7b7e847696b00a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7869c4f4-45", "ovs_interfaceid": "7869c4f4-4542-4bf0-aebd-eeb2d3ce94f9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}, 'disk.rescue': {'bus': 'scsi', 'dev': 'sdb', 'type': 'disk'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': '7586ccfe-36ea-40bd-b70d-ce54b80b5faa', 'kernel_id': '', 'ramdisk_id': ''} block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encrypted': False, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'guest_format': None, 'size': 0, 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 14:50:57 compute-0 nova_compute[250018]: 2026-01-20 14:50:57.693 250022 DEBUG nova.objects.instance [None req-26ab0780-1dff-40c9-b541-cb8829153de8 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Lazy-loading 'resources' on Instance uuid c3b4d4c6-c42f-4abc-9c01-89ec3e10c677 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:50:57 compute-0 nova_compute[250018]: 2026-01-20 14:50:57.712 250022 WARNING nova.virt.libvirt.driver [None req-26ab0780-1dff-40c9-b541-cb8829153de8 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:50:57 compute-0 nova_compute[250018]: 2026-01-20 14:50:57.726 250022 DEBUG nova.virt.libvirt.host [None req-26ab0780-1dff-40c9-b541-cb8829153de8 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 14:50:57 compute-0 nova_compute[250018]: 2026-01-20 14:50:57.728 250022 DEBUG nova.virt.libvirt.host [None req-26ab0780-1dff-40c9-b541-cb8829153de8 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 14:50:57 compute-0 nova_compute[250018]: 2026-01-20 14:50:57.732 250022 DEBUG nova.virt.libvirt.host [None req-26ab0780-1dff-40c9-b541-cb8829153de8 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 14:50:57 compute-0 nova_compute[250018]: 2026-01-20 14:50:57.732 250022 DEBUG nova.virt.libvirt.host [None req-26ab0780-1dff-40c9-b541-cb8829153de8 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 14:50:57 compute-0 nova_compute[250018]: 2026-01-20 14:50:57.734 250022 DEBUG nova.virt.libvirt.driver [None req-26ab0780-1dff-40c9-b541-cb8829153de8 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 14:50:57 compute-0 nova_compute[250018]: 2026-01-20 14:50:57.734 250022 DEBUG nova.virt.hardware [None req-26ab0780-1dff-40c9-b541-cb8829153de8 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 14:50:57 compute-0 nova_compute[250018]: 2026-01-20 14:50:57.735 250022 DEBUG nova.virt.hardware [None req-26ab0780-1dff-40c9-b541-cb8829153de8 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 14:50:57 compute-0 nova_compute[250018]: 2026-01-20 14:50:57.736 250022 DEBUG nova.virt.hardware [None req-26ab0780-1dff-40c9-b541-cb8829153de8 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 14:50:57 compute-0 nova_compute[250018]: 2026-01-20 14:50:57.736 250022 DEBUG nova.virt.hardware [None req-26ab0780-1dff-40c9-b541-cb8829153de8 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 14:50:57 compute-0 nova_compute[250018]: 2026-01-20 14:50:57.736 250022 DEBUG nova.virt.hardware [None req-26ab0780-1dff-40c9-b541-cb8829153de8 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 14:50:57 compute-0 nova_compute[250018]: 2026-01-20 14:50:57.737 250022 DEBUG nova.virt.hardware [None req-26ab0780-1dff-40c9-b541-cb8829153de8 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 14:50:57 compute-0 nova_compute[250018]: 2026-01-20 14:50:57.737 250022 DEBUG nova.virt.hardware [None req-26ab0780-1dff-40c9-b541-cb8829153de8 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 14:50:57 compute-0 nova_compute[250018]: 2026-01-20 14:50:57.738 250022 DEBUG nova.virt.hardware [None req-26ab0780-1dff-40c9-b541-cb8829153de8 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 14:50:57 compute-0 nova_compute[250018]: 2026-01-20 14:50:57.738 250022 DEBUG nova.virt.hardware [None req-26ab0780-1dff-40c9-b541-cb8829153de8 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 14:50:57 compute-0 nova_compute[250018]: 2026-01-20 14:50:57.739 250022 DEBUG nova.virt.hardware [None req-26ab0780-1dff-40c9-b541-cb8829153de8 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 14:50:57 compute-0 nova_compute[250018]: 2026-01-20 14:50:57.739 250022 DEBUG nova.virt.hardware [None req-26ab0780-1dff-40c9-b541-cb8829153de8 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 14:50:57 compute-0 nova_compute[250018]: 2026-01-20 14:50:57.740 250022 DEBUG nova.objects.instance [None req-26ab0780-1dff-40c9-b541-cb8829153de8 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Lazy-loading 'vcpu_model' on Instance uuid c3b4d4c6-c42f-4abc-9c01-89ec3e10c677 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:50:57 compute-0 nova_compute[250018]: 2026-01-20 14:50:57.765 250022 DEBUG oslo_concurrency.processutils [None req-26ab0780-1dff-40c9-b541-cb8829153de8 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:50:57 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1900: 321 pgs: 321 active+clean; 405 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 801 KiB/s rd, 4.0 MiB/s wr, 136 op/s
Jan 20 14:50:58 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:50:58 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2081531429' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:50:58 compute-0 nova_compute[250018]: 2026-01-20 14:50:58.223 250022 DEBUG oslo_concurrency.processutils [None req-26ab0780-1dff-40c9-b541-cb8829153de8 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:50:58 compute-0 nova_compute[250018]: 2026-01-20 14:50:58.253 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:50:58 compute-0 nova_compute[250018]: 2026-01-20 14:50:58.257 250022 DEBUG oslo_concurrency.processutils [None req-26ab0780-1dff-40c9-b541-cb8829153de8 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:50:58 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:50:58 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2264256095' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:50:58 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:50:58 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:50:58 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:50:58.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:50:58 compute-0 nova_compute[250018]: 2026-01-20 14:50:58.714 250022 DEBUG oslo_concurrency.processutils [None req-26ab0780-1dff-40c9-b541-cb8829153de8 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:50:58 compute-0 nova_compute[250018]: 2026-01-20 14:50:58.715 250022 DEBUG oslo_concurrency.processutils [None req-26ab0780-1dff-40c9-b541-cb8829153de8 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:50:58 compute-0 ceph-mon[74360]: pgmap v1900: 321 pgs: 321 active+clean; 405 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 801 KiB/s rd, 4.0 MiB/s wr, 136 op/s
Jan 20 14:50:58 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:50:58 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:50:58 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:50:58.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:50:59 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:50:59 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/182888219' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:50:59 compute-0 nova_compute[250018]: 2026-01-20 14:50:59.177 250022 DEBUG oslo_concurrency.processutils [None req-26ab0780-1dff-40c9-b541-cb8829153de8 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:50:59 compute-0 nova_compute[250018]: 2026-01-20 14:50:59.179 250022 DEBUG nova.virt.libvirt.vif [None req-26ab0780-1dff-40c9-b541-cb8829153de8 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:50:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerStableDeviceRescueTest-server-178332738',display_name='tempest-ServerStableDeviceRescueTest-server-178332738',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstabledevicerescuetest-server-178332738',id=110,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:50:32Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0a29915e0dd2403fbd7b7e847696b00a',ramdisk_id='',reservation_id='r-6cagqhwr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerStableDeviceRescueTest-129078052',owner_user_name='tempest-ServerStableDeviceRescueTest-129078052-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:50:38Z,user_data=None,user_id='d85d286ce6224326a0f4a15a06afbfea',uuid=c3b4d4c6-c42f-4abc-9c01-89ec3e10c677,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7869c4f4-4542-4bf0-aebd-eeb2d3ce94f9", "address": "fa:16:3e:7f:92:68", "network": {"id": "79184781-1f23-4584-87de-08e262242488", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-165460946-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerStableDeviceRescueTest-165460946-network", "vif_mac": "fa:16:3e:7f:92:68"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a29915e0dd2403fbd7b7e847696b00a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7869c4f4-45", "ovs_interfaceid": "7869c4f4-4542-4bf0-aebd-eeb2d3ce94f9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 14:50:59 compute-0 nova_compute[250018]: 2026-01-20 14:50:59.179 250022 DEBUG nova.network.os_vif_util [None req-26ab0780-1dff-40c9-b541-cb8829153de8 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Converting VIF {"id": "7869c4f4-4542-4bf0-aebd-eeb2d3ce94f9", "address": "fa:16:3e:7f:92:68", "network": {"id": "79184781-1f23-4584-87de-08e262242488", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-165460946-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerStableDeviceRescueTest-165460946-network", "vif_mac": "fa:16:3e:7f:92:68"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a29915e0dd2403fbd7b7e847696b00a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7869c4f4-45", "ovs_interfaceid": "7869c4f4-4542-4bf0-aebd-eeb2d3ce94f9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:50:59 compute-0 nova_compute[250018]: 2026-01-20 14:50:59.180 250022 DEBUG nova.network.os_vif_util [None req-26ab0780-1dff-40c9-b541-cb8829153de8 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:7f:92:68,bridge_name='br-int',has_traffic_filtering=True,id=7869c4f4-4542-4bf0-aebd-eeb2d3ce94f9,network=Network(79184781-1f23-4584-87de-08e262242488),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7869c4f4-45') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:50:59 compute-0 nova_compute[250018]: 2026-01-20 14:50:59.181 250022 DEBUG nova.objects.instance [None req-26ab0780-1dff-40c9-b541-cb8829153de8 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Lazy-loading 'pci_devices' on Instance uuid c3b4d4c6-c42f-4abc-9c01-89ec3e10c677 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:50:59 compute-0 nova_compute[250018]: 2026-01-20 14:50:59.199 250022 DEBUG nova.virt.libvirt.driver [None req-26ab0780-1dff-40c9-b541-cb8829153de8 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] End _get_guest_xml xml=<domain type="kvm">
Jan 20 14:50:59 compute-0 nova_compute[250018]:   <uuid>c3b4d4c6-c42f-4abc-9c01-89ec3e10c677</uuid>
Jan 20 14:50:59 compute-0 nova_compute[250018]:   <name>instance-0000006e</name>
Jan 20 14:50:59 compute-0 nova_compute[250018]:   <memory>131072</memory>
Jan 20 14:50:59 compute-0 nova_compute[250018]:   <vcpu>1</vcpu>
Jan 20 14:50:59 compute-0 nova_compute[250018]:   <metadata>
Jan 20 14:50:59 compute-0 nova_compute[250018]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 14:50:59 compute-0 nova_compute[250018]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 14:50:59 compute-0 nova_compute[250018]:       <nova:name>tempest-ServerStableDeviceRescueTest-server-178332738</nova:name>
Jan 20 14:50:59 compute-0 nova_compute[250018]:       <nova:creationTime>2026-01-20 14:50:57</nova:creationTime>
Jan 20 14:50:59 compute-0 nova_compute[250018]:       <nova:flavor name="m1.nano">
Jan 20 14:50:59 compute-0 nova_compute[250018]:         <nova:memory>128</nova:memory>
Jan 20 14:50:59 compute-0 nova_compute[250018]:         <nova:disk>1</nova:disk>
Jan 20 14:50:59 compute-0 nova_compute[250018]:         <nova:swap>0</nova:swap>
Jan 20 14:50:59 compute-0 nova_compute[250018]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 14:50:59 compute-0 nova_compute[250018]:         <nova:vcpus>1</nova:vcpus>
Jan 20 14:50:59 compute-0 nova_compute[250018]:       </nova:flavor>
Jan 20 14:50:59 compute-0 nova_compute[250018]:       <nova:owner>
Jan 20 14:50:59 compute-0 nova_compute[250018]:         <nova:user uuid="d85d286ce6224326a0f4a15a06afbfea">tempest-ServerStableDeviceRescueTest-129078052-project-member</nova:user>
Jan 20 14:50:59 compute-0 nova_compute[250018]:         <nova:project uuid="0a29915e0dd2403fbd7b7e847696b00a">tempest-ServerStableDeviceRescueTest-129078052</nova:project>
Jan 20 14:50:59 compute-0 nova_compute[250018]:       </nova:owner>
Jan 20 14:50:59 compute-0 nova_compute[250018]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 14:50:59 compute-0 nova_compute[250018]:       <nova:ports>
Jan 20 14:50:59 compute-0 nova_compute[250018]:         <nova:port uuid="7869c4f4-4542-4bf0-aebd-eeb2d3ce94f9">
Jan 20 14:50:59 compute-0 nova_compute[250018]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Jan 20 14:50:59 compute-0 nova_compute[250018]:         </nova:port>
Jan 20 14:50:59 compute-0 nova_compute[250018]:       </nova:ports>
Jan 20 14:50:59 compute-0 nova_compute[250018]:     </nova:instance>
Jan 20 14:50:59 compute-0 nova_compute[250018]:   </metadata>
Jan 20 14:50:59 compute-0 nova_compute[250018]:   <sysinfo type="smbios">
Jan 20 14:50:59 compute-0 nova_compute[250018]:     <system>
Jan 20 14:50:59 compute-0 nova_compute[250018]:       <entry name="manufacturer">RDO</entry>
Jan 20 14:50:59 compute-0 nova_compute[250018]:       <entry name="product">OpenStack Compute</entry>
Jan 20 14:50:59 compute-0 nova_compute[250018]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 14:50:59 compute-0 nova_compute[250018]:       <entry name="serial">c3b4d4c6-c42f-4abc-9c01-89ec3e10c677</entry>
Jan 20 14:50:59 compute-0 nova_compute[250018]:       <entry name="uuid">c3b4d4c6-c42f-4abc-9c01-89ec3e10c677</entry>
Jan 20 14:50:59 compute-0 nova_compute[250018]:       <entry name="family">Virtual Machine</entry>
Jan 20 14:50:59 compute-0 nova_compute[250018]:     </system>
Jan 20 14:50:59 compute-0 nova_compute[250018]:   </sysinfo>
Jan 20 14:50:59 compute-0 nova_compute[250018]:   <os>
Jan 20 14:50:59 compute-0 nova_compute[250018]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 14:50:59 compute-0 nova_compute[250018]:     <smbios mode="sysinfo"/>
Jan 20 14:50:59 compute-0 nova_compute[250018]:   </os>
Jan 20 14:50:59 compute-0 nova_compute[250018]:   <features>
Jan 20 14:50:59 compute-0 nova_compute[250018]:     <acpi/>
Jan 20 14:50:59 compute-0 nova_compute[250018]:     <apic/>
Jan 20 14:50:59 compute-0 nova_compute[250018]:     <vmcoreinfo/>
Jan 20 14:50:59 compute-0 nova_compute[250018]:   </features>
Jan 20 14:50:59 compute-0 nova_compute[250018]:   <clock offset="utc">
Jan 20 14:50:59 compute-0 nova_compute[250018]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 14:50:59 compute-0 nova_compute[250018]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 14:50:59 compute-0 nova_compute[250018]:     <timer name="hpet" present="no"/>
Jan 20 14:50:59 compute-0 nova_compute[250018]:   </clock>
Jan 20 14:50:59 compute-0 nova_compute[250018]:   <cpu mode="custom" match="exact">
Jan 20 14:50:59 compute-0 nova_compute[250018]:     <model>Nehalem</model>
Jan 20 14:50:59 compute-0 nova_compute[250018]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 14:50:59 compute-0 nova_compute[250018]:   </cpu>
Jan 20 14:50:59 compute-0 nova_compute[250018]:   <devices>
Jan 20 14:50:59 compute-0 nova_compute[250018]:     <disk type="network" device="disk">
Jan 20 14:50:59 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 14:50:59 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/c3b4d4c6-c42f-4abc-9c01-89ec3e10c677_disk">
Jan 20 14:50:59 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:50:59 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:50:59 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:50:59 compute-0 nova_compute[250018]:       </source>
Jan 20 14:50:59 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 14:50:59 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:50:59 compute-0 nova_compute[250018]:       </auth>
Jan 20 14:50:59 compute-0 nova_compute[250018]:       <target dev="vda" bus="virtio"/>
Jan 20 14:50:59 compute-0 nova_compute[250018]:     </disk>
Jan 20 14:50:59 compute-0 nova_compute[250018]:     <disk type="network" device="cdrom">
Jan 20 14:50:59 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 14:50:59 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/c3b4d4c6-c42f-4abc-9c01-89ec3e10c677_disk.config">
Jan 20 14:50:59 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:50:59 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:50:59 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:50:59 compute-0 nova_compute[250018]:       </source>
Jan 20 14:50:59 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 14:50:59 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:50:59 compute-0 nova_compute[250018]:       </auth>
Jan 20 14:50:59 compute-0 nova_compute[250018]:       <target dev="sda" bus="sata"/>
Jan 20 14:50:59 compute-0 nova_compute[250018]:     </disk>
Jan 20 14:50:59 compute-0 nova_compute[250018]:     <disk type="network" device="disk">
Jan 20 14:50:59 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 14:50:59 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/c3b4d4c6-c42f-4abc-9c01-89ec3e10c677_disk.rescue">
Jan 20 14:50:59 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:50:59 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:50:59 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:50:59 compute-0 nova_compute[250018]:       </source>
Jan 20 14:50:59 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 14:50:59 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:50:59 compute-0 nova_compute[250018]:       </auth>
Jan 20 14:50:59 compute-0 nova_compute[250018]:       <target dev="sdb" bus="scsi"/>
Jan 20 14:50:59 compute-0 nova_compute[250018]:       <boot order="1"/>
Jan 20 14:50:59 compute-0 nova_compute[250018]:     </disk>
Jan 20 14:50:59 compute-0 nova_compute[250018]:     <interface type="ethernet">
Jan 20 14:50:59 compute-0 nova_compute[250018]:       <mac address="fa:16:3e:7f:92:68"/>
Jan 20 14:50:59 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 14:50:59 compute-0 nova_compute[250018]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 14:50:59 compute-0 nova_compute[250018]:       <mtu size="1442"/>
Jan 20 14:50:59 compute-0 nova_compute[250018]:       <target dev="tap7869c4f4-45"/>
Jan 20 14:50:59 compute-0 nova_compute[250018]:     </interface>
Jan 20 14:50:59 compute-0 nova_compute[250018]:     <serial type="pty">
Jan 20 14:50:59 compute-0 nova_compute[250018]:       <log file="/var/lib/nova/instances/c3b4d4c6-c42f-4abc-9c01-89ec3e10c677/console.log" append="off"/>
Jan 20 14:50:59 compute-0 nova_compute[250018]:     </serial>
Jan 20 14:50:59 compute-0 nova_compute[250018]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 14:50:59 compute-0 nova_compute[250018]:     <video>
Jan 20 14:50:59 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 14:50:59 compute-0 nova_compute[250018]:     </video>
Jan 20 14:50:59 compute-0 nova_compute[250018]:     <input type="tablet" bus="usb"/>
Jan 20 14:50:59 compute-0 nova_compute[250018]:     <rng model="virtio">
Jan 20 14:50:59 compute-0 nova_compute[250018]:       <backend model="random">/dev/urandom</backend>
Jan 20 14:50:59 compute-0 nova_compute[250018]:     </rng>
Jan 20 14:50:59 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root"/>
Jan 20 14:50:59 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:50:59 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:50:59 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:50:59 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:50:59 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:50:59 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:50:59 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:50:59 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:50:59 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:50:59 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:50:59 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:50:59 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:50:59 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:50:59 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:50:59 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:50:59 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:50:59 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:50:59 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:50:59 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:50:59 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:50:59 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:50:59 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:50:59 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:50:59 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:50:59 compute-0 nova_compute[250018]:     <controller type="usb" index="0"/>
Jan 20 14:50:59 compute-0 nova_compute[250018]:     <memballoon model="virtio">
Jan 20 14:50:59 compute-0 nova_compute[250018]:       <stats period="10"/>
Jan 20 14:50:59 compute-0 nova_compute[250018]:     </memballoon>
Jan 20 14:50:59 compute-0 nova_compute[250018]:   </devices>
Jan 20 14:50:59 compute-0 nova_compute[250018]: </domain>
Jan 20 14:50:59 compute-0 nova_compute[250018]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 14:50:59 compute-0 nova_compute[250018]: 2026-01-20 14:50:59.207 250022 INFO nova.virt.libvirt.driver [-] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] Instance destroyed successfully.
Jan 20 14:50:59 compute-0 nova_compute[250018]: 2026-01-20 14:50:59.278 250022 DEBUG nova.virt.libvirt.driver [None req-26ab0780-1dff-40c9-b541-cb8829153de8 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:50:59 compute-0 nova_compute[250018]: 2026-01-20 14:50:59.280 250022 DEBUG nova.virt.libvirt.driver [None req-26ab0780-1dff-40c9-b541-cb8829153de8 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:50:59 compute-0 nova_compute[250018]: 2026-01-20 14:50:59.280 250022 DEBUG nova.virt.libvirt.driver [None req-26ab0780-1dff-40c9-b541-cb8829153de8 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] No BDM found with device name sdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:50:59 compute-0 nova_compute[250018]: 2026-01-20 14:50:59.280 250022 DEBUG nova.virt.libvirt.driver [None req-26ab0780-1dff-40c9-b541-cb8829153de8 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] No VIF found with MAC fa:16:3e:7f:92:68, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 14:50:59 compute-0 nova_compute[250018]: 2026-01-20 14:50:59.281 250022 INFO nova.virt.libvirt.driver [None req-26ab0780-1dff-40c9-b541-cb8829153de8 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] Using config drive
Jan 20 14:50:59 compute-0 nova_compute[250018]: 2026-01-20 14:50:59.311 250022 DEBUG nova.storage.rbd_utils [None req-26ab0780-1dff-40c9-b541-cb8829153de8 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] rbd image c3b4d4c6-c42f-4abc-9c01-89ec3e10c677_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:50:59 compute-0 nova_compute[250018]: 2026-01-20 14:50:59.334 250022 DEBUG nova.objects.instance [None req-26ab0780-1dff-40c9-b541-cb8829153de8 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Lazy-loading 'ec2_ids' on Instance uuid c3b4d4c6-c42f-4abc-9c01-89ec3e10c677 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:50:59 compute-0 nova_compute[250018]: 2026-01-20 14:50:59.367 250022 DEBUG nova.objects.instance [None req-26ab0780-1dff-40c9-b541-cb8829153de8 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Lazy-loading 'keypairs' on Instance uuid c3b4d4c6-c42f-4abc-9c01-89ec3e10c677 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:50:59 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e254 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:50:59 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/2081531429' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:50:59 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/2264256095' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:50:59 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/182888219' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:50:59 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1901: 321 pgs: 321 active+clean; 405 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 347 KiB/s rd, 2.5 MiB/s wr, 89 op/s
Jan 20 14:51:00 compute-0 nova_compute[250018]: 2026-01-20 14:51:00.081 250022 DEBUG nova.compute.manager [req-b512af3b-a128-4dcc-b948-c3c2c6de7b2f req-5033908d-41e6-44ce-81b4-a9dd86cbdada 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] Received event network-vif-unplugged-7869c4f4-4542-4bf0-aebd-eeb2d3ce94f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:51:00 compute-0 nova_compute[250018]: 2026-01-20 14:51:00.082 250022 DEBUG oslo_concurrency.lockutils [req-b512af3b-a128-4dcc-b948-c3c2c6de7b2f req-5033908d-41e6-44ce-81b4-a9dd86cbdada 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "c3b4d4c6-c42f-4abc-9c01-89ec3e10c677-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:51:00 compute-0 nova_compute[250018]: 2026-01-20 14:51:00.082 250022 DEBUG oslo_concurrency.lockutils [req-b512af3b-a128-4dcc-b948-c3c2c6de7b2f req-5033908d-41e6-44ce-81b4-a9dd86cbdada 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "c3b4d4c6-c42f-4abc-9c01-89ec3e10c677-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:51:00 compute-0 nova_compute[250018]: 2026-01-20 14:51:00.082 250022 DEBUG oslo_concurrency.lockutils [req-b512af3b-a128-4dcc-b948-c3c2c6de7b2f req-5033908d-41e6-44ce-81b4-a9dd86cbdada 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "c3b4d4c6-c42f-4abc-9c01-89ec3e10c677-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:51:00 compute-0 nova_compute[250018]: 2026-01-20 14:51:00.083 250022 DEBUG nova.compute.manager [req-b512af3b-a128-4dcc-b948-c3c2c6de7b2f req-5033908d-41e6-44ce-81b4-a9dd86cbdada 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] No waiting events found dispatching network-vif-unplugged-7869c4f4-4542-4bf0-aebd-eeb2d3ce94f9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:51:00 compute-0 nova_compute[250018]: 2026-01-20 14:51:00.083 250022 WARNING nova.compute.manager [req-b512af3b-a128-4dcc-b948-c3c2c6de7b2f req-5033908d-41e6-44ce-81b4-a9dd86cbdada 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] Received unexpected event network-vif-unplugged-7869c4f4-4542-4bf0-aebd-eeb2d3ce94f9 for instance with vm_state active and task_state rescuing.
Jan 20 14:51:00 compute-0 nova_compute[250018]: 2026-01-20 14:51:00.617 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:51:00 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:51:00 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:51:00 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:51:00.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:51:00 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:51:00 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:51:00 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:51:00.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:51:00 compute-0 nova_compute[250018]: 2026-01-20 14:51:00.954 250022 INFO nova.virt.libvirt.driver [None req-26ab0780-1dff-40c9-b541-cb8829153de8 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] Creating config drive at /var/lib/nova/instances/c3b4d4c6-c42f-4abc-9c01-89ec3e10c677/disk.config.rescue
Jan 20 14:51:00 compute-0 nova_compute[250018]: 2026-01-20 14:51:00.962 250022 DEBUG oslo_concurrency.processutils [None req-26ab0780-1dff-40c9-b541-cb8829153de8 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c3b4d4c6-c42f-4abc-9c01-89ec3e10c677/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkcmiimuu execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:51:01 compute-0 ceph-mon[74360]: pgmap v1901: 321 pgs: 321 active+clean; 405 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 347 KiB/s rd, 2.5 MiB/s wr, 89 op/s
Jan 20 14:51:01 compute-0 nova_compute[250018]: 2026-01-20 14:51:01.101 250022 DEBUG oslo_concurrency.processutils [None req-26ab0780-1dff-40c9-b541-cb8829153de8 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c3b4d4c6-c42f-4abc-9c01-89ec3e10c677/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkcmiimuu" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:51:01 compute-0 nova_compute[250018]: 2026-01-20 14:51:01.137 250022 DEBUG nova.storage.rbd_utils [None req-26ab0780-1dff-40c9-b541-cb8829153de8 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] rbd image c3b4d4c6-c42f-4abc-9c01-89ec3e10c677_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:51:01 compute-0 nova_compute[250018]: 2026-01-20 14:51:01.141 250022 DEBUG oslo_concurrency.processutils [None req-26ab0780-1dff-40c9-b541-cb8829153de8 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c3b4d4c6-c42f-4abc-9c01-89ec3e10c677/disk.config.rescue c3b4d4c6-c42f-4abc-9c01-89ec3e10c677_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:51:01 compute-0 nova_compute[250018]: 2026-01-20 14:51:01.399 250022 DEBUG oslo_concurrency.processutils [None req-26ab0780-1dff-40c9-b541-cb8829153de8 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c3b4d4c6-c42f-4abc-9c01-89ec3e10c677/disk.config.rescue c3b4d4c6-c42f-4abc-9c01-89ec3e10c677_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.258s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:51:01 compute-0 nova_compute[250018]: 2026-01-20 14:51:01.400 250022 INFO nova.virt.libvirt.driver [None req-26ab0780-1dff-40c9-b541-cb8829153de8 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] Deleting local config drive /var/lib/nova/instances/c3b4d4c6-c42f-4abc-9c01-89ec3e10c677/disk.config.rescue because it was imported into RBD.
Jan 20 14:51:01 compute-0 kernel: tap7869c4f4-45: entered promiscuous mode
Jan 20 14:51:01 compute-0 NetworkManager[48960]: <info>  [1768920661.4567] manager: (tap7869c4f4-45): new Tun device (/org/freedesktop/NetworkManager/Devices/197)
Jan 20 14:51:01 compute-0 nova_compute[250018]: 2026-01-20 14:51:01.491 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:51:01 compute-0 systemd-udevd[315505]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:51:01 compute-0 ovn_controller[148666]: 2026-01-20T14:51:01Z|00383|binding|INFO|Claiming lport 7869c4f4-4542-4bf0-aebd-eeb2d3ce94f9 for this chassis.
Jan 20 14:51:01 compute-0 ovn_controller[148666]: 2026-01-20T14:51:01Z|00384|binding|INFO|7869c4f4-4542-4bf0-aebd-eeb2d3ce94f9: Claiming fa:16:3e:7f:92:68 10.100.0.11
Jan 20 14:51:01 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:51:01.497 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7f:92:68 10.100.0.11'], port_security=['fa:16:3e:7f:92:68 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'c3b4d4c6-c42f-4abc-9c01-89ec3e10c677', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-79184781-1f23-4584-87de-08e262242488', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0a29915e0dd2403fbd7b7e847696b00a', 'neutron:revision_number': '5', 'neutron:security_group_ids': '30ec24b7-15ba-4aeb-9785-539071729f77', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6b73ab05-b29f-401a-84a5-ea1a96103f33, chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=7869c4f4-4542-4bf0-aebd-eeb2d3ce94f9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:51:01 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:51:01.498 160071 INFO neutron.agent.ovn.metadata.agent [-] Port 7869c4f4-4542-4bf0-aebd-eeb2d3ce94f9 in datapath 79184781-1f23-4584-87de-08e262242488 bound to our chassis
Jan 20 14:51:01 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:51:01.500 160071 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 79184781-1f23-4584-87de-08e262242488
Jan 20 14:51:01 compute-0 podman[315466]: 2026-01-20 14:51:01.507483303 +0000 UTC m=+0.091205425 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 20 14:51:01 compute-0 ovn_controller[148666]: 2026-01-20T14:51:01Z|00385|binding|INFO|Setting lport 7869c4f4-4542-4bf0-aebd-eeb2d3ce94f9 ovn-installed in OVS
Jan 20 14:51:01 compute-0 ovn_controller[148666]: 2026-01-20T14:51:01Z|00386|binding|INFO|Setting lport 7869c4f4-4542-4bf0-aebd-eeb2d3ce94f9 up in Southbound
Jan 20 14:51:01 compute-0 nova_compute[250018]: 2026-01-20 14:51:01.510 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:51:01 compute-0 NetworkManager[48960]: <info>  [1768920661.5177] device (tap7869c4f4-45): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 14:51:01 compute-0 NetworkManager[48960]: <info>  [1768920661.5190] device (tap7869c4f4-45): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 14:51:01 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:51:01.521 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[432edc4a-a123-465f-9976-f509a1a6a5a9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:51:01 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:51:01.522 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap79184781-11 in ovnmeta-79184781-1f23-4584-87de-08e262242488 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 14:51:01 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:51:01.524 257604 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap79184781-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 14:51:01 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:51:01.524 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[baab34b9-e43a-41e2-a467-26162ab74a07]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:51:01 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:51:01.525 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[357595b9-90c5-4b4b-b3cc-2b5a2962ed62]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:51:01 compute-0 systemd-machined[216401]: New machine qemu-50-instance-0000006e.
Jan 20 14:51:01 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:51:01.539 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[2b947ee7-b3ad-4f2e-b542-f359797c2a47]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:51:01 compute-0 systemd[1]: Started Virtual Machine qemu-50-instance-0000006e.
Jan 20 14:51:01 compute-0 podman[315465]: 2026-01-20 14:51:01.554354085 +0000 UTC m=+0.139486365 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 20 14:51:01 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:51:01.564 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[cd032850-6661-44d1-896b-bf96ca89df89]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:51:01 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:51:01.600 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[ba255799-f9c4-426b-b5a6-e07d7b0c6829]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:51:01 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:51:01.605 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[775748a2-972e-46c5-9422-930ab4fc18d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:51:01 compute-0 NetworkManager[48960]: <info>  [1768920661.6069] manager: (tap79184781-10): new Veth device (/org/freedesktop/NetworkManager/Devices/198)
Jan 20 14:51:01 compute-0 systemd-udevd[315517]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:51:01 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:51:01.637 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[bd4f65dd-c713-45f9-b1b0-43c8c9ec92dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:51:01 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:51:01.640 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[c33b3c4e-9b4c-49b7-b7cb-8ff84a3d0181]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:51:01 compute-0 NetworkManager[48960]: <info>  [1768920661.6699] device (tap79184781-10): carrier: link connected
Jan 20 14:51:01 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:51:01.676 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[e73934f3-7627-4c1c-a5f4-2ddaee85e173]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:51:01 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:51:01.695 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[88efe2bf-dcff-44c8-b7d8-23876ae002eb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap79184781-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:38:7c:2a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 128], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 660239, 'reachable_time': 16182, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 315555, 'error': None, 'target': 'ovnmeta-79184781-1f23-4584-87de-08e262242488', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:51:01 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:51:01.714 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[e2255cb2-7457-44b4-9a0a-4c15ec183b50]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe38:7c2a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 660239, 'tstamp': 660239}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 315556, 'error': None, 'target': 'ovnmeta-79184781-1f23-4584-87de-08e262242488', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:51:01 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:51:01.732 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[530172ce-c162-4355-b7f1-5d487bb24c7b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap79184781-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:38:7c:2a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 128], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 660239, 'reachable_time': 16182, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 315557, 'error': None, 'target': 'ovnmeta-79184781-1f23-4584-87de-08e262242488', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:51:01 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:51:01.771 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[40dd764e-ff14-4629-bb62-f32e17a43c30]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:51:01 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:51:01.831 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[63327bd8-4716-4541-abb3-2b5857dbcfeb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:51:01 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:51:01.833 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap79184781-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:51:01 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:51:01.834 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:51:01 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:51:01.834 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap79184781-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:51:01 compute-0 nova_compute[250018]: 2026-01-20 14:51:01.836 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:51:01 compute-0 kernel: tap79184781-10: entered promiscuous mode
Jan 20 14:51:01 compute-0 NetworkManager[48960]: <info>  [1768920661.8386] manager: (tap79184781-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/199)
Jan 20 14:51:01 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:51:01.850 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap79184781-10, col_values=(('external_ids', {'iface-id': 'b033e9e6-9781-4424-a20f-7b48a14e2c80'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:51:01 compute-0 ovn_controller[148666]: 2026-01-20T14:51:01Z|00387|binding|INFO|Releasing lport b033e9e6-9781-4424-a20f-7b48a14e2c80 from this chassis (sb_readonly=0)
Jan 20 14:51:01 compute-0 nova_compute[250018]: 2026-01-20 14:51:01.852 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:51:01 compute-0 nova_compute[250018]: 2026-01-20 14:51:01.883 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:51:01 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:51:01.885 160071 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/79184781-1f23-4584-87de-08e262242488.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/79184781-1f23-4584-87de-08e262242488.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 14:51:01 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:51:01.886 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[2bc11451-6ed0-4053-9e8d-63f2a736bbaf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:51:01 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1902: 321 pgs: 321 active+clean; 405 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 359 KiB/s rd, 2.2 MiB/s wr, 101 op/s
Jan 20 14:51:01 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:51:01.887 160071 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 14:51:01 compute-0 ovn_metadata_agent[160049]: global
Jan 20 14:51:01 compute-0 ovn_metadata_agent[160049]:     log         /dev/log local0 debug
Jan 20 14:51:01 compute-0 ovn_metadata_agent[160049]:     log-tag     haproxy-metadata-proxy-79184781-1f23-4584-87de-08e262242488
Jan 20 14:51:01 compute-0 ovn_metadata_agent[160049]:     user        root
Jan 20 14:51:01 compute-0 ovn_metadata_agent[160049]:     group       root
Jan 20 14:51:01 compute-0 ovn_metadata_agent[160049]:     maxconn     1024
Jan 20 14:51:01 compute-0 ovn_metadata_agent[160049]:     pidfile     /var/lib/neutron/external/pids/79184781-1f23-4584-87de-08e262242488.pid.haproxy
Jan 20 14:51:01 compute-0 ovn_metadata_agent[160049]:     daemon
Jan 20 14:51:01 compute-0 ovn_metadata_agent[160049]: 
Jan 20 14:51:01 compute-0 ovn_metadata_agent[160049]: defaults
Jan 20 14:51:01 compute-0 ovn_metadata_agent[160049]:     log global
Jan 20 14:51:01 compute-0 ovn_metadata_agent[160049]:     mode http
Jan 20 14:51:01 compute-0 ovn_metadata_agent[160049]:     option httplog
Jan 20 14:51:01 compute-0 ovn_metadata_agent[160049]:     option dontlognull
Jan 20 14:51:01 compute-0 ovn_metadata_agent[160049]:     option http-server-close
Jan 20 14:51:01 compute-0 ovn_metadata_agent[160049]:     option forwardfor
Jan 20 14:51:01 compute-0 ovn_metadata_agent[160049]:     retries                 3
Jan 20 14:51:01 compute-0 ovn_metadata_agent[160049]:     timeout http-request    30s
Jan 20 14:51:01 compute-0 ovn_metadata_agent[160049]:     timeout connect         30s
Jan 20 14:51:01 compute-0 ovn_metadata_agent[160049]:     timeout client          32s
Jan 20 14:51:01 compute-0 ovn_metadata_agent[160049]:     timeout server          32s
Jan 20 14:51:01 compute-0 ovn_metadata_agent[160049]:     timeout http-keep-alive 30s
Jan 20 14:51:01 compute-0 ovn_metadata_agent[160049]: 
Jan 20 14:51:01 compute-0 ovn_metadata_agent[160049]: 
Jan 20 14:51:01 compute-0 ovn_metadata_agent[160049]: listen listener
Jan 20 14:51:01 compute-0 ovn_metadata_agent[160049]:     bind 169.254.169.254:80
Jan 20 14:51:01 compute-0 ovn_metadata_agent[160049]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 14:51:01 compute-0 ovn_metadata_agent[160049]:     http-request add-header X-OVN-Network-ID 79184781-1f23-4584-87de-08e262242488
Jan 20 14:51:01 compute-0 ovn_metadata_agent[160049]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 14:51:01 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:51:01.888 160071 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-79184781-1f23-4584-87de-08e262242488', 'env', 'PROCESS_TAG=haproxy-79184781-1f23-4584-87de-08e262242488', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/79184781-1f23-4584-87de-08e262242488.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 14:51:02 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e254 do_prune osdmap full prune enabled
Jan 20 14:51:02 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e255 e255: 3 total, 3 up, 3 in
Jan 20 14:51:02 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e255: 3 total, 3 up, 3 in
Jan 20 14:51:02 compute-0 ceph-mon[74360]: pgmap v1902: 321 pgs: 321 active+clean; 405 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 359 KiB/s rd, 2.2 MiB/s wr, 101 op/s
Jan 20 14:51:02 compute-0 nova_compute[250018]: 2026-01-20 14:51:02.177 250022 DEBUG nova.compute.manager [req-a4716a9a-1d97-4c91-b400-49abdda7f61e req-fa538d9e-0d16-4d66-a0af-9eb289f76d5c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] Received event network-vif-plugged-7869c4f4-4542-4bf0-aebd-eeb2d3ce94f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:51:02 compute-0 nova_compute[250018]: 2026-01-20 14:51:02.178 250022 DEBUG oslo_concurrency.lockutils [req-a4716a9a-1d97-4c91-b400-49abdda7f61e req-fa538d9e-0d16-4d66-a0af-9eb289f76d5c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "c3b4d4c6-c42f-4abc-9c01-89ec3e10c677-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:51:02 compute-0 nova_compute[250018]: 2026-01-20 14:51:02.178 250022 DEBUG oslo_concurrency.lockutils [req-a4716a9a-1d97-4c91-b400-49abdda7f61e req-fa538d9e-0d16-4d66-a0af-9eb289f76d5c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "c3b4d4c6-c42f-4abc-9c01-89ec3e10c677-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:51:02 compute-0 nova_compute[250018]: 2026-01-20 14:51:02.178 250022 DEBUG oslo_concurrency.lockutils [req-a4716a9a-1d97-4c91-b400-49abdda7f61e req-fa538d9e-0d16-4d66-a0af-9eb289f76d5c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "c3b4d4c6-c42f-4abc-9c01-89ec3e10c677-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:51:02 compute-0 nova_compute[250018]: 2026-01-20 14:51:02.178 250022 DEBUG nova.compute.manager [req-a4716a9a-1d97-4c91-b400-49abdda7f61e req-fa538d9e-0d16-4d66-a0af-9eb289f76d5c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] No waiting events found dispatching network-vif-plugged-7869c4f4-4542-4bf0-aebd-eeb2d3ce94f9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:51:02 compute-0 nova_compute[250018]: 2026-01-20 14:51:02.179 250022 WARNING nova.compute.manager [req-a4716a9a-1d97-4c91-b400-49abdda7f61e req-fa538d9e-0d16-4d66-a0af-9eb289f76d5c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] Received unexpected event network-vif-plugged-7869c4f4-4542-4bf0-aebd-eeb2d3ce94f9 for instance with vm_state active and task_state rescuing.
Jan 20 14:51:02 compute-0 nova_compute[250018]: 2026-01-20 14:51:02.179 250022 DEBUG nova.compute.manager [req-a4716a9a-1d97-4c91-b400-49abdda7f61e req-fa538d9e-0d16-4d66-a0af-9eb289f76d5c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] Received event network-vif-plugged-7869c4f4-4542-4bf0-aebd-eeb2d3ce94f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:51:02 compute-0 nova_compute[250018]: 2026-01-20 14:51:02.179 250022 DEBUG oslo_concurrency.lockutils [req-a4716a9a-1d97-4c91-b400-49abdda7f61e req-fa538d9e-0d16-4d66-a0af-9eb289f76d5c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "c3b4d4c6-c42f-4abc-9c01-89ec3e10c677-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:51:02 compute-0 nova_compute[250018]: 2026-01-20 14:51:02.179 250022 DEBUG oslo_concurrency.lockutils [req-a4716a9a-1d97-4c91-b400-49abdda7f61e req-fa538d9e-0d16-4d66-a0af-9eb289f76d5c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "c3b4d4c6-c42f-4abc-9c01-89ec3e10c677-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:51:02 compute-0 nova_compute[250018]: 2026-01-20 14:51:02.180 250022 DEBUG oslo_concurrency.lockutils [req-a4716a9a-1d97-4c91-b400-49abdda7f61e req-fa538d9e-0d16-4d66-a0af-9eb289f76d5c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "c3b4d4c6-c42f-4abc-9c01-89ec3e10c677-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:51:02 compute-0 nova_compute[250018]: 2026-01-20 14:51:02.180 250022 DEBUG nova.compute.manager [req-a4716a9a-1d97-4c91-b400-49abdda7f61e req-fa538d9e-0d16-4d66-a0af-9eb289f76d5c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] No waiting events found dispatching network-vif-plugged-7869c4f4-4542-4bf0-aebd-eeb2d3ce94f9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:51:02 compute-0 nova_compute[250018]: 2026-01-20 14:51:02.180 250022 WARNING nova.compute.manager [req-a4716a9a-1d97-4c91-b400-49abdda7f61e req-fa538d9e-0d16-4d66-a0af-9eb289f76d5c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] Received unexpected event network-vif-plugged-7869c4f4-4542-4bf0-aebd-eeb2d3ce94f9 for instance with vm_state active and task_state rescuing.
Jan 20 14:51:02 compute-0 nova_compute[250018]: 2026-01-20 14:51:02.181 250022 DEBUG nova.compute.manager [req-a4716a9a-1d97-4c91-b400-49abdda7f61e req-fa538d9e-0d16-4d66-a0af-9eb289f76d5c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] Received event network-vif-plugged-7869c4f4-4542-4bf0-aebd-eeb2d3ce94f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:51:02 compute-0 nova_compute[250018]: 2026-01-20 14:51:02.181 250022 DEBUG oslo_concurrency.lockutils [req-a4716a9a-1d97-4c91-b400-49abdda7f61e req-fa538d9e-0d16-4d66-a0af-9eb289f76d5c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "c3b4d4c6-c42f-4abc-9c01-89ec3e10c677-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:51:02 compute-0 nova_compute[250018]: 2026-01-20 14:51:02.181 250022 DEBUG oslo_concurrency.lockutils [req-a4716a9a-1d97-4c91-b400-49abdda7f61e req-fa538d9e-0d16-4d66-a0af-9eb289f76d5c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "c3b4d4c6-c42f-4abc-9c01-89ec3e10c677-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:51:02 compute-0 nova_compute[250018]: 2026-01-20 14:51:02.181 250022 DEBUG oslo_concurrency.lockutils [req-a4716a9a-1d97-4c91-b400-49abdda7f61e req-fa538d9e-0d16-4d66-a0af-9eb289f76d5c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "c3b4d4c6-c42f-4abc-9c01-89ec3e10c677-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:51:02 compute-0 nova_compute[250018]: 2026-01-20 14:51:02.182 250022 DEBUG nova.compute.manager [req-a4716a9a-1d97-4c91-b400-49abdda7f61e req-fa538d9e-0d16-4d66-a0af-9eb289f76d5c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] No waiting events found dispatching network-vif-plugged-7869c4f4-4542-4bf0-aebd-eeb2d3ce94f9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:51:02 compute-0 nova_compute[250018]: 2026-01-20 14:51:02.182 250022 WARNING nova.compute.manager [req-a4716a9a-1d97-4c91-b400-49abdda7f61e req-fa538d9e-0d16-4d66-a0af-9eb289f76d5c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] Received unexpected event network-vif-plugged-7869c4f4-4542-4bf0-aebd-eeb2d3ce94f9 for instance with vm_state active and task_state rescuing.
Jan 20 14:51:02 compute-0 podman[315590]: 2026-01-20 14:51:02.298047829 +0000 UTC m=+0.058165476 container create 1105c9057d6d8ddf45d1a013fd1b1b69015a1cdefa5d0bb277d5aca9ccc2fa22 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-79184781-1f23-4584-87de-08e262242488, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 20 14:51:02 compute-0 systemd[1]: Started libpod-conmon-1105c9057d6d8ddf45d1a013fd1b1b69015a1cdefa5d0bb277d5aca9ccc2fa22.scope.
Jan 20 14:51:02 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:51:02 compute-0 podman[315590]: 2026-01-20 14:51:02.270446526 +0000 UTC m=+0.030564223 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 14:51:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba3739dcc59600a0cfa33f8d732575257c1312a30cd332acf75318082a4ad5d4/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 14:51:02 compute-0 podman[315590]: 2026-01-20 14:51:02.381844325 +0000 UTC m=+0.141961972 container init 1105c9057d6d8ddf45d1a013fd1b1b69015a1cdefa5d0bb277d5aca9ccc2fa22 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-79184781-1f23-4584-87de-08e262242488, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 20 14:51:02 compute-0 podman[315590]: 2026-01-20 14:51:02.387334952 +0000 UTC m=+0.147452599 container start 1105c9057d6d8ddf45d1a013fd1b1b69015a1cdefa5d0bb277d5aca9ccc2fa22 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-79184781-1f23-4584-87de-08e262242488, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 20 14:51:02 compute-0 neutron-haproxy-ovnmeta-79184781-1f23-4584-87de-08e262242488[315638]: [NOTICE]   (315663) : New worker (315665) forked
Jan 20 14:51:02 compute-0 neutron-haproxy-ovnmeta-79184781-1f23-4584-87de-08e262242488[315638]: [NOTICE]   (315663) : Loading success.
Jan 20 14:51:02 compute-0 nova_compute[250018]: 2026-01-20 14:51:02.523 250022 DEBUG nova.virt.libvirt.host [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Removed pending event for c3b4d4c6-c42f-4abc-9c01-89ec3e10c677 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Jan 20 14:51:02 compute-0 nova_compute[250018]: 2026-01-20 14:51:02.523 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768920662.5226464, c3b4d4c6-c42f-4abc-9c01-89ec3e10c677 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:51:02 compute-0 nova_compute[250018]: 2026-01-20 14:51:02.524 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] VM Resumed (Lifecycle Event)
Jan 20 14:51:02 compute-0 nova_compute[250018]: 2026-01-20 14:51:02.528 250022 DEBUG nova.compute.manager [None req-26ab0780-1dff-40c9-b541-cb8829153de8 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:51:02 compute-0 nova_compute[250018]: 2026-01-20 14:51:02.561 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:51:02 compute-0 nova_compute[250018]: 2026-01-20 14:51:02.565 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:51:02 compute-0 nova_compute[250018]: 2026-01-20 14:51:02.597 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768920662.5257452, c3b4d4c6-c42f-4abc-9c01-89ec3e10c677 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:51:02 compute-0 nova_compute[250018]: 2026-01-20 14:51:02.598 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] VM Started (Lifecycle Event)
Jan 20 14:51:02 compute-0 nova_compute[250018]: 2026-01-20 14:51:02.622 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:51:02 compute-0 nova_compute[250018]: 2026-01-20 14:51:02.625 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:51:02 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:51:02 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:51:02 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:51:02.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:51:02 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:51:02 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:51:02 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:51:02.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:51:03 compute-0 ceph-mon[74360]: osdmap e255: 3 total, 3 up, 3 in
Jan 20 14:51:03 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/1368014388' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:51:03 compute-0 nova_compute[250018]: 2026-01-20 14:51:03.250 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:51:03 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1904: 321 pgs: 321 active+clean; 405 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 421 KiB/s rd, 1.7 MiB/s wr, 108 op/s
Jan 20 14:51:04 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/2263803824' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:51:04 compute-0 ceph-mon[74360]: pgmap v1904: 321 pgs: 321 active+clean; 405 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 421 KiB/s rd, 1.7 MiB/s wr, 108 op/s
Jan 20 14:51:04 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e255 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:51:04 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:51:04 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:51:04 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:51:04.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:51:04 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:51:04 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:51:04 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:51:04.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:51:05 compute-0 nova_compute[250018]: 2026-01-20 14:51:05.059 250022 INFO nova.compute.manager [None req-cdb4df73-187e-4e7a-9a1e-022c86626f59 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] Unrescuing
Jan 20 14:51:05 compute-0 nova_compute[250018]: 2026-01-20 14:51:05.060 250022 DEBUG oslo_concurrency.lockutils [None req-cdb4df73-187e-4e7a-9a1e-022c86626f59 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Acquiring lock "refresh_cache-c3b4d4c6-c42f-4abc-9c01-89ec3e10c677" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:51:05 compute-0 nova_compute[250018]: 2026-01-20 14:51:05.060 250022 DEBUG oslo_concurrency.lockutils [None req-cdb4df73-187e-4e7a-9a1e-022c86626f59 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Acquired lock "refresh_cache-c3b4d4c6-c42f-4abc-9c01-89ec3e10c677" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:51:05 compute-0 nova_compute[250018]: 2026-01-20 14:51:05.060 250022 DEBUG nova.network.neutron [None req-cdb4df73-187e-4e7a-9a1e-022c86626f59 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:51:05 compute-0 nova_compute[250018]: 2026-01-20 14:51:05.621 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:51:05 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1905: 321 pgs: 321 active+clean; 405 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 310 KiB/s wr, 154 op/s
Jan 20 14:51:06 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:51:06 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:51:06 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:51:06.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:51:06 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:51:06 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:51:06 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:51:06.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:51:06 compute-0 ceph-mon[74360]: pgmap v1905: 321 pgs: 321 active+clean; 405 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 310 KiB/s wr, 154 op/s
Jan 20 14:51:07 compute-0 nova_compute[250018]: 2026-01-20 14:51:07.253 250022 DEBUG nova.network.neutron [None req-cdb4df73-187e-4e7a-9a1e-022c86626f59 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] Updating instance_info_cache with network_info: [{"id": "7869c4f4-4542-4bf0-aebd-eeb2d3ce94f9", "address": "fa:16:3e:7f:92:68", "network": {"id": "79184781-1f23-4584-87de-08e262242488", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-165460946-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a29915e0dd2403fbd7b7e847696b00a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7869c4f4-45", "ovs_interfaceid": "7869c4f4-4542-4bf0-aebd-eeb2d3ce94f9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:51:07 compute-0 nova_compute[250018]: 2026-01-20 14:51:07.277 250022 DEBUG oslo_concurrency.lockutils [None req-cdb4df73-187e-4e7a-9a1e-022c86626f59 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Releasing lock "refresh_cache-c3b4d4c6-c42f-4abc-9c01-89ec3e10c677" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:51:07 compute-0 nova_compute[250018]: 2026-01-20 14:51:07.278 250022 DEBUG nova.objects.instance [None req-cdb4df73-187e-4e7a-9a1e-022c86626f59 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Lazy-loading 'flavor' on Instance uuid c3b4d4c6-c42f-4abc-9c01-89ec3e10c677 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:51:07 compute-0 kernel: tap7869c4f4-45 (unregistering): left promiscuous mode
Jan 20 14:51:07 compute-0 NetworkManager[48960]: <info>  [1768920667.3542] device (tap7869c4f4-45): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 14:51:07 compute-0 ovn_controller[148666]: 2026-01-20T14:51:07Z|00388|binding|INFO|Releasing lport 7869c4f4-4542-4bf0-aebd-eeb2d3ce94f9 from this chassis (sb_readonly=0)
Jan 20 14:51:07 compute-0 ovn_controller[148666]: 2026-01-20T14:51:07Z|00389|binding|INFO|Setting lport 7869c4f4-4542-4bf0-aebd-eeb2d3ce94f9 down in Southbound
Jan 20 14:51:07 compute-0 ovn_controller[148666]: 2026-01-20T14:51:07Z|00390|binding|INFO|Removing iface tap7869c4f4-45 ovn-installed in OVS
Jan 20 14:51:07 compute-0 nova_compute[250018]: 2026-01-20 14:51:07.365 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:51:07 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:51:07.415 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7f:92:68 10.100.0.11'], port_security=['fa:16:3e:7f:92:68 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'c3b4d4c6-c42f-4abc-9c01-89ec3e10c677', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-79184781-1f23-4584-87de-08e262242488', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0a29915e0dd2403fbd7b7e847696b00a', 'neutron:revision_number': '6', 'neutron:security_group_ids': '30ec24b7-15ba-4aeb-9785-539071729f77', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6b73ab05-b29f-401a-84a5-ea1a96103f33, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=7869c4f4-4542-4bf0-aebd-eeb2d3ce94f9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:51:07 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:51:07.416 160071 INFO neutron.agent.ovn.metadata.agent [-] Port 7869c4f4-4542-4bf0-aebd-eeb2d3ce94f9 in datapath 79184781-1f23-4584-87de-08e262242488 unbound from our chassis
Jan 20 14:51:07 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:51:07.418 160071 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 79184781-1f23-4584-87de-08e262242488, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 14:51:07 compute-0 nova_compute[250018]: 2026-01-20 14:51:07.417 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:51:07 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:51:07.419 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[5d109cb6-c607-496e-9e6f-04505a8acd7e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:51:07 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:51:07.421 160071 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-79184781-1f23-4584-87de-08e262242488 namespace which is not needed anymore
Jan 20 14:51:07 compute-0 nova_compute[250018]: 2026-01-20 14:51:07.433 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:51:07 compute-0 systemd[1]: machine-qemu\x2d50\x2dinstance\x2d0000006e.scope: Deactivated successfully.
Jan 20 14:51:07 compute-0 systemd[1]: machine-qemu\x2d50\x2dinstance\x2d0000006e.scope: Consumed 5.944s CPU time.
Jan 20 14:51:07 compute-0 systemd-machined[216401]: Machine qemu-50-instance-0000006e terminated.
Jan 20 14:51:07 compute-0 nova_compute[250018]: 2026-01-20 14:51:07.544 250022 INFO nova.virt.libvirt.driver [-] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] Instance destroyed successfully.
Jan 20 14:51:07 compute-0 nova_compute[250018]: 2026-01-20 14:51:07.546 250022 DEBUG nova.objects.instance [None req-cdb4df73-187e-4e7a-9a1e-022c86626f59 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Lazy-loading 'numa_topology' on Instance uuid c3b4d4c6-c42f-4abc-9c01-89ec3e10c677 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:51:07 compute-0 neutron-haproxy-ovnmeta-79184781-1f23-4584-87de-08e262242488[315638]: [NOTICE]   (315663) : haproxy version is 2.8.14-c23fe91
Jan 20 14:51:07 compute-0 neutron-haproxy-ovnmeta-79184781-1f23-4584-87de-08e262242488[315638]: [NOTICE]   (315663) : path to executable is /usr/sbin/haproxy
Jan 20 14:51:07 compute-0 neutron-haproxy-ovnmeta-79184781-1f23-4584-87de-08e262242488[315638]: [WARNING]  (315663) : Exiting Master process...
Jan 20 14:51:07 compute-0 neutron-haproxy-ovnmeta-79184781-1f23-4584-87de-08e262242488[315638]: [ALERT]    (315663) : Current worker (315665) exited with code 143 (Terminated)
Jan 20 14:51:07 compute-0 neutron-haproxy-ovnmeta-79184781-1f23-4584-87de-08e262242488[315638]: [WARNING]  (315663) : All workers exited. Exiting... (0)
Jan 20 14:51:07 compute-0 systemd[1]: libpod-1105c9057d6d8ddf45d1a013fd1b1b69015a1cdefa5d0bb277d5aca9ccc2fa22.scope: Deactivated successfully.
Jan 20 14:51:07 compute-0 podman[315707]: 2026-01-20 14:51:07.573453811 +0000 UTC m=+0.058839934 container died 1105c9057d6d8ddf45d1a013fd1b1b69015a1cdefa5d0bb277d5aca9ccc2fa22 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-79184781-1f23-4584-87de-08e262242488, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 20 14:51:07 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1105c9057d6d8ddf45d1a013fd1b1b69015a1cdefa5d0bb277d5aca9ccc2fa22-userdata-shm.mount: Deactivated successfully.
Jan 20 14:51:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-ba3739dcc59600a0cfa33f8d732575257c1312a30cd332acf75318082a4ad5d4-merged.mount: Deactivated successfully.
Jan 20 14:51:07 compute-0 podman[315707]: 2026-01-20 14:51:07.622359118 +0000 UTC m=+0.107745221 container cleanup 1105c9057d6d8ddf45d1a013fd1b1b69015a1cdefa5d0bb277d5aca9ccc2fa22 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-79184781-1f23-4584-87de-08e262242488, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 20 14:51:07 compute-0 systemd[1]: libpod-conmon-1105c9057d6d8ddf45d1a013fd1b1b69015a1cdefa5d0bb277d5aca9ccc2fa22.scope: Deactivated successfully.
Jan 20 14:51:07 compute-0 kernel: tap7869c4f4-45: entered promiscuous mode
Jan 20 14:51:07 compute-0 systemd-udevd[315689]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:51:07 compute-0 NetworkManager[48960]: <info>  [1768920667.6466] manager: (tap7869c4f4-45): new Tun device (/org/freedesktop/NetworkManager/Devices/200)
Jan 20 14:51:07 compute-0 ovn_controller[148666]: 2026-01-20T14:51:07Z|00391|binding|INFO|Claiming lport 7869c4f4-4542-4bf0-aebd-eeb2d3ce94f9 for this chassis.
Jan 20 14:51:07 compute-0 ovn_controller[148666]: 2026-01-20T14:51:07Z|00392|binding|INFO|7869c4f4-4542-4bf0-aebd-eeb2d3ce94f9: Claiming fa:16:3e:7f:92:68 10.100.0.11
Jan 20 14:51:07 compute-0 nova_compute[250018]: 2026-01-20 14:51:07.650 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:51:07 compute-0 nova_compute[250018]: 2026-01-20 14:51:07.653 250022 DEBUG nova.compute.manager [req-53f7b2f5-7a5a-4ee1-8ece-1833cb01c9df req-9ff1cdc9-d9de-4995-b891-6cf0503d500c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] Received event network-vif-unplugged-7869c4f4-4542-4bf0-aebd-eeb2d3ce94f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:51:07 compute-0 nova_compute[250018]: 2026-01-20 14:51:07.654 250022 DEBUG oslo_concurrency.lockutils [req-53f7b2f5-7a5a-4ee1-8ece-1833cb01c9df req-9ff1cdc9-d9de-4995-b891-6cf0503d500c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "c3b4d4c6-c42f-4abc-9c01-89ec3e10c677-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:51:07 compute-0 nova_compute[250018]: 2026-01-20 14:51:07.654 250022 DEBUG oslo_concurrency.lockutils [req-53f7b2f5-7a5a-4ee1-8ece-1833cb01c9df req-9ff1cdc9-d9de-4995-b891-6cf0503d500c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "c3b4d4c6-c42f-4abc-9c01-89ec3e10c677-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:51:07 compute-0 nova_compute[250018]: 2026-01-20 14:51:07.655 250022 DEBUG oslo_concurrency.lockutils [req-53f7b2f5-7a5a-4ee1-8ece-1833cb01c9df req-9ff1cdc9-d9de-4995-b891-6cf0503d500c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "c3b4d4c6-c42f-4abc-9c01-89ec3e10c677-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:51:07 compute-0 nova_compute[250018]: 2026-01-20 14:51:07.655 250022 DEBUG nova.compute.manager [req-53f7b2f5-7a5a-4ee1-8ece-1833cb01c9df req-9ff1cdc9-d9de-4995-b891-6cf0503d500c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] No waiting events found dispatching network-vif-unplugged-7869c4f4-4542-4bf0-aebd-eeb2d3ce94f9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:51:07 compute-0 nova_compute[250018]: 2026-01-20 14:51:07.655 250022 WARNING nova.compute.manager [req-53f7b2f5-7a5a-4ee1-8ece-1833cb01c9df req-9ff1cdc9-d9de-4995-b891-6cf0503d500c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] Received unexpected event network-vif-unplugged-7869c4f4-4542-4bf0-aebd-eeb2d3ce94f9 for instance with vm_state rescued and task_state unrescuing.
Jan 20 14:51:07 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:51:07.658 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7f:92:68 10.100.0.11'], port_security=['fa:16:3e:7f:92:68 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'c3b4d4c6-c42f-4abc-9c01-89ec3e10c677', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-79184781-1f23-4584-87de-08e262242488', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0a29915e0dd2403fbd7b7e847696b00a', 'neutron:revision_number': '7', 'neutron:security_group_ids': '30ec24b7-15ba-4aeb-9785-539071729f77', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6b73ab05-b29f-401a-84a5-ea1a96103f33, chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=7869c4f4-4542-4bf0-aebd-eeb2d3ce94f9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:51:07 compute-0 NetworkManager[48960]: <info>  [1768920667.6600] device (tap7869c4f4-45): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 14:51:07 compute-0 NetworkManager[48960]: <info>  [1768920667.6611] device (tap7869c4f4-45): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 14:51:07 compute-0 nova_compute[250018]: 2026-01-20 14:51:07.663 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:51:07 compute-0 ovn_controller[148666]: 2026-01-20T14:51:07Z|00393|binding|INFO|Setting lport 7869c4f4-4542-4bf0-aebd-eeb2d3ce94f9 ovn-installed in OVS
Jan 20 14:51:07 compute-0 ovn_controller[148666]: 2026-01-20T14:51:07Z|00394|binding|INFO|Setting lport 7869c4f4-4542-4bf0-aebd-eeb2d3ce94f9 up in Southbound
Jan 20 14:51:07 compute-0 nova_compute[250018]: 2026-01-20 14:51:07.667 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:51:07 compute-0 systemd-machined[216401]: New machine qemu-51-instance-0000006e.
Jan 20 14:51:07 compute-0 podman[315753]: 2026-01-20 14:51:07.703749718 +0000 UTC m=+0.050492940 container remove 1105c9057d6d8ddf45d1a013fd1b1b69015a1cdefa5d0bb277d5aca9ccc2fa22 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-79184781-1f23-4584-87de-08e262242488, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Jan 20 14:51:07 compute-0 systemd[1]: Started Virtual Machine qemu-51-instance-0000006e.
Jan 20 14:51:07 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:51:07.709 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[d9bd103e-5eed-4d02-b5b7-d45f02f176b9]: (4, ('Tue Jan 20 02:51:07 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-79184781-1f23-4584-87de-08e262242488 (1105c9057d6d8ddf45d1a013fd1b1b69015a1cdefa5d0bb277d5aca9ccc2fa22)\n1105c9057d6d8ddf45d1a013fd1b1b69015a1cdefa5d0bb277d5aca9ccc2fa22\nTue Jan 20 02:51:07 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-79184781-1f23-4584-87de-08e262242488 (1105c9057d6d8ddf45d1a013fd1b1b69015a1cdefa5d0bb277d5aca9ccc2fa22)\n1105c9057d6d8ddf45d1a013fd1b1b69015a1cdefa5d0bb277d5aca9ccc2fa22\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:51:07 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:51:07.712 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[b5d9a417-b99b-4eaa-b42d-da51ef372382]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:51:07 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:51:07.714 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap79184781-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:51:07 compute-0 kernel: tap79184781-10: left promiscuous mode
Jan 20 14:51:07 compute-0 nova_compute[250018]: 2026-01-20 14:51:07.717 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:51:07 compute-0 nova_compute[250018]: 2026-01-20 14:51:07.733 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:51:07 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:51:07.736 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[7fdce2b3-a7a5-4ae0-bc4b-8e4d96cde817]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:51:07 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:51:07.749 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[66ea7989-a898-4507-868d-bcfe40121d8d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:51:07 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:51:07.751 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[af5f42e9-8820-4681-bd64-810db3984688]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:51:07 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:51:07.773 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[3a23fd71-981f-4dc2-b179-8b22d4a0d0c0]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 660231, 'reachable_time': 19332, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 315778, 'error': None, 'target': 'ovnmeta-79184781-1f23-4584-87de-08e262242488', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:51:07 compute-0 systemd[1]: run-netns-ovnmeta\x2d79184781\x2d1f23\x2d4584\x2d87de\x2d08e262242488.mount: Deactivated successfully.
Jan 20 14:51:07 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:51:07.776 160517 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-79184781-1f23-4584-87de-08e262242488 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 14:51:07 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:51:07.776 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[6e45774a-916e-4832-8667-316c553a52f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:51:07 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:51:07.777 160071 INFO neutron.agent.ovn.metadata.agent [-] Port 7869c4f4-4542-4bf0-aebd-eeb2d3ce94f9 in datapath 79184781-1f23-4584-87de-08e262242488 unbound from our chassis
Jan 20 14:51:07 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:51:07.778 160071 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 79184781-1f23-4584-87de-08e262242488
Jan 20 14:51:07 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:51:07.792 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[2e02c247-35a2-414c-85df-61cb1189bdf8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:51:07 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:51:07.793 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap79184781-11 in ovnmeta-79184781-1f23-4584-87de-08e262242488 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 14:51:07 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:51:07.795 257604 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap79184781-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 14:51:07 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:51:07.795 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[f86c6d6e-141d-4a2f-8f1b-7a71c00a4851]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:51:07 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:51:07.797 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[d25a4629-969d-4031-8fa1-85f5b65b9d0a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:51:07 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:51:07.808 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[d92e2b3c-274f-440e-9233-ec0e22672053]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:51:07 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:51:07.835 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[91a9f130-25f1-438d-87b3-4e0fb6c7cf6b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:51:07 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:51:07.865 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[22f4074d-be3c-4d6b-8416-8bc11ec7c30f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:51:07 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:51:07.873 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[bc690010-2d3a-4db5-832f-024f16c686e4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:51:07 compute-0 NetworkManager[48960]: <info>  [1768920667.8747] manager: (tap79184781-10): new Veth device (/org/freedesktop/NetworkManager/Devices/201)
Jan 20 14:51:07 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1906: 321 pgs: 321 active+clean; 405 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 56 KiB/s wr, 221 op/s
Jan 20 14:51:07 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:51:07.915 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[5358da79-94e5-4341-a8ed-877ae1ac5377]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:51:07 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:51:07.919 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[cf937b8b-8888-4a5b-b688-bb69e2ee11ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:51:07 compute-0 NetworkManager[48960]: <info>  [1768920667.9413] device (tap79184781-10): carrier: link connected
Jan 20 14:51:07 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:51:07.950 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[c73ebf8f-22a2-476e-9492-8ecf5d9f1203]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:51:07 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:51:07.972 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[7b966612-bf7d-4f70-a1b8-bcc66db1ca55]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap79184781-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:38:7c:2a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 131], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 660866, 'reachable_time': 30826, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 315806, 'error': None, 'target': 'ovnmeta-79184781-1f23-4584-87de-08e262242488', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:51:07 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:51:07.990 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[e75f4351-e7d0-4fbe-9187-888ccf37f4d8]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe38:7c2a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 660866, 'tstamp': 660866}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 315807, 'error': None, 'target': 'ovnmeta-79184781-1f23-4584-87de-08e262242488', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:51:08 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:51:08.006 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[d355bad3-a0d0-44d2-a13d-4ee878d3744f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap79184781-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:38:7c:2a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 131], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 660866, 'reachable_time': 30826, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 315808, 'error': None, 'target': 'ovnmeta-79184781-1f23-4584-87de-08e262242488', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:51:08 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:51:08.031 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[11e1d2bd-9400-43fd-9079-1825a40d9738]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:51:08 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:51:08.093 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[614e39be-0c2e-4022-8b72-5115ad20b692]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:51:08 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:51:08.095 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap79184781-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:51:08 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:51:08.095 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:51:08 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:51:08.095 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap79184781-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:51:08 compute-0 nova_compute[250018]: 2026-01-20 14:51:08.097 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:51:08 compute-0 NetworkManager[48960]: <info>  [1768920668.0979] manager: (tap79184781-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/202)
Jan 20 14:51:08 compute-0 kernel: tap79184781-10: entered promiscuous mode
Jan 20 14:51:08 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:51:08.101 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap79184781-10, col_values=(('external_ids', {'iface-id': 'b033e9e6-9781-4424-a20f-7b48a14e2c80'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:51:08 compute-0 ovn_controller[148666]: 2026-01-20T14:51:08Z|00395|binding|INFO|Releasing lport b033e9e6-9781-4424-a20f-7b48a14e2c80 from this chassis (sb_readonly=0)
Jan 20 14:51:08 compute-0 nova_compute[250018]: 2026-01-20 14:51:08.102 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:51:08 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:51:08.104 160071 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/79184781-1f23-4584-87de-08e262242488.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/79184781-1f23-4584-87de-08e262242488.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 14:51:08 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:51:08.105 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[627c87d9-dce0-445a-a068-6ca7a0a7a578]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:51:08 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:51:08.105 160071 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 14:51:08 compute-0 ovn_metadata_agent[160049]: global
Jan 20 14:51:08 compute-0 ovn_metadata_agent[160049]:     log         /dev/log local0 debug
Jan 20 14:51:08 compute-0 ovn_metadata_agent[160049]:     log-tag     haproxy-metadata-proxy-79184781-1f23-4584-87de-08e262242488
Jan 20 14:51:08 compute-0 ovn_metadata_agent[160049]:     user        root
Jan 20 14:51:08 compute-0 ovn_metadata_agent[160049]:     group       root
Jan 20 14:51:08 compute-0 ovn_metadata_agent[160049]:     maxconn     1024
Jan 20 14:51:08 compute-0 ovn_metadata_agent[160049]:     pidfile     /var/lib/neutron/external/pids/79184781-1f23-4584-87de-08e262242488.pid.haproxy
Jan 20 14:51:08 compute-0 ovn_metadata_agent[160049]:     daemon
Jan 20 14:51:08 compute-0 ovn_metadata_agent[160049]: 
Jan 20 14:51:08 compute-0 ovn_metadata_agent[160049]: defaults
Jan 20 14:51:08 compute-0 ovn_metadata_agent[160049]:     log global
Jan 20 14:51:08 compute-0 ovn_metadata_agent[160049]:     mode http
Jan 20 14:51:08 compute-0 ovn_metadata_agent[160049]:     option httplog
Jan 20 14:51:08 compute-0 ovn_metadata_agent[160049]:     option dontlognull
Jan 20 14:51:08 compute-0 ovn_metadata_agent[160049]:     option http-server-close
Jan 20 14:51:08 compute-0 ovn_metadata_agent[160049]:     option forwardfor
Jan 20 14:51:08 compute-0 ovn_metadata_agent[160049]:     retries                 3
Jan 20 14:51:08 compute-0 ovn_metadata_agent[160049]:     timeout http-request    30s
Jan 20 14:51:08 compute-0 ovn_metadata_agent[160049]:     timeout connect         30s
Jan 20 14:51:08 compute-0 ovn_metadata_agent[160049]:     timeout client          32s
Jan 20 14:51:08 compute-0 ovn_metadata_agent[160049]:     timeout server          32s
Jan 20 14:51:08 compute-0 ovn_metadata_agent[160049]:     timeout http-keep-alive 30s
Jan 20 14:51:08 compute-0 ovn_metadata_agent[160049]: 
Jan 20 14:51:08 compute-0 ovn_metadata_agent[160049]: 
Jan 20 14:51:08 compute-0 ovn_metadata_agent[160049]: listen listener
Jan 20 14:51:08 compute-0 ovn_metadata_agent[160049]:     bind 169.254.169.254:80
Jan 20 14:51:08 compute-0 ovn_metadata_agent[160049]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 14:51:08 compute-0 ovn_metadata_agent[160049]:     http-request add-header X-OVN-Network-ID 79184781-1f23-4584-87de-08e262242488
Jan 20 14:51:08 compute-0 ovn_metadata_agent[160049]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 14:51:08 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:51:08.106 160071 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-79184781-1f23-4584-87de-08e262242488', 'env', 'PROCESS_TAG=haproxy-79184781-1f23-4584-87de-08e262242488', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/79184781-1f23-4584-87de-08e262242488.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 14:51:08 compute-0 nova_compute[250018]: 2026-01-20 14:51:08.118 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:51:08 compute-0 nova_compute[250018]: 2026-01-20 14:51:08.216 250022 DEBUG nova.virt.libvirt.host [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Removed pending event for c3b4d4c6-c42f-4abc-9c01-89ec3e10c677 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Jan 20 14:51:08 compute-0 nova_compute[250018]: 2026-01-20 14:51:08.218 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768920668.2160277, c3b4d4c6-c42f-4abc-9c01-89ec3e10c677 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:51:08 compute-0 nova_compute[250018]: 2026-01-20 14:51:08.218 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] VM Resumed (Lifecycle Event)
Jan 20 14:51:08 compute-0 nova_compute[250018]: 2026-01-20 14:51:08.246 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:51:08 compute-0 nova_compute[250018]: 2026-01-20 14:51:08.250 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:51:08 compute-0 nova_compute[250018]: 2026-01-20 14:51:08.252 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:51:08 compute-0 nova_compute[250018]: 2026-01-20 14:51:08.266 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] During sync_power_state the instance has a pending task (unrescuing). Skip.
Jan 20 14:51:08 compute-0 nova_compute[250018]: 2026-01-20 14:51:08.267 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768920668.2172303, c3b4d4c6-c42f-4abc-9c01-89ec3e10c677 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:51:08 compute-0 nova_compute[250018]: 2026-01-20 14:51:08.267 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] VM Started (Lifecycle Event)
Jan 20 14:51:08 compute-0 nova_compute[250018]: 2026-01-20 14:51:08.283 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:51:08 compute-0 nova_compute[250018]: 2026-01-20 14:51:08.286 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:51:08 compute-0 nova_compute[250018]: 2026-01-20 14:51:08.310 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] During sync_power_state the instance has a pending task (unrescuing). Skip.
Jan 20 14:51:08 compute-0 podman[315900]: 2026-01-20 14:51:08.461710006 +0000 UTC m=+0.046348988 container create bfa38f409e4db746f86c289bba5314413cbadb79ed929249202b16886c106a6d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-79184781-1f23-4584-87de-08e262242488, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 20 14:51:08 compute-0 systemd[1]: Started libpod-conmon-bfa38f409e4db746f86c289bba5314413cbadb79ed929249202b16886c106a6d.scope.
Jan 20 14:51:08 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:51:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/691026effad5f1390d9f39691f429dc01db6c0a2d89a24e97f7db25d7a7619df/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 14:51:08 compute-0 podman[315900]: 2026-01-20 14:51:08.438415309 +0000 UTC m=+0.023054311 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 14:51:08 compute-0 podman[315900]: 2026-01-20 14:51:08.544675489 +0000 UTC m=+0.129314491 container init bfa38f409e4db746f86c289bba5314413cbadb79ed929249202b16886c106a6d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-79184781-1f23-4584-87de-08e262242488, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 20 14:51:08 compute-0 podman[315900]: 2026-01-20 14:51:08.549614382 +0000 UTC m=+0.134253364 container start bfa38f409e4db746f86c289bba5314413cbadb79ed929249202b16886c106a6d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-79184781-1f23-4584-87de-08e262242488, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 20 14:51:08 compute-0 neutron-haproxy-ovnmeta-79184781-1f23-4584-87de-08e262242488[315916]: [NOTICE]   (315920) : New worker (315922) forked
Jan 20 14:51:08 compute-0 neutron-haproxy-ovnmeta-79184781-1f23-4584-87de-08e262242488[315916]: [NOTICE]   (315920) : Loading success.
Jan 20 14:51:08 compute-0 nova_compute[250018]: 2026-01-20 14:51:08.601 250022 DEBUG nova.compute.manager [None req-cdb4df73-187e-4e7a-9a1e-022c86626f59 d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:51:08 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:51:08 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:51:08 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:51:08.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:51:08 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:51:08 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:51:08 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:51:08.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:51:08 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e255 do_prune osdmap full prune enabled
Jan 20 14:51:08 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e256 e256: 3 total, 3 up, 3 in
Jan 20 14:51:08 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e256: 3 total, 3 up, 3 in
Jan 20 14:51:08 compute-0 ceph-mon[74360]: pgmap v1906: 321 pgs: 321 active+clean; 405 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 56 KiB/s wr, 221 op/s
Jan 20 14:51:08 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/2039283088' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:51:09 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:51:09 compute-0 nova_compute[250018]: 2026-01-20 14:51:09.741 250022 DEBUG nova.compute.manager [req-ad1f0dbf-d5b3-49a4-9902-5c7f3af27670 req-94182c6b-7ba7-4fcd-965a-f7fefa1e15be 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] Received event network-vif-plugged-7869c4f4-4542-4bf0-aebd-eeb2d3ce94f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:51:09 compute-0 nova_compute[250018]: 2026-01-20 14:51:09.743 250022 DEBUG oslo_concurrency.lockutils [req-ad1f0dbf-d5b3-49a4-9902-5c7f3af27670 req-94182c6b-7ba7-4fcd-965a-f7fefa1e15be 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "c3b4d4c6-c42f-4abc-9c01-89ec3e10c677-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:51:09 compute-0 nova_compute[250018]: 2026-01-20 14:51:09.744 250022 DEBUG oslo_concurrency.lockutils [req-ad1f0dbf-d5b3-49a4-9902-5c7f3af27670 req-94182c6b-7ba7-4fcd-965a-f7fefa1e15be 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "c3b4d4c6-c42f-4abc-9c01-89ec3e10c677-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:51:09 compute-0 nova_compute[250018]: 2026-01-20 14:51:09.744 250022 DEBUG oslo_concurrency.lockutils [req-ad1f0dbf-d5b3-49a4-9902-5c7f3af27670 req-94182c6b-7ba7-4fcd-965a-f7fefa1e15be 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "c3b4d4c6-c42f-4abc-9c01-89ec3e10c677-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:51:09 compute-0 nova_compute[250018]: 2026-01-20 14:51:09.745 250022 DEBUG nova.compute.manager [req-ad1f0dbf-d5b3-49a4-9902-5c7f3af27670 req-94182c6b-7ba7-4fcd-965a-f7fefa1e15be 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] No waiting events found dispatching network-vif-plugged-7869c4f4-4542-4bf0-aebd-eeb2d3ce94f9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:51:09 compute-0 nova_compute[250018]: 2026-01-20 14:51:09.746 250022 WARNING nova.compute.manager [req-ad1f0dbf-d5b3-49a4-9902-5c7f3af27670 req-94182c6b-7ba7-4fcd-965a-f7fefa1e15be 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] Received unexpected event network-vif-plugged-7869c4f4-4542-4bf0-aebd-eeb2d3ce94f9 for instance with vm_state active and task_state None.
Jan 20 14:51:09 compute-0 nova_compute[250018]: 2026-01-20 14:51:09.746 250022 DEBUG nova.compute.manager [req-ad1f0dbf-d5b3-49a4-9902-5c7f3af27670 req-94182c6b-7ba7-4fcd-965a-f7fefa1e15be 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] Received event network-vif-plugged-7869c4f4-4542-4bf0-aebd-eeb2d3ce94f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:51:09 compute-0 nova_compute[250018]: 2026-01-20 14:51:09.747 250022 DEBUG oslo_concurrency.lockutils [req-ad1f0dbf-d5b3-49a4-9902-5c7f3af27670 req-94182c6b-7ba7-4fcd-965a-f7fefa1e15be 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "c3b4d4c6-c42f-4abc-9c01-89ec3e10c677-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:51:09 compute-0 nova_compute[250018]: 2026-01-20 14:51:09.748 250022 DEBUG oslo_concurrency.lockutils [req-ad1f0dbf-d5b3-49a4-9902-5c7f3af27670 req-94182c6b-7ba7-4fcd-965a-f7fefa1e15be 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "c3b4d4c6-c42f-4abc-9c01-89ec3e10c677-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:51:09 compute-0 nova_compute[250018]: 2026-01-20 14:51:09.748 250022 DEBUG oslo_concurrency.lockutils [req-ad1f0dbf-d5b3-49a4-9902-5c7f3af27670 req-94182c6b-7ba7-4fcd-965a-f7fefa1e15be 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "c3b4d4c6-c42f-4abc-9c01-89ec3e10c677-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:51:09 compute-0 nova_compute[250018]: 2026-01-20 14:51:09.749 250022 DEBUG nova.compute.manager [req-ad1f0dbf-d5b3-49a4-9902-5c7f3af27670 req-94182c6b-7ba7-4fcd-965a-f7fefa1e15be 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] No waiting events found dispatching network-vif-plugged-7869c4f4-4542-4bf0-aebd-eeb2d3ce94f9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:51:09 compute-0 nova_compute[250018]: 2026-01-20 14:51:09.749 250022 WARNING nova.compute.manager [req-ad1f0dbf-d5b3-49a4-9902-5c7f3af27670 req-94182c6b-7ba7-4fcd-965a-f7fefa1e15be 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] Received unexpected event network-vif-plugged-7869c4f4-4542-4bf0-aebd-eeb2d3ce94f9 for instance with vm_state active and task_state None.
Jan 20 14:51:09 compute-0 nova_compute[250018]: 2026-01-20 14:51:09.750 250022 DEBUG nova.compute.manager [req-ad1f0dbf-d5b3-49a4-9902-5c7f3af27670 req-94182c6b-7ba7-4fcd-965a-f7fefa1e15be 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] Received event network-vif-plugged-7869c4f4-4542-4bf0-aebd-eeb2d3ce94f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:51:09 compute-0 nova_compute[250018]: 2026-01-20 14:51:09.750 250022 DEBUG oslo_concurrency.lockutils [req-ad1f0dbf-d5b3-49a4-9902-5c7f3af27670 req-94182c6b-7ba7-4fcd-965a-f7fefa1e15be 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "c3b4d4c6-c42f-4abc-9c01-89ec3e10c677-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:51:09 compute-0 nova_compute[250018]: 2026-01-20 14:51:09.751 250022 DEBUG oslo_concurrency.lockutils [req-ad1f0dbf-d5b3-49a4-9902-5c7f3af27670 req-94182c6b-7ba7-4fcd-965a-f7fefa1e15be 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "c3b4d4c6-c42f-4abc-9c01-89ec3e10c677-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:51:09 compute-0 nova_compute[250018]: 2026-01-20 14:51:09.751 250022 DEBUG oslo_concurrency.lockutils [req-ad1f0dbf-d5b3-49a4-9902-5c7f3af27670 req-94182c6b-7ba7-4fcd-965a-f7fefa1e15be 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "c3b4d4c6-c42f-4abc-9c01-89ec3e10c677-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:51:09 compute-0 nova_compute[250018]: 2026-01-20 14:51:09.752 250022 DEBUG nova.compute.manager [req-ad1f0dbf-d5b3-49a4-9902-5c7f3af27670 req-94182c6b-7ba7-4fcd-965a-f7fefa1e15be 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] No waiting events found dispatching network-vif-plugged-7869c4f4-4542-4bf0-aebd-eeb2d3ce94f9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:51:09 compute-0 nova_compute[250018]: 2026-01-20 14:51:09.753 250022 WARNING nova.compute.manager [req-ad1f0dbf-d5b3-49a4-9902-5c7f3af27670 req-94182c6b-7ba7-4fcd-965a-f7fefa1e15be 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] Received unexpected event network-vif-plugged-7869c4f4-4542-4bf0-aebd-eeb2d3ce94f9 for instance with vm_state active and task_state None.
Jan 20 14:51:09 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1908: 321 pgs: 321 active+clean; 405 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 5.9 MiB/s rd, 64 KiB/s wr, 298 op/s
Jan 20 14:51:10 compute-0 ceph-mon[74360]: osdmap e256: 3 total, 3 up, 3 in
Jan 20 14:51:10 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/2382876353' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:51:10 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/2607352360' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:51:10 compute-0 sudo[315934]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:51:10 compute-0 sudo[315934]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:51:10 compute-0 sudo[315934]: pam_unix(sudo:session): session closed for user root
Jan 20 14:51:10 compute-0 sshd-session[315932]: Invalid user postgres from 157.245.78.139 port 56346
Jan 20 14:51:10 compute-0 sudo[315959]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:51:10 compute-0 sudo[315959]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:51:10 compute-0 sudo[315959]: pam_unix(sudo:session): session closed for user root
Jan 20 14:51:10 compute-0 sshd-session[315932]: Connection closed by invalid user postgres 157.245.78.139 port 56346 [preauth]
Jan 20 14:51:10 compute-0 nova_compute[250018]: 2026-01-20 14:51:10.624 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:51:10 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:51:10 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:51:10 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:51:10.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:51:10 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:51:10 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:51:10 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:51:10.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:51:11 compute-0 ceph-mon[74360]: pgmap v1908: 321 pgs: 321 active+clean; 405 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 5.9 MiB/s rd, 64 KiB/s wr, 298 op/s
Jan 20 14:51:11 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/2098571758' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:51:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 14:51:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:51:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 20 14:51:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:51:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006518777335752839 of space, bias 1.0, pg target 1.9556332007258517 quantized to 32 (current 32)
Jan 20 14:51:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:51:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021699571352131026 of space, bias 1.0, pg target 0.6488171834287176 quantized to 32 (current 32)
Jan 20 14:51:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:51:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:51:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:51:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.002892064489112228 of space, bias 1.0, pg target 0.8647272822445562 quantized to 32 (current 32)
Jan 20 14:51:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:51:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 32)
Jan 20 14:51:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:51:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:51:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:51:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021737739624046157 quantized to 32 (current 32)
Jan 20 14:51:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:51:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Jan 20 14:51:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:51:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:51:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:51:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Jan 20 14:51:11 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1909: 321 pgs: 321 active+clean; 405 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 5.3 MiB/s rd, 53 KiB/s wr, 283 op/s
Jan 20 14:51:12 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:51:12 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:51:12 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:51:12.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:51:12 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:51:12 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:51:12 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:51:12.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:51:13 compute-0 ceph-mon[74360]: pgmap v1909: 321 pgs: 321 active+clean; 405 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 5.3 MiB/s rd, 53 KiB/s wr, 283 op/s
Jan 20 14:51:13 compute-0 nova_compute[250018]: 2026-01-20 14:51:13.254 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:51:13 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1910: 321 pgs: 321 active+clean; 405 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 6.7 MiB/s rd, 17 KiB/s wr, 332 op/s
Jan 20 14:51:14 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/4280913832' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:51:14 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/4280913832' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:51:14 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:51:14 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:51:14 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:51:14 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:51:14.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:51:14 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:51:14 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:51:14 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:51:14.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:51:14 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:51:14.945 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=36, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=35) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:51:14 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:51:14.946 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 14:51:14 compute-0 nova_compute[250018]: 2026-01-20 14:51:14.984 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:51:15 compute-0 ceph-mon[74360]: pgmap v1910: 321 pgs: 321 active+clean; 405 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 6.7 MiB/s rd, 17 KiB/s wr, 332 op/s
Jan 20 14:51:15 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/1484244040' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:51:15 compute-0 nova_compute[250018]: 2026-01-20 14:51:15.627 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:51:15 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1911: 321 pgs: 321 active+clean; 349 MiB data, 1008 MiB used, 20 GiB / 21 GiB avail; 7.0 MiB/s rd, 2.1 KiB/s wr, 310 op/s
Jan 20 14:51:15 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:51:15.948 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=367c1a2c-b16a-4828-ab5a-626bb50023b4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '36'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:51:16 compute-0 ceph-mon[74360]: pgmap v1911: 321 pgs: 321 active+clean; 349 MiB data, 1008 MiB used, 20 GiB / 21 GiB avail; 7.0 MiB/s rd, 2.1 KiB/s wr, 310 op/s
Jan 20 14:51:16 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:51:16 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:51:16 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:51:16.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:51:16 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:51:16 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:51:16 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:51:16.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:51:16 compute-0 nova_compute[250018]: 2026-01-20 14:51:16.948 250022 DEBUG oslo_concurrency.lockutils [None req-a47261cd-cc45-46eb-bbda-64047499d17b a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Acquiring lock "6d73207e-59fd-4b15-875b-c735b5930933" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:51:16 compute-0 nova_compute[250018]: 2026-01-20 14:51:16.949 250022 DEBUG oslo_concurrency.lockutils [None req-a47261cd-cc45-46eb-bbda-64047499d17b a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lock "6d73207e-59fd-4b15-875b-c735b5930933" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:51:16 compute-0 nova_compute[250018]: 2026-01-20 14:51:16.979 250022 DEBUG nova.compute.manager [None req-a47261cd-cc45-46eb-bbda-64047499d17b a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: 6d73207e-59fd-4b15-875b-c735b5930933] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 14:51:17 compute-0 nova_compute[250018]: 2026-01-20 14:51:17.175 250022 DEBUG oslo_concurrency.lockutils [None req-a47261cd-cc45-46eb-bbda-64047499d17b a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:51:17 compute-0 nova_compute[250018]: 2026-01-20 14:51:17.176 250022 DEBUG oslo_concurrency.lockutils [None req-a47261cd-cc45-46eb-bbda-64047499d17b a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:51:17 compute-0 nova_compute[250018]: 2026-01-20 14:51:17.184 250022 DEBUG nova.virt.hardware [None req-a47261cd-cc45-46eb-bbda-64047499d17b a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 14:51:17 compute-0 nova_compute[250018]: 2026-01-20 14:51:17.184 250022 INFO nova.compute.claims [None req-a47261cd-cc45-46eb-bbda-64047499d17b a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: 6d73207e-59fd-4b15-875b-c735b5930933] Claim successful on node compute-0.ctlplane.example.com
Jan 20 14:51:17 compute-0 nova_compute[250018]: 2026-01-20 14:51:17.427 250022 DEBUG oslo_concurrency.processutils [None req-a47261cd-cc45-46eb-bbda-64047499d17b a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:51:17 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:51:17 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/100085165' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:51:17 compute-0 nova_compute[250018]: 2026-01-20 14:51:17.867 250022 DEBUG oslo_concurrency.processutils [None req-a47261cd-cc45-46eb-bbda-64047499d17b a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:51:17 compute-0 nova_compute[250018]: 2026-01-20 14:51:17.873 250022 DEBUG nova.compute.provider_tree [None req-a47261cd-cc45-46eb-bbda-64047499d17b a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:51:17 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1912: 321 pgs: 321 active+clean; 347 MiB data, 999 MiB used, 20 GiB / 21 GiB avail; 5.4 MiB/s rd, 870 KiB/s wr, 265 op/s
Jan 20 14:51:17 compute-0 nova_compute[250018]: 2026-01-20 14:51:17.897 250022 DEBUG nova.scheduler.client.report [None req-a47261cd-cc45-46eb-bbda-64047499d17b a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:51:17 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/100085165' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:51:17 compute-0 nova_compute[250018]: 2026-01-20 14:51:17.980 250022 DEBUG oslo_concurrency.lockutils [None req-a47261cd-cc45-46eb-bbda-64047499d17b a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.804s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:51:17 compute-0 nova_compute[250018]: 2026-01-20 14:51:17.980 250022 DEBUG nova.compute.manager [None req-a47261cd-cc45-46eb-bbda-64047499d17b a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: 6d73207e-59fd-4b15-875b-c735b5930933] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 14:51:18 compute-0 nova_compute[250018]: 2026-01-20 14:51:18.155 250022 DEBUG nova.compute.manager [None req-a47261cd-cc45-46eb-bbda-64047499d17b a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: 6d73207e-59fd-4b15-875b-c735b5930933] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 20 14:51:18 compute-0 nova_compute[250018]: 2026-01-20 14:51:18.155 250022 DEBUG nova.network.neutron [None req-a47261cd-cc45-46eb-bbda-64047499d17b a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: 6d73207e-59fd-4b15-875b-c735b5930933] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 14:51:18 compute-0 nova_compute[250018]: 2026-01-20 14:51:18.237 250022 INFO nova.virt.libvirt.driver [None req-a47261cd-cc45-46eb-bbda-64047499d17b a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: 6d73207e-59fd-4b15-875b-c735b5930933] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 14:51:18 compute-0 nova_compute[250018]: 2026-01-20 14:51:18.256 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:51:18 compute-0 nova_compute[250018]: 2026-01-20 14:51:18.285 250022 DEBUG nova.compute.manager [None req-a47261cd-cc45-46eb-bbda-64047499d17b a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: 6d73207e-59fd-4b15-875b-c735b5930933] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 14:51:18 compute-0 nova_compute[250018]: 2026-01-20 14:51:18.414 250022 DEBUG nova.policy [None req-a47261cd-cc45-46eb-bbda-64047499d17b a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a1bd93d04cc4468abe1d5c61f5144191', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'acb30fbc0e3749e390d7f867060b5a2a', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 20 14:51:18 compute-0 nova_compute[250018]: 2026-01-20 14:51:18.470 250022 DEBUG nova.compute.manager [None req-a47261cd-cc45-46eb-bbda-64047499d17b a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: 6d73207e-59fd-4b15-875b-c735b5930933] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 14:51:18 compute-0 nova_compute[250018]: 2026-01-20 14:51:18.473 250022 DEBUG nova.virt.libvirt.driver [None req-a47261cd-cc45-46eb-bbda-64047499d17b a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: 6d73207e-59fd-4b15-875b-c735b5930933] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 14:51:18 compute-0 nova_compute[250018]: 2026-01-20 14:51:18.474 250022 INFO nova.virt.libvirt.driver [None req-a47261cd-cc45-46eb-bbda-64047499d17b a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: 6d73207e-59fd-4b15-875b-c735b5930933] Creating image(s)
Jan 20 14:51:18 compute-0 nova_compute[250018]: 2026-01-20 14:51:18.510 250022 DEBUG nova.storage.rbd_utils [None req-a47261cd-cc45-46eb-bbda-64047499d17b a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] rbd image 6d73207e-59fd-4b15-875b-c735b5930933_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:51:18 compute-0 nova_compute[250018]: 2026-01-20 14:51:18.541 250022 DEBUG nova.storage.rbd_utils [None req-a47261cd-cc45-46eb-bbda-64047499d17b a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] rbd image 6d73207e-59fd-4b15-875b-c735b5930933_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:51:18 compute-0 nova_compute[250018]: 2026-01-20 14:51:18.566 250022 DEBUG nova.storage.rbd_utils [None req-a47261cd-cc45-46eb-bbda-64047499d17b a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] rbd image 6d73207e-59fd-4b15-875b-c735b5930933_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:51:18 compute-0 nova_compute[250018]: 2026-01-20 14:51:18.570 250022 DEBUG oslo_concurrency.processutils [None req-a47261cd-cc45-46eb-bbda-64047499d17b a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:51:18 compute-0 nova_compute[250018]: 2026-01-20 14:51:18.638 250022 DEBUG oslo_concurrency.processutils [None req-a47261cd-cc45-46eb-bbda-64047499d17b a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:51:18 compute-0 nova_compute[250018]: 2026-01-20 14:51:18.639 250022 DEBUG oslo_concurrency.lockutils [None req-a47261cd-cc45-46eb-bbda-64047499d17b a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Acquiring lock "82d5c1918fd7c974214c7a48c1793a7a82560462" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:51:18 compute-0 nova_compute[250018]: 2026-01-20 14:51:18.640 250022 DEBUG oslo_concurrency.lockutils [None req-a47261cd-cc45-46eb-bbda-64047499d17b a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:51:18 compute-0 nova_compute[250018]: 2026-01-20 14:51:18.640 250022 DEBUG oslo_concurrency.lockutils [None req-a47261cd-cc45-46eb-bbda-64047499d17b a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:51:18 compute-0 nova_compute[250018]: 2026-01-20 14:51:18.668 250022 DEBUG nova.storage.rbd_utils [None req-a47261cd-cc45-46eb-bbda-64047499d17b a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] rbd image 6d73207e-59fd-4b15-875b-c735b5930933_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:51:18 compute-0 nova_compute[250018]: 2026-01-20 14:51:18.673 250022 DEBUG oslo_concurrency.processutils [None req-a47261cd-cc45-46eb-bbda-64047499d17b a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 6d73207e-59fd-4b15-875b-c735b5930933_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:51:18 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:51:18 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:51:18 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:51:18.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:51:18 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:51:18 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:51:18 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:51:18.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:51:18 compute-0 ceph-mon[74360]: pgmap v1912: 321 pgs: 321 active+clean; 347 MiB data, 999 MiB used, 20 GiB / 21 GiB avail; 5.4 MiB/s rd, 870 KiB/s wr, 265 op/s
Jan 20 14:51:18 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/35703789' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:51:18 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/35703789' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:51:19 compute-0 nova_compute[250018]: 2026-01-20 14:51:19.008 250022 DEBUG nova.network.neutron [None req-a47261cd-cc45-46eb-bbda-64047499d17b a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: 6d73207e-59fd-4b15-875b-c735b5930933] Successfully created port: 92744b24-2cfe-43e7-9df7-107b17d3eabc _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 20 14:51:19 compute-0 nova_compute[250018]: 2026-01-20 14:51:19.098 250022 DEBUG oslo_concurrency.processutils [None req-a47261cd-cc45-46eb-bbda-64047499d17b a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 6d73207e-59fd-4b15-875b-c735b5930933_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:51:19 compute-0 nova_compute[250018]: 2026-01-20 14:51:19.166 250022 DEBUG nova.storage.rbd_utils [None req-a47261cd-cc45-46eb-bbda-64047499d17b a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] resizing rbd image 6d73207e-59fd-4b15-875b-c735b5930933_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 20 14:51:19 compute-0 nova_compute[250018]: 2026-01-20 14:51:19.312 250022 DEBUG nova.objects.instance [None req-a47261cd-cc45-46eb-bbda-64047499d17b a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lazy-loading 'migration_context' on Instance uuid 6d73207e-59fd-4b15-875b-c735b5930933 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:51:19 compute-0 nova_compute[250018]: 2026-01-20 14:51:19.362 250022 DEBUG nova.virt.libvirt.driver [None req-a47261cd-cc45-46eb-bbda-64047499d17b a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: 6d73207e-59fd-4b15-875b-c735b5930933] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 14:51:19 compute-0 nova_compute[250018]: 2026-01-20 14:51:19.362 250022 DEBUG nova.virt.libvirt.driver [None req-a47261cd-cc45-46eb-bbda-64047499d17b a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: 6d73207e-59fd-4b15-875b-c735b5930933] Ensure instance console log exists: /var/lib/nova/instances/6d73207e-59fd-4b15-875b-c735b5930933/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 14:51:19 compute-0 nova_compute[250018]: 2026-01-20 14:51:19.363 250022 DEBUG oslo_concurrency.lockutils [None req-a47261cd-cc45-46eb-bbda-64047499d17b a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:51:19 compute-0 nova_compute[250018]: 2026-01-20 14:51:19.363 250022 DEBUG oslo_concurrency.lockutils [None req-a47261cd-cc45-46eb-bbda-64047499d17b a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:51:19 compute-0 nova_compute[250018]: 2026-01-20 14:51:19.363 250022 DEBUG oslo_concurrency.lockutils [None req-a47261cd-cc45-46eb-bbda-64047499d17b a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:51:19 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:51:19 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e256 do_prune osdmap full prune enabled
Jan 20 14:51:19 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e257 e257: 3 total, 3 up, 3 in
Jan 20 14:51:19 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e257: 3 total, 3 up, 3 in
Jan 20 14:51:19 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1914: 321 pgs: 321 active+clean; 356 MiB data, 994 MiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 1.3 MiB/s wr, 241 op/s
Jan 20 14:51:19 compute-0 nova_compute[250018]: 2026-01-20 14:51:19.986 250022 DEBUG nova.network.neutron [None req-a47261cd-cc45-46eb-bbda-64047499d17b a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: 6d73207e-59fd-4b15-875b-c735b5930933] Successfully updated port: 92744b24-2cfe-43e7-9df7-107b17d3eabc _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 20 14:51:20 compute-0 nova_compute[250018]: 2026-01-20 14:51:20.039 250022 DEBUG oslo_concurrency.lockutils [None req-a47261cd-cc45-46eb-bbda-64047499d17b a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Acquiring lock "refresh_cache-6d73207e-59fd-4b15-875b-c735b5930933" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:51:20 compute-0 nova_compute[250018]: 2026-01-20 14:51:20.040 250022 DEBUG oslo_concurrency.lockutils [None req-a47261cd-cc45-46eb-bbda-64047499d17b a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Acquired lock "refresh_cache-6d73207e-59fd-4b15-875b-c735b5930933" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:51:20 compute-0 nova_compute[250018]: 2026-01-20 14:51:20.040 250022 DEBUG nova.network.neutron [None req-a47261cd-cc45-46eb-bbda-64047499d17b a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: 6d73207e-59fd-4b15-875b-c735b5930933] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:51:20 compute-0 nova_compute[250018]: 2026-01-20 14:51:20.130 250022 DEBUG nova.compute.manager [req-a861b4f9-e800-4324-aa3b-9d0111f32bb4 req-8bcd5ac7-dd3d-4676-bb88-37f076871dd2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 6d73207e-59fd-4b15-875b-c735b5930933] Received event network-changed-92744b24-2cfe-43e7-9df7-107b17d3eabc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:51:20 compute-0 nova_compute[250018]: 2026-01-20 14:51:20.130 250022 DEBUG nova.compute.manager [req-a861b4f9-e800-4324-aa3b-9d0111f32bb4 req-8bcd5ac7-dd3d-4676-bb88-37f076871dd2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 6d73207e-59fd-4b15-875b-c735b5930933] Refreshing instance network info cache due to event network-changed-92744b24-2cfe-43e7-9df7-107b17d3eabc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:51:20 compute-0 nova_compute[250018]: 2026-01-20 14:51:20.130 250022 DEBUG oslo_concurrency.lockutils [req-a861b4f9-e800-4324-aa3b-9d0111f32bb4 req-8bcd5ac7-dd3d-4676-bb88-37f076871dd2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-6d73207e-59fd-4b15-875b-c735b5930933" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:51:20 compute-0 nova_compute[250018]: 2026-01-20 14:51:20.267 250022 DEBUG nova.network.neutron [None req-a47261cd-cc45-46eb-bbda-64047499d17b a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: 6d73207e-59fd-4b15-875b-c735b5930933] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 14:51:20 compute-0 nova_compute[250018]: 2026-01-20 14:51:20.630 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:51:20 compute-0 ceph-mon[74360]: osdmap e257: 3 total, 3 up, 3 in
Jan 20 14:51:20 compute-0 ceph-mon[74360]: pgmap v1914: 321 pgs: 321 active+clean; 356 MiB data, 994 MiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 1.3 MiB/s wr, 241 op/s
Jan 20 14:51:20 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/2416694841' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:51:20 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/2416694841' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:51:20 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/2798121711' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:51:20 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:51:20 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:51:20 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:51:20.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:51:20 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:51:20 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:51:20 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:51:20.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:51:21 compute-0 nova_compute[250018]: 2026-01-20 14:51:21.510 250022 DEBUG nova.network.neutron [None req-a47261cd-cc45-46eb-bbda-64047499d17b a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: 6d73207e-59fd-4b15-875b-c735b5930933] Updating instance_info_cache with network_info: [{"id": "92744b24-2cfe-43e7-9df7-107b17d3eabc", "address": "fa:16:3e:6a:2d:45", "network": {"id": "3379e2b3-ffb2-4391-969b-c9dc51bfbe25", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1112843240-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acb30fbc0e3749e390d7f867060b5a2a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap92744b24-2c", "ovs_interfaceid": "92744b24-2cfe-43e7-9df7-107b17d3eabc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:51:21 compute-0 nova_compute[250018]: 2026-01-20 14:51:21.546 250022 DEBUG oslo_concurrency.lockutils [None req-a47261cd-cc45-46eb-bbda-64047499d17b a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Releasing lock "refresh_cache-6d73207e-59fd-4b15-875b-c735b5930933" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:51:21 compute-0 nova_compute[250018]: 2026-01-20 14:51:21.546 250022 DEBUG nova.compute.manager [None req-a47261cd-cc45-46eb-bbda-64047499d17b a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: 6d73207e-59fd-4b15-875b-c735b5930933] Instance network_info: |[{"id": "92744b24-2cfe-43e7-9df7-107b17d3eabc", "address": "fa:16:3e:6a:2d:45", "network": {"id": "3379e2b3-ffb2-4391-969b-c9dc51bfbe25", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1112843240-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acb30fbc0e3749e390d7f867060b5a2a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap92744b24-2c", "ovs_interfaceid": "92744b24-2cfe-43e7-9df7-107b17d3eabc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 20 14:51:21 compute-0 nova_compute[250018]: 2026-01-20 14:51:21.546 250022 DEBUG oslo_concurrency.lockutils [req-a861b4f9-e800-4324-aa3b-9d0111f32bb4 req-8bcd5ac7-dd3d-4676-bb88-37f076871dd2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-6d73207e-59fd-4b15-875b-c735b5930933" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:51:21 compute-0 nova_compute[250018]: 2026-01-20 14:51:21.547 250022 DEBUG nova.network.neutron [req-a861b4f9-e800-4324-aa3b-9d0111f32bb4 req-8bcd5ac7-dd3d-4676-bb88-37f076871dd2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 6d73207e-59fd-4b15-875b-c735b5930933] Refreshing network info cache for port 92744b24-2cfe-43e7-9df7-107b17d3eabc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:51:21 compute-0 nova_compute[250018]: 2026-01-20 14:51:21.549 250022 DEBUG nova.virt.libvirt.driver [None req-a47261cd-cc45-46eb-bbda-64047499d17b a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: 6d73207e-59fd-4b15-875b-c735b5930933] Start _get_guest_xml network_info=[{"id": "92744b24-2cfe-43e7-9df7-107b17d3eabc", "address": "fa:16:3e:6a:2d:45", "network": {"id": "3379e2b3-ffb2-4391-969b-c9dc51bfbe25", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1112843240-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acb30fbc0e3749e390d7f867060b5a2a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap92744b24-2c", "ovs_interfaceid": "92744b24-2cfe-43e7-9df7-107b17d3eabc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encrypted': False, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'guest_format': None, 'size': 0, 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 14:51:21 compute-0 nova_compute[250018]: 2026-01-20 14:51:21.555 250022 WARNING nova.virt.libvirt.driver [None req-a47261cd-cc45-46eb-bbda-64047499d17b a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:51:21 compute-0 nova_compute[250018]: 2026-01-20 14:51:21.564 250022 DEBUG nova.virt.libvirt.host [None req-a47261cd-cc45-46eb-bbda-64047499d17b a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 14:51:21 compute-0 nova_compute[250018]: 2026-01-20 14:51:21.565 250022 DEBUG nova.virt.libvirt.host [None req-a47261cd-cc45-46eb-bbda-64047499d17b a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 14:51:21 compute-0 nova_compute[250018]: 2026-01-20 14:51:21.568 250022 DEBUG nova.virt.libvirt.host [None req-a47261cd-cc45-46eb-bbda-64047499d17b a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 14:51:21 compute-0 nova_compute[250018]: 2026-01-20 14:51:21.569 250022 DEBUG nova.virt.libvirt.host [None req-a47261cd-cc45-46eb-bbda-64047499d17b a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 14:51:21 compute-0 nova_compute[250018]: 2026-01-20 14:51:21.570 250022 DEBUG nova.virt.libvirt.driver [None req-a47261cd-cc45-46eb-bbda-64047499d17b a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 14:51:21 compute-0 nova_compute[250018]: 2026-01-20 14:51:21.570 250022 DEBUG nova.virt.hardware [None req-a47261cd-cc45-46eb-bbda-64047499d17b a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 14:51:21 compute-0 nova_compute[250018]: 2026-01-20 14:51:21.571 250022 DEBUG nova.virt.hardware [None req-a47261cd-cc45-46eb-bbda-64047499d17b a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 14:51:21 compute-0 nova_compute[250018]: 2026-01-20 14:51:21.571 250022 DEBUG nova.virt.hardware [None req-a47261cd-cc45-46eb-bbda-64047499d17b a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 14:51:21 compute-0 nova_compute[250018]: 2026-01-20 14:51:21.571 250022 DEBUG nova.virt.hardware [None req-a47261cd-cc45-46eb-bbda-64047499d17b a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 14:51:21 compute-0 nova_compute[250018]: 2026-01-20 14:51:21.572 250022 DEBUG nova.virt.hardware [None req-a47261cd-cc45-46eb-bbda-64047499d17b a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 14:51:21 compute-0 nova_compute[250018]: 2026-01-20 14:51:21.572 250022 DEBUG nova.virt.hardware [None req-a47261cd-cc45-46eb-bbda-64047499d17b a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 14:51:21 compute-0 nova_compute[250018]: 2026-01-20 14:51:21.572 250022 DEBUG nova.virt.hardware [None req-a47261cd-cc45-46eb-bbda-64047499d17b a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 14:51:21 compute-0 nova_compute[250018]: 2026-01-20 14:51:21.572 250022 DEBUG nova.virt.hardware [None req-a47261cd-cc45-46eb-bbda-64047499d17b a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 14:51:21 compute-0 nova_compute[250018]: 2026-01-20 14:51:21.573 250022 DEBUG nova.virt.hardware [None req-a47261cd-cc45-46eb-bbda-64047499d17b a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 14:51:21 compute-0 nova_compute[250018]: 2026-01-20 14:51:21.573 250022 DEBUG nova.virt.hardware [None req-a47261cd-cc45-46eb-bbda-64047499d17b a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 14:51:21 compute-0 nova_compute[250018]: 2026-01-20 14:51:21.573 250022 DEBUG nova.virt.hardware [None req-a47261cd-cc45-46eb-bbda-64047499d17b a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 14:51:21 compute-0 nova_compute[250018]: 2026-01-20 14:51:21.576 250022 DEBUG oslo_concurrency.processutils [None req-a47261cd-cc45-46eb-bbda-64047499d17b a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:51:21 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/3218029730' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:51:21 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 20 14:51:21 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3118254261' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:51:21 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 20 14:51:21 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3118254261' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:51:21 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1915: 321 pgs: 321 active+clean; 388 MiB data, 1010 MiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 3.0 MiB/s wr, 234 op/s
Jan 20 14:51:22 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:51:22 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1623500243' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:51:22 compute-0 nova_compute[250018]: 2026-01-20 14:51:22.028 250022 DEBUG oslo_concurrency.processutils [None req-a47261cd-cc45-46eb-bbda-64047499d17b a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:51:22 compute-0 nova_compute[250018]: 2026-01-20 14:51:22.057 250022 DEBUG nova.storage.rbd_utils [None req-a47261cd-cc45-46eb-bbda-64047499d17b a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] rbd image 6d73207e-59fd-4b15-875b-c735b5930933_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:51:22 compute-0 nova_compute[250018]: 2026-01-20 14:51:22.063 250022 DEBUG oslo_concurrency.processutils [None req-a47261cd-cc45-46eb-bbda-64047499d17b a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:51:22 compute-0 ovn_controller[148666]: 2026-01-20T14:51:22Z|00047|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:7f:92:68 10.100.0.11
Jan 20 14:51:22 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:51:22 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1223132071' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:51:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:51:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:51:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:51:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:51:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:51:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:51:22 compute-0 nova_compute[250018]: 2026-01-20 14:51:22.519 250022 DEBUG oslo_concurrency.processutils [None req-a47261cd-cc45-46eb-bbda-64047499d17b a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:51:22 compute-0 nova_compute[250018]: 2026-01-20 14:51:22.522 250022 DEBUG nova.virt.libvirt.vif [None req-a47261cd-cc45-46eb-bbda-64047499d17b a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:51:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-438454409',display_name='tempest-ServerDiskConfigTestJSON-server-438454409',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-438454409',id=113,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='acb30fbc0e3749e390d7f867060b5a2a',ramdisk_id='',reservation_id='r-amva5euo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-1806346246',owner_user_name='tempest-ServerDiskConfigTestJSON-1806346246-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:51:18Z,user_data=None,user_id='a1bd93d04cc4468abe1d5c61f5144191',uuid=6d73207e-59fd-4b15-875b-c735b5930933,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "92744b24-2cfe-43e7-9df7-107b17d3eabc", "address": "fa:16:3e:6a:2d:45", "network": {"id": "3379e2b3-ffb2-4391-969b-c9dc51bfbe25", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1112843240-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acb30fbc0e3749e390d7f867060b5a2a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap92744b24-2c", "ovs_interfaceid": "92744b24-2cfe-43e7-9df7-107b17d3eabc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 14:51:22 compute-0 nova_compute[250018]: 2026-01-20 14:51:22.523 250022 DEBUG nova.network.os_vif_util [None req-a47261cd-cc45-46eb-bbda-64047499d17b a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Converting VIF {"id": "92744b24-2cfe-43e7-9df7-107b17d3eabc", "address": "fa:16:3e:6a:2d:45", "network": {"id": "3379e2b3-ffb2-4391-969b-c9dc51bfbe25", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1112843240-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acb30fbc0e3749e390d7f867060b5a2a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap92744b24-2c", "ovs_interfaceid": "92744b24-2cfe-43e7-9df7-107b17d3eabc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:51:22 compute-0 nova_compute[250018]: 2026-01-20 14:51:22.525 250022 DEBUG nova.network.os_vif_util [None req-a47261cd-cc45-46eb-bbda-64047499d17b a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6a:2d:45,bridge_name='br-int',has_traffic_filtering=True,id=92744b24-2cfe-43e7-9df7-107b17d3eabc,network=Network(3379e2b3-ffb2-4391-969b-c9dc51bfbe25),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap92744b24-2c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:51:22 compute-0 nova_compute[250018]: 2026-01-20 14:51:22.527 250022 DEBUG nova.objects.instance [None req-a47261cd-cc45-46eb-bbda-64047499d17b a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lazy-loading 'pci_devices' on Instance uuid 6d73207e-59fd-4b15-875b-c735b5930933 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:51:22 compute-0 nova_compute[250018]: 2026-01-20 14:51:22.571 250022 DEBUG nova.virt.libvirt.driver [None req-a47261cd-cc45-46eb-bbda-64047499d17b a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: 6d73207e-59fd-4b15-875b-c735b5930933] End _get_guest_xml xml=<domain type="kvm">
Jan 20 14:51:22 compute-0 nova_compute[250018]:   <uuid>6d73207e-59fd-4b15-875b-c735b5930933</uuid>
Jan 20 14:51:22 compute-0 nova_compute[250018]:   <name>instance-00000071</name>
Jan 20 14:51:22 compute-0 nova_compute[250018]:   <memory>131072</memory>
Jan 20 14:51:22 compute-0 nova_compute[250018]:   <vcpu>1</vcpu>
Jan 20 14:51:22 compute-0 nova_compute[250018]:   <metadata>
Jan 20 14:51:22 compute-0 nova_compute[250018]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 14:51:22 compute-0 nova_compute[250018]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 14:51:22 compute-0 nova_compute[250018]:       <nova:name>tempest-ServerDiskConfigTestJSON-server-438454409</nova:name>
Jan 20 14:51:22 compute-0 nova_compute[250018]:       <nova:creationTime>2026-01-20 14:51:21</nova:creationTime>
Jan 20 14:51:22 compute-0 nova_compute[250018]:       <nova:flavor name="m1.nano">
Jan 20 14:51:22 compute-0 nova_compute[250018]:         <nova:memory>128</nova:memory>
Jan 20 14:51:22 compute-0 nova_compute[250018]:         <nova:disk>1</nova:disk>
Jan 20 14:51:22 compute-0 nova_compute[250018]:         <nova:swap>0</nova:swap>
Jan 20 14:51:22 compute-0 nova_compute[250018]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 14:51:22 compute-0 nova_compute[250018]:         <nova:vcpus>1</nova:vcpus>
Jan 20 14:51:22 compute-0 nova_compute[250018]:       </nova:flavor>
Jan 20 14:51:22 compute-0 nova_compute[250018]:       <nova:owner>
Jan 20 14:51:22 compute-0 nova_compute[250018]:         <nova:user uuid="a1bd93d04cc4468abe1d5c61f5144191">tempest-ServerDiskConfigTestJSON-1806346246-project-member</nova:user>
Jan 20 14:51:22 compute-0 nova_compute[250018]:         <nova:project uuid="acb30fbc0e3749e390d7f867060b5a2a">tempest-ServerDiskConfigTestJSON-1806346246</nova:project>
Jan 20 14:51:22 compute-0 nova_compute[250018]:       </nova:owner>
Jan 20 14:51:22 compute-0 nova_compute[250018]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 14:51:22 compute-0 nova_compute[250018]:       <nova:ports>
Jan 20 14:51:22 compute-0 nova_compute[250018]:         <nova:port uuid="92744b24-2cfe-43e7-9df7-107b17d3eabc">
Jan 20 14:51:22 compute-0 nova_compute[250018]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Jan 20 14:51:22 compute-0 nova_compute[250018]:         </nova:port>
Jan 20 14:51:22 compute-0 nova_compute[250018]:       </nova:ports>
Jan 20 14:51:22 compute-0 nova_compute[250018]:     </nova:instance>
Jan 20 14:51:22 compute-0 nova_compute[250018]:   </metadata>
Jan 20 14:51:22 compute-0 nova_compute[250018]:   <sysinfo type="smbios">
Jan 20 14:51:22 compute-0 nova_compute[250018]:     <system>
Jan 20 14:51:22 compute-0 nova_compute[250018]:       <entry name="manufacturer">RDO</entry>
Jan 20 14:51:22 compute-0 nova_compute[250018]:       <entry name="product">OpenStack Compute</entry>
Jan 20 14:51:22 compute-0 nova_compute[250018]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 14:51:22 compute-0 nova_compute[250018]:       <entry name="serial">6d73207e-59fd-4b15-875b-c735b5930933</entry>
Jan 20 14:51:22 compute-0 nova_compute[250018]:       <entry name="uuid">6d73207e-59fd-4b15-875b-c735b5930933</entry>
Jan 20 14:51:22 compute-0 nova_compute[250018]:       <entry name="family">Virtual Machine</entry>
Jan 20 14:51:22 compute-0 nova_compute[250018]:     </system>
Jan 20 14:51:22 compute-0 nova_compute[250018]:   </sysinfo>
Jan 20 14:51:22 compute-0 nova_compute[250018]:   <os>
Jan 20 14:51:22 compute-0 nova_compute[250018]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 14:51:22 compute-0 nova_compute[250018]:     <boot dev="hd"/>
Jan 20 14:51:22 compute-0 nova_compute[250018]:     <smbios mode="sysinfo"/>
Jan 20 14:51:22 compute-0 nova_compute[250018]:   </os>
Jan 20 14:51:22 compute-0 nova_compute[250018]:   <features>
Jan 20 14:51:22 compute-0 nova_compute[250018]:     <acpi/>
Jan 20 14:51:22 compute-0 nova_compute[250018]:     <apic/>
Jan 20 14:51:22 compute-0 nova_compute[250018]:     <vmcoreinfo/>
Jan 20 14:51:22 compute-0 nova_compute[250018]:   </features>
Jan 20 14:51:22 compute-0 nova_compute[250018]:   <clock offset="utc">
Jan 20 14:51:22 compute-0 nova_compute[250018]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 14:51:22 compute-0 nova_compute[250018]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 14:51:22 compute-0 nova_compute[250018]:     <timer name="hpet" present="no"/>
Jan 20 14:51:22 compute-0 nova_compute[250018]:   </clock>
Jan 20 14:51:22 compute-0 nova_compute[250018]:   <cpu mode="custom" match="exact">
Jan 20 14:51:22 compute-0 nova_compute[250018]:     <model>Nehalem</model>
Jan 20 14:51:22 compute-0 nova_compute[250018]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 14:51:22 compute-0 nova_compute[250018]:   </cpu>
Jan 20 14:51:22 compute-0 nova_compute[250018]:   <devices>
Jan 20 14:51:22 compute-0 nova_compute[250018]:     <disk type="network" device="disk">
Jan 20 14:51:22 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 14:51:22 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/6d73207e-59fd-4b15-875b-c735b5930933_disk">
Jan 20 14:51:22 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:51:22 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:51:22 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:51:22 compute-0 nova_compute[250018]:       </source>
Jan 20 14:51:22 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 14:51:22 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:51:22 compute-0 nova_compute[250018]:       </auth>
Jan 20 14:51:22 compute-0 nova_compute[250018]:       <target dev="vda" bus="virtio"/>
Jan 20 14:51:22 compute-0 nova_compute[250018]:     </disk>
Jan 20 14:51:22 compute-0 nova_compute[250018]:     <disk type="network" device="cdrom">
Jan 20 14:51:22 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 14:51:22 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/6d73207e-59fd-4b15-875b-c735b5930933_disk.config">
Jan 20 14:51:22 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:51:22 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:51:22 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:51:22 compute-0 nova_compute[250018]:       </source>
Jan 20 14:51:22 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 14:51:22 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:51:22 compute-0 nova_compute[250018]:       </auth>
Jan 20 14:51:22 compute-0 nova_compute[250018]:       <target dev="sda" bus="sata"/>
Jan 20 14:51:22 compute-0 nova_compute[250018]:     </disk>
Jan 20 14:51:22 compute-0 nova_compute[250018]:     <interface type="ethernet">
Jan 20 14:51:22 compute-0 nova_compute[250018]:       <mac address="fa:16:3e:6a:2d:45"/>
Jan 20 14:51:22 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 14:51:22 compute-0 nova_compute[250018]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 14:51:22 compute-0 nova_compute[250018]:       <mtu size="1442"/>
Jan 20 14:51:22 compute-0 nova_compute[250018]:       <target dev="tap92744b24-2c"/>
Jan 20 14:51:22 compute-0 nova_compute[250018]:     </interface>
Jan 20 14:51:22 compute-0 nova_compute[250018]:     <serial type="pty">
Jan 20 14:51:22 compute-0 nova_compute[250018]:       <log file="/var/lib/nova/instances/6d73207e-59fd-4b15-875b-c735b5930933/console.log" append="off"/>
Jan 20 14:51:22 compute-0 nova_compute[250018]:     </serial>
Jan 20 14:51:22 compute-0 nova_compute[250018]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 14:51:22 compute-0 nova_compute[250018]:     <video>
Jan 20 14:51:22 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 14:51:22 compute-0 nova_compute[250018]:     </video>
Jan 20 14:51:22 compute-0 nova_compute[250018]:     <input type="tablet" bus="usb"/>
Jan 20 14:51:22 compute-0 nova_compute[250018]:     <rng model="virtio">
Jan 20 14:51:22 compute-0 nova_compute[250018]:       <backend model="random">/dev/urandom</backend>
Jan 20 14:51:22 compute-0 nova_compute[250018]:     </rng>
Jan 20 14:51:22 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root"/>
Jan 20 14:51:22 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:51:22 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:51:22 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:51:22 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:51:22 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:51:22 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:51:22 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:51:22 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:51:22 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:51:22 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:51:22 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:51:22 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:51:22 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:51:22 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:51:22 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:51:22 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:51:22 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:51:22 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:51:22 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:51:22 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:51:22 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:51:22 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:51:22 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:51:22 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:51:22 compute-0 nova_compute[250018]:     <controller type="usb" index="0"/>
Jan 20 14:51:22 compute-0 nova_compute[250018]:     <memballoon model="virtio">
Jan 20 14:51:22 compute-0 nova_compute[250018]:       <stats period="10"/>
Jan 20 14:51:22 compute-0 nova_compute[250018]:     </memballoon>
Jan 20 14:51:22 compute-0 nova_compute[250018]:   </devices>
Jan 20 14:51:22 compute-0 nova_compute[250018]: </domain>
Jan 20 14:51:22 compute-0 nova_compute[250018]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 14:51:22 compute-0 nova_compute[250018]: 2026-01-20 14:51:22.573 250022 DEBUG nova.compute.manager [None req-a47261cd-cc45-46eb-bbda-64047499d17b a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: 6d73207e-59fd-4b15-875b-c735b5930933] Preparing to wait for external event network-vif-plugged-92744b24-2cfe-43e7-9df7-107b17d3eabc prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 20 14:51:22 compute-0 nova_compute[250018]: 2026-01-20 14:51:22.574 250022 DEBUG oslo_concurrency.lockutils [None req-a47261cd-cc45-46eb-bbda-64047499d17b a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Acquiring lock "6d73207e-59fd-4b15-875b-c735b5930933-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:51:22 compute-0 nova_compute[250018]: 2026-01-20 14:51:22.574 250022 DEBUG oslo_concurrency.lockutils [None req-a47261cd-cc45-46eb-bbda-64047499d17b a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lock "6d73207e-59fd-4b15-875b-c735b5930933-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:51:22 compute-0 nova_compute[250018]: 2026-01-20 14:51:22.575 250022 DEBUG oslo_concurrency.lockutils [None req-a47261cd-cc45-46eb-bbda-64047499d17b a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lock "6d73207e-59fd-4b15-875b-c735b5930933-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:51:22 compute-0 nova_compute[250018]: 2026-01-20 14:51:22.575 250022 DEBUG nova.virt.libvirt.vif [None req-a47261cd-cc45-46eb-bbda-64047499d17b a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:51:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-438454409',display_name='tempest-ServerDiskConfigTestJSON-server-438454409',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-438454409',id=113,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='acb30fbc0e3749e390d7f867060b5a2a',ramdisk_id='',reservation_id='r-amva5euo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-1806346246',owner_user_name='tempest-ServerDiskConfigTestJSON-1806346246-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:51:18Z,user_data=None,user_id='a1bd93d04cc4468abe1d5c61f5144191',uuid=6d73207e-59fd-4b15-875b-c735b5930933,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "92744b24-2cfe-43e7-9df7-107b17d3eabc", "address": "fa:16:3e:6a:2d:45", "network": {"id": "3379e2b3-ffb2-4391-969b-c9dc51bfbe25", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1112843240-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acb30fbc0e3749e390d7f867060b5a2a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap92744b24-2c", "ovs_interfaceid": "92744b24-2cfe-43e7-9df7-107b17d3eabc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 14:51:22 compute-0 nova_compute[250018]: 2026-01-20 14:51:22.576 250022 DEBUG nova.network.os_vif_util [None req-a47261cd-cc45-46eb-bbda-64047499d17b a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Converting VIF {"id": "92744b24-2cfe-43e7-9df7-107b17d3eabc", "address": "fa:16:3e:6a:2d:45", "network": {"id": "3379e2b3-ffb2-4391-969b-c9dc51bfbe25", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1112843240-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acb30fbc0e3749e390d7f867060b5a2a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap92744b24-2c", "ovs_interfaceid": "92744b24-2cfe-43e7-9df7-107b17d3eabc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:51:22 compute-0 nova_compute[250018]: 2026-01-20 14:51:22.577 250022 DEBUG nova.network.os_vif_util [None req-a47261cd-cc45-46eb-bbda-64047499d17b a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6a:2d:45,bridge_name='br-int',has_traffic_filtering=True,id=92744b24-2cfe-43e7-9df7-107b17d3eabc,network=Network(3379e2b3-ffb2-4391-969b-c9dc51bfbe25),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap92744b24-2c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:51:22 compute-0 nova_compute[250018]: 2026-01-20 14:51:22.577 250022 DEBUG os_vif [None req-a47261cd-cc45-46eb-bbda-64047499d17b a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6a:2d:45,bridge_name='br-int',has_traffic_filtering=True,id=92744b24-2cfe-43e7-9df7-107b17d3eabc,network=Network(3379e2b3-ffb2-4391-969b-c9dc51bfbe25),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap92744b24-2c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 14:51:22 compute-0 nova_compute[250018]: 2026-01-20 14:51:22.582 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:51:22 compute-0 nova_compute[250018]: 2026-01-20 14:51:22.583 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:51:22 compute-0 nova_compute[250018]: 2026-01-20 14:51:22.583 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:51:22 compute-0 nova_compute[250018]: 2026-01-20 14:51:22.588 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:51:22 compute-0 nova_compute[250018]: 2026-01-20 14:51:22.588 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap92744b24-2c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:51:22 compute-0 nova_compute[250018]: 2026-01-20 14:51:22.589 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap92744b24-2c, col_values=(('external_ids', {'iface-id': '92744b24-2cfe-43e7-9df7-107b17d3eabc', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6a:2d:45', 'vm-uuid': '6d73207e-59fd-4b15-875b-c735b5930933'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:51:22 compute-0 NetworkManager[48960]: <info>  [1768920682.6111] manager: (tap92744b24-2c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/203)
Jan 20 14:51:22 compute-0 nova_compute[250018]: 2026-01-20 14:51:22.611 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:51:22 compute-0 nova_compute[250018]: 2026-01-20 14:51:22.614 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:51:22 compute-0 nova_compute[250018]: 2026-01-20 14:51:22.621 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:51:22 compute-0 nova_compute[250018]: 2026-01-20 14:51:22.623 250022 INFO os_vif [None req-a47261cd-cc45-46eb-bbda-64047499d17b a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6a:2d:45,bridge_name='br-int',has_traffic_filtering=True,id=92744b24-2cfe-43e7-9df7-107b17d3eabc,network=Network(3379e2b3-ffb2-4391-969b-c9dc51bfbe25),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap92744b24-2c')
Jan 20 14:51:22 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:51:22 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:51:22 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:51:22.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:51:22 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:51:22 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:51:22 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:51:22.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:51:22 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/3118254261' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:51:22 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/3118254261' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:51:22 compute-0 ceph-mon[74360]: pgmap v1915: 321 pgs: 321 active+clean; 388 MiB data, 1010 MiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 3.0 MiB/s wr, 234 op/s
Jan 20 14:51:22 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/1623500243' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:51:22 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/1223132071' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:51:22 compute-0 nova_compute[250018]: 2026-01-20 14:51:22.940 250022 DEBUG nova.virt.libvirt.driver [None req-a47261cd-cc45-46eb-bbda-64047499d17b a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:51:22 compute-0 nova_compute[250018]: 2026-01-20 14:51:22.941 250022 DEBUG nova.virt.libvirt.driver [None req-a47261cd-cc45-46eb-bbda-64047499d17b a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:51:22 compute-0 nova_compute[250018]: 2026-01-20 14:51:22.942 250022 DEBUG nova.virt.libvirt.driver [None req-a47261cd-cc45-46eb-bbda-64047499d17b a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] No VIF found with MAC fa:16:3e:6a:2d:45, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 14:51:22 compute-0 nova_compute[250018]: 2026-01-20 14:51:22.943 250022 INFO nova.virt.libvirt.driver [None req-a47261cd-cc45-46eb-bbda-64047499d17b a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: 6d73207e-59fd-4b15-875b-c735b5930933] Using config drive
Jan 20 14:51:22 compute-0 nova_compute[250018]: 2026-01-20 14:51:22.970 250022 DEBUG nova.storage.rbd_utils [None req-a47261cd-cc45-46eb-bbda-64047499d17b a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] rbd image 6d73207e-59fd-4b15-875b-c735b5930933_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:51:23 compute-0 nova_compute[250018]: 2026-01-20 14:51:23.112 250022 DEBUG nova.network.neutron [req-a861b4f9-e800-4324-aa3b-9d0111f32bb4 req-8bcd5ac7-dd3d-4676-bb88-37f076871dd2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 6d73207e-59fd-4b15-875b-c735b5930933] Updated VIF entry in instance network info cache for port 92744b24-2cfe-43e7-9df7-107b17d3eabc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 14:51:23 compute-0 nova_compute[250018]: 2026-01-20 14:51:23.113 250022 DEBUG nova.network.neutron [req-a861b4f9-e800-4324-aa3b-9d0111f32bb4 req-8bcd5ac7-dd3d-4676-bb88-37f076871dd2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 6d73207e-59fd-4b15-875b-c735b5930933] Updating instance_info_cache with network_info: [{"id": "92744b24-2cfe-43e7-9df7-107b17d3eabc", "address": "fa:16:3e:6a:2d:45", "network": {"id": "3379e2b3-ffb2-4391-969b-c9dc51bfbe25", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1112843240-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acb30fbc0e3749e390d7f867060b5a2a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap92744b24-2c", "ovs_interfaceid": "92744b24-2cfe-43e7-9df7-107b17d3eabc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:51:23 compute-0 nova_compute[250018]: 2026-01-20 14:51:23.130 250022 DEBUG oslo_concurrency.lockutils [req-a861b4f9-e800-4324-aa3b-9d0111f32bb4 req-8bcd5ac7-dd3d-4676-bb88-37f076871dd2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-6d73207e-59fd-4b15-875b-c735b5930933" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:51:23 compute-0 nova_compute[250018]: 2026-01-20 14:51:23.258 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:51:23 compute-0 nova_compute[250018]: 2026-01-20 14:51:23.425 250022 INFO nova.virt.libvirt.driver [None req-a47261cd-cc45-46eb-bbda-64047499d17b a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: 6d73207e-59fd-4b15-875b-c735b5930933] Creating config drive at /var/lib/nova/instances/6d73207e-59fd-4b15-875b-c735b5930933/disk.config
Jan 20 14:51:23 compute-0 nova_compute[250018]: 2026-01-20 14:51:23.430 250022 DEBUG oslo_concurrency.processutils [None req-a47261cd-cc45-46eb-bbda-64047499d17b a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6d73207e-59fd-4b15-875b-c735b5930933/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpc7bp6cd9 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:51:23 compute-0 nova_compute[250018]: 2026-01-20 14:51:23.584 250022 DEBUG oslo_concurrency.processutils [None req-a47261cd-cc45-46eb-bbda-64047499d17b a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6d73207e-59fd-4b15-875b-c735b5930933/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpc7bp6cd9" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:51:23 compute-0 nova_compute[250018]: 2026-01-20 14:51:23.613 250022 DEBUG nova.storage.rbd_utils [None req-a47261cd-cc45-46eb-bbda-64047499d17b a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] rbd image 6d73207e-59fd-4b15-875b-c735b5930933_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:51:23 compute-0 nova_compute[250018]: 2026-01-20 14:51:23.617 250022 DEBUG oslo_concurrency.processutils [None req-a47261cd-cc45-46eb-bbda-64047499d17b a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6d73207e-59fd-4b15-875b-c735b5930933/disk.config 6d73207e-59fd-4b15-875b-c735b5930933_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:51:23 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1916: 321 pgs: 321 active+clean; 393 MiB data, 1011 MiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 4.2 MiB/s wr, 232 op/s
Jan 20 14:51:24 compute-0 ceph-mon[74360]: pgmap v1916: 321 pgs: 321 active+clean; 393 MiB data, 1011 MiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 4.2 MiB/s wr, 232 op/s
Jan 20 14:51:24 compute-0 nova_compute[250018]: 2026-01-20 14:51:24.579 250022 DEBUG oslo_concurrency.processutils [None req-a47261cd-cc45-46eb-bbda-64047499d17b a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6d73207e-59fd-4b15-875b-c735b5930933/disk.config 6d73207e-59fd-4b15-875b-c735b5930933_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.963s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:51:24 compute-0 nova_compute[250018]: 2026-01-20 14:51:24.581 250022 INFO nova.virt.libvirt.driver [None req-a47261cd-cc45-46eb-bbda-64047499d17b a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: 6d73207e-59fd-4b15-875b-c735b5930933] Deleting local config drive /var/lib/nova/instances/6d73207e-59fd-4b15-875b-c735b5930933/disk.config because it was imported into RBD.
Jan 20 14:51:24 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:51:24 compute-0 kernel: tap92744b24-2c: entered promiscuous mode
Jan 20 14:51:24 compute-0 NetworkManager[48960]: <info>  [1768920684.6458] manager: (tap92744b24-2c): new Tun device (/org/freedesktop/NetworkManager/Devices/204)
Jan 20 14:51:24 compute-0 ovn_controller[148666]: 2026-01-20T14:51:24Z|00396|binding|INFO|Claiming lport 92744b24-2cfe-43e7-9df7-107b17d3eabc for this chassis.
Jan 20 14:51:24 compute-0 ovn_controller[148666]: 2026-01-20T14:51:24Z|00397|binding|INFO|92744b24-2cfe-43e7-9df7-107b17d3eabc: Claiming fa:16:3e:6a:2d:45 10.100.0.9
Jan 20 14:51:24 compute-0 nova_compute[250018]: 2026-01-20 14:51:24.647 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:51:24 compute-0 nova_compute[250018]: 2026-01-20 14:51:24.652 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:51:24 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:51:24.658 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6a:2d:45 10.100.0.9'], port_security=['fa:16:3e:6a:2d:45 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '6d73207e-59fd-4b15-875b-c735b5930933', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3379e2b3-ffb2-4391-969b-c9dc51bfbe25', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'acb30fbc0e3749e390d7f867060b5a2a', 'neutron:revision_number': '2', 'neutron:security_group_ids': '19fab802-7db8-4c89-8f8e-8dcfc14d4627', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a0e287ba-f88b-46f5-bb7f-3cc2a74be88e, chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=92744b24-2cfe-43e7-9df7-107b17d3eabc) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:51:24 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:51:24.660 160071 INFO neutron.agent.ovn.metadata.agent [-] Port 92744b24-2cfe-43e7-9df7-107b17d3eabc in datapath 3379e2b3-ffb2-4391-969b-c9dc51bfbe25 bound to our chassis
Jan 20 14:51:24 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:51:24.662 160071 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3379e2b3-ffb2-4391-969b-c9dc51bfbe25
Jan 20 14:51:24 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:51:24.677 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[52444ff2-9df8-498a-b817-bfb4291557db]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:51:24 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:51:24.678 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3379e2b3-f1 in ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 14:51:24 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:51:24.681 257604 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3379e2b3-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 14:51:24 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:51:24.681 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[8db0ebeb-e338-4dcc-b3f3-03c0bff7ae15]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:51:24 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:51:24.682 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[324251df-7ccd-4d20-9bea-b9537db8f4ec]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:51:24 compute-0 systemd-machined[216401]: New machine qemu-52-instance-00000071.
Jan 20 14:51:24 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:51:24.697 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[ccbe3da0-33cd-4c1c-b558-7b96f265c1e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:51:24 compute-0 systemd[1]: Started Virtual Machine qemu-52-instance-00000071.
Jan 20 14:51:24 compute-0 nova_compute[250018]: 2026-01-20 14:51:24.721 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:51:24 compute-0 ovn_controller[148666]: 2026-01-20T14:51:24Z|00398|binding|INFO|Setting lport 92744b24-2cfe-43e7-9df7-107b17d3eabc ovn-installed in OVS
Jan 20 14:51:24 compute-0 ovn_controller[148666]: 2026-01-20T14:51:24Z|00399|binding|INFO|Setting lport 92744b24-2cfe-43e7-9df7-107b17d3eabc up in Southbound
Jan 20 14:51:24 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:51:24.724 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[eb4b38e0-6dbb-4ba0-8028-ebfeca3fb103]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:51:24 compute-0 nova_compute[250018]: 2026-01-20 14:51:24.724 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:51:24 compute-0 systemd-udevd[316318]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:51:24 compute-0 NetworkManager[48960]: <info>  [1768920684.7419] device (tap92744b24-2c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 14:51:24 compute-0 NetworkManager[48960]: <info>  [1768920684.7424] device (tap92744b24-2c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 14:51:24 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:51:24 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:51:24 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:51:24.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:51:24 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:51:24.759 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[3439435c-bc44-45a8-81c1-c2ee04815787]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:51:24 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:51:24.763 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[d22a9670-a0b2-40bf-ba94-4b894c3f10a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:51:24 compute-0 systemd-udevd[316322]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:51:24 compute-0 NetworkManager[48960]: <info>  [1768920684.7651] manager: (tap3379e2b3-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/205)
Jan 20 14:51:24 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:51:24.792 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[a7793d78-4e0c-44c7-b5e5-b7396ea2a7f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:51:24 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:51:24.796 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[2223a84e-62f0-4b21-8743-1a91839f5332]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:51:24 compute-0 NetworkManager[48960]: <info>  [1768920684.8214] device (tap3379e2b3-f0): carrier: link connected
Jan 20 14:51:24 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:51:24.825 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[869c7a94-ba19-494b-9734-7c3bd036f63e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:51:24 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:51:24 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:51:24 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:51:24.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:51:24 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:51:24.841 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[9143bf63-5e44-4429-b1e8-5ee3bdaf1896]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3379e2b3-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f1:86:fe'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 133], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 662554, 'reachable_time': 19755, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 316348, 'error': None, 'target': 'ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:51:24 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:51:24.852 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[f9b12784-da0a-4bb3-9de1-dc2c3b363c5c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef1:86fe'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 662554, 'tstamp': 662554}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 316349, 'error': None, 'target': 'ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:51:24 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:51:24.866 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[3922a246-8748-4763-bf0a-70081fb34bc4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3379e2b3-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f1:86:fe'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 133], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 662554, 'reachable_time': 19755, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 316350, 'error': None, 'target': 'ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:51:24 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:51:24.897 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[b89fc55d-1c6e-4341-8f95-c9f446740c92]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:51:24 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:51:24.952 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[6d480289-90d4-4a28-9e99-b95b2ab98c5e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:51:24 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:51:24.954 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3379e2b3-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:51:24 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:51:24.954 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:51:24 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:51:24.955 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3379e2b3-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:51:25 compute-0 nova_compute[250018]: 2026-01-20 14:51:25.008 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:51:25 compute-0 NetworkManager[48960]: <info>  [1768920685.0089] manager: (tap3379e2b3-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/206)
Jan 20 14:51:25 compute-0 kernel: tap3379e2b3-f0: entered promiscuous mode
Jan 20 14:51:25 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:51:25.011 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3379e2b3-f0, col_values=(('external_ids', {'iface-id': 'b32ddf23-a8dd-4e6d-a410-ccb24b214d35'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:51:25 compute-0 nova_compute[250018]: 2026-01-20 14:51:25.012 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:51:25 compute-0 ovn_controller[148666]: 2026-01-20T14:51:25Z|00400|binding|INFO|Releasing lport b32ddf23-a8dd-4e6d-a410-ccb24b214d35 from this chassis (sb_readonly=0)
Jan 20 14:51:25 compute-0 nova_compute[250018]: 2026-01-20 14:51:25.027 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:51:25 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:51:25.029 160071 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3379e2b3-ffb2-4391-969b-c9dc51bfbe25.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3379e2b3-ffb2-4391-969b-c9dc51bfbe25.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 14:51:25 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:51:25.030 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[a2e7f597-fde5-4a21-b0c5-6fbdb9260203]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:51:25 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:51:25.031 160071 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 14:51:25 compute-0 ovn_metadata_agent[160049]: global
Jan 20 14:51:25 compute-0 ovn_metadata_agent[160049]:     log         /dev/log local0 debug
Jan 20 14:51:25 compute-0 ovn_metadata_agent[160049]:     log-tag     haproxy-metadata-proxy-3379e2b3-ffb2-4391-969b-c9dc51bfbe25
Jan 20 14:51:25 compute-0 ovn_metadata_agent[160049]:     user        root
Jan 20 14:51:25 compute-0 ovn_metadata_agent[160049]:     group       root
Jan 20 14:51:25 compute-0 ovn_metadata_agent[160049]:     maxconn     1024
Jan 20 14:51:25 compute-0 ovn_metadata_agent[160049]:     pidfile     /var/lib/neutron/external/pids/3379e2b3-ffb2-4391-969b-c9dc51bfbe25.pid.haproxy
Jan 20 14:51:25 compute-0 ovn_metadata_agent[160049]:     daemon
Jan 20 14:51:25 compute-0 ovn_metadata_agent[160049]: 
Jan 20 14:51:25 compute-0 ovn_metadata_agent[160049]: defaults
Jan 20 14:51:25 compute-0 ovn_metadata_agent[160049]:     log global
Jan 20 14:51:25 compute-0 ovn_metadata_agent[160049]:     mode http
Jan 20 14:51:25 compute-0 ovn_metadata_agent[160049]:     option httplog
Jan 20 14:51:25 compute-0 ovn_metadata_agent[160049]:     option dontlognull
Jan 20 14:51:25 compute-0 ovn_metadata_agent[160049]:     option http-server-close
Jan 20 14:51:25 compute-0 ovn_metadata_agent[160049]:     option forwardfor
Jan 20 14:51:25 compute-0 ovn_metadata_agent[160049]:     retries                 3
Jan 20 14:51:25 compute-0 ovn_metadata_agent[160049]:     timeout http-request    30s
Jan 20 14:51:25 compute-0 ovn_metadata_agent[160049]:     timeout connect         30s
Jan 20 14:51:25 compute-0 ovn_metadata_agent[160049]:     timeout client          32s
Jan 20 14:51:25 compute-0 ovn_metadata_agent[160049]:     timeout server          32s
Jan 20 14:51:25 compute-0 ovn_metadata_agent[160049]:     timeout http-keep-alive 30s
Jan 20 14:51:25 compute-0 ovn_metadata_agent[160049]: 
Jan 20 14:51:25 compute-0 ovn_metadata_agent[160049]: 
Jan 20 14:51:25 compute-0 ovn_metadata_agent[160049]: listen listener
Jan 20 14:51:25 compute-0 ovn_metadata_agent[160049]:     bind 169.254.169.254:80
Jan 20 14:51:25 compute-0 ovn_metadata_agent[160049]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 14:51:25 compute-0 ovn_metadata_agent[160049]:     http-request add-header X-OVN-Network-ID 3379e2b3-ffb2-4391-969b-c9dc51bfbe25
Jan 20 14:51:25 compute-0 ovn_metadata_agent[160049]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 14:51:25 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:51:25.032 160071 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25', 'env', 'PROCESS_TAG=haproxy-3379e2b3-ffb2-4391-969b-c9dc51bfbe25', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3379e2b3-ffb2-4391-969b-c9dc51bfbe25.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 14:51:25 compute-0 nova_compute[250018]: 2026-01-20 14:51:25.120 250022 DEBUG nova.compute.manager [req-18143eb1-977a-4fbc-80fa-24898b0eafbb req-45c633d0-f196-4163-b718-ce88f90dac9b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 6d73207e-59fd-4b15-875b-c735b5930933] Received event network-vif-plugged-92744b24-2cfe-43e7-9df7-107b17d3eabc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:51:25 compute-0 nova_compute[250018]: 2026-01-20 14:51:25.120 250022 DEBUG oslo_concurrency.lockutils [req-18143eb1-977a-4fbc-80fa-24898b0eafbb req-45c633d0-f196-4163-b718-ce88f90dac9b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "6d73207e-59fd-4b15-875b-c735b5930933-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:51:25 compute-0 nova_compute[250018]: 2026-01-20 14:51:25.121 250022 DEBUG oslo_concurrency.lockutils [req-18143eb1-977a-4fbc-80fa-24898b0eafbb req-45c633d0-f196-4163-b718-ce88f90dac9b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "6d73207e-59fd-4b15-875b-c735b5930933-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:51:25 compute-0 nova_compute[250018]: 2026-01-20 14:51:25.121 250022 DEBUG oslo_concurrency.lockutils [req-18143eb1-977a-4fbc-80fa-24898b0eafbb req-45c633d0-f196-4163-b718-ce88f90dac9b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "6d73207e-59fd-4b15-875b-c735b5930933-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:51:25 compute-0 nova_compute[250018]: 2026-01-20 14:51:25.121 250022 DEBUG nova.compute.manager [req-18143eb1-977a-4fbc-80fa-24898b0eafbb req-45c633d0-f196-4163-b718-ce88f90dac9b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 6d73207e-59fd-4b15-875b-c735b5930933] Processing event network-vif-plugged-92744b24-2cfe-43e7-9df7-107b17d3eabc _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 20 14:51:25 compute-0 nova_compute[250018]: 2026-01-20 14:51:25.257 250022 DEBUG nova.compute.manager [None req-a47261cd-cc45-46eb-bbda-64047499d17b a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: 6d73207e-59fd-4b15-875b-c735b5930933] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 14:51:25 compute-0 nova_compute[250018]: 2026-01-20 14:51:25.258 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768920685.2568688, 6d73207e-59fd-4b15-875b-c735b5930933 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:51:25 compute-0 nova_compute[250018]: 2026-01-20 14:51:25.258 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 6d73207e-59fd-4b15-875b-c735b5930933] VM Started (Lifecycle Event)
Jan 20 14:51:25 compute-0 nova_compute[250018]: 2026-01-20 14:51:25.260 250022 DEBUG nova.virt.libvirt.driver [None req-a47261cd-cc45-46eb-bbda-64047499d17b a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: 6d73207e-59fd-4b15-875b-c735b5930933] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 14:51:25 compute-0 nova_compute[250018]: 2026-01-20 14:51:25.263 250022 INFO nova.virt.libvirt.driver [-] [instance: 6d73207e-59fd-4b15-875b-c735b5930933] Instance spawned successfully.
Jan 20 14:51:25 compute-0 nova_compute[250018]: 2026-01-20 14:51:25.264 250022 DEBUG nova.virt.libvirt.driver [None req-a47261cd-cc45-46eb-bbda-64047499d17b a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: 6d73207e-59fd-4b15-875b-c735b5930933] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 14:51:25 compute-0 nova_compute[250018]: 2026-01-20 14:51:25.467 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 6d73207e-59fd-4b15-875b-c735b5930933] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:51:25 compute-0 nova_compute[250018]: 2026-01-20 14:51:25.476 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 6d73207e-59fd-4b15-875b-c735b5930933] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:51:25 compute-0 podman[316421]: 2026-01-20 14:51:25.382134403 +0000 UTC m=+0.020317118 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 14:51:25 compute-0 nova_compute[250018]: 2026-01-20 14:51:25.481 250022 DEBUG nova.virt.libvirt.driver [None req-a47261cd-cc45-46eb-bbda-64047499d17b a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: 6d73207e-59fd-4b15-875b-c735b5930933] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:51:25 compute-0 nova_compute[250018]: 2026-01-20 14:51:25.481 250022 DEBUG nova.virt.libvirt.driver [None req-a47261cd-cc45-46eb-bbda-64047499d17b a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: 6d73207e-59fd-4b15-875b-c735b5930933] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:51:25 compute-0 nova_compute[250018]: 2026-01-20 14:51:25.482 250022 DEBUG nova.virt.libvirt.driver [None req-a47261cd-cc45-46eb-bbda-64047499d17b a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: 6d73207e-59fd-4b15-875b-c735b5930933] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:51:25 compute-0 nova_compute[250018]: 2026-01-20 14:51:25.483 250022 DEBUG nova.virt.libvirt.driver [None req-a47261cd-cc45-46eb-bbda-64047499d17b a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: 6d73207e-59fd-4b15-875b-c735b5930933] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:51:25 compute-0 nova_compute[250018]: 2026-01-20 14:51:25.483 250022 DEBUG nova.virt.libvirt.driver [None req-a47261cd-cc45-46eb-bbda-64047499d17b a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: 6d73207e-59fd-4b15-875b-c735b5930933] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:51:25 compute-0 nova_compute[250018]: 2026-01-20 14:51:25.484 250022 DEBUG nova.virt.libvirt.driver [None req-a47261cd-cc45-46eb-bbda-64047499d17b a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: 6d73207e-59fd-4b15-875b-c735b5930933] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:51:25 compute-0 podman[316421]: 2026-01-20 14:51:25.526501328 +0000 UTC m=+0.164684023 container create 4a138fea6679046b7c35eac87508422e00a573e647a1be5c840a3862882c4444 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 20 14:51:25 compute-0 nova_compute[250018]: 2026-01-20 14:51:25.560 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 6d73207e-59fd-4b15-875b-c735b5930933] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:51:25 compute-0 nova_compute[250018]: 2026-01-20 14:51:25.561 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768920685.2578046, 6d73207e-59fd-4b15-875b-c735b5930933 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:51:25 compute-0 nova_compute[250018]: 2026-01-20 14:51:25.562 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 6d73207e-59fd-4b15-875b-c735b5930933] VM Paused (Lifecycle Event)
Jan 20 14:51:25 compute-0 systemd[1]: Started libpod-conmon-4a138fea6679046b7c35eac87508422e00a573e647a1be5c840a3862882c4444.scope.
Jan 20 14:51:25 compute-0 nova_compute[250018]: 2026-01-20 14:51:25.614 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 6d73207e-59fd-4b15-875b-c735b5930933] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:51:25 compute-0 nova_compute[250018]: 2026-01-20 14:51:25.618 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768920685.2606492, 6d73207e-59fd-4b15-875b-c735b5930933 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:51:25 compute-0 nova_compute[250018]: 2026-01-20 14:51:25.619 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 6d73207e-59fd-4b15-875b-c735b5930933] VM Resumed (Lifecycle Event)
Jan 20 14:51:25 compute-0 nova_compute[250018]: 2026-01-20 14:51:25.639 250022 INFO nova.compute.manager [None req-a47261cd-cc45-46eb-bbda-64047499d17b a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: 6d73207e-59fd-4b15-875b-c735b5930933] Took 7.17 seconds to spawn the instance on the hypervisor.
Jan 20 14:51:25 compute-0 nova_compute[250018]: 2026-01-20 14:51:25.640 250022 DEBUG nova.compute.manager [None req-a47261cd-cc45-46eb-bbda-64047499d17b a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: 6d73207e-59fd-4b15-875b-c735b5930933] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:51:25 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:51:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6747954c3f437eaf93c52ea65a0072078f6e2ee4391cb8f5267dc712146a2e05/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 14:51:25 compute-0 nova_compute[250018]: 2026-01-20 14:51:25.673 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 6d73207e-59fd-4b15-875b-c735b5930933] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:51:25 compute-0 nova_compute[250018]: 2026-01-20 14:51:25.678 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 6d73207e-59fd-4b15-875b-c735b5930933] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:51:25 compute-0 podman[316421]: 2026-01-20 14:51:25.681189301 +0000 UTC m=+0.319372026 container init 4a138fea6679046b7c35eac87508422e00a573e647a1be5c840a3862882c4444 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Jan 20 14:51:25 compute-0 podman[316421]: 2026-01-20 14:51:25.688665112 +0000 UTC m=+0.326847807 container start 4a138fea6679046b7c35eac87508422e00a573e647a1be5c840a3862882c4444 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 14:51:25 compute-0 nova_compute[250018]: 2026-01-20 14:51:25.720 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 6d73207e-59fd-4b15-875b-c735b5930933] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:51:25 compute-0 nova_compute[250018]: 2026-01-20 14:51:25.723 250022 INFO nova.compute.manager [None req-a47261cd-cc45-46eb-bbda-64047499d17b a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: 6d73207e-59fd-4b15-875b-c735b5930933] Took 8.58 seconds to build instance.
Jan 20 14:51:25 compute-0 neutron-haproxy-ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25[316436]: [NOTICE]   (316440) : New worker (316442) forked
Jan 20 14:51:25 compute-0 neutron-haproxy-ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25[316436]: [NOTICE]   (316440) : Loading success.
Jan 20 14:51:25 compute-0 nova_compute[250018]: 2026-01-20 14:51:25.751 250022 DEBUG oslo_concurrency.lockutils [None req-a47261cd-cc45-46eb-bbda-64047499d17b a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lock "6d73207e-59fd-4b15-875b-c735b5930933" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.802s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:51:25 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1917: 321 pgs: 321 active+clean; 339 MiB data, 984 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 4.3 MiB/s wr, 242 op/s
Jan 20 14:51:26 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:51:26 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:51:26 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:51:26.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:51:26 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:51:26 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:51:26 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:51:26.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:51:26 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e257 do_prune osdmap full prune enabled
Jan 20 14:51:26 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e258 e258: 3 total, 3 up, 3 in
Jan 20 14:51:26 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e258: 3 total, 3 up, 3 in
Jan 20 14:51:27 compute-0 ceph-mon[74360]: pgmap v1917: 321 pgs: 321 active+clean; 339 MiB data, 984 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 4.3 MiB/s wr, 242 op/s
Jan 20 14:51:27 compute-0 nova_compute[250018]: 2026-01-20 14:51:27.457 250022 DEBUG nova.compute.manager [req-106f8088-377b-40e7-8da4-6be8ffe263cd req-0ff1cf59-89a8-4564-bb2d-44e7fe0b7d75 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 6d73207e-59fd-4b15-875b-c735b5930933] Received event network-vif-plugged-92744b24-2cfe-43e7-9df7-107b17d3eabc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:51:27 compute-0 nova_compute[250018]: 2026-01-20 14:51:27.458 250022 DEBUG oslo_concurrency.lockutils [req-106f8088-377b-40e7-8da4-6be8ffe263cd req-0ff1cf59-89a8-4564-bb2d-44e7fe0b7d75 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "6d73207e-59fd-4b15-875b-c735b5930933-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:51:27 compute-0 nova_compute[250018]: 2026-01-20 14:51:27.458 250022 DEBUG oslo_concurrency.lockutils [req-106f8088-377b-40e7-8da4-6be8ffe263cd req-0ff1cf59-89a8-4564-bb2d-44e7fe0b7d75 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "6d73207e-59fd-4b15-875b-c735b5930933-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:51:27 compute-0 nova_compute[250018]: 2026-01-20 14:51:27.459 250022 DEBUG oslo_concurrency.lockutils [req-106f8088-377b-40e7-8da4-6be8ffe263cd req-0ff1cf59-89a8-4564-bb2d-44e7fe0b7d75 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "6d73207e-59fd-4b15-875b-c735b5930933-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:51:27 compute-0 nova_compute[250018]: 2026-01-20 14:51:27.459 250022 DEBUG nova.compute.manager [req-106f8088-377b-40e7-8da4-6be8ffe263cd req-0ff1cf59-89a8-4564-bb2d-44e7fe0b7d75 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 6d73207e-59fd-4b15-875b-c735b5930933] No waiting events found dispatching network-vif-plugged-92744b24-2cfe-43e7-9df7-107b17d3eabc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:51:27 compute-0 nova_compute[250018]: 2026-01-20 14:51:27.459 250022 WARNING nova.compute.manager [req-106f8088-377b-40e7-8da4-6be8ffe263cd req-0ff1cf59-89a8-4564-bb2d-44e7fe0b7d75 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 6d73207e-59fd-4b15-875b-c735b5930933] Received unexpected event network-vif-plugged-92744b24-2cfe-43e7-9df7-107b17d3eabc for instance with vm_state active and task_state None.
Jan 20 14:51:27 compute-0 nova_compute[250018]: 2026-01-20 14:51:27.610 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:51:27 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1919: 321 pgs: 321 active+clean; 312 MiB data, 965 MiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 3.6 MiB/s wr, 352 op/s
Jan 20 14:51:28 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e258 do_prune osdmap full prune enabled
Jan 20 14:51:28 compute-0 ceph-mon[74360]: osdmap e258: 3 total, 3 up, 3 in
Jan 20 14:51:28 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/890546304' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:51:28 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e259 e259: 3 total, 3 up, 3 in
Jan 20 14:51:28 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e259: 3 total, 3 up, 3 in
Jan 20 14:51:28 compute-0 nova_compute[250018]: 2026-01-20 14:51:28.260 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:51:28 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:51:28 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:51:28 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:51:28.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:51:28 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:51:28 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:51:28 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:51:28.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:51:29 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e259 do_prune osdmap full prune enabled
Jan 20 14:51:29 compute-0 ceph-mon[74360]: pgmap v1919: 321 pgs: 321 active+clean; 312 MiB data, 965 MiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 3.6 MiB/s wr, 352 op/s
Jan 20 14:51:29 compute-0 ceph-mon[74360]: osdmap e259: 3 total, 3 up, 3 in
Jan 20 14:51:29 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e260 e260: 3 total, 3 up, 3 in
Jan 20 14:51:29 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e260: 3 total, 3 up, 3 in
Jan 20 14:51:29 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:51:29 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1922: 321 pgs: 321 active+clean; 317 MiB data, 959 MiB used, 20 GiB / 21 GiB avail; 7.7 MiB/s rd, 1.6 MiB/s wr, 395 op/s
Jan 20 14:51:30 compute-0 ceph-mon[74360]: osdmap e260: 3 total, 3 up, 3 in
Jan 20 14:51:30 compute-0 nova_compute[250018]: 2026-01-20 14:51:30.113 250022 DEBUG oslo_concurrency.lockutils [None req-eb1097d0-70ec-4367-8e8e-c2da148e4bb8 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Acquiring lock "6d73207e-59fd-4b15-875b-c735b5930933" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:51:30 compute-0 nova_compute[250018]: 2026-01-20 14:51:30.114 250022 DEBUG oslo_concurrency.lockutils [None req-eb1097d0-70ec-4367-8e8e-c2da148e4bb8 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lock "6d73207e-59fd-4b15-875b-c735b5930933" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:51:30 compute-0 nova_compute[250018]: 2026-01-20 14:51:30.114 250022 DEBUG oslo_concurrency.lockutils [None req-eb1097d0-70ec-4367-8e8e-c2da148e4bb8 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Acquiring lock "6d73207e-59fd-4b15-875b-c735b5930933-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:51:30 compute-0 nova_compute[250018]: 2026-01-20 14:51:30.114 250022 DEBUG oslo_concurrency.lockutils [None req-eb1097d0-70ec-4367-8e8e-c2da148e4bb8 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lock "6d73207e-59fd-4b15-875b-c735b5930933-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:51:30 compute-0 nova_compute[250018]: 2026-01-20 14:51:30.115 250022 DEBUG oslo_concurrency.lockutils [None req-eb1097d0-70ec-4367-8e8e-c2da148e4bb8 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lock "6d73207e-59fd-4b15-875b-c735b5930933-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:51:30 compute-0 nova_compute[250018]: 2026-01-20 14:51:30.116 250022 INFO nova.compute.manager [None req-eb1097d0-70ec-4367-8e8e-c2da148e4bb8 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: 6d73207e-59fd-4b15-875b-c735b5930933] Terminating instance
Jan 20 14:51:30 compute-0 nova_compute[250018]: 2026-01-20 14:51:30.117 250022 DEBUG nova.compute.manager [None req-eb1097d0-70ec-4367-8e8e-c2da148e4bb8 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: 6d73207e-59fd-4b15-875b-c735b5930933] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 14:51:30 compute-0 kernel: tap92744b24-2c (unregistering): left promiscuous mode
Jan 20 14:51:30 compute-0 NetworkManager[48960]: <info>  [1768920690.1574] device (tap92744b24-2c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 14:51:30 compute-0 nova_compute[250018]: 2026-01-20 14:51:30.166 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:51:30 compute-0 ovn_controller[148666]: 2026-01-20T14:51:30Z|00401|binding|INFO|Releasing lport 92744b24-2cfe-43e7-9df7-107b17d3eabc from this chassis (sb_readonly=0)
Jan 20 14:51:30 compute-0 ovn_controller[148666]: 2026-01-20T14:51:30Z|00402|binding|INFO|Setting lport 92744b24-2cfe-43e7-9df7-107b17d3eabc down in Southbound
Jan 20 14:51:30 compute-0 ovn_controller[148666]: 2026-01-20T14:51:30Z|00403|binding|INFO|Removing iface tap92744b24-2c ovn-installed in OVS
Jan 20 14:51:30 compute-0 nova_compute[250018]: 2026-01-20 14:51:30.167 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:51:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:51:30.171 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6a:2d:45 10.100.0.9'], port_security=['fa:16:3e:6a:2d:45 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '6d73207e-59fd-4b15-875b-c735b5930933', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3379e2b3-ffb2-4391-969b-c9dc51bfbe25', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'acb30fbc0e3749e390d7f867060b5a2a', 'neutron:revision_number': '4', 'neutron:security_group_ids': '19fab802-7db8-4c89-8f8e-8dcfc14d4627', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a0e287ba-f88b-46f5-bb7f-3cc2a74be88e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=92744b24-2cfe-43e7-9df7-107b17d3eabc) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:51:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:51:30.173 160071 INFO neutron.agent.ovn.metadata.agent [-] Port 92744b24-2cfe-43e7-9df7-107b17d3eabc in datapath 3379e2b3-ffb2-4391-969b-c9dc51bfbe25 unbound from our chassis
Jan 20 14:51:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:51:30.174 160071 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3379e2b3-ffb2-4391-969b-c9dc51bfbe25, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 14:51:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:51:30.175 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[ba23f39a-1b01-4e1f-abbb-15f6e10a46cf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:51:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:51:30.175 160071 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25 namespace which is not needed anymore
Jan 20 14:51:30 compute-0 nova_compute[250018]: 2026-01-20 14:51:30.187 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:51:30 compute-0 systemd[1]: machine-qemu\x2d52\x2dinstance\x2d00000071.scope: Deactivated successfully.
Jan 20 14:51:30 compute-0 systemd[1]: machine-qemu\x2d52\x2dinstance\x2d00000071.scope: Consumed 5.476s CPU time.
Jan 20 14:51:30 compute-0 systemd-machined[216401]: Machine qemu-52-instance-00000071 terminated.
Jan 20 14:51:30 compute-0 neutron-haproxy-ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25[316436]: [NOTICE]   (316440) : haproxy version is 2.8.14-c23fe91
Jan 20 14:51:30 compute-0 neutron-haproxy-ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25[316436]: [NOTICE]   (316440) : path to executable is /usr/sbin/haproxy
Jan 20 14:51:30 compute-0 neutron-haproxy-ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25[316436]: [WARNING]  (316440) : Exiting Master process...
Jan 20 14:51:30 compute-0 neutron-haproxy-ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25[316436]: [ALERT]    (316440) : Current worker (316442) exited with code 143 (Terminated)
Jan 20 14:51:30 compute-0 neutron-haproxy-ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25[316436]: [WARNING]  (316440) : All workers exited. Exiting... (0)
Jan 20 14:51:30 compute-0 systemd[1]: libpod-4a138fea6679046b7c35eac87508422e00a573e647a1be5c840a3862882c4444.scope: Deactivated successfully.
Jan 20 14:51:30 compute-0 podman[316476]: 2026-01-20 14:51:30.309598532 +0000 UTC m=+0.046267186 container died 4a138fea6679046b7c35eac87508422e00a573e647a1be5c840a3862882c4444 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202)
Jan 20 14:51:30 compute-0 nova_compute[250018]: 2026-01-20 14:51:30.334 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:51:30 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4a138fea6679046b7c35eac87508422e00a573e647a1be5c840a3862882c4444-userdata-shm.mount: Deactivated successfully.
Jan 20 14:51:30 compute-0 nova_compute[250018]: 2026-01-20 14:51:30.340 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:51:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-6747954c3f437eaf93c52ea65a0072078f6e2ee4391cb8f5267dc712146a2e05-merged.mount: Deactivated successfully.
Jan 20 14:51:30 compute-0 nova_compute[250018]: 2026-01-20 14:51:30.345 250022 INFO nova.virt.libvirt.driver [-] [instance: 6d73207e-59fd-4b15-875b-c735b5930933] Instance destroyed successfully.
Jan 20 14:51:30 compute-0 nova_compute[250018]: 2026-01-20 14:51:30.345 250022 DEBUG nova.objects.instance [None req-eb1097d0-70ec-4367-8e8e-c2da148e4bb8 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lazy-loading 'resources' on Instance uuid 6d73207e-59fd-4b15-875b-c735b5930933 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:51:30 compute-0 podman[316476]: 2026-01-20 14:51:30.354937292 +0000 UTC m=+0.091605946 container cleanup 4a138fea6679046b7c35eac87508422e00a573e647a1be5c840a3862882c4444 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.vendor=CentOS)
Jan 20 14:51:30 compute-0 nova_compute[250018]: 2026-01-20 14:51:30.373 250022 DEBUG nova.virt.libvirt.vif [None req-eb1097d0-70ec-4367-8e8e-c2da148e4bb8 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:51:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-438454409',display_name='tempest-ServerDiskConfigTestJSON-server-438454409',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-438454409',id=113,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:51:25Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='acb30fbc0e3749e390d7f867060b5a2a',ramdisk_id='',reservation_id='r-amva5euo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',im
age_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-1806346246',owner_user_name='tempest-ServerDiskConfigTestJSON-1806346246-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T14:51:27Z,user_data=None,user_id='a1bd93d04cc4468abe1d5c61f5144191',uuid=6d73207e-59fd-4b15-875b-c735b5930933,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "92744b24-2cfe-43e7-9df7-107b17d3eabc", "address": "fa:16:3e:6a:2d:45", "network": {"id": "3379e2b3-ffb2-4391-969b-c9dc51bfbe25", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1112843240-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acb30fbc0e3749e390d7f867060b5a2a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap92744b24-2c", "ovs_interfaceid": "92744b24-2cfe-43e7-9df7-107b17d3eabc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 14:51:30 compute-0 nova_compute[250018]: 2026-01-20 14:51:30.374 250022 DEBUG nova.network.os_vif_util [None req-eb1097d0-70ec-4367-8e8e-c2da148e4bb8 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Converting VIF {"id": "92744b24-2cfe-43e7-9df7-107b17d3eabc", "address": "fa:16:3e:6a:2d:45", "network": {"id": "3379e2b3-ffb2-4391-969b-c9dc51bfbe25", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1112843240-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acb30fbc0e3749e390d7f867060b5a2a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap92744b24-2c", "ovs_interfaceid": "92744b24-2cfe-43e7-9df7-107b17d3eabc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:51:30 compute-0 nova_compute[250018]: 2026-01-20 14:51:30.375 250022 DEBUG nova.network.os_vif_util [None req-eb1097d0-70ec-4367-8e8e-c2da148e4bb8 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6a:2d:45,bridge_name='br-int',has_traffic_filtering=True,id=92744b24-2cfe-43e7-9df7-107b17d3eabc,network=Network(3379e2b3-ffb2-4391-969b-c9dc51bfbe25),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap92744b24-2c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:51:30 compute-0 systemd[1]: libpod-conmon-4a138fea6679046b7c35eac87508422e00a573e647a1be5c840a3862882c4444.scope: Deactivated successfully.
Jan 20 14:51:30 compute-0 nova_compute[250018]: 2026-01-20 14:51:30.375 250022 DEBUG os_vif [None req-eb1097d0-70ec-4367-8e8e-c2da148e4bb8 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6a:2d:45,bridge_name='br-int',has_traffic_filtering=True,id=92744b24-2cfe-43e7-9df7-107b17d3eabc,network=Network(3379e2b3-ffb2-4391-969b-c9dc51bfbe25),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap92744b24-2c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 14:51:30 compute-0 nova_compute[250018]: 2026-01-20 14:51:30.377 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:51:30 compute-0 nova_compute[250018]: 2026-01-20 14:51:30.377 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap92744b24-2c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:51:30 compute-0 nova_compute[250018]: 2026-01-20 14:51:30.379 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:51:30 compute-0 nova_compute[250018]: 2026-01-20 14:51:30.382 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:51:30 compute-0 nova_compute[250018]: 2026-01-20 14:51:30.384 250022 INFO os_vif [None req-eb1097d0-70ec-4367-8e8e-c2da148e4bb8 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6a:2d:45,bridge_name='br-int',has_traffic_filtering=True,id=92744b24-2cfe-43e7-9df7-107b17d3eabc,network=Network(3379e2b3-ffb2-4391-969b-c9dc51bfbe25),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap92744b24-2c')
Jan 20 14:51:30 compute-0 podman[316513]: 2026-01-20 14:51:30.432270774 +0000 UTC m=+0.050434259 container remove 4a138fea6679046b7c35eac87508422e00a573e647a1be5c840a3862882c4444 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Jan 20 14:51:30 compute-0 sudo[316514]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:51:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:51:30.440 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[4d89ee92-1d32-49ff-85a6-db59e20da81d]: (4, ('Tue Jan 20 02:51:30 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25 (4a138fea6679046b7c35eac87508422e00a573e647a1be5c840a3862882c4444)\n4a138fea6679046b7c35eac87508422e00a573e647a1be5c840a3862882c4444\nTue Jan 20 02:51:30 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25 (4a138fea6679046b7c35eac87508422e00a573e647a1be5c840a3862882c4444)\n4a138fea6679046b7c35eac87508422e00a573e647a1be5c840a3862882c4444\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:51:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:51:30.442 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[b7e8212d-5197-4dbc-bb62-4490f11d771c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:51:30 compute-0 sudo[316514]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:51:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:51:30.443 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3379e2b3-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:51:30 compute-0 sudo[316514]: pam_unix(sudo:session): session closed for user root
Jan 20 14:51:30 compute-0 nova_compute[250018]: 2026-01-20 14:51:30.446 250022 DEBUG nova.compute.manager [req-e415d51a-146f-4e2f-ab17-c7841e4031d4 req-009dd252-0f87-4166-a5fd-b24867b1d1ba 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 6d73207e-59fd-4b15-875b-c735b5930933] Received event network-vif-unplugged-92744b24-2cfe-43e7-9df7-107b17d3eabc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:51:30 compute-0 nova_compute[250018]: 2026-01-20 14:51:30.446 250022 DEBUG oslo_concurrency.lockutils [req-e415d51a-146f-4e2f-ab17-c7841e4031d4 req-009dd252-0f87-4166-a5fd-b24867b1d1ba 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "6d73207e-59fd-4b15-875b-c735b5930933-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:51:30 compute-0 nova_compute[250018]: 2026-01-20 14:51:30.447 250022 DEBUG oslo_concurrency.lockutils [req-e415d51a-146f-4e2f-ab17-c7841e4031d4 req-009dd252-0f87-4166-a5fd-b24867b1d1ba 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "6d73207e-59fd-4b15-875b-c735b5930933-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:51:30 compute-0 nova_compute[250018]: 2026-01-20 14:51:30.447 250022 DEBUG oslo_concurrency.lockutils [req-e415d51a-146f-4e2f-ab17-c7841e4031d4 req-009dd252-0f87-4166-a5fd-b24867b1d1ba 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "6d73207e-59fd-4b15-875b-c735b5930933-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:51:30 compute-0 nova_compute[250018]: 2026-01-20 14:51:30.448 250022 DEBUG nova.compute.manager [req-e415d51a-146f-4e2f-ab17-c7841e4031d4 req-009dd252-0f87-4166-a5fd-b24867b1d1ba 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 6d73207e-59fd-4b15-875b-c735b5930933] No waiting events found dispatching network-vif-unplugged-92744b24-2cfe-43e7-9df7-107b17d3eabc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:51:30 compute-0 nova_compute[250018]: 2026-01-20 14:51:30.448 250022 DEBUG nova.compute.manager [req-e415d51a-146f-4e2f-ab17-c7841e4031d4 req-009dd252-0f87-4166-a5fd-b24867b1d1ba 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 6d73207e-59fd-4b15-875b-c735b5930933] Received event network-vif-unplugged-92744b24-2cfe-43e7-9df7-107b17d3eabc for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 20 14:51:30 compute-0 nova_compute[250018]: 2026-01-20 14:51:30.449 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:51:30 compute-0 kernel: tap3379e2b3-f0: left promiscuous mode
Jan 20 14:51:30 compute-0 nova_compute[250018]: 2026-01-20 14:51:30.460 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:51:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:51:30.464 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[97e6e502-62f3-418a-961f-6c93a00f2ca0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:51:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:51:30.481 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[4989d7d4-fe93-4466-a568-657382eb0227]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:51:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:51:30.483 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[a3d3b174-ccca-4bf1-a686-a28825a8d523]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:51:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:51:30.499 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[bf0b8527-8500-41f4-84bb-929ef92aa41d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 662547, 'reachable_time': 19609, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 316594, 'error': None, 'target': 'ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:51:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:51:30.502 160517 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3379e2b3-ffb2-4391-969b-c9dc51bfbe25 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 14:51:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:51:30.502 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[ec2b74b7-64d6-41fb-b4f8-e56bbcc17239]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:51:30 compute-0 systemd[1]: run-netns-ovnmeta\x2d3379e2b3\x2dffb2\x2d4391\x2d969b\x2dc9dc51bfbe25.mount: Deactivated successfully.
Jan 20 14:51:30 compute-0 sudo[316569]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:51:30 compute-0 sudo[316569]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:51:30 compute-0 sudo[316569]: pam_unix(sudo:session): session closed for user root
Jan 20 14:51:30 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:51:30 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:51:30 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:51:30.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:51:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:51:30.763 160071 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:51:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:51:30.763 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:51:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:51:30.764 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:51:30 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:51:30 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:51:30 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:51:30.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:51:31 compute-0 ceph-mon[74360]: pgmap v1922: 321 pgs: 321 active+clean; 317 MiB data, 959 MiB used, 20 GiB / 21 GiB avail; 7.7 MiB/s rd, 1.6 MiB/s wr, 395 op/s
Jan 20 14:51:31 compute-0 nova_compute[250018]: 2026-01-20 14:51:31.225 250022 INFO nova.virt.libvirt.driver [None req-eb1097d0-70ec-4367-8e8e-c2da148e4bb8 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: 6d73207e-59fd-4b15-875b-c735b5930933] Deleting instance files /var/lib/nova/instances/6d73207e-59fd-4b15-875b-c735b5930933_del
Jan 20 14:51:31 compute-0 nova_compute[250018]: 2026-01-20 14:51:31.226 250022 INFO nova.virt.libvirt.driver [None req-eb1097d0-70ec-4367-8e8e-c2da148e4bb8 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: 6d73207e-59fd-4b15-875b-c735b5930933] Deletion of /var/lib/nova/instances/6d73207e-59fd-4b15-875b-c735b5930933_del complete
Jan 20 14:51:31 compute-0 nova_compute[250018]: 2026-01-20 14:51:31.317 250022 INFO nova.compute.manager [None req-eb1097d0-70ec-4367-8e8e-c2da148e4bb8 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] [instance: 6d73207e-59fd-4b15-875b-c735b5930933] Took 1.20 seconds to destroy the instance on the hypervisor.
Jan 20 14:51:31 compute-0 nova_compute[250018]: 2026-01-20 14:51:31.318 250022 DEBUG oslo.service.loopingcall [None req-eb1097d0-70ec-4367-8e8e-c2da148e4bb8 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 14:51:31 compute-0 nova_compute[250018]: 2026-01-20 14:51:31.318 250022 DEBUG nova.compute.manager [-] [instance: 6d73207e-59fd-4b15-875b-c735b5930933] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 14:51:31 compute-0 nova_compute[250018]: 2026-01-20 14:51:31.318 250022 DEBUG nova.network.neutron [-] [instance: 6d73207e-59fd-4b15-875b-c735b5930933] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 14:51:31 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1923: 321 pgs: 321 active+clean; 291 MiB data, 942 MiB used, 20 GiB / 21 GiB avail; 8.6 MiB/s rd, 2.1 MiB/s wr, 353 op/s
Jan 20 14:51:32 compute-0 ceph-mon[74360]: pgmap v1923: 321 pgs: 321 active+clean; 291 MiB data, 942 MiB used, 20 GiB / 21 GiB avail; 8.6 MiB/s rd, 2.1 MiB/s wr, 353 op/s
Jan 20 14:51:32 compute-0 podman[316600]: 2026-01-20 14:51:32.524603802 +0000 UTC m=+0.116868595 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent)
Jan 20 14:51:32 compute-0 podman[316599]: 2026-01-20 14:51:32.569682815 +0000 UTC m=+0.164136958 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Jan 20 14:51:32 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:51:32 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:51:32 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:51:32.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:51:32 compute-0 nova_compute[250018]: 2026-01-20 14:51:32.793 250022 DEBUG nova.compute.manager [req-e2e3c4ef-13ae-4095-887d-b935db29a75a req-f5d9a92a-ba75-41a3-a11d-0d89bd0ef69b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 6d73207e-59fd-4b15-875b-c735b5930933] Received event network-vif-plugged-92744b24-2cfe-43e7-9df7-107b17d3eabc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:51:32 compute-0 nova_compute[250018]: 2026-01-20 14:51:32.794 250022 DEBUG oslo_concurrency.lockutils [req-e2e3c4ef-13ae-4095-887d-b935db29a75a req-f5d9a92a-ba75-41a3-a11d-0d89bd0ef69b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "6d73207e-59fd-4b15-875b-c735b5930933-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:51:32 compute-0 nova_compute[250018]: 2026-01-20 14:51:32.794 250022 DEBUG oslo_concurrency.lockutils [req-e2e3c4ef-13ae-4095-887d-b935db29a75a req-f5d9a92a-ba75-41a3-a11d-0d89bd0ef69b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "6d73207e-59fd-4b15-875b-c735b5930933-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:51:32 compute-0 nova_compute[250018]: 2026-01-20 14:51:32.795 250022 DEBUG oslo_concurrency.lockutils [req-e2e3c4ef-13ae-4095-887d-b935db29a75a req-f5d9a92a-ba75-41a3-a11d-0d89bd0ef69b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "6d73207e-59fd-4b15-875b-c735b5930933-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:51:32 compute-0 nova_compute[250018]: 2026-01-20 14:51:32.795 250022 DEBUG nova.compute.manager [req-e2e3c4ef-13ae-4095-887d-b935db29a75a req-f5d9a92a-ba75-41a3-a11d-0d89bd0ef69b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 6d73207e-59fd-4b15-875b-c735b5930933] No waiting events found dispatching network-vif-plugged-92744b24-2cfe-43e7-9df7-107b17d3eabc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:51:32 compute-0 nova_compute[250018]: 2026-01-20 14:51:32.795 250022 WARNING nova.compute.manager [req-e2e3c4ef-13ae-4095-887d-b935db29a75a req-f5d9a92a-ba75-41a3-a11d-0d89bd0ef69b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 6d73207e-59fd-4b15-875b-c735b5930933] Received unexpected event network-vif-plugged-92744b24-2cfe-43e7-9df7-107b17d3eabc for instance with vm_state active and task_state deleting.
Jan 20 14:51:32 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:51:32 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:51:32 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:51:32.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:51:32 compute-0 nova_compute[250018]: 2026-01-20 14:51:32.918 250022 DEBUG nova.network.neutron [-] [instance: 6d73207e-59fd-4b15-875b-c735b5930933] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:51:32 compute-0 nova_compute[250018]: 2026-01-20 14:51:32.945 250022 INFO nova.compute.manager [-] [instance: 6d73207e-59fd-4b15-875b-c735b5930933] Took 1.63 seconds to deallocate network for instance.
Jan 20 14:51:32 compute-0 nova_compute[250018]: 2026-01-20 14:51:32.964 250022 DEBUG nova.compute.manager [req-81e82331-990e-41e2-b724-7ee511ecc8a6 req-9a1420bf-94f0-470e-ab09-8f1e37c2f900 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 6d73207e-59fd-4b15-875b-c735b5930933] Received event network-vif-deleted-92744b24-2cfe-43e7-9df7-107b17d3eabc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:51:33 compute-0 nova_compute[250018]: 2026-01-20 14:51:33.031 250022 DEBUG oslo_concurrency.lockutils [None req-eb1097d0-70ec-4367-8e8e-c2da148e4bb8 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:51:33 compute-0 nova_compute[250018]: 2026-01-20 14:51:33.032 250022 DEBUG oslo_concurrency.lockutils [None req-eb1097d0-70ec-4367-8e8e-c2da148e4bb8 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:51:33 compute-0 nova_compute[250018]: 2026-01-20 14:51:33.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:51:33 compute-0 nova_compute[250018]: 2026-01-20 14:51:33.112 250022 DEBUG oslo_concurrency.processutils [None req-eb1097d0-70ec-4367-8e8e-c2da148e4bb8 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:51:33 compute-0 nova_compute[250018]: 2026-01-20 14:51:33.261 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:51:33 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:51:33 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1599957301' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:51:33 compute-0 nova_compute[250018]: 2026-01-20 14:51:33.544 250022 DEBUG oslo_concurrency.processutils [None req-eb1097d0-70ec-4367-8e8e-c2da148e4bb8 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:51:33 compute-0 nova_compute[250018]: 2026-01-20 14:51:33.551 250022 DEBUG nova.compute.provider_tree [None req-eb1097d0-70ec-4367-8e8e-c2da148e4bb8 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:51:33 compute-0 nova_compute[250018]: 2026-01-20 14:51:33.568 250022 DEBUG nova.scheduler.client.report [None req-eb1097d0-70ec-4367-8e8e-c2da148e4bb8 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:51:33 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/1599957301' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:51:33 compute-0 nova_compute[250018]: 2026-01-20 14:51:33.595 250022 DEBUG oslo_concurrency.lockutils [None req-eb1097d0-70ec-4367-8e8e-c2da148e4bb8 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.563s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:51:33 compute-0 nova_compute[250018]: 2026-01-20 14:51:33.624 250022 INFO nova.scheduler.client.report [None req-eb1097d0-70ec-4367-8e8e-c2da148e4bb8 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Deleted allocations for instance 6d73207e-59fd-4b15-875b-c735b5930933
Jan 20 14:51:33 compute-0 nova_compute[250018]: 2026-01-20 14:51:33.707 250022 DEBUG oslo_concurrency.lockutils [None req-eb1097d0-70ec-4367-8e8e-c2da148e4bb8 a1bd93d04cc4468abe1d5c61f5144191 acb30fbc0e3749e390d7f867060b5a2a - - default default] Lock "6d73207e-59fd-4b15-875b-c735b5930933" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.593s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:51:33 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1924: 321 pgs: 321 active+clean; 284 MiB data, 939 MiB used, 20 GiB / 21 GiB avail; 8.7 MiB/s rd, 3.1 MiB/s wr, 374 op/s
Jan 20 14:51:34 compute-0 nova_compute[250018]: 2026-01-20 14:51:34.045 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:51:34 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:51:34 compute-0 ceph-mon[74360]: pgmap v1924: 321 pgs: 321 active+clean; 284 MiB data, 939 MiB used, 20 GiB / 21 GiB avail; 8.7 MiB/s rd, 3.1 MiB/s wr, 374 op/s
Jan 20 14:51:34 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:51:34 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:51:34 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:51:34.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:51:34 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:51:34 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:51:34 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:51:34.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:51:35 compute-0 nova_compute[250018]: 2026-01-20 14:51:35.381 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:51:35 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1925: 321 pgs: 321 active+clean; 261 MiB data, 930 MiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 2.7 MiB/s wr, 231 op/s
Jan 20 14:51:36 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:51:36 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:51:36 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:51:36.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:51:36 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:51:36 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:51:36 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:51:36.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:51:37 compute-0 ceph-mon[74360]: pgmap v1925: 321 pgs: 321 active+clean; 261 MiB data, 930 MiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 2.7 MiB/s wr, 231 op/s
Jan 20 14:51:37 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1926: 321 pgs: 321 active+clean; 269 MiB data, 956 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 3.0 MiB/s wr, 215 op/s
Jan 20 14:51:38 compute-0 nova_compute[250018]: 2026-01-20 14:51:38.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:51:38 compute-0 nova_compute[250018]: 2026-01-20 14:51:38.264 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:51:38 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:51:38 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:51:38 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:51:38.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:51:38 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:51:38 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:51:38 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:51:38.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:51:39 compute-0 nova_compute[250018]: 2026-01-20 14:51:39.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:51:39 compute-0 ceph-mon[74360]: pgmap v1926: 321 pgs: 321 active+clean; 269 MiB data, 956 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 3.0 MiB/s wr, 215 op/s
Jan 20 14:51:39 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:51:39 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e260 do_prune osdmap full prune enabled
Jan 20 14:51:39 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e261 e261: 3 total, 3 up, 3 in
Jan 20 14:51:39 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e261: 3 total, 3 up, 3 in
Jan 20 14:51:39 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1928: 321 pgs: 321 active+clean; 275 MiB data, 962 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 2.7 MiB/s wr, 198 op/s
Jan 20 14:51:40 compute-0 nova_compute[250018]: 2026-01-20 14:51:40.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:51:40 compute-0 nova_compute[250018]: 2026-01-20 14:51:40.080 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:51:40 compute-0 nova_compute[250018]: 2026-01-20 14:51:40.080 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:51:40 compute-0 nova_compute[250018]: 2026-01-20 14:51:40.081 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:51:40 compute-0 nova_compute[250018]: 2026-01-20 14:51:40.081 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 14:51:40 compute-0 nova_compute[250018]: 2026-01-20 14:51:40.082 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:51:40 compute-0 nova_compute[250018]: 2026-01-20 14:51:40.420 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:51:40 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:51:40 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2350871441' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:51:40 compute-0 nova_compute[250018]: 2026-01-20 14:51:40.597 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:51:40 compute-0 ceph-mon[74360]: osdmap e261: 3 total, 3 up, 3 in
Jan 20 14:51:40 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/1849351552' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:51:40 compute-0 ceph-mon[74360]: pgmap v1928: 321 pgs: 321 active+clean; 275 MiB data, 962 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 2.7 MiB/s wr, 198 op/s
Jan 20 14:51:40 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/1423700390' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:51:40 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/2350871441' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:51:40 compute-0 nova_compute[250018]: 2026-01-20 14:51:40.667 250022 DEBUG nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] skipping disk for instance-0000006e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:51:40 compute-0 nova_compute[250018]: 2026-01-20 14:51:40.668 250022 DEBUG nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] skipping disk for instance-0000006e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:51:40 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:51:40 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:51:40 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:51:40.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:51:40 compute-0 nova_compute[250018]: 2026-01-20 14:51:40.825 250022 WARNING nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:51:40 compute-0 nova_compute[250018]: 2026-01-20 14:51:40.826 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4257MB free_disk=20.91362762451172GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 14:51:40 compute-0 nova_compute[250018]: 2026-01-20 14:51:40.827 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:51:40 compute-0 nova_compute[250018]: 2026-01-20 14:51:40.827 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:51:40 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:51:40 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:51:40 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:51:40.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:51:40 compute-0 nova_compute[250018]: 2026-01-20 14:51:40.901 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Instance c3b4d4c6-c42f-4abc-9c01-89ec3e10c677 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 14:51:40 compute-0 nova_compute[250018]: 2026-01-20 14:51:40.902 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 14:51:40 compute-0 nova_compute[250018]: 2026-01-20 14:51:40.902 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 14:51:40 compute-0 nova_compute[250018]: 2026-01-20 14:51:40.962 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:51:41 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:51:41 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3808001398' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:51:41 compute-0 nova_compute[250018]: 2026-01-20 14:51:41.383 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:51:41 compute-0 nova_compute[250018]: 2026-01-20 14:51:41.391 250022 DEBUG nova.compute.provider_tree [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:51:41 compute-0 nova_compute[250018]: 2026-01-20 14:51:41.412 250022 DEBUG nova.scheduler.client.report [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:51:41 compute-0 nova_compute[250018]: 2026-01-20 14:51:41.439 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 14:51:41 compute-0 nova_compute[250018]: 2026-01-20 14:51:41.439 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.612s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:51:41 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/2869754477' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:51:41 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3808001398' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:51:41 compute-0 ovn_controller[148666]: 2026-01-20T14:51:41Z|00404|binding|INFO|Releasing lport b033e9e6-9781-4424-a20f-7b48a14e2c80 from this chassis (sb_readonly=0)
Jan 20 14:51:41 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1929: 321 pgs: 321 active+clean; 291 MiB data, 972 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 3.4 MiB/s wr, 151 op/s
Jan 20 14:51:41 compute-0 nova_compute[250018]: 2026-01-20 14:51:41.949 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:51:42 compute-0 nova_compute[250018]: 2026-01-20 14:51:42.439 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:51:42 compute-0 nova_compute[250018]: 2026-01-20 14:51:42.440 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:51:42 compute-0 nova_compute[250018]: 2026-01-20 14:51:42.441 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:51:42 compute-0 nova_compute[250018]: 2026-01-20 14:51:42.441 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 14:51:42 compute-0 ceph-mon[74360]: pgmap v1929: 321 pgs: 321 active+clean; 291 MiB data, 972 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 3.4 MiB/s wr, 151 op/s
Jan 20 14:51:42 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/166196684' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:51:42 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/29521595' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:51:42 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/2877049591' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:51:42 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:51:42 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:51:42 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:51:42.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:51:42 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:51:42 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:51:42 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:51:42.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:51:43 compute-0 nova_compute[250018]: 2026-01-20 14:51:43.265 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:51:43 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/3514914750' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:51:43 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/2407416010' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:51:43 compute-0 nova_compute[250018]: 2026-01-20 14:51:43.705 250022 DEBUG oslo_concurrency.lockutils [None req-e87dc67e-60c5-43f4-9eec-2056a781204f cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] Acquiring lock "8392c22f-8472-42e6-bd6b-68724d9e0734" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:51:43 compute-0 nova_compute[250018]: 2026-01-20 14:51:43.705 250022 DEBUG oslo_concurrency.lockutils [None req-e87dc67e-60c5-43f4-9eec-2056a781204f cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] Lock "8392c22f-8472-42e6-bd6b-68724d9e0734" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:51:43 compute-0 nova_compute[250018]: 2026-01-20 14:51:43.723 250022 DEBUG nova.compute.manager [None req-e87dc67e-60c5-43f4-9eec-2056a781204f cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] [instance: 8392c22f-8472-42e6-bd6b-68724d9e0734] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 14:51:43 compute-0 nova_compute[250018]: 2026-01-20 14:51:43.793 250022 DEBUG oslo_concurrency.lockutils [None req-e87dc67e-60c5-43f4-9eec-2056a781204f cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:51:43 compute-0 nova_compute[250018]: 2026-01-20 14:51:43.794 250022 DEBUG oslo_concurrency.lockutils [None req-e87dc67e-60c5-43f4-9eec-2056a781204f cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:51:43 compute-0 nova_compute[250018]: 2026-01-20 14:51:43.799 250022 DEBUG nova.virt.hardware [None req-e87dc67e-60c5-43f4-9eec-2056a781204f cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 14:51:43 compute-0 nova_compute[250018]: 2026-01-20 14:51:43.799 250022 INFO nova.compute.claims [None req-e87dc67e-60c5-43f4-9eec-2056a781204f cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] [instance: 8392c22f-8472-42e6-bd6b-68724d9e0734] Claim successful on node compute-0.ctlplane.example.com
Jan 20 14:51:43 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1930: 321 pgs: 321 active+clean; 294 MiB data, 973 MiB used, 20 GiB / 21 GiB avail; 507 KiB/s rd, 2.6 MiB/s wr, 133 op/s
Jan 20 14:51:43 compute-0 nova_compute[250018]: 2026-01-20 14:51:43.951 250022 DEBUG oslo_concurrency.processutils [None req-e87dc67e-60c5-43f4-9eec-2056a781204f cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:51:44 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:51:44 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2029288579' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:51:44 compute-0 nova_compute[250018]: 2026-01-20 14:51:44.411 250022 DEBUG oslo_concurrency.processutils [None req-e87dc67e-60c5-43f4-9eec-2056a781204f cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:51:44 compute-0 nova_compute[250018]: 2026-01-20 14:51:44.419 250022 DEBUG nova.compute.provider_tree [None req-e87dc67e-60c5-43f4-9eec-2056a781204f cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:51:44 compute-0 nova_compute[250018]: 2026-01-20 14:51:44.440 250022 DEBUG nova.scheduler.client.report [None req-e87dc67e-60c5-43f4-9eec-2056a781204f cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:51:44 compute-0 nova_compute[250018]: 2026-01-20 14:51:44.463 250022 DEBUG oslo_concurrency.lockutils [None req-e87dc67e-60c5-43f4-9eec-2056a781204f cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.669s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:51:44 compute-0 nova_compute[250018]: 2026-01-20 14:51:44.463 250022 DEBUG nova.compute.manager [None req-e87dc67e-60c5-43f4-9eec-2056a781204f cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] [instance: 8392c22f-8472-42e6-bd6b-68724d9e0734] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 14:51:44 compute-0 nova_compute[250018]: 2026-01-20 14:51:44.534 250022 DEBUG nova.compute.manager [None req-e87dc67e-60c5-43f4-9eec-2056a781204f cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] [instance: 8392c22f-8472-42e6-bd6b-68724d9e0734] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Jan 20 14:51:44 compute-0 nova_compute[250018]: 2026-01-20 14:51:44.566 250022 INFO nova.virt.libvirt.driver [None req-e87dc67e-60c5-43f4-9eec-2056a781204f cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] [instance: 8392c22f-8472-42e6-bd6b-68724d9e0734] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 14:51:44 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e261 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:51:44 compute-0 nova_compute[250018]: 2026-01-20 14:51:44.603 250022 DEBUG nova.compute.manager [None req-e87dc67e-60c5-43f4-9eec-2056a781204f cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] [instance: 8392c22f-8472-42e6-bd6b-68724d9e0734] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 14:51:44 compute-0 ceph-mon[74360]: pgmap v1930: 321 pgs: 321 active+clean; 294 MiB data, 973 MiB used, 20 GiB / 21 GiB avail; 507 KiB/s rd, 2.6 MiB/s wr, 133 op/s
Jan 20 14:51:44 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/2029288579' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:51:44 compute-0 nova_compute[250018]: 2026-01-20 14:51:44.727 250022 DEBUG nova.compute.manager [None req-e87dc67e-60c5-43f4-9eec-2056a781204f cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] [instance: 8392c22f-8472-42e6-bd6b-68724d9e0734] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 14:51:44 compute-0 nova_compute[250018]: 2026-01-20 14:51:44.729 250022 DEBUG nova.virt.libvirt.driver [None req-e87dc67e-60c5-43f4-9eec-2056a781204f cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] [instance: 8392c22f-8472-42e6-bd6b-68724d9e0734] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 14:51:44 compute-0 nova_compute[250018]: 2026-01-20 14:51:44.729 250022 INFO nova.virt.libvirt.driver [None req-e87dc67e-60c5-43f4-9eec-2056a781204f cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] [instance: 8392c22f-8472-42e6-bd6b-68724d9e0734] Creating image(s)
Jan 20 14:51:44 compute-0 nova_compute[250018]: 2026-01-20 14:51:44.760 250022 DEBUG nova.storage.rbd_utils [None req-e87dc67e-60c5-43f4-9eec-2056a781204f cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] rbd image 8392c22f-8472-42e6-bd6b-68724d9e0734_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:51:44 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:51:44 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:51:44 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:51:44.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:51:44 compute-0 nova_compute[250018]: 2026-01-20 14:51:44.792 250022 DEBUG nova.storage.rbd_utils [None req-e87dc67e-60c5-43f4-9eec-2056a781204f cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] rbd image 8392c22f-8472-42e6-bd6b-68724d9e0734_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:51:44 compute-0 nova_compute[250018]: 2026-01-20 14:51:44.824 250022 DEBUG nova.storage.rbd_utils [None req-e87dc67e-60c5-43f4-9eec-2056a781204f cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] rbd image 8392c22f-8472-42e6-bd6b-68724d9e0734_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:51:44 compute-0 nova_compute[250018]: 2026-01-20 14:51:44.829 250022 DEBUG oslo_concurrency.processutils [None req-e87dc67e-60c5-43f4-9eec-2056a781204f cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:51:44 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:51:44 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:51:44 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:51:44.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:51:44 compute-0 nova_compute[250018]: 2026-01-20 14:51:44.914 250022 DEBUG oslo_concurrency.processutils [None req-e87dc67e-60c5-43f4-9eec-2056a781204f cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:51:44 compute-0 nova_compute[250018]: 2026-01-20 14:51:44.915 250022 DEBUG oslo_concurrency.lockutils [None req-e87dc67e-60c5-43f4-9eec-2056a781204f cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] Acquiring lock "82d5c1918fd7c974214c7a48c1793a7a82560462" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:51:44 compute-0 nova_compute[250018]: 2026-01-20 14:51:44.916 250022 DEBUG oslo_concurrency.lockutils [None req-e87dc67e-60c5-43f4-9eec-2056a781204f cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:51:44 compute-0 nova_compute[250018]: 2026-01-20 14:51:44.916 250022 DEBUG oslo_concurrency.lockutils [None req-e87dc67e-60c5-43f4-9eec-2056a781204f cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:51:44 compute-0 nova_compute[250018]: 2026-01-20 14:51:44.947 250022 DEBUG nova.storage.rbd_utils [None req-e87dc67e-60c5-43f4-9eec-2056a781204f cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] rbd image 8392c22f-8472-42e6-bd6b-68724d9e0734_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:51:44 compute-0 nova_compute[250018]: 2026-01-20 14:51:44.952 250022 DEBUG oslo_concurrency.processutils [None req-e87dc67e-60c5-43f4-9eec-2056a781204f cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 8392c22f-8472-42e6-bd6b-68724d9e0734_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:51:45 compute-0 nova_compute[250018]: 2026-01-20 14:51:45.228 250022 DEBUG oslo_concurrency.processutils [None req-e87dc67e-60c5-43f4-9eec-2056a781204f cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 8392c22f-8472-42e6-bd6b-68724d9e0734_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.277s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:51:45 compute-0 nova_compute[250018]: 2026-01-20 14:51:45.288 250022 DEBUG nova.storage.rbd_utils [None req-e87dc67e-60c5-43f4-9eec-2056a781204f cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] resizing rbd image 8392c22f-8472-42e6-bd6b-68724d9e0734_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 20 14:51:45 compute-0 nova_compute[250018]: 2026-01-20 14:51:45.343 250022 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768920690.342456, 6d73207e-59fd-4b15-875b-c735b5930933 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:51:45 compute-0 nova_compute[250018]: 2026-01-20 14:51:45.343 250022 INFO nova.compute.manager [-] [instance: 6d73207e-59fd-4b15-875b-c735b5930933] VM Stopped (Lifecycle Event)
Jan 20 14:51:45 compute-0 nova_compute[250018]: 2026-01-20 14:51:45.389 250022 DEBUG nova.compute.manager [None req-1b0d5b68-7ebf-4af1-ab2b-7b36bd1ce94b - - - - - -] [instance: 6d73207e-59fd-4b15-875b-c735b5930933] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:51:45 compute-0 nova_compute[250018]: 2026-01-20 14:51:45.394 250022 DEBUG nova.objects.instance [None req-e87dc67e-60c5-43f4-9eec-2056a781204f cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] Lazy-loading 'migration_context' on Instance uuid 8392c22f-8472-42e6-bd6b-68724d9e0734 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:51:45 compute-0 nova_compute[250018]: 2026-01-20 14:51:45.408 250022 DEBUG nova.virt.libvirt.driver [None req-e87dc67e-60c5-43f4-9eec-2056a781204f cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] [instance: 8392c22f-8472-42e6-bd6b-68724d9e0734] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 14:51:45 compute-0 nova_compute[250018]: 2026-01-20 14:51:45.409 250022 DEBUG nova.virt.libvirt.driver [None req-e87dc67e-60c5-43f4-9eec-2056a781204f cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] [instance: 8392c22f-8472-42e6-bd6b-68724d9e0734] Ensure instance console log exists: /var/lib/nova/instances/8392c22f-8472-42e6-bd6b-68724d9e0734/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 14:51:45 compute-0 nova_compute[250018]: 2026-01-20 14:51:45.409 250022 DEBUG oslo_concurrency.lockutils [None req-e87dc67e-60c5-43f4-9eec-2056a781204f cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:51:45 compute-0 nova_compute[250018]: 2026-01-20 14:51:45.410 250022 DEBUG oslo_concurrency.lockutils [None req-e87dc67e-60c5-43f4-9eec-2056a781204f cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:51:45 compute-0 nova_compute[250018]: 2026-01-20 14:51:45.410 250022 DEBUG oslo_concurrency.lockutils [None req-e87dc67e-60c5-43f4-9eec-2056a781204f cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:51:45 compute-0 nova_compute[250018]: 2026-01-20 14:51:45.411 250022 DEBUG nova.virt.libvirt.driver [None req-e87dc67e-60c5-43f4-9eec-2056a781204f cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] [instance: 8392c22f-8472-42e6-bd6b-68724d9e0734] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encrypted': False, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'guest_format': None, 'size': 0, 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 14:51:45 compute-0 nova_compute[250018]: 2026-01-20 14:51:45.415 250022 WARNING nova.virt.libvirt.driver [None req-e87dc67e-60c5-43f4-9eec-2056a781204f cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:51:45 compute-0 nova_compute[250018]: 2026-01-20 14:51:45.419 250022 DEBUG nova.virt.libvirt.host [None req-e87dc67e-60c5-43f4-9eec-2056a781204f cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 14:51:45 compute-0 nova_compute[250018]: 2026-01-20 14:51:45.420 250022 DEBUG nova.virt.libvirt.host [None req-e87dc67e-60c5-43f4-9eec-2056a781204f cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 14:51:45 compute-0 nova_compute[250018]: 2026-01-20 14:51:45.422 250022 DEBUG nova.virt.libvirt.host [None req-e87dc67e-60c5-43f4-9eec-2056a781204f cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 14:51:45 compute-0 nova_compute[250018]: 2026-01-20 14:51:45.423 250022 DEBUG nova.virt.libvirt.host [None req-e87dc67e-60c5-43f4-9eec-2056a781204f cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 14:51:45 compute-0 nova_compute[250018]: 2026-01-20 14:51:45.424 250022 DEBUG nova.virt.libvirt.driver [None req-e87dc67e-60c5-43f4-9eec-2056a781204f cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 14:51:45 compute-0 nova_compute[250018]: 2026-01-20 14:51:45.424 250022 DEBUG nova.virt.hardware [None req-e87dc67e-60c5-43f4-9eec-2056a781204f cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 14:51:45 compute-0 nova_compute[250018]: 2026-01-20 14:51:45.425 250022 DEBUG nova.virt.hardware [None req-e87dc67e-60c5-43f4-9eec-2056a781204f cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 14:51:45 compute-0 nova_compute[250018]: 2026-01-20 14:51:45.425 250022 DEBUG nova.virt.hardware [None req-e87dc67e-60c5-43f4-9eec-2056a781204f cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 14:51:45 compute-0 nova_compute[250018]: 2026-01-20 14:51:45.425 250022 DEBUG nova.virt.hardware [None req-e87dc67e-60c5-43f4-9eec-2056a781204f cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 14:51:45 compute-0 nova_compute[250018]: 2026-01-20 14:51:45.425 250022 DEBUG nova.virt.hardware [None req-e87dc67e-60c5-43f4-9eec-2056a781204f cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 14:51:45 compute-0 nova_compute[250018]: 2026-01-20 14:51:45.426 250022 DEBUG nova.virt.hardware [None req-e87dc67e-60c5-43f4-9eec-2056a781204f cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 14:51:45 compute-0 nova_compute[250018]: 2026-01-20 14:51:45.426 250022 DEBUG nova.virt.hardware [None req-e87dc67e-60c5-43f4-9eec-2056a781204f cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 14:51:45 compute-0 nova_compute[250018]: 2026-01-20 14:51:45.426 250022 DEBUG nova.virt.hardware [None req-e87dc67e-60c5-43f4-9eec-2056a781204f cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 14:51:45 compute-0 nova_compute[250018]: 2026-01-20 14:51:45.427 250022 DEBUG nova.virt.hardware [None req-e87dc67e-60c5-43f4-9eec-2056a781204f cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 14:51:45 compute-0 nova_compute[250018]: 2026-01-20 14:51:45.427 250022 DEBUG nova.virt.hardware [None req-e87dc67e-60c5-43f4-9eec-2056a781204f cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 14:51:45 compute-0 nova_compute[250018]: 2026-01-20 14:51:45.427 250022 DEBUG nova.virt.hardware [None req-e87dc67e-60c5-43f4-9eec-2056a781204f cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 14:51:45 compute-0 nova_compute[250018]: 2026-01-20 14:51:45.429 250022 DEBUG oslo_concurrency.processutils [None req-e87dc67e-60c5-43f4-9eec-2056a781204f cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:51:45 compute-0 nova_compute[250018]: 2026-01-20 14:51:45.458 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:51:45 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/297072441' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:51:45 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/664632545' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:51:45 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:51:45 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3622414733' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:51:45 compute-0 nova_compute[250018]: 2026-01-20 14:51:45.896 250022 DEBUG oslo_concurrency.processutils [None req-e87dc67e-60c5-43f4-9eec-2056a781204f cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:51:45 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1931: 321 pgs: 321 active+clean; 331 MiB data, 989 MiB used, 20 GiB / 21 GiB avail; 494 KiB/s rd, 4.2 MiB/s wr, 124 op/s
Jan 20 14:51:45 compute-0 nova_compute[250018]: 2026-01-20 14:51:45.921 250022 DEBUG nova.storage.rbd_utils [None req-e87dc67e-60c5-43f4-9eec-2056a781204f cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] rbd image 8392c22f-8472-42e6-bd6b-68724d9e0734_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:51:45 compute-0 nova_compute[250018]: 2026-01-20 14:51:45.925 250022 DEBUG oslo_concurrency.processutils [None req-e87dc67e-60c5-43f4-9eec-2056a781204f cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:51:46 compute-0 nova_compute[250018]: 2026-01-20 14:51:46.052 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:51:46 compute-0 nova_compute[250018]: 2026-01-20 14:51:46.053 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 14:51:46 compute-0 nova_compute[250018]: 2026-01-20 14:51:46.053 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 14:51:46 compute-0 nova_compute[250018]: 2026-01-20 14:51:46.074 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] [instance: 8392c22f-8472-42e6-bd6b-68724d9e0734] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 20 14:51:46 compute-0 nova_compute[250018]: 2026-01-20 14:51:46.288 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "refresh_cache-c3b4d4c6-c42f-4abc-9c01-89ec3e10c677" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:51:46 compute-0 nova_compute[250018]: 2026-01-20 14:51:46.288 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquired lock "refresh_cache-c3b4d4c6-c42f-4abc-9c01-89ec3e10c677" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:51:46 compute-0 nova_compute[250018]: 2026-01-20 14:51:46.288 250022 DEBUG nova.network.neutron [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 20 14:51:46 compute-0 nova_compute[250018]: 2026-01-20 14:51:46.289 250022 DEBUG nova.objects.instance [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lazy-loading 'info_cache' on Instance uuid c3b4d4c6-c42f-4abc-9c01-89ec3e10c677 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:51:46 compute-0 nova_compute[250018]: 2026-01-20 14:51:46.360 250022 DEBUG oslo_concurrency.processutils [None req-e87dc67e-60c5-43f4-9eec-2056a781204f cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:51:46 compute-0 nova_compute[250018]: 2026-01-20 14:51:46.362 250022 DEBUG nova.objects.instance [None req-e87dc67e-60c5-43f4-9eec-2056a781204f cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] Lazy-loading 'pci_devices' on Instance uuid 8392c22f-8472-42e6-bd6b-68724d9e0734 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:51:46 compute-0 nova_compute[250018]: 2026-01-20 14:51:46.405 250022 DEBUG nova.virt.libvirt.driver [None req-e87dc67e-60c5-43f4-9eec-2056a781204f cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] [instance: 8392c22f-8472-42e6-bd6b-68724d9e0734] End _get_guest_xml xml=<domain type="kvm">
Jan 20 14:51:46 compute-0 nova_compute[250018]:   <uuid>8392c22f-8472-42e6-bd6b-68724d9e0734</uuid>
Jan 20 14:51:46 compute-0 nova_compute[250018]:   <name>instance-00000073</name>
Jan 20 14:51:46 compute-0 nova_compute[250018]:   <memory>131072</memory>
Jan 20 14:51:46 compute-0 nova_compute[250018]:   <vcpu>1</vcpu>
Jan 20 14:51:46 compute-0 nova_compute[250018]:   <metadata>
Jan 20 14:51:46 compute-0 nova_compute[250018]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 14:51:46 compute-0 nova_compute[250018]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 14:51:46 compute-0 nova_compute[250018]:       <nova:name>tempest-ServerShowV247Test-server-592811457</nova:name>
Jan 20 14:51:46 compute-0 nova_compute[250018]:       <nova:creationTime>2026-01-20 14:51:45</nova:creationTime>
Jan 20 14:51:46 compute-0 nova_compute[250018]:       <nova:flavor name="m1.nano">
Jan 20 14:51:46 compute-0 nova_compute[250018]:         <nova:memory>128</nova:memory>
Jan 20 14:51:46 compute-0 nova_compute[250018]:         <nova:disk>1</nova:disk>
Jan 20 14:51:46 compute-0 nova_compute[250018]:         <nova:swap>0</nova:swap>
Jan 20 14:51:46 compute-0 nova_compute[250018]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 14:51:46 compute-0 nova_compute[250018]:         <nova:vcpus>1</nova:vcpus>
Jan 20 14:51:46 compute-0 nova_compute[250018]:       </nova:flavor>
Jan 20 14:51:46 compute-0 nova_compute[250018]:       <nova:owner>
Jan 20 14:51:46 compute-0 nova_compute[250018]:         <nova:user uuid="cdcdce94e7354b3bafb34285408888b9">tempest-ServerShowV247Test-1508434892-project-member</nova:user>
Jan 20 14:51:46 compute-0 nova_compute[250018]:         <nova:project uuid="ecfc3366b9194864a3f15ce0114b5ee3">tempest-ServerShowV247Test-1508434892</nova:project>
Jan 20 14:51:46 compute-0 nova_compute[250018]:       </nova:owner>
Jan 20 14:51:46 compute-0 nova_compute[250018]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 14:51:46 compute-0 nova_compute[250018]:       <nova:ports/>
Jan 20 14:51:46 compute-0 nova_compute[250018]:     </nova:instance>
Jan 20 14:51:46 compute-0 nova_compute[250018]:   </metadata>
Jan 20 14:51:46 compute-0 nova_compute[250018]:   <sysinfo type="smbios">
Jan 20 14:51:46 compute-0 nova_compute[250018]:     <system>
Jan 20 14:51:46 compute-0 nova_compute[250018]:       <entry name="manufacturer">RDO</entry>
Jan 20 14:51:46 compute-0 nova_compute[250018]:       <entry name="product">OpenStack Compute</entry>
Jan 20 14:51:46 compute-0 nova_compute[250018]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 14:51:46 compute-0 nova_compute[250018]:       <entry name="serial">8392c22f-8472-42e6-bd6b-68724d9e0734</entry>
Jan 20 14:51:46 compute-0 nova_compute[250018]:       <entry name="uuid">8392c22f-8472-42e6-bd6b-68724d9e0734</entry>
Jan 20 14:51:46 compute-0 nova_compute[250018]:       <entry name="family">Virtual Machine</entry>
Jan 20 14:51:46 compute-0 nova_compute[250018]:     </system>
Jan 20 14:51:46 compute-0 nova_compute[250018]:   </sysinfo>
Jan 20 14:51:46 compute-0 nova_compute[250018]:   <os>
Jan 20 14:51:46 compute-0 nova_compute[250018]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 14:51:46 compute-0 nova_compute[250018]:     <boot dev="hd"/>
Jan 20 14:51:46 compute-0 nova_compute[250018]:     <smbios mode="sysinfo"/>
Jan 20 14:51:46 compute-0 nova_compute[250018]:   </os>
Jan 20 14:51:46 compute-0 nova_compute[250018]:   <features>
Jan 20 14:51:46 compute-0 nova_compute[250018]:     <acpi/>
Jan 20 14:51:46 compute-0 nova_compute[250018]:     <apic/>
Jan 20 14:51:46 compute-0 nova_compute[250018]:     <vmcoreinfo/>
Jan 20 14:51:46 compute-0 nova_compute[250018]:   </features>
Jan 20 14:51:46 compute-0 nova_compute[250018]:   <clock offset="utc">
Jan 20 14:51:46 compute-0 nova_compute[250018]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 14:51:46 compute-0 nova_compute[250018]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 14:51:46 compute-0 nova_compute[250018]:     <timer name="hpet" present="no"/>
Jan 20 14:51:46 compute-0 nova_compute[250018]:   </clock>
Jan 20 14:51:46 compute-0 nova_compute[250018]:   <cpu mode="custom" match="exact">
Jan 20 14:51:46 compute-0 nova_compute[250018]:     <model>Nehalem</model>
Jan 20 14:51:46 compute-0 nova_compute[250018]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 14:51:46 compute-0 nova_compute[250018]:   </cpu>
Jan 20 14:51:46 compute-0 nova_compute[250018]:   <devices>
Jan 20 14:51:46 compute-0 nova_compute[250018]:     <disk type="network" device="disk">
Jan 20 14:51:46 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 14:51:46 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/8392c22f-8472-42e6-bd6b-68724d9e0734_disk">
Jan 20 14:51:46 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:51:46 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:51:46 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:51:46 compute-0 nova_compute[250018]:       </source>
Jan 20 14:51:46 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 14:51:46 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:51:46 compute-0 nova_compute[250018]:       </auth>
Jan 20 14:51:46 compute-0 nova_compute[250018]:       <target dev="vda" bus="virtio"/>
Jan 20 14:51:46 compute-0 nova_compute[250018]:     </disk>
Jan 20 14:51:46 compute-0 nova_compute[250018]:     <disk type="network" device="cdrom">
Jan 20 14:51:46 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 14:51:46 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/8392c22f-8472-42e6-bd6b-68724d9e0734_disk.config">
Jan 20 14:51:46 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:51:46 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:51:46 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:51:46 compute-0 nova_compute[250018]:       </source>
Jan 20 14:51:46 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 14:51:46 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:51:46 compute-0 nova_compute[250018]:       </auth>
Jan 20 14:51:46 compute-0 nova_compute[250018]:       <target dev="sda" bus="sata"/>
Jan 20 14:51:46 compute-0 nova_compute[250018]:     </disk>
Jan 20 14:51:46 compute-0 nova_compute[250018]:     <serial type="pty">
Jan 20 14:51:46 compute-0 nova_compute[250018]:       <log file="/var/lib/nova/instances/8392c22f-8472-42e6-bd6b-68724d9e0734/console.log" append="off"/>
Jan 20 14:51:46 compute-0 nova_compute[250018]:     </serial>
Jan 20 14:51:46 compute-0 nova_compute[250018]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 14:51:46 compute-0 nova_compute[250018]:     <video>
Jan 20 14:51:46 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 14:51:46 compute-0 nova_compute[250018]:     </video>
Jan 20 14:51:46 compute-0 nova_compute[250018]:     <input type="tablet" bus="usb"/>
Jan 20 14:51:46 compute-0 nova_compute[250018]:     <rng model="virtio">
Jan 20 14:51:46 compute-0 nova_compute[250018]:       <backend model="random">/dev/urandom</backend>
Jan 20 14:51:46 compute-0 nova_compute[250018]:     </rng>
Jan 20 14:51:46 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root"/>
Jan 20 14:51:46 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:51:46 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:51:46 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:51:46 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:51:46 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:51:46 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:51:46 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:51:46 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:51:46 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:51:46 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:51:46 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:51:46 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:51:46 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:51:46 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:51:46 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:51:46 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:51:46 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:51:46 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:51:46 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:51:46 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:51:46 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:51:46 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:51:46 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:51:46 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:51:46 compute-0 nova_compute[250018]:     <controller type="usb" index="0"/>
Jan 20 14:51:46 compute-0 nova_compute[250018]:     <memballoon model="virtio">
Jan 20 14:51:46 compute-0 nova_compute[250018]:       <stats period="10"/>
Jan 20 14:51:46 compute-0 nova_compute[250018]:     </memballoon>
Jan 20 14:51:46 compute-0 nova_compute[250018]:   </devices>
Jan 20 14:51:46 compute-0 nova_compute[250018]: </domain>
Jan 20 14:51:46 compute-0 nova_compute[250018]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 14:51:46 compute-0 nova_compute[250018]: 2026-01-20 14:51:46.453 250022 DEBUG nova.virt.libvirt.driver [None req-e87dc67e-60c5-43f4-9eec-2056a781204f cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:51:46 compute-0 nova_compute[250018]: 2026-01-20 14:51:46.454 250022 DEBUG nova.virt.libvirt.driver [None req-e87dc67e-60c5-43f4-9eec-2056a781204f cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:51:46 compute-0 nova_compute[250018]: 2026-01-20 14:51:46.454 250022 INFO nova.virt.libvirt.driver [None req-e87dc67e-60c5-43f4-9eec-2056a781204f cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] [instance: 8392c22f-8472-42e6-bd6b-68724d9e0734] Using config drive
Jan 20 14:51:46 compute-0 nova_compute[250018]: 2026-01-20 14:51:46.482 250022 DEBUG nova.storage.rbd_utils [None req-e87dc67e-60c5-43f4-9eec-2056a781204f cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] rbd image 8392c22f-8472-42e6-bd6b-68724d9e0734_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:51:46 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3622414733' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:51:46 compute-0 ceph-mon[74360]: pgmap v1931: 321 pgs: 321 active+clean; 331 MiB data, 989 MiB used, 20 GiB / 21 GiB avail; 494 KiB/s rd, 4.2 MiB/s wr, 124 op/s
Jan 20 14:51:46 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/2312341428' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:51:46 compute-0 nova_compute[250018]: 2026-01-20 14:51:46.735 250022 INFO nova.virt.libvirt.driver [None req-e87dc67e-60c5-43f4-9eec-2056a781204f cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] [instance: 8392c22f-8472-42e6-bd6b-68724d9e0734] Creating config drive at /var/lib/nova/instances/8392c22f-8472-42e6-bd6b-68724d9e0734/disk.config
Jan 20 14:51:46 compute-0 nova_compute[250018]: 2026-01-20 14:51:46.744 250022 DEBUG oslo_concurrency.processutils [None req-e87dc67e-60c5-43f4-9eec-2056a781204f cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8392c22f-8472-42e6-bd6b-68724d9e0734/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmptwora8u0 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:51:46 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:51:46 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:51:46 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:51:46.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:51:46 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:51:46 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:51:46 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:51:46.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:51:46 compute-0 nova_compute[250018]: 2026-01-20 14:51:46.903 250022 DEBUG oslo_concurrency.processutils [None req-e87dc67e-60c5-43f4-9eec-2056a781204f cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8392c22f-8472-42e6-bd6b-68724d9e0734/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmptwora8u0" returned: 0 in 0.158s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:51:46 compute-0 nova_compute[250018]: 2026-01-20 14:51:46.931 250022 DEBUG nova.storage.rbd_utils [None req-e87dc67e-60c5-43f4-9eec-2056a781204f cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] rbd image 8392c22f-8472-42e6-bd6b-68724d9e0734_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:51:46 compute-0 nova_compute[250018]: 2026-01-20 14:51:46.935 250022 DEBUG oslo_concurrency.processutils [None req-e87dc67e-60c5-43f4-9eec-2056a781204f cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8392c22f-8472-42e6-bd6b-68724d9e0734/disk.config 8392c22f-8472-42e6-bd6b-68724d9e0734_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:51:47 compute-0 nova_compute[250018]: 2026-01-20 14:51:47.108 250022 DEBUG oslo_concurrency.processutils [None req-e87dc67e-60c5-43f4-9eec-2056a781204f cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8392c22f-8472-42e6-bd6b-68724d9e0734/disk.config 8392c22f-8472-42e6-bd6b-68724d9e0734_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.173s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:51:47 compute-0 nova_compute[250018]: 2026-01-20 14:51:47.109 250022 INFO nova.virt.libvirt.driver [None req-e87dc67e-60c5-43f4-9eec-2056a781204f cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] [instance: 8392c22f-8472-42e6-bd6b-68724d9e0734] Deleting local config drive /var/lib/nova/instances/8392c22f-8472-42e6-bd6b-68724d9e0734/disk.config because it was imported into RBD.
Jan 20 14:51:47 compute-0 systemd-machined[216401]: New machine qemu-53-instance-00000073.
Jan 20 14:51:47 compute-0 systemd[1]: Started Virtual Machine qemu-53-instance-00000073.
Jan 20 14:51:47 compute-0 nova_compute[250018]: 2026-01-20 14:51:47.607 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768920707.6069796, 8392c22f-8472-42e6-bd6b-68724d9e0734 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:51:47 compute-0 nova_compute[250018]: 2026-01-20 14:51:47.608 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 8392c22f-8472-42e6-bd6b-68724d9e0734] VM Resumed (Lifecycle Event)
Jan 20 14:51:47 compute-0 nova_compute[250018]: 2026-01-20 14:51:47.611 250022 DEBUG nova.compute.manager [None req-e87dc67e-60c5-43f4-9eec-2056a781204f cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] [instance: 8392c22f-8472-42e6-bd6b-68724d9e0734] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 14:51:47 compute-0 nova_compute[250018]: 2026-01-20 14:51:47.612 250022 DEBUG nova.virt.libvirt.driver [None req-e87dc67e-60c5-43f4-9eec-2056a781204f cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] [instance: 8392c22f-8472-42e6-bd6b-68724d9e0734] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 14:51:47 compute-0 nova_compute[250018]: 2026-01-20 14:51:47.615 250022 INFO nova.virt.libvirt.driver [-] [instance: 8392c22f-8472-42e6-bd6b-68724d9e0734] Instance spawned successfully.
Jan 20 14:51:47 compute-0 nova_compute[250018]: 2026-01-20 14:51:47.615 250022 DEBUG nova.virt.libvirt.driver [None req-e87dc67e-60c5-43f4-9eec-2056a781204f cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] [instance: 8392c22f-8472-42e6-bd6b-68724d9e0734] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 14:51:47 compute-0 nova_compute[250018]: 2026-01-20 14:51:47.640 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 8392c22f-8472-42e6-bd6b-68724d9e0734] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:51:47 compute-0 nova_compute[250018]: 2026-01-20 14:51:47.649 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 8392c22f-8472-42e6-bd6b-68724d9e0734] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:51:47 compute-0 nova_compute[250018]: 2026-01-20 14:51:47.652 250022 DEBUG nova.virt.libvirt.driver [None req-e87dc67e-60c5-43f4-9eec-2056a781204f cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] [instance: 8392c22f-8472-42e6-bd6b-68724d9e0734] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:51:47 compute-0 nova_compute[250018]: 2026-01-20 14:51:47.652 250022 DEBUG nova.virt.libvirt.driver [None req-e87dc67e-60c5-43f4-9eec-2056a781204f cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] [instance: 8392c22f-8472-42e6-bd6b-68724d9e0734] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:51:47 compute-0 nova_compute[250018]: 2026-01-20 14:51:47.652 250022 DEBUG nova.virt.libvirt.driver [None req-e87dc67e-60c5-43f4-9eec-2056a781204f cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] [instance: 8392c22f-8472-42e6-bd6b-68724d9e0734] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:51:47 compute-0 nova_compute[250018]: 2026-01-20 14:51:47.653 250022 DEBUG nova.virt.libvirt.driver [None req-e87dc67e-60c5-43f4-9eec-2056a781204f cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] [instance: 8392c22f-8472-42e6-bd6b-68724d9e0734] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:51:47 compute-0 nova_compute[250018]: 2026-01-20 14:51:47.653 250022 DEBUG nova.virt.libvirt.driver [None req-e87dc67e-60c5-43f4-9eec-2056a781204f cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] [instance: 8392c22f-8472-42e6-bd6b-68724d9e0734] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:51:47 compute-0 nova_compute[250018]: 2026-01-20 14:51:47.653 250022 DEBUG nova.virt.libvirt.driver [None req-e87dc67e-60c5-43f4-9eec-2056a781204f cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] [instance: 8392c22f-8472-42e6-bd6b-68724d9e0734] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:51:47 compute-0 nova_compute[250018]: 2026-01-20 14:51:47.687 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 8392c22f-8472-42e6-bd6b-68724d9e0734] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:51:47 compute-0 nova_compute[250018]: 2026-01-20 14:51:47.687 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768920707.6089232, 8392c22f-8472-42e6-bd6b-68724d9e0734 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:51:47 compute-0 nova_compute[250018]: 2026-01-20 14:51:47.687 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 8392c22f-8472-42e6-bd6b-68724d9e0734] VM Started (Lifecycle Event)
Jan 20 14:51:47 compute-0 nova_compute[250018]: 2026-01-20 14:51:47.713 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 8392c22f-8472-42e6-bd6b-68724d9e0734] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:51:47 compute-0 nova_compute[250018]: 2026-01-20 14:51:47.716 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 8392c22f-8472-42e6-bd6b-68724d9e0734] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:51:47 compute-0 nova_compute[250018]: 2026-01-20 14:51:47.738 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 8392c22f-8472-42e6-bd6b-68724d9e0734] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:51:47 compute-0 nova_compute[250018]: 2026-01-20 14:51:47.744 250022 INFO nova.compute.manager [None req-e87dc67e-60c5-43f4-9eec-2056a781204f cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] [instance: 8392c22f-8472-42e6-bd6b-68724d9e0734] Took 3.02 seconds to spawn the instance on the hypervisor.
Jan 20 14:51:47 compute-0 nova_compute[250018]: 2026-01-20 14:51:47.744 250022 DEBUG nova.compute.manager [None req-e87dc67e-60c5-43f4-9eec-2056a781204f cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] [instance: 8392c22f-8472-42e6-bd6b-68724d9e0734] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:51:47 compute-0 nova_compute[250018]: 2026-01-20 14:51:47.811 250022 INFO nova.compute.manager [None req-e87dc67e-60c5-43f4-9eec-2056a781204f cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] [instance: 8392c22f-8472-42e6-bd6b-68724d9e0734] Took 4.04 seconds to build instance.
Jan 20 14:51:47 compute-0 nova_compute[250018]: 2026-01-20 14:51:47.826 250022 DEBUG oslo_concurrency.lockutils [None req-e87dc67e-60c5-43f4-9eec-2056a781204f cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] Lock "8392c22f-8472-42e6-bd6b-68724d9e0734" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.121s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:51:47 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1932: 321 pgs: 321 active+clean; 356 MiB data, 1002 MiB used, 20 GiB / 21 GiB avail; 352 KiB/s rd, 4.7 MiB/s wr, 143 op/s
Jan 20 14:51:48 compute-0 nova_compute[250018]: 2026-01-20 14:51:48.107 250022 DEBUG nova.network.neutron [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] Updating instance_info_cache with network_info: [{"id": "7869c4f4-4542-4bf0-aebd-eeb2d3ce94f9", "address": "fa:16:3e:7f:92:68", "network": {"id": "79184781-1f23-4584-87de-08e262242488", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-165460946-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a29915e0dd2403fbd7b7e847696b00a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7869c4f4-45", "ovs_interfaceid": "7869c4f4-4542-4bf0-aebd-eeb2d3ce94f9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:51:48 compute-0 nova_compute[250018]: 2026-01-20 14:51:48.122 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Releasing lock "refresh_cache-c3b4d4c6-c42f-4abc-9c01-89ec3e10c677" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:51:48 compute-0 nova_compute[250018]: 2026-01-20 14:51:48.122 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 20 14:51:48 compute-0 nova_compute[250018]: 2026-01-20 14:51:48.267 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:51:48 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:51:48 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:51:48 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:51:48.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:51:48 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:51:48 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:51:48 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:51:48.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:51:49 compute-0 ceph-mon[74360]: pgmap v1932: 321 pgs: 321 active+clean; 356 MiB data, 1002 MiB used, 20 GiB / 21 GiB avail; 352 KiB/s rd, 4.7 MiB/s wr, 143 op/s
Jan 20 14:51:49 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e261 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:51:49 compute-0 nova_compute[250018]: 2026-01-20 14:51:49.784 250022 INFO nova.compute.manager [None req-42aebf58-38d3-43ca-a586-afbe64386435 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] [instance: 8392c22f-8472-42e6-bd6b-68724d9e0734] Rebuilding instance
Jan 20 14:51:49 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1933: 321 pgs: 321 active+clean; 375 MiB data, 1009 MiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 4.6 MiB/s wr, 172 op/s
Jan 20 14:51:50 compute-0 ceph-mon[74360]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #93. Immutable memtables: 0.
Jan 20 14:51:50 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:51:50.021903) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 14:51:50 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:856] [default] [JOB 53] Flushing memtable with next log file: 93
Jan 20 14:51:50 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768920710022010, "job": 53, "event": "flush_started", "num_memtables": 1, "num_entries": 2256, "num_deletes": 258, "total_data_size": 3702200, "memory_usage": 3766032, "flush_reason": "Manual Compaction"}
Jan 20 14:51:50 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:885] [default] [JOB 53] Level-0 flush table #94: started
Jan 20 14:51:50 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768920710045234, "cf_name": "default", "job": 53, "event": "table_file_creation", "file_number": 94, "file_size": 3643008, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 41567, "largest_seqno": 43821, "table_properties": {"data_size": 3632724, "index_size": 6522, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2693, "raw_key_size": 22189, "raw_average_key_size": 21, "raw_value_size": 3611937, "raw_average_value_size": 3430, "num_data_blocks": 281, "num_entries": 1053, "num_filter_entries": 1053, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768920522, "oldest_key_time": 1768920522, "file_creation_time": 1768920710, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1688fb6f-f397-4aac-a0e8-874ff91d97a7", "db_session_id": "5V3N2TVXYZBCXP55EZHK", "orig_file_number": 94, "seqno_to_time_mapping": "N/A"}}
Jan 20 14:51:50 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 53] Flush lasted 23370 microseconds, and 8052 cpu microseconds.
Jan 20 14:51:50 compute-0 ceph-mon[74360]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 14:51:50 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:51:50.045293) [db/flush_job.cc:967] [default] [JOB 53] Level-0 flush table #94: 3643008 bytes OK
Jan 20 14:51:50 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:51:50.045319) [db/memtable_list.cc:519] [default] Level-0 commit table #94 started
Jan 20 14:51:50 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:51:50.047131) [db/memtable_list.cc:722] [default] Level-0 commit table #94: memtable #1 done
Jan 20 14:51:50 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:51:50.047161) EVENT_LOG_v1 {"time_micros": 1768920710047152, "job": 53, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 14:51:50 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:51:50.047190) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 14:51:50 compute-0 ceph-mon[74360]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 53] Try to delete WAL files size 3692870, prev total WAL file size 3692870, number of live WAL files 2.
Jan 20 14:51:50 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000090.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:51:50 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:51:50.048553) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033373635' seq:72057594037927935, type:22 .. '7061786F730034303137' seq:0, type:0; will stop at (end)
Jan 20 14:51:50 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 54] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 14:51:50 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 53 Base level 0, inputs: [94(3557KB)], [92(8750KB)]
Jan 20 14:51:50 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768920710048663, "job": 54, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [94], "files_L6": [92], "score": -1, "input_data_size": 12603575, "oldest_snapshot_seqno": -1}
Jan 20 14:51:50 compute-0 nova_compute[250018]: 2026-01-20 14:51:50.093 250022 DEBUG nova.objects.instance [None req-42aebf58-38d3-43ca-a586-afbe64386435 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 8392c22f-8472-42e6-bd6b-68724d9e0734 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:51:50 compute-0 nova_compute[250018]: 2026-01-20 14:51:50.121 250022 DEBUG nova.compute.manager [None req-42aebf58-38d3-43ca-a586-afbe64386435 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] [instance: 8392c22f-8472-42e6-bd6b-68724d9e0734] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:51:50 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 54] Generated table #95: 7209 keys, 10794786 bytes, temperature: kUnknown
Jan 20 14:51:50 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768920710128175, "cf_name": "default", "job": 54, "event": "table_file_creation", "file_number": 95, "file_size": 10794786, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10746895, "index_size": 28771, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18053, "raw_key_size": 185828, "raw_average_key_size": 25, "raw_value_size": 10618348, "raw_average_value_size": 1472, "num_data_blocks": 1139, "num_entries": 7209, "num_filter_entries": 7209, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768917305, "oldest_key_time": 0, "file_creation_time": 1768920710, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1688fb6f-f397-4aac-a0e8-874ff91d97a7", "db_session_id": "5V3N2TVXYZBCXP55EZHK", "orig_file_number": 95, "seqno_to_time_mapping": "N/A"}}
Jan 20 14:51:50 compute-0 ceph-mon[74360]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 14:51:50 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:51:50.128572) [db/compaction/compaction_job.cc:1663] [default] [JOB 54] Compacted 1@0 + 1@6 files to L6 => 10794786 bytes
Jan 20 14:51:50 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:51:50.130450) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 158.3 rd, 135.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.5, 8.5 +0.0 blob) out(10.3 +0.0 blob), read-write-amplify(6.4) write-amplify(3.0) OK, records in: 7740, records dropped: 531 output_compression: NoCompression
Jan 20 14:51:50 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:51:50.130490) EVENT_LOG_v1 {"time_micros": 1768920710130471, "job": 54, "event": "compaction_finished", "compaction_time_micros": 79603, "compaction_time_cpu_micros": 48964, "output_level": 6, "num_output_files": 1, "total_output_size": 10794786, "num_input_records": 7740, "num_output_records": 7209, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 14:51:50 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000094.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:51:50 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768920710132102, "job": 54, "event": "table_file_deletion", "file_number": 94}
Jan 20 14:51:50 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000092.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:51:50 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768920710136099, "job": 54, "event": "table_file_deletion", "file_number": 92}
Jan 20 14:51:50 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:51:50.048312) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:51:50 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:51:50.136170) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:51:50 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:51:50.136175) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:51:50 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:51:50.136177) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:51:50 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:51:50.136178) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:51:50 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:51:50.136180) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:51:50 compute-0 nova_compute[250018]: 2026-01-20 14:51:50.172 250022 DEBUG nova.objects.instance [None req-42aebf58-38d3-43ca-a586-afbe64386435 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] Lazy-loading 'pci_requests' on Instance uuid 8392c22f-8472-42e6-bd6b-68724d9e0734 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:51:50 compute-0 nova_compute[250018]: 2026-01-20 14:51:50.185 250022 DEBUG nova.objects.instance [None req-42aebf58-38d3-43ca-a586-afbe64386435 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] Lazy-loading 'pci_devices' on Instance uuid 8392c22f-8472-42e6-bd6b-68724d9e0734 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:51:50 compute-0 nova_compute[250018]: 2026-01-20 14:51:50.197 250022 DEBUG nova.objects.instance [None req-42aebf58-38d3-43ca-a586-afbe64386435 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] Lazy-loading 'resources' on Instance uuid 8392c22f-8472-42e6-bd6b-68724d9e0734 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:51:50 compute-0 nova_compute[250018]: 2026-01-20 14:51:50.208 250022 DEBUG nova.objects.instance [None req-42aebf58-38d3-43ca-a586-afbe64386435 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] Lazy-loading 'migration_context' on Instance uuid 8392c22f-8472-42e6-bd6b-68724d9e0734 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:51:50 compute-0 nova_compute[250018]: 2026-01-20 14:51:50.225 250022 DEBUG nova.objects.instance [None req-42aebf58-38d3-43ca-a586-afbe64386435 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] [instance: 8392c22f-8472-42e6-bd6b-68724d9e0734] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Jan 20 14:51:50 compute-0 nova_compute[250018]: 2026-01-20 14:51:50.228 250022 DEBUG nova.virt.libvirt.driver [None req-42aebf58-38d3-43ca-a586-afbe64386435 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] [instance: 8392c22f-8472-42e6-bd6b-68724d9e0734] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Jan 20 14:51:50 compute-0 nova_compute[250018]: 2026-01-20 14:51:50.459 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:51:50 compute-0 sshd-session[317084]: Invalid user postgres from 157.245.78.139 port 55690
Jan 20 14:51:50 compute-0 sudo[317086]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:51:50 compute-0 sudo[317086]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:51:50 compute-0 sudo[317086]: pam_unix(sudo:session): session closed for user root
Jan 20 14:51:50 compute-0 sshd-session[317084]: Connection closed by invalid user postgres 157.245.78.139 port 55690 [preauth]
Jan 20 14:51:50 compute-0 sudo[317111]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:51:50 compute-0 sudo[317111]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:51:50 compute-0 sudo[317111]: pam_unix(sudo:session): session closed for user root
Jan 20 14:51:50 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:51:50 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:51:50 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:51:50.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:51:50 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:51:50 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:51:50 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:51:50.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:51:51 compute-0 ceph-mon[74360]: pgmap v1933: 321 pgs: 321 active+clean; 375 MiB data, 1009 MiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 4.6 MiB/s wr, 172 op/s
Jan 20 14:51:51 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1934: 321 pgs: 321 active+clean; 387 MiB data, 1016 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 4.5 MiB/s wr, 234 op/s
Jan 20 14:51:52 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/2764376824' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:51:52 compute-0 sudo[317137]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:51:52 compute-0 sudo[317137]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:51:52 compute-0 sudo[317137]: pam_unix(sudo:session): session closed for user root
Jan 20 14:51:52 compute-0 sudo[317162]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:51:52 compute-0 sudo[317162]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:51:52 compute-0 sudo[317162]: pam_unix(sudo:session): session closed for user root
Jan 20 14:51:52 compute-0 sudo[317187]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:51:52 compute-0 sudo[317187]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:51:52 compute-0 sudo[317187]: pam_unix(sudo:session): session closed for user root
Jan 20 14:51:52 compute-0 sudo[317212]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Jan 20 14:51:52 compute-0 sudo[317212]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:51:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:51:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:51:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:51:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:51:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:51:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:51:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Optimize plan auto_2026-01-20_14:51:52
Jan 20 14:51:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 14:51:52 compute-0 ceph-mgr[74653]: [balancer INFO root] do_upmap
Jan 20 14:51:52 compute-0 ceph-mgr[74653]: [balancer INFO root] pools ['.mgr', 'images', '.rgw.root', 'vms', 'backups', 'default.rgw.meta', 'volumes', 'cephfs.cephfs.data', 'default.rgw.log', 'cephfs.cephfs.meta', 'default.rgw.control']
Jan 20 14:51:52 compute-0 ceph-mgr[74653]: [balancer INFO root] prepared 0/10 changes
Jan 20 14:51:52 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:51:52 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:51:52 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:51:52.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:51:52 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:51:52 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:51:52 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:51:52.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:51:52 compute-0 podman[317309]: 2026-01-20 14:51:52.974227817 +0000 UTC m=+0.065189295 container exec a602f19ce9ef2d4922e558036fcbd51fac25abd19d28d7df857e5fbe8f959ba3 (image=quay.io/ceph/ceph:v18, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:51:53 compute-0 ceph-mon[74360]: pgmap v1934: 321 pgs: 321 active+clean; 387 MiB data, 1016 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 4.5 MiB/s wr, 234 op/s
Jan 20 14:51:53 compute-0 podman[317309]: 2026-01-20 14:51:53.068485914 +0000 UTC m=+0.159447432 container exec_died a602f19ce9ef2d4922e558036fcbd51fac25abd19d28d7df857e5fbe8f959ba3 (image=quay.io/ceph/ceph:v18, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef)
Jan 20 14:51:53 compute-0 nova_compute[250018]: 2026-01-20 14:51:53.268 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:51:53 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 20 14:51:53 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:51:53 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 20 14:51:53 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:51:53 compute-0 podman[317466]: 2026-01-20 14:51:53.684368009 +0000 UTC m=+0.058233508 container exec a2973a514c852ff316e6ad2ff84585210b4ad01d86cdf2de06f634d9390a88e8 (image=quay.io/ceph/haproxy:2.3, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-haproxy-rgw-default-compute-0-nqkboe)
Jan 20 14:51:53 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 20 14:51:53 compute-0 podman[317488]: 2026-01-20 14:51:53.745521124 +0000 UTC m=+0.053540441 container exec_died a2973a514c852ff316e6ad2ff84585210b4ad01d86cdf2de06f634d9390a88e8 (image=quay.io/ceph/haproxy:2.3, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-haproxy-rgw-default-compute-0-nqkboe)
Jan 20 14:51:53 compute-0 podman[317466]: 2026-01-20 14:51:53.751707721 +0000 UTC m=+0.125573200 container exec_died a2973a514c852ff316e6ad2ff84585210b4ad01d86cdf2de06f634d9390a88e8 (image=quay.io/ceph/haproxy:2.3, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-haproxy-rgw-default-compute-0-nqkboe)
Jan 20 14:51:53 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:51:53 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 20 14:51:53 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:51:53 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1935: 321 pgs: 321 active+clean; 387 MiB data, 1020 MiB used, 20 GiB / 21 GiB avail; 5.0 MiB/s rd, 3.6 MiB/s wr, 299 op/s
Jan 20 14:51:53 compute-0 podman[317532]: 2026-01-20 14:51:53.943220474 +0000 UTC m=+0.055278848 container exec 0c2042652fe8d88ae47b6333c235a533de4d966f44da3b69a5fc82baeacb10bf (image=quay.io/ceph/keepalived:2.2.4, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-keepalived-rgw-default-compute-0-gcjsxe, build-date=2023-02-22T09:23:20, com.redhat.component=keepalived-container, io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, release=1793, summary=Provides keepalived on RHEL 9 for Ceph., architecture=x86_64, description=keepalived for Ceph, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.buildah.version=1.28.2, version=2.2.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git)
Jan 20 14:51:53 compute-0 podman[317532]: 2026-01-20 14:51:53.955120245 +0000 UTC m=+0.067178609 container exec_died 0c2042652fe8d88ae47b6333c235a533de4d966f44da3b69a5fc82baeacb10bf (image=quay.io/ceph/keepalived:2.2.4, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-keepalived-rgw-default-compute-0-gcjsxe, name=keepalived, description=keepalived for Ceph, version=2.2.4, io.buildah.version=1.28.2, com.redhat.component=keepalived-container, summary=Provides keepalived on RHEL 9 for Ceph., architecture=x86_64, vendor=Red Hat, Inc., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, build-date=2023-02-22T09:23:20, distribution-scope=public, release=1793, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Keepalived on RHEL 9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.tags=Ceph keepalived)
Jan 20 14:51:53 compute-0 sudo[317212]: pam_unix(sudo:session): session closed for user root
Jan 20 14:51:54 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 14:51:54 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:51:54 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 14:51:54 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:51:54 compute-0 sudo[317565]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:51:54 compute-0 sudo[317565]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:51:54 compute-0 sudo[317565]: pam_unix(sudo:session): session closed for user root
Jan 20 14:51:54 compute-0 sudo[317590]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:51:54 compute-0 sudo[317590]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:51:54 compute-0 sudo[317590]: pam_unix(sudo:session): session closed for user root
Jan 20 14:51:54 compute-0 sudo[317615]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:51:54 compute-0 sudo[317615]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:51:54 compute-0 sudo[317615]: pam_unix(sudo:session): session closed for user root
Jan 20 14:51:54 compute-0 sudo[317640]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 14:51:54 compute-0 sudo[317640]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:51:54 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:51:54 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:51:54 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:51:54 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:51:54 compute-0 ceph-mon[74360]: pgmap v1935: 321 pgs: 321 active+clean; 387 MiB data, 1020 MiB used, 20 GiB / 21 GiB avail; 5.0 MiB/s rd, 3.6 MiB/s wr, 299 op/s
Jan 20 14:51:54 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:51:54 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:51:54 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e261 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:51:54 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:51:54 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:51:54 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:51:54.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:51:54 compute-0 sudo[317640]: pam_unix(sudo:session): session closed for user root
Jan 20 14:51:54 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 14:51:54 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:51:54 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 20 14:51:54 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:51:54 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 20 14:51:54 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:51:54 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 2dfb3a27-ce74-4817-be4e-6c8492ba4ca6 does not exist
Jan 20 14:51:54 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 7ee2a8d6-f0f5-4f07-8306-c7206be0b3d2 does not exist
Jan 20 14:51:54 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 0c7f9f3c-6a2d-4ae5-8003-8f353f05af24 does not exist
Jan 20 14:51:54 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 20 14:51:54 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:51:54 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 20 14:51:54 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:51:54 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 14:51:54 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:51:54 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:51:54 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:51:54 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:51:54.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:51:54 compute-0 sudo[317695]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:51:54 compute-0 sudo[317695]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:51:54 compute-0 sudo[317695]: pam_unix(sudo:session): session closed for user root
Jan 20 14:51:55 compute-0 sudo[317721]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:51:55 compute-0 sudo[317721]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:51:55 compute-0 sudo[317721]: pam_unix(sudo:session): session closed for user root
Jan 20 14:51:55 compute-0 sudo[317746]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:51:55 compute-0 sudo[317746]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:51:55 compute-0 sudo[317746]: pam_unix(sudo:session): session closed for user root
Jan 20 14:51:55 compute-0 sudo[317771]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 14:51:55 compute-0 sudo[317771]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:51:55 compute-0 nova_compute[250018]: 2026-01-20 14:51:55.459 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:51:55 compute-0 podman[317835]: 2026-01-20 14:51:55.575167414 +0000 UTC m=+0.045966318 container create 963c7dd0df6d78ec607cb6d1ae1c118a35740ce5693df4814d326a8b18c4794d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_swirles, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 20 14:51:55 compute-0 systemd[1]: Started libpod-conmon-963c7dd0df6d78ec607cb6d1ae1c118a35740ce5693df4814d326a8b18c4794d.scope.
Jan 20 14:51:55 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:51:55 compute-0 podman[317835]: 2026-01-20 14:51:55.55608467 +0000 UTC m=+0.026883594 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:51:55 compute-0 podman[317835]: 2026-01-20 14:51:55.657013948 +0000 UTC m=+0.127812872 container init 963c7dd0df6d78ec607cb6d1ae1c118a35740ce5693df4814d326a8b18c4794d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_swirles, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:51:55 compute-0 podman[317835]: 2026-01-20 14:51:55.664970651 +0000 UTC m=+0.135769565 container start 963c7dd0df6d78ec607cb6d1ae1c118a35740ce5693df4814d326a8b18c4794d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_swirles, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 14:51:55 compute-0 podman[317835]: 2026-01-20 14:51:55.668603489 +0000 UTC m=+0.139402413 container attach 963c7dd0df6d78ec607cb6d1ae1c118a35740ce5693df4814d326a8b18c4794d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_swirles, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:51:55 compute-0 beautiful_swirles[317852]: 167 167
Jan 20 14:51:55 compute-0 systemd[1]: libpod-963c7dd0df6d78ec607cb6d1ae1c118a35740ce5693df4814d326a8b18c4794d.scope: Deactivated successfully.
Jan 20 14:51:55 compute-0 podman[317835]: 2026-01-20 14:51:55.671413425 +0000 UTC m=+0.142212329 container died 963c7dd0df6d78ec607cb6d1ae1c118a35740ce5693df4814d326a8b18c4794d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_swirles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:51:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-27c2becf9f171825811f0a8bba6c9766f086eb9a0c0d4ef98ace9a0f2ed964d0-merged.mount: Deactivated successfully.
Jan 20 14:51:55 compute-0 podman[317835]: 2026-01-20 14:51:55.716928639 +0000 UTC m=+0.187727543 container remove 963c7dd0df6d78ec607cb6d1ae1c118a35740ce5693df4814d326a8b18c4794d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_swirles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 20 14:51:55 compute-0 systemd[1]: libpod-conmon-963c7dd0df6d78ec607cb6d1ae1c118a35740ce5693df4814d326a8b18c4794d.scope: Deactivated successfully.
Jan 20 14:51:55 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/995231318' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:51:55 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:51:55 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:51:55 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:51:55 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:51:55 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:51:55 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:51:55 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1936: 321 pgs: 321 active+clean; 421 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 4.9 MiB/s wr, 329 op/s
Jan 20 14:51:55 compute-0 podman[317876]: 2026-01-20 14:51:55.931129224 +0000 UTC m=+0.064678202 container create 48366156f1ade46fb8510ce0c99bd201bf9cf34bcd5f4a07f9c3f147cdea0a56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_shamir, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef)
Jan 20 14:51:55 compute-0 systemd[1]: Started libpod-conmon-48366156f1ade46fb8510ce0c99bd201bf9cf34bcd5f4a07f9c3f147cdea0a56.scope.
Jan 20 14:51:55 compute-0 podman[317876]: 2026-01-20 14:51:55.904110557 +0000 UTC m=+0.037659625 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:51:56 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:51:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21a6081b54c562d6cd2c30b1a4962ea814cd76de58faca2fbe0a9685dc8dcd52/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:51:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21a6081b54c562d6cd2c30b1a4962ea814cd76de58faca2fbe0a9685dc8dcd52/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:51:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21a6081b54c562d6cd2c30b1a4962ea814cd76de58faca2fbe0a9685dc8dcd52/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:51:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21a6081b54c562d6cd2c30b1a4962ea814cd76de58faca2fbe0a9685dc8dcd52/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:51:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21a6081b54c562d6cd2c30b1a4962ea814cd76de58faca2fbe0a9685dc8dcd52/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 14:51:56 compute-0 podman[317876]: 2026-01-20 14:51:56.023534971 +0000 UTC m=+0.157083969 container init 48366156f1ade46fb8510ce0c99bd201bf9cf34bcd5f4a07f9c3f147cdea0a56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_shamir, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:51:56 compute-0 podman[317876]: 2026-01-20 14:51:56.032719518 +0000 UTC m=+0.166268486 container start 48366156f1ade46fb8510ce0c99bd201bf9cf34bcd5f4a07f9c3f147cdea0a56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_shamir, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 20 14:51:56 compute-0 podman[317876]: 2026-01-20 14:51:56.036234852 +0000 UTC m=+0.169783880 container attach 48366156f1ade46fb8510ce0c99bd201bf9cf34bcd5f4a07f9c3f147cdea0a56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_shamir, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:51:56 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:51:56 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1396617212' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:51:56 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:51:56 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:51:56 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:51:56.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:51:56 compute-0 ceph-mon[74360]: pgmap v1936: 321 pgs: 321 active+clean; 421 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 4.9 MiB/s wr, 329 op/s
Jan 20 14:51:56 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/1396617212' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:51:56 compute-0 naughty_shamir[317893]: --> passed data devices: 0 physical, 1 LVM
Jan 20 14:51:56 compute-0 naughty_shamir[317893]: --> relative data size: 1.0
Jan 20 14:51:56 compute-0 naughty_shamir[317893]: --> All data devices are unavailable
Jan 20 14:51:56 compute-0 systemd[1]: libpod-48366156f1ade46fb8510ce0c99bd201bf9cf34bcd5f4a07f9c3f147cdea0a56.scope: Deactivated successfully.
Jan 20 14:51:56 compute-0 conmon[317893]: conmon 48366156f1ade46fb851 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-48366156f1ade46fb8510ce0c99bd201bf9cf34bcd5f4a07f9c3f147cdea0a56.scope/container/memory.events
Jan 20 14:51:56 compute-0 podman[317876]: 2026-01-20 14:51:56.84691127 +0000 UTC m=+0.980460238 container died 48366156f1ade46fb8510ce0c99bd201bf9cf34bcd5f4a07f9c3f147cdea0a56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_shamir, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 20 14:51:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-21a6081b54c562d6cd2c30b1a4962ea814cd76de58faca2fbe0a9685dc8dcd52-merged.mount: Deactivated successfully.
Jan 20 14:51:56 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:51:56 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:51:56 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:51:56.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:51:56 compute-0 podman[317876]: 2026-01-20 14:51:56.899163846 +0000 UTC m=+1.032712814 container remove 48366156f1ade46fb8510ce0c99bd201bf9cf34bcd5f4a07f9c3f147cdea0a56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_shamir, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0)
Jan 20 14:51:56 compute-0 systemd[1]: libpod-conmon-48366156f1ade46fb8510ce0c99bd201bf9cf34bcd5f4a07f9c3f147cdea0a56.scope: Deactivated successfully.
Jan 20 14:51:56 compute-0 sudo[317771]: pam_unix(sudo:session): session closed for user root
Jan 20 14:51:56 compute-0 sudo[317920]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:51:57 compute-0 sudo[317920]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:51:57 compute-0 sudo[317920]: pam_unix(sudo:session): session closed for user root
Jan 20 14:51:57 compute-0 sudo[317945]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:51:57 compute-0 sudo[317945]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:51:57 compute-0 sudo[317945]: pam_unix(sudo:session): session closed for user root
Jan 20 14:51:57 compute-0 sudo[317970]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:51:57 compute-0 sudo[317970]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:51:57 compute-0 sudo[317970]: pam_unix(sudo:session): session closed for user root
Jan 20 14:51:57 compute-0 sudo[317995]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- lvm list --format json
Jan 20 14:51:57 compute-0 sudo[317995]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:51:57 compute-0 podman[318061]: 2026-01-20 14:51:57.515466912 +0000 UTC m=+0.038018884 container create 5129ed99b2559b875f2bb5ac3db81d787add11c11c4a56fd3c88943d39ddea3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_agnesi, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 14:51:57 compute-0 systemd[1]: Started libpod-conmon-5129ed99b2559b875f2bb5ac3db81d787add11c11c4a56fd3c88943d39ddea3b.scope.
Jan 20 14:51:57 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:51:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 14:51:57 compute-0 podman[318061]: 2026-01-20 14:51:57.588750754 +0000 UTC m=+0.111302756 container init 5129ed99b2559b875f2bb5ac3db81d787add11c11c4a56fd3c88943d39ddea3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_agnesi, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Jan 20 14:51:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 14:51:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:51:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:51:57 compute-0 podman[318061]: 2026-01-20 14:51:57.498035193 +0000 UTC m=+0.020587235 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:51:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:51:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:51:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:51:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:51:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:51:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:51:57 compute-0 podman[318061]: 2026-01-20 14:51:57.596117912 +0000 UTC m=+0.118669894 container start 5129ed99b2559b875f2bb5ac3db81d787add11c11c4a56fd3c88943d39ddea3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_agnesi, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:51:57 compute-0 silly_agnesi[318078]: 167 167
Jan 20 14:51:57 compute-0 systemd[1]: libpod-5129ed99b2559b875f2bb5ac3db81d787add11c11c4a56fd3c88943d39ddea3b.scope: Deactivated successfully.
Jan 20 14:51:57 compute-0 podman[318061]: 2026-01-20 14:51:57.601543218 +0000 UTC m=+0.124095220 container attach 5129ed99b2559b875f2bb5ac3db81d787add11c11c4a56fd3c88943d39ddea3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_agnesi, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True)
Jan 20 14:51:57 compute-0 podman[318061]: 2026-01-20 14:51:57.602219717 +0000 UTC m=+0.124771709 container died 5129ed99b2559b875f2bb5ac3db81d787add11c11c4a56fd3c88943d39ddea3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_agnesi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:51:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-8aa77fa0678eb0b4517b87cda43b0cc9b7af8ff5cbba5fdafb80c7bf574d43d0-merged.mount: Deactivated successfully.
Jan 20 14:51:57 compute-0 podman[318061]: 2026-01-20 14:51:57.64392202 +0000 UTC m=+0.166474002 container remove 5129ed99b2559b875f2bb5ac3db81d787add11c11c4a56fd3c88943d39ddea3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_agnesi, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:51:57 compute-0 systemd[1]: libpod-conmon-5129ed99b2559b875f2bb5ac3db81d787add11c11c4a56fd3c88943d39ddea3b.scope: Deactivated successfully.
Jan 20 14:51:57 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/3771881679' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:51:57 compute-0 podman[318103]: 2026-01-20 14:51:57.823844631 +0000 UTC m=+0.047386266 container create 4ae540a04b653fe8ec611535d68acebe02469e200d4513391ffb9f4d479c2d93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_wiles, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 20 14:51:57 compute-0 systemd[1]: Started libpod-conmon-4ae540a04b653fe8ec611535d68acebe02469e200d4513391ffb9f4d479c2d93.scope.
Jan 20 14:51:57 compute-0 nova_compute[250018]: 2026-01-20 14:51:57.878 250022 DEBUG oslo_concurrency.lockutils [None req-3b27b052-7348-4146-a47c-649b2abea55e 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Acquiring lock "7f5cfffe-c1dc-4b00-844e-0fb35b340f44" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:51:57 compute-0 nova_compute[250018]: 2026-01-20 14:51:57.880 250022 DEBUG oslo_concurrency.lockutils [None req-3b27b052-7348-4146-a47c-649b2abea55e 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Lock "7f5cfffe-c1dc-4b00-844e-0fb35b340f44" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:51:57 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:51:57 compute-0 nova_compute[250018]: 2026-01-20 14:51:57.896 250022 DEBUG nova.compute.manager [None req-3b27b052-7348-4146-a47c-649b2abea55e 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 14:51:57 compute-0 podman[318103]: 2026-01-20 14:51:57.80370394 +0000 UTC m=+0.027245615 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:51:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d23ebebc12b7fbc0c0183f66ecc228d1f45a788d0dffee37530c11a5cfd30ec3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:51:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d23ebebc12b7fbc0c0183f66ecc228d1f45a788d0dffee37530c11a5cfd30ec3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:51:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d23ebebc12b7fbc0c0183f66ecc228d1f45a788d0dffee37530c11a5cfd30ec3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:51:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d23ebebc12b7fbc0c0183f66ecc228d1f45a788d0dffee37530c11a5cfd30ec3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:51:57 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1937: 321 pgs: 321 active+clean; 449 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 4.2 MiB/s wr, 333 op/s
Jan 20 14:51:57 compute-0 podman[318103]: 2026-01-20 14:51:57.919795904 +0000 UTC m=+0.143337549 container init 4ae540a04b653fe8ec611535d68acebe02469e200d4513391ffb9f4d479c2d93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_wiles, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 20 14:51:57 compute-0 podman[318103]: 2026-01-20 14:51:57.926431132 +0000 UTC m=+0.149972767 container start 4ae540a04b653fe8ec611535d68acebe02469e200d4513391ffb9f4d479c2d93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_wiles, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Jan 20 14:51:57 compute-0 podman[318103]: 2026-01-20 14:51:57.931476668 +0000 UTC m=+0.155018503 container attach 4ae540a04b653fe8ec611535d68acebe02469e200d4513391ffb9f4d479c2d93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_wiles, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 20 14:51:57 compute-0 nova_compute[250018]: 2026-01-20 14:51:57.982 250022 DEBUG oslo_concurrency.lockutils [None req-3b27b052-7348-4146-a47c-649b2abea55e 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:51:57 compute-0 nova_compute[250018]: 2026-01-20 14:51:57.982 250022 DEBUG oslo_concurrency.lockutils [None req-3b27b052-7348-4146-a47c-649b2abea55e 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:51:57 compute-0 nova_compute[250018]: 2026-01-20 14:51:57.989 250022 DEBUG nova.virt.hardware [None req-3b27b052-7348-4146-a47c-649b2abea55e 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 14:51:57 compute-0 nova_compute[250018]: 2026-01-20 14:51:57.990 250022 INFO nova.compute.claims [None req-3b27b052-7348-4146-a47c-649b2abea55e 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Claim successful on node compute-0.ctlplane.example.com
Jan 20 14:51:58 compute-0 nova_compute[250018]: 2026-01-20 14:51:58.118 250022 DEBUG oslo_concurrency.processutils [None req-3b27b052-7348-4146-a47c-649b2abea55e 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:51:58 compute-0 nova_compute[250018]: 2026-01-20 14:51:58.271 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:51:58 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:51:58 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3260160675' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:51:58 compute-0 nova_compute[250018]: 2026-01-20 14:51:58.575 250022 DEBUG oslo_concurrency.processutils [None req-3b27b052-7348-4146-a47c-649b2abea55e 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:51:58 compute-0 nova_compute[250018]: 2026-01-20 14:51:58.585 250022 DEBUG nova.compute.provider_tree [None req-3b27b052-7348-4146-a47c-649b2abea55e 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:51:58 compute-0 nova_compute[250018]: 2026-01-20 14:51:58.615 250022 DEBUG nova.scheduler.client.report [None req-3b27b052-7348-4146-a47c-649b2abea55e 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:51:58 compute-0 nova_compute[250018]: 2026-01-20 14:51:58.645 250022 DEBUG oslo_concurrency.lockutils [None req-3b27b052-7348-4146-a47c-649b2abea55e 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.662s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:51:58 compute-0 nova_compute[250018]: 2026-01-20 14:51:58.647 250022 DEBUG nova.compute.manager [None req-3b27b052-7348-4146-a47c-649b2abea55e 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 14:51:58 compute-0 relaxed_wiles[318120]: {
Jan 20 14:51:58 compute-0 relaxed_wiles[318120]:     "0": [
Jan 20 14:51:58 compute-0 relaxed_wiles[318120]:         {
Jan 20 14:51:58 compute-0 relaxed_wiles[318120]:             "devices": [
Jan 20 14:51:58 compute-0 relaxed_wiles[318120]:                 "/dev/loop3"
Jan 20 14:51:58 compute-0 relaxed_wiles[318120]:             ],
Jan 20 14:51:58 compute-0 relaxed_wiles[318120]:             "lv_name": "ceph_lv0",
Jan 20 14:51:58 compute-0 relaxed_wiles[318120]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:51:58 compute-0 relaxed_wiles[318120]:             "lv_size": "7511998464",
Jan 20 14:51:58 compute-0 relaxed_wiles[318120]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e399cf45-e6b6-5393-99f1-75c601d3f188,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afc3740a-ccee-46f8-b343-22648cc89376,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 20 14:51:58 compute-0 relaxed_wiles[318120]:             "lv_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 14:51:58 compute-0 relaxed_wiles[318120]:             "name": "ceph_lv0",
Jan 20 14:51:58 compute-0 relaxed_wiles[318120]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:51:58 compute-0 relaxed_wiles[318120]:             "tags": {
Jan 20 14:51:58 compute-0 relaxed_wiles[318120]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:51:58 compute-0 relaxed_wiles[318120]:                 "ceph.block_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 14:51:58 compute-0 relaxed_wiles[318120]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 14:51:58 compute-0 relaxed_wiles[318120]:                 "ceph.cluster_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 14:51:58 compute-0 relaxed_wiles[318120]:                 "ceph.cluster_name": "ceph",
Jan 20 14:51:58 compute-0 relaxed_wiles[318120]:                 "ceph.crush_device_class": "",
Jan 20 14:51:58 compute-0 relaxed_wiles[318120]:                 "ceph.encrypted": "0",
Jan 20 14:51:58 compute-0 relaxed_wiles[318120]:                 "ceph.osd_fsid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 14:51:58 compute-0 relaxed_wiles[318120]:                 "ceph.osd_id": "0",
Jan 20 14:51:58 compute-0 relaxed_wiles[318120]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 14:51:58 compute-0 relaxed_wiles[318120]:                 "ceph.type": "block",
Jan 20 14:51:58 compute-0 relaxed_wiles[318120]:                 "ceph.vdo": "0"
Jan 20 14:51:58 compute-0 relaxed_wiles[318120]:             },
Jan 20 14:51:58 compute-0 relaxed_wiles[318120]:             "type": "block",
Jan 20 14:51:58 compute-0 relaxed_wiles[318120]:             "vg_name": "ceph_vg0"
Jan 20 14:51:58 compute-0 relaxed_wiles[318120]:         }
Jan 20 14:51:58 compute-0 relaxed_wiles[318120]:     ]
Jan 20 14:51:58 compute-0 relaxed_wiles[318120]: }
Jan 20 14:51:58 compute-0 nova_compute[250018]: 2026-01-20 14:51:58.686 250022 DEBUG nova.compute.manager [None req-3b27b052-7348-4146-a47c-649b2abea55e 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 20 14:51:58 compute-0 nova_compute[250018]: 2026-01-20 14:51:58.686 250022 DEBUG nova.network.neutron [None req-3b27b052-7348-4146-a47c-649b2abea55e 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 14:51:58 compute-0 systemd[1]: libpod-4ae540a04b653fe8ec611535d68acebe02469e200d4513391ffb9f4d479c2d93.scope: Deactivated successfully.
Jan 20 14:51:58 compute-0 podman[318103]: 2026-01-20 14:51:58.708493219 +0000 UTC m=+0.932034844 container died 4ae540a04b653fe8ec611535d68acebe02469e200d4513391ffb9f4d479c2d93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_wiles, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:51:58 compute-0 nova_compute[250018]: 2026-01-20 14:51:58.709 250022 INFO nova.virt.libvirt.driver [None req-3b27b052-7348-4146-a47c-649b2abea55e 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 14:51:58 compute-0 nova_compute[250018]: 2026-01-20 14:51:58.737 250022 DEBUG nova.compute.manager [None req-3b27b052-7348-4146-a47c-649b2abea55e 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 14:51:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-d23ebebc12b7fbc0c0183f66ecc228d1f45a788d0dffee37530c11a5cfd30ec3-merged.mount: Deactivated successfully.
Jan 20 14:51:58 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:51:58 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:51:58 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:51:58.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:51:58 compute-0 podman[318103]: 2026-01-20 14:51:58.781205196 +0000 UTC m=+1.004746831 container remove 4ae540a04b653fe8ec611535d68acebe02469e200d4513391ffb9f4d479c2d93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_wiles, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:51:58 compute-0 systemd[1]: libpod-conmon-4ae540a04b653fe8ec611535d68acebe02469e200d4513391ffb9f4d479c2d93.scope: Deactivated successfully.
Jan 20 14:51:58 compute-0 ceph-mon[74360]: pgmap v1937: 321 pgs: 321 active+clean; 449 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 4.2 MiB/s wr, 333 op/s
Jan 20 14:51:58 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3260160675' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:51:58 compute-0 sudo[317995]: pam_unix(sudo:session): session closed for user root
Jan 20 14:51:58 compute-0 nova_compute[250018]: 2026-01-20 14:51:58.854 250022 DEBUG nova.compute.manager [None req-3b27b052-7348-4146-a47c-649b2abea55e 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 14:51:58 compute-0 nova_compute[250018]: 2026-01-20 14:51:58.855 250022 DEBUG nova.virt.libvirt.driver [None req-3b27b052-7348-4146-a47c-649b2abea55e 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 14:51:58 compute-0 nova_compute[250018]: 2026-01-20 14:51:58.856 250022 INFO nova.virt.libvirt.driver [None req-3b27b052-7348-4146-a47c-649b2abea55e 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Creating image(s)
Jan 20 14:51:58 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:51:58 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:51:58 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:51:58.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:51:58 compute-0 sudo[318165]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:51:58 compute-0 sudo[318165]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:51:58 compute-0 sudo[318165]: pam_unix(sudo:session): session closed for user root
Jan 20 14:51:58 compute-0 nova_compute[250018]: 2026-01-20 14:51:58.921 250022 DEBUG nova.storage.rbd_utils [None req-3b27b052-7348-4146-a47c-649b2abea55e 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] rbd image 7f5cfffe-c1dc-4b00-844e-0fb35b340f44_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:51:58 compute-0 nova_compute[250018]: 2026-01-20 14:51:58.957 250022 DEBUG nova.storage.rbd_utils [None req-3b27b052-7348-4146-a47c-649b2abea55e 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] rbd image 7f5cfffe-c1dc-4b00-844e-0fb35b340f44_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:51:58 compute-0 sudo[318206]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:51:58 compute-0 sudo[318206]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:51:58 compute-0 sudo[318206]: pam_unix(sudo:session): session closed for user root
Jan 20 14:51:58 compute-0 nova_compute[250018]: 2026-01-20 14:51:58.988 250022 DEBUG nova.storage.rbd_utils [None req-3b27b052-7348-4146-a47c-649b2abea55e 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] rbd image 7f5cfffe-c1dc-4b00-844e-0fb35b340f44_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:51:58 compute-0 nova_compute[250018]: 2026-01-20 14:51:58.992 250022 DEBUG oslo_concurrency.processutils [None req-3b27b052-7348-4146-a47c-649b2abea55e 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:51:59 compute-0 nova_compute[250018]: 2026-01-20 14:51:59.023 250022 DEBUG nova.policy [None req-3b27b052-7348-4146-a47c-649b2abea55e 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '215db37373dc4ae5a75cbd6866f471da', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b3b1b7f5b4f84b5abbc401eb577c85c0', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 20 14:51:59 compute-0 sudo[318267]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:51:59 compute-0 sudo[318267]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:51:59 compute-0 sudo[318267]: pam_unix(sudo:session): session closed for user root
Jan 20 14:51:59 compute-0 nova_compute[250018]: 2026-01-20 14:51:59.061 250022 DEBUG oslo_concurrency.processutils [None req-3b27b052-7348-4146-a47c-649b2abea55e 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:51:59 compute-0 nova_compute[250018]: 2026-01-20 14:51:59.063 250022 DEBUG oslo_concurrency.lockutils [None req-3b27b052-7348-4146-a47c-649b2abea55e 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Acquiring lock "82d5c1918fd7c974214c7a48c1793a7a82560462" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:51:59 compute-0 nova_compute[250018]: 2026-01-20 14:51:59.064 250022 DEBUG oslo_concurrency.lockutils [None req-3b27b052-7348-4146-a47c-649b2abea55e 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:51:59 compute-0 nova_compute[250018]: 2026-01-20 14:51:59.064 250022 DEBUG oslo_concurrency.lockutils [None req-3b27b052-7348-4146-a47c-649b2abea55e 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:51:59 compute-0 nova_compute[250018]: 2026-01-20 14:51:59.094 250022 DEBUG nova.storage.rbd_utils [None req-3b27b052-7348-4146-a47c-649b2abea55e 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] rbd image 7f5cfffe-c1dc-4b00-844e-0fb35b340f44_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:51:59 compute-0 sudo[318295]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- raw list --format json
Jan 20 14:51:59 compute-0 sudo[318295]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:51:59 compute-0 nova_compute[250018]: 2026-01-20 14:51:59.099 250022 DEBUG oslo_concurrency.processutils [None req-3b27b052-7348-4146-a47c-649b2abea55e 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 7f5cfffe-c1dc-4b00-844e-0fb35b340f44_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:51:59 compute-0 nova_compute[250018]: 2026-01-20 14:51:59.413 250022 DEBUG oslo_concurrency.processutils [None req-3b27b052-7348-4146-a47c-649b2abea55e 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 7f5cfffe-c1dc-4b00-844e-0fb35b340f44_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.314s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:51:59 compute-0 podman[318398]: 2026-01-20 14:51:59.439630086 +0000 UTC m=+0.043561494 container create fde0281b2e90fcde60c21a8aebd4e9c0244a9a09f9005ac382739085ba516dc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_easley, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:51:59 compute-0 systemd[1]: Started libpod-conmon-fde0281b2e90fcde60c21a8aebd4e9c0244a9a09f9005ac382739085ba516dc0.scope.
Jan 20 14:51:59 compute-0 nova_compute[250018]: 2026-01-20 14:51:59.495 250022 DEBUG nova.storage.rbd_utils [None req-3b27b052-7348-4146-a47c-649b2abea55e 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] resizing rbd image 7f5cfffe-c1dc-4b00-844e-0fb35b340f44_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 20 14:51:59 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:51:59 compute-0 podman[318398]: 2026-01-20 14:51:59.421838357 +0000 UTC m=+0.025769785 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:51:59 compute-0 podman[318398]: 2026-01-20 14:51:59.515707633 +0000 UTC m=+0.119639041 container init fde0281b2e90fcde60c21a8aebd4e9c0244a9a09f9005ac382739085ba516dc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_easley, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:51:59 compute-0 podman[318398]: 2026-01-20 14:51:59.522556157 +0000 UTC m=+0.126487565 container start fde0281b2e90fcde60c21a8aebd4e9c0244a9a09f9005ac382739085ba516dc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_easley, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:51:59 compute-0 podman[318398]: 2026-01-20 14:51:59.525550698 +0000 UTC m=+0.129482106 container attach fde0281b2e90fcde60c21a8aebd4e9c0244a9a09f9005ac382739085ba516dc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_easley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Jan 20 14:51:59 compute-0 eloquent_easley[318448]: 167 167
Jan 20 14:51:59 compute-0 systemd[1]: libpod-fde0281b2e90fcde60c21a8aebd4e9c0244a9a09f9005ac382739085ba516dc0.scope: Deactivated successfully.
Jan 20 14:51:59 compute-0 conmon[318448]: conmon fde0281b2e90fcde60c2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fde0281b2e90fcde60c21a8aebd4e9c0244a9a09f9005ac382739085ba516dc0.scope/container/memory.events
Jan 20 14:51:59 compute-0 podman[318398]: 2026-01-20 14:51:59.528361443 +0000 UTC m=+0.132292841 container died fde0281b2e90fcde60c21a8aebd4e9c0244a9a09f9005ac382739085ba516dc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_easley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 20 14:51:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-eb0eecf2f5f8a3e510fbef110d507887d8d6a729b1a8824aa87d9e97d7b57ae1-merged.mount: Deactivated successfully.
Jan 20 14:51:59 compute-0 podman[318398]: 2026-01-20 14:51:59.56349675 +0000 UTC m=+0.167428158 container remove fde0281b2e90fcde60c21a8aebd4e9c0244a9a09f9005ac382739085ba516dc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_easley, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:51:59 compute-0 systemd[1]: libpod-conmon-fde0281b2e90fcde60c21a8aebd4e9c0244a9a09f9005ac382739085ba516dc0.scope: Deactivated successfully.
Jan 20 14:51:59 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e261 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:51:59 compute-0 nova_compute[250018]: 2026-01-20 14:51:59.651 250022 DEBUG nova.objects.instance [None req-3b27b052-7348-4146-a47c-649b2abea55e 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Lazy-loading 'migration_context' on Instance uuid 7f5cfffe-c1dc-4b00-844e-0fb35b340f44 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:51:59 compute-0 nova_compute[250018]: 2026-01-20 14:51:59.669 250022 DEBUG nova.virt.libvirt.driver [None req-3b27b052-7348-4146-a47c-649b2abea55e 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 14:51:59 compute-0 nova_compute[250018]: 2026-01-20 14:51:59.670 250022 DEBUG nova.virt.libvirt.driver [None req-3b27b052-7348-4146-a47c-649b2abea55e 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Ensure instance console log exists: /var/lib/nova/instances/7f5cfffe-c1dc-4b00-844e-0fb35b340f44/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 14:51:59 compute-0 nova_compute[250018]: 2026-01-20 14:51:59.670 250022 DEBUG oslo_concurrency.lockutils [None req-3b27b052-7348-4146-a47c-649b2abea55e 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:51:59 compute-0 nova_compute[250018]: 2026-01-20 14:51:59.671 250022 DEBUG oslo_concurrency.lockutils [None req-3b27b052-7348-4146-a47c-649b2abea55e 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:51:59 compute-0 nova_compute[250018]: 2026-01-20 14:51:59.672 250022 DEBUG oslo_concurrency.lockutils [None req-3b27b052-7348-4146-a47c-649b2abea55e 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:51:59 compute-0 podman[318512]: 2026-01-20 14:51:59.728633254 +0000 UTC m=+0.042030503 container create bc6265e498ca18f9d80df7fd33f525e10d81ab952adac44ca09ca484a1b75ae9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_nightingale, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:51:59 compute-0 systemd[1]: Started libpod-conmon-bc6265e498ca18f9d80df7fd33f525e10d81ab952adac44ca09ca484a1b75ae9.scope.
Jan 20 14:51:59 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:51:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1ffd83aabae2dd44dabee1537d3eac8dfd58bfec29e5b45e747ec95fde69472/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:51:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1ffd83aabae2dd44dabee1537d3eac8dfd58bfec29e5b45e747ec95fde69472/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:51:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1ffd83aabae2dd44dabee1537d3eac8dfd58bfec29e5b45e747ec95fde69472/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:51:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1ffd83aabae2dd44dabee1537d3eac8dfd58bfec29e5b45e747ec95fde69472/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:51:59 compute-0 podman[318512]: 2026-01-20 14:51:59.712631323 +0000 UTC m=+0.026028592 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:51:59 compute-0 podman[318512]: 2026-01-20 14:51:59.828316367 +0000 UTC m=+0.141713656 container init bc6265e498ca18f9d80df7fd33f525e10d81ab952adac44ca09ca484a1b75ae9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_nightingale, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 20 14:51:59 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/1996025638' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:51:59 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/256008764' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:51:59 compute-0 podman[318512]: 2026-01-20 14:51:59.843307269 +0000 UTC m=+0.156704518 container start bc6265e498ca18f9d80df7fd33f525e10d81ab952adac44ca09ca484a1b75ae9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_nightingale, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 20 14:51:59 compute-0 podman[318512]: 2026-01-20 14:51:59.846778393 +0000 UTC m=+0.160175642 container attach bc6265e498ca18f9d80df7fd33f525e10d81ab952adac44ca09ca484a1b75ae9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_nightingale, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:51:59 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1938: 321 pgs: 321 active+clean; 470 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 4.3 MiB/s wr, 313 op/s
Jan 20 14:51:59 compute-0 nova_compute[250018]: 2026-01-20 14:51:59.939 250022 DEBUG nova.network.neutron [None req-3b27b052-7348-4146-a47c-649b2abea55e 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Successfully created port: 87b0cab5-af2f-4440-8f58-840860a23f68 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 20 14:52:00 compute-0 nova_compute[250018]: 2026-01-20 14:52:00.279 250022 DEBUG nova.virt.libvirt.driver [None req-42aebf58-38d3-43ca-a586-afbe64386435 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] [instance: 8392c22f-8472-42e6-bd6b-68724d9e0734] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Jan 20 14:52:00 compute-0 nova_compute[250018]: 2026-01-20 14:52:00.462 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:52:00 compute-0 brave_nightingale[318530]: {
Jan 20 14:52:00 compute-0 brave_nightingale[318530]:     "afc3740a-ccee-46f8-b343-22648cc89376": {
Jan 20 14:52:00 compute-0 brave_nightingale[318530]:         "ceph_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 14:52:00 compute-0 brave_nightingale[318530]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 20 14:52:00 compute-0 brave_nightingale[318530]:         "osd_id": 0,
Jan 20 14:52:00 compute-0 brave_nightingale[318530]:         "osd_uuid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 14:52:00 compute-0 brave_nightingale[318530]:         "type": "bluestore"
Jan 20 14:52:00 compute-0 brave_nightingale[318530]:     }
Jan 20 14:52:00 compute-0 brave_nightingale[318530]: }
Jan 20 14:52:00 compute-0 systemd[1]: libpod-bc6265e498ca18f9d80df7fd33f525e10d81ab952adac44ca09ca484a1b75ae9.scope: Deactivated successfully.
Jan 20 14:52:00 compute-0 podman[318512]: 2026-01-20 14:52:00.666896925 +0000 UTC m=+0.980294174 container died bc6265e498ca18f9d80df7fd33f525e10d81ab952adac44ca09ca484a1b75ae9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_nightingale, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Jan 20 14:52:00 compute-0 nova_compute[250018]: 2026-01-20 14:52:00.669 250022 DEBUG nova.network.neutron [None req-3b27b052-7348-4146-a47c-649b2abea55e 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Successfully updated port: 87b0cab5-af2f-4440-8f58-840860a23f68 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 20 14:52:00 compute-0 nova_compute[250018]: 2026-01-20 14:52:00.682 250022 DEBUG oslo_concurrency.lockutils [None req-3b27b052-7348-4146-a47c-649b2abea55e 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Acquiring lock "refresh_cache-7f5cfffe-c1dc-4b00-844e-0fb35b340f44" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:52:00 compute-0 nova_compute[250018]: 2026-01-20 14:52:00.682 250022 DEBUG oslo_concurrency.lockutils [None req-3b27b052-7348-4146-a47c-649b2abea55e 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Acquired lock "refresh_cache-7f5cfffe-c1dc-4b00-844e-0fb35b340f44" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:52:00 compute-0 nova_compute[250018]: 2026-01-20 14:52:00.682 250022 DEBUG nova.network.neutron [None req-3b27b052-7348-4146-a47c-649b2abea55e 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:52:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-b1ffd83aabae2dd44dabee1537d3eac8dfd58bfec29e5b45e747ec95fde69472-merged.mount: Deactivated successfully.
Jan 20 14:52:00 compute-0 podman[318512]: 2026-01-20 14:52:00.726143289 +0000 UTC m=+1.039540538 container remove bc6265e498ca18f9d80df7fd33f525e10d81ab952adac44ca09ca484a1b75ae9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_nightingale, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 20 14:52:00 compute-0 systemd[1]: libpod-conmon-bc6265e498ca18f9d80df7fd33f525e10d81ab952adac44ca09ca484a1b75ae9.scope: Deactivated successfully.
Jan 20 14:52:00 compute-0 sudo[318295]: pam_unix(sudo:session): session closed for user root
Jan 20 14:52:00 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 14:52:00 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:52:00 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 14:52:00 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:52:00 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 58ce00a8-87b4-474a-8b5e-0480ce0fdb5b does not exist
Jan 20 14:52:00 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:52:00 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:52:00 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:52:00.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:52:00 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 25faae4d-3312-4f6f-9f30-4109848aeb8a does not exist
Jan 20 14:52:00 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 80e48254-017f-4eea-ae43-9f78181a5c35 does not exist
Jan 20 14:52:00 compute-0 nova_compute[250018]: 2026-01-20 14:52:00.794 250022 DEBUG nova.compute.manager [req-61e9f415-ca64-4838-b908-ec26f9830472 req-10d95289-be43-4b0c-8326-5a129fc62eaf 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Received event network-changed-87b0cab5-af2f-4440-8f58-840860a23f68 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:52:00 compute-0 nova_compute[250018]: 2026-01-20 14:52:00.794 250022 DEBUG nova.compute.manager [req-61e9f415-ca64-4838-b908-ec26f9830472 req-10d95289-be43-4b0c-8326-5a129fc62eaf 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Refreshing instance network info cache due to event network-changed-87b0cab5-af2f-4440-8f58-840860a23f68. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:52:00 compute-0 nova_compute[250018]: 2026-01-20 14:52:00.794 250022 DEBUG oslo_concurrency.lockutils [req-61e9f415-ca64-4838-b908-ec26f9830472 req-10d95289-be43-4b0c-8326-5a129fc62eaf 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-7f5cfffe-c1dc-4b00-844e-0fb35b340f44" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:52:00 compute-0 sudo[318565]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:52:00 compute-0 sudo[318565]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:52:00 compute-0 sudo[318565]: pam_unix(sudo:session): session closed for user root
Jan 20 14:52:00 compute-0 ceph-mon[74360]: pgmap v1938: 321 pgs: 321 active+clean; 470 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 4.3 MiB/s wr, 313 op/s
Jan 20 14:52:00 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:52:00 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:52:00 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:52:00 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:52:00 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:52:00.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:52:00 compute-0 sudo[318590]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 14:52:00 compute-0 sudo[318590]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:52:00 compute-0 sudo[318590]: pam_unix(sudo:session): session closed for user root
Jan 20 14:52:00 compute-0 nova_compute[250018]: 2026-01-20 14:52:00.928 250022 DEBUG nova.network.neutron [None req-3b27b052-7348-4146-a47c-649b2abea55e 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 14:52:01 compute-0 nova_compute[250018]: 2026-01-20 14:52:01.854 250022 DEBUG nova.network.neutron [None req-3b27b052-7348-4146-a47c-649b2abea55e 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Updating instance_info_cache with network_info: [{"id": "87b0cab5-af2f-4440-8f58-840860a23f68", "address": "fa:16:3e:2b:79:1b", "network": {"id": "41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1445030024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3b1b7f5b4f84b5abbc401eb577c85c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap87b0cab5-af", "ovs_interfaceid": "87b0cab5-af2f-4440-8f58-840860a23f68", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:52:01 compute-0 nova_compute[250018]: 2026-01-20 14:52:01.873 250022 DEBUG oslo_concurrency.lockutils [None req-3b27b052-7348-4146-a47c-649b2abea55e 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Releasing lock "refresh_cache-7f5cfffe-c1dc-4b00-844e-0fb35b340f44" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:52:01 compute-0 nova_compute[250018]: 2026-01-20 14:52:01.873 250022 DEBUG nova.compute.manager [None req-3b27b052-7348-4146-a47c-649b2abea55e 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Instance network_info: |[{"id": "87b0cab5-af2f-4440-8f58-840860a23f68", "address": "fa:16:3e:2b:79:1b", "network": {"id": "41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1445030024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3b1b7f5b4f84b5abbc401eb577c85c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap87b0cab5-af", "ovs_interfaceid": "87b0cab5-af2f-4440-8f58-840860a23f68", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 20 14:52:01 compute-0 nova_compute[250018]: 2026-01-20 14:52:01.874 250022 DEBUG oslo_concurrency.lockutils [req-61e9f415-ca64-4838-b908-ec26f9830472 req-10d95289-be43-4b0c-8326-5a129fc62eaf 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-7f5cfffe-c1dc-4b00-844e-0fb35b340f44" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:52:01 compute-0 nova_compute[250018]: 2026-01-20 14:52:01.874 250022 DEBUG nova.network.neutron [req-61e9f415-ca64-4838-b908-ec26f9830472 req-10d95289-be43-4b0c-8326-5a129fc62eaf 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Refreshing network info cache for port 87b0cab5-af2f-4440-8f58-840860a23f68 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:52:01 compute-0 nova_compute[250018]: 2026-01-20 14:52:01.876 250022 DEBUG nova.virt.libvirt.driver [None req-3b27b052-7348-4146-a47c-649b2abea55e 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Start _get_guest_xml network_info=[{"id": "87b0cab5-af2f-4440-8f58-840860a23f68", "address": "fa:16:3e:2b:79:1b", "network": {"id": "41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1445030024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3b1b7f5b4f84b5abbc401eb577c85c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap87b0cab5-af", "ovs_interfaceid": "87b0cab5-af2f-4440-8f58-840860a23f68", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encrypted': False, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'guest_format': None, 'size': 0, 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 14:52:01 compute-0 nova_compute[250018]: 2026-01-20 14:52:01.881 250022 WARNING nova.virt.libvirt.driver [None req-3b27b052-7348-4146-a47c-649b2abea55e 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:52:01 compute-0 nova_compute[250018]: 2026-01-20 14:52:01.885 250022 DEBUG nova.virt.libvirt.host [None req-3b27b052-7348-4146-a47c-649b2abea55e 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 14:52:01 compute-0 nova_compute[250018]: 2026-01-20 14:52:01.886 250022 DEBUG nova.virt.libvirt.host [None req-3b27b052-7348-4146-a47c-649b2abea55e 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 14:52:01 compute-0 nova_compute[250018]: 2026-01-20 14:52:01.888 250022 DEBUG nova.virt.libvirt.host [None req-3b27b052-7348-4146-a47c-649b2abea55e 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 14:52:01 compute-0 nova_compute[250018]: 2026-01-20 14:52:01.888 250022 DEBUG nova.virt.libvirt.host [None req-3b27b052-7348-4146-a47c-649b2abea55e 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 14:52:01 compute-0 nova_compute[250018]: 2026-01-20 14:52:01.889 250022 DEBUG nova.virt.libvirt.driver [None req-3b27b052-7348-4146-a47c-649b2abea55e 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 14:52:01 compute-0 nova_compute[250018]: 2026-01-20 14:52:01.889 250022 DEBUG nova.virt.hardware [None req-3b27b052-7348-4146-a47c-649b2abea55e 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 14:52:01 compute-0 nova_compute[250018]: 2026-01-20 14:52:01.890 250022 DEBUG nova.virt.hardware [None req-3b27b052-7348-4146-a47c-649b2abea55e 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 14:52:01 compute-0 nova_compute[250018]: 2026-01-20 14:52:01.890 250022 DEBUG nova.virt.hardware [None req-3b27b052-7348-4146-a47c-649b2abea55e 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 14:52:01 compute-0 nova_compute[250018]: 2026-01-20 14:52:01.890 250022 DEBUG nova.virt.hardware [None req-3b27b052-7348-4146-a47c-649b2abea55e 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 14:52:01 compute-0 nova_compute[250018]: 2026-01-20 14:52:01.890 250022 DEBUG nova.virt.hardware [None req-3b27b052-7348-4146-a47c-649b2abea55e 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 14:52:01 compute-0 nova_compute[250018]: 2026-01-20 14:52:01.891 250022 DEBUG nova.virt.hardware [None req-3b27b052-7348-4146-a47c-649b2abea55e 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 14:52:01 compute-0 nova_compute[250018]: 2026-01-20 14:52:01.891 250022 DEBUG nova.virt.hardware [None req-3b27b052-7348-4146-a47c-649b2abea55e 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 14:52:01 compute-0 nova_compute[250018]: 2026-01-20 14:52:01.891 250022 DEBUG nova.virt.hardware [None req-3b27b052-7348-4146-a47c-649b2abea55e 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 14:52:01 compute-0 nova_compute[250018]: 2026-01-20 14:52:01.891 250022 DEBUG nova.virt.hardware [None req-3b27b052-7348-4146-a47c-649b2abea55e 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 14:52:01 compute-0 nova_compute[250018]: 2026-01-20 14:52:01.891 250022 DEBUG nova.virt.hardware [None req-3b27b052-7348-4146-a47c-649b2abea55e 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 14:52:01 compute-0 nova_compute[250018]: 2026-01-20 14:52:01.892 250022 DEBUG nova.virt.hardware [None req-3b27b052-7348-4146-a47c-649b2abea55e 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 14:52:01 compute-0 nova_compute[250018]: 2026-01-20 14:52:01.894 250022 DEBUG oslo_concurrency.processutils [None req-3b27b052-7348-4146-a47c-649b2abea55e 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:52:01 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1939: 321 pgs: 321 active+clean; 538 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 5.0 MiB/s rd, 7.7 MiB/s wr, 347 op/s
Jan 20 14:52:02 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:52:02 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/640400790' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:52:02 compute-0 nova_compute[250018]: 2026-01-20 14:52:02.319 250022 DEBUG oslo_concurrency.processutils [None req-3b27b052-7348-4146-a47c-649b2abea55e 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:52:02 compute-0 nova_compute[250018]: 2026-01-20 14:52:02.344 250022 DEBUG nova.storage.rbd_utils [None req-3b27b052-7348-4146-a47c-649b2abea55e 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] rbd image 7f5cfffe-c1dc-4b00-844e-0fb35b340f44_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:52:02 compute-0 nova_compute[250018]: 2026-01-20 14:52:02.348 250022 DEBUG oslo_concurrency.processutils [None req-3b27b052-7348-4146-a47c-649b2abea55e 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:52:02 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:52:02 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1432111964' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:52:02 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:52:02 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:52:02 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:52:02.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:52:02 compute-0 nova_compute[250018]: 2026-01-20 14:52:02.787 250022 DEBUG oslo_concurrency.processutils [None req-3b27b052-7348-4146-a47c-649b2abea55e 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:52:02 compute-0 nova_compute[250018]: 2026-01-20 14:52:02.791 250022 DEBUG nova.virt.libvirt.vif [None req-3b27b052-7348-4146-a47c-649b2abea55e 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:51:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1654627482',display_name='tempest-ServerActionsTestOtherB-server-1654627482',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1654627482',id=118,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBO99uJ9+FwgjxRb/9u+f3Mj9/VKSDM+OKd66Ygsg8lEO+7bGpDEQrC5BIaSV+Na5YF+3DqUwLNmAYSN9IkTSGbRPw5y8813A+KsiNHebrpnZ7oReyT+5/zNQYafCHVAfGA==',key_name='tempest-keypair-302882914',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b3b1b7f5b4f84b5abbc401eb577c85c0',ramdisk_id='',reservation_id='r-2ulk0sfq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-1136521362',owner_user_name='tempest-ServerActionsTestOtherB-1136521362-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:51:58Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='215db37373dc4ae5a75cbd6866f471da',uuid=7f5cfffe-c1dc-4b00-844e-0fb35b340f44,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "87b0cab5-af2f-4440-8f58-840860a23f68", "address": "fa:16:3e:2b:79:1b", "network": {"id": "41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1445030024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3b1b7f5b4f84b5abbc401eb577c85c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap87b0cab5-af", "ovs_interfaceid": "87b0cab5-af2f-4440-8f58-840860a23f68", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 14:52:02 compute-0 nova_compute[250018]: 2026-01-20 14:52:02.791 250022 DEBUG nova.network.os_vif_util [None req-3b27b052-7348-4146-a47c-649b2abea55e 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Converting VIF {"id": "87b0cab5-af2f-4440-8f58-840860a23f68", "address": "fa:16:3e:2b:79:1b", "network": {"id": "41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1445030024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3b1b7f5b4f84b5abbc401eb577c85c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap87b0cab5-af", "ovs_interfaceid": "87b0cab5-af2f-4440-8f58-840860a23f68", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:52:02 compute-0 nova_compute[250018]: 2026-01-20 14:52:02.794 250022 DEBUG nova.network.os_vif_util [None req-3b27b052-7348-4146-a47c-649b2abea55e 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2b:79:1b,bridge_name='br-int',has_traffic_filtering=True,id=87b0cab5-af2f-4440-8f58-840860a23f68,network=Network(41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap87b0cab5-af') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:52:02 compute-0 nova_compute[250018]: 2026-01-20 14:52:02.797 250022 DEBUG nova.objects.instance [None req-3b27b052-7348-4146-a47c-649b2abea55e 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Lazy-loading 'pci_devices' on Instance uuid 7f5cfffe-c1dc-4b00-844e-0fb35b340f44 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:52:02 compute-0 nova_compute[250018]: 2026-01-20 14:52:02.832 250022 DEBUG nova.virt.libvirt.driver [None req-3b27b052-7348-4146-a47c-649b2abea55e 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] End _get_guest_xml xml=<domain type="kvm">
Jan 20 14:52:02 compute-0 nova_compute[250018]:   <uuid>7f5cfffe-c1dc-4b00-844e-0fb35b340f44</uuid>
Jan 20 14:52:02 compute-0 nova_compute[250018]:   <name>instance-00000076</name>
Jan 20 14:52:02 compute-0 nova_compute[250018]:   <memory>131072</memory>
Jan 20 14:52:02 compute-0 nova_compute[250018]:   <vcpu>1</vcpu>
Jan 20 14:52:02 compute-0 nova_compute[250018]:   <metadata>
Jan 20 14:52:02 compute-0 nova_compute[250018]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 14:52:02 compute-0 nova_compute[250018]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 14:52:02 compute-0 nova_compute[250018]:       <nova:name>tempest-ServerActionsTestOtherB-server-1654627482</nova:name>
Jan 20 14:52:02 compute-0 nova_compute[250018]:       <nova:creationTime>2026-01-20 14:52:01</nova:creationTime>
Jan 20 14:52:02 compute-0 nova_compute[250018]:       <nova:flavor name="m1.nano">
Jan 20 14:52:02 compute-0 nova_compute[250018]:         <nova:memory>128</nova:memory>
Jan 20 14:52:02 compute-0 nova_compute[250018]:         <nova:disk>1</nova:disk>
Jan 20 14:52:02 compute-0 nova_compute[250018]:         <nova:swap>0</nova:swap>
Jan 20 14:52:02 compute-0 nova_compute[250018]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 14:52:02 compute-0 nova_compute[250018]:         <nova:vcpus>1</nova:vcpus>
Jan 20 14:52:02 compute-0 nova_compute[250018]:       </nova:flavor>
Jan 20 14:52:02 compute-0 nova_compute[250018]:       <nova:owner>
Jan 20 14:52:02 compute-0 nova_compute[250018]:         <nova:user uuid="215db37373dc4ae5a75cbd6866f471da">tempest-ServerActionsTestOtherB-1136521362-project-member</nova:user>
Jan 20 14:52:02 compute-0 nova_compute[250018]:         <nova:project uuid="b3b1b7f5b4f84b5abbc401eb577c85c0">tempest-ServerActionsTestOtherB-1136521362</nova:project>
Jan 20 14:52:02 compute-0 nova_compute[250018]:       </nova:owner>
Jan 20 14:52:02 compute-0 nova_compute[250018]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 14:52:02 compute-0 nova_compute[250018]:       <nova:ports>
Jan 20 14:52:02 compute-0 nova_compute[250018]:         <nova:port uuid="87b0cab5-af2f-4440-8f58-840860a23f68">
Jan 20 14:52:02 compute-0 nova_compute[250018]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Jan 20 14:52:02 compute-0 nova_compute[250018]:         </nova:port>
Jan 20 14:52:02 compute-0 nova_compute[250018]:       </nova:ports>
Jan 20 14:52:02 compute-0 nova_compute[250018]:     </nova:instance>
Jan 20 14:52:02 compute-0 nova_compute[250018]:   </metadata>
Jan 20 14:52:02 compute-0 nova_compute[250018]:   <sysinfo type="smbios">
Jan 20 14:52:02 compute-0 nova_compute[250018]:     <system>
Jan 20 14:52:02 compute-0 nova_compute[250018]:       <entry name="manufacturer">RDO</entry>
Jan 20 14:52:02 compute-0 nova_compute[250018]:       <entry name="product">OpenStack Compute</entry>
Jan 20 14:52:02 compute-0 nova_compute[250018]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 14:52:02 compute-0 nova_compute[250018]:       <entry name="serial">7f5cfffe-c1dc-4b00-844e-0fb35b340f44</entry>
Jan 20 14:52:02 compute-0 nova_compute[250018]:       <entry name="uuid">7f5cfffe-c1dc-4b00-844e-0fb35b340f44</entry>
Jan 20 14:52:02 compute-0 nova_compute[250018]:       <entry name="family">Virtual Machine</entry>
Jan 20 14:52:02 compute-0 nova_compute[250018]:     </system>
Jan 20 14:52:02 compute-0 nova_compute[250018]:   </sysinfo>
Jan 20 14:52:02 compute-0 nova_compute[250018]:   <os>
Jan 20 14:52:02 compute-0 nova_compute[250018]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 14:52:02 compute-0 nova_compute[250018]:     <boot dev="hd"/>
Jan 20 14:52:02 compute-0 nova_compute[250018]:     <smbios mode="sysinfo"/>
Jan 20 14:52:02 compute-0 nova_compute[250018]:   </os>
Jan 20 14:52:02 compute-0 nova_compute[250018]:   <features>
Jan 20 14:52:02 compute-0 nova_compute[250018]:     <acpi/>
Jan 20 14:52:02 compute-0 nova_compute[250018]:     <apic/>
Jan 20 14:52:02 compute-0 nova_compute[250018]:     <vmcoreinfo/>
Jan 20 14:52:02 compute-0 nova_compute[250018]:   </features>
Jan 20 14:52:02 compute-0 nova_compute[250018]:   <clock offset="utc">
Jan 20 14:52:02 compute-0 nova_compute[250018]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 14:52:02 compute-0 nova_compute[250018]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 14:52:02 compute-0 nova_compute[250018]:     <timer name="hpet" present="no"/>
Jan 20 14:52:02 compute-0 nova_compute[250018]:   </clock>
Jan 20 14:52:02 compute-0 nova_compute[250018]:   <cpu mode="custom" match="exact">
Jan 20 14:52:02 compute-0 nova_compute[250018]:     <model>Nehalem</model>
Jan 20 14:52:02 compute-0 nova_compute[250018]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 14:52:02 compute-0 nova_compute[250018]:   </cpu>
Jan 20 14:52:02 compute-0 nova_compute[250018]:   <devices>
Jan 20 14:52:02 compute-0 nova_compute[250018]:     <disk type="network" device="disk">
Jan 20 14:52:02 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 14:52:02 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/7f5cfffe-c1dc-4b00-844e-0fb35b340f44_disk">
Jan 20 14:52:02 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:52:02 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:52:02 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:52:02 compute-0 nova_compute[250018]:       </source>
Jan 20 14:52:02 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 14:52:02 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:52:02 compute-0 nova_compute[250018]:       </auth>
Jan 20 14:52:02 compute-0 nova_compute[250018]:       <target dev="vda" bus="virtio"/>
Jan 20 14:52:02 compute-0 nova_compute[250018]:     </disk>
Jan 20 14:52:02 compute-0 nova_compute[250018]:     <disk type="network" device="cdrom">
Jan 20 14:52:02 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 14:52:02 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/7f5cfffe-c1dc-4b00-844e-0fb35b340f44_disk.config">
Jan 20 14:52:02 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:52:02 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:52:02 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:52:02 compute-0 nova_compute[250018]:       </source>
Jan 20 14:52:02 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 14:52:02 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:52:02 compute-0 nova_compute[250018]:       </auth>
Jan 20 14:52:02 compute-0 nova_compute[250018]:       <target dev="sda" bus="sata"/>
Jan 20 14:52:02 compute-0 nova_compute[250018]:     </disk>
Jan 20 14:52:02 compute-0 nova_compute[250018]:     <interface type="ethernet">
Jan 20 14:52:02 compute-0 nova_compute[250018]:       <mac address="fa:16:3e:2b:79:1b"/>
Jan 20 14:52:02 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 14:52:02 compute-0 nova_compute[250018]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 14:52:02 compute-0 nova_compute[250018]:       <mtu size="1442"/>
Jan 20 14:52:02 compute-0 nova_compute[250018]:       <target dev="tap87b0cab5-af"/>
Jan 20 14:52:02 compute-0 nova_compute[250018]:     </interface>
Jan 20 14:52:02 compute-0 nova_compute[250018]:     <serial type="pty">
Jan 20 14:52:02 compute-0 nova_compute[250018]:       <log file="/var/lib/nova/instances/7f5cfffe-c1dc-4b00-844e-0fb35b340f44/console.log" append="off"/>
Jan 20 14:52:02 compute-0 nova_compute[250018]:     </serial>
Jan 20 14:52:02 compute-0 nova_compute[250018]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 14:52:02 compute-0 nova_compute[250018]:     <video>
Jan 20 14:52:02 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 14:52:02 compute-0 nova_compute[250018]:     </video>
Jan 20 14:52:02 compute-0 nova_compute[250018]:     <input type="tablet" bus="usb"/>
Jan 20 14:52:02 compute-0 nova_compute[250018]:     <rng model="virtio">
Jan 20 14:52:02 compute-0 nova_compute[250018]:       <backend model="random">/dev/urandom</backend>
Jan 20 14:52:02 compute-0 nova_compute[250018]:     </rng>
Jan 20 14:52:02 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root"/>
Jan 20 14:52:02 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:52:02 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:52:02 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:52:02 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:52:02 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:52:02 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:52:02 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:52:02 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:52:02 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:52:02 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:52:02 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:52:02 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:52:02 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:52:02 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:52:02 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:52:02 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:52:02 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:52:02 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:52:02 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:52:02 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:52:02 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:52:02 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:52:02 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:52:02 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:52:02 compute-0 nova_compute[250018]:     <controller type="usb" index="0"/>
Jan 20 14:52:02 compute-0 nova_compute[250018]:     <memballoon model="virtio">
Jan 20 14:52:02 compute-0 nova_compute[250018]:       <stats period="10"/>
Jan 20 14:52:02 compute-0 nova_compute[250018]:     </memballoon>
Jan 20 14:52:02 compute-0 nova_compute[250018]:   </devices>
Jan 20 14:52:02 compute-0 nova_compute[250018]: </domain>
Jan 20 14:52:02 compute-0 nova_compute[250018]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 14:52:02 compute-0 nova_compute[250018]: 2026-01-20 14:52:02.832 250022 DEBUG nova.compute.manager [None req-3b27b052-7348-4146-a47c-649b2abea55e 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Preparing to wait for external event network-vif-plugged-87b0cab5-af2f-4440-8f58-840860a23f68 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 20 14:52:02 compute-0 nova_compute[250018]: 2026-01-20 14:52:02.833 250022 DEBUG oslo_concurrency.lockutils [None req-3b27b052-7348-4146-a47c-649b2abea55e 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Acquiring lock "7f5cfffe-c1dc-4b00-844e-0fb35b340f44-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:52:02 compute-0 nova_compute[250018]: 2026-01-20 14:52:02.833 250022 DEBUG oslo_concurrency.lockutils [None req-3b27b052-7348-4146-a47c-649b2abea55e 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Lock "7f5cfffe-c1dc-4b00-844e-0fb35b340f44-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:52:02 compute-0 nova_compute[250018]: 2026-01-20 14:52:02.833 250022 DEBUG oslo_concurrency.lockutils [None req-3b27b052-7348-4146-a47c-649b2abea55e 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Lock "7f5cfffe-c1dc-4b00-844e-0fb35b340f44-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:52:02 compute-0 nova_compute[250018]: 2026-01-20 14:52:02.834 250022 DEBUG nova.virt.libvirt.vif [None req-3b27b052-7348-4146-a47c-649b2abea55e 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:51:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1654627482',display_name='tempest-ServerActionsTestOtherB-server-1654627482',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1654627482',id=118,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBO99uJ9+FwgjxRb/9u+f3Mj9/VKSDM+OKd66Ygsg8lEO+7bGpDEQrC5BIaSV+Na5YF+3DqUwLNmAYSN9IkTSGbRPw5y8813A+KsiNHebrpnZ7oReyT+5/zNQYafCHVAfGA==',key_name='tempest-keypair-302882914',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b3b1b7f5b4f84b5abbc401eb577c85c0',ramdisk_id='',reservation_id='r-2ulk0sfq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-1136521362',owner_user_name='tempest-ServerActionsTestOtherB-1136521362-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:51:58Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='215db37373dc4ae5a75cbd6866f471da',uuid=7f5cfffe-c1dc-4b00-844e-0fb35b340f44,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "87b0cab5-af2f-4440-8f58-840860a23f68", "address": "fa:16:3e:2b:79:1b", "network": {"id": "41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1445030024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3b1b7f5b4f84b5abbc401eb577c85c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap87b0cab5-af", "ovs_interfaceid": "87b0cab5-af2f-4440-8f58-840860a23f68", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 14:52:02 compute-0 nova_compute[250018]: 2026-01-20 14:52:02.835 250022 DEBUG nova.network.os_vif_util [None req-3b27b052-7348-4146-a47c-649b2abea55e 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Converting VIF {"id": "87b0cab5-af2f-4440-8f58-840860a23f68", "address": "fa:16:3e:2b:79:1b", "network": {"id": "41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1445030024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3b1b7f5b4f84b5abbc401eb577c85c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap87b0cab5-af", "ovs_interfaceid": "87b0cab5-af2f-4440-8f58-840860a23f68", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:52:02 compute-0 nova_compute[250018]: 2026-01-20 14:52:02.835 250022 DEBUG nova.network.os_vif_util [None req-3b27b052-7348-4146-a47c-649b2abea55e 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2b:79:1b,bridge_name='br-int',has_traffic_filtering=True,id=87b0cab5-af2f-4440-8f58-840860a23f68,network=Network(41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap87b0cab5-af') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:52:02 compute-0 nova_compute[250018]: 2026-01-20 14:52:02.836 250022 DEBUG os_vif [None req-3b27b052-7348-4146-a47c-649b2abea55e 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2b:79:1b,bridge_name='br-int',has_traffic_filtering=True,id=87b0cab5-af2f-4440-8f58-840860a23f68,network=Network(41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap87b0cab5-af') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 14:52:02 compute-0 nova_compute[250018]: 2026-01-20 14:52:02.839 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:52:02 compute-0 nova_compute[250018]: 2026-01-20 14:52:02.840 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:52:02 compute-0 nova_compute[250018]: 2026-01-20 14:52:02.840 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:52:02 compute-0 nova_compute[250018]: 2026-01-20 14:52:02.845 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:52:02 compute-0 nova_compute[250018]: 2026-01-20 14:52:02.845 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap87b0cab5-af, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:52:02 compute-0 nova_compute[250018]: 2026-01-20 14:52:02.845 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap87b0cab5-af, col_values=(('external_ids', {'iface-id': '87b0cab5-af2f-4440-8f58-840860a23f68', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:2b:79:1b', 'vm-uuid': '7f5cfffe-c1dc-4b00-844e-0fb35b340f44'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:52:02 compute-0 nova_compute[250018]: 2026-01-20 14:52:02.848 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:52:02 compute-0 NetworkManager[48960]: <info>  [1768920722.8498] manager: (tap87b0cab5-af): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/207)
Jan 20 14:52:02 compute-0 nova_compute[250018]: 2026-01-20 14:52:02.851 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:52:02 compute-0 nova_compute[250018]: 2026-01-20 14:52:02.855 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:52:02 compute-0 nova_compute[250018]: 2026-01-20 14:52:02.856 250022 INFO os_vif [None req-3b27b052-7348-4146-a47c-649b2abea55e 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2b:79:1b,bridge_name='br-int',has_traffic_filtering=True,id=87b0cab5-af2f-4440-8f58-840860a23f68,network=Network(41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap87b0cab5-af')
Jan 20 14:52:02 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:52:02 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:52:02 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:52:02.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:52:02 compute-0 nova_compute[250018]: 2026-01-20 14:52:02.956 250022 DEBUG nova.virt.libvirt.driver [None req-3b27b052-7348-4146-a47c-649b2abea55e 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:52:02 compute-0 nova_compute[250018]: 2026-01-20 14:52:02.957 250022 DEBUG nova.virt.libvirt.driver [None req-3b27b052-7348-4146-a47c-649b2abea55e 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:52:02 compute-0 nova_compute[250018]: 2026-01-20 14:52:02.957 250022 DEBUG nova.virt.libvirt.driver [None req-3b27b052-7348-4146-a47c-649b2abea55e 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] No VIF found with MAC fa:16:3e:2b:79:1b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 14:52:02 compute-0 nova_compute[250018]: 2026-01-20 14:52:02.957 250022 INFO nova.virt.libvirt.driver [None req-3b27b052-7348-4146-a47c-649b2abea55e 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Using config drive
Jan 20 14:52:02 compute-0 ceph-mon[74360]: pgmap v1939: 321 pgs: 321 active+clean; 538 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 5.0 MiB/s rd, 7.7 MiB/s wr, 347 op/s
Jan 20 14:52:02 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/640400790' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:52:02 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/1432111964' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:52:02 compute-0 podman[318682]: 2026-01-20 14:52:02.989238243 +0000 UTC m=+0.093201829 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent)
Jan 20 14:52:02 compute-0 nova_compute[250018]: 2026-01-20 14:52:02.994 250022 DEBUG nova.storage.rbd_utils [None req-3b27b052-7348-4146-a47c-649b2abea55e 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] rbd image 7f5cfffe-c1dc-4b00-844e-0fb35b340f44_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:52:03 compute-0 podman[318681]: 2026-01-20 14:52:03.023968868 +0000 UTC m=+0.128105838 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, 
org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 20 14:52:03 compute-0 nova_compute[250018]: 2026-01-20 14:52:03.272 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:52:03 compute-0 nova_compute[250018]: 2026-01-20 14:52:03.441 250022 INFO nova.virt.libvirt.driver [None req-3b27b052-7348-4146-a47c-649b2abea55e 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Creating config drive at /var/lib/nova/instances/7f5cfffe-c1dc-4b00-844e-0fb35b340f44/disk.config
Jan 20 14:52:03 compute-0 nova_compute[250018]: 2026-01-20 14:52:03.451 250022 DEBUG oslo_concurrency.processutils [None req-3b27b052-7348-4146-a47c-649b2abea55e 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7f5cfffe-c1dc-4b00-844e-0fb35b340f44/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjwz6x53m execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:52:03 compute-0 nova_compute[250018]: 2026-01-20 14:52:03.600 250022 DEBUG oslo_concurrency.processutils [None req-3b27b052-7348-4146-a47c-649b2abea55e 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7f5cfffe-c1dc-4b00-844e-0fb35b340f44/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjwz6x53m" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:52:03 compute-0 systemd[1]: machine-qemu\x2d53\x2dinstance\x2d00000073.scope: Deactivated successfully.
Jan 20 14:52:03 compute-0 systemd[1]: machine-qemu\x2d53\x2dinstance\x2d00000073.scope: Consumed 13.265s CPU time.
Jan 20 14:52:03 compute-0 systemd-machined[216401]: Machine qemu-53-instance-00000073 terminated.
Jan 20 14:52:03 compute-0 nova_compute[250018]: 2026-01-20 14:52:03.641 250022 DEBUG nova.storage.rbd_utils [None req-3b27b052-7348-4146-a47c-649b2abea55e 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] rbd image 7f5cfffe-c1dc-4b00-844e-0fb35b340f44_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:52:03 compute-0 nova_compute[250018]: 2026-01-20 14:52:03.647 250022 DEBUG oslo_concurrency.processutils [None req-3b27b052-7348-4146-a47c-649b2abea55e 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7f5cfffe-c1dc-4b00-844e-0fb35b340f44/disk.config 7f5cfffe-c1dc-4b00-844e-0fb35b340f44_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:52:03 compute-0 nova_compute[250018]: 2026-01-20 14:52:03.728 250022 DEBUG nova.network.neutron [req-61e9f415-ca64-4838-b908-ec26f9830472 req-10d95289-be43-4b0c-8326-5a129fc62eaf 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Updated VIF entry in instance network info cache for port 87b0cab5-af2f-4440-8f58-840860a23f68. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 14:52:03 compute-0 nova_compute[250018]: 2026-01-20 14:52:03.729 250022 DEBUG nova.network.neutron [req-61e9f415-ca64-4838-b908-ec26f9830472 req-10d95289-be43-4b0c-8326-5a129fc62eaf 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Updating instance_info_cache with network_info: [{"id": "87b0cab5-af2f-4440-8f58-840860a23f68", "address": "fa:16:3e:2b:79:1b", "network": {"id": "41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1445030024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3b1b7f5b4f84b5abbc401eb577c85c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap87b0cab5-af", "ovs_interfaceid": "87b0cab5-af2f-4440-8f58-840860a23f68", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:52:03 compute-0 nova_compute[250018]: 2026-01-20 14:52:03.752 250022 DEBUG oslo_concurrency.lockutils [req-61e9f415-ca64-4838-b908-ec26f9830472 req-10d95289-be43-4b0c-8326-5a129fc62eaf 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-7f5cfffe-c1dc-4b00-844e-0fb35b340f44" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:52:03 compute-0 nova_compute[250018]: 2026-01-20 14:52:03.794 250022 DEBUG oslo_concurrency.processutils [None req-3b27b052-7348-4146-a47c-649b2abea55e 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7f5cfffe-c1dc-4b00-844e-0fb35b340f44/disk.config 7f5cfffe-c1dc-4b00-844e-0fb35b340f44_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:52:03 compute-0 nova_compute[250018]: 2026-01-20 14:52:03.795 250022 INFO nova.virt.libvirt.driver [None req-3b27b052-7348-4146-a47c-649b2abea55e 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Deleting local config drive /var/lib/nova/instances/7f5cfffe-c1dc-4b00-844e-0fb35b340f44/disk.config because it was imported into RBD.
Jan 20 14:52:03 compute-0 kernel: tap87b0cab5-af: entered promiscuous mode
Jan 20 14:52:03 compute-0 NetworkManager[48960]: <info>  [1768920723.8468] manager: (tap87b0cab5-af): new Tun device (/org/freedesktop/NetworkManager/Devices/208)
Jan 20 14:52:03 compute-0 systemd-udevd[318761]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:52:03 compute-0 NetworkManager[48960]: <info>  [1768920723.8623] device (tap87b0cab5-af): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 14:52:03 compute-0 NetworkManager[48960]: <info>  [1768920723.8636] device (tap87b0cab5-af): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 14:52:03 compute-0 nova_compute[250018]: 2026-01-20 14:52:03.886 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:52:03 compute-0 ovn_controller[148666]: 2026-01-20T14:52:03Z|00405|binding|INFO|Claiming lport 87b0cab5-af2f-4440-8f58-840860a23f68 for this chassis.
Jan 20 14:52:03 compute-0 ovn_controller[148666]: 2026-01-20T14:52:03Z|00406|binding|INFO|87b0cab5-af2f-4440-8f58-840860a23f68: Claiming fa:16:3e:2b:79:1b 10.100.0.9
Jan 20 14:52:03 compute-0 nova_compute[250018]: 2026-01-20 14:52:03.889 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:52:03 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:52:03.896 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2b:79:1b 10.100.0.9'], port_security=['fa:16:3e:2b:79:1b 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '7f5cfffe-c1dc-4b00-844e-0fb35b340f44', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b3b1b7f5b4f84b5abbc401eb577c85c0', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8b11f3fb-2601-4eca-a1b6-838549d7750c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3273589e-5585-406c-9611-87f758b0e521, chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=87b0cab5-af2f-4440-8f58-840860a23f68) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:52:03 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:52:03.897 160071 INFO neutron.agent.ovn.metadata.agent [-] Port 87b0cab5-af2f-4440-8f58-840860a23f68 in datapath 41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce bound to our chassis
Jan 20 14:52:03 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:52:03.898 160071 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce
Jan 20 14:52:03 compute-0 systemd-machined[216401]: New machine qemu-54-instance-00000076.
Jan 20 14:52:03 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:52:03.910 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[d6bf621f-6ecf-4ce2-bdce-82e61627f494]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:52:03 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:52:03.911 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap41a1a3fe-f1 in ovnmeta-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 14:52:03 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1940: 321 pgs: 321 active+clean; 560 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 8.8 MiB/s wr, 345 op/s
Jan 20 14:52:03 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:52:03.913 257604 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap41a1a3fe-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 14:52:03 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:52:03.913 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[3c5fb417-9b56-4eee-a1e1-7fbef782af7b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:52:03 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:52:03.914 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[007d5914-c184-479c-81f3-0a4d2ba56af6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:52:03 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:52:03.926 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[eb1612df-b323-4505-9dd7-9fdd90abe16a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:52:03 compute-0 systemd[1]: Started Virtual Machine qemu-54-instance-00000076.
Jan 20 14:52:03 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:52:03.950 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[f8a41435-53b9-4a84-a141-6a8daedb05a4]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:52:03 compute-0 ovn_controller[148666]: 2026-01-20T14:52:03Z|00407|binding|INFO|Setting lport 87b0cab5-af2f-4440-8f58-840860a23f68 ovn-installed in OVS
Jan 20 14:52:03 compute-0 ovn_controller[148666]: 2026-01-20T14:52:03Z|00408|binding|INFO|Setting lport 87b0cab5-af2f-4440-8f58-840860a23f68 up in Southbound
Jan 20 14:52:03 compute-0 nova_compute[250018]: 2026-01-20 14:52:03.970 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:52:03 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/3728354836' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:52:03 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:52:03.984 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[3276227e-70af-4441-a85a-b5825fa14533]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:52:03 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:52:03.990 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[33902b66-ae59-4349-8c5d-b16cfd0c520c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:52:03 compute-0 NetworkManager[48960]: <info>  [1768920723.9920] manager: (tap41a1a3fe-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/209)
Jan 20 14:52:04 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:52:04.027 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[c7243c6b-3941-49b3-8644-cee850364f3e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:52:04 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:52:04.032 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[fe41e9c2-00a2-4df8-9591-f49f925bef9f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:52:04 compute-0 NetworkManager[48960]: <info>  [1768920724.0523] device (tap41a1a3fe-f0): carrier: link connected
Jan 20 14:52:04 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:52:04.058 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[0996a866-d9b3-49ee-8193-d2fa2e5cb084]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:52:04 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:52:04.077 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[2d701704-2db7-421a-9d93-72bdb287b650]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap41a1a3fe-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3c:1f:b5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 136], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 666477, 'reachable_time': 32123, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 318828, 'error': None, 'target': 'ovnmeta-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:52:04 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:52:04.091 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[366b9c85-d3ee-48df-8a97-2a6dcf087ce3]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe3c:1fb5'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 666477, 'tstamp': 666477}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 318829, 'error': None, 'target': 'ovnmeta-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:52:04 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:52:04.111 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[188140ec-ae33-4025-ab15-7169ed1d83f7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap41a1a3fe-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3c:1f:b5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 136], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 666477, 'reachable_time': 32123, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 318830, 'error': None, 'target': 'ovnmeta-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:52:04 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:52:04.140 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[7950d434-a576-4b95-b038-8a2deb8c9611]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:52:04 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:52:04.195 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[64e94c4e-2057-40d2-982e-e2e3b09c8256]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:52:04 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:52:04.197 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap41a1a3fe-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:52:04 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:52:04.197 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:52:04 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:52:04.198 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap41a1a3fe-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:52:04 compute-0 NetworkManager[48960]: <info>  [1768920724.2000] manager: (tap41a1a3fe-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/210)
Jan 20 14:52:04 compute-0 kernel: tap41a1a3fe-f0: entered promiscuous mode
Jan 20 14:52:04 compute-0 nova_compute[250018]: 2026-01-20 14:52:04.199 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:52:04 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:52:04.204 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap41a1a3fe-f0, col_values=(('external_ids', {'iface-id': '3fa2df7b-42b2-4a3b-a33b-ab37b5d6aef3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:52:04 compute-0 nova_compute[250018]: 2026-01-20 14:52:04.205 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:52:04 compute-0 ovn_controller[148666]: 2026-01-20T14:52:04Z|00409|binding|INFO|Releasing lport 3fa2df7b-42b2-4a3b-a33b-ab37b5d6aef3 from this chassis (sb_readonly=0)
Jan 20 14:52:04 compute-0 nova_compute[250018]: 2026-01-20 14:52:04.206 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:52:04 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:52:04.208 160071 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 14:52:04 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:52:04.208 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[8eb89154-e0fb-49f3-b54f-dffd2fc8899c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:52:04 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:52:04.209 160071 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 14:52:04 compute-0 ovn_metadata_agent[160049]: global
Jan 20 14:52:04 compute-0 ovn_metadata_agent[160049]:     log         /dev/log local0 debug
Jan 20 14:52:04 compute-0 ovn_metadata_agent[160049]:     log-tag     haproxy-metadata-proxy-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce
Jan 20 14:52:04 compute-0 ovn_metadata_agent[160049]:     user        root
Jan 20 14:52:04 compute-0 ovn_metadata_agent[160049]:     group       root
Jan 20 14:52:04 compute-0 ovn_metadata_agent[160049]:     maxconn     1024
Jan 20 14:52:04 compute-0 ovn_metadata_agent[160049]:     pidfile     /var/lib/neutron/external/pids/41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce.pid.haproxy
Jan 20 14:52:04 compute-0 ovn_metadata_agent[160049]:     daemon
Jan 20 14:52:04 compute-0 ovn_metadata_agent[160049]: 
Jan 20 14:52:04 compute-0 ovn_metadata_agent[160049]: defaults
Jan 20 14:52:04 compute-0 ovn_metadata_agent[160049]:     log global
Jan 20 14:52:04 compute-0 ovn_metadata_agent[160049]:     mode http
Jan 20 14:52:04 compute-0 ovn_metadata_agent[160049]:     option httplog
Jan 20 14:52:04 compute-0 ovn_metadata_agent[160049]:     option dontlognull
Jan 20 14:52:04 compute-0 ovn_metadata_agent[160049]:     option http-server-close
Jan 20 14:52:04 compute-0 ovn_metadata_agent[160049]:     option forwardfor
Jan 20 14:52:04 compute-0 ovn_metadata_agent[160049]:     retries                 3
Jan 20 14:52:04 compute-0 ovn_metadata_agent[160049]:     timeout http-request    30s
Jan 20 14:52:04 compute-0 ovn_metadata_agent[160049]:     timeout connect         30s
Jan 20 14:52:04 compute-0 ovn_metadata_agent[160049]:     timeout client          32s
Jan 20 14:52:04 compute-0 ovn_metadata_agent[160049]:     timeout server          32s
Jan 20 14:52:04 compute-0 ovn_metadata_agent[160049]:     timeout http-keep-alive 30s
Jan 20 14:52:04 compute-0 ovn_metadata_agent[160049]: 
Jan 20 14:52:04 compute-0 ovn_metadata_agent[160049]: 
Jan 20 14:52:04 compute-0 ovn_metadata_agent[160049]: listen listener
Jan 20 14:52:04 compute-0 ovn_metadata_agent[160049]:     bind 169.254.169.254:80
Jan 20 14:52:04 compute-0 ovn_metadata_agent[160049]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 14:52:04 compute-0 ovn_metadata_agent[160049]:     http-request add-header X-OVN-Network-ID 41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce
Jan 20 14:52:04 compute-0 ovn_metadata_agent[160049]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 14:52:04 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:52:04.210 160071 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce', 'env', 'PROCESS_TAG=haproxy-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 14:52:04 compute-0 nova_compute[250018]: 2026-01-20 14:52:04.220 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:52:04 compute-0 nova_compute[250018]: 2026-01-20 14:52:04.298 250022 INFO nova.virt.libvirt.driver [None req-42aebf58-38d3-43ca-a586-afbe64386435 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] [instance: 8392c22f-8472-42e6-bd6b-68724d9e0734] Instance shutdown successfully after 14 seconds.
Jan 20 14:52:04 compute-0 nova_compute[250018]: 2026-01-20 14:52:04.305 250022 INFO nova.virt.libvirt.driver [-] [instance: 8392c22f-8472-42e6-bd6b-68724d9e0734] Instance destroyed successfully.
Jan 20 14:52:04 compute-0 nova_compute[250018]: 2026-01-20 14:52:04.310 250022 INFO nova.virt.libvirt.driver [-] [instance: 8392c22f-8472-42e6-bd6b-68724d9e0734] Instance destroyed successfully.
Jan 20 14:52:04 compute-0 podman[318918]: 2026-01-20 14:52:04.572274837 +0000 UTC m=+0.052094913 container create 5b37b34cff7556e2d1bba6364f0f1db2ef866bab070556c37f86b76011bc5659 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 20 14:52:04 compute-0 nova_compute[250018]: 2026-01-20 14:52:04.596 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768920724.596632, 7f5cfffe-c1dc-4b00-844e-0fb35b340f44 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:52:04 compute-0 nova_compute[250018]: 2026-01-20 14:52:04.597 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] VM Started (Lifecycle Event)
Jan 20 14:52:04 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e261 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:52:04 compute-0 systemd[1]: Started libpod-conmon-5b37b34cff7556e2d1bba6364f0f1db2ef866bab070556c37f86b76011bc5659.scope.
Jan 20 14:52:04 compute-0 nova_compute[250018]: 2026-01-20 14:52:04.620 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:52:04 compute-0 nova_compute[250018]: 2026-01-20 14:52:04.624 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768920724.5973082, 7f5cfffe-c1dc-4b00-844e-0fb35b340f44 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:52:04 compute-0 nova_compute[250018]: 2026-01-20 14:52:04.625 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] VM Paused (Lifecycle Event)
Jan 20 14:52:04 compute-0 podman[318918]: 2026-01-20 14:52:04.54564516 +0000 UTC m=+0.025465266 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 14:52:04 compute-0 nova_compute[250018]: 2026-01-20 14:52:04.641 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:52:04 compute-0 nova_compute[250018]: 2026-01-20 14:52:04.645 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:52:04 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:52:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c961241b07c1a1ce872fb46b5509192d775baf97df50ff9c51115f8c0f98866/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 14:52:04 compute-0 nova_compute[250018]: 2026-01-20 14:52:04.665 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:52:04 compute-0 podman[318918]: 2026-01-20 14:52:04.670004627 +0000 UTC m=+0.149824703 container init 5b37b34cff7556e2d1bba6364f0f1db2ef866bab070556c37f86b76011bc5659 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:52:04 compute-0 podman[318918]: 2026-01-20 14:52:04.676323627 +0000 UTC m=+0.156143703 container start 5b37b34cff7556e2d1bba6364f0f1db2ef866bab070556c37f86b76011bc5659 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 20 14:52:04 compute-0 neutron-haproxy-ovnmeta-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce[318936]: [NOTICE]   (318940) : New worker (318942) forked
Jan 20 14:52:04 compute-0 neutron-haproxy-ovnmeta-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce[318936]: [NOTICE]   (318940) : Loading success.
Jan 20 14:52:04 compute-0 nova_compute[250018]: 2026-01-20 14:52:04.701 250022 INFO nova.virt.libvirt.driver [None req-42aebf58-38d3-43ca-a586-afbe64386435 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] [instance: 8392c22f-8472-42e6-bd6b-68724d9e0734] Deleting instance files /var/lib/nova/instances/8392c22f-8472-42e6-bd6b-68724d9e0734_del
Jan 20 14:52:04 compute-0 nova_compute[250018]: 2026-01-20 14:52:04.702 250022 INFO nova.virt.libvirt.driver [None req-42aebf58-38d3-43ca-a586-afbe64386435 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] [instance: 8392c22f-8472-42e6-bd6b-68724d9e0734] Deletion of /var/lib/nova/instances/8392c22f-8472-42e6-bd6b-68724d9e0734_del complete
Jan 20 14:52:04 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:52:04 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:52:04 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:52:04.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:52:04 compute-0 nova_compute[250018]: 2026-01-20 14:52:04.844 250022 DEBUG nova.virt.libvirt.driver [None req-42aebf58-38d3-43ca-a586-afbe64386435 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] [instance: 8392c22f-8472-42e6-bd6b-68724d9e0734] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 14:52:04 compute-0 nova_compute[250018]: 2026-01-20 14:52:04.845 250022 INFO nova.virt.libvirt.driver [None req-42aebf58-38d3-43ca-a586-afbe64386435 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] [instance: 8392c22f-8472-42e6-bd6b-68724d9e0734] Creating image(s)
Jan 20 14:52:04 compute-0 nova_compute[250018]: 2026-01-20 14:52:04.872 250022 DEBUG nova.storage.rbd_utils [None req-42aebf58-38d3-43ca-a586-afbe64386435 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] rbd image 8392c22f-8472-42e6-bd6b-68724d9e0734_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:52:04 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:52:04 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:52:04 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:52:04.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:52:04 compute-0 nova_compute[250018]: 2026-01-20 14:52:04.898 250022 DEBUG nova.storage.rbd_utils [None req-42aebf58-38d3-43ca-a586-afbe64386435 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] rbd image 8392c22f-8472-42e6-bd6b-68724d9e0734_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:52:04 compute-0 nova_compute[250018]: 2026-01-20 14:52:04.925 250022 DEBUG nova.storage.rbd_utils [None req-42aebf58-38d3-43ca-a586-afbe64386435 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] rbd image 8392c22f-8472-42e6-bd6b-68724d9e0734_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:52:04 compute-0 nova_compute[250018]: 2026-01-20 14:52:04.929 250022 DEBUG oslo_concurrency.processutils [None req-42aebf58-38d3-43ca-a586-afbe64386435 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a4ed0d2b98aa460c005e878d78a49ccb6f511f7c --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:52:05 compute-0 nova_compute[250018]: 2026-01-20 14:52:04.999 250022 DEBUG oslo_concurrency.processutils [None req-42aebf58-38d3-43ca-a586-afbe64386435 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a4ed0d2b98aa460c005e878d78a49ccb6f511f7c --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:52:05 compute-0 nova_compute[250018]: 2026-01-20 14:52:05.000 250022 DEBUG oslo_concurrency.lockutils [None req-42aebf58-38d3-43ca-a586-afbe64386435 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] Acquiring lock "a4ed0d2b98aa460c005e878d78a49ccb6f511f7c" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:52:05 compute-0 nova_compute[250018]: 2026-01-20 14:52:05.001 250022 DEBUG oslo_concurrency.lockutils [None req-42aebf58-38d3-43ca-a586-afbe64386435 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] Lock "a4ed0d2b98aa460c005e878d78a49ccb6f511f7c" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:52:05 compute-0 nova_compute[250018]: 2026-01-20 14:52:05.001 250022 DEBUG oslo_concurrency.lockutils [None req-42aebf58-38d3-43ca-a586-afbe64386435 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] Lock "a4ed0d2b98aa460c005e878d78a49ccb6f511f7c" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:52:05 compute-0 ceph-mon[74360]: pgmap v1940: 321 pgs: 321 active+clean; 560 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 8.8 MiB/s wr, 345 op/s
Jan 20 14:52:05 compute-0 nova_compute[250018]: 2026-01-20 14:52:05.027 250022 DEBUG nova.storage.rbd_utils [None req-42aebf58-38d3-43ca-a586-afbe64386435 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] rbd image 8392c22f-8472-42e6-bd6b-68724d9e0734_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:52:05 compute-0 nova_compute[250018]: 2026-01-20 14:52:05.030 250022 DEBUG oslo_concurrency.processutils [None req-42aebf58-38d3-43ca-a586-afbe64386435 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a4ed0d2b98aa460c005e878d78a49ccb6f511f7c 8392c22f-8472-42e6-bd6b-68724d9e0734_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:52:05 compute-0 nova_compute[250018]: 2026-01-20 14:52:05.317 250022 DEBUG oslo_concurrency.processutils [None req-42aebf58-38d3-43ca-a586-afbe64386435 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a4ed0d2b98aa460c005e878d78a49ccb6f511f7c 8392c22f-8472-42e6-bd6b-68724d9e0734_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.287s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:52:05 compute-0 nova_compute[250018]: 2026-01-20 14:52:05.392 250022 DEBUG nova.storage.rbd_utils [None req-42aebf58-38d3-43ca-a586-afbe64386435 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] resizing rbd image 8392c22f-8472-42e6-bd6b-68724d9e0734_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 20 14:52:05 compute-0 nova_compute[250018]: 2026-01-20 14:52:05.506 250022 DEBUG nova.virt.libvirt.driver [None req-42aebf58-38d3-43ca-a586-afbe64386435 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] [instance: 8392c22f-8472-42e6-bd6b-68724d9e0734] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 14:52:05 compute-0 nova_compute[250018]: 2026-01-20 14:52:05.507 250022 DEBUG nova.virt.libvirt.driver [None req-42aebf58-38d3-43ca-a586-afbe64386435 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] [instance: 8392c22f-8472-42e6-bd6b-68724d9e0734] Ensure instance console log exists: /var/lib/nova/instances/8392c22f-8472-42e6-bd6b-68724d9e0734/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 14:52:05 compute-0 nova_compute[250018]: 2026-01-20 14:52:05.507 250022 DEBUG oslo_concurrency.lockutils [None req-42aebf58-38d3-43ca-a586-afbe64386435 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:52:05 compute-0 nova_compute[250018]: 2026-01-20 14:52:05.508 250022 DEBUG oslo_concurrency.lockutils [None req-42aebf58-38d3-43ca-a586-afbe64386435 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:52:05 compute-0 nova_compute[250018]: 2026-01-20 14:52:05.508 250022 DEBUG oslo_concurrency.lockutils [None req-42aebf58-38d3-43ca-a586-afbe64386435 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:52:05 compute-0 nova_compute[250018]: 2026-01-20 14:52:05.509 250022 DEBUG nova.virt.libvirt.driver [None req-42aebf58-38d3-43ca-a586-afbe64386435 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] [instance: 8392c22f-8472-42e6-bd6b-68724d9e0734] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:22:02Z,direct_url=<?>,disk_format='qcow2',id=26699514-f465-4b50-98b7-36f2cfc6a308,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:04Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encrypted': False, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'guest_format': None, 'size': 0, 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 14:52:05 compute-0 nova_compute[250018]: 2026-01-20 14:52:05.514 250022 WARNING nova.virt.libvirt.driver [None req-42aebf58-38d3-43ca-a586-afbe64386435 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Jan 20 14:52:05 compute-0 nova_compute[250018]: 2026-01-20 14:52:05.519 250022 DEBUG nova.virt.libvirt.host [None req-42aebf58-38d3-43ca-a586-afbe64386435 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 14:52:05 compute-0 nova_compute[250018]: 2026-01-20 14:52:05.519 250022 DEBUG nova.virt.libvirt.host [None req-42aebf58-38d3-43ca-a586-afbe64386435 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 14:52:05 compute-0 nova_compute[250018]: 2026-01-20 14:52:05.527 250022 DEBUG nova.virt.libvirt.host [None req-42aebf58-38d3-43ca-a586-afbe64386435 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 14:52:05 compute-0 nova_compute[250018]: 2026-01-20 14:52:05.527 250022 DEBUG nova.virt.libvirt.host [None req-42aebf58-38d3-43ca-a586-afbe64386435 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 14:52:05 compute-0 nova_compute[250018]: 2026-01-20 14:52:05.528 250022 DEBUG nova.virt.libvirt.driver [None req-42aebf58-38d3-43ca-a586-afbe64386435 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 14:52:05 compute-0 nova_compute[250018]: 2026-01-20 14:52:05.529 250022 DEBUG nova.virt.hardware [None req-42aebf58-38d3-43ca-a586-afbe64386435 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:22:02Z,direct_url=<?>,disk_format='qcow2',id=26699514-f465-4b50-98b7-36f2cfc6a308,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:04Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 14:52:05 compute-0 nova_compute[250018]: 2026-01-20 14:52:05.529 250022 DEBUG nova.virt.hardware [None req-42aebf58-38d3-43ca-a586-afbe64386435 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 14:52:05 compute-0 nova_compute[250018]: 2026-01-20 14:52:05.529 250022 DEBUG nova.virt.hardware [None req-42aebf58-38d3-43ca-a586-afbe64386435 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 14:52:05 compute-0 nova_compute[250018]: 2026-01-20 14:52:05.529 250022 DEBUG nova.virt.hardware [None req-42aebf58-38d3-43ca-a586-afbe64386435 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 14:52:05 compute-0 nova_compute[250018]: 2026-01-20 14:52:05.530 250022 DEBUG nova.virt.hardware [None req-42aebf58-38d3-43ca-a586-afbe64386435 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 14:52:05 compute-0 nova_compute[250018]: 2026-01-20 14:52:05.530 250022 DEBUG nova.virt.hardware [None req-42aebf58-38d3-43ca-a586-afbe64386435 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 14:52:05 compute-0 nova_compute[250018]: 2026-01-20 14:52:05.530 250022 DEBUG nova.virt.hardware [None req-42aebf58-38d3-43ca-a586-afbe64386435 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 14:52:05 compute-0 nova_compute[250018]: 2026-01-20 14:52:05.530 250022 DEBUG nova.virt.hardware [None req-42aebf58-38d3-43ca-a586-afbe64386435 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 14:52:05 compute-0 nova_compute[250018]: 2026-01-20 14:52:05.530 250022 DEBUG nova.virt.hardware [None req-42aebf58-38d3-43ca-a586-afbe64386435 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 14:52:05 compute-0 nova_compute[250018]: 2026-01-20 14:52:05.531 250022 DEBUG nova.virt.hardware [None req-42aebf58-38d3-43ca-a586-afbe64386435 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 14:52:05 compute-0 nova_compute[250018]: 2026-01-20 14:52:05.531 250022 DEBUG nova.virt.hardware [None req-42aebf58-38d3-43ca-a586-afbe64386435 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 14:52:05 compute-0 nova_compute[250018]: 2026-01-20 14:52:05.531 250022 DEBUG nova.objects.instance [None req-42aebf58-38d3-43ca-a586-afbe64386435 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 8392c22f-8472-42e6-bd6b-68724d9e0734 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:52:05 compute-0 nova_compute[250018]: 2026-01-20 14:52:05.602 250022 DEBUG oslo_concurrency.processutils [None req-42aebf58-38d3-43ca-a586-afbe64386435 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:52:05 compute-0 nova_compute[250018]: 2026-01-20 14:52:05.676 250022 DEBUG nova.compute.manager [req-6a776200-4fb9-4907-8f09-a877e907cecf req-e8e40292-a892-4dda-803d-909215544d93 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Received event network-vif-plugged-87b0cab5-af2f-4440-8f58-840860a23f68 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:52:05 compute-0 nova_compute[250018]: 2026-01-20 14:52:05.677 250022 DEBUG oslo_concurrency.lockutils [req-6a776200-4fb9-4907-8f09-a877e907cecf req-e8e40292-a892-4dda-803d-909215544d93 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "7f5cfffe-c1dc-4b00-844e-0fb35b340f44-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:52:05 compute-0 nova_compute[250018]: 2026-01-20 14:52:05.677 250022 DEBUG oslo_concurrency.lockutils [req-6a776200-4fb9-4907-8f09-a877e907cecf req-e8e40292-a892-4dda-803d-909215544d93 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "7f5cfffe-c1dc-4b00-844e-0fb35b340f44-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:52:05 compute-0 nova_compute[250018]: 2026-01-20 14:52:05.677 250022 DEBUG oslo_concurrency.lockutils [req-6a776200-4fb9-4907-8f09-a877e907cecf req-e8e40292-a892-4dda-803d-909215544d93 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "7f5cfffe-c1dc-4b00-844e-0fb35b340f44-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:52:05 compute-0 nova_compute[250018]: 2026-01-20 14:52:05.677 250022 DEBUG nova.compute.manager [req-6a776200-4fb9-4907-8f09-a877e907cecf req-e8e40292-a892-4dda-803d-909215544d93 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Processing event network-vif-plugged-87b0cab5-af2f-4440-8f58-840860a23f68 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 20 14:52:05 compute-0 nova_compute[250018]: 2026-01-20 14:52:05.678 250022 DEBUG nova.compute.manager [None req-3b27b052-7348-4146-a47c-649b2abea55e 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 14:52:05 compute-0 nova_compute[250018]: 2026-01-20 14:52:05.682 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768920725.6810706, 7f5cfffe-c1dc-4b00-844e-0fb35b340f44 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:52:05 compute-0 nova_compute[250018]: 2026-01-20 14:52:05.682 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] VM Resumed (Lifecycle Event)
Jan 20 14:52:05 compute-0 nova_compute[250018]: 2026-01-20 14:52:05.685 250022 DEBUG nova.virt.libvirt.driver [None req-3b27b052-7348-4146-a47c-649b2abea55e 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 14:52:05 compute-0 nova_compute[250018]: 2026-01-20 14:52:05.689 250022 INFO nova.virt.libvirt.driver [-] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Instance spawned successfully.
Jan 20 14:52:05 compute-0 nova_compute[250018]: 2026-01-20 14:52:05.690 250022 DEBUG nova.virt.libvirt.driver [None req-3b27b052-7348-4146-a47c-649b2abea55e 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 14:52:05 compute-0 nova_compute[250018]: 2026-01-20 14:52:05.703 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:52:05 compute-0 nova_compute[250018]: 2026-01-20 14:52:05.706 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:52:05 compute-0 nova_compute[250018]: 2026-01-20 14:52:05.725 250022 DEBUG nova.virt.libvirt.driver [None req-3b27b052-7348-4146-a47c-649b2abea55e 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:52:05 compute-0 nova_compute[250018]: 2026-01-20 14:52:05.726 250022 DEBUG nova.virt.libvirt.driver [None req-3b27b052-7348-4146-a47c-649b2abea55e 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:52:05 compute-0 nova_compute[250018]: 2026-01-20 14:52:05.727 250022 DEBUG nova.virt.libvirt.driver [None req-3b27b052-7348-4146-a47c-649b2abea55e 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:52:05 compute-0 nova_compute[250018]: 2026-01-20 14:52:05.727 250022 DEBUG nova.virt.libvirt.driver [None req-3b27b052-7348-4146-a47c-649b2abea55e 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:52:05 compute-0 nova_compute[250018]: 2026-01-20 14:52:05.728 250022 DEBUG nova.virt.libvirt.driver [None req-3b27b052-7348-4146-a47c-649b2abea55e 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:52:05 compute-0 nova_compute[250018]: 2026-01-20 14:52:05.729 250022 DEBUG nova.virt.libvirt.driver [None req-3b27b052-7348-4146-a47c-649b2abea55e 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:52:05 compute-0 nova_compute[250018]: 2026-01-20 14:52:05.735 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:52:05 compute-0 nova_compute[250018]: 2026-01-20 14:52:05.787 250022 INFO nova.compute.manager [None req-3b27b052-7348-4146-a47c-649b2abea55e 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Took 6.93 seconds to spawn the instance on the hypervisor.
Jan 20 14:52:05 compute-0 nova_compute[250018]: 2026-01-20 14:52:05.787 250022 DEBUG nova.compute.manager [None req-3b27b052-7348-4146-a47c-649b2abea55e 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:52:05 compute-0 nova_compute[250018]: 2026-01-20 14:52:05.867 250022 INFO nova.compute.manager [None req-3b27b052-7348-4146-a47c-649b2abea55e 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Took 7.91 seconds to build instance.
Jan 20 14:52:05 compute-0 nova_compute[250018]: 2026-01-20 14:52:05.894 250022 DEBUG oslo_concurrency.lockutils [None req-3b27b052-7348-4146-a47c-649b2abea55e 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Lock "7f5cfffe-c1dc-4b00-844e-0fb35b340f44" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.015s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:52:05 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1941: 321 pgs: 321 active+clean; 516 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 5.5 MiB/s rd, 9.6 MiB/s wr, 467 op/s
Jan 20 14:52:06 compute-0 nova_compute[250018]: 2026-01-20 14:52:06.121 250022 DEBUG oslo_concurrency.processutils [None req-42aebf58-38d3-43ca-a586-afbe64386435 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:52:06 compute-0 nova_compute[250018]: 2026-01-20 14:52:06.144 250022 DEBUG nova.storage.rbd_utils [None req-42aebf58-38d3-43ca-a586-afbe64386435 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] rbd image 8392c22f-8472-42e6-bd6b-68724d9e0734_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:52:06 compute-0 nova_compute[250018]: 2026-01-20 14:52:06.149 250022 DEBUG oslo_concurrency.processutils [None req-42aebf58-38d3-43ca-a586-afbe64386435 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:52:06 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:52:06 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1459808343' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:52:06 compute-0 nova_compute[250018]: 2026-01-20 14:52:06.587 250022 DEBUG oslo_concurrency.processutils [None req-42aebf58-38d3-43ca-a586-afbe64386435 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:52:06 compute-0 nova_compute[250018]: 2026-01-20 14:52:06.591 250022 DEBUG nova.virt.libvirt.driver [None req-42aebf58-38d3-43ca-a586-afbe64386435 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] [instance: 8392c22f-8472-42e6-bd6b-68724d9e0734] End _get_guest_xml xml=<domain type="kvm">
Jan 20 14:52:06 compute-0 nova_compute[250018]:   <uuid>8392c22f-8472-42e6-bd6b-68724d9e0734</uuid>
Jan 20 14:52:06 compute-0 nova_compute[250018]:   <name>instance-00000073</name>
Jan 20 14:52:06 compute-0 nova_compute[250018]:   <memory>131072</memory>
Jan 20 14:52:06 compute-0 nova_compute[250018]:   <vcpu>1</vcpu>
Jan 20 14:52:06 compute-0 nova_compute[250018]:   <metadata>
Jan 20 14:52:06 compute-0 nova_compute[250018]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 14:52:06 compute-0 nova_compute[250018]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 14:52:06 compute-0 nova_compute[250018]:       <nova:name>tempest-ServerShowV247Test-server-592811457</nova:name>
Jan 20 14:52:06 compute-0 nova_compute[250018]:       <nova:creationTime>2026-01-20 14:52:05</nova:creationTime>
Jan 20 14:52:06 compute-0 nova_compute[250018]:       <nova:flavor name="m1.nano">
Jan 20 14:52:06 compute-0 nova_compute[250018]:         <nova:memory>128</nova:memory>
Jan 20 14:52:06 compute-0 nova_compute[250018]:         <nova:disk>1</nova:disk>
Jan 20 14:52:06 compute-0 nova_compute[250018]:         <nova:swap>0</nova:swap>
Jan 20 14:52:06 compute-0 nova_compute[250018]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 14:52:06 compute-0 nova_compute[250018]:         <nova:vcpus>1</nova:vcpus>
Jan 20 14:52:06 compute-0 nova_compute[250018]:       </nova:flavor>
Jan 20 14:52:06 compute-0 nova_compute[250018]:       <nova:owner>
Jan 20 14:52:06 compute-0 nova_compute[250018]:         <nova:user uuid="cdcdce94e7354b3bafb34285408888b9">tempest-ServerShowV247Test-1508434892-project-member</nova:user>
Jan 20 14:52:06 compute-0 nova_compute[250018]:         <nova:project uuid="ecfc3366b9194864a3f15ce0114b5ee3">tempest-ServerShowV247Test-1508434892</nova:project>
Jan 20 14:52:06 compute-0 nova_compute[250018]:       </nova:owner>
Jan 20 14:52:06 compute-0 nova_compute[250018]:       <nova:root type="image" uuid="26699514-f465-4b50-98b7-36f2cfc6a308"/>
Jan 20 14:52:06 compute-0 nova_compute[250018]:       <nova:ports/>
Jan 20 14:52:06 compute-0 nova_compute[250018]:     </nova:instance>
Jan 20 14:52:06 compute-0 nova_compute[250018]:   </metadata>
Jan 20 14:52:06 compute-0 nova_compute[250018]:   <sysinfo type="smbios">
Jan 20 14:52:06 compute-0 nova_compute[250018]:     <system>
Jan 20 14:52:06 compute-0 nova_compute[250018]:       <entry name="manufacturer">RDO</entry>
Jan 20 14:52:06 compute-0 nova_compute[250018]:       <entry name="product">OpenStack Compute</entry>
Jan 20 14:52:06 compute-0 nova_compute[250018]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 14:52:06 compute-0 nova_compute[250018]:       <entry name="serial">8392c22f-8472-42e6-bd6b-68724d9e0734</entry>
Jan 20 14:52:06 compute-0 nova_compute[250018]:       <entry name="uuid">8392c22f-8472-42e6-bd6b-68724d9e0734</entry>
Jan 20 14:52:06 compute-0 nova_compute[250018]:       <entry name="family">Virtual Machine</entry>
Jan 20 14:52:06 compute-0 nova_compute[250018]:     </system>
Jan 20 14:52:06 compute-0 nova_compute[250018]:   </sysinfo>
Jan 20 14:52:06 compute-0 nova_compute[250018]:   <os>
Jan 20 14:52:06 compute-0 nova_compute[250018]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 14:52:06 compute-0 nova_compute[250018]:     <boot dev="hd"/>
Jan 20 14:52:06 compute-0 nova_compute[250018]:     <smbios mode="sysinfo"/>
Jan 20 14:52:06 compute-0 nova_compute[250018]:   </os>
Jan 20 14:52:06 compute-0 nova_compute[250018]:   <features>
Jan 20 14:52:06 compute-0 nova_compute[250018]:     <acpi/>
Jan 20 14:52:06 compute-0 nova_compute[250018]:     <apic/>
Jan 20 14:52:06 compute-0 nova_compute[250018]:     <vmcoreinfo/>
Jan 20 14:52:06 compute-0 nova_compute[250018]:   </features>
Jan 20 14:52:06 compute-0 nova_compute[250018]:   <clock offset="utc">
Jan 20 14:52:06 compute-0 nova_compute[250018]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 14:52:06 compute-0 nova_compute[250018]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 14:52:06 compute-0 nova_compute[250018]:     <timer name="hpet" present="no"/>
Jan 20 14:52:06 compute-0 nova_compute[250018]:   </clock>
Jan 20 14:52:06 compute-0 nova_compute[250018]:   <cpu mode="custom" match="exact">
Jan 20 14:52:06 compute-0 nova_compute[250018]:     <model>Nehalem</model>
Jan 20 14:52:06 compute-0 nova_compute[250018]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 14:52:06 compute-0 nova_compute[250018]:   </cpu>
Jan 20 14:52:06 compute-0 nova_compute[250018]:   <devices>
Jan 20 14:52:06 compute-0 nova_compute[250018]:     <disk type="network" device="disk">
Jan 20 14:52:06 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 14:52:06 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/8392c22f-8472-42e6-bd6b-68724d9e0734_disk">
Jan 20 14:52:06 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:52:06 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:52:06 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:52:06 compute-0 nova_compute[250018]:       </source>
Jan 20 14:52:06 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 14:52:06 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:52:06 compute-0 nova_compute[250018]:       </auth>
Jan 20 14:52:06 compute-0 nova_compute[250018]:       <target dev="vda" bus="virtio"/>
Jan 20 14:52:06 compute-0 nova_compute[250018]:     </disk>
Jan 20 14:52:06 compute-0 nova_compute[250018]:     <disk type="network" device="cdrom">
Jan 20 14:52:06 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 14:52:06 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/8392c22f-8472-42e6-bd6b-68724d9e0734_disk.config">
Jan 20 14:52:06 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:52:06 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:52:06 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:52:06 compute-0 nova_compute[250018]:       </source>
Jan 20 14:52:06 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 14:52:06 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:52:06 compute-0 nova_compute[250018]:       </auth>
Jan 20 14:52:06 compute-0 nova_compute[250018]:       <target dev="sda" bus="sata"/>
Jan 20 14:52:06 compute-0 nova_compute[250018]:     </disk>
Jan 20 14:52:06 compute-0 nova_compute[250018]:     <serial type="pty">
Jan 20 14:52:06 compute-0 nova_compute[250018]:       <log file="/var/lib/nova/instances/8392c22f-8472-42e6-bd6b-68724d9e0734/console.log" append="off"/>
Jan 20 14:52:06 compute-0 nova_compute[250018]:     </serial>
Jan 20 14:52:06 compute-0 nova_compute[250018]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 14:52:06 compute-0 nova_compute[250018]:     <video>
Jan 20 14:52:06 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 14:52:06 compute-0 nova_compute[250018]:     </video>
Jan 20 14:52:06 compute-0 nova_compute[250018]:     <input type="tablet" bus="usb"/>
Jan 20 14:52:06 compute-0 nova_compute[250018]:     <rng model="virtio">
Jan 20 14:52:06 compute-0 nova_compute[250018]:       <backend model="random">/dev/urandom</backend>
Jan 20 14:52:06 compute-0 nova_compute[250018]:     </rng>
Jan 20 14:52:06 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root"/>
Jan 20 14:52:06 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:52:06 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:52:06 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:52:06 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:52:06 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:52:06 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:52:06 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:52:06 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:52:06 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:52:06 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:52:06 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:52:06 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:52:06 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:52:06 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:52:06 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:52:06 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:52:06 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:52:06 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:52:06 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:52:06 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:52:06 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:52:06 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:52:06 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:52:06 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:52:06 compute-0 nova_compute[250018]:     <controller type="usb" index="0"/>
Jan 20 14:52:06 compute-0 nova_compute[250018]:     <memballoon model="virtio">
Jan 20 14:52:06 compute-0 nova_compute[250018]:       <stats period="10"/>
Jan 20 14:52:06 compute-0 nova_compute[250018]:     </memballoon>
Jan 20 14:52:06 compute-0 nova_compute[250018]:   </devices>
Jan 20 14:52:06 compute-0 nova_compute[250018]: </domain>
Jan 20 14:52:06 compute-0 nova_compute[250018]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 14:52:06 compute-0 nova_compute[250018]: 2026-01-20 14:52:06.641 250022 DEBUG nova.virt.libvirt.driver [None req-42aebf58-38d3-43ca-a586-afbe64386435 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:52:06 compute-0 nova_compute[250018]: 2026-01-20 14:52:06.642 250022 DEBUG nova.virt.libvirt.driver [None req-42aebf58-38d3-43ca-a586-afbe64386435 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:52:06 compute-0 nova_compute[250018]: 2026-01-20 14:52:06.643 250022 INFO nova.virt.libvirt.driver [None req-42aebf58-38d3-43ca-a586-afbe64386435 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] [instance: 8392c22f-8472-42e6-bd6b-68724d9e0734] Using config drive
Jan 20 14:52:06 compute-0 nova_compute[250018]: 2026-01-20 14:52:06.672 250022 DEBUG nova.storage.rbd_utils [None req-42aebf58-38d3-43ca-a586-afbe64386435 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] rbd image 8392c22f-8472-42e6-bd6b-68724d9e0734_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:52:06 compute-0 nova_compute[250018]: 2026-01-20 14:52:06.717 250022 DEBUG nova.objects.instance [None req-42aebf58-38d3-43ca-a586-afbe64386435 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 8392c22f-8472-42e6-bd6b-68724d9e0734 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:52:06 compute-0 nova_compute[250018]: 2026-01-20 14:52:06.751 250022 DEBUG nova.objects.instance [None req-42aebf58-38d3-43ca-a586-afbe64386435 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] Lazy-loading 'keypairs' on Instance uuid 8392c22f-8472-42e6-bd6b-68724d9e0734 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:52:06 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:52:06 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:52:06 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:52:06.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:52:06 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:52:06 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:52:06 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:52:06.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:52:07 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e261 do_prune osdmap full prune enabled
Jan 20 14:52:07 compute-0 ceph-mon[74360]: pgmap v1941: 321 pgs: 321 active+clean; 516 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 5.5 MiB/s rd, 9.6 MiB/s wr, 467 op/s
Jan 20 14:52:07 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/2460387741' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:52:07 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/1459808343' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:52:07 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e262 e262: 3 total, 3 up, 3 in
Jan 20 14:52:07 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e262: 3 total, 3 up, 3 in
Jan 20 14:52:07 compute-0 nova_compute[250018]: 2026-01-20 14:52:07.166 250022 INFO nova.virt.libvirt.driver [None req-42aebf58-38d3-43ca-a586-afbe64386435 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] [instance: 8392c22f-8472-42e6-bd6b-68724d9e0734] Creating config drive at /var/lib/nova/instances/8392c22f-8472-42e6-bd6b-68724d9e0734/disk.config
Jan 20 14:52:07 compute-0 nova_compute[250018]: 2026-01-20 14:52:07.170 250022 DEBUG oslo_concurrency.processutils [None req-42aebf58-38d3-43ca-a586-afbe64386435 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8392c22f-8472-42e6-bd6b-68724d9e0734/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpn2lpwgmg execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:52:07 compute-0 nova_compute[250018]: 2026-01-20 14:52:07.302 250022 DEBUG oslo_concurrency.processutils [None req-42aebf58-38d3-43ca-a586-afbe64386435 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8392c22f-8472-42e6-bd6b-68724d9e0734/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpn2lpwgmg" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:52:07 compute-0 nova_compute[250018]: 2026-01-20 14:52:07.338 250022 DEBUG nova.storage.rbd_utils [None req-42aebf58-38d3-43ca-a586-afbe64386435 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] rbd image 8392c22f-8472-42e6-bd6b-68724d9e0734_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:52:07 compute-0 nova_compute[250018]: 2026-01-20 14:52:07.345 250022 DEBUG oslo_concurrency.processutils [None req-42aebf58-38d3-43ca-a586-afbe64386435 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8392c22f-8472-42e6-bd6b-68724d9e0734/disk.config 8392c22f-8472-42e6-bd6b-68724d9e0734_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:52:07 compute-0 nova_compute[250018]: 2026-01-20 14:52:07.493 250022 DEBUG oslo_concurrency.processutils [None req-42aebf58-38d3-43ca-a586-afbe64386435 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8392c22f-8472-42e6-bd6b-68724d9e0734/disk.config 8392c22f-8472-42e6-bd6b-68724d9e0734_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:52:07 compute-0 nova_compute[250018]: 2026-01-20 14:52:07.494 250022 INFO nova.virt.libvirt.driver [None req-42aebf58-38d3-43ca-a586-afbe64386435 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] [instance: 8392c22f-8472-42e6-bd6b-68724d9e0734] Deleting local config drive /var/lib/nova/instances/8392c22f-8472-42e6-bd6b-68724d9e0734/disk.config because it was imported into RBD.
Jan 20 14:52:07 compute-0 systemd-machined[216401]: New machine qemu-55-instance-00000073.
Jan 20 14:52:07 compute-0 systemd[1]: Started Virtual Machine qemu-55-instance-00000073.
Jan 20 14:52:07 compute-0 nova_compute[250018]: 2026-01-20 14:52:07.755 250022 DEBUG nova.compute.manager [req-b219bd90-1dfd-46e7-bf20-c3dc2cb349f2 req-b9ca2c05-498f-4e33-a611-8f66274b627e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Received event network-vif-plugged-87b0cab5-af2f-4440-8f58-840860a23f68 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:52:07 compute-0 nova_compute[250018]: 2026-01-20 14:52:07.756 250022 DEBUG oslo_concurrency.lockutils [req-b219bd90-1dfd-46e7-bf20-c3dc2cb349f2 req-b9ca2c05-498f-4e33-a611-8f66274b627e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "7f5cfffe-c1dc-4b00-844e-0fb35b340f44-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:52:07 compute-0 nova_compute[250018]: 2026-01-20 14:52:07.756 250022 DEBUG oslo_concurrency.lockutils [req-b219bd90-1dfd-46e7-bf20-c3dc2cb349f2 req-b9ca2c05-498f-4e33-a611-8f66274b627e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "7f5cfffe-c1dc-4b00-844e-0fb35b340f44-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:52:07 compute-0 nova_compute[250018]: 2026-01-20 14:52:07.757 250022 DEBUG oslo_concurrency.lockutils [req-b219bd90-1dfd-46e7-bf20-c3dc2cb349f2 req-b9ca2c05-498f-4e33-a611-8f66274b627e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "7f5cfffe-c1dc-4b00-844e-0fb35b340f44-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:52:07 compute-0 nova_compute[250018]: 2026-01-20 14:52:07.757 250022 DEBUG nova.compute.manager [req-b219bd90-1dfd-46e7-bf20-c3dc2cb349f2 req-b9ca2c05-498f-4e33-a611-8f66274b627e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] No waiting events found dispatching network-vif-plugged-87b0cab5-af2f-4440-8f58-840860a23f68 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:52:07 compute-0 nova_compute[250018]: 2026-01-20 14:52:07.757 250022 WARNING nova.compute.manager [req-b219bd90-1dfd-46e7-bf20-c3dc2cb349f2 req-b9ca2c05-498f-4e33-a611-8f66274b627e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Received unexpected event network-vif-plugged-87b0cab5-af2f-4440-8f58-840860a23f68 for instance with vm_state active and task_state None.
Jan 20 14:52:07 compute-0 nova_compute[250018]: 2026-01-20 14:52:07.849 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:52:07 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1943: 321 pgs: 321 active+clean; 509 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 6.3 MiB/s rd, 10 MiB/s wr, 538 op/s
Jan 20 14:52:07 compute-0 nova_compute[250018]: 2026-01-20 14:52:07.954 250022 DEBUG nova.virt.libvirt.host [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Removed pending event for 8392c22f-8472-42e6-bd6b-68724d9e0734 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Jan 20 14:52:07 compute-0 nova_compute[250018]: 2026-01-20 14:52:07.955 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768920727.9542305, 8392c22f-8472-42e6-bd6b-68724d9e0734 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:52:07 compute-0 nova_compute[250018]: 2026-01-20 14:52:07.955 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 8392c22f-8472-42e6-bd6b-68724d9e0734] VM Resumed (Lifecycle Event)
Jan 20 14:52:07 compute-0 nova_compute[250018]: 2026-01-20 14:52:07.985 250022 DEBUG nova.compute.manager [None req-42aebf58-38d3-43ca-a586-afbe64386435 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] [instance: 8392c22f-8472-42e6-bd6b-68724d9e0734] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 14:52:07 compute-0 nova_compute[250018]: 2026-01-20 14:52:07.985 250022 DEBUG nova.virt.libvirt.driver [None req-42aebf58-38d3-43ca-a586-afbe64386435 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] [instance: 8392c22f-8472-42e6-bd6b-68724d9e0734] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 14:52:07 compute-0 nova_compute[250018]: 2026-01-20 14:52:07.988 250022 INFO nova.virt.libvirt.driver [-] [instance: 8392c22f-8472-42e6-bd6b-68724d9e0734] Instance spawned successfully.
Jan 20 14:52:07 compute-0 nova_compute[250018]: 2026-01-20 14:52:07.988 250022 DEBUG nova.virt.libvirt.driver [None req-42aebf58-38d3-43ca-a586-afbe64386435 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] [instance: 8392c22f-8472-42e6-bd6b-68724d9e0734] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 14:52:07 compute-0 nova_compute[250018]: 2026-01-20 14:52:07.995 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 8392c22f-8472-42e6-bd6b-68724d9e0734] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:52:07 compute-0 nova_compute[250018]: 2026-01-20 14:52:07.997 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 8392c22f-8472-42e6-bd6b-68724d9e0734] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:52:08 compute-0 nova_compute[250018]: 2026-01-20 14:52:08.008 250022 DEBUG nova.virt.libvirt.driver [None req-42aebf58-38d3-43ca-a586-afbe64386435 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] [instance: 8392c22f-8472-42e6-bd6b-68724d9e0734] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:52:08 compute-0 nova_compute[250018]: 2026-01-20 14:52:08.009 250022 DEBUG nova.virt.libvirt.driver [None req-42aebf58-38d3-43ca-a586-afbe64386435 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] [instance: 8392c22f-8472-42e6-bd6b-68724d9e0734] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:52:08 compute-0 nova_compute[250018]: 2026-01-20 14:52:08.009 250022 DEBUG nova.virt.libvirt.driver [None req-42aebf58-38d3-43ca-a586-afbe64386435 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] [instance: 8392c22f-8472-42e6-bd6b-68724d9e0734] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:52:08 compute-0 nova_compute[250018]: 2026-01-20 14:52:08.010 250022 DEBUG nova.virt.libvirt.driver [None req-42aebf58-38d3-43ca-a586-afbe64386435 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] [instance: 8392c22f-8472-42e6-bd6b-68724d9e0734] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:52:08 compute-0 nova_compute[250018]: 2026-01-20 14:52:08.010 250022 DEBUG nova.virt.libvirt.driver [None req-42aebf58-38d3-43ca-a586-afbe64386435 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] [instance: 8392c22f-8472-42e6-bd6b-68724d9e0734] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:52:08 compute-0 nova_compute[250018]: 2026-01-20 14:52:08.011 250022 DEBUG nova.virt.libvirt.driver [None req-42aebf58-38d3-43ca-a586-afbe64386435 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] [instance: 8392c22f-8472-42e6-bd6b-68724d9e0734] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:52:08 compute-0 nova_compute[250018]: 2026-01-20 14:52:08.019 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 8392c22f-8472-42e6-bd6b-68724d9e0734] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Jan 20 14:52:08 compute-0 nova_compute[250018]: 2026-01-20 14:52:08.020 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768920727.984653, 8392c22f-8472-42e6-bd6b-68724d9e0734 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:52:08 compute-0 nova_compute[250018]: 2026-01-20 14:52:08.020 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 8392c22f-8472-42e6-bd6b-68724d9e0734] VM Started (Lifecycle Event)
Jan 20 14:52:08 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e262 do_prune osdmap full prune enabled
Jan 20 14:52:08 compute-0 nova_compute[250018]: 2026-01-20 14:52:08.046 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 8392c22f-8472-42e6-bd6b-68724d9e0734] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:52:08 compute-0 nova_compute[250018]: 2026-01-20 14:52:08.049 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 8392c22f-8472-42e6-bd6b-68724d9e0734] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:52:08 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e263 e263: 3 total, 3 up, 3 in
Jan 20 14:52:08 compute-0 ceph-mon[74360]: osdmap e262: 3 total, 3 up, 3 in
Jan 20 14:52:08 compute-0 nova_compute[250018]: 2026-01-20 14:52:08.072 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 8392c22f-8472-42e6-bd6b-68724d9e0734] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Jan 20 14:52:08 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e263: 3 total, 3 up, 3 in
Jan 20 14:52:08 compute-0 nova_compute[250018]: 2026-01-20 14:52:08.087 250022 DEBUG nova.compute.manager [None req-42aebf58-38d3-43ca-a586-afbe64386435 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] [instance: 8392c22f-8472-42e6-bd6b-68724d9e0734] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:52:08 compute-0 nova_compute[250018]: 2026-01-20 14:52:08.160 250022 DEBUG oslo_concurrency.lockutils [None req-42aebf58-38d3-43ca-a586-afbe64386435 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:52:08 compute-0 nova_compute[250018]: 2026-01-20 14:52:08.160 250022 DEBUG oslo_concurrency.lockutils [None req-42aebf58-38d3-43ca-a586-afbe64386435 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:52:08 compute-0 nova_compute[250018]: 2026-01-20 14:52:08.161 250022 DEBUG nova.objects.instance [None req-42aebf58-38d3-43ca-a586-afbe64386435 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] [instance: 8392c22f-8472-42e6-bd6b-68724d9e0734] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Jan 20 14:52:08 compute-0 nova_compute[250018]: 2026-01-20 14:52:08.219 250022 DEBUG oslo_concurrency.lockutils [None req-42aebf58-38d3-43ca-a586-afbe64386435 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.058s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:52:08 compute-0 nova_compute[250018]: 2026-01-20 14:52:08.275 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:52:08 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:52:08 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:52:08 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:52:08.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:52:08 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:52:08 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:52:08 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:52:08.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:52:09 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e263 do_prune osdmap full prune enabled
Jan 20 14:52:09 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e264 e264: 3 total, 3 up, 3 in
Jan 20 14:52:09 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e264: 3 total, 3 up, 3 in
Jan 20 14:52:09 compute-0 ceph-mon[74360]: pgmap v1943: 321 pgs: 321 active+clean; 509 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 6.3 MiB/s rd, 10 MiB/s wr, 538 op/s
Jan 20 14:52:09 compute-0 ceph-mon[74360]: osdmap e263: 3 total, 3 up, 3 in
Jan 20 14:52:09 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:52:09 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1946: 321 pgs: 321 active+clean; 505 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 11 MiB/s rd, 4.7 MiB/s wr, 627 op/s
Jan 20 14:52:10 compute-0 ceph-mon[74360]: osdmap e264: 3 total, 3 up, 3 in
Jan 20 14:52:10 compute-0 ceph-mon[74360]: pgmap v1946: 321 pgs: 321 active+clean; 505 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 11 MiB/s rd, 4.7 MiB/s wr, 627 op/s
Jan 20 14:52:10 compute-0 sudo[319298]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:52:10 compute-0 sudo[319298]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:52:10 compute-0 sudo[319298]: pam_unix(sudo:session): session closed for user root
Jan 20 14:52:10 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:52:10 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:52:10 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:52:10.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:52:10 compute-0 sudo[319323]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:52:10 compute-0 sudo[319323]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:52:10 compute-0 sudo[319323]: pam_unix(sudo:session): session closed for user root
Jan 20 14:52:10 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:52:10 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:52:10 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:52:10.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:52:11 compute-0 nova_compute[250018]: 2026-01-20 14:52:11.151 250022 DEBUG oslo_concurrency.lockutils [None req-d45819d9-9a7e-4f9f-948c-1a198a6af2a9 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] Acquiring lock "8392c22f-8472-42e6-bd6b-68724d9e0734" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:52:11 compute-0 nova_compute[250018]: 2026-01-20 14:52:11.152 250022 DEBUG oslo_concurrency.lockutils [None req-d45819d9-9a7e-4f9f-948c-1a198a6af2a9 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] Lock "8392c22f-8472-42e6-bd6b-68724d9e0734" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:52:11 compute-0 nova_compute[250018]: 2026-01-20 14:52:11.152 250022 DEBUG oslo_concurrency.lockutils [None req-d45819d9-9a7e-4f9f-948c-1a198a6af2a9 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] Acquiring lock "8392c22f-8472-42e6-bd6b-68724d9e0734-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:52:11 compute-0 nova_compute[250018]: 2026-01-20 14:52:11.152 250022 DEBUG oslo_concurrency.lockutils [None req-d45819d9-9a7e-4f9f-948c-1a198a6af2a9 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] Lock "8392c22f-8472-42e6-bd6b-68724d9e0734-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:52:11 compute-0 nova_compute[250018]: 2026-01-20 14:52:11.153 250022 DEBUG oslo_concurrency.lockutils [None req-d45819d9-9a7e-4f9f-948c-1a198a6af2a9 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] Lock "8392c22f-8472-42e6-bd6b-68724d9e0734-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:52:11 compute-0 nova_compute[250018]: 2026-01-20 14:52:11.154 250022 INFO nova.compute.manager [None req-d45819d9-9a7e-4f9f-948c-1a198a6af2a9 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] [instance: 8392c22f-8472-42e6-bd6b-68724d9e0734] Terminating instance
Jan 20 14:52:11 compute-0 nova_compute[250018]: 2026-01-20 14:52:11.154 250022 DEBUG oslo_concurrency.lockutils [None req-d45819d9-9a7e-4f9f-948c-1a198a6af2a9 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] Acquiring lock "refresh_cache-8392c22f-8472-42e6-bd6b-68724d9e0734" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:52:11 compute-0 nova_compute[250018]: 2026-01-20 14:52:11.155 250022 DEBUG oslo_concurrency.lockutils [None req-d45819d9-9a7e-4f9f-948c-1a198a6af2a9 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] Acquired lock "refresh_cache-8392c22f-8472-42e6-bd6b-68724d9e0734" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:52:11 compute-0 nova_compute[250018]: 2026-01-20 14:52:11.155 250022 DEBUG nova.network.neutron [None req-d45819d9-9a7e-4f9f-948c-1a198a6af2a9 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] [instance: 8392c22f-8472-42e6-bd6b-68724d9e0734] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:52:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 14:52:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:52:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 20 14:52:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:52:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.008927013539921832 of space, bias 1.0, pg target 2.6781040619765495 quantized to 32 (current 32)
Jan 20 14:52:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:52:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.416259538432905e-05 quantized to 32 (current 32)
Jan 20 14:52:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:52:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:52:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:52:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004243948329611018 of space, bias 1.0, pg target 1.2646966022240833 quantized to 32 (current 32)
Jan 20 14:52:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:52:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001727386934673367 quantized to 16 (current 32)
Jan 20 14:52:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:52:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:52:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:52:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021592336683417087 quantized to 32 (current 32)
Jan 20 14:52:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:52:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018353486180904522 quantized to 32 (current 32)
Jan 20 14:52:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:52:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:52:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:52:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043184673366834174 quantized to 32 (current 32)
Jan 20 14:52:11 compute-0 nova_compute[250018]: 2026-01-20 14:52:11.737 250022 DEBUG nova.network.neutron [None req-d45819d9-9a7e-4f9f-948c-1a198a6af2a9 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] [instance: 8392c22f-8472-42e6-bd6b-68724d9e0734] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 14:52:11 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1947: 321 pgs: 321 active+clean; 552 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 11 MiB/s rd, 6.1 MiB/s wr, 419 op/s
Jan 20 14:52:12 compute-0 NetworkManager[48960]: <info>  [1768920732.1070] manager: (patch-br-int-to-provnet-b62c391b-f7a3-4a38-a0df-72ac0383ca74): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/211)
Jan 20 14:52:12 compute-0 NetworkManager[48960]: <info>  [1768920732.1081] manager: (patch-provnet-b62c391b-f7a3-4a38-a0df-72ac0383ca74-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/212)
Jan 20 14:52:12 compute-0 nova_compute[250018]: 2026-01-20 14:52:12.106 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:52:12 compute-0 ovn_controller[148666]: 2026-01-20T14:52:12Z|00410|binding|INFO|Releasing lport 3fa2df7b-42b2-4a3b-a33b-ab37b5d6aef3 from this chassis (sb_readonly=0)
Jan 20 14:52:12 compute-0 ovn_controller[148666]: 2026-01-20T14:52:12Z|00411|binding|INFO|Releasing lport b033e9e6-9781-4424-a20f-7b48a14e2c80 from this chassis (sb_readonly=0)
Jan 20 14:52:12 compute-0 nova_compute[250018]: 2026-01-20 14:52:12.168 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:52:12 compute-0 nova_compute[250018]: 2026-01-20 14:52:12.179 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:52:12 compute-0 nova_compute[250018]: 2026-01-20 14:52:12.181 250022 DEBUG nova.network.neutron [None req-d45819d9-9a7e-4f9f-948c-1a198a6af2a9 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] [instance: 8392c22f-8472-42e6-bd6b-68724d9e0734] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:52:12 compute-0 nova_compute[250018]: 2026-01-20 14:52:12.197 250022 DEBUG oslo_concurrency.lockutils [None req-d45819d9-9a7e-4f9f-948c-1a198a6af2a9 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] Releasing lock "refresh_cache-8392c22f-8472-42e6-bd6b-68724d9e0734" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:52:12 compute-0 nova_compute[250018]: 2026-01-20 14:52:12.198 250022 DEBUG nova.compute.manager [None req-d45819d9-9a7e-4f9f-948c-1a198a6af2a9 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] [instance: 8392c22f-8472-42e6-bd6b-68724d9e0734] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 14:52:12 compute-0 systemd[1]: machine-qemu\x2d55\x2dinstance\x2d00000073.scope: Deactivated successfully.
Jan 20 14:52:12 compute-0 systemd[1]: machine-qemu\x2d55\x2dinstance\x2d00000073.scope: Consumed 4.694s CPU time.
Jan 20 14:52:12 compute-0 systemd-machined[216401]: Machine qemu-55-instance-00000073 terminated.
Jan 20 14:52:12 compute-0 nova_compute[250018]: 2026-01-20 14:52:12.421 250022 INFO nova.virt.libvirt.driver [-] [instance: 8392c22f-8472-42e6-bd6b-68724d9e0734] Instance destroyed successfully.
Jan 20 14:52:12 compute-0 nova_compute[250018]: 2026-01-20 14:52:12.422 250022 DEBUG nova.objects.instance [None req-d45819d9-9a7e-4f9f-948c-1a198a6af2a9 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] Lazy-loading 'resources' on Instance uuid 8392c22f-8472-42e6-bd6b-68724d9e0734 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:52:12 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:52:12 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:52:12 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:52:12.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:52:12 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:52:12.810 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=37, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=36) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:52:12 compute-0 nova_compute[250018]: 2026-01-20 14:52:12.811 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:52:12 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:52:12.811 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 14:52:12 compute-0 nova_compute[250018]: 2026-01-20 14:52:12.851 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:52:12 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:52:12 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:52:12 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:52:12.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:52:13 compute-0 ceph-mon[74360]: pgmap v1947: 321 pgs: 321 active+clean; 552 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 11 MiB/s rd, 6.1 MiB/s wr, 419 op/s
Jan 20 14:52:13 compute-0 nova_compute[250018]: 2026-01-20 14:52:13.321 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:52:13 compute-0 nova_compute[250018]: 2026-01-20 14:52:13.448 250022 DEBUG nova.compute.manager [req-24d76d0c-7c22-429b-82a1-f6fbae8d8c1e req-eb36e178-10ac-4d7b-aa86-cb39b2483f65 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Received event network-changed-87b0cab5-af2f-4440-8f58-840860a23f68 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:52:13 compute-0 nova_compute[250018]: 2026-01-20 14:52:13.448 250022 DEBUG nova.compute.manager [req-24d76d0c-7c22-429b-82a1-f6fbae8d8c1e req-eb36e178-10ac-4d7b-aa86-cb39b2483f65 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Refreshing instance network info cache due to event network-changed-87b0cab5-af2f-4440-8f58-840860a23f68. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:52:13 compute-0 nova_compute[250018]: 2026-01-20 14:52:13.448 250022 DEBUG oslo_concurrency.lockutils [req-24d76d0c-7c22-429b-82a1-f6fbae8d8c1e req-eb36e178-10ac-4d7b-aa86-cb39b2483f65 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-7f5cfffe-c1dc-4b00-844e-0fb35b340f44" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:52:13 compute-0 nova_compute[250018]: 2026-01-20 14:52:13.449 250022 DEBUG oslo_concurrency.lockutils [req-24d76d0c-7c22-429b-82a1-f6fbae8d8c1e req-eb36e178-10ac-4d7b-aa86-cb39b2483f65 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-7f5cfffe-c1dc-4b00-844e-0fb35b340f44" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:52:13 compute-0 nova_compute[250018]: 2026-01-20 14:52:13.449 250022 DEBUG nova.network.neutron [req-24d76d0c-7c22-429b-82a1-f6fbae8d8c1e req-eb36e178-10ac-4d7b-aa86-cb39b2483f65 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Refreshing network info cache for port 87b0cab5-af2f-4440-8f58-840860a23f68 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:52:13 compute-0 ovn_controller[148666]: 2026-01-20T14:52:13Z|00412|binding|INFO|Releasing lport 3fa2df7b-42b2-4a3b-a33b-ab37b5d6aef3 from this chassis (sb_readonly=0)
Jan 20 14:52:13 compute-0 ovn_controller[148666]: 2026-01-20T14:52:13Z|00413|binding|INFO|Releasing lport b033e9e6-9781-4424-a20f-7b48a14e2c80 from this chassis (sb_readonly=0)
Jan 20 14:52:13 compute-0 nova_compute[250018]: 2026-01-20 14:52:13.508 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:52:13 compute-0 nova_compute[250018]: 2026-01-20 14:52:13.793 250022 INFO nova.virt.libvirt.driver [None req-d45819d9-9a7e-4f9f-948c-1a198a6af2a9 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] [instance: 8392c22f-8472-42e6-bd6b-68724d9e0734] Deleting instance files /var/lib/nova/instances/8392c22f-8472-42e6-bd6b-68724d9e0734_del
Jan 20 14:52:13 compute-0 nova_compute[250018]: 2026-01-20 14:52:13.793 250022 INFO nova.virt.libvirt.driver [None req-d45819d9-9a7e-4f9f-948c-1a198a6af2a9 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] [instance: 8392c22f-8472-42e6-bd6b-68724d9e0734] Deletion of /var/lib/nova/instances/8392c22f-8472-42e6-bd6b-68724d9e0734_del complete
Jan 20 14:52:13 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1948: 321 pgs: 321 active+clean; 561 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 11 MiB/s rd, 6.3 MiB/s wr, 461 op/s
Jan 20 14:52:14 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:52:14 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e264 do_prune osdmap full prune enabled
Jan 20 14:52:14 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e265 e265: 3 total, 3 up, 3 in
Jan 20 14:52:14 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e265: 3 total, 3 up, 3 in
Jan 20 14:52:14 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:52:14 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:52:14 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:52:14.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:52:14 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:52:14 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:52:14 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:52:14.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:52:15 compute-0 nova_compute[250018]: 2026-01-20 14:52:15.061 250022 INFO nova.compute.manager [None req-d45819d9-9a7e-4f9f-948c-1a198a6af2a9 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] [instance: 8392c22f-8472-42e6-bd6b-68724d9e0734] Took 2.86 seconds to destroy the instance on the hypervisor.
Jan 20 14:52:15 compute-0 nova_compute[250018]: 2026-01-20 14:52:15.062 250022 DEBUG oslo.service.loopingcall [None req-d45819d9-9a7e-4f9f-948c-1a198a6af2a9 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 14:52:15 compute-0 nova_compute[250018]: 2026-01-20 14:52:15.062 250022 DEBUG nova.compute.manager [-] [instance: 8392c22f-8472-42e6-bd6b-68724d9e0734] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 14:52:15 compute-0 nova_compute[250018]: 2026-01-20 14:52:15.062 250022 DEBUG nova.network.neutron [-] [instance: 8392c22f-8472-42e6-bd6b-68724d9e0734] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 14:52:15 compute-0 ceph-mon[74360]: pgmap v1948: 321 pgs: 321 active+clean; 561 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 11 MiB/s rd, 6.3 MiB/s wr, 461 op/s
Jan 20 14:52:15 compute-0 ceph-mon[74360]: osdmap e265: 3 total, 3 up, 3 in
Jan 20 14:52:15 compute-0 nova_compute[250018]: 2026-01-20 14:52:15.299 250022 DEBUG nova.network.neutron [-] [instance: 8392c22f-8472-42e6-bd6b-68724d9e0734] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 14:52:15 compute-0 nova_compute[250018]: 2026-01-20 14:52:15.317 250022 DEBUG nova.network.neutron [-] [instance: 8392c22f-8472-42e6-bd6b-68724d9e0734] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:52:15 compute-0 nova_compute[250018]: 2026-01-20 14:52:15.340 250022 INFO nova.compute.manager [-] [instance: 8392c22f-8472-42e6-bd6b-68724d9e0734] Took 0.28 seconds to deallocate network for instance.
Jan 20 14:52:15 compute-0 nova_compute[250018]: 2026-01-20 14:52:15.408 250022 DEBUG oslo_concurrency.lockutils [None req-d45819d9-9a7e-4f9f-948c-1a198a6af2a9 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:52:15 compute-0 nova_compute[250018]: 2026-01-20 14:52:15.408 250022 DEBUG oslo_concurrency.lockutils [None req-d45819d9-9a7e-4f9f-948c-1a198a6af2a9 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:52:15 compute-0 nova_compute[250018]: 2026-01-20 14:52:15.539 250022 DEBUG oslo_concurrency.processutils [None req-d45819d9-9a7e-4f9f-948c-1a198a6af2a9 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:52:15 compute-0 nova_compute[250018]: 2026-01-20 14:52:15.758 250022 DEBUG nova.network.neutron [req-24d76d0c-7c22-429b-82a1-f6fbae8d8c1e req-eb36e178-10ac-4d7b-aa86-cb39b2483f65 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Updated VIF entry in instance network info cache for port 87b0cab5-af2f-4440-8f58-840860a23f68. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 14:52:15 compute-0 nova_compute[250018]: 2026-01-20 14:52:15.759 250022 DEBUG nova.network.neutron [req-24d76d0c-7c22-429b-82a1-f6fbae8d8c1e req-eb36e178-10ac-4d7b-aa86-cb39b2483f65 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Updating instance_info_cache with network_info: [{"id": "87b0cab5-af2f-4440-8f58-840860a23f68", "address": "fa:16:3e:2b:79:1b", "network": {"id": "41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1445030024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.184", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3b1b7f5b4f84b5abbc401eb577c85c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap87b0cab5-af", "ovs_interfaceid": "87b0cab5-af2f-4440-8f58-840860a23f68", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:52:15 compute-0 nova_compute[250018]: 2026-01-20 14:52:15.793 250022 DEBUG oslo_concurrency.lockutils [req-24d76d0c-7c22-429b-82a1-f6fbae8d8c1e req-eb36e178-10ac-4d7b-aa86-cb39b2483f65 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-7f5cfffe-c1dc-4b00-844e-0fb35b340f44" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:52:15 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1950: 321 pgs: 321 active+clean; 550 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 8.4 MiB/s rd, 6.5 MiB/s wr, 369 op/s
Jan 20 14:52:15 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:52:15 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/592155962' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:52:16 compute-0 nova_compute[250018]: 2026-01-20 14:52:16.009 250022 DEBUG oslo_concurrency.processutils [None req-d45819d9-9a7e-4f9f-948c-1a198a6af2a9 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:52:16 compute-0 nova_compute[250018]: 2026-01-20 14:52:16.015 250022 DEBUG nova.compute.provider_tree [None req-d45819d9-9a7e-4f9f-948c-1a198a6af2a9 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:52:16 compute-0 ceph-mon[74360]: pgmap v1950: 321 pgs: 321 active+clean; 550 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 8.4 MiB/s rd, 6.5 MiB/s wr, 369 op/s
Jan 20 14:52:16 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/592155962' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:52:16 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:52:16 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:52:16 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:52:16.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:52:16 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:52:16 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:52:16 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:52:16.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:52:17 compute-0 nova_compute[250018]: 2026-01-20 14:52:17.125 250022 DEBUG nova.scheduler.client.report [None req-d45819d9-9a7e-4f9f-948c-1a198a6af2a9 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:52:17 compute-0 nova_compute[250018]: 2026-01-20 14:52:17.173 250022 DEBUG oslo_concurrency.lockutils [None req-d45819d9-9a7e-4f9f-948c-1a198a6af2a9 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.765s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:52:17 compute-0 nova_compute[250018]: 2026-01-20 14:52:17.261 250022 INFO nova.scheduler.client.report [None req-d45819d9-9a7e-4f9f-948c-1a198a6af2a9 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] Deleted allocations for instance 8392c22f-8472-42e6-bd6b-68724d9e0734
Jan 20 14:52:17 compute-0 nova_compute[250018]: 2026-01-20 14:52:17.408 250022 DEBUG oslo_concurrency.lockutils [None req-d45819d9-9a7e-4f9f-948c-1a198a6af2a9 cdcdce94e7354b3bafb34285408888b9 ecfc3366b9194864a3f15ce0114b5ee3 - - default default] Lock "8392c22f-8472-42e6-bd6b-68724d9e0734" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.256s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:52:17 compute-0 nova_compute[250018]: 2026-01-20 14:52:17.854 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:52:17 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1951: 321 pgs: 321 active+clean; 538 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 5.9 MiB/s rd, 5.4 MiB/s wr, 312 op/s
Jan 20 14:52:18 compute-0 nova_compute[250018]: 2026-01-20 14:52:18.323 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:52:18 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:52:18 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:52:18 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:52:18.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:52:18 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:52:18 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:52:18 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:52:18.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:52:18 compute-0 ceph-mon[74360]: pgmap v1951: 321 pgs: 321 active+clean; 538 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 5.9 MiB/s rd, 5.4 MiB/s wr, 312 op/s
Jan 20 14:52:19 compute-0 ovn_controller[148666]: 2026-01-20T14:52:19Z|00048|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:2b:79:1b 10.100.0.9
Jan 20 14:52:19 compute-0 ovn_controller[148666]: 2026-01-20T14:52:19Z|00049|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:2b:79:1b 10.100.0.9
Jan 20 14:52:19 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:52:19 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1952: 321 pgs: 321 active+clean; 543 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 5.4 MiB/s rd, 5.2 MiB/s wr, 299 op/s
Jan 20 14:52:20 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:52:20 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:52:20 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:52:20.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:52:20 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:52:20 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:52:20 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:52:20.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:52:20 compute-0 ceph-mon[74360]: pgmap v1952: 321 pgs: 321 active+clean; 543 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 5.4 MiB/s rd, 5.2 MiB/s wr, 299 op/s
Jan 20 14:52:22 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1953: 321 pgs: 321 active+clean; 567 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 5.0 MiB/s wr, 210 op/s
Jan 20 14:52:22 compute-0 ceph-mon[74360]: pgmap v1953: 321 pgs: 321 active+clean; 567 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 5.0 MiB/s wr, 210 op/s
Jan 20 14:52:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:52:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:52:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:52:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:52:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:52:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:52:22 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:52:22 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:52:22 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:52:22.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:52:22 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:52:22.813 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=367c1a2c-b16a-4828-ab5a-626bb50023b4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '37'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:52:22 compute-0 nova_compute[250018]: 2026-01-20 14:52:22.858 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:52:22 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:52:22 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:52:22 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:52:22.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:52:23 compute-0 nova_compute[250018]: 2026-01-20 14:52:23.327 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:52:23 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1954: 321 pgs: 321 active+clean; 578 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 797 KiB/s rd, 5.0 MiB/s wr, 178 op/s
Jan 20 14:52:24 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:52:24 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:52:24 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:52:24 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:52:24.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:52:24 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:52:24 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:52:24 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:52:24.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:52:24 compute-0 ceph-mon[74360]: pgmap v1954: 321 pgs: 321 active+clean; 578 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 797 KiB/s rd, 5.0 MiB/s wr, 178 op/s
Jan 20 14:52:25 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1955: 321 pgs: 321 active+clean; 521 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 716 KiB/s rd, 4.5 MiB/s wr, 172 op/s
Jan 20 14:52:26 compute-0 ceph-mon[74360]: pgmap v1955: 321 pgs: 321 active+clean; 521 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 716 KiB/s rd, 4.5 MiB/s wr, 172 op/s
Jan 20 14:52:26 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:52:26 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:52:26 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:52:26.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:52:26 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:52:26 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:52:26 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:52:26.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:52:27 compute-0 nova_compute[250018]: 2026-01-20 14:52:27.420 250022 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768920732.419087, 8392c22f-8472-42e6-bd6b-68724d9e0734 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:52:27 compute-0 nova_compute[250018]: 2026-01-20 14:52:27.420 250022 INFO nova.compute.manager [-] [instance: 8392c22f-8472-42e6-bd6b-68724d9e0734] VM Stopped (Lifecycle Event)
Jan 20 14:52:27 compute-0 nova_compute[250018]: 2026-01-20 14:52:27.475 250022 DEBUG nova.compute.manager [None req-1462d223-c382-4aba-900e-814b039749bb - - - - - -] [instance: 8392c22f-8472-42e6-bd6b-68724d9e0734] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:52:27 compute-0 nova_compute[250018]: 2026-01-20 14:52:27.872 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:52:27 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1956: 321 pgs: 321 active+clean; 501 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 642 KiB/s rd, 3.0 MiB/s wr, 154 op/s
Jan 20 14:52:28 compute-0 nova_compute[250018]: 2026-01-20 14:52:28.330 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:52:28 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:52:28 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:52:28 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:52:28.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:52:28 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:52:28 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:52:28 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:52:28.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:52:28 compute-0 ceph-mon[74360]: pgmap v1956: 321 pgs: 321 active+clean; 501 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 642 KiB/s rd, 3.0 MiB/s wr, 154 op/s
Jan 20 14:52:28 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/3273996106' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:52:29 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:52:29 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1957: 321 pgs: 321 active+clean; 501 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 488 KiB/s rd, 2.5 MiB/s wr, 119 op/s
Jan 20 14:52:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:52:30.763 160071 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:52:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:52:30.764 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:52:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:52:30.765 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:52:30 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:52:30 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:52:30 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:52:30.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:52:30 compute-0 sudo[319403]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:52:30 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:52:30 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:52:30 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:52:30.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:52:30 compute-0 sudo[319403]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:52:30 compute-0 sudo[319403]: pam_unix(sudo:session): session closed for user root
Jan 20 14:52:31 compute-0 sudo[319428]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:52:31 compute-0 sudo[319428]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:52:31 compute-0 sudo[319428]: pam_unix(sudo:session): session closed for user root
Jan 20 14:52:31 compute-0 ceph-mon[74360]: pgmap v1957: 321 pgs: 321 active+clean; 501 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 488 KiB/s rd, 2.5 MiB/s wr, 119 op/s
Jan 20 14:52:31 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1958: 321 pgs: 321 active+clean; 501 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 358 KiB/s rd, 2.1 MiB/s wr, 101 op/s
Jan 20 14:52:32 compute-0 sshd-session[319454]: Invalid user postgres from 157.245.78.139 port 42164
Jan 20 14:52:32 compute-0 sshd-session[319454]: Connection closed by invalid user postgres 157.245.78.139 port 42164 [preauth]
Jan 20 14:52:32 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:52:32 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:52:32 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:52:32.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:52:32 compute-0 nova_compute[250018]: 2026-01-20 14:52:32.876 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:52:32 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:52:32 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:52:32 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:52:32.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:52:33 compute-0 ceph-mon[74360]: pgmap v1958: 321 pgs: 321 active+clean; 501 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 358 KiB/s rd, 2.1 MiB/s wr, 101 op/s
Jan 20 14:52:33 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/861012501' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:52:33 compute-0 nova_compute[250018]: 2026-01-20 14:52:33.331 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:52:33 compute-0 podman[319457]: 2026-01-20 14:52:33.511088342 +0000 UTC m=+0.086583751 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true)
Jan 20 14:52:33 compute-0 podman[319456]: 2026-01-20 14:52:33.555151718 +0000 UTC m=+0.130692179 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 20 14:52:33 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1959: 321 pgs: 321 active+clean; 501 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 209 KiB/s rd, 714 KiB/s wr, 70 op/s
Jan 20 14:52:34 compute-0 nova_compute[250018]: 2026-01-20 14:52:34.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:52:34 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/3561049918' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:52:34 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/2828053923' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:52:34 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:52:34 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:52:34 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:52:34 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:52:34.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:52:34 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:52:34 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:52:34 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:52:34.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:52:35 compute-0 nova_compute[250018]: 2026-01-20 14:52:35.235 250022 DEBUG nova.compute.manager [None req-a73a8574-f4ac-4b33-9912-f52b3832f5e7 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:52:35 compute-0 nova_compute[250018]: 2026-01-20 14:52:35.399 250022 INFO nova.compute.manager [None req-a73a8574-f4ac-4b33-9912-f52b3832f5e7 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] instance snapshotting
Jan 20 14:52:35 compute-0 nova_compute[250018]: 2026-01-20 14:52:35.400 250022 DEBUG nova.objects.instance [None req-a73a8574-f4ac-4b33-9912-f52b3832f5e7 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Lazy-loading 'flavor' on Instance uuid 7f5cfffe-c1dc-4b00-844e-0fb35b340f44 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:52:35 compute-0 ceph-mon[74360]: pgmap v1959: 321 pgs: 321 active+clean; 501 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 209 KiB/s rd, 714 KiB/s wr, 70 op/s
Jan 20 14:52:35 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1960: 321 pgs: 321 active+clean; 501 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 46 KiB/s rd, 130 KiB/s wr, 51 op/s
Jan 20 14:52:36 compute-0 nova_compute[250018]: 2026-01-20 14:52:36.044 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:52:36 compute-0 nova_compute[250018]: 2026-01-20 14:52:36.208 250022 INFO nova.virt.libvirt.driver [None req-a73a8574-f4ac-4b33-9912-f52b3832f5e7 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Beginning live snapshot process
Jan 20 14:52:36 compute-0 ceph-mon[74360]: pgmap v1960: 321 pgs: 321 active+clean; 501 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 46 KiB/s rd, 130 KiB/s wr, 51 op/s
Jan 20 14:52:36 compute-0 nova_compute[250018]: 2026-01-20 14:52:36.760 250022 DEBUG nova.virt.libvirt.imagebackend [None req-a73a8574-f4ac-4b33-9912-f52b3832f5e7 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] No parent info for a32b3e07-16d8-46fd-9a7b-c242c432fcf9; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Jan 20 14:52:36 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:52:36 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:52:36 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:52:36.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:52:36 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:52:36 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:52:36 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:52:36.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:52:37 compute-0 nova_compute[250018]: 2026-01-20 14:52:37.504 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:52:37 compute-0 nova_compute[250018]: 2026-01-20 14:52:37.583 250022 DEBUG nova.storage.rbd_utils [None req-a73a8574-f4ac-4b33-9912-f52b3832f5e7 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] creating snapshot(52e1d66d234a44779bfd42cf8801276d) on rbd image(7f5cfffe-c1dc-4b00-844e-0fb35b340f44_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 20 14:52:37 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e265 do_prune osdmap full prune enabled
Jan 20 14:52:37 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e266 e266: 3 total, 3 up, 3 in
Jan 20 14:52:37 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e266: 3 total, 3 up, 3 in
Jan 20 14:52:37 compute-0 nova_compute[250018]: 2026-01-20 14:52:37.743 250022 DEBUG nova.storage.rbd_utils [None req-a73a8574-f4ac-4b33-9912-f52b3832f5e7 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] cloning vms/7f5cfffe-c1dc-4b00-844e-0fb35b340f44_disk@52e1d66d234a44779bfd42cf8801276d to images/165549fb-68ff-4d9c-ab65-c85189d23d7d clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Jan 20 14:52:37 compute-0 nova_compute[250018]: 2026-01-20 14:52:37.889 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:52:37 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1962: 321 pgs: 321 active+clean; 501 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 23 KiB/s rd, 33 KiB/s wr, 31 op/s
Jan 20 14:52:37 compute-0 nova_compute[250018]: 2026-01-20 14:52:37.990 250022 DEBUG nova.storage.rbd_utils [None req-a73a8574-f4ac-4b33-9912-f52b3832f5e7 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] flattening images/165549fb-68ff-4d9c-ab65-c85189d23d7d flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Jan 20 14:52:38 compute-0 nova_compute[250018]: 2026-01-20 14:52:38.334 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:52:38 compute-0 nova_compute[250018]: 2026-01-20 14:52:38.433 250022 DEBUG nova.storage.rbd_utils [None req-a73a8574-f4ac-4b33-9912-f52b3832f5e7 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] removing snapshot(52e1d66d234a44779bfd42cf8801276d) on rbd image(7f5cfffe-c1dc-4b00-844e-0fb35b340f44_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Jan 20 14:52:38 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e266 do_prune osdmap full prune enabled
Jan 20 14:52:38 compute-0 ceph-mon[74360]: osdmap e266: 3 total, 3 up, 3 in
Jan 20 14:52:38 compute-0 ceph-mon[74360]: pgmap v1962: 321 pgs: 321 active+clean; 501 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 23 KiB/s rd, 33 KiB/s wr, 31 op/s
Jan 20 14:52:38 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e267 e267: 3 total, 3 up, 3 in
Jan 20 14:52:38 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e267: 3 total, 3 up, 3 in
Jan 20 14:52:38 compute-0 nova_compute[250018]: 2026-01-20 14:52:38.743 250022 DEBUG nova.storage.rbd_utils [None req-a73a8574-f4ac-4b33-9912-f52b3832f5e7 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] creating snapshot(snap) on rbd image(165549fb-68ff-4d9c-ab65-c85189d23d7d) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 20 14:52:38 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:52:38 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:52:38 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:52:38.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:52:38 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:52:38 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:52:38 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:52:38.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:52:39 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e267 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:52:39 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e267 do_prune osdmap full prune enabled
Jan 20 14:52:39 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e268 e268: 3 total, 3 up, 3 in
Jan 20 14:52:39 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e268: 3 total, 3 up, 3 in
Jan 20 14:52:39 compute-0 ceph-mon[74360]: osdmap e267: 3 total, 3 up, 3 in
Jan 20 14:52:39 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1965: 321 pgs: 321 active+clean; 526 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.7 MiB/s wr, 110 op/s
Jan 20 14:52:40 compute-0 nova_compute[250018]: 2026-01-20 14:52:40.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:52:40 compute-0 ceph-mon[74360]: osdmap e268: 3 total, 3 up, 3 in
Jan 20 14:52:40 compute-0 ceph-mon[74360]: pgmap v1965: 321 pgs: 321 active+clean; 526 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.7 MiB/s wr, 110 op/s
Jan 20 14:52:40 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:52:40 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:52:40 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:52:40.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:52:40 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:52:40 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:52:40 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:52:40.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:52:41 compute-0 nova_compute[250018]: 2026-01-20 14:52:41.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:52:41 compute-0 nova_compute[250018]: 2026-01-20 14:52:41.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:52:41 compute-0 nova_compute[250018]: 2026-01-20 14:52:41.051 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 14:52:41 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/383931634' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:52:41 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1966: 321 pgs: 321 active+clean; 548 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 8.4 MiB/s rd, 3.7 MiB/s wr, 209 op/s
Jan 20 14:52:42 compute-0 nova_compute[250018]: 2026-01-20 14:52:42.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:52:42 compute-0 nova_compute[250018]: 2026-01-20 14:52:42.110 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:52:42 compute-0 nova_compute[250018]: 2026-01-20 14:52:42.110 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:52:42 compute-0 nova_compute[250018]: 2026-01-20 14:52:42.111 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:52:42 compute-0 nova_compute[250018]: 2026-01-20 14:52:42.111 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 14:52:42 compute-0 nova_compute[250018]: 2026-01-20 14:52:42.112 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:52:42 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:52:42 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3155031433' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:52:42 compute-0 nova_compute[250018]: 2026-01-20 14:52:42.586 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:52:42 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:52:42 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:52:42 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:52:42.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:52:42 compute-0 nova_compute[250018]: 2026-01-20 14:52:42.892 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:52:42 compute-0 ceph-mon[74360]: pgmap v1966: 321 pgs: 321 active+clean; 548 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 8.4 MiB/s rd, 3.7 MiB/s wr, 209 op/s
Jan 20 14:52:42 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/2883729981' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:52:42 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3155031433' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:52:42 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:52:42 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:52:42 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:52:42.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:52:43 compute-0 nova_compute[250018]: 2026-01-20 14:52:43.336 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:52:43 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1967: 321 pgs: 321 active+clean; 580 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 11 MiB/s rd, 7.5 MiB/s wr, 293 op/s
Jan 20 14:52:43 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/2181614927' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:52:44 compute-0 nova_compute[250018]: 2026-01-20 14:52:44.195 250022 DEBUG nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] skipping disk for instance-0000006e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:52:44 compute-0 nova_compute[250018]: 2026-01-20 14:52:44.196 250022 DEBUG nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] skipping disk for instance-0000006e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:52:44 compute-0 nova_compute[250018]: 2026-01-20 14:52:44.200 250022 DEBUG nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] skipping disk for instance-00000076 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:52:44 compute-0 nova_compute[250018]: 2026-01-20 14:52:44.200 250022 DEBUG nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] skipping disk for instance-00000076 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:52:44 compute-0 nova_compute[250018]: 2026-01-20 14:52:44.443 250022 INFO nova.virt.libvirt.driver [None req-a73a8574-f4ac-4b33-9912-f52b3832f5e7 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Snapshot image upload complete
Jan 20 14:52:44 compute-0 nova_compute[250018]: 2026-01-20 14:52:44.444 250022 INFO nova.compute.manager [None req-a73a8574-f4ac-4b33-9912-f52b3832f5e7 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Took 9.01 seconds to snapshot the instance on the hypervisor.
Jan 20 14:52:44 compute-0 nova_compute[250018]: 2026-01-20 14:52:44.471 250022 WARNING nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:52:44 compute-0 nova_compute[250018]: 2026-01-20 14:52:44.472 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4033MB free_disk=20.806049346923828GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 14:52:44 compute-0 nova_compute[250018]: 2026-01-20 14:52:44.472 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:52:44 compute-0 nova_compute[250018]: 2026-01-20 14:52:44.472 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:52:44 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e268 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:52:44 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e268 do_prune osdmap full prune enabled
Jan 20 14:52:44 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e269 e269: 3 total, 3 up, 3 in
Jan 20 14:52:44 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e269: 3 total, 3 up, 3 in
Jan 20 14:52:44 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:52:44 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:52:44 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:52:44.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:52:44 compute-0 nova_compute[250018]: 2026-01-20 14:52:44.861 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Instance c3b4d4c6-c42f-4abc-9c01-89ec3e10c677 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 14:52:44 compute-0 nova_compute[250018]: 2026-01-20 14:52:44.862 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Instance 7f5cfffe-c1dc-4b00-844e-0fb35b340f44 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 14:52:44 compute-0 nova_compute[250018]: 2026-01-20 14:52:44.862 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 14:52:44 compute-0 nova_compute[250018]: 2026-01-20 14:52:44.862 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 14:52:44 compute-0 nova_compute[250018]: 2026-01-20 14:52:44.946 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:52:44 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:52:44 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:52:44 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:52:44.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:52:44 compute-0 ceph-mon[74360]: pgmap v1967: 321 pgs: 321 active+clean; 580 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 11 MiB/s rd, 7.5 MiB/s wr, 293 op/s
Jan 20 14:52:44 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/1761720748' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:52:44 compute-0 ceph-mon[74360]: osdmap e269: 3 total, 3 up, 3 in
Jan 20 14:52:45 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:52:45 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2979801871' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:52:45 compute-0 nova_compute[250018]: 2026-01-20 14:52:45.428 250022 DEBUG nova.compute.manager [None req-a73a8574-f4ac-4b33-9912-f52b3832f5e7 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Found 1 images (rotation: 2) _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4450
Jan 20 14:52:45 compute-0 nova_compute[250018]: 2026-01-20 14:52:45.441 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:52:45 compute-0 nova_compute[250018]: 2026-01-20 14:52:45.446 250022 DEBUG nova.compute.provider_tree [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:52:45 compute-0 nova_compute[250018]: 2026-01-20 14:52:45.701 250022 DEBUG nova.scheduler.client.report [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:52:45 compute-0 nova_compute[250018]: 2026-01-20 14:52:45.739 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 14:52:45 compute-0 nova_compute[250018]: 2026-01-20 14:52:45.740 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.268s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:52:45 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1969: 321 pgs: 321 active+clean; 580 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 9.7 MiB/s rd, 6.5 MiB/s wr, 253 op/s
Jan 20 14:52:46 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/2979801871' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:52:46 compute-0 nova_compute[250018]: 2026-01-20 14:52:46.720 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:52:46 compute-0 nova_compute[250018]: 2026-01-20 14:52:46.740 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:52:46 compute-0 nova_compute[250018]: 2026-01-20 14:52:46.741 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:52:46 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:52:46 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:52:46 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:52:46.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:52:46 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:52:46 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:52:46 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:52:46.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:52:47 compute-0 ceph-mon[74360]: pgmap v1969: 321 pgs: 321 active+clean; 580 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 9.7 MiB/s rd, 6.5 MiB/s wr, 253 op/s
Jan 20 14:52:47 compute-0 nova_compute[250018]: 2026-01-20 14:52:47.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:52:47 compute-0 nova_compute[250018]: 2026-01-20 14:52:47.051 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 14:52:47 compute-0 nova_compute[250018]: 2026-01-20 14:52:47.052 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 14:52:47 compute-0 nova_compute[250018]: 2026-01-20 14:52:47.895 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "refresh_cache-c3b4d4c6-c42f-4abc-9c01-89ec3e10c677" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:52:47 compute-0 nova_compute[250018]: 2026-01-20 14:52:47.896 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquired lock "refresh_cache-c3b4d4c6-c42f-4abc-9c01-89ec3e10c677" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:52:47 compute-0 nova_compute[250018]: 2026-01-20 14:52:47.896 250022 DEBUG nova.network.neutron [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 20 14:52:47 compute-0 nova_compute[250018]: 2026-01-20 14:52:47.896 250022 DEBUG nova.objects.instance [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lazy-loading 'info_cache' on Instance uuid c3b4d4c6-c42f-4abc-9c01-89ec3e10c677 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:52:47 compute-0 nova_compute[250018]: 2026-01-20 14:52:47.898 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:52:47 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1970: 321 pgs: 321 active+clean; 580 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 7.5 MiB/s rd, 4.5 MiB/s wr, 211 op/s
Jan 20 14:52:48 compute-0 nova_compute[250018]: 2026-01-20 14:52:48.038 250022 DEBUG nova.compute.manager [None req-3e7fecf4-12a0-40e0-98a9-6a4b7f1ed069 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:52:48 compute-0 nova_compute[250018]: 2026-01-20 14:52:48.132 250022 INFO nova.compute.manager [None req-3e7fecf4-12a0-40e0-98a9-6a4b7f1ed069 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] instance snapshotting
Jan 20 14:52:48 compute-0 nova_compute[250018]: 2026-01-20 14:52:48.134 250022 DEBUG nova.objects.instance [None req-3e7fecf4-12a0-40e0-98a9-6a4b7f1ed069 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Lazy-loading 'flavor' on Instance uuid 7f5cfffe-c1dc-4b00-844e-0fb35b340f44 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:52:48 compute-0 nova_compute[250018]: 2026-01-20 14:52:48.339 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:52:48 compute-0 nova_compute[250018]: 2026-01-20 14:52:48.503 250022 INFO nova.virt.libvirt.driver [None req-3e7fecf4-12a0-40e0-98a9-6a4b7f1ed069 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Beginning live snapshot process
Jan 20 14:52:48 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:52:48 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:52:48 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:52:48.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:52:48 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:52:48 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:52:48 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:52:48.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:52:48 compute-0 nova_compute[250018]: 2026-01-20 14:52:48.988 250022 DEBUG nova.virt.libvirt.imagebackend [None req-3e7fecf4-12a0-40e0-98a9-6a4b7f1ed069 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] No parent info for a32b3e07-16d8-46fd-9a7b-c242c432fcf9; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Jan 20 14:52:49 compute-0 ceph-mon[74360]: pgmap v1970: 321 pgs: 321 active+clean; 580 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 7.5 MiB/s rd, 4.5 MiB/s wr, 211 op/s
Jan 20 14:52:49 compute-0 nova_compute[250018]: 2026-01-20 14:52:49.509 250022 DEBUG nova.storage.rbd_utils [None req-3e7fecf4-12a0-40e0-98a9-6a4b7f1ed069 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] creating snapshot(ab862a84036a4de4a170cb9dd5633adb) on rbd image(7f5cfffe-c1dc-4b00-844e-0fb35b340f44_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 20 14:52:49 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e269 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:52:49 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1971: 321 pgs: 321 active+clean; 580 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 7.1 MiB/s rd, 3.7 MiB/s wr, 214 op/s
Jan 20 14:52:50 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e269 do_prune osdmap full prune enabled
Jan 20 14:52:50 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e270 e270: 3 total, 3 up, 3 in
Jan 20 14:52:50 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e270: 3 total, 3 up, 3 in
Jan 20 14:52:50 compute-0 nova_compute[250018]: 2026-01-20 14:52:50.095 250022 DEBUG nova.storage.rbd_utils [None req-3e7fecf4-12a0-40e0-98a9-6a4b7f1ed069 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] cloning vms/7f5cfffe-c1dc-4b00-844e-0fb35b340f44_disk@ab862a84036a4de4a170cb9dd5633adb to images/d9e08301-4600-4321-b633-e7777cc1e842 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Jan 20 14:52:50 compute-0 nova_compute[250018]: 2026-01-20 14:52:50.209 250022 DEBUG nova.storage.rbd_utils [None req-3e7fecf4-12a0-40e0-98a9-6a4b7f1ed069 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] flattening images/d9e08301-4600-4321-b633-e7777cc1e842 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Jan 20 14:52:50 compute-0 nova_compute[250018]: 2026-01-20 14:52:50.451 250022 DEBUG nova.network.neutron [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] Updating instance_info_cache with network_info: [{"id": "7869c4f4-4542-4bf0-aebd-eeb2d3ce94f9", "address": "fa:16:3e:7f:92:68", "network": {"id": "79184781-1f23-4584-87de-08e262242488", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-165460946-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a29915e0dd2403fbd7b7e847696b00a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7869c4f4-45", "ovs_interfaceid": "7869c4f4-4542-4bf0-aebd-eeb2d3ce94f9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:52:50 compute-0 nova_compute[250018]: 2026-01-20 14:52:50.489 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Releasing lock "refresh_cache-c3b4d4c6-c42f-4abc-9c01-89ec3e10c677" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:52:50 compute-0 nova_compute[250018]: 2026-01-20 14:52:50.490 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 20 14:52:50 compute-0 nova_compute[250018]: 2026-01-20 14:52:50.753 250022 DEBUG nova.storage.rbd_utils [None req-3e7fecf4-12a0-40e0-98a9-6a4b7f1ed069 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] removing snapshot(ab862a84036a4de4a170cb9dd5633adb) on rbd image(7f5cfffe-c1dc-4b00-844e-0fb35b340f44_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Jan 20 14:52:50 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:52:50 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:52:50 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:52:50.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:52:50 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:52:50 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:52:50 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:52:50.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:52:51 compute-0 ceph-mon[74360]: pgmap v1971: 321 pgs: 321 active+clean; 580 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 7.1 MiB/s rd, 3.7 MiB/s wr, 214 op/s
Jan 20 14:52:51 compute-0 ceph-mon[74360]: osdmap e270: 3 total, 3 up, 3 in
Jan 20 14:52:51 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e270 do_prune osdmap full prune enabled
Jan 20 14:52:51 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e271 e271: 3 total, 3 up, 3 in
Jan 20 14:52:51 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e271: 3 total, 3 up, 3 in
Jan 20 14:52:51 compute-0 sudo[319821]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:52:51 compute-0 sudo[319821]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:52:51 compute-0 sudo[319821]: pam_unix(sudo:session): session closed for user root
Jan 20 14:52:51 compute-0 nova_compute[250018]: 2026-01-20 14:52:51.101 250022 DEBUG nova.storage.rbd_utils [None req-3e7fecf4-12a0-40e0-98a9-6a4b7f1ed069 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] creating snapshot(snap) on rbd image(d9e08301-4600-4321-b633-e7777cc1e842) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 20 14:52:51 compute-0 sudo[319853]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:52:51 compute-0 sudo[319853]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:52:51 compute-0 sudo[319853]: pam_unix(sudo:session): session closed for user root
Jan 20 14:52:51 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1974: 321 pgs: 321 active+clean; 580 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 2.5 KiB/s wr, 185 op/s
Jan 20 14:52:52 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e271 do_prune osdmap full prune enabled
Jan 20 14:52:52 compute-0 ceph-mon[74360]: osdmap e271: 3 total, 3 up, 3 in
Jan 20 14:52:52 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e272 e272: 3 total, 3 up, 3 in
Jan 20 14:52:52 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e272: 3 total, 3 up, 3 in
Jan 20 14:52:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:52:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:52:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:52:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:52:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:52:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:52:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Optimize plan auto_2026-01-20_14:52:52
Jan 20 14:52:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 14:52:52 compute-0 ceph-mgr[74653]: [balancer INFO root] do_upmap
Jan 20 14:52:52 compute-0 ceph-mgr[74653]: [balancer INFO root] pools ['default.rgw.log', 'volumes', 'default.rgw.meta', '.mgr', 'backups', 'default.rgw.control', 'cephfs.cephfs.data', 'images', 'cephfs.cephfs.meta', '.rgw.root', 'vms']
Jan 20 14:52:52 compute-0 ceph-mgr[74653]: [balancer INFO root] prepared 0/10 changes
Jan 20 14:52:52 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:52:52 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:52:52 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:52:52.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:52:52 compute-0 nova_compute[250018]: 2026-01-20 14:52:52.900 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:52:52 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:52:52 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:52:52 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:52:52.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:52:53 compute-0 ceph-mon[74360]: pgmap v1974: 321 pgs: 321 active+clean; 580 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 2.5 KiB/s wr, 185 op/s
Jan 20 14:52:53 compute-0 ceph-mon[74360]: osdmap e272: 3 total, 3 up, 3 in
Jan 20 14:52:53 compute-0 nova_compute[250018]: 2026-01-20 14:52:53.340 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:52:53 compute-0 nova_compute[250018]: 2026-01-20 14:52:53.572 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:52:53 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1976: 321 pgs: 321 active+clean; 628 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 5.4 MiB/s rd, 4.7 MiB/s wr, 205 op/s
Jan 20 14:52:54 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e272 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:52:54 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:52:54 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:52:54 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:52:54.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:52:54 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:52:54 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:52:54 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:52:54.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:52:55 compute-0 nova_compute[250018]: 2026-01-20 14:52:55.066 250022 INFO nova.virt.libvirt.driver [None req-3e7fecf4-12a0-40e0-98a9-6a4b7f1ed069 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Snapshot image upload complete
Jan 20 14:52:55 compute-0 nova_compute[250018]: 2026-01-20 14:52:55.067 250022 INFO nova.compute.manager [None req-3e7fecf4-12a0-40e0-98a9-6a4b7f1ed069 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Took 6.88 seconds to snapshot the instance on the hypervisor.
Jan 20 14:52:55 compute-0 ceph-mon[74360]: pgmap v1976: 321 pgs: 321 active+clean; 628 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 5.4 MiB/s rd, 4.7 MiB/s wr, 205 op/s
Jan 20 14:52:55 compute-0 nova_compute[250018]: 2026-01-20 14:52:55.496 250022 DEBUG nova.compute.manager [None req-3e7fecf4-12a0-40e0-98a9-6a4b7f1ed069 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Found 2 images (rotation: 2) _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4450
Jan 20 14:52:55 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1977: 321 pgs: 321 active+clean; 660 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 9.5 MiB/s rd, 7.8 MiB/s wr, 258 op/s
Jan 20 14:52:56 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:52:56 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:52:56 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:52:56.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:52:56 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:52:56 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:52:56 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:52:56.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:52:57 compute-0 ceph-mon[74360]: pgmap v1977: 321 pgs: 321 active+clean; 660 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 9.5 MiB/s rd, 7.8 MiB/s wr, 258 op/s
Jan 20 14:52:57 compute-0 nova_compute[250018]: 2026-01-20 14:52:57.483 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:52:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 14:52:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:52:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 14:52:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:52:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:52:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:52:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:52:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:52:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:52:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:52:57 compute-0 nova_compute[250018]: 2026-01-20 14:52:57.903 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:52:57 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1978: 321 pgs: 321 active+clean; 660 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 7.3 MiB/s rd, 5.9 MiB/s wr, 205 op/s
Jan 20 14:52:58 compute-0 ceph-mon[74360]: pgmap v1978: 321 pgs: 321 active+clean; 660 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 7.3 MiB/s rd, 5.9 MiB/s wr, 205 op/s
Jan 20 14:52:58 compute-0 nova_compute[250018]: 2026-01-20 14:52:58.344 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:52:58 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:52:58 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:52:58 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:52:58.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:52:58 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:52:58 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:52:58 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:52:58.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:52:59 compute-0 nova_compute[250018]: 2026-01-20 14:52:59.442 250022 DEBUG nova.compute.manager [None req-45edb9e3-1734-4b65-8386-b5504ff8fa9a 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:52:59 compute-0 nova_compute[250018]: 2026-01-20 14:52:59.506 250022 INFO nova.compute.manager [None req-45edb9e3-1734-4b65-8386-b5504ff8fa9a 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] instance snapshotting
Jan 20 14:52:59 compute-0 nova_compute[250018]: 2026-01-20 14:52:59.507 250022 DEBUG nova.objects.instance [None req-45edb9e3-1734-4b65-8386-b5504ff8fa9a 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Lazy-loading 'flavor' on Instance uuid 7f5cfffe-c1dc-4b00-844e-0fb35b340f44 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:52:59 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e272 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:52:59 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e272 do_prune osdmap full prune enabled
Jan 20 14:52:59 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e273 e273: 3 total, 3 up, 3 in
Jan 20 14:52:59 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e273: 3 total, 3 up, 3 in
Jan 20 14:52:59 compute-0 nova_compute[250018]: 2026-01-20 14:52:59.867 250022 INFO nova.virt.libvirt.driver [None req-45edb9e3-1734-4b65-8386-b5504ff8fa9a 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Beginning live snapshot process
Jan 20 14:52:59 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1980: 321 pgs: 321 active+clean; 660 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 6.1 MiB/s rd, 5.8 MiB/s wr, 145 op/s
Jan 20 14:53:00 compute-0 nova_compute[250018]: 2026-01-20 14:53:00.108 250022 DEBUG nova.virt.libvirt.imagebackend [None req-45edb9e3-1734-4b65-8386-b5504ff8fa9a 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] No parent info for a32b3e07-16d8-46fd-9a7b-c242c432fcf9; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Jan 20 14:53:00 compute-0 nova_compute[250018]: 2026-01-20 14:53:00.500 250022 DEBUG nova.storage.rbd_utils [None req-45edb9e3-1734-4b65-8386-b5504ff8fa9a 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] creating snapshot(36bc81e8b2ab442e8f94cb62069352c6) on rbd image(7f5cfffe-c1dc-4b00-844e-0fb35b340f44_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 20 14:53:00 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e273 do_prune osdmap full prune enabled
Jan 20 14:53:00 compute-0 ceph-mon[74360]: osdmap e273: 3 total, 3 up, 3 in
Jan 20 14:53:00 compute-0 ceph-mon[74360]: pgmap v1980: 321 pgs: 321 active+clean; 660 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 6.1 MiB/s rd, 5.8 MiB/s wr, 145 op/s
Jan 20 14:53:00 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e274 e274: 3 total, 3 up, 3 in
Jan 20 14:53:00 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e274: 3 total, 3 up, 3 in
Jan 20 14:53:00 compute-0 nova_compute[250018]: 2026-01-20 14:53:00.680 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:53:00 compute-0 nova_compute[250018]: 2026-01-20 14:53:00.712 250022 DEBUG nova.storage.rbd_utils [None req-45edb9e3-1734-4b65-8386-b5504ff8fa9a 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] cloning vms/7f5cfffe-c1dc-4b00-844e-0fb35b340f44_disk@36bc81e8b2ab442e8f94cb62069352c6 to images/bf636010-1d3c-4028-9e00-bc1bb08e7dca clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Jan 20 14:53:00 compute-0 nova_compute[250018]: 2026-01-20 14:53:00.839 250022 DEBUG nova.storage.rbd_utils [None req-45edb9e3-1734-4b65-8386-b5504ff8fa9a 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] flattening images/bf636010-1d3c-4028-9e00-bc1bb08e7dca flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Jan 20 14:53:00 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:53:00 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:53:00 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:53:00.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:53:00 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:53:00 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:53:00 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:53:00.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:53:01 compute-0 sudo[320000]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:53:01 compute-0 sudo[320000]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:53:01 compute-0 sudo[320000]: pam_unix(sudo:session): session closed for user root
Jan 20 14:53:01 compute-0 nova_compute[250018]: 2026-01-20 14:53:01.282 250022 DEBUG nova.storage.rbd_utils [None req-45edb9e3-1734-4b65-8386-b5504ff8fa9a 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] removing snapshot(36bc81e8b2ab442e8f94cb62069352c6) on rbd image(7f5cfffe-c1dc-4b00-844e-0fb35b340f44_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Jan 20 14:53:01 compute-0 sudo[320043]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:53:01 compute-0 sudo[320043]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:53:01 compute-0 sudo[320043]: pam_unix(sudo:session): session closed for user root
Jan 20 14:53:01 compute-0 sudo[320068]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:53:01 compute-0 sudo[320068]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:53:01 compute-0 sudo[320068]: pam_unix(sudo:session): session closed for user root
Jan 20 14:53:01 compute-0 sudo[320093]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 14:53:01 compute-0 sudo[320093]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:53:01 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e274 do_prune osdmap full prune enabled
Jan 20 14:53:01 compute-0 ceph-mon[74360]: osdmap e274: 3 total, 3 up, 3 in
Jan 20 14:53:01 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/4058498692' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:53:01 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e275 e275: 3 total, 3 up, 3 in
Jan 20 14:53:01 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e275: 3 total, 3 up, 3 in
Jan 20 14:53:01 compute-0 nova_compute[250018]: 2026-01-20 14:53:01.723 250022 DEBUG nova.storage.rbd_utils [None req-45edb9e3-1734-4b65-8386-b5504ff8fa9a 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] creating snapshot(snap) on rbd image(bf636010-1d3c-4028-9e00-bc1bb08e7dca) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 20 14:53:01 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1983: 321 pgs: 321 active+clean; 660 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 743 KiB/s rd, 21 KiB/s wr, 72 op/s
Jan 20 14:53:02 compute-0 sudo[320093]: pam_unix(sudo:session): session closed for user root
Jan 20 14:53:02 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 14:53:02 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:53:02 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 20 14:53:02 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:53:02 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 20 14:53:02 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:53:02 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 9f23b6d2-22cf-456a-98c4-f86a5fa1307c does not exist
Jan 20 14:53:02 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 185b403a-4bb8-4b23-a944-647002929ddf does not exist
Jan 20 14:53:02 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 73026b9c-3e39-4c30-b948-dfdaedc36a1b does not exist
Jan 20 14:53:02 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 20 14:53:02 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:53:02 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 20 14:53:02 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:53:02 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 14:53:02 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:53:02 compute-0 sudo[320169]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:53:02 compute-0 sudo[320169]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:53:02 compute-0 sudo[320169]: pam_unix(sudo:session): session closed for user root
Jan 20 14:53:02 compute-0 sudo[320194]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:53:02 compute-0 sudo[320194]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:53:02 compute-0 sudo[320194]: pam_unix(sudo:session): session closed for user root
Jan 20 14:53:02 compute-0 sudo[320219]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:53:02 compute-0 sudo[320219]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:53:02 compute-0 sudo[320219]: pam_unix(sudo:session): session closed for user root
Jan 20 14:53:02 compute-0 sudo[320244]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 14:53:02 compute-0 sudo[320244]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:53:02 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e275 do_prune osdmap full prune enabled
Jan 20 14:53:02 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e276 e276: 3 total, 3 up, 3 in
Jan 20 14:53:02 compute-0 ceph-mon[74360]: osdmap e275: 3 total, 3 up, 3 in
Jan 20 14:53:02 compute-0 ceph-mon[74360]: pgmap v1983: 321 pgs: 321 active+clean; 660 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 743 KiB/s rd, 21 KiB/s wr, 72 op/s
Jan 20 14:53:02 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:53:02 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:53:02 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:53:02 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:53:02 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:53:02 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:53:02 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e276: 3 total, 3 up, 3 in
Jan 20 14:53:02 compute-0 podman[320309]: 2026-01-20 14:53:02.782445937 +0000 UTC m=+0.061972599 container create 42b7d3e942d3a450248de5b6033cd0afe759493b83b76b9da101fc22e57eaa63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_tesla, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 20 14:53:02 compute-0 systemd[1]: Started libpod-conmon-42b7d3e942d3a450248de5b6033cd0afe759493b83b76b9da101fc22e57eaa63.scope.
Jan 20 14:53:02 compute-0 podman[320309]: 2026-01-20 14:53:02.752805769 +0000 UTC m=+0.032332512 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:53:02 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:53:02 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:53:02 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:53:02.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:53:02 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:53:02 compute-0 podman[320309]: 2026-01-20 14:53:02.903489565 +0000 UTC m=+0.183016307 container init 42b7d3e942d3a450248de5b6033cd0afe759493b83b76b9da101fc22e57eaa63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_tesla, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 20 14:53:02 compute-0 nova_compute[250018]: 2026-01-20 14:53:02.905 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:53:02 compute-0 podman[320309]: 2026-01-20 14:53:02.912888467 +0000 UTC m=+0.192415159 container start 42b7d3e942d3a450248de5b6033cd0afe759493b83b76b9da101fc22e57eaa63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_tesla, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:53:02 compute-0 podman[320309]: 2026-01-20 14:53:02.917292987 +0000 UTC m=+0.196819669 container attach 42b7d3e942d3a450248de5b6033cd0afe759493b83b76b9da101fc22e57eaa63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_tesla, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 20 14:53:02 compute-0 quirky_tesla[320326]: 167 167
Jan 20 14:53:02 compute-0 systemd[1]: libpod-42b7d3e942d3a450248de5b6033cd0afe759493b83b76b9da101fc22e57eaa63.scope: Deactivated successfully.
Jan 20 14:53:02 compute-0 podman[320309]: 2026-01-20 14:53:02.921259573 +0000 UTC m=+0.200786265 container died 42b7d3e942d3a450248de5b6033cd0afe759493b83b76b9da101fc22e57eaa63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_tesla, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:53:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-75ea81cffdbad386869946d865f9bf9f96b12925d94351a0cef10f350aca6f54-merged.mount: Deactivated successfully.
Jan 20 14:53:02 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:53:02 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:53:02 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:53:02.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:53:02 compute-0 podman[320309]: 2026-01-20 14:53:02.988693388 +0000 UTC m=+0.268220040 container remove 42b7d3e942d3a450248de5b6033cd0afe759493b83b76b9da101fc22e57eaa63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_tesla, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 20 14:53:02 compute-0 systemd[1]: libpod-conmon-42b7d3e942d3a450248de5b6033cd0afe759493b83b76b9da101fc22e57eaa63.scope: Deactivated successfully.
Jan 20 14:53:03 compute-0 podman[320352]: 2026-01-20 14:53:03.164963052 +0000 UTC m=+0.045472635 container create a81d5a5b7c7a56d267261ea5bd6cfdb66ff976fded66d8584b5064cc0967365b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_proskuriakova, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:53:03 compute-0 systemd[1]: Started libpod-conmon-a81d5a5b7c7a56d267261ea5bd6cfdb66ff976fded66d8584b5064cc0967365b.scope.
Jan 20 14:53:03 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:53:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03cf507af0ecd01f7abac5ca74f670f8a3f15949e82d4fed38d8bcf325142586/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:53:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03cf507af0ecd01f7abac5ca74f670f8a3f15949e82d4fed38d8bcf325142586/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:53:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03cf507af0ecd01f7abac5ca74f670f8a3f15949e82d4fed38d8bcf325142586/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:53:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03cf507af0ecd01f7abac5ca74f670f8a3f15949e82d4fed38d8bcf325142586/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:53:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03cf507af0ecd01f7abac5ca74f670f8a3f15949e82d4fed38d8bcf325142586/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 14:53:03 compute-0 podman[320352]: 2026-01-20 14:53:03.14295167 +0000 UTC m=+0.023461293 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:53:03 compute-0 podman[320352]: 2026-01-20 14:53:03.240991518 +0000 UTC m=+0.121501121 container init a81d5a5b7c7a56d267261ea5bd6cfdb66ff976fded66d8584b5064cc0967365b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_proskuriakova, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 20 14:53:03 compute-0 podman[320352]: 2026-01-20 14:53:03.248604882 +0000 UTC m=+0.129114466 container start a81d5a5b7c7a56d267261ea5bd6cfdb66ff976fded66d8584b5064cc0967365b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_proskuriakova, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True)
Jan 20 14:53:03 compute-0 podman[320352]: 2026-01-20 14:53:03.252185119 +0000 UTC m=+0.132694702 container attach a81d5a5b7c7a56d267261ea5bd6cfdb66ff976fded66d8584b5064cc0967365b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_proskuriakova, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 20 14:53:03 compute-0 nova_compute[250018]: 2026-01-20 14:53:03.346 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:53:03 compute-0 ceph-mon[74360]: osdmap e276: 3 total, 3 up, 3 in
Jan 20 14:53:03 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1985: 321 pgs: 321 active+clean; 695 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 8.5 MiB/s rd, 5.3 MiB/s wr, 118 op/s
Jan 20 14:53:04 compute-0 blissful_proskuriakova[320368]: --> passed data devices: 0 physical, 1 LVM
Jan 20 14:53:04 compute-0 blissful_proskuriakova[320368]: --> relative data size: 1.0
Jan 20 14:53:04 compute-0 blissful_proskuriakova[320368]: --> All data devices are unavailable
Jan 20 14:53:04 compute-0 systemd[1]: libpod-a81d5a5b7c7a56d267261ea5bd6cfdb66ff976fded66d8584b5064cc0967365b.scope: Deactivated successfully.
Jan 20 14:53:04 compute-0 conmon[320368]: conmon a81d5a5b7c7a56d26726 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a81d5a5b7c7a56d267261ea5bd6cfdb66ff976fded66d8584b5064cc0967365b.scope/container/memory.events
Jan 20 14:53:04 compute-0 podman[320352]: 2026-01-20 14:53:04.079171745 +0000 UTC m=+0.959681358 container died a81d5a5b7c7a56d267261ea5bd6cfdb66ff976fded66d8584b5064cc0967365b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_proskuriakova, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 20 14:53:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-03cf507af0ecd01f7abac5ca74f670f8a3f15949e82d4fed38d8bcf325142586-merged.mount: Deactivated successfully.
Jan 20 14:53:04 compute-0 podman[320352]: 2026-01-20 14:53:04.157994977 +0000 UTC m=+1.038504580 container remove a81d5a5b7c7a56d267261ea5bd6cfdb66ff976fded66d8584b5064cc0967365b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_proskuriakova, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 20 14:53:04 compute-0 systemd[1]: libpod-conmon-a81d5a5b7c7a56d267261ea5bd6cfdb66ff976fded66d8584b5064cc0967365b.scope: Deactivated successfully.
Jan 20 14:53:04 compute-0 sudo[320244]: pam_unix(sudo:session): session closed for user root
Jan 20 14:53:04 compute-0 podman[320392]: 2026-01-20 14:53:04.220794387 +0000 UTC m=+0.095572424 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 20 14:53:04 compute-0 podman[320384]: 2026-01-20 14:53:04.296212436 +0000 UTC m=+0.169602395 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, managed_by=edpm_ansible, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202)
Jan 20 14:53:04 compute-0 sudo[320437]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:53:04 compute-0 sudo[320437]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:53:04 compute-0 sudo[320437]: pam_unix(sudo:session): session closed for user root
Jan 20 14:53:04 compute-0 sudo[320467]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:53:04 compute-0 sudo[320467]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:53:04 compute-0 sudo[320467]: pam_unix(sudo:session): session closed for user root
Jan 20 14:53:04 compute-0 sudo[320492]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:53:04 compute-0 sudo[320492]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:53:04 compute-0 sudo[320492]: pam_unix(sudo:session): session closed for user root
Jan 20 14:53:04 compute-0 sudo[320517]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- lvm list --format json
Jan 20 14:53:04 compute-0 sudo[320517]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:53:04 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e276 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:53:04 compute-0 ceph-mon[74360]: pgmap v1985: 321 pgs: 321 active+clean; 695 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 8.5 MiB/s rd, 5.3 MiB/s wr, 118 op/s
Jan 20 14:53:04 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:53:04 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:53:04 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:53:04.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:53:04 compute-0 podman[320586]: 2026-01-20 14:53:04.87490579 +0000 UTC m=+0.044314233 container create de89178ca125d546c6dc6feaa151e7987617162e4eff0cc05ac0044349a236e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_bose, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 20 14:53:04 compute-0 systemd[1]: Started libpod-conmon-de89178ca125d546c6dc6feaa151e7987617162e4eff0cc05ac0044349a236e4.scope.
Jan 20 14:53:04 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:53:04 compute-0 podman[320586]: 2026-01-20 14:53:04.942795277 +0000 UTC m=+0.112203740 container init de89178ca125d546c6dc6feaa151e7987617162e4eff0cc05ac0044349a236e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_bose, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 20 14:53:04 compute-0 podman[320586]: 2026-01-20 14:53:04.94997532 +0000 UTC m=+0.119383763 container start de89178ca125d546c6dc6feaa151e7987617162e4eff0cc05ac0044349a236e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_bose, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 20 14:53:04 compute-0 podman[320586]: 2026-01-20 14:53:04.953477485 +0000 UTC m=+0.122885948 container attach de89178ca125d546c6dc6feaa151e7987617162e4eff0cc05ac0044349a236e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_bose, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 14:53:04 compute-0 epic_bose[320602]: 167 167
Jan 20 14:53:04 compute-0 podman[320586]: 2026-01-20 14:53:04.859355692 +0000 UTC m=+0.028764135 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:53:04 compute-0 systemd[1]: libpod-de89178ca125d546c6dc6feaa151e7987617162e4eff0cc05ac0044349a236e4.scope: Deactivated successfully.
Jan 20 14:53:04 compute-0 podman[320586]: 2026-01-20 14:53:04.955946242 +0000 UTC m=+0.125354685 container died de89178ca125d546c6dc6feaa151e7987617162e4eff0cc05ac0044349a236e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_bose, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 20 14:53:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-956f022462f730f41d82581bec3960d59f87da8a27b126e61b35fc95ca333fdd-merged.mount: Deactivated successfully.
Jan 20 14:53:04 compute-0 podman[320586]: 2026-01-20 14:53:04.988753714 +0000 UTC m=+0.158162157 container remove de89178ca125d546c6dc6feaa151e7987617162e4eff0cc05ac0044349a236e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_bose, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 20 14:53:04 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:53:04 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:53:04 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:53:04.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:53:04 compute-0 systemd[1]: libpod-conmon-de89178ca125d546c6dc6feaa151e7987617162e4eff0cc05ac0044349a236e4.scope: Deactivated successfully.
Jan 20 14:53:05 compute-0 podman[320626]: 2026-01-20 14:53:05.163553278 +0000 UTC m=+0.039576585 container create 420230eeb7c8ae5229af5c1e4137865d31a45eab3305824b8dc159da18dd1f00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_banzai, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:53:05 compute-0 systemd[1]: Started libpod-conmon-420230eeb7c8ae5229af5c1e4137865d31a45eab3305824b8dc159da18dd1f00.scope.
Jan 20 14:53:05 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:53:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b38ac4916fe7702cf272c3102e456230eea277234ad65e8bb5375ec00b0d9576/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:53:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b38ac4916fe7702cf272c3102e456230eea277234ad65e8bb5375ec00b0d9576/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:53:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b38ac4916fe7702cf272c3102e456230eea277234ad65e8bb5375ec00b0d9576/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:53:05 compute-0 podman[320626]: 2026-01-20 14:53:05.146202552 +0000 UTC m=+0.022225879 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:53:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b38ac4916fe7702cf272c3102e456230eea277234ad65e8bb5375ec00b0d9576/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:53:05 compute-0 podman[320626]: 2026-01-20 14:53:05.264580267 +0000 UTC m=+0.140603634 container init 420230eeb7c8ae5229af5c1e4137865d31a45eab3305824b8dc159da18dd1f00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_banzai, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default)
Jan 20 14:53:05 compute-0 podman[320626]: 2026-01-20 14:53:05.280189327 +0000 UTC m=+0.156212644 container start 420230eeb7c8ae5229af5c1e4137865d31a45eab3305824b8dc159da18dd1f00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_banzai, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 20 14:53:05 compute-0 podman[320626]: 2026-01-20 14:53:05.284548144 +0000 UTC m=+0.160571461 container attach 420230eeb7c8ae5229af5c1e4137865d31a45eab3305824b8dc159da18dd1f00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_banzai, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Jan 20 14:53:05 compute-0 nova_compute[250018]: 2026-01-20 14:53:05.583 250022 INFO nova.virt.libvirt.driver [None req-45edb9e3-1734-4b65-8386-b5504ff8fa9a 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Snapshot image upload complete
Jan 20 14:53:05 compute-0 nova_compute[250018]: 2026-01-20 14:53:05.584 250022 INFO nova.compute.manager [None req-45edb9e3-1734-4b65-8386-b5504ff8fa9a 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Took 6.06 seconds to snapshot the instance on the hypervisor.
Jan 20 14:53:05 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1986: 321 pgs: 321 active+clean; 780 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 12 MiB/s rd, 11 MiB/s wr, 254 op/s
Jan 20 14:53:06 compute-0 pensive_banzai[320642]: {
Jan 20 14:53:06 compute-0 pensive_banzai[320642]:     "0": [
Jan 20 14:53:06 compute-0 pensive_banzai[320642]:         {
Jan 20 14:53:06 compute-0 pensive_banzai[320642]:             "devices": [
Jan 20 14:53:06 compute-0 pensive_banzai[320642]:                 "/dev/loop3"
Jan 20 14:53:06 compute-0 pensive_banzai[320642]:             ],
Jan 20 14:53:06 compute-0 pensive_banzai[320642]:             "lv_name": "ceph_lv0",
Jan 20 14:53:06 compute-0 pensive_banzai[320642]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:53:06 compute-0 pensive_banzai[320642]:             "lv_size": "7511998464",
Jan 20 14:53:06 compute-0 pensive_banzai[320642]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e399cf45-e6b6-5393-99f1-75c601d3f188,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afc3740a-ccee-46f8-b343-22648cc89376,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 20 14:53:06 compute-0 pensive_banzai[320642]:             "lv_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 14:53:06 compute-0 pensive_banzai[320642]:             "name": "ceph_lv0",
Jan 20 14:53:06 compute-0 pensive_banzai[320642]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:53:06 compute-0 pensive_banzai[320642]:             "tags": {
Jan 20 14:53:06 compute-0 pensive_banzai[320642]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:53:06 compute-0 pensive_banzai[320642]:                 "ceph.block_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 14:53:06 compute-0 pensive_banzai[320642]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 14:53:06 compute-0 pensive_banzai[320642]:                 "ceph.cluster_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 14:53:06 compute-0 pensive_banzai[320642]:                 "ceph.cluster_name": "ceph",
Jan 20 14:53:06 compute-0 pensive_banzai[320642]:                 "ceph.crush_device_class": "",
Jan 20 14:53:06 compute-0 pensive_banzai[320642]:                 "ceph.encrypted": "0",
Jan 20 14:53:06 compute-0 pensive_banzai[320642]:                 "ceph.osd_fsid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 14:53:06 compute-0 pensive_banzai[320642]:                 "ceph.osd_id": "0",
Jan 20 14:53:06 compute-0 pensive_banzai[320642]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 14:53:06 compute-0 pensive_banzai[320642]:                 "ceph.type": "block",
Jan 20 14:53:06 compute-0 pensive_banzai[320642]:                 "ceph.vdo": "0"
Jan 20 14:53:06 compute-0 pensive_banzai[320642]:             },
Jan 20 14:53:06 compute-0 pensive_banzai[320642]:             "type": "block",
Jan 20 14:53:06 compute-0 pensive_banzai[320642]:             "vg_name": "ceph_vg0"
Jan 20 14:53:06 compute-0 pensive_banzai[320642]:         }
Jan 20 14:53:06 compute-0 pensive_banzai[320642]:     ]
Jan 20 14:53:06 compute-0 pensive_banzai[320642]: }
Jan 20 14:53:06 compute-0 systemd[1]: libpod-420230eeb7c8ae5229af5c1e4137865d31a45eab3305824b8dc159da18dd1f00.scope: Deactivated successfully.
Jan 20 14:53:06 compute-0 podman[320652]: 2026-01-20 14:53:06.123722759 +0000 UTC m=+0.029528656 container died 420230eeb7c8ae5229af5c1e4137865d31a45eab3305824b8dc159da18dd1f00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_banzai, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 14:53:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-b38ac4916fe7702cf272c3102e456230eea277234ad65e8bb5375ec00b0d9576-merged.mount: Deactivated successfully.
Jan 20 14:53:06 compute-0 podman[320652]: 2026-01-20 14:53:06.17390953 +0000 UTC m=+0.079715377 container remove 420230eeb7c8ae5229af5c1e4137865d31a45eab3305824b8dc159da18dd1f00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_banzai, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:53:06 compute-0 nova_compute[250018]: 2026-01-20 14:53:06.176 250022 DEBUG nova.compute.manager [None req-45edb9e3-1734-4b65-8386-b5504ff8fa9a 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Found 3 images (rotation: 2) _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4450
Jan 20 14:53:06 compute-0 nova_compute[250018]: 2026-01-20 14:53:06.177 250022 DEBUG nova.compute.manager [None req-45edb9e3-1734-4b65-8386-b5504ff8fa9a 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Rotating out 1 backups _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4458
Jan 20 14:53:06 compute-0 nova_compute[250018]: 2026-01-20 14:53:06.177 250022 DEBUG nova.compute.manager [None req-45edb9e3-1734-4b65-8386-b5504ff8fa9a 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Deleting image 165549fb-68ff-4d9c-ab65-c85189d23d7d _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4463
Jan 20 14:53:06 compute-0 systemd[1]: libpod-conmon-420230eeb7c8ae5229af5c1e4137865d31a45eab3305824b8dc159da18dd1f00.scope: Deactivated successfully.
Jan 20 14:53:06 compute-0 sudo[320517]: pam_unix(sudo:session): session closed for user root
Jan 20 14:53:06 compute-0 sudo[320668]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:53:06 compute-0 sudo[320668]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:53:06 compute-0 sudo[320668]: pam_unix(sudo:session): session closed for user root
Jan 20 14:53:06 compute-0 sudo[320693]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:53:06 compute-0 sudo[320693]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:53:06 compute-0 sudo[320693]: pam_unix(sudo:session): session closed for user root
Jan 20 14:53:06 compute-0 sudo[320718]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:53:06 compute-0 sudo[320718]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:53:06 compute-0 sudo[320718]: pam_unix(sudo:session): session closed for user root
Jan 20 14:53:06 compute-0 sudo[320743]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- raw list --format json
Jan 20 14:53:06 compute-0 sudo[320743]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:53:06 compute-0 podman[320807]: 2026-01-20 14:53:06.74242789 +0000 UTC m=+0.036057602 container create 73a7d71fc827ef51d14f8d97be65c79392aa6e6b283051b7024f654bb69a7ee0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_euclid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:53:06 compute-0 systemd[1]: Started libpod-conmon-73a7d71fc827ef51d14f8d97be65c79392aa6e6b283051b7024f654bb69a7ee0.scope.
Jan 20 14:53:06 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:53:06 compute-0 podman[320807]: 2026-01-20 14:53:06.725940376 +0000 UTC m=+0.019570108 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:53:06 compute-0 podman[320807]: 2026-01-20 14:53:06.825834314 +0000 UTC m=+0.119464046 container init 73a7d71fc827ef51d14f8d97be65c79392aa6e6b283051b7024f654bb69a7ee0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_euclid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:53:06 compute-0 podman[320807]: 2026-01-20 14:53:06.83199797 +0000 UTC m=+0.125627682 container start 73a7d71fc827ef51d14f8d97be65c79392aa6e6b283051b7024f654bb69a7ee0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_euclid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 20 14:53:06 compute-0 podman[320807]: 2026-01-20 14:53:06.83497739 +0000 UTC m=+0.128607172 container attach 73a7d71fc827ef51d14f8d97be65c79392aa6e6b283051b7024f654bb69a7ee0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_euclid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:53:06 compute-0 happy_euclid[320823]: 167 167
Jan 20 14:53:06 compute-0 systemd[1]: libpod-73a7d71fc827ef51d14f8d97be65c79392aa6e6b283051b7024f654bb69a7ee0.scope: Deactivated successfully.
Jan 20 14:53:06 compute-0 podman[320807]: 2026-01-20 14:53:06.83683565 +0000 UTC m=+0.130465352 container died 73a7d71fc827ef51d14f8d97be65c79392aa6e6b283051b7024f654bb69a7ee0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_euclid, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:53:06 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:53:06 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:53:06 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:53:06.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:53:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-955577ecb8b5e4abbb875c04ee526535ec010a23080536a2197cf24621599b26-merged.mount: Deactivated successfully.
Jan 20 14:53:06 compute-0 podman[320807]: 2026-01-20 14:53:06.873021714 +0000 UTC m=+0.166651426 container remove 73a7d71fc827ef51d14f8d97be65c79392aa6e6b283051b7024f654bb69a7ee0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_euclid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 20 14:53:06 compute-0 systemd[1]: libpod-conmon-73a7d71fc827ef51d14f8d97be65c79392aa6e6b283051b7024f654bb69a7ee0.scope: Deactivated successfully.
Jan 20 14:53:06 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:53:06 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:53:06 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:53:06.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:53:07 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e276 do_prune osdmap full prune enabled
Jan 20 14:53:07 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e277 e277: 3 total, 3 up, 3 in
Jan 20 14:53:07 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e277: 3 total, 3 up, 3 in
Jan 20 14:53:07 compute-0 ceph-mon[74360]: pgmap v1986: 321 pgs: 321 active+clean; 780 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 12 MiB/s rd, 11 MiB/s wr, 254 op/s
Jan 20 14:53:07 compute-0 podman[320846]: 2026-01-20 14:53:07.042448754 +0000 UTC m=+0.048959919 container create eea950e45f2f185c8a2a83a3af7431c570f95054797c23550fdb143f87f59f3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_allen, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 20 14:53:07 compute-0 systemd[1]: Started libpod-conmon-eea950e45f2f185c8a2a83a3af7431c570f95054797c23550fdb143f87f59f3a.scope.
Jan 20 14:53:07 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:53:07 compute-0 podman[320846]: 2026-01-20 14:53:07.02260472 +0000 UTC m=+0.029115885 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:53:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5dd14f09ad77f5142d0f8ba535d740fcab2f26d0dc0b35834ba1167b249ecad8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:53:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5dd14f09ad77f5142d0f8ba535d740fcab2f26d0dc0b35834ba1167b249ecad8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:53:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5dd14f09ad77f5142d0f8ba535d740fcab2f26d0dc0b35834ba1167b249ecad8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:53:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5dd14f09ad77f5142d0f8ba535d740fcab2f26d0dc0b35834ba1167b249ecad8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:53:07 compute-0 podman[320846]: 2026-01-20 14:53:07.126701331 +0000 UTC m=+0.133212486 container init eea950e45f2f185c8a2a83a3af7431c570f95054797c23550fdb143f87f59f3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_allen, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:53:07 compute-0 podman[320846]: 2026-01-20 14:53:07.135359524 +0000 UTC m=+0.141870679 container start eea950e45f2f185c8a2a83a3af7431c570f95054797c23550fdb143f87f59f3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_allen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 20 14:53:07 compute-0 podman[320846]: 2026-01-20 14:53:07.139061114 +0000 UTC m=+0.145572309 container attach eea950e45f2f185c8a2a83a3af7431c570f95054797c23550fdb143f87f59f3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_allen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 20 14:53:07 compute-0 nova_compute[250018]: 2026-01-20 14:53:07.908 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:53:07 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1988: 321 pgs: 2 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 309 active+clean; 787 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 11 MiB/s rd, 11 MiB/s wr, 281 op/s
Jan 20 14:53:07 compute-0 reverent_allen[320862]: {
Jan 20 14:53:07 compute-0 reverent_allen[320862]:     "afc3740a-ccee-46f8-b343-22648cc89376": {
Jan 20 14:53:07 compute-0 reverent_allen[320862]:         "ceph_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 14:53:07 compute-0 reverent_allen[320862]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 20 14:53:07 compute-0 reverent_allen[320862]:         "osd_id": 0,
Jan 20 14:53:07 compute-0 reverent_allen[320862]:         "osd_uuid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 14:53:07 compute-0 reverent_allen[320862]:         "type": "bluestore"
Jan 20 14:53:07 compute-0 reverent_allen[320862]:     }
Jan 20 14:53:07 compute-0 reverent_allen[320862]: }
Jan 20 14:53:08 compute-0 ceph-mon[74360]: osdmap e277: 3 total, 3 up, 3 in
Jan 20 14:53:08 compute-0 systemd[1]: libpod-eea950e45f2f185c8a2a83a3af7431c570f95054797c23550fdb143f87f59f3a.scope: Deactivated successfully.
Jan 20 14:53:08 compute-0 podman[320884]: 2026-01-20 14:53:08.086416819 +0000 UTC m=+0.027495571 container died eea950e45f2f185c8a2a83a3af7431c570f95054797c23550fdb143f87f59f3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_allen, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 20 14:53:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-5dd14f09ad77f5142d0f8ba535d740fcab2f26d0dc0b35834ba1167b249ecad8-merged.mount: Deactivated successfully.
Jan 20 14:53:08 compute-0 podman[320884]: 2026-01-20 14:53:08.142372495 +0000 UTC m=+0.083451167 container remove eea950e45f2f185c8a2a83a3af7431c570f95054797c23550fdb143f87f59f3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_allen, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:53:08 compute-0 systemd[1]: libpod-conmon-eea950e45f2f185c8a2a83a3af7431c570f95054797c23550fdb143f87f59f3a.scope: Deactivated successfully.
Jan 20 14:53:08 compute-0 sudo[320743]: pam_unix(sudo:session): session closed for user root
Jan 20 14:53:08 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 14:53:08 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:53:08 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 14:53:08 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:53:08 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 90955950-4b13-4bd1-8f8d-c73a01d84020 does not exist
Jan 20 14:53:08 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev f7eeb795-a875-4699-9b6f-fd4ea5e5a4d0 does not exist
Jan 20 14:53:08 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev a665093c-b5d3-46ed-98ed-57c233364b70 does not exist
Jan 20 14:53:08 compute-0 sudo[320899]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:53:08 compute-0 sudo[320899]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:53:08 compute-0 sudo[320899]: pam_unix(sudo:session): session closed for user root
Jan 20 14:53:08 compute-0 sudo[320924]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 14:53:08 compute-0 sudo[320924]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:53:08 compute-0 nova_compute[250018]: 2026-01-20 14:53:08.348 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:53:08 compute-0 sudo[320924]: pam_unix(sudo:session): session closed for user root
Jan 20 14:53:08 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:53:08 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:53:08 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:53:08.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:53:08 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:53:08 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:53:08 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:53:08.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:53:09 compute-0 ceph-mon[74360]: pgmap v1988: 321 pgs: 2 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 309 active+clean; 787 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 11 MiB/s rd, 11 MiB/s wr, 281 op/s
Jan 20 14:53:09 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:53:09 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:53:09 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/2491785817' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:53:09 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e277 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:53:09 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e277 do_prune osdmap full prune enabled
Jan 20 14:53:09 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e278 e278: 3 total, 3 up, 3 in
Jan 20 14:53:09 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e278: 3 total, 3 up, 3 in
Jan 20 14:53:09 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1990: 321 pgs: 2 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 309 active+clean; 775 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.9 MiB/s rd, 6.9 MiB/s wr, 237 op/s
Jan 20 14:53:10 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/3601927553' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:53:10 compute-0 ceph-mon[74360]: osdmap e278: 3 total, 3 up, 3 in
Jan 20 14:53:10 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e278 do_prune osdmap full prune enabled
Jan 20 14:53:10 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e279 e279: 3 total, 3 up, 3 in
Jan 20 14:53:10 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e279: 3 total, 3 up, 3 in
Jan 20 14:53:10 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:53:10 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:53:10 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:53:10.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:53:11 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:53:11 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:53:11 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:53:10.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:53:11 compute-0 ceph-mon[74360]: pgmap v1990: 321 pgs: 2 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 309 active+clean; 775 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.9 MiB/s rd, 6.9 MiB/s wr, 237 op/s
Jan 20 14:53:11 compute-0 ceph-mon[74360]: osdmap e279: 3 total, 3 up, 3 in
Jan 20 14:53:11 compute-0 sudo[320950]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:53:11 compute-0 sudo[320950]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:53:11 compute-0 sudo[320950]: pam_unix(sudo:session): session closed for user root
Jan 20 14:53:11 compute-0 sudo[320975]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:53:11 compute-0 sudo[320975]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:53:11 compute-0 sudo[320975]: pam_unix(sudo:session): session closed for user root
Jan 20 14:53:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 14:53:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:53:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 20 14:53:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:53:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.009671658349618462 of space, bias 1.0, pg target 2.9014975048855387 quantized to 32 (current 32)
Jan 20 14:53:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:53:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0006953895635585334 of space, bias 1.0, pg target 0.20722608994044295 quantized to 32 (current 32)
Jan 20 14:53:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:53:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:53:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:53:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0103430564279732 of space, bias 1.0, pg target 3.0822308155360134 quantized to 32 (current 32)
Jan 20 14:53:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:53:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001715754699423041 quantized to 16 (current 32)
Jan 20 14:53:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:53:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:53:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:53:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021446933742788013 quantized to 32 (current 32)
Jan 20 14:53:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:53:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018229893681369813 quantized to 32 (current 32)
Jan 20 14:53:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:53:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:53:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:53:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00042893867485576027 quantized to 32 (current 32)
Jan 20 14:53:11 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1992: 321 pgs: 2 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 309 active+clean; 769 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 102 KiB/s rd, 2.8 MiB/s wr, 153 op/s
Jan 20 14:53:12 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/1108883090' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:53:12 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:53:12 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:53:12 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:53:12.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:53:12 compute-0 nova_compute[250018]: 2026-01-20 14:53:12.911 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:53:13 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:53:13 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:53:13 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:53:13.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:53:13 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e279 do_prune osdmap full prune enabled
Jan 20 14:53:13 compute-0 ceph-mon[74360]: pgmap v1992: 321 pgs: 2 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 309 active+clean; 769 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 102 KiB/s rd, 2.8 MiB/s wr, 153 op/s
Jan 20 14:53:13 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e280 e280: 3 total, 3 up, 3 in
Jan 20 14:53:13 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e280: 3 total, 3 up, 3 in
Jan 20 14:53:13 compute-0 nova_compute[250018]: 2026-01-20 14:53:13.351 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:53:13 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 20 14:53:13 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2714408348' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:53:13 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 20 14:53:13 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2714408348' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:53:13 compute-0 nova_compute[250018]: 2026-01-20 14:53:13.728 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:53:13 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:53:13.729 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=38, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=37) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:53:13 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:53:13.731 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 14:53:13 compute-0 sshd-session[321001]: Invalid user postgres from 157.245.78.139 port 41490
Jan 20 14:53:13 compute-0 sshd-session[321001]: Connection closed by invalid user postgres 157.245.78.139 port 41490 [preauth]
Jan 20 14:53:13 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1994: 321 pgs: 321 active+clean; 719 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 113 KiB/s rd, 3.6 MiB/s wr, 130 op/s
Jan 20 14:53:14 compute-0 ceph-mon[74360]: osdmap e280: 3 total, 3 up, 3 in
Jan 20 14:53:14 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/2714408348' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:53:14 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/2714408348' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:53:14 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:53:14 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e280 do_prune osdmap full prune enabled
Jan 20 14:53:14 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e281 e281: 3 total, 3 up, 3 in
Jan 20 14:53:14 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e281: 3 total, 3 up, 3 in
Jan 20 14:53:14 compute-0 nova_compute[250018]: 2026-01-20 14:53:14.730 250022 DEBUG oslo_concurrency.lockutils [None req-9f49e52e-c8e8-4a92-b900-3f705e87170a 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] Acquiring lock "d23ddbd4-8b5d-4bf5-a02d-3fb69b940770" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:53:14 compute-0 nova_compute[250018]: 2026-01-20 14:53:14.731 250022 DEBUG oslo_concurrency.lockutils [None req-9f49e52e-c8e8-4a92-b900-3f705e87170a 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] Lock "d23ddbd4-8b5d-4bf5-a02d-3fb69b940770" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:53:14 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:53:14.733 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=367c1a2c-b16a-4828-ab5a-626bb50023b4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '38'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:53:14 compute-0 nova_compute[250018]: 2026-01-20 14:53:14.766 250022 DEBUG nova.compute.manager [None req-9f49e52e-c8e8-4a92-b900-3f705e87170a 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] [instance: d23ddbd4-8b5d-4bf5-a02d-3fb69b940770] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 14:53:14 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:53:14 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:53:14 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:53:14.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:53:14 compute-0 nova_compute[250018]: 2026-01-20 14:53:14.867 250022 DEBUG oslo_concurrency.lockutils [None req-9f49e52e-c8e8-4a92-b900-3f705e87170a 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:53:14 compute-0 nova_compute[250018]: 2026-01-20 14:53:14.868 250022 DEBUG oslo_concurrency.lockutils [None req-9f49e52e-c8e8-4a92-b900-3f705e87170a 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:53:14 compute-0 nova_compute[250018]: 2026-01-20 14:53:14.874 250022 DEBUG nova.virt.hardware [None req-9f49e52e-c8e8-4a92-b900-3f705e87170a 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 14:53:14 compute-0 nova_compute[250018]: 2026-01-20 14:53:14.874 250022 INFO nova.compute.claims [None req-9f49e52e-c8e8-4a92-b900-3f705e87170a 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] [instance: d23ddbd4-8b5d-4bf5-a02d-3fb69b940770] Claim successful on node compute-0.ctlplane.example.com
Jan 20 14:53:15 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:53:15 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:53:15 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:53:15.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:53:15 compute-0 nova_compute[250018]: 2026-01-20 14:53:15.026 250022 DEBUG oslo_concurrency.processutils [None req-9f49e52e-c8e8-4a92-b900-3f705e87170a 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:53:15 compute-0 ceph-mon[74360]: pgmap v1994: 321 pgs: 321 active+clean; 719 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 113 KiB/s rd, 3.6 MiB/s wr, 130 op/s
Jan 20 14:53:15 compute-0 ceph-mon[74360]: osdmap e281: 3 total, 3 up, 3 in
Jan 20 14:53:15 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:53:15 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/821572000' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:53:15 compute-0 nova_compute[250018]: 2026-01-20 14:53:15.501 250022 DEBUG oslo_concurrency.processutils [None req-9f49e52e-c8e8-4a92-b900-3f705e87170a 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:53:15 compute-0 nova_compute[250018]: 2026-01-20 14:53:15.509 250022 DEBUG nova.compute.provider_tree [None req-9f49e52e-c8e8-4a92-b900-3f705e87170a 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:53:15 compute-0 nova_compute[250018]: 2026-01-20 14:53:15.526 250022 DEBUG nova.scheduler.client.report [None req-9f49e52e-c8e8-4a92-b900-3f705e87170a 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:53:15 compute-0 nova_compute[250018]: 2026-01-20 14:53:15.573 250022 DEBUG oslo_concurrency.lockutils [None req-9f49e52e-c8e8-4a92-b900-3f705e87170a 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.705s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:53:15 compute-0 nova_compute[250018]: 2026-01-20 14:53:15.573 250022 DEBUG nova.compute.manager [None req-9f49e52e-c8e8-4a92-b900-3f705e87170a 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] [instance: d23ddbd4-8b5d-4bf5-a02d-3fb69b940770] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 14:53:15 compute-0 nova_compute[250018]: 2026-01-20 14:53:15.642 250022 DEBUG nova.compute.manager [None req-9f49e52e-c8e8-4a92-b900-3f705e87170a 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] [instance: d23ddbd4-8b5d-4bf5-a02d-3fb69b940770] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 20 14:53:15 compute-0 nova_compute[250018]: 2026-01-20 14:53:15.643 250022 DEBUG nova.network.neutron [None req-9f49e52e-c8e8-4a92-b900-3f705e87170a 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] [instance: d23ddbd4-8b5d-4bf5-a02d-3fb69b940770] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 14:53:15 compute-0 nova_compute[250018]: 2026-01-20 14:53:15.671 250022 INFO nova.virt.libvirt.driver [None req-9f49e52e-c8e8-4a92-b900-3f705e87170a 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] [instance: d23ddbd4-8b5d-4bf5-a02d-3fb69b940770] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 14:53:15 compute-0 nova_compute[250018]: 2026-01-20 14:53:15.705 250022 DEBUG nova.compute.manager [None req-9f49e52e-c8e8-4a92-b900-3f705e87170a 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] [instance: d23ddbd4-8b5d-4bf5-a02d-3fb69b940770] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 14:53:15 compute-0 nova_compute[250018]: 2026-01-20 14:53:15.831 250022 INFO nova.virt.block_device [None req-9f49e52e-c8e8-4a92-b900-3f705e87170a 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] [instance: d23ddbd4-8b5d-4bf5-a02d-3fb69b940770] Booting with volume b687fb44-6160-427b-b91a-091715876a58 at /dev/vda
Jan 20 14:53:15 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1996: 321 pgs: 321 active+clean; 670 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 4.6 MiB/s wr, 287 op/s
Jan 20 14:53:16 compute-0 nova_compute[250018]: 2026-01-20 14:53:16.029 250022 DEBUG nova.policy [None req-9f49e52e-c8e8-4a92-b900-3f705e87170a 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '15c455b119784bb9abe8e4774dadd01e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0ad54030e5cc477e939e073b52024ec4', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 20 14:53:16 compute-0 nova_compute[250018]: 2026-01-20 14:53:16.042 250022 DEBUG os_brick.utils [None req-9f49e52e-c8e8-4a92-b900-3f705e87170a 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Jan 20 14:53:16 compute-0 nova_compute[250018]: 2026-01-20 14:53:16.044 268150 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:53:16 compute-0 nova_compute[250018]: 2026-01-20 14:53:16.065 268150 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.021s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:53:16 compute-0 nova_compute[250018]: 2026-01-20 14:53:16.066 268150 DEBUG oslo.privsep.daemon [-] privsep: reply[bb9593e1-85c3-4521-9ae3-bf623d485133]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:53:16 compute-0 nova_compute[250018]: 2026-01-20 14:53:16.067 268150 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:53:16 compute-0 nova_compute[250018]: 2026-01-20 14:53:16.078 268150 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:53:16 compute-0 nova_compute[250018]: 2026-01-20 14:53:16.078 268150 DEBUG oslo.privsep.daemon [-] privsep: reply[49e197d3-13ba-4ddc-b0c2-b90773fc3a64]: (4, ('InitiatorName=iqn.1994-05.com.redhat:228389a1f17e', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:53:16 compute-0 nova_compute[250018]: 2026-01-20 14:53:16.082 268150 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:53:16 compute-0 nova_compute[250018]: 2026-01-20 14:53:16.096 268150 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:53:16 compute-0 nova_compute[250018]: 2026-01-20 14:53:16.097 268150 DEBUG oslo.privsep.daemon [-] privsep: reply[a2f3a18f-5322-4bb5-98e3-a0c6c4c3b584]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:53:16 compute-0 nova_compute[250018]: 2026-01-20 14:53:16.099 268150 DEBUG oslo.privsep.daemon [-] privsep: reply[c94db3e9-3ec9-4d20-b271-86eb6cf1207e]: (4, '35085f33-1a27-41e3-805d-02c7ac6a1d7f') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:53:16 compute-0 nova_compute[250018]: 2026-01-20 14:53:16.100 250022 DEBUG oslo_concurrency.processutils [None req-9f49e52e-c8e8-4a92-b900-3f705e87170a 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:53:16 compute-0 nova_compute[250018]: 2026-01-20 14:53:16.147 250022 DEBUG oslo_concurrency.processutils [None req-9f49e52e-c8e8-4a92-b900-3f705e87170a 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] CMD "nvme version" returned: 0 in 0.047s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:53:16 compute-0 nova_compute[250018]: 2026-01-20 14:53:16.150 250022 DEBUG os_brick.initiator.connectors.lightos [None req-9f49e52e-c8e8-4a92-b900-3f705e87170a 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Jan 20 14:53:16 compute-0 nova_compute[250018]: 2026-01-20 14:53:16.150 250022 DEBUG os_brick.initiator.connectors.lightos [None req-9f49e52e-c8e8-4a92-b900-3f705e87170a 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Jan 20 14:53:16 compute-0 nova_compute[250018]: 2026-01-20 14:53:16.150 250022 DEBUG os_brick.initiator.connectors.lightos [None req-9f49e52e-c8e8-4a92-b900-3f705e87170a 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Jan 20 14:53:16 compute-0 nova_compute[250018]: 2026-01-20 14:53:16.151 250022 DEBUG os_brick.utils [None req-9f49e52e-c8e8-4a92-b900-3f705e87170a 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] <== get_connector_properties: return (108ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:228389a1f17e', 'do_local_attach': False, 'nvme_hostid': '5350774e-8b5e-4dba-80a9-92d405981c1d', 'system uuid': '35085f33-1a27-41e3-805d-02c7ac6a1d7f', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Jan 20 14:53:16 compute-0 nova_compute[250018]: 2026-01-20 14:53:16.151 250022 DEBUG nova.virt.block_device [None req-9f49e52e-c8e8-4a92-b900-3f705e87170a 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] [instance: d23ddbd4-8b5d-4bf5-a02d-3fb69b940770] Updating existing volume attachment record: 3fc4aa11-5b94-4c96-a371-59de6fc196c2 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Jan 20 14:53:16 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/821572000' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:53:16 compute-0 ceph-mon[74360]: pgmap v1996: 321 pgs: 321 active+clean; 670 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 4.6 MiB/s wr, 287 op/s
Jan 20 14:53:16 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:53:16 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:53:16 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:53:16.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:53:17 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:53:17 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:53:17 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:53:17.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:53:17 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/720863058' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:53:17 compute-0 nova_compute[250018]: 2026-01-20 14:53:17.407 250022 DEBUG nova.network.neutron [None req-9f49e52e-c8e8-4a92-b900-3f705e87170a 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] [instance: d23ddbd4-8b5d-4bf5-a02d-3fb69b940770] Successfully created port: 3dbcd2bc-10d2-4550-a4f5-4c4c9effa54b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 20 14:53:17 compute-0 nova_compute[250018]: 2026-01-20 14:53:17.729 250022 DEBUG nova.compute.manager [None req-9f49e52e-c8e8-4a92-b900-3f705e87170a 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] [instance: d23ddbd4-8b5d-4bf5-a02d-3fb69b940770] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 14:53:17 compute-0 nova_compute[250018]: 2026-01-20 14:53:17.731 250022 DEBUG nova.virt.libvirt.driver [None req-9f49e52e-c8e8-4a92-b900-3f705e87170a 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] [instance: d23ddbd4-8b5d-4bf5-a02d-3fb69b940770] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 14:53:17 compute-0 nova_compute[250018]: 2026-01-20 14:53:17.731 250022 INFO nova.virt.libvirt.driver [None req-9f49e52e-c8e8-4a92-b900-3f705e87170a 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] [instance: d23ddbd4-8b5d-4bf5-a02d-3fb69b940770] Creating image(s)
Jan 20 14:53:17 compute-0 nova_compute[250018]: 2026-01-20 14:53:17.731 250022 DEBUG nova.virt.libvirt.driver [None req-9f49e52e-c8e8-4a92-b900-3f705e87170a 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] [instance: d23ddbd4-8b5d-4bf5-a02d-3fb69b940770] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Jan 20 14:53:17 compute-0 nova_compute[250018]: 2026-01-20 14:53:17.732 250022 DEBUG nova.virt.libvirt.driver [None req-9f49e52e-c8e8-4a92-b900-3f705e87170a 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] [instance: d23ddbd4-8b5d-4bf5-a02d-3fb69b940770] Ensure instance console log exists: /var/lib/nova/instances/d23ddbd4-8b5d-4bf5-a02d-3fb69b940770/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 14:53:17 compute-0 nova_compute[250018]: 2026-01-20 14:53:17.732 250022 DEBUG oslo_concurrency.lockutils [None req-9f49e52e-c8e8-4a92-b900-3f705e87170a 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:53:17 compute-0 nova_compute[250018]: 2026-01-20 14:53:17.732 250022 DEBUG oslo_concurrency.lockutils [None req-9f49e52e-c8e8-4a92-b900-3f705e87170a 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:53:17 compute-0 nova_compute[250018]: 2026-01-20 14:53:17.732 250022 DEBUG oslo_concurrency.lockutils [None req-9f49e52e-c8e8-4a92-b900-3f705e87170a 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:53:17 compute-0 nova_compute[250018]: 2026-01-20 14:53:17.914 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:53:17 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1997: 321 pgs: 321 active+clean; 642 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 3.8 MiB/s wr, 255 op/s
Jan 20 14:53:18 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/894774041' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:53:18 compute-0 ceph-mon[74360]: pgmap v1997: 321 pgs: 321 active+clean; 642 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 3.8 MiB/s wr, 255 op/s
Jan 20 14:53:18 compute-0 nova_compute[250018]: 2026-01-20 14:53:18.353 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:53:18 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:53:18 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:53:18 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:53:18.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:53:18 compute-0 nova_compute[250018]: 2026-01-20 14:53:18.960 250022 DEBUG nova.network.neutron [None req-9f49e52e-c8e8-4a92-b900-3f705e87170a 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] [instance: d23ddbd4-8b5d-4bf5-a02d-3fb69b940770] Successfully updated port: 3dbcd2bc-10d2-4550-a4f5-4c4c9effa54b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 20 14:53:18 compute-0 nova_compute[250018]: 2026-01-20 14:53:18.981 250022 DEBUG oslo_concurrency.lockutils [None req-9f49e52e-c8e8-4a92-b900-3f705e87170a 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] Acquiring lock "refresh_cache-d23ddbd4-8b5d-4bf5-a02d-3fb69b940770" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:53:18 compute-0 nova_compute[250018]: 2026-01-20 14:53:18.982 250022 DEBUG oslo_concurrency.lockutils [None req-9f49e52e-c8e8-4a92-b900-3f705e87170a 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] Acquired lock "refresh_cache-d23ddbd4-8b5d-4bf5-a02d-3fb69b940770" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:53:18 compute-0 nova_compute[250018]: 2026-01-20 14:53:18.982 250022 DEBUG nova.network.neutron [None req-9f49e52e-c8e8-4a92-b900-3f705e87170a 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] [instance: d23ddbd4-8b5d-4bf5-a02d-3fb69b940770] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:53:19 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:53:19 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:53:19 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:53:19.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:53:19 compute-0 nova_compute[250018]: 2026-01-20 14:53:19.064 250022 DEBUG nova.compute.manager [req-3eee1d85-ce86-465f-86d1-9583dda651c8 req-0fa7d9e3-431e-4152-9536-15343d9a8c68 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d23ddbd4-8b5d-4bf5-a02d-3fb69b940770] Received event network-changed-3dbcd2bc-10d2-4550-a4f5-4c4c9effa54b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:53:19 compute-0 nova_compute[250018]: 2026-01-20 14:53:19.065 250022 DEBUG nova.compute.manager [req-3eee1d85-ce86-465f-86d1-9583dda651c8 req-0fa7d9e3-431e-4152-9536-15343d9a8c68 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d23ddbd4-8b5d-4bf5-a02d-3fb69b940770] Refreshing instance network info cache due to event network-changed-3dbcd2bc-10d2-4550-a4f5-4c4c9effa54b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:53:19 compute-0 nova_compute[250018]: 2026-01-20 14:53:19.065 250022 DEBUG oslo_concurrency.lockutils [req-3eee1d85-ce86-465f-86d1-9583dda651c8 req-0fa7d9e3-431e-4152-9536-15343d9a8c68 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-d23ddbd4-8b5d-4bf5-a02d-3fb69b940770" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:53:19 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/1212389849' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:53:19 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e281 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:53:19 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e281 do_prune osdmap full prune enabled
Jan 20 14:53:19 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e282 e282: 3 total, 3 up, 3 in
Jan 20 14:53:19 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e282: 3 total, 3 up, 3 in
Jan 20 14:53:19 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v1999: 321 pgs: 321 active+clean; 642 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 3.1 MiB/s wr, 234 op/s
Jan 20 14:53:19 compute-0 nova_compute[250018]: 2026-01-20 14:53:19.957 250022 DEBUG nova.network.neutron [None req-9f49e52e-c8e8-4a92-b900-3f705e87170a 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] [instance: d23ddbd4-8b5d-4bf5-a02d-3fb69b940770] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 14:53:20 compute-0 ceph-mon[74360]: osdmap e282: 3 total, 3 up, 3 in
Jan 20 14:53:20 compute-0 ceph-mon[74360]: pgmap v1999: 321 pgs: 321 active+clean; 642 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 3.1 MiB/s wr, 234 op/s
Jan 20 14:53:20 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:53:20 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:53:20 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:53:20.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:53:21 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:53:21 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:53:21 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:53:21.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:53:21 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2000: 321 pgs: 321 active+clean; 642 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 2.7 MiB/s wr, 207 op/s
Jan 20 14:53:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:53:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:53:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:53:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:53:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:53:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:53:22 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:53:22 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:53:22 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:53:22.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:53:22 compute-0 nova_compute[250018]: 2026-01-20 14:53:22.916 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:53:23 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:53:23 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:53:23 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:53:23.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:53:23 compute-0 ceph-mon[74360]: pgmap v2000: 321 pgs: 321 active+clean; 642 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 2.7 MiB/s wr, 207 op/s
Jan 20 14:53:23 compute-0 nova_compute[250018]: 2026-01-20 14:53:23.087 250022 DEBUG nova.network.neutron [None req-9f49e52e-c8e8-4a92-b900-3f705e87170a 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] [instance: d23ddbd4-8b5d-4bf5-a02d-3fb69b940770] Updating instance_info_cache with network_info: [{"id": "3dbcd2bc-10d2-4550-a4f5-4c4c9effa54b", "address": "fa:16:3e:f9:1a:06", "network": {"id": "16cc2fb6-cda7-431a-ae22-fc6920fbbe4e", "bridge": "br-int", "label": "tempest-ServerActionsV293TestJSON-1739563670-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0ad54030e5cc477e939e073b52024ec4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3dbcd2bc-10", "ovs_interfaceid": "3dbcd2bc-10d2-4550-a4f5-4c4c9effa54b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:53:23 compute-0 nova_compute[250018]: 2026-01-20 14:53:23.122 250022 DEBUG oslo_concurrency.lockutils [None req-9f49e52e-c8e8-4a92-b900-3f705e87170a 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] Releasing lock "refresh_cache-d23ddbd4-8b5d-4bf5-a02d-3fb69b940770" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:53:23 compute-0 nova_compute[250018]: 2026-01-20 14:53:23.122 250022 DEBUG nova.compute.manager [None req-9f49e52e-c8e8-4a92-b900-3f705e87170a 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] [instance: d23ddbd4-8b5d-4bf5-a02d-3fb69b940770] Instance network_info: |[{"id": "3dbcd2bc-10d2-4550-a4f5-4c4c9effa54b", "address": "fa:16:3e:f9:1a:06", "network": {"id": "16cc2fb6-cda7-431a-ae22-fc6920fbbe4e", "bridge": "br-int", "label": "tempest-ServerActionsV293TestJSON-1739563670-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0ad54030e5cc477e939e073b52024ec4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3dbcd2bc-10", "ovs_interfaceid": "3dbcd2bc-10d2-4550-a4f5-4c4c9effa54b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 20 14:53:23 compute-0 nova_compute[250018]: 2026-01-20 14:53:23.123 250022 DEBUG oslo_concurrency.lockutils [req-3eee1d85-ce86-465f-86d1-9583dda651c8 req-0fa7d9e3-431e-4152-9536-15343d9a8c68 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-d23ddbd4-8b5d-4bf5-a02d-3fb69b940770" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:53:23 compute-0 nova_compute[250018]: 2026-01-20 14:53:23.123 250022 DEBUG nova.network.neutron [req-3eee1d85-ce86-465f-86d1-9583dda651c8 req-0fa7d9e3-431e-4152-9536-15343d9a8c68 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d23ddbd4-8b5d-4bf5-a02d-3fb69b940770] Refreshing network info cache for port 3dbcd2bc-10d2-4550-a4f5-4c4c9effa54b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:53:23 compute-0 nova_compute[250018]: 2026-01-20 14:53:23.127 250022 DEBUG nova.virt.libvirt.driver [None req-9f49e52e-c8e8-4a92-b900-3f705e87170a 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] [instance: d23ddbd4-8b5d-4bf5-a02d-3fb69b940770] Start _get_guest_xml network_info=[{"id": "3dbcd2bc-10d2-4550-a4f5-4c4c9effa54b", "address": "fa:16:3e:f9:1a:06", "network": {"id": "16cc2fb6-cda7-431a-ae22-fc6920fbbe4e", "bridge": "br-int", "label": "tempest-ServerActionsV293TestJSON-1739563670-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0ad54030e5cc477e939e073b52024ec4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3dbcd2bc-10", "ovs_interfaceid": "3dbcd2bc-10d2-4550-a4f5-4c4c9effa54b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'mount_device': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'attachment_id': '3fc4aa11-5b94-4c96-a371-59de6fc196c2', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-b687fb44-6160-427b-b91a-091715876a58', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'b687fb44-6160-427b-b91a-091715876a58', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': 'd23ddbd4-8b5d-4bf5-a02d-3fb69b940770', 'attached_at': '', 'detached_at': '', 'volume_id': 'b687fb44-6160-427b-b91a-091715876a58', 'serial': 'b687fb44-6160-427b-b91a-091715876a58'}, 'disk_bus': 'virtio', 'guest_format': None, 'delete_on_termination': True, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 14:53:23 compute-0 nova_compute[250018]: 2026-01-20 14:53:23.134 250022 WARNING nova.virt.libvirt.driver [None req-9f49e52e-c8e8-4a92-b900-3f705e87170a 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:53:23 compute-0 nova_compute[250018]: 2026-01-20 14:53:23.139 250022 DEBUG nova.virt.libvirt.host [None req-9f49e52e-c8e8-4a92-b900-3f705e87170a 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 14:53:23 compute-0 nova_compute[250018]: 2026-01-20 14:53:23.140 250022 DEBUG nova.virt.libvirt.host [None req-9f49e52e-c8e8-4a92-b900-3f705e87170a 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 14:53:23 compute-0 nova_compute[250018]: 2026-01-20 14:53:23.148 250022 DEBUG nova.virt.libvirt.host [None req-9f49e52e-c8e8-4a92-b900-3f705e87170a 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 14:53:23 compute-0 nova_compute[250018]: 2026-01-20 14:53:23.149 250022 DEBUG nova.virt.libvirt.host [None req-9f49e52e-c8e8-4a92-b900-3f705e87170a 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 14:53:23 compute-0 nova_compute[250018]: 2026-01-20 14:53:23.150 250022 DEBUG nova.virt.libvirt.driver [None req-9f49e52e-c8e8-4a92-b900-3f705e87170a 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 14:53:23 compute-0 nova_compute[250018]: 2026-01-20 14:53:23.150 250022 DEBUG nova.virt.hardware [None req-9f49e52e-c8e8-4a92-b900-3f705e87170a 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 14:53:23 compute-0 nova_compute[250018]: 2026-01-20 14:53:23.151 250022 DEBUG nova.virt.hardware [None req-9f49e52e-c8e8-4a92-b900-3f705e87170a 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 14:53:23 compute-0 nova_compute[250018]: 2026-01-20 14:53:23.151 250022 DEBUG nova.virt.hardware [None req-9f49e52e-c8e8-4a92-b900-3f705e87170a 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 14:53:23 compute-0 nova_compute[250018]: 2026-01-20 14:53:23.151 250022 DEBUG nova.virt.hardware [None req-9f49e52e-c8e8-4a92-b900-3f705e87170a 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 14:53:23 compute-0 nova_compute[250018]: 2026-01-20 14:53:23.152 250022 DEBUG nova.virt.hardware [None req-9f49e52e-c8e8-4a92-b900-3f705e87170a 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 14:53:23 compute-0 nova_compute[250018]: 2026-01-20 14:53:23.152 250022 DEBUG nova.virt.hardware [None req-9f49e52e-c8e8-4a92-b900-3f705e87170a 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 14:53:23 compute-0 nova_compute[250018]: 2026-01-20 14:53:23.152 250022 DEBUG nova.virt.hardware [None req-9f49e52e-c8e8-4a92-b900-3f705e87170a 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 14:53:23 compute-0 nova_compute[250018]: 2026-01-20 14:53:23.153 250022 DEBUG nova.virt.hardware [None req-9f49e52e-c8e8-4a92-b900-3f705e87170a 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 14:53:23 compute-0 nova_compute[250018]: 2026-01-20 14:53:23.153 250022 DEBUG nova.virt.hardware [None req-9f49e52e-c8e8-4a92-b900-3f705e87170a 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 14:53:23 compute-0 nova_compute[250018]: 2026-01-20 14:53:23.153 250022 DEBUG nova.virt.hardware [None req-9f49e52e-c8e8-4a92-b900-3f705e87170a 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 14:53:23 compute-0 nova_compute[250018]: 2026-01-20 14:53:23.153 250022 DEBUG nova.virt.hardware [None req-9f49e52e-c8e8-4a92-b900-3f705e87170a 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 14:53:23 compute-0 nova_compute[250018]: 2026-01-20 14:53:23.193 250022 DEBUG nova.storage.rbd_utils [None req-9f49e52e-c8e8-4a92-b900-3f705e87170a 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] rbd image d23ddbd4-8b5d-4bf5-a02d-3fb69b940770_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:53:23 compute-0 nova_compute[250018]: 2026-01-20 14:53:23.198 250022 DEBUG oslo_concurrency.processutils [None req-9f49e52e-c8e8-4a92-b900-3f705e87170a 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:53:23 compute-0 nova_compute[250018]: 2026-01-20 14:53:23.357 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:53:23 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:53:23 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1554013804' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:53:23 compute-0 nova_compute[250018]: 2026-01-20 14:53:23.718 250022 DEBUG oslo_concurrency.processutils [None req-9f49e52e-c8e8-4a92-b900-3f705e87170a 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.520s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:53:23 compute-0 nova_compute[250018]: 2026-01-20 14:53:23.771 250022 DEBUG nova.virt.libvirt.vif [None req-9f49e52e-c8e8-4a92-b900-3f705e87170a 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:53:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-ServerActionsV293TestJSON-server-278330687',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionsv293testjson-server-278330687',id=121,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBO86kZNzDPCASADxiWGymA6UJfVUXa5anZX6myQUgRsg5kfeB7NCF6546KW3Ot1GqbB4r4cbuWZGKPoJrxymBlCt9uzHjI477OxSlIci+EHESOU35e8Xbs8CtBj17r8ipw==',key_name='tempest-keypair-286575342',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0ad54030e5cc477e939e073b52024ec4',ramdisk_id='',reservation_id='r-c0xrc8pm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name
='tempest-ServerActionsV293TestJSON-747539193',owner_user_name='tempest-ServerActionsV293TestJSON-747539193-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:53:15Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='15c455b119784bb9abe8e4774dadd01e',uuid=d23ddbd4-8b5d-4bf5-a02d-3fb69b940770,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3dbcd2bc-10d2-4550-a4f5-4c4c9effa54b", "address": "fa:16:3e:f9:1a:06", "network": {"id": "16cc2fb6-cda7-431a-ae22-fc6920fbbe4e", "bridge": "br-int", "label": "tempest-ServerActionsV293TestJSON-1739563670-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0ad54030e5cc477e939e073b52024ec4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3dbcd2bc-10", "ovs_interfaceid": "3dbcd2bc-10d2-4550-a4f5-4c4c9effa54b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 14:53:23 compute-0 nova_compute[250018]: 2026-01-20 14:53:23.772 250022 DEBUG nova.network.os_vif_util [None req-9f49e52e-c8e8-4a92-b900-3f705e87170a 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] Converting VIF {"id": "3dbcd2bc-10d2-4550-a4f5-4c4c9effa54b", "address": "fa:16:3e:f9:1a:06", "network": {"id": "16cc2fb6-cda7-431a-ae22-fc6920fbbe4e", "bridge": "br-int", "label": "tempest-ServerActionsV293TestJSON-1739563670-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0ad54030e5cc477e939e073b52024ec4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3dbcd2bc-10", "ovs_interfaceid": "3dbcd2bc-10d2-4550-a4f5-4c4c9effa54b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:53:23 compute-0 nova_compute[250018]: 2026-01-20 14:53:23.774 250022 DEBUG nova.network.os_vif_util [None req-9f49e52e-c8e8-4a92-b900-3f705e87170a 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f9:1a:06,bridge_name='br-int',has_traffic_filtering=True,id=3dbcd2bc-10d2-4550-a4f5-4c4c9effa54b,network=Network(16cc2fb6-cda7-431a-ae22-fc6920fbbe4e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3dbcd2bc-10') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:53:23 compute-0 nova_compute[250018]: 2026-01-20 14:53:23.776 250022 DEBUG nova.objects.instance [None req-9f49e52e-c8e8-4a92-b900-3f705e87170a 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] Lazy-loading 'pci_devices' on Instance uuid d23ddbd4-8b5d-4bf5-a02d-3fb69b940770 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:53:23 compute-0 nova_compute[250018]: 2026-01-20 14:53:23.797 250022 DEBUG nova.virt.libvirt.driver [None req-9f49e52e-c8e8-4a92-b900-3f705e87170a 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] [instance: d23ddbd4-8b5d-4bf5-a02d-3fb69b940770] End _get_guest_xml xml=<domain type="kvm">
Jan 20 14:53:23 compute-0 nova_compute[250018]:   <uuid>d23ddbd4-8b5d-4bf5-a02d-3fb69b940770</uuid>
Jan 20 14:53:23 compute-0 nova_compute[250018]:   <name>instance-00000079</name>
Jan 20 14:53:23 compute-0 nova_compute[250018]:   <memory>131072</memory>
Jan 20 14:53:23 compute-0 nova_compute[250018]:   <vcpu>1</vcpu>
Jan 20 14:53:23 compute-0 nova_compute[250018]:   <metadata>
Jan 20 14:53:23 compute-0 nova_compute[250018]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 14:53:23 compute-0 nova_compute[250018]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 14:53:23 compute-0 nova_compute[250018]:       <nova:name>tempest-ServerActionsV293TestJSON-server-278330687</nova:name>
Jan 20 14:53:23 compute-0 nova_compute[250018]:       <nova:creationTime>2026-01-20 14:53:23</nova:creationTime>
Jan 20 14:53:23 compute-0 nova_compute[250018]:       <nova:flavor name="m1.nano">
Jan 20 14:53:23 compute-0 nova_compute[250018]:         <nova:memory>128</nova:memory>
Jan 20 14:53:23 compute-0 nova_compute[250018]:         <nova:disk>1</nova:disk>
Jan 20 14:53:23 compute-0 nova_compute[250018]:         <nova:swap>0</nova:swap>
Jan 20 14:53:23 compute-0 nova_compute[250018]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 14:53:23 compute-0 nova_compute[250018]:         <nova:vcpus>1</nova:vcpus>
Jan 20 14:53:23 compute-0 nova_compute[250018]:       </nova:flavor>
Jan 20 14:53:23 compute-0 nova_compute[250018]:       <nova:owner>
Jan 20 14:53:23 compute-0 nova_compute[250018]:         <nova:user uuid="15c455b119784bb9abe8e4774dadd01e">tempest-ServerActionsV293TestJSON-747539193-project-member</nova:user>
Jan 20 14:53:23 compute-0 nova_compute[250018]:         <nova:project uuid="0ad54030e5cc477e939e073b52024ec4">tempest-ServerActionsV293TestJSON-747539193</nova:project>
Jan 20 14:53:23 compute-0 nova_compute[250018]:       </nova:owner>
Jan 20 14:53:23 compute-0 nova_compute[250018]:       <nova:ports>
Jan 20 14:53:23 compute-0 nova_compute[250018]:         <nova:port uuid="3dbcd2bc-10d2-4550-a4f5-4c4c9effa54b">
Jan 20 14:53:23 compute-0 nova_compute[250018]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Jan 20 14:53:23 compute-0 nova_compute[250018]:         </nova:port>
Jan 20 14:53:23 compute-0 nova_compute[250018]:       </nova:ports>
Jan 20 14:53:23 compute-0 nova_compute[250018]:     </nova:instance>
Jan 20 14:53:23 compute-0 nova_compute[250018]:   </metadata>
Jan 20 14:53:23 compute-0 nova_compute[250018]:   <sysinfo type="smbios">
Jan 20 14:53:23 compute-0 nova_compute[250018]:     <system>
Jan 20 14:53:23 compute-0 nova_compute[250018]:       <entry name="manufacturer">RDO</entry>
Jan 20 14:53:23 compute-0 nova_compute[250018]:       <entry name="product">OpenStack Compute</entry>
Jan 20 14:53:23 compute-0 nova_compute[250018]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 14:53:23 compute-0 nova_compute[250018]:       <entry name="serial">d23ddbd4-8b5d-4bf5-a02d-3fb69b940770</entry>
Jan 20 14:53:23 compute-0 nova_compute[250018]:       <entry name="uuid">d23ddbd4-8b5d-4bf5-a02d-3fb69b940770</entry>
Jan 20 14:53:23 compute-0 nova_compute[250018]:       <entry name="family">Virtual Machine</entry>
Jan 20 14:53:23 compute-0 nova_compute[250018]:     </system>
Jan 20 14:53:23 compute-0 nova_compute[250018]:   </sysinfo>
Jan 20 14:53:23 compute-0 nova_compute[250018]:   <os>
Jan 20 14:53:23 compute-0 nova_compute[250018]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 14:53:23 compute-0 nova_compute[250018]:     <boot dev="hd"/>
Jan 20 14:53:23 compute-0 nova_compute[250018]:     <smbios mode="sysinfo"/>
Jan 20 14:53:23 compute-0 nova_compute[250018]:   </os>
Jan 20 14:53:23 compute-0 nova_compute[250018]:   <features>
Jan 20 14:53:23 compute-0 nova_compute[250018]:     <acpi/>
Jan 20 14:53:23 compute-0 nova_compute[250018]:     <apic/>
Jan 20 14:53:23 compute-0 nova_compute[250018]:     <vmcoreinfo/>
Jan 20 14:53:23 compute-0 nova_compute[250018]:   </features>
Jan 20 14:53:23 compute-0 nova_compute[250018]:   <clock offset="utc">
Jan 20 14:53:23 compute-0 nova_compute[250018]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 14:53:23 compute-0 nova_compute[250018]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 14:53:23 compute-0 nova_compute[250018]:     <timer name="hpet" present="no"/>
Jan 20 14:53:23 compute-0 nova_compute[250018]:   </clock>
Jan 20 14:53:23 compute-0 nova_compute[250018]:   <cpu mode="custom" match="exact">
Jan 20 14:53:23 compute-0 nova_compute[250018]:     <model>Nehalem</model>
Jan 20 14:53:23 compute-0 nova_compute[250018]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 14:53:23 compute-0 nova_compute[250018]:   </cpu>
Jan 20 14:53:23 compute-0 nova_compute[250018]:   <devices>
Jan 20 14:53:23 compute-0 nova_compute[250018]:     <disk type="network" device="cdrom">
Jan 20 14:53:23 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 14:53:23 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/d23ddbd4-8b5d-4bf5-a02d-3fb69b940770_disk.config">
Jan 20 14:53:23 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:53:23 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:53:23 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:53:23 compute-0 nova_compute[250018]:       </source>
Jan 20 14:53:23 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 14:53:23 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:53:23 compute-0 nova_compute[250018]:       </auth>
Jan 20 14:53:23 compute-0 nova_compute[250018]:       <target dev="sda" bus="sata"/>
Jan 20 14:53:23 compute-0 nova_compute[250018]:     </disk>
Jan 20 14:53:23 compute-0 nova_compute[250018]:     <disk type="network" device="disk">
Jan 20 14:53:23 compute-0 nova_compute[250018]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 20 14:53:23 compute-0 nova_compute[250018]:       <source protocol="rbd" name="volumes/volume-b687fb44-6160-427b-b91a-091715876a58">
Jan 20 14:53:23 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:53:23 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:53:23 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:53:23 compute-0 nova_compute[250018]:       </source>
Jan 20 14:53:23 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 14:53:23 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:53:23 compute-0 nova_compute[250018]:       </auth>
Jan 20 14:53:23 compute-0 nova_compute[250018]:       <target dev="vda" bus="virtio"/>
Jan 20 14:53:23 compute-0 nova_compute[250018]:       <serial>b687fb44-6160-427b-b91a-091715876a58</serial>
Jan 20 14:53:23 compute-0 nova_compute[250018]:     </disk>
Jan 20 14:53:23 compute-0 nova_compute[250018]:     <interface type="ethernet">
Jan 20 14:53:23 compute-0 nova_compute[250018]:       <mac address="fa:16:3e:f9:1a:06"/>
Jan 20 14:53:23 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 14:53:23 compute-0 nova_compute[250018]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 14:53:23 compute-0 nova_compute[250018]:       <mtu size="1442"/>
Jan 20 14:53:23 compute-0 nova_compute[250018]:       <target dev="tap3dbcd2bc-10"/>
Jan 20 14:53:23 compute-0 nova_compute[250018]:     </interface>
Jan 20 14:53:23 compute-0 nova_compute[250018]:     <serial type="pty">
Jan 20 14:53:23 compute-0 nova_compute[250018]:       <log file="/var/lib/nova/instances/d23ddbd4-8b5d-4bf5-a02d-3fb69b940770/console.log" append="off"/>
Jan 20 14:53:23 compute-0 nova_compute[250018]:     </serial>
Jan 20 14:53:23 compute-0 nova_compute[250018]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 14:53:23 compute-0 nova_compute[250018]:     <video>
Jan 20 14:53:23 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 14:53:23 compute-0 nova_compute[250018]:     </video>
Jan 20 14:53:23 compute-0 nova_compute[250018]:     <input type="tablet" bus="usb"/>
Jan 20 14:53:23 compute-0 nova_compute[250018]:     <rng model="virtio">
Jan 20 14:53:23 compute-0 nova_compute[250018]:       <backend model="random">/dev/urandom</backend>
Jan 20 14:53:23 compute-0 nova_compute[250018]:     </rng>
Jan 20 14:53:23 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root"/>
Jan 20 14:53:23 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:53:23 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:53:23 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:53:23 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:53:23 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:53:23 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:53:23 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:53:23 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:53:23 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:53:23 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:53:23 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:53:23 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:53:23 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:53:23 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:53:23 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:53:23 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:53:23 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:53:23 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:53:23 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:53:23 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:53:23 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:53:23 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:53:23 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:53:23 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:53:23 compute-0 nova_compute[250018]:     <controller type="usb" index="0"/>
Jan 20 14:53:23 compute-0 nova_compute[250018]:     <memballoon model="virtio">
Jan 20 14:53:23 compute-0 nova_compute[250018]:       <stats period="10"/>
Jan 20 14:53:23 compute-0 nova_compute[250018]:     </memballoon>
Jan 20 14:53:23 compute-0 nova_compute[250018]:   </devices>
Jan 20 14:53:23 compute-0 nova_compute[250018]: </domain>
Jan 20 14:53:23 compute-0 nova_compute[250018]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 14:53:23 compute-0 nova_compute[250018]: 2026-01-20 14:53:23.799 250022 DEBUG nova.compute.manager [None req-9f49e52e-c8e8-4a92-b900-3f705e87170a 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] [instance: d23ddbd4-8b5d-4bf5-a02d-3fb69b940770] Preparing to wait for external event network-vif-plugged-3dbcd2bc-10d2-4550-a4f5-4c4c9effa54b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 20 14:53:23 compute-0 nova_compute[250018]: 2026-01-20 14:53:23.800 250022 DEBUG oslo_concurrency.lockutils [None req-9f49e52e-c8e8-4a92-b900-3f705e87170a 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] Acquiring lock "d23ddbd4-8b5d-4bf5-a02d-3fb69b940770-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:53:23 compute-0 nova_compute[250018]: 2026-01-20 14:53:23.801 250022 DEBUG oslo_concurrency.lockutils [None req-9f49e52e-c8e8-4a92-b900-3f705e87170a 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] Lock "d23ddbd4-8b5d-4bf5-a02d-3fb69b940770-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:53:23 compute-0 nova_compute[250018]: 2026-01-20 14:53:23.801 250022 DEBUG oslo_concurrency.lockutils [None req-9f49e52e-c8e8-4a92-b900-3f705e87170a 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] Lock "d23ddbd4-8b5d-4bf5-a02d-3fb69b940770-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:53:23 compute-0 nova_compute[250018]: 2026-01-20 14:53:23.803 250022 DEBUG nova.virt.libvirt.vif [None req-9f49e52e-c8e8-4a92-b900-3f705e87170a 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:53:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-ServerActionsV293TestJSON-server-278330687',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionsv293testjson-server-278330687',id=121,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBO86kZNzDPCASADxiWGymA6UJfVUXa5anZX6myQUgRsg5kfeB7NCF6546KW3Ot1GqbB4r4cbuWZGKPoJrxymBlCt9uzHjI477OxSlIci+EHESOU35e8Xbs8CtBj17r8ipw==',key_name='tempest-keypair-286575342',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0ad54030e5cc477e939e073b52024ec4',ramdisk_id='',reservation_id='r-c0xrc8pm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_pr
oject_name='tempest-ServerActionsV293TestJSON-747539193',owner_user_name='tempest-ServerActionsV293TestJSON-747539193-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:53:15Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='15c455b119784bb9abe8e4774dadd01e',uuid=d23ddbd4-8b5d-4bf5-a02d-3fb69b940770,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3dbcd2bc-10d2-4550-a4f5-4c4c9effa54b", "address": "fa:16:3e:f9:1a:06", "network": {"id": "16cc2fb6-cda7-431a-ae22-fc6920fbbe4e", "bridge": "br-int", "label": "tempest-ServerActionsV293TestJSON-1739563670-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0ad54030e5cc477e939e073b52024ec4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3dbcd2bc-10", "ovs_interfaceid": "3dbcd2bc-10d2-4550-a4f5-4c4c9effa54b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 14:53:23 compute-0 nova_compute[250018]: 2026-01-20 14:53:23.803 250022 DEBUG nova.network.os_vif_util [None req-9f49e52e-c8e8-4a92-b900-3f705e87170a 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] Converting VIF {"id": "3dbcd2bc-10d2-4550-a4f5-4c4c9effa54b", "address": "fa:16:3e:f9:1a:06", "network": {"id": "16cc2fb6-cda7-431a-ae22-fc6920fbbe4e", "bridge": "br-int", "label": "tempest-ServerActionsV293TestJSON-1739563670-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0ad54030e5cc477e939e073b52024ec4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3dbcd2bc-10", "ovs_interfaceid": "3dbcd2bc-10d2-4550-a4f5-4c4c9effa54b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:53:23 compute-0 nova_compute[250018]: 2026-01-20 14:53:23.804 250022 DEBUG nova.network.os_vif_util [None req-9f49e52e-c8e8-4a92-b900-3f705e87170a 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f9:1a:06,bridge_name='br-int',has_traffic_filtering=True,id=3dbcd2bc-10d2-4550-a4f5-4c4c9effa54b,network=Network(16cc2fb6-cda7-431a-ae22-fc6920fbbe4e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3dbcd2bc-10') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:53:23 compute-0 nova_compute[250018]: 2026-01-20 14:53:23.805 250022 DEBUG os_vif [None req-9f49e52e-c8e8-4a92-b900-3f705e87170a 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f9:1a:06,bridge_name='br-int',has_traffic_filtering=True,id=3dbcd2bc-10d2-4550-a4f5-4c4c9effa54b,network=Network(16cc2fb6-cda7-431a-ae22-fc6920fbbe4e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3dbcd2bc-10') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 14:53:23 compute-0 nova_compute[250018]: 2026-01-20 14:53:23.807 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:53:23 compute-0 nova_compute[250018]: 2026-01-20 14:53:23.807 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:53:23 compute-0 nova_compute[250018]: 2026-01-20 14:53:23.808 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:53:23 compute-0 nova_compute[250018]: 2026-01-20 14:53:23.812 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:53:23 compute-0 nova_compute[250018]: 2026-01-20 14:53:23.812 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3dbcd2bc-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:53:23 compute-0 nova_compute[250018]: 2026-01-20 14:53:23.813 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3dbcd2bc-10, col_values=(('external_ids', {'iface-id': '3dbcd2bc-10d2-4550-a4f5-4c4c9effa54b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f9:1a:06', 'vm-uuid': 'd23ddbd4-8b5d-4bf5-a02d-3fb69b940770'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:53:23 compute-0 nova_compute[250018]: 2026-01-20 14:53:23.814 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:53:23 compute-0 NetworkManager[48960]: <info>  [1768920803.8166] manager: (tap3dbcd2bc-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/213)
Jan 20 14:53:23 compute-0 nova_compute[250018]: 2026-01-20 14:53:23.817 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:53:23 compute-0 nova_compute[250018]: 2026-01-20 14:53:23.828 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:53:23 compute-0 nova_compute[250018]: 2026-01-20 14:53:23.830 250022 INFO os_vif [None req-9f49e52e-c8e8-4a92-b900-3f705e87170a 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f9:1a:06,bridge_name='br-int',has_traffic_filtering=True,id=3dbcd2bc-10d2-4550-a4f5-4c4c9effa54b,network=Network(16cc2fb6-cda7-431a-ae22-fc6920fbbe4e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3dbcd2bc-10')
Jan 20 14:53:23 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2001: 321 pgs: 321 active+clean; 642 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.5 MiB/s wr, 136 op/s
Jan 20 14:53:23 compute-0 nova_compute[250018]: 2026-01-20 14:53:23.959 250022 DEBUG nova.virt.libvirt.driver [None req-9f49e52e-c8e8-4a92-b900-3f705e87170a 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:53:23 compute-0 nova_compute[250018]: 2026-01-20 14:53:23.959 250022 DEBUG nova.virt.libvirt.driver [None req-9f49e52e-c8e8-4a92-b900-3f705e87170a 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:53:23 compute-0 nova_compute[250018]: 2026-01-20 14:53:23.959 250022 DEBUG nova.virt.libvirt.driver [None req-9f49e52e-c8e8-4a92-b900-3f705e87170a 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] No VIF found with MAC fa:16:3e:f9:1a:06, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 14:53:23 compute-0 nova_compute[250018]: 2026-01-20 14:53:23.960 250022 INFO nova.virt.libvirt.driver [None req-9f49e52e-c8e8-4a92-b900-3f705e87170a 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] [instance: d23ddbd4-8b5d-4bf5-a02d-3fb69b940770] Using config drive
Jan 20 14:53:23 compute-0 nova_compute[250018]: 2026-01-20 14:53:23.989 250022 DEBUG nova.storage.rbd_utils [None req-9f49e52e-c8e8-4a92-b900-3f705e87170a 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] rbd image d23ddbd4-8b5d-4bf5-a02d-3fb69b940770_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:53:24 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/1554013804' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:53:24 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:53:24 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:53:24 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:53:24 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:53:24.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:53:25 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:53:25 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:53:25 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:53:25.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:53:25 compute-0 ceph-mon[74360]: pgmap v2001: 321 pgs: 321 active+clean; 642 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.5 MiB/s wr, 136 op/s
Jan 20 14:53:25 compute-0 nova_compute[250018]: 2026-01-20 14:53:25.332 250022 INFO nova.virt.libvirt.driver [None req-9f49e52e-c8e8-4a92-b900-3f705e87170a 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] [instance: d23ddbd4-8b5d-4bf5-a02d-3fb69b940770] Creating config drive at /var/lib/nova/instances/d23ddbd4-8b5d-4bf5-a02d-3fb69b940770/disk.config
Jan 20 14:53:25 compute-0 nova_compute[250018]: 2026-01-20 14:53:25.341 250022 DEBUG oslo_concurrency.processutils [None req-9f49e52e-c8e8-4a92-b900-3f705e87170a 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d23ddbd4-8b5d-4bf5-a02d-3fb69b940770/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfhurb89t execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:53:25 compute-0 nova_compute[250018]: 2026-01-20 14:53:25.478 250022 DEBUG oslo_concurrency.processutils [None req-9f49e52e-c8e8-4a92-b900-3f705e87170a 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d23ddbd4-8b5d-4bf5-a02d-3fb69b940770/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfhurb89t" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:53:25 compute-0 nova_compute[250018]: 2026-01-20 14:53:25.511 250022 DEBUG nova.storage.rbd_utils [None req-9f49e52e-c8e8-4a92-b900-3f705e87170a 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] rbd image d23ddbd4-8b5d-4bf5-a02d-3fb69b940770_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:53:25 compute-0 nova_compute[250018]: 2026-01-20 14:53:25.515 250022 DEBUG oslo_concurrency.processutils [None req-9f49e52e-c8e8-4a92-b900-3f705e87170a 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d23ddbd4-8b5d-4bf5-a02d-3fb69b940770/disk.config d23ddbd4-8b5d-4bf5-a02d-3fb69b940770_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:53:25 compute-0 nova_compute[250018]: 2026-01-20 14:53:25.788 250022 DEBUG nova.network.neutron [req-3eee1d85-ce86-465f-86d1-9583dda651c8 req-0fa7d9e3-431e-4152-9536-15343d9a8c68 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d23ddbd4-8b5d-4bf5-a02d-3fb69b940770] Updated VIF entry in instance network info cache for port 3dbcd2bc-10d2-4550-a4f5-4c4c9effa54b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 14:53:25 compute-0 nova_compute[250018]: 2026-01-20 14:53:25.789 250022 DEBUG nova.network.neutron [req-3eee1d85-ce86-465f-86d1-9583dda651c8 req-0fa7d9e3-431e-4152-9536-15343d9a8c68 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d23ddbd4-8b5d-4bf5-a02d-3fb69b940770] Updating instance_info_cache with network_info: [{"id": "3dbcd2bc-10d2-4550-a4f5-4c4c9effa54b", "address": "fa:16:3e:f9:1a:06", "network": {"id": "16cc2fb6-cda7-431a-ae22-fc6920fbbe4e", "bridge": "br-int", "label": "tempest-ServerActionsV293TestJSON-1739563670-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0ad54030e5cc477e939e073b52024ec4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3dbcd2bc-10", "ovs_interfaceid": "3dbcd2bc-10d2-4550-a4f5-4c4c9effa54b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:53:25 compute-0 nova_compute[250018]: 2026-01-20 14:53:25.818 250022 DEBUG oslo_concurrency.lockutils [req-3eee1d85-ce86-465f-86d1-9583dda651c8 req-0fa7d9e3-431e-4152-9536-15343d9a8c68 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-d23ddbd4-8b5d-4bf5-a02d-3fb69b940770" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:53:25 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2002: 321 pgs: 321 active+clean; 665 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 3.1 MiB/s wr, 131 op/s
Jan 20 14:53:25 compute-0 nova_compute[250018]: 2026-01-20 14:53:25.969 250022 DEBUG oslo_concurrency.processutils [None req-9f49e52e-c8e8-4a92-b900-3f705e87170a 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d23ddbd4-8b5d-4bf5-a02d-3fb69b940770/disk.config d23ddbd4-8b5d-4bf5-a02d-3fb69b940770_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:53:25 compute-0 nova_compute[250018]: 2026-01-20 14:53:25.970 250022 INFO nova.virt.libvirt.driver [None req-9f49e52e-c8e8-4a92-b900-3f705e87170a 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] [instance: d23ddbd4-8b5d-4bf5-a02d-3fb69b940770] Deleting local config drive /var/lib/nova/instances/d23ddbd4-8b5d-4bf5-a02d-3fb69b940770/disk.config because it was imported into RBD.
Jan 20 14:53:26 compute-0 NetworkManager[48960]: <info>  [1768920806.0352] manager: (tap3dbcd2bc-10): new Tun device (/org/freedesktop/NetworkManager/Devices/214)
Jan 20 14:53:26 compute-0 kernel: tap3dbcd2bc-10: entered promiscuous mode
Jan 20 14:53:26 compute-0 ovn_controller[148666]: 2026-01-20T14:53:26Z|00414|binding|INFO|Claiming lport 3dbcd2bc-10d2-4550-a4f5-4c4c9effa54b for this chassis.
Jan 20 14:53:26 compute-0 ovn_controller[148666]: 2026-01-20T14:53:26Z|00415|binding|INFO|3dbcd2bc-10d2-4550-a4f5-4c4c9effa54b: Claiming fa:16:3e:f9:1a:06 10.100.0.4
Jan 20 14:53:26 compute-0 nova_compute[250018]: 2026-01-20 14:53:26.038 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:53:26 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:53:26.045 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f9:1a:06 10.100.0.4'], port_security=['fa:16:3e:f9:1a:06 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'd23ddbd4-8b5d-4bf5-a02d-3fb69b940770', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-16cc2fb6-cda7-431a-ae22-fc6920fbbe4e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0ad54030e5cc477e939e073b52024ec4', 'neutron:revision_number': '2', 'neutron:security_group_ids': '809158a5-5df2-4e61-8536-596fb1ff7657', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=42647ce1-15eb-4208-a167-10e96fd5deda, chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=3dbcd2bc-10d2-4550-a4f5-4c4c9effa54b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:53:26 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:53:26.046 160071 INFO neutron.agent.ovn.metadata.agent [-] Port 3dbcd2bc-10d2-4550-a4f5-4c4c9effa54b in datapath 16cc2fb6-cda7-431a-ae22-fc6920fbbe4e bound to our chassis
Jan 20 14:53:26 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:53:26.047 160071 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 16cc2fb6-cda7-431a-ae22-fc6920fbbe4e
Jan 20 14:53:26 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:53:26.058 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[5784b409-4ef2-4b54-b92f-011325a2a78e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:53:26 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:53:26.059 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap16cc2fb6-c1 in ovnmeta-16cc2fb6-cda7-431a-ae22-fc6920fbbe4e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 14:53:26 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:53:26.063 257604 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap16cc2fb6-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 14:53:26 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:53:26.063 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[bff791ae-a7a8-4168-847b-e46c4a7fc5d5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:53:26 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:53:26.064 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[5d4028e3-5a4c-4e6d-a844-3db6aebabbbd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:53:26 compute-0 ovn_controller[148666]: 2026-01-20T14:53:26Z|00416|binding|INFO|Setting lport 3dbcd2bc-10d2-4550-a4f5-4c4c9effa54b ovn-installed in OVS
Jan 20 14:53:26 compute-0 ovn_controller[148666]: 2026-01-20T14:53:26Z|00417|binding|INFO|Setting lport 3dbcd2bc-10d2-4550-a4f5-4c4c9effa54b up in Southbound
Jan 20 14:53:26 compute-0 nova_compute[250018]: 2026-01-20 14:53:26.074 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:53:26 compute-0 nova_compute[250018]: 2026-01-20 14:53:26.075 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:53:26 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:53:26.081 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[5b7cb815-a8fe-48a4-9822-6d2e241b4b28]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:53:26 compute-0 systemd-machined[216401]: New machine qemu-56-instance-00000079.
Jan 20 14:53:26 compute-0 systemd[1]: Started Virtual Machine qemu-56-instance-00000079.
Jan 20 14:53:26 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:53:26.112 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[97624063-4a07-47cd-a78c-e659617cb37b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:53:26 compute-0 systemd-udevd[321155]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:53:26 compute-0 NetworkManager[48960]: <info>  [1768920806.1351] device (tap3dbcd2bc-10): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 14:53:26 compute-0 NetworkManager[48960]: <info>  [1768920806.1372] device (tap3dbcd2bc-10): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 14:53:26 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:53:26.150 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[9de0a06f-ca8d-492a-a6ea-2cb9aadcbe83]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:53:26 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:53:26.156 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[9baeb78e-eb4b-479b-b299-2000a414e2fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:53:26 compute-0 NetworkManager[48960]: <info>  [1768920806.1579] manager: (tap16cc2fb6-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/215)
Jan 20 14:53:26 compute-0 ceph-mon[74360]: pgmap v2002: 321 pgs: 321 active+clean; 665 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 3.1 MiB/s wr, 131 op/s
Jan 20 14:53:26 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:53:26.198 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[43d98955-032d-4c49-9f43-f2da91be2bbf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:53:26 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:53:26.201 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[8039b7eb-c939-4e1d-838e-6a8b8b6232ed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:53:26 compute-0 NetworkManager[48960]: <info>  [1768920806.2361] device (tap16cc2fb6-c0): carrier: link connected
Jan 20 14:53:26 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:53:26.244 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[7cebdc3f-e57d-497f-a28f-9fc217115cc4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:53:26 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:53:26.262 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[7c4d3608-e184-4e2e-bc39-0199c4f23c53]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap16cc2fb6-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fb:a3:69'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 138], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 674695, 'reachable_time': 32723, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 321185, 'error': None, 'target': 'ovnmeta-16cc2fb6-cda7-431a-ae22-fc6920fbbe4e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:53:26 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:53:26.279 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[0f8fdfb5-3253-48b9-8ebf-781277ea2dc4]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fefb:a369'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 674695, 'tstamp': 674695}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 321186, 'error': None, 'target': 'ovnmeta-16cc2fb6-cda7-431a-ae22-fc6920fbbe4e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:53:26 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:53:26.296 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[66524947-ebd7-4158-9471-f044a54e90ad]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap16cc2fb6-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fb:a3:69'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 138], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 674695, 'reachable_time': 32723, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 321187, 'error': None, 'target': 'ovnmeta-16cc2fb6-cda7-431a-ae22-fc6920fbbe4e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:53:26 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:53:26.344 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[303d8cc4-4dd7-4822-bc29-548455a1247c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:53:26 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:53:26.427 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[e9456402-d51b-4047-9a73-0685ebfbc441]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:53:26 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:53:26.429 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap16cc2fb6-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:53:26 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:53:26.429 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:53:26 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:53:26.430 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap16cc2fb6-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:53:26 compute-0 nova_compute[250018]: 2026-01-20 14:53:26.432 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:53:26 compute-0 NetworkManager[48960]: <info>  [1768920806.4328] manager: (tap16cc2fb6-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/216)
Jan 20 14:53:26 compute-0 kernel: tap16cc2fb6-c0: entered promiscuous mode
Jan 20 14:53:26 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:53:26.439 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap16cc2fb6-c0, col_values=(('external_ids', {'iface-id': '114035f1-e09a-4fb9-a581-126e57246a7b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:53:26 compute-0 nova_compute[250018]: 2026-01-20 14:53:26.440 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:53:26 compute-0 ovn_controller[148666]: 2026-01-20T14:53:26Z|00418|binding|INFO|Releasing lport 114035f1-e09a-4fb9-a581-126e57246a7b from this chassis (sb_readonly=0)
Jan 20 14:53:26 compute-0 nova_compute[250018]: 2026-01-20 14:53:26.441 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:53:26 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:53:26.441 160071 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/16cc2fb6-cda7-431a-ae22-fc6920fbbe4e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/16cc2fb6-cda7-431a-ae22-fc6920fbbe4e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 14:53:26 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:53:26.442 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[25ad222e-4608-4f3f-b91d-df5d800182df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:53:26 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:53:26.443 160071 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 14:53:26 compute-0 ovn_metadata_agent[160049]: global
Jan 20 14:53:26 compute-0 ovn_metadata_agent[160049]:     log         /dev/log local0 debug
Jan 20 14:53:26 compute-0 ovn_metadata_agent[160049]:     log-tag     haproxy-metadata-proxy-16cc2fb6-cda7-431a-ae22-fc6920fbbe4e
Jan 20 14:53:26 compute-0 ovn_metadata_agent[160049]:     user        root
Jan 20 14:53:26 compute-0 ovn_metadata_agent[160049]:     group       root
Jan 20 14:53:26 compute-0 ovn_metadata_agent[160049]:     maxconn     1024
Jan 20 14:53:26 compute-0 ovn_metadata_agent[160049]:     pidfile     /var/lib/neutron/external/pids/16cc2fb6-cda7-431a-ae22-fc6920fbbe4e.pid.haproxy
Jan 20 14:53:26 compute-0 ovn_metadata_agent[160049]:     daemon
Jan 20 14:53:26 compute-0 ovn_metadata_agent[160049]: 
Jan 20 14:53:26 compute-0 ovn_metadata_agent[160049]: defaults
Jan 20 14:53:26 compute-0 ovn_metadata_agent[160049]:     log global
Jan 20 14:53:26 compute-0 ovn_metadata_agent[160049]:     mode http
Jan 20 14:53:26 compute-0 ovn_metadata_agent[160049]:     option httplog
Jan 20 14:53:26 compute-0 ovn_metadata_agent[160049]:     option dontlognull
Jan 20 14:53:26 compute-0 ovn_metadata_agent[160049]:     option http-server-close
Jan 20 14:53:26 compute-0 ovn_metadata_agent[160049]:     option forwardfor
Jan 20 14:53:26 compute-0 ovn_metadata_agent[160049]:     retries                 3
Jan 20 14:53:26 compute-0 ovn_metadata_agent[160049]:     timeout http-request    30s
Jan 20 14:53:26 compute-0 ovn_metadata_agent[160049]:     timeout connect         30s
Jan 20 14:53:26 compute-0 ovn_metadata_agent[160049]:     timeout client          32s
Jan 20 14:53:26 compute-0 ovn_metadata_agent[160049]:     timeout server          32s
Jan 20 14:53:26 compute-0 ovn_metadata_agent[160049]:     timeout http-keep-alive 30s
Jan 20 14:53:26 compute-0 ovn_metadata_agent[160049]: 
Jan 20 14:53:26 compute-0 ovn_metadata_agent[160049]: 
Jan 20 14:53:26 compute-0 ovn_metadata_agent[160049]: listen listener
Jan 20 14:53:26 compute-0 ovn_metadata_agent[160049]:     bind 169.254.169.254:80
Jan 20 14:53:26 compute-0 ovn_metadata_agent[160049]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 14:53:26 compute-0 ovn_metadata_agent[160049]:     http-request add-header X-OVN-Network-ID 16cc2fb6-cda7-431a-ae22-fc6920fbbe4e
Jan 20 14:53:26 compute-0 ovn_metadata_agent[160049]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 14:53:26 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:53:26.443 160071 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-16cc2fb6-cda7-431a-ae22-fc6920fbbe4e', 'env', 'PROCESS_TAG=haproxy-16cc2fb6-cda7-431a-ae22-fc6920fbbe4e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/16cc2fb6-cda7-431a-ae22-fc6920fbbe4e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 14:53:26 compute-0 nova_compute[250018]: 2026-01-20 14:53:26.455 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:53:26 compute-0 nova_compute[250018]: 2026-01-20 14:53:26.505 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768920806.5048451, d23ddbd4-8b5d-4bf5-a02d-3fb69b940770 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:53:26 compute-0 nova_compute[250018]: 2026-01-20 14:53:26.506 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: d23ddbd4-8b5d-4bf5-a02d-3fb69b940770] VM Started (Lifecycle Event)
Jan 20 14:53:26 compute-0 nova_compute[250018]: 2026-01-20 14:53:26.538 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: d23ddbd4-8b5d-4bf5-a02d-3fb69b940770] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:53:26 compute-0 nova_compute[250018]: 2026-01-20 14:53:26.544 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768920806.5064404, d23ddbd4-8b5d-4bf5-a02d-3fb69b940770 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:53:26 compute-0 nova_compute[250018]: 2026-01-20 14:53:26.544 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: d23ddbd4-8b5d-4bf5-a02d-3fb69b940770] VM Paused (Lifecycle Event)
Jan 20 14:53:26 compute-0 nova_compute[250018]: 2026-01-20 14:53:26.572 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: d23ddbd4-8b5d-4bf5-a02d-3fb69b940770] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:53:26 compute-0 nova_compute[250018]: 2026-01-20 14:53:26.586 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: d23ddbd4-8b5d-4bf5-a02d-3fb69b940770] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:53:26 compute-0 nova_compute[250018]: 2026-01-20 14:53:26.625 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: d23ddbd4-8b5d-4bf5-a02d-3fb69b940770] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:53:26 compute-0 podman[321261]: 2026-01-20 14:53:26.856580615 +0000 UTC m=+0.069247694 container create 4231d3abd38327ded950bc4facad9a096271d1b6b8bb2df3785b8b67da66161c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-16cc2fb6-cda7-431a-ae22-fc6920fbbe4e, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 20 14:53:26 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:53:26 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:53:26 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:53:26.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:53:26 compute-0 systemd[1]: Started libpod-conmon-4231d3abd38327ded950bc4facad9a096271d1b6b8bb2df3785b8b67da66161c.scope.
Jan 20 14:53:26 compute-0 podman[321261]: 2026-01-20 14:53:26.818741217 +0000 UTC m=+0.031408336 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 14:53:26 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:53:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95367cdc5b47cd70cd5fe89e2570a904b89739da7bf0f49dd022f741e67c71cf/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 14:53:26 compute-0 podman[321261]: 2026-01-20 14:53:26.972842254 +0000 UTC m=+0.185509363 container init 4231d3abd38327ded950bc4facad9a096271d1b6b8bb2df3785b8b67da66161c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-16cc2fb6-cda7-431a-ae22-fc6920fbbe4e, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 14:53:26 compute-0 podman[321261]: 2026-01-20 14:53:26.98047472 +0000 UTC m=+0.193141799 container start 4231d3abd38327ded950bc4facad9a096271d1b6b8bb2df3785b8b67da66161c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-16cc2fb6-cda7-431a-ae22-fc6920fbbe4e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.vendor=CentOS)
Jan 20 14:53:27 compute-0 neutron-haproxy-ovnmeta-16cc2fb6-cda7-431a-ae22-fc6920fbbe4e[321276]: [NOTICE]   (321280) : New worker (321282) forked
Jan 20 14:53:27 compute-0 neutron-haproxy-ovnmeta-16cc2fb6-cda7-431a-ae22-fc6920fbbe4e[321276]: [NOTICE]   (321280) : Loading success.
Jan 20 14:53:27 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:53:27 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:53:27 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:53:27.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:53:27 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/1500097760' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:53:27 compute-0 nova_compute[250018]: 2026-01-20 14:53:27.205 250022 DEBUG nova.compute.manager [req-c34b45cb-7fa9-4387-984b-dad389e95a8f req-cce24e71-ea67-451e-825e-5bfbd5a8d911 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d23ddbd4-8b5d-4bf5-a02d-3fb69b940770] Received event network-vif-plugged-3dbcd2bc-10d2-4550-a4f5-4c4c9effa54b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:53:27 compute-0 nova_compute[250018]: 2026-01-20 14:53:27.206 250022 DEBUG oslo_concurrency.lockutils [req-c34b45cb-7fa9-4387-984b-dad389e95a8f req-cce24e71-ea67-451e-825e-5bfbd5a8d911 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "d23ddbd4-8b5d-4bf5-a02d-3fb69b940770-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:53:27 compute-0 nova_compute[250018]: 2026-01-20 14:53:27.206 250022 DEBUG oslo_concurrency.lockutils [req-c34b45cb-7fa9-4387-984b-dad389e95a8f req-cce24e71-ea67-451e-825e-5bfbd5a8d911 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d23ddbd4-8b5d-4bf5-a02d-3fb69b940770-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:53:27 compute-0 nova_compute[250018]: 2026-01-20 14:53:27.206 250022 DEBUG oslo_concurrency.lockutils [req-c34b45cb-7fa9-4387-984b-dad389e95a8f req-cce24e71-ea67-451e-825e-5bfbd5a8d911 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d23ddbd4-8b5d-4bf5-a02d-3fb69b940770-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:53:27 compute-0 nova_compute[250018]: 2026-01-20 14:53:27.207 250022 DEBUG nova.compute.manager [req-c34b45cb-7fa9-4387-984b-dad389e95a8f req-cce24e71-ea67-451e-825e-5bfbd5a8d911 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d23ddbd4-8b5d-4bf5-a02d-3fb69b940770] Processing event network-vif-plugged-3dbcd2bc-10d2-4550-a4f5-4c4c9effa54b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 20 14:53:27 compute-0 nova_compute[250018]: 2026-01-20 14:53:27.207 250022 DEBUG nova.compute.manager [None req-9f49e52e-c8e8-4a92-b900-3f705e87170a 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] [instance: d23ddbd4-8b5d-4bf5-a02d-3fb69b940770] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 14:53:27 compute-0 nova_compute[250018]: 2026-01-20 14:53:27.212 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768920807.2122252, d23ddbd4-8b5d-4bf5-a02d-3fb69b940770 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:53:27 compute-0 nova_compute[250018]: 2026-01-20 14:53:27.212 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: d23ddbd4-8b5d-4bf5-a02d-3fb69b940770] VM Resumed (Lifecycle Event)
Jan 20 14:53:27 compute-0 nova_compute[250018]: 2026-01-20 14:53:27.214 250022 DEBUG nova.virt.libvirt.driver [None req-9f49e52e-c8e8-4a92-b900-3f705e87170a 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] [instance: d23ddbd4-8b5d-4bf5-a02d-3fb69b940770] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 14:53:27 compute-0 nova_compute[250018]: 2026-01-20 14:53:27.218 250022 INFO nova.virt.libvirt.driver [-] [instance: d23ddbd4-8b5d-4bf5-a02d-3fb69b940770] Instance spawned successfully.
Jan 20 14:53:27 compute-0 nova_compute[250018]: 2026-01-20 14:53:27.219 250022 DEBUG nova.virt.libvirt.driver [None req-9f49e52e-c8e8-4a92-b900-3f705e87170a 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] [instance: d23ddbd4-8b5d-4bf5-a02d-3fb69b940770] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 14:53:27 compute-0 nova_compute[250018]: 2026-01-20 14:53:27.247 250022 DEBUG nova.virt.libvirt.driver [None req-9f49e52e-c8e8-4a92-b900-3f705e87170a 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] [instance: d23ddbd4-8b5d-4bf5-a02d-3fb69b940770] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:53:27 compute-0 nova_compute[250018]: 2026-01-20 14:53:27.248 250022 DEBUG nova.virt.libvirt.driver [None req-9f49e52e-c8e8-4a92-b900-3f705e87170a 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] [instance: d23ddbd4-8b5d-4bf5-a02d-3fb69b940770] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:53:27 compute-0 nova_compute[250018]: 2026-01-20 14:53:27.248 250022 DEBUG nova.virt.libvirt.driver [None req-9f49e52e-c8e8-4a92-b900-3f705e87170a 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] [instance: d23ddbd4-8b5d-4bf5-a02d-3fb69b940770] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:53:27 compute-0 nova_compute[250018]: 2026-01-20 14:53:27.248 250022 DEBUG nova.virt.libvirt.driver [None req-9f49e52e-c8e8-4a92-b900-3f705e87170a 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] [instance: d23ddbd4-8b5d-4bf5-a02d-3fb69b940770] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:53:27 compute-0 nova_compute[250018]: 2026-01-20 14:53:27.249 250022 DEBUG nova.virt.libvirt.driver [None req-9f49e52e-c8e8-4a92-b900-3f705e87170a 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] [instance: d23ddbd4-8b5d-4bf5-a02d-3fb69b940770] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:53:27 compute-0 nova_compute[250018]: 2026-01-20 14:53:27.249 250022 DEBUG nova.virt.libvirt.driver [None req-9f49e52e-c8e8-4a92-b900-3f705e87170a 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] [instance: d23ddbd4-8b5d-4bf5-a02d-3fb69b940770] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:53:27 compute-0 nova_compute[250018]: 2026-01-20 14:53:27.257 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: d23ddbd4-8b5d-4bf5-a02d-3fb69b940770] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:53:27 compute-0 nova_compute[250018]: 2026-01-20 14:53:27.260 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: d23ddbd4-8b5d-4bf5-a02d-3fb69b940770] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:53:27 compute-0 nova_compute[250018]: 2026-01-20 14:53:27.286 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: d23ddbd4-8b5d-4bf5-a02d-3fb69b940770] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:53:27 compute-0 nova_compute[250018]: 2026-01-20 14:53:27.320 250022 INFO nova.compute.manager [None req-9f49e52e-c8e8-4a92-b900-3f705e87170a 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] [instance: d23ddbd4-8b5d-4bf5-a02d-3fb69b940770] Took 9.59 seconds to spawn the instance on the hypervisor.
Jan 20 14:53:27 compute-0 nova_compute[250018]: 2026-01-20 14:53:27.321 250022 DEBUG nova.compute.manager [None req-9f49e52e-c8e8-4a92-b900-3f705e87170a 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] [instance: d23ddbd4-8b5d-4bf5-a02d-3fb69b940770] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:53:27 compute-0 nova_compute[250018]: 2026-01-20 14:53:27.402 250022 INFO nova.compute.manager [None req-9f49e52e-c8e8-4a92-b900-3f705e87170a 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] [instance: d23ddbd4-8b5d-4bf5-a02d-3fb69b940770] Took 12.57 seconds to build instance.
Jan 20 14:53:27 compute-0 nova_compute[250018]: 2026-01-20 14:53:27.420 250022 DEBUG oslo_concurrency.lockutils [None req-9f49e52e-c8e8-4a92-b900-3f705e87170a 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] Lock "d23ddbd4-8b5d-4bf5-a02d-3fb69b940770" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.689s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:53:27 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2003: 321 pgs: 321 active+clean; 699 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 4.2 MiB/s wr, 168 op/s
Jan 20 14:53:28 compute-0 ceph-mon[74360]: pgmap v2003: 321 pgs: 321 active+clean; 699 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 4.2 MiB/s wr, 168 op/s
Jan 20 14:53:28 compute-0 nova_compute[250018]: 2026-01-20 14:53:28.358 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:53:28 compute-0 nova_compute[250018]: 2026-01-20 14:53:28.815 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:53:28 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:53:28 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:53:28 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:53:28.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:53:29 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:53:29 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:53:29 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:53:29.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:53:29 compute-0 nova_compute[250018]: 2026-01-20 14:53:29.365 250022 DEBUG nova.compute.manager [req-2174bb96-10e0-4c74-a7b7-2c1a5e803b07 req-f9ed0c32-789b-40c7-aa55-83405c78636a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d23ddbd4-8b5d-4bf5-a02d-3fb69b940770] Received event network-vif-plugged-3dbcd2bc-10d2-4550-a4f5-4c4c9effa54b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:53:29 compute-0 nova_compute[250018]: 2026-01-20 14:53:29.366 250022 DEBUG oslo_concurrency.lockutils [req-2174bb96-10e0-4c74-a7b7-2c1a5e803b07 req-f9ed0c32-789b-40c7-aa55-83405c78636a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "d23ddbd4-8b5d-4bf5-a02d-3fb69b940770-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:53:29 compute-0 nova_compute[250018]: 2026-01-20 14:53:29.366 250022 DEBUG oslo_concurrency.lockutils [req-2174bb96-10e0-4c74-a7b7-2c1a5e803b07 req-f9ed0c32-789b-40c7-aa55-83405c78636a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d23ddbd4-8b5d-4bf5-a02d-3fb69b940770-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:53:29 compute-0 nova_compute[250018]: 2026-01-20 14:53:29.366 250022 DEBUG oslo_concurrency.lockutils [req-2174bb96-10e0-4c74-a7b7-2c1a5e803b07 req-f9ed0c32-789b-40c7-aa55-83405c78636a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d23ddbd4-8b5d-4bf5-a02d-3fb69b940770-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:53:29 compute-0 nova_compute[250018]: 2026-01-20 14:53:29.366 250022 DEBUG nova.compute.manager [req-2174bb96-10e0-4c74-a7b7-2c1a5e803b07 req-f9ed0c32-789b-40c7-aa55-83405c78636a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d23ddbd4-8b5d-4bf5-a02d-3fb69b940770] No waiting events found dispatching network-vif-plugged-3dbcd2bc-10d2-4550-a4f5-4c4c9effa54b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:53:29 compute-0 nova_compute[250018]: 2026-01-20 14:53:29.366 250022 WARNING nova.compute.manager [req-2174bb96-10e0-4c74-a7b7-2c1a5e803b07 req-f9ed0c32-789b-40c7-aa55-83405c78636a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d23ddbd4-8b5d-4bf5-a02d-3fb69b940770] Received unexpected event network-vif-plugged-3dbcd2bc-10d2-4550-a4f5-4c4c9effa54b for instance with vm_state active and task_state None.
Jan 20 14:53:29 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:53:29 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2004: 321 pgs: 321 active+clean; 707 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 4.2 MiB/s wr, 198 op/s
Jan 20 14:53:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:53:30.764 160071 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:53:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:53:30.765 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:53:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:53:30.766 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:53:30 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:53:30 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:53:30 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:53:30.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:53:31 compute-0 ceph-mon[74360]: pgmap v2004: 321 pgs: 321 active+clean; 707 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 4.2 MiB/s wr, 198 op/s
Jan 20 14:53:31 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:53:31 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:53:31 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:53:31.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:53:31 compute-0 sudo[321293]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:53:31 compute-0 sudo[321293]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:53:31 compute-0 sudo[321293]: pam_unix(sudo:session): session closed for user root
Jan 20 14:53:31 compute-0 sudo[321318]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:53:31 compute-0 sudo[321318]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:53:31 compute-0 sudo[321318]: pam_unix(sudo:session): session closed for user root
Jan 20 14:53:31 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2005: 321 pgs: 321 active+clean; 722 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 3.9 MiB/s wr, 222 op/s
Jan 20 14:53:32 compute-0 nova_compute[250018]: 2026-01-20 14:53:32.139 250022 DEBUG nova.compute.manager [req-1e410b52-5373-477f-9290-25f0266594a6 req-268fb086-3fa8-4e4e-9348-07adae4f702b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d23ddbd4-8b5d-4bf5-a02d-3fb69b940770] Received event network-changed-3dbcd2bc-10d2-4550-a4f5-4c4c9effa54b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:53:32 compute-0 nova_compute[250018]: 2026-01-20 14:53:32.139 250022 DEBUG nova.compute.manager [req-1e410b52-5373-477f-9290-25f0266594a6 req-268fb086-3fa8-4e4e-9348-07adae4f702b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d23ddbd4-8b5d-4bf5-a02d-3fb69b940770] Refreshing instance network info cache due to event network-changed-3dbcd2bc-10d2-4550-a4f5-4c4c9effa54b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:53:32 compute-0 nova_compute[250018]: 2026-01-20 14:53:32.140 250022 DEBUG oslo_concurrency.lockutils [req-1e410b52-5373-477f-9290-25f0266594a6 req-268fb086-3fa8-4e4e-9348-07adae4f702b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-d23ddbd4-8b5d-4bf5-a02d-3fb69b940770" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:53:32 compute-0 nova_compute[250018]: 2026-01-20 14:53:32.140 250022 DEBUG oslo_concurrency.lockutils [req-1e410b52-5373-477f-9290-25f0266594a6 req-268fb086-3fa8-4e4e-9348-07adae4f702b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-d23ddbd4-8b5d-4bf5-a02d-3fb69b940770" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:53:32 compute-0 nova_compute[250018]: 2026-01-20 14:53:32.140 250022 DEBUG nova.network.neutron [req-1e410b52-5373-477f-9290-25f0266594a6 req-268fb086-3fa8-4e4e-9348-07adae4f702b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d23ddbd4-8b5d-4bf5-a02d-3fb69b940770] Refreshing network info cache for port 3dbcd2bc-10d2-4550-a4f5-4c4c9effa54b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:53:32 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:53:32 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:53:32 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:53:32.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:53:33 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e282 do_prune osdmap full prune enabled
Jan 20 14:53:33 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:53:33 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:53:33 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:53:33.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:53:33 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e283 e283: 3 total, 3 up, 3 in
Jan 20 14:53:33 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e283: 3 total, 3 up, 3 in
Jan 20 14:53:33 compute-0 ceph-mon[74360]: pgmap v2005: 321 pgs: 321 active+clean; 722 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 3.9 MiB/s wr, 222 op/s
Jan 20 14:53:33 compute-0 nova_compute[250018]: 2026-01-20 14:53:33.359 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:53:33 compute-0 nova_compute[250018]: 2026-01-20 14:53:33.817 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:53:33 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2007: 321 pgs: 321 active+clean; 722 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 5.0 MiB/s rd, 4.7 MiB/s wr, 282 op/s
Jan 20 14:53:34 compute-0 nova_compute[250018]: 2026-01-20 14:53:34.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:53:34 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e283 do_prune osdmap full prune enabled
Jan 20 14:53:34 compute-0 ceph-mon[74360]: osdmap e283: 3 total, 3 up, 3 in
Jan 20 14:53:34 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/3836508932' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:53:34 compute-0 ceph-mon[74360]: pgmap v2007: 321 pgs: 321 active+clean; 722 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 5.0 MiB/s rd, 4.7 MiB/s wr, 282 op/s
Jan 20 14:53:34 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/654798905' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:53:34 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e284 e284: 3 total, 3 up, 3 in
Jan 20 14:53:34 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e284: 3 total, 3 up, 3 in
Jan 20 14:53:34 compute-0 nova_compute[250018]: 2026-01-20 14:53:34.509 250022 DEBUG nova.network.neutron [req-1e410b52-5373-477f-9290-25f0266594a6 req-268fb086-3fa8-4e4e-9348-07adae4f702b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d23ddbd4-8b5d-4bf5-a02d-3fb69b940770] Updated VIF entry in instance network info cache for port 3dbcd2bc-10d2-4550-a4f5-4c4c9effa54b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 14:53:34 compute-0 nova_compute[250018]: 2026-01-20 14:53:34.510 250022 DEBUG nova.network.neutron [req-1e410b52-5373-477f-9290-25f0266594a6 req-268fb086-3fa8-4e4e-9348-07adae4f702b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d23ddbd4-8b5d-4bf5-a02d-3fb69b940770] Updating instance_info_cache with network_info: [{"id": "3dbcd2bc-10d2-4550-a4f5-4c4c9effa54b", "address": "fa:16:3e:f9:1a:06", "network": {"id": "16cc2fb6-cda7-431a-ae22-fc6920fbbe4e", "bridge": "br-int", "label": "tempest-ServerActionsV293TestJSON-1739563670-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.197", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0ad54030e5cc477e939e073b52024ec4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3dbcd2bc-10", "ovs_interfaceid": "3dbcd2bc-10d2-4550-a4f5-4c4c9effa54b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:53:34 compute-0 podman[321346]: 2026-01-20 14:53:34.522347948 +0000 UTC m=+0.084282269 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 20 14:53:34 compute-0 podman[321345]: 2026-01-20 14:53:34.526001957 +0000 UTC m=+0.089149760 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller)
Jan 20 14:53:34 compute-0 nova_compute[250018]: 2026-01-20 14:53:34.539 250022 DEBUG oslo_concurrency.lockutils [req-1e410b52-5373-477f-9290-25f0266594a6 req-268fb086-3fa8-4e4e-9348-07adae4f702b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-d23ddbd4-8b5d-4bf5-a02d-3fb69b940770" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:53:34 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:53:34 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:53:34 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:53:34 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:53:34.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:53:35 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:53:35 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:53:35 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:53:35.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:53:35 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e284 do_prune osdmap full prune enabled
Jan 20 14:53:35 compute-0 ceph-mon[74360]: osdmap e284: 3 total, 3 up, 3 in
Jan 20 14:53:35 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e285 e285: 3 total, 3 up, 3 in
Jan 20 14:53:35 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e285: 3 total, 3 up, 3 in
Jan 20 14:53:35 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2010: 321 pgs: 321 active+clean; 790 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 9.8 MiB/s rd, 7.8 MiB/s wr, 262 op/s
Jan 20 14:53:36 compute-0 ceph-mon[74360]: osdmap e285: 3 total, 3 up, 3 in
Jan 20 14:53:36 compute-0 ceph-mon[74360]: pgmap v2010: 321 pgs: 321 active+clean; 790 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 9.8 MiB/s rd, 7.8 MiB/s wr, 262 op/s
Jan 20 14:53:36 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:53:36 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:53:36 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:53:36.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:53:37 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:53:37 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:53:37 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:53:37.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:53:37 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2011: 321 pgs: 321 active+clean; 815 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 12 MiB/s rd, 11 MiB/s wr, 376 op/s
Jan 20 14:53:38 compute-0 nova_compute[250018]: 2026-01-20 14:53:38.046 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:53:38 compute-0 nova_compute[250018]: 2026-01-20 14:53:38.361 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:53:38 compute-0 nova_compute[250018]: 2026-01-20 14:53:38.818 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:53:38 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:53:38 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:53:38 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:53:38.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:53:39 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:53:39 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:53:39 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:53:39.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:53:39 compute-0 ceph-mon[74360]: pgmap v2011: 321 pgs: 321 active+clean; 815 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 12 MiB/s rd, 11 MiB/s wr, 376 op/s
Jan 20 14:53:39 compute-0 ceph-osd[84815]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #48. Immutable memtables: 5.
Jan 20 14:53:39 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e285 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:53:39 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2012: 321 pgs: 321 active+clean; 826 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 9.5 MiB/s rd, 10 MiB/s wr, 319 op/s
Jan 20 14:53:40 compute-0 nova_compute[250018]: 2026-01-20 14:53:40.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:53:40 compute-0 ceph-mon[74360]: pgmap v2012: 321 pgs: 321 active+clean; 826 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 9.5 MiB/s rd, 10 MiB/s wr, 319 op/s
Jan 20 14:53:40 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:53:40 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:53:40 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:53:40.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:53:41 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:53:41 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:53:41 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:53:41.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:53:41 compute-0 nova_compute[250018]: 2026-01-20 14:53:41.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:53:41 compute-0 nova_compute[250018]: 2026-01-20 14:53:41.051 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 14:53:41 compute-0 nova_compute[250018]: 2026-01-20 14:53:41.052 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:53:41 compute-0 nova_compute[250018]: 2026-01-20 14:53:41.052 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 20 14:53:41 compute-0 ovn_controller[148666]: 2026-01-20T14:53:41Z|00050|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f9:1a:06 10.100.0.4
Jan 20 14:53:41 compute-0 ovn_controller[148666]: 2026-01-20T14:53:41Z|00051|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f9:1a:06 10.100.0.4
Jan 20 14:53:41 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2013: 321 pgs: 321 active+clean; 834 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 8.8 MiB/s rd, 9.2 MiB/s wr, 324 op/s
Jan 20 14:53:42 compute-0 nova_compute[250018]: 2026-01-20 14:53:42.185 250022 DEBUG nova.compute.manager [None req-04643289-7036-4057-b817-b3b50150711c 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Getting vnc console get_vnc_console /usr/lib/python3.9/site-packages/nova/compute/manager.py:7196
Jan 20 14:53:42 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:53:42 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1624741923' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:53:42 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:53:42 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:53:42 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:53:42.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:53:43 compute-0 ceph-mon[74360]: pgmap v2013: 321 pgs: 321 active+clean; 834 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 8.8 MiB/s rd, 9.2 MiB/s wr, 324 op/s
Jan 20 14:53:43 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/3957894210' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:53:43 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/1624741923' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:53:43 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:53:43 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:53:43 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:53:43.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:53:43 compute-0 nova_compute[250018]: 2026-01-20 14:53:43.064 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:53:43 compute-0 nova_compute[250018]: 2026-01-20 14:53:43.065 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:53:43 compute-0 nova_compute[250018]: 2026-01-20 14:53:43.065 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:53:43 compute-0 nova_compute[250018]: 2026-01-20 14:53:43.087 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:53:43 compute-0 nova_compute[250018]: 2026-01-20 14:53:43.087 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:53:43 compute-0 nova_compute[250018]: 2026-01-20 14:53:43.087 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:53:43 compute-0 nova_compute[250018]: 2026-01-20 14:53:43.088 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 14:53:43 compute-0 nova_compute[250018]: 2026-01-20 14:53:43.088 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:53:43 compute-0 nova_compute[250018]: 2026-01-20 14:53:43.365 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:53:43 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:53:43 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/201861485' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:53:43 compute-0 nova_compute[250018]: 2026-01-20 14:53:43.541 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:53:43 compute-0 nova_compute[250018]: 2026-01-20 14:53:43.820 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:53:43 compute-0 nova_compute[250018]: 2026-01-20 14:53:43.861 250022 DEBUG nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] skipping disk for instance-0000006e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:53:43 compute-0 nova_compute[250018]: 2026-01-20 14:53:43.861 250022 DEBUG nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] skipping disk for instance-0000006e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:53:43 compute-0 nova_compute[250018]: 2026-01-20 14:53:43.865 250022 DEBUG nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] skipping disk for instance-00000079 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:53:43 compute-0 nova_compute[250018]: 2026-01-20 14:53:43.865 250022 DEBUG nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] skipping disk for instance-00000079 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:53:43 compute-0 nova_compute[250018]: 2026-01-20 14:53:43.868 250022 DEBUG nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] skipping disk for instance-00000076 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:53:43 compute-0 nova_compute[250018]: 2026-01-20 14:53:43.868 250022 DEBUG nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] skipping disk for instance-00000076 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:53:43 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2014: 321 pgs: 321 active+clean; 846 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 7.0 MiB/s rd, 5.9 MiB/s wr, 287 op/s
Jan 20 14:53:44 compute-0 nova_compute[250018]: 2026-01-20 14:53:44.028 250022 WARNING nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:53:44 compute-0 nova_compute[250018]: 2026-01-20 14:53:44.029 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3804MB free_disk=20.693958282470703GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 14:53:44 compute-0 nova_compute[250018]: 2026-01-20 14:53:44.029 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:53:44 compute-0 nova_compute[250018]: 2026-01-20 14:53:44.029 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:53:44 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/2236801101' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:53:44 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/201861485' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:53:44 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/2478177324' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:53:44 compute-0 nova_compute[250018]: 2026-01-20 14:53:44.326 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Instance c3b4d4c6-c42f-4abc-9c01-89ec3e10c677 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 14:53:44 compute-0 nova_compute[250018]: 2026-01-20 14:53:44.327 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Instance 7f5cfffe-c1dc-4b00-844e-0fb35b340f44 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 14:53:44 compute-0 nova_compute[250018]: 2026-01-20 14:53:44.328 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Instance d23ddbd4-8b5d-4bf5-a02d-3fb69b940770 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 14:53:44 compute-0 nova_compute[250018]: 2026-01-20 14:53:44.328 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 14:53:44 compute-0 nova_compute[250018]: 2026-01-20 14:53:44.329 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 14:53:44 compute-0 nova_compute[250018]: 2026-01-20 14:53:44.433 250022 DEBUG nova.scheduler.client.report [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Refreshing inventories for resource provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 20 14:53:44 compute-0 nova_compute[250018]: 2026-01-20 14:53:44.460 250022 DEBUG nova.scheduler.client.report [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Updating ProviderTree inventory for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 20 14:53:44 compute-0 nova_compute[250018]: 2026-01-20 14:53:44.460 250022 DEBUG nova.compute.provider_tree [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Updating inventory in ProviderTree for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 20 14:53:44 compute-0 nova_compute[250018]: 2026-01-20 14:53:44.474 250022 DEBUG nova.scheduler.client.report [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Refreshing aggregate associations for resource provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 20 14:53:44 compute-0 nova_compute[250018]: 2026-01-20 14:53:44.502 250022 DEBUG nova.scheduler.client.report [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Refreshing trait associations for resource provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed, traits: COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_TRUSTED_CERTS,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NODE,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_RESCUE_BFV,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SSE2,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VOLUME_EXTEND,COMPUTE_ACCELERATORS,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_SATA,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE41,HW_CPU_X86_MMX,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_IDE,COMPUTE_DEVICE_TAGGING _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 20 14:53:44 compute-0 nova_compute[250018]: 2026-01-20 14:53:44.586 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:53:44 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e285 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:53:44 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e285 do_prune osdmap full prune enabled
Jan 20 14:53:44 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e286 e286: 3 total, 3 up, 3 in
Jan 20 14:53:44 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e286: 3 total, 3 up, 3 in
Jan 20 14:53:44 compute-0 nova_compute[250018]: 2026-01-20 14:53:44.814 250022 DEBUG oslo_concurrency.lockutils [None req-c568d10b-a329-45f7-9984-48a99637cbe3 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Acquiring lock "7f5cfffe-c1dc-4b00-844e-0fb35b340f44" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:53:44 compute-0 nova_compute[250018]: 2026-01-20 14:53:44.815 250022 DEBUG oslo_concurrency.lockutils [None req-c568d10b-a329-45f7-9984-48a99637cbe3 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Lock "7f5cfffe-c1dc-4b00-844e-0fb35b340f44" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:53:44 compute-0 nova_compute[250018]: 2026-01-20 14:53:44.815 250022 DEBUG nova.compute.manager [None req-c568d10b-a329-45f7-9984-48a99637cbe3 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:53:44 compute-0 nova_compute[250018]: 2026-01-20 14:53:44.819 250022 DEBUG nova.compute.manager [None req-c568d10b-a329-45f7-9984-48a99637cbe3 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Jan 20 14:53:44 compute-0 nova_compute[250018]: 2026-01-20 14:53:44.820 250022 DEBUG nova.objects.instance [None req-c568d10b-a329-45f7-9984-48a99637cbe3 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Lazy-loading 'flavor' on Instance uuid 7f5cfffe-c1dc-4b00-844e-0fb35b340f44 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:53:44 compute-0 nova_compute[250018]: 2026-01-20 14:53:44.841 250022 DEBUG nova.virt.libvirt.driver [None req-c568d10b-a329-45f7-9984-48a99637cbe3 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Jan 20 14:53:44 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:53:44 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:53:44 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:53:44.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:53:45 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:53:45 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2437868149' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:53:45 compute-0 ceph-mon[74360]: pgmap v2014: 321 pgs: 321 active+clean; 846 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 7.0 MiB/s rd, 5.9 MiB/s wr, 287 op/s
Jan 20 14:53:45 compute-0 ceph-mon[74360]: osdmap e286: 3 total, 3 up, 3 in
Jan 20 14:53:45 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:53:45 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:53:45 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:53:45.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:53:45 compute-0 nova_compute[250018]: 2026-01-20 14:53:45.064 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:53:45 compute-0 nova_compute[250018]: 2026-01-20 14:53:45.072 250022 DEBUG nova.compute.provider_tree [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:53:45 compute-0 nova_compute[250018]: 2026-01-20 14:53:45.091 250022 DEBUG nova.scheduler.client.report [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:53:45 compute-0 nova_compute[250018]: 2026-01-20 14:53:45.127 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 14:53:45 compute-0 nova_compute[250018]: 2026-01-20 14:53:45.128 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.099s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:53:45 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2016: 321 pgs: 321 active+clean; 867 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 5.6 MiB/s wr, 286 op/s
Jan 20 14:53:46 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/2437868149' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:53:46 compute-0 nova_compute[250018]: 2026-01-20 14:53:46.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:53:46 compute-0 nova_compute[250018]: 2026-01-20 14:53:46.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:53:46 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/1022852382' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:53:46 compute-0 nova_compute[250018]: 2026-01-20 14:53:46.052 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 20 14:53:46 compute-0 nova_compute[250018]: 2026-01-20 14:53:46.078 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 20 14:53:46 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:53:46 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:53:46 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:53:46.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:53:47 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:53:47 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:53:47 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:53:47.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:53:47 compute-0 ceph-mon[74360]: pgmap v2016: 321 pgs: 321 active+clean; 867 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 5.6 MiB/s wr, 286 op/s
Jan 20 14:53:47 compute-0 nova_compute[250018]: 2026-01-20 14:53:47.077 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:53:47 compute-0 nova_compute[250018]: 2026-01-20 14:53:47.078 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 14:53:47 compute-0 kernel: tap87b0cab5-af (unregistering): left promiscuous mode
Jan 20 14:53:47 compute-0 NetworkManager[48960]: <info>  [1768920827.0906] device (tap87b0cab5-af): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 14:53:47 compute-0 nova_compute[250018]: 2026-01-20 14:53:47.095 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:53:47 compute-0 ovn_controller[148666]: 2026-01-20T14:53:47Z|00419|binding|INFO|Releasing lport 87b0cab5-af2f-4440-8f58-840860a23f68 from this chassis (sb_readonly=0)
Jan 20 14:53:47 compute-0 ovn_controller[148666]: 2026-01-20T14:53:47Z|00420|binding|INFO|Setting lport 87b0cab5-af2f-4440-8f58-840860a23f68 down in Southbound
Jan 20 14:53:47 compute-0 ovn_controller[148666]: 2026-01-20T14:53:47Z|00421|binding|INFO|Removing iface tap87b0cab5-af ovn-installed in OVS
Jan 20 14:53:47 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:53:47.103 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2b:79:1b 10.100.0.9'], port_security=['fa:16:3e:2b:79:1b 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '7f5cfffe-c1dc-4b00-844e-0fb35b340f44', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b3b1b7f5b4f84b5abbc401eb577c85c0', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8b11f3fb-2601-4eca-a1b6-838549d7750c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.184'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3273589e-5585-406c-9611-87f758b0e521, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=87b0cab5-af2f-4440-8f58-840860a23f68) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:53:47 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:53:47.104 160071 INFO neutron.agent.ovn.metadata.agent [-] Port 87b0cab5-af2f-4440-8f58-840860a23f68 in datapath 41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce unbound from our chassis
Jan 20 14:53:47 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:53:47.105 160071 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 14:53:47 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:53:47.106 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[9c3a8755-2891-4964-87ef-539e89b6fc13]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:53:47 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:53:47.107 160071 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce namespace which is not needed anymore
Jan 20 14:53:47 compute-0 nova_compute[250018]: 2026-01-20 14:53:47.113 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:53:47 compute-0 nova_compute[250018]: 2026-01-20 14:53:47.120 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "refresh_cache-7f5cfffe-c1dc-4b00-844e-0fb35b340f44" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:53:47 compute-0 nova_compute[250018]: 2026-01-20 14:53:47.120 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquired lock "refresh_cache-7f5cfffe-c1dc-4b00-844e-0fb35b340f44" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:53:47 compute-0 nova_compute[250018]: 2026-01-20 14:53:47.120 250022 DEBUG nova.network.neutron [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 20 14:53:47 compute-0 systemd[1]: machine-qemu\x2d54\x2dinstance\x2d00000076.scope: Deactivated successfully.
Jan 20 14:53:47 compute-0 systemd[1]: machine-qemu\x2d54\x2dinstance\x2d00000076.scope: Consumed 18.437s CPU time.
Jan 20 14:53:47 compute-0 systemd-machined[216401]: Machine qemu-54-instance-00000076 terminated.
Jan 20 14:53:47 compute-0 neutron-haproxy-ovnmeta-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce[318936]: [NOTICE]   (318940) : haproxy version is 2.8.14-c23fe91
Jan 20 14:53:47 compute-0 neutron-haproxy-ovnmeta-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce[318936]: [NOTICE]   (318940) : path to executable is /usr/sbin/haproxy
Jan 20 14:53:47 compute-0 neutron-haproxy-ovnmeta-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce[318936]: [WARNING]  (318940) : Exiting Master process...
Jan 20 14:53:47 compute-0 neutron-haproxy-ovnmeta-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce[318936]: [WARNING]  (318940) : Exiting Master process...
Jan 20 14:53:47 compute-0 neutron-haproxy-ovnmeta-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce[318936]: [ALERT]    (318940) : Current worker (318942) exited with code 143 (Terminated)
Jan 20 14:53:47 compute-0 neutron-haproxy-ovnmeta-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce[318936]: [WARNING]  (318940) : All workers exited. Exiting... (0)
Jan 20 14:53:47 compute-0 systemd[1]: libpod-5b37b34cff7556e2d1bba6364f0f1db2ef866bab070556c37f86b76011bc5659.scope: Deactivated successfully.
Jan 20 14:53:47 compute-0 podman[321462]: 2026-01-20 14:53:47.293686854 +0000 UTC m=+0.060800998 container died 5b37b34cff7556e2d1bba6364f0f1db2ef866bab070556c37f86b76011bc5659 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 20 14:53:47 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5b37b34cff7556e2d1bba6364f0f1db2ef866bab070556c37f86b76011bc5659-userdata-shm.mount: Deactivated successfully.
Jan 20 14:53:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-4c961241b07c1a1ce872fb46b5509192d775baf97df50ff9c51115f8c0f98866-merged.mount: Deactivated successfully.
Jan 20 14:53:47 compute-0 podman[321462]: 2026-01-20 14:53:47.337249186 +0000 UTC m=+0.104363330 container cleanup 5b37b34cff7556e2d1bba6364f0f1db2ef866bab070556c37f86b76011bc5659 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 14:53:47 compute-0 systemd[1]: libpod-conmon-5b37b34cff7556e2d1bba6364f0f1db2ef866bab070556c37f86b76011bc5659.scope: Deactivated successfully.
Jan 20 14:53:47 compute-0 podman[321502]: 2026-01-20 14:53:47.429033406 +0000 UTC m=+0.059070041 container remove 5b37b34cff7556e2d1bba6364f0f1db2ef866bab070556c37f86b76011bc5659 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:53:47 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:53:47.435 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[28f3705b-968f-424d-9d5c-edc3dafb2769]: (4, ('Tue Jan 20 02:53:47 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce (5b37b34cff7556e2d1bba6364f0f1db2ef866bab070556c37f86b76011bc5659)\n5b37b34cff7556e2d1bba6364f0f1db2ef866bab070556c37f86b76011bc5659\nTue Jan 20 02:53:47 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce (5b37b34cff7556e2d1bba6364f0f1db2ef866bab070556c37f86b76011bc5659)\n5b37b34cff7556e2d1bba6364f0f1db2ef866bab070556c37f86b76011bc5659\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:53:47 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:53:47.437 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[2dde09a4-e605-4848-a437-5bb057dd9e83]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:53:47 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:53:47.439 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap41a1a3fe-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:53:47 compute-0 nova_compute[250018]: 2026-01-20 14:53:47.441 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:53:47 compute-0 kernel: tap41a1a3fe-f0: left promiscuous mode
Jan 20 14:53:47 compute-0 nova_compute[250018]: 2026-01-20 14:53:47.462 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:53:47 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:53:47.465 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[b1f8cf77-a765-4b4d-b2c8-86440c51aedb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:53:47 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:53:47.482 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[e66882c0-a5e5-4312-b765-3e133d30576a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:53:47 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:53:47.484 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[04ebd480-90ca-4f3f-aa9f-ebf4a8d6d700]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:53:47 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:53:47.501 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[faa86876-3def-4750-828a-99635ffab5c2]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 666470, 'reachable_time': 19801, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 321521, 'error': None, 'target': 'ovnmeta-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:53:47 compute-0 systemd[1]: run-netns-ovnmeta\x2d41a1a3fe\x2df6f8\x2d4375\x2d9b0f\x2da4d4bb269cce.mount: Deactivated successfully.
Jan 20 14:53:47 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:53:47.506 160517 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 14:53:47 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:53:47.506 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[09e22081-b617-4960-aeb9-b32d4f7c4044]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:53:47 compute-0 nova_compute[250018]: 2026-01-20 14:53:47.860 250022 INFO nova.virt.libvirt.driver [None req-c568d10b-a329-45f7-9984-48a99637cbe3 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Instance shutdown successfully after 3 seconds.
Jan 20 14:53:47 compute-0 nova_compute[250018]: 2026-01-20 14:53:47.864 250022 INFO nova.virt.libvirt.driver [-] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Instance destroyed successfully.
Jan 20 14:53:47 compute-0 nova_compute[250018]: 2026-01-20 14:53:47.864 250022 DEBUG nova.objects.instance [None req-c568d10b-a329-45f7-9984-48a99637cbe3 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Lazy-loading 'numa_topology' on Instance uuid 7f5cfffe-c1dc-4b00-844e-0fb35b340f44 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:53:47 compute-0 nova_compute[250018]: 2026-01-20 14:53:47.883 250022 DEBUG nova.compute.manager [None req-c568d10b-a329-45f7-9984-48a99637cbe3 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:53:47 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2017: 321 pgs: 321 active+clean; 867 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 3.5 MiB/s wr, 155 op/s
Jan 20 14:53:47 compute-0 nova_compute[250018]: 2026-01-20 14:53:47.957 250022 DEBUG oslo_concurrency.lockutils [None req-c568d10b-a329-45f7-9984-48a99637cbe3 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Lock "7f5cfffe-c1dc-4b00-844e-0fb35b340f44" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 3.143s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:53:48 compute-0 nova_compute[250018]: 2026-01-20 14:53:48.331 250022 DEBUG nova.compute.manager [req-0ce2fdc3-70ae-4ed8-a0f0-a9a0eae64679 req-f2a20b33-012d-4ff4-bf95-aafaea049653 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Received event network-vif-unplugged-87b0cab5-af2f-4440-8f58-840860a23f68 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:53:48 compute-0 nova_compute[250018]: 2026-01-20 14:53:48.331 250022 DEBUG oslo_concurrency.lockutils [req-0ce2fdc3-70ae-4ed8-a0f0-a9a0eae64679 req-f2a20b33-012d-4ff4-bf95-aafaea049653 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "7f5cfffe-c1dc-4b00-844e-0fb35b340f44-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:53:48 compute-0 nova_compute[250018]: 2026-01-20 14:53:48.332 250022 DEBUG oslo_concurrency.lockutils [req-0ce2fdc3-70ae-4ed8-a0f0-a9a0eae64679 req-f2a20b33-012d-4ff4-bf95-aafaea049653 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "7f5cfffe-c1dc-4b00-844e-0fb35b340f44-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:53:48 compute-0 nova_compute[250018]: 2026-01-20 14:53:48.332 250022 DEBUG oslo_concurrency.lockutils [req-0ce2fdc3-70ae-4ed8-a0f0-a9a0eae64679 req-f2a20b33-012d-4ff4-bf95-aafaea049653 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "7f5cfffe-c1dc-4b00-844e-0fb35b340f44-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:53:48 compute-0 nova_compute[250018]: 2026-01-20 14:53:48.332 250022 DEBUG nova.compute.manager [req-0ce2fdc3-70ae-4ed8-a0f0-a9a0eae64679 req-f2a20b33-012d-4ff4-bf95-aafaea049653 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] No waiting events found dispatching network-vif-unplugged-87b0cab5-af2f-4440-8f58-840860a23f68 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:53:48 compute-0 nova_compute[250018]: 2026-01-20 14:53:48.332 250022 WARNING nova.compute.manager [req-0ce2fdc3-70ae-4ed8-a0f0-a9a0eae64679 req-f2a20b33-012d-4ff4-bf95-aafaea049653 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Received unexpected event network-vif-unplugged-87b0cab5-af2f-4440-8f58-840860a23f68 for instance with vm_state stopped and task_state None.
Jan 20 14:53:48 compute-0 nova_compute[250018]: 2026-01-20 14:53:48.367 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:53:48 compute-0 nova_compute[250018]: 2026-01-20 14:53:48.823 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:53:48 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:53:48 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:53:48 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:53:48.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:53:49 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:53:49 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:53:49 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:53:49.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:53:49 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e286 do_prune osdmap full prune enabled
Jan 20 14:53:49 compute-0 ceph-mon[74360]: pgmap v2017: 321 pgs: 321 active+clean; 867 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 3.5 MiB/s wr, 155 op/s
Jan 20 14:53:49 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e287 e287: 3 total, 3 up, 3 in
Jan 20 14:53:49 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e287: 3 total, 3 up, 3 in
Jan 20 14:53:49 compute-0 nova_compute[250018]: 2026-01-20 14:53:49.610 250022 DEBUG nova.network.neutron [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Updating instance_info_cache with network_info: [{"id": "87b0cab5-af2f-4440-8f58-840860a23f68", "address": "fa:16:3e:2b:79:1b", "network": {"id": "41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1445030024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.184", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3b1b7f5b4f84b5abbc401eb577c85c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap87b0cab5-af", "ovs_interfaceid": "87b0cab5-af2f-4440-8f58-840860a23f68", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:53:49 compute-0 nova_compute[250018]: 2026-01-20 14:53:49.624 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Releasing lock "refresh_cache-7f5cfffe-c1dc-4b00-844e-0fb35b340f44" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:53:49 compute-0 nova_compute[250018]: 2026-01-20 14:53:49.625 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 20 14:53:49 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e287 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:53:49 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2019: 321 pgs: 321 active+clean; 871 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 943 KiB/s rd, 3.7 MiB/s wr, 137 op/s
Jan 20 14:53:49 compute-0 nova_compute[250018]: 2026-01-20 14:53:49.971 250022 INFO nova.compute.manager [None req-78352a7f-58ce-4abe-ae8f-a9bbbdc57b0b 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] [instance: d23ddbd4-8b5d-4bf5-a02d-3fb69b940770] Rebuilding instance
Jan 20 14:53:50 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e287 do_prune osdmap full prune enabled
Jan 20 14:53:50 compute-0 ceph-mon[74360]: osdmap e287: 3 total, 3 up, 3 in
Jan 20 14:53:50 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e288 e288: 3 total, 3 up, 3 in
Jan 20 14:53:50 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e288: 3 total, 3 up, 3 in
Jan 20 14:53:50 compute-0 nova_compute[250018]: 2026-01-20 14:53:50.217 250022 DEBUG nova.objects.instance [None req-78352a7f-58ce-4abe-ae8f-a9bbbdc57b0b 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] Lazy-loading 'trusted_certs' on Instance uuid d23ddbd4-8b5d-4bf5-a02d-3fb69b940770 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:53:50 compute-0 nova_compute[250018]: 2026-01-20 14:53:50.236 250022 DEBUG nova.compute.manager [None req-78352a7f-58ce-4abe-ae8f-a9bbbdc57b0b 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] [instance: d23ddbd4-8b5d-4bf5-a02d-3fb69b940770] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:53:50 compute-0 nova_compute[250018]: 2026-01-20 14:53:50.294 250022 DEBUG nova.objects.instance [None req-78352a7f-58ce-4abe-ae8f-a9bbbdc57b0b 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] Lazy-loading 'pci_requests' on Instance uuid d23ddbd4-8b5d-4bf5-a02d-3fb69b940770 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:53:50 compute-0 nova_compute[250018]: 2026-01-20 14:53:50.307 250022 DEBUG nova.objects.instance [None req-78352a7f-58ce-4abe-ae8f-a9bbbdc57b0b 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] Lazy-loading 'pci_devices' on Instance uuid d23ddbd4-8b5d-4bf5-a02d-3fb69b940770 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:53:50 compute-0 nova_compute[250018]: 2026-01-20 14:53:50.323 250022 DEBUG nova.objects.instance [None req-78352a7f-58ce-4abe-ae8f-a9bbbdc57b0b 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] Lazy-loading 'resources' on Instance uuid d23ddbd4-8b5d-4bf5-a02d-3fb69b940770 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:53:50 compute-0 nova_compute[250018]: 2026-01-20 14:53:50.336 250022 DEBUG nova.objects.instance [None req-78352a7f-58ce-4abe-ae8f-a9bbbdc57b0b 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] Lazy-loading 'migration_context' on Instance uuid d23ddbd4-8b5d-4bf5-a02d-3fb69b940770 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:53:50 compute-0 nova_compute[250018]: 2026-01-20 14:53:50.351 250022 DEBUG nova.objects.instance [None req-78352a7f-58ce-4abe-ae8f-a9bbbdc57b0b 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] [instance: d23ddbd4-8b5d-4bf5-a02d-3fb69b940770] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Jan 20 14:53:50 compute-0 nova_compute[250018]: 2026-01-20 14:53:50.355 250022 DEBUG nova.virt.libvirt.driver [None req-78352a7f-58ce-4abe-ae8f-a9bbbdc57b0b 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] [instance: d23ddbd4-8b5d-4bf5-a02d-3fb69b940770] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Jan 20 14:53:50 compute-0 nova_compute[250018]: 2026-01-20 14:53:50.424 250022 DEBUG nova.compute.manager [req-cd7fdee8-17fa-4090-8a45-4622e0283754 req-f2cf4971-5bdb-4718-8df3-f1572f9b906a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Received event network-vif-plugged-87b0cab5-af2f-4440-8f58-840860a23f68 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:53:50 compute-0 nova_compute[250018]: 2026-01-20 14:53:50.424 250022 DEBUG oslo_concurrency.lockutils [req-cd7fdee8-17fa-4090-8a45-4622e0283754 req-f2cf4971-5bdb-4718-8df3-f1572f9b906a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "7f5cfffe-c1dc-4b00-844e-0fb35b340f44-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:53:50 compute-0 nova_compute[250018]: 2026-01-20 14:53:50.425 250022 DEBUG oslo_concurrency.lockutils [req-cd7fdee8-17fa-4090-8a45-4622e0283754 req-f2cf4971-5bdb-4718-8df3-f1572f9b906a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "7f5cfffe-c1dc-4b00-844e-0fb35b340f44-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:53:50 compute-0 nova_compute[250018]: 2026-01-20 14:53:50.425 250022 DEBUG oslo_concurrency.lockutils [req-cd7fdee8-17fa-4090-8a45-4622e0283754 req-f2cf4971-5bdb-4718-8df3-f1572f9b906a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "7f5cfffe-c1dc-4b00-844e-0fb35b340f44-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:53:50 compute-0 nova_compute[250018]: 2026-01-20 14:53:50.425 250022 DEBUG nova.compute.manager [req-cd7fdee8-17fa-4090-8a45-4622e0283754 req-f2cf4971-5bdb-4718-8df3-f1572f9b906a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] No waiting events found dispatching network-vif-plugged-87b0cab5-af2f-4440-8f58-840860a23f68 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:53:50 compute-0 nova_compute[250018]: 2026-01-20 14:53:50.425 250022 WARNING nova.compute.manager [req-cd7fdee8-17fa-4090-8a45-4622e0283754 req-f2cf4971-5bdb-4718-8df3-f1572f9b906a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Received unexpected event network-vif-plugged-87b0cab5-af2f-4440-8f58-840860a23f68 for instance with vm_state stopped and task_state None.
Jan 20 14:53:50 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:53:50 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:53:50 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:53:50.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:53:51 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:53:51 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:53:51 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:53:51.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:53:51 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e288 do_prune osdmap full prune enabled
Jan 20 14:53:51 compute-0 ceph-mon[74360]: pgmap v2019: 321 pgs: 321 active+clean; 871 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 943 KiB/s rd, 3.7 MiB/s wr, 137 op/s
Jan 20 14:53:51 compute-0 ceph-mon[74360]: osdmap e288: 3 total, 3 up, 3 in
Jan 20 14:53:51 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/2500595433' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:53:51 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e289 e289: 3 total, 3 up, 3 in
Jan 20 14:53:51 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e289: 3 total, 3 up, 3 in
Jan 20 14:53:51 compute-0 sudo[321524]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:53:51 compute-0 sudo[321524]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:53:51 compute-0 sudo[321524]: pam_unix(sudo:session): session closed for user root
Jan 20 14:53:51 compute-0 sudo[321549]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:53:51 compute-0 sudo[321549]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:53:51 compute-0 sudo[321549]: pam_unix(sudo:session): session closed for user root
Jan 20 14:53:51 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2022: 321 pgs: 321 active+clean; 899 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 4.2 MiB/s wr, 93 op/s
Jan 20 14:53:52 compute-0 ceph-mon[74360]: osdmap e289: 3 total, 3 up, 3 in
Jan 20 14:53:52 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/3735164888' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:53:52 compute-0 ceph-mon[74360]: pgmap v2022: 321 pgs: 321 active+clean; 899 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 4.2 MiB/s wr, 93 op/s
Jan 20 14:53:52 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/2119220250' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:53:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:53:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:53:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:53:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:53:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:53:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:53:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Optimize plan auto_2026-01-20_14:53:52
Jan 20 14:53:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 14:53:52 compute-0 ceph-mgr[74653]: [balancer INFO root] do_upmap
Jan 20 14:53:52 compute-0 ceph-mgr[74653]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.meta', 'default.rgw.log', 'volumes', 'default.rgw.control', 'default.rgw.meta', 'cephfs.cephfs.data', '.rgw.root', 'images', 'backups', 'vms']
Jan 20 14:53:52 compute-0 ceph-mgr[74653]: [balancer INFO root] prepared 0/10 changes
Jan 20 14:53:52 compute-0 kernel: tap3dbcd2bc-10 (unregistering): left promiscuous mode
Jan 20 14:53:52 compute-0 NetworkManager[48960]: <info>  [1768920832.6637] device (tap3dbcd2bc-10): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 14:53:52 compute-0 nova_compute[250018]: 2026-01-20 14:53:52.678 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:53:52 compute-0 ovn_controller[148666]: 2026-01-20T14:53:52Z|00422|binding|INFO|Releasing lport 3dbcd2bc-10d2-4550-a4f5-4c4c9effa54b from this chassis (sb_readonly=0)
Jan 20 14:53:52 compute-0 ovn_controller[148666]: 2026-01-20T14:53:52Z|00423|binding|INFO|Setting lport 3dbcd2bc-10d2-4550-a4f5-4c4c9effa54b down in Southbound
Jan 20 14:53:52 compute-0 ovn_controller[148666]: 2026-01-20T14:53:52Z|00424|binding|INFO|Removing iface tap3dbcd2bc-10 ovn-installed in OVS
Jan 20 14:53:52 compute-0 nova_compute[250018]: 2026-01-20 14:53:52.680 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:53:52 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:53:52.687 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f9:1a:06 10.100.0.4'], port_security=['fa:16:3e:f9:1a:06 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'd23ddbd4-8b5d-4bf5-a02d-3fb69b940770', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-16cc2fb6-cda7-431a-ae22-fc6920fbbe4e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0ad54030e5cc477e939e073b52024ec4', 'neutron:revision_number': '4', 'neutron:security_group_ids': '809158a5-5df2-4e61-8536-596fb1ff7657', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.197'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=42647ce1-15eb-4208-a167-10e96fd5deda, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=3dbcd2bc-10d2-4550-a4f5-4c4c9effa54b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:53:52 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:53:52.690 160071 INFO neutron.agent.ovn.metadata.agent [-] Port 3dbcd2bc-10d2-4550-a4f5-4c4c9effa54b in datapath 16cc2fb6-cda7-431a-ae22-fc6920fbbe4e unbound from our chassis
Jan 20 14:53:52 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:53:52.693 160071 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 16cc2fb6-cda7-431a-ae22-fc6920fbbe4e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 14:53:52 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:53:52.694 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[bdc478a1-978e-4e28-9e78-f1f778c29dba]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:53:52 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:53:52.694 160071 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-16cc2fb6-cda7-431a-ae22-fc6920fbbe4e namespace which is not needed anymore
Jan 20 14:53:52 compute-0 nova_compute[250018]: 2026-01-20 14:53:52.698 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:53:52 compute-0 systemd[1]: machine-qemu\x2d56\x2dinstance\x2d00000079.scope: Deactivated successfully.
Jan 20 14:53:52 compute-0 systemd[1]: machine-qemu\x2d56\x2dinstance\x2d00000079.scope: Consumed 14.728s CPU time.
Jan 20 14:53:52 compute-0 systemd-machined[216401]: Machine qemu-56-instance-00000079 terminated.
Jan 20 14:53:52 compute-0 neutron-haproxy-ovnmeta-16cc2fb6-cda7-431a-ae22-fc6920fbbe4e[321276]: [NOTICE]   (321280) : haproxy version is 2.8.14-c23fe91
Jan 20 14:53:52 compute-0 neutron-haproxy-ovnmeta-16cc2fb6-cda7-431a-ae22-fc6920fbbe4e[321276]: [NOTICE]   (321280) : path to executable is /usr/sbin/haproxy
Jan 20 14:53:52 compute-0 neutron-haproxy-ovnmeta-16cc2fb6-cda7-431a-ae22-fc6920fbbe4e[321276]: [WARNING]  (321280) : Exiting Master process...
Jan 20 14:53:52 compute-0 neutron-haproxy-ovnmeta-16cc2fb6-cda7-431a-ae22-fc6920fbbe4e[321276]: [ALERT]    (321280) : Current worker (321282) exited with code 143 (Terminated)
Jan 20 14:53:52 compute-0 neutron-haproxy-ovnmeta-16cc2fb6-cda7-431a-ae22-fc6920fbbe4e[321276]: [WARNING]  (321280) : All workers exited. Exiting... (0)
Jan 20 14:53:52 compute-0 systemd[1]: libpod-4231d3abd38327ded950bc4facad9a096271d1b6b8bb2df3785b8b67da66161c.scope: Deactivated successfully.
Jan 20 14:53:52 compute-0 podman[321601]: 2026-01-20 14:53:52.832751562 +0000 UTC m=+0.049114892 container died 4231d3abd38327ded950bc4facad9a096271d1b6b8bb2df3785b8b67da66161c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-16cc2fb6-cda7-431a-ae22-fc6920fbbe4e, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202)
Jan 20 14:53:52 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4231d3abd38327ded950bc4facad9a096271d1b6b8bb2df3785b8b67da66161c-userdata-shm.mount: Deactivated successfully.
Jan 20 14:53:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-95367cdc5b47cd70cd5fe89e2570a904b89739da7bf0f49dd022f741e67c71cf-merged.mount: Deactivated successfully.
Jan 20 14:53:52 compute-0 podman[321601]: 2026-01-20 14:53:52.873291003 +0000 UTC m=+0.089654313 container cleanup 4231d3abd38327ded950bc4facad9a096271d1b6b8bb2df3785b8b67da66161c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-16cc2fb6-cda7-431a-ae22-fc6920fbbe4e, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Jan 20 14:53:52 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:53:52 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:53:52 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:53:52.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:53:52 compute-0 systemd[1]: libpod-conmon-4231d3abd38327ded950bc4facad9a096271d1b6b8bb2df3785b8b67da66161c.scope: Deactivated successfully.
Jan 20 14:53:52 compute-0 nova_compute[250018]: 2026-01-20 14:53:52.900 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:53:52 compute-0 nova_compute[250018]: 2026-01-20 14:53:52.911 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:53:52 compute-0 podman[321633]: 2026-01-20 14:53:52.948674352 +0000 UTC m=+0.041628901 container remove 4231d3abd38327ded950bc4facad9a096271d1b6b8bb2df3785b8b67da66161c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-16cc2fb6-cda7-431a-ae22-fc6920fbbe4e, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 20 14:53:52 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:53:52.954 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[5ff89d44-8adf-4697-922c-0541a1d47e75]: (4, ('Tue Jan 20 02:53:52 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-16cc2fb6-cda7-431a-ae22-fc6920fbbe4e (4231d3abd38327ded950bc4facad9a096271d1b6b8bb2df3785b8b67da66161c)\n4231d3abd38327ded950bc4facad9a096271d1b6b8bb2df3785b8b67da66161c\nTue Jan 20 02:53:52 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-16cc2fb6-cda7-431a-ae22-fc6920fbbe4e (4231d3abd38327ded950bc4facad9a096271d1b6b8bb2df3785b8b67da66161c)\n4231d3abd38327ded950bc4facad9a096271d1b6b8bb2df3785b8b67da66161c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:53:52 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:53:52.956 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[fb97f938-2f56-4e2c-889d-e91aa6e82dff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:53:52 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:53:52.957 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap16cc2fb6-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:53:52 compute-0 nova_compute[250018]: 2026-01-20 14:53:52.958 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:53:52 compute-0 kernel: tap16cc2fb6-c0: left promiscuous mode
Jan 20 14:53:52 compute-0 nova_compute[250018]: 2026-01-20 14:53:52.986 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:53:52 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:53:52.988 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[0a3c05c5-869e-4553-b409-6dbd1b9506e8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:53:53 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:53:53.008 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[c7058ed8-5906-412d-afad-bbb682c6e787]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:53:53 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:53:53.009 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[5ae261da-3ccd-451a-be09-06603d803568]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:53:53 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:53:53.023 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[ccd754a1-4bfa-494a-8dd7-4389008ab39c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 674686, 'reachable_time': 29594, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 321664, 'error': None, 'target': 'ovnmeta-16cc2fb6-cda7-431a-ae22-fc6920fbbe4e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:53:53 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:53:53.025 160517 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-16cc2fb6-cda7-431a-ae22-fc6920fbbe4e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 14:53:53 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:53:53.025 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[9c374803-1481-4fad-b7b9-3fa85ef216e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:53:53 compute-0 systemd[1]: run-netns-ovnmeta\x2d16cc2fb6\x2dcda7\x2d431a\x2dae22\x2dfc6920fbbe4e.mount: Deactivated successfully.
Jan 20 14:53:53 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:53:53 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:53:53 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:53:53.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:53:53 compute-0 nova_compute[250018]: 2026-01-20 14:53:53.369 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:53:53 compute-0 nova_compute[250018]: 2026-01-20 14:53:53.374 250022 INFO nova.virt.libvirt.driver [None req-78352a7f-58ce-4abe-ae8f-a9bbbdc57b0b 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] [instance: d23ddbd4-8b5d-4bf5-a02d-3fb69b940770] Instance shutdown successfully after 3 seconds.
Jan 20 14:53:53 compute-0 nova_compute[250018]: 2026-01-20 14:53:53.379 250022 INFO nova.virt.libvirt.driver [-] [instance: d23ddbd4-8b5d-4bf5-a02d-3fb69b940770] Instance destroyed successfully.
Jan 20 14:53:53 compute-0 nova_compute[250018]: 2026-01-20 14:53:53.385 250022 INFO nova.virt.libvirt.driver [-] [instance: d23ddbd4-8b5d-4bf5-a02d-3fb69b940770] Instance destroyed successfully.
Jan 20 14:53:53 compute-0 nova_compute[250018]: 2026-01-20 14:53:53.386 250022 DEBUG nova.virt.libvirt.vif [None req-78352a7f-58ce-4abe-ae8f-a9bbbdc57b0b 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:53:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-ServerActionsV293TestJSON-server-592112959',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionsv293testjson-server-278330687',id=121,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBO86kZNzDPCASADxiWGymA6UJfVUXa5anZX6myQUgRsg5kfeB7NCF6546KW3Ot1GqbB4r4cbuWZGKPoJrxymBlCt9uzHjI477OxSlIci+EHESOU35e8Xbs8CtBj17r8ipw==',key_name='tempest-keypair-286575342',keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:53:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='0ad54030e5cc477e939e073b52024ec4',ramdisk_id='',reservation_id='r-c0xrc8pm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='26699514-f465-4b50-98b7-36f2cfc6a308',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk
='1',image_min_ram='0',owner_project_name='tempest-ServerActionsV293TestJSON-747539193',owner_user_name='tempest-ServerActionsV293TestJSON-747539193-project-member'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:53:49Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='15c455b119784bb9abe8e4774dadd01e',uuid=d23ddbd4-8b5d-4bf5-a02d-3fb69b940770,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3dbcd2bc-10d2-4550-a4f5-4c4c9effa54b", "address": "fa:16:3e:f9:1a:06", "network": {"id": "16cc2fb6-cda7-431a-ae22-fc6920fbbe4e", "bridge": "br-int", "label": "tempest-ServerActionsV293TestJSON-1739563670-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.197", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0ad54030e5cc477e939e073b52024ec4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3dbcd2bc-10", "ovs_interfaceid": "3dbcd2bc-10d2-4550-a4f5-4c4c9effa54b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 14:53:53 compute-0 nova_compute[250018]: 2026-01-20 14:53:53.386 250022 DEBUG nova.network.os_vif_util [None req-78352a7f-58ce-4abe-ae8f-a9bbbdc57b0b 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] Converting VIF {"id": "3dbcd2bc-10d2-4550-a4f5-4c4c9effa54b", "address": "fa:16:3e:f9:1a:06", "network": {"id": "16cc2fb6-cda7-431a-ae22-fc6920fbbe4e", "bridge": "br-int", "label": "tempest-ServerActionsV293TestJSON-1739563670-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.197", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0ad54030e5cc477e939e073b52024ec4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3dbcd2bc-10", "ovs_interfaceid": "3dbcd2bc-10d2-4550-a4f5-4c4c9effa54b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:53:53 compute-0 nova_compute[250018]: 2026-01-20 14:53:53.387 250022 DEBUG nova.network.os_vif_util [None req-78352a7f-58ce-4abe-ae8f-a9bbbdc57b0b 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f9:1a:06,bridge_name='br-int',has_traffic_filtering=True,id=3dbcd2bc-10d2-4550-a4f5-4c4c9effa54b,network=Network(16cc2fb6-cda7-431a-ae22-fc6920fbbe4e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3dbcd2bc-10') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:53:53 compute-0 nova_compute[250018]: 2026-01-20 14:53:53.387 250022 DEBUG os_vif [None req-78352a7f-58ce-4abe-ae8f-a9bbbdc57b0b 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f9:1a:06,bridge_name='br-int',has_traffic_filtering=True,id=3dbcd2bc-10d2-4550-a4f5-4c4c9effa54b,network=Network(16cc2fb6-cda7-431a-ae22-fc6920fbbe4e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3dbcd2bc-10') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 14:53:53 compute-0 nova_compute[250018]: 2026-01-20 14:53:53.389 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:53:53 compute-0 nova_compute[250018]: 2026-01-20 14:53:53.389 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3dbcd2bc-10, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:53:53 compute-0 nova_compute[250018]: 2026-01-20 14:53:53.390 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:53:53 compute-0 nova_compute[250018]: 2026-01-20 14:53:53.392 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:53:53 compute-0 nova_compute[250018]: 2026-01-20 14:53:53.395 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:53:53 compute-0 nova_compute[250018]: 2026-01-20 14:53:53.398 250022 INFO os_vif [None req-78352a7f-58ce-4abe-ae8f-a9bbbdc57b0b 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f9:1a:06,bridge_name='br-int',has_traffic_filtering=True,id=3dbcd2bc-10d2-4550-a4f5-4c4c9effa54b,network=Network(16cc2fb6-cda7-431a-ae22-fc6920fbbe4e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3dbcd2bc-10')
Jan 20 14:53:53 compute-0 nova_compute[250018]: 2026-01-20 14:53:53.622 250022 INFO nova.virt.libvirt.driver [None req-78352a7f-58ce-4abe-ae8f-a9bbbdc57b0b 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] [instance: d23ddbd4-8b5d-4bf5-a02d-3fb69b940770] Deleting instance files /var/lib/nova/instances/d23ddbd4-8b5d-4bf5-a02d-3fb69b940770_del
Jan 20 14:53:53 compute-0 nova_compute[250018]: 2026-01-20 14:53:53.623 250022 INFO nova.virt.libvirt.driver [None req-78352a7f-58ce-4abe-ae8f-a9bbbdc57b0b 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] [instance: d23ddbd4-8b5d-4bf5-a02d-3fb69b940770] Deletion of /var/lib/nova/instances/d23ddbd4-8b5d-4bf5-a02d-3fb69b940770_del complete
Jan 20 14:53:53 compute-0 nova_compute[250018]: 2026-01-20 14:53:53.707 250022 DEBUG nova.compute.manager [req-3dd054ff-e89d-4eaf-a722-6a1ca8f3419e req-261629e2-2e7e-4a5e-bcd3-3084a3d2099f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d23ddbd4-8b5d-4bf5-a02d-3fb69b940770] Received event network-vif-unplugged-3dbcd2bc-10d2-4550-a4f5-4c4c9effa54b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:53:53 compute-0 nova_compute[250018]: 2026-01-20 14:53:53.707 250022 DEBUG oslo_concurrency.lockutils [req-3dd054ff-e89d-4eaf-a722-6a1ca8f3419e req-261629e2-2e7e-4a5e-bcd3-3084a3d2099f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "d23ddbd4-8b5d-4bf5-a02d-3fb69b940770-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:53:53 compute-0 nova_compute[250018]: 2026-01-20 14:53:53.707 250022 DEBUG oslo_concurrency.lockutils [req-3dd054ff-e89d-4eaf-a722-6a1ca8f3419e req-261629e2-2e7e-4a5e-bcd3-3084a3d2099f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d23ddbd4-8b5d-4bf5-a02d-3fb69b940770-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:53:53 compute-0 nova_compute[250018]: 2026-01-20 14:53:53.708 250022 DEBUG oslo_concurrency.lockutils [req-3dd054ff-e89d-4eaf-a722-6a1ca8f3419e req-261629e2-2e7e-4a5e-bcd3-3084a3d2099f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d23ddbd4-8b5d-4bf5-a02d-3fb69b940770-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:53:53 compute-0 nova_compute[250018]: 2026-01-20 14:53:53.708 250022 DEBUG nova.compute.manager [req-3dd054ff-e89d-4eaf-a722-6a1ca8f3419e req-261629e2-2e7e-4a5e-bcd3-3084a3d2099f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d23ddbd4-8b5d-4bf5-a02d-3fb69b940770] No waiting events found dispatching network-vif-unplugged-3dbcd2bc-10d2-4550-a4f5-4c4c9effa54b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:53:53 compute-0 nova_compute[250018]: 2026-01-20 14:53:53.708 250022 WARNING nova.compute.manager [req-3dd054ff-e89d-4eaf-a722-6a1ca8f3419e req-261629e2-2e7e-4a5e-bcd3-3084a3d2099f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d23ddbd4-8b5d-4bf5-a02d-3fb69b940770] Received unexpected event network-vif-unplugged-3dbcd2bc-10d2-4550-a4f5-4c4c9effa54b for instance with vm_state active and task_state rebuilding.
Jan 20 14:53:53 compute-0 nova_compute[250018]: 2026-01-20 14:53:53.900 250022 WARNING nova.virt.libvirt.driver [None req-78352a7f-58ce-4abe-ae8f-a9bbbdc57b0b 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] [instance: d23ddbd4-8b5d-4bf5-a02d-3fb69b940770] During detach_volume, instance disappeared.: nova.exception.InstanceNotFound: Instance d23ddbd4-8b5d-4bf5-a02d-3fb69b940770 could not be found.
Jan 20 14:53:53 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2023: 321 pgs: 321 active+clean; 957 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 6.0 MiB/s rd, 11 MiB/s wr, 278 op/s
Jan 20 14:53:54 compute-0 nova_compute[250018]: 2026-01-20 14:53:54.423 250022 DEBUG nova.compute.manager [None req-78352a7f-58ce-4abe-ae8f-a9bbbdc57b0b 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] [instance: d23ddbd4-8b5d-4bf5-a02d-3fb69b940770] Preparing to wait for external event volume-reimaged-b687fb44-6160-427b-b91a-091715876a58 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 20 14:53:54 compute-0 nova_compute[250018]: 2026-01-20 14:53:54.424 250022 DEBUG oslo_concurrency.lockutils [None req-78352a7f-58ce-4abe-ae8f-a9bbbdc57b0b 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] Acquiring lock "d23ddbd4-8b5d-4bf5-a02d-3fb69b940770-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:53:54 compute-0 nova_compute[250018]: 2026-01-20 14:53:54.424 250022 DEBUG oslo_concurrency.lockutils [None req-78352a7f-58ce-4abe-ae8f-a9bbbdc57b0b 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] Lock "d23ddbd4-8b5d-4bf5-a02d-3fb69b940770-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:53:54 compute-0 nova_compute[250018]: 2026-01-20 14:53:54.424 250022 DEBUG oslo_concurrency.lockutils [None req-78352a7f-58ce-4abe-ae8f-a9bbbdc57b0b 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] Lock "d23ddbd4-8b5d-4bf5-a02d-3fb69b940770-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:53:54 compute-0 nova_compute[250018]: 2026-01-20 14:53:54.449 250022 DEBUG oslo_concurrency.lockutils [None req-469b4f3f-ba11-47de-b608-d3c23b30ada2 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Acquiring lock "refresh_cache-7f5cfffe-c1dc-4b00-844e-0fb35b340f44" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:53:54 compute-0 nova_compute[250018]: 2026-01-20 14:53:54.450 250022 DEBUG oslo_concurrency.lockutils [None req-469b4f3f-ba11-47de-b608-d3c23b30ada2 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Acquired lock "refresh_cache-7f5cfffe-c1dc-4b00-844e-0fb35b340f44" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:53:54 compute-0 nova_compute[250018]: 2026-01-20 14:53:54.450 250022 DEBUG nova.network.neutron [None req-469b4f3f-ba11-47de-b608-d3c23b30ada2 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:53:54 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:53:54 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:53:54 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:53:54 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:53:54.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:53:55 compute-0 ceph-mon[74360]: pgmap v2023: 321 pgs: 321 active+clean; 957 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 6.0 MiB/s rd, 11 MiB/s wr, 278 op/s
Jan 20 14:53:55 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/613345111' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:53:55 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:53:55 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:53:55 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:53:55.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:53:55 compute-0 sshd-session[321686]: Invalid user postgres from 157.245.78.139 port 37726
Jan 20 14:53:55 compute-0 sshd-session[321686]: Connection closed by invalid user postgres 157.245.78.139 port 37726 [preauth]
Jan 20 14:53:55 compute-0 nova_compute[250018]: 2026-01-20 14:53:55.781 250022 DEBUG nova.compute.manager [req-024efdf9-19af-436a-aa73-28cebb1b0801 req-3641b2d9-2c87-4c2b-aead-b5770c6f3102 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d23ddbd4-8b5d-4bf5-a02d-3fb69b940770] Received event network-vif-plugged-3dbcd2bc-10d2-4550-a4f5-4c4c9effa54b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:53:55 compute-0 nova_compute[250018]: 2026-01-20 14:53:55.781 250022 DEBUG oslo_concurrency.lockutils [req-024efdf9-19af-436a-aa73-28cebb1b0801 req-3641b2d9-2c87-4c2b-aead-b5770c6f3102 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "d23ddbd4-8b5d-4bf5-a02d-3fb69b940770-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:53:55 compute-0 nova_compute[250018]: 2026-01-20 14:53:55.782 250022 DEBUG oslo_concurrency.lockutils [req-024efdf9-19af-436a-aa73-28cebb1b0801 req-3641b2d9-2c87-4c2b-aead-b5770c6f3102 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d23ddbd4-8b5d-4bf5-a02d-3fb69b940770-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:53:55 compute-0 nova_compute[250018]: 2026-01-20 14:53:55.782 250022 DEBUG oslo_concurrency.lockutils [req-024efdf9-19af-436a-aa73-28cebb1b0801 req-3641b2d9-2c87-4c2b-aead-b5770c6f3102 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d23ddbd4-8b5d-4bf5-a02d-3fb69b940770-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:53:55 compute-0 nova_compute[250018]: 2026-01-20 14:53:55.782 250022 DEBUG nova.compute.manager [req-024efdf9-19af-436a-aa73-28cebb1b0801 req-3641b2d9-2c87-4c2b-aead-b5770c6f3102 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d23ddbd4-8b5d-4bf5-a02d-3fb69b940770] No event matching network-vif-plugged-3dbcd2bc-10d2-4550-a4f5-4c4c9effa54b in dict_keys([('volume-reimaged', 'b687fb44-6160-427b-b91a-091715876a58')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325
Jan 20 14:53:55 compute-0 nova_compute[250018]: 2026-01-20 14:53:55.782 250022 WARNING nova.compute.manager [req-024efdf9-19af-436a-aa73-28cebb1b0801 req-3641b2d9-2c87-4c2b-aead-b5770c6f3102 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d23ddbd4-8b5d-4bf5-a02d-3fb69b940770] Received unexpected event network-vif-plugged-3dbcd2bc-10d2-4550-a4f5-4c4c9effa54b for instance with vm_state active and task_state rebuilding.
Jan 20 14:53:55 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2024: 321 pgs: 321 active+clean; 979 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 7.5 MiB/s rd, 9.8 MiB/s wr, 322 op/s
Jan 20 14:53:56 compute-0 nova_compute[250018]: 2026-01-20 14:53:56.046 250022 DEBUG nova.network.neutron [None req-469b4f3f-ba11-47de-b608-d3c23b30ada2 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Updating instance_info_cache with network_info: [{"id": "87b0cab5-af2f-4440-8f58-840860a23f68", "address": "fa:16:3e:2b:79:1b", "network": {"id": "41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1445030024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.184", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3b1b7f5b4f84b5abbc401eb577c85c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap87b0cab5-af", "ovs_interfaceid": "87b0cab5-af2f-4440-8f58-840860a23f68", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:53:56 compute-0 nova_compute[250018]: 2026-01-20 14:53:56.067 250022 DEBUG oslo_concurrency.lockutils [None req-469b4f3f-ba11-47de-b608-d3c23b30ada2 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Releasing lock "refresh_cache-7f5cfffe-c1dc-4b00-844e-0fb35b340f44" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:53:56 compute-0 nova_compute[250018]: 2026-01-20 14:53:56.175 250022 DEBUG nova.virt.libvirt.driver [None req-469b4f3f-ba11-47de-b608-d3c23b30ada2 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Starting migrate_disk_and_power_off migrate_disk_and_power_off /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11511
Jan 20 14:53:56 compute-0 nova_compute[250018]: 2026-01-20 14:53:56.176 250022 DEBUG nova.virt.libvirt.volume.remotefs [None req-469b4f3f-ba11-47de-b608-d3c23b30ada2 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Creating file /var/lib/nova/instances/7f5cfffe-c1dc-4b00-844e-0fb35b340f44/dee5919ff75248d6b7debe6f073ed59c.tmp on remote host 192.168.122.101 create_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:79
Jan 20 14:53:56 compute-0 nova_compute[250018]: 2026-01-20 14:53:56.176 250022 DEBUG oslo_concurrency.processutils [None req-469b4f3f-ba11-47de-b608-d3c23b30ada2 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.101 touch /var/lib/nova/instances/7f5cfffe-c1dc-4b00-844e-0fb35b340f44/dee5919ff75248d6b7debe6f073ed59c.tmp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:53:56 compute-0 nova_compute[250018]: 2026-01-20 14:53:56.689 250022 DEBUG oslo_concurrency.processutils [None req-469b4f3f-ba11-47de-b608-d3c23b30ada2 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] CMD "ssh -o BatchMode=yes 192.168.122.101 touch /var/lib/nova/instances/7f5cfffe-c1dc-4b00-844e-0fb35b340f44/dee5919ff75248d6b7debe6f073ed59c.tmp" returned: 1 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:53:56 compute-0 nova_compute[250018]: 2026-01-20 14:53:56.690 250022 DEBUG oslo_concurrency.processutils [None req-469b4f3f-ba11-47de-b608-d3c23b30ada2 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] 'ssh -o BatchMode=yes 192.168.122.101 touch /var/lib/nova/instances/7f5cfffe-c1dc-4b00-844e-0fb35b340f44/dee5919ff75248d6b7debe6f073ed59c.tmp' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Jan 20 14:53:56 compute-0 nova_compute[250018]: 2026-01-20 14:53:56.690 250022 DEBUG nova.virt.libvirt.volume.remotefs [None req-469b4f3f-ba11-47de-b608-d3c23b30ada2 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Creating directory /var/lib/nova/instances/7f5cfffe-c1dc-4b00-844e-0fb35b340f44 on remote host 192.168.122.101 create_dir /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:91
Jan 20 14:53:56 compute-0 nova_compute[250018]: 2026-01-20 14:53:56.691 250022 DEBUG oslo_concurrency.processutils [None req-469b4f3f-ba11-47de-b608-d3c23b30ada2 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.101 mkdir -p /var/lib/nova/instances/7f5cfffe-c1dc-4b00-844e-0fb35b340f44 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:53:56 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:53:56 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:53:56 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:53:56.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:53:56 compute-0 nova_compute[250018]: 2026-01-20 14:53:56.938 250022 DEBUG oslo_concurrency.processutils [None req-469b4f3f-ba11-47de-b608-d3c23b30ada2 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] CMD "ssh -o BatchMode=yes 192.168.122.101 mkdir -p /var/lib/nova/instances/7f5cfffe-c1dc-4b00-844e-0fb35b340f44" returned: 0 in 0.248s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:53:56 compute-0 nova_compute[250018]: 2026-01-20 14:53:56.943 250022 INFO nova.virt.libvirt.driver [None req-469b4f3f-ba11-47de-b608-d3c23b30ada2 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Instance already shutdown.
Jan 20 14:53:56 compute-0 nova_compute[250018]: 2026-01-20 14:53:56.951 250022 INFO nova.virt.libvirt.driver [-] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Instance destroyed successfully.
Jan 20 14:53:56 compute-0 nova_compute[250018]: 2026-01-20 14:53:56.953 250022 DEBUG nova.virt.libvirt.vif [None req-469b4f3f-ba11-47de-b608-d3c23b30ada2 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:51:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1654627482',display_name='tempest-ServerActionsTestOtherB-server-1654627482',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1654627482',id=118,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBO99uJ9+FwgjxRb/9u+f3Mj9/VKSDM+OKd66Ygsg8lEO+7bGpDEQrC5BIaSV+Na5YF+3DqUwLNmAYSN9IkTSGbRPw5y8813A+KsiNHebrpnZ7oReyT+5/zNQYafCHVAfGA==',key_name='tempest-keypair-302882914',keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:52:05Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='b3b1b7f5b4f84b5abbc401eb577c85c0',ramdisk_id='',reservation_id='r-2ulk0sfq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='stopped',owner_project_name='tempest-ServerActionsTestOtherB-1136521362',owner_user_name='tempest-ServerActionsTestOtherB-1136521362-project-member'},tags=<?>,task_state='resize_migrating',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T14:53:53Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='215db37373dc4ae5a75cbd6866f471da',uuid=7f5cfffe-c1dc-4b00-844e-0fb35b340f44,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "87b0cab5-af2f-4440-8f58-840860a23f68", "address": "fa:16:3e:2b:79:1b", "network": {"id": "41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1445030024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.184", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherB-1445030024-network", "vif_mac": "fa:16:3e:2b:79:1b"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3b1b7f5b4f84b5abbc401eb577c85c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap87b0cab5-af", "ovs_interfaceid": "87b0cab5-af2f-4440-8f58-840860a23f68", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 14:53:56 compute-0 nova_compute[250018]: 2026-01-20 14:53:56.954 250022 DEBUG nova.network.os_vif_util [None req-469b4f3f-ba11-47de-b608-d3c23b30ada2 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Converting VIF {"id": "87b0cab5-af2f-4440-8f58-840860a23f68", "address": "fa:16:3e:2b:79:1b", "network": {"id": "41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1445030024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.184", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherB-1445030024-network", "vif_mac": "fa:16:3e:2b:79:1b"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3b1b7f5b4f84b5abbc401eb577c85c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap87b0cab5-af", "ovs_interfaceid": "87b0cab5-af2f-4440-8f58-840860a23f68", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:53:56 compute-0 nova_compute[250018]: 2026-01-20 14:53:56.955 250022 DEBUG nova.network.os_vif_util [None req-469b4f3f-ba11-47de-b608-d3c23b30ada2 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2b:79:1b,bridge_name='br-int',has_traffic_filtering=True,id=87b0cab5-af2f-4440-8f58-840860a23f68,network=Network(41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap87b0cab5-af') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:53:56 compute-0 nova_compute[250018]: 2026-01-20 14:53:56.956 250022 DEBUG os_vif [None req-469b4f3f-ba11-47de-b608-d3c23b30ada2 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2b:79:1b,bridge_name='br-int',has_traffic_filtering=True,id=87b0cab5-af2f-4440-8f58-840860a23f68,network=Network(41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap87b0cab5-af') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 14:53:56 compute-0 nova_compute[250018]: 2026-01-20 14:53:56.959 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:53:56 compute-0 nova_compute[250018]: 2026-01-20 14:53:56.959 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap87b0cab5-af, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:53:56 compute-0 nova_compute[250018]: 2026-01-20 14:53:56.962 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:53:56 compute-0 nova_compute[250018]: 2026-01-20 14:53:56.964 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:53:56 compute-0 nova_compute[250018]: 2026-01-20 14:53:56.967 250022 INFO os_vif [None req-469b4f3f-ba11-47de-b608-d3c23b30ada2 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2b:79:1b,bridge_name='br-int',has_traffic_filtering=True,id=87b0cab5-af2f-4440-8f58-840860a23f68,network=Network(41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap87b0cab5-af')
Jan 20 14:53:56 compute-0 nova_compute[250018]: 2026-01-20 14:53:56.972 250022 DEBUG nova.virt.libvirt.driver [None req-469b4f3f-ba11-47de-b608-d3c23b30ada2 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] skipping disk for instance-00000076 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:53:56 compute-0 nova_compute[250018]: 2026-01-20 14:53:56.973 250022 DEBUG nova.virt.libvirt.driver [None req-469b4f3f-ba11-47de-b608-d3c23b30ada2 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] skipping disk for instance-00000076 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:53:57 compute-0 ceph-mon[74360]: pgmap v2024: 321 pgs: 321 active+clean; 979 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 7.5 MiB/s rd, 9.8 MiB/s wr, 322 op/s
Jan 20 14:53:57 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:53:57 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:53:57 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:53:57.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:53:57 compute-0 nova_compute[250018]: 2026-01-20 14:53:57.160 250022 DEBUG neutronclient.v2_0.client [None req-469b4f3f-ba11-47de-b608-d3c23b30ada2 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Error message: {"NeutronError": {"type": "PortBindingNotFound", "message": "Binding for port 87b0cab5-af2f-4440-8f58-840860a23f68 for host compute-1.ctlplane.example.com could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262
Jan 20 14:53:57 compute-0 nova_compute[250018]: 2026-01-20 14:53:57.283 250022 DEBUG oslo_concurrency.lockutils [None req-469b4f3f-ba11-47de-b608-d3c23b30ada2 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Acquiring lock "7f5cfffe-c1dc-4b00-844e-0fb35b340f44-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:53:57 compute-0 nova_compute[250018]: 2026-01-20 14:53:57.283 250022 DEBUG oslo_concurrency.lockutils [None req-469b4f3f-ba11-47de-b608-d3c23b30ada2 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Lock "7f5cfffe-c1dc-4b00-844e-0fb35b340f44-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:53:57 compute-0 nova_compute[250018]: 2026-01-20 14:53:57.284 250022 DEBUG oslo_concurrency.lockutils [None req-469b4f3f-ba11-47de-b608-d3c23b30ada2 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Lock "7f5cfffe-c1dc-4b00-844e-0fb35b340f44-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:53:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 14:53:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:53:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 14:53:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:53:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:53:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:53:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:53:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:53:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:53:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:53:57 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2025: 321 pgs: 321 active+clean; 956 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 8.3 MiB/s rd, 8.4 MiB/s wr, 344 op/s
Jan 20 14:53:58 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/433259130' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:53:58 compute-0 nova_compute[250018]: 2026-01-20 14:53:58.371 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:53:58 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:53:58 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:53:58 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:53:58.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:53:59 compute-0 nova_compute[250018]: 2026-01-20 14:53:59.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:53:59 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:53:59 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:53:59 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:53:59.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:53:59 compute-0 ceph-mon[74360]: pgmap v2025: 321 pgs: 321 active+clean; 956 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 8.3 MiB/s rd, 8.4 MiB/s wr, 344 op/s
Jan 20 14:53:59 compute-0 nova_compute[250018]: 2026-01-20 14:53:59.432 250022 DEBUG nova.compute.manager [req-3cf1c8fb-7ccd-4908-8ac4-22779402973f req-537fae91-771c-49c0-89be-305c6a3f71f1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Received event network-changed-87b0cab5-af2f-4440-8f58-840860a23f68 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:53:59 compute-0 nova_compute[250018]: 2026-01-20 14:53:59.432 250022 DEBUG nova.compute.manager [req-3cf1c8fb-7ccd-4908-8ac4-22779402973f req-537fae91-771c-49c0-89be-305c6a3f71f1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Refreshing instance network info cache due to event network-changed-87b0cab5-af2f-4440-8f58-840860a23f68. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:53:59 compute-0 nova_compute[250018]: 2026-01-20 14:53:59.433 250022 DEBUG oslo_concurrency.lockutils [req-3cf1c8fb-7ccd-4908-8ac4-22779402973f req-537fae91-771c-49c0-89be-305c6a3f71f1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-7f5cfffe-c1dc-4b00-844e-0fb35b340f44" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:53:59 compute-0 nova_compute[250018]: 2026-01-20 14:53:59.433 250022 DEBUG oslo_concurrency.lockutils [req-3cf1c8fb-7ccd-4908-8ac4-22779402973f req-537fae91-771c-49c0-89be-305c6a3f71f1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-7f5cfffe-c1dc-4b00-844e-0fb35b340f44" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:53:59 compute-0 nova_compute[250018]: 2026-01-20 14:53:59.433 250022 DEBUG nova.network.neutron [req-3cf1c8fb-7ccd-4908-8ac4-22779402973f req-537fae91-771c-49c0-89be-305c6a3f71f1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Refreshing network info cache for port 87b0cab5-af2f-4440-8f58-840860a23f68 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:53:59 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:53:59 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e289 do_prune osdmap full prune enabled
Jan 20 14:53:59 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e290 e290: 3 total, 3 up, 3 in
Jan 20 14:53:59 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e290: 3 total, 3 up, 3 in
Jan 20 14:53:59 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2027: 321 pgs: 321 active+clean; 932 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 6.9 MiB/s rd, 5.4 MiB/s wr, 329 op/s
Jan 20 14:54:00 compute-0 ceph-mon[74360]: osdmap e290: 3 total, 3 up, 3 in
Jan 20 14:54:00 compute-0 ceph-mon[74360]: pgmap v2027: 321 pgs: 321 active+clean; 932 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 6.9 MiB/s rd, 5.4 MiB/s wr, 329 op/s
Jan 20 14:54:00 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:54:00 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:54:00 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:54:00.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:54:00 compute-0 nova_compute[250018]: 2026-01-20 14:54:00.938 250022 DEBUG nova.compute.manager [req-78352a7f-58ce-4abe-ae8f-a9bbbdc57b0b req-68d5e042-edbf-426a-87a8-dc78423f602f 0571fdb3621349bf94eb2c6bf0dfc036 d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d23ddbd4-8b5d-4bf5-a02d-3fb69b940770] Received event volume-reimaged-b687fb44-6160-427b-b91a-091715876a58 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:54:00 compute-0 nova_compute[250018]: 2026-01-20 14:54:00.938 250022 DEBUG oslo_concurrency.lockutils [req-78352a7f-58ce-4abe-ae8f-a9bbbdc57b0b req-68d5e042-edbf-426a-87a8-dc78423f602f 0571fdb3621349bf94eb2c6bf0dfc036 d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "d23ddbd4-8b5d-4bf5-a02d-3fb69b940770-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:54:00 compute-0 nova_compute[250018]: 2026-01-20 14:54:00.939 250022 DEBUG oslo_concurrency.lockutils [req-78352a7f-58ce-4abe-ae8f-a9bbbdc57b0b req-68d5e042-edbf-426a-87a8-dc78423f602f 0571fdb3621349bf94eb2c6bf0dfc036 d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d23ddbd4-8b5d-4bf5-a02d-3fb69b940770-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:54:00 compute-0 nova_compute[250018]: 2026-01-20 14:54:00.939 250022 DEBUG oslo_concurrency.lockutils [req-78352a7f-58ce-4abe-ae8f-a9bbbdc57b0b req-68d5e042-edbf-426a-87a8-dc78423f602f 0571fdb3621349bf94eb2c6bf0dfc036 d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d23ddbd4-8b5d-4bf5-a02d-3fb69b940770-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:54:00 compute-0 nova_compute[250018]: 2026-01-20 14:54:00.939 250022 DEBUG nova.compute.manager [req-78352a7f-58ce-4abe-ae8f-a9bbbdc57b0b req-68d5e042-edbf-426a-87a8-dc78423f602f 0571fdb3621349bf94eb2c6bf0dfc036 d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d23ddbd4-8b5d-4bf5-a02d-3fb69b940770] Processing event volume-reimaged-b687fb44-6160-427b-b91a-091715876a58 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 20 14:54:00 compute-0 nova_compute[250018]: 2026-01-20 14:54:00.940 250022 DEBUG nova.compute.manager [None req-78352a7f-58ce-4abe-ae8f-a9bbbdc57b0b 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] [instance: d23ddbd4-8b5d-4bf5-a02d-3fb69b940770] Instance event wait completed in 5 seconds for volume-reimaged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 14:54:01 compute-0 nova_compute[250018]: 2026-01-20 14:54:01.090 250022 INFO nova.virt.block_device [None req-78352a7f-58ce-4abe-ae8f-a9bbbdc57b0b 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] [instance: d23ddbd4-8b5d-4bf5-a02d-3fb69b940770] Booting with volume b687fb44-6160-427b-b91a-091715876a58 at /dev/vda
Jan 20 14:54:01 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:54:01 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:54:01 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:54:01.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:54:01 compute-0 nova_compute[250018]: 2026-01-20 14:54:01.287 250022 DEBUG os_brick.utils [None req-78352a7f-58ce-4abe-ae8f-a9bbbdc57b0b 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Jan 20 14:54:01 compute-0 nova_compute[250018]: 2026-01-20 14:54:01.289 268150 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:54:01 compute-0 nova_compute[250018]: 2026-01-20 14:54:01.307 268150 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.018s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:54:01 compute-0 nova_compute[250018]: 2026-01-20 14:54:01.308 268150 DEBUG oslo.privsep.daemon [-] privsep: reply[728f924e-fdeb-4da1-91b7-7efd48b8182e]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:54:01 compute-0 nova_compute[250018]: 2026-01-20 14:54:01.308 268150 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:54:01 compute-0 nova_compute[250018]: 2026-01-20 14:54:01.315 268150 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:54:01 compute-0 nova_compute[250018]: 2026-01-20 14:54:01.316 268150 DEBUG oslo.privsep.daemon [-] privsep: reply[524344c8-b1f1-4129-81a6-97a276cf6959]: (4, ('InitiatorName=iqn.1994-05.com.redhat:228389a1f17e', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:54:01 compute-0 nova_compute[250018]: 2026-01-20 14:54:01.317 268150 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:54:01 compute-0 nova_compute[250018]: 2026-01-20 14:54:01.324 268150 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:54:01 compute-0 nova_compute[250018]: 2026-01-20 14:54:01.324 268150 DEBUG oslo.privsep.daemon [-] privsep: reply[6379bc46-a4a0-4301-b779-2a1ae3e16d9e]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:54:01 compute-0 nova_compute[250018]: 2026-01-20 14:54:01.325 268150 DEBUG oslo.privsep.daemon [-] privsep: reply[9ff709bd-c8da-4344-bf81-1c04bebf4c2c]: (4, '35085f33-1a27-41e3-805d-02c7ac6a1d7f') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:54:01 compute-0 nova_compute[250018]: 2026-01-20 14:54:01.326 250022 DEBUG oslo_concurrency.processutils [None req-78352a7f-58ce-4abe-ae8f-a9bbbdc57b0b 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:54:01 compute-0 nova_compute[250018]: 2026-01-20 14:54:01.352 250022 DEBUG oslo_concurrency.processutils [None req-78352a7f-58ce-4abe-ae8f-a9bbbdc57b0b 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] CMD "nvme version" returned: 0 in 0.026s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:54:01 compute-0 nova_compute[250018]: 2026-01-20 14:54:01.355 250022 DEBUG nova.network.neutron [req-3cf1c8fb-7ccd-4908-8ac4-22779402973f req-537fae91-771c-49c0-89be-305c6a3f71f1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Updated VIF entry in instance network info cache for port 87b0cab5-af2f-4440-8f58-840860a23f68. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 14:54:01 compute-0 nova_compute[250018]: 2026-01-20 14:54:01.356 250022 DEBUG nova.network.neutron [req-3cf1c8fb-7ccd-4908-8ac4-22779402973f req-537fae91-771c-49c0-89be-305c6a3f71f1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Updating instance_info_cache with network_info: [{"id": "87b0cab5-af2f-4440-8f58-840860a23f68", "address": "fa:16:3e:2b:79:1b", "network": {"id": "41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1445030024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.184", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3b1b7f5b4f84b5abbc401eb577c85c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap87b0cab5-af", "ovs_interfaceid": "87b0cab5-af2f-4440-8f58-840860a23f68", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:54:01 compute-0 nova_compute[250018]: 2026-01-20 14:54:01.358 250022 DEBUG os_brick.initiator.connectors.lightos [None req-78352a7f-58ce-4abe-ae8f-a9bbbdc57b0b 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Jan 20 14:54:01 compute-0 nova_compute[250018]: 2026-01-20 14:54:01.358 250022 DEBUG os_brick.initiator.connectors.lightos [None req-78352a7f-58ce-4abe-ae8f-a9bbbdc57b0b 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Jan 20 14:54:01 compute-0 nova_compute[250018]: 2026-01-20 14:54:01.359 250022 DEBUG os_brick.initiator.connectors.lightos [None req-78352a7f-58ce-4abe-ae8f-a9bbbdc57b0b 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Jan 20 14:54:01 compute-0 nova_compute[250018]: 2026-01-20 14:54:01.359 250022 DEBUG os_brick.utils [None req-78352a7f-58ce-4abe-ae8f-a9bbbdc57b0b 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] <== get_connector_properties: return (70ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:228389a1f17e', 'do_local_attach': False, 'nvme_hostid': '5350774e-8b5e-4dba-80a9-92d405981c1d', 'system uuid': '35085f33-1a27-41e3-805d-02c7ac6a1d7f', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Jan 20 14:54:01 compute-0 nova_compute[250018]: 2026-01-20 14:54:01.360 250022 DEBUG nova.virt.block_device [None req-78352a7f-58ce-4abe-ae8f-a9bbbdc57b0b 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] [instance: d23ddbd4-8b5d-4bf5-a02d-3fb69b940770] Updating existing volume attachment record: 140b4d04-9654-453e-bf6a-b47873d35adc _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Jan 20 14:54:01 compute-0 nova_compute[250018]: 2026-01-20 14:54:01.373 250022 DEBUG oslo_concurrency.lockutils [req-3cf1c8fb-7ccd-4908-8ac4-22779402973f req-537fae91-771c-49c0-89be-305c6a3f71f1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-7f5cfffe-c1dc-4b00-844e-0fb35b340f44" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:54:01 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e290 do_prune osdmap full prune enabled
Jan 20 14:54:01 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e291 e291: 3 total, 3 up, 3 in
Jan 20 14:54:01 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e291: 3 total, 3 up, 3 in
Jan 20 14:54:01 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2029: 321 pgs: 321 active+clean; 888 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 6.1 MiB/s rd, 1.2 MiB/s wr, 300 op/s
Jan 20 14:54:01 compute-0 nova_compute[250018]: 2026-01-20 14:54:01.965 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:54:02 compute-0 nova_compute[250018]: 2026-01-20 14:54:02.333 250022 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768920827.3324661, 7f5cfffe-c1dc-4b00-844e-0fb35b340f44 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:54:02 compute-0 nova_compute[250018]: 2026-01-20 14:54:02.334 250022 INFO nova.compute.manager [-] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] VM Stopped (Lifecycle Event)
Jan 20 14:54:02 compute-0 nova_compute[250018]: 2026-01-20 14:54:02.366 250022 DEBUG nova.virt.libvirt.driver [None req-78352a7f-58ce-4abe-ae8f-a9bbbdc57b0b 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] [instance: d23ddbd4-8b5d-4bf5-a02d-3fb69b940770] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 14:54:02 compute-0 nova_compute[250018]: 2026-01-20 14:54:02.366 250022 INFO nova.virt.libvirt.driver [None req-78352a7f-58ce-4abe-ae8f-a9bbbdc57b0b 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] [instance: d23ddbd4-8b5d-4bf5-a02d-3fb69b940770] Creating image(s)
Jan 20 14:54:02 compute-0 nova_compute[250018]: 2026-01-20 14:54:02.367 250022 DEBUG nova.virt.libvirt.driver [None req-78352a7f-58ce-4abe-ae8f-a9bbbdc57b0b 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] [instance: d23ddbd4-8b5d-4bf5-a02d-3fb69b940770] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Jan 20 14:54:02 compute-0 nova_compute[250018]: 2026-01-20 14:54:02.367 250022 DEBUG nova.virt.libvirt.driver [None req-78352a7f-58ce-4abe-ae8f-a9bbbdc57b0b 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] [instance: d23ddbd4-8b5d-4bf5-a02d-3fb69b940770] Ensure instance console log exists: /var/lib/nova/instances/d23ddbd4-8b5d-4bf5-a02d-3fb69b940770/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 14:54:02 compute-0 nova_compute[250018]: 2026-01-20 14:54:02.367 250022 DEBUG oslo_concurrency.lockutils [None req-78352a7f-58ce-4abe-ae8f-a9bbbdc57b0b 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:54:02 compute-0 nova_compute[250018]: 2026-01-20 14:54:02.368 250022 DEBUG oslo_concurrency.lockutils [None req-78352a7f-58ce-4abe-ae8f-a9bbbdc57b0b 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:54:02 compute-0 nova_compute[250018]: 2026-01-20 14:54:02.368 250022 DEBUG oslo_concurrency.lockutils [None req-78352a7f-58ce-4abe-ae8f-a9bbbdc57b0b 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:54:02 compute-0 nova_compute[250018]: 2026-01-20 14:54:02.371 250022 DEBUG nova.virt.libvirt.driver [None req-78352a7f-58ce-4abe-ae8f-a9bbbdc57b0b 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] [instance: d23ddbd4-8b5d-4bf5-a02d-3fb69b940770] Start _get_guest_xml network_info=[{"id": "3dbcd2bc-10d2-4550-a4f5-4c4c9effa54b", "address": "fa:16:3e:f9:1a:06", "network": {"id": "16cc2fb6-cda7-431a-ae22-fc6920fbbe4e", "bridge": "br-int", "label": "tempest-ServerActionsV293TestJSON-1739563670-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.197", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0ad54030e5cc477e939e073b52024ec4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3dbcd2bc-10", "ovs_interfaceid": "3dbcd2bc-10d2-4550-a4f5-4c4c9effa54b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:22:02Z,direct_url=<?>,disk_format='qcow2',id=26699514-f465-4b50-98b7-36f2cfc6a308,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:04Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'mount_device': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'attachment_id': '140b4d04-9654-453e-bf6a-b47873d35adc', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-b687fb44-6160-427b-b91a-091715876a58', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'b687fb44-6160-427b-b91a-091715876a58', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': 'd23ddbd4-8b5d-4bf5-a02d-3fb69b940770', 'attached_at': '', 'detached_at': '', 'volume_id': 'b687fb44-6160-427b-b91a-091715876a58', 'serial': 'b687fb44-6160-427b-b91a-091715876a58'}, 'disk_bus': 'virtio', 'guest_format': None, 'delete_on_termination': True, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 14:54:02 compute-0 nova_compute[250018]: 2026-01-20 14:54:02.375 250022 DEBUG nova.compute.manager [None req-566862a1-caca-4ae5-a8cd-ad990635f3d3 - - - - - -] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:54:02 compute-0 nova_compute[250018]: 2026-01-20 14:54:02.380 250022 DEBUG nova.compute.manager [None req-566862a1-caca-4ae5-a8cd-ad990635f3d3 - - - - - -] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: stopped, current task_state: resize_finish, current DB power_state: 4, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:54:02 compute-0 nova_compute[250018]: 2026-01-20 14:54:02.383 250022 WARNING nova.virt.libvirt.driver [None req-78352a7f-58ce-4abe-ae8f-a9bbbdc57b0b 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Jan 20 14:54:02 compute-0 nova_compute[250018]: 2026-01-20 14:54:02.390 250022 DEBUG nova.virt.libvirt.host [None req-78352a7f-58ce-4abe-ae8f-a9bbbdc57b0b 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 14:54:02 compute-0 nova_compute[250018]: 2026-01-20 14:54:02.390 250022 DEBUG nova.virt.libvirt.host [None req-78352a7f-58ce-4abe-ae8f-a9bbbdc57b0b 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 14:54:02 compute-0 nova_compute[250018]: 2026-01-20 14:54:02.396 250022 DEBUG nova.virt.libvirt.host [None req-78352a7f-58ce-4abe-ae8f-a9bbbdc57b0b 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 14:54:02 compute-0 nova_compute[250018]: 2026-01-20 14:54:02.397 250022 DEBUG nova.virt.libvirt.host [None req-78352a7f-58ce-4abe-ae8f-a9bbbdc57b0b 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 14:54:02 compute-0 nova_compute[250018]: 2026-01-20 14:54:02.399 250022 DEBUG nova.virt.libvirt.driver [None req-78352a7f-58ce-4abe-ae8f-a9bbbdc57b0b 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 14:54:02 compute-0 nova_compute[250018]: 2026-01-20 14:54:02.399 250022 DEBUG nova.virt.hardware [None req-78352a7f-58ce-4abe-ae8f-a9bbbdc57b0b 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:22:02Z,direct_url=<?>,disk_format='qcow2',id=26699514-f465-4b50-98b7-36f2cfc6a308,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:04Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 14:54:02 compute-0 nova_compute[250018]: 2026-01-20 14:54:02.400 250022 DEBUG nova.virt.hardware [None req-78352a7f-58ce-4abe-ae8f-a9bbbdc57b0b 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 14:54:02 compute-0 nova_compute[250018]: 2026-01-20 14:54:02.400 250022 DEBUG nova.virt.hardware [None req-78352a7f-58ce-4abe-ae8f-a9bbbdc57b0b 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 14:54:02 compute-0 nova_compute[250018]: 2026-01-20 14:54:02.400 250022 DEBUG nova.virt.hardware [None req-78352a7f-58ce-4abe-ae8f-a9bbbdc57b0b 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 14:54:02 compute-0 nova_compute[250018]: 2026-01-20 14:54:02.400 250022 DEBUG nova.virt.hardware [None req-78352a7f-58ce-4abe-ae8f-a9bbbdc57b0b 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 14:54:02 compute-0 nova_compute[250018]: 2026-01-20 14:54:02.401 250022 DEBUG nova.virt.hardware [None req-78352a7f-58ce-4abe-ae8f-a9bbbdc57b0b 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 14:54:02 compute-0 nova_compute[250018]: 2026-01-20 14:54:02.401 250022 DEBUG nova.virt.hardware [None req-78352a7f-58ce-4abe-ae8f-a9bbbdc57b0b 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 14:54:02 compute-0 nova_compute[250018]: 2026-01-20 14:54:02.401 250022 DEBUG nova.virt.hardware [None req-78352a7f-58ce-4abe-ae8f-a9bbbdc57b0b 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 14:54:02 compute-0 nova_compute[250018]: 2026-01-20 14:54:02.401 250022 DEBUG nova.virt.hardware [None req-78352a7f-58ce-4abe-ae8f-a9bbbdc57b0b 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 14:54:02 compute-0 nova_compute[250018]: 2026-01-20 14:54:02.402 250022 DEBUG nova.virt.hardware [None req-78352a7f-58ce-4abe-ae8f-a9bbbdc57b0b 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 14:54:02 compute-0 nova_compute[250018]: 2026-01-20 14:54:02.402 250022 DEBUG nova.virt.hardware [None req-78352a7f-58ce-4abe-ae8f-a9bbbdc57b0b 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 14:54:02 compute-0 nova_compute[250018]: 2026-01-20 14:54:02.402 250022 DEBUG nova.objects.instance [None req-78352a7f-58ce-4abe-ae8f-a9bbbdc57b0b 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] Lazy-loading 'vcpu_model' on Instance uuid d23ddbd4-8b5d-4bf5-a02d-3fb69b940770 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:54:02 compute-0 nova_compute[250018]: 2026-01-20 14:54:02.410 250022 INFO nova.compute.manager [None req-566862a1-caca-4ae5-a8cd-ad990635f3d3 - - - - - -] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] During the sync_power process the instance has moved from host compute-1.ctlplane.example.com to host compute-0.ctlplane.example.com
Jan 20 14:54:02 compute-0 nova_compute[250018]: 2026-01-20 14:54:02.454 250022 DEBUG nova.storage.rbd_utils [None req-78352a7f-58ce-4abe-ae8f-a9bbbdc57b0b 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] rbd image d23ddbd4-8b5d-4bf5-a02d-3fb69b940770_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:54:02 compute-0 nova_compute[250018]: 2026-01-20 14:54:02.459 250022 DEBUG oslo_concurrency.processutils [None req-78352a7f-58ce-4abe-ae8f-a9bbbdc57b0b 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:54:02 compute-0 ceph-mon[74360]: osdmap e291: 3 total, 3 up, 3 in
Jan 20 14:54:02 compute-0 ceph-mon[74360]: pgmap v2029: 321 pgs: 321 active+clean; 888 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 6.1 MiB/s rd, 1.2 MiB/s wr, 300 op/s
Jan 20 14:54:02 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/3505285473' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:54:02 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/2647610533' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:54:02 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:54:02 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:54:02 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:54:02.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:54:02 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:54:02 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3954144284' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:54:02 compute-0 nova_compute[250018]: 2026-01-20 14:54:02.981 250022 DEBUG oslo_concurrency.processutils [None req-78352a7f-58ce-4abe-ae8f-a9bbbdc57b0b 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:54:03 compute-0 nova_compute[250018]: 2026-01-20 14:54:03.018 250022 DEBUG nova.virt.libvirt.vif [None req-78352a7f-58ce-4abe-ae8f-a9bbbdc57b0b 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-20T14:53:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-ServerActionsV293TestJSON-server-592112959',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionsv293testjson-server-278330687',id=121,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBO86kZNzDPCASADxiWGymA6UJfVUXa5anZX6myQUgRsg5kfeB7NCF6546KW3Ot1GqbB4r4cbuWZGKPoJrxymBlCt9uzHjI477OxSlIci+EHESOU35e8Xbs8CtBj17r8ipw==',key_name='tempest-keypair-286575342',keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:53:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='0ad54030e5cc477e939e073b52024ec4',ramdisk_id='',reservation_id='r-c0xrc8pm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='26699514-f465-4b50-98b7-36f2cfc6a308',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q
35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsV293TestJSON-747539193',owner_user_name='tempest-ServerActionsV293TestJSON-747539193-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:54:01Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='15c455b119784bb9abe8e4774dadd01e',uuid=d23ddbd4-8b5d-4bf5-a02d-3fb69b940770,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3dbcd2bc-10d2-4550-a4f5-4c4c9effa54b", "address": "fa:16:3e:f9:1a:06", "network": {"id": "16cc2fb6-cda7-431a-ae22-fc6920fbbe4e", "bridge": "br-int", "label": "tempest-ServerActionsV293TestJSON-1739563670-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.197", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0ad54030e5cc477e939e073b52024ec4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3dbcd2bc-10", "ovs_interfaceid": "3dbcd2bc-10d2-4550-a4f5-4c4c9effa54b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 14:54:03 compute-0 nova_compute[250018]: 2026-01-20 14:54:03.018 250022 DEBUG nova.network.os_vif_util [None req-78352a7f-58ce-4abe-ae8f-a9bbbdc57b0b 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] Converting VIF {"id": "3dbcd2bc-10d2-4550-a4f5-4c4c9effa54b", "address": "fa:16:3e:f9:1a:06", "network": {"id": "16cc2fb6-cda7-431a-ae22-fc6920fbbe4e", "bridge": "br-int", "label": "tempest-ServerActionsV293TestJSON-1739563670-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.197", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0ad54030e5cc477e939e073b52024ec4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3dbcd2bc-10", "ovs_interfaceid": "3dbcd2bc-10d2-4550-a4f5-4c4c9effa54b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:54:03 compute-0 nova_compute[250018]: 2026-01-20 14:54:03.019 250022 DEBUG nova.network.os_vif_util [None req-78352a7f-58ce-4abe-ae8f-a9bbbdc57b0b 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f9:1a:06,bridge_name='br-int',has_traffic_filtering=True,id=3dbcd2bc-10d2-4550-a4f5-4c4c9effa54b,network=Network(16cc2fb6-cda7-431a-ae22-fc6920fbbe4e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3dbcd2bc-10') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:54:03 compute-0 nova_compute[250018]: 2026-01-20 14:54:03.022 250022 DEBUG nova.virt.libvirt.driver [None req-78352a7f-58ce-4abe-ae8f-a9bbbdc57b0b 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] [instance: d23ddbd4-8b5d-4bf5-a02d-3fb69b940770] End _get_guest_xml xml=<domain type="kvm">
Jan 20 14:54:03 compute-0 nova_compute[250018]:   <uuid>d23ddbd4-8b5d-4bf5-a02d-3fb69b940770</uuid>
Jan 20 14:54:03 compute-0 nova_compute[250018]:   <name>instance-00000079</name>
Jan 20 14:54:03 compute-0 nova_compute[250018]:   <memory>131072</memory>
Jan 20 14:54:03 compute-0 nova_compute[250018]:   <vcpu>1</vcpu>
Jan 20 14:54:03 compute-0 nova_compute[250018]:   <metadata>
Jan 20 14:54:03 compute-0 nova_compute[250018]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 14:54:03 compute-0 nova_compute[250018]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 14:54:03 compute-0 nova_compute[250018]:       <nova:name>tempest-ServerActionsV293TestJSON-server-592112959</nova:name>
Jan 20 14:54:03 compute-0 nova_compute[250018]:       <nova:creationTime>2026-01-20 14:54:02</nova:creationTime>
Jan 20 14:54:03 compute-0 nova_compute[250018]:       <nova:flavor name="m1.nano">
Jan 20 14:54:03 compute-0 nova_compute[250018]:         <nova:memory>128</nova:memory>
Jan 20 14:54:03 compute-0 nova_compute[250018]:         <nova:disk>1</nova:disk>
Jan 20 14:54:03 compute-0 nova_compute[250018]:         <nova:swap>0</nova:swap>
Jan 20 14:54:03 compute-0 nova_compute[250018]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 14:54:03 compute-0 nova_compute[250018]:         <nova:vcpus>1</nova:vcpus>
Jan 20 14:54:03 compute-0 nova_compute[250018]:       </nova:flavor>
Jan 20 14:54:03 compute-0 nova_compute[250018]:       <nova:owner>
Jan 20 14:54:03 compute-0 nova_compute[250018]:         <nova:user uuid="15c455b119784bb9abe8e4774dadd01e">tempest-ServerActionsV293TestJSON-747539193-project-member</nova:user>
Jan 20 14:54:03 compute-0 nova_compute[250018]:         <nova:project uuid="0ad54030e5cc477e939e073b52024ec4">tempest-ServerActionsV293TestJSON-747539193</nova:project>
Jan 20 14:54:03 compute-0 nova_compute[250018]:       </nova:owner>
Jan 20 14:54:03 compute-0 nova_compute[250018]:       <nova:ports>
Jan 20 14:54:03 compute-0 nova_compute[250018]:         <nova:port uuid="3dbcd2bc-10d2-4550-a4f5-4c4c9effa54b">
Jan 20 14:54:03 compute-0 nova_compute[250018]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Jan 20 14:54:03 compute-0 nova_compute[250018]:         </nova:port>
Jan 20 14:54:03 compute-0 nova_compute[250018]:       </nova:ports>
Jan 20 14:54:03 compute-0 nova_compute[250018]:     </nova:instance>
Jan 20 14:54:03 compute-0 nova_compute[250018]:   </metadata>
Jan 20 14:54:03 compute-0 nova_compute[250018]:   <sysinfo type="smbios">
Jan 20 14:54:03 compute-0 nova_compute[250018]:     <system>
Jan 20 14:54:03 compute-0 nova_compute[250018]:       <entry name="manufacturer">RDO</entry>
Jan 20 14:54:03 compute-0 nova_compute[250018]:       <entry name="product">OpenStack Compute</entry>
Jan 20 14:54:03 compute-0 nova_compute[250018]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 14:54:03 compute-0 nova_compute[250018]:       <entry name="serial">d23ddbd4-8b5d-4bf5-a02d-3fb69b940770</entry>
Jan 20 14:54:03 compute-0 nova_compute[250018]:       <entry name="uuid">d23ddbd4-8b5d-4bf5-a02d-3fb69b940770</entry>
Jan 20 14:54:03 compute-0 nova_compute[250018]:       <entry name="family">Virtual Machine</entry>
Jan 20 14:54:03 compute-0 nova_compute[250018]:     </system>
Jan 20 14:54:03 compute-0 nova_compute[250018]:   </sysinfo>
Jan 20 14:54:03 compute-0 nova_compute[250018]:   <os>
Jan 20 14:54:03 compute-0 nova_compute[250018]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 14:54:03 compute-0 nova_compute[250018]:     <boot dev="hd"/>
Jan 20 14:54:03 compute-0 nova_compute[250018]:     <smbios mode="sysinfo"/>
Jan 20 14:54:03 compute-0 nova_compute[250018]:   </os>
Jan 20 14:54:03 compute-0 nova_compute[250018]:   <features>
Jan 20 14:54:03 compute-0 nova_compute[250018]:     <acpi/>
Jan 20 14:54:03 compute-0 nova_compute[250018]:     <apic/>
Jan 20 14:54:03 compute-0 nova_compute[250018]:     <vmcoreinfo/>
Jan 20 14:54:03 compute-0 nova_compute[250018]:   </features>
Jan 20 14:54:03 compute-0 nova_compute[250018]:   <clock offset="utc">
Jan 20 14:54:03 compute-0 nova_compute[250018]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 14:54:03 compute-0 nova_compute[250018]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 14:54:03 compute-0 nova_compute[250018]:     <timer name="hpet" present="no"/>
Jan 20 14:54:03 compute-0 nova_compute[250018]:   </clock>
Jan 20 14:54:03 compute-0 nova_compute[250018]:   <cpu mode="custom" match="exact">
Jan 20 14:54:03 compute-0 nova_compute[250018]:     <model>Nehalem</model>
Jan 20 14:54:03 compute-0 nova_compute[250018]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 14:54:03 compute-0 nova_compute[250018]:   </cpu>
Jan 20 14:54:03 compute-0 nova_compute[250018]:   <devices>
Jan 20 14:54:03 compute-0 nova_compute[250018]:     <disk type="network" device="cdrom">
Jan 20 14:54:03 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 14:54:03 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/d23ddbd4-8b5d-4bf5-a02d-3fb69b940770_disk.config">
Jan 20 14:54:03 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:54:03 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:54:03 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:54:03 compute-0 nova_compute[250018]:       </source>
Jan 20 14:54:03 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 14:54:03 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:54:03 compute-0 nova_compute[250018]:       </auth>
Jan 20 14:54:03 compute-0 nova_compute[250018]:       <target dev="sda" bus="sata"/>
Jan 20 14:54:03 compute-0 nova_compute[250018]:     </disk>
Jan 20 14:54:03 compute-0 nova_compute[250018]:     <disk type="network" device="disk">
Jan 20 14:54:03 compute-0 nova_compute[250018]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 20 14:54:03 compute-0 nova_compute[250018]:       <source protocol="rbd" name="volumes/volume-b687fb44-6160-427b-b91a-091715876a58">
Jan 20 14:54:03 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:54:03 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:54:03 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:54:03 compute-0 nova_compute[250018]:       </source>
Jan 20 14:54:03 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 14:54:03 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:54:03 compute-0 nova_compute[250018]:       </auth>
Jan 20 14:54:03 compute-0 nova_compute[250018]:       <target dev="vda" bus="virtio"/>
Jan 20 14:54:03 compute-0 nova_compute[250018]:       <serial>b687fb44-6160-427b-b91a-091715876a58</serial>
Jan 20 14:54:03 compute-0 nova_compute[250018]:     </disk>
Jan 20 14:54:03 compute-0 nova_compute[250018]:     <interface type="ethernet">
Jan 20 14:54:03 compute-0 nova_compute[250018]:       <mac address="fa:16:3e:f9:1a:06"/>
Jan 20 14:54:03 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 14:54:03 compute-0 nova_compute[250018]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 14:54:03 compute-0 nova_compute[250018]:       <mtu size="1442"/>
Jan 20 14:54:03 compute-0 nova_compute[250018]:       <target dev="tap3dbcd2bc-10"/>
Jan 20 14:54:03 compute-0 nova_compute[250018]:     </interface>
Jan 20 14:54:03 compute-0 nova_compute[250018]:     <serial type="pty">
Jan 20 14:54:03 compute-0 nova_compute[250018]:       <log file="/var/lib/nova/instances/d23ddbd4-8b5d-4bf5-a02d-3fb69b940770/console.log" append="off"/>
Jan 20 14:54:03 compute-0 nova_compute[250018]:     </serial>
Jan 20 14:54:03 compute-0 nova_compute[250018]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 14:54:03 compute-0 nova_compute[250018]:     <video>
Jan 20 14:54:03 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 14:54:03 compute-0 nova_compute[250018]:     </video>
Jan 20 14:54:03 compute-0 nova_compute[250018]:     <input type="tablet" bus="usb"/>
Jan 20 14:54:03 compute-0 nova_compute[250018]:     <rng model="virtio">
Jan 20 14:54:03 compute-0 nova_compute[250018]:       <backend model="random">/dev/urandom</backend>
Jan 20 14:54:03 compute-0 nova_compute[250018]:     </rng>
Jan 20 14:54:03 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root"/>
Jan 20 14:54:03 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:54:03 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:54:03 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:54:03 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:54:03 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:54:03 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:54:03 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:54:03 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:54:03 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:54:03 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:54:03 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:54:03 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:54:03 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:54:03 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:54:03 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:54:03 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:54:03 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:54:03 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:54:03 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:54:03 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:54:03 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:54:03 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:54:03 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:54:03 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:54:03 compute-0 nova_compute[250018]:     <controller type="usb" index="0"/>
Jan 20 14:54:03 compute-0 nova_compute[250018]:     <memballoon model="virtio">
Jan 20 14:54:03 compute-0 nova_compute[250018]:       <stats period="10"/>
Jan 20 14:54:03 compute-0 nova_compute[250018]:     </memballoon>
Jan 20 14:54:03 compute-0 nova_compute[250018]:   </devices>
Jan 20 14:54:03 compute-0 nova_compute[250018]: </domain>
Jan 20 14:54:03 compute-0 nova_compute[250018]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 14:54:03 compute-0 nova_compute[250018]: 2026-01-20 14:54:03.023 250022 DEBUG nova.virt.libvirt.vif [None req-78352a7f-58ce-4abe-ae8f-a9bbbdc57b0b 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-20T14:53:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-ServerActionsV293TestJSON-server-592112959',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionsv293testjson-server-278330687',id=121,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBO86kZNzDPCASADxiWGymA6UJfVUXa5anZX6myQUgRsg5kfeB7NCF6546KW3Ot1GqbB4r4cbuWZGKPoJrxymBlCt9uzHjI477OxSlIci+EHESOU35e8Xbs8CtBj17r8ipw==',key_name='tempest-keypair-286575342',keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:53:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='0ad54030e5cc477e939e073b52024ec4',ramdisk_id='',reservation_id='r-c0xrc8pm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='26699514-f465-4b50-98b7-36f2cfc6a308',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q
35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsV293TestJSON-747539193',owner_user_name='tempest-ServerActionsV293TestJSON-747539193-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:54:01Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='15c455b119784bb9abe8e4774dadd01e',uuid=d23ddbd4-8b5d-4bf5-a02d-3fb69b940770,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3dbcd2bc-10d2-4550-a4f5-4c4c9effa54b", "address": "fa:16:3e:f9:1a:06", "network": {"id": "16cc2fb6-cda7-431a-ae22-fc6920fbbe4e", "bridge": "br-int", "label": "tempest-ServerActionsV293TestJSON-1739563670-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.197", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0ad54030e5cc477e939e073b52024ec4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3dbcd2bc-10", "ovs_interfaceid": "3dbcd2bc-10d2-4550-a4f5-4c4c9effa54b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 14:54:03 compute-0 nova_compute[250018]: 2026-01-20 14:54:03.023 250022 DEBUG nova.network.os_vif_util [None req-78352a7f-58ce-4abe-ae8f-a9bbbdc57b0b 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] Converting VIF {"id": "3dbcd2bc-10d2-4550-a4f5-4c4c9effa54b", "address": "fa:16:3e:f9:1a:06", "network": {"id": "16cc2fb6-cda7-431a-ae22-fc6920fbbe4e", "bridge": "br-int", "label": "tempest-ServerActionsV293TestJSON-1739563670-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.197", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0ad54030e5cc477e939e073b52024ec4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3dbcd2bc-10", "ovs_interfaceid": "3dbcd2bc-10d2-4550-a4f5-4c4c9effa54b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:54:03 compute-0 nova_compute[250018]: 2026-01-20 14:54:03.024 250022 DEBUG nova.network.os_vif_util [None req-78352a7f-58ce-4abe-ae8f-a9bbbdc57b0b 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f9:1a:06,bridge_name='br-int',has_traffic_filtering=True,id=3dbcd2bc-10d2-4550-a4f5-4c4c9effa54b,network=Network(16cc2fb6-cda7-431a-ae22-fc6920fbbe4e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3dbcd2bc-10') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:54:03 compute-0 nova_compute[250018]: 2026-01-20 14:54:03.024 250022 DEBUG os_vif [None req-78352a7f-58ce-4abe-ae8f-a9bbbdc57b0b 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f9:1a:06,bridge_name='br-int',has_traffic_filtering=True,id=3dbcd2bc-10d2-4550-a4f5-4c4c9effa54b,network=Network(16cc2fb6-cda7-431a-ae22-fc6920fbbe4e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3dbcd2bc-10') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 14:54:03 compute-0 nova_compute[250018]: 2026-01-20 14:54:03.025 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:54:03 compute-0 nova_compute[250018]: 2026-01-20 14:54:03.026 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:54:03 compute-0 nova_compute[250018]: 2026-01-20 14:54:03.026 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:54:03 compute-0 nova_compute[250018]: 2026-01-20 14:54:03.028 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:54:03 compute-0 nova_compute[250018]: 2026-01-20 14:54:03.029 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3dbcd2bc-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:54:03 compute-0 nova_compute[250018]: 2026-01-20 14:54:03.029 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3dbcd2bc-10, col_values=(('external_ids', {'iface-id': '3dbcd2bc-10d2-4550-a4f5-4c4c9effa54b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f9:1a:06', 'vm-uuid': 'd23ddbd4-8b5d-4bf5-a02d-3fb69b940770'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:54:03 compute-0 nova_compute[250018]: 2026-01-20 14:54:03.030 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:54:03 compute-0 NetworkManager[48960]: <info>  [1768920843.0313] manager: (tap3dbcd2bc-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/217)
Jan 20 14:54:03 compute-0 nova_compute[250018]: 2026-01-20 14:54:03.033 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:54:03 compute-0 nova_compute[250018]: 2026-01-20 14:54:03.037 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:54:03 compute-0 nova_compute[250018]: 2026-01-20 14:54:03.037 250022 INFO os_vif [None req-78352a7f-58ce-4abe-ae8f-a9bbbdc57b0b 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f9:1a:06,bridge_name='br-int',has_traffic_filtering=True,id=3dbcd2bc-10d2-4550-a4f5-4c4c9effa54b,network=Network(16cc2fb6-cda7-431a-ae22-fc6920fbbe4e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3dbcd2bc-10')
Jan 20 14:54:03 compute-0 nova_compute[250018]: 2026-01-20 14:54:03.124 250022 DEBUG nova.virt.libvirt.driver [None req-78352a7f-58ce-4abe-ae8f-a9bbbdc57b0b 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:54:03 compute-0 nova_compute[250018]: 2026-01-20 14:54:03.125 250022 DEBUG nova.virt.libvirt.driver [None req-78352a7f-58ce-4abe-ae8f-a9bbbdc57b0b 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:54:03 compute-0 nova_compute[250018]: 2026-01-20 14:54:03.125 250022 DEBUG nova.virt.libvirt.driver [None req-78352a7f-58ce-4abe-ae8f-a9bbbdc57b0b 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] No VIF found with MAC fa:16:3e:f9:1a:06, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 14:54:03 compute-0 nova_compute[250018]: 2026-01-20 14:54:03.125 250022 INFO nova.virt.libvirt.driver [None req-78352a7f-58ce-4abe-ae8f-a9bbbdc57b0b 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] [instance: d23ddbd4-8b5d-4bf5-a02d-3fb69b940770] Using config drive
Jan 20 14:54:03 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:54:03 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:54:03 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:54:03.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:54:03 compute-0 nova_compute[250018]: 2026-01-20 14:54:03.149 250022 DEBUG nova.storage.rbd_utils [None req-78352a7f-58ce-4abe-ae8f-a9bbbdc57b0b 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] rbd image d23ddbd4-8b5d-4bf5-a02d-3fb69b940770_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:54:03 compute-0 nova_compute[250018]: 2026-01-20 14:54:03.171 250022 DEBUG nova.objects.instance [None req-78352a7f-58ce-4abe-ae8f-a9bbbdc57b0b 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] Lazy-loading 'ec2_ids' on Instance uuid d23ddbd4-8b5d-4bf5-a02d-3fb69b940770 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:54:03 compute-0 nova_compute[250018]: 2026-01-20 14:54:03.211 250022 DEBUG nova.objects.instance [None req-78352a7f-58ce-4abe-ae8f-a9bbbdc57b0b 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] Lazy-loading 'keypairs' on Instance uuid d23ddbd4-8b5d-4bf5-a02d-3fb69b940770 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:54:03 compute-0 nova_compute[250018]: 2026-01-20 14:54:03.373 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:54:03 compute-0 nova_compute[250018]: 2026-01-20 14:54:03.577 250022 INFO nova.virt.libvirt.driver [None req-78352a7f-58ce-4abe-ae8f-a9bbbdc57b0b 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] [instance: d23ddbd4-8b5d-4bf5-a02d-3fb69b940770] Creating config drive at /var/lib/nova/instances/d23ddbd4-8b5d-4bf5-a02d-3fb69b940770/disk.config
Jan 20 14:54:03 compute-0 nova_compute[250018]: 2026-01-20 14:54:03.586 250022 DEBUG oslo_concurrency.processutils [None req-78352a7f-58ce-4abe-ae8f-a9bbbdc57b0b 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d23ddbd4-8b5d-4bf5-a02d-3fb69b940770/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgofezd3c execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:54:03 compute-0 nova_compute[250018]: 2026-01-20 14:54:03.721 250022 DEBUG oslo_concurrency.processutils [None req-78352a7f-58ce-4abe-ae8f-a9bbbdc57b0b 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d23ddbd4-8b5d-4bf5-a02d-3fb69b940770/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgofezd3c" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:54:03 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/1618823478' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:54:03 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3954144284' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:54:03 compute-0 nova_compute[250018]: 2026-01-20 14:54:03.754 250022 DEBUG nova.storage.rbd_utils [None req-78352a7f-58ce-4abe-ae8f-a9bbbdc57b0b 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] rbd image d23ddbd4-8b5d-4bf5-a02d-3fb69b940770_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:54:03 compute-0 nova_compute[250018]: 2026-01-20 14:54:03.758 250022 DEBUG oslo_concurrency.processutils [None req-78352a7f-58ce-4abe-ae8f-a9bbbdc57b0b 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d23ddbd4-8b5d-4bf5-a02d-3fb69b940770/disk.config d23ddbd4-8b5d-4bf5-a02d-3fb69b940770_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:54:03 compute-0 nova_compute[250018]: 2026-01-20 14:54:03.921 250022 DEBUG oslo_concurrency.processutils [None req-78352a7f-58ce-4abe-ae8f-a9bbbdc57b0b 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d23ddbd4-8b5d-4bf5-a02d-3fb69b940770/disk.config d23ddbd4-8b5d-4bf5-a02d-3fb69b940770_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.163s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:54:03 compute-0 nova_compute[250018]: 2026-01-20 14:54:03.922 250022 INFO nova.virt.libvirt.driver [None req-78352a7f-58ce-4abe-ae8f-a9bbbdc57b0b 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] [instance: d23ddbd4-8b5d-4bf5-a02d-3fb69b940770] Deleting local config drive /var/lib/nova/instances/d23ddbd4-8b5d-4bf5-a02d-3fb69b940770/disk.config because it was imported into RBD.
Jan 20 14:54:03 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2030: 321 pgs: 321 active+clean; 881 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 6.0 MiB/s rd, 1.5 MiB/s wr, 262 op/s
Jan 20 14:54:03 compute-0 kernel: tap3dbcd2bc-10: entered promiscuous mode
Jan 20 14:54:03 compute-0 NetworkManager[48960]: <info>  [1768920843.9709] manager: (tap3dbcd2bc-10): new Tun device (/org/freedesktop/NetworkManager/Devices/218)
Jan 20 14:54:04 compute-0 systemd-udevd[321813]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:54:04 compute-0 nova_compute[250018]: 2026-01-20 14:54:04.010 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:54:04 compute-0 ovn_controller[148666]: 2026-01-20T14:54:04Z|00425|binding|INFO|Claiming lport 3dbcd2bc-10d2-4550-a4f5-4c4c9effa54b for this chassis.
Jan 20 14:54:04 compute-0 ovn_controller[148666]: 2026-01-20T14:54:04Z|00426|binding|INFO|3dbcd2bc-10d2-4550-a4f5-4c4c9effa54b: Claiming fa:16:3e:f9:1a:06 10.100.0.4
Jan 20 14:54:04 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:54:04.016 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f9:1a:06 10.100.0.4'], port_security=['fa:16:3e:f9:1a:06 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'd23ddbd4-8b5d-4bf5-a02d-3fb69b940770', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-16cc2fb6-cda7-431a-ae22-fc6920fbbe4e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0ad54030e5cc477e939e073b52024ec4', 'neutron:revision_number': '5', 'neutron:security_group_ids': '809158a5-5df2-4e61-8536-596fb1ff7657', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.197'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=42647ce1-15eb-4208-a167-10e96fd5deda, chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=3dbcd2bc-10d2-4550-a4f5-4c4c9effa54b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:54:04 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:54:04.017 160071 INFO neutron.agent.ovn.metadata.agent [-] Port 3dbcd2bc-10d2-4550-a4f5-4c4c9effa54b in datapath 16cc2fb6-cda7-431a-ae22-fc6920fbbe4e bound to our chassis
Jan 20 14:54:04 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:54:04.018 160071 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 16cc2fb6-cda7-431a-ae22-fc6920fbbe4e
Jan 20 14:54:04 compute-0 NetworkManager[48960]: <info>  [1768920844.0261] device (tap3dbcd2bc-10): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 14:54:04 compute-0 NetworkManager[48960]: <info>  [1768920844.0271] device (tap3dbcd2bc-10): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 14:54:04 compute-0 ovn_controller[148666]: 2026-01-20T14:54:04Z|00427|binding|INFO|Setting lport 3dbcd2bc-10d2-4550-a4f5-4c4c9effa54b ovn-installed in OVS
Jan 20 14:54:04 compute-0 ovn_controller[148666]: 2026-01-20T14:54:04Z|00428|binding|INFO|Setting lport 3dbcd2bc-10d2-4550-a4f5-4c4c9effa54b up in Southbound
Jan 20 14:54:04 compute-0 nova_compute[250018]: 2026-01-20 14:54:04.029 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:54:04 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:54:04.030 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[5afa4baa-0812-4263-ab40-2856b648846d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:54:04 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:54:04.031 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap16cc2fb6-c1 in ovnmeta-16cc2fb6-cda7-431a-ae22-fc6920fbbe4e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 14:54:04 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:54:04.033 257604 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap16cc2fb6-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 14:54:04 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:54:04.033 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[f3a64e89-275f-432f-b37a-1217f211e4a4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:54:04 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:54:04.034 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[3661c592-3bc1-45a4-b6c7-3384248b0c66]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:54:04 compute-0 systemd-machined[216401]: New machine qemu-57-instance-00000079.
Jan 20 14:54:04 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:54:04.045 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[92bf0575-64b5-4f83-a0d6-1f46205d6a8e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:54:04 compute-0 systemd[1]: Started Virtual Machine qemu-57-instance-00000079.
Jan 20 14:54:04 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:54:04.069 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[e087a0c3-5326-4d43-b0f1-624b2f0ffd91]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:54:04 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:54:04.095 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[4d2faa4d-f970-4d20-90bf-a0d31e2f3a86]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:54:04 compute-0 NetworkManager[48960]: <info>  [1768920844.1036] manager: (tap16cc2fb6-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/219)
Jan 20 14:54:04 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:54:04.103 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[91c6a8b3-cba8-4d7a-a2ab-f0da9b9f1737]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:54:04 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:54:04.138 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[d5bba1f9-4143-46e5-95eb-6bd2dff9d73a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:54:04 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:54:04.140 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[7f2209ae-7e06-4e1c-a931-60cdc0e483a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:54:04 compute-0 NetworkManager[48960]: <info>  [1768920844.1630] device (tap16cc2fb6-c0): carrier: link connected
Jan 20 14:54:04 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:54:04.167 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[6130f509-d88e-49af-8808-ef16dddffeba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:54:04 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:54:04.182 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[79fd7e43-e267-4bd5-8116-e0fa6b440c10]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap16cc2fb6-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fb:a3:69'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 142], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 678488, 'reachable_time': 30477, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 321849, 'error': None, 'target': 'ovnmeta-16cc2fb6-cda7-431a-ae22-fc6920fbbe4e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:54:04 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:54:04.199 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[22f06dd8-0d96-428e-815a-2308308bee6a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fefb:a369'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 678488, 'tstamp': 678488}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 321850, 'error': None, 'target': 'ovnmeta-16cc2fb6-cda7-431a-ae22-fc6920fbbe4e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:54:04 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:54:04.214 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[e9be1329-abb8-4b4c-ad91-9dca0b3aa62c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap16cc2fb6-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fb:a3:69'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 142], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 678488, 'reachable_time': 30477, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 321851, 'error': None, 'target': 'ovnmeta-16cc2fb6-cda7-431a-ae22-fc6920fbbe4e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:54:04 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:54:04.242 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[749ca22c-233b-484d-97cb-31e12ecadd98]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:54:04 compute-0 nova_compute[250018]: 2026-01-20 14:54:04.254 250022 DEBUG nova.compute.manager [req-eddbb572-d7cd-4f85-a887-c66438e96c17 req-de59e738-2f6c-4ba2-bcb6-09f6c3f19aa7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d23ddbd4-8b5d-4bf5-a02d-3fb69b940770] Received event network-vif-plugged-3dbcd2bc-10d2-4550-a4f5-4c4c9effa54b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:54:04 compute-0 nova_compute[250018]: 2026-01-20 14:54:04.255 250022 DEBUG oslo_concurrency.lockutils [req-eddbb572-d7cd-4f85-a887-c66438e96c17 req-de59e738-2f6c-4ba2-bcb6-09f6c3f19aa7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "d23ddbd4-8b5d-4bf5-a02d-3fb69b940770-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:54:04 compute-0 nova_compute[250018]: 2026-01-20 14:54:04.255 250022 DEBUG oslo_concurrency.lockutils [req-eddbb572-d7cd-4f85-a887-c66438e96c17 req-de59e738-2f6c-4ba2-bcb6-09f6c3f19aa7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d23ddbd4-8b5d-4bf5-a02d-3fb69b940770-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:54:04 compute-0 nova_compute[250018]: 2026-01-20 14:54:04.255 250022 DEBUG oslo_concurrency.lockutils [req-eddbb572-d7cd-4f85-a887-c66438e96c17 req-de59e738-2f6c-4ba2-bcb6-09f6c3f19aa7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d23ddbd4-8b5d-4bf5-a02d-3fb69b940770-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:54:04 compute-0 nova_compute[250018]: 2026-01-20 14:54:04.255 250022 DEBUG nova.compute.manager [req-eddbb572-d7cd-4f85-a887-c66438e96c17 req-de59e738-2f6c-4ba2-bcb6-09f6c3f19aa7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d23ddbd4-8b5d-4bf5-a02d-3fb69b940770] No waiting events found dispatching network-vif-plugged-3dbcd2bc-10d2-4550-a4f5-4c4c9effa54b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:54:04 compute-0 nova_compute[250018]: 2026-01-20 14:54:04.255 250022 WARNING nova.compute.manager [req-eddbb572-d7cd-4f85-a887-c66438e96c17 req-de59e738-2f6c-4ba2-bcb6-09f6c3f19aa7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d23ddbd4-8b5d-4bf5-a02d-3fb69b940770] Received unexpected event network-vif-plugged-3dbcd2bc-10d2-4550-a4f5-4c4c9effa54b for instance with vm_state active and task_state rebuild_spawning.
Jan 20 14:54:04 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:54:04.293 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[cbf297e2-d6a7-4489-a8bd-0524071d0969]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:54:04 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:54:04.294 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap16cc2fb6-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:54:04 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:54:04.294 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:54:04 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:54:04.295 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap16cc2fb6-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:54:04 compute-0 kernel: tap16cc2fb6-c0: entered promiscuous mode
Jan 20 14:54:04 compute-0 nova_compute[250018]: 2026-01-20 14:54:04.296 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:54:04 compute-0 NetworkManager[48960]: <info>  [1768920844.2977] manager: (tap16cc2fb6-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/220)
Jan 20 14:54:04 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:54:04.302 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap16cc2fb6-c0, col_values=(('external_ids', {'iface-id': '114035f1-e09a-4fb9-a581-126e57246a7b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:54:04 compute-0 nova_compute[250018]: 2026-01-20 14:54:04.303 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:54:04 compute-0 ovn_controller[148666]: 2026-01-20T14:54:04Z|00429|binding|INFO|Releasing lport 114035f1-e09a-4fb9-a581-126e57246a7b from this chassis (sb_readonly=0)
Jan 20 14:54:04 compute-0 nova_compute[250018]: 2026-01-20 14:54:04.304 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:54:04 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:54:04.307 160071 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/16cc2fb6-cda7-431a-ae22-fc6920fbbe4e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/16cc2fb6-cda7-431a-ae22-fc6920fbbe4e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 14:54:04 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:54:04.308 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[e504f415-87a8-428c-bc71-6c89e7ad895b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:54:04 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:54:04.308 160071 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 14:54:04 compute-0 ovn_metadata_agent[160049]: global
Jan 20 14:54:04 compute-0 ovn_metadata_agent[160049]:     log         /dev/log local0 debug
Jan 20 14:54:04 compute-0 ovn_metadata_agent[160049]:     log-tag     haproxy-metadata-proxy-16cc2fb6-cda7-431a-ae22-fc6920fbbe4e
Jan 20 14:54:04 compute-0 ovn_metadata_agent[160049]:     user        root
Jan 20 14:54:04 compute-0 ovn_metadata_agent[160049]:     group       root
Jan 20 14:54:04 compute-0 ovn_metadata_agent[160049]:     maxconn     1024
Jan 20 14:54:04 compute-0 ovn_metadata_agent[160049]:     pidfile     /var/lib/neutron/external/pids/16cc2fb6-cda7-431a-ae22-fc6920fbbe4e.pid.haproxy
Jan 20 14:54:04 compute-0 ovn_metadata_agent[160049]:     daemon
Jan 20 14:54:04 compute-0 ovn_metadata_agent[160049]: 
Jan 20 14:54:04 compute-0 ovn_metadata_agent[160049]: defaults
Jan 20 14:54:04 compute-0 ovn_metadata_agent[160049]:     log global
Jan 20 14:54:04 compute-0 ovn_metadata_agent[160049]:     mode http
Jan 20 14:54:04 compute-0 ovn_metadata_agent[160049]:     option httplog
Jan 20 14:54:04 compute-0 ovn_metadata_agent[160049]:     option dontlognull
Jan 20 14:54:04 compute-0 ovn_metadata_agent[160049]:     option http-server-close
Jan 20 14:54:04 compute-0 ovn_metadata_agent[160049]:     option forwardfor
Jan 20 14:54:04 compute-0 ovn_metadata_agent[160049]:     retries                 3
Jan 20 14:54:04 compute-0 ovn_metadata_agent[160049]:     timeout http-request    30s
Jan 20 14:54:04 compute-0 ovn_metadata_agent[160049]:     timeout connect         30s
Jan 20 14:54:04 compute-0 ovn_metadata_agent[160049]:     timeout client          32s
Jan 20 14:54:04 compute-0 ovn_metadata_agent[160049]:     timeout server          32s
Jan 20 14:54:04 compute-0 ovn_metadata_agent[160049]:     timeout http-keep-alive 30s
Jan 20 14:54:04 compute-0 ovn_metadata_agent[160049]: 
Jan 20 14:54:04 compute-0 ovn_metadata_agent[160049]: 
Jan 20 14:54:04 compute-0 ovn_metadata_agent[160049]: listen listener
Jan 20 14:54:04 compute-0 ovn_metadata_agent[160049]:     bind 169.254.169.254:80
Jan 20 14:54:04 compute-0 ovn_metadata_agent[160049]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 14:54:04 compute-0 ovn_metadata_agent[160049]:     http-request add-header X-OVN-Network-ID 16cc2fb6-cda7-431a-ae22-fc6920fbbe4e
Jan 20 14:54:04 compute-0 ovn_metadata_agent[160049]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 14:54:04 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:54:04.309 160071 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-16cc2fb6-cda7-431a-ae22-fc6920fbbe4e', 'env', 'PROCESS_TAG=haproxy-16cc2fb6-cda7-431a-ae22-fc6920fbbe4e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/16cc2fb6-cda7-431a-ae22-fc6920fbbe4e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 14:54:04 compute-0 nova_compute[250018]: 2026-01-20 14:54:04.318 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:54:04 compute-0 nova_compute[250018]: 2026-01-20 14:54:04.627 250022 DEBUG nova.virt.libvirt.host [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Removed pending event for d23ddbd4-8b5d-4bf5-a02d-3fb69b940770 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Jan 20 14:54:04 compute-0 nova_compute[250018]: 2026-01-20 14:54:04.628 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768920844.6274166, d23ddbd4-8b5d-4bf5-a02d-3fb69b940770 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:54:04 compute-0 nova_compute[250018]: 2026-01-20 14:54:04.628 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: d23ddbd4-8b5d-4bf5-a02d-3fb69b940770] VM Resumed (Lifecycle Event)
Jan 20 14:54:04 compute-0 nova_compute[250018]: 2026-01-20 14:54:04.630 250022 DEBUG nova.compute.manager [None req-78352a7f-58ce-4abe-ae8f-a9bbbdc57b0b 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] [instance: d23ddbd4-8b5d-4bf5-a02d-3fb69b940770] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 14:54:04 compute-0 nova_compute[250018]: 2026-01-20 14:54:04.630 250022 DEBUG nova.virt.libvirt.driver [None req-78352a7f-58ce-4abe-ae8f-a9bbbdc57b0b 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] [instance: d23ddbd4-8b5d-4bf5-a02d-3fb69b940770] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 14:54:04 compute-0 nova_compute[250018]: 2026-01-20 14:54:04.635 250022 INFO nova.virt.libvirt.driver [-] [instance: d23ddbd4-8b5d-4bf5-a02d-3fb69b940770] Instance spawned successfully.
Jan 20 14:54:04 compute-0 nova_compute[250018]: 2026-01-20 14:54:04.635 250022 DEBUG nova.virt.libvirt.driver [None req-78352a7f-58ce-4abe-ae8f-a9bbbdc57b0b 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] [instance: d23ddbd4-8b5d-4bf5-a02d-3fb69b940770] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 14:54:04 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e291 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:54:04 compute-0 nova_compute[250018]: 2026-01-20 14:54:04.660 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: d23ddbd4-8b5d-4bf5-a02d-3fb69b940770] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:54:04 compute-0 nova_compute[250018]: 2026-01-20 14:54:04.663 250022 DEBUG nova.virt.libvirt.driver [None req-78352a7f-58ce-4abe-ae8f-a9bbbdc57b0b 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] [instance: d23ddbd4-8b5d-4bf5-a02d-3fb69b940770] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:54:04 compute-0 nova_compute[250018]: 2026-01-20 14:54:04.663 250022 DEBUG nova.virt.libvirt.driver [None req-78352a7f-58ce-4abe-ae8f-a9bbbdc57b0b 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] [instance: d23ddbd4-8b5d-4bf5-a02d-3fb69b940770] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:54:04 compute-0 nova_compute[250018]: 2026-01-20 14:54:04.664 250022 DEBUG nova.virt.libvirt.driver [None req-78352a7f-58ce-4abe-ae8f-a9bbbdc57b0b 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] [instance: d23ddbd4-8b5d-4bf5-a02d-3fb69b940770] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:54:04 compute-0 nova_compute[250018]: 2026-01-20 14:54:04.664 250022 DEBUG nova.virt.libvirt.driver [None req-78352a7f-58ce-4abe-ae8f-a9bbbdc57b0b 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] [instance: d23ddbd4-8b5d-4bf5-a02d-3fb69b940770] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:54:04 compute-0 nova_compute[250018]: 2026-01-20 14:54:04.664 250022 DEBUG nova.virt.libvirt.driver [None req-78352a7f-58ce-4abe-ae8f-a9bbbdc57b0b 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] [instance: d23ddbd4-8b5d-4bf5-a02d-3fb69b940770] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:54:04 compute-0 nova_compute[250018]: 2026-01-20 14:54:04.664 250022 DEBUG nova.virt.libvirt.driver [None req-78352a7f-58ce-4abe-ae8f-a9bbbdc57b0b 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] [instance: d23ddbd4-8b5d-4bf5-a02d-3fb69b940770] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:54:04 compute-0 nova_compute[250018]: 2026-01-20 14:54:04.668 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: d23ddbd4-8b5d-4bf5-a02d-3fb69b940770] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:54:04 compute-0 nova_compute[250018]: 2026-01-20 14:54:04.697 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: d23ddbd4-8b5d-4bf5-a02d-3fb69b940770] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Jan 20 14:54:04 compute-0 nova_compute[250018]: 2026-01-20 14:54:04.697 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768920844.63107, d23ddbd4-8b5d-4bf5-a02d-3fb69b940770 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:54:04 compute-0 nova_compute[250018]: 2026-01-20 14:54:04.697 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: d23ddbd4-8b5d-4bf5-a02d-3fb69b940770] VM Started (Lifecycle Event)
Jan 20 14:54:04 compute-0 nova_compute[250018]: 2026-01-20 14:54:04.726 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: d23ddbd4-8b5d-4bf5-a02d-3fb69b940770] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:54:04 compute-0 nova_compute[250018]: 2026-01-20 14:54:04.728 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: d23ddbd4-8b5d-4bf5-a02d-3fb69b940770] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:54:04 compute-0 podman[321925]: 2026-01-20 14:54:04.634710589 +0000 UTC m=+0.025196269 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 14:54:04 compute-0 nova_compute[250018]: 2026-01-20 14:54:04.746 250022 DEBUG nova.compute.manager [None req-78352a7f-58ce-4abe-ae8f-a9bbbdc57b0b 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] [instance: d23ddbd4-8b5d-4bf5-a02d-3fb69b940770] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:54:04 compute-0 nova_compute[250018]: 2026-01-20 14:54:04.748 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: d23ddbd4-8b5d-4bf5-a02d-3fb69b940770] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Jan 20 14:54:04 compute-0 nova_compute[250018]: 2026-01-20 14:54:04.801 250022 DEBUG oslo_concurrency.lockutils [None req-78352a7f-58ce-4abe-ae8f-a9bbbdc57b0b 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:54:04 compute-0 nova_compute[250018]: 2026-01-20 14:54:04.801 250022 DEBUG oslo_concurrency.lockutils [None req-78352a7f-58ce-4abe-ae8f-a9bbbdc57b0b 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:54:04 compute-0 nova_compute[250018]: 2026-01-20 14:54:04.802 250022 DEBUG nova.objects.instance [None req-78352a7f-58ce-4abe-ae8f-a9bbbdc57b0b 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] [instance: d23ddbd4-8b5d-4bf5-a02d-3fb69b940770] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Jan 20 14:54:04 compute-0 ceph-mon[74360]: pgmap v2030: 321 pgs: 321 active+clean; 881 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 6.0 MiB/s rd, 1.5 MiB/s wr, 262 op/s
Jan 20 14:54:04 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/1756112089' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:54:04 compute-0 podman[321925]: 2026-01-20 14:54:04.827795075 +0000 UTC m=+0.218280735 container create b03bc2445560b0bd4f7acf1d1306483ed694dae7a90c821e4246bebe963db8c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-16cc2fb6-cda7-431a-ae22-fc6920fbbe4e, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 20 14:54:04 compute-0 nova_compute[250018]: 2026-01-20 14:54:04.850 250022 DEBUG oslo_concurrency.lockutils [None req-f8ef4cb2-81d0-4a4c-afaa-9ad559f4e405 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Acquiring lock "7f5cfffe-c1dc-4b00-844e-0fb35b340f44" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:54:04 compute-0 nova_compute[250018]: 2026-01-20 14:54:04.850 250022 DEBUG oslo_concurrency.lockutils [None req-f8ef4cb2-81d0-4a4c-afaa-9ad559f4e405 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Lock "7f5cfffe-c1dc-4b00-844e-0fb35b340f44" acquired by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:54:04 compute-0 nova_compute[250018]: 2026-01-20 14:54:04.850 250022 DEBUG nova.compute.manager [None req-f8ef4cb2-81d0-4a4c-afaa-9ad559f4e405 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Going to confirm migration 17 do_confirm_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:4679
Jan 20 14:54:04 compute-0 systemd[1]: Started libpod-conmon-b03bc2445560b0bd4f7acf1d1306483ed694dae7a90c821e4246bebe963db8c6.scope.
Jan 20 14:54:04 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:54:04 compute-0 nova_compute[250018]: 2026-01-20 14:54:04.881 250022 DEBUG oslo_concurrency.lockutils [None req-78352a7f-58ce-4abe-ae8f-a9bbbdc57b0b 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.080s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:54:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3644500512ec33d8f160c04161dd51b6058dfcf628ecc97e0e8a3b6eeff4c2e9/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 14:54:04 compute-0 podman[321925]: 2026-01-20 14:54:04.904098058 +0000 UTC m=+0.294583718 container init b03bc2445560b0bd4f7acf1d1306483ed694dae7a90c821e4246bebe963db8c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-16cc2fb6-cda7-431a-ae22-fc6920fbbe4e, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true)
Jan 20 14:54:04 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:54:04 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:54:04 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:54:04.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:54:04 compute-0 podman[321925]: 2026-01-20 14:54:04.910242754 +0000 UTC m=+0.300728424 container start b03bc2445560b0bd4f7acf1d1306483ed694dae7a90c821e4246bebe963db8c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-16cc2fb6-cda7-431a-ae22-fc6920fbbe4e, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:54:04 compute-0 neutron-haproxy-ovnmeta-16cc2fb6-cda7-431a-ae22-fc6920fbbe4e[321943]: [NOTICE]   (321971) : New worker (321981) forked
Jan 20 14:54:04 compute-0 neutron-haproxy-ovnmeta-16cc2fb6-cda7-431a-ae22-fc6920fbbe4e[321943]: [NOTICE]   (321971) : Loading success.
Jan 20 14:54:04 compute-0 podman[321942]: 2026-01-20 14:54:04.95245945 +0000 UTC m=+0.082911162 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 20 14:54:04 compute-0 podman[321939]: 2026-01-20 14:54:04.969077138 +0000 UTC m=+0.100523377 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 20 14:54:05 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:54:05 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:54:05 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:54:05.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:54:05 compute-0 nova_compute[250018]: 2026-01-20 14:54:05.280 250022 DEBUG neutronclient.v2_0.client [None req-f8ef4cb2-81d0-4a4c-afaa-9ad559f4e405 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Error message: {"NeutronError": {"type": "PortBindingNotFound", "message": "Binding for port 87b0cab5-af2f-4440-8f58-840860a23f68 for host compute-0.ctlplane.example.com could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262
Jan 20 14:54:05 compute-0 nova_compute[250018]: 2026-01-20 14:54:05.281 250022 DEBUG oslo_concurrency.lockutils [None req-f8ef4cb2-81d0-4a4c-afaa-9ad559f4e405 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Acquiring lock "refresh_cache-7f5cfffe-c1dc-4b00-844e-0fb35b340f44" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:54:05 compute-0 nova_compute[250018]: 2026-01-20 14:54:05.282 250022 DEBUG oslo_concurrency.lockutils [None req-f8ef4cb2-81d0-4a4c-afaa-9ad559f4e405 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Acquired lock "refresh_cache-7f5cfffe-c1dc-4b00-844e-0fb35b340f44" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:54:05 compute-0 nova_compute[250018]: 2026-01-20 14:54:05.282 250022 DEBUG nova.network.neutron [None req-f8ef4cb2-81d0-4a4c-afaa-9ad559f4e405 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:54:05 compute-0 nova_compute[250018]: 2026-01-20 14:54:05.283 250022 DEBUG nova.objects.instance [None req-f8ef4cb2-81d0-4a4c-afaa-9ad559f4e405 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Lazy-loading 'info_cache' on Instance uuid 7f5cfffe-c1dc-4b00-844e-0fb35b340f44 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:54:05 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/3370297134' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:54:05 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2031: 321 pgs: 321 active+clean; 867 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 6.5 MiB/s rd, 2.7 MiB/s wr, 324 op/s
Jan 20 14:54:06 compute-0 nova_compute[250018]: 2026-01-20 14:54:06.458 250022 DEBUG nova.compute.manager [req-4ee5e217-ab85-4bdf-b428-8338014943d3 req-476a2a6b-7f67-4d41-a457-e764ee302f27 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d23ddbd4-8b5d-4bf5-a02d-3fb69b940770] Received event network-vif-plugged-3dbcd2bc-10d2-4550-a4f5-4c4c9effa54b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:54:06 compute-0 nova_compute[250018]: 2026-01-20 14:54:06.459 250022 DEBUG oslo_concurrency.lockutils [req-4ee5e217-ab85-4bdf-b428-8338014943d3 req-476a2a6b-7f67-4d41-a457-e764ee302f27 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "d23ddbd4-8b5d-4bf5-a02d-3fb69b940770-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:54:06 compute-0 nova_compute[250018]: 2026-01-20 14:54:06.459 250022 DEBUG oslo_concurrency.lockutils [req-4ee5e217-ab85-4bdf-b428-8338014943d3 req-476a2a6b-7f67-4d41-a457-e764ee302f27 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d23ddbd4-8b5d-4bf5-a02d-3fb69b940770-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:54:06 compute-0 nova_compute[250018]: 2026-01-20 14:54:06.459 250022 DEBUG oslo_concurrency.lockutils [req-4ee5e217-ab85-4bdf-b428-8338014943d3 req-476a2a6b-7f67-4d41-a457-e764ee302f27 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d23ddbd4-8b5d-4bf5-a02d-3fb69b940770-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:54:06 compute-0 nova_compute[250018]: 2026-01-20 14:54:06.459 250022 DEBUG nova.compute.manager [req-4ee5e217-ab85-4bdf-b428-8338014943d3 req-476a2a6b-7f67-4d41-a457-e764ee302f27 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d23ddbd4-8b5d-4bf5-a02d-3fb69b940770] No waiting events found dispatching network-vif-plugged-3dbcd2bc-10d2-4550-a4f5-4c4c9effa54b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:54:06 compute-0 nova_compute[250018]: 2026-01-20 14:54:06.460 250022 WARNING nova.compute.manager [req-4ee5e217-ab85-4bdf-b428-8338014943d3 req-476a2a6b-7f67-4d41-a457-e764ee302f27 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d23ddbd4-8b5d-4bf5-a02d-3fb69b940770] Received unexpected event network-vif-plugged-3dbcd2bc-10d2-4550-a4f5-4c4c9effa54b for instance with vm_state active and task_state None.
Jan 20 14:54:06 compute-0 ceph-mon[74360]: pgmap v2031: 321 pgs: 321 active+clean; 867 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 6.5 MiB/s rd, 2.7 MiB/s wr, 324 op/s
Jan 20 14:54:06 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:54:06 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:54:06 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:54:06.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:54:07 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:54:07 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:54:07 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:54:07.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:54:07 compute-0 nova_compute[250018]: 2026-01-20 14:54:07.183 250022 DEBUG nova.network.neutron [None req-f8ef4cb2-81d0-4a4c-afaa-9ad559f4e405 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Updating instance_info_cache with network_info: [{"id": "87b0cab5-af2f-4440-8f58-840860a23f68", "address": "fa:16:3e:2b:79:1b", "network": {"id": "41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1445030024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.184", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3b1b7f5b4f84b5abbc401eb577c85c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap87b0cab5-af", "ovs_interfaceid": "87b0cab5-af2f-4440-8f58-840860a23f68", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:54:07 compute-0 nova_compute[250018]: 2026-01-20 14:54:07.202 250022 DEBUG oslo_concurrency.lockutils [None req-f8ef4cb2-81d0-4a4c-afaa-9ad559f4e405 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Releasing lock "refresh_cache-7f5cfffe-c1dc-4b00-844e-0fb35b340f44" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:54:07 compute-0 nova_compute[250018]: 2026-01-20 14:54:07.203 250022 DEBUG nova.objects.instance [None req-f8ef4cb2-81d0-4a4c-afaa-9ad559f4e405 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Lazy-loading 'migration_context' on Instance uuid 7f5cfffe-c1dc-4b00-844e-0fb35b340f44 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:54:07 compute-0 nova_compute[250018]: 2026-01-20 14:54:07.313 250022 DEBUG nova.storage.rbd_utils [None req-f8ef4cb2-81d0-4a4c-afaa-9ad559f4e405 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] removing snapshot(nova-resize) on rbd image(7f5cfffe-c1dc-4b00-844e-0fb35b340f44_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Jan 20 14:54:07 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e291 do_prune osdmap full prune enabled
Jan 20 14:54:07 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e292 e292: 3 total, 3 up, 3 in
Jan 20 14:54:07 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e292: 3 total, 3 up, 3 in
Jan 20 14:54:07 compute-0 nova_compute[250018]: 2026-01-20 14:54:07.907 250022 DEBUG nova.virt.libvirt.vif [None req-f8ef4cb2-81d0-4a4c-afaa-9ad559f4e405 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:51:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1654627482',display_name='tempest-ServerActionsTestOtherB-server-1654627482',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1654627482',id=118,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBO99uJ9+FwgjxRb/9u+f3Mj9/VKSDM+OKd66Ygsg8lEO+7bGpDEQrC5BIaSV+Na5YF+3DqUwLNmAYSN9IkTSGbRPw5y8813A+KsiNHebrpnZ7oReyT+5/zNQYafCHVAfGA==',key_name='tempest-keypair-302882914',keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:54:03Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=Flavor(1),os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='b3b1b7f5b4f84b5abbc401eb577c85c0',ramdisk_id='',reservation_id='r-2ulk0sfq',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='stopped',owner_project_name='tempest-ServerActionsTestOtherB-1136521362',owner_user_name='tempest-ServerActionsTestOtherB-1136521362-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T14:54:03Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='215db37373dc4ae5a75cbd6866f471da',uuid=7f5cfffe-c1dc-4b00-844e-0fb35b340f44,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='resized') vif={"id": "87b0cab5-af2f-4440-8f58-840860a23f68", "address": "fa:16:3e:2b:79:1b", "network": {"id": "41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1445030024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.184", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3b1b7f5b4f84b5abbc401eb577c85c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap87b0cab5-af", "ovs_interfaceid": "87b0cab5-af2f-4440-8f58-840860a23f68", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 14:54:07 compute-0 nova_compute[250018]: 2026-01-20 14:54:07.908 250022 DEBUG nova.network.os_vif_util [None req-f8ef4cb2-81d0-4a4c-afaa-9ad559f4e405 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Converting VIF {"id": "87b0cab5-af2f-4440-8f58-840860a23f68", "address": "fa:16:3e:2b:79:1b", "network": {"id": "41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1445030024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.184", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3b1b7f5b4f84b5abbc401eb577c85c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap87b0cab5-af", "ovs_interfaceid": "87b0cab5-af2f-4440-8f58-840860a23f68", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:54:07 compute-0 nova_compute[250018]: 2026-01-20 14:54:07.909 250022 DEBUG nova.network.os_vif_util [None req-f8ef4cb2-81d0-4a4c-afaa-9ad559f4e405 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2b:79:1b,bridge_name='br-int',has_traffic_filtering=True,id=87b0cab5-af2f-4440-8f58-840860a23f68,network=Network(41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap87b0cab5-af') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:54:07 compute-0 nova_compute[250018]: 2026-01-20 14:54:07.909 250022 DEBUG os_vif [None req-f8ef4cb2-81d0-4a4c-afaa-9ad559f4e405 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2b:79:1b,bridge_name='br-int',has_traffic_filtering=True,id=87b0cab5-af2f-4440-8f58-840860a23f68,network=Network(41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap87b0cab5-af') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 14:54:07 compute-0 nova_compute[250018]: 2026-01-20 14:54:07.911 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:54:07 compute-0 nova_compute[250018]: 2026-01-20 14:54:07.911 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap87b0cab5-af, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:54:07 compute-0 nova_compute[250018]: 2026-01-20 14:54:07.911 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:54:07 compute-0 nova_compute[250018]: 2026-01-20 14:54:07.913 250022 INFO os_vif [None req-f8ef4cb2-81d0-4a4c-afaa-9ad559f4e405 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2b:79:1b,bridge_name='br-int',has_traffic_filtering=True,id=87b0cab5-af2f-4440-8f58-840860a23f68,network=Network(41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap87b0cab5-af')
Jan 20 14:54:07 compute-0 nova_compute[250018]: 2026-01-20 14:54:07.914 250022 DEBUG oslo_concurrency.lockutils [None req-f8ef4cb2-81d0-4a4c-afaa-9ad559f4e405 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:54:07 compute-0 nova_compute[250018]: 2026-01-20 14:54:07.914 250022 DEBUG oslo_concurrency.lockutils [None req-f8ef4cb2-81d0-4a4c-afaa-9ad559f4e405 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:54:07 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2033: 321 pgs: 321 active+clean; 867 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 6.3 MiB/s rd, 2.7 MiB/s wr, 313 op/s
Jan 20 14:54:08 compute-0 nova_compute[250018]: 2026-01-20 14:54:08.024 250022 DEBUG oslo_concurrency.processutils [None req-f8ef4cb2-81d0-4a4c-afaa-9ad559f4e405 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:54:08 compute-0 nova_compute[250018]: 2026-01-20 14:54:08.048 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:54:08 compute-0 nova_compute[250018]: 2026-01-20 14:54:08.375 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:54:08 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:54:08 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/685966402' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:54:08 compute-0 nova_compute[250018]: 2026-01-20 14:54:08.479 250022 DEBUG oslo_concurrency.processutils [None req-f8ef4cb2-81d0-4a4c-afaa-9ad559f4e405 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:54:08 compute-0 nova_compute[250018]: 2026-01-20 14:54:08.485 250022 DEBUG nova.compute.provider_tree [None req-f8ef4cb2-81d0-4a4c-afaa-9ad559f4e405 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:54:08 compute-0 nova_compute[250018]: 2026-01-20 14:54:08.521 250022 DEBUG nova.scheduler.client.report [None req-f8ef4cb2-81d0-4a4c-afaa-9ad559f4e405 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:54:08 compute-0 nova_compute[250018]: 2026-01-20 14:54:08.587 250022 DEBUG oslo_concurrency.lockutils [None req-f8ef4cb2-81d0-4a4c-afaa-9ad559f4e405 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: held 0.673s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:54:08 compute-0 nova_compute[250018]: 2026-01-20 14:54:08.588 250022 DEBUG nova.compute.manager [None req-f8ef4cb2-81d0-4a4c-afaa-9ad559f4e405 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: 7f5cfffe-c1dc-4b00-844e-0fb35b340f44] Resized/migrated instance is powered off. Setting vm_state to 'stopped'. _confirm_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:4805
Jan 20 14:54:08 compute-0 sudo[322057]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:54:08 compute-0 sudo[322057]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:54:08 compute-0 sudo[322057]: pam_unix(sudo:session): session closed for user root
Jan 20 14:54:08 compute-0 nova_compute[250018]: 2026-01-20 14:54:08.692 250022 INFO nova.scheduler.client.report [None req-f8ef4cb2-81d0-4a4c-afaa-9ad559f4e405 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Deleted allocation for migration 0c07b1b8-dd18-4f00-acbf-59c21d8f4a60
Jan 20 14:54:08 compute-0 nova_compute[250018]: 2026-01-20 14:54:08.741 250022 DEBUG oslo_concurrency.lockutils [None req-f8ef4cb2-81d0-4a4c-afaa-9ad559f4e405 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Lock "7f5cfffe-c1dc-4b00-844e-0fb35b340f44" "released" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: held 3.891s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:54:08 compute-0 sudo[322082]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:54:08 compute-0 sudo[322082]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:54:08 compute-0 sudo[322082]: pam_unix(sudo:session): session closed for user root
Jan 20 14:54:08 compute-0 sudo[322107]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:54:08 compute-0 sudo[322107]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:54:08 compute-0 sudo[322107]: pam_unix(sudo:session): session closed for user root
Jan 20 14:54:08 compute-0 sudo[322132]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 14:54:08 compute-0 sudo[322132]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:54:08 compute-0 ceph-mon[74360]: osdmap e292: 3 total, 3 up, 3 in
Jan 20 14:54:08 compute-0 ceph-mon[74360]: pgmap v2033: 321 pgs: 321 active+clean; 867 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 6.3 MiB/s rd, 2.7 MiB/s wr, 313 op/s
Jan 20 14:54:08 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/685966402' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:54:08 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:54:08 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:54:08 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:54:08.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:54:09 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:54:09 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:54:09 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:54:09.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:54:09 compute-0 sudo[322132]: pam_unix(sudo:session): session closed for user root
Jan 20 14:54:09 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 14:54:09 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:54:09 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 20 14:54:09 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:54:09 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 20 14:54:09 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:54:09 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev fde06103-6da6-485b-94f7-ce17c061c00e does not exist
Jan 20 14:54:09 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev c5b11166-0676-48a0-bee4-2d434e733d3c does not exist
Jan 20 14:54:09 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 50204f48-64de-4e1c-8f82-d221dffcce6b does not exist
Jan 20 14:54:09 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 20 14:54:09 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:54:09 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 20 14:54:09 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:54:09 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 14:54:09 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:54:09 compute-0 sudo[322186]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:54:09 compute-0 sudo[322186]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:54:09 compute-0 sudo[322186]: pam_unix(sudo:session): session closed for user root
Jan 20 14:54:09 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:54:09 compute-0 sudo[322211]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:54:09 compute-0 sudo[322211]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:54:09 compute-0 sudo[322211]: pam_unix(sudo:session): session closed for user root
Jan 20 14:54:09 compute-0 sudo[322236]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:54:09 compute-0 sudo[322236]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:54:09 compute-0 sudo[322236]: pam_unix(sudo:session): session closed for user root
Jan 20 14:54:09 compute-0 ceph-mon[74360]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #96. Immutable memtables: 0.
Jan 20 14:54:09 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:54:09.736852) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 14:54:09 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:856] [default] [JOB 55] Flushing memtable with next log file: 96
Jan 20 14:54:09 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768920849736927, "job": 55, "event": "flush_started", "num_memtables": 1, "num_entries": 1957, "num_deletes": 265, "total_data_size": 3060447, "memory_usage": 3111704, "flush_reason": "Manual Compaction"}
Jan 20 14:54:09 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:885] [default] [JOB 55] Level-0 flush table #97: started
Jan 20 14:54:09 compute-0 sudo[322261]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 14:54:09 compute-0 sudo[322261]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:54:09 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2034: 321 pgs: 321 active+clean; 867 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 7.2 MiB/s rd, 2.4 MiB/s wr, 302 op/s
Jan 20 14:54:10 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768920850050359, "cf_name": "default", "job": 55, "event": "table_file_creation", "file_number": 97, "file_size": 2997055, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 43822, "largest_seqno": 45778, "table_properties": {"data_size": 2987998, "index_size": 5615, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 19297, "raw_average_key_size": 20, "raw_value_size": 2969678, "raw_average_value_size": 3203, "num_data_blocks": 241, "num_entries": 927, "num_filter_entries": 927, "num_deletions": 265, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768920711, "oldest_key_time": 1768920711, "file_creation_time": 1768920849, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1688fb6f-f397-4aac-a0e8-874ff91d97a7", "db_session_id": "5V3N2TVXYZBCXP55EZHK", "orig_file_number": 97, "seqno_to_time_mapping": "N/A"}}
Jan 20 14:54:10 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 55] Flush lasted 313553 microseconds, and 6494 cpu microseconds.
Jan 20 14:54:10 compute-0 ceph-mon[74360]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 14:54:10 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:54:10.050416) [db/flush_job.cc:967] [default] [JOB 55] Level-0 flush table #97: 2997055 bytes OK
Jan 20 14:54:10 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:54:10.050434) [db/memtable_list.cc:519] [default] Level-0 commit table #97 started
Jan 20 14:54:10 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:54:10.076448) [db/memtable_list.cc:722] [default] Level-0 commit table #97: memtable #1 done
Jan 20 14:54:10 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:54:10.076480) EVENT_LOG_v1 {"time_micros": 1768920850076473, "job": 55, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 14:54:10 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:54:10.076502) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 14:54:10 compute-0 ceph-mon[74360]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 55] Try to delete WAL files size 3052135, prev total WAL file size 3056378, number of live WAL files 2.
Jan 20 14:54:10 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000093.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:54:10 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:54:10 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:54:10 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:54:10 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:54:10 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:54:10 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:54:10 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/912464231' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:54:10 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:54:10.077826) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031353133' seq:72057594037927935, type:22 .. '6C6F676D0031373635' seq:0, type:0; will stop at (end)
Jan 20 14:54:10 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 56] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 14:54:10 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 55 Base level 0, inputs: [97(2926KB)], [95(10MB)]
Jan 20 14:54:10 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768920850077924, "job": 56, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [97], "files_L6": [95], "score": -1, "input_data_size": 13791841, "oldest_snapshot_seqno": -1}
Jan 20 14:54:10 compute-0 podman[322327]: 2026-01-20 14:54:10.123805223 +0000 UTC m=+0.028853377 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:54:10 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 56] Generated table #98: 7593 keys, 13632745 bytes, temperature: kUnknown
Jan 20 14:54:10 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768920850356826, "cf_name": "default", "job": 56, "event": "table_file_creation", "file_number": 98, "file_size": 13632745, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13578885, "index_size": 33780, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19013, "raw_key_size": 195152, "raw_average_key_size": 25, "raw_value_size": 13440279, "raw_average_value_size": 1770, "num_data_blocks": 1347, "num_entries": 7593, "num_filter_entries": 7593, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768917305, "oldest_key_time": 0, "file_creation_time": 1768920850, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1688fb6f-f397-4aac-a0e8-874ff91d97a7", "db_session_id": "5V3N2TVXYZBCXP55EZHK", "orig_file_number": 98, "seqno_to_time_mapping": "N/A"}}
Jan 20 14:54:10 compute-0 ceph-mon[74360]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 14:54:10 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:54:10.357449) [db/compaction/compaction_job.cc:1663] [default] [JOB 56] Compacted 1@0 + 1@6 files to L6 => 13632745 bytes
Jan 20 14:54:10 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:54:10.361470) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 49.4 rd, 48.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.9, 10.3 +0.0 blob) out(13.0 +0.0 blob), read-write-amplify(9.2) write-amplify(4.5) OK, records in: 8136, records dropped: 543 output_compression: NoCompression
Jan 20 14:54:10 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:54:10.361500) EVENT_LOG_v1 {"time_micros": 1768920850361486, "job": 56, "event": "compaction_finished", "compaction_time_micros": 279253, "compaction_time_cpu_micros": 49020, "output_level": 6, "num_output_files": 1, "total_output_size": 13632745, "num_input_records": 8136, "num_output_records": 7593, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 14:54:10 compute-0 podman[322327]: 2026-01-20 14:54:10.362516327 +0000 UTC m=+0.267564471 container create 49fbe54eb369040c39b71cbb20fc2c12296794ae0180778848ed2c358c075cb8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_tu, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 20 14:54:10 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000097.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:54:10 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768920850363829, "job": 56, "event": "table_file_deletion", "file_number": 97}
Jan 20 14:54:10 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000095.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:54:10 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768920850367881, "job": 56, "event": "table_file_deletion", "file_number": 95}
Jan 20 14:54:10 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:54:10.077706) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:54:10 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:54:10.368179) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:54:10 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:54:10.368190) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:54:10 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:54:10.368194) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:54:10 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:54:10.368201) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:54:10 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:54:10.368209) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:54:10 compute-0 systemd[1]: Started libpod-conmon-49fbe54eb369040c39b71cbb20fc2c12296794ae0180778848ed2c358c075cb8.scope.
Jan 20 14:54:10 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:54:10 compute-0 podman[322327]: 2026-01-20 14:54:10.479901997 +0000 UTC m=+0.384950151 container init 49fbe54eb369040c39b71cbb20fc2c12296794ae0180778848ed2c358c075cb8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_tu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:54:10 compute-0 podman[322327]: 2026-01-20 14:54:10.490498631 +0000 UTC m=+0.395546805 container start 49fbe54eb369040c39b71cbb20fc2c12296794ae0180778848ed2c358c075cb8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_tu, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:54:10 compute-0 nifty_tu[322344]: 167 167
Jan 20 14:54:10 compute-0 systemd[1]: libpod-49fbe54eb369040c39b71cbb20fc2c12296794ae0180778848ed2c358c075cb8.scope: Deactivated successfully.
Jan 20 14:54:10 compute-0 podman[322327]: 2026-01-20 14:54:10.763323874 +0000 UTC m=+0.668372028 container attach 49fbe54eb369040c39b71cbb20fc2c12296794ae0180778848ed2c358c075cb8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_tu, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Jan 20 14:54:10 compute-0 podman[322327]: 2026-01-20 14:54:10.763852207 +0000 UTC m=+0.668900381 container died 49fbe54eb369040c39b71cbb20fc2c12296794ae0180778848ed2c358c075cb8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_tu, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Jan 20 14:54:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-e85859228e707874b8d0a51ac70860dfd192e2732f9d7f5fa58bb48f5ee69ab5-merged.mount: Deactivated successfully.
Jan 20 14:54:10 compute-0 podman[322327]: 2026-01-20 14:54:10.817556363 +0000 UTC m=+0.722604527 container remove 49fbe54eb369040c39b71cbb20fc2c12296794ae0180778848ed2c358c075cb8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_tu, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 14:54:10 compute-0 systemd[1]: libpod-conmon-49fbe54eb369040c39b71cbb20fc2c12296794ae0180778848ed2c358c075cb8.scope: Deactivated successfully.
Jan 20 14:54:10 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:54:10 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:54:10 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:54:10.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:54:11 compute-0 podman[322369]: 2026-01-20 14:54:11.057533221 +0000 UTC m=+0.108883681 container create 5b7d2d38b5f11c45c9844b55b4facbbb365f014f7c75cb209ebe2c8594853009 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_hermann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2)
Jan 20 14:54:11 compute-0 podman[322369]: 2026-01-20 14:54:10.972149163 +0000 UTC m=+0.023499653 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:54:11 compute-0 ceph-mon[74360]: pgmap v2034: 321 pgs: 321 active+clean; 867 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 7.2 MiB/s rd, 2.4 MiB/s wr, 302 op/s
Jan 20 14:54:11 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/2126179071' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:54:11 compute-0 systemd[1]: Started libpod-conmon-5b7d2d38b5f11c45c9844b55b4facbbb365f014f7c75cb209ebe2c8594853009.scope.
Jan 20 14:54:11 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:54:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65d35e6d41e98d01ea0fc61e07bd54860227d3282027adaac18c80f6bc7653b8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:54:11 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:54:11 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:54:11 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:54:11.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:54:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65d35e6d41e98d01ea0fc61e07bd54860227d3282027adaac18c80f6bc7653b8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:54:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65d35e6d41e98d01ea0fc61e07bd54860227d3282027adaac18c80f6bc7653b8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:54:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65d35e6d41e98d01ea0fc61e07bd54860227d3282027adaac18c80f6bc7653b8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:54:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65d35e6d41e98d01ea0fc61e07bd54860227d3282027adaac18c80f6bc7653b8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 14:54:11 compute-0 podman[322369]: 2026-01-20 14:54:11.16445985 +0000 UTC m=+0.215810320 container init 5b7d2d38b5f11c45c9844b55b4facbbb365f014f7c75cb209ebe2c8594853009 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_hermann, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 14:54:11 compute-0 podman[322369]: 2026-01-20 14:54:11.172158686 +0000 UTC m=+0.223509156 container start 5b7d2d38b5f11c45c9844b55b4facbbb365f014f7c75cb209ebe2c8594853009 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_hermann, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 14:54:11 compute-0 podman[322369]: 2026-01-20 14:54:11.176067292 +0000 UTC m=+0.227417922 container attach 5b7d2d38b5f11c45c9844b55b4facbbb365f014f7c75cb209ebe2c8594853009 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_hermann, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:54:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 14:54:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:54:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 20 14:54:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:54:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.013038281686208821 of space, bias 1.0, pg target 3.911484505862646 quantized to 32 (current 32)
Jan 20 14:54:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:54:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0009896487646566163 of space, bias 1.0, pg target 0.29392568310301503 quantized to 32 (current 32)
Jan 20 14:54:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:54:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:54:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:54:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.009198553531546623 of space, bias 1.0, pg target 2.731970398869347 quantized to 32 (current 32)
Jan 20 14:54:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:54:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001715754699423041 quantized to 16 (current 32)
Jan 20 14:54:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:54:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:54:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:54:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021446933742788013 quantized to 32 (current 32)
Jan 20 14:54:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:54:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018229893681369813 quantized to 32 (current 32)
Jan 20 14:54:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:54:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:54:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:54:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00042893867485576027 quantized to 32 (current 32)
Jan 20 14:54:11 compute-0 sudo[322391]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:54:11 compute-0 sudo[322391]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:54:11 compute-0 sudo[322391]: pam_unix(sudo:session): session closed for user root
Jan 20 14:54:11 compute-0 sudo[322416]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:54:11 compute-0 sudo[322416]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:54:11 compute-0 sudo[322416]: pam_unix(sudo:session): session closed for user root
Jan 20 14:54:11 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2035: 321 pgs: 321 active+clean; 896 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 7.2 MiB/s rd, 3.6 MiB/s wr, 296 op/s
Jan 20 14:54:12 compute-0 infallible_hermann[322386]: --> passed data devices: 0 physical, 1 LVM
Jan 20 14:54:12 compute-0 infallible_hermann[322386]: --> relative data size: 1.0
Jan 20 14:54:12 compute-0 infallible_hermann[322386]: --> All data devices are unavailable
Jan 20 14:54:12 compute-0 systemd[1]: libpod-5b7d2d38b5f11c45c9844b55b4facbbb365f014f7c75cb209ebe2c8594853009.scope: Deactivated successfully.
Jan 20 14:54:12 compute-0 podman[322452]: 2026-01-20 14:54:12.06482058 +0000 UTC m=+0.023855193 container died 5b7d2d38b5f11c45c9844b55b4facbbb365f014f7c75cb209ebe2c8594853009 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_hermann, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True)
Jan 20 14:54:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-65d35e6d41e98d01ea0fc61e07bd54860227d3282027adaac18c80f6bc7653b8-merged.mount: Deactivated successfully.
Jan 20 14:54:12 compute-0 podman[322452]: 2026-01-20 14:54:12.182501187 +0000 UTC m=+0.141535790 container remove 5b7d2d38b5f11c45c9844b55b4facbbb365f014f7c75cb209ebe2c8594853009 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_hermann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:54:12 compute-0 systemd[1]: libpod-conmon-5b7d2d38b5f11c45c9844b55b4facbbb365f014f7c75cb209ebe2c8594853009.scope: Deactivated successfully.
Jan 20 14:54:12 compute-0 sudo[322261]: pam_unix(sudo:session): session closed for user root
Jan 20 14:54:12 compute-0 sudo[322467]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:54:12 compute-0 sudo[322467]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:54:12 compute-0 sudo[322467]: pam_unix(sudo:session): session closed for user root
Jan 20 14:54:12 compute-0 sudo[322492]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:54:12 compute-0 sudo[322492]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:54:12 compute-0 sudo[322492]: pam_unix(sudo:session): session closed for user root
Jan 20 14:54:12 compute-0 sudo[322517]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:54:12 compute-0 sudo[322517]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:54:12 compute-0 sudo[322517]: pam_unix(sudo:session): session closed for user root
Jan 20 14:54:12 compute-0 sudo[322542]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- lvm list --format json
Jan 20 14:54:12 compute-0 sudo[322542]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:54:12 compute-0 podman[322607]: 2026-01-20 14:54:12.811002251 +0000 UTC m=+0.056484951 container create 163d98e81adb5febee07bbf00e83592656249ed3463587418b3077e8d9064736 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_ardinghelli, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 20 14:54:12 compute-0 systemd[1]: Started libpod-conmon-163d98e81adb5febee07bbf00e83592656249ed3463587418b3077e8d9064736.scope.
Jan 20 14:54:12 compute-0 podman[322607]: 2026-01-20 14:54:12.785136014 +0000 UTC m=+0.030618744 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:54:12 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:54:12 compute-0 podman[322607]: 2026-01-20 14:54:12.909501361 +0000 UTC m=+0.154984081 container init 163d98e81adb5febee07bbf00e83592656249ed3463587418b3077e8d9064736 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_ardinghelli, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 20 14:54:12 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:54:12 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:54:12 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:54:12.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:54:12 compute-0 podman[322607]: 2026-01-20 14:54:12.920818876 +0000 UTC m=+0.166301576 container start 163d98e81adb5febee07bbf00e83592656249ed3463587418b3077e8d9064736 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_ardinghelli, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 20 14:54:12 compute-0 podman[322607]: 2026-01-20 14:54:12.924552727 +0000 UTC m=+0.170035457 container attach 163d98e81adb5febee07bbf00e83592656249ed3463587418b3077e8d9064736 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_ardinghelli, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 20 14:54:12 compute-0 boring_ardinghelli[322624]: 167 167
Jan 20 14:54:12 compute-0 systemd[1]: libpod-163d98e81adb5febee07bbf00e83592656249ed3463587418b3077e8d9064736.scope: Deactivated successfully.
Jan 20 14:54:12 compute-0 podman[322607]: 2026-01-20 14:54:12.928627846 +0000 UTC m=+0.174110556 container died 163d98e81adb5febee07bbf00e83592656249ed3463587418b3077e8d9064736 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_ardinghelli, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 20 14:54:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-bf929cf399562b222eaa19ea4ed9e05e93f001b11042650770150d42d504a06e-merged.mount: Deactivated successfully.
Jan 20 14:54:12 compute-0 podman[322607]: 2026-01-20 14:54:12.969034173 +0000 UTC m=+0.214516883 container remove 163d98e81adb5febee07bbf00e83592656249ed3463587418b3077e8d9064736 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_ardinghelli, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:54:12 compute-0 systemd[1]: libpod-conmon-163d98e81adb5febee07bbf00e83592656249ed3463587418b3077e8d9064736.scope: Deactivated successfully.
Jan 20 14:54:13 compute-0 nova_compute[250018]: 2026-01-20 14:54:13.050 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:54:13 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:54:13 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:54:13 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:54:13.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:54:13 compute-0 ceph-mon[74360]: pgmap v2035: 321 pgs: 321 active+clean; 896 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 7.2 MiB/s rd, 3.6 MiB/s wr, 296 op/s
Jan 20 14:54:13 compute-0 podman[322648]: 2026-01-20 14:54:13.152270195 +0000 UTC m=+0.049187565 container create 20575c0555c25bee3361398fa608656c19c9407bc06a2cbe1ba7c2600731d39b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_allen, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 20 14:54:13 compute-0 systemd[1]: Started libpod-conmon-20575c0555c25bee3361398fa608656c19c9407bc06a2cbe1ba7c2600731d39b.scope.
Jan 20 14:54:13 compute-0 podman[322648]: 2026-01-20 14:54:13.124490037 +0000 UTC m=+0.021407437 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:54:13 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:54:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63d04aef33a0f6346a5cb45db37e7ca12be11b187bc8a4cd3d982a6a3fde4693/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:54:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63d04aef33a0f6346a5cb45db37e7ca12be11b187bc8a4cd3d982a6a3fde4693/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:54:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63d04aef33a0f6346a5cb45db37e7ca12be11b187bc8a4cd3d982a6a3fde4693/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:54:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63d04aef33a0f6346a5cb45db37e7ca12be11b187bc8a4cd3d982a6a3fde4693/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:54:13 compute-0 podman[322648]: 2026-01-20 14:54:13.249688347 +0000 UTC m=+0.146605787 container init 20575c0555c25bee3361398fa608656c19c9407bc06a2cbe1ba7c2600731d39b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_allen, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:54:13 compute-0 podman[322648]: 2026-01-20 14:54:13.255623886 +0000 UTC m=+0.152541266 container start 20575c0555c25bee3361398fa608656c19c9407bc06a2cbe1ba7c2600731d39b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_allen, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 14:54:13 compute-0 podman[322648]: 2026-01-20 14:54:13.258901104 +0000 UTC m=+0.155818554 container attach 20575c0555c25bee3361398fa608656c19c9407bc06a2cbe1ba7c2600731d39b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_allen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:54:13 compute-0 nova_compute[250018]: 2026-01-20 14:54:13.377 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:54:13 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 20 14:54:13 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2752549181' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:54:13 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 20 14:54:13 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2752549181' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:54:13 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2036: 321 pgs: 321 active+clean; 933 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 8.8 MiB/s rd, 4.7 MiB/s wr, 310 op/s
Jan 20 14:54:14 compute-0 vigilant_allen[322665]: {
Jan 20 14:54:14 compute-0 vigilant_allen[322665]:     "0": [
Jan 20 14:54:14 compute-0 vigilant_allen[322665]:         {
Jan 20 14:54:14 compute-0 vigilant_allen[322665]:             "devices": [
Jan 20 14:54:14 compute-0 vigilant_allen[322665]:                 "/dev/loop3"
Jan 20 14:54:14 compute-0 vigilant_allen[322665]:             ],
Jan 20 14:54:14 compute-0 vigilant_allen[322665]:             "lv_name": "ceph_lv0",
Jan 20 14:54:14 compute-0 vigilant_allen[322665]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:54:14 compute-0 vigilant_allen[322665]:             "lv_size": "7511998464",
Jan 20 14:54:14 compute-0 vigilant_allen[322665]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e399cf45-e6b6-5393-99f1-75c601d3f188,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afc3740a-ccee-46f8-b343-22648cc89376,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 20 14:54:14 compute-0 vigilant_allen[322665]:             "lv_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 14:54:14 compute-0 vigilant_allen[322665]:             "name": "ceph_lv0",
Jan 20 14:54:14 compute-0 vigilant_allen[322665]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:54:14 compute-0 vigilant_allen[322665]:             "tags": {
Jan 20 14:54:14 compute-0 vigilant_allen[322665]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:54:14 compute-0 vigilant_allen[322665]:                 "ceph.block_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 14:54:14 compute-0 vigilant_allen[322665]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 14:54:14 compute-0 vigilant_allen[322665]:                 "ceph.cluster_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 14:54:14 compute-0 vigilant_allen[322665]:                 "ceph.cluster_name": "ceph",
Jan 20 14:54:14 compute-0 vigilant_allen[322665]:                 "ceph.crush_device_class": "",
Jan 20 14:54:14 compute-0 vigilant_allen[322665]:                 "ceph.encrypted": "0",
Jan 20 14:54:14 compute-0 vigilant_allen[322665]:                 "ceph.osd_fsid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 14:54:14 compute-0 vigilant_allen[322665]:                 "ceph.osd_id": "0",
Jan 20 14:54:14 compute-0 vigilant_allen[322665]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 14:54:14 compute-0 vigilant_allen[322665]:                 "ceph.type": "block",
Jan 20 14:54:14 compute-0 vigilant_allen[322665]:                 "ceph.vdo": "0"
Jan 20 14:54:14 compute-0 vigilant_allen[322665]:             },
Jan 20 14:54:14 compute-0 vigilant_allen[322665]:             "type": "block",
Jan 20 14:54:14 compute-0 vigilant_allen[322665]:             "vg_name": "ceph_vg0"
Jan 20 14:54:14 compute-0 vigilant_allen[322665]:         }
Jan 20 14:54:14 compute-0 vigilant_allen[322665]:     ]
Jan 20 14:54:14 compute-0 vigilant_allen[322665]: }
Jan 20 14:54:14 compute-0 systemd[1]: libpod-20575c0555c25bee3361398fa608656c19c9407bc06a2cbe1ba7c2600731d39b.scope: Deactivated successfully.
Jan 20 14:54:14 compute-0 podman[322648]: 2026-01-20 14:54:14.031053745 +0000 UTC m=+0.927971125 container died 20575c0555c25bee3361398fa608656c19c9407bc06a2cbe1ba7c2600731d39b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_allen, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 20 14:54:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-63d04aef33a0f6346a5cb45db37e7ca12be11b187bc8a4cd3d982a6a3fde4693-merged.mount: Deactivated successfully.
Jan 20 14:54:14 compute-0 podman[322648]: 2026-01-20 14:54:14.092456978 +0000 UTC m=+0.989374358 container remove 20575c0555c25bee3361398fa608656c19c9407bc06a2cbe1ba7c2600731d39b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_allen, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 20 14:54:14 compute-0 systemd[1]: libpod-conmon-20575c0555c25bee3361398fa608656c19c9407bc06a2cbe1ba7c2600731d39b.scope: Deactivated successfully.
Jan 20 14:54:14 compute-0 sudo[322542]: pam_unix(sudo:session): session closed for user root
Jan 20 14:54:14 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e292 do_prune osdmap full prune enabled
Jan 20 14:54:14 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/2752549181' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:54:14 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/2752549181' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:54:14 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e293 e293: 3 total, 3 up, 3 in
Jan 20 14:54:14 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e293: 3 total, 3 up, 3 in
Jan 20 14:54:14 compute-0 sudo[322687]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:54:14 compute-0 sudo[322687]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:54:14 compute-0 sudo[322687]: pam_unix(sudo:session): session closed for user root
Jan 20 14:54:14 compute-0 sudo[322712]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:54:14 compute-0 sudo[322712]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:54:14 compute-0 sudo[322712]: pam_unix(sudo:session): session closed for user root
Jan 20 14:54:14 compute-0 sudo[322737]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:54:14 compute-0 sudo[322737]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:54:14 compute-0 sudo[322737]: pam_unix(sudo:session): session closed for user root
Jan 20 14:54:14 compute-0 sudo[322762]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- raw list --format json
Jan 20 14:54:14 compute-0 sudo[322762]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:54:14 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e293 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:54:14 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e293 do_prune osdmap full prune enabled
Jan 20 14:54:14 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e294 e294: 3 total, 3 up, 3 in
Jan 20 14:54:14 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e294: 3 total, 3 up, 3 in
Jan 20 14:54:14 compute-0 podman[322827]: 2026-01-20 14:54:14.72908397 +0000 UTC m=+0.045449013 container create c73e26846d40caaf8e883f6f7a8f1933237db855d20daa0209d968955648ad62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_ride, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:54:14 compute-0 systemd[1]: Started libpod-conmon-c73e26846d40caaf8e883f6f7a8f1933237db855d20daa0209d968955648ad62.scope.
Jan 20 14:54:14 compute-0 podman[322827]: 2026-01-20 14:54:14.70859214 +0000 UTC m=+0.024957173 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:54:14 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:54:14 compute-0 podman[322827]: 2026-01-20 14:54:14.824713365 +0000 UTC m=+0.141078448 container init c73e26846d40caaf8e883f6f7a8f1933237db855d20daa0209d968955648ad62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_ride, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 14:54:14 compute-0 podman[322827]: 2026-01-20 14:54:14.834461727 +0000 UTC m=+0.150826770 container start c73e26846d40caaf8e883f6f7a8f1933237db855d20daa0209d968955648ad62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_ride, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:54:14 compute-0 podman[322827]: 2026-01-20 14:54:14.840032827 +0000 UTC m=+0.156397870 container attach c73e26846d40caaf8e883f6f7a8f1933237db855d20daa0209d968955648ad62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_ride, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 20 14:54:14 compute-0 gracious_ride[322843]: 167 167
Jan 20 14:54:14 compute-0 systemd[1]: libpod-c73e26846d40caaf8e883f6f7a8f1933237db855d20daa0209d968955648ad62.scope: Deactivated successfully.
Jan 20 14:54:14 compute-0 conmon[322843]: conmon c73e26846d40caaf8e88 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c73e26846d40caaf8e883f6f7a8f1933237db855d20daa0209d968955648ad62.scope/container/memory.events
Jan 20 14:54:14 compute-0 podman[322827]: 2026-01-20 14:54:14.845337989 +0000 UTC m=+0.161703062 container died c73e26846d40caaf8e883f6f7a8f1933237db855d20daa0209d968955648ad62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_ride, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 20 14:54:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-4dcbc5ce050df1ae58c48dab4ef90d2a89eec432aec024e75072ef04659952eb-merged.mount: Deactivated successfully.
Jan 20 14:54:14 compute-0 podman[322827]: 2026-01-20 14:54:14.898672815 +0000 UTC m=+0.215037888 container remove c73e26846d40caaf8e883f6f7a8f1933237db855d20daa0209d968955648ad62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_ride, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 20 14:54:14 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:54:14 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:54:14 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:54:14.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:54:14 compute-0 systemd[1]: libpod-conmon-c73e26846d40caaf8e883f6f7a8f1933237db855d20daa0209d968955648ad62.scope: Deactivated successfully.
Jan 20 14:54:15 compute-0 podman[322867]: 2026-01-20 14:54:15.1277632 +0000 UTC m=+0.053851930 container create 15207f761a53fd694d81b9496d849af6aeb05a602abe474d41ec5483acaf78cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_franklin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 20 14:54:15 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:54:15 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:54:15 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:54:15.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:54:15 compute-0 ceph-mon[74360]: pgmap v2036: 321 pgs: 321 active+clean; 933 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 8.8 MiB/s rd, 4.7 MiB/s wr, 310 op/s
Jan 20 14:54:15 compute-0 ceph-mon[74360]: osdmap e293: 3 total, 3 up, 3 in
Jan 20 14:54:15 compute-0 ceph-mon[74360]: osdmap e294: 3 total, 3 up, 3 in
Jan 20 14:54:15 compute-0 systemd[1]: Started libpod-conmon-15207f761a53fd694d81b9496d849af6aeb05a602abe474d41ec5483acaf78cf.scope.
Jan 20 14:54:15 compute-0 podman[322867]: 2026-01-20 14:54:15.096436837 +0000 UTC m=+0.022525617 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:54:15 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:54:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd6df5571f2d6e0f152f614a08669983332f365606e4f842d5370797174d4138/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:54:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd6df5571f2d6e0f152f614a08669983332f365606e4f842d5370797174d4138/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:54:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd6df5571f2d6e0f152f614a08669983332f365606e4f842d5370797174d4138/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:54:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd6df5571f2d6e0f152f614a08669983332f365606e4f842d5370797174d4138/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:54:15 compute-0 podman[322867]: 2026-01-20 14:54:15.247344089 +0000 UTC m=+0.173432819 container init 15207f761a53fd694d81b9496d849af6aeb05a602abe474d41ec5483acaf78cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_franklin, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 14:54:15 compute-0 podman[322867]: 2026-01-20 14:54:15.253627177 +0000 UTC m=+0.179715897 container start 15207f761a53fd694d81b9496d849af6aeb05a602abe474d41ec5483acaf78cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_franklin, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:54:15 compute-0 podman[322867]: 2026-01-20 14:54:15.256492214 +0000 UTC m=+0.182580944 container attach 15207f761a53fd694d81b9496d849af6aeb05a602abe474d41ec5483acaf78cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_franklin, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:54:15 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2039: 321 pgs: 321 active+clean; 931 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 9.1 MiB/s rd, 5.8 MiB/s wr, 298 op/s
Jan 20 14:54:16 compute-0 relaxed_franklin[322883]: {
Jan 20 14:54:16 compute-0 relaxed_franklin[322883]:     "afc3740a-ccee-46f8-b343-22648cc89376": {
Jan 20 14:54:16 compute-0 relaxed_franklin[322883]:         "ceph_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 14:54:16 compute-0 relaxed_franklin[322883]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 20 14:54:16 compute-0 relaxed_franklin[322883]:         "osd_id": 0,
Jan 20 14:54:16 compute-0 relaxed_franklin[322883]:         "osd_uuid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 14:54:16 compute-0 relaxed_franklin[322883]:         "type": "bluestore"
Jan 20 14:54:16 compute-0 relaxed_franklin[322883]:     }
Jan 20 14:54:16 compute-0 relaxed_franklin[322883]: }
Jan 20 14:54:16 compute-0 systemd[1]: libpod-15207f761a53fd694d81b9496d849af6aeb05a602abe474d41ec5483acaf78cf.scope: Deactivated successfully.
Jan 20 14:54:16 compute-0 podman[322867]: 2026-01-20 14:54:16.099159902 +0000 UTC m=+1.025248632 container died 15207f761a53fd694d81b9496d849af6aeb05a602abe474d41ec5483acaf78cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_franklin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:54:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-fd6df5571f2d6e0f152f614a08669983332f365606e4f842d5370797174d4138-merged.mount: Deactivated successfully.
Jan 20 14:54:16 compute-0 podman[322867]: 2026-01-20 14:54:16.154850382 +0000 UTC m=+1.080939092 container remove 15207f761a53fd694d81b9496d849af6aeb05a602abe474d41ec5483acaf78cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_franklin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:54:16 compute-0 systemd[1]: libpod-conmon-15207f761a53fd694d81b9496d849af6aeb05a602abe474d41ec5483acaf78cf.scope: Deactivated successfully.
Jan 20 14:54:16 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/474916819' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:54:16 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/4265517374' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:54:16 compute-0 ceph-mon[74360]: pgmap v2039: 321 pgs: 321 active+clean; 931 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 9.1 MiB/s rd, 5.8 MiB/s wr, 298 op/s
Jan 20 14:54:16 compute-0 sudo[322762]: pam_unix(sudo:session): session closed for user root
Jan 20 14:54:16 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 14:54:16 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:54:16 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 14:54:16 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:54:16 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev cd34d722-836c-4e50-9284-9b34011b4b03 does not exist
Jan 20 14:54:16 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 159b1abd-1f4a-47ad-97a0-6b2346afedce does not exist
Jan 20 14:54:16 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 5b52674c-76b8-4251-af4b-d70d1ca6bfb4 does not exist
Jan 20 14:54:16 compute-0 sudo[322917]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:54:16 compute-0 sudo[322917]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:54:16 compute-0 sudo[322917]: pam_unix(sudo:session): session closed for user root
Jan 20 14:54:16 compute-0 sudo[322942]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 14:54:16 compute-0 sudo[322942]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:54:16 compute-0 sudo[322942]: pam_unix(sudo:session): session closed for user root
Jan 20 14:54:16 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:54:16 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:54:16 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:54:16.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:54:17 compute-0 ovn_controller[148666]: 2026-01-20T14:54:17Z|00430|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Jan 20 14:54:17 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:54:17 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:54:17 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:54:17.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:54:17 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:54:17 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:54:17 compute-0 ovn_controller[148666]: 2026-01-20T14:54:17Z|00052|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f9:1a:06 10.100.0.4
Jan 20 14:54:17 compute-0 ovn_controller[148666]: 2026-01-20T14:54:17Z|00053|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f9:1a:06 10.100.0.4
Jan 20 14:54:17 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2040: 321 pgs: 321 active+clean; 889 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 9.1 MiB/s rd, 6.5 MiB/s wr, 317 op/s
Jan 20 14:54:18 compute-0 nova_compute[250018]: 2026-01-20 14:54:18.054 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:54:18 compute-0 ceph-mon[74360]: pgmap v2040: 321 pgs: 321 active+clean; 889 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 9.1 MiB/s rd, 6.5 MiB/s wr, 317 op/s
Jan 20 14:54:18 compute-0 nova_compute[250018]: 2026-01-20 14:54:18.380 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:54:18 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:54:18 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:54:18 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:54:18.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:54:19 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:54:19 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:54:19 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:54:19.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:54:19 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:54:19 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2041: 321 pgs: 321 active+clean; 887 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 8.3 MiB/s rd, 5.7 MiB/s wr, 352 op/s
Jan 20 14:54:20 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:54:20 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:54:20 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:54:20.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:54:21 compute-0 ceph-mon[74360]: pgmap v2041: 321 pgs: 321 active+clean; 887 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 8.3 MiB/s rd, 5.7 MiB/s wr, 352 op/s
Jan 20 14:54:21 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:54:21 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:54:21 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:54:21.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:54:21 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2042: 321 pgs: 321 active+clean; 847 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 6.8 MiB/s rd, 4.4 MiB/s wr, 407 op/s
Jan 20 14:54:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:54:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:54:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:54:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:54:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:54:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:54:22 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:54:22 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:54:22 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:54:22.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:54:23 compute-0 ceph-mon[74360]: pgmap v2042: 321 pgs: 321 active+clean; 847 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 6.8 MiB/s rd, 4.4 MiB/s wr, 407 op/s
Jan 20 14:54:23 compute-0 nova_compute[250018]: 2026-01-20 14:54:23.057 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:54:23 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:54:23 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:54:23 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:54:23.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:54:23 compute-0 nova_compute[250018]: 2026-01-20 14:54:23.383 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:54:23 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2043: 321 pgs: 321 active+clean; 828 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.3 MiB/s rd, 2.6 MiB/s wr, 333 op/s
Jan 20 14:54:24 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:54:24 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e294 do_prune osdmap full prune enabled
Jan 20 14:54:24 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e295 e295: 3 total, 3 up, 3 in
Jan 20 14:54:24 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e295: 3 total, 3 up, 3 in
Jan 20 14:54:24 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:54:24 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:54:24 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:54:24.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:54:25 compute-0 ceph-mon[74360]: pgmap v2043: 321 pgs: 321 active+clean; 828 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.3 MiB/s rd, 2.6 MiB/s wr, 333 op/s
Jan 20 14:54:25 compute-0 ceph-mon[74360]: osdmap e295: 3 total, 3 up, 3 in
Jan 20 14:54:25 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:54:25 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:54:25 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:54:25.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:54:25 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2045: 321 pgs: 321 active+clean; 822 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 2.6 MiB/s wr, 290 op/s
Jan 20 14:54:26 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:54:26 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:54:26 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:54:26.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:54:27 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:54:27 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:54:27 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:54:27.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:54:27 compute-0 ceph-mon[74360]: pgmap v2045: 321 pgs: 321 active+clean; 822 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 2.6 MiB/s wr, 290 op/s
Jan 20 14:54:27 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2046: 321 pgs: 321 active+clean; 822 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 2.1 MiB/s wr, 240 op/s
Jan 20 14:54:28 compute-0 nova_compute[250018]: 2026-01-20 14:54:28.059 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:54:28 compute-0 ceph-mon[74360]: pgmap v2046: 321 pgs: 321 active+clean; 822 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 2.1 MiB/s wr, 240 op/s
Jan 20 14:54:28 compute-0 nova_compute[250018]: 2026-01-20 14:54:28.385 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:54:28 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:54:28 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:54:28 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:54:28.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:54:29 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:54:29 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:54:29 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:54:29.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:54:29 compute-0 nova_compute[250018]: 2026-01-20 14:54:29.537 250022 DEBUG oslo_concurrency.lockutils [None req-e9e79326-d53d-40ba-ae5c-8aced49f1e53 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Acquiring lock "b346e1ba-9e83-4e7f-bc03-c327d3e4173b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:54:29 compute-0 nova_compute[250018]: 2026-01-20 14:54:29.538 250022 DEBUG oslo_concurrency.lockutils [None req-e9e79326-d53d-40ba-ae5c-8aced49f1e53 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Lock "b346e1ba-9e83-4e7f-bc03-c327d3e4173b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:54:29 compute-0 nova_compute[250018]: 2026-01-20 14:54:29.556 250022 DEBUG nova.compute.manager [None req-e9e79326-d53d-40ba-ae5c-8aced49f1e53 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 14:54:29 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e295 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:54:29 compute-0 nova_compute[250018]: 2026-01-20 14:54:29.718 250022 DEBUG oslo_concurrency.lockutils [None req-e9e79326-d53d-40ba-ae5c-8aced49f1e53 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:54:29 compute-0 nova_compute[250018]: 2026-01-20 14:54:29.718 250022 DEBUG oslo_concurrency.lockutils [None req-e9e79326-d53d-40ba-ae5c-8aced49f1e53 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:54:29 compute-0 nova_compute[250018]: 2026-01-20 14:54:29.725 250022 DEBUG nova.virt.hardware [None req-e9e79326-d53d-40ba-ae5c-8aced49f1e53 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 14:54:29 compute-0 nova_compute[250018]: 2026-01-20 14:54:29.726 250022 INFO nova.compute.claims [None req-e9e79326-d53d-40ba-ae5c-8aced49f1e53 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] Claim successful on node compute-0.ctlplane.example.com
Jan 20 14:54:29 compute-0 nova_compute[250018]: 2026-01-20 14:54:29.962 250022 DEBUG oslo_concurrency.processutils [None req-e9e79326-d53d-40ba-ae5c-8aced49f1e53 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:54:29 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2047: 321 pgs: 321 active+clean; 822 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 1.1 MiB/s wr, 172 op/s
Jan 20 14:54:30 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:54:30 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2413086570' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:54:30 compute-0 nova_compute[250018]: 2026-01-20 14:54:30.410 250022 DEBUG oslo_concurrency.processutils [None req-e9e79326-d53d-40ba-ae5c-8aced49f1e53 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:54:30 compute-0 nova_compute[250018]: 2026-01-20 14:54:30.416 250022 DEBUG nova.compute.provider_tree [None req-e9e79326-d53d-40ba-ae5c-8aced49f1e53 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:54:30 compute-0 nova_compute[250018]: 2026-01-20 14:54:30.435 250022 DEBUG nova.scheduler.client.report [None req-e9e79326-d53d-40ba-ae5c-8aced49f1e53 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:54:30 compute-0 nova_compute[250018]: 2026-01-20 14:54:30.471 250022 DEBUG oslo_concurrency.lockutils [None req-e9e79326-d53d-40ba-ae5c-8aced49f1e53 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.753s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:54:30 compute-0 nova_compute[250018]: 2026-01-20 14:54:30.472 250022 DEBUG nova.compute.manager [None req-e9e79326-d53d-40ba-ae5c-8aced49f1e53 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 14:54:30 compute-0 nova_compute[250018]: 2026-01-20 14:54:30.656 250022 DEBUG nova.compute.manager [None req-e9e79326-d53d-40ba-ae5c-8aced49f1e53 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 20 14:54:30 compute-0 nova_compute[250018]: 2026-01-20 14:54:30.656 250022 DEBUG nova.network.neutron [None req-e9e79326-d53d-40ba-ae5c-8aced49f1e53 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 14:54:30 compute-0 ceph-mon[74360]: pgmap v2047: 321 pgs: 321 active+clean; 822 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 1.1 MiB/s wr, 172 op/s
Jan 20 14:54:30 compute-0 nova_compute[250018]: 2026-01-20 14:54:30.712 250022 INFO nova.virt.libvirt.driver [None req-e9e79326-d53d-40ba-ae5c-8aced49f1e53 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 14:54:30 compute-0 nova_compute[250018]: 2026-01-20 14:54:30.738 250022 DEBUG nova.compute.manager [None req-e9e79326-d53d-40ba-ae5c-8aced49f1e53 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 14:54:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:54:30.765 160071 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:54:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:54:30.766 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:54:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:54:30.766 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:54:30 compute-0 nova_compute[250018]: 2026-01-20 14:54:30.851 250022 DEBUG nova.policy [None req-e9e79326-d53d-40ba-ae5c-8aced49f1e53 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '215db37373dc4ae5a75cbd6866f471da', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b3b1b7f5b4f84b5abbc401eb577c85c0', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 20 14:54:30 compute-0 nova_compute[250018]: 2026-01-20 14:54:30.863 250022 DEBUG nova.compute.manager [None req-e9e79326-d53d-40ba-ae5c-8aced49f1e53 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 14:54:30 compute-0 nova_compute[250018]: 2026-01-20 14:54:30.865 250022 DEBUG nova.virt.libvirt.driver [None req-e9e79326-d53d-40ba-ae5c-8aced49f1e53 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 14:54:30 compute-0 nova_compute[250018]: 2026-01-20 14:54:30.865 250022 INFO nova.virt.libvirt.driver [None req-e9e79326-d53d-40ba-ae5c-8aced49f1e53 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] Creating image(s)
Jan 20 14:54:30 compute-0 nova_compute[250018]: 2026-01-20 14:54:30.900 250022 DEBUG nova.storage.rbd_utils [None req-e9e79326-d53d-40ba-ae5c-8aced49f1e53 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] rbd image b346e1ba-9e83-4e7f-bc03-c327d3e4173b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:54:30 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:54:30 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:54:30 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:54:30.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:54:30 compute-0 nova_compute[250018]: 2026-01-20 14:54:30.934 250022 DEBUG nova.storage.rbd_utils [None req-e9e79326-d53d-40ba-ae5c-8aced49f1e53 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] rbd image b346e1ba-9e83-4e7f-bc03-c327d3e4173b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:54:30 compute-0 nova_compute[250018]: 2026-01-20 14:54:30.961 250022 DEBUG nova.storage.rbd_utils [None req-e9e79326-d53d-40ba-ae5c-8aced49f1e53 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] rbd image b346e1ba-9e83-4e7f-bc03-c327d3e4173b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:54:30 compute-0 nova_compute[250018]: 2026-01-20 14:54:30.965 250022 DEBUG oslo_concurrency.processutils [None req-e9e79326-d53d-40ba-ae5c-8aced49f1e53 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:54:31 compute-0 nova_compute[250018]: 2026-01-20 14:54:31.040 250022 DEBUG oslo_concurrency.processutils [None req-e9e79326-d53d-40ba-ae5c-8aced49f1e53 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:54:31 compute-0 nova_compute[250018]: 2026-01-20 14:54:31.041 250022 DEBUG oslo_concurrency.lockutils [None req-e9e79326-d53d-40ba-ae5c-8aced49f1e53 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Acquiring lock "82d5c1918fd7c974214c7a48c1793a7a82560462" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:54:31 compute-0 nova_compute[250018]: 2026-01-20 14:54:31.042 250022 DEBUG oslo_concurrency.lockutils [None req-e9e79326-d53d-40ba-ae5c-8aced49f1e53 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:54:31 compute-0 nova_compute[250018]: 2026-01-20 14:54:31.042 250022 DEBUG oslo_concurrency.lockutils [None req-e9e79326-d53d-40ba-ae5c-8aced49f1e53 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:54:31 compute-0 nova_compute[250018]: 2026-01-20 14:54:31.068 250022 DEBUG nova.storage.rbd_utils [None req-e9e79326-d53d-40ba-ae5c-8aced49f1e53 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] rbd image b346e1ba-9e83-4e7f-bc03-c327d3e4173b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:54:31 compute-0 nova_compute[250018]: 2026-01-20 14:54:31.078 250022 DEBUG oslo_concurrency.processutils [None req-e9e79326-d53d-40ba-ae5c-8aced49f1e53 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 b346e1ba-9e83-4e7f-bc03-c327d3e4173b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:54:31 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:54:31 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:54:31 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:54:31.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:54:31 compute-0 nova_compute[250018]: 2026-01-20 14:54:31.426 250022 DEBUG oslo_concurrency.processutils [None req-e9e79326-d53d-40ba-ae5c-8aced49f1e53 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 b346e1ba-9e83-4e7f-bc03-c327d3e4173b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.349s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:54:31 compute-0 nova_compute[250018]: 2026-01-20 14:54:31.508 250022 DEBUG nova.storage.rbd_utils [None req-e9e79326-d53d-40ba-ae5c-8aced49f1e53 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] resizing rbd image b346e1ba-9e83-4e7f-bc03-c327d3e4173b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 20 14:54:31 compute-0 nova_compute[250018]: 2026-01-20 14:54:31.633 250022 DEBUG nova.objects.instance [None req-e9e79326-d53d-40ba-ae5c-8aced49f1e53 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Lazy-loading 'migration_context' on Instance uuid b346e1ba-9e83-4e7f-bc03-c327d3e4173b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:54:31 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/2413086570' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:54:31 compute-0 nova_compute[250018]: 2026-01-20 14:54:31.741 250022 DEBUG nova.virt.libvirt.driver [None req-e9e79326-d53d-40ba-ae5c-8aced49f1e53 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 14:54:31 compute-0 nova_compute[250018]: 2026-01-20 14:54:31.742 250022 DEBUG nova.virt.libvirt.driver [None req-e9e79326-d53d-40ba-ae5c-8aced49f1e53 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] Ensure instance console log exists: /var/lib/nova/instances/b346e1ba-9e83-4e7f-bc03-c327d3e4173b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 14:54:31 compute-0 nova_compute[250018]: 2026-01-20 14:54:31.743 250022 DEBUG oslo_concurrency.lockutils [None req-e9e79326-d53d-40ba-ae5c-8aced49f1e53 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:54:31 compute-0 nova_compute[250018]: 2026-01-20 14:54:31.743 250022 DEBUG oslo_concurrency.lockutils [None req-e9e79326-d53d-40ba-ae5c-8aced49f1e53 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:54:31 compute-0 nova_compute[250018]: 2026-01-20 14:54:31.744 250022 DEBUG oslo_concurrency.lockutils [None req-e9e79326-d53d-40ba-ae5c-8aced49f1e53 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:54:31 compute-0 sudo[323163]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:54:31 compute-0 sudo[323163]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:54:31 compute-0 sudo[323163]: pam_unix(sudo:session): session closed for user root
Jan 20 14:54:31 compute-0 sudo[323188]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:54:31 compute-0 sudo[323188]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:54:31 compute-0 sudo[323188]: pam_unix(sudo:session): session closed for user root
Jan 20 14:54:32 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2048: 321 pgs: 321 active+clean; 822 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 54 KiB/s wr, 88 op/s
Jan 20 14:54:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:54:32.088 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=39, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=38) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:54:32 compute-0 nova_compute[250018]: 2026-01-20 14:54:32.088 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:54:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:54:32.089 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 14:54:32 compute-0 nova_compute[250018]: 2026-01-20 14:54:32.223 250022 DEBUG nova.network.neutron [None req-e9e79326-d53d-40ba-ae5c-8aced49f1e53 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] Successfully created port: 5f645763-9f97-4686-80ab-6df7299b1235 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 20 14:54:32 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e295 do_prune osdmap full prune enabled
Jan 20 14:54:32 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e296 e296: 3 total, 3 up, 3 in
Jan 20 14:54:32 compute-0 ceph-mon[74360]: pgmap v2048: 321 pgs: 321 active+clean; 822 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 54 KiB/s wr, 88 op/s
Jan 20 14:54:32 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e296: 3 total, 3 up, 3 in
Jan 20 14:54:32 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:54:32 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:54:32 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:54:32.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:54:33 compute-0 nova_compute[250018]: 2026-01-20 14:54:33.063 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:54:33 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:54:33 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:54:33 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:54:33.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:54:33 compute-0 nova_compute[250018]: 2026-01-20 14:54:33.387 250022 DEBUG nova.network.neutron [None req-e9e79326-d53d-40ba-ae5c-8aced49f1e53 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] Successfully updated port: 5f645763-9f97-4686-80ab-6df7299b1235 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 20 14:54:33 compute-0 nova_compute[250018]: 2026-01-20 14:54:33.388 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:54:33 compute-0 nova_compute[250018]: 2026-01-20 14:54:33.420 250022 DEBUG oslo_concurrency.lockutils [None req-e9e79326-d53d-40ba-ae5c-8aced49f1e53 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Acquiring lock "refresh_cache-b346e1ba-9e83-4e7f-bc03-c327d3e4173b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:54:33 compute-0 nova_compute[250018]: 2026-01-20 14:54:33.421 250022 DEBUG oslo_concurrency.lockutils [None req-e9e79326-d53d-40ba-ae5c-8aced49f1e53 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Acquired lock "refresh_cache-b346e1ba-9e83-4e7f-bc03-c327d3e4173b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:54:33 compute-0 nova_compute[250018]: 2026-01-20 14:54:33.421 250022 DEBUG nova.network.neutron [None req-e9e79326-d53d-40ba-ae5c-8aced49f1e53 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:54:33 compute-0 nova_compute[250018]: 2026-01-20 14:54:33.486 250022 DEBUG nova.compute.manager [req-babbf24c-013e-4f08-be45-84d1470fbc9c req-1d556f52-a468-4133-89b5-6e58eaf10927 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] Received event network-changed-5f645763-9f97-4686-80ab-6df7299b1235 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:54:33 compute-0 nova_compute[250018]: 2026-01-20 14:54:33.486 250022 DEBUG nova.compute.manager [req-babbf24c-013e-4f08-be45-84d1470fbc9c req-1d556f52-a468-4133-89b5-6e58eaf10927 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] Refreshing instance network info cache due to event network-changed-5f645763-9f97-4686-80ab-6df7299b1235. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:54:33 compute-0 nova_compute[250018]: 2026-01-20 14:54:33.487 250022 DEBUG oslo_concurrency.lockutils [req-babbf24c-013e-4f08-be45-84d1470fbc9c req-1d556f52-a468-4133-89b5-6e58eaf10927 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-b346e1ba-9e83-4e7f-bc03-c327d3e4173b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:54:33 compute-0 nova_compute[250018]: 2026-01-20 14:54:33.626 250022 DEBUG nova.network.neutron [None req-e9e79326-d53d-40ba-ae5c-8aced49f1e53 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 14:54:33 compute-0 ceph-mon[74360]: osdmap e296: 3 total, 3 up, 3 in
Jan 20 14:54:34 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2050: 321 pgs: 321 active+clean; 826 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 686 KiB/s rd, 498 KiB/s wr, 63 op/s
Jan 20 14:54:34 compute-0 nova_compute[250018]: 2026-01-20 14:54:34.492 250022 DEBUG nova.network.neutron [None req-e9e79326-d53d-40ba-ae5c-8aced49f1e53 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] Updating instance_info_cache with network_info: [{"id": "5f645763-9f97-4686-80ab-6df7299b1235", "address": "fa:16:3e:a5:83:5d", "network": {"id": "41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1445030024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3b1b7f5b4f84b5abbc401eb577c85c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5f645763-9f", "ovs_interfaceid": "5f645763-9f97-4686-80ab-6df7299b1235", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:54:34 compute-0 nova_compute[250018]: 2026-01-20 14:54:34.541 250022 DEBUG oslo_concurrency.lockutils [None req-e9e79326-d53d-40ba-ae5c-8aced49f1e53 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Releasing lock "refresh_cache-b346e1ba-9e83-4e7f-bc03-c327d3e4173b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:54:34 compute-0 nova_compute[250018]: 2026-01-20 14:54:34.541 250022 DEBUG nova.compute.manager [None req-e9e79326-d53d-40ba-ae5c-8aced49f1e53 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] Instance network_info: |[{"id": "5f645763-9f97-4686-80ab-6df7299b1235", "address": "fa:16:3e:a5:83:5d", "network": {"id": "41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1445030024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3b1b7f5b4f84b5abbc401eb577c85c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5f645763-9f", "ovs_interfaceid": "5f645763-9f97-4686-80ab-6df7299b1235", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 20 14:54:34 compute-0 nova_compute[250018]: 2026-01-20 14:54:34.542 250022 DEBUG oslo_concurrency.lockutils [req-babbf24c-013e-4f08-be45-84d1470fbc9c req-1d556f52-a468-4133-89b5-6e58eaf10927 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-b346e1ba-9e83-4e7f-bc03-c327d3e4173b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:54:34 compute-0 nova_compute[250018]: 2026-01-20 14:54:34.542 250022 DEBUG nova.network.neutron [req-babbf24c-013e-4f08-be45-84d1470fbc9c req-1d556f52-a468-4133-89b5-6e58eaf10927 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] Refreshing network info cache for port 5f645763-9f97-4686-80ab-6df7299b1235 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:54:34 compute-0 nova_compute[250018]: 2026-01-20 14:54:34.544 250022 DEBUG nova.virt.libvirt.driver [None req-e9e79326-d53d-40ba-ae5c-8aced49f1e53 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] Start _get_guest_xml network_info=[{"id": "5f645763-9f97-4686-80ab-6df7299b1235", "address": "fa:16:3e:a5:83:5d", "network": {"id": "41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1445030024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3b1b7f5b4f84b5abbc401eb577c85c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5f645763-9f", "ovs_interfaceid": "5f645763-9f97-4686-80ab-6df7299b1235", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encrypted': False, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'guest_format': None, 'size': 0, 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 14:54:34 compute-0 nova_compute[250018]: 2026-01-20 14:54:34.549 250022 WARNING nova.virt.libvirt.driver [None req-e9e79326-d53d-40ba-ae5c-8aced49f1e53 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:54:34 compute-0 nova_compute[250018]: 2026-01-20 14:54:34.553 250022 DEBUG nova.virt.libvirt.host [None req-e9e79326-d53d-40ba-ae5c-8aced49f1e53 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 14:54:34 compute-0 nova_compute[250018]: 2026-01-20 14:54:34.554 250022 DEBUG nova.virt.libvirt.host [None req-e9e79326-d53d-40ba-ae5c-8aced49f1e53 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 14:54:34 compute-0 nova_compute[250018]: 2026-01-20 14:54:34.559 250022 DEBUG nova.virt.libvirt.host [None req-e9e79326-d53d-40ba-ae5c-8aced49f1e53 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 14:54:34 compute-0 nova_compute[250018]: 2026-01-20 14:54:34.559 250022 DEBUG nova.virt.libvirt.host [None req-e9e79326-d53d-40ba-ae5c-8aced49f1e53 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 14:54:34 compute-0 nova_compute[250018]: 2026-01-20 14:54:34.560 250022 DEBUG nova.virt.libvirt.driver [None req-e9e79326-d53d-40ba-ae5c-8aced49f1e53 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 14:54:34 compute-0 nova_compute[250018]: 2026-01-20 14:54:34.561 250022 DEBUG nova.virt.hardware [None req-e9e79326-d53d-40ba-ae5c-8aced49f1e53 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 14:54:34 compute-0 nova_compute[250018]: 2026-01-20 14:54:34.561 250022 DEBUG nova.virt.hardware [None req-e9e79326-d53d-40ba-ae5c-8aced49f1e53 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 14:54:34 compute-0 nova_compute[250018]: 2026-01-20 14:54:34.561 250022 DEBUG nova.virt.hardware [None req-e9e79326-d53d-40ba-ae5c-8aced49f1e53 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 14:54:34 compute-0 nova_compute[250018]: 2026-01-20 14:54:34.561 250022 DEBUG nova.virt.hardware [None req-e9e79326-d53d-40ba-ae5c-8aced49f1e53 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 14:54:34 compute-0 nova_compute[250018]: 2026-01-20 14:54:34.562 250022 DEBUG nova.virt.hardware [None req-e9e79326-d53d-40ba-ae5c-8aced49f1e53 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 14:54:34 compute-0 nova_compute[250018]: 2026-01-20 14:54:34.562 250022 DEBUG nova.virt.hardware [None req-e9e79326-d53d-40ba-ae5c-8aced49f1e53 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 14:54:34 compute-0 nova_compute[250018]: 2026-01-20 14:54:34.562 250022 DEBUG nova.virt.hardware [None req-e9e79326-d53d-40ba-ae5c-8aced49f1e53 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 14:54:34 compute-0 nova_compute[250018]: 2026-01-20 14:54:34.562 250022 DEBUG nova.virt.hardware [None req-e9e79326-d53d-40ba-ae5c-8aced49f1e53 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 14:54:34 compute-0 nova_compute[250018]: 2026-01-20 14:54:34.562 250022 DEBUG nova.virt.hardware [None req-e9e79326-d53d-40ba-ae5c-8aced49f1e53 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 14:54:34 compute-0 nova_compute[250018]: 2026-01-20 14:54:34.563 250022 DEBUG nova.virt.hardware [None req-e9e79326-d53d-40ba-ae5c-8aced49f1e53 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 14:54:34 compute-0 nova_compute[250018]: 2026-01-20 14:54:34.563 250022 DEBUG nova.virt.hardware [None req-e9e79326-d53d-40ba-ae5c-8aced49f1e53 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 14:54:34 compute-0 nova_compute[250018]: 2026-01-20 14:54:34.565 250022 DEBUG oslo_concurrency.processutils [None req-e9e79326-d53d-40ba-ae5c-8aced49f1e53 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:54:34 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:54:34 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:54:34 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:54:34 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:54:34.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:54:34 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:54:34 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3198119657' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:54:34 compute-0 nova_compute[250018]: 2026-01-20 14:54:34.994 250022 DEBUG oslo_concurrency.processutils [None req-e9e79326-d53d-40ba-ae5c-8aced49f1e53 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:54:35 compute-0 nova_compute[250018]: 2026-01-20 14:54:35.023 250022 DEBUG nova.storage.rbd_utils [None req-e9e79326-d53d-40ba-ae5c-8aced49f1e53 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] rbd image b346e1ba-9e83-4e7f-bc03-c327d3e4173b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:54:35 compute-0 nova_compute[250018]: 2026-01-20 14:54:35.027 250022 DEBUG oslo_concurrency.processutils [None req-e9e79326-d53d-40ba-ae5c-8aced49f1e53 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:54:35 compute-0 sshd-session[323215]: Invalid user postgres from 157.245.78.139 port 51754
Jan 20 14:54:35 compute-0 nova_compute[250018]: 2026-01-20 14:54:35.072 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:54:35 compute-0 podman[323258]: 2026-01-20 14:54:35.119263754 +0000 UTC m=+0.059893222 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 20 14:54:35 compute-0 sshd-session[323215]: Connection closed by invalid user postgres 157.245.78.139 port 51754 [preauth]
Jan 20 14:54:35 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:54:35 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:54:35 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:54:35.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:54:35 compute-0 ceph-mon[74360]: pgmap v2050: 321 pgs: 321 active+clean; 826 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 686 KiB/s rd, 498 KiB/s wr, 63 op/s
Jan 20 14:54:35 compute-0 podman[323256]: 2026-01-20 14:54:35.247059364 +0000 UTC m=+0.187202439 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, 
config_id=ovn_controller, io.buildah.version=1.41.3)
Jan 20 14:54:35 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:54:35 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3718312487' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:54:35 compute-0 nova_compute[250018]: 2026-01-20 14:54:35.507 250022 DEBUG oslo_concurrency.processutils [None req-e9e79326-d53d-40ba-ae5c-8aced49f1e53 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:54:35 compute-0 nova_compute[250018]: 2026-01-20 14:54:35.510 250022 DEBUG nova.virt.libvirt.vif [None req-e9e79326-d53d-40ba-ae5c-8aced49f1e53 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:54:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-141692785',display_name='tempest-ServerActionsTestOtherB-server-141692785',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-141692785',id=123,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBO99uJ9+FwgjxRb/9u+f3Mj9/VKSDM+OKd66Ygsg8lEO+7bGpDEQrC5BIaSV+Na5YF+3DqUwLNmAYSN9IkTSGbRPw5y8813A+KsiNHebrpnZ7oReyT+5/zNQYafCHVAfGA==',key_name='tempest-keypair-302882914',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b3b1b7f5b4f84b5abbc401eb577c85c0',ramdisk_id='',reservation_id='r-c8v02o4r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-1136521362',owner_user_name='tempest-ServerActionsTestOtherB-1136521362-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:54:30Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='215db37373dc4ae5a75cbd6866f471da',uuid=b346e1ba-9e83-4e7f-bc03-c327d3e4173b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5f645763-9f97-4686-80ab-6df7299b1235", "address": "fa:16:3e:a5:83:5d", "network": {"id": "41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1445030024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3b1b7f5b4f84b5abbc401eb577c85c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5f645763-9f", "ovs_interfaceid": "5f645763-9f97-4686-80ab-6df7299b1235", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 14:54:35 compute-0 nova_compute[250018]: 2026-01-20 14:54:35.511 250022 DEBUG nova.network.os_vif_util [None req-e9e79326-d53d-40ba-ae5c-8aced49f1e53 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Converting VIF {"id": "5f645763-9f97-4686-80ab-6df7299b1235", "address": "fa:16:3e:a5:83:5d", "network": {"id": "41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1445030024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3b1b7f5b4f84b5abbc401eb577c85c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5f645763-9f", "ovs_interfaceid": "5f645763-9f97-4686-80ab-6df7299b1235", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:54:35 compute-0 nova_compute[250018]: 2026-01-20 14:54:35.512 250022 DEBUG nova.network.os_vif_util [None req-e9e79326-d53d-40ba-ae5c-8aced49f1e53 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a5:83:5d,bridge_name='br-int',has_traffic_filtering=True,id=5f645763-9f97-4686-80ab-6df7299b1235,network=Network(41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5f645763-9f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:54:35 compute-0 nova_compute[250018]: 2026-01-20 14:54:35.513 250022 DEBUG nova.objects.instance [None req-e9e79326-d53d-40ba-ae5c-8aced49f1e53 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Lazy-loading 'pci_devices' on Instance uuid b346e1ba-9e83-4e7f-bc03-c327d3e4173b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:54:35 compute-0 nova_compute[250018]: 2026-01-20 14:54:35.536 250022 DEBUG nova.virt.libvirt.driver [None req-e9e79326-d53d-40ba-ae5c-8aced49f1e53 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] End _get_guest_xml xml=<domain type="kvm">
Jan 20 14:54:35 compute-0 nova_compute[250018]:   <uuid>b346e1ba-9e83-4e7f-bc03-c327d3e4173b</uuid>
Jan 20 14:54:35 compute-0 nova_compute[250018]:   <name>instance-0000007b</name>
Jan 20 14:54:35 compute-0 nova_compute[250018]:   <memory>131072</memory>
Jan 20 14:54:35 compute-0 nova_compute[250018]:   <vcpu>1</vcpu>
Jan 20 14:54:35 compute-0 nova_compute[250018]:   <metadata>
Jan 20 14:54:35 compute-0 nova_compute[250018]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 14:54:35 compute-0 nova_compute[250018]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 14:54:35 compute-0 nova_compute[250018]:       <nova:name>tempest-ServerActionsTestOtherB-server-141692785</nova:name>
Jan 20 14:54:35 compute-0 nova_compute[250018]:       <nova:creationTime>2026-01-20 14:54:34</nova:creationTime>
Jan 20 14:54:35 compute-0 nova_compute[250018]:       <nova:flavor name="m1.nano">
Jan 20 14:54:35 compute-0 nova_compute[250018]:         <nova:memory>128</nova:memory>
Jan 20 14:54:35 compute-0 nova_compute[250018]:         <nova:disk>1</nova:disk>
Jan 20 14:54:35 compute-0 nova_compute[250018]:         <nova:swap>0</nova:swap>
Jan 20 14:54:35 compute-0 nova_compute[250018]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 14:54:35 compute-0 nova_compute[250018]:         <nova:vcpus>1</nova:vcpus>
Jan 20 14:54:35 compute-0 nova_compute[250018]:       </nova:flavor>
Jan 20 14:54:35 compute-0 nova_compute[250018]:       <nova:owner>
Jan 20 14:54:35 compute-0 nova_compute[250018]:         <nova:user uuid="215db37373dc4ae5a75cbd6866f471da">tempest-ServerActionsTestOtherB-1136521362-project-member</nova:user>
Jan 20 14:54:35 compute-0 nova_compute[250018]:         <nova:project uuid="b3b1b7f5b4f84b5abbc401eb577c85c0">tempest-ServerActionsTestOtherB-1136521362</nova:project>
Jan 20 14:54:35 compute-0 nova_compute[250018]:       </nova:owner>
Jan 20 14:54:35 compute-0 nova_compute[250018]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 14:54:35 compute-0 nova_compute[250018]:       <nova:ports>
Jan 20 14:54:35 compute-0 nova_compute[250018]:         <nova:port uuid="5f645763-9f97-4686-80ab-6df7299b1235">
Jan 20 14:54:35 compute-0 nova_compute[250018]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Jan 20 14:54:35 compute-0 nova_compute[250018]:         </nova:port>
Jan 20 14:54:35 compute-0 nova_compute[250018]:       </nova:ports>
Jan 20 14:54:35 compute-0 nova_compute[250018]:     </nova:instance>
Jan 20 14:54:35 compute-0 nova_compute[250018]:   </metadata>
Jan 20 14:54:35 compute-0 nova_compute[250018]:   <sysinfo type="smbios">
Jan 20 14:54:35 compute-0 nova_compute[250018]:     <system>
Jan 20 14:54:35 compute-0 nova_compute[250018]:       <entry name="manufacturer">RDO</entry>
Jan 20 14:54:35 compute-0 nova_compute[250018]:       <entry name="product">OpenStack Compute</entry>
Jan 20 14:54:35 compute-0 nova_compute[250018]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 14:54:35 compute-0 nova_compute[250018]:       <entry name="serial">b346e1ba-9e83-4e7f-bc03-c327d3e4173b</entry>
Jan 20 14:54:35 compute-0 nova_compute[250018]:       <entry name="uuid">b346e1ba-9e83-4e7f-bc03-c327d3e4173b</entry>
Jan 20 14:54:35 compute-0 nova_compute[250018]:       <entry name="family">Virtual Machine</entry>
Jan 20 14:54:35 compute-0 nova_compute[250018]:     </system>
Jan 20 14:54:35 compute-0 nova_compute[250018]:   </sysinfo>
Jan 20 14:54:35 compute-0 nova_compute[250018]:   <os>
Jan 20 14:54:35 compute-0 nova_compute[250018]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 14:54:35 compute-0 nova_compute[250018]:     <boot dev="hd"/>
Jan 20 14:54:35 compute-0 nova_compute[250018]:     <smbios mode="sysinfo"/>
Jan 20 14:54:35 compute-0 nova_compute[250018]:   </os>
Jan 20 14:54:35 compute-0 nova_compute[250018]:   <features>
Jan 20 14:54:35 compute-0 nova_compute[250018]:     <acpi/>
Jan 20 14:54:35 compute-0 nova_compute[250018]:     <apic/>
Jan 20 14:54:35 compute-0 nova_compute[250018]:     <vmcoreinfo/>
Jan 20 14:54:35 compute-0 nova_compute[250018]:   </features>
Jan 20 14:54:35 compute-0 nova_compute[250018]:   <clock offset="utc">
Jan 20 14:54:35 compute-0 nova_compute[250018]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 14:54:35 compute-0 nova_compute[250018]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 14:54:35 compute-0 nova_compute[250018]:     <timer name="hpet" present="no"/>
Jan 20 14:54:35 compute-0 nova_compute[250018]:   </clock>
Jan 20 14:54:35 compute-0 nova_compute[250018]:   <cpu mode="custom" match="exact">
Jan 20 14:54:35 compute-0 nova_compute[250018]:     <model>Nehalem</model>
Jan 20 14:54:35 compute-0 nova_compute[250018]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 14:54:35 compute-0 nova_compute[250018]:   </cpu>
Jan 20 14:54:35 compute-0 nova_compute[250018]:   <devices>
Jan 20 14:54:35 compute-0 nova_compute[250018]:     <disk type="network" device="disk">
Jan 20 14:54:35 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 14:54:35 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/b346e1ba-9e83-4e7f-bc03-c327d3e4173b_disk">
Jan 20 14:54:35 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:54:35 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:54:35 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:54:35 compute-0 nova_compute[250018]:       </source>
Jan 20 14:54:35 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 14:54:35 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:54:35 compute-0 nova_compute[250018]:       </auth>
Jan 20 14:54:35 compute-0 nova_compute[250018]:       <target dev="vda" bus="virtio"/>
Jan 20 14:54:35 compute-0 nova_compute[250018]:     </disk>
Jan 20 14:54:35 compute-0 nova_compute[250018]:     <disk type="network" device="cdrom">
Jan 20 14:54:35 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 14:54:35 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/b346e1ba-9e83-4e7f-bc03-c327d3e4173b_disk.config">
Jan 20 14:54:35 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:54:35 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:54:35 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:54:35 compute-0 nova_compute[250018]:       </source>
Jan 20 14:54:35 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 14:54:35 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:54:35 compute-0 nova_compute[250018]:       </auth>
Jan 20 14:54:35 compute-0 nova_compute[250018]:       <target dev="sda" bus="sata"/>
Jan 20 14:54:35 compute-0 nova_compute[250018]:     </disk>
Jan 20 14:54:35 compute-0 nova_compute[250018]:     <interface type="ethernet">
Jan 20 14:54:35 compute-0 nova_compute[250018]:       <mac address="fa:16:3e:a5:83:5d"/>
Jan 20 14:54:35 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 14:54:35 compute-0 nova_compute[250018]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 14:54:35 compute-0 nova_compute[250018]:       <mtu size="1442"/>
Jan 20 14:54:35 compute-0 nova_compute[250018]:       <target dev="tap5f645763-9f"/>
Jan 20 14:54:35 compute-0 nova_compute[250018]:     </interface>
Jan 20 14:54:35 compute-0 nova_compute[250018]:     <serial type="pty">
Jan 20 14:54:35 compute-0 nova_compute[250018]:       <log file="/var/lib/nova/instances/b346e1ba-9e83-4e7f-bc03-c327d3e4173b/console.log" append="off"/>
Jan 20 14:54:35 compute-0 nova_compute[250018]:     </serial>
Jan 20 14:54:35 compute-0 nova_compute[250018]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 14:54:35 compute-0 nova_compute[250018]:     <video>
Jan 20 14:54:35 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 14:54:35 compute-0 nova_compute[250018]:     </video>
Jan 20 14:54:35 compute-0 nova_compute[250018]:     <input type="tablet" bus="usb"/>
Jan 20 14:54:35 compute-0 nova_compute[250018]:     <rng model="virtio">
Jan 20 14:54:35 compute-0 nova_compute[250018]:       <backend model="random">/dev/urandom</backend>
Jan 20 14:54:35 compute-0 nova_compute[250018]:     </rng>
Jan 20 14:54:35 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root"/>
Jan 20 14:54:35 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:54:35 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:54:35 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:54:35 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:54:35 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:54:35 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:54:35 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:54:35 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:54:35 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:54:35 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:54:35 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:54:35 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:54:35 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:54:35 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:54:35 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:54:35 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:54:35 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:54:35 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:54:35 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:54:35 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:54:35 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:54:35 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:54:35 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:54:35 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:54:35 compute-0 nova_compute[250018]:     <controller type="usb" index="0"/>
Jan 20 14:54:35 compute-0 nova_compute[250018]:     <memballoon model="virtio">
Jan 20 14:54:35 compute-0 nova_compute[250018]:       <stats period="10"/>
Jan 20 14:54:35 compute-0 nova_compute[250018]:     </memballoon>
Jan 20 14:54:35 compute-0 nova_compute[250018]:   </devices>
Jan 20 14:54:35 compute-0 nova_compute[250018]: </domain>
Jan 20 14:54:35 compute-0 nova_compute[250018]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 14:54:35 compute-0 nova_compute[250018]: 2026-01-20 14:54:35.537 250022 DEBUG nova.compute.manager [None req-e9e79326-d53d-40ba-ae5c-8aced49f1e53 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] Preparing to wait for external event network-vif-plugged-5f645763-9f97-4686-80ab-6df7299b1235 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 20 14:54:35 compute-0 nova_compute[250018]: 2026-01-20 14:54:35.538 250022 DEBUG oslo_concurrency.lockutils [None req-e9e79326-d53d-40ba-ae5c-8aced49f1e53 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Acquiring lock "b346e1ba-9e83-4e7f-bc03-c327d3e4173b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:54:35 compute-0 nova_compute[250018]: 2026-01-20 14:54:35.538 250022 DEBUG oslo_concurrency.lockutils [None req-e9e79326-d53d-40ba-ae5c-8aced49f1e53 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Lock "b346e1ba-9e83-4e7f-bc03-c327d3e4173b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:54:35 compute-0 nova_compute[250018]: 2026-01-20 14:54:35.538 250022 DEBUG oslo_concurrency.lockutils [None req-e9e79326-d53d-40ba-ae5c-8aced49f1e53 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Lock "b346e1ba-9e83-4e7f-bc03-c327d3e4173b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:54:35 compute-0 nova_compute[250018]: 2026-01-20 14:54:35.539 250022 DEBUG nova.virt.libvirt.vif [None req-e9e79326-d53d-40ba-ae5c-8aced49f1e53 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:54:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-141692785',display_name='tempest-ServerActionsTestOtherB-server-141692785',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-141692785',id=123,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBO99uJ9+FwgjxRb/9u+f3Mj9/VKSDM+OKd66Ygsg8lEO+7bGpDEQrC5BIaSV+Na5YF+3DqUwLNmAYSN9IkTSGbRPw5y8813A+KsiNHebrpnZ7oReyT+5/zNQYafCHVAfGA==',key_name='tempest-keypair-302882914',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b3b1b7f5b4f84b5abbc401eb577c85c0',ramdisk_id='',reservation_id='r-c8v02o4r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-1136521362',owner_user_name='tempest-ServerActionsTestOtherB-1136521362-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:54:30Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='215db37373dc4ae5a75cbd6866f471da',uuid=b346e1ba-9e83-4e7f-bc03-c327d3e4173b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5f645763-9f97-4686-80ab-6df7299b1235", "address": "fa:16:3e:a5:83:5d", "network": {"id": "41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1445030024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3b1b7f5b4f84b5abbc401eb577c85c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5f645763-9f", "ovs_interfaceid": "5f645763-9f97-4686-80ab-6df7299b1235", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 14:54:35 compute-0 nova_compute[250018]: 2026-01-20 14:54:35.539 250022 DEBUG nova.network.os_vif_util [None req-e9e79326-d53d-40ba-ae5c-8aced49f1e53 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Converting VIF {"id": "5f645763-9f97-4686-80ab-6df7299b1235", "address": "fa:16:3e:a5:83:5d", "network": {"id": "41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1445030024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3b1b7f5b4f84b5abbc401eb577c85c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5f645763-9f", "ovs_interfaceid": "5f645763-9f97-4686-80ab-6df7299b1235", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:54:35 compute-0 nova_compute[250018]: 2026-01-20 14:54:35.540 250022 DEBUG nova.network.os_vif_util [None req-e9e79326-d53d-40ba-ae5c-8aced49f1e53 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a5:83:5d,bridge_name='br-int',has_traffic_filtering=True,id=5f645763-9f97-4686-80ab-6df7299b1235,network=Network(41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5f645763-9f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:54:35 compute-0 nova_compute[250018]: 2026-01-20 14:54:35.540 250022 DEBUG os_vif [None req-e9e79326-d53d-40ba-ae5c-8aced49f1e53 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a5:83:5d,bridge_name='br-int',has_traffic_filtering=True,id=5f645763-9f97-4686-80ab-6df7299b1235,network=Network(41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5f645763-9f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 14:54:35 compute-0 nova_compute[250018]: 2026-01-20 14:54:35.541 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:54:35 compute-0 nova_compute[250018]: 2026-01-20 14:54:35.541 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:54:35 compute-0 nova_compute[250018]: 2026-01-20 14:54:35.542 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:54:35 compute-0 nova_compute[250018]: 2026-01-20 14:54:35.546 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:54:35 compute-0 nova_compute[250018]: 2026-01-20 14:54:35.547 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5f645763-9f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:54:35 compute-0 nova_compute[250018]: 2026-01-20 14:54:35.547 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5f645763-9f, col_values=(('external_ids', {'iface-id': '5f645763-9f97-4686-80ab-6df7299b1235', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a5:83:5d', 'vm-uuid': 'b346e1ba-9e83-4e7f-bc03-c327d3e4173b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:54:35 compute-0 nova_compute[250018]: 2026-01-20 14:54:35.550 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:54:35 compute-0 NetworkManager[48960]: <info>  [1768920875.5520] manager: (tap5f645763-9f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/221)
Jan 20 14:54:35 compute-0 nova_compute[250018]: 2026-01-20 14:54:35.552 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:54:35 compute-0 nova_compute[250018]: 2026-01-20 14:54:35.557 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:54:35 compute-0 nova_compute[250018]: 2026-01-20 14:54:35.558 250022 INFO os_vif [None req-e9e79326-d53d-40ba-ae5c-8aced49f1e53 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a5:83:5d,bridge_name='br-int',has_traffic_filtering=True,id=5f645763-9f97-4686-80ab-6df7299b1235,network=Network(41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5f645763-9f')
Jan 20 14:54:35 compute-0 nova_compute[250018]: 2026-01-20 14:54:35.648 250022 DEBUG nova.virt.libvirt.driver [None req-e9e79326-d53d-40ba-ae5c-8aced49f1e53 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:54:35 compute-0 nova_compute[250018]: 2026-01-20 14:54:35.648 250022 DEBUG nova.virt.libvirt.driver [None req-e9e79326-d53d-40ba-ae5c-8aced49f1e53 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:54:35 compute-0 nova_compute[250018]: 2026-01-20 14:54:35.648 250022 DEBUG nova.virt.libvirt.driver [None req-e9e79326-d53d-40ba-ae5c-8aced49f1e53 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] No VIF found with MAC fa:16:3e:a5:83:5d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 14:54:35 compute-0 nova_compute[250018]: 2026-01-20 14:54:35.649 250022 INFO nova.virt.libvirt.driver [None req-e9e79326-d53d-40ba-ae5c-8aced49f1e53 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] Using config drive
Jan 20 14:54:35 compute-0 nova_compute[250018]: 2026-01-20 14:54:35.673 250022 DEBUG nova.storage.rbd_utils [None req-e9e79326-d53d-40ba-ae5c-8aced49f1e53 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] rbd image b346e1ba-9e83-4e7f-bc03-c327d3e4173b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:54:36 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2051: 321 pgs: 321 active+clean; 748 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 538 KiB/s rd, 2.2 MiB/s wr, 107 op/s
Jan 20 14:54:36 compute-0 nova_compute[250018]: 2026-01-20 14:54:36.087 250022 INFO nova.virt.libvirt.driver [None req-e9e79326-d53d-40ba-ae5c-8aced49f1e53 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] Creating config drive at /var/lib/nova/instances/b346e1ba-9e83-4e7f-bc03-c327d3e4173b/disk.config
Jan 20 14:54:36 compute-0 nova_compute[250018]: 2026-01-20 14:54:36.093 250022 DEBUG oslo_concurrency.processutils [None req-e9e79326-d53d-40ba-ae5c-8aced49f1e53 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b346e1ba-9e83-4e7f-bc03-c327d3e4173b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp14cpdjp7 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:54:36 compute-0 nova_compute[250018]: 2026-01-20 14:54:36.235 250022 DEBUG oslo_concurrency.processutils [None req-e9e79326-d53d-40ba-ae5c-8aced49f1e53 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b346e1ba-9e83-4e7f-bc03-c327d3e4173b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp14cpdjp7" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:54:36 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3198119657' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:54:36 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3718312487' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:54:36 compute-0 ceph-mon[74360]: pgmap v2051: 321 pgs: 321 active+clean; 748 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 538 KiB/s rd, 2.2 MiB/s wr, 107 op/s
Jan 20 14:54:36 compute-0 nova_compute[250018]: 2026-01-20 14:54:36.275 250022 DEBUG nova.storage.rbd_utils [None req-e9e79326-d53d-40ba-ae5c-8aced49f1e53 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] rbd image b346e1ba-9e83-4e7f-bc03-c327d3e4173b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:54:36 compute-0 nova_compute[250018]: 2026-01-20 14:54:36.280 250022 DEBUG oslo_concurrency.processutils [None req-e9e79326-d53d-40ba-ae5c-8aced49f1e53 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b346e1ba-9e83-4e7f-bc03-c327d3e4173b/disk.config b346e1ba-9e83-4e7f-bc03-c327d3e4173b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:54:36 compute-0 nova_compute[250018]: 2026-01-20 14:54:36.315 250022 DEBUG nova.network.neutron [req-babbf24c-013e-4f08-be45-84d1470fbc9c req-1d556f52-a468-4133-89b5-6e58eaf10927 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] Updated VIF entry in instance network info cache for port 5f645763-9f97-4686-80ab-6df7299b1235. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 14:54:36 compute-0 nova_compute[250018]: 2026-01-20 14:54:36.316 250022 DEBUG nova.network.neutron [req-babbf24c-013e-4f08-be45-84d1470fbc9c req-1d556f52-a468-4133-89b5-6e58eaf10927 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] Updating instance_info_cache with network_info: [{"id": "5f645763-9f97-4686-80ab-6df7299b1235", "address": "fa:16:3e:a5:83:5d", "network": {"id": "41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1445030024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3b1b7f5b4f84b5abbc401eb577c85c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5f645763-9f", "ovs_interfaceid": "5f645763-9f97-4686-80ab-6df7299b1235", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:54:36 compute-0 nova_compute[250018]: 2026-01-20 14:54:36.362 250022 DEBUG oslo_concurrency.lockutils [req-babbf24c-013e-4f08-be45-84d1470fbc9c req-1d556f52-a468-4133-89b5-6e58eaf10927 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-b346e1ba-9e83-4e7f-bc03-c327d3e4173b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:54:36 compute-0 nova_compute[250018]: 2026-01-20 14:54:36.565 250022 DEBUG oslo_concurrency.processutils [None req-e9e79326-d53d-40ba-ae5c-8aced49f1e53 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b346e1ba-9e83-4e7f-bc03-c327d3e4173b/disk.config b346e1ba-9e83-4e7f-bc03-c327d3e4173b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.285s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:54:36 compute-0 nova_compute[250018]: 2026-01-20 14:54:36.566 250022 INFO nova.virt.libvirt.driver [None req-e9e79326-d53d-40ba-ae5c-8aced49f1e53 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] Deleting local config drive /var/lib/nova/instances/b346e1ba-9e83-4e7f-bc03-c327d3e4173b/disk.config because it was imported into RBD.
Jan 20 14:54:36 compute-0 kernel: tap5f645763-9f: entered promiscuous mode
Jan 20 14:54:36 compute-0 NetworkManager[48960]: <info>  [1768920876.6219] manager: (tap5f645763-9f): new Tun device (/org/freedesktop/NetworkManager/Devices/222)
Jan 20 14:54:36 compute-0 ovn_controller[148666]: 2026-01-20T14:54:36Z|00431|binding|INFO|Claiming lport 5f645763-9f97-4686-80ab-6df7299b1235 for this chassis.
Jan 20 14:54:36 compute-0 ovn_controller[148666]: 2026-01-20T14:54:36Z|00432|binding|INFO|5f645763-9f97-4686-80ab-6df7299b1235: Claiming fa:16:3e:a5:83:5d 10.100.0.14
Jan 20 14:54:36 compute-0 nova_compute[250018]: 2026-01-20 14:54:36.623 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:54:36 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:54:36.649 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a5:83:5d 10.100.0.14'], port_security=['fa:16:3e:a5:83:5d 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'b346e1ba-9e83-4e7f-bc03-c327d3e4173b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b3b1b7f5b4f84b5abbc401eb577c85c0', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8b11f3fb-2601-4eca-a1b6-838549d7750c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3273589e-5585-406c-9611-87f758b0e521, chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=5f645763-9f97-4686-80ab-6df7299b1235) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:54:36 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:54:36.652 160071 INFO neutron.agent.ovn.metadata.agent [-] Port 5f645763-9f97-4686-80ab-6df7299b1235 in datapath 41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce bound to our chassis
Jan 20 14:54:36 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:54:36.654 160071 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce
Jan 20 14:54:36 compute-0 ovn_controller[148666]: 2026-01-20T14:54:36Z|00433|binding|INFO|Setting lport 5f645763-9f97-4686-80ab-6df7299b1235 ovn-installed in OVS
Jan 20 14:54:36 compute-0 ovn_controller[148666]: 2026-01-20T14:54:36Z|00434|binding|INFO|Setting lport 5f645763-9f97-4686-80ab-6df7299b1235 up in Southbound
Jan 20 14:54:36 compute-0 nova_compute[250018]: 2026-01-20 14:54:36.663 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:54:36 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:54:36.668 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[44648f31-fb72-4239-bc4c-ec1dba4641b9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:54:36 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:54:36.669 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap41a1a3fe-f1 in ovnmeta-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 14:54:36 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:54:36.671 257604 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap41a1a3fe-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 14:54:36 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:54:36.671 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[afd34921-5367-4497-b68e-44c95389df74]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:54:36 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:54:36.672 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[923c22fc-71c4-43ab-9c08-a45accbbefe8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:54:36 compute-0 systemd-udevd[323398]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:54:36 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:54:36.686 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[d693d79f-df64-4335-aa86-3922a000fa1a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:54:36 compute-0 systemd-machined[216401]: New machine qemu-58-instance-0000007b.
Jan 20 14:54:36 compute-0 NetworkManager[48960]: <info>  [1768920876.6942] device (tap5f645763-9f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 14:54:36 compute-0 NetworkManager[48960]: <info>  [1768920876.6955] device (tap5f645763-9f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 14:54:36 compute-0 systemd[1]: Started Virtual Machine qemu-58-instance-0000007b.
Jan 20 14:54:36 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:54:36.714 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[54a11f29-20fb-4372-a6bc-4da65e0bd04f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:54:36 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:54:36.748 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[9461ecf1-3614-408a-bf3f-53c01de23945]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:54:36 compute-0 systemd-udevd[323403]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:54:36 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:54:36.754 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[962ae1d2-ddb4-445e-8ab8-893b2838018a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:54:36 compute-0 NetworkManager[48960]: <info>  [1768920876.7563] manager: (tap41a1a3fe-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/223)
Jan 20 14:54:36 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:54:36.787 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[93bacc35-a11f-4148-bf36-b9f17d1b3dee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:54:36 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:54:36.790 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[34ff07e0-7de3-4f05-9b49-010ead6aea7c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:54:36 compute-0 NetworkManager[48960]: <info>  [1768920876.8142] device (tap41a1a3fe-f0): carrier: link connected
Jan 20 14:54:36 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:54:36.820 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[f0d6d1c0-f4bd-4c23-a30f-b58a7d81e9b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:54:36 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:54:36.839 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[3284e0da-7ba6-4ab7-973f-575cc70af6f5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap41a1a3fe-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3c:1f:b5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 144], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 681753, 'reachable_time': 23689, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 323431, 'error': None, 'target': 'ovnmeta-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:54:36 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:54:36.855 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[1ac3fb82-c1e3-478b-9430-3ac8a45d5dbc]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe3c:1fb5'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 681753, 'tstamp': 681753}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 323432, 'error': None, 'target': 'ovnmeta-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:54:36 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:54:36.871 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[a8b287b8-b7ef-4a1c-8a30-3c31b98e817e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap41a1a3fe-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3c:1f:b5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 144], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 681753, 'reachable_time': 23689, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 323433, 'error': None, 'target': 'ovnmeta-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:54:36 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:54:36.900 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[ace7ebb2-291a-4335-a18c-c190662ad3cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:54:36 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:54:36 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:54:36 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:54:36.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:54:36 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:54:36.962 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[c666ff53-e5ac-4519-b156-10df81bf7248]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:54:36 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:54:36.964 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap41a1a3fe-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:54:36 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:54:36.964 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:54:36 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:54:36.965 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap41a1a3fe-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:54:36 compute-0 NetworkManager[48960]: <info>  [1768920876.9675] manager: (tap41a1a3fe-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/224)
Jan 20 14:54:36 compute-0 kernel: tap41a1a3fe-f0: entered promiscuous mode
Jan 20 14:54:36 compute-0 nova_compute[250018]: 2026-01-20 14:54:36.969 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:54:36 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:54:36.970 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap41a1a3fe-f0, col_values=(('external_ids', {'iface-id': '3fa2df7b-42b2-4a3b-a33b-ab37b5d6aef3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:54:36 compute-0 ovn_controller[148666]: 2026-01-20T14:54:36Z|00435|binding|INFO|Releasing lport 3fa2df7b-42b2-4a3b-a33b-ab37b5d6aef3 from this chassis (sb_readonly=0)
Jan 20 14:54:36 compute-0 nova_compute[250018]: 2026-01-20 14:54:36.987 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:54:36 compute-0 nova_compute[250018]: 2026-01-20 14:54:36.989 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:54:36 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:54:36.989 160071 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 14:54:36 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:54:36.990 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[fb352c3a-44c3-464c-a25e-a463f6fec530]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:54:36 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:54:36.991 160071 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 14:54:36 compute-0 ovn_metadata_agent[160049]: global
Jan 20 14:54:36 compute-0 ovn_metadata_agent[160049]:     log         /dev/log local0 debug
Jan 20 14:54:36 compute-0 ovn_metadata_agent[160049]:     log-tag     haproxy-metadata-proxy-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce
Jan 20 14:54:36 compute-0 ovn_metadata_agent[160049]:     user        root
Jan 20 14:54:36 compute-0 ovn_metadata_agent[160049]:     group       root
Jan 20 14:54:36 compute-0 ovn_metadata_agent[160049]:     maxconn     1024
Jan 20 14:54:36 compute-0 ovn_metadata_agent[160049]:     pidfile     /var/lib/neutron/external/pids/41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce.pid.haproxy
Jan 20 14:54:36 compute-0 ovn_metadata_agent[160049]:     daemon
Jan 20 14:54:36 compute-0 ovn_metadata_agent[160049]: 
Jan 20 14:54:36 compute-0 ovn_metadata_agent[160049]: defaults
Jan 20 14:54:36 compute-0 ovn_metadata_agent[160049]:     log global
Jan 20 14:54:36 compute-0 ovn_metadata_agent[160049]:     mode http
Jan 20 14:54:36 compute-0 ovn_metadata_agent[160049]:     option httplog
Jan 20 14:54:36 compute-0 ovn_metadata_agent[160049]:     option dontlognull
Jan 20 14:54:36 compute-0 ovn_metadata_agent[160049]:     option http-server-close
Jan 20 14:54:36 compute-0 ovn_metadata_agent[160049]:     option forwardfor
Jan 20 14:54:36 compute-0 ovn_metadata_agent[160049]:     retries                 3
Jan 20 14:54:36 compute-0 ovn_metadata_agent[160049]:     timeout http-request    30s
Jan 20 14:54:36 compute-0 ovn_metadata_agent[160049]:     timeout connect         30s
Jan 20 14:54:36 compute-0 ovn_metadata_agent[160049]:     timeout client          32s
Jan 20 14:54:36 compute-0 ovn_metadata_agent[160049]:     timeout server          32s
Jan 20 14:54:36 compute-0 ovn_metadata_agent[160049]:     timeout http-keep-alive 30s
Jan 20 14:54:36 compute-0 ovn_metadata_agent[160049]: 
Jan 20 14:54:36 compute-0 ovn_metadata_agent[160049]: 
Jan 20 14:54:36 compute-0 ovn_metadata_agent[160049]: listen listener
Jan 20 14:54:36 compute-0 ovn_metadata_agent[160049]:     bind 169.254.169.254:80
Jan 20 14:54:36 compute-0 ovn_metadata_agent[160049]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 14:54:36 compute-0 ovn_metadata_agent[160049]:     http-request add-header X-OVN-Network-ID 41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce
Jan 20 14:54:36 compute-0 ovn_metadata_agent[160049]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 14:54:36 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:54:36.993 160071 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce', 'env', 'PROCESS_TAG=haproxy-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 14:54:37 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:54:37 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:54:37 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:54:37.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:54:37 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/2926708069' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:54:37 compute-0 nova_compute[250018]: 2026-01-20 14:54:37.325 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768920877.3249128, b346e1ba-9e83-4e7f-bc03-c327d3e4173b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:54:37 compute-0 nova_compute[250018]: 2026-01-20 14:54:37.325 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] VM Started (Lifecycle Event)
Jan 20 14:54:37 compute-0 nova_compute[250018]: 2026-01-20 14:54:37.348 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:54:37 compute-0 nova_compute[250018]: 2026-01-20 14:54:37.353 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768920877.3276112, b346e1ba-9e83-4e7f-bc03-c327d3e4173b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:54:37 compute-0 nova_compute[250018]: 2026-01-20 14:54:37.354 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] VM Paused (Lifecycle Event)
Jan 20 14:54:37 compute-0 nova_compute[250018]: 2026-01-20 14:54:37.379 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:54:37 compute-0 nova_compute[250018]: 2026-01-20 14:54:37.382 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:54:37 compute-0 podman[323507]: 2026-01-20 14:54:37.394066584 +0000 UTC m=+0.049280097 container create f6fa1315bec9494f1a217a37790a02b8426cf8009b6514391178a0849d08c669 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 20 14:54:37 compute-0 nova_compute[250018]: 2026-01-20 14:54:37.405 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:54:37 compute-0 systemd[1]: Started libpod-conmon-f6fa1315bec9494f1a217a37790a02b8426cf8009b6514391178a0849d08c669.scope.
Jan 20 14:54:37 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:54:37 compute-0 podman[323507]: 2026-01-20 14:54:37.369773141 +0000 UTC m=+0.024986674 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 14:54:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9bc6ddf2216308038881440fff4df9c7fdff3b1df6b1c326222bcae2c51821a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 14:54:37 compute-0 podman[323507]: 2026-01-20 14:54:37.479965666 +0000 UTC m=+0.135179189 container init f6fa1315bec9494f1a217a37790a02b8426cf8009b6514391178a0849d08c669 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202)
Jan 20 14:54:37 compute-0 podman[323507]: 2026-01-20 14:54:37.484708734 +0000 UTC m=+0.139922247 container start f6fa1315bec9494f1a217a37790a02b8426cf8009b6514391178a0849d08c669 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Jan 20 14:54:37 compute-0 neutron-haproxy-ovnmeta-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce[323523]: [NOTICE]   (323527) : New worker (323529) forked
Jan 20 14:54:37 compute-0 neutron-haproxy-ovnmeta-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce[323523]: [NOTICE]   (323527) : Loading success.
Jan 20 14:54:37 compute-0 nova_compute[250018]: 2026-01-20 14:54:37.753 250022 DEBUG nova.compute.manager [req-2d467a36-0745-442c-aa68-ba7a234fa296 req-2db8f229-36e3-4b20-ad61-976374fb188b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] Received event network-vif-plugged-5f645763-9f97-4686-80ab-6df7299b1235 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:54:37 compute-0 nova_compute[250018]: 2026-01-20 14:54:37.754 250022 DEBUG oslo_concurrency.lockutils [req-2d467a36-0745-442c-aa68-ba7a234fa296 req-2db8f229-36e3-4b20-ad61-976374fb188b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "b346e1ba-9e83-4e7f-bc03-c327d3e4173b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:54:37 compute-0 nova_compute[250018]: 2026-01-20 14:54:37.755 250022 DEBUG oslo_concurrency.lockutils [req-2d467a36-0745-442c-aa68-ba7a234fa296 req-2db8f229-36e3-4b20-ad61-976374fb188b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "b346e1ba-9e83-4e7f-bc03-c327d3e4173b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:54:37 compute-0 nova_compute[250018]: 2026-01-20 14:54:37.755 250022 DEBUG oslo_concurrency.lockutils [req-2d467a36-0745-442c-aa68-ba7a234fa296 req-2db8f229-36e3-4b20-ad61-976374fb188b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "b346e1ba-9e83-4e7f-bc03-c327d3e4173b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:54:37 compute-0 nova_compute[250018]: 2026-01-20 14:54:37.755 250022 DEBUG nova.compute.manager [req-2d467a36-0745-442c-aa68-ba7a234fa296 req-2db8f229-36e3-4b20-ad61-976374fb188b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] Processing event network-vif-plugged-5f645763-9f97-4686-80ab-6df7299b1235 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 20 14:54:37 compute-0 nova_compute[250018]: 2026-01-20 14:54:37.756 250022 DEBUG nova.compute.manager [None req-e9e79326-d53d-40ba-ae5c-8aced49f1e53 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 14:54:37 compute-0 nova_compute[250018]: 2026-01-20 14:54:37.762 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768920877.7618835, b346e1ba-9e83-4e7f-bc03-c327d3e4173b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:54:37 compute-0 nova_compute[250018]: 2026-01-20 14:54:37.762 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] VM Resumed (Lifecycle Event)
Jan 20 14:54:37 compute-0 nova_compute[250018]: 2026-01-20 14:54:37.766 250022 DEBUG nova.virt.libvirt.driver [None req-e9e79326-d53d-40ba-ae5c-8aced49f1e53 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 14:54:37 compute-0 nova_compute[250018]: 2026-01-20 14:54:37.769 250022 INFO nova.virt.libvirt.driver [-] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] Instance spawned successfully.
Jan 20 14:54:37 compute-0 nova_compute[250018]: 2026-01-20 14:54:37.770 250022 DEBUG nova.virt.libvirt.driver [None req-e9e79326-d53d-40ba-ae5c-8aced49f1e53 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 14:54:37 compute-0 nova_compute[250018]: 2026-01-20 14:54:37.782 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:54:37 compute-0 nova_compute[250018]: 2026-01-20 14:54:37.791 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:54:37 compute-0 nova_compute[250018]: 2026-01-20 14:54:37.794 250022 DEBUG nova.virt.libvirt.driver [None req-e9e79326-d53d-40ba-ae5c-8aced49f1e53 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:54:37 compute-0 nova_compute[250018]: 2026-01-20 14:54:37.795 250022 DEBUG nova.virt.libvirt.driver [None req-e9e79326-d53d-40ba-ae5c-8aced49f1e53 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:54:37 compute-0 nova_compute[250018]: 2026-01-20 14:54:37.795 250022 DEBUG nova.virt.libvirt.driver [None req-e9e79326-d53d-40ba-ae5c-8aced49f1e53 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:54:37 compute-0 nova_compute[250018]: 2026-01-20 14:54:37.796 250022 DEBUG nova.virt.libvirt.driver [None req-e9e79326-d53d-40ba-ae5c-8aced49f1e53 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:54:37 compute-0 nova_compute[250018]: 2026-01-20 14:54:37.796 250022 DEBUG nova.virt.libvirt.driver [None req-e9e79326-d53d-40ba-ae5c-8aced49f1e53 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:54:37 compute-0 nova_compute[250018]: 2026-01-20 14:54:37.797 250022 DEBUG nova.virt.libvirt.driver [None req-e9e79326-d53d-40ba-ae5c-8aced49f1e53 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:54:37 compute-0 nova_compute[250018]: 2026-01-20 14:54:37.828 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:54:37 compute-0 nova_compute[250018]: 2026-01-20 14:54:37.869 250022 INFO nova.compute.manager [None req-e9e79326-d53d-40ba-ae5c-8aced49f1e53 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] Took 7.01 seconds to spawn the instance on the hypervisor.
Jan 20 14:54:37 compute-0 nova_compute[250018]: 2026-01-20 14:54:37.869 250022 DEBUG nova.compute.manager [None req-e9e79326-d53d-40ba-ae5c-8aced49f1e53 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:54:37 compute-0 nova_compute[250018]: 2026-01-20 14:54:37.941 250022 INFO nova.compute.manager [None req-e9e79326-d53d-40ba-ae5c-8aced49f1e53 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] Took 8.30 seconds to build instance.
Jan 20 14:54:37 compute-0 nova_compute[250018]: 2026-01-20 14:54:37.963 250022 DEBUG oslo_concurrency.lockutils [None req-e9e79326-d53d-40ba-ae5c-8aced49f1e53 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Lock "b346e1ba-9e83-4e7f-bc03-c327d3e4173b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.425s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:54:38 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2052: 321 pgs: 321 active+clean; 673 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 296 KiB/s rd, 2.2 MiB/s wr, 137 op/s
Jan 20 14:54:38 compute-0 ceph-mon[74360]: pgmap v2052: 321 pgs: 321 active+clean; 673 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 296 KiB/s rd, 2.2 MiB/s wr, 137 op/s
Jan 20 14:54:38 compute-0 nova_compute[250018]: 2026-01-20 14:54:38.393 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:54:38 compute-0 nova_compute[250018]: 2026-01-20 14:54:38.537 250022 DEBUG oslo_concurrency.lockutils [None req-94702a09-f174-48ca-a6a4-8d7b6543f078 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] Acquiring lock "d23ddbd4-8b5d-4bf5-a02d-3fb69b940770" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:54:38 compute-0 nova_compute[250018]: 2026-01-20 14:54:38.538 250022 DEBUG oslo_concurrency.lockutils [None req-94702a09-f174-48ca-a6a4-8d7b6543f078 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] Lock "d23ddbd4-8b5d-4bf5-a02d-3fb69b940770" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:54:38 compute-0 nova_compute[250018]: 2026-01-20 14:54:38.538 250022 DEBUG oslo_concurrency.lockutils [None req-94702a09-f174-48ca-a6a4-8d7b6543f078 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] Acquiring lock "d23ddbd4-8b5d-4bf5-a02d-3fb69b940770-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:54:38 compute-0 nova_compute[250018]: 2026-01-20 14:54:38.539 250022 DEBUG oslo_concurrency.lockutils [None req-94702a09-f174-48ca-a6a4-8d7b6543f078 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] Lock "d23ddbd4-8b5d-4bf5-a02d-3fb69b940770-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:54:38 compute-0 nova_compute[250018]: 2026-01-20 14:54:38.539 250022 DEBUG oslo_concurrency.lockutils [None req-94702a09-f174-48ca-a6a4-8d7b6543f078 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] Lock "d23ddbd4-8b5d-4bf5-a02d-3fb69b940770-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:54:38 compute-0 nova_compute[250018]: 2026-01-20 14:54:38.540 250022 INFO nova.compute.manager [None req-94702a09-f174-48ca-a6a4-8d7b6543f078 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] [instance: d23ddbd4-8b5d-4bf5-a02d-3fb69b940770] Terminating instance
Jan 20 14:54:38 compute-0 nova_compute[250018]: 2026-01-20 14:54:38.541 250022 DEBUG nova.compute.manager [None req-94702a09-f174-48ca-a6a4-8d7b6543f078 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] [instance: d23ddbd4-8b5d-4bf5-a02d-3fb69b940770] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 14:54:38 compute-0 kernel: tap3dbcd2bc-10 (unregistering): left promiscuous mode
Jan 20 14:54:38 compute-0 NetworkManager[48960]: <info>  [1768920878.5929] device (tap3dbcd2bc-10): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 14:54:38 compute-0 ovn_controller[148666]: 2026-01-20T14:54:38Z|00436|binding|INFO|Releasing lport 3dbcd2bc-10d2-4550-a4f5-4c4c9effa54b from this chassis (sb_readonly=0)
Jan 20 14:54:38 compute-0 ovn_controller[148666]: 2026-01-20T14:54:38Z|00437|binding|INFO|Setting lport 3dbcd2bc-10d2-4550-a4f5-4c4c9effa54b down in Southbound
Jan 20 14:54:38 compute-0 ovn_controller[148666]: 2026-01-20T14:54:38Z|00438|binding|INFO|Removing iface tap3dbcd2bc-10 ovn-installed in OVS
Jan 20 14:54:38 compute-0 nova_compute[250018]: 2026-01-20 14:54:38.602 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:54:38 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:54:38.608 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f9:1a:06 10.100.0.4'], port_security=['fa:16:3e:f9:1a:06 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'd23ddbd4-8b5d-4bf5-a02d-3fb69b940770', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-16cc2fb6-cda7-431a-ae22-fc6920fbbe4e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0ad54030e5cc477e939e073b52024ec4', 'neutron:revision_number': '6', 'neutron:security_group_ids': '809158a5-5df2-4e61-8536-596fb1ff7657', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.197', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=42647ce1-15eb-4208-a167-10e96fd5deda, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=3dbcd2bc-10d2-4550-a4f5-4c4c9effa54b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:54:38 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:54:38.609 160071 INFO neutron.agent.ovn.metadata.agent [-] Port 3dbcd2bc-10d2-4550-a4f5-4c4c9effa54b in datapath 16cc2fb6-cda7-431a-ae22-fc6920fbbe4e unbound from our chassis
Jan 20 14:54:38 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:54:38.613 160071 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 16cc2fb6-cda7-431a-ae22-fc6920fbbe4e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 14:54:38 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:54:38.616 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[1554538e-bdd6-4e6c-85f9-bd121640de1c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:54:38 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:54:38.617 160071 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-16cc2fb6-cda7-431a-ae22-fc6920fbbe4e namespace which is not needed anymore
Jan 20 14:54:38 compute-0 nova_compute[250018]: 2026-01-20 14:54:38.622 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:54:38 compute-0 systemd[1]: machine-qemu\x2d57\x2dinstance\x2d00000079.scope: Deactivated successfully.
Jan 20 14:54:38 compute-0 systemd[1]: machine-qemu\x2d57\x2dinstance\x2d00000079.scope: Consumed 14.989s CPU time.
Jan 20 14:54:38 compute-0 systemd-machined[216401]: Machine qemu-57-instance-00000079 terminated.
Jan 20 14:54:38 compute-0 nova_compute[250018]: 2026-01-20 14:54:38.757 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:54:38 compute-0 nova_compute[250018]: 2026-01-20 14:54:38.765 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:54:38 compute-0 neutron-haproxy-ovnmeta-16cc2fb6-cda7-431a-ae22-fc6920fbbe4e[321943]: [NOTICE]   (321971) : haproxy version is 2.8.14-c23fe91
Jan 20 14:54:38 compute-0 neutron-haproxy-ovnmeta-16cc2fb6-cda7-431a-ae22-fc6920fbbe4e[321943]: [NOTICE]   (321971) : path to executable is /usr/sbin/haproxy
Jan 20 14:54:38 compute-0 neutron-haproxy-ovnmeta-16cc2fb6-cda7-431a-ae22-fc6920fbbe4e[321943]: [WARNING]  (321971) : Exiting Master process...
Jan 20 14:54:38 compute-0 neutron-haproxy-ovnmeta-16cc2fb6-cda7-431a-ae22-fc6920fbbe4e[321943]: [ALERT]    (321971) : Current worker (321981) exited with code 143 (Terminated)
Jan 20 14:54:38 compute-0 neutron-haproxy-ovnmeta-16cc2fb6-cda7-431a-ae22-fc6920fbbe4e[321943]: [WARNING]  (321971) : All workers exited. Exiting... (0)
Jan 20 14:54:38 compute-0 systemd[1]: libpod-b03bc2445560b0bd4f7acf1d1306483ed694dae7a90c821e4246bebe963db8c6.scope: Deactivated successfully.
Jan 20 14:54:38 compute-0 nova_compute[250018]: 2026-01-20 14:54:38.774 250022 INFO nova.virt.libvirt.driver [-] [instance: d23ddbd4-8b5d-4bf5-a02d-3fb69b940770] Instance destroyed successfully.
Jan 20 14:54:38 compute-0 nova_compute[250018]: 2026-01-20 14:54:38.774 250022 DEBUG nova.objects.instance [None req-94702a09-f174-48ca-a6a4-8d7b6543f078 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] Lazy-loading 'resources' on Instance uuid d23ddbd4-8b5d-4bf5-a02d-3fb69b940770 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:54:38 compute-0 podman[323560]: 2026-01-20 14:54:38.775949324 +0000 UTC m=+0.058783403 container died b03bc2445560b0bd4f7acf1d1306483ed694dae7a90c821e4246bebe963db8c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-16cc2fb6-cda7-431a-ae22-fc6920fbbe4e, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 20 14:54:38 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b03bc2445560b0bd4f7acf1d1306483ed694dae7a90c821e4246bebe963db8c6-userdata-shm.mount: Deactivated successfully.
Jan 20 14:54:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-3644500512ec33d8f160c04161dd51b6058dfcf628ecc97e0e8a3b6eeff4c2e9-merged.mount: Deactivated successfully.
Jan 20 14:54:38 compute-0 podman[323560]: 2026-01-20 14:54:38.813507815 +0000 UTC m=+0.096341894 container cleanup b03bc2445560b0bd4f7acf1d1306483ed694dae7a90c821e4246bebe963db8c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-16cc2fb6-cda7-431a-ae22-fc6920fbbe4e, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 20 14:54:38 compute-0 systemd[1]: libpod-conmon-b03bc2445560b0bd4f7acf1d1306483ed694dae7a90c821e4246bebe963db8c6.scope: Deactivated successfully.
Jan 20 14:54:38 compute-0 nova_compute[250018]: 2026-01-20 14:54:38.827 250022 DEBUG nova.virt.libvirt.vif [None req-94702a09-f174-48ca-a6a4-8d7b6543f078 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-20T14:53:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-ServerActionsV293TestJSON-server-592112959',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionsv293testjson-server-278330687',id=121,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBO86kZNzDPCASADxiWGymA6UJfVUXa5anZX6myQUgRsg5kfeB7NCF6546KW3Ot1GqbB4r4cbuWZGKPoJrxymBlCt9uzHjI477OxSlIci+EHESOU35e8Xbs8CtBj17r8ipw==',key_name='tempest-keypair-286575342',keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:54:04Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0ad54030e5cc477e939e073b52024ec4',ramdisk_id='',reservation_id='r-c0xrc8pm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='26699514-f465-4b50-98b7-36f2cfc6a308',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio
',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsV293TestJSON-747539193',owner_user_name='tempest-ServerActionsV293TestJSON-747539193-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T14:54:04Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='15c455b119784bb9abe8e4774dadd01e',uuid=d23ddbd4-8b5d-4bf5-a02d-3fb69b940770,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3dbcd2bc-10d2-4550-a4f5-4c4c9effa54b", "address": "fa:16:3e:f9:1a:06", "network": {"id": "16cc2fb6-cda7-431a-ae22-fc6920fbbe4e", "bridge": "br-int", "label": "tempest-ServerActionsV293TestJSON-1739563670-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.197", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0ad54030e5cc477e939e073b52024ec4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3dbcd2bc-10", "ovs_interfaceid": "3dbcd2bc-10d2-4550-a4f5-4c4c9effa54b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 14:54:38 compute-0 nova_compute[250018]: 2026-01-20 14:54:38.828 250022 DEBUG nova.network.os_vif_util [None req-94702a09-f174-48ca-a6a4-8d7b6543f078 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] Converting VIF {"id": "3dbcd2bc-10d2-4550-a4f5-4c4c9effa54b", "address": "fa:16:3e:f9:1a:06", "network": {"id": "16cc2fb6-cda7-431a-ae22-fc6920fbbe4e", "bridge": "br-int", "label": "tempest-ServerActionsV293TestJSON-1739563670-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.197", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0ad54030e5cc477e939e073b52024ec4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3dbcd2bc-10", "ovs_interfaceid": "3dbcd2bc-10d2-4550-a4f5-4c4c9effa54b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:54:38 compute-0 nova_compute[250018]: 2026-01-20 14:54:38.829 250022 DEBUG nova.network.os_vif_util [None req-94702a09-f174-48ca-a6a4-8d7b6543f078 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f9:1a:06,bridge_name='br-int',has_traffic_filtering=True,id=3dbcd2bc-10d2-4550-a4f5-4c4c9effa54b,network=Network(16cc2fb6-cda7-431a-ae22-fc6920fbbe4e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3dbcd2bc-10') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:54:38 compute-0 nova_compute[250018]: 2026-01-20 14:54:38.829 250022 DEBUG os_vif [None req-94702a09-f174-48ca-a6a4-8d7b6543f078 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f9:1a:06,bridge_name='br-int',has_traffic_filtering=True,id=3dbcd2bc-10d2-4550-a4f5-4c4c9effa54b,network=Network(16cc2fb6-cda7-431a-ae22-fc6920fbbe4e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3dbcd2bc-10') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 14:54:38 compute-0 nova_compute[250018]: 2026-01-20 14:54:38.831 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:54:38 compute-0 nova_compute[250018]: 2026-01-20 14:54:38.831 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3dbcd2bc-10, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:54:38 compute-0 nova_compute[250018]: 2026-01-20 14:54:38.832 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:54:38 compute-0 nova_compute[250018]: 2026-01-20 14:54:38.835 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:54:38 compute-0 nova_compute[250018]: 2026-01-20 14:54:38.837 250022 INFO os_vif [None req-94702a09-f174-48ca-a6a4-8d7b6543f078 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f9:1a:06,bridge_name='br-int',has_traffic_filtering=True,id=3dbcd2bc-10d2-4550-a4f5-4c4c9effa54b,network=Network(16cc2fb6-cda7-431a-ae22-fc6920fbbe4e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3dbcd2bc-10')
Jan 20 14:54:38 compute-0 podman[323600]: 2026-01-20 14:54:38.881600747 +0000 UTC m=+0.046552134 container remove b03bc2445560b0bd4f7acf1d1306483ed694dae7a90c821e4246bebe963db8c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-16cc2fb6-cda7-431a-ae22-fc6920fbbe4e, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 20 14:54:38 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:54:38.892 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[6e965763-d1cf-4dc3-967d-2964368e9144]: (4, ('Tue Jan 20 02:54:38 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-16cc2fb6-cda7-431a-ae22-fc6920fbbe4e (b03bc2445560b0bd4f7acf1d1306483ed694dae7a90c821e4246bebe963db8c6)\nb03bc2445560b0bd4f7acf1d1306483ed694dae7a90c821e4246bebe963db8c6\nTue Jan 20 02:54:38 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-16cc2fb6-cda7-431a-ae22-fc6920fbbe4e (b03bc2445560b0bd4f7acf1d1306483ed694dae7a90c821e4246bebe963db8c6)\nb03bc2445560b0bd4f7acf1d1306483ed694dae7a90c821e4246bebe963db8c6\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:54:38 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:54:38.894 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[3a49a5fe-21fa-4526-b531-cddfe680eee5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:54:38 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:54:38.895 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap16cc2fb6-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:54:38 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:54:38 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:54:38 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:54:38.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:54:38 compute-0 kernel: tap16cc2fb6-c0: left promiscuous mode
Jan 20 14:54:38 compute-0 nova_compute[250018]: 2026-01-20 14:54:38.946 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:54:38 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:54:38.953 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[4939696a-1c62-45e9-91c7-bfb44944a6d9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:54:38 compute-0 nova_compute[250018]: 2026-01-20 14:54:38.954 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:54:38 compute-0 nova_compute[250018]: 2026-01-20 14:54:38.969 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:54:38 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:54:38.973 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[d076f86f-ffa6-467e-9001-bbad269884be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:54:38 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:54:38.974 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[6a9849c3-48bb-429a-82b5-d40c1e1761b2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:54:38 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:54:38.990 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[38dd08b2-e42a-401b-891b-718dbcc766bd]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 678481, 'reachable_time': 24851, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 323632, 'error': None, 'target': 'ovnmeta-16cc2fb6-cda7-431a-ae22-fc6920fbbe4e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:54:38 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:54:38.993 160517 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-16cc2fb6-cda7-431a-ae22-fc6920fbbe4e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 14:54:38 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:54:38.993 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[52e0f2c2-184c-4ef1-abac-f8a64de59804]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:54:38 compute-0 systemd[1]: run-netns-ovnmeta\x2d16cc2fb6\x2dcda7\x2d431a\x2dae22\x2dfc6920fbbe4e.mount: Deactivated successfully.
Jan 20 14:54:39 compute-0 nova_compute[250018]: 2026-01-20 14:54:39.044 250022 INFO nova.virt.libvirt.driver [None req-94702a09-f174-48ca-a6a4-8d7b6543f078 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] [instance: d23ddbd4-8b5d-4bf5-a02d-3fb69b940770] Deleting instance files /var/lib/nova/instances/d23ddbd4-8b5d-4bf5-a02d-3fb69b940770_del
Jan 20 14:54:39 compute-0 nova_compute[250018]: 2026-01-20 14:54:39.046 250022 INFO nova.virt.libvirt.driver [None req-94702a09-f174-48ca-a6a4-8d7b6543f078 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] [instance: d23ddbd4-8b5d-4bf5-a02d-3fb69b940770] Deletion of /var/lib/nova/instances/d23ddbd4-8b5d-4bf5-a02d-3fb69b940770_del complete
Jan 20 14:54:39 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:54:39.091 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=367c1a2c-b16a-4828-ab5a-626bb50023b4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '39'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:54:39 compute-0 nova_compute[250018]: 2026-01-20 14:54:39.142 250022 INFO nova.compute.manager [None req-94702a09-f174-48ca-a6a4-8d7b6543f078 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] [instance: d23ddbd4-8b5d-4bf5-a02d-3fb69b940770] Took 0.60 seconds to destroy the instance on the hypervisor.
Jan 20 14:54:39 compute-0 nova_compute[250018]: 2026-01-20 14:54:39.143 250022 DEBUG oslo.service.loopingcall [None req-94702a09-f174-48ca-a6a4-8d7b6543f078 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 14:54:39 compute-0 nova_compute[250018]: 2026-01-20 14:54:39.144 250022 DEBUG nova.compute.manager [-] [instance: d23ddbd4-8b5d-4bf5-a02d-3fb69b940770] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 14:54:39 compute-0 nova_compute[250018]: 2026-01-20 14:54:39.144 250022 DEBUG nova.network.neutron [-] [instance: d23ddbd4-8b5d-4bf5-a02d-3fb69b940770] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 14:54:39 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:54:39 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:54:39 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:54:39.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:54:39 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 20 14:54:39 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/61354593' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:54:39 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 20 14:54:39 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/61354593' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:54:39 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/3424035681' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:54:39 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/61354593' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:54:39 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/61354593' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:54:39 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:54:39 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e296 do_prune osdmap full prune enabled
Jan 20 14:54:39 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e297 e297: 3 total, 3 up, 3 in
Jan 20 14:54:39 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e297: 3 total, 3 up, 3 in
Jan 20 14:54:39 compute-0 nova_compute[250018]: 2026-01-20 14:54:39.877 250022 DEBUG nova.compute.manager [req-c65566d0-3ac0-4ec7-bcc8-79c6796c328d req-e823954f-6437-4922-bb5d-693ca5c54b3d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] Received event network-vif-plugged-5f645763-9f97-4686-80ab-6df7299b1235 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:54:39 compute-0 nova_compute[250018]: 2026-01-20 14:54:39.877 250022 DEBUG oslo_concurrency.lockutils [req-c65566d0-3ac0-4ec7-bcc8-79c6796c328d req-e823954f-6437-4922-bb5d-693ca5c54b3d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "b346e1ba-9e83-4e7f-bc03-c327d3e4173b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:54:39 compute-0 nova_compute[250018]: 2026-01-20 14:54:39.877 250022 DEBUG oslo_concurrency.lockutils [req-c65566d0-3ac0-4ec7-bcc8-79c6796c328d req-e823954f-6437-4922-bb5d-693ca5c54b3d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "b346e1ba-9e83-4e7f-bc03-c327d3e4173b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:54:39 compute-0 nova_compute[250018]: 2026-01-20 14:54:39.877 250022 DEBUG oslo_concurrency.lockutils [req-c65566d0-3ac0-4ec7-bcc8-79c6796c328d req-e823954f-6437-4922-bb5d-693ca5c54b3d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "b346e1ba-9e83-4e7f-bc03-c327d3e4173b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:54:39 compute-0 nova_compute[250018]: 2026-01-20 14:54:39.878 250022 DEBUG nova.compute.manager [req-c65566d0-3ac0-4ec7-bcc8-79c6796c328d req-e823954f-6437-4922-bb5d-693ca5c54b3d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] No waiting events found dispatching network-vif-plugged-5f645763-9f97-4686-80ab-6df7299b1235 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:54:39 compute-0 nova_compute[250018]: 2026-01-20 14:54:39.878 250022 WARNING nova.compute.manager [req-c65566d0-3ac0-4ec7-bcc8-79c6796c328d req-e823954f-6437-4922-bb5d-693ca5c54b3d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] Received unexpected event network-vif-plugged-5f645763-9f97-4686-80ab-6df7299b1235 for instance with vm_state active and task_state None.
Jan 20 14:54:39 compute-0 nova_compute[250018]: 2026-01-20 14:54:39.878 250022 DEBUG nova.compute.manager [req-c65566d0-3ac0-4ec7-bcc8-79c6796c328d req-e823954f-6437-4922-bb5d-693ca5c54b3d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d23ddbd4-8b5d-4bf5-a02d-3fb69b940770] Received event network-vif-unplugged-3dbcd2bc-10d2-4550-a4f5-4c4c9effa54b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:54:39 compute-0 nova_compute[250018]: 2026-01-20 14:54:39.878 250022 DEBUG oslo_concurrency.lockutils [req-c65566d0-3ac0-4ec7-bcc8-79c6796c328d req-e823954f-6437-4922-bb5d-693ca5c54b3d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "d23ddbd4-8b5d-4bf5-a02d-3fb69b940770-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:54:39 compute-0 nova_compute[250018]: 2026-01-20 14:54:39.878 250022 DEBUG oslo_concurrency.lockutils [req-c65566d0-3ac0-4ec7-bcc8-79c6796c328d req-e823954f-6437-4922-bb5d-693ca5c54b3d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d23ddbd4-8b5d-4bf5-a02d-3fb69b940770-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:54:39 compute-0 nova_compute[250018]: 2026-01-20 14:54:39.879 250022 DEBUG oslo_concurrency.lockutils [req-c65566d0-3ac0-4ec7-bcc8-79c6796c328d req-e823954f-6437-4922-bb5d-693ca5c54b3d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d23ddbd4-8b5d-4bf5-a02d-3fb69b940770-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:54:39 compute-0 nova_compute[250018]: 2026-01-20 14:54:39.879 250022 DEBUG nova.compute.manager [req-c65566d0-3ac0-4ec7-bcc8-79c6796c328d req-e823954f-6437-4922-bb5d-693ca5c54b3d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d23ddbd4-8b5d-4bf5-a02d-3fb69b940770] No waiting events found dispatching network-vif-unplugged-3dbcd2bc-10d2-4550-a4f5-4c4c9effa54b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:54:39 compute-0 nova_compute[250018]: 2026-01-20 14:54:39.879 250022 DEBUG nova.compute.manager [req-c65566d0-3ac0-4ec7-bcc8-79c6796c328d req-e823954f-6437-4922-bb5d-693ca5c54b3d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d23ddbd4-8b5d-4bf5-a02d-3fb69b940770] Received event network-vif-unplugged-3dbcd2bc-10d2-4550-a4f5-4c4c9effa54b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 20 14:54:39 compute-0 nova_compute[250018]: 2026-01-20 14:54:39.879 250022 DEBUG nova.compute.manager [req-c65566d0-3ac0-4ec7-bcc8-79c6796c328d req-e823954f-6437-4922-bb5d-693ca5c54b3d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d23ddbd4-8b5d-4bf5-a02d-3fb69b940770] Received event network-vif-plugged-3dbcd2bc-10d2-4550-a4f5-4c4c9effa54b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:54:39 compute-0 nova_compute[250018]: 2026-01-20 14:54:39.879 250022 DEBUG oslo_concurrency.lockutils [req-c65566d0-3ac0-4ec7-bcc8-79c6796c328d req-e823954f-6437-4922-bb5d-693ca5c54b3d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "d23ddbd4-8b5d-4bf5-a02d-3fb69b940770-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:54:39 compute-0 nova_compute[250018]: 2026-01-20 14:54:39.879 250022 DEBUG oslo_concurrency.lockutils [req-c65566d0-3ac0-4ec7-bcc8-79c6796c328d req-e823954f-6437-4922-bb5d-693ca5c54b3d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d23ddbd4-8b5d-4bf5-a02d-3fb69b940770-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:54:39 compute-0 nova_compute[250018]: 2026-01-20 14:54:39.880 250022 DEBUG oslo_concurrency.lockutils [req-c65566d0-3ac0-4ec7-bcc8-79c6796c328d req-e823954f-6437-4922-bb5d-693ca5c54b3d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d23ddbd4-8b5d-4bf5-a02d-3fb69b940770-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:54:39 compute-0 nova_compute[250018]: 2026-01-20 14:54:39.880 250022 DEBUG nova.compute.manager [req-c65566d0-3ac0-4ec7-bcc8-79c6796c328d req-e823954f-6437-4922-bb5d-693ca5c54b3d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d23ddbd4-8b5d-4bf5-a02d-3fb69b940770] No waiting events found dispatching network-vif-plugged-3dbcd2bc-10d2-4550-a4f5-4c4c9effa54b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:54:39 compute-0 nova_compute[250018]: 2026-01-20 14:54:39.880 250022 WARNING nova.compute.manager [req-c65566d0-3ac0-4ec7-bcc8-79c6796c328d req-e823954f-6437-4922-bb5d-693ca5c54b3d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d23ddbd4-8b5d-4bf5-a02d-3fb69b940770] Received unexpected event network-vif-plugged-3dbcd2bc-10d2-4550-a4f5-4c4c9effa54b for instance with vm_state active and task_state deleting.
Jan 20 14:54:40 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2054: 321 pgs: 321 active+clean; 660 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 847 KiB/s rd, 2.7 MiB/s wr, 202 op/s
Jan 20 14:54:40 compute-0 nova_compute[250018]: 2026-01-20 14:54:40.045 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:54:40 compute-0 nova_compute[250018]: 2026-01-20 14:54:40.049 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:54:40 compute-0 nova_compute[250018]: 2026-01-20 14:54:40.405 250022 DEBUG nova.network.neutron [-] [instance: d23ddbd4-8b5d-4bf5-a02d-3fb69b940770] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:54:40 compute-0 nova_compute[250018]: 2026-01-20 14:54:40.425 250022 INFO nova.compute.manager [-] [instance: d23ddbd4-8b5d-4bf5-a02d-3fb69b940770] Took 1.28 seconds to deallocate network for instance.
Jan 20 14:54:40 compute-0 nova_compute[250018]: 2026-01-20 14:54:40.478 250022 DEBUG nova.compute.manager [req-0b484261-9125-41d5-90f4-13c258794ea4 req-cb1bce4b-a2a1-43dc-ba41-18f9f5538a35 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d23ddbd4-8b5d-4bf5-a02d-3fb69b940770] Received event network-vif-deleted-3dbcd2bc-10d2-4550-a4f5-4c4c9effa54b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:54:40 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e297 do_prune osdmap full prune enabled
Jan 20 14:54:40 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e298 e298: 3 total, 3 up, 3 in
Jan 20 14:54:40 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e298: 3 total, 3 up, 3 in
Jan 20 14:54:40 compute-0 ceph-mon[74360]: osdmap e297: 3 total, 3 up, 3 in
Jan 20 14:54:40 compute-0 ceph-mon[74360]: pgmap v2054: 321 pgs: 321 active+clean; 660 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 847 KiB/s rd, 2.7 MiB/s wr, 202 op/s
Jan 20 14:54:40 compute-0 nova_compute[250018]: 2026-01-20 14:54:40.719 250022 INFO nova.compute.manager [None req-94702a09-f174-48ca-a6a4-8d7b6543f078 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] [instance: d23ddbd4-8b5d-4bf5-a02d-3fb69b940770] Took 0.29 seconds to detach 1 volumes for instance.
Jan 20 14:54:40 compute-0 nova_compute[250018]: 2026-01-20 14:54:40.724 250022 DEBUG nova.compute.manager [None req-94702a09-f174-48ca-a6a4-8d7b6543f078 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] [instance: d23ddbd4-8b5d-4bf5-a02d-3fb69b940770] Deleting volume: b687fb44-6160-427b-b91a-091715876a58 _cleanup_volumes /usr/lib/python3.9/site-packages/nova/compute/manager.py:3217
Jan 20 14:54:40 compute-0 ceph-mon[74360]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #99. Immutable memtables: 0.
Jan 20 14:54:40 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:54:40.730582) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 14:54:40 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:856] [default] [JOB 57] Flushing memtable with next log file: 99
Jan 20 14:54:40 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768920880730655, "job": 57, "event": "flush_started", "num_memtables": 1, "num_entries": 629, "num_deletes": 255, "total_data_size": 696027, "memory_usage": 708824, "flush_reason": "Manual Compaction"}
Jan 20 14:54:40 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:885] [default] [JOB 57] Level-0 flush table #100: started
Jan 20 14:54:40 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768920880736712, "cf_name": "default", "job": 57, "event": "table_file_creation", "file_number": 100, "file_size": 687669, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 45779, "largest_seqno": 46407, "table_properties": {"data_size": 684234, "index_size": 1279, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 8344, "raw_average_key_size": 20, "raw_value_size": 677213, "raw_average_value_size": 1631, "num_data_blocks": 55, "num_entries": 415, "num_filter_entries": 415, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768920850, "oldest_key_time": 1768920850, "file_creation_time": 1768920880, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1688fb6f-f397-4aac-a0e8-874ff91d97a7", "db_session_id": "5V3N2TVXYZBCXP55EZHK", "orig_file_number": 100, "seqno_to_time_mapping": "N/A"}}
Jan 20 14:54:40 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 57] Flush lasted 6158 microseconds, and 2838 cpu microseconds.
Jan 20 14:54:40 compute-0 ceph-mon[74360]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 14:54:40 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:54:40.736751) [db/flush_job.cc:967] [default] [JOB 57] Level-0 flush table #100: 687669 bytes OK
Jan 20 14:54:40 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:54:40.736766) [db/memtable_list.cc:519] [default] Level-0 commit table #100 started
Jan 20 14:54:40 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:54:40.738700) [db/memtable_list.cc:722] [default] Level-0 commit table #100: memtable #1 done
Jan 20 14:54:40 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:54:40.738713) EVENT_LOG_v1 {"time_micros": 1768920880738710, "job": 57, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 14:54:40 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:54:40.738728) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 14:54:40 compute-0 ceph-mon[74360]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 57] Try to delete WAL files size 692593, prev total WAL file size 692593, number of live WAL files 2.
Jan 20 14:54:40 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000096.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:54:40 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:54:40.739220) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034303136' seq:72057594037927935, type:22 .. '7061786F730034323638' seq:0, type:0; will stop at (end)
Jan 20 14:54:40 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 58] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 14:54:40 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 57 Base level 0, inputs: [100(671KB)], [98(13MB)]
Jan 20 14:54:40 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768920880739297, "job": 58, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [100], "files_L6": [98], "score": -1, "input_data_size": 14320414, "oldest_snapshot_seqno": -1}
Jan 20 14:54:40 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 58] Generated table #101: 7484 keys, 12472514 bytes, temperature: kUnknown
Jan 20 14:54:40 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768920880821295, "cf_name": "default", "job": 58, "event": "table_file_creation", "file_number": 101, "file_size": 12472514, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12420211, "index_size": 32479, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18757, "raw_key_size": 193672, "raw_average_key_size": 25, "raw_value_size": 12284241, "raw_average_value_size": 1641, "num_data_blocks": 1286, "num_entries": 7484, "num_filter_entries": 7484, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768917305, "oldest_key_time": 0, "file_creation_time": 1768920880, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1688fb6f-f397-4aac-a0e8-874ff91d97a7", "db_session_id": "5V3N2TVXYZBCXP55EZHK", "orig_file_number": 101, "seqno_to_time_mapping": "N/A"}}
Jan 20 14:54:40 compute-0 ceph-mon[74360]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 14:54:40 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:54:40.821646) [db/compaction/compaction_job.cc:1663] [default] [JOB 58] Compacted 1@0 + 1@6 files to L6 => 12472514 bytes
Jan 20 14:54:40 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:54:40.824246) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 174.3 rd, 151.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 13.0 +0.0 blob) out(11.9 +0.0 blob), read-write-amplify(39.0) write-amplify(18.1) OK, records in: 8008, records dropped: 524 output_compression: NoCompression
Jan 20 14:54:40 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:54:40.824278) EVENT_LOG_v1 {"time_micros": 1768920880824264, "job": 58, "event": "compaction_finished", "compaction_time_micros": 82142, "compaction_time_cpu_micros": 27443, "output_level": 6, "num_output_files": 1, "total_output_size": 12472514, "num_input_records": 8008, "num_output_records": 7484, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 14:54:40 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000100.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:54:40 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768920880824789, "job": 58, "event": "table_file_deletion", "file_number": 100}
Jan 20 14:54:40 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000098.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:54:40 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768920880829464, "job": 58, "event": "table_file_deletion", "file_number": 98}
Jan 20 14:54:40 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:54:40.739075) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:54:40 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:54:40.829555) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:54:40 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:54:40.829561) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:54:40 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:54:40.829562) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:54:40 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:54:40.829564) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:54:40 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:54:40.829567) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:54:40 compute-0 nova_compute[250018]: 2026-01-20 14:54:40.898 250022 DEBUG oslo_concurrency.lockutils [None req-94702a09-f174-48ca-a6a4-8d7b6543f078 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:54:40 compute-0 nova_compute[250018]: 2026-01-20 14:54:40.899 250022 DEBUG oslo_concurrency.lockutils [None req-94702a09-f174-48ca-a6a4-8d7b6543f078 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:54:40 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:54:40 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:54:40 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:54:40.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:54:40 compute-0 nova_compute[250018]: 2026-01-20 14:54:40.987 250022 DEBUG oslo_concurrency.processutils [None req-94702a09-f174-48ca-a6a4-8d7b6543f078 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:54:41 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:54:41 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:54:41 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:54:41.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:54:41 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:54:41 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3706603828' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:54:41 compute-0 nova_compute[250018]: 2026-01-20 14:54:41.473 250022 DEBUG oslo_concurrency.processutils [None req-94702a09-f174-48ca-a6a4-8d7b6543f078 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:54:41 compute-0 nova_compute[250018]: 2026-01-20 14:54:41.480 250022 DEBUG nova.compute.provider_tree [None req-94702a09-f174-48ca-a6a4-8d7b6543f078 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:54:41 compute-0 nova_compute[250018]: 2026-01-20 14:54:41.502 250022 DEBUG nova.scheduler.client.report [None req-94702a09-f174-48ca-a6a4-8d7b6543f078 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:54:41 compute-0 nova_compute[250018]: 2026-01-20 14:54:41.551 250022 DEBUG oslo_concurrency.lockutils [None req-94702a09-f174-48ca-a6a4-8d7b6543f078 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.652s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:54:41 compute-0 nova_compute[250018]: 2026-01-20 14:54:41.595 250022 INFO nova.scheduler.client.report [None req-94702a09-f174-48ca-a6a4-8d7b6543f078 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] Deleted allocations for instance d23ddbd4-8b5d-4bf5-a02d-3fb69b940770
Jan 20 14:54:41 compute-0 nova_compute[250018]: 2026-01-20 14:54:41.683 250022 DEBUG oslo_concurrency.lockutils [None req-94702a09-f174-48ca-a6a4-8d7b6543f078 15c455b119784bb9abe8e4774dadd01e 0ad54030e5cc477e939e073b52024ec4 - - default default] Lock "d23ddbd4-8b5d-4bf5-a02d-3fb69b940770" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.146s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:54:41 compute-0 ceph-mon[74360]: osdmap e298: 3 total, 3 up, 3 in
Jan 20 14:54:41 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3706603828' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:54:42 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2056: 321 pgs: 321 active+clean; 629 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.2 MiB/s wr, 260 op/s
Jan 20 14:54:42 compute-0 nova_compute[250018]: 2026-01-20 14:54:42.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:54:42 compute-0 nova_compute[250018]: 2026-01-20 14:54:42.051 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 14:54:42 compute-0 nova_compute[250018]: 2026-01-20 14:54:42.186 250022 DEBUG nova.compute.manager [req-1ae8a325-9d53-45da-b01f-9fd834783531 req-715ca840-1105-42ad-a79a-99ff9a7802d7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] Received event network-changed-5f645763-9f97-4686-80ab-6df7299b1235 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:54:42 compute-0 nova_compute[250018]: 2026-01-20 14:54:42.186 250022 DEBUG nova.compute.manager [req-1ae8a325-9d53-45da-b01f-9fd834783531 req-715ca840-1105-42ad-a79a-99ff9a7802d7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] Refreshing instance network info cache due to event network-changed-5f645763-9f97-4686-80ab-6df7299b1235. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:54:42 compute-0 nova_compute[250018]: 2026-01-20 14:54:42.187 250022 DEBUG oslo_concurrency.lockutils [req-1ae8a325-9d53-45da-b01f-9fd834783531 req-715ca840-1105-42ad-a79a-99ff9a7802d7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-b346e1ba-9e83-4e7f-bc03-c327d3e4173b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:54:42 compute-0 nova_compute[250018]: 2026-01-20 14:54:42.187 250022 DEBUG oslo_concurrency.lockutils [req-1ae8a325-9d53-45da-b01f-9fd834783531 req-715ca840-1105-42ad-a79a-99ff9a7802d7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-b346e1ba-9e83-4e7f-bc03-c327d3e4173b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:54:42 compute-0 nova_compute[250018]: 2026-01-20 14:54:42.187 250022 DEBUG nova.network.neutron [req-1ae8a325-9d53-45da-b01f-9fd834783531 req-715ca840-1105-42ad-a79a-99ff9a7802d7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] Refreshing network info cache for port 5f645763-9f97-4686-80ab-6df7299b1235 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:54:42 compute-0 ceph-mon[74360]: pgmap v2056: 321 pgs: 321 active+clean; 629 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.2 MiB/s wr, 260 op/s
Jan 20 14:54:42 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/3510984213' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:54:42 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/3510984213' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:54:42 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:54:42 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:54:42 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:54:42.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:54:43 compute-0 nova_compute[250018]: 2026-01-20 14:54:43.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:54:43 compute-0 nova_compute[250018]: 2026-01-20 14:54:43.075 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:54:43 compute-0 nova_compute[250018]: 2026-01-20 14:54:43.076 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:54:43 compute-0 nova_compute[250018]: 2026-01-20 14:54:43.076 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:54:43 compute-0 nova_compute[250018]: 2026-01-20 14:54:43.076 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 14:54:43 compute-0 nova_compute[250018]: 2026-01-20 14:54:43.076 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:54:43 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:54:43 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:54:43 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:54:43.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:54:43 compute-0 nova_compute[250018]: 2026-01-20 14:54:43.395 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:54:43 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:54:43 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2834153926' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:54:43 compute-0 nova_compute[250018]: 2026-01-20 14:54:43.573 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:54:43 compute-0 nova_compute[250018]: 2026-01-20 14:54:43.651 250022 DEBUG nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] skipping disk for instance-0000006e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:54:43 compute-0 nova_compute[250018]: 2026-01-20 14:54:43.651 250022 DEBUG nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] skipping disk for instance-0000006e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:54:43 compute-0 nova_compute[250018]: 2026-01-20 14:54:43.655 250022 DEBUG nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] skipping disk for instance-0000007b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:54:43 compute-0 nova_compute[250018]: 2026-01-20 14:54:43.655 250022 DEBUG nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] skipping disk for instance-0000007b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:54:43 compute-0 nova_compute[250018]: 2026-01-20 14:54:43.815 250022 WARNING nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:54:43 compute-0 nova_compute[250018]: 2026-01-20 14:54:43.816 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4013MB free_disk=20.78514862060547GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 14:54:43 compute-0 nova_compute[250018]: 2026-01-20 14:54:43.816 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:54:43 compute-0 nova_compute[250018]: 2026-01-20 14:54:43.817 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:54:43 compute-0 nova_compute[250018]: 2026-01-20 14:54:43.873 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:54:43 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/2834153926' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:54:43 compute-0 nova_compute[250018]: 2026-01-20 14:54:43.889 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Instance c3b4d4c6-c42f-4abc-9c01-89ec3e10c677 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 14:54:43 compute-0 nova_compute[250018]: 2026-01-20 14:54:43.890 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Instance b346e1ba-9e83-4e7f-bc03-c327d3e4173b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 14:54:43 compute-0 nova_compute[250018]: 2026-01-20 14:54:43.890 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 14:54:43 compute-0 nova_compute[250018]: 2026-01-20 14:54:43.890 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 14:54:43 compute-0 nova_compute[250018]: 2026-01-20 14:54:43.972 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:54:44 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2057: 321 pgs: 321 active+clean; 590 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 31 KiB/s wr, 271 op/s
Jan 20 14:54:44 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:54:44 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4065210901' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:54:44 compute-0 nova_compute[250018]: 2026-01-20 14:54:44.426 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:54:44 compute-0 nova_compute[250018]: 2026-01-20 14:54:44.431 250022 DEBUG nova.compute.provider_tree [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:54:44 compute-0 nova_compute[250018]: 2026-01-20 14:54:44.448 250022 DEBUG nova.scheduler.client.report [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:54:44 compute-0 nova_compute[250018]: 2026-01-20 14:54:44.479 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 14:54:44 compute-0 nova_compute[250018]: 2026-01-20 14:54:44.479 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.663s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:54:44 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:54:44 compute-0 nova_compute[250018]: 2026-01-20 14:54:44.738 250022 DEBUG nova.network.neutron [req-1ae8a325-9d53-45da-b01f-9fd834783531 req-715ca840-1105-42ad-a79a-99ff9a7802d7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] Updated VIF entry in instance network info cache for port 5f645763-9f97-4686-80ab-6df7299b1235. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 14:54:44 compute-0 nova_compute[250018]: 2026-01-20 14:54:44.739 250022 DEBUG nova.network.neutron [req-1ae8a325-9d53-45da-b01f-9fd834783531 req-715ca840-1105-42ad-a79a-99ff9a7802d7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] Updating instance_info_cache with network_info: [{"id": "5f645763-9f97-4686-80ab-6df7299b1235", "address": "fa:16:3e:a5:83:5d", "network": {"id": "41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1445030024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.184", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3b1b7f5b4f84b5abbc401eb577c85c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5f645763-9f", "ovs_interfaceid": "5f645763-9f97-4686-80ab-6df7299b1235", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:54:44 compute-0 nova_compute[250018]: 2026-01-20 14:54:44.760 250022 DEBUG oslo_concurrency.lockutils [req-1ae8a325-9d53-45da-b01f-9fd834783531 req-715ca840-1105-42ad-a79a-99ff9a7802d7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-b346e1ba-9e83-4e7f-bc03-c327d3e4173b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:54:44 compute-0 ceph-mon[74360]: pgmap v2057: 321 pgs: 321 active+clean; 590 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 31 KiB/s wr, 271 op/s
Jan 20 14:54:44 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/2381813736' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:54:44 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/4065210901' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:54:44 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/510416413' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:54:44 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:54:44 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:54:44 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:54:44.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:54:45 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:54:45 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:54:45 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:54:45.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:54:45 compute-0 nova_compute[250018]: 2026-01-20 14:54:45.480 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:54:45 compute-0 nova_compute[250018]: 2026-01-20 14:54:45.480 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:54:45 compute-0 nova_compute[250018]: 2026-01-20 14:54:45.480 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:54:46 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2058: 321 pgs: 321 active+clean; 446 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 28 KiB/s wr, 256 op/s
Jan 20 14:54:46 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/2620888822' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:54:46 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/2105245927' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:54:46 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/802906552' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:54:46 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:54:46 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:54:46 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:54:46.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:54:47 compute-0 nova_compute[250018]: 2026-01-20 14:54:47.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:54:47 compute-0 nova_compute[250018]: 2026-01-20 14:54:47.051 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 14:54:47 compute-0 nova_compute[250018]: 2026-01-20 14:54:47.051 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 14:54:47 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:54:47 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:54:47 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:54:47.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:54:47 compute-0 ceph-mon[74360]: pgmap v2058: 321 pgs: 321 active+clean; 446 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 28 KiB/s wr, 256 op/s
Jan 20 14:54:47 compute-0 nova_compute[250018]: 2026-01-20 14:54:47.349 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "refresh_cache-c3b4d4c6-c42f-4abc-9c01-89ec3e10c677" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:54:47 compute-0 nova_compute[250018]: 2026-01-20 14:54:47.350 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquired lock "refresh_cache-c3b4d4c6-c42f-4abc-9c01-89ec3e10c677" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:54:47 compute-0 nova_compute[250018]: 2026-01-20 14:54:47.350 250022 DEBUG nova.network.neutron [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 20 14:54:47 compute-0 nova_compute[250018]: 2026-01-20 14:54:47.350 250022 DEBUG nova.objects.instance [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lazy-loading 'info_cache' on Instance uuid c3b4d4c6-c42f-4abc-9c01-89ec3e10c677 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:54:47 compute-0 ovn_controller[148666]: 2026-01-20T14:54:47Z|00439|binding|INFO|Releasing lport 3fa2df7b-42b2-4a3b-a33b-ab37b5d6aef3 from this chassis (sb_readonly=0)
Jan 20 14:54:47 compute-0 ovn_controller[148666]: 2026-01-20T14:54:47Z|00440|binding|INFO|Releasing lport b033e9e6-9781-4424-a20f-7b48a14e2c80 from this chassis (sb_readonly=0)
Jan 20 14:54:47 compute-0 nova_compute[250018]: 2026-01-20 14:54:47.420 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:54:47 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e298 do_prune osdmap full prune enabled
Jan 20 14:54:47 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e299 e299: 3 total, 3 up, 3 in
Jan 20 14:54:47 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e299: 3 total, 3 up, 3 in
Jan 20 14:54:48 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2060: 321 pgs: 321 active+clean; 422 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 28 KiB/s wr, 221 op/s
Jan 20 14:54:48 compute-0 nova_compute[250018]: 2026-01-20 14:54:48.397 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:54:48 compute-0 ceph-mon[74360]: osdmap e299: 3 total, 3 up, 3 in
Jan 20 14:54:48 compute-0 ceph-mon[74360]: pgmap v2060: 321 pgs: 321 active+clean; 422 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 28 KiB/s wr, 221 op/s
Jan 20 14:54:48 compute-0 nova_compute[250018]: 2026-01-20 14:54:48.916 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:54:48 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:54:48 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:54:48 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:54:48.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:54:49 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:54:49 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:54:49 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:54:49.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:54:49 compute-0 nova_compute[250018]: 2026-01-20 14:54:49.400 250022 DEBUG nova.network.neutron [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] Updating instance_info_cache with network_info: [{"id": "7869c4f4-4542-4bf0-aebd-eeb2d3ce94f9", "address": "fa:16:3e:7f:92:68", "network": {"id": "79184781-1f23-4584-87de-08e262242488", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-165460946-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a29915e0dd2403fbd7b7e847696b00a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7869c4f4-45", "ovs_interfaceid": "7869c4f4-4542-4bf0-aebd-eeb2d3ce94f9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:54:49 compute-0 nova_compute[250018]: 2026-01-20 14:54:49.423 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Releasing lock "refresh_cache-c3b4d4c6-c42f-4abc-9c01-89ec3e10c677" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:54:49 compute-0 nova_compute[250018]: 2026-01-20 14:54:49.424 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 20 14:54:49 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:54:49 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e299 do_prune osdmap full prune enabled
Jan 20 14:54:49 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e300 e300: 3 total, 3 up, 3 in
Jan 20 14:54:49 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e300: 3 total, 3 up, 3 in
Jan 20 14:54:49 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/4106358672' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:54:49 compute-0 ceph-mon[74360]: osdmap e300: 3 total, 3 up, 3 in
Jan 20 14:54:50 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2062: 321 pgs: 321 active+clean; 404 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1020 KiB/s rd, 6.2 KiB/s wr, 162 op/s
Jan 20 14:54:50 compute-0 ceph-mon[74360]: pgmap v2062: 321 pgs: 321 active+clean; 404 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1020 KiB/s rd, 6.2 KiB/s wr, 162 op/s
Jan 20 14:54:50 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:54:50 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:54:50 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:54:50.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:54:51 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:54:51 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:54:51 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:54:51.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:54:51 compute-0 ovn_controller[148666]: 2026-01-20T14:54:51Z|00054|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:a5:83:5d 10.100.0.14
Jan 20 14:54:51 compute-0 ovn_controller[148666]: 2026-01-20T14:54:51Z|00055|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a5:83:5d 10.100.0.14
Jan 20 14:54:51 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/1206076325' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:54:52 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2063: 321 pgs: 321 active+clean; 378 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 502 KiB/s rd, 2.2 MiB/s wr, 137 op/s
Jan 20 14:54:52 compute-0 sudo[323708]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:54:52 compute-0 sudo[323708]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:54:52 compute-0 sudo[323708]: pam_unix(sudo:session): session closed for user root
Jan 20 14:54:52 compute-0 sudo[323733]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:54:52 compute-0 sudo[323733]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:54:52 compute-0 sudo[323733]: pam_unix(sudo:session): session closed for user root
Jan 20 14:54:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:54:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:54:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:54:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:54:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:54:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:54:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Optimize plan auto_2026-01-20_14:54:52
Jan 20 14:54:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 14:54:52 compute-0 ceph-mgr[74653]: [balancer INFO root] do_upmap
Jan 20 14:54:52 compute-0 ceph-mgr[74653]: [balancer INFO root] pools ['default.rgw.control', '.rgw.root', 'cephfs.cephfs.meta', 'images', 'default.rgw.log', 'volumes', 'default.rgw.meta', 'backups', 'vms', '.mgr', 'cephfs.cephfs.data']
Jan 20 14:54:52 compute-0 ceph-mgr[74653]: [balancer INFO root] prepared 0/10 changes
Jan 20 14:54:52 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e300 do_prune osdmap full prune enabled
Jan 20 14:54:52 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e301 e301: 3 total, 3 up, 3 in
Jan 20 14:54:52 compute-0 ceph-mon[74360]: pgmap v2063: 321 pgs: 321 active+clean; 378 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 502 KiB/s rd, 2.2 MiB/s wr, 137 op/s
Jan 20 14:54:52 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e301: 3 total, 3 up, 3 in
Jan 20 14:54:52 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:54:52 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:54:52 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:54:52.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:54:53 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:54:53 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:54:53 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:54:53.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:54:53 compute-0 nova_compute[250018]: 2026-01-20 14:54:53.398 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:54:53 compute-0 nova_compute[250018]: 2026-01-20 14:54:53.692 250022 DEBUG oslo_concurrency.lockutils [None req-71229272-21da-4634-b4b3-0ce7e3c2eb0a d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Acquiring lock "c3b4d4c6-c42f-4abc-9c01-89ec3e10c677" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:54:53 compute-0 nova_compute[250018]: 2026-01-20 14:54:53.692 250022 DEBUG oslo_concurrency.lockutils [None req-71229272-21da-4634-b4b3-0ce7e3c2eb0a d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Lock "c3b4d4c6-c42f-4abc-9c01-89ec3e10c677" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:54:53 compute-0 nova_compute[250018]: 2026-01-20 14:54:53.693 250022 DEBUG oslo_concurrency.lockutils [None req-71229272-21da-4634-b4b3-0ce7e3c2eb0a d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Acquiring lock "c3b4d4c6-c42f-4abc-9c01-89ec3e10c677-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:54:53 compute-0 nova_compute[250018]: 2026-01-20 14:54:53.693 250022 DEBUG oslo_concurrency.lockutils [None req-71229272-21da-4634-b4b3-0ce7e3c2eb0a d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Lock "c3b4d4c6-c42f-4abc-9c01-89ec3e10c677-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:54:53 compute-0 nova_compute[250018]: 2026-01-20 14:54:53.693 250022 DEBUG oslo_concurrency.lockutils [None req-71229272-21da-4634-b4b3-0ce7e3c2eb0a d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Lock "c3b4d4c6-c42f-4abc-9c01-89ec3e10c677-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:54:53 compute-0 nova_compute[250018]: 2026-01-20 14:54:53.694 250022 INFO nova.compute.manager [None req-71229272-21da-4634-b4b3-0ce7e3c2eb0a d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] Terminating instance
Jan 20 14:54:53 compute-0 nova_compute[250018]: 2026-01-20 14:54:53.695 250022 DEBUG nova.compute.manager [None req-71229272-21da-4634-b4b3-0ce7e3c2eb0a d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 14:54:53 compute-0 kernel: tap7869c4f4-45 (unregistering): left promiscuous mode
Jan 20 14:54:53 compute-0 NetworkManager[48960]: <info>  [1768920893.7442] device (tap7869c4f4-45): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 14:54:53 compute-0 ovn_controller[148666]: 2026-01-20T14:54:53Z|00441|binding|INFO|Releasing lport 7869c4f4-4542-4bf0-aebd-eeb2d3ce94f9 from this chassis (sb_readonly=0)
Jan 20 14:54:53 compute-0 ovn_controller[148666]: 2026-01-20T14:54:53Z|00442|binding|INFO|Setting lport 7869c4f4-4542-4bf0-aebd-eeb2d3ce94f9 down in Southbound
Jan 20 14:54:53 compute-0 nova_compute[250018]: 2026-01-20 14:54:53.756 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:54:53 compute-0 ovn_controller[148666]: 2026-01-20T14:54:53Z|00443|binding|INFO|Removing iface tap7869c4f4-45 ovn-installed in OVS
Jan 20 14:54:53 compute-0 nova_compute[250018]: 2026-01-20 14:54:53.758 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:54:53 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:54:53.765 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7f:92:68 10.100.0.11'], port_security=['fa:16:3e:7f:92:68 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'c3b4d4c6-c42f-4abc-9c01-89ec3e10c677', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-79184781-1f23-4584-87de-08e262242488', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0a29915e0dd2403fbd7b7e847696b00a', 'neutron:revision_number': '8', 'neutron:security_group_ids': '30ec24b7-15ba-4aeb-9785-539071729f77', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6b73ab05-b29f-401a-84a5-ea1a96103f33, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=7869c4f4-4542-4bf0-aebd-eeb2d3ce94f9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:54:53 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:54:53.767 160071 INFO neutron.agent.ovn.metadata.agent [-] Port 7869c4f4-4542-4bf0-aebd-eeb2d3ce94f9 in datapath 79184781-1f23-4584-87de-08e262242488 unbound from our chassis
Jan 20 14:54:53 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:54:53.770 160071 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 79184781-1f23-4584-87de-08e262242488, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 14:54:53 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:54:53.771 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[69e30b3d-8573-42f1-b8a7-57961508f503]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:54:53 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:54:53.772 160071 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-79184781-1f23-4584-87de-08e262242488 namespace which is not needed anymore
Jan 20 14:54:53 compute-0 nova_compute[250018]: 2026-01-20 14:54:53.772 250022 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768920878.7717693, d23ddbd4-8b5d-4bf5-a02d-3fb69b940770 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:54:53 compute-0 nova_compute[250018]: 2026-01-20 14:54:53.773 250022 INFO nova.compute.manager [-] [instance: d23ddbd4-8b5d-4bf5-a02d-3fb69b940770] VM Stopped (Lifecycle Event)
Jan 20 14:54:53 compute-0 nova_compute[250018]: 2026-01-20 14:54:53.784 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:54:53 compute-0 nova_compute[250018]: 2026-01-20 14:54:53.794 250022 DEBUG nova.compute.manager [None req-7a15527b-1405-4643-adc5-7ed75554fbee - - - - - -] [instance: d23ddbd4-8b5d-4bf5-a02d-3fb69b940770] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:54:53 compute-0 ceph-mon[74360]: osdmap e301: 3 total, 3 up, 3 in
Jan 20 14:54:53 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/805911879' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:54:53 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/951443556' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:54:53 compute-0 systemd[1]: machine-qemu\x2d51\x2dinstance\x2d0000006e.scope: Deactivated successfully.
Jan 20 14:54:53 compute-0 systemd[1]: machine-qemu\x2d51\x2dinstance\x2d0000006e.scope: Consumed 22.590s CPU time.
Jan 20 14:54:53 compute-0 systemd-machined[216401]: Machine qemu-51-instance-0000006e terminated.
Jan 20 14:54:53 compute-0 nova_compute[250018]: 2026-01-20 14:54:53.968 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:54:53 compute-0 nova_compute[250018]: 2026-01-20 14:54:53.981 250022 INFO nova.virt.libvirt.driver [-] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] Instance destroyed successfully.
Jan 20 14:54:53 compute-0 nova_compute[250018]: 2026-01-20 14:54:53.982 250022 DEBUG nova.objects.instance [None req-71229272-21da-4634-b4b3-0ce7e3c2eb0a d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Lazy-loading 'resources' on Instance uuid c3b4d4c6-c42f-4abc-9c01-89ec3e10c677 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:54:54 compute-0 neutron-haproxy-ovnmeta-79184781-1f23-4584-87de-08e262242488[315916]: [NOTICE]   (315920) : haproxy version is 2.8.14-c23fe91
Jan 20 14:54:54 compute-0 neutron-haproxy-ovnmeta-79184781-1f23-4584-87de-08e262242488[315916]: [NOTICE]   (315920) : path to executable is /usr/sbin/haproxy
Jan 20 14:54:54 compute-0 neutron-haproxy-ovnmeta-79184781-1f23-4584-87de-08e262242488[315916]: [WARNING]  (315920) : Exiting Master process...
Jan 20 14:54:54 compute-0 neutron-haproxy-ovnmeta-79184781-1f23-4584-87de-08e262242488[315916]: [ALERT]    (315920) : Current worker (315922) exited with code 143 (Terminated)
Jan 20 14:54:54 compute-0 neutron-haproxy-ovnmeta-79184781-1f23-4584-87de-08e262242488[315916]: [WARNING]  (315920) : All workers exited. Exiting... (0)
Jan 20 14:54:54 compute-0 systemd[1]: libpod-bfa38f409e4db746f86c289bba5314413cbadb79ed929249202b16886c106a6d.scope: Deactivated successfully.
Jan 20 14:54:54 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2065: 321 pgs: 321 active+clean; 359 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 112 KiB/s rd, 5.2 MiB/s wr, 137 op/s
Jan 20 14:54:54 compute-0 podman[323783]: 2026-01-20 14:54:54.019709016 +0000 UTC m=+0.121052398 container died bfa38f409e4db746f86c289bba5314413cbadb79ed929249202b16886c106a6d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-79184781-1f23-4584-87de-08e262242488, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 20 14:54:54 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-bfa38f409e4db746f86c289bba5314413cbadb79ed929249202b16886c106a6d-userdata-shm.mount: Deactivated successfully.
Jan 20 14:54:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-691026effad5f1390d9f39691f429dc01db6c0a2d89a24e97f7db25d7a7619df-merged.mount: Deactivated successfully.
Jan 20 14:54:54 compute-0 podman[323783]: 2026-01-20 14:54:54.060906796 +0000 UTC m=+0.162250178 container cleanup bfa38f409e4db746f86c289bba5314413cbadb79ed929249202b16886c106a6d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-79184781-1f23-4584-87de-08e262242488, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 20 14:54:54 compute-0 systemd[1]: libpod-conmon-bfa38f409e4db746f86c289bba5314413cbadb79ed929249202b16886c106a6d.scope: Deactivated successfully.
Jan 20 14:54:54 compute-0 podman[323821]: 2026-01-20 14:54:54.126071299 +0000 UTC m=+0.043572954 container remove bfa38f409e4db746f86c289bba5314413cbadb79ed929249202b16886c106a6d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-79184781-1f23-4584-87de-08e262242488, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Jan 20 14:54:54 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:54:54.131 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[f210af31-9add-434e-b4ca-8d7ff3bdd061]: (4, ('Tue Jan 20 02:54:53 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-79184781-1f23-4584-87de-08e262242488 (bfa38f409e4db746f86c289bba5314413cbadb79ed929249202b16886c106a6d)\nbfa38f409e4db746f86c289bba5314413cbadb79ed929249202b16886c106a6d\nTue Jan 20 02:54:54 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-79184781-1f23-4584-87de-08e262242488 (bfa38f409e4db746f86c289bba5314413cbadb79ed929249202b16886c106a6d)\nbfa38f409e4db746f86c289bba5314413cbadb79ed929249202b16886c106a6d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:54:54 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:54:54.132 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[4d8e8f53-ff65-419d-a7e0-b58efc113036]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:54:54 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:54:54.133 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap79184781-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:54:54 compute-0 nova_compute[250018]: 2026-01-20 14:54:54.135 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:54:54 compute-0 kernel: tap79184781-10: left promiscuous mode
Jan 20 14:54:54 compute-0 nova_compute[250018]: 2026-01-20 14:54:54.153 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:54:54 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:54:54.155 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[851ecd82-1bcf-4179-8bb8-b722792691e1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:54:54 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:54:54.178 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[b7fefdc8-9416-4595-8c5c-dddc03051be9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:54:54 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:54:54.179 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[09d5bdcd-5522-41a6-bcb5-b0d711d9c22a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:54:54 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:54:54.193 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[2df0ca9a-50b0-4b7b-8c94-618fb1bae89b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 660858, 'reachable_time': 23054, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 323839, 'error': None, 'target': 'ovnmeta-79184781-1f23-4584-87de-08e262242488', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:54:54 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:54:54.194 160517 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-79184781-1f23-4584-87de-08e262242488 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 14:54:54 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:54:54.195 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[67dafbe8-53fe-4489-9333-ded92206e192]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:54:54 compute-0 systemd[1]: run-netns-ovnmeta\x2d79184781\x2d1f23\x2d4584\x2d87de\x2d08e262242488.mount: Deactivated successfully.
Jan 20 14:54:54 compute-0 nova_compute[250018]: 2026-01-20 14:54:54.227 250022 DEBUG nova.compute.manager [req-a73a2d82-cee9-47c9-a534-4e77ca671745 req-0ea448bd-bd2a-46ff-b407-f1f0cc00ec04 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] Received event network-vif-unplugged-7869c4f4-4542-4bf0-aebd-eeb2d3ce94f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:54:54 compute-0 nova_compute[250018]: 2026-01-20 14:54:54.228 250022 DEBUG oslo_concurrency.lockutils [req-a73a2d82-cee9-47c9-a534-4e77ca671745 req-0ea448bd-bd2a-46ff-b407-f1f0cc00ec04 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "c3b4d4c6-c42f-4abc-9c01-89ec3e10c677-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:54:54 compute-0 nova_compute[250018]: 2026-01-20 14:54:54.228 250022 DEBUG oslo_concurrency.lockutils [req-a73a2d82-cee9-47c9-a534-4e77ca671745 req-0ea448bd-bd2a-46ff-b407-f1f0cc00ec04 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "c3b4d4c6-c42f-4abc-9c01-89ec3e10c677-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:54:54 compute-0 nova_compute[250018]: 2026-01-20 14:54:54.228 250022 DEBUG oslo_concurrency.lockutils [req-a73a2d82-cee9-47c9-a534-4e77ca671745 req-0ea448bd-bd2a-46ff-b407-f1f0cc00ec04 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "c3b4d4c6-c42f-4abc-9c01-89ec3e10c677-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:54:54 compute-0 nova_compute[250018]: 2026-01-20 14:54:54.229 250022 DEBUG nova.compute.manager [req-a73a2d82-cee9-47c9-a534-4e77ca671745 req-0ea448bd-bd2a-46ff-b407-f1f0cc00ec04 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] No waiting events found dispatching network-vif-unplugged-7869c4f4-4542-4bf0-aebd-eeb2d3ce94f9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:54:54 compute-0 nova_compute[250018]: 2026-01-20 14:54:54.229 250022 DEBUG nova.compute.manager [req-a73a2d82-cee9-47c9-a534-4e77ca671745 req-0ea448bd-bd2a-46ff-b407-f1f0cc00ec04 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] Received event network-vif-unplugged-7869c4f4-4542-4bf0-aebd-eeb2d3ce94f9 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 20 14:54:54 compute-0 nova_compute[250018]: 2026-01-20 14:54:54.241 250022 DEBUG nova.virt.libvirt.vif [None req-71229272-21da-4634-b4b3-0ce7e3c2eb0a d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:50:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerStableDeviceRescueTest-server-178332738',display_name='tempest-ServerStableDeviceRescueTest-server-178332738',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstabledevicerescuetest-server-178332738',id=110,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:51:02Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0a29915e0dd2403fbd7b7e847696b00a',ramdisk_id='',reservation_id='r-6cagqhwr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerStableDeviceRescueTest-129078052',owner_user_name='tempest-ServerStableDeviceRescueTest-129078052-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T14:51:08Z,user_data=None,user_id='d85d286ce6224326a0f4a15a06afbfea',uuid=c3b4d4c6-c42f-4abc-9c01-89ec3e10c677,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7869c4f4-4542-4bf0-aebd-eeb2d3ce94f9", "address": "fa:16:3e:7f:92:68", "network": {"id": "79184781-1f23-4584-87de-08e262242488", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-165460946-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a29915e0dd2403fbd7b7e847696b00a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7869c4f4-45", "ovs_interfaceid": "7869c4f4-4542-4bf0-aebd-eeb2d3ce94f9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 14:54:54 compute-0 nova_compute[250018]: 2026-01-20 14:54:54.241 250022 DEBUG nova.network.os_vif_util [None req-71229272-21da-4634-b4b3-0ce7e3c2eb0a d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Converting VIF {"id": "7869c4f4-4542-4bf0-aebd-eeb2d3ce94f9", "address": "fa:16:3e:7f:92:68", "network": {"id": "79184781-1f23-4584-87de-08e262242488", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-165460946-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a29915e0dd2403fbd7b7e847696b00a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7869c4f4-45", "ovs_interfaceid": "7869c4f4-4542-4bf0-aebd-eeb2d3ce94f9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:54:54 compute-0 nova_compute[250018]: 2026-01-20 14:54:54.242 250022 DEBUG nova.network.os_vif_util [None req-71229272-21da-4634-b4b3-0ce7e3c2eb0a d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:7f:92:68,bridge_name='br-int',has_traffic_filtering=True,id=7869c4f4-4542-4bf0-aebd-eeb2d3ce94f9,network=Network(79184781-1f23-4584-87de-08e262242488),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7869c4f4-45') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:54:54 compute-0 nova_compute[250018]: 2026-01-20 14:54:54.243 250022 DEBUG os_vif [None req-71229272-21da-4634-b4b3-0ce7e3c2eb0a d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:7f:92:68,bridge_name='br-int',has_traffic_filtering=True,id=7869c4f4-4542-4bf0-aebd-eeb2d3ce94f9,network=Network(79184781-1f23-4584-87de-08e262242488),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7869c4f4-45') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 14:54:54 compute-0 nova_compute[250018]: 2026-01-20 14:54:54.245 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:54:54 compute-0 nova_compute[250018]: 2026-01-20 14:54:54.246 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7869c4f4-45, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:54:54 compute-0 nova_compute[250018]: 2026-01-20 14:54:54.247 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:54:54 compute-0 nova_compute[250018]: 2026-01-20 14:54:54.249 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:54:54 compute-0 nova_compute[250018]: 2026-01-20 14:54:54.252 250022 INFO os_vif [None req-71229272-21da-4634-b4b3-0ce7e3c2eb0a d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:7f:92:68,bridge_name='br-int',has_traffic_filtering=True,id=7869c4f4-4542-4bf0-aebd-eeb2d3ce94f9,network=Network(79184781-1f23-4584-87de-08e262242488),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7869c4f4-45')
Jan 20 14:54:54 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e301 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:54:54 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/276321979' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:54:54 compute-0 ceph-mon[74360]: pgmap v2065: 321 pgs: 321 active+clean; 359 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 112 KiB/s rd, 5.2 MiB/s wr, 137 op/s
Jan 20 14:54:54 compute-0 nova_compute[250018]: 2026-01-20 14:54:54.908 250022 INFO nova.virt.libvirt.driver [None req-71229272-21da-4634-b4b3-0ce7e3c2eb0a d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] Deleting instance files /var/lib/nova/instances/c3b4d4c6-c42f-4abc-9c01-89ec3e10c677_del
Jan 20 14:54:54 compute-0 nova_compute[250018]: 2026-01-20 14:54:54.910 250022 INFO nova.virt.libvirt.driver [None req-71229272-21da-4634-b4b3-0ce7e3c2eb0a d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] Deletion of /var/lib/nova/instances/c3b4d4c6-c42f-4abc-9c01-89ec3e10c677_del complete
Jan 20 14:54:54 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:54:54 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:54:54 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:54:54.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:54:54 compute-0 nova_compute[250018]: 2026-01-20 14:54:54.963 250022 INFO nova.compute.manager [None req-71229272-21da-4634-b4b3-0ce7e3c2eb0a d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] Took 1.27 seconds to destroy the instance on the hypervisor.
Jan 20 14:54:54 compute-0 nova_compute[250018]: 2026-01-20 14:54:54.964 250022 DEBUG oslo.service.loopingcall [None req-71229272-21da-4634-b4b3-0ce7e3c2eb0a d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 14:54:54 compute-0 nova_compute[250018]: 2026-01-20 14:54:54.964 250022 DEBUG nova.compute.manager [-] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 14:54:54 compute-0 nova_compute[250018]: 2026-01-20 14:54:54.965 250022 DEBUG nova.network.neutron [-] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 14:54:55 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:54:55 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:54:55 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:54:55.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:54:55 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/2457285435' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:54:56 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2066: 321 pgs: 321 active+clean; 323 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 595 KiB/s rd, 6.3 MiB/s wr, 260 op/s
Jan 20 14:54:56 compute-0 nova_compute[250018]: 2026-01-20 14:54:56.441 250022 DEBUG nova.compute.manager [req-1a503813-436d-41b4-be33-00afa58bded6 req-9db0b5fa-2bdc-4bab-ae8d-08aef50ed269 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] Received event network-vif-plugged-7869c4f4-4542-4bf0-aebd-eeb2d3ce94f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:54:56 compute-0 nova_compute[250018]: 2026-01-20 14:54:56.442 250022 DEBUG oslo_concurrency.lockutils [req-1a503813-436d-41b4-be33-00afa58bded6 req-9db0b5fa-2bdc-4bab-ae8d-08aef50ed269 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "c3b4d4c6-c42f-4abc-9c01-89ec3e10c677-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:54:56 compute-0 nova_compute[250018]: 2026-01-20 14:54:56.442 250022 DEBUG oslo_concurrency.lockutils [req-1a503813-436d-41b4-be33-00afa58bded6 req-9db0b5fa-2bdc-4bab-ae8d-08aef50ed269 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "c3b4d4c6-c42f-4abc-9c01-89ec3e10c677-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:54:56 compute-0 nova_compute[250018]: 2026-01-20 14:54:56.442 250022 DEBUG oslo_concurrency.lockutils [req-1a503813-436d-41b4-be33-00afa58bded6 req-9db0b5fa-2bdc-4bab-ae8d-08aef50ed269 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "c3b4d4c6-c42f-4abc-9c01-89ec3e10c677-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:54:56 compute-0 nova_compute[250018]: 2026-01-20 14:54:56.442 250022 DEBUG nova.compute.manager [req-1a503813-436d-41b4-be33-00afa58bded6 req-9db0b5fa-2bdc-4bab-ae8d-08aef50ed269 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] No waiting events found dispatching network-vif-plugged-7869c4f4-4542-4bf0-aebd-eeb2d3ce94f9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:54:56 compute-0 nova_compute[250018]: 2026-01-20 14:54:56.443 250022 WARNING nova.compute.manager [req-1a503813-436d-41b4-be33-00afa58bded6 req-9db0b5fa-2bdc-4bab-ae8d-08aef50ed269 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] Received unexpected event network-vif-plugged-7869c4f4-4542-4bf0-aebd-eeb2d3ce94f9 for instance with vm_state active and task_state deleting.
Jan 20 14:54:56 compute-0 nova_compute[250018]: 2026-01-20 14:54:56.612 250022 DEBUG nova.network.neutron [-] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:54:56 compute-0 nova_compute[250018]: 2026-01-20 14:54:56.632 250022 INFO nova.compute.manager [-] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] Took 1.67 seconds to deallocate network for instance.
Jan 20 14:54:56 compute-0 nova_compute[250018]: 2026-01-20 14:54:56.702 250022 DEBUG oslo_concurrency.lockutils [None req-71229272-21da-4634-b4b3-0ce7e3c2eb0a d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:54:56 compute-0 nova_compute[250018]: 2026-01-20 14:54:56.702 250022 DEBUG oslo_concurrency.lockutils [None req-71229272-21da-4634-b4b3-0ce7e3c2eb0a d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:54:56 compute-0 nova_compute[250018]: 2026-01-20 14:54:56.792 250022 DEBUG oslo_concurrency.processutils [None req-71229272-21da-4634-b4b3-0ce7e3c2eb0a d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:54:56 compute-0 ceph-mon[74360]: pgmap v2066: 321 pgs: 321 active+clean; 323 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 595 KiB/s rd, 6.3 MiB/s wr, 260 op/s
Jan 20 14:54:56 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/3520046388' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:54:56 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:54:56 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:54:56 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:54:56.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:54:57 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:54:57 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:54:57 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:54:57.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:54:57 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:54:57 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/531414823' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:54:57 compute-0 nova_compute[250018]: 2026-01-20 14:54:57.249 250022 DEBUG oslo_concurrency.processutils [None req-71229272-21da-4634-b4b3-0ce7e3c2eb0a d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:54:57 compute-0 nova_compute[250018]: 2026-01-20 14:54:57.254 250022 DEBUG nova.compute.provider_tree [None req-71229272-21da-4634-b4b3-0ce7e3c2eb0a d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:54:57 compute-0 nova_compute[250018]: 2026-01-20 14:54:57.282 250022 DEBUG nova.scheduler.client.report [None req-71229272-21da-4634-b4b3-0ce7e3c2eb0a d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:54:57 compute-0 nova_compute[250018]: 2026-01-20 14:54:57.303 250022 DEBUG oslo_concurrency.lockutils [None req-71229272-21da-4634-b4b3-0ce7e3c2eb0a d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.600s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:54:57 compute-0 nova_compute[250018]: 2026-01-20 14:54:57.325 250022 INFO nova.scheduler.client.report [None req-71229272-21da-4634-b4b3-0ce7e3c2eb0a d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Deleted allocations for instance c3b4d4c6-c42f-4abc-9c01-89ec3e10c677
Jan 20 14:54:57 compute-0 nova_compute[250018]: 2026-01-20 14:54:57.419 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:54:57 compute-0 nova_compute[250018]: 2026-01-20 14:54:57.429 250022 DEBUG oslo_concurrency.lockutils [None req-71229272-21da-4634-b4b3-0ce7e3c2eb0a d85d286ce6224326a0f4a15a06afbfea 0a29915e0dd2403fbd7b7e847696b00a - - default default] Lock "c3b4d4c6-c42f-4abc-9c01-89ec3e10c677" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.736s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:54:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 14:54:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:54:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 14:54:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:54:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:54:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:54:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:54:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:54:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:54:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:54:57 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/531414823' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:54:58 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2067: 321 pgs: 321 active+clean; 289 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 819 KiB/s rd, 7.6 MiB/s wr, 293 op/s
Jan 20 14:54:58 compute-0 nova_compute[250018]: 2026-01-20 14:54:58.401 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:54:58 compute-0 ceph-mon[74360]: pgmap v2067: 321 pgs: 321 active+clean; 289 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 819 KiB/s rd, 7.6 MiB/s wr, 293 op/s
Jan 20 14:54:58 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:54:58 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:54:58 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:54:58.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:54:59 compute-0 nova_compute[250018]: 2026-01-20 14:54:59.151 250022 DEBUG nova.compute.manager [req-8e29550b-0f9f-4f5e-830d-5438dc60566f req-94fa9842-acdf-4515-871f-6c72f3a846b4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] Received event network-vif-deleted-7869c4f4-4542-4bf0-aebd-eeb2d3ce94f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:54:59 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:54:59 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:54:59 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:54:59.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:54:59 compute-0 nova_compute[250018]: 2026-01-20 14:54:59.248 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:54:59 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e301 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:54:59 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e301 do_prune osdmap full prune enabled
Jan 20 14:54:59 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e302 e302: 3 total, 3 up, 3 in
Jan 20 14:54:59 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e302: 3 total, 3 up, 3 in
Jan 20 14:55:00 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2069: 321 pgs: 321 active+clean; 293 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 6.3 MiB/s wr, 330 op/s
Jan 20 14:55:00 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:55:00 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:55:00 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:55:00.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:55:01 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:55:01 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:55:01 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:55:01.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:55:01 compute-0 ceph-mon[74360]: osdmap e302: 3 total, 3 up, 3 in
Jan 20 14:55:01 compute-0 ceph-mon[74360]: pgmap v2069: 321 pgs: 321 active+clean; 293 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 6.3 MiB/s wr, 330 op/s
Jan 20 14:55:01 compute-0 ovn_controller[148666]: 2026-01-20T14:55:01Z|00444|binding|INFO|Releasing lport 3fa2df7b-42b2-4a3b-a33b-ab37b5d6aef3 from this chassis (sb_readonly=0)
Jan 20 14:55:01 compute-0 nova_compute[250018]: 2026-01-20 14:55:01.853 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:55:02 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2070: 321 pgs: 321 active+clean; 293 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 3.9 MiB/s wr, 294 op/s
Jan 20 14:55:02 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/1360851754' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:55:02 compute-0 ceph-mon[74360]: pgmap v2070: 321 pgs: 321 active+clean; 293 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 3.9 MiB/s wr, 294 op/s
Jan 20 14:55:02 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:55:02 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:55:02 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:55:02.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:55:03 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:55:03 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:55:03 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:55:03.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:55:03 compute-0 nova_compute[250018]: 2026-01-20 14:55:03.402 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:55:04 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2071: 321 pgs: 321 active+clean; 271 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 5.0 MiB/s rd, 3.6 MiB/s wr, 357 op/s
Jan 20 14:55:04 compute-0 nova_compute[250018]: 2026-01-20 14:55:04.250 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:55:04 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:55:04 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:55:04 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:55:04 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:55:04.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:55:05 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:55:05 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:55:05 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:55:05.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:55:05 compute-0 podman[323888]: 2026-01-20 14:55:05.479511484 +0000 UTC m=+0.056582523 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 20 14:55:05 compute-0 podman[323887]: 2026-01-20 14:55:05.533210669 +0000 UTC m=+0.104859463 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 20 14:55:05 compute-0 ceph-mon[74360]: pgmap v2071: 321 pgs: 321 active+clean; 271 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 5.0 MiB/s rd, 3.6 MiB/s wr, 357 op/s
Jan 20 14:55:06 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2072: 321 pgs: 321 active+clean; 246 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 1.9 MiB/s wr, 255 op/s
Jan 20 14:55:06 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:55:06 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:55:06 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:55:06.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:55:07 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:55:07 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:55:07 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:55:07.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:55:07 compute-0 ceph-mon[74360]: pgmap v2072: 321 pgs: 321 active+clean; 246 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 1.9 MiB/s wr, 255 op/s
Jan 20 14:55:07 compute-0 ceph-mon[74360]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 20 14:55:07 compute-0 ceph-mon[74360]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3600.0 total, 600.0 interval
                                           Cumulative writes: 10K writes, 46K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.02 MB/s
                                           Cumulative WAL: 10K writes, 10K syncs, 1.00 writes per sync, written: 0.07 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1816 writes, 8282 keys, 1816 commit groups, 1.0 writes per commit group, ingest: 11.40 MB, 0.02 MB/s
                                           Interval WAL: 1815 writes, 1815 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     40.5      1.47              0.20        29    0.051       0      0       0.0       0.0
                                             L6      1/0   11.89 MB   0.0      0.3     0.1      0.2       0.2      0.0       0.0   4.3    127.7    107.5      2.37              0.81        28    0.085    167K    15K       0.0       0.0
                                            Sum      1/0   11.89 MB   0.0      0.3     0.1      0.2       0.3      0.1       0.0   5.3     78.7     81.8      3.84              1.01        57    0.067    167K    15K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   6.7     76.3     78.6      1.10              0.29        14    0.078     53K   3624       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.3     0.1      0.2       0.2      0.0       0.0   0.0    127.7    107.5      2.37              0.81        28    0.085    167K    15K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     40.6      1.47              0.20        28    0.053       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     14.7      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 3600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.058, interval 0.013
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.31 GB write, 0.09 MB/s write, 0.30 GB read, 0.08 MB/s read, 3.8 seconds
                                           Interval compaction: 0.08 GB write, 0.14 MB/s write, 0.08 GB read, 0.14 MB/s read, 1.1 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5576af6451f0#2 capacity: 304.00 MB usage: 35.84 MB table_size: 0 occupancy: 18446744073709551615 collections: 7 last_copies: 0 last_secs: 0.000307 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2082,34.58 MB,11.3761%) FilterBlock(58,476.05 KB,0.152924%) IndexBlock(58,807.69 KB,0.25946%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 20 14:55:08 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2073: 321 pgs: 321 active+clean; 246 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 581 KiB/s wr, 216 op/s
Jan 20 14:55:08 compute-0 nova_compute[250018]: 2026-01-20 14:55:08.403 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:55:08 compute-0 ceph-mon[74360]: pgmap v2073: 321 pgs: 321 active+clean; 246 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 581 KiB/s wr, 216 op/s
Jan 20 14:55:08 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:55:08 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:55:08 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:55:08.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:55:08 compute-0 nova_compute[250018]: 2026-01-20 14:55:08.979 250022 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768920893.9778297, c3b4d4c6-c42f-4abc-9c01-89ec3e10c677 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:55:08 compute-0 nova_compute[250018]: 2026-01-20 14:55:08.979 250022 INFO nova.compute.manager [-] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] VM Stopped (Lifecycle Event)
Jan 20 14:55:09 compute-0 nova_compute[250018]: 2026-01-20 14:55:09.178 250022 DEBUG nova.compute.manager [None req-4b6755fa-8fda-4405-95fe-bc392cab5ca7 - - - - - -] [instance: c3b4d4c6-c42f-4abc-9c01-89ec3e10c677] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:55:09 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:55:09 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:55:09 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:55:09.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:55:09 compute-0 nova_compute[250018]: 2026-01-20 14:55:09.253 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:55:09 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:55:10 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2074: 321 pgs: 321 active+clean; 252 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 671 KiB/s wr, 151 op/s
Jan 20 14:55:10 compute-0 ceph-mon[74360]: pgmap v2074: 321 pgs: 321 active+clean; 252 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 671 KiB/s wr, 151 op/s
Jan 20 14:55:10 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:55:10 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:55:10 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:55:10.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:55:11 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:55:11 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:55:11 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:55:11.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:55:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 14:55:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:55:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 20 14:55:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:55:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00563745376186488 of space, bias 1.0, pg target 1.691236128559464 quantized to 32 (current 32)
Jan 20 14:55:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:55:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 5.452610273590173e-07 of space, bias 1.0, pg target 0.00016303304718034617 quantized to 32 (current 32)
Jan 20 14:55:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:55:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:55:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:55:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Jan 20 14:55:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:55:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 32)
Jan 20 14:55:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:55:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:55:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:55:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021737739624046157 quantized to 32 (current 32)
Jan 20 14:55:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:55:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Jan 20 14:55:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:55:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:55:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:55:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Jan 20 14:55:12 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2075: 321 pgs: 321 active+clean; 260 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 1.1 MiB/s wr, 151 op/s
Jan 20 14:55:12 compute-0 sudo[323933]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:55:12 compute-0 sudo[323933]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:55:12 compute-0 sudo[323933]: pam_unix(sudo:session): session closed for user root
Jan 20 14:55:12 compute-0 sudo[323958]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:55:12 compute-0 sudo[323958]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:55:12 compute-0 sudo[323958]: pam_unix(sudo:session): session closed for user root
Jan 20 14:55:12 compute-0 ceph-mon[74360]: pgmap v2075: 321 pgs: 321 active+clean; 260 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 1.1 MiB/s wr, 151 op/s
Jan 20 14:55:12 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:55:12 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:55:12 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:55:12.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:55:13 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:55:13 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:55:13 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:55:13.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:55:13 compute-0 nova_compute[250018]: 2026-01-20 14:55:13.407 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:55:14 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2076: 321 pgs: 321 active+clean; 274 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 142 op/s
Jan 20 14:55:14 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/2253935145' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:55:14 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/2253935145' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:55:14 compute-0 nova_compute[250018]: 2026-01-20 14:55:14.254 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:55:14 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:55:14 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:55:14 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:55:14 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:55:14.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:55:15 compute-0 ceph-mon[74360]: pgmap v2076: 321 pgs: 321 active+clean; 274 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 142 op/s
Jan 20 14:55:15 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:55:15 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:55:15 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:55:15.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:55:15 compute-0 nova_compute[250018]: 2026-01-20 14:55:15.395 250022 DEBUG oslo_concurrency.lockutils [None req-db83d2f3-77b3-43cf-aa8f-89178a3bb91a 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Acquiring lock "b346e1ba-9e83-4e7f-bc03-c327d3e4173b" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:55:15 compute-0 nova_compute[250018]: 2026-01-20 14:55:15.396 250022 DEBUG oslo_concurrency.lockutils [None req-db83d2f3-77b3-43cf-aa8f-89178a3bb91a 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Lock "b346e1ba-9e83-4e7f-bc03-c327d3e4173b" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:55:15 compute-0 nova_compute[250018]: 2026-01-20 14:55:15.438 250022 DEBUG nova.objects.instance [None req-db83d2f3-77b3-43cf-aa8f-89178a3bb91a 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Lazy-loading 'flavor' on Instance uuid b346e1ba-9e83-4e7f-bc03-c327d3e4173b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:55:15 compute-0 nova_compute[250018]: 2026-01-20 14:55:15.452 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:55:15 compute-0 nova_compute[250018]: 2026-01-20 14:55:15.514 250022 DEBUG oslo_concurrency.lockutils [None req-db83d2f3-77b3-43cf-aa8f-89178a3bb91a 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Lock "b346e1ba-9e83-4e7f-bc03-c327d3e4173b" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.119s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:55:15 compute-0 nova_compute[250018]: 2026-01-20 14:55:15.819 250022 DEBUG oslo_concurrency.lockutils [None req-db83d2f3-77b3-43cf-aa8f-89178a3bb91a 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Acquiring lock "b346e1ba-9e83-4e7f-bc03-c327d3e4173b" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:55:15 compute-0 nova_compute[250018]: 2026-01-20 14:55:15.820 250022 DEBUG oslo_concurrency.lockutils [None req-db83d2f3-77b3-43cf-aa8f-89178a3bb91a 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Lock "b346e1ba-9e83-4e7f-bc03-c327d3e4173b" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:55:15 compute-0 nova_compute[250018]: 2026-01-20 14:55:15.820 250022 INFO nova.compute.manager [None req-db83d2f3-77b3-43cf-aa8f-89178a3bb91a 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] Attaching volume f3a427b1-0e50-45b1-a975-3d7aabd0195a to /dev/vdb
Jan 20 14:55:16 compute-0 nova_compute[250018]: 2026-01-20 14:55:16.022 250022 DEBUG os_brick.utils [None req-db83d2f3-77b3-43cf-aa8f-89178a3bb91a 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Jan 20 14:55:16 compute-0 nova_compute[250018]: 2026-01-20 14:55:16.024 268150 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:55:16 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2077: 321 pgs: 321 active+clean; 279 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 340 KiB/s rd, 2.1 MiB/s wr, 81 op/s
Jan 20 14:55:16 compute-0 nova_compute[250018]: 2026-01-20 14:55:16.038 268150 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:55:16 compute-0 nova_compute[250018]: 2026-01-20 14:55:16.038 268150 DEBUG oslo.privsep.daemon [-] privsep: reply[37fe4350-b6c9-49be-a302-993ba6c84b72]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:55:16 compute-0 nova_compute[250018]: 2026-01-20 14:55:16.039 268150 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:55:16 compute-0 nova_compute[250018]: 2026-01-20 14:55:16.047 268150 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:55:16 compute-0 nova_compute[250018]: 2026-01-20 14:55:16.047 268150 DEBUG oslo.privsep.daemon [-] privsep: reply[f1505cda-0c86-4f50-9c5a-07ea6f89c66c]: (4, ('InitiatorName=iqn.1994-05.com.redhat:228389a1f17e', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:55:16 compute-0 nova_compute[250018]: 2026-01-20 14:55:16.049 268150 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:55:16 compute-0 nova_compute[250018]: 2026-01-20 14:55:16.058 268150 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:55:16 compute-0 nova_compute[250018]: 2026-01-20 14:55:16.058 268150 DEBUG oslo.privsep.daemon [-] privsep: reply[7fb13cb4-cef2-41ff-962b-97642db95134]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:55:16 compute-0 nova_compute[250018]: 2026-01-20 14:55:16.060 268150 DEBUG oslo.privsep.daemon [-] privsep: reply[89e29fef-6987-43c4-8d7c-4eea5c6f2055]: (4, '35085f33-1a27-41e3-805d-02c7ac6a1d7f') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:55:16 compute-0 nova_compute[250018]: 2026-01-20 14:55:16.060 250022 DEBUG oslo_concurrency.processutils [None req-db83d2f3-77b3-43cf-aa8f-89178a3bb91a 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:55:16 compute-0 nova_compute[250018]: 2026-01-20 14:55:16.091 250022 DEBUG oslo_concurrency.processutils [None req-db83d2f3-77b3-43cf-aa8f-89178a3bb91a 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] CMD "nvme version" returned: 0 in 0.031s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:55:16 compute-0 nova_compute[250018]: 2026-01-20 14:55:16.093 250022 DEBUG os_brick.initiator.connectors.lightos [None req-db83d2f3-77b3-43cf-aa8f-89178a3bb91a 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Jan 20 14:55:16 compute-0 nova_compute[250018]: 2026-01-20 14:55:16.093 250022 DEBUG os_brick.initiator.connectors.lightos [None req-db83d2f3-77b3-43cf-aa8f-89178a3bb91a 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Jan 20 14:55:16 compute-0 nova_compute[250018]: 2026-01-20 14:55:16.093 250022 DEBUG os_brick.initiator.connectors.lightos [None req-db83d2f3-77b3-43cf-aa8f-89178a3bb91a 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Jan 20 14:55:16 compute-0 nova_compute[250018]: 2026-01-20 14:55:16.094 250022 DEBUG os_brick.utils [None req-db83d2f3-77b3-43cf-aa8f-89178a3bb91a 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] <== get_connector_properties: return (70ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:228389a1f17e', 'do_local_attach': False, 'nvme_hostid': '5350774e-8b5e-4dba-80a9-92d405981c1d', 'system uuid': '35085f33-1a27-41e3-805d-02c7ac6a1d7f', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Jan 20 14:55:16 compute-0 nova_compute[250018]: 2026-01-20 14:55:16.094 250022 DEBUG nova.virt.block_device [None req-db83d2f3-77b3-43cf-aa8f-89178a3bb91a 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] Updating existing volume attachment record: 11b9f326-e25b-487a-9b09-8d4ee125cf3d _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Jan 20 14:55:16 compute-0 ceph-mon[74360]: pgmap v2077: 321 pgs: 321 active+clean; 279 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 340 KiB/s rd, 2.1 MiB/s wr, 81 op/s
Jan 20 14:55:16 compute-0 sudo[323992]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:55:16 compute-0 sudo[323992]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:55:16 compute-0 sudo[323992]: pam_unix(sudo:session): session closed for user root
Jan 20 14:55:16 compute-0 sudo[324017]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:55:16 compute-0 sudo[324017]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:55:16 compute-0 sudo[324017]: pam_unix(sudo:session): session closed for user root
Jan 20 14:55:16 compute-0 sudo[324042]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:55:16 compute-0 sudo[324042]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:55:16 compute-0 sudo[324042]: pam_unix(sudo:session): session closed for user root
Jan 20 14:55:16 compute-0 sudo[324067]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 14:55:16 compute-0 sudo[324067]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:55:16 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:55:16 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.002000052s ======
Jan 20 14:55:16 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:55:16.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000052s
Jan 20 14:55:17 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/784021205' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:55:17 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:55:17 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:55:17 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:55:17.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:55:17 compute-0 sudo[324067]: pam_unix(sudo:session): session closed for user root
Jan 20 14:55:17 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 14:55:17 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:55:17 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 20 14:55:17 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:55:17 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 20 14:55:17 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:55:17 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 3a22a620-9c85-428b-8bff-d8d9c0979723 does not exist
Jan 20 14:55:17 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev c106538d-b826-491e-828d-251b59951fbb does not exist
Jan 20 14:55:17 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev d510e806-c08b-48bf-bcbf-2d8cc755c3d0 does not exist
Jan 20 14:55:17 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 20 14:55:17 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:55:17 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 20 14:55:17 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:55:17 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 14:55:17 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:55:17 compute-0 sudo[324124]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:55:17 compute-0 sudo[324124]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:55:17 compute-0 sudo[324124]: pam_unix(sudo:session): session closed for user root
Jan 20 14:55:17 compute-0 sudo[324149]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:55:17 compute-0 sudo[324149]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:55:17 compute-0 sudo[324149]: pam_unix(sudo:session): session closed for user root
Jan 20 14:55:17 compute-0 nova_compute[250018]: 2026-01-20 14:55:17.644 250022 DEBUG nova.objects.instance [None req-db83d2f3-77b3-43cf-aa8f-89178a3bb91a 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Lazy-loading 'flavor' on Instance uuid b346e1ba-9e83-4e7f-bc03-c327d3e4173b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:55:17 compute-0 sudo[324174]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:55:17 compute-0 sudo[324174]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:55:17 compute-0 sudo[324174]: pam_unix(sudo:session): session closed for user root
Jan 20 14:55:17 compute-0 nova_compute[250018]: 2026-01-20 14:55:17.718 250022 DEBUG nova.virt.libvirt.driver [None req-db83d2f3-77b3-43cf-aa8f-89178a3bb91a 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] Attempting to attach volume f3a427b1-0e50-45b1-a975-3d7aabd0195a with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Jan 20 14:55:17 compute-0 nova_compute[250018]: 2026-01-20 14:55:17.721 250022 DEBUG nova.virt.libvirt.guest [None req-db83d2f3-77b3-43cf-aa8f-89178a3bb91a 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] attach device xml: <disk type="network" device="disk">
Jan 20 14:55:17 compute-0 nova_compute[250018]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 20 14:55:17 compute-0 nova_compute[250018]:   <source protocol="rbd" name="volumes/volume-f3a427b1-0e50-45b1-a975-3d7aabd0195a">
Jan 20 14:55:17 compute-0 nova_compute[250018]:     <host name="192.168.122.100" port="6789"/>
Jan 20 14:55:17 compute-0 nova_compute[250018]:     <host name="192.168.122.102" port="6789"/>
Jan 20 14:55:17 compute-0 nova_compute[250018]:     <host name="192.168.122.101" port="6789"/>
Jan 20 14:55:17 compute-0 nova_compute[250018]:   </source>
Jan 20 14:55:17 compute-0 nova_compute[250018]:   <auth username="openstack">
Jan 20 14:55:17 compute-0 nova_compute[250018]:     <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:55:17 compute-0 nova_compute[250018]:   </auth>
Jan 20 14:55:17 compute-0 nova_compute[250018]:   <target dev="vdb" bus="virtio"/>
Jan 20 14:55:17 compute-0 nova_compute[250018]:   <serial>f3a427b1-0e50-45b1-a975-3d7aabd0195a</serial>
Jan 20 14:55:17 compute-0 nova_compute[250018]: </disk>
Jan 20 14:55:17 compute-0 nova_compute[250018]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Jan 20 14:55:17 compute-0 sudo[324199]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 14:55:17 compute-0 sudo[324199]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:55:17 compute-0 nova_compute[250018]: 2026-01-20 14:55:17.891 250022 DEBUG nova.virt.libvirt.driver [None req-db83d2f3-77b3-43cf-aa8f-89178a3bb91a 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:55:17 compute-0 nova_compute[250018]: 2026-01-20 14:55:17.893 250022 DEBUG nova.virt.libvirt.driver [None req-db83d2f3-77b3-43cf-aa8f-89178a3bb91a 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:55:17 compute-0 nova_compute[250018]: 2026-01-20 14:55:17.893 250022 DEBUG nova.virt.libvirt.driver [None req-db83d2f3-77b3-43cf-aa8f-89178a3bb91a 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:55:17 compute-0 nova_compute[250018]: 2026-01-20 14:55:17.894 250022 DEBUG nova.virt.libvirt.driver [None req-db83d2f3-77b3-43cf-aa8f-89178a3bb91a 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] No VIF found with MAC fa:16:3e:a5:83:5d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 14:55:18 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2078: 321 pgs: 321 active+clean; 279 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 330 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Jan 20 14:55:18 compute-0 podman[324283]: 2026-01-20 14:55:18.072210333 +0000 UTC m=+0.043476271 container create c5f210ef6dcebc3a06ca4d2bbe11566c3b8f815a60f6e0db7a9f89385e8277a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_zhukovsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:55:18 compute-0 systemd[1]: Started libpod-conmon-c5f210ef6dcebc3a06ca4d2bbe11566c3b8f815a60f6e0db7a9f89385e8277a0.scope.
Jan 20 14:55:18 compute-0 podman[324283]: 2026-01-20 14:55:18.052253727 +0000 UTC m=+0.023519675 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:55:18 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:55:18 compute-0 podman[324283]: 2026-01-20 14:55:18.185883092 +0000 UTC m=+0.157149040 container init c5f210ef6dcebc3a06ca4d2bbe11566c3b8f815a60f6e0db7a9f89385e8277a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_zhukovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 20 14:55:18 compute-0 podman[324283]: 2026-01-20 14:55:18.196118418 +0000 UTC m=+0.167384366 container start c5f210ef6dcebc3a06ca4d2bbe11566c3b8f815a60f6e0db7a9f89385e8277a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_zhukovsky, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 20 14:55:18 compute-0 podman[324283]: 2026-01-20 14:55:18.199859248 +0000 UTC m=+0.171125196 container attach c5f210ef6dcebc3a06ca4d2bbe11566c3b8f815a60f6e0db7a9f89385e8277a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_zhukovsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 20 14:55:18 compute-0 dazzling_zhukovsky[324299]: 167 167
Jan 20 14:55:18 compute-0 systemd[1]: libpod-c5f210ef6dcebc3a06ca4d2bbe11566c3b8f815a60f6e0db7a9f89385e8277a0.scope: Deactivated successfully.
Jan 20 14:55:18 compute-0 podman[324283]: 2026-01-20 14:55:18.204756701 +0000 UTC m=+0.176022619 container died c5f210ef6dcebc3a06ca4d2bbe11566c3b8f815a60f6e0db7a9f89385e8277a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_zhukovsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:55:18 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:55:18 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:55:18 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:55:18 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:55:18 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:55:18 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:55:18 compute-0 ceph-mon[74360]: pgmap v2078: 321 pgs: 321 active+clean; 279 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 330 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Jan 20 14:55:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-90d4caf9b7263d83207e01d764eba3769c533baf9d4f0d28465cd82897a7958c-merged.mount: Deactivated successfully.
Jan 20 14:55:18 compute-0 podman[324283]: 2026-01-20 14:55:18.244591132 +0000 UTC m=+0.215857060 container remove c5f210ef6dcebc3a06ca4d2bbe11566c3b8f815a60f6e0db7a9f89385e8277a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_zhukovsky, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 20 14:55:18 compute-0 systemd[1]: libpod-conmon-c5f210ef6dcebc3a06ca4d2bbe11566c3b8f815a60f6e0db7a9f89385e8277a0.scope: Deactivated successfully.
Jan 20 14:55:18 compute-0 nova_compute[250018]: 2026-01-20 14:55:18.266 250022 DEBUG oslo_concurrency.lockutils [None req-db83d2f3-77b3-43cf-aa8f-89178a3bb91a 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Lock "b346e1ba-9e83-4e7f-bc03-c327d3e4173b" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 2.446s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:55:18 compute-0 podman[324324]: 2026-01-20 14:55:18.403229541 +0000 UTC m=+0.044540439 container create 3ff5c6052af90c47a2c8e51efde8841dcb3142473866c256357131a935156c61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_cray, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 20 14:55:18 compute-0 nova_compute[250018]: 2026-01-20 14:55:18.409 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:55:18 compute-0 systemd[1]: Started libpod-conmon-3ff5c6052af90c47a2c8e51efde8841dcb3142473866c256357131a935156c61.scope.
Jan 20 14:55:18 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:55:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/614908dcd74cea8eb775fda5ff6b58a572fa8341156599a8b83a19ffef613647/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:55:18 compute-0 podman[324324]: 2026-01-20 14:55:18.382502974 +0000 UTC m=+0.023813922 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:55:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/614908dcd74cea8eb775fda5ff6b58a572fa8341156599a8b83a19ffef613647/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:55:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/614908dcd74cea8eb775fda5ff6b58a572fa8341156599a8b83a19ffef613647/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:55:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/614908dcd74cea8eb775fda5ff6b58a572fa8341156599a8b83a19ffef613647/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:55:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/614908dcd74cea8eb775fda5ff6b58a572fa8341156599a8b83a19ffef613647/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 14:55:18 compute-0 podman[324324]: 2026-01-20 14:55:18.488027994 +0000 UTC m=+0.129338912 container init 3ff5c6052af90c47a2c8e51efde8841dcb3142473866c256357131a935156c61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_cray, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 20 14:55:18 compute-0 podman[324324]: 2026-01-20 14:55:18.494967791 +0000 UTC m=+0.136278689 container start 3ff5c6052af90c47a2c8e51efde8841dcb3142473866c256357131a935156c61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_cray, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 20 14:55:18 compute-0 podman[324324]: 2026-01-20 14:55:18.499430811 +0000 UTC m=+0.140741759 container attach 3ff5c6052af90c47a2c8e51efde8841dcb3142473866c256357131a935156c61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_cray, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 20 14:55:18 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:55:18 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:55:18 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:55:18.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:55:19 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:55:19 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:55:19 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:55:19.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:55:19 compute-0 nova_compute[250018]: 2026-01-20 14:55:19.256 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:55:19 compute-0 vibrant_cray[324340]: --> passed data devices: 0 physical, 1 LVM
Jan 20 14:55:19 compute-0 vibrant_cray[324340]: --> relative data size: 1.0
Jan 20 14:55:19 compute-0 vibrant_cray[324340]: --> All data devices are unavailable
Jan 20 14:55:19 compute-0 systemd[1]: libpod-3ff5c6052af90c47a2c8e51efde8841dcb3142473866c256357131a935156c61.scope: Deactivated successfully.
Jan 20 14:55:19 compute-0 podman[324324]: 2026-01-20 14:55:19.40084544 +0000 UTC m=+1.042156348 container died 3ff5c6052af90c47a2c8e51efde8841dcb3142473866c256357131a935156c61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_cray, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 20 14:55:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-614908dcd74cea8eb775fda5ff6b58a572fa8341156599a8b83a19ffef613647-merged.mount: Deactivated successfully.
Jan 20 14:55:19 compute-0 podman[324324]: 2026-01-20 14:55:19.522681169 +0000 UTC m=+1.163992077 container remove 3ff5c6052af90c47a2c8e51efde8841dcb3142473866c256357131a935156c61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_cray, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 20 14:55:19 compute-0 systemd[1]: libpod-conmon-3ff5c6052af90c47a2c8e51efde8841dcb3142473866c256357131a935156c61.scope: Deactivated successfully.
Jan 20 14:55:19 compute-0 sudo[324199]: pam_unix(sudo:session): session closed for user root
Jan 20 14:55:19 compute-0 sudo[324369]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:55:19 compute-0 sudo[324369]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:55:19 compute-0 sudo[324369]: pam_unix(sudo:session): session closed for user root
Jan 20 14:55:19 compute-0 sudo[324394]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:55:19 compute-0 sudo[324394]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:55:19 compute-0 sudo[324394]: pam_unix(sudo:session): session closed for user root
Jan 20 14:55:19 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:55:19 compute-0 sudo[324419]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:55:19 compute-0 sudo[324419]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:55:19 compute-0 sudo[324419]: pam_unix(sudo:session): session closed for user root
Jan 20 14:55:19 compute-0 sudo[324445]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- lvm list --format json
Jan 20 14:55:19 compute-0 sudo[324445]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:55:20 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2079: 321 pgs: 321 active+clean; 279 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 330 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Jan 20 14:55:20 compute-0 podman[324507]: 2026-01-20 14:55:20.127109636 +0000 UTC m=+0.039256419 container create bf8dcc1e280a5a873599b4f98dbe65efc7727aa26abc671ad85652b1870a6223 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_herschel, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:55:20 compute-0 systemd[1]: Started libpod-conmon-bf8dcc1e280a5a873599b4f98dbe65efc7727aa26abc671ad85652b1870a6223.scope.
Jan 20 14:55:20 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:55:20 compute-0 podman[324507]: 2026-01-20 14:55:20.109948733 +0000 UTC m=+0.022095506 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:55:20 compute-0 podman[324507]: 2026-01-20 14:55:20.216224403 +0000 UTC m=+0.128371186 container init bf8dcc1e280a5a873599b4f98dbe65efc7727aa26abc671ad85652b1870a6223 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_herschel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:55:20 compute-0 podman[324507]: 2026-01-20 14:55:20.224071735 +0000 UTC m=+0.136218508 container start bf8dcc1e280a5a873599b4f98dbe65efc7727aa26abc671ad85652b1870a6223 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_herschel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:55:20 compute-0 podman[324507]: 2026-01-20 14:55:20.226998044 +0000 UTC m=+0.139144817 container attach bf8dcc1e280a5a873599b4f98dbe65efc7727aa26abc671ad85652b1870a6223 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_herschel, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:55:20 compute-0 stoic_herschel[324523]: 167 167
Jan 20 14:55:20 compute-0 systemd[1]: libpod-bf8dcc1e280a5a873599b4f98dbe65efc7727aa26abc671ad85652b1870a6223.scope: Deactivated successfully.
Jan 20 14:55:20 compute-0 ceph-mon[74360]: pgmap v2079: 321 pgs: 321 active+clean; 279 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 330 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Jan 20 14:55:20 compute-0 nova_compute[250018]: 2026-01-20 14:55:20.257 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:55:20 compute-0 podman[324528]: 2026-01-20 14:55:20.27332125 +0000 UTC m=+0.025107867 container died bf8dcc1e280a5a873599b4f98dbe65efc7727aa26abc671ad85652b1870a6223 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_herschel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 20 14:55:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-98c61bb26c7159e2fc6381d5d1083ca6e4d8d742dd51108b39683ebcd7c0c444-merged.mount: Deactivated successfully.
Jan 20 14:55:20 compute-0 podman[324528]: 2026-01-20 14:55:20.313685507 +0000 UTC m=+0.065472114 container remove bf8dcc1e280a5a873599b4f98dbe65efc7727aa26abc671ad85652b1870a6223 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_herschel, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:55:20 compute-0 systemd[1]: libpod-conmon-bf8dcc1e280a5a873599b4f98dbe65efc7727aa26abc671ad85652b1870a6223.scope: Deactivated successfully.
Jan 20 14:55:20 compute-0 podman[324550]: 2026-01-20 14:55:20.527891611 +0000 UTC m=+0.052310449 container create 91b20800fb6a99f317e780c642e055827bc6118b3381cdca93005d438ed5a114 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_banzai, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:55:20 compute-0 systemd[1]: Started libpod-conmon-91b20800fb6a99f317e780c642e055827bc6118b3381cdca93005d438ed5a114.scope.
Jan 20 14:55:20 compute-0 podman[324550]: 2026-01-20 14:55:20.500120864 +0000 UTC m=+0.024539752 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:55:20 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:55:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8fe0e199d26bccf643afce20d9cf93efc6cbef24cde5d9317d7d943aaf3ea629/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:55:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8fe0e199d26bccf643afce20d9cf93efc6cbef24cde5d9317d7d943aaf3ea629/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:55:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8fe0e199d26bccf643afce20d9cf93efc6cbef24cde5d9317d7d943aaf3ea629/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:55:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8fe0e199d26bccf643afce20d9cf93efc6cbef24cde5d9317d7d943aaf3ea629/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:55:20 compute-0 podman[324550]: 2026-01-20 14:55:20.637659905 +0000 UTC m=+0.162078713 container init 91b20800fb6a99f317e780c642e055827bc6118b3381cdca93005d438ed5a114 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_banzai, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 20 14:55:20 compute-0 podman[324550]: 2026-01-20 14:55:20.646676598 +0000 UTC m=+0.171095396 container start 91b20800fb6a99f317e780c642e055827bc6118b3381cdca93005d438ed5a114 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_banzai, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:55:20 compute-0 podman[324550]: 2026-01-20 14:55:20.650668785 +0000 UTC m=+0.175087583 container attach 91b20800fb6a99f317e780c642e055827bc6118b3381cdca93005d438ed5a114 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_banzai, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 20 14:55:20 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:55:20 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:55:20 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:55:20.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:55:21 compute-0 nova_compute[250018]: 2026-01-20 14:55:21.115 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:55:21 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:55:21 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:55:21 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:55:21.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:55:21 compute-0 great_banzai[324566]: {
Jan 20 14:55:21 compute-0 great_banzai[324566]:     "0": [
Jan 20 14:55:21 compute-0 great_banzai[324566]:         {
Jan 20 14:55:21 compute-0 great_banzai[324566]:             "devices": [
Jan 20 14:55:21 compute-0 great_banzai[324566]:                 "/dev/loop3"
Jan 20 14:55:21 compute-0 great_banzai[324566]:             ],
Jan 20 14:55:21 compute-0 great_banzai[324566]:             "lv_name": "ceph_lv0",
Jan 20 14:55:21 compute-0 great_banzai[324566]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:55:21 compute-0 great_banzai[324566]:             "lv_size": "7511998464",
Jan 20 14:55:21 compute-0 great_banzai[324566]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e399cf45-e6b6-5393-99f1-75c601d3f188,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afc3740a-ccee-46f8-b343-22648cc89376,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 20 14:55:21 compute-0 great_banzai[324566]:             "lv_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 14:55:21 compute-0 great_banzai[324566]:             "name": "ceph_lv0",
Jan 20 14:55:21 compute-0 great_banzai[324566]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:55:21 compute-0 great_banzai[324566]:             "tags": {
Jan 20 14:55:21 compute-0 great_banzai[324566]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:55:21 compute-0 great_banzai[324566]:                 "ceph.block_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 14:55:21 compute-0 great_banzai[324566]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 14:55:21 compute-0 great_banzai[324566]:                 "ceph.cluster_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 14:55:21 compute-0 great_banzai[324566]:                 "ceph.cluster_name": "ceph",
Jan 20 14:55:21 compute-0 great_banzai[324566]:                 "ceph.crush_device_class": "",
Jan 20 14:55:21 compute-0 great_banzai[324566]:                 "ceph.encrypted": "0",
Jan 20 14:55:21 compute-0 great_banzai[324566]:                 "ceph.osd_fsid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 14:55:21 compute-0 great_banzai[324566]:                 "ceph.osd_id": "0",
Jan 20 14:55:21 compute-0 great_banzai[324566]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 14:55:21 compute-0 great_banzai[324566]:                 "ceph.type": "block",
Jan 20 14:55:21 compute-0 great_banzai[324566]:                 "ceph.vdo": "0"
Jan 20 14:55:21 compute-0 great_banzai[324566]:             },
Jan 20 14:55:21 compute-0 great_banzai[324566]:             "type": "block",
Jan 20 14:55:21 compute-0 great_banzai[324566]:             "vg_name": "ceph_vg0"
Jan 20 14:55:21 compute-0 great_banzai[324566]:         }
Jan 20 14:55:21 compute-0 great_banzai[324566]:     ]
Jan 20 14:55:21 compute-0 great_banzai[324566]: }
Jan 20 14:55:21 compute-0 systemd[1]: libpod-91b20800fb6a99f317e780c642e055827bc6118b3381cdca93005d438ed5a114.scope: Deactivated successfully.
Jan 20 14:55:21 compute-0 podman[324550]: 2026-01-20 14:55:21.448947749 +0000 UTC m=+0.973366557 container died 91b20800fb6a99f317e780c642e055827bc6118b3381cdca93005d438ed5a114 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_banzai, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:55:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-8fe0e199d26bccf643afce20d9cf93efc6cbef24cde5d9317d7d943aaf3ea629-merged.mount: Deactivated successfully.
Jan 20 14:55:21 compute-0 podman[324550]: 2026-01-20 14:55:21.760732309 +0000 UTC m=+1.285151147 container remove 91b20800fb6a99f317e780c642e055827bc6118b3381cdca93005d438ed5a114 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_banzai, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 20 14:55:21 compute-0 sudo[324445]: pam_unix(sudo:session): session closed for user root
Jan 20 14:55:21 compute-0 systemd[1]: libpod-conmon-91b20800fb6a99f317e780c642e055827bc6118b3381cdca93005d438ed5a114.scope: Deactivated successfully.
Jan 20 14:55:21 compute-0 sudo[324589]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:55:21 compute-0 sudo[324589]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:55:21 compute-0 sudo[324589]: pam_unix(sudo:session): session closed for user root
Jan 20 14:55:21 compute-0 sudo[324614]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:55:21 compute-0 sudo[324614]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:55:21 compute-0 sudo[324614]: pam_unix(sudo:session): session closed for user root
Jan 20 14:55:22 compute-0 sudo[324639]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:55:22 compute-0 sudo[324639]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:55:22 compute-0 sudo[324639]: pam_unix(sudo:session): session closed for user root
Jan 20 14:55:22 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2080: 321 pgs: 321 active+clean; 279 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 305 KiB/s rd, 1.6 MiB/s wr, 64 op/s
Jan 20 14:55:22 compute-0 sudo[324664]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- raw list --format json
Jan 20 14:55:22 compute-0 sudo[324664]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:55:22 compute-0 podman[324732]: 2026-01-20 14:55:22.377019925 +0000 UTC m=+0.046941384 container create 939cf53d6d56d49e2dc2b9ae5bd33066b708b967526773f404ebf2304dbd877b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_euclid, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:55:22 compute-0 systemd[1]: Started libpod-conmon-939cf53d6d56d49e2dc2b9ae5bd33066b708b967526773f404ebf2304dbd877b.scope.
Jan 20 14:55:22 compute-0 podman[324732]: 2026-01-20 14:55:22.351751265 +0000 UTC m=+0.021672804 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:55:22 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:55:22 compute-0 nova_compute[250018]: 2026-01-20 14:55:22.460 250022 DEBUG nova.compute.manager [None req-1b390d0a-dfcb-4ebc-a1ec-983f45a5e476 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] Stashing vm_state: active _prep_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:5560
Jan 20 14:55:22 compute-0 podman[324732]: 2026-01-20 14:55:22.464255733 +0000 UTC m=+0.134177222 container init 939cf53d6d56d49e2dc2b9ae5bd33066b708b967526773f404ebf2304dbd877b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_euclid, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 20 14:55:22 compute-0 podman[324732]: 2026-01-20 14:55:22.472415083 +0000 UTC m=+0.142336552 container start 939cf53d6d56d49e2dc2b9ae5bd33066b708b967526773f404ebf2304dbd877b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_euclid, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 14:55:22 compute-0 podman[324732]: 2026-01-20 14:55:22.476179684 +0000 UTC m=+0.146101163 container attach 939cf53d6d56d49e2dc2b9ae5bd33066b708b967526773f404ebf2304dbd877b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_euclid, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 20 14:55:22 compute-0 reverent_euclid[324749]: 167 167
Jan 20 14:55:22 compute-0 systemd[1]: libpod-939cf53d6d56d49e2dc2b9ae5bd33066b708b967526773f404ebf2304dbd877b.scope: Deactivated successfully.
Jan 20 14:55:22 compute-0 conmon[324749]: conmon 939cf53d6d56d49e2dc2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-939cf53d6d56d49e2dc2b9ae5bd33066b708b967526773f404ebf2304dbd877b.scope/container/memory.events
Jan 20 14:55:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:55:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:55:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:55:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:55:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:55:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:55:22 compute-0 podman[324754]: 2026-01-20 14:55:22.520953879 +0000 UTC m=+0.027963324 container died 939cf53d6d56d49e2dc2b9ae5bd33066b708b967526773f404ebf2304dbd877b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_euclid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:55:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-1cac07ac31bc890358ed43dfdc03723b569844ca9570e10688dfeaaaeb9061a7-merged.mount: Deactivated successfully.
Jan 20 14:55:22 compute-0 podman[324754]: 2026-01-20 14:55:22.562320972 +0000 UTC m=+0.069330447 container remove 939cf53d6d56d49e2dc2b9ae5bd33066b708b967526773f404ebf2304dbd877b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_euclid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:55:22 compute-0 systemd[1]: libpod-conmon-939cf53d6d56d49e2dc2b9ae5bd33066b708b967526773f404ebf2304dbd877b.scope: Deactivated successfully.
Jan 20 14:55:22 compute-0 nova_compute[250018]: 2026-01-20 14:55:22.618 250022 DEBUG oslo_concurrency.lockutils [None req-1b390d0a-dfcb-4ebc-a1ec-983f45a5e476 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:55:22 compute-0 nova_compute[250018]: 2026-01-20 14:55:22.621 250022 DEBUG oslo_concurrency.lockutils [None req-1b390d0a-dfcb-4ebc-a1ec-983f45a5e476 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:55:22 compute-0 nova_compute[250018]: 2026-01-20 14:55:22.660 250022 DEBUG nova.objects.instance [None req-1b390d0a-dfcb-4ebc-a1ec-983f45a5e476 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Lazy-loading 'pci_requests' on Instance uuid b346e1ba-9e83-4e7f-bc03-c327d3e4173b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:55:22 compute-0 nova_compute[250018]: 2026-01-20 14:55:22.677 250022 DEBUG nova.virt.hardware [None req-1b390d0a-dfcb-4ebc-a1ec-983f45a5e476 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 14:55:22 compute-0 nova_compute[250018]: 2026-01-20 14:55:22.677 250022 INFO nova.compute.claims [None req-1b390d0a-dfcb-4ebc-a1ec-983f45a5e476 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] Claim successful on node compute-0.ctlplane.example.com
Jan 20 14:55:22 compute-0 nova_compute[250018]: 2026-01-20 14:55:22.677 250022 DEBUG nova.objects.instance [None req-1b390d0a-dfcb-4ebc-a1ec-983f45a5e476 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Lazy-loading 'resources' on Instance uuid b346e1ba-9e83-4e7f-bc03-c327d3e4173b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:55:22 compute-0 nova_compute[250018]: 2026-01-20 14:55:22.689 250022 DEBUG nova.objects.instance [None req-1b390d0a-dfcb-4ebc-a1ec-983f45a5e476 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Lazy-loading 'pci_devices' on Instance uuid b346e1ba-9e83-4e7f-bc03-c327d3e4173b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:55:22 compute-0 nova_compute[250018]: 2026-01-20 14:55:22.738 250022 INFO nova.compute.resource_tracker [None req-1b390d0a-dfcb-4ebc-a1ec-983f45a5e476 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] Updating resource usage from migration 0b824da1-0249-41a3-b5ef-cb3a5de1b2b8
Jan 20 14:55:22 compute-0 nova_compute[250018]: 2026-01-20 14:55:22.790 250022 DEBUG oslo_concurrency.processutils [None req-1b390d0a-dfcb-4ebc-a1ec-983f45a5e476 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:55:22 compute-0 podman[324776]: 2026-01-20 14:55:22.747163497 +0000 UTC m=+0.023530755 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:55:22 compute-0 podman[324776]: 2026-01-20 14:55:22.893218687 +0000 UTC m=+0.169585925 container create d64e158112b751970daf75ad09001b8148473b92a6d19343e100a9e7dc5bfeb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_grothendieck, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:55:22 compute-0 systemd[1]: Started libpod-conmon-d64e158112b751970daf75ad09001b8148473b92a6d19343e100a9e7dc5bfeb2.scope.
Jan 20 14:55:22 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:55:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a23da4867b501dae6b16880be8936b78f899e011e3b1480f563882a6dda9c56/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:55:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a23da4867b501dae6b16880be8936b78f899e011e3b1480f563882a6dda9c56/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:55:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a23da4867b501dae6b16880be8936b78f899e011e3b1480f563882a6dda9c56/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:55:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a23da4867b501dae6b16880be8936b78f899e011e3b1480f563882a6dda9c56/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:55:22 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:55:22 compute-0 podman[324776]: 2026-01-20 14:55:22.984175395 +0000 UTC m=+0.260542653 container init d64e158112b751970daf75ad09001b8148473b92a6d19343e100a9e7dc5bfeb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_grothendieck, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 20 14:55:22 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:55:22 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:55:22.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:55:22 compute-0 podman[324776]: 2026-01-20 14:55:22.991628315 +0000 UTC m=+0.267995563 container start d64e158112b751970daf75ad09001b8148473b92a6d19343e100a9e7dc5bfeb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_grothendieck, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 20 14:55:22 compute-0 podman[324776]: 2026-01-20 14:55:22.99473758 +0000 UTC m=+0.271104818 container attach d64e158112b751970daf75ad09001b8148473b92a6d19343e100a9e7dc5bfeb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_grothendieck, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True)
Jan 20 14:55:23 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e302 do_prune osdmap full prune enabled
Jan 20 14:55:23 compute-0 ceph-mon[74360]: pgmap v2080: 321 pgs: 321 active+clean; 279 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 305 KiB/s rd, 1.6 MiB/s wr, 64 op/s
Jan 20 14:55:23 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e303 e303: 3 total, 3 up, 3 in
Jan 20 14:55:23 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e303: 3 total, 3 up, 3 in
Jan 20 14:55:23 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:55:23 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:55:23 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:55:23.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:55:23 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:55:23 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3896483640' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:55:23 compute-0 nova_compute[250018]: 2026-01-20 14:55:23.277 250022 DEBUG oslo_concurrency.processutils [None req-1b390d0a-dfcb-4ebc-a1ec-983f45a5e476 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:55:23 compute-0 nova_compute[250018]: 2026-01-20 14:55:23.287 250022 DEBUG nova.compute.provider_tree [None req-1b390d0a-dfcb-4ebc-a1ec-983f45a5e476 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:55:23 compute-0 nova_compute[250018]: 2026-01-20 14:55:23.314 250022 DEBUG nova.scheduler.client.report [None req-1b390d0a-dfcb-4ebc-a1ec-983f45a5e476 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:55:23 compute-0 nova_compute[250018]: 2026-01-20 14:55:23.356 250022 DEBUG oslo_concurrency.lockutils [None req-1b390d0a-dfcb-4ebc-a1ec-983f45a5e476 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: held 0.735s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:55:23 compute-0 nova_compute[250018]: 2026-01-20 14:55:23.356 250022 INFO nova.compute.manager [None req-1b390d0a-dfcb-4ebc-a1ec-983f45a5e476 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] Migrating
Jan 20 14:55:23 compute-0 nova_compute[250018]: 2026-01-20 14:55:23.411 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:55:23 compute-0 nova_compute[250018]: 2026-01-20 14:55:23.469 250022 DEBUG oslo_concurrency.lockutils [None req-1b390d0a-dfcb-4ebc-a1ec-983f45a5e476 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Acquiring lock "refresh_cache-b346e1ba-9e83-4e7f-bc03-c327d3e4173b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:55:23 compute-0 nova_compute[250018]: 2026-01-20 14:55:23.470 250022 DEBUG oslo_concurrency.lockutils [None req-1b390d0a-dfcb-4ebc-a1ec-983f45a5e476 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Acquired lock "refresh_cache-b346e1ba-9e83-4e7f-bc03-c327d3e4173b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:55:23 compute-0 nova_compute[250018]: 2026-01-20 14:55:23.470 250022 DEBUG nova.network.neutron [None req-1b390d0a-dfcb-4ebc-a1ec-983f45a5e476 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:55:23 compute-0 laughing_grothendieck[324813]: {
Jan 20 14:55:23 compute-0 laughing_grothendieck[324813]:     "afc3740a-ccee-46f8-b343-22648cc89376": {
Jan 20 14:55:23 compute-0 laughing_grothendieck[324813]:         "ceph_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 14:55:23 compute-0 laughing_grothendieck[324813]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 20 14:55:23 compute-0 laughing_grothendieck[324813]:         "osd_id": 0,
Jan 20 14:55:23 compute-0 laughing_grothendieck[324813]:         "osd_uuid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 14:55:23 compute-0 laughing_grothendieck[324813]:         "type": "bluestore"
Jan 20 14:55:23 compute-0 laughing_grothendieck[324813]:     }
Jan 20 14:55:23 compute-0 laughing_grothendieck[324813]: }
Jan 20 14:55:23 compute-0 systemd[1]: libpod-d64e158112b751970daf75ad09001b8148473b92a6d19343e100a9e7dc5bfeb2.scope: Deactivated successfully.
Jan 20 14:55:23 compute-0 podman[324776]: 2026-01-20 14:55:23.845095974 +0000 UTC m=+1.121463252 container died d64e158112b751970daf75ad09001b8148473b92a6d19343e100a9e7dc5bfeb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_grothendieck, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:55:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-1a23da4867b501dae6b16880be8936b78f899e011e3b1480f563882a6dda9c56-merged.mount: Deactivated successfully.
Jan 20 14:55:24 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2082: 321 pgs: 321 active+clean; 279 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 20 KiB/s rd, 132 KiB/s wr, 22 op/s
Jan 20 14:55:24 compute-0 podman[324776]: 2026-01-20 14:55:24.059510455 +0000 UTC m=+1.335877703 container remove d64e158112b751970daf75ad09001b8148473b92a6d19343e100a9e7dc5bfeb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_grothendieck, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 20 14:55:24 compute-0 systemd[1]: libpod-conmon-d64e158112b751970daf75ad09001b8148473b92a6d19343e100a9e7dc5bfeb2.scope: Deactivated successfully.
Jan 20 14:55:24 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e303 do_prune osdmap full prune enabled
Jan 20 14:55:24 compute-0 sudo[324664]: pam_unix(sudo:session): session closed for user root
Jan 20 14:55:24 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e304 e304: 3 total, 3 up, 3 in
Jan 20 14:55:24 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e304: 3 total, 3 up, 3 in
Jan 20 14:55:24 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 14:55:24 compute-0 ceph-mon[74360]: osdmap e303: 3 total, 3 up, 3 in
Jan 20 14:55:24 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3896483640' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:55:24 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:55:24 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 14:55:24 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:55:24 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev ce23c071-216e-4d49-9362-10375577de75 does not exist
Jan 20 14:55:24 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev f1f89fa2-043b-476e-85f1-da1c4c27c5a2 does not exist
Jan 20 14:55:24 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 1a386298-9482-405f-83ec-8ba7578acc88 does not exist
Jan 20 14:55:24 compute-0 nova_compute[250018]: 2026-01-20 14:55:24.259 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:55:24 compute-0 sudo[324851]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:55:24 compute-0 sudo[324851]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:55:24 compute-0 sudo[324851]: pam_unix(sudo:session): session closed for user root
Jan 20 14:55:24 compute-0 sudo[324876]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 14:55:24 compute-0 sudo[324876]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:55:24 compute-0 sudo[324876]: pam_unix(sudo:session): session closed for user root
Jan 20 14:55:24 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e304 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:55:24 compute-0 nova_compute[250018]: 2026-01-20 14:55:24.756 250022 DEBUG nova.network.neutron [None req-1b390d0a-dfcb-4ebc-a1ec-983f45a5e476 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] Updating instance_info_cache with network_info: [{"id": "5f645763-9f97-4686-80ab-6df7299b1235", "address": "fa:16:3e:a5:83:5d", "network": {"id": "41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1445030024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.184", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3b1b7f5b4f84b5abbc401eb577c85c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5f645763-9f", "ovs_interfaceid": "5f645763-9f97-4686-80ab-6df7299b1235", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:55:24 compute-0 nova_compute[250018]: 2026-01-20 14:55:24.792 250022 DEBUG oslo_concurrency.lockutils [None req-1b390d0a-dfcb-4ebc-a1ec-983f45a5e476 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Releasing lock "refresh_cache-b346e1ba-9e83-4e7f-bc03-c327d3e4173b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:55:24 compute-0 nova_compute[250018]: 2026-01-20 14:55:24.965 250022 DEBUG nova.virt.libvirt.driver [None req-1b390d0a-dfcb-4ebc-a1ec-983f45a5e476 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] Starting migrate_disk_and_power_off migrate_disk_and_power_off /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11511
Jan 20 14:55:24 compute-0 nova_compute[250018]: 2026-01-20 14:55:24.971 250022 DEBUG nova.virt.libvirt.driver [None req-1b390d0a-dfcb-4ebc-a1ec-983f45a5e476 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Jan 20 14:55:24 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:55:24 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:55:24 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:55:24.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:55:25 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e304 do_prune osdmap full prune enabled
Jan 20 14:55:25 compute-0 ceph-mon[74360]: pgmap v2082: 321 pgs: 321 active+clean; 279 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 20 KiB/s rd, 132 KiB/s wr, 22 op/s
Jan 20 14:55:25 compute-0 ceph-mon[74360]: osdmap e304: 3 total, 3 up, 3 in
Jan 20 14:55:25 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:55:25 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:55:25 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:55:25 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:55:25 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:55:25.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:55:25 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e305 e305: 3 total, 3 up, 3 in
Jan 20 14:55:25 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e305: 3 total, 3 up, 3 in
Jan 20 14:55:25 compute-0 nova_compute[250018]: 2026-01-20 14:55:25.494 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:55:26 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2085: 321 pgs: 321 active+clean; 325 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 4.7 MiB/s wr, 102 op/s
Jan 20 14:55:26 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/2689986105' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:55:26 compute-0 ceph-mon[74360]: osdmap e305: 3 total, 3 up, 3 in
Jan 20 14:55:26 compute-0 ceph-mon[74360]: pgmap v2085: 321 pgs: 321 active+clean; 325 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 4.7 MiB/s wr, 102 op/s
Jan 20 14:55:26 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:55:26 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:55:26 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:55:26.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:55:27 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:55:27 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:55:27 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:55:27.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:55:27 compute-0 kernel: tap5f645763-9f (unregistering): left promiscuous mode
Jan 20 14:55:27 compute-0 NetworkManager[48960]: <info>  [1768920927.9766] device (tap5f645763-9f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 14:55:27 compute-0 nova_compute[250018]: 2026-01-20 14:55:27.985 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:55:27 compute-0 ovn_controller[148666]: 2026-01-20T14:55:27Z|00445|binding|INFO|Releasing lport 5f645763-9f97-4686-80ab-6df7299b1235 from this chassis (sb_readonly=0)
Jan 20 14:55:27 compute-0 ovn_controller[148666]: 2026-01-20T14:55:27Z|00446|binding|INFO|Setting lport 5f645763-9f97-4686-80ab-6df7299b1235 down in Southbound
Jan 20 14:55:27 compute-0 ovn_controller[148666]: 2026-01-20T14:55:27Z|00447|binding|INFO|Removing iface tap5f645763-9f ovn-installed in OVS
Jan 20 14:55:28 compute-0 nova_compute[250018]: 2026-01-20 14:55:28.011 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:55:28 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:55:28.013 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a5:83:5d 10.100.0.14'], port_security=['fa:16:3e:a5:83:5d 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'b346e1ba-9e83-4e7f-bc03-c327d3e4173b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b3b1b7f5b4f84b5abbc401eb577c85c0', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8b11f3fb-2601-4eca-a1b6-838549d7750c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.184'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3273589e-5585-406c-9611-87f758b0e521, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=5f645763-9f97-4686-80ab-6df7299b1235) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:55:28 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:55:28.014 160071 INFO neutron.agent.ovn.metadata.agent [-] Port 5f645763-9f97-4686-80ab-6df7299b1235 in datapath 41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce unbound from our chassis
Jan 20 14:55:28 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:55:28.015 160071 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 14:55:28 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:55:28.016 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[9c175640-ad28-48a3-83d3-0ea0d1f24ad7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:55:28 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:55:28.016 160071 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce namespace which is not needed anymore
Jan 20 14:55:28 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2086: 321 pgs: 321 active+clean; 375 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 7.8 MiB/s rd, 8.6 MiB/s wr, 160 op/s
Jan 20 14:55:28 compute-0 systemd[1]: machine-qemu\x2d58\x2dinstance\x2d0000007b.scope: Deactivated successfully.
Jan 20 14:55:28 compute-0 systemd[1]: machine-qemu\x2d58\x2dinstance\x2d0000007b.scope: Consumed 15.701s CPU time.
Jan 20 14:55:28 compute-0 systemd-machined[216401]: Machine qemu-58-instance-0000007b terminated.
Jan 20 14:55:28 compute-0 neutron-haproxy-ovnmeta-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce[323523]: [NOTICE]   (323527) : haproxy version is 2.8.14-c23fe91
Jan 20 14:55:28 compute-0 neutron-haproxy-ovnmeta-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce[323523]: [NOTICE]   (323527) : path to executable is /usr/sbin/haproxy
Jan 20 14:55:28 compute-0 neutron-haproxy-ovnmeta-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce[323523]: [WARNING]  (323527) : Exiting Master process...
Jan 20 14:55:28 compute-0 neutron-haproxy-ovnmeta-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce[323523]: [ALERT]    (323527) : Current worker (323529) exited with code 143 (Terminated)
Jan 20 14:55:28 compute-0 neutron-haproxy-ovnmeta-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce[323523]: [WARNING]  (323527) : All workers exited. Exiting... (0)
Jan 20 14:55:28 compute-0 systemd[1]: libpod-f6fa1315bec9494f1a217a37790a02b8426cf8009b6514391178a0849d08c669.scope: Deactivated successfully.
Jan 20 14:55:28 compute-0 podman[324927]: 2026-01-20 14:55:28.167593163 +0000 UTC m=+0.050518281 container died f6fa1315bec9494f1a217a37790a02b8426cf8009b6514391178a0849d08c669 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 20 14:55:28 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f6fa1315bec9494f1a217a37790a02b8426cf8009b6514391178a0849d08c669-userdata-shm.mount: Deactivated successfully.
Jan 20 14:55:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-e9bc6ddf2216308038881440fff4df9c7fdff3b1df6b1c326222bcae2c51821a-merged.mount: Deactivated successfully.
Jan 20 14:55:28 compute-0 nova_compute[250018]: 2026-01-20 14:55:28.206 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:55:28 compute-0 nova_compute[250018]: 2026-01-20 14:55:28.212 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:55:28 compute-0 podman[324927]: 2026-01-20 14:55:28.214603988 +0000 UTC m=+0.097529106 container cleanup f6fa1315bec9494f1a217a37790a02b8426cf8009b6514391178a0849d08c669 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 20 14:55:28 compute-0 nova_compute[250018]: 2026-01-20 14:55:28.217 250022 INFO nova.virt.libvirt.driver [None req-1b390d0a-dfcb-4ebc-a1ec-983f45a5e476 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] Instance shutdown successfully after 3 seconds.
Jan 20 14:55:28 compute-0 systemd[1]: libpod-conmon-f6fa1315bec9494f1a217a37790a02b8426cf8009b6514391178a0849d08c669.scope: Deactivated successfully.
Jan 20 14:55:28 compute-0 nova_compute[250018]: 2026-01-20 14:55:28.221 250022 INFO nova.virt.libvirt.driver [-] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] Instance destroyed successfully.
Jan 20 14:55:28 compute-0 nova_compute[250018]: 2026-01-20 14:55:28.222 250022 DEBUG nova.virt.libvirt.vif [None req-1b390d0a-dfcb-4ebc-a1ec-983f45a5e476 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:54:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-141692785',display_name='tempest-ServerActionsTestOtherB-server-141692785',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-141692785',id=123,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBO99uJ9+FwgjxRb/9u+f3Mj9/VKSDM+OKd66Ygsg8lEO+7bGpDEQrC5BIaSV+Na5YF+3DqUwLNmAYSN9IkTSGbRPw5y8813A+KsiNHebrpnZ7oReyT+5/zNQYafCHVAfGA==',key_name='tempest-keypair-302882914',keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:54:37Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='b3b1b7f5b4f84b5abbc401eb577c85c0',ramdisk_id='',reservation_id='r-c8v02o4r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerActionsTestOtherB-1136521362',owner_user_name='tempest-ServerActionsTestOtherB-1136521362-project-member'},tags=<?>,task_state='resize_migrating',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T14:55:22Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='215db37373dc4ae5a75cbd6866f471da',uuid=b346e1ba-9e83-4e7f-bc03-c327d3e4173b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5f645763-9f97-4686-80ab-6df7299b1235", "address": "fa:16:3e:a5:83:5d", "network": {"id": "41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce", "bridge": "br-int", "label": 
"tempest-ServerActionsTestOtherB-1445030024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.184", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherB-1445030024-network", "vif_mac": "fa:16:3e:a5:83:5d"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3b1b7f5b4f84b5abbc401eb577c85c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5f645763-9f", "ovs_interfaceid": "5f645763-9f97-4686-80ab-6df7299b1235", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 14:55:28 compute-0 nova_compute[250018]: 2026-01-20 14:55:28.223 250022 DEBUG nova.network.os_vif_util [None req-1b390d0a-dfcb-4ebc-a1ec-983f45a5e476 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Converting VIF {"id": "5f645763-9f97-4686-80ab-6df7299b1235", "address": "fa:16:3e:a5:83:5d", "network": {"id": "41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1445030024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.184", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherB-1445030024-network", "vif_mac": "fa:16:3e:a5:83:5d"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3b1b7f5b4f84b5abbc401eb577c85c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5f645763-9f", "ovs_interfaceid": "5f645763-9f97-4686-80ab-6df7299b1235", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:55:28 compute-0 nova_compute[250018]: 2026-01-20 14:55:28.223 250022 DEBUG nova.network.os_vif_util [None req-1b390d0a-dfcb-4ebc-a1ec-983f45a5e476 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a5:83:5d,bridge_name='br-int',has_traffic_filtering=True,id=5f645763-9f97-4686-80ab-6df7299b1235,network=Network(41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5f645763-9f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:55:28 compute-0 nova_compute[250018]: 2026-01-20 14:55:28.224 250022 DEBUG os_vif [None req-1b390d0a-dfcb-4ebc-a1ec-983f45a5e476 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:a5:83:5d,bridge_name='br-int',has_traffic_filtering=True,id=5f645763-9f97-4686-80ab-6df7299b1235,network=Network(41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5f645763-9f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 14:55:28 compute-0 nova_compute[250018]: 2026-01-20 14:55:28.228 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:55:28 compute-0 nova_compute[250018]: 2026-01-20 14:55:28.229 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5f645763-9f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:55:28 compute-0 nova_compute[250018]: 2026-01-20 14:55:28.230 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:55:28 compute-0 nova_compute[250018]: 2026-01-20 14:55:28.232 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:55:28 compute-0 nova_compute[250018]: 2026-01-20 14:55:28.235 250022 INFO os_vif [None req-1b390d0a-dfcb-4ebc-a1ec-983f45a5e476 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:a5:83:5d,bridge_name='br-int',has_traffic_filtering=True,id=5f645763-9f97-4686-80ab-6df7299b1235,network=Network(41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5f645763-9f')
Jan 20 14:55:28 compute-0 nova_compute[250018]: 2026-01-20 14:55:28.240 250022 DEBUG nova.virt.libvirt.driver [None req-1b390d0a-dfcb-4ebc-a1ec-983f45a5e476 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] skipping disk for instance-0000007b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:55:28 compute-0 nova_compute[250018]: 2026-01-20 14:55:28.240 250022 DEBUG nova.virt.libvirt.driver [None req-1b390d0a-dfcb-4ebc-a1ec-983f45a5e476 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] skipping disk for instance-0000007b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:55:28 compute-0 nova_compute[250018]: 2026-01-20 14:55:28.240 250022 DEBUG nova.virt.libvirt.driver [None req-1b390d0a-dfcb-4ebc-a1ec-983f45a5e476 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] skipping disk for instance-0000007b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:55:28 compute-0 podman[324966]: 2026-01-20 14:55:28.288717843 +0000 UTC m=+0.048901047 container remove f6fa1315bec9494f1a217a37790a02b8426cf8009b6514391178a0849d08c669 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 20 14:55:28 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:55:28.294 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[fe07653e-5f1b-4562-90c1-e9c560c07e63]: (4, ('Tue Jan 20 02:55:28 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce (f6fa1315bec9494f1a217a37790a02b8426cf8009b6514391178a0849d08c669)\nf6fa1315bec9494f1a217a37790a02b8426cf8009b6514391178a0849d08c669\nTue Jan 20 02:55:28 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce (f6fa1315bec9494f1a217a37790a02b8426cf8009b6514391178a0849d08c669)\nf6fa1315bec9494f1a217a37790a02b8426cf8009b6514391178a0849d08c669\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:55:28 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:55:28.295 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[0d7c515c-6d27-48bb-ad5b-ae066f7dde65]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:55:28 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:55:28.296 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap41a1a3fe-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:55:28 compute-0 kernel: tap41a1a3fe-f0: left promiscuous mode
Jan 20 14:55:28 compute-0 nova_compute[250018]: 2026-01-20 14:55:28.297 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:55:28 compute-0 nova_compute[250018]: 2026-01-20 14:55:28.310 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:55:28 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:55:28.313 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[0523bc74-90b0-4254-99f8-95525cb7fc49]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:55:28 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:55:28.327 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[798248de-7868-48e0-88a4-36ed326a83ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:55:28 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:55:28.328 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[3d4ff30a-1ee4-4450-9fb5-a0115d260be2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:55:28 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:55:28.341 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[9f52d9fd-cbbb-4718-a044-adbc230ef0c6]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 681746, 'reachable_time': 30503, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 324979, 'error': None, 'target': 'ovnmeta-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:55:28 compute-0 systemd[1]: run-netns-ovnmeta\x2d41a1a3fe\x2df6f8\x2d4375\x2d9b0f\x2da4d4bb269cce.mount: Deactivated successfully.
Jan 20 14:55:28 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:55:28.345 160517 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 14:55:28 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:55:28.345 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[5f264cdb-05ad-4e1a-80e6-89cd40c4cc8b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:55:28 compute-0 nova_compute[250018]: 2026-01-20 14:55:28.412 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:55:28 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:55:28 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:55:28 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:55:28.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:55:29 compute-0 ceph-mon[74360]: pgmap v2086: 321 pgs: 321 active+clean; 375 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 7.8 MiB/s rd, 8.6 MiB/s wr, 160 op/s
Jan 20 14:55:29 compute-0 nova_compute[250018]: 2026-01-20 14:55:29.224 250022 DEBUG nova.network.neutron [None req-1b390d0a-dfcb-4ebc-a1ec-983f45a5e476 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] Port 5f645763-9f97-4686-80ab-6df7299b1235 binding to destination host compute-0.ctlplane.example.com is already ACTIVE migrate_instance_start /usr/lib/python3.9/site-packages/nova/network/neutron.py:3171
Jan 20 14:55:29 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:55:29 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:55:29 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:55:29.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:55:29 compute-0 nova_compute[250018]: 2026-01-20 14:55:29.372 250022 DEBUG oslo_concurrency.lockutils [None req-1b390d0a-dfcb-4ebc-a1ec-983f45a5e476 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Acquiring lock "b346e1ba-9e83-4e7f-bc03-c327d3e4173b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:55:29 compute-0 nova_compute[250018]: 2026-01-20 14:55:29.372 250022 DEBUG oslo_concurrency.lockutils [None req-1b390d0a-dfcb-4ebc-a1ec-983f45a5e476 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Lock "b346e1ba-9e83-4e7f-bc03-c327d3e4173b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:55:29 compute-0 nova_compute[250018]: 2026-01-20 14:55:29.373 250022 DEBUG oslo_concurrency.lockutils [None req-1b390d0a-dfcb-4ebc-a1ec-983f45a5e476 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Lock "b346e1ba-9e83-4e7f-bc03-c327d3e4173b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:55:29 compute-0 nova_compute[250018]: 2026-01-20 14:55:29.654 250022 DEBUG oslo_concurrency.lockutils [None req-1b390d0a-dfcb-4ebc-a1ec-983f45a5e476 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Acquiring lock "refresh_cache-b346e1ba-9e83-4e7f-bc03-c327d3e4173b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:55:29 compute-0 nova_compute[250018]: 2026-01-20 14:55:29.654 250022 DEBUG oslo_concurrency.lockutils [None req-1b390d0a-dfcb-4ebc-a1ec-983f45a5e476 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Acquired lock "refresh_cache-b346e1ba-9e83-4e7f-bc03-c327d3e4173b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:55:29 compute-0 nova_compute[250018]: 2026-01-20 14:55:29.655 250022 DEBUG nova.network.neutron [None req-1b390d0a-dfcb-4ebc-a1ec-983f45a5e476 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:55:29 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:55:30 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2087: 321 pgs: 321 active+clean; 388 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 6.8 MiB/s rd, 8.2 MiB/s wr, 160 op/s
Jan 20 14:55:30 compute-0 nova_compute[250018]: 2026-01-20 14:55:30.191 250022 DEBUG oslo_concurrency.lockutils [None req-be5730d0-d4bb-4f57-9a2e-bb64e58fae1a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Acquiring lock "11c82470-ab02-4424-908b-705f1f65e062" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:55:30 compute-0 nova_compute[250018]: 2026-01-20 14:55:30.192 250022 DEBUG oslo_concurrency.lockutils [None req-be5730d0-d4bb-4f57-9a2e-bb64e58fae1a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Lock "11c82470-ab02-4424-908b-705f1f65e062" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:55:30 compute-0 nova_compute[250018]: 2026-01-20 14:55:30.207 250022 DEBUG nova.compute.manager [None req-be5730d0-d4bb-4f57-9a2e-bb64e58fae1a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 11c82470-ab02-4424-908b-705f1f65e062] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 14:55:30 compute-0 nova_compute[250018]: 2026-01-20 14:55:30.373 250022 DEBUG nova.compute.manager [req-efa91177-578b-494a-a73e-69fbf5d4c9a2 req-0427088c-e46a-48a0-b9f1-d8ca4c4c3cb6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] Received event network-vif-unplugged-5f645763-9f97-4686-80ab-6df7299b1235 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:55:30 compute-0 nova_compute[250018]: 2026-01-20 14:55:30.373 250022 DEBUG oslo_concurrency.lockutils [req-efa91177-578b-494a-a73e-69fbf5d4c9a2 req-0427088c-e46a-48a0-b9f1-d8ca4c4c3cb6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "b346e1ba-9e83-4e7f-bc03-c327d3e4173b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:55:30 compute-0 nova_compute[250018]: 2026-01-20 14:55:30.373 250022 DEBUG oslo_concurrency.lockutils [req-efa91177-578b-494a-a73e-69fbf5d4c9a2 req-0427088c-e46a-48a0-b9f1-d8ca4c4c3cb6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "b346e1ba-9e83-4e7f-bc03-c327d3e4173b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:55:30 compute-0 nova_compute[250018]: 2026-01-20 14:55:30.374 250022 DEBUG oslo_concurrency.lockutils [req-efa91177-578b-494a-a73e-69fbf5d4c9a2 req-0427088c-e46a-48a0-b9f1-d8ca4c4c3cb6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "b346e1ba-9e83-4e7f-bc03-c327d3e4173b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:55:30 compute-0 nova_compute[250018]: 2026-01-20 14:55:30.374 250022 DEBUG nova.compute.manager [req-efa91177-578b-494a-a73e-69fbf5d4c9a2 req-0427088c-e46a-48a0-b9f1-d8ca4c4c3cb6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] No waiting events found dispatching network-vif-unplugged-5f645763-9f97-4686-80ab-6df7299b1235 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:55:30 compute-0 nova_compute[250018]: 2026-01-20 14:55:30.374 250022 WARNING nova.compute.manager [req-efa91177-578b-494a-a73e-69fbf5d4c9a2 req-0427088c-e46a-48a0-b9f1-d8ca4c4c3cb6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] Received unexpected event network-vif-unplugged-5f645763-9f97-4686-80ab-6df7299b1235 for instance with vm_state active and task_state resize_migrated.
Jan 20 14:55:30 compute-0 nova_compute[250018]: 2026-01-20 14:55:30.445 250022 DEBUG oslo_concurrency.lockutils [None req-be5730d0-d4bb-4f57-9a2e-bb64e58fae1a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:55:30 compute-0 nova_compute[250018]: 2026-01-20 14:55:30.446 250022 DEBUG oslo_concurrency.lockutils [None req-be5730d0-d4bb-4f57-9a2e-bb64e58fae1a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:55:30 compute-0 nova_compute[250018]: 2026-01-20 14:55:30.455 250022 DEBUG nova.virt.hardware [None req-be5730d0-d4bb-4f57-9a2e-bb64e58fae1a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 14:55:30 compute-0 nova_compute[250018]: 2026-01-20 14:55:30.456 250022 INFO nova.compute.claims [None req-be5730d0-d4bb-4f57-9a2e-bb64e58fae1a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 11c82470-ab02-4424-908b-705f1f65e062] Claim successful on node compute-0.ctlplane.example.com
Jan 20 14:55:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:55:30.766 160071 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:55:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:55:30.766 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:55:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:55:30.766 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:55:30 compute-0 nova_compute[250018]: 2026-01-20 14:55:30.861 250022 DEBUG oslo_concurrency.processutils [None req-be5730d0-d4bb-4f57-9a2e-bb64e58fae1a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:55:30 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:55:30 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:55:30 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:55:30.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:55:31 compute-0 ceph-mon[74360]: pgmap v2087: 321 pgs: 321 active+clean; 388 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 6.8 MiB/s rd, 8.2 MiB/s wr, 160 op/s
Jan 20 14:55:31 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/683276483' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:55:31 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:55:31 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:55:31 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:55:31.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:55:31 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:55:31 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2268157372' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:55:31 compute-0 nova_compute[250018]: 2026-01-20 14:55:31.358 250022 DEBUG oslo_concurrency.processutils [None req-be5730d0-d4bb-4f57-9a2e-bb64e58fae1a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:55:31 compute-0 nova_compute[250018]: 2026-01-20 14:55:31.366 250022 DEBUG nova.compute.provider_tree [None req-be5730d0-d4bb-4f57-9a2e-bb64e58fae1a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:55:31 compute-0 nova_compute[250018]: 2026-01-20 14:55:31.394 250022 DEBUG nova.scheduler.client.report [None req-be5730d0-d4bb-4f57-9a2e-bb64e58fae1a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:55:31 compute-0 nova_compute[250018]: 2026-01-20 14:55:31.435 250022 DEBUG oslo_concurrency.lockutils [None req-be5730d0-d4bb-4f57-9a2e-bb64e58fae1a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.989s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:55:31 compute-0 nova_compute[250018]: 2026-01-20 14:55:31.436 250022 DEBUG nova.compute.manager [None req-be5730d0-d4bb-4f57-9a2e-bb64e58fae1a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 11c82470-ab02-4424-908b-705f1f65e062] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 14:55:31 compute-0 nova_compute[250018]: 2026-01-20 14:55:31.489 250022 DEBUG nova.compute.manager [None req-be5730d0-d4bb-4f57-9a2e-bb64e58fae1a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 11c82470-ab02-4424-908b-705f1f65e062] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 20 14:55:31 compute-0 nova_compute[250018]: 2026-01-20 14:55:31.490 250022 DEBUG nova.network.neutron [None req-be5730d0-d4bb-4f57-9a2e-bb64e58fae1a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 11c82470-ab02-4424-908b-705f1f65e062] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 14:55:31 compute-0 nova_compute[250018]: 2026-01-20 14:55:31.510 250022 INFO nova.virt.libvirt.driver [None req-be5730d0-d4bb-4f57-9a2e-bb64e58fae1a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 11c82470-ab02-4424-908b-705f1f65e062] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 14:55:31 compute-0 nova_compute[250018]: 2026-01-20 14:55:31.540 250022 DEBUG nova.compute.manager [None req-be5730d0-d4bb-4f57-9a2e-bb64e58fae1a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 11c82470-ab02-4424-908b-705f1f65e062] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 14:55:31 compute-0 nova_compute[250018]: 2026-01-20 14:55:31.653 250022 DEBUG nova.compute.manager [None req-be5730d0-d4bb-4f57-9a2e-bb64e58fae1a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 11c82470-ab02-4424-908b-705f1f65e062] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 14:55:31 compute-0 nova_compute[250018]: 2026-01-20 14:55:31.655 250022 DEBUG nova.virt.libvirt.driver [None req-be5730d0-d4bb-4f57-9a2e-bb64e58fae1a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 11c82470-ab02-4424-908b-705f1f65e062] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 14:55:31 compute-0 nova_compute[250018]: 2026-01-20 14:55:31.656 250022 INFO nova.virt.libvirt.driver [None req-be5730d0-d4bb-4f57-9a2e-bb64e58fae1a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 11c82470-ab02-4424-908b-705f1f65e062] Creating image(s)
Jan 20 14:55:31 compute-0 nova_compute[250018]: 2026-01-20 14:55:31.698 250022 DEBUG nova.storage.rbd_utils [None req-be5730d0-d4bb-4f57-9a2e-bb64e58fae1a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] rbd image 11c82470-ab02-4424-908b-705f1f65e062_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:55:31 compute-0 nova_compute[250018]: 2026-01-20 14:55:31.758 250022 DEBUG nova.storage.rbd_utils [None req-be5730d0-d4bb-4f57-9a2e-bb64e58fae1a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] rbd image 11c82470-ab02-4424-908b-705f1f65e062_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:55:31 compute-0 nova_compute[250018]: 2026-01-20 14:55:31.791 250022 DEBUG nova.storage.rbd_utils [None req-be5730d0-d4bb-4f57-9a2e-bb64e58fae1a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] rbd image 11c82470-ab02-4424-908b-705f1f65e062_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:55:31 compute-0 nova_compute[250018]: 2026-01-20 14:55:31.796 250022 DEBUG oslo_concurrency.processutils [None req-be5730d0-d4bb-4f57-9a2e-bb64e58fae1a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:55:31 compute-0 nova_compute[250018]: 2026-01-20 14:55:31.833 250022 DEBUG nova.policy [None req-be5730d0-d4bb-4f57-9a2e-bb64e58fae1a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '395a5c503218411284bc94c45263d1fb', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ca6cd0afe0ab41e3ab36d21a4129f734', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 20 14:55:31 compute-0 nova_compute[250018]: 2026-01-20 14:55:31.850 250022 DEBUG nova.network.neutron [None req-1b390d0a-dfcb-4ebc-a1ec-983f45a5e476 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] Updating instance_info_cache with network_info: [{"id": "5f645763-9f97-4686-80ab-6df7299b1235", "address": "fa:16:3e:a5:83:5d", "network": {"id": "41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1445030024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.184", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3b1b7f5b4f84b5abbc401eb577c85c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5f645763-9f", "ovs_interfaceid": "5f645763-9f97-4686-80ab-6df7299b1235", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:55:31 compute-0 nova_compute[250018]: 2026-01-20 14:55:31.871 250022 DEBUG oslo_concurrency.processutils [None req-be5730d0-d4bb-4f57-9a2e-bb64e58fae1a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:55:31 compute-0 nova_compute[250018]: 2026-01-20 14:55:31.872 250022 DEBUG oslo_concurrency.lockutils [None req-be5730d0-d4bb-4f57-9a2e-bb64e58fae1a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Acquiring lock "82d5c1918fd7c974214c7a48c1793a7a82560462" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:55:31 compute-0 nova_compute[250018]: 2026-01-20 14:55:31.873 250022 DEBUG oslo_concurrency.lockutils [None req-be5730d0-d4bb-4f57-9a2e-bb64e58fae1a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:55:31 compute-0 nova_compute[250018]: 2026-01-20 14:55:31.873 250022 DEBUG oslo_concurrency.lockutils [None req-be5730d0-d4bb-4f57-9a2e-bb64e58fae1a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:55:31 compute-0 nova_compute[250018]: 2026-01-20 14:55:31.904 250022 DEBUG nova.storage.rbd_utils [None req-be5730d0-d4bb-4f57-9a2e-bb64e58fae1a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] rbd image 11c82470-ab02-4424-908b-705f1f65e062_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:55:31 compute-0 nova_compute[250018]: 2026-01-20 14:55:31.908 250022 DEBUG oslo_concurrency.processutils [None req-be5730d0-d4bb-4f57-9a2e-bb64e58fae1a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 11c82470-ab02-4424-908b-705f1f65e062_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:55:31 compute-0 nova_compute[250018]: 2026-01-20 14:55:31.951 250022 DEBUG oslo_concurrency.lockutils [None req-1b390d0a-dfcb-4ebc-a1ec-983f45a5e476 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Releasing lock "refresh_cache-b346e1ba-9e83-4e7f-bc03-c327d3e4173b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:55:32 compute-0 nova_compute[250018]: 2026-01-20 14:55:32.016 250022 DEBUG os_brick.utils [None req-1b390d0a-dfcb-4ebc-a1ec-983f45a5e476 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Jan 20 14:55:32 compute-0 nova_compute[250018]: 2026-01-20 14:55:32.018 268150 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:55:32 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2088: 321 pgs: 321 active+clean; 405 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 5.9 MiB/s rd, 8.5 MiB/s wr, 175 op/s
Jan 20 14:55:32 compute-0 nova_compute[250018]: 2026-01-20 14:55:32.061 268150 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.043s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:55:32 compute-0 nova_compute[250018]: 2026-01-20 14:55:32.062 268150 DEBUG oslo.privsep.daemon [-] privsep: reply[34c280bc-c313-467b-96db-e023a4b8d1a8]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:55:32 compute-0 nova_compute[250018]: 2026-01-20 14:55:32.064 268150 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:55:32 compute-0 nova_compute[250018]: 2026-01-20 14:55:32.073 268150 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:55:32 compute-0 nova_compute[250018]: 2026-01-20 14:55:32.073 268150 DEBUG oslo.privsep.daemon [-] privsep: reply[500c4f7c-bf67-4ea7-9c50-a18adde0c1ff]: (4, ('InitiatorName=iqn.1994-05.com.redhat:228389a1f17e', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:55:32 compute-0 nova_compute[250018]: 2026-01-20 14:55:32.076 268150 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:55:32 compute-0 nova_compute[250018]: 2026-01-20 14:55:32.089 268150 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:55:32 compute-0 nova_compute[250018]: 2026-01-20 14:55:32.089 268150 DEBUG oslo.privsep.daemon [-] privsep: reply[27ba2d06-a347-4dbf-8620-2d44ff767c2c]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:55:32 compute-0 nova_compute[250018]: 2026-01-20 14:55:32.091 268150 DEBUG oslo.privsep.daemon [-] privsep: reply[38247e82-36f2-4397-a4e8-ac16526618e0]: (4, '35085f33-1a27-41e3-805d-02c7ac6a1d7f') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:55:32 compute-0 nova_compute[250018]: 2026-01-20 14:55:32.092 250022 DEBUG oslo_concurrency.processutils [None req-1b390d0a-dfcb-4ebc-a1ec-983f45a5e476 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:55:32 compute-0 nova_compute[250018]: 2026-01-20 14:55:32.129 250022 DEBUG oslo_concurrency.processutils [None req-1b390d0a-dfcb-4ebc-a1ec-983f45a5e476 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] CMD "nvme version" returned: 0 in 0.038s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:55:32 compute-0 nova_compute[250018]: 2026-01-20 14:55:32.133 250022 DEBUG os_brick.initiator.connectors.lightos [None req-1b390d0a-dfcb-4ebc-a1ec-983f45a5e476 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Jan 20 14:55:32 compute-0 nova_compute[250018]: 2026-01-20 14:55:32.133 250022 DEBUG os_brick.initiator.connectors.lightos [None req-1b390d0a-dfcb-4ebc-a1ec-983f45a5e476 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Jan 20 14:55:32 compute-0 nova_compute[250018]: 2026-01-20 14:55:32.134 250022 DEBUG os_brick.initiator.connectors.lightos [None req-1b390d0a-dfcb-4ebc-a1ec-983f45a5e476 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Jan 20 14:55:32 compute-0 nova_compute[250018]: 2026-01-20 14:55:32.134 250022 DEBUG os_brick.utils [None req-1b390d0a-dfcb-4ebc-a1ec-983f45a5e476 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] <== get_connector_properties: return (116ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:228389a1f17e', 'do_local_attach': False, 'nvme_hostid': '5350774e-8b5e-4dba-80a9-92d405981c1d', 'system uuid': '35085f33-1a27-41e3-805d-02c7ac6a1d7f', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Jan 20 14:55:32 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/2268157372' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:55:32 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/3071191747' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:55:32 compute-0 nova_compute[250018]: 2026-01-20 14:55:32.273 250022 DEBUG oslo_concurrency.processutils [None req-be5730d0-d4bb-4f57-9a2e-bb64e58fae1a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 11c82470-ab02-4424-908b-705f1f65e062_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.365s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:55:32 compute-0 nova_compute[250018]: 2026-01-20 14:55:32.370 250022 DEBUG nova.storage.rbd_utils [None req-be5730d0-d4bb-4f57-9a2e-bb64e58fae1a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] resizing rbd image 11c82470-ab02-4424-908b-705f1f65e062_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 20 14:55:32 compute-0 sudo[325141]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:55:32 compute-0 sudo[325141]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:55:32 compute-0 sudo[325141]: pam_unix(sudo:session): session closed for user root
Jan 20 14:55:32 compute-0 nova_compute[250018]: 2026-01-20 14:55:32.482 250022 DEBUG nova.objects.instance [None req-be5730d0-d4bb-4f57-9a2e-bb64e58fae1a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Lazy-loading 'migration_context' on Instance uuid 11c82470-ab02-4424-908b-705f1f65e062 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:55:32 compute-0 sudo[325184]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:55:32 compute-0 sudo[325184]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:55:32 compute-0 sudo[325184]: pam_unix(sudo:session): session closed for user root
Jan 20 14:55:32 compute-0 nova_compute[250018]: 2026-01-20 14:55:32.631 250022 DEBUG nova.virt.libvirt.driver [None req-be5730d0-d4bb-4f57-9a2e-bb64e58fae1a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 11c82470-ab02-4424-908b-705f1f65e062] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 14:55:32 compute-0 nova_compute[250018]: 2026-01-20 14:55:32.631 250022 DEBUG nova.virt.libvirt.driver [None req-be5730d0-d4bb-4f57-9a2e-bb64e58fae1a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 11c82470-ab02-4424-908b-705f1f65e062] Ensure instance console log exists: /var/lib/nova/instances/11c82470-ab02-4424-908b-705f1f65e062/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 14:55:32 compute-0 nova_compute[250018]: 2026-01-20 14:55:32.632 250022 DEBUG oslo_concurrency.lockutils [None req-be5730d0-d4bb-4f57-9a2e-bb64e58fae1a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:55:32 compute-0 nova_compute[250018]: 2026-01-20 14:55:32.632 250022 DEBUG oslo_concurrency.lockutils [None req-be5730d0-d4bb-4f57-9a2e-bb64e58fae1a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:55:32 compute-0 nova_compute[250018]: 2026-01-20 14:55:32.632 250022 DEBUG oslo_concurrency.lockutils [None req-be5730d0-d4bb-4f57-9a2e-bb64e58fae1a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:55:32 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:55:32 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:55:32 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:55:32.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:55:33 compute-0 nova_compute[250018]: 2026-01-20 14:55:33.056 250022 DEBUG nova.compute.manager [req-8320c692-a968-40c1-bb65-87014e8a158f req-8ac502fd-4d95-4acd-8d7c-d9483c6e4953 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] Received event network-vif-plugged-5f645763-9f97-4686-80ab-6df7299b1235 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:55:33 compute-0 nova_compute[250018]: 2026-01-20 14:55:33.057 250022 DEBUG oslo_concurrency.lockutils [req-8320c692-a968-40c1-bb65-87014e8a158f req-8ac502fd-4d95-4acd-8d7c-d9483c6e4953 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "b346e1ba-9e83-4e7f-bc03-c327d3e4173b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:55:33 compute-0 nova_compute[250018]: 2026-01-20 14:55:33.057 250022 DEBUG oslo_concurrency.lockutils [req-8320c692-a968-40c1-bb65-87014e8a158f req-8ac502fd-4d95-4acd-8d7c-d9483c6e4953 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "b346e1ba-9e83-4e7f-bc03-c327d3e4173b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:55:33 compute-0 nova_compute[250018]: 2026-01-20 14:55:33.058 250022 DEBUG oslo_concurrency.lockutils [req-8320c692-a968-40c1-bb65-87014e8a158f req-8ac502fd-4d95-4acd-8d7c-d9483c6e4953 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "b346e1ba-9e83-4e7f-bc03-c327d3e4173b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:55:33 compute-0 nova_compute[250018]: 2026-01-20 14:55:33.058 250022 DEBUG nova.compute.manager [req-8320c692-a968-40c1-bb65-87014e8a158f req-8ac502fd-4d95-4acd-8d7c-d9483c6e4953 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] No waiting events found dispatching network-vif-plugged-5f645763-9f97-4686-80ab-6df7299b1235 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:55:33 compute-0 nova_compute[250018]: 2026-01-20 14:55:33.059 250022 WARNING nova.compute.manager [req-8320c692-a968-40c1-bb65-87014e8a158f req-8ac502fd-4d95-4acd-8d7c-d9483c6e4953 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] Received unexpected event network-vif-plugged-5f645763-9f97-4686-80ab-6df7299b1235 for instance with vm_state active and task_state resize_finish.
Jan 20 14:55:33 compute-0 nova_compute[250018]: 2026-01-20 14:55:33.072 250022 DEBUG nova.network.neutron [None req-be5730d0-d4bb-4f57-9a2e-bb64e58fae1a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 11c82470-ab02-4424-908b-705f1f65e062] Successfully created port: 6532e62a-b883-47ff-9799-820144870294 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 20 14:55:33 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:55:33 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1693936922' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:55:33 compute-0 nova_compute[250018]: 2026-01-20 14:55:33.232 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:55:33 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:55:33 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:55:33 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:55:33.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:55:33 compute-0 nova_compute[250018]: 2026-01-20 14:55:33.414 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:55:33 compute-0 nova_compute[250018]: 2026-01-20 14:55:33.536 250022 DEBUG nova.virt.libvirt.driver [None req-1b390d0a-dfcb-4ebc-a1ec-983f45a5e476 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] Starting finish_migration finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11698
Jan 20 14:55:33 compute-0 nova_compute[250018]: 2026-01-20 14:55:33.539 250022 DEBUG nova.virt.libvirt.driver [None req-1b390d0a-dfcb-4ebc-a1ec-983f45a5e476 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Jan 20 14:55:33 compute-0 nova_compute[250018]: 2026-01-20 14:55:33.539 250022 INFO nova.virt.libvirt.driver [None req-1b390d0a-dfcb-4ebc-a1ec-983f45a5e476 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] Creating image(s)
Jan 20 14:55:33 compute-0 ceph-mon[74360]: pgmap v2088: 321 pgs: 321 active+clean; 405 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 5.9 MiB/s rd, 8.5 MiB/s wr, 175 op/s
Jan 20 14:55:33 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/1693936922' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:55:33 compute-0 nova_compute[250018]: 2026-01-20 14:55:33.859 250022 DEBUG nova.storage.rbd_utils [None req-1b390d0a-dfcb-4ebc-a1ec-983f45a5e476 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] creating snapshot(nova-resize) on rbd image(b346e1ba-9e83-4e7f-bc03-c327d3e4173b_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 20 14:55:33 compute-0 nova_compute[250018]: 2026-01-20 14:55:33.972 250022 DEBUG nova.network.neutron [None req-be5730d0-d4bb-4f57-9a2e-bb64e58fae1a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 11c82470-ab02-4424-908b-705f1f65e062] Successfully updated port: 6532e62a-b883-47ff-9799-820144870294 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 20 14:55:33 compute-0 nova_compute[250018]: 2026-01-20 14:55:33.995 250022 DEBUG oslo_concurrency.lockutils [None req-be5730d0-d4bb-4f57-9a2e-bb64e58fae1a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Acquiring lock "refresh_cache-11c82470-ab02-4424-908b-705f1f65e062" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:55:33 compute-0 nova_compute[250018]: 2026-01-20 14:55:33.996 250022 DEBUG oslo_concurrency.lockutils [None req-be5730d0-d4bb-4f57-9a2e-bb64e58fae1a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Acquired lock "refresh_cache-11c82470-ab02-4424-908b-705f1f65e062" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:55:33 compute-0 nova_compute[250018]: 2026-01-20 14:55:33.996 250022 DEBUG nova.network.neutron [None req-be5730d0-d4bb-4f57-9a2e-bb64e58fae1a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 11c82470-ab02-4424-908b-705f1f65e062] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:55:34 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2089: 321 pgs: 321 active+clean; 373 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 4.5 MiB/s wr, 126 op/s
Jan 20 14:55:34 compute-0 nova_compute[250018]: 2026-01-20 14:55:34.151 250022 DEBUG nova.network.neutron [None req-be5730d0-d4bb-4f57-9a2e-bb64e58fae1a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 11c82470-ab02-4424-908b-705f1f65e062] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 14:55:34 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:55:34 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e305 do_prune osdmap full prune enabled
Jan 20 14:55:34 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e306 e306: 3 total, 3 up, 3 in
Jan 20 14:55:34 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e306: 3 total, 3 up, 3 in
Jan 20 14:55:34 compute-0 nova_compute[250018]: 2026-01-20 14:55:34.760 250022 DEBUG nova.objects.instance [None req-1b390d0a-dfcb-4ebc-a1ec-983f45a5e476 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Lazy-loading 'trusted_certs' on Instance uuid b346e1ba-9e83-4e7f-bc03-c327d3e4173b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:55:34 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/3124150087' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:55:34 compute-0 ceph-mon[74360]: pgmap v2089: 321 pgs: 321 active+clean; 373 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 4.5 MiB/s wr, 126 op/s
Jan 20 14:55:34 compute-0 ceph-mon[74360]: osdmap e306: 3 total, 3 up, 3 in
Jan 20 14:55:34 compute-0 nova_compute[250018]: 2026-01-20 14:55:34.902 250022 DEBUG nova.virt.libvirt.driver [None req-1b390d0a-dfcb-4ebc-a1ec-983f45a5e476 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Jan 20 14:55:34 compute-0 nova_compute[250018]: 2026-01-20 14:55:34.902 250022 DEBUG nova.virt.libvirt.driver [None req-1b390d0a-dfcb-4ebc-a1ec-983f45a5e476 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] Ensure instance console log exists: /var/lib/nova/instances/b346e1ba-9e83-4e7f-bc03-c327d3e4173b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 14:55:34 compute-0 nova_compute[250018]: 2026-01-20 14:55:34.903 250022 DEBUG oslo_concurrency.lockutils [None req-1b390d0a-dfcb-4ebc-a1ec-983f45a5e476 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:55:34 compute-0 nova_compute[250018]: 2026-01-20 14:55:34.903 250022 DEBUG oslo_concurrency.lockutils [None req-1b390d0a-dfcb-4ebc-a1ec-983f45a5e476 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:55:34 compute-0 nova_compute[250018]: 2026-01-20 14:55:34.904 250022 DEBUG oslo_concurrency.lockutils [None req-1b390d0a-dfcb-4ebc-a1ec-983f45a5e476 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:55:34 compute-0 nova_compute[250018]: 2026-01-20 14:55:34.907 250022 DEBUG nova.virt.libvirt.driver [None req-1b390d0a-dfcb-4ebc-a1ec-983f45a5e476 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] Start _get_guest_xml network_info=[{"id": "5f645763-9f97-4686-80ab-6df7299b1235", "address": "fa:16:3e:a5:83:5d", "network": {"id": "41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1445030024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.184", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherB-1445030024-network", "vif_mac": "fa:16:3e:a5:83:5d"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3b1b7f5b4f84b5abbc401eb577c85c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5f645763-9f", "ovs_interfaceid": "5f645763-9f97-4686-80ab-6df7299b1235", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vdb': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encrypted': False, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'guest_format': None, 'size': 0, 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [{'mount_device': '/dev/vdb', 'boot_index': None, 'device_type': 'disk', 'attachment_id': 'eea3000c-bb00-408a-8f86-bead31fa452f', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-f3a427b1-0e50-45b1-a975-3d7aabd0195a', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'f3a427b1-0e50-45b1-a975-3d7aabd0195a', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'attaching', 'instance': 'b346e1ba-9e83-4e7f-bc03-c327d3e4173b', 'attached_at': '2026-01-20T14:55:33.000000', 'detached_at': '', 'volume_id': 'f3a427b1-0e50-45b1-a975-3d7aabd0195a', 'serial': 'f3a427b1-0e50-45b1-a975-3d7aabd0195a'}, 'disk_bus': 'virtio', 'guest_format': None, 'delete_on_termination': False, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 14:55:34 compute-0 nova_compute[250018]: 2026-01-20 14:55:34.912 250022 WARNING nova.virt.libvirt.driver [None req-1b390d0a-dfcb-4ebc-a1ec-983f45a5e476 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:55:34 compute-0 nova_compute[250018]: 2026-01-20 14:55:34.916 250022 DEBUG nova.virt.libvirt.host [None req-1b390d0a-dfcb-4ebc-a1ec-983f45a5e476 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 14:55:34 compute-0 nova_compute[250018]: 2026-01-20 14:55:34.917 250022 DEBUG nova.virt.libvirt.host [None req-1b390d0a-dfcb-4ebc-a1ec-983f45a5e476 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 14:55:34 compute-0 nova_compute[250018]: 2026-01-20 14:55:34.918 250022 DEBUG nova.network.neutron [None req-be5730d0-d4bb-4f57-9a2e-bb64e58fae1a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 11c82470-ab02-4424-908b-705f1f65e062] Updating instance_info_cache with network_info: [{"id": "6532e62a-b883-47ff-9799-820144870294", "address": "fa:16:3e:7a:3f:c2", "network": {"id": "f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c", "bridge": "br-int", "label": "tempest-ServersTestJSON-1745321011-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca6cd0afe0ab41e3ab36d21a4129f734", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6532e62a-b8", "ovs_interfaceid": "6532e62a-b883-47ff-9799-820144870294", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:55:34 compute-0 nova_compute[250018]: 2026-01-20 14:55:34.922 250022 DEBUG nova.virt.libvirt.host [None req-1b390d0a-dfcb-4ebc-a1ec-983f45a5e476 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 14:55:34 compute-0 nova_compute[250018]: 2026-01-20 14:55:34.922 250022 DEBUG nova.virt.libvirt.host [None req-1b390d0a-dfcb-4ebc-a1ec-983f45a5e476 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 14:55:34 compute-0 nova_compute[250018]: 2026-01-20 14:55:34.924 250022 DEBUG nova.virt.libvirt.driver [None req-1b390d0a-dfcb-4ebc-a1ec-983f45a5e476 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 14:55:34 compute-0 nova_compute[250018]: 2026-01-20 14:55:34.925 250022 DEBUG nova.virt.hardware [None req-1b390d0a-dfcb-4ebc-a1ec-983f45a5e476 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='30c26a27-d918-46d8-a512-4ef3b4ce5955',id=2,is_public=True,memory_mb=192,name='m1.micro',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 14:55:34 compute-0 nova_compute[250018]: 2026-01-20 14:55:34.926 250022 DEBUG nova.virt.hardware [None req-1b390d0a-dfcb-4ebc-a1ec-983f45a5e476 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 14:55:34 compute-0 nova_compute[250018]: 2026-01-20 14:55:34.926 250022 DEBUG nova.virt.hardware [None req-1b390d0a-dfcb-4ebc-a1ec-983f45a5e476 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 14:55:34 compute-0 nova_compute[250018]: 2026-01-20 14:55:34.927 250022 DEBUG nova.virt.hardware [None req-1b390d0a-dfcb-4ebc-a1ec-983f45a5e476 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 14:55:34 compute-0 nova_compute[250018]: 2026-01-20 14:55:34.927 250022 DEBUG nova.virt.hardware [None req-1b390d0a-dfcb-4ebc-a1ec-983f45a5e476 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 14:55:34 compute-0 nova_compute[250018]: 2026-01-20 14:55:34.927 250022 DEBUG nova.virt.hardware [None req-1b390d0a-dfcb-4ebc-a1ec-983f45a5e476 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 14:55:34 compute-0 nova_compute[250018]: 2026-01-20 14:55:34.928 250022 DEBUG nova.virt.hardware [None req-1b390d0a-dfcb-4ebc-a1ec-983f45a5e476 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 14:55:34 compute-0 nova_compute[250018]: 2026-01-20 14:55:34.928 250022 DEBUG nova.virt.hardware [None req-1b390d0a-dfcb-4ebc-a1ec-983f45a5e476 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 14:55:34 compute-0 nova_compute[250018]: 2026-01-20 14:55:34.928 250022 DEBUG nova.virt.hardware [None req-1b390d0a-dfcb-4ebc-a1ec-983f45a5e476 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 14:55:34 compute-0 nova_compute[250018]: 2026-01-20 14:55:34.929 250022 DEBUG nova.virt.hardware [None req-1b390d0a-dfcb-4ebc-a1ec-983f45a5e476 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 14:55:34 compute-0 nova_compute[250018]: 2026-01-20 14:55:34.929 250022 DEBUG nova.virt.hardware [None req-1b390d0a-dfcb-4ebc-a1ec-983f45a5e476 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 14:55:34 compute-0 nova_compute[250018]: 2026-01-20 14:55:34.929 250022 DEBUG nova.objects.instance [None req-1b390d0a-dfcb-4ebc-a1ec-983f45a5e476 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Lazy-loading 'vcpu_model' on Instance uuid b346e1ba-9e83-4e7f-bc03-c327d3e4173b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:55:34 compute-0 nova_compute[250018]: 2026-01-20 14:55:34.939 250022 DEBUG oslo_concurrency.lockutils [None req-be5730d0-d4bb-4f57-9a2e-bb64e58fae1a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Releasing lock "refresh_cache-11c82470-ab02-4424-908b-705f1f65e062" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:55:34 compute-0 nova_compute[250018]: 2026-01-20 14:55:34.940 250022 DEBUG nova.compute.manager [None req-be5730d0-d4bb-4f57-9a2e-bb64e58fae1a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 11c82470-ab02-4424-908b-705f1f65e062] Instance network_info: |[{"id": "6532e62a-b883-47ff-9799-820144870294", "address": "fa:16:3e:7a:3f:c2", "network": {"id": "f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c", "bridge": "br-int", "label": "tempest-ServersTestJSON-1745321011-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca6cd0afe0ab41e3ab36d21a4129f734", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6532e62a-b8", "ovs_interfaceid": "6532e62a-b883-47ff-9799-820144870294", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 20 14:55:34 compute-0 nova_compute[250018]: 2026-01-20 14:55:34.941 250022 DEBUG nova.virt.libvirt.driver [None req-be5730d0-d4bb-4f57-9a2e-bb64e58fae1a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 11c82470-ab02-4424-908b-705f1f65e062] Start _get_guest_xml network_info=[{"id": "6532e62a-b883-47ff-9799-820144870294", "address": "fa:16:3e:7a:3f:c2", "network": {"id": "f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c", "bridge": "br-int", "label": "tempest-ServersTestJSON-1745321011-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca6cd0afe0ab41e3ab36d21a4129f734", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6532e62a-b8", "ovs_interfaceid": "6532e62a-b883-47ff-9799-820144870294", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encrypted': False, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'guest_format': None, 'size': 0, 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 14:55:34 compute-0 nova_compute[250018]: 2026-01-20 14:55:34.945 250022 DEBUG oslo_concurrency.processutils [None req-1b390d0a-dfcb-4ebc-a1ec-983f45a5e476 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:55:34 compute-0 nova_compute[250018]: 2026-01-20 14:55:34.976 250022 WARNING nova.virt.libvirt.driver [None req-be5730d0-d4bb-4f57-9a2e-bb64e58fae1a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:55:34 compute-0 nova_compute[250018]: 2026-01-20 14:55:34.981 250022 DEBUG nova.virt.libvirt.host [None req-be5730d0-d4bb-4f57-9a2e-bb64e58fae1a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 14:55:34 compute-0 nova_compute[250018]: 2026-01-20 14:55:34.981 250022 DEBUG nova.virt.libvirt.host [None req-be5730d0-d4bb-4f57-9a2e-bb64e58fae1a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 14:55:34 compute-0 nova_compute[250018]: 2026-01-20 14:55:34.986 250022 DEBUG nova.virt.libvirt.host [None req-be5730d0-d4bb-4f57-9a2e-bb64e58fae1a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 14:55:34 compute-0 nova_compute[250018]: 2026-01-20 14:55:34.986 250022 DEBUG nova.virt.libvirt.host [None req-be5730d0-d4bb-4f57-9a2e-bb64e58fae1a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 14:55:34 compute-0 nova_compute[250018]: 2026-01-20 14:55:34.987 250022 DEBUG nova.virt.libvirt.driver [None req-be5730d0-d4bb-4f57-9a2e-bb64e58fae1a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 14:55:34 compute-0 nova_compute[250018]: 2026-01-20 14:55:34.988 250022 DEBUG nova.virt.hardware [None req-be5730d0-d4bb-4f57-9a2e-bb64e58fae1a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 14:55:34 compute-0 nova_compute[250018]: 2026-01-20 14:55:34.988 250022 DEBUG nova.virt.hardware [None req-be5730d0-d4bb-4f57-9a2e-bb64e58fae1a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 14:55:34 compute-0 nova_compute[250018]: 2026-01-20 14:55:34.988 250022 DEBUG nova.virt.hardware [None req-be5730d0-d4bb-4f57-9a2e-bb64e58fae1a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 14:55:34 compute-0 nova_compute[250018]: 2026-01-20 14:55:34.988 250022 DEBUG nova.virt.hardware [None req-be5730d0-d4bb-4f57-9a2e-bb64e58fae1a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 14:55:34 compute-0 nova_compute[250018]: 2026-01-20 14:55:34.989 250022 DEBUG nova.virt.hardware [None req-be5730d0-d4bb-4f57-9a2e-bb64e58fae1a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 14:55:34 compute-0 nova_compute[250018]: 2026-01-20 14:55:34.989 250022 DEBUG nova.virt.hardware [None req-be5730d0-d4bb-4f57-9a2e-bb64e58fae1a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 14:55:34 compute-0 nova_compute[250018]: 2026-01-20 14:55:34.989 250022 DEBUG nova.virt.hardware [None req-be5730d0-d4bb-4f57-9a2e-bb64e58fae1a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 14:55:34 compute-0 nova_compute[250018]: 2026-01-20 14:55:34.989 250022 DEBUG nova.virt.hardware [None req-be5730d0-d4bb-4f57-9a2e-bb64e58fae1a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 14:55:34 compute-0 nova_compute[250018]: 2026-01-20 14:55:34.990 250022 DEBUG nova.virt.hardware [None req-be5730d0-d4bb-4f57-9a2e-bb64e58fae1a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 14:55:34 compute-0 nova_compute[250018]: 2026-01-20 14:55:34.990 250022 DEBUG nova.virt.hardware [None req-be5730d0-d4bb-4f57-9a2e-bb64e58fae1a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 14:55:34 compute-0 nova_compute[250018]: 2026-01-20 14:55:34.990 250022 DEBUG nova.virt.hardware [None req-be5730d0-d4bb-4f57-9a2e-bb64e58fae1a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 14:55:34 compute-0 nova_compute[250018]: 2026-01-20 14:55:34.993 250022 DEBUG oslo_concurrency.processutils [None req-be5730d0-d4bb-4f57-9a2e-bb64e58fae1a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:55:34 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:55:34 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:55:34 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:55:34.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:55:35 compute-0 nova_compute[250018]: 2026-01-20 14:55:35.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:55:35 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:55:35 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:55:35 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:55:35.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:55:35 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:55:35 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2353520320' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:55:35 compute-0 nova_compute[250018]: 2026-01-20 14:55:35.421 250022 DEBUG oslo_concurrency.processutils [None req-1b390d0a-dfcb-4ebc-a1ec-983f45a5e476 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:55:35 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:55:35 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3121600348' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:55:35 compute-0 nova_compute[250018]: 2026-01-20 14:55:35.462 250022 DEBUG oslo_concurrency.processutils [None req-be5730d0-d4bb-4f57-9a2e-bb64e58fae1a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:55:35 compute-0 nova_compute[250018]: 2026-01-20 14:55:35.493 250022 DEBUG nova.storage.rbd_utils [None req-be5730d0-d4bb-4f57-9a2e-bb64e58fae1a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] rbd image 11c82470-ab02-4424-908b-705f1f65e062_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:55:35 compute-0 nova_compute[250018]: 2026-01-20 14:55:35.497 250022 DEBUG oslo_concurrency.processutils [None req-be5730d0-d4bb-4f57-9a2e-bb64e58fae1a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:55:35 compute-0 nova_compute[250018]: 2026-01-20 14:55:35.529 250022 DEBUG oslo_concurrency.processutils [None req-1b390d0a-dfcb-4ebc-a1ec-983f45a5e476 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:55:35 compute-0 nova_compute[250018]: 2026-01-20 14:55:35.800 250022 DEBUG nova.compute.manager [req-166a64bc-7356-43d7-869b-7b5fcc5eeda9 req-ef2f8b41-032e-4064-a6a2-375b7257c4a2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 11c82470-ab02-4424-908b-705f1f65e062] Received event network-changed-6532e62a-b883-47ff-9799-820144870294 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:55:35 compute-0 nova_compute[250018]: 2026-01-20 14:55:35.802 250022 DEBUG nova.compute.manager [req-166a64bc-7356-43d7-869b-7b5fcc5eeda9 req-ef2f8b41-032e-4064-a6a2-375b7257c4a2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 11c82470-ab02-4424-908b-705f1f65e062] Refreshing instance network info cache due to event network-changed-6532e62a-b883-47ff-9799-820144870294. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:55:35 compute-0 nova_compute[250018]: 2026-01-20 14:55:35.803 250022 DEBUG oslo_concurrency.lockutils [req-166a64bc-7356-43d7-869b-7b5fcc5eeda9 req-ef2f8b41-032e-4064-a6a2-375b7257c4a2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-11c82470-ab02-4424-908b-705f1f65e062" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:55:35 compute-0 nova_compute[250018]: 2026-01-20 14:55:35.804 250022 DEBUG oslo_concurrency.lockutils [req-166a64bc-7356-43d7-869b-7b5fcc5eeda9 req-ef2f8b41-032e-4064-a6a2-375b7257c4a2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-11c82470-ab02-4424-908b-705f1f65e062" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:55:35 compute-0 nova_compute[250018]: 2026-01-20 14:55:35.804 250022 DEBUG nova.network.neutron [req-166a64bc-7356-43d7-869b-7b5fcc5eeda9 req-ef2f8b41-032e-4064-a6a2-375b7257c4a2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 11c82470-ab02-4424-908b-705f1f65e062] Refreshing network info cache for port 6532e62a-b883-47ff-9799-820144870294 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:55:35 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/2353520320' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:55:35 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3121600348' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:55:35 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:55:35 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2064067837' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:55:35 compute-0 nova_compute[250018]: 2026-01-20 14:55:35.960 250022 DEBUG oslo_concurrency.processutils [None req-be5730d0-d4bb-4f57-9a2e-bb64e58fae1a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:55:35 compute-0 nova_compute[250018]: 2026-01-20 14:55:35.961 250022 DEBUG nova.virt.libvirt.vif [None req-be5730d0-d4bb-4f57-9a2e-bb64e58fae1a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:55:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-₡-1988318391',display_name='tempest-₡-1988318391',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest--1988318391',id=127,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ca6cd0afe0ab41e3ab36d21a4129f734',ramdisk_id='',reservation_id='r-jk35jcuy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-405461620',owner_user_name='tempest-ServersTestJSON-405461620-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs
=None,updated_at=2026-01-20T14:55:31Z,user_data=None,user_id='395a5c503218411284bc94c45263d1fb',uuid=11c82470-ab02-4424-908b-705f1f65e062,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6532e62a-b883-47ff-9799-820144870294", "address": "fa:16:3e:7a:3f:c2", "network": {"id": "f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c", "bridge": "br-int", "label": "tempest-ServersTestJSON-1745321011-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca6cd0afe0ab41e3ab36d21a4129f734", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6532e62a-b8", "ovs_interfaceid": "6532e62a-b883-47ff-9799-820144870294", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 14:55:35 compute-0 nova_compute[250018]: 2026-01-20 14:55:35.962 250022 DEBUG nova.network.os_vif_util [None req-be5730d0-d4bb-4f57-9a2e-bb64e58fae1a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Converting VIF {"id": "6532e62a-b883-47ff-9799-820144870294", "address": "fa:16:3e:7a:3f:c2", "network": {"id": "f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c", "bridge": "br-int", "label": "tempest-ServersTestJSON-1745321011-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca6cd0afe0ab41e3ab36d21a4129f734", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6532e62a-b8", "ovs_interfaceid": "6532e62a-b883-47ff-9799-820144870294", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:55:35 compute-0 nova_compute[250018]: 2026-01-20 14:55:35.963 250022 DEBUG nova.network.os_vif_util [None req-be5730d0-d4bb-4f57-9a2e-bb64e58fae1a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7a:3f:c2,bridge_name='br-int',has_traffic_filtering=True,id=6532e62a-b883-47ff-9799-820144870294,network=Network(f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6532e62a-b8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:55:35 compute-0 nova_compute[250018]: 2026-01-20 14:55:35.965 250022 DEBUG nova.objects.instance [None req-be5730d0-d4bb-4f57-9a2e-bb64e58fae1a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Lazy-loading 'pci_devices' on Instance uuid 11c82470-ab02-4424-908b-705f1f65e062 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:55:35 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:55:35 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3426357909' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:55:35 compute-0 nova_compute[250018]: 2026-01-20 14:55:35.979 250022 DEBUG nova.virt.libvirt.driver [None req-be5730d0-d4bb-4f57-9a2e-bb64e58fae1a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 11c82470-ab02-4424-908b-705f1f65e062] End _get_guest_xml xml=<domain type="kvm">
Jan 20 14:55:35 compute-0 nova_compute[250018]:   <uuid>11c82470-ab02-4424-908b-705f1f65e062</uuid>
Jan 20 14:55:35 compute-0 nova_compute[250018]:   <name>instance-0000007f</name>
Jan 20 14:55:35 compute-0 nova_compute[250018]:   <memory>131072</memory>
Jan 20 14:55:35 compute-0 nova_compute[250018]:   <vcpu>1</vcpu>
Jan 20 14:55:35 compute-0 nova_compute[250018]:   <metadata>
Jan 20 14:55:35 compute-0 nova_compute[250018]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 14:55:35 compute-0 nova_compute[250018]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 14:55:35 compute-0 nova_compute[250018]:       <nova:name>tempest-₡-1988318391</nova:name>
Jan 20 14:55:35 compute-0 nova_compute[250018]:       <nova:creationTime>2026-01-20 14:55:34</nova:creationTime>
Jan 20 14:55:35 compute-0 nova_compute[250018]:       <nova:flavor name="m1.nano">
Jan 20 14:55:35 compute-0 nova_compute[250018]:         <nova:memory>128</nova:memory>
Jan 20 14:55:35 compute-0 nova_compute[250018]:         <nova:disk>1</nova:disk>
Jan 20 14:55:35 compute-0 nova_compute[250018]:         <nova:swap>0</nova:swap>
Jan 20 14:55:35 compute-0 nova_compute[250018]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 14:55:35 compute-0 nova_compute[250018]:         <nova:vcpus>1</nova:vcpus>
Jan 20 14:55:35 compute-0 nova_compute[250018]:       </nova:flavor>
Jan 20 14:55:35 compute-0 nova_compute[250018]:       <nova:owner>
Jan 20 14:55:35 compute-0 nova_compute[250018]:         <nova:user uuid="395a5c503218411284bc94c45263d1fb">tempest-ServersTestJSON-405461620-project-member</nova:user>
Jan 20 14:55:35 compute-0 nova_compute[250018]:         <nova:project uuid="ca6cd0afe0ab41e3ab36d21a4129f734">tempest-ServersTestJSON-405461620</nova:project>
Jan 20 14:55:35 compute-0 nova_compute[250018]:       </nova:owner>
Jan 20 14:55:35 compute-0 nova_compute[250018]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 14:55:35 compute-0 nova_compute[250018]:       <nova:ports>
Jan 20 14:55:35 compute-0 nova_compute[250018]:         <nova:port uuid="6532e62a-b883-47ff-9799-820144870294">
Jan 20 14:55:35 compute-0 nova_compute[250018]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Jan 20 14:55:35 compute-0 nova_compute[250018]:         </nova:port>
Jan 20 14:55:35 compute-0 nova_compute[250018]:       </nova:ports>
Jan 20 14:55:35 compute-0 nova_compute[250018]:     </nova:instance>
Jan 20 14:55:35 compute-0 nova_compute[250018]:   </metadata>
Jan 20 14:55:35 compute-0 nova_compute[250018]:   <sysinfo type="smbios">
Jan 20 14:55:35 compute-0 nova_compute[250018]:     <system>
Jan 20 14:55:35 compute-0 nova_compute[250018]:       <entry name="manufacturer">RDO</entry>
Jan 20 14:55:35 compute-0 nova_compute[250018]:       <entry name="product">OpenStack Compute</entry>
Jan 20 14:55:35 compute-0 nova_compute[250018]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 14:55:35 compute-0 nova_compute[250018]:       <entry name="serial">11c82470-ab02-4424-908b-705f1f65e062</entry>
Jan 20 14:55:35 compute-0 nova_compute[250018]:       <entry name="uuid">11c82470-ab02-4424-908b-705f1f65e062</entry>
Jan 20 14:55:35 compute-0 nova_compute[250018]:       <entry name="family">Virtual Machine</entry>
Jan 20 14:55:35 compute-0 nova_compute[250018]:     </system>
Jan 20 14:55:35 compute-0 nova_compute[250018]:   </sysinfo>
Jan 20 14:55:35 compute-0 nova_compute[250018]:   <os>
Jan 20 14:55:35 compute-0 nova_compute[250018]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 14:55:35 compute-0 nova_compute[250018]:     <boot dev="hd"/>
Jan 20 14:55:35 compute-0 nova_compute[250018]:     <smbios mode="sysinfo"/>
Jan 20 14:55:35 compute-0 nova_compute[250018]:   </os>
Jan 20 14:55:35 compute-0 nova_compute[250018]:   <features>
Jan 20 14:55:35 compute-0 nova_compute[250018]:     <acpi/>
Jan 20 14:55:35 compute-0 nova_compute[250018]:     <apic/>
Jan 20 14:55:35 compute-0 nova_compute[250018]:     <vmcoreinfo/>
Jan 20 14:55:35 compute-0 nova_compute[250018]:   </features>
Jan 20 14:55:35 compute-0 nova_compute[250018]:   <clock offset="utc">
Jan 20 14:55:35 compute-0 nova_compute[250018]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 14:55:35 compute-0 nova_compute[250018]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 14:55:35 compute-0 nova_compute[250018]:     <timer name="hpet" present="no"/>
Jan 20 14:55:35 compute-0 nova_compute[250018]:   </clock>
Jan 20 14:55:35 compute-0 nova_compute[250018]:   <cpu mode="custom" match="exact">
Jan 20 14:55:35 compute-0 nova_compute[250018]:     <model>Nehalem</model>
Jan 20 14:55:35 compute-0 nova_compute[250018]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 14:55:35 compute-0 nova_compute[250018]:   </cpu>
Jan 20 14:55:35 compute-0 nova_compute[250018]:   <devices>
Jan 20 14:55:35 compute-0 nova_compute[250018]:     <disk type="network" device="disk">
Jan 20 14:55:35 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 14:55:35 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/11c82470-ab02-4424-908b-705f1f65e062_disk">
Jan 20 14:55:35 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:55:35 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:55:35 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:55:35 compute-0 nova_compute[250018]:       </source>
Jan 20 14:55:35 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 14:55:35 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:55:35 compute-0 nova_compute[250018]:       </auth>
Jan 20 14:55:35 compute-0 nova_compute[250018]:       <target dev="vda" bus="virtio"/>
Jan 20 14:55:35 compute-0 nova_compute[250018]:     </disk>
Jan 20 14:55:35 compute-0 nova_compute[250018]:     <disk type="network" device="cdrom">
Jan 20 14:55:35 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 14:55:35 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/11c82470-ab02-4424-908b-705f1f65e062_disk.config">
Jan 20 14:55:35 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:55:35 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:55:35 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:55:35 compute-0 nova_compute[250018]:       </source>
Jan 20 14:55:35 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 14:55:35 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:55:35 compute-0 nova_compute[250018]:       </auth>
Jan 20 14:55:35 compute-0 nova_compute[250018]:       <target dev="sda" bus="sata"/>
Jan 20 14:55:35 compute-0 nova_compute[250018]:     </disk>
Jan 20 14:55:35 compute-0 nova_compute[250018]:     <interface type="ethernet">
Jan 20 14:55:35 compute-0 nova_compute[250018]:       <mac address="fa:16:3e:7a:3f:c2"/>
Jan 20 14:55:35 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 14:55:35 compute-0 nova_compute[250018]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 14:55:35 compute-0 nova_compute[250018]:       <mtu size="1442"/>
Jan 20 14:55:35 compute-0 nova_compute[250018]:       <target dev="tap6532e62a-b8"/>
Jan 20 14:55:35 compute-0 nova_compute[250018]:     </interface>
Jan 20 14:55:35 compute-0 nova_compute[250018]:     <serial type="pty">
Jan 20 14:55:35 compute-0 nova_compute[250018]:       <log file="/var/lib/nova/instances/11c82470-ab02-4424-908b-705f1f65e062/console.log" append="off"/>
Jan 20 14:55:35 compute-0 nova_compute[250018]:     </serial>
Jan 20 14:55:35 compute-0 nova_compute[250018]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 14:55:35 compute-0 nova_compute[250018]:     <video>
Jan 20 14:55:35 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 14:55:35 compute-0 nova_compute[250018]:     </video>
Jan 20 14:55:35 compute-0 nova_compute[250018]:     <input type="tablet" bus="usb"/>
Jan 20 14:55:35 compute-0 nova_compute[250018]:     <rng model="virtio">
Jan 20 14:55:35 compute-0 nova_compute[250018]:       <backend model="random">/dev/urandom</backend>
Jan 20 14:55:35 compute-0 nova_compute[250018]:     </rng>
Jan 20 14:55:35 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root"/>
Jan 20 14:55:35 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:55:35 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:55:35 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:55:35 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:55:35 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:55:35 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:55:35 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:55:35 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:55:35 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:55:35 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:55:35 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:55:35 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:55:35 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:55:35 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:55:35 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:55:35 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:55:35 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:55:35 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:55:35 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:55:35 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:55:35 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:55:35 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:55:35 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:55:35 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:55:35 compute-0 nova_compute[250018]:     <controller type="usb" index="0"/>
Jan 20 14:55:35 compute-0 nova_compute[250018]:     <memballoon model="virtio">
Jan 20 14:55:35 compute-0 nova_compute[250018]:       <stats period="10"/>
Jan 20 14:55:35 compute-0 nova_compute[250018]:     </memballoon>
Jan 20 14:55:35 compute-0 nova_compute[250018]:   </devices>
Jan 20 14:55:35 compute-0 nova_compute[250018]: </domain>
Jan 20 14:55:35 compute-0 nova_compute[250018]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 14:55:35 compute-0 nova_compute[250018]: 2026-01-20 14:55:35.987 250022 DEBUG nova.compute.manager [None req-be5730d0-d4bb-4f57-9a2e-bb64e58fae1a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 11c82470-ab02-4424-908b-705f1f65e062] Preparing to wait for external event network-vif-plugged-6532e62a-b883-47ff-9799-820144870294 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 20 14:55:35 compute-0 nova_compute[250018]: 2026-01-20 14:55:35.987 250022 DEBUG oslo_concurrency.lockutils [None req-be5730d0-d4bb-4f57-9a2e-bb64e58fae1a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Acquiring lock "11c82470-ab02-4424-908b-705f1f65e062-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:55:35 compute-0 nova_compute[250018]: 2026-01-20 14:55:35.988 250022 DEBUG oslo_concurrency.lockutils [None req-be5730d0-d4bb-4f57-9a2e-bb64e58fae1a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Lock "11c82470-ab02-4424-908b-705f1f65e062-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:55:35 compute-0 nova_compute[250018]: 2026-01-20 14:55:35.988 250022 DEBUG oslo_concurrency.lockutils [None req-be5730d0-d4bb-4f57-9a2e-bb64e58fae1a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Lock "11c82470-ab02-4424-908b-705f1f65e062-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:55:35 compute-0 nova_compute[250018]: 2026-01-20 14:55:35.989 250022 DEBUG nova.virt.libvirt.vif [None req-be5730d0-d4bb-4f57-9a2e-bb64e58fae1a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:55:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-₡-1988318391',display_name='tempest-₡-1988318391',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest--1988318391',id=127,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ca6cd0afe0ab41e3ab36d21a4129f734',ramdisk_id='',reservation_id='r-jk35jcuy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-405461620',owner_user_name='tempest-ServersTestJSON-405461620-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:55:31Z,user_data=None,user_id='395a5c503218411284bc94c45263d1fb',uuid=11c82470-ab02-4424-908b-705f1f65e062,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6532e62a-b883-47ff-9799-820144870294", "address": "fa:16:3e:7a:3f:c2", "network": {"id": "f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c", "bridge": "br-int", "label": "tempest-ServersTestJSON-1745321011-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca6cd0afe0ab41e3ab36d21a4129f734", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6532e62a-b8", "ovs_interfaceid": "6532e62a-b883-47ff-9799-820144870294", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 14:55:35 compute-0 nova_compute[250018]: 2026-01-20 14:55:35.990 250022 DEBUG nova.network.os_vif_util [None req-be5730d0-d4bb-4f57-9a2e-bb64e58fae1a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Converting VIF {"id": "6532e62a-b883-47ff-9799-820144870294", "address": "fa:16:3e:7a:3f:c2", "network": {"id": "f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c", "bridge": "br-int", "label": "tempest-ServersTestJSON-1745321011-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca6cd0afe0ab41e3ab36d21a4129f734", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6532e62a-b8", "ovs_interfaceid": "6532e62a-b883-47ff-9799-820144870294", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:55:35 compute-0 nova_compute[250018]: 2026-01-20 14:55:35.991 250022 DEBUG nova.network.os_vif_util [None req-be5730d0-d4bb-4f57-9a2e-bb64e58fae1a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7a:3f:c2,bridge_name='br-int',has_traffic_filtering=True,id=6532e62a-b883-47ff-9799-820144870294,network=Network(f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6532e62a-b8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:55:35 compute-0 nova_compute[250018]: 2026-01-20 14:55:35.991 250022 DEBUG os_vif [None req-be5730d0-d4bb-4f57-9a2e-bb64e58fae1a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7a:3f:c2,bridge_name='br-int',has_traffic_filtering=True,id=6532e62a-b883-47ff-9799-820144870294,network=Network(f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6532e62a-b8') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 14:55:35 compute-0 nova_compute[250018]: 2026-01-20 14:55:35.992 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:55:35 compute-0 nova_compute[250018]: 2026-01-20 14:55:35.993 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:55:35 compute-0 nova_compute[250018]: 2026-01-20 14:55:35.993 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:55:35 compute-0 nova_compute[250018]: 2026-01-20 14:55:35.994 250022 DEBUG oslo_concurrency.processutils [None req-1b390d0a-dfcb-4ebc-a1ec-983f45a5e476 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:55:35 compute-0 nova_compute[250018]: 2026-01-20 14:55:35.999 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:55:36 compute-0 nova_compute[250018]: 2026-01-20 14:55:35.999 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6532e62a-b8, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:55:36 compute-0 nova_compute[250018]: 2026-01-20 14:55:36.000 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6532e62a-b8, col_values=(('external_ids', {'iface-id': '6532e62a-b883-47ff-9799-820144870294', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7a:3f:c2', 'vm-uuid': '11c82470-ab02-4424-908b-705f1f65e062'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:55:36 compute-0 nova_compute[250018]: 2026-01-20 14:55:36.002 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:55:36 compute-0 NetworkManager[48960]: <info>  [1768920936.0031] manager: (tap6532e62a-b8): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/225)
Jan 20 14:55:36 compute-0 nova_compute[250018]: 2026-01-20 14:55:36.005 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:55:36 compute-0 nova_compute[250018]: 2026-01-20 14:55:36.007 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:55:36 compute-0 nova_compute[250018]: 2026-01-20 14:55:36.008 250022 INFO os_vif [None req-be5730d0-d4bb-4f57-9a2e-bb64e58fae1a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7a:3f:c2,bridge_name='br-int',has_traffic_filtering=True,id=6532e62a-b883-47ff-9799-820144870294,network=Network(f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6532e62a-b8')
Jan 20 14:55:36 compute-0 nova_compute[250018]: 2026-01-20 14:55:36.022 250022 DEBUG nova.virt.libvirt.vif [None req-1b390d0a-dfcb-4ebc-a1ec-983f45a5e476 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:54:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-141692785',display_name='tempest-ServerActionsTestOtherB-server-141692785',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-141692785',id=123,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBO99uJ9+FwgjxRb/9u+f3Mj9/VKSDM+OKd66Ygsg8lEO+7bGpDEQrC5BIaSV+Na5YF+3DqUwLNmAYSN9IkTSGbRPw5y8813A+KsiNHebrpnZ7oReyT+5/zNQYafCHVAfGA==',key_name='tempest-keypair-302882914',keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:54:37Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='b3b1b7f5b4f84b5abbc401eb577c85c0',ramdisk_id='',reservation_id='r-c8v02o4r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerActionsTestOtherB-1136521362',owner_user_name='tempest-ServerActionsTestOtherB-1136521362-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:55:29Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='215db37373dc4ae5a75cbd6866f471da',uuid=b346e1ba-9e83-4e7f-bc03-c327d3e4173b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5f645763-9f97-4686-80ab-6df7299b1235", "address": "fa:16:3e:a5:83:5d", "network": {"id": "41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1445030024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.184", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherB-1445030024-network", "vif_mac": "fa:16:3e:a5:83:5d"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3b1b7f5b4f84b5abbc401eb577c85c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5f645763-9f", "ovs_interfaceid": "5f645763-9f97-4686-80ab-6df7299b1235", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 14:55:36 compute-0 nova_compute[250018]: 2026-01-20 14:55:36.023 250022 DEBUG nova.network.os_vif_util [None req-1b390d0a-dfcb-4ebc-a1ec-983f45a5e476 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Converting VIF {"id": "5f645763-9f97-4686-80ab-6df7299b1235", "address": "fa:16:3e:a5:83:5d", "network": {"id": "41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1445030024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.184", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherB-1445030024-network", "vif_mac": "fa:16:3e:a5:83:5d"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3b1b7f5b4f84b5abbc401eb577c85c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5f645763-9f", "ovs_interfaceid": "5f645763-9f97-4686-80ab-6df7299b1235", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:55:36 compute-0 nova_compute[250018]: 2026-01-20 14:55:36.024 250022 DEBUG nova.network.os_vif_util [None req-1b390d0a-dfcb-4ebc-a1ec-983f45a5e476 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a5:83:5d,bridge_name='br-int',has_traffic_filtering=True,id=5f645763-9f97-4686-80ab-6df7299b1235,network=Network(41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5f645763-9f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:55:36 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:55:36.024 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=40, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=39) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:55:36 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:55:36.025 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 14:55:36 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:55:36.026 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=367c1a2c-b16a-4828-ab5a-626bb50023b4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '40'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:55:36 compute-0 nova_compute[250018]: 2026-01-20 14:55:36.026 250022 DEBUG nova.virt.libvirt.driver [None req-1b390d0a-dfcb-4ebc-a1ec-983f45a5e476 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] End _get_guest_xml xml=<domain type="kvm">
Jan 20 14:55:36 compute-0 nova_compute[250018]:   <uuid>b346e1ba-9e83-4e7f-bc03-c327d3e4173b</uuid>
Jan 20 14:55:36 compute-0 nova_compute[250018]:   <name>instance-0000007b</name>
Jan 20 14:55:36 compute-0 nova_compute[250018]:   <memory>196608</memory>
Jan 20 14:55:36 compute-0 nova_compute[250018]:   <vcpu>1</vcpu>
Jan 20 14:55:36 compute-0 nova_compute[250018]:   <metadata>
Jan 20 14:55:36 compute-0 nova_compute[250018]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 14:55:36 compute-0 nova_compute[250018]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 14:55:36 compute-0 nova_compute[250018]:       <nova:name>tempest-ServerActionsTestOtherB-server-141692785</nova:name>
Jan 20 14:55:36 compute-0 nova_compute[250018]:       <nova:creationTime>2026-01-20 14:55:34</nova:creationTime>
Jan 20 14:55:36 compute-0 nova_compute[250018]:       <nova:flavor name="m1.micro">
Jan 20 14:55:36 compute-0 nova_compute[250018]:         <nova:memory>192</nova:memory>
Jan 20 14:55:36 compute-0 nova_compute[250018]:         <nova:disk>1</nova:disk>
Jan 20 14:55:36 compute-0 nova_compute[250018]:         <nova:swap>0</nova:swap>
Jan 20 14:55:36 compute-0 nova_compute[250018]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 14:55:36 compute-0 nova_compute[250018]:         <nova:vcpus>1</nova:vcpus>
Jan 20 14:55:36 compute-0 nova_compute[250018]:       </nova:flavor>
Jan 20 14:55:36 compute-0 nova_compute[250018]:       <nova:owner>
Jan 20 14:55:36 compute-0 nova_compute[250018]:         <nova:user uuid="215db37373dc4ae5a75cbd6866f471da">tempest-ServerActionsTestOtherB-1136521362-project-member</nova:user>
Jan 20 14:55:36 compute-0 nova_compute[250018]:         <nova:project uuid="b3b1b7f5b4f84b5abbc401eb577c85c0">tempest-ServerActionsTestOtherB-1136521362</nova:project>
Jan 20 14:55:36 compute-0 nova_compute[250018]:       </nova:owner>
Jan 20 14:55:36 compute-0 nova_compute[250018]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 14:55:36 compute-0 nova_compute[250018]:       <nova:ports>
Jan 20 14:55:36 compute-0 nova_compute[250018]:         <nova:port uuid="5f645763-9f97-4686-80ab-6df7299b1235">
Jan 20 14:55:36 compute-0 nova_compute[250018]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Jan 20 14:55:36 compute-0 nova_compute[250018]:         </nova:port>
Jan 20 14:55:36 compute-0 nova_compute[250018]:       </nova:ports>
Jan 20 14:55:36 compute-0 nova_compute[250018]:     </nova:instance>
Jan 20 14:55:36 compute-0 nova_compute[250018]:   </metadata>
Jan 20 14:55:36 compute-0 nova_compute[250018]:   <sysinfo type="smbios">
Jan 20 14:55:36 compute-0 nova_compute[250018]:     <system>
Jan 20 14:55:36 compute-0 nova_compute[250018]:       <entry name="manufacturer">RDO</entry>
Jan 20 14:55:36 compute-0 nova_compute[250018]:       <entry name="product">OpenStack Compute</entry>
Jan 20 14:55:36 compute-0 nova_compute[250018]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 14:55:36 compute-0 nova_compute[250018]:       <entry name="serial">b346e1ba-9e83-4e7f-bc03-c327d3e4173b</entry>
Jan 20 14:55:36 compute-0 nova_compute[250018]:       <entry name="uuid">b346e1ba-9e83-4e7f-bc03-c327d3e4173b</entry>
Jan 20 14:55:36 compute-0 nova_compute[250018]:       <entry name="family">Virtual Machine</entry>
Jan 20 14:55:36 compute-0 nova_compute[250018]:     </system>
Jan 20 14:55:36 compute-0 nova_compute[250018]:   </sysinfo>
Jan 20 14:55:36 compute-0 nova_compute[250018]:   <os>
Jan 20 14:55:36 compute-0 nova_compute[250018]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 14:55:36 compute-0 nova_compute[250018]:     <boot dev="hd"/>
Jan 20 14:55:36 compute-0 nova_compute[250018]:     <smbios mode="sysinfo"/>
Jan 20 14:55:36 compute-0 nova_compute[250018]:   </os>
Jan 20 14:55:36 compute-0 nova_compute[250018]:   <features>
Jan 20 14:55:36 compute-0 nova_compute[250018]:     <acpi/>
Jan 20 14:55:36 compute-0 nova_compute[250018]:     <apic/>
Jan 20 14:55:36 compute-0 nova_compute[250018]:     <vmcoreinfo/>
Jan 20 14:55:36 compute-0 nova_compute[250018]:   </features>
Jan 20 14:55:36 compute-0 nova_compute[250018]:   <clock offset="utc">
Jan 20 14:55:36 compute-0 nova_compute[250018]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 14:55:36 compute-0 nova_compute[250018]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 14:55:36 compute-0 nova_compute[250018]:     <timer name="hpet" present="no"/>
Jan 20 14:55:36 compute-0 nova_compute[250018]:   </clock>
Jan 20 14:55:36 compute-0 nova_compute[250018]:   <cpu mode="custom" match="exact">
Jan 20 14:55:36 compute-0 nova_compute[250018]:     <model>Nehalem</model>
Jan 20 14:55:36 compute-0 nova_compute[250018]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 14:55:36 compute-0 nova_compute[250018]:   </cpu>
Jan 20 14:55:36 compute-0 nova_compute[250018]:   <devices>
Jan 20 14:55:36 compute-0 nova_compute[250018]:     <disk type="network" device="disk">
Jan 20 14:55:36 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 14:55:36 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/b346e1ba-9e83-4e7f-bc03-c327d3e4173b_disk">
Jan 20 14:55:36 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:55:36 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:55:36 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:55:36 compute-0 nova_compute[250018]:       </source>
Jan 20 14:55:36 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 14:55:36 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:55:36 compute-0 nova_compute[250018]:       </auth>
Jan 20 14:55:36 compute-0 nova_compute[250018]:       <target dev="vda" bus="virtio"/>
Jan 20 14:55:36 compute-0 nova_compute[250018]:     </disk>
Jan 20 14:55:36 compute-0 nova_compute[250018]:     <disk type="network" device="cdrom">
Jan 20 14:55:36 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 14:55:36 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/b346e1ba-9e83-4e7f-bc03-c327d3e4173b_disk.config">
Jan 20 14:55:36 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:55:36 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:55:36 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:55:36 compute-0 nova_compute[250018]:       </source>
Jan 20 14:55:36 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 14:55:36 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:55:36 compute-0 nova_compute[250018]:       </auth>
Jan 20 14:55:36 compute-0 nova_compute[250018]:       <target dev="sda" bus="sata"/>
Jan 20 14:55:36 compute-0 nova_compute[250018]:     </disk>
Jan 20 14:55:36 compute-0 nova_compute[250018]:     <disk type="network" device="disk">
Jan 20 14:55:36 compute-0 nova_compute[250018]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 20 14:55:36 compute-0 nova_compute[250018]:       <source protocol="rbd" name="volumes/volume-f3a427b1-0e50-45b1-a975-3d7aabd0195a">
Jan 20 14:55:36 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:55:36 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:55:36 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:55:36 compute-0 nova_compute[250018]:       </source>
Jan 20 14:55:36 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 14:55:36 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:55:36 compute-0 nova_compute[250018]:       </auth>
Jan 20 14:55:36 compute-0 nova_compute[250018]:       <target dev="vdb" bus="virtio"/>
Jan 20 14:55:36 compute-0 nova_compute[250018]:       <serial>f3a427b1-0e50-45b1-a975-3d7aabd0195a</serial>
Jan 20 14:55:36 compute-0 nova_compute[250018]:     </disk>
Jan 20 14:55:36 compute-0 nova_compute[250018]:     <interface type="ethernet">
Jan 20 14:55:36 compute-0 nova_compute[250018]:       <mac address="fa:16:3e:a5:83:5d"/>
Jan 20 14:55:36 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 14:55:36 compute-0 nova_compute[250018]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 14:55:36 compute-0 nova_compute[250018]:       <mtu size="1442"/>
Jan 20 14:55:36 compute-0 nova_compute[250018]:       <target dev="tap5f645763-9f"/>
Jan 20 14:55:36 compute-0 nova_compute[250018]:     </interface>
Jan 20 14:55:36 compute-0 nova_compute[250018]:     <serial type="pty">
Jan 20 14:55:36 compute-0 nova_compute[250018]:       <log file="/var/lib/nova/instances/b346e1ba-9e83-4e7f-bc03-c327d3e4173b/console.log" append="off"/>
Jan 20 14:55:36 compute-0 nova_compute[250018]:     </serial>
Jan 20 14:55:36 compute-0 nova_compute[250018]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 14:55:36 compute-0 nova_compute[250018]:     <video>
Jan 20 14:55:36 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 14:55:36 compute-0 nova_compute[250018]:     </video>
Jan 20 14:55:36 compute-0 nova_compute[250018]:     <input type="tablet" bus="usb"/>
Jan 20 14:55:36 compute-0 nova_compute[250018]:     <rng model="virtio">
Jan 20 14:55:36 compute-0 nova_compute[250018]:       <backend model="random">/dev/urandom</backend>
Jan 20 14:55:36 compute-0 nova_compute[250018]:     </rng>
Jan 20 14:55:36 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root"/>
Jan 20 14:55:36 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:55:36 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:55:36 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:55:36 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:55:36 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:55:36 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:55:36 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:55:36 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:55:36 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:55:36 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:55:36 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:55:36 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:55:36 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:55:36 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:55:36 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:55:36 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:55:36 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:55:36 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:55:36 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:55:36 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:55:36 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:55:36 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:55:36 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:55:36 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:55:36 compute-0 nova_compute[250018]:     <controller type="usb" index="0"/>
Jan 20 14:55:36 compute-0 nova_compute[250018]:     <memballoon model="virtio">
Jan 20 14:55:36 compute-0 nova_compute[250018]:       <stats period="10"/>
Jan 20 14:55:36 compute-0 nova_compute[250018]:     </memballoon>
Jan 20 14:55:36 compute-0 nova_compute[250018]:   </devices>
Jan 20 14:55:36 compute-0 nova_compute[250018]: </domain>
Jan 20 14:55:36 compute-0 nova_compute[250018]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 14:55:36 compute-0 nova_compute[250018]: 2026-01-20 14:55:36.028 250022 DEBUG nova.virt.libvirt.vif [None req-1b390d0a-dfcb-4ebc-a1ec-983f45a5e476 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:54:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-141692785',display_name='tempest-ServerActionsTestOtherB-server-141692785',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-141692785',id=123,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBO99uJ9+FwgjxRb/9u+f3Mj9/VKSDM+OKd66Ygsg8lEO+7bGpDEQrC5BIaSV+Na5YF+3DqUwLNmAYSN9IkTSGbRPw5y8813A+KsiNHebrpnZ7oReyT+5/zNQYafCHVAfGA==',key_name='tempest-keypair-302882914',keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:54:37Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='b3b1b7f5b4f84b5abbc401eb577c85c0',ramdisk_id='',reservation_id='r-c8v02o4r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerActionsTestOtherB-1136521362',owner_user_name='tempest-ServerActionsTestOtherB-1136521362-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:55:29Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='215db37373dc4ae5a75cbd6866f471da',uuid=b346e1ba-9e83-4e7f-bc03-c327d3e4173b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5f645763-9f97-4686-80ab-6df7299b1235", "address": "fa:16:3e:a5:83:5d", "network": {"id": "41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce", "bridge": "br-int", "label": 
"tempest-ServerActionsTestOtherB-1445030024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.184", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherB-1445030024-network", "vif_mac": "fa:16:3e:a5:83:5d"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3b1b7f5b4f84b5abbc401eb577c85c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5f645763-9f", "ovs_interfaceid": "5f645763-9f97-4686-80ab-6df7299b1235", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 14:55:36 compute-0 nova_compute[250018]: 2026-01-20 14:55:36.029 250022 DEBUG nova.network.os_vif_util [None req-1b390d0a-dfcb-4ebc-a1ec-983f45a5e476 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Converting VIF {"id": "5f645763-9f97-4686-80ab-6df7299b1235", "address": "fa:16:3e:a5:83:5d", "network": {"id": "41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1445030024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.184", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherB-1445030024-network", "vif_mac": "fa:16:3e:a5:83:5d"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3b1b7f5b4f84b5abbc401eb577c85c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5f645763-9f", "ovs_interfaceid": "5f645763-9f97-4686-80ab-6df7299b1235", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:55:36 compute-0 nova_compute[250018]: 2026-01-20 14:55:36.030 250022 DEBUG nova.network.os_vif_util [None req-1b390d0a-dfcb-4ebc-a1ec-983f45a5e476 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a5:83:5d,bridge_name='br-int',has_traffic_filtering=True,id=5f645763-9f97-4686-80ab-6df7299b1235,network=Network(41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5f645763-9f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:55:36 compute-0 nova_compute[250018]: 2026-01-20 14:55:36.030 250022 DEBUG os_vif [None req-1b390d0a-dfcb-4ebc-a1ec-983f45a5e476 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:a5:83:5d,bridge_name='br-int',has_traffic_filtering=True,id=5f645763-9f97-4686-80ab-6df7299b1235,network=Network(41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5f645763-9f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 14:55:36 compute-0 nova_compute[250018]: 2026-01-20 14:55:36.031 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:55:36 compute-0 nova_compute[250018]: 2026-01-20 14:55:36.032 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:55:36 compute-0 nova_compute[250018]: 2026-01-20 14:55:36.032 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:55:36 compute-0 nova_compute[250018]: 2026-01-20 14:55:36.033 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:55:36 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2091: 321 pgs: 321 active+clean; 365 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 5.9 MiB/s wr, 175 op/s
Jan 20 14:55:36 compute-0 nova_compute[250018]: 2026-01-20 14:55:36.037 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:55:36 compute-0 nova_compute[250018]: 2026-01-20 14:55:36.038 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5f645763-9f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:55:36 compute-0 nova_compute[250018]: 2026-01-20 14:55:36.038 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5f645763-9f, col_values=(('external_ids', {'iface-id': '5f645763-9f97-4686-80ab-6df7299b1235', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a5:83:5d', 'vm-uuid': 'b346e1ba-9e83-4e7f-bc03-c327d3e4173b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:55:36 compute-0 NetworkManager[48960]: <info>  [1768920936.0421] manager: (tap5f645763-9f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/226)
Jan 20 14:55:36 compute-0 nova_compute[250018]: 2026-01-20 14:55:36.041 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:55:36 compute-0 nova_compute[250018]: 2026-01-20 14:55:36.043 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:55:36 compute-0 nova_compute[250018]: 2026-01-20 14:55:36.050 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:55:36 compute-0 nova_compute[250018]: 2026-01-20 14:55:36.052 250022 INFO os_vif [None req-1b390d0a-dfcb-4ebc-a1ec-983f45a5e476 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:a5:83:5d,bridge_name='br-int',has_traffic_filtering=True,id=5f645763-9f97-4686-80ab-6df7299b1235,network=Network(41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5f645763-9f')
Jan 20 14:55:36 compute-0 nova_compute[250018]: 2026-01-20 14:55:36.069 250022 DEBUG nova.virt.libvirt.driver [None req-be5730d0-d4bb-4f57-9a2e-bb64e58fae1a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:55:36 compute-0 nova_compute[250018]: 2026-01-20 14:55:36.070 250022 DEBUG nova.virt.libvirt.driver [None req-be5730d0-d4bb-4f57-9a2e-bb64e58fae1a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:55:36 compute-0 nova_compute[250018]: 2026-01-20 14:55:36.070 250022 DEBUG nova.virt.libvirt.driver [None req-be5730d0-d4bb-4f57-9a2e-bb64e58fae1a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] No VIF found with MAC fa:16:3e:7a:3f:c2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 14:55:36 compute-0 nova_compute[250018]: 2026-01-20 14:55:36.070 250022 INFO nova.virt.libvirt.driver [None req-be5730d0-d4bb-4f57-9a2e-bb64e58fae1a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 11c82470-ab02-4424-908b-705f1f65e062] Using config drive
Jan 20 14:55:36 compute-0 nova_compute[250018]: 2026-01-20 14:55:36.104 250022 DEBUG nova.storage.rbd_utils [None req-be5730d0-d4bb-4f57-9a2e-bb64e58fae1a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] rbd image 11c82470-ab02-4424-908b-705f1f65e062_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:55:36 compute-0 podman[325429]: 2026-01-20 14:55:36.122655532 +0000 UTC m=+0.065194476 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent)
Jan 20 14:55:36 compute-0 podman[325428]: 2026-01-20 14:55:36.151239611 +0000 UTC m=+0.093779405 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, 
managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 20 14:55:36 compute-0 nova_compute[250018]: 2026-01-20 14:55:36.189 250022 DEBUG nova.virt.libvirt.driver [None req-1b390d0a-dfcb-4ebc-a1ec-983f45a5e476 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:55:36 compute-0 nova_compute[250018]: 2026-01-20 14:55:36.189 250022 DEBUG nova.virt.libvirt.driver [None req-1b390d0a-dfcb-4ebc-a1ec-983f45a5e476 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:55:36 compute-0 nova_compute[250018]: 2026-01-20 14:55:36.190 250022 DEBUG nova.virt.libvirt.driver [None req-1b390d0a-dfcb-4ebc-a1ec-983f45a5e476 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:55:36 compute-0 nova_compute[250018]: 2026-01-20 14:55:36.190 250022 DEBUG nova.virt.libvirt.driver [None req-1b390d0a-dfcb-4ebc-a1ec-983f45a5e476 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] No VIF found with MAC fa:16:3e:a5:83:5d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 14:55:36 compute-0 nova_compute[250018]: 2026-01-20 14:55:36.190 250022 INFO nova.virt.libvirt.driver [None req-1b390d0a-dfcb-4ebc-a1ec-983f45a5e476 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] Using config drive
Jan 20 14:55:36 compute-0 kernel: tap5f645763-9f: entered promiscuous mode
Jan 20 14:55:36 compute-0 NetworkManager[48960]: <info>  [1768920936.2900] manager: (tap5f645763-9f): new Tun device (/org/freedesktop/NetworkManager/Devices/227)
Jan 20 14:55:36 compute-0 nova_compute[250018]: 2026-01-20 14:55:36.292 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:55:36 compute-0 ovn_controller[148666]: 2026-01-20T14:55:36Z|00448|binding|INFO|Claiming lport 5f645763-9f97-4686-80ab-6df7299b1235 for this chassis.
Jan 20 14:55:36 compute-0 ovn_controller[148666]: 2026-01-20T14:55:36Z|00449|binding|INFO|5f645763-9f97-4686-80ab-6df7299b1235: Claiming fa:16:3e:a5:83:5d 10.100.0.14
Jan 20 14:55:36 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:55:36.298 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a5:83:5d 10.100.0.14'], port_security=['fa:16:3e:a5:83:5d 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'b346e1ba-9e83-4e7f-bc03-c327d3e4173b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b3b1b7f5b4f84b5abbc401eb577c85c0', 'neutron:revision_number': '5', 'neutron:security_group_ids': '8b11f3fb-2601-4eca-a1b6-838549d7750c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.184'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3273589e-5585-406c-9611-87f758b0e521, chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=5f645763-9f97-4686-80ab-6df7299b1235) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:55:36 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:55:36.299 160071 INFO neutron.agent.ovn.metadata.agent [-] Port 5f645763-9f97-4686-80ab-6df7299b1235 in datapath 41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce bound to our chassis
Jan 20 14:55:36 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:55:36.300 160071 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce
Jan 20 14:55:36 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:55:36.309 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[3b57d94b-0d27-4aad-9208-91c28513cb46]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:55:36 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:55:36.310 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap41a1a3fe-f1 in ovnmeta-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 14:55:36 compute-0 ovn_controller[148666]: 2026-01-20T14:55:36Z|00450|binding|INFO|Setting lport 5f645763-9f97-4686-80ab-6df7299b1235 ovn-installed in OVS
Jan 20 14:55:36 compute-0 ovn_controller[148666]: 2026-01-20T14:55:36Z|00451|binding|INFO|Setting lport 5f645763-9f97-4686-80ab-6df7299b1235 up in Southbound
Jan 20 14:55:36 compute-0 nova_compute[250018]: 2026-01-20 14:55:36.311 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:55:36 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:55:36.312 257604 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap41a1a3fe-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 14:55:36 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:55:36.312 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[e7ce5d43-7f51-4d51-b9b5-8d1d1f643c88]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:55:36 compute-0 nova_compute[250018]: 2026-01-20 14:55:36.313 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:55:36 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:55:36.313 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[9ba060a8-d1eb-41a0-bc19-96797e1af3f2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:55:36 compute-0 systemd-machined[216401]: New machine qemu-59-instance-0000007b.
Jan 20 14:55:36 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:55:36.328 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[a52ec06b-28ea-4043-8726-839446f9b017]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:55:36 compute-0 systemd[1]: Started Virtual Machine qemu-59-instance-0000007b.
Jan 20 14:55:36 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:55:36.348 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[533edf23-48f3-4291-a7af-aa756f33adc4]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:55:36 compute-0 systemd-udevd[325528]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:55:36 compute-0 NetworkManager[48960]: <info>  [1768920936.3721] device (tap5f645763-9f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 14:55:36 compute-0 NetworkManager[48960]: <info>  [1768920936.3730] device (tap5f645763-9f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 14:55:36 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:55:36.374 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[08b31ae6-11b6-4ebd-a887-17cfad6893ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:55:36 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:55:36.381 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[e3aad749-6fcd-46ef-8692-3c8c9aedcfa8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:55:36 compute-0 NetworkManager[48960]: <info>  [1768920936.3835] manager: (tap41a1a3fe-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/228)
Jan 20 14:55:36 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:55:36.410 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[a95a97c8-3feb-4ffa-8bdc-fc2714911518]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:55:36 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:55:36.413 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[b2037632-93d0-415b-8329-7c9e531cd23e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:55:36 compute-0 NetworkManager[48960]: <info>  [1768920936.4310] device (tap41a1a3fe-f0): carrier: link connected
Jan 20 14:55:36 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:55:36.436 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[996b1dcd-c9c6-4d6a-9bf8-669589b3ed39]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:55:36 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:55:36.453 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[96a17ca9-d1dc-4b0d-8523-15f15518924a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap41a1a3fe-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3c:1f:b5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 149], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 687715, 'reachable_time': 40902, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 325560, 'error': None, 'target': 'ovnmeta-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:55:36 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:55:36.469 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[b1e57d2a-b084-40d7-904b-3ff16df3a7d5]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe3c:1fb5'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 687715, 'tstamp': 687715}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 325561, 'error': None, 'target': 'ovnmeta-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:55:36 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:55:36.488 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[ba55c866-edaa-4c89-84c0-689d2d02879f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap41a1a3fe-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3c:1f:b5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 149], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 687715, 'reachable_time': 40902, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 325562, 'error': None, 'target': 'ovnmeta-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:55:36 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:55:36.522 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[98be23e3-bcde-4682-82d0-d75b4afa8185]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:55:36 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:55:36.582 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[22b5a0d3-6f00-4d42-bc23-86f37f7a2132]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:55:36 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:55:36.583 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap41a1a3fe-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:55:36 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:55:36.583 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:55:36 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:55:36.584 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap41a1a3fe-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:55:36 compute-0 nova_compute[250018]: 2026-01-20 14:55:36.619 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:55:36 compute-0 NetworkManager[48960]: <info>  [1768920936.6212] manager: (tap41a1a3fe-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/229)
Jan 20 14:55:36 compute-0 kernel: tap41a1a3fe-f0: entered promiscuous mode
Jan 20 14:55:36 compute-0 nova_compute[250018]: 2026-01-20 14:55:36.627 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:55:36 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:55:36.628 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap41a1a3fe-f0, col_values=(('external_ids', {'iface-id': '3fa2df7b-42b2-4a3b-a33b-ab37b5d6aef3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:55:36 compute-0 nova_compute[250018]: 2026-01-20 14:55:36.629 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:55:36 compute-0 ovn_controller[148666]: 2026-01-20T14:55:36Z|00452|binding|INFO|Releasing lport 3fa2df7b-42b2-4a3b-a33b-ab37b5d6aef3 from this chassis (sb_readonly=0)
Jan 20 14:55:36 compute-0 nova_compute[250018]: 2026-01-20 14:55:36.653 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:55:36 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:55:36.654 160071 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 14:55:36 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:55:36.655 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[acc4e8c2-903a-498c-ae17-197a63089fe0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:55:36 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:55:36.656 160071 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 14:55:36 compute-0 ovn_metadata_agent[160049]: global
Jan 20 14:55:36 compute-0 ovn_metadata_agent[160049]:     log         /dev/log local0 debug
Jan 20 14:55:36 compute-0 ovn_metadata_agent[160049]:     log-tag     haproxy-metadata-proxy-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce
Jan 20 14:55:36 compute-0 ovn_metadata_agent[160049]:     user        root
Jan 20 14:55:36 compute-0 ovn_metadata_agent[160049]:     group       root
Jan 20 14:55:36 compute-0 ovn_metadata_agent[160049]:     maxconn     1024
Jan 20 14:55:36 compute-0 ovn_metadata_agent[160049]:     pidfile     /var/lib/neutron/external/pids/41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce.pid.haproxy
Jan 20 14:55:36 compute-0 ovn_metadata_agent[160049]:     daemon
Jan 20 14:55:36 compute-0 ovn_metadata_agent[160049]: 
Jan 20 14:55:36 compute-0 ovn_metadata_agent[160049]: defaults
Jan 20 14:55:36 compute-0 ovn_metadata_agent[160049]:     log global
Jan 20 14:55:36 compute-0 ovn_metadata_agent[160049]:     mode http
Jan 20 14:55:36 compute-0 ovn_metadata_agent[160049]:     option httplog
Jan 20 14:55:36 compute-0 ovn_metadata_agent[160049]:     option dontlognull
Jan 20 14:55:36 compute-0 ovn_metadata_agent[160049]:     option http-server-close
Jan 20 14:55:36 compute-0 ovn_metadata_agent[160049]:     option forwardfor
Jan 20 14:55:36 compute-0 ovn_metadata_agent[160049]:     retries                 3
Jan 20 14:55:36 compute-0 ovn_metadata_agent[160049]:     timeout http-request    30s
Jan 20 14:55:36 compute-0 ovn_metadata_agent[160049]:     timeout connect         30s
Jan 20 14:55:36 compute-0 ovn_metadata_agent[160049]:     timeout client          32s
Jan 20 14:55:36 compute-0 ovn_metadata_agent[160049]:     timeout server          32s
Jan 20 14:55:36 compute-0 ovn_metadata_agent[160049]:     timeout http-keep-alive 30s
Jan 20 14:55:36 compute-0 ovn_metadata_agent[160049]: 
Jan 20 14:55:36 compute-0 ovn_metadata_agent[160049]: 
Jan 20 14:55:36 compute-0 ovn_metadata_agent[160049]: listen listener
Jan 20 14:55:36 compute-0 ovn_metadata_agent[160049]:     bind 169.254.169.254:80
Jan 20 14:55:36 compute-0 ovn_metadata_agent[160049]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 14:55:36 compute-0 ovn_metadata_agent[160049]:     http-request add-header X-OVN-Network-ID 41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce
Jan 20 14:55:36 compute-0 ovn_metadata_agent[160049]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 14:55:36 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:55:36.658 160071 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce', 'env', 'PROCESS_TAG=haproxy-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 14:55:36 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/2064067837' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:55:36 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3426357909' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:55:36 compute-0 ceph-mon[74360]: pgmap v2091: 321 pgs: 321 active+clean; 365 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 5.9 MiB/s wr, 175 op/s
Jan 20 14:55:36 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:55:36 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:55:36 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:55:36.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:55:37 compute-0 podman[325658]: 2026-01-20 14:55:37.002225803 +0000 UTC m=+0.046686067 container create 8c15e9323bf319fdf2d396e5ff9125f543acf8d53b207710b27cfb3ff30b337c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3)
Jan 20 14:55:37 compute-0 nova_compute[250018]: 2026-01-20 14:55:37.014 250022 DEBUG nova.virt.libvirt.host [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Removed pending event for b346e1ba-9e83-4e7f-bc03-c327d3e4173b due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Jan 20 14:55:37 compute-0 nova_compute[250018]: 2026-01-20 14:55:37.014 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768920937.013714, b346e1ba-9e83-4e7f-bc03-c327d3e4173b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:55:37 compute-0 nova_compute[250018]: 2026-01-20 14:55:37.015 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] VM Resumed (Lifecycle Event)
Jan 20 14:55:37 compute-0 nova_compute[250018]: 2026-01-20 14:55:37.017 250022 DEBUG nova.compute.manager [None req-1b390d0a-dfcb-4ebc-a1ec-983f45a5e476 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 14:55:37 compute-0 nova_compute[250018]: 2026-01-20 14:55:37.020 250022 INFO nova.virt.libvirt.driver [-] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] Instance running successfully.
Jan 20 14:55:37 compute-0 virtqemud[249565]: argument unsupported: QEMU guest agent is not configured
Jan 20 14:55:37 compute-0 nova_compute[250018]: 2026-01-20 14:55:37.021 250022 DEBUG nova.virt.libvirt.guest [None req-1b390d0a-dfcb-4ebc-a1ec-983f45a5e476 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Jan 20 14:55:37 compute-0 nova_compute[250018]: 2026-01-20 14:55:37.022 250022 DEBUG nova.virt.libvirt.driver [None req-1b390d0a-dfcb-4ebc-a1ec-983f45a5e476 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] finish_migration finished successfully. finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11793
Jan 20 14:55:37 compute-0 systemd[1]: Started libpod-conmon-8c15e9323bf319fdf2d396e5ff9125f543acf8d53b207710b27cfb3ff30b337c.scope.
Jan 20 14:55:37 compute-0 nova_compute[250018]: 2026-01-20 14:55:37.068 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:55:37 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:55:37 compute-0 nova_compute[250018]: 2026-01-20 14:55:37.072 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:55:37 compute-0 podman[325658]: 2026-01-20 14:55:36.977994101 +0000 UTC m=+0.022454385 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 14:55:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdd42ff22a69d44a672e0252b683372722e5f4946c002ad42f11c0e5acbfc985/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 14:55:37 compute-0 nova_compute[250018]: 2026-01-20 14:55:37.097 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] During sync_power_state the instance has a pending task (resize_finish). Skip.
Jan 20 14:55:37 compute-0 nova_compute[250018]: 2026-01-20 14:55:37.097 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768920937.0147376, b346e1ba-9e83-4e7f-bc03-c327d3e4173b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:55:37 compute-0 nova_compute[250018]: 2026-01-20 14:55:37.097 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] VM Started (Lifecycle Event)
Jan 20 14:55:37 compute-0 nova_compute[250018]: 2026-01-20 14:55:37.114 250022 INFO nova.virt.libvirt.driver [None req-be5730d0-d4bb-4f57-9a2e-bb64e58fae1a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 11c82470-ab02-4424-908b-705f1f65e062] Creating config drive at /var/lib/nova/instances/11c82470-ab02-4424-908b-705f1f65e062/disk.config
Jan 20 14:55:37 compute-0 nova_compute[250018]: 2026-01-20 14:55:37.121 250022 DEBUG oslo_concurrency.processutils [None req-be5730d0-d4bb-4f57-9a2e-bb64e58fae1a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/11c82470-ab02-4424-908b-705f1f65e062/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpasa5gqcg execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:55:37 compute-0 nova_compute[250018]: 2026-01-20 14:55:37.154 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:55:37 compute-0 nova_compute[250018]: 2026-01-20 14:55:37.157 250022 DEBUG nova.compute.manager [req-d36acf41-f8bc-43fa-a9bd-9234f1addd5e req-15c3dcd2-4db8-4725-ac92-a74b2faa3168 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] Received event network-vif-plugged-5f645763-9f97-4686-80ab-6df7299b1235 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:55:37 compute-0 nova_compute[250018]: 2026-01-20 14:55:37.157 250022 DEBUG oslo_concurrency.lockutils [req-d36acf41-f8bc-43fa-a9bd-9234f1addd5e req-15c3dcd2-4db8-4725-ac92-a74b2faa3168 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "b346e1ba-9e83-4e7f-bc03-c327d3e4173b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:55:37 compute-0 nova_compute[250018]: 2026-01-20 14:55:37.158 250022 DEBUG oslo_concurrency.lockutils [req-d36acf41-f8bc-43fa-a9bd-9234f1addd5e req-15c3dcd2-4db8-4725-ac92-a74b2faa3168 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "b346e1ba-9e83-4e7f-bc03-c327d3e4173b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:55:37 compute-0 nova_compute[250018]: 2026-01-20 14:55:37.158 250022 DEBUG oslo_concurrency.lockutils [req-d36acf41-f8bc-43fa-a9bd-9234f1addd5e req-15c3dcd2-4db8-4725-ac92-a74b2faa3168 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "b346e1ba-9e83-4e7f-bc03-c327d3e4173b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:55:37 compute-0 nova_compute[250018]: 2026-01-20 14:55:37.158 250022 DEBUG nova.compute.manager [req-d36acf41-f8bc-43fa-a9bd-9234f1addd5e req-15c3dcd2-4db8-4725-ac92-a74b2faa3168 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] No waiting events found dispatching network-vif-plugged-5f645763-9f97-4686-80ab-6df7299b1235 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:55:37 compute-0 nova_compute[250018]: 2026-01-20 14:55:37.159 250022 WARNING nova.compute.manager [req-d36acf41-f8bc-43fa-a9bd-9234f1addd5e req-15c3dcd2-4db8-4725-ac92-a74b2faa3168 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] Received unexpected event network-vif-plugged-5f645763-9f97-4686-80ab-6df7299b1235 for instance with vm_state active and task_state resize_finish.
Jan 20 14:55:37 compute-0 nova_compute[250018]: 2026-01-20 14:55:37.162 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:55:37 compute-0 nova_compute[250018]: 2026-01-20 14:55:37.207 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] During sync_power_state the instance has a pending task (resize_finish). Skip.
Jan 20 14:55:37 compute-0 podman[325658]: 2026-01-20 14:55:37.216785077 +0000 UTC m=+0.261245351 container init 8c15e9323bf319fdf2d396e5ff9125f543acf8d53b207710b27cfb3ff30b337c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 20 14:55:37 compute-0 podman[325658]: 2026-01-20 14:55:37.222419689 +0000 UTC m=+0.266879963 container start 8c15e9323bf319fdf2d396e5ff9125f543acf8d53b207710b27cfb3ff30b337c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 20 14:55:37 compute-0 neutron-haproxy-ovnmeta-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce[325674]: [NOTICE]   (325681) : New worker (325683) forked
Jan 20 14:55:37 compute-0 neutron-haproxy-ovnmeta-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce[325674]: [NOTICE]   (325681) : Loading success.
Jan 20 14:55:37 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:55:37 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:55:37 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:55:37.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:55:37 compute-0 nova_compute[250018]: 2026-01-20 14:55:37.259 250022 DEBUG oslo_concurrency.processutils [None req-be5730d0-d4bb-4f57-9a2e-bb64e58fae1a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/11c82470-ab02-4424-908b-705f1f65e062/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpasa5gqcg" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:55:37 compute-0 nova_compute[250018]: 2026-01-20 14:55:37.292 250022 DEBUG nova.storage.rbd_utils [None req-be5730d0-d4bb-4f57-9a2e-bb64e58fae1a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] rbd image 11c82470-ab02-4424-908b-705f1f65e062_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:55:37 compute-0 nova_compute[250018]: 2026-01-20 14:55:37.296 250022 DEBUG oslo_concurrency.processutils [None req-be5730d0-d4bb-4f57-9a2e-bb64e58fae1a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/11c82470-ab02-4424-908b-705f1f65e062/disk.config 11c82470-ab02-4424-908b-705f1f65e062_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:55:37 compute-0 nova_compute[250018]: 2026-01-20 14:55:37.685 250022 DEBUG nova.network.neutron [req-166a64bc-7356-43d7-869b-7b5fcc5eeda9 req-ef2f8b41-032e-4064-a6a2-375b7257c4a2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 11c82470-ab02-4424-908b-705f1f65e062] Updated VIF entry in instance network info cache for port 6532e62a-b883-47ff-9799-820144870294. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 14:55:37 compute-0 nova_compute[250018]: 2026-01-20 14:55:37.687 250022 DEBUG nova.network.neutron [req-166a64bc-7356-43d7-869b-7b5fcc5eeda9 req-ef2f8b41-032e-4064-a6a2-375b7257c4a2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 11c82470-ab02-4424-908b-705f1f65e062] Updating instance_info_cache with network_info: [{"id": "6532e62a-b883-47ff-9799-820144870294", "address": "fa:16:3e:7a:3f:c2", "network": {"id": "f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c", "bridge": "br-int", "label": "tempest-ServersTestJSON-1745321011-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca6cd0afe0ab41e3ab36d21a4129f734", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6532e62a-b8", "ovs_interfaceid": "6532e62a-b883-47ff-9799-820144870294", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:55:37 compute-0 nova_compute[250018]: 2026-01-20 14:55:37.701 250022 DEBUG oslo_concurrency.lockutils [req-166a64bc-7356-43d7-869b-7b5fcc5eeda9 req-ef2f8b41-032e-4064-a6a2-375b7257c4a2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-11c82470-ab02-4424-908b-705f1f65e062" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:55:37 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/2887191558' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:55:37 compute-0 nova_compute[250018]: 2026-01-20 14:55:37.904 250022 DEBUG oslo_concurrency.processutils [None req-be5730d0-d4bb-4f57-9a2e-bb64e58fae1a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/11c82470-ab02-4424-908b-705f1f65e062/disk.config 11c82470-ab02-4424-908b-705f1f65e062_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.608s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:55:37 compute-0 nova_compute[250018]: 2026-01-20 14:55:37.906 250022 INFO nova.virt.libvirt.driver [None req-be5730d0-d4bb-4f57-9a2e-bb64e58fae1a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 11c82470-ab02-4424-908b-705f1f65e062] Deleting local config drive /var/lib/nova/instances/11c82470-ab02-4424-908b-705f1f65e062/disk.config because it was imported into RBD.
Jan 20 14:55:37 compute-0 NetworkManager[48960]: <info>  [1768920937.9575] manager: (tap6532e62a-b8): new Tun device (/org/freedesktop/NetworkManager/Devices/230)
Jan 20 14:55:37 compute-0 systemd-udevd[325544]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:55:37 compute-0 kernel: tap6532e62a-b8: entered promiscuous mode
Jan 20 14:55:37 compute-0 ovn_controller[148666]: 2026-01-20T14:55:37Z|00453|binding|INFO|Claiming lport 6532e62a-b883-47ff-9799-820144870294 for this chassis.
Jan 20 14:55:37 compute-0 ovn_controller[148666]: 2026-01-20T14:55:37Z|00454|binding|INFO|6532e62a-b883-47ff-9799-820144870294: Claiming fa:16:3e:7a:3f:c2 10.100.0.4
Jan 20 14:55:37 compute-0 nova_compute[250018]: 2026-01-20 14:55:37.963 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:55:37 compute-0 NetworkManager[48960]: <info>  [1768920937.9728] device (tap6532e62a-b8): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 14:55:37 compute-0 NetworkManager[48960]: <info>  [1768920937.9737] device (tap6532e62a-b8): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 14:55:37 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:55:37.973 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7a:3f:c2 10.100.0.4'], port_security=['fa:16:3e:7a:3f:c2 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '11c82470-ab02-4424-908b-705f1f65e062', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ca6cd0afe0ab41e3ab36d21a4129f734', 'neutron:revision_number': '2', 'neutron:security_group_ids': '819ea4ae-b994-44d1-9da3-8b0ca609fb2a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ee620e3e-ef7e-4826-b394-b8a89442b353, chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=6532e62a-b883-47ff-9799-820144870294) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:55:37 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:55:37.974 160071 INFO neutron.agent.ovn.metadata.agent [-] Port 6532e62a-b883-47ff-9799-820144870294 in datapath f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c bound to our chassis
Jan 20 14:55:37 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:55:37.976 160071 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c
Jan 20 14:55:37 compute-0 ovn_controller[148666]: 2026-01-20T14:55:37Z|00455|binding|INFO|Setting lport 6532e62a-b883-47ff-9799-820144870294 ovn-installed in OVS
Jan 20 14:55:37 compute-0 ovn_controller[148666]: 2026-01-20T14:55:37Z|00456|binding|INFO|Setting lport 6532e62a-b883-47ff-9799-820144870294 up in Southbound
Jan 20 14:55:37 compute-0 nova_compute[250018]: 2026-01-20 14:55:37.984 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:55:37 compute-0 nova_compute[250018]: 2026-01-20 14:55:37.986 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:55:37 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:55:37.990 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[cf0e6d37-2221-405f-b708-1f5d1917b12c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:55:37 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:55:37.990 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf4c8474b-01 in ovnmeta-f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 14:55:37 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:55:37.992 257604 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf4c8474b-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 14:55:37 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:55:37.992 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[a52f3af1-851b-46e3-a95f-977f135b35e4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:55:37 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:55:37.993 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[a4b2598c-558d-41c1-99bc-b6f040e931e9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:55:37 compute-0 systemd-machined[216401]: New machine qemu-60-instance-0000007f.
Jan 20 14:55:38 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:55:38.010 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[7228ee4a-9c8e-47e9-9594-f800087da894]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:55:38 compute-0 systemd[1]: Started Virtual Machine qemu-60-instance-0000007f.
Jan 20 14:55:38 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:55:38.024 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[eec16470-dddc-4c18-9bbc-12133cdb31d5]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:55:38 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2092: 321 pgs: 321 active+clean; 355 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 973 KiB/s rd, 3.8 MiB/s wr, 187 op/s
Jan 20 14:55:38 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:55:38.053 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[813fcbee-acdb-4774-955b-70875ed158e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:55:38 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:55:38.061 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[026fa1d3-26ee-48cb-9c1c-c1772d2e9c47]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:55:38 compute-0 NetworkManager[48960]: <info>  [1768920938.0620] manager: (tapf4c8474b-00): new Veth device (/org/freedesktop/NetworkManager/Devices/231)
Jan 20 14:55:38 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:55:38.092 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[81a97bdf-c1bb-44b1-8015-47371198bc9a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:55:38 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:55:38.095 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[05a3f7c6-f479-449b-b79f-42d48a986da8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:55:38 compute-0 NetworkManager[48960]: <info>  [1768920938.1181] device (tapf4c8474b-00): carrier: link connected
Jan 20 14:55:38 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:55:38.124 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[75c4b858-92c5-4843-9649-e38d0b0c6c72]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:55:38 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:55:38.140 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[9e58dff9-02bc-41d1-877d-b7b5ddc18c87]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf4c8474b-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:14:a2:5f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 151], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 687884, 'reachable_time': 30124, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 325763, 'error': None, 'target': 'ovnmeta-f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:55:38 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:55:38.154 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[396ea72e-9084-45c9-8553-92dda9182ad8]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe14:a25f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 687884, 'tstamp': 687884}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 325764, 'error': None, 'target': 'ovnmeta-f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:55:38 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:55:38.169 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[a390edd7-2f37-4d4a-9cf1-4781db34b0c2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf4c8474b-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:14:a2:5f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 151], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 687884, 'reachable_time': 30124, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 325765, 'error': None, 'target': 'ovnmeta-f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:55:38 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:55:38.200 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[bb7ca0e6-fbe0-4424-9d6f-8854567e9153]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:55:38 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:55:38.257 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[94d24e10-ca66-4b00-812f-1e72562dec10]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:55:38 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:55:38.259 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf4c8474b-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:55:38 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:55:38.259 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:55:38 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:55:38.259 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf4c8474b-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:55:38 compute-0 nova_compute[250018]: 2026-01-20 14:55:38.261 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:55:38 compute-0 NetworkManager[48960]: <info>  [1768920938.2630] manager: (tapf4c8474b-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/232)
Jan 20 14:55:38 compute-0 kernel: tapf4c8474b-00: entered promiscuous mode
Jan 20 14:55:38 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:55:38.266 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf4c8474b-00, col_values=(('external_ids', {'iface-id': '8c6fd3ab-70a8-4e63-99de-f2e15ac0207f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:55:38 compute-0 ovn_controller[148666]: 2026-01-20T14:55:38Z|00457|binding|INFO|Releasing lport 8c6fd3ab-70a8-4e63-99de-f2e15ac0207f from this chassis (sb_readonly=0)
Jan 20 14:55:38 compute-0 nova_compute[250018]: 2026-01-20 14:55:38.269 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:55:38 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:55:38.270 160071 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 14:55:38 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:55:38.271 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[0163ab37-4ff0-4f41-9a8a-b26671fad7bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:55:38 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:55:38.272 160071 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 14:55:38 compute-0 ovn_metadata_agent[160049]: global
Jan 20 14:55:38 compute-0 ovn_metadata_agent[160049]:     log         /dev/log local0 debug
Jan 20 14:55:38 compute-0 ovn_metadata_agent[160049]:     log-tag     haproxy-metadata-proxy-f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c
Jan 20 14:55:38 compute-0 ovn_metadata_agent[160049]:     user        root
Jan 20 14:55:38 compute-0 ovn_metadata_agent[160049]:     group       root
Jan 20 14:55:38 compute-0 ovn_metadata_agent[160049]:     maxconn     1024
Jan 20 14:55:38 compute-0 ovn_metadata_agent[160049]:     pidfile     /var/lib/neutron/external/pids/f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c.pid.haproxy
Jan 20 14:55:38 compute-0 ovn_metadata_agent[160049]:     daemon
Jan 20 14:55:38 compute-0 ovn_metadata_agent[160049]: 
Jan 20 14:55:38 compute-0 ovn_metadata_agent[160049]: defaults
Jan 20 14:55:38 compute-0 ovn_metadata_agent[160049]:     log global
Jan 20 14:55:38 compute-0 ovn_metadata_agent[160049]:     mode http
Jan 20 14:55:38 compute-0 ovn_metadata_agent[160049]:     option httplog
Jan 20 14:55:38 compute-0 ovn_metadata_agent[160049]:     option dontlognull
Jan 20 14:55:38 compute-0 ovn_metadata_agent[160049]:     option http-server-close
Jan 20 14:55:38 compute-0 ovn_metadata_agent[160049]:     option forwardfor
Jan 20 14:55:38 compute-0 ovn_metadata_agent[160049]:     retries                 3
Jan 20 14:55:38 compute-0 ovn_metadata_agent[160049]:     timeout http-request    30s
Jan 20 14:55:38 compute-0 ovn_metadata_agent[160049]:     timeout connect         30s
Jan 20 14:55:38 compute-0 ovn_metadata_agent[160049]:     timeout client          32s
Jan 20 14:55:38 compute-0 ovn_metadata_agent[160049]:     timeout server          32s
Jan 20 14:55:38 compute-0 ovn_metadata_agent[160049]:     timeout http-keep-alive 30s
Jan 20 14:55:38 compute-0 ovn_metadata_agent[160049]: 
Jan 20 14:55:38 compute-0 ovn_metadata_agent[160049]: 
Jan 20 14:55:38 compute-0 ovn_metadata_agent[160049]: listen listener
Jan 20 14:55:38 compute-0 ovn_metadata_agent[160049]:     bind 169.254.169.254:80
Jan 20 14:55:38 compute-0 ovn_metadata_agent[160049]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 14:55:38 compute-0 ovn_metadata_agent[160049]:     http-request add-header X-OVN-Network-ID f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c
Jan 20 14:55:38 compute-0 ovn_metadata_agent[160049]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 14:55:38 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:55:38.273 160071 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c', 'env', 'PROCESS_TAG=haproxy-f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 14:55:38 compute-0 nova_compute[250018]: 2026-01-20 14:55:38.287 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:55:38 compute-0 nova_compute[250018]: 2026-01-20 14:55:38.415 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:55:38 compute-0 podman[325814]: 2026-01-20 14:55:38.627642637 +0000 UTC m=+0.045615119 container create 8c5059b9deafd41fc67064e191ee53efa47ee0f5d1c4a13500bcb3112c323dd9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Jan 20 14:55:38 compute-0 systemd[1]: Started libpod-conmon-8c5059b9deafd41fc67064e191ee53efa47ee0f5d1c4a13500bcb3112c323dd9.scope.
Jan 20 14:55:38 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:55:38 compute-0 podman[325814]: 2026-01-20 14:55:38.603839126 +0000 UTC m=+0.021811628 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 14:55:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b56dfda89449563153c1e0ee260ed297534c31a44556dd6ae32ac75a1179d741/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 14:55:38 compute-0 nova_compute[250018]: 2026-01-20 14:55:38.716 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768920938.7161634, 11c82470-ab02-4424-908b-705f1f65e062 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:55:38 compute-0 nova_compute[250018]: 2026-01-20 14:55:38.717 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 11c82470-ab02-4424-908b-705f1f65e062] VM Started (Lifecycle Event)
Jan 20 14:55:38 compute-0 podman[325814]: 2026-01-20 14:55:38.727577156 +0000 UTC m=+0.145549658 container init 8c5059b9deafd41fc67064e191ee53efa47ee0f5d1c4a13500bcb3112c323dd9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3)
Jan 20 14:55:38 compute-0 podman[325814]: 2026-01-20 14:55:38.738540141 +0000 UTC m=+0.156512623 container start 8c5059b9deafd41fc67064e191ee53efa47ee0f5d1c4a13500bcb3112c323dd9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:55:38 compute-0 nova_compute[250018]: 2026-01-20 14:55:38.745 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 11c82470-ab02-4424-908b-705f1f65e062] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:55:38 compute-0 nova_compute[250018]: 2026-01-20 14:55:38.752 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768920938.717637, 11c82470-ab02-4424-908b-705f1f65e062 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:55:38 compute-0 nova_compute[250018]: 2026-01-20 14:55:38.752 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 11c82470-ab02-4424-908b-705f1f65e062] VM Paused (Lifecycle Event)
Jan 20 14:55:38 compute-0 neutron-haproxy-ovnmeta-f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c[325853]: [NOTICE]   (325857) : New worker (325859) forked
Jan 20 14:55:38 compute-0 neutron-haproxy-ovnmeta-f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c[325853]: [NOTICE]   (325857) : Loading success.
Jan 20 14:55:38 compute-0 nova_compute[250018]: 2026-01-20 14:55:38.804 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 11c82470-ab02-4424-908b-705f1f65e062] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:55:38 compute-0 nova_compute[250018]: 2026-01-20 14:55:38.809 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 11c82470-ab02-4424-908b-705f1f65e062] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:55:38 compute-0 ceph-mon[74360]: pgmap v2092: 321 pgs: 321 active+clean; 355 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 973 KiB/s rd, 3.8 MiB/s wr, 187 op/s
Jan 20 14:55:38 compute-0 nova_compute[250018]: 2026-01-20 14:55:38.942 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 11c82470-ab02-4424-908b-705f1f65e062] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:55:39 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:55:39 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:55:39 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:55:39.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:55:39 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:55:39 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:55:39 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:55:39.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:55:39 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:55:39 compute-0 nova_compute[250018]: 2026-01-20 14:55:39.783 250022 DEBUG nova.compute.manager [req-c6b5901a-63ff-4b77-b21f-3ccb228a1d03 req-bffc821b-d3ff-41d4-bd4b-c0f26191c31a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 11c82470-ab02-4424-908b-705f1f65e062] Received event network-vif-plugged-6532e62a-b883-47ff-9799-820144870294 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:55:39 compute-0 nova_compute[250018]: 2026-01-20 14:55:39.784 250022 DEBUG oslo_concurrency.lockutils [req-c6b5901a-63ff-4b77-b21f-3ccb228a1d03 req-bffc821b-d3ff-41d4-bd4b-c0f26191c31a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "11c82470-ab02-4424-908b-705f1f65e062-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:55:39 compute-0 nova_compute[250018]: 2026-01-20 14:55:39.784 250022 DEBUG oslo_concurrency.lockutils [req-c6b5901a-63ff-4b77-b21f-3ccb228a1d03 req-bffc821b-d3ff-41d4-bd4b-c0f26191c31a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "11c82470-ab02-4424-908b-705f1f65e062-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:55:39 compute-0 nova_compute[250018]: 2026-01-20 14:55:39.784 250022 DEBUG oslo_concurrency.lockutils [req-c6b5901a-63ff-4b77-b21f-3ccb228a1d03 req-bffc821b-d3ff-41d4-bd4b-c0f26191c31a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "11c82470-ab02-4424-908b-705f1f65e062-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:55:39 compute-0 nova_compute[250018]: 2026-01-20 14:55:39.785 250022 DEBUG nova.compute.manager [req-c6b5901a-63ff-4b77-b21f-3ccb228a1d03 req-bffc821b-d3ff-41d4-bd4b-c0f26191c31a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 11c82470-ab02-4424-908b-705f1f65e062] Processing event network-vif-plugged-6532e62a-b883-47ff-9799-820144870294 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 20 14:55:39 compute-0 nova_compute[250018]: 2026-01-20 14:55:39.785 250022 DEBUG nova.compute.manager [None req-be5730d0-d4bb-4f57-9a2e-bb64e58fae1a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 11c82470-ab02-4424-908b-705f1f65e062] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 14:55:39 compute-0 nova_compute[250018]: 2026-01-20 14:55:39.790 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768920939.7902868, 11c82470-ab02-4424-908b-705f1f65e062 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:55:39 compute-0 nova_compute[250018]: 2026-01-20 14:55:39.791 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 11c82470-ab02-4424-908b-705f1f65e062] VM Resumed (Lifecycle Event)
Jan 20 14:55:39 compute-0 nova_compute[250018]: 2026-01-20 14:55:39.794 250022 DEBUG nova.virt.libvirt.driver [None req-be5730d0-d4bb-4f57-9a2e-bb64e58fae1a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 11c82470-ab02-4424-908b-705f1f65e062] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 14:55:39 compute-0 nova_compute[250018]: 2026-01-20 14:55:39.805 250022 INFO nova.virt.libvirt.driver [-] [instance: 11c82470-ab02-4424-908b-705f1f65e062] Instance spawned successfully.
Jan 20 14:55:39 compute-0 nova_compute[250018]: 2026-01-20 14:55:39.805 250022 DEBUG nova.virt.libvirt.driver [None req-be5730d0-d4bb-4f57-9a2e-bb64e58fae1a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 11c82470-ab02-4424-908b-705f1f65e062] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 14:55:39 compute-0 nova_compute[250018]: 2026-01-20 14:55:39.818 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 11c82470-ab02-4424-908b-705f1f65e062] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:55:39 compute-0 nova_compute[250018]: 2026-01-20 14:55:39.841 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 11c82470-ab02-4424-908b-705f1f65e062] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:55:39 compute-0 nova_compute[250018]: 2026-01-20 14:55:39.855 250022 DEBUG nova.virt.libvirt.driver [None req-be5730d0-d4bb-4f57-9a2e-bb64e58fae1a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 11c82470-ab02-4424-908b-705f1f65e062] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:55:39 compute-0 nova_compute[250018]: 2026-01-20 14:55:39.857 250022 DEBUG nova.virt.libvirt.driver [None req-be5730d0-d4bb-4f57-9a2e-bb64e58fae1a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 11c82470-ab02-4424-908b-705f1f65e062] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:55:39 compute-0 nova_compute[250018]: 2026-01-20 14:55:39.858 250022 DEBUG nova.virt.libvirt.driver [None req-be5730d0-d4bb-4f57-9a2e-bb64e58fae1a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 11c82470-ab02-4424-908b-705f1f65e062] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:55:39 compute-0 nova_compute[250018]: 2026-01-20 14:55:39.859 250022 DEBUG nova.virt.libvirt.driver [None req-be5730d0-d4bb-4f57-9a2e-bb64e58fae1a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 11c82470-ab02-4424-908b-705f1f65e062] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:55:39 compute-0 nova_compute[250018]: 2026-01-20 14:55:39.860 250022 DEBUG nova.virt.libvirt.driver [None req-be5730d0-d4bb-4f57-9a2e-bb64e58fae1a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 11c82470-ab02-4424-908b-705f1f65e062] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:55:39 compute-0 nova_compute[250018]: 2026-01-20 14:55:39.861 250022 DEBUG nova.virt.libvirt.driver [None req-be5730d0-d4bb-4f57-9a2e-bb64e58fae1a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 11c82470-ab02-4424-908b-705f1f65e062] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:55:39 compute-0 nova_compute[250018]: 2026-01-20 14:55:39.873 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 11c82470-ab02-4424-908b-705f1f65e062] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:55:39 compute-0 nova_compute[250018]: 2026-01-20 14:55:39.914 250022 INFO nova.compute.manager [None req-be5730d0-d4bb-4f57-9a2e-bb64e58fae1a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 11c82470-ab02-4424-908b-705f1f65e062] Took 8.26 seconds to spawn the instance on the hypervisor.
Jan 20 14:55:39 compute-0 nova_compute[250018]: 2026-01-20 14:55:39.914 250022 DEBUG nova.compute.manager [None req-be5730d0-d4bb-4f57-9a2e-bb64e58fae1a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 11c82470-ab02-4424-908b-705f1f65e062] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:55:39 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/641791931' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:55:39 compute-0 nova_compute[250018]: 2026-01-20 14:55:39.984 250022 INFO nova.compute.manager [None req-be5730d0-d4bb-4f57-9a2e-bb64e58fae1a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 11c82470-ab02-4424-908b-705f1f65e062] Took 9.57 seconds to build instance.
Jan 20 14:55:40 compute-0 nova_compute[250018]: 2026-01-20 14:55:40.002 250022 DEBUG oslo_concurrency.lockutils [None req-be5730d0-d4bb-4f57-9a2e-bb64e58fae1a 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Lock "11c82470-ab02-4424-908b-705f1f65e062" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.810s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:55:40 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2093: 321 pgs: 321 active+clean; 343 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.3 MiB/s wr, 219 op/s
Jan 20 14:55:40 compute-0 nova_compute[250018]: 2026-01-20 14:55:40.105 250022 DEBUG nova.compute.manager [req-9a7848ad-621c-4bce-a4dc-ac9be4ff1945 req-18f938cc-3bb5-4429-8e9e-c11d28fcc39d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] Received event network-vif-plugged-5f645763-9f97-4686-80ab-6df7299b1235 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:55:40 compute-0 nova_compute[250018]: 2026-01-20 14:55:40.106 250022 DEBUG oslo_concurrency.lockutils [req-9a7848ad-621c-4bce-a4dc-ac9be4ff1945 req-18f938cc-3bb5-4429-8e9e-c11d28fcc39d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "b346e1ba-9e83-4e7f-bc03-c327d3e4173b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:55:40 compute-0 nova_compute[250018]: 2026-01-20 14:55:40.106 250022 DEBUG oslo_concurrency.lockutils [req-9a7848ad-621c-4bce-a4dc-ac9be4ff1945 req-18f938cc-3bb5-4429-8e9e-c11d28fcc39d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "b346e1ba-9e83-4e7f-bc03-c327d3e4173b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:55:40 compute-0 nova_compute[250018]: 2026-01-20 14:55:40.106 250022 DEBUG oslo_concurrency.lockutils [req-9a7848ad-621c-4bce-a4dc-ac9be4ff1945 req-18f938cc-3bb5-4429-8e9e-c11d28fcc39d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "b346e1ba-9e83-4e7f-bc03-c327d3e4173b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:55:40 compute-0 nova_compute[250018]: 2026-01-20 14:55:40.107 250022 DEBUG nova.compute.manager [req-9a7848ad-621c-4bce-a4dc-ac9be4ff1945 req-18f938cc-3bb5-4429-8e9e-c11d28fcc39d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] No waiting events found dispatching network-vif-plugged-5f645763-9f97-4686-80ab-6df7299b1235 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:55:40 compute-0 nova_compute[250018]: 2026-01-20 14:55:40.107 250022 WARNING nova.compute.manager [req-9a7848ad-621c-4bce-a4dc-ac9be4ff1945 req-18f938cc-3bb5-4429-8e9e-c11d28fcc39d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] Received unexpected event network-vif-plugged-5f645763-9f97-4686-80ab-6df7299b1235 for instance with vm_state resized and task_state None.
Jan 20 14:55:40 compute-0 ceph-mon[74360]: pgmap v2093: 321 pgs: 321 active+clean; 343 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.3 MiB/s wr, 219 op/s
Jan 20 14:55:41 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:55:41 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:55:41 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:55:41.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:55:41 compute-0 nova_compute[250018]: 2026-01-20 14:55:41.041 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:55:41 compute-0 nova_compute[250018]: 2026-01-20 14:55:41.046 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:55:41 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:55:41 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:55:41 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:55:41.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:55:41 compute-0 nova_compute[250018]: 2026-01-20 14:55:41.269 250022 DEBUG nova.network.neutron [None req-b810852e-f780-4fab-8cf9-b7f09b8f227c 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] Port 5f645763-9f97-4686-80ab-6df7299b1235 binding to destination host compute-0.ctlplane.example.com is already ACTIVE migrate_instance_start /usr/lib/python3.9/site-packages/nova/network/neutron.py:3171
Jan 20 14:55:41 compute-0 nova_compute[250018]: 2026-01-20 14:55:41.269 250022 DEBUG oslo_concurrency.lockutils [None req-b810852e-f780-4fab-8cf9-b7f09b8f227c 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Acquiring lock "refresh_cache-b346e1ba-9e83-4e7f-bc03-c327d3e4173b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:55:41 compute-0 nova_compute[250018]: 2026-01-20 14:55:41.269 250022 DEBUG oslo_concurrency.lockutils [None req-b810852e-f780-4fab-8cf9-b7f09b8f227c 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Acquired lock "refresh_cache-b346e1ba-9e83-4e7f-bc03-c327d3e4173b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:55:41 compute-0 nova_compute[250018]: 2026-01-20 14:55:41.269 250022 DEBUG nova.network.neutron [None req-b810852e-f780-4fab-8cf9-b7f09b8f227c 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:55:41 compute-0 nova_compute[250018]: 2026-01-20 14:55:41.908 250022 DEBUG nova.compute.manager [req-56b371c8-c8de-4949-903f-ead4c249d25a req-e340c7d4-3f77-4f52-b1e5-03be5d20382e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 11c82470-ab02-4424-908b-705f1f65e062] Received event network-vif-plugged-6532e62a-b883-47ff-9799-820144870294 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:55:41 compute-0 nova_compute[250018]: 2026-01-20 14:55:41.909 250022 DEBUG oslo_concurrency.lockutils [req-56b371c8-c8de-4949-903f-ead4c249d25a req-e340c7d4-3f77-4f52-b1e5-03be5d20382e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "11c82470-ab02-4424-908b-705f1f65e062-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:55:41 compute-0 nova_compute[250018]: 2026-01-20 14:55:41.909 250022 DEBUG oslo_concurrency.lockutils [req-56b371c8-c8de-4949-903f-ead4c249d25a req-e340c7d4-3f77-4f52-b1e5-03be5d20382e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "11c82470-ab02-4424-908b-705f1f65e062-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:55:41 compute-0 nova_compute[250018]: 2026-01-20 14:55:41.909 250022 DEBUG oslo_concurrency.lockutils [req-56b371c8-c8de-4949-903f-ead4c249d25a req-e340c7d4-3f77-4f52-b1e5-03be5d20382e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "11c82470-ab02-4424-908b-705f1f65e062-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:55:41 compute-0 nova_compute[250018]: 2026-01-20 14:55:41.909 250022 DEBUG nova.compute.manager [req-56b371c8-c8de-4949-903f-ead4c249d25a req-e340c7d4-3f77-4f52-b1e5-03be5d20382e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 11c82470-ab02-4424-908b-705f1f65e062] No waiting events found dispatching network-vif-plugged-6532e62a-b883-47ff-9799-820144870294 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:55:41 compute-0 nova_compute[250018]: 2026-01-20 14:55:41.909 250022 WARNING nova.compute.manager [req-56b371c8-c8de-4949-903f-ead4c249d25a req-e340c7d4-3f77-4f52-b1e5-03be5d20382e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 11c82470-ab02-4424-908b-705f1f65e062] Received unexpected event network-vif-plugged-6532e62a-b883-47ff-9799-820144870294 for instance with vm_state active and task_state None.
Jan 20 14:55:42 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2094: 321 pgs: 321 active+clean; 326 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 2.2 MiB/s wr, 258 op/s
Jan 20 14:55:42 compute-0 nova_compute[250018]: 2026-01-20 14:55:42.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:55:42 compute-0 nova_compute[250018]: 2026-01-20 14:55:42.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:55:42 compute-0 nova_compute[250018]: 2026-01-20 14:55:42.051 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 14:55:43 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:55:43 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:55:43 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:55:43.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:55:43 compute-0 ceph-mon[74360]: pgmap v2094: 321 pgs: 321 active+clean; 326 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 2.2 MiB/s wr, 258 op/s
Jan 20 14:55:43 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:55:43 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:55:43 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:55:43.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:55:43 compute-0 nova_compute[250018]: 2026-01-20 14:55:43.417 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:55:43 compute-0 nova_compute[250018]: 2026-01-20 14:55:43.597 250022 DEBUG nova.network.neutron [None req-b810852e-f780-4fab-8cf9-b7f09b8f227c 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] Updating instance_info_cache with network_info: [{"id": "5f645763-9f97-4686-80ab-6df7299b1235", "address": "fa:16:3e:a5:83:5d", "network": {"id": "41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1445030024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.184", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3b1b7f5b4f84b5abbc401eb577c85c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5f645763-9f", "ovs_interfaceid": "5f645763-9f97-4686-80ab-6df7299b1235", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:55:43 compute-0 nova_compute[250018]: 2026-01-20 14:55:43.612 250022 DEBUG oslo_concurrency.lockutils [None req-b810852e-f780-4fab-8cf9-b7f09b8f227c 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Releasing lock "refresh_cache-b346e1ba-9e83-4e7f-bc03-c327d3e4173b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:55:43 compute-0 ovn_controller[148666]: 2026-01-20T14:55:43Z|00458|binding|INFO|Releasing lport 3fa2df7b-42b2-4a3b-a33b-ab37b5d6aef3 from this chassis (sb_readonly=0)
Jan 20 14:55:43 compute-0 ovn_controller[148666]: 2026-01-20T14:55:43Z|00459|binding|INFO|Releasing lport 8c6fd3ab-70a8-4e63-99de-f2e15ac0207f from this chassis (sb_readonly=0)
Jan 20 14:55:43 compute-0 kernel: tap5f645763-9f (unregistering): left promiscuous mode
Jan 20 14:55:43 compute-0 NetworkManager[48960]: <info>  [1768920943.8460] device (tap5f645763-9f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 14:55:43 compute-0 systemd[1]: machine-qemu\x2d59\x2dinstance\x2d0000007b.scope: Deactivated successfully.
Jan 20 14:55:43 compute-0 systemd[1]: machine-qemu\x2d59\x2dinstance\x2d0000007b.scope: Consumed 7.484s CPU time.
Jan 20 14:55:43 compute-0 systemd-machined[216401]: Machine qemu-59-instance-0000007b terminated.
Jan 20 14:55:43 compute-0 nova_compute[250018]: 2026-01-20 14:55:43.968 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:55:43 compute-0 ovn_controller[148666]: 2026-01-20T14:55:43Z|00460|binding|INFO|Releasing lport 5f645763-9f97-4686-80ab-6df7299b1235 from this chassis (sb_readonly=0)
Jan 20 14:55:43 compute-0 ovn_controller[148666]: 2026-01-20T14:55:43Z|00461|binding|INFO|Setting lport 5f645763-9f97-4686-80ab-6df7299b1235 down in Southbound
Jan 20 14:55:43 compute-0 ovn_controller[148666]: 2026-01-20T14:55:43Z|00462|binding|INFO|Removing iface tap5f645763-9f ovn-installed in OVS
Jan 20 14:55:43 compute-0 nova_compute[250018]: 2026-01-20 14:55:43.971 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:55:43 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:55:43.976 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a5:83:5d 10.100.0.14'], port_security=['fa:16:3e:a5:83:5d 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'b346e1ba-9e83-4e7f-bc03-c327d3e4173b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b3b1b7f5b4f84b5abbc401eb577c85c0', 'neutron:revision_number': '6', 'neutron:security_group_ids': '8b11f3fb-2601-4eca-a1b6-838549d7750c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.184', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3273589e-5585-406c-9611-87f758b0e521, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=5f645763-9f97-4686-80ab-6df7299b1235) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:55:43 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:55:43.977 160071 INFO neutron.agent.ovn.metadata.agent [-] Port 5f645763-9f97-4686-80ab-6df7299b1235 in datapath 41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce unbound from our chassis
Jan 20 14:55:43 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:55:43.979 160071 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 14:55:43 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:55:43.980 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[19859563-c215-4865-958a-280e72a002b9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:55:43 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:55:43.981 160071 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce namespace which is not needed anymore
Jan 20 14:55:43 compute-0 nova_compute[250018]: 2026-01-20 14:55:43.987 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:55:44 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2095: 321 pgs: 321 active+clean; 326 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 2.0 MiB/s wr, 273 op/s
Jan 20 14:55:44 compute-0 nova_compute[250018]: 2026-01-20 14:55:44.069 250022 INFO nova.virt.libvirt.driver [-] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] Instance destroyed successfully.
Jan 20 14:55:44 compute-0 nova_compute[250018]: 2026-01-20 14:55:44.070 250022 DEBUG nova.objects.instance [None req-b810852e-f780-4fab-8cf9-b7f09b8f227c 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Lazy-loading 'resources' on Instance uuid b346e1ba-9e83-4e7f-bc03-c327d3e4173b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:55:44 compute-0 nova_compute[250018]: 2026-01-20 14:55:44.094 250022 DEBUG nova.virt.libvirt.vif [None req-b810852e-f780-4fab-8cf9-b7f09b8f227c 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:54:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-141692785',display_name='tempest-ServerActionsTestOtherB-server-141692785',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-141692785',id=123,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBO99uJ9+FwgjxRb/9u+f3Mj9/VKSDM+OKd66Ygsg8lEO+7bGpDEQrC5BIaSV+Na5YF+3DqUwLNmAYSN9IkTSGbRPw5y8813A+KsiNHebrpnZ7oReyT+5/zNQYafCHVAfGA==',key_name='tempest-keypair-302882914',keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:55:37Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=<?>,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b3b1b7f5b4f84b5abbc401eb577c85c0',ramdisk_id='',reservation_id='r-c8v02o4r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerActionsTestOtherB-1136521362',owner_user_name='tempest-ServerActionsTestOtherB-1136521362-project-member'},tags=<?>,task_state='resize_reverting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T14:55:37Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='215db37373dc4ae5a75cbd6866f471da',uuid=b346e1ba-9e83-4e7f-bc03-c327d3e4173b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='resized') vif={"id": "5f645763-9f97-4686-80ab-6df7299b1235", "address": "fa:16:3e:a5:83:5d", "network": {"id": "41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1445030024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.184", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3b1b7f5b4f84b5abbc401eb577c85c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5f645763-9f", "ovs_interfaceid": "5f645763-9f97-4686-80ab-6df7299b1235", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 14:55:44 compute-0 nova_compute[250018]: 2026-01-20 14:55:44.095 250022 DEBUG nova.network.os_vif_util [None req-b810852e-f780-4fab-8cf9-b7f09b8f227c 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Converting VIF {"id": "5f645763-9f97-4686-80ab-6df7299b1235", "address": "fa:16:3e:a5:83:5d", "network": {"id": "41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1445030024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.184", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3b1b7f5b4f84b5abbc401eb577c85c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5f645763-9f", "ovs_interfaceid": "5f645763-9f97-4686-80ab-6df7299b1235", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:55:44 compute-0 nova_compute[250018]: 2026-01-20 14:55:44.095 250022 DEBUG nova.network.os_vif_util [None req-b810852e-f780-4fab-8cf9-b7f09b8f227c 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a5:83:5d,bridge_name='br-int',has_traffic_filtering=True,id=5f645763-9f97-4686-80ab-6df7299b1235,network=Network(41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5f645763-9f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:55:44 compute-0 nova_compute[250018]: 2026-01-20 14:55:44.096 250022 DEBUG os_vif [None req-b810852e-f780-4fab-8cf9-b7f09b8f227c 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:a5:83:5d,bridge_name='br-int',has_traffic_filtering=True,id=5f645763-9f97-4686-80ab-6df7299b1235,network=Network(41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5f645763-9f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 14:55:44 compute-0 nova_compute[250018]: 2026-01-20 14:55:44.099 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:55:44 compute-0 nova_compute[250018]: 2026-01-20 14:55:44.099 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5f645763-9f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:55:44 compute-0 nova_compute[250018]: 2026-01-20 14:55:44.101 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:55:44 compute-0 nova_compute[250018]: 2026-01-20 14:55:44.102 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:55:44 compute-0 nova_compute[250018]: 2026-01-20 14:55:44.104 250022 INFO os_vif [None req-b810852e-f780-4fab-8cf9-b7f09b8f227c 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:a5:83:5d,bridge_name='br-int',has_traffic_filtering=True,id=5f645763-9f97-4686-80ab-6df7299b1235,network=Network(41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5f645763-9f')
Jan 20 14:55:44 compute-0 neutron-haproxy-ovnmeta-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce[325674]: [NOTICE]   (325681) : haproxy version is 2.8.14-c23fe91
Jan 20 14:55:44 compute-0 neutron-haproxy-ovnmeta-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce[325674]: [NOTICE]   (325681) : path to executable is /usr/sbin/haproxy
Jan 20 14:55:44 compute-0 neutron-haproxy-ovnmeta-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce[325674]: [WARNING]  (325681) : Exiting Master process...
Jan 20 14:55:44 compute-0 neutron-haproxy-ovnmeta-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce[325674]: [WARNING]  (325681) : Exiting Master process...
Jan 20 14:55:44 compute-0 neutron-haproxy-ovnmeta-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce[325674]: [ALERT]    (325681) : Current worker (325683) exited with code 143 (Terminated)
Jan 20 14:55:44 compute-0 neutron-haproxy-ovnmeta-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce[325674]: [WARNING]  (325681) : All workers exited. Exiting... (0)
Jan 20 14:55:44 compute-0 systemd[1]: libpod-8c15e9323bf319fdf2d396e5ff9125f543acf8d53b207710b27cfb3ff30b337c.scope: Deactivated successfully.
Jan 20 14:55:44 compute-0 podman[325897]: 2026-01-20 14:55:44.152639307 +0000 UTC m=+0.079627604 container died 8c15e9323bf319fdf2d396e5ff9125f543acf8d53b207710b27cfb3ff30b337c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 20 14:55:44 compute-0 nova_compute[250018]: 2026-01-20 14:55:44.197 250022 DEBUG nova.compute.manager [req-556bfc06-2b1d-4883-a42c-ca6a536892a0 req-621dacdb-dbc1-45ea-b36d-29f9d580a8d9 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] Received event network-vif-unplugged-5f645763-9f97-4686-80ab-6df7299b1235 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:55:44 compute-0 nova_compute[250018]: 2026-01-20 14:55:44.197 250022 DEBUG oslo_concurrency.lockutils [req-556bfc06-2b1d-4883-a42c-ca6a536892a0 req-621dacdb-dbc1-45ea-b36d-29f9d580a8d9 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "b346e1ba-9e83-4e7f-bc03-c327d3e4173b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:55:44 compute-0 nova_compute[250018]: 2026-01-20 14:55:44.199 250022 DEBUG oslo_concurrency.lockutils [req-556bfc06-2b1d-4883-a42c-ca6a536892a0 req-621dacdb-dbc1-45ea-b36d-29f9d580a8d9 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "b346e1ba-9e83-4e7f-bc03-c327d3e4173b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:55:44 compute-0 nova_compute[250018]: 2026-01-20 14:55:44.199 250022 DEBUG oslo_concurrency.lockutils [req-556bfc06-2b1d-4883-a42c-ca6a536892a0 req-621dacdb-dbc1-45ea-b36d-29f9d580a8d9 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "b346e1ba-9e83-4e7f-bc03-c327d3e4173b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:55:44 compute-0 nova_compute[250018]: 2026-01-20 14:55:44.200 250022 DEBUG nova.compute.manager [req-556bfc06-2b1d-4883-a42c-ca6a536892a0 req-621dacdb-dbc1-45ea-b36d-29f9d580a8d9 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] No waiting events found dispatching network-vif-unplugged-5f645763-9f97-4686-80ab-6df7299b1235 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:55:44 compute-0 nova_compute[250018]: 2026-01-20 14:55:44.200 250022 WARNING nova.compute.manager [req-556bfc06-2b1d-4883-a42c-ca6a536892a0 req-621dacdb-dbc1-45ea-b36d-29f9d580a8d9 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] Received unexpected event network-vif-unplugged-5f645763-9f97-4686-80ab-6df7299b1235 for instance with vm_state resized and task_state resize_reverting.
Jan 20 14:55:44 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8c15e9323bf319fdf2d396e5ff9125f543acf8d53b207710b27cfb3ff30b337c-userdata-shm.mount: Deactivated successfully.
Jan 20 14:55:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-cdd42ff22a69d44a672e0252b683372722e5f4946c002ad42f11c0e5acbfc985-merged.mount: Deactivated successfully.
Jan 20 14:55:44 compute-0 podman[325897]: 2026-01-20 14:55:44.314834252 +0000 UTC m=+0.241822559 container cleanup 8c15e9323bf319fdf2d396e5ff9125f543acf8d53b207710b27cfb3ff30b337c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 20 14:55:44 compute-0 systemd[1]: libpod-conmon-8c15e9323bf319fdf2d396e5ff9125f543acf8d53b207710b27cfb3ff30b337c.scope: Deactivated successfully.
Jan 20 14:55:44 compute-0 ceph-mon[74360]: pgmap v2095: 321 pgs: 321 active+clean; 326 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 2.0 MiB/s wr, 273 op/s
Jan 20 14:55:44 compute-0 podman[325931]: 2026-01-20 14:55:44.383096259 +0000 UTC m=+0.041924760 container remove 8c15e9323bf319fdf2d396e5ff9125f543acf8d53b207710b27cfb3ff30b337c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:55:44 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:55:44.388 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[48142975-5a64-41df-9a3e-5810bdd98f3a]: (4, ('Tue Jan 20 02:55:44 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce (8c15e9323bf319fdf2d396e5ff9125f543acf8d53b207710b27cfb3ff30b337c)\n8c15e9323bf319fdf2d396e5ff9125f543acf8d53b207710b27cfb3ff30b337c\nTue Jan 20 02:55:44 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce (8c15e9323bf319fdf2d396e5ff9125f543acf8d53b207710b27cfb3ff30b337c)\n8c15e9323bf319fdf2d396e5ff9125f543acf8d53b207710b27cfb3ff30b337c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:55:44 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:55:44.390 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[c0fe7637-492b-4f81-a0db-2aa1785dc35c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:55:44 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:55:44.391 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap41a1a3fe-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:55:44 compute-0 nova_compute[250018]: 2026-01-20 14:55:44.393 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:55:44 compute-0 kernel: tap41a1a3fe-f0: left promiscuous mode
Jan 20 14:55:44 compute-0 nova_compute[250018]: 2026-01-20 14:55:44.409 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:55:44 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:55:44.412 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[6b707516-d992-4ac9-abdc-9b31803a2640]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:55:44 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:55:44.436 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[b2e3a023-709a-4a12-9fff-8068fb802c32]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:55:44 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:55:44.438 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[28961dfa-9cc6-4871-82d3-7fd344f85608]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:55:44 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:55:44.454 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[b6f4e613-76b1-43d2-a582-dc1a20897aaf]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 687709, 'reachable_time': 43577, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 325946, 'error': None, 'target': 'ovnmeta-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:55:44 compute-0 systemd[1]: run-netns-ovnmeta\x2d41a1a3fe\x2df6f8\x2d4375\x2d9b0f\x2da4d4bb269cce.mount: Deactivated successfully.
Jan 20 14:55:44 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:55:44.456 160517 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 14:55:44 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:55:44.456 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[3d168f27-8d6c-488a-9c35-d5a83c46b45c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:55:44 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:55:44 compute-0 nova_compute[250018]: 2026-01-20 14:55:44.712 250022 DEBUG oslo_concurrency.lockutils [None req-b810852e-f780-4fab-8cf9-b7f09b8f227c 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_dest" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:55:44 compute-0 nova_compute[250018]: 2026-01-20 14:55:44.712 250022 DEBUG oslo_concurrency.lockutils [None req-b810852e-f780-4fab-8cf9-b7f09b8f227c 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_dest" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:55:44 compute-0 nova_compute[250018]: 2026-01-20 14:55:44.734 250022 DEBUG nova.objects.instance [None req-b810852e-f780-4fab-8cf9-b7f09b8f227c 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Lazy-loading 'migration_context' on Instance uuid b346e1ba-9e83-4e7f-bc03-c327d3e4173b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:55:44 compute-0 nova_compute[250018]: 2026-01-20 14:55:44.962 250022 DEBUG oslo_concurrency.processutils [None req-b810852e-f780-4fab-8cf9-b7f09b8f227c 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:55:45 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:55:45 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:55:45 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:55:45.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:55:45 compute-0 nova_compute[250018]: 2026-01-20 14:55:45.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:55:45 compute-0 nova_compute[250018]: 2026-01-20 14:55:45.052 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:55:45 compute-0 nova_compute[250018]: 2026-01-20 14:55:45.075 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:55:45 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:55:45 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:55:45 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:55:45.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:55:45 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/3485196193' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:55:45 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:55:45 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2332547316' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:55:45 compute-0 nova_compute[250018]: 2026-01-20 14:55:45.402 250022 DEBUG oslo_concurrency.processutils [None req-b810852e-f780-4fab-8cf9-b7f09b8f227c 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:55:45 compute-0 nova_compute[250018]: 2026-01-20 14:55:45.407 250022 DEBUG nova.compute.provider_tree [None req-b810852e-f780-4fab-8cf9-b7f09b8f227c 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:55:45 compute-0 nova_compute[250018]: 2026-01-20 14:55:45.422 250022 DEBUG nova.scheduler.client.report [None req-b810852e-f780-4fab-8cf9-b7f09b8f227c 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:55:45 compute-0 nova_compute[250018]: 2026-01-20 14:55:45.484 250022 DEBUG oslo_concurrency.lockutils [None req-b810852e-f780-4fab-8cf9-b7f09b8f227c 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_dest" :: held 0.772s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:55:45 compute-0 nova_compute[250018]: 2026-01-20 14:55:45.491 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.415s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:55:45 compute-0 nova_compute[250018]: 2026-01-20 14:55:45.491 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:55:45 compute-0 nova_compute[250018]: 2026-01-20 14:55:45.491 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 14:55:45 compute-0 nova_compute[250018]: 2026-01-20 14:55:45.492 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:55:45 compute-0 nova_compute[250018]: 2026-01-20 14:55:45.713 250022 INFO nova.compute.manager [None req-b810852e-f780-4fab-8cf9-b7f09b8f227c 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] Swapping old allocation on dict_keys(['068db7fd-4bd6-45a9-8bd6-a22cfe7596ed']) held by migration 0b824da1-0249-41a3-b5ef-cb3a5de1b2b8 for instance
Jan 20 14:55:45 compute-0 nova_compute[250018]: 2026-01-20 14:55:45.747 250022 DEBUG nova.scheduler.client.report [None req-b810852e-f780-4fab-8cf9-b7f09b8f227c 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Overwriting current allocation {'allocations': {'068db7fd-4bd6-45a9-8bd6-a22cfe7596ed': {'resources': {'DISK_GB': 1, 'MEMORY_MB': 192, 'VCPU': 1}, 'generation': 67}}, 'project_id': 'b3b1b7f5b4f84b5abbc401eb577c85c0', 'user_id': '215db37373dc4ae5a75cbd6866f471da', 'consumer_generation': 1} on consumer b346e1ba-9e83-4e7f-bc03-c327d3e4173b move_allocations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:2018
Jan 20 14:55:45 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:55:45 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2964940297' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:55:45 compute-0 nova_compute[250018]: 2026-01-20 14:55:45.963 250022 DEBUG oslo_concurrency.lockutils [None req-b810852e-f780-4fab-8cf9-b7f09b8f227c 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Acquiring lock "refresh_cache-b346e1ba-9e83-4e7f-bc03-c327d3e4173b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:55:45 compute-0 nova_compute[250018]: 2026-01-20 14:55:45.964 250022 DEBUG oslo_concurrency.lockutils [None req-b810852e-f780-4fab-8cf9-b7f09b8f227c 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Acquired lock "refresh_cache-b346e1ba-9e83-4e7f-bc03-c327d3e4173b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:55:45 compute-0 nova_compute[250018]: 2026-01-20 14:55:45.965 250022 DEBUG nova.network.neutron [None req-b810852e-f780-4fab-8cf9-b7f09b8f227c 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:55:45 compute-0 nova_compute[250018]: 2026-01-20 14:55:45.968 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:55:46 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2096: 321 pgs: 321 active+clean; 359 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 5.5 MiB/s rd, 3.3 MiB/s wr, 340 op/s
Jan 20 14:55:46 compute-0 nova_compute[250018]: 2026-01-20 14:55:46.045 250022 DEBUG nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] skipping disk for instance-0000007f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:55:46 compute-0 nova_compute[250018]: 2026-01-20 14:55:46.045 250022 DEBUG nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] skipping disk for instance-0000007f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:55:46 compute-0 nova_compute[250018]: 2026-01-20 14:55:46.200 250022 WARNING nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:55:46 compute-0 nova_compute[250018]: 2026-01-20 14:55:46.201 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4178MB free_disk=20.87616729736328GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 14:55:46 compute-0 nova_compute[250018]: 2026-01-20 14:55:46.202 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:55:46 compute-0 nova_compute[250018]: 2026-01-20 14:55:46.202 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:55:46 compute-0 nova_compute[250018]: 2026-01-20 14:55:46.300 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Instance b346e1ba-9e83-4e7f-bc03-c327d3e4173b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 14:55:46 compute-0 nova_compute[250018]: 2026-01-20 14:55:46.300 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Instance 11c82470-ab02-4424-908b-705f1f65e062 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 14:55:46 compute-0 nova_compute[250018]: 2026-01-20 14:55:46.300 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 14:55:46 compute-0 nova_compute[250018]: 2026-01-20 14:55:46.301 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 14:55:46 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/2332547316' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:55:46 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/387418848' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:55:46 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/2964940297' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:55:46 compute-0 ceph-mon[74360]: pgmap v2096: 321 pgs: 321 active+clean; 359 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 5.5 MiB/s rd, 3.3 MiB/s wr, 340 op/s
Jan 20 14:55:46 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/4122561463' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:55:46 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/1562014082' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:55:46 compute-0 nova_compute[250018]: 2026-01-20 14:55:46.419 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:55:46 compute-0 nova_compute[250018]: 2026-01-20 14:55:46.464 250022 DEBUG nova.compute.manager [req-4c52200b-22a1-487f-aadd-357ef5c207c6 req-917e3377-dbef-49ba-83cd-75d99cd02fb5 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] Received event network-vif-plugged-5f645763-9f97-4686-80ab-6df7299b1235 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:55:46 compute-0 nova_compute[250018]: 2026-01-20 14:55:46.464 250022 DEBUG oslo_concurrency.lockutils [req-4c52200b-22a1-487f-aadd-357ef5c207c6 req-917e3377-dbef-49ba-83cd-75d99cd02fb5 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "b346e1ba-9e83-4e7f-bc03-c327d3e4173b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:55:46 compute-0 nova_compute[250018]: 2026-01-20 14:55:46.464 250022 DEBUG oslo_concurrency.lockutils [req-4c52200b-22a1-487f-aadd-357ef5c207c6 req-917e3377-dbef-49ba-83cd-75d99cd02fb5 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "b346e1ba-9e83-4e7f-bc03-c327d3e4173b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:55:46 compute-0 nova_compute[250018]: 2026-01-20 14:55:46.465 250022 DEBUG oslo_concurrency.lockutils [req-4c52200b-22a1-487f-aadd-357ef5c207c6 req-917e3377-dbef-49ba-83cd-75d99cd02fb5 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "b346e1ba-9e83-4e7f-bc03-c327d3e4173b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:55:46 compute-0 nova_compute[250018]: 2026-01-20 14:55:46.465 250022 DEBUG nova.compute.manager [req-4c52200b-22a1-487f-aadd-357ef5c207c6 req-917e3377-dbef-49ba-83cd-75d99cd02fb5 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] No waiting events found dispatching network-vif-plugged-5f645763-9f97-4686-80ab-6df7299b1235 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:55:46 compute-0 nova_compute[250018]: 2026-01-20 14:55:46.465 250022 WARNING nova.compute.manager [req-4c52200b-22a1-487f-aadd-357ef5c207c6 req-917e3377-dbef-49ba-83cd-75d99cd02fb5 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] Received unexpected event network-vif-plugged-5f645763-9f97-4686-80ab-6df7299b1235 for instance with vm_state resized and task_state resize_reverting.
Jan 20 14:55:46 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:55:46 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2961583989' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:55:46 compute-0 nova_compute[250018]: 2026-01-20 14:55:46.855 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:55:46 compute-0 nova_compute[250018]: 2026-01-20 14:55:46.861 250022 DEBUG nova.compute.provider_tree [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:55:46 compute-0 nova_compute[250018]: 2026-01-20 14:55:46.880 250022 DEBUG nova.scheduler.client.report [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:55:46 compute-0 nova_compute[250018]: 2026-01-20 14:55:46.917 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 14:55:46 compute-0 nova_compute[250018]: 2026-01-20 14:55:46.917 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.715s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:55:47 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:55:47 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:55:47 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:55:47.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:55:47 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:55:47 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:55:47 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:55:47.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:55:47 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/541677167' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:55:47 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/2961583989' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:55:47 compute-0 nova_compute[250018]: 2026-01-20 14:55:47.510 250022 DEBUG nova.network.neutron [None req-b810852e-f780-4fab-8cf9-b7f09b8f227c 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] Updating instance_info_cache with network_info: [{"id": "5f645763-9f97-4686-80ab-6df7299b1235", "address": "fa:16:3e:a5:83:5d", "network": {"id": "41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1445030024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.184", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3b1b7f5b4f84b5abbc401eb577c85c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5f645763-9f", "ovs_interfaceid": "5f645763-9f97-4686-80ab-6df7299b1235", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:55:47 compute-0 nova_compute[250018]: 2026-01-20 14:55:47.545 250022 DEBUG oslo_concurrency.lockutils [None req-b810852e-f780-4fab-8cf9-b7f09b8f227c 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Releasing lock "refresh_cache-b346e1ba-9e83-4e7f-bc03-c327d3e4173b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:55:47 compute-0 nova_compute[250018]: 2026-01-20 14:55:47.546 250022 DEBUG os_brick.utils [None req-b810852e-f780-4fab-8cf9-b7f09b8f227c 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Jan 20 14:55:47 compute-0 nova_compute[250018]: 2026-01-20 14:55:47.547 268150 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:55:47 compute-0 nova_compute[250018]: 2026-01-20 14:55:47.560 268150 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:55:47 compute-0 nova_compute[250018]: 2026-01-20 14:55:47.560 268150 DEBUG oslo.privsep.daemon [-] privsep: reply[21fa677e-1d3b-487f-b64a-5d4cb46eb628]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:55:47 compute-0 nova_compute[250018]: 2026-01-20 14:55:47.561 268150 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:55:47 compute-0 nova_compute[250018]: 2026-01-20 14:55:47.570 268150 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:55:47 compute-0 nova_compute[250018]: 2026-01-20 14:55:47.571 268150 DEBUG oslo.privsep.daemon [-] privsep: reply[e86dd33f-f658-4aa5-8df3-d6da844e31f0]: (4, ('InitiatorName=iqn.1994-05.com.redhat:228389a1f17e', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:55:47 compute-0 nova_compute[250018]: 2026-01-20 14:55:47.572 268150 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:55:47 compute-0 nova_compute[250018]: 2026-01-20 14:55:47.584 268150 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:55:47 compute-0 nova_compute[250018]: 2026-01-20 14:55:47.584 268150 DEBUG oslo.privsep.daemon [-] privsep: reply[da53b9b3-749e-4b1b-8a90-99af4128642f]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:55:47 compute-0 nova_compute[250018]: 2026-01-20 14:55:47.585 268150 DEBUG oslo.privsep.daemon [-] privsep: reply[5eb03aa5-914f-4125-9218-f3bf31e6ba8b]: (4, '35085f33-1a27-41e3-805d-02c7ac6a1d7f') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:55:47 compute-0 nova_compute[250018]: 2026-01-20 14:55:47.586 250022 DEBUG oslo_concurrency.processutils [None req-b810852e-f780-4fab-8cf9-b7f09b8f227c 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:55:47 compute-0 nova_compute[250018]: 2026-01-20 14:55:47.634 250022 DEBUG oslo_concurrency.processutils [None req-b810852e-f780-4fab-8cf9-b7f09b8f227c 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] CMD "nvme version" returned: 0 in 0.048s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:55:47 compute-0 nova_compute[250018]: 2026-01-20 14:55:47.636 250022 DEBUG os_brick.initiator.connectors.lightos [None req-b810852e-f780-4fab-8cf9-b7f09b8f227c 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Jan 20 14:55:47 compute-0 nova_compute[250018]: 2026-01-20 14:55:47.637 250022 DEBUG os_brick.initiator.connectors.lightos [None req-b810852e-f780-4fab-8cf9-b7f09b8f227c 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Jan 20 14:55:47 compute-0 nova_compute[250018]: 2026-01-20 14:55:47.637 250022 DEBUG os_brick.initiator.connectors.lightos [None req-b810852e-f780-4fab-8cf9-b7f09b8f227c 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Jan 20 14:55:47 compute-0 nova_compute[250018]: 2026-01-20 14:55:47.637 250022 DEBUG os_brick.utils [None req-b810852e-f780-4fab-8cf9-b7f09b8f227c 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] <== get_connector_properties: return (91ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:228389a1f17e', 'do_local_attach': False, 'nvme_hostid': '5350774e-8b5e-4dba-80a9-92d405981c1d', 'system uuid': '35085f33-1a27-41e3-805d-02c7ac6a1d7f', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Jan 20 14:55:47 compute-0 nova_compute[250018]: 2026-01-20 14:55:47.916 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:55:47 compute-0 nova_compute[250018]: 2026-01-20 14:55:47.917 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:55:48 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2097: 321 pgs: 321 active+clean; 378 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 6.4 MiB/s rd, 2.7 MiB/s wr, 281 op/s
Jan 20 14:55:48 compute-0 nova_compute[250018]: 2026-01-20 14:55:48.419 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:55:48 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/2048446538' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:55:48 compute-0 ceph-mon[74360]: pgmap v2097: 321 pgs: 321 active+clean; 378 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 6.4 MiB/s rd, 2.7 MiB/s wr, 281 op/s
Jan 20 14:55:48 compute-0 nova_compute[250018]: 2026-01-20 14:55:48.867 250022 DEBUG nova.virt.libvirt.driver [None req-b810852e-f780-4fab-8cf9-b7f09b8f227c 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] Starting finish_revert_migration finish_revert_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11843
Jan 20 14:55:48 compute-0 nova_compute[250018]: 2026-01-20 14:55:48.973 250022 DEBUG nova.storage.rbd_utils [None req-b810852e-f780-4fab-8cf9-b7f09b8f227c 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] rolling back rbd image(b346e1ba-9e83-4e7f-bc03-c327d3e4173b_disk) to snapshot(nova-resize) rollback_to_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:505
Jan 20 14:55:49 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:55:49 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:55:49 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:55:49.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:55:49 compute-0 nova_compute[250018]: 2026-01-20 14:55:49.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:55:49 compute-0 nova_compute[250018]: 2026-01-20 14:55:49.052 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 14:55:49 compute-0 nova_compute[250018]: 2026-01-20 14:55:49.170 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:55:49 compute-0 nova_compute[250018]: 2026-01-20 14:55:49.174 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "refresh_cache-b346e1ba-9e83-4e7f-bc03-c327d3e4173b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:55:49 compute-0 nova_compute[250018]: 2026-01-20 14:55:49.174 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquired lock "refresh_cache-b346e1ba-9e83-4e7f-bc03-c327d3e4173b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:55:49 compute-0 nova_compute[250018]: 2026-01-20 14:55:49.174 250022 DEBUG nova.network.neutron [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 20 14:55:49 compute-0 nova_compute[250018]: 2026-01-20 14:55:49.177 250022 DEBUG nova.storage.rbd_utils [None req-b810852e-f780-4fab-8cf9-b7f09b8f227c 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] removing snapshot(nova-resize) on rbd image(b346e1ba-9e83-4e7f-bc03-c327d3e4173b_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Jan 20 14:55:49 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:55:49 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:55:49 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:55:49.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:55:49 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e306 do_prune osdmap full prune enabled
Jan 20 14:55:49 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/3137524621' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:55:49 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/858933901' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:55:49 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e307 e307: 3 total, 3 up, 3 in
Jan 20 14:55:49 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e307: 3 total, 3 up, 3 in
Jan 20 14:55:49 compute-0 nova_compute[250018]: 2026-01-20 14:55:49.608 250022 DEBUG nova.virt.libvirt.driver [None req-b810852e-f780-4fab-8cf9-b7f09b8f227c 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] Start _get_guest_xml network_info=[{"id": "5f645763-9f97-4686-80ab-6df7299b1235", "address": "fa:16:3e:a5:83:5d", "network": {"id": "41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1445030024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.184", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3b1b7f5b4f84b5abbc401eb577c85c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5f645763-9f", "ovs_interfaceid": "5f645763-9f97-4686-80ab-6df7299b1235", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vdb': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encrypted': False, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'guest_format': None, 'size': 0, 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [{'mount_device': '/dev/vdb', 'boot_index': None, 'device_type': 'disk', 'attachment_id': '557d23e1-f318-4f7d-be6b-c62aa5870a94', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-f3a427b1-0e50-45b1-a975-3d7aabd0195a', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'f3a427b1-0e50-45b1-a975-3d7aabd0195a', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'attaching', 'instance': 'b346e1ba-9e83-4e7f-bc03-c327d3e4173b', 'attached_at': '2026-01-20T14:55:48.000000', 'detached_at': '', 'volume_id': 'f3a427b1-0e50-45b1-a975-3d7aabd0195a', 'serial': 'f3a427b1-0e50-45b1-a975-3d7aabd0195a'}, 'disk_bus': 'virtio', 'guest_format': None, 'delete_on_termination': False, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 14:55:49 compute-0 nova_compute[250018]: 2026-01-20 14:55:49.616 250022 WARNING nova.virt.libvirt.driver [None req-b810852e-f780-4fab-8cf9-b7f09b8f227c 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:55:49 compute-0 nova_compute[250018]: 2026-01-20 14:55:49.621 250022 DEBUG nova.virt.libvirt.host [None req-b810852e-f780-4fab-8cf9-b7f09b8f227c 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 14:55:49 compute-0 nova_compute[250018]: 2026-01-20 14:55:49.622 250022 DEBUG nova.virt.libvirt.host [None req-b810852e-f780-4fab-8cf9-b7f09b8f227c 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 14:55:49 compute-0 nova_compute[250018]: 2026-01-20 14:55:49.630 250022 DEBUG nova.virt.libvirt.host [None req-b810852e-f780-4fab-8cf9-b7f09b8f227c 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 14:55:49 compute-0 nova_compute[250018]: 2026-01-20 14:55:49.631 250022 DEBUG nova.virt.libvirt.host [None req-b810852e-f780-4fab-8cf9-b7f09b8f227c 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 14:55:49 compute-0 nova_compute[250018]: 2026-01-20 14:55:49.634 250022 DEBUG nova.virt.libvirt.driver [None req-b810852e-f780-4fab-8cf9-b7f09b8f227c 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 14:55:49 compute-0 nova_compute[250018]: 2026-01-20 14:55:49.634 250022 DEBUG nova.virt.hardware [None req-b810852e-f780-4fab-8cf9-b7f09b8f227c 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 14:55:49 compute-0 nova_compute[250018]: 2026-01-20 14:55:49.636 250022 DEBUG nova.virt.hardware [None req-b810852e-f780-4fab-8cf9-b7f09b8f227c 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 14:55:49 compute-0 nova_compute[250018]: 2026-01-20 14:55:49.636 250022 DEBUG nova.virt.hardware [None req-b810852e-f780-4fab-8cf9-b7f09b8f227c 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 14:55:49 compute-0 nova_compute[250018]: 2026-01-20 14:55:49.637 250022 DEBUG nova.virt.hardware [None req-b810852e-f780-4fab-8cf9-b7f09b8f227c 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 14:55:49 compute-0 nova_compute[250018]: 2026-01-20 14:55:49.637 250022 DEBUG nova.virt.hardware [None req-b810852e-f780-4fab-8cf9-b7f09b8f227c 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 14:55:49 compute-0 nova_compute[250018]: 2026-01-20 14:55:49.638 250022 DEBUG nova.virt.hardware [None req-b810852e-f780-4fab-8cf9-b7f09b8f227c 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 14:55:49 compute-0 nova_compute[250018]: 2026-01-20 14:55:49.639 250022 DEBUG nova.virt.hardware [None req-b810852e-f780-4fab-8cf9-b7f09b8f227c 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 14:55:49 compute-0 nova_compute[250018]: 2026-01-20 14:55:49.640 250022 DEBUG nova.virt.hardware [None req-b810852e-f780-4fab-8cf9-b7f09b8f227c 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 14:55:49 compute-0 nova_compute[250018]: 2026-01-20 14:55:49.641 250022 DEBUG nova.virt.hardware [None req-b810852e-f780-4fab-8cf9-b7f09b8f227c 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 14:55:49 compute-0 nova_compute[250018]: 2026-01-20 14:55:49.641 250022 DEBUG nova.virt.hardware [None req-b810852e-f780-4fab-8cf9-b7f09b8f227c 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 14:55:49 compute-0 nova_compute[250018]: 2026-01-20 14:55:49.642 250022 DEBUG nova.virt.hardware [None req-b810852e-f780-4fab-8cf9-b7f09b8f227c 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 14:55:49 compute-0 nova_compute[250018]: 2026-01-20 14:55:49.643 250022 DEBUG nova.objects.instance [None req-b810852e-f780-4fab-8cf9-b7f09b8f227c 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Lazy-loading 'vcpu_model' on Instance uuid b346e1ba-9e83-4e7f-bc03-c327d3e4173b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:55:49 compute-0 nova_compute[250018]: 2026-01-20 14:55:49.670 250022 DEBUG oslo_concurrency.processutils [None req-b810852e-f780-4fab-8cf9-b7f09b8f227c 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:55:49 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:55:50 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2099: 321 pgs: 321 active+clean; 397 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 8.8 MiB/s rd, 4.4 MiB/s wr, 248 op/s
Jan 20 14:55:50 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:55:50 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2089473108' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:55:50 compute-0 nova_compute[250018]: 2026-01-20 14:55:50.141 250022 DEBUG oslo_concurrency.processutils [None req-b810852e-f780-4fab-8cf9-b7f09b8f227c 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:55:50 compute-0 nova_compute[250018]: 2026-01-20 14:55:50.182 250022 DEBUG oslo_concurrency.processutils [None req-b810852e-f780-4fab-8cf9-b7f09b8f227c 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:55:50 compute-0 ceph-mon[74360]: osdmap e307: 3 total, 3 up, 3 in
Jan 20 14:55:50 compute-0 ceph-mon[74360]: pgmap v2099: 321 pgs: 321 active+clean; 397 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 8.8 MiB/s rd, 4.4 MiB/s wr, 248 op/s
Jan 20 14:55:50 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/2089473108' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:55:50 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:55:50 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/40220004' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:55:50 compute-0 nova_compute[250018]: 2026-01-20 14:55:50.621 250022 DEBUG oslo_concurrency.processutils [None req-b810852e-f780-4fab-8cf9-b7f09b8f227c 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:55:50 compute-0 nova_compute[250018]: 2026-01-20 14:55:50.682 250022 DEBUG nova.virt.libvirt.vif [None req-b810852e-f780-4fab-8cf9-b7f09b8f227c 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:54:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-141692785',display_name='tempest-ServerActionsTestOtherB-server-141692785',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-141692785',id=123,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBO99uJ9+FwgjxRb/9u+f3Mj9/VKSDM+OKd66Ygsg8lEO+7bGpDEQrC5BIaSV+Na5YF+3DqUwLNmAYSN9IkTSGbRPw5y8813A+KsiNHebrpnZ7oReyT+5/zNQYafCHVAfGA==',key_name='tempest-keypair-302882914',keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:55:37Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='b3b1b7f5b4f84b5abbc401eb577c85c0',ramdisk_id='',reservation_id='r-c8v02o4r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerActionsTestOtherB-1136521362',owner_user_name='tempest-ServerActionsTestOtherB-1136521362-project-member'},tags=<?>,task_state='resize_reverting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T14:55:40Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='215db37373dc4ae5a75cbd6866f471da',uuid=b346e1ba-9e83-4e7f-bc03-c327d3e4173b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='resized') vif={"id": "5f645763-9f97-4686-80ab-6df7299b1235", "address": "fa:16:3e:a5:83:5d", "network": {"id": "41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce", "bridge": "br-int", "label": 
"tempest-ServerActionsTestOtherB-1445030024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.184", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3b1b7f5b4f84b5abbc401eb577c85c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5f645763-9f", "ovs_interfaceid": "5f645763-9f97-4686-80ab-6df7299b1235", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 14:55:50 compute-0 nova_compute[250018]: 2026-01-20 14:55:50.683 250022 DEBUG nova.network.os_vif_util [None req-b810852e-f780-4fab-8cf9-b7f09b8f227c 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Converting VIF {"id": "5f645763-9f97-4686-80ab-6df7299b1235", "address": "fa:16:3e:a5:83:5d", "network": {"id": "41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1445030024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.184", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3b1b7f5b4f84b5abbc401eb577c85c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5f645763-9f", "ovs_interfaceid": "5f645763-9f97-4686-80ab-6df7299b1235", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:55:50 compute-0 nova_compute[250018]: 2026-01-20 14:55:50.684 250022 DEBUG nova.network.os_vif_util [None req-b810852e-f780-4fab-8cf9-b7f09b8f227c 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a5:83:5d,bridge_name='br-int',has_traffic_filtering=True,id=5f645763-9f97-4686-80ab-6df7299b1235,network=Network(41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5f645763-9f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:55:50 compute-0 nova_compute[250018]: 2026-01-20 14:55:50.686 250022 DEBUG nova.virt.libvirt.driver [None req-b810852e-f780-4fab-8cf9-b7f09b8f227c 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] End _get_guest_xml xml=<domain type="kvm">
Jan 20 14:55:50 compute-0 nova_compute[250018]:   <uuid>b346e1ba-9e83-4e7f-bc03-c327d3e4173b</uuid>
Jan 20 14:55:50 compute-0 nova_compute[250018]:   <name>instance-0000007b</name>
Jan 20 14:55:50 compute-0 nova_compute[250018]:   <memory>131072</memory>
Jan 20 14:55:50 compute-0 nova_compute[250018]:   <vcpu>1</vcpu>
Jan 20 14:55:50 compute-0 nova_compute[250018]:   <metadata>
Jan 20 14:55:50 compute-0 nova_compute[250018]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 14:55:50 compute-0 nova_compute[250018]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 14:55:50 compute-0 nova_compute[250018]:       <nova:name>tempest-ServerActionsTestOtherB-server-141692785</nova:name>
Jan 20 14:55:50 compute-0 nova_compute[250018]:       <nova:creationTime>2026-01-20 14:55:49</nova:creationTime>
Jan 20 14:55:50 compute-0 nova_compute[250018]:       <nova:flavor name="m1.nano">
Jan 20 14:55:50 compute-0 nova_compute[250018]:         <nova:memory>128</nova:memory>
Jan 20 14:55:50 compute-0 nova_compute[250018]:         <nova:disk>1</nova:disk>
Jan 20 14:55:50 compute-0 nova_compute[250018]:         <nova:swap>0</nova:swap>
Jan 20 14:55:50 compute-0 nova_compute[250018]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 14:55:50 compute-0 nova_compute[250018]:         <nova:vcpus>1</nova:vcpus>
Jan 20 14:55:50 compute-0 nova_compute[250018]:       </nova:flavor>
Jan 20 14:55:50 compute-0 nova_compute[250018]:       <nova:owner>
Jan 20 14:55:50 compute-0 nova_compute[250018]:         <nova:user uuid="215db37373dc4ae5a75cbd6866f471da">tempest-ServerActionsTestOtherB-1136521362-project-member</nova:user>
Jan 20 14:55:50 compute-0 nova_compute[250018]:         <nova:project uuid="b3b1b7f5b4f84b5abbc401eb577c85c0">tempest-ServerActionsTestOtherB-1136521362</nova:project>
Jan 20 14:55:50 compute-0 nova_compute[250018]:       </nova:owner>
Jan 20 14:55:50 compute-0 nova_compute[250018]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 14:55:50 compute-0 nova_compute[250018]:       <nova:ports>
Jan 20 14:55:50 compute-0 nova_compute[250018]:         <nova:port uuid="5f645763-9f97-4686-80ab-6df7299b1235">
Jan 20 14:55:50 compute-0 nova_compute[250018]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Jan 20 14:55:50 compute-0 nova_compute[250018]:         </nova:port>
Jan 20 14:55:50 compute-0 nova_compute[250018]:       </nova:ports>
Jan 20 14:55:50 compute-0 nova_compute[250018]:     </nova:instance>
Jan 20 14:55:50 compute-0 nova_compute[250018]:   </metadata>
Jan 20 14:55:50 compute-0 nova_compute[250018]:   <sysinfo type="smbios">
Jan 20 14:55:50 compute-0 nova_compute[250018]:     <system>
Jan 20 14:55:50 compute-0 nova_compute[250018]:       <entry name="manufacturer">RDO</entry>
Jan 20 14:55:50 compute-0 nova_compute[250018]:       <entry name="product">OpenStack Compute</entry>
Jan 20 14:55:50 compute-0 nova_compute[250018]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 14:55:50 compute-0 nova_compute[250018]:       <entry name="serial">b346e1ba-9e83-4e7f-bc03-c327d3e4173b</entry>
Jan 20 14:55:50 compute-0 nova_compute[250018]:       <entry name="uuid">b346e1ba-9e83-4e7f-bc03-c327d3e4173b</entry>
Jan 20 14:55:50 compute-0 nova_compute[250018]:       <entry name="family">Virtual Machine</entry>
Jan 20 14:55:50 compute-0 nova_compute[250018]:     </system>
Jan 20 14:55:50 compute-0 nova_compute[250018]:   </sysinfo>
Jan 20 14:55:50 compute-0 nova_compute[250018]:   <os>
Jan 20 14:55:50 compute-0 nova_compute[250018]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 14:55:50 compute-0 nova_compute[250018]:     <boot dev="hd"/>
Jan 20 14:55:50 compute-0 nova_compute[250018]:     <smbios mode="sysinfo"/>
Jan 20 14:55:50 compute-0 nova_compute[250018]:   </os>
Jan 20 14:55:50 compute-0 nova_compute[250018]:   <features>
Jan 20 14:55:50 compute-0 nova_compute[250018]:     <acpi/>
Jan 20 14:55:50 compute-0 nova_compute[250018]:     <apic/>
Jan 20 14:55:50 compute-0 nova_compute[250018]:     <vmcoreinfo/>
Jan 20 14:55:50 compute-0 nova_compute[250018]:   </features>
Jan 20 14:55:50 compute-0 nova_compute[250018]:   <clock offset="utc">
Jan 20 14:55:50 compute-0 nova_compute[250018]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 14:55:50 compute-0 nova_compute[250018]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 14:55:50 compute-0 nova_compute[250018]:     <timer name="hpet" present="no"/>
Jan 20 14:55:50 compute-0 nova_compute[250018]:   </clock>
Jan 20 14:55:50 compute-0 nova_compute[250018]:   <cpu mode="custom" match="exact">
Jan 20 14:55:50 compute-0 nova_compute[250018]:     <model>Nehalem</model>
Jan 20 14:55:50 compute-0 nova_compute[250018]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 14:55:50 compute-0 nova_compute[250018]:   </cpu>
Jan 20 14:55:50 compute-0 nova_compute[250018]:   <devices>
Jan 20 14:55:50 compute-0 nova_compute[250018]:     <disk type="network" device="disk">
Jan 20 14:55:50 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 14:55:50 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/b346e1ba-9e83-4e7f-bc03-c327d3e4173b_disk">
Jan 20 14:55:50 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:55:50 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:55:50 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:55:50 compute-0 nova_compute[250018]:       </source>
Jan 20 14:55:50 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 14:55:50 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:55:50 compute-0 nova_compute[250018]:       </auth>
Jan 20 14:55:50 compute-0 nova_compute[250018]:       <target dev="vda" bus="virtio"/>
Jan 20 14:55:50 compute-0 nova_compute[250018]:     </disk>
Jan 20 14:55:50 compute-0 nova_compute[250018]:     <disk type="network" device="cdrom">
Jan 20 14:55:50 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 14:55:50 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/b346e1ba-9e83-4e7f-bc03-c327d3e4173b_disk.config">
Jan 20 14:55:50 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:55:50 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:55:50 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:55:50 compute-0 nova_compute[250018]:       </source>
Jan 20 14:55:50 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 14:55:50 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:55:50 compute-0 nova_compute[250018]:       </auth>
Jan 20 14:55:50 compute-0 nova_compute[250018]:       <target dev="sda" bus="sata"/>
Jan 20 14:55:50 compute-0 nova_compute[250018]:     </disk>
Jan 20 14:55:50 compute-0 nova_compute[250018]:     <disk type="network" device="disk">
Jan 20 14:55:50 compute-0 nova_compute[250018]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 20 14:55:50 compute-0 nova_compute[250018]:       <source protocol="rbd" name="volumes/volume-f3a427b1-0e50-45b1-a975-3d7aabd0195a">
Jan 20 14:55:50 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:55:50 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:55:50 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:55:50 compute-0 nova_compute[250018]:       </source>
Jan 20 14:55:50 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 14:55:50 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:55:50 compute-0 nova_compute[250018]:       </auth>
Jan 20 14:55:50 compute-0 nova_compute[250018]:       <target dev="vdb" bus="virtio"/>
Jan 20 14:55:50 compute-0 nova_compute[250018]:       <serial>f3a427b1-0e50-45b1-a975-3d7aabd0195a</serial>
Jan 20 14:55:50 compute-0 nova_compute[250018]:     </disk>
Jan 20 14:55:50 compute-0 nova_compute[250018]:     <interface type="ethernet">
Jan 20 14:55:50 compute-0 nova_compute[250018]:       <mac address="fa:16:3e:a5:83:5d"/>
Jan 20 14:55:50 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 14:55:50 compute-0 nova_compute[250018]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 14:55:50 compute-0 nova_compute[250018]:       <mtu size="1442"/>
Jan 20 14:55:50 compute-0 nova_compute[250018]:       <target dev="tap5f645763-9f"/>
Jan 20 14:55:50 compute-0 nova_compute[250018]:     </interface>
Jan 20 14:55:50 compute-0 nova_compute[250018]:     <serial type="pty">
Jan 20 14:55:50 compute-0 nova_compute[250018]:       <log file="/var/lib/nova/instances/b346e1ba-9e83-4e7f-bc03-c327d3e4173b/console.log" append="off"/>
Jan 20 14:55:50 compute-0 nova_compute[250018]:     </serial>
Jan 20 14:55:50 compute-0 nova_compute[250018]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 14:55:50 compute-0 nova_compute[250018]:     <video>
Jan 20 14:55:50 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 14:55:50 compute-0 nova_compute[250018]:     </video>
Jan 20 14:55:50 compute-0 nova_compute[250018]:     <input type="tablet" bus="usb"/>
Jan 20 14:55:50 compute-0 nova_compute[250018]:     <input type="keyboard" bus="usb"/>
Jan 20 14:55:50 compute-0 nova_compute[250018]:     <rng model="virtio">
Jan 20 14:55:50 compute-0 nova_compute[250018]:       <backend model="random">/dev/urandom</backend>
Jan 20 14:55:50 compute-0 nova_compute[250018]:     </rng>
Jan 20 14:55:50 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root"/>
Jan 20 14:55:50 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:55:50 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:55:50 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:55:50 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:55:50 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:55:50 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:55:50 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:55:50 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:55:50 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:55:50 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:55:50 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:55:50 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:55:50 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:55:50 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:55:50 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:55:50 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:55:50 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:55:50 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:55:50 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:55:50 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:55:50 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:55:50 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:55:50 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:55:50 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:55:50 compute-0 nova_compute[250018]:     <controller type="usb" index="0"/>
Jan 20 14:55:50 compute-0 nova_compute[250018]:     <memballoon model="virtio">
Jan 20 14:55:50 compute-0 nova_compute[250018]:       <stats period="10"/>
Jan 20 14:55:50 compute-0 nova_compute[250018]:     </memballoon>
Jan 20 14:55:50 compute-0 nova_compute[250018]:   </devices>
Jan 20 14:55:50 compute-0 nova_compute[250018]: </domain>
Jan 20 14:55:50 compute-0 nova_compute[250018]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 14:55:50 compute-0 nova_compute[250018]: 2026-01-20 14:55:50.688 250022 DEBUG nova.compute.manager [None req-b810852e-f780-4fab-8cf9-b7f09b8f227c 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] Preparing to wait for external event network-vif-plugged-5f645763-9f97-4686-80ab-6df7299b1235 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 20 14:55:50 compute-0 nova_compute[250018]: 2026-01-20 14:55:50.688 250022 DEBUG oslo_concurrency.lockutils [None req-b810852e-f780-4fab-8cf9-b7f09b8f227c 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Acquiring lock "b346e1ba-9e83-4e7f-bc03-c327d3e4173b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:55:50 compute-0 nova_compute[250018]: 2026-01-20 14:55:50.688 250022 DEBUG oslo_concurrency.lockutils [None req-b810852e-f780-4fab-8cf9-b7f09b8f227c 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Lock "b346e1ba-9e83-4e7f-bc03-c327d3e4173b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:55:50 compute-0 nova_compute[250018]: 2026-01-20 14:55:50.688 250022 DEBUG oslo_concurrency.lockutils [None req-b810852e-f780-4fab-8cf9-b7f09b8f227c 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Lock "b346e1ba-9e83-4e7f-bc03-c327d3e4173b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:55:50 compute-0 nova_compute[250018]: 2026-01-20 14:55:50.689 250022 DEBUG nova.virt.libvirt.vif [None req-b810852e-f780-4fab-8cf9-b7f09b8f227c 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:54:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-141692785',display_name='tempest-ServerActionsTestOtherB-server-141692785',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-141692785',id=123,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBO99uJ9+FwgjxRb/9u+f3Mj9/VKSDM+OKd66Ygsg8lEO+7bGpDEQrC5BIaSV+Na5YF+3DqUwLNmAYSN9IkTSGbRPw5y8813A+KsiNHebrpnZ7oReyT+5/zNQYafCHVAfGA==',key_name='tempest-keypair-302882914',keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:55:37Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='b3b1b7f5b4f84b5abbc401eb577c85c0',ramdisk_id='',reservation_id='r-c8v02o4r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerActionsTestOtherB-1136521362',owner_user_name='tempest-ServerActionsTestOtherB-1136521362-project-member'},tags=<?>,task_state='resize_reverting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T14:55:40Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='215db37373dc4ae5a75cbd6866f471da',uuid=b346e1ba-9e83-4e7f-bc03-c327d3e4173b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='resized') vif={"id": "5f645763-9f97-4686-80ab-6df7299b1235", "address": "fa:16:3e:a5:83:5d", "network": {"id": "41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce", "bridge": "br-int", "label": 
"tempest-ServerActionsTestOtherB-1445030024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.184", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3b1b7f5b4f84b5abbc401eb577c85c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5f645763-9f", "ovs_interfaceid": "5f645763-9f97-4686-80ab-6df7299b1235", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 14:55:50 compute-0 nova_compute[250018]: 2026-01-20 14:55:50.689 250022 DEBUG nova.network.os_vif_util [None req-b810852e-f780-4fab-8cf9-b7f09b8f227c 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Converting VIF {"id": "5f645763-9f97-4686-80ab-6df7299b1235", "address": "fa:16:3e:a5:83:5d", "network": {"id": "41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1445030024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.184", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3b1b7f5b4f84b5abbc401eb577c85c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5f645763-9f", "ovs_interfaceid": "5f645763-9f97-4686-80ab-6df7299b1235", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:55:50 compute-0 nova_compute[250018]: 2026-01-20 14:55:50.690 250022 DEBUG nova.network.os_vif_util [None req-b810852e-f780-4fab-8cf9-b7f09b8f227c 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a5:83:5d,bridge_name='br-int',has_traffic_filtering=True,id=5f645763-9f97-4686-80ab-6df7299b1235,network=Network(41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5f645763-9f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:55:50 compute-0 nova_compute[250018]: 2026-01-20 14:55:50.690 250022 DEBUG os_vif [None req-b810852e-f780-4fab-8cf9-b7f09b8f227c 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a5:83:5d,bridge_name='br-int',has_traffic_filtering=True,id=5f645763-9f97-4686-80ab-6df7299b1235,network=Network(41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5f645763-9f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 14:55:50 compute-0 nova_compute[250018]: 2026-01-20 14:55:50.691 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:55:50 compute-0 nova_compute[250018]: 2026-01-20 14:55:50.691 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:55:50 compute-0 nova_compute[250018]: 2026-01-20 14:55:50.692 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:55:50 compute-0 nova_compute[250018]: 2026-01-20 14:55:50.693 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:55:50 compute-0 nova_compute[250018]: 2026-01-20 14:55:50.694 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5f645763-9f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:55:50 compute-0 nova_compute[250018]: 2026-01-20 14:55:50.694 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5f645763-9f, col_values=(('external_ids', {'iface-id': '5f645763-9f97-4686-80ab-6df7299b1235', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a5:83:5d', 'vm-uuid': 'b346e1ba-9e83-4e7f-bc03-c327d3e4173b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:55:50 compute-0 nova_compute[250018]: 2026-01-20 14:55:50.695 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:55:50 compute-0 NetworkManager[48960]: <info>  [1768920950.6966] manager: (tap5f645763-9f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/233)
Jan 20 14:55:50 compute-0 nova_compute[250018]: 2026-01-20 14:55:50.699 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:55:50 compute-0 nova_compute[250018]: 2026-01-20 14:55:50.701 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:55:50 compute-0 nova_compute[250018]: 2026-01-20 14:55:50.702 250022 INFO os_vif [None req-b810852e-f780-4fab-8cf9-b7f09b8f227c 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a5:83:5d,bridge_name='br-int',has_traffic_filtering=True,id=5f645763-9f97-4686-80ab-6df7299b1235,network=Network(41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5f645763-9f')
Jan 20 14:55:50 compute-0 kernel: tap5f645763-9f: entered promiscuous mode
Jan 20 14:55:50 compute-0 NetworkManager[48960]: <info>  [1768920950.7669] manager: (tap5f645763-9f): new Tun device (/org/freedesktop/NetworkManager/Devices/234)
Jan 20 14:55:50 compute-0 ovn_controller[148666]: 2026-01-20T14:55:50Z|00463|binding|INFO|Claiming lport 5f645763-9f97-4686-80ab-6df7299b1235 for this chassis.
Jan 20 14:55:50 compute-0 ovn_controller[148666]: 2026-01-20T14:55:50Z|00464|binding|INFO|5f645763-9f97-4686-80ab-6df7299b1235: Claiming fa:16:3e:a5:83:5d 10.100.0.14
Jan 20 14:55:50 compute-0 nova_compute[250018]: 2026-01-20 14:55:50.769 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:55:50 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:55:50.778 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a5:83:5d 10.100.0.14'], port_security=['fa:16:3e:a5:83:5d 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'b346e1ba-9e83-4e7f-bc03-c327d3e4173b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b3b1b7f5b4f84b5abbc401eb577c85c0', 'neutron:revision_number': '7', 'neutron:security_group_ids': '8b11f3fb-2601-4eca-a1b6-838549d7750c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.184'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3273589e-5585-406c-9611-87f758b0e521, chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=5f645763-9f97-4686-80ab-6df7299b1235) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:55:50 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:55:50.779 160071 INFO neutron.agent.ovn.metadata.agent [-] Port 5f645763-9f97-4686-80ab-6df7299b1235 in datapath 41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce bound to our chassis
Jan 20 14:55:50 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:55:50.780 160071 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce
Jan 20 14:55:50 compute-0 ovn_controller[148666]: 2026-01-20T14:55:50Z|00465|binding|INFO|Setting lport 5f645763-9f97-4686-80ab-6df7299b1235 ovn-installed in OVS
Jan 20 14:55:50 compute-0 ovn_controller[148666]: 2026-01-20T14:55:50Z|00466|binding|INFO|Setting lport 5f645763-9f97-4686-80ab-6df7299b1235 up in Southbound
Jan 20 14:55:50 compute-0 nova_compute[250018]: 2026-01-20 14:55:50.792 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:55:50 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:55:50.791 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[292f1ca9-71fc-4692-8d67-e34660db1ff4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:55:50 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:55:50.792 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap41a1a3fe-f1 in ovnmeta-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 14:55:50 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:55:50.793 257604 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap41a1a3fe-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 14:55:50 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:55:50.793 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[0df3cd50-a0ed-4c64-a020-7005f75b80ba]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:55:50 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:55:50.794 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[f7bac28b-2f56-4bee-82c8-a7aa9c949468]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:55:50 compute-0 systemd-machined[216401]: New machine qemu-61-instance-0000007b.
Jan 20 14:55:50 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:55:50.806 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[ec1486f2-5b04-4291-b93e-520b926c47a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:55:50 compute-0 systemd[1]: Started Virtual Machine qemu-61-instance-0000007b.
Jan 20 14:55:50 compute-0 systemd-udevd[326157]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:55:50 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:55:50.829 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[70aef274-b178-4751-b6c0-70761fb161ad]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:55:50 compute-0 NetworkManager[48960]: <info>  [1768920950.8493] device (tap5f645763-9f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 14:55:50 compute-0 NetworkManager[48960]: <info>  [1768920950.8514] device (tap5f645763-9f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 14:55:50 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:55:50.867 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[461c9301-6b40-4f2a-8c40-998d49371c36]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:55:50 compute-0 systemd-udevd[326162]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:55:50 compute-0 NetworkManager[48960]: <info>  [1768920950.8773] manager: (tap41a1a3fe-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/235)
Jan 20 14:55:50 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:55:50.879 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[2ce62e3b-d8d8-473b-80d1-773147fd48b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:55:50 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:55:50.921 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[7de5de0a-c4e0-40ef-b609-c381e6416883]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:55:50 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:55:50.925 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[2147c577-fbc2-490d-b8e2-f6215fa41b9d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:55:50 compute-0 NetworkManager[48960]: <info>  [1768920950.9561] device (tap41a1a3fe-f0): carrier: link connected
Jan 20 14:55:50 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:55:50.962 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[e23ade00-dca9-442b-8993-9135c321f442]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:55:50 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:55:50.980 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[cc414a93-e248-471d-bb84-39e6f5ecdbb4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap41a1a3fe-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3c:1f:b5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 154], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 689167, 'reachable_time': 21588, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 326187, 'error': None, 'target': 'ovnmeta-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:55:50 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:55:50.994 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[e6eb7fb8-11ff-4e1e-a209-24b3621a68ae]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe3c:1fb5'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 689167, 'tstamp': 689167}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 326188, 'error': None, 'target': 'ovnmeta-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:55:51 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:55:51.015 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[a12d2696-96b6-484d-981d-032e200ba357]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap41a1a3fe-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3c:1f:b5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 154], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 689167, 'reachable_time': 21588, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 326189, 'error': None, 'target': 'ovnmeta-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:55:51 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:55:51 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:55:51 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:55:51.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:55:51 compute-0 nova_compute[250018]: 2026-01-20 14:55:51.033 250022 DEBUG nova.network.neutron [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] Updating instance_info_cache with network_info: [{"id": "5f645763-9f97-4686-80ab-6df7299b1235", "address": "fa:16:3e:a5:83:5d", "network": {"id": "41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1445030024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.184", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3b1b7f5b4f84b5abbc401eb577c85c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5f645763-9f", "ovs_interfaceid": "5f645763-9f97-4686-80ab-6df7299b1235", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:55:51 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:55:51.052 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[0cb33b4f-f8d8-4102-85ad-f6abd160d23c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:55:51 compute-0 nova_compute[250018]: 2026-01-20 14:55:51.058 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Releasing lock "refresh_cache-b346e1ba-9e83-4e7f-bc03-c327d3e4173b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:55:51 compute-0 nova_compute[250018]: 2026-01-20 14:55:51.058 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 20 14:55:51 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:55:51.117 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[8dcc760e-70d0-4fd9-89f8-bb0345843637]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:55:51 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:55:51.119 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap41a1a3fe-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:55:51 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:55:51.119 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:55:51 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:55:51.120 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap41a1a3fe-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:55:51 compute-0 nova_compute[250018]: 2026-01-20 14:55:51.122 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:55:51 compute-0 kernel: tap41a1a3fe-f0: entered promiscuous mode
Jan 20 14:55:51 compute-0 NetworkManager[48960]: <info>  [1768920951.1226] manager: (tap41a1a3fe-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/236)
Jan 20 14:55:51 compute-0 nova_compute[250018]: 2026-01-20 14:55:51.128 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:55:51 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:55:51.129 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap41a1a3fe-f0, col_values=(('external_ids', {'iface-id': '3fa2df7b-42b2-4a3b-a33b-ab37b5d6aef3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:55:51 compute-0 nova_compute[250018]: 2026-01-20 14:55:51.130 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:55:51 compute-0 ovn_controller[148666]: 2026-01-20T14:55:51Z|00467|binding|INFO|Releasing lport 3fa2df7b-42b2-4a3b-a33b-ab37b5d6aef3 from this chassis (sb_readonly=0)
Jan 20 14:55:51 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:55:51.132 160071 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 14:55:51 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:55:51.133 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[d1f553ed-65e1-444f-abcb-e4027cff1a8b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:55:51 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:55:51.133 160071 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 14:55:51 compute-0 ovn_metadata_agent[160049]: global
Jan 20 14:55:51 compute-0 ovn_metadata_agent[160049]:     log         /dev/log local0 debug
Jan 20 14:55:51 compute-0 ovn_metadata_agent[160049]:     log-tag     haproxy-metadata-proxy-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce
Jan 20 14:55:51 compute-0 ovn_metadata_agent[160049]:     user        root
Jan 20 14:55:51 compute-0 ovn_metadata_agent[160049]:     group       root
Jan 20 14:55:51 compute-0 ovn_metadata_agent[160049]:     maxconn     1024
Jan 20 14:55:51 compute-0 ovn_metadata_agent[160049]:     pidfile     /var/lib/neutron/external/pids/41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce.pid.haproxy
Jan 20 14:55:51 compute-0 ovn_metadata_agent[160049]:     daemon
Jan 20 14:55:51 compute-0 ovn_metadata_agent[160049]: 
Jan 20 14:55:51 compute-0 ovn_metadata_agent[160049]: defaults
Jan 20 14:55:51 compute-0 ovn_metadata_agent[160049]:     log global
Jan 20 14:55:51 compute-0 ovn_metadata_agent[160049]:     mode http
Jan 20 14:55:51 compute-0 ovn_metadata_agent[160049]:     option httplog
Jan 20 14:55:51 compute-0 ovn_metadata_agent[160049]:     option dontlognull
Jan 20 14:55:51 compute-0 ovn_metadata_agent[160049]:     option http-server-close
Jan 20 14:55:51 compute-0 ovn_metadata_agent[160049]:     option forwardfor
Jan 20 14:55:51 compute-0 ovn_metadata_agent[160049]:     retries                 3
Jan 20 14:55:51 compute-0 ovn_metadata_agent[160049]:     timeout http-request    30s
Jan 20 14:55:51 compute-0 ovn_metadata_agent[160049]:     timeout connect         30s
Jan 20 14:55:51 compute-0 ovn_metadata_agent[160049]:     timeout client          32s
Jan 20 14:55:51 compute-0 ovn_metadata_agent[160049]:     timeout server          32s
Jan 20 14:55:51 compute-0 ovn_metadata_agent[160049]:     timeout http-keep-alive 30s
Jan 20 14:55:51 compute-0 ovn_metadata_agent[160049]: 
Jan 20 14:55:51 compute-0 ovn_metadata_agent[160049]: 
Jan 20 14:55:51 compute-0 ovn_metadata_agent[160049]: listen listener
Jan 20 14:55:51 compute-0 ovn_metadata_agent[160049]:     bind 169.254.169.254:80
Jan 20 14:55:51 compute-0 ovn_metadata_agent[160049]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 14:55:51 compute-0 ovn_metadata_agent[160049]:     http-request add-header X-OVN-Network-ID 41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce
Jan 20 14:55:51 compute-0 ovn_metadata_agent[160049]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 14:55:51 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:55:51.134 160071 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce', 'env', 'PROCESS_TAG=haproxy-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 14:55:51 compute-0 nova_compute[250018]: 2026-01-20 14:55:51.145 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:55:51 compute-0 nova_compute[250018]: 2026-01-20 14:55:51.233 250022 DEBUG nova.compute.manager [req-190a3e7e-c825-4310-b5b9-3575de471d3c req-d60d5596-eacf-4a05-aaaa-66674afadbb4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] Received event network-vif-plugged-5f645763-9f97-4686-80ab-6df7299b1235 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:55:51 compute-0 nova_compute[250018]: 2026-01-20 14:55:51.233 250022 DEBUG oslo_concurrency.lockutils [req-190a3e7e-c825-4310-b5b9-3575de471d3c req-d60d5596-eacf-4a05-aaaa-66674afadbb4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "b346e1ba-9e83-4e7f-bc03-c327d3e4173b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:55:51 compute-0 nova_compute[250018]: 2026-01-20 14:55:51.234 250022 DEBUG oslo_concurrency.lockutils [req-190a3e7e-c825-4310-b5b9-3575de471d3c req-d60d5596-eacf-4a05-aaaa-66674afadbb4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "b346e1ba-9e83-4e7f-bc03-c327d3e4173b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:55:51 compute-0 nova_compute[250018]: 2026-01-20 14:55:51.234 250022 DEBUG oslo_concurrency.lockutils [req-190a3e7e-c825-4310-b5b9-3575de471d3c req-d60d5596-eacf-4a05-aaaa-66674afadbb4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "b346e1ba-9e83-4e7f-bc03-c327d3e4173b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:55:51 compute-0 nova_compute[250018]: 2026-01-20 14:55:51.234 250022 DEBUG nova.compute.manager [req-190a3e7e-c825-4310-b5b9-3575de471d3c req-d60d5596-eacf-4a05-aaaa-66674afadbb4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] Processing event network-vif-plugged-5f645763-9f97-4686-80ab-6df7299b1235 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 20 14:55:51 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:55:51 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:55:51 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:55:51.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:55:51 compute-0 podman[326221]: 2026-01-20 14:55:51.497413502 +0000 UTC m=+0.058635609 container create a07276957324a0b9d6bf31b3a7bc72a1188a5fe89c9b310fae9e32f68f5099c7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 20 14:55:51 compute-0 systemd[1]: Started libpod-conmon-a07276957324a0b9d6bf31b3a7bc72a1188a5fe89c9b310fae9e32f68f5099c7.scope.
Jan 20 14:55:51 compute-0 podman[326221]: 2026-01-20 14:55:51.471292868 +0000 UTC m=+0.032515005 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 14:55:51 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:55:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d13e4941e437bf6d65f7c66d76ede56e5220996d9473d9d08bccaf7b679609cc/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 14:55:51 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/40220004' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:55:51 compute-0 podman[326221]: 2026-01-20 14:55:51.594427032 +0000 UTC m=+0.155649159 container init a07276957324a0b9d6bf31b3a7bc72a1188a5fe89c9b310fae9e32f68f5099c7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3)
Jan 20 14:55:51 compute-0 podman[326221]: 2026-01-20 14:55:51.600066104 +0000 UTC m=+0.161288211 container start a07276957324a0b9d6bf31b3a7bc72a1188a5fe89c9b310fae9e32f68f5099c7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 20 14:55:51 compute-0 neutron-haproxy-ovnmeta-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce[326236]: [NOTICE]   (326240) : New worker (326242) forked
Jan 20 14:55:51 compute-0 neutron-haproxy-ovnmeta-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce[326236]: [NOTICE]   (326240) : Loading success.
Jan 20 14:55:52 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2100: 321 pgs: 2 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 309 active+clean; 405 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 8.4 MiB/s rd, 4.7 MiB/s wr, 248 op/s
Jan 20 14:55:52 compute-0 nova_compute[250018]: 2026-01-20 14:55:52.282 250022 DEBUG nova.virt.libvirt.host [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Removed pending event for b346e1ba-9e83-4e7f-bc03-c327d3e4173b due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Jan 20 14:55:52 compute-0 nova_compute[250018]: 2026-01-20 14:55:52.283 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768920952.2821534, b346e1ba-9e83-4e7f-bc03-c327d3e4173b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:55:52 compute-0 nova_compute[250018]: 2026-01-20 14:55:52.283 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] VM Started (Lifecycle Event)
Jan 20 14:55:52 compute-0 nova_compute[250018]: 2026-01-20 14:55:52.285 250022 DEBUG nova.compute.manager [None req-b810852e-f780-4fab-8cf9-b7f09b8f227c 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 14:55:52 compute-0 nova_compute[250018]: 2026-01-20 14:55:52.292 250022 INFO nova.virt.libvirt.driver [-] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] Instance running successfully.
Jan 20 14:55:52 compute-0 nova_compute[250018]: 2026-01-20 14:55:52.293 250022 DEBUG nova.virt.libvirt.driver [None req-b810852e-f780-4fab-8cf9-b7f09b8f227c 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] finish_revert_migration finished successfully. finish_revert_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11887
Jan 20 14:55:52 compute-0 nova_compute[250018]: 2026-01-20 14:55:52.319 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:55:52 compute-0 nova_compute[250018]: 2026-01-20 14:55:52.321 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: resized, current task_state: resize_reverting, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:55:52 compute-0 nova_compute[250018]: 2026-01-20 14:55:52.349 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] During sync_power_state the instance has a pending task (resize_reverting). Skip.
Jan 20 14:55:52 compute-0 nova_compute[250018]: 2026-01-20 14:55:52.350 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768920952.282286, b346e1ba-9e83-4e7f-bc03-c327d3e4173b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:55:52 compute-0 nova_compute[250018]: 2026-01-20 14:55:52.350 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] VM Paused (Lifecycle Event)
Jan 20 14:55:52 compute-0 nova_compute[250018]: 2026-01-20 14:55:52.370 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:55:52 compute-0 nova_compute[250018]: 2026-01-20 14:55:52.373 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768920952.2879717, b346e1ba-9e83-4e7f-bc03-c327d3e4173b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:55:52 compute-0 nova_compute[250018]: 2026-01-20 14:55:52.373 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] VM Resumed (Lifecycle Event)
Jan 20 14:55:52 compute-0 nova_compute[250018]: 2026-01-20 14:55:52.390 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:55:52 compute-0 nova_compute[250018]: 2026-01-20 14:55:52.393 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: resized, current task_state: resize_reverting, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:55:52 compute-0 nova_compute[250018]: 2026-01-20 14:55:52.418 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] During sync_power_state the instance has a pending task (resize_reverting). Skip.
Jan 20 14:55:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:55:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:55:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:55:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:55:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:55:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:55:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Optimize plan auto_2026-01-20_14:55:52
Jan 20 14:55:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 14:55:52 compute-0 ceph-mgr[74653]: [balancer INFO root] do_upmap
Jan 20 14:55:52 compute-0 ceph-mgr[74653]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.data', 'backups', 'volumes', 'default.rgw.log', 'default.rgw.control', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.meta', 'images', 'vms']
Jan 20 14:55:52 compute-0 ceph-mgr[74653]: [balancer INFO root] prepared 0/10 changes
Jan 20 14:55:52 compute-0 sudo[326312]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:55:52 compute-0 sudo[326312]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:55:52 compute-0 sudo[326312]: pam_unix(sudo:session): session closed for user root
Jan 20 14:55:52 compute-0 ceph-mon[74360]: pgmap v2100: 321 pgs: 2 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 309 active+clean; 405 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 8.4 MiB/s rd, 4.7 MiB/s wr, 248 op/s
Jan 20 14:55:52 compute-0 nova_compute[250018]: 2026-01-20 14:55:52.631 250022 INFO nova.compute.manager [None req-b810852e-f780-4fab-8cf9-b7f09b8f227c 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] Updating instance to original state: 'active'
Jan 20 14:55:52 compute-0 sudo[326337]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:55:52 compute-0 sudo[326337]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:55:52 compute-0 sudo[326337]: pam_unix(sudo:session): session closed for user root
Jan 20 14:55:53 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:55:53 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:55:53 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:55:53.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:55:53 compute-0 ovn_controller[148666]: 2026-01-20T14:55:53Z|00056|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:7a:3f:c2 10.100.0.4
Jan 20 14:55:53 compute-0 ovn_controller[148666]: 2026-01-20T14:55:53Z|00057|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:7a:3f:c2 10.100.0.4
Jan 20 14:55:53 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:55:53 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:55:53 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:55:53.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:55:53 compute-0 nova_compute[250018]: 2026-01-20 14:55:53.341 250022 DEBUG nova.compute.manager [req-05ed0373-f0fe-4df1-b4a7-493dc7f5d5cc req-9567cb60-ddff-421d-8826-b5149775fe7f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] Received event network-vif-plugged-5f645763-9f97-4686-80ab-6df7299b1235 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:55:53 compute-0 nova_compute[250018]: 2026-01-20 14:55:53.342 250022 DEBUG oslo_concurrency.lockutils [req-05ed0373-f0fe-4df1-b4a7-493dc7f5d5cc req-9567cb60-ddff-421d-8826-b5149775fe7f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "b346e1ba-9e83-4e7f-bc03-c327d3e4173b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:55:53 compute-0 nova_compute[250018]: 2026-01-20 14:55:53.342 250022 DEBUG oslo_concurrency.lockutils [req-05ed0373-f0fe-4df1-b4a7-493dc7f5d5cc req-9567cb60-ddff-421d-8826-b5149775fe7f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "b346e1ba-9e83-4e7f-bc03-c327d3e4173b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:55:53 compute-0 nova_compute[250018]: 2026-01-20 14:55:53.343 250022 DEBUG oslo_concurrency.lockutils [req-05ed0373-f0fe-4df1-b4a7-493dc7f5d5cc req-9567cb60-ddff-421d-8826-b5149775fe7f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "b346e1ba-9e83-4e7f-bc03-c327d3e4173b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:55:53 compute-0 nova_compute[250018]: 2026-01-20 14:55:53.343 250022 DEBUG nova.compute.manager [req-05ed0373-f0fe-4df1-b4a7-493dc7f5d5cc req-9567cb60-ddff-421d-8826-b5149775fe7f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] No waiting events found dispatching network-vif-plugged-5f645763-9f97-4686-80ab-6df7299b1235 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:55:53 compute-0 nova_compute[250018]: 2026-01-20 14:55:53.344 250022 WARNING nova.compute.manager [req-05ed0373-f0fe-4df1-b4a7-493dc7f5d5cc req-9567cb60-ddff-421d-8826-b5149775fe7f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] Received unexpected event network-vif-plugged-5f645763-9f97-4686-80ab-6df7299b1235 for instance with vm_state active and task_state None.
Jan 20 14:55:53 compute-0 ceph-mgr[74653]: client.0 ms_handle_reset on v2:192.168.122.100:6800/2542147622
Jan 20 14:55:53 compute-0 nova_compute[250018]: 2026-01-20 14:55:53.401 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:55:53 compute-0 nova_compute[250018]: 2026-01-20 14:55:53.421 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:55:54 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2101: 321 pgs: 2 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 309 active+clean; 398 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 7.9 MiB/s rd, 6.2 MiB/s wr, 291 op/s
Jan 20 14:55:54 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:55:54 compute-0 ceph-mon[74360]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #102. Immutable memtables: 0.
Jan 20 14:55:54 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:55:54.933664) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 14:55:54 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:856] [default] [JOB 59] Flushing memtable with next log file: 102
Jan 20 14:55:54 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768920954933751, "job": 59, "event": "flush_started", "num_memtables": 1, "num_entries": 1081, "num_deletes": 254, "total_data_size": 1471437, "memory_usage": 1503192, "flush_reason": "Manual Compaction"}
Jan 20 14:55:54 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:885] [default] [JOB 59] Level-0 flush table #103: started
Jan 20 14:55:54 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768920954952706, "cf_name": "default", "job": 59, "event": "table_file_creation", "file_number": 103, "file_size": 977776, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 46408, "largest_seqno": 47488, "table_properties": {"data_size": 973325, "index_size": 1911, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1477, "raw_key_size": 12214, "raw_average_key_size": 21, "raw_value_size": 963598, "raw_average_value_size": 1702, "num_data_blocks": 83, "num_entries": 566, "num_filter_entries": 566, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768920881, "oldest_key_time": 1768920881, "file_creation_time": 1768920954, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1688fb6f-f397-4aac-a0e8-874ff91d97a7", "db_session_id": "5V3N2TVXYZBCXP55EZHK", "orig_file_number": 103, "seqno_to_time_mapping": "N/A"}}
Jan 20 14:55:54 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 59] Flush lasted 19096 microseconds, and 3797 cpu microseconds.
Jan 20 14:55:54 compute-0 ceph-mon[74360]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 14:55:54 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:55:54.952767) [db/flush_job.cc:967] [default] [JOB 59] Level-0 flush table #103: 977776 bytes OK
Jan 20 14:55:54 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:55:54.952793) [db/memtable_list.cc:519] [default] Level-0 commit table #103 started
Jan 20 14:55:54 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:55:54.954232) [db/memtable_list.cc:722] [default] Level-0 commit table #103: memtable #1 done
Jan 20 14:55:54 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:55:54.954244) EVENT_LOG_v1 {"time_micros": 1768920954954240, "job": 59, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 14:55:54 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:55:54.954259) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 14:55:54 compute-0 ceph-mon[74360]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 59] Try to delete WAL files size 1466376, prev total WAL file size 1466376, number of live WAL files 2.
Jan 20 14:55:54 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000099.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:55:54 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:55:54.954896) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031353035' seq:72057594037927935, type:22 .. '6D6772737461740031373536' seq:0, type:0; will stop at (end)
Jan 20 14:55:54 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 60] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 14:55:54 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 59 Base level 0, inputs: [103(954KB)], [101(11MB)]
Jan 20 14:55:54 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768920954954975, "job": 60, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [103], "files_L6": [101], "score": -1, "input_data_size": 13450290, "oldest_snapshot_seqno": -1}
Jan 20 14:55:55 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:55:55 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:55:55 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:55:55.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:55:55 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 60] Generated table #104: 7557 keys, 10085697 bytes, temperature: kUnknown
Jan 20 14:55:55 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768920955035934, "cf_name": "default", "job": 60, "event": "table_file_creation", "file_number": 104, "file_size": 10085697, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10036481, "index_size": 29200, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18949, "raw_key_size": 195641, "raw_average_key_size": 25, "raw_value_size": 9902848, "raw_average_value_size": 1310, "num_data_blocks": 1149, "num_entries": 7557, "num_filter_entries": 7557, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768917305, "oldest_key_time": 0, "file_creation_time": 1768920954, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1688fb6f-f397-4aac-a0e8-874ff91d97a7", "db_session_id": "5V3N2TVXYZBCXP55EZHK", "orig_file_number": 104, "seqno_to_time_mapping": "N/A"}}
Jan 20 14:55:55 compute-0 ceph-mon[74360]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 14:55:55 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:55:55.036166) [db/compaction/compaction_job.cc:1663] [default] [JOB 60] Compacted 1@0 + 1@6 files to L6 => 10085697 bytes
Jan 20 14:55:55 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:55:55.037510) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 166.0 rd, 124.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 11.9 +0.0 blob) out(9.6 +0.0 blob), read-write-amplify(24.1) write-amplify(10.3) OK, records in: 8050, records dropped: 493 output_compression: NoCompression
Jan 20 14:55:55 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:55:55.037530) EVENT_LOG_v1 {"time_micros": 1768920955037521, "job": 60, "event": "compaction_finished", "compaction_time_micros": 81022, "compaction_time_cpu_micros": 32491, "output_level": 6, "num_output_files": 1, "total_output_size": 10085697, "num_input_records": 8050, "num_output_records": 7557, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 14:55:55 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000103.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:55:55 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768920955037816, "job": 60, "event": "table_file_deletion", "file_number": 103}
Jan 20 14:55:55 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000101.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:55:55 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768920955040470, "job": 60, "event": "table_file_deletion", "file_number": 101}
Jan 20 14:55:55 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:55:54.954750) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:55:55 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:55:55.040537) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:55:55 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:55:55.040543) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:55:55 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:55:55.040545) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:55:55 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:55:55.040546) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:55:55 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:55:55.040548) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:55:55 compute-0 ceph-mon[74360]: pgmap v2101: 321 pgs: 2 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 309 active+clean; 398 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 7.9 MiB/s rd, 6.2 MiB/s wr, 291 op/s
Jan 20 14:55:55 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/2206619567' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:55:55 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/251820846' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:55:55 compute-0 nova_compute[250018]: 2026-01-20 14:55:55.218 250022 DEBUG oslo_concurrency.lockutils [None req-f536dad5-be85-457b-bfae-d688b8852994 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Acquiring lock "b346e1ba-9e83-4e7f-bc03-c327d3e4173b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:55:55 compute-0 nova_compute[250018]: 2026-01-20 14:55:55.219 250022 DEBUG oslo_concurrency.lockutils [None req-f536dad5-be85-457b-bfae-d688b8852994 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Lock "b346e1ba-9e83-4e7f-bc03-c327d3e4173b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:55:55 compute-0 nova_compute[250018]: 2026-01-20 14:55:55.219 250022 DEBUG oslo_concurrency.lockutils [None req-f536dad5-be85-457b-bfae-d688b8852994 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Acquiring lock "b346e1ba-9e83-4e7f-bc03-c327d3e4173b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:55:55 compute-0 nova_compute[250018]: 2026-01-20 14:55:55.219 250022 DEBUG oslo_concurrency.lockutils [None req-f536dad5-be85-457b-bfae-d688b8852994 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Lock "b346e1ba-9e83-4e7f-bc03-c327d3e4173b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:55:55 compute-0 nova_compute[250018]: 2026-01-20 14:55:55.219 250022 DEBUG oslo_concurrency.lockutils [None req-f536dad5-be85-457b-bfae-d688b8852994 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Lock "b346e1ba-9e83-4e7f-bc03-c327d3e4173b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:55:55 compute-0 nova_compute[250018]: 2026-01-20 14:55:55.220 250022 INFO nova.compute.manager [None req-f536dad5-be85-457b-bfae-d688b8852994 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] Terminating instance
Jan 20 14:55:55 compute-0 nova_compute[250018]: 2026-01-20 14:55:55.221 250022 DEBUG nova.compute.manager [None req-f536dad5-be85-457b-bfae-d688b8852994 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 14:55:55 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:55:55 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:55:55 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:55:55.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:55:55 compute-0 kernel: tap5f645763-9f (unregistering): left promiscuous mode
Jan 20 14:55:55 compute-0 NetworkManager[48960]: <info>  [1768920955.2852] device (tap5f645763-9f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 14:55:55 compute-0 nova_compute[250018]: 2026-01-20 14:55:55.291 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:55:55 compute-0 ovn_controller[148666]: 2026-01-20T14:55:55Z|00468|binding|INFO|Releasing lport 5f645763-9f97-4686-80ab-6df7299b1235 from this chassis (sb_readonly=0)
Jan 20 14:55:55 compute-0 ovn_controller[148666]: 2026-01-20T14:55:55Z|00469|binding|INFO|Setting lport 5f645763-9f97-4686-80ab-6df7299b1235 down in Southbound
Jan 20 14:55:55 compute-0 ovn_controller[148666]: 2026-01-20T14:55:55Z|00470|binding|INFO|Removing iface tap5f645763-9f ovn-installed in OVS
Jan 20 14:55:55 compute-0 nova_compute[250018]: 2026-01-20 14:55:55.293 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:55:55 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:55:55.301 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a5:83:5d 10.100.0.14'], port_security=['fa:16:3e:a5:83:5d 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'b346e1ba-9e83-4e7f-bc03-c327d3e4173b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b3b1b7f5b4f84b5abbc401eb577c85c0', 'neutron:revision_number': '8', 'neutron:security_group_ids': '8b11f3fb-2601-4eca-a1b6-838549d7750c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.184', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3273589e-5585-406c-9611-87f758b0e521, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=5f645763-9f97-4686-80ab-6df7299b1235) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:55:55 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:55:55.303 160071 INFO neutron.agent.ovn.metadata.agent [-] Port 5f645763-9f97-4686-80ab-6df7299b1235 in datapath 41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce unbound from our chassis
Jan 20 14:55:55 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:55:55.304 160071 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 14:55:55 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:55:55.305 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[66e85405-fc15-4958-a3e6-1815ba43beb8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:55:55 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:55:55.305 160071 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce namespace which is not needed anymore
Jan 20 14:55:55 compute-0 nova_compute[250018]: 2026-01-20 14:55:55.314 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:55:55 compute-0 systemd[1]: machine-qemu\x2d61\x2dinstance\x2d0000007b.scope: Deactivated successfully.
Jan 20 14:55:55 compute-0 systemd[1]: machine-qemu\x2d61\x2dinstance\x2d0000007b.scope: Consumed 4.502s CPU time.
Jan 20 14:55:55 compute-0 systemd-machined[216401]: Machine qemu-61-instance-0000007b terminated.
Jan 20 14:55:55 compute-0 nova_compute[250018]: 2026-01-20 14:55:55.441 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:55:55 compute-0 nova_compute[250018]: 2026-01-20 14:55:55.445 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:55:55 compute-0 nova_compute[250018]: 2026-01-20 14:55:55.455 250022 INFO nova.virt.libvirt.driver [-] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] Instance destroyed successfully.
Jan 20 14:55:55 compute-0 nova_compute[250018]: 2026-01-20 14:55:55.456 250022 DEBUG nova.objects.instance [None req-f536dad5-be85-457b-bfae-d688b8852994 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Lazy-loading 'resources' on Instance uuid b346e1ba-9e83-4e7f-bc03-c327d3e4173b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:55:55 compute-0 neutron-haproxy-ovnmeta-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce[326236]: [NOTICE]   (326240) : haproxy version is 2.8.14-c23fe91
Jan 20 14:55:55 compute-0 neutron-haproxy-ovnmeta-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce[326236]: [NOTICE]   (326240) : path to executable is /usr/sbin/haproxy
Jan 20 14:55:55 compute-0 neutron-haproxy-ovnmeta-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce[326236]: [WARNING]  (326240) : Exiting Master process...
Jan 20 14:55:55 compute-0 neutron-haproxy-ovnmeta-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce[326236]: [ALERT]    (326240) : Current worker (326242) exited with code 143 (Terminated)
Jan 20 14:55:55 compute-0 neutron-haproxy-ovnmeta-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce[326236]: [WARNING]  (326240) : All workers exited. Exiting... (0)
Jan 20 14:55:55 compute-0 systemd[1]: libpod-a07276957324a0b9d6bf31b3a7bc72a1188a5fe89c9b310fae9e32f68f5099c7.scope: Deactivated successfully.
Jan 20 14:55:55 compute-0 nova_compute[250018]: 2026-01-20 14:55:55.472 250022 DEBUG nova.virt.libvirt.vif [None req-f536dad5-be85-457b-bfae-d688b8852994 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:54:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-141692785',display_name='tempest-ServerActionsTestOtherB-server-141692785',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-141692785',id=123,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBO99uJ9+FwgjxRb/9u+f3Mj9/VKSDM+OKd66Ygsg8lEO+7bGpDEQrC5BIaSV+Na5YF+3DqUwLNmAYSN9IkTSGbRPw5y8813A+KsiNHebrpnZ7oReyT+5/zNQYafCHVAfGA==',key_name='tempest-keypair-302882914',keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:55:52Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b3b1b7f5b4f84b5abbc401eb577c85c0',ramdisk_id='',reservation_id='r-c8v02o4r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-1136521362',owner_user_name='tempest-ServerActionsTestOtherB-1136521362-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T14:55:52Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='215db37373dc4ae5a75cbd6866f471da',uuid=b346e1ba-9e83-4e7f-bc03-c327d3e4173b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5f645763-9f97-4686-80ab-6df7299b1235", "address": "fa:16:3e:a5:83:5d", "network": {"id": "41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1445030024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.184", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3b1b7f5b4f84b5abbc401eb577c85c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5f645763-9f", "ovs_interfaceid": "5f645763-9f97-4686-80ab-6df7299b1235", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 14:55:55 compute-0 podman[326388]: 2026-01-20 14:55:55.473442845 +0000 UTC m=+0.070197720 container died a07276957324a0b9d6bf31b3a7bc72a1188a5fe89c9b310fae9e32f68f5099c7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0)
Jan 20 14:55:55 compute-0 nova_compute[250018]: 2026-01-20 14:55:55.473 250022 DEBUG nova.network.os_vif_util [None req-f536dad5-be85-457b-bfae-d688b8852994 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Converting VIF {"id": "5f645763-9f97-4686-80ab-6df7299b1235", "address": "fa:16:3e:a5:83:5d", "network": {"id": "41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1445030024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.184", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3b1b7f5b4f84b5abbc401eb577c85c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5f645763-9f", "ovs_interfaceid": "5f645763-9f97-4686-80ab-6df7299b1235", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:55:55 compute-0 nova_compute[250018]: 2026-01-20 14:55:55.474 250022 DEBUG nova.network.os_vif_util [None req-f536dad5-be85-457b-bfae-d688b8852994 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a5:83:5d,bridge_name='br-int',has_traffic_filtering=True,id=5f645763-9f97-4686-80ab-6df7299b1235,network=Network(41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5f645763-9f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:55:55 compute-0 nova_compute[250018]: 2026-01-20 14:55:55.474 250022 DEBUG os_vif [None req-f536dad5-be85-457b-bfae-d688b8852994 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a5:83:5d,bridge_name='br-int',has_traffic_filtering=True,id=5f645763-9f97-4686-80ab-6df7299b1235,network=Network(41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5f645763-9f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 14:55:55 compute-0 nova_compute[250018]: 2026-01-20 14:55:55.477 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:55:55 compute-0 nova_compute[250018]: 2026-01-20 14:55:55.477 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5f645763-9f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:55:55 compute-0 nova_compute[250018]: 2026-01-20 14:55:55.478 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:55:55 compute-0 nova_compute[250018]: 2026-01-20 14:55:55.480 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:55:55 compute-0 nova_compute[250018]: 2026-01-20 14:55:55.482 250022 INFO os_vif [None req-f536dad5-be85-457b-bfae-d688b8852994 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a5:83:5d,bridge_name='br-int',has_traffic_filtering=True,id=5f645763-9f97-4686-80ab-6df7299b1235,network=Network(41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5f645763-9f')
Jan 20 14:55:55 compute-0 nova_compute[250018]: 2026-01-20 14:55:55.552 250022 DEBUG nova.compute.manager [req-58489562-6138-41d5-b3b4-80ce81f2461a req-e92e446b-dc69-49b8-9d5a-ae369c4355ae 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] Received event network-vif-unplugged-5f645763-9f97-4686-80ab-6df7299b1235 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:55:55 compute-0 nova_compute[250018]: 2026-01-20 14:55:55.552 250022 DEBUG oslo_concurrency.lockutils [req-58489562-6138-41d5-b3b4-80ce81f2461a req-e92e446b-dc69-49b8-9d5a-ae369c4355ae 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "b346e1ba-9e83-4e7f-bc03-c327d3e4173b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:55:55 compute-0 nova_compute[250018]: 2026-01-20 14:55:55.553 250022 DEBUG oslo_concurrency.lockutils [req-58489562-6138-41d5-b3b4-80ce81f2461a req-e92e446b-dc69-49b8-9d5a-ae369c4355ae 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "b346e1ba-9e83-4e7f-bc03-c327d3e4173b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:55:55 compute-0 nova_compute[250018]: 2026-01-20 14:55:55.553 250022 DEBUG oslo_concurrency.lockutils [req-58489562-6138-41d5-b3b4-80ce81f2461a req-e92e446b-dc69-49b8-9d5a-ae369c4355ae 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "b346e1ba-9e83-4e7f-bc03-c327d3e4173b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:55:55 compute-0 nova_compute[250018]: 2026-01-20 14:55:55.553 250022 DEBUG nova.compute.manager [req-58489562-6138-41d5-b3b4-80ce81f2461a req-e92e446b-dc69-49b8-9d5a-ae369c4355ae 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] No waiting events found dispatching network-vif-unplugged-5f645763-9f97-4686-80ab-6df7299b1235 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:55:55 compute-0 nova_compute[250018]: 2026-01-20 14:55:55.553 250022 DEBUG nova.compute.manager [req-58489562-6138-41d5-b3b4-80ce81f2461a req-e92e446b-dc69-49b8-9d5a-ae369c4355ae 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] Received event network-vif-unplugged-5f645763-9f97-4686-80ab-6df7299b1235 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 20 14:55:55 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a07276957324a0b9d6bf31b3a7bc72a1188a5fe89c9b310fae9e32f68f5099c7-userdata-shm.mount: Deactivated successfully.
Jan 20 14:55:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-d13e4941e437bf6d65f7c66d76ede56e5220996d9473d9d08bccaf7b679609cc-merged.mount: Deactivated successfully.
Jan 20 14:55:55 compute-0 podman[326388]: 2026-01-20 14:55:55.578891733 +0000 UTC m=+0.175646608 container cleanup a07276957324a0b9d6bf31b3a7bc72a1188a5fe89c9b310fae9e32f68f5099c7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:55:55 compute-0 systemd[1]: libpod-conmon-a07276957324a0b9d6bf31b3a7bc72a1188a5fe89c9b310fae9e32f68f5099c7.scope: Deactivated successfully.
Jan 20 14:55:55 compute-0 podman[326447]: 2026-01-20 14:55:55.647747786 +0000 UTC m=+0.049672468 container remove a07276957324a0b9d6bf31b3a7bc72a1188a5fe89c9b310fae9e32f68f5099c7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 14:55:55 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:55:55.653 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[7168ffae-4ee5-45d8-8d3d-f103979b031f]: (4, ('Tue Jan 20 02:55:55 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce (a07276957324a0b9d6bf31b3a7bc72a1188a5fe89c9b310fae9e32f68f5099c7)\na07276957324a0b9d6bf31b3a7bc72a1188a5fe89c9b310fae9e32f68f5099c7\nTue Jan 20 02:55:55 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce (a07276957324a0b9d6bf31b3a7bc72a1188a5fe89c9b310fae9e32f68f5099c7)\na07276957324a0b9d6bf31b3a7bc72a1188a5fe89c9b310fae9e32f68f5099c7\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:55:55 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:55:55.655 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[c78920e8-4a68-46be-a448-ee9b17354cf1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:55:55 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:55:55.656 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap41a1a3fe-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:55:55 compute-0 nova_compute[250018]: 2026-01-20 14:55:55.706 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:55:55 compute-0 kernel: tap41a1a3fe-f0: left promiscuous mode
Jan 20 14:55:55 compute-0 nova_compute[250018]: 2026-01-20 14:55:55.722 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:55:55 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:55:55.724 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[cb59a5dc-6e72-45ec-ac4b-5a26fca63510]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:55:55 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:55:55.739 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[077422ac-345a-4caf-8ba4-e0649dfb4da5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:55:55 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:55:55.741 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[956e8790-436f-4a19-9c03-62298802990c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:55:55 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:55:55.756 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[4f4151bb-a753-4e24-8de2-8c83dac886ae]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 689158, 'reachable_time': 33440, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 326462, 'error': None, 'target': 'ovnmeta-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:55:55 compute-0 systemd[1]: run-netns-ovnmeta\x2d41a1a3fe\x2df6f8\x2d4375\x2d9b0f\x2da4d4bb269cce.mount: Deactivated successfully.
Jan 20 14:55:55 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:55:55.760 160517 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 14:55:55 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:55:55.760 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[ad00b126-0da0-49ed-8e75-49808dacdfb2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:55:55 compute-0 nova_compute[250018]: 2026-01-20 14:55:55.883 250022 INFO nova.virt.libvirt.driver [None req-f536dad5-be85-457b-bfae-d688b8852994 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] Deleting instance files /var/lib/nova/instances/b346e1ba-9e83-4e7f-bc03-c327d3e4173b_del
Jan 20 14:55:55 compute-0 nova_compute[250018]: 2026-01-20 14:55:55.884 250022 INFO nova.virt.libvirt.driver [None req-f536dad5-be85-457b-bfae-d688b8852994 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] Deletion of /var/lib/nova/instances/b346e1ba-9e83-4e7f-bc03-c327d3e4173b_del complete
Jan 20 14:55:55 compute-0 nova_compute[250018]: 2026-01-20 14:55:55.929 250022 INFO nova.compute.manager [None req-f536dad5-be85-457b-bfae-d688b8852994 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] Took 0.71 seconds to destroy the instance on the hypervisor.
Jan 20 14:55:55 compute-0 nova_compute[250018]: 2026-01-20 14:55:55.929 250022 DEBUG oslo.service.loopingcall [None req-f536dad5-be85-457b-bfae-d688b8852994 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 14:55:55 compute-0 nova_compute[250018]: 2026-01-20 14:55:55.930 250022 DEBUG nova.compute.manager [-] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 14:55:55 compute-0 nova_compute[250018]: 2026-01-20 14:55:55.930 250022 DEBUG nova.network.neutron [-] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 14:55:56 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2102: 321 pgs: 321 active+clean; 399 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 9.0 MiB/s rd, 7.3 MiB/s wr, 316 op/s
Jan 20 14:55:56 compute-0 ceph-mon[74360]: pgmap v2102: 321 pgs: 321 active+clean; 399 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 9.0 MiB/s rd, 7.3 MiB/s wr, 316 op/s
Jan 20 14:55:57 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:55:57 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:55:57 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:55:57.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:55:57 compute-0 nova_compute[250018]: 2026-01-20 14:55:57.148 250022 DEBUG nova.network.neutron [-] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:55:57 compute-0 nova_compute[250018]: 2026-01-20 14:55:57.168 250022 INFO nova.compute.manager [-] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] Took 1.24 seconds to deallocate network for instance.
Jan 20 14:55:57 compute-0 nova_compute[250018]: 2026-01-20 14:55:57.267 250022 DEBUG nova.compute.manager [req-46553763-caa9-408c-9ad1-8301142b1a7f req-50d3d0b3-a7e5-4f3f-94a1-af7daf0219c8 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] Received event network-vif-deleted-5f645763-9f97-4686-80ab-6df7299b1235 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:55:57 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:55:57 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:55:57 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:55:57.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:55:57 compute-0 nova_compute[250018]: 2026-01-20 14:55:57.362 250022 INFO nova.compute.manager [None req-f536dad5-be85-457b-bfae-d688b8852994 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] Took 0.19 seconds to detach 1 volumes for instance.
Jan 20 14:55:57 compute-0 nova_compute[250018]: 2026-01-20 14:55:57.404 250022 DEBUG oslo_concurrency.lockutils [None req-f536dad5-be85-457b-bfae-d688b8852994 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:55:57 compute-0 nova_compute[250018]: 2026-01-20 14:55:57.405 250022 DEBUG oslo_concurrency.lockutils [None req-f536dad5-be85-457b-bfae-d688b8852994 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:55:57 compute-0 nova_compute[250018]: 2026-01-20 14:55:57.469 250022 DEBUG oslo_concurrency.processutils [None req-f536dad5-be85-457b-bfae-d688b8852994 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:55:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 14:55:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:55:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 14:55:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:55:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:55:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:55:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:55:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:55:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:55:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:55:57 compute-0 nova_compute[250018]: 2026-01-20 14:55:57.841 250022 DEBUG nova.compute.manager [req-70296557-1a5a-469f-a787-eb1cdbaf2483 req-28ec1d0c-c144-44df-932b-945b313d0c33 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] Received event network-vif-plugged-5f645763-9f97-4686-80ab-6df7299b1235 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:55:57 compute-0 nova_compute[250018]: 2026-01-20 14:55:57.842 250022 DEBUG oslo_concurrency.lockutils [req-70296557-1a5a-469f-a787-eb1cdbaf2483 req-28ec1d0c-c144-44df-932b-945b313d0c33 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "b346e1ba-9e83-4e7f-bc03-c327d3e4173b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:55:57 compute-0 nova_compute[250018]: 2026-01-20 14:55:57.842 250022 DEBUG oslo_concurrency.lockutils [req-70296557-1a5a-469f-a787-eb1cdbaf2483 req-28ec1d0c-c144-44df-932b-945b313d0c33 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "b346e1ba-9e83-4e7f-bc03-c327d3e4173b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:55:57 compute-0 nova_compute[250018]: 2026-01-20 14:55:57.842 250022 DEBUG oslo_concurrency.lockutils [req-70296557-1a5a-469f-a787-eb1cdbaf2483 req-28ec1d0c-c144-44df-932b-945b313d0c33 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "b346e1ba-9e83-4e7f-bc03-c327d3e4173b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:55:57 compute-0 nova_compute[250018]: 2026-01-20 14:55:57.843 250022 DEBUG nova.compute.manager [req-70296557-1a5a-469f-a787-eb1cdbaf2483 req-28ec1d0c-c144-44df-932b-945b313d0c33 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] No waiting events found dispatching network-vif-plugged-5f645763-9f97-4686-80ab-6df7299b1235 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:55:57 compute-0 nova_compute[250018]: 2026-01-20 14:55:57.843 250022 WARNING nova.compute.manager [req-70296557-1a5a-469f-a787-eb1cdbaf2483 req-28ec1d0c-c144-44df-932b-945b313d0c33 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] Received unexpected event network-vif-plugged-5f645763-9f97-4686-80ab-6df7299b1235 for instance with vm_state deleted and task_state None.
Jan 20 14:55:57 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:55:57 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3540996139' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:55:57 compute-0 nova_compute[250018]: 2026-01-20 14:55:57.877 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:55:57 compute-0 nova_compute[250018]: 2026-01-20 14:55:57.895 250022 DEBUG oslo_concurrency.processutils [None req-f536dad5-be85-457b-bfae-d688b8852994 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:55:57 compute-0 nova_compute[250018]: 2026-01-20 14:55:57.901 250022 DEBUG nova.compute.provider_tree [None req-f536dad5-be85-457b-bfae-d688b8852994 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:55:57 compute-0 nova_compute[250018]: 2026-01-20 14:55:57.921 250022 DEBUG nova.scheduler.client.report [None req-f536dad5-be85-457b-bfae-d688b8852994 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:55:57 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3540996139' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:55:57 compute-0 nova_compute[250018]: 2026-01-20 14:55:57.942 250022 DEBUG oslo_concurrency.lockutils [None req-f536dad5-be85-457b-bfae-d688b8852994 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.537s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:55:57 compute-0 nova_compute[250018]: 2026-01-20 14:55:57.978 250022 INFO nova.scheduler.client.report [None req-f536dad5-be85-457b-bfae-d688b8852994 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Deleted allocations for instance b346e1ba-9e83-4e7f-bc03-c327d3e4173b
Jan 20 14:55:58 compute-0 nova_compute[250018]: 2026-01-20 14:55:58.032 250022 DEBUG oslo_concurrency.lockutils [None req-f536dad5-be85-457b-bfae-d688b8852994 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Lock "b346e1ba-9e83-4e7f-bc03-c327d3e4173b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.814s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:55:58 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2103: 321 pgs: 321 active+clean; 392 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 7.9 MiB/s rd, 6.4 MiB/s wr, 368 op/s
Jan 20 14:55:58 compute-0 nova_compute[250018]: 2026-01-20 14:55:58.424 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:55:58 compute-0 ceph-mon[74360]: pgmap v2103: 321 pgs: 321 active+clean; 392 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 7.9 MiB/s rd, 6.4 MiB/s wr, 368 op/s
Jan 20 14:55:59 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:55:59 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:55:59 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:55:59.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:55:59 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:55:59 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:55:59 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:55:59.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:55:59 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:55:59 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e307 do_prune osdmap full prune enabled
Jan 20 14:55:59 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e308 e308: 3 total, 3 up, 3 in
Jan 20 14:55:59 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e308: 3 total, 3 up, 3 in
Jan 20 14:56:00 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2105: 321 pgs: 321 active+clean; 364 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 6.1 MiB/s rd, 5.0 MiB/s wr, 393 op/s
Jan 20 14:56:00 compute-0 nova_compute[250018]: 2026-01-20 14:56:00.480 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:00 compute-0 ceph-mon[74360]: osdmap e308: 3 total, 3 up, 3 in
Jan 20 14:56:00 compute-0 ceph-mon[74360]: pgmap v2105: 321 pgs: 321 active+clean; 364 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 6.1 MiB/s rd, 5.0 MiB/s wr, 393 op/s
Jan 20 14:56:00 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/242987236' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:56:01 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:56:01 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:56:01 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:56:01.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:56:01 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:56:01 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:56:01 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:56:01.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:56:01 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/533248142' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:56:01 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/2444160729' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:56:02 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2106: 321 pgs: 321 active+clean; 307 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 6.3 MiB/s rd, 4.7 MiB/s wr, 387 op/s
Jan 20 14:56:02 compute-0 ceph-mon[74360]: pgmap v2106: 321 pgs: 321 active+clean; 307 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 6.3 MiB/s rd, 4.7 MiB/s wr, 387 op/s
Jan 20 14:56:03 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:56:03 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:56:03 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:56:03.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:56:03 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:56:03 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:56:03 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:56:03.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:56:03 compute-0 nova_compute[250018]: 2026-01-20 14:56:03.427 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:04 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2107: 321 pgs: 321 active+clean; 303 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 6.1 MiB/s rd, 3.5 MiB/s wr, 364 op/s
Jan 20 14:56:04 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:56:05 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:56:05 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:56:05 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:56:05.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:56:05 compute-0 ceph-mon[74360]: pgmap v2107: 321 pgs: 321 active+clean; 303 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 6.1 MiB/s rd, 3.5 MiB/s wr, 364 op/s
Jan 20 14:56:05 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:56:05 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:56:05 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:56:05.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:56:05 compute-0 nova_compute[250018]: 2026-01-20 14:56:05.482 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:05 compute-0 nova_compute[250018]: 2026-01-20 14:56:05.855 250022 DEBUG oslo_concurrency.lockutils [None req-4fae9f7c-d5cc-473e-822b-a94febbba8b1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Acquiring lock "ce67f778-2918-4ec5-b8d6-ab1f4346d817" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:56:05 compute-0 nova_compute[250018]: 2026-01-20 14:56:05.856 250022 DEBUG oslo_concurrency.lockutils [None req-4fae9f7c-d5cc-473e-822b-a94febbba8b1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Lock "ce67f778-2918-4ec5-b8d6-ab1f4346d817" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:56:05 compute-0 nova_compute[250018]: 2026-01-20 14:56:05.914 250022 DEBUG nova.compute.manager [None req-4fae9f7c-d5cc-473e-822b-a94febbba8b1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: ce67f778-2918-4ec5-b8d6-ab1f4346d817] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 14:56:06 compute-0 nova_compute[250018]: 2026-01-20 14:56:06.008 250022 DEBUG oslo_concurrency.lockutils [None req-4fae9f7c-d5cc-473e-822b-a94febbba8b1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:56:06 compute-0 nova_compute[250018]: 2026-01-20 14:56:06.008 250022 DEBUG oslo_concurrency.lockutils [None req-4fae9f7c-d5cc-473e-822b-a94febbba8b1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:56:06 compute-0 nova_compute[250018]: 2026-01-20 14:56:06.015 250022 DEBUG nova.virt.hardware [None req-4fae9f7c-d5cc-473e-822b-a94febbba8b1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 14:56:06 compute-0 nova_compute[250018]: 2026-01-20 14:56:06.016 250022 INFO nova.compute.claims [None req-4fae9f7c-d5cc-473e-822b-a94febbba8b1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: ce67f778-2918-4ec5-b8d6-ab1f4346d817] Claim successful on node compute-0.ctlplane.example.com
Jan 20 14:56:06 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2108: 321 pgs: 321 active+clean; 363 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 3.9 MiB/s wr, 290 op/s
Jan 20 14:56:06 compute-0 nova_compute[250018]: 2026-01-20 14:56:06.171 250022 DEBUG oslo_concurrency.processutils [None req-4fae9f7c-d5cc-473e-822b-a94febbba8b1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:56:06 compute-0 podman[326512]: 2026-01-20 14:56:06.456180695 +0000 UTC m=+0.047181850 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 20 14:56:06 compute-0 podman[326511]: 2026-01-20 14:56:06.530451174 +0000 UTC m=+0.121452249 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.name=CentOS 
Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 20 14:56:06 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:56:06 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4107789460' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:56:06 compute-0 nova_compute[250018]: 2026-01-20 14:56:06.602 250022 DEBUG oslo_concurrency.processutils [None req-4fae9f7c-d5cc-473e-822b-a94febbba8b1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:56:06 compute-0 nova_compute[250018]: 2026-01-20 14:56:06.608 250022 DEBUG nova.compute.provider_tree [None req-4fae9f7c-d5cc-473e-822b-a94febbba8b1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:56:06 compute-0 nova_compute[250018]: 2026-01-20 14:56:06.622 250022 DEBUG nova.scheduler.client.report [None req-4fae9f7c-d5cc-473e-822b-a94febbba8b1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:56:06 compute-0 nova_compute[250018]: 2026-01-20 14:56:06.642 250022 DEBUG oslo_concurrency.lockutils [None req-4fae9f7c-d5cc-473e-822b-a94febbba8b1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.633s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:56:06 compute-0 nova_compute[250018]: 2026-01-20 14:56:06.642 250022 DEBUG nova.compute.manager [None req-4fae9f7c-d5cc-473e-822b-a94febbba8b1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: ce67f778-2918-4ec5-b8d6-ab1f4346d817] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 14:56:06 compute-0 nova_compute[250018]: 2026-01-20 14:56:06.690 250022 DEBUG nova.compute.manager [None req-4fae9f7c-d5cc-473e-822b-a94febbba8b1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: ce67f778-2918-4ec5-b8d6-ab1f4346d817] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 20 14:56:06 compute-0 nova_compute[250018]: 2026-01-20 14:56:06.691 250022 DEBUG nova.network.neutron [None req-4fae9f7c-d5cc-473e-822b-a94febbba8b1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: ce67f778-2918-4ec5-b8d6-ab1f4346d817] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 14:56:06 compute-0 nova_compute[250018]: 2026-01-20 14:56:06.712 250022 INFO nova.virt.libvirt.driver [None req-4fae9f7c-d5cc-473e-822b-a94febbba8b1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: ce67f778-2918-4ec5-b8d6-ab1f4346d817] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 14:56:06 compute-0 nova_compute[250018]: 2026-01-20 14:56:06.744 250022 DEBUG nova.compute.manager [None req-4fae9f7c-d5cc-473e-822b-a94febbba8b1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: ce67f778-2918-4ec5-b8d6-ab1f4346d817] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 14:56:06 compute-0 nova_compute[250018]: 2026-01-20 14:56:06.844 250022 DEBUG nova.compute.manager [None req-4fae9f7c-d5cc-473e-822b-a94febbba8b1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: ce67f778-2918-4ec5-b8d6-ab1f4346d817] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 14:56:06 compute-0 nova_compute[250018]: 2026-01-20 14:56:06.845 250022 DEBUG nova.virt.libvirt.driver [None req-4fae9f7c-d5cc-473e-822b-a94febbba8b1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: ce67f778-2918-4ec5-b8d6-ab1f4346d817] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 14:56:06 compute-0 nova_compute[250018]: 2026-01-20 14:56:06.845 250022 INFO nova.virt.libvirt.driver [None req-4fae9f7c-d5cc-473e-822b-a94febbba8b1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: ce67f778-2918-4ec5-b8d6-ab1f4346d817] Creating image(s)
Jan 20 14:56:06 compute-0 nova_compute[250018]: 2026-01-20 14:56:06.868 250022 DEBUG nova.storage.rbd_utils [None req-4fae9f7c-d5cc-473e-822b-a94febbba8b1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] rbd image ce67f778-2918-4ec5-b8d6-ab1f4346d817_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:56:06 compute-0 nova_compute[250018]: 2026-01-20 14:56:06.892 250022 DEBUG nova.storage.rbd_utils [None req-4fae9f7c-d5cc-473e-822b-a94febbba8b1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] rbd image ce67f778-2918-4ec5-b8d6-ab1f4346d817_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:56:06 compute-0 nova_compute[250018]: 2026-01-20 14:56:06.921 250022 DEBUG nova.storage.rbd_utils [None req-4fae9f7c-d5cc-473e-822b-a94febbba8b1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] rbd image ce67f778-2918-4ec5-b8d6-ab1f4346d817_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:56:06 compute-0 nova_compute[250018]: 2026-01-20 14:56:06.925 250022 DEBUG oslo_concurrency.processutils [None req-4fae9f7c-d5cc-473e-822b-a94febbba8b1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:56:06 compute-0 nova_compute[250018]: 2026-01-20 14:56:06.955 250022 DEBUG nova.policy [None req-4fae9f7c-d5cc-473e-822b-a94febbba8b1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '395a5c503218411284bc94c45263d1fb', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ca6cd0afe0ab41e3ab36d21a4129f734', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 20 14:56:06 compute-0 nova_compute[250018]: 2026-01-20 14:56:06.991 250022 DEBUG oslo_concurrency.processutils [None req-4fae9f7c-d5cc-473e-822b-a94febbba8b1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:56:06 compute-0 nova_compute[250018]: 2026-01-20 14:56:06.992 250022 DEBUG oslo_concurrency.lockutils [None req-4fae9f7c-d5cc-473e-822b-a94febbba8b1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Acquiring lock "82d5c1918fd7c974214c7a48c1793a7a82560462" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:56:06 compute-0 nova_compute[250018]: 2026-01-20 14:56:06.992 250022 DEBUG oslo_concurrency.lockutils [None req-4fae9f7c-d5cc-473e-822b-a94febbba8b1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:56:06 compute-0 nova_compute[250018]: 2026-01-20 14:56:06.993 250022 DEBUG oslo_concurrency.lockutils [None req-4fae9f7c-d5cc-473e-822b-a94febbba8b1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:56:07 compute-0 nova_compute[250018]: 2026-01-20 14:56:07.016 250022 DEBUG nova.storage.rbd_utils [None req-4fae9f7c-d5cc-473e-822b-a94febbba8b1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] rbd image ce67f778-2918-4ec5-b8d6-ab1f4346d817_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:56:07 compute-0 nova_compute[250018]: 2026-01-20 14:56:07.019 250022 DEBUG oslo_concurrency.processutils [None req-4fae9f7c-d5cc-473e-822b-a94febbba8b1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 ce67f778-2918-4ec5-b8d6-ab1f4346d817_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:56:07 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:56:07 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:56:07 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:56:07.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:56:07 compute-0 ceph-mon[74360]: pgmap v2108: 321 pgs: 321 active+clean; 363 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 3.9 MiB/s wr, 290 op/s
Jan 20 14:56:07 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/1903058217' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:56:07 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/4107789460' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:56:07 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/3100921922' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:56:07 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/3304163422' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:56:07 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:56:07 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:56:07 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:56:07.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:56:07 compute-0 nova_compute[250018]: 2026-01-20 14:56:07.320 250022 DEBUG oslo_concurrency.processutils [None req-4fae9f7c-d5cc-473e-822b-a94febbba8b1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 ce67f778-2918-4ec5-b8d6-ab1f4346d817_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.301s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:56:07 compute-0 nova_compute[250018]: 2026-01-20 14:56:07.391 250022 DEBUG nova.storage.rbd_utils [None req-4fae9f7c-d5cc-473e-822b-a94febbba8b1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] resizing rbd image ce67f778-2918-4ec5-b8d6-ab1f4346d817_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 20 14:56:07 compute-0 nova_compute[250018]: 2026-01-20 14:56:07.502 250022 DEBUG nova.objects.instance [None req-4fae9f7c-d5cc-473e-822b-a94febbba8b1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Lazy-loading 'migration_context' on Instance uuid ce67f778-2918-4ec5-b8d6-ab1f4346d817 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:56:07 compute-0 nova_compute[250018]: 2026-01-20 14:56:07.555 250022 DEBUG nova.virt.libvirt.driver [None req-4fae9f7c-d5cc-473e-822b-a94febbba8b1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: ce67f778-2918-4ec5-b8d6-ab1f4346d817] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 14:56:07 compute-0 nova_compute[250018]: 2026-01-20 14:56:07.556 250022 DEBUG nova.virt.libvirt.driver [None req-4fae9f7c-d5cc-473e-822b-a94febbba8b1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: ce67f778-2918-4ec5-b8d6-ab1f4346d817] Ensure instance console log exists: /var/lib/nova/instances/ce67f778-2918-4ec5-b8d6-ab1f4346d817/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 14:56:07 compute-0 nova_compute[250018]: 2026-01-20 14:56:07.557 250022 DEBUG oslo_concurrency.lockutils [None req-4fae9f7c-d5cc-473e-822b-a94febbba8b1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:56:07 compute-0 nova_compute[250018]: 2026-01-20 14:56:07.558 250022 DEBUG oslo_concurrency.lockutils [None req-4fae9f7c-d5cc-473e-822b-a94febbba8b1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:56:07 compute-0 nova_compute[250018]: 2026-01-20 14:56:07.558 250022 DEBUG oslo_concurrency.lockutils [None req-4fae9f7c-d5cc-473e-822b-a94febbba8b1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:56:08 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2109: 321 pgs: 321 active+clean; 376 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 4.4 MiB/s wr, 248 op/s
Jan 20 14:56:08 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/2754763687' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:56:08 compute-0 nova_compute[250018]: 2026-01-20 14:56:08.389 250022 DEBUG nova.network.neutron [None req-4fae9f7c-d5cc-473e-822b-a94febbba8b1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: ce67f778-2918-4ec5-b8d6-ab1f4346d817] Successfully created port: facff57d-50c3-4c23-9db2-5a75346ccad9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 20 14:56:08 compute-0 nova_compute[250018]: 2026-01-20 14:56:08.429 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:09 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:56:09 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:56:09 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:56:09.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:56:09 compute-0 ceph-mon[74360]: pgmap v2109: 321 pgs: 321 active+clean; 376 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 4.4 MiB/s wr, 248 op/s
Jan 20 14:56:09 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:56:09 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:56:09 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:56:09.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:56:09 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:56:10 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2110: 321 pgs: 321 active+clean; 392 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 4.8 MiB/s wr, 232 op/s
Jan 20 14:56:10 compute-0 ceph-mon[74360]: pgmap v2110: 321 pgs: 321 active+clean; 392 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 4.8 MiB/s wr, 232 op/s
Jan 20 14:56:10 compute-0 nova_compute[250018]: 2026-01-20 14:56:10.454 250022 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768920955.4529867, b346e1ba-9e83-4e7f-bc03-c327d3e4173b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:56:10 compute-0 nova_compute[250018]: 2026-01-20 14:56:10.455 250022 INFO nova.compute.manager [-] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] VM Stopped (Lifecycle Event)
Jan 20 14:56:10 compute-0 nova_compute[250018]: 2026-01-20 14:56:10.484 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:10 compute-0 nova_compute[250018]: 2026-01-20 14:56:10.486 250022 DEBUG nova.compute.manager [None req-7481c133-51a3-423c-9a7b-1e466753e415 - - - - - -] [instance: b346e1ba-9e83-4e7f-bc03-c327d3e4173b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:56:11 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:56:11 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:56:11 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:56:11.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:56:11 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:56:11 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:56:11 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:56:11.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:56:11 compute-0 nova_compute[250018]: 2026-01-20 14:56:11.307 250022 DEBUG nova.network.neutron [None req-4fae9f7c-d5cc-473e-822b-a94febbba8b1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: ce67f778-2918-4ec5-b8d6-ab1f4346d817] Successfully updated port: facff57d-50c3-4c23-9db2-5a75346ccad9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 20 14:56:11 compute-0 nova_compute[250018]: 2026-01-20 14:56:11.339 250022 DEBUG oslo_concurrency.lockutils [None req-4fae9f7c-d5cc-473e-822b-a94febbba8b1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Acquiring lock "refresh_cache-ce67f778-2918-4ec5-b8d6-ab1f4346d817" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:56:11 compute-0 nova_compute[250018]: 2026-01-20 14:56:11.339 250022 DEBUG oslo_concurrency.lockutils [None req-4fae9f7c-d5cc-473e-822b-a94febbba8b1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Acquired lock "refresh_cache-ce67f778-2918-4ec5-b8d6-ab1f4346d817" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:56:11 compute-0 nova_compute[250018]: 2026-01-20 14:56:11.339 250022 DEBUG nova.network.neutron [None req-4fae9f7c-d5cc-473e-822b-a94febbba8b1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: ce67f778-2918-4ec5-b8d6-ab1f4346d817] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:56:11 compute-0 nova_compute[250018]: 2026-01-20 14:56:11.491 250022 DEBUG nova.compute.manager [req-604dbb63-eeb6-4421-9b81-ff0e42757fe8 req-68182b96-1fce-4cbe-8d56-1a6b5e3bc24c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: ce67f778-2918-4ec5-b8d6-ab1f4346d817] Received event network-changed-facff57d-50c3-4c23-9db2-5a75346ccad9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:56:11 compute-0 nova_compute[250018]: 2026-01-20 14:56:11.492 250022 DEBUG nova.compute.manager [req-604dbb63-eeb6-4421-9b81-ff0e42757fe8 req-68182b96-1fce-4cbe-8d56-1a6b5e3bc24c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: ce67f778-2918-4ec5-b8d6-ab1f4346d817] Refreshing instance network info cache due to event network-changed-facff57d-50c3-4c23-9db2-5a75346ccad9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:56:11 compute-0 nova_compute[250018]: 2026-01-20 14:56:11.492 250022 DEBUG oslo_concurrency.lockutils [req-604dbb63-eeb6-4421-9b81-ff0e42757fe8 req-68182b96-1fce-4cbe-8d56-1a6b5e3bc24c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-ce67f778-2918-4ec5-b8d6-ab1f4346d817" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:56:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 14:56:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:56:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 20 14:56:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:56:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.008833410396891866 of space, bias 1.0, pg target 2.65002311906756 quantized to 32 (current 32)
Jan 20 14:56:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:56:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.2722757305043737e-06 of space, bias 1.0, pg target 0.0003791381676903034 quantized to 32 (current 32)
Jan 20 14:56:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:56:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:56:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:56:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5671365362693095 quantized to 32 (current 32)
Jan 20 14:56:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:56:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017332030522985297 quantized to 16 (current 32)
Jan 20 14:56:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:56:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:56:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:56:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.0002166503815373162 quantized to 32 (current 32)
Jan 20 14:56:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:56:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018415282430671877 quantized to 32 (current 32)
Jan 20 14:56:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:56:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:56:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:56:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004333007630746324 quantized to 32 (current 32)
Jan 20 14:56:11 compute-0 nova_compute[250018]: 2026-01-20 14:56:11.643 250022 DEBUG nova.network.neutron [None req-4fae9f7c-d5cc-473e-822b-a94febbba8b1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: ce67f778-2918-4ec5-b8d6-ab1f4346d817] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 14:56:12 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2111: 321 pgs: 321 active+clean; 368 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 4.8 MiB/s wr, 229 op/s
Jan 20 14:56:12 compute-0 sudo[326723]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:56:12 compute-0 sudo[326723]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:56:12 compute-0 sudo[326723]: pam_unix(sudo:session): session closed for user root
Jan 20 14:56:12 compute-0 sudo[326748]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:56:12 compute-0 sudo[326748]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:56:12 compute-0 sudo[326748]: pam_unix(sudo:session): session closed for user root
Jan 20 14:56:13 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:56:13 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:56:13 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:56:13.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:56:13 compute-0 ceph-mon[74360]: pgmap v2111: 321 pgs: 321 active+clean; 368 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 4.8 MiB/s wr, 229 op/s
Jan 20 14:56:13 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/2494222947' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:56:13 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:56:13 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:56:13 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:56:13.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:56:13 compute-0 nova_compute[250018]: 2026-01-20 14:56:13.430 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:13 compute-0 nova_compute[250018]: 2026-01-20 14:56:13.813 250022 DEBUG nova.network.neutron [None req-4fae9f7c-d5cc-473e-822b-a94febbba8b1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: ce67f778-2918-4ec5-b8d6-ab1f4346d817] Updating instance_info_cache with network_info: [{"id": "facff57d-50c3-4c23-9db2-5a75346ccad9", "address": "fa:16:3e:d9:47:72", "network": {"id": "f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c", "bridge": "br-int", "label": "tempest-ServersTestJSON-1745321011-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca6cd0afe0ab41e3ab36d21a4129f734", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfacff57d-50", "ovs_interfaceid": "facff57d-50c3-4c23-9db2-5a75346ccad9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:56:13 compute-0 nova_compute[250018]: 2026-01-20 14:56:13.833 250022 DEBUG oslo_concurrency.lockutils [None req-4fae9f7c-d5cc-473e-822b-a94febbba8b1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Releasing lock "refresh_cache-ce67f778-2918-4ec5-b8d6-ab1f4346d817" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:56:13 compute-0 nova_compute[250018]: 2026-01-20 14:56:13.834 250022 DEBUG nova.compute.manager [None req-4fae9f7c-d5cc-473e-822b-a94febbba8b1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: ce67f778-2918-4ec5-b8d6-ab1f4346d817] Instance network_info: |[{"id": "facff57d-50c3-4c23-9db2-5a75346ccad9", "address": "fa:16:3e:d9:47:72", "network": {"id": "f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c", "bridge": "br-int", "label": "tempest-ServersTestJSON-1745321011-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca6cd0afe0ab41e3ab36d21a4129f734", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfacff57d-50", "ovs_interfaceid": "facff57d-50c3-4c23-9db2-5a75346ccad9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 20 14:56:13 compute-0 nova_compute[250018]: 2026-01-20 14:56:13.834 250022 DEBUG oslo_concurrency.lockutils [req-604dbb63-eeb6-4421-9b81-ff0e42757fe8 req-68182b96-1fce-4cbe-8d56-1a6b5e3bc24c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-ce67f778-2918-4ec5-b8d6-ab1f4346d817" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:56:13 compute-0 nova_compute[250018]: 2026-01-20 14:56:13.834 250022 DEBUG nova.network.neutron [req-604dbb63-eeb6-4421-9b81-ff0e42757fe8 req-68182b96-1fce-4cbe-8d56-1a6b5e3bc24c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: ce67f778-2918-4ec5-b8d6-ab1f4346d817] Refreshing network info cache for port facff57d-50c3-4c23-9db2-5a75346ccad9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:56:13 compute-0 nova_compute[250018]: 2026-01-20 14:56:13.837 250022 DEBUG nova.virt.libvirt.driver [None req-4fae9f7c-d5cc-473e-822b-a94febbba8b1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: ce67f778-2918-4ec5-b8d6-ab1f4346d817] Start _get_guest_xml network_info=[{"id": "facff57d-50c3-4c23-9db2-5a75346ccad9", "address": "fa:16:3e:d9:47:72", "network": {"id": "f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c", "bridge": "br-int", "label": "tempest-ServersTestJSON-1745321011-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca6cd0afe0ab41e3ab36d21a4129f734", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfacff57d-50", "ovs_interfaceid": "facff57d-50c3-4c23-9db2-5a75346ccad9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encrypted': False, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'guest_format': None, 'size': 0, 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 14:56:13 compute-0 nova_compute[250018]: 2026-01-20 14:56:13.843 250022 WARNING nova.virt.libvirt.driver [None req-4fae9f7c-d5cc-473e-822b-a94febbba8b1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:56:13 compute-0 nova_compute[250018]: 2026-01-20 14:56:13.850 250022 DEBUG nova.virt.libvirt.host [None req-4fae9f7c-d5cc-473e-822b-a94febbba8b1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 14:56:13 compute-0 nova_compute[250018]: 2026-01-20 14:56:13.851 250022 DEBUG nova.virt.libvirt.host [None req-4fae9f7c-d5cc-473e-822b-a94febbba8b1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 14:56:13 compute-0 nova_compute[250018]: 2026-01-20 14:56:13.854 250022 DEBUG nova.virt.libvirt.host [None req-4fae9f7c-d5cc-473e-822b-a94febbba8b1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 14:56:13 compute-0 nova_compute[250018]: 2026-01-20 14:56:13.854 250022 DEBUG nova.virt.libvirt.host [None req-4fae9f7c-d5cc-473e-822b-a94febbba8b1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 14:56:13 compute-0 nova_compute[250018]: 2026-01-20 14:56:13.855 250022 DEBUG nova.virt.libvirt.driver [None req-4fae9f7c-d5cc-473e-822b-a94febbba8b1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 14:56:13 compute-0 nova_compute[250018]: 2026-01-20 14:56:13.855 250022 DEBUG nova.virt.hardware [None req-4fae9f7c-d5cc-473e-822b-a94febbba8b1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 14:56:13 compute-0 nova_compute[250018]: 2026-01-20 14:56:13.856 250022 DEBUG nova.virt.hardware [None req-4fae9f7c-d5cc-473e-822b-a94febbba8b1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 14:56:13 compute-0 nova_compute[250018]: 2026-01-20 14:56:13.856 250022 DEBUG nova.virt.hardware [None req-4fae9f7c-d5cc-473e-822b-a94febbba8b1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 14:56:13 compute-0 nova_compute[250018]: 2026-01-20 14:56:13.856 250022 DEBUG nova.virt.hardware [None req-4fae9f7c-d5cc-473e-822b-a94febbba8b1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 14:56:13 compute-0 nova_compute[250018]: 2026-01-20 14:56:13.856 250022 DEBUG nova.virt.hardware [None req-4fae9f7c-d5cc-473e-822b-a94febbba8b1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 14:56:13 compute-0 nova_compute[250018]: 2026-01-20 14:56:13.856 250022 DEBUG nova.virt.hardware [None req-4fae9f7c-d5cc-473e-822b-a94febbba8b1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 14:56:13 compute-0 nova_compute[250018]: 2026-01-20 14:56:13.856 250022 DEBUG nova.virt.hardware [None req-4fae9f7c-d5cc-473e-822b-a94febbba8b1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 14:56:13 compute-0 nova_compute[250018]: 2026-01-20 14:56:13.857 250022 DEBUG nova.virt.hardware [None req-4fae9f7c-d5cc-473e-822b-a94febbba8b1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 14:56:13 compute-0 nova_compute[250018]: 2026-01-20 14:56:13.857 250022 DEBUG nova.virt.hardware [None req-4fae9f7c-d5cc-473e-822b-a94febbba8b1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 14:56:13 compute-0 nova_compute[250018]: 2026-01-20 14:56:13.857 250022 DEBUG nova.virt.hardware [None req-4fae9f7c-d5cc-473e-822b-a94febbba8b1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 14:56:13 compute-0 nova_compute[250018]: 2026-01-20 14:56:13.857 250022 DEBUG nova.virt.hardware [None req-4fae9f7c-d5cc-473e-822b-a94febbba8b1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 14:56:13 compute-0 nova_compute[250018]: 2026-01-20 14:56:13.860 250022 DEBUG oslo_concurrency.processutils [None req-4fae9f7c-d5cc-473e-822b-a94febbba8b1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:56:14 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2112: 321 pgs: 321 active+clean; 358 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 5.4 MiB/s wr, 209 op/s
Jan 20 14:56:14 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/1356210779' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:56:14 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/1356210779' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:56:14 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:56:14 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2532883791' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:56:14 compute-0 nova_compute[250018]: 2026-01-20 14:56:14.313 250022 DEBUG oslo_concurrency.processutils [None req-4fae9f7c-d5cc-473e-822b-a94febbba8b1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:56:14 compute-0 nova_compute[250018]: 2026-01-20 14:56:14.354 250022 DEBUG nova.storage.rbd_utils [None req-4fae9f7c-d5cc-473e-822b-a94febbba8b1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] rbd image ce67f778-2918-4ec5-b8d6-ab1f4346d817_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:56:14 compute-0 nova_compute[250018]: 2026-01-20 14:56:14.360 250022 DEBUG oslo_concurrency.processutils [None req-4fae9f7c-d5cc-473e-822b-a94febbba8b1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:56:14 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:56:14 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/945588242' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:56:14 compute-0 nova_compute[250018]: 2026-01-20 14:56:14.837 250022 DEBUG oslo_concurrency.processutils [None req-4fae9f7c-d5cc-473e-822b-a94febbba8b1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:56:14 compute-0 nova_compute[250018]: 2026-01-20 14:56:14.838 250022 DEBUG nova.virt.libvirt.vif [None req-4fae9f7c-d5cc-473e-822b-a94febbba8b1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:56:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1614139119',display_name='tempest-ServersTestJSON-server-1614139119',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1614139119',id=131,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJx48+N9sZKMLpQ9qbcSMm5w3l7zz2GESkvBxkHyejesNw7Z3uVJZScBx/ao2tQ5mWZLxSORkTB8kJmMstO2es6FU96DAllAb+ZVYrRpZXWD1/FfcVj3gmTepB8PghbySg==',key_name='tempest-key-1021718469',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ca6cd0afe0ab41e3ab36d21a4129f734',ramdisk_id='',reservation_id='r-bshoghjz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-405461620',owner_user_name='tempest-ServersTestJSON-405461620-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:56:06Z,user_data=None,user_id='395a5c503218411284bc94c45263d1fb',uuid=ce67f778-2918-4ec5-b8d6-ab1f4346d817,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "facff57d-50c3-4c23-9db2-5a75346ccad9", "address": "fa:16:3e:d9:47:72", "network": {"id": "f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c", "bridge": "br-int", "label": "tempest-ServersTestJSON-1745321011-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "ca6cd0afe0ab41e3ab36d21a4129f734", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfacff57d-50", "ovs_interfaceid": "facff57d-50c3-4c23-9db2-5a75346ccad9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 14:56:14 compute-0 nova_compute[250018]: 2026-01-20 14:56:14.839 250022 DEBUG nova.network.os_vif_util [None req-4fae9f7c-d5cc-473e-822b-a94febbba8b1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Converting VIF {"id": "facff57d-50c3-4c23-9db2-5a75346ccad9", "address": "fa:16:3e:d9:47:72", "network": {"id": "f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c", "bridge": "br-int", "label": "tempest-ServersTestJSON-1745321011-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca6cd0afe0ab41e3ab36d21a4129f734", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfacff57d-50", "ovs_interfaceid": "facff57d-50c3-4c23-9db2-5a75346ccad9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:56:14 compute-0 nova_compute[250018]: 2026-01-20 14:56:14.839 250022 DEBUG nova.network.os_vif_util [None req-4fae9f7c-d5cc-473e-822b-a94febbba8b1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d9:47:72,bridge_name='br-int',has_traffic_filtering=True,id=facff57d-50c3-4c23-9db2-5a75346ccad9,network=Network(f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfacff57d-50') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:56:14 compute-0 nova_compute[250018]: 2026-01-20 14:56:14.840 250022 DEBUG nova.objects.instance [None req-4fae9f7c-d5cc-473e-822b-a94febbba8b1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Lazy-loading 'pci_devices' on Instance uuid ce67f778-2918-4ec5-b8d6-ab1f4346d817 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:56:14 compute-0 nova_compute[250018]: 2026-01-20 14:56:14.878 250022 DEBUG nova.virt.libvirt.driver [None req-4fae9f7c-d5cc-473e-822b-a94febbba8b1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: ce67f778-2918-4ec5-b8d6-ab1f4346d817] End _get_guest_xml xml=<domain type="kvm">
Jan 20 14:56:14 compute-0 nova_compute[250018]:   <uuid>ce67f778-2918-4ec5-b8d6-ab1f4346d817</uuid>
Jan 20 14:56:14 compute-0 nova_compute[250018]:   <name>instance-00000083</name>
Jan 20 14:56:14 compute-0 nova_compute[250018]:   <memory>131072</memory>
Jan 20 14:56:14 compute-0 nova_compute[250018]:   <vcpu>1</vcpu>
Jan 20 14:56:14 compute-0 nova_compute[250018]:   <metadata>
Jan 20 14:56:14 compute-0 nova_compute[250018]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 14:56:14 compute-0 nova_compute[250018]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 14:56:14 compute-0 nova_compute[250018]:       <nova:name>tempest-ServersTestJSON-server-1614139119</nova:name>
Jan 20 14:56:14 compute-0 nova_compute[250018]:       <nova:creationTime>2026-01-20 14:56:13</nova:creationTime>
Jan 20 14:56:14 compute-0 nova_compute[250018]:       <nova:flavor name="m1.nano">
Jan 20 14:56:14 compute-0 nova_compute[250018]:         <nova:memory>128</nova:memory>
Jan 20 14:56:14 compute-0 nova_compute[250018]:         <nova:disk>1</nova:disk>
Jan 20 14:56:14 compute-0 nova_compute[250018]:         <nova:swap>0</nova:swap>
Jan 20 14:56:14 compute-0 nova_compute[250018]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 14:56:14 compute-0 nova_compute[250018]:         <nova:vcpus>1</nova:vcpus>
Jan 20 14:56:14 compute-0 nova_compute[250018]:       </nova:flavor>
Jan 20 14:56:14 compute-0 nova_compute[250018]:       <nova:owner>
Jan 20 14:56:14 compute-0 nova_compute[250018]:         <nova:user uuid="395a5c503218411284bc94c45263d1fb">tempest-ServersTestJSON-405461620-project-member</nova:user>
Jan 20 14:56:14 compute-0 nova_compute[250018]:         <nova:project uuid="ca6cd0afe0ab41e3ab36d21a4129f734">tempest-ServersTestJSON-405461620</nova:project>
Jan 20 14:56:14 compute-0 nova_compute[250018]:       </nova:owner>
Jan 20 14:56:14 compute-0 nova_compute[250018]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 14:56:14 compute-0 nova_compute[250018]:       <nova:ports>
Jan 20 14:56:14 compute-0 nova_compute[250018]:         <nova:port uuid="facff57d-50c3-4c23-9db2-5a75346ccad9">
Jan 20 14:56:14 compute-0 nova_compute[250018]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Jan 20 14:56:14 compute-0 nova_compute[250018]:         </nova:port>
Jan 20 14:56:14 compute-0 nova_compute[250018]:       </nova:ports>
Jan 20 14:56:14 compute-0 nova_compute[250018]:     </nova:instance>
Jan 20 14:56:14 compute-0 nova_compute[250018]:   </metadata>
Jan 20 14:56:14 compute-0 nova_compute[250018]:   <sysinfo type="smbios">
Jan 20 14:56:14 compute-0 nova_compute[250018]:     <system>
Jan 20 14:56:14 compute-0 nova_compute[250018]:       <entry name="manufacturer">RDO</entry>
Jan 20 14:56:14 compute-0 nova_compute[250018]:       <entry name="product">OpenStack Compute</entry>
Jan 20 14:56:14 compute-0 nova_compute[250018]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 14:56:14 compute-0 nova_compute[250018]:       <entry name="serial">ce67f778-2918-4ec5-b8d6-ab1f4346d817</entry>
Jan 20 14:56:14 compute-0 nova_compute[250018]:       <entry name="uuid">ce67f778-2918-4ec5-b8d6-ab1f4346d817</entry>
Jan 20 14:56:14 compute-0 nova_compute[250018]:       <entry name="family">Virtual Machine</entry>
Jan 20 14:56:14 compute-0 nova_compute[250018]:     </system>
Jan 20 14:56:14 compute-0 nova_compute[250018]:   </sysinfo>
Jan 20 14:56:14 compute-0 nova_compute[250018]:   <os>
Jan 20 14:56:14 compute-0 nova_compute[250018]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 14:56:14 compute-0 nova_compute[250018]:     <boot dev="hd"/>
Jan 20 14:56:14 compute-0 nova_compute[250018]:     <smbios mode="sysinfo"/>
Jan 20 14:56:14 compute-0 nova_compute[250018]:   </os>
Jan 20 14:56:14 compute-0 nova_compute[250018]:   <features>
Jan 20 14:56:14 compute-0 nova_compute[250018]:     <acpi/>
Jan 20 14:56:14 compute-0 nova_compute[250018]:     <apic/>
Jan 20 14:56:14 compute-0 nova_compute[250018]:     <vmcoreinfo/>
Jan 20 14:56:14 compute-0 nova_compute[250018]:   </features>
Jan 20 14:56:14 compute-0 nova_compute[250018]:   <clock offset="utc">
Jan 20 14:56:14 compute-0 nova_compute[250018]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 14:56:14 compute-0 nova_compute[250018]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 14:56:14 compute-0 nova_compute[250018]:     <timer name="hpet" present="no"/>
Jan 20 14:56:14 compute-0 nova_compute[250018]:   </clock>
Jan 20 14:56:14 compute-0 nova_compute[250018]:   <cpu mode="custom" match="exact">
Jan 20 14:56:14 compute-0 nova_compute[250018]:     <model>Nehalem</model>
Jan 20 14:56:14 compute-0 nova_compute[250018]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 14:56:14 compute-0 nova_compute[250018]:   </cpu>
Jan 20 14:56:14 compute-0 nova_compute[250018]:   <devices>
Jan 20 14:56:14 compute-0 nova_compute[250018]:     <disk type="network" device="disk">
Jan 20 14:56:14 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 14:56:14 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/ce67f778-2918-4ec5-b8d6-ab1f4346d817_disk">
Jan 20 14:56:14 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:56:14 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:56:14 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:56:14 compute-0 nova_compute[250018]:       </source>
Jan 20 14:56:14 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 14:56:14 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:56:14 compute-0 nova_compute[250018]:       </auth>
Jan 20 14:56:14 compute-0 nova_compute[250018]:       <target dev="vda" bus="virtio"/>
Jan 20 14:56:14 compute-0 nova_compute[250018]:     </disk>
Jan 20 14:56:14 compute-0 nova_compute[250018]:     <disk type="network" device="cdrom">
Jan 20 14:56:14 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 14:56:14 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/ce67f778-2918-4ec5-b8d6-ab1f4346d817_disk.config">
Jan 20 14:56:14 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:56:14 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:56:14 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:56:14 compute-0 nova_compute[250018]:       </source>
Jan 20 14:56:14 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 14:56:14 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:56:14 compute-0 nova_compute[250018]:       </auth>
Jan 20 14:56:14 compute-0 nova_compute[250018]:       <target dev="sda" bus="sata"/>
Jan 20 14:56:14 compute-0 nova_compute[250018]:     </disk>
Jan 20 14:56:14 compute-0 nova_compute[250018]:     <interface type="ethernet">
Jan 20 14:56:14 compute-0 nova_compute[250018]:       <mac address="fa:16:3e:d9:47:72"/>
Jan 20 14:56:14 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 14:56:14 compute-0 nova_compute[250018]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 14:56:14 compute-0 nova_compute[250018]:       <mtu size="1442"/>
Jan 20 14:56:14 compute-0 nova_compute[250018]:       <target dev="tapfacff57d-50"/>
Jan 20 14:56:14 compute-0 nova_compute[250018]:     </interface>
Jan 20 14:56:14 compute-0 nova_compute[250018]:     <serial type="pty">
Jan 20 14:56:14 compute-0 nova_compute[250018]:       <log file="/var/lib/nova/instances/ce67f778-2918-4ec5-b8d6-ab1f4346d817/console.log" append="off"/>
Jan 20 14:56:14 compute-0 nova_compute[250018]:     </serial>
Jan 20 14:56:14 compute-0 nova_compute[250018]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 14:56:14 compute-0 nova_compute[250018]:     <video>
Jan 20 14:56:14 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 14:56:14 compute-0 nova_compute[250018]:     </video>
Jan 20 14:56:14 compute-0 nova_compute[250018]:     <input type="tablet" bus="usb"/>
Jan 20 14:56:14 compute-0 nova_compute[250018]:     <rng model="virtio">
Jan 20 14:56:14 compute-0 nova_compute[250018]:       <backend model="random">/dev/urandom</backend>
Jan 20 14:56:14 compute-0 nova_compute[250018]:     </rng>
Jan 20 14:56:14 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root"/>
Jan 20 14:56:14 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:56:14 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:56:14 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:56:14 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:56:14 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:56:14 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:56:14 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:56:14 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:56:14 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:56:14 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:56:14 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:56:14 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:56:14 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:56:14 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:56:14 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:56:14 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:56:14 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:56:14 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:56:14 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:56:14 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:56:14 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:56:14 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:56:14 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:56:14 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:56:14 compute-0 nova_compute[250018]:     <controller type="usb" index="0"/>
Jan 20 14:56:14 compute-0 nova_compute[250018]:     <memballoon model="virtio">
Jan 20 14:56:14 compute-0 nova_compute[250018]:       <stats period="10"/>
Jan 20 14:56:14 compute-0 nova_compute[250018]:     </memballoon>
Jan 20 14:56:14 compute-0 nova_compute[250018]:   </devices>
Jan 20 14:56:14 compute-0 nova_compute[250018]: </domain>
Jan 20 14:56:14 compute-0 nova_compute[250018]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 14:56:14 compute-0 nova_compute[250018]: 2026-01-20 14:56:14.880 250022 DEBUG nova.compute.manager [None req-4fae9f7c-d5cc-473e-822b-a94febbba8b1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: ce67f778-2918-4ec5-b8d6-ab1f4346d817] Preparing to wait for external event network-vif-plugged-facff57d-50c3-4c23-9db2-5a75346ccad9 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 20 14:56:14 compute-0 nova_compute[250018]: 2026-01-20 14:56:14.880 250022 DEBUG oslo_concurrency.lockutils [None req-4fae9f7c-d5cc-473e-822b-a94febbba8b1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Acquiring lock "ce67f778-2918-4ec5-b8d6-ab1f4346d817-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:56:14 compute-0 nova_compute[250018]: 2026-01-20 14:56:14.881 250022 DEBUG oslo_concurrency.lockutils [None req-4fae9f7c-d5cc-473e-822b-a94febbba8b1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Lock "ce67f778-2918-4ec5-b8d6-ab1f4346d817-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:56:14 compute-0 nova_compute[250018]: 2026-01-20 14:56:14.881 250022 DEBUG oslo_concurrency.lockutils [None req-4fae9f7c-d5cc-473e-822b-a94febbba8b1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Lock "ce67f778-2918-4ec5-b8d6-ab1f4346d817-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:56:14 compute-0 nova_compute[250018]: 2026-01-20 14:56:14.882 250022 DEBUG nova.virt.libvirt.vif [None req-4fae9f7c-d5cc-473e-822b-a94febbba8b1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:56:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1614139119',display_name='tempest-ServersTestJSON-server-1614139119',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1614139119',id=131,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJx48+N9sZKMLpQ9qbcSMm5w3l7zz2GESkvBxkHyejesNw7Z3uVJZScBx/ao2tQ5mWZLxSORkTB8kJmMstO2es6FU96DAllAb+ZVYrRpZXWD1/FfcVj3gmTepB8PghbySg==',key_name='tempest-key-1021718469',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ca6cd0afe0ab41e3ab36d21a4129f734',ramdisk_id='',reservation_id='r-bshoghjz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-405461620',owner_user_name='tempest-ServersTestJSON-405461620-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:56:06Z,user_data=None,user_id='395a5c503218411284bc94c45263d1fb',uuid=ce67f778-2918-4ec5-b8d6-ab1f4346d817,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "facff57d-50c3-4c23-9db2-5a75346ccad9", "address": "fa:16:3e:d9:47:72", "network": {"id": "f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c", "bridge": "br-int", "label": "tempest-ServersTestJSON-1745321011-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "ca6cd0afe0ab41e3ab36d21a4129f734", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfacff57d-50", "ovs_interfaceid": "facff57d-50c3-4c23-9db2-5a75346ccad9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 14:56:14 compute-0 nova_compute[250018]: 2026-01-20 14:56:14.882 250022 DEBUG nova.network.os_vif_util [None req-4fae9f7c-d5cc-473e-822b-a94febbba8b1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Converting VIF {"id": "facff57d-50c3-4c23-9db2-5a75346ccad9", "address": "fa:16:3e:d9:47:72", "network": {"id": "f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c", "bridge": "br-int", "label": "tempest-ServersTestJSON-1745321011-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca6cd0afe0ab41e3ab36d21a4129f734", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfacff57d-50", "ovs_interfaceid": "facff57d-50c3-4c23-9db2-5a75346ccad9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:56:14 compute-0 nova_compute[250018]: 2026-01-20 14:56:14.883 250022 DEBUG nova.network.os_vif_util [None req-4fae9f7c-d5cc-473e-822b-a94febbba8b1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d9:47:72,bridge_name='br-int',has_traffic_filtering=True,id=facff57d-50c3-4c23-9db2-5a75346ccad9,network=Network(f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfacff57d-50') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:56:14 compute-0 nova_compute[250018]: 2026-01-20 14:56:14.884 250022 DEBUG os_vif [None req-4fae9f7c-d5cc-473e-822b-a94febbba8b1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d9:47:72,bridge_name='br-int',has_traffic_filtering=True,id=facff57d-50c3-4c23-9db2-5a75346ccad9,network=Network(f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfacff57d-50') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 14:56:14 compute-0 nova_compute[250018]: 2026-01-20 14:56:14.885 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:14 compute-0 nova_compute[250018]: 2026-01-20 14:56:14.885 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:56:14 compute-0 nova_compute[250018]: 2026-01-20 14:56:14.886 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:56:14 compute-0 nova_compute[250018]: 2026-01-20 14:56:14.890 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:14 compute-0 nova_compute[250018]: 2026-01-20 14:56:14.890 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfacff57d-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:56:14 compute-0 nova_compute[250018]: 2026-01-20 14:56:14.891 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapfacff57d-50, col_values=(('external_ids', {'iface-id': 'facff57d-50c3-4c23-9db2-5a75346ccad9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d9:47:72', 'vm-uuid': 'ce67f778-2918-4ec5-b8d6-ab1f4346d817'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:56:14 compute-0 nova_compute[250018]: 2026-01-20 14:56:14.892 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:14 compute-0 NetworkManager[48960]: <info>  [1768920974.8936] manager: (tapfacff57d-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/237)
Jan 20 14:56:14 compute-0 nova_compute[250018]: 2026-01-20 14:56:14.895 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:56:14 compute-0 nova_compute[250018]: 2026-01-20 14:56:14.898 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:14 compute-0 nova_compute[250018]: 2026-01-20 14:56:14.899 250022 INFO os_vif [None req-4fae9f7c-d5cc-473e-822b-a94febbba8b1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d9:47:72,bridge_name='br-int',has_traffic_filtering=True,id=facff57d-50c3-4c23-9db2-5a75346ccad9,network=Network(f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfacff57d-50')
Jan 20 14:56:14 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:56:14 compute-0 nova_compute[250018]: 2026-01-20 14:56:14.953 250022 DEBUG nova.virt.libvirt.driver [None req-4fae9f7c-d5cc-473e-822b-a94febbba8b1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:56:14 compute-0 nova_compute[250018]: 2026-01-20 14:56:14.954 250022 DEBUG nova.virt.libvirt.driver [None req-4fae9f7c-d5cc-473e-822b-a94febbba8b1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:56:14 compute-0 nova_compute[250018]: 2026-01-20 14:56:14.955 250022 DEBUG nova.virt.libvirt.driver [None req-4fae9f7c-d5cc-473e-822b-a94febbba8b1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] No VIF found with MAC fa:16:3e:d9:47:72, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 14:56:14 compute-0 nova_compute[250018]: 2026-01-20 14:56:14.955 250022 INFO nova.virt.libvirt.driver [None req-4fae9f7c-d5cc-473e-822b-a94febbba8b1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: ce67f778-2918-4ec5-b8d6-ab1f4346d817] Using config drive
Jan 20 14:56:14 compute-0 nova_compute[250018]: 2026-01-20 14:56:14.982 250022 DEBUG nova.storage.rbd_utils [None req-4fae9f7c-d5cc-473e-822b-a94febbba8b1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] rbd image ce67f778-2918-4ec5-b8d6-ab1f4346d817_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:56:15 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:56:15 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:56:15 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:56:15.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:56:15 compute-0 ceph-mon[74360]: pgmap v2112: 321 pgs: 321 active+clean; 358 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 5.4 MiB/s wr, 209 op/s
Jan 20 14:56:15 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/2532883791' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:56:15 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/945588242' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:56:15 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:56:15 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:56:15 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:56:15.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:56:15 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 20 14:56:15 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2874809718' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:56:15 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 20 14:56:15 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2874809718' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:56:15 compute-0 nova_compute[250018]: 2026-01-20 14:56:15.374 250022 INFO nova.virt.libvirt.driver [None req-4fae9f7c-d5cc-473e-822b-a94febbba8b1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: ce67f778-2918-4ec5-b8d6-ab1f4346d817] Creating config drive at /var/lib/nova/instances/ce67f778-2918-4ec5-b8d6-ab1f4346d817/disk.config
Jan 20 14:56:15 compute-0 nova_compute[250018]: 2026-01-20 14:56:15.381 250022 DEBUG oslo_concurrency.processutils [None req-4fae9f7c-d5cc-473e-822b-a94febbba8b1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ce67f778-2918-4ec5-b8d6-ab1f4346d817/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpc2nokhgm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:56:15 compute-0 nova_compute[250018]: 2026-01-20 14:56:15.518 250022 DEBUG oslo_concurrency.processutils [None req-4fae9f7c-d5cc-473e-822b-a94febbba8b1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ce67f778-2918-4ec5-b8d6-ab1f4346d817/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpc2nokhgm" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:56:15 compute-0 nova_compute[250018]: 2026-01-20 14:56:15.551 250022 DEBUG nova.storage.rbd_utils [None req-4fae9f7c-d5cc-473e-822b-a94febbba8b1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] rbd image ce67f778-2918-4ec5-b8d6-ab1f4346d817_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:56:15 compute-0 nova_compute[250018]: 2026-01-20 14:56:15.557 250022 DEBUG oslo_concurrency.processutils [None req-4fae9f7c-d5cc-473e-822b-a94febbba8b1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ce67f778-2918-4ec5-b8d6-ab1f4346d817/disk.config ce67f778-2918-4ec5-b8d6-ab1f4346d817_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:56:15 compute-0 nova_compute[250018]: 2026-01-20 14:56:15.586 250022 DEBUG nova.network.neutron [req-604dbb63-eeb6-4421-9b81-ff0e42757fe8 req-68182b96-1fce-4cbe-8d56-1a6b5e3bc24c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: ce67f778-2918-4ec5-b8d6-ab1f4346d817] Updated VIF entry in instance network info cache for port facff57d-50c3-4c23-9db2-5a75346ccad9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 14:56:15 compute-0 nova_compute[250018]: 2026-01-20 14:56:15.587 250022 DEBUG nova.network.neutron [req-604dbb63-eeb6-4421-9b81-ff0e42757fe8 req-68182b96-1fce-4cbe-8d56-1a6b5e3bc24c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: ce67f778-2918-4ec5-b8d6-ab1f4346d817] Updating instance_info_cache with network_info: [{"id": "facff57d-50c3-4c23-9db2-5a75346ccad9", "address": "fa:16:3e:d9:47:72", "network": {"id": "f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c", "bridge": "br-int", "label": "tempest-ServersTestJSON-1745321011-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca6cd0afe0ab41e3ab36d21a4129f734", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfacff57d-50", "ovs_interfaceid": "facff57d-50c3-4c23-9db2-5a75346ccad9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:56:15 compute-0 nova_compute[250018]: 2026-01-20 14:56:15.607 250022 DEBUG oslo_concurrency.lockutils [req-604dbb63-eeb6-4421-9b81-ff0e42757fe8 req-68182b96-1fce-4cbe-8d56-1a6b5e3bc24c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-ce67f778-2918-4ec5-b8d6-ab1f4346d817" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:56:15 compute-0 nova_compute[250018]: 2026-01-20 14:56:15.714 250022 DEBUG oslo_concurrency.processutils [None req-4fae9f7c-d5cc-473e-822b-a94febbba8b1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ce67f778-2918-4ec5-b8d6-ab1f4346d817/disk.config ce67f778-2918-4ec5-b8d6-ab1f4346d817_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.157s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:56:15 compute-0 nova_compute[250018]: 2026-01-20 14:56:15.715 250022 INFO nova.virt.libvirt.driver [None req-4fae9f7c-d5cc-473e-822b-a94febbba8b1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: ce67f778-2918-4ec5-b8d6-ab1f4346d817] Deleting local config drive /var/lib/nova/instances/ce67f778-2918-4ec5-b8d6-ab1f4346d817/disk.config because it was imported into RBD.
Jan 20 14:56:15 compute-0 kernel: tapfacff57d-50: entered promiscuous mode
Jan 20 14:56:15 compute-0 NetworkManager[48960]: <info>  [1768920975.7622] manager: (tapfacff57d-50): new Tun device (/org/freedesktop/NetworkManager/Devices/238)
Jan 20 14:56:15 compute-0 nova_compute[250018]: 2026-01-20 14:56:15.762 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:15 compute-0 ovn_controller[148666]: 2026-01-20T14:56:15Z|00471|binding|INFO|Claiming lport facff57d-50c3-4c23-9db2-5a75346ccad9 for this chassis.
Jan 20 14:56:15 compute-0 ovn_controller[148666]: 2026-01-20T14:56:15Z|00472|binding|INFO|facff57d-50c3-4c23-9db2-5a75346ccad9: Claiming fa:16:3e:d9:47:72 10.100.0.6
Jan 20 14:56:15 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:56:15.771 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d9:47:72 10.100.0.6'], port_security=['fa:16:3e:d9:47:72 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'ce67f778-2918-4ec5-b8d6-ab1f4346d817', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ca6cd0afe0ab41e3ab36d21a4129f734', 'neutron:revision_number': '2', 'neutron:security_group_ids': '819ea4ae-b994-44d1-9da3-8b0ca609fb2a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ee620e3e-ef7e-4826-b394-b8a89442b353, chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=facff57d-50c3-4c23-9db2-5a75346ccad9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:56:15 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:56:15.772 160071 INFO neutron.agent.ovn.metadata.agent [-] Port facff57d-50c3-4c23-9db2-5a75346ccad9 in datapath f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c bound to our chassis
Jan 20 14:56:15 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:56:15.773 160071 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c
Jan 20 14:56:15 compute-0 nova_compute[250018]: 2026-01-20 14:56:15.781 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:15 compute-0 nova_compute[250018]: 2026-01-20 14:56:15.784 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:15 compute-0 ovn_controller[148666]: 2026-01-20T14:56:15Z|00473|binding|INFO|Setting lport facff57d-50c3-4c23-9db2-5a75346ccad9 ovn-installed in OVS
Jan 20 14:56:15 compute-0 ovn_controller[148666]: 2026-01-20T14:56:15Z|00474|binding|INFO|Setting lport facff57d-50c3-4c23-9db2-5a75346ccad9 up in Southbound
Jan 20 14:56:15 compute-0 nova_compute[250018]: 2026-01-20 14:56:15.785 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:15 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:56:15.790 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[41ba53cb-3246-46d6-ba74-2a9e112f7e2d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:56:15 compute-0 systemd-machined[216401]: New machine qemu-62-instance-00000083.
Jan 20 14:56:15 compute-0 systemd[1]: Started Virtual Machine qemu-62-instance-00000083.
Jan 20 14:56:15 compute-0 systemd-udevd[326913]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:56:15 compute-0 NetworkManager[48960]: <info>  [1768920975.8152] device (tapfacff57d-50): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 14:56:15 compute-0 NetworkManager[48960]: <info>  [1768920975.8160] device (tapfacff57d-50): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 14:56:15 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:56:15.818 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[34009c4c-dc90-4814-8411-1dbdbb815fe3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:56:15 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:56:15.821 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[9f803e65-40ce-4f83-ba61-70d32070fd92]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:56:15 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:56:15.845 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[31450ca1-a6f5-4ab0-b542-db34062d4bb7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:56:15 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:56:15.860 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[8867c6a7-0198-448a-bf0e-a3ad205130a5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf4c8474b-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:14:a2:5f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 151], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 687884, 'reachable_time': 30124, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 326923, 'error': None, 'target': 'ovnmeta-f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:56:15 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:56:15.873 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[1e2a06f8-101d-404e-b93e-8a0645bfad16]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapf4c8474b-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 687894, 'tstamp': 687894}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 326925, 'error': None, 'target': 'ovnmeta-f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapf4c8474b-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 687897, 'tstamp': 687897}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 326925, 'error': None, 'target': 'ovnmeta-f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:56:15 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:56:15.874 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf4c8474b-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:56:15 compute-0 nova_compute[250018]: 2026-01-20 14:56:15.876 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:15 compute-0 nova_compute[250018]: 2026-01-20 14:56:15.877 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:15 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:56:15.878 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf4c8474b-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:56:15 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:56:15.878 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:56:15 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:56:15.878 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf4c8474b-00, col_values=(('external_ids', {'iface-id': '8c6fd3ab-70a8-4e63-99de-f2e15ac0207f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:56:15 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:56:15.879 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:56:16 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2113: 321 pgs: 321 active+clean; 339 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 5.1 MiB/s wr, 266 op/s
Jan 20 14:56:16 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/2874809718' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:56:16 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/2874809718' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:56:16 compute-0 ceph-mon[74360]: pgmap v2113: 321 pgs: 321 active+clean; 339 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 5.1 MiB/s wr, 266 op/s
Jan 20 14:56:16 compute-0 nova_compute[250018]: 2026-01-20 14:56:16.353 250022 DEBUG nova.compute.manager [req-62e4d6d7-808b-4e63-895c-c86d66599837 req-241c1b9a-27ad-47c1-b3fe-a1a66188d4a9 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: ce67f778-2918-4ec5-b8d6-ab1f4346d817] Received event network-vif-plugged-facff57d-50c3-4c23-9db2-5a75346ccad9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:56:16 compute-0 nova_compute[250018]: 2026-01-20 14:56:16.353 250022 DEBUG oslo_concurrency.lockutils [req-62e4d6d7-808b-4e63-895c-c86d66599837 req-241c1b9a-27ad-47c1-b3fe-a1a66188d4a9 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "ce67f778-2918-4ec5-b8d6-ab1f4346d817-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:56:16 compute-0 nova_compute[250018]: 2026-01-20 14:56:16.354 250022 DEBUG oslo_concurrency.lockutils [req-62e4d6d7-808b-4e63-895c-c86d66599837 req-241c1b9a-27ad-47c1-b3fe-a1a66188d4a9 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "ce67f778-2918-4ec5-b8d6-ab1f4346d817-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:56:16 compute-0 nova_compute[250018]: 2026-01-20 14:56:16.354 250022 DEBUG oslo_concurrency.lockutils [req-62e4d6d7-808b-4e63-895c-c86d66599837 req-241c1b9a-27ad-47c1-b3fe-a1a66188d4a9 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "ce67f778-2918-4ec5-b8d6-ab1f4346d817-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:56:16 compute-0 nova_compute[250018]: 2026-01-20 14:56:16.354 250022 DEBUG nova.compute.manager [req-62e4d6d7-808b-4e63-895c-c86d66599837 req-241c1b9a-27ad-47c1-b3fe-a1a66188d4a9 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: ce67f778-2918-4ec5-b8d6-ab1f4346d817] Processing event network-vif-plugged-facff57d-50c3-4c23-9db2-5a75346ccad9 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 20 14:56:16 compute-0 nova_compute[250018]: 2026-01-20 14:56:16.449 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768920976.4487936, ce67f778-2918-4ec5-b8d6-ab1f4346d817 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:56:16 compute-0 nova_compute[250018]: 2026-01-20 14:56:16.449 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: ce67f778-2918-4ec5-b8d6-ab1f4346d817] VM Started (Lifecycle Event)
Jan 20 14:56:16 compute-0 nova_compute[250018]: 2026-01-20 14:56:16.452 250022 DEBUG nova.compute.manager [None req-4fae9f7c-d5cc-473e-822b-a94febbba8b1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: ce67f778-2918-4ec5-b8d6-ab1f4346d817] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 14:56:16 compute-0 nova_compute[250018]: 2026-01-20 14:56:16.455 250022 DEBUG nova.virt.libvirt.driver [None req-4fae9f7c-d5cc-473e-822b-a94febbba8b1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: ce67f778-2918-4ec5-b8d6-ab1f4346d817] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 14:56:16 compute-0 nova_compute[250018]: 2026-01-20 14:56:16.458 250022 INFO nova.virt.libvirt.driver [-] [instance: ce67f778-2918-4ec5-b8d6-ab1f4346d817] Instance spawned successfully.
Jan 20 14:56:16 compute-0 nova_compute[250018]: 2026-01-20 14:56:16.459 250022 DEBUG nova.virt.libvirt.driver [None req-4fae9f7c-d5cc-473e-822b-a94febbba8b1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: ce67f778-2918-4ec5-b8d6-ab1f4346d817] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 14:56:16 compute-0 nova_compute[250018]: 2026-01-20 14:56:16.492 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: ce67f778-2918-4ec5-b8d6-ab1f4346d817] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:56:16 compute-0 nova_compute[250018]: 2026-01-20 14:56:16.497 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: ce67f778-2918-4ec5-b8d6-ab1f4346d817] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:56:16 compute-0 nova_compute[250018]: 2026-01-20 14:56:16.501 250022 DEBUG nova.virt.libvirt.driver [None req-4fae9f7c-d5cc-473e-822b-a94febbba8b1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: ce67f778-2918-4ec5-b8d6-ab1f4346d817] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:56:16 compute-0 nova_compute[250018]: 2026-01-20 14:56:16.501 250022 DEBUG nova.virt.libvirt.driver [None req-4fae9f7c-d5cc-473e-822b-a94febbba8b1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: ce67f778-2918-4ec5-b8d6-ab1f4346d817] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:56:16 compute-0 nova_compute[250018]: 2026-01-20 14:56:16.502 250022 DEBUG nova.virt.libvirt.driver [None req-4fae9f7c-d5cc-473e-822b-a94febbba8b1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: ce67f778-2918-4ec5-b8d6-ab1f4346d817] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:56:16 compute-0 nova_compute[250018]: 2026-01-20 14:56:16.502 250022 DEBUG nova.virt.libvirt.driver [None req-4fae9f7c-d5cc-473e-822b-a94febbba8b1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: ce67f778-2918-4ec5-b8d6-ab1f4346d817] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:56:16 compute-0 nova_compute[250018]: 2026-01-20 14:56:16.502 250022 DEBUG nova.virt.libvirt.driver [None req-4fae9f7c-d5cc-473e-822b-a94febbba8b1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: ce67f778-2918-4ec5-b8d6-ab1f4346d817] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:56:16 compute-0 nova_compute[250018]: 2026-01-20 14:56:16.503 250022 DEBUG nova.virt.libvirt.driver [None req-4fae9f7c-d5cc-473e-822b-a94febbba8b1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: ce67f778-2918-4ec5-b8d6-ab1f4346d817] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:56:16 compute-0 nova_compute[250018]: 2026-01-20 14:56:16.545 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: ce67f778-2918-4ec5-b8d6-ab1f4346d817] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:56:16 compute-0 nova_compute[250018]: 2026-01-20 14:56:16.545 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768920976.4519658, ce67f778-2918-4ec5-b8d6-ab1f4346d817 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:56:16 compute-0 nova_compute[250018]: 2026-01-20 14:56:16.545 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: ce67f778-2918-4ec5-b8d6-ab1f4346d817] VM Paused (Lifecycle Event)
Jan 20 14:56:16 compute-0 nova_compute[250018]: 2026-01-20 14:56:16.609 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: ce67f778-2918-4ec5-b8d6-ab1f4346d817] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:56:16 compute-0 nova_compute[250018]: 2026-01-20 14:56:16.613 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768920976.454955, ce67f778-2918-4ec5-b8d6-ab1f4346d817 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:56:16 compute-0 nova_compute[250018]: 2026-01-20 14:56:16.614 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: ce67f778-2918-4ec5-b8d6-ab1f4346d817] VM Resumed (Lifecycle Event)
Jan 20 14:56:16 compute-0 nova_compute[250018]: 2026-01-20 14:56:16.635 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: ce67f778-2918-4ec5-b8d6-ab1f4346d817] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:56:16 compute-0 nova_compute[250018]: 2026-01-20 14:56:16.638 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: ce67f778-2918-4ec5-b8d6-ab1f4346d817] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:56:16 compute-0 nova_compute[250018]: 2026-01-20 14:56:16.661 250022 INFO nova.compute.manager [None req-4fae9f7c-d5cc-473e-822b-a94febbba8b1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: ce67f778-2918-4ec5-b8d6-ab1f4346d817] Took 9.82 seconds to spawn the instance on the hypervisor.
Jan 20 14:56:16 compute-0 nova_compute[250018]: 2026-01-20 14:56:16.661 250022 DEBUG nova.compute.manager [None req-4fae9f7c-d5cc-473e-822b-a94febbba8b1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: ce67f778-2918-4ec5-b8d6-ab1f4346d817] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:56:16 compute-0 nova_compute[250018]: 2026-01-20 14:56:16.662 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: ce67f778-2918-4ec5-b8d6-ab1f4346d817] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:56:16 compute-0 nova_compute[250018]: 2026-01-20 14:56:16.829 250022 INFO nova.compute.manager [None req-4fae9f7c-d5cc-473e-822b-a94febbba8b1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: ce67f778-2918-4ec5-b8d6-ab1f4346d817] Took 10.85 seconds to build instance.
Jan 20 14:56:16 compute-0 nova_compute[250018]: 2026-01-20 14:56:16.854 250022 DEBUG oslo_concurrency.lockutils [None req-4fae9f7c-d5cc-473e-822b-a94febbba8b1 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Lock "ce67f778-2918-4ec5-b8d6-ab1f4346d817" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.998s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:56:17 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:56:17 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:56:17 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:56:17.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:56:17 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/3749428790' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:56:17 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/3749428790' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:56:17 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:56:17 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:56:17 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:56:17.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:56:18 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2114: 321 pgs: 321 active+clean; 339 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 2.2 MiB/s wr, 224 op/s
Jan 20 14:56:18 compute-0 ceph-mon[74360]: pgmap v2114: 321 pgs: 321 active+clean; 339 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 2.2 MiB/s wr, 224 op/s
Jan 20 14:56:18 compute-0 nova_compute[250018]: 2026-01-20 14:56:18.436 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:18 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-crash-compute-0[81580]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
Jan 20 14:56:18 compute-0 nova_compute[250018]: 2026-01-20 14:56:18.670 250022 DEBUG nova.compute.manager [req-0f4035ba-b83f-4459-867a-94982d716b05 req-7b02a496-dfb6-40b8-bb95-e62a3cd6b7c3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: ce67f778-2918-4ec5-b8d6-ab1f4346d817] Received event network-vif-plugged-facff57d-50c3-4c23-9db2-5a75346ccad9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:56:18 compute-0 nova_compute[250018]: 2026-01-20 14:56:18.670 250022 DEBUG oslo_concurrency.lockutils [req-0f4035ba-b83f-4459-867a-94982d716b05 req-7b02a496-dfb6-40b8-bb95-e62a3cd6b7c3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "ce67f778-2918-4ec5-b8d6-ab1f4346d817-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:56:18 compute-0 nova_compute[250018]: 2026-01-20 14:56:18.670 250022 DEBUG oslo_concurrency.lockutils [req-0f4035ba-b83f-4459-867a-94982d716b05 req-7b02a496-dfb6-40b8-bb95-e62a3cd6b7c3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "ce67f778-2918-4ec5-b8d6-ab1f4346d817-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:56:18 compute-0 nova_compute[250018]: 2026-01-20 14:56:18.671 250022 DEBUG oslo_concurrency.lockutils [req-0f4035ba-b83f-4459-867a-94982d716b05 req-7b02a496-dfb6-40b8-bb95-e62a3cd6b7c3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "ce67f778-2918-4ec5-b8d6-ab1f4346d817-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:56:18 compute-0 nova_compute[250018]: 2026-01-20 14:56:18.671 250022 DEBUG nova.compute.manager [req-0f4035ba-b83f-4459-867a-94982d716b05 req-7b02a496-dfb6-40b8-bb95-e62a3cd6b7c3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: ce67f778-2918-4ec5-b8d6-ab1f4346d817] No waiting events found dispatching network-vif-plugged-facff57d-50c3-4c23-9db2-5a75346ccad9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:56:18 compute-0 nova_compute[250018]: 2026-01-20 14:56:18.671 250022 WARNING nova.compute.manager [req-0f4035ba-b83f-4459-867a-94982d716b05 req-7b02a496-dfb6-40b8-bb95-e62a3cd6b7c3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: ce67f778-2918-4ec5-b8d6-ab1f4346d817] Received unexpected event network-vif-plugged-facff57d-50c3-4c23-9db2-5a75346ccad9 for instance with vm_state active and task_state None.
Jan 20 14:56:19 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:56:19 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:56:19 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:56:19.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:56:19 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:56:19 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:56:19 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:56:19.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:56:19 compute-0 nova_compute[250018]: 2026-01-20 14:56:19.594 250022 DEBUG oslo_concurrency.lockutils [None req-52e7184e-d6f7-41b8-bae1-7a5c9c4aabe3 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Acquiring lock "ce67f778-2918-4ec5-b8d6-ab1f4346d817" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:56:19 compute-0 nova_compute[250018]: 2026-01-20 14:56:19.595 250022 DEBUG oslo_concurrency.lockutils [None req-52e7184e-d6f7-41b8-bae1-7a5c9c4aabe3 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Lock "ce67f778-2918-4ec5-b8d6-ab1f4346d817" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:56:19 compute-0 nova_compute[250018]: 2026-01-20 14:56:19.596 250022 DEBUG oslo_concurrency.lockutils [None req-52e7184e-d6f7-41b8-bae1-7a5c9c4aabe3 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Acquiring lock "ce67f778-2918-4ec5-b8d6-ab1f4346d817-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:56:19 compute-0 nova_compute[250018]: 2026-01-20 14:56:19.597 250022 DEBUG oslo_concurrency.lockutils [None req-52e7184e-d6f7-41b8-bae1-7a5c9c4aabe3 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Lock "ce67f778-2918-4ec5-b8d6-ab1f4346d817-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:56:19 compute-0 nova_compute[250018]: 2026-01-20 14:56:19.598 250022 DEBUG oslo_concurrency.lockutils [None req-52e7184e-d6f7-41b8-bae1-7a5c9c4aabe3 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Lock "ce67f778-2918-4ec5-b8d6-ab1f4346d817-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:56:19 compute-0 nova_compute[250018]: 2026-01-20 14:56:19.600 250022 INFO nova.compute.manager [None req-52e7184e-d6f7-41b8-bae1-7a5c9c4aabe3 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: ce67f778-2918-4ec5-b8d6-ab1f4346d817] Terminating instance
Jan 20 14:56:19 compute-0 nova_compute[250018]: 2026-01-20 14:56:19.602 250022 DEBUG nova.compute.manager [None req-52e7184e-d6f7-41b8-bae1-7a5c9c4aabe3 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: ce67f778-2918-4ec5-b8d6-ab1f4346d817] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 14:56:19 compute-0 kernel: tapfacff57d-50 (unregistering): left promiscuous mode
Jan 20 14:56:19 compute-0 NetworkManager[48960]: <info>  [1768920979.6481] device (tapfacff57d-50): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 14:56:19 compute-0 nova_compute[250018]: 2026-01-20 14:56:19.657 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:19 compute-0 nova_compute[250018]: 2026-01-20 14:56:19.658 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:19 compute-0 ovn_controller[148666]: 2026-01-20T14:56:19Z|00475|binding|INFO|Releasing lport facff57d-50c3-4c23-9db2-5a75346ccad9 from this chassis (sb_readonly=0)
Jan 20 14:56:19 compute-0 ovn_controller[148666]: 2026-01-20T14:56:19Z|00476|binding|INFO|Setting lport facff57d-50c3-4c23-9db2-5a75346ccad9 down in Southbound
Jan 20 14:56:19 compute-0 ovn_controller[148666]: 2026-01-20T14:56:19Z|00477|binding|INFO|Removing iface tapfacff57d-50 ovn-installed in OVS
Jan 20 14:56:19 compute-0 nova_compute[250018]: 2026-01-20 14:56:19.686 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:19 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:56:19.691 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d9:47:72 10.100.0.6'], port_security=['fa:16:3e:d9:47:72 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'ce67f778-2918-4ec5-b8d6-ab1f4346d817', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ca6cd0afe0ab41e3ab36d21a4129f734', 'neutron:revision_number': '4', 'neutron:security_group_ids': '819ea4ae-b994-44d1-9da3-8b0ca609fb2a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ee620e3e-ef7e-4826-b394-b8a89442b353, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=facff57d-50c3-4c23-9db2-5a75346ccad9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:56:19 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:56:19.693 160071 INFO neutron.agent.ovn.metadata.agent [-] Port facff57d-50c3-4c23-9db2-5a75346ccad9 in datapath f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c unbound from our chassis
Jan 20 14:56:19 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:56:19.694 160071 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c
Jan 20 14:56:19 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:56:19.713 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[db7cf67e-d1f6-41fd-8862-627ceaf70429]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:56:19 compute-0 systemd[1]: machine-qemu\x2d62\x2dinstance\x2d00000083.scope: Deactivated successfully.
Jan 20 14:56:19 compute-0 systemd[1]: machine-qemu\x2d62\x2dinstance\x2d00000083.scope: Consumed 3.937s CPU time.
Jan 20 14:56:19 compute-0 systemd-machined[216401]: Machine qemu-62-instance-00000083 terminated.
Jan 20 14:56:19 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:56:19.747 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[6bc96af0-1f0f-414b-9f54-4b122f16b14a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:56:19 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:56:19.750 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[1038f9ac-8f42-4359-b6f5-dd9bfbfcad1a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:56:19 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:56:19.782 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[0a94d415-e2f2-40a2-a521-cc22fc0ee814]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:56:19 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:56:19.801 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[7bc70f60-f527-4d98-a95f-fabf3ee55d54]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf4c8474b-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:14:a2:5f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 151], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 687884, 'reachable_time': 30124, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 326981, 'error': None, 'target': 'ovnmeta-f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:56:19 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:56:19.823 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[3bdb7c73-392d-44d5-8e3e-9a2d4de37d1c]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapf4c8474b-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 687894, 'tstamp': 687894}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 326982, 'error': None, 'target': 'ovnmeta-f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapf4c8474b-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 687897, 'tstamp': 687897}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 326982, 'error': None, 'target': 'ovnmeta-f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:56:19 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:56:19.824 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf4c8474b-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:56:19 compute-0 nova_compute[250018]: 2026-01-20 14:56:19.826 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:19 compute-0 nova_compute[250018]: 2026-01-20 14:56:19.835 250022 INFO nova.virt.libvirt.driver [-] [instance: ce67f778-2918-4ec5-b8d6-ab1f4346d817] Instance destroyed successfully.
Jan 20 14:56:19 compute-0 nova_compute[250018]: 2026-01-20 14:56:19.835 250022 DEBUG nova.objects.instance [None req-52e7184e-d6f7-41b8-bae1-7a5c9c4aabe3 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Lazy-loading 'resources' on Instance uuid ce67f778-2918-4ec5-b8d6-ab1f4346d817 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:56:19 compute-0 nova_compute[250018]: 2026-01-20 14:56:19.838 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:19 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:56:19.838 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf4c8474b-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:56:19 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:56:19.838 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:56:19 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:56:19.839 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf4c8474b-00, col_values=(('external_ids', {'iface-id': '8c6fd3ab-70a8-4e63-99de-f2e15ac0207f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:56:19 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:56:19.839 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:56:19 compute-0 nova_compute[250018]: 2026-01-20 14:56:19.847 250022 DEBUG nova.virt.libvirt.vif [None req-52e7184e-d6f7-41b8-bae1-7a5c9c4aabe3 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:56:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1614139119',display_name='tempest-ServersTestJSON-server-1614139119',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1614139119',id=131,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJx48+N9sZKMLpQ9qbcSMm5w3l7zz2GESkvBxkHyejesNw7Z3uVJZScBx/ao2tQ5mWZLxSORkTB8kJmMstO2es6FU96DAllAb+ZVYrRpZXWD1/FfcVj3gmTepB8PghbySg==',key_name='tempest-key-1021718469',keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:56:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ca6cd0afe0ab41e3ab36d21a4129f734',ramdisk_id='',reservation_id='r-bshoghjz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-405461620',owner_user_name='tempest-ServersTestJSON-405461620-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T14:56:16Z,user_data=None,user_id='395a5c503218411284bc94c45263d1fb',uuid=ce67f778-2918-4ec5-b8d6-ab1f4346d817,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "facff57d-50c3-4c23-9db2-5a75346ccad9", "address": "fa:16:3e:d9:47:72", "network": {"id": "f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c", "bridge": "br-int", "label": "tempest-ServersTestJSON-1745321011-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": 
{}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca6cd0afe0ab41e3ab36d21a4129f734", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfacff57d-50", "ovs_interfaceid": "facff57d-50c3-4c23-9db2-5a75346ccad9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 14:56:19 compute-0 nova_compute[250018]: 2026-01-20 14:56:19.848 250022 DEBUG nova.network.os_vif_util [None req-52e7184e-d6f7-41b8-bae1-7a5c9c4aabe3 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Converting VIF {"id": "facff57d-50c3-4c23-9db2-5a75346ccad9", "address": "fa:16:3e:d9:47:72", "network": {"id": "f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c", "bridge": "br-int", "label": "tempest-ServersTestJSON-1745321011-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca6cd0afe0ab41e3ab36d21a4129f734", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfacff57d-50", "ovs_interfaceid": "facff57d-50c3-4c23-9db2-5a75346ccad9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:56:19 compute-0 nova_compute[250018]: 2026-01-20 14:56:19.849 250022 DEBUG nova.network.os_vif_util [None req-52e7184e-d6f7-41b8-bae1-7a5c9c4aabe3 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d9:47:72,bridge_name='br-int',has_traffic_filtering=True,id=facff57d-50c3-4c23-9db2-5a75346ccad9,network=Network(f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfacff57d-50') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:56:19 compute-0 nova_compute[250018]: 2026-01-20 14:56:19.849 250022 DEBUG os_vif [None req-52e7184e-d6f7-41b8-bae1-7a5c9c4aabe3 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d9:47:72,bridge_name='br-int',has_traffic_filtering=True,id=facff57d-50c3-4c23-9db2-5a75346ccad9,network=Network(f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfacff57d-50') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 14:56:19 compute-0 nova_compute[250018]: 2026-01-20 14:56:19.851 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:19 compute-0 nova_compute[250018]: 2026-01-20 14:56:19.851 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfacff57d-50, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:56:19 compute-0 nova_compute[250018]: 2026-01-20 14:56:19.852 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:19 compute-0 nova_compute[250018]: 2026-01-20 14:56:19.854 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:56:19 compute-0 nova_compute[250018]: 2026-01-20 14:56:19.855 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:19 compute-0 nova_compute[250018]: 2026-01-20 14:56:19.858 250022 INFO os_vif [None req-52e7184e-d6f7-41b8-bae1-7a5c9c4aabe3 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d9:47:72,bridge_name='br-int',has_traffic_filtering=True,id=facff57d-50c3-4c23-9db2-5a75346ccad9,network=Network(f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfacff57d-50')
Jan 20 14:56:19 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:56:19 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/3206897350' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:56:20 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2115: 321 pgs: 321 active+clean; 332 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 1.7 MiB/s wr, 275 op/s
Jan 20 14:56:20 compute-0 nova_compute[250018]: 2026-01-20 14:56:20.681 250022 INFO nova.virt.libvirt.driver [None req-52e7184e-d6f7-41b8-bae1-7a5c9c4aabe3 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: ce67f778-2918-4ec5-b8d6-ab1f4346d817] Deleting instance files /var/lib/nova/instances/ce67f778-2918-4ec5-b8d6-ab1f4346d817_del
Jan 20 14:56:20 compute-0 nova_compute[250018]: 2026-01-20 14:56:20.682 250022 INFO nova.virt.libvirt.driver [None req-52e7184e-d6f7-41b8-bae1-7a5c9c4aabe3 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: ce67f778-2918-4ec5-b8d6-ab1f4346d817] Deletion of /var/lib/nova/instances/ce67f778-2918-4ec5-b8d6-ab1f4346d817_del complete
Jan 20 14:56:20 compute-0 nova_compute[250018]: 2026-01-20 14:56:20.727 250022 INFO nova.compute.manager [None req-52e7184e-d6f7-41b8-bae1-7a5c9c4aabe3 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: ce67f778-2918-4ec5-b8d6-ab1f4346d817] Took 1.13 seconds to destroy the instance on the hypervisor.
Jan 20 14:56:20 compute-0 nova_compute[250018]: 2026-01-20 14:56:20.728 250022 DEBUG oslo.service.loopingcall [None req-52e7184e-d6f7-41b8-bae1-7a5c9c4aabe3 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 14:56:20 compute-0 nova_compute[250018]: 2026-01-20 14:56:20.728 250022 DEBUG nova.compute.manager [-] [instance: ce67f778-2918-4ec5-b8d6-ab1f4346d817] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 14:56:20 compute-0 nova_compute[250018]: 2026-01-20 14:56:20.728 250022 DEBUG nova.network.neutron [-] [instance: ce67f778-2918-4ec5-b8d6-ab1f4346d817] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 14:56:20 compute-0 nova_compute[250018]: 2026-01-20 14:56:20.797 250022 DEBUG nova.compute.manager [req-9a7d2372-fbdd-46a7-829a-dfcb66e133b7 req-f79b82b0-8966-4578-9483-4f18a48b76a8 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: ce67f778-2918-4ec5-b8d6-ab1f4346d817] Received event network-vif-unplugged-facff57d-50c3-4c23-9db2-5a75346ccad9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:56:20 compute-0 nova_compute[250018]: 2026-01-20 14:56:20.797 250022 DEBUG oslo_concurrency.lockutils [req-9a7d2372-fbdd-46a7-829a-dfcb66e133b7 req-f79b82b0-8966-4578-9483-4f18a48b76a8 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "ce67f778-2918-4ec5-b8d6-ab1f4346d817-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:56:20 compute-0 nova_compute[250018]: 2026-01-20 14:56:20.797 250022 DEBUG oslo_concurrency.lockutils [req-9a7d2372-fbdd-46a7-829a-dfcb66e133b7 req-f79b82b0-8966-4578-9483-4f18a48b76a8 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "ce67f778-2918-4ec5-b8d6-ab1f4346d817-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:56:20 compute-0 nova_compute[250018]: 2026-01-20 14:56:20.797 250022 DEBUG oslo_concurrency.lockutils [req-9a7d2372-fbdd-46a7-829a-dfcb66e133b7 req-f79b82b0-8966-4578-9483-4f18a48b76a8 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "ce67f778-2918-4ec5-b8d6-ab1f4346d817-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:56:20 compute-0 nova_compute[250018]: 2026-01-20 14:56:20.798 250022 DEBUG nova.compute.manager [req-9a7d2372-fbdd-46a7-829a-dfcb66e133b7 req-f79b82b0-8966-4578-9483-4f18a48b76a8 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: ce67f778-2918-4ec5-b8d6-ab1f4346d817] No waiting events found dispatching network-vif-unplugged-facff57d-50c3-4c23-9db2-5a75346ccad9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:56:20 compute-0 nova_compute[250018]: 2026-01-20 14:56:20.798 250022 DEBUG nova.compute.manager [req-9a7d2372-fbdd-46a7-829a-dfcb66e133b7 req-f79b82b0-8966-4578-9483-4f18a48b76a8 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: ce67f778-2918-4ec5-b8d6-ab1f4346d817] Received event network-vif-unplugged-facff57d-50c3-4c23-9db2-5a75346ccad9 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 20 14:56:20 compute-0 nova_compute[250018]: 2026-01-20 14:56:20.798 250022 DEBUG nova.compute.manager [req-9a7d2372-fbdd-46a7-829a-dfcb66e133b7 req-f79b82b0-8966-4578-9483-4f18a48b76a8 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: ce67f778-2918-4ec5-b8d6-ab1f4346d817] Received event network-vif-plugged-facff57d-50c3-4c23-9db2-5a75346ccad9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:56:20 compute-0 nova_compute[250018]: 2026-01-20 14:56:20.798 250022 DEBUG oslo_concurrency.lockutils [req-9a7d2372-fbdd-46a7-829a-dfcb66e133b7 req-f79b82b0-8966-4578-9483-4f18a48b76a8 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "ce67f778-2918-4ec5-b8d6-ab1f4346d817-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:56:20 compute-0 nova_compute[250018]: 2026-01-20 14:56:20.798 250022 DEBUG oslo_concurrency.lockutils [req-9a7d2372-fbdd-46a7-829a-dfcb66e133b7 req-f79b82b0-8966-4578-9483-4f18a48b76a8 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "ce67f778-2918-4ec5-b8d6-ab1f4346d817-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:56:20 compute-0 nova_compute[250018]: 2026-01-20 14:56:20.799 250022 DEBUG oslo_concurrency.lockutils [req-9a7d2372-fbdd-46a7-829a-dfcb66e133b7 req-f79b82b0-8966-4578-9483-4f18a48b76a8 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "ce67f778-2918-4ec5-b8d6-ab1f4346d817-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:56:20 compute-0 nova_compute[250018]: 2026-01-20 14:56:20.799 250022 DEBUG nova.compute.manager [req-9a7d2372-fbdd-46a7-829a-dfcb66e133b7 req-f79b82b0-8966-4578-9483-4f18a48b76a8 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: ce67f778-2918-4ec5-b8d6-ab1f4346d817] No waiting events found dispatching network-vif-plugged-facff57d-50c3-4c23-9db2-5a75346ccad9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:56:20 compute-0 nova_compute[250018]: 2026-01-20 14:56:20.799 250022 WARNING nova.compute.manager [req-9a7d2372-fbdd-46a7-829a-dfcb66e133b7 req-f79b82b0-8966-4578-9483-4f18a48b76a8 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: ce67f778-2918-4ec5-b8d6-ab1f4346d817] Received unexpected event network-vif-plugged-facff57d-50c3-4c23-9db2-5a75346ccad9 for instance with vm_state active and task_state deleting.
Jan 20 14:56:20 compute-0 ovn_controller[148666]: 2026-01-20T14:56:20Z|00478|binding|INFO|Releasing lport 8c6fd3ab-70a8-4e63-99de-f2e15ac0207f from this chassis (sb_readonly=0)
Jan 20 14:56:20 compute-0 nova_compute[250018]: 2026-01-20 14:56:20.816 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:21 compute-0 ceph-mon[74360]: pgmap v2115: 321 pgs: 321 active+clean; 332 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 1.7 MiB/s wr, 275 op/s
Jan 20 14:56:21 compute-0 ovn_controller[148666]: 2026-01-20T14:56:21Z|00479|binding|INFO|Releasing lport 8c6fd3ab-70a8-4e63-99de-f2e15ac0207f from this chassis (sb_readonly=0)
Jan 20 14:56:21 compute-0 nova_compute[250018]: 2026-01-20 14:56:21.020 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:21 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:56:21 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:56:21 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:56:21.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:56:21 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:56:21 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:56:21 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:56:21.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:56:21 compute-0 nova_compute[250018]: 2026-01-20 14:56:21.987 250022 DEBUG nova.network.neutron [-] [instance: ce67f778-2918-4ec5-b8d6-ab1f4346d817] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:56:22 compute-0 nova_compute[250018]: 2026-01-20 14:56:22.007 250022 INFO nova.compute.manager [-] [instance: ce67f778-2918-4ec5-b8d6-ab1f4346d817] Took 1.28 seconds to deallocate network for instance.
Jan 20 14:56:22 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2116: 321 pgs: 321 active+clean; 304 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 5.3 MiB/s rd, 1.4 MiB/s wr, 291 op/s
Jan 20 14:56:22 compute-0 nova_compute[250018]: 2026-01-20 14:56:22.079 250022 DEBUG oslo_concurrency.lockutils [None req-52e7184e-d6f7-41b8-bae1-7a5c9c4aabe3 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:56:22 compute-0 nova_compute[250018]: 2026-01-20 14:56:22.080 250022 DEBUG oslo_concurrency.lockutils [None req-52e7184e-d6f7-41b8-bae1-7a5c9c4aabe3 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:56:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:56:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:56:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:56:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:56:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:56:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:56:22 compute-0 nova_compute[250018]: 2026-01-20 14:56:22.179 250022 DEBUG oslo_concurrency.processutils [None req-52e7184e-d6f7-41b8-bae1-7a5c9c4aabe3 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:56:22 compute-0 nova_compute[250018]: 2026-01-20 14:56:22.216 250022 DEBUG nova.compute.manager [req-d77d7d56-354c-464d-bb94-6705c631e6b4 req-49287084-6ef0-479c-ba4d-235b4571bf19 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: ce67f778-2918-4ec5-b8d6-ab1f4346d817] Received event network-vif-deleted-facff57d-50c3-4c23-9db2-5a75346ccad9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:56:22 compute-0 ceph-mon[74360]: pgmap v2116: 321 pgs: 321 active+clean; 304 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 5.3 MiB/s rd, 1.4 MiB/s wr, 291 op/s
Jan 20 14:56:22 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:56:22 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/539267138' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:56:22 compute-0 nova_compute[250018]: 2026-01-20 14:56:22.925 250022 DEBUG oslo_concurrency.processutils [None req-52e7184e-d6f7-41b8-bae1-7a5c9c4aabe3 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.746s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:56:22 compute-0 nova_compute[250018]: 2026-01-20 14:56:22.931 250022 DEBUG nova.compute.provider_tree [None req-52e7184e-d6f7-41b8-bae1-7a5c9c4aabe3 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:56:22 compute-0 nova_compute[250018]: 2026-01-20 14:56:22.945 250022 DEBUG nova.scheduler.client.report [None req-52e7184e-d6f7-41b8-bae1-7a5c9c4aabe3 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:56:22 compute-0 nova_compute[250018]: 2026-01-20 14:56:22.988 250022 DEBUG oslo_concurrency.lockutils [None req-52e7184e-d6f7-41b8-bae1-7a5c9c4aabe3 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.908s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:56:23 compute-0 nova_compute[250018]: 2026-01-20 14:56:23.031 250022 INFO nova.scheduler.client.report [None req-52e7184e-d6f7-41b8-bae1-7a5c9c4aabe3 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Deleted allocations for instance ce67f778-2918-4ec5-b8d6-ab1f4346d817
Jan 20 14:56:23 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:56:23 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:56:23 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:56:23.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:56:23 compute-0 nova_compute[250018]: 2026-01-20 14:56:23.104 250022 DEBUG oslo_concurrency.lockutils [None req-52e7184e-d6f7-41b8-bae1-7a5c9c4aabe3 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Lock "ce67f778-2918-4ec5-b8d6-ab1f4346d817" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.509s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:56:23 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:56:23 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:56:23 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:56:23.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:56:23 compute-0 nova_compute[250018]: 2026-01-20 14:56:23.435 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:23 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/539267138' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:56:24 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2117: 321 pgs: 321 active+clean; 276 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 1.3 MiB/s wr, 311 op/s
Jan 20 14:56:24 compute-0 ceph-mon[74360]: pgmap v2117: 321 pgs: 321 active+clean; 276 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 1.3 MiB/s wr, 311 op/s
Jan 20 14:56:24 compute-0 sudo[327039]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:56:24 compute-0 sudo[327039]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:56:24 compute-0 sudo[327039]: pam_unix(sudo:session): session closed for user root
Jan 20 14:56:24 compute-0 sudo[327064]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:56:24 compute-0 sudo[327064]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:56:24 compute-0 sudo[327064]: pam_unix(sudo:session): session closed for user root
Jan 20 14:56:24 compute-0 sudo[327089]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:56:24 compute-0 sudo[327089]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:56:24 compute-0 sudo[327089]: pam_unix(sudo:session): session closed for user root
Jan 20 14:56:24 compute-0 sudo[327114]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 14:56:24 compute-0 sudo[327114]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:56:24 compute-0 nova_compute[250018]: 2026-01-20 14:56:24.853 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:24 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:56:25 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:56:25 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:56:25 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:56:25.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:56:25 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:56:25 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:56:25 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:56:25.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:56:25 compute-0 sudo[327114]: pam_unix(sudo:session): session closed for user root
Jan 20 14:56:25 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 14:56:25 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:56:25 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 20 14:56:25 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:56:25 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 20 14:56:25 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:56:25 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 4ec0b805-0fe9-43c9-b966-13f1573e66a9 does not exist
Jan 20 14:56:25 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 1b646a40-bdd2-4f83-8501-ceaee25beed0 does not exist
Jan 20 14:56:25 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev ea6dbea7-da01-416c-9c5c-c9af80a680d3 does not exist
Jan 20 14:56:25 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 20 14:56:25 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:56:25 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 20 14:56:25 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:56:25 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 14:56:25 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:56:25 compute-0 sudo[327170]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:56:25 compute-0 sudo[327170]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:56:25 compute-0 sudo[327170]: pam_unix(sudo:session): session closed for user root
Jan 20 14:56:25 compute-0 sudo[327195]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:56:25 compute-0 sudo[327195]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:56:25 compute-0 sudo[327195]: pam_unix(sudo:session): session closed for user root
Jan 20 14:56:25 compute-0 sudo[327220]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:56:25 compute-0 sudo[327220]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:56:25 compute-0 sudo[327220]: pam_unix(sudo:session): session closed for user root
Jan 20 14:56:25 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:56:25 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:56:25 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:56:25 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:56:25 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:56:25 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:56:25 compute-0 sudo[327245]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 14:56:25 compute-0 sudo[327245]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:56:26 compute-0 podman[327311]: 2026-01-20 14:56:26.016495974 +0000 UTC m=+0.047386907 container create e0df0408b196e7f8c4f2d7a5ebd902ca5652f6948366bcadedd695718399f8e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_rosalind, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 20 14:56:26 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2118: 321 pgs: 321 active+clean; 278 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 5.5 MiB/s rd, 2.1 MiB/s wr, 328 op/s
Jan 20 14:56:26 compute-0 systemd[1]: Started libpod-conmon-e0df0408b196e7f8c4f2d7a5ebd902ca5652f6948366bcadedd695718399f8e8.scope.
Jan 20 14:56:26 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:56:26 compute-0 podman[327311]: 2026-01-20 14:56:25.997014479 +0000 UTC m=+0.027905442 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:56:26 compute-0 podman[327311]: 2026-01-20 14:56:26.107268436 +0000 UTC m=+0.138159389 container init e0df0408b196e7f8c4f2d7a5ebd902ca5652f6948366bcadedd695718399f8e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_rosalind, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 20 14:56:26 compute-0 podman[327311]: 2026-01-20 14:56:26.116873654 +0000 UTC m=+0.147764607 container start e0df0408b196e7f8c4f2d7a5ebd902ca5652f6948366bcadedd695718399f8e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_rosalind, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 20 14:56:26 compute-0 podman[327311]: 2026-01-20 14:56:26.120467372 +0000 UTC m=+0.151358305 container attach e0df0408b196e7f8c4f2d7a5ebd902ca5652f6948366bcadedd695718399f8e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_rosalind, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 20 14:56:26 compute-0 unruffled_rosalind[327327]: 167 167
Jan 20 14:56:26 compute-0 systemd[1]: libpod-e0df0408b196e7f8c4f2d7a5ebd902ca5652f6948366bcadedd695718399f8e8.scope: Deactivated successfully.
Jan 20 14:56:26 compute-0 podman[327311]: 2026-01-20 14:56:26.125935629 +0000 UTC m=+0.156826582 container died e0df0408b196e7f8c4f2d7a5ebd902ca5652f6948366bcadedd695718399f8e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_rosalind, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 20 14:56:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-ec883d1dd5bd886b03df9d37c58dda0b327ffc41b013fb131ae16caf743306af-merged.mount: Deactivated successfully.
Jan 20 14:56:26 compute-0 podman[327311]: 2026-01-20 14:56:26.174336451 +0000 UTC m=+0.205227384 container remove e0df0408b196e7f8c4f2d7a5ebd902ca5652f6948366bcadedd695718399f8e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_rosalind, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:56:26 compute-0 systemd[1]: libpod-conmon-e0df0408b196e7f8c4f2d7a5ebd902ca5652f6948366bcadedd695718399f8e8.scope: Deactivated successfully.
Jan 20 14:56:26 compute-0 podman[327352]: 2026-01-20 14:56:26.35559958 +0000 UTC m=+0.046621997 container create 0bc334eae4f4e7a18de6f8d126eabb176e7012219a23b677585b010d9d50c355 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_galois, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:56:26 compute-0 systemd[1]: Started libpod-conmon-0bc334eae4f4e7a18de6f8d126eabb176e7012219a23b677585b010d9d50c355.scope.
Jan 20 14:56:26 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:56:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2fd08ade7dcab259e0a403a7813cae26cf050c885a3b7c8b6d801aed920f015a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:56:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2fd08ade7dcab259e0a403a7813cae26cf050c885a3b7c8b6d801aed920f015a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:56:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2fd08ade7dcab259e0a403a7813cae26cf050c885a3b7c8b6d801aed920f015a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:56:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2fd08ade7dcab259e0a403a7813cae26cf050c885a3b7c8b6d801aed920f015a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:56:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2fd08ade7dcab259e0a403a7813cae26cf050c885a3b7c8b6d801aed920f015a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 14:56:26 compute-0 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 20 14:56:26 compute-0 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 20 14:56:26 compute-0 podman[327352]: 2026-01-20 14:56:26.336318331 +0000 UTC m=+0.027340768 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:56:26 compute-0 podman[327352]: 2026-01-20 14:56:26.442532839 +0000 UTC m=+0.133555246 container init 0bc334eae4f4e7a18de6f8d126eabb176e7012219a23b677585b010d9d50c355 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_galois, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:56:26 compute-0 podman[327352]: 2026-01-20 14:56:26.450983137 +0000 UTC m=+0.142005554 container start 0bc334eae4f4e7a18de6f8d126eabb176e7012219a23b677585b010d9d50c355 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_galois, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:56:26 compute-0 podman[327352]: 2026-01-20 14:56:26.454265245 +0000 UTC m=+0.145287652 container attach 0bc334eae4f4e7a18de6f8d126eabb176e7012219a23b677585b010d9d50c355 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_galois, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 14:56:26 compute-0 ceph-mon[74360]: pgmap v2118: 321 pgs: 321 active+clean; 278 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 5.5 MiB/s rd, 2.1 MiB/s wr, 328 op/s
Jan 20 14:56:27 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:56:27 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:56:27 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:56:27.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:56:27 compute-0 fervent_galois[327368]: --> passed data devices: 0 physical, 1 LVM
Jan 20 14:56:27 compute-0 fervent_galois[327368]: --> relative data size: 1.0
Jan 20 14:56:27 compute-0 fervent_galois[327368]: --> All data devices are unavailable
Jan 20 14:56:27 compute-0 systemd[1]: libpod-0bc334eae4f4e7a18de6f8d126eabb176e7012219a23b677585b010d9d50c355.scope: Deactivated successfully.
Jan 20 14:56:27 compute-0 podman[327352]: 2026-01-20 14:56:27.247025349 +0000 UTC m=+0.938047756 container died 0bc334eae4f4e7a18de6f8d126eabb176e7012219a23b677585b010d9d50c355 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_galois, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 20 14:56:27 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:56:27 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:56:27 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:56:27.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:56:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-2fd08ade7dcab259e0a403a7813cae26cf050c885a3b7c8b6d801aed920f015a-merged.mount: Deactivated successfully.
Jan 20 14:56:27 compute-0 podman[327352]: 2026-01-20 14:56:27.526926702 +0000 UTC m=+1.217949119 container remove 0bc334eae4f4e7a18de6f8d126eabb176e7012219a23b677585b010d9d50c355 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_galois, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:56:27 compute-0 sudo[327245]: pam_unix(sudo:session): session closed for user root
Jan 20 14:56:27 compute-0 systemd[1]: libpod-conmon-0bc334eae4f4e7a18de6f8d126eabb176e7012219a23b677585b010d9d50c355.scope: Deactivated successfully.
Jan 20 14:56:27 compute-0 sudo[327397]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:56:27 compute-0 sudo[327397]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:56:27 compute-0 sudo[327397]: pam_unix(sudo:session): session closed for user root
Jan 20 14:56:27 compute-0 sudo[327422]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:56:27 compute-0 sudo[327422]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:56:27 compute-0 sudo[327422]: pam_unix(sudo:session): session closed for user root
Jan 20 14:56:27 compute-0 sudo[327447]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:56:27 compute-0 sudo[327447]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:56:27 compute-0 sudo[327447]: pam_unix(sudo:session): session closed for user root
Jan 20 14:56:27 compute-0 sudo[327473]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- lvm list --format json
Jan 20 14:56:27 compute-0 sudo[327473]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:56:28 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2119: 321 pgs: 321 active+clean; 279 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 2.1 MiB/s wr, 231 op/s
Jan 20 14:56:28 compute-0 podman[327539]: 2026-01-20 14:56:28.169645129 +0000 UTC m=+0.037314344 container create 2b9f29c35a5fa67869df1e5406076af1258ee1195930983e802bc081471eeae3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_gauss, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:56:28 compute-0 systemd[1]: Started libpod-conmon-2b9f29c35a5fa67869df1e5406076af1258ee1195930983e802bc081471eeae3.scope.
Jan 20 14:56:28 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:56:28 compute-0 podman[327539]: 2026-01-20 14:56:28.237093554 +0000 UTC m=+0.104762789 container init 2b9f29c35a5fa67869df1e5406076af1258ee1195930983e802bc081471eeae3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_gauss, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:56:28 compute-0 podman[327539]: 2026-01-20 14:56:28.243333153 +0000 UTC m=+0.111002378 container start 2b9f29c35a5fa67869df1e5406076af1258ee1195930983e802bc081471eeae3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_gauss, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2)
Jan 20 14:56:28 compute-0 brave_gauss[327555]: 167 167
Jan 20 14:56:28 compute-0 systemd[1]: libpod-2b9f29c35a5fa67869df1e5406076af1258ee1195930983e802bc081471eeae3.scope: Deactivated successfully.
Jan 20 14:56:28 compute-0 podman[327539]: 2026-01-20 14:56:28.247442443 +0000 UTC m=+0.115111658 container attach 2b9f29c35a5fa67869df1e5406076af1258ee1195930983e802bc081471eeae3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_gauss, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 20 14:56:28 compute-0 podman[327539]: 2026-01-20 14:56:28.248116661 +0000 UTC m=+0.115785886 container died 2b9f29c35a5fa67869df1e5406076af1258ee1195930983e802bc081471eeae3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_gauss, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 20 14:56:28 compute-0 podman[327539]: 2026-01-20 14:56:28.153837424 +0000 UTC m=+0.021506659 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:56:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-0cac722a716c26850b6f3dc13f3af7bc252bdbc54f1acaf6b117cc05e890a5fe-merged.mount: Deactivated successfully.
Jan 20 14:56:28 compute-0 podman[327539]: 2026-01-20 14:56:28.284465949 +0000 UTC m=+0.152135164 container remove 2b9f29c35a5fa67869df1e5406076af1258ee1195930983e802bc081471eeae3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_gauss, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 20 14:56:28 compute-0 systemd[1]: libpod-conmon-2b9f29c35a5fa67869df1e5406076af1258ee1195930983e802bc081471eeae3.scope: Deactivated successfully.
Jan 20 14:56:28 compute-0 nova_compute[250018]: 2026-01-20 14:56:28.436 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:28 compute-0 podman[327580]: 2026-01-20 14:56:28.457139877 +0000 UTC m=+0.042476395 container create e96575db8c679cedfb034c3ccd9a03b0d2fbc3b4bc7e5fe90e73161e5a00db8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_maxwell, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 20 14:56:28 compute-0 systemd[1]: Started libpod-conmon-e96575db8c679cedfb034c3ccd9a03b0d2fbc3b4bc7e5fe90e73161e5a00db8e.scope.
Jan 20 14:56:28 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:56:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c18d83fd35f8bc02bcf56ee5b0a448a6b7d81a247025613dc5a6684d8afaf095/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:56:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c18d83fd35f8bc02bcf56ee5b0a448a6b7d81a247025613dc5a6684d8afaf095/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:56:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c18d83fd35f8bc02bcf56ee5b0a448a6b7d81a247025613dc5a6684d8afaf095/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:56:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c18d83fd35f8bc02bcf56ee5b0a448a6b7d81a247025613dc5a6684d8afaf095/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:56:28 compute-0 podman[327580]: 2026-01-20 14:56:28.439137402 +0000 UTC m=+0.024473940 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:56:28 compute-0 podman[327580]: 2026-01-20 14:56:28.535585558 +0000 UTC m=+0.120922096 container init e96575db8c679cedfb034c3ccd9a03b0d2fbc3b4bc7e5fe90e73161e5a00db8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_maxwell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 14:56:28 compute-0 podman[327580]: 2026-01-20 14:56:28.541320882 +0000 UTC m=+0.126657400 container start e96575db8c679cedfb034c3ccd9a03b0d2fbc3b4bc7e5fe90e73161e5a00db8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_maxwell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 20 14:56:28 compute-0 podman[327580]: 2026-01-20 14:56:28.544373374 +0000 UTC m=+0.129709912 container attach e96575db8c679cedfb034c3ccd9a03b0d2fbc3b4bc7e5fe90e73161e5a00db8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_maxwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:56:29 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:56:29 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:56:29 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:56:29.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:56:29 compute-0 ceph-mon[74360]: pgmap v2119: 321 pgs: 321 active+clean; 279 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 2.1 MiB/s wr, 231 op/s
Jan 20 14:56:29 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/1100107449' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:56:29 compute-0 romantic_maxwell[327597]: {
Jan 20 14:56:29 compute-0 romantic_maxwell[327597]:     "0": [
Jan 20 14:56:29 compute-0 romantic_maxwell[327597]:         {
Jan 20 14:56:29 compute-0 romantic_maxwell[327597]:             "devices": [
Jan 20 14:56:29 compute-0 romantic_maxwell[327597]:                 "/dev/loop3"
Jan 20 14:56:29 compute-0 romantic_maxwell[327597]:             ],
Jan 20 14:56:29 compute-0 romantic_maxwell[327597]:             "lv_name": "ceph_lv0",
Jan 20 14:56:29 compute-0 romantic_maxwell[327597]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:56:29 compute-0 romantic_maxwell[327597]:             "lv_size": "7511998464",
Jan 20 14:56:29 compute-0 romantic_maxwell[327597]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e399cf45-e6b6-5393-99f1-75c601d3f188,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afc3740a-ccee-46f8-b343-22648cc89376,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 20 14:56:29 compute-0 romantic_maxwell[327597]:             "lv_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 14:56:29 compute-0 romantic_maxwell[327597]:             "name": "ceph_lv0",
Jan 20 14:56:29 compute-0 romantic_maxwell[327597]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:56:29 compute-0 romantic_maxwell[327597]:             "tags": {
Jan 20 14:56:29 compute-0 romantic_maxwell[327597]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:56:29 compute-0 romantic_maxwell[327597]:                 "ceph.block_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 14:56:29 compute-0 romantic_maxwell[327597]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 14:56:29 compute-0 romantic_maxwell[327597]:                 "ceph.cluster_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 14:56:29 compute-0 romantic_maxwell[327597]:                 "ceph.cluster_name": "ceph",
Jan 20 14:56:29 compute-0 romantic_maxwell[327597]:                 "ceph.crush_device_class": "",
Jan 20 14:56:29 compute-0 romantic_maxwell[327597]:                 "ceph.encrypted": "0",
Jan 20 14:56:29 compute-0 romantic_maxwell[327597]:                 "ceph.osd_fsid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 14:56:29 compute-0 romantic_maxwell[327597]:                 "ceph.osd_id": "0",
Jan 20 14:56:29 compute-0 romantic_maxwell[327597]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 14:56:29 compute-0 romantic_maxwell[327597]:                 "ceph.type": "block",
Jan 20 14:56:29 compute-0 romantic_maxwell[327597]:                 "ceph.vdo": "0"
Jan 20 14:56:29 compute-0 romantic_maxwell[327597]:             },
Jan 20 14:56:29 compute-0 romantic_maxwell[327597]:             "type": "block",
Jan 20 14:56:29 compute-0 romantic_maxwell[327597]:             "vg_name": "ceph_vg0"
Jan 20 14:56:29 compute-0 romantic_maxwell[327597]:         }
Jan 20 14:56:29 compute-0 romantic_maxwell[327597]:     ]
Jan 20 14:56:29 compute-0 romantic_maxwell[327597]: }
Jan 20 14:56:29 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:56:29 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:56:29 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:56:29.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:56:29 compute-0 systemd[1]: libpod-e96575db8c679cedfb034c3ccd9a03b0d2fbc3b4bc7e5fe90e73161e5a00db8e.scope: Deactivated successfully.
Jan 20 14:56:29 compute-0 podman[327580]: 2026-01-20 14:56:29.320759598 +0000 UTC m=+0.906096136 container died e96575db8c679cedfb034c3ccd9a03b0d2fbc3b4bc7e5fe90e73161e5a00db8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_maxwell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 20 14:56:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-c18d83fd35f8bc02bcf56ee5b0a448a6b7d81a247025613dc5a6684d8afaf095-merged.mount: Deactivated successfully.
Jan 20 14:56:29 compute-0 podman[327580]: 2026-01-20 14:56:29.373699143 +0000 UTC m=+0.959035671 container remove e96575db8c679cedfb034c3ccd9a03b0d2fbc3b4bc7e5fe90e73161e5a00db8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_maxwell, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 20 14:56:29 compute-0 systemd[1]: libpod-conmon-e96575db8c679cedfb034c3ccd9a03b0d2fbc3b4bc7e5fe90e73161e5a00db8e.scope: Deactivated successfully.
Jan 20 14:56:29 compute-0 sudo[327473]: pam_unix(sudo:session): session closed for user root
Jan 20 14:56:29 compute-0 sudo[327620]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:56:29 compute-0 sudo[327620]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:56:29 compute-0 sudo[327620]: pam_unix(sudo:session): session closed for user root
Jan 20 14:56:29 compute-0 sudo[327645]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:56:29 compute-0 sudo[327645]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:56:29 compute-0 sudo[327645]: pam_unix(sudo:session): session closed for user root
Jan 20 14:56:29 compute-0 sudo[327670]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:56:29 compute-0 sudo[327670]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:56:29 compute-0 sudo[327670]: pam_unix(sudo:session): session closed for user root
Jan 20 14:56:29 compute-0 sudo[327695]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- raw list --format json
Jan 20 14:56:29 compute-0 sudo[327695]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:56:29 compute-0 nova_compute[250018]: 2026-01-20 14:56:29.855 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:29 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:56:29 compute-0 podman[327759]: 2026-01-20 14:56:29.9952182 +0000 UTC m=+0.051600490 container create 0fac90cbfd5f2f3f4f46ba628a584e1eed8182ebf833db774b4e0f87baa7cf1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_sinoussi, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 20 14:56:30 compute-0 systemd[1]: Started libpod-conmon-0fac90cbfd5f2f3f4f46ba628a584e1eed8182ebf833db774b4e0f87baa7cf1d.scope.
Jan 20 14:56:30 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:56:30 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2120: 321 pgs: 321 active+clean; 289 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.3 MiB/s wr, 214 op/s
Jan 20 14:56:30 compute-0 podman[327759]: 2026-01-20 14:56:29.96995772 +0000 UTC m=+0.026340090 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:56:30 compute-0 podman[327759]: 2026-01-20 14:56:30.068319317 +0000 UTC m=+0.124701597 container init 0fac90cbfd5f2f3f4f46ba628a584e1eed8182ebf833db774b4e0f87baa7cf1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_sinoussi, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:56:30 compute-0 podman[327759]: 2026-01-20 14:56:30.074507944 +0000 UTC m=+0.130890224 container start 0fac90cbfd5f2f3f4f46ba628a584e1eed8182ebf833db774b4e0f87baa7cf1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_sinoussi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 20 14:56:30 compute-0 podman[327759]: 2026-01-20 14:56:30.077253348 +0000 UTC m=+0.133635648 container attach 0fac90cbfd5f2f3f4f46ba628a584e1eed8182ebf833db774b4e0f87baa7cf1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_sinoussi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef)
Jan 20 14:56:30 compute-0 serene_sinoussi[327774]: 167 167
Jan 20 14:56:30 compute-0 systemd[1]: libpod-0fac90cbfd5f2f3f4f46ba628a584e1eed8182ebf833db774b4e0f87baa7cf1d.scope: Deactivated successfully.
Jan 20 14:56:30 compute-0 podman[327759]: 2026-01-20 14:56:30.079547769 +0000 UTC m=+0.135930049 container died 0fac90cbfd5f2f3f4f46ba628a584e1eed8182ebf833db774b4e0f87baa7cf1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_sinoussi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:56:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-91e4e7d867ffd940f0631d9c949ad30c09e12ba689285dee86d4eb31e6aac9ad-merged.mount: Deactivated successfully.
Jan 20 14:56:30 compute-0 podman[327759]: 2026-01-20 14:56:30.115656781 +0000 UTC m=+0.172039081 container remove 0fac90cbfd5f2f3f4f46ba628a584e1eed8182ebf833db774b4e0f87baa7cf1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_sinoussi, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507)
Jan 20 14:56:30 compute-0 systemd[1]: libpod-conmon-0fac90cbfd5f2f3f4f46ba628a584e1eed8182ebf833db774b4e0f87baa7cf1d.scope: Deactivated successfully.
Jan 20 14:56:30 compute-0 podman[327800]: 2026-01-20 14:56:30.306628181 +0000 UTC m=+0.041883698 container create b61f81a2a1a9895933dd0c34df4e3efdbbf33167d590ad1db28acfd19bd10ebe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_bose, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 20 14:56:30 compute-0 systemd[1]: Started libpod-conmon-b61f81a2a1a9895933dd0c34df4e3efdbbf33167d590ad1db28acfd19bd10ebe.scope.
Jan 20 14:56:30 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:56:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1690c3a3945b064b5d9b35f568f878b6c219022ccc4bcd4229400026df73fbca/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:56:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1690c3a3945b064b5d9b35f568f878b6c219022ccc4bcd4229400026df73fbca/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:56:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1690c3a3945b064b5d9b35f568f878b6c219022ccc4bcd4229400026df73fbca/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:56:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1690c3a3945b064b5d9b35f568f878b6c219022ccc4bcd4229400026df73fbca/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:56:30 compute-0 podman[327800]: 2026-01-20 14:56:30.286781016 +0000 UTC m=+0.022036553 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:56:30 compute-0 podman[327800]: 2026-01-20 14:56:30.384181078 +0000 UTC m=+0.119436645 container init b61f81a2a1a9895933dd0c34df4e3efdbbf33167d590ad1db28acfd19bd10ebe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_bose, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 14:56:30 compute-0 podman[327800]: 2026-01-20 14:56:30.391955667 +0000 UTC m=+0.127211204 container start b61f81a2a1a9895933dd0c34df4e3efdbbf33167d590ad1db28acfd19bd10ebe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_bose, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:56:30 compute-0 podman[327800]: 2026-01-20 14:56:30.394987599 +0000 UTC m=+0.130243126 container attach b61f81a2a1a9895933dd0c34df4e3efdbbf33167d590ad1db28acfd19bd10ebe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_bose, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:56:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:56:30.767 160071 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:56:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:56:30.769 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:56:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:56:30.770 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:56:31 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:56:31 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:56:31 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:56:31.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:56:31 compute-0 ceph-mon[74360]: pgmap v2120: 321 pgs: 321 active+clean; 289 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.3 MiB/s wr, 214 op/s
Jan 20 14:56:31 compute-0 reverent_bose[327816]: {
Jan 20 14:56:31 compute-0 reverent_bose[327816]:     "afc3740a-ccee-46f8-b343-22648cc89376": {
Jan 20 14:56:31 compute-0 reverent_bose[327816]:         "ceph_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 14:56:31 compute-0 reverent_bose[327816]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 20 14:56:31 compute-0 reverent_bose[327816]:         "osd_id": 0,
Jan 20 14:56:31 compute-0 reverent_bose[327816]:         "osd_uuid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 14:56:31 compute-0 reverent_bose[327816]:         "type": "bluestore"
Jan 20 14:56:31 compute-0 reverent_bose[327816]:     }
Jan 20 14:56:31 compute-0 reverent_bose[327816]: }
Jan 20 14:56:31 compute-0 systemd[1]: libpod-b61f81a2a1a9895933dd0c34df4e3efdbbf33167d590ad1db28acfd19bd10ebe.scope: Deactivated successfully.
Jan 20 14:56:31 compute-0 podman[327837]: 2026-01-20 14:56:31.283187032 +0000 UTC m=+0.023901544 container died b61f81a2a1a9895933dd0c34df4e3efdbbf33167d590ad1db28acfd19bd10ebe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_bose, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:56:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-1690c3a3945b064b5d9b35f568f878b6c219022ccc4bcd4229400026df73fbca-merged.mount: Deactivated successfully.
Jan 20 14:56:31 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:56:31 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:56:31 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:56:31.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:56:31 compute-0 podman[327837]: 2026-01-20 14:56:31.335428878 +0000 UTC m=+0.076143380 container remove b61f81a2a1a9895933dd0c34df4e3efdbbf33167d590ad1db28acfd19bd10ebe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_bose, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 20 14:56:31 compute-0 systemd[1]: libpod-conmon-b61f81a2a1a9895933dd0c34df4e3efdbbf33167d590ad1db28acfd19bd10ebe.scope: Deactivated successfully.
Jan 20 14:56:31 compute-0 sudo[327695]: pam_unix(sudo:session): session closed for user root
Jan 20 14:56:31 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 14:56:31 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:56:31 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 14:56:31 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:56:31 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 1bc378c6-93a3-4531-9cb4-b7db075ba50c does not exist
Jan 20 14:56:31 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev c509cddf-1e4d-4f89-9c29-0623278da083 does not exist
Jan 20 14:56:31 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 85c6a773-9e6f-46da-85f8-f61c8516766d does not exist
Jan 20 14:56:31 compute-0 sudo[327852]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:56:31 compute-0 sudo[327852]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:56:31 compute-0 sudo[327852]: pam_unix(sudo:session): session closed for user root
Jan 20 14:56:31 compute-0 sudo[327877]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 14:56:31 compute-0 sudo[327877]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:56:31 compute-0 sudo[327877]: pam_unix(sudo:session): session closed for user root
Jan 20 14:56:31 compute-0 nova_compute[250018]: 2026-01-20 14:56:31.725 250022 DEBUG oslo_concurrency.lockutils [None req-0fcbda40-c352-4a4d-8ce4-308ad232ab96 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Acquiring lock "07b956d3-9ad0-4477-9774-52336fa39e0c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:56:31 compute-0 nova_compute[250018]: 2026-01-20 14:56:31.725 250022 DEBUG oslo_concurrency.lockutils [None req-0fcbda40-c352-4a4d-8ce4-308ad232ab96 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Lock "07b956d3-9ad0-4477-9774-52336fa39e0c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:56:31 compute-0 nova_compute[250018]: 2026-01-20 14:56:31.756 250022 DEBUG nova.compute.manager [None req-0fcbda40-c352-4a4d-8ce4-308ad232ab96 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: 07b956d3-9ad0-4477-9774-52336fa39e0c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 14:56:31 compute-0 nova_compute[250018]: 2026-01-20 14:56:31.856 250022 DEBUG oslo_concurrency.lockutils [None req-0fcbda40-c352-4a4d-8ce4-308ad232ab96 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:56:31 compute-0 nova_compute[250018]: 2026-01-20 14:56:31.857 250022 DEBUG oslo_concurrency.lockutils [None req-0fcbda40-c352-4a4d-8ce4-308ad232ab96 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:56:31 compute-0 nova_compute[250018]: 2026-01-20 14:56:31.868 250022 DEBUG nova.virt.hardware [None req-0fcbda40-c352-4a4d-8ce4-308ad232ab96 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 14:56:31 compute-0 nova_compute[250018]: 2026-01-20 14:56:31.869 250022 INFO nova.compute.claims [None req-0fcbda40-c352-4a4d-8ce4-308ad232ab96 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: 07b956d3-9ad0-4477-9774-52336fa39e0c] Claim successful on node compute-0.ctlplane.example.com
Jan 20 14:56:32 compute-0 nova_compute[250018]: 2026-01-20 14:56:32.001 250022 DEBUG oslo_concurrency.processutils [None req-0fcbda40-c352-4a4d-8ce4-308ad232ab96 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:56:32 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2121: 321 pgs: 321 active+clean; 310 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 3.1 MiB/s wr, 173 op/s
Jan 20 14:56:32 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:56:32 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:56:32 compute-0 ceph-mon[74360]: pgmap v2121: 321 pgs: 321 active+clean; 310 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 3.1 MiB/s wr, 173 op/s
Jan 20 14:56:32 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:56:32 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/315137130' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:56:32 compute-0 nova_compute[250018]: 2026-01-20 14:56:32.428 250022 DEBUG oslo_concurrency.processutils [None req-0fcbda40-c352-4a4d-8ce4-308ad232ab96 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:56:32 compute-0 nova_compute[250018]: 2026-01-20 14:56:32.433 250022 DEBUG nova.compute.provider_tree [None req-0fcbda40-c352-4a4d-8ce4-308ad232ab96 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:56:32 compute-0 nova_compute[250018]: 2026-01-20 14:56:32.452 250022 DEBUG nova.scheduler.client.report [None req-0fcbda40-c352-4a4d-8ce4-308ad232ab96 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:56:32 compute-0 nova_compute[250018]: 2026-01-20 14:56:32.471 250022 DEBUG oslo_concurrency.lockutils [None req-0fcbda40-c352-4a4d-8ce4-308ad232ab96 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.613s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:56:32 compute-0 nova_compute[250018]: 2026-01-20 14:56:32.471 250022 DEBUG nova.compute.manager [None req-0fcbda40-c352-4a4d-8ce4-308ad232ab96 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: 07b956d3-9ad0-4477-9774-52336fa39e0c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 14:56:32 compute-0 nova_compute[250018]: 2026-01-20 14:56:32.558 250022 DEBUG nova.compute.manager [None req-0fcbda40-c352-4a4d-8ce4-308ad232ab96 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: 07b956d3-9ad0-4477-9774-52336fa39e0c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 20 14:56:32 compute-0 nova_compute[250018]: 2026-01-20 14:56:32.558 250022 DEBUG nova.network.neutron [None req-0fcbda40-c352-4a4d-8ce4-308ad232ab96 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: 07b956d3-9ad0-4477-9774-52336fa39e0c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 14:56:32 compute-0 nova_compute[250018]: 2026-01-20 14:56:32.591 250022 INFO nova.virt.libvirt.driver [None req-0fcbda40-c352-4a4d-8ce4-308ad232ab96 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: 07b956d3-9ad0-4477-9774-52336fa39e0c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 14:56:32 compute-0 nova_compute[250018]: 2026-01-20 14:56:32.650 250022 DEBUG nova.compute.manager [None req-0fcbda40-c352-4a4d-8ce4-308ad232ab96 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: 07b956d3-9ad0-4477-9774-52336fa39e0c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 14:56:32 compute-0 nova_compute[250018]: 2026-01-20 14:56:32.769 250022 DEBUG nova.compute.manager [None req-0fcbda40-c352-4a4d-8ce4-308ad232ab96 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: 07b956d3-9ad0-4477-9774-52336fa39e0c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 14:56:32 compute-0 nova_compute[250018]: 2026-01-20 14:56:32.771 250022 DEBUG nova.virt.libvirt.driver [None req-0fcbda40-c352-4a4d-8ce4-308ad232ab96 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: 07b956d3-9ad0-4477-9774-52336fa39e0c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 14:56:32 compute-0 nova_compute[250018]: 2026-01-20 14:56:32.772 250022 INFO nova.virt.libvirt.driver [None req-0fcbda40-c352-4a4d-8ce4-308ad232ab96 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: 07b956d3-9ad0-4477-9774-52336fa39e0c] Creating image(s)
Jan 20 14:56:32 compute-0 nova_compute[250018]: 2026-01-20 14:56:32.808 250022 DEBUG nova.storage.rbd_utils [None req-0fcbda40-c352-4a4d-8ce4-308ad232ab96 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] rbd image 07b956d3-9ad0-4477-9774-52336fa39e0c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:56:32 compute-0 nova_compute[250018]: 2026-01-20 14:56:32.842 250022 DEBUG nova.storage.rbd_utils [None req-0fcbda40-c352-4a4d-8ce4-308ad232ab96 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] rbd image 07b956d3-9ad0-4477-9774-52336fa39e0c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:56:32 compute-0 sudo[327940]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:56:32 compute-0 sudo[327940]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:56:32 compute-0 sudo[327940]: pam_unix(sudo:session): session closed for user root
Jan 20 14:56:32 compute-0 nova_compute[250018]: 2026-01-20 14:56:32.873 250022 DEBUG nova.storage.rbd_utils [None req-0fcbda40-c352-4a4d-8ce4-308ad232ab96 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] rbd image 07b956d3-9ad0-4477-9774-52336fa39e0c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:56:32 compute-0 nova_compute[250018]: 2026-01-20 14:56:32.879 250022 DEBUG oslo_concurrency.processutils [None req-0fcbda40-c352-4a4d-8ce4-308ad232ab96 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:56:32 compute-0 sudo[328004]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:56:32 compute-0 sudo[328004]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:56:32 compute-0 sudo[328004]: pam_unix(sudo:session): session closed for user root
Jan 20 14:56:32 compute-0 nova_compute[250018]: 2026-01-20 14:56:32.955 250022 DEBUG nova.policy [None req-0fcbda40-c352-4a4d-8ce4-308ad232ab96 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '215db37373dc4ae5a75cbd6866f471da', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b3b1b7f5b4f84b5abbc401eb577c85c0', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 20 14:56:32 compute-0 nova_compute[250018]: 2026-01-20 14:56:32.959 250022 DEBUG oslo_concurrency.processutils [None req-0fcbda40-c352-4a4d-8ce4-308ad232ab96 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:56:32 compute-0 nova_compute[250018]: 2026-01-20 14:56:32.960 250022 DEBUG oslo_concurrency.lockutils [None req-0fcbda40-c352-4a4d-8ce4-308ad232ab96 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Acquiring lock "82d5c1918fd7c974214c7a48c1793a7a82560462" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:56:32 compute-0 nova_compute[250018]: 2026-01-20 14:56:32.960 250022 DEBUG oslo_concurrency.lockutils [None req-0fcbda40-c352-4a4d-8ce4-308ad232ab96 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:56:32 compute-0 nova_compute[250018]: 2026-01-20 14:56:32.961 250022 DEBUG oslo_concurrency.lockutils [None req-0fcbda40-c352-4a4d-8ce4-308ad232ab96 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:56:32 compute-0 nova_compute[250018]: 2026-01-20 14:56:32.986 250022 DEBUG nova.storage.rbd_utils [None req-0fcbda40-c352-4a4d-8ce4-308ad232ab96 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] rbd image 07b956d3-9ad0-4477-9774-52336fa39e0c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:56:32 compute-0 nova_compute[250018]: 2026-01-20 14:56:32.990 250022 DEBUG oslo_concurrency.processutils [None req-0fcbda40-c352-4a4d-8ce4-308ad232ab96 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 07b956d3-9ad0-4477-9774-52336fa39e0c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:56:33 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:56:33 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:56:33 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:56:33.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:56:33 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:56:33 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:56:33 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:56:33.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:56:33 compute-0 nova_compute[250018]: 2026-01-20 14:56:33.368 250022 DEBUG oslo_concurrency.processutils [None req-0fcbda40-c352-4a4d-8ce4-308ad232ab96 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 07b956d3-9ad0-4477-9774-52336fa39e0c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.379s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:56:33 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/315137130' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:56:33 compute-0 nova_compute[250018]: 2026-01-20 14:56:33.429 250022 DEBUG nova.storage.rbd_utils [None req-0fcbda40-c352-4a4d-8ce4-308ad232ab96 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] resizing rbd image 07b956d3-9ad0-4477-9774-52336fa39e0c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 20 14:56:33 compute-0 nova_compute[250018]: 2026-01-20 14:56:33.466 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:33 compute-0 nova_compute[250018]: 2026-01-20 14:56:33.540 250022 DEBUG nova.objects.instance [None req-0fcbda40-c352-4a4d-8ce4-308ad232ab96 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Lazy-loading 'migration_context' on Instance uuid 07b956d3-9ad0-4477-9774-52336fa39e0c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:56:33 compute-0 nova_compute[250018]: 2026-01-20 14:56:33.559 250022 DEBUG nova.virt.libvirt.driver [None req-0fcbda40-c352-4a4d-8ce4-308ad232ab96 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: 07b956d3-9ad0-4477-9774-52336fa39e0c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 14:56:33 compute-0 nova_compute[250018]: 2026-01-20 14:56:33.560 250022 DEBUG nova.virt.libvirt.driver [None req-0fcbda40-c352-4a4d-8ce4-308ad232ab96 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: 07b956d3-9ad0-4477-9774-52336fa39e0c] Ensure instance console log exists: /var/lib/nova/instances/07b956d3-9ad0-4477-9774-52336fa39e0c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 14:56:33 compute-0 nova_compute[250018]: 2026-01-20 14:56:33.560 250022 DEBUG oslo_concurrency.lockutils [None req-0fcbda40-c352-4a4d-8ce4-308ad232ab96 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:56:33 compute-0 nova_compute[250018]: 2026-01-20 14:56:33.560 250022 DEBUG oslo_concurrency.lockutils [None req-0fcbda40-c352-4a4d-8ce4-308ad232ab96 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:56:33 compute-0 nova_compute[250018]: 2026-01-20 14:56:33.561 250022 DEBUG oslo_concurrency.lockutils [None req-0fcbda40-c352-4a4d-8ce4-308ad232ab96 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:56:33 compute-0 nova_compute[250018]: 2026-01-20 14:56:33.896 250022 DEBUG nova.network.neutron [None req-0fcbda40-c352-4a4d-8ce4-308ad232ab96 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: 07b956d3-9ad0-4477-9774-52336fa39e0c] Successfully created port: 8c4a4338-7e14-4722-b49c-72a5d815cc76 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 20 14:56:34 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2122: 321 pgs: 321 active+clean; 331 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 912 KiB/s rd, 4.1 MiB/s wr, 139 op/s
Jan 20 14:56:34 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/863845729' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:56:34 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/863845729' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:56:34 compute-0 ceph-mon[74360]: pgmap v2122: 321 pgs: 321 active+clean; 331 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 912 KiB/s rd, 4.1 MiB/s wr, 139 op/s
Jan 20 14:56:34 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/2907952302' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:56:34 compute-0 nova_compute[250018]: 2026-01-20 14:56:34.834 250022 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768920979.8330705, ce67f778-2918-4ec5-b8d6-ab1f4346d817 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:56:34 compute-0 nova_compute[250018]: 2026-01-20 14:56:34.834 250022 INFO nova.compute.manager [-] [instance: ce67f778-2918-4ec5-b8d6-ab1f4346d817] VM Stopped (Lifecycle Event)
Jan 20 14:56:34 compute-0 nova_compute[250018]: 2026-01-20 14:56:34.858 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:34 compute-0 nova_compute[250018]: 2026-01-20 14:56:34.883 250022 DEBUG nova.compute.manager [None req-18c41156-fbca-4c55-9f4e-343dc0279569 - - - - - -] [instance: ce67f778-2918-4ec5-b8d6-ab1f4346d817] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:56:34 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:56:35 compute-0 nova_compute[250018]: 2026-01-20 14:56:35.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:56:35 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:56:35 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:56:35 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:56:35.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:56:35 compute-0 nova_compute[250018]: 2026-01-20 14:56:35.257 250022 DEBUG nova.network.neutron [None req-0fcbda40-c352-4a4d-8ce4-308ad232ab96 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: 07b956d3-9ad0-4477-9774-52336fa39e0c] Successfully updated port: 8c4a4338-7e14-4722-b49c-72a5d815cc76 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 20 14:56:35 compute-0 nova_compute[250018]: 2026-01-20 14:56:35.274 250022 DEBUG oslo_concurrency.lockutils [None req-0fcbda40-c352-4a4d-8ce4-308ad232ab96 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Acquiring lock "refresh_cache-07b956d3-9ad0-4477-9774-52336fa39e0c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:56:35 compute-0 nova_compute[250018]: 2026-01-20 14:56:35.275 250022 DEBUG oslo_concurrency.lockutils [None req-0fcbda40-c352-4a4d-8ce4-308ad232ab96 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Acquired lock "refresh_cache-07b956d3-9ad0-4477-9774-52336fa39e0c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:56:35 compute-0 nova_compute[250018]: 2026-01-20 14:56:35.275 250022 DEBUG nova.network.neutron [None req-0fcbda40-c352-4a4d-8ce4-308ad232ab96 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: 07b956d3-9ad0-4477-9774-52336fa39e0c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:56:35 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:56:35 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:56:35 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:56:35.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:56:35 compute-0 nova_compute[250018]: 2026-01-20 14:56:35.391 250022 DEBUG nova.compute.manager [req-3f1862ba-88a9-4285-96d7-d99b54de2b67 req-627c3bc1-3642-4a9e-8024-ea2146ad8970 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 07b956d3-9ad0-4477-9774-52336fa39e0c] Received event network-changed-8c4a4338-7e14-4722-b49c-72a5d815cc76 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:56:35 compute-0 nova_compute[250018]: 2026-01-20 14:56:35.391 250022 DEBUG nova.compute.manager [req-3f1862ba-88a9-4285-96d7-d99b54de2b67 req-627c3bc1-3642-4a9e-8024-ea2146ad8970 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 07b956d3-9ad0-4477-9774-52336fa39e0c] Refreshing instance network info cache due to event network-changed-8c4a4338-7e14-4722-b49c-72a5d815cc76. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:56:35 compute-0 nova_compute[250018]: 2026-01-20 14:56:35.392 250022 DEBUG oslo_concurrency.lockutils [req-3f1862ba-88a9-4285-96d7-d99b54de2b67 req-627c3bc1-3642-4a9e-8024-ea2146ad8970 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-07b956d3-9ad0-4477-9774-52336fa39e0c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:56:35 compute-0 nova_compute[250018]: 2026-01-20 14:56:35.456 250022 DEBUG nova.network.neutron [None req-0fcbda40-c352-4a4d-8ce4-308ad232ab96 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: 07b956d3-9ad0-4477-9774-52336fa39e0c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 14:56:36 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2123: 321 pgs: 321 active+clean; 361 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 276 KiB/s rd, 4.2 MiB/s wr, 109 op/s
Jan 20 14:56:36 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 20 14:56:36 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1348621708' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:56:36 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 20 14:56:36 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1348621708' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:56:36 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/2104464304' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:56:36 compute-0 ceph-mon[74360]: pgmap v2123: 321 pgs: 321 active+clean; 361 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 276 KiB/s rd, 4.2 MiB/s wr, 109 op/s
Jan 20 14:56:36 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/1348621708' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:56:36 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/1348621708' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:56:36 compute-0 nova_compute[250018]: 2026-01-20 14:56:36.795 250022 DEBUG nova.network.neutron [None req-0fcbda40-c352-4a4d-8ce4-308ad232ab96 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: 07b956d3-9ad0-4477-9774-52336fa39e0c] Updating instance_info_cache with network_info: [{"id": "8c4a4338-7e14-4722-b49c-72a5d815cc76", "address": "fa:16:3e:6b:82:58", "network": {"id": "41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1445030024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3b1b7f5b4f84b5abbc401eb577c85c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c4a4338-7e", "ovs_interfaceid": "8c4a4338-7e14-4722-b49c-72a5d815cc76", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:56:36 compute-0 nova_compute[250018]: 2026-01-20 14:56:36.820 250022 DEBUG oslo_concurrency.lockutils [None req-0fcbda40-c352-4a4d-8ce4-308ad232ab96 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Releasing lock "refresh_cache-07b956d3-9ad0-4477-9774-52336fa39e0c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:56:36 compute-0 nova_compute[250018]: 2026-01-20 14:56:36.820 250022 DEBUG nova.compute.manager [None req-0fcbda40-c352-4a4d-8ce4-308ad232ab96 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: 07b956d3-9ad0-4477-9774-52336fa39e0c] Instance network_info: |[{"id": "8c4a4338-7e14-4722-b49c-72a5d815cc76", "address": "fa:16:3e:6b:82:58", "network": {"id": "41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1445030024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3b1b7f5b4f84b5abbc401eb577c85c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c4a4338-7e", "ovs_interfaceid": "8c4a4338-7e14-4722-b49c-72a5d815cc76", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 20 14:56:36 compute-0 nova_compute[250018]: 2026-01-20 14:56:36.821 250022 DEBUG oslo_concurrency.lockutils [req-3f1862ba-88a9-4285-96d7-d99b54de2b67 req-627c3bc1-3642-4a9e-8024-ea2146ad8970 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-07b956d3-9ad0-4477-9774-52336fa39e0c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:56:36 compute-0 nova_compute[250018]: 2026-01-20 14:56:36.822 250022 DEBUG nova.network.neutron [req-3f1862ba-88a9-4285-96d7-d99b54de2b67 req-627c3bc1-3642-4a9e-8024-ea2146ad8970 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 07b956d3-9ad0-4477-9774-52336fa39e0c] Refreshing network info cache for port 8c4a4338-7e14-4722-b49c-72a5d815cc76 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:56:36 compute-0 nova_compute[250018]: 2026-01-20 14:56:36.827 250022 DEBUG nova.virt.libvirt.driver [None req-0fcbda40-c352-4a4d-8ce4-308ad232ab96 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: 07b956d3-9ad0-4477-9774-52336fa39e0c] Start _get_guest_xml network_info=[{"id": "8c4a4338-7e14-4722-b49c-72a5d815cc76", "address": "fa:16:3e:6b:82:58", "network": {"id": "41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1445030024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3b1b7f5b4f84b5abbc401eb577c85c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c4a4338-7e", "ovs_interfaceid": "8c4a4338-7e14-4722-b49c-72a5d815cc76", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encrypted': False, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'guest_format': None, 'size': 0, 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 14:56:36 compute-0 nova_compute[250018]: 2026-01-20 14:56:36.834 250022 WARNING nova.virt.libvirt.driver [None req-0fcbda40-c352-4a4d-8ce4-308ad232ab96 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:56:36 compute-0 nova_compute[250018]: 2026-01-20 14:56:36.842 250022 DEBUG nova.virt.libvirt.host [None req-0fcbda40-c352-4a4d-8ce4-308ad232ab96 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 14:56:36 compute-0 nova_compute[250018]: 2026-01-20 14:56:36.843 250022 DEBUG nova.virt.libvirt.host [None req-0fcbda40-c352-4a4d-8ce4-308ad232ab96 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 14:56:36 compute-0 nova_compute[250018]: 2026-01-20 14:56:36.846 250022 DEBUG nova.virt.libvirt.host [None req-0fcbda40-c352-4a4d-8ce4-308ad232ab96 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 14:56:36 compute-0 nova_compute[250018]: 2026-01-20 14:56:36.847 250022 DEBUG nova.virt.libvirt.host [None req-0fcbda40-c352-4a4d-8ce4-308ad232ab96 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 14:56:36 compute-0 nova_compute[250018]: 2026-01-20 14:56:36.848 250022 DEBUG nova.virt.libvirt.driver [None req-0fcbda40-c352-4a4d-8ce4-308ad232ab96 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 14:56:36 compute-0 nova_compute[250018]: 2026-01-20 14:56:36.848 250022 DEBUG nova.virt.hardware [None req-0fcbda40-c352-4a4d-8ce4-308ad232ab96 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 14:56:36 compute-0 nova_compute[250018]: 2026-01-20 14:56:36.848 250022 DEBUG nova.virt.hardware [None req-0fcbda40-c352-4a4d-8ce4-308ad232ab96 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 14:56:36 compute-0 nova_compute[250018]: 2026-01-20 14:56:36.849 250022 DEBUG nova.virt.hardware [None req-0fcbda40-c352-4a4d-8ce4-308ad232ab96 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 14:56:36 compute-0 nova_compute[250018]: 2026-01-20 14:56:36.849 250022 DEBUG nova.virt.hardware [None req-0fcbda40-c352-4a4d-8ce4-308ad232ab96 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 14:56:36 compute-0 nova_compute[250018]: 2026-01-20 14:56:36.849 250022 DEBUG nova.virt.hardware [None req-0fcbda40-c352-4a4d-8ce4-308ad232ab96 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 14:56:36 compute-0 nova_compute[250018]: 2026-01-20 14:56:36.849 250022 DEBUG nova.virt.hardware [None req-0fcbda40-c352-4a4d-8ce4-308ad232ab96 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 14:56:36 compute-0 nova_compute[250018]: 2026-01-20 14:56:36.849 250022 DEBUG nova.virt.hardware [None req-0fcbda40-c352-4a4d-8ce4-308ad232ab96 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 14:56:36 compute-0 nova_compute[250018]: 2026-01-20 14:56:36.850 250022 DEBUG nova.virt.hardware [None req-0fcbda40-c352-4a4d-8ce4-308ad232ab96 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 14:56:36 compute-0 nova_compute[250018]: 2026-01-20 14:56:36.850 250022 DEBUG nova.virt.hardware [None req-0fcbda40-c352-4a4d-8ce4-308ad232ab96 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 14:56:36 compute-0 nova_compute[250018]: 2026-01-20 14:56:36.850 250022 DEBUG nova.virt.hardware [None req-0fcbda40-c352-4a4d-8ce4-308ad232ab96 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 14:56:36 compute-0 nova_compute[250018]: 2026-01-20 14:56:36.850 250022 DEBUG nova.virt.hardware [None req-0fcbda40-c352-4a4d-8ce4-308ad232ab96 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 14:56:36 compute-0 nova_compute[250018]: 2026-01-20 14:56:36.853 250022 DEBUG oslo_concurrency.processutils [None req-0fcbda40-c352-4a4d-8ce4-308ad232ab96 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:56:37 compute-0 nova_compute[250018]: 2026-01-20 14:56:37.017 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:37 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:56:37.017 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=41, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=40) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:56:37 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:56:37.018 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 14:56:37 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:56:37 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:56:37 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:56:37.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:56:37 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:56:37 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2021551626' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:56:37 compute-0 nova_compute[250018]: 2026-01-20 14:56:37.301 250022 DEBUG oslo_concurrency.processutils [None req-0fcbda40-c352-4a4d-8ce4-308ad232ab96 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:56:37 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:56:37 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:56:37 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:56:37.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:56:37 compute-0 nova_compute[250018]: 2026-01-20 14:56:37.327 250022 DEBUG nova.storage.rbd_utils [None req-0fcbda40-c352-4a4d-8ce4-308ad232ab96 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] rbd image 07b956d3-9ad0-4477-9774-52336fa39e0c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:56:37 compute-0 nova_compute[250018]: 2026-01-20 14:56:37.331 250022 DEBUG oslo_concurrency.processutils [None req-0fcbda40-c352-4a4d-8ce4-308ad232ab96 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:56:37 compute-0 podman[328185]: 2026-01-20 14:56:37.468676608 +0000 UTC m=+0.060535180 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent)
Jan 20 14:56:37 compute-0 podman[328184]: 2026-01-20 14:56:37.498647634 +0000 UTC m=+0.091049531 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, 
tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 14:56:37 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:56:37 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2247402655' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:56:37 compute-0 nova_compute[250018]: 2026-01-20 14:56:37.764 250022 DEBUG oslo_concurrency.processutils [None req-0fcbda40-c352-4a4d-8ce4-308ad232ab96 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:56:37 compute-0 nova_compute[250018]: 2026-01-20 14:56:37.769 250022 DEBUG nova.virt.libvirt.vif [None req-0fcbda40-c352-4a4d-8ce4-308ad232ab96 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:56:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-24332407',display_name='tempest-ServerActionsTestOtherB-server-24332407',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-24332407',id=133,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b3b1b7f5b4f84b5abbc401eb577c85c0',ramdisk_id='',reservation_id='r-jftjrieb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-1136521362',owner_user_name='tempest-ServerActionsTestOt
herB-1136521362-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:56:32Z,user_data=None,user_id='215db37373dc4ae5a75cbd6866f471da',uuid=07b956d3-9ad0-4477-9774-52336fa39e0c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8c4a4338-7e14-4722-b49c-72a5d815cc76", "address": "fa:16:3e:6b:82:58", "network": {"id": "41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1445030024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3b1b7f5b4f84b5abbc401eb577c85c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c4a4338-7e", "ovs_interfaceid": "8c4a4338-7e14-4722-b49c-72a5d815cc76", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 14:56:37 compute-0 nova_compute[250018]: 2026-01-20 14:56:37.770 250022 DEBUG nova.network.os_vif_util [None req-0fcbda40-c352-4a4d-8ce4-308ad232ab96 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Converting VIF {"id": "8c4a4338-7e14-4722-b49c-72a5d815cc76", "address": "fa:16:3e:6b:82:58", "network": {"id": "41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1445030024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3b1b7f5b4f84b5abbc401eb577c85c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c4a4338-7e", "ovs_interfaceid": "8c4a4338-7e14-4722-b49c-72a5d815cc76", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:56:37 compute-0 nova_compute[250018]: 2026-01-20 14:56:37.772 250022 DEBUG nova.network.os_vif_util [None req-0fcbda40-c352-4a4d-8ce4-308ad232ab96 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6b:82:58,bridge_name='br-int',has_traffic_filtering=True,id=8c4a4338-7e14-4722-b49c-72a5d815cc76,network=Network(41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8c4a4338-7e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:56:37 compute-0 nova_compute[250018]: 2026-01-20 14:56:37.775 250022 DEBUG nova.objects.instance [None req-0fcbda40-c352-4a4d-8ce4-308ad232ab96 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Lazy-loading 'pci_devices' on Instance uuid 07b956d3-9ad0-4477-9774-52336fa39e0c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:56:37 compute-0 nova_compute[250018]: 2026-01-20 14:56:37.812 250022 DEBUG nova.virt.libvirt.driver [None req-0fcbda40-c352-4a4d-8ce4-308ad232ab96 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: 07b956d3-9ad0-4477-9774-52336fa39e0c] End _get_guest_xml xml=<domain type="kvm">
Jan 20 14:56:37 compute-0 nova_compute[250018]:   <uuid>07b956d3-9ad0-4477-9774-52336fa39e0c</uuid>
Jan 20 14:56:37 compute-0 nova_compute[250018]:   <name>instance-00000085</name>
Jan 20 14:56:37 compute-0 nova_compute[250018]:   <memory>131072</memory>
Jan 20 14:56:37 compute-0 nova_compute[250018]:   <vcpu>1</vcpu>
Jan 20 14:56:37 compute-0 nova_compute[250018]:   <metadata>
Jan 20 14:56:37 compute-0 nova_compute[250018]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 14:56:37 compute-0 nova_compute[250018]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 14:56:37 compute-0 nova_compute[250018]:       <nova:name>tempest-ServerActionsTestOtherB-server-24332407</nova:name>
Jan 20 14:56:37 compute-0 nova_compute[250018]:       <nova:creationTime>2026-01-20 14:56:36</nova:creationTime>
Jan 20 14:56:37 compute-0 nova_compute[250018]:       <nova:flavor name="m1.nano">
Jan 20 14:56:37 compute-0 nova_compute[250018]:         <nova:memory>128</nova:memory>
Jan 20 14:56:37 compute-0 nova_compute[250018]:         <nova:disk>1</nova:disk>
Jan 20 14:56:37 compute-0 nova_compute[250018]:         <nova:swap>0</nova:swap>
Jan 20 14:56:37 compute-0 nova_compute[250018]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 14:56:37 compute-0 nova_compute[250018]:         <nova:vcpus>1</nova:vcpus>
Jan 20 14:56:37 compute-0 nova_compute[250018]:       </nova:flavor>
Jan 20 14:56:37 compute-0 nova_compute[250018]:       <nova:owner>
Jan 20 14:56:37 compute-0 nova_compute[250018]:         <nova:user uuid="215db37373dc4ae5a75cbd6866f471da">tempest-ServerActionsTestOtherB-1136521362-project-member</nova:user>
Jan 20 14:56:37 compute-0 nova_compute[250018]:         <nova:project uuid="b3b1b7f5b4f84b5abbc401eb577c85c0">tempest-ServerActionsTestOtherB-1136521362</nova:project>
Jan 20 14:56:37 compute-0 nova_compute[250018]:       </nova:owner>
Jan 20 14:56:37 compute-0 nova_compute[250018]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 14:56:37 compute-0 nova_compute[250018]:       <nova:ports>
Jan 20 14:56:37 compute-0 nova_compute[250018]:         <nova:port uuid="8c4a4338-7e14-4722-b49c-72a5d815cc76">
Jan 20 14:56:37 compute-0 nova_compute[250018]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Jan 20 14:56:37 compute-0 nova_compute[250018]:         </nova:port>
Jan 20 14:56:37 compute-0 nova_compute[250018]:       </nova:ports>
Jan 20 14:56:37 compute-0 nova_compute[250018]:     </nova:instance>
Jan 20 14:56:37 compute-0 nova_compute[250018]:   </metadata>
Jan 20 14:56:37 compute-0 nova_compute[250018]:   <sysinfo type="smbios">
Jan 20 14:56:37 compute-0 nova_compute[250018]:     <system>
Jan 20 14:56:37 compute-0 nova_compute[250018]:       <entry name="manufacturer">RDO</entry>
Jan 20 14:56:37 compute-0 nova_compute[250018]:       <entry name="product">OpenStack Compute</entry>
Jan 20 14:56:37 compute-0 nova_compute[250018]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 14:56:37 compute-0 nova_compute[250018]:       <entry name="serial">07b956d3-9ad0-4477-9774-52336fa39e0c</entry>
Jan 20 14:56:37 compute-0 nova_compute[250018]:       <entry name="uuid">07b956d3-9ad0-4477-9774-52336fa39e0c</entry>
Jan 20 14:56:37 compute-0 nova_compute[250018]:       <entry name="family">Virtual Machine</entry>
Jan 20 14:56:37 compute-0 nova_compute[250018]:     </system>
Jan 20 14:56:37 compute-0 nova_compute[250018]:   </sysinfo>
Jan 20 14:56:37 compute-0 nova_compute[250018]:   <os>
Jan 20 14:56:37 compute-0 nova_compute[250018]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 14:56:37 compute-0 nova_compute[250018]:     <boot dev="hd"/>
Jan 20 14:56:37 compute-0 nova_compute[250018]:     <smbios mode="sysinfo"/>
Jan 20 14:56:37 compute-0 nova_compute[250018]:   </os>
Jan 20 14:56:37 compute-0 nova_compute[250018]:   <features>
Jan 20 14:56:37 compute-0 nova_compute[250018]:     <acpi/>
Jan 20 14:56:37 compute-0 nova_compute[250018]:     <apic/>
Jan 20 14:56:37 compute-0 nova_compute[250018]:     <vmcoreinfo/>
Jan 20 14:56:37 compute-0 nova_compute[250018]:   </features>
Jan 20 14:56:37 compute-0 nova_compute[250018]:   <clock offset="utc">
Jan 20 14:56:37 compute-0 nova_compute[250018]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 14:56:37 compute-0 nova_compute[250018]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 14:56:37 compute-0 nova_compute[250018]:     <timer name="hpet" present="no"/>
Jan 20 14:56:37 compute-0 nova_compute[250018]:   </clock>
Jan 20 14:56:37 compute-0 nova_compute[250018]:   <cpu mode="custom" match="exact">
Jan 20 14:56:37 compute-0 nova_compute[250018]:     <model>Nehalem</model>
Jan 20 14:56:37 compute-0 nova_compute[250018]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 14:56:37 compute-0 nova_compute[250018]:   </cpu>
Jan 20 14:56:37 compute-0 nova_compute[250018]:   <devices>
Jan 20 14:56:37 compute-0 nova_compute[250018]:     <disk type="network" device="disk">
Jan 20 14:56:37 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 14:56:37 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/07b956d3-9ad0-4477-9774-52336fa39e0c_disk">
Jan 20 14:56:37 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:56:37 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:56:37 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:56:37 compute-0 nova_compute[250018]:       </source>
Jan 20 14:56:37 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 14:56:37 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:56:37 compute-0 nova_compute[250018]:       </auth>
Jan 20 14:56:37 compute-0 nova_compute[250018]:       <target dev="vda" bus="virtio"/>
Jan 20 14:56:37 compute-0 nova_compute[250018]:     </disk>
Jan 20 14:56:37 compute-0 nova_compute[250018]:     <disk type="network" device="cdrom">
Jan 20 14:56:37 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 14:56:37 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/07b956d3-9ad0-4477-9774-52336fa39e0c_disk.config">
Jan 20 14:56:37 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:56:37 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:56:37 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:56:37 compute-0 nova_compute[250018]:       </source>
Jan 20 14:56:37 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 14:56:37 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:56:37 compute-0 nova_compute[250018]:       </auth>
Jan 20 14:56:37 compute-0 nova_compute[250018]:       <target dev="sda" bus="sata"/>
Jan 20 14:56:37 compute-0 nova_compute[250018]:     </disk>
Jan 20 14:56:37 compute-0 nova_compute[250018]:     <interface type="ethernet">
Jan 20 14:56:37 compute-0 nova_compute[250018]:       <mac address="fa:16:3e:6b:82:58"/>
Jan 20 14:56:37 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 14:56:37 compute-0 nova_compute[250018]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 14:56:37 compute-0 nova_compute[250018]:       <mtu size="1442"/>
Jan 20 14:56:37 compute-0 nova_compute[250018]:       <target dev="tap8c4a4338-7e"/>
Jan 20 14:56:37 compute-0 nova_compute[250018]:     </interface>
Jan 20 14:56:37 compute-0 nova_compute[250018]:     <serial type="pty">
Jan 20 14:56:37 compute-0 nova_compute[250018]:       <log file="/var/lib/nova/instances/07b956d3-9ad0-4477-9774-52336fa39e0c/console.log" append="off"/>
Jan 20 14:56:37 compute-0 nova_compute[250018]:     </serial>
Jan 20 14:56:37 compute-0 nova_compute[250018]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 14:56:37 compute-0 nova_compute[250018]:     <video>
Jan 20 14:56:37 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 14:56:37 compute-0 nova_compute[250018]:     </video>
Jan 20 14:56:37 compute-0 nova_compute[250018]:     <input type="tablet" bus="usb"/>
Jan 20 14:56:37 compute-0 nova_compute[250018]:     <rng model="virtio">
Jan 20 14:56:37 compute-0 nova_compute[250018]:       <backend model="random">/dev/urandom</backend>
Jan 20 14:56:37 compute-0 nova_compute[250018]:     </rng>
Jan 20 14:56:37 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root"/>
Jan 20 14:56:37 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:56:37 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:56:37 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:56:37 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:56:37 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:56:37 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:56:37 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:56:37 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:56:37 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:56:37 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:56:37 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:56:37 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:56:37 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:56:37 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:56:37 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:56:37 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:56:37 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:56:37 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:56:37 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:56:37 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:56:37 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:56:37 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:56:37 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:56:37 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:56:37 compute-0 nova_compute[250018]:     <controller type="usb" index="0"/>
Jan 20 14:56:37 compute-0 nova_compute[250018]:     <memballoon model="virtio">
Jan 20 14:56:37 compute-0 nova_compute[250018]:       <stats period="10"/>
Jan 20 14:56:37 compute-0 nova_compute[250018]:     </memballoon>
Jan 20 14:56:37 compute-0 nova_compute[250018]:   </devices>
Jan 20 14:56:37 compute-0 nova_compute[250018]: </domain>
Jan 20 14:56:37 compute-0 nova_compute[250018]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 14:56:37 compute-0 nova_compute[250018]: 2026-01-20 14:56:37.814 250022 DEBUG nova.compute.manager [None req-0fcbda40-c352-4a4d-8ce4-308ad232ab96 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: 07b956d3-9ad0-4477-9774-52336fa39e0c] Preparing to wait for external event network-vif-plugged-8c4a4338-7e14-4722-b49c-72a5d815cc76 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 20 14:56:37 compute-0 nova_compute[250018]: 2026-01-20 14:56:37.815 250022 DEBUG oslo_concurrency.lockutils [None req-0fcbda40-c352-4a4d-8ce4-308ad232ab96 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Acquiring lock "07b956d3-9ad0-4477-9774-52336fa39e0c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:56:37 compute-0 nova_compute[250018]: 2026-01-20 14:56:37.815 250022 DEBUG oslo_concurrency.lockutils [None req-0fcbda40-c352-4a4d-8ce4-308ad232ab96 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Lock "07b956d3-9ad0-4477-9774-52336fa39e0c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:56:37 compute-0 nova_compute[250018]: 2026-01-20 14:56:37.815 250022 DEBUG oslo_concurrency.lockutils [None req-0fcbda40-c352-4a4d-8ce4-308ad232ab96 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Lock "07b956d3-9ad0-4477-9774-52336fa39e0c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:56:37 compute-0 nova_compute[250018]: 2026-01-20 14:56:37.816 250022 DEBUG nova.virt.libvirt.vif [None req-0fcbda40-c352-4a4d-8ce4-308ad232ab96 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:56:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-24332407',display_name='tempest-ServerActionsTestOtherB-server-24332407',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-24332407',id=133,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b3b1b7f5b4f84b5abbc401eb577c85c0',ramdisk_id='',reservation_id='r-jftjrieb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-1136521362',owner_user_name='tempest-ServerActionsTestOtherB-1136521362-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:56:32Z,user_data=None,user_id='215db37373dc4ae5a75cbd6866f471da',uuid=07b956d3-9ad0-4477-9774-52336fa39e0c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8c4a4338-7e14-4722-b49c-72a5d815cc76", "address": "fa:16:3e:6b:82:58", "network": {"id": "41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1445030024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3b1b7f5b4f84b5abbc401eb577c85c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c4a4338-7e", "ovs_interfaceid": "8c4a4338-7e14-4722-b49c-72a5d815cc76", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 14:56:37 compute-0 nova_compute[250018]: 2026-01-20 14:56:37.816 250022 DEBUG nova.network.os_vif_util [None req-0fcbda40-c352-4a4d-8ce4-308ad232ab96 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Converting VIF {"id": "8c4a4338-7e14-4722-b49c-72a5d815cc76", "address": "fa:16:3e:6b:82:58", "network": {"id": "41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1445030024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3b1b7f5b4f84b5abbc401eb577c85c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c4a4338-7e", "ovs_interfaceid": "8c4a4338-7e14-4722-b49c-72a5d815cc76", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:56:37 compute-0 nova_compute[250018]: 2026-01-20 14:56:37.817 250022 DEBUG nova.network.os_vif_util [None req-0fcbda40-c352-4a4d-8ce4-308ad232ab96 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6b:82:58,bridge_name='br-int',has_traffic_filtering=True,id=8c4a4338-7e14-4722-b49c-72a5d815cc76,network=Network(41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8c4a4338-7e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:56:37 compute-0 nova_compute[250018]: 2026-01-20 14:56:37.817 250022 DEBUG os_vif [None req-0fcbda40-c352-4a4d-8ce4-308ad232ab96 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6b:82:58,bridge_name='br-int',has_traffic_filtering=True,id=8c4a4338-7e14-4722-b49c-72a5d815cc76,network=Network(41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8c4a4338-7e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 14:56:37 compute-0 nova_compute[250018]: 2026-01-20 14:56:37.818 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:37 compute-0 nova_compute[250018]: 2026-01-20 14:56:37.818 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:56:37 compute-0 nova_compute[250018]: 2026-01-20 14:56:37.819 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:56:37 compute-0 nova_compute[250018]: 2026-01-20 14:56:37.823 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:37 compute-0 nova_compute[250018]: 2026-01-20 14:56:37.824 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8c4a4338-7e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:56:37 compute-0 nova_compute[250018]: 2026-01-20 14:56:37.825 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8c4a4338-7e, col_values=(('external_ids', {'iface-id': '8c4a4338-7e14-4722-b49c-72a5d815cc76', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6b:82:58', 'vm-uuid': '07b956d3-9ad0-4477-9774-52336fa39e0c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:56:37 compute-0 NetworkManager[48960]: <info>  [1768920997.8661] manager: (tap8c4a4338-7e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/239)
Jan 20 14:56:37 compute-0 nova_compute[250018]: 2026-01-20 14:56:37.865 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:37 compute-0 nova_compute[250018]: 2026-01-20 14:56:37.869 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:56:37 compute-0 nova_compute[250018]: 2026-01-20 14:56:37.872 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:37 compute-0 nova_compute[250018]: 2026-01-20 14:56:37.872 250022 INFO os_vif [None req-0fcbda40-c352-4a4d-8ce4-308ad232ab96 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6b:82:58,bridge_name='br-int',has_traffic_filtering=True,id=8c4a4338-7e14-4722-b49c-72a5d815cc76,network=Network(41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8c4a4338-7e')
Jan 20 14:56:37 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/2021551626' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:56:37 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/2247402655' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:56:37 compute-0 nova_compute[250018]: 2026-01-20 14:56:37.930 250022 DEBUG nova.virt.libvirt.driver [None req-0fcbda40-c352-4a4d-8ce4-308ad232ab96 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:56:37 compute-0 nova_compute[250018]: 2026-01-20 14:56:37.930 250022 DEBUG nova.virt.libvirt.driver [None req-0fcbda40-c352-4a4d-8ce4-308ad232ab96 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:56:37 compute-0 nova_compute[250018]: 2026-01-20 14:56:37.930 250022 DEBUG nova.virt.libvirt.driver [None req-0fcbda40-c352-4a4d-8ce4-308ad232ab96 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] No VIF found with MAC fa:16:3e:6b:82:58, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 14:56:37 compute-0 nova_compute[250018]: 2026-01-20 14:56:37.931 250022 INFO nova.virt.libvirt.driver [None req-0fcbda40-c352-4a4d-8ce4-308ad232ab96 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: 07b956d3-9ad0-4477-9774-52336fa39e0c] Using config drive
Jan 20 14:56:37 compute-0 nova_compute[250018]: 2026-01-20 14:56:37.972 250022 DEBUG nova.storage.rbd_utils [None req-0fcbda40-c352-4a4d-8ce4-308ad232ab96 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] rbd image 07b956d3-9ad0-4477-9774-52336fa39e0c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:56:38 compute-0 nova_compute[250018]: 2026-01-20 14:56:38.059 250022 DEBUG nova.network.neutron [req-3f1862ba-88a9-4285-96d7-d99b54de2b67 req-627c3bc1-3642-4a9e-8024-ea2146ad8970 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 07b956d3-9ad0-4477-9774-52336fa39e0c] Updated VIF entry in instance network info cache for port 8c4a4338-7e14-4722-b49c-72a5d815cc76. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 14:56:38 compute-0 nova_compute[250018]: 2026-01-20 14:56:38.060 250022 DEBUG nova.network.neutron [req-3f1862ba-88a9-4285-96d7-d99b54de2b67 req-627c3bc1-3642-4a9e-8024-ea2146ad8970 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 07b956d3-9ad0-4477-9774-52336fa39e0c] Updating instance_info_cache with network_info: [{"id": "8c4a4338-7e14-4722-b49c-72a5d815cc76", "address": "fa:16:3e:6b:82:58", "network": {"id": "41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1445030024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3b1b7f5b4f84b5abbc401eb577c85c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c4a4338-7e", "ovs_interfaceid": "8c4a4338-7e14-4722-b49c-72a5d815cc76", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:56:38 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2124: 321 pgs: 321 active+clean; 372 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 77 KiB/s rd, 3.6 MiB/s wr, 80 op/s
Jan 20 14:56:38 compute-0 nova_compute[250018]: 2026-01-20 14:56:38.084 250022 DEBUG oslo_concurrency.lockutils [req-3f1862ba-88a9-4285-96d7-d99b54de2b67 req-627c3bc1-3642-4a9e-8024-ea2146ad8970 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-07b956d3-9ad0-4477-9774-52336fa39e0c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:56:38 compute-0 nova_compute[250018]: 2026-01-20 14:56:38.313 250022 INFO nova.virt.libvirt.driver [None req-0fcbda40-c352-4a4d-8ce4-308ad232ab96 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: 07b956d3-9ad0-4477-9774-52336fa39e0c] Creating config drive at /var/lib/nova/instances/07b956d3-9ad0-4477-9774-52336fa39e0c/disk.config
Jan 20 14:56:38 compute-0 nova_compute[250018]: 2026-01-20 14:56:38.318 250022 DEBUG oslo_concurrency.processutils [None req-0fcbda40-c352-4a4d-8ce4-308ad232ab96 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/07b956d3-9ad0-4477-9774-52336fa39e0c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpekdwk8zh execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:56:38 compute-0 nova_compute[250018]: 2026-01-20 14:56:38.440 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:38 compute-0 nova_compute[250018]: 2026-01-20 14:56:38.451 250022 DEBUG oslo_concurrency.processutils [None req-0fcbda40-c352-4a4d-8ce4-308ad232ab96 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/07b956d3-9ad0-4477-9774-52336fa39e0c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpekdwk8zh" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:56:38 compute-0 nova_compute[250018]: 2026-01-20 14:56:38.486 250022 DEBUG nova.storage.rbd_utils [None req-0fcbda40-c352-4a4d-8ce4-308ad232ab96 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] rbd image 07b956d3-9ad0-4477-9774-52336fa39e0c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:56:38 compute-0 nova_compute[250018]: 2026-01-20 14:56:38.491 250022 DEBUG oslo_concurrency.processutils [None req-0fcbda40-c352-4a4d-8ce4-308ad232ab96 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/07b956d3-9ad0-4477-9774-52336fa39e0c/disk.config 07b956d3-9ad0-4477-9774-52336fa39e0c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:56:38 compute-0 nova_compute[250018]: 2026-01-20 14:56:38.728 250022 DEBUG oslo_concurrency.processutils [None req-0fcbda40-c352-4a4d-8ce4-308ad232ab96 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/07b956d3-9ad0-4477-9774-52336fa39e0c/disk.config 07b956d3-9ad0-4477-9774-52336fa39e0c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.236s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:56:38 compute-0 nova_compute[250018]: 2026-01-20 14:56:38.729 250022 INFO nova.virt.libvirt.driver [None req-0fcbda40-c352-4a4d-8ce4-308ad232ab96 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: 07b956d3-9ad0-4477-9774-52336fa39e0c] Deleting local config drive /var/lib/nova/instances/07b956d3-9ad0-4477-9774-52336fa39e0c/disk.config because it was imported into RBD.
Jan 20 14:56:38 compute-0 kernel: tap8c4a4338-7e: entered promiscuous mode
Jan 20 14:56:38 compute-0 nova_compute[250018]: 2026-01-20 14:56:38.778 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:38 compute-0 ovn_controller[148666]: 2026-01-20T14:56:38Z|00480|binding|INFO|Claiming lport 8c4a4338-7e14-4722-b49c-72a5d815cc76 for this chassis.
Jan 20 14:56:38 compute-0 ovn_controller[148666]: 2026-01-20T14:56:38Z|00481|binding|INFO|8c4a4338-7e14-4722-b49c-72a5d815cc76: Claiming fa:16:3e:6b:82:58 10.100.0.3
Jan 20 14:56:38 compute-0 NetworkManager[48960]: <info>  [1768920998.7806] manager: (tap8c4a4338-7e): new Tun device (/org/freedesktop/NetworkManager/Devices/240)
Jan 20 14:56:38 compute-0 NetworkManager[48960]: <info>  [1768920998.7891] manager: (patch-br-int-to-provnet-b62c391b-f7a3-4a38-a0df-72ac0383ca74): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/241)
Jan 20 14:56:38 compute-0 nova_compute[250018]: 2026-01-20 14:56:38.788 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:38 compute-0 NetworkManager[48960]: <info>  [1768920998.7897] manager: (patch-provnet-b62c391b-f7a3-4a38-a0df-72ac0383ca74-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/242)
Jan 20 14:56:38 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:56:38.803 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6b:82:58 10.100.0.3'], port_security=['fa:16:3e:6b:82:58 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '07b956d3-9ad0-4477-9774-52336fa39e0c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b3b1b7f5b4f84b5abbc401eb577c85c0', 'neutron:revision_number': '2', 'neutron:security_group_ids': '800ce09e-d4c4-4be1-b862-b09f6926701e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3273589e-5585-406c-9611-87f758b0e521, chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=8c4a4338-7e14-4722-b49c-72a5d815cc76) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:56:38 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:56:38.805 160071 INFO neutron.agent.ovn.metadata.agent [-] Port 8c4a4338-7e14-4722-b49c-72a5d815cc76 in datapath 41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce bound to our chassis
Jan 20 14:56:38 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:56:38.806 160071 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce
Jan 20 14:56:38 compute-0 systemd-udevd[328325]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:56:38 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:56:38.822 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[acbc96da-2f54-4cd1-b7eb-7572e23ddf73]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:56:38 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:56:38.824 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap41a1a3fe-f1 in ovnmeta-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 14:56:38 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:56:38.825 257604 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap41a1a3fe-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 14:56:38 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:56:38.826 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[8f3fe1a7-abff-4235-9e82-7d029578bfc1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:56:38 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:56:38.828 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[0c7c1e92-6c9c-401c-95d7-d852c2d0128f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:56:38 compute-0 systemd-machined[216401]: New machine qemu-63-instance-00000085.
Jan 20 14:56:38 compute-0 NetworkManager[48960]: <info>  [1768920998.8404] device (tap8c4a4338-7e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 14:56:38 compute-0 NetworkManager[48960]: <info>  [1768920998.8411] device (tap8c4a4338-7e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 14:56:38 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:56:38.844 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[7d2310f9-9f60-4a08-b84f-9c3dc0dfa9ce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:56:38 compute-0 systemd[1]: Started Virtual Machine qemu-63-instance-00000085.
Jan 20 14:56:38 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:56:38.873 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[29dec690-c7a4-4f74-8051-6311740d344a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:56:38 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/3611847006' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:56:38 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:56:38.911 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[881be236-6c66-4022-aa59-b10e70911cfa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:56:38 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/3611847006' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:56:38 compute-0 ceph-mon[74360]: pgmap v2124: 321 pgs: 321 active+clean; 372 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 77 KiB/s rd, 3.6 MiB/s wr, 80 op/s
Jan 20 14:56:38 compute-0 NetworkManager[48960]: <info>  [1768920998.9213] manager: (tap41a1a3fe-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/243)
Jan 20 14:56:38 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:56:38.920 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[baa85308-c333-42ea-ae58-613da163344e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:56:38 compute-0 ovn_controller[148666]: 2026-01-20T14:56:38Z|00482|binding|INFO|Releasing lport 8c6fd3ab-70a8-4e63-99de-f2e15ac0207f from this chassis (sb_readonly=0)
Jan 20 14:56:38 compute-0 nova_compute[250018]: 2026-01-20 14:56:38.967 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:38 compute-0 ovn_controller[148666]: 2026-01-20T14:56:38Z|00483|binding|INFO|Releasing lport 8c6fd3ab-70a8-4e63-99de-f2e15ac0207f from this chassis (sb_readonly=0)
Jan 20 14:56:38 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:56:38.975 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[0783f15e-77d5-4752-a6cf-c73d32717e5f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:56:38 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:56:38.978 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[1bd81bc3-1fe9-4498-be67-5a5d69de1789]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:56:38 compute-0 ovn_controller[148666]: 2026-01-20T14:56:38Z|00484|binding|INFO|Setting lport 8c4a4338-7e14-4722-b49c-72a5d815cc76 up in Southbound
Jan 20 14:56:38 compute-0 nova_compute[250018]: 2026-01-20 14:56:38.983 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:38 compute-0 ovn_controller[148666]: 2026-01-20T14:56:38Z|00485|binding|INFO|Setting lport 8c4a4338-7e14-4722-b49c-72a5d815cc76 ovn-installed in OVS
Jan 20 14:56:38 compute-0 nova_compute[250018]: 2026-01-20 14:56:38.985 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:39 compute-0 NetworkManager[48960]: <info>  [1768920999.0025] device (tap41a1a3fe-f0): carrier: link connected
Jan 20 14:56:39 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:56:39.009 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[c590089b-fef2-4afd-9184-388833f72ee3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:56:39 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:56:39.027 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[09e96d52-c738-4036-ac9b-f2b3d0adb486]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap41a1a3fe-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3c:1f:b5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 159], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 693972, 'reachable_time': 42821, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 328358, 'error': None, 'target': 'ovnmeta-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:56:39 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:56:39.043 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[e5f3f0c7-5e1a-407f-88f8-f0acb2a11445]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe3c:1fb5'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 693972, 'tstamp': 693972}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 328359, 'error': None, 'target': 'ovnmeta-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:56:39 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:56:39.058 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[f3bf6d33-d831-4fb7-93b6-3becf564fcfc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap41a1a3fe-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3c:1f:b5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 159], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 693972, 'reachable_time': 42821, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 328360, 'error': None, 'target': 'ovnmeta-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:56:39 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:56:39 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:56:39 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:56:39.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:56:39 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:56:39.089 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[24d7c651-7c90-4106-bb21-d3132acec0f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:56:39 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:56:39.148 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[c8d86f2e-7cdc-421c-8647-bcec3dba332b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:56:39 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:56:39.149 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap41a1a3fe-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:56:39 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:56:39.150 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:56:39 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:56:39.150 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap41a1a3fe-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:56:39 compute-0 nova_compute[250018]: 2026-01-20 14:56:39.151 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:39 compute-0 kernel: tap41a1a3fe-f0: entered promiscuous mode
Jan 20 14:56:39 compute-0 NetworkManager[48960]: <info>  [1768920999.1534] manager: (tap41a1a3fe-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/244)
Jan 20 14:56:39 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:56:39.156 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap41a1a3fe-f0, col_values=(('external_ids', {'iface-id': '3fa2df7b-42b2-4a3b-a33b-ab37b5d6aef3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:56:39 compute-0 nova_compute[250018]: 2026-01-20 14:56:39.157 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:39 compute-0 ovn_controller[148666]: 2026-01-20T14:56:39Z|00486|binding|INFO|Releasing lport 3fa2df7b-42b2-4a3b-a33b-ab37b5d6aef3 from this chassis (sb_readonly=0)
Jan 20 14:56:39 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:56:39.158 160071 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 14:56:39 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:56:39.159 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[89a23950-7b31-4bfd-b054-55593abdc957]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:56:39 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:56:39.159 160071 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 14:56:39 compute-0 ovn_metadata_agent[160049]: global
Jan 20 14:56:39 compute-0 ovn_metadata_agent[160049]:     log         /dev/log local0 debug
Jan 20 14:56:39 compute-0 ovn_metadata_agent[160049]:     log-tag     haproxy-metadata-proxy-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce
Jan 20 14:56:39 compute-0 ovn_metadata_agent[160049]:     user        root
Jan 20 14:56:39 compute-0 ovn_metadata_agent[160049]:     group       root
Jan 20 14:56:39 compute-0 ovn_metadata_agent[160049]:     maxconn     1024
Jan 20 14:56:39 compute-0 ovn_metadata_agent[160049]:     pidfile     /var/lib/neutron/external/pids/41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce.pid.haproxy
Jan 20 14:56:39 compute-0 ovn_metadata_agent[160049]:     daemon
Jan 20 14:56:39 compute-0 ovn_metadata_agent[160049]: 
Jan 20 14:56:39 compute-0 ovn_metadata_agent[160049]: defaults
Jan 20 14:56:39 compute-0 ovn_metadata_agent[160049]:     log global
Jan 20 14:56:39 compute-0 ovn_metadata_agent[160049]:     mode http
Jan 20 14:56:39 compute-0 ovn_metadata_agent[160049]:     option httplog
Jan 20 14:56:39 compute-0 ovn_metadata_agent[160049]:     option dontlognull
Jan 20 14:56:39 compute-0 ovn_metadata_agent[160049]:     option http-server-close
Jan 20 14:56:39 compute-0 ovn_metadata_agent[160049]:     option forwardfor
Jan 20 14:56:39 compute-0 ovn_metadata_agent[160049]:     retries                 3
Jan 20 14:56:39 compute-0 ovn_metadata_agent[160049]:     timeout http-request    30s
Jan 20 14:56:39 compute-0 ovn_metadata_agent[160049]:     timeout connect         30s
Jan 20 14:56:39 compute-0 ovn_metadata_agent[160049]:     timeout client          32s
Jan 20 14:56:39 compute-0 ovn_metadata_agent[160049]:     timeout server          32s
Jan 20 14:56:39 compute-0 ovn_metadata_agent[160049]:     timeout http-keep-alive 30s
Jan 20 14:56:39 compute-0 ovn_metadata_agent[160049]: 
Jan 20 14:56:39 compute-0 ovn_metadata_agent[160049]: 
Jan 20 14:56:39 compute-0 ovn_metadata_agent[160049]: listen listener
Jan 20 14:56:39 compute-0 ovn_metadata_agent[160049]:     bind 169.254.169.254:80
Jan 20 14:56:39 compute-0 ovn_metadata_agent[160049]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 14:56:39 compute-0 ovn_metadata_agent[160049]:     http-request add-header X-OVN-Network-ID 41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce
Jan 20 14:56:39 compute-0 ovn_metadata_agent[160049]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 14:56:39 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:56:39.160 160071 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce', 'env', 'PROCESS_TAG=haproxy-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 14:56:39 compute-0 nova_compute[250018]: 2026-01-20 14:56:39.174 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:39 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:56:39 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:56:39 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:56:39.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:56:39 compute-0 podman[328391]: 2026-01-20 14:56:39.511372791 +0000 UTC m=+0.052068422 container create 72ca2ab86a0c1cb4febc031102b0b93498204d00c8487277d9f2883b7129a375 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3)
Jan 20 14:56:39 compute-0 systemd[1]: Started libpod-conmon-72ca2ab86a0c1cb4febc031102b0b93498204d00c8487277d9f2883b7129a375.scope.
Jan 20 14:56:39 compute-0 podman[328391]: 2026-01-20 14:56:39.485170125 +0000 UTC m=+0.025865766 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 14:56:39 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:56:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19119de2701906b083a86d1679e1762bc66f3a44882cf5c68d850c4656310183/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 14:56:39 compute-0 podman[328391]: 2026-01-20 14:56:39.599553674 +0000 UTC m=+0.140249295 container init 72ca2ab86a0c1cb4febc031102b0b93498204d00c8487277d9f2883b7129a375 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 20 14:56:39 compute-0 podman[328391]: 2026-01-20 14:56:39.606955073 +0000 UTC m=+0.147650704 container start 72ca2ab86a0c1cb4febc031102b0b93498204d00c8487277d9f2883b7129a375 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 20 14:56:39 compute-0 neutron-haproxy-ovnmeta-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce[328407]: [NOTICE]   (328411) : New worker (328413) forked
Jan 20 14:56:39 compute-0 neutron-haproxy-ovnmeta-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce[328407]: [NOTICE]   (328411) : Loading success.
Jan 20 14:56:39 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:56:40 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2125: 321 pgs: 321 active+clean; 372 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 194 KiB/s rd, 3.6 MiB/s wr, 97 op/s
Jan 20 14:56:40 compute-0 nova_compute[250018]: 2026-01-20 14:56:40.173 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768921000.17329, 07b956d3-9ad0-4477-9774-52336fa39e0c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:56:40 compute-0 nova_compute[250018]: 2026-01-20 14:56:40.174 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 07b956d3-9ad0-4477-9774-52336fa39e0c] VM Started (Lifecycle Event)
Jan 20 14:56:40 compute-0 nova_compute[250018]: 2026-01-20 14:56:40.191 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 07b956d3-9ad0-4477-9774-52336fa39e0c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:56:40 compute-0 nova_compute[250018]: 2026-01-20 14:56:40.195 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768921000.1744473, 07b956d3-9ad0-4477-9774-52336fa39e0c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:56:40 compute-0 nova_compute[250018]: 2026-01-20 14:56:40.196 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 07b956d3-9ad0-4477-9774-52336fa39e0c] VM Paused (Lifecycle Event)
Jan 20 14:56:40 compute-0 nova_compute[250018]: 2026-01-20 14:56:40.218 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 07b956d3-9ad0-4477-9774-52336fa39e0c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:56:40 compute-0 nova_compute[250018]: 2026-01-20 14:56:40.223 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 07b956d3-9ad0-4477-9774-52336fa39e0c] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:56:40 compute-0 nova_compute[250018]: 2026-01-20 14:56:40.256 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 07b956d3-9ad0-4477-9774-52336fa39e0c] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:56:41 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:56:41 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:56:41 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:56:41.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:56:41 compute-0 ceph-mon[74360]: pgmap v2125: 321 pgs: 321 active+clean; 372 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 194 KiB/s rd, 3.6 MiB/s wr, 97 op/s
Jan 20 14:56:41 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:56:41 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:56:41 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:56:41.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:56:42 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2126: 321 pgs: 321 active+clean; 372 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 3.4 MiB/s wr, 141 op/s
Jan 20 14:56:42 compute-0 nova_compute[250018]: 2026-01-20 14:56:42.775 250022 DEBUG nova.compute.manager [req-20db46f6-886c-4568-9f28-74b6ef00c24a req-140bc67f-4e78-4fda-a54d-06f42ca71bbb 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 07b956d3-9ad0-4477-9774-52336fa39e0c] Received event network-vif-plugged-8c4a4338-7e14-4722-b49c-72a5d815cc76 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:56:42 compute-0 nova_compute[250018]: 2026-01-20 14:56:42.776 250022 DEBUG oslo_concurrency.lockutils [req-20db46f6-886c-4568-9f28-74b6ef00c24a req-140bc67f-4e78-4fda-a54d-06f42ca71bbb 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "07b956d3-9ad0-4477-9774-52336fa39e0c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:56:42 compute-0 nova_compute[250018]: 2026-01-20 14:56:42.776 250022 DEBUG oslo_concurrency.lockutils [req-20db46f6-886c-4568-9f28-74b6ef00c24a req-140bc67f-4e78-4fda-a54d-06f42ca71bbb 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "07b956d3-9ad0-4477-9774-52336fa39e0c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:56:42 compute-0 nova_compute[250018]: 2026-01-20 14:56:42.777 250022 DEBUG oslo_concurrency.lockutils [req-20db46f6-886c-4568-9f28-74b6ef00c24a req-140bc67f-4e78-4fda-a54d-06f42ca71bbb 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "07b956d3-9ad0-4477-9774-52336fa39e0c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:56:42 compute-0 nova_compute[250018]: 2026-01-20 14:56:42.777 250022 DEBUG nova.compute.manager [req-20db46f6-886c-4568-9f28-74b6ef00c24a req-140bc67f-4e78-4fda-a54d-06f42ca71bbb 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 07b956d3-9ad0-4477-9774-52336fa39e0c] Processing event network-vif-plugged-8c4a4338-7e14-4722-b49c-72a5d815cc76 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 20 14:56:42 compute-0 nova_compute[250018]: 2026-01-20 14:56:42.778 250022 DEBUG nova.compute.manager [None req-0fcbda40-c352-4a4d-8ce4-308ad232ab96 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: 07b956d3-9ad0-4477-9774-52336fa39e0c] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 14:56:42 compute-0 nova_compute[250018]: 2026-01-20 14:56:42.781 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768921002.7815993, 07b956d3-9ad0-4477-9774-52336fa39e0c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:56:42 compute-0 nova_compute[250018]: 2026-01-20 14:56:42.782 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 07b956d3-9ad0-4477-9774-52336fa39e0c] VM Resumed (Lifecycle Event)
Jan 20 14:56:42 compute-0 nova_compute[250018]: 2026-01-20 14:56:42.783 250022 DEBUG nova.virt.libvirt.driver [None req-0fcbda40-c352-4a4d-8ce4-308ad232ab96 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: 07b956d3-9ad0-4477-9774-52336fa39e0c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 14:56:42 compute-0 nova_compute[250018]: 2026-01-20 14:56:42.787 250022 INFO nova.virt.libvirt.driver [-] [instance: 07b956d3-9ad0-4477-9774-52336fa39e0c] Instance spawned successfully.
Jan 20 14:56:42 compute-0 nova_compute[250018]: 2026-01-20 14:56:42.787 250022 DEBUG nova.virt.libvirt.driver [None req-0fcbda40-c352-4a4d-8ce4-308ad232ab96 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: 07b956d3-9ad0-4477-9774-52336fa39e0c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 14:56:42 compute-0 nova_compute[250018]: 2026-01-20 14:56:42.819 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 07b956d3-9ad0-4477-9774-52336fa39e0c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:56:42 compute-0 nova_compute[250018]: 2026-01-20 14:56:42.826 250022 DEBUG nova.virt.libvirt.driver [None req-0fcbda40-c352-4a4d-8ce4-308ad232ab96 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: 07b956d3-9ad0-4477-9774-52336fa39e0c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:56:42 compute-0 nova_compute[250018]: 2026-01-20 14:56:42.827 250022 DEBUG nova.virt.libvirt.driver [None req-0fcbda40-c352-4a4d-8ce4-308ad232ab96 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: 07b956d3-9ad0-4477-9774-52336fa39e0c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:56:42 compute-0 nova_compute[250018]: 2026-01-20 14:56:42.828 250022 DEBUG nova.virt.libvirt.driver [None req-0fcbda40-c352-4a4d-8ce4-308ad232ab96 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: 07b956d3-9ad0-4477-9774-52336fa39e0c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:56:42 compute-0 nova_compute[250018]: 2026-01-20 14:56:42.828 250022 DEBUG nova.virt.libvirt.driver [None req-0fcbda40-c352-4a4d-8ce4-308ad232ab96 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: 07b956d3-9ad0-4477-9774-52336fa39e0c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:56:42 compute-0 nova_compute[250018]: 2026-01-20 14:56:42.829 250022 DEBUG nova.virt.libvirt.driver [None req-0fcbda40-c352-4a4d-8ce4-308ad232ab96 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: 07b956d3-9ad0-4477-9774-52336fa39e0c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:56:42 compute-0 nova_compute[250018]: 2026-01-20 14:56:42.829 250022 DEBUG nova.virt.libvirt.driver [None req-0fcbda40-c352-4a4d-8ce4-308ad232ab96 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: 07b956d3-9ad0-4477-9774-52336fa39e0c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:56:42 compute-0 ceph-mon[74360]: pgmap v2126: 321 pgs: 321 active+clean; 372 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 3.4 MiB/s wr, 141 op/s
Jan 20 14:56:42 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/1147169689' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:56:42 compute-0 nova_compute[250018]: 2026-01-20 14:56:42.835 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 07b956d3-9ad0-4477-9774-52336fa39e0c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:56:42 compute-0 nova_compute[250018]: 2026-01-20 14:56:42.865 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 07b956d3-9ad0-4477-9774-52336fa39e0c] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:56:42 compute-0 nova_compute[250018]: 2026-01-20 14:56:42.866 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:42 compute-0 nova_compute[250018]: 2026-01-20 14:56:42.894 250022 INFO nova.compute.manager [None req-0fcbda40-c352-4a4d-8ce4-308ad232ab96 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: 07b956d3-9ad0-4477-9774-52336fa39e0c] Took 10.12 seconds to spawn the instance on the hypervisor.
Jan 20 14:56:42 compute-0 nova_compute[250018]: 2026-01-20 14:56:42.895 250022 DEBUG nova.compute.manager [None req-0fcbda40-c352-4a4d-8ce4-308ad232ab96 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: 07b956d3-9ad0-4477-9774-52336fa39e0c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:56:42 compute-0 nova_compute[250018]: 2026-01-20 14:56:42.984 250022 INFO nova.compute.manager [None req-0fcbda40-c352-4a4d-8ce4-308ad232ab96 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: 07b956d3-9ad0-4477-9774-52336fa39e0c] Took 11.17 seconds to build instance.
Jan 20 14:56:43 compute-0 nova_compute[250018]: 2026-01-20 14:56:43.004 250022 DEBUG oslo_concurrency.lockutils [None req-0fcbda40-c352-4a4d-8ce4-308ad232ab96 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Lock "07b956d3-9ad0-4477-9774-52336fa39e0c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.278s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:56:43 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:56:43.020 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=367c1a2c-b16a-4828-ab5a-626bb50023b4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '41'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:56:43 compute-0 nova_compute[250018]: 2026-01-20 14:56:43.045 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:56:43 compute-0 nova_compute[250018]: 2026-01-20 14:56:43.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:56:43 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:56:43 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:56:43 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:56:43.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:56:43 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:56:43 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:56:43 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:56:43.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:56:43 compute-0 nova_compute[250018]: 2026-01-20 14:56:43.447 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:43 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/394105736' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:56:44 compute-0 nova_compute[250018]: 2026-01-20 14:56:44.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:56:44 compute-0 nova_compute[250018]: 2026-01-20 14:56:44.050 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 14:56:44 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2127: 321 pgs: 321 active+clean; 397 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 4.2 MiB/s wr, 159 op/s
Jan 20 14:56:44 compute-0 nova_compute[250018]: 2026-01-20 14:56:44.843 250022 DEBUG nova.compute.manager [req-31f75d18-86a5-4dbc-b3e2-229ea09dac40 req-29b54029-ddcc-4b3d-a8b5-fd395b9c1951 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 07b956d3-9ad0-4477-9774-52336fa39e0c] Received event network-vif-plugged-8c4a4338-7e14-4722-b49c-72a5d815cc76 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:56:44 compute-0 nova_compute[250018]: 2026-01-20 14:56:44.843 250022 DEBUG oslo_concurrency.lockutils [req-31f75d18-86a5-4dbc-b3e2-229ea09dac40 req-29b54029-ddcc-4b3d-a8b5-fd395b9c1951 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "07b956d3-9ad0-4477-9774-52336fa39e0c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:56:44 compute-0 nova_compute[250018]: 2026-01-20 14:56:44.844 250022 DEBUG oslo_concurrency.lockutils [req-31f75d18-86a5-4dbc-b3e2-229ea09dac40 req-29b54029-ddcc-4b3d-a8b5-fd395b9c1951 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "07b956d3-9ad0-4477-9774-52336fa39e0c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:56:44 compute-0 nova_compute[250018]: 2026-01-20 14:56:44.844 250022 DEBUG oslo_concurrency.lockutils [req-31f75d18-86a5-4dbc-b3e2-229ea09dac40 req-29b54029-ddcc-4b3d-a8b5-fd395b9c1951 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "07b956d3-9ad0-4477-9774-52336fa39e0c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:56:44 compute-0 nova_compute[250018]: 2026-01-20 14:56:44.844 250022 DEBUG nova.compute.manager [req-31f75d18-86a5-4dbc-b3e2-229ea09dac40 req-29b54029-ddcc-4b3d-a8b5-fd395b9c1951 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 07b956d3-9ad0-4477-9774-52336fa39e0c] No waiting events found dispatching network-vif-plugged-8c4a4338-7e14-4722-b49c-72a5d815cc76 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:56:44 compute-0 nova_compute[250018]: 2026-01-20 14:56:44.844 250022 WARNING nova.compute.manager [req-31f75d18-86a5-4dbc-b3e2-229ea09dac40 req-29b54029-ddcc-4b3d-a8b5-fd395b9c1951 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 07b956d3-9ad0-4477-9774-52336fa39e0c] Received unexpected event network-vif-plugged-8c4a4338-7e14-4722-b49c-72a5d815cc76 for instance with vm_state active and task_state None.
Jan 20 14:56:44 compute-0 ceph-mon[74360]: pgmap v2127: 321 pgs: 321 active+clean; 397 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 4.2 MiB/s wr, 159 op/s
Jan 20 14:56:44 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:56:45 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:56:45 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:56:45 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:56:45.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:56:45 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:56:45 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:56:45 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:56:45.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:56:45 compute-0 nova_compute[250018]: 2026-01-20 14:56:45.473 250022 INFO nova.compute.manager [None req-461fd9a2-3007-4887-9c23-f594a4af9186 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: 07b956d3-9ad0-4477-9774-52336fa39e0c] Pausing
Jan 20 14:56:45 compute-0 nova_compute[250018]: 2026-01-20 14:56:45.474 250022 DEBUG nova.objects.instance [None req-461fd9a2-3007-4887-9c23-f594a4af9186 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Lazy-loading 'flavor' on Instance uuid 07b956d3-9ad0-4477-9774-52336fa39e0c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:56:45 compute-0 nova_compute[250018]: 2026-01-20 14:56:45.498 250022 DEBUG nova.compute.manager [None req-461fd9a2-3007-4887-9c23-f594a4af9186 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: 07b956d3-9ad0-4477-9774-52336fa39e0c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:56:45 compute-0 nova_compute[250018]: 2026-01-20 14:56:45.499 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768921005.4983385, 07b956d3-9ad0-4477-9774-52336fa39e0c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:56:45 compute-0 nova_compute[250018]: 2026-01-20 14:56:45.499 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 07b956d3-9ad0-4477-9774-52336fa39e0c] VM Paused (Lifecycle Event)
Jan 20 14:56:45 compute-0 nova_compute[250018]: 2026-01-20 14:56:45.535 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 07b956d3-9ad0-4477-9774-52336fa39e0c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:56:45 compute-0 nova_compute[250018]: 2026-01-20 14:56:45.538 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 07b956d3-9ad0-4477-9774-52336fa39e0c] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:56:45 compute-0 nova_compute[250018]: 2026-01-20 14:56:45.570 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 07b956d3-9ad0-4477-9774-52336fa39e0c] During sync_power_state the instance has a pending task (pausing). Skip.
Jan 20 14:56:46 compute-0 nova_compute[250018]: 2026-01-20 14:56:46.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:56:46 compute-0 nova_compute[250018]: 2026-01-20 14:56:46.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:56:46 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2128: 321 pgs: 321 active+clean; 430 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 4.1 MiB/s wr, 205 op/s
Jan 20 14:56:46 compute-0 nova_compute[250018]: 2026-01-20 14:56:46.076 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:56:46 compute-0 nova_compute[250018]: 2026-01-20 14:56:46.077 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:56:46 compute-0 nova_compute[250018]: 2026-01-20 14:56:46.077 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:56:46 compute-0 nova_compute[250018]: 2026-01-20 14:56:46.077 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 14:56:46 compute-0 nova_compute[250018]: 2026-01-20 14:56:46.077 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:56:46 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:56:46 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4239732291' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:56:46 compute-0 nova_compute[250018]: 2026-01-20 14:56:46.541 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:56:46 compute-0 nova_compute[250018]: 2026-01-20 14:56:46.629 250022 DEBUG nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] skipping disk for instance-00000085 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:56:46 compute-0 nova_compute[250018]: 2026-01-20 14:56:46.630 250022 DEBUG nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] skipping disk for instance-00000085 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:56:46 compute-0 nova_compute[250018]: 2026-01-20 14:56:46.635 250022 DEBUG nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] skipping disk for instance-0000007f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:56:46 compute-0 nova_compute[250018]: 2026-01-20 14:56:46.635 250022 DEBUG nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] skipping disk for instance-0000007f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:56:46 compute-0 nova_compute[250018]: 2026-01-20 14:56:46.806 250022 WARNING nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:56:46 compute-0 nova_compute[250018]: 2026-01-20 14:56:46.807 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3995MB free_disk=20.781208038330078GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 14:56:46 compute-0 nova_compute[250018]: 2026-01-20 14:56:46.807 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:56:46 compute-0 nova_compute[250018]: 2026-01-20 14:56:46.807 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:56:46 compute-0 nova_compute[250018]: 2026-01-20 14:56:46.836 250022 DEBUG oslo_concurrency.lockutils [None req-205b500e-6ec8-4e01-8782-d7a6c9537a76 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Acquiring lock "07b956d3-9ad0-4477-9774-52336fa39e0c" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:56:46 compute-0 nova_compute[250018]: 2026-01-20 14:56:46.836 250022 DEBUG oslo_concurrency.lockutils [None req-205b500e-6ec8-4e01-8782-d7a6c9537a76 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Lock "07b956d3-9ad0-4477-9774-52336fa39e0c" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:56:46 compute-0 nova_compute[250018]: 2026-01-20 14:56:46.836 250022 INFO nova.compute.manager [None req-205b500e-6ec8-4e01-8782-d7a6c9537a76 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: 07b956d3-9ad0-4477-9774-52336fa39e0c] Shelving
Jan 20 14:56:46 compute-0 nova_compute[250018]: 2026-01-20 14:56:46.900 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Instance 11c82470-ab02-4424-908b-705f1f65e062 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 14:56:46 compute-0 nova_compute[250018]: 2026-01-20 14:56:46.901 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Instance 07b956d3-9ad0-4477-9774-52336fa39e0c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 14:56:46 compute-0 nova_compute[250018]: 2026-01-20 14:56:46.901 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 14:56:46 compute-0 nova_compute[250018]: 2026-01-20 14:56:46.902 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 14:56:46 compute-0 kernel: tap8c4a4338-7e (unregistering): left promiscuous mode
Jan 20 14:56:46 compute-0 NetworkManager[48960]: <info>  [1768921006.9281] device (tap8c4a4338-7e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 14:56:46 compute-0 ovn_controller[148666]: 2026-01-20T14:56:46Z|00487|binding|INFO|Releasing lport 8c4a4338-7e14-4722-b49c-72a5d815cc76 from this chassis (sb_readonly=0)
Jan 20 14:56:46 compute-0 ovn_controller[148666]: 2026-01-20T14:56:46Z|00488|binding|INFO|Setting lport 8c4a4338-7e14-4722-b49c-72a5d815cc76 down in Southbound
Jan 20 14:56:46 compute-0 nova_compute[250018]: 2026-01-20 14:56:46.939 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:46 compute-0 ovn_controller[148666]: 2026-01-20T14:56:46Z|00489|binding|INFO|Removing iface tap8c4a4338-7e ovn-installed in OVS
Jan 20 14:56:46 compute-0 nova_compute[250018]: 2026-01-20 14:56:46.943 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:46 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:56:46.953 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6b:82:58 10.100.0.3'], port_security=['fa:16:3e:6b:82:58 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '07b956d3-9ad0-4477-9774-52336fa39e0c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b3b1b7f5b4f84b5abbc401eb577c85c0', 'neutron:revision_number': '4', 'neutron:security_group_ids': '800ce09e-d4c4-4be1-b862-b09f6926701e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3273589e-5585-406c-9611-87f758b0e521, chassis=[], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=8c4a4338-7e14-4722-b49c-72a5d815cc76) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:56:46 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:56:46.954 160071 INFO neutron.agent.ovn.metadata.agent [-] Port 8c4a4338-7e14-4722-b49c-72a5d815cc76 in datapath 41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce unbound from our chassis
Jan 20 14:56:46 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:56:46.955 160071 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 14:56:46 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:56:46.956 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[2177527d-84ac-429a-9c7f-d3d10b7fd00d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:56:46 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:56:46.957 160071 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce namespace which is not needed anymore
Jan 20 14:56:46 compute-0 nova_compute[250018]: 2026-01-20 14:56:46.958 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:46 compute-0 nova_compute[250018]: 2026-01-20 14:56:46.979 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:56:46 compute-0 systemd[1]: machine-qemu\x2d63\x2dinstance\x2d00000085.scope: Deactivated successfully.
Jan 20 14:56:46 compute-0 systemd[1]: machine-qemu\x2d63\x2dinstance\x2d00000085.scope: Consumed 4.157s CPU time.
Jan 20 14:56:46 compute-0 systemd-machined[216401]: Machine qemu-63-instance-00000085 terminated.
Jan 20 14:56:47 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:56:47 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:56:47 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:56:47.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:56:47 compute-0 neutron-haproxy-ovnmeta-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce[328407]: [NOTICE]   (328411) : haproxy version is 2.8.14-c23fe91
Jan 20 14:56:47 compute-0 neutron-haproxy-ovnmeta-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce[328407]: [NOTICE]   (328411) : path to executable is /usr/sbin/haproxy
Jan 20 14:56:47 compute-0 neutron-haproxy-ovnmeta-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce[328407]: [WARNING]  (328411) : Exiting Master process...
Jan 20 14:56:47 compute-0 neutron-haproxy-ovnmeta-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce[328407]: [ALERT]    (328411) : Current worker (328413) exited with code 143 (Terminated)
Jan 20 14:56:47 compute-0 neutron-haproxy-ovnmeta-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce[328407]: [WARNING]  (328411) : All workers exited. Exiting... (0)
Jan 20 14:56:47 compute-0 systemd[1]: libpod-72ca2ab86a0c1cb4febc031102b0b93498204d00c8487277d9f2883b7129a375.scope: Deactivated successfully.
Jan 20 14:56:47 compute-0 podman[328516]: 2026-01-20 14:56:47.094780176 +0000 UTC m=+0.049840013 container died 72ca2ab86a0c1cb4febc031102b0b93498204d00c8487277d9f2883b7129a375 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 20 14:56:47 compute-0 kernel: tap8c4a4338-7e: entered promiscuous mode
Jan 20 14:56:47 compute-0 NetworkManager[48960]: <info>  [1768921007.0962] manager: (tap8c4a4338-7e): new Tun device (/org/freedesktop/NetworkManager/Devices/245)
Jan 20 14:56:47 compute-0 kernel: tap8c4a4338-7e (unregistering): left promiscuous mode
Jan 20 14:56:47 compute-0 systemd-udevd[328493]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:56:47 compute-0 ovn_controller[148666]: 2026-01-20T14:56:47Z|00490|binding|INFO|Claiming lport 8c4a4338-7e14-4722-b49c-72a5d815cc76 for this chassis.
Jan 20 14:56:47 compute-0 ovn_controller[148666]: 2026-01-20T14:56:47Z|00491|binding|INFO|8c4a4338-7e14-4722-b49c-72a5d815cc76: Claiming fa:16:3e:6b:82:58 10.100.0.3
Jan 20 14:56:47 compute-0 nova_compute[250018]: 2026-01-20 14:56:47.098 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:47 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:56:47.115 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6b:82:58 10.100.0.3'], port_security=['fa:16:3e:6b:82:58 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '07b956d3-9ad0-4477-9774-52336fa39e0c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b3b1b7f5b4f84b5abbc401eb577c85c0', 'neutron:revision_number': '4', 'neutron:security_group_ids': '800ce09e-d4c4-4be1-b862-b09f6926701e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3273589e-5585-406c-9611-87f758b0e521, chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=8c4a4338-7e14-4722-b49c-72a5d815cc76) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:56:47 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-72ca2ab86a0c1cb4febc031102b0b93498204d00c8487277d9f2883b7129a375-userdata-shm.mount: Deactivated successfully.
Jan 20 14:56:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-19119de2701906b083a86d1679e1762bc66f3a44882cf5c68d850c4656310183-merged.mount: Deactivated successfully.
Jan 20 14:56:47 compute-0 nova_compute[250018]: 2026-01-20 14:56:47.126 250022 INFO nova.virt.libvirt.driver [-] [instance: 07b956d3-9ad0-4477-9774-52336fa39e0c] Instance destroyed successfully.
Jan 20 14:56:47 compute-0 nova_compute[250018]: 2026-01-20 14:56:47.127 250022 DEBUG nova.objects.instance [None req-205b500e-6ec8-4e01-8782-d7a6c9537a76 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Lazy-loading 'numa_topology' on Instance uuid 07b956d3-9ad0-4477-9774-52336fa39e0c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:56:47 compute-0 ovn_controller[148666]: 2026-01-20T14:56:47Z|00492|binding|INFO|Releasing lport 8c4a4338-7e14-4722-b49c-72a5d815cc76 from this chassis (sb_readonly=0)
Jan 20 14:56:47 compute-0 nova_compute[250018]: 2026-01-20 14:56:47.133 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:47 compute-0 ceph-mon[74360]: pgmap v2128: 321 pgs: 321 active+clean; 430 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 4.1 MiB/s wr, 205 op/s
Jan 20 14:56:47 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/4239732291' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:56:47 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/1572009590' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:56:47 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/3207207858' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:56:47 compute-0 podman[328516]: 2026-01-20 14:56:47.13691784 +0000 UTC m=+0.091977657 container cleanup 72ca2ab86a0c1cb4febc031102b0b93498204d00c8487277d9f2883b7129a375 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 20 14:56:47 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:56:47.137 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6b:82:58 10.100.0.3'], port_security=['fa:16:3e:6b:82:58 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '07b956d3-9ad0-4477-9774-52336fa39e0c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b3b1b7f5b4f84b5abbc401eb577c85c0', 'neutron:revision_number': '4', 'neutron:security_group_ids': '800ce09e-d4c4-4be1-b862-b09f6926701e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3273589e-5585-406c-9611-87f758b0e521, chassis=[], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=8c4a4338-7e14-4722-b49c-72a5d815cc76) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:56:47 compute-0 systemd[1]: libpod-conmon-72ca2ab86a0c1cb4febc031102b0b93498204d00c8487277d9f2883b7129a375.scope: Deactivated successfully.
Jan 20 14:56:47 compute-0 podman[328568]: 2026-01-20 14:56:47.202721511 +0000 UTC m=+0.036590686 container remove 72ca2ab86a0c1cb4febc031102b0b93498204d00c8487277d9f2883b7129a375 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 20 14:56:47 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:56:47.208 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[7aff0e64-ea17-44b0-9cd2-1e1af0961d81]: (4, ('Tue Jan 20 02:56:47 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce (72ca2ab86a0c1cb4febc031102b0b93498204d00c8487277d9f2883b7129a375)\n72ca2ab86a0c1cb4febc031102b0b93498204d00c8487277d9f2883b7129a375\nTue Jan 20 02:56:47 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce (72ca2ab86a0c1cb4febc031102b0b93498204d00c8487277d9f2883b7129a375)\n72ca2ab86a0c1cb4febc031102b0b93498204d00c8487277d9f2883b7129a375\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:56:47 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:56:47.209 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[dd7dd23c-cbed-44dd-9037-1b4d7b6719c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:56:47 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:56:47.210 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap41a1a3fe-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:56:47 compute-0 nova_compute[250018]: 2026-01-20 14:56:47.212 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:47 compute-0 kernel: tap41a1a3fe-f0: left promiscuous mode
Jan 20 14:56:47 compute-0 nova_compute[250018]: 2026-01-20 14:56:47.230 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:47 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:56:47.235 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[11edbe60-fbb4-4a9a-b3f8-fe1ae0cf4ebe]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:56:47 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:56:47.261 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[c8666f19-2dea-44ce-889b-ca482e965813]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:56:47 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:56:47.263 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[554fd6d5-e349-448c-a49a-46fe1ac83ea1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:56:47 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:56:47.278 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[d96559ea-0a72-4f05-8ecd-b983c0571d4d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 693962, 'reachable_time': 32592, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 328587, 'error': None, 'target': 'ovnmeta-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:56:47 compute-0 systemd[1]: run-netns-ovnmeta\x2d41a1a3fe\x2df6f8\x2d4375\x2d9b0f\x2da4d4bb269cce.mount: Deactivated successfully.
Jan 20 14:56:47 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:56:47.282 160517 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 14:56:47 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:56:47.282 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[a4648d85-0a91-4e81-b79d-6e1ab374de25]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:56:47 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:56:47.283 160071 INFO neutron.agent.ovn.metadata.agent [-] Port 8c4a4338-7e14-4722-b49c-72a5d815cc76 in datapath 41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce unbound from our chassis
Jan 20 14:56:47 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:56:47.285 160071 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 14:56:47 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:56:47.285 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[c4b9d307-9d10-4302-a559-fd3ad09b559f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:56:47 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:56:47.286 160071 INFO neutron.agent.ovn.metadata.agent [-] Port 8c4a4338-7e14-4722-b49c-72a5d815cc76 in datapath 41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce unbound from our chassis
Jan 20 14:56:47 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:56:47.288 160071 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 14:56:47 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:56:47.289 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[2da44fad-7be3-4506-a893-603decd0f577]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:56:47 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:56:47 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:56:47 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:56:47.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:56:47 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:56:47 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3627162203' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:56:47 compute-0 nova_compute[250018]: 2026-01-20 14:56:47.435 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:56:47 compute-0 nova_compute[250018]: 2026-01-20 14:56:47.442 250022 DEBUG nova.compute.provider_tree [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:56:47 compute-0 nova_compute[250018]: 2026-01-20 14:56:47.464 250022 DEBUG nova.scheduler.client.report [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:56:47 compute-0 nova_compute[250018]: 2026-01-20 14:56:47.494 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 14:56:47 compute-0 nova_compute[250018]: 2026-01-20 14:56:47.494 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.687s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:56:47 compute-0 nova_compute[250018]: 2026-01-20 14:56:47.616 250022 INFO nova.virt.libvirt.driver [None req-205b500e-6ec8-4e01-8782-d7a6c9537a76 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: 07b956d3-9ad0-4477-9774-52336fa39e0c] Beginning cold snapshot process
Jan 20 14:56:47 compute-0 nova_compute[250018]: 2026-01-20 14:56:47.805 250022 DEBUG nova.virt.libvirt.imagebackend [None req-205b500e-6ec8-4e01-8782-d7a6c9537a76 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] No parent info for a32b3e07-16d8-46fd-9a7b-c242c432fcf9; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Jan 20 14:56:47 compute-0 nova_compute[250018]: 2026-01-20 14:56:47.869 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:48 compute-0 nova_compute[250018]: 2026-01-20 14:56:48.014 250022 DEBUG nova.storage.rbd_utils [None req-205b500e-6ec8-4e01-8782-d7a6c9537a76 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] creating snapshot(fd5a6c8044794bc28d84076c004c7199) on rbd image(07b956d3-9ad0-4477-9774-52336fa39e0c_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 20 14:56:48 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2129: 321 pgs: 321 active+clean; 465 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 4.3 MiB/s wr, 232 op/s
Jan 20 14:56:48 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e308 do_prune osdmap full prune enabled
Jan 20 14:56:48 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e309 e309: 3 total, 3 up, 3 in
Jan 20 14:56:48 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e309: 3 total, 3 up, 3 in
Jan 20 14:56:48 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3627162203' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:56:48 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/1351465299' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:56:48 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/1427266879' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:56:48 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/4132156085' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:56:48 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/1142049157' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:56:48 compute-0 nova_compute[250018]: 2026-01-20 14:56:48.207 250022 DEBUG nova.storage.rbd_utils [None req-205b500e-6ec8-4e01-8782-d7a6c9537a76 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] cloning vms/07b956d3-9ad0-4477-9774-52336fa39e0c_disk@fd5a6c8044794bc28d84076c004c7199 to images/6fa40815-d8ae-463b-8e43-fe8a8d4374e5 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Jan 20 14:56:48 compute-0 nova_compute[250018]: 2026-01-20 14:56:48.448 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:48 compute-0 nova_compute[250018]: 2026-01-20 14:56:48.495 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:56:48 compute-0 nova_compute[250018]: 2026-01-20 14:56:48.495 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:56:48 compute-0 nova_compute[250018]: 2026-01-20 14:56:48.833 250022 DEBUG nova.storage.rbd_utils [None req-205b500e-6ec8-4e01-8782-d7a6c9537a76 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] flattening images/6fa40815-d8ae-463b-8e43-fe8a8d4374e5 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Jan 20 14:56:49 compute-0 nova_compute[250018]: 2026-01-20 14:56:49.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:56:49 compute-0 nova_compute[250018]: 2026-01-20 14:56:49.051 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 14:56:49 compute-0 nova_compute[250018]: 2026-01-20 14:56:49.051 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 14:56:49 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:56:49 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:56:49 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:56:49.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:56:49 compute-0 ceph-mon[74360]: pgmap v2129: 321 pgs: 321 active+clean; 465 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 4.3 MiB/s wr, 232 op/s
Jan 20 14:56:49 compute-0 ceph-mon[74360]: osdmap e309: 3 total, 3 up, 3 in
Jan 20 14:56:49 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/4275779399' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:56:49 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/4266035393' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:56:49 compute-0 nova_compute[250018]: 2026-01-20 14:56:49.323 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "refresh_cache-11c82470-ab02-4424-908b-705f1f65e062" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:56:49 compute-0 nova_compute[250018]: 2026-01-20 14:56:49.323 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquired lock "refresh_cache-11c82470-ab02-4424-908b-705f1f65e062" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:56:49 compute-0 nova_compute[250018]: 2026-01-20 14:56:49.323 250022 DEBUG nova.network.neutron [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] [instance: 11c82470-ab02-4424-908b-705f1f65e062] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 20 14:56:49 compute-0 nova_compute[250018]: 2026-01-20 14:56:49.323 250022 DEBUG nova.objects.instance [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 11c82470-ab02-4424-908b-705f1f65e062 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:56:49 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:56:49 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:56:49 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:56:49.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:56:49 compute-0 nova_compute[250018]: 2026-01-20 14:56:49.434 250022 DEBUG nova.compute.manager [req-c7ec5e10-c32d-4463-91b5-7ba0353fffd0 req-bdf452b0-9ce9-4463-8ecd-0bc1e80bebb6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 07b956d3-9ad0-4477-9774-52336fa39e0c] Received event network-vif-unplugged-8c4a4338-7e14-4722-b49c-72a5d815cc76 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:56:49 compute-0 nova_compute[250018]: 2026-01-20 14:56:49.435 250022 DEBUG oslo_concurrency.lockutils [req-c7ec5e10-c32d-4463-91b5-7ba0353fffd0 req-bdf452b0-9ce9-4463-8ecd-0bc1e80bebb6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "07b956d3-9ad0-4477-9774-52336fa39e0c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:56:49 compute-0 nova_compute[250018]: 2026-01-20 14:56:49.435 250022 DEBUG oslo_concurrency.lockutils [req-c7ec5e10-c32d-4463-91b5-7ba0353fffd0 req-bdf452b0-9ce9-4463-8ecd-0bc1e80bebb6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "07b956d3-9ad0-4477-9774-52336fa39e0c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:56:49 compute-0 nova_compute[250018]: 2026-01-20 14:56:49.435 250022 DEBUG oslo_concurrency.lockutils [req-c7ec5e10-c32d-4463-91b5-7ba0353fffd0 req-bdf452b0-9ce9-4463-8ecd-0bc1e80bebb6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "07b956d3-9ad0-4477-9774-52336fa39e0c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:56:49 compute-0 nova_compute[250018]: 2026-01-20 14:56:49.435 250022 DEBUG nova.compute.manager [req-c7ec5e10-c32d-4463-91b5-7ba0353fffd0 req-bdf452b0-9ce9-4463-8ecd-0bc1e80bebb6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 07b956d3-9ad0-4477-9774-52336fa39e0c] No waiting events found dispatching network-vif-unplugged-8c4a4338-7e14-4722-b49c-72a5d815cc76 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:56:49 compute-0 nova_compute[250018]: 2026-01-20 14:56:49.435 250022 WARNING nova.compute.manager [req-c7ec5e10-c32d-4463-91b5-7ba0353fffd0 req-bdf452b0-9ce9-4463-8ecd-0bc1e80bebb6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 07b956d3-9ad0-4477-9774-52336fa39e0c] Received unexpected event network-vif-unplugged-8c4a4338-7e14-4722-b49c-72a5d815cc76 for instance with vm_state paused and task_state shelving_image_uploading.
Jan 20 14:56:49 compute-0 nova_compute[250018]: 2026-01-20 14:56:49.436 250022 DEBUG nova.compute.manager [req-c7ec5e10-c32d-4463-91b5-7ba0353fffd0 req-bdf452b0-9ce9-4463-8ecd-0bc1e80bebb6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 07b956d3-9ad0-4477-9774-52336fa39e0c] Received event network-vif-plugged-8c4a4338-7e14-4722-b49c-72a5d815cc76 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:56:49 compute-0 nova_compute[250018]: 2026-01-20 14:56:49.436 250022 DEBUG oslo_concurrency.lockutils [req-c7ec5e10-c32d-4463-91b5-7ba0353fffd0 req-bdf452b0-9ce9-4463-8ecd-0bc1e80bebb6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "07b956d3-9ad0-4477-9774-52336fa39e0c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:56:49 compute-0 nova_compute[250018]: 2026-01-20 14:56:49.436 250022 DEBUG oslo_concurrency.lockutils [req-c7ec5e10-c32d-4463-91b5-7ba0353fffd0 req-bdf452b0-9ce9-4463-8ecd-0bc1e80bebb6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "07b956d3-9ad0-4477-9774-52336fa39e0c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:56:49 compute-0 nova_compute[250018]: 2026-01-20 14:56:49.436 250022 DEBUG oslo_concurrency.lockutils [req-c7ec5e10-c32d-4463-91b5-7ba0353fffd0 req-bdf452b0-9ce9-4463-8ecd-0bc1e80bebb6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "07b956d3-9ad0-4477-9774-52336fa39e0c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:56:49 compute-0 nova_compute[250018]: 2026-01-20 14:56:49.436 250022 DEBUG nova.compute.manager [req-c7ec5e10-c32d-4463-91b5-7ba0353fffd0 req-bdf452b0-9ce9-4463-8ecd-0bc1e80bebb6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 07b956d3-9ad0-4477-9774-52336fa39e0c] No waiting events found dispatching network-vif-plugged-8c4a4338-7e14-4722-b49c-72a5d815cc76 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:56:49 compute-0 nova_compute[250018]: 2026-01-20 14:56:49.437 250022 WARNING nova.compute.manager [req-c7ec5e10-c32d-4463-91b5-7ba0353fffd0 req-bdf452b0-9ce9-4463-8ecd-0bc1e80bebb6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 07b956d3-9ad0-4477-9774-52336fa39e0c] Received unexpected event network-vif-plugged-8c4a4338-7e14-4722-b49c-72a5d815cc76 for instance with vm_state paused and task_state shelving_image_uploading.
Jan 20 14:56:49 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:56:49 compute-0 nova_compute[250018]: 2026-01-20 14:56:49.975 250022 DEBUG nova.storage.rbd_utils [None req-205b500e-6ec8-4e01-8782-d7a6c9537a76 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] removing snapshot(fd5a6c8044794bc28d84076c004c7199) on rbd image(07b956d3-9ad0-4477-9774-52336fa39e0c_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Jan 20 14:56:50 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2131: 321 pgs: 321 active+clean; 469 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 4.3 MiB/s wr, 244 op/s
Jan 20 14:56:50 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e309 do_prune osdmap full prune enabled
Jan 20 14:56:50 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e310 e310: 3 total, 3 up, 3 in
Jan 20 14:56:50 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e310: 3 total, 3 up, 3 in
Jan 20 14:56:50 compute-0 nova_compute[250018]: 2026-01-20 14:56:50.502 250022 DEBUG nova.storage.rbd_utils [None req-205b500e-6ec8-4e01-8782-d7a6c9537a76 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] creating snapshot(snap) on rbd image(6fa40815-d8ae-463b-8e43-fe8a8d4374e5) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 20 14:56:50 compute-0 nova_compute[250018]: 2026-01-20 14:56:50.640 250022 DEBUG nova.network.neutron [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] [instance: 11c82470-ab02-4424-908b-705f1f65e062] Updating instance_info_cache with network_info: [{"id": "6532e62a-b883-47ff-9799-820144870294", "address": "fa:16:3e:7a:3f:c2", "network": {"id": "f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c", "bridge": "br-int", "label": "tempest-ServersTestJSON-1745321011-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca6cd0afe0ab41e3ab36d21a4129f734", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6532e62a-b8", "ovs_interfaceid": "6532e62a-b883-47ff-9799-820144870294", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:56:50 compute-0 nova_compute[250018]: 2026-01-20 14:56:50.662 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Releasing lock "refresh_cache-11c82470-ab02-4424-908b-705f1f65e062" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:56:50 compute-0 nova_compute[250018]: 2026-01-20 14:56:50.663 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] [instance: 11c82470-ab02-4424-908b-705f1f65e062] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 20 14:56:51 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:56:51 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:56:51 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:56:51.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:56:51 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e310 do_prune osdmap full prune enabled
Jan 20 14:56:51 compute-0 ceph-mon[74360]: pgmap v2131: 321 pgs: 321 active+clean; 469 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 4.3 MiB/s wr, 244 op/s
Jan 20 14:56:51 compute-0 ceph-mon[74360]: osdmap e310: 3 total, 3 up, 3 in
Jan 20 14:56:51 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:56:51 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:56:51 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:56:51.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:56:51 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e311 e311: 3 total, 3 up, 3 in
Jan 20 14:56:51 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e311: 3 total, 3 up, 3 in
Jan 20 14:56:52 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2134: 321 pgs: 321 active+clean; 492 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 5.5 MiB/s wr, 192 op/s
Jan 20 14:56:52 compute-0 ceph-mon[74360]: osdmap e311: 3 total, 3 up, 3 in
Jan 20 14:56:52 compute-0 ceph-mon[74360]: pgmap v2134: 321 pgs: 321 active+clean; 492 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 5.5 MiB/s wr, 192 op/s
Jan 20 14:56:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:56:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:56:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:56:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:56:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:56:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:56:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Optimize plan auto_2026-01-20_14:56:52
Jan 20 14:56:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 14:56:52 compute-0 ceph-mgr[74653]: [balancer INFO root] do_upmap
Jan 20 14:56:52 compute-0 ceph-mgr[74653]: [balancer INFO root] pools ['images', 'backups', 'vms', 'volumes', 'cephfs.cephfs.meta', '.rgw.root', '.mgr', 'cephfs.cephfs.data', 'default.rgw.control', 'default.rgw.log', 'default.rgw.meta']
Jan 20 14:56:52 compute-0 ceph-mgr[74653]: [balancer INFO root] prepared 0/10 changes
Jan 20 14:56:52 compute-0 nova_compute[250018]: 2026-01-20 14:56:52.864 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:52 compute-0 nova_compute[250018]: 2026-01-20 14:56:52.870 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:52 compute-0 sudo[328734]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:56:52 compute-0 sudo[328734]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:56:52 compute-0 sudo[328734]: pam_unix(sudo:session): session closed for user root
Jan 20 14:56:53 compute-0 sudo[328759]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:56:53 compute-0 sudo[328759]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:56:53 compute-0 sudo[328759]: pam_unix(sudo:session): session closed for user root
Jan 20 14:56:53 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:56:53 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:56:53 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:56:53.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:56:53 compute-0 nova_compute[250018]: 2026-01-20 14:56:53.202 250022 INFO nova.virt.libvirt.driver [None req-205b500e-6ec8-4e01-8782-d7a6c9537a76 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: 07b956d3-9ad0-4477-9774-52336fa39e0c] Snapshot image upload complete
Jan 20 14:56:53 compute-0 nova_compute[250018]: 2026-01-20 14:56:53.203 250022 DEBUG nova.compute.manager [None req-205b500e-6ec8-4e01-8782-d7a6c9537a76 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: 07b956d3-9ad0-4477-9774-52336fa39e0c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:56:53 compute-0 nova_compute[250018]: 2026-01-20 14:56:53.267 250022 INFO nova.compute.manager [None req-205b500e-6ec8-4e01-8782-d7a6c9537a76 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: 07b956d3-9ad0-4477-9774-52336fa39e0c] Shelve offloading
Jan 20 14:56:53 compute-0 nova_compute[250018]: 2026-01-20 14:56:53.273 250022 INFO nova.virt.libvirt.driver [-] [instance: 07b956d3-9ad0-4477-9774-52336fa39e0c] Instance destroyed successfully.
Jan 20 14:56:53 compute-0 nova_compute[250018]: 2026-01-20 14:56:53.273 250022 DEBUG nova.compute.manager [None req-205b500e-6ec8-4e01-8782-d7a6c9537a76 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: 07b956d3-9ad0-4477-9774-52336fa39e0c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:56:53 compute-0 nova_compute[250018]: 2026-01-20 14:56:53.275 250022 DEBUG oslo_concurrency.lockutils [None req-205b500e-6ec8-4e01-8782-d7a6c9537a76 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Acquiring lock "refresh_cache-07b956d3-9ad0-4477-9774-52336fa39e0c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:56:53 compute-0 nova_compute[250018]: 2026-01-20 14:56:53.275 250022 DEBUG oslo_concurrency.lockutils [None req-205b500e-6ec8-4e01-8782-d7a6c9537a76 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Acquired lock "refresh_cache-07b956d3-9ad0-4477-9774-52336fa39e0c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:56:53 compute-0 nova_compute[250018]: 2026-01-20 14:56:53.275 250022 DEBUG nova.network.neutron [None req-205b500e-6ec8-4e01-8782-d7a6c9537a76 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: 07b956d3-9ad0-4477-9774-52336fa39e0c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:56:53 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:56:53 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:56:53 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:56:53.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:56:53 compute-0 nova_compute[250018]: 2026-01-20 14:56:53.451 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:54 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2135: 321 pgs: 321 active+clean; 542 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 7.7 MiB/s rd, 7.8 MiB/s wr, 362 op/s
Jan 20 14:56:54 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e311 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:56:55 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:56:55 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:56:55 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:56:55.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:56:55 compute-0 ceph-mon[74360]: pgmap v2135: 321 pgs: 321 active+clean; 542 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 7.7 MiB/s rd, 7.8 MiB/s wr, 362 op/s
Jan 20 14:56:55 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:56:55 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:56:55 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:56:55.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:56:55 compute-0 nova_compute[250018]: 2026-01-20 14:56:55.422 250022 DEBUG nova.network.neutron [None req-205b500e-6ec8-4e01-8782-d7a6c9537a76 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: 07b956d3-9ad0-4477-9774-52336fa39e0c] Updating instance_info_cache with network_info: [{"id": "8c4a4338-7e14-4722-b49c-72a5d815cc76", "address": "fa:16:3e:6b:82:58", "network": {"id": "41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1445030024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3b1b7f5b4f84b5abbc401eb577c85c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c4a4338-7e", "ovs_interfaceid": "8c4a4338-7e14-4722-b49c-72a5d815cc76", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:56:55 compute-0 nova_compute[250018]: 2026-01-20 14:56:55.792 250022 DEBUG oslo_concurrency.lockutils [None req-205b500e-6ec8-4e01-8782-d7a6c9537a76 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Releasing lock "refresh_cache-07b956d3-9ad0-4477-9774-52336fa39e0c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:56:56 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2136: 321 pgs: 321 active+clean; 532 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 7.8 MiB/s rd, 6.0 MiB/s wr, 386 op/s
Jan 20 14:56:57 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:56:57 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:56:57 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:56:57.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:56:57 compute-0 ceph-mon[74360]: pgmap v2136: 321 pgs: 321 active+clean; 532 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 7.8 MiB/s rd, 6.0 MiB/s wr, 386 op/s
Jan 20 14:56:57 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:56:57 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:56:57 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:56:57.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:56:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 14:56:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:56:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 14:56:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:56:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:56:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:56:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:56:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:56:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:56:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:56:57 compute-0 nova_compute[250018]: 2026-01-20 14:56:57.871 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:58 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2137: 321 pgs: 321 active+clean; 506 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 8.8 MiB/s rd, 5.9 MiB/s wr, 448 op/s
Jan 20 14:56:58 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/2810887296' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:56:58 compute-0 nova_compute[250018]: 2026-01-20 14:56:58.358 250022 INFO nova.virt.libvirt.driver [-] [instance: 07b956d3-9ad0-4477-9774-52336fa39e0c] Instance destroyed successfully.
Jan 20 14:56:58 compute-0 nova_compute[250018]: 2026-01-20 14:56:58.359 250022 DEBUG nova.objects.instance [None req-205b500e-6ec8-4e01-8782-d7a6c9537a76 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Lazy-loading 'resources' on Instance uuid 07b956d3-9ad0-4477-9774-52336fa39e0c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:56:58 compute-0 nova_compute[250018]: 2026-01-20 14:56:58.379 250022 DEBUG nova.virt.libvirt.vif [None req-205b500e-6ec8-4e01-8782-d7a6c9537a76 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:56:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-24332407',display_name='tempest-ServerActionsTestOtherB-server-24332407',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-24332407',id=133,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:56:42Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='b3b1b7f5b4f84b5abbc401eb577c85c0',ramdisk_id='',reservation_id='r-jftjrieb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-1136521362',owner_user_name='tempest-ServerActionsTestOtherB-1136521362-project-member',shelved_at='2026-01-20T14:56:53.203299',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='6fa40815-d8ae-463b-8e43-fe8a8d4374e5'},tags=<?>,task_state='shelving_offloading',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T14:56:47Z,user_data=None,user_id='215db37373dc4ae5a75cbd6866f471da',uuid=07b956d3-9ad0-4477-9774-52336fa39e0c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='shelved') vif={"id": "8c4a4338-7e14-4722-b49c-72a5d815cc76", "address": "fa:16:3e:6b:82:58", "network": {"id": "41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1445030024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3b1b7f5b4f84b5abbc401eb577c85c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c4a4338-7e", "ovs_interfaceid": "8c4a4338-7e14-4722-b49c-72a5d815cc76", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 14:56:58 compute-0 nova_compute[250018]: 2026-01-20 14:56:58.379 250022 DEBUG nova.network.os_vif_util [None req-205b500e-6ec8-4e01-8782-d7a6c9537a76 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Converting VIF {"id": "8c4a4338-7e14-4722-b49c-72a5d815cc76", "address": "fa:16:3e:6b:82:58", "network": {"id": "41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1445030024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3b1b7f5b4f84b5abbc401eb577c85c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c4a4338-7e", "ovs_interfaceid": "8c4a4338-7e14-4722-b49c-72a5d815cc76", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:56:58 compute-0 nova_compute[250018]: 2026-01-20 14:56:58.380 250022 DEBUG nova.network.os_vif_util [None req-205b500e-6ec8-4e01-8782-d7a6c9537a76 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6b:82:58,bridge_name='br-int',has_traffic_filtering=True,id=8c4a4338-7e14-4722-b49c-72a5d815cc76,network=Network(41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8c4a4338-7e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:56:58 compute-0 nova_compute[250018]: 2026-01-20 14:56:58.380 250022 DEBUG os_vif [None req-205b500e-6ec8-4e01-8782-d7a6c9537a76 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6b:82:58,bridge_name='br-int',has_traffic_filtering=True,id=8c4a4338-7e14-4722-b49c-72a5d815cc76,network=Network(41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8c4a4338-7e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 14:56:58 compute-0 nova_compute[250018]: 2026-01-20 14:56:58.382 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:58 compute-0 nova_compute[250018]: 2026-01-20 14:56:58.382 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8c4a4338-7e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:56:58 compute-0 nova_compute[250018]: 2026-01-20 14:56:58.384 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:58 compute-0 nova_compute[250018]: 2026-01-20 14:56:58.387 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:56:58 compute-0 nova_compute[250018]: 2026-01-20 14:56:58.388 250022 INFO os_vif [None req-205b500e-6ec8-4e01-8782-d7a6c9537a76 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6b:82:58,bridge_name='br-int',has_traffic_filtering=True,id=8c4a4338-7e14-4722-b49c-72a5d815cc76,network=Network(41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8c4a4338-7e')
Jan 20 14:56:58 compute-0 nova_compute[250018]: 2026-01-20 14:56:58.449 250022 DEBUG nova.compute.manager [req-a70a1873-8bed-40e3-af22-4980c5cbc796 req-be075133-9318-4412-bd93-7c2d2a5260b0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 07b956d3-9ad0-4477-9774-52336fa39e0c] Received event network-changed-8c4a4338-7e14-4722-b49c-72a5d815cc76 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:56:58 compute-0 nova_compute[250018]: 2026-01-20 14:56:58.449 250022 DEBUG nova.compute.manager [req-a70a1873-8bed-40e3-af22-4980c5cbc796 req-be075133-9318-4412-bd93-7c2d2a5260b0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 07b956d3-9ad0-4477-9774-52336fa39e0c] Refreshing instance network info cache due to event network-changed-8c4a4338-7e14-4722-b49c-72a5d815cc76. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:56:58 compute-0 nova_compute[250018]: 2026-01-20 14:56:58.449 250022 DEBUG oslo_concurrency.lockutils [req-a70a1873-8bed-40e3-af22-4980c5cbc796 req-be075133-9318-4412-bd93-7c2d2a5260b0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-07b956d3-9ad0-4477-9774-52336fa39e0c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:56:58 compute-0 nova_compute[250018]: 2026-01-20 14:56:58.449 250022 DEBUG oslo_concurrency.lockutils [req-a70a1873-8bed-40e3-af22-4980c5cbc796 req-be075133-9318-4412-bd93-7c2d2a5260b0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-07b956d3-9ad0-4477-9774-52336fa39e0c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:56:58 compute-0 nova_compute[250018]: 2026-01-20 14:56:58.450 250022 DEBUG nova.network.neutron [req-a70a1873-8bed-40e3-af22-4980c5cbc796 req-be075133-9318-4412-bd93-7c2d2a5260b0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 07b956d3-9ad0-4477-9774-52336fa39e0c] Refreshing network info cache for port 8c4a4338-7e14-4722-b49c-72a5d815cc76 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:56:58 compute-0 nova_compute[250018]: 2026-01-20 14:56:58.452 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:58 compute-0 nova_compute[250018]: 2026-01-20 14:56:58.867 250022 INFO nova.virt.libvirt.driver [None req-205b500e-6ec8-4e01-8782-d7a6c9537a76 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: 07b956d3-9ad0-4477-9774-52336fa39e0c] Deleting instance files /var/lib/nova/instances/07b956d3-9ad0-4477-9774-52336fa39e0c_del
Jan 20 14:56:58 compute-0 nova_compute[250018]: 2026-01-20 14:56:58.867 250022 INFO nova.virt.libvirt.driver [None req-205b500e-6ec8-4e01-8782-d7a6c9537a76 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] [instance: 07b956d3-9ad0-4477-9774-52336fa39e0c] Deletion of /var/lib/nova/instances/07b956d3-9ad0-4477-9774-52336fa39e0c_del complete
Jan 20 14:56:58 compute-0 nova_compute[250018]: 2026-01-20 14:56:58.951 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:56:58 compute-0 nova_compute[250018]: 2026-01-20 14:56:58.986 250022 INFO nova.scheduler.client.report [None req-205b500e-6ec8-4e01-8782-d7a6c9537a76 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Deleted allocations for instance 07b956d3-9ad0-4477-9774-52336fa39e0c
Jan 20 14:56:59 compute-0 nova_compute[250018]: 2026-01-20 14:56:59.059 250022 DEBUG oslo_concurrency.lockutils [None req-205b500e-6ec8-4e01-8782-d7a6c9537a76 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:56:59 compute-0 nova_compute[250018]: 2026-01-20 14:56:59.060 250022 DEBUG oslo_concurrency.lockutils [None req-205b500e-6ec8-4e01-8782-d7a6c9537a76 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:56:59 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:56:59 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:56:59 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:56:59.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:56:59 compute-0 nova_compute[250018]: 2026-01-20 14:56:59.109 250022 DEBUG oslo_concurrency.processutils [None req-205b500e-6ec8-4e01-8782-d7a6c9537a76 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:56:59 compute-0 ceph-mon[74360]: pgmap v2137: 321 pgs: 321 active+clean; 506 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 8.8 MiB/s rd, 5.9 MiB/s wr, 448 op/s
Jan 20 14:56:59 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:56:59 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:56:59 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:56:59.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:56:59 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:56:59 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/689296263' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:56:59 compute-0 nova_compute[250018]: 2026-01-20 14:56:59.546 250022 DEBUG oslo_concurrency.processutils [None req-205b500e-6ec8-4e01-8782-d7a6c9537a76 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:56:59 compute-0 nova_compute[250018]: 2026-01-20 14:56:59.552 250022 DEBUG nova.compute.provider_tree [None req-205b500e-6ec8-4e01-8782-d7a6c9537a76 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:56:59 compute-0 nova_compute[250018]: 2026-01-20 14:56:59.566 250022 DEBUG nova.scheduler.client.report [None req-205b500e-6ec8-4e01-8782-d7a6c9537a76 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:56:59 compute-0 nova_compute[250018]: 2026-01-20 14:56:59.591 250022 DEBUG oslo_concurrency.lockutils [None req-205b500e-6ec8-4e01-8782-d7a6c9537a76 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.531s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:56:59 compute-0 nova_compute[250018]: 2026-01-20 14:56:59.640 250022 DEBUG oslo_concurrency.lockutils [None req-205b500e-6ec8-4e01-8782-d7a6c9537a76 215db37373dc4ae5a75cbd6866f471da b3b1b7f5b4f84b5abbc401eb577c85c0 - - default default] Lock "07b956d3-9ad0-4477-9774-52336fa39e0c" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 12.804s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:56:59 compute-0 nova_compute[250018]: 2026-01-20 14:56:59.942 250022 DEBUG nova.network.neutron [req-a70a1873-8bed-40e3-af22-4980c5cbc796 req-be075133-9318-4412-bd93-7c2d2a5260b0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 07b956d3-9ad0-4477-9774-52336fa39e0c] Updated VIF entry in instance network info cache for port 8c4a4338-7e14-4722-b49c-72a5d815cc76. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 14:56:59 compute-0 nova_compute[250018]: 2026-01-20 14:56:59.943 250022 DEBUG nova.network.neutron [req-a70a1873-8bed-40e3-af22-4980c5cbc796 req-be075133-9318-4412-bd93-7c2d2a5260b0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 07b956d3-9ad0-4477-9774-52336fa39e0c] Updating instance_info_cache with network_info: [{"id": "8c4a4338-7e14-4722-b49c-72a5d815cc76", "address": "fa:16:3e:6b:82:58", "network": {"id": "41a1a3fe-f6f8-4375-9b0f-a4d4bb269cce", "bridge": null, "label": "tempest-ServerActionsTestOtherB-1445030024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3b1b7f5b4f84b5abbc401eb577c85c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "unbound", "details": {}, "devname": "tap8c4a4338-7e", "ovs_interfaceid": null, "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:56:59 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e311 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:56:59 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e311 do_prune osdmap full prune enabled
Jan 20 14:56:59 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e312 e312: 3 total, 3 up, 3 in
Jan 20 14:56:59 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e312: 3 total, 3 up, 3 in
Jan 20 14:56:59 compute-0 nova_compute[250018]: 2026-01-20 14:56:59.975 250022 DEBUG oslo_concurrency.lockutils [req-a70a1873-8bed-40e3-af22-4980c5cbc796 req-be075133-9318-4412-bd93-7c2d2a5260b0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-07b956d3-9ad0-4477-9774-52336fa39e0c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:57:00 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2139: 321 pgs: 321 active+clean; 492 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 7.0 MiB/s rd, 3.2 MiB/s wr, 373 op/s
Jan 20 14:57:00 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/689296263' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:57:00 compute-0 ceph-mon[74360]: osdmap e312: 3 total, 3 up, 3 in
Jan 20 14:57:00 compute-0 nova_compute[250018]: 2026-01-20 14:57:00.659 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:57:01 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:57:01 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:57:01 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:57:01.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:57:01 compute-0 ceph-mon[74360]: pgmap v2139: 321 pgs: 321 active+clean; 492 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 7.0 MiB/s rd, 3.2 MiB/s wr, 373 op/s
Jan 20 14:57:01 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:57:01 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:57:01 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:57:01.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:57:02 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2140: 321 pgs: 321 active+clean; 422 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 6.0 MiB/s rd, 2.8 MiB/s wr, 353 op/s
Jan 20 14:57:02 compute-0 nova_compute[250018]: 2026-01-20 14:57:02.117 250022 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768921007.1159723, 07b956d3-9ad0-4477-9774-52336fa39e0c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:57:02 compute-0 nova_compute[250018]: 2026-01-20 14:57:02.118 250022 INFO nova.compute.manager [-] [instance: 07b956d3-9ad0-4477-9774-52336fa39e0c] VM Stopped (Lifecycle Event)
Jan 20 14:57:02 compute-0 nova_compute[250018]: 2026-01-20 14:57:02.211 250022 DEBUG nova.compute.manager [None req-533dc990-2e17-4a99-86a3-2676102bc6fa - - - - - -] [instance: 07b956d3-9ad0-4477-9774-52336fa39e0c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:57:02 compute-0 ceph-mon[74360]: pgmap v2140: 321 pgs: 321 active+clean; 422 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 6.0 MiB/s rd, 2.8 MiB/s wr, 353 op/s
Jan 20 14:57:03 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:57:03 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:57:03 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:57:03.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:57:03 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:57:03 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:57:03 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:57:03.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:57:03 compute-0 nova_compute[250018]: 2026-01-20 14:57:03.385 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:57:03 compute-0 nova_compute[250018]: 2026-01-20 14:57:03.455 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:57:03 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/234247054' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:57:04 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2141: 321 pgs: 321 active+clean; 385 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 1.1 MiB/s wr, 243 op/s
Jan 20 14:57:04 compute-0 ceph-mon[74360]: pgmap v2141: 321 pgs: 321 active+clean; 385 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 1.1 MiB/s wr, 243 op/s
Jan 20 14:57:04 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:57:05 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:57:05 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:57:05 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:57:05.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:57:05 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:57:05 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:57:05 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:57:05.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:57:06 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2142: 321 pgs: 321 active+clean; 403 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 2.6 MiB/s wr, 197 op/s
Jan 20 14:57:07 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:57:07 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:57:07 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:57:07.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:57:07 compute-0 ceph-mon[74360]: pgmap v2142: 321 pgs: 321 active+clean; 403 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 2.6 MiB/s wr, 197 op/s
Jan 20 14:57:07 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:57:07 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:57:07 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:57:07.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:57:08 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2143: 321 pgs: 321 active+clean; 405 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 430 KiB/s rd, 2.6 MiB/s wr, 146 op/s
Jan 20 14:57:08 compute-0 nova_compute[250018]: 2026-01-20 14:57:08.389 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:57:08 compute-0 nova_compute[250018]: 2026-01-20 14:57:08.456 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:57:08 compute-0 podman[328835]: 2026-01-20 14:57:08.474707198 +0000 UTC m=+0.061440555 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 20 14:57:08 compute-0 podman[328834]: 2026-01-20 14:57:08.515407813 +0000 UTC m=+0.102418027 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 20 14:57:09 compute-0 nova_compute[250018]: 2026-01-20 14:57:09.051 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:57:09 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:57:09 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:57:09 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:57:09.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:57:09 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e312 do_prune osdmap full prune enabled
Jan 20 14:57:09 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e313 e313: 3 total, 3 up, 3 in
Jan 20 14:57:09 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e313: 3 total, 3 up, 3 in
Jan 20 14:57:09 compute-0 ceph-mon[74360]: pgmap v2143: 321 pgs: 321 active+clean; 405 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 430 KiB/s rd, 2.6 MiB/s wr, 146 op/s
Jan 20 14:57:09 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/1167924893' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:57:09 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/1449155061' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:57:09 compute-0 nova_compute[250018]: 2026-01-20 14:57:09.182 250022 DEBUG oslo_concurrency.lockutils [None req-35b1e35d-c7f7-45eb-9ce6-bfea0502a41d 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Acquiring lock "26313fbe-d22b-4bbb-8216-3d6e227174e5" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:57:09 compute-0 nova_compute[250018]: 2026-01-20 14:57:09.182 250022 DEBUG oslo_concurrency.lockutils [None req-35b1e35d-c7f7-45eb-9ce6-bfea0502a41d 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Lock "26313fbe-d22b-4bbb-8216-3d6e227174e5" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:57:09 compute-0 nova_compute[250018]: 2026-01-20 14:57:09.210 250022 DEBUG nova.compute.manager [None req-35b1e35d-c7f7-45eb-9ce6-bfea0502a41d 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 26313fbe-d22b-4bbb-8216-3d6e227174e5] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 14:57:09 compute-0 nova_compute[250018]: 2026-01-20 14:57:09.323 250022 DEBUG oslo_concurrency.lockutils [None req-35b1e35d-c7f7-45eb-9ce6-bfea0502a41d 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:57:09 compute-0 nova_compute[250018]: 2026-01-20 14:57:09.324 250022 DEBUG oslo_concurrency.lockutils [None req-35b1e35d-c7f7-45eb-9ce6-bfea0502a41d 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:57:09 compute-0 nova_compute[250018]: 2026-01-20 14:57:09.343 250022 DEBUG nova.virt.hardware [None req-35b1e35d-c7f7-45eb-9ce6-bfea0502a41d 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 14:57:09 compute-0 nova_compute[250018]: 2026-01-20 14:57:09.344 250022 INFO nova.compute.claims [None req-35b1e35d-c7f7-45eb-9ce6-bfea0502a41d 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 26313fbe-d22b-4bbb-8216-3d6e227174e5] Claim successful on node compute-0.ctlplane.example.com
Jan 20 14:57:09 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:57:09 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:57:09 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:57:09.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:57:09 compute-0 nova_compute[250018]: 2026-01-20 14:57:09.462 250022 DEBUG oslo_concurrency.processutils [None req-35b1e35d-c7f7-45eb-9ce6-bfea0502a41d 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:57:09 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:57:09 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1771001101' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:57:09 compute-0 nova_compute[250018]: 2026-01-20 14:57:09.879 250022 DEBUG oslo_concurrency.processutils [None req-35b1e35d-c7f7-45eb-9ce6-bfea0502a41d 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.417s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:57:09 compute-0 nova_compute[250018]: 2026-01-20 14:57:09.885 250022 DEBUG nova.compute.provider_tree [None req-35b1e35d-c7f7-45eb-9ce6-bfea0502a41d 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:57:09 compute-0 nova_compute[250018]: 2026-01-20 14:57:09.899 250022 DEBUG nova.scheduler.client.report [None req-35b1e35d-c7f7-45eb-9ce6-bfea0502a41d 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:57:09 compute-0 nova_compute[250018]: 2026-01-20 14:57:09.920 250022 DEBUG oslo_concurrency.lockutils [None req-35b1e35d-c7f7-45eb-9ce6-bfea0502a41d 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.596s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:57:09 compute-0 nova_compute[250018]: 2026-01-20 14:57:09.920 250022 DEBUG nova.compute.manager [None req-35b1e35d-c7f7-45eb-9ce6-bfea0502a41d 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 26313fbe-d22b-4bbb-8216-3d6e227174e5] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 14:57:09 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:57:09 compute-0 nova_compute[250018]: 2026-01-20 14:57:09.972 250022 DEBUG nova.compute.manager [None req-35b1e35d-c7f7-45eb-9ce6-bfea0502a41d 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 26313fbe-d22b-4bbb-8216-3d6e227174e5] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 20 14:57:09 compute-0 nova_compute[250018]: 2026-01-20 14:57:09.972 250022 DEBUG nova.network.neutron [None req-35b1e35d-c7f7-45eb-9ce6-bfea0502a41d 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 26313fbe-d22b-4bbb-8216-3d6e227174e5] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 14:57:09 compute-0 nova_compute[250018]: 2026-01-20 14:57:09.992 250022 INFO nova.virt.libvirt.driver [None req-35b1e35d-c7f7-45eb-9ce6-bfea0502a41d 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 26313fbe-d22b-4bbb-8216-3d6e227174e5] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 14:57:10 compute-0 nova_compute[250018]: 2026-01-20 14:57:10.005 250022 DEBUG nova.compute.manager [None req-35b1e35d-c7f7-45eb-9ce6-bfea0502a41d 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 26313fbe-d22b-4bbb-8216-3d6e227174e5] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 14:57:10 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2145: 321 pgs: 321 active+clean; 414 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 438 KiB/s rd, 3.2 MiB/s wr, 149 op/s
Jan 20 14:57:10 compute-0 nova_compute[250018]: 2026-01-20 14:57:10.106 250022 DEBUG nova.compute.manager [None req-35b1e35d-c7f7-45eb-9ce6-bfea0502a41d 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 26313fbe-d22b-4bbb-8216-3d6e227174e5] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 14:57:10 compute-0 nova_compute[250018]: 2026-01-20 14:57:10.107 250022 DEBUG nova.virt.libvirt.driver [None req-35b1e35d-c7f7-45eb-9ce6-bfea0502a41d 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 26313fbe-d22b-4bbb-8216-3d6e227174e5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 14:57:10 compute-0 nova_compute[250018]: 2026-01-20 14:57:10.108 250022 INFO nova.virt.libvirt.driver [None req-35b1e35d-c7f7-45eb-9ce6-bfea0502a41d 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 26313fbe-d22b-4bbb-8216-3d6e227174e5] Creating image(s)
Jan 20 14:57:10 compute-0 nova_compute[250018]: 2026-01-20 14:57:10.147 250022 DEBUG nova.storage.rbd_utils [None req-35b1e35d-c7f7-45eb-9ce6-bfea0502a41d 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] rbd image 26313fbe-d22b-4bbb-8216-3d6e227174e5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:57:10 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e313 do_prune osdmap full prune enabled
Jan 20 14:57:10 compute-0 ceph-mon[74360]: osdmap e313: 3 total, 3 up, 3 in
Jan 20 14:57:10 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/4079294048' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:57:10 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/1771001101' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:57:10 compute-0 nova_compute[250018]: 2026-01-20 14:57:10.183 250022 DEBUG nova.storage.rbd_utils [None req-35b1e35d-c7f7-45eb-9ce6-bfea0502a41d 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] rbd image 26313fbe-d22b-4bbb-8216-3d6e227174e5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:57:10 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e314 e314: 3 total, 3 up, 3 in
Jan 20 14:57:10 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e314: 3 total, 3 up, 3 in
Jan 20 14:57:10 compute-0 nova_compute[250018]: 2026-01-20 14:57:10.225 250022 DEBUG nova.storage.rbd_utils [None req-35b1e35d-c7f7-45eb-9ce6-bfea0502a41d 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] rbd image 26313fbe-d22b-4bbb-8216-3d6e227174e5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:57:10 compute-0 nova_compute[250018]: 2026-01-20 14:57:10.230 250022 DEBUG oslo_concurrency.processutils [None req-35b1e35d-c7f7-45eb-9ce6-bfea0502a41d 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:57:10 compute-0 nova_compute[250018]: 2026-01-20 14:57:10.306 250022 DEBUG oslo_concurrency.processutils [None req-35b1e35d-c7f7-45eb-9ce6-bfea0502a41d 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:57:10 compute-0 nova_compute[250018]: 2026-01-20 14:57:10.307 250022 DEBUG oslo_concurrency.lockutils [None req-35b1e35d-c7f7-45eb-9ce6-bfea0502a41d 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Acquiring lock "82d5c1918fd7c974214c7a48c1793a7a82560462" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:57:10 compute-0 nova_compute[250018]: 2026-01-20 14:57:10.308 250022 DEBUG oslo_concurrency.lockutils [None req-35b1e35d-c7f7-45eb-9ce6-bfea0502a41d 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:57:10 compute-0 nova_compute[250018]: 2026-01-20 14:57:10.308 250022 DEBUG oslo_concurrency.lockutils [None req-35b1e35d-c7f7-45eb-9ce6-bfea0502a41d 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:57:10 compute-0 nova_compute[250018]: 2026-01-20 14:57:10.344 250022 DEBUG nova.storage.rbd_utils [None req-35b1e35d-c7f7-45eb-9ce6-bfea0502a41d 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] rbd image 26313fbe-d22b-4bbb-8216-3d6e227174e5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:57:10 compute-0 nova_compute[250018]: 2026-01-20 14:57:10.349 250022 DEBUG oslo_concurrency.processutils [None req-35b1e35d-c7f7-45eb-9ce6-bfea0502a41d 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 26313fbe-d22b-4bbb-8216-3d6e227174e5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:57:10 compute-0 nova_compute[250018]: 2026-01-20 14:57:10.383 250022 DEBUG nova.policy [None req-35b1e35d-c7f7-45eb-9ce6-bfea0502a41d 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '395a5c503218411284bc94c45263d1fb', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ca6cd0afe0ab41e3ab36d21a4129f734', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 20 14:57:10 compute-0 nova_compute[250018]: 2026-01-20 14:57:10.733 250022 DEBUG oslo_concurrency.processutils [None req-35b1e35d-c7f7-45eb-9ce6-bfea0502a41d 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 26313fbe-d22b-4bbb-8216-3d6e227174e5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.384s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:57:10 compute-0 nova_compute[250018]: 2026-01-20 14:57:10.810 250022 DEBUG nova.storage.rbd_utils [None req-35b1e35d-c7f7-45eb-9ce6-bfea0502a41d 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] resizing rbd image 26313fbe-d22b-4bbb-8216-3d6e227174e5_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 20 14:57:10 compute-0 nova_compute[250018]: 2026-01-20 14:57:10.928 250022 DEBUG nova.objects.instance [None req-35b1e35d-c7f7-45eb-9ce6-bfea0502a41d 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Lazy-loading 'migration_context' on Instance uuid 26313fbe-d22b-4bbb-8216-3d6e227174e5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:57:10 compute-0 nova_compute[250018]: 2026-01-20 14:57:10.948 250022 DEBUG nova.virt.libvirt.driver [None req-35b1e35d-c7f7-45eb-9ce6-bfea0502a41d 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 26313fbe-d22b-4bbb-8216-3d6e227174e5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 14:57:10 compute-0 nova_compute[250018]: 2026-01-20 14:57:10.948 250022 DEBUG nova.virt.libvirt.driver [None req-35b1e35d-c7f7-45eb-9ce6-bfea0502a41d 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 26313fbe-d22b-4bbb-8216-3d6e227174e5] Ensure instance console log exists: /var/lib/nova/instances/26313fbe-d22b-4bbb-8216-3d6e227174e5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 14:57:10 compute-0 nova_compute[250018]: 2026-01-20 14:57:10.949 250022 DEBUG oslo_concurrency.lockutils [None req-35b1e35d-c7f7-45eb-9ce6-bfea0502a41d 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:57:10 compute-0 nova_compute[250018]: 2026-01-20 14:57:10.949 250022 DEBUG oslo_concurrency.lockutils [None req-35b1e35d-c7f7-45eb-9ce6-bfea0502a41d 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:57:10 compute-0 nova_compute[250018]: 2026-01-20 14:57:10.949 250022 DEBUG oslo_concurrency.lockutils [None req-35b1e35d-c7f7-45eb-9ce6-bfea0502a41d 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:57:11 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:57:11 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:57:11 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:57:11.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:57:11 compute-0 ceph-mon[74360]: pgmap v2145: 321 pgs: 321 active+clean; 414 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 438 KiB/s rd, 3.2 MiB/s wr, 149 op/s
Jan 20 14:57:11 compute-0 ceph-mon[74360]: osdmap e314: 3 total, 3 up, 3 in
Jan 20 14:57:11 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e314 do_prune osdmap full prune enabled
Jan 20 14:57:11 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e315 e315: 3 total, 3 up, 3 in
Jan 20 14:57:11 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e315: 3 total, 3 up, 3 in
Jan 20 14:57:11 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:57:11 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:57:11 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:57:11.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:57:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 14:57:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:57:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 20 14:57:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:57:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.009205460171226502 of space, bias 1.0, pg target 2.761638051367951 quantized to 32 (current 32)
Jan 20 14:57:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:57:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 5.452610273590173e-07 of space, bias 1.0, pg target 0.00016248778615298717 quantized to 32 (current 32)
Jan 20 14:57:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:57:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:57:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:57:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.003340814314628699 of space, bias 1.0, pg target 0.9955626657593524 quantized to 32 (current 32)
Jan 20 14:57:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:57:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017332030522985297 quantized to 16 (current 32)
Jan 20 14:57:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:57:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:57:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:57:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.0002166503815373162 quantized to 32 (current 32)
Jan 20 14:57:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:57:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018415282430671877 quantized to 32 (current 32)
Jan 20 14:57:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:57:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:57:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:57:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004333007630746324 quantized to 32 (current 32)
Jan 20 14:57:11 compute-0 nova_compute[250018]: 2026-01-20 14:57:11.614 250022 DEBUG nova.network.neutron [None req-35b1e35d-c7f7-45eb-9ce6-bfea0502a41d 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 26313fbe-d22b-4bbb-8216-3d6e227174e5] Successfully created port: fafd431a-99c9-4a97-9712-f699fc9f34ba _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 20 14:57:12 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2148: 321 pgs: 321 active+clean; 449 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.9 MiB/s rd, 3.5 MiB/s wr, 61 op/s
Jan 20 14:57:12 compute-0 ceph-mon[74360]: osdmap e315: 3 total, 3 up, 3 in
Jan 20 14:57:12 compute-0 ceph-mon[74360]: pgmap v2148: 321 pgs: 321 active+clean; 449 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.9 MiB/s rd, 3.5 MiB/s wr, 61 op/s
Jan 20 14:57:13 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:57:13 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:57:13 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:57:13.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:57:13 compute-0 sudo[329069]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:57:13 compute-0 sudo[329069]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:57:13 compute-0 sudo[329069]: pam_unix(sudo:session): session closed for user root
Jan 20 14:57:13 compute-0 sudo[329094]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:57:13 compute-0 sudo[329094]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:57:13 compute-0 sudo[329094]: pam_unix(sudo:session): session closed for user root
Jan 20 14:57:13 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:57:13 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:57:13 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:57:13.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:57:13 compute-0 nova_compute[250018]: 2026-01-20 14:57:13.393 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:57:13 compute-0 nova_compute[250018]: 2026-01-20 14:57:13.460 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:57:13 compute-0 nova_compute[250018]: 2026-01-20 14:57:13.554 250022 DEBUG nova.network.neutron [None req-35b1e35d-c7f7-45eb-9ce6-bfea0502a41d 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 26313fbe-d22b-4bbb-8216-3d6e227174e5] Successfully updated port: fafd431a-99c9-4a97-9712-f699fc9f34ba _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 20 14:57:13 compute-0 nova_compute[250018]: 2026-01-20 14:57:13.596 250022 DEBUG oslo_concurrency.lockutils [None req-35b1e35d-c7f7-45eb-9ce6-bfea0502a41d 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Acquiring lock "refresh_cache-26313fbe-d22b-4bbb-8216-3d6e227174e5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:57:13 compute-0 nova_compute[250018]: 2026-01-20 14:57:13.597 250022 DEBUG oslo_concurrency.lockutils [None req-35b1e35d-c7f7-45eb-9ce6-bfea0502a41d 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Acquired lock "refresh_cache-26313fbe-d22b-4bbb-8216-3d6e227174e5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:57:13 compute-0 nova_compute[250018]: 2026-01-20 14:57:13.597 250022 DEBUG nova.network.neutron [None req-35b1e35d-c7f7-45eb-9ce6-bfea0502a41d 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 26313fbe-d22b-4bbb-8216-3d6e227174e5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:57:13 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/1115444861' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:57:13 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/1115444861' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:57:13 compute-0 nova_compute[250018]: 2026-01-20 14:57:13.661 250022 DEBUG nova.compute.manager [req-489cc450-2f44-468e-b8fb-b13cb73c9682 req-f2f00fb2-7137-4407-9021-d06c2428b195 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 26313fbe-d22b-4bbb-8216-3d6e227174e5] Received event network-changed-fafd431a-99c9-4a97-9712-f699fc9f34ba external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:57:13 compute-0 nova_compute[250018]: 2026-01-20 14:57:13.662 250022 DEBUG nova.compute.manager [req-489cc450-2f44-468e-b8fb-b13cb73c9682 req-f2f00fb2-7137-4407-9021-d06c2428b195 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 26313fbe-d22b-4bbb-8216-3d6e227174e5] Refreshing instance network info cache due to event network-changed-fafd431a-99c9-4a97-9712-f699fc9f34ba. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:57:13 compute-0 nova_compute[250018]: 2026-01-20 14:57:13.662 250022 DEBUG oslo_concurrency.lockutils [req-489cc450-2f44-468e-b8fb-b13cb73c9682 req-f2f00fb2-7137-4407-9021-d06c2428b195 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-26313fbe-d22b-4bbb-8216-3d6e227174e5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:57:13 compute-0 nova_compute[250018]: 2026-01-20 14:57:13.937 250022 DEBUG nova.network.neutron [None req-35b1e35d-c7f7-45eb-9ce6-bfea0502a41d 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 26313fbe-d22b-4bbb-8216-3d6e227174e5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 14:57:14 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2149: 321 pgs: 321 active+clean; 550 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 10 MiB/s rd, 13 MiB/s wr, 320 op/s
Jan 20 14:57:14 compute-0 ceph-mon[74360]: pgmap v2149: 321 pgs: 321 active+clean; 550 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 10 MiB/s rd, 13 MiB/s wr, 320 op/s
Jan 20 14:57:14 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:57:15 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:57:15 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:57:15 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:57:15.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:57:15 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:57:15 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:57:15 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:57:15.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:57:15 compute-0 nova_compute[250018]: 2026-01-20 14:57:15.744 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:57:15 compute-0 nova_compute[250018]: 2026-01-20 14:57:15.868 250022 DEBUG nova.network.neutron [None req-35b1e35d-c7f7-45eb-9ce6-bfea0502a41d 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 26313fbe-d22b-4bbb-8216-3d6e227174e5] Updating instance_info_cache with network_info: [{"id": "fafd431a-99c9-4a97-9712-f699fc9f34ba", "address": "fa:16:3e:7a:c7:8c", "network": {"id": "f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c", "bridge": "br-int", "label": "tempest-ServersTestJSON-1745321011-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca6cd0afe0ab41e3ab36d21a4129f734", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfafd431a-99", "ovs_interfaceid": "fafd431a-99c9-4a97-9712-f699fc9f34ba", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:57:15 compute-0 nova_compute[250018]: 2026-01-20 14:57:15.889 250022 DEBUG oslo_concurrency.lockutils [None req-35b1e35d-c7f7-45eb-9ce6-bfea0502a41d 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Releasing lock "refresh_cache-26313fbe-d22b-4bbb-8216-3d6e227174e5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:57:15 compute-0 nova_compute[250018]: 2026-01-20 14:57:15.889 250022 DEBUG nova.compute.manager [None req-35b1e35d-c7f7-45eb-9ce6-bfea0502a41d 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 26313fbe-d22b-4bbb-8216-3d6e227174e5] Instance network_info: |[{"id": "fafd431a-99c9-4a97-9712-f699fc9f34ba", "address": "fa:16:3e:7a:c7:8c", "network": {"id": "f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c", "bridge": "br-int", "label": "tempest-ServersTestJSON-1745321011-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca6cd0afe0ab41e3ab36d21a4129f734", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfafd431a-99", "ovs_interfaceid": "fafd431a-99c9-4a97-9712-f699fc9f34ba", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 20 14:57:15 compute-0 nova_compute[250018]: 2026-01-20 14:57:15.890 250022 DEBUG oslo_concurrency.lockutils [req-489cc450-2f44-468e-b8fb-b13cb73c9682 req-f2f00fb2-7137-4407-9021-d06c2428b195 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-26313fbe-d22b-4bbb-8216-3d6e227174e5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:57:15 compute-0 nova_compute[250018]: 2026-01-20 14:57:15.890 250022 DEBUG nova.network.neutron [req-489cc450-2f44-468e-b8fb-b13cb73c9682 req-f2f00fb2-7137-4407-9021-d06c2428b195 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 26313fbe-d22b-4bbb-8216-3d6e227174e5] Refreshing network info cache for port fafd431a-99c9-4a97-9712-f699fc9f34ba _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:57:15 compute-0 nova_compute[250018]: 2026-01-20 14:57:15.893 250022 DEBUG nova.virt.libvirt.driver [None req-35b1e35d-c7f7-45eb-9ce6-bfea0502a41d 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 26313fbe-d22b-4bbb-8216-3d6e227174e5] Start _get_guest_xml network_info=[{"id": "fafd431a-99c9-4a97-9712-f699fc9f34ba", "address": "fa:16:3e:7a:c7:8c", "network": {"id": "f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c", "bridge": "br-int", "label": "tempest-ServersTestJSON-1745321011-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca6cd0afe0ab41e3ab36d21a4129f734", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfafd431a-99", "ovs_interfaceid": "fafd431a-99c9-4a97-9712-f699fc9f34ba", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encrypted': False, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'guest_format': None, 'size': 0, 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 14:57:15 compute-0 nova_compute[250018]: 2026-01-20 14:57:15.900 250022 WARNING nova.virt.libvirt.driver [None req-35b1e35d-c7f7-45eb-9ce6-bfea0502a41d 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:57:15 compute-0 nova_compute[250018]: 2026-01-20 14:57:15.904 250022 DEBUG nova.virt.libvirt.host [None req-35b1e35d-c7f7-45eb-9ce6-bfea0502a41d 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 14:57:15 compute-0 nova_compute[250018]: 2026-01-20 14:57:15.905 250022 DEBUG nova.virt.libvirt.host [None req-35b1e35d-c7f7-45eb-9ce6-bfea0502a41d 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 14:57:15 compute-0 nova_compute[250018]: 2026-01-20 14:57:15.907 250022 DEBUG nova.virt.libvirt.host [None req-35b1e35d-c7f7-45eb-9ce6-bfea0502a41d 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 14:57:15 compute-0 nova_compute[250018]: 2026-01-20 14:57:15.908 250022 DEBUG nova.virt.libvirt.host [None req-35b1e35d-c7f7-45eb-9ce6-bfea0502a41d 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 14:57:15 compute-0 nova_compute[250018]: 2026-01-20 14:57:15.909 250022 DEBUG nova.virt.libvirt.driver [None req-35b1e35d-c7f7-45eb-9ce6-bfea0502a41d 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 14:57:15 compute-0 nova_compute[250018]: 2026-01-20 14:57:15.909 250022 DEBUG nova.virt.hardware [None req-35b1e35d-c7f7-45eb-9ce6-bfea0502a41d 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 14:57:15 compute-0 nova_compute[250018]: 2026-01-20 14:57:15.910 250022 DEBUG nova.virt.hardware [None req-35b1e35d-c7f7-45eb-9ce6-bfea0502a41d 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 14:57:15 compute-0 nova_compute[250018]: 2026-01-20 14:57:15.910 250022 DEBUG nova.virt.hardware [None req-35b1e35d-c7f7-45eb-9ce6-bfea0502a41d 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 14:57:15 compute-0 nova_compute[250018]: 2026-01-20 14:57:15.911 250022 DEBUG nova.virt.hardware [None req-35b1e35d-c7f7-45eb-9ce6-bfea0502a41d 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 14:57:15 compute-0 nova_compute[250018]: 2026-01-20 14:57:15.911 250022 DEBUG nova.virt.hardware [None req-35b1e35d-c7f7-45eb-9ce6-bfea0502a41d 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 14:57:15 compute-0 nova_compute[250018]: 2026-01-20 14:57:15.911 250022 DEBUG nova.virt.hardware [None req-35b1e35d-c7f7-45eb-9ce6-bfea0502a41d 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 14:57:15 compute-0 nova_compute[250018]: 2026-01-20 14:57:15.912 250022 DEBUG nova.virt.hardware [None req-35b1e35d-c7f7-45eb-9ce6-bfea0502a41d 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 14:57:15 compute-0 nova_compute[250018]: 2026-01-20 14:57:15.912 250022 DEBUG nova.virt.hardware [None req-35b1e35d-c7f7-45eb-9ce6-bfea0502a41d 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 14:57:15 compute-0 nova_compute[250018]: 2026-01-20 14:57:15.912 250022 DEBUG nova.virt.hardware [None req-35b1e35d-c7f7-45eb-9ce6-bfea0502a41d 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 14:57:15 compute-0 nova_compute[250018]: 2026-01-20 14:57:15.912 250022 DEBUG nova.virt.hardware [None req-35b1e35d-c7f7-45eb-9ce6-bfea0502a41d 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 14:57:15 compute-0 nova_compute[250018]: 2026-01-20 14:57:15.913 250022 DEBUG nova.virt.hardware [None req-35b1e35d-c7f7-45eb-9ce6-bfea0502a41d 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 14:57:15 compute-0 nova_compute[250018]: 2026-01-20 14:57:15.917 250022 DEBUG oslo_concurrency.processutils [None req-35b1e35d-c7f7-45eb-9ce6-bfea0502a41d 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:57:16 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2150: 321 pgs: 321 active+clean; 577 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 9.7 MiB/s rd, 12 MiB/s wr, 325 op/s
Jan 20 14:57:16 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:57:16 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2422669962' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:57:16 compute-0 nova_compute[250018]: 2026-01-20 14:57:16.374 250022 DEBUG oslo_concurrency.processutils [None req-35b1e35d-c7f7-45eb-9ce6-bfea0502a41d 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:57:16 compute-0 nova_compute[250018]: 2026-01-20 14:57:16.405 250022 DEBUG nova.storage.rbd_utils [None req-35b1e35d-c7f7-45eb-9ce6-bfea0502a41d 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] rbd image 26313fbe-d22b-4bbb-8216-3d6e227174e5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:57:16 compute-0 nova_compute[250018]: 2026-01-20 14:57:16.409 250022 DEBUG oslo_concurrency.processutils [None req-35b1e35d-c7f7-45eb-9ce6-bfea0502a41d 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:57:16 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:57:16 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3750816865' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:57:16 compute-0 nova_compute[250018]: 2026-01-20 14:57:16.857 250022 DEBUG oslo_concurrency.processutils [None req-35b1e35d-c7f7-45eb-9ce6-bfea0502a41d 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:57:16 compute-0 nova_compute[250018]: 2026-01-20 14:57:16.859 250022 DEBUG nova.virt.libvirt.vif [None req-35b1e35d-c7f7-45eb-9ce6-bfea0502a41d 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:57:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1810122753',display_name='tempest-ServersTestJSON-server-1810122753',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1810122753',id=136,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ca6cd0afe0ab41e3ab36d21a4129f734',ramdisk_id='',reservation_id='r-qdvt0ckv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-405461620',owner_user_name='tempest-ServersTestJSON-405461620-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:57:10Z,user_data=None,user_id='395a5c503218411284bc94c45263d1fb',uuid=26313fbe-d22b-4bbb-8216-3d6e227174e5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fafd431a-99c9-4a97-9712-f699fc9f34ba", "address": "fa:16:3e:7a:c7:8c", "network": {"id": "f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c", "bridge": "br-int", "label": "tempest-ServersTestJSON-1745321011-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca6cd0afe0ab41e3ab36d21a4129f734", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfafd431a-99", "ovs_interfaceid": "fafd431a-99c9-4a97-9712-f699fc9f34ba", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 14:57:16 compute-0 nova_compute[250018]: 2026-01-20 14:57:16.860 250022 DEBUG nova.network.os_vif_util [None req-35b1e35d-c7f7-45eb-9ce6-bfea0502a41d 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Converting VIF {"id": "fafd431a-99c9-4a97-9712-f699fc9f34ba", "address": "fa:16:3e:7a:c7:8c", "network": {"id": "f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c", "bridge": "br-int", "label": "tempest-ServersTestJSON-1745321011-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca6cd0afe0ab41e3ab36d21a4129f734", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfafd431a-99", "ovs_interfaceid": "fafd431a-99c9-4a97-9712-f699fc9f34ba", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:57:16 compute-0 nova_compute[250018]: 2026-01-20 14:57:16.861 250022 DEBUG nova.network.os_vif_util [None req-35b1e35d-c7f7-45eb-9ce6-bfea0502a41d 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7a:c7:8c,bridge_name='br-int',has_traffic_filtering=True,id=fafd431a-99c9-4a97-9712-f699fc9f34ba,network=Network(f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfafd431a-99') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:57:16 compute-0 nova_compute[250018]: 2026-01-20 14:57:16.862 250022 DEBUG nova.objects.instance [None req-35b1e35d-c7f7-45eb-9ce6-bfea0502a41d 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Lazy-loading 'pci_devices' on Instance uuid 26313fbe-d22b-4bbb-8216-3d6e227174e5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:57:16 compute-0 nova_compute[250018]: 2026-01-20 14:57:16.874 250022 DEBUG nova.virt.libvirt.driver [None req-35b1e35d-c7f7-45eb-9ce6-bfea0502a41d 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 26313fbe-d22b-4bbb-8216-3d6e227174e5] End _get_guest_xml xml=<domain type="kvm">
Jan 20 14:57:16 compute-0 nova_compute[250018]:   <uuid>26313fbe-d22b-4bbb-8216-3d6e227174e5</uuid>
Jan 20 14:57:16 compute-0 nova_compute[250018]:   <name>instance-00000088</name>
Jan 20 14:57:16 compute-0 nova_compute[250018]:   <memory>131072</memory>
Jan 20 14:57:16 compute-0 nova_compute[250018]:   <vcpu>1</vcpu>
Jan 20 14:57:16 compute-0 nova_compute[250018]:   <metadata>
Jan 20 14:57:16 compute-0 nova_compute[250018]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 14:57:16 compute-0 nova_compute[250018]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 14:57:16 compute-0 nova_compute[250018]:       <nova:name>tempest-ServersTestJSON-server-1810122753</nova:name>
Jan 20 14:57:16 compute-0 nova_compute[250018]:       <nova:creationTime>2026-01-20 14:57:15</nova:creationTime>
Jan 20 14:57:16 compute-0 nova_compute[250018]:       <nova:flavor name="m1.nano">
Jan 20 14:57:16 compute-0 nova_compute[250018]:         <nova:memory>128</nova:memory>
Jan 20 14:57:16 compute-0 nova_compute[250018]:         <nova:disk>1</nova:disk>
Jan 20 14:57:16 compute-0 nova_compute[250018]:         <nova:swap>0</nova:swap>
Jan 20 14:57:16 compute-0 nova_compute[250018]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 14:57:16 compute-0 nova_compute[250018]:         <nova:vcpus>1</nova:vcpus>
Jan 20 14:57:16 compute-0 nova_compute[250018]:       </nova:flavor>
Jan 20 14:57:16 compute-0 nova_compute[250018]:       <nova:owner>
Jan 20 14:57:16 compute-0 nova_compute[250018]:         <nova:user uuid="395a5c503218411284bc94c45263d1fb">tempest-ServersTestJSON-405461620-project-member</nova:user>
Jan 20 14:57:16 compute-0 nova_compute[250018]:         <nova:project uuid="ca6cd0afe0ab41e3ab36d21a4129f734">tempest-ServersTestJSON-405461620</nova:project>
Jan 20 14:57:16 compute-0 nova_compute[250018]:       </nova:owner>
Jan 20 14:57:16 compute-0 nova_compute[250018]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 14:57:16 compute-0 nova_compute[250018]:       <nova:ports>
Jan 20 14:57:16 compute-0 nova_compute[250018]:         <nova:port uuid="fafd431a-99c9-4a97-9712-f699fc9f34ba">
Jan 20 14:57:16 compute-0 nova_compute[250018]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Jan 20 14:57:16 compute-0 nova_compute[250018]:         </nova:port>
Jan 20 14:57:16 compute-0 nova_compute[250018]:       </nova:ports>
Jan 20 14:57:16 compute-0 nova_compute[250018]:     </nova:instance>
Jan 20 14:57:16 compute-0 nova_compute[250018]:   </metadata>
Jan 20 14:57:16 compute-0 nova_compute[250018]:   <sysinfo type="smbios">
Jan 20 14:57:16 compute-0 nova_compute[250018]:     <system>
Jan 20 14:57:16 compute-0 nova_compute[250018]:       <entry name="manufacturer">RDO</entry>
Jan 20 14:57:16 compute-0 nova_compute[250018]:       <entry name="product">OpenStack Compute</entry>
Jan 20 14:57:16 compute-0 nova_compute[250018]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 14:57:16 compute-0 nova_compute[250018]:       <entry name="serial">26313fbe-d22b-4bbb-8216-3d6e227174e5</entry>
Jan 20 14:57:16 compute-0 nova_compute[250018]:       <entry name="uuid">26313fbe-d22b-4bbb-8216-3d6e227174e5</entry>
Jan 20 14:57:16 compute-0 nova_compute[250018]:       <entry name="family">Virtual Machine</entry>
Jan 20 14:57:16 compute-0 nova_compute[250018]:     </system>
Jan 20 14:57:16 compute-0 nova_compute[250018]:   </sysinfo>
Jan 20 14:57:16 compute-0 nova_compute[250018]:   <os>
Jan 20 14:57:16 compute-0 nova_compute[250018]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 14:57:16 compute-0 nova_compute[250018]:     <boot dev="hd"/>
Jan 20 14:57:16 compute-0 nova_compute[250018]:     <smbios mode="sysinfo"/>
Jan 20 14:57:16 compute-0 nova_compute[250018]:   </os>
Jan 20 14:57:16 compute-0 nova_compute[250018]:   <features>
Jan 20 14:57:16 compute-0 nova_compute[250018]:     <acpi/>
Jan 20 14:57:16 compute-0 nova_compute[250018]:     <apic/>
Jan 20 14:57:16 compute-0 nova_compute[250018]:     <vmcoreinfo/>
Jan 20 14:57:16 compute-0 nova_compute[250018]:   </features>
Jan 20 14:57:16 compute-0 nova_compute[250018]:   <clock offset="utc">
Jan 20 14:57:16 compute-0 nova_compute[250018]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 14:57:16 compute-0 nova_compute[250018]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 14:57:16 compute-0 nova_compute[250018]:     <timer name="hpet" present="no"/>
Jan 20 14:57:16 compute-0 nova_compute[250018]:   </clock>
Jan 20 14:57:16 compute-0 nova_compute[250018]:   <cpu mode="custom" match="exact">
Jan 20 14:57:16 compute-0 nova_compute[250018]:     <model>Nehalem</model>
Jan 20 14:57:16 compute-0 nova_compute[250018]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 14:57:16 compute-0 nova_compute[250018]:   </cpu>
Jan 20 14:57:16 compute-0 nova_compute[250018]:   <devices>
Jan 20 14:57:16 compute-0 nova_compute[250018]:     <disk type="network" device="disk">
Jan 20 14:57:16 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 14:57:16 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/26313fbe-d22b-4bbb-8216-3d6e227174e5_disk">
Jan 20 14:57:16 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:57:16 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:57:16 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:57:16 compute-0 nova_compute[250018]:       </source>
Jan 20 14:57:16 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 14:57:16 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:57:16 compute-0 nova_compute[250018]:       </auth>
Jan 20 14:57:16 compute-0 nova_compute[250018]:       <target dev="vda" bus="virtio"/>
Jan 20 14:57:16 compute-0 nova_compute[250018]:     </disk>
Jan 20 14:57:16 compute-0 nova_compute[250018]:     <disk type="network" device="cdrom">
Jan 20 14:57:16 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 14:57:16 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/26313fbe-d22b-4bbb-8216-3d6e227174e5_disk.config">
Jan 20 14:57:16 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:57:16 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:57:16 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:57:16 compute-0 nova_compute[250018]:       </source>
Jan 20 14:57:16 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 14:57:16 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:57:16 compute-0 nova_compute[250018]:       </auth>
Jan 20 14:57:16 compute-0 nova_compute[250018]:       <target dev="sda" bus="sata"/>
Jan 20 14:57:16 compute-0 nova_compute[250018]:     </disk>
Jan 20 14:57:16 compute-0 nova_compute[250018]:     <interface type="ethernet">
Jan 20 14:57:16 compute-0 nova_compute[250018]:       <mac address="fa:16:3e:7a:c7:8c"/>
Jan 20 14:57:16 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 14:57:16 compute-0 nova_compute[250018]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 14:57:16 compute-0 nova_compute[250018]:       <mtu size="1442"/>
Jan 20 14:57:16 compute-0 nova_compute[250018]:       <target dev="tapfafd431a-99"/>
Jan 20 14:57:16 compute-0 nova_compute[250018]:     </interface>
Jan 20 14:57:16 compute-0 nova_compute[250018]:     <serial type="pty">
Jan 20 14:57:16 compute-0 nova_compute[250018]:       <log file="/var/lib/nova/instances/26313fbe-d22b-4bbb-8216-3d6e227174e5/console.log" append="off"/>
Jan 20 14:57:16 compute-0 nova_compute[250018]:     </serial>
Jan 20 14:57:16 compute-0 nova_compute[250018]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 14:57:16 compute-0 nova_compute[250018]:     <video>
Jan 20 14:57:16 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 14:57:16 compute-0 nova_compute[250018]:     </video>
Jan 20 14:57:16 compute-0 nova_compute[250018]:     <input type="tablet" bus="usb"/>
Jan 20 14:57:16 compute-0 nova_compute[250018]:     <rng model="virtio">
Jan 20 14:57:16 compute-0 nova_compute[250018]:       <backend model="random">/dev/urandom</backend>
Jan 20 14:57:16 compute-0 nova_compute[250018]:     </rng>
Jan 20 14:57:16 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root"/>
Jan 20 14:57:16 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:57:16 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:57:16 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:57:16 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:57:16 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:57:16 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:57:16 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:57:16 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:57:16 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:57:16 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:57:16 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:57:16 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:57:16 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:57:16 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:57:16 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:57:16 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:57:16 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:57:16 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:57:16 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:57:16 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:57:16 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:57:16 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:57:16 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:57:16 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:57:16 compute-0 nova_compute[250018]:     <controller type="usb" index="0"/>
Jan 20 14:57:16 compute-0 nova_compute[250018]:     <memballoon model="virtio">
Jan 20 14:57:16 compute-0 nova_compute[250018]:       <stats period="10"/>
Jan 20 14:57:16 compute-0 nova_compute[250018]:     </memballoon>
Jan 20 14:57:16 compute-0 nova_compute[250018]:   </devices>
Jan 20 14:57:16 compute-0 nova_compute[250018]: </domain>
Jan 20 14:57:16 compute-0 nova_compute[250018]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 14:57:16 compute-0 nova_compute[250018]: 2026-01-20 14:57:16.876 250022 DEBUG nova.compute.manager [None req-35b1e35d-c7f7-45eb-9ce6-bfea0502a41d 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 26313fbe-d22b-4bbb-8216-3d6e227174e5] Preparing to wait for external event network-vif-plugged-fafd431a-99c9-4a97-9712-f699fc9f34ba prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 20 14:57:16 compute-0 nova_compute[250018]: 2026-01-20 14:57:16.877 250022 DEBUG oslo_concurrency.lockutils [None req-35b1e35d-c7f7-45eb-9ce6-bfea0502a41d 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Acquiring lock "26313fbe-d22b-4bbb-8216-3d6e227174e5-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:57:16 compute-0 nova_compute[250018]: 2026-01-20 14:57:16.877 250022 DEBUG oslo_concurrency.lockutils [None req-35b1e35d-c7f7-45eb-9ce6-bfea0502a41d 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Lock "26313fbe-d22b-4bbb-8216-3d6e227174e5-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:57:16 compute-0 nova_compute[250018]: 2026-01-20 14:57:16.877 250022 DEBUG oslo_concurrency.lockutils [None req-35b1e35d-c7f7-45eb-9ce6-bfea0502a41d 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Lock "26313fbe-d22b-4bbb-8216-3d6e227174e5-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:57:16 compute-0 nova_compute[250018]: 2026-01-20 14:57:16.878 250022 DEBUG nova.virt.libvirt.vif [None req-35b1e35d-c7f7-45eb-9ce6-bfea0502a41d 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:57:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1810122753',display_name='tempest-ServersTestJSON-server-1810122753',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1810122753',id=136,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ca6cd0afe0ab41e3ab36d21a4129f734',ramdisk_id='',reservation_id='r-qdvt0ckv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-405461620',owner_user_name='tempest-ServersTestJSON-405461620-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:57:10Z,user_data=None,user_id='395a5c503218411284bc94c45263d1fb',uuid=26313fbe-d22b-4bbb-8216-3d6e227174e5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fafd431a-99c9-4a97-9712-f699fc9f34ba", "address": "fa:16:3e:7a:c7:8c", "network": {"id": "f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c", "bridge": "br-int", "label": "tempest-ServersTestJSON-1745321011-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca6cd0afe0ab41e3ab36d21a4129f734", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfafd431a-99", "ovs_interfaceid": "fafd431a-99c9-4a97-9712-f699fc9f34ba", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 14:57:16 compute-0 nova_compute[250018]: 2026-01-20 14:57:16.879 250022 DEBUG nova.network.os_vif_util [None req-35b1e35d-c7f7-45eb-9ce6-bfea0502a41d 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Converting VIF {"id": "fafd431a-99c9-4a97-9712-f699fc9f34ba", "address": "fa:16:3e:7a:c7:8c", "network": {"id": "f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c", "bridge": "br-int", "label": "tempest-ServersTestJSON-1745321011-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca6cd0afe0ab41e3ab36d21a4129f734", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfafd431a-99", "ovs_interfaceid": "fafd431a-99c9-4a97-9712-f699fc9f34ba", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:57:16 compute-0 nova_compute[250018]: 2026-01-20 14:57:16.879 250022 DEBUG nova.network.os_vif_util [None req-35b1e35d-c7f7-45eb-9ce6-bfea0502a41d 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7a:c7:8c,bridge_name='br-int',has_traffic_filtering=True,id=fafd431a-99c9-4a97-9712-f699fc9f34ba,network=Network(f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfafd431a-99') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:57:16 compute-0 nova_compute[250018]: 2026-01-20 14:57:16.879 250022 DEBUG os_vif [None req-35b1e35d-c7f7-45eb-9ce6-bfea0502a41d 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7a:c7:8c,bridge_name='br-int',has_traffic_filtering=True,id=fafd431a-99c9-4a97-9712-f699fc9f34ba,network=Network(f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfafd431a-99') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 14:57:16 compute-0 nova_compute[250018]: 2026-01-20 14:57:16.880 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:57:16 compute-0 nova_compute[250018]: 2026-01-20 14:57:16.881 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:57:16 compute-0 nova_compute[250018]: 2026-01-20 14:57:16.881 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:57:16 compute-0 nova_compute[250018]: 2026-01-20 14:57:16.884 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:57:16 compute-0 nova_compute[250018]: 2026-01-20 14:57:16.884 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfafd431a-99, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:57:16 compute-0 nova_compute[250018]: 2026-01-20 14:57:16.885 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapfafd431a-99, col_values=(('external_ids', {'iface-id': 'fafd431a-99c9-4a97-9712-f699fc9f34ba', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7a:c7:8c', 'vm-uuid': '26313fbe-d22b-4bbb-8216-3d6e227174e5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:57:16 compute-0 nova_compute[250018]: 2026-01-20 14:57:16.887 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:57:16 compute-0 NetworkManager[48960]: <info>  [1768921036.8884] manager: (tapfafd431a-99): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/246)
Jan 20 14:57:16 compute-0 nova_compute[250018]: 2026-01-20 14:57:16.890 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:57:16 compute-0 nova_compute[250018]: 2026-01-20 14:57:16.896 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:57:16 compute-0 nova_compute[250018]: 2026-01-20 14:57:16.897 250022 INFO os_vif [None req-35b1e35d-c7f7-45eb-9ce6-bfea0502a41d 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7a:c7:8c,bridge_name='br-int',has_traffic_filtering=True,id=fafd431a-99c9-4a97-9712-f699fc9f34ba,network=Network(f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfafd431a-99')
Jan 20 14:57:16 compute-0 ceph-osd[84815]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 20 14:57:16 compute-0 ceph-osd[84815]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3600.0 total, 600.0 interval
                                           Cumulative writes: 38K writes, 144K keys, 38K commit groups, 1.0 writes per commit group, ingest: 0.13 GB, 0.04 MB/s
                                           Cumulative WAL: 38K writes, 13K syncs, 2.78 writes per sync, written: 0.13 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 9375 writes, 35K keys, 9375 commit groups, 1.0 writes per commit group, ingest: 33.99 MB, 0.06 MB/s
                                           Interval WAL: 9375 writes, 3941 syncs, 2.38 writes per sync, written: 0.03 GB, 0.06 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 20 14:57:16 compute-0 nova_compute[250018]: 2026-01-20 14:57:16.954 250022 DEBUG nova.virt.libvirt.driver [None req-35b1e35d-c7f7-45eb-9ce6-bfea0502a41d 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:57:16 compute-0 nova_compute[250018]: 2026-01-20 14:57:16.955 250022 DEBUG nova.virt.libvirt.driver [None req-35b1e35d-c7f7-45eb-9ce6-bfea0502a41d 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:57:16 compute-0 nova_compute[250018]: 2026-01-20 14:57:16.955 250022 DEBUG nova.virt.libvirt.driver [None req-35b1e35d-c7f7-45eb-9ce6-bfea0502a41d 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] No VIF found with MAC fa:16:3e:7a:c7:8c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 14:57:16 compute-0 nova_compute[250018]: 2026-01-20 14:57:16.955 250022 INFO nova.virt.libvirt.driver [None req-35b1e35d-c7f7-45eb-9ce6-bfea0502a41d 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 26313fbe-d22b-4bbb-8216-3d6e227174e5] Using config drive
Jan 20 14:57:16 compute-0 nova_compute[250018]: 2026-01-20 14:57:16.978 250022 DEBUG nova.storage.rbd_utils [None req-35b1e35d-c7f7-45eb-9ce6-bfea0502a41d 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] rbd image 26313fbe-d22b-4bbb-8216-3d6e227174e5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:57:17 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:57:17 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:57:17 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:57:17.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:57:17 compute-0 ceph-mon[74360]: pgmap v2150: 321 pgs: 321 active+clean; 577 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 9.7 MiB/s rd, 12 MiB/s wr, 325 op/s
Jan 20 14:57:17 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/2422669962' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:57:17 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3750816865' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:57:17 compute-0 nova_compute[250018]: 2026-01-20 14:57:17.329 250022 DEBUG nova.network.neutron [req-489cc450-2f44-468e-b8fb-b13cb73c9682 req-f2f00fb2-7137-4407-9021-d06c2428b195 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 26313fbe-d22b-4bbb-8216-3d6e227174e5] Updated VIF entry in instance network info cache for port fafd431a-99c9-4a97-9712-f699fc9f34ba. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 14:57:17 compute-0 nova_compute[250018]: 2026-01-20 14:57:17.331 250022 DEBUG nova.network.neutron [req-489cc450-2f44-468e-b8fb-b13cb73c9682 req-f2f00fb2-7137-4407-9021-d06c2428b195 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 26313fbe-d22b-4bbb-8216-3d6e227174e5] Updating instance_info_cache with network_info: [{"id": "fafd431a-99c9-4a97-9712-f699fc9f34ba", "address": "fa:16:3e:7a:c7:8c", "network": {"id": "f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c", "bridge": "br-int", "label": "tempest-ServersTestJSON-1745321011-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca6cd0afe0ab41e3ab36d21a4129f734", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfafd431a-99", "ovs_interfaceid": "fafd431a-99c9-4a97-9712-f699fc9f34ba", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:57:17 compute-0 nova_compute[250018]: 2026-01-20 14:57:17.351 250022 DEBUG oslo_concurrency.lockutils [req-489cc450-2f44-468e-b8fb-b13cb73c9682 req-f2f00fb2-7137-4407-9021-d06c2428b195 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-26313fbe-d22b-4bbb-8216-3d6e227174e5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:57:17 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:57:17 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:57:17 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:57:17.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:57:17 compute-0 nova_compute[250018]: 2026-01-20 14:57:17.427 250022 INFO nova.virt.libvirt.driver [None req-35b1e35d-c7f7-45eb-9ce6-bfea0502a41d 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 26313fbe-d22b-4bbb-8216-3d6e227174e5] Creating config drive at /var/lib/nova/instances/26313fbe-d22b-4bbb-8216-3d6e227174e5/disk.config
Jan 20 14:57:17 compute-0 nova_compute[250018]: 2026-01-20 14:57:17.436 250022 DEBUG oslo_concurrency.processutils [None req-35b1e35d-c7f7-45eb-9ce6-bfea0502a41d 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/26313fbe-d22b-4bbb-8216-3d6e227174e5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprt1qa9r9 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:57:17 compute-0 nova_compute[250018]: 2026-01-20 14:57:17.597 250022 DEBUG oslo_concurrency.processutils [None req-35b1e35d-c7f7-45eb-9ce6-bfea0502a41d 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/26313fbe-d22b-4bbb-8216-3d6e227174e5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprt1qa9r9" returned: 0 in 0.161s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:57:17 compute-0 nova_compute[250018]: 2026-01-20 14:57:17.633 250022 DEBUG nova.storage.rbd_utils [None req-35b1e35d-c7f7-45eb-9ce6-bfea0502a41d 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] rbd image 26313fbe-d22b-4bbb-8216-3d6e227174e5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:57:17 compute-0 nova_compute[250018]: 2026-01-20 14:57:17.638 250022 DEBUG oslo_concurrency.processutils [None req-35b1e35d-c7f7-45eb-9ce6-bfea0502a41d 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/26313fbe-d22b-4bbb-8216-3d6e227174e5/disk.config 26313fbe-d22b-4bbb-8216-3d6e227174e5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:57:17 compute-0 nova_compute[250018]: 2026-01-20 14:57:17.839 250022 DEBUG oslo_concurrency.processutils [None req-35b1e35d-c7f7-45eb-9ce6-bfea0502a41d 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/26313fbe-d22b-4bbb-8216-3d6e227174e5/disk.config 26313fbe-d22b-4bbb-8216-3d6e227174e5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.201s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:57:17 compute-0 nova_compute[250018]: 2026-01-20 14:57:17.841 250022 INFO nova.virt.libvirt.driver [None req-35b1e35d-c7f7-45eb-9ce6-bfea0502a41d 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 26313fbe-d22b-4bbb-8216-3d6e227174e5] Deleting local config drive /var/lib/nova/instances/26313fbe-d22b-4bbb-8216-3d6e227174e5/disk.config because it was imported into RBD.
Jan 20 14:57:17 compute-0 kernel: tapfafd431a-99: entered promiscuous mode
Jan 20 14:57:17 compute-0 NetworkManager[48960]: <info>  [1768921037.9117] manager: (tapfafd431a-99): new Tun device (/org/freedesktop/NetworkManager/Devices/247)
Jan 20 14:57:17 compute-0 ovn_controller[148666]: 2026-01-20T14:57:17Z|00493|binding|INFO|Claiming lport fafd431a-99c9-4a97-9712-f699fc9f34ba for this chassis.
Jan 20 14:57:17 compute-0 ovn_controller[148666]: 2026-01-20T14:57:17Z|00494|binding|INFO|fafd431a-99c9-4a97-9712-f699fc9f34ba: Claiming fa:16:3e:7a:c7:8c 10.100.0.5
Jan 20 14:57:17 compute-0 nova_compute[250018]: 2026-01-20 14:57:17.912 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:57:17 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:57:17.921 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7a:c7:8c 10.100.0.5'], port_security=['fa:16:3e:7a:c7:8c 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '26313fbe-d22b-4bbb-8216-3d6e227174e5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ca6cd0afe0ab41e3ab36d21a4129f734', 'neutron:revision_number': '2', 'neutron:security_group_ids': '819ea4ae-b994-44d1-9da3-8b0ca609fb2a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ee620e3e-ef7e-4826-b394-b8a89442b353, chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=fafd431a-99c9-4a97-9712-f699fc9f34ba) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:57:17 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:57:17.923 160071 INFO neutron.agent.ovn.metadata.agent [-] Port fafd431a-99c9-4a97-9712-f699fc9f34ba in datapath f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c bound to our chassis
Jan 20 14:57:17 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:57:17.924 160071 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c
Jan 20 14:57:17 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:57:17.940 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[0b7662d5-c220-4bc5-95fb-7a8fed12d44b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:57:17 compute-0 systemd-udevd[329259]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:57:17 compute-0 systemd-machined[216401]: New machine qemu-64-instance-00000088.
Jan 20 14:57:17 compute-0 ovn_controller[148666]: 2026-01-20T14:57:17Z|00495|binding|INFO|Setting lport fafd431a-99c9-4a97-9712-f699fc9f34ba ovn-installed in OVS
Jan 20 14:57:17 compute-0 ovn_controller[148666]: 2026-01-20T14:57:17Z|00496|binding|INFO|Setting lport fafd431a-99c9-4a97-9712-f699fc9f34ba up in Southbound
Jan 20 14:57:17 compute-0 nova_compute[250018]: 2026-01-20 14:57:17.953 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:57:17 compute-0 NetworkManager[48960]: <info>  [1768921037.9579] device (tapfafd431a-99): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 14:57:17 compute-0 NetworkManager[48960]: <info>  [1768921037.9589] device (tapfafd431a-99): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 14:57:17 compute-0 systemd[1]: Started Virtual Machine qemu-64-instance-00000088.
Jan 20 14:57:17 compute-0 nova_compute[250018]: 2026-01-20 14:57:17.959 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:57:17 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:57:17.974 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[82096e8a-1c74-402c-a725-3e1f2ba4e2a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:57:17 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:57:17.978 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[85c5e6d3-cc42-44d7-a3b1-e4b36cb55999]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:57:18 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:57:18.016 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[f9636c9f-4675-4449-9284-8ee55a1df26c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:57:18 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:57:18.041 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[f0538978-da4d-4c6a-9481-4164f19e51ee]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf4c8474b-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:14:a2:5f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 616, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 616, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 151], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 687884, 'reachable_time': 19711, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 329271, 'error': None, 'target': 'ovnmeta-f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:57:18 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:57:18.067 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[ad2e75a6-5922-4e09-9b3b-2421f6360aa0]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapf4c8474b-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 687894, 'tstamp': 687894}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 329272, 'error': None, 'target': 'ovnmeta-f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapf4c8474b-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 687897, 'tstamp': 687897}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 329272, 'error': None, 'target': 'ovnmeta-f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:57:18 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:57:18.069 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf4c8474b-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:57:18 compute-0 nova_compute[250018]: 2026-01-20 14:57:18.070 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:57:18 compute-0 nova_compute[250018]: 2026-01-20 14:57:18.071 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:57:18 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:57:18.072 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf4c8474b-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:57:18 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:57:18.072 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:57:18 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:57:18.073 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf4c8474b-00, col_values=(('external_ids', {'iface-id': '8c6fd3ab-70a8-4e63-99de-f2e15ac0207f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:57:18 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:57:18.073 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:57:18 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2151: 321 pgs: 321 active+clean; 565 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 8.8 MiB/s rd, 10 MiB/s wr, 325 op/s
Jan 20 14:57:18 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e315 do_prune osdmap full prune enabled
Jan 20 14:57:18 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/4255108125' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:57:18 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e316 e316: 3 total, 3 up, 3 in
Jan 20 14:57:18 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e316: 3 total, 3 up, 3 in
Jan 20 14:57:18 compute-0 nova_compute[250018]: 2026-01-20 14:57:18.461 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:57:18 compute-0 nova_compute[250018]: 2026-01-20 14:57:18.491 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768921038.4913075, 26313fbe-d22b-4bbb-8216-3d6e227174e5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:57:18 compute-0 nova_compute[250018]: 2026-01-20 14:57:18.492 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 26313fbe-d22b-4bbb-8216-3d6e227174e5] VM Started (Lifecycle Event)
Jan 20 14:57:18 compute-0 nova_compute[250018]: 2026-01-20 14:57:18.513 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 26313fbe-d22b-4bbb-8216-3d6e227174e5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:57:18 compute-0 nova_compute[250018]: 2026-01-20 14:57:18.518 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768921038.4924219, 26313fbe-d22b-4bbb-8216-3d6e227174e5 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:57:18 compute-0 nova_compute[250018]: 2026-01-20 14:57:18.519 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 26313fbe-d22b-4bbb-8216-3d6e227174e5] VM Paused (Lifecycle Event)
Jan 20 14:57:18 compute-0 nova_compute[250018]: 2026-01-20 14:57:18.543 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 26313fbe-d22b-4bbb-8216-3d6e227174e5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:57:18 compute-0 nova_compute[250018]: 2026-01-20 14:57:18.548 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 26313fbe-d22b-4bbb-8216-3d6e227174e5] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:57:18 compute-0 nova_compute[250018]: 2026-01-20 14:57:18.571 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 26313fbe-d22b-4bbb-8216-3d6e227174e5] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:57:19 compute-0 nova_compute[250018]: 2026-01-20 14:57:19.103 250022 DEBUG nova.compute.manager [req-25cae5f3-1ce1-4d92-9a03-59b37552a091 req-454e6fef-0ab7-4459-8867-096d75d1c58d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 26313fbe-d22b-4bbb-8216-3d6e227174e5] Received event network-vif-plugged-fafd431a-99c9-4a97-9712-f699fc9f34ba external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:57:19 compute-0 nova_compute[250018]: 2026-01-20 14:57:19.105 250022 DEBUG oslo_concurrency.lockutils [req-25cae5f3-1ce1-4d92-9a03-59b37552a091 req-454e6fef-0ab7-4459-8867-096d75d1c58d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "26313fbe-d22b-4bbb-8216-3d6e227174e5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:57:19 compute-0 nova_compute[250018]: 2026-01-20 14:57:19.106 250022 DEBUG oslo_concurrency.lockutils [req-25cae5f3-1ce1-4d92-9a03-59b37552a091 req-454e6fef-0ab7-4459-8867-096d75d1c58d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "26313fbe-d22b-4bbb-8216-3d6e227174e5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:57:19 compute-0 nova_compute[250018]: 2026-01-20 14:57:19.106 250022 DEBUG oslo_concurrency.lockutils [req-25cae5f3-1ce1-4d92-9a03-59b37552a091 req-454e6fef-0ab7-4459-8867-096d75d1c58d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "26313fbe-d22b-4bbb-8216-3d6e227174e5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:57:19 compute-0 nova_compute[250018]: 2026-01-20 14:57:19.107 250022 DEBUG nova.compute.manager [req-25cae5f3-1ce1-4d92-9a03-59b37552a091 req-454e6fef-0ab7-4459-8867-096d75d1c58d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 26313fbe-d22b-4bbb-8216-3d6e227174e5] Processing event network-vif-plugged-fafd431a-99c9-4a97-9712-f699fc9f34ba _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 20 14:57:19 compute-0 nova_compute[250018]: 2026-01-20 14:57:19.109 250022 DEBUG nova.compute.manager [None req-35b1e35d-c7f7-45eb-9ce6-bfea0502a41d 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 26313fbe-d22b-4bbb-8216-3d6e227174e5] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 14:57:19 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:57:19 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:57:19 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:57:19.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:57:19 compute-0 nova_compute[250018]: 2026-01-20 14:57:19.113 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768921039.1132054, 26313fbe-d22b-4bbb-8216-3d6e227174e5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:57:19 compute-0 nova_compute[250018]: 2026-01-20 14:57:19.113 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 26313fbe-d22b-4bbb-8216-3d6e227174e5] VM Resumed (Lifecycle Event)
Jan 20 14:57:19 compute-0 nova_compute[250018]: 2026-01-20 14:57:19.116 250022 DEBUG nova.virt.libvirt.driver [None req-35b1e35d-c7f7-45eb-9ce6-bfea0502a41d 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 26313fbe-d22b-4bbb-8216-3d6e227174e5] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 14:57:19 compute-0 nova_compute[250018]: 2026-01-20 14:57:19.119 250022 INFO nova.virt.libvirt.driver [-] [instance: 26313fbe-d22b-4bbb-8216-3d6e227174e5] Instance spawned successfully.
Jan 20 14:57:19 compute-0 nova_compute[250018]: 2026-01-20 14:57:19.120 250022 DEBUG nova.virt.libvirt.driver [None req-35b1e35d-c7f7-45eb-9ce6-bfea0502a41d 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 26313fbe-d22b-4bbb-8216-3d6e227174e5] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 14:57:19 compute-0 nova_compute[250018]: 2026-01-20 14:57:19.150 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 26313fbe-d22b-4bbb-8216-3d6e227174e5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:57:19 compute-0 nova_compute[250018]: 2026-01-20 14:57:19.157 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 26313fbe-d22b-4bbb-8216-3d6e227174e5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:57:19 compute-0 nova_compute[250018]: 2026-01-20 14:57:19.161 250022 DEBUG nova.virt.libvirt.driver [None req-35b1e35d-c7f7-45eb-9ce6-bfea0502a41d 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 26313fbe-d22b-4bbb-8216-3d6e227174e5] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:57:19 compute-0 nova_compute[250018]: 2026-01-20 14:57:19.161 250022 DEBUG nova.virt.libvirt.driver [None req-35b1e35d-c7f7-45eb-9ce6-bfea0502a41d 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 26313fbe-d22b-4bbb-8216-3d6e227174e5] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:57:19 compute-0 nova_compute[250018]: 2026-01-20 14:57:19.161 250022 DEBUG nova.virt.libvirt.driver [None req-35b1e35d-c7f7-45eb-9ce6-bfea0502a41d 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 26313fbe-d22b-4bbb-8216-3d6e227174e5] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:57:19 compute-0 nova_compute[250018]: 2026-01-20 14:57:19.162 250022 DEBUG nova.virt.libvirt.driver [None req-35b1e35d-c7f7-45eb-9ce6-bfea0502a41d 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 26313fbe-d22b-4bbb-8216-3d6e227174e5] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:57:19 compute-0 nova_compute[250018]: 2026-01-20 14:57:19.162 250022 DEBUG nova.virt.libvirt.driver [None req-35b1e35d-c7f7-45eb-9ce6-bfea0502a41d 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 26313fbe-d22b-4bbb-8216-3d6e227174e5] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:57:19 compute-0 nova_compute[250018]: 2026-01-20 14:57:19.163 250022 DEBUG nova.virt.libvirt.driver [None req-35b1e35d-c7f7-45eb-9ce6-bfea0502a41d 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 26313fbe-d22b-4bbb-8216-3d6e227174e5] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:57:19 compute-0 ceph-mon[74360]: pgmap v2151: 321 pgs: 321 active+clean; 565 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 8.8 MiB/s rd, 10 MiB/s wr, 325 op/s
Jan 20 14:57:19 compute-0 ceph-mon[74360]: osdmap e316: 3 total, 3 up, 3 in
Jan 20 14:57:19 compute-0 nova_compute[250018]: 2026-01-20 14:57:19.190 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 26313fbe-d22b-4bbb-8216-3d6e227174e5] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:57:19 compute-0 nova_compute[250018]: 2026-01-20 14:57:19.258 250022 INFO nova.compute.manager [None req-35b1e35d-c7f7-45eb-9ce6-bfea0502a41d 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 26313fbe-d22b-4bbb-8216-3d6e227174e5] Took 9.15 seconds to spawn the instance on the hypervisor.
Jan 20 14:57:19 compute-0 nova_compute[250018]: 2026-01-20 14:57:19.258 250022 DEBUG nova.compute.manager [None req-35b1e35d-c7f7-45eb-9ce6-bfea0502a41d 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 26313fbe-d22b-4bbb-8216-3d6e227174e5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:57:19 compute-0 nova_compute[250018]: 2026-01-20 14:57:19.345 250022 INFO nova.compute.manager [None req-35b1e35d-c7f7-45eb-9ce6-bfea0502a41d 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 26313fbe-d22b-4bbb-8216-3d6e227174e5] Took 10.06 seconds to build instance.
Jan 20 14:57:19 compute-0 nova_compute[250018]: 2026-01-20 14:57:19.364 250022 DEBUG oslo_concurrency.lockutils [None req-35b1e35d-c7f7-45eb-9ce6-bfea0502a41d 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Lock "26313fbe-d22b-4bbb-8216-3d6e227174e5" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.181s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:57:19 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:57:19 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:57:19 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:57:19.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:57:19 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:57:19 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e316 do_prune osdmap full prune enabled
Jan 20 14:57:19 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e317 e317: 3 total, 3 up, 3 in
Jan 20 14:57:19 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e317: 3 total, 3 up, 3 in
Jan 20 14:57:20 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2154: 321 pgs: 321 active+clean; 542 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.3 MiB/s rd, 9.1 MiB/s wr, 339 op/s
Jan 20 14:57:20 compute-0 ceph-mon[74360]: osdmap e317: 3 total, 3 up, 3 in
Jan 20 14:57:20 compute-0 ceph-mon[74360]: pgmap v2154: 321 pgs: 321 active+clean; 542 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.3 MiB/s rd, 9.1 MiB/s wr, 339 op/s
Jan 20 14:57:21 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:57:21 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:57:21 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:57:21.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:57:21 compute-0 nova_compute[250018]: 2026-01-20 14:57:21.284 250022 DEBUG nova.compute.manager [req-87b9a3a5-4e98-4201-9755-e486bd6571dc req-c541d0da-fda3-4442-a775-ba4641e234f0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 26313fbe-d22b-4bbb-8216-3d6e227174e5] Received event network-vif-plugged-fafd431a-99c9-4a97-9712-f699fc9f34ba external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:57:21 compute-0 nova_compute[250018]: 2026-01-20 14:57:21.284 250022 DEBUG oslo_concurrency.lockutils [req-87b9a3a5-4e98-4201-9755-e486bd6571dc req-c541d0da-fda3-4442-a775-ba4641e234f0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "26313fbe-d22b-4bbb-8216-3d6e227174e5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:57:21 compute-0 nova_compute[250018]: 2026-01-20 14:57:21.284 250022 DEBUG oslo_concurrency.lockutils [req-87b9a3a5-4e98-4201-9755-e486bd6571dc req-c541d0da-fda3-4442-a775-ba4641e234f0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "26313fbe-d22b-4bbb-8216-3d6e227174e5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:57:21 compute-0 nova_compute[250018]: 2026-01-20 14:57:21.284 250022 DEBUG oslo_concurrency.lockutils [req-87b9a3a5-4e98-4201-9755-e486bd6571dc req-c541d0da-fda3-4442-a775-ba4641e234f0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "26313fbe-d22b-4bbb-8216-3d6e227174e5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:57:21 compute-0 nova_compute[250018]: 2026-01-20 14:57:21.285 250022 DEBUG nova.compute.manager [req-87b9a3a5-4e98-4201-9755-e486bd6571dc req-c541d0da-fda3-4442-a775-ba4641e234f0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 26313fbe-d22b-4bbb-8216-3d6e227174e5] No waiting events found dispatching network-vif-plugged-fafd431a-99c9-4a97-9712-f699fc9f34ba pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:57:21 compute-0 nova_compute[250018]: 2026-01-20 14:57:21.285 250022 WARNING nova.compute.manager [req-87b9a3a5-4e98-4201-9755-e486bd6571dc req-c541d0da-fda3-4442-a775-ba4641e234f0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 26313fbe-d22b-4bbb-8216-3d6e227174e5] Received unexpected event network-vif-plugged-fafd431a-99c9-4a97-9712-f699fc9f34ba for instance with vm_state active and task_state None.
Jan 20 14:57:21 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:57:21 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:57:21 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:57:21.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:57:21 compute-0 nova_compute[250018]: 2026-01-20 14:57:21.888 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:57:22 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2155: 321 pgs: 321 active+clean; 502 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.2 MiB/s wr, 159 op/s
Jan 20 14:57:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:57:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:57:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:57:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:57:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:57:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:57:22 compute-0 nova_compute[250018]: 2026-01-20 14:57:22.650 250022 DEBUG oslo_concurrency.lockutils [None req-0272e883-6e83-49a8-a5f9-12702ba9b5bc 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Acquiring lock "26313fbe-d22b-4bbb-8216-3d6e227174e5" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:57:22 compute-0 nova_compute[250018]: 2026-01-20 14:57:22.650 250022 DEBUG oslo_concurrency.lockutils [None req-0272e883-6e83-49a8-a5f9-12702ba9b5bc 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Lock "26313fbe-d22b-4bbb-8216-3d6e227174e5" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:57:22 compute-0 nova_compute[250018]: 2026-01-20 14:57:22.650 250022 DEBUG oslo_concurrency.lockutils [None req-0272e883-6e83-49a8-a5f9-12702ba9b5bc 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Acquiring lock "26313fbe-d22b-4bbb-8216-3d6e227174e5-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:57:22 compute-0 nova_compute[250018]: 2026-01-20 14:57:22.650 250022 DEBUG oslo_concurrency.lockutils [None req-0272e883-6e83-49a8-a5f9-12702ba9b5bc 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Lock "26313fbe-d22b-4bbb-8216-3d6e227174e5-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:57:22 compute-0 nova_compute[250018]: 2026-01-20 14:57:22.651 250022 DEBUG oslo_concurrency.lockutils [None req-0272e883-6e83-49a8-a5f9-12702ba9b5bc 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Lock "26313fbe-d22b-4bbb-8216-3d6e227174e5-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:57:22 compute-0 nova_compute[250018]: 2026-01-20 14:57:22.652 250022 INFO nova.compute.manager [None req-0272e883-6e83-49a8-a5f9-12702ba9b5bc 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 26313fbe-d22b-4bbb-8216-3d6e227174e5] Terminating instance
Jan 20 14:57:22 compute-0 nova_compute[250018]: 2026-01-20 14:57:22.653 250022 DEBUG nova.compute.manager [None req-0272e883-6e83-49a8-a5f9-12702ba9b5bc 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 26313fbe-d22b-4bbb-8216-3d6e227174e5] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 14:57:22 compute-0 kernel: tapfafd431a-99 (unregistering): left promiscuous mode
Jan 20 14:57:22 compute-0 NetworkManager[48960]: <info>  [1768921042.7087] device (tapfafd431a-99): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 14:57:22 compute-0 ovn_controller[148666]: 2026-01-20T14:57:22Z|00497|binding|INFO|Releasing lport fafd431a-99c9-4a97-9712-f699fc9f34ba from this chassis (sb_readonly=0)
Jan 20 14:57:22 compute-0 ovn_controller[148666]: 2026-01-20T14:57:22Z|00498|binding|INFO|Setting lport fafd431a-99c9-4a97-9712-f699fc9f34ba down in Southbound
Jan 20 14:57:22 compute-0 nova_compute[250018]: 2026-01-20 14:57:22.773 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:57:22 compute-0 ovn_controller[148666]: 2026-01-20T14:57:22Z|00499|binding|INFO|Removing iface tapfafd431a-99 ovn-installed in OVS
Jan 20 14:57:22 compute-0 nova_compute[250018]: 2026-01-20 14:57:22.775 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:57:22 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:57:22.779 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7a:c7:8c 10.100.0.5'], port_security=['fa:16:3e:7a:c7:8c 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '26313fbe-d22b-4bbb-8216-3d6e227174e5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ca6cd0afe0ab41e3ab36d21a4129f734', 'neutron:revision_number': '4', 'neutron:security_group_ids': '819ea4ae-b994-44d1-9da3-8b0ca609fb2a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ee620e3e-ef7e-4826-b394-b8a89442b353, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=fafd431a-99c9-4a97-9712-f699fc9f34ba) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:57:22 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:57:22.780 160071 INFO neutron.agent.ovn.metadata.agent [-] Port fafd431a-99c9-4a97-9712-f699fc9f34ba in datapath f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c unbound from our chassis
Jan 20 14:57:22 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:57:22.781 160071 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c
Jan 20 14:57:22 compute-0 nova_compute[250018]: 2026-01-20 14:57:22.796 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:57:22 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:57:22.800 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[1353a0b5-28ea-46f6-8548-acb51d74441d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:57:22 compute-0 systemd[1]: machine-qemu\x2d64\x2dinstance\x2d00000088.scope: Deactivated successfully.
Jan 20 14:57:22 compute-0 systemd[1]: machine-qemu\x2d64\x2dinstance\x2d00000088.scope: Consumed 3.986s CPU time.
Jan 20 14:57:22 compute-0 systemd-machined[216401]: Machine qemu-64-instance-00000088 terminated.
Jan 20 14:57:22 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:57:22.838 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[c7f28677-03b8-4c59-b83e-18756b1bf024]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:57:22 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:57:22.841 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[9b161a7e-ebbb-4432-bd6f-36b27ca6a15d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:57:22 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:57:22.873 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[5396ac47-f271-4425-9551-9f3655f0170f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:57:22 compute-0 nova_compute[250018]: 2026-01-20 14:57:22.892 250022 INFO nova.virt.libvirt.driver [-] [instance: 26313fbe-d22b-4bbb-8216-3d6e227174e5] Instance destroyed successfully.
Jan 20 14:57:22 compute-0 nova_compute[250018]: 2026-01-20 14:57:22.893 250022 DEBUG nova.objects.instance [None req-0272e883-6e83-49a8-a5f9-12702ba9b5bc 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Lazy-loading 'resources' on Instance uuid 26313fbe-d22b-4bbb-8216-3d6e227174e5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:57:22 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:57:22.896 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[5f5b9e85-06f0-4e0f-8567-e717bafc3cfe]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf4c8474b-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:14:a2:5f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 11, 'rx_bytes': 616, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 11, 'rx_bytes': 616, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 151], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 687884, 'reachable_time': 19711, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 329336, 'error': None, 'target': 'ovnmeta-f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:57:22 compute-0 nova_compute[250018]: 2026-01-20 14:57:22.915 250022 DEBUG nova.virt.libvirt.vif [None req-0272e883-6e83-49a8-a5f9-12702ba9b5bc 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:202:202,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:57:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1810122753',display_name='tempest-ServersTestJSON-server-1810122753',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1810122753',id=136,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:57:19Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ca6cd0afe0ab41e3ab36d21a4129f734',ramdisk_id='',reservation_id='r-qdvt0ckv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-405461620',owner_user_name='tempest-ServersTestJSON-405461620-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T14:57:20Z,user_data=None,user_id='395a5c503218411284bc94c45263d1fb',uuid=26313fbe-d22b-4bbb-8216-3d6e227174e5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fafd431a-99c9-4a97-9712-f699fc9f34ba", "address": "fa:16:3e:7a:c7:8c", "network": {"id": "f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c", "bridge": "br-int", "label": "tempest-ServersTestJSON-1745321011-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca6cd0afe0ab41e3ab36d21a4129f734", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfafd431a-99", "ovs_interfaceid": "fafd431a-99c9-4a97-9712-f699fc9f34ba", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 14:57:22 compute-0 nova_compute[250018]: 2026-01-20 14:57:22.915 250022 DEBUG nova.network.os_vif_util [None req-0272e883-6e83-49a8-a5f9-12702ba9b5bc 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Converting VIF {"id": "fafd431a-99c9-4a97-9712-f699fc9f34ba", "address": "fa:16:3e:7a:c7:8c", "network": {"id": "f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c", "bridge": "br-int", "label": "tempest-ServersTestJSON-1745321011-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca6cd0afe0ab41e3ab36d21a4129f734", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfafd431a-99", "ovs_interfaceid": "fafd431a-99c9-4a97-9712-f699fc9f34ba", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:57:22 compute-0 nova_compute[250018]: 2026-01-20 14:57:22.916 250022 DEBUG nova.network.os_vif_util [None req-0272e883-6e83-49a8-a5f9-12702ba9b5bc 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7a:c7:8c,bridge_name='br-int',has_traffic_filtering=True,id=fafd431a-99c9-4a97-9712-f699fc9f34ba,network=Network(f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfafd431a-99') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:57:22 compute-0 nova_compute[250018]: 2026-01-20 14:57:22.917 250022 DEBUG os_vif [None req-0272e883-6e83-49a8-a5f9-12702ba9b5bc 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7a:c7:8c,bridge_name='br-int',has_traffic_filtering=True,id=fafd431a-99c9-4a97-9712-f699fc9f34ba,network=Network(f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfafd431a-99') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 14:57:22 compute-0 nova_compute[250018]: 2026-01-20 14:57:22.918 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:57:22 compute-0 nova_compute[250018]: 2026-01-20 14:57:22.918 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfafd431a-99, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:57:22 compute-0 nova_compute[250018]: 2026-01-20 14:57:22.920 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:57:22 compute-0 nova_compute[250018]: 2026-01-20 14:57:22.921 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:57:22 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:57:22.922 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[d2108fa2-8a5d-4ae7-a4fc-913912ace0ea]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapf4c8474b-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 687894, 'tstamp': 687894}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 329339, 'error': None, 'target': 'ovnmeta-f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapf4c8474b-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 687897, 'tstamp': 687897}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 329339, 'error': None, 'target': 'ovnmeta-f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:57:22 compute-0 nova_compute[250018]: 2026-01-20 14:57:22.922 250022 INFO os_vif [None req-0272e883-6e83-49a8-a5f9-12702ba9b5bc 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7a:c7:8c,bridge_name='br-int',has_traffic_filtering=True,id=fafd431a-99c9-4a97-9712-f699fc9f34ba,network=Network(f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfafd431a-99')
Jan 20 14:57:22 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:57:22.924 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf4c8474b-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:57:22 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:57:22.927 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf4c8474b-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:57:22 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:57:22.927 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:57:22 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:57:22.927 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf4c8474b-00, col_values=(('external_ids', {'iface-id': '8c6fd3ab-70a8-4e63-99de-f2e15ac0207f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:57:22 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:57:22.928 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:57:22 compute-0 nova_compute[250018]: 2026-01-20 14:57:22.939 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:57:23 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:57:23 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:57:23 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:57:23.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:57:23 compute-0 ceph-mon[74360]: pgmap v2155: 321 pgs: 321 active+clean; 502 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.2 MiB/s wr, 159 op/s
Jan 20 14:57:23 compute-0 nova_compute[250018]: 2026-01-20 14:57:23.299 250022 INFO nova.virt.libvirt.driver [None req-0272e883-6e83-49a8-a5f9-12702ba9b5bc 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 26313fbe-d22b-4bbb-8216-3d6e227174e5] Deleting instance files /var/lib/nova/instances/26313fbe-d22b-4bbb-8216-3d6e227174e5_del
Jan 20 14:57:23 compute-0 nova_compute[250018]: 2026-01-20 14:57:23.300 250022 INFO nova.virt.libvirt.driver [None req-0272e883-6e83-49a8-a5f9-12702ba9b5bc 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 26313fbe-d22b-4bbb-8216-3d6e227174e5] Deletion of /var/lib/nova/instances/26313fbe-d22b-4bbb-8216-3d6e227174e5_del complete
Jan 20 14:57:23 compute-0 nova_compute[250018]: 2026-01-20 14:57:23.386 250022 INFO nova.compute.manager [None req-0272e883-6e83-49a8-a5f9-12702ba9b5bc 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 26313fbe-d22b-4bbb-8216-3d6e227174e5] Took 0.73 seconds to destroy the instance on the hypervisor.
Jan 20 14:57:23 compute-0 nova_compute[250018]: 2026-01-20 14:57:23.387 250022 DEBUG oslo.service.loopingcall [None req-0272e883-6e83-49a8-a5f9-12702ba9b5bc 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 14:57:23 compute-0 nova_compute[250018]: 2026-01-20 14:57:23.388 250022 DEBUG nova.compute.manager [-] [instance: 26313fbe-d22b-4bbb-8216-3d6e227174e5] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 14:57:23 compute-0 nova_compute[250018]: 2026-01-20 14:57:23.388 250022 DEBUG nova.network.neutron [-] [instance: 26313fbe-d22b-4bbb-8216-3d6e227174e5] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 14:57:23 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:57:23 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:57:23 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:57:23.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:57:23 compute-0 nova_compute[250018]: 2026-01-20 14:57:23.391 250022 DEBUG nova.compute.manager [req-3f1a0e8b-1b13-4b06-9a17-cf41620c4261 req-72ec863c-1502-4336-a9d4-2938201cd3bf 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 26313fbe-d22b-4bbb-8216-3d6e227174e5] Received event network-vif-unplugged-fafd431a-99c9-4a97-9712-f699fc9f34ba external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:57:23 compute-0 nova_compute[250018]: 2026-01-20 14:57:23.391 250022 DEBUG oslo_concurrency.lockutils [req-3f1a0e8b-1b13-4b06-9a17-cf41620c4261 req-72ec863c-1502-4336-a9d4-2938201cd3bf 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "26313fbe-d22b-4bbb-8216-3d6e227174e5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:57:23 compute-0 nova_compute[250018]: 2026-01-20 14:57:23.391 250022 DEBUG oslo_concurrency.lockutils [req-3f1a0e8b-1b13-4b06-9a17-cf41620c4261 req-72ec863c-1502-4336-a9d4-2938201cd3bf 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "26313fbe-d22b-4bbb-8216-3d6e227174e5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:57:23 compute-0 nova_compute[250018]: 2026-01-20 14:57:23.392 250022 DEBUG oslo_concurrency.lockutils [req-3f1a0e8b-1b13-4b06-9a17-cf41620c4261 req-72ec863c-1502-4336-a9d4-2938201cd3bf 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "26313fbe-d22b-4bbb-8216-3d6e227174e5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:57:23 compute-0 nova_compute[250018]: 2026-01-20 14:57:23.392 250022 DEBUG nova.compute.manager [req-3f1a0e8b-1b13-4b06-9a17-cf41620c4261 req-72ec863c-1502-4336-a9d4-2938201cd3bf 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 26313fbe-d22b-4bbb-8216-3d6e227174e5] No waiting events found dispatching network-vif-unplugged-fafd431a-99c9-4a97-9712-f699fc9f34ba pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:57:23 compute-0 nova_compute[250018]: 2026-01-20 14:57:23.392 250022 DEBUG nova.compute.manager [req-3f1a0e8b-1b13-4b06-9a17-cf41620c4261 req-72ec863c-1502-4336-a9d4-2938201cd3bf 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 26313fbe-d22b-4bbb-8216-3d6e227174e5] Received event network-vif-unplugged-fafd431a-99c9-4a97-9712-f699fc9f34ba for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 20 14:57:23 compute-0 nova_compute[250018]: 2026-01-20 14:57:23.463 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:57:23 compute-0 nova_compute[250018]: 2026-01-20 14:57:23.990 250022 DEBUG nova.network.neutron [-] [instance: 26313fbe-d22b-4bbb-8216-3d6e227174e5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:57:24 compute-0 nova_compute[250018]: 2026-01-20 14:57:24.010 250022 INFO nova.compute.manager [-] [instance: 26313fbe-d22b-4bbb-8216-3d6e227174e5] Took 0.62 seconds to deallocate network for instance.
Jan 20 14:57:24 compute-0 nova_compute[250018]: 2026-01-20 14:57:24.059 250022 DEBUG oslo_concurrency.lockutils [None req-0272e883-6e83-49a8-a5f9-12702ba9b5bc 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:57:24 compute-0 nova_compute[250018]: 2026-01-20 14:57:24.059 250022 DEBUG oslo_concurrency.lockutils [None req-0272e883-6e83-49a8-a5f9-12702ba9b5bc 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:57:24 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2156: 321 pgs: 321 active+clean; 510 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 2.6 MiB/s wr, 210 op/s
Jan 20 14:57:24 compute-0 nova_compute[250018]: 2026-01-20 14:57:24.129 250022 DEBUG nova.compute.manager [req-9d9eea11-8aa5-4b6a-85f6-fa1579f15474 req-a32c533c-f59e-4aea-82c8-093064ec4f1f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 26313fbe-d22b-4bbb-8216-3d6e227174e5] Received event network-vif-deleted-fafd431a-99c9-4a97-9712-f699fc9f34ba external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:57:24 compute-0 nova_compute[250018]: 2026-01-20 14:57:24.139 250022 DEBUG oslo_concurrency.processutils [None req-0272e883-6e83-49a8-a5f9-12702ba9b5bc 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:57:24 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/4028243345' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:57:24 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:57:24 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1658765285' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:57:24 compute-0 nova_compute[250018]: 2026-01-20 14:57:24.618 250022 DEBUG oslo_concurrency.processutils [None req-0272e883-6e83-49a8-a5f9-12702ba9b5bc 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:57:24 compute-0 nova_compute[250018]: 2026-01-20 14:57:24.624 250022 DEBUG nova.compute.provider_tree [None req-0272e883-6e83-49a8-a5f9-12702ba9b5bc 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:57:24 compute-0 nova_compute[250018]: 2026-01-20 14:57:24.641 250022 DEBUG nova.scheduler.client.report [None req-0272e883-6e83-49a8-a5f9-12702ba9b5bc 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:57:24 compute-0 nova_compute[250018]: 2026-01-20 14:57:24.661 250022 DEBUG oslo_concurrency.lockutils [None req-0272e883-6e83-49a8-a5f9-12702ba9b5bc 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.602s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:57:24 compute-0 nova_compute[250018]: 2026-01-20 14:57:24.695 250022 INFO nova.scheduler.client.report [None req-0272e883-6e83-49a8-a5f9-12702ba9b5bc 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Deleted allocations for instance 26313fbe-d22b-4bbb-8216-3d6e227174e5
Jan 20 14:57:24 compute-0 nova_compute[250018]: 2026-01-20 14:57:24.792 250022 DEBUG oslo_concurrency.lockutils [None req-0272e883-6e83-49a8-a5f9-12702ba9b5bc 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Lock "26313fbe-d22b-4bbb-8216-3d6e227174e5" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.142s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:57:24 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e317 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:57:25 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:57:25 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:57:25 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:57:25.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:57:25 compute-0 ceph-mon[74360]: pgmap v2156: 321 pgs: 321 active+clean; 510 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 2.6 MiB/s wr, 210 op/s
Jan 20 14:57:25 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/1658765285' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:57:25 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:57:25 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:57:25 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:57:25.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:57:25 compute-0 nova_compute[250018]: 2026-01-20 14:57:25.489 250022 DEBUG nova.compute.manager [req-f4718a15-cde7-4e38-ae03-2cfac926ba5b req-a94a2c68-9914-47fb-89ae-617fc949dc96 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 26313fbe-d22b-4bbb-8216-3d6e227174e5] Received event network-vif-plugged-fafd431a-99c9-4a97-9712-f699fc9f34ba external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:57:25 compute-0 nova_compute[250018]: 2026-01-20 14:57:25.489 250022 DEBUG oslo_concurrency.lockutils [req-f4718a15-cde7-4e38-ae03-2cfac926ba5b req-a94a2c68-9914-47fb-89ae-617fc949dc96 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "26313fbe-d22b-4bbb-8216-3d6e227174e5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:57:25 compute-0 nova_compute[250018]: 2026-01-20 14:57:25.490 250022 DEBUG oslo_concurrency.lockutils [req-f4718a15-cde7-4e38-ae03-2cfac926ba5b req-a94a2c68-9914-47fb-89ae-617fc949dc96 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "26313fbe-d22b-4bbb-8216-3d6e227174e5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:57:25 compute-0 nova_compute[250018]: 2026-01-20 14:57:25.490 250022 DEBUG oslo_concurrency.lockutils [req-f4718a15-cde7-4e38-ae03-2cfac926ba5b req-a94a2c68-9914-47fb-89ae-617fc949dc96 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "26313fbe-d22b-4bbb-8216-3d6e227174e5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:57:25 compute-0 nova_compute[250018]: 2026-01-20 14:57:25.490 250022 DEBUG nova.compute.manager [req-f4718a15-cde7-4e38-ae03-2cfac926ba5b req-a94a2c68-9914-47fb-89ae-617fc949dc96 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 26313fbe-d22b-4bbb-8216-3d6e227174e5] No waiting events found dispatching network-vif-plugged-fafd431a-99c9-4a97-9712-f699fc9f34ba pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:57:25 compute-0 nova_compute[250018]: 2026-01-20 14:57:25.491 250022 WARNING nova.compute.manager [req-f4718a15-cde7-4e38-ae03-2cfac926ba5b req-a94a2c68-9914-47fb-89ae-617fc949dc96 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 26313fbe-d22b-4bbb-8216-3d6e227174e5] Received unexpected event network-vif-plugged-fafd431a-99c9-4a97-9712-f699fc9f34ba for instance with vm_state deleted and task_state None.
Jan 20 14:57:26 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2157: 321 pgs: 321 active+clean; 484 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 2.6 MiB/s wr, 249 op/s
Jan 20 14:57:27 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:57:27 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:57:27 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:57:27.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:57:27 compute-0 ceph-mon[74360]: pgmap v2157: 321 pgs: 321 active+clean; 484 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 2.6 MiB/s wr, 249 op/s
Jan 20 14:57:27 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:57:27 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:57:27 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:57:27.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:57:27 compute-0 nova_compute[250018]: 2026-01-20 14:57:27.921 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:57:28 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2158: 321 pgs: 321 active+clean; 472 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 8.3 MiB/s rd, 2.1 MiB/s wr, 239 op/s
Jan 20 14:57:28 compute-0 nova_compute[250018]: 2026-01-20 14:57:28.466 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:57:29 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:57:29 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:57:29 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:57:29.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:57:29 compute-0 ceph-mon[74360]: pgmap v2158: 321 pgs: 321 active+clean; 472 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 8.3 MiB/s rd, 2.1 MiB/s wr, 239 op/s
Jan 20 14:57:29 compute-0 nova_compute[250018]: 2026-01-20 14:57:29.235 250022 DEBUG oslo_concurrency.lockutils [None req-c9ec874c-6d83-4258-aef6-5ddb55260471 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Acquiring lock "64a2f5d0-8a64-4e88-87f1-8cd406f83cda" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:57:29 compute-0 nova_compute[250018]: 2026-01-20 14:57:29.235 250022 DEBUG oslo_concurrency.lockutils [None req-c9ec874c-6d83-4258-aef6-5ddb55260471 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Lock "64a2f5d0-8a64-4e88-87f1-8cd406f83cda" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:57:29 compute-0 nova_compute[250018]: 2026-01-20 14:57:29.270 250022 DEBUG nova.compute.manager [None req-c9ec874c-6d83-4258-aef6-5ddb55260471 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 64a2f5d0-8a64-4e88-87f1-8cd406f83cda] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 14:57:29 compute-0 nova_compute[250018]: 2026-01-20 14:57:29.375 250022 DEBUG oslo_concurrency.lockutils [None req-c9ec874c-6d83-4258-aef6-5ddb55260471 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:57:29 compute-0 nova_compute[250018]: 2026-01-20 14:57:29.376 250022 DEBUG oslo_concurrency.lockutils [None req-c9ec874c-6d83-4258-aef6-5ddb55260471 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:57:29 compute-0 nova_compute[250018]: 2026-01-20 14:57:29.383 250022 DEBUG nova.virt.hardware [None req-c9ec874c-6d83-4258-aef6-5ddb55260471 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 14:57:29 compute-0 nova_compute[250018]: 2026-01-20 14:57:29.384 250022 INFO nova.compute.claims [None req-c9ec874c-6d83-4258-aef6-5ddb55260471 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 64a2f5d0-8a64-4e88-87f1-8cd406f83cda] Claim successful on node compute-0.ctlplane.example.com
Jan 20 14:57:29 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:57:29 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:57:29 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:57:29.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:57:29 compute-0 nova_compute[250018]: 2026-01-20 14:57:29.512 250022 DEBUG oslo_concurrency.processutils [None req-c9ec874c-6d83-4258-aef6-5ddb55260471 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:57:29 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:57:29 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/535457568' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:57:29 compute-0 nova_compute[250018]: 2026-01-20 14:57:29.945 250022 DEBUG oslo_concurrency.processutils [None req-c9ec874c-6d83-4258-aef6-5ddb55260471 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:57:29 compute-0 nova_compute[250018]: 2026-01-20 14:57:29.954 250022 DEBUG nova.compute.provider_tree [None req-c9ec874c-6d83-4258-aef6-5ddb55260471 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:57:29 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e317 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:57:29 compute-0 nova_compute[250018]: 2026-01-20 14:57:29.982 250022 DEBUG nova.scheduler.client.report [None req-c9ec874c-6d83-4258-aef6-5ddb55260471 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:57:30 compute-0 nova_compute[250018]: 2026-01-20 14:57:30.008 250022 DEBUG oslo_concurrency.lockutils [None req-c9ec874c-6d83-4258-aef6-5ddb55260471 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.633s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:57:30 compute-0 nova_compute[250018]: 2026-01-20 14:57:30.009 250022 DEBUG nova.compute.manager [None req-c9ec874c-6d83-4258-aef6-5ddb55260471 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 64a2f5d0-8a64-4e88-87f1-8cd406f83cda] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 14:57:30 compute-0 nova_compute[250018]: 2026-01-20 14:57:30.070 250022 DEBUG nova.compute.manager [None req-c9ec874c-6d83-4258-aef6-5ddb55260471 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 64a2f5d0-8a64-4e88-87f1-8cd406f83cda] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 20 14:57:30 compute-0 nova_compute[250018]: 2026-01-20 14:57:30.071 250022 DEBUG nova.network.neutron [None req-c9ec874c-6d83-4258-aef6-5ddb55260471 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 64a2f5d0-8a64-4e88-87f1-8cd406f83cda] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 14:57:30 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2159: 321 pgs: 321 active+clean; 486 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 9.1 MiB/s rd, 2.6 MiB/s wr, 248 op/s
Jan 20 14:57:30 compute-0 nova_compute[250018]: 2026-01-20 14:57:30.104 250022 INFO nova.virt.libvirt.driver [None req-c9ec874c-6d83-4258-aef6-5ddb55260471 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 64a2f5d0-8a64-4e88-87f1-8cd406f83cda] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 14:57:30 compute-0 nova_compute[250018]: 2026-01-20 14:57:30.129 250022 DEBUG nova.compute.manager [None req-c9ec874c-6d83-4258-aef6-5ddb55260471 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 64a2f5d0-8a64-4e88-87f1-8cd406f83cda] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 14:57:30 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/2445595473' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:57:30 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/535457568' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:57:30 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/2654572084' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:57:30 compute-0 nova_compute[250018]: 2026-01-20 14:57:30.220 250022 DEBUG nova.compute.manager [None req-c9ec874c-6d83-4258-aef6-5ddb55260471 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 64a2f5d0-8a64-4e88-87f1-8cd406f83cda] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 14:57:30 compute-0 nova_compute[250018]: 2026-01-20 14:57:30.222 250022 DEBUG nova.virt.libvirt.driver [None req-c9ec874c-6d83-4258-aef6-5ddb55260471 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 64a2f5d0-8a64-4e88-87f1-8cd406f83cda] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 14:57:30 compute-0 nova_compute[250018]: 2026-01-20 14:57:30.223 250022 INFO nova.virt.libvirt.driver [None req-c9ec874c-6d83-4258-aef6-5ddb55260471 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 64a2f5d0-8a64-4e88-87f1-8cd406f83cda] Creating image(s)
Jan 20 14:57:30 compute-0 nova_compute[250018]: 2026-01-20 14:57:30.256 250022 DEBUG nova.storage.rbd_utils [None req-c9ec874c-6d83-4258-aef6-5ddb55260471 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] rbd image 64a2f5d0-8a64-4e88-87f1-8cd406f83cda_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:57:30 compute-0 nova_compute[250018]: 2026-01-20 14:57:30.292 250022 DEBUG nova.storage.rbd_utils [None req-c9ec874c-6d83-4258-aef6-5ddb55260471 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] rbd image 64a2f5d0-8a64-4e88-87f1-8cd406f83cda_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:57:30 compute-0 nova_compute[250018]: 2026-01-20 14:57:30.325 250022 DEBUG nova.storage.rbd_utils [None req-c9ec874c-6d83-4258-aef6-5ddb55260471 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] rbd image 64a2f5d0-8a64-4e88-87f1-8cd406f83cda_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:57:30 compute-0 nova_compute[250018]: 2026-01-20 14:57:30.330 250022 DEBUG oslo_concurrency.processutils [None req-c9ec874c-6d83-4258-aef6-5ddb55260471 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:57:30 compute-0 nova_compute[250018]: 2026-01-20 14:57:30.371 250022 DEBUG nova.policy [None req-c9ec874c-6d83-4258-aef6-5ddb55260471 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '395a5c503218411284bc94c45263d1fb', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ca6cd0afe0ab41e3ab36d21a4129f734', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 20 14:57:30 compute-0 nova_compute[250018]: 2026-01-20 14:57:30.429 250022 DEBUG oslo_concurrency.processutils [None req-c9ec874c-6d83-4258-aef6-5ddb55260471 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json" returned: 0 in 0.099s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:57:30 compute-0 nova_compute[250018]: 2026-01-20 14:57:30.431 250022 DEBUG oslo_concurrency.lockutils [None req-c9ec874c-6d83-4258-aef6-5ddb55260471 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Acquiring lock "82d5c1918fd7c974214c7a48c1793a7a82560462" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:57:30 compute-0 nova_compute[250018]: 2026-01-20 14:57:30.432 250022 DEBUG oslo_concurrency.lockutils [None req-c9ec874c-6d83-4258-aef6-5ddb55260471 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:57:30 compute-0 nova_compute[250018]: 2026-01-20 14:57:30.433 250022 DEBUG oslo_concurrency.lockutils [None req-c9ec874c-6d83-4258-aef6-5ddb55260471 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:57:30 compute-0 nova_compute[250018]: 2026-01-20 14:57:30.478 250022 DEBUG nova.storage.rbd_utils [None req-c9ec874c-6d83-4258-aef6-5ddb55260471 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] rbd image 64a2f5d0-8a64-4e88-87f1-8cd406f83cda_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:57:30 compute-0 nova_compute[250018]: 2026-01-20 14:57:30.482 250022 DEBUG oslo_concurrency.processutils [None req-c9ec874c-6d83-4258-aef6-5ddb55260471 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 64a2f5d0-8a64-4e88-87f1-8cd406f83cda_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:57:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:57:30.767 160071 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:57:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:57:30.768 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:57:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:57:30.768 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:57:30 compute-0 nova_compute[250018]: 2026-01-20 14:57:30.781 250022 DEBUG oslo_concurrency.processutils [None req-c9ec874c-6d83-4258-aef6-5ddb55260471 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 64a2f5d0-8a64-4e88-87f1-8cd406f83cda_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.299s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:57:30 compute-0 nova_compute[250018]: 2026-01-20 14:57:30.878 250022 DEBUG nova.storage.rbd_utils [None req-c9ec874c-6d83-4258-aef6-5ddb55260471 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] resizing rbd image 64a2f5d0-8a64-4e88-87f1-8cd406f83cda_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 20 14:57:31 compute-0 nova_compute[250018]: 2026-01-20 14:57:31.028 250022 DEBUG nova.network.neutron [None req-c9ec874c-6d83-4258-aef6-5ddb55260471 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 64a2f5d0-8a64-4e88-87f1-8cd406f83cda] Successfully created port: 2b81c6c8-fe13-44f6-85a2-1d0130b60a64 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 20 14:57:31 compute-0 nova_compute[250018]: 2026-01-20 14:57:31.040 250022 DEBUG nova.objects.instance [None req-c9ec874c-6d83-4258-aef6-5ddb55260471 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Lazy-loading 'migration_context' on Instance uuid 64a2f5d0-8a64-4e88-87f1-8cd406f83cda obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:57:31 compute-0 nova_compute[250018]: 2026-01-20 14:57:31.060 250022 DEBUG nova.virt.libvirt.driver [None req-c9ec874c-6d83-4258-aef6-5ddb55260471 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 64a2f5d0-8a64-4e88-87f1-8cd406f83cda] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 14:57:31 compute-0 nova_compute[250018]: 2026-01-20 14:57:31.060 250022 DEBUG nova.virt.libvirt.driver [None req-c9ec874c-6d83-4258-aef6-5ddb55260471 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 64a2f5d0-8a64-4e88-87f1-8cd406f83cda] Ensure instance console log exists: /var/lib/nova/instances/64a2f5d0-8a64-4e88-87f1-8cd406f83cda/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 14:57:31 compute-0 nova_compute[250018]: 2026-01-20 14:57:31.061 250022 DEBUG oslo_concurrency.lockutils [None req-c9ec874c-6d83-4258-aef6-5ddb55260471 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:57:31 compute-0 nova_compute[250018]: 2026-01-20 14:57:31.061 250022 DEBUG oslo_concurrency.lockutils [None req-c9ec874c-6d83-4258-aef6-5ddb55260471 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:57:31 compute-0 nova_compute[250018]: 2026-01-20 14:57:31.061 250022 DEBUG oslo_concurrency.lockutils [None req-c9ec874c-6d83-4258-aef6-5ddb55260471 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:57:31 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:57:31 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:57:31 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:57:31.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:57:31 compute-0 ceph-mon[74360]: pgmap v2159: 321 pgs: 321 active+clean; 486 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 9.1 MiB/s rd, 2.6 MiB/s wr, 248 op/s
Jan 20 14:57:31 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:57:31 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:57:31 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:57:31.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:57:31 compute-0 sudo[329574]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:57:31 compute-0 sudo[329574]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:57:31 compute-0 sudo[329574]: pam_unix(sudo:session): session closed for user root
Jan 20 14:57:31 compute-0 ceph-mgr[74653]: [devicehealth INFO root] Check health
Jan 20 14:57:31 compute-0 sudo[329599]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:57:31 compute-0 sudo[329599]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:57:31 compute-0 sudo[329599]: pam_unix(sudo:session): session closed for user root
Jan 20 14:57:32 compute-0 sudo[329624]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:57:32 compute-0 sudo[329624]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:57:32 compute-0 sudo[329624]: pam_unix(sudo:session): session closed for user root
Jan 20 14:57:32 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2160: 321 pgs: 321 active+clean; 582 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 10 MiB/s rd, 6.7 MiB/s wr, 329 op/s
Jan 20 14:57:32 compute-0 sudo[329649]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 14:57:32 compute-0 sudo[329649]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:57:32 compute-0 ceph-mon[74360]: pgmap v2160: 321 pgs: 321 active+clean; 582 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 10 MiB/s rd, 6.7 MiB/s wr, 329 op/s
Jan 20 14:57:32 compute-0 nova_compute[250018]: 2026-01-20 14:57:32.341 250022 DEBUG nova.network.neutron [None req-c9ec874c-6d83-4258-aef6-5ddb55260471 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 64a2f5d0-8a64-4e88-87f1-8cd406f83cda] Successfully updated port: 2b81c6c8-fe13-44f6-85a2-1d0130b60a64 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 20 14:57:32 compute-0 nova_compute[250018]: 2026-01-20 14:57:32.358 250022 DEBUG oslo_concurrency.lockutils [None req-c9ec874c-6d83-4258-aef6-5ddb55260471 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Acquiring lock "refresh_cache-64a2f5d0-8a64-4e88-87f1-8cd406f83cda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:57:32 compute-0 nova_compute[250018]: 2026-01-20 14:57:32.358 250022 DEBUG oslo_concurrency.lockutils [None req-c9ec874c-6d83-4258-aef6-5ddb55260471 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Acquired lock "refresh_cache-64a2f5d0-8a64-4e88-87f1-8cd406f83cda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:57:32 compute-0 nova_compute[250018]: 2026-01-20 14:57:32.358 250022 DEBUG nova.network.neutron [None req-c9ec874c-6d83-4258-aef6-5ddb55260471 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 64a2f5d0-8a64-4e88-87f1-8cd406f83cda] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:57:32 compute-0 nova_compute[250018]: 2026-01-20 14:57:32.444 250022 DEBUG nova.compute.manager [req-9b9bf6f6-6655-4893-bba7-a1bc6e2d0bcd req-998b54bb-07b1-4a22-9dd2-10b1915f8553 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 64a2f5d0-8a64-4e88-87f1-8cd406f83cda] Received event network-changed-2b81c6c8-fe13-44f6-85a2-1d0130b60a64 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:57:32 compute-0 nova_compute[250018]: 2026-01-20 14:57:32.444 250022 DEBUG nova.compute.manager [req-9b9bf6f6-6655-4893-bba7-a1bc6e2d0bcd req-998b54bb-07b1-4a22-9dd2-10b1915f8553 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 64a2f5d0-8a64-4e88-87f1-8cd406f83cda] Refreshing instance network info cache due to event network-changed-2b81c6c8-fe13-44f6-85a2-1d0130b60a64. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:57:32 compute-0 nova_compute[250018]: 2026-01-20 14:57:32.445 250022 DEBUG oslo_concurrency.lockutils [req-9b9bf6f6-6655-4893-bba7-a1bc6e2d0bcd req-998b54bb-07b1-4a22-9dd2-10b1915f8553 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-64a2f5d0-8a64-4e88-87f1-8cd406f83cda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:57:32 compute-0 nova_compute[250018]: 2026-01-20 14:57:32.570 250022 DEBUG nova.network.neutron [None req-c9ec874c-6d83-4258-aef6-5ddb55260471 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 64a2f5d0-8a64-4e88-87f1-8cd406f83cda] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 14:57:32 compute-0 sudo[329649]: pam_unix(sudo:session): session closed for user root
Jan 20 14:57:32 compute-0 sudo[329705]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:57:32 compute-0 sudo[329705]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:57:32 compute-0 sudo[329705]: pam_unix(sudo:session): session closed for user root
Jan 20 14:57:32 compute-0 sudo[329730]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:57:32 compute-0 sudo[329730]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:57:32 compute-0 sudo[329730]: pam_unix(sudo:session): session closed for user root
Jan 20 14:57:32 compute-0 sudo[329755]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:57:32 compute-0 sudo[329755]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:57:32 compute-0 sudo[329755]: pam_unix(sudo:session): session closed for user root
Jan 20 14:57:32 compute-0 nova_compute[250018]: 2026-01-20 14:57:32.922 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:57:32 compute-0 sudo[329780]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 list-networks
Jan 20 14:57:32 compute-0 sudo[329780]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:57:33 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 20 14:57:33 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:57:33 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:57:33 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:57:33.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:57:33 compute-0 sudo[329780]: pam_unix(sudo:session): session closed for user root
Jan 20 14:57:33 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 14:57:33 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:57:33 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 20 14:57:33 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:57:33 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 14:57:33 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:57:33 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e317 do_prune osdmap full prune enabled
Jan 20 14:57:33 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:57:33 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 14:57:33 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:57:33 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 20 14:57:33 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:57:33 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 20 14:57:33 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e318 e318: 3 total, 3 up, 3 in
Jan 20 14:57:33 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/1816590284' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:57:33 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:57:33 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:57:33 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:57:33 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e318: 3 total, 3 up, 3 in
Jan 20 14:57:33 compute-0 sudo[329823]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:57:33 compute-0 sudo[329823]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:57:33 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:57:33 compute-0 sudo[329823]: pam_unix(sudo:session): session closed for user root
Jan 20 14:57:33 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev dd51bfe4-b38a-4337-93d8-b2000f6cc6c3 does not exist
Jan 20 14:57:33 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 32e853cc-9c36-4a14-84e0-15a3dc5401d7 does not exist
Jan 20 14:57:33 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev f59c1086-a35a-40fc-8677-6cb987565494 does not exist
Jan 20 14:57:33 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 20 14:57:33 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:57:33 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 20 14:57:33 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:57:33 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 14:57:33 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:57:33 compute-0 sudo[329848]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:57:33 compute-0 sudo[329848]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:57:33 compute-0 sudo[329848]: pam_unix(sudo:session): session closed for user root
Jan 20 14:57:33 compute-0 sudo[329849]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:57:33 compute-0 sudo[329849]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:57:33 compute-0 sudo[329849]: pam_unix(sudo:session): session closed for user root
Jan 20 14:57:33 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:57:33 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:57:33 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:57:33.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:57:33 compute-0 sudo[329897]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:57:33 compute-0 sudo[329897]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:57:33 compute-0 sudo[329897]: pam_unix(sudo:session): session closed for user root
Jan 20 14:57:33 compute-0 nova_compute[250018]: 2026-01-20 14:57:33.469 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:57:33 compute-0 sudo[329923]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:57:33 compute-0 sudo[329923]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:57:33 compute-0 sudo[329923]: pam_unix(sudo:session): session closed for user root
Jan 20 14:57:33 compute-0 sudo[329948]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 14:57:33 compute-0 sudo[329948]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:57:33 compute-0 nova_compute[250018]: 2026-01-20 14:57:33.878 250022 DEBUG nova.network.neutron [None req-c9ec874c-6d83-4258-aef6-5ddb55260471 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 64a2f5d0-8a64-4e88-87f1-8cd406f83cda] Updating instance_info_cache with network_info: [{"id": "2b81c6c8-fe13-44f6-85a2-1d0130b60a64", "address": "fa:16:3e:87:f5:04", "network": {"id": "f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c", "bridge": "br-int", "label": "tempest-ServersTestJSON-1745321011-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca6cd0afe0ab41e3ab36d21a4129f734", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b81c6c8-fe", "ovs_interfaceid": "2b81c6c8-fe13-44f6-85a2-1d0130b60a64", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:57:33 compute-0 nova_compute[250018]: 2026-01-20 14:57:33.895 250022 DEBUG oslo_concurrency.lockutils [None req-c9ec874c-6d83-4258-aef6-5ddb55260471 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Releasing lock "refresh_cache-64a2f5d0-8a64-4e88-87f1-8cd406f83cda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:57:33 compute-0 nova_compute[250018]: 2026-01-20 14:57:33.895 250022 DEBUG nova.compute.manager [None req-c9ec874c-6d83-4258-aef6-5ddb55260471 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 64a2f5d0-8a64-4e88-87f1-8cd406f83cda] Instance network_info: |[{"id": "2b81c6c8-fe13-44f6-85a2-1d0130b60a64", "address": "fa:16:3e:87:f5:04", "network": {"id": "f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c", "bridge": "br-int", "label": "tempest-ServersTestJSON-1745321011-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca6cd0afe0ab41e3ab36d21a4129f734", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b81c6c8-fe", "ovs_interfaceid": "2b81c6c8-fe13-44f6-85a2-1d0130b60a64", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 20 14:57:33 compute-0 nova_compute[250018]: 2026-01-20 14:57:33.896 250022 DEBUG oslo_concurrency.lockutils [req-9b9bf6f6-6655-4893-bba7-a1bc6e2d0bcd req-998b54bb-07b1-4a22-9dd2-10b1915f8553 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-64a2f5d0-8a64-4e88-87f1-8cd406f83cda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:57:33 compute-0 nova_compute[250018]: 2026-01-20 14:57:33.896 250022 DEBUG nova.network.neutron [req-9b9bf6f6-6655-4893-bba7-a1bc6e2d0bcd req-998b54bb-07b1-4a22-9dd2-10b1915f8553 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 64a2f5d0-8a64-4e88-87f1-8cd406f83cda] Refreshing network info cache for port 2b81c6c8-fe13-44f6-85a2-1d0130b60a64 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:57:33 compute-0 nova_compute[250018]: 2026-01-20 14:57:33.898 250022 DEBUG nova.virt.libvirt.driver [None req-c9ec874c-6d83-4258-aef6-5ddb55260471 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 64a2f5d0-8a64-4e88-87f1-8cd406f83cda] Start _get_guest_xml network_info=[{"id": "2b81c6c8-fe13-44f6-85a2-1d0130b60a64", "address": "fa:16:3e:87:f5:04", "network": {"id": "f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c", "bridge": "br-int", "label": "tempest-ServersTestJSON-1745321011-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca6cd0afe0ab41e3ab36d21a4129f734", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b81c6c8-fe", "ovs_interfaceid": "2b81c6c8-fe13-44f6-85a2-1d0130b60a64", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encrypted': False, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'guest_format': None, 'size': 0, 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 14:57:33 compute-0 nova_compute[250018]: 2026-01-20 14:57:33.903 250022 WARNING nova.virt.libvirt.driver [None req-c9ec874c-6d83-4258-aef6-5ddb55260471 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:57:33 compute-0 nova_compute[250018]: 2026-01-20 14:57:33.908 250022 DEBUG nova.virt.libvirt.host [None req-c9ec874c-6d83-4258-aef6-5ddb55260471 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 14:57:33 compute-0 nova_compute[250018]: 2026-01-20 14:57:33.908 250022 DEBUG nova.virt.libvirt.host [None req-c9ec874c-6d83-4258-aef6-5ddb55260471 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 14:57:33 compute-0 nova_compute[250018]: 2026-01-20 14:57:33.911 250022 DEBUG nova.virt.libvirt.host [None req-c9ec874c-6d83-4258-aef6-5ddb55260471 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 14:57:33 compute-0 nova_compute[250018]: 2026-01-20 14:57:33.912 250022 DEBUG nova.virt.libvirt.host [None req-c9ec874c-6d83-4258-aef6-5ddb55260471 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 14:57:33 compute-0 nova_compute[250018]: 2026-01-20 14:57:33.913 250022 DEBUG nova.virt.libvirt.driver [None req-c9ec874c-6d83-4258-aef6-5ddb55260471 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 14:57:33 compute-0 nova_compute[250018]: 2026-01-20 14:57:33.913 250022 DEBUG nova.virt.hardware [None req-c9ec874c-6d83-4258-aef6-5ddb55260471 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 14:57:33 compute-0 nova_compute[250018]: 2026-01-20 14:57:33.914 250022 DEBUG nova.virt.hardware [None req-c9ec874c-6d83-4258-aef6-5ddb55260471 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 14:57:33 compute-0 nova_compute[250018]: 2026-01-20 14:57:33.914 250022 DEBUG nova.virt.hardware [None req-c9ec874c-6d83-4258-aef6-5ddb55260471 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 14:57:33 compute-0 nova_compute[250018]: 2026-01-20 14:57:33.914 250022 DEBUG nova.virt.hardware [None req-c9ec874c-6d83-4258-aef6-5ddb55260471 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 14:57:33 compute-0 nova_compute[250018]: 2026-01-20 14:57:33.914 250022 DEBUG nova.virt.hardware [None req-c9ec874c-6d83-4258-aef6-5ddb55260471 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 14:57:33 compute-0 nova_compute[250018]: 2026-01-20 14:57:33.914 250022 DEBUG nova.virt.hardware [None req-c9ec874c-6d83-4258-aef6-5ddb55260471 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 14:57:33 compute-0 nova_compute[250018]: 2026-01-20 14:57:33.915 250022 DEBUG nova.virt.hardware [None req-c9ec874c-6d83-4258-aef6-5ddb55260471 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 14:57:33 compute-0 nova_compute[250018]: 2026-01-20 14:57:33.915 250022 DEBUG nova.virt.hardware [None req-c9ec874c-6d83-4258-aef6-5ddb55260471 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 14:57:33 compute-0 nova_compute[250018]: 2026-01-20 14:57:33.915 250022 DEBUG nova.virt.hardware [None req-c9ec874c-6d83-4258-aef6-5ddb55260471 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 14:57:33 compute-0 nova_compute[250018]: 2026-01-20 14:57:33.915 250022 DEBUG nova.virt.hardware [None req-c9ec874c-6d83-4258-aef6-5ddb55260471 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 14:57:33 compute-0 nova_compute[250018]: 2026-01-20 14:57:33.916 250022 DEBUG nova.virt.hardware [None req-c9ec874c-6d83-4258-aef6-5ddb55260471 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 14:57:33 compute-0 nova_compute[250018]: 2026-01-20 14:57:33.918 250022 DEBUG oslo_concurrency.processutils [None req-c9ec874c-6d83-4258-aef6-5ddb55260471 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:57:33 compute-0 podman[330016]: 2026-01-20 14:57:33.937841678 +0000 UTC m=+0.050934753 container create 7c0ae86508e8b32f004fd6fe6b5b65fe5dadb3abe596c5847087531dcff4ca93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_dubinsky, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:57:33 compute-0 systemd[1]: Started libpod-conmon-7c0ae86508e8b32f004fd6fe6b5b65fe5dadb3abe596c5847087531dcff4ca93.scope.
Jan 20 14:57:34 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:57:34 compute-0 podman[330016]: 2026-01-20 14:57:33.917303145 +0000 UTC m=+0.030396260 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:57:34 compute-0 podman[330016]: 2026-01-20 14:57:34.029696529 +0000 UTC m=+0.142789624 container init 7c0ae86508e8b32f004fd6fe6b5b65fe5dadb3abe596c5847087531dcff4ca93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_dubinsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:57:34 compute-0 podman[330016]: 2026-01-20 14:57:34.038855746 +0000 UTC m=+0.151948821 container start 7c0ae86508e8b32f004fd6fe6b5b65fe5dadb3abe596c5847087531dcff4ca93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_dubinsky, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:57:34 compute-0 podman[330016]: 2026-01-20 14:57:34.042566096 +0000 UTC m=+0.155659231 container attach 7c0ae86508e8b32f004fd6fe6b5b65fe5dadb3abe596c5847087531dcff4ca93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_dubinsky, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 20 14:57:34 compute-0 loving_dubinsky[330033]: 167 167
Jan 20 14:57:34 compute-0 systemd[1]: libpod-7c0ae86508e8b32f004fd6fe6b5b65fe5dadb3abe596c5847087531dcff4ca93.scope: Deactivated successfully.
Jan 20 14:57:34 compute-0 conmon[330033]: conmon 7c0ae86508e8b32f004f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7c0ae86508e8b32f004fd6fe6b5b65fe5dadb3abe596c5847087531dcff4ca93.scope/container/memory.events
Jan 20 14:57:34 compute-0 podman[330016]: 2026-01-20 14:57:34.046182053 +0000 UTC m=+0.159275138 container died 7c0ae86508e8b32f004fd6fe6b5b65fe5dadb3abe596c5847087531dcff4ca93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_dubinsky, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 20 14:57:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-a54b16aa2e32b86a55e3f8fab63cf60b09c089a8b99ae11d0bb4ba4ede7073b1-merged.mount: Deactivated successfully.
Jan 20 14:57:34 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2162: 321 pgs: 321 active+clean; 591 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 13 MiB/s rd, 10 MiB/s wr, 369 op/s
Jan 20 14:57:34 compute-0 podman[330016]: 2026-01-20 14:57:34.091616726 +0000 UTC m=+0.204709801 container remove 7c0ae86508e8b32f004fd6fe6b5b65fe5dadb3abe596c5847087531dcff4ca93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_dubinsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True)
Jan 20 14:57:34 compute-0 systemd[1]: libpod-conmon-7c0ae86508e8b32f004fd6fe6b5b65fe5dadb3abe596c5847087531dcff4ca93.scope: Deactivated successfully.
Jan 20 14:57:34 compute-0 podman[330075]: 2026-01-20 14:57:34.250960984 +0000 UTC m=+0.044917960 container create de8a6b472f2c839b2c62081cba6fd084c588d61fd55b6e41368537fb317bb8d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_elbakyan, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:57:34 compute-0 systemd[1]: Started libpod-conmon-de8a6b472f2c839b2c62081cba6fd084c588d61fd55b6e41368537fb317bb8d0.scope.
Jan 20 14:57:34 compute-0 podman[330075]: 2026-01-20 14:57:34.229669181 +0000 UTC m=+0.023626177 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:57:34 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:57:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/def59ceb18fb016a8b1a61e9fdb6017eba8e6e2d23ca8b7ca477c468e4abdabb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:57:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/def59ceb18fb016a8b1a61e9fdb6017eba8e6e2d23ca8b7ca477c468e4abdabb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:57:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/def59ceb18fb016a8b1a61e9fdb6017eba8e6e2d23ca8b7ca477c468e4abdabb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:57:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/def59ceb18fb016a8b1a61e9fdb6017eba8e6e2d23ca8b7ca477c468e4abdabb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:57:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/def59ceb18fb016a8b1a61e9fdb6017eba8e6e2d23ca8b7ca477c468e4abdabb/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 14:57:34 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:57:34 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2690735207' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:57:34 compute-0 podman[330075]: 2026-01-20 14:57:34.361237822 +0000 UTC m=+0.155194868 container init de8a6b472f2c839b2c62081cba6fd084c588d61fd55b6e41368537fb317bb8d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_elbakyan, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:57:34 compute-0 podman[330075]: 2026-01-20 14:57:34.369110024 +0000 UTC m=+0.163066980 container start de8a6b472f2c839b2c62081cba6fd084c588d61fd55b6e41368537fb317bb8d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_elbakyan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 20 14:57:34 compute-0 podman[330075]: 2026-01-20 14:57:34.372155706 +0000 UTC m=+0.166112702 container attach de8a6b472f2c839b2c62081cba6fd084c588d61fd55b6e41368537fb317bb8d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_elbakyan, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 20 14:57:34 compute-0 nova_compute[250018]: 2026-01-20 14:57:34.374 250022 DEBUG oslo_concurrency.processutils [None req-c9ec874c-6d83-4258-aef6-5ddb55260471 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:57:34 compute-0 nova_compute[250018]: 2026-01-20 14:57:34.855 250022 DEBUG nova.storage.rbd_utils [None req-c9ec874c-6d83-4258-aef6-5ddb55260471 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] rbd image 64a2f5d0-8a64-4e88-87f1-8cd406f83cda_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:57:34 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:57:34 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:57:34 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:57:34 compute-0 ceph-mon[74360]: osdmap e318: 3 total, 3 up, 3 in
Jan 20 14:57:34 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:57:34 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:57:34 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:57:34 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:57:34 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/3524889482' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:57:34 compute-0 ceph-mon[74360]: pgmap v2162: 321 pgs: 321 active+clean; 591 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 13 MiB/s rd, 10 MiB/s wr, 369 op/s
Jan 20 14:57:34 compute-0 nova_compute[250018]: 2026-01-20 14:57:34.866 250022 DEBUG oslo_concurrency.processutils [None req-c9ec874c-6d83-4258-aef6-5ddb55260471 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:57:34 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e318 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:57:35 compute-0 nova_compute[250018]: 2026-01-20 14:57:35.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:57:35 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:57:35 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:57:35 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:57:35.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:57:35 compute-0 hopeful_elbakyan[330092]: --> passed data devices: 0 physical, 1 LVM
Jan 20 14:57:35 compute-0 hopeful_elbakyan[330092]: --> relative data size: 1.0
Jan 20 14:57:35 compute-0 hopeful_elbakyan[330092]: --> All data devices are unavailable
Jan 20 14:57:35 compute-0 systemd[1]: libpod-de8a6b472f2c839b2c62081cba6fd084c588d61fd55b6e41368537fb317bb8d0.scope: Deactivated successfully.
Jan 20 14:57:35 compute-0 podman[330075]: 2026-01-20 14:57:35.202316608 +0000 UTC m=+0.996273564 container died de8a6b472f2c839b2c62081cba6fd084c588d61fd55b6e41368537fb317bb8d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_elbakyan, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:57:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-def59ceb18fb016a8b1a61e9fdb6017eba8e6e2d23ca8b7ca477c468e4abdabb-merged.mount: Deactivated successfully.
Jan 20 14:57:35 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:57:35 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4079449944' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:57:35 compute-0 podman[330075]: 2026-01-20 14:57:35.280290446 +0000 UTC m=+1.074247402 container remove de8a6b472f2c839b2c62081cba6fd084c588d61fd55b6e41368537fb317bb8d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_elbakyan, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:57:35 compute-0 systemd[1]: libpod-conmon-de8a6b472f2c839b2c62081cba6fd084c588d61fd55b6e41368537fb317bb8d0.scope: Deactivated successfully.
Jan 20 14:57:35 compute-0 nova_compute[250018]: 2026-01-20 14:57:35.305 250022 DEBUG oslo_concurrency.processutils [None req-c9ec874c-6d83-4258-aef6-5ddb55260471 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:57:35 compute-0 nova_compute[250018]: 2026-01-20 14:57:35.307 250022 DEBUG nova.virt.libvirt.vif [None req-c9ec874c-6d83-4258-aef6-5ddb55260471 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:57:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-623581889',display_name='tempest-ServersTestJSON-server-623581889',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-623581889',id=137,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ca6cd0afe0ab41e3ab36d21a4129f734',ramdisk_id='',reservation_id='r-5z001ple',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-405461620',owner_user_name='tempest-ServersTestJSON-405461620-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:57:30Z,user_data=None,user_id='395a5c503218411284bc94c45263d1fb',uuid=64a2f5d0-8a64-4e88-87f1-8cd406f83cda,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2b81c6c8-fe13-44f6-85a2-1d0130b60a64", "address": "fa:16:3e:87:f5:04", "network": {"id": "f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c", "bridge": "br-int", "label": "tempest-ServersTestJSON-1745321011-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca6cd0afe0ab41e3ab36d21a4129f734", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b81c6c8-fe", "ovs_interfaceid": "2b81c6c8-fe13-44f6-85a2-1d0130b60a64", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 14:57:35 compute-0 nova_compute[250018]: 2026-01-20 14:57:35.308 250022 DEBUG nova.network.os_vif_util [None req-c9ec874c-6d83-4258-aef6-5ddb55260471 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Converting VIF {"id": "2b81c6c8-fe13-44f6-85a2-1d0130b60a64", "address": "fa:16:3e:87:f5:04", "network": {"id": "f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c", "bridge": "br-int", "label": "tempest-ServersTestJSON-1745321011-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca6cd0afe0ab41e3ab36d21a4129f734", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b81c6c8-fe", "ovs_interfaceid": "2b81c6c8-fe13-44f6-85a2-1d0130b60a64", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:57:35 compute-0 nova_compute[250018]: 2026-01-20 14:57:35.309 250022 DEBUG nova.network.os_vif_util [None req-c9ec874c-6d83-4258-aef6-5ddb55260471 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:87:f5:04,bridge_name='br-int',has_traffic_filtering=True,id=2b81c6c8-fe13-44f6-85a2-1d0130b60a64,network=Network(f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2b81c6c8-fe') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:57:35 compute-0 nova_compute[250018]: 2026-01-20 14:57:35.310 250022 DEBUG nova.objects.instance [None req-c9ec874c-6d83-4258-aef6-5ddb55260471 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Lazy-loading 'pci_devices' on Instance uuid 64a2f5d0-8a64-4e88-87f1-8cd406f83cda obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:57:35 compute-0 sudo[329948]: pam_unix(sudo:session): session closed for user root
Jan 20 14:57:35 compute-0 nova_compute[250018]: 2026-01-20 14:57:35.326 250022 DEBUG nova.virt.libvirt.driver [None req-c9ec874c-6d83-4258-aef6-5ddb55260471 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 64a2f5d0-8a64-4e88-87f1-8cd406f83cda] End _get_guest_xml xml=<domain type="kvm">
Jan 20 14:57:35 compute-0 nova_compute[250018]:   <uuid>64a2f5d0-8a64-4e88-87f1-8cd406f83cda</uuid>
Jan 20 14:57:35 compute-0 nova_compute[250018]:   <name>instance-00000089</name>
Jan 20 14:57:35 compute-0 nova_compute[250018]:   <memory>131072</memory>
Jan 20 14:57:35 compute-0 nova_compute[250018]:   <vcpu>1</vcpu>
Jan 20 14:57:35 compute-0 nova_compute[250018]:   <metadata>
Jan 20 14:57:35 compute-0 nova_compute[250018]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 14:57:35 compute-0 nova_compute[250018]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 14:57:35 compute-0 nova_compute[250018]:       <nova:name>tempest-ServersTestJSON-server-623581889</nova:name>
Jan 20 14:57:35 compute-0 nova_compute[250018]:       <nova:creationTime>2026-01-20 14:57:33</nova:creationTime>
Jan 20 14:57:35 compute-0 nova_compute[250018]:       <nova:flavor name="m1.nano">
Jan 20 14:57:35 compute-0 nova_compute[250018]:         <nova:memory>128</nova:memory>
Jan 20 14:57:35 compute-0 nova_compute[250018]:         <nova:disk>1</nova:disk>
Jan 20 14:57:35 compute-0 nova_compute[250018]:         <nova:swap>0</nova:swap>
Jan 20 14:57:35 compute-0 nova_compute[250018]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 14:57:35 compute-0 nova_compute[250018]:         <nova:vcpus>1</nova:vcpus>
Jan 20 14:57:35 compute-0 nova_compute[250018]:       </nova:flavor>
Jan 20 14:57:35 compute-0 nova_compute[250018]:       <nova:owner>
Jan 20 14:57:35 compute-0 nova_compute[250018]:         <nova:user uuid="395a5c503218411284bc94c45263d1fb">tempest-ServersTestJSON-405461620-project-member</nova:user>
Jan 20 14:57:35 compute-0 nova_compute[250018]:         <nova:project uuid="ca6cd0afe0ab41e3ab36d21a4129f734">tempest-ServersTestJSON-405461620</nova:project>
Jan 20 14:57:35 compute-0 nova_compute[250018]:       </nova:owner>
Jan 20 14:57:35 compute-0 nova_compute[250018]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 14:57:35 compute-0 nova_compute[250018]:       <nova:ports>
Jan 20 14:57:35 compute-0 nova_compute[250018]:         <nova:port uuid="2b81c6c8-fe13-44f6-85a2-1d0130b60a64">
Jan 20 14:57:35 compute-0 nova_compute[250018]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 20 14:57:35 compute-0 nova_compute[250018]:         </nova:port>
Jan 20 14:57:35 compute-0 nova_compute[250018]:       </nova:ports>
Jan 20 14:57:35 compute-0 nova_compute[250018]:     </nova:instance>
Jan 20 14:57:35 compute-0 nova_compute[250018]:   </metadata>
Jan 20 14:57:35 compute-0 nova_compute[250018]:   <sysinfo type="smbios">
Jan 20 14:57:35 compute-0 nova_compute[250018]:     <system>
Jan 20 14:57:35 compute-0 nova_compute[250018]:       <entry name="manufacturer">RDO</entry>
Jan 20 14:57:35 compute-0 nova_compute[250018]:       <entry name="product">OpenStack Compute</entry>
Jan 20 14:57:35 compute-0 nova_compute[250018]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 14:57:35 compute-0 nova_compute[250018]:       <entry name="serial">64a2f5d0-8a64-4e88-87f1-8cd406f83cda</entry>
Jan 20 14:57:35 compute-0 nova_compute[250018]:       <entry name="uuid">64a2f5d0-8a64-4e88-87f1-8cd406f83cda</entry>
Jan 20 14:57:35 compute-0 nova_compute[250018]:       <entry name="family">Virtual Machine</entry>
Jan 20 14:57:35 compute-0 nova_compute[250018]:     </system>
Jan 20 14:57:35 compute-0 nova_compute[250018]:   </sysinfo>
Jan 20 14:57:35 compute-0 nova_compute[250018]:   <os>
Jan 20 14:57:35 compute-0 nova_compute[250018]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 14:57:35 compute-0 nova_compute[250018]:     <boot dev="hd"/>
Jan 20 14:57:35 compute-0 nova_compute[250018]:     <smbios mode="sysinfo"/>
Jan 20 14:57:35 compute-0 nova_compute[250018]:   </os>
Jan 20 14:57:35 compute-0 nova_compute[250018]:   <features>
Jan 20 14:57:35 compute-0 nova_compute[250018]:     <acpi/>
Jan 20 14:57:35 compute-0 nova_compute[250018]:     <apic/>
Jan 20 14:57:35 compute-0 nova_compute[250018]:     <vmcoreinfo/>
Jan 20 14:57:35 compute-0 nova_compute[250018]:   </features>
Jan 20 14:57:35 compute-0 nova_compute[250018]:   <clock offset="utc">
Jan 20 14:57:35 compute-0 nova_compute[250018]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 14:57:35 compute-0 nova_compute[250018]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 14:57:35 compute-0 nova_compute[250018]:     <timer name="hpet" present="no"/>
Jan 20 14:57:35 compute-0 nova_compute[250018]:   </clock>
Jan 20 14:57:35 compute-0 nova_compute[250018]:   <cpu mode="custom" match="exact">
Jan 20 14:57:35 compute-0 nova_compute[250018]:     <model>Nehalem</model>
Jan 20 14:57:35 compute-0 nova_compute[250018]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 14:57:35 compute-0 nova_compute[250018]:   </cpu>
Jan 20 14:57:35 compute-0 nova_compute[250018]:   <devices>
Jan 20 14:57:35 compute-0 nova_compute[250018]:     <disk type="network" device="disk">
Jan 20 14:57:35 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 14:57:35 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/64a2f5d0-8a64-4e88-87f1-8cd406f83cda_disk">
Jan 20 14:57:35 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:57:35 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:57:35 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:57:35 compute-0 nova_compute[250018]:       </source>
Jan 20 14:57:35 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 14:57:35 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:57:35 compute-0 nova_compute[250018]:       </auth>
Jan 20 14:57:35 compute-0 nova_compute[250018]:       <target dev="vda" bus="virtio"/>
Jan 20 14:57:35 compute-0 nova_compute[250018]:     </disk>
Jan 20 14:57:35 compute-0 nova_compute[250018]:     <disk type="network" device="cdrom">
Jan 20 14:57:35 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 14:57:35 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/64a2f5d0-8a64-4e88-87f1-8cd406f83cda_disk.config">
Jan 20 14:57:35 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:57:35 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:57:35 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:57:35 compute-0 nova_compute[250018]:       </source>
Jan 20 14:57:35 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 14:57:35 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:57:35 compute-0 nova_compute[250018]:       </auth>
Jan 20 14:57:35 compute-0 nova_compute[250018]:       <target dev="sda" bus="sata"/>
Jan 20 14:57:35 compute-0 nova_compute[250018]:     </disk>
Jan 20 14:57:35 compute-0 nova_compute[250018]:     <interface type="ethernet">
Jan 20 14:57:35 compute-0 nova_compute[250018]:       <mac address="fa:16:3e:87:f5:04"/>
Jan 20 14:57:35 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 14:57:35 compute-0 nova_compute[250018]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 14:57:35 compute-0 nova_compute[250018]:       <mtu size="1442"/>
Jan 20 14:57:35 compute-0 nova_compute[250018]:       <target dev="tap2b81c6c8-fe"/>
Jan 20 14:57:35 compute-0 nova_compute[250018]:     </interface>
Jan 20 14:57:35 compute-0 nova_compute[250018]:     <serial type="pty">
Jan 20 14:57:35 compute-0 nova_compute[250018]:       <log file="/var/lib/nova/instances/64a2f5d0-8a64-4e88-87f1-8cd406f83cda/console.log" append="off"/>
Jan 20 14:57:35 compute-0 nova_compute[250018]:     </serial>
Jan 20 14:57:35 compute-0 nova_compute[250018]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 14:57:35 compute-0 nova_compute[250018]:     <video>
Jan 20 14:57:35 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 14:57:35 compute-0 nova_compute[250018]:     </video>
Jan 20 14:57:35 compute-0 nova_compute[250018]:     <input type="tablet" bus="usb"/>
Jan 20 14:57:35 compute-0 nova_compute[250018]:     <rng model="virtio">
Jan 20 14:57:35 compute-0 nova_compute[250018]:       <backend model="random">/dev/urandom</backend>
Jan 20 14:57:35 compute-0 nova_compute[250018]:     </rng>
Jan 20 14:57:35 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root"/>
Jan 20 14:57:35 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:57:35 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:57:35 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:57:35 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:57:35 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:57:35 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:57:35 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:57:35 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:57:35 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:57:35 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:57:35 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:57:35 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:57:35 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:57:35 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:57:35 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:57:35 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:57:35 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:57:35 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:57:35 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:57:35 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:57:35 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:57:35 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:57:35 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:57:35 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:57:35 compute-0 nova_compute[250018]:     <controller type="usb" index="0"/>
Jan 20 14:57:35 compute-0 nova_compute[250018]:     <memballoon model="virtio">
Jan 20 14:57:35 compute-0 nova_compute[250018]:       <stats period="10"/>
Jan 20 14:57:35 compute-0 nova_compute[250018]:     </memballoon>
Jan 20 14:57:35 compute-0 nova_compute[250018]:   </devices>
Jan 20 14:57:35 compute-0 nova_compute[250018]: </domain>
Jan 20 14:57:35 compute-0 nova_compute[250018]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 14:57:35 compute-0 nova_compute[250018]: 2026-01-20 14:57:35.327 250022 DEBUG nova.compute.manager [None req-c9ec874c-6d83-4258-aef6-5ddb55260471 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 64a2f5d0-8a64-4e88-87f1-8cd406f83cda] Preparing to wait for external event network-vif-plugged-2b81c6c8-fe13-44f6-85a2-1d0130b60a64 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 20 14:57:35 compute-0 nova_compute[250018]: 2026-01-20 14:57:35.328 250022 DEBUG oslo_concurrency.lockutils [None req-c9ec874c-6d83-4258-aef6-5ddb55260471 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Acquiring lock "64a2f5d0-8a64-4e88-87f1-8cd406f83cda-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:57:35 compute-0 nova_compute[250018]: 2026-01-20 14:57:35.328 250022 DEBUG oslo_concurrency.lockutils [None req-c9ec874c-6d83-4258-aef6-5ddb55260471 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Lock "64a2f5d0-8a64-4e88-87f1-8cd406f83cda-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:57:35 compute-0 nova_compute[250018]: 2026-01-20 14:57:35.329 250022 DEBUG oslo_concurrency.lockutils [None req-c9ec874c-6d83-4258-aef6-5ddb55260471 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Lock "64a2f5d0-8a64-4e88-87f1-8cd406f83cda-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:57:35 compute-0 nova_compute[250018]: 2026-01-20 14:57:35.330 250022 DEBUG nova.virt.libvirt.vif [None req-c9ec874c-6d83-4258-aef6-5ddb55260471 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:57:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-623581889',display_name='tempest-ServersTestJSON-server-623581889',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-623581889',id=137,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ca6cd0afe0ab41e3ab36d21a4129f734',ramdisk_id='',reservation_id='r-5z001ple',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-405461620',owner_user_name='tempest-ServersTestJSON-405461620-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:57:30Z,user_data=None,user_id='395a5c503218411284bc94c45263d1fb',uuid=64a2f5d0-8a64-4e88-87f1-8cd406f83cda,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2b81c6c8-fe13-44f6-85a2-1d0130b60a64", "address": "fa:16:3e:87:f5:04", "network": {"id": "f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c", "bridge": "br-int", "label": "tempest-ServersTestJSON-1745321011-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca6cd0afe0ab41e3ab36d21a4129f734", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b81c6c8-fe", "ovs_interfaceid": "2b81c6c8-fe13-44f6-85a2-1d0130b60a64", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 14:57:35 compute-0 nova_compute[250018]: 2026-01-20 14:57:35.331 250022 DEBUG nova.network.os_vif_util [None req-c9ec874c-6d83-4258-aef6-5ddb55260471 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Converting VIF {"id": "2b81c6c8-fe13-44f6-85a2-1d0130b60a64", "address": "fa:16:3e:87:f5:04", "network": {"id": "f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c", "bridge": "br-int", "label": "tempest-ServersTestJSON-1745321011-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca6cd0afe0ab41e3ab36d21a4129f734", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b81c6c8-fe", "ovs_interfaceid": "2b81c6c8-fe13-44f6-85a2-1d0130b60a64", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:57:35 compute-0 nova_compute[250018]: 2026-01-20 14:57:35.332 250022 DEBUG nova.network.os_vif_util [None req-c9ec874c-6d83-4258-aef6-5ddb55260471 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:87:f5:04,bridge_name='br-int',has_traffic_filtering=True,id=2b81c6c8-fe13-44f6-85a2-1d0130b60a64,network=Network(f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2b81c6c8-fe') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:57:35 compute-0 nova_compute[250018]: 2026-01-20 14:57:35.332 250022 DEBUG os_vif [None req-c9ec874c-6d83-4258-aef6-5ddb55260471 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:87:f5:04,bridge_name='br-int',has_traffic_filtering=True,id=2b81c6c8-fe13-44f6-85a2-1d0130b60a64,network=Network(f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2b81c6c8-fe') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 14:57:35 compute-0 nova_compute[250018]: 2026-01-20 14:57:35.334 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:57:35 compute-0 nova_compute[250018]: 2026-01-20 14:57:35.334 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:57:35 compute-0 nova_compute[250018]: 2026-01-20 14:57:35.335 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:57:35 compute-0 nova_compute[250018]: 2026-01-20 14:57:35.339 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:57:35 compute-0 nova_compute[250018]: 2026-01-20 14:57:35.339 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2b81c6c8-fe, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:57:35 compute-0 nova_compute[250018]: 2026-01-20 14:57:35.339 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2b81c6c8-fe, col_values=(('external_ids', {'iface-id': '2b81c6c8-fe13-44f6-85a2-1d0130b60a64', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:87:f5:04', 'vm-uuid': '64a2f5d0-8a64-4e88-87f1-8cd406f83cda'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:57:35 compute-0 sudo[330163]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:57:35 compute-0 sudo[330163]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:57:35 compute-0 sudo[330163]: pam_unix(sudo:session): session closed for user root
Jan 20 14:57:35 compute-0 nova_compute[250018]: 2026-01-20 14:57:35.390 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:57:35 compute-0 NetworkManager[48960]: <info>  [1768921055.3918] manager: (tap2b81c6c8-fe): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/248)
Jan 20 14:57:35 compute-0 nova_compute[250018]: 2026-01-20 14:57:35.393 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:57:35 compute-0 nova_compute[250018]: 2026-01-20 14:57:35.397 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:57:35 compute-0 nova_compute[250018]: 2026-01-20 14:57:35.398 250022 INFO os_vif [None req-c9ec874c-6d83-4258-aef6-5ddb55260471 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:87:f5:04,bridge_name='br-int',has_traffic_filtering=True,id=2b81c6c8-fe13-44f6-85a2-1d0130b60a64,network=Network(f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2b81c6c8-fe')
Jan 20 14:57:35 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:57:35 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:57:35 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:57:35.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:57:35 compute-0 sudo[330188]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:57:35 compute-0 sudo[330188]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:57:35 compute-0 sudo[330188]: pam_unix(sudo:session): session closed for user root
Jan 20 14:57:35 compute-0 nova_compute[250018]: 2026-01-20 14:57:35.466 250022 DEBUG nova.virt.libvirt.driver [None req-c9ec874c-6d83-4258-aef6-5ddb55260471 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:57:35 compute-0 nova_compute[250018]: 2026-01-20 14:57:35.467 250022 DEBUG nova.virt.libvirt.driver [None req-c9ec874c-6d83-4258-aef6-5ddb55260471 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:57:35 compute-0 nova_compute[250018]: 2026-01-20 14:57:35.467 250022 DEBUG nova.virt.libvirt.driver [None req-c9ec874c-6d83-4258-aef6-5ddb55260471 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] No VIF found with MAC fa:16:3e:87:f5:04, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 14:57:35 compute-0 nova_compute[250018]: 2026-01-20 14:57:35.469 250022 INFO nova.virt.libvirt.driver [None req-c9ec874c-6d83-4258-aef6-5ddb55260471 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 64a2f5d0-8a64-4e88-87f1-8cd406f83cda] Using config drive
Jan 20 14:57:35 compute-0 nova_compute[250018]: 2026-01-20 14:57:35.499 250022 DEBUG nova.storage.rbd_utils [None req-c9ec874c-6d83-4258-aef6-5ddb55260471 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] rbd image 64a2f5d0-8a64-4e88-87f1-8cd406f83cda_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:57:35 compute-0 sudo[330216]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:57:35 compute-0 sudo[330216]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:57:35 compute-0 sudo[330216]: pam_unix(sudo:session): session closed for user root
Jan 20 14:57:35 compute-0 sudo[330259]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- lvm list --format json
Jan 20 14:57:35 compute-0 sudo[330259]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:57:35 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/2690735207' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:57:35 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/4079449944' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:57:35 compute-0 nova_compute[250018]: 2026-01-20 14:57:35.876 250022 INFO nova.virt.libvirt.driver [None req-c9ec874c-6d83-4258-aef6-5ddb55260471 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 64a2f5d0-8a64-4e88-87f1-8cd406f83cda] Creating config drive at /var/lib/nova/instances/64a2f5d0-8a64-4e88-87f1-8cd406f83cda/disk.config
Jan 20 14:57:35 compute-0 nova_compute[250018]: 2026-01-20 14:57:35.881 250022 DEBUG oslo_concurrency.processutils [None req-c9ec874c-6d83-4258-aef6-5ddb55260471 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/64a2f5d0-8a64-4e88-87f1-8cd406f83cda/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxn_ebtaj execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:57:35 compute-0 podman[330324]: 2026-01-20 14:57:35.940287348 +0000 UTC m=+0.069660466 container create 3259ca06f8c3d2a409221aa52e5e51056710f02b4fd8835b22057ed328ad0459 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_diffie, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 20 14:57:35 compute-0 nova_compute[250018]: 2026-01-20 14:57:35.962 250022 DEBUG nova.network.neutron [req-9b9bf6f6-6655-4893-bba7-a1bc6e2d0bcd req-998b54bb-07b1-4a22-9dd2-10b1915f8553 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 64a2f5d0-8a64-4e88-87f1-8cd406f83cda] Updated VIF entry in instance network info cache for port 2b81c6c8-fe13-44f6-85a2-1d0130b60a64. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 14:57:35 compute-0 nova_compute[250018]: 2026-01-20 14:57:35.965 250022 DEBUG nova.network.neutron [req-9b9bf6f6-6655-4893-bba7-a1bc6e2d0bcd req-998b54bb-07b1-4a22-9dd2-10b1915f8553 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 64a2f5d0-8a64-4e88-87f1-8cd406f83cda] Updating instance_info_cache with network_info: [{"id": "2b81c6c8-fe13-44f6-85a2-1d0130b60a64", "address": "fa:16:3e:87:f5:04", "network": {"id": "f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c", "bridge": "br-int", "label": "tempest-ServersTestJSON-1745321011-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca6cd0afe0ab41e3ab36d21a4129f734", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b81c6c8-fe", "ovs_interfaceid": "2b81c6c8-fe13-44f6-85a2-1d0130b60a64", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:57:35 compute-0 systemd[1]: Started libpod-conmon-3259ca06f8c3d2a409221aa52e5e51056710f02b4fd8835b22057ed328ad0459.scope.
Jan 20 14:57:35 compute-0 nova_compute[250018]: 2026-01-20 14:57:35.984 250022 DEBUG oslo_concurrency.lockutils [req-9b9bf6f6-6655-4893-bba7-a1bc6e2d0bcd req-998b54bb-07b1-4a22-9dd2-10b1915f8553 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-64a2f5d0-8a64-4e88-87f1-8cd406f83cda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:57:36 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:57:36 compute-0 podman[330324]: 2026-01-20 14:57:35.914517635 +0000 UTC m=+0.043890853 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:57:36 compute-0 podman[330324]: 2026-01-20 14:57:36.018843332 +0000 UTC m=+0.148216540 container init 3259ca06f8c3d2a409221aa52e5e51056710f02b4fd8835b22057ed328ad0459 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_diffie, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 20 14:57:36 compute-0 nova_compute[250018]: 2026-01-20 14:57:36.022 250022 DEBUG oslo_concurrency.processutils [None req-c9ec874c-6d83-4258-aef6-5ddb55260471 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/64a2f5d0-8a64-4e88-87f1-8cd406f83cda/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxn_ebtaj" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:57:36 compute-0 podman[330324]: 2026-01-20 14:57:36.027299799 +0000 UTC m=+0.156672917 container start 3259ca06f8c3d2a409221aa52e5e51056710f02b4fd8835b22057ed328ad0459 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_diffie, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 20 14:57:36 compute-0 podman[330324]: 2026-01-20 14:57:36.031269426 +0000 UTC m=+0.160642564 container attach 3259ca06f8c3d2a409221aa52e5e51056710f02b4fd8835b22057ed328ad0459 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_diffie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:57:36 compute-0 sharp_diffie[330343]: 167 167
Jan 20 14:57:36 compute-0 systemd[1]: libpod-3259ca06f8c3d2a409221aa52e5e51056710f02b4fd8835b22057ed328ad0459.scope: Deactivated successfully.
Jan 20 14:57:36 compute-0 podman[330324]: 2026-01-20 14:57:36.033547888 +0000 UTC m=+0.162921006 container died 3259ca06f8c3d2a409221aa52e5e51056710f02b4fd8835b22057ed328ad0459 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_diffie, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:57:36 compute-0 nova_compute[250018]: 2026-01-20 14:57:36.051 250022 DEBUG nova.storage.rbd_utils [None req-c9ec874c-6d83-4258-aef6-5ddb55260471 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] rbd image 64a2f5d0-8a64-4e88-87f1-8cd406f83cda_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:57:36 compute-0 nova_compute[250018]: 2026-01-20 14:57:36.054 250022 DEBUG oslo_concurrency.processutils [None req-c9ec874c-6d83-4258-aef6-5ddb55260471 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/64a2f5d0-8a64-4e88-87f1-8cd406f83cda/disk.config 64a2f5d0-8a64-4e88-87f1-8cd406f83cda_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:57:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-d59f9f09304c59ad00a4fdb33a2a7437b9196e94c23680f893eb2c8de1680d9c-merged.mount: Deactivated successfully.
Jan 20 14:57:36 compute-0 podman[330324]: 2026-01-20 14:57:36.067492741 +0000 UTC m=+0.196865859 container remove 3259ca06f8c3d2a409221aa52e5e51056710f02b4fd8835b22057ed328ad0459 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_diffie, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 20 14:57:36 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2163: 321 pgs: 321 active+clean; 531 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 13 MiB/s rd, 13 MiB/s wr, 491 op/s
Jan 20 14:57:36 compute-0 systemd[1]: libpod-conmon-3259ca06f8c3d2a409221aa52e5e51056710f02b4fd8835b22057ed328ad0459.scope: Deactivated successfully.
Jan 20 14:57:36 compute-0 podman[330401]: 2026-01-20 14:57:36.223148 +0000 UTC m=+0.038816146 container create 9725e6117e3279e8d794a8f5b64d5fd416e32ac51afe0da4e80bc5d452ec0165 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_euler, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 20 14:57:36 compute-0 nova_compute[250018]: 2026-01-20 14:57:36.236 250022 DEBUG oslo_concurrency.processutils [None req-c9ec874c-6d83-4258-aef6-5ddb55260471 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/64a2f5d0-8a64-4e88-87f1-8cd406f83cda/disk.config 64a2f5d0-8a64-4e88-87f1-8cd406f83cda_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.182s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:57:36 compute-0 nova_compute[250018]: 2026-01-20 14:57:36.238 250022 INFO nova.virt.libvirt.driver [None req-c9ec874c-6d83-4258-aef6-5ddb55260471 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 64a2f5d0-8a64-4e88-87f1-8cd406f83cda] Deleting local config drive /var/lib/nova/instances/64a2f5d0-8a64-4e88-87f1-8cd406f83cda/disk.config because it was imported into RBD.
Jan 20 14:57:36 compute-0 systemd[1]: Started libpod-conmon-9725e6117e3279e8d794a8f5b64d5fd416e32ac51afe0da4e80bc5d452ec0165.scope.
Jan 20 14:57:36 compute-0 kernel: tap2b81c6c8-fe: entered promiscuous mode
Jan 20 14:57:36 compute-0 NetworkManager[48960]: <info>  [1768921056.2866] manager: (tap2b81c6c8-fe): new Tun device (/org/freedesktop/NetworkManager/Devices/249)
Jan 20 14:57:36 compute-0 ovn_controller[148666]: 2026-01-20T14:57:36Z|00500|binding|INFO|Claiming lport 2b81c6c8-fe13-44f6-85a2-1d0130b60a64 for this chassis.
Jan 20 14:57:36 compute-0 ovn_controller[148666]: 2026-01-20T14:57:36Z|00501|binding|INFO|2b81c6c8-fe13-44f6-85a2-1d0130b60a64: Claiming fa:16:3e:87:f5:04 10.100.0.7
Jan 20 14:57:36 compute-0 nova_compute[250018]: 2026-01-20 14:57:36.288 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:57:36 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:57:36 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:57:36.297 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:87:f5:04 10.100.0.7'], port_security=['fa:16:3e:87:f5:04 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '64a2f5d0-8a64-4e88-87f1-8cd406f83cda', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ca6cd0afe0ab41e3ab36d21a4129f734', 'neutron:revision_number': '2', 'neutron:security_group_ids': '819ea4ae-b994-44d1-9da3-8b0ca609fb2a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ee620e3e-ef7e-4826-b394-b8a89442b353, chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=2b81c6c8-fe13-44f6-85a2-1d0130b60a64) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:57:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/917d8611fe8273056006c786667c980850c18c86f68a97e82b0fbf709b496372/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:57:36 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:57:36.298 160071 INFO neutron.agent.ovn.metadata.agent [-] Port 2b81c6c8-fe13-44f6-85a2-1d0130b60a64 in datapath f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c bound to our chassis
Jan 20 14:57:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/917d8611fe8273056006c786667c980850c18c86f68a97e82b0fbf709b496372/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:57:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/917d8611fe8273056006c786667c980850c18c86f68a97e82b0fbf709b496372/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:57:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/917d8611fe8273056006c786667c980850c18c86f68a97e82b0fbf709b496372/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:57:36 compute-0 podman[330401]: 2026-01-20 14:57:36.20752274 +0000 UTC m=+0.023190906 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:57:36 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:57:36.302 160071 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c
Jan 20 14:57:36 compute-0 ovn_controller[148666]: 2026-01-20T14:57:36Z|00502|binding|INFO|Setting lport 2b81c6c8-fe13-44f6-85a2-1d0130b60a64 ovn-installed in OVS
Jan 20 14:57:36 compute-0 ovn_controller[148666]: 2026-01-20T14:57:36Z|00503|binding|INFO|Setting lport 2b81c6c8-fe13-44f6-85a2-1d0130b60a64 up in Southbound
Jan 20 14:57:36 compute-0 nova_compute[250018]: 2026-01-20 14:57:36.311 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:57:36 compute-0 podman[330401]: 2026-01-20 14:57:36.312889075 +0000 UTC m=+0.128557231 container init 9725e6117e3279e8d794a8f5b64d5fd416e32ac51afe0da4e80bc5d452ec0165 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_euler, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 20 14:57:36 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:57:36.320 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[a3c21e4b-2793-4b2a-9410-2904c2b9d55b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:57:36 compute-0 systemd-machined[216401]: New machine qemu-65-instance-00000089.
Jan 20 14:57:36 compute-0 podman[330401]: 2026-01-20 14:57:36.323549723 +0000 UTC m=+0.139217869 container start 9725e6117e3279e8d794a8f5b64d5fd416e32ac51afe0da4e80bc5d452ec0165 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_euler, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 20 14:57:36 compute-0 systemd[1]: Started Virtual Machine qemu-65-instance-00000089.
Jan 20 14:57:36 compute-0 podman[330401]: 2026-01-20 14:57:36.326833101 +0000 UTC m=+0.142501247 container attach 9725e6117e3279e8d794a8f5b64d5fd416e32ac51afe0da4e80bc5d452ec0165 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_euler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 20 14:57:36 compute-0 systemd-udevd[330439]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:57:36 compute-0 NetworkManager[48960]: <info>  [1768921056.3477] device (tap2b81c6c8-fe): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 14:57:36 compute-0 NetworkManager[48960]: <info>  [1768921056.3484] device (tap2b81c6c8-fe): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 14:57:36 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:57:36.349 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[caeecabc-2ec2-45d4-9d37-a980c5ee90be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:57:36 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:57:36.352 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[54421da6-813a-4125-bb08-9b19d95dc35b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:57:36 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:57:36.375 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[56340307-b8e5-4e1d-a2f7-d19dc4efafe9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:57:36 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:57:36.392 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[af034241-f368-4479-bdee-608b6c2e6ce6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf4c8474b-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:14:a2:5f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 13, 'rx_bytes': 616, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 13, 'rx_bytes': 616, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 151], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 687884, 'reachable_time': 19711, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 330451, 'error': None, 'target': 'ovnmeta-f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:57:36 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:57:36.406 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[07b6ffe0-864e-4b6f-93fb-1582f30295b9]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapf4c8474b-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 687894, 'tstamp': 687894}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 330452, 'error': None, 'target': 'ovnmeta-f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapf4c8474b-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 687897, 'tstamp': 687897}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 330452, 'error': None, 'target': 'ovnmeta-f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:57:36 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:57:36.407 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf4c8474b-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:57:36 compute-0 nova_compute[250018]: 2026-01-20 14:57:36.467 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:57:36 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:57:36.469 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf4c8474b-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:57:36 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:57:36.469 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:57:36 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:57:36.469 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf4c8474b-00, col_values=(('external_ids', {'iface-id': '8c6fd3ab-70a8-4e63-99de-f2e15ac0207f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:57:36 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:57:36.470 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:57:36 compute-0 nova_compute[250018]: 2026-01-20 14:57:36.753 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768921056.7528374, 64a2f5d0-8a64-4e88-87f1-8cd406f83cda => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:57:36 compute-0 nova_compute[250018]: 2026-01-20 14:57:36.753 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 64a2f5d0-8a64-4e88-87f1-8cd406f83cda] VM Started (Lifecycle Event)
Jan 20 14:57:36 compute-0 nova_compute[250018]: 2026-01-20 14:57:36.774 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 64a2f5d0-8a64-4e88-87f1-8cd406f83cda] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:57:36 compute-0 nova_compute[250018]: 2026-01-20 14:57:36.778 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768921056.7529163, 64a2f5d0-8a64-4e88-87f1-8cd406f83cda => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:57:36 compute-0 nova_compute[250018]: 2026-01-20 14:57:36.778 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 64a2f5d0-8a64-4e88-87f1-8cd406f83cda] VM Paused (Lifecycle Event)
Jan 20 14:57:36 compute-0 nova_compute[250018]: 2026-01-20 14:57:36.796 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 64a2f5d0-8a64-4e88-87f1-8cd406f83cda] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:57:36 compute-0 nova_compute[250018]: 2026-01-20 14:57:36.799 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 64a2f5d0-8a64-4e88-87f1-8cd406f83cda] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:57:36 compute-0 nova_compute[250018]: 2026-01-20 14:57:36.822 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 64a2f5d0-8a64-4e88-87f1-8cd406f83cda] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:57:36 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/759606216' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:57:36 compute-0 ceph-mon[74360]: pgmap v2163: 321 pgs: 321 active+clean; 531 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 13 MiB/s rd, 13 MiB/s wr, 491 op/s
Jan 20 14:57:37 compute-0 upbeat_euler[330425]: {
Jan 20 14:57:37 compute-0 upbeat_euler[330425]:     "0": [
Jan 20 14:57:37 compute-0 upbeat_euler[330425]:         {
Jan 20 14:57:37 compute-0 upbeat_euler[330425]:             "devices": [
Jan 20 14:57:37 compute-0 upbeat_euler[330425]:                 "/dev/loop3"
Jan 20 14:57:37 compute-0 upbeat_euler[330425]:             ],
Jan 20 14:57:37 compute-0 upbeat_euler[330425]:             "lv_name": "ceph_lv0",
Jan 20 14:57:37 compute-0 upbeat_euler[330425]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:57:37 compute-0 upbeat_euler[330425]:             "lv_size": "7511998464",
Jan 20 14:57:37 compute-0 upbeat_euler[330425]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e399cf45-e6b6-5393-99f1-75c601d3f188,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afc3740a-ccee-46f8-b343-22648cc89376,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 20 14:57:37 compute-0 upbeat_euler[330425]:             "lv_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 14:57:37 compute-0 upbeat_euler[330425]:             "name": "ceph_lv0",
Jan 20 14:57:37 compute-0 upbeat_euler[330425]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:57:37 compute-0 upbeat_euler[330425]:             "tags": {
Jan 20 14:57:37 compute-0 upbeat_euler[330425]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:57:37 compute-0 upbeat_euler[330425]:                 "ceph.block_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 14:57:37 compute-0 upbeat_euler[330425]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 14:57:37 compute-0 upbeat_euler[330425]:                 "ceph.cluster_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 14:57:37 compute-0 upbeat_euler[330425]:                 "ceph.cluster_name": "ceph",
Jan 20 14:57:37 compute-0 upbeat_euler[330425]:                 "ceph.crush_device_class": "",
Jan 20 14:57:37 compute-0 upbeat_euler[330425]:                 "ceph.encrypted": "0",
Jan 20 14:57:37 compute-0 upbeat_euler[330425]:                 "ceph.osd_fsid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 14:57:37 compute-0 upbeat_euler[330425]:                 "ceph.osd_id": "0",
Jan 20 14:57:37 compute-0 upbeat_euler[330425]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 14:57:37 compute-0 upbeat_euler[330425]:                 "ceph.type": "block",
Jan 20 14:57:37 compute-0 upbeat_euler[330425]:                 "ceph.vdo": "0"
Jan 20 14:57:37 compute-0 upbeat_euler[330425]:             },
Jan 20 14:57:37 compute-0 upbeat_euler[330425]:             "type": "block",
Jan 20 14:57:37 compute-0 upbeat_euler[330425]:             "vg_name": "ceph_vg0"
Jan 20 14:57:37 compute-0 upbeat_euler[330425]:         }
Jan 20 14:57:37 compute-0 upbeat_euler[330425]:     ]
Jan 20 14:57:37 compute-0 upbeat_euler[330425]: }
Jan 20 14:57:37 compute-0 systemd[1]: libpod-9725e6117e3279e8d794a8f5b64d5fd416e32ac51afe0da4e80bc5d452ec0165.scope: Deactivated successfully.
Jan 20 14:57:37 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:57:37 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:57:37 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:57:37.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:57:37 compute-0 nova_compute[250018]: 2026-01-20 14:57:37.132 250022 DEBUG nova.compute.manager [req-cd290b24-6aea-420f-b644-8532ffa25bc4 req-59da6ed8-7ad4-4e45-8bc8-c1605745afcb 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 64a2f5d0-8a64-4e88-87f1-8cd406f83cda] Received event network-vif-plugged-2b81c6c8-fe13-44f6-85a2-1d0130b60a64 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:57:37 compute-0 nova_compute[250018]: 2026-01-20 14:57:37.133 250022 DEBUG oslo_concurrency.lockutils [req-cd290b24-6aea-420f-b644-8532ffa25bc4 req-59da6ed8-7ad4-4e45-8bc8-c1605745afcb 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "64a2f5d0-8a64-4e88-87f1-8cd406f83cda-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:57:37 compute-0 nova_compute[250018]: 2026-01-20 14:57:37.134 250022 DEBUG oslo_concurrency.lockutils [req-cd290b24-6aea-420f-b644-8532ffa25bc4 req-59da6ed8-7ad4-4e45-8bc8-c1605745afcb 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "64a2f5d0-8a64-4e88-87f1-8cd406f83cda-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:57:37 compute-0 nova_compute[250018]: 2026-01-20 14:57:37.134 250022 DEBUG oslo_concurrency.lockutils [req-cd290b24-6aea-420f-b644-8532ffa25bc4 req-59da6ed8-7ad4-4e45-8bc8-c1605745afcb 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "64a2f5d0-8a64-4e88-87f1-8cd406f83cda-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:57:37 compute-0 nova_compute[250018]: 2026-01-20 14:57:37.134 250022 DEBUG nova.compute.manager [req-cd290b24-6aea-420f-b644-8532ffa25bc4 req-59da6ed8-7ad4-4e45-8bc8-c1605745afcb 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 64a2f5d0-8a64-4e88-87f1-8cd406f83cda] Processing event network-vif-plugged-2b81c6c8-fe13-44f6-85a2-1d0130b60a64 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 20 14:57:37 compute-0 nova_compute[250018]: 2026-01-20 14:57:37.135 250022 DEBUG nova.compute.manager [None req-c9ec874c-6d83-4258-aef6-5ddb55260471 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 64a2f5d0-8a64-4e88-87f1-8cd406f83cda] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 14:57:37 compute-0 nova_compute[250018]: 2026-01-20 14:57:37.139 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768921057.138825, 64a2f5d0-8a64-4e88-87f1-8cd406f83cda => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:57:37 compute-0 nova_compute[250018]: 2026-01-20 14:57:37.139 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 64a2f5d0-8a64-4e88-87f1-8cd406f83cda] VM Resumed (Lifecycle Event)
Jan 20 14:57:37 compute-0 nova_compute[250018]: 2026-01-20 14:57:37.141 250022 DEBUG nova.virt.libvirt.driver [None req-c9ec874c-6d83-4258-aef6-5ddb55260471 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 64a2f5d0-8a64-4e88-87f1-8cd406f83cda] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 14:57:37 compute-0 podman[330499]: 2026-01-20 14:57:37.141219248 +0000 UTC m=+0.045245569 container died 9725e6117e3279e8d794a8f5b64d5fd416e32ac51afe0da4e80bc5d452ec0165 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_euler, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:57:37 compute-0 nova_compute[250018]: 2026-01-20 14:57:37.144 250022 INFO nova.virt.libvirt.driver [-] [instance: 64a2f5d0-8a64-4e88-87f1-8cd406f83cda] Instance spawned successfully.
Jan 20 14:57:37 compute-0 nova_compute[250018]: 2026-01-20 14:57:37.145 250022 DEBUG nova.virt.libvirt.driver [None req-c9ec874c-6d83-4258-aef6-5ddb55260471 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 64a2f5d0-8a64-4e88-87f1-8cd406f83cda] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 14:57:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-917d8611fe8273056006c786667c980850c18c86f68a97e82b0fbf709b496372-merged.mount: Deactivated successfully.
Jan 20 14:57:37 compute-0 nova_compute[250018]: 2026-01-20 14:57:37.180 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 64a2f5d0-8a64-4e88-87f1-8cd406f83cda] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:57:37 compute-0 podman[330499]: 2026-01-20 14:57:37.209887016 +0000 UTC m=+0.113913257 container remove 9725e6117e3279e8d794a8f5b64d5fd416e32ac51afe0da4e80bc5d452ec0165 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_euler, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 20 14:57:37 compute-0 systemd[1]: libpod-conmon-9725e6117e3279e8d794a8f5b64d5fd416e32ac51afe0da4e80bc5d452ec0165.scope: Deactivated successfully.
Jan 20 14:57:37 compute-0 sudo[330259]: pam_unix(sudo:session): session closed for user root
Jan 20 14:57:37 compute-0 sudo[330514]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:57:37 compute-0 sudo[330514]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:57:37 compute-0 sudo[330514]: pam_unix(sudo:session): session closed for user root
Jan 20 14:57:37 compute-0 nova_compute[250018]: 2026-01-20 14:57:37.345 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 64a2f5d0-8a64-4e88-87f1-8cd406f83cda] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:57:37 compute-0 nova_compute[250018]: 2026-01-20 14:57:37.349 250022 DEBUG nova.virt.libvirt.driver [None req-c9ec874c-6d83-4258-aef6-5ddb55260471 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 64a2f5d0-8a64-4e88-87f1-8cd406f83cda] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:57:37 compute-0 nova_compute[250018]: 2026-01-20 14:57:37.349 250022 DEBUG nova.virt.libvirt.driver [None req-c9ec874c-6d83-4258-aef6-5ddb55260471 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 64a2f5d0-8a64-4e88-87f1-8cd406f83cda] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:57:37 compute-0 nova_compute[250018]: 2026-01-20 14:57:37.350 250022 DEBUG nova.virt.libvirt.driver [None req-c9ec874c-6d83-4258-aef6-5ddb55260471 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 64a2f5d0-8a64-4e88-87f1-8cd406f83cda] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:57:37 compute-0 nova_compute[250018]: 2026-01-20 14:57:37.350 250022 DEBUG nova.virt.libvirt.driver [None req-c9ec874c-6d83-4258-aef6-5ddb55260471 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 64a2f5d0-8a64-4e88-87f1-8cd406f83cda] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:57:37 compute-0 nova_compute[250018]: 2026-01-20 14:57:37.350 250022 DEBUG nova.virt.libvirt.driver [None req-c9ec874c-6d83-4258-aef6-5ddb55260471 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 64a2f5d0-8a64-4e88-87f1-8cd406f83cda] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:57:37 compute-0 nova_compute[250018]: 2026-01-20 14:57:37.351 250022 DEBUG nova.virt.libvirt.driver [None req-c9ec874c-6d83-4258-aef6-5ddb55260471 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 64a2f5d0-8a64-4e88-87f1-8cd406f83cda] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:57:37 compute-0 sudo[330539]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:57:37 compute-0 sudo[330539]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:57:37 compute-0 sudo[330539]: pam_unix(sudo:session): session closed for user root
Jan 20 14:57:37 compute-0 nova_compute[250018]: 2026-01-20 14:57:37.387 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 64a2f5d0-8a64-4e88-87f1-8cd406f83cda] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:57:37 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:57:37 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:57:37 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:57:37.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:57:37 compute-0 nova_compute[250018]: 2026-01-20 14:57:37.429 250022 INFO nova.compute.manager [None req-c9ec874c-6d83-4258-aef6-5ddb55260471 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 64a2f5d0-8a64-4e88-87f1-8cd406f83cda] Took 7.21 seconds to spawn the instance on the hypervisor.
Jan 20 14:57:37 compute-0 nova_compute[250018]: 2026-01-20 14:57:37.429 250022 DEBUG nova.compute.manager [None req-c9ec874c-6d83-4258-aef6-5ddb55260471 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 64a2f5d0-8a64-4e88-87f1-8cd406f83cda] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:57:37 compute-0 sudo[330564]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:57:37 compute-0 sudo[330564]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:57:37 compute-0 sudo[330564]: pam_unix(sudo:session): session closed for user root
Jan 20 14:57:37 compute-0 sudo[330589]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- raw list --format json
Jan 20 14:57:37 compute-0 sudo[330589]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:57:37 compute-0 nova_compute[250018]: 2026-01-20 14:57:37.517 250022 INFO nova.compute.manager [None req-c9ec874c-6d83-4258-aef6-5ddb55260471 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 64a2f5d0-8a64-4e88-87f1-8cd406f83cda] Took 8.16 seconds to build instance.
Jan 20 14:57:37 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:57:37.538 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=42, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=41) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:57:37 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:57:37.538 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 14:57:37 compute-0 nova_compute[250018]: 2026-01-20 14:57:37.574 250022 DEBUG oslo_concurrency.lockutils [None req-c9ec874c-6d83-4258-aef6-5ddb55260471 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Lock "64a2f5d0-8a64-4e88-87f1-8cd406f83cda" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.339s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:57:37 compute-0 nova_compute[250018]: 2026-01-20 14:57:37.587 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:57:37 compute-0 podman[330653]: 2026-01-20 14:57:37.814688773 +0000 UTC m=+0.046396860 container create 20e82ad89fa2d8f59efaeaeb5a70db38136ae0b9937e528f84b9d1a802abc382 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_wescoff, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:57:37 compute-0 systemd[1]: Started libpod-conmon-20e82ad89fa2d8f59efaeaeb5a70db38136ae0b9937e528f84b9d1a802abc382.scope.
Jan 20 14:57:37 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:57:37 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e318 do_prune osdmap full prune enabled
Jan 20 14:57:37 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e319 e319: 3 total, 3 up, 3 in
Jan 20 14:57:37 compute-0 podman[330653]: 2026-01-20 14:57:37.796133083 +0000 UTC m=+0.027841200 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:57:37 compute-0 podman[330653]: 2026-01-20 14:57:37.891115249 +0000 UTC m=+0.122823356 container init 20e82ad89fa2d8f59efaeaeb5a70db38136ae0b9937e528f84b9d1a802abc382 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_wescoff, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 14:57:37 compute-0 nova_compute[250018]: 2026-01-20 14:57:37.891 250022 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768921042.889885, 26313fbe-d22b-4bbb-8216-3d6e227174e5 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:57:37 compute-0 nova_compute[250018]: 2026-01-20 14:57:37.891 250022 INFO nova.compute.manager [-] [instance: 26313fbe-d22b-4bbb-8216-3d6e227174e5] VM Stopped (Lifecycle Event)
Jan 20 14:57:37 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e319: 3 total, 3 up, 3 in
Jan 20 14:57:37 compute-0 podman[330653]: 2026-01-20 14:57:37.900729038 +0000 UTC m=+0.132437125 container start 20e82ad89fa2d8f59efaeaeb5a70db38136ae0b9937e528f84b9d1a802abc382 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_wescoff, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:57:37 compute-0 podman[330653]: 2026-01-20 14:57:37.905429504 +0000 UTC m=+0.137137881 container attach 20e82ad89fa2d8f59efaeaeb5a70db38136ae0b9937e528f84b9d1a802abc382 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_wescoff, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:57:37 compute-0 xenodochial_wescoff[330670]: 167 167
Jan 20 14:57:37 compute-0 systemd[1]: libpod-20e82ad89fa2d8f59efaeaeb5a70db38136ae0b9937e528f84b9d1a802abc382.scope: Deactivated successfully.
Jan 20 14:57:37 compute-0 podman[330653]: 2026-01-20 14:57:37.906490083 +0000 UTC m=+0.138198170 container died 20e82ad89fa2d8f59efaeaeb5a70db38136ae0b9937e528f84b9d1a802abc382 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_wescoff, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 20 14:57:37 compute-0 nova_compute[250018]: 2026-01-20 14:57:37.908 250022 DEBUG nova.compute.manager [None req-96099f05-57b7-49fd-9f36-efca814b7473 - - - - - -] [instance: 26313fbe-d22b-4bbb-8216-3d6e227174e5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:57:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-945f3a1517ecee4900e0fc6988acf1a5a72ecd0b39da3df68d0a9fa59e93a909-merged.mount: Deactivated successfully.
Jan 20 14:57:37 compute-0 podman[330653]: 2026-01-20 14:57:37.949059569 +0000 UTC m=+0.180767656 container remove 20e82ad89fa2d8f59efaeaeb5a70db38136ae0b9937e528f84b9d1a802abc382 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_wescoff, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:57:37 compute-0 systemd[1]: libpod-conmon-20e82ad89fa2d8f59efaeaeb5a70db38136ae0b9937e528f84b9d1a802abc382.scope: Deactivated successfully.
Jan 20 14:57:38 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2165: 321 pgs: 321 active+clean; 531 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 9.0 MiB/s rd, 15 MiB/s wr, 516 op/s
Jan 20 14:57:38 compute-0 podman[330693]: 2026-01-20 14:57:38.127684766 +0000 UTC m=+0.052120594 container create 85870a654f70d23026944d22a3c257028f07814752b05a1d685f580d3ee0e272 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_moore, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:57:38 compute-0 systemd[1]: Started libpod-conmon-85870a654f70d23026944d22a3c257028f07814752b05a1d685f580d3ee0e272.scope.
Jan 20 14:57:38 compute-0 podman[330693]: 2026-01-20 14:57:38.101639685 +0000 UTC m=+0.026075543 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:57:38 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:57:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5639b4be892daeac2d2c62686d2952808c9610e1e2a1941d8ee784be34bb072b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:57:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5639b4be892daeac2d2c62686d2952808c9610e1e2a1941d8ee784be34bb072b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:57:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5639b4be892daeac2d2c62686d2952808c9610e1e2a1941d8ee784be34bb072b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:57:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5639b4be892daeac2d2c62686d2952808c9610e1e2a1941d8ee784be34bb072b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:57:38 compute-0 podman[330693]: 2026-01-20 14:57:38.237301385 +0000 UTC m=+0.161737233 container init 85870a654f70d23026944d22a3c257028f07814752b05a1d685f580d3ee0e272 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_moore, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:57:38 compute-0 podman[330693]: 2026-01-20 14:57:38.244347976 +0000 UTC m=+0.168783814 container start 85870a654f70d23026944d22a3c257028f07814752b05a1d685f580d3ee0e272 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_moore, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:57:38 compute-0 podman[330693]: 2026-01-20 14:57:38.248303001 +0000 UTC m=+0.172738859 container attach 85870a654f70d23026944d22a3c257028f07814752b05a1d685f580d3ee0e272 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_moore, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True)
Jan 20 14:57:38 compute-0 nova_compute[250018]: 2026-01-20 14:57:38.471 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:57:38 compute-0 ceph-mon[74360]: osdmap e319: 3 total, 3 up, 3 in
Jan 20 14:57:38 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/4005775922' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:57:38 compute-0 ceph-mon[74360]: pgmap v2165: 321 pgs: 321 active+clean; 531 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 9.0 MiB/s rd, 15 MiB/s wr, 516 op/s
Jan 20 14:57:39 compute-0 sad_moore[330709]: {
Jan 20 14:57:39 compute-0 sad_moore[330709]:     "afc3740a-ccee-46f8-b343-22648cc89376": {
Jan 20 14:57:39 compute-0 sad_moore[330709]:         "ceph_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 14:57:39 compute-0 sad_moore[330709]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 20 14:57:39 compute-0 sad_moore[330709]:         "osd_id": 0,
Jan 20 14:57:39 compute-0 sad_moore[330709]:         "osd_uuid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 14:57:39 compute-0 sad_moore[330709]:         "type": "bluestore"
Jan 20 14:57:39 compute-0 sad_moore[330709]:     }
Jan 20 14:57:39 compute-0 sad_moore[330709]: }
Jan 20 14:57:39 compute-0 systemd[1]: libpod-85870a654f70d23026944d22a3c257028f07814752b05a1d685f580d3ee0e272.scope: Deactivated successfully.
Jan 20 14:57:39 compute-0 podman[330693]: 2026-01-20 14:57:39.04192708 +0000 UTC m=+0.966362948 container died 85870a654f70d23026944d22a3c257028f07814752b05a1d685f580d3ee0e272 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_moore, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:57:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-5639b4be892daeac2d2c62686d2952808c9610e1e2a1941d8ee784be34bb072b-merged.mount: Deactivated successfully.
Jan 20 14:57:39 compute-0 podman[330693]: 2026-01-20 14:57:39.116719583 +0000 UTC m=+1.041155421 container remove 85870a654f70d23026944d22a3c257028f07814752b05a1d685f580d3ee0e272 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_moore, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:57:39 compute-0 systemd[1]: libpod-conmon-85870a654f70d23026944d22a3c257028f07814752b05a1d685f580d3ee0e272.scope: Deactivated successfully.
Jan 20 14:57:39 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:57:39 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:57:39 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:57:39.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:57:39 compute-0 sudo[330589]: pam_unix(sudo:session): session closed for user root
Jan 20 14:57:39 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 14:57:39 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:57:39 compute-0 podman[330738]: 2026-01-20 14:57:39.163366598 +0000 UTC m=+0.083082936 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:57:39 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 14:57:39 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:57:39 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev c3c26505-b659-44a9-a925-15c4b72216c8 does not exist
Jan 20 14:57:39 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 71eedd7c-3221-44cb-91c7-86a5160f366d does not exist
Jan 20 14:57:39 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 6b335a3b-c98d-4561-9b3a-fff142e8c8cb does not exist
Jan 20 14:57:39 compute-0 podman[330731]: 2026-01-20 14:57:39.189216384 +0000 UTC m=+0.103933168 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 20 14:57:39 compute-0 sudo[330784]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:57:39 compute-0 nova_compute[250018]: 2026-01-20 14:57:39.230 250022 DEBUG nova.compute.manager [req-d559a994-261a-40e9-b092-fe6f1a2bd379 req-576315f4-a446-4392-aeea-fd5e2b5cd472 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 64a2f5d0-8a64-4e88-87f1-8cd406f83cda] Received event network-vif-plugged-2b81c6c8-fe13-44f6-85a2-1d0130b60a64 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:57:39 compute-0 nova_compute[250018]: 2026-01-20 14:57:39.231 250022 DEBUG oslo_concurrency.lockutils [req-d559a994-261a-40e9-b092-fe6f1a2bd379 req-576315f4-a446-4392-aeea-fd5e2b5cd472 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "64a2f5d0-8a64-4e88-87f1-8cd406f83cda-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:57:39 compute-0 nova_compute[250018]: 2026-01-20 14:57:39.231 250022 DEBUG oslo_concurrency.lockutils [req-d559a994-261a-40e9-b092-fe6f1a2bd379 req-576315f4-a446-4392-aeea-fd5e2b5cd472 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "64a2f5d0-8a64-4e88-87f1-8cd406f83cda-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:57:39 compute-0 nova_compute[250018]: 2026-01-20 14:57:39.231 250022 DEBUG oslo_concurrency.lockutils [req-d559a994-261a-40e9-b092-fe6f1a2bd379 req-576315f4-a446-4392-aeea-fd5e2b5cd472 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "64a2f5d0-8a64-4e88-87f1-8cd406f83cda-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:57:39 compute-0 nova_compute[250018]: 2026-01-20 14:57:39.232 250022 DEBUG nova.compute.manager [req-d559a994-261a-40e9-b092-fe6f1a2bd379 req-576315f4-a446-4392-aeea-fd5e2b5cd472 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 64a2f5d0-8a64-4e88-87f1-8cd406f83cda] No waiting events found dispatching network-vif-plugged-2b81c6c8-fe13-44f6-85a2-1d0130b60a64 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:57:39 compute-0 nova_compute[250018]: 2026-01-20 14:57:39.232 250022 WARNING nova.compute.manager [req-d559a994-261a-40e9-b092-fe6f1a2bd379 req-576315f4-a446-4392-aeea-fd5e2b5cd472 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 64a2f5d0-8a64-4e88-87f1-8cd406f83cda] Received unexpected event network-vif-plugged-2b81c6c8-fe13-44f6-85a2-1d0130b60a64 for instance with vm_state active and task_state None.
Jan 20 14:57:39 compute-0 sudo[330784]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:57:39 compute-0 sudo[330784]: pam_unix(sudo:session): session closed for user root
Jan 20 14:57:39 compute-0 sudo[330810]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 14:57:39 compute-0 sudo[330810]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:57:39 compute-0 sudo[330810]: pam_unix(sudo:session): session closed for user root
Jan 20 14:57:39 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:57:39 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:57:39 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:57:39.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:57:39 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e319 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:57:39 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e319 do_prune osdmap full prune enabled
Jan 20 14:57:39 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e320 e320: 3 total, 3 up, 3 in
Jan 20 14:57:39 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e320: 3 total, 3 up, 3 in
Jan 20 14:57:39 compute-0 ceph-mon[74360]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #105. Immutable memtables: 0.
Jan 20 14:57:39 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:57:39.986604) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 14:57:39 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:856] [default] [JOB 61] Flushing memtable with next log file: 105
Jan 20 14:57:39 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768921059986684, "job": 61, "event": "flush_started", "num_memtables": 1, "num_entries": 1457, "num_deletes": 256, "total_data_size": 2193436, "memory_usage": 2228360, "flush_reason": "Manual Compaction"}
Jan 20 14:57:39 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:885] [default] [JOB 61] Level-0 flush table #106: started
Jan 20 14:57:40 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768921060003198, "cf_name": "default", "job": 61, "event": "table_file_creation", "file_number": 106, "file_size": 2143187, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 47489, "largest_seqno": 48945, "table_properties": {"data_size": 2136383, "index_size": 3811, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 15753, "raw_average_key_size": 20, "raw_value_size": 2122188, "raw_average_value_size": 2822, "num_data_blocks": 166, "num_entries": 752, "num_filter_entries": 752, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768920955, "oldest_key_time": 1768920955, "file_creation_time": 1768921059, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1688fb6f-f397-4aac-a0e8-874ff91d97a7", "db_session_id": "5V3N2TVXYZBCXP55EZHK", "orig_file_number": 106, "seqno_to_time_mapping": "N/A"}}
Jan 20 14:57:40 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 61] Flush lasted 16627 microseconds, and 5613 cpu microseconds.
Jan 20 14:57:40 compute-0 ceph-mon[74360]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 14:57:40 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:57:40.003243) [db/flush_job.cc:967] [default] [JOB 61] Level-0 flush table #106: 2143187 bytes OK
Jan 20 14:57:40 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:57:40.003262) [db/memtable_list.cc:519] [default] Level-0 commit table #106 started
Jan 20 14:57:40 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:57:40.004665) [db/memtable_list.cc:722] [default] Level-0 commit table #106: memtable #1 done
Jan 20 14:57:40 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:57:40.004678) EVENT_LOG_v1 {"time_micros": 1768921060004674, "job": 61, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 14:57:40 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:57:40.004695) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 14:57:40 compute-0 ceph-mon[74360]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 61] Try to delete WAL files size 2186941, prev total WAL file size 2186941, number of live WAL files 2.
Jan 20 14:57:40 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000102.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:57:40 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:57:40.005460) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034323637' seq:72057594037927935, type:22 .. '7061786F730034353139' seq:0, type:0; will stop at (end)
Jan 20 14:57:40 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 62] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 14:57:40 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 61 Base level 0, inputs: [106(2092KB)], [104(9849KB)]
Jan 20 14:57:40 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768921060005550, "job": 62, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [106], "files_L6": [104], "score": -1, "input_data_size": 12228884, "oldest_snapshot_seqno": -1}
Jan 20 14:57:40 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2167: 321 pgs: 321 active+clean; 519 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 7.0 MiB/s rd, 10 MiB/s wr, 425 op/s
Jan 20 14:57:40 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 62] Generated table #107: 7776 keys, 10350963 bytes, temperature: kUnknown
Jan 20 14:57:40 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768921060106035, "cf_name": "default", "job": 62, "event": "table_file_creation", "file_number": 107, "file_size": 10350963, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10300158, "index_size": 30277, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19461, "raw_key_size": 201220, "raw_average_key_size": 25, "raw_value_size": 10162541, "raw_average_value_size": 1306, "num_data_blocks": 1188, "num_entries": 7776, "num_filter_entries": 7776, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768917305, "oldest_key_time": 0, "file_creation_time": 1768921060, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1688fb6f-f397-4aac-a0e8-874ff91d97a7", "db_session_id": "5V3N2TVXYZBCXP55EZHK", "orig_file_number": 107, "seqno_to_time_mapping": "N/A"}}
Jan 20 14:57:40 compute-0 ceph-mon[74360]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 14:57:40 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:57:40.106269) [db/compaction/compaction_job.cc:1663] [default] [JOB 62] Compacted 1@0 + 1@6 files to L6 => 10350963 bytes
Jan 20 14:57:40 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:57:40.107475) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 121.6 rd, 102.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 9.6 +0.0 blob) out(9.9 +0.0 blob), read-write-amplify(10.5) write-amplify(4.8) OK, records in: 8309, records dropped: 533 output_compression: NoCompression
Jan 20 14:57:40 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:57:40.107491) EVENT_LOG_v1 {"time_micros": 1768921060107483, "job": 62, "event": "compaction_finished", "compaction_time_micros": 100562, "compaction_time_cpu_micros": 41928, "output_level": 6, "num_output_files": 1, "total_output_size": 10350963, "num_input_records": 8309, "num_output_records": 7776, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 14:57:40 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000106.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:57:40 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768921060107967, "job": 62, "event": "table_file_deletion", "file_number": 106}
Jan 20 14:57:40 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000104.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:57:40 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768921060109738, "job": 62, "event": "table_file_deletion", "file_number": 104}
Jan 20 14:57:40 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:57:40.005315) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:57:40 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:57:40.109782) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:57:40 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:57:40.109788) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:57:40 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:57:40.109789) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:57:40 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:57:40.109791) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:57:40 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-14:57:40.109793) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:57:40 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:57:40 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:57:40 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/1873084837' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:57:40 compute-0 ceph-mon[74360]: osdmap e320: 3 total, 3 up, 3 in
Jan 20 14:57:40 compute-0 nova_compute[250018]: 2026-01-20 14:57:40.390 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:57:40 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:57:40.539 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=367c1a2c-b16a-4828-ab5a-626bb50023b4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '42'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:57:40 compute-0 ovn_controller[148666]: 2026-01-20T14:57:40Z|00504|binding|INFO|Releasing lport 8c6fd3ab-70a8-4e63-99de-f2e15ac0207f from this chassis (sb_readonly=0)
Jan 20 14:57:40 compute-0 nova_compute[250018]: 2026-01-20 14:57:40.885 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:57:41 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:57:41 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:57:41 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:57:41.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:57:41 compute-0 ovn_controller[148666]: 2026-01-20T14:57:41Z|00505|binding|INFO|Releasing lport 8c6fd3ab-70a8-4e63-99de-f2e15ac0207f from this chassis (sb_readonly=0)
Jan 20 14:57:41 compute-0 nova_compute[250018]: 2026-01-20 14:57:41.152 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:57:41 compute-0 ceph-mon[74360]: pgmap v2167: 321 pgs: 321 active+clean; 519 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 7.0 MiB/s rd, 10 MiB/s wr, 425 op/s
Jan 20 14:57:41 compute-0 nova_compute[250018]: 2026-01-20 14:57:41.374 250022 DEBUG oslo_concurrency.lockutils [None req-3b8207f9-c0a6-4623-a7a0-c39c03414de2 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Acquiring lock "64a2f5d0-8a64-4e88-87f1-8cd406f83cda" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:57:41 compute-0 nova_compute[250018]: 2026-01-20 14:57:41.374 250022 DEBUG oslo_concurrency.lockutils [None req-3b8207f9-c0a6-4623-a7a0-c39c03414de2 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Lock "64a2f5d0-8a64-4e88-87f1-8cd406f83cda" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:57:41 compute-0 nova_compute[250018]: 2026-01-20 14:57:41.374 250022 DEBUG nova.compute.manager [None req-3b8207f9-c0a6-4623-a7a0-c39c03414de2 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 64a2f5d0-8a64-4e88-87f1-8cd406f83cda] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:57:41 compute-0 nova_compute[250018]: 2026-01-20 14:57:41.378 250022 DEBUG nova.compute.manager [None req-3b8207f9-c0a6-4623-a7a0-c39c03414de2 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 64a2f5d0-8a64-4e88-87f1-8cd406f83cda] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Jan 20 14:57:41 compute-0 nova_compute[250018]: 2026-01-20 14:57:41.379 250022 DEBUG nova.objects.instance [None req-3b8207f9-c0a6-4623-a7a0-c39c03414de2 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Lazy-loading 'flavor' on Instance uuid 64a2f5d0-8a64-4e88-87f1-8cd406f83cda obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:57:41 compute-0 nova_compute[250018]: 2026-01-20 14:57:41.408 250022 DEBUG nova.virt.libvirt.driver [None req-3b8207f9-c0a6-4623-a7a0-c39c03414de2 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 64a2f5d0-8a64-4e88-87f1-8cd406f83cda] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Jan 20 14:57:41 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:57:41 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:57:41 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:57:41.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:57:42 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2168: 321 pgs: 321 active+clean; 485 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 5.0 MiB/s rd, 3.6 MiB/s wr, 374 op/s
Jan 20 14:57:42 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/721238381' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:57:42 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/2972576168' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:57:43 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:57:43 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:57:43 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:57:43.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:57:43 compute-0 ceph-mon[74360]: pgmap v2168: 321 pgs: 321 active+clean; 485 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 5.0 MiB/s rd, 3.6 MiB/s wr, 374 op/s
Jan 20 14:57:43 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/663477354' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:57:43 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/2004004617' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:57:43 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:57:43 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:57:43 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:57:43.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:57:43 compute-0 nova_compute[250018]: 2026-01-20 14:57:43.474 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:57:44 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2169: 321 pgs: 321 active+clean; 472 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 39 KiB/s wr, 157 op/s
Jan 20 14:57:44 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/2874127425' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:57:44 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e320 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:57:44 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e320 do_prune osdmap full prune enabled
Jan 20 14:57:44 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e321 e321: 3 total, 3 up, 3 in
Jan 20 14:57:44 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e321: 3 total, 3 up, 3 in
Jan 20 14:57:45 compute-0 nova_compute[250018]: 2026-01-20 14:57:45.046 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:57:45 compute-0 nova_compute[250018]: 2026-01-20 14:57:45.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:57:45 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:57:45 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:57:45 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:57:45.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:57:45 compute-0 ceph-mon[74360]: pgmap v2169: 321 pgs: 321 active+clean; 472 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 39 KiB/s wr, 157 op/s
Jan 20 14:57:45 compute-0 ceph-mon[74360]: osdmap e321: 3 total, 3 up, 3 in
Jan 20 14:57:45 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/607676245' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:57:45 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/607676245' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:57:45 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:57:45 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:57:45 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:57:45.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:57:45 compute-0 nova_compute[250018]: 2026-01-20 14:57:45.436 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:57:46 compute-0 nova_compute[250018]: 2026-01-20 14:57:46.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:57:46 compute-0 nova_compute[250018]: 2026-01-20 14:57:46.050 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 14:57:46 compute-0 nova_compute[250018]: 2026-01-20 14:57:46.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:57:46 compute-0 nova_compute[250018]: 2026-01-20 14:57:46.087 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:57:46 compute-0 nova_compute[250018]: 2026-01-20 14:57:46.087 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:57:46 compute-0 nova_compute[250018]: 2026-01-20 14:57:46.087 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:57:46 compute-0 nova_compute[250018]: 2026-01-20 14:57:46.088 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 14:57:46 compute-0 nova_compute[250018]: 2026-01-20 14:57:46.088 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:57:46 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2171: 321 pgs: 321 active+clean; 406 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 60 KiB/s wr, 267 op/s
Jan 20 14:57:46 compute-0 ceph-mon[74360]: pgmap v2171: 321 pgs: 321 active+clean; 406 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 60 KiB/s wr, 267 op/s
Jan 20 14:57:46 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/3342654723' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:57:46 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:57:46 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1436669365' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:57:46 compute-0 nova_compute[250018]: 2026-01-20 14:57:46.537 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:57:46 compute-0 nova_compute[250018]: 2026-01-20 14:57:46.626 250022 DEBUG nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] skipping disk for instance-00000089 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:57:46 compute-0 nova_compute[250018]: 2026-01-20 14:57:46.626 250022 DEBUG nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] skipping disk for instance-00000089 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:57:46 compute-0 nova_compute[250018]: 2026-01-20 14:57:46.629 250022 DEBUG nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] skipping disk for instance-0000007f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:57:46 compute-0 nova_compute[250018]: 2026-01-20 14:57:46.630 250022 DEBUG nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] skipping disk for instance-0000007f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:57:46 compute-0 nova_compute[250018]: 2026-01-20 14:57:46.788 250022 WARNING nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:57:46 compute-0 nova_compute[250018]: 2026-01-20 14:57:46.789 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3852MB free_disk=20.876060485839844GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 14:57:46 compute-0 nova_compute[250018]: 2026-01-20 14:57:46.789 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:57:46 compute-0 nova_compute[250018]: 2026-01-20 14:57:46.790 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:57:47 compute-0 nova_compute[250018]: 2026-01-20 14:57:47.012 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Instance 11c82470-ab02-4424-908b-705f1f65e062 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 14:57:47 compute-0 nova_compute[250018]: 2026-01-20 14:57:47.013 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Instance 64a2f5d0-8a64-4e88-87f1-8cd406f83cda actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 14:57:47 compute-0 nova_compute[250018]: 2026-01-20 14:57:47.013 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 14:57:47 compute-0 nova_compute[250018]: 2026-01-20 14:57:47.013 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 14:57:47 compute-0 nova_compute[250018]: 2026-01-20 14:57:47.068 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:57:47 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:57:47 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:57:47 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:57:47.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:57:47 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/1436669365' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:57:47 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/1505514556' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:57:47 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:57:47 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:57:47 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:57:47.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:57:47 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:57:47 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1674329294' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:57:47 compute-0 nova_compute[250018]: 2026-01-20 14:57:47.514 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:57:47 compute-0 nova_compute[250018]: 2026-01-20 14:57:47.519 250022 DEBUG nova.compute.provider_tree [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:57:47 compute-0 nova_compute[250018]: 2026-01-20 14:57:47.641 250022 DEBUG nova.scheduler.client.report [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:57:47 compute-0 nova_compute[250018]: 2026-01-20 14:57:47.762 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 14:57:47 compute-0 nova_compute[250018]: 2026-01-20 14:57:47.763 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.973s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:57:48 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2172: 321 pgs: 321 active+clean; 406 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 59 KiB/s wr, 239 op/s
Jan 20 14:57:48 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/2911748478' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:57:48 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/1674329294' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:57:48 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/1045157438' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:57:48 compute-0 ceph-mon[74360]: pgmap v2172: 321 pgs: 321 active+clean; 406 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 59 KiB/s wr, 239 op/s
Jan 20 14:57:48 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/4255159192' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:57:48 compute-0 nova_compute[250018]: 2026-01-20 14:57:48.475 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:57:48 compute-0 nova_compute[250018]: 2026-01-20 14:57:48.764 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:57:48 compute-0 nova_compute[250018]: 2026-01-20 14:57:48.764 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:57:48 compute-0 nova_compute[250018]: 2026-01-20 14:57:48.764 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:57:49 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:57:49 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:57:49 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:57:49.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:57:49 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/3301246247' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:57:49 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:57:49 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:57:49 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:57:49.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:57:49 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e321 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:57:50 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2173: 321 pgs: 321 active+clean; 391 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 5.0 MiB/s rd, 48 KiB/s wr, 280 op/s
Jan 20 14:57:50 compute-0 ceph-mon[74360]: pgmap v2173: 321 pgs: 321 active+clean; 391 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 5.0 MiB/s rd, 48 KiB/s wr, 280 op/s
Jan 20 14:57:50 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/319160311' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:57:50 compute-0 nova_compute[250018]: 2026-01-20 14:57:50.439 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:57:51 compute-0 nova_compute[250018]: 2026-01-20 14:57:51.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:57:51 compute-0 nova_compute[250018]: 2026-01-20 14:57:51.052 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 14:57:51 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:57:51 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:57:51 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:57:51.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:57:51 compute-0 nova_compute[250018]: 2026-01-20 14:57:51.160 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 20 14:57:51 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:57:51 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:57:51 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:57:51.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:57:51 compute-0 nova_compute[250018]: 2026-01-20 14:57:51.460 250022 DEBUG nova.virt.libvirt.driver [None req-3b8207f9-c0a6-4623-a7a0-c39c03414de2 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 64a2f5d0-8a64-4e88-87f1-8cd406f83cda] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Jan 20 14:57:52 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2174: 321 pgs: 321 active+clean; 345 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 5.1 MiB/s rd, 1.9 MiB/s wr, 314 op/s
Jan 20 14:57:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:57:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:57:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:57:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:57:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:57:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:57:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Optimize plan auto_2026-01-20_14:57:52
Jan 20 14:57:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 14:57:52 compute-0 ceph-mgr[74653]: [balancer INFO root] do_upmap
Jan 20 14:57:52 compute-0 ceph-mgr[74653]: [balancer INFO root] pools ['cephfs.cephfs.data', 'vms', 'backups', 'default.rgw.log', 'default.rgw.meta', 'volumes', 'default.rgw.control', '.mgr', '.rgw.root', 'cephfs.cephfs.meta', 'images']
Jan 20 14:57:52 compute-0 ceph-mgr[74653]: [balancer INFO root] prepared 0/10 changes
Jan 20 14:57:53 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:57:53 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:57:53 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:57:53.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:57:53 compute-0 ceph-mon[74360]: pgmap v2174: 321 pgs: 321 active+clean; 345 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 5.1 MiB/s rd, 1.9 MiB/s wr, 314 op/s
Jan 20 14:57:53 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:57:53 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:57:53 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:57:53.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:57:53 compute-0 sudo[330888]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:57:53 compute-0 sudo[330888]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:57:53 compute-0 sudo[330888]: pam_unix(sudo:session): session closed for user root
Jan 20 14:57:53 compute-0 nova_compute[250018]: 2026-01-20 14:57:53.478 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:57:53 compute-0 sudo[330913]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:57:53 compute-0 sudo[330913]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:57:53 compute-0 sudo[330913]: pam_unix(sudo:session): session closed for user root
Jan 20 14:57:53 compute-0 kernel: tap2b81c6c8-fe (unregistering): left promiscuous mode
Jan 20 14:57:53 compute-0 NetworkManager[48960]: <info>  [1768921073.7154] device (tap2b81c6c8-fe): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 14:57:53 compute-0 ovn_controller[148666]: 2026-01-20T14:57:53Z|00506|binding|INFO|Releasing lport 2b81c6c8-fe13-44f6-85a2-1d0130b60a64 from this chassis (sb_readonly=0)
Jan 20 14:57:53 compute-0 nova_compute[250018]: 2026-01-20 14:57:53.721 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:57:53 compute-0 ovn_controller[148666]: 2026-01-20T14:57:53Z|00507|binding|INFO|Setting lport 2b81c6c8-fe13-44f6-85a2-1d0130b60a64 down in Southbound
Jan 20 14:57:53 compute-0 ovn_controller[148666]: 2026-01-20T14:57:53Z|00508|binding|INFO|Removing iface tap2b81c6c8-fe ovn-installed in OVS
Jan 20 14:57:53 compute-0 nova_compute[250018]: 2026-01-20 14:57:53.723 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:57:53 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:57:53.729 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:87:f5:04 10.100.0.7'], port_security=['fa:16:3e:87:f5:04 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '64a2f5d0-8a64-4e88-87f1-8cd406f83cda', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ca6cd0afe0ab41e3ab36d21a4129f734', 'neutron:revision_number': '4', 'neutron:security_group_ids': '819ea4ae-b994-44d1-9da3-8b0ca609fb2a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ee620e3e-ef7e-4826-b394-b8a89442b353, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=2b81c6c8-fe13-44f6-85a2-1d0130b60a64) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:57:53 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:57:53.731 160071 INFO neutron.agent.ovn.metadata.agent [-] Port 2b81c6c8-fe13-44f6-85a2-1d0130b60a64 in datapath f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c unbound from our chassis
Jan 20 14:57:53 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:57:53.734 160071 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c
Jan 20 14:57:53 compute-0 nova_compute[250018]: 2026-01-20 14:57:53.737 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:57:53 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:57:53.749 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[0e73dc42-649b-43b1-9cb1-78fe58be2b32]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:57:53 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:57:53.775 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[dab0417f-9ec5-4a82-ae29-6245121608f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:57:53 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:57:53.779 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[77fe5f97-2959-4354-bade-efa2eb9033cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:57:53 compute-0 systemd[1]: machine-qemu\x2d65\x2dinstance\x2d00000089.scope: Deactivated successfully.
Jan 20 14:57:53 compute-0 systemd[1]: machine-qemu\x2d65\x2dinstance\x2d00000089.scope: Consumed 13.974s CPU time.
Jan 20 14:57:53 compute-0 systemd-machined[216401]: Machine qemu-65-instance-00000089 terminated.
Jan 20 14:57:53 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:57:53.803 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[a72ef8bc-a013-4d8c-b3e5-b7a6f8b6580e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:57:53 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:57:53.824 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[4085ba12-a84e-4119-a068-b97476715cc2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf4c8474b-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:14:a2:5f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 15, 'rx_bytes': 616, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 15, 'rx_bytes': 616, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 151], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 687884, 'reachable_time': 19711, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 330951, 'error': None, 'target': 'ovnmeta-f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:57:53 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:57:53.840 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[b40948db-24d1-443e-a7e8-4808b724c6ff]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapf4c8474b-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 687894, 'tstamp': 687894}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 330952, 'error': None, 'target': 'ovnmeta-f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapf4c8474b-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 687897, 'tstamp': 687897}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 330952, 'error': None, 'target': 'ovnmeta-f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:57:53 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:57:53.841 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf4c8474b-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:57:53 compute-0 nova_compute[250018]: 2026-01-20 14:57:53.842 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:57:53 compute-0 nova_compute[250018]: 2026-01-20 14:57:53.846 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:57:53 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:57:53.846 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf4c8474b-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:57:53 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:57:53.847 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:57:53 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:57:53.847 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf4c8474b-00, col_values=(('external_ids', {'iface-id': '8c6fd3ab-70a8-4e63-99de-f2e15ac0207f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:57:53 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:57:53.848 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:57:54 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2175: 321 pgs: 321 active+clean; 353 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 6.2 MiB/s rd, 2.5 MiB/s wr, 357 op/s
Jan 20 14:57:54 compute-0 nova_compute[250018]: 2026-01-20 14:57:54.472 250022 INFO nova.virt.libvirt.driver [None req-3b8207f9-c0a6-4623-a7a0-c39c03414de2 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 64a2f5d0-8a64-4e88-87f1-8cd406f83cda] Instance shutdown successfully after 13 seconds.
Jan 20 14:57:54 compute-0 nova_compute[250018]: 2026-01-20 14:57:54.479 250022 INFO nova.virt.libvirt.driver [-] [instance: 64a2f5d0-8a64-4e88-87f1-8cd406f83cda] Instance destroyed successfully.
Jan 20 14:57:54 compute-0 nova_compute[250018]: 2026-01-20 14:57:54.480 250022 DEBUG nova.objects.instance [None req-3b8207f9-c0a6-4623-a7a0-c39c03414de2 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Lazy-loading 'numa_topology' on Instance uuid 64a2f5d0-8a64-4e88-87f1-8cd406f83cda obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:57:54 compute-0 nova_compute[250018]: 2026-01-20 14:57:54.527 250022 DEBUG nova.compute.manager [None req-3b8207f9-c0a6-4623-a7a0-c39c03414de2 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 64a2f5d0-8a64-4e88-87f1-8cd406f83cda] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:57:54 compute-0 nova_compute[250018]: 2026-01-20 14:57:54.649 250022 DEBUG oslo_concurrency.lockutils [None req-3b8207f9-c0a6-4623-a7a0-c39c03414de2 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Lock "64a2f5d0-8a64-4e88-87f1-8cd406f83cda" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 13.275s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:57:54 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e321 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:57:55 compute-0 nova_compute[250018]: 2026-01-20 14:57:55.101 250022 DEBUG nova.compute.manager [req-58adad1c-eb7f-4c81-8a8f-6ff19f8942a2 req-cdce7841-68bd-4f48-94ba-41a1a90f9c10 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 64a2f5d0-8a64-4e88-87f1-8cd406f83cda] Received event network-vif-unplugged-2b81c6c8-fe13-44f6-85a2-1d0130b60a64 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:57:55 compute-0 nova_compute[250018]: 2026-01-20 14:57:55.102 250022 DEBUG oslo_concurrency.lockutils [req-58adad1c-eb7f-4c81-8a8f-6ff19f8942a2 req-cdce7841-68bd-4f48-94ba-41a1a90f9c10 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "64a2f5d0-8a64-4e88-87f1-8cd406f83cda-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:57:55 compute-0 nova_compute[250018]: 2026-01-20 14:57:55.102 250022 DEBUG oslo_concurrency.lockutils [req-58adad1c-eb7f-4c81-8a8f-6ff19f8942a2 req-cdce7841-68bd-4f48-94ba-41a1a90f9c10 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "64a2f5d0-8a64-4e88-87f1-8cd406f83cda-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:57:55 compute-0 nova_compute[250018]: 2026-01-20 14:57:55.103 250022 DEBUG oslo_concurrency.lockutils [req-58adad1c-eb7f-4c81-8a8f-6ff19f8942a2 req-cdce7841-68bd-4f48-94ba-41a1a90f9c10 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "64a2f5d0-8a64-4e88-87f1-8cd406f83cda-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:57:55 compute-0 nova_compute[250018]: 2026-01-20 14:57:55.103 250022 DEBUG nova.compute.manager [req-58adad1c-eb7f-4c81-8a8f-6ff19f8942a2 req-cdce7841-68bd-4f48-94ba-41a1a90f9c10 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 64a2f5d0-8a64-4e88-87f1-8cd406f83cda] No waiting events found dispatching network-vif-unplugged-2b81c6c8-fe13-44f6-85a2-1d0130b60a64 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:57:55 compute-0 nova_compute[250018]: 2026-01-20 14:57:55.104 250022 WARNING nova.compute.manager [req-58adad1c-eb7f-4c81-8a8f-6ff19f8942a2 req-cdce7841-68bd-4f48-94ba-41a1a90f9c10 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 64a2f5d0-8a64-4e88-87f1-8cd406f83cda] Received unexpected event network-vif-unplugged-2b81c6c8-fe13-44f6-85a2-1d0130b60a64 for instance with vm_state stopped and task_state None.
Jan 20 14:57:55 compute-0 nova_compute[250018]: 2026-01-20 14:57:55.104 250022 DEBUG nova.compute.manager [req-58adad1c-eb7f-4c81-8a8f-6ff19f8942a2 req-cdce7841-68bd-4f48-94ba-41a1a90f9c10 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 64a2f5d0-8a64-4e88-87f1-8cd406f83cda] Received event network-vif-plugged-2b81c6c8-fe13-44f6-85a2-1d0130b60a64 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:57:55 compute-0 nova_compute[250018]: 2026-01-20 14:57:55.105 250022 DEBUG oslo_concurrency.lockutils [req-58adad1c-eb7f-4c81-8a8f-6ff19f8942a2 req-cdce7841-68bd-4f48-94ba-41a1a90f9c10 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "64a2f5d0-8a64-4e88-87f1-8cd406f83cda-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:57:55 compute-0 nova_compute[250018]: 2026-01-20 14:57:55.105 250022 DEBUG oslo_concurrency.lockutils [req-58adad1c-eb7f-4c81-8a8f-6ff19f8942a2 req-cdce7841-68bd-4f48-94ba-41a1a90f9c10 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "64a2f5d0-8a64-4e88-87f1-8cd406f83cda-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:57:55 compute-0 nova_compute[250018]: 2026-01-20 14:57:55.106 250022 DEBUG oslo_concurrency.lockutils [req-58adad1c-eb7f-4c81-8a8f-6ff19f8942a2 req-cdce7841-68bd-4f48-94ba-41a1a90f9c10 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "64a2f5d0-8a64-4e88-87f1-8cd406f83cda-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:57:55 compute-0 nova_compute[250018]: 2026-01-20 14:57:55.106 250022 DEBUG nova.compute.manager [req-58adad1c-eb7f-4c81-8a8f-6ff19f8942a2 req-cdce7841-68bd-4f48-94ba-41a1a90f9c10 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 64a2f5d0-8a64-4e88-87f1-8cd406f83cda] No waiting events found dispatching network-vif-plugged-2b81c6c8-fe13-44f6-85a2-1d0130b60a64 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:57:55 compute-0 nova_compute[250018]: 2026-01-20 14:57:55.106 250022 WARNING nova.compute.manager [req-58adad1c-eb7f-4c81-8a8f-6ff19f8942a2 req-cdce7841-68bd-4f48-94ba-41a1a90f9c10 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 64a2f5d0-8a64-4e88-87f1-8cd406f83cda] Received unexpected event network-vif-plugged-2b81c6c8-fe13-44f6-85a2-1d0130b60a64 for instance with vm_state stopped and task_state None.
Jan 20 14:57:55 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:57:55 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:57:55 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:57:55.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:57:55 compute-0 ceph-mon[74360]: pgmap v2175: 321 pgs: 321 active+clean; 353 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 6.2 MiB/s rd, 2.5 MiB/s wr, 357 op/s
Jan 20 14:57:55 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:57:55 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:57:55 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:57:55.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:57:55 compute-0 nova_compute[250018]: 2026-01-20 14:57:55.481 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:57:56 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2176: 321 pgs: 321 active+clean; 360 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 6.6 MiB/s rd, 2.3 MiB/s wr, 376 op/s
Jan 20 14:57:56 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/2303621507' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:57:57 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:57:57 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:57:57 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:57:57.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:57:57 compute-0 ceph-mon[74360]: pgmap v2176: 321 pgs: 321 active+clean; 360 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 6.6 MiB/s rd, 2.3 MiB/s wr, 376 op/s
Jan 20 14:57:57 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:57:57 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:57:57 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:57:57.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:57:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 14:57:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:57:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 14:57:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:57:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:57:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:57:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:57:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:57:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:57:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:57:58 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2177: 321 pgs: 321 active+clean; 360 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 5.4 MiB/s rd, 2.2 MiB/s wr, 275 op/s
Jan 20 14:57:58 compute-0 ceph-mon[74360]: pgmap v2177: 321 pgs: 321 active+clean; 360 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 5.4 MiB/s rd, 2.2 MiB/s wr, 275 op/s
Jan 20 14:57:58 compute-0 nova_compute[250018]: 2026-01-20 14:57:58.479 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:57:59 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:57:59 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:57:59 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:57:59.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:57:59 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:57:59 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:57:59 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:57:59.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:57:59 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e321 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:58:00 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2178: 321 pgs: 321 active+clean; 389 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 5.7 MiB/s rd, 3.5 MiB/s wr, 328 op/s
Jan 20 14:58:00 compute-0 nova_compute[250018]: 2026-01-20 14:58:00.531 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:58:01 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:58:01 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:58:01 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:58:01.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:58:01 compute-0 ceph-mon[74360]: pgmap v2178: 321 pgs: 321 active+clean; 389 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 5.7 MiB/s rd, 3.5 MiB/s wr, 328 op/s
Jan 20 14:58:01 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:58:01 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:58:01 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:58:01.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:58:02 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2179: 321 pgs: 321 active+clean; 424 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 6.3 MiB/s wr, 328 op/s
Jan 20 14:58:03 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:58:03 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:58:03 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:58:03.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:58:03 compute-0 ceph-mon[74360]: pgmap v2179: 321 pgs: 321 active+clean; 424 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 6.3 MiB/s wr, 328 op/s
Jan 20 14:58:03 compute-0 nova_compute[250018]: 2026-01-20 14:58:03.250 250022 DEBUG oslo_concurrency.lockutils [None req-6255194d-d225-41b1-949b-0c6705d32669 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Acquiring lock "64a2f5d0-8a64-4e88-87f1-8cd406f83cda" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:58:03 compute-0 nova_compute[250018]: 2026-01-20 14:58:03.250 250022 DEBUG oslo_concurrency.lockutils [None req-6255194d-d225-41b1-949b-0c6705d32669 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Lock "64a2f5d0-8a64-4e88-87f1-8cd406f83cda" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:58:03 compute-0 nova_compute[250018]: 2026-01-20 14:58:03.250 250022 DEBUG oslo_concurrency.lockutils [None req-6255194d-d225-41b1-949b-0c6705d32669 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Acquiring lock "64a2f5d0-8a64-4e88-87f1-8cd406f83cda-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:58:03 compute-0 nova_compute[250018]: 2026-01-20 14:58:03.251 250022 DEBUG oslo_concurrency.lockutils [None req-6255194d-d225-41b1-949b-0c6705d32669 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Lock "64a2f5d0-8a64-4e88-87f1-8cd406f83cda-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:58:03 compute-0 nova_compute[250018]: 2026-01-20 14:58:03.251 250022 DEBUG oslo_concurrency.lockutils [None req-6255194d-d225-41b1-949b-0c6705d32669 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Lock "64a2f5d0-8a64-4e88-87f1-8cd406f83cda-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:58:03 compute-0 nova_compute[250018]: 2026-01-20 14:58:03.252 250022 INFO nova.compute.manager [None req-6255194d-d225-41b1-949b-0c6705d32669 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 64a2f5d0-8a64-4e88-87f1-8cd406f83cda] Terminating instance
Jan 20 14:58:03 compute-0 nova_compute[250018]: 2026-01-20 14:58:03.253 250022 DEBUG nova.compute.manager [None req-6255194d-d225-41b1-949b-0c6705d32669 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 64a2f5d0-8a64-4e88-87f1-8cd406f83cda] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 14:58:03 compute-0 nova_compute[250018]: 2026-01-20 14:58:03.258 250022 INFO nova.virt.libvirt.driver [-] [instance: 64a2f5d0-8a64-4e88-87f1-8cd406f83cda] Instance destroyed successfully.
Jan 20 14:58:03 compute-0 nova_compute[250018]: 2026-01-20 14:58:03.258 250022 DEBUG nova.objects.instance [None req-6255194d-d225-41b1-949b-0c6705d32669 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Lazy-loading 'resources' on Instance uuid 64a2f5d0-8a64-4e88-87f1-8cd406f83cda obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:58:03 compute-0 nova_compute[250018]: 2026-01-20 14:58:03.283 250022 DEBUG nova.virt.libvirt.vif [None req-6255194d-d225-41b1-949b-0c6705d32669 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:57:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-623581889',display_name='tempest-Íñstáñcé-1073298369',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-623581889',id=137,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:57:37Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='ca6cd0afe0ab41e3ab36d21a4129f734',ramdisk_id='',reservation_id='r-5z001ple',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-405461620',owner_user_name='tempest-ServersTestJSON-405461620-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T14:57:58Z,user_data=None,user_id='395a5c503218411284bc94c45263d1fb',uuid=64a2f5d0-8a64-4e88-87f1-8cd406f83cda,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "2b81c6c8-fe13-44f6-85a2-1d0130b60a64", "address": "fa:16:3e:87:f5:04", "network": {"id": "f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c", "bridge": "br-int", "label": "tempest-ServersTestJSON-1745321011-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca6cd0afe0ab41e3ab36d21a4129f734", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b81c6c8-fe", "ovs_interfaceid": "2b81c6c8-fe13-44f6-85a2-1d0130b60a64", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 14:58:03 compute-0 nova_compute[250018]: 2026-01-20 14:58:03.284 250022 DEBUG nova.network.os_vif_util [None req-6255194d-d225-41b1-949b-0c6705d32669 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Converting VIF {"id": "2b81c6c8-fe13-44f6-85a2-1d0130b60a64", "address": "fa:16:3e:87:f5:04", "network": {"id": "f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c", "bridge": "br-int", "label": "tempest-ServersTestJSON-1745321011-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca6cd0afe0ab41e3ab36d21a4129f734", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b81c6c8-fe", "ovs_interfaceid": "2b81c6c8-fe13-44f6-85a2-1d0130b60a64", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:58:03 compute-0 nova_compute[250018]: 2026-01-20 14:58:03.284 250022 DEBUG nova.network.os_vif_util [None req-6255194d-d225-41b1-949b-0c6705d32669 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:87:f5:04,bridge_name='br-int',has_traffic_filtering=True,id=2b81c6c8-fe13-44f6-85a2-1d0130b60a64,network=Network(f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2b81c6c8-fe') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:58:03 compute-0 nova_compute[250018]: 2026-01-20 14:58:03.285 250022 DEBUG os_vif [None req-6255194d-d225-41b1-949b-0c6705d32669 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:87:f5:04,bridge_name='br-int',has_traffic_filtering=True,id=2b81c6c8-fe13-44f6-85a2-1d0130b60a64,network=Network(f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2b81c6c8-fe') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 14:58:03 compute-0 nova_compute[250018]: 2026-01-20 14:58:03.288 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:58:03 compute-0 nova_compute[250018]: 2026-01-20 14:58:03.288 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2b81c6c8-fe, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:58:03 compute-0 nova_compute[250018]: 2026-01-20 14:58:03.290 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:58:03 compute-0 nova_compute[250018]: 2026-01-20 14:58:03.291 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:58:03 compute-0 nova_compute[250018]: 2026-01-20 14:58:03.294 250022 INFO os_vif [None req-6255194d-d225-41b1-949b-0c6705d32669 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:87:f5:04,bridge_name='br-int',has_traffic_filtering=True,id=2b81c6c8-fe13-44f6-85a2-1d0130b60a64,network=Network(f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2b81c6c8-fe')
Jan 20 14:58:03 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:58:03 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:58:03 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:58:03.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:58:03 compute-0 nova_compute[250018]: 2026-01-20 14:58:03.481 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:58:03 compute-0 nova_compute[250018]: 2026-01-20 14:58:03.703 250022 INFO nova.virt.libvirt.driver [None req-6255194d-d225-41b1-949b-0c6705d32669 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 64a2f5d0-8a64-4e88-87f1-8cd406f83cda] Deleting instance files /var/lib/nova/instances/64a2f5d0-8a64-4e88-87f1-8cd406f83cda_del
Jan 20 14:58:03 compute-0 nova_compute[250018]: 2026-01-20 14:58:03.704 250022 INFO nova.virt.libvirt.driver [None req-6255194d-d225-41b1-949b-0c6705d32669 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 64a2f5d0-8a64-4e88-87f1-8cd406f83cda] Deletion of /var/lib/nova/instances/64a2f5d0-8a64-4e88-87f1-8cd406f83cda_del complete
Jan 20 14:58:03 compute-0 nova_compute[250018]: 2026-01-20 14:58:03.820 250022 INFO nova.compute.manager [None req-6255194d-d225-41b1-949b-0c6705d32669 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 64a2f5d0-8a64-4e88-87f1-8cd406f83cda] Took 0.57 seconds to destroy the instance on the hypervisor.
Jan 20 14:58:03 compute-0 nova_compute[250018]: 2026-01-20 14:58:03.821 250022 DEBUG oslo.service.loopingcall [None req-6255194d-d225-41b1-949b-0c6705d32669 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 14:58:03 compute-0 nova_compute[250018]: 2026-01-20 14:58:03.822 250022 DEBUG nova.compute.manager [-] [instance: 64a2f5d0-8a64-4e88-87f1-8cd406f83cda] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 14:58:03 compute-0 nova_compute[250018]: 2026-01-20 14:58:03.822 250022 DEBUG nova.network.neutron [-] [instance: 64a2f5d0-8a64-4e88-87f1-8cd406f83cda] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 14:58:04 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2180: 321 pgs: 321 active+clean; 407 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 5.2 MiB/s wr, 253 op/s
Jan 20 14:58:04 compute-0 ceph-mon[74360]: pgmap v2180: 321 pgs: 321 active+clean; 407 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 5.2 MiB/s wr, 253 op/s
Jan 20 14:58:04 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e321 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:58:05 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:58:05 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:58:05 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:58:05.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:58:05 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:58:05 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:58:05 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:58:05.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:58:06 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2181: 321 pgs: 321 active+clean; 372 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 6.4 MiB/s wr, 255 op/s
Jan 20 14:58:07 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:58:07 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:58:07 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:58:07.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:58:07 compute-0 ceph-mon[74360]: pgmap v2181: 321 pgs: 321 active+clean; 372 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 6.4 MiB/s wr, 255 op/s
Jan 20 14:58:07 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:58:07 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:58:07 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:58:07.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:58:07 compute-0 nova_compute[250018]: 2026-01-20 14:58:07.887 250022 DEBUG nova.network.neutron [-] [instance: 64a2f5d0-8a64-4e88-87f1-8cd406f83cda] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:58:07 compute-0 nova_compute[250018]: 2026-01-20 14:58:07.916 250022 INFO nova.compute.manager [-] [instance: 64a2f5d0-8a64-4e88-87f1-8cd406f83cda] Took 4.09 seconds to deallocate network for instance.
Jan 20 14:58:08 compute-0 nova_compute[250018]: 2026-01-20 14:58:08.016 250022 DEBUG oslo_concurrency.lockutils [None req-6255194d-d225-41b1-949b-0c6705d32669 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:58:08 compute-0 nova_compute[250018]: 2026-01-20 14:58:08.017 250022 DEBUG oslo_concurrency.lockutils [None req-6255194d-d225-41b1-949b-0c6705d32669 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:58:08 compute-0 nova_compute[250018]: 2026-01-20 14:58:08.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._run_image_cache_manager_pass run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:58:08 compute-0 nova_compute[250018]: 2026-01-20 14:58:08.051 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:58:08 compute-0 nova_compute[250018]: 2026-01-20 14:58:08.052 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:58:08 compute-0 nova_compute[250018]: 2026-01-20 14:58:08.052 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:58:08 compute-0 nova_compute[250018]: 2026-01-20 14:58:08.052 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:58:08 compute-0 nova_compute[250018]: 2026-01-20 14:58:08.053 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:58:08 compute-0 nova_compute[250018]: 2026-01-20 14:58:08.053 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:58:08 compute-0 nova_compute[250018]: 2026-01-20 14:58:08.095 250022 DEBUG nova.virt.libvirt.imagecache [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Verify base images _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:314
Jan 20 14:58:08 compute-0 nova_compute[250018]: 2026-01-20 14:58:08.095 250022 DEBUG nova.virt.libvirt.imagecache [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Image id a32b3e07-16d8-46fd-9a7b-c242c432fcf9 yields fingerprint 82d5c1918fd7c974214c7a48c1793a7a82560462 _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319
Jan 20 14:58:08 compute-0 nova_compute[250018]: 2026-01-20 14:58:08.095 250022 INFO nova.virt.libvirt.imagecache [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] image a32b3e07-16d8-46fd-9a7b-c242c432fcf9 at (/var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462): checking
Jan 20 14:58:08 compute-0 nova_compute[250018]: 2026-01-20 14:58:08.095 250022 DEBUG nova.virt.libvirt.imagecache [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] image a32b3e07-16d8-46fd-9a7b-c242c432fcf9 at (/var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462): image is in use _mark_in_use /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:279
Jan 20 14:58:08 compute-0 nova_compute[250018]: 2026-01-20 14:58:08.097 250022 INFO oslo.privsep.daemon [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'nova.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmp8jgpkoop/privsep.sock']
Jan 20 14:58:08 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2182: 321 pgs: 321 active+clean; 372 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1002 KiB/s rd, 6.3 MiB/s wr, 204 op/s
Jan 20 14:58:08 compute-0 nova_compute[250018]: 2026-01-20 14:58:08.136 250022 DEBUG oslo_concurrency.processutils [None req-6255194d-d225-41b1-949b-0c6705d32669 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:58:08 compute-0 nova_compute[250018]: 2026-01-20 14:58:08.241 250022 DEBUG nova.compute.manager [req-a7a60445-0dd6-4a28-9034-53317ba81346 req-63b5c4b5-fabe-4183-af92-edd1a6f0f7f6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 64a2f5d0-8a64-4e88-87f1-8cd406f83cda] Received event network-vif-deleted-2b81c6c8-fe13-44f6-85a2-1d0130b60a64 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:58:08 compute-0 nova_compute[250018]: 2026-01-20 14:58:08.291 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:58:08 compute-0 nova_compute[250018]: 2026-01-20 14:58:08.522 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:58:08 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:58:08 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2689632524' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:58:08 compute-0 nova_compute[250018]: 2026-01-20 14:58:08.612 250022 DEBUG oslo_concurrency.processutils [None req-6255194d-d225-41b1-949b-0c6705d32669 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:58:08 compute-0 nova_compute[250018]: 2026-01-20 14:58:08.618 250022 DEBUG nova.compute.provider_tree [None req-6255194d-d225-41b1-949b-0c6705d32669 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:58:08 compute-0 nova_compute[250018]: 2026-01-20 14:58:08.648 250022 DEBUG nova.scheduler.client.report [None req-6255194d-d225-41b1-949b-0c6705d32669 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:58:08 compute-0 nova_compute[250018]: 2026-01-20 14:58:08.701 250022 DEBUG oslo_concurrency.lockutils [None req-6255194d-d225-41b1-949b-0c6705d32669 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.684s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:58:08 compute-0 nova_compute[250018]: 2026-01-20 14:58:08.809 250022 INFO oslo.privsep.daemon [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Spawned new privsep daemon via rootwrap
Jan 20 14:58:08 compute-0 nova_compute[250018]: 2026-01-20 14:58:08.692 331017 INFO oslo.privsep.daemon [-] privsep daemon starting
Jan 20 14:58:08 compute-0 nova_compute[250018]: 2026-01-20 14:58:08.696 331017 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Jan 20 14:58:08 compute-0 nova_compute[250018]: 2026-01-20 14:58:08.697 331017 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Jan 20 14:58:08 compute-0 nova_compute[250018]: 2026-01-20 14:58:08.698 331017 INFO oslo.privsep.daemon [-] privsep daemon running as pid 331017
Jan 20 14:58:08 compute-0 nova_compute[250018]: 2026-01-20 14:58:08.893 250022 DEBUG nova.virt.libvirt.imagecache [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Image id  yields fingerprint da39a3ee5e6b4b0d3255bfef95601890afd80709 _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319
Jan 20 14:58:08 compute-0 nova_compute[250018]: 2026-01-20 14:58:08.894 250022 DEBUG nova.virt.libvirt.imagecache [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] 11c82470-ab02-4424-908b-705f1f65e062 is a valid instance name _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:126
Jan 20 14:58:08 compute-0 nova_compute[250018]: 2026-01-20 14:58:08.894 250022 WARNING nova.virt.libvirt.imagecache [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/5c59bb50cd8e2f04a0e24e31c5eec4210425eca7
Jan 20 14:58:08 compute-0 nova_compute[250018]: 2026-01-20 14:58:08.894 250022 WARNING nova.virt.libvirt.imagecache [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/a4ed0d2b98aa460c005e878d78a49ccb6f511f7c
Jan 20 14:58:08 compute-0 nova_compute[250018]: 2026-01-20 14:58:08.894 250022 INFO nova.virt.libvirt.imagecache [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Active base files: /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462
Jan 20 14:58:08 compute-0 nova_compute[250018]: 2026-01-20 14:58:08.895 250022 INFO nova.virt.libvirt.imagecache [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Removable base files: /var/lib/nova/instances/_base/5c59bb50cd8e2f04a0e24e31c5eec4210425eca7 /var/lib/nova/instances/_base/a4ed0d2b98aa460c005e878d78a49ccb6f511f7c
Jan 20 14:58:08 compute-0 nova_compute[250018]: 2026-01-20 14:58:08.895 250022 INFO nova.virt.libvirt.imagecache [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/5c59bb50cd8e2f04a0e24e31c5eec4210425eca7
Jan 20 14:58:08 compute-0 nova_compute[250018]: 2026-01-20 14:58:08.895 250022 INFO nova.virt.libvirt.imagecache [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/a4ed0d2b98aa460c005e878d78a49ccb6f511f7c
Jan 20 14:58:08 compute-0 nova_compute[250018]: 2026-01-20 14:58:08.895 250022 DEBUG nova.virt.libvirt.imagecache [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Verification complete _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:350
Jan 20 14:58:08 compute-0 nova_compute[250018]: 2026-01-20 14:58:08.895 250022 DEBUG nova.virt.libvirt.imagecache [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Verify swap images _age_and_verify_swap_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:299
Jan 20 14:58:08 compute-0 nova_compute[250018]: 2026-01-20 14:58:08.896 250022 DEBUG nova.virt.libvirt.imagecache [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Verify ephemeral images _age_and_verify_ephemeral_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:284
Jan 20 14:58:08 compute-0 nova_compute[250018]: 2026-01-20 14:58:08.935 250022 INFO nova.scheduler.client.report [None req-6255194d-d225-41b1-949b-0c6705d32669 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Deleted allocations for instance 64a2f5d0-8a64-4e88-87f1-8cd406f83cda
Jan 20 14:58:08 compute-0 nova_compute[250018]: 2026-01-20 14:58:08.950 250022 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768921073.9497352, 64a2f5d0-8a64-4e88-87f1-8cd406f83cda => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:58:08 compute-0 nova_compute[250018]: 2026-01-20 14:58:08.950 250022 INFO nova.compute.manager [-] [instance: 64a2f5d0-8a64-4e88-87f1-8cd406f83cda] VM Stopped (Lifecycle Event)
Jan 20 14:58:08 compute-0 nova_compute[250018]: 2026-01-20 14:58:08.969 250022 DEBUG nova.compute.manager [None req-877e13fd-3124-48f8-8472-5b4a47fa7864 - - - - - -] [instance: 64a2f5d0-8a64-4e88-87f1-8cd406f83cda] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:58:09 compute-0 nova_compute[250018]: 2026-01-20 14:58:09.031 250022 DEBUG oslo_concurrency.lockutils [None req-6255194d-d225-41b1-949b-0c6705d32669 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Lock "64a2f5d0-8a64-4e88-87f1-8cd406f83cda" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.780s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:58:09 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:58:09 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:58:09 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:58:09.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:58:09 compute-0 ceph-mon[74360]: pgmap v2182: 321 pgs: 321 active+clean; 372 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1002 KiB/s rd, 6.3 MiB/s wr, 204 op/s
Jan 20 14:58:09 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/2689632524' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:58:09 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:58:09 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:58:09 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:58:09.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:58:09 compute-0 podman[331020]: 2026-01-20 14:58:09.546185716 +0000 UTC m=+0.136515959 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 20 14:58:09 compute-0 podman[331019]: 2026-01-20 14:58:09.615831971 +0000 UTC m=+0.210383047 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 20 14:58:09 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e321 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:58:10 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2183: 321 pgs: 321 active+clean; 379 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 6.4 MiB/s wr, 222 op/s
Jan 20 14:58:11 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:58:11 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:58:11 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:58:11.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:58:11 compute-0 ceph-mon[74360]: pgmap v2183: 321 pgs: 321 active+clean; 379 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 6.4 MiB/s wr, 222 op/s
Jan 20 14:58:11 compute-0 nova_compute[250018]: 2026-01-20 14:58:11.367 250022 DEBUG oslo_concurrency.lockutils [None req-298c9178-d284-4633-b2ab-d8d9edb8474b 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Acquiring lock "11c82470-ab02-4424-908b-705f1f65e062" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:58:11 compute-0 nova_compute[250018]: 2026-01-20 14:58:11.368 250022 DEBUG oslo_concurrency.lockutils [None req-298c9178-d284-4633-b2ab-d8d9edb8474b 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Lock "11c82470-ab02-4424-908b-705f1f65e062" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:58:11 compute-0 nova_compute[250018]: 2026-01-20 14:58:11.369 250022 DEBUG oslo_concurrency.lockutils [None req-298c9178-d284-4633-b2ab-d8d9edb8474b 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Acquiring lock "11c82470-ab02-4424-908b-705f1f65e062-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:58:11 compute-0 nova_compute[250018]: 2026-01-20 14:58:11.370 250022 DEBUG oslo_concurrency.lockutils [None req-298c9178-d284-4633-b2ab-d8d9edb8474b 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Lock "11c82470-ab02-4424-908b-705f1f65e062-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:58:11 compute-0 nova_compute[250018]: 2026-01-20 14:58:11.370 250022 DEBUG oslo_concurrency.lockutils [None req-298c9178-d284-4633-b2ab-d8d9edb8474b 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Lock "11c82470-ab02-4424-908b-705f1f65e062-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:58:11 compute-0 nova_compute[250018]: 2026-01-20 14:58:11.373 250022 INFO nova.compute.manager [None req-298c9178-d284-4633-b2ab-d8d9edb8474b 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 11c82470-ab02-4424-908b-705f1f65e062] Terminating instance
Jan 20 14:58:11 compute-0 nova_compute[250018]: 2026-01-20 14:58:11.375 250022 DEBUG nova.compute.manager [None req-298c9178-d284-4633-b2ab-d8d9edb8474b 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 11c82470-ab02-4424-908b-705f1f65e062] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 14:58:11 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:58:11 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.002000052s ======
Jan 20 14:58:11 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:58:11.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000052s
Jan 20 14:58:11 compute-0 kernel: tap6532e62a-b8 (unregistering): left promiscuous mode
Jan 20 14:58:11 compute-0 NetworkManager[48960]: <info>  [1768921091.5302] device (tap6532e62a-b8): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 14:58:11 compute-0 ovn_controller[148666]: 2026-01-20T14:58:11Z|00509|binding|INFO|Releasing lport 6532e62a-b883-47ff-9799-820144870294 from this chassis (sb_readonly=0)
Jan 20 14:58:11 compute-0 ovn_controller[148666]: 2026-01-20T14:58:11Z|00510|binding|INFO|Setting lport 6532e62a-b883-47ff-9799-820144870294 down in Southbound
Jan 20 14:58:11 compute-0 nova_compute[250018]: 2026-01-20 14:58:11.539 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:58:11 compute-0 ovn_controller[148666]: 2026-01-20T14:58:11Z|00511|binding|INFO|Removing iface tap6532e62a-b8 ovn-installed in OVS
Jan 20 14:58:11 compute-0 nova_compute[250018]: 2026-01-20 14:58:11.544 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:58:11 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:58:11.547 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7a:3f:c2 10.100.0.4'], port_security=['fa:16:3e:7a:3f:c2 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '11c82470-ab02-4424-908b-705f1f65e062', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ca6cd0afe0ab41e3ab36d21a4129f734', 'neutron:revision_number': '4', 'neutron:security_group_ids': '819ea4ae-b994-44d1-9da3-8b0ca609fb2a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ee620e3e-ef7e-4826-b394-b8a89442b353, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=6532e62a-b883-47ff-9799-820144870294) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:58:11 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:58:11.549 160071 INFO neutron.agent.ovn.metadata.agent [-] Port 6532e62a-b883-47ff-9799-820144870294 in datapath f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c unbound from our chassis
Jan 20 14:58:11 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:58:11.550 160071 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 14:58:11 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:58:11.552 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[6a32eb64-c405-4a36-811a-677f7ebb85b0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:58:11 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:58:11.552 160071 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c namespace which is not needed anymore
Jan 20 14:58:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 14:58:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:58:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 20 14:58:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:58:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0021886777638190956 of space, bias 1.0, pg target 0.6566033291457287 quantized to 32 (current 32)
Jan 20 14:58:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:58:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.006481154324865066 of space, bias 1.0, pg target 1.9443462974595196 quantized to 32 (current 32)
Jan 20 14:58:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:58:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:58:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:58:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0028546232319002418 of space, bias 1.0, pg target 0.8535323463381723 quantized to 32 (current 32)
Jan 20 14:58:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:58:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 32)
Jan 20 14:58:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:58:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:58:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:58:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021737739624046157 quantized to 32 (current 32)
Jan 20 14:58:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:58:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Jan 20 14:58:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:58:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:58:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:58:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Jan 20 14:58:11 compute-0 nova_compute[250018]: 2026-01-20 14:58:11.566 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:58:11 compute-0 systemd[1]: machine-qemu\x2d60\x2dinstance\x2d0000007f.scope: Deactivated successfully.
Jan 20 14:58:11 compute-0 systemd[1]: machine-qemu\x2d60\x2dinstance\x2d0000007f.scope: Consumed 19.695s CPU time.
Jan 20 14:58:11 compute-0 systemd-machined[216401]: Machine qemu-60-instance-0000007f terminated.
Jan 20 14:58:11 compute-0 neutron-haproxy-ovnmeta-f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c[325853]: [NOTICE]   (325857) : haproxy version is 2.8.14-c23fe91
Jan 20 14:58:11 compute-0 neutron-haproxy-ovnmeta-f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c[325853]: [NOTICE]   (325857) : path to executable is /usr/sbin/haproxy
Jan 20 14:58:11 compute-0 neutron-haproxy-ovnmeta-f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c[325853]: [WARNING]  (325857) : Exiting Master process...
Jan 20 14:58:11 compute-0 neutron-haproxy-ovnmeta-f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c[325853]: [WARNING]  (325857) : Exiting Master process...
Jan 20 14:58:11 compute-0 neutron-haproxy-ovnmeta-f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c[325853]: [ALERT]    (325857) : Current worker (325859) exited with code 143 (Terminated)
Jan 20 14:58:11 compute-0 neutron-haproxy-ovnmeta-f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c[325853]: [WARNING]  (325857) : All workers exited. Exiting... (0)
Jan 20 14:58:11 compute-0 systemd[1]: libpod-8c5059b9deafd41fc67064e191ee53efa47ee0f5d1c4a13500bcb3112c323dd9.scope: Deactivated successfully.
Jan 20 14:58:11 compute-0 podman[331091]: 2026-01-20 14:58:11.742140056 +0000 UTC m=+0.064939290 container died 8c5059b9deafd41fc67064e191ee53efa47ee0f5d1c4a13500bcb3112c323dd9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team)
Jan 20 14:58:11 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8c5059b9deafd41fc67064e191ee53efa47ee0f5d1c4a13500bcb3112c323dd9-userdata-shm.mount: Deactivated successfully.
Jan 20 14:58:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-b56dfda89449563153c1e0ee260ed297534c31a44556dd6ae32ac75a1179d741-merged.mount: Deactivated successfully.
Jan 20 14:58:11 compute-0 podman[331091]: 2026-01-20 14:58:11.788942226 +0000 UTC m=+0.111741450 container cleanup 8c5059b9deafd41fc67064e191ee53efa47ee0f5d1c4a13500bcb3112c323dd9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:58:11 compute-0 systemd[1]: libpod-conmon-8c5059b9deafd41fc67064e191ee53efa47ee0f5d1c4a13500bcb3112c323dd9.scope: Deactivated successfully.
Jan 20 14:58:11 compute-0 nova_compute[250018]: 2026-01-20 14:58:11.810 250022 INFO nova.virt.libvirt.driver [-] [instance: 11c82470-ab02-4424-908b-705f1f65e062] Instance destroyed successfully.
Jan 20 14:58:11 compute-0 nova_compute[250018]: 2026-01-20 14:58:11.811 250022 DEBUG nova.objects.instance [None req-298c9178-d284-4633-b2ab-d8d9edb8474b 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Lazy-loading 'resources' on Instance uuid 11c82470-ab02-4424-908b-705f1f65e062 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:58:11 compute-0 nova_compute[250018]: 2026-01-20 14:58:11.828 250022 DEBUG nova.virt.libvirt.vif [None req-298c9178-d284-4633-b2ab-d8d9edb8474b 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:55:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-₡-1988318391',display_name='tempest-₡-1988318391',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest--1988318391',id=127,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:55:39Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ca6cd0afe0ab41e3ab36d21a4129f734',ramdisk_id='',reservation_id='r-jk35jcuy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-405461620',owner_user_name='tempest-ServersTestJSON-405461620-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T14:55:39Z,user_data=None,user_id='395a5c503218411284bc94c45263d1fb',uuid=11c82470-ab02-4424-908b-705f1f65e062,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6532e62a-b883-47ff-9799-820144870294", "address": "fa:16:3e:7a:3f:c2", "network": {"id": "f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c", "bridge": "br-int", "label": "tempest-ServersTestJSON-1745321011-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca6cd0afe0ab41e3ab36d21a4129f734", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6532e62a-b8", "ovs_interfaceid": "6532e62a-b883-47ff-9799-820144870294", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 14:58:11 compute-0 nova_compute[250018]: 2026-01-20 14:58:11.829 250022 DEBUG nova.network.os_vif_util [None req-298c9178-d284-4633-b2ab-d8d9edb8474b 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Converting VIF {"id": "6532e62a-b883-47ff-9799-820144870294", "address": "fa:16:3e:7a:3f:c2", "network": {"id": "f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c", "bridge": "br-int", "label": "tempest-ServersTestJSON-1745321011-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca6cd0afe0ab41e3ab36d21a4129f734", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6532e62a-b8", "ovs_interfaceid": "6532e62a-b883-47ff-9799-820144870294", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:58:11 compute-0 nova_compute[250018]: 2026-01-20 14:58:11.830 250022 DEBUG nova.network.os_vif_util [None req-298c9178-d284-4633-b2ab-d8d9edb8474b 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:7a:3f:c2,bridge_name='br-int',has_traffic_filtering=True,id=6532e62a-b883-47ff-9799-820144870294,network=Network(f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6532e62a-b8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:58:11 compute-0 nova_compute[250018]: 2026-01-20 14:58:11.831 250022 DEBUG os_vif [None req-298c9178-d284-4633-b2ab-d8d9edb8474b 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:7a:3f:c2,bridge_name='br-int',has_traffic_filtering=True,id=6532e62a-b883-47ff-9799-820144870294,network=Network(f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6532e62a-b8') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 14:58:11 compute-0 nova_compute[250018]: 2026-01-20 14:58:11.833 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:58:11 compute-0 nova_compute[250018]: 2026-01-20 14:58:11.833 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6532e62a-b8, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:58:11 compute-0 nova_compute[250018]: 2026-01-20 14:58:11.835 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:58:11 compute-0 nova_compute[250018]: 2026-01-20 14:58:11.838 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:58:11 compute-0 nova_compute[250018]: 2026-01-20 14:58:11.840 250022 INFO os_vif [None req-298c9178-d284-4633-b2ab-d8d9edb8474b 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:7a:3f:c2,bridge_name='br-int',has_traffic_filtering=True,id=6532e62a-b883-47ff-9799-820144870294,network=Network(f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6532e62a-b8')
Jan 20 14:58:11 compute-0 podman[331123]: 2026-01-20 14:58:11.994603006 +0000 UTC m=+0.180131442 container remove 8c5059b9deafd41fc67064e191ee53efa47ee0f5d1c4a13500bcb3112c323dd9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:58:12 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:58:12.002 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[32dfa649-1c1a-4599-82a2-564f24159bac]: (4, ('Tue Jan 20 02:58:11 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c (8c5059b9deafd41fc67064e191ee53efa47ee0f5d1c4a13500bcb3112c323dd9)\n8c5059b9deafd41fc67064e191ee53efa47ee0f5d1c4a13500bcb3112c323dd9\nTue Jan 20 02:58:11 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c (8c5059b9deafd41fc67064e191ee53efa47ee0f5d1c4a13500bcb3112c323dd9)\n8c5059b9deafd41fc67064e191ee53efa47ee0f5d1c4a13500bcb3112c323dd9\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:58:12 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:58:12.004 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[444d681f-cda2-473a-8aed-cf524be7b037]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:58:12 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:58:12.005 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf4c8474b-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:58:12 compute-0 nova_compute[250018]: 2026-01-20 14:58:12.007 250022 DEBUG nova.compute.manager [req-4e9be728-3bae-4943-89f6-ade926dfc61d req-d2ae0eeb-5b39-4cda-a6c6-ea7f1fdf1ac4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 11c82470-ab02-4424-908b-705f1f65e062] Received event network-vif-unplugged-6532e62a-b883-47ff-9799-820144870294 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:58:12 compute-0 nova_compute[250018]: 2026-01-20 14:58:12.008 250022 DEBUG oslo_concurrency.lockutils [req-4e9be728-3bae-4943-89f6-ade926dfc61d req-d2ae0eeb-5b39-4cda-a6c6-ea7f1fdf1ac4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "11c82470-ab02-4424-908b-705f1f65e062-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:58:12 compute-0 nova_compute[250018]: 2026-01-20 14:58:12.008 250022 DEBUG oslo_concurrency.lockutils [req-4e9be728-3bae-4943-89f6-ade926dfc61d req-d2ae0eeb-5b39-4cda-a6c6-ea7f1fdf1ac4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "11c82470-ab02-4424-908b-705f1f65e062-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:58:12 compute-0 nova_compute[250018]: 2026-01-20 14:58:12.009 250022 DEBUG oslo_concurrency.lockutils [req-4e9be728-3bae-4943-89f6-ade926dfc61d req-d2ae0eeb-5b39-4cda-a6c6-ea7f1fdf1ac4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "11c82470-ab02-4424-908b-705f1f65e062-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:58:12 compute-0 nova_compute[250018]: 2026-01-20 14:58:12.009 250022 DEBUG nova.compute.manager [req-4e9be728-3bae-4943-89f6-ade926dfc61d req-d2ae0eeb-5b39-4cda-a6c6-ea7f1fdf1ac4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 11c82470-ab02-4424-908b-705f1f65e062] No waiting events found dispatching network-vif-unplugged-6532e62a-b883-47ff-9799-820144870294 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:58:12 compute-0 nova_compute[250018]: 2026-01-20 14:58:12.010 250022 DEBUG nova.compute.manager [req-4e9be728-3bae-4943-89f6-ade926dfc61d req-d2ae0eeb-5b39-4cda-a6c6-ea7f1fdf1ac4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 11c82470-ab02-4424-908b-705f1f65e062] Received event network-vif-unplugged-6532e62a-b883-47ff-9799-820144870294 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 20 14:58:12 compute-0 nova_compute[250018]: 2026-01-20 14:58:12.037 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:58:12 compute-0 kernel: tapf4c8474b-00: left promiscuous mode
Jan 20 14:58:12 compute-0 nova_compute[250018]: 2026-01-20 14:58:12.039 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:58:12 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:58:12.043 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[98030625-a4b9-48c6-810c-6826ad4b4610]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:58:12 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:58:12.059 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[b39037a8-89b7-4534-88fa-be9287fe506c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:58:12 compute-0 nova_compute[250018]: 2026-01-20 14:58:12.061 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:58:12 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:58:12.060 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[76d48d19-8919-42e9-a5af-e16e94bb2e29]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:58:12 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:58:12.077 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[a0129215-b8b7-4fd5-849e-98714ec322a1]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 687877, 'reachable_time': 37909, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 331164, 'error': None, 'target': 'ovnmeta-f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:58:12 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:58:12.079 160517 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f4c8474b-0ca3-4cb0-b6dd-e6aa302def5c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 14:58:12 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:58:12.079 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[b3107a6b-8621-43c4-828e-a1b0bcf8cfb2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:58:12 compute-0 systemd[1]: run-netns-ovnmeta\x2df4c8474b\x2d0ca3\x2d4cb0\x2db6dd\x2de6aa302def5c.mount: Deactivated successfully.
Jan 20 14:58:12 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2184: 321 pgs: 321 active+clean; 379 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 717 KiB/s rd, 5.1 MiB/s wr, 169 op/s
Jan 20 14:58:12 compute-0 nova_compute[250018]: 2026-01-20 14:58:12.273 250022 INFO nova.virt.libvirt.driver [None req-298c9178-d284-4633-b2ab-d8d9edb8474b 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 11c82470-ab02-4424-908b-705f1f65e062] Deleting instance files /var/lib/nova/instances/11c82470-ab02-4424-908b-705f1f65e062_del
Jan 20 14:58:12 compute-0 nova_compute[250018]: 2026-01-20 14:58:12.274 250022 INFO nova.virt.libvirt.driver [None req-298c9178-d284-4633-b2ab-d8d9edb8474b 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 11c82470-ab02-4424-908b-705f1f65e062] Deletion of /var/lib/nova/instances/11c82470-ab02-4424-908b-705f1f65e062_del complete
Jan 20 14:58:12 compute-0 ceph-mon[74360]: pgmap v2184: 321 pgs: 321 active+clean; 379 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 717 KiB/s rd, 5.1 MiB/s wr, 169 op/s
Jan 20 14:58:12 compute-0 nova_compute[250018]: 2026-01-20 14:58:12.339 250022 INFO nova.compute.manager [None req-298c9178-d284-4633-b2ab-d8d9edb8474b 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] [instance: 11c82470-ab02-4424-908b-705f1f65e062] Took 0.96 seconds to destroy the instance on the hypervisor.
Jan 20 14:58:12 compute-0 nova_compute[250018]: 2026-01-20 14:58:12.340 250022 DEBUG oslo.service.loopingcall [None req-298c9178-d284-4633-b2ab-d8d9edb8474b 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 14:58:12 compute-0 nova_compute[250018]: 2026-01-20 14:58:12.340 250022 DEBUG nova.compute.manager [-] [instance: 11c82470-ab02-4424-908b-705f1f65e062] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 14:58:12 compute-0 nova_compute[250018]: 2026-01-20 14:58:12.340 250022 DEBUG nova.network.neutron [-] [instance: 11c82470-ab02-4424-908b-705f1f65e062] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 14:58:13 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:58:13 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:58:13 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:58:13.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:58:13 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:58:13 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:58:13 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:58:13.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:58:13 compute-0 nova_compute[250018]: 2026-01-20 14:58:13.524 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:58:13 compute-0 nova_compute[250018]: 2026-01-20 14:58:13.570 250022 DEBUG nova.network.neutron [-] [instance: 11c82470-ab02-4424-908b-705f1f65e062] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:58:13 compute-0 nova_compute[250018]: 2026-01-20 14:58:13.590 250022 INFO nova.compute.manager [-] [instance: 11c82470-ab02-4424-908b-705f1f65e062] Took 1.25 seconds to deallocate network for instance.
Jan 20 14:58:13 compute-0 sudo[331166]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:58:13 compute-0 sudo[331166]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:58:13 compute-0 sudo[331166]: pam_unix(sudo:session): session closed for user root
Jan 20 14:58:13 compute-0 sudo[331191]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:58:13 compute-0 sudo[331191]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:58:13 compute-0 sudo[331191]: pam_unix(sudo:session): session closed for user root
Jan 20 14:58:14 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2185: 321 pgs: 321 active+clean; 368 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 362 KiB/s rd, 2.3 MiB/s wr, 101 op/s
Jan 20 14:58:14 compute-0 nova_compute[250018]: 2026-01-20 14:58:14.167 250022 DEBUG oslo_concurrency.lockutils [None req-298c9178-d284-4633-b2ab-d8d9edb8474b 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:58:14 compute-0 nova_compute[250018]: 2026-01-20 14:58:14.168 250022 DEBUG oslo_concurrency.lockutils [None req-298c9178-d284-4633-b2ab-d8d9edb8474b 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:58:14 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/1767471805' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:58:14 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/1767471805' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:58:14 compute-0 nova_compute[250018]: 2026-01-20 14:58:14.364 250022 DEBUG oslo_concurrency.processutils [None req-298c9178-d284-4633-b2ab-d8d9edb8474b 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:58:14 compute-0 nova_compute[250018]: 2026-01-20 14:58:14.701 250022 DEBUG nova.compute.manager [req-cccf390a-dc7a-4216-adf6-6df56a4894f3 req-c1ef643c-8af3-4cdf-830e-f069bac2482b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 11c82470-ab02-4424-908b-705f1f65e062] Received event network-vif-plugged-6532e62a-b883-47ff-9799-820144870294 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:58:14 compute-0 nova_compute[250018]: 2026-01-20 14:58:14.702 250022 DEBUG oslo_concurrency.lockutils [req-cccf390a-dc7a-4216-adf6-6df56a4894f3 req-c1ef643c-8af3-4cdf-830e-f069bac2482b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "11c82470-ab02-4424-908b-705f1f65e062-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:58:14 compute-0 nova_compute[250018]: 2026-01-20 14:58:14.702 250022 DEBUG oslo_concurrency.lockutils [req-cccf390a-dc7a-4216-adf6-6df56a4894f3 req-c1ef643c-8af3-4cdf-830e-f069bac2482b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "11c82470-ab02-4424-908b-705f1f65e062-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:58:14 compute-0 nova_compute[250018]: 2026-01-20 14:58:14.702 250022 DEBUG oslo_concurrency.lockutils [req-cccf390a-dc7a-4216-adf6-6df56a4894f3 req-c1ef643c-8af3-4cdf-830e-f069bac2482b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "11c82470-ab02-4424-908b-705f1f65e062-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:58:14 compute-0 nova_compute[250018]: 2026-01-20 14:58:14.703 250022 DEBUG nova.compute.manager [req-cccf390a-dc7a-4216-adf6-6df56a4894f3 req-c1ef643c-8af3-4cdf-830e-f069bac2482b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 11c82470-ab02-4424-908b-705f1f65e062] No waiting events found dispatching network-vif-plugged-6532e62a-b883-47ff-9799-820144870294 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:58:14 compute-0 nova_compute[250018]: 2026-01-20 14:58:14.703 250022 WARNING nova.compute.manager [req-cccf390a-dc7a-4216-adf6-6df56a4894f3 req-c1ef643c-8af3-4cdf-830e-f069bac2482b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 11c82470-ab02-4424-908b-705f1f65e062] Received unexpected event network-vif-plugged-6532e62a-b883-47ff-9799-820144870294 for instance with vm_state deleted and task_state None.
Jan 20 14:58:14 compute-0 nova_compute[250018]: 2026-01-20 14:58:14.779 250022 DEBUG nova.compute.manager [req-c495fb59-17f5-45a1-98bd-80498c125788 req-44b89a61-016a-4480-a4b8-05ad2854f19b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 11c82470-ab02-4424-908b-705f1f65e062] Received event network-vif-deleted-6532e62a-b883-47ff-9799-820144870294 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:58:14 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:58:14 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/697642607' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:58:14 compute-0 nova_compute[250018]: 2026-01-20 14:58:14.804 250022 DEBUG oslo_concurrency.processutils [None req-298c9178-d284-4633-b2ab-d8d9edb8474b 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:58:14 compute-0 nova_compute[250018]: 2026-01-20 14:58:14.810 250022 DEBUG nova.compute.provider_tree [None req-298c9178-d284-4633-b2ab-d8d9edb8474b 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:58:14 compute-0 nova_compute[250018]: 2026-01-20 14:58:14.866 250022 DEBUG nova.scheduler.client.report [None req-298c9178-d284-4633-b2ab-d8d9edb8474b 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:58:14 compute-0 nova_compute[250018]: 2026-01-20 14:58:14.925 250022 DEBUG oslo_concurrency.lockutils [None req-298c9178-d284-4633-b2ab-d8d9edb8474b 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.757s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:58:14 compute-0 nova_compute[250018]: 2026-01-20 14:58:14.959 250022 INFO nova.scheduler.client.report [None req-298c9178-d284-4633-b2ab-d8d9edb8474b 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Deleted allocations for instance 11c82470-ab02-4424-908b-705f1f65e062
Jan 20 14:58:14 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e321 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:58:15 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:58:15 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:58:15 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:58:15.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:58:15 compute-0 ceph-mon[74360]: pgmap v2185: 321 pgs: 321 active+clean; 368 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 362 KiB/s rd, 2.3 MiB/s wr, 101 op/s
Jan 20 14:58:15 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/697642607' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:58:15 compute-0 nova_compute[250018]: 2026-01-20 14:58:15.361 250022 DEBUG oslo_concurrency.lockutils [None req-298c9178-d284-4633-b2ab-d8d9edb8474b 395a5c503218411284bc94c45263d1fb ca6cd0afe0ab41e3ab36d21a4129f734 - - default default] Lock "11c82470-ab02-4424-908b-705f1f65e062" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.993s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:58:15 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:58:15 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:58:15 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:58:15.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:58:16 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2186: 321 pgs: 321 active+clean; 302 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 297 KiB/s rd, 2.1 MiB/s wr, 103 op/s
Jan 20 14:58:16 compute-0 ceph-mon[74360]: pgmap v2186: 321 pgs: 321 active+clean; 302 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 297 KiB/s rd, 2.1 MiB/s wr, 103 op/s
Jan 20 14:58:16 compute-0 nova_compute[250018]: 2026-01-20 14:58:16.837 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:58:17 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:58:17 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:58:17 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:58:17.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:58:17 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:58:17 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:58:17 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:58:17.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:58:18 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2187: 321 pgs: 321 active+clean; 302 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 83 KiB/s rd, 290 KiB/s wr, 52 op/s
Jan 20 14:58:18 compute-0 nova_compute[250018]: 2026-01-20 14:58:18.529 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:58:19 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:58:19 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:58:19 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:58:19.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:58:19 compute-0 ceph-mon[74360]: pgmap v2187: 321 pgs: 321 active+clean; 302 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 83 KiB/s rd, 290 KiB/s wr, 52 op/s
Jan 20 14:58:19 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/3000352245' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:58:19 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/3000352245' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:58:19 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:58:19 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:58:19 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:58:19.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:58:19 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e321 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:58:20 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2188: 321 pgs: 321 active+clean; 302 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 83 KiB/s rd, 305 KiB/s wr, 55 op/s
Jan 20 14:58:20 compute-0 ceph-mon[74360]: pgmap v2188: 321 pgs: 321 active+clean; 302 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 83 KiB/s rd, 305 KiB/s wr, 55 op/s
Jan 20 14:58:21 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:58:21 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:58:21 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:58:21.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:58:21 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:58:21 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:58:21 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:58:21.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:58:21 compute-0 nova_compute[250018]: 2026-01-20 14:58:21.841 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:58:22 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2189: 321 pgs: 321 active+clean; 302 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 39 KiB/s rd, 393 KiB/s wr, 57 op/s
Jan 20 14:58:22 compute-0 ceph-mon[74360]: pgmap v2189: 321 pgs: 321 active+clean; 302 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 39 KiB/s rd, 393 KiB/s wr, 57 op/s
Jan 20 14:58:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:58:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:58:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:58:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:58:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:58:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:58:22 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 20 14:58:22 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2286653853' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:58:22 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 20 14:58:22 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2286653853' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:58:23 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:58:23 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:58:23 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:58:23.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:58:23 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/2286653853' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:58:23 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/2286653853' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:58:23 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:58:23 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:58:23 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:58:23.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:58:23 compute-0 nova_compute[250018]: 2026-01-20 14:58:23.530 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:58:23 compute-0 nova_compute[250018]: 2026-01-20 14:58:23.595 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:58:24 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2190: 321 pgs: 321 active+clean; 302 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 39 KiB/s rd, 389 KiB/s wr, 56 op/s
Jan 20 14:58:24 compute-0 ceph-mon[74360]: pgmap v2190: 321 pgs: 321 active+clean; 302 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 39 KiB/s rd, 389 KiB/s wr, 56 op/s
Jan 20 14:58:24 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e321 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:58:25 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:58:25 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:58:25 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:58:25.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:58:25 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:58:25 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:58:25 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:58:25.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:58:26 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2191: 321 pgs: 321 active+clean; 300 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 48 KiB/s rd, 376 KiB/s wr, 68 op/s
Jan 20 14:58:26 compute-0 nova_compute[250018]: 2026-01-20 14:58:26.808 250022 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768921091.8072262, 11c82470-ab02-4424-908b-705f1f65e062 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:58:26 compute-0 nova_compute[250018]: 2026-01-20 14:58:26.809 250022 INFO nova.compute.manager [-] [instance: 11c82470-ab02-4424-908b-705f1f65e062] VM Stopped (Lifecycle Event)
Jan 20 14:58:26 compute-0 nova_compute[250018]: 2026-01-20 14:58:26.833 250022 DEBUG nova.compute.manager [None req-d8e7919c-d759-4dee-b336-b10353b1552b - - - - - -] [instance: 11c82470-ab02-4424-908b-705f1f65e062] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:58:26 compute-0 nova_compute[250018]: 2026-01-20 14:58:26.844 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:58:27 compute-0 ceph-mon[74360]: pgmap v2191: 321 pgs: 321 active+clean; 300 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 48 KiB/s rd, 376 KiB/s wr, 68 op/s
Jan 20 14:58:27 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:58:27 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:58:27 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:58:27.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:58:27 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:58:27 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:58:27 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:58:27.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:58:28 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2192: 321 pgs: 321 active+clean; 300 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 25 KiB/s rd, 195 KiB/s wr, 37 op/s
Jan 20 14:58:28 compute-0 nova_compute[250018]: 2026-01-20 14:58:28.560 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:58:29 compute-0 ceph-mon[74360]: pgmap v2192: 321 pgs: 321 active+clean; 300 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 25 KiB/s rd, 195 KiB/s wr, 37 op/s
Jan 20 14:58:29 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:58:29 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:58:29 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:58:29.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:58:29 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:58:29 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:58:29 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:58:29.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:58:29 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e321 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:58:30 compute-0 nova_compute[250018]: 2026-01-20 14:58:30.081 250022 DEBUG oslo_concurrency.lockutils [None req-35d13496-d712-45a4-beed-ff19fa0f29f3 f78d8330caf745e1b9d9eb71d167e735 9edd9bb3389a4ef2a90569bcdd524d35 - - default default] Acquiring lock "9a8ed269-8170-4a79-aa6d-75abae8562c3" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:58:30 compute-0 nova_compute[250018]: 2026-01-20 14:58:30.082 250022 DEBUG oslo_concurrency.lockutils [None req-35d13496-d712-45a4-beed-ff19fa0f29f3 f78d8330caf745e1b9d9eb71d167e735 9edd9bb3389a4ef2a90569bcdd524d35 - - default default] Lock "9a8ed269-8170-4a79-aa6d-75abae8562c3" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:58:30 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2193: 321 pgs: 321 active+clean; 300 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 25 KiB/s rd, 195 KiB/s wr, 37 op/s
Jan 20 14:58:30 compute-0 nova_compute[250018]: 2026-01-20 14:58:30.120 250022 DEBUG nova.compute.manager [None req-35d13496-d712-45a4-beed-ff19fa0f29f3 f78d8330caf745e1b9d9eb71d167e735 9edd9bb3389a4ef2a90569bcdd524d35 - - default default] [instance: 9a8ed269-8170-4a79-aa6d-75abae8562c3] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 14:58:30 compute-0 nova_compute[250018]: 2026-01-20 14:58:30.296 250022 DEBUG oslo_concurrency.lockutils [None req-35d13496-d712-45a4-beed-ff19fa0f29f3 f78d8330caf745e1b9d9eb71d167e735 9edd9bb3389a4ef2a90569bcdd524d35 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:58:30 compute-0 nova_compute[250018]: 2026-01-20 14:58:30.297 250022 DEBUG oslo_concurrency.lockutils [None req-35d13496-d712-45a4-beed-ff19fa0f29f3 f78d8330caf745e1b9d9eb71d167e735 9edd9bb3389a4ef2a90569bcdd524d35 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:58:30 compute-0 nova_compute[250018]: 2026-01-20 14:58:30.305 250022 DEBUG nova.virt.hardware [None req-35d13496-d712-45a4-beed-ff19fa0f29f3 f78d8330caf745e1b9d9eb71d167e735 9edd9bb3389a4ef2a90569bcdd524d35 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 14:58:30 compute-0 nova_compute[250018]: 2026-01-20 14:58:30.306 250022 INFO nova.compute.claims [None req-35d13496-d712-45a4-beed-ff19fa0f29f3 f78d8330caf745e1b9d9eb71d167e735 9edd9bb3389a4ef2a90569bcdd524d35 - - default default] [instance: 9a8ed269-8170-4a79-aa6d-75abae8562c3] Claim successful on node compute-0.ctlplane.example.com
Jan 20 14:58:30 compute-0 nova_compute[250018]: 2026-01-20 14:58:30.544 250022 DEBUG oslo_concurrency.processutils [None req-35d13496-d712-45a4-beed-ff19fa0f29f3 f78d8330caf745e1b9d9eb71d167e735 9edd9bb3389a4ef2a90569bcdd524d35 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:58:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:58:30.768 160071 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:58:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:58:30.769 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:58:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:58:30.769 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:58:30 compute-0 nova_compute[250018]: 2026-01-20 14:58:30.773 250022 DEBUG oslo_concurrency.lockutils [None req-23fd4c8f-0123-4fbe-84d6-b757bdbac2d4 0362dea63d5a43778f8d4164a77cd3c6 bae96a4c20ce4f3a859ff518a8423db5 - - default default] Acquiring lock "20a008fa-d059-4906-ba1c-697755b1ba06" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:58:30 compute-0 nova_compute[250018]: 2026-01-20 14:58:30.774 250022 DEBUG oslo_concurrency.lockutils [None req-23fd4c8f-0123-4fbe-84d6-b757bdbac2d4 0362dea63d5a43778f8d4164a77cd3c6 bae96a4c20ce4f3a859ff518a8423db5 - - default default] Lock "20a008fa-d059-4906-ba1c-697755b1ba06" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:58:30 compute-0 nova_compute[250018]: 2026-01-20 14:58:30.807 250022 DEBUG nova.compute.manager [None req-23fd4c8f-0123-4fbe-84d6-b757bdbac2d4 0362dea63d5a43778f8d4164a77cd3c6 bae96a4c20ce4f3a859ff518a8423db5 - - default default] [instance: 20a008fa-d059-4906-ba1c-697755b1ba06] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 14:58:30 compute-0 nova_compute[250018]: 2026-01-20 14:58:30.889 250022 DEBUG oslo_concurrency.lockutils [None req-23fd4c8f-0123-4fbe-84d6-b757bdbac2d4 0362dea63d5a43778f8d4164a77cd3c6 bae96a4c20ce4f3a859ff518a8423db5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:58:30 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:58:30 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1304440551' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:58:30 compute-0 nova_compute[250018]: 2026-01-20 14:58:30.966 250022 DEBUG oslo_concurrency.processutils [None req-35d13496-d712-45a4-beed-ff19fa0f29f3 f78d8330caf745e1b9d9eb71d167e735 9edd9bb3389a4ef2a90569bcdd524d35 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:58:30 compute-0 nova_compute[250018]: 2026-01-20 14:58:30.971 250022 DEBUG nova.compute.provider_tree [None req-35d13496-d712-45a4-beed-ff19fa0f29f3 f78d8330caf745e1b9d9eb71d167e735 9edd9bb3389a4ef2a90569bcdd524d35 - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:58:30 compute-0 nova_compute[250018]: 2026-01-20 14:58:30.986 250022 DEBUG nova.scheduler.client.report [None req-35d13496-d712-45a4-beed-ff19fa0f29f3 f78d8330caf745e1b9d9eb71d167e735 9edd9bb3389a4ef2a90569bcdd524d35 - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:58:31 compute-0 nova_compute[250018]: 2026-01-20 14:58:31.021 250022 DEBUG oslo_concurrency.lockutils [None req-35d13496-d712-45a4-beed-ff19fa0f29f3 f78d8330caf745e1b9d9eb71d167e735 9edd9bb3389a4ef2a90569bcdd524d35 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.724s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:58:31 compute-0 nova_compute[250018]: 2026-01-20 14:58:31.021 250022 DEBUG nova.compute.manager [None req-35d13496-d712-45a4-beed-ff19fa0f29f3 f78d8330caf745e1b9d9eb71d167e735 9edd9bb3389a4ef2a90569bcdd524d35 - - default default] [instance: 9a8ed269-8170-4a79-aa6d-75abae8562c3] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 14:58:31 compute-0 nova_compute[250018]: 2026-01-20 14:58:31.024 250022 DEBUG oslo_concurrency.lockutils [None req-23fd4c8f-0123-4fbe-84d6-b757bdbac2d4 0362dea63d5a43778f8d4164a77cd3c6 bae96a4c20ce4f3a859ff518a8423db5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.134s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:58:31 compute-0 nova_compute[250018]: 2026-01-20 14:58:31.030 250022 DEBUG nova.virt.hardware [None req-23fd4c8f-0123-4fbe-84d6-b757bdbac2d4 0362dea63d5a43778f8d4164a77cd3c6 bae96a4c20ce4f3a859ff518a8423db5 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 14:58:31 compute-0 nova_compute[250018]: 2026-01-20 14:58:31.031 250022 INFO nova.compute.claims [None req-23fd4c8f-0123-4fbe-84d6-b757bdbac2d4 0362dea63d5a43778f8d4164a77cd3c6 bae96a4c20ce4f3a859ff518a8423db5 - - default default] [instance: 20a008fa-d059-4906-ba1c-697755b1ba06] Claim successful on node compute-0.ctlplane.example.com
Jan 20 14:58:31 compute-0 nova_compute[250018]: 2026-01-20 14:58:31.103 250022 DEBUG nova.compute.manager [None req-35d13496-d712-45a4-beed-ff19fa0f29f3 f78d8330caf745e1b9d9eb71d167e735 9edd9bb3389a4ef2a90569bcdd524d35 - - default default] [instance: 9a8ed269-8170-4a79-aa6d-75abae8562c3] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 20 14:58:31 compute-0 nova_compute[250018]: 2026-01-20 14:58:31.104 250022 DEBUG nova.network.neutron [None req-35d13496-d712-45a4-beed-ff19fa0f29f3 f78d8330caf745e1b9d9eb71d167e735 9edd9bb3389a4ef2a90569bcdd524d35 - - default default] [instance: 9a8ed269-8170-4a79-aa6d-75abae8562c3] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 14:58:31 compute-0 nova_compute[250018]: 2026-01-20 14:58:31.135 250022 INFO nova.virt.libvirt.driver [None req-35d13496-d712-45a4-beed-ff19fa0f29f3 f78d8330caf745e1b9d9eb71d167e735 9edd9bb3389a4ef2a90569bcdd524d35 - - default default] [instance: 9a8ed269-8170-4a79-aa6d-75abae8562c3] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 14:58:31 compute-0 nova_compute[250018]: 2026-01-20 14:58:31.164 250022 DEBUG nova.compute.manager [None req-35d13496-d712-45a4-beed-ff19fa0f29f3 f78d8330caf745e1b9d9eb71d167e735 9edd9bb3389a4ef2a90569bcdd524d35 - - default default] [instance: 9a8ed269-8170-4a79-aa6d-75abae8562c3] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 14:58:31 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:58:31 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:58:31 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:58:31.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:58:31 compute-0 nova_compute[250018]: 2026-01-20 14:58:31.280 250022 DEBUG oslo_concurrency.processutils [None req-23fd4c8f-0123-4fbe-84d6-b757bdbac2d4 0362dea63d5a43778f8d4164a77cd3c6 bae96a4c20ce4f3a859ff518a8423db5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:58:31 compute-0 nova_compute[250018]: 2026-01-20 14:58:31.311 250022 DEBUG nova.compute.manager [None req-35d13496-d712-45a4-beed-ff19fa0f29f3 f78d8330caf745e1b9d9eb71d167e735 9edd9bb3389a4ef2a90569bcdd524d35 - - default default] [instance: 9a8ed269-8170-4a79-aa6d-75abae8562c3] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 14:58:31 compute-0 nova_compute[250018]: 2026-01-20 14:58:31.312 250022 DEBUG nova.virt.libvirt.driver [None req-35d13496-d712-45a4-beed-ff19fa0f29f3 f78d8330caf745e1b9d9eb71d167e735 9edd9bb3389a4ef2a90569bcdd524d35 - - default default] [instance: 9a8ed269-8170-4a79-aa6d-75abae8562c3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 14:58:31 compute-0 nova_compute[250018]: 2026-01-20 14:58:31.313 250022 INFO nova.virt.libvirt.driver [None req-35d13496-d712-45a4-beed-ff19fa0f29f3 f78d8330caf745e1b9d9eb71d167e735 9edd9bb3389a4ef2a90569bcdd524d35 - - default default] [instance: 9a8ed269-8170-4a79-aa6d-75abae8562c3] Creating image(s)
Jan 20 14:58:31 compute-0 nova_compute[250018]: 2026-01-20 14:58:31.453 250022 DEBUG nova.storage.rbd_utils [None req-35d13496-d712-45a4-beed-ff19fa0f29f3 f78d8330caf745e1b9d9eb71d167e735 9edd9bb3389a4ef2a90569bcdd524d35 - - default default] rbd image 9a8ed269-8170-4a79-aa6d-75abae8562c3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:58:31 compute-0 nova_compute[250018]: 2026-01-20 14:58:31.478 250022 DEBUG nova.storage.rbd_utils [None req-35d13496-d712-45a4-beed-ff19fa0f29f3 f78d8330caf745e1b9d9eb71d167e735 9edd9bb3389a4ef2a90569bcdd524d35 - - default default] rbd image 9a8ed269-8170-4a79-aa6d-75abae8562c3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:58:31 compute-0 ceph-mon[74360]: pgmap v2193: 321 pgs: 321 active+clean; 300 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 25 KiB/s rd, 195 KiB/s wr, 37 op/s
Jan 20 14:58:31 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/1304440551' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:58:31 compute-0 nova_compute[250018]: 2026-01-20 14:58:31.504 250022 DEBUG nova.storage.rbd_utils [None req-35d13496-d712-45a4-beed-ff19fa0f29f3 f78d8330caf745e1b9d9eb71d167e735 9edd9bb3389a4ef2a90569bcdd524d35 - - default default] rbd image 9a8ed269-8170-4a79-aa6d-75abae8562c3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:58:31 compute-0 nova_compute[250018]: 2026-01-20 14:58:31.508 250022 DEBUG oslo_concurrency.processutils [None req-35d13496-d712-45a4-beed-ff19fa0f29f3 f78d8330caf745e1b9d9eb71d167e735 9edd9bb3389a4ef2a90569bcdd524d35 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:58:31 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:58:31 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:58:31 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:58:31.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:58:31 compute-0 nova_compute[250018]: 2026-01-20 14:58:31.539 250022 DEBUG nova.policy [None req-35d13496-d712-45a4-beed-ff19fa0f29f3 f78d8330caf745e1b9d9eb71d167e735 9edd9bb3389a4ef2a90569bcdd524d35 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f78d8330caf745e1b9d9eb71d167e735', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '9edd9bb3389a4ef2a90569bcdd524d35', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 20 14:58:31 compute-0 nova_compute[250018]: 2026-01-20 14:58:31.574 250022 DEBUG oslo_concurrency.processutils [None req-35d13496-d712-45a4-beed-ff19fa0f29f3 f78d8330caf745e1b9d9eb71d167e735 9edd9bb3389a4ef2a90569bcdd524d35 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:58:31 compute-0 nova_compute[250018]: 2026-01-20 14:58:31.575 250022 DEBUG oslo_concurrency.lockutils [None req-35d13496-d712-45a4-beed-ff19fa0f29f3 f78d8330caf745e1b9d9eb71d167e735 9edd9bb3389a4ef2a90569bcdd524d35 - - default default] Acquiring lock "82d5c1918fd7c974214c7a48c1793a7a82560462" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:58:31 compute-0 nova_compute[250018]: 2026-01-20 14:58:31.576 250022 DEBUG oslo_concurrency.lockutils [None req-35d13496-d712-45a4-beed-ff19fa0f29f3 f78d8330caf745e1b9d9eb71d167e735 9edd9bb3389a4ef2a90569bcdd524d35 - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:58:31 compute-0 nova_compute[250018]: 2026-01-20 14:58:31.577 250022 DEBUG oslo_concurrency.lockutils [None req-35d13496-d712-45a4-beed-ff19fa0f29f3 f78d8330caf745e1b9d9eb71d167e735 9edd9bb3389a4ef2a90569bcdd524d35 - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:58:31 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:58:31 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/721070655' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:58:31 compute-0 nova_compute[250018]: 2026-01-20 14:58:31.797 250022 DEBUG nova.storage.rbd_utils [None req-35d13496-d712-45a4-beed-ff19fa0f29f3 f78d8330caf745e1b9d9eb71d167e735 9edd9bb3389a4ef2a90569bcdd524d35 - - default default] rbd image 9a8ed269-8170-4a79-aa6d-75abae8562c3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:58:31 compute-0 nova_compute[250018]: 2026-01-20 14:58:31.802 250022 DEBUG oslo_concurrency.processutils [None req-35d13496-d712-45a4-beed-ff19fa0f29f3 f78d8330caf745e1b9d9eb71d167e735 9edd9bb3389a4ef2a90569bcdd524d35 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 9a8ed269-8170-4a79-aa6d-75abae8562c3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:58:31 compute-0 nova_compute[250018]: 2026-01-20 14:58:31.838 250022 DEBUG oslo_concurrency.processutils [None req-23fd4c8f-0123-4fbe-84d6-b757bdbac2d4 0362dea63d5a43778f8d4164a77cd3c6 bae96a4c20ce4f3a859ff518a8423db5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.558s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:58:31 compute-0 nova_compute[250018]: 2026-01-20 14:58:31.844 250022 DEBUG nova.compute.provider_tree [None req-23fd4c8f-0123-4fbe-84d6-b757bdbac2d4 0362dea63d5a43778f8d4164a77cd3c6 bae96a4c20ce4f3a859ff518a8423db5 - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:58:31 compute-0 nova_compute[250018]: 2026-01-20 14:58:31.846 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:58:31 compute-0 nova_compute[250018]: 2026-01-20 14:58:31.865 250022 DEBUG nova.scheduler.client.report [None req-23fd4c8f-0123-4fbe-84d6-b757bdbac2d4 0362dea63d5a43778f8d4164a77cd3c6 bae96a4c20ce4f3a859ff518a8423db5 - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:58:31 compute-0 nova_compute[250018]: 2026-01-20 14:58:31.888 250022 DEBUG oslo_concurrency.lockutils [None req-23fd4c8f-0123-4fbe-84d6-b757bdbac2d4 0362dea63d5a43778f8d4164a77cd3c6 bae96a4c20ce4f3a859ff518a8423db5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.865s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:58:31 compute-0 nova_compute[250018]: 2026-01-20 14:58:31.889 250022 DEBUG nova.compute.manager [None req-23fd4c8f-0123-4fbe-84d6-b757bdbac2d4 0362dea63d5a43778f8d4164a77cd3c6 bae96a4c20ce4f3a859ff518a8423db5 - - default default] [instance: 20a008fa-d059-4906-ba1c-697755b1ba06] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 14:58:31 compute-0 nova_compute[250018]: 2026-01-20 14:58:31.956 250022 DEBUG nova.compute.manager [None req-23fd4c8f-0123-4fbe-84d6-b757bdbac2d4 0362dea63d5a43778f8d4164a77cd3c6 bae96a4c20ce4f3a859ff518a8423db5 - - default default] [instance: 20a008fa-d059-4906-ba1c-697755b1ba06] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 20 14:58:31 compute-0 nova_compute[250018]: 2026-01-20 14:58:31.957 250022 DEBUG nova.network.neutron [None req-23fd4c8f-0123-4fbe-84d6-b757bdbac2d4 0362dea63d5a43778f8d4164a77cd3c6 bae96a4c20ce4f3a859ff518a8423db5 - - default default] [instance: 20a008fa-d059-4906-ba1c-697755b1ba06] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 14:58:31 compute-0 nova_compute[250018]: 2026-01-20 14:58:31.981 250022 INFO nova.virt.libvirt.driver [None req-23fd4c8f-0123-4fbe-84d6-b757bdbac2d4 0362dea63d5a43778f8d4164a77cd3c6 bae96a4c20ce4f3a859ff518a8423db5 - - default default] [instance: 20a008fa-d059-4906-ba1c-697755b1ba06] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 14:58:32 compute-0 nova_compute[250018]: 2026-01-20 14:58:32.009 250022 DEBUG nova.compute.manager [None req-23fd4c8f-0123-4fbe-84d6-b757bdbac2d4 0362dea63d5a43778f8d4164a77cd3c6 bae96a4c20ce4f3a859ff518a8423db5 - - default default] [instance: 20a008fa-d059-4906-ba1c-697755b1ba06] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 14:58:32 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2194: 321 pgs: 321 active+clean; 300 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 25 KiB/s rd, 181 KiB/s wr, 34 op/s
Jan 20 14:58:32 compute-0 nova_compute[250018]: 2026-01-20 14:58:32.115 250022 DEBUG nova.compute.manager [None req-23fd4c8f-0123-4fbe-84d6-b757bdbac2d4 0362dea63d5a43778f8d4164a77cd3c6 bae96a4c20ce4f3a859ff518a8423db5 - - default default] [instance: 20a008fa-d059-4906-ba1c-697755b1ba06] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 14:58:32 compute-0 nova_compute[250018]: 2026-01-20 14:58:32.116 250022 DEBUG nova.virt.libvirt.driver [None req-23fd4c8f-0123-4fbe-84d6-b757bdbac2d4 0362dea63d5a43778f8d4164a77cd3c6 bae96a4c20ce4f3a859ff518a8423db5 - - default default] [instance: 20a008fa-d059-4906-ba1c-697755b1ba06] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 14:58:32 compute-0 nova_compute[250018]: 2026-01-20 14:58:32.116 250022 INFO nova.virt.libvirt.driver [None req-23fd4c8f-0123-4fbe-84d6-b757bdbac2d4 0362dea63d5a43778f8d4164a77cd3c6 bae96a4c20ce4f3a859ff518a8423db5 - - default default] [instance: 20a008fa-d059-4906-ba1c-697755b1ba06] Creating image(s)
Jan 20 14:58:32 compute-0 nova_compute[250018]: 2026-01-20 14:58:32.141 250022 DEBUG nova.storage.rbd_utils [None req-23fd4c8f-0123-4fbe-84d6-b757bdbac2d4 0362dea63d5a43778f8d4164a77cd3c6 bae96a4c20ce4f3a859ff518a8423db5 - - default default] rbd image 20a008fa-d059-4906-ba1c-697755b1ba06_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:58:32 compute-0 nova_compute[250018]: 2026-01-20 14:58:32.166 250022 DEBUG nova.storage.rbd_utils [None req-23fd4c8f-0123-4fbe-84d6-b757bdbac2d4 0362dea63d5a43778f8d4164a77cd3c6 bae96a4c20ce4f3a859ff518a8423db5 - - default default] rbd image 20a008fa-d059-4906-ba1c-697755b1ba06_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:58:32 compute-0 nova_compute[250018]: 2026-01-20 14:58:32.189 250022 DEBUG nova.storage.rbd_utils [None req-23fd4c8f-0123-4fbe-84d6-b757bdbac2d4 0362dea63d5a43778f8d4164a77cd3c6 bae96a4c20ce4f3a859ff518a8423db5 - - default default] rbd image 20a008fa-d059-4906-ba1c-697755b1ba06_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:58:32 compute-0 nova_compute[250018]: 2026-01-20 14:58:32.193 250022 DEBUG oslo_concurrency.processutils [None req-23fd4c8f-0123-4fbe-84d6-b757bdbac2d4 0362dea63d5a43778f8d4164a77cd3c6 bae96a4c20ce4f3a859ff518a8423db5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:58:32 compute-0 nova_compute[250018]: 2026-01-20 14:58:32.224 250022 DEBUG oslo_concurrency.processutils [None req-35d13496-d712-45a4-beed-ff19fa0f29f3 f78d8330caf745e1b9d9eb71d167e735 9edd9bb3389a4ef2a90569bcdd524d35 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 9a8ed269-8170-4a79-aa6d-75abae8562c3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:58:32 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:58:32 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2569693860' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:58:32 compute-0 nova_compute[250018]: 2026-01-20 14:58:32.289 250022 DEBUG oslo_concurrency.processutils [None req-23fd4c8f-0123-4fbe-84d6-b757bdbac2d4 0362dea63d5a43778f8d4164a77cd3c6 bae96a4c20ce4f3a859ff518a8423db5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:58:32 compute-0 nova_compute[250018]: 2026-01-20 14:58:32.290 250022 DEBUG oslo_concurrency.lockutils [None req-23fd4c8f-0123-4fbe-84d6-b757bdbac2d4 0362dea63d5a43778f8d4164a77cd3c6 bae96a4c20ce4f3a859ff518a8423db5 - - default default] Acquiring lock "82d5c1918fd7c974214c7a48c1793a7a82560462" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:58:32 compute-0 nova_compute[250018]: 2026-01-20 14:58:32.290 250022 DEBUG oslo_concurrency.lockutils [None req-23fd4c8f-0123-4fbe-84d6-b757bdbac2d4 0362dea63d5a43778f8d4164a77cd3c6 bae96a4c20ce4f3a859ff518a8423db5 - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:58:32 compute-0 nova_compute[250018]: 2026-01-20 14:58:32.291 250022 DEBUG oslo_concurrency.lockutils [None req-23fd4c8f-0123-4fbe-84d6-b757bdbac2d4 0362dea63d5a43778f8d4164a77cd3c6 bae96a4c20ce4f3a859ff518a8423db5 - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:58:32 compute-0 nova_compute[250018]: 2026-01-20 14:58:32.313 250022 DEBUG nova.storage.rbd_utils [None req-23fd4c8f-0123-4fbe-84d6-b757bdbac2d4 0362dea63d5a43778f8d4164a77cd3c6 bae96a4c20ce4f3a859ff518a8423db5 - - default default] rbd image 20a008fa-d059-4906-ba1c-697755b1ba06_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:58:32 compute-0 nova_compute[250018]: 2026-01-20 14:58:32.316 250022 DEBUG oslo_concurrency.processutils [None req-23fd4c8f-0123-4fbe-84d6-b757bdbac2d4 0362dea63d5a43778f8d4164a77cd3c6 bae96a4c20ce4f3a859ff518a8423db5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 20a008fa-d059-4906-ba1c-697755b1ba06_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:58:32 compute-0 nova_compute[250018]: 2026-01-20 14:58:32.350 250022 DEBUG nova.storage.rbd_utils [None req-35d13496-d712-45a4-beed-ff19fa0f29f3 f78d8330caf745e1b9d9eb71d167e735 9edd9bb3389a4ef2a90569bcdd524d35 - - default default] resizing rbd image 9a8ed269-8170-4a79-aa6d-75abae8562c3_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 20 14:58:32 compute-0 nova_compute[250018]: 2026-01-20 14:58:32.462 250022 DEBUG nova.policy [None req-23fd4c8f-0123-4fbe-84d6-b757bdbac2d4 0362dea63d5a43778f8d4164a77cd3c6 bae96a4c20ce4f3a859ff518a8423db5 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0362dea63d5a43778f8d4164a77cd3c6', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'bae96a4c20ce4f3a859ff518a8423db5', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 20 14:58:32 compute-0 nova_compute[250018]: 2026-01-20 14:58:32.469 250022 DEBUG nova.objects.instance [None req-35d13496-d712-45a4-beed-ff19fa0f29f3 f78d8330caf745e1b9d9eb71d167e735 9edd9bb3389a4ef2a90569bcdd524d35 - - default default] Lazy-loading 'migration_context' on Instance uuid 9a8ed269-8170-4a79-aa6d-75abae8562c3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:58:32 compute-0 nova_compute[250018]: 2026-01-20 14:58:32.494 250022 DEBUG nova.virt.libvirt.driver [None req-35d13496-d712-45a4-beed-ff19fa0f29f3 f78d8330caf745e1b9d9eb71d167e735 9edd9bb3389a4ef2a90569bcdd524d35 - - default default] [instance: 9a8ed269-8170-4a79-aa6d-75abae8562c3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 14:58:32 compute-0 nova_compute[250018]: 2026-01-20 14:58:32.494 250022 DEBUG nova.virt.libvirt.driver [None req-35d13496-d712-45a4-beed-ff19fa0f29f3 f78d8330caf745e1b9d9eb71d167e735 9edd9bb3389a4ef2a90569bcdd524d35 - - default default] [instance: 9a8ed269-8170-4a79-aa6d-75abae8562c3] Ensure instance console log exists: /var/lib/nova/instances/9a8ed269-8170-4a79-aa6d-75abae8562c3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 14:58:32 compute-0 nova_compute[250018]: 2026-01-20 14:58:32.495 250022 DEBUG oslo_concurrency.lockutils [None req-35d13496-d712-45a4-beed-ff19fa0f29f3 f78d8330caf745e1b9d9eb71d167e735 9edd9bb3389a4ef2a90569bcdd524d35 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:58:32 compute-0 nova_compute[250018]: 2026-01-20 14:58:32.495 250022 DEBUG oslo_concurrency.lockutils [None req-35d13496-d712-45a4-beed-ff19fa0f29f3 f78d8330caf745e1b9d9eb71d167e735 9edd9bb3389a4ef2a90569bcdd524d35 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:58:32 compute-0 nova_compute[250018]: 2026-01-20 14:58:32.495 250022 DEBUG oslo_concurrency.lockutils [None req-35d13496-d712-45a4-beed-ff19fa0f29f3 f78d8330caf745e1b9d9eb71d167e735 9edd9bb3389a4ef2a90569bcdd524d35 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:58:32 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/721070655' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:58:32 compute-0 ceph-mon[74360]: pgmap v2194: 321 pgs: 321 active+clean; 300 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 25 KiB/s rd, 181 KiB/s wr, 34 op/s
Jan 20 14:58:32 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/2569693860' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:58:32 compute-0 nova_compute[250018]: 2026-01-20 14:58:32.638 250022 DEBUG oslo_concurrency.processutils [None req-23fd4c8f-0123-4fbe-84d6-b757bdbac2d4 0362dea63d5a43778f8d4164a77cd3c6 bae96a4c20ce4f3a859ff518a8423db5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 20a008fa-d059-4906-ba1c-697755b1ba06_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.322s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:58:32 compute-0 nova_compute[250018]: 2026-01-20 14:58:32.710 250022 DEBUG nova.storage.rbd_utils [None req-23fd4c8f-0123-4fbe-84d6-b757bdbac2d4 0362dea63d5a43778f8d4164a77cd3c6 bae96a4c20ce4f3a859ff518a8423db5 - - default default] resizing rbd image 20a008fa-d059-4906-ba1c-697755b1ba06_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 20 14:58:32 compute-0 nova_compute[250018]: 2026-01-20 14:58:32.825 250022 DEBUG nova.objects.instance [None req-23fd4c8f-0123-4fbe-84d6-b757bdbac2d4 0362dea63d5a43778f8d4164a77cd3c6 bae96a4c20ce4f3a859ff518a8423db5 - - default default] Lazy-loading 'migration_context' on Instance uuid 20a008fa-d059-4906-ba1c-697755b1ba06 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:58:32 compute-0 nova_compute[250018]: 2026-01-20 14:58:32.845 250022 DEBUG nova.virt.libvirt.driver [None req-23fd4c8f-0123-4fbe-84d6-b757bdbac2d4 0362dea63d5a43778f8d4164a77cd3c6 bae96a4c20ce4f3a859ff518a8423db5 - - default default] [instance: 20a008fa-d059-4906-ba1c-697755b1ba06] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 14:58:32 compute-0 nova_compute[250018]: 2026-01-20 14:58:32.846 250022 DEBUG nova.virt.libvirt.driver [None req-23fd4c8f-0123-4fbe-84d6-b757bdbac2d4 0362dea63d5a43778f8d4164a77cd3c6 bae96a4c20ce4f3a859ff518a8423db5 - - default default] [instance: 20a008fa-d059-4906-ba1c-697755b1ba06] Ensure instance console log exists: /var/lib/nova/instances/20a008fa-d059-4906-ba1c-697755b1ba06/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 14:58:32 compute-0 nova_compute[250018]: 2026-01-20 14:58:32.846 250022 DEBUG oslo_concurrency.lockutils [None req-23fd4c8f-0123-4fbe-84d6-b757bdbac2d4 0362dea63d5a43778f8d4164a77cd3c6 bae96a4c20ce4f3a859ff518a8423db5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:58:32 compute-0 nova_compute[250018]: 2026-01-20 14:58:32.846 250022 DEBUG oslo_concurrency.lockutils [None req-23fd4c8f-0123-4fbe-84d6-b757bdbac2d4 0362dea63d5a43778f8d4164a77cd3c6 bae96a4c20ce4f3a859ff518a8423db5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:58:32 compute-0 nova_compute[250018]: 2026-01-20 14:58:32.847 250022 DEBUG oslo_concurrency.lockutils [None req-23fd4c8f-0123-4fbe-84d6-b757bdbac2d4 0362dea63d5a43778f8d4164a77cd3c6 bae96a4c20ce4f3a859ff518a8423db5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:58:33 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:58:33 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:58:33 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:58:33.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:58:33 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:58:33 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:58:33 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:58:33.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:58:33 compute-0 nova_compute[250018]: 2026-01-20 14:58:33.562 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:58:33 compute-0 sudo[331624]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:58:33 compute-0 sudo[331624]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:58:33 compute-0 sudo[331624]: pam_unix(sudo:session): session closed for user root
Jan 20 14:58:33 compute-0 sudo[331649]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:58:33 compute-0 sudo[331649]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:58:33 compute-0 sudo[331649]: pam_unix(sudo:session): session closed for user root
Jan 20 14:58:34 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2195: 321 pgs: 321 active+clean; 322 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 1.2 MiB/s wr, 17 op/s
Jan 20 14:58:34 compute-0 nova_compute[250018]: 2026-01-20 14:58:34.439 250022 DEBUG nova.network.neutron [None req-35d13496-d712-45a4-beed-ff19fa0f29f3 f78d8330caf745e1b9d9eb71d167e735 9edd9bb3389a4ef2a90569bcdd524d35 - - default default] [instance: 9a8ed269-8170-4a79-aa6d-75abae8562c3] Successfully created port: 05a3e492-ecc2-4c60-8410-6fde269e9285 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 20 14:58:34 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e321 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:58:35 compute-0 nova_compute[250018]: 2026-01-20 14:58:35.193 250022 DEBUG nova.network.neutron [None req-23fd4c8f-0123-4fbe-84d6-b757bdbac2d4 0362dea63d5a43778f8d4164a77cd3c6 bae96a4c20ce4f3a859ff518a8423db5 - - default default] [instance: 20a008fa-d059-4906-ba1c-697755b1ba06] Successfully created port: 947d815e-382a-4899-acd3-af215e4fbab4 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 20 14:58:35 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:58:35 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:58:35 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:58:35.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:58:35 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:58:35 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:58:35 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:58:35.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:58:35 compute-0 nova_compute[250018]: 2026-01-20 14:58:35.610 250022 DEBUG nova.network.neutron [None req-35d13496-d712-45a4-beed-ff19fa0f29f3 f78d8330caf745e1b9d9eb71d167e735 9edd9bb3389a4ef2a90569bcdd524d35 - - default default] [instance: 9a8ed269-8170-4a79-aa6d-75abae8562c3] Successfully updated port: 05a3e492-ecc2-4c60-8410-6fde269e9285 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 20 14:58:35 compute-0 nova_compute[250018]: 2026-01-20 14:58:35.629 250022 DEBUG oslo_concurrency.lockutils [None req-35d13496-d712-45a4-beed-ff19fa0f29f3 f78d8330caf745e1b9d9eb71d167e735 9edd9bb3389a4ef2a90569bcdd524d35 - - default default] Acquiring lock "refresh_cache-9a8ed269-8170-4a79-aa6d-75abae8562c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:58:35 compute-0 nova_compute[250018]: 2026-01-20 14:58:35.629 250022 DEBUG oslo_concurrency.lockutils [None req-35d13496-d712-45a4-beed-ff19fa0f29f3 f78d8330caf745e1b9d9eb71d167e735 9edd9bb3389a4ef2a90569bcdd524d35 - - default default] Acquired lock "refresh_cache-9a8ed269-8170-4a79-aa6d-75abae8562c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:58:35 compute-0 nova_compute[250018]: 2026-01-20 14:58:35.629 250022 DEBUG nova.network.neutron [None req-35d13496-d712-45a4-beed-ff19fa0f29f3 f78d8330caf745e1b9d9eb71d167e735 9edd9bb3389a4ef2a90569bcdd524d35 - - default default] [instance: 9a8ed269-8170-4a79-aa6d-75abae8562c3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:58:35 compute-0 nova_compute[250018]: 2026-01-20 14:58:35.763 250022 DEBUG nova.compute.manager [req-729a5e18-ccc8-48ab-a82e-58630443db29 req-f33b0744-8fe8-472b-b7ae-d44fc0e58caf 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 9a8ed269-8170-4a79-aa6d-75abae8562c3] Received event network-changed-05a3e492-ecc2-4c60-8410-6fde269e9285 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:58:35 compute-0 nova_compute[250018]: 2026-01-20 14:58:35.764 250022 DEBUG nova.compute.manager [req-729a5e18-ccc8-48ab-a82e-58630443db29 req-f33b0744-8fe8-472b-b7ae-d44fc0e58caf 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 9a8ed269-8170-4a79-aa6d-75abae8562c3] Refreshing instance network info cache due to event network-changed-05a3e492-ecc2-4c60-8410-6fde269e9285. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:58:35 compute-0 nova_compute[250018]: 2026-01-20 14:58:35.765 250022 DEBUG oslo_concurrency.lockutils [req-729a5e18-ccc8-48ab-a82e-58630443db29 req-f33b0744-8fe8-472b-b7ae-d44fc0e58caf 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-9a8ed269-8170-4a79-aa6d-75abae8562c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:58:35 compute-0 nova_compute[250018]: 2026-01-20 14:58:35.847 250022 DEBUG nova.network.neutron [None req-35d13496-d712-45a4-beed-ff19fa0f29f3 f78d8330caf745e1b9d9eb71d167e735 9edd9bb3389a4ef2a90569bcdd524d35 - - default default] [instance: 9a8ed269-8170-4a79-aa6d-75abae8562c3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 14:58:35 compute-0 nova_compute[250018]: 2026-01-20 14:58:35.895 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:58:36 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2196: 321 pgs: 321 active+clean; 392 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 47 KiB/s rd, 3.5 MiB/s wr, 72 op/s
Jan 20 14:58:36 compute-0 ceph-mon[74360]: pgmap v2195: 321 pgs: 321 active+clean; 322 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 1.2 MiB/s wr, 17 op/s
Jan 20 14:58:36 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:58:36 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3023486754' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:58:36 compute-0 nova_compute[250018]: 2026-01-20 14:58:36.850 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:58:37 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:58:37 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:58:37 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:58:37.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:58:37 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:58:37 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:58:37 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:58:37.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:58:37 compute-0 ceph-mon[74360]: pgmap v2196: 321 pgs: 321 active+clean; 392 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 47 KiB/s rd, 3.5 MiB/s wr, 72 op/s
Jan 20 14:58:37 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/3023486754' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:58:37 compute-0 nova_compute[250018]: 2026-01-20 14:58:37.935 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:58:37 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:58:37.934 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=43, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=42) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:58:37 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:58:37.936 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 14:58:38 compute-0 nova_compute[250018]: 2026-01-20 14:58:38.066 250022 DEBUG nova.network.neutron [None req-35d13496-d712-45a4-beed-ff19fa0f29f3 f78d8330caf745e1b9d9eb71d167e735 9edd9bb3389a4ef2a90569bcdd524d35 - - default default] [instance: 9a8ed269-8170-4a79-aa6d-75abae8562c3] Updating instance_info_cache with network_info: [{"id": "05a3e492-ecc2-4c60-8410-6fde269e9285", "address": "fa:16:3e:af:3f:23", "network": {"id": "2bc7e730-1c24-4817-b1fa-ada338da8f3d", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1806138289-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9edd9bb3389a4ef2a90569bcdd524d35", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05a3e492-ec", "ovs_interfaceid": "05a3e492-ecc2-4c60-8410-6fde269e9285", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:58:38 compute-0 nova_compute[250018]: 2026-01-20 14:58:38.087 250022 DEBUG oslo_concurrency.lockutils [None req-35d13496-d712-45a4-beed-ff19fa0f29f3 f78d8330caf745e1b9d9eb71d167e735 9edd9bb3389a4ef2a90569bcdd524d35 - - default default] Releasing lock "refresh_cache-9a8ed269-8170-4a79-aa6d-75abae8562c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:58:38 compute-0 nova_compute[250018]: 2026-01-20 14:58:38.087 250022 DEBUG nova.compute.manager [None req-35d13496-d712-45a4-beed-ff19fa0f29f3 f78d8330caf745e1b9d9eb71d167e735 9edd9bb3389a4ef2a90569bcdd524d35 - - default default] [instance: 9a8ed269-8170-4a79-aa6d-75abae8562c3] Instance network_info: |[{"id": "05a3e492-ecc2-4c60-8410-6fde269e9285", "address": "fa:16:3e:af:3f:23", "network": {"id": "2bc7e730-1c24-4817-b1fa-ada338da8f3d", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1806138289-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9edd9bb3389a4ef2a90569bcdd524d35", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05a3e492-ec", "ovs_interfaceid": "05a3e492-ecc2-4c60-8410-6fde269e9285", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 20 14:58:38 compute-0 nova_compute[250018]: 2026-01-20 14:58:38.088 250022 DEBUG oslo_concurrency.lockutils [req-729a5e18-ccc8-48ab-a82e-58630443db29 req-f33b0744-8fe8-472b-b7ae-d44fc0e58caf 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-9a8ed269-8170-4a79-aa6d-75abae8562c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:58:38 compute-0 nova_compute[250018]: 2026-01-20 14:58:38.088 250022 DEBUG nova.network.neutron [req-729a5e18-ccc8-48ab-a82e-58630443db29 req-f33b0744-8fe8-472b-b7ae-d44fc0e58caf 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 9a8ed269-8170-4a79-aa6d-75abae8562c3] Refreshing network info cache for port 05a3e492-ecc2-4c60-8410-6fde269e9285 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:58:38 compute-0 nova_compute[250018]: 2026-01-20 14:58:38.091 250022 DEBUG nova.virt.libvirt.driver [None req-35d13496-d712-45a4-beed-ff19fa0f29f3 f78d8330caf745e1b9d9eb71d167e735 9edd9bb3389a4ef2a90569bcdd524d35 - - default default] [instance: 9a8ed269-8170-4a79-aa6d-75abae8562c3] Start _get_guest_xml network_info=[{"id": "05a3e492-ecc2-4c60-8410-6fde269e9285", "address": "fa:16:3e:af:3f:23", "network": {"id": "2bc7e730-1c24-4817-b1fa-ada338da8f3d", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1806138289-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9edd9bb3389a4ef2a90569bcdd524d35", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05a3e492-ec", "ovs_interfaceid": "05a3e492-ecc2-4c60-8410-6fde269e9285", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encrypted': False, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'guest_format': None, 'size': 0, 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 14:58:38 compute-0 nova_compute[250018]: 2026-01-20 14:58:38.095 250022 WARNING nova.virt.libvirt.driver [None req-35d13496-d712-45a4-beed-ff19fa0f29f3 f78d8330caf745e1b9d9eb71d167e735 9edd9bb3389a4ef2a90569bcdd524d35 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:58:38 compute-0 nova_compute[250018]: 2026-01-20 14:58:38.103 250022 DEBUG nova.virt.libvirt.host [None req-35d13496-d712-45a4-beed-ff19fa0f29f3 f78d8330caf745e1b9d9eb71d167e735 9edd9bb3389a4ef2a90569bcdd524d35 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 14:58:38 compute-0 nova_compute[250018]: 2026-01-20 14:58:38.104 250022 DEBUG nova.virt.libvirt.host [None req-35d13496-d712-45a4-beed-ff19fa0f29f3 f78d8330caf745e1b9d9eb71d167e735 9edd9bb3389a4ef2a90569bcdd524d35 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 14:58:38 compute-0 nova_compute[250018]: 2026-01-20 14:58:38.108 250022 DEBUG nova.virt.libvirt.host [None req-35d13496-d712-45a4-beed-ff19fa0f29f3 f78d8330caf745e1b9d9eb71d167e735 9edd9bb3389a4ef2a90569bcdd524d35 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 14:58:38 compute-0 nova_compute[250018]: 2026-01-20 14:58:38.108 250022 DEBUG nova.virt.libvirt.host [None req-35d13496-d712-45a4-beed-ff19fa0f29f3 f78d8330caf745e1b9d9eb71d167e735 9edd9bb3389a4ef2a90569bcdd524d35 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 14:58:38 compute-0 nova_compute[250018]: 2026-01-20 14:58:38.110 250022 DEBUG nova.virt.libvirt.driver [None req-35d13496-d712-45a4-beed-ff19fa0f29f3 f78d8330caf745e1b9d9eb71d167e735 9edd9bb3389a4ef2a90569bcdd524d35 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 14:58:38 compute-0 nova_compute[250018]: 2026-01-20 14:58:38.110 250022 DEBUG nova.virt.hardware [None req-35d13496-d712-45a4-beed-ff19fa0f29f3 f78d8330caf745e1b9d9eb71d167e735 9edd9bb3389a4ef2a90569bcdd524d35 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 14:58:38 compute-0 nova_compute[250018]: 2026-01-20 14:58:38.111 250022 DEBUG nova.virt.hardware [None req-35d13496-d712-45a4-beed-ff19fa0f29f3 f78d8330caf745e1b9d9eb71d167e735 9edd9bb3389a4ef2a90569bcdd524d35 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 14:58:38 compute-0 nova_compute[250018]: 2026-01-20 14:58:38.111 250022 DEBUG nova.virt.hardware [None req-35d13496-d712-45a4-beed-ff19fa0f29f3 f78d8330caf745e1b9d9eb71d167e735 9edd9bb3389a4ef2a90569bcdd524d35 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 14:58:38 compute-0 nova_compute[250018]: 2026-01-20 14:58:38.111 250022 DEBUG nova.virt.hardware [None req-35d13496-d712-45a4-beed-ff19fa0f29f3 f78d8330caf745e1b9d9eb71d167e735 9edd9bb3389a4ef2a90569bcdd524d35 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 14:58:38 compute-0 nova_compute[250018]: 2026-01-20 14:58:38.112 250022 DEBUG nova.virt.hardware [None req-35d13496-d712-45a4-beed-ff19fa0f29f3 f78d8330caf745e1b9d9eb71d167e735 9edd9bb3389a4ef2a90569bcdd524d35 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 14:58:38 compute-0 nova_compute[250018]: 2026-01-20 14:58:38.112 250022 DEBUG nova.virt.hardware [None req-35d13496-d712-45a4-beed-ff19fa0f29f3 f78d8330caf745e1b9d9eb71d167e735 9edd9bb3389a4ef2a90569bcdd524d35 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 14:58:38 compute-0 nova_compute[250018]: 2026-01-20 14:58:38.112 250022 DEBUG nova.virt.hardware [None req-35d13496-d712-45a4-beed-ff19fa0f29f3 f78d8330caf745e1b9d9eb71d167e735 9edd9bb3389a4ef2a90569bcdd524d35 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 14:58:38 compute-0 nova_compute[250018]: 2026-01-20 14:58:38.112 250022 DEBUG nova.virt.hardware [None req-35d13496-d712-45a4-beed-ff19fa0f29f3 f78d8330caf745e1b9d9eb71d167e735 9edd9bb3389a4ef2a90569bcdd524d35 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 14:58:38 compute-0 nova_compute[250018]: 2026-01-20 14:58:38.113 250022 DEBUG nova.virt.hardware [None req-35d13496-d712-45a4-beed-ff19fa0f29f3 f78d8330caf745e1b9d9eb71d167e735 9edd9bb3389a4ef2a90569bcdd524d35 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 14:58:38 compute-0 nova_compute[250018]: 2026-01-20 14:58:38.113 250022 DEBUG nova.virt.hardware [None req-35d13496-d712-45a4-beed-ff19fa0f29f3 f78d8330caf745e1b9d9eb71d167e735 9edd9bb3389a4ef2a90569bcdd524d35 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 14:58:38 compute-0 nova_compute[250018]: 2026-01-20 14:58:38.113 250022 DEBUG nova.virt.hardware [None req-35d13496-d712-45a4-beed-ff19fa0f29f3 f78d8330caf745e1b9d9eb71d167e735 9edd9bb3389a4ef2a90569bcdd524d35 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 14:58:38 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2197: 321 pgs: 321 active+clean; 392 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 36 KiB/s rd, 3.5 MiB/s wr, 57 op/s
Jan 20 14:58:38 compute-0 nova_compute[250018]: 2026-01-20 14:58:38.117 250022 DEBUG oslo_concurrency.processutils [None req-35d13496-d712-45a4-beed-ff19fa0f29f3 f78d8330caf745e1b9d9eb71d167e735 9edd9bb3389a4ef2a90569bcdd524d35 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:58:38 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:58:38 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2554415008' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:58:38 compute-0 nova_compute[250018]: 2026-01-20 14:58:38.563 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:58:38 compute-0 nova_compute[250018]: 2026-01-20 14:58:38.569 250022 DEBUG oslo_concurrency.processutils [None req-35d13496-d712-45a4-beed-ff19fa0f29f3 f78d8330caf745e1b9d9eb71d167e735 9edd9bb3389a4ef2a90569bcdd524d35 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:58:38 compute-0 nova_compute[250018]: 2026-01-20 14:58:38.594 250022 DEBUG nova.storage.rbd_utils [None req-35d13496-d712-45a4-beed-ff19fa0f29f3 f78d8330caf745e1b9d9eb71d167e735 9edd9bb3389a4ef2a90569bcdd524d35 - - default default] rbd image 9a8ed269-8170-4a79-aa6d-75abae8562c3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:58:38 compute-0 nova_compute[250018]: 2026-01-20 14:58:38.598 250022 DEBUG oslo_concurrency.processutils [None req-35d13496-d712-45a4-beed-ff19fa0f29f3 f78d8330caf745e1b9d9eb71d167e735 9edd9bb3389a4ef2a90569bcdd524d35 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:58:38 compute-0 ceph-mon[74360]: pgmap v2197: 321 pgs: 321 active+clean; 392 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 36 KiB/s rd, 3.5 MiB/s wr, 57 op/s
Jan 20 14:58:38 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/2554415008' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:58:38 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:58:38.938 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=367c1a2c-b16a-4828-ab5a-626bb50023b4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '43'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:58:39 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:58:39 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/761535922' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:58:39 compute-0 nova_compute[250018]: 2026-01-20 14:58:39.024 250022 DEBUG oslo_concurrency.processutils [None req-35d13496-d712-45a4-beed-ff19fa0f29f3 f78d8330caf745e1b9d9eb71d167e735 9edd9bb3389a4ef2a90569bcdd524d35 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:58:39 compute-0 nova_compute[250018]: 2026-01-20 14:58:39.026 250022 DEBUG nova.virt.libvirt.vif [None req-35d13496-d712-45a4-beed-ff19fa0f29f3 f78d8330caf745e1b9d9eb71d167e735 9edd9bb3389a4ef2a90569bcdd524d35 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:58:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestMultiTenantJSON-server-1583931054',display_name='tempest-ServersNegativeTestMultiTenantJSON-server-1583931054',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestmultitenantjson-server-1583931054',id=141,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9edd9bb3389a4ef2a90569bcdd524d35',ramdisk_id='',reservation_id='r-gvd7ruz4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestMultiTenantJSON-881955012',owner_user_name='tempest-ServersNegativeTestMultiTenantJSON-881955012-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:58:31Z,user_data=None,user_id='f78d8330caf745e1b9d9eb71d167e735',uuid=9a8ed269-8170-4a79-aa6d-75abae8562c3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "05a3e492-ecc2-4c60-8410-6fde269e9285", "address": "fa:16:3e:af:3f:23", "network": {"id": "2bc7e730-1c24-4817-b1fa-ada338da8f3d", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1806138289-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9edd9bb3389a4ef2a90569bcdd524d35", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05a3e492-ec", "ovs_interfaceid": "05a3e492-ecc2-4c60-8410-6fde269e9285", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 14:58:39 compute-0 nova_compute[250018]: 2026-01-20 14:58:39.027 250022 DEBUG nova.network.os_vif_util [None req-35d13496-d712-45a4-beed-ff19fa0f29f3 f78d8330caf745e1b9d9eb71d167e735 9edd9bb3389a4ef2a90569bcdd524d35 - - default default] Converting VIF {"id": "05a3e492-ecc2-4c60-8410-6fde269e9285", "address": "fa:16:3e:af:3f:23", "network": {"id": "2bc7e730-1c24-4817-b1fa-ada338da8f3d", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1806138289-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9edd9bb3389a4ef2a90569bcdd524d35", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05a3e492-ec", "ovs_interfaceid": "05a3e492-ecc2-4c60-8410-6fde269e9285", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:58:39 compute-0 nova_compute[250018]: 2026-01-20 14:58:39.028 250022 DEBUG nova.network.os_vif_util [None req-35d13496-d712-45a4-beed-ff19fa0f29f3 f78d8330caf745e1b9d9eb71d167e735 9edd9bb3389a4ef2a90569bcdd524d35 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:af:3f:23,bridge_name='br-int',has_traffic_filtering=True,id=05a3e492-ecc2-4c60-8410-6fde269e9285,network=Network(2bc7e730-1c24-4817-b1fa-ada338da8f3d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05a3e492-ec') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:58:39 compute-0 nova_compute[250018]: 2026-01-20 14:58:39.029 250022 DEBUG nova.objects.instance [None req-35d13496-d712-45a4-beed-ff19fa0f29f3 f78d8330caf745e1b9d9eb71d167e735 9edd9bb3389a4ef2a90569bcdd524d35 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9a8ed269-8170-4a79-aa6d-75abae8562c3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:58:39 compute-0 nova_compute[250018]: 2026-01-20 14:58:39.049 250022 DEBUG nova.virt.libvirt.driver [None req-35d13496-d712-45a4-beed-ff19fa0f29f3 f78d8330caf745e1b9d9eb71d167e735 9edd9bb3389a4ef2a90569bcdd524d35 - - default default] [instance: 9a8ed269-8170-4a79-aa6d-75abae8562c3] End _get_guest_xml xml=<domain type="kvm">
Jan 20 14:58:39 compute-0 nova_compute[250018]:   <uuid>9a8ed269-8170-4a79-aa6d-75abae8562c3</uuid>
Jan 20 14:58:39 compute-0 nova_compute[250018]:   <name>instance-0000008d</name>
Jan 20 14:58:39 compute-0 nova_compute[250018]:   <memory>131072</memory>
Jan 20 14:58:39 compute-0 nova_compute[250018]:   <vcpu>1</vcpu>
Jan 20 14:58:39 compute-0 nova_compute[250018]:   <metadata>
Jan 20 14:58:39 compute-0 nova_compute[250018]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 14:58:39 compute-0 nova_compute[250018]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 14:58:39 compute-0 nova_compute[250018]:       <nova:name>tempest-ServersNegativeTestMultiTenantJSON-server-1583931054</nova:name>
Jan 20 14:58:39 compute-0 nova_compute[250018]:       <nova:creationTime>2026-01-20 14:58:38</nova:creationTime>
Jan 20 14:58:39 compute-0 nova_compute[250018]:       <nova:flavor name="m1.nano">
Jan 20 14:58:39 compute-0 nova_compute[250018]:         <nova:memory>128</nova:memory>
Jan 20 14:58:39 compute-0 nova_compute[250018]:         <nova:disk>1</nova:disk>
Jan 20 14:58:39 compute-0 nova_compute[250018]:         <nova:swap>0</nova:swap>
Jan 20 14:58:39 compute-0 nova_compute[250018]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 14:58:39 compute-0 nova_compute[250018]:         <nova:vcpus>1</nova:vcpus>
Jan 20 14:58:39 compute-0 nova_compute[250018]:       </nova:flavor>
Jan 20 14:58:39 compute-0 nova_compute[250018]:       <nova:owner>
Jan 20 14:58:39 compute-0 nova_compute[250018]:         <nova:user uuid="f78d8330caf745e1b9d9eb71d167e735">tempest-ServersNegativeTestMultiTenantJSON-881955012-project-member</nova:user>
Jan 20 14:58:39 compute-0 nova_compute[250018]:         <nova:project uuid="9edd9bb3389a4ef2a90569bcdd524d35">tempest-ServersNegativeTestMultiTenantJSON-881955012</nova:project>
Jan 20 14:58:39 compute-0 nova_compute[250018]:       </nova:owner>
Jan 20 14:58:39 compute-0 nova_compute[250018]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 14:58:39 compute-0 nova_compute[250018]:       <nova:ports>
Jan 20 14:58:39 compute-0 nova_compute[250018]:         <nova:port uuid="05a3e492-ecc2-4c60-8410-6fde269e9285">
Jan 20 14:58:39 compute-0 nova_compute[250018]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Jan 20 14:58:39 compute-0 nova_compute[250018]:         </nova:port>
Jan 20 14:58:39 compute-0 nova_compute[250018]:       </nova:ports>
Jan 20 14:58:39 compute-0 nova_compute[250018]:     </nova:instance>
Jan 20 14:58:39 compute-0 nova_compute[250018]:   </metadata>
Jan 20 14:58:39 compute-0 nova_compute[250018]:   <sysinfo type="smbios">
Jan 20 14:58:39 compute-0 nova_compute[250018]:     <system>
Jan 20 14:58:39 compute-0 nova_compute[250018]:       <entry name="manufacturer">RDO</entry>
Jan 20 14:58:39 compute-0 nova_compute[250018]:       <entry name="product">OpenStack Compute</entry>
Jan 20 14:58:39 compute-0 nova_compute[250018]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 14:58:39 compute-0 nova_compute[250018]:       <entry name="serial">9a8ed269-8170-4a79-aa6d-75abae8562c3</entry>
Jan 20 14:58:39 compute-0 nova_compute[250018]:       <entry name="uuid">9a8ed269-8170-4a79-aa6d-75abae8562c3</entry>
Jan 20 14:58:39 compute-0 nova_compute[250018]:       <entry name="family">Virtual Machine</entry>
Jan 20 14:58:39 compute-0 nova_compute[250018]:     </system>
Jan 20 14:58:39 compute-0 nova_compute[250018]:   </sysinfo>
Jan 20 14:58:39 compute-0 nova_compute[250018]:   <os>
Jan 20 14:58:39 compute-0 nova_compute[250018]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 14:58:39 compute-0 nova_compute[250018]:     <boot dev="hd"/>
Jan 20 14:58:39 compute-0 nova_compute[250018]:     <smbios mode="sysinfo"/>
Jan 20 14:58:39 compute-0 nova_compute[250018]:   </os>
Jan 20 14:58:39 compute-0 nova_compute[250018]:   <features>
Jan 20 14:58:39 compute-0 nova_compute[250018]:     <acpi/>
Jan 20 14:58:39 compute-0 nova_compute[250018]:     <apic/>
Jan 20 14:58:39 compute-0 nova_compute[250018]:     <vmcoreinfo/>
Jan 20 14:58:39 compute-0 nova_compute[250018]:   </features>
Jan 20 14:58:39 compute-0 nova_compute[250018]:   <clock offset="utc">
Jan 20 14:58:39 compute-0 nova_compute[250018]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 14:58:39 compute-0 nova_compute[250018]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 14:58:39 compute-0 nova_compute[250018]:     <timer name="hpet" present="no"/>
Jan 20 14:58:39 compute-0 nova_compute[250018]:   </clock>
Jan 20 14:58:39 compute-0 nova_compute[250018]:   <cpu mode="custom" match="exact">
Jan 20 14:58:39 compute-0 nova_compute[250018]:     <model>Nehalem</model>
Jan 20 14:58:39 compute-0 nova_compute[250018]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 14:58:39 compute-0 nova_compute[250018]:   </cpu>
Jan 20 14:58:39 compute-0 nova_compute[250018]:   <devices>
Jan 20 14:58:39 compute-0 nova_compute[250018]:     <disk type="network" device="disk">
Jan 20 14:58:39 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 14:58:39 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/9a8ed269-8170-4a79-aa6d-75abae8562c3_disk">
Jan 20 14:58:39 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:58:39 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:58:39 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:58:39 compute-0 nova_compute[250018]:       </source>
Jan 20 14:58:39 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 14:58:39 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:58:39 compute-0 nova_compute[250018]:       </auth>
Jan 20 14:58:39 compute-0 nova_compute[250018]:       <target dev="vda" bus="virtio"/>
Jan 20 14:58:39 compute-0 nova_compute[250018]:     </disk>
Jan 20 14:58:39 compute-0 nova_compute[250018]:     <disk type="network" device="cdrom">
Jan 20 14:58:39 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 14:58:39 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/9a8ed269-8170-4a79-aa6d-75abae8562c3_disk.config">
Jan 20 14:58:39 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:58:39 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:58:39 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:58:39 compute-0 nova_compute[250018]:       </source>
Jan 20 14:58:39 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 14:58:39 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:58:39 compute-0 nova_compute[250018]:       </auth>
Jan 20 14:58:39 compute-0 nova_compute[250018]:       <target dev="sda" bus="sata"/>
Jan 20 14:58:39 compute-0 nova_compute[250018]:     </disk>
Jan 20 14:58:39 compute-0 nova_compute[250018]:     <interface type="ethernet">
Jan 20 14:58:39 compute-0 nova_compute[250018]:       <mac address="fa:16:3e:af:3f:23"/>
Jan 20 14:58:39 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 14:58:39 compute-0 nova_compute[250018]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 14:58:39 compute-0 nova_compute[250018]:       <mtu size="1442"/>
Jan 20 14:58:39 compute-0 nova_compute[250018]:       <target dev="tap05a3e492-ec"/>
Jan 20 14:58:39 compute-0 nova_compute[250018]:     </interface>
Jan 20 14:58:39 compute-0 nova_compute[250018]:     <serial type="pty">
Jan 20 14:58:39 compute-0 nova_compute[250018]:       <log file="/var/lib/nova/instances/9a8ed269-8170-4a79-aa6d-75abae8562c3/console.log" append="off"/>
Jan 20 14:58:39 compute-0 nova_compute[250018]:     </serial>
Jan 20 14:58:39 compute-0 nova_compute[250018]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 14:58:39 compute-0 nova_compute[250018]:     <video>
Jan 20 14:58:39 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 14:58:39 compute-0 nova_compute[250018]:     </video>
Jan 20 14:58:39 compute-0 nova_compute[250018]:     <input type="tablet" bus="usb"/>
Jan 20 14:58:39 compute-0 nova_compute[250018]:     <rng model="virtio">
Jan 20 14:58:39 compute-0 nova_compute[250018]:       <backend model="random">/dev/urandom</backend>
Jan 20 14:58:39 compute-0 nova_compute[250018]:     </rng>
Jan 20 14:58:39 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root"/>
Jan 20 14:58:39 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:58:39 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:58:39 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:58:39 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:58:39 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:58:39 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:58:39 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:58:39 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:58:39 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:58:39 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:58:39 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:58:39 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:58:39 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:58:39 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:58:39 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:58:39 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:58:39 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:58:39 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:58:39 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:58:39 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:58:39 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:58:39 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:58:39 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:58:39 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:58:39 compute-0 nova_compute[250018]:     <controller type="usb" index="0"/>
Jan 20 14:58:39 compute-0 nova_compute[250018]:     <memballoon model="virtio">
Jan 20 14:58:39 compute-0 nova_compute[250018]:       <stats period="10"/>
Jan 20 14:58:39 compute-0 nova_compute[250018]:     </memballoon>
Jan 20 14:58:39 compute-0 nova_compute[250018]:   </devices>
Jan 20 14:58:39 compute-0 nova_compute[250018]: </domain>
Jan 20 14:58:39 compute-0 nova_compute[250018]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 14:58:39 compute-0 nova_compute[250018]: 2026-01-20 14:58:39.050 250022 DEBUG nova.compute.manager [None req-35d13496-d712-45a4-beed-ff19fa0f29f3 f78d8330caf745e1b9d9eb71d167e735 9edd9bb3389a4ef2a90569bcdd524d35 - - default default] [instance: 9a8ed269-8170-4a79-aa6d-75abae8562c3] Preparing to wait for external event network-vif-plugged-05a3e492-ecc2-4c60-8410-6fde269e9285 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 20 14:58:39 compute-0 nova_compute[250018]: 2026-01-20 14:58:39.051 250022 DEBUG oslo_concurrency.lockutils [None req-35d13496-d712-45a4-beed-ff19fa0f29f3 f78d8330caf745e1b9d9eb71d167e735 9edd9bb3389a4ef2a90569bcdd524d35 - - default default] Acquiring lock "9a8ed269-8170-4a79-aa6d-75abae8562c3-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:58:39 compute-0 nova_compute[250018]: 2026-01-20 14:58:39.051 250022 DEBUG oslo_concurrency.lockutils [None req-35d13496-d712-45a4-beed-ff19fa0f29f3 f78d8330caf745e1b9d9eb71d167e735 9edd9bb3389a4ef2a90569bcdd524d35 - - default default] Lock "9a8ed269-8170-4a79-aa6d-75abae8562c3-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:58:39 compute-0 nova_compute[250018]: 2026-01-20 14:58:39.052 250022 DEBUG oslo_concurrency.lockutils [None req-35d13496-d712-45a4-beed-ff19fa0f29f3 f78d8330caf745e1b9d9eb71d167e735 9edd9bb3389a4ef2a90569bcdd524d35 - - default default] Lock "9a8ed269-8170-4a79-aa6d-75abae8562c3-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:58:39 compute-0 nova_compute[250018]: 2026-01-20 14:58:39.053 250022 DEBUG nova.virt.libvirt.vif [None req-35d13496-d712-45a4-beed-ff19fa0f29f3 f78d8330caf745e1b9d9eb71d167e735 9edd9bb3389a4ef2a90569bcdd524d35 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:58:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestMultiTenantJSON-server-1583931054',display_name='tempest-ServersNegativeTestMultiTenantJSON-server-1583931054',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestmultitenantjson-server-1583931054',id=141,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9edd9bb3389a4ef2a90569bcdd524d35',ramdisk_id='',reservation_id='r-gvd7ruz4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestMultiTenantJSON-881955012',owner_user_name='tempest-ServersNegativeTestMultiTenantJSON-881955012-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:58:31Z,user_data=None,user_id='f78d8330caf745e1b9d9eb71d167e735',uuid=9a8ed269-8170-4a79-aa6d-75abae8562c3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "05a3e492-ecc2-4c60-8410-6fde269e9285", "address": "fa:16:3e:af:3f:23", "network": {"id": "2bc7e730-1c24-4817-b1fa-ada338da8f3d", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1806138289-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9edd9bb3389a4ef2a90569bcdd524d35", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05a3e492-ec", "ovs_interfaceid": "05a3e492-ecc2-4c60-8410-6fde269e9285", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 14:58:39 compute-0 nova_compute[250018]: 2026-01-20 14:58:39.054 250022 DEBUG nova.network.os_vif_util [None req-35d13496-d712-45a4-beed-ff19fa0f29f3 f78d8330caf745e1b9d9eb71d167e735 9edd9bb3389a4ef2a90569bcdd524d35 - - default default] Converting VIF {"id": "05a3e492-ecc2-4c60-8410-6fde269e9285", "address": "fa:16:3e:af:3f:23", "network": {"id": "2bc7e730-1c24-4817-b1fa-ada338da8f3d", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1806138289-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9edd9bb3389a4ef2a90569bcdd524d35", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05a3e492-ec", "ovs_interfaceid": "05a3e492-ecc2-4c60-8410-6fde269e9285", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:58:39 compute-0 nova_compute[250018]: 2026-01-20 14:58:39.054 250022 DEBUG nova.network.os_vif_util [None req-35d13496-d712-45a4-beed-ff19fa0f29f3 f78d8330caf745e1b9d9eb71d167e735 9edd9bb3389a4ef2a90569bcdd524d35 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:af:3f:23,bridge_name='br-int',has_traffic_filtering=True,id=05a3e492-ecc2-4c60-8410-6fde269e9285,network=Network(2bc7e730-1c24-4817-b1fa-ada338da8f3d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05a3e492-ec') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:58:39 compute-0 nova_compute[250018]: 2026-01-20 14:58:39.055 250022 DEBUG os_vif [None req-35d13496-d712-45a4-beed-ff19fa0f29f3 f78d8330caf745e1b9d9eb71d167e735 9edd9bb3389a4ef2a90569bcdd524d35 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:af:3f:23,bridge_name='br-int',has_traffic_filtering=True,id=05a3e492-ecc2-4c60-8410-6fde269e9285,network=Network(2bc7e730-1c24-4817-b1fa-ada338da8f3d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05a3e492-ec') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 14:58:39 compute-0 nova_compute[250018]: 2026-01-20 14:58:39.055 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:58:39 compute-0 nova_compute[250018]: 2026-01-20 14:58:39.056 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:58:39 compute-0 nova_compute[250018]: 2026-01-20 14:58:39.056 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:58:39 compute-0 nova_compute[250018]: 2026-01-20 14:58:39.061 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:58:39 compute-0 nova_compute[250018]: 2026-01-20 14:58:39.061 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap05a3e492-ec, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:58:39 compute-0 nova_compute[250018]: 2026-01-20 14:58:39.062 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap05a3e492-ec, col_values=(('external_ids', {'iface-id': '05a3e492-ecc2-4c60-8410-6fde269e9285', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:af:3f:23', 'vm-uuid': '9a8ed269-8170-4a79-aa6d-75abae8562c3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:58:39 compute-0 NetworkManager[48960]: <info>  [1768921119.0642] manager: (tap05a3e492-ec): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/250)
Jan 20 14:58:39 compute-0 nova_compute[250018]: 2026-01-20 14:58:39.065 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:58:39 compute-0 nova_compute[250018]: 2026-01-20 14:58:39.068 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:58:39 compute-0 nova_compute[250018]: 2026-01-20 14:58:39.069 250022 INFO os_vif [None req-35d13496-d712-45a4-beed-ff19fa0f29f3 f78d8330caf745e1b9d9eb71d167e735 9edd9bb3389a4ef2a90569bcdd524d35 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:af:3f:23,bridge_name='br-int',has_traffic_filtering=True,id=05a3e492-ecc2-4c60-8410-6fde269e9285,network=Network(2bc7e730-1c24-4817-b1fa-ada338da8f3d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05a3e492-ec')
Jan 20 14:58:39 compute-0 nova_compute[250018]: 2026-01-20 14:58:39.093 250022 DEBUG nova.network.neutron [None req-23fd4c8f-0123-4fbe-84d6-b757bdbac2d4 0362dea63d5a43778f8d4164a77cd3c6 bae96a4c20ce4f3a859ff518a8423db5 - - default default] [instance: 20a008fa-d059-4906-ba1c-697755b1ba06] Successfully updated port: 947d815e-382a-4899-acd3-af215e4fbab4 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 20 14:58:39 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:58:39 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:58:39 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:58:39.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:58:39 compute-0 nova_compute[250018]: 2026-01-20 14:58:39.203 250022 DEBUG oslo_concurrency.lockutils [None req-23fd4c8f-0123-4fbe-84d6-b757bdbac2d4 0362dea63d5a43778f8d4164a77cd3c6 bae96a4c20ce4f3a859ff518a8423db5 - - default default] Acquiring lock "refresh_cache-20a008fa-d059-4906-ba1c-697755b1ba06" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:58:39 compute-0 nova_compute[250018]: 2026-01-20 14:58:39.203 250022 DEBUG oslo_concurrency.lockutils [None req-23fd4c8f-0123-4fbe-84d6-b757bdbac2d4 0362dea63d5a43778f8d4164a77cd3c6 bae96a4c20ce4f3a859ff518a8423db5 - - default default] Acquired lock "refresh_cache-20a008fa-d059-4906-ba1c-697755b1ba06" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:58:39 compute-0 nova_compute[250018]: 2026-01-20 14:58:39.204 250022 DEBUG nova.network.neutron [None req-23fd4c8f-0123-4fbe-84d6-b757bdbac2d4 0362dea63d5a43778f8d4164a77cd3c6 bae96a4c20ce4f3a859ff518a8423db5 - - default default] [instance: 20a008fa-d059-4906-ba1c-697755b1ba06] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:58:39 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:58:39 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:58:39 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:58:39.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:58:39 compute-0 nova_compute[250018]: 2026-01-20 14:58:39.596 250022 DEBUG nova.compute.manager [req-19aaf3fb-be71-4e08-aae7-ebd74e6eb18f req-d0c3b58d-ac83-490b-9362-bcac78b2931f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 20a008fa-d059-4906-ba1c-697755b1ba06] Received event network-changed-947d815e-382a-4899-acd3-af215e4fbab4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:58:39 compute-0 nova_compute[250018]: 2026-01-20 14:58:39.596 250022 DEBUG nova.compute.manager [req-19aaf3fb-be71-4e08-aae7-ebd74e6eb18f req-d0c3b58d-ac83-490b-9362-bcac78b2931f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 20a008fa-d059-4906-ba1c-697755b1ba06] Refreshing instance network info cache due to event network-changed-947d815e-382a-4899-acd3-af215e4fbab4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:58:39 compute-0 nova_compute[250018]: 2026-01-20 14:58:39.597 250022 DEBUG oslo_concurrency.lockutils [req-19aaf3fb-be71-4e08-aae7-ebd74e6eb18f req-d0c3b58d-ac83-490b-9362-bcac78b2931f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-20a008fa-d059-4906-ba1c-697755b1ba06" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:58:39 compute-0 sudo[331742]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:58:39 compute-0 sudo[331742]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:58:39 compute-0 sudo[331742]: pam_unix(sudo:session): session closed for user root
Jan 20 14:58:39 compute-0 nova_compute[250018]: 2026-01-20 14:58:39.655 250022 DEBUG nova.virt.libvirt.driver [None req-35d13496-d712-45a4-beed-ff19fa0f29f3 f78d8330caf745e1b9d9eb71d167e735 9edd9bb3389a4ef2a90569bcdd524d35 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:58:39 compute-0 nova_compute[250018]: 2026-01-20 14:58:39.655 250022 DEBUG nova.virt.libvirt.driver [None req-35d13496-d712-45a4-beed-ff19fa0f29f3 f78d8330caf745e1b9d9eb71d167e735 9edd9bb3389a4ef2a90569bcdd524d35 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:58:39 compute-0 nova_compute[250018]: 2026-01-20 14:58:39.655 250022 DEBUG nova.virt.libvirt.driver [None req-35d13496-d712-45a4-beed-ff19fa0f29f3 f78d8330caf745e1b9d9eb71d167e735 9edd9bb3389a4ef2a90569bcdd524d35 - - default default] No VIF found with MAC fa:16:3e:af:3f:23, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 14:58:39 compute-0 nova_compute[250018]: 2026-01-20 14:58:39.656 250022 INFO nova.virt.libvirt.driver [None req-35d13496-d712-45a4-beed-ff19fa0f29f3 f78d8330caf745e1b9d9eb71d167e735 9edd9bb3389a4ef2a90569bcdd524d35 - - default default] [instance: 9a8ed269-8170-4a79-aa6d-75abae8562c3] Using config drive
Jan 20 14:58:39 compute-0 nova_compute[250018]: 2026-01-20 14:58:39.688 250022 DEBUG nova.storage.rbd_utils [None req-35d13496-d712-45a4-beed-ff19fa0f29f3 f78d8330caf745e1b9d9eb71d167e735 9edd9bb3389a4ef2a90569bcdd524d35 - - default default] rbd image 9a8ed269-8170-4a79-aa6d-75abae8562c3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:58:39 compute-0 podman[331766]: 2026-01-20 14:58:39.721549761 +0000 UTC m=+0.063894641 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 20 14:58:39 compute-0 sudo[331769]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:58:39 compute-0 sudo[331769]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:58:39 compute-0 sudo[331769]: pam_unix(sudo:session): session closed for user root
Jan 20 14:58:39 compute-0 podman[331767]: 2026-01-20 14:58:39.781489396 +0000 UTC m=+0.121979176 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 20 14:58:39 compute-0 sudo[331845]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:58:39 compute-0 sudo[331845]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:58:39 compute-0 sudo[331845]: pam_unix(sudo:session): session closed for user root
Jan 20 14:58:39 compute-0 sudo[331877]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Jan 20 14:58:39 compute-0 sudo[331877]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:58:39 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e321 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:58:40 compute-0 nova_compute[250018]: 2026-01-20 14:58:40.014 250022 DEBUG nova.network.neutron [None req-23fd4c8f-0123-4fbe-84d6-b757bdbac2d4 0362dea63d5a43778f8d4164a77cd3c6 bae96a4c20ce4f3a859ff518a8423db5 - - default default] [instance: 20a008fa-d059-4906-ba1c-697755b1ba06] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 14:58:40 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2198: 321 pgs: 321 active+clean; 392 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 36 KiB/s rd, 3.5 MiB/s wr, 57 op/s
Jan 20 14:58:40 compute-0 sudo[331877]: pam_unix(sudo:session): session closed for user root
Jan 20 14:58:40 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 14:58:40 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/761535922' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:58:40 compute-0 nova_compute[250018]: 2026-01-20 14:58:40.352 250022 INFO nova.virt.libvirt.driver [None req-35d13496-d712-45a4-beed-ff19fa0f29f3 f78d8330caf745e1b9d9eb71d167e735 9edd9bb3389a4ef2a90569bcdd524d35 - - default default] [instance: 9a8ed269-8170-4a79-aa6d-75abae8562c3] Creating config drive at /var/lib/nova/instances/9a8ed269-8170-4a79-aa6d-75abae8562c3/disk.config
Jan 20 14:58:40 compute-0 nova_compute[250018]: 2026-01-20 14:58:40.362 250022 DEBUG oslo_concurrency.processutils [None req-35d13496-d712-45a4-beed-ff19fa0f29f3 f78d8330caf745e1b9d9eb71d167e735 9edd9bb3389a4ef2a90569bcdd524d35 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9a8ed269-8170-4a79-aa6d-75abae8562c3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpg0z9bvfn execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:58:40 compute-0 nova_compute[250018]: 2026-01-20 14:58:40.517 250022 DEBUG oslo_concurrency.processutils [None req-35d13496-d712-45a4-beed-ff19fa0f29f3 f78d8330caf745e1b9d9eb71d167e735 9edd9bb3389a4ef2a90569bcdd524d35 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9a8ed269-8170-4a79-aa6d-75abae8562c3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpg0z9bvfn" returned: 0 in 0.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:58:40 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 20 14:58:40 compute-0 nova_compute[250018]: 2026-01-20 14:58:40.784 250022 DEBUG nova.storage.rbd_utils [None req-35d13496-d712-45a4-beed-ff19fa0f29f3 f78d8330caf745e1b9d9eb71d167e735 9edd9bb3389a4ef2a90569bcdd524d35 - - default default] rbd image 9a8ed269-8170-4a79-aa6d-75abae8562c3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:58:40 compute-0 nova_compute[250018]: 2026-01-20 14:58:40.787 250022 DEBUG oslo_concurrency.processutils [None req-35d13496-d712-45a4-beed-ff19fa0f29f3 f78d8330caf745e1b9d9eb71d167e735 9edd9bb3389a4ef2a90569bcdd524d35 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9a8ed269-8170-4a79-aa6d-75abae8562c3/disk.config 9a8ed269-8170-4a79-aa6d-75abae8562c3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:58:40 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:58:40 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 14:58:41 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:58:41 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:58:41 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:58:41.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:58:41 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:58:41 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 20 14:58:41 compute-0 nova_compute[250018]: 2026-01-20 14:58:41.509 250022 DEBUG nova.network.neutron [req-729a5e18-ccc8-48ab-a82e-58630443db29 req-f33b0744-8fe8-472b-b7ae-d44fc0e58caf 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 9a8ed269-8170-4a79-aa6d-75abae8562c3] Updated VIF entry in instance network info cache for port 05a3e492-ecc2-4c60-8410-6fde269e9285. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 14:58:41 compute-0 nova_compute[250018]: 2026-01-20 14:58:41.510 250022 DEBUG nova.network.neutron [req-729a5e18-ccc8-48ab-a82e-58630443db29 req-f33b0744-8fe8-472b-b7ae-d44fc0e58caf 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 9a8ed269-8170-4a79-aa6d-75abae8562c3] Updating instance_info_cache with network_info: [{"id": "05a3e492-ecc2-4c60-8410-6fde269e9285", "address": "fa:16:3e:af:3f:23", "network": {"id": "2bc7e730-1c24-4817-b1fa-ada338da8f3d", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1806138289-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9edd9bb3389a4ef2a90569bcdd524d35", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05a3e492-ec", "ovs_interfaceid": "05a3e492-ecc2-4c60-8410-6fde269e9285", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:58:41 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:58:41 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:58:41 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:58:41.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:58:41 compute-0 nova_compute[250018]: 2026-01-20 14:58:41.652 250022 DEBUG oslo_concurrency.processutils [None req-35d13496-d712-45a4-beed-ff19fa0f29f3 f78d8330caf745e1b9d9eb71d167e735 9edd9bb3389a4ef2a90569bcdd524d35 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9a8ed269-8170-4a79-aa6d-75abae8562c3/disk.config 9a8ed269-8170-4a79-aa6d-75abae8562c3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.865s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:58:41 compute-0 nova_compute[250018]: 2026-01-20 14:58:41.653 250022 INFO nova.virt.libvirt.driver [None req-35d13496-d712-45a4-beed-ff19fa0f29f3 f78d8330caf745e1b9d9eb71d167e735 9edd9bb3389a4ef2a90569bcdd524d35 - - default default] [instance: 9a8ed269-8170-4a79-aa6d-75abae8562c3] Deleting local config drive /var/lib/nova/instances/9a8ed269-8170-4a79-aa6d-75abae8562c3/disk.config because it was imported into RBD.
Jan 20 14:58:41 compute-0 kernel: tap05a3e492-ec: entered promiscuous mode
Jan 20 14:58:41 compute-0 NetworkManager[48960]: <info>  [1768921121.7168] manager: (tap05a3e492-ec): new Tun device (/org/freedesktop/NetworkManager/Devices/251)
Jan 20 14:58:41 compute-0 ovn_controller[148666]: 2026-01-20T14:58:41Z|00512|binding|INFO|Claiming lport 05a3e492-ecc2-4c60-8410-6fde269e9285 for this chassis.
Jan 20 14:58:41 compute-0 ovn_controller[148666]: 2026-01-20T14:58:41Z|00513|binding|INFO|05a3e492-ecc2-4c60-8410-6fde269e9285: Claiming fa:16:3e:af:3f:23 10.100.0.3
Jan 20 14:58:41 compute-0 nova_compute[250018]: 2026-01-20 14:58:41.716 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:58:41 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:58:41 compute-0 nova_compute[250018]: 2026-01-20 14:58:41.722 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:58:41 compute-0 systemd-udevd[331990]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:58:41 compute-0 systemd-machined[216401]: New machine qemu-66-instance-0000008d.
Jan 20 14:58:41 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:58:41.764 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:af:3f:23 10.100.0.3'], port_security=['fa:16:3e:af:3f:23 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '9a8ed269-8170-4a79-aa6d-75abae8562c3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2bc7e730-1c24-4817-b1fa-ada338da8f3d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9edd9bb3389a4ef2a90569bcdd524d35', 'neutron:revision_number': '2', 'neutron:security_group_ids': '95ba9616-5b79-4819-afc2-f87bef352b1d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=769ddbf1-bd27-45c3-a935-758b0ad7c249, chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=05a3e492-ecc2-4c60-8410-6fde269e9285) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:58:41 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:58:41.766 160071 INFO neutron.agent.ovn.metadata.agent [-] Port 05a3e492-ecc2-4c60-8410-6fde269e9285 in datapath 2bc7e730-1c24-4817-b1fa-ada338da8f3d bound to our chassis
Jan 20 14:58:41 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:58:41.768 160071 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2bc7e730-1c24-4817-b1fa-ada338da8f3d
Jan 20 14:58:41 compute-0 systemd[1]: Started Virtual Machine qemu-66-instance-0000008d.
Jan 20 14:58:41 compute-0 NetworkManager[48960]: <info>  [1768921121.7790] device (tap05a3e492-ec): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 14:58:41 compute-0 NetworkManager[48960]: <info>  [1768921121.7797] device (tap05a3e492-ec): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 14:58:41 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:58:41.780 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[e37d2742-fa3b-468c-8794-9f40a295dc36]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:58:41 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:58:41.782 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2bc7e730-11 in ovnmeta-2bc7e730-1c24-4817-b1fa-ada338da8f3d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 14:58:41 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:58:41.784 257604 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2bc7e730-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 14:58:41 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:58:41.784 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[2e2952e2-eab2-4198-ba67-7554dabe552b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:58:41 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:58:41.785 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[a7b8e2ee-6cfa-4ebf-b628-a491042c5d04]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:58:41 compute-0 nova_compute[250018]: 2026-01-20 14:58:41.786 250022 DEBUG oslo_concurrency.lockutils [req-729a5e18-ccc8-48ab-a82e-58630443db29 req-f33b0744-8fe8-472b-b7ae-d44fc0e58caf 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-9a8ed269-8170-4a79-aa6d-75abae8562c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:58:41 compute-0 sudo[331972]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:58:41 compute-0 sudo[331972]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:58:41 compute-0 sudo[331972]: pam_unix(sudo:session): session closed for user root
Jan 20 14:58:41 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:58:41.799 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[d94f2a38-1814-4861-8138-3038625d5ebb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:58:41 compute-0 nova_compute[250018]: 2026-01-20 14:58:41.813 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:58:41 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:58:41.814 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[0a97a44c-3fcd-406a-81ac-b52a163deab1]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:58:41 compute-0 nova_compute[250018]: 2026-01-20 14:58:41.822 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:58:41 compute-0 ovn_controller[148666]: 2026-01-20T14:58:41Z|00514|binding|INFO|Setting lport 05a3e492-ecc2-4c60-8410-6fde269e9285 ovn-installed in OVS
Jan 20 14:58:41 compute-0 ovn_controller[148666]: 2026-01-20T14:58:41Z|00515|binding|INFO|Setting lport 05a3e492-ecc2-4c60-8410-6fde269e9285 up in Southbound
Jan 20 14:58:41 compute-0 nova_compute[250018]: 2026-01-20 14:58:41.828 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:58:41 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:58:41.846 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[4e92335a-82c7-4a05-a1f6-2fd91a20be86]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:58:41 compute-0 sudo[332006]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:58:41 compute-0 sudo[332006]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:58:41 compute-0 sudo[332006]: pam_unix(sudo:session): session closed for user root
Jan 20 14:58:41 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:58:41.857 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[3bfd8936-1d6a-4258-b2fa-9555230c4581]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:58:41 compute-0 ceph-mon[74360]: pgmap v2198: 321 pgs: 321 active+clean; 392 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 36 KiB/s rd, 3.5 MiB/s wr, 57 op/s
Jan 20 14:58:41 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:58:41 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:58:41 compute-0 NetworkManager[48960]: <info>  [1768921121.8598] manager: (tap2bc7e730-10): new Veth device (/org/freedesktop/NetworkManager/Devices/252)
Jan 20 14:58:41 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:58:41 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:58:41.892 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[b3414824-b87c-43d0-82b1-cfb5ef03ed6d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:58:41 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:58:41.899 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[569058cd-0482-416c-931d-7bc62a5e1076]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:58:41 compute-0 NetworkManager[48960]: <info>  [1768921121.9209] device (tap2bc7e730-10): carrier: link connected
Jan 20 14:58:41 compute-0 sudo[332041]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:58:41 compute-0 sudo[332041]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:58:41 compute-0 sudo[332041]: pam_unix(sudo:session): session closed for user root
Jan 20 14:58:41 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:58:41.926 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[6e176b4d-d112-4507-b973-e7c2c43798e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:58:41 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:58:41.947 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[a68959ad-34c1-4bcc-a2a6-e919c05c189e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2bc7e730-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4c:11:28'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 167], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 706264, 'reachable_time': 33967, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 332084, 'error': None, 'target': 'ovnmeta-2bc7e730-1c24-4817-b1fa-ada338da8f3d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:58:41 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:58:41.963 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[df72676d-a8ba-4d4d-8484-0a92918a7203]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe4c:1128'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 706264, 'tstamp': 706264}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 332098, 'error': None, 'target': 'ovnmeta-2bc7e730-1c24-4817-b1fa-ada338da8f3d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:58:41 compute-0 sudo[332085]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 14:58:41 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:58:41.978 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[23cd29cd-97da-4001-b41f-931812180c93]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2bc7e730-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4c:11:28'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 167], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 706264, 'reachable_time': 33967, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 332109, 'error': None, 'target': 'ovnmeta-2bc7e730-1c24-4817-b1fa-ada338da8f3d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:58:41 compute-0 sudo[332085]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:58:42 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:58:42.005 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[605e1ee3-d5be-4723-93e2-21934e64eac7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:58:42 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:58:42.057 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[c7c99a08-03b6-4d16-a73d-5a4350fa6f57]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:58:42 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:58:42.058 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2bc7e730-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:58:42 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:58:42.058 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:58:42 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:58:42.059 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2bc7e730-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:58:42 compute-0 nova_compute[250018]: 2026-01-20 14:58:42.060 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:58:42 compute-0 NetworkManager[48960]: <info>  [1768921122.0610] manager: (tap2bc7e730-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/253)
Jan 20 14:58:42 compute-0 kernel: tap2bc7e730-10: entered promiscuous mode
Jan 20 14:58:42 compute-0 nova_compute[250018]: 2026-01-20 14:58:42.063 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:58:42 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:58:42.064 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2bc7e730-10, col_values=(('external_ids', {'iface-id': 'facbdbbb-d853-4ab4-a039-2dfed04424a9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:58:42 compute-0 nova_compute[250018]: 2026-01-20 14:58:42.065 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:58:42 compute-0 ovn_controller[148666]: 2026-01-20T14:58:42Z|00516|binding|INFO|Releasing lport facbdbbb-d853-4ab4-a039-2dfed04424a9 from this chassis (sb_readonly=0)
Jan 20 14:58:42 compute-0 nova_compute[250018]: 2026-01-20 14:58:42.081 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:58:42 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:58:42.081 160071 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2bc7e730-1c24-4817-b1fa-ada338da8f3d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2bc7e730-1c24-4817-b1fa-ada338da8f3d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 14:58:42 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:58:42.082 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[2e71efc0-1351-41ce-8897-80f45d670022]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:58:42 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:58:42.083 160071 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 14:58:42 compute-0 ovn_metadata_agent[160049]: global
Jan 20 14:58:42 compute-0 ovn_metadata_agent[160049]:     log         /dev/log local0 debug
Jan 20 14:58:42 compute-0 ovn_metadata_agent[160049]:     log-tag     haproxy-metadata-proxy-2bc7e730-1c24-4817-b1fa-ada338da8f3d
Jan 20 14:58:42 compute-0 ovn_metadata_agent[160049]:     user        root
Jan 20 14:58:42 compute-0 ovn_metadata_agent[160049]:     group       root
Jan 20 14:58:42 compute-0 ovn_metadata_agent[160049]:     maxconn     1024
Jan 20 14:58:42 compute-0 ovn_metadata_agent[160049]:     pidfile     /var/lib/neutron/external/pids/2bc7e730-1c24-4817-b1fa-ada338da8f3d.pid.haproxy
Jan 20 14:58:42 compute-0 ovn_metadata_agent[160049]:     daemon
Jan 20 14:58:42 compute-0 ovn_metadata_agent[160049]: 
Jan 20 14:58:42 compute-0 ovn_metadata_agent[160049]: defaults
Jan 20 14:58:42 compute-0 ovn_metadata_agent[160049]:     log global
Jan 20 14:58:42 compute-0 ovn_metadata_agent[160049]:     mode http
Jan 20 14:58:42 compute-0 ovn_metadata_agent[160049]:     option httplog
Jan 20 14:58:42 compute-0 ovn_metadata_agent[160049]:     option dontlognull
Jan 20 14:58:42 compute-0 ovn_metadata_agent[160049]:     option http-server-close
Jan 20 14:58:42 compute-0 ovn_metadata_agent[160049]:     option forwardfor
Jan 20 14:58:42 compute-0 ovn_metadata_agent[160049]:     retries                 3
Jan 20 14:58:42 compute-0 ovn_metadata_agent[160049]:     timeout http-request    30s
Jan 20 14:58:42 compute-0 ovn_metadata_agent[160049]:     timeout connect         30s
Jan 20 14:58:42 compute-0 ovn_metadata_agent[160049]:     timeout client          32s
Jan 20 14:58:42 compute-0 ovn_metadata_agent[160049]:     timeout server          32s
Jan 20 14:58:42 compute-0 ovn_metadata_agent[160049]:     timeout http-keep-alive 30s
Jan 20 14:58:42 compute-0 ovn_metadata_agent[160049]: 
Jan 20 14:58:42 compute-0 ovn_metadata_agent[160049]: 
Jan 20 14:58:42 compute-0 ovn_metadata_agent[160049]: listen listener
Jan 20 14:58:42 compute-0 ovn_metadata_agent[160049]:     bind 169.254.169.254:80
Jan 20 14:58:42 compute-0 ovn_metadata_agent[160049]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 14:58:42 compute-0 ovn_metadata_agent[160049]:     http-request add-header X-OVN-Network-ID 2bc7e730-1c24-4817-b1fa-ada338da8f3d
Jan 20 14:58:42 compute-0 ovn_metadata_agent[160049]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 14:58:42 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:58:42.084 160071 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2bc7e730-1c24-4817-b1fa-ada338da8f3d', 'env', 'PROCESS_TAG=haproxy-2bc7e730-1c24-4817-b1fa-ada338da8f3d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2bc7e730-1c24-4817-b1fa-ada338da8f3d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 14:58:42 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2199: 321 pgs: 321 active+clean; 392 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 38 KiB/s rd, 3.5 MiB/s wr, 60 op/s
Jan 20 14:58:42 compute-0 nova_compute[250018]: 2026-01-20 14:58:42.401 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768921122.4011476, 9a8ed269-8170-4a79-aa6d-75abae8562c3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:58:42 compute-0 nova_compute[250018]: 2026-01-20 14:58:42.402 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 9a8ed269-8170-4a79-aa6d-75abae8562c3] VM Started (Lifecycle Event)
Jan 20 14:58:42 compute-0 nova_compute[250018]: 2026-01-20 14:58:42.440 250022 DEBUG nova.network.neutron [None req-23fd4c8f-0123-4fbe-84d6-b757bdbac2d4 0362dea63d5a43778f8d4164a77cd3c6 bae96a4c20ce4f3a859ff518a8423db5 - - default default] [instance: 20a008fa-d059-4906-ba1c-697755b1ba06] Updating instance_info_cache with network_info: [{"id": "947d815e-382a-4899-acd3-af215e4fbab4", "address": "fa:16:3e:b6:5b:09", "network": {"id": "7fc355b3-a02c-425b-a7e9-a307be1be8e7", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-388554845-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bae96a4c20ce4f3a859ff518a8423db5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap947d815e-38", "ovs_interfaceid": "947d815e-382a-4899-acd3-af215e4fbab4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:58:42 compute-0 nova_compute[250018]: 2026-01-20 14:58:42.453 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 9a8ed269-8170-4a79-aa6d-75abae8562c3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:58:42 compute-0 nova_compute[250018]: 2026-01-20 14:58:42.457 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768921122.4036214, 9a8ed269-8170-4a79-aa6d-75abae8562c3 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:58:42 compute-0 nova_compute[250018]: 2026-01-20 14:58:42.458 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 9a8ed269-8170-4a79-aa6d-75abae8562c3] VM Paused (Lifecycle Event)
Jan 20 14:58:42 compute-0 sudo[332085]: pam_unix(sudo:session): session closed for user root
Jan 20 14:58:42 compute-0 podman[332205]: 2026-01-20 14:58:42.43272632 +0000 UTC m=+0.024648915 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 14:58:42 compute-0 podman[332205]: 2026-01-20 14:58:42.552555607 +0000 UTC m=+0.144478152 container create 7ea49403ae5b5f873d62e2dc6bb18f9b2d07a0b93be5918f1086af4281f7ecdf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2bc7e730-1c24-4817-b1fa-ada338da8f3d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202)
Jan 20 14:58:42 compute-0 systemd[1]: Started libpod-conmon-7ea49403ae5b5f873d62e2dc6bb18f9b2d07a0b93be5918f1086af4281f7ecdf.scope.
Jan 20 14:58:42 compute-0 sudo[332229]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:58:42 compute-0 sudo[332229]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:58:42 compute-0 nova_compute[250018]: 2026-01-20 14:58:42.604 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 9a8ed269-8170-4a79-aa6d-75abae8562c3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:58:42 compute-0 sudo[332229]: pam_unix(sudo:session): session closed for user root
Jan 20 14:58:42 compute-0 nova_compute[250018]: 2026-01-20 14:58:42.611 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 9a8ed269-8170-4a79-aa6d-75abae8562c3] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:58:42 compute-0 nova_compute[250018]: 2026-01-20 14:58:42.616 250022 DEBUG oslo_concurrency.lockutils [None req-23fd4c8f-0123-4fbe-84d6-b757bdbac2d4 0362dea63d5a43778f8d4164a77cd3c6 bae96a4c20ce4f3a859ff518a8423db5 - - default default] Releasing lock "refresh_cache-20a008fa-d059-4906-ba1c-697755b1ba06" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:58:42 compute-0 nova_compute[250018]: 2026-01-20 14:58:42.617 250022 DEBUG nova.compute.manager [None req-23fd4c8f-0123-4fbe-84d6-b757bdbac2d4 0362dea63d5a43778f8d4164a77cd3c6 bae96a4c20ce4f3a859ff518a8423db5 - - default default] [instance: 20a008fa-d059-4906-ba1c-697755b1ba06] Instance network_info: |[{"id": "947d815e-382a-4899-acd3-af215e4fbab4", "address": "fa:16:3e:b6:5b:09", "network": {"id": "7fc355b3-a02c-425b-a7e9-a307be1be8e7", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-388554845-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bae96a4c20ce4f3a859ff518a8423db5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap947d815e-38", "ovs_interfaceid": "947d815e-382a-4899-acd3-af215e4fbab4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 20 14:58:42 compute-0 nova_compute[250018]: 2026-01-20 14:58:42.617 250022 DEBUG oslo_concurrency.lockutils [req-19aaf3fb-be71-4e08-aae7-ebd74e6eb18f req-d0c3b58d-ac83-490b-9362-bcac78b2931f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-20a008fa-d059-4906-ba1c-697755b1ba06" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:58:42 compute-0 nova_compute[250018]: 2026-01-20 14:58:42.617 250022 DEBUG nova.network.neutron [req-19aaf3fb-be71-4e08-aae7-ebd74e6eb18f req-d0c3b58d-ac83-490b-9362-bcac78b2931f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 20a008fa-d059-4906-ba1c-697755b1ba06] Refreshing network info cache for port 947d815e-382a-4899-acd3-af215e4fbab4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:58:42 compute-0 nova_compute[250018]: 2026-01-20 14:58:42.620 250022 DEBUG nova.virt.libvirt.driver [None req-23fd4c8f-0123-4fbe-84d6-b757bdbac2d4 0362dea63d5a43778f8d4164a77cd3c6 bae96a4c20ce4f3a859ff518a8423db5 - - default default] [instance: 20a008fa-d059-4906-ba1c-697755b1ba06] Start _get_guest_xml network_info=[{"id": "947d815e-382a-4899-acd3-af215e4fbab4", "address": "fa:16:3e:b6:5b:09", "network": {"id": "7fc355b3-a02c-425b-a7e9-a307be1be8e7", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-388554845-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bae96a4c20ce4f3a859ff518a8423db5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap947d815e-38", "ovs_interfaceid": "947d815e-382a-4899-acd3-af215e4fbab4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encrypted': False, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'guest_format': None, 'size': 0, 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 14:58:42 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:58:42 compute-0 nova_compute[250018]: 2026-01-20 14:58:42.625 250022 WARNING nova.virt.libvirt.driver [None req-23fd4c8f-0123-4fbe-84d6-b757bdbac2d4 0362dea63d5a43778f8d4164a77cd3c6 bae96a4c20ce4f3a859ff518a8423db5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:58:42 compute-0 nova_compute[250018]: 2026-01-20 14:58:42.631 250022 DEBUG nova.virt.libvirt.host [None req-23fd4c8f-0123-4fbe-84d6-b757bdbac2d4 0362dea63d5a43778f8d4164a77cd3c6 bae96a4c20ce4f3a859ff518a8423db5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 14:58:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7054240a97e6afec8d4f56479e38c290736e007edb81fe7a0d4af7e74ce14ad/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 14:58:42 compute-0 nova_compute[250018]: 2026-01-20 14:58:42.632 250022 DEBUG nova.virt.libvirt.host [None req-23fd4c8f-0123-4fbe-84d6-b757bdbac2d4 0362dea63d5a43778f8d4164a77cd3c6 bae96a4c20ce4f3a859ff518a8423db5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 14:58:42 compute-0 nova_compute[250018]: 2026-01-20 14:58:42.636 250022 DEBUG nova.virt.libvirt.host [None req-23fd4c8f-0123-4fbe-84d6-b757bdbac2d4 0362dea63d5a43778f8d4164a77cd3c6 bae96a4c20ce4f3a859ff518a8423db5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 14:58:42 compute-0 nova_compute[250018]: 2026-01-20 14:58:42.636 250022 DEBUG nova.virt.libvirt.host [None req-23fd4c8f-0123-4fbe-84d6-b757bdbac2d4 0362dea63d5a43778f8d4164a77cd3c6 bae96a4c20ce4f3a859ff518a8423db5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 14:58:42 compute-0 nova_compute[250018]: 2026-01-20 14:58:42.638 250022 DEBUG nova.virt.libvirt.driver [None req-23fd4c8f-0123-4fbe-84d6-b757bdbac2d4 0362dea63d5a43778f8d4164a77cd3c6 bae96a4c20ce4f3a859ff518a8423db5 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 14:58:42 compute-0 nova_compute[250018]: 2026-01-20 14:58:42.638 250022 DEBUG nova.virt.hardware [None req-23fd4c8f-0123-4fbe-84d6-b757bdbac2d4 0362dea63d5a43778f8d4164a77cd3c6 bae96a4c20ce4f3a859ff518a8423db5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 14:58:42 compute-0 nova_compute[250018]: 2026-01-20 14:58:42.639 250022 DEBUG nova.virt.hardware [None req-23fd4c8f-0123-4fbe-84d6-b757bdbac2d4 0362dea63d5a43778f8d4164a77cd3c6 bae96a4c20ce4f3a859ff518a8423db5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 14:58:42 compute-0 nova_compute[250018]: 2026-01-20 14:58:42.639 250022 DEBUG nova.virt.hardware [None req-23fd4c8f-0123-4fbe-84d6-b757bdbac2d4 0362dea63d5a43778f8d4164a77cd3c6 bae96a4c20ce4f3a859ff518a8423db5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 14:58:42 compute-0 nova_compute[250018]: 2026-01-20 14:58:42.639 250022 DEBUG nova.virt.hardware [None req-23fd4c8f-0123-4fbe-84d6-b757bdbac2d4 0362dea63d5a43778f8d4164a77cd3c6 bae96a4c20ce4f3a859ff518a8423db5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 14:58:42 compute-0 nova_compute[250018]: 2026-01-20 14:58:42.640 250022 DEBUG nova.virt.hardware [None req-23fd4c8f-0123-4fbe-84d6-b757bdbac2d4 0362dea63d5a43778f8d4164a77cd3c6 bae96a4c20ce4f3a859ff518a8423db5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 14:58:42 compute-0 nova_compute[250018]: 2026-01-20 14:58:42.640 250022 DEBUG nova.virt.hardware [None req-23fd4c8f-0123-4fbe-84d6-b757bdbac2d4 0362dea63d5a43778f8d4164a77cd3c6 bae96a4c20ce4f3a859ff518a8423db5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 14:58:42 compute-0 nova_compute[250018]: 2026-01-20 14:58:42.640 250022 DEBUG nova.virt.hardware [None req-23fd4c8f-0123-4fbe-84d6-b757bdbac2d4 0362dea63d5a43778f8d4164a77cd3c6 bae96a4c20ce4f3a859ff518a8423db5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 14:58:42 compute-0 nova_compute[250018]: 2026-01-20 14:58:42.641 250022 DEBUG nova.virt.hardware [None req-23fd4c8f-0123-4fbe-84d6-b757bdbac2d4 0362dea63d5a43778f8d4164a77cd3c6 bae96a4c20ce4f3a859ff518a8423db5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 14:58:42 compute-0 nova_compute[250018]: 2026-01-20 14:58:42.641 250022 DEBUG nova.virt.hardware [None req-23fd4c8f-0123-4fbe-84d6-b757bdbac2d4 0362dea63d5a43778f8d4164a77cd3c6 bae96a4c20ce4f3a859ff518a8423db5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 14:58:42 compute-0 nova_compute[250018]: 2026-01-20 14:58:42.641 250022 DEBUG nova.virt.hardware [None req-23fd4c8f-0123-4fbe-84d6-b757bdbac2d4 0362dea63d5a43778f8d4164a77cd3c6 bae96a4c20ce4f3a859ff518a8423db5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 14:58:42 compute-0 nova_compute[250018]: 2026-01-20 14:58:42.641 250022 DEBUG nova.virt.hardware [None req-23fd4c8f-0123-4fbe-84d6-b757bdbac2d4 0362dea63d5a43778f8d4164a77cd3c6 bae96a4c20ce4f3a859ff518a8423db5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 14:58:42 compute-0 podman[332205]: 2026-01-20 14:58:42.645169872 +0000 UTC m=+0.237092477 container init 7ea49403ae5b5f873d62e2dc6bb18f9b2d07a0b93be5918f1086af4281f7ecdf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2bc7e730-1c24-4817-b1fa-ada338da8f3d, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 20 14:58:42 compute-0 nova_compute[250018]: 2026-01-20 14:58:42.645 250022 DEBUG oslo_concurrency.processutils [None req-23fd4c8f-0123-4fbe-84d6-b757bdbac2d4 0362dea63d5a43778f8d4164a77cd3c6 bae96a4c20ce4f3a859ff518a8423db5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:58:42 compute-0 podman[332205]: 2026-01-20 14:58:42.651622516 +0000 UTC m=+0.243545071 container start 7ea49403ae5b5f873d62e2dc6bb18f9b2d07a0b93be5918f1086af4281f7ecdf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2bc7e730-1c24-4817-b1fa-ada338da8f3d, org.label-schema.build-date=20251202, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 20 14:58:42 compute-0 sudo[332259]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:58:42 compute-0 sudo[332259]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:58:42 compute-0 sudo[332259]: pam_unix(sudo:session): session closed for user root
Jan 20 14:58:42 compute-0 nova_compute[250018]: 2026-01-20 14:58:42.678 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 9a8ed269-8170-4a79-aa6d-75abae8562c3] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:58:42 compute-0 neutron-haproxy-ovnmeta-2bc7e730-1c24-4817-b1fa-ada338da8f3d[332255]: [NOTICE]   (332283) : New worker (332288) forked
Jan 20 14:58:42 compute-0 neutron-haproxy-ovnmeta-2bc7e730-1c24-4817-b1fa-ada338da8f3d[332255]: [NOTICE]   (332283) : Loading success.
Jan 20 14:58:42 compute-0 sudo[332296]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:58:42 compute-0 sudo[332296]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:58:42 compute-0 sudo[332296]: pam_unix(sudo:session): session closed for user root
Jan 20 14:58:42 compute-0 sudo[332322]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- inventory --format=json-pretty --filter-for-batch
Jan 20 14:58:42 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 20 14:58:42 compute-0 sudo[332322]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:58:42 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:58:42 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 20 14:58:42 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:58:42 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:58:42 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:58:42 compute-0 ceph-mon[74360]: pgmap v2199: 321 pgs: 321 active+clean; 392 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 38 KiB/s rd, 3.5 MiB/s wr, 60 op/s
Jan 20 14:58:42 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:58:43 compute-0 nova_compute[250018]: 2026-01-20 14:58:43.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:58:43 compute-0 nova_compute[250018]: 2026-01-20 14:58:43.051 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 20 14:58:43 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:58:43 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3110291340' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:58:43 compute-0 podman[332406]: 2026-01-20 14:58:43.089329336 +0000 UTC m=+0.038367294 container create e333c582cd3505e5cbefe835cc1a504849b067be3a9d216ff356888564386f63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_shtern, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:58:43 compute-0 nova_compute[250018]: 2026-01-20 14:58:43.094 250022 DEBUG oslo_concurrency.processutils [None req-23fd4c8f-0123-4fbe-84d6-b757bdbac2d4 0362dea63d5a43778f8d4164a77cd3c6 bae96a4c20ce4f3a859ff518a8423db5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:58:43 compute-0 nova_compute[250018]: 2026-01-20 14:58:43.122 250022 DEBUG nova.storage.rbd_utils [None req-23fd4c8f-0123-4fbe-84d6-b757bdbac2d4 0362dea63d5a43778f8d4164a77cd3c6 bae96a4c20ce4f3a859ff518a8423db5 - - default default] rbd image 20a008fa-d059-4906-ba1c-697755b1ba06_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:58:43 compute-0 systemd[1]: Started libpod-conmon-e333c582cd3505e5cbefe835cc1a504849b067be3a9d216ff356888564386f63.scope.
Jan 20 14:58:43 compute-0 nova_compute[250018]: 2026-01-20 14:58:43.126 250022 DEBUG oslo_concurrency.processutils [None req-23fd4c8f-0123-4fbe-84d6-b757bdbac2d4 0362dea63d5a43778f8d4164a77cd3c6 bae96a4c20ce4f3a859ff518a8423db5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:58:43 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:58:43 compute-0 podman[332406]: 2026-01-20 14:58:43.072703979 +0000 UTC m=+0.021741967 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:58:43 compute-0 podman[332406]: 2026-01-20 14:58:43.173093622 +0000 UTC m=+0.122131590 container init e333c582cd3505e5cbefe835cc1a504849b067be3a9d216ff356888564386f63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_shtern, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507)
Jan 20 14:58:43 compute-0 podman[332406]: 2026-01-20 14:58:43.179565117 +0000 UTC m=+0.128603065 container start e333c582cd3505e5cbefe835cc1a504849b067be3a9d216ff356888564386f63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_shtern, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 20 14:58:43 compute-0 podman[332406]: 2026-01-20 14:58:43.183839382 +0000 UTC m=+0.132877370 container attach e333c582cd3505e5cbefe835cc1a504849b067be3a9d216ff356888564386f63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_shtern, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 20 14:58:43 compute-0 busy_shtern[332442]: 167 167
Jan 20 14:58:43 compute-0 systemd[1]: libpod-e333c582cd3505e5cbefe835cc1a504849b067be3a9d216ff356888564386f63.scope: Deactivated successfully.
Jan 20 14:58:43 compute-0 podman[332406]: 2026-01-20 14:58:43.187016457 +0000 UTC m=+0.136054435 container died e333c582cd3505e5cbefe835cc1a504849b067be3a9d216ff356888564386f63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_shtern, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 20 14:58:43 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:58:43 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:58:43 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:58:43.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:58:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-9d5c4526ac626ddcb3e9c62f522863a860dae3fe9df2ca5950f82e8c402cd50a-merged.mount: Deactivated successfully.
Jan 20 14:58:43 compute-0 podman[332406]: 2026-01-20 14:58:43.224832616 +0000 UTC m=+0.173870574 container remove e333c582cd3505e5cbefe835cc1a504849b067be3a9d216ff356888564386f63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_shtern, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:58:43 compute-0 systemd[1]: libpod-conmon-e333c582cd3505e5cbefe835cc1a504849b067be3a9d216ff356888564386f63.scope: Deactivated successfully.
Jan 20 14:58:43 compute-0 podman[332487]: 2026-01-20 14:58:43.368402163 +0000 UTC m=+0.024416969 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:58:43 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:58:43 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2546240148' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:58:43 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:58:43 compute-0 nova_compute[250018]: 2026-01-20 14:58:43.555 250022 DEBUG oslo_concurrency.processutils [None req-23fd4c8f-0123-4fbe-84d6-b757bdbac2d4 0362dea63d5a43778f8d4164a77cd3c6 bae96a4c20ce4f3a859ff518a8423db5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:58:43 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:58:43 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:58:43.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:58:43 compute-0 nova_compute[250018]: 2026-01-20 14:58:43.558 250022 DEBUG nova.virt.libvirt.vif [None req-23fd4c8f-0123-4fbe-84d6-b757bdbac2d4 0362dea63d5a43778f8d4164a77cd3c6 bae96a4c20ce4f3a859ff518a8423db5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:58:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-1617556540',display_name='tempest-ServerAddressesTestJSON-server-1617556540',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-1617556540',id=142,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='bae96a4c20ce4f3a859ff518a8423db5',ramdisk_id='',reservation_id='r-08ubvzco',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesTestJSON-1769818234',owner_user_name='tempest-ServerAddressesTestJSON-1769818234-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:58:32Z,user_data=None,user_id='0362dea63d5a43778f8d4164a77cd3c6',uuid=20a008fa-d059-4906-ba1c-697755b1ba06,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "947d815e-382a-4899-acd3-af215e4fbab4", "address": "fa:16:3e:b6:5b:09", "network": {"id": "7fc355b3-a02c-425b-a7e9-a307be1be8e7", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-388554845-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bae96a4c20ce4f3a859ff518a8423db5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap947d815e-38", "ovs_interfaceid": "947d815e-382a-4899-acd3-af215e4fbab4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 14:58:43 compute-0 nova_compute[250018]: 2026-01-20 14:58:43.559 250022 DEBUG nova.network.os_vif_util [None req-23fd4c8f-0123-4fbe-84d6-b757bdbac2d4 0362dea63d5a43778f8d4164a77cd3c6 bae96a4c20ce4f3a859ff518a8423db5 - - default default] Converting VIF {"id": "947d815e-382a-4899-acd3-af215e4fbab4", "address": "fa:16:3e:b6:5b:09", "network": {"id": "7fc355b3-a02c-425b-a7e9-a307be1be8e7", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-388554845-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bae96a4c20ce4f3a859ff518a8423db5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap947d815e-38", "ovs_interfaceid": "947d815e-382a-4899-acd3-af215e4fbab4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:58:43 compute-0 nova_compute[250018]: 2026-01-20 14:58:43.561 250022 DEBUG nova.network.os_vif_util [None req-23fd4c8f-0123-4fbe-84d6-b757bdbac2d4 0362dea63d5a43778f8d4164a77cd3c6 bae96a4c20ce4f3a859ff518a8423db5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b6:5b:09,bridge_name='br-int',has_traffic_filtering=True,id=947d815e-382a-4899-acd3-af215e4fbab4,network=Network(7fc355b3-a02c-425b-a7e9-a307be1be8e7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap947d815e-38') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:58:43 compute-0 nova_compute[250018]: 2026-01-20 14:58:43.564 250022 DEBUG nova.objects.instance [None req-23fd4c8f-0123-4fbe-84d6-b757bdbac2d4 0362dea63d5a43778f8d4164a77cd3c6 bae96a4c20ce4f3a859ff518a8423db5 - - default default] Lazy-loading 'pci_devices' on Instance uuid 20a008fa-d059-4906-ba1c-697755b1ba06 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:58:43 compute-0 nova_compute[250018]: 2026-01-20 14:58:43.614 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:58:43 compute-0 podman[332487]: 2026-01-20 14:58:43.757504044 +0000 UTC m=+0.413518830 container create 720785ab90816d8683614a6bd43a6b878205ad0cfc96e112903753cc723ee834 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_keldysh, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 20 14:58:43 compute-0 nova_compute[250018]: 2026-01-20 14:58:43.793 250022 DEBUG nova.virt.libvirt.driver [None req-23fd4c8f-0123-4fbe-84d6-b757bdbac2d4 0362dea63d5a43778f8d4164a77cd3c6 bae96a4c20ce4f3a859ff518a8423db5 - - default default] [instance: 20a008fa-d059-4906-ba1c-697755b1ba06] End _get_guest_xml xml=<domain type="kvm">
Jan 20 14:58:43 compute-0 nova_compute[250018]:   <uuid>20a008fa-d059-4906-ba1c-697755b1ba06</uuid>
Jan 20 14:58:43 compute-0 nova_compute[250018]:   <name>instance-0000008e</name>
Jan 20 14:58:43 compute-0 nova_compute[250018]:   <memory>131072</memory>
Jan 20 14:58:43 compute-0 nova_compute[250018]:   <vcpu>1</vcpu>
Jan 20 14:58:43 compute-0 nova_compute[250018]:   <metadata>
Jan 20 14:58:43 compute-0 nova_compute[250018]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 14:58:43 compute-0 nova_compute[250018]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 14:58:43 compute-0 nova_compute[250018]:       <nova:name>tempest-ServerAddressesTestJSON-server-1617556540</nova:name>
Jan 20 14:58:43 compute-0 nova_compute[250018]:       <nova:creationTime>2026-01-20 14:58:42</nova:creationTime>
Jan 20 14:58:43 compute-0 nova_compute[250018]:       <nova:flavor name="m1.nano">
Jan 20 14:58:43 compute-0 nova_compute[250018]:         <nova:memory>128</nova:memory>
Jan 20 14:58:43 compute-0 nova_compute[250018]:         <nova:disk>1</nova:disk>
Jan 20 14:58:43 compute-0 nova_compute[250018]:         <nova:swap>0</nova:swap>
Jan 20 14:58:43 compute-0 nova_compute[250018]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 14:58:43 compute-0 nova_compute[250018]:         <nova:vcpus>1</nova:vcpus>
Jan 20 14:58:43 compute-0 nova_compute[250018]:       </nova:flavor>
Jan 20 14:58:43 compute-0 nova_compute[250018]:       <nova:owner>
Jan 20 14:58:43 compute-0 nova_compute[250018]:         <nova:user uuid="0362dea63d5a43778f8d4164a77cd3c6">tempest-ServerAddressesTestJSON-1769818234-project-member</nova:user>
Jan 20 14:58:43 compute-0 nova_compute[250018]:         <nova:project uuid="bae96a4c20ce4f3a859ff518a8423db5">tempest-ServerAddressesTestJSON-1769818234</nova:project>
Jan 20 14:58:43 compute-0 nova_compute[250018]:       </nova:owner>
Jan 20 14:58:43 compute-0 nova_compute[250018]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 14:58:43 compute-0 nova_compute[250018]:       <nova:ports>
Jan 20 14:58:43 compute-0 nova_compute[250018]:         <nova:port uuid="947d815e-382a-4899-acd3-af215e4fbab4">
Jan 20 14:58:43 compute-0 nova_compute[250018]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Jan 20 14:58:43 compute-0 nova_compute[250018]:         </nova:port>
Jan 20 14:58:43 compute-0 nova_compute[250018]:       </nova:ports>
Jan 20 14:58:43 compute-0 nova_compute[250018]:     </nova:instance>
Jan 20 14:58:43 compute-0 nova_compute[250018]:   </metadata>
Jan 20 14:58:43 compute-0 nova_compute[250018]:   <sysinfo type="smbios">
Jan 20 14:58:43 compute-0 nova_compute[250018]:     <system>
Jan 20 14:58:43 compute-0 nova_compute[250018]:       <entry name="manufacturer">RDO</entry>
Jan 20 14:58:43 compute-0 nova_compute[250018]:       <entry name="product">OpenStack Compute</entry>
Jan 20 14:58:43 compute-0 nova_compute[250018]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 14:58:43 compute-0 nova_compute[250018]:       <entry name="serial">20a008fa-d059-4906-ba1c-697755b1ba06</entry>
Jan 20 14:58:43 compute-0 nova_compute[250018]:       <entry name="uuid">20a008fa-d059-4906-ba1c-697755b1ba06</entry>
Jan 20 14:58:43 compute-0 nova_compute[250018]:       <entry name="family">Virtual Machine</entry>
Jan 20 14:58:43 compute-0 nova_compute[250018]:     </system>
Jan 20 14:58:43 compute-0 nova_compute[250018]:   </sysinfo>
Jan 20 14:58:43 compute-0 nova_compute[250018]:   <os>
Jan 20 14:58:43 compute-0 nova_compute[250018]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 14:58:43 compute-0 nova_compute[250018]:     <boot dev="hd"/>
Jan 20 14:58:43 compute-0 nova_compute[250018]:     <smbios mode="sysinfo"/>
Jan 20 14:58:43 compute-0 nova_compute[250018]:   </os>
Jan 20 14:58:43 compute-0 nova_compute[250018]:   <features>
Jan 20 14:58:43 compute-0 nova_compute[250018]:     <acpi/>
Jan 20 14:58:43 compute-0 nova_compute[250018]:     <apic/>
Jan 20 14:58:43 compute-0 nova_compute[250018]:     <vmcoreinfo/>
Jan 20 14:58:43 compute-0 nova_compute[250018]:   </features>
Jan 20 14:58:43 compute-0 nova_compute[250018]:   <clock offset="utc">
Jan 20 14:58:43 compute-0 nova_compute[250018]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 14:58:43 compute-0 nova_compute[250018]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 14:58:43 compute-0 nova_compute[250018]:     <timer name="hpet" present="no"/>
Jan 20 14:58:43 compute-0 nova_compute[250018]:   </clock>
Jan 20 14:58:43 compute-0 nova_compute[250018]:   <cpu mode="custom" match="exact">
Jan 20 14:58:43 compute-0 nova_compute[250018]:     <model>Nehalem</model>
Jan 20 14:58:43 compute-0 nova_compute[250018]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 14:58:43 compute-0 nova_compute[250018]:   </cpu>
Jan 20 14:58:43 compute-0 nova_compute[250018]:   <devices>
Jan 20 14:58:43 compute-0 nova_compute[250018]:     <disk type="network" device="disk">
Jan 20 14:58:43 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 14:58:43 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/20a008fa-d059-4906-ba1c-697755b1ba06_disk">
Jan 20 14:58:43 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:58:43 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:58:43 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:58:43 compute-0 nova_compute[250018]:       </source>
Jan 20 14:58:43 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 14:58:43 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:58:43 compute-0 nova_compute[250018]:       </auth>
Jan 20 14:58:43 compute-0 nova_compute[250018]:       <target dev="vda" bus="virtio"/>
Jan 20 14:58:43 compute-0 nova_compute[250018]:     </disk>
Jan 20 14:58:43 compute-0 nova_compute[250018]:     <disk type="network" device="cdrom">
Jan 20 14:58:43 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 14:58:43 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/20a008fa-d059-4906-ba1c-697755b1ba06_disk.config">
Jan 20 14:58:43 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:58:43 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:58:43 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:58:43 compute-0 nova_compute[250018]:       </source>
Jan 20 14:58:43 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 14:58:43 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:58:43 compute-0 nova_compute[250018]:       </auth>
Jan 20 14:58:43 compute-0 nova_compute[250018]:       <target dev="sda" bus="sata"/>
Jan 20 14:58:43 compute-0 nova_compute[250018]:     </disk>
Jan 20 14:58:43 compute-0 nova_compute[250018]:     <interface type="ethernet">
Jan 20 14:58:43 compute-0 nova_compute[250018]:       <mac address="fa:16:3e:b6:5b:09"/>
Jan 20 14:58:43 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 14:58:43 compute-0 nova_compute[250018]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 14:58:43 compute-0 nova_compute[250018]:       <mtu size="1442"/>
Jan 20 14:58:43 compute-0 nova_compute[250018]:       <target dev="tap947d815e-38"/>
Jan 20 14:58:43 compute-0 nova_compute[250018]:     </interface>
Jan 20 14:58:43 compute-0 nova_compute[250018]:     <serial type="pty">
Jan 20 14:58:43 compute-0 nova_compute[250018]:       <log file="/var/lib/nova/instances/20a008fa-d059-4906-ba1c-697755b1ba06/console.log" append="off"/>
Jan 20 14:58:43 compute-0 nova_compute[250018]:     </serial>
Jan 20 14:58:43 compute-0 nova_compute[250018]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 14:58:43 compute-0 nova_compute[250018]:     <video>
Jan 20 14:58:43 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 14:58:43 compute-0 nova_compute[250018]:     </video>
Jan 20 14:58:43 compute-0 nova_compute[250018]:     <input type="tablet" bus="usb"/>
Jan 20 14:58:43 compute-0 nova_compute[250018]:     <rng model="virtio">
Jan 20 14:58:43 compute-0 nova_compute[250018]:       <backend model="random">/dev/urandom</backend>
Jan 20 14:58:43 compute-0 nova_compute[250018]:     </rng>
Jan 20 14:58:43 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root"/>
Jan 20 14:58:43 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:58:43 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:58:43 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:58:43 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:58:43 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:58:43 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:58:43 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:58:43 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:58:43 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:58:43 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:58:43 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:58:43 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:58:43 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:58:43 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:58:43 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:58:43 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:58:43 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:58:43 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:58:43 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:58:43 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:58:43 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:58:43 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:58:43 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:58:43 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:58:43 compute-0 nova_compute[250018]:     <controller type="usb" index="0"/>
Jan 20 14:58:43 compute-0 nova_compute[250018]:     <memballoon model="virtio">
Jan 20 14:58:43 compute-0 nova_compute[250018]:       <stats period="10"/>
Jan 20 14:58:43 compute-0 nova_compute[250018]:     </memballoon>
Jan 20 14:58:43 compute-0 nova_compute[250018]:   </devices>
Jan 20 14:58:43 compute-0 nova_compute[250018]: </domain>
Jan 20 14:58:43 compute-0 nova_compute[250018]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 14:58:43 compute-0 nova_compute[250018]: 2026-01-20 14:58:43.794 250022 DEBUG nova.compute.manager [None req-23fd4c8f-0123-4fbe-84d6-b757bdbac2d4 0362dea63d5a43778f8d4164a77cd3c6 bae96a4c20ce4f3a859ff518a8423db5 - - default default] [instance: 20a008fa-d059-4906-ba1c-697755b1ba06] Preparing to wait for external event network-vif-plugged-947d815e-382a-4899-acd3-af215e4fbab4 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 20 14:58:43 compute-0 nova_compute[250018]: 2026-01-20 14:58:43.794 250022 DEBUG oslo_concurrency.lockutils [None req-23fd4c8f-0123-4fbe-84d6-b757bdbac2d4 0362dea63d5a43778f8d4164a77cd3c6 bae96a4c20ce4f3a859ff518a8423db5 - - default default] Acquiring lock "20a008fa-d059-4906-ba1c-697755b1ba06-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:58:43 compute-0 nova_compute[250018]: 2026-01-20 14:58:43.795 250022 DEBUG oslo_concurrency.lockutils [None req-23fd4c8f-0123-4fbe-84d6-b757bdbac2d4 0362dea63d5a43778f8d4164a77cd3c6 bae96a4c20ce4f3a859ff518a8423db5 - - default default] Lock "20a008fa-d059-4906-ba1c-697755b1ba06-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:58:43 compute-0 nova_compute[250018]: 2026-01-20 14:58:43.795 250022 DEBUG oslo_concurrency.lockutils [None req-23fd4c8f-0123-4fbe-84d6-b757bdbac2d4 0362dea63d5a43778f8d4164a77cd3c6 bae96a4c20ce4f3a859ff518a8423db5 - - default default] Lock "20a008fa-d059-4906-ba1c-697755b1ba06-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:58:43 compute-0 nova_compute[250018]: 2026-01-20 14:58:43.796 250022 DEBUG nova.virt.libvirt.vif [None req-23fd4c8f-0123-4fbe-84d6-b757bdbac2d4 0362dea63d5a43778f8d4164a77cd3c6 bae96a4c20ce4f3a859ff518a8423db5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:58:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-1617556540',display_name='tempest-ServerAddressesTestJSON-server-1617556540',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-1617556540',id=142,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='bae96a4c20ce4f3a859ff518a8423db5',ramdisk_id='',reservation_id='r-08ubvzco',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesTestJSON-1769818234',owner_user_name='tempest-ServerAddressesTestJSON-1769818234-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:58:32Z,user_data=None,user_id='0362dea63d5a43778f8d4164a77cd3c6',uuid=20a008fa-d059-4906-ba1c-697755b1ba06,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "947d815e-382a-4899-acd3-af215e4fbab4", "address": "fa:16:3e:b6:5b:09", "network": {"id": "7fc355b3-a02c-425b-a7e9-a307be1be8e7", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-388554845-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bae96a4c20ce4f3a859ff518a8423db5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap947d815e-38", "ovs_interfaceid": "947d815e-382a-4899-acd3-af215e4fbab4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 14:58:43 compute-0 nova_compute[250018]: 2026-01-20 14:58:43.797 250022 DEBUG nova.network.os_vif_util [None req-23fd4c8f-0123-4fbe-84d6-b757bdbac2d4 0362dea63d5a43778f8d4164a77cd3c6 bae96a4c20ce4f3a859ff518a8423db5 - - default default] Converting VIF {"id": "947d815e-382a-4899-acd3-af215e4fbab4", "address": "fa:16:3e:b6:5b:09", "network": {"id": "7fc355b3-a02c-425b-a7e9-a307be1be8e7", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-388554845-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bae96a4c20ce4f3a859ff518a8423db5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap947d815e-38", "ovs_interfaceid": "947d815e-382a-4899-acd3-af215e4fbab4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:58:43 compute-0 nova_compute[250018]: 2026-01-20 14:58:43.797 250022 DEBUG nova.network.os_vif_util [None req-23fd4c8f-0123-4fbe-84d6-b757bdbac2d4 0362dea63d5a43778f8d4164a77cd3c6 bae96a4c20ce4f3a859ff518a8423db5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b6:5b:09,bridge_name='br-int',has_traffic_filtering=True,id=947d815e-382a-4899-acd3-af215e4fbab4,network=Network(7fc355b3-a02c-425b-a7e9-a307be1be8e7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap947d815e-38') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:58:43 compute-0 nova_compute[250018]: 2026-01-20 14:58:43.798 250022 DEBUG os_vif [None req-23fd4c8f-0123-4fbe-84d6-b757bdbac2d4 0362dea63d5a43778f8d4164a77cd3c6 bae96a4c20ce4f3a859ff518a8423db5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b6:5b:09,bridge_name='br-int',has_traffic_filtering=True,id=947d815e-382a-4899-acd3-af215e4fbab4,network=Network(7fc355b3-a02c-425b-a7e9-a307be1be8e7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap947d815e-38') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 14:58:43 compute-0 nova_compute[250018]: 2026-01-20 14:58:43.799 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:58:43 compute-0 nova_compute[250018]: 2026-01-20 14:58:43.799 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:58:43 compute-0 nova_compute[250018]: 2026-01-20 14:58:43.800 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:58:43 compute-0 nova_compute[250018]: 2026-01-20 14:58:43.804 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:58:43 compute-0 nova_compute[250018]: 2026-01-20 14:58:43.804 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap947d815e-38, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:58:43 compute-0 nova_compute[250018]: 2026-01-20 14:58:43.805 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap947d815e-38, col_values=(('external_ids', {'iface-id': '947d815e-382a-4899-acd3-af215e4fbab4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b6:5b:09', 'vm-uuid': '20a008fa-d059-4906-ba1c-697755b1ba06'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:58:43 compute-0 nova_compute[250018]: 2026-01-20 14:58:43.807 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:58:43 compute-0 NetworkManager[48960]: <info>  [1768921123.8091] manager: (tap947d815e-38): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/254)
Jan 20 14:58:43 compute-0 nova_compute[250018]: 2026-01-20 14:58:43.809 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:58:43 compute-0 nova_compute[250018]: 2026-01-20 14:58:43.816 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:58:43 compute-0 nova_compute[250018]: 2026-01-20 14:58:43.817 250022 INFO os_vif [None req-23fd4c8f-0123-4fbe-84d6-b757bdbac2d4 0362dea63d5a43778f8d4164a77cd3c6 bae96a4c20ce4f3a859ff518a8423db5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b6:5b:09,bridge_name='br-int',has_traffic_filtering=True,id=947d815e-382a-4899-acd3-af215e4fbab4,network=Network(7fc355b3-a02c-425b-a7e9-a307be1be8e7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap947d815e-38')
Jan 20 14:58:44 compute-0 nova_compute[250018]: 2026-01-20 14:58:44.037 250022 DEBUG nova.compute.manager [req-f1cd253e-1252-4a4b-b743-a177d87a68a6 req-a3f56888-62cf-4f77-94f6-d651ac3eb371 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 9a8ed269-8170-4a79-aa6d-75abae8562c3] Received event network-vif-plugged-05a3e492-ecc2-4c60-8410-6fde269e9285 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:58:44 compute-0 nova_compute[250018]: 2026-01-20 14:58:44.038 250022 DEBUG oslo_concurrency.lockutils [req-f1cd253e-1252-4a4b-b743-a177d87a68a6 req-a3f56888-62cf-4f77-94f6-d651ac3eb371 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "9a8ed269-8170-4a79-aa6d-75abae8562c3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:58:44 compute-0 nova_compute[250018]: 2026-01-20 14:58:44.038 250022 DEBUG oslo_concurrency.lockutils [req-f1cd253e-1252-4a4b-b743-a177d87a68a6 req-a3f56888-62cf-4f77-94f6-d651ac3eb371 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "9a8ed269-8170-4a79-aa6d-75abae8562c3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:58:44 compute-0 nova_compute[250018]: 2026-01-20 14:58:44.039 250022 DEBUG oslo_concurrency.lockutils [req-f1cd253e-1252-4a4b-b743-a177d87a68a6 req-a3f56888-62cf-4f77-94f6-d651ac3eb371 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "9a8ed269-8170-4a79-aa6d-75abae8562c3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:58:44 compute-0 nova_compute[250018]: 2026-01-20 14:58:44.039 250022 DEBUG nova.compute.manager [req-f1cd253e-1252-4a4b-b743-a177d87a68a6 req-a3f56888-62cf-4f77-94f6-d651ac3eb371 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 9a8ed269-8170-4a79-aa6d-75abae8562c3] Processing event network-vif-plugged-05a3e492-ecc2-4c60-8410-6fde269e9285 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 20 14:58:44 compute-0 nova_compute[250018]: 2026-01-20 14:58:44.040 250022 DEBUG nova.compute.manager [None req-35d13496-d712-45a4-beed-ff19fa0f29f3 f78d8330caf745e1b9d9eb71d167e735 9edd9bb3389a4ef2a90569bcdd524d35 - - default default] [instance: 9a8ed269-8170-4a79-aa6d-75abae8562c3] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 14:58:44 compute-0 nova_compute[250018]: 2026-01-20 14:58:44.047 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768921124.0466983, 9a8ed269-8170-4a79-aa6d-75abae8562c3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:58:44 compute-0 nova_compute[250018]: 2026-01-20 14:58:44.047 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 9a8ed269-8170-4a79-aa6d-75abae8562c3] VM Resumed (Lifecycle Event)
Jan 20 14:58:44 compute-0 nova_compute[250018]: 2026-01-20 14:58:44.052 250022 DEBUG nova.virt.libvirt.driver [None req-35d13496-d712-45a4-beed-ff19fa0f29f3 f78d8330caf745e1b9d9eb71d167e735 9edd9bb3389a4ef2a90569bcdd524d35 - - default default] [instance: 9a8ed269-8170-4a79-aa6d-75abae8562c3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 14:58:44 compute-0 nova_compute[250018]: 2026-01-20 14:58:44.058 250022 INFO nova.virt.libvirt.driver [-] [instance: 9a8ed269-8170-4a79-aa6d-75abae8562c3] Instance spawned successfully.
Jan 20 14:58:44 compute-0 nova_compute[250018]: 2026-01-20 14:58:44.059 250022 DEBUG nova.virt.libvirt.driver [None req-35d13496-d712-45a4-beed-ff19fa0f29f3 f78d8330caf745e1b9d9eb71d167e735 9edd9bb3389a4ef2a90569bcdd524d35 - - default default] [instance: 9a8ed269-8170-4a79-aa6d-75abae8562c3] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 14:58:44 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:58:44 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3110291340' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:58:44 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/2546240148' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:58:44 compute-0 systemd[1]: Started libpod-conmon-720785ab90816d8683614a6bd43a6b878205ad0cfc96e112903753cc723ee834.scope.
Jan 20 14:58:44 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:58:44 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2200: 321 pgs: 321 active+clean; 392 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 38 KiB/s rd, 3.5 MiB/s wr, 60 op/s
Jan 20 14:58:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/620f0f322157c442b34b7c1d3e0e326d38341db9196f4a7204388c5d0f60386e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:58:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/620f0f322157c442b34b7c1d3e0e326d38341db9196f4a7204388c5d0f60386e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:58:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/620f0f322157c442b34b7c1d3e0e326d38341db9196f4a7204388c5d0f60386e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:58:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/620f0f322157c442b34b7c1d3e0e326d38341db9196f4a7204388c5d0f60386e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:58:44 compute-0 podman[332487]: 2026-01-20 14:58:44.132178256 +0000 UTC m=+0.788193072 container init 720785ab90816d8683614a6bd43a6b878205ad0cfc96e112903753cc723ee834 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_keldysh, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 20 14:58:44 compute-0 podman[332487]: 2026-01-20 14:58:44.139792292 +0000 UTC m=+0.795807078 container start 720785ab90816d8683614a6bd43a6b878205ad0cfc96e112903753cc723ee834 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_keldysh, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:58:44 compute-0 podman[332487]: 2026-01-20 14:58:44.142627968 +0000 UTC m=+0.798642814 container attach 720785ab90816d8683614a6bd43a6b878205ad0cfc96e112903753cc723ee834 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_keldysh, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:58:44 compute-0 nova_compute[250018]: 2026-01-20 14:58:44.428 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 9a8ed269-8170-4a79-aa6d-75abae8562c3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:58:44 compute-0 nova_compute[250018]: 2026-01-20 14:58:44.434 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 9a8ed269-8170-4a79-aa6d-75abae8562c3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:58:44 compute-0 nova_compute[250018]: 2026-01-20 14:58:44.440 250022 DEBUG nova.virt.libvirt.driver [None req-35d13496-d712-45a4-beed-ff19fa0f29f3 f78d8330caf745e1b9d9eb71d167e735 9edd9bb3389a4ef2a90569bcdd524d35 - - default default] [instance: 9a8ed269-8170-4a79-aa6d-75abae8562c3] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:58:44 compute-0 nova_compute[250018]: 2026-01-20 14:58:44.440 250022 DEBUG nova.virt.libvirt.driver [None req-35d13496-d712-45a4-beed-ff19fa0f29f3 f78d8330caf745e1b9d9eb71d167e735 9edd9bb3389a4ef2a90569bcdd524d35 - - default default] [instance: 9a8ed269-8170-4a79-aa6d-75abae8562c3] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:58:44 compute-0 nova_compute[250018]: 2026-01-20 14:58:44.441 250022 DEBUG nova.virt.libvirt.driver [None req-35d13496-d712-45a4-beed-ff19fa0f29f3 f78d8330caf745e1b9d9eb71d167e735 9edd9bb3389a4ef2a90569bcdd524d35 - - default default] [instance: 9a8ed269-8170-4a79-aa6d-75abae8562c3] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:58:44 compute-0 nova_compute[250018]: 2026-01-20 14:58:44.441 250022 DEBUG nova.virt.libvirt.driver [None req-35d13496-d712-45a4-beed-ff19fa0f29f3 f78d8330caf745e1b9d9eb71d167e735 9edd9bb3389a4ef2a90569bcdd524d35 - - default default] [instance: 9a8ed269-8170-4a79-aa6d-75abae8562c3] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:58:44 compute-0 nova_compute[250018]: 2026-01-20 14:58:44.442 250022 DEBUG nova.virt.libvirt.driver [None req-35d13496-d712-45a4-beed-ff19fa0f29f3 f78d8330caf745e1b9d9eb71d167e735 9edd9bb3389a4ef2a90569bcdd524d35 - - default default] [instance: 9a8ed269-8170-4a79-aa6d-75abae8562c3] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:58:44 compute-0 nova_compute[250018]: 2026-01-20 14:58:44.442 250022 DEBUG nova.virt.libvirt.driver [None req-35d13496-d712-45a4-beed-ff19fa0f29f3 f78d8330caf745e1b9d9eb71d167e735 9edd9bb3389a4ef2a90569bcdd524d35 - - default default] [instance: 9a8ed269-8170-4a79-aa6d-75abae8562c3] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:58:44 compute-0 nova_compute[250018]: 2026-01-20 14:58:44.457 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 9a8ed269-8170-4a79-aa6d-75abae8562c3] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:58:44 compute-0 nova_compute[250018]: 2026-01-20 14:58:44.480 250022 DEBUG nova.virt.libvirt.driver [None req-23fd4c8f-0123-4fbe-84d6-b757bdbac2d4 0362dea63d5a43778f8d4164a77cd3c6 bae96a4c20ce4f3a859ff518a8423db5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:58:44 compute-0 nova_compute[250018]: 2026-01-20 14:58:44.480 250022 DEBUG nova.virt.libvirt.driver [None req-23fd4c8f-0123-4fbe-84d6-b757bdbac2d4 0362dea63d5a43778f8d4164a77cd3c6 bae96a4c20ce4f3a859ff518a8423db5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:58:44 compute-0 nova_compute[250018]: 2026-01-20 14:58:44.480 250022 DEBUG nova.virt.libvirt.driver [None req-23fd4c8f-0123-4fbe-84d6-b757bdbac2d4 0362dea63d5a43778f8d4164a77cd3c6 bae96a4c20ce4f3a859ff518a8423db5 - - default default] No VIF found with MAC fa:16:3e:b6:5b:09, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 14:58:44 compute-0 nova_compute[250018]: 2026-01-20 14:58:44.481 250022 INFO nova.virt.libvirt.driver [None req-23fd4c8f-0123-4fbe-84d6-b757bdbac2d4 0362dea63d5a43778f8d4164a77cd3c6 bae96a4c20ce4f3a859ff518a8423db5 - - default default] [instance: 20a008fa-d059-4906-ba1c-697755b1ba06] Using config drive
Jan 20 14:58:44 compute-0 nova_compute[250018]: 2026-01-20 14:58:44.508 250022 DEBUG nova.storage.rbd_utils [None req-23fd4c8f-0123-4fbe-84d6-b757bdbac2d4 0362dea63d5a43778f8d4164a77cd3c6 bae96a4c20ce4f3a859ff518a8423db5 - - default default] rbd image 20a008fa-d059-4906-ba1c-697755b1ba06_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:58:44 compute-0 nova_compute[250018]: 2026-01-20 14:58:44.665 250022 INFO nova.compute.manager [None req-35d13496-d712-45a4-beed-ff19fa0f29f3 f78d8330caf745e1b9d9eb71d167e735 9edd9bb3389a4ef2a90569bcdd524d35 - - default default] [instance: 9a8ed269-8170-4a79-aa6d-75abae8562c3] Took 13.35 seconds to spawn the instance on the hypervisor.
Jan 20 14:58:44 compute-0 nova_compute[250018]: 2026-01-20 14:58:44.665 250022 DEBUG nova.compute.manager [None req-35d13496-d712-45a4-beed-ff19fa0f29f3 f78d8330caf745e1b9d9eb71d167e735 9edd9bb3389a4ef2a90569bcdd524d35 - - default default] [instance: 9a8ed269-8170-4a79-aa6d-75abae8562c3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:58:44 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e321 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:58:45 compute-0 nova_compute[250018]: 2026-01-20 14:58:45.015 250022 INFO nova.compute.manager [None req-35d13496-d712-45a4-beed-ff19fa0f29f3 f78d8330caf745e1b9d9eb71d167e735 9edd9bb3389a4ef2a90569bcdd524d35 - - default default] [instance: 9a8ed269-8170-4a79-aa6d-75abae8562c3] Took 14.76 seconds to build instance.
Jan 20 14:58:45 compute-0 nova_compute[250018]: 2026-01-20 14:58:45.131 250022 DEBUG oslo_concurrency.lockutils [None req-35d13496-d712-45a4-beed-ff19fa0f29f3 f78d8330caf745e1b9d9eb71d167e735 9edd9bb3389a4ef2a90569bcdd524d35 - - default default] Lock "9a8ed269-8170-4a79-aa6d-75abae8562c3" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.049s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:58:45 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:58:45 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:58:45 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:58:45.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:58:45 compute-0 stupefied_keldysh[332509]: [
Jan 20 14:58:45 compute-0 stupefied_keldysh[332509]:     {
Jan 20 14:58:45 compute-0 stupefied_keldysh[332509]:         "available": false,
Jan 20 14:58:45 compute-0 stupefied_keldysh[332509]:         "ceph_device": false,
Jan 20 14:58:45 compute-0 stupefied_keldysh[332509]:         "device_id": "QEMU_DVD-ROM_QM00001",
Jan 20 14:58:45 compute-0 stupefied_keldysh[332509]:         "lsm_data": {},
Jan 20 14:58:45 compute-0 stupefied_keldysh[332509]:         "lvs": [],
Jan 20 14:58:45 compute-0 stupefied_keldysh[332509]:         "path": "/dev/sr0",
Jan 20 14:58:45 compute-0 stupefied_keldysh[332509]:         "rejected_reasons": [
Jan 20 14:58:45 compute-0 stupefied_keldysh[332509]:             "Has a FileSystem",
Jan 20 14:58:45 compute-0 stupefied_keldysh[332509]:             "Insufficient space (<5GB)"
Jan 20 14:58:45 compute-0 stupefied_keldysh[332509]:         ],
Jan 20 14:58:45 compute-0 stupefied_keldysh[332509]:         "sys_api": {
Jan 20 14:58:45 compute-0 stupefied_keldysh[332509]:             "actuators": null,
Jan 20 14:58:45 compute-0 stupefied_keldysh[332509]:             "device_nodes": "sr0",
Jan 20 14:58:45 compute-0 stupefied_keldysh[332509]:             "devname": "sr0",
Jan 20 14:58:45 compute-0 stupefied_keldysh[332509]:             "human_readable_size": "482.00 KB",
Jan 20 14:58:45 compute-0 stupefied_keldysh[332509]:             "id_bus": "ata",
Jan 20 14:58:45 compute-0 stupefied_keldysh[332509]:             "model": "QEMU DVD-ROM",
Jan 20 14:58:45 compute-0 stupefied_keldysh[332509]:             "nr_requests": "2",
Jan 20 14:58:45 compute-0 stupefied_keldysh[332509]:             "parent": "/dev/sr0",
Jan 20 14:58:45 compute-0 stupefied_keldysh[332509]:             "partitions": {},
Jan 20 14:58:45 compute-0 stupefied_keldysh[332509]:             "path": "/dev/sr0",
Jan 20 14:58:45 compute-0 stupefied_keldysh[332509]:             "removable": "1",
Jan 20 14:58:45 compute-0 stupefied_keldysh[332509]:             "rev": "2.5+",
Jan 20 14:58:45 compute-0 stupefied_keldysh[332509]:             "ro": "0",
Jan 20 14:58:45 compute-0 stupefied_keldysh[332509]:             "rotational": "1",
Jan 20 14:58:45 compute-0 stupefied_keldysh[332509]:             "sas_address": "",
Jan 20 14:58:45 compute-0 stupefied_keldysh[332509]:             "sas_device_handle": "",
Jan 20 14:58:45 compute-0 stupefied_keldysh[332509]:             "scheduler_mode": "mq-deadline",
Jan 20 14:58:45 compute-0 stupefied_keldysh[332509]:             "sectors": 0,
Jan 20 14:58:45 compute-0 stupefied_keldysh[332509]:             "sectorsize": "2048",
Jan 20 14:58:45 compute-0 stupefied_keldysh[332509]:             "size": 493568.0,
Jan 20 14:58:45 compute-0 stupefied_keldysh[332509]:             "support_discard": "2048",
Jan 20 14:58:45 compute-0 stupefied_keldysh[332509]:             "type": "disk",
Jan 20 14:58:45 compute-0 stupefied_keldysh[332509]:             "vendor": "QEMU"
Jan 20 14:58:45 compute-0 stupefied_keldysh[332509]:         }
Jan 20 14:58:45 compute-0 stupefied_keldysh[332509]:     }
Jan 20 14:58:45 compute-0 stupefied_keldysh[332509]: ]
Jan 20 14:58:45 compute-0 systemd[1]: libpod-720785ab90816d8683614a6bd43a6b878205ad0cfc96e112903753cc723ee834.scope: Deactivated successfully.
Jan 20 14:58:45 compute-0 systemd[1]: libpod-720785ab90816d8683614a6bd43a6b878205ad0cfc96e112903753cc723ee834.scope: Consumed 1.130s CPU time.
Jan 20 14:58:45 compute-0 podman[333773]: 2026-01-20 14:58:45.324245556 +0000 UTC m=+0.025609561 container died 720785ab90816d8683614a6bd43a6b878205ad0cfc96e112903753cc723ee834 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_keldysh, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:58:45 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:58:45 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:58:45 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:58:45.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:58:46 compute-0 ceph-mon[74360]: pgmap v2200: 321 pgs: 321 active+clean; 392 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 38 KiB/s rd, 3.5 MiB/s wr, 60 op/s
Jan 20 14:58:46 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2201: 321 pgs: 321 active+clean; 392 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 759 KiB/s rd, 2.4 MiB/s wr, 92 op/s
Jan 20 14:58:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-620f0f322157c442b34b7c1d3e0e326d38341db9196f4a7204388c5d0f60386e-merged.mount: Deactivated successfully.
Jan 20 14:58:46 compute-0 podman[333773]: 2026-01-20 14:58:46.214511667 +0000 UTC m=+0.915875622 container remove 720785ab90816d8683614a6bd43a6b878205ad0cfc96e112903753cc723ee834 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_keldysh, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 20 14:58:46 compute-0 systemd[1]: libpod-conmon-720785ab90816d8683614a6bd43a6b878205ad0cfc96e112903753cc723ee834.scope: Deactivated successfully.
Jan 20 14:58:46 compute-0 sudo[332322]: pam_unix(sudo:session): session closed for user root
Jan 20 14:58:46 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 14:58:46 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:58:46 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 14:58:46 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:58:46 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 14:58:46 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:58:46 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 20 14:58:46 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:58:46 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 20 14:58:46 compute-0 nova_compute[250018]: 2026-01-20 14:58:46.323 250022 DEBUG nova.compute.manager [req-ae9c65e5-39e9-4d43-b3a4-fa2fb5b933f9 req-ec6a6159-ebff-4062-b90c-da632b8ea55d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 9a8ed269-8170-4a79-aa6d-75abae8562c3] Received event network-vif-plugged-05a3e492-ecc2-4c60-8410-6fde269e9285 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:58:46 compute-0 nova_compute[250018]: 2026-01-20 14:58:46.324 250022 DEBUG oslo_concurrency.lockutils [req-ae9c65e5-39e9-4d43-b3a4-fa2fb5b933f9 req-ec6a6159-ebff-4062-b90c-da632b8ea55d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "9a8ed269-8170-4a79-aa6d-75abae8562c3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:58:46 compute-0 nova_compute[250018]: 2026-01-20 14:58:46.324 250022 DEBUG oslo_concurrency.lockutils [req-ae9c65e5-39e9-4d43-b3a4-fa2fb5b933f9 req-ec6a6159-ebff-4062-b90c-da632b8ea55d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "9a8ed269-8170-4a79-aa6d-75abae8562c3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:58:46 compute-0 nova_compute[250018]: 2026-01-20 14:58:46.328 250022 DEBUG oslo_concurrency.lockutils [req-ae9c65e5-39e9-4d43-b3a4-fa2fb5b933f9 req-ec6a6159-ebff-4062-b90c-da632b8ea55d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "9a8ed269-8170-4a79-aa6d-75abae8562c3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:58:46 compute-0 nova_compute[250018]: 2026-01-20 14:58:46.328 250022 DEBUG nova.compute.manager [req-ae9c65e5-39e9-4d43-b3a4-fa2fb5b933f9 req-ec6a6159-ebff-4062-b90c-da632b8ea55d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 9a8ed269-8170-4a79-aa6d-75abae8562c3] No waiting events found dispatching network-vif-plugged-05a3e492-ecc2-4c60-8410-6fde269e9285 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:58:46 compute-0 nova_compute[250018]: 2026-01-20 14:58:46.329 250022 WARNING nova.compute.manager [req-ae9c65e5-39e9-4d43-b3a4-fa2fb5b933f9 req-ec6a6159-ebff-4062-b90c-da632b8ea55d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 9a8ed269-8170-4a79-aa6d-75abae8562c3] Received unexpected event network-vif-plugged-05a3e492-ecc2-4c60-8410-6fde269e9285 for instance with vm_state active and task_state None.
Jan 20 14:58:46 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:58:46 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev dd2ecd7a-b847-4326-8760-c69c44c50384 does not exist
Jan 20 14:58:46 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev dffa667c-69c0-4263-8200-7ba03f822ce9 does not exist
Jan 20 14:58:46 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 60f1a643-e028-4aec-b487-da1ecbd5735f does not exist
Jan 20 14:58:46 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 20 14:58:46 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:58:46 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 20 14:58:46 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:58:46 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 14:58:46 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:58:46 compute-0 sudo[333790]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:58:46 compute-0 sudo[333790]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:58:46 compute-0 sudo[333790]: pam_unix(sudo:session): session closed for user root
Jan 20 14:58:46 compute-0 nova_compute[250018]: 2026-01-20 14:58:46.512 250022 INFO nova.virt.libvirt.driver [None req-23fd4c8f-0123-4fbe-84d6-b757bdbac2d4 0362dea63d5a43778f8d4164a77cd3c6 bae96a4c20ce4f3a859ff518a8423db5 - - default default] [instance: 20a008fa-d059-4906-ba1c-697755b1ba06] Creating config drive at /var/lib/nova/instances/20a008fa-d059-4906-ba1c-697755b1ba06/disk.config
Jan 20 14:58:46 compute-0 sudo[333815]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:58:46 compute-0 sudo[333815]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:58:46 compute-0 sudo[333815]: pam_unix(sudo:session): session closed for user root
Jan 20 14:58:46 compute-0 nova_compute[250018]: 2026-01-20 14:58:46.520 250022 DEBUG oslo_concurrency.processutils [None req-23fd4c8f-0123-4fbe-84d6-b757bdbac2d4 0362dea63d5a43778f8d4164a77cd3c6 bae96a4c20ce4f3a859ff518a8423db5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/20a008fa-d059-4906-ba1c-697755b1ba06/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqb5a54p4 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:58:46 compute-0 sudo[333840]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:58:46 compute-0 sudo[333840]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:58:46 compute-0 sudo[333840]: pam_unix(sudo:session): session closed for user root
Jan 20 14:58:46 compute-0 sudo[333868]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 14:58:46 compute-0 sudo[333868]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:58:46 compute-0 nova_compute[250018]: 2026-01-20 14:58:46.659 250022 DEBUG oslo_concurrency.processutils [None req-23fd4c8f-0123-4fbe-84d6-b757bdbac2d4 0362dea63d5a43778f8d4164a77cd3c6 bae96a4c20ce4f3a859ff518a8423db5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/20a008fa-d059-4906-ba1c-697755b1ba06/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqb5a54p4" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:58:46 compute-0 nova_compute[250018]: 2026-01-20 14:58:46.702 250022 DEBUG nova.storage.rbd_utils [None req-23fd4c8f-0123-4fbe-84d6-b757bdbac2d4 0362dea63d5a43778f8d4164a77cd3c6 bae96a4c20ce4f3a859ff518a8423db5 - - default default] rbd image 20a008fa-d059-4906-ba1c-697755b1ba06_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:58:46 compute-0 nova_compute[250018]: 2026-01-20 14:58:46.706 250022 DEBUG oslo_concurrency.processutils [None req-23fd4c8f-0123-4fbe-84d6-b757bdbac2d4 0362dea63d5a43778f8d4164a77cd3c6 bae96a4c20ce4f3a859ff518a8423db5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/20a008fa-d059-4906-ba1c-697755b1ba06/disk.config 20a008fa-d059-4906-ba1c-697755b1ba06_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:58:46 compute-0 nova_compute[250018]: 2026-01-20 14:58:46.874 250022 DEBUG oslo_concurrency.processutils [None req-23fd4c8f-0123-4fbe-84d6-b757bdbac2d4 0362dea63d5a43778f8d4164a77cd3c6 bae96a4c20ce4f3a859ff518a8423db5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/20a008fa-d059-4906-ba1c-697755b1ba06/disk.config 20a008fa-d059-4906-ba1c-697755b1ba06_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.168s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:58:46 compute-0 nova_compute[250018]: 2026-01-20 14:58:46.875 250022 INFO nova.virt.libvirt.driver [None req-23fd4c8f-0123-4fbe-84d6-b757bdbac2d4 0362dea63d5a43778f8d4164a77cd3c6 bae96a4c20ce4f3a859ff518a8423db5 - - default default] [instance: 20a008fa-d059-4906-ba1c-697755b1ba06] Deleting local config drive /var/lib/nova/instances/20a008fa-d059-4906-ba1c-697755b1ba06/disk.config because it was imported into RBD.
Jan 20 14:58:46 compute-0 kernel: tap947d815e-38: entered promiscuous mode
Jan 20 14:58:46 compute-0 NetworkManager[48960]: <info>  [1768921126.9173] manager: (tap947d815e-38): new Tun device (/org/freedesktop/NetworkManager/Devices/255)
Jan 20 14:58:46 compute-0 nova_compute[250018]: 2026-01-20 14:58:46.920 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:58:46 compute-0 ovn_controller[148666]: 2026-01-20T14:58:46Z|00517|binding|INFO|Claiming lport 947d815e-382a-4899-acd3-af215e4fbab4 for this chassis.
Jan 20 14:58:46 compute-0 ovn_controller[148666]: 2026-01-20T14:58:46Z|00518|binding|INFO|947d815e-382a-4899-acd3-af215e4fbab4: Claiming fa:16:3e:b6:5b:09 10.100.0.11
Jan 20 14:58:46 compute-0 nova_compute[250018]: 2026-01-20 14:58:46.926 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:58:46 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:58:46.937 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b6:5b:09 10.100.0.11'], port_security=['fa:16:3e:b6:5b:09 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '20a008fa-d059-4906-ba1c-697755b1ba06', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7fc355b3-a02c-425b-a7e9-a307be1be8e7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bae96a4c20ce4f3a859ff518a8423db5', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c183cbd6-8d12-4b40-95e1-0acae9d53763', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4e10dd74-88ae-4292-9b27-90c6b0f9f3a8, chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=947d815e-382a-4899-acd3-af215e4fbab4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:58:46 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:58:46.940 160071 INFO neutron.agent.ovn.metadata.agent [-] Port 947d815e-382a-4899-acd3-af215e4fbab4 in datapath 7fc355b3-a02c-425b-a7e9-a307be1be8e7 bound to our chassis
Jan 20 14:58:46 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:58:46.941 160071 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7fc355b3-a02c-425b-a7e9-a307be1be8e7
Jan 20 14:58:46 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:58:46.952 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[6550c208-db03-4c0a-b6e7-de358d88898d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:58:46 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:58:46.953 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7fc355b3-a1 in ovnmeta-7fc355b3-a02c-425b-a7e9-a307be1be8e7 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 14:58:46 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:58:46.955 257604 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7fc355b3-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 14:58:46 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:58:46.955 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[48d9d1bb-5cea-410b-adbe-aab7f792d5cd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:58:46 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:58:46.956 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[09ddc7c9-73fe-48c5-89e5-6c125c95900f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:58:46 compute-0 systemd-machined[216401]: New machine qemu-67-instance-0000008e.
Jan 20 14:58:46 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:58:46.968 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[e236a26d-09fb-4f5d-afa9-fbd7946e6f2d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:58:46 compute-0 systemd[1]: Started Virtual Machine qemu-67-instance-0000008e.
Jan 20 14:58:46 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:58:46.992 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[43489610-5a70-420c-be11-17e83a2335eb]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:58:47 compute-0 systemd-udevd[333998]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:58:47 compute-0 ovn_controller[148666]: 2026-01-20T14:58:47Z|00519|binding|INFO|Setting lport 947d815e-382a-4899-acd3-af215e4fbab4 ovn-installed in OVS
Jan 20 14:58:47 compute-0 ovn_controller[148666]: 2026-01-20T14:58:47Z|00520|binding|INFO|Setting lport 947d815e-382a-4899-acd3-af215e4fbab4 up in Southbound
Jan 20 14:58:47 compute-0 nova_compute[250018]: 2026-01-20 14:58:47.013 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:58:47 compute-0 NetworkManager[48960]: <info>  [1768921127.0224] device (tap947d815e-38): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 14:58:47 compute-0 NetworkManager[48960]: <info>  [1768921127.0245] device (tap947d815e-38): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 14:58:47 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:58:47.029 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[f45c9d9a-7bbb-4e52-a279-7ef208416d39]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:58:47 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:58:47.034 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[87eeccd1-9eb0-4bd6-882c-3c836d65602e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:58:47 compute-0 NetworkManager[48960]: <info>  [1768921127.0353] manager: (tap7fc355b3-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/256)
Jan 20 14:58:47 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:58:47.063 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[b18d1d19-5a89-4752-96b6-895798a5b520]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:58:47 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:58:47.066 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[e3cd43d5-d9b5-4621-9cc4-4a3447c8d52a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:58:47 compute-0 nova_compute[250018]: 2026-01-20 14:58:47.069 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:58:47 compute-0 nova_compute[250018]: 2026-01-20 14:58:47.069 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:58:47 compute-0 nova_compute[250018]: 2026-01-20 14:58:47.070 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:58:47 compute-0 nova_compute[250018]: 2026-01-20 14:58:47.070 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:58:47 compute-0 nova_compute[250018]: 2026-01-20 14:58:47.070 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 14:58:47 compute-0 nova_compute[250018]: 2026-01-20 14:58:47.070 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:58:47 compute-0 podman[333979]: 2026-01-20 14:58:46.988497294 +0000 UTC m=+0.030940244 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:58:47 compute-0 NetworkManager[48960]: <info>  [1768921127.0838] device (tap7fc355b3-a0): carrier: link connected
Jan 20 14:58:47 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:58:47.089 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[e42ec786-5087-4107-82d2-9703b55f7580]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:58:47 compute-0 nova_compute[250018]: 2026-01-20 14:58:47.099 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:58:47 compute-0 nova_compute[250018]: 2026-01-20 14:58:47.099 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:58:47 compute-0 nova_compute[250018]: 2026-01-20 14:58:47.099 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:58:47 compute-0 nova_compute[250018]: 2026-01-20 14:58:47.100 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 14:58:47 compute-0 nova_compute[250018]: 2026-01-20 14:58:47.100 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:58:47 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:58:47.104 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[a4c5a090-1888-4499-8865-33f2de058046]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7fc355b3-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:01:c3:4d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 169], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 706780, 'reachable_time': 37432, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 334027, 'error': None, 'target': 'ovnmeta-7fc355b3-a02c-425b-a7e9-a307be1be8e7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:58:47 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:58:47.124 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[88e87f97-6a73-4fe6-8492-e0569f5d2c01]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe01:c34d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 706780, 'tstamp': 706780}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 334029, 'error': None, 'target': 'ovnmeta-7fc355b3-a02c-425b-a7e9-a307be1be8e7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:58:47 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:58:47.139 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[a8c4d4b0-8e55-464d-8c07-99a537e78cba]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7fc355b3-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:01:c3:4d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 169], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 706780, 'reachable_time': 37432, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 334030, 'error': None, 'target': 'ovnmeta-7fc355b3-a02c-425b-a7e9-a307be1be8e7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:58:47 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:58:47.166 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[b9bdd03e-d4e7-4904-a665-b7f8d52b995a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:58:47 compute-0 podman[333979]: 2026-01-20 14:58:47.181698368 +0000 UTC m=+0.224141288 container create b3830f36c3314fdc3661aed60c543d4797f344d83557e96c77da47c8f3d4c0b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_heisenberg, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 20 14:58:47 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:58:47 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:58:47 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:58:47.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:58:47 compute-0 systemd[1]: Started libpod-conmon-b3830f36c3314fdc3661aed60c543d4797f344d83557e96c77da47c8f3d4c0b5.scope.
Jan 20 14:58:47 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:58:47 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:58:47.251 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[fece2e2c-6555-49bd-93bd-46b2549b62e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:58:47 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:58:47.255 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7fc355b3-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:58:47 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:58:47.255 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:58:47 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:58:47.255 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7fc355b3-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:58:47 compute-0 NetworkManager[48960]: <info>  [1768921127.3011] manager: (tap7fc355b3-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/257)
Jan 20 14:58:47 compute-0 kernel: tap7fc355b3-a0: entered promiscuous mode
Jan 20 14:58:47 compute-0 nova_compute[250018]: 2026-01-20 14:58:47.305 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:58:47 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:58:47.306 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7fc355b3-a0, col_values=(('external_ids', {'iface-id': '80e43d7d-0d7a-4de6-9e33-1fd916d2bb92'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:58:47 compute-0 ovn_controller[148666]: 2026-01-20T14:58:47Z|00521|binding|INFO|Releasing lport 80e43d7d-0d7a-4de6-9e33-1fd916d2bb92 from this chassis (sb_readonly=0)
Jan 20 14:58:47 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:58:47.314 160071 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7fc355b3-a02c-425b-a7e9-a307be1be8e7.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7fc355b3-a02c-425b-a7e9-a307be1be8e7.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 14:58:47 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:58:47.315 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[9d75496a-c853-4afe-800b-85ec279f1f78]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:58:47 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:58:47.316 160071 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 14:58:47 compute-0 ovn_metadata_agent[160049]: global
Jan 20 14:58:47 compute-0 ovn_metadata_agent[160049]:     log         /dev/log local0 debug
Jan 20 14:58:47 compute-0 ovn_metadata_agent[160049]:     log-tag     haproxy-metadata-proxy-7fc355b3-a02c-425b-a7e9-a307be1be8e7
Jan 20 14:58:47 compute-0 ovn_metadata_agent[160049]:     user        root
Jan 20 14:58:47 compute-0 ovn_metadata_agent[160049]:     group       root
Jan 20 14:58:47 compute-0 ovn_metadata_agent[160049]:     maxconn     1024
Jan 20 14:58:47 compute-0 ovn_metadata_agent[160049]:     pidfile     /var/lib/neutron/external/pids/7fc355b3-a02c-425b-a7e9-a307be1be8e7.pid.haproxy
Jan 20 14:58:47 compute-0 ovn_metadata_agent[160049]:     daemon
Jan 20 14:58:47 compute-0 ovn_metadata_agent[160049]: 
Jan 20 14:58:47 compute-0 ovn_metadata_agent[160049]: defaults
Jan 20 14:58:47 compute-0 ovn_metadata_agent[160049]:     log global
Jan 20 14:58:47 compute-0 ovn_metadata_agent[160049]:     mode http
Jan 20 14:58:47 compute-0 ovn_metadata_agent[160049]:     option httplog
Jan 20 14:58:47 compute-0 ovn_metadata_agent[160049]:     option dontlognull
Jan 20 14:58:47 compute-0 ovn_metadata_agent[160049]:     option http-server-close
Jan 20 14:58:47 compute-0 ovn_metadata_agent[160049]:     option forwardfor
Jan 20 14:58:47 compute-0 ovn_metadata_agent[160049]:     retries                 3
Jan 20 14:58:47 compute-0 ovn_metadata_agent[160049]:     timeout http-request    30s
Jan 20 14:58:47 compute-0 ovn_metadata_agent[160049]:     timeout connect         30s
Jan 20 14:58:47 compute-0 ovn_metadata_agent[160049]:     timeout client          32s
Jan 20 14:58:47 compute-0 ovn_metadata_agent[160049]:     timeout server          32s
Jan 20 14:58:47 compute-0 ovn_metadata_agent[160049]:     timeout http-keep-alive 30s
Jan 20 14:58:47 compute-0 ovn_metadata_agent[160049]: 
Jan 20 14:58:47 compute-0 ovn_metadata_agent[160049]: 
Jan 20 14:58:47 compute-0 ovn_metadata_agent[160049]: listen listener
Jan 20 14:58:47 compute-0 ovn_metadata_agent[160049]:     bind 169.254.169.254:80
Jan 20 14:58:47 compute-0 ovn_metadata_agent[160049]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 14:58:47 compute-0 ovn_metadata_agent[160049]:     http-request add-header X-OVN-Network-ID 7fc355b3-a02c-425b-a7e9-a307be1be8e7
Jan 20 14:58:47 compute-0 ovn_metadata_agent[160049]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 14:58:47 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:58:47.318 160071 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7fc355b3-a02c-425b-a7e9-a307be1be8e7', 'env', 'PROCESS_TAG=haproxy-7fc355b3-a02c-425b-a7e9-a307be1be8e7', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7fc355b3-a02c-425b-a7e9-a307be1be8e7.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 14:58:47 compute-0 nova_compute[250018]: 2026-01-20 14:58:47.324 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:58:47 compute-0 podman[333979]: 2026-01-20 14:58:47.364964565 +0000 UTC m=+0.407407505 container init b3830f36c3314fdc3661aed60c543d4797f344d83557e96c77da47c8f3d4c0b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_heisenberg, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2)
Jan 20 14:58:47 compute-0 ceph-mon[74360]: pgmap v2201: 321 pgs: 321 active+clean; 392 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 759 KiB/s rd, 2.4 MiB/s wr, 92 op/s
Jan 20 14:58:47 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:58:47 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:58:47 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:58:47 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:58:47 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:58:47 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:58:47 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:58:47 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:58:47 compute-0 podman[333979]: 2026-01-20 14:58:47.373794913 +0000 UTC m=+0.416237833 container start b3830f36c3314fdc3661aed60c543d4797f344d83557e96c77da47c8f3d4c0b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_heisenberg, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:58:47 compute-0 condescending_heisenberg[334038]: 167 167
Jan 20 14:58:47 compute-0 systemd[1]: libpod-b3830f36c3314fdc3661aed60c543d4797f344d83557e96c77da47c8f3d4c0b5.scope: Deactivated successfully.
Jan 20 14:58:47 compute-0 podman[333979]: 2026-01-20 14:58:47.380797062 +0000 UTC m=+0.423240032 container attach b3830f36c3314fdc3661aed60c543d4797f344d83557e96c77da47c8f3d4c0b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_heisenberg, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 20 14:58:47 compute-0 podman[333979]: 2026-01-20 14:58:47.381305125 +0000 UTC m=+0.423748075 container died b3830f36c3314fdc3661aed60c543d4797f344d83557e96c77da47c8f3d4c0b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_heisenberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 20 14:58:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-378e1fb7de2e647e9eac7a67622f9d0e685f2b0c40be9baacfd3f4de4c92e4ce-merged.mount: Deactivated successfully.
Jan 20 14:58:47 compute-0 podman[333979]: 2026-01-20 14:58:47.423038939 +0000 UTC m=+0.465481869 container remove b3830f36c3314fdc3661aed60c543d4797f344d83557e96c77da47c8f3d4c0b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_heisenberg, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 20 14:58:47 compute-0 nova_compute[250018]: 2026-01-20 14:58:47.430 250022 DEBUG nova.network.neutron [req-19aaf3fb-be71-4e08-aae7-ebd74e6eb18f req-d0c3b58d-ac83-490b-9362-bcac78b2931f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 20a008fa-d059-4906-ba1c-697755b1ba06] Updated VIF entry in instance network info cache for port 947d815e-382a-4899-acd3-af215e4fbab4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 14:58:47 compute-0 nova_compute[250018]: 2026-01-20 14:58:47.431 250022 DEBUG nova.network.neutron [req-19aaf3fb-be71-4e08-aae7-ebd74e6eb18f req-d0c3b58d-ac83-490b-9362-bcac78b2931f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 20a008fa-d059-4906-ba1c-697755b1ba06] Updating instance_info_cache with network_info: [{"id": "947d815e-382a-4899-acd3-af215e4fbab4", "address": "fa:16:3e:b6:5b:09", "network": {"id": "7fc355b3-a02c-425b-a7e9-a307be1be8e7", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-388554845-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bae96a4c20ce4f3a859ff518a8423db5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap947d815e-38", "ovs_interfaceid": "947d815e-382a-4899-acd3-af215e4fbab4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:58:47 compute-0 systemd[1]: libpod-conmon-b3830f36c3314fdc3661aed60c543d4797f344d83557e96c77da47c8f3d4c0b5.scope: Deactivated successfully.
Jan 20 14:58:47 compute-0 nova_compute[250018]: 2026-01-20 14:58:47.452 250022 DEBUG oslo_concurrency.lockutils [req-19aaf3fb-be71-4e08-aae7-ebd74e6eb18f req-d0c3b58d-ac83-490b-9362-bcac78b2931f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-20a008fa-d059-4906-ba1c-697755b1ba06" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:58:47 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:58:47 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:58:47 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:58:47.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:58:47 compute-0 nova_compute[250018]: 2026-01-20 14:58:47.608 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.508s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:58:47 compute-0 nova_compute[250018]: 2026-01-20 14:58:47.614 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768921127.6144807, 20a008fa-d059-4906-ba1c-697755b1ba06 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:58:47 compute-0 nova_compute[250018]: 2026-01-20 14:58:47.615 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 20a008fa-d059-4906-ba1c-697755b1ba06] VM Started (Lifecycle Event)
Jan 20 14:58:47 compute-0 nova_compute[250018]: 2026-01-20 14:58:47.640 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 20a008fa-d059-4906-ba1c-697755b1ba06] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:58:47 compute-0 nova_compute[250018]: 2026-01-20 14:58:47.643 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768921127.6157315, 20a008fa-d059-4906-ba1c-697755b1ba06 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:58:47 compute-0 nova_compute[250018]: 2026-01-20 14:58:47.643 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 20a008fa-d059-4906-ba1c-697755b1ba06] VM Paused (Lifecycle Event)
Jan 20 14:58:47 compute-0 nova_compute[250018]: 2026-01-20 14:58:47.668 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 20a008fa-d059-4906-ba1c-697755b1ba06] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:58:47 compute-0 nova_compute[250018]: 2026-01-20 14:58:47.671 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 20a008fa-d059-4906-ba1c-697755b1ba06] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:58:47 compute-0 podman[334126]: 2026-01-20 14:58:47.602076622 +0000 UTC m=+0.028038226 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:58:47 compute-0 nova_compute[250018]: 2026-01-20 14:58:47.707 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 20a008fa-d059-4906-ba1c-697755b1ba06] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:58:47 compute-0 nova_compute[250018]: 2026-01-20 14:58:47.720 250022 DEBUG nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] skipping disk for instance-0000008e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:58:47 compute-0 nova_compute[250018]: 2026-01-20 14:58:47.720 250022 DEBUG nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] skipping disk for instance-0000008e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:58:47 compute-0 nova_compute[250018]: 2026-01-20 14:58:47.723 250022 DEBUG nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] skipping disk for instance-0000008d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:58:47 compute-0 nova_compute[250018]: 2026-01-20 14:58:47.723 250022 DEBUG nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] skipping disk for instance-0000008d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:58:47 compute-0 podman[334126]: 2026-01-20 14:58:47.764224889 +0000 UTC m=+0.190186473 container create c63ffacac6a4aad70f31876fe7a070e0e425d1fbfe6b73c22271e21a05d1b790 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_lamarr, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:58:47 compute-0 systemd[1]: Started libpod-conmon-c63ffacac6a4aad70f31876fe7a070e0e425d1fbfe6b73c22271e21a05d1b790.scope.
Jan 20 14:58:47 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:58:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3be0d855fbcf4daa27c33d2679ec1f00649eeabf302fbbdb7e3774d66d8a5e5a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:58:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3be0d855fbcf4daa27c33d2679ec1f00649eeabf302fbbdb7e3774d66d8a5e5a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:58:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3be0d855fbcf4daa27c33d2679ec1f00649eeabf302fbbdb7e3774d66d8a5e5a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:58:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3be0d855fbcf4daa27c33d2679ec1f00649eeabf302fbbdb7e3774d66d8a5e5a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:58:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3be0d855fbcf4daa27c33d2679ec1f00649eeabf302fbbdb7e3774d66d8a5e5a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 14:58:47 compute-0 podman[334161]: 2026-01-20 14:58:47.781610458 +0000 UTC m=+0.133189119 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 14:58:47 compute-0 nova_compute[250018]: 2026-01-20 14:58:47.903 250022 WARNING nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:58:47 compute-0 nova_compute[250018]: 2026-01-20 14:58:47.904 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3940MB free_disk=20.94619369506836GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 14:58:47 compute-0 nova_compute[250018]: 2026-01-20 14:58:47.904 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:58:47 compute-0 nova_compute[250018]: 2026-01-20 14:58:47.905 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:58:47 compute-0 podman[334161]: 2026-01-20 14:58:47.921601219 +0000 UTC m=+0.273179860 container create fe80699978c668e0465a59539fccbd21282b3832cf604e03b7232d7192cff4bf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7fc355b3-a02c-425b-a7e9-a307be1be8e7, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:58:47 compute-0 podman[334126]: 2026-01-20 14:58:47.959577571 +0000 UTC m=+0.385539175 container init c63ffacac6a4aad70f31876fe7a070e0e425d1fbfe6b73c22271e21a05d1b790 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_lamarr, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:58:47 compute-0 podman[334126]: 2026-01-20 14:58:47.967942867 +0000 UTC m=+0.393904451 container start c63ffacac6a4aad70f31876fe7a070e0e425d1fbfe6b73c22271e21a05d1b790 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_lamarr, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 20 14:58:47 compute-0 podman[334126]: 2026-01-20 14:58:47.97143326 +0000 UTC m=+0.397394864 container attach c63ffacac6a4aad70f31876fe7a070e0e425d1fbfe6b73c22271e21a05d1b790 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_lamarr, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:58:47 compute-0 systemd[1]: Started libpod-conmon-fe80699978c668e0465a59539fccbd21282b3832cf604e03b7232d7192cff4bf.scope.
Jan 20 14:58:48 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:58:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e34faa6dee8d4b679c654cb9e7f3696da806537700fb720b345a019ba529ab9/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 14:58:48 compute-0 podman[334161]: 2026-01-20 14:58:48.032041374 +0000 UTC m=+0.383620045 container init fe80699978c668e0465a59539fccbd21282b3832cf604e03b7232d7192cff4bf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7fc355b3-a02c-425b-a7e9-a307be1be8e7, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 14:58:48 compute-0 podman[334161]: 2026-01-20 14:58:48.03713053 +0000 UTC m=+0.388709171 container start fe80699978c668e0465a59539fccbd21282b3832cf604e03b7232d7192cff4bf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7fc355b3-a02c-425b-a7e9-a307be1be8e7, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 20 14:58:48 compute-0 neutron-haproxy-ovnmeta-7fc355b3-a02c-425b-a7e9-a307be1be8e7[334187]: [NOTICE]   (334191) : New worker (334193) forked
Jan 20 14:58:48 compute-0 neutron-haproxy-ovnmeta-7fc355b3-a02c-425b-a7e9-a307be1be8e7[334187]: [NOTICE]   (334191) : Loading success.
Jan 20 14:58:48 compute-0 nova_compute[250018]: 2026-01-20 14:58:48.087 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Instance 9a8ed269-8170-4a79-aa6d-75abae8562c3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 14:58:48 compute-0 nova_compute[250018]: 2026-01-20 14:58:48.087 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Instance 20a008fa-d059-4906-ba1c-697755b1ba06 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 14:58:48 compute-0 nova_compute[250018]: 2026-01-20 14:58:48.087 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 14:58:48 compute-0 nova_compute[250018]: 2026-01-20 14:58:48.088 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 14:58:48 compute-0 nova_compute[250018]: 2026-01-20 14:58:48.105 250022 DEBUG nova.scheduler.client.report [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Refreshing inventories for resource provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 20 14:58:48 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2202: 321 pgs: 321 active+clean; 392 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 723 KiB/s rd, 15 KiB/s wr, 37 op/s
Jan 20 14:58:48 compute-0 nova_compute[250018]: 2026-01-20 14:58:48.119 250022 DEBUG nova.scheduler.client.report [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Updating ProviderTree inventory for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 20 14:58:48 compute-0 nova_compute[250018]: 2026-01-20 14:58:48.120 250022 DEBUG nova.compute.provider_tree [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Updating inventory in ProviderTree for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 20 14:58:48 compute-0 nova_compute[250018]: 2026-01-20 14:58:48.141 250022 DEBUG nova.scheduler.client.report [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Refreshing aggregate associations for resource provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 20 14:58:48 compute-0 nova_compute[250018]: 2026-01-20 14:58:48.159 250022 DEBUG nova.scheduler.client.report [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Refreshing trait associations for resource provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed, traits: COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_TRUSTED_CERTS,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NODE,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_RESCUE_BFV,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SSE2,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VOLUME_EXTEND,COMPUTE_ACCELERATORS,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_SATA,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE41,HW_CPU_X86_MMX,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_IDE,COMPUTE_DEVICE_TAGGING _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 20 14:58:48 compute-0 nova_compute[250018]: 2026-01-20 14:58:48.213 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:58:48 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/579533939' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:58:48 compute-0 nova_compute[250018]: 2026-01-20 14:58:48.434 250022 DEBUG nova.compute.manager [req-aa21337e-86b2-42ab-bd99-2842a7420a0f req-c6d80b07-1d35-405f-bbdc-0ebd35de3504 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 20a008fa-d059-4906-ba1c-697755b1ba06] Received event network-vif-plugged-947d815e-382a-4899-acd3-af215e4fbab4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:58:48 compute-0 nova_compute[250018]: 2026-01-20 14:58:48.434 250022 DEBUG oslo_concurrency.lockutils [req-aa21337e-86b2-42ab-bd99-2842a7420a0f req-c6d80b07-1d35-405f-bbdc-0ebd35de3504 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "20a008fa-d059-4906-ba1c-697755b1ba06-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:58:48 compute-0 nova_compute[250018]: 2026-01-20 14:58:48.435 250022 DEBUG oslo_concurrency.lockutils [req-aa21337e-86b2-42ab-bd99-2842a7420a0f req-c6d80b07-1d35-405f-bbdc-0ebd35de3504 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "20a008fa-d059-4906-ba1c-697755b1ba06-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:58:48 compute-0 nova_compute[250018]: 2026-01-20 14:58:48.435 250022 DEBUG oslo_concurrency.lockutils [req-aa21337e-86b2-42ab-bd99-2842a7420a0f req-c6d80b07-1d35-405f-bbdc-0ebd35de3504 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "20a008fa-d059-4906-ba1c-697755b1ba06-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:58:48 compute-0 nova_compute[250018]: 2026-01-20 14:58:48.435 250022 DEBUG nova.compute.manager [req-aa21337e-86b2-42ab-bd99-2842a7420a0f req-c6d80b07-1d35-405f-bbdc-0ebd35de3504 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 20a008fa-d059-4906-ba1c-697755b1ba06] Processing event network-vif-plugged-947d815e-382a-4899-acd3-af215e4fbab4 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 20 14:58:48 compute-0 nova_compute[250018]: 2026-01-20 14:58:48.436 250022 DEBUG nova.compute.manager [None req-23fd4c8f-0123-4fbe-84d6-b757bdbac2d4 0362dea63d5a43778f8d4164a77cd3c6 bae96a4c20ce4f3a859ff518a8423db5 - - default default] [instance: 20a008fa-d059-4906-ba1c-697755b1ba06] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 14:58:48 compute-0 nova_compute[250018]: 2026-01-20 14:58:48.440 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768921128.4397628, 20a008fa-d059-4906-ba1c-697755b1ba06 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:58:48 compute-0 nova_compute[250018]: 2026-01-20 14:58:48.440 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 20a008fa-d059-4906-ba1c-697755b1ba06] VM Resumed (Lifecycle Event)
Jan 20 14:58:48 compute-0 nova_compute[250018]: 2026-01-20 14:58:48.443 250022 DEBUG nova.virt.libvirt.driver [None req-23fd4c8f-0123-4fbe-84d6-b757bdbac2d4 0362dea63d5a43778f8d4164a77cd3c6 bae96a4c20ce4f3a859ff518a8423db5 - - default default] [instance: 20a008fa-d059-4906-ba1c-697755b1ba06] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 14:58:48 compute-0 nova_compute[250018]: 2026-01-20 14:58:48.446 250022 INFO nova.virt.libvirt.driver [-] [instance: 20a008fa-d059-4906-ba1c-697755b1ba06] Instance spawned successfully.
Jan 20 14:58:48 compute-0 nova_compute[250018]: 2026-01-20 14:58:48.446 250022 DEBUG nova.virt.libvirt.driver [None req-23fd4c8f-0123-4fbe-84d6-b757bdbac2d4 0362dea63d5a43778f8d4164a77cd3c6 bae96a4c20ce4f3a859ff518a8423db5 - - default default] [instance: 20a008fa-d059-4906-ba1c-697755b1ba06] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 14:58:48 compute-0 nova_compute[250018]: 2026-01-20 14:58:48.484 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 20a008fa-d059-4906-ba1c-697755b1ba06] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:58:48 compute-0 nova_compute[250018]: 2026-01-20 14:58:48.492 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 20a008fa-d059-4906-ba1c-697755b1ba06] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:58:48 compute-0 nova_compute[250018]: 2026-01-20 14:58:48.495 250022 DEBUG nova.virt.libvirt.driver [None req-23fd4c8f-0123-4fbe-84d6-b757bdbac2d4 0362dea63d5a43778f8d4164a77cd3c6 bae96a4c20ce4f3a859ff518a8423db5 - - default default] [instance: 20a008fa-d059-4906-ba1c-697755b1ba06] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:58:48 compute-0 nova_compute[250018]: 2026-01-20 14:58:48.496 250022 DEBUG nova.virt.libvirt.driver [None req-23fd4c8f-0123-4fbe-84d6-b757bdbac2d4 0362dea63d5a43778f8d4164a77cd3c6 bae96a4c20ce4f3a859ff518a8423db5 - - default default] [instance: 20a008fa-d059-4906-ba1c-697755b1ba06] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:58:48 compute-0 nova_compute[250018]: 2026-01-20 14:58:48.496 250022 DEBUG nova.virt.libvirt.driver [None req-23fd4c8f-0123-4fbe-84d6-b757bdbac2d4 0362dea63d5a43778f8d4164a77cd3c6 bae96a4c20ce4f3a859ff518a8423db5 - - default default] [instance: 20a008fa-d059-4906-ba1c-697755b1ba06] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:58:48 compute-0 nova_compute[250018]: 2026-01-20 14:58:48.497 250022 DEBUG nova.virt.libvirt.driver [None req-23fd4c8f-0123-4fbe-84d6-b757bdbac2d4 0362dea63d5a43778f8d4164a77cd3c6 bae96a4c20ce4f3a859ff518a8423db5 - - default default] [instance: 20a008fa-d059-4906-ba1c-697755b1ba06] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:58:48 compute-0 nova_compute[250018]: 2026-01-20 14:58:48.497 250022 DEBUG nova.virt.libvirt.driver [None req-23fd4c8f-0123-4fbe-84d6-b757bdbac2d4 0362dea63d5a43778f8d4164a77cd3c6 bae96a4c20ce4f3a859ff518a8423db5 - - default default] [instance: 20a008fa-d059-4906-ba1c-697755b1ba06] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:58:48 compute-0 nova_compute[250018]: 2026-01-20 14:58:48.498 250022 DEBUG nova.virt.libvirt.driver [None req-23fd4c8f-0123-4fbe-84d6-b757bdbac2d4 0362dea63d5a43778f8d4164a77cd3c6 bae96a4c20ce4f3a859ff518a8423db5 - - default default] [instance: 20a008fa-d059-4906-ba1c-697755b1ba06] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:58:48 compute-0 nova_compute[250018]: 2026-01-20 14:58:48.552 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 20a008fa-d059-4906-ba1c-697755b1ba06] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:58:48 compute-0 nova_compute[250018]: 2026-01-20 14:58:48.608 250022 INFO nova.compute.manager [None req-23fd4c8f-0123-4fbe-84d6-b757bdbac2d4 0362dea63d5a43778f8d4164a77cd3c6 bae96a4c20ce4f3a859ff518a8423db5 - - default default] [instance: 20a008fa-d059-4906-ba1c-697755b1ba06] Took 16.49 seconds to spawn the instance on the hypervisor.
Jan 20 14:58:48 compute-0 nova_compute[250018]: 2026-01-20 14:58:48.608 250022 DEBUG nova.compute.manager [None req-23fd4c8f-0123-4fbe-84d6-b757bdbac2d4 0362dea63d5a43778f8d4164a77cd3c6 bae96a4c20ce4f3a859ff518a8423db5 - - default default] [instance: 20a008fa-d059-4906-ba1c-697755b1ba06] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:58:48 compute-0 nova_compute[250018]: 2026-01-20 14:58:48.616 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:58:48 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:58:48 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2719794551' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:58:48 compute-0 nova_compute[250018]: 2026-01-20 14:58:48.644 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:58:48 compute-0 nova_compute[250018]: 2026-01-20 14:58:48.651 250022 DEBUG nova.compute.provider_tree [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:58:48 compute-0 nova_compute[250018]: 2026-01-20 14:58:48.667 250022 DEBUG nova.scheduler.client.report [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:58:48 compute-0 nova_compute[250018]: 2026-01-20 14:58:48.675 250022 INFO nova.compute.manager [None req-23fd4c8f-0123-4fbe-84d6-b757bdbac2d4 0362dea63d5a43778f8d4164a77cd3c6 bae96a4c20ce4f3a859ff518a8423db5 - - default default] [instance: 20a008fa-d059-4906-ba1c-697755b1ba06] Took 17.81 seconds to build instance.
Jan 20 14:58:48 compute-0 cranky_lamarr[334179]: --> passed data devices: 0 physical, 1 LVM
Jan 20 14:58:48 compute-0 cranky_lamarr[334179]: --> relative data size: 1.0
Jan 20 14:58:48 compute-0 cranky_lamarr[334179]: --> All data devices are unavailable
Jan 20 14:58:48 compute-0 nova_compute[250018]: 2026-01-20 14:58:48.808 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:58:48 compute-0 systemd[1]: libpod-c63ffacac6a4aad70f31876fe7a070e0e425d1fbfe6b73c22271e21a05d1b790.scope: Deactivated successfully.
Jan 20 14:58:48 compute-0 podman[334126]: 2026-01-20 14:58:48.829850474 +0000 UTC m=+1.255812088 container died c63ffacac6a4aad70f31876fe7a070e0e425d1fbfe6b73c22271e21a05d1b790 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_lamarr, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:58:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-3be0d855fbcf4daa27c33d2679ec1f00649eeabf302fbbdb7e3774d66d8a5e5a-merged.mount: Deactivated successfully.
Jan 20 14:58:48 compute-0 podman[334126]: 2026-01-20 14:58:48.897978668 +0000 UTC m=+1.323940272 container remove c63ffacac6a4aad70f31876fe7a070e0e425d1fbfe6b73c22271e21a05d1b790 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_lamarr, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 20 14:58:48 compute-0 systemd[1]: libpod-conmon-c63ffacac6a4aad70f31876fe7a070e0e425d1fbfe6b73c22271e21a05d1b790.scope: Deactivated successfully.
Jan 20 14:58:48 compute-0 sudo[333868]: pam_unix(sudo:session): session closed for user root
Jan 20 14:58:48 compute-0 nova_compute[250018]: 2026-01-20 14:58:48.968 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 14:58:48 compute-0 nova_compute[250018]: 2026-01-20 14:58:48.969 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.065s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:58:48 compute-0 nova_compute[250018]: 2026-01-20 14:58:48.975 250022 DEBUG oslo_concurrency.lockutils [None req-23fd4c8f-0123-4fbe-84d6-b757bdbac2d4 0362dea63d5a43778f8d4164a77cd3c6 bae96a4c20ce4f3a859ff518a8423db5 - - default default] Lock "20a008fa-d059-4906-ba1c-697755b1ba06" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 18.201s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:58:49 compute-0 sudo[334247]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:58:49 compute-0 sudo[334247]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:58:49 compute-0 sudo[334247]: pam_unix(sudo:session): session closed for user root
Jan 20 14:58:49 compute-0 sudo[334272]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:58:49 compute-0 sudo[334272]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:58:49 compute-0 sudo[334272]: pam_unix(sudo:session): session closed for user root
Jan 20 14:58:49 compute-0 sudo[334297]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:58:49 compute-0 sudo[334297]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:58:49 compute-0 sudo[334297]: pam_unix(sudo:session): session closed for user root
Jan 20 14:58:49 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:58:49 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:58:49 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:58:49.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:58:49 compute-0 sudo[334322]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- lvm list --format json
Jan 20 14:58:49 compute-0 sudo[334322]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:58:49 compute-0 ceph-mon[74360]: pgmap v2202: 321 pgs: 321 active+clean; 392 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 723 KiB/s rd, 15 KiB/s wr, 37 op/s
Jan 20 14:58:49 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/2719794551' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:58:49 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/4234632430' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:58:49 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:58:49 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:58:49 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:58:49.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:58:49 compute-0 podman[334386]: 2026-01-20 14:58:49.581319205 +0000 UTC m=+0.049645508 container create 6cde7ddf76de9853d4163e1e2e7c5703d00944c250d00c44625d42524386a690 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_gould, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 20 14:58:49 compute-0 systemd[1]: Started libpod-conmon-6cde7ddf76de9853d4163e1e2e7c5703d00944c250d00c44625d42524386a690.scope.
Jan 20 14:58:49 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:58:49 compute-0 podman[334386]: 2026-01-20 14:58:49.562147079 +0000 UTC m=+0.030473402 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:58:49 compute-0 podman[334386]: 2026-01-20 14:58:49.664771193 +0000 UTC m=+0.133097516 container init 6cde7ddf76de9853d4163e1e2e7c5703d00944c250d00c44625d42524386a690 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_gould, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 20 14:58:49 compute-0 podman[334386]: 2026-01-20 14:58:49.672220183 +0000 UTC m=+0.140546496 container start 6cde7ddf76de9853d4163e1e2e7c5703d00944c250d00c44625d42524386a690 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_gould, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2)
Jan 20 14:58:49 compute-0 podman[334386]: 2026-01-20 14:58:49.676527809 +0000 UTC m=+0.144854142 container attach 6cde7ddf76de9853d4163e1e2e7c5703d00944c250d00c44625d42524386a690 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_gould, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:58:49 compute-0 nifty_gould[334400]: 167 167
Jan 20 14:58:49 compute-0 systemd[1]: libpod-6cde7ddf76de9853d4163e1e2e7c5703d00944c250d00c44625d42524386a690.scope: Deactivated successfully.
Jan 20 14:58:49 compute-0 conmon[334400]: conmon 6cde7ddf76de9853d416 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6cde7ddf76de9853d4163e1e2e7c5703d00944c250d00c44625d42524386a690.scope/container/memory.events
Jan 20 14:58:49 compute-0 podman[334386]: 2026-01-20 14:58:49.681211016 +0000 UTC m=+0.149537329 container died 6cde7ddf76de9853d4163e1e2e7c5703d00944c250d00c44625d42524386a690 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_gould, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:58:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-cf0c0ef6d48d8862693d87c5cc1e6a4acccfebdb6b26a7b8d296ba17735613f6-merged.mount: Deactivated successfully.
Jan 20 14:58:49 compute-0 podman[334386]: 2026-01-20 14:58:49.719184729 +0000 UTC m=+0.187511032 container remove 6cde7ddf76de9853d4163e1e2e7c5703d00944c250d00c44625d42524386a690 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_gould, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 20 14:58:49 compute-0 systemd[1]: libpod-conmon-6cde7ddf76de9853d4163e1e2e7c5703d00944c250d00c44625d42524386a690.scope: Deactivated successfully.
Jan 20 14:58:49 compute-0 podman[334424]: 2026-01-20 14:58:49.887371989 +0000 UTC m=+0.042787324 container create 88d52b2dbf5a64a789effc7bea1c50eabf4f991b649c556964b20862b481312a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_yonath, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 20 14:58:49 compute-0 systemd[1]: Started libpod-conmon-88d52b2dbf5a64a789effc7bea1c50eabf4f991b649c556964b20862b481312a.scope.
Jan 20 14:58:49 compute-0 nova_compute[250018]: 2026-01-20 14:58:49.950 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:58:49 compute-0 nova_compute[250018]: 2026-01-20 14:58:49.952 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:58:49 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:58:49 compute-0 podman[334424]: 2026-01-20 14:58:49.866748753 +0000 UTC m=+0.022164098 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:58:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/283eb361b70133c54b522df10a60f642876751d834b2e8df02689302b756371e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:58:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/283eb361b70133c54b522df10a60f642876751d834b2e8df02689302b756371e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:58:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/283eb361b70133c54b522df10a60f642876751d834b2e8df02689302b756371e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:58:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/283eb361b70133c54b522df10a60f642876751d834b2e8df02689302b756371e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:58:49 compute-0 podman[334424]: 2026-01-20 14:58:49.98172041 +0000 UTC m=+0.137135755 container init 88d52b2dbf5a64a789effc7bea1c50eabf4f991b649c556964b20862b481312a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_yonath, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:58:49 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e321 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:58:49 compute-0 podman[334424]: 2026-01-20 14:58:49.994905226 +0000 UTC m=+0.150320551 container start 88d52b2dbf5a64a789effc7bea1c50eabf4f991b649c556964b20862b481312a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_yonath, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 20 14:58:49 compute-0 podman[334424]: 2026-01-20 14:58:49.99842644 +0000 UTC m=+0.153841765 container attach 88d52b2dbf5a64a789effc7bea1c50eabf4f991b649c556964b20862b481312a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_yonath, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:58:50 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2203: 321 pgs: 321 active+clean; 392 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 16 KiB/s wr, 141 op/s
Jan 20 14:58:50 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/2379891077' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:58:50 compute-0 ceph-mon[74360]: pgmap v2203: 321 pgs: 321 active+clean; 392 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 16 KiB/s wr, 141 op/s
Jan 20 14:58:50 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/2990147587' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:58:50 compute-0 nova_compute[250018]: 2026-01-20 14:58:50.536 250022 DEBUG nova.compute.manager [req-25dd4768-db57-4cdc-9236-4cda9ee525b6 req-82c710e6-88ac-4439-9649-b239e5c66ee8 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 20a008fa-d059-4906-ba1c-697755b1ba06] Received event network-vif-plugged-947d815e-382a-4899-acd3-af215e4fbab4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:58:50 compute-0 nova_compute[250018]: 2026-01-20 14:58:50.537 250022 DEBUG oslo_concurrency.lockutils [req-25dd4768-db57-4cdc-9236-4cda9ee525b6 req-82c710e6-88ac-4439-9649-b239e5c66ee8 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "20a008fa-d059-4906-ba1c-697755b1ba06-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:58:50 compute-0 nova_compute[250018]: 2026-01-20 14:58:50.537 250022 DEBUG oslo_concurrency.lockutils [req-25dd4768-db57-4cdc-9236-4cda9ee525b6 req-82c710e6-88ac-4439-9649-b239e5c66ee8 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "20a008fa-d059-4906-ba1c-697755b1ba06-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:58:50 compute-0 nova_compute[250018]: 2026-01-20 14:58:50.538 250022 DEBUG oslo_concurrency.lockutils [req-25dd4768-db57-4cdc-9236-4cda9ee525b6 req-82c710e6-88ac-4439-9649-b239e5c66ee8 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "20a008fa-d059-4906-ba1c-697755b1ba06-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:58:50 compute-0 nova_compute[250018]: 2026-01-20 14:58:50.538 250022 DEBUG nova.compute.manager [req-25dd4768-db57-4cdc-9236-4cda9ee525b6 req-82c710e6-88ac-4439-9649-b239e5c66ee8 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 20a008fa-d059-4906-ba1c-697755b1ba06] No waiting events found dispatching network-vif-plugged-947d815e-382a-4899-acd3-af215e4fbab4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:58:50 compute-0 nova_compute[250018]: 2026-01-20 14:58:50.538 250022 WARNING nova.compute.manager [req-25dd4768-db57-4cdc-9236-4cda9ee525b6 req-82c710e6-88ac-4439-9649-b239e5c66ee8 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 20a008fa-d059-4906-ba1c-697755b1ba06] Received unexpected event network-vif-plugged-947d815e-382a-4899-acd3-af215e4fbab4 for instance with vm_state active and task_state None.
Jan 20 14:58:50 compute-0 nova_compute[250018]: 2026-01-20 14:58:50.556 250022 DEBUG oslo_concurrency.lockutils [None req-fb77b1cd-59f8-45b7-ba1c-855328190968 f78d8330caf745e1b9d9eb71d167e735 9edd9bb3389a4ef2a90569bcdd524d35 - - default default] Acquiring lock "9a8ed269-8170-4a79-aa6d-75abae8562c3" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:58:50 compute-0 nova_compute[250018]: 2026-01-20 14:58:50.556 250022 DEBUG oslo_concurrency.lockutils [None req-fb77b1cd-59f8-45b7-ba1c-855328190968 f78d8330caf745e1b9d9eb71d167e735 9edd9bb3389a4ef2a90569bcdd524d35 - - default default] Lock "9a8ed269-8170-4a79-aa6d-75abae8562c3" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:58:50 compute-0 nova_compute[250018]: 2026-01-20 14:58:50.557 250022 DEBUG oslo_concurrency.lockutils [None req-fb77b1cd-59f8-45b7-ba1c-855328190968 f78d8330caf745e1b9d9eb71d167e735 9edd9bb3389a4ef2a90569bcdd524d35 - - default default] Acquiring lock "9a8ed269-8170-4a79-aa6d-75abae8562c3-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:58:50 compute-0 nova_compute[250018]: 2026-01-20 14:58:50.557 250022 DEBUG oslo_concurrency.lockutils [None req-fb77b1cd-59f8-45b7-ba1c-855328190968 f78d8330caf745e1b9d9eb71d167e735 9edd9bb3389a4ef2a90569bcdd524d35 - - default default] Lock "9a8ed269-8170-4a79-aa6d-75abae8562c3-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:58:50 compute-0 nova_compute[250018]: 2026-01-20 14:58:50.557 250022 DEBUG oslo_concurrency.lockutils [None req-fb77b1cd-59f8-45b7-ba1c-855328190968 f78d8330caf745e1b9d9eb71d167e735 9edd9bb3389a4ef2a90569bcdd524d35 - - default default] Lock "9a8ed269-8170-4a79-aa6d-75abae8562c3-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:58:50 compute-0 nova_compute[250018]: 2026-01-20 14:58:50.559 250022 INFO nova.compute.manager [None req-fb77b1cd-59f8-45b7-ba1c-855328190968 f78d8330caf745e1b9d9eb71d167e735 9edd9bb3389a4ef2a90569bcdd524d35 - - default default] [instance: 9a8ed269-8170-4a79-aa6d-75abae8562c3] Terminating instance
Jan 20 14:58:50 compute-0 nova_compute[250018]: 2026-01-20 14:58:50.560 250022 DEBUG nova.compute.manager [None req-fb77b1cd-59f8-45b7-ba1c-855328190968 f78d8330caf745e1b9d9eb71d167e735 9edd9bb3389a4ef2a90569bcdd524d35 - - default default] [instance: 9a8ed269-8170-4a79-aa6d-75abae8562c3] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 14:58:50 compute-0 kernel: tap05a3e492-ec (unregistering): left promiscuous mode
Jan 20 14:58:50 compute-0 NetworkManager[48960]: <info>  [1768921130.6016] device (tap05a3e492-ec): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 14:58:50 compute-0 ovn_controller[148666]: 2026-01-20T14:58:50Z|00522|binding|INFO|Releasing lport 05a3e492-ecc2-4c60-8410-6fde269e9285 from this chassis (sb_readonly=0)
Jan 20 14:58:50 compute-0 ovn_controller[148666]: 2026-01-20T14:58:50Z|00523|binding|INFO|Setting lport 05a3e492-ecc2-4c60-8410-6fde269e9285 down in Southbound
Jan 20 14:58:50 compute-0 ovn_controller[148666]: 2026-01-20T14:58:50Z|00524|binding|INFO|Removing iface tap05a3e492-ec ovn-installed in OVS
Jan 20 14:58:50 compute-0 nova_compute[250018]: 2026-01-20 14:58:50.652 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:58:50 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:58:50.657 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:af:3f:23 10.100.0.3'], port_security=['fa:16:3e:af:3f:23 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '9a8ed269-8170-4a79-aa6d-75abae8562c3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2bc7e730-1c24-4817-b1fa-ada338da8f3d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9edd9bb3389a4ef2a90569bcdd524d35', 'neutron:revision_number': '4', 'neutron:security_group_ids': '95ba9616-5b79-4819-afc2-f87bef352b1d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=769ddbf1-bd27-45c3-a935-758b0ad7c249, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=05a3e492-ecc2-4c60-8410-6fde269e9285) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:58:50 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:58:50.659 160071 INFO neutron.agent.ovn.metadata.agent [-] Port 05a3e492-ecc2-4c60-8410-6fde269e9285 in datapath 2bc7e730-1c24-4817-b1fa-ada338da8f3d unbound from our chassis
Jan 20 14:58:50 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:58:50.666 160071 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2bc7e730-1c24-4817-b1fa-ada338da8f3d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 14:58:50 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:58:50.667 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[6eb0fe98-9997-4819-9bff-df1231f53bda]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:58:50 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:58:50.670 160071 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2bc7e730-1c24-4817-b1fa-ada338da8f3d namespace which is not needed anymore
Jan 20 14:58:50 compute-0 nova_compute[250018]: 2026-01-20 14:58:50.678 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:58:50 compute-0 systemd[1]: machine-qemu\x2d66\x2dinstance\x2d0000008d.scope: Deactivated successfully.
Jan 20 14:58:50 compute-0 systemd[1]: machine-qemu\x2d66\x2dinstance\x2d0000008d.scope: Consumed 6.926s CPU time.
Jan 20 14:58:50 compute-0 systemd-machined[216401]: Machine qemu-66-instance-0000008d terminated.
Jan 20 14:58:50 compute-0 angry_yonath[334440]: {
Jan 20 14:58:50 compute-0 angry_yonath[334440]:     "0": [
Jan 20 14:58:50 compute-0 angry_yonath[334440]:         {
Jan 20 14:58:50 compute-0 angry_yonath[334440]:             "devices": [
Jan 20 14:58:50 compute-0 angry_yonath[334440]:                 "/dev/loop3"
Jan 20 14:58:50 compute-0 angry_yonath[334440]:             ],
Jan 20 14:58:50 compute-0 angry_yonath[334440]:             "lv_name": "ceph_lv0",
Jan 20 14:58:50 compute-0 angry_yonath[334440]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:58:50 compute-0 angry_yonath[334440]:             "lv_size": "7511998464",
Jan 20 14:58:50 compute-0 angry_yonath[334440]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e399cf45-e6b6-5393-99f1-75c601d3f188,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afc3740a-ccee-46f8-b343-22648cc89376,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 20 14:58:50 compute-0 angry_yonath[334440]:             "lv_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 14:58:50 compute-0 angry_yonath[334440]:             "name": "ceph_lv0",
Jan 20 14:58:50 compute-0 angry_yonath[334440]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:58:50 compute-0 angry_yonath[334440]:             "tags": {
Jan 20 14:58:50 compute-0 angry_yonath[334440]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:58:50 compute-0 angry_yonath[334440]:                 "ceph.block_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 14:58:50 compute-0 angry_yonath[334440]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 14:58:50 compute-0 angry_yonath[334440]:                 "ceph.cluster_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 14:58:50 compute-0 angry_yonath[334440]:                 "ceph.cluster_name": "ceph",
Jan 20 14:58:50 compute-0 angry_yonath[334440]:                 "ceph.crush_device_class": "",
Jan 20 14:58:50 compute-0 angry_yonath[334440]:                 "ceph.encrypted": "0",
Jan 20 14:58:50 compute-0 angry_yonath[334440]:                 "ceph.osd_fsid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 14:58:50 compute-0 angry_yonath[334440]:                 "ceph.osd_id": "0",
Jan 20 14:58:50 compute-0 angry_yonath[334440]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 14:58:50 compute-0 angry_yonath[334440]:                 "ceph.type": "block",
Jan 20 14:58:50 compute-0 angry_yonath[334440]:                 "ceph.vdo": "0"
Jan 20 14:58:50 compute-0 angry_yonath[334440]:             },
Jan 20 14:58:50 compute-0 angry_yonath[334440]:             "type": "block",
Jan 20 14:58:50 compute-0 angry_yonath[334440]:             "vg_name": "ceph_vg0"
Jan 20 14:58:50 compute-0 angry_yonath[334440]:         }
Jan 20 14:58:50 compute-0 angry_yonath[334440]:     ]
Jan 20 14:58:50 compute-0 angry_yonath[334440]: }
Jan 20 14:58:50 compute-0 nova_compute[250018]: 2026-01-20 14:58:50.790 250022 INFO nova.virt.libvirt.driver [-] [instance: 9a8ed269-8170-4a79-aa6d-75abae8562c3] Instance destroyed successfully.
Jan 20 14:58:50 compute-0 nova_compute[250018]: 2026-01-20 14:58:50.792 250022 DEBUG nova.objects.instance [None req-fb77b1cd-59f8-45b7-ba1c-855328190968 f78d8330caf745e1b9d9eb71d167e735 9edd9bb3389a4ef2a90569bcdd524d35 - - default default] Lazy-loading 'resources' on Instance uuid 9a8ed269-8170-4a79-aa6d-75abae8562c3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:58:50 compute-0 systemd[1]: libpod-88d52b2dbf5a64a789effc7bea1c50eabf4f991b649c556964b20862b481312a.scope: Deactivated successfully.
Jan 20 14:58:50 compute-0 podman[334424]: 2026-01-20 14:58:50.808243444 +0000 UTC m=+0.963658769 container died 88d52b2dbf5a64a789effc7bea1c50eabf4f991b649c556964b20862b481312a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_yonath, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 20 14:58:50 compute-0 nova_compute[250018]: 2026-01-20 14:58:50.810 250022 DEBUG nova.virt.libvirt.vif [None req-fb77b1cd-59f8-45b7-ba1c-855328190968 f78d8330caf745e1b9d9eb71d167e735 9edd9bb3389a4ef2a90569bcdd524d35 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:58:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestMultiTenantJSON-server-1583931054',display_name='tempest-ServersNegativeTestMultiTenantJSON-server-1583931054',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestmultitenantjson-server-1583931054',id=141,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:58:44Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9edd9bb3389a4ef2a90569bcdd524d35',ramdisk_id='',reservation_id='r-gvd7ruz4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestMultiTenantJSON-881955012',owner_user_name='tempest-ServersNegativeTestMultiTenantJSON-881955012-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T14:58:44Z,user_data=None,user_id='f78d8330caf745e1b9d9eb71d167e735',uuid=9a8ed269-8170-4a79-aa6d-75abae8562c3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "05a3e492-ecc2-4c60-8410-6fde269e9285", "address": "fa:16:3e:af:3f:23", "network": {"id": "2bc7e730-1c24-4817-b1fa-ada338da8f3d", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1806138289-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9edd9bb3389a4ef2a90569bcdd524d35", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05a3e492-ec", "ovs_interfaceid": "05a3e492-ecc2-4c60-8410-6fde269e9285", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 14:58:50 compute-0 nova_compute[250018]: 2026-01-20 14:58:50.811 250022 DEBUG nova.network.os_vif_util [None req-fb77b1cd-59f8-45b7-ba1c-855328190968 f78d8330caf745e1b9d9eb71d167e735 9edd9bb3389a4ef2a90569bcdd524d35 - - default default] Converting VIF {"id": "05a3e492-ecc2-4c60-8410-6fde269e9285", "address": "fa:16:3e:af:3f:23", "network": {"id": "2bc7e730-1c24-4817-b1fa-ada338da8f3d", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1806138289-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9edd9bb3389a4ef2a90569bcdd524d35", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05a3e492-ec", "ovs_interfaceid": "05a3e492-ecc2-4c60-8410-6fde269e9285", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:58:50 compute-0 nova_compute[250018]: 2026-01-20 14:58:50.812 250022 DEBUG nova.network.os_vif_util [None req-fb77b1cd-59f8-45b7-ba1c-855328190968 f78d8330caf745e1b9d9eb71d167e735 9edd9bb3389a4ef2a90569bcdd524d35 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:af:3f:23,bridge_name='br-int',has_traffic_filtering=True,id=05a3e492-ecc2-4c60-8410-6fde269e9285,network=Network(2bc7e730-1c24-4817-b1fa-ada338da8f3d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05a3e492-ec') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:58:50 compute-0 nova_compute[250018]: 2026-01-20 14:58:50.813 250022 DEBUG os_vif [None req-fb77b1cd-59f8-45b7-ba1c-855328190968 f78d8330caf745e1b9d9eb71d167e735 9edd9bb3389a4ef2a90569bcdd524d35 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:af:3f:23,bridge_name='br-int',has_traffic_filtering=True,id=05a3e492-ecc2-4c60-8410-6fde269e9285,network=Network(2bc7e730-1c24-4817-b1fa-ada338da8f3d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05a3e492-ec') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 14:58:50 compute-0 nova_compute[250018]: 2026-01-20 14:58:50.815 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:58:50 compute-0 nova_compute[250018]: 2026-01-20 14:58:50.816 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap05a3e492-ec, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:58:50 compute-0 nova_compute[250018]: 2026-01-20 14:58:50.817 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:58:50 compute-0 nova_compute[250018]: 2026-01-20 14:58:50.821 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:58:50 compute-0 nova_compute[250018]: 2026-01-20 14:58:50.825 250022 INFO os_vif [None req-fb77b1cd-59f8-45b7-ba1c-855328190968 f78d8330caf745e1b9d9eb71d167e735 9edd9bb3389a4ef2a90569bcdd524d35 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:af:3f:23,bridge_name='br-int',has_traffic_filtering=True,id=05a3e492-ecc2-4c60-8410-6fde269e9285,network=Network(2bc7e730-1c24-4817-b1fa-ada338da8f3d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05a3e492-ec')
Jan 20 14:58:50 compute-0 neutron-haproxy-ovnmeta-2bc7e730-1c24-4817-b1fa-ada338da8f3d[332255]: [NOTICE]   (332283) : haproxy version is 2.8.14-c23fe91
Jan 20 14:58:50 compute-0 neutron-haproxy-ovnmeta-2bc7e730-1c24-4817-b1fa-ada338da8f3d[332255]: [NOTICE]   (332283) : path to executable is /usr/sbin/haproxy
Jan 20 14:58:50 compute-0 neutron-haproxy-ovnmeta-2bc7e730-1c24-4817-b1fa-ada338da8f3d[332255]: [WARNING]  (332283) : Exiting Master process...
Jan 20 14:58:50 compute-0 neutron-haproxy-ovnmeta-2bc7e730-1c24-4817-b1fa-ada338da8f3d[332255]: [WARNING]  (332283) : Exiting Master process...
Jan 20 14:58:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-283eb361b70133c54b522df10a60f642876751d834b2e8df02689302b756371e-merged.mount: Deactivated successfully.
Jan 20 14:58:50 compute-0 neutron-haproxy-ovnmeta-2bc7e730-1c24-4817-b1fa-ada338da8f3d[332255]: [ALERT]    (332283) : Current worker (332288) exited with code 143 (Terminated)
Jan 20 14:58:50 compute-0 neutron-haproxy-ovnmeta-2bc7e730-1c24-4817-b1fa-ada338da8f3d[332255]: [WARNING]  (332283) : All workers exited. Exiting... (0)
Jan 20 14:58:50 compute-0 systemd[1]: libpod-7ea49403ae5b5f873d62e2dc6bb18f9b2d07a0b93be5918f1086af4281f7ecdf.scope: Deactivated successfully.
Jan 20 14:58:50 compute-0 podman[334471]: 2026-01-20 14:58:50.846276238 +0000 UTC m=+0.076125912 container died 7ea49403ae5b5f873d62e2dc6bb18f9b2d07a0b93be5918f1086af4281f7ecdf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2bc7e730-1c24-4817-b1fa-ada338da8f3d, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 14:58:50 compute-0 podman[334424]: 2026-01-20 14:58:50.862995229 +0000 UTC m=+1.018410554 container remove 88d52b2dbf5a64a789effc7bea1c50eabf4f991b649c556964b20862b481312a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_yonath, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 20 14:58:50 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7ea49403ae5b5f873d62e2dc6bb18f9b2d07a0b93be5918f1086af4281f7ecdf-userdata-shm.mount: Deactivated successfully.
Jan 20 14:58:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-f7054240a97e6afec8d4f56479e38c290736e007edb81fe7a0d4af7e74ce14ad-merged.mount: Deactivated successfully.
Jan 20 14:58:50 compute-0 systemd[1]: libpod-conmon-88d52b2dbf5a64a789effc7bea1c50eabf4f991b649c556964b20862b481312a.scope: Deactivated successfully.
Jan 20 14:58:50 compute-0 podman[334471]: 2026-01-20 14:58:50.885777582 +0000 UTC m=+0.115627246 container cleanup 7ea49403ae5b5f873d62e2dc6bb18f9b2d07a0b93be5918f1086af4281f7ecdf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2bc7e730-1c24-4817-b1fa-ada338da8f3d, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:58:50 compute-0 sudo[334322]: pam_unix(sudo:session): session closed for user root
Jan 20 14:58:50 compute-0 systemd[1]: libpod-conmon-7ea49403ae5b5f873d62e2dc6bb18f9b2d07a0b93be5918f1086af4281f7ecdf.scope: Deactivated successfully.
Jan 20 14:58:50 compute-0 sudo[334535]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:58:50 compute-0 podman[334533]: 2026-01-20 14:58:50.970831123 +0000 UTC m=+0.058469136 container remove 7ea49403ae5b5f873d62e2dc6bb18f9b2d07a0b93be5918f1086af4281f7ecdf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2bc7e730-1c24-4817-b1fa-ada338da8f3d, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 20 14:58:50 compute-0 sudo[334535]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:58:50 compute-0 sudo[334535]: pam_unix(sudo:session): session closed for user root
Jan 20 14:58:50 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:58:50.988 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[7fcd66ba-8182-4227-8b8f-55da118aa402]: (4, ('Tue Jan 20 02:58:50 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-2bc7e730-1c24-4817-b1fa-ada338da8f3d (7ea49403ae5b5f873d62e2dc6bb18f9b2d07a0b93be5918f1086af4281f7ecdf)\n7ea49403ae5b5f873d62e2dc6bb18f9b2d07a0b93be5918f1086af4281f7ecdf\nTue Jan 20 02:58:50 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-2bc7e730-1c24-4817-b1fa-ada338da8f3d (7ea49403ae5b5f873d62e2dc6bb18f9b2d07a0b93be5918f1086af4281f7ecdf)\n7ea49403ae5b5f873d62e2dc6bb18f9b2d07a0b93be5918f1086af4281f7ecdf\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:58:50 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:58:50.991 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[93d1eb33-61c4-42f4-a857-283ced0a4c5d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:58:50 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:58:50.993 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2bc7e730-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:58:50 compute-0 nova_compute[250018]: 2026-01-20 14:58:50.995 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:58:50 compute-0 kernel: tap2bc7e730-10: left promiscuous mode
Jan 20 14:58:51 compute-0 nova_compute[250018]: 2026-01-20 14:58:51.013 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:58:51 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:58:51.017 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[da8d34dd-314e-407d-b4a3-2e1518a6f903]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:58:51 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:58:51.038 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[9b2d659b-2d96-43cc-9621-9b1713632540]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:58:51 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:58:51.039 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[31de4398-17cc-47cd-bff6-169a0692b24e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:58:51 compute-0 sudo[334572]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:58:51 compute-0 sudo[334572]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:58:51 compute-0 sudo[334572]: pam_unix(sudo:session): session closed for user root
Jan 20 14:58:51 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:58:51.063 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[0e88028f-e44d-427b-9a19-93274965f8d2]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 706256, 'reachable_time': 15924, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 334598, 'error': None, 'target': 'ovnmeta-2bc7e730-1c24-4817-b1fa-ada338da8f3d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:58:51 compute-0 systemd[1]: run-netns-ovnmeta\x2d2bc7e730\x2d1c24\x2d4817\x2db1fa\x2dada338da8f3d.mount: Deactivated successfully.
Jan 20 14:58:51 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:58:51.068 160517 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2bc7e730-1c24-4817-b1fa-ada338da8f3d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 14:58:51 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:58:51.068 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[07a1583c-b2a4-4cf5-adb5-85d17fdd9e45]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:58:51 compute-0 sudo[334600]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:58:51 compute-0 sudo[334600]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:58:51 compute-0 sudo[334600]: pam_unix(sudo:session): session closed for user root
Jan 20 14:58:51 compute-0 sudo[334626]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- raw list --format json
Jan 20 14:58:51 compute-0 sudo[334626]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:58:51 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:58:51 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:58:51 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:58:51.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:58:51 compute-0 nova_compute[250018]: 2026-01-20 14:58:51.363 250022 INFO nova.virt.libvirt.driver [None req-fb77b1cd-59f8-45b7-ba1c-855328190968 f78d8330caf745e1b9d9eb71d167e735 9edd9bb3389a4ef2a90569bcdd524d35 - - default default] [instance: 9a8ed269-8170-4a79-aa6d-75abae8562c3] Deleting instance files /var/lib/nova/instances/9a8ed269-8170-4a79-aa6d-75abae8562c3_del
Jan 20 14:58:51 compute-0 nova_compute[250018]: 2026-01-20 14:58:51.365 250022 INFO nova.virt.libvirt.driver [None req-fb77b1cd-59f8-45b7-ba1c-855328190968 f78d8330caf745e1b9d9eb71d167e735 9edd9bb3389a4ef2a90569bcdd524d35 - - default default] [instance: 9a8ed269-8170-4a79-aa6d-75abae8562c3] Deletion of /var/lib/nova/instances/9a8ed269-8170-4a79-aa6d-75abae8562c3_del complete
Jan 20 14:58:51 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/3942644152' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:58:51 compute-0 nova_compute[250018]: 2026-01-20 14:58:51.459 250022 INFO nova.compute.manager [None req-fb77b1cd-59f8-45b7-ba1c-855328190968 f78d8330caf745e1b9d9eb71d167e735 9edd9bb3389a4ef2a90569bcdd524d35 - - default default] [instance: 9a8ed269-8170-4a79-aa6d-75abae8562c3] Took 0.90 seconds to destroy the instance on the hypervisor.
Jan 20 14:58:51 compute-0 nova_compute[250018]: 2026-01-20 14:58:51.460 250022 DEBUG oslo.service.loopingcall [None req-fb77b1cd-59f8-45b7-ba1c-855328190968 f78d8330caf745e1b9d9eb71d167e735 9edd9bb3389a4ef2a90569bcdd524d35 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 14:58:51 compute-0 nova_compute[250018]: 2026-01-20 14:58:51.460 250022 DEBUG nova.compute.manager [-] [instance: 9a8ed269-8170-4a79-aa6d-75abae8562c3] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 14:58:51 compute-0 nova_compute[250018]: 2026-01-20 14:58:51.460 250022 DEBUG nova.network.neutron [-] [instance: 9a8ed269-8170-4a79-aa6d-75abae8562c3] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 14:58:51 compute-0 podman[334688]: 2026-01-20 14:58:51.55097042 +0000 UTC m=+0.037742228 container create 0b4e6173cb3ee76cc7adce08d1179e5942ef394a0c99c305af12203e42a46188 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_jones, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:58:51 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:58:51 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:58:51 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:58:51.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:58:51 compute-0 systemd[1]: Started libpod-conmon-0b4e6173cb3ee76cc7adce08d1179e5942ef394a0c99c305af12203e42a46188.scope.
Jan 20 14:58:51 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:58:51 compute-0 podman[334688]: 2026-01-20 14:58:51.535299228 +0000 UTC m=+0.022071066 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:58:51 compute-0 podman[334688]: 2026-01-20 14:58:51.639863293 +0000 UTC m=+0.126635121 container init 0b4e6173cb3ee76cc7adce08d1179e5942ef394a0c99c305af12203e42a46188 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_jones, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:58:51 compute-0 podman[334688]: 2026-01-20 14:58:51.649434481 +0000 UTC m=+0.136206299 container start 0b4e6173cb3ee76cc7adce08d1179e5942ef394a0c99c305af12203e42a46188 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_jones, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:58:51 compute-0 podman[334688]: 2026-01-20 14:58:51.654328734 +0000 UTC m=+0.141100562 container attach 0b4e6173cb3ee76cc7adce08d1179e5942ef394a0c99c305af12203e42a46188 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_jones, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:58:51 compute-0 cranky_jones[334705]: 167 167
Jan 20 14:58:51 compute-0 systemd[1]: libpod-0b4e6173cb3ee76cc7adce08d1179e5942ef394a0c99c305af12203e42a46188.scope: Deactivated successfully.
Jan 20 14:58:51 compute-0 conmon[334705]: conmon 0b4e6173cb3ee76cc7ad <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0b4e6173cb3ee76cc7adce08d1179e5942ef394a0c99c305af12203e42a46188.scope/container/memory.events
Jan 20 14:58:51 compute-0 podman[334688]: 2026-01-20 14:58:51.658839065 +0000 UTC m=+0.145610883 container died 0b4e6173cb3ee76cc7adce08d1179e5942ef394a0c99c305af12203e42a46188 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_jones, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 14:58:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-309124b81fa06c2dc811174fb1a960984ea38bc6435d0c0a0b28f75290ca07d0-merged.mount: Deactivated successfully.
Jan 20 14:58:51 compute-0 podman[334688]: 2026-01-20 14:58:51.708921314 +0000 UTC m=+0.195693122 container remove 0b4e6173cb3ee76cc7adce08d1179e5942ef394a0c99c305af12203e42a46188 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_jones, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:58:51 compute-0 systemd[1]: libpod-conmon-0b4e6173cb3ee76cc7adce08d1179e5942ef394a0c99c305af12203e42a46188.scope: Deactivated successfully.
Jan 20 14:58:51 compute-0 podman[334730]: 2026-01-20 14:58:51.897160154 +0000 UTC m=+0.044669954 container create 71e5ccda5a87272ae61d13b3f9dfa56951f4f864393c092011d185a4a8490f4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_meninsky, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 20 14:58:51 compute-0 systemd[1]: Started libpod-conmon-71e5ccda5a87272ae61d13b3f9dfa56951f4f864393c092011d185a4a8490f4e.scope.
Jan 20 14:58:51 compute-0 podman[334730]: 2026-01-20 14:58:51.876986641 +0000 UTC m=+0.024496461 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:58:51 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:58:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d104c3cd1f923a47605efb4e8e3ef535a78c82d4c4bd82cab9eaa93998c4bf1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:58:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d104c3cd1f923a47605efb4e8e3ef535a78c82d4c4bd82cab9eaa93998c4bf1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:58:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d104c3cd1f923a47605efb4e8e3ef535a78c82d4c4bd82cab9eaa93998c4bf1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:58:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d104c3cd1f923a47605efb4e8e3ef535a78c82d4c4bd82cab9eaa93998c4bf1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:58:51 compute-0 podman[334730]: 2026-01-20 14:58:51.993761846 +0000 UTC m=+0.141271666 container init 71e5ccda5a87272ae61d13b3f9dfa56951f4f864393c092011d185a4a8490f4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_meninsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:58:52 compute-0 podman[334730]: 2026-01-20 14:58:52.000865988 +0000 UTC m=+0.148375808 container start 71e5ccda5a87272ae61d13b3f9dfa56951f4f864393c092011d185a4a8490f4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_meninsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:58:52 compute-0 podman[334730]: 2026-01-20 14:58:52.004588048 +0000 UTC m=+0.152097848 container attach 71e5ccda5a87272ae61d13b3f9dfa56951f4f864393c092011d185a4a8490f4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_meninsky, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:58:52 compute-0 nova_compute[250018]: 2026-01-20 14:58:52.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:58:52 compute-0 nova_compute[250018]: 2026-01-20 14:58:52.052 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 14:58:52 compute-0 nova_compute[250018]: 2026-01-20 14:58:52.052 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 14:58:52 compute-0 nova_compute[250018]: 2026-01-20 14:58:52.078 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] [instance: 9a8ed269-8170-4a79-aa6d-75abae8562c3] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Jan 20 14:58:52 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2204: 321 pgs: 321 active+clean; 393 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 27 KiB/s wr, 191 op/s
Jan 20 14:58:52 compute-0 nova_compute[250018]: 2026-01-20 14:58:52.402 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "refresh_cache-20a008fa-d059-4906-ba1c-697755b1ba06" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:58:52 compute-0 nova_compute[250018]: 2026-01-20 14:58:52.402 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquired lock "refresh_cache-20a008fa-d059-4906-ba1c-697755b1ba06" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:58:52 compute-0 nova_compute[250018]: 2026-01-20 14:58:52.403 250022 DEBUG nova.network.neutron [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] [instance: 20a008fa-d059-4906-ba1c-697755b1ba06] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 20 14:58:52 compute-0 nova_compute[250018]: 2026-01-20 14:58:52.403 250022 DEBUG nova.objects.instance [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 20a008fa-d059-4906-ba1c-697755b1ba06 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:58:52 compute-0 ceph-mon[74360]: pgmap v2204: 321 pgs: 321 active+clean; 393 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 27 KiB/s wr, 191 op/s
Jan 20 14:58:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:58:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:58:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:58:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:58:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:58:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:58:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Optimize plan auto_2026-01-20_14:58:52
Jan 20 14:58:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 14:58:52 compute-0 ceph-mgr[74653]: [balancer INFO root] do_upmap
Jan 20 14:58:52 compute-0 ceph-mgr[74653]: [balancer INFO root] pools ['default.rgw.meta', '.rgw.root', '.mgr', 'vms', 'default.rgw.control', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.log', 'images', 'backups']
Jan 20 14:58:52 compute-0 ceph-mgr[74653]: [balancer INFO root] prepared 0/10 changes
Jan 20 14:58:52 compute-0 serene_meninsky[334746]: {
Jan 20 14:58:52 compute-0 serene_meninsky[334746]:     "afc3740a-ccee-46f8-b343-22648cc89376": {
Jan 20 14:58:52 compute-0 serene_meninsky[334746]:         "ceph_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 14:58:52 compute-0 serene_meninsky[334746]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 20 14:58:52 compute-0 serene_meninsky[334746]:         "osd_id": 0,
Jan 20 14:58:52 compute-0 serene_meninsky[334746]:         "osd_uuid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 14:58:52 compute-0 serene_meninsky[334746]:         "type": "bluestore"
Jan 20 14:58:52 compute-0 serene_meninsky[334746]:     }
Jan 20 14:58:52 compute-0 serene_meninsky[334746]: }
Jan 20 14:58:52 compute-0 systemd[1]: libpod-71e5ccda5a87272ae61d13b3f9dfa56951f4f864393c092011d185a4a8490f4e.scope: Deactivated successfully.
Jan 20 14:58:52 compute-0 podman[334730]: 2026-01-20 14:58:52.81956362 +0000 UTC m=+0.967073420 container died 71e5ccda5a87272ae61d13b3f9dfa56951f4f864393c092011d185a4a8490f4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_meninsky, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 20 14:58:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-9d104c3cd1f923a47605efb4e8e3ef535a78c82d4c4bd82cab9eaa93998c4bf1-merged.mount: Deactivated successfully.
Jan 20 14:58:52 compute-0 podman[334730]: 2026-01-20 14:58:52.872412144 +0000 UTC m=+1.019921944 container remove 71e5ccda5a87272ae61d13b3f9dfa56951f4f864393c092011d185a4a8490f4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_meninsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:58:52 compute-0 nova_compute[250018]: 2026-01-20 14:58:52.879 250022 DEBUG nova.compute.manager [req-665b765a-cd75-4346-baef-89f06bd74a96 req-10702034-f738-485a-8640-228ba594e93a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 9a8ed269-8170-4a79-aa6d-75abae8562c3] Received event network-vif-unplugged-05a3e492-ecc2-4c60-8410-6fde269e9285 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:58:52 compute-0 nova_compute[250018]: 2026-01-20 14:58:52.881 250022 DEBUG oslo_concurrency.lockutils [req-665b765a-cd75-4346-baef-89f06bd74a96 req-10702034-f738-485a-8640-228ba594e93a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "9a8ed269-8170-4a79-aa6d-75abae8562c3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:58:52 compute-0 nova_compute[250018]: 2026-01-20 14:58:52.882 250022 DEBUG oslo_concurrency.lockutils [req-665b765a-cd75-4346-baef-89f06bd74a96 req-10702034-f738-485a-8640-228ba594e93a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "9a8ed269-8170-4a79-aa6d-75abae8562c3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:58:52 compute-0 nova_compute[250018]: 2026-01-20 14:58:52.882 250022 DEBUG oslo_concurrency.lockutils [req-665b765a-cd75-4346-baef-89f06bd74a96 req-10702034-f738-485a-8640-228ba594e93a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "9a8ed269-8170-4a79-aa6d-75abae8562c3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:58:52 compute-0 systemd[1]: libpod-conmon-71e5ccda5a87272ae61d13b3f9dfa56951f4f864393c092011d185a4a8490f4e.scope: Deactivated successfully.
Jan 20 14:58:52 compute-0 nova_compute[250018]: 2026-01-20 14:58:52.882 250022 DEBUG nova.compute.manager [req-665b765a-cd75-4346-baef-89f06bd74a96 req-10702034-f738-485a-8640-228ba594e93a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 9a8ed269-8170-4a79-aa6d-75abae8562c3] No waiting events found dispatching network-vif-unplugged-05a3e492-ecc2-4c60-8410-6fde269e9285 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:58:52 compute-0 nova_compute[250018]: 2026-01-20 14:58:52.883 250022 DEBUG nova.compute.manager [req-665b765a-cd75-4346-baef-89f06bd74a96 req-10702034-f738-485a-8640-228ba594e93a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 9a8ed269-8170-4a79-aa6d-75abae8562c3] Received event network-vif-unplugged-05a3e492-ecc2-4c60-8410-6fde269e9285 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 20 14:58:52 compute-0 sudo[334626]: pam_unix(sudo:session): session closed for user root
Jan 20 14:58:52 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 14:58:52 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:58:52 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 14:58:52 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:58:52 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 7fa5f419-c12f-4283-a48e-b18bd34cb3fa does not exist
Jan 20 14:58:52 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 0bbb570b-5e3f-48c4-a9de-66e675d8a94d does not exist
Jan 20 14:58:52 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev e8e8e974-9210-41d9-9133-7f2fa2f25e05 does not exist
Jan 20 14:58:53 compute-0 sudo[334780]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:58:53 compute-0 sudo[334780]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:58:53 compute-0 sudo[334780]: pam_unix(sudo:session): session closed for user root
Jan 20 14:58:53 compute-0 sudo[334805]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 14:58:53 compute-0 sudo[334805]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:58:53 compute-0 sudo[334805]: pam_unix(sudo:session): session closed for user root
Jan 20 14:58:53 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:58:53 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:58:53 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:58:53.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:58:53 compute-0 nova_compute[250018]: 2026-01-20 14:58:53.363 250022 DEBUG oslo_concurrency.lockutils [None req-eda00add-c545-41b3-b637-f013983a41c1 0362dea63d5a43778f8d4164a77cd3c6 bae96a4c20ce4f3a859ff518a8423db5 - - default default] Acquiring lock "20a008fa-d059-4906-ba1c-697755b1ba06" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:58:53 compute-0 nova_compute[250018]: 2026-01-20 14:58:53.367 250022 DEBUG oslo_concurrency.lockutils [None req-eda00add-c545-41b3-b637-f013983a41c1 0362dea63d5a43778f8d4164a77cd3c6 bae96a4c20ce4f3a859ff518a8423db5 - - default default] Lock "20a008fa-d059-4906-ba1c-697755b1ba06" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:58:53 compute-0 nova_compute[250018]: 2026-01-20 14:58:53.367 250022 DEBUG oslo_concurrency.lockutils [None req-eda00add-c545-41b3-b637-f013983a41c1 0362dea63d5a43778f8d4164a77cd3c6 bae96a4c20ce4f3a859ff518a8423db5 - - default default] Acquiring lock "20a008fa-d059-4906-ba1c-697755b1ba06-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:58:53 compute-0 nova_compute[250018]: 2026-01-20 14:58:53.368 250022 DEBUG oslo_concurrency.lockutils [None req-eda00add-c545-41b3-b637-f013983a41c1 0362dea63d5a43778f8d4164a77cd3c6 bae96a4c20ce4f3a859ff518a8423db5 - - default default] Lock "20a008fa-d059-4906-ba1c-697755b1ba06-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:58:53 compute-0 nova_compute[250018]: 2026-01-20 14:58:53.369 250022 DEBUG oslo_concurrency.lockutils [None req-eda00add-c545-41b3-b637-f013983a41c1 0362dea63d5a43778f8d4164a77cd3c6 bae96a4c20ce4f3a859ff518a8423db5 - - default default] Lock "20a008fa-d059-4906-ba1c-697755b1ba06-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:58:53 compute-0 nova_compute[250018]: 2026-01-20 14:58:53.372 250022 INFO nova.compute.manager [None req-eda00add-c545-41b3-b637-f013983a41c1 0362dea63d5a43778f8d4164a77cd3c6 bae96a4c20ce4f3a859ff518a8423db5 - - default default] [instance: 20a008fa-d059-4906-ba1c-697755b1ba06] Terminating instance
Jan 20 14:58:53 compute-0 nova_compute[250018]: 2026-01-20 14:58:53.374 250022 DEBUG nova.compute.manager [None req-eda00add-c545-41b3-b637-f013983a41c1 0362dea63d5a43778f8d4164a77cd3c6 bae96a4c20ce4f3a859ff518a8423db5 - - default default] [instance: 20a008fa-d059-4906-ba1c-697755b1ba06] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 14:58:53 compute-0 kernel: tap947d815e-38 (unregistering): left promiscuous mode
Jan 20 14:58:53 compute-0 NetworkManager[48960]: <info>  [1768921133.4242] device (tap947d815e-38): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 14:58:53 compute-0 ovn_controller[148666]: 2026-01-20T14:58:53Z|00525|binding|INFO|Releasing lport 947d815e-382a-4899-acd3-af215e4fbab4 from this chassis (sb_readonly=0)
Jan 20 14:58:53 compute-0 ovn_controller[148666]: 2026-01-20T14:58:53Z|00526|binding|INFO|Setting lport 947d815e-382a-4899-acd3-af215e4fbab4 down in Southbound
Jan 20 14:58:53 compute-0 nova_compute[250018]: 2026-01-20 14:58:53.427 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:58:53 compute-0 ovn_controller[148666]: 2026-01-20T14:58:53Z|00527|binding|INFO|Removing iface tap947d815e-38 ovn-installed in OVS
Jan 20 14:58:53 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:58:53.438 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b6:5b:09 10.100.0.11'], port_security=['fa:16:3e:b6:5b:09 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '20a008fa-d059-4906-ba1c-697755b1ba06', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7fc355b3-a02c-425b-a7e9-a307be1be8e7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bae96a4c20ce4f3a859ff518a8423db5', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c183cbd6-8d12-4b40-95e1-0acae9d53763', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4e10dd74-88ae-4292-9b27-90c6b0f9f3a8, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=947d815e-382a-4899-acd3-af215e4fbab4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:58:53 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:58:53.439 160071 INFO neutron.agent.ovn.metadata.agent [-] Port 947d815e-382a-4899-acd3-af215e4fbab4 in datapath 7fc355b3-a02c-425b-a7e9-a307be1be8e7 unbound from our chassis
Jan 20 14:58:53 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:58:53.441 160071 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7fc355b3-a02c-425b-a7e9-a307be1be8e7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 14:58:53 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:58:53.442 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[b0422d70-7a94-413e-982f-ed8dfc074a1e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:58:53 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:58:53.443 160071 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7fc355b3-a02c-425b-a7e9-a307be1be8e7 namespace which is not needed anymore
Jan 20 14:58:53 compute-0 nova_compute[250018]: 2026-01-20 14:58:53.449 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:58:53 compute-0 systemd[1]: machine-qemu\x2d67\x2dinstance\x2d0000008e.scope: Deactivated successfully.
Jan 20 14:58:53 compute-0 systemd[1]: machine-qemu\x2d67\x2dinstance\x2d0000008e.scope: Consumed 5.389s CPU time.
Jan 20 14:58:53 compute-0 systemd-machined[216401]: Machine qemu-67-instance-0000008e terminated.
Jan 20 14:58:53 compute-0 neutron-haproxy-ovnmeta-7fc355b3-a02c-425b-a7e9-a307be1be8e7[334187]: [NOTICE]   (334191) : haproxy version is 2.8.14-c23fe91
Jan 20 14:58:53 compute-0 neutron-haproxy-ovnmeta-7fc355b3-a02c-425b-a7e9-a307be1be8e7[334187]: [NOTICE]   (334191) : path to executable is /usr/sbin/haproxy
Jan 20 14:58:53 compute-0 neutron-haproxy-ovnmeta-7fc355b3-a02c-425b-a7e9-a307be1be8e7[334187]: [WARNING]  (334191) : Exiting Master process...
Jan 20 14:58:53 compute-0 neutron-haproxy-ovnmeta-7fc355b3-a02c-425b-a7e9-a307be1be8e7[334187]: [ALERT]    (334191) : Current worker (334193) exited with code 143 (Terminated)
Jan 20 14:58:53 compute-0 neutron-haproxy-ovnmeta-7fc355b3-a02c-425b-a7e9-a307be1be8e7[334187]: [WARNING]  (334191) : All workers exited. Exiting... (0)
Jan 20 14:58:53 compute-0 systemd[1]: libpod-fe80699978c668e0465a59539fccbd21282b3832cf604e03b7232d7192cff4bf.scope: Deactivated successfully.
Jan 20 14:58:53 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:58:53 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:58:53 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:58:53.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:58:53 compute-0 podman[334852]: 2026-01-20 14:58:53.579323696 +0000 UTC m=+0.044757837 container died fe80699978c668e0465a59539fccbd21282b3832cf604e03b7232d7192cff4bf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7fc355b3-a02c-425b-a7e9-a307be1be8e7, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:58:53 compute-0 NetworkManager[48960]: <info>  [1768921133.5906] manager: (tap947d815e-38): new Tun device (/org/freedesktop/NetworkManager/Devices/258)
Jan 20 14:58:53 compute-0 nova_compute[250018]: 2026-01-20 14:58:53.609 250022 INFO nova.virt.libvirt.driver [-] [instance: 20a008fa-d059-4906-ba1c-697755b1ba06] Instance destroyed successfully.
Jan 20 14:58:53 compute-0 nova_compute[250018]: 2026-01-20 14:58:53.611 250022 DEBUG nova.objects.instance [None req-eda00add-c545-41b3-b637-f013983a41c1 0362dea63d5a43778f8d4164a77cd3c6 bae96a4c20ce4f3a859ff518a8423db5 - - default default] Lazy-loading 'resources' on Instance uuid 20a008fa-d059-4906-ba1c-697755b1ba06 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:58:53 compute-0 nova_compute[250018]: 2026-01-20 14:58:53.617 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:58:53 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-fe80699978c668e0465a59539fccbd21282b3832cf604e03b7232d7192cff4bf-userdata-shm.mount: Deactivated successfully.
Jan 20 14:58:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-0e34faa6dee8d4b679c654cb9e7f3696da806537700fb720b345a019ba529ab9-merged.mount: Deactivated successfully.
Jan 20 14:58:53 compute-0 podman[334852]: 2026-01-20 14:58:53.632510208 +0000 UTC m=+0.097944319 container cleanup fe80699978c668e0465a59539fccbd21282b3832cf604e03b7232d7192cff4bf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7fc355b3-a02c-425b-a7e9-a307be1be8e7, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 20 14:58:53 compute-0 systemd[1]: libpod-conmon-fe80699978c668e0465a59539fccbd21282b3832cf604e03b7232d7192cff4bf.scope: Deactivated successfully.
Jan 20 14:58:53 compute-0 podman[334891]: 2026-01-20 14:58:53.690525261 +0000 UTC m=+0.038156969 container remove fe80699978c668e0465a59539fccbd21282b3832cf604e03b7232d7192cff4bf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7fc355b3-a02c-425b-a7e9-a307be1be8e7, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:58:53 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:58:53.697 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[ca961824-d5d4-4326-b88f-d1d79172fbbe]: (4, ('Tue Jan 20 02:58:53 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-7fc355b3-a02c-425b-a7e9-a307be1be8e7 (fe80699978c668e0465a59539fccbd21282b3832cf604e03b7232d7192cff4bf)\nfe80699978c668e0465a59539fccbd21282b3832cf604e03b7232d7192cff4bf\nTue Jan 20 02:58:53 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-7fc355b3-a02c-425b-a7e9-a307be1be8e7 (fe80699978c668e0465a59539fccbd21282b3832cf604e03b7232d7192cff4bf)\nfe80699978c668e0465a59539fccbd21282b3832cf604e03b7232d7192cff4bf\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:58:53 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:58:53.698 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[2716d980-7c74-451a-a843-ec59a13c6a12]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:58:53 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:58:53.699 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7fc355b3-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:58:53 compute-0 nova_compute[250018]: 2026-01-20 14:58:53.700 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:58:53 compute-0 kernel: tap7fc355b3-a0: left promiscuous mode
Jan 20 14:58:53 compute-0 nova_compute[250018]: 2026-01-20 14:58:53.727 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:58:53 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:58:53.730 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[ab42a3d4-58a1-409c-a0bf-c052b28586f2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:58:53 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:58:53.746 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[bb552c6a-b6aa-4ca9-a4fd-8a260ce134ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:58:53 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:58:53.747 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[07408706-bd20-4f10-8051-754afeefb69d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:58:53 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:58:53.761 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[07a1f1fc-0655-4a1f-b127-93ecdef57f96]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 706774, 'reachable_time': 29115, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 334908, 'error': None, 'target': 'ovnmeta-7fc355b3-a02c-425b-a7e9-a307be1be8e7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:58:53 compute-0 systemd[1]: run-netns-ovnmeta\x2d7fc355b3\x2da02c\x2d425b\x2da7e9\x2da307be1be8e7.mount: Deactivated successfully.
Jan 20 14:58:53 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:58:53.763 160517 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7fc355b3-a02c-425b-a7e9-a307be1be8e7 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 14:58:53 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:58:53.763 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[bc94e161-d70c-4615-bd0a-1ac67b95a3dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:58:53 compute-0 nova_compute[250018]: 2026-01-20 14:58:53.831 250022 DEBUG nova.virt.libvirt.vif [None req-eda00add-c545-41b3-b637-f013983a41c1 0362dea63d5a43778f8d4164a77cd3c6 bae96a4c20ce4f3a859ff518a8423db5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:58:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-1617556540',display_name='tempest-ServerAddressesTestJSON-server-1617556540',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-1617556540',id=142,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:58:48Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='bae96a4c20ce4f3a859ff518a8423db5',ramdisk_id='',reservation_id='r-08ubvzco',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',im
age_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerAddressesTestJSON-1769818234',owner_user_name='tempest-ServerAddressesTestJSON-1769818234-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T14:58:48Z,user_data=None,user_id='0362dea63d5a43778f8d4164a77cd3c6',uuid=20a008fa-d059-4906-ba1c-697755b1ba06,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "947d815e-382a-4899-acd3-af215e4fbab4", "address": "fa:16:3e:b6:5b:09", "network": {"id": "7fc355b3-a02c-425b-a7e9-a307be1be8e7", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-388554845-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bae96a4c20ce4f3a859ff518a8423db5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap947d815e-38", "ovs_interfaceid": "947d815e-382a-4899-acd3-af215e4fbab4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 14:58:53 compute-0 nova_compute[250018]: 2026-01-20 14:58:53.831 250022 DEBUG nova.network.os_vif_util [None req-eda00add-c545-41b3-b637-f013983a41c1 0362dea63d5a43778f8d4164a77cd3c6 bae96a4c20ce4f3a859ff518a8423db5 - - default default] Converting VIF {"id": "947d815e-382a-4899-acd3-af215e4fbab4", "address": "fa:16:3e:b6:5b:09", "network": {"id": "7fc355b3-a02c-425b-a7e9-a307be1be8e7", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-388554845-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bae96a4c20ce4f3a859ff518a8423db5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap947d815e-38", "ovs_interfaceid": "947d815e-382a-4899-acd3-af215e4fbab4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:58:53 compute-0 nova_compute[250018]: 2026-01-20 14:58:53.832 250022 DEBUG nova.network.os_vif_util [None req-eda00add-c545-41b3-b637-f013983a41c1 0362dea63d5a43778f8d4164a77cd3c6 bae96a4c20ce4f3a859ff518a8423db5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b6:5b:09,bridge_name='br-int',has_traffic_filtering=True,id=947d815e-382a-4899-acd3-af215e4fbab4,network=Network(7fc355b3-a02c-425b-a7e9-a307be1be8e7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap947d815e-38') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:58:53 compute-0 nova_compute[250018]: 2026-01-20 14:58:53.832 250022 DEBUG os_vif [None req-eda00add-c545-41b3-b637-f013983a41c1 0362dea63d5a43778f8d4164a77cd3c6 bae96a4c20ce4f3a859ff518a8423db5 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b6:5b:09,bridge_name='br-int',has_traffic_filtering=True,id=947d815e-382a-4899-acd3-af215e4fbab4,network=Network(7fc355b3-a02c-425b-a7e9-a307be1be8e7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap947d815e-38') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 14:58:53 compute-0 nova_compute[250018]: 2026-01-20 14:58:53.833 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:58:53 compute-0 nova_compute[250018]: 2026-01-20 14:58:53.833 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap947d815e-38, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:58:53 compute-0 nova_compute[250018]: 2026-01-20 14:58:53.834 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:58:53 compute-0 nova_compute[250018]: 2026-01-20 14:58:53.835 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:58:53 compute-0 nova_compute[250018]: 2026-01-20 14:58:53.837 250022 INFO os_vif [None req-eda00add-c545-41b3-b637-f013983a41c1 0362dea63d5a43778f8d4164a77cd3c6 bae96a4c20ce4f3a859ff518a8423db5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b6:5b:09,bridge_name='br-int',has_traffic_filtering=True,id=947d815e-382a-4899-acd3-af215e4fbab4,network=Network(7fc355b3-a02c-425b-a7e9-a307be1be8e7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap947d815e-38')
Jan 20 14:58:53 compute-0 sudo[334910]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:58:53 compute-0 sudo[334910]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:58:53 compute-0 sudo[334910]: pam_unix(sudo:session): session closed for user root
Jan 20 14:58:53 compute-0 sudo[334953]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:58:53 compute-0 sudo[334953]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:58:53 compute-0 sudo[334953]: pam_unix(sudo:session): session closed for user root
Jan 20 14:58:54 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:58:54 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:58:54 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2205: 321 pgs: 321 active+clean; 374 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 26 KiB/s wr, 282 op/s
Jan 20 14:58:54 compute-0 nova_compute[250018]: 2026-01-20 14:58:54.339 250022 DEBUG nova.network.neutron [-] [instance: 9a8ed269-8170-4a79-aa6d-75abae8562c3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:58:54 compute-0 nova_compute[250018]: 2026-01-20 14:58:54.537 250022 INFO nova.compute.manager [-] [instance: 9a8ed269-8170-4a79-aa6d-75abae8562c3] Took 3.08 seconds to deallocate network for instance.
Jan 20 14:58:54 compute-0 nova_compute[250018]: 2026-01-20 14:58:54.613 250022 DEBUG oslo_concurrency.lockutils [None req-fb77b1cd-59f8-45b7-ba1c-855328190968 f78d8330caf745e1b9d9eb71d167e735 9edd9bb3389a4ef2a90569bcdd524d35 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:58:54 compute-0 nova_compute[250018]: 2026-01-20 14:58:54.614 250022 DEBUG oslo_concurrency.lockutils [None req-fb77b1cd-59f8-45b7-ba1c-855328190968 f78d8330caf745e1b9d9eb71d167e735 9edd9bb3389a4ef2a90569bcdd524d35 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:58:54 compute-0 nova_compute[250018]: 2026-01-20 14:58:54.710 250022 DEBUG oslo_concurrency.processutils [None req-fb77b1cd-59f8-45b7-ba1c-855328190968 f78d8330caf745e1b9d9eb71d167e735 9edd9bb3389a4ef2a90569bcdd524d35 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:58:54 compute-0 nova_compute[250018]: 2026-01-20 14:58:54.764 250022 DEBUG nova.network.neutron [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] [instance: 20a008fa-d059-4906-ba1c-697755b1ba06] Updating instance_info_cache with network_info: [{"id": "947d815e-382a-4899-acd3-af215e4fbab4", "address": "fa:16:3e:b6:5b:09", "network": {"id": "7fc355b3-a02c-425b-a7e9-a307be1be8e7", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-388554845-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bae96a4c20ce4f3a859ff518a8423db5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap947d815e-38", "ovs_interfaceid": "947d815e-382a-4899-acd3-af215e4fbab4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:58:54 compute-0 nova_compute[250018]: 2026-01-20 14:58:54.787 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Releasing lock "refresh_cache-20a008fa-d059-4906-ba1c-697755b1ba06" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:58:54 compute-0 nova_compute[250018]: 2026-01-20 14:58:54.787 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] [instance: 20a008fa-d059-4906-ba1c-697755b1ba06] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 20 14:58:54 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e321 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:58:55 compute-0 nova_compute[250018]: 2026-01-20 14:58:55.016 250022 INFO nova.virt.libvirt.driver [None req-eda00add-c545-41b3-b637-f013983a41c1 0362dea63d5a43778f8d4164a77cd3c6 bae96a4c20ce4f3a859ff518a8423db5 - - default default] [instance: 20a008fa-d059-4906-ba1c-697755b1ba06] Deleting instance files /var/lib/nova/instances/20a008fa-d059-4906-ba1c-697755b1ba06_del
Jan 20 14:58:55 compute-0 nova_compute[250018]: 2026-01-20 14:58:55.017 250022 INFO nova.virt.libvirt.driver [None req-eda00add-c545-41b3-b637-f013983a41c1 0362dea63d5a43778f8d4164a77cd3c6 bae96a4c20ce4f3a859ff518a8423db5 - - default default] [instance: 20a008fa-d059-4906-ba1c-697755b1ba06] Deletion of /var/lib/nova/instances/20a008fa-d059-4906-ba1c-697755b1ba06_del complete
Jan 20 14:58:55 compute-0 ceph-mon[74360]: pgmap v2205: 321 pgs: 321 active+clean; 374 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 26 KiB/s wr, 282 op/s
Jan 20 14:58:55 compute-0 nova_compute[250018]: 2026-01-20 14:58:55.119 250022 INFO nova.compute.manager [None req-eda00add-c545-41b3-b637-f013983a41c1 0362dea63d5a43778f8d4164a77cd3c6 bae96a4c20ce4f3a859ff518a8423db5 - - default default] [instance: 20a008fa-d059-4906-ba1c-697755b1ba06] Took 1.74 seconds to destroy the instance on the hypervisor.
Jan 20 14:58:55 compute-0 nova_compute[250018]: 2026-01-20 14:58:55.119 250022 DEBUG oslo.service.loopingcall [None req-eda00add-c545-41b3-b637-f013983a41c1 0362dea63d5a43778f8d4164a77cd3c6 bae96a4c20ce4f3a859ff518a8423db5 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 14:58:55 compute-0 nova_compute[250018]: 2026-01-20 14:58:55.120 250022 DEBUG nova.compute.manager [-] [instance: 20a008fa-d059-4906-ba1c-697755b1ba06] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 14:58:55 compute-0 nova_compute[250018]: 2026-01-20 14:58:55.121 250022 DEBUG nova.network.neutron [-] [instance: 20a008fa-d059-4906-ba1c-697755b1ba06] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 14:58:55 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:58:55 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1260322840' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:58:55 compute-0 nova_compute[250018]: 2026-01-20 14:58:55.205 250022 DEBUG oslo_concurrency.processutils [None req-fb77b1cd-59f8-45b7-ba1c-855328190968 f78d8330caf745e1b9d9eb71d167e735 9edd9bb3389a4ef2a90569bcdd524d35 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:58:55 compute-0 nova_compute[250018]: 2026-01-20 14:58:55.213 250022 DEBUG nova.compute.provider_tree [None req-fb77b1cd-59f8-45b7-ba1c-855328190968 f78d8330caf745e1b9d9eb71d167e735 9edd9bb3389a4ef2a90569bcdd524d35 - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:58:55 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:58:55 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:58:55 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:58:55.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:58:55 compute-0 nova_compute[250018]: 2026-01-20 14:58:55.406 250022 DEBUG nova.scheduler.client.report [None req-fb77b1cd-59f8-45b7-ba1c-855328190968 f78d8330caf745e1b9d9eb71d167e735 9edd9bb3389a4ef2a90569bcdd524d35 - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:58:55 compute-0 nova_compute[250018]: 2026-01-20 14:58:55.430 250022 DEBUG oslo_concurrency.lockutils [None req-fb77b1cd-59f8-45b7-ba1c-855328190968 f78d8330caf745e1b9d9eb71d167e735 9edd9bb3389a4ef2a90569bcdd524d35 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.816s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:58:55 compute-0 nova_compute[250018]: 2026-01-20 14:58:55.488 250022 INFO nova.scheduler.client.report [None req-fb77b1cd-59f8-45b7-ba1c-855328190968 f78d8330caf745e1b9d9eb71d167e735 9edd9bb3389a4ef2a90569bcdd524d35 - - default default] Deleted allocations for instance 9a8ed269-8170-4a79-aa6d-75abae8562c3
Jan 20 14:58:55 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:58:55 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:58:55 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:58:55.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:58:55 compute-0 nova_compute[250018]: 2026-01-20 14:58:55.587 250022 DEBUG oslo_concurrency.lockutils [None req-fb77b1cd-59f8-45b7-ba1c-855328190968 f78d8330caf745e1b9d9eb71d167e735 9edd9bb3389a4ef2a90569bcdd524d35 - - default default] Lock "9a8ed269-8170-4a79-aa6d-75abae8562c3" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.031s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:58:55 compute-0 nova_compute[250018]: 2026-01-20 14:58:55.999 250022 DEBUG nova.network.neutron [-] [instance: 20a008fa-d059-4906-ba1c-697755b1ba06] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:58:56 compute-0 nova_compute[250018]: 2026-01-20 14:58:56.036 250022 INFO nova.compute.manager [-] [instance: 20a008fa-d059-4906-ba1c-697755b1ba06] Took 0.92 seconds to deallocate network for instance.
Jan 20 14:58:56 compute-0 nova_compute[250018]: 2026-01-20 14:58:56.095 250022 DEBUG oslo_concurrency.lockutils [None req-eda00add-c545-41b3-b637-f013983a41c1 0362dea63d5a43778f8d4164a77cd3c6 bae96a4c20ce4f3a859ff518a8423db5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:58:56 compute-0 nova_compute[250018]: 2026-01-20 14:58:56.096 250022 DEBUG oslo_concurrency.lockutils [None req-eda00add-c545-41b3-b637-f013983a41c1 0362dea63d5a43778f8d4164a77cd3c6 bae96a4c20ce4f3a859ff518a8423db5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:58:56 compute-0 nova_compute[250018]: 2026-01-20 14:58:56.102 250022 DEBUG nova.compute.manager [req-4fa0e4cc-bc29-4624-9bd2-c22891f695a5 req-9be1b624-edbd-4f98-a555-3015a1563f39 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 20a008fa-d059-4906-ba1c-697755b1ba06] Received event network-vif-deleted-947d815e-382a-4899-acd3-af215e4fbab4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:58:56 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2206: 321 pgs: 321 active+clean; 304 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 29 KiB/s wr, 411 op/s
Jan 20 14:58:56 compute-0 nova_compute[250018]: 2026-01-20 14:58:56.152 250022 DEBUG oslo_concurrency.processutils [None req-eda00add-c545-41b3-b637-f013983a41c1 0362dea63d5a43778f8d4164a77cd3c6 bae96a4c20ce4f3a859ff518a8423db5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:58:56 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/1260322840' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:58:56 compute-0 nova_compute[250018]: 2026-01-20 14:58:56.194 250022 DEBUG nova.compute.manager [req-47264c49-11cf-48db-8f50-c25289e752d0 req-64086a7a-9b62-4eab-b0f5-c911f35c727e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 9a8ed269-8170-4a79-aa6d-75abae8562c3] Received event network-vif-plugged-05a3e492-ecc2-4c60-8410-6fde269e9285 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:58:56 compute-0 nova_compute[250018]: 2026-01-20 14:58:56.195 250022 DEBUG oslo_concurrency.lockutils [req-47264c49-11cf-48db-8f50-c25289e752d0 req-64086a7a-9b62-4eab-b0f5-c911f35c727e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "9a8ed269-8170-4a79-aa6d-75abae8562c3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:58:56 compute-0 nova_compute[250018]: 2026-01-20 14:58:56.195 250022 DEBUG oslo_concurrency.lockutils [req-47264c49-11cf-48db-8f50-c25289e752d0 req-64086a7a-9b62-4eab-b0f5-c911f35c727e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "9a8ed269-8170-4a79-aa6d-75abae8562c3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:58:56 compute-0 nova_compute[250018]: 2026-01-20 14:58:56.196 250022 DEBUG oslo_concurrency.lockutils [req-47264c49-11cf-48db-8f50-c25289e752d0 req-64086a7a-9b62-4eab-b0f5-c911f35c727e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "9a8ed269-8170-4a79-aa6d-75abae8562c3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:58:56 compute-0 nova_compute[250018]: 2026-01-20 14:58:56.196 250022 DEBUG nova.compute.manager [req-47264c49-11cf-48db-8f50-c25289e752d0 req-64086a7a-9b62-4eab-b0f5-c911f35c727e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 9a8ed269-8170-4a79-aa6d-75abae8562c3] No waiting events found dispatching network-vif-plugged-05a3e492-ecc2-4c60-8410-6fde269e9285 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:58:56 compute-0 nova_compute[250018]: 2026-01-20 14:58:56.196 250022 WARNING nova.compute.manager [req-47264c49-11cf-48db-8f50-c25289e752d0 req-64086a7a-9b62-4eab-b0f5-c911f35c727e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 9a8ed269-8170-4a79-aa6d-75abae8562c3] Received unexpected event network-vif-plugged-05a3e492-ecc2-4c60-8410-6fde269e9285 for instance with vm_state deleted and task_state None.
Jan 20 14:58:56 compute-0 nova_compute[250018]: 2026-01-20 14:58:56.197 250022 DEBUG nova.compute.manager [req-47264c49-11cf-48db-8f50-c25289e752d0 req-64086a7a-9b62-4eab-b0f5-c911f35c727e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 20a008fa-d059-4906-ba1c-697755b1ba06] Received event network-vif-unplugged-947d815e-382a-4899-acd3-af215e4fbab4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:58:56 compute-0 nova_compute[250018]: 2026-01-20 14:58:56.197 250022 DEBUG oslo_concurrency.lockutils [req-47264c49-11cf-48db-8f50-c25289e752d0 req-64086a7a-9b62-4eab-b0f5-c911f35c727e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "20a008fa-d059-4906-ba1c-697755b1ba06-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:58:56 compute-0 nova_compute[250018]: 2026-01-20 14:58:56.198 250022 DEBUG oslo_concurrency.lockutils [req-47264c49-11cf-48db-8f50-c25289e752d0 req-64086a7a-9b62-4eab-b0f5-c911f35c727e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "20a008fa-d059-4906-ba1c-697755b1ba06-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:58:56 compute-0 nova_compute[250018]: 2026-01-20 14:58:56.198 250022 DEBUG oslo_concurrency.lockutils [req-47264c49-11cf-48db-8f50-c25289e752d0 req-64086a7a-9b62-4eab-b0f5-c911f35c727e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "20a008fa-d059-4906-ba1c-697755b1ba06-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:58:56 compute-0 nova_compute[250018]: 2026-01-20 14:58:56.198 250022 DEBUG nova.compute.manager [req-47264c49-11cf-48db-8f50-c25289e752d0 req-64086a7a-9b62-4eab-b0f5-c911f35c727e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 20a008fa-d059-4906-ba1c-697755b1ba06] No waiting events found dispatching network-vif-unplugged-947d815e-382a-4899-acd3-af215e4fbab4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:58:56 compute-0 nova_compute[250018]: 2026-01-20 14:58:56.199 250022 WARNING nova.compute.manager [req-47264c49-11cf-48db-8f50-c25289e752d0 req-64086a7a-9b62-4eab-b0f5-c911f35c727e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 20a008fa-d059-4906-ba1c-697755b1ba06] Received unexpected event network-vif-unplugged-947d815e-382a-4899-acd3-af215e4fbab4 for instance with vm_state deleted and task_state None.
Jan 20 14:58:56 compute-0 nova_compute[250018]: 2026-01-20 14:58:56.199 250022 DEBUG nova.compute.manager [req-47264c49-11cf-48db-8f50-c25289e752d0 req-64086a7a-9b62-4eab-b0f5-c911f35c727e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 9a8ed269-8170-4a79-aa6d-75abae8562c3] Received event network-vif-deleted-05a3e492-ecc2-4c60-8410-6fde269e9285 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:58:56 compute-0 nova_compute[250018]: 2026-01-20 14:58:56.200 250022 DEBUG nova.compute.manager [req-47264c49-11cf-48db-8f50-c25289e752d0 req-64086a7a-9b62-4eab-b0f5-c911f35c727e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 20a008fa-d059-4906-ba1c-697755b1ba06] Received event network-vif-plugged-947d815e-382a-4899-acd3-af215e4fbab4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:58:56 compute-0 nova_compute[250018]: 2026-01-20 14:58:56.200 250022 DEBUG oslo_concurrency.lockutils [req-47264c49-11cf-48db-8f50-c25289e752d0 req-64086a7a-9b62-4eab-b0f5-c911f35c727e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "20a008fa-d059-4906-ba1c-697755b1ba06-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:58:56 compute-0 nova_compute[250018]: 2026-01-20 14:58:56.200 250022 DEBUG oslo_concurrency.lockutils [req-47264c49-11cf-48db-8f50-c25289e752d0 req-64086a7a-9b62-4eab-b0f5-c911f35c727e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "20a008fa-d059-4906-ba1c-697755b1ba06-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:58:56 compute-0 nova_compute[250018]: 2026-01-20 14:58:56.201 250022 DEBUG oslo_concurrency.lockutils [req-47264c49-11cf-48db-8f50-c25289e752d0 req-64086a7a-9b62-4eab-b0f5-c911f35c727e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "20a008fa-d059-4906-ba1c-697755b1ba06-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:58:56 compute-0 nova_compute[250018]: 2026-01-20 14:58:56.201 250022 DEBUG nova.compute.manager [req-47264c49-11cf-48db-8f50-c25289e752d0 req-64086a7a-9b62-4eab-b0f5-c911f35c727e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 20a008fa-d059-4906-ba1c-697755b1ba06] No waiting events found dispatching network-vif-plugged-947d815e-382a-4899-acd3-af215e4fbab4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:58:56 compute-0 nova_compute[250018]: 2026-01-20 14:58:56.201 250022 WARNING nova.compute.manager [req-47264c49-11cf-48db-8f50-c25289e752d0 req-64086a7a-9b62-4eab-b0f5-c911f35c727e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 20a008fa-d059-4906-ba1c-697755b1ba06] Received unexpected event network-vif-plugged-947d815e-382a-4899-acd3-af215e4fbab4 for instance with vm_state deleted and task_state None.
Jan 20 14:58:56 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:58:56 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3497436' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:58:56 compute-0 nova_compute[250018]: 2026-01-20 14:58:56.592 250022 DEBUG oslo_concurrency.processutils [None req-eda00add-c545-41b3-b637-f013983a41c1 0362dea63d5a43778f8d4164a77cd3c6 bae96a4c20ce4f3a859ff518a8423db5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:58:56 compute-0 nova_compute[250018]: 2026-01-20 14:58:56.598 250022 DEBUG nova.compute.provider_tree [None req-eda00add-c545-41b3-b637-f013983a41c1 0362dea63d5a43778f8d4164a77cd3c6 bae96a4c20ce4f3a859ff518a8423db5 - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:58:56 compute-0 nova_compute[250018]: 2026-01-20 14:58:56.620 250022 DEBUG nova.scheduler.client.report [None req-eda00add-c545-41b3-b637-f013983a41c1 0362dea63d5a43778f8d4164a77cd3c6 bae96a4c20ce4f3a859ff518a8423db5 - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:58:56 compute-0 nova_compute[250018]: 2026-01-20 14:58:56.649 250022 DEBUG oslo_concurrency.lockutils [None req-eda00add-c545-41b3-b637-f013983a41c1 0362dea63d5a43778f8d4164a77cd3c6 bae96a4c20ce4f3a859ff518a8423db5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.553s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:58:56 compute-0 nova_compute[250018]: 2026-01-20 14:58:56.678 250022 INFO nova.scheduler.client.report [None req-eda00add-c545-41b3-b637-f013983a41c1 0362dea63d5a43778f8d4164a77cd3c6 bae96a4c20ce4f3a859ff518a8423db5 - - default default] Deleted allocations for instance 20a008fa-d059-4906-ba1c-697755b1ba06
Jan 20 14:58:56 compute-0 nova_compute[250018]: 2026-01-20 14:58:56.743 250022 DEBUG oslo_concurrency.lockutils [None req-eda00add-c545-41b3-b637-f013983a41c1 0362dea63d5a43778f8d4164a77cd3c6 bae96a4c20ce4f3a859ff518a8423db5 - - default default] Lock "20a008fa-d059-4906-ba1c-697755b1ba06" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.376s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:58:57 compute-0 ceph-mon[74360]: pgmap v2206: 321 pgs: 321 active+clean; 304 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 29 KiB/s wr, 411 op/s
Jan 20 14:58:57 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3497436' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:58:57 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:58:57 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.002000052s ======
Jan 20 14:58:57 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:58:57.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000052s
Jan 20 14:58:57 compute-0 nova_compute[250018]: 2026-01-20 14:58:57.355 250022 DEBUG oslo_concurrency.lockutils [None req-76b3c0bf-955a-4fa9-b2e3-244f2f87dbbf d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] Acquiring lock "61ae2c61-01df-4ef1-8aa3-0527a43b1798" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:58:57 compute-0 nova_compute[250018]: 2026-01-20 14:58:57.355 250022 DEBUG oslo_concurrency.lockutils [None req-76b3c0bf-955a-4fa9-b2e3-244f2f87dbbf d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] Lock "61ae2c61-01df-4ef1-8aa3-0527a43b1798" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:58:57 compute-0 nova_compute[250018]: 2026-01-20 14:58:57.386 250022 DEBUG nova.compute.manager [None req-76b3c0bf-955a-4fa9-b2e3-244f2f87dbbf d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] [instance: 61ae2c61-01df-4ef1-8aa3-0527a43b1798] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 14:58:57 compute-0 nova_compute[250018]: 2026-01-20 14:58:57.477 250022 DEBUG oslo_concurrency.lockutils [None req-76b3c0bf-955a-4fa9-b2e3-244f2f87dbbf d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:58:57 compute-0 nova_compute[250018]: 2026-01-20 14:58:57.478 250022 DEBUG oslo_concurrency.lockutils [None req-76b3c0bf-955a-4fa9-b2e3-244f2f87dbbf d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:58:57 compute-0 nova_compute[250018]: 2026-01-20 14:58:57.486 250022 DEBUG nova.virt.hardware [None req-76b3c0bf-955a-4fa9-b2e3-244f2f87dbbf d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 14:58:57 compute-0 nova_compute[250018]: 2026-01-20 14:58:57.487 250022 INFO nova.compute.claims [None req-76b3c0bf-955a-4fa9-b2e3-244f2f87dbbf d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] [instance: 61ae2c61-01df-4ef1-8aa3-0527a43b1798] Claim successful on node compute-0.ctlplane.example.com
Jan 20 14:58:57 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:58:57 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:58:57 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:58:57.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:58:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 14:58:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:58:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 14:58:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:58:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:58:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:58:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:58:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:58:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:58:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:58:57 compute-0 nova_compute[250018]: 2026-01-20 14:58:57.658 250022 DEBUG oslo_concurrency.processutils [None req-76b3c0bf-955a-4fa9-b2e3-244f2f87dbbf d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:58:58 compute-0 nova_compute[250018]: 2026-01-20 14:58:58.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:58:58 compute-0 nova_compute[250018]: 2026-01-20 14:58:58.053 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 20 14:58:58 compute-0 nova_compute[250018]: 2026-01-20 14:58:58.073 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 20 14:58:58 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:58:58 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2355284715' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:58:58 compute-0 nova_compute[250018]: 2026-01-20 14:58:58.092 250022 DEBUG oslo_concurrency.processutils [None req-76b3c0bf-955a-4fa9-b2e3-244f2f87dbbf d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:58:58 compute-0 nova_compute[250018]: 2026-01-20 14:58:58.099 250022 DEBUG nova.compute.provider_tree [None req-76b3c0bf-955a-4fa9-b2e3-244f2f87dbbf d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:58:58 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2207: 321 pgs: 321 active+clean; 304 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 16 KiB/s wr, 378 op/s
Jan 20 14:58:58 compute-0 nova_compute[250018]: 2026-01-20 14:58:58.136 250022 DEBUG nova.scheduler.client.report [None req-76b3c0bf-955a-4fa9-b2e3-244f2f87dbbf d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:58:58 compute-0 nova_compute[250018]: 2026-01-20 14:58:58.161 250022 DEBUG oslo_concurrency.lockutils [None req-76b3c0bf-955a-4fa9-b2e3-244f2f87dbbf d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.683s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:58:58 compute-0 nova_compute[250018]: 2026-01-20 14:58:58.162 250022 DEBUG nova.compute.manager [None req-76b3c0bf-955a-4fa9-b2e3-244f2f87dbbf d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] [instance: 61ae2c61-01df-4ef1-8aa3-0527a43b1798] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 14:58:58 compute-0 nova_compute[250018]: 2026-01-20 14:58:58.211 250022 DEBUG nova.compute.manager [None req-76b3c0bf-955a-4fa9-b2e3-244f2f87dbbf d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] [instance: 61ae2c61-01df-4ef1-8aa3-0527a43b1798] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 20 14:58:58 compute-0 nova_compute[250018]: 2026-01-20 14:58:58.211 250022 DEBUG nova.network.neutron [None req-76b3c0bf-955a-4fa9-b2e3-244f2f87dbbf d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] [instance: 61ae2c61-01df-4ef1-8aa3-0527a43b1798] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 14:58:58 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/2355284715' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:58:58 compute-0 nova_compute[250018]: 2026-01-20 14:58:58.235 250022 INFO nova.virt.libvirt.driver [None req-76b3c0bf-955a-4fa9-b2e3-244f2f87dbbf d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] [instance: 61ae2c61-01df-4ef1-8aa3-0527a43b1798] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 14:58:58 compute-0 nova_compute[250018]: 2026-01-20 14:58:58.270 250022 DEBUG nova.compute.manager [None req-76b3c0bf-955a-4fa9-b2e3-244f2f87dbbf d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] [instance: 61ae2c61-01df-4ef1-8aa3-0527a43b1798] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 14:58:58 compute-0 nova_compute[250018]: 2026-01-20 14:58:58.367 250022 DEBUG nova.compute.manager [None req-76b3c0bf-955a-4fa9-b2e3-244f2f87dbbf d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] [instance: 61ae2c61-01df-4ef1-8aa3-0527a43b1798] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 14:58:58 compute-0 nova_compute[250018]: 2026-01-20 14:58:58.370 250022 DEBUG nova.virt.libvirt.driver [None req-76b3c0bf-955a-4fa9-b2e3-244f2f87dbbf d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] [instance: 61ae2c61-01df-4ef1-8aa3-0527a43b1798] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 14:58:58 compute-0 nova_compute[250018]: 2026-01-20 14:58:58.371 250022 INFO nova.virt.libvirt.driver [None req-76b3c0bf-955a-4fa9-b2e3-244f2f87dbbf d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] [instance: 61ae2c61-01df-4ef1-8aa3-0527a43b1798] Creating image(s)
Jan 20 14:58:58 compute-0 nova_compute[250018]: 2026-01-20 14:58:58.414 250022 DEBUG nova.storage.rbd_utils [None req-76b3c0bf-955a-4fa9-b2e3-244f2f87dbbf d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] rbd image 61ae2c61-01df-4ef1-8aa3-0527a43b1798_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:58:58 compute-0 nova_compute[250018]: 2026-01-20 14:58:58.455 250022 DEBUG nova.storage.rbd_utils [None req-76b3c0bf-955a-4fa9-b2e3-244f2f87dbbf d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] rbd image 61ae2c61-01df-4ef1-8aa3-0527a43b1798_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:58:58 compute-0 nova_compute[250018]: 2026-01-20 14:58:58.487 250022 DEBUG nova.storage.rbd_utils [None req-76b3c0bf-955a-4fa9-b2e3-244f2f87dbbf d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] rbd image 61ae2c61-01df-4ef1-8aa3-0527a43b1798_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:58:58 compute-0 nova_compute[250018]: 2026-01-20 14:58:58.490 250022 DEBUG oslo_concurrency.processutils [None req-76b3c0bf-955a-4fa9-b2e3-244f2f87dbbf d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:58:58 compute-0 nova_compute[250018]: 2026-01-20 14:58:58.524 250022 DEBUG nova.policy [None req-76b3c0bf-955a-4fa9-b2e3-244f2f87dbbf d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd77d3db3cf924683a608d10efefcd156', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '105e56abe3804424885c7aa8d1216d12', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 20 14:58:58 compute-0 nova_compute[250018]: 2026-01-20 14:58:58.570 250022 DEBUG oslo_concurrency.processutils [None req-76b3c0bf-955a-4fa9-b2e3-244f2f87dbbf d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:58:58 compute-0 nova_compute[250018]: 2026-01-20 14:58:58.571 250022 DEBUG oslo_concurrency.lockutils [None req-76b3c0bf-955a-4fa9-b2e3-244f2f87dbbf d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] Acquiring lock "82d5c1918fd7c974214c7a48c1793a7a82560462" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:58:58 compute-0 nova_compute[250018]: 2026-01-20 14:58:58.572 250022 DEBUG oslo_concurrency.lockutils [None req-76b3c0bf-955a-4fa9-b2e3-244f2f87dbbf d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:58:58 compute-0 nova_compute[250018]: 2026-01-20 14:58:58.572 250022 DEBUG oslo_concurrency.lockutils [None req-76b3c0bf-955a-4fa9-b2e3-244f2f87dbbf d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:58:58 compute-0 nova_compute[250018]: 2026-01-20 14:58:58.603 250022 DEBUG nova.storage.rbd_utils [None req-76b3c0bf-955a-4fa9-b2e3-244f2f87dbbf d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] rbd image 61ae2c61-01df-4ef1-8aa3-0527a43b1798_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:58:58 compute-0 nova_compute[250018]: 2026-01-20 14:58:58.607 250022 DEBUG oslo_concurrency.processutils [None req-76b3c0bf-955a-4fa9-b2e3-244f2f87dbbf d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 61ae2c61-01df-4ef1-8aa3-0527a43b1798_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:58:58 compute-0 nova_compute[250018]: 2026-01-20 14:58:58.636 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:58:58 compute-0 nova_compute[250018]: 2026-01-20 14:58:58.834 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:58:58 compute-0 nova_compute[250018]: 2026-01-20 14:58:58.996 250022 DEBUG oslo_concurrency.processutils [None req-76b3c0bf-955a-4fa9-b2e3-244f2f87dbbf d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 61ae2c61-01df-4ef1-8aa3-0527a43b1798_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.390s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:58:59 compute-0 nova_compute[250018]: 2026-01-20 14:58:59.075 250022 DEBUG nova.storage.rbd_utils [None req-76b3c0bf-955a-4fa9-b2e3-244f2f87dbbf d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] resizing rbd image 61ae2c61-01df-4ef1-8aa3-0527a43b1798_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 20 14:58:59 compute-0 nova_compute[250018]: 2026-01-20 14:58:59.184 250022 DEBUG nova.objects.instance [None req-76b3c0bf-955a-4fa9-b2e3-244f2f87dbbf d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] Lazy-loading 'migration_context' on Instance uuid 61ae2c61-01df-4ef1-8aa3-0527a43b1798 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:58:59 compute-0 nova_compute[250018]: 2026-01-20 14:58:59.198 250022 DEBUG nova.virt.libvirt.driver [None req-76b3c0bf-955a-4fa9-b2e3-244f2f87dbbf d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] [instance: 61ae2c61-01df-4ef1-8aa3-0527a43b1798] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 14:58:59 compute-0 nova_compute[250018]: 2026-01-20 14:58:59.199 250022 DEBUG nova.virt.libvirt.driver [None req-76b3c0bf-955a-4fa9-b2e3-244f2f87dbbf d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] [instance: 61ae2c61-01df-4ef1-8aa3-0527a43b1798] Ensure instance console log exists: /var/lib/nova/instances/61ae2c61-01df-4ef1-8aa3-0527a43b1798/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 14:58:59 compute-0 nova_compute[250018]: 2026-01-20 14:58:59.199 250022 DEBUG oslo_concurrency.lockutils [None req-76b3c0bf-955a-4fa9-b2e3-244f2f87dbbf d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:58:59 compute-0 nova_compute[250018]: 2026-01-20 14:58:59.200 250022 DEBUG oslo_concurrency.lockutils [None req-76b3c0bf-955a-4fa9-b2e3-244f2f87dbbf d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:58:59 compute-0 nova_compute[250018]: 2026-01-20 14:58:59.200 250022 DEBUG oslo_concurrency.lockutils [None req-76b3c0bf-955a-4fa9-b2e3-244f2f87dbbf d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:58:59 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:58:59 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:58:59 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:58:59.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:58:59 compute-0 ceph-mon[74360]: pgmap v2207: 321 pgs: 321 active+clean; 304 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 16 KiB/s wr, 378 op/s
Jan 20 14:58:59 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:58:59 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:58:59 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:58:59.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:58:59 compute-0 nova_compute[250018]: 2026-01-20 14:58:59.692 250022 DEBUG nova.network.neutron [None req-76b3c0bf-955a-4fa9-b2e3-244f2f87dbbf d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] [instance: 61ae2c61-01df-4ef1-8aa3-0527a43b1798] Successfully created port: ee1c78ce-d0fd-4b6b-8a7c-e3aff97e74d7 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 20 14:58:59 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e321 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:59:00 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2208: 321 pgs: 321 active+clean; 333 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 1007 KiB/s wr, 387 op/s
Jan 20 14:59:00 compute-0 nova_compute[250018]: 2026-01-20 14:59:00.906 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:59:01 compute-0 nova_compute[250018]: 2026-01-20 14:59:01.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:59:01 compute-0 nova_compute[250018]: 2026-01-20 14:59:01.167 250022 DEBUG nova.network.neutron [None req-76b3c0bf-955a-4fa9-b2e3-244f2f87dbbf d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] [instance: 61ae2c61-01df-4ef1-8aa3-0527a43b1798] Successfully updated port: ee1c78ce-d0fd-4b6b-8a7c-e3aff97e74d7 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 20 14:59:01 compute-0 nova_compute[250018]: 2026-01-20 14:59:01.199 250022 DEBUG oslo_concurrency.lockutils [None req-76b3c0bf-955a-4fa9-b2e3-244f2f87dbbf d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] Acquiring lock "refresh_cache-61ae2c61-01df-4ef1-8aa3-0527a43b1798" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:59:01 compute-0 nova_compute[250018]: 2026-01-20 14:59:01.199 250022 DEBUG oslo_concurrency.lockutils [None req-76b3c0bf-955a-4fa9-b2e3-244f2f87dbbf d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] Acquired lock "refresh_cache-61ae2c61-01df-4ef1-8aa3-0527a43b1798" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:59:01 compute-0 nova_compute[250018]: 2026-01-20 14:59:01.200 250022 DEBUG nova.network.neutron [None req-76b3c0bf-955a-4fa9-b2e3-244f2f87dbbf d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] [instance: 61ae2c61-01df-4ef1-8aa3-0527a43b1798] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 14:59:01 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:59:01 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:59:01 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:59:01.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:59:01 compute-0 ceph-mon[74360]: pgmap v2208: 321 pgs: 321 active+clean; 333 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 1007 KiB/s wr, 387 op/s
Jan 20 14:59:01 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/2943226124' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:59:01 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/2943226124' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:59:01 compute-0 nova_compute[250018]: 2026-01-20 14:59:01.378 250022 DEBUG nova.compute.manager [req-21de8da0-fdeb-4de3-87b7-280a7e090561 req-cea20daa-d74f-48be-b7f1-a80b414a344a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 61ae2c61-01df-4ef1-8aa3-0527a43b1798] Received event network-changed-ee1c78ce-d0fd-4b6b-8a7c-e3aff97e74d7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:59:01 compute-0 nova_compute[250018]: 2026-01-20 14:59:01.379 250022 DEBUG nova.compute.manager [req-21de8da0-fdeb-4de3-87b7-280a7e090561 req-cea20daa-d74f-48be-b7f1-a80b414a344a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 61ae2c61-01df-4ef1-8aa3-0527a43b1798] Refreshing instance network info cache due to event network-changed-ee1c78ce-d0fd-4b6b-8a7c-e3aff97e74d7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 14:59:01 compute-0 nova_compute[250018]: 2026-01-20 14:59:01.379 250022 DEBUG oslo_concurrency.lockutils [req-21de8da0-fdeb-4de3-87b7-280a7e090561 req-cea20daa-d74f-48be-b7f1-a80b414a344a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-61ae2c61-01df-4ef1-8aa3-0527a43b1798" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:59:01 compute-0 nova_compute[250018]: 2026-01-20 14:59:01.447 250022 DEBUG nova.network.neutron [None req-76b3c0bf-955a-4fa9-b2e3-244f2f87dbbf d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] [instance: 61ae2c61-01df-4ef1-8aa3-0527a43b1798] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 14:59:01 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:59:01 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:59:01 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:59:01.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:59:02 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2209: 321 pgs: 321 active+clean; 340 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 1.5 MiB/s wr, 302 op/s
Jan 20 14:59:02 compute-0 nova_compute[250018]: 2026-01-20 14:59:02.678 250022 DEBUG nova.network.neutron [None req-76b3c0bf-955a-4fa9-b2e3-244f2f87dbbf d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] [instance: 61ae2c61-01df-4ef1-8aa3-0527a43b1798] Updating instance_info_cache with network_info: [{"id": "ee1c78ce-d0fd-4b6b-8a7c-e3aff97e74d7", "address": "fa:16:3e:8c:76:cf", "network": {"id": "3aad5d71-9bbf-496d-805e-819d17c4343e", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1714826441-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "105e56abe3804424885c7aa8d1216d12", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee1c78ce-d0", "ovs_interfaceid": "ee1c78ce-d0fd-4b6b-8a7c-e3aff97e74d7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:59:03 compute-0 nova_compute[250018]: 2026-01-20 14:59:03.058 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:59:03 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:59:03 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:59:03 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:59:03.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:59:03 compute-0 nova_compute[250018]: 2026-01-20 14:59:03.261 250022 DEBUG oslo_concurrency.lockutils [None req-76b3c0bf-955a-4fa9-b2e3-244f2f87dbbf d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] Releasing lock "refresh_cache-61ae2c61-01df-4ef1-8aa3-0527a43b1798" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:59:03 compute-0 nova_compute[250018]: 2026-01-20 14:59:03.262 250022 DEBUG nova.compute.manager [None req-76b3c0bf-955a-4fa9-b2e3-244f2f87dbbf d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] [instance: 61ae2c61-01df-4ef1-8aa3-0527a43b1798] Instance network_info: |[{"id": "ee1c78ce-d0fd-4b6b-8a7c-e3aff97e74d7", "address": "fa:16:3e:8c:76:cf", "network": {"id": "3aad5d71-9bbf-496d-805e-819d17c4343e", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1714826441-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "105e56abe3804424885c7aa8d1216d12", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee1c78ce-d0", "ovs_interfaceid": "ee1c78ce-d0fd-4b6b-8a7c-e3aff97e74d7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 20 14:59:03 compute-0 nova_compute[250018]: 2026-01-20 14:59:03.262 250022 DEBUG oslo_concurrency.lockutils [req-21de8da0-fdeb-4de3-87b7-280a7e090561 req-cea20daa-d74f-48be-b7f1-a80b414a344a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-61ae2c61-01df-4ef1-8aa3-0527a43b1798" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:59:03 compute-0 nova_compute[250018]: 2026-01-20 14:59:03.263 250022 DEBUG nova.network.neutron [req-21de8da0-fdeb-4de3-87b7-280a7e090561 req-cea20daa-d74f-48be-b7f1-a80b414a344a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 61ae2c61-01df-4ef1-8aa3-0527a43b1798] Refreshing network info cache for port ee1c78ce-d0fd-4b6b-8a7c-e3aff97e74d7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 14:59:03 compute-0 nova_compute[250018]: 2026-01-20 14:59:03.267 250022 DEBUG nova.virt.libvirt.driver [None req-76b3c0bf-955a-4fa9-b2e3-244f2f87dbbf d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] [instance: 61ae2c61-01df-4ef1-8aa3-0527a43b1798] Start _get_guest_xml network_info=[{"id": "ee1c78ce-d0fd-4b6b-8a7c-e3aff97e74d7", "address": "fa:16:3e:8c:76:cf", "network": {"id": "3aad5d71-9bbf-496d-805e-819d17c4343e", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1714826441-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "105e56abe3804424885c7aa8d1216d12", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee1c78ce-d0", "ovs_interfaceid": "ee1c78ce-d0fd-4b6b-8a7c-e3aff97e74d7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encrypted': False, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'guest_format': None, 'size': 0, 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 14:59:03 compute-0 nova_compute[250018]: 2026-01-20 14:59:03.275 250022 WARNING nova.virt.libvirt.driver [None req-76b3c0bf-955a-4fa9-b2e3-244f2f87dbbf d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:59:03 compute-0 ceph-mon[74360]: pgmap v2209: 321 pgs: 321 active+clean; 340 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 1.5 MiB/s wr, 302 op/s
Jan 20 14:59:03 compute-0 nova_compute[250018]: 2026-01-20 14:59:03.282 250022 DEBUG nova.virt.libvirt.host [None req-76b3c0bf-955a-4fa9-b2e3-244f2f87dbbf d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 14:59:03 compute-0 nova_compute[250018]: 2026-01-20 14:59:03.283 250022 DEBUG nova.virt.libvirt.host [None req-76b3c0bf-955a-4fa9-b2e3-244f2f87dbbf d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 14:59:03 compute-0 nova_compute[250018]: 2026-01-20 14:59:03.290 250022 DEBUG nova.virt.libvirt.host [None req-76b3c0bf-955a-4fa9-b2e3-244f2f87dbbf d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 14:59:03 compute-0 nova_compute[250018]: 2026-01-20 14:59:03.291 250022 DEBUG nova.virt.libvirt.host [None req-76b3c0bf-955a-4fa9-b2e3-244f2f87dbbf d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 14:59:03 compute-0 nova_compute[250018]: 2026-01-20 14:59:03.293 250022 DEBUG nova.virt.libvirt.driver [None req-76b3c0bf-955a-4fa9-b2e3-244f2f87dbbf d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 14:59:03 compute-0 nova_compute[250018]: 2026-01-20 14:59:03.293 250022 DEBUG nova.virt.hardware [None req-76b3c0bf-955a-4fa9-b2e3-244f2f87dbbf d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 14:59:03 compute-0 nova_compute[250018]: 2026-01-20 14:59:03.293 250022 DEBUG nova.virt.hardware [None req-76b3c0bf-955a-4fa9-b2e3-244f2f87dbbf d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 14:59:03 compute-0 nova_compute[250018]: 2026-01-20 14:59:03.293 250022 DEBUG nova.virt.hardware [None req-76b3c0bf-955a-4fa9-b2e3-244f2f87dbbf d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 14:59:03 compute-0 nova_compute[250018]: 2026-01-20 14:59:03.293 250022 DEBUG nova.virt.hardware [None req-76b3c0bf-955a-4fa9-b2e3-244f2f87dbbf d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 14:59:03 compute-0 nova_compute[250018]: 2026-01-20 14:59:03.294 250022 DEBUG nova.virt.hardware [None req-76b3c0bf-955a-4fa9-b2e3-244f2f87dbbf d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 14:59:03 compute-0 nova_compute[250018]: 2026-01-20 14:59:03.294 250022 DEBUG nova.virt.hardware [None req-76b3c0bf-955a-4fa9-b2e3-244f2f87dbbf d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 14:59:03 compute-0 nova_compute[250018]: 2026-01-20 14:59:03.294 250022 DEBUG nova.virt.hardware [None req-76b3c0bf-955a-4fa9-b2e3-244f2f87dbbf d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 14:59:03 compute-0 nova_compute[250018]: 2026-01-20 14:59:03.294 250022 DEBUG nova.virt.hardware [None req-76b3c0bf-955a-4fa9-b2e3-244f2f87dbbf d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 14:59:03 compute-0 nova_compute[250018]: 2026-01-20 14:59:03.294 250022 DEBUG nova.virt.hardware [None req-76b3c0bf-955a-4fa9-b2e3-244f2f87dbbf d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 14:59:03 compute-0 nova_compute[250018]: 2026-01-20 14:59:03.294 250022 DEBUG nova.virt.hardware [None req-76b3c0bf-955a-4fa9-b2e3-244f2f87dbbf d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 14:59:03 compute-0 nova_compute[250018]: 2026-01-20 14:59:03.295 250022 DEBUG nova.virt.hardware [None req-76b3c0bf-955a-4fa9-b2e3-244f2f87dbbf d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 14:59:03 compute-0 nova_compute[250018]: 2026-01-20 14:59:03.297 250022 DEBUG oslo_concurrency.processutils [None req-76b3c0bf-955a-4fa9-b2e3-244f2f87dbbf d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:59:03 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:59:03 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:59:03 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:59:03.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:59:03 compute-0 nova_compute[250018]: 2026-01-20 14:59:03.622 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:59:03 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:59:03 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/626265555' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:59:03 compute-0 nova_compute[250018]: 2026-01-20 14:59:03.752 250022 DEBUG oslo_concurrency.processutils [None req-76b3c0bf-955a-4fa9-b2e3-244f2f87dbbf d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:59:03 compute-0 nova_compute[250018]: 2026-01-20 14:59:03.779 250022 DEBUG nova.storage.rbd_utils [None req-76b3c0bf-955a-4fa9-b2e3-244f2f87dbbf d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] rbd image 61ae2c61-01df-4ef1-8aa3-0527a43b1798_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:59:03 compute-0 nova_compute[250018]: 2026-01-20 14:59:03.783 250022 DEBUG oslo_concurrency.processutils [None req-76b3c0bf-955a-4fa9-b2e3-244f2f87dbbf d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:59:03 compute-0 nova_compute[250018]: 2026-01-20 14:59:03.836 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:59:04 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2210: 321 pgs: 321 active+clean; 348 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 505 KiB/s rd, 2.1 MiB/s wr, 266 op/s
Jan 20 14:59:04 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 14:59:04 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1110536162' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:59:04 compute-0 nova_compute[250018]: 2026-01-20 14:59:04.595 250022 DEBUG oslo_concurrency.processutils [None req-76b3c0bf-955a-4fa9-b2e3-244f2f87dbbf d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.812s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:59:04 compute-0 nova_compute[250018]: 2026-01-20 14:59:04.597 250022 DEBUG nova.virt.libvirt.vif [None req-76b3c0bf-955a-4fa9-b2e3-244f2f87dbbf d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:58:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-1098898119',display_name='tempest-ServersNegativeTestJSON-server-1098898119',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-1098898119',id=143,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='105e56abe3804424885c7aa8d1216d12',ramdisk_id='',reservation_id='r-yf5960ix',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestJSON-1233513591',owner_user_name='tempest-ServersNegati
veTestJSON-1233513591-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:58:58Z,user_data=None,user_id='d77d3db3cf924683a608d10efefcd156',uuid=61ae2c61-01df-4ef1-8aa3-0527a43b1798,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ee1c78ce-d0fd-4b6b-8a7c-e3aff97e74d7", "address": "fa:16:3e:8c:76:cf", "network": {"id": "3aad5d71-9bbf-496d-805e-819d17c4343e", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1714826441-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "105e56abe3804424885c7aa8d1216d12", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee1c78ce-d0", "ovs_interfaceid": "ee1c78ce-d0fd-4b6b-8a7c-e3aff97e74d7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 14:59:04 compute-0 nova_compute[250018]: 2026-01-20 14:59:04.597 250022 DEBUG nova.network.os_vif_util [None req-76b3c0bf-955a-4fa9-b2e3-244f2f87dbbf d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] Converting VIF {"id": "ee1c78ce-d0fd-4b6b-8a7c-e3aff97e74d7", "address": "fa:16:3e:8c:76:cf", "network": {"id": "3aad5d71-9bbf-496d-805e-819d17c4343e", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1714826441-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "105e56abe3804424885c7aa8d1216d12", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee1c78ce-d0", "ovs_interfaceid": "ee1c78ce-d0fd-4b6b-8a7c-e3aff97e74d7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:59:04 compute-0 nova_compute[250018]: 2026-01-20 14:59:04.598 250022 DEBUG nova.network.os_vif_util [None req-76b3c0bf-955a-4fa9-b2e3-244f2f87dbbf d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8c:76:cf,bridge_name='br-int',has_traffic_filtering=True,id=ee1c78ce-d0fd-4b6b-8a7c-e3aff97e74d7,network=Network(3aad5d71-9bbf-496d-805e-819d17c4343e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee1c78ce-d0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:59:04 compute-0 nova_compute[250018]: 2026-01-20 14:59:04.599 250022 DEBUG nova.objects.instance [None req-76b3c0bf-955a-4fa9-b2e3-244f2f87dbbf d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] Lazy-loading 'pci_devices' on Instance uuid 61ae2c61-01df-4ef1-8aa3-0527a43b1798 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:59:04 compute-0 nova_compute[250018]: 2026-01-20 14:59:04.618 250022 DEBUG nova.virt.libvirt.driver [None req-76b3c0bf-955a-4fa9-b2e3-244f2f87dbbf d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] [instance: 61ae2c61-01df-4ef1-8aa3-0527a43b1798] End _get_guest_xml xml=<domain type="kvm">
Jan 20 14:59:04 compute-0 nova_compute[250018]:   <uuid>61ae2c61-01df-4ef1-8aa3-0527a43b1798</uuid>
Jan 20 14:59:04 compute-0 nova_compute[250018]:   <name>instance-0000008f</name>
Jan 20 14:59:04 compute-0 nova_compute[250018]:   <memory>131072</memory>
Jan 20 14:59:04 compute-0 nova_compute[250018]:   <vcpu>1</vcpu>
Jan 20 14:59:04 compute-0 nova_compute[250018]:   <metadata>
Jan 20 14:59:04 compute-0 nova_compute[250018]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 14:59:04 compute-0 nova_compute[250018]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 14:59:04 compute-0 nova_compute[250018]:       <nova:name>tempest-ServersNegativeTestJSON-server-1098898119</nova:name>
Jan 20 14:59:04 compute-0 nova_compute[250018]:       <nova:creationTime>2026-01-20 14:59:03</nova:creationTime>
Jan 20 14:59:04 compute-0 nova_compute[250018]:       <nova:flavor name="m1.nano">
Jan 20 14:59:04 compute-0 nova_compute[250018]:         <nova:memory>128</nova:memory>
Jan 20 14:59:04 compute-0 nova_compute[250018]:         <nova:disk>1</nova:disk>
Jan 20 14:59:04 compute-0 nova_compute[250018]:         <nova:swap>0</nova:swap>
Jan 20 14:59:04 compute-0 nova_compute[250018]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 14:59:04 compute-0 nova_compute[250018]:         <nova:vcpus>1</nova:vcpus>
Jan 20 14:59:04 compute-0 nova_compute[250018]:       </nova:flavor>
Jan 20 14:59:04 compute-0 nova_compute[250018]:       <nova:owner>
Jan 20 14:59:04 compute-0 nova_compute[250018]:         <nova:user uuid="d77d3db3cf924683a608d10efefcd156">tempest-ServersNegativeTestJSON-1233513591-project-member</nova:user>
Jan 20 14:59:04 compute-0 nova_compute[250018]:         <nova:project uuid="105e56abe3804424885c7aa8d1216d12">tempest-ServersNegativeTestJSON-1233513591</nova:project>
Jan 20 14:59:04 compute-0 nova_compute[250018]:       </nova:owner>
Jan 20 14:59:04 compute-0 nova_compute[250018]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 14:59:04 compute-0 nova_compute[250018]:       <nova:ports>
Jan 20 14:59:04 compute-0 nova_compute[250018]:         <nova:port uuid="ee1c78ce-d0fd-4b6b-8a7c-e3aff97e74d7">
Jan 20 14:59:04 compute-0 nova_compute[250018]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Jan 20 14:59:04 compute-0 nova_compute[250018]:         </nova:port>
Jan 20 14:59:04 compute-0 nova_compute[250018]:       </nova:ports>
Jan 20 14:59:04 compute-0 nova_compute[250018]:     </nova:instance>
Jan 20 14:59:04 compute-0 nova_compute[250018]:   </metadata>
Jan 20 14:59:04 compute-0 nova_compute[250018]:   <sysinfo type="smbios">
Jan 20 14:59:04 compute-0 nova_compute[250018]:     <system>
Jan 20 14:59:04 compute-0 nova_compute[250018]:       <entry name="manufacturer">RDO</entry>
Jan 20 14:59:04 compute-0 nova_compute[250018]:       <entry name="product">OpenStack Compute</entry>
Jan 20 14:59:04 compute-0 nova_compute[250018]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 14:59:04 compute-0 nova_compute[250018]:       <entry name="serial">61ae2c61-01df-4ef1-8aa3-0527a43b1798</entry>
Jan 20 14:59:04 compute-0 nova_compute[250018]:       <entry name="uuid">61ae2c61-01df-4ef1-8aa3-0527a43b1798</entry>
Jan 20 14:59:04 compute-0 nova_compute[250018]:       <entry name="family">Virtual Machine</entry>
Jan 20 14:59:04 compute-0 nova_compute[250018]:     </system>
Jan 20 14:59:04 compute-0 nova_compute[250018]:   </sysinfo>
Jan 20 14:59:04 compute-0 nova_compute[250018]:   <os>
Jan 20 14:59:04 compute-0 nova_compute[250018]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 14:59:04 compute-0 nova_compute[250018]:     <boot dev="hd"/>
Jan 20 14:59:04 compute-0 nova_compute[250018]:     <smbios mode="sysinfo"/>
Jan 20 14:59:04 compute-0 nova_compute[250018]:   </os>
Jan 20 14:59:04 compute-0 nova_compute[250018]:   <features>
Jan 20 14:59:04 compute-0 nova_compute[250018]:     <acpi/>
Jan 20 14:59:04 compute-0 nova_compute[250018]:     <apic/>
Jan 20 14:59:04 compute-0 nova_compute[250018]:     <vmcoreinfo/>
Jan 20 14:59:04 compute-0 nova_compute[250018]:   </features>
Jan 20 14:59:04 compute-0 nova_compute[250018]:   <clock offset="utc">
Jan 20 14:59:04 compute-0 nova_compute[250018]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 14:59:04 compute-0 nova_compute[250018]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 14:59:04 compute-0 nova_compute[250018]:     <timer name="hpet" present="no"/>
Jan 20 14:59:04 compute-0 nova_compute[250018]:   </clock>
Jan 20 14:59:04 compute-0 nova_compute[250018]:   <cpu mode="custom" match="exact">
Jan 20 14:59:04 compute-0 nova_compute[250018]:     <model>Nehalem</model>
Jan 20 14:59:04 compute-0 nova_compute[250018]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 14:59:04 compute-0 nova_compute[250018]:   </cpu>
Jan 20 14:59:04 compute-0 nova_compute[250018]:   <devices>
Jan 20 14:59:04 compute-0 nova_compute[250018]:     <disk type="network" device="disk">
Jan 20 14:59:04 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 14:59:04 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/61ae2c61-01df-4ef1-8aa3-0527a43b1798_disk">
Jan 20 14:59:04 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:59:04 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:59:04 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:59:04 compute-0 nova_compute[250018]:       </source>
Jan 20 14:59:04 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 14:59:04 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:59:04 compute-0 nova_compute[250018]:       </auth>
Jan 20 14:59:04 compute-0 nova_compute[250018]:       <target dev="vda" bus="virtio"/>
Jan 20 14:59:04 compute-0 nova_compute[250018]:     </disk>
Jan 20 14:59:04 compute-0 nova_compute[250018]:     <disk type="network" device="cdrom">
Jan 20 14:59:04 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 14:59:04 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/61ae2c61-01df-4ef1-8aa3-0527a43b1798_disk.config">
Jan 20 14:59:04 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 14:59:04 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 14:59:04 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 14:59:04 compute-0 nova_compute[250018]:       </source>
Jan 20 14:59:04 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 14:59:04 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 14:59:04 compute-0 nova_compute[250018]:       </auth>
Jan 20 14:59:04 compute-0 nova_compute[250018]:       <target dev="sda" bus="sata"/>
Jan 20 14:59:04 compute-0 nova_compute[250018]:     </disk>
Jan 20 14:59:04 compute-0 nova_compute[250018]:     <interface type="ethernet">
Jan 20 14:59:04 compute-0 nova_compute[250018]:       <mac address="fa:16:3e:8c:76:cf"/>
Jan 20 14:59:04 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 14:59:04 compute-0 nova_compute[250018]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 14:59:04 compute-0 nova_compute[250018]:       <mtu size="1442"/>
Jan 20 14:59:04 compute-0 nova_compute[250018]:       <target dev="tapee1c78ce-d0"/>
Jan 20 14:59:04 compute-0 nova_compute[250018]:     </interface>
Jan 20 14:59:04 compute-0 nova_compute[250018]:     <serial type="pty">
Jan 20 14:59:04 compute-0 nova_compute[250018]:       <log file="/var/lib/nova/instances/61ae2c61-01df-4ef1-8aa3-0527a43b1798/console.log" append="off"/>
Jan 20 14:59:04 compute-0 nova_compute[250018]:     </serial>
Jan 20 14:59:04 compute-0 nova_compute[250018]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 14:59:04 compute-0 nova_compute[250018]:     <video>
Jan 20 14:59:04 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 14:59:04 compute-0 nova_compute[250018]:     </video>
Jan 20 14:59:04 compute-0 nova_compute[250018]:     <input type="tablet" bus="usb"/>
Jan 20 14:59:04 compute-0 nova_compute[250018]:     <rng model="virtio">
Jan 20 14:59:04 compute-0 nova_compute[250018]:       <backend model="random">/dev/urandom</backend>
Jan 20 14:59:04 compute-0 nova_compute[250018]:     </rng>
Jan 20 14:59:04 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root"/>
Jan 20 14:59:04 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:59:04 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:59:04 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:59:04 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:59:04 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:59:04 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:59:04 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:59:04 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:59:04 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:59:04 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:59:04 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:59:04 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:59:04 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:59:04 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:59:04 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:59:04 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:59:04 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:59:04 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:59:04 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:59:04 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:59:04 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:59:04 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:59:04 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:59:04 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 14:59:04 compute-0 nova_compute[250018]:     <controller type="usb" index="0"/>
Jan 20 14:59:04 compute-0 nova_compute[250018]:     <memballoon model="virtio">
Jan 20 14:59:04 compute-0 nova_compute[250018]:       <stats period="10"/>
Jan 20 14:59:04 compute-0 nova_compute[250018]:     </memballoon>
Jan 20 14:59:04 compute-0 nova_compute[250018]:   </devices>
Jan 20 14:59:04 compute-0 nova_compute[250018]: </domain>
Jan 20 14:59:04 compute-0 nova_compute[250018]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 14:59:04 compute-0 nova_compute[250018]: 2026-01-20 14:59:04.619 250022 DEBUG nova.compute.manager [None req-76b3c0bf-955a-4fa9-b2e3-244f2f87dbbf d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] [instance: 61ae2c61-01df-4ef1-8aa3-0527a43b1798] Preparing to wait for external event network-vif-plugged-ee1c78ce-d0fd-4b6b-8a7c-e3aff97e74d7 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 20 14:59:04 compute-0 nova_compute[250018]: 2026-01-20 14:59:04.620 250022 DEBUG oslo_concurrency.lockutils [None req-76b3c0bf-955a-4fa9-b2e3-244f2f87dbbf d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] Acquiring lock "61ae2c61-01df-4ef1-8aa3-0527a43b1798-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:59:04 compute-0 nova_compute[250018]: 2026-01-20 14:59:04.620 250022 DEBUG oslo_concurrency.lockutils [None req-76b3c0bf-955a-4fa9-b2e3-244f2f87dbbf d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] Lock "61ae2c61-01df-4ef1-8aa3-0527a43b1798-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:59:04 compute-0 nova_compute[250018]: 2026-01-20 14:59:04.621 250022 DEBUG oslo_concurrency.lockutils [None req-76b3c0bf-955a-4fa9-b2e3-244f2f87dbbf d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] Lock "61ae2c61-01df-4ef1-8aa3-0527a43b1798-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:59:04 compute-0 nova_compute[250018]: 2026-01-20 14:59:04.622 250022 DEBUG nova.virt.libvirt.vif [None req-76b3c0bf-955a-4fa9-b2e3-244f2f87dbbf d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T14:58:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-1098898119',display_name='tempest-ServersNegativeTestJSON-server-1098898119',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-1098898119',id=143,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='105e56abe3804424885c7aa8d1216d12',ramdisk_id='',reservation_id='r-yf5960ix',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestJSON-1233513591',owner_user_name='tempest-ServersNegativeTestJSON-1233513591-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T14:58:58Z,user_data=None,user_id='d77d3db3cf924683a608d10efefcd156',uuid=61ae2c61-01df-4ef1-8aa3-0527a43b1798,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ee1c78ce-d0fd-4b6b-8a7c-e3aff97e74d7", "address": "fa:16:3e:8c:76:cf", "network": {"id": "3aad5d71-9bbf-496d-805e-819d17c4343e", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1714826441-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "105e56abe3804424885c7aa8d1216d12", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee1c78ce-d0", "ovs_interfaceid": "ee1c78ce-d0fd-4b6b-8a7c-e3aff97e74d7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 14:59:04 compute-0 nova_compute[250018]: 2026-01-20 14:59:04.622 250022 DEBUG nova.network.os_vif_util [None req-76b3c0bf-955a-4fa9-b2e3-244f2f87dbbf d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] Converting VIF {"id": "ee1c78ce-d0fd-4b6b-8a7c-e3aff97e74d7", "address": "fa:16:3e:8c:76:cf", "network": {"id": "3aad5d71-9bbf-496d-805e-819d17c4343e", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1714826441-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "105e56abe3804424885c7aa8d1216d12", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee1c78ce-d0", "ovs_interfaceid": "ee1c78ce-d0fd-4b6b-8a7c-e3aff97e74d7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 14:59:04 compute-0 nova_compute[250018]: 2026-01-20 14:59:04.623 250022 DEBUG nova.network.os_vif_util [None req-76b3c0bf-955a-4fa9-b2e3-244f2f87dbbf d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8c:76:cf,bridge_name='br-int',has_traffic_filtering=True,id=ee1c78ce-d0fd-4b6b-8a7c-e3aff97e74d7,network=Network(3aad5d71-9bbf-496d-805e-819d17c4343e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee1c78ce-d0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 14:59:04 compute-0 nova_compute[250018]: 2026-01-20 14:59:04.624 250022 DEBUG os_vif [None req-76b3c0bf-955a-4fa9-b2e3-244f2f87dbbf d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8c:76:cf,bridge_name='br-int',has_traffic_filtering=True,id=ee1c78ce-d0fd-4b6b-8a7c-e3aff97e74d7,network=Network(3aad5d71-9bbf-496d-805e-819d17c4343e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee1c78ce-d0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 14:59:04 compute-0 nova_compute[250018]: 2026-01-20 14:59:04.624 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:59:04 compute-0 nova_compute[250018]: 2026-01-20 14:59:04.625 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:59:04 compute-0 nova_compute[250018]: 2026-01-20 14:59:04.625 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:59:04 compute-0 nova_compute[250018]: 2026-01-20 14:59:04.629 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:59:04 compute-0 nova_compute[250018]: 2026-01-20 14:59:04.629 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapee1c78ce-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:59:04 compute-0 nova_compute[250018]: 2026-01-20 14:59:04.630 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapee1c78ce-d0, col_values=(('external_ids', {'iface-id': 'ee1c78ce-d0fd-4b6b-8a7c-e3aff97e74d7', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8c:76:cf', 'vm-uuid': '61ae2c61-01df-4ef1-8aa3-0527a43b1798'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:59:04 compute-0 nova_compute[250018]: 2026-01-20 14:59:04.652 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:59:04 compute-0 NetworkManager[48960]: <info>  [1768921144.6531] manager: (tapee1c78ce-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/259)
Jan 20 14:59:04 compute-0 nova_compute[250018]: 2026-01-20 14:59:04.656 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 14:59:04 compute-0 nova_compute[250018]: 2026-01-20 14:59:04.658 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:59:04 compute-0 nova_compute[250018]: 2026-01-20 14:59:04.659 250022 INFO os_vif [None req-76b3c0bf-955a-4fa9-b2e3-244f2f87dbbf d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8c:76:cf,bridge_name='br-int',has_traffic_filtering=True,id=ee1c78ce-d0fd-4b6b-8a7c-e3aff97e74d7,network=Network(3aad5d71-9bbf-496d-805e-819d17c4343e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee1c78ce-d0')
Jan 20 14:59:04 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/626265555' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:59:04 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e321 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:59:05 compute-0 nova_compute[250018]: 2026-01-20 14:59:05.005 250022 DEBUG nova.virt.libvirt.driver [None req-76b3c0bf-955a-4fa9-b2e3-244f2f87dbbf d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:59:05 compute-0 nova_compute[250018]: 2026-01-20 14:59:05.005 250022 DEBUG nova.virt.libvirt.driver [None req-76b3c0bf-955a-4fa9-b2e3-244f2f87dbbf d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 14:59:05 compute-0 nova_compute[250018]: 2026-01-20 14:59:05.005 250022 DEBUG nova.virt.libvirt.driver [None req-76b3c0bf-955a-4fa9-b2e3-244f2f87dbbf d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] No VIF found with MAC fa:16:3e:8c:76:cf, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 14:59:05 compute-0 nova_compute[250018]: 2026-01-20 14:59:05.006 250022 INFO nova.virt.libvirt.driver [None req-76b3c0bf-955a-4fa9-b2e3-244f2f87dbbf d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] [instance: 61ae2c61-01df-4ef1-8aa3-0527a43b1798] Using config drive
Jan 20 14:59:05 compute-0 nova_compute[250018]: 2026-01-20 14:59:05.033 250022 DEBUG nova.storage.rbd_utils [None req-76b3c0bf-955a-4fa9-b2e3-244f2f87dbbf d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] rbd image 61ae2c61-01df-4ef1-8aa3-0527a43b1798_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:59:05 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:59:05 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:59:05 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:59:05.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:59:05 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:59:05 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:59:05 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:59:05.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:59:05 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 20 14:59:05 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1895765005' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:59:05 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 20 14:59:05 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1895765005' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:59:05 compute-0 nova_compute[250018]: 2026-01-20 14:59:05.789 250022 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768921130.7887268, 9a8ed269-8170-4a79-aa6d-75abae8562c3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:59:05 compute-0 nova_compute[250018]: 2026-01-20 14:59:05.790 250022 INFO nova.compute.manager [-] [instance: 9a8ed269-8170-4a79-aa6d-75abae8562c3] VM Stopped (Lifecycle Event)
Jan 20 14:59:05 compute-0 nova_compute[250018]: 2026-01-20 14:59:05.845 250022 DEBUG nova.compute.manager [None req-d20aac5f-34fa-4368-b469-7bb9c991ca2f - - - - - -] [instance: 9a8ed269-8170-4a79-aa6d-75abae8562c3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:59:06 compute-0 nova_compute[250018]: 2026-01-20 14:59:06.112 250022 INFO nova.virt.libvirt.driver [None req-76b3c0bf-955a-4fa9-b2e3-244f2f87dbbf d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] [instance: 61ae2c61-01df-4ef1-8aa3-0527a43b1798] Creating config drive at /var/lib/nova/instances/61ae2c61-01df-4ef1-8aa3-0527a43b1798/disk.config
Jan 20 14:59:06 compute-0 nova_compute[250018]: 2026-01-20 14:59:06.120 250022 DEBUG oslo_concurrency.processutils [None req-76b3c0bf-955a-4fa9-b2e3-244f2f87dbbf d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/61ae2c61-01df-4ef1-8aa3-0527a43b1798/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphsjddkfg execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:59:06 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2211: 321 pgs: 321 active+clean; 348 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 134 KiB/s rd, 2.1 MiB/s wr, 199 op/s
Jan 20 14:59:06 compute-0 ceph-mon[74360]: pgmap v2210: 321 pgs: 321 active+clean; 348 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 505 KiB/s rd, 2.1 MiB/s wr, 266 op/s
Jan 20 14:59:06 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/1110536162' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:59:06 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/1895765005' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:59:06 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/1895765005' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:59:06 compute-0 nova_compute[250018]: 2026-01-20 14:59:06.276 250022 DEBUG oslo_concurrency.processutils [None req-76b3c0bf-955a-4fa9-b2e3-244f2f87dbbf d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/61ae2c61-01df-4ef1-8aa3-0527a43b1798/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphsjddkfg" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:59:06 compute-0 nova_compute[250018]: 2026-01-20 14:59:06.308 250022 DEBUG nova.storage.rbd_utils [None req-76b3c0bf-955a-4fa9-b2e3-244f2f87dbbf d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] rbd image 61ae2c61-01df-4ef1-8aa3-0527a43b1798_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 14:59:06 compute-0 nova_compute[250018]: 2026-01-20 14:59:06.314 250022 DEBUG oslo_concurrency.processutils [None req-76b3c0bf-955a-4fa9-b2e3-244f2f87dbbf d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/61ae2c61-01df-4ef1-8aa3-0527a43b1798/disk.config 61ae2c61-01df-4ef1-8aa3-0527a43b1798_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:59:06 compute-0 nova_compute[250018]: 2026-01-20 14:59:06.516 250022 DEBUG oslo_concurrency.processutils [None req-76b3c0bf-955a-4fa9-b2e3-244f2f87dbbf d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/61ae2c61-01df-4ef1-8aa3-0527a43b1798/disk.config 61ae2c61-01df-4ef1-8aa3-0527a43b1798_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.202s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:59:06 compute-0 nova_compute[250018]: 2026-01-20 14:59:06.517 250022 INFO nova.virt.libvirt.driver [None req-76b3c0bf-955a-4fa9-b2e3-244f2f87dbbf d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] [instance: 61ae2c61-01df-4ef1-8aa3-0527a43b1798] Deleting local config drive /var/lib/nova/instances/61ae2c61-01df-4ef1-8aa3-0527a43b1798/disk.config because it was imported into RBD.
Jan 20 14:59:06 compute-0 kernel: tapee1c78ce-d0: entered promiscuous mode
Jan 20 14:59:06 compute-0 NetworkManager[48960]: <info>  [1768921146.5578] manager: (tapee1c78ce-d0): new Tun device (/org/freedesktop/NetworkManager/Devices/260)
Jan 20 14:59:06 compute-0 nova_compute[250018]: 2026-01-20 14:59:06.558 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:59:06 compute-0 ovn_controller[148666]: 2026-01-20T14:59:06Z|00528|binding|INFO|Claiming lport ee1c78ce-d0fd-4b6b-8a7c-e3aff97e74d7 for this chassis.
Jan 20 14:59:06 compute-0 ovn_controller[148666]: 2026-01-20T14:59:06Z|00529|binding|INFO|ee1c78ce-d0fd-4b6b-8a7c-e3aff97e74d7: Claiming fa:16:3e:8c:76:cf 10.100.0.3
Jan 20 14:59:06 compute-0 nova_compute[250018]: 2026-01-20 14:59:06.562 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:59:06 compute-0 nova_compute[250018]: 2026-01-20 14:59:06.565 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:59:06 compute-0 systemd-udevd[335353]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:59:06 compute-0 systemd-machined[216401]: New machine qemu-68-instance-0000008f.
Jan 20 14:59:06 compute-0 NetworkManager[48960]: <info>  [1768921146.6017] device (tapee1c78ce-d0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 14:59:06 compute-0 NetworkManager[48960]: <info>  [1768921146.6025] device (tapee1c78ce-d0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 14:59:06 compute-0 systemd[1]: Started Virtual Machine qemu-68-instance-0000008f.
Jan 20 14:59:06 compute-0 nova_compute[250018]: 2026-01-20 14:59:06.633 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:59:06 compute-0 ovn_controller[148666]: 2026-01-20T14:59:06Z|00530|binding|INFO|Setting lport ee1c78ce-d0fd-4b6b-8a7c-e3aff97e74d7 ovn-installed in OVS
Jan 20 14:59:06 compute-0 nova_compute[250018]: 2026-01-20 14:59:06.638 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:59:06 compute-0 ovn_controller[148666]: 2026-01-20T14:59:06Z|00531|binding|INFO|Setting lport ee1c78ce-d0fd-4b6b-8a7c-e3aff97e74d7 up in Southbound
Jan 20 14:59:06 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:59:06.964 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8c:76:cf 10.100.0.3'], port_security=['fa:16:3e:8c:76:cf 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '61ae2c61-01df-4ef1-8aa3-0527a43b1798', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3aad5d71-9bbf-496d-805e-819d17c4343e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '105e56abe3804424885c7aa8d1216d12', 'neutron:revision_number': '2', 'neutron:security_group_ids': '5c26cf5d-4215-4bd2-8a4b-3ad6a5f65504', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=298e3802-e88f-473c-a925-fb8c9f7cfd27, chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=ee1c78ce-d0fd-4b6b-8a7c-e3aff97e74d7) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:59:06 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:59:06.965 160071 INFO neutron.agent.ovn.metadata.agent [-] Port ee1c78ce-d0fd-4b6b-8a7c-e3aff97e74d7 in datapath 3aad5d71-9bbf-496d-805e-819d17c4343e bound to our chassis
Jan 20 14:59:06 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:59:06.966 160071 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3aad5d71-9bbf-496d-805e-819d17c4343e
Jan 20 14:59:06 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:59:06.977 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[ad23f75f-9187-407c-a1d3-f61c3c7eb15d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:59:06 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:59:06.978 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3aad5d71-91 in ovnmeta-3aad5d71-9bbf-496d-805e-819d17c4343e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 14:59:06 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:59:06.981 257604 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3aad5d71-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 14:59:06 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:59:06.981 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[f030b0f5-6c67-4c91-bd3b-bf9fe1667a87]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:59:06 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:59:06.982 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[229416d9-483b-487e-b53c-444eb3f1fec5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:59:06 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:59:06.992 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[cc3f64ae-79a2-453d-84aa-48e3f933e47b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:59:07 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:59:07.007 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[f4668ec0-4c39-4d2c-8372-eff271f29ec0]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:59:07 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:59:07.037 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[496f017b-8df5-4af1-859d-d1ac8bef9519]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:59:07 compute-0 NetworkManager[48960]: <info>  [1768921147.0445] manager: (tap3aad5d71-90): new Veth device (/org/freedesktop/NetworkManager/Devices/261)
Jan 20 14:59:07 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:59:07.044 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[7c8526d4-f06d-4099-ba58-bbbe7c4b0b0d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:59:07 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:59:07.075 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[0b389b9d-a178-4670-ab0c-61e39a3921b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:59:07 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:59:07.077 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[9720feee-3d84-4737-b7dc-e2868491e2e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:59:07 compute-0 NetworkManager[48960]: <info>  [1768921147.0991] device (tap3aad5d71-90): carrier: link connected
Jan 20 14:59:07 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:59:07.105 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[edadc604-d7ad-44dd-9d14-ec0c5e554025]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:59:07 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:59:07.122 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[1a60a736-1f45-4c29-b235-d609be832a84]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3aad5d71-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a8:0d:1c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 173], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 708782, 'reachable_time': 33230, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 335405, 'error': None, 'target': 'ovnmeta-3aad5d71-9bbf-496d-805e-819d17c4343e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:59:07 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:59:07.137 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[df30689a-8004-44f4-96a0-36a56896cca6]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea8:d1c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 708782, 'tstamp': 708782}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 335406, 'error': None, 'target': 'ovnmeta-3aad5d71-9bbf-496d-805e-819d17c4343e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:59:07 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:59:07.155 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[ca1da70c-108c-42ef-8542-f3b9d3415ecd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3aad5d71-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a8:0d:1c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 173], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 708782, 'reachable_time': 33230, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 335407, 'error': None, 'target': 'ovnmeta-3aad5d71-9bbf-496d-805e-819d17c4343e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:59:07 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:59:07.185 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[306fd18e-b648-4829-a957-dec3bf0d610b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:59:07 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:59:07 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:59:07 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:59:07.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:59:07 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:59:07.257 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[d1302cbc-f138-4e1d-95cd-954408f8f399]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:59:07 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:59:07.259 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3aad5d71-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:59:07 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:59:07.259 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 14:59:07 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:59:07.260 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3aad5d71-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:59:07 compute-0 nova_compute[250018]: 2026-01-20 14:59:07.262 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:59:07 compute-0 kernel: tap3aad5d71-90: entered promiscuous mode
Jan 20 14:59:07 compute-0 NetworkManager[48960]: <info>  [1768921147.2633] manager: (tap3aad5d71-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/262)
Jan 20 14:59:07 compute-0 nova_compute[250018]: 2026-01-20 14:59:07.265 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:59:07 compute-0 ceph-mon[74360]: pgmap v2211: 321 pgs: 321 active+clean; 348 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 134 KiB/s rd, 2.1 MiB/s wr, 199 op/s
Jan 20 14:59:07 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:59:07.268 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3aad5d71-90, col_values=(('external_ids', {'iface-id': '326d4a7f-b98b-4d21-8fb2-256cf03a3e6a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:59:07 compute-0 nova_compute[250018]: 2026-01-20 14:59:07.269 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:59:07 compute-0 ovn_controller[148666]: 2026-01-20T14:59:07Z|00532|binding|INFO|Releasing lport 326d4a7f-b98b-4d21-8fb2-256cf03a3e6a from this chassis (sb_readonly=0)
Jan 20 14:59:07 compute-0 nova_compute[250018]: 2026-01-20 14:59:07.271 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:59:07 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:59:07.271 160071 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3aad5d71-9bbf-496d-805e-819d17c4343e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3aad5d71-9bbf-496d-805e-819d17c4343e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 14:59:07 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:59:07.272 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[b959c594-ee2c-408e-99f6-75797e41a06f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 14:59:07 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:59:07.272 160071 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 14:59:07 compute-0 ovn_metadata_agent[160049]: global
Jan 20 14:59:07 compute-0 ovn_metadata_agent[160049]:     log         /dev/log local0 debug
Jan 20 14:59:07 compute-0 ovn_metadata_agent[160049]:     log-tag     haproxy-metadata-proxy-3aad5d71-9bbf-496d-805e-819d17c4343e
Jan 20 14:59:07 compute-0 ovn_metadata_agent[160049]:     user        root
Jan 20 14:59:07 compute-0 ovn_metadata_agent[160049]:     group       root
Jan 20 14:59:07 compute-0 ovn_metadata_agent[160049]:     maxconn     1024
Jan 20 14:59:07 compute-0 ovn_metadata_agent[160049]:     pidfile     /var/lib/neutron/external/pids/3aad5d71-9bbf-496d-805e-819d17c4343e.pid.haproxy
Jan 20 14:59:07 compute-0 ovn_metadata_agent[160049]:     daemon
Jan 20 14:59:07 compute-0 ovn_metadata_agent[160049]: 
Jan 20 14:59:07 compute-0 ovn_metadata_agent[160049]: defaults
Jan 20 14:59:07 compute-0 ovn_metadata_agent[160049]:     log global
Jan 20 14:59:07 compute-0 ovn_metadata_agent[160049]:     mode http
Jan 20 14:59:07 compute-0 ovn_metadata_agent[160049]:     option httplog
Jan 20 14:59:07 compute-0 ovn_metadata_agent[160049]:     option dontlognull
Jan 20 14:59:07 compute-0 ovn_metadata_agent[160049]:     option http-server-close
Jan 20 14:59:07 compute-0 ovn_metadata_agent[160049]:     option forwardfor
Jan 20 14:59:07 compute-0 ovn_metadata_agent[160049]:     retries                 3
Jan 20 14:59:07 compute-0 ovn_metadata_agent[160049]:     timeout http-request    30s
Jan 20 14:59:07 compute-0 ovn_metadata_agent[160049]:     timeout connect         30s
Jan 20 14:59:07 compute-0 ovn_metadata_agent[160049]:     timeout client          32s
Jan 20 14:59:07 compute-0 ovn_metadata_agent[160049]:     timeout server          32s
Jan 20 14:59:07 compute-0 ovn_metadata_agent[160049]:     timeout http-keep-alive 30s
Jan 20 14:59:07 compute-0 ovn_metadata_agent[160049]: 
Jan 20 14:59:07 compute-0 ovn_metadata_agent[160049]: 
Jan 20 14:59:07 compute-0 ovn_metadata_agent[160049]: listen listener
Jan 20 14:59:07 compute-0 ovn_metadata_agent[160049]:     bind 169.254.169.254:80
Jan 20 14:59:07 compute-0 ovn_metadata_agent[160049]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 14:59:07 compute-0 ovn_metadata_agent[160049]:     http-request add-header X-OVN-Network-ID 3aad5d71-9bbf-496d-805e-819d17c4343e
Jan 20 14:59:07 compute-0 ovn_metadata_agent[160049]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 14:59:07 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:59:07.273 160071 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-3aad5d71-9bbf-496d-805e-819d17c4343e', 'env', 'PROCESS_TAG=haproxy-3aad5d71-9bbf-496d-805e-819d17c4343e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3aad5d71-9bbf-496d-805e-819d17c4343e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 14:59:07 compute-0 nova_compute[250018]: 2026-01-20 14:59:07.287 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:59:07 compute-0 nova_compute[250018]: 2026-01-20 14:59:07.394 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768921147.3940635, 61ae2c61-01df-4ef1-8aa3-0527a43b1798 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:59:07 compute-0 nova_compute[250018]: 2026-01-20 14:59:07.396 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 61ae2c61-01df-4ef1-8aa3-0527a43b1798] VM Started (Lifecycle Event)
Jan 20 14:59:07 compute-0 nova_compute[250018]: 2026-01-20 14:59:07.446 250022 DEBUG nova.network.neutron [req-21de8da0-fdeb-4de3-87b7-280a7e090561 req-cea20daa-d74f-48be-b7f1-a80b414a344a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 61ae2c61-01df-4ef1-8aa3-0527a43b1798] Updated VIF entry in instance network info cache for port ee1c78ce-d0fd-4b6b-8a7c-e3aff97e74d7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 14:59:07 compute-0 nova_compute[250018]: 2026-01-20 14:59:07.446 250022 DEBUG nova.network.neutron [req-21de8da0-fdeb-4de3-87b7-280a7e090561 req-cea20daa-d74f-48be-b7f1-a80b414a344a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 61ae2c61-01df-4ef1-8aa3-0527a43b1798] Updating instance_info_cache with network_info: [{"id": "ee1c78ce-d0fd-4b6b-8a7c-e3aff97e74d7", "address": "fa:16:3e:8c:76:cf", "network": {"id": "3aad5d71-9bbf-496d-805e-819d17c4343e", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1714826441-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "105e56abe3804424885c7aa8d1216d12", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee1c78ce-d0", "ovs_interfaceid": "ee1c78ce-d0fd-4b6b-8a7c-e3aff97e74d7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:59:07 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:59:07 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:59:07 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:59:07.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:59:07 compute-0 nova_compute[250018]: 2026-01-20 14:59:07.612 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 61ae2c61-01df-4ef1-8aa3-0527a43b1798] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:59:07 compute-0 nova_compute[250018]: 2026-01-20 14:59:07.616 250022 DEBUG oslo_concurrency.lockutils [req-21de8da0-fdeb-4de3-87b7-280a7e090561 req-cea20daa-d74f-48be-b7f1-a80b414a344a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-61ae2c61-01df-4ef1-8aa3-0527a43b1798" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:59:07 compute-0 nova_compute[250018]: 2026-01-20 14:59:07.618 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768921147.394368, 61ae2c61-01df-4ef1-8aa3-0527a43b1798 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:59:07 compute-0 nova_compute[250018]: 2026-01-20 14:59:07.618 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 61ae2c61-01df-4ef1-8aa3-0527a43b1798] VM Paused (Lifecycle Event)
Jan 20 14:59:07 compute-0 podman[335463]: 2026-01-20 14:59:07.628102844 +0000 UTC m=+0.045421774 container create 87fd5efa2ce9fff3d396eba609e6c8de548a15d3cdc587a4a32002e2b0692767 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3aad5d71-9bbf-496d-805e-819d17c4343e, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 20 14:59:07 compute-0 nova_compute[250018]: 2026-01-20 14:59:07.662 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 61ae2c61-01df-4ef1-8aa3-0527a43b1798] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:59:07 compute-0 nova_compute[250018]: 2026-01-20 14:59:07.665 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 61ae2c61-01df-4ef1-8aa3-0527a43b1798] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:59:07 compute-0 nova_compute[250018]: 2026-01-20 14:59:07.686 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 61ae2c61-01df-4ef1-8aa3-0527a43b1798] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:59:07 compute-0 podman[335463]: 2026-01-20 14:59:07.604509889 +0000 UTC m=+0.021828839 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 14:59:07 compute-0 systemd[1]: Started libpod-conmon-87fd5efa2ce9fff3d396eba609e6c8de548a15d3cdc587a4a32002e2b0692767.scope.
Jan 20 14:59:07 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:59:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edaf9b25b7c09127a180b87f64f7f3770f583b98e1e559a9df6f8af018d995e1/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 14:59:07 compute-0 podman[335463]: 2026-01-20 14:59:07.860970167 +0000 UTC m=+0.278289147 container init 87fd5efa2ce9fff3d396eba609e6c8de548a15d3cdc587a4a32002e2b0692767 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3aad5d71-9bbf-496d-805e-819d17c4343e, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202)
Jan 20 14:59:07 compute-0 podman[335463]: 2026-01-20 14:59:07.867279706 +0000 UTC m=+0.284598676 container start 87fd5efa2ce9fff3d396eba609e6c8de548a15d3cdc587a4a32002e2b0692767 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3aad5d71-9bbf-496d-805e-819d17c4343e, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 20 14:59:07 compute-0 neutron-haproxy-ovnmeta-3aad5d71-9bbf-496d-805e-819d17c4343e[335478]: [NOTICE]   (335483) : New worker (335485) forked
Jan 20 14:59:07 compute-0 neutron-haproxy-ovnmeta-3aad5d71-9bbf-496d-805e-819d17c4343e[335478]: [NOTICE]   (335483) : Loading success.
Jan 20 14:59:07 compute-0 nova_compute[250018]: 2026-01-20 14:59:07.978 250022 DEBUG nova.compute.manager [req-ea375a43-f5c9-4a89-866e-9f1bd6b79c17 req-4e6f00d7-3793-447c-9f88-978d3c822f79 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 61ae2c61-01df-4ef1-8aa3-0527a43b1798] Received event network-vif-plugged-ee1c78ce-d0fd-4b6b-8a7c-e3aff97e74d7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:59:07 compute-0 nova_compute[250018]: 2026-01-20 14:59:07.978 250022 DEBUG oslo_concurrency.lockutils [req-ea375a43-f5c9-4a89-866e-9f1bd6b79c17 req-4e6f00d7-3793-447c-9f88-978d3c822f79 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "61ae2c61-01df-4ef1-8aa3-0527a43b1798-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:59:07 compute-0 nova_compute[250018]: 2026-01-20 14:59:07.979 250022 DEBUG oslo_concurrency.lockutils [req-ea375a43-f5c9-4a89-866e-9f1bd6b79c17 req-4e6f00d7-3793-447c-9f88-978d3c822f79 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "61ae2c61-01df-4ef1-8aa3-0527a43b1798-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:59:07 compute-0 nova_compute[250018]: 2026-01-20 14:59:07.979 250022 DEBUG oslo_concurrency.lockutils [req-ea375a43-f5c9-4a89-866e-9f1bd6b79c17 req-4e6f00d7-3793-447c-9f88-978d3c822f79 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "61ae2c61-01df-4ef1-8aa3-0527a43b1798-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:59:07 compute-0 nova_compute[250018]: 2026-01-20 14:59:07.979 250022 DEBUG nova.compute.manager [req-ea375a43-f5c9-4a89-866e-9f1bd6b79c17 req-4e6f00d7-3793-447c-9f88-978d3c822f79 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 61ae2c61-01df-4ef1-8aa3-0527a43b1798] Processing event network-vif-plugged-ee1c78ce-d0fd-4b6b-8a7c-e3aff97e74d7 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 20 14:59:07 compute-0 nova_compute[250018]: 2026-01-20 14:59:07.979 250022 DEBUG nova.compute.manager [None req-76b3c0bf-955a-4fa9-b2e3-244f2f87dbbf d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] [instance: 61ae2c61-01df-4ef1-8aa3-0527a43b1798] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 14:59:07 compute-0 nova_compute[250018]: 2026-01-20 14:59:07.983 250022 DEBUG nova.virt.libvirt.driver [None req-76b3c0bf-955a-4fa9-b2e3-244f2f87dbbf d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] [instance: 61ae2c61-01df-4ef1-8aa3-0527a43b1798] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 14:59:07 compute-0 nova_compute[250018]: 2026-01-20 14:59:07.983 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768921147.9833066, 61ae2c61-01df-4ef1-8aa3-0527a43b1798 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:59:07 compute-0 nova_compute[250018]: 2026-01-20 14:59:07.984 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 61ae2c61-01df-4ef1-8aa3-0527a43b1798] VM Resumed (Lifecycle Event)
Jan 20 14:59:07 compute-0 nova_compute[250018]: 2026-01-20 14:59:07.987 250022 INFO nova.virt.libvirt.driver [-] [instance: 61ae2c61-01df-4ef1-8aa3-0527a43b1798] Instance spawned successfully.
Jan 20 14:59:07 compute-0 nova_compute[250018]: 2026-01-20 14:59:07.988 250022 DEBUG nova.virt.libvirt.driver [None req-76b3c0bf-955a-4fa9-b2e3-244f2f87dbbf d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] [instance: 61ae2c61-01df-4ef1-8aa3-0527a43b1798] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 14:59:08 compute-0 nova_compute[250018]: 2026-01-20 14:59:08.009 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 61ae2c61-01df-4ef1-8aa3-0527a43b1798] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:59:08 compute-0 nova_compute[250018]: 2026-01-20 14:59:08.019 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 61ae2c61-01df-4ef1-8aa3-0527a43b1798] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 14:59:08 compute-0 nova_compute[250018]: 2026-01-20 14:59:08.022 250022 DEBUG nova.virt.libvirt.driver [None req-76b3c0bf-955a-4fa9-b2e3-244f2f87dbbf d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] [instance: 61ae2c61-01df-4ef1-8aa3-0527a43b1798] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:59:08 compute-0 nova_compute[250018]: 2026-01-20 14:59:08.022 250022 DEBUG nova.virt.libvirt.driver [None req-76b3c0bf-955a-4fa9-b2e3-244f2f87dbbf d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] [instance: 61ae2c61-01df-4ef1-8aa3-0527a43b1798] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:59:08 compute-0 nova_compute[250018]: 2026-01-20 14:59:08.023 250022 DEBUG nova.virt.libvirt.driver [None req-76b3c0bf-955a-4fa9-b2e3-244f2f87dbbf d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] [instance: 61ae2c61-01df-4ef1-8aa3-0527a43b1798] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:59:08 compute-0 nova_compute[250018]: 2026-01-20 14:59:08.023 250022 DEBUG nova.virt.libvirt.driver [None req-76b3c0bf-955a-4fa9-b2e3-244f2f87dbbf d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] [instance: 61ae2c61-01df-4ef1-8aa3-0527a43b1798] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:59:08 compute-0 nova_compute[250018]: 2026-01-20 14:59:08.023 250022 DEBUG nova.virt.libvirt.driver [None req-76b3c0bf-955a-4fa9-b2e3-244f2f87dbbf d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] [instance: 61ae2c61-01df-4ef1-8aa3-0527a43b1798] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:59:08 compute-0 nova_compute[250018]: 2026-01-20 14:59:08.024 250022 DEBUG nova.virt.libvirt.driver [None req-76b3c0bf-955a-4fa9-b2e3-244f2f87dbbf d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] [instance: 61ae2c61-01df-4ef1-8aa3-0527a43b1798] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 14:59:08 compute-0 nova_compute[250018]: 2026-01-20 14:59:08.049 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 61ae2c61-01df-4ef1-8aa3-0527a43b1798] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 14:59:08 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2212: 321 pgs: 321 active+clean; 348 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 48 KiB/s rd, 2.1 MiB/s wr, 70 op/s
Jan 20 14:59:08 compute-0 nova_compute[250018]: 2026-01-20 14:59:08.159 250022 INFO nova.compute.manager [None req-76b3c0bf-955a-4fa9-b2e3-244f2f87dbbf d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] [instance: 61ae2c61-01df-4ef1-8aa3-0527a43b1798] Took 9.79 seconds to spawn the instance on the hypervisor.
Jan 20 14:59:08 compute-0 nova_compute[250018]: 2026-01-20 14:59:08.159 250022 DEBUG nova.compute.manager [None req-76b3c0bf-955a-4fa9-b2e3-244f2f87dbbf d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] [instance: 61ae2c61-01df-4ef1-8aa3-0527a43b1798] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:59:08 compute-0 nova_compute[250018]: 2026-01-20 14:59:08.250 250022 INFO nova.compute.manager [None req-76b3c0bf-955a-4fa9-b2e3-244f2f87dbbf d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] [instance: 61ae2c61-01df-4ef1-8aa3-0527a43b1798] Took 10.79 seconds to build instance.
Jan 20 14:59:08 compute-0 nova_compute[250018]: 2026-01-20 14:59:08.273 250022 DEBUG oslo_concurrency.lockutils [None req-76b3c0bf-955a-4fa9-b2e3-244f2f87dbbf d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] Lock "61ae2c61-01df-4ef1-8aa3-0527a43b1798" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.918s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:59:08 compute-0 nova_compute[250018]: 2026-01-20 14:59:08.606 250022 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768921133.6044376, 20a008fa-d059-4906-ba1c-697755b1ba06 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 14:59:08 compute-0 nova_compute[250018]: 2026-01-20 14:59:08.606 250022 INFO nova.compute.manager [-] [instance: 20a008fa-d059-4906-ba1c-697755b1ba06] VM Stopped (Lifecycle Event)
Jan 20 14:59:08 compute-0 nova_compute[250018]: 2026-01-20 14:59:08.625 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:59:08 compute-0 nova_compute[250018]: 2026-01-20 14:59:08.692 250022 DEBUG nova.compute.manager [None req-76b2c6d5-aa04-4386-a59a-5c901927787d - - - - - -] [instance: 20a008fa-d059-4906-ba1c-697755b1ba06] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 14:59:09 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:59:09 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:59:09 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:59:09.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:59:09 compute-0 ceph-mon[74360]: pgmap v2212: 321 pgs: 321 active+clean; 348 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 48 KiB/s rd, 2.1 MiB/s wr, 70 op/s
Jan 20 14:59:09 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:59:09 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:59:09 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:59:09.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:59:09 compute-0 nova_compute[250018]: 2026-01-20 14:59:09.695 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:59:09 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e321 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:59:10 compute-0 nova_compute[250018]: 2026-01-20 14:59:10.069 250022 DEBUG nova.compute.manager [req-3d190c41-97a2-439d-9f71-7fe30a4a4525 req-167af293-7ec1-4971-8ce9-efe9ed056139 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 61ae2c61-01df-4ef1-8aa3-0527a43b1798] Received event network-vif-plugged-ee1c78ce-d0fd-4b6b-8a7c-e3aff97e74d7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 14:59:10 compute-0 nova_compute[250018]: 2026-01-20 14:59:10.069 250022 DEBUG oslo_concurrency.lockutils [req-3d190c41-97a2-439d-9f71-7fe30a4a4525 req-167af293-7ec1-4971-8ce9-efe9ed056139 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "61ae2c61-01df-4ef1-8aa3-0527a43b1798-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:59:10 compute-0 nova_compute[250018]: 2026-01-20 14:59:10.070 250022 DEBUG oslo_concurrency.lockutils [req-3d190c41-97a2-439d-9f71-7fe30a4a4525 req-167af293-7ec1-4971-8ce9-efe9ed056139 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "61ae2c61-01df-4ef1-8aa3-0527a43b1798-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:59:10 compute-0 nova_compute[250018]: 2026-01-20 14:59:10.070 250022 DEBUG oslo_concurrency.lockutils [req-3d190c41-97a2-439d-9f71-7fe30a4a4525 req-167af293-7ec1-4971-8ce9-efe9ed056139 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "61ae2c61-01df-4ef1-8aa3-0527a43b1798-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:59:10 compute-0 nova_compute[250018]: 2026-01-20 14:59:10.070 250022 DEBUG nova.compute.manager [req-3d190c41-97a2-439d-9f71-7fe30a4a4525 req-167af293-7ec1-4971-8ce9-efe9ed056139 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 61ae2c61-01df-4ef1-8aa3-0527a43b1798] No waiting events found dispatching network-vif-plugged-ee1c78ce-d0fd-4b6b-8a7c-e3aff97e74d7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 14:59:10 compute-0 nova_compute[250018]: 2026-01-20 14:59:10.070 250022 WARNING nova.compute.manager [req-3d190c41-97a2-439d-9f71-7fe30a4a4525 req-167af293-7ec1-4971-8ce9-efe9ed056139 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 61ae2c61-01df-4ef1-8aa3-0527a43b1798] Received unexpected event network-vif-plugged-ee1c78ce-d0fd-4b6b-8a7c-e3aff97e74d7 for instance with vm_state active and task_state None.
Jan 20 14:59:10 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2213: 321 pgs: 321 active+clean; 346 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 952 KiB/s rd, 2.2 MiB/s wr, 110 op/s
Jan 20 14:59:10 compute-0 podman[335496]: 2026-01-20 14:59:10.470364693 +0000 UTC m=+0.052457823 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 20 14:59:10 compute-0 podman[335495]: 2026-01-20 14:59:10.494309809 +0000 UTC m=+0.078611089 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 20 14:59:11 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:59:11 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:59:11 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:59:11.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:59:11 compute-0 ceph-mon[74360]: pgmap v2213: 321 pgs: 321 active+clean; 346 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 952 KiB/s rd, 2.2 MiB/s wr, 110 op/s
Jan 20 14:59:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 14:59:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:59:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 20 14:59:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:59:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0010163665549972082 of space, bias 1.0, pg target 0.30490996649916247 quantized to 32 (current 32)
Jan 20 14:59:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:59:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0065064180857993675 of space, bias 1.0, pg target 1.9519254257398102 quantized to 32 (current 32)
Jan 20 14:59:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:59:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:59:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:59:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0028546232319002418 of space, bias 1.0, pg target 0.8535323463381723 quantized to 32 (current 32)
Jan 20 14:59:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:59:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 32)
Jan 20 14:59:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:59:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:59:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:59:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021737739624046157 quantized to 32 (current 32)
Jan 20 14:59:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:59:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Jan 20 14:59:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:59:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:59:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 14:59:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Jan 20 14:59:11 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:59:11 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:59:11 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:59:11.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:59:12 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2214: 321 pgs: 321 active+clean; 346 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 1.2 MiB/s wr, 119 op/s
Jan 20 14:59:13 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:59:13 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:59:13 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:59:13.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:59:13 compute-0 ceph-mon[74360]: pgmap v2214: 321 pgs: 321 active+clean; 346 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 1.2 MiB/s wr, 119 op/s
Jan 20 14:59:13 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/19077082' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:59:13 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:59:13 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:59:13 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:59:13.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:59:13 compute-0 nova_compute[250018]: 2026-01-20 14:59:13.627 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:59:14 compute-0 sudo[335543]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:59:14 compute-0 sudo[335543]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:59:14 compute-0 sudo[335543]: pam_unix(sudo:session): session closed for user root
Jan 20 14:59:14 compute-0 sudo[335568]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:59:14 compute-0 sudo[335568]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:59:14 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2215: 321 pgs: 321 active+clean; 346 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 713 KiB/s wr, 117 op/s
Jan 20 14:59:14 compute-0 sudo[335568]: pam_unix(sudo:session): session closed for user root
Jan 20 14:59:14 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/857366021' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:59:14 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/857366021' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:59:14 compute-0 nova_compute[250018]: 2026-01-20 14:59:14.699 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:59:14 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e321 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:59:15 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:59:15 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:59:15 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:59:15.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:59:15 compute-0 ceph-mon[74360]: pgmap v2215: 321 pgs: 321 active+clean; 346 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 713 KiB/s wr, 117 op/s
Jan 20 14:59:15 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:59:15 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:59:15 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:59:15.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:59:16 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2216: 321 pgs: 321 active+clean; 346 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 25 KiB/s wr, 104 op/s
Jan 20 14:59:16 compute-0 ceph-mon[74360]: pgmap v2216: 321 pgs: 321 active+clean; 346 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 25 KiB/s wr, 104 op/s
Jan 20 14:59:17 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:59:17 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:59:17 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:59:17.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:59:17 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:59:17 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:59:17 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:59:17.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:59:18 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2217: 321 pgs: 321 active+clean; 346 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 23 KiB/s wr, 78 op/s
Jan 20 14:59:18 compute-0 nova_compute[250018]: 2026-01-20 14:59:18.633 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:59:19 compute-0 ceph-mon[74360]: pgmap v2217: 321 pgs: 321 active+clean; 346 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 23 KiB/s wr, 78 op/s
Jan 20 14:59:19 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:59:19 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:59:19 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:59:19.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:59:19 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:59:19 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:59:19 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:59:19.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:59:19 compute-0 nova_compute[250018]: 2026-01-20 14:59:19.736 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:59:19 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e321 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:59:20 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2218: 321 pgs: 321 active+clean; 349 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 120 KiB/s wr, 85 op/s
Jan 20 14:59:20 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/2897357433' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:59:20 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/2904376589' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:59:21 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:59:21 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:59:21 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:59:21.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:59:21 compute-0 ceph-mon[74360]: pgmap v2218: 321 pgs: 321 active+clean; 349 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 120 KiB/s wr, 85 op/s
Jan 20 14:59:21 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:59:21 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:59:21 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:59:21.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:59:22 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2219: 321 pgs: 321 active+clean; 361 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 662 KiB/s wr, 56 op/s
Jan 20 14:59:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:59:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:59:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:59:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:59:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:59:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:59:22 compute-0 ovn_controller[148666]: 2026-01-20T14:59:22Z|00058|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:8c:76:cf 10.100.0.3
Jan 20 14:59:22 compute-0 ovn_controller[148666]: 2026-01-20T14:59:22Z|00059|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:8c:76:cf 10.100.0.3
Jan 20 14:59:23 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:59:23 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:59:23 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:59:23.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:59:23 compute-0 ceph-mon[74360]: pgmap v2219: 321 pgs: 321 active+clean; 361 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 662 KiB/s wr, 56 op/s
Jan 20 14:59:23 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:59:23 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:59:23 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:59:23.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:59:23 compute-0 nova_compute[250018]: 2026-01-20 14:59:23.637 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:59:24 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2220: 321 pgs: 321 active+clean; 377 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 583 KiB/s rd, 1.4 MiB/s wr, 49 op/s
Jan 20 14:59:24 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/2388556621' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:59:24 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/1151045178' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:59:24 compute-0 nova_compute[250018]: 2026-01-20 14:59:24.740 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:59:24 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e321 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:59:25 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:59:25 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:59:25 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:59:25.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:59:25 compute-0 ceph-mon[74360]: pgmap v2220: 321 pgs: 321 active+clean; 377 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 583 KiB/s rd, 1.4 MiB/s wr, 49 op/s
Jan 20 14:59:25 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:59:25 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:59:25 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:59:25.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:59:26 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2221: 321 pgs: 321 active+clean; 425 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 349 KiB/s rd, 3.9 MiB/s wr, 96 op/s
Jan 20 14:59:27 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:59:27 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:59:27 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:59:27.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:59:27 compute-0 ceph-mon[74360]: pgmap v2221: 321 pgs: 321 active+clean; 425 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 349 KiB/s rd, 3.9 MiB/s wr, 96 op/s
Jan 20 14:59:27 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:59:27 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:59:27 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:59:27.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:59:28 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2222: 321 pgs: 321 active+clean; 425 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 347 KiB/s rd, 3.9 MiB/s wr, 93 op/s
Jan 20 14:59:28 compute-0 ceph-mon[74360]: pgmap v2222: 321 pgs: 321 active+clean; 425 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 347 KiB/s rd, 3.9 MiB/s wr, 93 op/s
Jan 20 14:59:28 compute-0 nova_compute[250018]: 2026-01-20 14:59:28.640 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:59:29 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:59:29 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:59:29 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:59:29.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:59:29 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:59:29 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:59:29 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:59:29.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:59:29 compute-0 nova_compute[250018]: 2026-01-20 14:59:29.743 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:59:29 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e321 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:59:30 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2223: 321 pgs: 321 active+clean; 399 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 3.9 MiB/s wr, 171 op/s
Jan 20 14:59:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:59:30.770 160071 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:59:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:59:30.770 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:59:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:59:30.771 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:59:31 compute-0 ceph-mon[74360]: pgmap v2223: 321 pgs: 321 active+clean; 399 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 3.9 MiB/s wr, 171 op/s
Jan 20 14:59:31 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/2951671161' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:59:31 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:59:31 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:59:31 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:59:31.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:59:31 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:59:31 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:59:31 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:59:31.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:59:32 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2224: 321 pgs: 321 active+clean; 395 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 3.8 MiB/s wr, 182 op/s
Jan 20 14:59:33 compute-0 ceph-mon[74360]: pgmap v2224: 321 pgs: 321 active+clean; 395 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 3.8 MiB/s wr, 182 op/s
Jan 20 14:59:33 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:59:33 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:59:33 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:59:33.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:59:33 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:59:33 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:59:33 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:59:33.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:59:33 compute-0 nova_compute[250018]: 2026-01-20 14:59:33.642 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:59:34 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2225: 321 pgs: 321 active+clean; 381 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 3.5 MiB/s wr, 180 op/s
Jan 20 14:59:34 compute-0 sudo[335603]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:59:34 compute-0 sudo[335603]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:59:34 compute-0 sudo[335603]: pam_unix(sudo:session): session closed for user root
Jan 20 14:59:34 compute-0 sudo[335628]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:59:34 compute-0 sudo[335628]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:59:34 compute-0 sudo[335628]: pam_unix(sudo:session): session closed for user root
Jan 20 14:59:34 compute-0 nova_compute[250018]: 2026-01-20 14:59:34.746 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:59:34 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e321 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:59:35 compute-0 nova_compute[250018]: 2026-01-20 14:59:35.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:59:35 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:59:35 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:59:35 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:59:35.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:59:35 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 20 14:59:35 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/280429019' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:59:35 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 20 14:59:35 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/280429019' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:59:35 compute-0 ceph-mon[74360]: pgmap v2225: 321 pgs: 321 active+clean; 381 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 3.5 MiB/s wr, 180 op/s
Jan 20 14:59:35 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:59:35 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:59:35 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:59:35.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:59:36 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2226: 321 pgs: 321 active+clean; 381 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.7 MiB/s wr, 184 op/s
Jan 20 14:59:36 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/280429019' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:59:36 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/280429019' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:59:37 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:59:37 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:59:37 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:59:37.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:59:37 compute-0 ceph-mon[74360]: pgmap v2226: 321 pgs: 321 active+clean; 381 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.7 MiB/s wr, 184 op/s
Jan 20 14:59:37 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:59:37 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:59:37 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:59:37.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:59:38 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2227: 321 pgs: 321 active+clean; 381 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 215 KiB/s wr, 121 op/s
Jan 20 14:59:38 compute-0 ceph-mon[74360]: pgmap v2227: 321 pgs: 321 active+clean; 381 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 215 KiB/s wr, 121 op/s
Jan 20 14:59:38 compute-0 nova_compute[250018]: 2026-01-20 14:59:38.644 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:59:39 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:59:39 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:59:39 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:59:39.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:59:39 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/385370313' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 14:59:39 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/385370313' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 14:59:39 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:59:39 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 14:59:39 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:59:39.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 14:59:39 compute-0 nova_compute[250018]: 2026-01-20 14:59:39.779 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:59:39 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e321 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:59:40 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2228: 321 pgs: 321 active+clean; 379 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 392 KiB/s wr, 129 op/s
Jan 20 14:59:40 compute-0 ceph-mon[74360]: pgmap v2228: 321 pgs: 321 active+clean; 379 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 392 KiB/s wr, 129 op/s
Jan 20 14:59:41 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:59:41.018 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=44, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=43) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 14:59:41 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:59:41.019 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 14:59:41 compute-0 nova_compute[250018]: 2026-01-20 14:59:41.060 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:59:41 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:59:41 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:59:41 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:59:41.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:59:41 compute-0 podman[335657]: 2026-01-20 14:59:41.508366627 +0000 UTC m=+0.087441536 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Jan 20 14:59:41 compute-0 podman[335656]: 2026-01-20 14:59:41.553447541 +0000 UTC m=+0.132520430 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 20 14:59:41 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:59:41 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:59:41 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:59:41.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:59:42 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2229: 321 pgs: 321 active+clean; 379 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 211 KiB/s rd, 368 KiB/s wr, 66 op/s
Jan 20 14:59:43 compute-0 ceph-mon[74360]: pgmap v2229: 321 pgs: 321 active+clean; 379 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 211 KiB/s rd, 368 KiB/s wr, 66 op/s
Jan 20 14:59:43 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:59:43 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:59:43 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:59:43.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:59:43 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:59:43 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:59:43 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:59:43.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:59:43 compute-0 nova_compute[250018]: 2026-01-20 14:59:43.646 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:59:44 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2230: 321 pgs: 321 active+clean; 379 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 34 KiB/s rd, 368 KiB/s wr, 48 op/s
Jan 20 14:59:44 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/764431920' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:59:44 compute-0 nova_compute[250018]: 2026-01-20 14:59:44.781 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:59:45 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e321 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:59:45 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:59:45 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:59:45 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:59:45.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:59:45 compute-0 ceph-mon[74360]: pgmap v2230: 321 pgs: 321 active+clean; 379 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 34 KiB/s rd, 368 KiB/s wr, 48 op/s
Jan 20 14:59:45 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:59:45 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:59:45 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:59:45.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:59:46 compute-0 ovn_metadata_agent[160049]: 2026-01-20 14:59:46.022 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=367c1a2c-b16a-4828-ab5a-626bb50023b4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '44'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 14:59:46 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2231: 321 pgs: 321 active+clean; 416 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 44 KiB/s rd, 1.7 MiB/s wr, 63 op/s
Jan 20 14:59:46 compute-0 ceph-mon[74360]: pgmap v2231: 321 pgs: 321 active+clean; 416 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 44 KiB/s rd, 1.7 MiB/s wr, 63 op/s
Jan 20 14:59:47 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:59:47 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:59:47 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:59:47.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:59:47 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:59:47 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:59:47 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:59:47.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:59:47 compute-0 nova_compute[250018]: 2026-01-20 14:59:47.891 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:59:47 compute-0 nova_compute[250018]: 2026-01-20 14:59:47.910 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Triggering sync for uuid 61ae2c61-01df-4ef1-8aa3-0527a43b1798 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Jan 20 14:59:47 compute-0 nova_compute[250018]: 2026-01-20 14:59:47.911 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "61ae2c61-01df-4ef1-8aa3-0527a43b1798" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:59:47 compute-0 nova_compute[250018]: 2026-01-20 14:59:47.911 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "61ae2c61-01df-4ef1-8aa3-0527a43b1798" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:59:47 compute-0 nova_compute[250018]: 2026-01-20 14:59:47.932 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "61ae2c61-01df-4ef1-8aa3-0527a43b1798" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.021s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:59:48 compute-0 nova_compute[250018]: 2026-01-20 14:59:48.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:59:48 compute-0 nova_compute[250018]: 2026-01-20 14:59:48.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:59:48 compute-0 nova_compute[250018]: 2026-01-20 14:59:48.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:59:48 compute-0 nova_compute[250018]: 2026-01-20 14:59:48.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:59:48 compute-0 nova_compute[250018]: 2026-01-20 14:59:48.073 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:59:48 compute-0 nova_compute[250018]: 2026-01-20 14:59:48.073 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:59:48 compute-0 nova_compute[250018]: 2026-01-20 14:59:48.074 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:59:48 compute-0 nova_compute[250018]: 2026-01-20 14:59:48.074 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 14:59:48 compute-0 nova_compute[250018]: 2026-01-20 14:59:48.074 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:59:48 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2232: 321 pgs: 321 active+clean; 416 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 31 KiB/s rd, 1.7 MiB/s wr, 47 op/s
Jan 20 14:59:48 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:59:48 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/937710557' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:59:48 compute-0 nova_compute[250018]: 2026-01-20 14:59:48.495 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:59:48 compute-0 nova_compute[250018]: 2026-01-20 14:59:48.589 250022 DEBUG nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] skipping disk for instance-0000008f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:59:48 compute-0 nova_compute[250018]: 2026-01-20 14:59:48.590 250022 DEBUG nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] skipping disk for instance-0000008f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 14:59:48 compute-0 nova_compute[250018]: 2026-01-20 14:59:48.648 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:59:48 compute-0 nova_compute[250018]: 2026-01-20 14:59:48.810 250022 WARNING nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 14:59:48 compute-0 nova_compute[250018]: 2026-01-20 14:59:48.811 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4092MB free_disk=20.925037384033203GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 14:59:48 compute-0 nova_compute[250018]: 2026-01-20 14:59:48.812 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:59:48 compute-0 nova_compute[250018]: 2026-01-20 14:59:48.812 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:59:48 compute-0 nova_compute[250018]: 2026-01-20 14:59:48.985 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Instance 61ae2c61-01df-4ef1-8aa3-0527a43b1798 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 14:59:48 compute-0 nova_compute[250018]: 2026-01-20 14:59:48.985 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 14:59:48 compute-0 nova_compute[250018]: 2026-01-20 14:59:48.986 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 14:59:49 compute-0 nova_compute[250018]: 2026-01-20 14:59:49.026 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:59:49 compute-0 ceph-mon[74360]: pgmap v2232: 321 pgs: 321 active+clean; 416 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 31 KiB/s rd, 1.7 MiB/s wr, 47 op/s
Jan 20 14:59:49 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/937710557' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:59:49 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:59:49 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:59:49 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:59:49.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:59:49 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 14:59:49 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1559493939' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:59:49 compute-0 nova_compute[250018]: 2026-01-20 14:59:49.519 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:59:49 compute-0 nova_compute[250018]: 2026-01-20 14:59:49.524 250022 DEBUG nova.compute.provider_tree [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:59:49 compute-0 nova_compute[250018]: 2026-01-20 14:59:49.546 250022 DEBUG nova.scheduler.client.report [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:59:49 compute-0 nova_compute[250018]: 2026-01-20 14:59:49.598 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 14:59:49 compute-0 nova_compute[250018]: 2026-01-20 14:59:49.598 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.787s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:59:49 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:59:49 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:59:49 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:59:49.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:59:49 compute-0 nova_compute[250018]: 2026-01-20 14:59:49.783 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:59:50 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e321 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:59:50 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2233: 321 pgs: 321 active+clean; 425 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 33 KiB/s rd, 1.9 MiB/s wr, 50 op/s
Jan 20 14:59:50 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/1559493939' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:59:50 compute-0 nova_compute[250018]: 2026-01-20 14:59:50.599 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:59:50 compute-0 nova_compute[250018]: 2026-01-20 14:59:50.599 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:59:50 compute-0 nova_compute[250018]: 2026-01-20 14:59:50.599 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 14:59:51 compute-0 nova_compute[250018]: 2026-01-20 14:59:51.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:59:51 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:59:51 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:59:51 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:59:51.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:59:51 compute-0 ceph-mon[74360]: pgmap v2233: 321 pgs: 321 active+clean; 425 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 33 KiB/s rd, 1.9 MiB/s wr, 50 op/s
Jan 20 14:59:51 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/2081012519' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:59:51 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/1363363369' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:59:51 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/1540140882' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:59:51 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:59:51 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:59:51 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:59:51.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:59:52 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2234: 321 pgs: 321 active+clean; 425 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 28 KiB/s rd, 1.8 MiB/s wr, 42 op/s
Jan 20 14:59:52 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/4015297812' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:59:52 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/2238468495' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:59:52 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/2401738277' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 14:59:52 compute-0 ceph-mon[74360]: pgmap v2234: 321 pgs: 321 active+clean; 425 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 28 KiB/s rd, 1.8 MiB/s wr, 42 op/s
Jan 20 14:59:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:59:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:59:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:59:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:59:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:59:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:59:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Optimize plan auto_2026-01-20_14:59:52
Jan 20 14:59:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 14:59:52 compute-0 ceph-mgr[74653]: [balancer INFO root] do_upmap
Jan 20 14:59:52 compute-0 ceph-mgr[74653]: [balancer INFO root] pools ['vms', 'backups', 'images', '.mgr', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.log', 'default.rgw.meta', 'cephfs.cephfs.data', 'volumes', 'default.rgw.control']
Jan 20 14:59:52 compute-0 ceph-mgr[74653]: [balancer INFO root] prepared 0/10 changes
Jan 20 14:59:53 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:59:53 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:59:53 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:59:53.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:59:53 compute-0 sudo[335752]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:59:53 compute-0 sudo[335752]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:59:53 compute-0 sudo[335752]: pam_unix(sudo:session): session closed for user root
Jan 20 14:59:53 compute-0 sudo[335777]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:59:53 compute-0 sudo[335777]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:59:53 compute-0 sudo[335777]: pam_unix(sudo:session): session closed for user root
Jan 20 14:59:53 compute-0 sudo[335802]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:59:53 compute-0 sudo[335802]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:59:53 compute-0 sudo[335802]: pam_unix(sudo:session): session closed for user root
Jan 20 14:59:53 compute-0 sudo[335827]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 14:59:53 compute-0 sudo[335827]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:59:53 compute-0 nova_compute[250018]: 2026-01-20 14:59:53.650 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:59:53 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:59:53 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:59:53 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:59:53.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:59:53 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 20 14:59:53 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:59:53 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 20 14:59:53 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:59:53 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 20 14:59:53 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:59:53 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 20 14:59:53 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:59:54 compute-0 nova_compute[250018]: 2026-01-20 14:59:54.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 14:59:54 compute-0 nova_compute[250018]: 2026-01-20 14:59:54.051 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 14:59:54 compute-0 nova_compute[250018]: 2026-01-20 14:59:54.052 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 14:59:54 compute-0 sudo[335827]: pam_unix(sudo:session): session closed for user root
Jan 20 14:59:54 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2235: 321 pgs: 321 active+clean; 425 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Jan 20 14:59:54 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Jan 20 14:59:54 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 20 14:59:54 compute-0 nova_compute[250018]: 2026-01-20 14:59:54.263 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "refresh_cache-61ae2c61-01df-4ef1-8aa3-0527a43b1798" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 14:59:54 compute-0 nova_compute[250018]: 2026-01-20 14:59:54.264 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquired lock "refresh_cache-61ae2c61-01df-4ef1-8aa3-0527a43b1798" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 14:59:54 compute-0 nova_compute[250018]: 2026-01-20 14:59:54.264 250022 DEBUG nova.network.neutron [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] [instance: 61ae2c61-01df-4ef1-8aa3-0527a43b1798] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 20 14:59:54 compute-0 nova_compute[250018]: 2026-01-20 14:59:54.264 250022 DEBUG nova.objects.instance [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 61ae2c61-01df-4ef1-8aa3-0527a43b1798 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 14:59:54 compute-0 sudo[335884]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:59:54 compute-0 sudo[335884]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:59:54 compute-0 sudo[335884]: pam_unix(sudo:session): session closed for user root
Jan 20 14:59:54 compute-0 sudo[335909]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:59:54 compute-0 sudo[335909]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:59:54 compute-0 sudo[335909]: pam_unix(sudo:session): session closed for user root
Jan 20 14:59:54 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Jan 20 14:59:54 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 20 14:59:54 compute-0 nova_compute[250018]: 2026-01-20 14:59:54.785 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:59:54 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:59:54 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:59:54 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:59:54 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:59:54 compute-0 ceph-mon[74360]: pgmap v2235: 321 pgs: 321 active+clean; 425 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Jan 20 14:59:54 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 20 14:59:54 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 20 14:59:55 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e321 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 14:59:55 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:59:55 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:59:55 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:59:55.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:59:55 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:59:55 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:59:55 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:59:55.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:59:55 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/2354938243' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 14:59:56 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2236: 321 pgs: 321 active+clean; 425 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Jan 20 14:59:56 compute-0 nova_compute[250018]: 2026-01-20 14:59:56.511 250022 DEBUG nova.network.neutron [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] [instance: 61ae2c61-01df-4ef1-8aa3-0527a43b1798] Updating instance_info_cache with network_info: [{"id": "ee1c78ce-d0fd-4b6b-8a7c-e3aff97e74d7", "address": "fa:16:3e:8c:76:cf", "network": {"id": "3aad5d71-9bbf-496d-805e-819d17c4343e", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1714826441-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "105e56abe3804424885c7aa8d1216d12", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee1c78ce-d0", "ovs_interfaceid": "ee1c78ce-d0fd-4b6b-8a7c-e3aff97e74d7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 14:59:56 compute-0 nova_compute[250018]: 2026-01-20 14:59:56.539 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Releasing lock "refresh_cache-61ae2c61-01df-4ef1-8aa3-0527a43b1798" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 14:59:56 compute-0 nova_compute[250018]: 2026-01-20 14:59:56.539 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] [instance: 61ae2c61-01df-4ef1-8aa3-0527a43b1798] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 20 14:59:57 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 20 14:59:57 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:59:57 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:59:57 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:59:57.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:59:57 compute-0 ceph-mon[74360]: pgmap v2236: 321 pgs: 321 active+clean; 425 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Jan 20 14:59:57 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:59:57 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 20 14:59:57 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:59:57 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 14:59:57 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:59:57 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 20 14:59:57 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:59:57 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 20 14:59:57 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:59:57 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 245968ac-0f45-47eb-9da4-2a6d2bf86878 does not exist
Jan 20 14:59:57 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev c37a36e8-0ced-4306-9287-022a51dd2a0a does not exist
Jan 20 14:59:57 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 65a58588-41c0-47da-be9d-35c149865765 does not exist
Jan 20 14:59:57 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 20 14:59:57 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:59:57 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 20 14:59:57 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:59:57 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 14:59:57 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:59:57 compute-0 sudo[335936]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:59:57 compute-0 sudo[335936]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:59:57 compute-0 sudo[335936]: pam_unix(sudo:session): session closed for user root
Jan 20 14:59:57 compute-0 sudo[335961]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:59:57 compute-0 sudo[335961]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:59:57 compute-0 sudo[335961]: pam_unix(sudo:session): session closed for user root
Jan 20 14:59:57 compute-0 sudo[335986]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:59:57 compute-0 sudo[335986]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:59:57 compute-0 sudo[335986]: pam_unix(sudo:session): session closed for user root
Jan 20 14:59:57 compute-0 sudo[336011]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 14:59:57 compute-0 sudo[336011]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:59:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 14:59:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 14:59:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:59:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:59:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:59:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:59:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:59:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:59:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:59:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:59:57 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:59:57 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 14:59:57 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:59:57.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 14:59:57 compute-0 podman[336078]: 2026-01-20 14:59:57.946889507 +0000 UTC m=+0.044089379 container create 9a9c0428f47df4c7feb0db9a64a7e9f6adaee97925166311e2860e413df9a0e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_colden, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 20 14:59:57 compute-0 systemd[1]: Started libpod-conmon-9a9c0428f47df4c7feb0db9a64a7e9f6adaee97925166311e2860e413df9a0e6.scope.
Jan 20 14:59:58 compute-0 podman[336078]: 2026-01-20 14:59:57.92399803 +0000 UTC m=+0.021197952 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:59:58 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:59:58 compute-0 podman[336078]: 2026-01-20 14:59:58.039494831 +0000 UTC m=+0.136694733 container init 9a9c0428f47df4c7feb0db9a64a7e9f6adaee97925166311e2860e413df9a0e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_colden, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:59:58 compute-0 podman[336078]: 2026-01-20 14:59:58.047324362 +0000 UTC m=+0.144524244 container start 9a9c0428f47df4c7feb0db9a64a7e9f6adaee97925166311e2860e413df9a0e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_colden, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 14:59:58 compute-0 podman[336078]: 2026-01-20 14:59:58.050273572 +0000 UTC m=+0.147473494 container attach 9a9c0428f47df4c7feb0db9a64a7e9f6adaee97925166311e2860e413df9a0e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_colden, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 20 14:59:58 compute-0 compassionate_colden[336094]: 167 167
Jan 20 14:59:58 compute-0 systemd[1]: libpod-9a9c0428f47df4c7feb0db9a64a7e9f6adaee97925166311e2860e413df9a0e6.scope: Deactivated successfully.
Jan 20 14:59:58 compute-0 conmon[336094]: conmon 9a9c0428f47df4c7feb0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9a9c0428f47df4c7feb0db9a64a7e9f6adaee97925166311e2860e413df9a0e6.scope/container/memory.events
Jan 20 14:59:58 compute-0 podman[336078]: 2026-01-20 14:59:58.054473175 +0000 UTC m=+0.151673047 container died 9a9c0428f47df4c7feb0db9a64a7e9f6adaee97925166311e2860e413df9a0e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_colden, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 20 14:59:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-f184971a9729b3759f8b36f09a5fa3c46f3092cd2e87f2909e9358488d0dcbbd-merged.mount: Deactivated successfully.
Jan 20 14:59:58 compute-0 podman[336078]: 2026-01-20 14:59:58.097815713 +0000 UTC m=+0.195015585 container remove 9a9c0428f47df4c7feb0db9a64a7e9f6adaee97925166311e2860e413df9a0e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_colden, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:59:58 compute-0 systemd[1]: libpod-conmon-9a9c0428f47df4c7feb0db9a64a7e9f6adaee97925166311e2860e413df9a0e6.scope: Deactivated successfully.
Jan 20 14:59:58 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2237: 321 pgs: 321 active+clean; 425 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 317 KiB/s wr, 77 op/s
Jan 20 14:59:58 compute-0 podman[336117]: 2026-01-20 14:59:58.268302994 +0000 UTC m=+0.038820186 container create 5e72cac464c3ceea9015e59b05c958516a91206af177ae321c4108ac9884a097 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_kowalevski, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 20 14:59:58 compute-0 systemd[1]: Started libpod-conmon-5e72cac464c3ceea9015e59b05c958516a91206af177ae321c4108ac9884a097.scope.
Jan 20 14:59:58 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:59:58 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:59:58 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:59:58 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 14:59:58 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 14:59:58 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 14:59:58 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 14:59:58 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 14:59:58 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:59:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43d06a7d47cce9ee4b1b8ee0e5bebb686173bab1fe7cfafb66cec7c02322b5cb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:59:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43d06a7d47cce9ee4b1b8ee0e5bebb686173bab1fe7cfafb66cec7c02322b5cb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:59:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43d06a7d47cce9ee4b1b8ee0e5bebb686173bab1fe7cfafb66cec7c02322b5cb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:59:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43d06a7d47cce9ee4b1b8ee0e5bebb686173bab1fe7cfafb66cec7c02322b5cb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:59:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43d06a7d47cce9ee4b1b8ee0e5bebb686173bab1fe7cfafb66cec7c02322b5cb/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 14:59:58 compute-0 podman[336117]: 2026-01-20 14:59:58.252872318 +0000 UTC m=+0.023389530 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:59:58 compute-0 podman[336117]: 2026-01-20 14:59:58.35241808 +0000 UTC m=+0.122935292 container init 5e72cac464c3ceea9015e59b05c958516a91206af177ae321c4108ac9884a097 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_kowalevski, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 20 14:59:58 compute-0 podman[336117]: 2026-01-20 14:59:58.359941173 +0000 UTC m=+0.130458365 container start 5e72cac464c3ceea9015e59b05c958516a91206af177ae321c4108ac9884a097 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_kowalevski, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:59:58 compute-0 podman[336117]: 2026-01-20 14:59:58.362857451 +0000 UTC m=+0.133374643 container attach 5e72cac464c3ceea9015e59b05c958516a91206af177ae321c4108ac9884a097 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_kowalevski, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 20 14:59:58 compute-0 nova_compute[250018]: 2026-01-20 14:59:58.653 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:59:59 compute-0 peaceful_kowalevski[336133]: --> passed data devices: 0 physical, 1 LVM
Jan 20 14:59:59 compute-0 peaceful_kowalevski[336133]: --> relative data size: 1.0
Jan 20 14:59:59 compute-0 peaceful_kowalevski[336133]: --> All data devices are unavailable
Jan 20 14:59:59 compute-0 systemd[1]: libpod-5e72cac464c3ceea9015e59b05c958516a91206af177ae321c4108ac9884a097.scope: Deactivated successfully.
Jan 20 14:59:59 compute-0 podman[336117]: 2026-01-20 14:59:59.20628816 +0000 UTC m=+0.976805362 container died 5e72cac464c3ceea9015e59b05c958516a91206af177ae321c4108ac9884a097 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_kowalevski, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3)
Jan 20 14:59:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-43d06a7d47cce9ee4b1b8ee0e5bebb686173bab1fe7cfafb66cec7c02322b5cb-merged.mount: Deactivated successfully.
Jan 20 14:59:59 compute-0 podman[336117]: 2026-01-20 14:59:59.273540351 +0000 UTC m=+1.044057563 container remove 5e72cac464c3ceea9015e59b05c958516a91206af177ae321c4108ac9884a097 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_kowalevski, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:59:59 compute-0 systemd[1]: libpod-conmon-5e72cac464c3ceea9015e59b05c958516a91206af177ae321c4108ac9884a097.scope: Deactivated successfully.
Jan 20 14:59:59 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:59:59 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:59:59 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:14:59:59.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:59:59 compute-0 sudo[336011]: pam_unix(sudo:session): session closed for user root
Jan 20 14:59:59 compute-0 ceph-mon[74360]: pgmap v2237: 321 pgs: 321 active+clean; 425 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 317 KiB/s wr, 77 op/s
Jan 20 14:59:59 compute-0 sudo[336162]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:59:59 compute-0 sudo[336162]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:59:59 compute-0 sudo[336162]: pam_unix(sudo:session): session closed for user root
Jan 20 14:59:59 compute-0 sudo[336187]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 14:59:59 compute-0 sudo[336187]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:59:59 compute-0 sudo[336187]: pam_unix(sudo:session): session closed for user root
Jan 20 14:59:59 compute-0 sudo[336212]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 14:59:59 compute-0 sudo[336212]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:59:59 compute-0 sudo[336212]: pam_unix(sudo:session): session closed for user root
Jan 20 14:59:59 compute-0 sudo[336237]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- lvm list --format json
Jan 20 14:59:59 compute-0 sudo[336237]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 14:59:59 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 14:59:59 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 14:59:59 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:14:59:59.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 14:59:59 compute-0 nova_compute[250018]: 2026-01-20 14:59:59.833 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 14:59:59 compute-0 podman[336306]: 2026-01-20 14:59:59.849359422 +0000 UTC m=+0.041557040 container create c2fda3242578b5cbe0778eec387f2756f1c6341185d94386d27c1e7609a1714b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_leavitt, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 20 14:59:59 compute-0 systemd[1]: Started libpod-conmon-c2fda3242578b5cbe0778eec387f2756f1c6341185d94386d27c1e7609a1714b.scope.
Jan 20 14:59:59 compute-0 systemd[1]: Started libcrun container.
Jan 20 14:59:59 compute-0 podman[336306]: 2026-01-20 14:59:59.829857686 +0000 UTC m=+0.022055334 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 14:59:59 compute-0 podman[336306]: 2026-01-20 14:59:59.929806079 +0000 UTC m=+0.122003747 container init c2fda3242578b5cbe0778eec387f2756f1c6341185d94386d27c1e7609a1714b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_leavitt, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 14:59:59 compute-0 podman[336306]: 2026-01-20 14:59:59.93763295 +0000 UTC m=+0.129830578 container start c2fda3242578b5cbe0778eec387f2756f1c6341185d94386d27c1e7609a1714b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_leavitt, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:59:59 compute-0 podman[336306]: 2026-01-20 14:59:59.941184695 +0000 UTC m=+0.133382343 container attach c2fda3242578b5cbe0778eec387f2756f1c6341185d94386d27c1e7609a1714b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_leavitt, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:59:59 compute-0 magical_leavitt[336322]: 167 167
Jan 20 14:59:59 compute-0 systemd[1]: libpod-c2fda3242578b5cbe0778eec387f2756f1c6341185d94386d27c1e7609a1714b.scope: Deactivated successfully.
Jan 20 14:59:59 compute-0 podman[336306]: 2026-01-20 14:59:59.942752467 +0000 UTC m=+0.134950085 container died c2fda3242578b5cbe0778eec387f2756f1c6341185d94386d27c1e7609a1714b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_leavitt, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 20 14:59:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-42c75b595a9abde964d4e40a11e76605525e261c869e85202f7bac6eacf47c5b-merged.mount: Deactivated successfully.
Jan 20 14:59:59 compute-0 podman[336306]: 2026-01-20 14:59:59.986065494 +0000 UTC m=+0.178263112 container remove c2fda3242578b5cbe0778eec387f2756f1c6341185d94386d27c1e7609a1714b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_leavitt, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 15:00:00 compute-0 ceph-mon[74360]: log_channel(cluster) log [INF] : overall HEALTH_OK
Jan 20 15:00:00 compute-0 systemd[1]: libpod-conmon-c2fda3242578b5cbe0778eec387f2756f1c6341185d94386d27c1e7609a1714b.scope: Deactivated successfully.
Jan 20 15:00:00 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e321 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:00:00 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2238: 321 pgs: 321 active+clean; 400 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 318 KiB/s wr, 106 op/s
Jan 20 15:00:00 compute-0 podman[336346]: 2026-01-20 15:00:00.162478086 +0000 UTC m=+0.050261835 container create 464d901fbf5eb99eb472d0333c8b16c0381da54239f45351a062f1a6c8fc3bdd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_meitner, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True)
Jan 20 15:00:00 compute-0 systemd[1]: Started libpod-conmon-464d901fbf5eb99eb472d0333c8b16c0381da54239f45351a062f1a6c8fc3bdd.scope.
Jan 20 15:00:00 compute-0 podman[336346]: 2026-01-20 15:00:00.138830639 +0000 UTC m=+0.026614428 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:00:00 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:00:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/716fa1c8ef2bd166803fda939cee1f34281d68d550af5cbe99c06933c6f38b36/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 15:00:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/716fa1c8ef2bd166803fda939cee1f34281d68d550af5cbe99c06933c6f38b36/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 15:00:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/716fa1c8ef2bd166803fda939cee1f34281d68d550af5cbe99c06933c6f38b36/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 15:00:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/716fa1c8ef2bd166803fda939cee1f34281d68d550af5cbe99c06933c6f38b36/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 15:00:00 compute-0 podman[336346]: 2026-01-20 15:00:00.250709022 +0000 UTC m=+0.138492781 container init 464d901fbf5eb99eb472d0333c8b16c0381da54239f45351a062f1a6c8fc3bdd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_meitner, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 15:00:00 compute-0 podman[336346]: 2026-01-20 15:00:00.256490108 +0000 UTC m=+0.144273847 container start 464d901fbf5eb99eb472d0333c8b16c0381da54239f45351a062f1a6c8fc3bdd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_meitner, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 20 15:00:00 compute-0 podman[336346]: 2026-01-20 15:00:00.259214761 +0000 UTC m=+0.146998520 container attach 464d901fbf5eb99eb472d0333c8b16c0381da54239f45351a062f1a6c8fc3bdd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_meitner, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 15:00:00 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/1717918447' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:00:00 compute-0 ceph-mon[74360]: overall HEALTH_OK
Jan 20 15:00:01 compute-0 objective_meitner[336361]: {
Jan 20 15:00:01 compute-0 objective_meitner[336361]:     "0": [
Jan 20 15:00:01 compute-0 objective_meitner[336361]:         {
Jan 20 15:00:01 compute-0 objective_meitner[336361]:             "devices": [
Jan 20 15:00:01 compute-0 objective_meitner[336361]:                 "/dev/loop3"
Jan 20 15:00:01 compute-0 objective_meitner[336361]:             ],
Jan 20 15:00:01 compute-0 objective_meitner[336361]:             "lv_name": "ceph_lv0",
Jan 20 15:00:01 compute-0 objective_meitner[336361]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 15:00:01 compute-0 objective_meitner[336361]:             "lv_size": "7511998464",
Jan 20 15:00:01 compute-0 objective_meitner[336361]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e399cf45-e6b6-5393-99f1-75c601d3f188,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afc3740a-ccee-46f8-b343-22648cc89376,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 20 15:00:01 compute-0 objective_meitner[336361]:             "lv_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 15:00:01 compute-0 objective_meitner[336361]:             "name": "ceph_lv0",
Jan 20 15:00:01 compute-0 objective_meitner[336361]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 15:00:01 compute-0 objective_meitner[336361]:             "tags": {
Jan 20 15:00:01 compute-0 objective_meitner[336361]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 15:00:01 compute-0 objective_meitner[336361]:                 "ceph.block_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 15:00:01 compute-0 objective_meitner[336361]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 15:00:01 compute-0 objective_meitner[336361]:                 "ceph.cluster_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 15:00:01 compute-0 objective_meitner[336361]:                 "ceph.cluster_name": "ceph",
Jan 20 15:00:01 compute-0 objective_meitner[336361]:                 "ceph.crush_device_class": "",
Jan 20 15:00:01 compute-0 objective_meitner[336361]:                 "ceph.encrypted": "0",
Jan 20 15:00:01 compute-0 objective_meitner[336361]:                 "ceph.osd_fsid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 15:00:01 compute-0 objective_meitner[336361]:                 "ceph.osd_id": "0",
Jan 20 15:00:01 compute-0 objective_meitner[336361]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 15:00:01 compute-0 objective_meitner[336361]:                 "ceph.type": "block",
Jan 20 15:00:01 compute-0 objective_meitner[336361]:                 "ceph.vdo": "0"
Jan 20 15:00:01 compute-0 objective_meitner[336361]:             },
Jan 20 15:00:01 compute-0 objective_meitner[336361]:             "type": "block",
Jan 20 15:00:01 compute-0 objective_meitner[336361]:             "vg_name": "ceph_vg0"
Jan 20 15:00:01 compute-0 objective_meitner[336361]:         }
Jan 20 15:00:01 compute-0 objective_meitner[336361]:     ]
Jan 20 15:00:01 compute-0 objective_meitner[336361]: }
Jan 20 15:00:01 compute-0 systemd[1]: libpod-464d901fbf5eb99eb472d0333c8b16c0381da54239f45351a062f1a6c8fc3bdd.scope: Deactivated successfully.
Jan 20 15:00:01 compute-0 conmon[336361]: conmon 464d901fbf5eb99eb472 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-464d901fbf5eb99eb472d0333c8b16c0381da54239f45351a062f1a6c8fc3bdd.scope/container/memory.events
Jan 20 15:00:01 compute-0 podman[336346]: 2026-01-20 15:00:01.068614993 +0000 UTC m=+0.956398732 container died 464d901fbf5eb99eb472d0333c8b16c0381da54239f45351a062f1a6c8fc3bdd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_meitner, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 15:00:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-716fa1c8ef2bd166803fda939cee1f34281d68d550af5cbe99c06933c6f38b36-merged.mount: Deactivated successfully.
Jan 20 15:00:01 compute-0 podman[336346]: 2026-01-20 15:00:01.119559286 +0000 UTC m=+1.007343025 container remove 464d901fbf5eb99eb472d0333c8b16c0381da54239f45351a062f1a6c8fc3bdd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_meitner, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 15:00:01 compute-0 systemd[1]: libpod-conmon-464d901fbf5eb99eb472d0333c8b16c0381da54239f45351a062f1a6c8fc3bdd.scope: Deactivated successfully.
Jan 20 15:00:01 compute-0 sudo[336237]: pam_unix(sudo:session): session closed for user root
Jan 20 15:00:01 compute-0 sudo[336383]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:00:01 compute-0 sudo[336383]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:00:01 compute-0 sudo[336383]: pam_unix(sudo:session): session closed for user root
Jan 20 15:00:01 compute-0 sudo[336408]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:00:01 compute-0 sudo[336408]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:00:01 compute-0 sudo[336408]: pam_unix(sudo:session): session closed for user root
Jan 20 15:00:01 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:00:01 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:00:01 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:00:01.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:00:01 compute-0 sudo[336433]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:00:01 compute-0 sudo[336433]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:00:01 compute-0 sudo[336433]: pam_unix(sudo:session): session closed for user root
Jan 20 15:00:01 compute-0 sudo[336458]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- raw list --format json
Jan 20 15:00:01 compute-0 sudo[336458]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:00:01 compute-0 ceph-mon[74360]: pgmap v2238: 321 pgs: 321 active+clean; 400 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 318 KiB/s wr, 106 op/s
Jan 20 15:00:01 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/3689656102' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:00:01 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:00:01 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:00:01 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:00:01.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:00:01 compute-0 podman[336522]: 2026-01-20 15:00:01.679568101 +0000 UTC m=+0.040361239 container create 68930a6bccf08084136e3710e47660414d86da017b745d5350cfbce3e967b090 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_goldberg, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 20 15:00:01 compute-0 systemd[1]: Started libpod-conmon-68930a6bccf08084136e3710e47660414d86da017b745d5350cfbce3e967b090.scope.
Jan 20 15:00:01 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:00:01 compute-0 podman[336522]: 2026-01-20 15:00:01.745368183 +0000 UTC m=+0.106161331 container init 68930a6bccf08084136e3710e47660414d86da017b745d5350cfbce3e967b090 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_goldberg, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2)
Jan 20 15:00:01 compute-0 podman[336522]: 2026-01-20 15:00:01.751783695 +0000 UTC m=+0.112576843 container start 68930a6bccf08084136e3710e47660414d86da017b745d5350cfbce3e967b090 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_goldberg, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:00:01 compute-0 podman[336522]: 2026-01-20 15:00:01.754856228 +0000 UTC m=+0.115649386 container attach 68930a6bccf08084136e3710e47660414d86da017b745d5350cfbce3e967b090 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_goldberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS)
Jan 20 15:00:01 compute-0 podman[336522]: 2026-01-20 15:00:01.660753743 +0000 UTC m=+0.021546901 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:00:01 compute-0 boring_goldberg[336538]: 167 167
Jan 20 15:00:01 compute-0 systemd[1]: libpod-68930a6bccf08084136e3710e47660414d86da017b745d5350cfbce3e967b090.scope: Deactivated successfully.
Jan 20 15:00:01 compute-0 podman[336522]: 2026-01-20 15:00:01.756695698 +0000 UTC m=+0.117488836 container died 68930a6bccf08084136e3710e47660414d86da017b745d5350cfbce3e967b090 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_goldberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 20 15:00:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-a0b3d33b583908d207c52a055a8c853d154ce5ccef91c0b3897dd4a1cd354f0b-merged.mount: Deactivated successfully.
Jan 20 15:00:01 compute-0 podman[336522]: 2026-01-20 15:00:01.791986439 +0000 UTC m=+0.152779577 container remove 68930a6bccf08084136e3710e47660414d86da017b745d5350cfbce3e967b090 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_goldberg, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 15:00:01 compute-0 systemd[1]: libpod-conmon-68930a6bccf08084136e3710e47660414d86da017b745d5350cfbce3e967b090.scope: Deactivated successfully.
Jan 20 15:00:01 compute-0 podman[336564]: 2026-01-20 15:00:01.956334755 +0000 UTC m=+0.041200360 container create 248a32a7293fef6d156e1f37a41636f70794a3aee4606d1b26adec91f8d8a57a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_banach, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 15:00:01 compute-0 systemd[1]: Started libpod-conmon-248a32a7293fef6d156e1f37a41636f70794a3aee4606d1b26adec91f8d8a57a.scope.
Jan 20 15:00:02 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:00:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd0abc12b8d4f2a6b52f47c050a434523299b1f439e968bbce972f3c595d7e75/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 15:00:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd0abc12b8d4f2a6b52f47c050a434523299b1f439e968bbce972f3c595d7e75/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 15:00:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd0abc12b8d4f2a6b52f47c050a434523299b1f439e968bbce972f3c595d7e75/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 15:00:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd0abc12b8d4f2a6b52f47c050a434523299b1f439e968bbce972f3c595d7e75/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 15:00:02 compute-0 podman[336564]: 2026-01-20 15:00:02.030520303 +0000 UTC m=+0.115385938 container init 248a32a7293fef6d156e1f37a41636f70794a3aee4606d1b26adec91f8d8a57a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_banach, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 20 15:00:02 compute-0 podman[336564]: 2026-01-20 15:00:01.938017472 +0000 UTC m=+0.022883097 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:00:02 compute-0 podman[336564]: 2026-01-20 15:00:02.037267295 +0000 UTC m=+0.122132900 container start 248a32a7293fef6d156e1f37a41636f70794a3aee4606d1b26adec91f8d8a57a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_banach, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 20 15:00:02 compute-0 podman[336564]: 2026-01-20 15:00:02.041989693 +0000 UTC m=+0.126855298 container attach 248a32a7293fef6d156e1f37a41636f70794a3aee4606d1b26adec91f8d8a57a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_banach, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 15:00:02 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2239: 321 pgs: 321 active+clean; 388 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 18 KiB/s wr, 126 op/s
Jan 20 15:00:02 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/3486972496' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:00:02 compute-0 trusting_banach[336581]: {
Jan 20 15:00:02 compute-0 trusting_banach[336581]:     "afc3740a-ccee-46f8-b343-22648cc89376": {
Jan 20 15:00:02 compute-0 trusting_banach[336581]:         "ceph_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 15:00:02 compute-0 trusting_banach[336581]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 20 15:00:02 compute-0 trusting_banach[336581]:         "osd_id": 0,
Jan 20 15:00:02 compute-0 trusting_banach[336581]:         "osd_uuid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 15:00:02 compute-0 trusting_banach[336581]:         "type": "bluestore"
Jan 20 15:00:02 compute-0 trusting_banach[336581]:     }
Jan 20 15:00:02 compute-0 trusting_banach[336581]: }
Jan 20 15:00:02 compute-0 systemd[1]: libpod-248a32a7293fef6d156e1f37a41636f70794a3aee4606d1b26adec91f8d8a57a.scope: Deactivated successfully.
Jan 20 15:00:02 compute-0 podman[336564]: 2026-01-20 15:00:02.920839785 +0000 UTC m=+1.005705390 container died 248a32a7293fef6d156e1f37a41636f70794a3aee4606d1b26adec91f8d8a57a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_banach, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 20 15:00:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-fd0abc12b8d4f2a6b52f47c050a434523299b1f439e968bbce972f3c595d7e75-merged.mount: Deactivated successfully.
Jan 20 15:00:02 compute-0 podman[336564]: 2026-01-20 15:00:02.982560118 +0000 UTC m=+1.067425723 container remove 248a32a7293fef6d156e1f37a41636f70794a3aee4606d1b26adec91f8d8a57a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_banach, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:00:02 compute-0 systemd[1]: libpod-conmon-248a32a7293fef6d156e1f37a41636f70794a3aee4606d1b26adec91f8d8a57a.scope: Deactivated successfully.
Jan 20 15:00:03 compute-0 sudo[336458]: pam_unix(sudo:session): session closed for user root
Jan 20 15:00:03 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 15:00:03 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:00:03 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 15:00:03 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:00:03 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev f7c223b9-22ad-474d-ba52-608ed18cd898 does not exist
Jan 20 15:00:03 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 1b4f926b-4de7-45af-a39f-4fde219d5516 does not exist
Jan 20 15:00:03 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 8f8072c6-77c0-4b72-8a4a-9d3f1d1359ca does not exist
Jan 20 15:00:03 compute-0 sudo[336616]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:00:03 compute-0 sudo[336616]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:00:03 compute-0 sudo[336616]: pam_unix(sudo:session): session closed for user root
Jan 20 15:00:03 compute-0 sudo[336641]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 15:00:03 compute-0 sudo[336641]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:00:03 compute-0 sudo[336641]: pam_unix(sudo:session): session closed for user root
Jan 20 15:00:03 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:00:03 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:00:03 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:00:03.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:00:03 compute-0 ceph-mon[74360]: pgmap v2239: 321 pgs: 321 active+clean; 388 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 18 KiB/s wr, 126 op/s
Jan 20 15:00:03 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/2499902095' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:00:03 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:00:03 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:00:03 compute-0 nova_compute[250018]: 2026-01-20 15:00:03.655 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:00:03 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:00:03 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:00:03 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:00:03.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:00:04 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2240: 321 pgs: 321 active+clean; 394 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 705 KiB/s wr, 141 op/s
Jan 20 15:00:04 compute-0 ceph-mon[74360]: pgmap v2240: 321 pgs: 321 active+clean; 394 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 705 KiB/s wr, 141 op/s
Jan 20 15:00:04 compute-0 nova_compute[250018]: 2026-01-20 15:00:04.836 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:00:05 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e321 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:00:05 compute-0 ceph-mon[74360]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #108. Immutable memtables: 0.
Jan 20 15:00:05 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:00:05.014812) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 15:00:05 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:856] [default] [JOB 63] Flushing memtable with next log file: 108
Jan 20 15:00:05 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768921205014895, "job": 63, "event": "flush_started", "num_memtables": 1, "num_entries": 1733, "num_deletes": 259, "total_data_size": 2926845, "memory_usage": 2968800, "flush_reason": "Manual Compaction"}
Jan 20 15:00:05 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:885] [default] [JOB 63] Level-0 flush table #109: started
Jan 20 15:00:05 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768921205031687, "cf_name": "default", "job": 63, "event": "table_file_creation", "file_number": 109, "file_size": 2830046, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 48946, "largest_seqno": 50678, "table_properties": {"data_size": 2822148, "index_size": 4648, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2181, "raw_key_size": 17420, "raw_average_key_size": 20, "raw_value_size": 2805888, "raw_average_value_size": 3277, "num_data_blocks": 203, "num_entries": 856, "num_filter_entries": 856, "num_deletions": 259, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768921060, "oldest_key_time": 1768921060, "file_creation_time": 1768921205, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1688fb6f-f397-4aac-a0e8-874ff91d97a7", "db_session_id": "5V3N2TVXYZBCXP55EZHK", "orig_file_number": 109, "seqno_to_time_mapping": "N/A"}}
Jan 20 15:00:05 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 63] Flush lasted 16922 microseconds, and 8531 cpu microseconds.
Jan 20 15:00:05 compute-0 ceph-mon[74360]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 15:00:05 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:00:05.031738) [db/flush_job.cc:967] [default] [JOB 63] Level-0 flush table #109: 2830046 bytes OK
Jan 20 15:00:05 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:00:05.031761) [db/memtable_list.cc:519] [default] Level-0 commit table #109 started
Jan 20 15:00:05 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:00:05.033338) [db/memtable_list.cc:722] [default] Level-0 commit table #109: memtable #1 done
Jan 20 15:00:05 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:00:05.033354) EVENT_LOG_v1 {"time_micros": 1768921205033349, "job": 63, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 15:00:05 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:00:05.033372) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 15:00:05 compute-0 ceph-mon[74360]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 63] Try to delete WAL files size 2919423, prev total WAL file size 2919423, number of live WAL files 2.
Jan 20 15:00:05 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000105.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:00:05 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:00:05.034471) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031373634' seq:72057594037927935, type:22 .. '6C6F676D0032303138' seq:0, type:0; will stop at (end)
Jan 20 15:00:05 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 64] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 15:00:05 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 63 Base level 0, inputs: [109(2763KB)], [107(10108KB)]
Jan 20 15:00:05 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768921205034549, "job": 64, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [109], "files_L6": [107], "score": -1, "input_data_size": 13181009, "oldest_snapshot_seqno": -1}
Jan 20 15:00:05 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 64] Generated table #110: 8092 keys, 13022669 bytes, temperature: kUnknown
Jan 20 15:00:05 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768921205104335, "cf_name": "default", "job": 64, "event": "table_file_creation", "file_number": 110, "file_size": 13022669, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12967004, "index_size": 34305, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20293, "raw_key_size": 208928, "raw_average_key_size": 25, "raw_value_size": 12821292, "raw_average_value_size": 1584, "num_data_blocks": 1357, "num_entries": 8092, "num_filter_entries": 8092, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768917305, "oldest_key_time": 0, "file_creation_time": 1768921205, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1688fb6f-f397-4aac-a0e8-874ff91d97a7", "db_session_id": "5V3N2TVXYZBCXP55EZHK", "orig_file_number": 110, "seqno_to_time_mapping": "N/A"}}
Jan 20 15:00:05 compute-0 ceph-mon[74360]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 15:00:05 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:00:05.104670) [db/compaction/compaction_job.cc:1663] [default] [JOB 64] Compacted 1@0 + 1@6 files to L6 => 13022669 bytes
Jan 20 15:00:05 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:00:05.106201) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 188.6 rd, 186.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.7, 9.9 +0.0 blob) out(12.4 +0.0 blob), read-write-amplify(9.3) write-amplify(4.6) OK, records in: 8632, records dropped: 540 output_compression: NoCompression
Jan 20 15:00:05 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:00:05.106220) EVENT_LOG_v1 {"time_micros": 1768921205106212, "job": 64, "event": "compaction_finished", "compaction_time_micros": 69901, "compaction_time_cpu_micros": 34781, "output_level": 6, "num_output_files": 1, "total_output_size": 13022669, "num_input_records": 8632, "num_output_records": 8092, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 15:00:05 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000109.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:00:05 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768921205106782, "job": 64, "event": "table_file_deletion", "file_number": 109}
Jan 20 15:00:05 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000107.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:00:05 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768921205108737, "job": 64, "event": "table_file_deletion", "file_number": 107}
Jan 20 15:00:05 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:00:05.034338) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:00:05 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:00:05.108817) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:00:05 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:00:05.108822) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:00:05 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:00:05.108824) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:00:05 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:00:05.108826) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:00:05 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:00:05.108828) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:00:05 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:00:05 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:00:05 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:00:05.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:00:05 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/3429916275' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:00:05 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/3429916275' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:00:05 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:00:05 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:00:05 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:00:05.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:00:06 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2241: 321 pgs: 321 active+clean; 368 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 179 op/s
Jan 20 15:00:06 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/3768612646' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:00:06 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/3768612646' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:00:06 compute-0 ceph-mon[74360]: pgmap v2241: 321 pgs: 321 active+clean; 368 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 179 op/s
Jan 20 15:00:07 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:00:07 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:00:07 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:00:07.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:00:07 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/1652995315' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:00:07 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/2825997578' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:00:07 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/2825997578' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:00:07 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/407955871' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:00:07 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:00:07 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:00:07 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:00:07.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:00:08 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2242: 321 pgs: 321 active+clean; 368 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 461 KiB/s rd, 1.8 MiB/s wr, 108 op/s
Jan 20 15:00:08 compute-0 ceph-mon[74360]: pgmap v2242: 321 pgs: 321 active+clean; 368 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 461 KiB/s rd, 1.8 MiB/s wr, 108 op/s
Jan 20 15:00:08 compute-0 nova_compute[250018]: 2026-01-20 15:00:08.706 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:00:09 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:00:09 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:00:09 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:00:09.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:00:09 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:00:09 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:00:09 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:00:09.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:00:09 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e321 do_prune osdmap full prune enabled
Jan 20 15:00:09 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e322 e322: 3 total, 3 up, 3 in
Jan 20 15:00:09 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e322: 3 total, 3 up, 3 in
Jan 20 15:00:09 compute-0 nova_compute[250018]: 2026-01-20 15:00:09.839 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:00:10 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e322 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:00:10 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2244: 321 pgs: 321 active+clean; 205 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 98 KiB/s rd, 2.1 MiB/s wr, 143 op/s
Jan 20 15:00:10 compute-0 ceph-mon[74360]: osdmap e322: 3 total, 3 up, 3 in
Jan 20 15:00:10 compute-0 ceph-mon[74360]: pgmap v2244: 321 pgs: 321 active+clean; 205 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 98 KiB/s rd, 2.1 MiB/s wr, 143 op/s
Jan 20 15:00:11 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:00:11 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:00:11 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:00:11.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:00:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 15:00:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:00:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 20 15:00:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:00:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.003156879594732924 of space, bias 1.0, pg target 0.9470638784198772 quantized to 32 (current 32)
Jan 20 15:00:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:00:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.000585792097059371 of space, bias 1.0, pg target 0.1757376291178113 quantized to 32 (current 32)
Jan 20 15:00:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:00:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 15:00:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:00:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0028546232319002418 of space, bias 1.0, pg target 0.8563869695700725 quantized to 32 (current 32)
Jan 20 15:00:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:00:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 20 15:00:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:00:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 15:00:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:00:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 20 15:00:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:00:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 20 15:00:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:00:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 15:00:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:00:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 20 15:00:11 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:00:11 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:00:11 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:00:11.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:00:12 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2245: 321 pgs: 321 active+clean; 171 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 295 KiB/s rd, 2.1 MiB/s wr, 134 op/s
Jan 20 15:00:12 compute-0 podman[336672]: 2026-01-20 15:00:12.468085631 +0000 UTC m=+0.057231033 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true)
Jan 20 15:00:12 compute-0 podman[336671]: 2026-01-20 15:00:12.49328321 +0000 UTC m=+0.081979070 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Jan 20 15:00:13 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:00:13 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:00:13 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:00:13.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:00:13 compute-0 ceph-mon[74360]: pgmap v2245: 321 pgs: 321 active+clean; 171 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 295 KiB/s rd, 2.1 MiB/s wr, 134 op/s
Jan 20 15:00:13 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:00:13 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:00:13 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:00:13.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:00:13 compute-0 nova_compute[250018]: 2026-01-20 15:00:13.709 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:00:14 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2246: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 1.4 MiB/s wr, 163 op/s
Jan 20 15:00:14 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/3702275476' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:00:14 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/3702275476' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:00:14 compute-0 sudo[336716]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:00:14 compute-0 sudo[336716]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:00:14 compute-0 sudo[336716]: pam_unix(sudo:session): session closed for user root
Jan 20 15:00:14 compute-0 sudo[336741]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:00:14 compute-0 sudo[336741]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:00:14 compute-0 sudo[336741]: pam_unix(sudo:session): session closed for user root
Jan 20 15:00:14 compute-0 nova_compute[250018]: 2026-01-20 15:00:14.753 250022 INFO nova.compute.manager [None req-9f0fee02-5977-4633-89fa-3a63d90bff70 d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] [instance: 61ae2c61-01df-4ef1-8aa3-0527a43b1798] Pausing
Jan 20 15:00:14 compute-0 nova_compute[250018]: 2026-01-20 15:00:14.754 250022 DEBUG nova.objects.instance [None req-9f0fee02-5977-4633-89fa-3a63d90bff70 d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] Lazy-loading 'flavor' on Instance uuid 61ae2c61-01df-4ef1-8aa3-0527a43b1798 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:00:14 compute-0 nova_compute[250018]: 2026-01-20 15:00:14.781 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768921214.780675, 61ae2c61-01df-4ef1-8aa3-0527a43b1798 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:00:14 compute-0 nova_compute[250018]: 2026-01-20 15:00:14.781 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 61ae2c61-01df-4ef1-8aa3-0527a43b1798] VM Paused (Lifecycle Event)
Jan 20 15:00:14 compute-0 nova_compute[250018]: 2026-01-20 15:00:14.782 250022 DEBUG nova.compute.manager [None req-9f0fee02-5977-4633-89fa-3a63d90bff70 d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] [instance: 61ae2c61-01df-4ef1-8aa3-0527a43b1798] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:00:14 compute-0 nova_compute[250018]: 2026-01-20 15:00:14.810 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 61ae2c61-01df-4ef1-8aa3-0527a43b1798] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:00:14 compute-0 nova_compute[250018]: 2026-01-20 15:00:14.815 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 61ae2c61-01df-4ef1-8aa3-0527a43b1798] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:00:14 compute-0 nova_compute[250018]: 2026-01-20 15:00:14.841 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:00:14 compute-0 nova_compute[250018]: 2026-01-20 15:00:14.848 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 61ae2c61-01df-4ef1-8aa3-0527a43b1798] During sync_power_state the instance has a pending task (pausing). Skip.
Jan 20 15:00:15 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e322 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:00:15 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e322 do_prune osdmap full prune enabled
Jan 20 15:00:15 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e323 e323: 3 total, 3 up, 3 in
Jan 20 15:00:15 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e323: 3 total, 3 up, 3 in
Jan 20 15:00:15 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:00:15 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:00:15 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:00:15.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:00:15 compute-0 ceph-mon[74360]: pgmap v2246: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 1.4 MiB/s wr, 163 op/s
Jan 20 15:00:15 compute-0 ceph-mon[74360]: osdmap e323: 3 total, 3 up, 3 in
Jan 20 15:00:15 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:00:15 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:00:15 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:00:15.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:00:16 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2248: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 25 KiB/s wr, 196 op/s
Jan 20 15:00:17 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:00:17 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:00:17 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:00:17.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:00:17 compute-0 nova_compute[250018]: 2026-01-20 15:00:17.314 250022 INFO nova.compute.manager [None req-ebda033a-62de-4b19-8f9c-35a3b521d1ea d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] [instance: 61ae2c61-01df-4ef1-8aa3-0527a43b1798] Unpausing
Jan 20 15:00:17 compute-0 nova_compute[250018]: 2026-01-20 15:00:17.315 250022 DEBUG nova.objects.instance [None req-ebda033a-62de-4b19-8f9c-35a3b521d1ea d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] Lazy-loading 'flavor' on Instance uuid 61ae2c61-01df-4ef1-8aa3-0527a43b1798 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:00:17 compute-0 nova_compute[250018]: 2026-01-20 15:00:17.335 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768921217.3350384, 61ae2c61-01df-4ef1-8aa3-0527a43b1798 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:00:17 compute-0 nova_compute[250018]: 2026-01-20 15:00:17.335 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 61ae2c61-01df-4ef1-8aa3-0527a43b1798] VM Resumed (Lifecycle Event)
Jan 20 15:00:17 compute-0 virtqemud[249565]: argument unsupported: QEMU guest agent is not configured
Jan 20 15:00:17 compute-0 nova_compute[250018]: 2026-01-20 15:00:17.339 250022 DEBUG nova.virt.libvirt.guest [None req-ebda033a-62de-4b19-8f9c-35a3b521d1ea d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] [instance: 61ae2c61-01df-4ef1-8aa3-0527a43b1798] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Jan 20 15:00:17 compute-0 nova_compute[250018]: 2026-01-20 15:00:17.339 250022 DEBUG nova.compute.manager [None req-ebda033a-62de-4b19-8f9c-35a3b521d1ea d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] [instance: 61ae2c61-01df-4ef1-8aa3-0527a43b1798] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:00:17 compute-0 ceph-mon[74360]: pgmap v2248: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 25 KiB/s wr, 196 op/s
Jan 20 15:00:17 compute-0 nova_compute[250018]: 2026-01-20 15:00:17.396 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 61ae2c61-01df-4ef1-8aa3-0527a43b1798] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:00:17 compute-0 nova_compute[250018]: 2026-01-20 15:00:17.399 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 61ae2c61-01df-4ef1-8aa3-0527a43b1798] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: paused, current task_state: unpausing, current DB power_state: 3, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:00:17 compute-0 nova_compute[250018]: 2026-01-20 15:00:17.449 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 61ae2c61-01df-4ef1-8aa3-0527a43b1798] During sync_power_state the instance has a pending task (unpausing). Skip.
Jan 20 15:00:17 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:00:17 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:00:17 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:00:17.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:00:18 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2249: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 24 KiB/s wr, 164 op/s
Jan 20 15:00:18 compute-0 nova_compute[250018]: 2026-01-20 15:00:18.711 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:00:19 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:00:19.084 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=45, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=44) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:00:19 compute-0 nova_compute[250018]: 2026-01-20 15:00:19.084 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:00:19 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:00:19.085 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 15:00:19 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:00:19 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:00:19 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:00:19.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:00:19 compute-0 ceph-mon[74360]: pgmap v2249: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 24 KiB/s wr, 164 op/s
Jan 20 15:00:19 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:00:19 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:00:19 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:00:19.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:00:19 compute-0 nova_compute[250018]: 2026-01-20 15:00:19.843 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:00:20 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:00:20 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2250: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 19 KiB/s wr, 108 op/s
Jan 20 15:00:20 compute-0 ceph-mon[74360]: pgmap v2250: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 19 KiB/s wr, 108 op/s
Jan 20 15:00:21 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:00:21 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:00:21 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:00:21.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:00:21 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:00:21 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:00:21 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:00:21.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:00:22 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2251: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 18 KiB/s wr, 90 op/s
Jan 20 15:00:22 compute-0 ceph-mon[74360]: pgmap v2251: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 18 KiB/s wr, 90 op/s
Jan 20 15:00:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:00:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:00:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:00:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:00:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:00:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:00:23 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:00:23 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:00:23 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:00:23.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:00:23 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:00:23 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:00:23 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:00:23.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:00:23 compute-0 nova_compute[250018]: 2026-01-20 15:00:23.713 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:00:24 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2252: 321 pgs: 321 active+clean; 170 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 523 KiB/s wr, 61 op/s
Jan 20 15:00:24 compute-0 nova_compute[250018]: 2026-01-20 15:00:24.846 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:00:25 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:00:25 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:00:25.086 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=367c1a2c-b16a-4828-ab5a-626bb50023b4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '45'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:00:25 compute-0 ceph-mon[74360]: pgmap v2252: 321 pgs: 321 active+clean; 170 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 523 KiB/s wr, 61 op/s
Jan 20 15:00:25 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:00:25 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:00:25 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:00:25.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:00:25 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:00:25 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:00:25 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:00:25.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:00:26 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2253: 321 pgs: 321 active+clean; 197 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 556 KiB/s rd, 2.3 MiB/s wr, 67 op/s
Jan 20 15:00:27 compute-0 ceph-mon[74360]: pgmap v2253: 321 pgs: 321 active+clean; 197 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 556 KiB/s rd, 2.3 MiB/s wr, 67 op/s
Jan 20 15:00:27 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:00:27 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:00:27 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:00:27.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:00:27 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:00:27 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:00:27 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:00:27.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:00:28 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2254: 321 pgs: 321 active+clean; 197 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 369 KiB/s rd, 2.1 MiB/s wr, 58 op/s
Jan 20 15:00:28 compute-0 nova_compute[250018]: 2026-01-20 15:00:28.714 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:00:29 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:00:29 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:00:29 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:00:29.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:00:29 compute-0 ceph-mon[74360]: pgmap v2254: 321 pgs: 321 active+clean; 197 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 369 KiB/s rd, 2.1 MiB/s wr, 58 op/s
Jan 20 15:00:29 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:00:29 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:00:29 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:00:29.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:00:29 compute-0 nova_compute[250018]: 2026-01-20 15:00:29.849 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:00:30 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:00:30 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2255: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 378 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 20 15:00:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:00:30.771 160071 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:00:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:00:30.771 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:00:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:00:30.772 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:00:31 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:00:31 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:00:31 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:00:31.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:00:31 compute-0 ceph-mon[74360]: pgmap v2255: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 378 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 20 15:00:31 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/3920558439' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:00:31 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:00:31 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:00:31 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:00:31.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:00:32 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2256: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 379 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 20 15:00:32 compute-0 ceph-mon[74360]: pgmap v2256: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 379 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 20 15:00:33 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:00:33 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:00:33 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:00:33.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:00:33 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:00:33 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:00:33 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:00:33.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:00:33 compute-0 nova_compute[250018]: 2026-01-20 15:00:33.716 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:00:34 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2257: 321 pgs: 321 active+clean; 212 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 389 KiB/s rd, 2.5 MiB/s wr, 77 op/s
Jan 20 15:00:34 compute-0 sudo[336776]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:00:34 compute-0 sudo[336776]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:00:34 compute-0 sudo[336776]: pam_unix(sudo:session): session closed for user root
Jan 20 15:00:34 compute-0 sudo[336801]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:00:34 compute-0 sudo[336801]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:00:34 compute-0 sudo[336801]: pam_unix(sudo:session): session closed for user root
Jan 20 15:00:34 compute-0 nova_compute[250018]: 2026-01-20 15:00:34.852 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:00:35 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:00:35 compute-0 nova_compute[250018]: 2026-01-20 15:00:35.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:00:35 compute-0 ceph-mon[74360]: pgmap v2257: 321 pgs: 321 active+clean; 212 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 389 KiB/s rd, 2.5 MiB/s wr, 77 op/s
Jan 20 15:00:35 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:00:35 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:00:35 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:00:35.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:00:35 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:00:35 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:00:35 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:00:35.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:00:36 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2258: 321 pgs: 321 active+clean; 246 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 372 KiB/s rd, 3.5 MiB/s wr, 78 op/s
Jan 20 15:00:36 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/987469144' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:00:36 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/3787191888' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:00:37 compute-0 ceph-mon[74360]: pgmap v2258: 321 pgs: 321 active+clean; 246 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 372 KiB/s rd, 3.5 MiB/s wr, 78 op/s
Jan 20 15:00:37 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:00:37 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:00:37 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:00:37.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:00:37 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:00:37 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:00:37 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:00:37.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:00:38 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2259: 321 pgs: 321 active+clean; 246 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 29 KiB/s rd, 1.8 MiB/s wr, 35 op/s
Jan 20 15:00:38 compute-0 nova_compute[250018]: 2026-01-20 15:00:38.719 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:00:39 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:00:39 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:00:39 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:00:39.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:00:39 compute-0 ceph-mon[74360]: pgmap v2259: 321 pgs: 321 active+clean; 246 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 29 KiB/s rd, 1.8 MiB/s wr, 35 op/s
Jan 20 15:00:39 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/842019597' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:00:39 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:00:39 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:00:39 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:00:39.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:00:39 compute-0 nova_compute[250018]: 2026-01-20 15:00:39.854 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:00:40 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:00:40 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2260: 321 pgs: 321 active+clean; 186 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 550 KiB/s rd, 1.8 MiB/s wr, 87 op/s
Jan 20 15:00:40 compute-0 ceph-mon[74360]: pgmap v2260: 321 pgs: 321 active+clean; 186 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 550 KiB/s rd, 1.8 MiB/s wr, 87 op/s
Jan 20 15:00:41 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:00:41 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:00:41 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:00:41.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:00:41 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:00:41 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:00:41 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:00:41.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:00:42 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/292610090' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:00:42 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2261: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 862 KiB/s rd, 1.8 MiB/s wr, 95 op/s
Jan 20 15:00:42 compute-0 nova_compute[250018]: 2026-01-20 15:00:42.674 250022 DEBUG oslo_concurrency.lockutils [None req-25e6ec83-bdd3-4789-ab20-d9e010a5cd48 d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] Acquiring lock "61ae2c61-01df-4ef1-8aa3-0527a43b1798" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:00:42 compute-0 nova_compute[250018]: 2026-01-20 15:00:42.675 250022 DEBUG oslo_concurrency.lockutils [None req-25e6ec83-bdd3-4789-ab20-d9e010a5cd48 d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] Lock "61ae2c61-01df-4ef1-8aa3-0527a43b1798" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:00:42 compute-0 nova_compute[250018]: 2026-01-20 15:00:42.675 250022 INFO nova.compute.manager [None req-25e6ec83-bdd3-4789-ab20-d9e010a5cd48 d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] [instance: 61ae2c61-01df-4ef1-8aa3-0527a43b1798] Shelving
Jan 20 15:00:42 compute-0 nova_compute[250018]: 2026-01-20 15:00:42.705 250022 DEBUG nova.virt.libvirt.driver [None req-25e6ec83-bdd3-4789-ab20-d9e010a5cd48 d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] [instance: 61ae2c61-01df-4ef1-8aa3-0527a43b1798] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Jan 20 15:00:43 compute-0 ceph-mon[74360]: pgmap v2261: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 862 KiB/s rd, 1.8 MiB/s wr, 95 op/s
Jan 20 15:00:43 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:00:43 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:00:43 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:00:43.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:00:43 compute-0 podman[336831]: 2026-01-20 15:00:43.48068636 +0000 UTC m=+0.066233316 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 20 15:00:43 compute-0 podman[336830]: 2026-01-20 15:00:43.554308902 +0000 UTC m=+0.139856968 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 20 15:00:43 compute-0 nova_compute[250018]: 2026-01-20 15:00:43.720 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:00:43 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:00:43 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:00:43 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:00:43.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:00:44 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2262: 321 pgs: 321 active+clean; 171 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 2.2 MiB/s wr, 125 op/s
Jan 20 15:00:44 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/2942301129' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:00:44 compute-0 nova_compute[250018]: 2026-01-20 15:00:44.857 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:00:45 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:00:45 compute-0 kernel: tapee1c78ce-d0 (unregistering): left promiscuous mode
Jan 20 15:00:45 compute-0 NetworkManager[48960]: <info>  [1768921245.0508] device (tapee1c78ce-d0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 15:00:45 compute-0 ovn_controller[148666]: 2026-01-20T15:00:45Z|00533|binding|INFO|Releasing lport ee1c78ce-d0fd-4b6b-8a7c-e3aff97e74d7 from this chassis (sb_readonly=0)
Jan 20 15:00:45 compute-0 nova_compute[250018]: 2026-01-20 15:00:45.057 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:00:45 compute-0 ovn_controller[148666]: 2026-01-20T15:00:45Z|00534|binding|INFO|Setting lport ee1c78ce-d0fd-4b6b-8a7c-e3aff97e74d7 down in Southbound
Jan 20 15:00:45 compute-0 ovn_controller[148666]: 2026-01-20T15:00:45Z|00535|binding|INFO|Removing iface tapee1c78ce-d0 ovn-installed in OVS
Jan 20 15:00:45 compute-0 nova_compute[250018]: 2026-01-20 15:00:45.060 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:00:45 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:00:45.074 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8c:76:cf 10.100.0.3'], port_security=['fa:16:3e:8c:76:cf 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '61ae2c61-01df-4ef1-8aa3-0527a43b1798', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3aad5d71-9bbf-496d-805e-819d17c4343e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '105e56abe3804424885c7aa8d1216d12', 'neutron:revision_number': '4', 'neutron:security_group_ids': '5c26cf5d-4215-4bd2-8a4b-3ad6a5f65504', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=298e3802-e88f-473c-a925-fb8c9f7cfd27, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=ee1c78ce-d0fd-4b6b-8a7c-e3aff97e74d7) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:00:45 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:00:45.075 160071 INFO neutron.agent.ovn.metadata.agent [-] Port ee1c78ce-d0fd-4b6b-8a7c-e3aff97e74d7 in datapath 3aad5d71-9bbf-496d-805e-819d17c4343e unbound from our chassis
Jan 20 15:00:45 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:00:45.076 160071 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3aad5d71-9bbf-496d-805e-819d17c4343e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 15:00:45 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:00:45.077 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[fa00495a-8f85-4cce-9dd9-9c04a632cfe4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:00:45 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:00:45.078 160071 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3aad5d71-9bbf-496d-805e-819d17c4343e namespace which is not needed anymore
Jan 20 15:00:45 compute-0 nova_compute[250018]: 2026-01-20 15:00:45.082 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:00:45 compute-0 systemd[1]: machine-qemu\x2d68\x2dinstance\x2d0000008f.scope: Deactivated successfully.
Jan 20 15:00:45 compute-0 systemd[1]: machine-qemu\x2d68\x2dinstance\x2d0000008f.scope: Consumed 17.489s CPU time.
Jan 20 15:00:45 compute-0 systemd-machined[216401]: Machine qemu-68-instance-0000008f terminated.
Jan 20 15:00:45 compute-0 ceph-mon[74360]: pgmap v2262: 321 pgs: 321 active+clean; 171 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 2.2 MiB/s wr, 125 op/s
Jan 20 15:00:45 compute-0 nova_compute[250018]: 2026-01-20 15:00:45.300 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:00:45 compute-0 neutron-haproxy-ovnmeta-3aad5d71-9bbf-496d-805e-819d17c4343e[335478]: [NOTICE]   (335483) : haproxy version is 2.8.14-c23fe91
Jan 20 15:00:45 compute-0 neutron-haproxy-ovnmeta-3aad5d71-9bbf-496d-805e-819d17c4343e[335478]: [NOTICE]   (335483) : path to executable is /usr/sbin/haproxy
Jan 20 15:00:45 compute-0 neutron-haproxy-ovnmeta-3aad5d71-9bbf-496d-805e-819d17c4343e[335478]: [WARNING]  (335483) : Exiting Master process...
Jan 20 15:00:45 compute-0 nova_compute[250018]: 2026-01-20 15:00:45.306 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:00:45 compute-0 neutron-haproxy-ovnmeta-3aad5d71-9bbf-496d-805e-819d17c4343e[335478]: [ALERT]    (335483) : Current worker (335485) exited with code 143 (Terminated)
Jan 20 15:00:45 compute-0 neutron-haproxy-ovnmeta-3aad5d71-9bbf-496d-805e-819d17c4343e[335478]: [WARNING]  (335483) : All workers exited. Exiting... (0)
Jan 20 15:00:45 compute-0 systemd[1]: libpod-87fd5efa2ce9fff3d396eba609e6c8de548a15d3cdc587a4a32002e2b0692767.scope: Deactivated successfully.
Jan 20 15:00:45 compute-0 podman[336895]: 2026-01-20 15:00:45.334946956 +0000 UTC m=+0.165520440 container died 87fd5efa2ce9fff3d396eba609e6c8de548a15d3cdc587a4a32002e2b0692767 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3aad5d71-9bbf-496d-805e-819d17c4343e, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Jan 20 15:00:45 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:00:45 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:00:45 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:00:45.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:00:45 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-87fd5efa2ce9fff3d396eba609e6c8de548a15d3cdc587a4a32002e2b0692767-userdata-shm.mount: Deactivated successfully.
Jan 20 15:00:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-edaf9b25b7c09127a180b87f64f7f3770f583b98e1e559a9df6f8af018d995e1-merged.mount: Deactivated successfully.
Jan 20 15:00:45 compute-0 podman[336895]: 2026-01-20 15:00:45.428775443 +0000 UTC m=+0.259348927 container cleanup 87fd5efa2ce9fff3d396eba609e6c8de548a15d3cdc587a4a32002e2b0692767 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3aad5d71-9bbf-496d-805e-819d17c4343e, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 20 15:00:45 compute-0 systemd[1]: libpod-conmon-87fd5efa2ce9fff3d396eba609e6c8de548a15d3cdc587a4a32002e2b0692767.scope: Deactivated successfully.
Jan 20 15:00:45 compute-0 podman[336933]: 2026-01-20 15:00:45.611314839 +0000 UTC m=+0.149329632 container remove 87fd5efa2ce9fff3d396eba609e6c8de548a15d3cdc587a4a32002e2b0692767 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3aad5d71-9bbf-496d-805e-819d17c4343e, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 20 15:00:45 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:00:45.616 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[05b4faa8-c2bd-415a-a189-4976c1411611]: (4, ('Tue Jan 20 03:00:45 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-3aad5d71-9bbf-496d-805e-819d17c4343e (87fd5efa2ce9fff3d396eba609e6c8de548a15d3cdc587a4a32002e2b0692767)\n87fd5efa2ce9fff3d396eba609e6c8de548a15d3cdc587a4a32002e2b0692767\nTue Jan 20 03:00:45 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-3aad5d71-9bbf-496d-805e-819d17c4343e (87fd5efa2ce9fff3d396eba609e6c8de548a15d3cdc587a4a32002e2b0692767)\n87fd5efa2ce9fff3d396eba609e6c8de548a15d3cdc587a4a32002e2b0692767\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:00:45 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:00:45.619 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[69218e98-b53a-41d3-87de-fff25fa64a0d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:00:45 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:00:45.620 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3aad5d71-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:00:45 compute-0 nova_compute[250018]: 2026-01-20 15:00:45.622 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:00:45 compute-0 kernel: tap3aad5d71-90: left promiscuous mode
Jan 20 15:00:45 compute-0 nova_compute[250018]: 2026-01-20 15:00:45.645 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:00:45 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:00:45.649 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[b3721ed3-0bb1-4314-a092-aad7c73abc0a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:00:45 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:00:45.667 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[0762d225-44ef-45ce-acea-1ff6f53ae90d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:00:45 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:00:45.668 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[62c6b0f3-ba08-417b-8e35-ffdcfdf0cb8e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:00:45 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:00:45.685 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[fe6f1d71-60ff-4074-b504-ac3be523563e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 708775, 'reachable_time': 25450, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 336951, 'error': None, 'target': 'ovnmeta-3aad5d71-9bbf-496d-805e-819d17c4343e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:00:45 compute-0 systemd[1]: run-netns-ovnmeta\x2d3aad5d71\x2d9bbf\x2d496d\x2d805e\x2d819d17c4343e.mount: Deactivated successfully.
Jan 20 15:00:45 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:00:45.691 160517 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3aad5d71-9bbf-496d-805e-819d17c4343e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 15:00:45 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:00:45.691 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[11db92eb-80f0-4393-87e2-8853795882ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:00:45 compute-0 nova_compute[250018]: 2026-01-20 15:00:45.723 250022 INFO nova.virt.libvirt.driver [None req-25e6ec83-bdd3-4789-ab20-d9e010a5cd48 d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] [instance: 61ae2c61-01df-4ef1-8aa3-0527a43b1798] Instance shutdown successfully after 3 seconds.
Jan 20 15:00:45 compute-0 nova_compute[250018]: 2026-01-20 15:00:45.727 250022 INFO nova.virt.libvirt.driver [-] [instance: 61ae2c61-01df-4ef1-8aa3-0527a43b1798] Instance destroyed successfully.
Jan 20 15:00:45 compute-0 nova_compute[250018]: 2026-01-20 15:00:45.728 250022 DEBUG nova.objects.instance [None req-25e6ec83-bdd3-4789-ab20-d9e010a5cd48 d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] Lazy-loading 'numa_topology' on Instance uuid 61ae2c61-01df-4ef1-8aa3-0527a43b1798 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:00:45 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:00:45 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:00:45 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:00:45.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:00:46 compute-0 nova_compute[250018]: 2026-01-20 15:00:46.033 250022 INFO nova.virt.libvirt.driver [None req-25e6ec83-bdd3-4789-ab20-d9e010a5cd48 d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] [instance: 61ae2c61-01df-4ef1-8aa3-0527a43b1798] Beginning cold snapshot process
Jan 20 15:00:46 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2263: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.2 MiB/s wr, 175 op/s
Jan 20 15:00:46 compute-0 nova_compute[250018]: 2026-01-20 15:00:46.382 250022 DEBUG nova.virt.libvirt.imagebackend [None req-25e6ec83-bdd3-4789-ab20-d9e010a5cd48 d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] No parent info for a32b3e07-16d8-46fd-9a7b-c242c432fcf9; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Jan 20 15:00:46 compute-0 ceph-mon[74360]: pgmap v2263: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.2 MiB/s wr, 175 op/s
Jan 20 15:00:46 compute-0 nova_compute[250018]: 2026-01-20 15:00:46.666 250022 DEBUG nova.storage.rbd_utils [None req-25e6ec83-bdd3-4789-ab20-d9e010a5cd48 d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] creating snapshot(844bd4a99df74c4a98640764fe9315b2) on rbd image(61ae2c61-01df-4ef1-8aa3-0527a43b1798_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 20 15:00:46 compute-0 nova_compute[250018]: 2026-01-20 15:00:46.769 250022 DEBUG nova.compute.manager [req-48879301-837b-4377-8d8d-90ea6fcad4d3 req-409e2dd2-6edf-44ed-9735-d7b2d5a39634 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 61ae2c61-01df-4ef1-8aa3-0527a43b1798] Received event network-vif-unplugged-ee1c78ce-d0fd-4b6b-8a7c-e3aff97e74d7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:00:46 compute-0 nova_compute[250018]: 2026-01-20 15:00:46.770 250022 DEBUG oslo_concurrency.lockutils [req-48879301-837b-4377-8d8d-90ea6fcad4d3 req-409e2dd2-6edf-44ed-9735-d7b2d5a39634 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "61ae2c61-01df-4ef1-8aa3-0527a43b1798-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:00:46 compute-0 nova_compute[250018]: 2026-01-20 15:00:46.770 250022 DEBUG oslo_concurrency.lockutils [req-48879301-837b-4377-8d8d-90ea6fcad4d3 req-409e2dd2-6edf-44ed-9735-d7b2d5a39634 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "61ae2c61-01df-4ef1-8aa3-0527a43b1798-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:00:46 compute-0 nova_compute[250018]: 2026-01-20 15:00:46.771 250022 DEBUG oslo_concurrency.lockutils [req-48879301-837b-4377-8d8d-90ea6fcad4d3 req-409e2dd2-6edf-44ed-9735-d7b2d5a39634 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "61ae2c61-01df-4ef1-8aa3-0527a43b1798-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:00:46 compute-0 nova_compute[250018]: 2026-01-20 15:00:46.771 250022 DEBUG nova.compute.manager [req-48879301-837b-4377-8d8d-90ea6fcad4d3 req-409e2dd2-6edf-44ed-9735-d7b2d5a39634 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 61ae2c61-01df-4ef1-8aa3-0527a43b1798] No waiting events found dispatching network-vif-unplugged-ee1c78ce-d0fd-4b6b-8a7c-e3aff97e74d7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:00:46 compute-0 nova_compute[250018]: 2026-01-20 15:00:46.771 250022 WARNING nova.compute.manager [req-48879301-837b-4377-8d8d-90ea6fcad4d3 req-409e2dd2-6edf-44ed-9735-d7b2d5a39634 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 61ae2c61-01df-4ef1-8aa3-0527a43b1798] Received unexpected event network-vif-unplugged-ee1c78ce-d0fd-4b6b-8a7c-e3aff97e74d7 for instance with vm_state active and task_state shelving_image_uploading.
Jan 20 15:00:47 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:00:47 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:00:47 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:00:47.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:00:47 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e323 do_prune osdmap full prune enabled
Jan 20 15:00:47 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e324 e324: 3 total, 3 up, 3 in
Jan 20 15:00:47 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e324: 3 total, 3 up, 3 in
Jan 20 15:00:47 compute-0 nova_compute[250018]: 2026-01-20 15:00:47.620 250022 DEBUG nova.storage.rbd_utils [None req-25e6ec83-bdd3-4789-ab20-d9e010a5cd48 d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] cloning vms/61ae2c61-01df-4ef1-8aa3-0527a43b1798_disk@844bd4a99df74c4a98640764fe9315b2 to images/8976571c-92ae-42ce-94dd-a05ec6e308b3 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Jan 20 15:00:47 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:00:47 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:00:47 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:00:47.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:00:47 compute-0 nova_compute[250018]: 2026-01-20 15:00:47.758 250022 DEBUG nova.storage.rbd_utils [None req-25e6ec83-bdd3-4789-ab20-d9e010a5cd48 d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] flattening images/8976571c-92ae-42ce-94dd-a05ec6e308b3 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Jan 20 15:00:47 compute-0 ceph-osd[84815]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #49. Immutable memtables: 6.
Jan 20 15:00:48 compute-0 nova_compute[250018]: 2026-01-20 15:00:48.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:00:48 compute-0 nova_compute[250018]: 2026-01-20 15:00:48.124 250022 DEBUG nova.storage.rbd_utils [None req-25e6ec83-bdd3-4789-ab20-d9e010a5cd48 d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] removing snapshot(844bd4a99df74c4a98640764fe9315b2) on rbd image(61ae2c61-01df-4ef1-8aa3-0527a43b1798_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Jan 20 15:00:48 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2265: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.2 MiB/s wr, 190 op/s
Jan 20 15:00:48 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e324 do_prune osdmap full prune enabled
Jan 20 15:00:48 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e325 e325: 3 total, 3 up, 3 in
Jan 20 15:00:48 compute-0 ceph-mon[74360]: osdmap e324: 3 total, 3 up, 3 in
Jan 20 15:00:48 compute-0 ceph-mon[74360]: pgmap v2265: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.2 MiB/s wr, 190 op/s
Jan 20 15:00:48 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e325: 3 total, 3 up, 3 in
Jan 20 15:00:48 compute-0 nova_compute[250018]: 2026-01-20 15:00:48.617 250022 DEBUG nova.storage.rbd_utils [None req-25e6ec83-bdd3-4789-ab20-d9e010a5cd48 d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] creating snapshot(snap) on rbd image(8976571c-92ae-42ce-94dd-a05ec6e308b3) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 20 15:00:48 compute-0 nova_compute[250018]: 2026-01-20 15:00:48.723 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:00:48 compute-0 nova_compute[250018]: 2026-01-20 15:00:48.882 250022 DEBUG nova.compute.manager [req-d36ec18c-3a4a-4a1c-80e9-12a2a16c9c09 req-cc1a0d63-acf3-4d98-9de3-7bcf188deea4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 61ae2c61-01df-4ef1-8aa3-0527a43b1798] Received event network-vif-plugged-ee1c78ce-d0fd-4b6b-8a7c-e3aff97e74d7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:00:48 compute-0 nova_compute[250018]: 2026-01-20 15:00:48.882 250022 DEBUG oslo_concurrency.lockutils [req-d36ec18c-3a4a-4a1c-80e9-12a2a16c9c09 req-cc1a0d63-acf3-4d98-9de3-7bcf188deea4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "61ae2c61-01df-4ef1-8aa3-0527a43b1798-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:00:48 compute-0 nova_compute[250018]: 2026-01-20 15:00:48.882 250022 DEBUG oslo_concurrency.lockutils [req-d36ec18c-3a4a-4a1c-80e9-12a2a16c9c09 req-cc1a0d63-acf3-4d98-9de3-7bcf188deea4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "61ae2c61-01df-4ef1-8aa3-0527a43b1798-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:00:48 compute-0 nova_compute[250018]: 2026-01-20 15:00:48.883 250022 DEBUG oslo_concurrency.lockutils [req-d36ec18c-3a4a-4a1c-80e9-12a2a16c9c09 req-cc1a0d63-acf3-4d98-9de3-7bcf188deea4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "61ae2c61-01df-4ef1-8aa3-0527a43b1798-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:00:48 compute-0 nova_compute[250018]: 2026-01-20 15:00:48.883 250022 DEBUG nova.compute.manager [req-d36ec18c-3a4a-4a1c-80e9-12a2a16c9c09 req-cc1a0d63-acf3-4d98-9de3-7bcf188deea4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 61ae2c61-01df-4ef1-8aa3-0527a43b1798] No waiting events found dispatching network-vif-plugged-ee1c78ce-d0fd-4b6b-8a7c-e3aff97e74d7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:00:48 compute-0 nova_compute[250018]: 2026-01-20 15:00:48.883 250022 WARNING nova.compute.manager [req-d36ec18c-3a4a-4a1c-80e9-12a2a16c9c09 req-cc1a0d63-acf3-4d98-9de3-7bcf188deea4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 61ae2c61-01df-4ef1-8aa3-0527a43b1798] Received unexpected event network-vif-plugged-ee1c78ce-d0fd-4b6b-8a7c-e3aff97e74d7 for instance with vm_state active and task_state shelving_image_uploading.
Jan 20 15:00:49 compute-0 nova_compute[250018]: 2026-01-20 15:00:49.044 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:00:49 compute-0 nova_compute[250018]: 2026-01-20 15:00:49.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:00:49 compute-0 nova_compute[250018]: 2026-01-20 15:00:49.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:00:49 compute-0 nova_compute[250018]: 2026-01-20 15:00:49.071 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:00:49 compute-0 nova_compute[250018]: 2026-01-20 15:00:49.072 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:00:49 compute-0 nova_compute[250018]: 2026-01-20 15:00:49.072 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:00:49 compute-0 nova_compute[250018]: 2026-01-20 15:00:49.072 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 15:00:49 compute-0 nova_compute[250018]: 2026-01-20 15:00:49.072 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:00:49 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:00:49 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:00:49 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:00:49.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:00:49 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:00:49 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1288864457' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:00:49 compute-0 nova_compute[250018]: 2026-01-20 15:00:49.528 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:00:49 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e325 do_prune osdmap full prune enabled
Jan 20 15:00:49 compute-0 ceph-mon[74360]: osdmap e325: 3 total, 3 up, 3 in
Jan 20 15:00:49 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/2122915934' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:00:49 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/1288864457' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:00:49 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e326 e326: 3 total, 3 up, 3 in
Jan 20 15:00:49 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e326: 3 total, 3 up, 3 in
Jan 20 15:00:49 compute-0 nova_compute[250018]: 2026-01-20 15:00:49.645 250022 DEBUG nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] skipping disk for instance-0000008f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 15:00:49 compute-0 nova_compute[250018]: 2026-01-20 15:00:49.645 250022 DEBUG nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] skipping disk for instance-0000008f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 15:00:49 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:00:49 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:00:49 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:00:49.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:00:49 compute-0 nova_compute[250018]: 2026-01-20 15:00:49.790 250022 WARNING nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:00:49 compute-0 nova_compute[250018]: 2026-01-20 15:00:49.792 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4280MB free_disk=20.921974182128906GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 15:00:49 compute-0 nova_compute[250018]: 2026-01-20 15:00:49.792 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:00:49 compute-0 nova_compute[250018]: 2026-01-20 15:00:49.792 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:00:49 compute-0 nova_compute[250018]: 2026-01-20 15:00:49.859 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:00:49 compute-0 nova_compute[250018]: 2026-01-20 15:00:49.996 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Instance 61ae2c61-01df-4ef1-8aa3-0527a43b1798 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 15:00:49 compute-0 nova_compute[250018]: 2026-01-20 15:00:49.997 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 15:00:49 compute-0 nova_compute[250018]: 2026-01-20 15:00:49.998 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 15:00:50 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e326 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:00:50 compute-0 nova_compute[250018]: 2026-01-20 15:00:50.154 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:00:50 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2268: 321 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 318 active+clean; 213 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 7.3 MiB/s rd, 7.7 MiB/s wr, 241 op/s
Jan 20 15:00:50 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:00:50 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1511139781' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:00:50 compute-0 nova_compute[250018]: 2026-01-20 15:00:50.596 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:00:50 compute-0 nova_compute[250018]: 2026-01-20 15:00:50.602 250022 DEBUG nova.compute.provider_tree [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:00:50 compute-0 ceph-mon[74360]: osdmap e326: 3 total, 3 up, 3 in
Jan 20 15:00:50 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/4026246915' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:00:50 compute-0 ceph-mon[74360]: pgmap v2268: 321 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 318 active+clean; 213 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 7.3 MiB/s rd, 7.7 MiB/s wr, 241 op/s
Jan 20 15:00:50 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/1511139781' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:00:50 compute-0 nova_compute[250018]: 2026-01-20 15:00:50.618 250022 DEBUG nova.scheduler.client.report [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:00:50 compute-0 nova_compute[250018]: 2026-01-20 15:00:50.646 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 15:00:50 compute-0 nova_compute[250018]: 2026-01-20 15:00:50.647 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.854s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:00:51 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:00:51 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:00:51 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:00:51.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:00:51 compute-0 nova_compute[250018]: 2026-01-20 15:00:51.477 250022 INFO nova.virt.libvirt.driver [None req-25e6ec83-bdd3-4789-ab20-d9e010a5cd48 d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] [instance: 61ae2c61-01df-4ef1-8aa3-0527a43b1798] Snapshot image upload complete
Jan 20 15:00:51 compute-0 nova_compute[250018]: 2026-01-20 15:00:51.478 250022 DEBUG nova.compute.manager [None req-25e6ec83-bdd3-4789-ab20-d9e010a5cd48 d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] [instance: 61ae2c61-01df-4ef1-8aa3-0527a43b1798] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:00:51 compute-0 nova_compute[250018]: 2026-01-20 15:00:51.549 250022 INFO nova.compute.manager [None req-25e6ec83-bdd3-4789-ab20-d9e010a5cd48 d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] [instance: 61ae2c61-01df-4ef1-8aa3-0527a43b1798] Shelve offloading
Jan 20 15:00:51 compute-0 nova_compute[250018]: 2026-01-20 15:00:51.560 250022 INFO nova.virt.libvirt.driver [-] [instance: 61ae2c61-01df-4ef1-8aa3-0527a43b1798] Instance destroyed successfully.
Jan 20 15:00:51 compute-0 nova_compute[250018]: 2026-01-20 15:00:51.561 250022 DEBUG nova.compute.manager [None req-25e6ec83-bdd3-4789-ab20-d9e010a5cd48 d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] [instance: 61ae2c61-01df-4ef1-8aa3-0527a43b1798] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:00:51 compute-0 nova_compute[250018]: 2026-01-20 15:00:51.565 250022 DEBUG oslo_concurrency.lockutils [None req-25e6ec83-bdd3-4789-ab20-d9e010a5cd48 d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] Acquiring lock "refresh_cache-61ae2c61-01df-4ef1-8aa3-0527a43b1798" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:00:51 compute-0 nova_compute[250018]: 2026-01-20 15:00:51.566 250022 DEBUG oslo_concurrency.lockutils [None req-25e6ec83-bdd3-4789-ab20-d9e010a5cd48 d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] Acquired lock "refresh_cache-61ae2c61-01df-4ef1-8aa3-0527a43b1798" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:00:51 compute-0 nova_compute[250018]: 2026-01-20 15:00:51.567 250022 DEBUG nova.network.neutron [None req-25e6ec83-bdd3-4789-ab20-d9e010a5cd48 d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] [instance: 61ae2c61-01df-4ef1-8aa3-0527a43b1798] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 15:00:51 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/2685457807' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:00:51 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/1371259099' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:00:51 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/3543908004' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:00:51 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/2231245816' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:00:51 compute-0 ceph-mon[74360]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #111. Immutable memtables: 0.
Jan 20 15:00:51 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:00:51.638335) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 15:00:51 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:856] [default] [JOB 65] Flushing memtable with next log file: 111
Jan 20 15:00:51 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768921251638470, "job": 65, "event": "flush_started", "num_memtables": 1, "num_entries": 716, "num_deletes": 253, "total_data_size": 865774, "memory_usage": 879168, "flush_reason": "Manual Compaction"}
Jan 20 15:00:51 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:885] [default] [JOB 65] Level-0 flush table #112: started
Jan 20 15:00:51 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768921251647322, "cf_name": "default", "job": 65, "event": "table_file_creation", "file_number": 112, "file_size": 855033, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 50679, "largest_seqno": 51394, "table_properties": {"data_size": 851339, "index_size": 1474, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 8784, "raw_average_key_size": 19, "raw_value_size": 843815, "raw_average_value_size": 1904, "num_data_blocks": 66, "num_entries": 443, "num_filter_entries": 443, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768921205, "oldest_key_time": 1768921205, "file_creation_time": 1768921251, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1688fb6f-f397-4aac-a0e8-874ff91d97a7", "db_session_id": "5V3N2TVXYZBCXP55EZHK", "orig_file_number": 112, "seqno_to_time_mapping": "N/A"}}
Jan 20 15:00:51 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 65] Flush lasted 9099 microseconds, and 5142 cpu microseconds.
Jan 20 15:00:51 compute-0 ceph-mon[74360]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 15:00:51 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:00:51.647437) [db/flush_job.cc:967] [default] [JOB 65] Level-0 flush table #112: 855033 bytes OK
Jan 20 15:00:51 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:00:51.647467) [db/memtable_list.cc:519] [default] Level-0 commit table #112 started
Jan 20 15:00:51 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:00:51.649076) [db/memtable_list.cc:722] [default] Level-0 commit table #112: memtable #1 done
Jan 20 15:00:51 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:00:51.649097) EVENT_LOG_v1 {"time_micros": 1768921251649090, "job": 65, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 15:00:51 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:00:51.649118) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 15:00:51 compute-0 ceph-mon[74360]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 65] Try to delete WAL files size 862093, prev total WAL file size 862093, number of live WAL files 2.
Jan 20 15:00:51 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000108.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:00:51 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:00:51.650070) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034353138' seq:72057594037927935, type:22 .. '7061786F730034373730' seq:0, type:0; will stop at (end)
Jan 20 15:00:51 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 66] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 15:00:51 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 65 Base level 0, inputs: [112(834KB)], [110(12MB)]
Jan 20 15:00:51 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768921251650125, "job": 66, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [112], "files_L6": [110], "score": -1, "input_data_size": 13877702, "oldest_snapshot_seqno": -1}
Jan 20 15:00:51 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 66] Generated table #113: 8017 keys, 11993103 bytes, temperature: kUnknown
Jan 20 15:00:51 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768921251716271, "cf_name": "default", "job": 66, "event": "table_file_creation", "file_number": 113, "file_size": 11993103, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11938940, "index_size": 32995, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20101, "raw_key_size": 208162, "raw_average_key_size": 25, "raw_value_size": 11795452, "raw_average_value_size": 1471, "num_data_blocks": 1295, "num_entries": 8017, "num_filter_entries": 8017, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768917305, "oldest_key_time": 0, "file_creation_time": 1768921251, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1688fb6f-f397-4aac-a0e8-874ff91d97a7", "db_session_id": "5V3N2TVXYZBCXP55EZHK", "orig_file_number": 113, "seqno_to_time_mapping": "N/A"}}
Jan 20 15:00:51 compute-0 ceph-mon[74360]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 15:00:51 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:00:51.716679) [db/compaction/compaction_job.cc:1663] [default] [JOB 66] Compacted 1@0 + 1@6 files to L6 => 11993103 bytes
Jan 20 15:00:51 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:00:51.718896) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 209.2 rd, 180.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 12.4 +0.0 blob) out(11.4 +0.0 blob), read-write-amplify(30.3) write-amplify(14.0) OK, records in: 8535, records dropped: 518 output_compression: NoCompression
Jan 20 15:00:51 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:00:51.718924) EVENT_LOG_v1 {"time_micros": 1768921251718912, "job": 66, "event": "compaction_finished", "compaction_time_micros": 66351, "compaction_time_cpu_micros": 31256, "output_level": 6, "num_output_files": 1, "total_output_size": 11993103, "num_input_records": 8535, "num_output_records": 8017, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 15:00:51 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000112.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:00:51 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768921251719624, "job": 66, "event": "table_file_deletion", "file_number": 112}
Jan 20 15:00:51 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000110.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:00:51 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768921251724016, "job": 66, "event": "table_file_deletion", "file_number": 110}
Jan 20 15:00:51 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:00:51.649931) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:00:51 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:00:51.724137) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:00:51 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:00:51.724142) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:00:51 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:00:51.724145) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:00:51 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:00:51.724148) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:00:51 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:00:51.724151) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:00:51 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:00:51 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:00:51 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:00:51.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:00:52 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2269: 321 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 318 active+clean; 246 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 7.8 MiB/s rd, 7.8 MiB/s wr, 140 op/s
Jan 20 15:00:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:00:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:00:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:00:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:00:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:00:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:00:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Optimize plan auto_2026-01-20_15:00:52
Jan 20 15:00:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 15:00:52 compute-0 ceph-mgr[74653]: [balancer INFO root] do_upmap
Jan 20 15:00:52 compute-0 ceph-mgr[74653]: [balancer INFO root] pools ['cephfs.cephfs.data', 'backups', 'volumes', 'images', '.mgr', 'vms', 'cephfs.cephfs.meta', 'default.rgw.control', 'default.rgw.meta', '.rgw.root', 'default.rgw.log']
Jan 20 15:00:52 compute-0 ceph-mgr[74653]: [balancer INFO root] prepared 0/10 changes
Jan 20 15:00:52 compute-0 nova_compute[250018]: 2026-01-20 15:00:52.650 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:00:52 compute-0 nova_compute[250018]: 2026-01-20 15:00:52.650 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:00:52 compute-0 nova_compute[250018]: 2026-01-20 15:00:52.651 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 15:00:52 compute-0 ceph-mon[74360]: pgmap v2269: 321 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 318 active+clean; 246 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 7.8 MiB/s rd, 7.8 MiB/s wr, 140 op/s
Jan 20 15:00:53 compute-0 nova_compute[250018]: 2026-01-20 15:00:53.052 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:00:53 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:00:53 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:00:53 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:00:53.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:00:53 compute-0 nova_compute[250018]: 2026-01-20 15:00:53.725 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:00:53 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:00:53 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:00:53 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:00:53.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:00:54 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2270: 321 pgs: 321 active+clean; 246 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 7.1 MiB/s rd, 7.0 MiB/s wr, 155 op/s
Jan 20 15:00:54 compute-0 nova_compute[250018]: 2026-01-20 15:00:54.547 250022 DEBUG nova.network.neutron [None req-25e6ec83-bdd3-4789-ab20-d9e010a5cd48 d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] [instance: 61ae2c61-01df-4ef1-8aa3-0527a43b1798] Updating instance_info_cache with network_info: [{"id": "ee1c78ce-d0fd-4b6b-8a7c-e3aff97e74d7", "address": "fa:16:3e:8c:76:cf", "network": {"id": "3aad5d71-9bbf-496d-805e-819d17c4343e", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1714826441-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "105e56abe3804424885c7aa8d1216d12", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee1c78ce-d0", "ovs_interfaceid": "ee1c78ce-d0fd-4b6b-8a7c-e3aff97e74d7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:00:54 compute-0 nova_compute[250018]: 2026-01-20 15:00:54.573 250022 DEBUG oslo_concurrency.lockutils [None req-25e6ec83-bdd3-4789-ab20-d9e010a5cd48 d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] Releasing lock "refresh_cache-61ae2c61-01df-4ef1-8aa3-0527a43b1798" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:00:54 compute-0 nova_compute[250018]: 2026-01-20 15:00:54.863 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:00:54 compute-0 sudo[337143]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:00:54 compute-0 sudo[337143]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:00:54 compute-0 sudo[337143]: pam_unix(sudo:session): session closed for user root
Jan 20 15:00:54 compute-0 sudo[337168]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:00:54 compute-0 sudo[337168]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:00:54 compute-0 sudo[337168]: pam_unix(sudo:session): session closed for user root
Jan 20 15:00:55 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e326 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:00:55 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e326 do_prune osdmap full prune enabled
Jan 20 15:00:55 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e327 e327: 3 total, 3 up, 3 in
Jan 20 15:00:55 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e327: 3 total, 3 up, 3 in
Jan 20 15:00:55 compute-0 nova_compute[250018]: 2026-01-20 15:00:55.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:00:55 compute-0 nova_compute[250018]: 2026-01-20 15:00:55.051 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 15:00:55 compute-0 nova_compute[250018]: 2026-01-20 15:00:55.051 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 15:00:55 compute-0 nova_compute[250018]: 2026-01-20 15:00:55.150 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "refresh_cache-61ae2c61-01df-4ef1-8aa3-0527a43b1798" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:00:55 compute-0 nova_compute[250018]: 2026-01-20 15:00:55.150 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquired lock "refresh_cache-61ae2c61-01df-4ef1-8aa3-0527a43b1798" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:00:55 compute-0 nova_compute[250018]: 2026-01-20 15:00:55.150 250022 DEBUG nova.network.neutron [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] [instance: 61ae2c61-01df-4ef1-8aa3-0527a43b1798] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 20 15:00:55 compute-0 nova_compute[250018]: 2026-01-20 15:00:55.150 250022 DEBUG nova.objects.instance [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 61ae2c61-01df-4ef1-8aa3-0527a43b1798 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:00:55 compute-0 ceph-mon[74360]: pgmap v2270: 321 pgs: 321 active+clean; 246 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 7.1 MiB/s rd, 7.0 MiB/s wr, 155 op/s
Jan 20 15:00:55 compute-0 ceph-mon[74360]: osdmap e327: 3 total, 3 up, 3 in
Jan 20 15:00:55 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:00:55 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:00:55 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:00:55.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:00:55 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:00:55 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:00:55 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:00:55.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:00:56 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2272: 321 pgs: 321 active+clean; 246 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 7.6 MiB/s rd, 6.1 MiB/s wr, 198 op/s
Jan 20 15:00:57 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:00:57 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:00:57 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:00:57.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:00:57 compute-0 nova_compute[250018]: 2026-01-20 15:00:57.574 250022 INFO nova.virt.libvirt.driver [-] [instance: 61ae2c61-01df-4ef1-8aa3-0527a43b1798] Instance destroyed successfully.
Jan 20 15:00:57 compute-0 nova_compute[250018]: 2026-01-20 15:00:57.575 250022 DEBUG nova.objects.instance [None req-25e6ec83-bdd3-4789-ab20-d9e010a5cd48 d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] Lazy-loading 'resources' on Instance uuid 61ae2c61-01df-4ef1-8aa3-0527a43b1798 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:00:57 compute-0 nova_compute[250018]: 2026-01-20 15:00:57.641 250022 DEBUG nova.virt.libvirt.vif [None req-25e6ec83-bdd3-4789-ab20-d9e010a5cd48 d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T14:58:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-1098898119',display_name='tempest-ServersNegativeTestJSON-server-1098898119',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-1098898119',id=143,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-20T14:59:08Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='105e56abe3804424885c7aa8d1216d12',ramdisk_id='',reservation_id='r-yf5960ix',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestJSON-1233513591',owner_user_name='tempest-ServersNegativeTestJSON-1233513591-project-member',shelved_at='2026-01-20T15:00:51.478483',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='8976571c-92ae-42ce-94dd-a05ec6e308b3'},tags=<?>,task_state='shelving_offloading',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T15:00:46Z,user_data=None,user_id='d77d3db3cf924683a608d10efefcd156',uuid=61ae2c61-01df-4ef1-8aa3-0527a43b1798,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='shelved') vif={"id": "ee1c78ce-d0fd-4b6b-8a7c-e3aff97e74d7", "address": "fa:16:3e:8c:76:cf", "network": {"id": "3aad5d71-9bbf-496d-805e-819d17c4343e", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1714826441-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "105e56abe3804424885c7aa8d1216d12", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee1c78ce-d0", "ovs_interfaceid": "ee1c78ce-d0fd-4b6b-8a7c-e3aff97e74d7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 15:00:57 compute-0 nova_compute[250018]: 2026-01-20 15:00:57.641 250022 DEBUG nova.network.os_vif_util [None req-25e6ec83-bdd3-4789-ab20-d9e010a5cd48 d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] Converting VIF {"id": "ee1c78ce-d0fd-4b6b-8a7c-e3aff97e74d7", "address": "fa:16:3e:8c:76:cf", "network": {"id": "3aad5d71-9bbf-496d-805e-819d17c4343e", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1714826441-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "105e56abe3804424885c7aa8d1216d12", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee1c78ce-d0", "ovs_interfaceid": "ee1c78ce-d0fd-4b6b-8a7c-e3aff97e74d7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:00:57 compute-0 nova_compute[250018]: 2026-01-20 15:00:57.642 250022 DEBUG nova.network.os_vif_util [None req-25e6ec83-bdd3-4789-ab20-d9e010a5cd48 d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8c:76:cf,bridge_name='br-int',has_traffic_filtering=True,id=ee1c78ce-d0fd-4b6b-8a7c-e3aff97e74d7,network=Network(3aad5d71-9bbf-496d-805e-819d17c4343e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee1c78ce-d0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:00:57 compute-0 nova_compute[250018]: 2026-01-20 15:00:57.643 250022 DEBUG os_vif [None req-25e6ec83-bdd3-4789-ab20-d9e010a5cd48 d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8c:76:cf,bridge_name='br-int',has_traffic_filtering=True,id=ee1c78ce-d0fd-4b6b-8a7c-e3aff97e74d7,network=Network(3aad5d71-9bbf-496d-805e-819d17c4343e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee1c78ce-d0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 15:00:57 compute-0 nova_compute[250018]: 2026-01-20 15:00:57.645 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:00:57 compute-0 nova_compute[250018]: 2026-01-20 15:00:57.645 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapee1c78ce-d0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:00:57 compute-0 nova_compute[250018]: 2026-01-20 15:00:57.647 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:00:57 compute-0 nova_compute[250018]: 2026-01-20 15:00:57.649 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:00:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 15:00:57 compute-0 nova_compute[250018]: 2026-01-20 15:00:57.652 250022 INFO os_vif [None req-25e6ec83-bdd3-4789-ab20-d9e010a5cd48 d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8c:76:cf,bridge_name='br-int',has_traffic_filtering=True,id=ee1c78ce-d0fd-4b6b-8a7c-e3aff97e74d7,network=Network(3aad5d71-9bbf-496d-805e-819d17c4343e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee1c78ce-d0')
Jan 20 15:00:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 15:00:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 15:00:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 15:00:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 15:00:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 15:00:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 15:00:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 15:00:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 15:00:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 15:00:57 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:00:57 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:00:57 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:00:57.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:00:57 compute-0 nova_compute[250018]: 2026-01-20 15:00:57.864 250022 DEBUG nova.network.neutron [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] [instance: 61ae2c61-01df-4ef1-8aa3-0527a43b1798] Updating instance_info_cache with network_info: [{"id": "ee1c78ce-d0fd-4b6b-8a7c-e3aff97e74d7", "address": "fa:16:3e:8c:76:cf", "network": {"id": "3aad5d71-9bbf-496d-805e-819d17c4343e", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1714826441-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "105e56abe3804424885c7aa8d1216d12", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee1c78ce-d0", "ovs_interfaceid": "ee1c78ce-d0fd-4b6b-8a7c-e3aff97e74d7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:00:57 compute-0 ceph-mon[74360]: pgmap v2272: 321 pgs: 321 active+clean; 246 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 7.6 MiB/s rd, 6.1 MiB/s wr, 198 op/s
Jan 20 15:00:58 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2273: 321 pgs: 321 active+clean; 246 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 3.6 MiB/s wr, 100 op/s
Jan 20 15:00:58 compute-0 nova_compute[250018]: 2026-01-20 15:00:58.352 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Releasing lock "refresh_cache-61ae2c61-01df-4ef1-8aa3-0527a43b1798" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:00:58 compute-0 nova_compute[250018]: 2026-01-20 15:00:58.353 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] [instance: 61ae2c61-01df-4ef1-8aa3-0527a43b1798] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 20 15:00:58 compute-0 nova_compute[250018]: 2026-01-20 15:00:58.409 250022 INFO nova.virt.libvirt.driver [None req-25e6ec83-bdd3-4789-ab20-d9e010a5cd48 d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] [instance: 61ae2c61-01df-4ef1-8aa3-0527a43b1798] Deleting instance files /var/lib/nova/instances/61ae2c61-01df-4ef1-8aa3-0527a43b1798_del
Jan 20 15:00:58 compute-0 nova_compute[250018]: 2026-01-20 15:00:58.410 250022 INFO nova.virt.libvirt.driver [None req-25e6ec83-bdd3-4789-ab20-d9e010a5cd48 d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] [instance: 61ae2c61-01df-4ef1-8aa3-0527a43b1798] Deletion of /var/lib/nova/instances/61ae2c61-01df-4ef1-8aa3-0527a43b1798_del complete
Jan 20 15:00:58 compute-0 nova_compute[250018]: 2026-01-20 15:00:58.540 250022 DEBUG nova.compute.manager [req-3dd6803d-a978-4e14-83fd-6d2d4b78926e req-ffe9de12-7657-428d-b3d7-c19e241da032 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 61ae2c61-01df-4ef1-8aa3-0527a43b1798] Received event network-changed-ee1c78ce-d0fd-4b6b-8a7c-e3aff97e74d7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:00:58 compute-0 nova_compute[250018]: 2026-01-20 15:00:58.540 250022 DEBUG nova.compute.manager [req-3dd6803d-a978-4e14-83fd-6d2d4b78926e req-ffe9de12-7657-428d-b3d7-c19e241da032 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 61ae2c61-01df-4ef1-8aa3-0527a43b1798] Refreshing instance network info cache due to event network-changed-ee1c78ce-d0fd-4b6b-8a7c-e3aff97e74d7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 15:00:58 compute-0 nova_compute[250018]: 2026-01-20 15:00:58.541 250022 DEBUG oslo_concurrency.lockutils [req-3dd6803d-a978-4e14-83fd-6d2d4b78926e req-ffe9de12-7657-428d-b3d7-c19e241da032 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-61ae2c61-01df-4ef1-8aa3-0527a43b1798" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:00:58 compute-0 nova_compute[250018]: 2026-01-20 15:00:58.541 250022 DEBUG oslo_concurrency.lockutils [req-3dd6803d-a978-4e14-83fd-6d2d4b78926e req-ffe9de12-7657-428d-b3d7-c19e241da032 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-61ae2c61-01df-4ef1-8aa3-0527a43b1798" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:00:58 compute-0 nova_compute[250018]: 2026-01-20 15:00:58.541 250022 DEBUG nova.network.neutron [req-3dd6803d-a978-4e14-83fd-6d2d4b78926e req-ffe9de12-7657-428d-b3d7-c19e241da032 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 61ae2c61-01df-4ef1-8aa3-0527a43b1798] Refreshing network info cache for port ee1c78ce-d0fd-4b6b-8a7c-e3aff97e74d7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 15:00:58 compute-0 nova_compute[250018]: 2026-01-20 15:00:58.729 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:00:58 compute-0 ceph-mon[74360]: pgmap v2273: 321 pgs: 321 active+clean; 246 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 3.6 MiB/s wr, 100 op/s
Jan 20 15:00:59 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:00:59 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:00:59 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:00:59.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:00:59 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:00:59 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:00:59 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:00:59.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:01:00 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e327 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:01:00 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2274: 321 pgs: 321 active+clean; 189 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 1.7 MiB/s wr, 142 op/s
Jan 20 15:01:00 compute-0 nova_compute[250018]: 2026-01-20 15:01:00.240 250022 INFO nova.scheduler.client.report [None req-25e6ec83-bdd3-4789-ab20-d9e010a5cd48 d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] Deleted allocations for instance 61ae2c61-01df-4ef1-8aa3-0527a43b1798
Jan 20 15:01:00 compute-0 nova_compute[250018]: 2026-01-20 15:01:00.330 250022 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768921245.3287902, 61ae2c61-01df-4ef1-8aa3-0527a43b1798 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:01:00 compute-0 nova_compute[250018]: 2026-01-20 15:01:00.330 250022 INFO nova.compute.manager [-] [instance: 61ae2c61-01df-4ef1-8aa3-0527a43b1798] VM Stopped (Lifecycle Event)
Jan 20 15:01:00 compute-0 nova_compute[250018]: 2026-01-20 15:01:00.560 250022 DEBUG oslo_concurrency.lockutils [None req-25e6ec83-bdd3-4789-ab20-d9e010a5cd48 d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:01:00 compute-0 nova_compute[250018]: 2026-01-20 15:01:00.560 250022 DEBUG oslo_concurrency.lockutils [None req-25e6ec83-bdd3-4789-ab20-d9e010a5cd48 d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:01:00 compute-0 nova_compute[250018]: 2026-01-20 15:01:00.568 250022 DEBUG nova.compute.manager [None req-6cb84c21-ce0c-4d4b-8ecf-5d151c378029 - - - - - -] [instance: 61ae2c61-01df-4ef1-8aa3-0527a43b1798] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:01:00 compute-0 nova_compute[250018]: 2026-01-20 15:01:00.603 250022 DEBUG oslo_concurrency.processutils [None req-25e6ec83-bdd3-4789-ab20-d9e010a5cd48 d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:01:01 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:01:01 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/203376434' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:01:01 compute-0 nova_compute[250018]: 2026-01-20 15:01:01.035 250022 DEBUG oslo_concurrency.processutils [None req-25e6ec83-bdd3-4789-ab20-d9e010a5cd48 d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:01:01 compute-0 nova_compute[250018]: 2026-01-20 15:01:01.041 250022 DEBUG nova.compute.provider_tree [None req-25e6ec83-bdd3-4789-ab20-d9e010a5cd48 d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:01:01 compute-0 nova_compute[250018]: 2026-01-20 15:01:01.086 250022 DEBUG nova.scheduler.client.report [None req-25e6ec83-bdd3-4789-ab20-d9e010a5cd48 d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:01:01 compute-0 nova_compute[250018]: 2026-01-20 15:01:01.156 250022 DEBUG oslo_concurrency.lockutils [None req-25e6ec83-bdd3-4789-ab20-d9e010a5cd48 d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.596s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:01:01 compute-0 nova_compute[250018]: 2026-01-20 15:01:01.235 250022 DEBUG oslo_concurrency.lockutils [None req-25e6ec83-bdd3-4789-ab20-d9e010a5cd48 d77d3db3cf924683a608d10efefcd156 105e56abe3804424885c7aa8d1216d12 - - default default] Lock "61ae2c61-01df-4ef1-8aa3-0527a43b1798" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 18.560s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:01:01 compute-0 ceph-mon[74360]: pgmap v2274: 321 pgs: 321 active+clean; 189 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 1.7 MiB/s wr, 142 op/s
Jan 20 15:01:01 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/203376434' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:01:01 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:01:01 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:01:01 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:01:01.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:01:01 compute-0 nova_compute[250018]: 2026-01-20 15:01:01.648 250022 DEBUG nova.network.neutron [req-3dd6803d-a978-4e14-83fd-6d2d4b78926e req-ffe9de12-7657-428d-b3d7-c19e241da032 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 61ae2c61-01df-4ef1-8aa3-0527a43b1798] Updated VIF entry in instance network info cache for port ee1c78ce-d0fd-4b6b-8a7c-e3aff97e74d7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 15:01:01 compute-0 nova_compute[250018]: 2026-01-20 15:01:01.648 250022 DEBUG nova.network.neutron [req-3dd6803d-a978-4e14-83fd-6d2d4b78926e req-ffe9de12-7657-428d-b3d7-c19e241da032 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 61ae2c61-01df-4ef1-8aa3-0527a43b1798] Updating instance_info_cache with network_info: [{"id": "ee1c78ce-d0fd-4b6b-8a7c-e3aff97e74d7", "address": "fa:16:3e:8c:76:cf", "network": {"id": "3aad5d71-9bbf-496d-805e-819d17c4343e", "bridge": null, "label": "tempest-ServersNegativeTestJSON-1714826441-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "105e56abe3804424885c7aa8d1216d12", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "unbound", "details": {}, "devname": "tapee1c78ce-d0", "ovs_interfaceid": null, "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:01:01 compute-0 nova_compute[250018]: 2026-01-20 15:01:01.668 250022 DEBUG oslo_concurrency.lockutils [req-3dd6803d-a978-4e14-83fd-6d2d4b78926e req-ffe9de12-7657-428d-b3d7-c19e241da032 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-61ae2c61-01df-4ef1-8aa3-0527a43b1798" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:01:01 compute-0 CROND[337239]: (root) CMD (run-parts /etc/cron.hourly)
Jan 20 15:01:01 compute-0 run-parts[337242]: (/etc/cron.hourly) starting 0anacron
Jan 20 15:01:01 compute-0 run-parts[337248]: (/etc/cron.hourly) finished 0anacron
Jan 20 15:01:01 compute-0 CROND[337238]: (root) CMDEND (run-parts /etc/cron.hourly)
Jan 20 15:01:01 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:01:01 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:01:01 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:01:01.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:01:01 compute-0 nova_compute[250018]: 2026-01-20 15:01:01.963 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:01:01 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:01:01.963 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=46, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=45) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:01:01 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:01:01.964 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 15:01:01 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:01:01.965 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=367c1a2c-b16a-4828-ab5a-626bb50023b4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '46'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:01:02 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2275: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 17 KiB/s wr, 140 op/s
Jan 20 15:01:02 compute-0 nova_compute[250018]: 2026-01-20 15:01:02.647 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:01:03 compute-0 ceph-mon[74360]: pgmap v2275: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 17 KiB/s wr, 140 op/s
Jan 20 15:01:03 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:01:03 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:01:03 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:01:03.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:01:03 compute-0 sudo[337250]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:01:03 compute-0 sudo[337250]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:01:03 compute-0 sudo[337250]: pam_unix(sudo:session): session closed for user root
Jan 20 15:01:03 compute-0 sudo[337275]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:01:03 compute-0 sudo[337275]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:01:03 compute-0 sudo[337275]: pam_unix(sudo:session): session closed for user root
Jan 20 15:01:03 compute-0 sudo[337300]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:01:03 compute-0 sudo[337300]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:01:03 compute-0 sudo[337300]: pam_unix(sudo:session): session closed for user root
Jan 20 15:01:03 compute-0 sudo[337325]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 15:01:03 compute-0 sudo[337325]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:01:03 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:01:03 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:01:03 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:01:03.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:01:03 compute-0 nova_compute[250018]: 2026-01-20 15:01:03.784 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:01:04 compute-0 sudo[337325]: pam_unix(sudo:session): session closed for user root
Jan 20 15:01:04 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2276: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 16 KiB/s wr, 122 op/s
Jan 20 15:01:04 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Jan 20 15:01:04 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 20 15:01:04 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 15:01:04 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:01:04 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 20 15:01:04 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 15:01:04 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 20 15:01:04 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:01:04 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev c52a0e97-eee2-4114-bcc3-eb340970fd4c does not exist
Jan 20 15:01:04 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 4d4dfb4e-c695-4351-a664-1d2e618770c0 does not exist
Jan 20 15:01:04 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 6e891653-1c93-4e17-952e-bc53d4cf4fa2 does not exist
Jan 20 15:01:04 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 20 15:01:04 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 15:01:04 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 20 15:01:04 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 15:01:04 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 15:01:04 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:01:04 compute-0 sudo[337381]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:01:04 compute-0 sudo[337381]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:01:04 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 20 15:01:04 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:01:04 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 15:01:04 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:01:04 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 15:01:04 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 15:01:04 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:01:04 compute-0 sudo[337381]: pam_unix(sudo:session): session closed for user root
Jan 20 15:01:04 compute-0 sudo[337406]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:01:04 compute-0 sudo[337406]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:01:04 compute-0 sudo[337406]: pam_unix(sudo:session): session closed for user root
Jan 20 15:01:04 compute-0 sudo[337431]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:01:04 compute-0 sudo[337431]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:01:04 compute-0 sudo[337431]: pam_unix(sudo:session): session closed for user root
Jan 20 15:01:04 compute-0 sudo[337456]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 15:01:04 compute-0 sudo[337456]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:01:04 compute-0 podman[337523]: 2026-01-20 15:01:04.767624326 +0000 UTC m=+0.038464978 container create ca2a5b58e31cad925848f434a2476107915671f792cc9d3dfda2b9cc347ac7ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_hugle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 20 15:01:04 compute-0 systemd[1]: Started libpod-conmon-ca2a5b58e31cad925848f434a2476107915671f792cc9d3dfda2b9cc347ac7ab.scope.
Jan 20 15:01:04 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:01:04 compute-0 podman[337523]: 2026-01-20 15:01:04.750642388 +0000 UTC m=+0.021483060 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:01:04 compute-0 podman[337523]: 2026-01-20 15:01:04.84948157 +0000 UTC m=+0.120322252 container init ca2a5b58e31cad925848f434a2476107915671f792cc9d3dfda2b9cc347ac7ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_hugle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 20 15:01:04 compute-0 podman[337523]: 2026-01-20 15:01:04.857535927 +0000 UTC m=+0.128376579 container start ca2a5b58e31cad925848f434a2476107915671f792cc9d3dfda2b9cc347ac7ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_hugle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 15:01:04 compute-0 podman[337523]: 2026-01-20 15:01:04.861290659 +0000 UTC m=+0.132131311 container attach ca2a5b58e31cad925848f434a2476107915671f792cc9d3dfda2b9cc347ac7ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_hugle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 20 15:01:04 compute-0 admiring_hugle[337539]: 167 167
Jan 20 15:01:04 compute-0 systemd[1]: libpod-ca2a5b58e31cad925848f434a2476107915671f792cc9d3dfda2b9cc347ac7ab.scope: Deactivated successfully.
Jan 20 15:01:04 compute-0 podman[337523]: 2026-01-20 15:01:04.867110386 +0000 UTC m=+0.137951068 container died ca2a5b58e31cad925848f434a2476107915671f792cc9d3dfda2b9cc347ac7ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_hugle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 15:01:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-3ceb5258afec8148e6a0e9a6d8f7520599d4dd10f6904b33cef0ea6bb521f831-merged.mount: Deactivated successfully.
Jan 20 15:01:04 compute-0 podman[337523]: 2026-01-20 15:01:04.905251223 +0000 UTC m=+0.176091895 container remove ca2a5b58e31cad925848f434a2476107915671f792cc9d3dfda2b9cc347ac7ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_hugle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 20 15:01:04 compute-0 systemd[1]: libpod-conmon-ca2a5b58e31cad925848f434a2476107915671f792cc9d3dfda2b9cc347ac7ab.scope: Deactivated successfully.
Jan 20 15:01:05 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e327 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:01:05 compute-0 podman[337565]: 2026-01-20 15:01:05.110255345 +0000 UTC m=+0.091092864 container create 4546f6de6b4552e0298395f6e77b7b9950b8081d5fcbe76fdfb878244b00ef4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_kowalevski, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 20 15:01:05 compute-0 podman[337565]: 2026-01-20 15:01:05.051814171 +0000 UTC m=+0.032651670 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:01:05 compute-0 systemd[1]: Started libpod-conmon-4546f6de6b4552e0298395f6e77b7b9950b8081d5fcbe76fdfb878244b00ef4d.scope.
Jan 20 15:01:05 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:01:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3c23f4b4539518fa6df8eff08bdcd155651342b0f6ed2a562a8b54fee156b3a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 15:01:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3c23f4b4539518fa6df8eff08bdcd155651342b0f6ed2a562a8b54fee156b3a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 15:01:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3c23f4b4539518fa6df8eff08bdcd155651342b0f6ed2a562a8b54fee156b3a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 15:01:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3c23f4b4539518fa6df8eff08bdcd155651342b0f6ed2a562a8b54fee156b3a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 15:01:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3c23f4b4539518fa6df8eff08bdcd155651342b0f6ed2a562a8b54fee156b3a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 15:01:05 compute-0 podman[337565]: 2026-01-20 15:01:05.207552106 +0000 UTC m=+0.188389685 container init 4546f6de6b4552e0298395f6e77b7b9950b8081d5fcbe76fdfb878244b00ef4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_kowalevski, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 15:01:05 compute-0 podman[337565]: 2026-01-20 15:01:05.218653745 +0000 UTC m=+0.199491294 container start 4546f6de6b4552e0298395f6e77b7b9950b8081d5fcbe76fdfb878244b00ef4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_kowalevski, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 15:01:05 compute-0 podman[337565]: 2026-01-20 15:01:05.222577431 +0000 UTC m=+0.203414950 container attach 4546f6de6b4552e0298395f6e77b7b9950b8081d5fcbe76fdfb878244b00ef4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_kowalevski, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 15:01:05 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:01:05 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:01:05 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:01:05.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:01:05 compute-0 ceph-mon[74360]: pgmap v2276: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 16 KiB/s wr, 122 op/s
Jan 20 15:01:05 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:01:05 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:01:05 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:01:05.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:01:06 compute-0 focused_kowalevski[337582]: --> passed data devices: 0 physical, 1 LVM
Jan 20 15:01:06 compute-0 focused_kowalevski[337582]: --> relative data size: 1.0
Jan 20 15:01:06 compute-0 focused_kowalevski[337582]: --> All data devices are unavailable
Jan 20 15:01:06 compute-0 systemd[1]: libpod-4546f6de6b4552e0298395f6e77b7b9950b8081d5fcbe76fdfb878244b00ef4d.scope: Deactivated successfully.
Jan 20 15:01:06 compute-0 podman[337565]: 2026-01-20 15:01:06.057135 +0000 UTC m=+1.037972509 container died 4546f6de6b4552e0298395f6e77b7b9950b8081d5fcbe76fdfb878244b00ef4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_kowalevski, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 15:01:06 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2277: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 14 KiB/s wr, 100 op/s
Jan 20 15:01:07 compute-0 nova_compute[250018]: 2026-01-20 15:01:07.347 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:01:07 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:01:07 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:01:07 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:01:07.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:01:07 compute-0 nova_compute[250018]: 2026-01-20 15:01:07.649 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:01:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-b3c23f4b4539518fa6df8eff08bdcd155651342b0f6ed2a562a8b54fee156b3a-merged.mount: Deactivated successfully.
Jan 20 15:01:07 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:01:07 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:01:07 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:01:07.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:01:07 compute-0 ceph-mon[74360]: pgmap v2277: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 14 KiB/s wr, 100 op/s
Jan 20 15:01:07 compute-0 podman[337565]: 2026-01-20 15:01:07.973267823 +0000 UTC m=+2.954105332 container remove 4546f6de6b4552e0298395f6e77b7b9950b8081d5fcbe76fdfb878244b00ef4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_kowalevski, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Jan 20 15:01:08 compute-0 sudo[337456]: pam_unix(sudo:session): session closed for user root
Jan 20 15:01:08 compute-0 systemd[1]: libpod-conmon-4546f6de6b4552e0298395f6e77b7b9950b8081d5fcbe76fdfb878244b00ef4d.scope: Deactivated successfully.
Jan 20 15:01:08 compute-0 sudo[337611]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:01:08 compute-0 sudo[337611]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:01:08 compute-0 sudo[337611]: pam_unix(sudo:session): session closed for user root
Jan 20 15:01:08 compute-0 sudo[337636]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:01:08 compute-0 sudo[337636]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:01:08 compute-0 sudo[337636]: pam_unix(sudo:session): session closed for user root
Jan 20 15:01:08 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2278: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 1.2 KiB/s wr, 66 op/s
Jan 20 15:01:08 compute-0 sudo[337661]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:01:08 compute-0 sudo[337661]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:01:08 compute-0 sudo[337661]: pam_unix(sudo:session): session closed for user root
Jan 20 15:01:08 compute-0 sudo[337686]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- lvm list --format json
Jan 20 15:01:08 compute-0 sudo[337686]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:01:08 compute-0 podman[337751]: 2026-01-20 15:01:08.547261534 +0000 UTC m=+0.027106061 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:01:08 compute-0 nova_compute[250018]: 2026-01-20 15:01:08.785 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:01:08 compute-0 podman[337751]: 2026-01-20 15:01:08.837542033 +0000 UTC m=+0.317386530 container create e759d27dee80914f7477d7216f7df6683c2b24fbe3ee23261734a487f136ed63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_keldysh, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 15:01:08 compute-0 systemd[1]: Started libpod-conmon-e759d27dee80914f7477d7216f7df6683c2b24fbe3ee23261734a487f136ed63.scope.
Jan 20 15:01:08 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:01:08 compute-0 podman[337751]: 2026-01-20 15:01:08.939052537 +0000 UTC m=+0.418897034 container init e759d27dee80914f7477d7216f7df6683c2b24fbe3ee23261734a487f136ed63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_keldysh, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 15:01:08 compute-0 podman[337751]: 2026-01-20 15:01:08.945516312 +0000 UTC m=+0.425360809 container start e759d27dee80914f7477d7216f7df6683c2b24fbe3ee23261734a487f136ed63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_keldysh, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 15:01:08 compute-0 podman[337751]: 2026-01-20 15:01:08.948989615 +0000 UTC m=+0.428834102 container attach e759d27dee80914f7477d7216f7df6683c2b24fbe3ee23261734a487f136ed63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_keldysh, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 20 15:01:08 compute-0 affectionate_keldysh[337767]: 167 167
Jan 20 15:01:08 compute-0 systemd[1]: libpod-e759d27dee80914f7477d7216f7df6683c2b24fbe3ee23261734a487f136ed63.scope: Deactivated successfully.
Jan 20 15:01:08 compute-0 podman[337751]: 2026-01-20 15:01:08.950448604 +0000 UTC m=+0.430293091 container died e759d27dee80914f7477d7216f7df6683c2b24fbe3ee23261734a487f136ed63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_keldysh, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 20 15:01:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-f23a007fd1c10747d92a6bbf0726e55527eb215b8a838d7467fdfaf9e371cd5b-merged.mount: Deactivated successfully.
Jan 20 15:01:09 compute-0 podman[337751]: 2026-01-20 15:01:09.111069271 +0000 UTC m=+0.590913798 container remove e759d27dee80914f7477d7216f7df6683c2b24fbe3ee23261734a487f136ed63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_keldysh, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 20 15:01:09 compute-0 systemd[1]: libpod-conmon-e759d27dee80914f7477d7216f7df6683c2b24fbe3ee23261734a487f136ed63.scope: Deactivated successfully.
Jan 20 15:01:09 compute-0 ceph-mon[74360]: pgmap v2278: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 1.2 KiB/s wr, 66 op/s
Jan 20 15:01:09 compute-0 podman[337792]: 2026-01-20 15:01:09.306466455 +0000 UTC m=+0.044318225 container create f6e73b02c15df171f731b9aca36675c98d96698649a082039e8ba7b1a5f42ca2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_feistel, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 15:01:09 compute-0 systemd[1]: Started libpod-conmon-f6e73b02c15df171f731b9aca36675c98d96698649a082039e8ba7b1a5f42ca2.scope.
Jan 20 15:01:09 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:01:09 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:01:09 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:01:09.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:01:09 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:01:09 compute-0 podman[337792]: 2026-01-20 15:01:09.291478291 +0000 UTC m=+0.029330081 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:01:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b139122dd21113789e43570144cd3db6193c0ed06e53c0372f2479527643bedb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 15:01:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b139122dd21113789e43570144cd3db6193c0ed06e53c0372f2479527643bedb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 15:01:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b139122dd21113789e43570144cd3db6193c0ed06e53c0372f2479527643bedb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 15:01:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b139122dd21113789e43570144cd3db6193c0ed06e53c0372f2479527643bedb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 15:01:09 compute-0 podman[337792]: 2026-01-20 15:01:09.55872003 +0000 UTC m=+0.296571820 container init f6e73b02c15df171f731b9aca36675c98d96698649a082039e8ba7b1a5f42ca2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_feistel, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 20 15:01:09 compute-0 podman[337792]: 2026-01-20 15:01:09.56468504 +0000 UTC m=+0.302536810 container start f6e73b02c15df171f731b9aca36675c98d96698649a082039e8ba7b1a5f42ca2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_feistel, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True)
Jan 20 15:01:09 compute-0 podman[337792]: 2026-01-20 15:01:09.596627201 +0000 UTC m=+0.334479001 container attach f6e73b02c15df171f731b9aca36675c98d96698649a082039e8ba7b1a5f42ca2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_feistel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 15:01:09 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:01:09 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:01:09 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:01:09.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:01:10 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e327 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:01:10 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2279: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 1.2 KiB/s wr, 66 op/s
Jan 20 15:01:10 compute-0 recursing_feistel[337808]: {
Jan 20 15:01:10 compute-0 recursing_feistel[337808]:     "0": [
Jan 20 15:01:10 compute-0 recursing_feistel[337808]:         {
Jan 20 15:01:10 compute-0 recursing_feistel[337808]:             "devices": [
Jan 20 15:01:10 compute-0 recursing_feistel[337808]:                 "/dev/loop3"
Jan 20 15:01:10 compute-0 recursing_feistel[337808]:             ],
Jan 20 15:01:10 compute-0 recursing_feistel[337808]:             "lv_name": "ceph_lv0",
Jan 20 15:01:10 compute-0 recursing_feistel[337808]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 15:01:10 compute-0 recursing_feistel[337808]:             "lv_size": "7511998464",
Jan 20 15:01:10 compute-0 recursing_feistel[337808]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e399cf45-e6b6-5393-99f1-75c601d3f188,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afc3740a-ccee-46f8-b343-22648cc89376,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 20 15:01:10 compute-0 recursing_feistel[337808]:             "lv_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 15:01:10 compute-0 recursing_feistel[337808]:             "name": "ceph_lv0",
Jan 20 15:01:10 compute-0 recursing_feistel[337808]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 15:01:10 compute-0 recursing_feistel[337808]:             "tags": {
Jan 20 15:01:10 compute-0 recursing_feistel[337808]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 15:01:10 compute-0 recursing_feistel[337808]:                 "ceph.block_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 15:01:10 compute-0 recursing_feistel[337808]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 15:01:10 compute-0 recursing_feistel[337808]:                 "ceph.cluster_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 15:01:10 compute-0 recursing_feistel[337808]:                 "ceph.cluster_name": "ceph",
Jan 20 15:01:10 compute-0 recursing_feistel[337808]:                 "ceph.crush_device_class": "",
Jan 20 15:01:10 compute-0 recursing_feistel[337808]:                 "ceph.encrypted": "0",
Jan 20 15:01:10 compute-0 recursing_feistel[337808]:                 "ceph.osd_fsid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 15:01:10 compute-0 recursing_feistel[337808]:                 "ceph.osd_id": "0",
Jan 20 15:01:10 compute-0 recursing_feistel[337808]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 15:01:10 compute-0 recursing_feistel[337808]:                 "ceph.type": "block",
Jan 20 15:01:10 compute-0 recursing_feistel[337808]:                 "ceph.vdo": "0"
Jan 20 15:01:10 compute-0 recursing_feistel[337808]:             },
Jan 20 15:01:10 compute-0 recursing_feistel[337808]:             "type": "block",
Jan 20 15:01:10 compute-0 recursing_feistel[337808]:             "vg_name": "ceph_vg0"
Jan 20 15:01:10 compute-0 recursing_feistel[337808]:         }
Jan 20 15:01:10 compute-0 recursing_feistel[337808]:     ]
Jan 20 15:01:10 compute-0 recursing_feistel[337808]: }
Jan 20 15:01:10 compute-0 systemd[1]: libpod-f6e73b02c15df171f731b9aca36675c98d96698649a082039e8ba7b1a5f42ca2.scope: Deactivated successfully.
Jan 20 15:01:10 compute-0 podman[337792]: 2026-01-20 15:01:10.352810109 +0000 UTC m=+1.090661929 container died f6e73b02c15df171f731b9aca36675c98d96698649a082039e8ba7b1a5f42ca2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_feistel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 20 15:01:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-b139122dd21113789e43570144cd3db6193c0ed06e53c0372f2479527643bedb-merged.mount: Deactivated successfully.
Jan 20 15:01:10 compute-0 podman[337792]: 2026-01-20 15:01:10.44713443 +0000 UTC m=+1.184986200 container remove f6e73b02c15df171f731b9aca36675c98d96698649a082039e8ba7b1a5f42ca2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_feistel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 15:01:10 compute-0 systemd[1]: libpod-conmon-f6e73b02c15df171f731b9aca36675c98d96698649a082039e8ba7b1a5f42ca2.scope: Deactivated successfully.
Jan 20 15:01:10 compute-0 sudo[337686]: pam_unix(sudo:session): session closed for user root
Jan 20 15:01:10 compute-0 sudo[337830]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:01:10 compute-0 sudo[337830]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:01:10 compute-0 sudo[337830]: pam_unix(sudo:session): session closed for user root
Jan 20 15:01:10 compute-0 sudo[337855]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:01:10 compute-0 sudo[337855]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:01:10 compute-0 sudo[337855]: pam_unix(sudo:session): session closed for user root
Jan 20 15:01:10 compute-0 sudo[337880]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:01:10 compute-0 sudo[337880]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:01:10 compute-0 sudo[337880]: pam_unix(sudo:session): session closed for user root
Jan 20 15:01:10 compute-0 sudo[337905]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- raw list --format json
Jan 20 15:01:10 compute-0 sudo[337905]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:01:10 compute-0 ceph-mon[74360]: pgmap v2279: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 1.2 KiB/s wr, 66 op/s
Jan 20 15:01:11 compute-0 podman[337968]: 2026-01-20 15:01:11.047674786 +0000 UTC m=+0.023048942 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:01:11 compute-0 podman[337968]: 2026-01-20 15:01:11.188671763 +0000 UTC m=+0.164045849 container create fa32fb2aa530d62cc2b503cbfff9929527e6c0f7caed7037c39285080b4b0416 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_proskuriakova, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 15:01:11 compute-0 systemd[1]: Started libpod-conmon-fa32fb2aa530d62cc2b503cbfff9929527e6c0f7caed7037c39285080b4b0416.scope.
Jan 20 15:01:11 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:01:11 compute-0 podman[337968]: 2026-01-20 15:01:11.281887524 +0000 UTC m=+0.257261620 container init fa32fb2aa530d62cc2b503cbfff9929527e6c0f7caed7037c39285080b4b0416 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_proskuriakova, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True)
Jan 20 15:01:11 compute-0 podman[337968]: 2026-01-20 15:01:11.288952215 +0000 UTC m=+0.264326291 container start fa32fb2aa530d62cc2b503cbfff9929527e6c0f7caed7037c39285080b4b0416 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_proskuriakova, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 20 15:01:11 compute-0 podman[337968]: 2026-01-20 15:01:11.292571922 +0000 UTC m=+0.267946038 container attach fa32fb2aa530d62cc2b503cbfff9929527e6c0f7caed7037c39285080b4b0416 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_proskuriakova, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 20 15:01:11 compute-0 hopeful_proskuriakova[337986]: 167 167
Jan 20 15:01:11 compute-0 systemd[1]: libpod-fa32fb2aa530d62cc2b503cbfff9929527e6c0f7caed7037c39285080b4b0416.scope: Deactivated successfully.
Jan 20 15:01:11 compute-0 podman[337968]: 2026-01-20 15:01:11.293806916 +0000 UTC m=+0.269180992 container died fa32fb2aa530d62cc2b503cbfff9929527e6c0f7caed7037c39285080b4b0416 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_proskuriakova, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 15:01:11 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:01:11 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:01:11 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:01:11.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:01:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-88b64b9c90298d2044791ba50e578d6bc2ff4f6fec210ffbff18695d52720bd6-merged.mount: Deactivated successfully.
Jan 20 15:01:11 compute-0 podman[337968]: 2026-01-20 15:01:11.429815039 +0000 UTC m=+0.405189115 container remove fa32fb2aa530d62cc2b503cbfff9929527e6c0f7caed7037c39285080b4b0416 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_proskuriakova, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 15:01:11 compute-0 systemd[1]: libpod-conmon-fa32fb2aa530d62cc2b503cbfff9929527e6c0f7caed7037c39285080b4b0416.scope: Deactivated successfully.
Jan 20 15:01:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 15:01:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:01:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 20 15:01:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:01:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0009958283896333519 of space, bias 1.0, pg target 0.2987485168900056 quantized to 32 (current 32)
Jan 20 15:01:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:01:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.00027263051367950865 quantized to 32 (current 32)
Jan 20 15:01:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:01:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 15:01:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:01:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004066738495719337 of space, bias 1.0, pg target 1.2200215487158013 quantized to 32 (current 32)
Jan 20 15:01:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:01:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 32)
Jan 20 15:01:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:01:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 15:01:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:01:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021737739624046157 quantized to 32 (current 32)
Jan 20 15:01:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:01:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Jan 20 15:01:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:01:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 15:01:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:01:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Jan 20 15:01:11 compute-0 podman[338011]: 2026-01-20 15:01:11.651231733 +0000 UTC m=+0.106256813 container create add7addbf3cc01c57f0b84a9cdcbd16794b6a43c4eb760c96ec177031930807a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_galois, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 20 15:01:11 compute-0 podman[338011]: 2026-01-20 15:01:11.566085729 +0000 UTC m=+0.021110829 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:01:11 compute-0 systemd[1]: Started libpod-conmon-add7addbf3cc01c57f0b84a9cdcbd16794b6a43c4eb760c96ec177031930807a.scope.
Jan 20 15:01:11 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:01:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c229834115d9fb07519bf541ccb9f0832e8f29de7a4d042b84ad7dc44171ebd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 15:01:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c229834115d9fb07519bf541ccb9f0832e8f29de7a4d042b84ad7dc44171ebd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 15:01:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c229834115d9fb07519bf541ccb9f0832e8f29de7a4d042b84ad7dc44171ebd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 15:01:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c229834115d9fb07519bf541ccb9f0832e8f29de7a4d042b84ad7dc44171ebd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 15:01:11 compute-0 podman[338011]: 2026-01-20 15:01:11.730892589 +0000 UTC m=+0.185917689 container init add7addbf3cc01c57f0b84a9cdcbd16794b6a43c4eb760c96ec177031930807a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_galois, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 20 15:01:11 compute-0 podman[338011]: 2026-01-20 15:01:11.737472837 +0000 UTC m=+0.192497917 container start add7addbf3cc01c57f0b84a9cdcbd16794b6a43c4eb760c96ec177031930807a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_galois, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 20 15:01:11 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:01:11 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:01:11 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:01:11.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:01:12 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2280: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 9.2 KiB/s rd, 1.2 KiB/s wr, 13 op/s
Jan 20 15:01:12 compute-0 silly_galois[338028]: {
Jan 20 15:01:12 compute-0 silly_galois[338028]:     "afc3740a-ccee-46f8-b343-22648cc89376": {
Jan 20 15:01:12 compute-0 silly_galois[338028]:         "ceph_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 15:01:12 compute-0 silly_galois[338028]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 20 15:01:12 compute-0 silly_galois[338028]:         "osd_id": 0,
Jan 20 15:01:12 compute-0 silly_galois[338028]:         "osd_uuid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 15:01:12 compute-0 silly_galois[338028]:         "type": "bluestore"
Jan 20 15:01:12 compute-0 silly_galois[338028]:     }
Jan 20 15:01:12 compute-0 silly_galois[338028]: }
Jan 20 15:01:12 compute-0 systemd[1]: libpod-add7addbf3cc01c57f0b84a9cdcbd16794b6a43c4eb760c96ec177031930807a.scope: Deactivated successfully.
Jan 20 15:01:12 compute-0 nova_compute[250018]: 2026-01-20 15:01:12.650 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:01:12 compute-0 podman[338011]: 2026-01-20 15:01:12.990708734 +0000 UTC m=+1.445733814 container attach add7addbf3cc01c57f0b84a9cdcbd16794b6a43c4eb760c96ec177031930807a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_galois, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 20 15:01:12 compute-0 podman[338011]: 2026-01-20 15:01:12.991411692 +0000 UTC m=+1.446436782 container died add7addbf3cc01c57f0b84a9cdcbd16794b6a43c4eb760c96ec177031930807a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_galois, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 20 15:01:13 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/1687987888' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:01:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-8c229834115d9fb07519bf541ccb9f0832e8f29de7a4d042b84ad7dc44171ebd-merged.mount: Deactivated successfully.
Jan 20 15:01:13 compute-0 podman[338011]: 2026-01-20 15:01:13.315840291 +0000 UTC m=+1.770865381 container remove add7addbf3cc01c57f0b84a9cdcbd16794b6a43c4eb760c96ec177031930807a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_galois, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default)
Jan 20 15:01:13 compute-0 systemd[1]: libpod-conmon-add7addbf3cc01c57f0b84a9cdcbd16794b6a43c4eb760c96ec177031930807a.scope: Deactivated successfully.
Jan 20 15:01:13 compute-0 sudo[337905]: pam_unix(sudo:session): session closed for user root
Jan 20 15:01:13 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 15:01:13 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:01:13 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:01:13 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:01:13.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:01:13 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:01:13 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 15:01:13 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:01:13 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 1495d18e-b5c2-4298-b30e-a82368ce4f08 does not exist
Jan 20 15:01:13 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev e98c653c-a198-4b80-9b9f-59a4df99099d does not exist
Jan 20 15:01:13 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 9d30d099-148b-4d2a-9578-b7ee265fa508 does not exist
Jan 20 15:01:13 compute-0 sudo[338063]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:01:13 compute-0 sudo[338063]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:01:13 compute-0 sudo[338063]: pam_unix(sudo:session): session closed for user root
Jan 20 15:01:13 compute-0 sudo[338089]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 15:01:13 compute-0 sudo[338089]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:01:13 compute-0 sudo[338089]: pam_unix(sudo:session): session closed for user root
Jan 20 15:01:13 compute-0 podman[338087]: 2026-01-20 15:01:13.63210157 +0000 UTC m=+0.102146282 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 20 15:01:13 compute-0 podman[338125]: 2026-01-20 15:01:13.735204297 +0000 UTC m=+0.132303105 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 20 15:01:13 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:01:13 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:01:13 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:01:13.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:01:13 compute-0 nova_compute[250018]: 2026-01-20 15:01:13.786 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:01:14 compute-0 ceph-mon[74360]: pgmap v2280: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 9.2 KiB/s rd, 1.2 KiB/s wr, 13 op/s
Jan 20 15:01:14 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:01:14 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:01:14 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/3160512075' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:01:14 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/3160512075' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:01:14 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2281: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.2 KiB/s rd, 4 op/s
Jan 20 15:01:15 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e327 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:01:15 compute-0 sudo[338160]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:01:15 compute-0 sudo[338160]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:01:15 compute-0 sudo[338160]: pam_unix(sudo:session): session closed for user root
Jan 20 15:01:15 compute-0 sudo[338185]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:01:15 compute-0 sudo[338185]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:01:15 compute-0 sudo[338185]: pam_unix(sudo:session): session closed for user root
Jan 20 15:01:15 compute-0 ceph-mon[74360]: pgmap v2281: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.2 KiB/s rd, 4 op/s
Jan 20 15:01:15 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:01:15 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:01:15 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:01:15.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:01:15 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:01:15 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:01:15 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:01:15.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:01:16 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2282: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.2 KiB/s rd, 4 op/s
Jan 20 15:01:17 compute-0 nova_compute[250018]: 2026-01-20 15:01:17.189 250022 DEBUG oslo_concurrency.lockutils [None req-9a0296d2-fd16-4ab7-b360-5daf80ecc613 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Acquiring lock "b3f961d2-e73f-49bf-b141-6505e77ad9ac" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:01:17 compute-0 nova_compute[250018]: 2026-01-20 15:01:17.190 250022 DEBUG oslo_concurrency.lockutils [None req-9a0296d2-fd16-4ab7-b360-5daf80ecc613 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Lock "b3f961d2-e73f-49bf-b141-6505e77ad9ac" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:01:17 compute-0 nova_compute[250018]: 2026-01-20 15:01:17.216 250022 DEBUG nova.compute.manager [None req-9a0296d2-fd16-4ab7-b360-5daf80ecc613 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: b3f961d2-e73f-49bf-b141-6505e77ad9ac] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 15:01:17 compute-0 nova_compute[250018]: 2026-01-20 15:01:17.307 250022 DEBUG oslo_concurrency.lockutils [None req-9a0296d2-fd16-4ab7-b360-5daf80ecc613 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:01:17 compute-0 nova_compute[250018]: 2026-01-20 15:01:17.308 250022 DEBUG oslo_concurrency.lockutils [None req-9a0296d2-fd16-4ab7-b360-5daf80ecc613 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:01:17 compute-0 nova_compute[250018]: 2026-01-20 15:01:17.315 250022 DEBUG nova.virt.hardware [None req-9a0296d2-fd16-4ab7-b360-5daf80ecc613 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 15:01:17 compute-0 nova_compute[250018]: 2026-01-20 15:01:17.315 250022 INFO nova.compute.claims [None req-9a0296d2-fd16-4ab7-b360-5daf80ecc613 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: b3f961d2-e73f-49bf-b141-6505e77ad9ac] Claim successful on node compute-0.ctlplane.example.com
Jan 20 15:01:17 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:01:17 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:01:17 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:01:17.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:01:17 compute-0 nova_compute[250018]: 2026-01-20 15:01:17.414 250022 DEBUG oslo_concurrency.processutils [None req-9a0296d2-fd16-4ab7-b360-5daf80ecc613 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:01:17 compute-0 ceph-mon[74360]: pgmap v2282: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.2 KiB/s rd, 4 op/s
Jan 20 15:01:17 compute-0 nova_compute[250018]: 2026-01-20 15:01:17.654 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:01:17 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:01:17 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:01:17 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:01:17.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:01:17 compute-0 nova_compute[250018]: 2026-01-20 15:01:17.881 250022 DEBUG oslo_concurrency.processutils [None req-9a0296d2-fd16-4ab7-b360-5daf80ecc613 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:01:17 compute-0 nova_compute[250018]: 2026-01-20 15:01:17.890 250022 DEBUG nova.compute.provider_tree [None req-9a0296d2-fd16-4ab7-b360-5daf80ecc613 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:01:17 compute-0 nova_compute[250018]: 2026-01-20 15:01:17.983 250022 DEBUG nova.scheduler.client.report [None req-9a0296d2-fd16-4ab7-b360-5daf80ecc613 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:01:18 compute-0 nova_compute[250018]: 2026-01-20 15:01:18.024 250022 DEBUG oslo_concurrency.lockutils [None req-9a0296d2-fd16-4ab7-b360-5daf80ecc613 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.716s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:01:18 compute-0 nova_compute[250018]: 2026-01-20 15:01:18.025 250022 DEBUG nova.compute.manager [None req-9a0296d2-fd16-4ab7-b360-5daf80ecc613 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: b3f961d2-e73f-49bf-b141-6505e77ad9ac] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 15:01:18 compute-0 nova_compute[250018]: 2026-01-20 15:01:18.077 250022 DEBUG nova.compute.manager [None req-9a0296d2-fd16-4ab7-b360-5daf80ecc613 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: b3f961d2-e73f-49bf-b141-6505e77ad9ac] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 20 15:01:18 compute-0 nova_compute[250018]: 2026-01-20 15:01:18.077 250022 DEBUG nova.network.neutron [None req-9a0296d2-fd16-4ab7-b360-5daf80ecc613 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: b3f961d2-e73f-49bf-b141-6505e77ad9ac] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 15:01:18 compute-0 nova_compute[250018]: 2026-01-20 15:01:18.106 250022 INFO nova.virt.libvirt.driver [None req-9a0296d2-fd16-4ab7-b360-5daf80ecc613 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: b3f961d2-e73f-49bf-b141-6505e77ad9ac] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 15:01:18 compute-0 nova_compute[250018]: 2026-01-20 15:01:18.127 250022 DEBUG nova.compute.manager [None req-9a0296d2-fd16-4ab7-b360-5daf80ecc613 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: b3f961d2-e73f-49bf-b141-6505e77ad9ac] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 15:01:18 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2283: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:01:18 compute-0 nova_compute[250018]: 2026-01-20 15:01:18.240 250022 DEBUG nova.compute.manager [None req-9a0296d2-fd16-4ab7-b360-5daf80ecc613 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: b3f961d2-e73f-49bf-b141-6505e77ad9ac] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 15:01:18 compute-0 nova_compute[250018]: 2026-01-20 15:01:18.241 250022 DEBUG nova.virt.libvirt.driver [None req-9a0296d2-fd16-4ab7-b360-5daf80ecc613 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: b3f961d2-e73f-49bf-b141-6505e77ad9ac] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 15:01:18 compute-0 nova_compute[250018]: 2026-01-20 15:01:18.241 250022 INFO nova.virt.libvirt.driver [None req-9a0296d2-fd16-4ab7-b360-5daf80ecc613 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: b3f961d2-e73f-49bf-b141-6505e77ad9ac] Creating image(s)
Jan 20 15:01:18 compute-0 nova_compute[250018]: 2026-01-20 15:01:18.265 250022 DEBUG nova.storage.rbd_utils [None req-9a0296d2-fd16-4ab7-b360-5daf80ecc613 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] rbd image b3f961d2-e73f-49bf-b141-6505e77ad9ac_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:01:18 compute-0 nova_compute[250018]: 2026-01-20 15:01:18.289 250022 DEBUG nova.storage.rbd_utils [None req-9a0296d2-fd16-4ab7-b360-5daf80ecc613 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] rbd image b3f961d2-e73f-49bf-b141-6505e77ad9ac_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:01:18 compute-0 nova_compute[250018]: 2026-01-20 15:01:18.315 250022 DEBUG nova.storage.rbd_utils [None req-9a0296d2-fd16-4ab7-b360-5daf80ecc613 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] rbd image b3f961d2-e73f-49bf-b141-6505e77ad9ac_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:01:18 compute-0 nova_compute[250018]: 2026-01-20 15:01:18.318 250022 DEBUG oslo_concurrency.processutils [None req-9a0296d2-fd16-4ab7-b360-5daf80ecc613 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:01:18 compute-0 nova_compute[250018]: 2026-01-20 15:01:18.410 250022 DEBUG oslo_concurrency.processutils [None req-9a0296d2-fd16-4ab7-b360-5daf80ecc613 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:01:18 compute-0 nova_compute[250018]: 2026-01-20 15:01:18.411 250022 DEBUG oslo_concurrency.lockutils [None req-9a0296d2-fd16-4ab7-b360-5daf80ecc613 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Acquiring lock "82d5c1918fd7c974214c7a48c1793a7a82560462" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:01:18 compute-0 nova_compute[250018]: 2026-01-20 15:01:18.411 250022 DEBUG oslo_concurrency.lockutils [None req-9a0296d2-fd16-4ab7-b360-5daf80ecc613 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:01:18 compute-0 nova_compute[250018]: 2026-01-20 15:01:18.411 250022 DEBUG oslo_concurrency.lockutils [None req-9a0296d2-fd16-4ab7-b360-5daf80ecc613 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:01:18 compute-0 nova_compute[250018]: 2026-01-20 15:01:18.435 250022 DEBUG nova.storage.rbd_utils [None req-9a0296d2-fd16-4ab7-b360-5daf80ecc613 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] rbd image b3f961d2-e73f-49bf-b141-6505e77ad9ac_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:01:18 compute-0 nova_compute[250018]: 2026-01-20 15:01:18.438 250022 DEBUG oslo_concurrency.processutils [None req-9a0296d2-fd16-4ab7-b360-5daf80ecc613 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 b3f961d2-e73f-49bf-b141-6505e77ad9ac_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:01:18 compute-0 nova_compute[250018]: 2026-01-20 15:01:18.584 250022 DEBUG nova.policy [None req-9a0296d2-fd16-4ab7-b360-5daf80ecc613 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2446e8399b344b29986c1aaf8bf73adf', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '63555e5851564db08c6429231d264f2c', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 20 15:01:18 compute-0 nova_compute[250018]: 2026-01-20 15:01:18.822 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:01:19 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/1919300651' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:01:19 compute-0 ceph-mon[74360]: pgmap v2283: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:01:19 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:01:19 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:01:19 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:01:19.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:01:19 compute-0 nova_compute[250018]: 2026-01-20 15:01:19.517 250022 DEBUG nova.network.neutron [None req-9a0296d2-fd16-4ab7-b360-5daf80ecc613 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: b3f961d2-e73f-49bf-b141-6505e77ad9ac] Successfully created port: 07baaccf-06f0-4af0-a04a-9638078c313f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 20 15:01:19 compute-0 nova_compute[250018]: 2026-01-20 15:01:19.584 250022 DEBUG oslo_concurrency.processutils [None req-9a0296d2-fd16-4ab7-b360-5daf80ecc613 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 b3f961d2-e73f-49bf-b141-6505e77ad9ac_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:01:19 compute-0 nova_compute[250018]: 2026-01-20 15:01:19.655 250022 DEBUG nova.storage.rbd_utils [None req-9a0296d2-fd16-4ab7-b360-5daf80ecc613 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] resizing rbd image b3f961d2-e73f-49bf-b141-6505e77ad9ac_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 20 15:01:19 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:01:19 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:01:19 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:01:19.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:01:20 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e327 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:01:20 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/1583981785' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:01:20 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/2262638477' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:01:20 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2284: 321 pgs: 321 active+clean; 225 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 3.4 MiB/s wr, 66 op/s
Jan 20 15:01:20 compute-0 nova_compute[250018]: 2026-01-20 15:01:20.195 250022 DEBUG nova.objects.instance [None req-9a0296d2-fd16-4ab7-b360-5daf80ecc613 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Lazy-loading 'migration_context' on Instance uuid b3f961d2-e73f-49bf-b141-6505e77ad9ac obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:01:20 compute-0 nova_compute[250018]: 2026-01-20 15:01:20.210 250022 DEBUG nova.virt.libvirt.driver [None req-9a0296d2-fd16-4ab7-b360-5daf80ecc613 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: b3f961d2-e73f-49bf-b141-6505e77ad9ac] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 15:01:20 compute-0 nova_compute[250018]: 2026-01-20 15:01:20.211 250022 DEBUG nova.virt.libvirt.driver [None req-9a0296d2-fd16-4ab7-b360-5daf80ecc613 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: b3f961d2-e73f-49bf-b141-6505e77ad9ac] Ensure instance console log exists: /var/lib/nova/instances/b3f961d2-e73f-49bf-b141-6505e77ad9ac/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 15:01:20 compute-0 nova_compute[250018]: 2026-01-20 15:01:20.211 250022 DEBUG oslo_concurrency.lockutils [None req-9a0296d2-fd16-4ab7-b360-5daf80ecc613 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:01:20 compute-0 nova_compute[250018]: 2026-01-20 15:01:20.212 250022 DEBUG oslo_concurrency.lockutils [None req-9a0296d2-fd16-4ab7-b360-5daf80ecc613 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:01:20 compute-0 nova_compute[250018]: 2026-01-20 15:01:20.212 250022 DEBUG oslo_concurrency.lockutils [None req-9a0296d2-fd16-4ab7-b360-5daf80ecc613 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:01:20 compute-0 nova_compute[250018]: 2026-01-20 15:01:20.798 250022 DEBUG nova.network.neutron [None req-9a0296d2-fd16-4ab7-b360-5daf80ecc613 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: b3f961d2-e73f-49bf-b141-6505e77ad9ac] Successfully updated port: 07baaccf-06f0-4af0-a04a-9638078c313f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 20 15:01:20 compute-0 nova_compute[250018]: 2026-01-20 15:01:20.817 250022 DEBUG oslo_concurrency.lockutils [None req-9a0296d2-fd16-4ab7-b360-5daf80ecc613 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Acquiring lock "refresh_cache-b3f961d2-e73f-49bf-b141-6505e77ad9ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:01:20 compute-0 nova_compute[250018]: 2026-01-20 15:01:20.818 250022 DEBUG oslo_concurrency.lockutils [None req-9a0296d2-fd16-4ab7-b360-5daf80ecc613 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Acquired lock "refresh_cache-b3f961d2-e73f-49bf-b141-6505e77ad9ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:01:20 compute-0 nova_compute[250018]: 2026-01-20 15:01:20.818 250022 DEBUG nova.network.neutron [None req-9a0296d2-fd16-4ab7-b360-5daf80ecc613 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: b3f961d2-e73f-49bf-b141-6505e77ad9ac] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 15:01:20 compute-0 nova_compute[250018]: 2026-01-20 15:01:20.927 250022 DEBUG nova.compute.manager [req-431efb28-6965-46b3-aeb2-f628b383248d req-409c4663-4de7-42b8-a025-774919eec885 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b3f961d2-e73f-49bf-b141-6505e77ad9ac] Received event network-changed-07baaccf-06f0-4af0-a04a-9638078c313f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:01:20 compute-0 nova_compute[250018]: 2026-01-20 15:01:20.927 250022 DEBUG nova.compute.manager [req-431efb28-6965-46b3-aeb2-f628b383248d req-409c4663-4de7-42b8-a025-774919eec885 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b3f961d2-e73f-49bf-b141-6505e77ad9ac] Refreshing instance network info cache due to event network-changed-07baaccf-06f0-4af0-a04a-9638078c313f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 15:01:20 compute-0 nova_compute[250018]: 2026-01-20 15:01:20.927 250022 DEBUG oslo_concurrency.lockutils [req-431efb28-6965-46b3-aeb2-f628b383248d req-409c4663-4de7-42b8-a025-774919eec885 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-b3f961d2-e73f-49bf-b141-6505e77ad9ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:01:21 compute-0 nova_compute[250018]: 2026-01-20 15:01:21.028 250022 DEBUG nova.network.neutron [None req-9a0296d2-fd16-4ab7-b360-5daf80ecc613 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: b3f961d2-e73f-49bf-b141-6505e77ad9ac] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 15:01:21 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:01:21 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:01:21 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:01:21.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:01:21 compute-0 ceph-mon[74360]: pgmap v2284: 321 pgs: 321 active+clean; 225 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 3.4 MiB/s wr, 66 op/s
Jan 20 15:01:21 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:01:21 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:01:21 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:01:21.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:01:22 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2285: 321 pgs: 321 active+clean; 258 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 5.6 MiB/s wr, 108 op/s
Jan 20 15:01:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:01:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:01:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:01:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:01:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:01:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:01:22 compute-0 nova_compute[250018]: 2026-01-20 15:01:22.658 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:01:22 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e327 do_prune osdmap full prune enabled
Jan 20 15:01:22 compute-0 ceph-mon[74360]: pgmap v2285: 321 pgs: 321 active+clean; 258 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 5.6 MiB/s wr, 108 op/s
Jan 20 15:01:22 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e328 e328: 3 total, 3 up, 3 in
Jan 20 15:01:22 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e328: 3 total, 3 up, 3 in
Jan 20 15:01:22 compute-0 nova_compute[250018]: 2026-01-20 15:01:22.816 250022 DEBUG nova.network.neutron [None req-9a0296d2-fd16-4ab7-b360-5daf80ecc613 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: b3f961d2-e73f-49bf-b141-6505e77ad9ac] Updating instance_info_cache with network_info: [{"id": "07baaccf-06f0-4af0-a04a-9638078c313f", "address": "fa:16:3e:12:81:5e", "network": {"id": "671e28d0-0b9e-41e0-b5e0-db1ccd4717ec", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-884777184-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63555e5851564db08c6429231d264f2c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07baaccf-06", "ovs_interfaceid": "07baaccf-06f0-4af0-a04a-9638078c313f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:01:22 compute-0 nova_compute[250018]: 2026-01-20 15:01:22.851 250022 DEBUG oslo_concurrency.lockutils [None req-9a0296d2-fd16-4ab7-b360-5daf80ecc613 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Releasing lock "refresh_cache-b3f961d2-e73f-49bf-b141-6505e77ad9ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:01:22 compute-0 nova_compute[250018]: 2026-01-20 15:01:22.851 250022 DEBUG nova.compute.manager [None req-9a0296d2-fd16-4ab7-b360-5daf80ecc613 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: b3f961d2-e73f-49bf-b141-6505e77ad9ac] Instance network_info: |[{"id": "07baaccf-06f0-4af0-a04a-9638078c313f", "address": "fa:16:3e:12:81:5e", "network": {"id": "671e28d0-0b9e-41e0-b5e0-db1ccd4717ec", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-884777184-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63555e5851564db08c6429231d264f2c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07baaccf-06", "ovs_interfaceid": "07baaccf-06f0-4af0-a04a-9638078c313f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 20 15:01:22 compute-0 nova_compute[250018]: 2026-01-20 15:01:22.852 250022 DEBUG oslo_concurrency.lockutils [req-431efb28-6965-46b3-aeb2-f628b383248d req-409c4663-4de7-42b8-a025-774919eec885 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-b3f961d2-e73f-49bf-b141-6505e77ad9ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:01:22 compute-0 nova_compute[250018]: 2026-01-20 15:01:22.852 250022 DEBUG nova.network.neutron [req-431efb28-6965-46b3-aeb2-f628b383248d req-409c4663-4de7-42b8-a025-774919eec885 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b3f961d2-e73f-49bf-b141-6505e77ad9ac] Refreshing network info cache for port 07baaccf-06f0-4af0-a04a-9638078c313f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 15:01:22 compute-0 nova_compute[250018]: 2026-01-20 15:01:22.854 250022 DEBUG nova.virt.libvirt.driver [None req-9a0296d2-fd16-4ab7-b360-5daf80ecc613 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: b3f961d2-e73f-49bf-b141-6505e77ad9ac] Start _get_guest_xml network_info=[{"id": "07baaccf-06f0-4af0-a04a-9638078c313f", "address": "fa:16:3e:12:81:5e", "network": {"id": "671e28d0-0b9e-41e0-b5e0-db1ccd4717ec", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-884777184-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63555e5851564db08c6429231d264f2c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07baaccf-06", "ovs_interfaceid": "07baaccf-06f0-4af0-a04a-9638078c313f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encrypted': False, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'guest_format': None, 'size': 0, 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 15:01:22 compute-0 nova_compute[250018]: 2026-01-20 15:01:22.859 250022 WARNING nova.virt.libvirt.driver [None req-9a0296d2-fd16-4ab7-b360-5daf80ecc613 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:01:22 compute-0 nova_compute[250018]: 2026-01-20 15:01:22.868 250022 DEBUG nova.virt.libvirt.host [None req-9a0296d2-fd16-4ab7-b360-5daf80ecc613 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 15:01:22 compute-0 nova_compute[250018]: 2026-01-20 15:01:22.869 250022 DEBUG nova.virt.libvirt.host [None req-9a0296d2-fd16-4ab7-b360-5daf80ecc613 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 15:01:22 compute-0 nova_compute[250018]: 2026-01-20 15:01:22.874 250022 DEBUG nova.virt.libvirt.host [None req-9a0296d2-fd16-4ab7-b360-5daf80ecc613 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 15:01:22 compute-0 nova_compute[250018]: 2026-01-20 15:01:22.875 250022 DEBUG nova.virt.libvirt.host [None req-9a0296d2-fd16-4ab7-b360-5daf80ecc613 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 15:01:22 compute-0 nova_compute[250018]: 2026-01-20 15:01:22.876 250022 DEBUG nova.virt.libvirt.driver [None req-9a0296d2-fd16-4ab7-b360-5daf80ecc613 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 15:01:22 compute-0 nova_compute[250018]: 2026-01-20 15:01:22.876 250022 DEBUG nova.virt.hardware [None req-9a0296d2-fd16-4ab7-b360-5daf80ecc613 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 15:01:22 compute-0 nova_compute[250018]: 2026-01-20 15:01:22.876 250022 DEBUG nova.virt.hardware [None req-9a0296d2-fd16-4ab7-b360-5daf80ecc613 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 15:01:22 compute-0 nova_compute[250018]: 2026-01-20 15:01:22.877 250022 DEBUG nova.virt.hardware [None req-9a0296d2-fd16-4ab7-b360-5daf80ecc613 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 15:01:22 compute-0 nova_compute[250018]: 2026-01-20 15:01:22.877 250022 DEBUG nova.virt.hardware [None req-9a0296d2-fd16-4ab7-b360-5daf80ecc613 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 15:01:22 compute-0 nova_compute[250018]: 2026-01-20 15:01:22.877 250022 DEBUG nova.virt.hardware [None req-9a0296d2-fd16-4ab7-b360-5daf80ecc613 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 15:01:22 compute-0 nova_compute[250018]: 2026-01-20 15:01:22.877 250022 DEBUG nova.virt.hardware [None req-9a0296d2-fd16-4ab7-b360-5daf80ecc613 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 15:01:22 compute-0 nova_compute[250018]: 2026-01-20 15:01:22.877 250022 DEBUG nova.virt.hardware [None req-9a0296d2-fd16-4ab7-b360-5daf80ecc613 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 15:01:22 compute-0 nova_compute[250018]: 2026-01-20 15:01:22.878 250022 DEBUG nova.virt.hardware [None req-9a0296d2-fd16-4ab7-b360-5daf80ecc613 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 15:01:22 compute-0 nova_compute[250018]: 2026-01-20 15:01:22.878 250022 DEBUG nova.virt.hardware [None req-9a0296d2-fd16-4ab7-b360-5daf80ecc613 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 15:01:22 compute-0 nova_compute[250018]: 2026-01-20 15:01:22.878 250022 DEBUG nova.virt.hardware [None req-9a0296d2-fd16-4ab7-b360-5daf80ecc613 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 15:01:22 compute-0 nova_compute[250018]: 2026-01-20 15:01:22.878 250022 DEBUG nova.virt.hardware [None req-9a0296d2-fd16-4ab7-b360-5daf80ecc613 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 15:01:22 compute-0 nova_compute[250018]: 2026-01-20 15:01:22.881 250022 DEBUG oslo_concurrency.processutils [None req-9a0296d2-fd16-4ab7-b360-5daf80ecc613 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:01:23 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 15:01:23 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1218882102' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:01:23 compute-0 nova_compute[250018]: 2026-01-20 15:01:23.312 250022 DEBUG oslo_concurrency.processutils [None req-9a0296d2-fd16-4ab7-b360-5daf80ecc613 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:01:23 compute-0 nova_compute[250018]: 2026-01-20 15:01:23.340 250022 DEBUG nova.storage.rbd_utils [None req-9a0296d2-fd16-4ab7-b360-5daf80ecc613 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] rbd image b3f961d2-e73f-49bf-b141-6505e77ad9ac_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:01:23 compute-0 nova_compute[250018]: 2026-01-20 15:01:23.345 250022 DEBUG oslo_concurrency.processutils [None req-9a0296d2-fd16-4ab7-b360-5daf80ecc613 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:01:23 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:01:23 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:01:23 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:01:23.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:01:23 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:01:23 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:01:23 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:01:23.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:01:23 compute-0 ceph-mon[74360]: osdmap e328: 3 total, 3 up, 3 in
Jan 20 15:01:23 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/1218882102' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:01:23 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 15:01:23 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2458772179' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:01:23 compute-0 nova_compute[250018]: 2026-01-20 15:01:23.824 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:01:23 compute-0 nova_compute[250018]: 2026-01-20 15:01:23.834 250022 DEBUG oslo_concurrency.processutils [None req-9a0296d2-fd16-4ab7-b360-5daf80ecc613 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:01:23 compute-0 nova_compute[250018]: 2026-01-20 15:01:23.835 250022 DEBUG nova.virt.libvirt.vif [None req-9a0296d2-fd16-4ab7-b360-5daf80ecc613 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T15:01:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-ServerBootFromVolumeStableRescueTest-server-994461168',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverbootfromvolumestablerescuetest-server-994461168',id=149,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='63555e5851564db08c6429231d264f2c',ramdisk_id='',reservation_id='r-lbk5hu90',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerBootFromVolumeStableRescueTest-1871371328',owner_user_name='tempest-ServerBootFromVolumeStableRescueTest-1871371328-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T15:01:18Z,user_data=None,user_id='2446e8399b344b29986c1aaf8bf73adf',uuid=b3f961d2-e73f-49bf-b141-6505e77ad9ac,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "07baaccf-06f0-4af0-a04a-9638078c313f", "address": "fa:16:3e:12:81:5e", "network": {"id": "671e28d0-0b9e-41e0-b5e0-db1ccd4717ec", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-884777184-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63555e5851564db08c6429231d264f2c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07baaccf-06", "ovs_interfaceid": "07baaccf-06f0-4af0-a04a-9638078c313f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 15:01:23 compute-0 nova_compute[250018]: 2026-01-20 15:01:23.836 250022 DEBUG nova.network.os_vif_util [None req-9a0296d2-fd16-4ab7-b360-5daf80ecc613 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Converting VIF {"id": "07baaccf-06f0-4af0-a04a-9638078c313f", "address": "fa:16:3e:12:81:5e", "network": {"id": "671e28d0-0b9e-41e0-b5e0-db1ccd4717ec", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-884777184-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63555e5851564db08c6429231d264f2c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07baaccf-06", "ovs_interfaceid": "07baaccf-06f0-4af0-a04a-9638078c313f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:01:23 compute-0 nova_compute[250018]: 2026-01-20 15:01:23.837 250022 DEBUG nova.network.os_vif_util [None req-9a0296d2-fd16-4ab7-b360-5daf80ecc613 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:12:81:5e,bridge_name='br-int',has_traffic_filtering=True,id=07baaccf-06f0-4af0-a04a-9638078c313f,network=Network(671e28d0-0b9e-41e0-b5e0-db1ccd4717ec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap07baaccf-06') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:01:23 compute-0 nova_compute[250018]: 2026-01-20 15:01:23.838 250022 DEBUG nova.objects.instance [None req-9a0296d2-fd16-4ab7-b360-5daf80ecc613 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Lazy-loading 'pci_devices' on Instance uuid b3f961d2-e73f-49bf-b141-6505e77ad9ac obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:01:23 compute-0 nova_compute[250018]: 2026-01-20 15:01:23.855 250022 DEBUG nova.virt.libvirt.driver [None req-9a0296d2-fd16-4ab7-b360-5daf80ecc613 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: b3f961d2-e73f-49bf-b141-6505e77ad9ac] End _get_guest_xml xml=<domain type="kvm">
Jan 20 15:01:23 compute-0 nova_compute[250018]:   <uuid>b3f961d2-e73f-49bf-b141-6505e77ad9ac</uuid>
Jan 20 15:01:23 compute-0 nova_compute[250018]:   <name>instance-00000095</name>
Jan 20 15:01:23 compute-0 nova_compute[250018]:   <memory>131072</memory>
Jan 20 15:01:23 compute-0 nova_compute[250018]:   <vcpu>1</vcpu>
Jan 20 15:01:23 compute-0 nova_compute[250018]:   <metadata>
Jan 20 15:01:23 compute-0 nova_compute[250018]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 15:01:23 compute-0 nova_compute[250018]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 15:01:23 compute-0 nova_compute[250018]:       <nova:name>tempest-ServerBootFromVolumeStableRescueTest-server-994461168</nova:name>
Jan 20 15:01:23 compute-0 nova_compute[250018]:       <nova:creationTime>2026-01-20 15:01:22</nova:creationTime>
Jan 20 15:01:23 compute-0 nova_compute[250018]:       <nova:flavor name="m1.nano">
Jan 20 15:01:23 compute-0 nova_compute[250018]:         <nova:memory>128</nova:memory>
Jan 20 15:01:23 compute-0 nova_compute[250018]:         <nova:disk>1</nova:disk>
Jan 20 15:01:23 compute-0 nova_compute[250018]:         <nova:swap>0</nova:swap>
Jan 20 15:01:23 compute-0 nova_compute[250018]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 15:01:23 compute-0 nova_compute[250018]:         <nova:vcpus>1</nova:vcpus>
Jan 20 15:01:23 compute-0 nova_compute[250018]:       </nova:flavor>
Jan 20 15:01:23 compute-0 nova_compute[250018]:       <nova:owner>
Jan 20 15:01:23 compute-0 nova_compute[250018]:         <nova:user uuid="2446e8399b344b29986c1aaf8bf73adf">tempest-ServerBootFromVolumeStableRescueTest-1871371328-project-member</nova:user>
Jan 20 15:01:23 compute-0 nova_compute[250018]:         <nova:project uuid="63555e5851564db08c6429231d264f2c">tempest-ServerBootFromVolumeStableRescueTest-1871371328</nova:project>
Jan 20 15:01:23 compute-0 nova_compute[250018]:       </nova:owner>
Jan 20 15:01:23 compute-0 nova_compute[250018]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 15:01:23 compute-0 nova_compute[250018]:       <nova:ports>
Jan 20 15:01:23 compute-0 nova_compute[250018]:         <nova:port uuid="07baaccf-06f0-4af0-a04a-9638078c313f">
Jan 20 15:01:23 compute-0 nova_compute[250018]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Jan 20 15:01:23 compute-0 nova_compute[250018]:         </nova:port>
Jan 20 15:01:23 compute-0 nova_compute[250018]:       </nova:ports>
Jan 20 15:01:23 compute-0 nova_compute[250018]:     </nova:instance>
Jan 20 15:01:23 compute-0 nova_compute[250018]:   </metadata>
Jan 20 15:01:23 compute-0 nova_compute[250018]:   <sysinfo type="smbios">
Jan 20 15:01:23 compute-0 nova_compute[250018]:     <system>
Jan 20 15:01:23 compute-0 nova_compute[250018]:       <entry name="manufacturer">RDO</entry>
Jan 20 15:01:23 compute-0 nova_compute[250018]:       <entry name="product">OpenStack Compute</entry>
Jan 20 15:01:23 compute-0 nova_compute[250018]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 15:01:23 compute-0 nova_compute[250018]:       <entry name="serial">b3f961d2-e73f-49bf-b141-6505e77ad9ac</entry>
Jan 20 15:01:23 compute-0 nova_compute[250018]:       <entry name="uuid">b3f961d2-e73f-49bf-b141-6505e77ad9ac</entry>
Jan 20 15:01:23 compute-0 nova_compute[250018]:       <entry name="family">Virtual Machine</entry>
Jan 20 15:01:23 compute-0 nova_compute[250018]:     </system>
Jan 20 15:01:23 compute-0 nova_compute[250018]:   </sysinfo>
Jan 20 15:01:23 compute-0 nova_compute[250018]:   <os>
Jan 20 15:01:23 compute-0 nova_compute[250018]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 15:01:23 compute-0 nova_compute[250018]:     <boot dev="hd"/>
Jan 20 15:01:23 compute-0 nova_compute[250018]:     <smbios mode="sysinfo"/>
Jan 20 15:01:23 compute-0 nova_compute[250018]:   </os>
Jan 20 15:01:23 compute-0 nova_compute[250018]:   <features>
Jan 20 15:01:23 compute-0 nova_compute[250018]:     <acpi/>
Jan 20 15:01:23 compute-0 nova_compute[250018]:     <apic/>
Jan 20 15:01:23 compute-0 nova_compute[250018]:     <vmcoreinfo/>
Jan 20 15:01:23 compute-0 nova_compute[250018]:   </features>
Jan 20 15:01:23 compute-0 nova_compute[250018]:   <clock offset="utc">
Jan 20 15:01:23 compute-0 nova_compute[250018]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 15:01:23 compute-0 nova_compute[250018]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 15:01:23 compute-0 nova_compute[250018]:     <timer name="hpet" present="no"/>
Jan 20 15:01:23 compute-0 nova_compute[250018]:   </clock>
Jan 20 15:01:23 compute-0 nova_compute[250018]:   <cpu mode="custom" match="exact">
Jan 20 15:01:23 compute-0 nova_compute[250018]:     <model>Nehalem</model>
Jan 20 15:01:23 compute-0 nova_compute[250018]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 15:01:23 compute-0 nova_compute[250018]:   </cpu>
Jan 20 15:01:23 compute-0 nova_compute[250018]:   <devices>
Jan 20 15:01:23 compute-0 nova_compute[250018]:     <disk type="network" device="disk">
Jan 20 15:01:23 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 15:01:23 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/b3f961d2-e73f-49bf-b141-6505e77ad9ac_disk">
Jan 20 15:01:23 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 15:01:23 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 15:01:23 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 15:01:23 compute-0 nova_compute[250018]:       </source>
Jan 20 15:01:23 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 15:01:23 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 15:01:23 compute-0 nova_compute[250018]:       </auth>
Jan 20 15:01:23 compute-0 nova_compute[250018]:       <target dev="vda" bus="virtio"/>
Jan 20 15:01:23 compute-0 nova_compute[250018]:     </disk>
Jan 20 15:01:23 compute-0 nova_compute[250018]:     <disk type="network" device="cdrom">
Jan 20 15:01:23 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 15:01:23 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/b3f961d2-e73f-49bf-b141-6505e77ad9ac_disk.config">
Jan 20 15:01:23 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 15:01:23 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 15:01:23 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 15:01:23 compute-0 nova_compute[250018]:       </source>
Jan 20 15:01:23 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 15:01:23 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 15:01:23 compute-0 nova_compute[250018]:       </auth>
Jan 20 15:01:23 compute-0 nova_compute[250018]:       <target dev="sda" bus="sata"/>
Jan 20 15:01:23 compute-0 nova_compute[250018]:     </disk>
Jan 20 15:01:23 compute-0 nova_compute[250018]:     <interface type="ethernet">
Jan 20 15:01:23 compute-0 nova_compute[250018]:       <mac address="fa:16:3e:12:81:5e"/>
Jan 20 15:01:23 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 15:01:23 compute-0 nova_compute[250018]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 15:01:23 compute-0 nova_compute[250018]:       <mtu size="1442"/>
Jan 20 15:01:23 compute-0 nova_compute[250018]:       <target dev="tap07baaccf-06"/>
Jan 20 15:01:23 compute-0 nova_compute[250018]:     </interface>
Jan 20 15:01:23 compute-0 nova_compute[250018]:     <serial type="pty">
Jan 20 15:01:23 compute-0 nova_compute[250018]:       <log file="/var/lib/nova/instances/b3f961d2-e73f-49bf-b141-6505e77ad9ac/console.log" append="off"/>
Jan 20 15:01:23 compute-0 nova_compute[250018]:     </serial>
Jan 20 15:01:23 compute-0 nova_compute[250018]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 15:01:23 compute-0 nova_compute[250018]:     <video>
Jan 20 15:01:23 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 15:01:23 compute-0 nova_compute[250018]:     </video>
Jan 20 15:01:23 compute-0 nova_compute[250018]:     <input type="tablet" bus="usb"/>
Jan 20 15:01:23 compute-0 nova_compute[250018]:     <rng model="virtio">
Jan 20 15:01:23 compute-0 nova_compute[250018]:       <backend model="random">/dev/urandom</backend>
Jan 20 15:01:23 compute-0 nova_compute[250018]:     </rng>
Jan 20 15:01:23 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root"/>
Jan 20 15:01:23 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:01:23 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:01:23 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:01:23 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:01:23 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:01:23 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:01:23 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:01:23 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:01:23 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:01:23 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:01:23 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:01:23 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:01:23 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:01:23 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:01:23 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:01:23 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:01:23 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:01:23 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:01:23 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:01:23 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:01:23 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:01:23 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:01:23 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:01:23 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:01:23 compute-0 nova_compute[250018]:     <controller type="usb" index="0"/>
Jan 20 15:01:23 compute-0 nova_compute[250018]:     <memballoon model="virtio">
Jan 20 15:01:23 compute-0 nova_compute[250018]:       <stats period="10"/>
Jan 20 15:01:23 compute-0 nova_compute[250018]:     </memballoon>
Jan 20 15:01:23 compute-0 nova_compute[250018]:   </devices>
Jan 20 15:01:23 compute-0 nova_compute[250018]: </domain>
Jan 20 15:01:23 compute-0 nova_compute[250018]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 15:01:23 compute-0 nova_compute[250018]: 2026-01-20 15:01:23.857 250022 DEBUG nova.compute.manager [None req-9a0296d2-fd16-4ab7-b360-5daf80ecc613 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: b3f961d2-e73f-49bf-b141-6505e77ad9ac] Preparing to wait for external event network-vif-plugged-07baaccf-06f0-4af0-a04a-9638078c313f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 20 15:01:23 compute-0 nova_compute[250018]: 2026-01-20 15:01:23.857 250022 DEBUG oslo_concurrency.lockutils [None req-9a0296d2-fd16-4ab7-b360-5daf80ecc613 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Acquiring lock "b3f961d2-e73f-49bf-b141-6505e77ad9ac-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:01:23 compute-0 nova_compute[250018]: 2026-01-20 15:01:23.857 250022 DEBUG oslo_concurrency.lockutils [None req-9a0296d2-fd16-4ab7-b360-5daf80ecc613 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Lock "b3f961d2-e73f-49bf-b141-6505e77ad9ac-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:01:23 compute-0 nova_compute[250018]: 2026-01-20 15:01:23.858 250022 DEBUG oslo_concurrency.lockutils [None req-9a0296d2-fd16-4ab7-b360-5daf80ecc613 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Lock "b3f961d2-e73f-49bf-b141-6505e77ad9ac-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:01:23 compute-0 nova_compute[250018]: 2026-01-20 15:01:23.858 250022 DEBUG nova.virt.libvirt.vif [None req-9a0296d2-fd16-4ab7-b360-5daf80ecc613 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T15:01:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-ServerBootFromVolumeStableRescueTest-server-994461168',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverbootfromvolumestablerescuetest-server-994461168',id=149,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='63555e5851564db08c6429231d264f2c',ramdisk_id='',reservation_id='r-lbk5hu90',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerBootFromVolumeStableRescueTest-1871371328',owner_user_name='tempest-ServerBootFromVolumeStableRescueTest-1871371328-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T15:01:18Z,user_data=None,user_id='2446e8399b344b29986c1aaf8bf73adf',uuid=b3f961d2-e73f-49bf-b141-6505e77ad9ac,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "07baaccf-06f0-4af0-a04a-9638078c313f", "address": "fa:16:3e:12:81:5e", "network": {"id": "671e28d0-0b9e-41e0-b5e0-db1ccd4717ec", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-884777184-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63555e5851564db08c6429231d264f2c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07baaccf-06", "ovs_interfaceid": "07baaccf-06f0-4af0-a04a-9638078c313f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 15:01:23 compute-0 nova_compute[250018]: 2026-01-20 15:01:23.859 250022 DEBUG nova.network.os_vif_util [None req-9a0296d2-fd16-4ab7-b360-5daf80ecc613 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Converting VIF {"id": "07baaccf-06f0-4af0-a04a-9638078c313f", "address": "fa:16:3e:12:81:5e", "network": {"id": "671e28d0-0b9e-41e0-b5e0-db1ccd4717ec", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-884777184-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63555e5851564db08c6429231d264f2c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07baaccf-06", "ovs_interfaceid": "07baaccf-06f0-4af0-a04a-9638078c313f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:01:23 compute-0 nova_compute[250018]: 2026-01-20 15:01:23.859 250022 DEBUG nova.network.os_vif_util [None req-9a0296d2-fd16-4ab7-b360-5daf80ecc613 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:12:81:5e,bridge_name='br-int',has_traffic_filtering=True,id=07baaccf-06f0-4af0-a04a-9638078c313f,network=Network(671e28d0-0b9e-41e0-b5e0-db1ccd4717ec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap07baaccf-06') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:01:23 compute-0 nova_compute[250018]: 2026-01-20 15:01:23.860 250022 DEBUG os_vif [None req-9a0296d2-fd16-4ab7-b360-5daf80ecc613 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:12:81:5e,bridge_name='br-int',has_traffic_filtering=True,id=07baaccf-06f0-4af0-a04a-9638078c313f,network=Network(671e28d0-0b9e-41e0-b5e0-db1ccd4717ec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap07baaccf-06') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 15:01:23 compute-0 nova_compute[250018]: 2026-01-20 15:01:23.860 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:01:23 compute-0 nova_compute[250018]: 2026-01-20 15:01:23.861 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:01:23 compute-0 nova_compute[250018]: 2026-01-20 15:01:23.861 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:01:23 compute-0 nova_compute[250018]: 2026-01-20 15:01:23.864 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:01:23 compute-0 nova_compute[250018]: 2026-01-20 15:01:23.864 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap07baaccf-06, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:01:23 compute-0 nova_compute[250018]: 2026-01-20 15:01:23.865 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap07baaccf-06, col_values=(('external_ids', {'iface-id': '07baaccf-06f0-4af0-a04a-9638078c313f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:12:81:5e', 'vm-uuid': 'b3f961d2-e73f-49bf-b141-6505e77ad9ac'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:01:23 compute-0 nova_compute[250018]: 2026-01-20 15:01:23.866 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:01:23 compute-0 NetworkManager[48960]: <info>  [1768921283.8676] manager: (tap07baaccf-06): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/263)
Jan 20 15:01:23 compute-0 nova_compute[250018]: 2026-01-20 15:01:23.869 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 15:01:23 compute-0 nova_compute[250018]: 2026-01-20 15:01:23.873 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:01:23 compute-0 nova_compute[250018]: 2026-01-20 15:01:23.873 250022 INFO os_vif [None req-9a0296d2-fd16-4ab7-b360-5daf80ecc613 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:12:81:5e,bridge_name='br-int',has_traffic_filtering=True,id=07baaccf-06f0-4af0-a04a-9638078c313f,network=Network(671e28d0-0b9e-41e0-b5e0-db1ccd4717ec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap07baaccf-06')
Jan 20 15:01:23 compute-0 nova_compute[250018]: 2026-01-20 15:01:23.948 250022 DEBUG nova.virt.libvirt.driver [None req-9a0296d2-fd16-4ab7-b360-5daf80ecc613 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:01:23 compute-0 nova_compute[250018]: 2026-01-20 15:01:23.949 250022 DEBUG nova.virt.libvirt.driver [None req-9a0296d2-fd16-4ab7-b360-5daf80ecc613 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:01:23 compute-0 nova_compute[250018]: 2026-01-20 15:01:23.949 250022 DEBUG nova.virt.libvirt.driver [None req-9a0296d2-fd16-4ab7-b360-5daf80ecc613 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] No VIF found with MAC fa:16:3e:12:81:5e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 15:01:23 compute-0 nova_compute[250018]: 2026-01-20 15:01:23.949 250022 INFO nova.virt.libvirt.driver [None req-9a0296d2-fd16-4ab7-b360-5daf80ecc613 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: b3f961d2-e73f-49bf-b141-6505e77ad9ac] Using config drive
Jan 20 15:01:23 compute-0 nova_compute[250018]: 2026-01-20 15:01:23.972 250022 DEBUG nova.storage.rbd_utils [None req-9a0296d2-fd16-4ab7-b360-5daf80ecc613 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] rbd image b3f961d2-e73f-49bf-b141-6505e77ad9ac_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:01:24 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2287: 321 pgs: 321 active+clean; 220 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 5.5 MiB/s rd, 6.8 MiB/s wr, 198 op/s
Jan 20 15:01:24 compute-0 nova_compute[250018]: 2026-01-20 15:01:24.557 250022 INFO nova.virt.libvirt.driver [None req-9a0296d2-fd16-4ab7-b360-5daf80ecc613 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: b3f961d2-e73f-49bf-b141-6505e77ad9ac] Creating config drive at /var/lib/nova/instances/b3f961d2-e73f-49bf-b141-6505e77ad9ac/disk.config
Jan 20 15:01:24 compute-0 nova_compute[250018]: 2026-01-20 15:01:24.561 250022 DEBUG oslo_concurrency.processutils [None req-9a0296d2-fd16-4ab7-b360-5daf80ecc613 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b3f961d2-e73f-49bf-b141-6505e77ad9ac/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmph75gc_5_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:01:24 compute-0 nova_compute[250018]: 2026-01-20 15:01:24.704 250022 DEBUG oslo_concurrency.processutils [None req-9a0296d2-fd16-4ab7-b360-5daf80ecc613 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b3f961d2-e73f-49bf-b141-6505e77ad9ac/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmph75gc_5_" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:01:24 compute-0 nova_compute[250018]: 2026-01-20 15:01:24.737 250022 DEBUG nova.storage.rbd_utils [None req-9a0296d2-fd16-4ab7-b360-5daf80ecc613 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] rbd image b3f961d2-e73f-49bf-b141-6505e77ad9ac_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:01:24 compute-0 nova_compute[250018]: 2026-01-20 15:01:24.741 250022 DEBUG oslo_concurrency.processutils [None req-9a0296d2-fd16-4ab7-b360-5daf80ecc613 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b3f961d2-e73f-49bf-b141-6505e77ad9ac/disk.config b3f961d2-e73f-49bf-b141-6505e77ad9ac_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:01:24 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/4191032683' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:01:24 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/2458772179' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:01:24 compute-0 ceph-mon[74360]: pgmap v2287: 321 pgs: 321 active+clean; 220 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 5.5 MiB/s rd, 6.8 MiB/s wr, 198 op/s
Jan 20 15:01:24 compute-0 nova_compute[250018]: 2026-01-20 15:01:24.903 250022 DEBUG oslo_concurrency.processutils [None req-9a0296d2-fd16-4ab7-b360-5daf80ecc613 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b3f961d2-e73f-49bf-b141-6505e77ad9ac/disk.config b3f961d2-e73f-49bf-b141-6505e77ad9ac_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.161s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:01:24 compute-0 nova_compute[250018]: 2026-01-20 15:01:24.904 250022 INFO nova.virt.libvirt.driver [None req-9a0296d2-fd16-4ab7-b360-5daf80ecc613 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: b3f961d2-e73f-49bf-b141-6505e77ad9ac] Deleting local config drive /var/lib/nova/instances/b3f961d2-e73f-49bf-b141-6505e77ad9ac/disk.config because it was imported into RBD.
Jan 20 15:01:24 compute-0 kernel: tap07baaccf-06: entered promiscuous mode
Jan 20 15:01:24 compute-0 NetworkManager[48960]: <info>  [1768921284.9625] manager: (tap07baaccf-06): new Tun device (/org/freedesktop/NetworkManager/Devices/264)
Jan 20 15:01:24 compute-0 nova_compute[250018]: 2026-01-20 15:01:24.963 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:01:24 compute-0 ovn_controller[148666]: 2026-01-20T15:01:24Z|00536|binding|INFO|Claiming lport 07baaccf-06f0-4af0-a04a-9638078c313f for this chassis.
Jan 20 15:01:24 compute-0 ovn_controller[148666]: 2026-01-20T15:01:24Z|00537|binding|INFO|07baaccf-06f0-4af0-a04a-9638078c313f: Claiming fa:16:3e:12:81:5e 10.100.0.6
Jan 20 15:01:24 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:01:24.975 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:12:81:5e 10.100.0.6'], port_security=['fa:16:3e:12:81:5e 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'b3f961d2-e73f-49bf-b141-6505e77ad9ac', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-671e28d0-0b9e-41e0-b5e0-db1ccd4717ec', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '63555e5851564db08c6429231d264f2c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '7e54c470-6a6f-454e-ae01-9d2d59b2c74d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=248fa32c-94be-4e1b-b4d3-cb9fac0ec155, chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=07baaccf-06f0-4af0-a04a-9638078c313f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:01:24 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:01:24.977 160071 INFO neutron.agent.ovn.metadata.agent [-] Port 07baaccf-06f0-4af0-a04a-9638078c313f in datapath 671e28d0-0b9e-41e0-b5e0-db1ccd4717ec bound to our chassis
Jan 20 15:01:24 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:01:24.979 160071 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 671e28d0-0b9e-41e0-b5e0-db1ccd4717ec
Jan 20 15:01:24 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:01:24.993 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[bd546c8a-3767-452d-a3f8-82aaaf9aeb2f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:01:24 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:01:24.995 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap671e28d0-01 in ovnmeta-671e28d0-0b9e-41e0-b5e0-db1ccd4717ec namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 15:01:25 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:01:24.999 257604 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap671e28d0-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 15:01:25 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:01:24.999 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[f9993652-0d72-43bd-abfb-16686c62528e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:01:25 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:01:25.000 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[5d988612-06a1-4033-89fa-193516672c54]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:01:25 compute-0 systemd-machined[216401]: New machine qemu-69-instance-00000095.
Jan 20 15:01:25 compute-0 systemd-udevd[338540]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 15:01:25 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:01:25.015 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[e9b446a3-0708-43db-b7e5-18a83a12dba1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:01:25 compute-0 NetworkManager[48960]: <info>  [1768921285.0212] device (tap07baaccf-06): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 15:01:25 compute-0 NetworkManager[48960]: <info>  [1768921285.0226] device (tap07baaccf-06): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 15:01:25 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:01:25.030 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[882da6a4-0b16-4c2e-90b9-524837eb9bb4]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:01:25 compute-0 nova_compute[250018]: 2026-01-20 15:01:25.033 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:01:25 compute-0 ovn_controller[148666]: 2026-01-20T15:01:25Z|00538|binding|INFO|Setting lport 07baaccf-06f0-4af0-a04a-9638078c313f ovn-installed in OVS
Jan 20 15:01:25 compute-0 ovn_controller[148666]: 2026-01-20T15:01:25Z|00539|binding|INFO|Setting lport 07baaccf-06f0-4af0-a04a-9638078c313f up in Southbound
Jan 20 15:01:25 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:01:25 compute-0 nova_compute[250018]: 2026-01-20 15:01:25.038 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:01:25 compute-0 systemd[1]: Started Virtual Machine qemu-69-instance-00000095.
Jan 20 15:01:25 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:01:25.060 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[0f124d93-2667-4791-956c-5a2c6caf3ba8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:01:25 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:01:25.065 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[3a2b4940-7b20-481c-bb67-07e94ecbbd64]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:01:25 compute-0 systemd-udevd[338544]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 15:01:25 compute-0 NetworkManager[48960]: <info>  [1768921285.0665] manager: (tap671e28d0-00): new Veth device (/org/freedesktop/NetworkManager/Devices/265)
Jan 20 15:01:25 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:01:25.091 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[bf2533ae-dd61-4e13-9720-4925819978a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:01:25 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:01:25.094 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[440c7c1a-c23f-458a-b067-47f0a56348e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:01:25 compute-0 NetworkManager[48960]: <info>  [1768921285.1162] device (tap671e28d0-00): carrier: link connected
Jan 20 15:01:25 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:01:25.122 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[983e6b43-cdd8-4e76-bfd5-ae6062b86617]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:01:25 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:01:25.143 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[e047e089-2ec3-444c-bd25-658c5d2ac0db]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap671e28d0-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2b:4e:69'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 176], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 722583, 'reachable_time': 31120, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 338572, 'error': None, 'target': 'ovnmeta-671e28d0-0b9e-41e0-b5e0-db1ccd4717ec', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:01:25 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:01:25.160 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[bb9b0816-38f9-4e06-aeaf-981828151d5c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe2b:4e69'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 722583, 'tstamp': 722583}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 338573, 'error': None, 'target': 'ovnmeta-671e28d0-0b9e-41e0-b5e0-db1ccd4717ec', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:01:25 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:01:25.183 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[cbe64ce2-4af7-463b-86f1-357e53b3a3e6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap671e28d0-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2b:4e:69'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 176], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 722583, 'reachable_time': 31120, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 338574, 'error': None, 'target': 'ovnmeta-671e28d0-0b9e-41e0-b5e0-db1ccd4717ec', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:01:25 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:01:25.216 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[8938106a-65ba-415d-b2dd-bc4d811deb88]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:01:25 compute-0 nova_compute[250018]: 2026-01-20 15:01:25.252 250022 DEBUG nova.network.neutron [req-431efb28-6965-46b3-aeb2-f628b383248d req-409c4663-4de7-42b8-a025-774919eec885 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b3f961d2-e73f-49bf-b141-6505e77ad9ac] Updated VIF entry in instance network info cache for port 07baaccf-06f0-4af0-a04a-9638078c313f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 15:01:25 compute-0 nova_compute[250018]: 2026-01-20 15:01:25.253 250022 DEBUG nova.network.neutron [req-431efb28-6965-46b3-aeb2-f628b383248d req-409c4663-4de7-42b8-a025-774919eec885 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b3f961d2-e73f-49bf-b141-6505e77ad9ac] Updating instance_info_cache with network_info: [{"id": "07baaccf-06f0-4af0-a04a-9638078c313f", "address": "fa:16:3e:12:81:5e", "network": {"id": "671e28d0-0b9e-41e0-b5e0-db1ccd4717ec", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-884777184-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63555e5851564db08c6429231d264f2c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07baaccf-06", "ovs_interfaceid": "07baaccf-06f0-4af0-a04a-9638078c313f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:01:25 compute-0 nova_compute[250018]: 2026-01-20 15:01:25.272 250022 DEBUG oslo_concurrency.lockutils [req-431efb28-6965-46b3-aeb2-f628b383248d req-409c4663-4de7-42b8-a025-774919eec885 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-b3f961d2-e73f-49bf-b141-6505e77ad9ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:01:25 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:01:25.295 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[45caabb9-d256-4dc6-b7d5-00ce4dd91131]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:01:25 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:01:25.297 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap671e28d0-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:01:25 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:01:25.297 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:01:25 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:01:25.298 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap671e28d0-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:01:25 compute-0 nova_compute[250018]: 2026-01-20 15:01:25.300 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:01:25 compute-0 NetworkManager[48960]: <info>  [1768921285.3009] manager: (tap671e28d0-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/266)
Jan 20 15:01:25 compute-0 kernel: tap671e28d0-00: entered promiscuous mode
Jan 20 15:01:25 compute-0 nova_compute[250018]: 2026-01-20 15:01:25.303 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:01:25 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:01:25.308 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap671e28d0-00, col_values=(('external_ids', {'iface-id': 'a8628d9e-196f-4b84-89fd-d3a41792b8a0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:01:25 compute-0 nova_compute[250018]: 2026-01-20 15:01:25.310 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:01:25 compute-0 ovn_controller[148666]: 2026-01-20T15:01:25Z|00540|binding|INFO|Releasing lport a8628d9e-196f-4b84-89fd-d3a41792b8a0 from this chassis (sb_readonly=0)
Jan 20 15:01:25 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:01:25.314 160071 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/671e28d0-0b9e-41e0-b5e0-db1ccd4717ec.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/671e28d0-0b9e-41e0-b5e0-db1ccd4717ec.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 15:01:25 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:01:25.315 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[a18fe398-8650-407d-b2a7-bc35a7236fa3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:01:25 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:01:25.316 160071 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 15:01:25 compute-0 ovn_metadata_agent[160049]: global
Jan 20 15:01:25 compute-0 ovn_metadata_agent[160049]:     log         /dev/log local0 debug
Jan 20 15:01:25 compute-0 ovn_metadata_agent[160049]:     log-tag     haproxy-metadata-proxy-671e28d0-0b9e-41e0-b5e0-db1ccd4717ec
Jan 20 15:01:25 compute-0 ovn_metadata_agent[160049]:     user        root
Jan 20 15:01:25 compute-0 ovn_metadata_agent[160049]:     group       root
Jan 20 15:01:25 compute-0 ovn_metadata_agent[160049]:     maxconn     1024
Jan 20 15:01:25 compute-0 ovn_metadata_agent[160049]:     pidfile     /var/lib/neutron/external/pids/671e28d0-0b9e-41e0-b5e0-db1ccd4717ec.pid.haproxy
Jan 20 15:01:25 compute-0 ovn_metadata_agent[160049]:     daemon
Jan 20 15:01:25 compute-0 ovn_metadata_agent[160049]: 
Jan 20 15:01:25 compute-0 ovn_metadata_agent[160049]: defaults
Jan 20 15:01:25 compute-0 ovn_metadata_agent[160049]:     log global
Jan 20 15:01:25 compute-0 ovn_metadata_agent[160049]:     mode http
Jan 20 15:01:25 compute-0 ovn_metadata_agent[160049]:     option httplog
Jan 20 15:01:25 compute-0 ovn_metadata_agent[160049]:     option dontlognull
Jan 20 15:01:25 compute-0 ovn_metadata_agent[160049]:     option http-server-close
Jan 20 15:01:25 compute-0 ovn_metadata_agent[160049]:     option forwardfor
Jan 20 15:01:25 compute-0 ovn_metadata_agent[160049]:     retries                 3
Jan 20 15:01:25 compute-0 ovn_metadata_agent[160049]:     timeout http-request    30s
Jan 20 15:01:25 compute-0 ovn_metadata_agent[160049]:     timeout connect         30s
Jan 20 15:01:25 compute-0 ovn_metadata_agent[160049]:     timeout client          32s
Jan 20 15:01:25 compute-0 ovn_metadata_agent[160049]:     timeout server          32s
Jan 20 15:01:25 compute-0 ovn_metadata_agent[160049]:     timeout http-keep-alive 30s
Jan 20 15:01:25 compute-0 ovn_metadata_agent[160049]: 
Jan 20 15:01:25 compute-0 ovn_metadata_agent[160049]: 
Jan 20 15:01:25 compute-0 ovn_metadata_agent[160049]: listen listener
Jan 20 15:01:25 compute-0 ovn_metadata_agent[160049]:     bind 169.254.169.254:80
Jan 20 15:01:25 compute-0 ovn_metadata_agent[160049]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 15:01:25 compute-0 ovn_metadata_agent[160049]:     http-request add-header X-OVN-Network-ID 671e28d0-0b9e-41e0-b5e0-db1ccd4717ec
Jan 20 15:01:25 compute-0 ovn_metadata_agent[160049]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 15:01:25 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:01:25.318 160071 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-671e28d0-0b9e-41e0-b5e0-db1ccd4717ec', 'env', 'PROCESS_TAG=haproxy-671e28d0-0b9e-41e0-b5e0-db1ccd4717ec', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/671e28d0-0b9e-41e0-b5e0-db1ccd4717ec.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 15:01:25 compute-0 nova_compute[250018]: 2026-01-20 15:01:25.331 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:01:25 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:01:25 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:01:25 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:01:25.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:01:25 compute-0 podman[338605]: 2026-01-20 15:01:25.758303782 +0000 UTC m=+0.054257133 container create 5810b0dee8367a82946546c7c70bc3d30342d5b471c721005ff452512f162fde (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-671e28d0-0b9e-41e0-b5e0-db1ccd4717ec, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 15:01:25 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:01:25 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:01:25 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:01:25.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:01:25 compute-0 systemd[1]: Started libpod-conmon-5810b0dee8367a82946546c7c70bc3d30342d5b471c721005ff452512f162fde.scope.
Jan 20 15:01:25 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:01:25 compute-0 podman[338605]: 2026-01-20 15:01:25.731076428 +0000 UTC m=+0.027029799 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 15:01:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/479d335b9a84cd2bddb395b02f918d545db41496cb64ad38a58957b6d714c968/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 15:01:25 compute-0 podman[338605]: 2026-01-20 15:01:25.861585063 +0000 UTC m=+0.157538434 container init 5810b0dee8367a82946546c7c70bc3d30342d5b471c721005ff452512f162fde (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-671e28d0-0b9e-41e0-b5e0-db1ccd4717ec, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 15:01:25 compute-0 podman[338605]: 2026-01-20 15:01:25.866976488 +0000 UTC m=+0.162929839 container start 5810b0dee8367a82946546c7c70bc3d30342d5b471c721005ff452512f162fde (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-671e28d0-0b9e-41e0-b5e0-db1ccd4717ec, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 20 15:01:25 compute-0 neutron-haproxy-ovnmeta-671e28d0-0b9e-41e0-b5e0-db1ccd4717ec[338621]: [NOTICE]   (338625) : New worker (338627) forked
Jan 20 15:01:25 compute-0 neutron-haproxy-ovnmeta-671e28d0-0b9e-41e0-b5e0-db1ccd4717ec[338621]: [NOTICE]   (338625) : Loading success.
Jan 20 15:01:26 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2288: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 7.1 MiB/s rd, 6.8 MiB/s wr, 282 op/s
Jan 20 15:01:26 compute-0 nova_compute[250018]: 2026-01-20 15:01:26.290 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768921286.2897995, b3f961d2-e73f-49bf-b141-6505e77ad9ac => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:01:26 compute-0 nova_compute[250018]: 2026-01-20 15:01:26.292 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: b3f961d2-e73f-49bf-b141-6505e77ad9ac] VM Started (Lifecycle Event)
Jan 20 15:01:26 compute-0 nova_compute[250018]: 2026-01-20 15:01:26.318 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: b3f961d2-e73f-49bf-b141-6505e77ad9ac] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:01:26 compute-0 nova_compute[250018]: 2026-01-20 15:01:26.323 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768921286.291178, b3f961d2-e73f-49bf-b141-6505e77ad9ac => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:01:26 compute-0 nova_compute[250018]: 2026-01-20 15:01:26.323 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: b3f961d2-e73f-49bf-b141-6505e77ad9ac] VM Paused (Lifecycle Event)
Jan 20 15:01:26 compute-0 nova_compute[250018]: 2026-01-20 15:01:26.413 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: b3f961d2-e73f-49bf-b141-6505e77ad9ac] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:01:26 compute-0 nova_compute[250018]: 2026-01-20 15:01:26.417 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: b3f961d2-e73f-49bf-b141-6505e77ad9ac] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:01:26 compute-0 nova_compute[250018]: 2026-01-20 15:01:26.436 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: b3f961d2-e73f-49bf-b141-6505e77ad9ac] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 15:01:27 compute-0 ceph-mon[74360]: pgmap v2288: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 7.1 MiB/s rd, 6.8 MiB/s wr, 282 op/s
Jan 20 15:01:27 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:01:27 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:01:27 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:01:27.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:01:27 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:01:27 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:01:27 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:01:27.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:01:28 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2289: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 7.1 MiB/s rd, 6.8 MiB/s wr, 282 op/s
Jan 20 15:01:28 compute-0 nova_compute[250018]: 2026-01-20 15:01:28.826 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:01:28 compute-0 nova_compute[250018]: 2026-01-20 15:01:28.866 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:01:29 compute-0 nova_compute[250018]: 2026-01-20 15:01:29.159 250022 DEBUG nova.compute.manager [req-046c35d8-4721-447e-920e-57d31c71afdb req-06ad2208-1023-427a-ae97-cbb47d5fcd01 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b3f961d2-e73f-49bf-b141-6505e77ad9ac] Received event network-vif-plugged-07baaccf-06f0-4af0-a04a-9638078c313f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:01:29 compute-0 nova_compute[250018]: 2026-01-20 15:01:29.159 250022 DEBUG oslo_concurrency.lockutils [req-046c35d8-4721-447e-920e-57d31c71afdb req-06ad2208-1023-427a-ae97-cbb47d5fcd01 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "b3f961d2-e73f-49bf-b141-6505e77ad9ac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:01:29 compute-0 nova_compute[250018]: 2026-01-20 15:01:29.160 250022 DEBUG oslo_concurrency.lockutils [req-046c35d8-4721-447e-920e-57d31c71afdb req-06ad2208-1023-427a-ae97-cbb47d5fcd01 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "b3f961d2-e73f-49bf-b141-6505e77ad9ac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:01:29 compute-0 nova_compute[250018]: 2026-01-20 15:01:29.160 250022 DEBUG oslo_concurrency.lockutils [req-046c35d8-4721-447e-920e-57d31c71afdb req-06ad2208-1023-427a-ae97-cbb47d5fcd01 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "b3f961d2-e73f-49bf-b141-6505e77ad9ac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:01:29 compute-0 nova_compute[250018]: 2026-01-20 15:01:29.160 250022 DEBUG nova.compute.manager [req-046c35d8-4721-447e-920e-57d31c71afdb req-06ad2208-1023-427a-ae97-cbb47d5fcd01 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b3f961d2-e73f-49bf-b141-6505e77ad9ac] Processing event network-vif-plugged-07baaccf-06f0-4af0-a04a-9638078c313f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 20 15:01:29 compute-0 nova_compute[250018]: 2026-01-20 15:01:29.160 250022 DEBUG nova.compute.manager [None req-9a0296d2-fd16-4ab7-b360-5daf80ecc613 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: b3f961d2-e73f-49bf-b141-6505e77ad9ac] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 15:01:29 compute-0 nova_compute[250018]: 2026-01-20 15:01:29.163 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768921289.1636195, b3f961d2-e73f-49bf-b141-6505e77ad9ac => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:01:29 compute-0 nova_compute[250018]: 2026-01-20 15:01:29.163 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: b3f961d2-e73f-49bf-b141-6505e77ad9ac] VM Resumed (Lifecycle Event)
Jan 20 15:01:29 compute-0 nova_compute[250018]: 2026-01-20 15:01:29.166 250022 DEBUG nova.virt.libvirt.driver [None req-9a0296d2-fd16-4ab7-b360-5daf80ecc613 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: b3f961d2-e73f-49bf-b141-6505e77ad9ac] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 15:01:29 compute-0 nova_compute[250018]: 2026-01-20 15:01:29.169 250022 INFO nova.virt.libvirt.driver [-] [instance: b3f961d2-e73f-49bf-b141-6505e77ad9ac] Instance spawned successfully.
Jan 20 15:01:29 compute-0 nova_compute[250018]: 2026-01-20 15:01:29.170 250022 DEBUG nova.virt.libvirt.driver [None req-9a0296d2-fd16-4ab7-b360-5daf80ecc613 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: b3f961d2-e73f-49bf-b141-6505e77ad9ac] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 15:01:29 compute-0 nova_compute[250018]: 2026-01-20 15:01:29.194 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: b3f961d2-e73f-49bf-b141-6505e77ad9ac] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:01:29 compute-0 nova_compute[250018]: 2026-01-20 15:01:29.199 250022 DEBUG nova.virt.libvirt.driver [None req-9a0296d2-fd16-4ab7-b360-5daf80ecc613 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: b3f961d2-e73f-49bf-b141-6505e77ad9ac] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:01:29 compute-0 nova_compute[250018]: 2026-01-20 15:01:29.200 250022 DEBUG nova.virt.libvirt.driver [None req-9a0296d2-fd16-4ab7-b360-5daf80ecc613 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: b3f961d2-e73f-49bf-b141-6505e77ad9ac] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:01:29 compute-0 nova_compute[250018]: 2026-01-20 15:01:29.201 250022 DEBUG nova.virt.libvirt.driver [None req-9a0296d2-fd16-4ab7-b360-5daf80ecc613 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: b3f961d2-e73f-49bf-b141-6505e77ad9ac] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:01:29 compute-0 nova_compute[250018]: 2026-01-20 15:01:29.202 250022 DEBUG nova.virt.libvirt.driver [None req-9a0296d2-fd16-4ab7-b360-5daf80ecc613 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: b3f961d2-e73f-49bf-b141-6505e77ad9ac] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:01:29 compute-0 nova_compute[250018]: 2026-01-20 15:01:29.203 250022 DEBUG nova.virt.libvirt.driver [None req-9a0296d2-fd16-4ab7-b360-5daf80ecc613 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: b3f961d2-e73f-49bf-b141-6505e77ad9ac] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:01:29 compute-0 nova_compute[250018]: 2026-01-20 15:01:29.203 250022 DEBUG nova.virt.libvirt.driver [None req-9a0296d2-fd16-4ab7-b360-5daf80ecc613 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: b3f961d2-e73f-49bf-b141-6505e77ad9ac] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:01:29 compute-0 nova_compute[250018]: 2026-01-20 15:01:29.212 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: b3f961d2-e73f-49bf-b141-6505e77ad9ac] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:01:29 compute-0 nova_compute[250018]: 2026-01-20 15:01:29.255 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: b3f961d2-e73f-49bf-b141-6505e77ad9ac] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 15:01:29 compute-0 nova_compute[250018]: 2026-01-20 15:01:29.302 250022 INFO nova.compute.manager [None req-9a0296d2-fd16-4ab7-b360-5daf80ecc613 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: b3f961d2-e73f-49bf-b141-6505e77ad9ac] Took 11.06 seconds to spawn the instance on the hypervisor.
Jan 20 15:01:29 compute-0 nova_compute[250018]: 2026-01-20 15:01:29.302 250022 DEBUG nova.compute.manager [None req-9a0296d2-fd16-4ab7-b360-5daf80ecc613 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: b3f961d2-e73f-49bf-b141-6505e77ad9ac] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:01:29 compute-0 ceph-mon[74360]: pgmap v2289: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 7.1 MiB/s rd, 6.8 MiB/s wr, 282 op/s
Jan 20 15:01:29 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:01:29 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:01:29 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:01:29.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:01:29 compute-0 nova_compute[250018]: 2026-01-20 15:01:29.404 250022 INFO nova.compute.manager [None req-9a0296d2-fd16-4ab7-b360-5daf80ecc613 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: b3f961d2-e73f-49bf-b141-6505e77ad9ac] Took 12.13 seconds to build instance.
Jan 20 15:01:29 compute-0 nova_compute[250018]: 2026-01-20 15:01:29.418 250022 DEBUG oslo_concurrency.lockutils [None req-9a0296d2-fd16-4ab7-b360-5daf80ecc613 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Lock "b3f961d2-e73f-49bf-b141-6505e77ad9ac" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.229s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:01:29 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:01:29 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:01:29 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:01:29.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:01:30 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:01:30 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e328 do_prune osdmap full prune enabled
Jan 20 15:01:30 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e329 e329: 3 total, 3 up, 3 in
Jan 20 15:01:30 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e329: 3 total, 3 up, 3 in
Jan 20 15:01:30 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2291: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 110 KiB/s wr, 200 op/s
Jan 20 15:01:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:01:30.772 160071 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:01:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:01:30.773 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:01:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:01:30.774 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:01:31 compute-0 ceph-mon[74360]: osdmap e329: 3 total, 3 up, 3 in
Jan 20 15:01:31 compute-0 ceph-mon[74360]: pgmap v2291: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 110 KiB/s wr, 200 op/s
Jan 20 15:01:31 compute-0 nova_compute[250018]: 2026-01-20 15:01:31.344 250022 DEBUG nova.compute.manager [req-5ac11ef4-c97c-4f4a-8b0d-d74cd5ff5c29 req-bc67dda0-b18e-4950-aa00-297f98fb262b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b3f961d2-e73f-49bf-b141-6505e77ad9ac] Received event network-vif-plugged-07baaccf-06f0-4af0-a04a-9638078c313f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:01:31 compute-0 nova_compute[250018]: 2026-01-20 15:01:31.344 250022 DEBUG oslo_concurrency.lockutils [req-5ac11ef4-c97c-4f4a-8b0d-d74cd5ff5c29 req-bc67dda0-b18e-4950-aa00-297f98fb262b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "b3f961d2-e73f-49bf-b141-6505e77ad9ac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:01:31 compute-0 nova_compute[250018]: 2026-01-20 15:01:31.345 250022 DEBUG oslo_concurrency.lockutils [req-5ac11ef4-c97c-4f4a-8b0d-d74cd5ff5c29 req-bc67dda0-b18e-4950-aa00-297f98fb262b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "b3f961d2-e73f-49bf-b141-6505e77ad9ac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:01:31 compute-0 nova_compute[250018]: 2026-01-20 15:01:31.345 250022 DEBUG oslo_concurrency.lockutils [req-5ac11ef4-c97c-4f4a-8b0d-d74cd5ff5c29 req-bc67dda0-b18e-4950-aa00-297f98fb262b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "b3f961d2-e73f-49bf-b141-6505e77ad9ac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:01:31 compute-0 nova_compute[250018]: 2026-01-20 15:01:31.346 250022 DEBUG nova.compute.manager [req-5ac11ef4-c97c-4f4a-8b0d-d74cd5ff5c29 req-bc67dda0-b18e-4950-aa00-297f98fb262b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b3f961d2-e73f-49bf-b141-6505e77ad9ac] No waiting events found dispatching network-vif-plugged-07baaccf-06f0-4af0-a04a-9638078c313f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:01:31 compute-0 nova_compute[250018]: 2026-01-20 15:01:31.346 250022 WARNING nova.compute.manager [req-5ac11ef4-c97c-4f4a-8b0d-d74cd5ff5c29 req-bc67dda0-b18e-4950-aa00-297f98fb262b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b3f961d2-e73f-49bf-b141-6505e77ad9ac] Received unexpected event network-vif-plugged-07baaccf-06f0-4af0-a04a-9638078c313f for instance with vm_state active and task_state image_snapshot_pending.
Jan 20 15:01:31 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:01:31 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:01:31 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:01:31.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:01:31 compute-0 nova_compute[250018]: 2026-01-20 15:01:31.409 250022 DEBUG nova.compute.manager [None req-f18a4125-5f78-4234-ac03-a26a16157595 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: b3f961d2-e73f-49bf-b141-6505e77ad9ac] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:01:31 compute-0 nova_compute[250018]: 2026-01-20 15:01:31.505 250022 INFO nova.compute.manager [None req-f18a4125-5f78-4234-ac03-a26a16157595 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: b3f961d2-e73f-49bf-b141-6505e77ad9ac] instance snapshotting
Jan 20 15:01:31 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:01:31 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:01:31 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:01:31.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:01:32 compute-0 nova_compute[250018]: 2026-01-20 15:01:32.075 250022 INFO nova.virt.libvirt.driver [None req-f18a4125-5f78-4234-ac03-a26a16157595 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: b3f961d2-e73f-49bf-b141-6505e77ad9ac] Beginning live snapshot process
Jan 20 15:01:32 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2292: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 93 KiB/s wr, 200 op/s
Jan 20 15:01:32 compute-0 nova_compute[250018]: 2026-01-20 15:01:32.271 250022 DEBUG nova.virt.libvirt.imagebackend [None req-f18a4125-5f78-4234-ac03-a26a16157595 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] No parent info for a32b3e07-16d8-46fd-9a7b-c242c432fcf9; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Jan 20 15:01:32 compute-0 nova_compute[250018]: 2026-01-20 15:01:32.698 250022 DEBUG nova.storage.rbd_utils [None req-f18a4125-5f78-4234-ac03-a26a16157595 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] creating snapshot(5111b7a2d90d4b9a9a2743b66dc7f9d1) on rbd image(b3f961d2-e73f-49bf-b141-6505e77ad9ac_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 20 15:01:33 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e329 do_prune osdmap full prune enabled
Jan 20 15:01:33 compute-0 ceph-mon[74360]: pgmap v2292: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 93 KiB/s wr, 200 op/s
Jan 20 15:01:33 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e330 e330: 3 total, 3 up, 3 in
Jan 20 15:01:33 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e330: 3 total, 3 up, 3 in
Jan 20 15:01:33 compute-0 nova_compute[250018]: 2026-01-20 15:01:33.379 250022 DEBUG nova.storage.rbd_utils [None req-f18a4125-5f78-4234-ac03-a26a16157595 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] cloning vms/b3f961d2-e73f-49bf-b141-6505e77ad9ac_disk@5111b7a2d90d4b9a9a2743b66dc7f9d1 to images/9c1c8ad1-376e-4dd8-93d8-70f0aa412977 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Jan 20 15:01:33 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:01:33 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:01:33 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:01:33.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:01:33 compute-0 nova_compute[250018]: 2026-01-20 15:01:33.510 250022 DEBUG nova.storage.rbd_utils [None req-f18a4125-5f78-4234-ac03-a26a16157595 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] flattening images/9c1c8ad1-376e-4dd8-93d8-70f0aa412977 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Jan 20 15:01:33 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:01:33 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:01:33 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:01:33.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:01:33 compute-0 nova_compute[250018]: 2026-01-20 15:01:33.867 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 15:01:33 compute-0 nova_compute[250018]: 2026-01-20 15:01:33.868 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:01:33 compute-0 nova_compute[250018]: 2026-01-20 15:01:33.868 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Jan 20 15:01:33 compute-0 nova_compute[250018]: 2026-01-20 15:01:33.868 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 20 15:01:33 compute-0 nova_compute[250018]: 2026-01-20 15:01:33.869 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 20 15:01:33 compute-0 nova_compute[250018]: 2026-01-20 15:01:33.870 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:01:34 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2294: 321 pgs: 321 active+clean; 175 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 120 KiB/s wr, 58 op/s
Jan 20 15:01:34 compute-0 ceph-mon[74360]: osdmap e330: 3 total, 3 up, 3 in
Jan 20 15:01:34 compute-0 nova_compute[250018]: 2026-01-20 15:01:34.725 250022 DEBUG nova.storage.rbd_utils [None req-f18a4125-5f78-4234-ac03-a26a16157595 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] removing snapshot(5111b7a2d90d4b9a9a2743b66dc7f9d1) on rbd image(b3f961d2-e73f-49bf-b141-6505e77ad9ac_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Jan 20 15:01:35 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e330 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:01:35 compute-0 nova_compute[250018]: 2026-01-20 15:01:35.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:01:35 compute-0 sudo[338805]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:01:35 compute-0 sudo[338805]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:01:35 compute-0 sudo[338805]: pam_unix(sudo:session): session closed for user root
Jan 20 15:01:35 compute-0 sudo[338830]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:01:35 compute-0 sudo[338830]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:01:35 compute-0 sudo[338830]: pam_unix(sudo:session): session closed for user root
Jan 20 15:01:35 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:01:35 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:01:35 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:01:35.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:01:35 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e330 do_prune osdmap full prune enabled
Jan 20 15:01:35 compute-0 ceph-mon[74360]: pgmap v2294: 321 pgs: 321 active+clean; 175 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 120 KiB/s wr, 58 op/s
Jan 20 15:01:35 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e331 e331: 3 total, 3 up, 3 in
Jan 20 15:01:35 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e331: 3 total, 3 up, 3 in
Jan 20 15:01:35 compute-0 nova_compute[250018]: 2026-01-20 15:01:35.683 250022 DEBUG nova.storage.rbd_utils [None req-f18a4125-5f78-4234-ac03-a26a16157595 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] creating snapshot(snap) on rbd image(9c1c8ad1-376e-4dd8-93d8-70f0aa412977) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 20 15:01:35 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:01:35 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:01:35 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:01:35.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:01:36 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2296: 321 pgs: 321 active+clean; 198 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 7.1 MiB/s rd, 1.5 MiB/s wr, 216 op/s
Jan 20 15:01:36 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e331 do_prune osdmap full prune enabled
Jan 20 15:01:36 compute-0 ceph-mon[74360]: osdmap e331: 3 total, 3 up, 3 in
Jan 20 15:01:36 compute-0 ceph-mon[74360]: pgmap v2296: 321 pgs: 321 active+clean; 198 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 7.1 MiB/s rd, 1.5 MiB/s wr, 216 op/s
Jan 20 15:01:36 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e332 e332: 3 total, 3 up, 3 in
Jan 20 15:01:36 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e332: 3 total, 3 up, 3 in
Jan 20 15:01:37 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:01:37 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:01:37 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:01:37.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:01:37 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:01:37 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:01:37 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:01:37.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:01:37 compute-0 ceph-mon[74360]: osdmap e332: 3 total, 3 up, 3 in
Jan 20 15:01:38 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2298: 321 pgs: 321 active+clean; 198 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 6.0 MiB/s rd, 1.6 MiB/s wr, 175 op/s
Jan 20 15:01:38 compute-0 nova_compute[250018]: 2026-01-20 15:01:38.581 250022 INFO nova.virt.libvirt.driver [None req-f18a4125-5f78-4234-ac03-a26a16157595 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: b3f961d2-e73f-49bf-b141-6505e77ad9ac] Snapshot image upload complete
Jan 20 15:01:38 compute-0 nova_compute[250018]: 2026-01-20 15:01:38.582 250022 INFO nova.compute.manager [None req-f18a4125-5f78-4234-ac03-a26a16157595 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: b3f961d2-e73f-49bf-b141-6505e77ad9ac] Took 7.08 seconds to snapshot the instance on the hypervisor.
Jan 20 15:01:38 compute-0 ceph-mon[74360]: pgmap v2298: 321 pgs: 321 active+clean; 198 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 6.0 MiB/s rd, 1.6 MiB/s wr, 175 op/s
Jan 20 15:01:38 compute-0 nova_compute[250018]: 2026-01-20 15:01:38.872 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 15:01:38 compute-0 nova_compute[250018]: 2026-01-20 15:01:38.874 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 15:01:38 compute-0 nova_compute[250018]: 2026-01-20 15:01:38.874 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5004 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Jan 20 15:01:38 compute-0 nova_compute[250018]: 2026-01-20 15:01:38.875 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 20 15:01:38 compute-0 nova_compute[250018]: 2026-01-20 15:01:38.901 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:01:38 compute-0 nova_compute[250018]: 2026-01-20 15:01:38.902 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 20 15:01:39 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:01:39 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:01:39 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:01:39.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:01:39 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:01:39 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:01:39 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:01:39.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:01:40 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:01:40 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2299: 321 pgs: 321 active+clean; 213 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 5.4 MiB/s rd, 3.1 MiB/s wr, 192 op/s
Jan 20 15:01:41 compute-0 ceph-mon[74360]: pgmap v2299: 321 pgs: 321 active+clean; 213 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 5.4 MiB/s rd, 3.1 MiB/s wr, 192 op/s
Jan 20 15:01:41 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:01:41 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:01:41 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:01:41.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:01:41 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:01:41 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:01:41 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:01:41.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:01:42 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2300: 321 pgs: 321 active+clean; 213 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 2.6 MiB/s wr, 166 op/s
Jan 20 15:01:43 compute-0 ceph-mon[74360]: pgmap v2300: 321 pgs: 321 active+clean; 213 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 2.6 MiB/s wr, 166 op/s
Jan 20 15:01:43 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:01:43 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:01:43 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:01:43.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:01:43 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:01:43 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:01:43 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:01:43.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:01:43 compute-0 nova_compute[250018]: 2026-01-20 15:01:43.903 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 15:01:43 compute-0 nova_compute[250018]: 2026-01-20 15:01:43.905 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 15:01:43 compute-0 nova_compute[250018]: 2026-01-20 15:01:43.905 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Jan 20 15:01:43 compute-0 nova_compute[250018]: 2026-01-20 15:01:43.905 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 20 15:01:43 compute-0 ovn_controller[148666]: 2026-01-20T15:01:43Z|00060|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:12:81:5e 10.100.0.6
Jan 20 15:01:43 compute-0 nova_compute[250018]: 2026-01-20 15:01:43.934 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:01:43 compute-0 nova_compute[250018]: 2026-01-20 15:01:43.935 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 20 15:01:43 compute-0 ovn_controller[148666]: 2026-01-20T15:01:43Z|00061|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:12:81:5e 10.100.0.6
Jan 20 15:01:44 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2301: 321 pgs: 321 active+clean; 219 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 2.4 MiB/s wr, 87 op/s
Jan 20 15:01:44 compute-0 podman[338879]: 2026-01-20 15:01:44.477242096 +0000 UTC m=+0.061163029 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 20 15:01:44 compute-0 podman[338878]: 2026-01-20 15:01:44.504061758 +0000 UTC m=+0.088103104 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, 
org.label-schema.schema-version=1.0, config_id=ovn_controller)
Jan 20 15:01:45 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:01:45 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e332 do_prune osdmap full prune enabled
Jan 20 15:01:45 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e333 e333: 3 total, 3 up, 3 in
Jan 20 15:01:45 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e333: 3 total, 3 up, 3 in
Jan 20 15:01:45 compute-0 ceph-mon[74360]: pgmap v2301: 321 pgs: 321 active+clean; 219 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 2.4 MiB/s wr, 87 op/s
Jan 20 15:01:45 compute-0 ceph-mon[74360]: osdmap e333: 3 total, 3 up, 3 in
Jan 20 15:01:45 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/3249063162' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:01:45 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:01:45 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:01:45 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:01:45.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:01:45 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:01:45 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:01:45 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:01:45.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:01:46 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2303: 321 pgs: 321 active+clean; 246 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 4.0 MiB/s wr, 169 op/s
Jan 20 15:01:46 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/1401472554' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:01:47 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:01:47 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:01:47 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:01:47.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:01:47 compute-0 ceph-mon[74360]: pgmap v2303: 321 pgs: 321 active+clean; 246 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 4.0 MiB/s wr, 169 op/s
Jan 20 15:01:47 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 15:01:47 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2885650483' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:01:47 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:01:47 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:01:47 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:01:47.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:01:48 compute-0 nova_compute[250018]: 2026-01-20 15:01:48.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:01:48 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2304: 321 pgs: 321 active+clean; 246 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 3.7 MiB/s wr, 159 op/s
Jan 20 15:01:48 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/2885650483' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:01:48 compute-0 ceph-mon[74360]: pgmap v2304: 321 pgs: 321 active+clean; 246 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 3.7 MiB/s wr, 159 op/s
Jan 20 15:01:48 compute-0 nova_compute[250018]: 2026-01-20 15:01:48.936 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 15:01:48 compute-0 nova_compute[250018]: 2026-01-20 15:01:48.938 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 15:01:48 compute-0 nova_compute[250018]: 2026-01-20 15:01:48.938 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Jan 20 15:01:48 compute-0 nova_compute[250018]: 2026-01-20 15:01:48.938 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 20 15:01:48 compute-0 nova_compute[250018]: 2026-01-20 15:01:48.978 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:01:48 compute-0 nova_compute[250018]: 2026-01-20 15:01:48.979 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 20 15:01:49 compute-0 nova_compute[250018]: 2026-01-20 15:01:49.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:01:49 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:01:49 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:01:49 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:01:49.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:01:49 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:01:49 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:01:49 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:01:49.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:01:49 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/193634755' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:01:50 compute-0 nova_compute[250018]: 2026-01-20 15:01:50.046 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:01:50 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e333 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:01:50 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2305: 321 pgs: 321 active+clean; 280 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 3.8 MiB/s wr, 164 op/s
Jan 20 15:01:50 compute-0 ceph-mon[74360]: pgmap v2305: 321 pgs: 321 active+clean; 280 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 3.8 MiB/s wr, 164 op/s
Jan 20 15:01:50 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/3373454363' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:01:51 compute-0 nova_compute[250018]: 2026-01-20 15:01:51.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:01:51 compute-0 nova_compute[250018]: 2026-01-20 15:01:51.077 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:01:51 compute-0 nova_compute[250018]: 2026-01-20 15:01:51.077 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:01:51 compute-0 nova_compute[250018]: 2026-01-20 15:01:51.078 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:01:51 compute-0 nova_compute[250018]: 2026-01-20 15:01:51.078 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 15:01:51 compute-0 nova_compute[250018]: 2026-01-20 15:01:51.078 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:01:51 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:01:51 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:01:51 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:01:51.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:01:51 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:01:51 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1211339276' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:01:51 compute-0 nova_compute[250018]: 2026-01-20 15:01:51.528 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:01:51 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 20 15:01:51 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2784167277' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:01:51 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 20 15:01:51 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2784167277' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:01:51 compute-0 nova_compute[250018]: 2026-01-20 15:01:51.628 250022 DEBUG nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] skipping disk for instance-00000095 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 15:01:51 compute-0 nova_compute[250018]: 2026-01-20 15:01:51.629 250022 DEBUG nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] skipping disk for instance-00000095 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 15:01:51 compute-0 nova_compute[250018]: 2026-01-20 15:01:51.789 250022 WARNING nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:01:51 compute-0 nova_compute[250018]: 2026-01-20 15:01:51.790 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4058MB free_disk=20.885765075683594GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 15:01:51 compute-0 nova_compute[250018]: 2026-01-20 15:01:51.790 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:01:51 compute-0 nova_compute[250018]: 2026-01-20 15:01:51.790 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:01:51 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:01:51 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:01:51 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:01:51.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:01:51 compute-0 nova_compute[250018]: 2026-01-20 15:01:51.888 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Instance b3f961d2-e73f-49bf-b141-6505e77ad9ac actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 15:01:51 compute-0 nova_compute[250018]: 2026-01-20 15:01:51.889 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 15:01:51 compute-0 nova_compute[250018]: 2026-01-20 15:01:51.889 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 15:01:51 compute-0 nova_compute[250018]: 2026-01-20 15:01:51.954 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:01:51 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/2575874763' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:01:51 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/3482231016' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:01:51 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/1211339276' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:01:51 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/2784167277' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:01:51 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/2784167277' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:01:51 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/120102328' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:01:51 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/2888294495' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:01:52 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2306: 321 pgs: 321 active+clean; 294 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 965 KiB/s rd, 4.7 MiB/s wr, 155 op/s
Jan 20 15:01:52 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:01:52 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4289062265' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:01:52 compute-0 nova_compute[250018]: 2026-01-20 15:01:52.390 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:01:52 compute-0 nova_compute[250018]: 2026-01-20 15:01:52.398 250022 DEBUG nova.compute.provider_tree [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:01:52 compute-0 nova_compute[250018]: 2026-01-20 15:01:52.417 250022 DEBUG nova.scheduler.client.report [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:01:52 compute-0 nova_compute[250018]: 2026-01-20 15:01:52.457 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 15:01:52 compute-0 nova_compute[250018]: 2026-01-20 15:01:52.458 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.667s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:01:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:01:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:01:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:01:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:01:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:01:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:01:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Optimize plan auto_2026-01-20_15:01:52
Jan 20 15:01:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 15:01:52 compute-0 ceph-mgr[74653]: [balancer INFO root] do_upmap
Jan 20 15:01:52 compute-0 ceph-mgr[74653]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.meta', 'default.rgw.control', 'images', '.mgr', 'cephfs.cephfs.data', 'default.rgw.log', 'vms', 'default.rgw.meta', '.rgw.root', 'backups']
Jan 20 15:01:52 compute-0 ceph-mgr[74653]: [balancer INFO root] prepared 0/10 changes
Jan 20 15:01:52 compute-0 nova_compute[250018]: 2026-01-20 15:01:52.727 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:01:52 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:01:52.729 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=47, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=46) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:01:52 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:01:52.730 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 15:01:53 compute-0 ceph-mon[74360]: pgmap v2306: 321 pgs: 321 active+clean; 294 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 965 KiB/s rd, 4.7 MiB/s wr, 155 op/s
Jan 20 15:01:53 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/2513427491' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:01:53 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/4289062265' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:01:53 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:01:53 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:01:53 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:01:53.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:01:53 compute-0 nova_compute[250018]: 2026-01-20 15:01:53.459 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:01:53 compute-0 nova_compute[250018]: 2026-01-20 15:01:53.459 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:01:53 compute-0 nova_compute[250018]: 2026-01-20 15:01:53.460 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 15:01:53 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:01:53 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:01:53 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:01:53.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:01:54 compute-0 nova_compute[250018]: 2026-01-20 15:01:54.011 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:01:54 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/2483513834' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:01:54 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/2483513834' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:01:54 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2307: 321 pgs: 321 active+clean; 295 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 823 KiB/s rd, 4.7 MiB/s wr, 132 op/s
Jan 20 15:01:54 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:01:54.732 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=367c1a2c-b16a-4828-ab5a-626bb50023b4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '47'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:01:55 compute-0 nova_compute[250018]: 2026-01-20 15:01:55.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:01:55 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e333 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:01:55 compute-0 ceph-mon[74360]: pgmap v2307: 321 pgs: 321 active+clean; 295 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 823 KiB/s rd, 4.7 MiB/s wr, 132 op/s
Jan 20 15:01:55 compute-0 sudo[338974]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:01:55 compute-0 sudo[338974]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:01:55 compute-0 sudo[338974]: pam_unix(sudo:session): session closed for user root
Jan 20 15:01:55 compute-0 sudo[338999]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:01:55 compute-0 sudo[338999]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:01:55 compute-0 sudo[338999]: pam_unix(sudo:session): session closed for user root
Jan 20 15:01:55 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:01:55 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:01:55 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:01:55.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:01:55 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:01:55 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:01:55 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:01:55.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:01:56 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2308: 321 pgs: 321 active+clean; 238 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 819 KiB/s rd, 2.5 MiB/s wr, 150 op/s
Jan 20 15:01:57 compute-0 nova_compute[250018]: 2026-01-20 15:01:57.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:01:57 compute-0 nova_compute[250018]: 2026-01-20 15:01:57.051 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 15:01:57 compute-0 nova_compute[250018]: 2026-01-20 15:01:57.051 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 15:01:57 compute-0 nova_compute[250018]: 2026-01-20 15:01:57.275 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "refresh_cache-b3f961d2-e73f-49bf-b141-6505e77ad9ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:01:57 compute-0 nova_compute[250018]: 2026-01-20 15:01:57.275 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquired lock "refresh_cache-b3f961d2-e73f-49bf-b141-6505e77ad9ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:01:57 compute-0 nova_compute[250018]: 2026-01-20 15:01:57.275 250022 DEBUG nova.network.neutron [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] [instance: b3f961d2-e73f-49bf-b141-6505e77ad9ac] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 20 15:01:57 compute-0 nova_compute[250018]: 2026-01-20 15:01:57.276 250022 DEBUG nova.objects.instance [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b3f961d2-e73f-49bf-b141-6505e77ad9ac obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:01:57 compute-0 ceph-mon[74360]: pgmap v2308: 321 pgs: 321 active+clean; 238 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 819 KiB/s rd, 2.5 MiB/s wr, 150 op/s
Jan 20 15:01:57 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:01:57 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:01:57 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:01:57.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:01:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 15:01:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 15:01:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 15:01:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 15:01:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 15:01:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 15:01:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 15:01:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 15:01:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 15:01:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 15:01:57 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:01:57 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:01:57 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:01:57.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:01:58 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2309: 321 pgs: 321 active+clean; 238 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 466 KiB/s rd, 1.8 MiB/s wr, 108 op/s
Jan 20 15:01:58 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/2765111203' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:01:59 compute-0 nova_compute[250018]: 2026-01-20 15:01:59.013 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 15:01:59 compute-0 ceph-mon[74360]: pgmap v2309: 321 pgs: 321 active+clean; 238 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 466 KiB/s rd, 1.8 MiB/s wr, 108 op/s
Jan 20 15:01:59 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:01:59 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:01:59 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:01:59.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:01:59 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:01:59 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:01:59 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:01:59.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:02:00 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e333 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:02:00 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2310: 321 pgs: 321 active+clean; 214 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 1.8 MiB/s wr, 156 op/s
Jan 20 15:02:00 compute-0 nova_compute[250018]: 2026-01-20 15:02:00.511 250022 DEBUG nova.network.neutron [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] [instance: b3f961d2-e73f-49bf-b141-6505e77ad9ac] Updating instance_info_cache with network_info: [{"id": "07baaccf-06f0-4af0-a04a-9638078c313f", "address": "fa:16:3e:12:81:5e", "network": {"id": "671e28d0-0b9e-41e0-b5e0-db1ccd4717ec", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-884777184-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63555e5851564db08c6429231d264f2c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07baaccf-06", "ovs_interfaceid": "07baaccf-06f0-4af0-a04a-9638078c313f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:02:00 compute-0 nova_compute[250018]: 2026-01-20 15:02:00.535 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Releasing lock "refresh_cache-b3f961d2-e73f-49bf-b141-6505e77ad9ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:02:00 compute-0 nova_compute[250018]: 2026-01-20 15:02:00.535 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] [instance: b3f961d2-e73f-49bf-b141-6505e77ad9ac] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 20 15:02:01 compute-0 ceph-mon[74360]: pgmap v2310: 321 pgs: 321 active+clean; 214 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 1.8 MiB/s wr, 156 op/s
Jan 20 15:02:01 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:02:01 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:02:01 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:02:01.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:02:01 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:02:01 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:02:01 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:02:01.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:02:01 compute-0 nova_compute[250018]: 2026-01-20 15:02:01.953 250022 DEBUG oslo_concurrency.lockutils [None req-e8527b03-1315-427b-9b2f-e315c0e997a1 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] Acquiring lock "363d71d6-a6f3-4145-9964-1d057a891bcd" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:02:01 compute-0 nova_compute[250018]: 2026-01-20 15:02:01.953 250022 DEBUG oslo_concurrency.lockutils [None req-e8527b03-1315-427b-9b2f-e315c0e997a1 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] Lock "363d71d6-a6f3-4145-9964-1d057a891bcd" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:02:01 compute-0 nova_compute[250018]: 2026-01-20 15:02:01.974 250022 DEBUG nova.compute.manager [None req-e8527b03-1315-427b-9b2f-e315c0e997a1 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] [instance: 363d71d6-a6f3-4145-9964-1d057a891bcd] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 15:02:02 compute-0 nova_compute[250018]: 2026-01-20 15:02:02.058 250022 DEBUG oslo_concurrency.lockutils [None req-e8527b03-1315-427b-9b2f-e315c0e997a1 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:02:02 compute-0 nova_compute[250018]: 2026-01-20 15:02:02.059 250022 DEBUG oslo_concurrency.lockutils [None req-e8527b03-1315-427b-9b2f-e315c0e997a1 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:02:02 compute-0 nova_compute[250018]: 2026-01-20 15:02:02.064 250022 DEBUG nova.virt.hardware [None req-e8527b03-1315-427b-9b2f-e315c0e997a1 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 15:02:02 compute-0 nova_compute[250018]: 2026-01-20 15:02:02.064 250022 INFO nova.compute.claims [None req-e8527b03-1315-427b-9b2f-e315c0e997a1 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] [instance: 363d71d6-a6f3-4145-9964-1d057a891bcd] Claim successful on node compute-0.ctlplane.example.com
Jan 20 15:02:02 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2311: 321 pgs: 321 active+clean; 214 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 844 KiB/s wr, 140 op/s
Jan 20 15:02:02 compute-0 nova_compute[250018]: 2026-01-20 15:02:02.202 250022 DEBUG oslo_concurrency.processutils [None req-e8527b03-1315-427b-9b2f-e315c0e997a1 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:02:02 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:02:02 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3880140080' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:02:02 compute-0 nova_compute[250018]: 2026-01-20 15:02:02.637 250022 DEBUG oslo_concurrency.processutils [None req-e8527b03-1315-427b-9b2f-e315c0e997a1 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:02:02 compute-0 nova_compute[250018]: 2026-01-20 15:02:02.643 250022 DEBUG nova.compute.provider_tree [None req-e8527b03-1315-427b-9b2f-e315c0e997a1 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:02:02 compute-0 nova_compute[250018]: 2026-01-20 15:02:02.660 250022 DEBUG nova.scheduler.client.report [None req-e8527b03-1315-427b-9b2f-e315c0e997a1 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:02:02 compute-0 nova_compute[250018]: 2026-01-20 15:02:02.688 250022 DEBUG oslo_concurrency.lockutils [None req-e8527b03-1315-427b-9b2f-e315c0e997a1 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.629s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:02:02 compute-0 nova_compute[250018]: 2026-01-20 15:02:02.689 250022 DEBUG nova.compute.manager [None req-e8527b03-1315-427b-9b2f-e315c0e997a1 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] [instance: 363d71d6-a6f3-4145-9964-1d057a891bcd] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 15:02:02 compute-0 nova_compute[250018]: 2026-01-20 15:02:02.778 250022 DEBUG nova.compute.manager [None req-e8527b03-1315-427b-9b2f-e315c0e997a1 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] [instance: 363d71d6-a6f3-4145-9964-1d057a891bcd] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 20 15:02:02 compute-0 nova_compute[250018]: 2026-01-20 15:02:02.779 250022 DEBUG nova.network.neutron [None req-e8527b03-1315-427b-9b2f-e315c0e997a1 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] [instance: 363d71d6-a6f3-4145-9964-1d057a891bcd] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 15:02:02 compute-0 nova_compute[250018]: 2026-01-20 15:02:02.800 250022 INFO nova.virt.libvirt.driver [None req-e8527b03-1315-427b-9b2f-e315c0e997a1 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] [instance: 363d71d6-a6f3-4145-9964-1d057a891bcd] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 15:02:02 compute-0 nova_compute[250018]: 2026-01-20 15:02:02.826 250022 DEBUG nova.compute.manager [None req-e8527b03-1315-427b-9b2f-e315c0e997a1 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] [instance: 363d71d6-a6f3-4145-9964-1d057a891bcd] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 15:02:02 compute-0 nova_compute[250018]: 2026-01-20 15:02:02.923 250022 DEBUG nova.compute.manager [None req-e8527b03-1315-427b-9b2f-e315c0e997a1 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] [instance: 363d71d6-a6f3-4145-9964-1d057a891bcd] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 15:02:02 compute-0 nova_compute[250018]: 2026-01-20 15:02:02.924 250022 DEBUG nova.virt.libvirt.driver [None req-e8527b03-1315-427b-9b2f-e315c0e997a1 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] [instance: 363d71d6-a6f3-4145-9964-1d057a891bcd] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 15:02:02 compute-0 nova_compute[250018]: 2026-01-20 15:02:02.924 250022 INFO nova.virt.libvirt.driver [None req-e8527b03-1315-427b-9b2f-e315c0e997a1 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] [instance: 363d71d6-a6f3-4145-9964-1d057a891bcd] Creating image(s)
Jan 20 15:02:02 compute-0 nova_compute[250018]: 2026-01-20 15:02:02.952 250022 DEBUG nova.storage.rbd_utils [None req-e8527b03-1315-427b-9b2f-e315c0e997a1 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] rbd image 363d71d6-a6f3-4145-9964-1d057a891bcd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:02:02 compute-0 nova_compute[250018]: 2026-01-20 15:02:02.984 250022 DEBUG nova.storage.rbd_utils [None req-e8527b03-1315-427b-9b2f-e315c0e997a1 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] rbd image 363d71d6-a6f3-4145-9964-1d057a891bcd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:02:03 compute-0 nova_compute[250018]: 2026-01-20 15:02:03.014 250022 DEBUG nova.storage.rbd_utils [None req-e8527b03-1315-427b-9b2f-e315c0e997a1 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] rbd image 363d71d6-a6f3-4145-9964-1d057a891bcd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:02:03 compute-0 nova_compute[250018]: 2026-01-20 15:02:03.018 250022 DEBUG oslo_concurrency.processutils [None req-e8527b03-1315-427b-9b2f-e315c0e997a1 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:02:03 compute-0 nova_compute[250018]: 2026-01-20 15:02:03.083 250022 DEBUG oslo_concurrency.processutils [None req-e8527b03-1315-427b-9b2f-e315c0e997a1 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:02:03 compute-0 nova_compute[250018]: 2026-01-20 15:02:03.084 250022 DEBUG oslo_concurrency.lockutils [None req-e8527b03-1315-427b-9b2f-e315c0e997a1 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] Acquiring lock "82d5c1918fd7c974214c7a48c1793a7a82560462" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:02:03 compute-0 nova_compute[250018]: 2026-01-20 15:02:03.085 250022 DEBUG oslo_concurrency.lockutils [None req-e8527b03-1315-427b-9b2f-e315c0e997a1 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:02:03 compute-0 nova_compute[250018]: 2026-01-20 15:02:03.085 250022 DEBUG oslo_concurrency.lockutils [None req-e8527b03-1315-427b-9b2f-e315c0e997a1 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:02:03 compute-0 nova_compute[250018]: 2026-01-20 15:02:03.111 250022 DEBUG nova.storage.rbd_utils [None req-e8527b03-1315-427b-9b2f-e315c0e997a1 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] rbd image 363d71d6-a6f3-4145-9964-1d057a891bcd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:02:03 compute-0 nova_compute[250018]: 2026-01-20 15:02:03.115 250022 DEBUG oslo_concurrency.processutils [None req-e8527b03-1315-427b-9b2f-e315c0e997a1 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 363d71d6-a6f3-4145-9964-1d057a891bcd_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:02:03 compute-0 ceph-mon[74360]: pgmap v2311: 321 pgs: 321 active+clean; 214 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 844 KiB/s wr, 140 op/s
Jan 20 15:02:03 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3880140080' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:02:03 compute-0 nova_compute[250018]: 2026-01-20 15:02:03.388 250022 DEBUG oslo_concurrency.processutils [None req-e8527b03-1315-427b-9b2f-e315c0e997a1 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 363d71d6-a6f3-4145-9964-1d057a891bcd_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.273s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:02:03 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:02:03 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:02:03 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:02:03.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:02:03 compute-0 nova_compute[250018]: 2026-01-20 15:02:03.454 250022 DEBUG nova.storage.rbd_utils [None req-e8527b03-1315-427b-9b2f-e315c0e997a1 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] resizing rbd image 363d71d6-a6f3-4145-9964-1d057a891bcd_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 20 15:02:03 compute-0 nova_compute[250018]: 2026-01-20 15:02:03.542 250022 DEBUG nova.objects.instance [None req-e8527b03-1315-427b-9b2f-e315c0e997a1 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] Lazy-loading 'migration_context' on Instance uuid 363d71d6-a6f3-4145-9964-1d057a891bcd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:02:03 compute-0 nova_compute[250018]: 2026-01-20 15:02:03.562 250022 DEBUG nova.virt.libvirt.driver [None req-e8527b03-1315-427b-9b2f-e315c0e997a1 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] [instance: 363d71d6-a6f3-4145-9964-1d057a891bcd] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 15:02:03 compute-0 nova_compute[250018]: 2026-01-20 15:02:03.563 250022 DEBUG nova.virt.libvirt.driver [None req-e8527b03-1315-427b-9b2f-e315c0e997a1 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] [instance: 363d71d6-a6f3-4145-9964-1d057a891bcd] Ensure instance console log exists: /var/lib/nova/instances/363d71d6-a6f3-4145-9964-1d057a891bcd/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 15:02:03 compute-0 nova_compute[250018]: 2026-01-20 15:02:03.563 250022 DEBUG oslo_concurrency.lockutils [None req-e8527b03-1315-427b-9b2f-e315c0e997a1 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:02:03 compute-0 nova_compute[250018]: 2026-01-20 15:02:03.564 250022 DEBUG oslo_concurrency.lockutils [None req-e8527b03-1315-427b-9b2f-e315c0e997a1 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:02:03 compute-0 nova_compute[250018]: 2026-01-20 15:02:03.564 250022 DEBUG oslo_concurrency.lockutils [None req-e8527b03-1315-427b-9b2f-e315c0e997a1 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:02:03 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:02:03 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:02:03 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:02:03.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:02:04 compute-0 nova_compute[250018]: 2026-01-20 15:02:04.015 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 15:02:04 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2312: 321 pgs: 321 active+clean; 225 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 506 KiB/s wr, 150 op/s
Jan 20 15:02:04 compute-0 nova_compute[250018]: 2026-01-20 15:02:04.402 250022 DEBUG nova.network.neutron [None req-e8527b03-1315-427b-9b2f-e315c0e997a1 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] [instance: 363d71d6-a6f3-4145-9964-1d057a891bcd] Successfully created port: 42fe52d6-2d12-468d-bf0a-7a1b391b6d17 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 20 15:02:04 compute-0 ovn_controller[148666]: 2026-01-20T15:02:04Z|00541|binding|INFO|Releasing lport a8628d9e-196f-4b84-89fd-d3a41792b8a0 from this chassis (sb_readonly=0)
Jan 20 15:02:04 compute-0 nova_compute[250018]: 2026-01-20 15:02:04.578 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:02:05 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e333 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:02:05 compute-0 nova_compute[250018]: 2026-01-20 15:02:05.255 250022 DEBUG nova.network.neutron [None req-e8527b03-1315-427b-9b2f-e315c0e997a1 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] [instance: 363d71d6-a6f3-4145-9964-1d057a891bcd] Successfully updated port: 42fe52d6-2d12-468d-bf0a-7a1b391b6d17 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 20 15:02:05 compute-0 nova_compute[250018]: 2026-01-20 15:02:05.284 250022 DEBUG oslo_concurrency.lockutils [None req-e8527b03-1315-427b-9b2f-e315c0e997a1 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] Acquiring lock "refresh_cache-363d71d6-a6f3-4145-9964-1d057a891bcd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:02:05 compute-0 nova_compute[250018]: 2026-01-20 15:02:05.285 250022 DEBUG oslo_concurrency.lockutils [None req-e8527b03-1315-427b-9b2f-e315c0e997a1 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] Acquired lock "refresh_cache-363d71d6-a6f3-4145-9964-1d057a891bcd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:02:05 compute-0 nova_compute[250018]: 2026-01-20 15:02:05.285 250022 DEBUG nova.network.neutron [None req-e8527b03-1315-427b-9b2f-e315c0e997a1 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] [instance: 363d71d6-a6f3-4145-9964-1d057a891bcd] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 15:02:05 compute-0 nova_compute[250018]: 2026-01-20 15:02:05.343 250022 DEBUG nova.compute.manager [req-ae430ec9-fec5-4742-95bf-e822dae156ba req-6fc16168-1cba-4f33-b130-add86a4f55e7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 363d71d6-a6f3-4145-9964-1d057a891bcd] Received event network-changed-42fe52d6-2d12-468d-bf0a-7a1b391b6d17 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:02:05 compute-0 nova_compute[250018]: 2026-01-20 15:02:05.344 250022 DEBUG nova.compute.manager [req-ae430ec9-fec5-4742-95bf-e822dae156ba req-6fc16168-1cba-4f33-b130-add86a4f55e7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 363d71d6-a6f3-4145-9964-1d057a891bcd] Refreshing instance network info cache due to event network-changed-42fe52d6-2d12-468d-bf0a-7a1b391b6d17. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 15:02:05 compute-0 nova_compute[250018]: 2026-01-20 15:02:05.344 250022 DEBUG oslo_concurrency.lockutils [req-ae430ec9-fec5-4742-95bf-e822dae156ba req-6fc16168-1cba-4f33-b130-add86a4f55e7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-363d71d6-a6f3-4145-9964-1d057a891bcd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:02:05 compute-0 ceph-mon[74360]: pgmap v2312: 321 pgs: 321 active+clean; 225 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 506 KiB/s wr, 150 op/s
Jan 20 15:02:05 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/1790087133' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:02:05 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:02:05 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:02:05 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:02:05.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:02:05 compute-0 nova_compute[250018]: 2026-01-20 15:02:05.520 250022 DEBUG nova.network.neutron [None req-e8527b03-1315-427b-9b2f-e315c0e997a1 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] [instance: 363d71d6-a6f3-4145-9964-1d057a891bcd] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 15:02:05 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:02:05 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:02:05 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:02:05.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:02:06 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2313: 321 pgs: 321 active+clean; 260 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 160 op/s
Jan 20 15:02:07 compute-0 nova_compute[250018]: 2026-01-20 15:02:07.205 250022 DEBUG nova.network.neutron [None req-e8527b03-1315-427b-9b2f-e315c0e997a1 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] [instance: 363d71d6-a6f3-4145-9964-1d057a891bcd] Updating instance_info_cache with network_info: [{"id": "42fe52d6-2d12-468d-bf0a-7a1b391b6d17", "address": "fa:16:3e:d5:29:9f", "network": {"id": "0296a21f-6ec4-43a7-8731-1d3692a5de4a", "bridge": "br-int", "label": "tempest-TestServerMultinode-1878354210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "908b5ba217ab458e8c9aa0e5a471c194", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap42fe52d6-2d", "ovs_interfaceid": "42fe52d6-2d12-468d-bf0a-7a1b391b6d17", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:02:07 compute-0 nova_compute[250018]: 2026-01-20 15:02:07.236 250022 DEBUG oslo_concurrency.lockutils [None req-e8527b03-1315-427b-9b2f-e315c0e997a1 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] Releasing lock "refresh_cache-363d71d6-a6f3-4145-9964-1d057a891bcd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:02:07 compute-0 nova_compute[250018]: 2026-01-20 15:02:07.237 250022 DEBUG nova.compute.manager [None req-e8527b03-1315-427b-9b2f-e315c0e997a1 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] [instance: 363d71d6-a6f3-4145-9964-1d057a891bcd] Instance network_info: |[{"id": "42fe52d6-2d12-468d-bf0a-7a1b391b6d17", "address": "fa:16:3e:d5:29:9f", "network": {"id": "0296a21f-6ec4-43a7-8731-1d3692a5de4a", "bridge": "br-int", "label": "tempest-TestServerMultinode-1878354210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "908b5ba217ab458e8c9aa0e5a471c194", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap42fe52d6-2d", "ovs_interfaceid": "42fe52d6-2d12-468d-bf0a-7a1b391b6d17", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 20 15:02:07 compute-0 nova_compute[250018]: 2026-01-20 15:02:07.237 250022 DEBUG oslo_concurrency.lockutils [req-ae430ec9-fec5-4742-95bf-e822dae156ba req-6fc16168-1cba-4f33-b130-add86a4f55e7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-363d71d6-a6f3-4145-9964-1d057a891bcd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:02:07 compute-0 nova_compute[250018]: 2026-01-20 15:02:07.237 250022 DEBUG nova.network.neutron [req-ae430ec9-fec5-4742-95bf-e822dae156ba req-6fc16168-1cba-4f33-b130-add86a4f55e7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 363d71d6-a6f3-4145-9964-1d057a891bcd] Refreshing network info cache for port 42fe52d6-2d12-468d-bf0a-7a1b391b6d17 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 15:02:07 compute-0 nova_compute[250018]: 2026-01-20 15:02:07.240 250022 DEBUG nova.virt.libvirt.driver [None req-e8527b03-1315-427b-9b2f-e315c0e997a1 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] [instance: 363d71d6-a6f3-4145-9964-1d057a891bcd] Start _get_guest_xml network_info=[{"id": "42fe52d6-2d12-468d-bf0a-7a1b391b6d17", "address": "fa:16:3e:d5:29:9f", "network": {"id": "0296a21f-6ec4-43a7-8731-1d3692a5de4a", "bridge": "br-int", "label": "tempest-TestServerMultinode-1878354210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "908b5ba217ab458e8c9aa0e5a471c194", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap42fe52d6-2d", "ovs_interfaceid": "42fe52d6-2d12-468d-bf0a-7a1b391b6d17", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encrypted': False, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'guest_format': None, 'size': 0, 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 15:02:07 compute-0 nova_compute[250018]: 2026-01-20 15:02:07.245 250022 WARNING nova.virt.libvirt.driver [None req-e8527b03-1315-427b-9b2f-e315c0e997a1 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:02:07 compute-0 nova_compute[250018]: 2026-01-20 15:02:07.250 250022 DEBUG nova.virt.libvirt.host [None req-e8527b03-1315-427b-9b2f-e315c0e997a1 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 15:02:07 compute-0 nova_compute[250018]: 2026-01-20 15:02:07.250 250022 DEBUG nova.virt.libvirt.host [None req-e8527b03-1315-427b-9b2f-e315c0e997a1 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 15:02:07 compute-0 nova_compute[250018]: 2026-01-20 15:02:07.255 250022 DEBUG nova.virt.libvirt.host [None req-e8527b03-1315-427b-9b2f-e315c0e997a1 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 15:02:07 compute-0 nova_compute[250018]: 2026-01-20 15:02:07.256 250022 DEBUG nova.virt.libvirt.host [None req-e8527b03-1315-427b-9b2f-e315c0e997a1 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 15:02:07 compute-0 nova_compute[250018]: 2026-01-20 15:02:07.257 250022 DEBUG nova.virt.libvirt.driver [None req-e8527b03-1315-427b-9b2f-e315c0e997a1 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 15:02:07 compute-0 nova_compute[250018]: 2026-01-20 15:02:07.257 250022 DEBUG nova.virt.hardware [None req-e8527b03-1315-427b-9b2f-e315c0e997a1 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 15:02:07 compute-0 nova_compute[250018]: 2026-01-20 15:02:07.257 250022 DEBUG nova.virt.hardware [None req-e8527b03-1315-427b-9b2f-e315c0e997a1 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 15:02:07 compute-0 nova_compute[250018]: 2026-01-20 15:02:07.258 250022 DEBUG nova.virt.hardware [None req-e8527b03-1315-427b-9b2f-e315c0e997a1 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 15:02:07 compute-0 nova_compute[250018]: 2026-01-20 15:02:07.258 250022 DEBUG nova.virt.hardware [None req-e8527b03-1315-427b-9b2f-e315c0e997a1 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 15:02:07 compute-0 nova_compute[250018]: 2026-01-20 15:02:07.258 250022 DEBUG nova.virt.hardware [None req-e8527b03-1315-427b-9b2f-e315c0e997a1 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 15:02:07 compute-0 nova_compute[250018]: 2026-01-20 15:02:07.258 250022 DEBUG nova.virt.hardware [None req-e8527b03-1315-427b-9b2f-e315c0e997a1 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 15:02:07 compute-0 nova_compute[250018]: 2026-01-20 15:02:07.258 250022 DEBUG nova.virt.hardware [None req-e8527b03-1315-427b-9b2f-e315c0e997a1 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 15:02:07 compute-0 nova_compute[250018]: 2026-01-20 15:02:07.259 250022 DEBUG nova.virt.hardware [None req-e8527b03-1315-427b-9b2f-e315c0e997a1 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 15:02:07 compute-0 nova_compute[250018]: 2026-01-20 15:02:07.259 250022 DEBUG nova.virt.hardware [None req-e8527b03-1315-427b-9b2f-e315c0e997a1 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 15:02:07 compute-0 nova_compute[250018]: 2026-01-20 15:02:07.259 250022 DEBUG nova.virt.hardware [None req-e8527b03-1315-427b-9b2f-e315c0e997a1 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 15:02:07 compute-0 nova_compute[250018]: 2026-01-20 15:02:07.259 250022 DEBUG nova.virt.hardware [None req-e8527b03-1315-427b-9b2f-e315c0e997a1 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 15:02:07 compute-0 nova_compute[250018]: 2026-01-20 15:02:07.262 250022 DEBUG oslo_concurrency.processutils [None req-e8527b03-1315-427b-9b2f-e315c0e997a1 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:02:07 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:02:07 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:02:07 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:02:07.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:02:07 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:02:07 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:02:07 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:02:07.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:02:07 compute-0 ceph-mon[74360]: pgmap v2313: 321 pgs: 321 active+clean; 260 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 160 op/s
Jan 20 15:02:08 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2314: 321 pgs: 321 active+clean; 260 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 1.8 MiB/s wr, 86 op/s
Jan 20 15:02:08 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 15:02:08 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2172022102' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:02:08 compute-0 nova_compute[250018]: 2026-01-20 15:02:08.447 250022 DEBUG oslo_concurrency.processutils [None req-e8527b03-1315-427b-9b2f-e315c0e997a1 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.185s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:02:08 compute-0 nova_compute[250018]: 2026-01-20 15:02:08.476 250022 DEBUG nova.storage.rbd_utils [None req-e8527b03-1315-427b-9b2f-e315c0e997a1 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] rbd image 363d71d6-a6f3-4145-9964-1d057a891bcd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:02:08 compute-0 nova_compute[250018]: 2026-01-20 15:02:08.481 250022 DEBUG oslo_concurrency.processutils [None req-e8527b03-1315-427b-9b2f-e315c0e997a1 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:02:08 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 15:02:08 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2349293346' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:02:08 compute-0 nova_compute[250018]: 2026-01-20 15:02:08.938 250022 DEBUG oslo_concurrency.processutils [None req-e8527b03-1315-427b-9b2f-e315c0e997a1 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:02:08 compute-0 nova_compute[250018]: 2026-01-20 15:02:08.940 250022 DEBUG nova.virt.libvirt.vif [None req-e8527b03-1315-427b-9b2f-e315c0e997a1 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T15:02:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerMultinode-server-1263754624',display_name='tempest-TestServerMultinode-server-1263754624',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testservermultinode-server-1263754624',id=152,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='654b3ce7b3644fc58f8dc9f60529320b',ramdisk_id='',reservation_id='r-41ogmh9v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,admin,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerMultinode-1071973011',owner_user_name='tempest-TestServerMultinode-107
1973011-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T15:02:02Z,user_data=None,user_id='158563a99d4a420890aaa00b05c8bb57',uuid=363d71d6-a6f3-4145-9964-1d057a891bcd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "42fe52d6-2d12-468d-bf0a-7a1b391b6d17", "address": "fa:16:3e:d5:29:9f", "network": {"id": "0296a21f-6ec4-43a7-8731-1d3692a5de4a", "bridge": "br-int", "label": "tempest-TestServerMultinode-1878354210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "908b5ba217ab458e8c9aa0e5a471c194", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap42fe52d6-2d", "ovs_interfaceid": "42fe52d6-2d12-468d-bf0a-7a1b391b6d17", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 15:02:08 compute-0 nova_compute[250018]: 2026-01-20 15:02:08.940 250022 DEBUG nova.network.os_vif_util [None req-e8527b03-1315-427b-9b2f-e315c0e997a1 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] Converting VIF {"id": "42fe52d6-2d12-468d-bf0a-7a1b391b6d17", "address": "fa:16:3e:d5:29:9f", "network": {"id": "0296a21f-6ec4-43a7-8731-1d3692a5de4a", "bridge": "br-int", "label": "tempest-TestServerMultinode-1878354210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "908b5ba217ab458e8c9aa0e5a471c194", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap42fe52d6-2d", "ovs_interfaceid": "42fe52d6-2d12-468d-bf0a-7a1b391b6d17", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:02:08 compute-0 nova_compute[250018]: 2026-01-20 15:02:08.941 250022 DEBUG nova.network.os_vif_util [None req-e8527b03-1315-427b-9b2f-e315c0e997a1 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d5:29:9f,bridge_name='br-int',has_traffic_filtering=True,id=42fe52d6-2d12-468d-bf0a-7a1b391b6d17,network=Network(0296a21f-6ec4-43a7-8731-1d3692a5de4a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap42fe52d6-2d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:02:08 compute-0 nova_compute[250018]: 2026-01-20 15:02:08.942 250022 DEBUG nova.objects.instance [None req-e8527b03-1315-427b-9b2f-e315c0e997a1 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] Lazy-loading 'pci_devices' on Instance uuid 363d71d6-a6f3-4145-9964-1d057a891bcd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:02:08 compute-0 nova_compute[250018]: 2026-01-20 15:02:08.955 250022 DEBUG nova.virt.libvirt.driver [None req-e8527b03-1315-427b-9b2f-e315c0e997a1 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] [instance: 363d71d6-a6f3-4145-9964-1d057a891bcd] End _get_guest_xml xml=<domain type="kvm">
Jan 20 15:02:08 compute-0 nova_compute[250018]:   <uuid>363d71d6-a6f3-4145-9964-1d057a891bcd</uuid>
Jan 20 15:02:08 compute-0 nova_compute[250018]:   <name>instance-00000098</name>
Jan 20 15:02:08 compute-0 nova_compute[250018]:   <memory>131072</memory>
Jan 20 15:02:08 compute-0 nova_compute[250018]:   <vcpu>1</vcpu>
Jan 20 15:02:08 compute-0 nova_compute[250018]:   <metadata>
Jan 20 15:02:08 compute-0 nova_compute[250018]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 15:02:08 compute-0 nova_compute[250018]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 15:02:08 compute-0 nova_compute[250018]:       <nova:name>tempest-TestServerMultinode-server-1263754624</nova:name>
Jan 20 15:02:08 compute-0 nova_compute[250018]:       <nova:creationTime>2026-01-20 15:02:07</nova:creationTime>
Jan 20 15:02:08 compute-0 nova_compute[250018]:       <nova:flavor name="m1.nano">
Jan 20 15:02:08 compute-0 nova_compute[250018]:         <nova:memory>128</nova:memory>
Jan 20 15:02:08 compute-0 nova_compute[250018]:         <nova:disk>1</nova:disk>
Jan 20 15:02:08 compute-0 nova_compute[250018]:         <nova:swap>0</nova:swap>
Jan 20 15:02:08 compute-0 nova_compute[250018]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 15:02:08 compute-0 nova_compute[250018]:         <nova:vcpus>1</nova:vcpus>
Jan 20 15:02:08 compute-0 nova_compute[250018]:       </nova:flavor>
Jan 20 15:02:08 compute-0 nova_compute[250018]:       <nova:owner>
Jan 20 15:02:08 compute-0 nova_compute[250018]:         <nova:user uuid="158563a99d4a420890aaa00b05c8bb57">tempest-TestServerMultinode-1071973011-project-admin</nova:user>
Jan 20 15:02:08 compute-0 nova_compute[250018]:         <nova:project uuid="654b3ce7b3644fc58f8dc9f60529320b">tempest-TestServerMultinode-1071973011</nova:project>
Jan 20 15:02:08 compute-0 nova_compute[250018]:       </nova:owner>
Jan 20 15:02:08 compute-0 nova_compute[250018]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 15:02:08 compute-0 nova_compute[250018]:       <nova:ports>
Jan 20 15:02:08 compute-0 nova_compute[250018]:         <nova:port uuid="42fe52d6-2d12-468d-bf0a-7a1b391b6d17">
Jan 20 15:02:08 compute-0 nova_compute[250018]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Jan 20 15:02:08 compute-0 nova_compute[250018]:         </nova:port>
Jan 20 15:02:08 compute-0 nova_compute[250018]:       </nova:ports>
Jan 20 15:02:08 compute-0 nova_compute[250018]:     </nova:instance>
Jan 20 15:02:08 compute-0 nova_compute[250018]:   </metadata>
Jan 20 15:02:08 compute-0 nova_compute[250018]:   <sysinfo type="smbios">
Jan 20 15:02:08 compute-0 nova_compute[250018]:     <system>
Jan 20 15:02:08 compute-0 nova_compute[250018]:       <entry name="manufacturer">RDO</entry>
Jan 20 15:02:08 compute-0 nova_compute[250018]:       <entry name="product">OpenStack Compute</entry>
Jan 20 15:02:08 compute-0 nova_compute[250018]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 15:02:08 compute-0 nova_compute[250018]:       <entry name="serial">363d71d6-a6f3-4145-9964-1d057a891bcd</entry>
Jan 20 15:02:08 compute-0 nova_compute[250018]:       <entry name="uuid">363d71d6-a6f3-4145-9964-1d057a891bcd</entry>
Jan 20 15:02:08 compute-0 nova_compute[250018]:       <entry name="family">Virtual Machine</entry>
Jan 20 15:02:08 compute-0 nova_compute[250018]:     </system>
Jan 20 15:02:08 compute-0 nova_compute[250018]:   </sysinfo>
Jan 20 15:02:08 compute-0 nova_compute[250018]:   <os>
Jan 20 15:02:08 compute-0 nova_compute[250018]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 15:02:08 compute-0 nova_compute[250018]:     <boot dev="hd"/>
Jan 20 15:02:08 compute-0 nova_compute[250018]:     <smbios mode="sysinfo"/>
Jan 20 15:02:08 compute-0 nova_compute[250018]:   </os>
Jan 20 15:02:08 compute-0 nova_compute[250018]:   <features>
Jan 20 15:02:08 compute-0 nova_compute[250018]:     <acpi/>
Jan 20 15:02:08 compute-0 nova_compute[250018]:     <apic/>
Jan 20 15:02:08 compute-0 nova_compute[250018]:     <vmcoreinfo/>
Jan 20 15:02:08 compute-0 nova_compute[250018]:   </features>
Jan 20 15:02:08 compute-0 nova_compute[250018]:   <clock offset="utc">
Jan 20 15:02:08 compute-0 nova_compute[250018]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 15:02:08 compute-0 nova_compute[250018]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 15:02:08 compute-0 nova_compute[250018]:     <timer name="hpet" present="no"/>
Jan 20 15:02:08 compute-0 nova_compute[250018]:   </clock>
Jan 20 15:02:08 compute-0 nova_compute[250018]:   <cpu mode="custom" match="exact">
Jan 20 15:02:08 compute-0 nova_compute[250018]:     <model>Nehalem</model>
Jan 20 15:02:08 compute-0 nova_compute[250018]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 15:02:08 compute-0 nova_compute[250018]:   </cpu>
Jan 20 15:02:08 compute-0 nova_compute[250018]:   <devices>
Jan 20 15:02:08 compute-0 nova_compute[250018]:     <disk type="network" device="disk">
Jan 20 15:02:08 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 15:02:08 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/363d71d6-a6f3-4145-9964-1d057a891bcd_disk">
Jan 20 15:02:08 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 15:02:08 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 15:02:08 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 15:02:08 compute-0 nova_compute[250018]:       </source>
Jan 20 15:02:08 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 15:02:08 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 15:02:08 compute-0 nova_compute[250018]:       </auth>
Jan 20 15:02:08 compute-0 nova_compute[250018]:       <target dev="vda" bus="virtio"/>
Jan 20 15:02:08 compute-0 nova_compute[250018]:     </disk>
Jan 20 15:02:08 compute-0 nova_compute[250018]:     <disk type="network" device="cdrom">
Jan 20 15:02:08 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 15:02:08 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/363d71d6-a6f3-4145-9964-1d057a891bcd_disk.config">
Jan 20 15:02:08 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 15:02:08 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 15:02:08 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 15:02:08 compute-0 nova_compute[250018]:       </source>
Jan 20 15:02:08 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 15:02:08 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 15:02:08 compute-0 nova_compute[250018]:       </auth>
Jan 20 15:02:08 compute-0 nova_compute[250018]:       <target dev="sda" bus="sata"/>
Jan 20 15:02:08 compute-0 nova_compute[250018]:     </disk>
Jan 20 15:02:08 compute-0 nova_compute[250018]:     <interface type="ethernet">
Jan 20 15:02:08 compute-0 nova_compute[250018]:       <mac address="fa:16:3e:d5:29:9f"/>
Jan 20 15:02:08 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 15:02:08 compute-0 nova_compute[250018]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 15:02:08 compute-0 nova_compute[250018]:       <mtu size="1442"/>
Jan 20 15:02:08 compute-0 nova_compute[250018]:       <target dev="tap42fe52d6-2d"/>
Jan 20 15:02:08 compute-0 nova_compute[250018]:     </interface>
Jan 20 15:02:08 compute-0 nova_compute[250018]:     <serial type="pty">
Jan 20 15:02:08 compute-0 nova_compute[250018]:       <log file="/var/lib/nova/instances/363d71d6-a6f3-4145-9964-1d057a891bcd/console.log" append="off"/>
Jan 20 15:02:08 compute-0 nova_compute[250018]:     </serial>
Jan 20 15:02:08 compute-0 nova_compute[250018]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 15:02:08 compute-0 nova_compute[250018]:     <video>
Jan 20 15:02:08 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 15:02:08 compute-0 nova_compute[250018]:     </video>
Jan 20 15:02:08 compute-0 nova_compute[250018]:     <input type="tablet" bus="usb"/>
Jan 20 15:02:08 compute-0 nova_compute[250018]:     <rng model="virtio">
Jan 20 15:02:08 compute-0 nova_compute[250018]:       <backend model="random">/dev/urandom</backend>
Jan 20 15:02:08 compute-0 nova_compute[250018]:     </rng>
Jan 20 15:02:08 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root"/>
Jan 20 15:02:08 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:02:08 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:02:08 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:02:08 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:02:08 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:02:08 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:02:08 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:02:08 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:02:08 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:02:08 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:02:08 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:02:08 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:02:08 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:02:08 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:02:08 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:02:08 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:02:08 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:02:08 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:02:08 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:02:08 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:02:08 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:02:08 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:02:08 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:02:08 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:02:08 compute-0 nova_compute[250018]:     <controller type="usb" index="0"/>
Jan 20 15:02:08 compute-0 nova_compute[250018]:     <memballoon model="virtio">
Jan 20 15:02:08 compute-0 nova_compute[250018]:       <stats period="10"/>
Jan 20 15:02:08 compute-0 nova_compute[250018]:     </memballoon>
Jan 20 15:02:08 compute-0 nova_compute[250018]:   </devices>
Jan 20 15:02:08 compute-0 nova_compute[250018]: </domain>
Jan 20 15:02:08 compute-0 nova_compute[250018]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 15:02:08 compute-0 nova_compute[250018]: 2026-01-20 15:02:08.956 250022 DEBUG nova.compute.manager [None req-e8527b03-1315-427b-9b2f-e315c0e997a1 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] [instance: 363d71d6-a6f3-4145-9964-1d057a891bcd] Preparing to wait for external event network-vif-plugged-42fe52d6-2d12-468d-bf0a-7a1b391b6d17 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 20 15:02:08 compute-0 nova_compute[250018]: 2026-01-20 15:02:08.957 250022 DEBUG oslo_concurrency.lockutils [None req-e8527b03-1315-427b-9b2f-e315c0e997a1 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] Acquiring lock "363d71d6-a6f3-4145-9964-1d057a891bcd-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:02:08 compute-0 nova_compute[250018]: 2026-01-20 15:02:08.957 250022 DEBUG oslo_concurrency.lockutils [None req-e8527b03-1315-427b-9b2f-e315c0e997a1 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] Lock "363d71d6-a6f3-4145-9964-1d057a891bcd-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:02:08 compute-0 nova_compute[250018]: 2026-01-20 15:02:08.957 250022 DEBUG oslo_concurrency.lockutils [None req-e8527b03-1315-427b-9b2f-e315c0e997a1 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] Lock "363d71d6-a6f3-4145-9964-1d057a891bcd-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:02:08 compute-0 nova_compute[250018]: 2026-01-20 15:02:08.958 250022 DEBUG nova.virt.libvirt.vif [None req-e8527b03-1315-427b-9b2f-e315c0e997a1 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T15:02:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerMultinode-server-1263754624',display_name='tempest-TestServerMultinode-server-1263754624',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testservermultinode-server-1263754624',id=152,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='654b3ce7b3644fc58f8dc9f60529320b',ramdisk_id='',reservation_id='r-41ogmh9v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,admin,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerMultinode-1071973011',owner_user_name='tempest-TestServerMultinode-1071973011-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T15:02:02Z,user_data=None,user_id='158563a99d4a420890aaa00b05c8bb57',uuid=363d71d6-a6f3-4145-9964-1d057a891bcd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "42fe52d6-2d12-468d-bf0a-7a1b391b6d17", "address": "fa:16:3e:d5:29:9f", "network": {"id": "0296a21f-6ec4-43a7-8731-1d3692a5de4a", "bridge": "br-int", "label": "tempest-TestServerMultinode-1878354210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "908b5ba217ab458e8c9aa0e5a471c194", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap42fe52d6-2d", "ovs_interfaceid": "42fe52d6-2d12-468d-bf0a-7a1b391b6d17", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 15:02:08 compute-0 nova_compute[250018]: 2026-01-20 15:02:08.958 250022 DEBUG nova.network.os_vif_util [None req-e8527b03-1315-427b-9b2f-e315c0e997a1 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] Converting VIF {"id": "42fe52d6-2d12-468d-bf0a-7a1b391b6d17", "address": "fa:16:3e:d5:29:9f", "network": {"id": "0296a21f-6ec4-43a7-8731-1d3692a5de4a", "bridge": "br-int", "label": "tempest-TestServerMultinode-1878354210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "908b5ba217ab458e8c9aa0e5a471c194", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap42fe52d6-2d", "ovs_interfaceid": "42fe52d6-2d12-468d-bf0a-7a1b391b6d17", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:02:08 compute-0 nova_compute[250018]: 2026-01-20 15:02:08.959 250022 DEBUG nova.network.os_vif_util [None req-e8527b03-1315-427b-9b2f-e315c0e997a1 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d5:29:9f,bridge_name='br-int',has_traffic_filtering=True,id=42fe52d6-2d12-468d-bf0a-7a1b391b6d17,network=Network(0296a21f-6ec4-43a7-8731-1d3692a5de4a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap42fe52d6-2d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:02:08 compute-0 nova_compute[250018]: 2026-01-20 15:02:08.959 250022 DEBUG os_vif [None req-e8527b03-1315-427b-9b2f-e315c0e997a1 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d5:29:9f,bridge_name='br-int',has_traffic_filtering=True,id=42fe52d6-2d12-468d-bf0a-7a1b391b6d17,network=Network(0296a21f-6ec4-43a7-8731-1d3692a5de4a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap42fe52d6-2d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 15:02:08 compute-0 nova_compute[250018]: 2026-01-20 15:02:08.960 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:02:08 compute-0 nova_compute[250018]: 2026-01-20 15:02:08.960 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:02:08 compute-0 nova_compute[250018]: 2026-01-20 15:02:08.961 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:02:08 compute-0 nova_compute[250018]: 2026-01-20 15:02:08.965 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:02:08 compute-0 nova_compute[250018]: 2026-01-20 15:02:08.965 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap42fe52d6-2d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:02:08 compute-0 nova_compute[250018]: 2026-01-20 15:02:08.965 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap42fe52d6-2d, col_values=(('external_ids', {'iface-id': '42fe52d6-2d12-468d-bf0a-7a1b391b6d17', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d5:29:9f', 'vm-uuid': '363d71d6-a6f3-4145-9964-1d057a891bcd'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:02:08 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/2601699422' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:02:08 compute-0 ceph-mon[74360]: pgmap v2314: 321 pgs: 321 active+clean; 260 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 1.8 MiB/s wr, 86 op/s
Jan 20 15:02:08 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/2172022102' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:02:08 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/2349293346' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:02:09 compute-0 nova_compute[250018]: 2026-01-20 15:02:09.015 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:02:09 compute-0 NetworkManager[48960]: <info>  [1768921329.0170] manager: (tap42fe52d6-2d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/267)
Jan 20 15:02:09 compute-0 nova_compute[250018]: 2026-01-20 15:02:09.018 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 15:02:09 compute-0 nova_compute[250018]: 2026-01-20 15:02:09.024 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:02:09 compute-0 nova_compute[250018]: 2026-01-20 15:02:09.025 250022 INFO os_vif [None req-e8527b03-1315-427b-9b2f-e315c0e997a1 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d5:29:9f,bridge_name='br-int',has_traffic_filtering=True,id=42fe52d6-2d12-468d-bf0a-7a1b391b6d17,network=Network(0296a21f-6ec4-43a7-8731-1d3692a5de4a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap42fe52d6-2d')
Jan 20 15:02:09 compute-0 nova_compute[250018]: 2026-01-20 15:02:09.026 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:02:09 compute-0 nova_compute[250018]: 2026-01-20 15:02:09.093 250022 DEBUG nova.virt.libvirt.driver [None req-e8527b03-1315-427b-9b2f-e315c0e997a1 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:02:09 compute-0 nova_compute[250018]: 2026-01-20 15:02:09.094 250022 DEBUG nova.virt.libvirt.driver [None req-e8527b03-1315-427b-9b2f-e315c0e997a1 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:02:09 compute-0 nova_compute[250018]: 2026-01-20 15:02:09.094 250022 DEBUG nova.virt.libvirt.driver [None req-e8527b03-1315-427b-9b2f-e315c0e997a1 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] No VIF found with MAC fa:16:3e:d5:29:9f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 15:02:09 compute-0 nova_compute[250018]: 2026-01-20 15:02:09.095 250022 INFO nova.virt.libvirt.driver [None req-e8527b03-1315-427b-9b2f-e315c0e997a1 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] [instance: 363d71d6-a6f3-4145-9964-1d057a891bcd] Using config drive
Jan 20 15:02:09 compute-0 nova_compute[250018]: 2026-01-20 15:02:09.122 250022 DEBUG nova.storage.rbd_utils [None req-e8527b03-1315-427b-9b2f-e315c0e997a1 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] rbd image 363d71d6-a6f3-4145-9964-1d057a891bcd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:02:09 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:02:09 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:02:09 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:02:09.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:02:09 compute-0 nova_compute[250018]: 2026-01-20 15:02:09.652 250022 DEBUG nova.network.neutron [req-ae430ec9-fec5-4742-95bf-e822dae156ba req-6fc16168-1cba-4f33-b130-add86a4f55e7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 363d71d6-a6f3-4145-9964-1d057a891bcd] Updated VIF entry in instance network info cache for port 42fe52d6-2d12-468d-bf0a-7a1b391b6d17. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 15:02:09 compute-0 nova_compute[250018]: 2026-01-20 15:02:09.653 250022 DEBUG nova.network.neutron [req-ae430ec9-fec5-4742-95bf-e822dae156ba req-6fc16168-1cba-4f33-b130-add86a4f55e7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 363d71d6-a6f3-4145-9964-1d057a891bcd] Updating instance_info_cache with network_info: [{"id": "42fe52d6-2d12-468d-bf0a-7a1b391b6d17", "address": "fa:16:3e:d5:29:9f", "network": {"id": "0296a21f-6ec4-43a7-8731-1d3692a5de4a", "bridge": "br-int", "label": "tempest-TestServerMultinode-1878354210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "908b5ba217ab458e8c9aa0e5a471c194", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap42fe52d6-2d", "ovs_interfaceid": "42fe52d6-2d12-468d-bf0a-7a1b391b6d17", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:02:09 compute-0 nova_compute[250018]: 2026-01-20 15:02:09.680 250022 DEBUG oslo_concurrency.lockutils [req-ae430ec9-fec5-4742-95bf-e822dae156ba req-6fc16168-1cba-4f33-b130-add86a4f55e7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-363d71d6-a6f3-4145-9964-1d057a891bcd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:02:09 compute-0 nova_compute[250018]: 2026-01-20 15:02:09.815 250022 INFO nova.virt.libvirt.driver [None req-e8527b03-1315-427b-9b2f-e315c0e997a1 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] [instance: 363d71d6-a6f3-4145-9964-1d057a891bcd] Creating config drive at /var/lib/nova/instances/363d71d6-a6f3-4145-9964-1d057a891bcd/disk.config
Jan 20 15:02:09 compute-0 nova_compute[250018]: 2026-01-20 15:02:09.820 250022 DEBUG oslo_concurrency.processutils [None req-e8527b03-1315-427b-9b2f-e315c0e997a1 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/363d71d6-a6f3-4145-9964-1d057a891bcd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphya23mff execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:02:09 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:02:09 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:02:09 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:02:09.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:02:09 compute-0 nova_compute[250018]: 2026-01-20 15:02:09.957 250022 DEBUG oslo_concurrency.processutils [None req-e8527b03-1315-427b-9b2f-e315c0e997a1 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/363d71d6-a6f3-4145-9964-1d057a891bcd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphya23mff" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:02:09 compute-0 nova_compute[250018]: 2026-01-20 15:02:09.992 250022 DEBUG nova.storage.rbd_utils [None req-e8527b03-1315-427b-9b2f-e315c0e997a1 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] rbd image 363d71d6-a6f3-4145-9964-1d057a891bcd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:02:09 compute-0 nova_compute[250018]: 2026-01-20 15:02:09.996 250022 DEBUG oslo_concurrency.processutils [None req-e8527b03-1315-427b-9b2f-e315c0e997a1 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/363d71d6-a6f3-4145-9964-1d057a891bcd/disk.config 363d71d6-a6f3-4145-9964-1d057a891bcd_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:02:10 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e333 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:02:10 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2315: 321 pgs: 321 active+clean; 336 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 5.0 MiB/s wr, 177 op/s
Jan 20 15:02:10 compute-0 nova_compute[250018]: 2026-01-20 15:02:10.210 250022 DEBUG oslo_concurrency.processutils [None req-e8527b03-1315-427b-9b2f-e315c0e997a1 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/363d71d6-a6f3-4145-9964-1d057a891bcd/disk.config 363d71d6-a6f3-4145-9964-1d057a891bcd_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.214s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:02:10 compute-0 nova_compute[250018]: 2026-01-20 15:02:10.211 250022 INFO nova.virt.libvirt.driver [None req-e8527b03-1315-427b-9b2f-e315c0e997a1 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] [instance: 363d71d6-a6f3-4145-9964-1d057a891bcd] Deleting local config drive /var/lib/nova/instances/363d71d6-a6f3-4145-9964-1d057a891bcd/disk.config because it was imported into RBD.
Jan 20 15:02:10 compute-0 kernel: tap42fe52d6-2d: entered promiscuous mode
Jan 20 15:02:10 compute-0 NetworkManager[48960]: <info>  [1768921330.2631] manager: (tap42fe52d6-2d): new Tun device (/org/freedesktop/NetworkManager/Devices/268)
Jan 20 15:02:10 compute-0 ovn_controller[148666]: 2026-01-20T15:02:10Z|00542|binding|INFO|Claiming lport 42fe52d6-2d12-468d-bf0a-7a1b391b6d17 for this chassis.
Jan 20 15:02:10 compute-0 ovn_controller[148666]: 2026-01-20T15:02:10Z|00543|binding|INFO|42fe52d6-2d12-468d-bf0a-7a1b391b6d17: Claiming fa:16:3e:d5:29:9f 10.100.0.13
Jan 20 15:02:10 compute-0 nova_compute[250018]: 2026-01-20 15:02:10.268 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:02:10 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:02:10.277 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d5:29:9f 10.100.0.13'], port_security=['fa:16:3e:d5:29:9f 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '363d71d6-a6f3-4145-9964-1d057a891bcd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0296a21f-6ec4-43a7-8731-1d3692a5de4a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '654b3ce7b3644fc58f8dc9f60529320b', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ff1c5b6a-5ab6-401e-b333-7f359193e012', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6a3d5928-255d-4c0c-af70-f26be5196416, chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=42fe52d6-2d12-468d-bf0a-7a1b391b6d17) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:02:10 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:02:10.278 160071 INFO neutron.agent.ovn.metadata.agent [-] Port 42fe52d6-2d12-468d-bf0a-7a1b391b6d17 in datapath 0296a21f-6ec4-43a7-8731-1d3692a5de4a bound to our chassis
Jan 20 15:02:10 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:02:10.280 160071 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0296a21f-6ec4-43a7-8731-1d3692a5de4a
Jan 20 15:02:10 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:02:10.290 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[45e0d01a-f222-4722-a10e-a2b1aa49824f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:02:10 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:02:10.291 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap0296a21f-61 in ovnmeta-0296a21f-6ec4-43a7-8731-1d3692a5de4a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 15:02:10 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:02:10.293 257604 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap0296a21f-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 15:02:10 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:02:10.293 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[c2cc37c0-dd39-4185-ab83-70008d38cebb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:02:10 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:02:10.294 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[e2655e1d-e019-458e-b046-1ce2aa7fc304]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:02:10 compute-0 systemd-udevd[339356]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 15:02:10 compute-0 systemd-machined[216401]: New machine qemu-70-instance-00000098.
Jan 20 15:02:10 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:02:10.308 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[871c6860-f0c1-4670-9321-e1fbac3f4eb9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:02:10 compute-0 NetworkManager[48960]: <info>  [1768921330.3143] device (tap42fe52d6-2d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 15:02:10 compute-0 NetworkManager[48960]: <info>  [1768921330.3154] device (tap42fe52d6-2d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 15:02:10 compute-0 systemd[1]: Started Virtual Machine qemu-70-instance-00000098.
Jan 20 15:02:10 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:02:10.334 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[93128462-40ac-4a41-b604-aeb18004944b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:02:10 compute-0 nova_compute[250018]: 2026-01-20 15:02:10.337 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:02:10 compute-0 ovn_controller[148666]: 2026-01-20T15:02:10Z|00544|binding|INFO|Setting lport 42fe52d6-2d12-468d-bf0a-7a1b391b6d17 ovn-installed in OVS
Jan 20 15:02:10 compute-0 ovn_controller[148666]: 2026-01-20T15:02:10Z|00545|binding|INFO|Setting lport 42fe52d6-2d12-468d-bf0a-7a1b391b6d17 up in Southbound
Jan 20 15:02:10 compute-0 nova_compute[250018]: 2026-01-20 15:02:10.344 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:02:10 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:02:10.371 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[c3e62d83-bba1-4d37-98f5-037850c178a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:02:10 compute-0 NetworkManager[48960]: <info>  [1768921330.3771] manager: (tap0296a21f-60): new Veth device (/org/freedesktop/NetworkManager/Devices/269)
Jan 20 15:02:10 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:02:10.376 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[4a587f5f-0e63-4776-8a7e-b0e56b247c75]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:02:10 compute-0 systemd-udevd[339360]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 15:02:10 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:02:10.405 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[95438192-5b47-45a5-87e4-79b45de6105a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:02:10 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:02:10.410 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[b11ddf0d-9a16-4b99-9cea-fa34f4f37c08]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:02:10 compute-0 NetworkManager[48960]: <info>  [1768921330.4375] device (tap0296a21f-60): carrier: link connected
Jan 20 15:02:10 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:02:10.447 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[6e2efb99-08be-41d6-b51f-26b1b5341e7a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:02:10 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:02:10.466 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[10f8f6db-fdd6-4c5b-826f-67b9e0a08a5c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0296a21f-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e3:1c:68'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 178], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 727116, 'reachable_time': 37625, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 339388, 'error': None, 'target': 'ovnmeta-0296a21f-6ec4-43a7-8731-1d3692a5de4a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:02:10 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:02:10.479 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[66effbe2-3b8b-4ea1-a592-0b33e235a9ad]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee3:1c68'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 727116, 'tstamp': 727116}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 339389, 'error': None, 'target': 'ovnmeta-0296a21f-6ec4-43a7-8731-1d3692a5de4a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:02:10 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:02:10.496 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[2ba72da5-9c16-417d-85ad-ed3768febc44]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0296a21f-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e3:1c:68'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 178], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 727116, 'reachable_time': 37625, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 339390, 'error': None, 'target': 'ovnmeta-0296a21f-6ec4-43a7-8731-1d3692a5de4a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:02:10 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:02:10.526 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[903adaac-55a6-46c3-adff-61de90847650]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:02:10 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:02:10.589 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[69a55ff0-8823-489c-b0f9-8c58a110d6cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:02:10 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:02:10.590 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0296a21f-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:02:10 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:02:10.590 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:02:10 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:02:10.591 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0296a21f-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:02:10 compute-0 nova_compute[250018]: 2026-01-20 15:02:10.593 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:02:10 compute-0 NetworkManager[48960]: <info>  [1768921330.5938] manager: (tap0296a21f-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/270)
Jan 20 15:02:10 compute-0 kernel: tap0296a21f-60: entered promiscuous mode
Jan 20 15:02:10 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:02:10.596 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0296a21f-60, col_values=(('external_ids', {'iface-id': 'a6fccd00-2fdb-4d49-8d76-4860c81e4a5f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:02:10 compute-0 nova_compute[250018]: 2026-01-20 15:02:10.596 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:02:10 compute-0 ovn_controller[148666]: 2026-01-20T15:02:10Z|00546|binding|INFO|Releasing lport a6fccd00-2fdb-4d49-8d76-4860c81e4a5f from this chassis (sb_readonly=0)
Jan 20 15:02:10 compute-0 nova_compute[250018]: 2026-01-20 15:02:10.619 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:02:10 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:02:10.619 160071 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/0296a21f-6ec4-43a7-8731-1d3692a5de4a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/0296a21f-6ec4-43a7-8731-1d3692a5de4a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 15:02:10 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:02:10.620 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[5de629ea-278a-4c5f-b793-d835b9666f13]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:02:10 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:02:10.621 160071 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 15:02:10 compute-0 ovn_metadata_agent[160049]: global
Jan 20 15:02:10 compute-0 ovn_metadata_agent[160049]:     log         /dev/log local0 debug
Jan 20 15:02:10 compute-0 ovn_metadata_agent[160049]:     log-tag     haproxy-metadata-proxy-0296a21f-6ec4-43a7-8731-1d3692a5de4a
Jan 20 15:02:10 compute-0 ovn_metadata_agent[160049]:     user        root
Jan 20 15:02:10 compute-0 ovn_metadata_agent[160049]:     group       root
Jan 20 15:02:10 compute-0 ovn_metadata_agent[160049]:     maxconn     1024
Jan 20 15:02:10 compute-0 ovn_metadata_agent[160049]:     pidfile     /var/lib/neutron/external/pids/0296a21f-6ec4-43a7-8731-1d3692a5de4a.pid.haproxy
Jan 20 15:02:10 compute-0 ovn_metadata_agent[160049]:     daemon
Jan 20 15:02:10 compute-0 ovn_metadata_agent[160049]: 
Jan 20 15:02:10 compute-0 ovn_metadata_agent[160049]: defaults
Jan 20 15:02:10 compute-0 ovn_metadata_agent[160049]:     log global
Jan 20 15:02:10 compute-0 ovn_metadata_agent[160049]:     mode http
Jan 20 15:02:10 compute-0 ovn_metadata_agent[160049]:     option httplog
Jan 20 15:02:10 compute-0 ovn_metadata_agent[160049]:     option dontlognull
Jan 20 15:02:10 compute-0 ovn_metadata_agent[160049]:     option http-server-close
Jan 20 15:02:10 compute-0 ovn_metadata_agent[160049]:     option forwardfor
Jan 20 15:02:10 compute-0 ovn_metadata_agent[160049]:     retries                 3
Jan 20 15:02:10 compute-0 ovn_metadata_agent[160049]:     timeout http-request    30s
Jan 20 15:02:10 compute-0 ovn_metadata_agent[160049]:     timeout connect         30s
Jan 20 15:02:10 compute-0 ovn_metadata_agent[160049]:     timeout client          32s
Jan 20 15:02:10 compute-0 ovn_metadata_agent[160049]:     timeout server          32s
Jan 20 15:02:10 compute-0 ovn_metadata_agent[160049]:     timeout http-keep-alive 30s
Jan 20 15:02:10 compute-0 ovn_metadata_agent[160049]: 
Jan 20 15:02:10 compute-0 ovn_metadata_agent[160049]: 
Jan 20 15:02:10 compute-0 ovn_metadata_agent[160049]: listen listener
Jan 20 15:02:10 compute-0 ovn_metadata_agent[160049]:     bind 169.254.169.254:80
Jan 20 15:02:10 compute-0 ovn_metadata_agent[160049]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 15:02:10 compute-0 ovn_metadata_agent[160049]:     http-request add-header X-OVN-Network-ID 0296a21f-6ec4-43a7-8731-1d3692a5de4a
Jan 20 15:02:10 compute-0 ovn_metadata_agent[160049]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 15:02:10 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:02:10.623 160071 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-0296a21f-6ec4-43a7-8731-1d3692a5de4a', 'env', 'PROCESS_TAG=haproxy-0296a21f-6ec4-43a7-8731-1d3692a5de4a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/0296a21f-6ec4-43a7-8731-1d3692a5de4a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 15:02:10 compute-0 nova_compute[250018]: 2026-01-20 15:02:10.631 250022 DEBUG nova.compute.manager [req-ce14fcf6-97b5-48aa-a242-43a37d4d030c req-70672b62-76c3-41a5-a6fb-46f75aaf6393 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 363d71d6-a6f3-4145-9964-1d057a891bcd] Received event network-vif-plugged-42fe52d6-2d12-468d-bf0a-7a1b391b6d17 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:02:10 compute-0 nova_compute[250018]: 2026-01-20 15:02:10.631 250022 DEBUG oslo_concurrency.lockutils [req-ce14fcf6-97b5-48aa-a242-43a37d4d030c req-70672b62-76c3-41a5-a6fb-46f75aaf6393 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "363d71d6-a6f3-4145-9964-1d057a891bcd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:02:10 compute-0 nova_compute[250018]: 2026-01-20 15:02:10.632 250022 DEBUG oslo_concurrency.lockutils [req-ce14fcf6-97b5-48aa-a242-43a37d4d030c req-70672b62-76c3-41a5-a6fb-46f75aaf6393 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "363d71d6-a6f3-4145-9964-1d057a891bcd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:02:10 compute-0 nova_compute[250018]: 2026-01-20 15:02:10.632 250022 DEBUG oslo_concurrency.lockutils [req-ce14fcf6-97b5-48aa-a242-43a37d4d030c req-70672b62-76c3-41a5-a6fb-46f75aaf6393 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "363d71d6-a6f3-4145-9964-1d057a891bcd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:02:10 compute-0 nova_compute[250018]: 2026-01-20 15:02:10.632 250022 DEBUG nova.compute.manager [req-ce14fcf6-97b5-48aa-a242-43a37d4d030c req-70672b62-76c3-41a5-a6fb-46f75aaf6393 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 363d71d6-a6f3-4145-9964-1d057a891bcd] Processing event network-vif-plugged-42fe52d6-2d12-468d-bf0a-7a1b391b6d17 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 20 15:02:10 compute-0 podman[339422]: 2026-01-20 15:02:10.972179995 +0000 UTC m=+0.046678528 container create b42ab89e0a271202bf1197715d4db3529458090b9fb1297bd5fc38636092e954 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0296a21f-6ec4-43a7-8731-1d3692a5de4a, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 20 15:02:11 compute-0 systemd[1]: Started libpod-conmon-b42ab89e0a271202bf1197715d4db3529458090b9fb1297bd5fc38636092e954.scope.
Jan 20 15:02:11 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:02:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ddcc6fa04685ef5c301ea9d5a9159c45df5ee38552a81ac8386957a4c8dce05/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 15:02:11 compute-0 podman[339422]: 2026-01-20 15:02:11.042940492 +0000 UTC m=+0.117439035 container init b42ab89e0a271202bf1197715d4db3529458090b9fb1297bd5fc38636092e954 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0296a21f-6ec4-43a7-8731-1d3692a5de4a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 20 15:02:11 compute-0 podman[339422]: 2026-01-20 15:02:10.947716867 +0000 UTC m=+0.022215420 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 15:02:11 compute-0 podman[339422]: 2026-01-20 15:02:11.048444359 +0000 UTC m=+0.122942892 container start b42ab89e0a271202bf1197715d4db3529458090b9fb1297bd5fc38636092e954 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0296a21f-6ec4-43a7-8731-1d3692a5de4a, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 20 15:02:11 compute-0 neutron-haproxy-ovnmeta-0296a21f-6ec4-43a7-8731-1d3692a5de4a[339438]: [NOTICE]   (339442) : New worker (339444) forked
Jan 20 15:02:11 compute-0 neutron-haproxy-ovnmeta-0296a21f-6ec4-43a7-8731-1d3692a5de4a[339438]: [NOTICE]   (339442) : Loading success.
Jan 20 15:02:11 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:02:11 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:02:11 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:02:11.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:02:11 compute-0 ceph-mon[74360]: pgmap v2315: 321 pgs: 321 active+clean; 336 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 5.0 MiB/s wr, 177 op/s
Jan 20 15:02:11 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/1448665557' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:02:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 15:02:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:02:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 20 15:02:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:02:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006404272520007445 of space, bias 1.0, pg target 1.9212817560022335 quantized to 32 (current 32)
Jan 20 15:02:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:02:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 5.452610273590173e-07 of space, bias 1.0, pg target 0.00016303304718034617 quantized to 32 (current 32)
Jan 20 15:02:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:02:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 15:02:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:02:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.002892064489112228 of space, bias 1.0, pg target 0.8647272822445562 quantized to 32 (current 32)
Jan 20 15:02:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:02:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 32)
Jan 20 15:02:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:02:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 15:02:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:02:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021737739624046157 quantized to 32 (current 32)
Jan 20 15:02:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:02:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Jan 20 15:02:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:02:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 15:02:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:02:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Jan 20 15:02:11 compute-0 nova_compute[250018]: 2026-01-20 15:02:11.686 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768921331.6856172, 363d71d6-a6f3-4145-9964-1d057a891bcd => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:02:11 compute-0 nova_compute[250018]: 2026-01-20 15:02:11.687 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 363d71d6-a6f3-4145-9964-1d057a891bcd] VM Started (Lifecycle Event)
Jan 20 15:02:11 compute-0 nova_compute[250018]: 2026-01-20 15:02:11.691 250022 DEBUG nova.compute.manager [None req-e8527b03-1315-427b-9b2f-e315c0e997a1 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] [instance: 363d71d6-a6f3-4145-9964-1d057a891bcd] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 15:02:11 compute-0 nova_compute[250018]: 2026-01-20 15:02:11.694 250022 DEBUG nova.virt.libvirt.driver [None req-e8527b03-1315-427b-9b2f-e315c0e997a1 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] [instance: 363d71d6-a6f3-4145-9964-1d057a891bcd] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 15:02:11 compute-0 nova_compute[250018]: 2026-01-20 15:02:11.697 250022 INFO nova.virt.libvirt.driver [-] [instance: 363d71d6-a6f3-4145-9964-1d057a891bcd] Instance spawned successfully.
Jan 20 15:02:11 compute-0 nova_compute[250018]: 2026-01-20 15:02:11.698 250022 DEBUG nova.virt.libvirt.driver [None req-e8527b03-1315-427b-9b2f-e315c0e997a1 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] [instance: 363d71d6-a6f3-4145-9964-1d057a891bcd] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 15:02:11 compute-0 nova_compute[250018]: 2026-01-20 15:02:11.718 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 363d71d6-a6f3-4145-9964-1d057a891bcd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:02:11 compute-0 nova_compute[250018]: 2026-01-20 15:02:11.723 250022 DEBUG nova.virt.libvirt.driver [None req-e8527b03-1315-427b-9b2f-e315c0e997a1 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] [instance: 363d71d6-a6f3-4145-9964-1d057a891bcd] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:02:11 compute-0 nova_compute[250018]: 2026-01-20 15:02:11.724 250022 DEBUG nova.virt.libvirt.driver [None req-e8527b03-1315-427b-9b2f-e315c0e997a1 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] [instance: 363d71d6-a6f3-4145-9964-1d057a891bcd] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:02:11 compute-0 nova_compute[250018]: 2026-01-20 15:02:11.725 250022 DEBUG nova.virt.libvirt.driver [None req-e8527b03-1315-427b-9b2f-e315c0e997a1 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] [instance: 363d71d6-a6f3-4145-9964-1d057a891bcd] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:02:11 compute-0 nova_compute[250018]: 2026-01-20 15:02:11.725 250022 DEBUG nova.virt.libvirt.driver [None req-e8527b03-1315-427b-9b2f-e315c0e997a1 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] [instance: 363d71d6-a6f3-4145-9964-1d057a891bcd] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:02:11 compute-0 nova_compute[250018]: 2026-01-20 15:02:11.726 250022 DEBUG nova.virt.libvirt.driver [None req-e8527b03-1315-427b-9b2f-e315c0e997a1 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] [instance: 363d71d6-a6f3-4145-9964-1d057a891bcd] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:02:11 compute-0 nova_compute[250018]: 2026-01-20 15:02:11.727 250022 DEBUG nova.virt.libvirt.driver [None req-e8527b03-1315-427b-9b2f-e315c0e997a1 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] [instance: 363d71d6-a6f3-4145-9964-1d057a891bcd] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:02:11 compute-0 nova_compute[250018]: 2026-01-20 15:02:11.733 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 363d71d6-a6f3-4145-9964-1d057a891bcd] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:02:11 compute-0 nova_compute[250018]: 2026-01-20 15:02:11.766 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 363d71d6-a6f3-4145-9964-1d057a891bcd] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 15:02:11 compute-0 nova_compute[250018]: 2026-01-20 15:02:11.767 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768921331.685945, 363d71d6-a6f3-4145-9964-1d057a891bcd => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:02:11 compute-0 nova_compute[250018]: 2026-01-20 15:02:11.767 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 363d71d6-a6f3-4145-9964-1d057a891bcd] VM Paused (Lifecycle Event)
Jan 20 15:02:11 compute-0 nova_compute[250018]: 2026-01-20 15:02:11.800 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 363d71d6-a6f3-4145-9964-1d057a891bcd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:02:11 compute-0 nova_compute[250018]: 2026-01-20 15:02:11.804 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768921331.6939964, 363d71d6-a6f3-4145-9964-1d057a891bcd => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:02:11 compute-0 nova_compute[250018]: 2026-01-20 15:02:11.805 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 363d71d6-a6f3-4145-9964-1d057a891bcd] VM Resumed (Lifecycle Event)
Jan 20 15:02:11 compute-0 nova_compute[250018]: 2026-01-20 15:02:11.809 250022 INFO nova.compute.manager [None req-e8527b03-1315-427b-9b2f-e315c0e997a1 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] [instance: 363d71d6-a6f3-4145-9964-1d057a891bcd] Took 8.89 seconds to spawn the instance on the hypervisor.
Jan 20 15:02:11 compute-0 nova_compute[250018]: 2026-01-20 15:02:11.809 250022 DEBUG nova.compute.manager [None req-e8527b03-1315-427b-9b2f-e315c0e997a1 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] [instance: 363d71d6-a6f3-4145-9964-1d057a891bcd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:02:11 compute-0 nova_compute[250018]: 2026-01-20 15:02:11.834 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 363d71d6-a6f3-4145-9964-1d057a891bcd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:02:11 compute-0 nova_compute[250018]: 2026-01-20 15:02:11.838 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 363d71d6-a6f3-4145-9964-1d057a891bcd] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:02:11 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:02:11 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:02:11 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:02:11.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:02:11 compute-0 nova_compute[250018]: 2026-01-20 15:02:11.869 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 363d71d6-a6f3-4145-9964-1d057a891bcd] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 15:02:11 compute-0 nova_compute[250018]: 2026-01-20 15:02:11.882 250022 INFO nova.compute.manager [None req-e8527b03-1315-427b-9b2f-e315c0e997a1 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] [instance: 363d71d6-a6f3-4145-9964-1d057a891bcd] Took 9.85 seconds to build instance.
Jan 20 15:02:11 compute-0 nova_compute[250018]: 2026-01-20 15:02:11.900 250022 DEBUG oslo_concurrency.lockutils [None req-e8527b03-1315-427b-9b2f-e315c0e997a1 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] Lock "363d71d6-a6f3-4145-9964-1d057a891bcd" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.947s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:02:12 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2316: 321 pgs: 321 active+clean; 371 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 748 KiB/s rd, 6.6 MiB/s wr, 155 op/s
Jan 20 15:02:12 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/3771773152' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:02:12 compute-0 ceph-mon[74360]: pgmap v2316: 321 pgs: 321 active+clean; 371 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 748 KiB/s rd, 6.6 MiB/s wr, 155 op/s
Jan 20 15:02:12 compute-0 nova_compute[250018]: 2026-01-20 15:02:12.755 250022 DEBUG nova.compute.manager [req-1f61aba6-4985-4d1c-bcd3-fb8e8999c8a8 req-f1e4d036-60e4-4a66-8d58-0e84f9c656f0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 363d71d6-a6f3-4145-9964-1d057a891bcd] Received event network-vif-plugged-42fe52d6-2d12-468d-bf0a-7a1b391b6d17 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:02:12 compute-0 nova_compute[250018]: 2026-01-20 15:02:12.755 250022 DEBUG oslo_concurrency.lockutils [req-1f61aba6-4985-4d1c-bcd3-fb8e8999c8a8 req-f1e4d036-60e4-4a66-8d58-0e84f9c656f0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "363d71d6-a6f3-4145-9964-1d057a891bcd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:02:12 compute-0 nova_compute[250018]: 2026-01-20 15:02:12.755 250022 DEBUG oslo_concurrency.lockutils [req-1f61aba6-4985-4d1c-bcd3-fb8e8999c8a8 req-f1e4d036-60e4-4a66-8d58-0e84f9c656f0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "363d71d6-a6f3-4145-9964-1d057a891bcd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:02:12 compute-0 nova_compute[250018]: 2026-01-20 15:02:12.756 250022 DEBUG oslo_concurrency.lockutils [req-1f61aba6-4985-4d1c-bcd3-fb8e8999c8a8 req-f1e4d036-60e4-4a66-8d58-0e84f9c656f0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "363d71d6-a6f3-4145-9964-1d057a891bcd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:02:12 compute-0 nova_compute[250018]: 2026-01-20 15:02:12.756 250022 DEBUG nova.compute.manager [req-1f61aba6-4985-4d1c-bcd3-fb8e8999c8a8 req-f1e4d036-60e4-4a66-8d58-0e84f9c656f0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 363d71d6-a6f3-4145-9964-1d057a891bcd] No waiting events found dispatching network-vif-plugged-42fe52d6-2d12-468d-bf0a-7a1b391b6d17 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:02:12 compute-0 nova_compute[250018]: 2026-01-20 15:02:12.756 250022 WARNING nova.compute.manager [req-1f61aba6-4985-4d1c-bcd3-fb8e8999c8a8 req-f1e4d036-60e4-4a66-8d58-0e84f9c656f0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 363d71d6-a6f3-4145-9964-1d057a891bcd] Received unexpected event network-vif-plugged-42fe52d6-2d12-468d-bf0a-7a1b391b6d17 for instance with vm_state active and task_state None.
Jan 20 15:02:13 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:02:13 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:02:13 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:02:13.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:02:13 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:02:13 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:02:13 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:02:13.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:02:13 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/3642955436' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:02:13 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/3642955436' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:02:13 compute-0 sudo[339497]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:02:13 compute-0 sudo[339497]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:02:13 compute-0 sudo[339497]: pam_unix(sudo:session): session closed for user root
Jan 20 15:02:14 compute-0 sudo[339522]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:02:14 compute-0 sudo[339522]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:02:14 compute-0 sudo[339522]: pam_unix(sudo:session): session closed for user root
Jan 20 15:02:14 compute-0 nova_compute[250018]: 2026-01-20 15:02:14.061 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:02:14 compute-0 nova_compute[250018]: 2026-01-20 15:02:14.064 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:02:14 compute-0 sudo[339547]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:02:14 compute-0 sudo[339547]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:02:14 compute-0 sudo[339547]: pam_unix(sudo:session): session closed for user root
Jan 20 15:02:14 compute-0 sudo[339572]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Jan 20 15:02:14 compute-0 sudo[339572]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:02:14 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2317: 321 pgs: 321 active+clean; 385 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 912 KiB/s rd, 7.4 MiB/s wr, 169 op/s
Jan 20 15:02:14 compute-0 podman[339642]: 2026-01-20 15:02:14.57827807 +0000 UTC m=+0.060641175 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 20 15:02:14 compute-0 podman[339686]: 2026-01-20 15:02:14.66071792 +0000 UTC m=+0.061439075 container exec a602f19ce9ef2d4922e558036fcbd51fac25abd19d28d7df857e5fbe8f959ba3 (image=quay.io/ceph/ceph:v18, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 15:02:14 compute-0 podman[339685]: 2026-01-20 15:02:14.688048116 +0000 UTC m=+0.084573248 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Jan 20 15:02:14 compute-0 podman[339686]: 2026-01-20 15:02:14.757809416 +0000 UTC m=+0.158530551 container exec_died a602f19ce9ef2d4922e558036fcbd51fac25abd19d28d7df857e5fbe8f959ba3 (image=quay.io/ceph/ceph:v18, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mon-compute-0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:02:14 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/3696145664' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:02:14 compute-0 ceph-mon[74360]: pgmap v2317: 321 pgs: 321 active+clean; 385 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 912 KiB/s rd, 7.4 MiB/s wr, 169 op/s
Jan 20 15:02:14 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/295816442' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:02:15 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e333 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:02:15 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 20 15:02:15 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:02:15 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 20 15:02:15 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:02:15 compute-0 podman[339863]: 2026-01-20 15:02:15.365635408 +0000 UTC m=+0.056951825 container exec a2973a514c852ff316e6ad2ff84585210b4ad01d86cdf2de06f634d9390a88e8 (image=quay.io/ceph/haproxy:2.3, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-haproxy-rgw-default-compute-0-nqkboe)
Jan 20 15:02:15 compute-0 podman[339863]: 2026-01-20 15:02:15.395443691 +0000 UTC m=+0.086760088 container exec_died a2973a514c852ff316e6ad2ff84585210b4ad01d86cdf2de06f634d9390a88e8 (image=quay.io/ceph/haproxy:2.3, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-haproxy-rgw-default-compute-0-nqkboe)
Jan 20 15:02:15 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:02:15 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:02:15 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:02:15.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:02:15 compute-0 sudo[339895]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:02:15 compute-0 sudo[339895]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:02:15 compute-0 sudo[339895]: pam_unix(sudo:session): session closed for user root
Jan 20 15:02:15 compute-0 sudo[339936]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:02:15 compute-0 sudo[339936]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:02:15 compute-0 sudo[339936]: pam_unix(sudo:session): session closed for user root
Jan 20 15:02:15 compute-0 podman[339975]: 2026-01-20 15:02:15.657118579 +0000 UTC m=+0.050007878 container exec 0c2042652fe8d88ae47b6333c235a533de4d966f44da3b69a5fc82baeacb10bf (image=quay.io/ceph/keepalived:2.2.4, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-keepalived-rgw-default-compute-0-gcjsxe, distribution-scope=public, vendor=Red Hat, Inc., io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.openshift.tags=Ceph keepalived, name=keepalived, summary=Provides keepalived on RHEL 9 for Ceph., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=keepalived-container, description=keepalived for Ceph, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2, release=1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, version=2.2.4, build-date=2023-02-22T09:23:20, io.k8s.display-name=Keepalived on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793)
Jan 20 15:02:15 compute-0 podman[339975]: 2026-01-20 15:02:15.672660188 +0000 UTC m=+0.065549297 container exec_died 0c2042652fe8d88ae47b6333c235a533de4d966f44da3b69a5fc82baeacb10bf (image=quay.io/ceph/keepalived:2.2.4, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-keepalived-rgw-default-compute-0-gcjsxe, summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc., io.openshift.tags=Ceph keepalived, build-date=2023-02-22T09:23:20, io.buildah.version=1.28.2, io.k8s.display-name=Keepalived on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, architecture=x86_64, distribution-scope=public, name=keepalived, io.openshift.expose-services=, release=1793, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, description=keepalived for Ceph, version=2.2.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=keepalived-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 20 15:02:15 compute-0 sudo[339572]: pam_unix(sudo:session): session closed for user root
Jan 20 15:02:15 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 15:02:15 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:02:15 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 15:02:15 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:02:15 compute-0 sudo[340006]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:02:15 compute-0 sudo[340006]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:02:15 compute-0 sudo[340006]: pam_unix(sudo:session): session closed for user root
Jan 20 15:02:15 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 20 15:02:15 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:02:15 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 20 15:02:15 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:02:15 compute-0 sudo[340032]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:02:15 compute-0 sudo[340032]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:02:15 compute-0 sudo[340032]: pam_unix(sudo:session): session closed for user root
Jan 20 15:02:15 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:02:15 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:02:15 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:02:15.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:02:15 compute-0 sudo[340057]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:02:15 compute-0 sudo[340057]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:02:15 compute-0 sudo[340057]: pam_unix(sudo:session): session closed for user root
Jan 20 15:02:15 compute-0 sudo[340082]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 15:02:15 compute-0 sudo[340082]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:02:16 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2318: 321 pgs: 321 active+clean; 386 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 7.0 MiB/s wr, 266 op/s
Jan 20 15:02:16 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:02:16 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:02:16 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:02:16 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:02:16 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:02:16 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:02:16 compute-0 sudo[340082]: pam_unix(sudo:session): session closed for user root
Jan 20 15:02:16 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 15:02:16 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:02:16 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 20 15:02:16 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 15:02:16 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 20 15:02:16 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:02:16 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 8f8e036a-5718-4b8f-bafe-7d42aa87dbda does not exist
Jan 20 15:02:16 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 96c05da7-96d1-4d32-92bd-36b32265a3e2 does not exist
Jan 20 15:02:16 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev e3972f75-e29d-45da-96e9-1a0b8b49c94a does not exist
Jan 20 15:02:16 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 20 15:02:16 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 15:02:16 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 20 15:02:16 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 15:02:16 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 15:02:16 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:02:16 compute-0 sudo[340138]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:02:16 compute-0 sudo[340138]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:02:16 compute-0 sudo[340138]: pam_unix(sudo:session): session closed for user root
Jan 20 15:02:16 compute-0 sudo[340163]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:02:16 compute-0 sudo[340163]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:02:16 compute-0 sudo[340163]: pam_unix(sudo:session): session closed for user root
Jan 20 15:02:16 compute-0 sudo[340188]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:02:16 compute-0 sudo[340188]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:02:16 compute-0 sudo[340188]: pam_unix(sudo:session): session closed for user root
Jan 20 15:02:16 compute-0 sudo[340213]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 15:02:16 compute-0 sudo[340213]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:02:17 compute-0 podman[340278]: 2026-01-20 15:02:17.25163929 +0000 UTC m=+0.025030796 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:02:17 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:02:17 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:02:17 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:02:17.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:02:17 compute-0 podman[340278]: 2026-01-20 15:02:17.701289311 +0000 UTC m=+0.474680727 container create 35bf7d4fa14b8c3da00e52b1d3da30e554ddc2cbd86cc1fc7e29af37b7d0b7e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_hodgkin, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 20 15:02:17 compute-0 ceph-mon[74360]: pgmap v2318: 321 pgs: 321 active+clean; 386 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 7.0 MiB/s wr, 266 op/s
Jan 20 15:02:17 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:02:17 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 15:02:17 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:02:17 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 15:02:17 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 15:02:17 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:02:17 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:02:17 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:02:17 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:02:17.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:02:17 compute-0 systemd[1]: Started libpod-conmon-35bf7d4fa14b8c3da00e52b1d3da30e554ddc2cbd86cc1fc7e29af37b7d0b7e3.scope.
Jan 20 15:02:17 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:02:18 compute-0 podman[340278]: 2026-01-20 15:02:18.0034346 +0000 UTC m=+0.776826036 container init 35bf7d4fa14b8c3da00e52b1d3da30e554ddc2cbd86cc1fc7e29af37b7d0b7e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_hodgkin, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 15:02:18 compute-0 podman[340278]: 2026-01-20 15:02:18.011531288 +0000 UTC m=+0.784922704 container start 35bf7d4fa14b8c3da00e52b1d3da30e554ddc2cbd86cc1fc7e29af37b7d0b7e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_hodgkin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 15:02:18 compute-0 podman[340278]: 2026-01-20 15:02:18.01495386 +0000 UTC m=+0.788345276 container attach 35bf7d4fa14b8c3da00e52b1d3da30e554ddc2cbd86cc1fc7e29af37b7d0b7e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_hodgkin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True)
Jan 20 15:02:18 compute-0 unruffled_hodgkin[340296]: 167 167
Jan 20 15:02:18 compute-0 systemd[1]: libpod-35bf7d4fa14b8c3da00e52b1d3da30e554ddc2cbd86cc1fc7e29af37b7d0b7e3.scope: Deactivated successfully.
Jan 20 15:02:18 compute-0 podman[340278]: 2026-01-20 15:02:18.019547894 +0000 UTC m=+0.792939320 container died 35bf7d4fa14b8c3da00e52b1d3da30e554ddc2cbd86cc1fc7e29af37b7d0b7e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_hodgkin, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True)
Jan 20 15:02:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-bb00b0f09807f20a39b318dad43016a0d94410a94c30eeaa0cc961109a85bcea-merged.mount: Deactivated successfully.
Jan 20 15:02:18 compute-0 podman[340278]: 2026-01-20 15:02:18.057113316 +0000 UTC m=+0.830504732 container remove 35bf7d4fa14b8c3da00e52b1d3da30e554ddc2cbd86cc1fc7e29af37b7d0b7e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_hodgkin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 20 15:02:18 compute-0 systemd[1]: libpod-conmon-35bf7d4fa14b8c3da00e52b1d3da30e554ddc2cbd86cc1fc7e29af37b7d0b7e3.scope: Deactivated successfully.
Jan 20 15:02:18 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2319: 321 pgs: 321 active+clean; 386 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 5.7 MiB/s wr, 251 op/s
Jan 20 15:02:18 compute-0 podman[340319]: 2026-01-20 15:02:18.243074455 +0000 UTC m=+0.040782029 container create 2b3ca64a2363131ae08467404912b84a97029c53ddeba3ba9c5b6f046a2f8dea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_wozniak, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 15:02:18 compute-0 systemd[1]: Started libpod-conmon-2b3ca64a2363131ae08467404912b84a97029c53ddeba3ba9c5b6f046a2f8dea.scope.
Jan 20 15:02:18 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:02:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c48762338f28374989e6cbad44828a03d394a79c33d8e6a63f5bf316a9ec4722/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 15:02:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c48762338f28374989e6cbad44828a03d394a79c33d8e6a63f5bf316a9ec4722/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 15:02:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c48762338f28374989e6cbad44828a03d394a79c33d8e6a63f5bf316a9ec4722/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 15:02:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c48762338f28374989e6cbad44828a03d394a79c33d8e6a63f5bf316a9ec4722/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 15:02:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c48762338f28374989e6cbad44828a03d394a79c33d8e6a63f5bf316a9ec4722/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 15:02:18 compute-0 podman[340319]: 2026-01-20 15:02:18.226747085 +0000 UTC m=+0.024454679 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:02:18 compute-0 podman[340319]: 2026-01-20 15:02:18.341520216 +0000 UTC m=+0.139227790 container init 2b3ca64a2363131ae08467404912b84a97029c53ddeba3ba9c5b6f046a2f8dea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_wozniak, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 15:02:18 compute-0 podman[340319]: 2026-01-20 15:02:18.349486781 +0000 UTC m=+0.147194355 container start 2b3ca64a2363131ae08467404912b84a97029c53ddeba3ba9c5b6f046a2f8dea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_wozniak, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 20 15:02:18 compute-0 podman[340319]: 2026-01-20 15:02:18.3527996 +0000 UTC m=+0.150507204 container attach 2b3ca64a2363131ae08467404912b84a97029c53ddeba3ba9c5b6f046a2f8dea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_wozniak, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 20 15:02:19 compute-0 ceph-mon[74360]: pgmap v2319: 321 pgs: 321 active+clean; 386 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 5.7 MiB/s wr, 251 op/s
Jan 20 15:02:19 compute-0 infallible_wozniak[340335]: --> passed data devices: 0 physical, 1 LVM
Jan 20 15:02:19 compute-0 infallible_wozniak[340335]: --> relative data size: 1.0
Jan 20 15:02:19 compute-0 infallible_wozniak[340335]: --> All data devices are unavailable
Jan 20 15:02:19 compute-0 nova_compute[250018]: 2026-01-20 15:02:19.065 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 15:02:19 compute-0 nova_compute[250018]: 2026-01-20 15:02:19.068 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 15:02:19 compute-0 nova_compute[250018]: 2026-01-20 15:02:19.068 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5004 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Jan 20 15:02:19 compute-0 nova_compute[250018]: 2026-01-20 15:02:19.068 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 20 15:02:19 compute-0 nova_compute[250018]: 2026-01-20 15:02:19.104 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:02:19 compute-0 nova_compute[250018]: 2026-01-20 15:02:19.104 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 20 15:02:19 compute-0 systemd[1]: libpod-2b3ca64a2363131ae08467404912b84a97029c53ddeba3ba9c5b6f046a2f8dea.scope: Deactivated successfully.
Jan 20 15:02:19 compute-0 podman[340319]: 2026-01-20 15:02:19.127984181 +0000 UTC m=+0.925691775 container died 2b3ca64a2363131ae08467404912b84a97029c53ddeba3ba9c5b6f046a2f8dea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_wozniak, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 15:02:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-c48762338f28374989e6cbad44828a03d394a79c33d8e6a63f5bf316a9ec4722-merged.mount: Deactivated successfully.
Jan 20 15:02:19 compute-0 podman[340319]: 2026-01-20 15:02:19.198795339 +0000 UTC m=+0.996502913 container remove 2b3ca64a2363131ae08467404912b84a97029c53ddeba3ba9c5b6f046a2f8dea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_wozniak, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 15:02:19 compute-0 systemd[1]: libpod-conmon-2b3ca64a2363131ae08467404912b84a97029c53ddeba3ba9c5b6f046a2f8dea.scope: Deactivated successfully.
Jan 20 15:02:19 compute-0 sudo[340213]: pam_unix(sudo:session): session closed for user root
Jan 20 15:02:19 compute-0 sudo[340360]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:02:19 compute-0 sudo[340360]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:02:19 compute-0 sudo[340360]: pam_unix(sudo:session): session closed for user root
Jan 20 15:02:19 compute-0 sudo[340385]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:02:19 compute-0 sudo[340385]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:02:19 compute-0 sudo[340385]: pam_unix(sudo:session): session closed for user root
Jan 20 15:02:19 compute-0 sudo[340410]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:02:19 compute-0 sudo[340410]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:02:19 compute-0 sudo[340410]: pam_unix(sudo:session): session closed for user root
Jan 20 15:02:19 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:02:19 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:02:19 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:02:19.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:02:19 compute-0 sudo[340435]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- lvm list --format json
Jan 20 15:02:19 compute-0 sudo[340435]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:02:19 compute-0 podman[340500]: 2026-01-20 15:02:19.792419949 +0000 UTC m=+0.041191611 container create 690ffa8ff18a8178fa94aa41900bff419dc02f11e95b163a500d7432e3d10ef2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_beaver, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 20 15:02:19 compute-0 systemd[1]: Started libpod-conmon-690ffa8ff18a8178fa94aa41900bff419dc02f11e95b163a500d7432e3d10ef2.scope.
Jan 20 15:02:19 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:02:19 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:02:19 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:02:19.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:02:19 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:02:19 compute-0 podman[340500]: 2026-01-20 15:02:19.772875812 +0000 UTC m=+0.021647504 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:02:19 compute-0 podman[340500]: 2026-01-20 15:02:19.88230237 +0000 UTC m=+0.131074092 container init 690ffa8ff18a8178fa94aa41900bff419dc02f11e95b163a500d7432e3d10ef2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_beaver, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 15:02:19 compute-0 podman[340500]: 2026-01-20 15:02:19.887960852 +0000 UTC m=+0.136732524 container start 690ffa8ff18a8178fa94aa41900bff419dc02f11e95b163a500d7432e3d10ef2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_beaver, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 20 15:02:19 compute-0 podman[340500]: 2026-01-20 15:02:19.890789468 +0000 UTC m=+0.139561200 container attach 690ffa8ff18a8178fa94aa41900bff419dc02f11e95b163a500d7432e3d10ef2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_beaver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 15:02:19 compute-0 dazzling_beaver[340517]: 167 167
Jan 20 15:02:19 compute-0 systemd[1]: libpod-690ffa8ff18a8178fa94aa41900bff419dc02f11e95b163a500d7432e3d10ef2.scope: Deactivated successfully.
Jan 20 15:02:19 compute-0 podman[340500]: 2026-01-20 15:02:19.895470144 +0000 UTC m=+0.144241816 container died 690ffa8ff18a8178fa94aa41900bff419dc02f11e95b163a500d7432e3d10ef2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_beaver, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 20 15:02:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-b4332ac5383fce4dfca00e18237f1d0ce813eb6c17c6b22c5a3b519fbcbd2e0d-merged.mount: Deactivated successfully.
Jan 20 15:02:19 compute-0 podman[340500]: 2026-01-20 15:02:19.928237847 +0000 UTC m=+0.177009519 container remove 690ffa8ff18a8178fa94aa41900bff419dc02f11e95b163a500d7432e3d10ef2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_beaver, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 20 15:02:19 compute-0 systemd[1]: libpod-conmon-690ffa8ff18a8178fa94aa41900bff419dc02f11e95b163a500d7432e3d10ef2.scope: Deactivated successfully.
Jan 20 15:02:20 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e333 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:02:20 compute-0 podman[340541]: 2026-01-20 15:02:20.098589595 +0000 UTC m=+0.038905049 container create 8b51272c5103311022b192e825f63da46573a44ba823b5027e134f181836266e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_sanderson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 20 15:02:20 compute-0 systemd[1]: Started libpod-conmon-8b51272c5103311022b192e825f63da46573a44ba823b5027e134f181836266e.scope.
Jan 20 15:02:20 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:02:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fc99925f258fb640a062e777146d771d668b64d970db4b5b703ddd39fa28cfa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 15:02:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fc99925f258fb640a062e777146d771d668b64d970db4b5b703ddd39fa28cfa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 15:02:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fc99925f258fb640a062e777146d771d668b64d970db4b5b703ddd39fa28cfa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 15:02:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fc99925f258fb640a062e777146d771d668b64d970db4b5b703ddd39fa28cfa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 15:02:20 compute-0 podman[340541]: 2026-01-20 15:02:20.174010477 +0000 UTC m=+0.114325961 container init 8b51272c5103311022b192e825f63da46573a44ba823b5027e134f181836266e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_sanderson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef)
Jan 20 15:02:20 compute-0 podman[340541]: 2026-01-20 15:02:20.081838994 +0000 UTC m=+0.022154468 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:02:20 compute-0 podman[340541]: 2026-01-20 15:02:20.181826028 +0000 UTC m=+0.122141482 container start 8b51272c5103311022b192e825f63da46573a44ba823b5027e134f181836266e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_sanderson, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 20 15:02:20 compute-0 podman[340541]: 2026-01-20 15:02:20.185184988 +0000 UTC m=+0.125500442 container attach 8b51272c5103311022b192e825f63da46573a44ba823b5027e134f181836266e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_sanderson, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 20 15:02:20 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2320: 321 pgs: 321 active+clean; 386 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 5.4 MiB/s rd, 5.7 MiB/s wr, 317 op/s
Jan 20 15:02:20 compute-0 trusting_sanderson[340557]: {
Jan 20 15:02:20 compute-0 trusting_sanderson[340557]:     "0": [
Jan 20 15:02:20 compute-0 trusting_sanderson[340557]:         {
Jan 20 15:02:20 compute-0 trusting_sanderson[340557]:             "devices": [
Jan 20 15:02:20 compute-0 trusting_sanderson[340557]:                 "/dev/loop3"
Jan 20 15:02:20 compute-0 trusting_sanderson[340557]:             ],
Jan 20 15:02:20 compute-0 trusting_sanderson[340557]:             "lv_name": "ceph_lv0",
Jan 20 15:02:20 compute-0 trusting_sanderson[340557]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 15:02:20 compute-0 trusting_sanderson[340557]:             "lv_size": "7511998464",
Jan 20 15:02:20 compute-0 trusting_sanderson[340557]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e399cf45-e6b6-5393-99f1-75c601d3f188,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afc3740a-ccee-46f8-b343-22648cc89376,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 20 15:02:20 compute-0 trusting_sanderson[340557]:             "lv_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 15:02:20 compute-0 trusting_sanderson[340557]:             "name": "ceph_lv0",
Jan 20 15:02:20 compute-0 trusting_sanderson[340557]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 15:02:20 compute-0 trusting_sanderson[340557]:             "tags": {
Jan 20 15:02:20 compute-0 trusting_sanderson[340557]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 15:02:20 compute-0 trusting_sanderson[340557]:                 "ceph.block_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 15:02:20 compute-0 trusting_sanderson[340557]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 15:02:20 compute-0 trusting_sanderson[340557]:                 "ceph.cluster_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 15:02:20 compute-0 trusting_sanderson[340557]:                 "ceph.cluster_name": "ceph",
Jan 20 15:02:20 compute-0 trusting_sanderson[340557]:                 "ceph.crush_device_class": "",
Jan 20 15:02:20 compute-0 trusting_sanderson[340557]:                 "ceph.encrypted": "0",
Jan 20 15:02:20 compute-0 trusting_sanderson[340557]:                 "ceph.osd_fsid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 15:02:20 compute-0 trusting_sanderson[340557]:                 "ceph.osd_id": "0",
Jan 20 15:02:20 compute-0 trusting_sanderson[340557]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 15:02:20 compute-0 trusting_sanderson[340557]:                 "ceph.type": "block",
Jan 20 15:02:20 compute-0 trusting_sanderson[340557]:                 "ceph.vdo": "0"
Jan 20 15:02:20 compute-0 trusting_sanderson[340557]:             },
Jan 20 15:02:20 compute-0 trusting_sanderson[340557]:             "type": "block",
Jan 20 15:02:20 compute-0 trusting_sanderson[340557]:             "vg_name": "ceph_vg0"
Jan 20 15:02:20 compute-0 trusting_sanderson[340557]:         }
Jan 20 15:02:20 compute-0 trusting_sanderson[340557]:     ]
Jan 20 15:02:20 compute-0 trusting_sanderson[340557]: }
Jan 20 15:02:20 compute-0 systemd[1]: libpod-8b51272c5103311022b192e825f63da46573a44ba823b5027e134f181836266e.scope: Deactivated successfully.
Jan 20 15:02:20 compute-0 podman[340541]: 2026-01-20 15:02:20.982790572 +0000 UTC m=+0.923106046 container died 8b51272c5103311022b192e825f63da46573a44ba823b5027e134f181836266e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_sanderson, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 20 15:02:21 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e333 do_prune osdmap full prune enabled
Jan 20 15:02:21 compute-0 ceph-mon[74360]: pgmap v2320: 321 pgs: 321 active+clean; 386 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 5.4 MiB/s rd, 5.7 MiB/s wr, 317 op/s
Jan 20 15:02:21 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e334 e334: 3 total, 3 up, 3 in
Jan 20 15:02:21 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e334: 3 total, 3 up, 3 in
Jan 20 15:02:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-7fc99925f258fb640a062e777146d771d668b64d970db4b5b703ddd39fa28cfa-merged.mount: Deactivated successfully.
Jan 20 15:02:21 compute-0 podman[340541]: 2026-01-20 15:02:21.440931023 +0000 UTC m=+1.381246477 container remove 8b51272c5103311022b192e825f63da46573a44ba823b5027e134f181836266e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_sanderson, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 20 15:02:21 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:02:21 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:02:21 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:02:21.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:02:21 compute-0 sudo[340435]: pam_unix(sudo:session): session closed for user root
Jan 20 15:02:21 compute-0 sudo[340579]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:02:21 compute-0 sudo[340579]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:02:21 compute-0 systemd[1]: libpod-conmon-8b51272c5103311022b192e825f63da46573a44ba823b5027e134f181836266e.scope: Deactivated successfully.
Jan 20 15:02:21 compute-0 sudo[340579]: pam_unix(sudo:session): session closed for user root
Jan 20 15:02:21 compute-0 sudo[340604]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:02:21 compute-0 sudo[340604]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:02:21 compute-0 sudo[340604]: pam_unix(sudo:session): session closed for user root
Jan 20 15:02:21 compute-0 sudo[340629]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:02:21 compute-0 sudo[340629]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:02:21 compute-0 sudo[340629]: pam_unix(sudo:session): session closed for user root
Jan 20 15:02:21 compute-0 sudo[340654]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- raw list --format json
Jan 20 15:02:21 compute-0 sudo[340654]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:02:21 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:02:21 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:02:21 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:02:21.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:02:22 compute-0 podman[340721]: 2026-01-20 15:02:22.06269472 +0000 UTC m=+0.047739906 container create 737466cba63be733c5b6cd4ae29819fccab0f0b7315b11f4f634a70b58584b72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_kowalevski, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 20 15:02:22 compute-0 systemd[1]: Started libpod-conmon-737466cba63be733c5b6cd4ae29819fccab0f0b7315b11f4f634a70b58584b72.scope.
Jan 20 15:02:22 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:02:22 compute-0 podman[340721]: 2026-01-20 15:02:22.042110056 +0000 UTC m=+0.027155272 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:02:22 compute-0 podman[340721]: 2026-01-20 15:02:22.149705994 +0000 UTC m=+0.134751230 container init 737466cba63be733c5b6cd4ae29819fccab0f0b7315b11f4f634a70b58584b72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_kowalevski, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 20 15:02:22 compute-0 podman[340721]: 2026-01-20 15:02:22.156524178 +0000 UTC m=+0.141569364 container start 737466cba63be733c5b6cd4ae29819fccab0f0b7315b11f4f634a70b58584b72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_kowalevski, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 15:02:22 compute-0 podman[340721]: 2026-01-20 15:02:22.159805826 +0000 UTC m=+0.144851052 container attach 737466cba63be733c5b6cd4ae29819fccab0f0b7315b11f4f634a70b58584b72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_kowalevski, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 20 15:02:22 compute-0 sharp_kowalevski[340738]: 167 167
Jan 20 15:02:22 compute-0 systemd[1]: libpod-737466cba63be733c5b6cd4ae29819fccab0f0b7315b11f4f634a70b58584b72.scope: Deactivated successfully.
Jan 20 15:02:22 compute-0 podman[340721]: 2026-01-20 15:02:22.163178957 +0000 UTC m=+0.148224153 container died 737466cba63be733c5b6cd4ae29819fccab0f0b7315b11f4f634a70b58584b72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_kowalevski, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 20 15:02:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-ece28f7288da8ccd73539aeae50541328203f670d83437966a1934a6aa6572f4-merged.mount: Deactivated successfully.
Jan 20 15:02:22 compute-0 podman[340721]: 2026-01-20 15:02:22.201557951 +0000 UTC m=+0.186603137 container remove 737466cba63be733c5b6cd4ae29819fccab0f0b7315b11f4f634a70b58584b72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_kowalevski, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 15:02:22 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2322: 321 pgs: 321 active+clean; 386 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 6.9 MiB/s rd, 1.1 MiB/s wr, 272 op/s
Jan 20 15:02:22 compute-0 systemd[1]: libpod-conmon-737466cba63be733c5b6cd4ae29819fccab0f0b7315b11f4f634a70b58584b72.scope: Deactivated successfully.
Jan 20 15:02:22 compute-0 podman[340762]: 2026-01-20 15:02:22.381619201 +0000 UTC m=+0.046321538 container create dcc3661cbd44209b863ce15a242b99434e305b4fe445b64a5a04d0315592f71f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_maxwell, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 15:02:22 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e334 do_prune osdmap full prune enabled
Jan 20 15:02:22 compute-0 ceph-mon[74360]: osdmap e334: 3 total, 3 up, 3 in
Jan 20 15:02:22 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e335 e335: 3 total, 3 up, 3 in
Jan 20 15:02:22 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e335: 3 total, 3 up, 3 in
Jan 20 15:02:22 compute-0 systemd[1]: Started libpod-conmon-dcc3661cbd44209b863ce15a242b99434e305b4fe445b64a5a04d0315592f71f.scope.
Jan 20 15:02:22 compute-0 podman[340762]: 2026-01-20 15:02:22.358153349 +0000 UTC m=+0.022855776 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:02:22 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:02:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee5d85398882749242adb347e0b3476d486c2bef1834358a9d2483ff5fcd5545/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 15:02:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee5d85398882749242adb347e0b3476d486c2bef1834358a9d2483ff5fcd5545/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 15:02:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee5d85398882749242adb347e0b3476d486c2bef1834358a9d2483ff5fcd5545/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 15:02:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee5d85398882749242adb347e0b3476d486c2bef1834358a9d2483ff5fcd5545/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 15:02:22 compute-0 podman[340762]: 2026-01-20 15:02:22.475742046 +0000 UTC m=+0.140444413 container init dcc3661cbd44209b863ce15a242b99434e305b4fe445b64a5a04d0315592f71f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_maxwell, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 15:02:22 compute-0 podman[340762]: 2026-01-20 15:02:22.485239462 +0000 UTC m=+0.149941799 container start dcc3661cbd44209b863ce15a242b99434e305b4fe445b64a5a04d0315592f71f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_maxwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 20 15:02:22 compute-0 podman[340762]: 2026-01-20 15:02:22.489670112 +0000 UTC m=+0.154372459 container attach dcc3661cbd44209b863ce15a242b99434e305b4fe445b64a5a04d0315592f71f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_maxwell, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 20 15:02:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:02:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:02:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:02:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:02:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:02:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:02:23 compute-0 mystifying_maxwell[340779]: {
Jan 20 15:02:23 compute-0 mystifying_maxwell[340779]:     "afc3740a-ccee-46f8-b343-22648cc89376": {
Jan 20 15:02:23 compute-0 mystifying_maxwell[340779]:         "ceph_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 15:02:23 compute-0 mystifying_maxwell[340779]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 20 15:02:23 compute-0 mystifying_maxwell[340779]:         "osd_id": 0,
Jan 20 15:02:23 compute-0 mystifying_maxwell[340779]:         "osd_uuid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 15:02:23 compute-0 mystifying_maxwell[340779]:         "type": "bluestore"
Jan 20 15:02:23 compute-0 mystifying_maxwell[340779]:     }
Jan 20 15:02:23 compute-0 mystifying_maxwell[340779]: }
Jan 20 15:02:23 compute-0 systemd[1]: libpod-dcc3661cbd44209b863ce15a242b99434e305b4fe445b64a5a04d0315592f71f.scope: Deactivated successfully.
Jan 20 15:02:23 compute-0 podman[340762]: 2026-01-20 15:02:23.343054018 +0000 UTC m=+1.007756355 container died dcc3661cbd44209b863ce15a242b99434e305b4fe445b64a5a04d0315592f71f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_maxwell, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 15:02:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-ee5d85398882749242adb347e0b3476d486c2bef1834358a9d2483ff5fcd5545-merged.mount: Deactivated successfully.
Jan 20 15:02:23 compute-0 podman[340762]: 2026-01-20 15:02:23.395817799 +0000 UTC m=+1.060520136 container remove dcc3661cbd44209b863ce15a242b99434e305b4fe445b64a5a04d0315592f71f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_maxwell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 20 15:02:23 compute-0 systemd[1]: libpod-conmon-dcc3661cbd44209b863ce15a242b99434e305b4fe445b64a5a04d0315592f71f.scope: Deactivated successfully.
Jan 20 15:02:23 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e335 do_prune osdmap full prune enabled
Jan 20 15:02:23 compute-0 ceph-mon[74360]: pgmap v2322: 321 pgs: 321 active+clean; 386 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 6.9 MiB/s rd, 1.1 MiB/s wr, 272 op/s
Jan 20 15:02:23 compute-0 ceph-mon[74360]: osdmap e335: 3 total, 3 up, 3 in
Jan 20 15:02:23 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/33524825' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:02:23 compute-0 sudo[340654]: pam_unix(sudo:session): session closed for user root
Jan 20 15:02:23 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 15:02:23 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e336 e336: 3 total, 3 up, 3 in
Jan 20 15:02:23 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e336: 3 total, 3 up, 3 in
Jan 20 15:02:23 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:02:23 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:02:23 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:02:23.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:02:23 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:02:23 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 15:02:23 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:02:23 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 747020eb-53c2-4882-8224-ba4dff4c1fc3 does not exist
Jan 20 15:02:23 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev a76452fa-8e6c-429f-833d-c81b0e44f5b2 does not exist
Jan 20 15:02:23 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 27980500-f196-4e35-a834-7990210b088f does not exist
Jan 20 15:02:23 compute-0 sudo[340815]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:02:23 compute-0 sudo[340815]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:02:23 compute-0 sudo[340815]: pam_unix(sudo:session): session closed for user root
Jan 20 15:02:23 compute-0 sudo[340840]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 15:02:23 compute-0 sudo[340840]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:02:23 compute-0 sudo[340840]: pam_unix(sudo:session): session closed for user root
Jan 20 15:02:23 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:02:23 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:02:23 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:02:23.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:02:24 compute-0 nova_compute[250018]: 2026-01-20 15:02:24.105 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 15:02:24 compute-0 nova_compute[250018]: 2026-01-20 15:02:24.107 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 15:02:24 compute-0 nova_compute[250018]: 2026-01-20 15:02:24.107 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Jan 20 15:02:24 compute-0 nova_compute[250018]: 2026-01-20 15:02:24.107 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 20 15:02:24 compute-0 nova_compute[250018]: 2026-01-20 15:02:24.155 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:02:24 compute-0 nova_compute[250018]: 2026-01-20 15:02:24.156 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 20 15:02:24 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2325: 321 pgs: 321 active+clean; 402 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 8.6 MiB/s rd, 3.0 MiB/s wr, 252 op/s
Jan 20 15:02:24 compute-0 ceph-osd[84815]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [P] New memtable created with log file: #50. Immutable memtables: 0.
Jan 20 15:02:24 compute-0 ceph-mon[74360]: osdmap e336: 3 total, 3 up, 3 in
Jan 20 15:02:24 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:02:24 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:02:24 compute-0 ceph-mon[74360]: pgmap v2325: 321 pgs: 321 active+clean; 402 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 8.6 MiB/s rd, 3.0 MiB/s wr, 252 op/s
Jan 20 15:02:25 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e336 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:02:25 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:02:25 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:02:25 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:02:25.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:02:25 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:02:25 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:02:25 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:02:25.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:02:26 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2326: 321 pgs: 321 active+clean; 405 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 9.3 MiB/s rd, 8.2 MiB/s wr, 294 op/s
Jan 20 15:02:27 compute-0 ceph-mon[74360]: pgmap v2326: 321 pgs: 321 active+clean; 405 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 9.3 MiB/s rd, 8.2 MiB/s wr, 294 op/s
Jan 20 15:02:27 compute-0 ovn_controller[148666]: 2026-01-20T15:02:27Z|00062|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d5:29:9f 10.100.0.13
Jan 20 15:02:27 compute-0 ovn_controller[148666]: 2026-01-20T15:02:27Z|00063|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d5:29:9f 10.100.0.13
Jan 20 15:02:27 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:02:27 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:02:27 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:02:27.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:02:27 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:02:27 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:02:27 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:02:27.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:02:28 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2327: 321 pgs: 321 active+clean; 405 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 7.0 MiB/s rd, 7.2 MiB/s wr, 212 op/s
Jan 20 15:02:28 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/841351583' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:02:29 compute-0 nova_compute[250018]: 2026-01-20 15:02:29.156 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 15:02:29 compute-0 nova_compute[250018]: 2026-01-20 15:02:29.158 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 15:02:29 compute-0 nova_compute[250018]: 2026-01-20 15:02:29.158 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Jan 20 15:02:29 compute-0 nova_compute[250018]: 2026-01-20 15:02:29.158 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 20 15:02:29 compute-0 nova_compute[250018]: 2026-01-20 15:02:29.202 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:02:29 compute-0 nova_compute[250018]: 2026-01-20 15:02:29.202 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 20 15:02:29 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:02:29 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:02:29 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:02:29.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:02:29 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:02:29 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:02:29 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:02:29.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:02:29 compute-0 ceph-mon[74360]: pgmap v2327: 321 pgs: 321 active+clean; 405 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 7.0 MiB/s rd, 7.2 MiB/s wr, 212 op/s
Jan 20 15:02:29 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/3390699459' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:02:30 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e336 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:02:30 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e336 do_prune osdmap full prune enabled
Jan 20 15:02:30 compute-0 nova_compute[250018]: 2026-01-20 15:02:30.123 250022 DEBUG oslo_concurrency.lockutils [None req-60030506-6074-4bd1-9874-992a2ec5d88c 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] Acquiring lock "363d71d6-a6f3-4145-9964-1d057a891bcd" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:02:30 compute-0 nova_compute[250018]: 2026-01-20 15:02:30.123 250022 DEBUG oslo_concurrency.lockutils [None req-60030506-6074-4bd1-9874-992a2ec5d88c 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] Lock "363d71d6-a6f3-4145-9964-1d057a891bcd" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:02:30 compute-0 nova_compute[250018]: 2026-01-20 15:02:30.123 250022 DEBUG oslo_concurrency.lockutils [None req-60030506-6074-4bd1-9874-992a2ec5d88c 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] Acquiring lock "363d71d6-a6f3-4145-9964-1d057a891bcd-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:02:30 compute-0 nova_compute[250018]: 2026-01-20 15:02:30.123 250022 DEBUG oslo_concurrency.lockutils [None req-60030506-6074-4bd1-9874-992a2ec5d88c 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] Lock "363d71d6-a6f3-4145-9964-1d057a891bcd-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:02:30 compute-0 nova_compute[250018]: 2026-01-20 15:02:30.124 250022 DEBUG oslo_concurrency.lockutils [None req-60030506-6074-4bd1-9874-992a2ec5d88c 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] Lock "363d71d6-a6f3-4145-9964-1d057a891bcd-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:02:30 compute-0 nova_compute[250018]: 2026-01-20 15:02:30.125 250022 INFO nova.compute.manager [None req-60030506-6074-4bd1-9874-992a2ec5d88c 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] [instance: 363d71d6-a6f3-4145-9964-1d057a891bcd] Terminating instance
Jan 20 15:02:30 compute-0 nova_compute[250018]: 2026-01-20 15:02:30.126 250022 DEBUG nova.compute.manager [None req-60030506-6074-4bd1-9874-992a2ec5d88c 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] [instance: 363d71d6-a6f3-4145-9964-1d057a891bcd] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 15:02:30 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2328: 321 pgs: 321 active+clean; 437 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 6.4 MiB/s rd, 11 MiB/s wr, 334 op/s
Jan 20 15:02:30 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e337 e337: 3 total, 3 up, 3 in
Jan 20 15:02:30 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e337: 3 total, 3 up, 3 in
Jan 20 15:02:30 compute-0 kernel: tap42fe52d6-2d (unregistering): left promiscuous mode
Jan 20 15:02:30 compute-0 NetworkManager[48960]: <info>  [1768921350.4566] device (tap42fe52d6-2d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 15:02:30 compute-0 nova_compute[250018]: 2026-01-20 15:02:30.464 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:02:30 compute-0 ovn_controller[148666]: 2026-01-20T15:02:30Z|00547|binding|INFO|Releasing lport 42fe52d6-2d12-468d-bf0a-7a1b391b6d17 from this chassis (sb_readonly=0)
Jan 20 15:02:30 compute-0 ovn_controller[148666]: 2026-01-20T15:02:30Z|00548|binding|INFO|Setting lport 42fe52d6-2d12-468d-bf0a-7a1b391b6d17 down in Southbound
Jan 20 15:02:30 compute-0 ovn_controller[148666]: 2026-01-20T15:02:30Z|00549|binding|INFO|Removing iface tap42fe52d6-2d ovn-installed in OVS
Jan 20 15:02:30 compute-0 nova_compute[250018]: 2026-01-20 15:02:30.466 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:02:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:02:30.470 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d5:29:9f 10.100.0.13'], port_security=['fa:16:3e:d5:29:9f 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '363d71d6-a6f3-4145-9964-1d057a891bcd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0296a21f-6ec4-43a7-8731-1d3692a5de4a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '654b3ce7b3644fc58f8dc9f60529320b', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ff1c5b6a-5ab6-401e-b333-7f359193e012', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6a3d5928-255d-4c0c-af70-f26be5196416, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=42fe52d6-2d12-468d-bf0a-7a1b391b6d17) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:02:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:02:30.472 160071 INFO neutron.agent.ovn.metadata.agent [-] Port 42fe52d6-2d12-468d-bf0a-7a1b391b6d17 in datapath 0296a21f-6ec4-43a7-8731-1d3692a5de4a unbound from our chassis
Jan 20 15:02:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:02:30.473 160071 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0296a21f-6ec4-43a7-8731-1d3692a5de4a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 15:02:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:02:30.475 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[ecbea841-9c6c-4c07-87d6-a494ce2d1b33]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:02:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:02:30.475 160071 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-0296a21f-6ec4-43a7-8731-1d3692a5de4a namespace which is not needed anymore
Jan 20 15:02:30 compute-0 nova_compute[250018]: 2026-01-20 15:02:30.482 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:02:30 compute-0 systemd[1]: machine-qemu\x2d70\x2dinstance\x2d00000098.scope: Deactivated successfully.
Jan 20 15:02:30 compute-0 systemd[1]: machine-qemu\x2d70\x2dinstance\x2d00000098.scope: Consumed 15.107s CPU time.
Jan 20 15:02:30 compute-0 systemd-machined[216401]: Machine qemu-70-instance-00000098 terminated.
Jan 20 15:02:30 compute-0 nova_compute[250018]: 2026-01-20 15:02:30.565 250022 INFO nova.virt.libvirt.driver [-] [instance: 363d71d6-a6f3-4145-9964-1d057a891bcd] Instance destroyed successfully.
Jan 20 15:02:30 compute-0 nova_compute[250018]: 2026-01-20 15:02:30.565 250022 DEBUG nova.objects.instance [None req-60030506-6074-4bd1-9874-992a2ec5d88c 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] Lazy-loading 'resources' on Instance uuid 363d71d6-a6f3-4145-9964-1d057a891bcd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:02:30 compute-0 nova_compute[250018]: 2026-01-20 15:02:30.585 250022 DEBUG nova.virt.libvirt.vif [None req-60030506-6074-4bd1-9874-992a2ec5d88c 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T15:02:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerMultinode-server-1263754624',display_name='tempest-TestServerMultinode-server-1263754624',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testservermultinode-server-1263754624',id=152,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-20T15:02:11Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='654b3ce7b3644fc58f8dc9f60529320b',ramdisk_id='',reservation_id='r-41ogmh9v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,admin,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestServerMultinode-1071973011',owner_user_name='tempest-TestServerMultinode-1071973011-project-admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T15:02:11Z,user_data=None,user_id='158563a99d4a420890aaa00b05c8bb57',uuid=363d71d6-a6f3-4145-9964-1d057a891bcd,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "42fe52d6-2d12-468d-bf0a-7a1b391b6d17", "address": "fa:16:3e:d5:29:9f", "network": {"id": "0296a21f-6ec4-43a7-8731-1d3692a5de4a", "bridge": "br-int", "label": "tempest-TestServerMultinode-1878354210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "908b5ba217ab458e8c9aa0e5a471c194", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap42fe52d6-2d", "ovs_interfaceid": "42fe52d6-2d12-468d-bf0a-7a1b391b6d17", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 15:02:30 compute-0 nova_compute[250018]: 2026-01-20 15:02:30.586 250022 DEBUG nova.network.os_vif_util [None req-60030506-6074-4bd1-9874-992a2ec5d88c 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] Converting VIF {"id": "42fe52d6-2d12-468d-bf0a-7a1b391b6d17", "address": "fa:16:3e:d5:29:9f", "network": {"id": "0296a21f-6ec4-43a7-8731-1d3692a5de4a", "bridge": "br-int", "label": "tempest-TestServerMultinode-1878354210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "908b5ba217ab458e8c9aa0e5a471c194", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap42fe52d6-2d", "ovs_interfaceid": "42fe52d6-2d12-468d-bf0a-7a1b391b6d17", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:02:30 compute-0 nova_compute[250018]: 2026-01-20 15:02:30.587 250022 DEBUG nova.network.os_vif_util [None req-60030506-6074-4bd1-9874-992a2ec5d88c 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d5:29:9f,bridge_name='br-int',has_traffic_filtering=True,id=42fe52d6-2d12-468d-bf0a-7a1b391b6d17,network=Network(0296a21f-6ec4-43a7-8731-1d3692a5de4a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap42fe52d6-2d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:02:30 compute-0 nova_compute[250018]: 2026-01-20 15:02:30.588 250022 DEBUG os_vif [None req-60030506-6074-4bd1-9874-992a2ec5d88c 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d5:29:9f,bridge_name='br-int',has_traffic_filtering=True,id=42fe52d6-2d12-468d-bf0a-7a1b391b6d17,network=Network(0296a21f-6ec4-43a7-8731-1d3692a5de4a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap42fe52d6-2d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 15:02:30 compute-0 nova_compute[250018]: 2026-01-20 15:02:30.590 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:02:30 compute-0 nova_compute[250018]: 2026-01-20 15:02:30.590 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap42fe52d6-2d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:02:30 compute-0 nova_compute[250018]: 2026-01-20 15:02:30.592 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:02:30 compute-0 nova_compute[250018]: 2026-01-20 15:02:30.594 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:02:30 compute-0 nova_compute[250018]: 2026-01-20 15:02:30.596 250022 INFO os_vif [None req-60030506-6074-4bd1-9874-992a2ec5d88c 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d5:29:9f,bridge_name='br-int',has_traffic_filtering=True,id=42fe52d6-2d12-468d-bf0a-7a1b391b6d17,network=Network(0296a21f-6ec4-43a7-8731-1d3692a5de4a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap42fe52d6-2d')
Jan 20 15:02:30 compute-0 nova_compute[250018]: 2026-01-20 15:02:30.666 250022 DEBUG nova.compute.manager [req-7718ee7d-0e38-4c31-9b66-6494302e083a req-c6e058c0-27c4-4e9c-ba24-37c68816d1d5 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 363d71d6-a6f3-4145-9964-1d057a891bcd] Received event network-vif-unplugged-42fe52d6-2d12-468d-bf0a-7a1b391b6d17 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:02:30 compute-0 nova_compute[250018]: 2026-01-20 15:02:30.666 250022 DEBUG oslo_concurrency.lockutils [req-7718ee7d-0e38-4c31-9b66-6494302e083a req-c6e058c0-27c4-4e9c-ba24-37c68816d1d5 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "363d71d6-a6f3-4145-9964-1d057a891bcd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:02:30 compute-0 nova_compute[250018]: 2026-01-20 15:02:30.667 250022 DEBUG oslo_concurrency.lockutils [req-7718ee7d-0e38-4c31-9b66-6494302e083a req-c6e058c0-27c4-4e9c-ba24-37c68816d1d5 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "363d71d6-a6f3-4145-9964-1d057a891bcd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:02:30 compute-0 nova_compute[250018]: 2026-01-20 15:02:30.667 250022 DEBUG oslo_concurrency.lockutils [req-7718ee7d-0e38-4c31-9b66-6494302e083a req-c6e058c0-27c4-4e9c-ba24-37c68816d1d5 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "363d71d6-a6f3-4145-9964-1d057a891bcd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:02:30 compute-0 nova_compute[250018]: 2026-01-20 15:02:30.667 250022 DEBUG nova.compute.manager [req-7718ee7d-0e38-4c31-9b66-6494302e083a req-c6e058c0-27c4-4e9c-ba24-37c68816d1d5 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 363d71d6-a6f3-4145-9964-1d057a891bcd] No waiting events found dispatching network-vif-unplugged-42fe52d6-2d12-468d-bf0a-7a1b391b6d17 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:02:30 compute-0 nova_compute[250018]: 2026-01-20 15:02:30.668 250022 DEBUG nova.compute.manager [req-7718ee7d-0e38-4c31-9b66-6494302e083a req-c6e058c0-27c4-4e9c-ba24-37c68816d1d5 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 363d71d6-a6f3-4145-9964-1d057a891bcd] Received event network-vif-unplugged-42fe52d6-2d12-468d-bf0a-7a1b391b6d17 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 20 15:02:30 compute-0 neutron-haproxy-ovnmeta-0296a21f-6ec4-43a7-8731-1d3692a5de4a[339438]: [NOTICE]   (339442) : haproxy version is 2.8.14-c23fe91
Jan 20 15:02:30 compute-0 neutron-haproxy-ovnmeta-0296a21f-6ec4-43a7-8731-1d3692a5de4a[339438]: [NOTICE]   (339442) : path to executable is /usr/sbin/haproxy
Jan 20 15:02:30 compute-0 neutron-haproxy-ovnmeta-0296a21f-6ec4-43a7-8731-1d3692a5de4a[339438]: [WARNING]  (339442) : Exiting Master process...
Jan 20 15:02:30 compute-0 neutron-haproxy-ovnmeta-0296a21f-6ec4-43a7-8731-1d3692a5de4a[339438]: [ALERT]    (339442) : Current worker (339444) exited with code 143 (Terminated)
Jan 20 15:02:30 compute-0 neutron-haproxy-ovnmeta-0296a21f-6ec4-43a7-8731-1d3692a5de4a[339438]: [WARNING]  (339442) : All workers exited. Exiting... (0)
Jan 20 15:02:30 compute-0 systemd[1]: libpod-b42ab89e0a271202bf1197715d4db3529458090b9fb1297bd5fc38636092e954.scope: Deactivated successfully.
Jan 20 15:02:30 compute-0 podman[340897]: 2026-01-20 15:02:30.711003142 +0000 UTC m=+0.147502265 container died b42ab89e0a271202bf1197715d4db3529458090b9fb1297bd5fc38636092e954 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0296a21f-6ec4-43a7-8731-1d3692a5de4a, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 20 15:02:30 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b42ab89e0a271202bf1197715d4db3529458090b9fb1297bd5fc38636092e954-userdata-shm.mount: Deactivated successfully.
Jan 20 15:02:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-1ddcc6fa04685ef5c301ea9d5a9159c45df5ee38552a81ac8386957a4c8dce05-merged.mount: Deactivated successfully.
Jan 20 15:02:30 compute-0 podman[340897]: 2026-01-20 15:02:30.761592394 +0000 UTC m=+0.198091517 container cleanup b42ab89e0a271202bf1197715d4db3529458090b9fb1297bd5fc38636092e954 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0296a21f-6ec4-43a7-8731-1d3692a5de4a, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 15:02:30 compute-0 systemd[1]: libpod-conmon-b42ab89e0a271202bf1197715d4db3529458090b9fb1297bd5fc38636092e954.scope: Deactivated successfully.
Jan 20 15:02:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:02:30.772 160071 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:02:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:02:30.772 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:02:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:02:30.773 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:02:31 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/3652761978' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:02:31 compute-0 ceph-mon[74360]: pgmap v2328: 321 pgs: 321 active+clean; 437 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 6.4 MiB/s rd, 11 MiB/s wr, 334 op/s
Jan 20 15:02:31 compute-0 ceph-mon[74360]: osdmap e337: 3 total, 3 up, 3 in
Jan 20 15:02:31 compute-0 podman[340952]: 2026-01-20 15:02:31.038625277 +0000 UTC m=+0.256790479 container remove b42ab89e0a271202bf1197715d4db3529458090b9fb1297bd5fc38636092e954 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0296a21f-6ec4-43a7-8731-1d3692a5de4a, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202)
Jan 20 15:02:31 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:02:31.044 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[05823dc2-bdfb-41ab-b72a-018b26c73b04]: (4, ('Tue Jan 20 03:02:30 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-0296a21f-6ec4-43a7-8731-1d3692a5de4a (b42ab89e0a271202bf1197715d4db3529458090b9fb1297bd5fc38636092e954)\nb42ab89e0a271202bf1197715d4db3529458090b9fb1297bd5fc38636092e954\nTue Jan 20 03:02:30 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-0296a21f-6ec4-43a7-8731-1d3692a5de4a (b42ab89e0a271202bf1197715d4db3529458090b9fb1297bd5fc38636092e954)\nb42ab89e0a271202bf1197715d4db3529458090b9fb1297bd5fc38636092e954\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:02:31 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:02:31.046 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[8379271d-3d7e-459c-852d-c6aece808b34]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:02:31 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:02:31.047 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0296a21f-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:02:31 compute-0 nova_compute[250018]: 2026-01-20 15:02:31.048 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:02:31 compute-0 kernel: tap0296a21f-60: left promiscuous mode
Jan 20 15:02:31 compute-0 nova_compute[250018]: 2026-01-20 15:02:31.070 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:02:31 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:02:31.074 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[7d66e8eb-3887-46ab-85ac-b5ce83a87a33]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:02:31 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:02:31.097 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[6553fe11-7bbd-4669-a87d-760a073edc29]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:02:31 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:02:31.099 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[54d77e79-a270-4201-8411-7a09e0b92965]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:02:31 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:02:31.118 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[f158156c-321c-4d7d-8406-587fd0786719]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 727108, 'reachable_time': 31514, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 340965, 'error': None, 'target': 'ovnmeta-0296a21f-6ec4-43a7-8731-1d3692a5de4a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:02:31 compute-0 systemd[1]: run-netns-ovnmeta\x2d0296a21f\x2d6ec4\x2d43a7\x2d8731\x2d1d3692a5de4a.mount: Deactivated successfully.
Jan 20 15:02:31 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:02:31.121 160517 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-0296a21f-6ec4-43a7-8731-1d3692a5de4a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 15:02:31 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:02:31.122 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[ae02e13d-85a4-48d1-8609-d361b388b331]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:02:31 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:02:31 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:02:31 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:02:31.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:02:31 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:02:31 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:02:31 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:02:31.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:02:32 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2330: 321 pgs: 321 active+clean; 451 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.9 MiB/s rd, 11 MiB/s wr, 327 op/s
Jan 20 15:02:32 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/3110069501' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:02:32 compute-0 nova_compute[250018]: 2026-01-20 15:02:32.772 250022 DEBUG nova.compute.manager [req-fc6912dd-1727-49ff-ad4e-0dc87b6b8dd7 req-b8005f3a-02a3-4cda-b798-54fe768cb749 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 363d71d6-a6f3-4145-9964-1d057a891bcd] Received event network-vif-plugged-42fe52d6-2d12-468d-bf0a-7a1b391b6d17 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:02:32 compute-0 nova_compute[250018]: 2026-01-20 15:02:32.773 250022 DEBUG oslo_concurrency.lockutils [req-fc6912dd-1727-49ff-ad4e-0dc87b6b8dd7 req-b8005f3a-02a3-4cda-b798-54fe768cb749 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "363d71d6-a6f3-4145-9964-1d057a891bcd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:02:32 compute-0 nova_compute[250018]: 2026-01-20 15:02:32.773 250022 DEBUG oslo_concurrency.lockutils [req-fc6912dd-1727-49ff-ad4e-0dc87b6b8dd7 req-b8005f3a-02a3-4cda-b798-54fe768cb749 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "363d71d6-a6f3-4145-9964-1d057a891bcd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:02:32 compute-0 nova_compute[250018]: 2026-01-20 15:02:32.773 250022 DEBUG oslo_concurrency.lockutils [req-fc6912dd-1727-49ff-ad4e-0dc87b6b8dd7 req-b8005f3a-02a3-4cda-b798-54fe768cb749 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "363d71d6-a6f3-4145-9964-1d057a891bcd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:02:32 compute-0 nova_compute[250018]: 2026-01-20 15:02:32.774 250022 DEBUG nova.compute.manager [req-fc6912dd-1727-49ff-ad4e-0dc87b6b8dd7 req-b8005f3a-02a3-4cda-b798-54fe768cb749 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 363d71d6-a6f3-4145-9964-1d057a891bcd] No waiting events found dispatching network-vif-plugged-42fe52d6-2d12-468d-bf0a-7a1b391b6d17 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:02:32 compute-0 nova_compute[250018]: 2026-01-20 15:02:32.774 250022 WARNING nova.compute.manager [req-fc6912dd-1727-49ff-ad4e-0dc87b6b8dd7 req-b8005f3a-02a3-4cda-b798-54fe768cb749 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 363d71d6-a6f3-4145-9964-1d057a891bcd] Received unexpected event network-vif-plugged-42fe52d6-2d12-468d-bf0a-7a1b391b6d17 for instance with vm_state active and task_state deleting.
Jan 20 15:02:33 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:02:33 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:02:33 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:02:33.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:02:33 compute-0 ceph-mon[74360]: pgmap v2330: 321 pgs: 321 active+clean; 451 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.9 MiB/s rd, 11 MiB/s wr, 327 op/s
Jan 20 15:02:33 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/2716342833' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:02:33 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:02:33 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:02:33 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:02:33.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:02:33 compute-0 nova_compute[250018]: 2026-01-20 15:02:33.991 250022 INFO nova.virt.libvirt.driver [None req-60030506-6074-4bd1-9874-992a2ec5d88c 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] [instance: 363d71d6-a6f3-4145-9964-1d057a891bcd] Deleting instance files /var/lib/nova/instances/363d71d6-a6f3-4145-9964-1d057a891bcd_del
Jan 20 15:02:33 compute-0 nova_compute[250018]: 2026-01-20 15:02:33.993 250022 INFO nova.virt.libvirt.driver [None req-60030506-6074-4bd1-9874-992a2ec5d88c 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] [instance: 363d71d6-a6f3-4145-9964-1d057a891bcd] Deletion of /var/lib/nova/instances/363d71d6-a6f3-4145-9964-1d057a891bcd_del complete
Jan 20 15:02:34 compute-0 nova_compute[250018]: 2026-01-20 15:02:34.050 250022 INFO nova.compute.manager [None req-60030506-6074-4bd1-9874-992a2ec5d88c 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] [instance: 363d71d6-a6f3-4145-9964-1d057a891bcd] Took 3.92 seconds to destroy the instance on the hypervisor.
Jan 20 15:02:34 compute-0 nova_compute[250018]: 2026-01-20 15:02:34.051 250022 DEBUG oslo.service.loopingcall [None req-60030506-6074-4bd1-9874-992a2ec5d88c 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 15:02:34 compute-0 nova_compute[250018]: 2026-01-20 15:02:34.052 250022 DEBUG nova.compute.manager [-] [instance: 363d71d6-a6f3-4145-9964-1d057a891bcd] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 15:02:34 compute-0 nova_compute[250018]: 2026-01-20 15:02:34.052 250022 DEBUG nova.network.neutron [-] [instance: 363d71d6-a6f3-4145-9964-1d057a891bcd] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 15:02:34 compute-0 nova_compute[250018]: 2026-01-20 15:02:34.204 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:02:34 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2331: 321 pgs: 321 active+clean; 435 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 7.6 MiB/s wr, 258 op/s
Jan 20 15:02:34 compute-0 ceph-mon[74360]: pgmap v2331: 321 pgs: 321 active+clean; 435 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 7.6 MiB/s wr, 258 op/s
Jan 20 15:02:34 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:02:34.941 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=48, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=47) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:02:34 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:02:34.942 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 15:02:34 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:02:34.942 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=367c1a2c-b16a-4828-ab5a-626bb50023b4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '48'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:02:34 compute-0 nova_compute[250018]: 2026-01-20 15:02:34.966 250022 DEBUG nova.network.neutron [-] [instance: 363d71d6-a6f3-4145-9964-1d057a891bcd] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:02:34 compute-0 nova_compute[250018]: 2026-01-20 15:02:34.981 250022 INFO nova.compute.manager [-] [instance: 363d71d6-a6f3-4145-9964-1d057a891bcd] Took 0.93 seconds to deallocate network for instance.
Jan 20 15:02:34 compute-0 nova_compute[250018]: 2026-01-20 15:02:34.993 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:02:35 compute-0 nova_compute[250018]: 2026-01-20 15:02:35.021 250022 DEBUG oslo_concurrency.lockutils [None req-60030506-6074-4bd1-9874-992a2ec5d88c 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:02:35 compute-0 nova_compute[250018]: 2026-01-20 15:02:35.021 250022 DEBUG oslo_concurrency.lockutils [None req-60030506-6074-4bd1-9874-992a2ec5d88c 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:02:35 compute-0 nova_compute[250018]: 2026-01-20 15:02:35.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:02:35 compute-0 nova_compute[250018]: 2026-01-20 15:02:35.052 250022 DEBUG nova.compute.manager [req-d0a92d34-a6b9-495b-9e10-3a69bbf9df6e req-0d1660d2-1b4f-40ca-8f6b-b42c9cf634fc 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 363d71d6-a6f3-4145-9964-1d057a891bcd] Received event network-vif-deleted-42fe52d6-2d12-468d-bf0a-7a1b391b6d17 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:02:35 compute-0 nova_compute[250018]: 2026-01-20 15:02:35.085 250022 DEBUG oslo_concurrency.processutils [None req-60030506-6074-4bd1-9874-992a2ec5d88c 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:02:35 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e337 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:02:35 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:02:35 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:02:35 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:02:35.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:02:35 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:02:35 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4115469443' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:02:35 compute-0 nova_compute[250018]: 2026-01-20 15:02:35.518 250022 DEBUG oslo_concurrency.processutils [None req-60030506-6074-4bd1-9874-992a2ec5d88c 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:02:35 compute-0 nova_compute[250018]: 2026-01-20 15:02:35.525 250022 DEBUG nova.compute.provider_tree [None req-60030506-6074-4bd1-9874-992a2ec5d88c 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:02:35 compute-0 nova_compute[250018]: 2026-01-20 15:02:35.549 250022 DEBUG nova.scheduler.client.report [None req-60030506-6074-4bd1-9874-992a2ec5d88c 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:02:35 compute-0 nova_compute[250018]: 2026-01-20 15:02:35.592 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:02:35 compute-0 sudo[340992]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:02:35 compute-0 sudo[340992]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:02:35 compute-0 sudo[340992]: pam_unix(sudo:session): session closed for user root
Jan 20 15:02:35 compute-0 nova_compute[250018]: 2026-01-20 15:02:35.681 250022 DEBUG oslo_concurrency.lockutils [None req-60030506-6074-4bd1-9874-992a2ec5d88c 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.660s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:02:35 compute-0 sudo[341017]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:02:35 compute-0 sudo[341017]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:02:35 compute-0 nova_compute[250018]: 2026-01-20 15:02:35.713 250022 INFO nova.scheduler.client.report [None req-60030506-6074-4bd1-9874-992a2ec5d88c 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] Deleted allocations for instance 363d71d6-a6f3-4145-9964-1d057a891bcd
Jan 20 15:02:35 compute-0 sudo[341017]: pam_unix(sudo:session): session closed for user root
Jan 20 15:02:35 compute-0 nova_compute[250018]: 2026-01-20 15:02:35.784 250022 DEBUG oslo_concurrency.lockutils [None req-60030506-6074-4bd1-9874-992a2ec5d88c 158563a99d4a420890aaa00b05c8bb57 654b3ce7b3644fc58f8dc9f60529320b - - default default] Lock "363d71d6-a6f3-4145-9964-1d057a891bcd" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.661s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:02:35 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/1771894953' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:02:35 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/4115469443' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:02:35 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:02:35 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:02:35 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:02:35.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:02:36 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2332: 321 pgs: 321 active+clean; 372 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 4.4 MiB/s wr, 239 op/s
Jan 20 15:02:36 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/3456037100' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:02:36 compute-0 ceph-mon[74360]: pgmap v2332: 321 pgs: 321 active+clean; 372 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 4.4 MiB/s wr, 239 op/s
Jan 20 15:02:37 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:02:37 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:02:37 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:02:37.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:02:37 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:02:37 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:02:37 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:02:37.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:02:38 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2333: 321 pgs: 321 active+clean; 372 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 4.4 MiB/s wr, 239 op/s
Jan 20 15:02:39 compute-0 nova_compute[250018]: 2026-01-20 15:02:39.206 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:02:39 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:02:39 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:02:39 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:02:39.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:02:39 compute-0 ceph-mon[74360]: pgmap v2333: 321 pgs: 321 active+clean; 372 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 4.4 MiB/s wr, 239 op/s
Jan 20 15:02:39 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:02:39 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:02:39 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:02:39.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:02:40 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e337 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:02:40 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2334: 321 pgs: 321 active+clean; 372 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 880 KiB/s wr, 181 op/s
Jan 20 15:02:40 compute-0 nova_compute[250018]: 2026-01-20 15:02:40.595 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:02:40 compute-0 ceph-mon[74360]: pgmap v2334: 321 pgs: 321 active+clean; 372 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 880 KiB/s wr, 181 op/s
Jan 20 15:02:41 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:02:41 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:02:41 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:02:41.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:02:41 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:02:41 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:02:41 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:02:41.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:02:42 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2335: 321 pgs: 321 active+clean; 372 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 749 KiB/s wr, 176 op/s
Jan 20 15:02:42 compute-0 ovn_controller[148666]: 2026-01-20T15:02:42Z|00550|binding|INFO|Releasing lport a8628d9e-196f-4b84-89fd-d3a41792b8a0 from this chassis (sb_readonly=0)
Jan 20 15:02:42 compute-0 nova_compute[250018]: 2026-01-20 15:02:42.657 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:02:43 compute-0 ceph-mon[74360]: pgmap v2335: 321 pgs: 321 active+clean; 372 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 749 KiB/s wr, 176 op/s
Jan 20 15:02:43 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:02:43 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:02:43 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:02:43.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:02:43 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:02:43 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:02:43 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:02:43.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:02:44 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2336: 321 pgs: 321 active+clean; 372 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 29 KiB/s wr, 184 op/s
Jan 20 15:02:44 compute-0 nova_compute[250018]: 2026-01-20 15:02:44.250 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:02:45 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e337 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:02:45 compute-0 ceph-mon[74360]: pgmap v2336: 321 pgs: 321 active+clean; 372 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 29 KiB/s wr, 184 op/s
Jan 20 15:02:45 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:02:45 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:02:45 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:02:45.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:02:45 compute-0 podman[341048]: 2026-01-20 15:02:45.502006454 +0000 UTC m=+0.053765830 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 20 15:02:45 compute-0 podman[341047]: 2026-01-20 15:02:45.536136523 +0000 UTC m=+0.088692390 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller)
Jan 20 15:02:45 compute-0 nova_compute[250018]: 2026-01-20 15:02:45.564 250022 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768921350.5631266, 363d71d6-a6f3-4145-9964-1d057a891bcd => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:02:45 compute-0 nova_compute[250018]: 2026-01-20 15:02:45.565 250022 INFO nova.compute.manager [-] [instance: 363d71d6-a6f3-4145-9964-1d057a891bcd] VM Stopped (Lifecycle Event)
Jan 20 15:02:45 compute-0 nova_compute[250018]: 2026-01-20 15:02:45.597 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:02:45 compute-0 nova_compute[250018]: 2026-01-20 15:02:45.599 250022 DEBUG nova.compute.manager [None req-d305c7b2-3a15-48a4-afb8-278d7292bfda - - - - - -] [instance: 363d71d6-a6f3-4145-9964-1d057a891bcd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:02:45 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:02:45 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:02:45 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:02:45.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:02:46 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2337: 321 pgs: 321 active+clean; 375 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 124 KiB/s wr, 190 op/s
Jan 20 15:02:47 compute-0 ceph-mon[74360]: pgmap v2337: 321 pgs: 321 active+clean; 375 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 124 KiB/s wr, 190 op/s
Jan 20 15:02:47 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:02:47 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:02:47 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:02:47.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:02:47 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:02:47 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:02:47 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:02:47.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:02:48 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2338: 321 pgs: 321 active+clean; 375 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 122 KiB/s wr, 119 op/s
Jan 20 15:02:48 compute-0 ceph-mon[74360]: pgmap v2338: 321 pgs: 321 active+clean; 375 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 122 KiB/s wr, 119 op/s
Jan 20 15:02:49 compute-0 nova_compute[250018]: 2026-01-20 15:02:49.297 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:02:49 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:02:49 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:02:49 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:02:49.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:02:49 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:02:49 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:02:49 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:02:49.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:02:50 compute-0 nova_compute[250018]: 2026-01-20 15:02:50.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:02:50 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e337 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:02:50 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2339: 321 pgs: 321 active+clean; 395 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 1.8 MiB/s wr, 167 op/s
Jan 20 15:02:50 compute-0 nova_compute[250018]: 2026-01-20 15:02:50.599 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:02:51 compute-0 nova_compute[250018]: 2026-01-20 15:02:51.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:02:51 compute-0 nova_compute[250018]: 2026-01-20 15:02:51.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:02:51 compute-0 nova_compute[250018]: 2026-01-20 15:02:51.078 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:02:51 compute-0 nova_compute[250018]: 2026-01-20 15:02:51.079 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:02:51 compute-0 nova_compute[250018]: 2026-01-20 15:02:51.079 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:02:51 compute-0 nova_compute[250018]: 2026-01-20 15:02:51.080 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 15:02:51 compute-0 nova_compute[250018]: 2026-01-20 15:02:51.080 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:02:51 compute-0 ceph-mon[74360]: pgmap v2339: 321 pgs: 321 active+clean; 395 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 1.8 MiB/s wr, 167 op/s
Jan 20 15:02:51 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:02:51 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:02:51 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:02:51.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:02:51 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:02:51 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2485144396' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:02:51 compute-0 nova_compute[250018]: 2026-01-20 15:02:51.569 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:02:51 compute-0 nova_compute[250018]: 2026-01-20 15:02:51.654 250022 DEBUG nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] skipping disk for instance-00000095 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 15:02:51 compute-0 nova_compute[250018]: 2026-01-20 15:02:51.654 250022 DEBUG nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] skipping disk for instance-00000095 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 15:02:51 compute-0 nova_compute[250018]: 2026-01-20 15:02:51.822 250022 WARNING nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:02:51 compute-0 nova_compute[250018]: 2026-01-20 15:02:51.823 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4054MB free_disk=20.854991912841797GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 15:02:51 compute-0 nova_compute[250018]: 2026-01-20 15:02:51.823 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:02:51 compute-0 nova_compute[250018]: 2026-01-20 15:02:51.823 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:02:51 compute-0 nova_compute[250018]: 2026-01-20 15:02:51.900 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Instance b3f961d2-e73f-49bf-b141-6505e77ad9ac actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 15:02:51 compute-0 nova_compute[250018]: 2026-01-20 15:02:51.900 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 15:02:51 compute-0 nova_compute[250018]: 2026-01-20 15:02:51.900 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 15:02:51 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:02:51 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:02:51 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:02:51.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:02:51 compute-0 nova_compute[250018]: 2026-01-20 15:02:51.936 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:02:52 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2340: 321 pgs: 321 active+clean; 405 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.2 MiB/s wr, 127 op/s
Jan 20 15:02:52 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:02:52 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2220083062' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:02:52 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/2485144396' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:02:52 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/51400553' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:02:52 compute-0 nova_compute[250018]: 2026-01-20 15:02:52.412 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:02:52 compute-0 nova_compute[250018]: 2026-01-20 15:02:52.419 250022 DEBUG nova.compute.provider_tree [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:02:52 compute-0 nova_compute[250018]: 2026-01-20 15:02:52.439 250022 DEBUG nova.scheduler.client.report [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:02:52 compute-0 nova_compute[250018]: 2026-01-20 15:02:52.463 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 15:02:52 compute-0 nova_compute[250018]: 2026-01-20 15:02:52.464 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.641s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:02:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:02:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:02:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:02:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:02:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:02:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:02:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Optimize plan auto_2026-01-20_15:02:52
Jan 20 15:02:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 15:02:52 compute-0 ceph-mgr[74653]: [balancer INFO root] do_upmap
Jan 20 15:02:52 compute-0 ceph-mgr[74653]: [balancer INFO root] pools ['cephfs.cephfs.meta', '.rgw.root', 'cephfs.cephfs.data', 'vms', 'volumes', 'backups', 'default.rgw.meta', '.mgr', 'default.rgw.log', 'images', 'default.rgw.control']
Jan 20 15:02:52 compute-0 ceph-mgr[74653]: [balancer INFO root] prepared 0/10 changes
Jan 20 15:02:53 compute-0 nova_compute[250018]: 2026-01-20 15:02:53.459 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:02:53 compute-0 nova_compute[250018]: 2026-01-20 15:02:53.460 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:02:53 compute-0 nova_compute[250018]: 2026-01-20 15:02:53.460 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 15:02:53 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:02:53 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:02:53 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:02:53.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:02:53 compute-0 ceph-mon[74360]: pgmap v2340: 321 pgs: 321 active+clean; 405 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.2 MiB/s wr, 127 op/s
Jan 20 15:02:53 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/2220083062' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:02:53 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/3186788731' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:02:53 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/2057302993' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:02:53 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:02:53 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:02:53 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:02:53.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:02:54 compute-0 nova_compute[250018]: 2026-01-20 15:02:54.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:02:54 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2341: 321 pgs: 321 active+clean; 409 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.4 MiB/s wr, 121 op/s
Jan 20 15:02:54 compute-0 nova_compute[250018]: 2026-01-20 15:02:54.332 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:02:54 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/4019807425' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:02:54 compute-0 ceph-mon[74360]: pgmap v2341: 321 pgs: 321 active+clean; 409 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.4 MiB/s wr, 121 op/s
Jan 20 15:02:55 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e337 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:02:55 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:02:55 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:02:55 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:02:55.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:02:55 compute-0 nova_compute[250018]: 2026-01-20 15:02:55.601 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:02:55 compute-0 sudo[341142]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:02:55 compute-0 sudo[341142]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:02:55 compute-0 sudo[341142]: pam_unix(sudo:session): session closed for user root
Jan 20 15:02:55 compute-0 sudo[341168]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:02:55 compute-0 sudo[341168]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:02:55 compute-0 sudo[341168]: pam_unix(sudo:session): session closed for user root
Jan 20 15:02:55 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:02:55 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:02:55 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:02:55.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:02:56 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2342: 321 pgs: 321 active+clean; 419 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 2.6 MiB/s wr, 131 op/s
Jan 20 15:02:57 compute-0 nova_compute[250018]: 2026-01-20 15:02:57.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:02:57 compute-0 nova_compute[250018]: 2026-01-20 15:02:57.051 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 15:02:57 compute-0 nova_compute[250018]: 2026-01-20 15:02:57.052 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 15:02:57 compute-0 nova_compute[250018]: 2026-01-20 15:02:57.302 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "refresh_cache-b3f961d2-e73f-49bf-b141-6505e77ad9ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:02:57 compute-0 nova_compute[250018]: 2026-01-20 15:02:57.303 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquired lock "refresh_cache-b3f961d2-e73f-49bf-b141-6505e77ad9ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:02:57 compute-0 nova_compute[250018]: 2026-01-20 15:02:57.303 250022 DEBUG nova.network.neutron [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] [instance: b3f961d2-e73f-49bf-b141-6505e77ad9ac] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 20 15:02:57 compute-0 nova_compute[250018]: 2026-01-20 15:02:57.304 250022 DEBUG nova.objects.instance [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b3f961d2-e73f-49bf-b141-6505e77ad9ac obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:02:57 compute-0 ceph-mon[74360]: pgmap v2342: 321 pgs: 321 active+clean; 419 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 2.6 MiB/s wr, 131 op/s
Jan 20 15:02:57 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:02:57 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:02:57 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:02:57.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:02:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 15:02:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 15:02:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 15:02:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 15:02:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 15:02:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 15:02:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 15:02:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 15:02:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 15:02:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 15:02:57 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:02:57 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:02:57 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:02:57.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:02:58 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2343: 321 pgs: 321 active+clean; 419 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 2.5 MiB/s wr, 114 op/s
Jan 20 15:02:58 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/2084056353' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:02:59 compute-0 nova_compute[250018]: 2026-01-20 15:02:59.334 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:02:59 compute-0 ceph-mon[74360]: pgmap v2343: 321 pgs: 321 active+clean; 419 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 2.5 MiB/s wr, 114 op/s
Jan 20 15:02:59 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:02:59 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:02:59 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:02:59.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:02:59 compute-0 nova_compute[250018]: 2026-01-20 15:02:59.637 250022 DEBUG nova.network.neutron [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] [instance: b3f961d2-e73f-49bf-b141-6505e77ad9ac] Updating instance_info_cache with network_info: [{"id": "07baaccf-06f0-4af0-a04a-9638078c313f", "address": "fa:16:3e:12:81:5e", "network": {"id": "671e28d0-0b9e-41e0-b5e0-db1ccd4717ec", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-884777184-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63555e5851564db08c6429231d264f2c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07baaccf-06", "ovs_interfaceid": "07baaccf-06f0-4af0-a04a-9638078c313f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:02:59 compute-0 nova_compute[250018]: 2026-01-20 15:02:59.656 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Releasing lock "refresh_cache-b3f961d2-e73f-49bf-b141-6505e77ad9ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:02:59 compute-0 nova_compute[250018]: 2026-01-20 15:02:59.656 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] [instance: b3f961d2-e73f-49bf-b141-6505e77ad9ac] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 20 15:02:59 compute-0 nova_compute[250018]: 2026-01-20 15:02:59.657 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:02:59 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:02:59 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:02:59 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:02:59.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:03:00 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e337 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:03:00 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2344: 321 pgs: 321 active+clean; 423 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 2.6 MiB/s wr, 120 op/s
Jan 20 15:03:00 compute-0 ceph-mon[74360]: pgmap v2344: 321 pgs: 321 active+clean; 423 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 2.6 MiB/s wr, 120 op/s
Jan 20 15:03:00 compute-0 nova_compute[250018]: 2026-01-20 15:03:00.603 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:03:01 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:03:01 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:03:01 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:03:01.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:03:01 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:03:01 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:03:01 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:03:01.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:03:02 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2345: 321 pgs: 321 active+clean; 423 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 909 KiB/s wr, 75 op/s
Jan 20 15:03:03 compute-0 ceph-mon[74360]: pgmap v2345: 321 pgs: 321 active+clean; 423 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 909 KiB/s wr, 75 op/s
Jan 20 15:03:03 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/2658445181' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:03:03 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:03:03 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:03:03 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:03:03.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:03:03 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:03:03 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:03:03 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:03:03.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:03:04 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2346: 321 pgs: 321 active+clean; 423 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 577 KiB/s wr, 71 op/s
Jan 20 15:03:04 compute-0 nova_compute[250018]: 2026-01-20 15:03:04.335 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:03:04 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/2503648110' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:03:05 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e337 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:03:05 compute-0 ceph-mon[74360]: pgmap v2346: 321 pgs: 321 active+clean; 423 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 577 KiB/s wr, 71 op/s
Jan 20 15:03:05 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:03:05 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:03:05 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:03:05.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:03:05 compute-0 nova_compute[250018]: 2026-01-20 15:03:05.604 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:03:05 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:03:05 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:03:05 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:03:05.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:03:06 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2347: 321 pgs: 321 active+clean; 423 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 587 KiB/s rd, 333 KiB/s wr, 80 op/s
Jan 20 15:03:07 compute-0 ceph-mon[74360]: pgmap v2347: 321 pgs: 321 active+clean; 423 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 587 KiB/s rd, 333 KiB/s wr, 80 op/s
Jan 20 15:03:07 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/4071570443' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:03:07 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:03:07 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:03:07 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:03:07.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:03:07 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:03:07 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:03:07 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:03:07.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:03:08 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2348: 321 pgs: 321 active+clean; 423 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 241 KiB/s rd, 94 KiB/s wr, 45 op/s
Jan 20 15:03:08 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/424647466' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:03:08 compute-0 ceph-mon[74360]: pgmap v2348: 321 pgs: 321 active+clean; 423 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 241 KiB/s rd, 94 KiB/s wr, 45 op/s
Jan 20 15:03:08 compute-0 nova_compute[250018]: 2026-01-20 15:03:08.651 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:03:09 compute-0 nova_compute[250018]: 2026-01-20 15:03:09.368 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:03:09 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:03:09 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:03:09 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:03:09.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:03:09 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:03:09 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:03:09 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:03:09.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:03:10 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e337 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:03:10 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2349: 321 pgs: 321 active+clean; 423 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 94 KiB/s wr, 146 op/s
Jan 20 15:03:10 compute-0 nova_compute[250018]: 2026-01-20 15:03:10.605 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:03:10 compute-0 ceph-mon[74360]: pgmap v2349: 321 pgs: 321 active+clean; 423 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 94 KiB/s wr, 146 op/s
Jan 20 15:03:11 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:03:11 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:03:11 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:03:11.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:03:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 15:03:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:03:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 20 15:03:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:03:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006829212613995906 of space, bias 1.0, pg target 2.048763784198772 quantized to 32 (current 32)
Jan 20 15:03:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:03:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.00027081297692164525 quantized to 32 (current 32)
Jan 20 15:03:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:03:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 15:03:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:03:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.005058023043458031 of space, bias 1.0, pg target 1.5072908669504932 quantized to 32 (current 32)
Jan 20 15:03:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:03:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001727386934673367 quantized to 16 (current 32)
Jan 20 15:03:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:03:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 15:03:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:03:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021592336683417087 quantized to 32 (current 32)
Jan 20 15:03:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:03:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018353486180904522 quantized to 32 (current 32)
Jan 20 15:03:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:03:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 15:03:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:03:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043184673366834174 quantized to 32 (current 32)
Jan 20 15:03:11 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:03:11 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:03:11 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:03:11.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:03:12 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2350: 321 pgs: 321 active+clean; 423 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 36 KiB/s wr, 188 op/s
Jan 20 15:03:13 compute-0 ceph-mon[74360]: pgmap v2350: 321 pgs: 321 active+clean; 423 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 36 KiB/s wr, 188 op/s
Jan 20 15:03:13 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:03:13 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:03:13 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:03:13.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:03:13 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 20 15:03:13 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/605902267' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:03:13 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 20 15:03:13 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/605902267' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:03:13 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:03:13 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:03:13 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:03:13.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:03:14 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2351: 321 pgs: 321 active+clean; 423 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 28 KiB/s wr, 204 op/s
Jan 20 15:03:14 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/605902267' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:03:14 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/605902267' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:03:14 compute-0 nova_compute[250018]: 2026-01-20 15:03:14.369 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:03:15 compute-0 nova_compute[250018]: 2026-01-20 15:03:15.078 250022 DEBUG oslo_concurrency.lockutils [None req-66d7582b-4ffe-4889-b374-86002374d94e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Acquiring lock "83bbf40a-f44e-42fe-b09a-0e635a302f6d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:03:15 compute-0 nova_compute[250018]: 2026-01-20 15:03:15.079 250022 DEBUG oslo_concurrency.lockutils [None req-66d7582b-4ffe-4889-b374-86002374d94e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Lock "83bbf40a-f44e-42fe-b09a-0e635a302f6d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:03:15 compute-0 nova_compute[250018]: 2026-01-20 15:03:15.106 250022 DEBUG nova.compute.manager [None req-66d7582b-4ffe-4889-b374-86002374d94e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: 83bbf40a-f44e-42fe-b09a-0e635a302f6d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 15:03:15 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e337 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:03:15 compute-0 nova_compute[250018]: 2026-01-20 15:03:15.219 250022 DEBUG oslo_concurrency.lockutils [None req-66d7582b-4ffe-4889-b374-86002374d94e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:03:15 compute-0 nova_compute[250018]: 2026-01-20 15:03:15.220 250022 DEBUG oslo_concurrency.lockutils [None req-66d7582b-4ffe-4889-b374-86002374d94e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:03:15 compute-0 nova_compute[250018]: 2026-01-20 15:03:15.226 250022 DEBUG nova.virt.hardware [None req-66d7582b-4ffe-4889-b374-86002374d94e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 15:03:15 compute-0 nova_compute[250018]: 2026-01-20 15:03:15.226 250022 INFO nova.compute.claims [None req-66d7582b-4ffe-4889-b374-86002374d94e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: 83bbf40a-f44e-42fe-b09a-0e635a302f6d] Claim successful on node compute-0.ctlplane.example.com
Jan 20 15:03:15 compute-0 ceph-mon[74360]: pgmap v2351: 321 pgs: 321 active+clean; 423 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 28 KiB/s wr, 204 op/s
Jan 20 15:03:15 compute-0 nova_compute[250018]: 2026-01-20 15:03:15.475 250022 DEBUG oslo_concurrency.processutils [None req-66d7582b-4ffe-4889-b374-86002374d94e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:03:15 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:03:15 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:03:15 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:03:15.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:03:15 compute-0 nova_compute[250018]: 2026-01-20 15:03:15.607 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:03:15 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:03:15 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:03:15 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:03:15.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:03:15 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:03:15 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3936023232' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:03:15 compute-0 sudo[341223]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:03:15 compute-0 sudo[341223]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:03:15 compute-0 sudo[341223]: pam_unix(sudo:session): session closed for user root
Jan 20 15:03:15 compute-0 nova_compute[250018]: 2026-01-20 15:03:15.979 250022 DEBUG oslo_concurrency.processutils [None req-66d7582b-4ffe-4889-b374-86002374d94e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:03:15 compute-0 nova_compute[250018]: 2026-01-20 15:03:15.985 250022 DEBUG nova.compute.provider_tree [None req-66d7582b-4ffe-4889-b374-86002374d94e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:03:16 compute-0 nova_compute[250018]: 2026-01-20 15:03:16.015 250022 DEBUG nova.scheduler.client.report [None req-66d7582b-4ffe-4889-b374-86002374d94e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:03:16 compute-0 sudo[341255]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:03:16 compute-0 nova_compute[250018]: 2026-01-20 15:03:16.046 250022 DEBUG oslo_concurrency.lockutils [None req-66d7582b-4ffe-4889-b374-86002374d94e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.827s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:03:16 compute-0 sudo[341255]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:03:16 compute-0 nova_compute[250018]: 2026-01-20 15:03:16.047 250022 DEBUG nova.compute.manager [None req-66d7582b-4ffe-4889-b374-86002374d94e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: 83bbf40a-f44e-42fe-b09a-0e635a302f6d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 15:03:16 compute-0 sudo[341255]: pam_unix(sudo:session): session closed for user root
Jan 20 15:03:16 compute-0 podman[341250]: 2026-01-20 15:03:16.069655825 +0000 UTC m=+0.081532597 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3)
Jan 20 15:03:16 compute-0 podman[341247]: 2026-01-20 15:03:16.090512506 +0000 UTC m=+0.102967394 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 20 15:03:16 compute-0 nova_compute[250018]: 2026-01-20 15:03:16.097 250022 DEBUG nova.compute.manager [None req-66d7582b-4ffe-4889-b374-86002374d94e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: 83bbf40a-f44e-42fe-b09a-0e635a302f6d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 20 15:03:16 compute-0 nova_compute[250018]: 2026-01-20 15:03:16.098 250022 DEBUG nova.network.neutron [None req-66d7582b-4ffe-4889-b374-86002374d94e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: 83bbf40a-f44e-42fe-b09a-0e635a302f6d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 15:03:16 compute-0 nova_compute[250018]: 2026-01-20 15:03:16.116 250022 INFO nova.virt.libvirt.driver [None req-66d7582b-4ffe-4889-b374-86002374d94e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: 83bbf40a-f44e-42fe-b09a-0e635a302f6d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 15:03:16 compute-0 nova_compute[250018]: 2026-01-20 15:03:16.132 250022 DEBUG nova.compute.manager [None req-66d7582b-4ffe-4889-b374-86002374d94e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: 83bbf40a-f44e-42fe-b09a-0e635a302f6d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 15:03:16 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2352: 321 pgs: 321 active+clean; 426 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 213 KiB/s wr, 197 op/s
Jan 20 15:03:16 compute-0 nova_compute[250018]: 2026-01-20 15:03:16.261 250022 DEBUG nova.compute.manager [None req-66d7582b-4ffe-4889-b374-86002374d94e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: 83bbf40a-f44e-42fe-b09a-0e635a302f6d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 15:03:16 compute-0 nova_compute[250018]: 2026-01-20 15:03:16.262 250022 DEBUG nova.virt.libvirt.driver [None req-66d7582b-4ffe-4889-b374-86002374d94e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: 83bbf40a-f44e-42fe-b09a-0e635a302f6d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 15:03:16 compute-0 nova_compute[250018]: 2026-01-20 15:03:16.263 250022 INFO nova.virt.libvirt.driver [None req-66d7582b-4ffe-4889-b374-86002374d94e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: 83bbf40a-f44e-42fe-b09a-0e635a302f6d] Creating image(s)
Jan 20 15:03:16 compute-0 nova_compute[250018]: 2026-01-20 15:03:16.287 250022 DEBUG nova.storage.rbd_utils [None req-66d7582b-4ffe-4889-b374-86002374d94e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] rbd image 83bbf40a-f44e-42fe-b09a-0e635a302f6d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:03:16 compute-0 nova_compute[250018]: 2026-01-20 15:03:16.312 250022 DEBUG nova.storage.rbd_utils [None req-66d7582b-4ffe-4889-b374-86002374d94e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] rbd image 83bbf40a-f44e-42fe-b09a-0e635a302f6d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:03:16 compute-0 nova_compute[250018]: 2026-01-20 15:03:16.337 250022 DEBUG nova.storage.rbd_utils [None req-66d7582b-4ffe-4889-b374-86002374d94e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] rbd image 83bbf40a-f44e-42fe-b09a-0e635a302f6d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:03:16 compute-0 nova_compute[250018]: 2026-01-20 15:03:16.340 250022 DEBUG oslo_concurrency.processutils [None req-66d7582b-4ffe-4889-b374-86002374d94e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:03:16 compute-0 nova_compute[250018]: 2026-01-20 15:03:16.415 250022 DEBUG oslo_concurrency.processutils [None req-66d7582b-4ffe-4889-b374-86002374d94e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:03:16 compute-0 nova_compute[250018]: 2026-01-20 15:03:16.416 250022 DEBUG oslo_concurrency.lockutils [None req-66d7582b-4ffe-4889-b374-86002374d94e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Acquiring lock "82d5c1918fd7c974214c7a48c1793a7a82560462" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:03:16 compute-0 nova_compute[250018]: 2026-01-20 15:03:16.417 250022 DEBUG oslo_concurrency.lockutils [None req-66d7582b-4ffe-4889-b374-86002374d94e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:03:16 compute-0 nova_compute[250018]: 2026-01-20 15:03:16.417 250022 DEBUG oslo_concurrency.lockutils [None req-66d7582b-4ffe-4889-b374-86002374d94e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:03:16 compute-0 nova_compute[250018]: 2026-01-20 15:03:16.441 250022 DEBUG nova.storage.rbd_utils [None req-66d7582b-4ffe-4889-b374-86002374d94e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] rbd image 83bbf40a-f44e-42fe-b09a-0e635a302f6d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:03:16 compute-0 nova_compute[250018]: 2026-01-20 15:03:16.444 250022 DEBUG oslo_concurrency.processutils [None req-66d7582b-4ffe-4889-b374-86002374d94e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 83bbf40a-f44e-42fe-b09a-0e635a302f6d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:03:16 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3936023232' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:03:16 compute-0 ceph-mon[74360]: pgmap v2352: 321 pgs: 321 active+clean; 426 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 213 KiB/s wr, 197 op/s
Jan 20 15:03:16 compute-0 nova_compute[250018]: 2026-01-20 15:03:16.629 250022 DEBUG nova.policy [None req-66d7582b-4ffe-4889-b374-86002374d94e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2446e8399b344b29986c1aaf8bf73adf', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '63555e5851564db08c6429231d264f2c', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 20 15:03:16 compute-0 nova_compute[250018]: 2026-01-20 15:03:16.732 250022 DEBUG oslo_concurrency.processutils [None req-66d7582b-4ffe-4889-b374-86002374d94e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 83bbf40a-f44e-42fe-b09a-0e635a302f6d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.288s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:03:16 compute-0 nova_compute[250018]: 2026-01-20 15:03:16.803 250022 DEBUG nova.storage.rbd_utils [None req-66d7582b-4ffe-4889-b374-86002374d94e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] resizing rbd image 83bbf40a-f44e-42fe-b09a-0e635a302f6d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 20 15:03:16 compute-0 nova_compute[250018]: 2026-01-20 15:03:16.908 250022 DEBUG nova.objects.instance [None req-66d7582b-4ffe-4889-b374-86002374d94e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Lazy-loading 'migration_context' on Instance uuid 83bbf40a-f44e-42fe-b09a-0e635a302f6d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:03:17 compute-0 nova_compute[250018]: 2026-01-20 15:03:17.050 250022 DEBUG nova.virt.libvirt.driver [None req-66d7582b-4ffe-4889-b374-86002374d94e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: 83bbf40a-f44e-42fe-b09a-0e635a302f6d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 15:03:17 compute-0 nova_compute[250018]: 2026-01-20 15:03:17.050 250022 DEBUG nova.virt.libvirt.driver [None req-66d7582b-4ffe-4889-b374-86002374d94e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: 83bbf40a-f44e-42fe-b09a-0e635a302f6d] Ensure instance console log exists: /var/lib/nova/instances/83bbf40a-f44e-42fe-b09a-0e635a302f6d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 15:03:17 compute-0 nova_compute[250018]: 2026-01-20 15:03:17.051 250022 DEBUG oslo_concurrency.lockutils [None req-66d7582b-4ffe-4889-b374-86002374d94e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:03:17 compute-0 nova_compute[250018]: 2026-01-20 15:03:17.051 250022 DEBUG oslo_concurrency.lockutils [None req-66d7582b-4ffe-4889-b374-86002374d94e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:03:17 compute-0 nova_compute[250018]: 2026-01-20 15:03:17.051 250022 DEBUG oslo_concurrency.lockutils [None req-66d7582b-4ffe-4889-b374-86002374d94e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:03:17 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e337 do_prune osdmap full prune enabled
Jan 20 15:03:17 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e338 e338: 3 total, 3 up, 3 in
Jan 20 15:03:17 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:03:17 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:03:17 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:03:17.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:03:17 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e338: 3 total, 3 up, 3 in
Jan 20 15:03:17 compute-0 nova_compute[250018]: 2026-01-20 15:03:17.849 250022 DEBUG nova.network.neutron [None req-66d7582b-4ffe-4889-b374-86002374d94e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: 83bbf40a-f44e-42fe-b09a-0e635a302f6d] Successfully created port: d0fce58e-a22e-40e2-9a2a-bab6b9a4ea73 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 20 15:03:17 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:03:17 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:03:17 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:03:17.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:03:18 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2354: 321 pgs: 321 active+clean; 426 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 236 KiB/s wr, 205 op/s
Jan 20 15:03:18 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e338 do_prune osdmap full prune enabled
Jan 20 15:03:18 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e339 e339: 3 total, 3 up, 3 in
Jan 20 15:03:18 compute-0 ceph-mon[74360]: osdmap e338: 3 total, 3 up, 3 in
Jan 20 15:03:18 compute-0 ceph-mon[74360]: pgmap v2354: 321 pgs: 321 active+clean; 426 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 236 KiB/s wr, 205 op/s
Jan 20 15:03:18 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e339: 3 total, 3 up, 3 in
Jan 20 15:03:19 compute-0 nova_compute[250018]: 2026-01-20 15:03:19.042 250022 DEBUG nova.network.neutron [None req-66d7582b-4ffe-4889-b374-86002374d94e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: 83bbf40a-f44e-42fe-b09a-0e635a302f6d] Successfully updated port: d0fce58e-a22e-40e2-9a2a-bab6b9a4ea73 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 20 15:03:19 compute-0 nova_compute[250018]: 2026-01-20 15:03:19.069 250022 DEBUG oslo_concurrency.lockutils [None req-66d7582b-4ffe-4889-b374-86002374d94e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Acquiring lock "refresh_cache-83bbf40a-f44e-42fe-b09a-0e635a302f6d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:03:19 compute-0 nova_compute[250018]: 2026-01-20 15:03:19.069 250022 DEBUG oslo_concurrency.lockutils [None req-66d7582b-4ffe-4889-b374-86002374d94e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Acquired lock "refresh_cache-83bbf40a-f44e-42fe-b09a-0e635a302f6d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:03:19 compute-0 nova_compute[250018]: 2026-01-20 15:03:19.069 250022 DEBUG nova.network.neutron [None req-66d7582b-4ffe-4889-b374-86002374d94e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: 83bbf40a-f44e-42fe-b09a-0e635a302f6d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 15:03:19 compute-0 nova_compute[250018]: 2026-01-20 15:03:19.296 250022 DEBUG nova.compute.manager [req-42272ebd-1c03-4b3d-8701-079fb9bc402b req-23ac437e-14b3-4f7d-9948-22801b243479 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 83bbf40a-f44e-42fe-b09a-0e635a302f6d] Received event network-changed-d0fce58e-a22e-40e2-9a2a-bab6b9a4ea73 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:03:19 compute-0 nova_compute[250018]: 2026-01-20 15:03:19.297 250022 DEBUG nova.compute.manager [req-42272ebd-1c03-4b3d-8701-079fb9bc402b req-23ac437e-14b3-4f7d-9948-22801b243479 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 83bbf40a-f44e-42fe-b09a-0e635a302f6d] Refreshing instance network info cache due to event network-changed-d0fce58e-a22e-40e2-9a2a-bab6b9a4ea73. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 15:03:19 compute-0 nova_compute[250018]: 2026-01-20 15:03:19.297 250022 DEBUG oslo_concurrency.lockutils [req-42272ebd-1c03-4b3d-8701-079fb9bc402b req-23ac437e-14b3-4f7d-9948-22801b243479 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-83bbf40a-f44e-42fe-b09a-0e635a302f6d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:03:19 compute-0 nova_compute[250018]: 2026-01-20 15:03:19.376 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:03:19 compute-0 nova_compute[250018]: 2026-01-20 15:03:19.452 250022 DEBUG nova.network.neutron [None req-66d7582b-4ffe-4889-b374-86002374d94e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: 83bbf40a-f44e-42fe-b09a-0e635a302f6d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 15:03:19 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:03:19 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:03:19 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:03:19.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:03:19 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e339 do_prune osdmap full prune enabled
Jan 20 15:03:19 compute-0 ceph-mon[74360]: osdmap e339: 3 total, 3 up, 3 in
Jan 20 15:03:19 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e340 e340: 3 total, 3 up, 3 in
Jan 20 15:03:19 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e340: 3 total, 3 up, 3 in
Jan 20 15:03:19 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:03:19 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:03:19 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:03:19.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:03:20 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e340 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:03:20 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2357: 321 pgs: 2 active+clean+snaptrim, 4 active+clean+snaptrim_wait, 315 active+clean; 514 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 6.7 MiB/s rd, 11 MiB/s wr, 118 op/s
Jan 20 15:03:20 compute-0 nova_compute[250018]: 2026-01-20 15:03:20.609 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:03:20 compute-0 ceph-mon[74360]: osdmap e340: 3 total, 3 up, 3 in
Jan 20 15:03:20 compute-0 ceph-mon[74360]: pgmap v2357: 321 pgs: 2 active+clean+snaptrim, 4 active+clean+snaptrim_wait, 315 active+clean; 514 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 6.7 MiB/s rd, 11 MiB/s wr, 118 op/s
Jan 20 15:03:20 compute-0 nova_compute[250018]: 2026-01-20 15:03:20.652 250022 DEBUG nova.network.neutron [None req-66d7582b-4ffe-4889-b374-86002374d94e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: 83bbf40a-f44e-42fe-b09a-0e635a302f6d] Updating instance_info_cache with network_info: [{"id": "d0fce58e-a22e-40e2-9a2a-bab6b9a4ea73", "address": "fa:16:3e:70:7d:d6", "network": {"id": "671e28d0-0b9e-41e0-b5e0-db1ccd4717ec", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-884777184-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63555e5851564db08c6429231d264f2c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0fce58e-a2", "ovs_interfaceid": "d0fce58e-a22e-40e2-9a2a-bab6b9a4ea73", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:03:20 compute-0 nova_compute[250018]: 2026-01-20 15:03:20.682 250022 DEBUG oslo_concurrency.lockutils [None req-66d7582b-4ffe-4889-b374-86002374d94e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Releasing lock "refresh_cache-83bbf40a-f44e-42fe-b09a-0e635a302f6d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:03:20 compute-0 nova_compute[250018]: 2026-01-20 15:03:20.683 250022 DEBUG nova.compute.manager [None req-66d7582b-4ffe-4889-b374-86002374d94e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: 83bbf40a-f44e-42fe-b09a-0e635a302f6d] Instance network_info: |[{"id": "d0fce58e-a22e-40e2-9a2a-bab6b9a4ea73", "address": "fa:16:3e:70:7d:d6", "network": {"id": "671e28d0-0b9e-41e0-b5e0-db1ccd4717ec", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-884777184-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63555e5851564db08c6429231d264f2c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0fce58e-a2", "ovs_interfaceid": "d0fce58e-a22e-40e2-9a2a-bab6b9a4ea73", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 20 15:03:20 compute-0 nova_compute[250018]: 2026-01-20 15:03:20.683 250022 DEBUG oslo_concurrency.lockutils [req-42272ebd-1c03-4b3d-8701-079fb9bc402b req-23ac437e-14b3-4f7d-9948-22801b243479 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-83bbf40a-f44e-42fe-b09a-0e635a302f6d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:03:20 compute-0 nova_compute[250018]: 2026-01-20 15:03:20.683 250022 DEBUG nova.network.neutron [req-42272ebd-1c03-4b3d-8701-079fb9bc402b req-23ac437e-14b3-4f7d-9948-22801b243479 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 83bbf40a-f44e-42fe-b09a-0e635a302f6d] Refreshing network info cache for port d0fce58e-a22e-40e2-9a2a-bab6b9a4ea73 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 15:03:20 compute-0 nova_compute[250018]: 2026-01-20 15:03:20.686 250022 DEBUG nova.virt.libvirt.driver [None req-66d7582b-4ffe-4889-b374-86002374d94e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: 83bbf40a-f44e-42fe-b09a-0e635a302f6d] Start _get_guest_xml network_info=[{"id": "d0fce58e-a22e-40e2-9a2a-bab6b9a4ea73", "address": "fa:16:3e:70:7d:d6", "network": {"id": "671e28d0-0b9e-41e0-b5e0-db1ccd4717ec", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-884777184-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63555e5851564db08c6429231d264f2c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0fce58e-a2", "ovs_interfaceid": "d0fce58e-a22e-40e2-9a2a-bab6b9a4ea73", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encrypted': False, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'guest_format': None, 'size': 0, 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 15:03:20 compute-0 nova_compute[250018]: 2026-01-20 15:03:20.691 250022 WARNING nova.virt.libvirt.driver [None req-66d7582b-4ffe-4889-b374-86002374d94e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:03:20 compute-0 nova_compute[250018]: 2026-01-20 15:03:20.696 250022 DEBUG nova.virt.libvirt.host [None req-66d7582b-4ffe-4889-b374-86002374d94e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 15:03:20 compute-0 nova_compute[250018]: 2026-01-20 15:03:20.697 250022 DEBUG nova.virt.libvirt.host [None req-66d7582b-4ffe-4889-b374-86002374d94e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 15:03:20 compute-0 nova_compute[250018]: 2026-01-20 15:03:20.704 250022 DEBUG nova.virt.libvirt.host [None req-66d7582b-4ffe-4889-b374-86002374d94e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 15:03:20 compute-0 nova_compute[250018]: 2026-01-20 15:03:20.704 250022 DEBUG nova.virt.libvirt.host [None req-66d7582b-4ffe-4889-b374-86002374d94e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 15:03:20 compute-0 nova_compute[250018]: 2026-01-20 15:03:20.706 250022 DEBUG nova.virt.libvirt.driver [None req-66d7582b-4ffe-4889-b374-86002374d94e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 15:03:20 compute-0 nova_compute[250018]: 2026-01-20 15:03:20.706 250022 DEBUG nova.virt.hardware [None req-66d7582b-4ffe-4889-b374-86002374d94e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 15:03:20 compute-0 nova_compute[250018]: 2026-01-20 15:03:20.707 250022 DEBUG nova.virt.hardware [None req-66d7582b-4ffe-4889-b374-86002374d94e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 15:03:20 compute-0 nova_compute[250018]: 2026-01-20 15:03:20.707 250022 DEBUG nova.virt.hardware [None req-66d7582b-4ffe-4889-b374-86002374d94e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 15:03:20 compute-0 nova_compute[250018]: 2026-01-20 15:03:20.707 250022 DEBUG nova.virt.hardware [None req-66d7582b-4ffe-4889-b374-86002374d94e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 15:03:20 compute-0 nova_compute[250018]: 2026-01-20 15:03:20.708 250022 DEBUG nova.virt.hardware [None req-66d7582b-4ffe-4889-b374-86002374d94e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 15:03:20 compute-0 nova_compute[250018]: 2026-01-20 15:03:20.708 250022 DEBUG nova.virt.hardware [None req-66d7582b-4ffe-4889-b374-86002374d94e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 15:03:20 compute-0 nova_compute[250018]: 2026-01-20 15:03:20.708 250022 DEBUG nova.virt.hardware [None req-66d7582b-4ffe-4889-b374-86002374d94e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 15:03:20 compute-0 nova_compute[250018]: 2026-01-20 15:03:20.709 250022 DEBUG nova.virt.hardware [None req-66d7582b-4ffe-4889-b374-86002374d94e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 15:03:20 compute-0 nova_compute[250018]: 2026-01-20 15:03:20.709 250022 DEBUG nova.virt.hardware [None req-66d7582b-4ffe-4889-b374-86002374d94e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 15:03:20 compute-0 nova_compute[250018]: 2026-01-20 15:03:20.709 250022 DEBUG nova.virt.hardware [None req-66d7582b-4ffe-4889-b374-86002374d94e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 15:03:20 compute-0 nova_compute[250018]: 2026-01-20 15:03:20.710 250022 DEBUG nova.virt.hardware [None req-66d7582b-4ffe-4889-b374-86002374d94e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 15:03:20 compute-0 nova_compute[250018]: 2026-01-20 15:03:20.713 250022 DEBUG oslo_concurrency.processutils [None req-66d7582b-4ffe-4889-b374-86002374d94e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:03:21 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 15:03:21 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3281668790' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:03:21 compute-0 nova_compute[250018]: 2026-01-20 15:03:21.142 250022 DEBUG oslo_concurrency.processutils [None req-66d7582b-4ffe-4889-b374-86002374d94e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:03:21 compute-0 nova_compute[250018]: 2026-01-20 15:03:21.168 250022 DEBUG nova.storage.rbd_utils [None req-66d7582b-4ffe-4889-b374-86002374d94e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] rbd image 83bbf40a-f44e-42fe-b09a-0e635a302f6d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:03:21 compute-0 nova_compute[250018]: 2026-01-20 15:03:21.171 250022 DEBUG oslo_concurrency.processutils [None req-66d7582b-4ffe-4889-b374-86002374d94e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:03:21 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:03:21 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:03:21 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:03:21.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:03:21 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 15:03:21 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3406533712' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:03:21 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3281668790' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:03:21 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3406533712' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:03:21 compute-0 nova_compute[250018]: 2026-01-20 15:03:21.660 250022 DEBUG oslo_concurrency.processutils [None req-66d7582b-4ffe-4889-b374-86002374d94e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:03:21 compute-0 nova_compute[250018]: 2026-01-20 15:03:21.664 250022 DEBUG nova.virt.libvirt.vif [None req-66d7582b-4ffe-4889-b374-86002374d94e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T15:03:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-ServerBootFromVolumeStableRescueTest-server-550119680',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverbootfromvolumestablerescuetest-server-550119680',id=157,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='63555e5851564db08c6429231d264f2c',ramdisk_id='',reservation_id='r-4zf0x6i0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerBootFromVolumeStableRescueTest-1871371328',owner_user_name='tempest-ServerBootFromVolumeStableRescueTest-1871371328-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T15:03:16Z,user_data=None,user_id='2446e8399b344b29986c1aaf8bf73adf',uuid=83bbf40a-f44e-42fe-b09a-0e635a302f6d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d0fce58e-a22e-40e2-9a2a-bab6b9a4ea73", "address": "fa:16:3e:70:7d:d6", "network": {"id": "671e28d0-0b9e-41e0-b5e0-db1ccd4717ec", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-884777184-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63555e5851564db08c6429231d264f2c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0fce58e-a2", "ovs_interfaceid": "d0fce58e-a22e-40e2-9a2a-bab6b9a4ea73", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 15:03:21 compute-0 nova_compute[250018]: 2026-01-20 15:03:21.665 250022 DEBUG nova.network.os_vif_util [None req-66d7582b-4ffe-4889-b374-86002374d94e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Converting VIF {"id": "d0fce58e-a22e-40e2-9a2a-bab6b9a4ea73", "address": "fa:16:3e:70:7d:d6", "network": {"id": "671e28d0-0b9e-41e0-b5e0-db1ccd4717ec", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-884777184-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63555e5851564db08c6429231d264f2c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0fce58e-a2", "ovs_interfaceid": "d0fce58e-a22e-40e2-9a2a-bab6b9a4ea73", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:03:21 compute-0 nova_compute[250018]: 2026-01-20 15:03:21.667 250022 DEBUG nova.network.os_vif_util [None req-66d7582b-4ffe-4889-b374-86002374d94e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:70:7d:d6,bridge_name='br-int',has_traffic_filtering=True,id=d0fce58e-a22e-40e2-9a2a-bab6b9a4ea73,network=Network(671e28d0-0b9e-41e0-b5e0-db1ccd4717ec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd0fce58e-a2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:03:21 compute-0 nova_compute[250018]: 2026-01-20 15:03:21.669 250022 DEBUG nova.objects.instance [None req-66d7582b-4ffe-4889-b374-86002374d94e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Lazy-loading 'pci_devices' on Instance uuid 83bbf40a-f44e-42fe-b09a-0e635a302f6d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:03:21 compute-0 nova_compute[250018]: 2026-01-20 15:03:21.721 250022 DEBUG nova.virt.libvirt.driver [None req-66d7582b-4ffe-4889-b374-86002374d94e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: 83bbf40a-f44e-42fe-b09a-0e635a302f6d] End _get_guest_xml xml=<domain type="kvm">
Jan 20 15:03:21 compute-0 nova_compute[250018]:   <uuid>83bbf40a-f44e-42fe-b09a-0e635a302f6d</uuid>
Jan 20 15:03:21 compute-0 nova_compute[250018]:   <name>instance-0000009d</name>
Jan 20 15:03:21 compute-0 nova_compute[250018]:   <memory>131072</memory>
Jan 20 15:03:21 compute-0 nova_compute[250018]:   <vcpu>1</vcpu>
Jan 20 15:03:21 compute-0 nova_compute[250018]:   <metadata>
Jan 20 15:03:21 compute-0 nova_compute[250018]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 15:03:21 compute-0 nova_compute[250018]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 15:03:21 compute-0 nova_compute[250018]:       <nova:name>tempest-ServerBootFromVolumeStableRescueTest-server-550119680</nova:name>
Jan 20 15:03:21 compute-0 nova_compute[250018]:       <nova:creationTime>2026-01-20 15:03:20</nova:creationTime>
Jan 20 15:03:21 compute-0 nova_compute[250018]:       <nova:flavor name="m1.nano">
Jan 20 15:03:21 compute-0 nova_compute[250018]:         <nova:memory>128</nova:memory>
Jan 20 15:03:21 compute-0 nova_compute[250018]:         <nova:disk>1</nova:disk>
Jan 20 15:03:21 compute-0 nova_compute[250018]:         <nova:swap>0</nova:swap>
Jan 20 15:03:21 compute-0 nova_compute[250018]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 15:03:21 compute-0 nova_compute[250018]:         <nova:vcpus>1</nova:vcpus>
Jan 20 15:03:21 compute-0 nova_compute[250018]:       </nova:flavor>
Jan 20 15:03:21 compute-0 nova_compute[250018]:       <nova:owner>
Jan 20 15:03:21 compute-0 nova_compute[250018]:         <nova:user uuid="2446e8399b344b29986c1aaf8bf73adf">tempest-ServerBootFromVolumeStableRescueTest-1871371328-project-member</nova:user>
Jan 20 15:03:21 compute-0 nova_compute[250018]:         <nova:project uuid="63555e5851564db08c6429231d264f2c">tempest-ServerBootFromVolumeStableRescueTest-1871371328</nova:project>
Jan 20 15:03:21 compute-0 nova_compute[250018]:       </nova:owner>
Jan 20 15:03:21 compute-0 nova_compute[250018]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 15:03:21 compute-0 nova_compute[250018]:       <nova:ports>
Jan 20 15:03:21 compute-0 nova_compute[250018]:         <nova:port uuid="d0fce58e-a22e-40e2-9a2a-bab6b9a4ea73">
Jan 20 15:03:21 compute-0 nova_compute[250018]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Jan 20 15:03:21 compute-0 nova_compute[250018]:         </nova:port>
Jan 20 15:03:21 compute-0 nova_compute[250018]:       </nova:ports>
Jan 20 15:03:21 compute-0 nova_compute[250018]:     </nova:instance>
Jan 20 15:03:21 compute-0 nova_compute[250018]:   </metadata>
Jan 20 15:03:21 compute-0 nova_compute[250018]:   <sysinfo type="smbios">
Jan 20 15:03:21 compute-0 nova_compute[250018]:     <system>
Jan 20 15:03:21 compute-0 nova_compute[250018]:       <entry name="manufacturer">RDO</entry>
Jan 20 15:03:21 compute-0 nova_compute[250018]:       <entry name="product">OpenStack Compute</entry>
Jan 20 15:03:21 compute-0 nova_compute[250018]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 15:03:21 compute-0 nova_compute[250018]:       <entry name="serial">83bbf40a-f44e-42fe-b09a-0e635a302f6d</entry>
Jan 20 15:03:21 compute-0 nova_compute[250018]:       <entry name="uuid">83bbf40a-f44e-42fe-b09a-0e635a302f6d</entry>
Jan 20 15:03:21 compute-0 nova_compute[250018]:       <entry name="family">Virtual Machine</entry>
Jan 20 15:03:21 compute-0 nova_compute[250018]:     </system>
Jan 20 15:03:21 compute-0 nova_compute[250018]:   </sysinfo>
Jan 20 15:03:21 compute-0 nova_compute[250018]:   <os>
Jan 20 15:03:21 compute-0 nova_compute[250018]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 15:03:21 compute-0 nova_compute[250018]:     <boot dev="hd"/>
Jan 20 15:03:21 compute-0 nova_compute[250018]:     <smbios mode="sysinfo"/>
Jan 20 15:03:21 compute-0 nova_compute[250018]:   </os>
Jan 20 15:03:21 compute-0 nova_compute[250018]:   <features>
Jan 20 15:03:21 compute-0 nova_compute[250018]:     <acpi/>
Jan 20 15:03:21 compute-0 nova_compute[250018]:     <apic/>
Jan 20 15:03:21 compute-0 nova_compute[250018]:     <vmcoreinfo/>
Jan 20 15:03:21 compute-0 nova_compute[250018]:   </features>
Jan 20 15:03:21 compute-0 nova_compute[250018]:   <clock offset="utc">
Jan 20 15:03:21 compute-0 nova_compute[250018]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 15:03:21 compute-0 nova_compute[250018]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 15:03:21 compute-0 nova_compute[250018]:     <timer name="hpet" present="no"/>
Jan 20 15:03:21 compute-0 nova_compute[250018]:   </clock>
Jan 20 15:03:21 compute-0 nova_compute[250018]:   <cpu mode="custom" match="exact">
Jan 20 15:03:21 compute-0 nova_compute[250018]:     <model>Nehalem</model>
Jan 20 15:03:21 compute-0 nova_compute[250018]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 15:03:21 compute-0 nova_compute[250018]:   </cpu>
Jan 20 15:03:21 compute-0 nova_compute[250018]:   <devices>
Jan 20 15:03:21 compute-0 nova_compute[250018]:     <disk type="network" device="disk">
Jan 20 15:03:21 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 15:03:21 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/83bbf40a-f44e-42fe-b09a-0e635a302f6d_disk">
Jan 20 15:03:21 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 15:03:21 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 15:03:21 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 15:03:21 compute-0 nova_compute[250018]:       </source>
Jan 20 15:03:21 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 15:03:21 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 15:03:21 compute-0 nova_compute[250018]:       </auth>
Jan 20 15:03:21 compute-0 nova_compute[250018]:       <target dev="vda" bus="virtio"/>
Jan 20 15:03:21 compute-0 nova_compute[250018]:     </disk>
Jan 20 15:03:21 compute-0 nova_compute[250018]:     <disk type="network" device="cdrom">
Jan 20 15:03:21 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 15:03:21 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/83bbf40a-f44e-42fe-b09a-0e635a302f6d_disk.config">
Jan 20 15:03:21 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 15:03:21 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 15:03:21 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 15:03:21 compute-0 nova_compute[250018]:       </source>
Jan 20 15:03:21 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 15:03:21 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 15:03:21 compute-0 nova_compute[250018]:       </auth>
Jan 20 15:03:21 compute-0 nova_compute[250018]:       <target dev="sda" bus="sata"/>
Jan 20 15:03:21 compute-0 nova_compute[250018]:     </disk>
Jan 20 15:03:21 compute-0 nova_compute[250018]:     <interface type="ethernet">
Jan 20 15:03:21 compute-0 nova_compute[250018]:       <mac address="fa:16:3e:70:7d:d6"/>
Jan 20 15:03:21 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 15:03:21 compute-0 nova_compute[250018]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 15:03:21 compute-0 nova_compute[250018]:       <mtu size="1442"/>
Jan 20 15:03:21 compute-0 nova_compute[250018]:       <target dev="tapd0fce58e-a2"/>
Jan 20 15:03:21 compute-0 nova_compute[250018]:     </interface>
Jan 20 15:03:21 compute-0 nova_compute[250018]:     <serial type="pty">
Jan 20 15:03:21 compute-0 nova_compute[250018]:       <log file="/var/lib/nova/instances/83bbf40a-f44e-42fe-b09a-0e635a302f6d/console.log" append="off"/>
Jan 20 15:03:21 compute-0 nova_compute[250018]:     </serial>
Jan 20 15:03:21 compute-0 nova_compute[250018]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 15:03:21 compute-0 nova_compute[250018]:     <video>
Jan 20 15:03:21 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 15:03:21 compute-0 nova_compute[250018]:     </video>
Jan 20 15:03:21 compute-0 nova_compute[250018]:     <input type="tablet" bus="usb"/>
Jan 20 15:03:21 compute-0 nova_compute[250018]:     <rng model="virtio">
Jan 20 15:03:21 compute-0 nova_compute[250018]:       <backend model="random">/dev/urandom</backend>
Jan 20 15:03:21 compute-0 nova_compute[250018]:     </rng>
Jan 20 15:03:21 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root"/>
Jan 20 15:03:21 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:03:21 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:03:21 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:03:21 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:03:21 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:03:21 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:03:21 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:03:21 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:03:21 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:03:21 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:03:21 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:03:21 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:03:21 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:03:21 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:03:21 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:03:21 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:03:21 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:03:21 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:03:21 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:03:21 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:03:21 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:03:21 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:03:21 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:03:21 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:03:21 compute-0 nova_compute[250018]:     <controller type="usb" index="0"/>
Jan 20 15:03:21 compute-0 nova_compute[250018]:     <memballoon model="virtio">
Jan 20 15:03:21 compute-0 nova_compute[250018]:       <stats period="10"/>
Jan 20 15:03:21 compute-0 nova_compute[250018]:     </memballoon>
Jan 20 15:03:21 compute-0 nova_compute[250018]:   </devices>
Jan 20 15:03:21 compute-0 nova_compute[250018]: </domain>
Jan 20 15:03:21 compute-0 nova_compute[250018]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 15:03:21 compute-0 nova_compute[250018]: 2026-01-20 15:03:21.722 250022 DEBUG nova.compute.manager [None req-66d7582b-4ffe-4889-b374-86002374d94e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: 83bbf40a-f44e-42fe-b09a-0e635a302f6d] Preparing to wait for external event network-vif-plugged-d0fce58e-a22e-40e2-9a2a-bab6b9a4ea73 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 20 15:03:21 compute-0 nova_compute[250018]: 2026-01-20 15:03:21.723 250022 DEBUG oslo_concurrency.lockutils [None req-66d7582b-4ffe-4889-b374-86002374d94e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Acquiring lock "83bbf40a-f44e-42fe-b09a-0e635a302f6d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:03:21 compute-0 nova_compute[250018]: 2026-01-20 15:03:21.724 250022 DEBUG oslo_concurrency.lockutils [None req-66d7582b-4ffe-4889-b374-86002374d94e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Lock "83bbf40a-f44e-42fe-b09a-0e635a302f6d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:03:21 compute-0 nova_compute[250018]: 2026-01-20 15:03:21.724 250022 DEBUG oslo_concurrency.lockutils [None req-66d7582b-4ffe-4889-b374-86002374d94e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Lock "83bbf40a-f44e-42fe-b09a-0e635a302f6d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:03:21 compute-0 nova_compute[250018]: 2026-01-20 15:03:21.725 250022 DEBUG nova.virt.libvirt.vif [None req-66d7582b-4ffe-4889-b374-86002374d94e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T15:03:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-ServerBootFromVolumeStableRescueTest-server-550119680',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverbootfromvolumestablerescuetest-server-550119680',id=157,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='63555e5851564db08c6429231d264f2c',ramdisk_id='',reservation_id='r-4zf0x6i0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerBootFromVolumeStableRescueTest-1871371328',owner_user_name='tempest-ServerBootFromVolumeStableRescueTest-1871371328-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T15:03:16Z,user_data=None,user_id='2446e8399b344b29986c1aaf8bf73adf',uuid=83bbf40a-f44e-42fe-b09a-0e635a302f6d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d0fce58e-a22e-40e2-9a2a-bab6b9a4ea73", "address": "fa:16:3e:70:7d:d6", "network": {"id": "671e28d0-0b9e-41e0-b5e0-db1ccd4717ec", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-884777184-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63555e5851564db08c6429231d264f2c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0fce58e-a2", "ovs_interfaceid": "d0fce58e-a22e-40e2-9a2a-bab6b9a4ea73", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 15:03:21 compute-0 nova_compute[250018]: 2026-01-20 15:03:21.726 250022 DEBUG nova.network.os_vif_util [None req-66d7582b-4ffe-4889-b374-86002374d94e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Converting VIF {"id": "d0fce58e-a22e-40e2-9a2a-bab6b9a4ea73", "address": "fa:16:3e:70:7d:d6", "network": {"id": "671e28d0-0b9e-41e0-b5e0-db1ccd4717ec", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-884777184-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63555e5851564db08c6429231d264f2c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0fce58e-a2", "ovs_interfaceid": "d0fce58e-a22e-40e2-9a2a-bab6b9a4ea73", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:03:21 compute-0 nova_compute[250018]: 2026-01-20 15:03:21.727 250022 DEBUG nova.network.os_vif_util [None req-66d7582b-4ffe-4889-b374-86002374d94e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:70:7d:d6,bridge_name='br-int',has_traffic_filtering=True,id=d0fce58e-a22e-40e2-9a2a-bab6b9a4ea73,network=Network(671e28d0-0b9e-41e0-b5e0-db1ccd4717ec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd0fce58e-a2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:03:21 compute-0 nova_compute[250018]: 2026-01-20 15:03:21.729 250022 DEBUG os_vif [None req-66d7582b-4ffe-4889-b374-86002374d94e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:70:7d:d6,bridge_name='br-int',has_traffic_filtering=True,id=d0fce58e-a22e-40e2-9a2a-bab6b9a4ea73,network=Network(671e28d0-0b9e-41e0-b5e0-db1ccd4717ec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd0fce58e-a2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 15:03:21 compute-0 nova_compute[250018]: 2026-01-20 15:03:21.730 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:03:21 compute-0 nova_compute[250018]: 2026-01-20 15:03:21.730 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:03:21 compute-0 nova_compute[250018]: 2026-01-20 15:03:21.731 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:03:21 compute-0 nova_compute[250018]: 2026-01-20 15:03:21.735 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:03:21 compute-0 nova_compute[250018]: 2026-01-20 15:03:21.735 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd0fce58e-a2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:03:21 compute-0 nova_compute[250018]: 2026-01-20 15:03:21.736 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd0fce58e-a2, col_values=(('external_ids', {'iface-id': 'd0fce58e-a22e-40e2-9a2a-bab6b9a4ea73', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:70:7d:d6', 'vm-uuid': '83bbf40a-f44e-42fe-b09a-0e635a302f6d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:03:21 compute-0 nova_compute[250018]: 2026-01-20 15:03:21.739 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:03:21 compute-0 NetworkManager[48960]: <info>  [1768921401.7406] manager: (tapd0fce58e-a2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/271)
Jan 20 15:03:21 compute-0 nova_compute[250018]: 2026-01-20 15:03:21.742 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 15:03:21 compute-0 nova_compute[250018]: 2026-01-20 15:03:21.748 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:03:21 compute-0 nova_compute[250018]: 2026-01-20 15:03:21.750 250022 INFO os_vif [None req-66d7582b-4ffe-4889-b374-86002374d94e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:70:7d:d6,bridge_name='br-int',has_traffic_filtering=True,id=d0fce58e-a22e-40e2-9a2a-bab6b9a4ea73,network=Network(671e28d0-0b9e-41e0-b5e0-db1ccd4717ec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd0fce58e-a2')
Jan 20 15:03:21 compute-0 nova_compute[250018]: 2026-01-20 15:03:21.803 250022 DEBUG nova.virt.libvirt.driver [None req-66d7582b-4ffe-4889-b374-86002374d94e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:03:21 compute-0 nova_compute[250018]: 2026-01-20 15:03:21.804 250022 DEBUG nova.virt.libvirt.driver [None req-66d7582b-4ffe-4889-b374-86002374d94e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:03:21 compute-0 nova_compute[250018]: 2026-01-20 15:03:21.804 250022 DEBUG nova.virt.libvirt.driver [None req-66d7582b-4ffe-4889-b374-86002374d94e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] No VIF found with MAC fa:16:3e:70:7d:d6, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 15:03:21 compute-0 nova_compute[250018]: 2026-01-20 15:03:21.804 250022 INFO nova.virt.libvirt.driver [None req-66d7582b-4ffe-4889-b374-86002374d94e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: 83bbf40a-f44e-42fe-b09a-0e635a302f6d] Using config drive
Jan 20 15:03:21 compute-0 nova_compute[250018]: 2026-01-20 15:03:21.829 250022 DEBUG nova.storage.rbd_utils [None req-66d7582b-4ffe-4889-b374-86002374d94e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] rbd image 83bbf40a-f44e-42fe-b09a-0e635a302f6d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:03:21 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:03:21 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:03:21 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:03:21.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:03:22 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2358: 321 pgs: 2 active+clean+snaptrim, 4 active+clean+snaptrim_wait, 315 active+clean; 574 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 8.5 MiB/s rd, 18 MiB/s wr, 256 op/s
Jan 20 15:03:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:03:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:03:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:03:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:03:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:03:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:03:22 compute-0 nova_compute[250018]: 2026-01-20 15:03:22.572 250022 INFO nova.virt.libvirt.driver [None req-66d7582b-4ffe-4889-b374-86002374d94e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: 83bbf40a-f44e-42fe-b09a-0e635a302f6d] Creating config drive at /var/lib/nova/instances/83bbf40a-f44e-42fe-b09a-0e635a302f6d/disk.config
Jan 20 15:03:22 compute-0 nova_compute[250018]: 2026-01-20 15:03:22.582 250022 DEBUG oslo_concurrency.processutils [None req-66d7582b-4ffe-4889-b374-86002374d94e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/83bbf40a-f44e-42fe-b09a-0e635a302f6d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpd2bp4y2b execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:03:22 compute-0 ceph-mon[74360]: pgmap v2358: 321 pgs: 2 active+clean+snaptrim, 4 active+clean+snaptrim_wait, 315 active+clean; 574 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 8.5 MiB/s rd, 18 MiB/s wr, 256 op/s
Jan 20 15:03:22 compute-0 nova_compute[250018]: 2026-01-20 15:03:22.742 250022 DEBUG oslo_concurrency.processutils [None req-66d7582b-4ffe-4889-b374-86002374d94e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/83bbf40a-f44e-42fe-b09a-0e635a302f6d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpd2bp4y2b" returned: 0 in 0.160s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:03:22 compute-0 nova_compute[250018]: 2026-01-20 15:03:22.771 250022 DEBUG nova.storage.rbd_utils [None req-66d7582b-4ffe-4889-b374-86002374d94e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] rbd image 83bbf40a-f44e-42fe-b09a-0e635a302f6d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:03:22 compute-0 nova_compute[250018]: 2026-01-20 15:03:22.775 250022 DEBUG oslo_concurrency.processutils [None req-66d7582b-4ffe-4889-b374-86002374d94e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/83bbf40a-f44e-42fe-b09a-0e635a302f6d/disk.config 83bbf40a-f44e-42fe-b09a-0e635a302f6d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:03:22 compute-0 nova_compute[250018]: 2026-01-20 15:03:22.957 250022 DEBUG oslo_concurrency.processutils [None req-66d7582b-4ffe-4889-b374-86002374d94e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/83bbf40a-f44e-42fe-b09a-0e635a302f6d/disk.config 83bbf40a-f44e-42fe-b09a-0e635a302f6d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.182s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:03:22 compute-0 nova_compute[250018]: 2026-01-20 15:03:22.958 250022 INFO nova.virt.libvirt.driver [None req-66d7582b-4ffe-4889-b374-86002374d94e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: 83bbf40a-f44e-42fe-b09a-0e635a302f6d] Deleting local config drive /var/lib/nova/instances/83bbf40a-f44e-42fe-b09a-0e635a302f6d/disk.config because it was imported into RBD.
Jan 20 15:03:23 compute-0 kernel: tapd0fce58e-a2: entered promiscuous mode
Jan 20 15:03:23 compute-0 ovn_controller[148666]: 2026-01-20T15:03:23Z|00551|binding|INFO|Claiming lport d0fce58e-a22e-40e2-9a2a-bab6b9a4ea73 for this chassis.
Jan 20 15:03:23 compute-0 ovn_controller[148666]: 2026-01-20T15:03:23Z|00552|binding|INFO|d0fce58e-a22e-40e2-9a2a-bab6b9a4ea73: Claiming fa:16:3e:70:7d:d6 10.100.0.11
Jan 20 15:03:23 compute-0 nova_compute[250018]: 2026-01-20 15:03:23.018 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:03:23 compute-0 NetworkManager[48960]: <info>  [1768921403.0208] manager: (tapd0fce58e-a2): new Tun device (/org/freedesktop/NetworkManager/Devices/272)
Jan 20 15:03:23 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:03:23.035 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:70:7d:d6 10.100.0.11'], port_security=['fa:16:3e:70:7d:d6 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '83bbf40a-f44e-42fe-b09a-0e635a302f6d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-671e28d0-0b9e-41e0-b5e0-db1ccd4717ec', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '63555e5851564db08c6429231d264f2c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '7e54c470-6a6f-454e-ae01-9d2d59b2c74d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=248fa32c-94be-4e1b-b4d3-cb9fac0ec155, chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=d0fce58e-a22e-40e2-9a2a-bab6b9a4ea73) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:03:23 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:03:23.036 160071 INFO neutron.agent.ovn.metadata.agent [-] Port d0fce58e-a22e-40e2-9a2a-bab6b9a4ea73 in datapath 671e28d0-0b9e-41e0-b5e0-db1ccd4717ec bound to our chassis
Jan 20 15:03:23 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:03:23.038 160071 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 671e28d0-0b9e-41e0-b5e0-db1ccd4717ec
Jan 20 15:03:23 compute-0 ovn_controller[148666]: 2026-01-20T15:03:23Z|00553|binding|INFO|Setting lport d0fce58e-a22e-40e2-9a2a-bab6b9a4ea73 ovn-installed in OVS
Jan 20 15:03:23 compute-0 ovn_controller[148666]: 2026-01-20T15:03:23Z|00554|binding|INFO|Setting lport d0fce58e-a22e-40e2-9a2a-bab6b9a4ea73 up in Southbound
Jan 20 15:03:23 compute-0 nova_compute[250018]: 2026-01-20 15:03:23.045 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:03:23 compute-0 nova_compute[250018]: 2026-01-20 15:03:23.049 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:03:23 compute-0 systemd-udevd[341621]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 15:03:23 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:03:23.057 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[02783fd4-e228-416b-854a-6e8ee9d2a5fd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:03:23 compute-0 systemd-machined[216401]: New machine qemu-71-instance-0000009d.
Jan 20 15:03:23 compute-0 NetworkManager[48960]: <info>  [1768921403.0691] device (tapd0fce58e-a2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 15:03:23 compute-0 NetworkManager[48960]: <info>  [1768921403.0699] device (tapd0fce58e-a2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 15:03:23 compute-0 systemd[1]: Started Virtual Machine qemu-71-instance-0000009d.
Jan 20 15:03:23 compute-0 auditd[701]: Audit daemon rotating log files
Jan 20 15:03:23 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:03:23.100 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[01861736-d07b-417e-85e5-70350318ade2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:03:23 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:03:23.103 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[dbaf28bd-d974-4ab3-b419-93ac2442bab1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:03:23 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:03:23.128 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[72d6b025-1144-4161-b951-9aa8110cf782]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:03:23 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:03:23.142 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[615dee40-2fba-4d47-a832-6b9e4945c57b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap671e28d0-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2b:4e:69'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 176], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 722583, 'reachable_time': 22271, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 341634, 'error': None, 'target': 'ovnmeta-671e28d0-0b9e-41e0-b5e0-db1ccd4717ec', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:03:23 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:03:23.155 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[49674ba0-22eb-4319-9cfc-ad099763fd9f]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap671e28d0-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 722597, 'tstamp': 722597}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 341636, 'error': None, 'target': 'ovnmeta-671e28d0-0b9e-41e0-b5e0-db1ccd4717ec', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap671e28d0-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 722601, 'tstamp': 722601}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 341636, 'error': None, 'target': 'ovnmeta-671e28d0-0b9e-41e0-b5e0-db1ccd4717ec', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:03:23 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:03:23.157 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap671e28d0-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:03:23 compute-0 nova_compute[250018]: 2026-01-20 15:03:23.158 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:03:23 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:03:23.159 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap671e28d0-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:03:23 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:03:23.159 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:03:23 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:03:23.160 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap671e28d0-00, col_values=(('external_ids', {'iface-id': 'a8628d9e-196f-4b84-89fd-d3a41792b8a0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:03:23 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:03:23.160 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:03:23 compute-0 nova_compute[250018]: 2026-01-20 15:03:23.490 250022 DEBUG nova.compute.manager [req-27d61502-a138-4285-a5f6-9be9acdf6289 req-d15eb03a-58e1-4a76-ae5b-91bda181ad10 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 83bbf40a-f44e-42fe-b09a-0e635a302f6d] Received event network-vif-plugged-d0fce58e-a22e-40e2-9a2a-bab6b9a4ea73 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:03:23 compute-0 nova_compute[250018]: 2026-01-20 15:03:23.490 250022 DEBUG oslo_concurrency.lockutils [req-27d61502-a138-4285-a5f6-9be9acdf6289 req-d15eb03a-58e1-4a76-ae5b-91bda181ad10 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "83bbf40a-f44e-42fe-b09a-0e635a302f6d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:03:23 compute-0 nova_compute[250018]: 2026-01-20 15:03:23.490 250022 DEBUG oslo_concurrency.lockutils [req-27d61502-a138-4285-a5f6-9be9acdf6289 req-d15eb03a-58e1-4a76-ae5b-91bda181ad10 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "83bbf40a-f44e-42fe-b09a-0e635a302f6d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:03:23 compute-0 nova_compute[250018]: 2026-01-20 15:03:23.491 250022 DEBUG oslo_concurrency.lockutils [req-27d61502-a138-4285-a5f6-9be9acdf6289 req-d15eb03a-58e1-4a76-ae5b-91bda181ad10 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "83bbf40a-f44e-42fe-b09a-0e635a302f6d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:03:23 compute-0 nova_compute[250018]: 2026-01-20 15:03:23.491 250022 DEBUG nova.compute.manager [req-27d61502-a138-4285-a5f6-9be9acdf6289 req-d15eb03a-58e1-4a76-ae5b-91bda181ad10 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 83bbf40a-f44e-42fe-b09a-0e635a302f6d] Processing event network-vif-plugged-d0fce58e-a22e-40e2-9a2a-bab6b9a4ea73 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 20 15:03:23 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:03:23 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:03:23 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:03:23.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:03:23 compute-0 nova_compute[250018]: 2026-01-20 15:03:23.604 250022 DEBUG nova.compute.manager [None req-66d7582b-4ffe-4889-b374-86002374d94e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: 83bbf40a-f44e-42fe-b09a-0e635a302f6d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 15:03:23 compute-0 nova_compute[250018]: 2026-01-20 15:03:23.605 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768921403.6032698, 83bbf40a-f44e-42fe-b09a-0e635a302f6d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:03:23 compute-0 nova_compute[250018]: 2026-01-20 15:03:23.606 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 83bbf40a-f44e-42fe-b09a-0e635a302f6d] VM Started (Lifecycle Event)
Jan 20 15:03:23 compute-0 nova_compute[250018]: 2026-01-20 15:03:23.615 250022 DEBUG nova.virt.libvirt.driver [None req-66d7582b-4ffe-4889-b374-86002374d94e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: 83bbf40a-f44e-42fe-b09a-0e635a302f6d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 15:03:23 compute-0 nova_compute[250018]: 2026-01-20 15:03:23.621 250022 INFO nova.virt.libvirt.driver [-] [instance: 83bbf40a-f44e-42fe-b09a-0e635a302f6d] Instance spawned successfully.
Jan 20 15:03:23 compute-0 nova_compute[250018]: 2026-01-20 15:03:23.621 250022 DEBUG nova.virt.libvirt.driver [None req-66d7582b-4ffe-4889-b374-86002374d94e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: 83bbf40a-f44e-42fe-b09a-0e635a302f6d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 15:03:23 compute-0 nova_compute[250018]: 2026-01-20 15:03:23.648 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 83bbf40a-f44e-42fe-b09a-0e635a302f6d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:03:23 compute-0 nova_compute[250018]: 2026-01-20 15:03:23.651 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 83bbf40a-f44e-42fe-b09a-0e635a302f6d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:03:23 compute-0 nova_compute[250018]: 2026-01-20 15:03:23.663 250022 DEBUG nova.virt.libvirt.driver [None req-66d7582b-4ffe-4889-b374-86002374d94e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: 83bbf40a-f44e-42fe-b09a-0e635a302f6d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:03:23 compute-0 nova_compute[250018]: 2026-01-20 15:03:23.663 250022 DEBUG nova.virt.libvirt.driver [None req-66d7582b-4ffe-4889-b374-86002374d94e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: 83bbf40a-f44e-42fe-b09a-0e635a302f6d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:03:23 compute-0 nova_compute[250018]: 2026-01-20 15:03:23.664 250022 DEBUG nova.virt.libvirt.driver [None req-66d7582b-4ffe-4889-b374-86002374d94e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: 83bbf40a-f44e-42fe-b09a-0e635a302f6d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:03:23 compute-0 nova_compute[250018]: 2026-01-20 15:03:23.664 250022 DEBUG nova.virt.libvirt.driver [None req-66d7582b-4ffe-4889-b374-86002374d94e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: 83bbf40a-f44e-42fe-b09a-0e635a302f6d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:03:23 compute-0 nova_compute[250018]: 2026-01-20 15:03:23.665 250022 DEBUG nova.virt.libvirt.driver [None req-66d7582b-4ffe-4889-b374-86002374d94e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: 83bbf40a-f44e-42fe-b09a-0e635a302f6d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:03:23 compute-0 nova_compute[250018]: 2026-01-20 15:03:23.665 250022 DEBUG nova.virt.libvirt.driver [None req-66d7582b-4ffe-4889-b374-86002374d94e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: 83bbf40a-f44e-42fe-b09a-0e635a302f6d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:03:23 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e340 do_prune osdmap full prune enabled
Jan 20 15:03:23 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e341 e341: 3 total, 3 up, 3 in
Jan 20 15:03:23 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e341: 3 total, 3 up, 3 in
Jan 20 15:03:23 compute-0 nova_compute[250018]: 2026-01-20 15:03:23.687 250022 DEBUG nova.network.neutron [req-42272ebd-1c03-4b3d-8701-079fb9bc402b req-23ac437e-14b3-4f7d-9948-22801b243479 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 83bbf40a-f44e-42fe-b09a-0e635a302f6d] Updated VIF entry in instance network info cache for port d0fce58e-a22e-40e2-9a2a-bab6b9a4ea73. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 15:03:23 compute-0 nova_compute[250018]: 2026-01-20 15:03:23.688 250022 DEBUG nova.network.neutron [req-42272ebd-1c03-4b3d-8701-079fb9bc402b req-23ac437e-14b3-4f7d-9948-22801b243479 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 83bbf40a-f44e-42fe-b09a-0e635a302f6d] Updating instance_info_cache with network_info: [{"id": "d0fce58e-a22e-40e2-9a2a-bab6b9a4ea73", "address": "fa:16:3e:70:7d:d6", "network": {"id": "671e28d0-0b9e-41e0-b5e0-db1ccd4717ec", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-884777184-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63555e5851564db08c6429231d264f2c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0fce58e-a2", "ovs_interfaceid": "d0fce58e-a22e-40e2-9a2a-bab6b9a4ea73", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:03:23 compute-0 nova_compute[250018]: 2026-01-20 15:03:23.690 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 83bbf40a-f44e-42fe-b09a-0e635a302f6d] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 15:03:23 compute-0 nova_compute[250018]: 2026-01-20 15:03:23.691 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768921403.6035256, 83bbf40a-f44e-42fe-b09a-0e635a302f6d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:03:23 compute-0 nova_compute[250018]: 2026-01-20 15:03:23.691 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 83bbf40a-f44e-42fe-b09a-0e635a302f6d] VM Paused (Lifecycle Event)
Jan 20 15:03:23 compute-0 nova_compute[250018]: 2026-01-20 15:03:23.727 250022 DEBUG oslo_concurrency.lockutils [req-42272ebd-1c03-4b3d-8701-079fb9bc402b req-23ac437e-14b3-4f7d-9948-22801b243479 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-83bbf40a-f44e-42fe-b09a-0e635a302f6d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:03:23 compute-0 nova_compute[250018]: 2026-01-20 15:03:23.731 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 83bbf40a-f44e-42fe-b09a-0e635a302f6d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:03:23 compute-0 nova_compute[250018]: 2026-01-20 15:03:23.734 250022 INFO nova.compute.manager [None req-66d7582b-4ffe-4889-b374-86002374d94e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: 83bbf40a-f44e-42fe-b09a-0e635a302f6d] Took 7.47 seconds to spawn the instance on the hypervisor.
Jan 20 15:03:23 compute-0 nova_compute[250018]: 2026-01-20 15:03:23.735 250022 DEBUG nova.compute.manager [None req-66d7582b-4ffe-4889-b374-86002374d94e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: 83bbf40a-f44e-42fe-b09a-0e635a302f6d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:03:23 compute-0 nova_compute[250018]: 2026-01-20 15:03:23.737 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768921403.6144438, 83bbf40a-f44e-42fe-b09a-0e635a302f6d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:03:23 compute-0 nova_compute[250018]: 2026-01-20 15:03:23.738 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 83bbf40a-f44e-42fe-b09a-0e635a302f6d] VM Resumed (Lifecycle Event)
Jan 20 15:03:23 compute-0 nova_compute[250018]: 2026-01-20 15:03:23.768 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 83bbf40a-f44e-42fe-b09a-0e635a302f6d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:03:23 compute-0 nova_compute[250018]: 2026-01-20 15:03:23.774 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 83bbf40a-f44e-42fe-b09a-0e635a302f6d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:03:23 compute-0 nova_compute[250018]: 2026-01-20 15:03:23.823 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 83bbf40a-f44e-42fe-b09a-0e635a302f6d] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 15:03:23 compute-0 nova_compute[250018]: 2026-01-20 15:03:23.837 250022 INFO nova.compute.manager [None req-66d7582b-4ffe-4889-b374-86002374d94e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: 83bbf40a-f44e-42fe-b09a-0e635a302f6d] Took 8.67 seconds to build instance.
Jan 20 15:03:23 compute-0 nova_compute[250018]: 2026-01-20 15:03:23.862 250022 DEBUG oslo_concurrency.lockutils [None req-66d7582b-4ffe-4889-b374-86002374d94e 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Lock "83bbf40a-f44e-42fe-b09a-0e635a302f6d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.784s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:03:23 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:03:23 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:03:23 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:03:23.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:03:23 compute-0 sudo[341680]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:03:23 compute-0 sudo[341680]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:03:23 compute-0 sudo[341680]: pam_unix(sudo:session): session closed for user root
Jan 20 15:03:24 compute-0 sudo[341705]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:03:24 compute-0 sudo[341705]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:03:24 compute-0 sudo[341705]: pam_unix(sudo:session): session closed for user root
Jan 20 15:03:24 compute-0 sudo[341730]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:03:24 compute-0 sudo[341730]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:03:24 compute-0 sudo[341730]: pam_unix(sudo:session): session closed for user root
Jan 20 15:03:24 compute-0 sudo[341755]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 15:03:24 compute-0 sudo[341755]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:03:24 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2360: 321 pgs: 321 active+clean; 574 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 8.9 MiB/s rd, 18 MiB/s wr, 297 op/s
Jan 20 15:03:24 compute-0 nova_compute[250018]: 2026-01-20 15:03:24.389 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:03:24 compute-0 ceph-mon[74360]: osdmap e341: 3 total, 3 up, 3 in
Jan 20 15:03:24 compute-0 ceph-mon[74360]: pgmap v2360: 321 pgs: 321 active+clean; 574 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 8.9 MiB/s rd, 18 MiB/s wr, 297 op/s
Jan 20 15:03:24 compute-0 sudo[341755]: pam_unix(sudo:session): session closed for user root
Jan 20 15:03:25 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 15:03:25 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:03:25 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 20 15:03:25 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 15:03:25 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 20 15:03:25 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:03:25 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev cec39fb9-1b68-4347-8ec8-58be893118c7 does not exist
Jan 20 15:03:25 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev e2f77c10-d762-44df-9e7a-5cc66e517e4d does not exist
Jan 20 15:03:25 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev a6148f92-f747-4db9-a698-47775a37b8ed does not exist
Jan 20 15:03:25 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 20 15:03:25 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 15:03:25 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 20 15:03:25 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 15:03:25 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 15:03:25 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:03:25 compute-0 sudo[341810]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:03:25 compute-0 sudo[341810]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:03:25 compute-0 sudo[341810]: pam_unix(sudo:session): session closed for user root
Jan 20 15:03:25 compute-0 sudo[341835]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:03:25 compute-0 sudo[341835]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:03:25 compute-0 sudo[341835]: pam_unix(sudo:session): session closed for user root
Jan 20 15:03:25 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e341 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:03:25 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e341 do_prune osdmap full prune enabled
Jan 20 15:03:25 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e342 e342: 3 total, 3 up, 3 in
Jan 20 15:03:25 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e342: 3 total, 3 up, 3 in
Jan 20 15:03:25 compute-0 sudo[341860]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:03:25 compute-0 sudo[341860]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:03:25 compute-0 sudo[341860]: pam_unix(sudo:session): session closed for user root
Jan 20 15:03:25 compute-0 sudo[341885]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 15:03:25 compute-0 sudo[341885]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:03:25 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:03:25 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:03:25 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:03:25.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:03:25 compute-0 podman[341949]: 2026-01-20 15:03:25.625273616 +0000 UTC m=+0.037036088 container create b456668f4b3a0c4e1484da009863a9d63734e1bd630c8a861fbf3efaa529f841 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_montalcini, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 15:03:25 compute-0 systemd[1]: Started libpod-conmon-b456668f4b3a0c4e1484da009863a9d63734e1bd630c8a861fbf3efaa529f841.scope.
Jan 20 15:03:25 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:03:25 compute-0 podman[341949]: 2026-01-20 15:03:25.608654218 +0000 UTC m=+0.020416710 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:03:25 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:03:25 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 15:03:25 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:03:25 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 15:03:25 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 15:03:25 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:03:25 compute-0 ceph-mon[74360]: osdmap e342: 3 total, 3 up, 3 in
Jan 20 15:03:25 compute-0 podman[341949]: 2026-01-20 15:03:25.709160785 +0000 UTC m=+0.120923277 container init b456668f4b3a0c4e1484da009863a9d63734e1bd630c8a861fbf3efaa529f841 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_montalcini, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 20 15:03:25 compute-0 podman[341949]: 2026-01-20 15:03:25.716773161 +0000 UTC m=+0.128535633 container start b456668f4b3a0c4e1484da009863a9d63734e1bd630c8a861fbf3efaa529f841 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_montalcini, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 15:03:25 compute-0 podman[341949]: 2026-01-20 15:03:25.720564762 +0000 UTC m=+0.132327234 container attach b456668f4b3a0c4e1484da009863a9d63734e1bd630c8a861fbf3efaa529f841 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_montalcini, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 20 15:03:25 compute-0 systemd[1]: libpod-b456668f4b3a0c4e1484da009863a9d63734e1bd630c8a861fbf3efaa529f841.scope: Deactivated successfully.
Jan 20 15:03:25 compute-0 strange_montalcini[341966]: 167 167
Jan 20 15:03:25 compute-0 conmon[341966]: conmon b456668f4b3a0c4e1484 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b456668f4b3a0c4e1484da009863a9d63734e1bd630c8a861fbf3efaa529f841.scope/container/memory.events
Jan 20 15:03:25 compute-0 podman[341949]: 2026-01-20 15:03:25.722896165 +0000 UTC m=+0.134658638 container died b456668f4b3a0c4e1484da009863a9d63734e1bd630c8a861fbf3efaa529f841 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_montalcini, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 15:03:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-03c646e2bcdd36ddbbcdb72365f0f02ce45038a5f150004eb5be98032cf73e3d-merged.mount: Deactivated successfully.
Jan 20 15:03:25 compute-0 podman[341949]: 2026-01-20 15:03:25.759770958 +0000 UTC m=+0.171533430 container remove b456668f4b3a0c4e1484da009863a9d63734e1bd630c8a861fbf3efaa529f841 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_montalcini, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 20 15:03:25 compute-0 systemd[1]: libpod-conmon-b456668f4b3a0c4e1484da009863a9d63734e1bd630c8a861fbf3efaa529f841.scope: Deactivated successfully.
Jan 20 15:03:25 compute-0 nova_compute[250018]: 2026-01-20 15:03:25.897 250022 DEBUG nova.compute.manager [req-1753b9ea-94d6-4863-bc30-6a02bb52032c req-a01dada6-9573-4db6-97a9-4891bb59de38 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 83bbf40a-f44e-42fe-b09a-0e635a302f6d] Received event network-vif-plugged-d0fce58e-a22e-40e2-9a2a-bab6b9a4ea73 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:03:25 compute-0 nova_compute[250018]: 2026-01-20 15:03:25.898 250022 DEBUG oslo_concurrency.lockutils [req-1753b9ea-94d6-4863-bc30-6a02bb52032c req-a01dada6-9573-4db6-97a9-4891bb59de38 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "83bbf40a-f44e-42fe-b09a-0e635a302f6d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:03:25 compute-0 nova_compute[250018]: 2026-01-20 15:03:25.898 250022 DEBUG oslo_concurrency.lockutils [req-1753b9ea-94d6-4863-bc30-6a02bb52032c req-a01dada6-9573-4db6-97a9-4891bb59de38 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "83bbf40a-f44e-42fe-b09a-0e635a302f6d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:03:25 compute-0 nova_compute[250018]: 2026-01-20 15:03:25.899 250022 DEBUG oslo_concurrency.lockutils [req-1753b9ea-94d6-4863-bc30-6a02bb52032c req-a01dada6-9573-4db6-97a9-4891bb59de38 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "83bbf40a-f44e-42fe-b09a-0e635a302f6d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:03:25 compute-0 nova_compute[250018]: 2026-01-20 15:03:25.899 250022 DEBUG nova.compute.manager [req-1753b9ea-94d6-4863-bc30-6a02bb52032c req-a01dada6-9573-4db6-97a9-4891bb59de38 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 83bbf40a-f44e-42fe-b09a-0e635a302f6d] No waiting events found dispatching network-vif-plugged-d0fce58e-a22e-40e2-9a2a-bab6b9a4ea73 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:03:25 compute-0 nova_compute[250018]: 2026-01-20 15:03:25.899 250022 WARNING nova.compute.manager [req-1753b9ea-94d6-4863-bc30-6a02bb52032c req-a01dada6-9573-4db6-97a9-4891bb59de38 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 83bbf40a-f44e-42fe-b09a-0e635a302f6d] Received unexpected event network-vif-plugged-d0fce58e-a22e-40e2-9a2a-bab6b9a4ea73 for instance with vm_state active and task_state None.
Jan 20 15:03:25 compute-0 podman[341991]: 2026-01-20 15:03:25.942498621 +0000 UTC m=+0.053855112 container create e796a72b8aee0455d8238bf638af0ddfa1a7e47f8d9589600b6568edd87a2862 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_wilbur, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 15:03:25 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:03:25 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:03:25 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:03:25.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:03:25 compute-0 systemd[1]: Started libpod-conmon-e796a72b8aee0455d8238bf638af0ddfa1a7e47f8d9589600b6568edd87a2862.scope.
Jan 20 15:03:26 compute-0 podman[341991]: 2026-01-20 15:03:25.918938496 +0000 UTC m=+0.030295007 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:03:26 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:03:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fd8c9c209f68432cd97d533b83d6f9dff5b4821e5364473fe31acf2ef198c40/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 15:03:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fd8c9c209f68432cd97d533b83d6f9dff5b4821e5364473fe31acf2ef198c40/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 15:03:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fd8c9c209f68432cd97d533b83d6f9dff5b4821e5364473fe31acf2ef198c40/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 15:03:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fd8c9c209f68432cd97d533b83d6f9dff5b4821e5364473fe31acf2ef198c40/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 15:03:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fd8c9c209f68432cd97d533b83d6f9dff5b4821e5364473fe31acf2ef198c40/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 15:03:26 compute-0 podman[341991]: 2026-01-20 15:03:26.047636823 +0000 UTC m=+0.158993354 container init e796a72b8aee0455d8238bf638af0ddfa1a7e47f8d9589600b6568edd87a2862 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_wilbur, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 15:03:26 compute-0 podman[341991]: 2026-01-20 15:03:26.0560876 +0000 UTC m=+0.167444091 container start e796a72b8aee0455d8238bf638af0ddfa1a7e47f8d9589600b6568edd87a2862 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_wilbur, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 20 15:03:26 compute-0 podman[341991]: 2026-01-20 15:03:26.059599115 +0000 UTC m=+0.170955646 container attach e796a72b8aee0455d8238bf638af0ddfa1a7e47f8d9589600b6568edd87a2862 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_wilbur, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 20 15:03:26 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2362: 321 pgs: 321 active+clean; 505 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 7.6 MiB/s rd, 12 MiB/s wr, 372 op/s
Jan 20 15:03:26 compute-0 ceph-mon[74360]: pgmap v2362: 321 pgs: 321 active+clean; 505 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 7.6 MiB/s rd, 12 MiB/s wr, 372 op/s
Jan 20 15:03:26 compute-0 nova_compute[250018]: 2026-01-20 15:03:26.768 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:03:26 compute-0 kind_wilbur[342007]: --> passed data devices: 0 physical, 1 LVM
Jan 20 15:03:26 compute-0 kind_wilbur[342007]: --> relative data size: 1.0
Jan 20 15:03:26 compute-0 kind_wilbur[342007]: --> All data devices are unavailable
Jan 20 15:03:26 compute-0 systemd[1]: libpod-e796a72b8aee0455d8238bf638af0ddfa1a7e47f8d9589600b6568edd87a2862.scope: Deactivated successfully.
Jan 20 15:03:26 compute-0 podman[341991]: 2026-01-20 15:03:26.852429771 +0000 UTC m=+0.963786262 container died e796a72b8aee0455d8238bf638af0ddfa1a7e47f8d9589600b6568edd87a2862 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_wilbur, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 15:03:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-0fd8c9c209f68432cd97d533b83d6f9dff5b4821e5364473fe31acf2ef198c40-merged.mount: Deactivated successfully.
Jan 20 15:03:26 compute-0 podman[341991]: 2026-01-20 15:03:26.902644523 +0000 UTC m=+1.014001014 container remove e796a72b8aee0455d8238bf638af0ddfa1a7e47f8d9589600b6568edd87a2862 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_wilbur, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 15:03:26 compute-0 systemd[1]: libpod-conmon-e796a72b8aee0455d8238bf638af0ddfa1a7e47f8d9589600b6568edd87a2862.scope: Deactivated successfully.
Jan 20 15:03:26 compute-0 sudo[341885]: pam_unix(sudo:session): session closed for user root
Jan 20 15:03:26 compute-0 sudo[342035]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:03:26 compute-0 sudo[342035]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:03:26 compute-0 sudo[342035]: pam_unix(sudo:session): session closed for user root
Jan 20 15:03:27 compute-0 sudo[342060]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:03:27 compute-0 sudo[342060]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:03:27 compute-0 sudo[342060]: pam_unix(sudo:session): session closed for user root
Jan 20 15:03:27 compute-0 sudo[342085]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:03:27 compute-0 sudo[342085]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:03:27 compute-0 sudo[342085]: pam_unix(sudo:session): session closed for user root
Jan 20 15:03:27 compute-0 sudo[342110]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- lvm list --format json
Jan 20 15:03:27 compute-0 sudo[342110]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:03:27 compute-0 podman[342174]: 2026-01-20 15:03:27.504082984 +0000 UTC m=+0.048004434 container create 004a280b888624a87d7f4765318c81f26ce95153b54cead8bcff7403f564b41e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_lumiere, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 20 15:03:27 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:03:27 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:03:27 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:03:27.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:03:27 compute-0 systemd[1]: Started libpod-conmon-004a280b888624a87d7f4765318c81f26ce95153b54cead8bcff7403f564b41e.scope.
Jan 20 15:03:27 compute-0 podman[342174]: 2026-01-20 15:03:27.480990592 +0000 UTC m=+0.024912092 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:03:27 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:03:27 compute-0 podman[342174]: 2026-01-20 15:03:27.592721071 +0000 UTC m=+0.136642491 container init 004a280b888624a87d7f4765318c81f26ce95153b54cead8bcff7403f564b41e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_lumiere, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 15:03:27 compute-0 podman[342174]: 2026-01-20 15:03:27.60235356 +0000 UTC m=+0.146274960 container start 004a280b888624a87d7f4765318c81f26ce95153b54cead8bcff7403f564b41e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_lumiere, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 20 15:03:27 compute-0 podman[342174]: 2026-01-20 15:03:27.605686861 +0000 UTC m=+0.149608291 container attach 004a280b888624a87d7f4765318c81f26ce95153b54cead8bcff7403f564b41e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_lumiere, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 15:03:27 compute-0 epic_lumiere[342190]: 167 167
Jan 20 15:03:27 compute-0 systemd[1]: libpod-004a280b888624a87d7f4765318c81f26ce95153b54cead8bcff7403f564b41e.scope: Deactivated successfully.
Jan 20 15:03:27 compute-0 podman[342174]: 2026-01-20 15:03:27.606980576 +0000 UTC m=+0.150901986 container died 004a280b888624a87d7f4765318c81f26ce95153b54cead8bcff7403f564b41e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_lumiere, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Jan 20 15:03:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-7b5bae17a59b8ffb98c48eaeb4864356b1c3d95d73bb384021e8f88a3861c646-merged.mount: Deactivated successfully.
Jan 20 15:03:27 compute-0 podman[342174]: 2026-01-20 15:03:27.642103182 +0000 UTC m=+0.186024582 container remove 004a280b888624a87d7f4765318c81f26ce95153b54cead8bcff7403f564b41e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_lumiere, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 15:03:27 compute-0 systemd[1]: libpod-conmon-004a280b888624a87d7f4765318c81f26ce95153b54cead8bcff7403f564b41e.scope: Deactivated successfully.
Jan 20 15:03:27 compute-0 podman[342216]: 2026-01-20 15:03:27.890693017 +0000 UTC m=+0.099841280 container create 9529ded25c97bafc81b3f19fef0cbb9ad80fde010e21602c1f48517057932e50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_poincare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:03:27 compute-0 podman[342216]: 2026-01-20 15:03:27.819618003 +0000 UTC m=+0.028766286 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:03:27 compute-0 systemd[1]: Started libpod-conmon-9529ded25c97bafc81b3f19fef0cbb9ad80fde010e21602c1f48517057932e50.scope.
Jan 20 15:03:27 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/817872846' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:03:27 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:03:27 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:03:27 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:03:27 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:03:27.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:03:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd680838ef5e96a4301abdc0d70c34e28d4f65326001ecad7d2ad1f7bf42c240/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 15:03:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd680838ef5e96a4301abdc0d70c34e28d4f65326001ecad7d2ad1f7bf42c240/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 15:03:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd680838ef5e96a4301abdc0d70c34e28d4f65326001ecad7d2ad1f7bf42c240/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 15:03:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd680838ef5e96a4301abdc0d70c34e28d4f65326001ecad7d2ad1f7bf42c240/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 15:03:27 compute-0 podman[342216]: 2026-01-20 15:03:27.985953993 +0000 UTC m=+0.195102276 container init 9529ded25c97bafc81b3f19fef0cbb9ad80fde010e21602c1f48517057932e50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_poincare, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 20 15:03:27 compute-0 podman[342216]: 2026-01-20 15:03:27.992159381 +0000 UTC m=+0.201307644 container start 9529ded25c97bafc81b3f19fef0cbb9ad80fde010e21602c1f48517057932e50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_poincare, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 15:03:27 compute-0 podman[342216]: 2026-01-20 15:03:27.995269944 +0000 UTC m=+0.204418227 container attach 9529ded25c97bafc81b3f19fef0cbb9ad80fde010e21602c1f48517057932e50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_poincare, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 15:03:28 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2363: 321 pgs: 321 active+clean; 505 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 6.0 MiB/s wr, 273 op/s
Jan 20 15:03:28 compute-0 nova_compute[250018]: 2026-01-20 15:03:28.397 250022 DEBUG nova.compute.manager [None req-31117064-3f6f-4608-8f82-d04bf57ff108 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: 83bbf40a-f44e-42fe-b09a-0e635a302f6d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:03:28 compute-0 nova_compute[250018]: 2026-01-20 15:03:28.456 250022 INFO nova.compute.manager [None req-31117064-3f6f-4608-8f82-d04bf57ff108 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: 83bbf40a-f44e-42fe-b09a-0e635a302f6d] instance snapshotting
Jan 20 15:03:28 compute-0 unruffled_poincare[342233]: {
Jan 20 15:03:28 compute-0 unruffled_poincare[342233]:     "0": [
Jan 20 15:03:28 compute-0 unruffled_poincare[342233]:         {
Jan 20 15:03:28 compute-0 unruffled_poincare[342233]:             "devices": [
Jan 20 15:03:28 compute-0 unruffled_poincare[342233]:                 "/dev/loop3"
Jan 20 15:03:28 compute-0 unruffled_poincare[342233]:             ],
Jan 20 15:03:28 compute-0 unruffled_poincare[342233]:             "lv_name": "ceph_lv0",
Jan 20 15:03:28 compute-0 unruffled_poincare[342233]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 15:03:28 compute-0 unruffled_poincare[342233]:             "lv_size": "7511998464",
Jan 20 15:03:28 compute-0 unruffled_poincare[342233]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e399cf45-e6b6-5393-99f1-75c601d3f188,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afc3740a-ccee-46f8-b343-22648cc89376,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 20 15:03:28 compute-0 unruffled_poincare[342233]:             "lv_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 15:03:28 compute-0 unruffled_poincare[342233]:             "name": "ceph_lv0",
Jan 20 15:03:28 compute-0 unruffled_poincare[342233]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 15:03:28 compute-0 unruffled_poincare[342233]:             "tags": {
Jan 20 15:03:28 compute-0 unruffled_poincare[342233]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 15:03:28 compute-0 unruffled_poincare[342233]:                 "ceph.block_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 15:03:28 compute-0 unruffled_poincare[342233]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 15:03:28 compute-0 unruffled_poincare[342233]:                 "ceph.cluster_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 15:03:28 compute-0 unruffled_poincare[342233]:                 "ceph.cluster_name": "ceph",
Jan 20 15:03:28 compute-0 unruffled_poincare[342233]:                 "ceph.crush_device_class": "",
Jan 20 15:03:28 compute-0 unruffled_poincare[342233]:                 "ceph.encrypted": "0",
Jan 20 15:03:28 compute-0 unruffled_poincare[342233]:                 "ceph.osd_fsid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 15:03:28 compute-0 unruffled_poincare[342233]:                 "ceph.osd_id": "0",
Jan 20 15:03:28 compute-0 unruffled_poincare[342233]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 15:03:28 compute-0 unruffled_poincare[342233]:                 "ceph.type": "block",
Jan 20 15:03:28 compute-0 unruffled_poincare[342233]:                 "ceph.vdo": "0"
Jan 20 15:03:28 compute-0 unruffled_poincare[342233]:             },
Jan 20 15:03:28 compute-0 unruffled_poincare[342233]:             "type": "block",
Jan 20 15:03:28 compute-0 unruffled_poincare[342233]:             "vg_name": "ceph_vg0"
Jan 20 15:03:28 compute-0 unruffled_poincare[342233]:         }
Jan 20 15:03:28 compute-0 unruffled_poincare[342233]:     ]
Jan 20 15:03:28 compute-0 unruffled_poincare[342233]: }
Jan 20 15:03:28 compute-0 systemd[1]: libpod-9529ded25c97bafc81b3f19fef0cbb9ad80fde010e21602c1f48517057932e50.scope: Deactivated successfully.
Jan 20 15:03:28 compute-0 podman[342216]: 2026-01-20 15:03:28.785050948 +0000 UTC m=+0.994199211 container died 9529ded25c97bafc81b3f19fef0cbb9ad80fde010e21602c1f48517057932e50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_poincare, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:03:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-fd680838ef5e96a4301abdc0d70c34e28d4f65326001ecad7d2ad1f7bf42c240-merged.mount: Deactivated successfully.
Jan 20 15:03:28 compute-0 podman[342216]: 2026-01-20 15:03:28.838334663 +0000 UTC m=+1.047482926 container remove 9529ded25c97bafc81b3f19fef0cbb9ad80fde010e21602c1f48517057932e50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_poincare, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Jan 20 15:03:28 compute-0 systemd[1]: libpod-conmon-9529ded25c97bafc81b3f19fef0cbb9ad80fde010e21602c1f48517057932e50.scope: Deactivated successfully.
Jan 20 15:03:28 compute-0 sudo[342110]: pam_unix(sudo:session): session closed for user root
Jan 20 15:03:28 compute-0 sudo[342256]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:03:28 compute-0 sudo[342256]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:03:28 compute-0 sudo[342256]: pam_unix(sudo:session): session closed for user root
Jan 20 15:03:28 compute-0 nova_compute[250018]: 2026-01-20 15:03:28.988 250022 INFO nova.virt.libvirt.driver [None req-31117064-3f6f-4608-8f82-d04bf57ff108 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: 83bbf40a-f44e-42fe-b09a-0e635a302f6d] Beginning live snapshot process
Jan 20 15:03:29 compute-0 sudo[342281]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:03:29 compute-0 sudo[342281]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:03:29 compute-0 sudo[342281]: pam_unix(sudo:session): session closed for user root
Jan 20 15:03:29 compute-0 ceph-mon[74360]: pgmap v2363: 321 pgs: 321 active+clean; 505 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 6.0 MiB/s wr, 273 op/s
Jan 20 15:03:29 compute-0 sudo[342306]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:03:29 compute-0 sudo[342306]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:03:29 compute-0 sudo[342306]: pam_unix(sudo:session): session closed for user root
Jan 20 15:03:29 compute-0 sudo[342331]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- raw list --format json
Jan 20 15:03:29 compute-0 sudo[342331]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:03:29 compute-0 nova_compute[250018]: 2026-01-20 15:03:29.391 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:03:29 compute-0 podman[342394]: 2026-01-20 15:03:29.471235631 +0000 UTC m=+0.038945030 container create 4a8635fa7c9880aebe2b25f30da814165f6fdc68ece4fc9949860ac8b56fb601 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_wozniak, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 20 15:03:29 compute-0 systemd[1]: Started libpod-conmon-4a8635fa7c9880aebe2b25f30da814165f6fdc68ece4fc9949860ac8b56fb601.scope.
Jan 20 15:03:29 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:03:29 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:03:29 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:03:29.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:03:29 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:03:29 compute-0 podman[342394]: 2026-01-20 15:03:29.545526082 +0000 UTC m=+0.113235511 container init 4a8635fa7c9880aebe2b25f30da814165f6fdc68ece4fc9949860ac8b56fb601 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_wozniak, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 20 15:03:29 compute-0 podman[342394]: 2026-01-20 15:03:29.453961986 +0000 UTC m=+0.021671405 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:03:29 compute-0 podman[342394]: 2026-01-20 15:03:29.551119623 +0000 UTC m=+0.118829022 container start 4a8635fa7c9880aebe2b25f30da814165f6fdc68ece4fc9949860ac8b56fb601 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_wozniak, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Jan 20 15:03:29 compute-0 podman[342394]: 2026-01-20 15:03:29.554567166 +0000 UTC m=+0.122276615 container attach 4a8635fa7c9880aebe2b25f30da814165f6fdc68ece4fc9949860ac8b56fb601 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_wozniak, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 15:03:29 compute-0 focused_wozniak[342410]: 167 167
Jan 20 15:03:29 compute-0 systemd[1]: libpod-4a8635fa7c9880aebe2b25f30da814165f6fdc68ece4fc9949860ac8b56fb601.scope: Deactivated successfully.
Jan 20 15:03:29 compute-0 podman[342394]: 2026-01-20 15:03:29.557634709 +0000 UTC m=+0.125344118 container died 4a8635fa7c9880aebe2b25f30da814165f6fdc68ece4fc9949860ac8b56fb601 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_wozniak, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Jan 20 15:03:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-aa0f17466c9b4fe6353d865dbf5a1e451026cc0dcffc2531fa8f41e24241f84e-merged.mount: Deactivated successfully.
Jan 20 15:03:29 compute-0 podman[342394]: 2026-01-20 15:03:29.59814632 +0000 UTC m=+0.165855719 container remove 4a8635fa7c9880aebe2b25f30da814165f6fdc68ece4fc9949860ac8b56fb601 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_wozniak, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 20 15:03:29 compute-0 systemd[1]: libpod-conmon-4a8635fa7c9880aebe2b25f30da814165f6fdc68ece4fc9949860ac8b56fb601.scope: Deactivated successfully.
Jan 20 15:03:29 compute-0 podman[342433]: 2026-01-20 15:03:29.758494779 +0000 UTC m=+0.039016843 container create 660cd96331b5fbb1b1d0e3c78168efe15914e0e2df8aec57538acc6d6a13c5d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_davinci, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 15:03:29 compute-0 systemd[1]: Started libpod-conmon-660cd96331b5fbb1b1d0e3c78168efe15914e0e2df8aec57538acc6d6a13c5d8.scope.
Jan 20 15:03:29 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:03:29 compute-0 podman[342433]: 2026-01-20 15:03:29.739695543 +0000 UTC m=+0.020217637 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:03:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee3d8dec9009f998dd7524285a2b42b1cf5113e4cd0f3addb7738e8c3fd6b8a1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 15:03:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee3d8dec9009f998dd7524285a2b42b1cf5113e4cd0f3addb7738e8c3fd6b8a1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 15:03:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee3d8dec9009f998dd7524285a2b42b1cf5113e4cd0f3addb7738e8c3fd6b8a1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 15:03:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee3d8dec9009f998dd7524285a2b42b1cf5113e4cd0f3addb7738e8c3fd6b8a1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 15:03:29 compute-0 podman[342433]: 2026-01-20 15:03:29.859578561 +0000 UTC m=+0.140100655 container init 660cd96331b5fbb1b1d0e3c78168efe15914e0e2df8aec57538acc6d6a13c5d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_davinci, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 15:03:29 compute-0 podman[342433]: 2026-01-20 15:03:29.868571454 +0000 UTC m=+0.149093528 container start 660cd96331b5fbb1b1d0e3c78168efe15914e0e2df8aec57538acc6d6a13c5d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_davinci, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 20 15:03:29 compute-0 podman[342433]: 2026-01-20 15:03:29.871921484 +0000 UTC m=+0.152443558 container attach 660cd96331b5fbb1b1d0e3c78168efe15914e0e2df8aec57538acc6d6a13c5d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_davinci, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 15:03:29 compute-0 nova_compute[250018]: 2026-01-20 15:03:29.925 250022 DEBUG nova.virt.libvirt.imagebackend [None req-31117064-3f6f-4608-8f82-d04bf57ff108 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] No parent info for a32b3e07-16d8-46fd-9a7b-c242c432fcf9; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Jan 20 15:03:29 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:03:29 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:03:29 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:03:29.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:03:30 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e342 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:03:30 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2364: 321 pgs: 321 active+clean; 480 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 1.4 MiB/s wr, 271 op/s
Jan 20 15:03:30 compute-0 nova_compute[250018]: 2026-01-20 15:03:30.275 250022 DEBUG nova.storage.rbd_utils [None req-31117064-3f6f-4608-8f82-d04bf57ff108 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] creating snapshot(6a6a190d61194e63a0d76137971befa5) on rbd image(83bbf40a-f44e-42fe-b09a-0e635a302f6d_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 20 15:03:30 compute-0 xenodochial_davinci[342451]: {
Jan 20 15:03:30 compute-0 xenodochial_davinci[342451]:     "afc3740a-ccee-46f8-b343-22648cc89376": {
Jan 20 15:03:30 compute-0 xenodochial_davinci[342451]:         "ceph_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 15:03:30 compute-0 xenodochial_davinci[342451]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 20 15:03:30 compute-0 xenodochial_davinci[342451]:         "osd_id": 0,
Jan 20 15:03:30 compute-0 xenodochial_davinci[342451]:         "osd_uuid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 15:03:30 compute-0 xenodochial_davinci[342451]:         "type": "bluestore"
Jan 20 15:03:30 compute-0 xenodochial_davinci[342451]:     }
Jan 20 15:03:30 compute-0 xenodochial_davinci[342451]: }
Jan 20 15:03:30 compute-0 systemd[1]: libpod-660cd96331b5fbb1b1d0e3c78168efe15914e0e2df8aec57538acc6d6a13c5d8.scope: Deactivated successfully.
Jan 20 15:03:30 compute-0 podman[342433]: 2026-01-20 15:03:30.752400011 +0000 UTC m=+1.032922095 container died 660cd96331b5fbb1b1d0e3c78168efe15914e0e2df8aec57538acc6d6a13c5d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_davinci, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 20 15:03:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:03:30.773 160071 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:03:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:03:30.774 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:03:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:03:30.775 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:03:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-ee3d8dec9009f998dd7524285a2b42b1cf5113e4cd0f3addb7738e8c3fd6b8a1-merged.mount: Deactivated successfully.
Jan 20 15:03:30 compute-0 podman[342433]: 2026-01-20 15:03:30.809491059 +0000 UTC m=+1.090013133 container remove 660cd96331b5fbb1b1d0e3c78168efe15914e0e2df8aec57538acc6d6a13c5d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_davinci, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 20 15:03:30 compute-0 systemd[1]: libpod-conmon-660cd96331b5fbb1b1d0e3c78168efe15914e0e2df8aec57538acc6d6a13c5d8.scope: Deactivated successfully.
Jan 20 15:03:30 compute-0 sudo[342331]: pam_unix(sudo:session): session closed for user root
Jan 20 15:03:30 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 15:03:30 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:03:30 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 15:03:30 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:03:30 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 43869a7d-0558-4517-8461-7be7fed88a53 does not exist
Jan 20 15:03:30 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 9202d75c-de5f-474b-b6de-936a7c0801ae does not exist
Jan 20 15:03:30 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev fbe81470-dfd8-476c-bb3d-880e8bcdca5a does not exist
Jan 20 15:03:31 compute-0 sudo[342534]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:03:31 compute-0 sudo[342534]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:03:31 compute-0 sudo[342534]: pam_unix(sudo:session): session closed for user root
Jan 20 15:03:31 compute-0 sudo[342559]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 15:03:31 compute-0 sudo[342559]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:03:31 compute-0 sudo[342559]: pam_unix(sudo:session): session closed for user root
Jan 20 15:03:31 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e342 do_prune osdmap full prune enabled
Jan 20 15:03:31 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e343 e343: 3 total, 3 up, 3 in
Jan 20 15:03:31 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e343: 3 total, 3 up, 3 in
Jan 20 15:03:31 compute-0 ceph-mon[74360]: pgmap v2364: 321 pgs: 321 active+clean; 480 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 1.4 MiB/s wr, 271 op/s
Jan 20 15:03:31 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:03:31 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:03:31 compute-0 nova_compute[250018]: 2026-01-20 15:03:31.415 250022 DEBUG nova.storage.rbd_utils [None req-31117064-3f6f-4608-8f82-d04bf57ff108 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] cloning vms/83bbf40a-f44e-42fe-b09a-0e635a302f6d_disk@6a6a190d61194e63a0d76137971befa5 to images/7f0d068e-5d2b-485d-b65c-7244508ab6b6 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Jan 20 15:03:31 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:03:31 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:03:31 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:03:31.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:03:31 compute-0 nova_compute[250018]: 2026-01-20 15:03:31.635 250022 DEBUG nova.storage.rbd_utils [None req-31117064-3f6f-4608-8f82-d04bf57ff108 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] flattening images/7f0d068e-5d2b-485d-b65c-7244508ab6b6 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Jan 20 15:03:31 compute-0 nova_compute[250018]: 2026-01-20 15:03:31.772 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:03:31 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:03:31 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:03:31 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:03:31.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:03:31 compute-0 nova_compute[250018]: 2026-01-20 15:03:31.974 250022 DEBUG nova.storage.rbd_utils [None req-31117064-3f6f-4608-8f82-d04bf57ff108 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] removing snapshot(6a6a190d61194e63a0d76137971befa5) on rbd image(83bbf40a-f44e-42fe-b09a-0e635a302f6d_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Jan 20 15:03:32 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2366: 321 pgs: 321 active+clean; 488 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 1.9 MiB/s wr, 267 op/s
Jan 20 15:03:32 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e343 do_prune osdmap full prune enabled
Jan 20 15:03:32 compute-0 ceph-mon[74360]: osdmap e343: 3 total, 3 up, 3 in
Jan 20 15:03:32 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e344 e344: 3 total, 3 up, 3 in
Jan 20 15:03:32 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e344: 3 total, 3 up, 3 in
Jan 20 15:03:32 compute-0 nova_compute[250018]: 2026-01-20 15:03:32.439 250022 DEBUG nova.storage.rbd_utils [None req-31117064-3f6f-4608-8f82-d04bf57ff108 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] creating snapshot(snap) on rbd image(7f0d068e-5d2b-485d-b65c-7244508ab6b6) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 20 15:03:33 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e344 do_prune osdmap full prune enabled
Jan 20 15:03:33 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:03:33 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:03:33 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:03:33.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:03:33 compute-0 ceph-mon[74360]: pgmap v2366: 321 pgs: 321 active+clean; 488 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 1.9 MiB/s wr, 267 op/s
Jan 20 15:03:33 compute-0 ceph-mon[74360]: osdmap e344: 3 total, 3 up, 3 in
Jan 20 15:03:33 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e345 e345: 3 total, 3 up, 3 in
Jan 20 15:03:33 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e345: 3 total, 3 up, 3 in
Jan 20 15:03:33 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:03:33 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:03:33 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:03:33.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:03:34 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2369: 321 pgs: 321 active+clean; 516 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 4.8 MiB/s wr, 193 op/s
Jan 20 15:03:34 compute-0 nova_compute[250018]: 2026-01-20 15:03:34.392 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:03:34 compute-0 ceph-mon[74360]: osdmap e345: 3 total, 3 up, 3 in
Jan 20 15:03:34 compute-0 ceph-mon[74360]: pgmap v2369: 321 pgs: 321 active+clean; 516 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 4.8 MiB/s wr, 193 op/s
Jan 20 15:03:35 compute-0 nova_compute[250018]: 2026-01-20 15:03:35.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:03:35 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e345 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:03:35 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e345 do_prune osdmap full prune enabled
Jan 20 15:03:35 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e346 e346: 3 total, 3 up, 3 in
Jan 20 15:03:35 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e346: 3 total, 3 up, 3 in
Jan 20 15:03:35 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:03:35 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:03:35 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:03:35.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:03:35 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:03:35 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:03:35 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:03:35.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:03:36 compute-0 sudo[342678]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:03:36 compute-0 sudo[342678]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:03:36 compute-0 sudo[342678]: pam_unix(sudo:session): session closed for user root
Jan 20 15:03:36 compute-0 sudo[342703]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:03:36 compute-0 sudo[342703]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:03:36 compute-0 sudo[342703]: pam_unix(sudo:session): session closed for user root
Jan 20 15:03:36 compute-0 ceph-mon[74360]: osdmap e346: 3 total, 3 up, 3 in
Jan 20 15:03:36 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2371: 321 pgs: 321 active+clean; 546 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 5.7 MiB/s wr, 149 op/s
Jan 20 15:03:36 compute-0 ovn_controller[148666]: 2026-01-20T15:03:36Z|00064|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:70:7d:d6 10.100.0.11
Jan 20 15:03:36 compute-0 ovn_controller[148666]: 2026-01-20T15:03:36Z|00065|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:70:7d:d6 10.100.0.11
Jan 20 15:03:36 compute-0 nova_compute[250018]: 2026-01-20 15:03:36.775 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:03:37 compute-0 nova_compute[250018]: 2026-01-20 15:03:37.040 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:03:37 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:03:37.042 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=49, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=48) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:03:37 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:03:37.043 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 15:03:37 compute-0 ceph-mon[74360]: pgmap v2371: 321 pgs: 321 active+clean; 546 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 5.7 MiB/s wr, 149 op/s
Jan 20 15:03:37 compute-0 nova_compute[250018]: 2026-01-20 15:03:37.499 250022 INFO nova.virt.libvirt.driver [None req-31117064-3f6f-4608-8f82-d04bf57ff108 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: 83bbf40a-f44e-42fe-b09a-0e635a302f6d] Snapshot image upload complete
Jan 20 15:03:37 compute-0 nova_compute[250018]: 2026-01-20 15:03:37.500 250022 INFO nova.compute.manager [None req-31117064-3f6f-4608-8f82-d04bf57ff108 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: 83bbf40a-f44e-42fe-b09a-0e635a302f6d] Took 9.04 seconds to snapshot the instance on the hypervisor.
Jan 20 15:03:37 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:03:37 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:03:37 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:03:37.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:03:37 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:03:37 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:03:37 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:03:37.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:03:38 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2372: 321 pgs: 321 active+clean; 546 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 4.6 MiB/s wr, 123 op/s
Jan 20 15:03:38 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/1143980325' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:03:39 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e346 do_prune osdmap full prune enabled
Jan 20 15:03:39 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e347 e347: 3 total, 3 up, 3 in
Jan 20 15:03:39 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e347: 3 total, 3 up, 3 in
Jan 20 15:03:39 compute-0 ceph-mon[74360]: pgmap v2372: 321 pgs: 321 active+clean; 546 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 4.6 MiB/s wr, 123 op/s
Jan 20 15:03:39 compute-0 nova_compute[250018]: 2026-01-20 15:03:39.395 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:03:39 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:03:39 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:03:39 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:03:39.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:03:39 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:03:39 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:03:39 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:03:39.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:03:40 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e347 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:03:40 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e347 do_prune osdmap full prune enabled
Jan 20 15:03:40 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e348 e348: 3 total, 3 up, 3 in
Jan 20 15:03:40 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e348: 3 total, 3 up, 3 in
Jan 20 15:03:40 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2375: 321 pgs: 321 active+clean; 575 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 5.7 MiB/s wr, 232 op/s
Jan 20 15:03:40 compute-0 ceph-mon[74360]: osdmap e347: 3 total, 3 up, 3 in
Jan 20 15:03:40 compute-0 ceph-mon[74360]: osdmap e348: 3 total, 3 up, 3 in
Jan 20 15:03:41 compute-0 ceph-mon[74360]: pgmap v2375: 321 pgs: 321 active+clean; 575 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 5.7 MiB/s wr, 232 op/s
Jan 20 15:03:41 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:03:41 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:03:41 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:03:41.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:03:41 compute-0 nova_compute[250018]: 2026-01-20 15:03:41.777 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:03:41 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:03:41 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:03:41 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:03:41.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:03:42 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2376: 321 pgs: 321 active+clean; 560 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 4.4 MiB/s wr, 205 op/s
Jan 20 15:03:42 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/1625105208' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:03:42 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/1704404478' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:03:43 compute-0 ceph-mon[74360]: pgmap v2376: 321 pgs: 321 active+clean; 560 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 4.4 MiB/s wr, 205 op/s
Jan 20 15:03:43 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:03:43 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:03:43 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:03:43.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:03:43 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:03:43 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:03:43 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:03:43.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:03:44 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2377: 321 pgs: 321 active+clean; 528 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 541 KiB/s rd, 3.2 MiB/s wr, 134 op/s
Jan 20 15:03:44 compute-0 nova_compute[250018]: 2026-01-20 15:03:44.433 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:03:45 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:03:45.045 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=367c1a2c-b16a-4828-ab5a-626bb50023b4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '49'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:03:45 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e348 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:03:45 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e348 do_prune osdmap full prune enabled
Jan 20 15:03:45 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e349 e349: 3 total, 3 up, 3 in
Jan 20 15:03:45 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e349: 3 total, 3 up, 3 in
Jan 20 15:03:45 compute-0 ceph-mon[74360]: pgmap v2377: 321 pgs: 321 active+clean; 528 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 541 KiB/s rd, 3.2 MiB/s wr, 134 op/s
Jan 20 15:03:45 compute-0 ceph-mon[74360]: osdmap e349: 3 total, 3 up, 3 in
Jan 20 15:03:45 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:03:45 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:03:45 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:03:45.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:03:45 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:03:45 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:03:45 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:03:45.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:03:46 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2379: 321 pgs: 321 active+clean; 455 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 624 KiB/s rd, 2.5 MiB/s wr, 181 op/s
Jan 20 15:03:46 compute-0 podman[342734]: 2026-01-20 15:03:46.470765042 +0000 UTC m=+0.054475329 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 15:03:46 compute-0 podman[342733]: 2026-01-20 15:03:46.505250401 +0000 UTC m=+0.088414903 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_controller)
Jan 20 15:03:46 compute-0 nova_compute[250018]: 2026-01-20 15:03:46.779 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:03:47 compute-0 ceph-mon[74360]: pgmap v2379: 321 pgs: 321 active+clean; 455 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 624 KiB/s rd, 2.5 MiB/s wr, 181 op/s
Jan 20 15:03:47 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/375679175' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:03:47 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:03:47 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:03:47 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:03:47.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:03:47 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:03:47 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:03:47 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:03:47.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:03:48 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2380: 321 pgs: 321 active+clean; 455 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 146 KiB/s rd, 703 KiB/s wr, 78 op/s
Jan 20 15:03:48 compute-0 ceph-mon[74360]: pgmap v2380: 321 pgs: 321 active+clean; 455 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 146 KiB/s rd, 703 KiB/s wr, 78 op/s
Jan 20 15:03:49 compute-0 nova_compute[250018]: 2026-01-20 15:03:49.470 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:03:49 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:03:49 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:03:49 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:03:49.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:03:49 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/1811257346' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:03:49 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:03:49 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:03:49 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:03:49.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:03:50 compute-0 nova_compute[250018]: 2026-01-20 15:03:50.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:03:50 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e349 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:03:50 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2381: 321 pgs: 321 active+clean; 420 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 582 KiB/s wr, 143 op/s
Jan 20 15:03:50 compute-0 ceph-mon[74360]: pgmap v2381: 321 pgs: 321 active+clean; 420 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 582 KiB/s wr, 143 op/s
Jan 20 15:03:51 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:03:51 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:03:51 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:03:51.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:03:51 compute-0 nova_compute[250018]: 2026-01-20 15:03:51.781 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:03:51 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:03:51 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:03:51 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:03:51.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:03:52 compute-0 nova_compute[250018]: 2026-01-20 15:03:52.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:03:52 compute-0 nova_compute[250018]: 2026-01-20 15:03:52.052 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:03:52 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2382: 321 pgs: 321 active+clean; 420 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 38 KiB/s wr, 137 op/s
Jan 20 15:03:52 compute-0 nova_compute[250018]: 2026-01-20 15:03:52.311 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:03:52 compute-0 nova_compute[250018]: 2026-01-20 15:03:52.311 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:03:52 compute-0 nova_compute[250018]: 2026-01-20 15:03:52.312 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:03:52 compute-0 nova_compute[250018]: 2026-01-20 15:03:52.312 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 15:03:52 compute-0 nova_compute[250018]: 2026-01-20 15:03:52.312 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:03:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:03:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:03:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:03:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:03:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:03:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:03:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Optimize plan auto_2026-01-20_15:03:52
Jan 20 15:03:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 15:03:52 compute-0 ceph-mgr[74653]: [balancer INFO root] do_upmap
Jan 20 15:03:52 compute-0 ceph-mgr[74653]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'cephfs.cephfs.data', 'volumes', '.rgw.root', '.mgr', 'backups', 'default.rgw.meta', 'vms', 'default.rgw.control', 'images', 'default.rgw.log']
Jan 20 15:03:52 compute-0 ceph-mgr[74653]: [balancer INFO root] prepared 0/10 changes
Jan 20 15:03:52 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:03:52 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3845887982' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:03:52 compute-0 nova_compute[250018]: 2026-01-20 15:03:52.803 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:03:52 compute-0 nova_compute[250018]: 2026-01-20 15:03:52.916 250022 DEBUG nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] skipping disk for instance-00000095 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 15:03:52 compute-0 nova_compute[250018]: 2026-01-20 15:03:52.917 250022 DEBUG nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] skipping disk for instance-00000095 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 15:03:52 compute-0 nova_compute[250018]: 2026-01-20 15:03:52.921 250022 DEBUG nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] skipping disk for instance-0000009d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 15:03:52 compute-0 nova_compute[250018]: 2026-01-20 15:03:52.922 250022 DEBUG nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] skipping disk for instance-0000009d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 15:03:53 compute-0 nova_compute[250018]: 2026-01-20 15:03:53.097 250022 WARNING nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:03:53 compute-0 nova_compute[250018]: 2026-01-20 15:03:53.098 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3817MB free_disk=20.830543518066406GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 15:03:53 compute-0 nova_compute[250018]: 2026-01-20 15:03:53.098 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:03:53 compute-0 nova_compute[250018]: 2026-01-20 15:03:53.098 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:03:53 compute-0 nova_compute[250018]: 2026-01-20 15:03:53.332 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Instance b3f961d2-e73f-49bf-b141-6505e77ad9ac actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 15:03:53 compute-0 nova_compute[250018]: 2026-01-20 15:03:53.332 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Instance 83bbf40a-f44e-42fe-b09a-0e635a302f6d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 15:03:53 compute-0 nova_compute[250018]: 2026-01-20 15:03:53.332 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 15:03:53 compute-0 nova_compute[250018]: 2026-01-20 15:03:53.333 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 15:03:53 compute-0 ceph-mon[74360]: pgmap v2382: 321 pgs: 321 active+clean; 420 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 38 KiB/s wr, 137 op/s
Jan 20 15:03:53 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3845887982' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:03:53 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/3930603493' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:03:53 compute-0 nova_compute[250018]: 2026-01-20 15:03:53.408 250022 DEBUG nova.scheduler.client.report [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Refreshing inventories for resource provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 20 15:03:53 compute-0 nova_compute[250018]: 2026-01-20 15:03:53.432 250022 DEBUG nova.scheduler.client.report [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Updating ProviderTree inventory for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 20 15:03:53 compute-0 nova_compute[250018]: 2026-01-20 15:03:53.432 250022 DEBUG nova.compute.provider_tree [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Updating inventory in ProviderTree for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 20 15:03:53 compute-0 nova_compute[250018]: 2026-01-20 15:03:53.499 250022 DEBUG nova.scheduler.client.report [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Refreshing aggregate associations for resource provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 20 15:03:53 compute-0 nova_compute[250018]: 2026-01-20 15:03:53.529 250022 DEBUG nova.scheduler.client.report [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Refreshing trait associations for resource provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed, traits: COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_TRUSTED_CERTS,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NODE,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_RESCUE_BFV,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SSE2,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VOLUME_EXTEND,COMPUTE_ACCELERATORS,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_SATA,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE41,HW_CPU_X86_MMX,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_IDE,COMPUTE_DEVICE_TAGGING _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 20 15:03:53 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:03:53 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:03:53 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:03:53.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:03:53 compute-0 nova_compute[250018]: 2026-01-20 15:03:53.684 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:03:53 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:03:53 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:03:53 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:03:53.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:03:54 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:03:54 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/548541487' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:03:54 compute-0 nova_compute[250018]: 2026-01-20 15:03:54.138 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:03:54 compute-0 nova_compute[250018]: 2026-01-20 15:03:54.145 250022 DEBUG nova.compute.provider_tree [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:03:54 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2383: 321 pgs: 321 active+clean; 420 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 37 KiB/s wr, 135 op/s
Jan 20 15:03:54 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/1886119184' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:03:54 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/548541487' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:03:54 compute-0 nova_compute[250018]: 2026-01-20 15:03:54.472 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:03:55 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e349 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:03:55 compute-0 nova_compute[250018]: 2026-01-20 15:03:55.335 250022 DEBUG nova.scheduler.client.report [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:03:55 compute-0 ceph-mon[74360]: pgmap v2383: 321 pgs: 321 active+clean; 420 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 37 KiB/s wr, 135 op/s
Jan 20 15:03:55 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:03:55 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:03:55 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:03:55.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:03:55 compute-0 nova_compute[250018]: 2026-01-20 15:03:55.642 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 15:03:55 compute-0 nova_compute[250018]: 2026-01-20 15:03:55.642 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.544s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:03:55 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:03:55 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:03:55 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:03:55.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:03:56 compute-0 sudo[342828]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:03:56 compute-0 sudo[342828]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:03:56 compute-0 sudo[342828]: pam_unix(sudo:session): session closed for user root
Jan 20 15:03:56 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2384: 321 pgs: 321 active+clean; 420 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 32 KiB/s wr, 102 op/s
Jan 20 15:03:56 compute-0 sudo[342853]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:03:56 compute-0 sudo[342853]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:03:56 compute-0 sudo[342853]: pam_unix(sudo:session): session closed for user root
Jan 20 15:03:56 compute-0 ceph-mon[74360]: pgmap v2384: 321 pgs: 321 active+clean; 420 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 32 KiB/s wr, 102 op/s
Jan 20 15:03:56 compute-0 ceph-mon[74360]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #114. Immutable memtables: 0.
Jan 20 15:03:56 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:03:56.518373) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 15:03:56 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:856] [default] [JOB 67] Flushing memtable with next log file: 114
Jan 20 15:03:56 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768921436518490, "job": 67, "event": "flush_started", "num_memtables": 1, "num_entries": 2260, "num_deletes": 257, "total_data_size": 3784096, "memory_usage": 3830272, "flush_reason": "Manual Compaction"}
Jan 20 15:03:56 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:885] [default] [JOB 67] Level-0 flush table #115: started
Jan 20 15:03:56 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768921436544678, "cf_name": "default", "job": 67, "event": "table_file_creation", "file_number": 115, "file_size": 3702117, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 51395, "largest_seqno": 53654, "table_properties": {"data_size": 3691912, "index_size": 6507, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2629, "raw_key_size": 21936, "raw_average_key_size": 21, "raw_value_size": 3671190, "raw_average_value_size": 3519, "num_data_blocks": 281, "num_entries": 1043, "num_filter_entries": 1043, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768921252, "oldest_key_time": 1768921252, "file_creation_time": 1768921436, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1688fb6f-f397-4aac-a0e8-874ff91d97a7", "db_session_id": "5V3N2TVXYZBCXP55EZHK", "orig_file_number": 115, "seqno_to_time_mapping": "N/A"}}
Jan 20 15:03:56 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 67] Flush lasted 26306 microseconds, and 10711 cpu microseconds.
Jan 20 15:03:56 compute-0 ceph-mon[74360]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 15:03:56 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:03:56.544723) [db/flush_job.cc:967] [default] [JOB 67] Level-0 flush table #115: 3702117 bytes OK
Jan 20 15:03:56 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:03:56.544741) [db/memtable_list.cc:519] [default] Level-0 commit table #115 started
Jan 20 15:03:56 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:03:56.546238) [db/memtable_list.cc:722] [default] Level-0 commit table #115: memtable #1 done
Jan 20 15:03:56 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:03:56.546253) EVENT_LOG_v1 {"time_micros": 1768921436546248, "job": 67, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 15:03:56 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:03:56.546270) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 15:03:56 compute-0 ceph-mon[74360]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 67] Try to delete WAL files size 3774763, prev total WAL file size 3774763, number of live WAL files 2.
Jan 20 15:03:56 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000111.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:03:56 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:03:56.547463) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034373639' seq:72057594037927935, type:22 .. '7061786F730035303231' seq:0, type:0; will stop at (end)
Jan 20 15:03:56 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 68] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 15:03:56 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 67 Base level 0, inputs: [115(3615KB)], [113(11MB)]
Jan 20 15:03:56 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768921436547521, "job": 68, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [115], "files_L6": [113], "score": -1, "input_data_size": 15695220, "oldest_snapshot_seqno": -1}
Jan 20 15:03:56 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 68] Generated table #116: 8528 keys, 13824775 bytes, temperature: kUnknown
Jan 20 15:03:56 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768921436655175, "cf_name": "default", "job": 68, "event": "table_file_creation", "file_number": 116, "file_size": 13824775, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13765498, "index_size": 36829, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21381, "raw_key_size": 219889, "raw_average_key_size": 25, "raw_value_size": 13611442, "raw_average_value_size": 1596, "num_data_blocks": 1451, "num_entries": 8528, "num_filter_entries": 8528, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768917305, "oldest_key_time": 0, "file_creation_time": 1768921436, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1688fb6f-f397-4aac-a0e8-874ff91d97a7", "db_session_id": "5V3N2TVXYZBCXP55EZHK", "orig_file_number": 116, "seqno_to_time_mapping": "N/A"}}
Jan 20 15:03:56 compute-0 ceph-mon[74360]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 15:03:56 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:03:56.655374) [db/compaction/compaction_job.cc:1663] [default] [JOB 68] Compacted 1@0 + 1@6 files to L6 => 13824775 bytes
Jan 20 15:03:56 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:03:56.656618) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 145.7 rd, 128.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.5, 11.4 +0.0 blob) out(13.2 +0.0 blob), read-write-amplify(8.0) write-amplify(3.7) OK, records in: 9060, records dropped: 532 output_compression: NoCompression
Jan 20 15:03:56 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:03:56.656633) EVENT_LOG_v1 {"time_micros": 1768921436656625, "job": 68, "event": "compaction_finished", "compaction_time_micros": 107716, "compaction_time_cpu_micros": 38712, "output_level": 6, "num_output_files": 1, "total_output_size": 13824775, "num_input_records": 9060, "num_output_records": 8528, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 15:03:56 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000115.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:03:56 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768921436657359, "job": 68, "event": "table_file_deletion", "file_number": 115}
Jan 20 15:03:56 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000113.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:03:56 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768921436659594, "job": 68, "event": "table_file_deletion", "file_number": 113}
Jan 20 15:03:56 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:03:56.547357) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:03:56 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:03:56.659711) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:03:56 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:03:56.659717) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:03:56 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:03:56.659719) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:03:56 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:03:56.659721) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:03:56 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:03:56.659723) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:03:56 compute-0 nova_compute[250018]: 2026-01-20 15:03:56.783 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:03:57 compute-0 nova_compute[250018]: 2026-01-20 15:03:57.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:03:57 compute-0 nova_compute[250018]: 2026-01-20 15:03:57.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:03:57 compute-0 nova_compute[250018]: 2026-01-20 15:03:57.051 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 15:03:57 compute-0 nova_compute[250018]: 2026-01-20 15:03:57.051 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 15:03:57 compute-0 nova_compute[250018]: 2026-01-20 15:03:57.518 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "refresh_cache-b3f961d2-e73f-49bf-b141-6505e77ad9ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:03:57 compute-0 nova_compute[250018]: 2026-01-20 15:03:57.519 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquired lock "refresh_cache-b3f961d2-e73f-49bf-b141-6505e77ad9ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:03:57 compute-0 nova_compute[250018]: 2026-01-20 15:03:57.519 250022 DEBUG nova.network.neutron [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] [instance: b3f961d2-e73f-49bf-b141-6505e77ad9ac] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 20 15:03:57 compute-0 nova_compute[250018]: 2026-01-20 15:03:57.519 250022 DEBUG nova.objects.instance [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b3f961d2-e73f-49bf-b141-6505e77ad9ac obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:03:57 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:03:57 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:03:57 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:03:57.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:03:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 15:03:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 15:03:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 15:03:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 15:03:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 15:03:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 15:03:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 15:03:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 15:03:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 15:03:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 15:03:57 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/3864538602' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:03:57 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:03:57 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:03:57 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:03:57.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:03:58 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2385: 321 pgs: 321 active+clean; 420 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 18 KiB/s wr, 89 op/s
Jan 20 15:03:58 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/571537298' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:03:58 compute-0 ceph-mon[74360]: pgmap v2385: 321 pgs: 321 active+clean; 420 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 18 KiB/s wr, 89 op/s
Jan 20 15:03:59 compute-0 nova_compute[250018]: 2026-01-20 15:03:59.474 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:03:59 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:03:59 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:03:59 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:03:59.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:04:00 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:04:00 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:04:00 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:04:00.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:04:00 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e349 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:04:00 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2386: 321 pgs: 321 active+clean; 438 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.6 MiB/s wr, 130 op/s
Jan 20 15:04:01 compute-0 ceph-mon[74360]: pgmap v2386: 321 pgs: 321 active+clean; 438 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.6 MiB/s wr, 130 op/s
Jan 20 15:04:01 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:04:01 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:04:01 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:04:01.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:04:01 compute-0 nova_compute[250018]: 2026-01-20 15:04:01.785 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:04:02 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:04:02 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:04:02 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:04:02.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:04:02 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2387: 321 pgs: 321 active+clean; 451 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.1 MiB/s wr, 77 op/s
Jan 20 15:04:02 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/3862250939' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:04:02 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/656038064' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:04:03 compute-0 nova_compute[250018]: 2026-01-20 15:04:03.165 250022 DEBUG nova.network.neutron [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] [instance: b3f961d2-e73f-49bf-b141-6505e77ad9ac] Updating instance_info_cache with network_info: [{"id": "07baaccf-06f0-4af0-a04a-9638078c313f", "address": "fa:16:3e:12:81:5e", "network": {"id": "671e28d0-0b9e-41e0-b5e0-db1ccd4717ec", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-884777184-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63555e5851564db08c6429231d264f2c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07baaccf-06", "ovs_interfaceid": "07baaccf-06f0-4af0-a04a-9638078c313f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:04:03 compute-0 ceph-mon[74360]: pgmap v2387: 321 pgs: 321 active+clean; 451 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.1 MiB/s wr, 77 op/s
Jan 20 15:04:03 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:04:03 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:04:03 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:04:03.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:04:04 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:04:04 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:04:04 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:04:04.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:04:04 compute-0 nova_compute[250018]: 2026-01-20 15:04:04.208 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Releasing lock "refresh_cache-b3f961d2-e73f-49bf-b141-6505e77ad9ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:04:04 compute-0 nova_compute[250018]: 2026-01-20 15:04:04.209 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] [instance: b3f961d2-e73f-49bf-b141-6505e77ad9ac] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 20 15:04:04 compute-0 nova_compute[250018]: 2026-01-20 15:04:04.210 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:04:04 compute-0 nova_compute[250018]: 2026-01-20 15:04:04.211 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:04:04 compute-0 nova_compute[250018]: 2026-01-20 15:04:04.212 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:04:04 compute-0 nova_compute[250018]: 2026-01-20 15:04:04.212 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 15:04:04 compute-0 nova_compute[250018]: 2026-01-20 15:04:04.213 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:04:04 compute-0 nova_compute[250018]: 2026-01-20 15:04:04.214 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 20 15:04:04 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2388: 321 pgs: 321 active+clean; 453 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.1 MiB/s wr, 71 op/s
Jan 20 15:04:04 compute-0 nova_compute[250018]: 2026-01-20 15:04:04.477 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:04:04 compute-0 nova_compute[250018]: 2026-01-20 15:04:04.704 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 20 15:04:04 compute-0 nova_compute[250018]: 2026-01-20 15:04:04.705 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:04:04 compute-0 nova_compute[250018]: 2026-01-20 15:04:04.705 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 20 15:04:05 compute-0 nova_compute[250018]: 2026-01-20 15:04:05.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:04:05 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e349 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:04:05 compute-0 ceph-mon[74360]: pgmap v2388: 321 pgs: 321 active+clean; 453 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.1 MiB/s wr, 71 op/s
Jan 20 15:04:05 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:04:05 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:04:05 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:04:05.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:04:06 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:04:06 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:04:06 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:04:06.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:04:06 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2389: 321 pgs: 321 active+clean; 453 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 2.1 MiB/s wr, 133 op/s
Jan 20 15:04:06 compute-0 nova_compute[250018]: 2026-01-20 15:04:06.788 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:04:07 compute-0 ceph-mon[74360]: pgmap v2389: 321 pgs: 321 active+clean; 453 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 2.1 MiB/s wr, 133 op/s
Jan 20 15:04:07 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:04:07 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:04:07 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:04:07.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:04:08 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:04:08 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:04:08 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:04:08.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:04:08 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2390: 321 pgs: 321 active+clean; 453 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.1 MiB/s wr, 132 op/s
Jan 20 15:04:09 compute-0 ceph-mon[74360]: pgmap v2390: 321 pgs: 321 active+clean; 453 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.1 MiB/s wr, 132 op/s
Jan 20 15:04:09 compute-0 nova_compute[250018]: 2026-01-20 15:04:09.537 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:04:09 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:04:09 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:04:09 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:04:09.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:04:10 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:04:10 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:04:10 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:04:10.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:04:10 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e349 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:04:10 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2391: 321 pgs: 321 active+clean; 474 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.1 MiB/s wr, 169 op/s
Jan 20 15:04:10 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 15:04:10 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1358398100' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:04:11 compute-0 ceph-mon[74360]: pgmap v2391: 321 pgs: 321 active+clean; 474 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.1 MiB/s wr, 169 op/s
Jan 20 15:04:11 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/1358398100' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:04:11 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:04:11 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:04:11 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:04:11.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:04:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 15:04:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:04:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 20 15:04:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:04:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00869273305183324 of space, bias 1.0, pg target 2.607819915549972 quantized to 32 (current 32)
Jan 20 15:04:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:04:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0005052752186860228 of space, bias 1.0, pg target 0.15057201516843477 quantized to 32 (current 32)
Jan 20 15:04:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:04:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 15:04:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:04:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.003880986239065699 of space, bias 1.0, pg target 1.1565338992415783 quantized to 32 (current 32)
Jan 20 15:04:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:04:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001727386934673367 quantized to 16 (current 32)
Jan 20 15:04:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:04:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 15:04:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:04:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021592336683417087 quantized to 32 (current 32)
Jan 20 15:04:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:04:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018353486180904522 quantized to 32 (current 32)
Jan 20 15:04:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:04:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 15:04:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:04:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043184673366834174 quantized to 32 (current 32)
Jan 20 15:04:11 compute-0 nova_compute[250018]: 2026-01-20 15:04:11.790 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:04:12 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:04:12 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:04:12 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:04:12.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:04:12 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2392: 321 pgs: 321 active+clean; 500 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.4 MiB/s wr, 133 op/s
Jan 20 15:04:13 compute-0 ceph-mon[74360]: pgmap v2392: 321 pgs: 321 active+clean; 500 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.4 MiB/s wr, 133 op/s
Jan 20 15:04:13 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:04:13 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:04:13 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:04:13.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:04:13 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 20 15:04:13 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2818727511' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:04:13 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 20 15:04:13 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2818727511' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:04:14 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:04:14 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:04:14 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:04:14.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:04:14 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2393: 321 pgs: 321 active+clean; 500 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.9 MiB/s wr, 119 op/s
Jan 20 15:04:14 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e349 do_prune osdmap full prune enabled
Jan 20 15:04:14 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e350 e350: 3 total, 3 up, 3 in
Jan 20 15:04:14 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/2818727511' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:04:14 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/2818727511' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:04:14 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/950145876' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:04:14 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e350: 3 total, 3 up, 3 in
Jan 20 15:04:14 compute-0 nova_compute[250018]: 2026-01-20 15:04:14.538 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:04:15 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:04:15 compute-0 ceph-mon[74360]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #117. Immutable memtables: 0.
Jan 20 15:04:15 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:04:15.240177) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 15:04:15 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:856] [default] [JOB 69] Flushing memtable with next log file: 117
Jan 20 15:04:15 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768921455240258, "job": 69, "event": "flush_started", "num_memtables": 1, "num_entries": 409, "num_deletes": 250, "total_data_size": 328284, "memory_usage": 335616, "flush_reason": "Manual Compaction"}
Jan 20 15:04:15 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:885] [default] [JOB 69] Level-0 flush table #118: started
Jan 20 15:04:15 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768921455244286, "cf_name": "default", "job": 69, "event": "table_file_creation", "file_number": 118, "file_size": 278919, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 53655, "largest_seqno": 54063, "table_properties": {"data_size": 276551, "index_size": 468, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 6502, "raw_average_key_size": 20, "raw_value_size": 271740, "raw_average_value_size": 854, "num_data_blocks": 21, "num_entries": 318, "num_filter_entries": 318, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768921437, "oldest_key_time": 1768921437, "file_creation_time": 1768921455, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1688fb6f-f397-4aac-a0e8-874ff91d97a7", "db_session_id": "5V3N2TVXYZBCXP55EZHK", "orig_file_number": 118, "seqno_to_time_mapping": "N/A"}}
Jan 20 15:04:15 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 69] Flush lasted 4166 microseconds, and 1781 cpu microseconds.
Jan 20 15:04:15 compute-0 ceph-mon[74360]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 15:04:15 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:04:15.244353) [db/flush_job.cc:967] [default] [JOB 69] Level-0 flush table #118: 278919 bytes OK
Jan 20 15:04:15 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:04:15.244374) [db/memtable_list.cc:519] [default] Level-0 commit table #118 started
Jan 20 15:04:15 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:04:15.246053) [db/memtable_list.cc:722] [default] Level-0 commit table #118: memtable #1 done
Jan 20 15:04:15 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:04:15.246073) EVENT_LOG_v1 {"time_micros": 1768921455246067, "job": 69, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 15:04:15 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:04:15.246092) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 15:04:15 compute-0 ceph-mon[74360]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 69] Try to delete WAL files size 325723, prev total WAL file size 325723, number of live WAL files 2.
Jan 20 15:04:15 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000114.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:04:15 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:04:15.246651) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031373535' seq:72057594037927935, type:22 .. '6D6772737461740032303036' seq:0, type:0; will stop at (end)
Jan 20 15:04:15 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 70] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 15:04:15 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 69 Base level 0, inputs: [118(272KB)], [116(13MB)]
Jan 20 15:04:15 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768921455246731, "job": 70, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [118], "files_L6": [116], "score": -1, "input_data_size": 14103694, "oldest_snapshot_seqno": -1}
Jan 20 15:04:15 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 70] Generated table #119: 8337 keys, 10313095 bytes, temperature: kUnknown
Jan 20 15:04:15 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768921455319921, "cf_name": "default", "job": 70, "event": "table_file_creation", "file_number": 119, "file_size": 10313095, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10259773, "index_size": 31386, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20869, "raw_key_size": 216123, "raw_average_key_size": 25, "raw_value_size": 10113686, "raw_average_value_size": 1213, "num_data_blocks": 1224, "num_entries": 8337, "num_filter_entries": 8337, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768917305, "oldest_key_time": 0, "file_creation_time": 1768921455, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1688fb6f-f397-4aac-a0e8-874ff91d97a7", "db_session_id": "5V3N2TVXYZBCXP55EZHK", "orig_file_number": 119, "seqno_to_time_mapping": "N/A"}}
Jan 20 15:04:15 compute-0 ceph-mon[74360]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 15:04:15 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:04:15.320159) [db/compaction/compaction_job.cc:1663] [default] [JOB 70] Compacted 1@0 + 1@6 files to L6 => 10313095 bytes
Jan 20 15:04:15 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:04:15.321610) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 192.5 rd, 140.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 13.2 +0.0 blob) out(9.8 +0.0 blob), read-write-amplify(87.5) write-amplify(37.0) OK, records in: 8846, records dropped: 509 output_compression: NoCompression
Jan 20 15:04:15 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:04:15.321626) EVENT_LOG_v1 {"time_micros": 1768921455321618, "job": 70, "event": "compaction_finished", "compaction_time_micros": 73254, "compaction_time_cpu_micros": 37476, "output_level": 6, "num_output_files": 1, "total_output_size": 10313095, "num_input_records": 8846, "num_output_records": 8337, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 15:04:15 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000118.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:04:15 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768921455321784, "job": 70, "event": "table_file_deletion", "file_number": 118}
Jan 20 15:04:15 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000116.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:04:15 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768921455323729, "job": 70, "event": "table_file_deletion", "file_number": 116}
Jan 20 15:04:15 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:04:15.246516) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:04:15 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:04:15.323753) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:04:15 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:04:15.323757) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:04:15 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:04:15.323758) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:04:15 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:04:15.323760) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:04:15 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:04:15.323762) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:04:15 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e350 do_prune osdmap full prune enabled
Jan 20 15:04:15 compute-0 ceph-mon[74360]: pgmap v2393: 321 pgs: 321 active+clean; 500 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.9 MiB/s wr, 119 op/s
Jan 20 15:04:15 compute-0 ceph-mon[74360]: osdmap e350: 3 total, 3 up, 3 in
Jan 20 15:04:15 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e351 e351: 3 total, 3 up, 3 in
Jan 20 15:04:15 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e351: 3 total, 3 up, 3 in
Jan 20 15:04:15 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:04:15 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:04:15 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:04:15.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:04:16 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:04:16 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:04:16 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:04:16.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:04:16 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2396: 321 pgs: 2 active+clean+snaptrim, 4 active+clean+snaptrim_wait, 315 active+clean; 536 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 5.4 MiB/s wr, 113 op/s
Jan 20 15:04:16 compute-0 sudo[342888]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:04:16 compute-0 sudo[342888]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:04:16 compute-0 sudo[342888]: pam_unix(sudo:session): session closed for user root
Jan 20 15:04:16 compute-0 sudo[342913]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:04:16 compute-0 sudo[342913]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:04:16 compute-0 sudo[342913]: pam_unix(sudo:session): session closed for user root
Jan 20 15:04:16 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e351 do_prune osdmap full prune enabled
Jan 20 15:04:16 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e352 e352: 3 total, 3 up, 3 in
Jan 20 15:04:16 compute-0 ceph-mon[74360]: osdmap e351: 3 total, 3 up, 3 in
Jan 20 15:04:16 compute-0 ceph-mon[74360]: pgmap v2396: 321 pgs: 2 active+clean+snaptrim, 4 active+clean+snaptrim_wait, 315 active+clean; 536 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 5.4 MiB/s wr, 113 op/s
Jan 20 15:04:16 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e352: 3 total, 3 up, 3 in
Jan 20 15:04:16 compute-0 nova_compute[250018]: 2026-01-20 15:04:16.793 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:04:17 compute-0 podman[342939]: 2026-01-20 15:04:17.468273972 +0000 UTC m=+0.058676111 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible)
Jan 20 15:04:17 compute-0 podman[342938]: 2026-01-20 15:04:17.495908537 +0000 UTC m=+0.078165097 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_id=ovn_controller, org.label-schema.license=GPLv2, 
org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:04:17 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:04:17 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:04:17 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:04:17.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:04:17 compute-0 ceph-mon[74360]: osdmap e352: 3 total, 3 up, 3 in
Jan 20 15:04:18 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:04:18 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:04:18 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:04:18.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:04:18 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2398: 321 pgs: 2 active+clean+snaptrim, 4 active+clean+snaptrim_wait, 315 active+clean; 536 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.0 MiB/s rd, 3.6 MiB/s wr, 67 op/s
Jan 20 15:04:18 compute-0 ceph-mon[74360]: pgmap v2398: 321 pgs: 2 active+clean+snaptrim, 4 active+clean+snaptrim_wait, 315 active+clean; 536 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.0 MiB/s rd, 3.6 MiB/s wr, 67 op/s
Jan 20 15:04:19 compute-0 nova_compute[250018]: 2026-01-20 15:04:19.539 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:04:19 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:04:19 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:04:19 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:04:19.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:04:20 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:04:20 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:04:20 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:04:20.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:04:20 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e352 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:04:20 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2399: 321 pgs: 2 active+clean+snaptrim, 4 active+clean+snaptrim_wait, 315 active+clean; 579 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 11 MiB/s rd, 7.8 MiB/s wr, 317 op/s
Jan 20 15:04:21 compute-0 ceph-mon[74360]: pgmap v2399: 321 pgs: 2 active+clean+snaptrim, 4 active+clean+snaptrim_wait, 315 active+clean; 579 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 11 MiB/s rd, 7.8 MiB/s wr, 317 op/s
Jan 20 15:04:21 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:04:21 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:04:21 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:04:21.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:04:21 compute-0 nova_compute[250018]: 2026-01-20 15:04:21.795 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:04:22 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:04:22 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:04:22 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:04:22.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:04:22 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2400: 321 pgs: 321 active+clean; 579 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 9.8 MiB/s rd, 6.0 MiB/s wr, 304 op/s
Jan 20 15:04:22 compute-0 ceph-mon[74360]: pgmap v2400: 321 pgs: 321 active+clean; 579 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 9.8 MiB/s rd, 6.0 MiB/s wr, 304 op/s
Jan 20 15:04:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:04:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:04:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:04:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:04:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:04:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:04:23 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:04:23 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:04:23 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:04:23.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:04:24 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:04:24 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:04:24 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:04:24.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:04:24 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2401: 321 pgs: 321 active+clean; 558 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 6.5 MiB/s rd, 4.8 MiB/s wr, 255 op/s
Jan 20 15:04:24 compute-0 nova_compute[250018]: 2026-01-20 15:04:24.540 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:04:25 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e352 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:04:25 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e352 do_prune osdmap full prune enabled
Jan 20 15:04:25 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e353 e353: 3 total, 3 up, 3 in
Jan 20 15:04:25 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e353: 3 total, 3 up, 3 in
Jan 20 15:04:25 compute-0 ceph-mon[74360]: pgmap v2401: 321 pgs: 321 active+clean; 558 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 6.5 MiB/s rd, 4.8 MiB/s wr, 255 op/s
Jan 20 15:04:25 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:04:25 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:04:25 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:04:25.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:04:26 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:04:26 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:04:26 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:04:26.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:04:26 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2403: 321 pgs: 321 active+clean; 498 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.9 MiB/s rd, 2.7 MiB/s wr, 252 op/s
Jan 20 15:04:26 compute-0 ceph-mon[74360]: osdmap e353: 3 total, 3 up, 3 in
Jan 20 15:04:26 compute-0 ceph-mon[74360]: pgmap v2403: 321 pgs: 321 active+clean; 498 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.9 MiB/s rd, 2.7 MiB/s wr, 252 op/s
Jan 20 15:04:26 compute-0 nova_compute[250018]: 2026-01-20 15:04:26.798 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:04:27 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/3419700180' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:04:27 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/962284744' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:04:27 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:04:27 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:04:27 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:04:27.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:04:28 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:04:28 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:04:28 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:04:28.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:04:28 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2404: 321 pgs: 321 active+clean; 498 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 2.6 MiB/s wr, 239 op/s
Jan 20 15:04:28 compute-0 ceph-mon[74360]: pgmap v2404: 321 pgs: 321 active+clean; 498 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 2.6 MiB/s wr, 239 op/s
Jan 20 15:04:29 compute-0 nova_compute[250018]: 2026-01-20 15:04:29.543 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:04:29 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:04:29 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:04:29 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:04:29.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:04:30 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:04:30 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:04:30 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:04:30.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:04:30 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2405: 321 pgs: 321 active+clean; 448 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 683 KiB/s wr, 148 op/s
Jan 20 15:04:30 compute-0 sshd-session[342990]: Connection closed by 134.122.57.138 port 59976
Jan 20 15:04:30 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:04:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:04:30.774 160071 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:04:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:04:30.775 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:04:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:04:30.775 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:04:31 compute-0 ceph-mon[74360]: pgmap v2405: 321 pgs: 321 active+clean; 448 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 683 KiB/s wr, 148 op/s
Jan 20 15:04:31 compute-0 sudo[342991]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:04:31 compute-0 sudo[342991]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:04:31 compute-0 sudo[342991]: pam_unix(sudo:session): session closed for user root
Jan 20 15:04:31 compute-0 sudo[343016]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:04:31 compute-0 sudo[343016]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:04:31 compute-0 sudo[343016]: pam_unix(sudo:session): session closed for user root
Jan 20 15:04:31 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:04:31 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:04:31 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:04:31.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:04:31 compute-0 sudo[343041]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:04:31 compute-0 sudo[343041]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:04:31 compute-0 sudo[343041]: pam_unix(sudo:session): session closed for user root
Jan 20 15:04:31 compute-0 sudo[343066]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 15:04:31 compute-0 sudo[343066]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:04:31 compute-0 nova_compute[250018]: 2026-01-20 15:04:31.801 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:04:32 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:04:32 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:04:32 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:04:32.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:04:32 compute-0 sudo[343066]: pam_unix(sudo:session): session closed for user root
Jan 20 15:04:32 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 15:04:32 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:04:32 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 20 15:04:32 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 15:04:32 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 20 15:04:32 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:04:32 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 380ec629-403d-4430-9627-1b1b2cac755b does not exist
Jan 20 15:04:32 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 0c5f8d4b-e026-4e52-b579-780aa44b345b does not exist
Jan 20 15:04:32 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 98d5ad97-2b41-4d77-8c73-6c83ccb0d48a does not exist
Jan 20 15:04:32 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 20 15:04:32 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 15:04:32 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 20 15:04:32 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 15:04:32 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 15:04:32 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:04:32 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2406: 321 pgs: 321 active+clean; 447 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 420 KiB/s rd, 2.5 MiB/s wr, 140 op/s
Jan 20 15:04:32 compute-0 sudo[343123]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:04:32 compute-0 sudo[343123]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:04:32 compute-0 sudo[343123]: pam_unix(sudo:session): session closed for user root
Jan 20 15:04:32 compute-0 sudo[343148]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:04:32 compute-0 sudo[343148]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:04:32 compute-0 sudo[343148]: pam_unix(sudo:session): session closed for user root
Jan 20 15:04:32 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:04:32 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 15:04:32 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:04:32 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 15:04:32 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 15:04:32 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:04:32 compute-0 sudo[343173]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:04:32 compute-0 sudo[343173]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:04:32 compute-0 sudo[343173]: pam_unix(sudo:session): session closed for user root
Jan 20 15:04:32 compute-0 sudo[343198]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 15:04:32 compute-0 sudo[343198]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:04:32 compute-0 podman[343263]: 2026-01-20 15:04:32.85060538 +0000 UTC m=+0.042854124 container create 542b253f6dd1a5bd03ef37ada4e6d60897911db6eafd2009690540cfaf10ee0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_dubinsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 15:04:32 compute-0 systemd[1]: Started libpod-conmon-542b253f6dd1a5bd03ef37ada4e6d60897911db6eafd2009690540cfaf10ee0f.scope.
Jan 20 15:04:32 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:04:32 compute-0 podman[343263]: 2026-01-20 15:04:32.82904074 +0000 UTC m=+0.021289504 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:04:32 compute-0 podman[343263]: 2026-01-20 15:04:32.935274452 +0000 UTC m=+0.127523196 container init 542b253f6dd1a5bd03ef37ada4e6d60897911db6eafd2009690540cfaf10ee0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_dubinsky, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 15:04:32 compute-0 podman[343263]: 2026-01-20 15:04:32.941614322 +0000 UTC m=+0.133863066 container start 542b253f6dd1a5bd03ef37ada4e6d60897911db6eafd2009690540cfaf10ee0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_dubinsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 20 15:04:32 compute-0 podman[343263]: 2026-01-20 15:04:32.94525194 +0000 UTC m=+0.137500704 container attach 542b253f6dd1a5bd03ef37ada4e6d60897911db6eafd2009690540cfaf10ee0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_dubinsky, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 20 15:04:32 compute-0 nostalgic_dubinsky[343280]: 167 167
Jan 20 15:04:32 compute-0 systemd[1]: libpod-542b253f6dd1a5bd03ef37ada4e6d60897911db6eafd2009690540cfaf10ee0f.scope: Deactivated successfully.
Jan 20 15:04:32 compute-0 conmon[343280]: conmon 542b253f6dd1a5bd03ef <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-542b253f6dd1a5bd03ef37ada4e6d60897911db6eafd2009690540cfaf10ee0f.scope/container/memory.events
Jan 20 15:04:32 compute-0 podman[343263]: 2026-01-20 15:04:32.949595057 +0000 UTC m=+0.141843801 container died 542b253f6dd1a5bd03ef37ada4e6d60897911db6eafd2009690540cfaf10ee0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_dubinsky, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 15:04:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-1b4ab86af29bd17f426a26729f5d26feee557fddc038c27288af365db09e0f6d-merged.mount: Deactivated successfully.
Jan 20 15:04:33 compute-0 podman[343263]: 2026-01-20 15:04:33.005280727 +0000 UTC m=+0.197529461 container remove 542b253f6dd1a5bd03ef37ada4e6d60897911db6eafd2009690540cfaf10ee0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_dubinsky, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef)
Jan 20 15:04:33 compute-0 systemd[1]: libpod-conmon-542b253f6dd1a5bd03ef37ada4e6d60897911db6eafd2009690540cfaf10ee0f.scope: Deactivated successfully.
Jan 20 15:04:33 compute-0 podman[343303]: 2026-01-20 15:04:33.192617904 +0000 UTC m=+0.046101923 container create b20de3be7eb63bf1271769d8fcc1b8d8b47091698fec96f91df084b5970319b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_stonebraker, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 20 15:04:33 compute-0 systemd[1]: Started libpod-conmon-b20de3be7eb63bf1271769d8fcc1b8d8b47091698fec96f91df084b5970319b6.scope.
Jan 20 15:04:33 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:04:33 compute-0 podman[343303]: 2026-01-20 15:04:33.175961025 +0000 UTC m=+0.029445064 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:04:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e95059b51709f6536f6835aeff4ccc2efd2f396c895b574f2331502bf1aeb0a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 15:04:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e95059b51709f6536f6835aeff4ccc2efd2f396c895b574f2331502bf1aeb0a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 15:04:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e95059b51709f6536f6835aeff4ccc2efd2f396c895b574f2331502bf1aeb0a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 15:04:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e95059b51709f6536f6835aeff4ccc2efd2f396c895b574f2331502bf1aeb0a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 15:04:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e95059b51709f6536f6835aeff4ccc2efd2f396c895b574f2331502bf1aeb0a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 15:04:33 compute-0 podman[343303]: 2026-01-20 15:04:33.28975594 +0000 UTC m=+0.143239979 container init b20de3be7eb63bf1271769d8fcc1b8d8b47091698fec96f91df084b5970319b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_stonebraker, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 20 15:04:33 compute-0 podman[343303]: 2026-01-20 15:04:33.300139979 +0000 UTC m=+0.153623988 container start b20de3be7eb63bf1271769d8fcc1b8d8b47091698fec96f91df084b5970319b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_stonebraker, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 15:04:33 compute-0 podman[343303]: 2026-01-20 15:04:33.303401878 +0000 UTC m=+0.156885927 container attach b20de3be7eb63bf1271769d8fcc1b8d8b47091698fec96f91df084b5970319b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_stonebraker, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 20 15:04:33 compute-0 ceph-mon[74360]: pgmap v2406: 321 pgs: 321 active+clean; 447 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 420 KiB/s rd, 2.5 MiB/s wr, 140 op/s
Jan 20 15:04:33 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:04:33 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:04:33 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:04:33.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:04:34 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:04:34 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:04:34 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:04:34.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:04:34 compute-0 awesome_stonebraker[343320]: --> passed data devices: 0 physical, 1 LVM
Jan 20 15:04:34 compute-0 awesome_stonebraker[343320]: --> relative data size: 1.0
Jan 20 15:04:34 compute-0 awesome_stonebraker[343320]: --> All data devices are unavailable
Jan 20 15:04:34 compute-0 systemd[1]: libpod-b20de3be7eb63bf1271769d8fcc1b8d8b47091698fec96f91df084b5970319b6.scope: Deactivated successfully.
Jan 20 15:04:34 compute-0 podman[343303]: 2026-01-20 15:04:34.10910465 +0000 UTC m=+0.962588689 container died b20de3be7eb63bf1271769d8fcc1b8d8b47091698fec96f91df084b5970319b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_stonebraker, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 15:04:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-4e95059b51709f6536f6835aeff4ccc2efd2f396c895b574f2331502bf1aeb0a-merged.mount: Deactivated successfully.
Jan 20 15:04:34 compute-0 podman[343303]: 2026-01-20 15:04:34.167080782 +0000 UTC m=+1.020564801 container remove b20de3be7eb63bf1271769d8fcc1b8d8b47091698fec96f91df084b5970319b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_stonebraker, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 20 15:04:34 compute-0 systemd[1]: libpod-conmon-b20de3be7eb63bf1271769d8fcc1b8d8b47091698fec96f91df084b5970319b6.scope: Deactivated successfully.
Jan 20 15:04:34 compute-0 sudo[343198]: pam_unix(sudo:session): session closed for user root
Jan 20 15:04:34 compute-0 sudo[343350]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:04:34 compute-0 sudo[343350]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:04:34 compute-0 sudo[343350]: pam_unix(sudo:session): session closed for user root
Jan 20 15:04:34 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2407: 321 pgs: 321 active+clean; 451 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 435 KiB/s rd, 2.6 MiB/s wr, 141 op/s
Jan 20 15:04:34 compute-0 sudo[343375]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:04:34 compute-0 sudo[343375]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:04:34 compute-0 sudo[343375]: pam_unix(sudo:session): session closed for user root
Jan 20 15:04:34 compute-0 sudo[343400]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:04:34 compute-0 sudo[343400]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:04:34 compute-0 sudo[343400]: pam_unix(sudo:session): session closed for user root
Jan 20 15:04:34 compute-0 sudo[343425]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- lvm list --format json
Jan 20 15:04:34 compute-0 sudo[343425]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:04:34 compute-0 ceph-mon[74360]: pgmap v2407: 321 pgs: 321 active+clean; 451 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 435 KiB/s rd, 2.6 MiB/s wr, 141 op/s
Jan 20 15:04:34 compute-0 nova_compute[250018]: 2026-01-20 15:04:34.545 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:04:34 compute-0 podman[343490]: 2026-01-20 15:04:34.814359377 +0000 UTC m=+0.057731296 container create d2b39b02fafa73329edf607bad623f43d4edc83a94940939c0d6fa318e6694ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_mendel, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 20 15:04:34 compute-0 systemd[1]: Started libpod-conmon-d2b39b02fafa73329edf607bad623f43d4edc83a94940939c0d6fa318e6694ee.scope.
Jan 20 15:04:34 compute-0 podman[343490]: 2026-01-20 15:04:34.785788346 +0000 UTC m=+0.029160355 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:04:34 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:04:34 compute-0 podman[343490]: 2026-01-20 15:04:34.910008553 +0000 UTC m=+0.153380572 container init d2b39b02fafa73329edf607bad623f43d4edc83a94940939c0d6fa318e6694ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_mendel, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 20 15:04:34 compute-0 podman[343490]: 2026-01-20 15:04:34.922306724 +0000 UTC m=+0.165678683 container start d2b39b02fafa73329edf607bad623f43d4edc83a94940939c0d6fa318e6694ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_mendel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 20 15:04:34 compute-0 podman[343490]: 2026-01-20 15:04:34.927192425 +0000 UTC m=+0.170564384 container attach d2b39b02fafa73329edf607bad623f43d4edc83a94940939c0d6fa318e6694ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_mendel, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 20 15:04:34 compute-0 nifty_mendel[343506]: 167 167
Jan 20 15:04:34 compute-0 systemd[1]: libpod-d2b39b02fafa73329edf607bad623f43d4edc83a94940939c0d6fa318e6694ee.scope: Deactivated successfully.
Jan 20 15:04:34 compute-0 podman[343490]: 2026-01-20 15:04:34.930469924 +0000 UTC m=+0.173841863 container died d2b39b02fafa73329edf607bad623f43d4edc83a94940939c0d6fa318e6694ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_mendel, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True)
Jan 20 15:04:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-bf993db8dfa471a0e8264bd56eea16accd5505338ff2c92bfa1cfb29ad981d2a-merged.mount: Deactivated successfully.
Jan 20 15:04:34 compute-0 podman[343490]: 2026-01-20 15:04:34.967687576 +0000 UTC m=+0.211059495 container remove d2b39b02fafa73329edf607bad623f43d4edc83a94940939c0d6fa318e6694ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_mendel, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 15:04:34 compute-0 systemd[1]: libpod-conmon-d2b39b02fafa73329edf607bad623f43d4edc83a94940939c0d6fa318e6694ee.scope: Deactivated successfully.
Jan 20 15:04:35 compute-0 podman[343529]: 2026-01-20 15:04:35.222336936 +0000 UTC m=+0.107437836 container create 273047e9091ebe9262f766eaa2972eaf7fcedf506e5ebb79c014a48b8cfd0c9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_morse, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 15:04:35 compute-0 podman[343529]: 2026-01-20 15:04:35.137483791 +0000 UTC m=+0.022584641 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:04:35 compute-0 systemd[1]: Started libpod-conmon-273047e9091ebe9262f766eaa2972eaf7fcedf506e5ebb79c014a48b8cfd0c9a.scope.
Jan 20 15:04:35 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:04:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d66b42e8e1d5f62a39bc2732e24e67533dd5dabf5b88a327d685acb99a406f91/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 15:04:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d66b42e8e1d5f62a39bc2732e24e67533dd5dabf5b88a327d685acb99a406f91/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 15:04:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d66b42e8e1d5f62a39bc2732e24e67533dd5dabf5b88a327d685acb99a406f91/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 15:04:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d66b42e8e1d5f62a39bc2732e24e67533dd5dabf5b88a327d685acb99a406f91/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 15:04:35 compute-0 podman[343529]: 2026-01-20 15:04:35.330858369 +0000 UTC m=+0.215959219 container init 273047e9091ebe9262f766eaa2972eaf7fcedf506e5ebb79c014a48b8cfd0c9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_morse, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 20 15:04:35 compute-0 podman[343529]: 2026-01-20 15:04:35.338616468 +0000 UTC m=+0.223717308 container start 273047e9091ebe9262f766eaa2972eaf7fcedf506e5ebb79c014a48b8cfd0c9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_morse, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 20 15:04:35 compute-0 podman[343529]: 2026-01-20 15:04:35.342972875 +0000 UTC m=+0.228073725 container attach 273047e9091ebe9262f766eaa2972eaf7fcedf506e5ebb79c014a48b8cfd0c9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_morse, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 20 15:04:35 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:04:35 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/1897084' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:04:35 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:04:35 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:04:35 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:04:35.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:04:36 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:04:36 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:04:36 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:04:36.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:04:36 compute-0 nostalgic_morse[343546]: {
Jan 20 15:04:36 compute-0 nostalgic_morse[343546]:     "0": [
Jan 20 15:04:36 compute-0 nostalgic_morse[343546]:         {
Jan 20 15:04:36 compute-0 nostalgic_morse[343546]:             "devices": [
Jan 20 15:04:36 compute-0 nostalgic_morse[343546]:                 "/dev/loop3"
Jan 20 15:04:36 compute-0 nostalgic_morse[343546]:             ],
Jan 20 15:04:36 compute-0 nostalgic_morse[343546]:             "lv_name": "ceph_lv0",
Jan 20 15:04:36 compute-0 nostalgic_morse[343546]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 15:04:36 compute-0 nostalgic_morse[343546]:             "lv_size": "7511998464",
Jan 20 15:04:36 compute-0 nostalgic_morse[343546]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e399cf45-e6b6-5393-99f1-75c601d3f188,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afc3740a-ccee-46f8-b343-22648cc89376,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 20 15:04:36 compute-0 nostalgic_morse[343546]:             "lv_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 15:04:36 compute-0 nostalgic_morse[343546]:             "name": "ceph_lv0",
Jan 20 15:04:36 compute-0 nostalgic_morse[343546]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 15:04:36 compute-0 nostalgic_morse[343546]:             "tags": {
Jan 20 15:04:36 compute-0 nostalgic_morse[343546]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 15:04:36 compute-0 nostalgic_morse[343546]:                 "ceph.block_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 15:04:36 compute-0 nostalgic_morse[343546]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 15:04:36 compute-0 nostalgic_morse[343546]:                 "ceph.cluster_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 15:04:36 compute-0 nostalgic_morse[343546]:                 "ceph.cluster_name": "ceph",
Jan 20 15:04:36 compute-0 nostalgic_morse[343546]:                 "ceph.crush_device_class": "",
Jan 20 15:04:36 compute-0 nostalgic_morse[343546]:                 "ceph.encrypted": "0",
Jan 20 15:04:36 compute-0 nostalgic_morse[343546]:                 "ceph.osd_fsid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 15:04:36 compute-0 nostalgic_morse[343546]:                 "ceph.osd_id": "0",
Jan 20 15:04:36 compute-0 nostalgic_morse[343546]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 15:04:36 compute-0 nostalgic_morse[343546]:                 "ceph.type": "block",
Jan 20 15:04:36 compute-0 nostalgic_morse[343546]:                 "ceph.vdo": "0"
Jan 20 15:04:36 compute-0 nostalgic_morse[343546]:             },
Jan 20 15:04:36 compute-0 nostalgic_morse[343546]:             "type": "block",
Jan 20 15:04:36 compute-0 nostalgic_morse[343546]:             "vg_name": "ceph_vg0"
Jan 20 15:04:36 compute-0 nostalgic_morse[343546]:         }
Jan 20 15:04:36 compute-0 nostalgic_morse[343546]:     ]
Jan 20 15:04:36 compute-0 nostalgic_morse[343546]: }
Jan 20 15:04:36 compute-0 systemd[1]: libpod-273047e9091ebe9262f766eaa2972eaf7fcedf506e5ebb79c014a48b8cfd0c9a.scope: Deactivated successfully.
Jan 20 15:04:36 compute-0 podman[343529]: 2026-01-20 15:04:36.114579409 +0000 UTC m=+0.999680259 container died 273047e9091ebe9262f766eaa2972eaf7fcedf506e5ebb79c014a48b8cfd0c9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_morse, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 15:04:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-d66b42e8e1d5f62a39bc2732e24e67533dd5dabf5b88a327d685acb99a406f91-merged.mount: Deactivated successfully.
Jan 20 15:04:36 compute-0 podman[343529]: 2026-01-20 15:04:36.168595774 +0000 UTC m=+1.053696604 container remove 273047e9091ebe9262f766eaa2972eaf7fcedf506e5ebb79c014a48b8cfd0c9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_morse, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 15:04:36 compute-0 systemd[1]: libpod-conmon-273047e9091ebe9262f766eaa2972eaf7fcedf506e5ebb79c014a48b8cfd0c9a.scope: Deactivated successfully.
Jan 20 15:04:36 compute-0 sudo[343425]: pam_unix(sudo:session): session closed for user root
Jan 20 15:04:36 compute-0 sudo[343568]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:04:36 compute-0 sudo[343568]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:04:36 compute-0 sudo[343568]: pam_unix(sudo:session): session closed for user root
Jan 20 15:04:36 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2408: 321 pgs: 321 active+clean; 451 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 386 KiB/s rd, 2.4 MiB/s wr, 109 op/s
Jan 20 15:04:36 compute-0 sudo[343593]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:04:36 compute-0 sudo[343593]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:04:36 compute-0 sudo[343593]: pam_unix(sudo:session): session closed for user root
Jan 20 15:04:36 compute-0 sudo[343618]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:04:36 compute-0 sudo[343618]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:04:36 compute-0 sudo[343618]: pam_unix(sudo:session): session closed for user root
Jan 20 15:04:36 compute-0 sudo[343643]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- raw list --format json
Jan 20 15:04:36 compute-0 sudo[343643]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:04:36 compute-0 ceph-mon[74360]: pgmap v2408: 321 pgs: 321 active+clean; 451 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 386 KiB/s rd, 2.4 MiB/s wr, 109 op/s
Jan 20 15:04:36 compute-0 sudo[343668]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:04:36 compute-0 sudo[343668]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:04:36 compute-0 sudo[343668]: pam_unix(sudo:session): session closed for user root
Jan 20 15:04:36 compute-0 sudo[343693]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:04:36 compute-0 sudo[343693]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:04:36 compute-0 sudo[343693]: pam_unix(sudo:session): session closed for user root
Jan 20 15:04:36 compute-0 podman[343756]: 2026-01-20 15:04:36.786291352 +0000 UTC m=+0.064575220 container create 5052c577108c5e154f63b7ed64bf31fc480949d0932c3fc220d96f16d6dc4f56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_pascal, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 20 15:04:36 compute-0 nova_compute[250018]: 2026-01-20 15:04:36.803 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:04:36 compute-0 systemd[1]: Started libpod-conmon-5052c577108c5e154f63b7ed64bf31fc480949d0932c3fc220d96f16d6dc4f56.scope.
Jan 20 15:04:36 compute-0 podman[343756]: 2026-01-20 15:04:36.745126604 +0000 UTC m=+0.023410562 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:04:36 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:04:36 compute-0 podman[343756]: 2026-01-20 15:04:36.866576506 +0000 UTC m=+0.144860424 container init 5052c577108c5e154f63b7ed64bf31fc480949d0932c3fc220d96f16d6dc4f56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_pascal, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 15:04:36 compute-0 podman[343756]: 2026-01-20 15:04:36.87308115 +0000 UTC m=+0.151365038 container start 5052c577108c5e154f63b7ed64bf31fc480949d0932c3fc220d96f16d6dc4f56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_pascal, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 15:04:36 compute-0 podman[343756]: 2026-01-20 15:04:36.877190971 +0000 UTC m=+0.155474839 container attach 5052c577108c5e154f63b7ed64bf31fc480949d0932c3fc220d96f16d6dc4f56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_pascal, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 15:04:36 compute-0 systemd[1]: libpod-5052c577108c5e154f63b7ed64bf31fc480949d0932c3fc220d96f16d6dc4f56.scope: Deactivated successfully.
Jan 20 15:04:36 compute-0 eloquent_pascal[343772]: 167 167
Jan 20 15:04:36 compute-0 conmon[343772]: conmon 5052c577108c5e154f63 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5052c577108c5e154f63b7ed64bf31fc480949d0932c3fc220d96f16d6dc4f56.scope/container/memory.events
Jan 20 15:04:36 compute-0 podman[343756]: 2026-01-20 15:04:36.879136154 +0000 UTC m=+0.157420022 container died 5052c577108c5e154f63b7ed64bf31fc480949d0932c3fc220d96f16d6dc4f56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_pascal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef)
Jan 20 15:04:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-c97f5182acde014e5381a87da226b1cbd4be15f9c1502d1bbe7195294e8057d4-merged.mount: Deactivated successfully.
Jan 20 15:04:36 compute-0 podman[343756]: 2026-01-20 15:04:36.914298751 +0000 UTC m=+0.192582619 container remove 5052c577108c5e154f63b7ed64bf31fc480949d0932c3fc220d96f16d6dc4f56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_pascal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 20 15:04:36 compute-0 systemd[1]: libpod-conmon-5052c577108c5e154f63b7ed64bf31fc480949d0932c3fc220d96f16d6dc4f56.scope: Deactivated successfully.
Jan 20 15:04:37 compute-0 nova_compute[250018]: 2026-01-20 15:04:37.077 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:04:37 compute-0 podman[343798]: 2026-01-20 15:04:37.105164272 +0000 UTC m=+0.034836420 container create 3425831dd9c1861bc7f0e61559cdc9001779ea71b35e31514c423b268607fa00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_bassi, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 15:04:37 compute-0 systemd[1]: Started libpod-conmon-3425831dd9c1861bc7f0e61559cdc9001779ea71b35e31514c423b268607fa00.scope.
Jan 20 15:04:37 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:04:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1770211ddd3d899b9317877068b8c85bcae7338cd41e26f46cc02576b678c33e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 15:04:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1770211ddd3d899b9317877068b8c85bcae7338cd41e26f46cc02576b678c33e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 15:04:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1770211ddd3d899b9317877068b8c85bcae7338cd41e26f46cc02576b678c33e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 15:04:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1770211ddd3d899b9317877068b8c85bcae7338cd41e26f46cc02576b678c33e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 15:04:37 compute-0 podman[343798]: 2026-01-20 15:04:37.182788652 +0000 UTC m=+0.112460820 container init 3425831dd9c1861bc7f0e61559cdc9001779ea71b35e31514c423b268607fa00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_bassi, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 20 15:04:37 compute-0 podman[343798]: 2026-01-20 15:04:37.089472319 +0000 UTC m=+0.019144487 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:04:37 compute-0 podman[343798]: 2026-01-20 15:04:37.190218753 +0000 UTC m=+0.119890911 container start 3425831dd9c1861bc7f0e61559cdc9001779ea71b35e31514c423b268607fa00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_bassi, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 15:04:37 compute-0 podman[343798]: 2026-01-20 15:04:37.19347104 +0000 UTC m=+0.123143208 container attach 3425831dd9c1861bc7f0e61559cdc9001779ea71b35e31514c423b268607fa00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_bassi, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 20 15:04:37 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:04:37 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:04:37 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:04:37.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:04:37 compute-0 agitated_bassi[343815]: {
Jan 20 15:04:37 compute-0 agitated_bassi[343815]:     "afc3740a-ccee-46f8-b343-22648cc89376": {
Jan 20 15:04:37 compute-0 agitated_bassi[343815]:         "ceph_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 15:04:37 compute-0 agitated_bassi[343815]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 20 15:04:37 compute-0 agitated_bassi[343815]:         "osd_id": 0,
Jan 20 15:04:37 compute-0 agitated_bassi[343815]:         "osd_uuid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 15:04:37 compute-0 agitated_bassi[343815]:         "type": "bluestore"
Jan 20 15:04:37 compute-0 agitated_bassi[343815]:     }
Jan 20 15:04:37 compute-0 agitated_bassi[343815]: }
Jan 20 15:04:38 compute-0 systemd[1]: libpod-3425831dd9c1861bc7f0e61559cdc9001779ea71b35e31514c423b268607fa00.scope: Deactivated successfully.
Jan 20 15:04:38 compute-0 conmon[343815]: conmon 3425831dd9c1861bc7f0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3425831dd9c1861bc7f0e61559cdc9001779ea71b35e31514c423b268607fa00.scope/container/memory.events
Jan 20 15:04:38 compute-0 podman[343798]: 2026-01-20 15:04:38.002910193 +0000 UTC m=+0.932582341 container died 3425831dd9c1861bc7f0e61559cdc9001779ea71b35e31514c423b268607fa00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_bassi, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 20 15:04:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-1770211ddd3d899b9317877068b8c85bcae7338cd41e26f46cc02576b678c33e-merged.mount: Deactivated successfully.
Jan 20 15:04:38 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:04:38 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:04:38 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:04:38.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:04:38 compute-0 podman[343798]: 2026-01-20 15:04:38.059905689 +0000 UTC m=+0.989577867 container remove 3425831dd9c1861bc7f0e61559cdc9001779ea71b35e31514c423b268607fa00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_bassi, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 15:04:38 compute-0 systemd[1]: libpod-conmon-3425831dd9c1861bc7f0e61559cdc9001779ea71b35e31514c423b268607fa00.scope: Deactivated successfully.
Jan 20 15:04:38 compute-0 sudo[343643]: pam_unix(sudo:session): session closed for user root
Jan 20 15:04:38 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 15:04:38 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:04:38 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 15:04:38 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:04:38 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev ab9684d9-0b60-4223-ad20-fae6ea6a7895 does not exist
Jan 20 15:04:38 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 4311f7a9-0814-4622-b403-b5ba0aceb823 does not exist
Jan 20 15:04:38 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 8809e863-9c80-46ea-b311-76354bd71aba does not exist
Jan 20 15:04:38 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2409: 321 pgs: 321 active+clean; 451 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 344 KiB/s rd, 2.1 MiB/s wr, 92 op/s
Jan 20 15:04:38 compute-0 sudo[343850]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:04:38 compute-0 sudo[343850]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:04:38 compute-0 sudo[343850]: pam_unix(sudo:session): session closed for user root
Jan 20 15:04:38 compute-0 sudo[343875]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 15:04:38 compute-0 sudo[343875]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:04:38 compute-0 sudo[343875]: pam_unix(sudo:session): session closed for user root
Jan 20 15:04:38 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:04:38.829 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=50, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=49) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:04:38 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:04:38.829 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 15:04:38 compute-0 nova_compute[250018]: 2026-01-20 15:04:38.830 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:04:39 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:04:39 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:04:39 compute-0 ceph-mon[74360]: pgmap v2409: 321 pgs: 321 active+clean; 451 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 344 KiB/s rd, 2.1 MiB/s wr, 92 op/s
Jan 20 15:04:39 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/3218525619' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:04:39 compute-0 nova_compute[250018]: 2026-01-20 15:04:39.546 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:04:39 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:04:39 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:04:39 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:04:39.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:04:40 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:04:40 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:04:40 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:04:40.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:04:40 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2410: 321 pgs: 321 active+clean; 451 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 358 KiB/s rd, 2.2 MiB/s wr, 110 op/s
Jan 20 15:04:40 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/1912797042' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:04:40 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:04:40 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:04:40.831 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=367c1a2c-b16a-4828-ab5a-626bb50023b4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '50'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:04:41 compute-0 ceph-mon[74360]: pgmap v2410: 321 pgs: 321 active+clean; 451 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 358 KiB/s rd, 2.2 MiB/s wr, 110 op/s
Jan 20 15:04:41 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/3769637538' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:04:41 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:04:41 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:04:41 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:04:41.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:04:41 compute-0 nova_compute[250018]: 2026-01-20 15:04:41.806 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:04:42 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:04:42 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:04:42 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:04:42.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:04:42 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2411: 321 pgs: 321 active+clean; 451 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 253 KiB/s rd, 1.6 MiB/s wr, 63 op/s
Jan 20 15:04:43 compute-0 ceph-mon[74360]: pgmap v2411: 321 pgs: 321 active+clean; 451 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 253 KiB/s rd, 1.6 MiB/s wr, 63 op/s
Jan 20 15:04:43 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/2179692119' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:04:43 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/3343341651' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:04:43 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:04:43 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:04:43 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:04:43.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:04:44 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:04:44 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:04:44 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:04:44.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:04:44 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2412: 321 pgs: 321 active+clean; 482 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 1.9 MiB/s wr, 94 op/s
Jan 20 15:04:44 compute-0 nova_compute[250018]: 2026-01-20 15:04:44.549 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:04:45 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:04:45 compute-0 ceph-mon[74360]: pgmap v2412: 321 pgs: 321 active+clean; 482 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 1.9 MiB/s wr, 94 op/s
Jan 20 15:04:45 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:04:45 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:04:45 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:04:45.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:04:46 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:04:46 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:04:46 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:04:46.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:04:46 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2413: 321 pgs: 321 active+clean; 577 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.9 MiB/s rd, 5.7 MiB/s wr, 208 op/s
Jan 20 15:04:46 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e353 do_prune osdmap full prune enabled
Jan 20 15:04:46 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e354 e354: 3 total, 3 up, 3 in
Jan 20 15:04:46 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e354: 3 total, 3 up, 3 in
Jan 20 15:04:46 compute-0 ceph-mon[74360]: pgmap v2413: 321 pgs: 321 active+clean; 577 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.9 MiB/s rd, 5.7 MiB/s wr, 208 op/s
Jan 20 15:04:46 compute-0 ceph-mon[74360]: osdmap e354: 3 total, 3 up, 3 in
Jan 20 15:04:46 compute-0 nova_compute[250018]: 2026-01-20 15:04:46.810 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:04:47 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:04:47 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:04:47 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:04:47.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:04:48 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:04:48 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:04:48 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:04:48.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:04:48 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2415: 321 pgs: 321 active+clean; 577 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 7.1 MiB/s rd, 6.8 MiB/s wr, 246 op/s
Jan 20 15:04:48 compute-0 podman[343906]: 2026-01-20 15:04:48.47123746 +0000 UTC m=+0.060936693 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 20 15:04:48 compute-0 podman[343905]: 2026-01-20 15:04:48.503183331 +0000 UTC m=+0.088273909 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Jan 20 15:04:48 compute-0 ceph-mon[74360]: pgmap v2415: 321 pgs: 321 active+clean; 577 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 7.1 MiB/s rd, 6.8 MiB/s wr, 246 op/s
Jan 20 15:04:48 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/259831174' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:04:49 compute-0 nova_compute[250018]: 2026-01-20 15:04:49.550 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:04:49 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:04:49 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:04:49 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:04:49.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:04:49 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/2211886453' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:04:50 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:04:50 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:04:50 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:04:50.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:04:50 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2416: 321 pgs: 321 active+clean; 522 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 8.9 MiB/s rd, 6.8 MiB/s wr, 321 op/s
Jan 20 15:04:50 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e354 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:04:50 compute-0 ceph-mon[74360]: pgmap v2416: 321 pgs: 321 active+clean; 522 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 8.9 MiB/s rd, 6.8 MiB/s wr, 321 op/s
Jan 20 15:04:51 compute-0 nova_compute[250018]: 2026-01-20 15:04:51.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:04:51 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:04:51 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:04:51 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:04:51.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:04:51 compute-0 nova_compute[250018]: 2026-01-20 15:04:51.811 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:04:52 compute-0 nova_compute[250018]: 2026-01-20 15:04:52.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:04:52 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:04:52 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:04:52 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:04:52.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:04:52 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2417: 321 pgs: 321 active+clean; 498 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 9.4 MiB/s rd, 6.8 MiB/s wr, 336 op/s
Jan 20 15:04:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:04:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:04:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:04:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:04:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:04:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:04:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Optimize plan auto_2026-01-20_15:04:52
Jan 20 15:04:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 15:04:52 compute-0 ceph-mgr[74653]: [balancer INFO root] do_upmap
Jan 20 15:04:52 compute-0 ceph-mgr[74653]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.log', 'images', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.control', 'volumes', 'backups', 'vms']
Jan 20 15:04:52 compute-0 ceph-mgr[74653]: [balancer INFO root] prepared 0/10 changes
Jan 20 15:04:53 compute-0 nova_compute[250018]: 2026-01-20 15:04:53.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:04:53 compute-0 nova_compute[250018]: 2026-01-20 15:04:53.277 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:04:53 compute-0 nova_compute[250018]: 2026-01-20 15:04:53.277 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:04:53 compute-0 nova_compute[250018]: 2026-01-20 15:04:53.278 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:04:53 compute-0 nova_compute[250018]: 2026-01-20 15:04:53.278 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 15:04:53 compute-0 nova_compute[250018]: 2026-01-20 15:04:53.278 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:04:53 compute-0 ceph-mon[74360]: pgmap v2417: 321 pgs: 321 active+clean; 498 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 9.4 MiB/s rd, 6.8 MiB/s wr, 336 op/s
Jan 20 15:04:53 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:04:53 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:04:53 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:04:53.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:04:53 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:04:53 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3317582168' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:04:53 compute-0 nova_compute[250018]: 2026-01-20 15:04:53.724 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:04:54 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:04:54 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:04:54 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:04:54.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:04:54 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2418: 321 pgs: 321 active+clean; 498 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 7.0 MiB/s rd, 4.7 MiB/s wr, 287 op/s
Jan 20 15:04:54 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3317582168' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:04:54 compute-0 nova_compute[250018]: 2026-01-20 15:04:54.598 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:04:54 compute-0 nova_compute[250018]: 2026-01-20 15:04:54.928 250022 DEBUG nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] skipping disk for instance-00000095 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 15:04:54 compute-0 nova_compute[250018]: 2026-01-20 15:04:54.929 250022 DEBUG nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] skipping disk for instance-00000095 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 15:04:54 compute-0 nova_compute[250018]: 2026-01-20 15:04:54.934 250022 DEBUG nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] skipping disk for instance-0000009d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 15:04:54 compute-0 nova_compute[250018]: 2026-01-20 15:04:54.934 250022 DEBUG nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] skipping disk for instance-0000009d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 15:04:55 compute-0 nova_compute[250018]: 2026-01-20 15:04:55.111 250022 WARNING nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:04:55 compute-0 nova_compute[250018]: 2026-01-20 15:04:55.112 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3843MB free_disk=20.830242156982422GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 15:04:55 compute-0 nova_compute[250018]: 2026-01-20 15:04:55.112 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:04:55 compute-0 nova_compute[250018]: 2026-01-20 15:04:55.113 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:04:55 compute-0 ceph-mon[74360]: pgmap v2418: 321 pgs: 321 active+clean; 498 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 7.0 MiB/s rd, 4.7 MiB/s wr, 287 op/s
Jan 20 15:04:55 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/943230783' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:04:55 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/1672774260' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:04:55 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e354 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:04:55 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e354 do_prune osdmap full prune enabled
Jan 20 15:04:55 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e355 e355: 3 total, 3 up, 3 in
Jan 20 15:04:55 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e355: 3 total, 3 up, 3 in
Jan 20 15:04:55 compute-0 nova_compute[250018]: 2026-01-20 15:04:55.559 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Instance b3f961d2-e73f-49bf-b141-6505e77ad9ac actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 15:04:55 compute-0 nova_compute[250018]: 2026-01-20 15:04:55.560 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Instance 83bbf40a-f44e-42fe-b09a-0e635a302f6d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 15:04:55 compute-0 nova_compute[250018]: 2026-01-20 15:04:55.560 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 15:04:55 compute-0 nova_compute[250018]: 2026-01-20 15:04:55.561 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 15:04:55 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:04:55 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:04:55 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:04:55.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:04:55 compute-0 nova_compute[250018]: 2026-01-20 15:04:55.833 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:04:56 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:04:56 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:04:56 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:04:56.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:04:56 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:04:56 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/828516164' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:04:56 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2420: 321 pgs: 321 active+clean; 498 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.8 MiB/s rd, 36 KiB/s wr, 258 op/s
Jan 20 15:04:56 compute-0 nova_compute[250018]: 2026-01-20 15:04:56.297 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:04:56 compute-0 nova_compute[250018]: 2026-01-20 15:04:56.303 250022 DEBUG nova.compute.provider_tree [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:04:56 compute-0 nova_compute[250018]: 2026-01-20 15:04:56.351 250022 DEBUG nova.scheduler.client.report [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:04:56 compute-0 nova_compute[250018]: 2026-01-20 15:04:56.352 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 15:04:56 compute-0 nova_compute[250018]: 2026-01-20 15:04:56.352 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.240s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:04:56 compute-0 ceph-mon[74360]: osdmap e355: 3 total, 3 up, 3 in
Jan 20 15:04:56 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/828516164' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:04:56 compute-0 ceph-mon[74360]: pgmap v2420: 321 pgs: 321 active+clean; 498 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.8 MiB/s rd, 36 KiB/s wr, 258 op/s
Jan 20 15:04:56 compute-0 sudo[343997]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:04:56 compute-0 sudo[343997]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:04:56 compute-0 sudo[343997]: pam_unix(sudo:session): session closed for user root
Jan 20 15:04:56 compute-0 sudo[344022]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:04:56 compute-0 sudo[344022]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:04:56 compute-0 sudo[344022]: pam_unix(sudo:session): session closed for user root
Jan 20 15:04:56 compute-0 nova_compute[250018]: 2026-01-20 15:04:56.813 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:04:57 compute-0 nova_compute[250018]: 2026-01-20 15:04:57.346 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:04:57 compute-0 nova_compute[250018]: 2026-01-20 15:04:57.347 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:04:57 compute-0 nova_compute[250018]: 2026-01-20 15:04:57.347 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:04:57 compute-0 nova_compute[250018]: 2026-01-20 15:04:57.347 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 15:04:57 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:04:57 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:04:57 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:04:57.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:04:57 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/2547310442' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:04:57 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/3976258873' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:04:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 15:04:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 15:04:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 15:04:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 15:04:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 15:04:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 15:04:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 15:04:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 15:04:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 15:04:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 15:04:58 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:04:58 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:04:58 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:04:58.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:04:58 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2421: 321 pgs: 321 active+clean; 498 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 35 KiB/s wr, 253 op/s
Jan 20 15:04:58 compute-0 ceph-mon[74360]: pgmap v2421: 321 pgs: 321 active+clean; 498 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 35 KiB/s wr, 253 op/s
Jan 20 15:04:58 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/2668567201' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:04:59 compute-0 nova_compute[250018]: 2026-01-20 15:04:59.052 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:04:59 compute-0 nova_compute[250018]: 2026-01-20 15:04:59.052 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 15:04:59 compute-0 nova_compute[250018]: 2026-01-20 15:04:59.601 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:04:59 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:04:59 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:04:59 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:04:59.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:04:59 compute-0 nova_compute[250018]: 2026-01-20 15:04:59.621 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "refresh_cache-83bbf40a-f44e-42fe-b09a-0e635a302f6d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:04:59 compute-0 nova_compute[250018]: 2026-01-20 15:04:59.623 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquired lock "refresh_cache-83bbf40a-f44e-42fe-b09a-0e635a302f6d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:04:59 compute-0 nova_compute[250018]: 2026-01-20 15:04:59.624 250022 DEBUG nova.network.neutron [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] [instance: 83bbf40a-f44e-42fe-b09a-0e635a302f6d] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 20 15:05:00 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:05:00 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:05:00 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:05:00.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:05:00 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2422: 321 pgs: 321 active+clean; 498 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.3 MiB/s rd, 34 KiB/s wr, 263 op/s
Jan 20 15:05:00 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e355 do_prune osdmap full prune enabled
Jan 20 15:05:00 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e356 e356: 3 total, 3 up, 3 in
Jan 20 15:05:00 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e356: 3 total, 3 up, 3 in
Jan 20 15:05:00 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e356 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:05:01 compute-0 nova_compute[250018]: 2026-01-20 15:05:01.200 250022 DEBUG oslo_concurrency.lockutils [None req-3da8ef44-962c-4543-9904-99bf475ac6f6 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Acquiring lock "83bbf40a-f44e-42fe-b09a-0e635a302f6d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:05:01 compute-0 nova_compute[250018]: 2026-01-20 15:05:01.201 250022 DEBUG oslo_concurrency.lockutils [None req-3da8ef44-962c-4543-9904-99bf475ac6f6 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Lock "83bbf40a-f44e-42fe-b09a-0e635a302f6d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:05:01 compute-0 nova_compute[250018]: 2026-01-20 15:05:01.202 250022 DEBUG oslo_concurrency.lockutils [None req-3da8ef44-962c-4543-9904-99bf475ac6f6 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Acquiring lock "83bbf40a-f44e-42fe-b09a-0e635a302f6d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:05:01 compute-0 nova_compute[250018]: 2026-01-20 15:05:01.202 250022 DEBUG oslo_concurrency.lockutils [None req-3da8ef44-962c-4543-9904-99bf475ac6f6 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Lock "83bbf40a-f44e-42fe-b09a-0e635a302f6d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:05:01 compute-0 nova_compute[250018]: 2026-01-20 15:05:01.203 250022 DEBUG oslo_concurrency.lockutils [None req-3da8ef44-962c-4543-9904-99bf475ac6f6 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Lock "83bbf40a-f44e-42fe-b09a-0e635a302f6d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:05:01 compute-0 nova_compute[250018]: 2026-01-20 15:05:01.205 250022 INFO nova.compute.manager [None req-3da8ef44-962c-4543-9904-99bf475ac6f6 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: 83bbf40a-f44e-42fe-b09a-0e635a302f6d] Terminating instance
Jan 20 15:05:01 compute-0 nova_compute[250018]: 2026-01-20 15:05:01.207 250022 DEBUG nova.compute.manager [None req-3da8ef44-962c-4543-9904-99bf475ac6f6 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: 83bbf40a-f44e-42fe-b09a-0e635a302f6d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 15:05:01 compute-0 kernel: tapd0fce58e-a2 (unregistering): left promiscuous mode
Jan 20 15:05:01 compute-0 NetworkManager[48960]: <info>  [1768921501.2666] device (tapd0fce58e-a2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 15:05:01 compute-0 ovn_controller[148666]: 2026-01-20T15:05:01Z|00555|binding|INFO|Releasing lport d0fce58e-a22e-40e2-9a2a-bab6b9a4ea73 from this chassis (sb_readonly=0)
Jan 20 15:05:01 compute-0 ovn_controller[148666]: 2026-01-20T15:05:01Z|00556|binding|INFO|Setting lport d0fce58e-a22e-40e2-9a2a-bab6b9a4ea73 down in Southbound
Jan 20 15:05:01 compute-0 ovn_controller[148666]: 2026-01-20T15:05:01Z|00557|binding|INFO|Removing iface tapd0fce58e-a2 ovn-installed in OVS
Jan 20 15:05:01 compute-0 nova_compute[250018]: 2026-01-20 15:05:01.275 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:05:01 compute-0 nova_compute[250018]: 2026-01-20 15:05:01.279 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:05:01 compute-0 nova_compute[250018]: 2026-01-20 15:05:01.292 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:05:01 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:05:01.300 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:70:7d:d6 10.100.0.11'], port_security=['fa:16:3e:70:7d:d6 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '83bbf40a-f44e-42fe-b09a-0e635a302f6d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-671e28d0-0b9e-41e0-b5e0-db1ccd4717ec', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '63555e5851564db08c6429231d264f2c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '7e54c470-6a6f-454e-ae01-9d2d59b2c74d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=248fa32c-94be-4e1b-b4d3-cb9fac0ec155, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=d0fce58e-a22e-40e2-9a2a-bab6b9a4ea73) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:05:01 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:05:01.302 160071 INFO neutron.agent.ovn.metadata.agent [-] Port d0fce58e-a22e-40e2-9a2a-bab6b9a4ea73 in datapath 671e28d0-0b9e-41e0-b5e0-db1ccd4717ec unbound from our chassis
Jan 20 15:05:01 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:05:01.304 160071 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 671e28d0-0b9e-41e0-b5e0-db1ccd4717ec
Jan 20 15:05:01 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:05:01.324 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[ea0e4d59-3f67-4fe9-b446-5e2249a2d24f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:05:01 compute-0 systemd[1]: machine-qemu\x2d71\x2dinstance\x2d0000009d.scope: Deactivated successfully.
Jan 20 15:05:01 compute-0 systemd[1]: machine-qemu\x2d71\x2dinstance\x2d0000009d.scope: Consumed 17.417s CPU time.
Jan 20 15:05:01 compute-0 ceph-mon[74360]: pgmap v2422: 321 pgs: 321 active+clean; 498 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.3 MiB/s rd, 34 KiB/s wr, 263 op/s
Jan 20 15:05:01 compute-0 ceph-mon[74360]: osdmap e356: 3 total, 3 up, 3 in
Jan 20 15:05:01 compute-0 systemd-machined[216401]: Machine qemu-71-instance-0000009d terminated.
Jan 20 15:05:01 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:05:01.366 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[23c9460d-9fe9-4d76-82b5-7b7474349e42]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:05:01 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:05:01.369 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[7f0f8936-6168-4488-831a-f1264837ea04]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:05:01 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:05:01.407 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[8acd1bc9-2619-4097-a007-ee61177bcb60]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:05:01 compute-0 kernel: tapd0fce58e-a2: entered promiscuous mode
Jan 20 15:05:01 compute-0 kernel: tapd0fce58e-a2 (unregistering): left promiscuous mode
Jan 20 15:05:01 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:05:01.431 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[a147b4c9-7700-49f1-93f7-74d308e9edf2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap671e28d0-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2b:4e:69'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 176], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 722583, 'reachable_time': 22271, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 344061, 'error': None, 'target': 'ovnmeta-671e28d0-0b9e-41e0-b5e0-db1ccd4717ec', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:05:01 compute-0 nova_compute[250018]: 2026-01-20 15:05:01.441 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:05:01 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:05:01.448 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[2b80ba5b-3786-4015-be8b-c18c15cf3355]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap671e28d0-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 722597, 'tstamp': 722597}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 344067, 'error': None, 'target': 'ovnmeta-671e28d0-0b9e-41e0-b5e0-db1ccd4717ec', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap671e28d0-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 722601, 'tstamp': 722601}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 344067, 'error': None, 'target': 'ovnmeta-671e28d0-0b9e-41e0-b5e0-db1ccd4717ec', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:05:01 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:05:01.449 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap671e28d0-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:05:01 compute-0 nova_compute[250018]: 2026-01-20 15:05:01.451 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:05:01 compute-0 nova_compute[250018]: 2026-01-20 15:05:01.452 250022 INFO nova.virt.libvirt.driver [-] [instance: 83bbf40a-f44e-42fe-b09a-0e635a302f6d] Instance destroyed successfully.
Jan 20 15:05:01 compute-0 nova_compute[250018]: 2026-01-20 15:05:01.452 250022 DEBUG nova.objects.instance [None req-3da8ef44-962c-4543-9904-99bf475ac6f6 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Lazy-loading 'resources' on Instance uuid 83bbf40a-f44e-42fe-b09a-0e635a302f6d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:05:01 compute-0 nova_compute[250018]: 2026-01-20 15:05:01.455 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:05:01 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:05:01.456 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap671e28d0-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:05:01 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:05:01.456 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:05:01 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:05:01.457 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap671e28d0-00, col_values=(('external_ids', {'iface-id': 'a8628d9e-196f-4b84-89fd-d3a41792b8a0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:05:01 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:05:01.457 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:05:01 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:05:01 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:05:01 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:05:01.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:05:01 compute-0 nova_compute[250018]: 2026-01-20 15:05:01.815 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:05:01 compute-0 nova_compute[250018]: 2026-01-20 15:05:01.840 250022 DEBUG nova.virt.libvirt.vif [None req-3da8ef44-962c-4543-9904-99bf475ac6f6 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T15:03:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-ServerBootFromVolumeStableRescueTest-server-550119680',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverbootfromvolumestablerescuetest-server-550119680',id=157,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-20T15:03:23Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='63555e5851564db08c6429231d264f2c',ramdisk_id='',reservation_id='r-4zf0x6i0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_
min_ram='0',owner_project_name='tempest-ServerBootFromVolumeStableRescueTest-1871371328',owner_user_name='tempest-ServerBootFromVolumeStableRescueTest-1871371328-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T15:03:37Z,user_data=None,user_id='2446e8399b344b29986c1aaf8bf73adf',uuid=83bbf40a-f44e-42fe-b09a-0e635a302f6d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d0fce58e-a22e-40e2-9a2a-bab6b9a4ea73", "address": "fa:16:3e:70:7d:d6", "network": {"id": "671e28d0-0b9e-41e0-b5e0-db1ccd4717ec", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-884777184-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63555e5851564db08c6429231d264f2c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0fce58e-a2", "ovs_interfaceid": "d0fce58e-a22e-40e2-9a2a-bab6b9a4ea73", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 15:05:01 compute-0 nova_compute[250018]: 2026-01-20 15:05:01.840 250022 DEBUG nova.network.os_vif_util [None req-3da8ef44-962c-4543-9904-99bf475ac6f6 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Converting VIF {"id": "d0fce58e-a22e-40e2-9a2a-bab6b9a4ea73", "address": "fa:16:3e:70:7d:d6", "network": {"id": "671e28d0-0b9e-41e0-b5e0-db1ccd4717ec", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-884777184-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63555e5851564db08c6429231d264f2c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0fce58e-a2", "ovs_interfaceid": "d0fce58e-a22e-40e2-9a2a-bab6b9a4ea73", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:05:01 compute-0 nova_compute[250018]: 2026-01-20 15:05:01.841 250022 DEBUG nova.network.os_vif_util [None req-3da8ef44-962c-4543-9904-99bf475ac6f6 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:70:7d:d6,bridge_name='br-int',has_traffic_filtering=True,id=d0fce58e-a22e-40e2-9a2a-bab6b9a4ea73,network=Network(671e28d0-0b9e-41e0-b5e0-db1ccd4717ec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd0fce58e-a2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:05:01 compute-0 nova_compute[250018]: 2026-01-20 15:05:01.841 250022 DEBUG os_vif [None req-3da8ef44-962c-4543-9904-99bf475ac6f6 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:70:7d:d6,bridge_name='br-int',has_traffic_filtering=True,id=d0fce58e-a22e-40e2-9a2a-bab6b9a4ea73,network=Network(671e28d0-0b9e-41e0-b5e0-db1ccd4717ec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd0fce58e-a2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 15:05:01 compute-0 nova_compute[250018]: 2026-01-20 15:05:01.843 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:05:01 compute-0 nova_compute[250018]: 2026-01-20 15:05:01.844 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd0fce58e-a2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:05:01 compute-0 nova_compute[250018]: 2026-01-20 15:05:01.845 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:05:01 compute-0 nova_compute[250018]: 2026-01-20 15:05:01.847 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:05:01 compute-0 nova_compute[250018]: 2026-01-20 15:05:01.850 250022 INFO os_vif [None req-3da8ef44-962c-4543-9904-99bf475ac6f6 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:70:7d:d6,bridge_name='br-int',has_traffic_filtering=True,id=d0fce58e-a22e-40e2-9a2a-bab6b9a4ea73,network=Network(671e28d0-0b9e-41e0-b5e0-db1ccd4717ec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd0fce58e-a2')
Jan 20 15:05:02 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:05:02 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:05:02 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:05:02.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:05:02 compute-0 nova_compute[250018]: 2026-01-20 15:05:02.180 250022 DEBUG nova.compute.manager [req-6dd93142-0a9b-4373-b56a-8de41780b0c2 req-c6529965-33c0-4b26-b9d2-239ab976bf69 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 83bbf40a-f44e-42fe-b09a-0e635a302f6d] Received event network-vif-unplugged-d0fce58e-a22e-40e2-9a2a-bab6b9a4ea73 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:05:02 compute-0 nova_compute[250018]: 2026-01-20 15:05:02.181 250022 DEBUG oslo_concurrency.lockutils [req-6dd93142-0a9b-4373-b56a-8de41780b0c2 req-c6529965-33c0-4b26-b9d2-239ab976bf69 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "83bbf40a-f44e-42fe-b09a-0e635a302f6d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:05:02 compute-0 nova_compute[250018]: 2026-01-20 15:05:02.182 250022 DEBUG oslo_concurrency.lockutils [req-6dd93142-0a9b-4373-b56a-8de41780b0c2 req-c6529965-33c0-4b26-b9d2-239ab976bf69 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "83bbf40a-f44e-42fe-b09a-0e635a302f6d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:05:02 compute-0 nova_compute[250018]: 2026-01-20 15:05:02.183 250022 DEBUG oslo_concurrency.lockutils [req-6dd93142-0a9b-4373-b56a-8de41780b0c2 req-c6529965-33c0-4b26-b9d2-239ab976bf69 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "83bbf40a-f44e-42fe-b09a-0e635a302f6d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:05:02 compute-0 nova_compute[250018]: 2026-01-20 15:05:02.183 250022 DEBUG nova.compute.manager [req-6dd93142-0a9b-4373-b56a-8de41780b0c2 req-c6529965-33c0-4b26-b9d2-239ab976bf69 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 83bbf40a-f44e-42fe-b09a-0e635a302f6d] No waiting events found dispatching network-vif-unplugged-d0fce58e-a22e-40e2-9a2a-bab6b9a4ea73 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:05:02 compute-0 nova_compute[250018]: 2026-01-20 15:05:02.184 250022 DEBUG nova.compute.manager [req-6dd93142-0a9b-4373-b56a-8de41780b0c2 req-c6529965-33c0-4b26-b9d2-239ab976bf69 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 83bbf40a-f44e-42fe-b09a-0e635a302f6d] Received event network-vif-unplugged-d0fce58e-a22e-40e2-9a2a-bab6b9a4ea73 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 20 15:05:02 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2424: 321 pgs: 1 active+clean+snaptrim, 320 active+clean; 480 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 6.0 MiB/s rd, 23 KiB/s wr, 307 op/s
Jan 20 15:05:02 compute-0 nova_compute[250018]: 2026-01-20 15:05:02.435 250022 INFO nova.virt.libvirt.driver [None req-3da8ef44-962c-4543-9904-99bf475ac6f6 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: 83bbf40a-f44e-42fe-b09a-0e635a302f6d] Deleting instance files /var/lib/nova/instances/83bbf40a-f44e-42fe-b09a-0e635a302f6d_del
Jan 20 15:05:02 compute-0 nova_compute[250018]: 2026-01-20 15:05:02.436 250022 INFO nova.virt.libvirt.driver [None req-3da8ef44-962c-4543-9904-99bf475ac6f6 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: 83bbf40a-f44e-42fe-b09a-0e635a302f6d] Deletion of /var/lib/nova/instances/83bbf40a-f44e-42fe-b09a-0e635a302f6d_del complete
Jan 20 15:05:02 compute-0 nova_compute[250018]: 2026-01-20 15:05:02.593 250022 INFO nova.compute.manager [None req-3da8ef44-962c-4543-9904-99bf475ac6f6 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: 83bbf40a-f44e-42fe-b09a-0e635a302f6d] Took 1.38 seconds to destroy the instance on the hypervisor.
Jan 20 15:05:02 compute-0 nova_compute[250018]: 2026-01-20 15:05:02.593 250022 DEBUG oslo.service.loopingcall [None req-3da8ef44-962c-4543-9904-99bf475ac6f6 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 15:05:02 compute-0 nova_compute[250018]: 2026-01-20 15:05:02.594 250022 DEBUG nova.compute.manager [-] [instance: 83bbf40a-f44e-42fe-b09a-0e635a302f6d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 15:05:02 compute-0 nova_compute[250018]: 2026-01-20 15:05:02.594 250022 DEBUG nova.network.neutron [-] [instance: 83bbf40a-f44e-42fe-b09a-0e635a302f6d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 15:05:02 compute-0 nova_compute[250018]: 2026-01-20 15:05:02.722 250022 DEBUG nova.network.neutron [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] [instance: 83bbf40a-f44e-42fe-b09a-0e635a302f6d] Updating instance_info_cache with network_info: [{"id": "d0fce58e-a22e-40e2-9a2a-bab6b9a4ea73", "address": "fa:16:3e:70:7d:d6", "network": {"id": "671e28d0-0b9e-41e0-b5e0-db1ccd4717ec", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-884777184-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63555e5851564db08c6429231d264f2c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0fce58e-a2", "ovs_interfaceid": "d0fce58e-a22e-40e2-9a2a-bab6b9a4ea73", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:05:02 compute-0 nova_compute[250018]: 2026-01-20 15:05:02.787 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Releasing lock "refresh_cache-83bbf40a-f44e-42fe-b09a-0e635a302f6d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:05:02 compute-0 nova_compute[250018]: 2026-01-20 15:05:02.788 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] [instance: 83bbf40a-f44e-42fe-b09a-0e635a302f6d] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 20 15:05:02 compute-0 nova_compute[250018]: 2026-01-20 15:05:02.788 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:05:03 compute-0 ceph-mon[74360]: pgmap v2424: 321 pgs: 1 active+clean+snaptrim, 320 active+clean; 480 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 6.0 MiB/s rd, 23 KiB/s wr, 307 op/s
Jan 20 15:05:03 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:05:03 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:05:03 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:05:03.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:05:04 compute-0 nova_compute[250018]: 2026-01-20 15:05:04.044 250022 DEBUG nova.network.neutron [-] [instance: 83bbf40a-f44e-42fe-b09a-0e635a302f6d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:05:04 compute-0 nova_compute[250018]: 2026-01-20 15:05:04.077 250022 INFO nova.compute.manager [-] [instance: 83bbf40a-f44e-42fe-b09a-0e635a302f6d] Took 1.48 seconds to deallocate network for instance.
Jan 20 15:05:04 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:05:04 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:05:04 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:05:04.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:05:04 compute-0 nova_compute[250018]: 2026-01-20 15:05:04.257 250022 DEBUG oslo_concurrency.lockutils [None req-3da8ef44-962c-4543-9904-99bf475ac6f6 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:05:04 compute-0 nova_compute[250018]: 2026-01-20 15:05:04.258 250022 DEBUG oslo_concurrency.lockutils [None req-3da8ef44-962c-4543-9904-99bf475ac6f6 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:05:04 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2425: 321 pgs: 1 active+clean+snaptrim, 320 active+clean; 440 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 22 KiB/s wr, 205 op/s
Jan 20 15:05:04 compute-0 nova_compute[250018]: 2026-01-20 15:05:04.374 250022 DEBUG nova.compute.manager [req-c72b0d13-caa7-4abb-a113-5bda79a9e350 req-6243deec-44ce-4264-86c2-5fdf94bda2b4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 83bbf40a-f44e-42fe-b09a-0e635a302f6d] Received event network-vif-deleted-d0fce58e-a22e-40e2-9a2a-bab6b9a4ea73 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:05:04 compute-0 nova_compute[250018]: 2026-01-20 15:05:04.473 250022 DEBUG oslo_concurrency.processutils [None req-3da8ef44-962c-4543-9904-99bf475ac6f6 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:05:04 compute-0 ceph-mon[74360]: pgmap v2425: 321 pgs: 1 active+clean+snaptrim, 320 active+clean; 440 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 22 KiB/s wr, 205 op/s
Jan 20 15:05:04 compute-0 nova_compute[250018]: 2026-01-20 15:05:04.603 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:05:04 compute-0 nova_compute[250018]: 2026-01-20 15:05:04.664 250022 DEBUG nova.compute.manager [req-944adb2a-be75-44dc-9717-ff54a5933fbd req-c8dd2527-9f20-49de-800f-536f39a23480 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 83bbf40a-f44e-42fe-b09a-0e635a302f6d] Received event network-vif-plugged-d0fce58e-a22e-40e2-9a2a-bab6b9a4ea73 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:05:04 compute-0 nova_compute[250018]: 2026-01-20 15:05:04.664 250022 DEBUG oslo_concurrency.lockutils [req-944adb2a-be75-44dc-9717-ff54a5933fbd req-c8dd2527-9f20-49de-800f-536f39a23480 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "83bbf40a-f44e-42fe-b09a-0e635a302f6d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:05:04 compute-0 nova_compute[250018]: 2026-01-20 15:05:04.665 250022 DEBUG oslo_concurrency.lockutils [req-944adb2a-be75-44dc-9717-ff54a5933fbd req-c8dd2527-9f20-49de-800f-536f39a23480 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "83bbf40a-f44e-42fe-b09a-0e635a302f6d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:05:04 compute-0 nova_compute[250018]: 2026-01-20 15:05:04.665 250022 DEBUG oslo_concurrency.lockutils [req-944adb2a-be75-44dc-9717-ff54a5933fbd req-c8dd2527-9f20-49de-800f-536f39a23480 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "83bbf40a-f44e-42fe-b09a-0e635a302f6d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:05:04 compute-0 nova_compute[250018]: 2026-01-20 15:05:04.665 250022 DEBUG nova.compute.manager [req-944adb2a-be75-44dc-9717-ff54a5933fbd req-c8dd2527-9f20-49de-800f-536f39a23480 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 83bbf40a-f44e-42fe-b09a-0e635a302f6d] No waiting events found dispatching network-vif-plugged-d0fce58e-a22e-40e2-9a2a-bab6b9a4ea73 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:05:04 compute-0 nova_compute[250018]: 2026-01-20 15:05:04.665 250022 WARNING nova.compute.manager [req-944adb2a-be75-44dc-9717-ff54a5933fbd req-c8dd2527-9f20-49de-800f-536f39a23480 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 83bbf40a-f44e-42fe-b09a-0e635a302f6d] Received unexpected event network-vif-plugged-d0fce58e-a22e-40e2-9a2a-bab6b9a4ea73 for instance with vm_state deleted and task_state None.
Jan 20 15:05:04 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:05:04 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2855842943' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:05:04 compute-0 nova_compute[250018]: 2026-01-20 15:05:04.934 250022 DEBUG oslo_concurrency.processutils [None req-3da8ef44-962c-4543-9904-99bf475ac6f6 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:05:04 compute-0 nova_compute[250018]: 2026-01-20 15:05:04.940 250022 DEBUG nova.compute.provider_tree [None req-3da8ef44-962c-4543-9904-99bf475ac6f6 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:05:04 compute-0 nova_compute[250018]: 2026-01-20 15:05:04.966 250022 DEBUG nova.scheduler.client.report [None req-3da8ef44-962c-4543-9904-99bf475ac6f6 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:05:05 compute-0 nova_compute[250018]: 2026-01-20 15:05:05.134 250022 DEBUG oslo_concurrency.lockutils [None req-3da8ef44-962c-4543-9904-99bf475ac6f6 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.877s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:05:05 compute-0 nova_compute[250018]: 2026-01-20 15:05:05.280 250022 INFO nova.scheduler.client.report [None req-3da8ef44-962c-4543-9904-99bf475ac6f6 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Deleted allocations for instance 83bbf40a-f44e-42fe-b09a-0e635a302f6d
Jan 20 15:05:05 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e356 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:05:05 compute-0 nova_compute[250018]: 2026-01-20 15:05:05.581 250022 DEBUG oslo_concurrency.lockutils [None req-3da8ef44-962c-4543-9904-99bf475ac6f6 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Lock "83bbf40a-f44e-42fe-b09a-0e635a302f6d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.380s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:05:05 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:05:05 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:05:05 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:05:05.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:05:05 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/2855842943' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:05:06 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:05:06 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:05:06 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:05:06.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:05:06 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2426: 321 pgs: 321 active+clean; 374 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 31 KiB/s wr, 189 op/s
Jan 20 15:05:06 compute-0 ceph-mon[74360]: pgmap v2426: 321 pgs: 321 active+clean; 374 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 31 KiB/s wr, 189 op/s
Jan 20 15:05:06 compute-0 nova_compute[250018]: 2026-01-20 15:05:06.846 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:05:07 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:05:07 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:05:07 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:05:07.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:05:07 compute-0 ceph-mon[74360]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 20 15:05:07 compute-0 ceph-mon[74360]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4200.0 total, 600.0 interval
                                           Cumulative writes: 12K writes, 54K keys, 12K commit groups, 1.0 writes per commit group, ingest: 0.08 GB, 0.02 MB/s
                                           Cumulative WAL: 12K writes, 12K syncs, 1.00 writes per sync, written: 0.08 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1788 writes, 7911 keys, 1788 commit groups, 1.0 writes per commit group, ingest: 11.50 MB, 0.02 MB/s
                                           Interval WAL: 1789 writes, 1789 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     44.7      1.57              0.24        35    0.045       0      0       0.0       0.0
                                             L6      1/0    9.84 MB   0.0      0.4     0.1      0.3       0.3      0.0       0.0   4.6    132.9    111.9      2.87              1.03        34    0.084    218K    18K       0.0       0.0
                                            Sum      1/0    9.84 MB   0.0      0.4     0.1      0.3       0.4      0.1       0.0   5.6     86.0     88.2      4.43              1.26        69    0.064    218K    18K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   7.5    133.2    129.7      0.59              0.25        12    0.049     51K   3125       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.4     0.1      0.3       0.3      0.0       0.0   0.0    132.9    111.9      2.87              1.03        34    0.084    218K    18K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     44.8      1.56              0.24        34    0.046       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     14.7      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 4200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.068, interval 0.010
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.38 GB write, 0.09 MB/s write, 0.37 GB read, 0.09 MB/s read, 4.4 seconds
                                           Interval compaction: 0.07 GB write, 0.13 MB/s write, 0.08 GB read, 0.13 MB/s read, 0.6 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5576af6451f0#2 capacity: 304.00 MB usage: 43.97 MB table_size: 0 occupancy: 18446744073709551615 collections: 8 last_copies: 0 last_secs: 0.000509 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2558,42.35 MB,13.9316%) FilterBlock(70,615.05 KB,0.197576%) IndexBlock(70,1.02 MB,0.335402%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 20 15:05:08 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:05:08 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:05:08 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:05:08.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:05:08 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2427: 321 pgs: 321 active+clean; 374 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 31 KiB/s wr, 189 op/s
Jan 20 15:05:08 compute-0 ceph-mon[74360]: pgmap v2427: 321 pgs: 321 active+clean; 374 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 31 KiB/s wr, 189 op/s
Jan 20 15:05:09 compute-0 nova_compute[250018]: 2026-01-20 15:05:09.605 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:05:09 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:05:09 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:05:09 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:05:09.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:05:10 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:05:10 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:05:10 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:05:10.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:05:10 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2428: 321 pgs: 321 active+clean; 347 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 943 KiB/s rd, 1.6 MiB/s wr, 148 op/s
Jan 20 15:05:10 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e356 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:05:10 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e356 do_prune osdmap full prune enabled
Jan 20 15:05:10 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e357 e357: 3 total, 3 up, 3 in
Jan 20 15:05:10 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e357: 3 total, 3 up, 3 in
Jan 20 15:05:10 compute-0 nova_compute[250018]: 2026-01-20 15:05:10.782 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:05:11 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:05:11 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:05:11 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:05:11.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:05:11 compute-0 ceph-mon[74360]: pgmap v2428: 321 pgs: 321 active+clean; 347 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 943 KiB/s rd, 1.6 MiB/s wr, 148 op/s
Jan 20 15:05:11 compute-0 ceph-mon[74360]: osdmap e357: 3 total, 3 up, 3 in
Jan 20 15:05:11 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/1033225945' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:05:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 15:05:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:05:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 20 15:05:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:05:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.004925706367485576 of space, bias 1.0, pg target 1.4777119102456728 quantized to 32 (current 32)
Jan 20 15:05:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:05:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021617782198027173 of space, bias 1.0, pg target 0.6463716877210125 quantized to 32 (current 32)
Jan 20 15:05:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:05:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 15:05:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:05:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.002892064489112228 of space, bias 1.0, pg target 0.8647272822445562 quantized to 32 (current 32)
Jan 20 15:05:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:05:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 32)
Jan 20 15:05:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:05:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 15:05:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:05:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021737739624046157 quantized to 32 (current 32)
Jan 20 15:05:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:05:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Jan 20 15:05:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:05:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 15:05:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:05:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Jan 20 15:05:11 compute-0 nova_compute[250018]: 2026-01-20 15:05:11.847 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:05:12 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:05:12 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:05:12 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:05:12.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:05:12 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2430: 321 pgs: 321 active+clean; 339 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 676 KiB/s rd, 2.5 MiB/s wr, 163 op/s
Jan 20 15:05:12 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/2837469865' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:05:12 compute-0 ceph-mon[74360]: pgmap v2430: 321 pgs: 321 active+clean; 339 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 676 KiB/s rd, 2.5 MiB/s wr, 163 op/s
Jan 20 15:05:13 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:05:13 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:05:13 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:05:13.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:05:13 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e357 do_prune osdmap full prune enabled
Jan 20 15:05:13 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e358 e358: 3 total, 3 up, 3 in
Jan 20 15:05:13 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e358: 3 total, 3 up, 3 in
Jan 20 15:05:13 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/1230677250' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:05:14 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:05:14 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:05:14 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:05:14.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:05:14 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2432: 321 pgs: 321 active+clean; 323 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 887 KiB/s rd, 3.1 MiB/s wr, 172 op/s
Jan 20 15:05:14 compute-0 nova_compute[250018]: 2026-01-20 15:05:14.606 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:05:15 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:05:15 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:05:15 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:05:15 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:05:15.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:05:15 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/1230677250' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:05:15 compute-0 ceph-mon[74360]: osdmap e358: 3 total, 3 up, 3 in
Jan 20 15:05:15 compute-0 ceph-mon[74360]: pgmap v2432: 321 pgs: 321 active+clean; 323 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 887 KiB/s rd, 3.1 MiB/s wr, 172 op/s
Jan 20 15:05:16 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:05:16 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:05:16 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:05:16.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:05:16 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2433: 321 pgs: 321 active+clean; 306 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 906 KiB/s rd, 3.2 MiB/s wr, 186 op/s
Jan 20 15:05:16 compute-0 nova_compute[250018]: 2026-01-20 15:05:16.450 250022 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768921501.4484856, 83bbf40a-f44e-42fe-b09a-0e635a302f6d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:05:16 compute-0 nova_compute[250018]: 2026-01-20 15:05:16.450 250022 INFO nova.compute.manager [-] [instance: 83bbf40a-f44e-42fe-b09a-0e635a302f6d] VM Stopped (Lifecycle Event)
Jan 20 15:05:16 compute-0 nova_compute[250018]: 2026-01-20 15:05:16.474 250022 DEBUG nova.compute.manager [None req-797a461c-a7ec-4389-90b3-2c1f478ec2fc - - - - - -] [instance: 83bbf40a-f44e-42fe-b09a-0e635a302f6d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:05:16 compute-0 nova_compute[250018]: 2026-01-20 15:05:16.850 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:05:16 compute-0 sudo[344124]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:05:16 compute-0 sudo[344124]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:05:16 compute-0 sudo[344124]: pam_unix(sudo:session): session closed for user root
Jan 20 15:05:16 compute-0 sudo[344149]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:05:16 compute-0 sudo[344149]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:05:16 compute-0 sudo[344149]: pam_unix(sudo:session): session closed for user root
Jan 20 15:05:17 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:05:17 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:05:17 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:05:17.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:05:18 compute-0 ceph-mon[74360]: pgmap v2433: 321 pgs: 321 active+clean; 306 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 906 KiB/s rd, 3.2 MiB/s wr, 186 op/s
Jan 20 15:05:18 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:05:18 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:05:18 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:05:18.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:05:18 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2434: 321 pgs: 321 active+clean; 306 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 381 KiB/s rd, 1.2 MiB/s wr, 104 op/s
Jan 20 15:05:18 compute-0 nova_compute[250018]: 2026-01-20 15:05:18.611 250022 DEBUG oslo_concurrency.lockutils [None req-3b2751fb-8f89-4613-af6d-d75262d5b65c 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Acquiring lock "b3f961d2-e73f-49bf-b141-6505e77ad9ac" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:05:18 compute-0 nova_compute[250018]: 2026-01-20 15:05:18.611 250022 DEBUG oslo_concurrency.lockutils [None req-3b2751fb-8f89-4613-af6d-d75262d5b65c 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Lock "b3f961d2-e73f-49bf-b141-6505e77ad9ac" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:05:18 compute-0 nova_compute[250018]: 2026-01-20 15:05:18.611 250022 DEBUG oslo_concurrency.lockutils [None req-3b2751fb-8f89-4613-af6d-d75262d5b65c 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Acquiring lock "b3f961d2-e73f-49bf-b141-6505e77ad9ac-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:05:18 compute-0 nova_compute[250018]: 2026-01-20 15:05:18.611 250022 DEBUG oslo_concurrency.lockutils [None req-3b2751fb-8f89-4613-af6d-d75262d5b65c 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Lock "b3f961d2-e73f-49bf-b141-6505e77ad9ac-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:05:18 compute-0 nova_compute[250018]: 2026-01-20 15:05:18.612 250022 DEBUG oslo_concurrency.lockutils [None req-3b2751fb-8f89-4613-af6d-d75262d5b65c 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Lock "b3f961d2-e73f-49bf-b141-6505e77ad9ac-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:05:18 compute-0 nova_compute[250018]: 2026-01-20 15:05:18.613 250022 INFO nova.compute.manager [None req-3b2751fb-8f89-4613-af6d-d75262d5b65c 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: b3f961d2-e73f-49bf-b141-6505e77ad9ac] Terminating instance
Jan 20 15:05:18 compute-0 nova_compute[250018]: 2026-01-20 15:05:18.613 250022 DEBUG nova.compute.manager [None req-3b2751fb-8f89-4613-af6d-d75262d5b65c 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: b3f961d2-e73f-49bf-b141-6505e77ad9ac] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 15:05:18 compute-0 kernel: tap07baaccf-06 (unregistering): left promiscuous mode
Jan 20 15:05:18 compute-0 NetworkManager[48960]: <info>  [1768921518.9647] device (tap07baaccf-06): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 15:05:19 compute-0 ovn_controller[148666]: 2026-01-20T15:05:19Z|00558|binding|INFO|Releasing lport 07baaccf-06f0-4af0-a04a-9638078c313f from this chassis (sb_readonly=0)
Jan 20 15:05:19 compute-0 nova_compute[250018]: 2026-01-20 15:05:19.016 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:05:19 compute-0 ovn_controller[148666]: 2026-01-20T15:05:19Z|00559|binding|INFO|Setting lport 07baaccf-06f0-4af0-a04a-9638078c313f down in Southbound
Jan 20 15:05:19 compute-0 ovn_controller[148666]: 2026-01-20T15:05:19Z|00560|binding|INFO|Removing iface tap07baaccf-06 ovn-installed in OVS
Jan 20 15:05:19 compute-0 nova_compute[250018]: 2026-01-20 15:05:19.020 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:05:19 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:05:19.024 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:12:81:5e 10.100.0.6'], port_security=['fa:16:3e:12:81:5e 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'b3f961d2-e73f-49bf-b141-6505e77ad9ac', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-671e28d0-0b9e-41e0-b5e0-db1ccd4717ec', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '63555e5851564db08c6429231d264f2c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '7e54c470-6a6f-454e-ae01-9d2d59b2c74d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=248fa32c-94be-4e1b-b4d3-cb9fac0ec155, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=07baaccf-06f0-4af0-a04a-9638078c313f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:05:19 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:05:19.026 160071 INFO neutron.agent.ovn.metadata.agent [-] Port 07baaccf-06f0-4af0-a04a-9638078c313f in datapath 671e28d0-0b9e-41e0-b5e0-db1ccd4717ec unbound from our chassis
Jan 20 15:05:19 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:05:19.028 160071 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 671e28d0-0b9e-41e0-b5e0-db1ccd4717ec, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 15:05:19 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:05:19.029 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[11fdc392-8cb3-4e6e-9196-4e681ae694cf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:05:19 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:05:19.029 160071 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-671e28d0-0b9e-41e0-b5e0-db1ccd4717ec namespace which is not needed anymore
Jan 20 15:05:19 compute-0 nova_compute[250018]: 2026-01-20 15:05:19.031 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:05:19 compute-0 systemd[1]: machine-qemu\x2d69\x2dinstance\x2d00000095.scope: Deactivated successfully.
Jan 20 15:05:19 compute-0 systemd[1]: machine-qemu\x2d69\x2dinstance\x2d00000095.scope: Consumed 23.752s CPU time.
Jan 20 15:05:19 compute-0 systemd-machined[216401]: Machine qemu-69-instance-00000095 terminated.
Jan 20 15:05:19 compute-0 podman[344177]: 2026-01-20 15:05:19.124085649 +0000 UTC m=+0.092609985 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Jan 20 15:05:19 compute-0 podman[344179]: 2026-01-20 15:05:19.136631897 +0000 UTC m=+0.090377555 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 20 15:05:19 compute-0 nova_compute[250018]: 2026-01-20 15:05:19.249 250022 INFO nova.virt.libvirt.driver [-] [instance: b3f961d2-e73f-49bf-b141-6505e77ad9ac] Instance destroyed successfully.
Jan 20 15:05:19 compute-0 nova_compute[250018]: 2026-01-20 15:05:19.250 250022 DEBUG nova.objects.instance [None req-3b2751fb-8f89-4613-af6d-d75262d5b65c 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Lazy-loading 'resources' on Instance uuid b3f961d2-e73f-49bf-b141-6505e77ad9ac obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:05:19 compute-0 neutron-haproxy-ovnmeta-671e28d0-0b9e-41e0-b5e0-db1ccd4717ec[338621]: [NOTICE]   (338625) : haproxy version is 2.8.14-c23fe91
Jan 20 15:05:19 compute-0 neutron-haproxy-ovnmeta-671e28d0-0b9e-41e0-b5e0-db1ccd4717ec[338621]: [NOTICE]   (338625) : path to executable is /usr/sbin/haproxy
Jan 20 15:05:19 compute-0 neutron-haproxy-ovnmeta-671e28d0-0b9e-41e0-b5e0-db1ccd4717ec[338621]: [WARNING]  (338625) : Exiting Master process...
Jan 20 15:05:19 compute-0 neutron-haproxy-ovnmeta-671e28d0-0b9e-41e0-b5e0-db1ccd4717ec[338621]: [ALERT]    (338625) : Current worker (338627) exited with code 143 (Terminated)
Jan 20 15:05:19 compute-0 neutron-haproxy-ovnmeta-671e28d0-0b9e-41e0-b5e0-db1ccd4717ec[338621]: [WARNING]  (338625) : All workers exited. Exiting... (0)
Jan 20 15:05:19 compute-0 systemd[1]: libpod-5810b0dee8367a82946546c7c70bc3d30342d5b471c721005ff452512f162fde.scope: Deactivated successfully.
Jan 20 15:05:19 compute-0 podman[344243]: 2026-01-20 15:05:19.297585533 +0000 UTC m=+0.167217105 container died 5810b0dee8367a82946546c7c70bc3d30342d5b471c721005ff452512f162fde (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-671e28d0-0b9e-41e0-b5e0-db1ccd4717ec, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 20 15:05:19 compute-0 nova_compute[250018]: 2026-01-20 15:05:19.346 250022 DEBUG nova.virt.libvirt.vif [None req-3b2751fb-8f89-4613-af6d-d75262d5b65c 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T15:01:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-ServerBootFromVolumeStableRescueTest-server-994461168',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverbootfromvolumestablerescuetest-server-994461168',id=149,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-20T15:01:29Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='63555e5851564db08c6429231d264f2c',ramdisk_id='',reservation_id='r-lbk5hu90',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_
min_ram='0',owner_project_name='tempest-ServerBootFromVolumeStableRescueTest-1871371328',owner_user_name='tempest-ServerBootFromVolumeStableRescueTest-1871371328-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T15:01:38Z,user_data=None,user_id='2446e8399b344b29986c1aaf8bf73adf',uuid=b3f961d2-e73f-49bf-b141-6505e77ad9ac,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "07baaccf-06f0-4af0-a04a-9638078c313f", "address": "fa:16:3e:12:81:5e", "network": {"id": "671e28d0-0b9e-41e0-b5e0-db1ccd4717ec", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-884777184-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63555e5851564db08c6429231d264f2c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07baaccf-06", "ovs_interfaceid": "07baaccf-06f0-4af0-a04a-9638078c313f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 15:05:19 compute-0 nova_compute[250018]: 2026-01-20 15:05:19.347 250022 DEBUG nova.network.os_vif_util [None req-3b2751fb-8f89-4613-af6d-d75262d5b65c 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Converting VIF {"id": "07baaccf-06f0-4af0-a04a-9638078c313f", "address": "fa:16:3e:12:81:5e", "network": {"id": "671e28d0-0b9e-41e0-b5e0-db1ccd4717ec", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-884777184-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63555e5851564db08c6429231d264f2c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07baaccf-06", "ovs_interfaceid": "07baaccf-06f0-4af0-a04a-9638078c313f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:05:19 compute-0 nova_compute[250018]: 2026-01-20 15:05:19.347 250022 DEBUG nova.network.os_vif_util [None req-3b2751fb-8f89-4613-af6d-d75262d5b65c 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:12:81:5e,bridge_name='br-int',has_traffic_filtering=True,id=07baaccf-06f0-4af0-a04a-9638078c313f,network=Network(671e28d0-0b9e-41e0-b5e0-db1ccd4717ec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap07baaccf-06') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:05:19 compute-0 nova_compute[250018]: 2026-01-20 15:05:19.348 250022 DEBUG os_vif [None req-3b2751fb-8f89-4613-af6d-d75262d5b65c 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:12:81:5e,bridge_name='br-int',has_traffic_filtering=True,id=07baaccf-06f0-4af0-a04a-9638078c313f,network=Network(671e28d0-0b9e-41e0-b5e0-db1ccd4717ec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap07baaccf-06') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 15:05:19 compute-0 nova_compute[250018]: 2026-01-20 15:05:19.350 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:05:19 compute-0 nova_compute[250018]: 2026-01-20 15:05:19.350 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap07baaccf-06, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:05:19 compute-0 nova_compute[250018]: 2026-01-20 15:05:19.352 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:05:19 compute-0 nova_compute[250018]: 2026-01-20 15:05:19.353 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:05:19 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5810b0dee8367a82946546c7c70bc3d30342d5b471c721005ff452512f162fde-userdata-shm.mount: Deactivated successfully.
Jan 20 15:05:19 compute-0 nova_compute[250018]: 2026-01-20 15:05:19.356 250022 INFO os_vif [None req-3b2751fb-8f89-4613-af6d-d75262d5b65c 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:12:81:5e,bridge_name='br-int',has_traffic_filtering=True,id=07baaccf-06f0-4af0-a04a-9638078c313f,network=Network(671e28d0-0b9e-41e0-b5e0-db1ccd4717ec),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap07baaccf-06')
Jan 20 15:05:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-479d335b9a84cd2bddb395b02f918d545db41496cb64ad38a58957b6d714c968-merged.mount: Deactivated successfully.
Jan 20 15:05:19 compute-0 podman[344243]: 2026-01-20 15:05:19.370548128 +0000 UTC m=+0.240179700 container cleanup 5810b0dee8367a82946546c7c70bc3d30342d5b471c721005ff452512f162fde (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-671e28d0-0b9e-41e0-b5e0-db1ccd4717ec, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 20 15:05:19 compute-0 systemd[1]: libpod-conmon-5810b0dee8367a82946546c7c70bc3d30342d5b471c721005ff452512f162fde.scope: Deactivated successfully.
Jan 20 15:05:19 compute-0 nova_compute[250018]: 2026-01-20 15:05:19.384 250022 DEBUG nova.compute.manager [req-8c0a3d59-9078-40c1-80a0-9e9843ba2a6a req-27089b24-5e51-43e7-aa29-4ed952ec9b1d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b3f961d2-e73f-49bf-b141-6505e77ad9ac] Received event network-vif-unplugged-07baaccf-06f0-4af0-a04a-9638078c313f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:05:19 compute-0 nova_compute[250018]: 2026-01-20 15:05:19.384 250022 DEBUG oslo_concurrency.lockutils [req-8c0a3d59-9078-40c1-80a0-9e9843ba2a6a req-27089b24-5e51-43e7-aa29-4ed952ec9b1d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "b3f961d2-e73f-49bf-b141-6505e77ad9ac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:05:19 compute-0 nova_compute[250018]: 2026-01-20 15:05:19.385 250022 DEBUG oslo_concurrency.lockutils [req-8c0a3d59-9078-40c1-80a0-9e9843ba2a6a req-27089b24-5e51-43e7-aa29-4ed952ec9b1d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "b3f961d2-e73f-49bf-b141-6505e77ad9ac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:05:19 compute-0 nova_compute[250018]: 2026-01-20 15:05:19.385 250022 DEBUG oslo_concurrency.lockutils [req-8c0a3d59-9078-40c1-80a0-9e9843ba2a6a req-27089b24-5e51-43e7-aa29-4ed952ec9b1d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "b3f961d2-e73f-49bf-b141-6505e77ad9ac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:05:19 compute-0 nova_compute[250018]: 2026-01-20 15:05:19.385 250022 DEBUG nova.compute.manager [req-8c0a3d59-9078-40c1-80a0-9e9843ba2a6a req-27089b24-5e51-43e7-aa29-4ed952ec9b1d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b3f961d2-e73f-49bf-b141-6505e77ad9ac] No waiting events found dispatching network-vif-unplugged-07baaccf-06f0-4af0-a04a-9638078c313f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:05:19 compute-0 nova_compute[250018]: 2026-01-20 15:05:19.385 250022 DEBUG nova.compute.manager [req-8c0a3d59-9078-40c1-80a0-9e9843ba2a6a req-27089b24-5e51-43e7-aa29-4ed952ec9b1d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b3f961d2-e73f-49bf-b141-6505e77ad9ac] Received event network-vif-unplugged-07baaccf-06f0-4af0-a04a-9638078c313f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 20 15:05:19 compute-0 ceph-mon[74360]: pgmap v2434: 321 pgs: 321 active+clean; 306 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 381 KiB/s rd, 1.2 MiB/s wr, 104 op/s
Jan 20 15:05:19 compute-0 podman[344299]: 2026-01-20 15:05:19.50205525 +0000 UTC m=+0.108271227 container remove 5810b0dee8367a82946546c7c70bc3d30342d5b471c721005ff452512f162fde (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-671e28d0-0b9e-41e0-b5e0-db1ccd4717ec, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 20 15:05:19 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:05:19.507 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[503165bf-8c5e-407c-b7d7-4e997560b3f9]: (4, ('Tue Jan 20 03:05:19 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-671e28d0-0b9e-41e0-b5e0-db1ccd4717ec (5810b0dee8367a82946546c7c70bc3d30342d5b471c721005ff452512f162fde)\n5810b0dee8367a82946546c7c70bc3d30342d5b471c721005ff452512f162fde\nTue Jan 20 03:05:19 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-671e28d0-0b9e-41e0-b5e0-db1ccd4717ec (5810b0dee8367a82946546c7c70bc3d30342d5b471c721005ff452512f162fde)\n5810b0dee8367a82946546c7c70bc3d30342d5b471c721005ff452512f162fde\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:05:19 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:05:19.509 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[16807d4d-e145-4cf8-88f2-222e37dc2412]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:05:19 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:05:19.510 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap671e28d0-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:05:19 compute-0 kernel: tap671e28d0-00: left promiscuous mode
Jan 20 15:05:19 compute-0 nova_compute[250018]: 2026-01-20 15:05:19.512 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:05:19 compute-0 nova_compute[250018]: 2026-01-20 15:05:19.524 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:05:19 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:05:19.527 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[27e61083-2be6-482c-b91e-b01d8a7774ae]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:05:19 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:05:19.542 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[db00295b-f598-47ad-b119-f06e13c59269]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:05:19 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:05:19.543 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[fbf6a8e7-7c81-4f81-a880-d7444de883cc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:05:19 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:05:19.557 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[b0788022-233c-4a7f-87fd-d816760f9ebe]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 722577, 'reachable_time': 41941, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 344317, 'error': None, 'target': 'ovnmeta-671e28d0-0b9e-41e0-b5e0-db1ccd4717ec', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:05:19 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:05:19.559 160517 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-671e28d0-0b9e-41e0-b5e0-db1ccd4717ec deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 15:05:19 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:05:19.559 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[5a48438b-ac1a-4fdc-9d58-df0e01ecc2e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:05:19 compute-0 systemd[1]: run-netns-ovnmeta\x2d671e28d0\x2d0b9e\x2d41e0\x2db5e0\x2ddb1ccd4717ec.mount: Deactivated successfully.
Jan 20 15:05:19 compute-0 nova_compute[250018]: 2026-01-20 15:05:19.608 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:05:19 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:05:19 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:05:19 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:05:19.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:05:20 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:05:20 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:05:20 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:05:20.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:05:20 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2435: 321 pgs: 321 active+clean; 279 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 327 KiB/s rd, 984 KiB/s wr, 107 op/s
Jan 20 15:05:20 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:05:21 compute-0 nova_compute[250018]: 2026-01-20 15:05:21.523 250022 DEBUG nova.compute.manager [req-b84c9900-ff2e-4723-a7f7-71038d80d435 req-9d7a91ed-a905-45ee-95b2-de566bf2bb0e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b3f961d2-e73f-49bf-b141-6505e77ad9ac] Received event network-vif-plugged-07baaccf-06f0-4af0-a04a-9638078c313f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:05:21 compute-0 nova_compute[250018]: 2026-01-20 15:05:21.523 250022 DEBUG oslo_concurrency.lockutils [req-b84c9900-ff2e-4723-a7f7-71038d80d435 req-9d7a91ed-a905-45ee-95b2-de566bf2bb0e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "b3f961d2-e73f-49bf-b141-6505e77ad9ac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:05:21 compute-0 nova_compute[250018]: 2026-01-20 15:05:21.523 250022 DEBUG oslo_concurrency.lockutils [req-b84c9900-ff2e-4723-a7f7-71038d80d435 req-9d7a91ed-a905-45ee-95b2-de566bf2bb0e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "b3f961d2-e73f-49bf-b141-6505e77ad9ac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:05:21 compute-0 nova_compute[250018]: 2026-01-20 15:05:21.524 250022 DEBUG oslo_concurrency.lockutils [req-b84c9900-ff2e-4723-a7f7-71038d80d435 req-9d7a91ed-a905-45ee-95b2-de566bf2bb0e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "b3f961d2-e73f-49bf-b141-6505e77ad9ac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:05:21 compute-0 nova_compute[250018]: 2026-01-20 15:05:21.524 250022 DEBUG nova.compute.manager [req-b84c9900-ff2e-4723-a7f7-71038d80d435 req-9d7a91ed-a905-45ee-95b2-de566bf2bb0e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b3f961d2-e73f-49bf-b141-6505e77ad9ac] No waiting events found dispatching network-vif-plugged-07baaccf-06f0-4af0-a04a-9638078c313f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:05:21 compute-0 nova_compute[250018]: 2026-01-20 15:05:21.524 250022 WARNING nova.compute.manager [req-b84c9900-ff2e-4723-a7f7-71038d80d435 req-9d7a91ed-a905-45ee-95b2-de566bf2bb0e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b3f961d2-e73f-49bf-b141-6505e77ad9ac] Received unexpected event network-vif-plugged-07baaccf-06f0-4af0-a04a-9638078c313f for instance with vm_state active and task_state deleting.
Jan 20 15:05:21 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:05:21 compute-0 ceph-mon[74360]: pgmap v2435: 321 pgs: 321 active+clean; 279 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 327 KiB/s rd, 984 KiB/s wr, 107 op/s
Jan 20 15:05:21 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 15:05:21 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3052434455' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:05:21 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/1800558050' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:05:21 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:05:21 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:05:21.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:05:22 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:05:22 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:05:22 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:05:22.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:05:22 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2436: 321 pgs: 321 active+clean; 279 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 110 KiB/s rd, 123 KiB/s wr, 64 op/s
Jan 20 15:05:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:05:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:05:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:05:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:05:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:05:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:05:23 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/3052434455' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:05:23 compute-0 ceph-mon[74360]: pgmap v2436: 321 pgs: 321 active+clean; 279 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 110 KiB/s rd, 123 KiB/s wr, 64 op/s
Jan 20 15:05:23 compute-0 nova_compute[250018]: 2026-01-20 15:05:23.567 250022 INFO nova.virt.libvirt.driver [None req-3b2751fb-8f89-4613-af6d-d75262d5b65c 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: b3f961d2-e73f-49bf-b141-6505e77ad9ac] Deleting instance files /var/lib/nova/instances/b3f961d2-e73f-49bf-b141-6505e77ad9ac_del
Jan 20 15:05:23 compute-0 nova_compute[250018]: 2026-01-20 15:05:23.568 250022 INFO nova.virt.libvirt.driver [None req-3b2751fb-8f89-4613-af6d-d75262d5b65c 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: b3f961d2-e73f-49bf-b141-6505e77ad9ac] Deletion of /var/lib/nova/instances/b3f961d2-e73f-49bf-b141-6505e77ad9ac_del complete
Jan 20 15:05:23 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:05:23 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:05:23 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:05:23.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:05:23 compute-0 nova_compute[250018]: 2026-01-20 15:05:23.649 250022 INFO nova.compute.manager [None req-3b2751fb-8f89-4613-af6d-d75262d5b65c 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] [instance: b3f961d2-e73f-49bf-b141-6505e77ad9ac] Took 5.04 seconds to destroy the instance on the hypervisor.
Jan 20 15:05:23 compute-0 nova_compute[250018]: 2026-01-20 15:05:23.650 250022 DEBUG oslo.service.loopingcall [None req-3b2751fb-8f89-4613-af6d-d75262d5b65c 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 15:05:23 compute-0 nova_compute[250018]: 2026-01-20 15:05:23.650 250022 DEBUG nova.compute.manager [-] [instance: b3f961d2-e73f-49bf-b141-6505e77ad9ac] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 15:05:23 compute-0 nova_compute[250018]: 2026-01-20 15:05:23.650 250022 DEBUG nova.network.neutron [-] [instance: b3f961d2-e73f-49bf-b141-6505e77ad9ac] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 15:05:24 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:05:24 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:05:24 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:05:24.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:05:24 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2437: 321 pgs: 321 active+clean; 282 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 39 KiB/s rd, 1.1 MiB/s wr, 45 op/s
Jan 20 15:05:24 compute-0 nova_compute[250018]: 2026-01-20 15:05:24.353 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:05:24 compute-0 ceph-mon[74360]: pgmap v2437: 321 pgs: 321 active+clean; 282 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 39 KiB/s rd, 1.1 MiB/s wr, 45 op/s
Jan 20 15:05:24 compute-0 nova_compute[250018]: 2026-01-20 15:05:24.609 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:05:25 compute-0 nova_compute[250018]: 2026-01-20 15:05:25.365 250022 DEBUG nova.compute.manager [req-9b1ad8fe-9df6-40cb-9889-5a4fd2b7b808 req-e903be0b-8871-4502-a1d7-0234fe8df36a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b3f961d2-e73f-49bf-b141-6505e77ad9ac] Received event network-vif-deleted-07baaccf-06f0-4af0-a04a-9638078c313f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:05:25 compute-0 nova_compute[250018]: 2026-01-20 15:05:25.366 250022 INFO nova.compute.manager [req-9b1ad8fe-9df6-40cb-9889-5a4fd2b7b808 req-e903be0b-8871-4502-a1d7-0234fe8df36a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b3f961d2-e73f-49bf-b141-6505e77ad9ac] Neutron deleted interface 07baaccf-06f0-4af0-a04a-9638078c313f; detaching it from the instance and deleting it from the info cache
Jan 20 15:05:25 compute-0 nova_compute[250018]: 2026-01-20 15:05:25.366 250022 DEBUG nova.network.neutron [req-9b1ad8fe-9df6-40cb-9889-5a4fd2b7b808 req-e903be0b-8871-4502-a1d7-0234fe8df36a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b3f961d2-e73f-49bf-b141-6505e77ad9ac] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:05:25 compute-0 nova_compute[250018]: 2026-01-20 15:05:25.369 250022 DEBUG nova.network.neutron [-] [instance: b3f961d2-e73f-49bf-b141-6505e77ad9ac] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:05:25 compute-0 nova_compute[250018]: 2026-01-20 15:05:25.410 250022 DEBUG nova.compute.manager [req-9b1ad8fe-9df6-40cb-9889-5a4fd2b7b808 req-e903be0b-8871-4502-a1d7-0234fe8df36a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b3f961d2-e73f-49bf-b141-6505e77ad9ac] Detach interface failed, port_id=07baaccf-06f0-4af0-a04a-9638078c313f, reason: Instance b3f961d2-e73f-49bf-b141-6505e77ad9ac could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Jan 20 15:05:25 compute-0 nova_compute[250018]: 2026-01-20 15:05:25.424 250022 INFO nova.compute.manager [-] [instance: b3f961d2-e73f-49bf-b141-6505e77ad9ac] Took 1.77 seconds to deallocate network for instance.
Jan 20 15:05:25 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:05:25 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:05:25 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:05:25.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:05:25 compute-0 nova_compute[250018]: 2026-01-20 15:05:25.730 250022 DEBUG oslo_concurrency.lockutils [None req-3b2751fb-8f89-4613-af6d-d75262d5b65c 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:05:25 compute-0 nova_compute[250018]: 2026-01-20 15:05:25.730 250022 DEBUG oslo_concurrency.lockutils [None req-3b2751fb-8f89-4613-af6d-d75262d5b65c 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:05:25 compute-0 nova_compute[250018]: 2026-01-20 15:05:25.820 250022 DEBUG oslo_concurrency.processutils [None req-3b2751fb-8f89-4613-af6d-d75262d5b65c 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:05:25 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:05:25 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e358 do_prune osdmap full prune enabled
Jan 20 15:05:25 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e359 e359: 3 total, 3 up, 3 in
Jan 20 15:05:25 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e359: 3 total, 3 up, 3 in
Jan 20 15:05:26 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:05:26 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:05:26 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:05:26.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:05:26 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:05:26 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/883178459' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:05:26 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2439: 321 pgs: 321 active+clean; 246 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 93 op/s
Jan 20 15:05:26 compute-0 nova_compute[250018]: 2026-01-20 15:05:26.294 250022 DEBUG oslo_concurrency.processutils [None req-3b2751fb-8f89-4613-af6d-d75262d5b65c 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:05:26 compute-0 nova_compute[250018]: 2026-01-20 15:05:26.301 250022 DEBUG nova.compute.provider_tree [None req-3b2751fb-8f89-4613-af6d-d75262d5b65c 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:05:26 compute-0 nova_compute[250018]: 2026-01-20 15:05:26.579 250022 DEBUG nova.scheduler.client.report [None req-3b2751fb-8f89-4613-af6d-d75262d5b65c 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:05:26 compute-0 nova_compute[250018]: 2026-01-20 15:05:26.631 250022 DEBUG oslo_concurrency.lockutils [None req-3b2751fb-8f89-4613-af6d-d75262d5b65c 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.900s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:05:26 compute-0 nova_compute[250018]: 2026-01-20 15:05:26.682 250022 INFO nova.scheduler.client.report [None req-3b2751fb-8f89-4613-af6d-d75262d5b65c 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Deleted allocations for instance b3f961d2-e73f-49bf-b141-6505e77ad9ac
Jan 20 15:05:26 compute-0 nova_compute[250018]: 2026-01-20 15:05:26.775 250022 DEBUG oslo_concurrency.lockutils [None req-3b2751fb-8f89-4613-af6d-d75262d5b65c 2446e8399b344b29986c1aaf8bf73adf 63555e5851564db08c6429231d264f2c - - default default] Lock "b3f961d2-e73f-49bf-b141-6505e77ad9ac" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 8.164s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:05:26 compute-0 ceph-mon[74360]: osdmap e359: 3 total, 3 up, 3 in
Jan 20 15:05:26 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/883178459' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:05:26 compute-0 ceph-mon[74360]: pgmap v2439: 321 pgs: 321 active+clean; 246 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 93 op/s
Jan 20 15:05:27 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:05:27 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:05:27 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:05:27.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:05:27 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/2898267171' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:05:27 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/3990956425' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:05:28 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:05:28 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:05:28 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:05:28.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:05:28 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2440: 321 pgs: 321 active+clean; 246 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 93 op/s
Jan 20 15:05:29 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/3884054008' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:05:29 compute-0 ceph-mon[74360]: pgmap v2440: 321 pgs: 321 active+clean; 246 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 93 op/s
Jan 20 15:05:29 compute-0 nova_compute[250018]: 2026-01-20 15:05:29.356 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:05:29 compute-0 nova_compute[250018]: 2026-01-20 15:05:29.614 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:05:29 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:05:29 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:05:29 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:05:29.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:05:30 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:05:30 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:05:30 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:05:30.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:05:30 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2441: 321 pgs: 321 active+clean; 246 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 95 op/s
Jan 20 15:05:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:05:30.775 160071 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:05:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:05:30.775 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:05:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:05:30.776 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:05:30 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:05:31 compute-0 ceph-mon[74360]: pgmap v2441: 321 pgs: 321 active+clean; 246 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 95 op/s
Jan 20 15:05:31 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:05:31 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:05:31 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:05:31.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:05:32 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:05:32 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:05:32 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:05:32.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:05:32 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2442: 321 pgs: 321 active+clean; 273 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 3.6 MiB/s wr, 109 op/s
Jan 20 15:05:32 compute-0 ceph-mon[74360]: pgmap v2442: 321 pgs: 321 active+clean; 273 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 3.6 MiB/s wr, 109 op/s
Jan 20 15:05:33 compute-0 nova_compute[250018]: 2026-01-20 15:05:33.204 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:05:33 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:05:33 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:05:33 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:05:33.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:05:34 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:05:34 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:05:34 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:05:34.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:05:34 compute-0 nova_compute[250018]: 2026-01-20 15:05:34.248 250022 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768921519.2473626, b3f961d2-e73f-49bf-b141-6505e77ad9ac => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:05:34 compute-0 nova_compute[250018]: 2026-01-20 15:05:34.249 250022 INFO nova.compute.manager [-] [instance: b3f961d2-e73f-49bf-b141-6505e77ad9ac] VM Stopped (Lifecycle Event)
Jan 20 15:05:34 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2443: 321 pgs: 321 active+clean; 277 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 2.6 MiB/s wr, 129 op/s
Jan 20 15:05:34 compute-0 nova_compute[250018]: 2026-01-20 15:05:34.314 250022 DEBUG nova.compute.manager [None req-3ae11136-38eb-406e-9d2b-06fce81194ab - - - - - -] [instance: b3f961d2-e73f-49bf-b141-6505e77ad9ac] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:05:34 compute-0 nova_compute[250018]: 2026-01-20 15:05:34.359 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:05:34 compute-0 nova_compute[250018]: 2026-01-20 15:05:34.616 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:05:35 compute-0 ceph-mon[74360]: pgmap v2443: 321 pgs: 321 active+clean; 277 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 2.6 MiB/s wr, 129 op/s
Jan 20 15:05:35 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:05:35 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:05:35 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:05:35.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:05:35 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:05:36 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:05:36 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:05:36 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:05:36.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:05:36 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2444: 321 pgs: 321 active+clean; 293 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 127 op/s
Jan 20 15:05:36 compute-0 ceph-mon[74360]: pgmap v2444: 321 pgs: 321 active+clean; 293 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 127 op/s
Jan 20 15:05:37 compute-0 sudo[344350]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:05:37 compute-0 sudo[344350]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:05:37 compute-0 sudo[344350]: pam_unix(sudo:session): session closed for user root
Jan 20 15:05:37 compute-0 sudo[344375]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:05:37 compute-0 sudo[344375]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:05:37 compute-0 sudo[344375]: pam_unix(sudo:session): session closed for user root
Jan 20 15:05:37 compute-0 nova_compute[250018]: 2026-01-20 15:05:37.139 250022 DEBUG oslo_concurrency.lockutils [None req-469311f4-0b1c-40b7-aff0-1b57400a054d b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] Acquiring lock "5feeb9de-434b-4ec7-aa99-6da718514c6f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:05:37 compute-0 nova_compute[250018]: 2026-01-20 15:05:37.141 250022 DEBUG oslo_concurrency.lockutils [None req-469311f4-0b1c-40b7-aff0-1b57400a054d b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] Lock "5feeb9de-434b-4ec7-aa99-6da718514c6f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:05:37 compute-0 nova_compute[250018]: 2026-01-20 15:05:37.158 250022 DEBUG nova.compute.manager [None req-469311f4-0b1c-40b7-aff0-1b57400a054d b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 15:05:37 compute-0 nova_compute[250018]: 2026-01-20 15:05:37.280 250022 DEBUG oslo_concurrency.lockutils [None req-469311f4-0b1c-40b7-aff0-1b57400a054d b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:05:37 compute-0 nova_compute[250018]: 2026-01-20 15:05:37.280 250022 DEBUG oslo_concurrency.lockutils [None req-469311f4-0b1c-40b7-aff0-1b57400a054d b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:05:37 compute-0 nova_compute[250018]: 2026-01-20 15:05:37.291 250022 DEBUG nova.virt.hardware [None req-469311f4-0b1c-40b7-aff0-1b57400a054d b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 15:05:37 compute-0 nova_compute[250018]: 2026-01-20 15:05:37.292 250022 INFO nova.compute.claims [None req-469311f4-0b1c-40b7-aff0-1b57400a054d b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] Claim successful on node compute-0.ctlplane.example.com
Jan 20 15:05:37 compute-0 nova_compute[250018]: 2026-01-20 15:05:37.461 250022 DEBUG oslo_concurrency.processutils [None req-469311f4-0b1c-40b7-aff0-1b57400a054d b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:05:37 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:05:37 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:05:37 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:05:37.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:05:37 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:05:37 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3262336420' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:05:37 compute-0 nova_compute[250018]: 2026-01-20 15:05:37.935 250022 DEBUG oslo_concurrency.processutils [None req-469311f4-0b1c-40b7-aff0-1b57400a054d b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:05:37 compute-0 nova_compute[250018]: 2026-01-20 15:05:37.941 250022 DEBUG nova.compute.provider_tree [None req-469311f4-0b1c-40b7-aff0-1b57400a054d b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:05:37 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3262336420' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:05:37 compute-0 nova_compute[250018]: 2026-01-20 15:05:37.996 250022 DEBUG nova.scheduler.client.report [None req-469311f4-0b1c-40b7-aff0-1b57400a054d b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:05:38 compute-0 nova_compute[250018]: 2026-01-20 15:05:38.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:05:38 compute-0 nova_compute[250018]: 2026-01-20 15:05:38.080 250022 DEBUG oslo_concurrency.lockutils [None req-469311f4-0b1c-40b7-aff0-1b57400a054d b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.799s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:05:38 compute-0 nova_compute[250018]: 2026-01-20 15:05:38.080 250022 DEBUG nova.compute.manager [None req-469311f4-0b1c-40b7-aff0-1b57400a054d b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 15:05:38 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:05:38 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:05:38 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:05:38.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:05:38 compute-0 nova_compute[250018]: 2026-01-20 15:05:38.176 250022 DEBUG nova.compute.manager [None req-469311f4-0b1c-40b7-aff0-1b57400a054d b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 20 15:05:38 compute-0 nova_compute[250018]: 2026-01-20 15:05:38.177 250022 DEBUG nova.network.neutron [None req-469311f4-0b1c-40b7-aff0-1b57400a054d b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 15:05:38 compute-0 nova_compute[250018]: 2026-01-20 15:05:38.206 250022 INFO nova.virt.libvirt.driver [None req-469311f4-0b1c-40b7-aff0-1b57400a054d b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 15:05:38 compute-0 nova_compute[250018]: 2026-01-20 15:05:38.260 250022 DEBUG nova.compute.manager [None req-469311f4-0b1c-40b7-aff0-1b57400a054d b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 15:05:38 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2445: 321 pgs: 321 active+clean; 293 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 111 op/s
Jan 20 15:05:38 compute-0 nova_compute[250018]: 2026-01-20 15:05:38.371 250022 INFO nova.virt.block_device [None req-469311f4-0b1c-40b7-aff0-1b57400a054d b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] Booting with volume 94300d81-b4ca-4c0a-9283-83b76826d40f at /dev/vda
Jan 20 15:05:38 compute-0 nova_compute[250018]: 2026-01-20 15:05:38.577 250022 DEBUG os_brick.utils [None req-469311f4-0b1c-40b7-aff0-1b57400a054d b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Jan 20 15:05:38 compute-0 nova_compute[250018]: 2026-01-20 15:05:38.579 268150 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:05:38 compute-0 nova_compute[250018]: 2026-01-20 15:05:38.590 268150 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:05:38 compute-0 nova_compute[250018]: 2026-01-20 15:05:38.590 268150 DEBUG oslo.privsep.daemon [-] privsep: reply[724602ad-2935-4087-a7b1-03ad35a20684]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:05:38 compute-0 nova_compute[250018]: 2026-01-20 15:05:38.591 268150 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:05:38 compute-0 nova_compute[250018]: 2026-01-20 15:05:38.598 268150 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:05:38 compute-0 nova_compute[250018]: 2026-01-20 15:05:38.598 268150 DEBUG oslo.privsep.daemon [-] privsep: reply[45e2dd20-453d-4716-8b79-9ee37cdb1af5]: (4, ('InitiatorName=iqn.1994-05.com.redhat:228389a1f17e', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:05:38 compute-0 nova_compute[250018]: 2026-01-20 15:05:38.600 268150 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:05:38 compute-0 nova_compute[250018]: 2026-01-20 15:05:38.608 268150 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:05:38 compute-0 nova_compute[250018]: 2026-01-20 15:05:38.608 268150 DEBUG oslo.privsep.daemon [-] privsep: reply[8d1d7540-31d8-455b-9735-a0688211ce5f]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:05:38 compute-0 nova_compute[250018]: 2026-01-20 15:05:38.610 268150 DEBUG oslo.privsep.daemon [-] privsep: reply[b2c95e62-c511-4b30-9dcd-e1a53fd66233]: (4, '35085f33-1a27-41e3-805d-02c7ac6a1d7f') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:05:38 compute-0 nova_compute[250018]: 2026-01-20 15:05:38.610 250022 DEBUG oslo_concurrency.processutils [None req-469311f4-0b1c-40b7-aff0-1b57400a054d b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:05:38 compute-0 nova_compute[250018]: 2026-01-20 15:05:38.645 250022 DEBUG oslo_concurrency.processutils [None req-469311f4-0b1c-40b7-aff0-1b57400a054d b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] CMD "nvme version" returned: 0 in 0.035s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:05:38 compute-0 nova_compute[250018]: 2026-01-20 15:05:38.649 250022 DEBUG os_brick.initiator.connectors.lightos [None req-469311f4-0b1c-40b7-aff0-1b57400a054d b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Jan 20 15:05:38 compute-0 nova_compute[250018]: 2026-01-20 15:05:38.650 250022 DEBUG os_brick.initiator.connectors.lightos [None req-469311f4-0b1c-40b7-aff0-1b57400a054d b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Jan 20 15:05:38 compute-0 nova_compute[250018]: 2026-01-20 15:05:38.650 250022 DEBUG os_brick.initiator.connectors.lightos [None req-469311f4-0b1c-40b7-aff0-1b57400a054d b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Jan 20 15:05:38 compute-0 nova_compute[250018]: 2026-01-20 15:05:38.651 250022 DEBUG os_brick.utils [None req-469311f4-0b1c-40b7-aff0-1b57400a054d b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] <== get_connector_properties: return (72ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:228389a1f17e', 'do_local_attach': False, 'nvme_hostid': '5350774e-8b5e-4dba-80a9-92d405981c1d', 'system uuid': '35085f33-1a27-41e3-805d-02c7ac6a1d7f', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Jan 20 15:05:38 compute-0 nova_compute[250018]: 2026-01-20 15:05:38.652 250022 DEBUG nova.virt.block_device [None req-469311f4-0b1c-40b7-aff0-1b57400a054d b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] Updating existing volume attachment record: 355735e8-db73-48e6-9318-c418940c37c9 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Jan 20 15:05:38 compute-0 sudo[344429]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:05:38 compute-0 sudo[344429]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:05:38 compute-0 sudo[344429]: pam_unix(sudo:session): session closed for user root
Jan 20 15:05:38 compute-0 sudo[344455]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:05:38 compute-0 sudo[344455]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:05:38 compute-0 sudo[344455]: pam_unix(sudo:session): session closed for user root
Jan 20 15:05:38 compute-0 sudo[344480]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:05:38 compute-0 sudo[344480]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:05:38 compute-0 sudo[344480]: pam_unix(sudo:session): session closed for user root
Jan 20 15:05:38 compute-0 sudo[344505]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 15:05:38 compute-0 sudo[344505]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:05:38 compute-0 ceph-mon[74360]: pgmap v2445: 321 pgs: 321 active+clean; 293 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 111 op/s
Jan 20 15:05:39 compute-0 sudo[344505]: pam_unix(sudo:session): session closed for user root
Jan 20 15:05:39 compute-0 nova_compute[250018]: 2026-01-20 15:05:39.361 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:05:39 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 15:05:39 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:05:39 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 20 15:05:39 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 15:05:39 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 20 15:05:39 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:05:39 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev a223cbbd-56df-4697-b46d-e878e4b5a849 does not exist
Jan 20 15:05:39 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev e810b9a3-d2fc-4aa2-a6f7-c71f7c4fb7cf does not exist
Jan 20 15:05:39 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev f4c7e98a-4e04-40d6-9e9c-c47704a34bf0 does not exist
Jan 20 15:05:39 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 20 15:05:39 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 15:05:39 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 20 15:05:39 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 15:05:39 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 15:05:39 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:05:39 compute-0 sudo[344561]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:05:39 compute-0 sudo[344561]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:05:39 compute-0 sudo[344561]: pam_unix(sudo:session): session closed for user root
Jan 20 15:05:39 compute-0 sudo[344586]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:05:39 compute-0 sudo[344586]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:05:39 compute-0 sudo[344586]: pam_unix(sudo:session): session closed for user root
Jan 20 15:05:39 compute-0 nova_compute[250018]: 2026-01-20 15:05:39.617 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:05:39 compute-0 sudo[344611]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:05:39 compute-0 sudo[344611]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:05:39 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:05:39 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:05:39 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:05:39.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:05:39 compute-0 sudo[344611]: pam_unix(sudo:session): session closed for user root
Jan 20 15:05:39 compute-0 nova_compute[250018]: 2026-01-20 15:05:39.720 250022 DEBUG nova.policy [None req-469311f4-0b1c-40b7-aff0-1b57400a054d b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b02a8ef6cc3946ceb2c8846aae2eae68', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0fc924d2df984301897e81920c5e192f', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 20 15:05:39 compute-0 sudo[344636]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 15:05:39 compute-0 sudo[344636]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:05:40 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:05:40 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 15:05:40 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:05:40 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 15:05:40 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 15:05:40 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:05:40 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/2436202043' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:05:40 compute-0 podman[344704]: 2026-01-20 15:05:40.045685746 +0000 UTC m=+0.045010133 container create 07c2e3f4f8f20579c31c2dd1a01808cf4a5e4f1cb46c830ca27cdcef10367973 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_spence, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 15:05:40 compute-0 systemd[1]: Started libpod-conmon-07c2e3f4f8f20579c31c2dd1a01808cf4a5e4f1cb46c830ca27cdcef10367973.scope.
Jan 20 15:05:40 compute-0 nova_compute[250018]: 2026-01-20 15:05:40.094 250022 DEBUG nova.compute.manager [None req-469311f4-0b1c-40b7-aff0-1b57400a054d b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 15:05:40 compute-0 nova_compute[250018]: 2026-01-20 15:05:40.096 250022 DEBUG nova.virt.libvirt.driver [None req-469311f4-0b1c-40b7-aff0-1b57400a054d b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 15:05:40 compute-0 nova_compute[250018]: 2026-01-20 15:05:40.096 250022 INFO nova.virt.libvirt.driver [None req-469311f4-0b1c-40b7-aff0-1b57400a054d b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] Creating image(s)
Jan 20 15:05:40 compute-0 nova_compute[250018]: 2026-01-20 15:05:40.097 250022 DEBUG nova.virt.libvirt.driver [None req-469311f4-0b1c-40b7-aff0-1b57400a054d b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Jan 20 15:05:40 compute-0 nova_compute[250018]: 2026-01-20 15:05:40.097 250022 DEBUG nova.virt.libvirt.driver [None req-469311f4-0b1c-40b7-aff0-1b57400a054d b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] Ensure instance console log exists: /var/lib/nova/instances/5feeb9de-434b-4ec7-aa99-6da718514c6f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 15:05:40 compute-0 nova_compute[250018]: 2026-01-20 15:05:40.097 250022 DEBUG oslo_concurrency.lockutils [None req-469311f4-0b1c-40b7-aff0-1b57400a054d b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:05:40 compute-0 nova_compute[250018]: 2026-01-20 15:05:40.098 250022 DEBUG oslo_concurrency.lockutils [None req-469311f4-0b1c-40b7-aff0-1b57400a054d b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:05:40 compute-0 nova_compute[250018]: 2026-01-20 15:05:40.098 250022 DEBUG oslo_concurrency.lockutils [None req-469311f4-0b1c-40b7-aff0-1b57400a054d b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:05:40 compute-0 podman[344704]: 2026-01-20 15:05:40.023617822 +0000 UTC m=+0.022942219 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:05:40 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:05:40 compute-0 podman[344704]: 2026-01-20 15:05:40.145117835 +0000 UTC m=+0.144442232 container init 07c2e3f4f8f20579c31c2dd1a01808cf4a5e4f1cb46c830ca27cdcef10367973 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_spence, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:05:40 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:05:40 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:05:40 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:05:40.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:05:40 compute-0 podman[344704]: 2026-01-20 15:05:40.157054266 +0000 UTC m=+0.156378673 container start 07c2e3f4f8f20579c31c2dd1a01808cf4a5e4f1cb46c830ca27cdcef10367973 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_spence, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 15:05:40 compute-0 podman[344704]: 2026-01-20 15:05:40.161850666 +0000 UTC m=+0.161175053 container attach 07c2e3f4f8f20579c31c2dd1a01808cf4a5e4f1cb46c830ca27cdcef10367973 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_spence, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 20 15:05:40 compute-0 adoring_spence[344720]: 167 167
Jan 20 15:05:40 compute-0 systemd[1]: libpod-07c2e3f4f8f20579c31c2dd1a01808cf4a5e4f1cb46c830ca27cdcef10367973.scope: Deactivated successfully.
Jan 20 15:05:40 compute-0 podman[344704]: 2026-01-20 15:05:40.16390242 +0000 UTC m=+0.163226837 container died 07c2e3f4f8f20579c31c2dd1a01808cf4a5e4f1cb46c830ca27cdcef10367973 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_spence, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 20 15:05:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-8e4b33fe6f8869904eb58a22b5d80769787c27d730266304dc4a4d0dae4edcf5-merged.mount: Deactivated successfully.
Jan 20 15:05:40 compute-0 podman[344704]: 2026-01-20 15:05:40.208970434 +0000 UTC m=+0.208294811 container remove 07c2e3f4f8f20579c31c2dd1a01808cf4a5e4f1cb46c830ca27cdcef10367973 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_spence, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 15:05:40 compute-0 systemd[1]: libpod-conmon-07c2e3f4f8f20579c31c2dd1a01808cf4a5e4f1cb46c830ca27cdcef10367973.scope: Deactivated successfully.
Jan 20 15:05:40 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2446: 321 pgs: 321 active+clean; 239 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 127 op/s
Jan 20 15:05:40 compute-0 nova_compute[250018]: 2026-01-20 15:05:40.335 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:05:40 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:05:40.341 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=51, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=50) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:05:40 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:05:40.343 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 15:05:40 compute-0 podman[344744]: 2026-01-20 15:05:40.403609507 +0000 UTC m=+0.049302619 container create 4ffec87073ea6984b0a5f351cf534000b648ecd2d13d364a1eac92821d7c1d5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_merkle, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 20 15:05:40 compute-0 systemd[1]: Started libpod-conmon-4ffec87073ea6984b0a5f351cf534000b648ecd2d13d364a1eac92821d7c1d5f.scope.
Jan 20 15:05:40 compute-0 podman[344744]: 2026-01-20 15:05:40.380869495 +0000 UTC m=+0.026562687 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:05:40 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:05:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbaa56f57ba16fa74a7bac11c34e3d77f3b4bf76fd8b7186e51331df5459b828/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 15:05:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbaa56f57ba16fa74a7bac11c34e3d77f3b4bf76fd8b7186e51331df5459b828/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 15:05:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbaa56f57ba16fa74a7bac11c34e3d77f3b4bf76fd8b7186e51331df5459b828/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 15:05:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbaa56f57ba16fa74a7bac11c34e3d77f3b4bf76fd8b7186e51331df5459b828/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 15:05:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbaa56f57ba16fa74a7bac11c34e3d77f3b4bf76fd8b7186e51331df5459b828/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 15:05:40 compute-0 podman[344744]: 2026-01-20 15:05:40.501996898 +0000 UTC m=+0.147690000 container init 4ffec87073ea6984b0a5f351cf534000b648ecd2d13d364a1eac92821d7c1d5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_merkle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 15:05:40 compute-0 podman[344744]: 2026-01-20 15:05:40.510518997 +0000 UTC m=+0.156212099 container start 4ffec87073ea6984b0a5f351cf534000b648ecd2d13d364a1eac92821d7c1d5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_merkle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 20 15:05:40 compute-0 podman[344744]: 2026-01-20 15:05:40.515208903 +0000 UTC m=+0.160902005 container attach 4ffec87073ea6984b0a5f351cf534000b648ecd2d13d364a1eac92821d7c1d5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_merkle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 15:05:40 compute-0 nova_compute[250018]: 2026-01-20 15:05:40.822 250022 DEBUG nova.network.neutron [None req-469311f4-0b1c-40b7-aff0-1b57400a054d b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] Successfully created port: 70668adb-f9ad-41cb-8eac-2e0aba32bf22 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 20 15:05:40 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:05:41 compute-0 ceph-mon[74360]: pgmap v2446: 321 pgs: 321 active+clean; 239 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 127 op/s
Jan 20 15:05:41 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/1135024478' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:05:41 compute-0 naughty_merkle[344761]: --> passed data devices: 0 physical, 1 LVM
Jan 20 15:05:41 compute-0 naughty_merkle[344761]: --> relative data size: 1.0
Jan 20 15:05:41 compute-0 naughty_merkle[344761]: --> All data devices are unavailable
Jan 20 15:05:41 compute-0 systemd[1]: libpod-4ffec87073ea6984b0a5f351cf534000b648ecd2d13d364a1eac92821d7c1d5f.scope: Deactivated successfully.
Jan 20 15:05:41 compute-0 podman[344776]: 2026-01-20 15:05:41.453796016 +0000 UTC m=+0.030992147 container died 4ffec87073ea6984b0a5f351cf534000b648ecd2d13d364a1eac92821d7c1d5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_merkle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 20 15:05:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-bbaa56f57ba16fa74a7bac11c34e3d77f3b4bf76fd8b7186e51331df5459b828-merged.mount: Deactivated successfully.
Jan 20 15:05:41 compute-0 podman[344776]: 2026-01-20 15:05:41.549162414 +0000 UTC m=+0.126358505 container remove 4ffec87073ea6984b0a5f351cf534000b648ecd2d13d364a1eac92821d7c1d5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_merkle, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 15:05:41 compute-0 systemd[1]: libpod-conmon-4ffec87073ea6984b0a5f351cf534000b648ecd2d13d364a1eac92821d7c1d5f.scope: Deactivated successfully.
Jan 20 15:05:41 compute-0 sudo[344636]: pam_unix(sudo:session): session closed for user root
Jan 20 15:05:41 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:05:41 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:05:41 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:05:41.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:05:41 compute-0 sudo[344791]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:05:41 compute-0 sudo[344791]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:05:41 compute-0 sudo[344791]: pam_unix(sudo:session): session closed for user root
Jan 20 15:05:41 compute-0 sudo[344816]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:05:41 compute-0 sudo[344816]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:05:41 compute-0 sudo[344816]: pam_unix(sudo:session): session closed for user root
Jan 20 15:05:41 compute-0 nova_compute[250018]: 2026-01-20 15:05:41.799 250022 DEBUG nova.network.neutron [None req-469311f4-0b1c-40b7-aff0-1b57400a054d b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] Successfully updated port: 70668adb-f9ad-41cb-8eac-2e0aba32bf22 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 20 15:05:41 compute-0 sudo[344841]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:05:41 compute-0 sudo[344841]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:05:41 compute-0 nova_compute[250018]: 2026-01-20 15:05:41.831 250022 DEBUG oslo_concurrency.lockutils [None req-469311f4-0b1c-40b7-aff0-1b57400a054d b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] Acquiring lock "refresh_cache-5feeb9de-434b-4ec7-aa99-6da718514c6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:05:41 compute-0 nova_compute[250018]: 2026-01-20 15:05:41.832 250022 DEBUG oslo_concurrency.lockutils [None req-469311f4-0b1c-40b7-aff0-1b57400a054d b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] Acquired lock "refresh_cache-5feeb9de-434b-4ec7-aa99-6da718514c6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:05:41 compute-0 nova_compute[250018]: 2026-01-20 15:05:41.832 250022 DEBUG nova.network.neutron [None req-469311f4-0b1c-40b7-aff0-1b57400a054d b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 15:05:41 compute-0 sudo[344841]: pam_unix(sudo:session): session closed for user root
Jan 20 15:05:41 compute-0 sudo[344867]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- lvm list --format json
Jan 20 15:05:41 compute-0 sudo[344867]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:05:42 compute-0 nova_compute[250018]: 2026-01-20 15:05:42.084 250022 DEBUG nova.network.neutron [None req-469311f4-0b1c-40b7-aff0-1b57400a054d b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 15:05:42 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:05:42 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:05:42 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:05:42.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:05:42 compute-0 podman[344933]: 2026-01-20 15:05:42.228581415 +0000 UTC m=+0.047293795 container create 9d08e08280381e34700467ff08467c60a7c834f0b7dd76fa93b3ac2d8c3e90c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_wiles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 20 15:05:42 compute-0 systemd[1]: Started libpod-conmon-9d08e08280381e34700467ff08467c60a7c834f0b7dd76fa93b3ac2d8c3e90c6.scope.
Jan 20 15:05:42 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2447: 321 pgs: 321 active+clean; 213 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 119 op/s
Jan 20 15:05:42 compute-0 podman[344933]: 2026-01-20 15:05:42.206014487 +0000 UTC m=+0.024726827 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:05:42 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:05:42 compute-0 podman[344933]: 2026-01-20 15:05:42.325548017 +0000 UTC m=+0.144260377 container init 9d08e08280381e34700467ff08467c60a7c834f0b7dd76fa93b3ac2d8c3e90c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_wiles, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 20 15:05:42 compute-0 podman[344933]: 2026-01-20 15:05:42.333631065 +0000 UTC m=+0.152343395 container start 9d08e08280381e34700467ff08467c60a7c834f0b7dd76fa93b3ac2d8c3e90c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_wiles, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 15:05:42 compute-0 podman[344933]: 2026-01-20 15:05:42.337062246 +0000 UTC m=+0.155774707 container attach 9d08e08280381e34700467ff08467c60a7c834f0b7dd76fa93b3ac2d8c3e90c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_wiles, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 20 15:05:42 compute-0 condescending_wiles[344949]: 167 167
Jan 20 15:05:42 compute-0 systemd[1]: libpod-9d08e08280381e34700467ff08467c60a7c834f0b7dd76fa93b3ac2d8c3e90c6.scope: Deactivated successfully.
Jan 20 15:05:42 compute-0 podman[344933]: 2026-01-20 15:05:42.339368649 +0000 UTC m=+0.158080979 container died 9d08e08280381e34700467ff08467c60a7c834f0b7dd76fa93b3ac2d8c3e90c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_wiles, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 20 15:05:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-ac7b8f5520747db8345779b7a1e038e74b9920973e5a10b2fd18c5165949ca4e-merged.mount: Deactivated successfully.
Jan 20 15:05:42 compute-0 podman[344933]: 2026-01-20 15:05:42.374477415 +0000 UTC m=+0.193189745 container remove 9d08e08280381e34700467ff08467c60a7c834f0b7dd76fa93b3ac2d8c3e90c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_wiles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 15:05:42 compute-0 systemd[1]: libpod-conmon-9d08e08280381e34700467ff08467c60a7c834f0b7dd76fa93b3ac2d8c3e90c6.scope: Deactivated successfully.
Jan 20 15:05:42 compute-0 podman[344971]: 2026-01-20 15:05:42.54136484 +0000 UTC m=+0.038258851 container create d16ebc4a6dda621bd9d71441651a41c81b9311048b34ad2f799856cdeefc1992 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_diffie, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 20 15:05:42 compute-0 systemd[1]: Started libpod-conmon-d16ebc4a6dda621bd9d71441651a41c81b9311048b34ad2f799856cdeefc1992.scope.
Jan 20 15:05:42 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:05:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fb114ea1ac1aebfab6facddb783754ffc3dcacf3efc722128f87009f8edaf92/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 15:05:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fb114ea1ac1aebfab6facddb783754ffc3dcacf3efc722128f87009f8edaf92/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 15:05:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fb114ea1ac1aebfab6facddb783754ffc3dcacf3efc722128f87009f8edaf92/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 15:05:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fb114ea1ac1aebfab6facddb783754ffc3dcacf3efc722128f87009f8edaf92/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 15:05:42 compute-0 podman[344971]: 2026-01-20 15:05:42.603557266 +0000 UTC m=+0.100451307 container init d16ebc4a6dda621bd9d71441651a41c81b9311048b34ad2f799856cdeefc1992 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_diffie, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 20 15:05:42 compute-0 podman[344971]: 2026-01-20 15:05:42.610473552 +0000 UTC m=+0.107367543 container start d16ebc4a6dda621bd9d71441651a41c81b9311048b34ad2f799856cdeefc1992 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_diffie, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 20 15:05:42 compute-0 podman[344971]: 2026-01-20 15:05:42.613667928 +0000 UTC m=+0.110561969 container attach d16ebc4a6dda621bd9d71441651a41c81b9311048b34ad2f799856cdeefc1992 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_diffie, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 20 15:05:42 compute-0 podman[344971]: 2026-01-20 15:05:42.525524753 +0000 UTC m=+0.022418764 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:05:42 compute-0 nova_compute[250018]: 2026-01-20 15:05:42.709 250022 DEBUG nova.compute.manager [req-9814f7af-9044-43a9-9dee-1cb655437915 req-6121deb4-5be9-4e25-9057-4655027fbf35 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] Received event network-changed-70668adb-f9ad-41cb-8eac-2e0aba32bf22 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:05:42 compute-0 nova_compute[250018]: 2026-01-20 15:05:42.710 250022 DEBUG nova.compute.manager [req-9814f7af-9044-43a9-9dee-1cb655437915 req-6121deb4-5be9-4e25-9057-4655027fbf35 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] Refreshing instance network info cache due to event network-changed-70668adb-f9ad-41cb-8eac-2e0aba32bf22. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 15:05:42 compute-0 nova_compute[250018]: 2026-01-20 15:05:42.710 250022 DEBUG oslo_concurrency.lockutils [req-9814f7af-9044-43a9-9dee-1cb655437915 req-6121deb4-5be9-4e25-9057-4655027fbf35 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-5feeb9de-434b-4ec7-aa99-6da718514c6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:05:43 compute-0 ceph-mon[74360]: pgmap v2447: 321 pgs: 321 active+clean; 213 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 119 op/s
Jan 20 15:05:43 compute-0 musing_diffie[344987]: {
Jan 20 15:05:43 compute-0 musing_diffie[344987]:     "0": [
Jan 20 15:05:43 compute-0 musing_diffie[344987]:         {
Jan 20 15:05:43 compute-0 musing_diffie[344987]:             "devices": [
Jan 20 15:05:43 compute-0 musing_diffie[344987]:                 "/dev/loop3"
Jan 20 15:05:43 compute-0 musing_diffie[344987]:             ],
Jan 20 15:05:43 compute-0 musing_diffie[344987]:             "lv_name": "ceph_lv0",
Jan 20 15:05:43 compute-0 musing_diffie[344987]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 15:05:43 compute-0 musing_diffie[344987]:             "lv_size": "7511998464",
Jan 20 15:05:43 compute-0 musing_diffie[344987]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e399cf45-e6b6-5393-99f1-75c601d3f188,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afc3740a-ccee-46f8-b343-22648cc89376,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 20 15:05:43 compute-0 musing_diffie[344987]:             "lv_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 15:05:43 compute-0 musing_diffie[344987]:             "name": "ceph_lv0",
Jan 20 15:05:43 compute-0 musing_diffie[344987]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 15:05:43 compute-0 musing_diffie[344987]:             "tags": {
Jan 20 15:05:43 compute-0 musing_diffie[344987]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 15:05:43 compute-0 musing_diffie[344987]:                 "ceph.block_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 15:05:43 compute-0 musing_diffie[344987]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 15:05:43 compute-0 musing_diffie[344987]:                 "ceph.cluster_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 15:05:43 compute-0 musing_diffie[344987]:                 "ceph.cluster_name": "ceph",
Jan 20 15:05:43 compute-0 musing_diffie[344987]:                 "ceph.crush_device_class": "",
Jan 20 15:05:43 compute-0 musing_diffie[344987]:                 "ceph.encrypted": "0",
Jan 20 15:05:43 compute-0 musing_diffie[344987]:                 "ceph.osd_fsid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 15:05:43 compute-0 musing_diffie[344987]:                 "ceph.osd_id": "0",
Jan 20 15:05:43 compute-0 musing_diffie[344987]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 15:05:43 compute-0 musing_diffie[344987]:                 "ceph.type": "block",
Jan 20 15:05:43 compute-0 musing_diffie[344987]:                 "ceph.vdo": "0"
Jan 20 15:05:43 compute-0 musing_diffie[344987]:             },
Jan 20 15:05:43 compute-0 musing_diffie[344987]:             "type": "block",
Jan 20 15:05:43 compute-0 musing_diffie[344987]:             "vg_name": "ceph_vg0"
Jan 20 15:05:43 compute-0 musing_diffie[344987]:         }
Jan 20 15:05:43 compute-0 musing_diffie[344987]:     ]
Jan 20 15:05:43 compute-0 musing_diffie[344987]: }
Jan 20 15:05:43 compute-0 systemd[1]: libpod-d16ebc4a6dda621bd9d71441651a41c81b9311048b34ad2f799856cdeefc1992.scope: Deactivated successfully.
Jan 20 15:05:43 compute-0 podman[344971]: 2026-01-20 15:05:43.419552554 +0000 UTC m=+0.916446555 container died d16ebc4a6dda621bd9d71441651a41c81b9311048b34ad2f799856cdeefc1992 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_diffie, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:05:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-5fb114ea1ac1aebfab6facddb783754ffc3dcacf3efc722128f87009f8edaf92-merged.mount: Deactivated successfully.
Jan 20 15:05:43 compute-0 podman[344971]: 2026-01-20 15:05:43.471212206 +0000 UTC m=+0.968106217 container remove d16ebc4a6dda621bd9d71441651a41c81b9311048b34ad2f799856cdeefc1992 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_diffie, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Jan 20 15:05:43 compute-0 systemd[1]: libpod-conmon-d16ebc4a6dda621bd9d71441651a41c81b9311048b34ad2f799856cdeefc1992.scope: Deactivated successfully.
Jan 20 15:05:43 compute-0 sudo[344867]: pam_unix(sudo:session): session closed for user root
Jan 20 15:05:43 compute-0 sudo[345010]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:05:43 compute-0 sudo[345010]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:05:43 compute-0 sudo[345010]: pam_unix(sudo:session): session closed for user root
Jan 20 15:05:43 compute-0 sudo[345035]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:05:43 compute-0 sudo[345035]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:05:43 compute-0 sudo[345035]: pam_unix(sudo:session): session closed for user root
Jan 20 15:05:43 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:05:43 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:05:43 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:05:43.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:05:43 compute-0 nova_compute[250018]: 2026-01-20 15:05:43.660 250022 DEBUG nova.network.neutron [None req-469311f4-0b1c-40b7-aff0-1b57400a054d b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] Updating instance_info_cache with network_info: [{"id": "70668adb-f9ad-41cb-8eac-2e0aba32bf22", "address": "fa:16:3e:6a:c0:d3", "network": {"id": "0f434e83-45c8-454d-820b-af39b696a1d5", "bridge": "br-int", "label": "tempest-TestShelveInstance-1862824485-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0fc924d2df984301897e81920c5e192f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap70668adb-f9", "ovs_interfaceid": "70668adb-f9ad-41cb-8eac-2e0aba32bf22", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:05:43 compute-0 nova_compute[250018]: 2026-01-20 15:05:43.691 250022 DEBUG oslo_concurrency.lockutils [None req-469311f4-0b1c-40b7-aff0-1b57400a054d b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] Releasing lock "refresh_cache-5feeb9de-434b-4ec7-aa99-6da718514c6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:05:43 compute-0 nova_compute[250018]: 2026-01-20 15:05:43.691 250022 DEBUG nova.compute.manager [None req-469311f4-0b1c-40b7-aff0-1b57400a054d b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] Instance network_info: |[{"id": "70668adb-f9ad-41cb-8eac-2e0aba32bf22", "address": "fa:16:3e:6a:c0:d3", "network": {"id": "0f434e83-45c8-454d-820b-af39b696a1d5", "bridge": "br-int", "label": "tempest-TestShelveInstance-1862824485-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0fc924d2df984301897e81920c5e192f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap70668adb-f9", "ovs_interfaceid": "70668adb-f9ad-41cb-8eac-2e0aba32bf22", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 20 15:05:43 compute-0 nova_compute[250018]: 2026-01-20 15:05:43.693 250022 DEBUG oslo_concurrency.lockutils [req-9814f7af-9044-43a9-9dee-1cb655437915 req-6121deb4-5be9-4e25-9057-4655027fbf35 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-5feeb9de-434b-4ec7-aa99-6da718514c6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:05:43 compute-0 nova_compute[250018]: 2026-01-20 15:05:43.693 250022 DEBUG nova.network.neutron [req-9814f7af-9044-43a9-9dee-1cb655437915 req-6121deb4-5be9-4e25-9057-4655027fbf35 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] Refreshing network info cache for port 70668adb-f9ad-41cb-8eac-2e0aba32bf22 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 15:05:43 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 20 15:05:43 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4287930050' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:05:43 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 20 15:05:43 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4287930050' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:05:43 compute-0 nova_compute[250018]: 2026-01-20 15:05:43.698 250022 DEBUG nova.virt.libvirt.driver [None req-469311f4-0b1c-40b7-aff0-1b57400a054d b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] Start _get_guest_xml network_info=[{"id": "70668adb-f9ad-41cb-8eac-2e0aba32bf22", "address": "fa:16:3e:6a:c0:d3", "network": {"id": "0f434e83-45c8-454d-820b-af39b696a1d5", "bridge": "br-int", "label": "tempest-TestShelveInstance-1862824485-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0fc924d2df984301897e81920c5e192f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap70668adb-f9", "ovs_interfaceid": "70668adb-f9ad-41cb-8eac-2e0aba32bf22", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': 
'/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'mount_device': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'attachment_id': '355735e8-db73-48e6-9318-c418940c37c9', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-94300d81-b4ca-4c0a-9283-83b76826d40f', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '94300d81-b4ca-4c0a-9283-83b76826d40f', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '5feeb9de-434b-4ec7-aa99-6da718514c6f', 'attached_at': '', 'detached_at': '', 'volume_id': '94300d81-b4ca-4c0a-9283-83b76826d40f', 'serial': '94300d81-b4ca-4c0a-9283-83b76826d40f'}, 'disk_bus': 'virtio', 'guest_format': None, 'delete_on_termination': True, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 15:05:43 compute-0 sudo[345060]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:05:43 compute-0 sudo[345060]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:05:43 compute-0 sudo[345060]: pam_unix(sudo:session): session closed for user root
Jan 20 15:05:43 compute-0 nova_compute[250018]: 2026-01-20 15:05:43.706 250022 WARNING nova.virt.libvirt.driver [None req-469311f4-0b1c-40b7-aff0-1b57400a054d b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:05:43 compute-0 nova_compute[250018]: 2026-01-20 15:05:43.713 250022 DEBUG nova.virt.libvirt.host [None req-469311f4-0b1c-40b7-aff0-1b57400a054d b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 15:05:43 compute-0 nova_compute[250018]: 2026-01-20 15:05:43.714 250022 DEBUG nova.virt.libvirt.host [None req-469311f4-0b1c-40b7-aff0-1b57400a054d b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 15:05:43 compute-0 nova_compute[250018]: 2026-01-20 15:05:43.717 250022 DEBUG nova.virt.libvirt.host [None req-469311f4-0b1c-40b7-aff0-1b57400a054d b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 15:05:43 compute-0 nova_compute[250018]: 2026-01-20 15:05:43.717 250022 DEBUG nova.virt.libvirt.host [None req-469311f4-0b1c-40b7-aff0-1b57400a054d b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 15:05:43 compute-0 nova_compute[250018]: 2026-01-20 15:05:43.718 250022 DEBUG nova.virt.libvirt.driver [None req-469311f4-0b1c-40b7-aff0-1b57400a054d b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 15:05:43 compute-0 nova_compute[250018]: 2026-01-20 15:05:43.718 250022 DEBUG nova.virt.hardware [None req-469311f4-0b1c-40b7-aff0-1b57400a054d b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 15:05:43 compute-0 nova_compute[250018]: 2026-01-20 15:05:43.719 250022 DEBUG nova.virt.hardware [None req-469311f4-0b1c-40b7-aff0-1b57400a054d b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 15:05:43 compute-0 nova_compute[250018]: 2026-01-20 15:05:43.719 250022 DEBUG nova.virt.hardware [None req-469311f4-0b1c-40b7-aff0-1b57400a054d b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 15:05:43 compute-0 nova_compute[250018]: 2026-01-20 15:05:43.719 250022 DEBUG nova.virt.hardware [None req-469311f4-0b1c-40b7-aff0-1b57400a054d b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 15:05:43 compute-0 nova_compute[250018]: 2026-01-20 15:05:43.719 250022 DEBUG nova.virt.hardware [None req-469311f4-0b1c-40b7-aff0-1b57400a054d b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 15:05:43 compute-0 nova_compute[250018]: 2026-01-20 15:05:43.719 250022 DEBUG nova.virt.hardware [None req-469311f4-0b1c-40b7-aff0-1b57400a054d b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 15:05:43 compute-0 nova_compute[250018]: 2026-01-20 15:05:43.719 250022 DEBUG nova.virt.hardware [None req-469311f4-0b1c-40b7-aff0-1b57400a054d b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 15:05:43 compute-0 nova_compute[250018]: 2026-01-20 15:05:43.720 250022 DEBUG nova.virt.hardware [None req-469311f4-0b1c-40b7-aff0-1b57400a054d b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 15:05:43 compute-0 nova_compute[250018]: 2026-01-20 15:05:43.720 250022 DEBUG nova.virt.hardware [None req-469311f4-0b1c-40b7-aff0-1b57400a054d b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 15:05:43 compute-0 nova_compute[250018]: 2026-01-20 15:05:43.720 250022 DEBUG nova.virt.hardware [None req-469311f4-0b1c-40b7-aff0-1b57400a054d b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 15:05:43 compute-0 nova_compute[250018]: 2026-01-20 15:05:43.720 250022 DEBUG nova.virt.hardware [None req-469311f4-0b1c-40b7-aff0-1b57400a054d b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 15:05:43 compute-0 nova_compute[250018]: 2026-01-20 15:05:43.749 250022 DEBUG nova.storage.rbd_utils [None req-469311f4-0b1c-40b7-aff0-1b57400a054d b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] rbd image 5feeb9de-434b-4ec7-aa99-6da718514c6f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:05:43 compute-0 nova_compute[250018]: 2026-01-20 15:05:43.752 250022 DEBUG oslo_concurrency.processutils [None req-469311f4-0b1c-40b7-aff0-1b57400a054d b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:05:43 compute-0 sudo[345085]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- raw list --format json
Jan 20 15:05:43 compute-0 sudo[345085]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:05:44 compute-0 podman[345188]: 2026-01-20 15:05:44.067460137 +0000 UTC m=+0.033622167 container create d141e9d7562d935ac510962ef8a006fd8f8333e1165f7c7fce775be267240ffe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_jemison, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 20 15:05:44 compute-0 systemd[1]: Started libpod-conmon-d141e9d7562d935ac510962ef8a006fd8f8333e1165f7c7fce775be267240ffe.scope.
Jan 20 15:05:44 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:05:44 compute-0 podman[345188]: 2026-01-20 15:05:44.1395764 +0000 UTC m=+0.105738450 container init d141e9d7562d935ac510962ef8a006fd8f8333e1165f7c7fce775be267240ffe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_jemison, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:05:44 compute-0 podman[345188]: 2026-01-20 15:05:44.053171222 +0000 UTC m=+0.019333272 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:05:44 compute-0 podman[345188]: 2026-01-20 15:05:44.152266152 +0000 UTC m=+0.118428172 container start d141e9d7562d935ac510962ef8a006fd8f8333e1165f7c7fce775be267240ffe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_jemison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 15:05:44 compute-0 podman[345188]: 2026-01-20 15:05:44.155458927 +0000 UTC m=+0.121620987 container attach d141e9d7562d935ac510962ef8a006fd8f8333e1165f7c7fce775be267240ffe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_jemison, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 15:05:44 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:05:44 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:05:44 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:05:44.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:05:44 compute-0 flamboyant_jemison[345204]: 167 167
Jan 20 15:05:44 compute-0 podman[345188]: 2026-01-20 15:05:44.160165414 +0000 UTC m=+0.126327454 container died d141e9d7562d935ac510962ef8a006fd8f8333e1165f7c7fce775be267240ffe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_jemison, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 20 15:05:44 compute-0 systemd[1]: libpod-d141e9d7562d935ac510962ef8a006fd8f8333e1165f7c7fce775be267240ffe.scope: Deactivated successfully.
Jan 20 15:05:44 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 15:05:44 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2430875220' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:05:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-b3b711eef6c236f909b9fa7ae57f4e856f55c001a969682c36723ffa0e0b5102-merged.mount: Deactivated successfully.
Jan 20 15:05:44 compute-0 nova_compute[250018]: 2026-01-20 15:05:44.208 250022 DEBUG oslo_concurrency.processutils [None req-469311f4-0b1c-40b7-aff0-1b57400a054d b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:05:44 compute-0 podman[345188]: 2026-01-20 15:05:44.21533314 +0000 UTC m=+0.181495160 container remove d141e9d7562d935ac510962ef8a006fd8f8333e1165f7c7fce775be267240ffe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_jemison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 15:05:44 compute-0 systemd[1]: libpod-conmon-d141e9d7562d935ac510962ef8a006fd8f8333e1165f7c7fce775be267240ffe.scope: Deactivated successfully.
Jan 20 15:05:44 compute-0 nova_compute[250018]: 2026-01-20 15:05:44.262 250022 DEBUG nova.virt.libvirt.vif [None req-469311f4-0b1c-40b7-aff0-1b57400a054d b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T15:05:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestShelveInstance-server-670486896',display_name='tempest-TestShelveInstance-server-670486896',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-670486896',id=162,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGFkn7BLthBV1Y62q/iaiaFYVNSXov56cyC6gJDof3vS0dj6UwuVwvMnqOok2l8W+oqb55YucgjGf+63NOxxoSCxoRUO/Jcx5MarGHmdQPdT+6u18ixvV1ghiExv/Y0Nog==',key_name='tempest-TestShelveInstance-1862119958',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0fc924d2df984301897e81920c5e192f',ramdisk_id='',reservation_id='r-18qxw31n',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestShelveInstance-1425544575',owner_user_name='tempest-TestShelveInstance-1425544575-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T15:05:38Z,user_data=None,user_id='b02a8ef6cc3946ceb2c8846aae2eae68',uuid=5feeb9de-434b-4ec7-aa99-6da718514c6f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "70668adb-f9ad-41cb-8eac-2e0aba32bf22", "address": "fa:16:3e:6a:c0:d3", "network": {"id": "0f434e83-45c8-454d-820b-af39b696a1d5", "bridge": "br-int", "label": "tempest-TestShelveInstance-1862824485-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"0fc924d2df984301897e81920c5e192f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap70668adb-f9", "ovs_interfaceid": "70668adb-f9ad-41cb-8eac-2e0aba32bf22", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 15:05:44 compute-0 nova_compute[250018]: 2026-01-20 15:05:44.263 250022 DEBUG nova.network.os_vif_util [None req-469311f4-0b1c-40b7-aff0-1b57400a054d b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] Converting VIF {"id": "70668adb-f9ad-41cb-8eac-2e0aba32bf22", "address": "fa:16:3e:6a:c0:d3", "network": {"id": "0f434e83-45c8-454d-820b-af39b696a1d5", "bridge": "br-int", "label": "tempest-TestShelveInstance-1862824485-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0fc924d2df984301897e81920c5e192f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap70668adb-f9", "ovs_interfaceid": "70668adb-f9ad-41cb-8eac-2e0aba32bf22", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:05:44 compute-0 nova_compute[250018]: 2026-01-20 15:05:44.264 250022 DEBUG nova.network.os_vif_util [None req-469311f4-0b1c-40b7-aff0-1b57400a054d b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6a:c0:d3,bridge_name='br-int',has_traffic_filtering=True,id=70668adb-f9ad-41cb-8eac-2e0aba32bf22,network=Network(0f434e83-45c8-454d-820b-af39b696a1d5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap70668adb-f9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:05:44 compute-0 nova_compute[250018]: 2026-01-20 15:05:44.266 250022 DEBUG nova.objects.instance [None req-469311f4-0b1c-40b7-aff0-1b57400a054d b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] Lazy-loading 'pci_devices' on Instance uuid 5feeb9de-434b-4ec7-aa99-6da718514c6f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:05:44 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2448: 321 pgs: 321 active+clean; 214 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 760 KiB/s wr, 124 op/s
Jan 20 15:05:44 compute-0 nova_compute[250018]: 2026-01-20 15:05:44.312 250022 DEBUG nova.virt.libvirt.driver [None req-469311f4-0b1c-40b7-aff0-1b57400a054d b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] End _get_guest_xml xml=<domain type="kvm">
Jan 20 15:05:44 compute-0 nova_compute[250018]:   <uuid>5feeb9de-434b-4ec7-aa99-6da718514c6f</uuid>
Jan 20 15:05:44 compute-0 nova_compute[250018]:   <name>instance-000000a2</name>
Jan 20 15:05:44 compute-0 nova_compute[250018]:   <memory>131072</memory>
Jan 20 15:05:44 compute-0 nova_compute[250018]:   <vcpu>1</vcpu>
Jan 20 15:05:44 compute-0 nova_compute[250018]:   <metadata>
Jan 20 15:05:44 compute-0 nova_compute[250018]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 15:05:44 compute-0 nova_compute[250018]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 15:05:44 compute-0 nova_compute[250018]:       <nova:name>tempest-TestShelveInstance-server-670486896</nova:name>
Jan 20 15:05:44 compute-0 nova_compute[250018]:       <nova:creationTime>2026-01-20 15:05:43</nova:creationTime>
Jan 20 15:05:44 compute-0 nova_compute[250018]:       <nova:flavor name="m1.nano">
Jan 20 15:05:44 compute-0 nova_compute[250018]:         <nova:memory>128</nova:memory>
Jan 20 15:05:44 compute-0 nova_compute[250018]:         <nova:disk>1</nova:disk>
Jan 20 15:05:44 compute-0 nova_compute[250018]:         <nova:swap>0</nova:swap>
Jan 20 15:05:44 compute-0 nova_compute[250018]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 15:05:44 compute-0 nova_compute[250018]:         <nova:vcpus>1</nova:vcpus>
Jan 20 15:05:44 compute-0 nova_compute[250018]:       </nova:flavor>
Jan 20 15:05:44 compute-0 nova_compute[250018]:       <nova:owner>
Jan 20 15:05:44 compute-0 nova_compute[250018]:         <nova:user uuid="b02a8ef6cc3946ceb2c8846aae2eae68">tempest-TestShelveInstance-1425544575-project-member</nova:user>
Jan 20 15:05:44 compute-0 nova_compute[250018]:         <nova:project uuid="0fc924d2df984301897e81920c5e192f">tempest-TestShelveInstance-1425544575</nova:project>
Jan 20 15:05:44 compute-0 nova_compute[250018]:       </nova:owner>
Jan 20 15:05:44 compute-0 nova_compute[250018]:       <nova:ports>
Jan 20 15:05:44 compute-0 nova_compute[250018]:         <nova:port uuid="70668adb-f9ad-41cb-8eac-2e0aba32bf22">
Jan 20 15:05:44 compute-0 nova_compute[250018]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Jan 20 15:05:44 compute-0 nova_compute[250018]:         </nova:port>
Jan 20 15:05:44 compute-0 nova_compute[250018]:       </nova:ports>
Jan 20 15:05:44 compute-0 nova_compute[250018]:     </nova:instance>
Jan 20 15:05:44 compute-0 nova_compute[250018]:   </metadata>
Jan 20 15:05:44 compute-0 nova_compute[250018]:   <sysinfo type="smbios">
Jan 20 15:05:44 compute-0 nova_compute[250018]:     <system>
Jan 20 15:05:44 compute-0 nova_compute[250018]:       <entry name="manufacturer">RDO</entry>
Jan 20 15:05:44 compute-0 nova_compute[250018]:       <entry name="product">OpenStack Compute</entry>
Jan 20 15:05:44 compute-0 nova_compute[250018]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 15:05:44 compute-0 nova_compute[250018]:       <entry name="serial">5feeb9de-434b-4ec7-aa99-6da718514c6f</entry>
Jan 20 15:05:44 compute-0 nova_compute[250018]:       <entry name="uuid">5feeb9de-434b-4ec7-aa99-6da718514c6f</entry>
Jan 20 15:05:44 compute-0 nova_compute[250018]:       <entry name="family">Virtual Machine</entry>
Jan 20 15:05:44 compute-0 nova_compute[250018]:     </system>
Jan 20 15:05:44 compute-0 nova_compute[250018]:   </sysinfo>
Jan 20 15:05:44 compute-0 nova_compute[250018]:   <os>
Jan 20 15:05:44 compute-0 nova_compute[250018]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 15:05:44 compute-0 nova_compute[250018]:     <boot dev="hd"/>
Jan 20 15:05:44 compute-0 nova_compute[250018]:     <smbios mode="sysinfo"/>
Jan 20 15:05:44 compute-0 nova_compute[250018]:   </os>
Jan 20 15:05:44 compute-0 nova_compute[250018]:   <features>
Jan 20 15:05:44 compute-0 nova_compute[250018]:     <acpi/>
Jan 20 15:05:44 compute-0 nova_compute[250018]:     <apic/>
Jan 20 15:05:44 compute-0 nova_compute[250018]:     <vmcoreinfo/>
Jan 20 15:05:44 compute-0 nova_compute[250018]:   </features>
Jan 20 15:05:44 compute-0 nova_compute[250018]:   <clock offset="utc">
Jan 20 15:05:44 compute-0 nova_compute[250018]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 15:05:44 compute-0 nova_compute[250018]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 15:05:44 compute-0 nova_compute[250018]:     <timer name="hpet" present="no"/>
Jan 20 15:05:44 compute-0 nova_compute[250018]:   </clock>
Jan 20 15:05:44 compute-0 nova_compute[250018]:   <cpu mode="custom" match="exact">
Jan 20 15:05:44 compute-0 nova_compute[250018]:     <model>Nehalem</model>
Jan 20 15:05:44 compute-0 nova_compute[250018]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 15:05:44 compute-0 nova_compute[250018]:   </cpu>
Jan 20 15:05:44 compute-0 nova_compute[250018]:   <devices>
Jan 20 15:05:44 compute-0 nova_compute[250018]:     <disk type="network" device="cdrom">
Jan 20 15:05:44 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 15:05:44 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/5feeb9de-434b-4ec7-aa99-6da718514c6f_disk.config">
Jan 20 15:05:44 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 15:05:44 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 15:05:44 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 15:05:44 compute-0 nova_compute[250018]:       </source>
Jan 20 15:05:44 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 15:05:44 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 15:05:44 compute-0 nova_compute[250018]:       </auth>
Jan 20 15:05:44 compute-0 nova_compute[250018]:       <target dev="sda" bus="sata"/>
Jan 20 15:05:44 compute-0 nova_compute[250018]:     </disk>
Jan 20 15:05:44 compute-0 nova_compute[250018]:     <disk type="network" device="disk">
Jan 20 15:05:44 compute-0 nova_compute[250018]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 20 15:05:44 compute-0 nova_compute[250018]:       <source protocol="rbd" name="volumes/volume-94300d81-b4ca-4c0a-9283-83b76826d40f">
Jan 20 15:05:44 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 15:05:44 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 15:05:44 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 15:05:44 compute-0 nova_compute[250018]:       </source>
Jan 20 15:05:44 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 15:05:44 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 15:05:44 compute-0 nova_compute[250018]:       </auth>
Jan 20 15:05:44 compute-0 nova_compute[250018]:       <target dev="vda" bus="virtio"/>
Jan 20 15:05:44 compute-0 nova_compute[250018]:       <serial>94300d81-b4ca-4c0a-9283-83b76826d40f</serial>
Jan 20 15:05:44 compute-0 nova_compute[250018]:     </disk>
Jan 20 15:05:44 compute-0 nova_compute[250018]:     <interface type="ethernet">
Jan 20 15:05:44 compute-0 nova_compute[250018]:       <mac address="fa:16:3e:6a:c0:d3"/>
Jan 20 15:05:44 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 15:05:44 compute-0 nova_compute[250018]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 15:05:44 compute-0 nova_compute[250018]:       <mtu size="1442"/>
Jan 20 15:05:44 compute-0 nova_compute[250018]:       <target dev="tap70668adb-f9"/>
Jan 20 15:05:44 compute-0 nova_compute[250018]:     </interface>
Jan 20 15:05:44 compute-0 nova_compute[250018]:     <serial type="pty">
Jan 20 15:05:44 compute-0 nova_compute[250018]:       <log file="/var/lib/nova/instances/5feeb9de-434b-4ec7-aa99-6da718514c6f/console.log" append="off"/>
Jan 20 15:05:44 compute-0 nova_compute[250018]:     </serial>
Jan 20 15:05:44 compute-0 nova_compute[250018]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 15:05:44 compute-0 nova_compute[250018]:     <video>
Jan 20 15:05:44 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 15:05:44 compute-0 nova_compute[250018]:     </video>
Jan 20 15:05:44 compute-0 nova_compute[250018]:     <input type="tablet" bus="usb"/>
Jan 20 15:05:44 compute-0 nova_compute[250018]:     <rng model="virtio">
Jan 20 15:05:44 compute-0 nova_compute[250018]:       <backend model="random">/dev/urandom</backend>
Jan 20 15:05:44 compute-0 nova_compute[250018]:     </rng>
Jan 20 15:05:44 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root"/>
Jan 20 15:05:44 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:05:44 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:05:44 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:05:44 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:05:44 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:05:44 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:05:44 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:05:44 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:05:44 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:05:44 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:05:44 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:05:44 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:05:44 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:05:44 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:05:44 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:05:44 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:05:44 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:05:44 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:05:44 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:05:44 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:05:44 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:05:44 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:05:44 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:05:44 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:05:44 compute-0 nova_compute[250018]:     <controller type="usb" index="0"/>
Jan 20 15:05:44 compute-0 nova_compute[250018]:     <memballoon model="virtio">
Jan 20 15:05:44 compute-0 nova_compute[250018]:       <stats period="10"/>
Jan 20 15:05:44 compute-0 nova_compute[250018]:     </memballoon>
Jan 20 15:05:44 compute-0 nova_compute[250018]:   </devices>
Jan 20 15:05:44 compute-0 nova_compute[250018]: </domain>
Jan 20 15:05:44 compute-0 nova_compute[250018]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 15:05:44 compute-0 nova_compute[250018]: 2026-01-20 15:05:44.313 250022 DEBUG nova.compute.manager [None req-469311f4-0b1c-40b7-aff0-1b57400a054d b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] Preparing to wait for external event network-vif-plugged-70668adb-f9ad-41cb-8eac-2e0aba32bf22 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 20 15:05:44 compute-0 nova_compute[250018]: 2026-01-20 15:05:44.313 250022 DEBUG oslo_concurrency.lockutils [None req-469311f4-0b1c-40b7-aff0-1b57400a054d b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] Acquiring lock "5feeb9de-434b-4ec7-aa99-6da718514c6f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:05:44 compute-0 nova_compute[250018]: 2026-01-20 15:05:44.314 250022 DEBUG oslo_concurrency.lockutils [None req-469311f4-0b1c-40b7-aff0-1b57400a054d b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] Lock "5feeb9de-434b-4ec7-aa99-6da718514c6f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:05:44 compute-0 nova_compute[250018]: 2026-01-20 15:05:44.314 250022 DEBUG oslo_concurrency.lockutils [None req-469311f4-0b1c-40b7-aff0-1b57400a054d b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] Lock "5feeb9de-434b-4ec7-aa99-6da718514c6f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:05:44 compute-0 nova_compute[250018]: 2026-01-20 15:05:44.316 250022 DEBUG nova.virt.libvirt.vif [None req-469311f4-0b1c-40b7-aff0-1b57400a054d b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T15:05:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestShelveInstance-server-670486896',display_name='tempest-TestShelveInstance-server-670486896',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-670486896',id=162,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGFkn7BLthBV1Y62q/iaiaFYVNSXov56cyC6gJDof3vS0dj6UwuVwvMnqOok2l8W+oqb55YucgjGf+63NOxxoSCxoRUO/Jcx5MarGHmdQPdT+6u18ixvV1ghiExv/Y0Nog==',key_name='tempest-TestShelveInstance-1862119958',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0fc924d2df984301897e81920c5e192f',ramdisk_id='',reservation_id='r-18qxw31n',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestShelveInstance-1425544575',owner_user_name='tempest-TestShelveInstance-1425544575-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T15:05:38Z,user_data=None,user_id='b02a8ef6cc3946ceb2c8846aae2eae68',uuid=5feeb9de-434b-4ec7-aa99-6da718514c6f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "70668adb-f9ad-41cb-8eac-2e0aba32bf22", "address": "fa:16:3e:6a:c0:d3", "network": {"id": "0f434e83-45c8-454d-820b-af39b696a1d5", "bridge": "br-int", "label": "tempest-TestShelveInstance-1862824485-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"0fc924d2df984301897e81920c5e192f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap70668adb-f9", "ovs_interfaceid": "70668adb-f9ad-41cb-8eac-2e0aba32bf22", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 15:05:44 compute-0 nova_compute[250018]: 2026-01-20 15:05:44.316 250022 DEBUG nova.network.os_vif_util [None req-469311f4-0b1c-40b7-aff0-1b57400a054d b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] Converting VIF {"id": "70668adb-f9ad-41cb-8eac-2e0aba32bf22", "address": "fa:16:3e:6a:c0:d3", "network": {"id": "0f434e83-45c8-454d-820b-af39b696a1d5", "bridge": "br-int", "label": "tempest-TestShelveInstance-1862824485-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0fc924d2df984301897e81920c5e192f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap70668adb-f9", "ovs_interfaceid": "70668adb-f9ad-41cb-8eac-2e0aba32bf22", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:05:44 compute-0 nova_compute[250018]: 2026-01-20 15:05:44.317 250022 DEBUG nova.network.os_vif_util [None req-469311f4-0b1c-40b7-aff0-1b57400a054d b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6a:c0:d3,bridge_name='br-int',has_traffic_filtering=True,id=70668adb-f9ad-41cb-8eac-2e0aba32bf22,network=Network(0f434e83-45c8-454d-820b-af39b696a1d5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap70668adb-f9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:05:44 compute-0 nova_compute[250018]: 2026-01-20 15:05:44.318 250022 DEBUG os_vif [None req-469311f4-0b1c-40b7-aff0-1b57400a054d b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6a:c0:d3,bridge_name='br-int',has_traffic_filtering=True,id=70668adb-f9ad-41cb-8eac-2e0aba32bf22,network=Network(0f434e83-45c8-454d-820b-af39b696a1d5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap70668adb-f9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 15:05:44 compute-0 nova_compute[250018]: 2026-01-20 15:05:44.319 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:05:44 compute-0 nova_compute[250018]: 2026-01-20 15:05:44.320 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:05:44 compute-0 nova_compute[250018]: 2026-01-20 15:05:44.321 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:05:44 compute-0 nova_compute[250018]: 2026-01-20 15:05:44.326 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:05:44 compute-0 nova_compute[250018]: 2026-01-20 15:05:44.326 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap70668adb-f9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:05:44 compute-0 nova_compute[250018]: 2026-01-20 15:05:44.327 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap70668adb-f9, col_values=(('external_ids', {'iface-id': '70668adb-f9ad-41cb-8eac-2e0aba32bf22', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6a:c0:d3', 'vm-uuid': '5feeb9de-434b-4ec7-aa99-6da718514c6f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:05:44 compute-0 nova_compute[250018]: 2026-01-20 15:05:44.367 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:05:44 compute-0 NetworkManager[48960]: <info>  [1768921544.3680] manager: (tap70668adb-f9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/273)
Jan 20 15:05:44 compute-0 nova_compute[250018]: 2026-01-20 15:05:44.370 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 15:05:44 compute-0 nova_compute[250018]: 2026-01-20 15:05:44.374 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:05:44 compute-0 nova_compute[250018]: 2026-01-20 15:05:44.376 250022 INFO os_vif [None req-469311f4-0b1c-40b7-aff0-1b57400a054d b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6a:c0:d3,bridge_name='br-int',has_traffic_filtering=True,id=70668adb-f9ad-41cb-8eac-2e0aba32bf22,network=Network(0f434e83-45c8-454d-820b-af39b696a1d5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap70668adb-f9')
Jan 20 15:05:44 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/4287930050' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:05:44 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/4287930050' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:05:44 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/2430875220' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:05:44 compute-0 podman[345230]: 2026-01-20 15:05:44.414203227 +0000 UTC m=+0.089467091 container create 8ce42dd4722ae122404bc56536c25b49b1cc9db52abb0f45fdfbf0be655ee264 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_swartz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:05:44 compute-0 systemd[1]: Started libpod-conmon-8ce42dd4722ae122404bc56536c25b49b1cc9db52abb0f45fdfbf0be655ee264.scope.
Jan 20 15:05:44 compute-0 nova_compute[250018]: 2026-01-20 15:05:44.478 250022 DEBUG nova.virt.libvirt.driver [None req-469311f4-0b1c-40b7-aff0-1b57400a054d b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:05:44 compute-0 nova_compute[250018]: 2026-01-20 15:05:44.478 250022 DEBUG nova.virt.libvirt.driver [None req-469311f4-0b1c-40b7-aff0-1b57400a054d b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:05:44 compute-0 nova_compute[250018]: 2026-01-20 15:05:44.479 250022 DEBUG nova.virt.libvirt.driver [None req-469311f4-0b1c-40b7-aff0-1b57400a054d b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] No VIF found with MAC fa:16:3e:6a:c0:d3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 15:05:44 compute-0 nova_compute[250018]: 2026-01-20 15:05:44.479 250022 INFO nova.virt.libvirt.driver [None req-469311f4-0b1c-40b7-aff0-1b57400a054d b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] Using config drive
Jan 20 15:05:44 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:05:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02a0c9a2fede4d2a68e853940b1a9ebe91089c1d114212bb3dbd1908b6b92073/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 15:05:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02a0c9a2fede4d2a68e853940b1a9ebe91089c1d114212bb3dbd1908b6b92073/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 15:05:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02a0c9a2fede4d2a68e853940b1a9ebe91089c1d114212bb3dbd1908b6b92073/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 15:05:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02a0c9a2fede4d2a68e853940b1a9ebe91089c1d114212bb3dbd1908b6b92073/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 15:05:44 compute-0 podman[345230]: 2026-01-20 15:05:44.396255813 +0000 UTC m=+0.071519637 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:05:44 compute-0 podman[345230]: 2026-01-20 15:05:44.505428744 +0000 UTC m=+0.180692628 container init 8ce42dd4722ae122404bc56536c25b49b1cc9db52abb0f45fdfbf0be655ee264 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_swartz, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 20 15:05:44 compute-0 podman[345230]: 2026-01-20 15:05:44.515189347 +0000 UTC m=+0.190453171 container start 8ce42dd4722ae122404bc56536c25b49b1cc9db52abb0f45fdfbf0be655ee264 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_swartz, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 20 15:05:44 compute-0 nova_compute[250018]: 2026-01-20 15:05:44.515 250022 DEBUG nova.storage.rbd_utils [None req-469311f4-0b1c-40b7-aff0-1b57400a054d b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] rbd image 5feeb9de-434b-4ec7-aa99-6da718514c6f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:05:44 compute-0 podman[345230]: 2026-01-20 15:05:44.519152144 +0000 UTC m=+0.194416008 container attach 8ce42dd4722ae122404bc56536c25b49b1cc9db52abb0f45fdfbf0be655ee264 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_swartz, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 15:05:44 compute-0 nova_compute[250018]: 2026-01-20 15:05:44.619 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:05:45 compute-0 practical_swartz[345249]: {
Jan 20 15:05:45 compute-0 practical_swartz[345249]:     "afc3740a-ccee-46f8-b343-22648cc89376": {
Jan 20 15:05:45 compute-0 practical_swartz[345249]:         "ceph_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 15:05:45 compute-0 practical_swartz[345249]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 20 15:05:45 compute-0 practical_swartz[345249]:         "osd_id": 0,
Jan 20 15:05:45 compute-0 practical_swartz[345249]:         "osd_uuid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 15:05:45 compute-0 practical_swartz[345249]:         "type": "bluestore"
Jan 20 15:05:45 compute-0 practical_swartz[345249]:     }
Jan 20 15:05:45 compute-0 practical_swartz[345249]: }
Jan 20 15:05:45 compute-0 systemd[1]: libpod-8ce42dd4722ae122404bc56536c25b49b1cc9db52abb0f45fdfbf0be655ee264.scope: Deactivated successfully.
Jan 20 15:05:45 compute-0 podman[345230]: 2026-01-20 15:05:45.375724827 +0000 UTC m=+1.050988651 container died 8ce42dd4722ae122404bc56536c25b49b1cc9db52abb0f45fdfbf0be655ee264 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_swartz, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 20 15:05:45 compute-0 ceph-mon[74360]: pgmap v2448: 321 pgs: 321 active+clean; 214 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 760 KiB/s wr, 124 op/s
Jan 20 15:05:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-02a0c9a2fede4d2a68e853940b1a9ebe91089c1d114212bb3dbd1908b6b92073-merged.mount: Deactivated successfully.
Jan 20 15:05:45 compute-0 nova_compute[250018]: 2026-01-20 15:05:45.441 250022 INFO nova.virt.libvirt.driver [None req-469311f4-0b1c-40b7-aff0-1b57400a054d b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] Creating config drive at /var/lib/nova/instances/5feeb9de-434b-4ec7-aa99-6da718514c6f/disk.config
Jan 20 15:05:45 compute-0 podman[345230]: 2026-01-20 15:05:45.45416849 +0000 UTC m=+1.129432324 container remove 8ce42dd4722ae122404bc56536c25b49b1cc9db52abb0f45fdfbf0be655ee264 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_swartz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 15:05:45 compute-0 nova_compute[250018]: 2026-01-20 15:05:45.452 250022 DEBUG oslo_concurrency.processutils [None req-469311f4-0b1c-40b7-aff0-1b57400a054d b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5feeb9de-434b-4ec7-aa99-6da718514c6f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpsvfefu_h execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:05:45 compute-0 systemd[1]: libpod-conmon-8ce42dd4722ae122404bc56536c25b49b1cc9db52abb0f45fdfbf0be655ee264.scope: Deactivated successfully.
Jan 20 15:05:45 compute-0 sudo[345085]: pam_unix(sudo:session): session closed for user root
Jan 20 15:05:45 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 15:05:45 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:05:45 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 15:05:45 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:05:45 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 7746cfa4-4dc4-46a1-a907-e2f5ee0d50af does not exist
Jan 20 15:05:45 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev d3f47552-438d-4f95-84c9-b5553ec3f004 does not exist
Jan 20 15:05:45 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 056a4198-03d7-403f-b0c6-1a365c3c7faf does not exist
Jan 20 15:05:45 compute-0 sudo[345305]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:05:45 compute-0 sudo[345305]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:05:45 compute-0 sudo[345305]: pam_unix(sudo:session): session closed for user root
Jan 20 15:05:45 compute-0 nova_compute[250018]: 2026-01-20 15:05:45.617 250022 DEBUG oslo_concurrency.processutils [None req-469311f4-0b1c-40b7-aff0-1b57400a054d b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5feeb9de-434b-4ec7-aa99-6da718514c6f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpsvfefu_h" returned: 0 in 0.165s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:05:45 compute-0 nova_compute[250018]: 2026-01-20 15:05:45.648 250022 DEBUG nova.storage.rbd_utils [None req-469311f4-0b1c-40b7-aff0-1b57400a054d b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] rbd image 5feeb9de-434b-4ec7-aa99-6da718514c6f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:05:45 compute-0 nova_compute[250018]: 2026-01-20 15:05:45.652 250022 DEBUG oslo_concurrency.processutils [None req-469311f4-0b1c-40b7-aff0-1b57400a054d b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5feeb9de-434b-4ec7-aa99-6da718514c6f/disk.config 5feeb9de-434b-4ec7-aa99-6da718514c6f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:05:45 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:05:45 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:05:45 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:05:45.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:05:45 compute-0 sudo[345330]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 15:05:45 compute-0 sudo[345330]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:05:45 compute-0 sudo[345330]: pam_unix(sudo:session): session closed for user root
Jan 20 15:05:45 compute-0 nova_compute[250018]: 2026-01-20 15:05:45.839 250022 DEBUG oslo_concurrency.processutils [None req-469311f4-0b1c-40b7-aff0-1b57400a054d b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5feeb9de-434b-4ec7-aa99-6da718514c6f/disk.config 5feeb9de-434b-4ec7-aa99-6da718514c6f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.187s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:05:45 compute-0 nova_compute[250018]: 2026-01-20 15:05:45.840 250022 INFO nova.virt.libvirt.driver [None req-469311f4-0b1c-40b7-aff0-1b57400a054d b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] Deleting local config drive /var/lib/nova/instances/5feeb9de-434b-4ec7-aa99-6da718514c6f/disk.config because it was imported into RBD.
Jan 20 15:05:45 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:05:45 compute-0 kernel: tap70668adb-f9: entered promiscuous mode
Jan 20 15:05:45 compute-0 ovn_controller[148666]: 2026-01-20T15:05:45Z|00561|binding|INFO|Claiming lport 70668adb-f9ad-41cb-8eac-2e0aba32bf22 for this chassis.
Jan 20 15:05:45 compute-0 ovn_controller[148666]: 2026-01-20T15:05:45Z|00562|binding|INFO|70668adb-f9ad-41cb-8eac-2e0aba32bf22: Claiming fa:16:3e:6a:c0:d3 10.100.0.3
Jan 20 15:05:45 compute-0 nova_compute[250018]: 2026-01-20 15:05:45.908 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:05:45 compute-0 NetworkManager[48960]: <info>  [1768921545.9097] manager: (tap70668adb-f9): new Tun device (/org/freedesktop/NetworkManager/Devices/274)
Jan 20 15:05:45 compute-0 nova_compute[250018]: 2026-01-20 15:05:45.912 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:05:45 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:05:45.920 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6a:c0:d3 10.100.0.3'], port_security=['fa:16:3e:6a:c0:d3 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '5feeb9de-434b-4ec7-aa99-6da718514c6f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0f434e83-45c8-454d-820b-af39b696a1d5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0fc924d2df984301897e81920c5e192f', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd8d958e0-892e-4275-9633-96783d5a96b0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cf0669b1-9b02-4bfa-859e-dac906b93fdc, chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=70668adb-f9ad-41cb-8eac-2e0aba32bf22) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:05:45 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:05:45.921 160071 INFO neutron.agent.ovn.metadata.agent [-] Port 70668adb-f9ad-41cb-8eac-2e0aba32bf22 in datapath 0f434e83-45c8-454d-820b-af39b696a1d5 bound to our chassis
Jan 20 15:05:45 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:05:45.923 160071 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0f434e83-45c8-454d-820b-af39b696a1d5
Jan 20 15:05:45 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:05:45.938 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[01154eba-b1f6-4e95-9b14-34a3888bb1a6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:05:45 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:05:45.940 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap0f434e83-41 in ovnmeta-0f434e83-45c8-454d-820b-af39b696a1d5 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 15:05:45 compute-0 systemd-udevd[345407]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 15:05:45 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:05:45.941 257604 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap0f434e83-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 15:05:45 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:05:45.941 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[f0d037d3-995d-4268-9d16-db225916b3e5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:05:45 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:05:45.942 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[e44cce77-079b-44a4-845a-1bf8516dcb15]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:05:45 compute-0 systemd-machined[216401]: New machine qemu-72-instance-000000a2.
Jan 20 15:05:45 compute-0 NetworkManager[48960]: <info>  [1768921545.9553] device (tap70668adb-f9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 15:05:45 compute-0 NetworkManager[48960]: <info>  [1768921545.9563] device (tap70668adb-f9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 15:05:45 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:05:45.961 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[1dc517f1-62c1-4491-86db-653932cb4681]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:05:45 compute-0 nova_compute[250018]: 2026-01-20 15:05:45.973 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:05:45 compute-0 systemd[1]: Started Virtual Machine qemu-72-instance-000000a2.
Jan 20 15:05:45 compute-0 ovn_controller[148666]: 2026-01-20T15:05:45Z|00563|binding|INFO|Setting lport 70668adb-f9ad-41cb-8eac-2e0aba32bf22 ovn-installed in OVS
Jan 20 15:05:45 compute-0 ovn_controller[148666]: 2026-01-20T15:05:45Z|00564|binding|INFO|Setting lport 70668adb-f9ad-41cb-8eac-2e0aba32bf22 up in Southbound
Jan 20 15:05:45 compute-0 nova_compute[250018]: 2026-01-20 15:05:45.980 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:05:45 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:05:45.992 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[215afd1a-3440-4855-8db5-f73e1405c9ae]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:05:46 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:05:46.028 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[d6c4eb2d-613a-4430-8c71-39da4abf30b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:05:46 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:05:46.034 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[1077ac21-5398-4f28-a0d0-0cc3c5352542]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:05:46 compute-0 NetworkManager[48960]: <info>  [1768921546.0352] manager: (tap0f434e83-40): new Veth device (/org/freedesktop/NetworkManager/Devices/275)
Jan 20 15:05:46 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:05:46.070 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[95062279-a598-4f87-8df8-1c33deb2c432]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:05:46 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:05:46.074 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[c4dd7c31-25ed-4bf5-93d4-869e767d3e00]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:05:46 compute-0 NetworkManager[48960]: <info>  [1768921546.0996] device (tap0f434e83-40): carrier: link connected
Jan 20 15:05:46 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:05:46.108 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[ede28f1a-4f18-489a-8305-a6e43afb2c2e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:05:46 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:05:46.129 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[7aa7d543-2aaa-43d3-b234-fd304f00f9e9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0f434e83-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2e:12:8d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 184], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 748682, 'reachable_time': 24229, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 345439, 'error': None, 'target': 'ovnmeta-0f434e83-45c8-454d-820b-af39b696a1d5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:05:46 compute-0 nova_compute[250018]: 2026-01-20 15:05:46.130 250022 DEBUG nova.network.neutron [req-9814f7af-9044-43a9-9dee-1cb655437915 req-6121deb4-5be9-4e25-9057-4655027fbf35 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] Updated VIF entry in instance network info cache for port 70668adb-f9ad-41cb-8eac-2e0aba32bf22. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 15:05:46 compute-0 nova_compute[250018]: 2026-01-20 15:05:46.131 250022 DEBUG nova.network.neutron [req-9814f7af-9044-43a9-9dee-1cb655437915 req-6121deb4-5be9-4e25-9057-4655027fbf35 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] Updating instance_info_cache with network_info: [{"id": "70668adb-f9ad-41cb-8eac-2e0aba32bf22", "address": "fa:16:3e:6a:c0:d3", "network": {"id": "0f434e83-45c8-454d-820b-af39b696a1d5", "bridge": "br-int", "label": "tempest-TestShelveInstance-1862824485-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0fc924d2df984301897e81920c5e192f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap70668adb-f9", "ovs_interfaceid": "70668adb-f9ad-41cb-8eac-2e0aba32bf22", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:05:46 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:05:46.152 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[c72810fe-1053-405c-b895-e27eb0ee9beb]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe2e:128d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 748682, 'tstamp': 748682}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 345440, 'error': None, 'target': 'ovnmeta-0f434e83-45c8-454d-820b-af39b696a1d5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:05:46 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:05:46 compute-0 nova_compute[250018]: 2026-01-20 15:05:46.158 250022 DEBUG oslo_concurrency.lockutils [req-9814f7af-9044-43a9-9dee-1cb655437915 req-6121deb4-5be9-4e25-9057-4655027fbf35 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-5feeb9de-434b-4ec7-aa99-6da718514c6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:05:46 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:05:46 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:05:46.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:05:46 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:05:46.173 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[7e7baa97-43fc-4b89-b328-368f1289f22c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0f434e83-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2e:12:8d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 184], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 748682, 'reachable_time': 24229, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 345441, 'error': None, 'target': 'ovnmeta-0f434e83-45c8-454d-820b-af39b696a1d5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:05:46 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:05:46.209 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[8d1f79bc-4ea3-4395-9262-46e49f5a7966]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:05:46 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:05:46.292 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[de2d7d53-0d4f-4835-a0f8-946e23bf3f7a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:05:46 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:05:46.294 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0f434e83-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:05:46 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:05:46.294 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:05:46 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:05:46.295 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0f434e83-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:05:46 compute-0 kernel: tap0f434e83-40: entered promiscuous mode
Jan 20 15:05:46 compute-0 NetworkManager[48960]: <info>  [1768921546.2984] manager: (tap0f434e83-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/276)
Jan 20 15:05:46 compute-0 nova_compute[250018]: 2026-01-20 15:05:46.297 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:05:46 compute-0 nova_compute[250018]: 2026-01-20 15:05:46.300 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:05:46 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2449: 321 pgs: 321 active+clean; 243 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 2.7 MiB/s wr, 165 op/s
Jan 20 15:05:46 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:05:46.310 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0f434e83-40, col_values=(('external_ids', {'iface-id': '6133323e-bf50-4bbd-bc0b-9ecf135d8cd5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:05:46 compute-0 ovn_controller[148666]: 2026-01-20T15:05:46Z|00565|binding|INFO|Releasing lport 6133323e-bf50-4bbd-bc0b-9ecf135d8cd5 from this chassis (sb_readonly=0)
Jan 20 15:05:46 compute-0 nova_compute[250018]: 2026-01-20 15:05:46.312 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:05:46 compute-0 nova_compute[250018]: 2026-01-20 15:05:46.333 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:05:46 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:05:46.334 160071 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/0f434e83-45c8-454d-820b-af39b696a1d5.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/0f434e83-45c8-454d-820b-af39b696a1d5.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 15:05:46 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:05:46.335 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[0b796757-d441-45dc-8fd7-72ddb130fadd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:05:46 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:05:46.335 160071 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 15:05:46 compute-0 ovn_metadata_agent[160049]: global
Jan 20 15:05:46 compute-0 ovn_metadata_agent[160049]:     log         /dev/log local0 debug
Jan 20 15:05:46 compute-0 ovn_metadata_agent[160049]:     log-tag     haproxy-metadata-proxy-0f434e83-45c8-454d-820b-af39b696a1d5
Jan 20 15:05:46 compute-0 ovn_metadata_agent[160049]:     user        root
Jan 20 15:05:46 compute-0 ovn_metadata_agent[160049]:     group       root
Jan 20 15:05:46 compute-0 ovn_metadata_agent[160049]:     maxconn     1024
Jan 20 15:05:46 compute-0 ovn_metadata_agent[160049]:     pidfile     /var/lib/neutron/external/pids/0f434e83-45c8-454d-820b-af39b696a1d5.pid.haproxy
Jan 20 15:05:46 compute-0 ovn_metadata_agent[160049]:     daemon
Jan 20 15:05:46 compute-0 ovn_metadata_agent[160049]: 
Jan 20 15:05:46 compute-0 ovn_metadata_agent[160049]: defaults
Jan 20 15:05:46 compute-0 ovn_metadata_agent[160049]:     log global
Jan 20 15:05:46 compute-0 ovn_metadata_agent[160049]:     mode http
Jan 20 15:05:46 compute-0 ovn_metadata_agent[160049]:     option httplog
Jan 20 15:05:46 compute-0 ovn_metadata_agent[160049]:     option dontlognull
Jan 20 15:05:46 compute-0 ovn_metadata_agent[160049]:     option http-server-close
Jan 20 15:05:46 compute-0 ovn_metadata_agent[160049]:     option forwardfor
Jan 20 15:05:46 compute-0 ovn_metadata_agent[160049]:     retries                 3
Jan 20 15:05:46 compute-0 ovn_metadata_agent[160049]:     timeout http-request    30s
Jan 20 15:05:46 compute-0 ovn_metadata_agent[160049]:     timeout connect         30s
Jan 20 15:05:46 compute-0 ovn_metadata_agent[160049]:     timeout client          32s
Jan 20 15:05:46 compute-0 ovn_metadata_agent[160049]:     timeout server          32s
Jan 20 15:05:46 compute-0 ovn_metadata_agent[160049]:     timeout http-keep-alive 30s
Jan 20 15:05:46 compute-0 ovn_metadata_agent[160049]: 
Jan 20 15:05:46 compute-0 ovn_metadata_agent[160049]: 
Jan 20 15:05:46 compute-0 ovn_metadata_agent[160049]: listen listener
Jan 20 15:05:46 compute-0 ovn_metadata_agent[160049]:     bind 169.254.169.254:80
Jan 20 15:05:46 compute-0 ovn_metadata_agent[160049]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 15:05:46 compute-0 ovn_metadata_agent[160049]:     http-request add-header X-OVN-Network-ID 0f434e83-45c8-454d-820b-af39b696a1d5
Jan 20 15:05:46 compute-0 ovn_metadata_agent[160049]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 15:05:46 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:05:46.336 160071 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-0f434e83-45c8-454d-820b-af39b696a1d5', 'env', 'PROCESS_TAG=haproxy-0f434e83-45c8-454d-820b-af39b696a1d5', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/0f434e83-45c8-454d-820b-af39b696a1d5.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 15:05:46 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:05:46.352 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=367c1a2c-b16a-4828-ab5a-626bb50023b4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '51'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:05:46 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/186537975' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:05:46 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/186537975' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:05:46 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:05:46 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:05:46 compute-0 podman[345509]: 2026-01-20 15:05:46.76990386 +0000 UTC m=+0.058896168 container create 0bac99c9b8d9db748da367eba0aa7fb2a1f27ef37abcf32748ad93310f041d33 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0f434e83-45c8-454d-820b-af39b696a1d5, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 20 15:05:46 compute-0 nova_compute[250018]: 2026-01-20 15:05:46.801 250022 DEBUG nova.compute.manager [req-9d44604e-1bad-4c4a-9f56-547c0fe85f7a req-579457de-6e62-488c-82e4-3ad5099c6665 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] Received event network-vif-plugged-70668adb-f9ad-41cb-8eac-2e0aba32bf22 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:05:46 compute-0 nova_compute[250018]: 2026-01-20 15:05:46.802 250022 DEBUG oslo_concurrency.lockutils [req-9d44604e-1bad-4c4a-9f56-547c0fe85f7a req-579457de-6e62-488c-82e4-3ad5099c6665 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "5feeb9de-434b-4ec7-aa99-6da718514c6f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:05:46 compute-0 nova_compute[250018]: 2026-01-20 15:05:46.802 250022 DEBUG oslo_concurrency.lockutils [req-9d44604e-1bad-4c4a-9f56-547c0fe85f7a req-579457de-6e62-488c-82e4-3ad5099c6665 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "5feeb9de-434b-4ec7-aa99-6da718514c6f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:05:46 compute-0 nova_compute[250018]: 2026-01-20 15:05:46.802 250022 DEBUG oslo_concurrency.lockutils [req-9d44604e-1bad-4c4a-9f56-547c0fe85f7a req-579457de-6e62-488c-82e4-3ad5099c6665 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "5feeb9de-434b-4ec7-aa99-6da718514c6f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:05:46 compute-0 nova_compute[250018]: 2026-01-20 15:05:46.802 250022 DEBUG nova.compute.manager [req-9d44604e-1bad-4c4a-9f56-547c0fe85f7a req-579457de-6e62-488c-82e4-3ad5099c6665 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] Processing event network-vif-plugged-70668adb-f9ad-41cb-8eac-2e0aba32bf22 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 20 15:05:46 compute-0 systemd[1]: Started libpod-conmon-0bac99c9b8d9db748da367eba0aa7fb2a1f27ef37abcf32748ad93310f041d33.scope.
Jan 20 15:05:46 compute-0 podman[345509]: 2026-01-20 15:05:46.74092382 +0000 UTC m=+0.029916148 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 15:05:46 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:05:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b4c8a4f7e3c7e6b2dedc37fd5a9c8c068e7d18480670b8078c0ecf34a17dc97/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 15:05:46 compute-0 podman[345509]: 2026-01-20 15:05:46.854644923 +0000 UTC m=+0.143637281 container init 0bac99c9b8d9db748da367eba0aa7fb2a1f27ef37abcf32748ad93310f041d33 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0f434e83-45c8-454d-820b-af39b696a1d5, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 20 15:05:46 compute-0 podman[345509]: 2026-01-20 15:05:46.862411481 +0000 UTC m=+0.151403799 container start 0bac99c9b8d9db748da367eba0aa7fb2a1f27ef37abcf32748ad93310f041d33 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0f434e83-45c8-454d-820b-af39b696a1d5, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 20 15:05:46 compute-0 neutron-haproxy-ovnmeta-0f434e83-45c8-454d-820b-af39b696a1d5[345524]: [NOTICE]   (345528) : New worker (345530) forked
Jan 20 15:05:46 compute-0 neutron-haproxy-ovnmeta-0f434e83-45c8-454d-820b-af39b696a1d5[345524]: [NOTICE]   (345528) : Loading success.
Jan 20 15:05:47 compute-0 nova_compute[250018]: 2026-01-20 15:05:47.011 250022 DEBUG nova.compute.manager [None req-469311f4-0b1c-40b7-aff0-1b57400a054d b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 15:05:47 compute-0 nova_compute[250018]: 2026-01-20 15:05:47.012 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768921547.0121567, 5feeb9de-434b-4ec7-aa99-6da718514c6f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:05:47 compute-0 nova_compute[250018]: 2026-01-20 15:05:47.012 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] VM Started (Lifecycle Event)
Jan 20 15:05:47 compute-0 nova_compute[250018]: 2026-01-20 15:05:47.016 250022 DEBUG nova.virt.libvirt.driver [None req-469311f4-0b1c-40b7-aff0-1b57400a054d b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 15:05:47 compute-0 nova_compute[250018]: 2026-01-20 15:05:47.020 250022 INFO nova.virt.libvirt.driver [-] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] Instance spawned successfully.
Jan 20 15:05:47 compute-0 nova_compute[250018]: 2026-01-20 15:05:47.021 250022 DEBUG nova.virt.libvirt.driver [None req-469311f4-0b1c-40b7-aff0-1b57400a054d b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 15:05:47 compute-0 nova_compute[250018]: 2026-01-20 15:05:47.041 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:05:47 compute-0 nova_compute[250018]: 2026-01-20 15:05:47.048 250022 DEBUG nova.virt.libvirt.driver [None req-469311f4-0b1c-40b7-aff0-1b57400a054d b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:05:47 compute-0 nova_compute[250018]: 2026-01-20 15:05:47.049 250022 DEBUG nova.virt.libvirt.driver [None req-469311f4-0b1c-40b7-aff0-1b57400a054d b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:05:47 compute-0 nova_compute[250018]: 2026-01-20 15:05:47.049 250022 DEBUG nova.virt.libvirt.driver [None req-469311f4-0b1c-40b7-aff0-1b57400a054d b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:05:47 compute-0 nova_compute[250018]: 2026-01-20 15:05:47.050 250022 DEBUG nova.virt.libvirt.driver [None req-469311f4-0b1c-40b7-aff0-1b57400a054d b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:05:47 compute-0 nova_compute[250018]: 2026-01-20 15:05:47.051 250022 DEBUG nova.virt.libvirt.driver [None req-469311f4-0b1c-40b7-aff0-1b57400a054d b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:05:47 compute-0 nova_compute[250018]: 2026-01-20 15:05:47.051 250022 DEBUG nova.virt.libvirt.driver [None req-469311f4-0b1c-40b7-aff0-1b57400a054d b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:05:47 compute-0 nova_compute[250018]: 2026-01-20 15:05:47.058 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:05:47 compute-0 nova_compute[250018]: 2026-01-20 15:05:47.110 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 15:05:47 compute-0 nova_compute[250018]: 2026-01-20 15:05:47.111 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768921547.0123177, 5feeb9de-434b-4ec7-aa99-6da718514c6f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:05:47 compute-0 nova_compute[250018]: 2026-01-20 15:05:47.112 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] VM Paused (Lifecycle Event)
Jan 20 15:05:47 compute-0 nova_compute[250018]: 2026-01-20 15:05:47.129 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:05:47 compute-0 nova_compute[250018]: 2026-01-20 15:05:47.133 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768921547.0150177, 5feeb9de-434b-4ec7-aa99-6da718514c6f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:05:47 compute-0 nova_compute[250018]: 2026-01-20 15:05:47.133 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] VM Resumed (Lifecycle Event)
Jan 20 15:05:47 compute-0 nova_compute[250018]: 2026-01-20 15:05:47.139 250022 INFO nova.compute.manager [None req-469311f4-0b1c-40b7-aff0-1b57400a054d b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] Took 7.04 seconds to spawn the instance on the hypervisor.
Jan 20 15:05:47 compute-0 nova_compute[250018]: 2026-01-20 15:05:47.139 250022 DEBUG nova.compute.manager [None req-469311f4-0b1c-40b7-aff0-1b57400a054d b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:05:47 compute-0 nova_compute[250018]: 2026-01-20 15:05:47.157 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:05:47 compute-0 nova_compute[250018]: 2026-01-20 15:05:47.162 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:05:47 compute-0 nova_compute[250018]: 2026-01-20 15:05:47.191 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 15:05:47 compute-0 nova_compute[250018]: 2026-01-20 15:05:47.224 250022 INFO nova.compute.manager [None req-469311f4-0b1c-40b7-aff0-1b57400a054d b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] Took 10.01 seconds to build instance.
Jan 20 15:05:47 compute-0 nova_compute[250018]: 2026-01-20 15:05:47.250 250022 DEBUG oslo_concurrency.lockutils [None req-469311f4-0b1c-40b7-aff0-1b57400a054d b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] Lock "5feeb9de-434b-4ec7-aa99-6da718514c6f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.109s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:05:47 compute-0 ceph-mon[74360]: pgmap v2449: 321 pgs: 321 active+clean; 243 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 2.7 MiB/s wr, 165 op/s
Jan 20 15:05:47 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 20 15:05:47 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3225821264' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:05:47 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 20 15:05:47 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3225821264' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:05:47 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:05:47 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:05:47 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:05:47.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:05:48 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:05:48 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:05:48 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:05:48.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:05:48 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2450: 321 pgs: 321 active+clean; 243 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 359 KiB/s rd, 2.1 MiB/s wr, 115 op/s
Jan 20 15:05:48 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/3225821264' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:05:48 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/3225821264' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:05:48 compute-0 nova_compute[250018]: 2026-01-20 15:05:48.908 250022 DEBUG nova.compute.manager [req-bb539f81-1982-4a90-9b50-9d89d41a9a86 req-732bab48-c5f4-4ffb-8cae-bf26ef3e2ae8 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] Received event network-vif-plugged-70668adb-f9ad-41cb-8eac-2e0aba32bf22 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:05:48 compute-0 nova_compute[250018]: 2026-01-20 15:05:48.908 250022 DEBUG oslo_concurrency.lockutils [req-bb539f81-1982-4a90-9b50-9d89d41a9a86 req-732bab48-c5f4-4ffb-8cae-bf26ef3e2ae8 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "5feeb9de-434b-4ec7-aa99-6da718514c6f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:05:48 compute-0 nova_compute[250018]: 2026-01-20 15:05:48.908 250022 DEBUG oslo_concurrency.lockutils [req-bb539f81-1982-4a90-9b50-9d89d41a9a86 req-732bab48-c5f4-4ffb-8cae-bf26ef3e2ae8 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "5feeb9de-434b-4ec7-aa99-6da718514c6f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:05:48 compute-0 nova_compute[250018]: 2026-01-20 15:05:48.909 250022 DEBUG oslo_concurrency.lockutils [req-bb539f81-1982-4a90-9b50-9d89d41a9a86 req-732bab48-c5f4-4ffb-8cae-bf26ef3e2ae8 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "5feeb9de-434b-4ec7-aa99-6da718514c6f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:05:48 compute-0 nova_compute[250018]: 2026-01-20 15:05:48.909 250022 DEBUG nova.compute.manager [req-bb539f81-1982-4a90-9b50-9d89d41a9a86 req-732bab48-c5f4-4ffb-8cae-bf26ef3e2ae8 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] No waiting events found dispatching network-vif-plugged-70668adb-f9ad-41cb-8eac-2e0aba32bf22 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:05:48 compute-0 nova_compute[250018]: 2026-01-20 15:05:48.909 250022 WARNING nova.compute.manager [req-bb539f81-1982-4a90-9b50-9d89d41a9a86 req-732bab48-c5f4-4ffb-8cae-bf26ef3e2ae8 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] Received unexpected event network-vif-plugged-70668adb-f9ad-41cb-8eac-2e0aba32bf22 for instance with vm_state active and task_state None.
Jan 20 15:05:49 compute-0 nova_compute[250018]: 2026-01-20 15:05:49.368 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:05:49 compute-0 ceph-mon[74360]: pgmap v2450: 321 pgs: 321 active+clean; 243 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 359 KiB/s rd, 2.1 MiB/s wr, 115 op/s
Jan 20 15:05:49 compute-0 podman[345547]: 2026-01-20 15:05:49.457818962 +0000 UTC m=+0.049558135 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 20 15:05:49 compute-0 podman[345546]: 2026-01-20 15:05:49.518230329 +0000 UTC m=+0.110239940 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 20 15:05:49 compute-0 nova_compute[250018]: 2026-01-20 15:05:49.622 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:05:49 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:05:49 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:05:49 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:05:49.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:05:50 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:05:50 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:05:50 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:05:50.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:05:50 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2451: 321 pgs: 321 active+clean; 246 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 992 KiB/s rd, 2.1 MiB/s wr, 157 op/s
Jan 20 15:05:50 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:05:51 compute-0 ceph-mon[74360]: pgmap v2451: 321 pgs: 321 active+clean; 246 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 992 KiB/s rd, 2.1 MiB/s wr, 157 op/s
Jan 20 15:05:51 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:05:51 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:05:51 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:05:51.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:05:52 compute-0 nova_compute[250018]: 2026-01-20 15:05:52.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:05:52 compute-0 nova_compute[250018]: 2026-01-20 15:05:52.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:05:52 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:05:52 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:05:52 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:05:52.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:05:52 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2452: 321 pgs: 321 active+clean; 246 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 188 op/s
Jan 20 15:05:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:05:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:05:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:05:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:05:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:05:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:05:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Optimize plan auto_2026-01-20_15:05:52
Jan 20 15:05:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 15:05:52 compute-0 ceph-mgr[74653]: [balancer INFO root] do_upmap
Jan 20 15:05:52 compute-0 ceph-mgr[74653]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.data', 'default.rgw.control', 'cephfs.cephfs.meta', 'vms', 'volumes', '.rgw.root', 'backups', 'images', 'default.rgw.meta', 'default.rgw.log']
Jan 20 15:05:52 compute-0 ceph-mgr[74653]: [balancer INFO root] prepared 0/10 changes
Jan 20 15:05:52 compute-0 nova_compute[250018]: 2026-01-20 15:05:52.659 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:05:52 compute-0 NetworkManager[48960]: <info>  [1768921552.6631] manager: (patch-provnet-b62c391b-f7a3-4a38-a0df-72ac0383ca74-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/277)
Jan 20 15:05:52 compute-0 NetworkManager[48960]: <info>  [1768921552.6652] manager: (patch-br-int-to-provnet-b62c391b-f7a3-4a38-a0df-72ac0383ca74): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/278)
Jan 20 15:05:52 compute-0 nova_compute[250018]: 2026-01-20 15:05:52.892 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:05:52 compute-0 ovn_controller[148666]: 2026-01-20T15:05:52Z|00566|binding|INFO|Releasing lport 6133323e-bf50-4bbd-bc0b-9ecf135d8cd5 from this chassis (sb_readonly=0)
Jan 20 15:05:52 compute-0 nova_compute[250018]: 2026-01-20 15:05:52.917 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:05:53 compute-0 nova_compute[250018]: 2026-01-20 15:05:53.275 250022 DEBUG nova.compute.manager [req-691d1f01-c457-41e5-a035-d0672e4d184e req-4124cdc7-020c-4375-9faf-8f3bddeab44a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] Received event network-changed-70668adb-f9ad-41cb-8eac-2e0aba32bf22 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:05:53 compute-0 nova_compute[250018]: 2026-01-20 15:05:53.276 250022 DEBUG nova.compute.manager [req-691d1f01-c457-41e5-a035-d0672e4d184e req-4124cdc7-020c-4375-9faf-8f3bddeab44a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] Refreshing instance network info cache due to event network-changed-70668adb-f9ad-41cb-8eac-2e0aba32bf22. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 15:05:53 compute-0 nova_compute[250018]: 2026-01-20 15:05:53.276 250022 DEBUG oslo_concurrency.lockutils [req-691d1f01-c457-41e5-a035-d0672e4d184e req-4124cdc7-020c-4375-9faf-8f3bddeab44a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-5feeb9de-434b-4ec7-aa99-6da718514c6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:05:53 compute-0 nova_compute[250018]: 2026-01-20 15:05:53.276 250022 DEBUG oslo_concurrency.lockutils [req-691d1f01-c457-41e5-a035-d0672e4d184e req-4124cdc7-020c-4375-9faf-8f3bddeab44a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-5feeb9de-434b-4ec7-aa99-6da718514c6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:05:53 compute-0 nova_compute[250018]: 2026-01-20 15:05:53.276 250022 DEBUG nova.network.neutron [req-691d1f01-c457-41e5-a035-d0672e4d184e req-4124cdc7-020c-4375-9faf-8f3bddeab44a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] Refreshing network info cache for port 70668adb-f9ad-41cb-8eac-2e0aba32bf22 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 15:05:53 compute-0 ceph-mon[74360]: pgmap v2452: 321 pgs: 321 active+clean; 246 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 188 op/s
Jan 20 15:05:53 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/3737704240' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:05:53 compute-0 ovn_controller[148666]: 2026-01-20T15:05:53Z|00567|binding|INFO|Releasing lport 6133323e-bf50-4bbd-bc0b-9ecf135d8cd5 from this chassis (sb_readonly=0)
Jan 20 15:05:53 compute-0 nova_compute[250018]: 2026-01-20 15:05:53.533 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:05:53 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:05:53 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:05:53 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:05:53.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:05:54 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:05:54 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:05:54 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:05:54.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:05:54 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2453: 321 pgs: 321 active+clean; 246 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 177 op/s
Jan 20 15:05:54 compute-0 nova_compute[250018]: 2026-01-20 15:05:54.371 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:05:54 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/2589043131' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:05:54 compute-0 nova_compute[250018]: 2026-01-20 15:05:54.623 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:05:55 compute-0 nova_compute[250018]: 2026-01-20 15:05:55.007 250022 DEBUG nova.network.neutron [req-691d1f01-c457-41e5-a035-d0672e4d184e req-4124cdc7-020c-4375-9faf-8f3bddeab44a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] Updated VIF entry in instance network info cache for port 70668adb-f9ad-41cb-8eac-2e0aba32bf22. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 15:05:55 compute-0 nova_compute[250018]: 2026-01-20 15:05:55.008 250022 DEBUG nova.network.neutron [req-691d1f01-c457-41e5-a035-d0672e4d184e req-4124cdc7-020c-4375-9faf-8f3bddeab44a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] Updating instance_info_cache with network_info: [{"id": "70668adb-f9ad-41cb-8eac-2e0aba32bf22", "address": "fa:16:3e:6a:c0:d3", "network": {"id": "0f434e83-45c8-454d-820b-af39b696a1d5", "bridge": "br-int", "label": "tempest-TestShelveInstance-1862824485-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0fc924d2df984301897e81920c5e192f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap70668adb-f9", "ovs_interfaceid": "70668adb-f9ad-41cb-8eac-2e0aba32bf22", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:05:55 compute-0 nova_compute[250018]: 2026-01-20 15:05:55.027 250022 DEBUG oslo_concurrency.lockutils [req-691d1f01-c457-41e5-a035-d0672e4d184e req-4124cdc7-020c-4375-9faf-8f3bddeab44a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-5feeb9de-434b-4ec7-aa99-6da718514c6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:05:55 compute-0 nova_compute[250018]: 2026-01-20 15:05:55.044 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:05:55 compute-0 nova_compute[250018]: 2026-01-20 15:05:55.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:05:55 compute-0 nova_compute[250018]: 2026-01-20 15:05:55.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:05:55 compute-0 nova_compute[250018]: 2026-01-20 15:05:55.082 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:05:55 compute-0 nova_compute[250018]: 2026-01-20 15:05:55.083 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:05:55 compute-0 nova_compute[250018]: 2026-01-20 15:05:55.083 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:05:55 compute-0 nova_compute[250018]: 2026-01-20 15:05:55.083 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 15:05:55 compute-0 nova_compute[250018]: 2026-01-20 15:05:55.084 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:05:55 compute-0 ceph-mon[74360]: pgmap v2453: 321 pgs: 321 active+clean; 246 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 177 op/s
Jan 20 15:05:55 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/2670266435' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:05:55 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:05:55 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2881052373' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:05:55 compute-0 nova_compute[250018]: 2026-01-20 15:05:55.580 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:05:55 compute-0 nova_compute[250018]: 2026-01-20 15:05:55.655 250022 DEBUG nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] skipping disk for instance-000000a2 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 15:05:55 compute-0 nova_compute[250018]: 2026-01-20 15:05:55.655 250022 DEBUG nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] skipping disk for instance-000000a2 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 15:05:55 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:05:55 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:05:55 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:05:55.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:05:55 compute-0 nova_compute[250018]: 2026-01-20 15:05:55.825 250022 WARNING nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:05:55 compute-0 nova_compute[250018]: 2026-01-20 15:05:55.826 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4106MB free_disk=20.942607879638672GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 15:05:55 compute-0 nova_compute[250018]: 2026-01-20 15:05:55.826 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:05:55 compute-0 nova_compute[250018]: 2026-01-20 15:05:55.827 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:05:55 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:05:56 compute-0 nova_compute[250018]: 2026-01-20 15:05:56.038 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Instance 5feeb9de-434b-4ec7-aa99-6da718514c6f actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 15:05:56 compute-0 nova_compute[250018]: 2026-01-20 15:05:56.039 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 15:05:56 compute-0 nova_compute[250018]: 2026-01-20 15:05:56.039 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 15:05:56 compute-0 nova_compute[250018]: 2026-01-20 15:05:56.076 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:05:56 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:05:56 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:05:56 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:05:56.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:05:56 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2454: 321 pgs: 321 active+clean; 264 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.7 MiB/s wr, 177 op/s
Jan 20 15:05:56 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:05:56 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2043803133' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:05:56 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/2881052373' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:05:56 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/3251268508' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:05:56 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/2070363681' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:05:56 compute-0 nova_compute[250018]: 2026-01-20 15:05:56.536 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:05:56 compute-0 nova_compute[250018]: 2026-01-20 15:05:56.543 250022 DEBUG nova.compute.provider_tree [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:05:56 compute-0 nova_compute[250018]: 2026-01-20 15:05:56.568 250022 DEBUG nova.scheduler.client.report [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:05:56 compute-0 nova_compute[250018]: 2026-01-20 15:05:56.611 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 15:05:56 compute-0 nova_compute[250018]: 2026-01-20 15:05:56.612 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.785s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:05:57 compute-0 sudo[345641]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:05:57 compute-0 sudo[345641]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:05:57 compute-0 sudo[345641]: pam_unix(sudo:session): session closed for user root
Jan 20 15:05:57 compute-0 sudo[345666]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:05:57 compute-0 sudo[345666]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:05:57 compute-0 sudo[345666]: pam_unix(sudo:session): session closed for user root
Jan 20 15:05:57 compute-0 ceph-mon[74360]: pgmap v2454: 321 pgs: 321 active+clean; 264 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.7 MiB/s wr, 177 op/s
Jan 20 15:05:57 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/2043803133' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:05:57 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/3670247231' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:05:57 compute-0 nova_compute[250018]: 2026-01-20 15:05:57.613 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:05:57 compute-0 nova_compute[250018]: 2026-01-20 15:05:57.614 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 15:05:57 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:05:57 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:05:57 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:05:57.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:05:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 15:05:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 15:05:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 15:05:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 15:05:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 15:05:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 15:05:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 15:05:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 15:05:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 15:05:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 15:05:58 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:05:58 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:05:58 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:05:58.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:05:58 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2455: 321 pgs: 321 active+clean; 264 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 814 KiB/s wr, 108 op/s
Jan 20 15:05:58 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e359 do_prune osdmap full prune enabled
Jan 20 15:05:59 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e360 e360: 3 total, 3 up, 3 in
Jan 20 15:05:59 compute-0 nova_compute[250018]: 2026-01-20 15:05:59.373 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:05:59 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e360: 3 total, 3 up, 3 in
Jan 20 15:05:59 compute-0 ceph-mon[74360]: pgmap v2455: 321 pgs: 321 active+clean; 264 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 814 KiB/s wr, 108 op/s
Jan 20 15:05:59 compute-0 nova_compute[250018]: 2026-01-20 15:05:59.624 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:05:59 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:05:59 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:05:59 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:05:59.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:06:00 compute-0 ovn_controller[148666]: 2026-01-20T15:06:00Z|00066|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:6a:c0:d3 10.100.0.3
Jan 20 15:06:00 compute-0 ovn_controller[148666]: 2026-01-20T15:06:00Z|00067|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:6a:c0:d3 10.100.0.3
Jan 20 15:06:00 compute-0 nova_compute[250018]: 2026-01-20 15:06:00.052 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:06:00 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:06:00 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:06:00 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:06:00.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:06:00 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2457: 321 pgs: 321 active+clean; 306 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 3.5 MiB/s wr, 129 op/s
Jan 20 15:06:00 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e360 do_prune osdmap full prune enabled
Jan 20 15:06:00 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/3048424037' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:06:00 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/2837693437' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:06:00 compute-0 ceph-mon[74360]: osdmap e360: 3 total, 3 up, 3 in
Jan 20 15:06:00 compute-0 ceph-mon[74360]: pgmap v2457: 321 pgs: 321 active+clean; 306 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 3.5 MiB/s wr, 129 op/s
Jan 20 15:06:00 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e361 e361: 3 total, 3 up, 3 in
Jan 20 15:06:00 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e361: 3 total, 3 up, 3 in
Jan 20 15:06:00 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e361 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:06:01 compute-0 nova_compute[250018]: 2026-01-20 15:06:01.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:06:01 compute-0 nova_compute[250018]: 2026-01-20 15:06:01.051 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 15:06:01 compute-0 nova_compute[250018]: 2026-01-20 15:06:01.051 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 15:06:01 compute-0 nova_compute[250018]: 2026-01-20 15:06:01.345 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "refresh_cache-5feeb9de-434b-4ec7-aa99-6da718514c6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:06:01 compute-0 nova_compute[250018]: 2026-01-20 15:06:01.345 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquired lock "refresh_cache-5feeb9de-434b-4ec7-aa99-6da718514c6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:06:01 compute-0 nova_compute[250018]: 2026-01-20 15:06:01.345 250022 DEBUG nova.network.neutron [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 20 15:06:01 compute-0 nova_compute[250018]: 2026-01-20 15:06:01.345 250022 DEBUG nova.objects.instance [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 5feeb9de-434b-4ec7-aa99-6da718514c6f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:06:01 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e361 do_prune osdmap full prune enabled
Jan 20 15:06:01 compute-0 ceph-mon[74360]: osdmap e361: 3 total, 3 up, 3 in
Jan 20 15:06:01 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e362 e362: 3 total, 3 up, 3 in
Jan 20 15:06:01 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e362: 3 total, 3 up, 3 in
Jan 20 15:06:01 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:06:01 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:06:01 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:06:01.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:06:02 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:06:02 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:06:02 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:06:02.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:06:02 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2460: 321 pgs: 321 active+clean; 311 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 480 KiB/s rd, 5.4 MiB/s wr, 134 op/s
Jan 20 15:06:02 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e362 do_prune osdmap full prune enabled
Jan 20 15:06:02 compute-0 ceph-mon[74360]: osdmap e362: 3 total, 3 up, 3 in
Jan 20 15:06:02 compute-0 ceph-mon[74360]: pgmap v2460: 321 pgs: 321 active+clean; 311 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 480 KiB/s rd, 5.4 MiB/s wr, 134 op/s
Jan 20 15:06:02 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e363 e363: 3 total, 3 up, 3 in
Jan 20 15:06:02 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e363: 3 total, 3 up, 3 in
Jan 20 15:06:02 compute-0 nova_compute[250018]: 2026-01-20 15:06:02.688 250022 DEBUG nova.network.neutron [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] Updating instance_info_cache with network_info: [{"id": "70668adb-f9ad-41cb-8eac-2e0aba32bf22", "address": "fa:16:3e:6a:c0:d3", "network": {"id": "0f434e83-45c8-454d-820b-af39b696a1d5", "bridge": "br-int", "label": "tempest-TestShelveInstance-1862824485-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0fc924d2df984301897e81920c5e192f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap70668adb-f9", "ovs_interfaceid": "70668adb-f9ad-41cb-8eac-2e0aba32bf22", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:06:02 compute-0 nova_compute[250018]: 2026-01-20 15:06:02.703 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Releasing lock "refresh_cache-5feeb9de-434b-4ec7-aa99-6da718514c6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:06:02 compute-0 nova_compute[250018]: 2026-01-20 15:06:02.704 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 20 15:06:03 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 20 15:06:03 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2097377447' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:06:03 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 20 15:06:03 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2097377447' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:06:03 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e363 do_prune osdmap full prune enabled
Jan 20 15:06:03 compute-0 ceph-mon[74360]: osdmap e363: 3 total, 3 up, 3 in
Jan 20 15:06:03 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/2097377447' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:06:03 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/2097377447' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:06:03 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e364 e364: 3 total, 3 up, 3 in
Jan 20 15:06:03 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e364: 3 total, 3 up, 3 in
Jan 20 15:06:03 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:06:03 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:06:03 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:06:03.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:06:04 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:06:04 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:06:04 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:06:04.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:06:04 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2463: 321 pgs: 2 active+clean+snaptrim, 3 active+clean+snaptrim_wait, 316 active+clean; 349 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.8 MiB/s rd, 6.4 MiB/s wr, 279 op/s
Jan 20 15:06:04 compute-0 nova_compute[250018]: 2026-01-20 15:06:04.376 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:06:04 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e364 do_prune osdmap full prune enabled
Jan 20 15:06:04 compute-0 nova_compute[250018]: 2026-01-20 15:06:04.626 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:06:04 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e365 e365: 3 total, 3 up, 3 in
Jan 20 15:06:04 compute-0 ceph-mon[74360]: osdmap e364: 3 total, 3 up, 3 in
Jan 20 15:06:04 compute-0 ceph-mon[74360]: pgmap v2463: 321 pgs: 2 active+clean+snaptrim, 3 active+clean+snaptrim_wait, 316 active+clean; 349 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.8 MiB/s rd, 6.4 MiB/s wr, 279 op/s
Jan 20 15:06:04 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e365: 3 total, 3 up, 3 in
Jan 20 15:06:05 compute-0 ceph-mon[74360]: osdmap e365: 3 total, 3 up, 3 in
Jan 20 15:06:05 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:06:05 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:06:05 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:06:05.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:06:05 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e365 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:06:06 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:06:06 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:06:06 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:06:06.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:06:06 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2465: 321 pgs: 2 active+clean+snaptrim, 3 active+clean+snaptrim_wait, 316 active+clean; 407 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 15 MiB/s rd, 11 MiB/s wr, 508 op/s
Jan 20 15:06:06 compute-0 ceph-mon[74360]: pgmap v2465: 321 pgs: 2 active+clean+snaptrim, 3 active+clean+snaptrim_wait, 316 active+clean; 407 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 15 MiB/s rd, 11 MiB/s wr, 508 op/s
Jan 20 15:06:07 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:06:07 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:06:07 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:06:07.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:06:08 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:06:08 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:06:08 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:06:08.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:06:08 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2466: 321 pgs: 2 active+clean+snaptrim, 3 active+clean+snaptrim_wait, 316 active+clean; 407 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 12 MiB/s rd, 9.0 MiB/s wr, 400 op/s
Jan 20 15:06:09 compute-0 nova_compute[250018]: 2026-01-20 15:06:09.119 250022 DEBUG oslo_concurrency.lockutils [None req-66024156-4244-4fe1-b7c1-7ea83e115ab2 b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] Acquiring lock "5feeb9de-434b-4ec7-aa99-6da718514c6f" by "nova.compute.manager.ComputeManager.shelve_offload_instance.<locals>.do_shelve_offload_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:06:09 compute-0 nova_compute[250018]: 2026-01-20 15:06:09.119 250022 DEBUG oslo_concurrency.lockutils [None req-66024156-4244-4fe1-b7c1-7ea83e115ab2 b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] Lock "5feeb9de-434b-4ec7-aa99-6da718514c6f" acquired by "nova.compute.manager.ComputeManager.shelve_offload_instance.<locals>.do_shelve_offload_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:06:09 compute-0 nova_compute[250018]: 2026-01-20 15:06:09.119 250022 INFO nova.compute.manager [None req-66024156-4244-4fe1-b7c1-7ea83e115ab2 b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] Shelve offloading
Jan 20 15:06:09 compute-0 nova_compute[250018]: 2026-01-20 15:06:09.146 250022 DEBUG nova.virt.libvirt.driver [None req-66024156-4244-4fe1-b7c1-7ea83e115ab2 b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Jan 20 15:06:09 compute-0 nova_compute[250018]: 2026-01-20 15:06:09.378 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:06:09 compute-0 ceph-mon[74360]: pgmap v2466: 321 pgs: 2 active+clean+snaptrim, 3 active+clean+snaptrim_wait, 316 active+clean; 407 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 12 MiB/s rd, 9.0 MiB/s wr, 400 op/s
Jan 20 15:06:09 compute-0 nova_compute[250018]: 2026-01-20 15:06:09.629 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:06:09 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:06:09 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:06:09 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:06:09.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:06:10 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:06:10 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:06:10 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:06:10.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:06:10 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2467: 321 pgs: 321 active+clean; 407 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 9.3 MiB/s rd, 7.1 MiB/s wr, 339 op/s
Jan 20 15:06:10 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 20 15:06:10 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1653416062' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:06:10 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 20 15:06:10 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1653416062' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:06:10 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e365 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:06:10 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e365 do_prune osdmap full prune enabled
Jan 20 15:06:10 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e366 e366: 3 total, 3 up, 3 in
Jan 20 15:06:10 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e366: 3 total, 3 up, 3 in
Jan 20 15:06:11 compute-0 kernel: tap70668adb-f9 (unregistering): left promiscuous mode
Jan 20 15:06:11 compute-0 NetworkManager[48960]: <info>  [1768921571.3762] device (tap70668adb-f9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 15:06:11 compute-0 ovn_controller[148666]: 2026-01-20T15:06:11Z|00568|binding|INFO|Releasing lport 70668adb-f9ad-41cb-8eac-2e0aba32bf22 from this chassis (sb_readonly=0)
Jan 20 15:06:11 compute-0 nova_compute[250018]: 2026-01-20 15:06:11.431 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:06:11 compute-0 ovn_controller[148666]: 2026-01-20T15:06:11Z|00569|binding|INFO|Setting lport 70668adb-f9ad-41cb-8eac-2e0aba32bf22 down in Southbound
Jan 20 15:06:11 compute-0 ovn_controller[148666]: 2026-01-20T15:06:11Z|00570|binding|INFO|Removing iface tap70668adb-f9 ovn-installed in OVS
Jan 20 15:06:11 compute-0 ceph-mon[74360]: pgmap v2467: 321 pgs: 321 active+clean; 407 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 9.3 MiB/s rd, 7.1 MiB/s wr, 339 op/s
Jan 20 15:06:11 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/1653416062' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:06:11 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/1653416062' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:06:11 compute-0 ceph-mon[74360]: osdmap e366: 3 total, 3 up, 3 in
Jan 20 15:06:11 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:06:11.438 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6a:c0:d3 10.100.0.3'], port_security=['fa:16:3e:6a:c0:d3 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '5feeb9de-434b-4ec7-aa99-6da718514c6f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0f434e83-45c8-454d-820b-af39b696a1d5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0fc924d2df984301897e81920c5e192f', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd8d958e0-892e-4275-9633-96783d5a96b0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.233'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cf0669b1-9b02-4bfa-859e-dac906b93fdc, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=70668adb-f9ad-41cb-8eac-2e0aba32bf22) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:06:11 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:06:11.440 160071 INFO neutron.agent.ovn.metadata.agent [-] Port 70668adb-f9ad-41cb-8eac-2e0aba32bf22 in datapath 0f434e83-45c8-454d-820b-af39b696a1d5 unbound from our chassis
Jan 20 15:06:11 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:06:11.441 160071 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0f434e83-45c8-454d-820b-af39b696a1d5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 15:06:11 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:06:11.442 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[b9cf5fe8-16a1-4390-8bd7-afd75b66a6f9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:06:11 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:06:11.442 160071 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-0f434e83-45c8-454d-820b-af39b696a1d5 namespace which is not needed anymore
Jan 20 15:06:11 compute-0 nova_compute[250018]: 2026-01-20 15:06:11.453 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:06:11 compute-0 systemd[1]: machine-qemu\x2d72\x2dinstance\x2d000000a2.scope: Deactivated successfully.
Jan 20 15:06:11 compute-0 systemd[1]: machine-qemu\x2d72\x2dinstance\x2d000000a2.scope: Consumed 14.785s CPU time.
Jan 20 15:06:11 compute-0 systemd-machined[216401]: Machine qemu-72-instance-000000a2 terminated.
Jan 20 15:06:11 compute-0 neutron-haproxy-ovnmeta-0f434e83-45c8-454d-820b-af39b696a1d5[345524]: [NOTICE]   (345528) : haproxy version is 2.8.14-c23fe91
Jan 20 15:06:11 compute-0 neutron-haproxy-ovnmeta-0f434e83-45c8-454d-820b-af39b696a1d5[345524]: [NOTICE]   (345528) : path to executable is /usr/sbin/haproxy
Jan 20 15:06:11 compute-0 neutron-haproxy-ovnmeta-0f434e83-45c8-454d-820b-af39b696a1d5[345524]: [WARNING]  (345528) : Exiting Master process...
Jan 20 15:06:11 compute-0 neutron-haproxy-ovnmeta-0f434e83-45c8-454d-820b-af39b696a1d5[345524]: [ALERT]    (345528) : Current worker (345530) exited with code 143 (Terminated)
Jan 20 15:06:11 compute-0 neutron-haproxy-ovnmeta-0f434e83-45c8-454d-820b-af39b696a1d5[345524]: [WARNING]  (345528) : All workers exited. Exiting... (0)
Jan 20 15:06:11 compute-0 systemd[1]: libpod-0bac99c9b8d9db748da367eba0aa7fb2a1f27ef37abcf32748ad93310f041d33.scope: Deactivated successfully.
Jan 20 15:06:11 compute-0 podman[345722]: 2026-01-20 15:06:11.621854771 +0000 UTC m=+0.045601670 container died 0bac99c9b8d9db748da367eba0aa7fb2a1f27ef37abcf32748ad93310f041d33 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0f434e83-45c8-454d-820b-af39b696a1d5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 20 15:06:11 compute-0 kernel: tap70668adb-f9: entered promiscuous mode
Jan 20 15:06:11 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0bac99c9b8d9db748da367eba0aa7fb2a1f27ef37abcf32748ad93310f041d33-userdata-shm.mount: Deactivated successfully.
Jan 20 15:06:11 compute-0 NetworkManager[48960]: <info>  [1768921571.6502] manager: (tap70668adb-f9): new Tun device (/org/freedesktop/NetworkManager/Devices/279)
Jan 20 15:06:11 compute-0 kernel: tap70668adb-f9 (unregistering): left promiscuous mode
Jan 20 15:06:11 compute-0 ovn_controller[148666]: 2026-01-20T15:06:11Z|00571|binding|INFO|Claiming lport 70668adb-f9ad-41cb-8eac-2e0aba32bf22 for this chassis.
Jan 20 15:06:11 compute-0 ovn_controller[148666]: 2026-01-20T15:06:11Z|00572|binding|INFO|70668adb-f9ad-41cb-8eac-2e0aba32bf22: Claiming fa:16:3e:6a:c0:d3 10.100.0.3
Jan 20 15:06:11 compute-0 nova_compute[250018]: 2026-01-20 15:06:11.655 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:06:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-6b4c8a4f7e3c7e6b2dedc37fd5a9c8c068e7d18480670b8078c0ecf34a17dc97-merged.mount: Deactivated successfully.
Jan 20 15:06:11 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:06:11.662 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6a:c0:d3 10.100.0.3'], port_security=['fa:16:3e:6a:c0:d3 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '5feeb9de-434b-4ec7-aa99-6da718514c6f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0f434e83-45c8-454d-820b-af39b696a1d5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0fc924d2df984301897e81920c5e192f', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd8d958e0-892e-4275-9633-96783d5a96b0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.233'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cf0669b1-9b02-4bfa-859e-dac906b93fdc, chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=70668adb-f9ad-41cb-8eac-2e0aba32bf22) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:06:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 15:06:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:06:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 20 15:06:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:06:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0031765089917178487 of space, bias 1.0, pg target 0.9529526975153546 quantized to 32 (current 32)
Jan 20 15:06:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:06:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.004423702714963707 of space, bias 1.0, pg target 1.3271108144891122 quantized to 32 (current 32)
Jan 20 15:06:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:06:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 15:06:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:06:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004070191815559278 of space, bias 1.0, pg target 1.2169873528522241 quantized to 32 (current 32)
Jan 20 15:06:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:06:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017332030522985297 quantized to 16 (current 32)
Jan 20 15:06:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:06:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 15:06:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:06:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.0002166503815373162 quantized to 32 (current 32)
Jan 20 15:06:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:06:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018415282430671877 quantized to 32 (current 32)
Jan 20 15:06:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:06:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 15:06:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:06:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004333007630746324 quantized to 32 (current 32)
Jan 20 15:06:11 compute-0 podman[345722]: 2026-01-20 15:06:11.676048631 +0000 UTC m=+0.099795530 container cleanup 0bac99c9b8d9db748da367eba0aa7fb2a1f27ef37abcf32748ad93310f041d33 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0f434e83-45c8-454d-820b-af39b696a1d5, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 15:06:11 compute-0 ovn_controller[148666]: 2026-01-20T15:06:11Z|00573|binding|INFO|Releasing lport 70668adb-f9ad-41cb-8eac-2e0aba32bf22 from this chassis (sb_readonly=0)
Jan 20 15:06:11 compute-0 nova_compute[250018]: 2026-01-20 15:06:11.680 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:06:11 compute-0 systemd[1]: libpod-conmon-0bac99c9b8d9db748da367eba0aa7fb2a1f27ef37abcf32748ad93310f041d33.scope: Deactivated successfully.
Jan 20 15:06:11 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:06:11 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:06:11 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:06:11.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:06:11 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:06:11.695 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6a:c0:d3 10.100.0.3'], port_security=['fa:16:3e:6a:c0:d3 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '5feeb9de-434b-4ec7-aa99-6da718514c6f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0f434e83-45c8-454d-820b-af39b696a1d5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0fc924d2df984301897e81920c5e192f', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd8d958e0-892e-4275-9633-96783d5a96b0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.233'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cf0669b1-9b02-4bfa-859e-dac906b93fdc, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=70668adb-f9ad-41cb-8eac-2e0aba32bf22) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:06:11 compute-0 podman[345760]: 2026-01-20 15:06:11.749314855 +0000 UTC m=+0.047614294 container remove 0bac99c9b8d9db748da367eba0aa7fb2a1f27ef37abcf32748ad93310f041d33 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0f434e83-45c8-454d-820b-af39b696a1d5, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team)
Jan 20 15:06:11 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:06:11.754 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[4682eb4b-1ba7-48ac-8eeb-96ee673d8aa9]: (4, ('Tue Jan 20 03:06:11 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-0f434e83-45c8-454d-820b-af39b696a1d5 (0bac99c9b8d9db748da367eba0aa7fb2a1f27ef37abcf32748ad93310f041d33)\n0bac99c9b8d9db748da367eba0aa7fb2a1f27ef37abcf32748ad93310f041d33\nTue Jan 20 03:06:11 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-0f434e83-45c8-454d-820b-af39b696a1d5 (0bac99c9b8d9db748da367eba0aa7fb2a1f27ef37abcf32748ad93310f041d33)\n0bac99c9b8d9db748da367eba0aa7fb2a1f27ef37abcf32748ad93310f041d33\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:06:11 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:06:11.757 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[dc49dae1-eeaf-4179-88ca-9e1f55092f75]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:06:11 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:06:11.758 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0f434e83-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:06:11 compute-0 nova_compute[250018]: 2026-01-20 15:06:11.760 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:06:11 compute-0 kernel: tap0f434e83-40: left promiscuous mode
Jan 20 15:06:11 compute-0 nova_compute[250018]: 2026-01-20 15:06:11.776 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:06:11 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:06:11.778 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[d416cf22-855d-44ee-88a0-16dd13a16318]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:06:11 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:06:11.791 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[d5331332-ebc3-4bad-bb55-043fced903ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:06:11 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:06:11.793 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[0e4d066c-6962-459c-9877-19cb524d2f4b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:06:11 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:06:11.807 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[9f3a9ce4-836b-4880-9d63-019038c8bef3]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 748674, 'reachable_time': 38119, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 345780, 'error': None, 'target': 'ovnmeta-0f434e83-45c8-454d-820b-af39b696a1d5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:06:11 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:06:11.811 160517 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-0f434e83-45c8-454d-820b-af39b696a1d5 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 15:06:11 compute-0 systemd[1]: run-netns-ovnmeta\x2d0f434e83\x2d45c8\x2d454d\x2d820b\x2daf39b696a1d5.mount: Deactivated successfully.
Jan 20 15:06:11 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:06:11.811 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[f307919c-de21-4363-be07-2b8c44198f16]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:06:11 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:06:11.812 160071 INFO neutron.agent.ovn.metadata.agent [-] Port 70668adb-f9ad-41cb-8eac-2e0aba32bf22 in datapath 0f434e83-45c8-454d-820b-af39b696a1d5 unbound from our chassis
Jan 20 15:06:11 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:06:11.813 160071 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0f434e83-45c8-454d-820b-af39b696a1d5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 15:06:11 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:06:11.814 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[37b9cb85-5f0a-41df-8f22-55d88cfe3a72]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:06:11 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:06:11.815 160071 INFO neutron.agent.ovn.metadata.agent [-] Port 70668adb-f9ad-41cb-8eac-2e0aba32bf22 in datapath 0f434e83-45c8-454d-820b-af39b696a1d5 unbound from our chassis
Jan 20 15:06:11 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:06:11.816 160071 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0f434e83-45c8-454d-820b-af39b696a1d5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 15:06:11 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:06:11.816 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[b52386b8-cd24-4a7f-995a-fdcd76f6a10f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:06:12 compute-0 nova_compute[250018]: 2026-01-20 15:06:12.161 250022 INFO nova.virt.libvirt.driver [None req-66024156-4244-4fe1-b7c1-7ea83e115ab2 b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] Instance shutdown successfully after 3 seconds.
Jan 20 15:06:12 compute-0 nova_compute[250018]: 2026-01-20 15:06:12.167 250022 INFO nova.virt.libvirt.driver [-] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] Instance destroyed successfully.
Jan 20 15:06:12 compute-0 nova_compute[250018]: 2026-01-20 15:06:12.167 250022 DEBUG nova.objects.instance [None req-66024156-4244-4fe1-b7c1-7ea83e115ab2 b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] Lazy-loading 'numa_topology' on Instance uuid 5feeb9de-434b-4ec7-aa99-6da718514c6f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:06:12 compute-0 nova_compute[250018]: 2026-01-20 15:06:12.186 250022 DEBUG nova.compute.manager [None req-66024156-4244-4fe1-b7c1-7ea83e115ab2 b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:06:12 compute-0 nova_compute[250018]: 2026-01-20 15:06:12.190 250022 DEBUG nova.compute.manager [req-1628d247-3cd6-423a-a238-485f1ffb8fde req-e9cedb43-72ab-458a-8e3a-81caa5dda4a7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] Received event network-vif-unplugged-70668adb-f9ad-41cb-8eac-2e0aba32bf22 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:06:12 compute-0 nova_compute[250018]: 2026-01-20 15:06:12.190 250022 DEBUG oslo_concurrency.lockutils [req-1628d247-3cd6-423a-a238-485f1ffb8fde req-e9cedb43-72ab-458a-8e3a-81caa5dda4a7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "5feeb9de-434b-4ec7-aa99-6da718514c6f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:06:12 compute-0 nova_compute[250018]: 2026-01-20 15:06:12.190 250022 DEBUG oslo_concurrency.lockutils [req-1628d247-3cd6-423a-a238-485f1ffb8fde req-e9cedb43-72ab-458a-8e3a-81caa5dda4a7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "5feeb9de-434b-4ec7-aa99-6da718514c6f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:06:12 compute-0 nova_compute[250018]: 2026-01-20 15:06:12.190 250022 DEBUG oslo_concurrency.lockutils [req-1628d247-3cd6-423a-a238-485f1ffb8fde req-e9cedb43-72ab-458a-8e3a-81caa5dda4a7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "5feeb9de-434b-4ec7-aa99-6da718514c6f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:06:12 compute-0 nova_compute[250018]: 2026-01-20 15:06:12.191 250022 DEBUG nova.compute.manager [req-1628d247-3cd6-423a-a238-485f1ffb8fde req-e9cedb43-72ab-458a-8e3a-81caa5dda4a7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] No waiting events found dispatching network-vif-unplugged-70668adb-f9ad-41cb-8eac-2e0aba32bf22 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:06:12 compute-0 nova_compute[250018]: 2026-01-20 15:06:12.191 250022 WARNING nova.compute.manager [req-1628d247-3cd6-423a-a238-485f1ffb8fde req-e9cedb43-72ab-458a-8e3a-81caa5dda4a7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] Received unexpected event network-vif-unplugged-70668adb-f9ad-41cb-8eac-2e0aba32bf22 for instance with vm_state active and task_state shelving.
Jan 20 15:06:12 compute-0 nova_compute[250018]: 2026-01-20 15:06:12.192 250022 DEBUG oslo_concurrency.lockutils [None req-66024156-4244-4fe1-b7c1-7ea83e115ab2 b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] Acquiring lock "refresh_cache-5feeb9de-434b-4ec7-aa99-6da718514c6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:06:12 compute-0 nova_compute[250018]: 2026-01-20 15:06:12.192 250022 DEBUG oslo_concurrency.lockutils [None req-66024156-4244-4fe1-b7c1-7ea83e115ab2 b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] Acquired lock "refresh_cache-5feeb9de-434b-4ec7-aa99-6da718514c6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:06:12 compute-0 nova_compute[250018]: 2026-01-20 15:06:12.192 250022 DEBUG nova.network.neutron [None req-66024156-4244-4fe1-b7c1-7ea83e115ab2 b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 15:06:12 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:06:12 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:06:12 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:06:12.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:06:12 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2469: 321 pgs: 321 active+clean; 407 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 6.7 MiB/s rd, 4.6 MiB/s wr, 258 op/s
Jan 20 15:06:12 compute-0 nova_compute[250018]: 2026-01-20 15:06:12.403 250022 DEBUG oslo_concurrency.lockutils [None req-e94c83d1-e40c-496d-b821-cb8c8bf00577 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Acquiring lock "0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:06:12 compute-0 nova_compute[250018]: 2026-01-20 15:06:12.404 250022 DEBUG oslo_concurrency.lockutils [None req-e94c83d1-e40c-496d-b821-cb8c8bf00577 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Lock "0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:06:12 compute-0 nova_compute[250018]: 2026-01-20 15:06:12.421 250022 DEBUG nova.compute.manager [None req-e94c83d1-e40c-496d-b821-cb8c8bf00577 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] [instance: 0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 15:06:12 compute-0 nova_compute[250018]: 2026-01-20 15:06:12.488 250022 DEBUG oslo_concurrency.lockutils [None req-e94c83d1-e40c-496d-b821-cb8c8bf00577 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:06:12 compute-0 nova_compute[250018]: 2026-01-20 15:06:12.489 250022 DEBUG oslo_concurrency.lockutils [None req-e94c83d1-e40c-496d-b821-cb8c8bf00577 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:06:12 compute-0 nova_compute[250018]: 2026-01-20 15:06:12.499 250022 DEBUG nova.virt.hardware [None req-e94c83d1-e40c-496d-b821-cb8c8bf00577 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 15:06:12 compute-0 nova_compute[250018]: 2026-01-20 15:06:12.499 250022 INFO nova.compute.claims [None req-e94c83d1-e40c-496d-b821-cb8c8bf00577 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] [instance: 0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04] Claim successful on node compute-0.ctlplane.example.com
Jan 20 15:06:12 compute-0 nova_compute[250018]: 2026-01-20 15:06:12.647 250022 DEBUG oslo_concurrency.processutils [None req-e94c83d1-e40c-496d-b821-cb8c8bf00577 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:06:13 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:06:13 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/865544307' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:06:13 compute-0 nova_compute[250018]: 2026-01-20 15:06:13.151 250022 DEBUG oslo_concurrency.processutils [None req-e94c83d1-e40c-496d-b821-cb8c8bf00577 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:06:13 compute-0 nova_compute[250018]: 2026-01-20 15:06:13.156 250022 DEBUG nova.compute.provider_tree [None req-e94c83d1-e40c-496d-b821-cb8c8bf00577 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:06:13 compute-0 nova_compute[250018]: 2026-01-20 15:06:13.181 250022 DEBUG nova.scheduler.client.report [None req-e94c83d1-e40c-496d-b821-cb8c8bf00577 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:06:13 compute-0 nova_compute[250018]: 2026-01-20 15:06:13.209 250022 DEBUG oslo_concurrency.lockutils [None req-e94c83d1-e40c-496d-b821-cb8c8bf00577 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.721s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:06:13 compute-0 nova_compute[250018]: 2026-01-20 15:06:13.210 250022 DEBUG nova.compute.manager [None req-e94c83d1-e40c-496d-b821-cb8c8bf00577 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] [instance: 0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 15:06:13 compute-0 nova_compute[250018]: 2026-01-20 15:06:13.255 250022 DEBUG nova.compute.manager [None req-e94c83d1-e40c-496d-b821-cb8c8bf00577 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] [instance: 0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 20 15:06:13 compute-0 nova_compute[250018]: 2026-01-20 15:06:13.255 250022 DEBUG nova.network.neutron [None req-e94c83d1-e40c-496d-b821-cb8c8bf00577 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] [instance: 0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 15:06:13 compute-0 nova_compute[250018]: 2026-01-20 15:06:13.273 250022 INFO nova.virt.libvirt.driver [None req-e94c83d1-e40c-496d-b821-cb8c8bf00577 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] [instance: 0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 15:06:13 compute-0 nova_compute[250018]: 2026-01-20 15:06:13.297 250022 DEBUG nova.compute.manager [None req-e94c83d1-e40c-496d-b821-cb8c8bf00577 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] [instance: 0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 15:06:13 compute-0 nova_compute[250018]: 2026-01-20 15:06:13.392 250022 DEBUG nova.compute.manager [None req-e94c83d1-e40c-496d-b821-cb8c8bf00577 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] [instance: 0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 15:06:13 compute-0 nova_compute[250018]: 2026-01-20 15:06:13.393 250022 DEBUG nova.virt.libvirt.driver [None req-e94c83d1-e40c-496d-b821-cb8c8bf00577 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] [instance: 0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 15:06:13 compute-0 nova_compute[250018]: 2026-01-20 15:06:13.394 250022 INFO nova.virt.libvirt.driver [None req-e94c83d1-e40c-496d-b821-cb8c8bf00577 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] [instance: 0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04] Creating image(s)
Jan 20 15:06:13 compute-0 nova_compute[250018]: 2026-01-20 15:06:13.421 250022 DEBUG nova.storage.rbd_utils [None req-e94c83d1-e40c-496d-b821-cb8c8bf00577 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] rbd image 0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:06:13 compute-0 nova_compute[250018]: 2026-01-20 15:06:13.447 250022 DEBUG nova.storage.rbd_utils [None req-e94c83d1-e40c-496d-b821-cb8c8bf00577 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] rbd image 0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:06:13 compute-0 nova_compute[250018]: 2026-01-20 15:06:13.471 250022 DEBUG nova.storage.rbd_utils [None req-e94c83d1-e40c-496d-b821-cb8c8bf00577 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] rbd image 0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:06:13 compute-0 nova_compute[250018]: 2026-01-20 15:06:13.475 250022 DEBUG oslo_concurrency.lockutils [None req-e94c83d1-e40c-496d-b821-cb8c8bf00577 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Acquiring lock "cf2f08aa35b70eef4ee0125a50e83dbb40a9cab4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:06:13 compute-0 nova_compute[250018]: 2026-01-20 15:06:13.476 250022 DEBUG oslo_concurrency.lockutils [None req-e94c83d1-e40c-496d-b821-cb8c8bf00577 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Lock "cf2f08aa35b70eef4ee0125a50e83dbb40a9cab4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:06:13 compute-0 ceph-mon[74360]: pgmap v2469: 321 pgs: 321 active+clean; 407 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 6.7 MiB/s rd, 4.6 MiB/s wr, 258 op/s
Jan 20 15:06:13 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/865544307' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:06:13 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 20 15:06:13 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3713738264' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:06:13 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 20 15:06:13 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3713738264' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:06:13 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:06:13 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:06:13 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:06:13.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:06:13 compute-0 nova_compute[250018]: 2026-01-20 15:06:13.743 250022 DEBUG nova.policy [None req-e94c83d1-e40c-496d-b821-cb8c8bf00577 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'bc554998e71a4322bdd27ac727a9044c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e142d118583b4f9ba3531bcf3838e256', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 20 15:06:13 compute-0 nova_compute[250018]: 2026-01-20 15:06:13.949 250022 DEBUG nova.virt.libvirt.imagebackend [None req-e94c83d1-e40c-496d-b821-cb8c8bf00577 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Image locations are: [{'url': 'rbd://e399cf45-e6b6-5393-99f1-75c601d3f188/images/8c970c65-2888-4da3-891e-c2b6eb3ea735/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://e399cf45-e6b6-5393-99f1-75c601d3f188/images/8c970c65-2888-4da3-891e-c2b6eb3ea735/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Jan 20 15:06:14 compute-0 nova_compute[250018]: 2026-01-20 15:06:14.009 250022 DEBUG nova.virt.libvirt.imagebackend [None req-e94c83d1-e40c-496d-b821-cb8c8bf00577 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Selected location: {'url': 'rbd://e399cf45-e6b6-5393-99f1-75c601d3f188/images/8c970c65-2888-4da3-891e-c2b6eb3ea735/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Jan 20 15:06:14 compute-0 nova_compute[250018]: 2026-01-20 15:06:14.010 250022 DEBUG nova.storage.rbd_utils [None req-e94c83d1-e40c-496d-b821-cb8c8bf00577 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] cloning images/8c970c65-2888-4da3-891e-c2b6eb3ea735@snap to None/0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Jan 20 15:06:14 compute-0 nova_compute[250018]: 2026-01-20 15:06:14.119 250022 DEBUG oslo_concurrency.lockutils [None req-e94c83d1-e40c-496d-b821-cb8c8bf00577 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Lock "cf2f08aa35b70eef4ee0125a50e83dbb40a9cab4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.643s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:06:14 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:06:14 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:06:14 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:06:14.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:06:14 compute-0 nova_compute[250018]: 2026-01-20 15:06:14.250 250022 DEBUG nova.objects.instance [None req-e94c83d1-e40c-496d-b821-cb8c8bf00577 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Lazy-loading 'migration_context' on Instance uuid 0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:06:14 compute-0 nova_compute[250018]: 2026-01-20 15:06:14.266 250022 DEBUG nova.virt.libvirt.driver [None req-e94c83d1-e40c-496d-b821-cb8c8bf00577 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] [instance: 0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 15:06:14 compute-0 nova_compute[250018]: 2026-01-20 15:06:14.266 250022 DEBUG nova.virt.libvirt.driver [None req-e94c83d1-e40c-496d-b821-cb8c8bf00577 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] [instance: 0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04] Ensure instance console log exists: /var/lib/nova/instances/0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 15:06:14 compute-0 nova_compute[250018]: 2026-01-20 15:06:14.267 250022 DEBUG oslo_concurrency.lockutils [None req-e94c83d1-e40c-496d-b821-cb8c8bf00577 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:06:14 compute-0 nova_compute[250018]: 2026-01-20 15:06:14.267 250022 DEBUG oslo_concurrency.lockutils [None req-e94c83d1-e40c-496d-b821-cb8c8bf00577 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:06:14 compute-0 nova_compute[250018]: 2026-01-20 15:06:14.267 250022 DEBUG oslo_concurrency.lockutils [None req-e94c83d1-e40c-496d-b821-cb8c8bf00577 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:06:14 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2470: 321 pgs: 321 active+clean; 408 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.6 MiB/s rd, 3.9 MiB/s wr, 226 op/s
Jan 20 15:06:14 compute-0 nova_compute[250018]: 2026-01-20 15:06:14.417 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:06:14 compute-0 nova_compute[250018]: 2026-01-20 15:06:14.470 250022 DEBUG nova.compute.manager [req-a71db9ad-ba1a-4617-87af-f05d40794c04 req-80b85322-bd20-4f3e-9d2f-8660390ff6e8 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] Received event network-vif-plugged-70668adb-f9ad-41cb-8eac-2e0aba32bf22 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:06:14 compute-0 nova_compute[250018]: 2026-01-20 15:06:14.470 250022 DEBUG oslo_concurrency.lockutils [req-a71db9ad-ba1a-4617-87af-f05d40794c04 req-80b85322-bd20-4f3e-9d2f-8660390ff6e8 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "5feeb9de-434b-4ec7-aa99-6da718514c6f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:06:14 compute-0 nova_compute[250018]: 2026-01-20 15:06:14.470 250022 DEBUG oslo_concurrency.lockutils [req-a71db9ad-ba1a-4617-87af-f05d40794c04 req-80b85322-bd20-4f3e-9d2f-8660390ff6e8 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "5feeb9de-434b-4ec7-aa99-6da718514c6f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:06:14 compute-0 nova_compute[250018]: 2026-01-20 15:06:14.471 250022 DEBUG oslo_concurrency.lockutils [req-a71db9ad-ba1a-4617-87af-f05d40794c04 req-80b85322-bd20-4f3e-9d2f-8660390ff6e8 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "5feeb9de-434b-4ec7-aa99-6da718514c6f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:06:14 compute-0 nova_compute[250018]: 2026-01-20 15:06:14.471 250022 DEBUG nova.compute.manager [req-a71db9ad-ba1a-4617-87af-f05d40794c04 req-80b85322-bd20-4f3e-9d2f-8660390ff6e8 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] No waiting events found dispatching network-vif-plugged-70668adb-f9ad-41cb-8eac-2e0aba32bf22 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:06:14 compute-0 nova_compute[250018]: 2026-01-20 15:06:14.471 250022 WARNING nova.compute.manager [req-a71db9ad-ba1a-4617-87af-f05d40794c04 req-80b85322-bd20-4f3e-9d2f-8660390ff6e8 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] Received unexpected event network-vif-plugged-70668adb-f9ad-41cb-8eac-2e0aba32bf22 for instance with vm_state active and task_state shelving.
Jan 20 15:06:14 compute-0 nova_compute[250018]: 2026-01-20 15:06:14.490 250022 DEBUG nova.network.neutron [None req-66024156-4244-4fe1-b7c1-7ea83e115ab2 b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] Updating instance_info_cache with network_info: [{"id": "70668adb-f9ad-41cb-8eac-2e0aba32bf22", "address": "fa:16:3e:6a:c0:d3", "network": {"id": "0f434e83-45c8-454d-820b-af39b696a1d5", "bridge": "br-int", "label": "tempest-TestShelveInstance-1862824485-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0fc924d2df984301897e81920c5e192f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap70668adb-f9", "ovs_interfaceid": "70668adb-f9ad-41cb-8eac-2e0aba32bf22", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:06:14 compute-0 nova_compute[250018]: 2026-01-20 15:06:14.509 250022 DEBUG oslo_concurrency.lockutils [None req-66024156-4244-4fe1-b7c1-7ea83e115ab2 b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] Releasing lock "refresh_cache-5feeb9de-434b-4ec7-aa99-6da718514c6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:06:14 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/3713738264' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:06:14 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/3713738264' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:06:14 compute-0 ceph-mon[74360]: pgmap v2470: 321 pgs: 321 active+clean; 408 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.6 MiB/s rd, 3.9 MiB/s wr, 226 op/s
Jan 20 15:06:14 compute-0 nova_compute[250018]: 2026-01-20 15:06:14.631 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:06:15 compute-0 nova_compute[250018]: 2026-01-20 15:06:15.664 250022 DEBUG nova.network.neutron [None req-e94c83d1-e40c-496d-b821-cb8c8bf00577 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] [instance: 0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04] Successfully created port: f58244ca-9b84-410e-a77e-f4bb0dc69691 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 20 15:06:15 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:06:15 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:06:15 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:06:15.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:06:15 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e366 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:06:16 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:06:16 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:06:16 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:06:16.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:06:16 compute-0 nova_compute[250018]: 2026-01-20 15:06:16.307 250022 INFO nova.virt.libvirt.driver [-] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] Instance destroyed successfully.
Jan 20 15:06:16 compute-0 nova_compute[250018]: 2026-01-20 15:06:16.308 250022 DEBUG nova.objects.instance [None req-66024156-4244-4fe1-b7c1-7ea83e115ab2 b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] Lazy-loading 'resources' on Instance uuid 5feeb9de-434b-4ec7-aa99-6da718514c6f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:06:16 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2471: 321 pgs: 321 active+clean; 430 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 385 KiB/s rd, 2.6 MiB/s wr, 141 op/s
Jan 20 15:06:16 compute-0 nova_compute[250018]: 2026-01-20 15:06:16.321 250022 DEBUG nova.virt.libvirt.vif [None req-66024156-4244-4fe1-b7c1-7ea83e115ab2 b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T15:05:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-670486896',display_name='tempest-TestShelveInstance-server-670486896',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-670486896',id=162,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGFkn7BLthBV1Y62q/iaiaFYVNSXov56cyC6gJDof3vS0dj6UwuVwvMnqOok2l8W+oqb55YucgjGf+63NOxxoSCxoRUO/Jcx5MarGHmdQPdT+6u18ixvV1ghiExv/Y0Nog==',key_name='tempest-TestShelveInstance-1862119958',keypairs=<?>,launch_index=0,launched_at=2026-01-20T15:05:47Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0fc924d2df984301897e81920c5e192f',ramdisk_id='',reservation_id='r-18qxw31n',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestShelveInstance-1425544575',owner_user_name='tempest-TestShelveInstance-1425544575-project-member'},tags=<?>,task_state='shelving',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T15:05:47Z,user_data=None,user_id='b02a8ef6cc3946ceb2c8846aae2eae68',uuid=5feeb9de-434b-4ec7-aa99-6da718514c6f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "70668adb-f9ad-41cb-8eac-2e0aba32bf22", "address": "fa:16:3e:6a:c0:d3", "network": {"id": "0f434e83-45c8-454d-820b-af39b696a1d5", "bridge": "br-int", "label": "tempest-TestShelveInstance-1862824485-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0fc924d2df984301897e81920c5e192f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap70668adb-f9", "ovs_interfaceid": "70668adb-f9ad-41cb-8eac-2e0aba32bf22", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 15:06:16 compute-0 nova_compute[250018]: 2026-01-20 15:06:16.322 250022 DEBUG nova.network.os_vif_util [None req-66024156-4244-4fe1-b7c1-7ea83e115ab2 b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] Converting VIF {"id": "70668adb-f9ad-41cb-8eac-2e0aba32bf22", "address": "fa:16:3e:6a:c0:d3", "network": {"id": "0f434e83-45c8-454d-820b-af39b696a1d5", "bridge": "br-int", "label": "tempest-TestShelveInstance-1862824485-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0fc924d2df984301897e81920c5e192f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap70668adb-f9", "ovs_interfaceid": "70668adb-f9ad-41cb-8eac-2e0aba32bf22", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:06:16 compute-0 nova_compute[250018]: 2026-01-20 15:06:16.323 250022 DEBUG nova.network.os_vif_util [None req-66024156-4244-4fe1-b7c1-7ea83e115ab2 b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6a:c0:d3,bridge_name='br-int',has_traffic_filtering=True,id=70668adb-f9ad-41cb-8eac-2e0aba32bf22,network=Network(0f434e83-45c8-454d-820b-af39b696a1d5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap70668adb-f9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:06:16 compute-0 nova_compute[250018]: 2026-01-20 15:06:16.323 250022 DEBUG os_vif [None req-66024156-4244-4fe1-b7c1-7ea83e115ab2 b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6a:c0:d3,bridge_name='br-int',has_traffic_filtering=True,id=70668adb-f9ad-41cb-8eac-2e0aba32bf22,network=Network(0f434e83-45c8-454d-820b-af39b696a1d5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap70668adb-f9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 15:06:16 compute-0 nova_compute[250018]: 2026-01-20 15:06:16.325 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:06:16 compute-0 nova_compute[250018]: 2026-01-20 15:06:16.325 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap70668adb-f9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:06:16 compute-0 nova_compute[250018]: 2026-01-20 15:06:16.326 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:06:16 compute-0 nova_compute[250018]: 2026-01-20 15:06:16.329 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 15:06:16 compute-0 nova_compute[250018]: 2026-01-20 15:06:16.332 250022 INFO os_vif [None req-66024156-4244-4fe1-b7c1-7ea83e115ab2 b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6a:c0:d3,bridge_name='br-int',has_traffic_filtering=True,id=70668adb-f9ad-41cb-8eac-2e0aba32bf22,network=Network(0f434e83-45c8-454d-820b-af39b696a1d5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap70668adb-f9')
Jan 20 15:06:16 compute-0 nova_compute[250018]: 2026-01-20 15:06:16.550 250022 INFO nova.virt.libvirt.driver [None req-66024156-4244-4fe1-b7c1-7ea83e115ab2 b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] Deleting instance files /var/lib/nova/instances/5feeb9de-434b-4ec7-aa99-6da718514c6f_del
Jan 20 15:06:16 compute-0 nova_compute[250018]: 2026-01-20 15:06:16.551 250022 INFO nova.virt.libvirt.driver [None req-66024156-4244-4fe1-b7c1-7ea83e115ab2 b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] Deletion of /var/lib/nova/instances/5feeb9de-434b-4ec7-aa99-6da718514c6f_del complete
Jan 20 15:06:16 compute-0 nova_compute[250018]: 2026-01-20 15:06:16.558 250022 DEBUG nova.compute.manager [req-81f646ab-2e0f-4c5f-b2d3-95f285b816ae req-d7b47746-58c2-489f-bef4-9fa8cd0c2f15 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] Received event network-changed-70668adb-f9ad-41cb-8eac-2e0aba32bf22 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:06:16 compute-0 nova_compute[250018]: 2026-01-20 15:06:16.559 250022 DEBUG nova.compute.manager [req-81f646ab-2e0f-4c5f-b2d3-95f285b816ae req-d7b47746-58c2-489f-bef4-9fa8cd0c2f15 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] Refreshing instance network info cache due to event network-changed-70668adb-f9ad-41cb-8eac-2e0aba32bf22. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 15:06:16 compute-0 nova_compute[250018]: 2026-01-20 15:06:16.559 250022 DEBUG oslo_concurrency.lockutils [req-81f646ab-2e0f-4c5f-b2d3-95f285b816ae req-d7b47746-58c2-489f-bef4-9fa8cd0c2f15 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-5feeb9de-434b-4ec7-aa99-6da718514c6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:06:16 compute-0 nova_compute[250018]: 2026-01-20 15:06:16.560 250022 DEBUG oslo_concurrency.lockutils [req-81f646ab-2e0f-4c5f-b2d3-95f285b816ae req-d7b47746-58c2-489f-bef4-9fa8cd0c2f15 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-5feeb9de-434b-4ec7-aa99-6da718514c6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:06:16 compute-0 nova_compute[250018]: 2026-01-20 15:06:16.560 250022 DEBUG nova.network.neutron [req-81f646ab-2e0f-4c5f-b2d3-95f285b816ae req-d7b47746-58c2-489f-bef4-9fa8cd0c2f15 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] Refreshing network info cache for port 70668adb-f9ad-41cb-8eac-2e0aba32bf22 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 15:06:16 compute-0 nova_compute[250018]: 2026-01-20 15:06:16.884 250022 DEBUG nova.network.neutron [None req-e94c83d1-e40c-496d-b821-cb8c8bf00577 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] [instance: 0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04] Successfully updated port: f58244ca-9b84-410e-a77e-f4bb0dc69691 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 20 15:06:16 compute-0 nova_compute[250018]: 2026-01-20 15:06:16.910 250022 DEBUG oslo_concurrency.lockutils [None req-e94c83d1-e40c-496d-b821-cb8c8bf00577 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Acquiring lock "refresh_cache-0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:06:16 compute-0 nova_compute[250018]: 2026-01-20 15:06:16.910 250022 DEBUG oslo_concurrency.lockutils [None req-e94c83d1-e40c-496d-b821-cb8c8bf00577 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Acquired lock "refresh_cache-0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:06:16 compute-0 nova_compute[250018]: 2026-01-20 15:06:16.911 250022 DEBUG nova.network.neutron [None req-e94c83d1-e40c-496d-b821-cb8c8bf00577 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] [instance: 0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 15:06:16 compute-0 nova_compute[250018]: 2026-01-20 15:06:16.984 250022 DEBUG nova.compute.manager [req-890e5040-e7b0-4468-bc4b-d45e72caf946 req-d518f1a1-c9fe-4152-b008-d0487be5a891 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04] Received event network-changed-f58244ca-9b84-410e-a77e-f4bb0dc69691 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:06:16 compute-0 nova_compute[250018]: 2026-01-20 15:06:16.985 250022 DEBUG nova.compute.manager [req-890e5040-e7b0-4468-bc4b-d45e72caf946 req-d518f1a1-c9fe-4152-b008-d0487be5a891 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04] Refreshing instance network info cache due to event network-changed-f58244ca-9b84-410e-a77e-f4bb0dc69691. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 15:06:16 compute-0 nova_compute[250018]: 2026-01-20 15:06:16.985 250022 DEBUG oslo_concurrency.lockutils [req-890e5040-e7b0-4468-bc4b-d45e72caf946 req-d518f1a1-c9fe-4152-b008-d0487be5a891 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:06:17 compute-0 nova_compute[250018]: 2026-01-20 15:06:17.061 250022 DEBUG nova.network.neutron [None req-e94c83d1-e40c-496d-b821-cb8c8bf00577 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] [instance: 0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 15:06:17 compute-0 nova_compute[250018]: 2026-01-20 15:06:17.164 250022 INFO nova.scheduler.client.report [None req-66024156-4244-4fe1-b7c1-7ea83e115ab2 b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] Deleted allocations for instance 5feeb9de-434b-4ec7-aa99-6da718514c6f
Jan 20 15:06:17 compute-0 nova_compute[250018]: 2026-01-20 15:06:17.219 250022 DEBUG oslo_concurrency.lockutils [None req-66024156-4244-4fe1-b7c1-7ea83e115ab2 b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:06:17 compute-0 nova_compute[250018]: 2026-01-20 15:06:17.220 250022 DEBUG oslo_concurrency.lockutils [None req-66024156-4244-4fe1-b7c1-7ea83e115ab2 b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:06:17 compute-0 nova_compute[250018]: 2026-01-20 15:06:17.302 250022 DEBUG oslo_concurrency.processutils [None req-66024156-4244-4fe1-b7c1-7ea83e115ab2 b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:06:17 compute-0 sudo[346001]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:06:17 compute-0 sudo[346001]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:06:17 compute-0 sudo[346001]: pam_unix(sudo:session): session closed for user root
Jan 20 15:06:17 compute-0 ceph-mon[74360]: pgmap v2471: 321 pgs: 321 active+clean; 430 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 385 KiB/s rd, 2.6 MiB/s wr, 141 op/s
Jan 20 15:06:17 compute-0 sudo[346027]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:06:17 compute-0 sudo[346027]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:06:17 compute-0 sudo[346027]: pam_unix(sudo:session): session closed for user root
Jan 20 15:06:17 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:06:17 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:06:17 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:06:17.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:06:17 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:06:17 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3494959299' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:06:17 compute-0 nova_compute[250018]: 2026-01-20 15:06:17.786 250022 DEBUG oslo_concurrency.processutils [None req-66024156-4244-4fe1-b7c1-7ea83e115ab2 b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:06:17 compute-0 nova_compute[250018]: 2026-01-20 15:06:17.793 250022 DEBUG nova.compute.provider_tree [None req-66024156-4244-4fe1-b7c1-7ea83e115ab2 b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:06:17 compute-0 nova_compute[250018]: 2026-01-20 15:06:17.811 250022 DEBUG nova.scheduler.client.report [None req-66024156-4244-4fe1-b7c1-7ea83e115ab2 b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:06:17 compute-0 nova_compute[250018]: 2026-01-20 15:06:17.835 250022 DEBUG oslo_concurrency.lockutils [None req-66024156-4244-4fe1-b7c1-7ea83e115ab2 b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.616s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:06:17 compute-0 nova_compute[250018]: 2026-01-20 15:06:17.889 250022 DEBUG oslo_concurrency.lockutils [None req-66024156-4244-4fe1-b7c1-7ea83e115ab2 b02a8ef6cc3946ceb2c8846aae2eae68 0fc924d2df984301897e81920c5e192f - - default default] Lock "5feeb9de-434b-4ec7-aa99-6da718514c6f" "released" by "nova.compute.manager.ComputeManager.shelve_offload_instance.<locals>.do_shelve_offload_instance" :: held 8.770s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:06:17 compute-0 nova_compute[250018]: 2026-01-20 15:06:17.930 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:06:17 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:06:17.932 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=52, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=51) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:06:17 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:06:17.934 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 15:06:18 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:06:18 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:06:18 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:06:18.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:06:18 compute-0 nova_compute[250018]: 2026-01-20 15:06:18.276 250022 DEBUG nova.network.neutron [req-81f646ab-2e0f-4c5f-b2d3-95f285b816ae req-d7b47746-58c2-489f-bef4-9fa8cd0c2f15 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] Updated VIF entry in instance network info cache for port 70668adb-f9ad-41cb-8eac-2e0aba32bf22. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 15:06:18 compute-0 nova_compute[250018]: 2026-01-20 15:06:18.277 250022 DEBUG nova.network.neutron [req-81f646ab-2e0f-4c5f-b2d3-95f285b816ae req-d7b47746-58c2-489f-bef4-9fa8cd0c2f15 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] Updating instance_info_cache with network_info: [{"id": "70668adb-f9ad-41cb-8eac-2e0aba32bf22", "address": "fa:16:3e:6a:c0:d3", "network": {"id": "0f434e83-45c8-454d-820b-af39b696a1d5", "bridge": null, "label": "tempest-TestShelveInstance-1862824485-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0fc924d2df984301897e81920c5e192f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "unbound", "details": {}, "devname": "tap70668adb-f9", "ovs_interfaceid": null, "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:06:18 compute-0 nova_compute[250018]: 2026-01-20 15:06:18.299 250022 DEBUG nova.network.neutron [None req-e94c83d1-e40c-496d-b821-cb8c8bf00577 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] [instance: 0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04] Updating instance_info_cache with network_info: [{"id": "f58244ca-9b84-410e-a77e-f4bb0dc69691", "address": "fa:16:3e:18:5f:10", "network": {"id": "8472bae1-476b-4100-b9fa-e8827bc4f7bf", "bridge": "br-int", "label": "tempest-TestStampPattern-1138931002-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e142d118583b4f9ba3531bcf3838e256", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf58244ca-9b", "ovs_interfaceid": "f58244ca-9b84-410e-a77e-f4bb0dc69691", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:06:18 compute-0 nova_compute[250018]: 2026-01-20 15:06:18.310 250022 DEBUG oslo_concurrency.lockutils [req-81f646ab-2e0f-4c5f-b2d3-95f285b816ae req-d7b47746-58c2-489f-bef4-9fa8cd0c2f15 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-5feeb9de-434b-4ec7-aa99-6da718514c6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:06:18 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2472: 321 pgs: 321 active+clean; 430 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 385 KiB/s rd, 2.6 MiB/s wr, 141 op/s
Jan 20 15:06:18 compute-0 nova_compute[250018]: 2026-01-20 15:06:18.328 250022 DEBUG oslo_concurrency.lockutils [None req-e94c83d1-e40c-496d-b821-cb8c8bf00577 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Releasing lock "refresh_cache-0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:06:18 compute-0 nova_compute[250018]: 2026-01-20 15:06:18.329 250022 DEBUG nova.compute.manager [None req-e94c83d1-e40c-496d-b821-cb8c8bf00577 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] [instance: 0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04] Instance network_info: |[{"id": "f58244ca-9b84-410e-a77e-f4bb0dc69691", "address": "fa:16:3e:18:5f:10", "network": {"id": "8472bae1-476b-4100-b9fa-e8827bc4f7bf", "bridge": "br-int", "label": "tempest-TestStampPattern-1138931002-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e142d118583b4f9ba3531bcf3838e256", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf58244ca-9b", "ovs_interfaceid": "f58244ca-9b84-410e-a77e-f4bb0dc69691", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 20 15:06:18 compute-0 nova_compute[250018]: 2026-01-20 15:06:18.330 250022 DEBUG oslo_concurrency.lockutils [req-890e5040-e7b0-4468-bc4b-d45e72caf946 req-d518f1a1-c9fe-4152-b008-d0487be5a891 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:06:18 compute-0 nova_compute[250018]: 2026-01-20 15:06:18.330 250022 DEBUG nova.network.neutron [req-890e5040-e7b0-4468-bc4b-d45e72caf946 req-d518f1a1-c9fe-4152-b008-d0487be5a891 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04] Refreshing network info cache for port f58244ca-9b84-410e-a77e-f4bb0dc69691 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 15:06:18 compute-0 nova_compute[250018]: 2026-01-20 15:06:18.335 250022 DEBUG nova.virt.libvirt.driver [None req-e94c83d1-e40c-496d-b821-cb8c8bf00577 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] [instance: 0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04] Start _get_guest_xml network_info=[{"id": "f58244ca-9b84-410e-a77e-f4bb0dc69691", "address": "fa:16:3e:18:5f:10", "network": {"id": "8472bae1-476b-4100-b9fa-e8827bc4f7bf", "bridge": "br-int", "label": "tempest-TestStampPattern-1138931002-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e142d118583b4f9ba3531bcf3838e256", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf58244ca-9b", "ovs_interfaceid": "f58244ca-9b84-410e-a77e-f4bb0dc69691", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='',container_format='bare',created_at=2026-01-20T15:06:01Z,direct_url=<?>,disk_format='raw',id=8c970c65-2888-4da3-891e-c2b6eb3ea735,min_disk=1,min_ram=0,name='tempest-TestStampPatternsnapshot-189036364',owner='e142d118583b4f9ba3531bcf3838e256',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2026-01-20T15:06:06Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encrypted': False, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'guest_format': None, 'size': 0, 'image_id': '8c970c65-2888-4da3-891e-c2b6eb3ea735'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 15:06:18 compute-0 nova_compute[250018]: 2026-01-20 15:06:18.343 250022 WARNING nova.virt.libvirt.driver [None req-e94c83d1-e40c-496d-b821-cb8c8bf00577 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:06:18 compute-0 nova_compute[250018]: 2026-01-20 15:06:18.352 250022 DEBUG nova.virt.libvirt.host [None req-e94c83d1-e40c-496d-b821-cb8c8bf00577 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 15:06:18 compute-0 nova_compute[250018]: 2026-01-20 15:06:18.353 250022 DEBUG nova.virt.libvirt.host [None req-e94c83d1-e40c-496d-b821-cb8c8bf00577 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 15:06:18 compute-0 nova_compute[250018]: 2026-01-20 15:06:18.363 250022 DEBUG nova.virt.libvirt.host [None req-e94c83d1-e40c-496d-b821-cb8c8bf00577 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 15:06:18 compute-0 nova_compute[250018]: 2026-01-20 15:06:18.364 250022 DEBUG nova.virt.libvirt.host [None req-e94c83d1-e40c-496d-b821-cb8c8bf00577 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 15:06:18 compute-0 nova_compute[250018]: 2026-01-20 15:06:18.366 250022 DEBUG nova.virt.libvirt.driver [None req-e94c83d1-e40c-496d-b821-cb8c8bf00577 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 15:06:18 compute-0 nova_compute[250018]: 2026-01-20 15:06:18.366 250022 DEBUG nova.virt.hardware [None req-e94c83d1-e40c-496d-b821-cb8c8bf00577 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2026-01-20T15:06:01Z,direct_url=<?>,disk_format='raw',id=8c970c65-2888-4da3-891e-c2b6eb3ea735,min_disk=1,min_ram=0,name='tempest-TestStampPatternsnapshot-189036364',owner='e142d118583b4f9ba3531bcf3838e256',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2026-01-20T15:06:06Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 15:06:18 compute-0 nova_compute[250018]: 2026-01-20 15:06:18.367 250022 DEBUG nova.virt.hardware [None req-e94c83d1-e40c-496d-b821-cb8c8bf00577 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 15:06:18 compute-0 nova_compute[250018]: 2026-01-20 15:06:18.368 250022 DEBUG nova.virt.hardware [None req-e94c83d1-e40c-496d-b821-cb8c8bf00577 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 15:06:18 compute-0 nova_compute[250018]: 2026-01-20 15:06:18.368 250022 DEBUG nova.virt.hardware [None req-e94c83d1-e40c-496d-b821-cb8c8bf00577 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 15:06:18 compute-0 nova_compute[250018]: 2026-01-20 15:06:18.369 250022 DEBUG nova.virt.hardware [None req-e94c83d1-e40c-496d-b821-cb8c8bf00577 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 15:06:18 compute-0 nova_compute[250018]: 2026-01-20 15:06:18.369 250022 DEBUG nova.virt.hardware [None req-e94c83d1-e40c-496d-b821-cb8c8bf00577 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 15:06:18 compute-0 nova_compute[250018]: 2026-01-20 15:06:18.370 250022 DEBUG nova.virt.hardware [None req-e94c83d1-e40c-496d-b821-cb8c8bf00577 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 15:06:18 compute-0 nova_compute[250018]: 2026-01-20 15:06:18.370 250022 DEBUG nova.virt.hardware [None req-e94c83d1-e40c-496d-b821-cb8c8bf00577 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 15:06:18 compute-0 nova_compute[250018]: 2026-01-20 15:06:18.371 250022 DEBUG nova.virt.hardware [None req-e94c83d1-e40c-496d-b821-cb8c8bf00577 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 15:06:18 compute-0 nova_compute[250018]: 2026-01-20 15:06:18.371 250022 DEBUG nova.virt.hardware [None req-e94c83d1-e40c-496d-b821-cb8c8bf00577 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 15:06:18 compute-0 nova_compute[250018]: 2026-01-20 15:06:18.372 250022 DEBUG nova.virt.hardware [None req-e94c83d1-e40c-496d-b821-cb8c8bf00577 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 15:06:18 compute-0 nova_compute[250018]: 2026-01-20 15:06:18.379 250022 DEBUG oslo_concurrency.processutils [None req-e94c83d1-e40c-496d-b821-cb8c8bf00577 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:06:18 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3494959299' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:06:18 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-crash-compute-0[81580]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
Jan 20 15:06:18 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 15:06:18 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1917396713' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:06:18 compute-0 nova_compute[250018]: 2026-01-20 15:06:18.875 250022 DEBUG oslo_concurrency.processutils [None req-e94c83d1-e40c-496d-b821-cb8c8bf00577 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:06:18 compute-0 nova_compute[250018]: 2026-01-20 15:06:18.906 250022 DEBUG nova.storage.rbd_utils [None req-e94c83d1-e40c-496d-b821-cb8c8bf00577 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] rbd image 0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:06:18 compute-0 nova_compute[250018]: 2026-01-20 15:06:18.911 250022 DEBUG oslo_concurrency.processutils [None req-e94c83d1-e40c-496d-b821-cb8c8bf00577 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:06:19 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 15:06:19 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4247174354' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:06:19 compute-0 nova_compute[250018]: 2026-01-20 15:06:19.362 250022 DEBUG oslo_concurrency.processutils [None req-e94c83d1-e40c-496d-b821-cb8c8bf00577 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:06:19 compute-0 nova_compute[250018]: 2026-01-20 15:06:19.365 250022 DEBUG nova.virt.libvirt.vif [None req-e94c83d1-e40c-496d-b821-cb8c8bf00577 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T15:06:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestStampPattern-server-69339560',display_name='tempest-TestStampPattern-server-69339560',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-teststamppattern-server-69339560',id=164,image_ref='8c970c65-2888-4da3-891e-c2b6eb3ea735',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHW99EAKkcMHbb6foGeGxm9beD/C9AeSuQLW3fqIuoocya0hep1/utcjh4cUxZzvt5K+5yMQG3K45jiLKihqKM6cawBqTQvgzcywKN5pk06AjS3tvq9GuiAvDAys6caVkA==',key_name='tempest-TestStampPattern-1928143162',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e142d118583b4f9ba3531bcf3838e256',ramdisk_id='',reservation_id='r-x2zvlk1c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_boot_roles='member,reader',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='33ba7a73-3233-40a3-a49a-e5bbd604dc3c',image_min_disk='1',image_min_ram='0',image_owner_id='e142d118583b4f9ba3531bcf3838e256',image_owner_project_name='tempest-TestStampPattern-487600181',image_owner_user_name='tempest-TestStampPattern-487600181-project-member',image_user_id='bc554998e71a4322bdd27ac727a9044c',network_allocated='True',owner_project_name='tempest-TestStampPattern-487600181',owner_user_name='tempest-TestStampPattern-487600181-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T15:06:13Z,user_data=None,user_id='bc554998e71a4322bdd27ac727a9044c',uuid=0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04,vcpu_mode
l=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f58244ca-9b84-410e-a77e-f4bb0dc69691", "address": "fa:16:3e:18:5f:10", "network": {"id": "8472bae1-476b-4100-b9fa-e8827bc4f7bf", "bridge": "br-int", "label": "tempest-TestStampPattern-1138931002-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e142d118583b4f9ba3531bcf3838e256", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf58244ca-9b", "ovs_interfaceid": "f58244ca-9b84-410e-a77e-f4bb0dc69691", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 15:06:19 compute-0 nova_compute[250018]: 2026-01-20 15:06:19.366 250022 DEBUG nova.network.os_vif_util [None req-e94c83d1-e40c-496d-b821-cb8c8bf00577 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Converting VIF {"id": "f58244ca-9b84-410e-a77e-f4bb0dc69691", "address": "fa:16:3e:18:5f:10", "network": {"id": "8472bae1-476b-4100-b9fa-e8827bc4f7bf", "bridge": "br-int", "label": "tempest-TestStampPattern-1138931002-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e142d118583b4f9ba3531bcf3838e256", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf58244ca-9b", "ovs_interfaceid": "f58244ca-9b84-410e-a77e-f4bb0dc69691", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:06:19 compute-0 nova_compute[250018]: 2026-01-20 15:06:19.367 250022 DEBUG nova.network.os_vif_util [None req-e94c83d1-e40c-496d-b821-cb8c8bf00577 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:18:5f:10,bridge_name='br-int',has_traffic_filtering=True,id=f58244ca-9b84-410e-a77e-f4bb0dc69691,network=Network(8472bae1-476b-4100-b9fa-e8827bc4f7bf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf58244ca-9b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:06:19 compute-0 nova_compute[250018]: 2026-01-20 15:06:19.370 250022 DEBUG nova.objects.instance [None req-e94c83d1-e40c-496d-b821-cb8c8bf00577 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:06:19 compute-0 nova_compute[250018]: 2026-01-20 15:06:19.391 250022 DEBUG nova.virt.libvirt.driver [None req-e94c83d1-e40c-496d-b821-cb8c8bf00577 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] [instance: 0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04] End _get_guest_xml xml=<domain type="kvm">
Jan 20 15:06:19 compute-0 nova_compute[250018]:   <uuid>0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04</uuid>
Jan 20 15:06:19 compute-0 nova_compute[250018]:   <name>instance-000000a4</name>
Jan 20 15:06:19 compute-0 nova_compute[250018]:   <memory>131072</memory>
Jan 20 15:06:19 compute-0 nova_compute[250018]:   <vcpu>1</vcpu>
Jan 20 15:06:19 compute-0 nova_compute[250018]:   <metadata>
Jan 20 15:06:19 compute-0 nova_compute[250018]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 15:06:19 compute-0 nova_compute[250018]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 15:06:19 compute-0 nova_compute[250018]:       <nova:name>tempest-TestStampPattern-server-69339560</nova:name>
Jan 20 15:06:19 compute-0 nova_compute[250018]:       <nova:creationTime>2026-01-20 15:06:18</nova:creationTime>
Jan 20 15:06:19 compute-0 nova_compute[250018]:       <nova:flavor name="m1.nano">
Jan 20 15:06:19 compute-0 nova_compute[250018]:         <nova:memory>128</nova:memory>
Jan 20 15:06:19 compute-0 nova_compute[250018]:         <nova:disk>1</nova:disk>
Jan 20 15:06:19 compute-0 nova_compute[250018]:         <nova:swap>0</nova:swap>
Jan 20 15:06:19 compute-0 nova_compute[250018]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 15:06:19 compute-0 nova_compute[250018]:         <nova:vcpus>1</nova:vcpus>
Jan 20 15:06:19 compute-0 nova_compute[250018]:       </nova:flavor>
Jan 20 15:06:19 compute-0 nova_compute[250018]:       <nova:owner>
Jan 20 15:06:19 compute-0 nova_compute[250018]:         <nova:user uuid="bc554998e71a4322bdd27ac727a9044c">tempest-TestStampPattern-487600181-project-member</nova:user>
Jan 20 15:06:19 compute-0 nova_compute[250018]:         <nova:project uuid="e142d118583b4f9ba3531bcf3838e256">tempest-TestStampPattern-487600181</nova:project>
Jan 20 15:06:19 compute-0 nova_compute[250018]:       </nova:owner>
Jan 20 15:06:19 compute-0 nova_compute[250018]:       <nova:root type="image" uuid="8c970c65-2888-4da3-891e-c2b6eb3ea735"/>
Jan 20 15:06:19 compute-0 nova_compute[250018]:       <nova:ports>
Jan 20 15:06:19 compute-0 nova_compute[250018]:         <nova:port uuid="f58244ca-9b84-410e-a77e-f4bb0dc69691">
Jan 20 15:06:19 compute-0 nova_compute[250018]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Jan 20 15:06:19 compute-0 nova_compute[250018]:         </nova:port>
Jan 20 15:06:19 compute-0 nova_compute[250018]:       </nova:ports>
Jan 20 15:06:19 compute-0 nova_compute[250018]:     </nova:instance>
Jan 20 15:06:19 compute-0 nova_compute[250018]:   </metadata>
Jan 20 15:06:19 compute-0 nova_compute[250018]:   <sysinfo type="smbios">
Jan 20 15:06:19 compute-0 nova_compute[250018]:     <system>
Jan 20 15:06:19 compute-0 nova_compute[250018]:       <entry name="manufacturer">RDO</entry>
Jan 20 15:06:19 compute-0 nova_compute[250018]:       <entry name="product">OpenStack Compute</entry>
Jan 20 15:06:19 compute-0 nova_compute[250018]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 15:06:19 compute-0 nova_compute[250018]:       <entry name="serial">0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04</entry>
Jan 20 15:06:19 compute-0 nova_compute[250018]:       <entry name="uuid">0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04</entry>
Jan 20 15:06:19 compute-0 nova_compute[250018]:       <entry name="family">Virtual Machine</entry>
Jan 20 15:06:19 compute-0 nova_compute[250018]:     </system>
Jan 20 15:06:19 compute-0 nova_compute[250018]:   </sysinfo>
Jan 20 15:06:19 compute-0 nova_compute[250018]:   <os>
Jan 20 15:06:19 compute-0 nova_compute[250018]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 15:06:19 compute-0 nova_compute[250018]:     <boot dev="hd"/>
Jan 20 15:06:19 compute-0 nova_compute[250018]:     <smbios mode="sysinfo"/>
Jan 20 15:06:19 compute-0 nova_compute[250018]:   </os>
Jan 20 15:06:19 compute-0 nova_compute[250018]:   <features>
Jan 20 15:06:19 compute-0 nova_compute[250018]:     <acpi/>
Jan 20 15:06:19 compute-0 nova_compute[250018]:     <apic/>
Jan 20 15:06:19 compute-0 nova_compute[250018]:     <vmcoreinfo/>
Jan 20 15:06:19 compute-0 nova_compute[250018]:   </features>
Jan 20 15:06:19 compute-0 nova_compute[250018]:   <clock offset="utc">
Jan 20 15:06:19 compute-0 nova_compute[250018]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 15:06:19 compute-0 nova_compute[250018]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 15:06:19 compute-0 nova_compute[250018]:     <timer name="hpet" present="no"/>
Jan 20 15:06:19 compute-0 nova_compute[250018]:   </clock>
Jan 20 15:06:19 compute-0 nova_compute[250018]:   <cpu mode="custom" match="exact">
Jan 20 15:06:19 compute-0 nova_compute[250018]:     <model>Nehalem</model>
Jan 20 15:06:19 compute-0 nova_compute[250018]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 15:06:19 compute-0 nova_compute[250018]:   </cpu>
Jan 20 15:06:19 compute-0 nova_compute[250018]:   <devices>
Jan 20 15:06:19 compute-0 nova_compute[250018]:     <disk type="network" device="disk">
Jan 20 15:06:19 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 15:06:19 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04_disk">
Jan 20 15:06:19 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 15:06:19 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 15:06:19 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 15:06:19 compute-0 nova_compute[250018]:       </source>
Jan 20 15:06:19 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 15:06:19 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 15:06:19 compute-0 nova_compute[250018]:       </auth>
Jan 20 15:06:19 compute-0 nova_compute[250018]:       <target dev="vda" bus="virtio"/>
Jan 20 15:06:19 compute-0 nova_compute[250018]:     </disk>
Jan 20 15:06:19 compute-0 nova_compute[250018]:     <disk type="network" device="cdrom">
Jan 20 15:06:19 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 15:06:19 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04_disk.config">
Jan 20 15:06:19 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 15:06:19 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 15:06:19 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 15:06:19 compute-0 nova_compute[250018]:       </source>
Jan 20 15:06:19 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 15:06:19 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 15:06:19 compute-0 nova_compute[250018]:       </auth>
Jan 20 15:06:19 compute-0 nova_compute[250018]:       <target dev="sda" bus="sata"/>
Jan 20 15:06:19 compute-0 nova_compute[250018]:     </disk>
Jan 20 15:06:19 compute-0 nova_compute[250018]:     <interface type="ethernet">
Jan 20 15:06:19 compute-0 nova_compute[250018]:       <mac address="fa:16:3e:18:5f:10"/>
Jan 20 15:06:19 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 15:06:19 compute-0 nova_compute[250018]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 15:06:19 compute-0 nova_compute[250018]:       <mtu size="1442"/>
Jan 20 15:06:19 compute-0 nova_compute[250018]:       <target dev="tapf58244ca-9b"/>
Jan 20 15:06:19 compute-0 nova_compute[250018]:     </interface>
Jan 20 15:06:19 compute-0 nova_compute[250018]:     <serial type="pty">
Jan 20 15:06:19 compute-0 nova_compute[250018]:       <log file="/var/lib/nova/instances/0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04/console.log" append="off"/>
Jan 20 15:06:19 compute-0 nova_compute[250018]:     </serial>
Jan 20 15:06:19 compute-0 nova_compute[250018]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 15:06:19 compute-0 nova_compute[250018]:     <video>
Jan 20 15:06:19 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 15:06:19 compute-0 nova_compute[250018]:     </video>
Jan 20 15:06:19 compute-0 nova_compute[250018]:     <input type="tablet" bus="usb"/>
Jan 20 15:06:19 compute-0 nova_compute[250018]:     <input type="keyboard" bus="usb"/>
Jan 20 15:06:19 compute-0 nova_compute[250018]:     <rng model="virtio">
Jan 20 15:06:19 compute-0 nova_compute[250018]:       <backend model="random">/dev/urandom</backend>
Jan 20 15:06:19 compute-0 nova_compute[250018]:     </rng>
Jan 20 15:06:19 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root"/>
Jan 20 15:06:19 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:06:19 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:06:19 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:06:19 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:06:19 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:06:19 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:06:19 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:06:19 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:06:19 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:06:19 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:06:19 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:06:19 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:06:19 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:06:19 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:06:19 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:06:19 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:06:19 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:06:19 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:06:19 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:06:19 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:06:19 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:06:19 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:06:19 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:06:19 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:06:19 compute-0 nova_compute[250018]:     <controller type="usb" index="0"/>
Jan 20 15:06:19 compute-0 nova_compute[250018]:     <memballoon model="virtio">
Jan 20 15:06:19 compute-0 nova_compute[250018]:       <stats period="10"/>
Jan 20 15:06:19 compute-0 nova_compute[250018]:     </memballoon>
Jan 20 15:06:19 compute-0 nova_compute[250018]:   </devices>
Jan 20 15:06:19 compute-0 nova_compute[250018]: </domain>
Jan 20 15:06:19 compute-0 nova_compute[250018]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 15:06:19 compute-0 nova_compute[250018]: 2026-01-20 15:06:19.394 250022 DEBUG nova.compute.manager [None req-e94c83d1-e40c-496d-b821-cb8c8bf00577 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] [instance: 0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04] Preparing to wait for external event network-vif-plugged-f58244ca-9b84-410e-a77e-f4bb0dc69691 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 20 15:06:19 compute-0 nova_compute[250018]: 2026-01-20 15:06:19.394 250022 DEBUG oslo_concurrency.lockutils [None req-e94c83d1-e40c-496d-b821-cb8c8bf00577 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Acquiring lock "0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:06:19 compute-0 nova_compute[250018]: 2026-01-20 15:06:19.395 250022 DEBUG oslo_concurrency.lockutils [None req-e94c83d1-e40c-496d-b821-cb8c8bf00577 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Lock "0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:06:19 compute-0 nova_compute[250018]: 2026-01-20 15:06:19.395 250022 DEBUG oslo_concurrency.lockutils [None req-e94c83d1-e40c-496d-b821-cb8c8bf00577 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Lock "0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:06:19 compute-0 nova_compute[250018]: 2026-01-20 15:06:19.396 250022 DEBUG nova.virt.libvirt.vif [None req-e94c83d1-e40c-496d-b821-cb8c8bf00577 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T15:06:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestStampPattern-server-69339560',display_name='tempest-TestStampPattern-server-69339560',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-teststamppattern-server-69339560',id=164,image_ref='8c970c65-2888-4da3-891e-c2b6eb3ea735',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHW99EAKkcMHbb6foGeGxm9beD/C9AeSuQLW3fqIuoocya0hep1/utcjh4cUxZzvt5K+5yMQG3K45jiLKihqKM6cawBqTQvgzcywKN5pk06AjS3tvq9GuiAvDAys6caVkA==',key_name='tempest-TestStampPattern-1928143162',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e142d118583b4f9ba3531bcf3838e256',ramdisk_id='',reservation_id='r-x2zvlk1c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_boot_roles='member,reader',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='33ba7a73-3233-40a3-a49a-e5bbd604dc3c',image_min_disk='1',image_min_ram='0',image_owner_id='e142d118583b4f9ba3531bcf3838e256',image_owner_project_name='tempest-TestStampPattern-487600181',image_owner_user_name='tempest-TestStampPattern-487600181-project-member',image_user_id='bc554998e71a4322bdd27ac727a9044c',network_allocated='True',owner_project_name='tempest-TestStampPattern-487600181',owner_user_name='tempest-TestStampPattern-487600181-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T15:06:13Z,user_data=None,user_id='bc554998e71a4322bdd27ac727a9044c',uuid=0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04
,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f58244ca-9b84-410e-a77e-f4bb0dc69691", "address": "fa:16:3e:18:5f:10", "network": {"id": "8472bae1-476b-4100-b9fa-e8827bc4f7bf", "bridge": "br-int", "label": "tempest-TestStampPattern-1138931002-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e142d118583b4f9ba3531bcf3838e256", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf58244ca-9b", "ovs_interfaceid": "f58244ca-9b84-410e-a77e-f4bb0dc69691", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 15:06:19 compute-0 nova_compute[250018]: 2026-01-20 15:06:19.397 250022 DEBUG nova.network.os_vif_util [None req-e94c83d1-e40c-496d-b821-cb8c8bf00577 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Converting VIF {"id": "f58244ca-9b84-410e-a77e-f4bb0dc69691", "address": "fa:16:3e:18:5f:10", "network": {"id": "8472bae1-476b-4100-b9fa-e8827bc4f7bf", "bridge": "br-int", "label": "tempest-TestStampPattern-1138931002-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e142d118583b4f9ba3531bcf3838e256", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf58244ca-9b", "ovs_interfaceid": "f58244ca-9b84-410e-a77e-f4bb0dc69691", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:06:19 compute-0 nova_compute[250018]: 2026-01-20 15:06:19.398 250022 DEBUG nova.network.os_vif_util [None req-e94c83d1-e40c-496d-b821-cb8c8bf00577 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:18:5f:10,bridge_name='br-int',has_traffic_filtering=True,id=f58244ca-9b84-410e-a77e-f4bb0dc69691,network=Network(8472bae1-476b-4100-b9fa-e8827bc4f7bf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf58244ca-9b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:06:19 compute-0 nova_compute[250018]: 2026-01-20 15:06:19.399 250022 DEBUG os_vif [None req-e94c83d1-e40c-496d-b821-cb8c8bf00577 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:18:5f:10,bridge_name='br-int',has_traffic_filtering=True,id=f58244ca-9b84-410e-a77e-f4bb0dc69691,network=Network(8472bae1-476b-4100-b9fa-e8827bc4f7bf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf58244ca-9b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 15:06:19 compute-0 nova_compute[250018]: 2026-01-20 15:06:19.400 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:06:19 compute-0 nova_compute[250018]: 2026-01-20 15:06:19.401 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:06:19 compute-0 nova_compute[250018]: 2026-01-20 15:06:19.402 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:06:19 compute-0 nova_compute[250018]: 2026-01-20 15:06:19.406 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:06:19 compute-0 ceph-mon[74360]: pgmap v2472: 321 pgs: 321 active+clean; 430 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 385 KiB/s rd, 2.6 MiB/s wr, 141 op/s
Jan 20 15:06:19 compute-0 nova_compute[250018]: 2026-01-20 15:06:19.407 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf58244ca-9b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:06:19 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/1917396713' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:06:19 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/4247174354' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:06:19 compute-0 nova_compute[250018]: 2026-01-20 15:06:19.407 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf58244ca-9b, col_values=(('external_ids', {'iface-id': 'f58244ca-9b84-410e-a77e-f4bb0dc69691', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:18:5f:10', 'vm-uuid': '0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:06:19 compute-0 nova_compute[250018]: 2026-01-20 15:06:19.408 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:06:19 compute-0 NetworkManager[48960]: <info>  [1768921579.4101] manager: (tapf58244ca-9b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/280)
Jan 20 15:06:19 compute-0 nova_compute[250018]: 2026-01-20 15:06:19.411 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 15:06:19 compute-0 nova_compute[250018]: 2026-01-20 15:06:19.417 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:06:19 compute-0 nova_compute[250018]: 2026-01-20 15:06:19.417 250022 INFO os_vif [None req-e94c83d1-e40c-496d-b821-cb8c8bf00577 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:18:5f:10,bridge_name='br-int',has_traffic_filtering=True,id=f58244ca-9b84-410e-a77e-f4bb0dc69691,network=Network(8472bae1-476b-4100-b9fa-e8827bc4f7bf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf58244ca-9b')
Jan 20 15:06:19 compute-0 nova_compute[250018]: 2026-01-20 15:06:19.484 250022 DEBUG nova.virt.libvirt.driver [None req-e94c83d1-e40c-496d-b821-cb8c8bf00577 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:06:19 compute-0 nova_compute[250018]: 2026-01-20 15:06:19.485 250022 DEBUG nova.virt.libvirt.driver [None req-e94c83d1-e40c-496d-b821-cb8c8bf00577 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:06:19 compute-0 nova_compute[250018]: 2026-01-20 15:06:19.485 250022 DEBUG nova.virt.libvirt.driver [None req-e94c83d1-e40c-496d-b821-cb8c8bf00577 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] No VIF found with MAC fa:16:3e:18:5f:10, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 15:06:19 compute-0 nova_compute[250018]: 2026-01-20 15:06:19.485 250022 INFO nova.virt.libvirt.driver [None req-e94c83d1-e40c-496d-b821-cb8c8bf00577 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] [instance: 0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04] Using config drive
Jan 20 15:06:19 compute-0 nova_compute[250018]: 2026-01-20 15:06:19.512 250022 DEBUG nova.storage.rbd_utils [None req-e94c83d1-e40c-496d-b821-cb8c8bf00577 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] rbd image 0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:06:19 compute-0 nova_compute[250018]: 2026-01-20 15:06:19.633 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:06:19 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:06:19 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:06:19 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:06:19.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:06:19 compute-0 nova_compute[250018]: 2026-01-20 15:06:19.883 250022 DEBUG nova.network.neutron [req-890e5040-e7b0-4468-bc4b-d45e72caf946 req-d518f1a1-c9fe-4152-b008-d0487be5a891 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04] Updated VIF entry in instance network info cache for port f58244ca-9b84-410e-a77e-f4bb0dc69691. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 15:06:19 compute-0 nova_compute[250018]: 2026-01-20 15:06:19.884 250022 DEBUG nova.network.neutron [req-890e5040-e7b0-4468-bc4b-d45e72caf946 req-d518f1a1-c9fe-4152-b008-d0487be5a891 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04] Updating instance_info_cache with network_info: [{"id": "f58244ca-9b84-410e-a77e-f4bb0dc69691", "address": "fa:16:3e:18:5f:10", "network": {"id": "8472bae1-476b-4100-b9fa-e8827bc4f7bf", "bridge": "br-int", "label": "tempest-TestStampPattern-1138931002-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e142d118583b4f9ba3531bcf3838e256", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf58244ca-9b", "ovs_interfaceid": "f58244ca-9b84-410e-a77e-f4bb0dc69691", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:06:19 compute-0 nova_compute[250018]: 2026-01-20 15:06:19.899 250022 DEBUG oslo_concurrency.lockutils [req-890e5040-e7b0-4468-bc4b-d45e72caf946 req-d518f1a1-c9fe-4152-b008-d0487be5a891 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:06:19 compute-0 nova_compute[250018]: 2026-01-20 15:06:19.994 250022 INFO nova.virt.libvirt.driver [None req-e94c83d1-e40c-496d-b821-cb8c8bf00577 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] [instance: 0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04] Creating config drive at /var/lib/nova/instances/0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04/disk.config
Jan 20 15:06:19 compute-0 nova_compute[250018]: 2026-01-20 15:06:19.999 250022 DEBUG oslo_concurrency.processutils [None req-e94c83d1-e40c-496d-b821-cb8c8bf00577 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpoyjehqdc execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:06:20 compute-0 nova_compute[250018]: 2026-01-20 15:06:20.144 250022 DEBUG oslo_concurrency.processutils [None req-e94c83d1-e40c-496d-b821-cb8c8bf00577 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpoyjehqdc" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:06:20 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:06:20 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:06:20 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:06:20.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:06:20 compute-0 nova_compute[250018]: 2026-01-20 15:06:20.210 250022 DEBUG nova.storage.rbd_utils [None req-e94c83d1-e40c-496d-b821-cb8c8bf00577 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] rbd image 0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:06:20 compute-0 nova_compute[250018]: 2026-01-20 15:06:20.216 250022 DEBUG oslo_concurrency.processutils [None req-e94c83d1-e40c-496d-b821-cb8c8bf00577 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04/disk.config 0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:06:20 compute-0 nova_compute[250018]: 2026-01-20 15:06:20.245 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:06:20 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2473: 321 pgs: 321 active+clean; 458 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 455 KiB/s rd, 3.5 MiB/s wr, 157 op/s
Jan 20 15:06:20 compute-0 nova_compute[250018]: 2026-01-20 15:06:20.394 250022 DEBUG oslo_concurrency.processutils [None req-e94c83d1-e40c-496d-b821-cb8c8bf00577 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04/disk.config 0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.178s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:06:20 compute-0 nova_compute[250018]: 2026-01-20 15:06:20.394 250022 INFO nova.virt.libvirt.driver [None req-e94c83d1-e40c-496d-b821-cb8c8bf00577 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] [instance: 0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04] Deleting local config drive /var/lib/nova/instances/0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04/disk.config because it was imported into RBD.
Jan 20 15:06:20 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/2692737353' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:06:20 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/2544975497' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:06:20 compute-0 kernel: tapf58244ca-9b: entered promiscuous mode
Jan 20 15:06:20 compute-0 NetworkManager[48960]: <info>  [1768921580.4581] manager: (tapf58244ca-9b): new Tun device (/org/freedesktop/NetworkManager/Devices/281)
Jan 20 15:06:20 compute-0 ovn_controller[148666]: 2026-01-20T15:06:20Z|00574|binding|INFO|Claiming lport f58244ca-9b84-410e-a77e-f4bb0dc69691 for this chassis.
Jan 20 15:06:20 compute-0 ovn_controller[148666]: 2026-01-20T15:06:20Z|00575|binding|INFO|f58244ca-9b84-410e-a77e-f4bb0dc69691: Claiming fa:16:3e:18:5f:10 10.100.0.4
Jan 20 15:06:20 compute-0 nova_compute[250018]: 2026-01-20 15:06:20.459 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:06:20 compute-0 podman[346199]: 2026-01-20 15:06:20.472699162 +0000 UTC m=+0.054558961 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 20 15:06:20 compute-0 ovn_controller[148666]: 2026-01-20T15:06:20Z|00576|binding|INFO|Setting lport f58244ca-9b84-410e-a77e-f4bb0dc69691 ovn-installed in OVS
Jan 20 15:06:20 compute-0 nova_compute[250018]: 2026-01-20 15:06:20.476 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:06:20 compute-0 nova_compute[250018]: 2026-01-20 15:06:20.478 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:06:20 compute-0 nova_compute[250018]: 2026-01-20 15:06:20.481 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:06:20 compute-0 systemd-udevd[346247]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 15:06:20 compute-0 NetworkManager[48960]: <info>  [1768921580.5180] device (tapf58244ca-9b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 15:06:20 compute-0 NetworkManager[48960]: <info>  [1768921580.5189] device (tapf58244ca-9b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 15:06:20 compute-0 podman[346198]: 2026-01-20 15:06:20.534293821 +0000 UTC m=+0.120882057 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 20 15:06:20 compute-0 systemd-machined[216401]: New machine qemu-73-instance-000000a4.
Jan 20 15:06:20 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:06:20.552 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:18:5f:10 10.100.0.4'], port_security=['fa:16:3e:18:5f:10 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8472bae1-476b-4100-b9fa-e8827bc4f7bf', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e142d118583b4f9ba3531bcf3838e256', 'neutron:revision_number': '2', 'neutron:security_group_ids': '37efc868-18af-48b7-8d56-e37fd1ec4df0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f9deb561-4473-4aa7-8b6f-d70e20e7cf6d, chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=f58244ca-9b84-410e-a77e-f4bb0dc69691) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:06:20 compute-0 ovn_controller[148666]: 2026-01-20T15:06:20Z|00577|binding|INFO|Setting lport f58244ca-9b84-410e-a77e-f4bb0dc69691 up in Southbound
Jan 20 15:06:20 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:06:20.553 160071 INFO neutron.agent.ovn.metadata.agent [-] Port f58244ca-9b84-410e-a77e-f4bb0dc69691 in datapath 8472bae1-476b-4100-b9fa-e8827bc4f7bf bound to our chassis
Jan 20 15:06:20 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:06:20.554 160071 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8472bae1-476b-4100-b9fa-e8827bc4f7bf
Jan 20 15:06:20 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:06:20.564 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[ba46e83a-8d72-40d5-a749-def9247aeed1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:06:20 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:06:20.565 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap8472bae1-41 in ovnmeta-8472bae1-476b-4100-b9fa-e8827bc4f7bf namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 15:06:20 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:06:20.566 257604 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap8472bae1-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 15:06:20 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:06:20.566 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[a1baae00-92c2-4fb0-bc6c-ee111eb3c1cb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:06:20 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:06:20.568 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[76fe92e0-4e91-4d64-84a1-a05591f12054]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:06:20 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:06:20.578 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[8313f599-4be8-4942-9277-adb44f711309]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:06:20 compute-0 systemd[1]: Started Virtual Machine qemu-73-instance-000000a4.
Jan 20 15:06:20 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:06:20.600 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[b21d04e6-87e2-4a45-8c0c-2f3ce3133a5f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:06:20 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:06:20.627 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[14c303ad-eaec-4dc7-a3ed-0dd1e5d97164]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:06:20 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:06:20.633 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[3e3568ec-4c92-4e28-8ed3-0b34b118323e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:06:20 compute-0 systemd-udevd[346249]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 15:06:20 compute-0 NetworkManager[48960]: <info>  [1768921580.6339] manager: (tap8472bae1-40): new Veth device (/org/freedesktop/NetworkManager/Devices/282)
Jan 20 15:06:20 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:06:20.667 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[94088423-30e7-4cd5-ad58-87aab81214a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:06:20 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:06:20.670 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[4dc6a799-0048-46dc-a5c0-025b7937d632]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:06:20 compute-0 NetworkManager[48960]: <info>  [1768921580.6930] device (tap8472bae1-40): carrier: link connected
Jan 20 15:06:20 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:06:20.700 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[6369f7e8-96b1-4c2c-a120-41954876dd9d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:06:20 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:06:20.719 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[b347ec79-c951-427c-9f0a-69148cb28699]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8472bae1-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7a:38:ca'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 187], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 752141, 'reachable_time': 15282, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 346284, 'error': None, 'target': 'ovnmeta-8472bae1-476b-4100-b9fa-e8827bc4f7bf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:06:20 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:06:20.737 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[c2673017-e22a-4c7f-bd55-896a861925da]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe7a:38ca'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 752141, 'tstamp': 752141}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 346285, 'error': None, 'target': 'ovnmeta-8472bae1-476b-4100-b9fa-e8827bc4f7bf', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:06:20 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:06:20.759 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[4ffdeee2-4c37-460d-8d4b-2899305e6f04]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8472bae1-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7a:38:ca'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 187], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 752141, 'reachable_time': 15282, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 346286, 'error': None, 'target': 'ovnmeta-8472bae1-476b-4100-b9fa-e8827bc4f7bf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:06:20 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:06:20.790 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[31ef38b8-f68a-43b4-9433-ea103227abf0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:06:20 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:06:20.844 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[e065b516-bc41-4f58-85fd-f7bdb8cb86c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:06:20 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:06:20.846 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8472bae1-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:06:20 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:06:20.846 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:06:20 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:06:20.846 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8472bae1-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:06:20 compute-0 nova_compute[250018]: 2026-01-20 15:06:20.848 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:06:20 compute-0 kernel: tap8472bae1-40: entered promiscuous mode
Jan 20 15:06:20 compute-0 NetworkManager[48960]: <info>  [1768921580.8493] manager: (tap8472bae1-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/283)
Jan 20 15:06:20 compute-0 nova_compute[250018]: 2026-01-20 15:06:20.850 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:06:20 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:06:20.852 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8472bae1-40, col_values=(('external_ids', {'iface-id': 'a48fbce9-f06f-49f1-8e61-d1d46e8f5808'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:06:20 compute-0 nova_compute[250018]: 2026-01-20 15:06:20.853 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:06:20 compute-0 ovn_controller[148666]: 2026-01-20T15:06:20Z|00578|binding|INFO|Releasing lport a48fbce9-f06f-49f1-8e61-d1d46e8f5808 from this chassis (sb_readonly=0)
Jan 20 15:06:20 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e366 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:06:20 compute-0 ceph-mon[74360]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #120. Immutable memtables: 0.
Jan 20 15:06:20 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:06:20.867025) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 15:06:20 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:856] [default] [JOB 71] Flushing memtable with next log file: 120
Jan 20 15:06:20 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768921580867091, "job": 71, "event": "flush_started", "num_memtables": 1, "num_entries": 1604, "num_deletes": 265, "total_data_size": 2465790, "memory_usage": 2520840, "flush_reason": "Manual Compaction"}
Jan 20 15:06:20 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:885] [default] [JOB 71] Level-0 flush table #121: started
Jan 20 15:06:20 compute-0 nova_compute[250018]: 2026-01-20 15:06:20.872 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:06:20 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:06:20.873 160071 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/8472bae1-476b-4100-b9fa-e8827bc4f7bf.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/8472bae1-476b-4100-b9fa-e8827bc4f7bf.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 15:06:20 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:06:20.874 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[15fdb097-cc01-4c2f-8592-388cacbcb148]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:06:20 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:06:20.875 160071 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 15:06:20 compute-0 ovn_metadata_agent[160049]: global
Jan 20 15:06:20 compute-0 ovn_metadata_agent[160049]:     log         /dev/log local0 debug
Jan 20 15:06:20 compute-0 ovn_metadata_agent[160049]:     log-tag     haproxy-metadata-proxy-8472bae1-476b-4100-b9fa-e8827bc4f7bf
Jan 20 15:06:20 compute-0 ovn_metadata_agent[160049]:     user        root
Jan 20 15:06:20 compute-0 ovn_metadata_agent[160049]:     group       root
Jan 20 15:06:20 compute-0 ovn_metadata_agent[160049]:     maxconn     1024
Jan 20 15:06:20 compute-0 ovn_metadata_agent[160049]:     pidfile     /var/lib/neutron/external/pids/8472bae1-476b-4100-b9fa-e8827bc4f7bf.pid.haproxy
Jan 20 15:06:20 compute-0 ovn_metadata_agent[160049]:     daemon
Jan 20 15:06:20 compute-0 ovn_metadata_agent[160049]: 
Jan 20 15:06:20 compute-0 ovn_metadata_agent[160049]: defaults
Jan 20 15:06:20 compute-0 ovn_metadata_agent[160049]:     log global
Jan 20 15:06:20 compute-0 ovn_metadata_agent[160049]:     mode http
Jan 20 15:06:20 compute-0 ovn_metadata_agent[160049]:     option httplog
Jan 20 15:06:20 compute-0 ovn_metadata_agent[160049]:     option dontlognull
Jan 20 15:06:20 compute-0 ovn_metadata_agent[160049]:     option http-server-close
Jan 20 15:06:20 compute-0 ovn_metadata_agent[160049]:     option forwardfor
Jan 20 15:06:20 compute-0 ovn_metadata_agent[160049]:     retries                 3
Jan 20 15:06:20 compute-0 ovn_metadata_agent[160049]:     timeout http-request    30s
Jan 20 15:06:20 compute-0 ovn_metadata_agent[160049]:     timeout connect         30s
Jan 20 15:06:20 compute-0 ovn_metadata_agent[160049]:     timeout client          32s
Jan 20 15:06:20 compute-0 ovn_metadata_agent[160049]:     timeout server          32s
Jan 20 15:06:20 compute-0 ovn_metadata_agent[160049]:     timeout http-keep-alive 30s
Jan 20 15:06:20 compute-0 ovn_metadata_agent[160049]: 
Jan 20 15:06:20 compute-0 ovn_metadata_agent[160049]: 
Jan 20 15:06:20 compute-0 ovn_metadata_agent[160049]: listen listener
Jan 20 15:06:20 compute-0 ovn_metadata_agent[160049]:     bind 169.254.169.254:80
Jan 20 15:06:20 compute-0 ovn_metadata_agent[160049]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 15:06:20 compute-0 ovn_metadata_agent[160049]:     http-request add-header X-OVN-Network-ID 8472bae1-476b-4100-b9fa-e8827bc4f7bf
Jan 20 15:06:20 compute-0 ovn_metadata_agent[160049]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 15:06:20 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:06:20.875 160071 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-8472bae1-476b-4100-b9fa-e8827bc4f7bf', 'env', 'PROCESS_TAG=haproxy-8472bae1-476b-4100-b9fa-e8827bc4f7bf', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/8472bae1-476b-4100-b9fa-e8827bc4f7bf.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 15:06:20 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768921580882705, "cf_name": "default", "job": 71, "event": "table_file_creation", "file_number": 121, "file_size": 2422490, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 54064, "largest_seqno": 55667, "table_properties": {"data_size": 2415041, "index_size": 4327, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2053, "raw_key_size": 16450, "raw_average_key_size": 20, "raw_value_size": 2399765, "raw_average_value_size": 2999, "num_data_blocks": 189, "num_entries": 800, "num_filter_entries": 800, "num_deletions": 265, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768921455, "oldest_key_time": 1768921455, "file_creation_time": 1768921580, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1688fb6f-f397-4aac-a0e8-874ff91d97a7", "db_session_id": "5V3N2TVXYZBCXP55EZHK", "orig_file_number": 121, "seqno_to_time_mapping": "N/A"}}
Jan 20 15:06:20 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 71] Flush lasted 15707 microseconds, and 6392 cpu microseconds.
Jan 20 15:06:20 compute-0 ceph-mon[74360]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 15:06:20 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:06:20.882743) [db/flush_job.cc:967] [default] [JOB 71] Level-0 flush table #121: 2422490 bytes OK
Jan 20 15:06:20 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:06:20.882760) [db/memtable_list.cc:519] [default] Level-0 commit table #121 started
Jan 20 15:06:20 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:06:20.884583) [db/memtable_list.cc:722] [default] Level-0 commit table #121: memtable #1 done
Jan 20 15:06:20 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:06:20.884594) EVENT_LOG_v1 {"time_micros": 1768921580884590, "job": 71, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 15:06:20 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:06:20.884608) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 15:06:20 compute-0 ceph-mon[74360]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 71] Try to delete WAL files size 2458773, prev total WAL file size 2458773, number of live WAL files 2.
Jan 20 15:06:20 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000117.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:06:20 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:06:20.885332) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032303137' seq:72057594037927935, type:22 .. '6C6F676D0032323731' seq:0, type:0; will stop at (end)
Jan 20 15:06:20 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 72] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 15:06:20 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 71 Base level 0, inputs: [121(2365KB)], [119(10071KB)]
Jan 20 15:06:20 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768921580885407, "job": 72, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [121], "files_L6": [119], "score": -1, "input_data_size": 12735585, "oldest_snapshot_seqno": -1}
Jan 20 15:06:20 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 72] Generated table #122: 8593 keys, 12588643 bytes, temperature: kUnknown
Jan 20 15:06:20 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768921580958487, "cf_name": "default", "job": 72, "event": "table_file_creation", "file_number": 122, "file_size": 12588643, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12530894, "index_size": 35173, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21509, "raw_key_size": 222701, "raw_average_key_size": 25, "raw_value_size": 12377685, "raw_average_value_size": 1440, "num_data_blocks": 1380, "num_entries": 8593, "num_filter_entries": 8593, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768917305, "oldest_key_time": 0, "file_creation_time": 1768921580, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1688fb6f-f397-4aac-a0e8-874ff91d97a7", "db_session_id": "5V3N2TVXYZBCXP55EZHK", "orig_file_number": 122, "seqno_to_time_mapping": "N/A"}}
Jan 20 15:06:20 compute-0 ceph-mon[74360]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 15:06:20 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:06:20.958827) [db/compaction/compaction_job.cc:1663] [default] [JOB 72] Compacted 1@0 + 1@6 files to L6 => 12588643 bytes
Jan 20 15:06:20 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:06:20.960691) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 173.8 rd, 171.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.3, 9.8 +0.0 blob) out(12.0 +0.0 blob), read-write-amplify(10.5) write-amplify(5.2) OK, records in: 9137, records dropped: 544 output_compression: NoCompression
Jan 20 15:06:20 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:06:20.960707) EVENT_LOG_v1 {"time_micros": 1768921580960699, "job": 72, "event": "compaction_finished", "compaction_time_micros": 73259, "compaction_time_cpu_micros": 26931, "output_level": 6, "num_output_files": 1, "total_output_size": 12588643, "num_input_records": 9137, "num_output_records": 8593, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 15:06:20 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000121.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:06:20 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768921580961450, "job": 72, "event": "table_file_deletion", "file_number": 121}
Jan 20 15:06:20 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000119.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:06:20 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768921580963465, "job": 72, "event": "table_file_deletion", "file_number": 119}
Jan 20 15:06:20 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:06:20.885246) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:06:20 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:06:20.963595) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:06:20 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:06:20.963599) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:06:20 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:06:20.963601) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:06:20 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:06:20.963602) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:06:20 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:06:20.963603) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:06:21 compute-0 nova_compute[250018]: 2026-01-20 15:06:21.235 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768921581.2344158, 0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:06:21 compute-0 nova_compute[250018]: 2026-01-20 15:06:21.236 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04] VM Started (Lifecycle Event)
Jan 20 15:06:21 compute-0 nova_compute[250018]: 2026-01-20 15:06:21.259 250022 DEBUG nova.compute.manager [req-988f8d33-dae5-40d9-aca3-5b227e5b81bb req-abe62774-41eb-494a-a5b5-b778c76d46f7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04] Received event network-vif-plugged-f58244ca-9b84-410e-a77e-f4bb0dc69691 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:06:21 compute-0 nova_compute[250018]: 2026-01-20 15:06:21.260 250022 DEBUG oslo_concurrency.lockutils [req-988f8d33-dae5-40d9-aca3-5b227e5b81bb req-abe62774-41eb-494a-a5b5-b778c76d46f7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:06:21 compute-0 nova_compute[250018]: 2026-01-20 15:06:21.260 250022 DEBUG oslo_concurrency.lockutils [req-988f8d33-dae5-40d9-aca3-5b227e5b81bb req-abe62774-41eb-494a-a5b5-b778c76d46f7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:06:21 compute-0 nova_compute[250018]: 2026-01-20 15:06:21.261 250022 DEBUG oslo_concurrency.lockutils [req-988f8d33-dae5-40d9-aca3-5b227e5b81bb req-abe62774-41eb-494a-a5b5-b778c76d46f7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:06:21 compute-0 nova_compute[250018]: 2026-01-20 15:06:21.261 250022 DEBUG nova.compute.manager [req-988f8d33-dae5-40d9-aca3-5b227e5b81bb req-abe62774-41eb-494a-a5b5-b778c76d46f7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04] Processing event network-vif-plugged-f58244ca-9b84-410e-a77e-f4bb0dc69691 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 20 15:06:21 compute-0 nova_compute[250018]: 2026-01-20 15:06:21.262 250022 DEBUG nova.compute.manager [None req-e94c83d1-e40c-496d-b821-cb8c8bf00577 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] [instance: 0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 15:06:21 compute-0 nova_compute[250018]: 2026-01-20 15:06:21.266 250022 DEBUG nova.virt.libvirt.driver [None req-e94c83d1-e40c-496d-b821-cb8c8bf00577 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] [instance: 0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 15:06:21 compute-0 nova_compute[250018]: 2026-01-20 15:06:21.269 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:06:21 compute-0 podman[346360]: 2026-01-20 15:06:21.271758315 +0000 UTC m=+0.051751885 container create eaef5bd202e1950c175cc2c02126f22c5dcb588cca20b8aff783ec5bc27a4cd1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8472bae1-476b-4100-b9fa-e8827bc4f7bf, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true)
Jan 20 15:06:21 compute-0 nova_compute[250018]: 2026-01-20 15:06:21.272 250022 INFO nova.virt.libvirt.driver [-] [instance: 0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04] Instance spawned successfully.
Jan 20 15:06:21 compute-0 nova_compute[250018]: 2026-01-20 15:06:21.273 250022 INFO nova.compute.manager [None req-e94c83d1-e40c-496d-b821-cb8c8bf00577 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] [instance: 0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04] Took 7.88 seconds to spawn the instance on the hypervisor.
Jan 20 15:06:21 compute-0 nova_compute[250018]: 2026-01-20 15:06:21.273 250022 DEBUG nova.compute.manager [None req-e94c83d1-e40c-496d-b821-cb8c8bf00577 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] [instance: 0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:06:21 compute-0 nova_compute[250018]: 2026-01-20 15:06:21.274 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:06:21 compute-0 systemd[1]: Started libpod-conmon-eaef5bd202e1950c175cc2c02126f22c5dcb588cca20b8aff783ec5bc27a4cd1.scope.
Jan 20 15:06:21 compute-0 nova_compute[250018]: 2026-01-20 15:06:21.318 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 15:06:21 compute-0 nova_compute[250018]: 2026-01-20 15:06:21.320 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768921581.2351344, 0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:06:21 compute-0 nova_compute[250018]: 2026-01-20 15:06:21.320 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04] VM Paused (Lifecycle Event)
Jan 20 15:06:21 compute-0 podman[346360]: 2026-01-20 15:06:21.243936876 +0000 UTC m=+0.023930466 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 15:06:21 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:06:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21fa6c352bd385a37a177a6641ff27860cc8422de6fef5e56f59b6ddcd6890b5/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 15:06:21 compute-0 podman[346360]: 2026-01-20 15:06:21.357279689 +0000 UTC m=+0.137273279 container init eaef5bd202e1950c175cc2c02126f22c5dcb588cca20b8aff783ec5bc27a4cd1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8472bae1-476b-4100-b9fa-e8827bc4f7bf, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202)
Jan 20 15:06:21 compute-0 podman[346360]: 2026-01-20 15:06:21.363121736 +0000 UTC m=+0.143115306 container start eaef5bd202e1950c175cc2c02126f22c5dcb588cca20b8aff783ec5bc27a4cd1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8472bae1-476b-4100-b9fa-e8827bc4f7bf, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 20 15:06:21 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 15:06:21 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2218343567' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:06:21 compute-0 nova_compute[250018]: 2026-01-20 15:06:21.368 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:06:21 compute-0 nova_compute[250018]: 2026-01-20 15:06:21.371 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768921581.2655516, 0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:06:21 compute-0 nova_compute[250018]: 2026-01-20 15:06:21.371 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04] VM Resumed (Lifecycle Event)
Jan 20 15:06:21 compute-0 nova_compute[250018]: 2026-01-20 15:06:21.380 250022 INFO nova.compute.manager [None req-e94c83d1-e40c-496d-b821-cb8c8bf00577 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] [instance: 0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04] Took 8.91 seconds to build instance.
Jan 20 15:06:21 compute-0 neutron-haproxy-ovnmeta-8472bae1-476b-4100-b9fa-e8827bc4f7bf[346375]: [NOTICE]   (346379) : New worker (346381) forked
Jan 20 15:06:21 compute-0 neutron-haproxy-ovnmeta-8472bae1-476b-4100-b9fa-e8827bc4f7bf[346375]: [NOTICE]   (346379) : Loading success.
Jan 20 15:06:21 compute-0 nova_compute[250018]: 2026-01-20 15:06:21.400 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:06:21 compute-0 nova_compute[250018]: 2026-01-20 15:06:21.403 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:06:21 compute-0 nova_compute[250018]: 2026-01-20 15:06:21.406 250022 DEBUG oslo_concurrency.lockutils [None req-e94c83d1-e40c-496d-b821-cb8c8bf00577 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Lock "0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:06:21 compute-0 ceph-mon[74360]: pgmap v2473: 321 pgs: 321 active+clean; 458 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 455 KiB/s rd, 3.5 MiB/s wr, 157 op/s
Jan 20 15:06:21 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/1540173418' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:06:21 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/2218343567' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:06:21 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:06:21 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:06:21 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:06:21.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:06:22 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:06:22 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:06:22 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:06:22.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:06:22 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2474: 321 pgs: 321 active+clean; 482 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 390 KiB/s rd, 4.1 MiB/s wr, 134 op/s
Jan 20 15:06:22 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/3242676901' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:06:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:06:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:06:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:06:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:06:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:06:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:06:23 compute-0 nova_compute[250018]: 2026-01-20 15:06:23.372 250022 DEBUG nova.compute.manager [req-d6dc13db-f5e7-447a-83df-cdbadcd2e68d req-9e442542-4b73-49a0-b828-f3b77e0fd64d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04] Received event network-vif-plugged-f58244ca-9b84-410e-a77e-f4bb0dc69691 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:06:23 compute-0 nova_compute[250018]: 2026-01-20 15:06:23.373 250022 DEBUG oslo_concurrency.lockutils [req-d6dc13db-f5e7-447a-83df-cdbadcd2e68d req-9e442542-4b73-49a0-b828-f3b77e0fd64d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:06:23 compute-0 nova_compute[250018]: 2026-01-20 15:06:23.373 250022 DEBUG oslo_concurrency.lockutils [req-d6dc13db-f5e7-447a-83df-cdbadcd2e68d req-9e442542-4b73-49a0-b828-f3b77e0fd64d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:06:23 compute-0 nova_compute[250018]: 2026-01-20 15:06:23.374 250022 DEBUG oslo_concurrency.lockutils [req-d6dc13db-f5e7-447a-83df-cdbadcd2e68d req-9e442542-4b73-49a0-b828-f3b77e0fd64d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:06:23 compute-0 nova_compute[250018]: 2026-01-20 15:06:23.374 250022 DEBUG nova.compute.manager [req-d6dc13db-f5e7-447a-83df-cdbadcd2e68d req-9e442542-4b73-49a0-b828-f3b77e0fd64d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04] No waiting events found dispatching network-vif-plugged-f58244ca-9b84-410e-a77e-f4bb0dc69691 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:06:23 compute-0 nova_compute[250018]: 2026-01-20 15:06:23.375 250022 WARNING nova.compute.manager [req-d6dc13db-f5e7-447a-83df-cdbadcd2e68d req-9e442542-4b73-49a0-b828-f3b77e0fd64d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04] Received unexpected event network-vif-plugged-f58244ca-9b84-410e-a77e-f4bb0dc69691 for instance with vm_state active and task_state None.
Jan 20 15:06:23 compute-0 ceph-mon[74360]: pgmap v2474: 321 pgs: 321 active+clean; 482 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 390 KiB/s rd, 4.1 MiB/s wr, 134 op/s
Jan 20 15:06:23 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:06:23 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:06:23 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:06:23.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:06:24 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:06:24 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:06:24 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:06:24.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:06:24 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2475: 321 pgs: 321 active+clean; 486 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 4.0 MiB/s wr, 164 op/s
Jan 20 15:06:24 compute-0 nova_compute[250018]: 2026-01-20 15:06:24.410 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:06:24 compute-0 nova_compute[250018]: 2026-01-20 15:06:24.554 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:06:24 compute-0 nova_compute[250018]: 2026-01-20 15:06:24.634 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:06:25 compute-0 ceph-mon[74360]: pgmap v2475: 321 pgs: 321 active+clean; 486 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 4.0 MiB/s wr, 164 op/s
Jan 20 15:06:25 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:06:25 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:06:25 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:06:25.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:06:25 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e366 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:06:25 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:06:25.936 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=367c1a2c-b16a-4828-ab5a-626bb50023b4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '52'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:06:26 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:06:26 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:06:26 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:06:26.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:06:26 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2476: 321 pgs: 321 active+clean; 486 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 3.9 MiB/s wr, 269 op/s
Jan 20 15:06:26 compute-0 nova_compute[250018]: 2026-01-20 15:06:26.673 250022 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768921571.6728225, 5feeb9de-434b-4ec7-aa99-6da718514c6f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:06:26 compute-0 nova_compute[250018]: 2026-01-20 15:06:26.674 250022 INFO nova.compute.manager [-] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] VM Stopped (Lifecycle Event)
Jan 20 15:06:26 compute-0 nova_compute[250018]: 2026-01-20 15:06:26.697 250022 DEBUG nova.compute.manager [None req-618789ab-c238-47a9-9da7-3917e2f5c92a - - - - - -] [instance: 5feeb9de-434b-4ec7-aa99-6da718514c6f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:06:27 compute-0 nova_compute[250018]: 2026-01-20 15:06:27.068 250022 DEBUG nova.compute.manager [req-b66074ef-5ae7-4d5b-a07c-0873add5f4f2 req-c02d1ab9-640d-43db-a9b1-621b7dc8f533 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04] Received event network-changed-f58244ca-9b84-410e-a77e-f4bb0dc69691 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:06:27 compute-0 nova_compute[250018]: 2026-01-20 15:06:27.068 250022 DEBUG nova.compute.manager [req-b66074ef-5ae7-4d5b-a07c-0873add5f4f2 req-c02d1ab9-640d-43db-a9b1-621b7dc8f533 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04] Refreshing instance network info cache due to event network-changed-f58244ca-9b84-410e-a77e-f4bb0dc69691. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 15:06:27 compute-0 nova_compute[250018]: 2026-01-20 15:06:27.069 250022 DEBUG oslo_concurrency.lockutils [req-b66074ef-5ae7-4d5b-a07c-0873add5f4f2 req-c02d1ab9-640d-43db-a9b1-621b7dc8f533 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:06:27 compute-0 nova_compute[250018]: 2026-01-20 15:06:27.069 250022 DEBUG oslo_concurrency.lockutils [req-b66074ef-5ae7-4d5b-a07c-0873add5f4f2 req-c02d1ab9-640d-43db-a9b1-621b7dc8f533 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:06:27 compute-0 nova_compute[250018]: 2026-01-20 15:06:27.069 250022 DEBUG nova.network.neutron [req-b66074ef-5ae7-4d5b-a07c-0873add5f4f2 req-c02d1ab9-640d-43db-a9b1-621b7dc8f533 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04] Refreshing network info cache for port f58244ca-9b84-410e-a77e-f4bb0dc69691 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 15:06:27 compute-0 ceph-mon[74360]: pgmap v2476: 321 pgs: 321 active+clean; 486 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 3.9 MiB/s wr, 269 op/s
Jan 20 15:06:27 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:06:27 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:06:27 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:06:27.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:06:28 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:06:28 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:06:28 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:06:28.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:06:28 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2477: 321 pgs: 321 active+clean; 486 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.9 MiB/s wr, 201 op/s
Jan 20 15:06:29 compute-0 nova_compute[250018]: 2026-01-20 15:06:29.411 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:06:29 compute-0 ceph-mon[74360]: pgmap v2477: 321 pgs: 321 active+clean; 486 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.9 MiB/s wr, 201 op/s
Jan 20 15:06:29 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/2756618266' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:06:29 compute-0 nova_compute[250018]: 2026-01-20 15:06:29.636 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:06:29 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:06:29 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:06:29 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:06:29.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:06:30 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:06:30 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:06:30 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:06:30.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:06:30 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2478: 321 pgs: 321 active+clean; 486 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.9 MiB/s wr, 204 op/s
Jan 20 15:06:30 compute-0 ceph-mon[74360]: pgmap v2478: 321 pgs: 321 active+clean; 486 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.9 MiB/s wr, 204 op/s
Jan 20 15:06:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:06:30.776 160071 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:06:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:06:30.776 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:06:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:06:30.776 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:06:30 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e366 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:06:31 compute-0 nova_compute[250018]: 2026-01-20 15:06:31.523 250022 DEBUG nova.network.neutron [req-b66074ef-5ae7-4d5b-a07c-0873add5f4f2 req-c02d1ab9-640d-43db-a9b1-621b7dc8f533 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04] Updated VIF entry in instance network info cache for port f58244ca-9b84-410e-a77e-f4bb0dc69691. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 15:06:31 compute-0 nova_compute[250018]: 2026-01-20 15:06:31.524 250022 DEBUG nova.network.neutron [req-b66074ef-5ae7-4d5b-a07c-0873add5f4f2 req-c02d1ab9-640d-43db-a9b1-621b7dc8f533 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04] Updating instance_info_cache with network_info: [{"id": "f58244ca-9b84-410e-a77e-f4bb0dc69691", "address": "fa:16:3e:18:5f:10", "network": {"id": "8472bae1-476b-4100-b9fa-e8827bc4f7bf", "bridge": "br-int", "label": "tempest-TestStampPattern-1138931002-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e142d118583b4f9ba3531bcf3838e256", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf58244ca-9b", "ovs_interfaceid": "f58244ca-9b84-410e-a77e-f4bb0dc69691", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:06:31 compute-0 nova_compute[250018]: 2026-01-20 15:06:31.544 250022 DEBUG oslo_concurrency.lockutils [req-b66074ef-5ae7-4d5b-a07c-0873add5f4f2 req-c02d1ab9-640d-43db-a9b1-621b7dc8f533 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:06:31 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:06:31 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:06:31 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:06:31.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:06:32 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:06:32 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:06:32 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:06:32.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:06:32 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/3538617676' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:06:32 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2479: 321 pgs: 321 active+clean; 486 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.1 MiB/s wr, 173 op/s
Jan 20 15:06:32 compute-0 nova_compute[250018]: 2026-01-20 15:06:32.966 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:06:33 compute-0 ceph-mon[74360]: pgmap v2479: 321 pgs: 321 active+clean; 486 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.1 MiB/s wr, 173 op/s
Jan 20 15:06:33 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:06:33 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:06:33 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:06:33.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:06:34 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:06:34 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:06:34 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:06:34.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:06:34 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2480: 321 pgs: 321 active+clean; 497 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 335 KiB/s wr, 174 op/s
Jan 20 15:06:34 compute-0 nova_compute[250018]: 2026-01-20 15:06:34.413 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:06:34 compute-0 nova_compute[250018]: 2026-01-20 15:06:34.639 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:06:35 compute-0 ceph-mon[74360]: pgmap v2480: 321 pgs: 321 active+clean; 497 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 335 KiB/s wr, 174 op/s
Jan 20 15:06:35 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:06:35 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:06:35 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:06:35.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:06:35 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e366 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:06:35 compute-0 ovn_controller[148666]: 2026-01-20T15:06:35Z|00068|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.7 does not match offer 10.100.0.4
Jan 20 15:06:35 compute-0 ovn_controller[148666]: 2026-01-20T15:06:35Z|00069|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:18:5f:10 10.100.0.4
Jan 20 15:06:36 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:06:36 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:06:36 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:06:36.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:06:36 compute-0 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 20 15:06:36 compute-0 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 20 15:06:36 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2481: 321 pgs: 321 active+clean; 543 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 6.0 MiB/s rd, 2.2 MiB/s wr, 294 op/s
Jan 20 15:06:37 compute-0 nova_compute[250018]: 2026-01-20 15:06:37.176 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:06:37 compute-0 ceph-mon[74360]: pgmap v2481: 321 pgs: 321 active+clean; 543 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 6.0 MiB/s rd, 2.2 MiB/s wr, 294 op/s
Jan 20 15:06:37 compute-0 sudo[346399]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:06:37 compute-0 sudo[346399]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:06:37 compute-0 sudo[346399]: pam_unix(sudo:session): session closed for user root
Jan 20 15:06:37 compute-0 sudo[346424]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:06:37 compute-0 sudo[346424]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:06:37 compute-0 sudo[346424]: pam_unix(sudo:session): session closed for user root
Jan 20 15:06:37 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:06:37 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:06:37 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:06:37.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:06:38 compute-0 nova_compute[250018]: 2026-01-20 15:06:38.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:06:38 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:06:38 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:06:38 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:06:38.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:06:38 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2482: 321 pgs: 321 active+clean; 543 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 2.2 MiB/s wr, 180 op/s
Jan 20 15:06:38 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/3922319981' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:06:38 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/4171499437' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:06:39 compute-0 nova_compute[250018]: 2026-01-20 15:06:39.458 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:06:39 compute-0 ceph-mon[74360]: pgmap v2482: 321 pgs: 321 active+clean; 543 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 2.2 MiB/s wr, 180 op/s
Jan 20 15:06:39 compute-0 ovn_controller[148666]: 2026-01-20T15:06:39Z|00070|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.7 does not match offer 10.100.0.4
Jan 20 15:06:39 compute-0 ovn_controller[148666]: 2026-01-20T15:06:39Z|00071|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:18:5f:10 10.100.0.4
Jan 20 15:06:39 compute-0 nova_compute[250018]: 2026-01-20 15:06:39.641 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:06:39 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:06:39 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:06:39 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:06:39.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:06:40 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:06:40 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:06:40 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:06:40.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:06:40 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2483: 321 pgs: 321 active+clean; 547 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 2.3 MiB/s wr, 221 op/s
Jan 20 15:06:40 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e366 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:06:41 compute-0 ovn_controller[148666]: 2026-01-20T15:06:41Z|00072|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:18:5f:10 10.100.0.4
Jan 20 15:06:41 compute-0 ovn_controller[148666]: 2026-01-20T15:06:41Z|00073|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:18:5f:10 10.100.0.4
Jan 20 15:06:41 compute-0 ceph-mon[74360]: pgmap v2483: 321 pgs: 321 active+clean; 547 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 2.3 MiB/s wr, 221 op/s
Jan 20 15:06:41 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:06:41 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:06:41 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:06:41.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:06:42 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:06:42 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:06:42 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:06:42.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:06:42 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2484: 321 pgs: 321 active+clean; 547 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 2.3 MiB/s wr, 221 op/s
Jan 20 15:06:43 compute-0 ceph-mon[74360]: pgmap v2484: 321 pgs: 321 active+clean; 547 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 2.3 MiB/s wr, 221 op/s
Jan 20 15:06:43 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:06:43 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:06:43 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:06:43.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:06:44 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:06:44 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:06:44 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:06:44.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:06:44 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2485: 321 pgs: 321 active+clean; 552 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 2.3 MiB/s wr, 243 op/s
Jan 20 15:06:44 compute-0 nova_compute[250018]: 2026-01-20 15:06:44.489 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:06:44 compute-0 nova_compute[250018]: 2026-01-20 15:06:44.644 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:06:45 compute-0 nova_compute[250018]: 2026-01-20 15:06:45.383 250022 DEBUG oslo_concurrency.lockutils [None req-a76b9ea5-d1d3-4ad5-8c22-84badce7b3cd 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] Acquiring lock "26cf4955-374b-4e19-992f-e9348d555edf" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:06:45 compute-0 nova_compute[250018]: 2026-01-20 15:06:45.384 250022 DEBUG oslo_concurrency.lockutils [None req-a76b9ea5-d1d3-4ad5-8c22-84badce7b3cd 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] Lock "26cf4955-374b-4e19-992f-e9348d555edf" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:06:45 compute-0 nova_compute[250018]: 2026-01-20 15:06:45.398 250022 DEBUG nova.compute.manager [None req-a76b9ea5-d1d3-4ad5-8c22-84badce7b3cd 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] [instance: 26cf4955-374b-4e19-992f-e9348d555edf] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 15:06:45 compute-0 nova_compute[250018]: 2026-01-20 15:06:45.473 250022 DEBUG oslo_concurrency.lockutils [None req-a76b9ea5-d1d3-4ad5-8c22-84badce7b3cd 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:06:45 compute-0 nova_compute[250018]: 2026-01-20 15:06:45.473 250022 DEBUG oslo_concurrency.lockutils [None req-a76b9ea5-d1d3-4ad5-8c22-84badce7b3cd 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:06:45 compute-0 nova_compute[250018]: 2026-01-20 15:06:45.479 250022 DEBUG nova.virt.hardware [None req-a76b9ea5-d1d3-4ad5-8c22-84badce7b3cd 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 15:06:45 compute-0 nova_compute[250018]: 2026-01-20 15:06:45.479 250022 INFO nova.compute.claims [None req-a76b9ea5-d1d3-4ad5-8c22-84badce7b3cd 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] [instance: 26cf4955-374b-4e19-992f-e9348d555edf] Claim successful on node compute-0.ctlplane.example.com
Jan 20 15:06:45 compute-0 ceph-mon[74360]: pgmap v2485: 321 pgs: 321 active+clean; 552 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 2.3 MiB/s wr, 243 op/s
Jan 20 15:06:45 compute-0 nova_compute[250018]: 2026-01-20 15:06:45.593 250022 DEBUG oslo_concurrency.processutils [None req-a76b9ea5-d1d3-4ad5-8c22-84badce7b3cd 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:06:45 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:06:45 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:06:45 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:06:45.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:06:45 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e366 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:06:45 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:06:45 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3937162431' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:06:45 compute-0 sudo[346474]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:06:46 compute-0 sudo[346474]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:06:46 compute-0 sudo[346474]: pam_unix(sudo:session): session closed for user root
Jan 20 15:06:46 compute-0 nova_compute[250018]: 2026-01-20 15:06:46.016 250022 DEBUG oslo_concurrency.processutils [None req-a76b9ea5-d1d3-4ad5-8c22-84badce7b3cd 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:06:46 compute-0 nova_compute[250018]: 2026-01-20 15:06:46.022 250022 DEBUG nova.compute.provider_tree [None req-a76b9ea5-d1d3-4ad5-8c22-84badce7b3cd 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:06:46 compute-0 sudo[346501]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:06:46 compute-0 nova_compute[250018]: 2026-01-20 15:06:46.059 250022 DEBUG nova.scheduler.client.report [None req-a76b9ea5-d1d3-4ad5-8c22-84badce7b3cd 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:06:46 compute-0 sudo[346501]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:06:46 compute-0 sudo[346501]: pam_unix(sudo:session): session closed for user root
Jan 20 15:06:46 compute-0 nova_compute[250018]: 2026-01-20 15:06:46.089 250022 DEBUG oslo_concurrency.lockutils [None req-a76b9ea5-d1d3-4ad5-8c22-84badce7b3cd 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.615s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:06:46 compute-0 nova_compute[250018]: 2026-01-20 15:06:46.089 250022 DEBUG nova.compute.manager [None req-a76b9ea5-d1d3-4ad5-8c22-84badce7b3cd 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] [instance: 26cf4955-374b-4e19-992f-e9348d555edf] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 15:06:46 compute-0 sudo[346526]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:06:46 compute-0 sudo[346526]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:06:46 compute-0 sudo[346526]: pam_unix(sudo:session): session closed for user root
Jan 20 15:06:46 compute-0 nova_compute[250018]: 2026-01-20 15:06:46.147 250022 DEBUG nova.compute.manager [None req-a76b9ea5-d1d3-4ad5-8c22-84badce7b3cd 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] [instance: 26cf4955-374b-4e19-992f-e9348d555edf] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 20 15:06:46 compute-0 nova_compute[250018]: 2026-01-20 15:06:46.148 250022 DEBUG nova.network.neutron [None req-a76b9ea5-d1d3-4ad5-8c22-84badce7b3cd 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] [instance: 26cf4955-374b-4e19-992f-e9348d555edf] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 15:06:46 compute-0 nova_compute[250018]: 2026-01-20 15:06:46.168 250022 INFO nova.virt.libvirt.driver [None req-a76b9ea5-d1d3-4ad5-8c22-84badce7b3cd 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] [instance: 26cf4955-374b-4e19-992f-e9348d555edf] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 15:06:46 compute-0 sudo[346551]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 15:06:46 compute-0 sudo[346551]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:06:46 compute-0 nova_compute[250018]: 2026-01-20 15:06:46.192 250022 DEBUG nova.compute.manager [None req-a76b9ea5-d1d3-4ad5-8c22-84badce7b3cd 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] [instance: 26cf4955-374b-4e19-992f-e9348d555edf] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 15:06:46 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:06:46 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:06:46 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:06:46.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:06:46 compute-0 nova_compute[250018]: 2026-01-20 15:06:46.293 250022 DEBUG nova.compute.manager [None req-a76b9ea5-d1d3-4ad5-8c22-84badce7b3cd 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] [instance: 26cf4955-374b-4e19-992f-e9348d555edf] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 15:06:46 compute-0 nova_compute[250018]: 2026-01-20 15:06:46.294 250022 DEBUG nova.virt.libvirt.driver [None req-a76b9ea5-d1d3-4ad5-8c22-84badce7b3cd 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] [instance: 26cf4955-374b-4e19-992f-e9348d555edf] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 15:06:46 compute-0 nova_compute[250018]: 2026-01-20 15:06:46.294 250022 INFO nova.virt.libvirt.driver [None req-a76b9ea5-d1d3-4ad5-8c22-84badce7b3cd 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] [instance: 26cf4955-374b-4e19-992f-e9348d555edf] Creating image(s)
Jan 20 15:06:46 compute-0 nova_compute[250018]: 2026-01-20 15:06:46.316 250022 DEBUG nova.storage.rbd_utils [None req-a76b9ea5-d1d3-4ad5-8c22-84badce7b3cd 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] rbd image 26cf4955-374b-4e19-992f-e9348d555edf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:06:46 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2486: 321 pgs: 321 active+clean; 552 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 6.0 MiB/s rd, 2.1 MiB/s wr, 311 op/s
Jan 20 15:06:46 compute-0 nova_compute[250018]: 2026-01-20 15:06:46.343 250022 DEBUG nova.storage.rbd_utils [None req-a76b9ea5-d1d3-4ad5-8c22-84badce7b3cd 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] rbd image 26cf4955-374b-4e19-992f-e9348d555edf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:06:46 compute-0 nova_compute[250018]: 2026-01-20 15:06:46.368 250022 DEBUG nova.storage.rbd_utils [None req-a76b9ea5-d1d3-4ad5-8c22-84badce7b3cd 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] rbd image 26cf4955-374b-4e19-992f-e9348d555edf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:06:46 compute-0 nova_compute[250018]: 2026-01-20 15:06:46.371 250022 DEBUG oslo_concurrency.processutils [None req-a76b9ea5-d1d3-4ad5-8c22-84badce7b3cd 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:06:46 compute-0 nova_compute[250018]: 2026-01-20 15:06:46.443 250022 DEBUG oslo_concurrency.processutils [None req-a76b9ea5-d1d3-4ad5-8c22-84badce7b3cd 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:06:46 compute-0 nova_compute[250018]: 2026-01-20 15:06:46.444 250022 DEBUG oslo_concurrency.lockutils [None req-a76b9ea5-d1d3-4ad5-8c22-84badce7b3cd 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] Acquiring lock "82d5c1918fd7c974214c7a48c1793a7a82560462" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:06:46 compute-0 nova_compute[250018]: 2026-01-20 15:06:46.444 250022 DEBUG oslo_concurrency.lockutils [None req-a76b9ea5-d1d3-4ad5-8c22-84badce7b3cd 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:06:46 compute-0 nova_compute[250018]: 2026-01-20 15:06:46.444 250022 DEBUG oslo_concurrency.lockutils [None req-a76b9ea5-d1d3-4ad5-8c22-84badce7b3cd 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:06:46 compute-0 nova_compute[250018]: 2026-01-20 15:06:46.470 250022 DEBUG nova.storage.rbd_utils [None req-a76b9ea5-d1d3-4ad5-8c22-84badce7b3cd 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] rbd image 26cf4955-374b-4e19-992f-e9348d555edf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:06:46 compute-0 nova_compute[250018]: 2026-01-20 15:06:46.474 250022 DEBUG oslo_concurrency.processutils [None req-a76b9ea5-d1d3-4ad5-8c22-84badce7b3cd 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 26cf4955-374b-4e19-992f-e9348d555edf_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:06:46 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3937162431' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:06:46 compute-0 sudo[346551]: pam_unix(sudo:session): session closed for user root
Jan 20 15:06:46 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 15:06:46 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:06:46 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 20 15:06:46 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 15:06:46 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 20 15:06:47 compute-0 nova_compute[250018]: 2026-01-20 15:06:47.032 250022 DEBUG nova.policy [None req-a76b9ea5-d1d3-4ad5-8c22-84badce7b3cd 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '1cdce555ec694255a154517f28a12ae5', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '526db518ca3942b58ee346d4bd970e42', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 20 15:06:47 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:06:47 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 0b8ff0cd-15b3-4f0d-83b3-df599f37df6f does not exist
Jan 20 15:06:47 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 693447d0-0b93-4f5b-b26e-1f5f189ae0a1 does not exist
Jan 20 15:06:47 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev ca4adc12-139d-48a5-9142-6f0bab4cdc21 does not exist
Jan 20 15:06:47 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 20 15:06:47 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 15:06:47 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 20 15:06:47 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 15:06:47 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 15:06:47 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:06:47 compute-0 sudo[346701]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:06:47 compute-0 sudo[346701]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:06:47 compute-0 sudo[346701]: pam_unix(sudo:session): session closed for user root
Jan 20 15:06:47 compute-0 sudo[346726]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:06:47 compute-0 sudo[346726]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:06:47 compute-0 sudo[346726]: pam_unix(sudo:session): session closed for user root
Jan 20 15:06:47 compute-0 sudo[346751]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:06:47 compute-0 sudo[346751]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:06:47 compute-0 sudo[346751]: pam_unix(sudo:session): session closed for user root
Jan 20 15:06:47 compute-0 sudo[346776]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 15:06:47 compute-0 sudo[346776]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:06:47 compute-0 nova_compute[250018]: 2026-01-20 15:06:47.444 250022 DEBUG oslo_concurrency.processutils [None req-a76b9ea5-d1d3-4ad5-8c22-84badce7b3cd 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 26cf4955-374b-4e19-992f-e9348d555edf_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.970s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:06:47 compute-0 nova_compute[250018]: 2026-01-20 15:06:47.526 250022 DEBUG nova.storage.rbd_utils [None req-a76b9ea5-d1d3-4ad5-8c22-84badce7b3cd 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] resizing rbd image 26cf4955-374b-4e19-992f-e9348d555edf_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 20 15:06:47 compute-0 ceph-mon[74360]: pgmap v2486: 321 pgs: 321 active+clean; 552 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 6.0 MiB/s rd, 2.1 MiB/s wr, 311 op/s
Jan 20 15:06:47 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:06:47 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 15:06:47 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:06:47 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 15:06:47 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 15:06:47 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:06:47 compute-0 podman[346894]: 2026-01-20 15:06:47.64667289 +0000 UTC m=+0.044840151 container create 46803db9b4951784c63bfa2a255f964539f34a133d4ad6f5e2e5b05ed707ab77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_bassi, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:06:47 compute-0 nova_compute[250018]: 2026-01-20 15:06:47.670 250022 DEBUG nova.objects.instance [None req-a76b9ea5-d1d3-4ad5-8c22-84badce7b3cd 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] Lazy-loading 'migration_context' on Instance uuid 26cf4955-374b-4e19-992f-e9348d555edf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:06:47 compute-0 systemd[1]: Started libpod-conmon-46803db9b4951784c63bfa2a255f964539f34a133d4ad6f5e2e5b05ed707ab77.scope.
Jan 20 15:06:47 compute-0 nova_compute[250018]: 2026-01-20 15:06:47.687 250022 DEBUG nova.virt.libvirt.driver [None req-a76b9ea5-d1d3-4ad5-8c22-84badce7b3cd 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] [instance: 26cf4955-374b-4e19-992f-e9348d555edf] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 15:06:47 compute-0 nova_compute[250018]: 2026-01-20 15:06:47.688 250022 DEBUG nova.virt.libvirt.driver [None req-a76b9ea5-d1d3-4ad5-8c22-84badce7b3cd 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] [instance: 26cf4955-374b-4e19-992f-e9348d555edf] Ensure instance console log exists: /var/lib/nova/instances/26cf4955-374b-4e19-992f-e9348d555edf/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 15:06:47 compute-0 nova_compute[250018]: 2026-01-20 15:06:47.688 250022 DEBUG oslo_concurrency.lockutils [None req-a76b9ea5-d1d3-4ad5-8c22-84badce7b3cd 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:06:47 compute-0 nova_compute[250018]: 2026-01-20 15:06:47.688 250022 DEBUG oslo_concurrency.lockutils [None req-a76b9ea5-d1d3-4ad5-8c22-84badce7b3cd 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:06:47 compute-0 nova_compute[250018]: 2026-01-20 15:06:47.688 250022 DEBUG oslo_concurrency.lockutils [None req-a76b9ea5-d1d3-4ad5-8c22-84badce7b3cd 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:06:47 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:06:47 compute-0 podman[346894]: 2026-01-20 15:06:47.622292962 +0000 UTC m=+0.020460253 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:06:47 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:06:47 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:06:47 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:06:47.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:06:47 compute-0 podman[346894]: 2026-01-20 15:06:47.733303187 +0000 UTC m=+0.131470528 container init 46803db9b4951784c63bfa2a255f964539f34a133d4ad6f5e2e5b05ed707ab77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_bassi, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 15:06:47 compute-0 podman[346894]: 2026-01-20 15:06:47.741095917 +0000 UTC m=+0.139263178 container start 46803db9b4951784c63bfa2a255f964539f34a133d4ad6f5e2e5b05ed707ab77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_bassi, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:06:47 compute-0 podman[346894]: 2026-01-20 15:06:47.744325494 +0000 UTC m=+0.142492805 container attach 46803db9b4951784c63bfa2a255f964539f34a133d4ad6f5e2e5b05ed707ab77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_bassi, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 15:06:47 compute-0 goofy_bassi[346928]: 167 167
Jan 20 15:06:47 compute-0 systemd[1]: libpod-46803db9b4951784c63bfa2a255f964539f34a133d4ad6f5e2e5b05ed707ab77.scope: Deactivated successfully.
Jan 20 15:06:47 compute-0 podman[346894]: 2026-01-20 15:06:47.747500129 +0000 UTC m=+0.145667390 container died 46803db9b4951784c63bfa2a255f964539f34a133d4ad6f5e2e5b05ed707ab77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_bassi, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 20 15:06:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-f59acbee7f9a2476690cc57ffae33de5695cbc901cde694d0a380761ed7e1e89-merged.mount: Deactivated successfully.
Jan 20 15:06:47 compute-0 podman[346894]: 2026-01-20 15:06:47.785847535 +0000 UTC m=+0.184014796 container remove 46803db9b4951784c63bfa2a255f964539f34a133d4ad6f5e2e5b05ed707ab77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_bassi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 20 15:06:47 compute-0 systemd[1]: libpod-conmon-46803db9b4951784c63bfa2a255f964539f34a133d4ad6f5e2e5b05ed707ab77.scope: Deactivated successfully.
Jan 20 15:06:47 compute-0 podman[346953]: 2026-01-20 15:06:47.938908943 +0000 UTC m=+0.037833952 container create 09729e35c13189c2e327a4a114750790200431184f30022922eff6610a63acfb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_benz, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 20 15:06:47 compute-0 systemd[1]: Started libpod-conmon-09729e35c13189c2e327a4a114750790200431184f30022922eff6610a63acfb.scope.
Jan 20 15:06:48 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:06:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b37157f5c87c2f8ad0f71187ae3943b8aab4a64e50ead65356acad9eeb864f91/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 15:06:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b37157f5c87c2f8ad0f71187ae3943b8aab4a64e50ead65356acad9eeb864f91/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 15:06:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b37157f5c87c2f8ad0f71187ae3943b8aab4a64e50ead65356acad9eeb864f91/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 15:06:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b37157f5c87c2f8ad0f71187ae3943b8aab4a64e50ead65356acad9eeb864f91/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 15:06:48 compute-0 podman[346953]: 2026-01-20 15:06:47.922571432 +0000 UTC m=+0.021496461 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:06:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b37157f5c87c2f8ad0f71187ae3943b8aab4a64e50ead65356acad9eeb864f91/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 15:06:48 compute-0 podman[346953]: 2026-01-20 15:06:48.032405265 +0000 UTC m=+0.131330314 container init 09729e35c13189c2e327a4a114750790200431184f30022922eff6610a63acfb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_benz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True)
Jan 20 15:06:48 compute-0 podman[346953]: 2026-01-20 15:06:48.039298871 +0000 UTC m=+0.138223880 container start 09729e35c13189c2e327a4a114750790200431184f30022922eff6610a63acfb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_benz, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 20 15:06:48 compute-0 podman[346953]: 2026-01-20 15:06:48.042694092 +0000 UTC m=+0.141619101 container attach 09729e35c13189c2e327a4a114750790200431184f30022922eff6610a63acfb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_benz, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:06:48 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:06:48 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:06:48 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:06:48.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:06:48 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2487: 321 pgs: 321 active+clean; 552 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 152 KiB/s wr, 154 op/s
Jan 20 15:06:48 compute-0 ceph-mon[74360]: pgmap v2487: 321 pgs: 321 active+clean; 552 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 152 KiB/s wr, 154 op/s
Jan 20 15:06:48 compute-0 stupefied_benz[346969]: --> passed data devices: 0 physical, 1 LVM
Jan 20 15:06:48 compute-0 stupefied_benz[346969]: --> relative data size: 1.0
Jan 20 15:06:48 compute-0 stupefied_benz[346969]: --> All data devices are unavailable
Jan 20 15:06:48 compute-0 systemd[1]: libpod-09729e35c13189c2e327a4a114750790200431184f30022922eff6610a63acfb.scope: Deactivated successfully.
Jan 20 15:06:48 compute-0 podman[346984]: 2026-01-20 15:06:48.946590846 +0000 UTC m=+0.024227576 container died 09729e35c13189c2e327a4a114750790200431184f30022922eff6610a63acfb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_benz, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 15:06:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-b37157f5c87c2f8ad0f71187ae3943b8aab4a64e50ead65356acad9eeb864f91-merged.mount: Deactivated successfully.
Jan 20 15:06:49 compute-0 podman[346984]: 2026-01-20 15:06:49.094965748 +0000 UTC m=+0.172602478 container remove 09729e35c13189c2e327a4a114750790200431184f30022922eff6610a63acfb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_benz, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 20 15:06:49 compute-0 systemd[1]: libpod-conmon-09729e35c13189c2e327a4a114750790200431184f30022922eff6610a63acfb.scope: Deactivated successfully.
Jan 20 15:06:49 compute-0 sudo[346776]: pam_unix(sudo:session): session closed for user root
Jan 20 15:06:49 compute-0 nova_compute[250018]: 2026-01-20 15:06:49.141 250022 DEBUG nova.network.neutron [None req-a76b9ea5-d1d3-4ad5-8c22-84badce7b3cd 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] [instance: 26cf4955-374b-4e19-992f-e9348d555edf] Successfully created port: 7656dd86-9d08-4781-86a7-c3f4abd07200 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 20 15:06:49 compute-0 sudo[346999]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:06:49 compute-0 sudo[346999]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:06:49 compute-0 sudo[346999]: pam_unix(sudo:session): session closed for user root
Jan 20 15:06:49 compute-0 sudo[347024]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:06:49 compute-0 sudo[347024]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:06:49 compute-0 sudo[347024]: pam_unix(sudo:session): session closed for user root
Jan 20 15:06:49 compute-0 sudo[347049]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:06:49 compute-0 sudo[347049]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:06:49 compute-0 sudo[347049]: pam_unix(sudo:session): session closed for user root
Jan 20 15:06:49 compute-0 sudo[347074]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- lvm list --format json
Jan 20 15:06:49 compute-0 sudo[347074]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:06:49 compute-0 nova_compute[250018]: 2026-01-20 15:06:49.491 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:06:49 compute-0 nova_compute[250018]: 2026-01-20 15:06:49.645 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:06:49 compute-0 podman[347139]: 2026-01-20 15:06:49.679174357 +0000 UTC m=+0.045454178 container create 8e6b4bc22c0015b513620bc7d80cf1f48e5c0c0424de7d7ec4292e5b7f2f84a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_sutherland, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 20 15:06:49 compute-0 systemd[1]: Started libpod-conmon-8e6b4bc22c0015b513620bc7d80cf1f48e5c0c0424de7d7ec4292e5b7f2f84a0.scope.
Jan 20 15:06:49 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:06:49 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:06:49 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:06:49.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:06:49 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:06:49 compute-0 podman[347139]: 2026-01-20 15:06:49.661341856 +0000 UTC m=+0.027621707 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:06:49 compute-0 podman[347139]: 2026-01-20 15:06:49.767224372 +0000 UTC m=+0.133504213 container init 8e6b4bc22c0015b513620bc7d80cf1f48e5c0c0424de7d7ec4292e5b7f2f84a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_sutherland, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 15:06:49 compute-0 podman[347139]: 2026-01-20 15:06:49.773278885 +0000 UTC m=+0.139558706 container start 8e6b4bc22c0015b513620bc7d80cf1f48e5c0c0424de7d7ec4292e5b7f2f84a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_sutherland, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 20 15:06:49 compute-0 podman[347139]: 2026-01-20 15:06:49.776141972 +0000 UTC m=+0.142421793 container attach 8e6b4bc22c0015b513620bc7d80cf1f48e5c0c0424de7d7ec4292e5b7f2f84a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_sutherland, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 15:06:49 compute-0 systemd[1]: libpod-8e6b4bc22c0015b513620bc7d80cf1f48e5c0c0424de7d7ec4292e5b7f2f84a0.scope: Deactivated successfully.
Jan 20 15:06:49 compute-0 trusting_sutherland[347155]: 167 167
Jan 20 15:06:49 compute-0 conmon[347155]: conmon 8e6b4bc22c0015b51362 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8e6b4bc22c0015b513620bc7d80cf1f48e5c0c0424de7d7ec4292e5b7f2f84a0.scope/container/memory.events
Jan 20 15:06:49 compute-0 podman[347139]: 2026-01-20 15:06:49.778903627 +0000 UTC m=+0.145183448 container died 8e6b4bc22c0015b513620bc7d80cf1f48e5c0c0424de7d7ec4292e5b7f2f84a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_sutherland, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 15:06:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-6d120ad9ed8fc580ae53da40746faec1a9807e30a19d87526053348f52221d71-merged.mount: Deactivated successfully.
Jan 20 15:06:49 compute-0 podman[347139]: 2026-01-20 15:06:49.81833891 +0000 UTC m=+0.184618731 container remove 8e6b4bc22c0015b513620bc7d80cf1f48e5c0c0424de7d7ec4292e5b7f2f84a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_sutherland, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 20 15:06:49 compute-0 systemd[1]: libpod-conmon-8e6b4bc22c0015b513620bc7d80cf1f48e5c0c0424de7d7ec4292e5b7f2f84a0.scope: Deactivated successfully.
Jan 20 15:06:49 compute-0 podman[347180]: 2026-01-20 15:06:49.979566119 +0000 UTC m=+0.044445809 container create 3aebb322af75cf466666c5c2648412c0064a498219ba5a2a6226c571b5a4e8bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_gagarin, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 15:06:50 compute-0 systemd[1]: Started libpod-conmon-3aebb322af75cf466666c5c2648412c0064a498219ba5a2a6226c571b5a4e8bc.scope.
Jan 20 15:06:50 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:06:50 compute-0 podman[347180]: 2026-01-20 15:06:49.957761202 +0000 UTC m=+0.022640882 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:06:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3fd267b83dfeb309c3bc85e75808dfba12406953b5d30ca5c985a117e2b6013/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 15:06:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3fd267b83dfeb309c3bc85e75808dfba12406953b5d30ca5c985a117e2b6013/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 15:06:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3fd267b83dfeb309c3bc85e75808dfba12406953b5d30ca5c985a117e2b6013/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 15:06:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3fd267b83dfeb309c3bc85e75808dfba12406953b5d30ca5c985a117e2b6013/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 15:06:50 compute-0 podman[347180]: 2026-01-20 15:06:50.066253698 +0000 UTC m=+0.131133358 container init 3aebb322af75cf466666c5c2648412c0064a498219ba5a2a6226c571b5a4e8bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_gagarin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 20 15:06:50 compute-0 podman[347180]: 2026-01-20 15:06:50.078716314 +0000 UTC m=+0.143595964 container start 3aebb322af75cf466666c5c2648412c0064a498219ba5a2a6226c571b5a4e8bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_gagarin, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 15:06:50 compute-0 podman[347180]: 2026-01-20 15:06:50.082222369 +0000 UTC m=+0.147102029 container attach 3aebb322af75cf466666c5c2648412c0064a498219ba5a2a6226c571b5a4e8bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_gagarin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 20 15:06:50 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:06:50 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:06:50 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:06:50.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:06:50 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2488: 321 pgs: 321 active+clean; 598 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 1.7 MiB/s wr, 183 op/s
Jan 20 15:06:50 compute-0 elastic_gagarin[347196]: {
Jan 20 15:06:50 compute-0 elastic_gagarin[347196]:     "0": [
Jan 20 15:06:50 compute-0 elastic_gagarin[347196]:         {
Jan 20 15:06:50 compute-0 elastic_gagarin[347196]:             "devices": [
Jan 20 15:06:50 compute-0 elastic_gagarin[347196]:                 "/dev/loop3"
Jan 20 15:06:50 compute-0 elastic_gagarin[347196]:             ],
Jan 20 15:06:50 compute-0 elastic_gagarin[347196]:             "lv_name": "ceph_lv0",
Jan 20 15:06:50 compute-0 elastic_gagarin[347196]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 15:06:50 compute-0 elastic_gagarin[347196]:             "lv_size": "7511998464",
Jan 20 15:06:50 compute-0 elastic_gagarin[347196]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e399cf45-e6b6-5393-99f1-75c601d3f188,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afc3740a-ccee-46f8-b343-22648cc89376,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 20 15:06:50 compute-0 elastic_gagarin[347196]:             "lv_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 15:06:50 compute-0 elastic_gagarin[347196]:             "name": "ceph_lv0",
Jan 20 15:06:50 compute-0 elastic_gagarin[347196]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 15:06:50 compute-0 elastic_gagarin[347196]:             "tags": {
Jan 20 15:06:50 compute-0 elastic_gagarin[347196]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 15:06:50 compute-0 elastic_gagarin[347196]:                 "ceph.block_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 15:06:50 compute-0 elastic_gagarin[347196]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 15:06:50 compute-0 elastic_gagarin[347196]:                 "ceph.cluster_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 15:06:50 compute-0 elastic_gagarin[347196]:                 "ceph.cluster_name": "ceph",
Jan 20 15:06:50 compute-0 elastic_gagarin[347196]:                 "ceph.crush_device_class": "",
Jan 20 15:06:50 compute-0 elastic_gagarin[347196]:                 "ceph.encrypted": "0",
Jan 20 15:06:50 compute-0 elastic_gagarin[347196]:                 "ceph.osd_fsid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 15:06:50 compute-0 elastic_gagarin[347196]:                 "ceph.osd_id": "0",
Jan 20 15:06:50 compute-0 elastic_gagarin[347196]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 15:06:50 compute-0 elastic_gagarin[347196]:                 "ceph.type": "block",
Jan 20 15:06:50 compute-0 elastic_gagarin[347196]:                 "ceph.vdo": "0"
Jan 20 15:06:50 compute-0 elastic_gagarin[347196]:             },
Jan 20 15:06:50 compute-0 elastic_gagarin[347196]:             "type": "block",
Jan 20 15:06:50 compute-0 elastic_gagarin[347196]:             "vg_name": "ceph_vg0"
Jan 20 15:06:50 compute-0 elastic_gagarin[347196]:         }
Jan 20 15:06:50 compute-0 elastic_gagarin[347196]:     ]
Jan 20 15:06:50 compute-0 elastic_gagarin[347196]: }
Jan 20 15:06:50 compute-0 systemd[1]: libpod-3aebb322af75cf466666c5c2648412c0064a498219ba5a2a6226c571b5a4e8bc.scope: Deactivated successfully.
Jan 20 15:06:50 compute-0 podman[347180]: 2026-01-20 15:06:50.869874286 +0000 UTC m=+0.934753936 container died 3aebb322af75cf466666c5c2648412c0064a498219ba5a2a6226c571b5a4e8bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_gagarin, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 20 15:06:50 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e366 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:06:51 compute-0 nova_compute[250018]: 2026-01-20 15:06:51.272 250022 DEBUG nova.network.neutron [None req-a76b9ea5-d1d3-4ad5-8c22-84badce7b3cd 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] [instance: 26cf4955-374b-4e19-992f-e9348d555edf] Successfully updated port: 7656dd86-9d08-4781-86a7-c3f4abd07200 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 20 15:06:51 compute-0 nova_compute[250018]: 2026-01-20 15:06:51.287 250022 DEBUG oslo_concurrency.lockutils [None req-a76b9ea5-d1d3-4ad5-8c22-84badce7b3cd 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] Acquiring lock "refresh_cache-26cf4955-374b-4e19-992f-e9348d555edf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:06:51 compute-0 nova_compute[250018]: 2026-01-20 15:06:51.287 250022 DEBUG oslo_concurrency.lockutils [None req-a76b9ea5-d1d3-4ad5-8c22-84badce7b3cd 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] Acquired lock "refresh_cache-26cf4955-374b-4e19-992f-e9348d555edf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:06:51 compute-0 nova_compute[250018]: 2026-01-20 15:06:51.288 250022 DEBUG nova.network.neutron [None req-a76b9ea5-d1d3-4ad5-8c22-84badce7b3cd 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] [instance: 26cf4955-374b-4e19-992f-e9348d555edf] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 15:06:51 compute-0 ceph-mon[74360]: pgmap v2488: 321 pgs: 321 active+clean; 598 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 1.7 MiB/s wr, 183 op/s
Jan 20 15:06:51 compute-0 nova_compute[250018]: 2026-01-20 15:06:51.416 250022 DEBUG nova.compute.manager [req-2d07023a-91f6-4285-872f-57b81f82eb83 req-3c320dcd-8dc4-45be-ae9b-d0d237daff04 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 26cf4955-374b-4e19-992f-e9348d555edf] Received event network-changed-7656dd86-9d08-4781-86a7-c3f4abd07200 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:06:51 compute-0 nova_compute[250018]: 2026-01-20 15:06:51.416 250022 DEBUG nova.compute.manager [req-2d07023a-91f6-4285-872f-57b81f82eb83 req-3c320dcd-8dc4-45be-ae9b-d0d237daff04 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 26cf4955-374b-4e19-992f-e9348d555edf] Refreshing instance network info cache due to event network-changed-7656dd86-9d08-4781-86a7-c3f4abd07200. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 15:06:51 compute-0 nova_compute[250018]: 2026-01-20 15:06:51.416 250022 DEBUG oslo_concurrency.lockutils [req-2d07023a-91f6-4285-872f-57b81f82eb83 req-3c320dcd-8dc4-45be-ae9b-d0d237daff04 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-26cf4955-374b-4e19-992f-e9348d555edf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:06:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-b3fd267b83dfeb309c3bc85e75808dfba12406953b5d30ca5c985a117e2b6013-merged.mount: Deactivated successfully.
Jan 20 15:06:51 compute-0 podman[347180]: 2026-01-20 15:06:51.511472873 +0000 UTC m=+1.576352523 container remove 3aebb322af75cf466666c5c2648412c0064a498219ba5a2a6226c571b5a4e8bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_gagarin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Jan 20 15:06:51 compute-0 systemd[1]: libpod-conmon-3aebb322af75cf466666c5c2648412c0064a498219ba5a2a6226c571b5a4e8bc.scope: Deactivated successfully.
Jan 20 15:06:51 compute-0 sudo[347074]: pam_unix(sudo:session): session closed for user root
Jan 20 15:06:51 compute-0 podman[347212]: 2026-01-20 15:06:51.596574458 +0000 UTC m=+0.695617264 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent)
Jan 20 15:06:51 compute-0 sudo[347246]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:06:51 compute-0 podman[347205]: 2026-01-20 15:06:51.610779281 +0000 UTC m=+0.710444084 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 20 15:06:51 compute-0 sudo[347246]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:06:51 compute-0 sudo[347246]: pam_unix(sudo:session): session closed for user root
Jan 20 15:06:51 compute-0 sudo[347287]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:06:51 compute-0 sudo[347287]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:06:51 compute-0 sudo[347287]: pam_unix(sudo:session): session closed for user root
Jan 20 15:06:51 compute-0 sudo[347312]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:06:51 compute-0 sudo[347312]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:06:51 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:06:51 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:06:51 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:06:51.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:06:51 compute-0 sudo[347312]: pam_unix(sudo:session): session closed for user root
Jan 20 15:06:51 compute-0 sudo[347337]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- raw list --format json
Jan 20 15:06:51 compute-0 sudo[347337]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:06:51 compute-0 nova_compute[250018]: 2026-01-20 15:06:51.988 250022 DEBUG nova.network.neutron [None req-a76b9ea5-d1d3-4ad5-8c22-84badce7b3cd 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] [instance: 26cf4955-374b-4e19-992f-e9348d555edf] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 15:06:52 compute-0 podman[347402]: 2026-01-20 15:06:52.064928142 +0000 UTC m=+0.023687320 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:06:52 compute-0 podman[347402]: 2026-01-20 15:06:52.178492226 +0000 UTC m=+0.137251424 container create c449f947baf1c16abc78aff020c6d0da97da432fb0ee65865ee38dfba228c9c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_bardeen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 15:06:52 compute-0 systemd[1]: Started libpod-conmon-c449f947baf1c16abc78aff020c6d0da97da432fb0ee65865ee38dfba228c9c3.scope.
Jan 20 15:06:52 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:06:52 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:06:52 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:06:52 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:06:52.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:06:52 compute-0 podman[347402]: 2026-01-20 15:06:52.268021641 +0000 UTC m=+0.226780819 container init c449f947baf1c16abc78aff020c6d0da97da432fb0ee65865ee38dfba228c9c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_bardeen, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 15:06:52 compute-0 podman[347402]: 2026-01-20 15:06:52.275495402 +0000 UTC m=+0.234254590 container start c449f947baf1c16abc78aff020c6d0da97da432fb0ee65865ee38dfba228c9c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_bardeen, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 15:06:52 compute-0 podman[347402]: 2026-01-20 15:06:52.280109227 +0000 UTC m=+0.238868405 container attach c449f947baf1c16abc78aff020c6d0da97da432fb0ee65865ee38dfba228c9c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_bardeen, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 15:06:52 compute-0 gracious_bardeen[347419]: 167 167
Jan 20 15:06:52 compute-0 systemd[1]: libpod-c449f947baf1c16abc78aff020c6d0da97da432fb0ee65865ee38dfba228c9c3.scope: Deactivated successfully.
Jan 20 15:06:52 compute-0 podman[347402]: 2026-01-20 15:06:52.281820153 +0000 UTC m=+0.240579301 container died c449f947baf1c16abc78aff020c6d0da97da432fb0ee65865ee38dfba228c9c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_bardeen, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 20 15:06:52 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2489: 321 pgs: 321 active+clean; 604 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 2.1 MiB/s wr, 151 op/s
Jan 20 15:06:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-338ddd09150e3cbec9296953898b2334ada224c03d934999eb8cfcda82fcd9eb-merged.mount: Deactivated successfully.
Jan 20 15:06:52 compute-0 podman[347402]: 2026-01-20 15:06:52.430570596 +0000 UTC m=+0.389329754 container remove c449f947baf1c16abc78aff020c6d0da97da432fb0ee65865ee38dfba228c9c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_bardeen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 15:06:52 compute-0 systemd[1]: libpod-conmon-c449f947baf1c16abc78aff020c6d0da97da432fb0ee65865ee38dfba228c9c3.scope: Deactivated successfully.
Jan 20 15:06:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:06:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:06:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:06:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:06:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:06:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:06:52 compute-0 podman[347443]: 2026-01-20 15:06:52.599007109 +0000 UTC m=+0.041580542 container create 28c5845369ee0b3beec52cc79c7a1833347fb5d785020ba56ecc47664703ace1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_chatelet, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 15:06:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Optimize plan auto_2026-01-20_15:06:52
Jan 20 15:06:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 15:06:52 compute-0 ceph-mgr[74653]: [balancer INFO root] do_upmap
Jan 20 15:06:52 compute-0 ceph-mgr[74653]: [balancer INFO root] pools ['volumes', 'default.rgw.meta', '.mgr', 'cephfs.cephfs.data', 'default.rgw.log', 'backups', 'cephfs.cephfs.meta', 'images', 'default.rgw.control', '.rgw.root', 'vms']
Jan 20 15:06:52 compute-0 ceph-mgr[74653]: [balancer INFO root] prepared 0/10 changes
Jan 20 15:06:52 compute-0 systemd[1]: Started libpod-conmon-28c5845369ee0b3beec52cc79c7a1833347fb5d785020ba56ecc47664703ace1.scope.
Jan 20 15:06:52 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:06:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0dba2a94b39a031a9667238180d8d6329db377f9b5470d9b90a2e79ec363cf96/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 15:06:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0dba2a94b39a031a9667238180d8d6329db377f9b5470d9b90a2e79ec363cf96/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 15:06:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0dba2a94b39a031a9667238180d8d6329db377f9b5470d9b90a2e79ec363cf96/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 15:06:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0dba2a94b39a031a9667238180d8d6329db377f9b5470d9b90a2e79ec363cf96/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 15:06:52 compute-0 podman[347443]: 2026-01-20 15:06:52.676085738 +0000 UTC m=+0.118659181 container init 28c5845369ee0b3beec52cc79c7a1833347fb5d785020ba56ecc47664703ace1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_chatelet, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:06:52 compute-0 podman[347443]: 2026-01-20 15:06:52.583127481 +0000 UTC m=+0.025700944 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:06:52 compute-0 podman[347443]: 2026-01-20 15:06:52.689124701 +0000 UTC m=+0.131698134 container start 28c5845369ee0b3beec52cc79c7a1833347fb5d785020ba56ecc47664703ace1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_chatelet, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 20 15:06:52 compute-0 podman[347443]: 2026-01-20 15:06:52.695331668 +0000 UTC m=+0.137905121 container attach 28c5845369ee0b3beec52cc79c7a1833347fb5d785020ba56ecc47664703ace1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_chatelet, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 15:06:52 compute-0 nova_compute[250018]: 2026-01-20 15:06:52.726 250022 DEBUG nova.network.neutron [None req-a76b9ea5-d1d3-4ad5-8c22-84badce7b3cd 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] [instance: 26cf4955-374b-4e19-992f-e9348d555edf] Updating instance_info_cache with network_info: [{"id": "7656dd86-9d08-4781-86a7-c3f4abd07200", "address": "fa:16:3e:cb:54:6b", "network": {"id": "aca22591-2999-4ce4-8358-8365c76ef740", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-134827220-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "526db518ca3942b58ee346d4bd970e42", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7656dd86-9d", "ovs_interfaceid": "7656dd86-9d08-4781-86a7-c3f4abd07200", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:06:52 compute-0 nova_compute[250018]: 2026-01-20 15:06:52.744 250022 DEBUG oslo_concurrency.lockutils [None req-a76b9ea5-d1d3-4ad5-8c22-84badce7b3cd 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] Releasing lock "refresh_cache-26cf4955-374b-4e19-992f-e9348d555edf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:06:52 compute-0 nova_compute[250018]: 2026-01-20 15:06:52.745 250022 DEBUG nova.compute.manager [None req-a76b9ea5-d1d3-4ad5-8c22-84badce7b3cd 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] [instance: 26cf4955-374b-4e19-992f-e9348d555edf] Instance network_info: |[{"id": "7656dd86-9d08-4781-86a7-c3f4abd07200", "address": "fa:16:3e:cb:54:6b", "network": {"id": "aca22591-2999-4ce4-8358-8365c76ef740", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-134827220-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "526db518ca3942b58ee346d4bd970e42", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7656dd86-9d", "ovs_interfaceid": "7656dd86-9d08-4781-86a7-c3f4abd07200", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 20 15:06:52 compute-0 nova_compute[250018]: 2026-01-20 15:06:52.745 250022 DEBUG oslo_concurrency.lockutils [req-2d07023a-91f6-4285-872f-57b81f82eb83 req-3c320dcd-8dc4-45be-ae9b-d0d237daff04 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-26cf4955-374b-4e19-992f-e9348d555edf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:06:52 compute-0 nova_compute[250018]: 2026-01-20 15:06:52.746 250022 DEBUG nova.network.neutron [req-2d07023a-91f6-4285-872f-57b81f82eb83 req-3c320dcd-8dc4-45be-ae9b-d0d237daff04 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 26cf4955-374b-4e19-992f-e9348d555edf] Refreshing network info cache for port 7656dd86-9d08-4781-86a7-c3f4abd07200 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 15:06:52 compute-0 nova_compute[250018]: 2026-01-20 15:06:52.751 250022 DEBUG nova.virt.libvirt.driver [None req-a76b9ea5-d1d3-4ad5-8c22-84badce7b3cd 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] [instance: 26cf4955-374b-4e19-992f-e9348d555edf] Start _get_guest_xml network_info=[{"id": "7656dd86-9d08-4781-86a7-c3f4abd07200", "address": "fa:16:3e:cb:54:6b", "network": {"id": "aca22591-2999-4ce4-8358-8365c76ef740", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-134827220-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "526db518ca3942b58ee346d4bd970e42", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7656dd86-9d", "ovs_interfaceid": "7656dd86-9d08-4781-86a7-c3f4abd07200", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encrypted': False, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'guest_format': None, 'size': 0, 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 15:06:52 compute-0 nova_compute[250018]: 2026-01-20 15:06:52.758 250022 WARNING nova.virt.libvirt.driver [None req-a76b9ea5-d1d3-4ad5-8c22-84badce7b3cd 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:06:52 compute-0 nova_compute[250018]: 2026-01-20 15:06:52.763 250022 DEBUG nova.virt.libvirt.host [None req-a76b9ea5-d1d3-4ad5-8c22-84badce7b3cd 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 15:06:52 compute-0 nova_compute[250018]: 2026-01-20 15:06:52.764 250022 DEBUG nova.virt.libvirt.host [None req-a76b9ea5-d1d3-4ad5-8c22-84badce7b3cd 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 15:06:52 compute-0 nova_compute[250018]: 2026-01-20 15:06:52.769 250022 DEBUG nova.virt.libvirt.host [None req-a76b9ea5-d1d3-4ad5-8c22-84badce7b3cd 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 15:06:52 compute-0 nova_compute[250018]: 2026-01-20 15:06:52.769 250022 DEBUG nova.virt.libvirt.host [None req-a76b9ea5-d1d3-4ad5-8c22-84badce7b3cd 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 15:06:52 compute-0 nova_compute[250018]: 2026-01-20 15:06:52.771 250022 DEBUG nova.virt.libvirt.driver [None req-a76b9ea5-d1d3-4ad5-8c22-84badce7b3cd 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 15:06:52 compute-0 nova_compute[250018]: 2026-01-20 15:06:52.771 250022 DEBUG nova.virt.hardware [None req-a76b9ea5-d1d3-4ad5-8c22-84badce7b3cd 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 15:06:52 compute-0 nova_compute[250018]: 2026-01-20 15:06:52.777 250022 DEBUG nova.virt.hardware [None req-a76b9ea5-d1d3-4ad5-8c22-84badce7b3cd 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 15:06:52 compute-0 nova_compute[250018]: 2026-01-20 15:06:52.777 250022 DEBUG nova.virt.hardware [None req-a76b9ea5-d1d3-4ad5-8c22-84badce7b3cd 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 15:06:52 compute-0 nova_compute[250018]: 2026-01-20 15:06:52.778 250022 DEBUG nova.virt.hardware [None req-a76b9ea5-d1d3-4ad5-8c22-84badce7b3cd 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 15:06:52 compute-0 nova_compute[250018]: 2026-01-20 15:06:52.778 250022 DEBUG nova.virt.hardware [None req-a76b9ea5-d1d3-4ad5-8c22-84badce7b3cd 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 15:06:52 compute-0 nova_compute[250018]: 2026-01-20 15:06:52.778 250022 DEBUG nova.virt.hardware [None req-a76b9ea5-d1d3-4ad5-8c22-84badce7b3cd 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 15:06:52 compute-0 nova_compute[250018]: 2026-01-20 15:06:52.779 250022 DEBUG nova.virt.hardware [None req-a76b9ea5-d1d3-4ad5-8c22-84badce7b3cd 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 15:06:52 compute-0 nova_compute[250018]: 2026-01-20 15:06:52.779 250022 DEBUG nova.virt.hardware [None req-a76b9ea5-d1d3-4ad5-8c22-84badce7b3cd 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 15:06:52 compute-0 nova_compute[250018]: 2026-01-20 15:06:52.779 250022 DEBUG nova.virt.hardware [None req-a76b9ea5-d1d3-4ad5-8c22-84badce7b3cd 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 15:06:52 compute-0 nova_compute[250018]: 2026-01-20 15:06:52.780 250022 DEBUG nova.virt.hardware [None req-a76b9ea5-d1d3-4ad5-8c22-84badce7b3cd 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 15:06:52 compute-0 nova_compute[250018]: 2026-01-20 15:06:52.780 250022 DEBUG nova.virt.hardware [None req-a76b9ea5-d1d3-4ad5-8c22-84badce7b3cd 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 15:06:52 compute-0 nova_compute[250018]: 2026-01-20 15:06:52.785 250022 DEBUG oslo_concurrency.processutils [None req-a76b9ea5-d1d3-4ad5-8c22-84badce7b3cd 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:06:53 compute-0 nova_compute[250018]: 2026-01-20 15:06:53.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:06:53 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 15:06:53 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1888624225' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:06:53 compute-0 nova_compute[250018]: 2026-01-20 15:06:53.222 250022 DEBUG oslo_concurrency.processutils [None req-a76b9ea5-d1d3-4ad5-8c22-84badce7b3cd 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:06:53 compute-0 nova_compute[250018]: 2026-01-20 15:06:53.249 250022 DEBUG nova.storage.rbd_utils [None req-a76b9ea5-d1d3-4ad5-8c22-84badce7b3cd 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] rbd image 26cf4955-374b-4e19-992f-e9348d555edf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:06:53 compute-0 nova_compute[250018]: 2026-01-20 15:06:53.254 250022 DEBUG oslo_concurrency.processutils [None req-a76b9ea5-d1d3-4ad5-8c22-84badce7b3cd 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:06:53 compute-0 ceph-mon[74360]: pgmap v2489: 321 pgs: 321 active+clean; 604 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 2.1 MiB/s wr, 151 op/s
Jan 20 15:06:53 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/1888624225' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:06:53 compute-0 competent_chatelet[347459]: {
Jan 20 15:06:53 compute-0 competent_chatelet[347459]:     "afc3740a-ccee-46f8-b343-22648cc89376": {
Jan 20 15:06:53 compute-0 competent_chatelet[347459]:         "ceph_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 15:06:53 compute-0 competent_chatelet[347459]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 20 15:06:53 compute-0 competent_chatelet[347459]:         "osd_id": 0,
Jan 20 15:06:53 compute-0 competent_chatelet[347459]:         "osd_uuid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 15:06:53 compute-0 competent_chatelet[347459]:         "type": "bluestore"
Jan 20 15:06:53 compute-0 competent_chatelet[347459]:     }
Jan 20 15:06:53 compute-0 competent_chatelet[347459]: }
Jan 20 15:06:53 compute-0 systemd[1]: libpod-28c5845369ee0b3beec52cc79c7a1833347fb5d785020ba56ecc47664703ace1.scope: Deactivated successfully.
Jan 20 15:06:53 compute-0 podman[347443]: 2026-01-20 15:06:53.574333559 +0000 UTC m=+1.016906992 container died 28c5845369ee0b3beec52cc79c7a1833347fb5d785020ba56ecc47664703ace1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_chatelet, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 20 15:06:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-0dba2a94b39a031a9667238180d8d6329db377f9b5470d9b90a2e79ec363cf96-merged.mount: Deactivated successfully.
Jan 20 15:06:53 compute-0 podman[347443]: 2026-01-20 15:06:53.627613176 +0000 UTC m=+1.070186609 container remove 28c5845369ee0b3beec52cc79c7a1833347fb5d785020ba56ecc47664703ace1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_chatelet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:06:53 compute-0 sudo[347337]: pam_unix(sudo:session): session closed for user root
Jan 20 15:06:53 compute-0 systemd[1]: libpod-conmon-28c5845369ee0b3beec52cc79c7a1833347fb5d785020ba56ecc47664703ace1.scope: Deactivated successfully.
Jan 20 15:06:53 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 15:06:53 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:06:53 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 15:06:53 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:06:53 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 15:06:53 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1255956302' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:06:53 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev a3951f31-8990-47c2-a3bc-6976bc06e8a8 does not exist
Jan 20 15:06:53 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev d661b70a-7840-48ee-a4f2-77b23cd8f51d does not exist
Jan 20 15:06:53 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 35e9769f-58dd-4cd6-91b3-534afa955d5c does not exist
Jan 20 15:06:53 compute-0 nova_compute[250018]: 2026-01-20 15:06:53.705 250022 DEBUG oslo_concurrency.processutils [None req-a76b9ea5-d1d3-4ad5-8c22-84badce7b3cd 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:06:53 compute-0 nova_compute[250018]: 2026-01-20 15:06:53.708 250022 DEBUG nova.virt.libvirt.vif [None req-a76b9ea5-d1d3-4ad5-8c22-84badce7b3cd 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T15:06:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-1623179757',display_name='tempest-TestEncryptedCinderVolumes-server-1623179757',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-1623179757',id=166,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBvypThr9hk8H6PWhaLI5kNHP4GspwPBH/HTsm7o79l/I4wkXyvfHjS2+7YQopu7pgpa64VxpTqA3bKvRglz3oqm0zsBygBhRaMVrKGm+gAEA36pXMW5IBaDlXadpljSQg==',key_name='tempest-keypair-277062450',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='526db518ca3942b58ee346d4bd970e42',ramdisk_id='',reservation_id='r-2wppq000',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestEncryptedCinderVolumes-389718176',owner_user_name='tempest-TestEncryptedCinderVolumes-389718176-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T15:06:46Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1cdce555ec694255a154517f28a12ae5',uuid=26cf4955-374b-4e19-992f-e9348d555edf,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7656dd86-9d08-4781-86a7-c3f4abd07200", "address": "fa:16:3e:cb:54:6b", "network": {"id": "aca22591-2999-4ce4-8358-8365c76ef740", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-134827220-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "526db518ca3942b58ee346d4bd970e42", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7656dd86-9d", "ovs_interfaceid": "7656dd86-9d08-4781-86a7-c3f4abd07200", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 15:06:53 compute-0 nova_compute[250018]: 2026-01-20 15:06:53.708 250022 DEBUG nova.network.os_vif_util [None req-a76b9ea5-d1d3-4ad5-8c22-84badce7b3cd 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] Converting VIF {"id": "7656dd86-9d08-4781-86a7-c3f4abd07200", "address": "fa:16:3e:cb:54:6b", "network": {"id": "aca22591-2999-4ce4-8358-8365c76ef740", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-134827220-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "526db518ca3942b58ee346d4bd970e42", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7656dd86-9d", "ovs_interfaceid": "7656dd86-9d08-4781-86a7-c3f4abd07200", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:06:53 compute-0 nova_compute[250018]: 2026-01-20 15:06:53.709 250022 DEBUG nova.network.os_vif_util [None req-a76b9ea5-d1d3-4ad5-8c22-84badce7b3cd 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cb:54:6b,bridge_name='br-int',has_traffic_filtering=True,id=7656dd86-9d08-4781-86a7-c3f4abd07200,network=Network(aca22591-2999-4ce4-8358-8365c76ef740),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7656dd86-9d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:06:53 compute-0 nova_compute[250018]: 2026-01-20 15:06:53.710 250022 DEBUG nova.objects.instance [None req-a76b9ea5-d1d3-4ad5-8c22-84badce7b3cd 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] Lazy-loading 'pci_devices' on Instance uuid 26cf4955-374b-4e19-992f-e9348d555edf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:06:53 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:06:53 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:06:53 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:06:53.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:06:53 compute-0 nova_compute[250018]: 2026-01-20 15:06:53.732 250022 DEBUG nova.virt.libvirt.driver [None req-a76b9ea5-d1d3-4ad5-8c22-84badce7b3cd 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] [instance: 26cf4955-374b-4e19-992f-e9348d555edf] End _get_guest_xml xml=<domain type="kvm">
Jan 20 15:06:53 compute-0 nova_compute[250018]:   <uuid>26cf4955-374b-4e19-992f-e9348d555edf</uuid>
Jan 20 15:06:53 compute-0 nova_compute[250018]:   <name>instance-000000a6</name>
Jan 20 15:06:53 compute-0 nova_compute[250018]:   <memory>131072</memory>
Jan 20 15:06:53 compute-0 nova_compute[250018]:   <vcpu>1</vcpu>
Jan 20 15:06:53 compute-0 nova_compute[250018]:   <metadata>
Jan 20 15:06:53 compute-0 nova_compute[250018]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 15:06:53 compute-0 nova_compute[250018]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 15:06:53 compute-0 nova_compute[250018]:       <nova:name>tempest-TestEncryptedCinderVolumes-server-1623179757</nova:name>
Jan 20 15:06:53 compute-0 nova_compute[250018]:       <nova:creationTime>2026-01-20 15:06:52</nova:creationTime>
Jan 20 15:06:53 compute-0 nova_compute[250018]:       <nova:flavor name="m1.nano">
Jan 20 15:06:53 compute-0 nova_compute[250018]:         <nova:memory>128</nova:memory>
Jan 20 15:06:53 compute-0 nova_compute[250018]:         <nova:disk>1</nova:disk>
Jan 20 15:06:53 compute-0 nova_compute[250018]:         <nova:swap>0</nova:swap>
Jan 20 15:06:53 compute-0 nova_compute[250018]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 15:06:53 compute-0 nova_compute[250018]:         <nova:vcpus>1</nova:vcpus>
Jan 20 15:06:53 compute-0 nova_compute[250018]:       </nova:flavor>
Jan 20 15:06:53 compute-0 nova_compute[250018]:       <nova:owner>
Jan 20 15:06:53 compute-0 nova_compute[250018]:         <nova:user uuid="1cdce555ec694255a154517f28a12ae5">tempest-TestEncryptedCinderVolumes-389718176-project-member</nova:user>
Jan 20 15:06:53 compute-0 nova_compute[250018]:         <nova:project uuid="526db518ca3942b58ee346d4bd970e42">tempest-TestEncryptedCinderVolumes-389718176</nova:project>
Jan 20 15:06:53 compute-0 nova_compute[250018]:       </nova:owner>
Jan 20 15:06:53 compute-0 nova_compute[250018]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 15:06:53 compute-0 nova_compute[250018]:       <nova:ports>
Jan 20 15:06:53 compute-0 nova_compute[250018]:         <nova:port uuid="7656dd86-9d08-4781-86a7-c3f4abd07200">
Jan 20 15:06:53 compute-0 nova_compute[250018]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Jan 20 15:06:53 compute-0 nova_compute[250018]:         </nova:port>
Jan 20 15:06:53 compute-0 nova_compute[250018]:       </nova:ports>
Jan 20 15:06:53 compute-0 nova_compute[250018]:     </nova:instance>
Jan 20 15:06:53 compute-0 nova_compute[250018]:   </metadata>
Jan 20 15:06:53 compute-0 nova_compute[250018]:   <sysinfo type="smbios">
Jan 20 15:06:53 compute-0 nova_compute[250018]:     <system>
Jan 20 15:06:53 compute-0 nova_compute[250018]:       <entry name="manufacturer">RDO</entry>
Jan 20 15:06:53 compute-0 nova_compute[250018]:       <entry name="product">OpenStack Compute</entry>
Jan 20 15:06:53 compute-0 nova_compute[250018]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 15:06:53 compute-0 nova_compute[250018]:       <entry name="serial">26cf4955-374b-4e19-992f-e9348d555edf</entry>
Jan 20 15:06:53 compute-0 nova_compute[250018]:       <entry name="uuid">26cf4955-374b-4e19-992f-e9348d555edf</entry>
Jan 20 15:06:53 compute-0 nova_compute[250018]:       <entry name="family">Virtual Machine</entry>
Jan 20 15:06:53 compute-0 nova_compute[250018]:     </system>
Jan 20 15:06:53 compute-0 nova_compute[250018]:   </sysinfo>
Jan 20 15:06:53 compute-0 nova_compute[250018]:   <os>
Jan 20 15:06:53 compute-0 nova_compute[250018]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 15:06:53 compute-0 nova_compute[250018]:     <boot dev="hd"/>
Jan 20 15:06:53 compute-0 nova_compute[250018]:     <smbios mode="sysinfo"/>
Jan 20 15:06:53 compute-0 nova_compute[250018]:   </os>
Jan 20 15:06:53 compute-0 nova_compute[250018]:   <features>
Jan 20 15:06:53 compute-0 nova_compute[250018]:     <acpi/>
Jan 20 15:06:53 compute-0 nova_compute[250018]:     <apic/>
Jan 20 15:06:53 compute-0 nova_compute[250018]:     <vmcoreinfo/>
Jan 20 15:06:53 compute-0 nova_compute[250018]:   </features>
Jan 20 15:06:53 compute-0 nova_compute[250018]:   <clock offset="utc">
Jan 20 15:06:53 compute-0 nova_compute[250018]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 15:06:53 compute-0 nova_compute[250018]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 15:06:53 compute-0 nova_compute[250018]:     <timer name="hpet" present="no"/>
Jan 20 15:06:53 compute-0 nova_compute[250018]:   </clock>
Jan 20 15:06:53 compute-0 nova_compute[250018]:   <cpu mode="custom" match="exact">
Jan 20 15:06:53 compute-0 nova_compute[250018]:     <model>Nehalem</model>
Jan 20 15:06:53 compute-0 nova_compute[250018]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 15:06:53 compute-0 nova_compute[250018]:   </cpu>
Jan 20 15:06:53 compute-0 nova_compute[250018]:   <devices>
Jan 20 15:06:53 compute-0 nova_compute[250018]:     <disk type="network" device="disk">
Jan 20 15:06:53 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 15:06:53 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/26cf4955-374b-4e19-992f-e9348d555edf_disk">
Jan 20 15:06:53 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 15:06:53 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 15:06:53 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 15:06:53 compute-0 nova_compute[250018]:       </source>
Jan 20 15:06:53 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 15:06:53 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 15:06:53 compute-0 nova_compute[250018]:       </auth>
Jan 20 15:06:53 compute-0 nova_compute[250018]:       <target dev="vda" bus="virtio"/>
Jan 20 15:06:53 compute-0 nova_compute[250018]:     </disk>
Jan 20 15:06:53 compute-0 nova_compute[250018]:     <disk type="network" device="cdrom">
Jan 20 15:06:53 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 15:06:53 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/26cf4955-374b-4e19-992f-e9348d555edf_disk.config">
Jan 20 15:06:53 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 15:06:53 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 15:06:53 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 15:06:53 compute-0 nova_compute[250018]:       </source>
Jan 20 15:06:53 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 15:06:53 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 15:06:53 compute-0 nova_compute[250018]:       </auth>
Jan 20 15:06:53 compute-0 nova_compute[250018]:       <target dev="sda" bus="sata"/>
Jan 20 15:06:53 compute-0 nova_compute[250018]:     </disk>
Jan 20 15:06:53 compute-0 nova_compute[250018]:     <interface type="ethernet">
Jan 20 15:06:53 compute-0 nova_compute[250018]:       <mac address="fa:16:3e:cb:54:6b"/>
Jan 20 15:06:53 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 15:06:53 compute-0 nova_compute[250018]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 15:06:53 compute-0 nova_compute[250018]:       <mtu size="1442"/>
Jan 20 15:06:53 compute-0 nova_compute[250018]:       <target dev="tap7656dd86-9d"/>
Jan 20 15:06:53 compute-0 nova_compute[250018]:     </interface>
Jan 20 15:06:53 compute-0 nova_compute[250018]:     <serial type="pty">
Jan 20 15:06:53 compute-0 nova_compute[250018]:       <log file="/var/lib/nova/instances/26cf4955-374b-4e19-992f-e9348d555edf/console.log" append="off"/>
Jan 20 15:06:53 compute-0 nova_compute[250018]:     </serial>
Jan 20 15:06:53 compute-0 nova_compute[250018]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 15:06:53 compute-0 nova_compute[250018]:     <video>
Jan 20 15:06:53 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 15:06:53 compute-0 nova_compute[250018]:     </video>
Jan 20 15:06:53 compute-0 nova_compute[250018]:     <input type="tablet" bus="usb"/>
Jan 20 15:06:53 compute-0 nova_compute[250018]:     <rng model="virtio">
Jan 20 15:06:53 compute-0 nova_compute[250018]:       <backend model="random">/dev/urandom</backend>
Jan 20 15:06:53 compute-0 nova_compute[250018]:     </rng>
Jan 20 15:06:53 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root"/>
Jan 20 15:06:53 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:06:53 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:06:53 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:06:53 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:06:53 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:06:53 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:06:53 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:06:53 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:06:53 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:06:53 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:06:53 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:06:53 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:06:53 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:06:53 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:06:53 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:06:53 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:06:53 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:06:53 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:06:53 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:06:53 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:06:53 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:06:53 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:06:53 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:06:53 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:06:53 compute-0 nova_compute[250018]:     <controller type="usb" index="0"/>
Jan 20 15:06:53 compute-0 nova_compute[250018]:     <memballoon model="virtio">
Jan 20 15:06:53 compute-0 nova_compute[250018]:       <stats period="10"/>
Jan 20 15:06:53 compute-0 nova_compute[250018]:     </memballoon>
Jan 20 15:06:53 compute-0 nova_compute[250018]:   </devices>
Jan 20 15:06:53 compute-0 nova_compute[250018]: </domain>
Jan 20 15:06:53 compute-0 nova_compute[250018]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 15:06:53 compute-0 nova_compute[250018]: 2026-01-20 15:06:53.733 250022 DEBUG nova.compute.manager [None req-a76b9ea5-d1d3-4ad5-8c22-84badce7b3cd 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] [instance: 26cf4955-374b-4e19-992f-e9348d555edf] Preparing to wait for external event network-vif-plugged-7656dd86-9d08-4781-86a7-c3f4abd07200 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 20 15:06:53 compute-0 nova_compute[250018]: 2026-01-20 15:06:53.733 250022 DEBUG oslo_concurrency.lockutils [None req-a76b9ea5-d1d3-4ad5-8c22-84badce7b3cd 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] Acquiring lock "26cf4955-374b-4e19-992f-e9348d555edf-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:06:53 compute-0 nova_compute[250018]: 2026-01-20 15:06:53.734 250022 DEBUG oslo_concurrency.lockutils [None req-a76b9ea5-d1d3-4ad5-8c22-84badce7b3cd 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] Lock "26cf4955-374b-4e19-992f-e9348d555edf-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:06:53 compute-0 nova_compute[250018]: 2026-01-20 15:06:53.734 250022 DEBUG oslo_concurrency.lockutils [None req-a76b9ea5-d1d3-4ad5-8c22-84badce7b3cd 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] Lock "26cf4955-374b-4e19-992f-e9348d555edf-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:06:53 compute-0 nova_compute[250018]: 2026-01-20 15:06:53.735 250022 DEBUG nova.virt.libvirt.vif [None req-a76b9ea5-d1d3-4ad5-8c22-84badce7b3cd 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T15:06:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-1623179757',display_name='tempest-TestEncryptedCinderVolumes-server-1623179757',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-1623179757',id=166,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBvypThr9hk8H6PWhaLI5kNHP4GspwPBH/HTsm7o79l/I4wkXyvfHjS2+7YQopu7pgpa64VxpTqA3bKvRglz3oqm0zsBygBhRaMVrKGm+gAEA36pXMW5IBaDlXadpljSQg==',key_name='tempest-keypair-277062450',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='526db518ca3942b58ee346d4bd970e42',ramdisk_id='',reservation_id='r-2wppq000',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestEncryptedCinderVolumes-389718176',owner_user_name='tempest-TestEncryptedCinderVolumes-389718176-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T15:06:46Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1cdce555ec694255a154517f28a12ae5',uuid=26cf4955-374b-4e19-992f-e9348d555edf,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7656dd86-9d08-4781-86a7-c3f4abd07200", "address": "fa:16:3e:cb:54:6b", "network": {"id": "aca22591-2999-4ce4-8358-8365c76ef740", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-134827220-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "526db518ca3942b58ee346d4bd970e42", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7656dd86-9d", "ovs_interfaceid": "7656dd86-9d08-4781-86a7-c3f4abd07200", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 15:06:53 compute-0 nova_compute[250018]: 2026-01-20 15:06:53.736 250022 DEBUG nova.network.os_vif_util [None req-a76b9ea5-d1d3-4ad5-8c22-84badce7b3cd 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] Converting VIF {"id": "7656dd86-9d08-4781-86a7-c3f4abd07200", "address": "fa:16:3e:cb:54:6b", "network": {"id": "aca22591-2999-4ce4-8358-8365c76ef740", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-134827220-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "526db518ca3942b58ee346d4bd970e42", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7656dd86-9d", "ovs_interfaceid": "7656dd86-9d08-4781-86a7-c3f4abd07200", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:06:53 compute-0 nova_compute[250018]: 2026-01-20 15:06:53.737 250022 DEBUG nova.network.os_vif_util [None req-a76b9ea5-d1d3-4ad5-8c22-84badce7b3cd 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cb:54:6b,bridge_name='br-int',has_traffic_filtering=True,id=7656dd86-9d08-4781-86a7-c3f4abd07200,network=Network(aca22591-2999-4ce4-8358-8365c76ef740),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7656dd86-9d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:06:53 compute-0 nova_compute[250018]: 2026-01-20 15:06:53.737 250022 DEBUG os_vif [None req-a76b9ea5-d1d3-4ad5-8c22-84badce7b3cd 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cb:54:6b,bridge_name='br-int',has_traffic_filtering=True,id=7656dd86-9d08-4781-86a7-c3f4abd07200,network=Network(aca22591-2999-4ce4-8358-8365c76ef740),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7656dd86-9d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 15:06:53 compute-0 nova_compute[250018]: 2026-01-20 15:06:53.738 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:06:53 compute-0 nova_compute[250018]: 2026-01-20 15:06:53.739 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:06:53 compute-0 nova_compute[250018]: 2026-01-20 15:06:53.740 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:06:53 compute-0 nova_compute[250018]: 2026-01-20 15:06:53.744 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:06:53 compute-0 nova_compute[250018]: 2026-01-20 15:06:53.744 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7656dd86-9d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:06:53 compute-0 nova_compute[250018]: 2026-01-20 15:06:53.744 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7656dd86-9d, col_values=(('external_ids', {'iface-id': '7656dd86-9d08-4781-86a7-c3f4abd07200', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:cb:54:6b', 'vm-uuid': '26cf4955-374b-4e19-992f-e9348d555edf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:06:53 compute-0 nova_compute[250018]: 2026-01-20 15:06:53.746 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:06:53 compute-0 NetworkManager[48960]: <info>  [1768921613.7484] manager: (tap7656dd86-9d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/284)
Jan 20 15:06:53 compute-0 sudo[347554]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:06:53 compute-0 nova_compute[250018]: 2026-01-20 15:06:53.749 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 15:06:53 compute-0 sudo[347554]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:06:53 compute-0 nova_compute[250018]: 2026-01-20 15:06:53.753 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:06:53 compute-0 nova_compute[250018]: 2026-01-20 15:06:53.754 250022 INFO os_vif [None req-a76b9ea5-d1d3-4ad5-8c22-84badce7b3cd 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cb:54:6b,bridge_name='br-int',has_traffic_filtering=True,id=7656dd86-9d08-4781-86a7-c3f4abd07200,network=Network(aca22591-2999-4ce4-8358-8365c76ef740),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7656dd86-9d')
Jan 20 15:06:53 compute-0 sudo[347554]: pam_unix(sudo:session): session closed for user root
Jan 20 15:06:53 compute-0 nova_compute[250018]: 2026-01-20 15:06:53.812 250022 DEBUG nova.virt.libvirt.driver [None req-a76b9ea5-d1d3-4ad5-8c22-84badce7b3cd 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:06:53 compute-0 nova_compute[250018]: 2026-01-20 15:06:53.812 250022 DEBUG nova.virt.libvirt.driver [None req-a76b9ea5-d1d3-4ad5-8c22-84badce7b3cd 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:06:53 compute-0 nova_compute[250018]: 2026-01-20 15:06:53.813 250022 DEBUG nova.virt.libvirt.driver [None req-a76b9ea5-d1d3-4ad5-8c22-84badce7b3cd 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] No VIF found with MAC fa:16:3e:cb:54:6b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 15:06:53 compute-0 nova_compute[250018]: 2026-01-20 15:06:53.813 250022 INFO nova.virt.libvirt.driver [None req-a76b9ea5-d1d3-4ad5-8c22-84badce7b3cd 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] [instance: 26cf4955-374b-4e19-992f-e9348d555edf] Using config drive
Jan 20 15:06:53 compute-0 sudo[347582]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 15:06:53 compute-0 sudo[347582]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:06:53 compute-0 sudo[347582]: pam_unix(sudo:session): session closed for user root
Jan 20 15:06:53 compute-0 nova_compute[250018]: 2026-01-20 15:06:53.844 250022 DEBUG nova.storage.rbd_utils [None req-a76b9ea5-d1d3-4ad5-8c22-84badce7b3cd 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] rbd image 26cf4955-374b-4e19-992f-e9348d555edf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:06:54 compute-0 nova_compute[250018]: 2026-01-20 15:06:54.055 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:06:54 compute-0 nova_compute[250018]: 2026-01-20 15:06:54.208 250022 INFO nova.virt.libvirt.driver [None req-a76b9ea5-d1d3-4ad5-8c22-84badce7b3cd 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] [instance: 26cf4955-374b-4e19-992f-e9348d555edf] Creating config drive at /var/lib/nova/instances/26cf4955-374b-4e19-992f-e9348d555edf/disk.config
Jan 20 15:06:54 compute-0 nova_compute[250018]: 2026-01-20 15:06:54.213 250022 DEBUG oslo_concurrency.processutils [None req-a76b9ea5-d1d3-4ad5-8c22-84badce7b3cd 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/26cf4955-374b-4e19-992f-e9348d555edf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcyihqknw execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:06:54 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:06:54 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:06:54 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:06:54.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:06:54 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2490: 321 pgs: 321 active+clean; 604 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 2.1 MiB/s wr, 149 op/s
Jan 20 15:06:54 compute-0 nova_compute[250018]: 2026-01-20 15:06:54.351 250022 DEBUG oslo_concurrency.processutils [None req-a76b9ea5-d1d3-4ad5-8c22-84badce7b3cd 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/26cf4955-374b-4e19-992f-e9348d555edf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcyihqknw" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:06:54 compute-0 nova_compute[250018]: 2026-01-20 15:06:54.383 250022 DEBUG nova.storage.rbd_utils [None req-a76b9ea5-d1d3-4ad5-8c22-84badce7b3cd 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] rbd image 26cf4955-374b-4e19-992f-e9348d555edf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:06:54 compute-0 nova_compute[250018]: 2026-01-20 15:06:54.386 250022 DEBUG oslo_concurrency.processutils [None req-a76b9ea5-d1d3-4ad5-8c22-84badce7b3cd 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/26cf4955-374b-4e19-992f-e9348d555edf/disk.config 26cf4955-374b-4e19-992f-e9348d555edf_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:06:54 compute-0 nova_compute[250018]: 2026-01-20 15:06:54.415 250022 DEBUG nova.network.neutron [req-2d07023a-91f6-4285-872f-57b81f82eb83 req-3c320dcd-8dc4-45be-ae9b-d0d237daff04 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 26cf4955-374b-4e19-992f-e9348d555edf] Updated VIF entry in instance network info cache for port 7656dd86-9d08-4781-86a7-c3f4abd07200. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 15:06:54 compute-0 nova_compute[250018]: 2026-01-20 15:06:54.416 250022 DEBUG nova.network.neutron [req-2d07023a-91f6-4285-872f-57b81f82eb83 req-3c320dcd-8dc4-45be-ae9b-d0d237daff04 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 26cf4955-374b-4e19-992f-e9348d555edf] Updating instance_info_cache with network_info: [{"id": "7656dd86-9d08-4781-86a7-c3f4abd07200", "address": "fa:16:3e:cb:54:6b", "network": {"id": "aca22591-2999-4ce4-8358-8365c76ef740", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-134827220-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "526db518ca3942b58ee346d4bd970e42", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7656dd86-9d", "ovs_interfaceid": "7656dd86-9d08-4781-86a7-c3f4abd07200", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:06:54 compute-0 nova_compute[250018]: 2026-01-20 15:06:54.432 250022 DEBUG oslo_concurrency.lockutils [req-2d07023a-91f6-4285-872f-57b81f82eb83 req-3c320dcd-8dc4-45be-ae9b-d0d237daff04 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-26cf4955-374b-4e19-992f-e9348d555edf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:06:54 compute-0 nova_compute[250018]: 2026-01-20 15:06:54.610 250022 DEBUG oslo_concurrency.processutils [None req-a76b9ea5-d1d3-4ad5-8c22-84badce7b3cd 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/26cf4955-374b-4e19-992f-e9348d555edf/disk.config 26cf4955-374b-4e19-992f-e9348d555edf_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.224s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:06:54 compute-0 nova_compute[250018]: 2026-01-20 15:06:54.611 250022 INFO nova.virt.libvirt.driver [None req-a76b9ea5-d1d3-4ad5-8c22-84badce7b3cd 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] [instance: 26cf4955-374b-4e19-992f-e9348d555edf] Deleting local config drive /var/lib/nova/instances/26cf4955-374b-4e19-992f-e9348d555edf/disk.config because it was imported into RBD.
Jan 20 15:06:54 compute-0 nova_compute[250018]: 2026-01-20 15:06:54.648 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:06:54 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:06:54 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:06:54 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/1255956302' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:06:54 compute-0 ceph-mon[74360]: pgmap v2490: 321 pgs: 321 active+clean; 604 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 2.1 MiB/s wr, 149 op/s
Jan 20 15:06:54 compute-0 kernel: tap7656dd86-9d: entered promiscuous mode
Jan 20 15:06:54 compute-0 NetworkManager[48960]: <info>  [1768921614.6864] manager: (tap7656dd86-9d): new Tun device (/org/freedesktop/NetworkManager/Devices/285)
Jan 20 15:06:54 compute-0 nova_compute[250018]: 2026-01-20 15:06:54.688 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:06:54 compute-0 ovn_controller[148666]: 2026-01-20T15:06:54Z|00579|binding|INFO|Claiming lport 7656dd86-9d08-4781-86a7-c3f4abd07200 for this chassis.
Jan 20 15:06:54 compute-0 ovn_controller[148666]: 2026-01-20T15:06:54Z|00580|binding|INFO|7656dd86-9d08-4781-86a7-c3f4abd07200: Claiming fa:16:3e:cb:54:6b 10.100.0.3
Jan 20 15:06:54 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:06:54.700 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cb:54:6b 10.100.0.3'], port_security=['fa:16:3e:cb:54:6b 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '26cf4955-374b-4e19-992f-e9348d555edf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aca22591-2999-4ce4-8358-8365c76ef740', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '526db518ca3942b58ee346d4bd970e42', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9d3da700-4f45-4293-94f4-24e0902718f4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ff8b5d28-350a-493f-9b5e-8ff92d1d40ce, chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=7656dd86-9d08-4781-86a7-c3f4abd07200) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:06:54 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:06:54.702 160071 INFO neutron.agent.ovn.metadata.agent [-] Port 7656dd86-9d08-4781-86a7-c3f4abd07200 in datapath aca22591-2999-4ce4-8358-8365c76ef740 bound to our chassis
Jan 20 15:06:54 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:06:54.704 160071 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network aca22591-2999-4ce4-8358-8365c76ef740
Jan 20 15:06:54 compute-0 ovn_controller[148666]: 2026-01-20T15:06:54Z|00581|binding|INFO|Setting lport 7656dd86-9d08-4781-86a7-c3f4abd07200 ovn-installed in OVS
Jan 20 15:06:54 compute-0 ovn_controller[148666]: 2026-01-20T15:06:54Z|00582|binding|INFO|Setting lport 7656dd86-9d08-4781-86a7-c3f4abd07200 up in Southbound
Jan 20 15:06:54 compute-0 nova_compute[250018]: 2026-01-20 15:06:54.715 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:06:54 compute-0 nova_compute[250018]: 2026-01-20 15:06:54.718 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:06:54 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:06:54.721 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[76eda96b-8f72-432a-8925-1f7e68ac92c3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:06:54 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:06:54.722 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapaca22591-21 in ovnmeta-aca22591-2999-4ce4-8358-8365c76ef740 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 15:06:54 compute-0 systemd-udevd[347678]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 15:06:54 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:06:54.724 257604 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapaca22591-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 15:06:54 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:06:54.724 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[7b69d212-dc2a-4105-8af4-cd31355e14cf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:06:54 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:06:54.725 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[593c6071-7b49-411b-8522-2aeb7106dbc7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:06:54 compute-0 systemd-machined[216401]: New machine qemu-74-instance-000000a6.
Jan 20 15:06:54 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:06:54.739 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[dafb29b5-5a61-4ac1-bb00-9d68cab222d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:06:54 compute-0 NetworkManager[48960]: <info>  [1768921614.7431] device (tap7656dd86-9d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 15:06:54 compute-0 systemd[1]: Started Virtual Machine qemu-74-instance-000000a6.
Jan 20 15:06:54 compute-0 NetworkManager[48960]: <info>  [1768921614.7455] device (tap7656dd86-9d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 15:06:54 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:06:54.767 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[3560f6d6-32c3-4e5c-a129-aafc712c7fb6]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:06:54 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:06:54.798 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[4dc17089-8d1c-4a1c-a920-048aa937ac02]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:06:54 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:06:54.803 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[2f1c38c8-7497-4954-9d46-f164e4bae4ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:06:54 compute-0 NetworkManager[48960]: <info>  [1768921614.8046] manager: (tapaca22591-20): new Veth device (/org/freedesktop/NetworkManager/Devices/286)
Jan 20 15:06:54 compute-0 systemd-udevd[347682]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 15:06:54 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:06:54.833 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[aa6c0c13-82b5-467e-8961-07c610a99541]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:06:54 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:06:54.835 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[f1d65db6-43a4-41c3-9f59-eaa0ceb0852a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:06:54 compute-0 NetworkManager[48960]: <info>  [1768921614.8549] device (tapaca22591-20): carrier: link connected
Jan 20 15:06:54 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:06:54.858 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[c0f485e6-76dd-4713-b6a7-48b0d6453977]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:06:54 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:06:54.874 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[8ba7cfeb-e949-4acb-b43a-d084340bd4c4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapaca22591-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:eb:99:99'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 189], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 755557, 'reachable_time': 26002, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 347711, 'error': None, 'target': 'ovnmeta-aca22591-2999-4ce4-8358-8365c76ef740', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:06:54 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:06:54.890 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[f7d3a684-2094-48ea-8fc1-be7b3012084c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feeb:9999'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 755557, 'tstamp': 755557}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 347712, 'error': None, 'target': 'ovnmeta-aca22591-2999-4ce4-8358-8365c76ef740', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:06:54 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:06:54.907 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[419982fb-757b-4303-85b7-c1ad1712de71]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapaca22591-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:eb:99:99'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 189], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 755557, 'reachable_time': 26002, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 347713, 'error': None, 'target': 'ovnmeta-aca22591-2999-4ce4-8358-8365c76ef740', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:06:54 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:06:54.939 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[a9593fd9-55cd-4947-8972-2b72cced6cf9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:06:55 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:06:55.035 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[ecabf97f-c61f-4a1f-807a-481329103420]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:06:55 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:06:55.037 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaca22591-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:06:55 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:06:55.037 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:06:55 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:06:55.038 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapaca22591-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:06:55 compute-0 nova_compute[250018]: 2026-01-20 15:06:55.039 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:06:55 compute-0 NetworkManager[48960]: <info>  [1768921615.0403] manager: (tapaca22591-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/287)
Jan 20 15:06:55 compute-0 kernel: tapaca22591-20: entered promiscuous mode
Jan 20 15:06:55 compute-0 nova_compute[250018]: 2026-01-20 15:06:55.043 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:06:55 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:06:55.043 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapaca22591-20, col_values=(('external_ids', {'iface-id': 'dfacd08f-5802-4e49-92e6-0c908c90ddcd'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:06:55 compute-0 nova_compute[250018]: 2026-01-20 15:06:55.044 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:06:55 compute-0 ovn_controller[148666]: 2026-01-20T15:06:55Z|00583|binding|INFO|Releasing lport dfacd08f-5802-4e49-92e6-0c908c90ddcd from this chassis (sb_readonly=0)
Jan 20 15:06:55 compute-0 nova_compute[250018]: 2026-01-20 15:06:55.058 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:06:55 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:06:55.059 160071 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/aca22591-2999-4ce4-8358-8365c76ef740.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/aca22591-2999-4ce4-8358-8365c76ef740.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 15:06:55 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:06:55.060 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[38409bd8-0fe8-42e1-8187-f0aba6196cc9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:06:55 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:06:55.061 160071 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 15:06:55 compute-0 ovn_metadata_agent[160049]: global
Jan 20 15:06:55 compute-0 ovn_metadata_agent[160049]:     log         /dev/log local0 debug
Jan 20 15:06:55 compute-0 ovn_metadata_agent[160049]:     log-tag     haproxy-metadata-proxy-aca22591-2999-4ce4-8358-8365c76ef740
Jan 20 15:06:55 compute-0 ovn_metadata_agent[160049]:     user        root
Jan 20 15:06:55 compute-0 ovn_metadata_agent[160049]:     group       root
Jan 20 15:06:55 compute-0 ovn_metadata_agent[160049]:     maxconn     1024
Jan 20 15:06:55 compute-0 ovn_metadata_agent[160049]:     pidfile     /var/lib/neutron/external/pids/aca22591-2999-4ce4-8358-8365c76ef740.pid.haproxy
Jan 20 15:06:55 compute-0 ovn_metadata_agent[160049]:     daemon
Jan 20 15:06:55 compute-0 ovn_metadata_agent[160049]: 
Jan 20 15:06:55 compute-0 ovn_metadata_agent[160049]: defaults
Jan 20 15:06:55 compute-0 ovn_metadata_agent[160049]:     log global
Jan 20 15:06:55 compute-0 ovn_metadata_agent[160049]:     mode http
Jan 20 15:06:55 compute-0 ovn_metadata_agent[160049]:     option httplog
Jan 20 15:06:55 compute-0 ovn_metadata_agent[160049]:     option dontlognull
Jan 20 15:06:55 compute-0 ovn_metadata_agent[160049]:     option http-server-close
Jan 20 15:06:55 compute-0 ovn_metadata_agent[160049]:     option forwardfor
Jan 20 15:06:55 compute-0 ovn_metadata_agent[160049]:     retries                 3
Jan 20 15:06:55 compute-0 ovn_metadata_agent[160049]:     timeout http-request    30s
Jan 20 15:06:55 compute-0 ovn_metadata_agent[160049]:     timeout connect         30s
Jan 20 15:06:55 compute-0 ovn_metadata_agent[160049]:     timeout client          32s
Jan 20 15:06:55 compute-0 ovn_metadata_agent[160049]:     timeout server          32s
Jan 20 15:06:55 compute-0 ovn_metadata_agent[160049]:     timeout http-keep-alive 30s
Jan 20 15:06:55 compute-0 ovn_metadata_agent[160049]: 
Jan 20 15:06:55 compute-0 ovn_metadata_agent[160049]: 
Jan 20 15:06:55 compute-0 ovn_metadata_agent[160049]: listen listener
Jan 20 15:06:55 compute-0 ovn_metadata_agent[160049]:     bind 169.254.169.254:80
Jan 20 15:06:55 compute-0 ovn_metadata_agent[160049]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 15:06:55 compute-0 ovn_metadata_agent[160049]:     http-request add-header X-OVN-Network-ID aca22591-2999-4ce4-8358-8365c76ef740
Jan 20 15:06:55 compute-0 ovn_metadata_agent[160049]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 15:06:55 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:06:55.062 160071 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-aca22591-2999-4ce4-8358-8365c76ef740', 'env', 'PROCESS_TAG=haproxy-aca22591-2999-4ce4-8358-8365c76ef740', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/aca22591-2999-4ce4-8358-8365c76ef740.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 15:06:55 compute-0 nova_compute[250018]: 2026-01-20 15:06:55.394 250022 DEBUG nova.compute.manager [req-41f13532-537c-45a5-927b-c5d4e7d0b388 req-1b342ec9-a96e-479d-90b0-f14dc5860efb 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 26cf4955-374b-4e19-992f-e9348d555edf] Received event network-vif-plugged-7656dd86-9d08-4781-86a7-c3f4abd07200 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:06:55 compute-0 nova_compute[250018]: 2026-01-20 15:06:55.396 250022 DEBUG oslo_concurrency.lockutils [req-41f13532-537c-45a5-927b-c5d4e7d0b388 req-1b342ec9-a96e-479d-90b0-f14dc5860efb 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "26cf4955-374b-4e19-992f-e9348d555edf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:06:55 compute-0 nova_compute[250018]: 2026-01-20 15:06:55.396 250022 DEBUG oslo_concurrency.lockutils [req-41f13532-537c-45a5-927b-c5d4e7d0b388 req-1b342ec9-a96e-479d-90b0-f14dc5860efb 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "26cf4955-374b-4e19-992f-e9348d555edf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:06:55 compute-0 nova_compute[250018]: 2026-01-20 15:06:55.396 250022 DEBUG oslo_concurrency.lockutils [req-41f13532-537c-45a5-927b-c5d4e7d0b388 req-1b342ec9-a96e-479d-90b0-f14dc5860efb 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "26cf4955-374b-4e19-992f-e9348d555edf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:06:55 compute-0 nova_compute[250018]: 2026-01-20 15:06:55.396 250022 DEBUG nova.compute.manager [req-41f13532-537c-45a5-927b-c5d4e7d0b388 req-1b342ec9-a96e-479d-90b0-f14dc5860efb 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 26cf4955-374b-4e19-992f-e9348d555edf] Processing event network-vif-plugged-7656dd86-9d08-4781-86a7-c3f4abd07200 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 20 15:06:55 compute-0 podman[347766]: 2026-01-20 15:06:55.422184416 +0000 UTC m=+0.045680624 container create 77371b5b700686bff500d6e5821bb38a1b475b8cbbdaecaf2b7da5c177f30e8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-aca22591-2999-4ce4-8358-8365c76ef740, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202)
Jan 20 15:06:55 compute-0 systemd[1]: Started libpod-conmon-77371b5b700686bff500d6e5821bb38a1b475b8cbbdaecaf2b7da5c177f30e8b.scope.
Jan 20 15:06:55 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:06:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a91053e1f85d3aa07c363e6b5d42c3c432c6566479a3d61ca73af419e05a8c2/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 15:06:55 compute-0 podman[347766]: 2026-01-20 15:06:55.486165981 +0000 UTC m=+0.109662209 container init 77371b5b700686bff500d6e5821bb38a1b475b8cbbdaecaf2b7da5c177f30e8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-aca22591-2999-4ce4-8358-8365c76ef740, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Jan 20 15:06:55 compute-0 podman[347766]: 2026-01-20 15:06:55.49241721 +0000 UTC m=+0.115913418 container start 77371b5b700686bff500d6e5821bb38a1b475b8cbbdaecaf2b7da5c177f30e8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-aca22591-2999-4ce4-8358-8365c76ef740, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Jan 20 15:06:55 compute-0 podman[347766]: 2026-01-20 15:06:55.395916017 +0000 UTC m=+0.019412245 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 15:06:55 compute-0 neutron-haproxy-ovnmeta-aca22591-2999-4ce4-8358-8365c76ef740[347800]: [NOTICE]   (347805) : New worker (347807) forked
Jan 20 15:06:55 compute-0 neutron-haproxy-ovnmeta-aca22591-2999-4ce4-8358-8365c76ef740[347800]: [NOTICE]   (347805) : Loading success.
Jan 20 15:06:55 compute-0 nova_compute[250018]: 2026-01-20 15:06:55.528 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768921615.5282526, 26cf4955-374b-4e19-992f-e9348d555edf => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:06:55 compute-0 nova_compute[250018]: 2026-01-20 15:06:55.529 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 26cf4955-374b-4e19-992f-e9348d555edf] VM Started (Lifecycle Event)
Jan 20 15:06:55 compute-0 nova_compute[250018]: 2026-01-20 15:06:55.531 250022 DEBUG nova.compute.manager [None req-a76b9ea5-d1d3-4ad5-8c22-84badce7b3cd 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] [instance: 26cf4955-374b-4e19-992f-e9348d555edf] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 15:06:55 compute-0 nova_compute[250018]: 2026-01-20 15:06:55.534 250022 DEBUG nova.virt.libvirt.driver [None req-a76b9ea5-d1d3-4ad5-8c22-84badce7b3cd 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] [instance: 26cf4955-374b-4e19-992f-e9348d555edf] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 15:06:55 compute-0 nova_compute[250018]: 2026-01-20 15:06:55.542 250022 INFO nova.virt.libvirt.driver [-] [instance: 26cf4955-374b-4e19-992f-e9348d555edf] Instance spawned successfully.
Jan 20 15:06:55 compute-0 nova_compute[250018]: 2026-01-20 15:06:55.542 250022 DEBUG nova.virt.libvirt.driver [None req-a76b9ea5-d1d3-4ad5-8c22-84badce7b3cd 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] [instance: 26cf4955-374b-4e19-992f-e9348d555edf] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 15:06:55 compute-0 nova_compute[250018]: 2026-01-20 15:06:55.557 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 26cf4955-374b-4e19-992f-e9348d555edf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:06:55 compute-0 nova_compute[250018]: 2026-01-20 15:06:55.561 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 26cf4955-374b-4e19-992f-e9348d555edf] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:06:55 compute-0 nova_compute[250018]: 2026-01-20 15:06:55.572 250022 DEBUG nova.virt.libvirt.driver [None req-a76b9ea5-d1d3-4ad5-8c22-84badce7b3cd 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] [instance: 26cf4955-374b-4e19-992f-e9348d555edf] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:06:55 compute-0 nova_compute[250018]: 2026-01-20 15:06:55.573 250022 DEBUG nova.virt.libvirt.driver [None req-a76b9ea5-d1d3-4ad5-8c22-84badce7b3cd 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] [instance: 26cf4955-374b-4e19-992f-e9348d555edf] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:06:55 compute-0 nova_compute[250018]: 2026-01-20 15:06:55.573 250022 DEBUG nova.virt.libvirt.driver [None req-a76b9ea5-d1d3-4ad5-8c22-84badce7b3cd 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] [instance: 26cf4955-374b-4e19-992f-e9348d555edf] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:06:55 compute-0 nova_compute[250018]: 2026-01-20 15:06:55.574 250022 DEBUG nova.virt.libvirt.driver [None req-a76b9ea5-d1d3-4ad5-8c22-84badce7b3cd 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] [instance: 26cf4955-374b-4e19-992f-e9348d555edf] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:06:55 compute-0 nova_compute[250018]: 2026-01-20 15:06:55.574 250022 DEBUG nova.virt.libvirt.driver [None req-a76b9ea5-d1d3-4ad5-8c22-84badce7b3cd 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] [instance: 26cf4955-374b-4e19-992f-e9348d555edf] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:06:55 compute-0 nova_compute[250018]: 2026-01-20 15:06:55.575 250022 DEBUG nova.virt.libvirt.driver [None req-a76b9ea5-d1d3-4ad5-8c22-84badce7b3cd 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] [instance: 26cf4955-374b-4e19-992f-e9348d555edf] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:06:55 compute-0 nova_compute[250018]: 2026-01-20 15:06:55.587 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 26cf4955-374b-4e19-992f-e9348d555edf] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 15:06:55 compute-0 nova_compute[250018]: 2026-01-20 15:06:55.588 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768921615.5290897, 26cf4955-374b-4e19-992f-e9348d555edf => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:06:55 compute-0 nova_compute[250018]: 2026-01-20 15:06:55.588 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 26cf4955-374b-4e19-992f-e9348d555edf] VM Paused (Lifecycle Event)
Jan 20 15:06:55 compute-0 nova_compute[250018]: 2026-01-20 15:06:55.609 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 26cf4955-374b-4e19-992f-e9348d555edf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:06:55 compute-0 nova_compute[250018]: 2026-01-20 15:06:55.613 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768921615.5331748, 26cf4955-374b-4e19-992f-e9348d555edf => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:06:55 compute-0 nova_compute[250018]: 2026-01-20 15:06:55.613 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 26cf4955-374b-4e19-992f-e9348d555edf] VM Resumed (Lifecycle Event)
Jan 20 15:06:55 compute-0 nova_compute[250018]: 2026-01-20 15:06:55.635 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 26cf4955-374b-4e19-992f-e9348d555edf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:06:55 compute-0 nova_compute[250018]: 2026-01-20 15:06:55.638 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 26cf4955-374b-4e19-992f-e9348d555edf] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:06:55 compute-0 nova_compute[250018]: 2026-01-20 15:06:55.644 250022 INFO nova.compute.manager [None req-a76b9ea5-d1d3-4ad5-8c22-84badce7b3cd 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] [instance: 26cf4955-374b-4e19-992f-e9348d555edf] Took 9.35 seconds to spawn the instance on the hypervisor.
Jan 20 15:06:55 compute-0 nova_compute[250018]: 2026-01-20 15:06:55.644 250022 DEBUG nova.compute.manager [None req-a76b9ea5-d1d3-4ad5-8c22-84badce7b3cd 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] [instance: 26cf4955-374b-4e19-992f-e9348d555edf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:06:55 compute-0 nova_compute[250018]: 2026-01-20 15:06:55.671 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 26cf4955-374b-4e19-992f-e9348d555edf] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 15:06:55 compute-0 nova_compute[250018]: 2026-01-20 15:06:55.711 250022 INFO nova.compute.manager [None req-a76b9ea5-d1d3-4ad5-8c22-84badce7b3cd 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] [instance: 26cf4955-374b-4e19-992f-e9348d555edf] Took 10.27 seconds to build instance.
Jan 20 15:06:55 compute-0 nova_compute[250018]: 2026-01-20 15:06:55.727 250022 DEBUG oslo_concurrency.lockutils [None req-a76b9ea5-d1d3-4ad5-8c22-84badce7b3cd 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] Lock "26cf4955-374b-4e19-992f-e9348d555edf" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.344s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:06:55 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:06:55 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:06:55 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:06:55.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:06:55 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e366 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:06:56 compute-0 nova_compute[250018]: 2026-01-20 15:06:56.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:06:56 compute-0 nova_compute[250018]: 2026-01-20 15:06:56.080 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:06:56 compute-0 nova_compute[250018]: 2026-01-20 15:06:56.080 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:06:56 compute-0 nova_compute[250018]: 2026-01-20 15:06:56.081 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:06:56 compute-0 nova_compute[250018]: 2026-01-20 15:06:56.081 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 15:06:56 compute-0 nova_compute[250018]: 2026-01-20 15:06:56.081 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:06:56 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/1344781968' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:06:56 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:06:56 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:06:56 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:06:56.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:06:56 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2491: 321 pgs: 321 active+clean; 620 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 3.5 MiB/s wr, 163 op/s
Jan 20 15:06:56 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:06:56 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1667667141' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:06:56 compute-0 nova_compute[250018]: 2026-01-20 15:06:56.543 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:06:56 compute-0 nova_compute[250018]: 2026-01-20 15:06:56.615 250022 DEBUG nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] skipping disk for instance-000000a6 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 15:06:56 compute-0 nova_compute[250018]: 2026-01-20 15:06:56.616 250022 DEBUG nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] skipping disk for instance-000000a6 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 15:06:56 compute-0 nova_compute[250018]: 2026-01-20 15:06:56.620 250022 DEBUG nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] skipping disk for instance-000000a4 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 15:06:56 compute-0 nova_compute[250018]: 2026-01-20 15:06:56.620 250022 DEBUG nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] skipping disk for instance-000000a4 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 15:06:56 compute-0 nova_compute[250018]: 2026-01-20 15:06:56.798 250022 WARNING nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:06:56 compute-0 nova_compute[250018]: 2026-01-20 15:06:56.799 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3893MB free_disk=20.825679779052734GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 15:06:56 compute-0 nova_compute[250018]: 2026-01-20 15:06:56.800 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:06:56 compute-0 nova_compute[250018]: 2026-01-20 15:06:56.800 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:06:56 compute-0 nova_compute[250018]: 2026-01-20 15:06:56.904 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Instance 0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 15:06:56 compute-0 nova_compute[250018]: 2026-01-20 15:06:56.905 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Instance 26cf4955-374b-4e19-992f-e9348d555edf actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 15:06:56 compute-0 nova_compute[250018]: 2026-01-20 15:06:56.905 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 15:06:56 compute-0 nova_compute[250018]: 2026-01-20 15:06:56.905 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 15:06:56 compute-0 nova_compute[250018]: 2026-01-20 15:06:56.957 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:06:57 compute-0 ceph-mon[74360]: pgmap v2491: 321 pgs: 321 active+clean; 620 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 3.5 MiB/s wr, 163 op/s
Jan 20 15:06:57 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/1667667141' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:06:57 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/1577499332' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:06:57 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/485878596' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:06:57 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:06:57 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3241671892' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:06:57 compute-0 nova_compute[250018]: 2026-01-20 15:06:57.417 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:06:57 compute-0 nova_compute[250018]: 2026-01-20 15:06:57.424 250022 DEBUG nova.compute.provider_tree [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:06:57 compute-0 nova_compute[250018]: 2026-01-20 15:06:57.444 250022 DEBUG nova.scheduler.client.report [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:06:57 compute-0 nova_compute[250018]: 2026-01-20 15:06:57.475 250022 DEBUG nova.compute.manager [req-dfbcc217-8182-4e13-a5cb-afa7278fa08a req-783b6934-c3e2-4200-a553-00d2dd254909 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 26cf4955-374b-4e19-992f-e9348d555edf] Received event network-vif-plugged-7656dd86-9d08-4781-86a7-c3f4abd07200 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:06:57 compute-0 nova_compute[250018]: 2026-01-20 15:06:57.476 250022 DEBUG oslo_concurrency.lockutils [req-dfbcc217-8182-4e13-a5cb-afa7278fa08a req-783b6934-c3e2-4200-a553-00d2dd254909 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "26cf4955-374b-4e19-992f-e9348d555edf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:06:57 compute-0 nova_compute[250018]: 2026-01-20 15:06:57.477 250022 DEBUG oslo_concurrency.lockutils [req-dfbcc217-8182-4e13-a5cb-afa7278fa08a req-783b6934-c3e2-4200-a553-00d2dd254909 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "26cf4955-374b-4e19-992f-e9348d555edf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:06:57 compute-0 nova_compute[250018]: 2026-01-20 15:06:57.478 250022 DEBUG oslo_concurrency.lockutils [req-dfbcc217-8182-4e13-a5cb-afa7278fa08a req-783b6934-c3e2-4200-a553-00d2dd254909 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "26cf4955-374b-4e19-992f-e9348d555edf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:06:57 compute-0 nova_compute[250018]: 2026-01-20 15:06:57.478 250022 DEBUG nova.compute.manager [req-dfbcc217-8182-4e13-a5cb-afa7278fa08a req-783b6934-c3e2-4200-a553-00d2dd254909 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 26cf4955-374b-4e19-992f-e9348d555edf] No waiting events found dispatching network-vif-plugged-7656dd86-9d08-4781-86a7-c3f4abd07200 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:06:57 compute-0 nova_compute[250018]: 2026-01-20 15:06:57.479 250022 WARNING nova.compute.manager [req-dfbcc217-8182-4e13-a5cb-afa7278fa08a req-783b6934-c3e2-4200-a553-00d2dd254909 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 26cf4955-374b-4e19-992f-e9348d555edf] Received unexpected event network-vif-plugged-7656dd86-9d08-4781-86a7-c3f4abd07200 for instance with vm_state active and task_state None.
Jan 20 15:06:57 compute-0 nova_compute[250018]: 2026-01-20 15:06:57.482 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 15:06:57 compute-0 nova_compute[250018]: 2026-01-20 15:06:57.483 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.683s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:06:57 compute-0 sudo[347862]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:06:57 compute-0 sudo[347862]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:06:57 compute-0 sudo[347862]: pam_unix(sudo:session): session closed for user root
Jan 20 15:06:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 15:06:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 15:06:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 15:06:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 15:06:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 15:06:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 15:06:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 15:06:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 15:06:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 15:06:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 15:06:57 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:06:57 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:06:57 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:06:57.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:06:57 compute-0 sudo[347887]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:06:57 compute-0 sudo[347887]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:06:57 compute-0 sudo[347887]: pam_unix(sudo:session): session closed for user root
Jan 20 15:06:58 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:06:58 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:06:58 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:06:58.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:06:58 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3241671892' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:06:58 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/4168235077' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:06:58 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2492: 321 pgs: 321 active+clean; 620 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 306 KiB/s rd, 3.5 MiB/s wr, 75 op/s
Jan 20 15:06:58 compute-0 nova_compute[250018]: 2026-01-20 15:06:58.478 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:06:58 compute-0 nova_compute[250018]: 2026-01-20 15:06:58.479 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:06:58 compute-0 nova_compute[250018]: 2026-01-20 15:06:58.479 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:06:58 compute-0 nova_compute[250018]: 2026-01-20 15:06:58.479 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 15:06:58 compute-0 nova_compute[250018]: 2026-01-20 15:06:58.655 250022 DEBUG oslo_concurrency.lockutils [None req-0f15c759-9583-42d5-afb2-5e9249162fc8 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Acquiring lock "0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:06:58 compute-0 nova_compute[250018]: 2026-01-20 15:06:58.656 250022 DEBUG oslo_concurrency.lockutils [None req-0f15c759-9583-42d5-afb2-5e9249162fc8 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Lock "0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:06:58 compute-0 nova_compute[250018]: 2026-01-20 15:06:58.666 250022 DEBUG nova.compute.manager [req-4ced68b0-c32f-4170-b2a6-bedd549730aa req-ac7fb97b-5cc0-4d57-b2fc-d0dc29a7ceea 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 26cf4955-374b-4e19-992f-e9348d555edf] Received event network-changed-7656dd86-9d08-4781-86a7-c3f4abd07200 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:06:58 compute-0 nova_compute[250018]: 2026-01-20 15:06:58.667 250022 DEBUG nova.compute.manager [req-4ced68b0-c32f-4170-b2a6-bedd549730aa req-ac7fb97b-5cc0-4d57-b2fc-d0dc29a7ceea 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 26cf4955-374b-4e19-992f-e9348d555edf] Refreshing instance network info cache due to event network-changed-7656dd86-9d08-4781-86a7-c3f4abd07200. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 15:06:58 compute-0 nova_compute[250018]: 2026-01-20 15:06:58.668 250022 DEBUG oslo_concurrency.lockutils [req-4ced68b0-c32f-4170-b2a6-bedd549730aa req-ac7fb97b-5cc0-4d57-b2fc-d0dc29a7ceea 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-26cf4955-374b-4e19-992f-e9348d555edf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:06:58 compute-0 nova_compute[250018]: 2026-01-20 15:06:58.668 250022 DEBUG oslo_concurrency.lockutils [req-4ced68b0-c32f-4170-b2a6-bedd549730aa req-ac7fb97b-5cc0-4d57-b2fc-d0dc29a7ceea 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-26cf4955-374b-4e19-992f-e9348d555edf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:06:58 compute-0 nova_compute[250018]: 2026-01-20 15:06:58.669 250022 DEBUG nova.network.neutron [req-4ced68b0-c32f-4170-b2a6-bedd549730aa req-ac7fb97b-5cc0-4d57-b2fc-d0dc29a7ceea 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 26cf4955-374b-4e19-992f-e9348d555edf] Refreshing network info cache for port 7656dd86-9d08-4781-86a7-c3f4abd07200 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 15:06:58 compute-0 nova_compute[250018]: 2026-01-20 15:06:58.676 250022 DEBUG nova.objects.instance [None req-0f15c759-9583-42d5-afb2-5e9249162fc8 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Lazy-loading 'flavor' on Instance uuid 0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:06:58 compute-0 nova_compute[250018]: 2026-01-20 15:06:58.707 250022 DEBUG oslo_concurrency.lockutils [None req-0f15c759-9583-42d5-afb2-5e9249162fc8 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Lock "0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.051s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:06:58 compute-0 nova_compute[250018]: 2026-01-20 15:06:58.746 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:06:58 compute-0 nova_compute[250018]: 2026-01-20 15:06:58.990 250022 DEBUG oslo_concurrency.lockutils [None req-0f15c759-9583-42d5-afb2-5e9249162fc8 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Acquiring lock "0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:06:58 compute-0 nova_compute[250018]: 2026-01-20 15:06:58.991 250022 DEBUG oslo_concurrency.lockutils [None req-0f15c759-9583-42d5-afb2-5e9249162fc8 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Lock "0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:06:58 compute-0 nova_compute[250018]: 2026-01-20 15:06:58.991 250022 INFO nova.compute.manager [None req-0f15c759-9583-42d5-afb2-5e9249162fc8 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] [instance: 0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04] Attaching volume 44e852b3-daf0-4085-aca9-bb61206f2ff9 to /dev/vdb
Jan 20 15:06:59 compute-0 nova_compute[250018]: 2026-01-20 15:06:59.156 250022 DEBUG os_brick.utils [None req-0f15c759-9583-42d5-afb2-5e9249162fc8 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Jan 20 15:06:59 compute-0 nova_compute[250018]: 2026-01-20 15:06:59.158 268150 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:06:59 compute-0 nova_compute[250018]: 2026-01-20 15:06:59.169 268150 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:06:59 compute-0 nova_compute[250018]: 2026-01-20 15:06:59.169 268150 DEBUG oslo.privsep.daemon [-] privsep: reply[d78bdf72-e094-49f9-8998-0921d77846b9]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:06:59 compute-0 nova_compute[250018]: 2026-01-20 15:06:59.170 268150 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:06:59 compute-0 nova_compute[250018]: 2026-01-20 15:06:59.178 268150 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:06:59 compute-0 nova_compute[250018]: 2026-01-20 15:06:59.179 268150 DEBUG oslo.privsep.daemon [-] privsep: reply[9598155c-b289-411f-9faa-6390d71f8a46]: (4, ('InitiatorName=iqn.1994-05.com.redhat:228389a1f17e', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:06:59 compute-0 nova_compute[250018]: 2026-01-20 15:06:59.180 268150 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:06:59 compute-0 nova_compute[250018]: 2026-01-20 15:06:59.188 268150 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:06:59 compute-0 nova_compute[250018]: 2026-01-20 15:06:59.188 268150 DEBUG oslo.privsep.daemon [-] privsep: reply[05ca12c3-1663-418e-a155-530455be5239]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:06:59 compute-0 nova_compute[250018]: 2026-01-20 15:06:59.189 268150 DEBUG oslo.privsep.daemon [-] privsep: reply[ebbe417b-25a0-47cd-90a4-4ba19f9681f7]: (4, '35085f33-1a27-41e3-805d-02c7ac6a1d7f') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:06:59 compute-0 nova_compute[250018]: 2026-01-20 15:06:59.190 250022 DEBUG oslo_concurrency.processutils [None req-0f15c759-9583-42d5-afb2-5e9249162fc8 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:06:59 compute-0 nova_compute[250018]: 2026-01-20 15:06:59.227 250022 DEBUG oslo_concurrency.processutils [None req-0f15c759-9583-42d5-afb2-5e9249162fc8 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] CMD "nvme version" returned: 0 in 0.037s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:06:59 compute-0 nova_compute[250018]: 2026-01-20 15:06:59.232 250022 DEBUG os_brick.initiator.connectors.lightos [None req-0f15c759-9583-42d5-afb2-5e9249162fc8 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Jan 20 15:06:59 compute-0 nova_compute[250018]: 2026-01-20 15:06:59.233 250022 DEBUG os_brick.initiator.connectors.lightos [None req-0f15c759-9583-42d5-afb2-5e9249162fc8 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Jan 20 15:06:59 compute-0 nova_compute[250018]: 2026-01-20 15:06:59.233 250022 DEBUG os_brick.initiator.connectors.lightos [None req-0f15c759-9583-42d5-afb2-5e9249162fc8 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Jan 20 15:06:59 compute-0 nova_compute[250018]: 2026-01-20 15:06:59.234 250022 DEBUG os_brick.utils [None req-0f15c759-9583-42d5-afb2-5e9249162fc8 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] <== get_connector_properties: return (76ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:228389a1f17e', 'do_local_attach': False, 'nvme_hostid': '5350774e-8b5e-4dba-80a9-92d405981c1d', 'system uuid': '35085f33-1a27-41e3-805d-02c7ac6a1d7f', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Jan 20 15:06:59 compute-0 nova_compute[250018]: 2026-01-20 15:06:59.235 250022 DEBUG nova.virt.block_device [None req-0f15c759-9583-42d5-afb2-5e9249162fc8 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] [instance: 0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04] Updating existing volume attachment record: dea25cce-bddc-4c9f-bce3-c68d03f0834f _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Jan 20 15:06:59 compute-0 ceph-mon[74360]: pgmap v2492: 321 pgs: 321 active+clean; 620 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 306 KiB/s rd, 3.5 MiB/s wr, 75 op/s
Jan 20 15:06:59 compute-0 nova_compute[250018]: 2026-01-20 15:06:59.687 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:06:59 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:06:59 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:06:59 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:06:59.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:07:00 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 15:07:00 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3667091613' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:07:00 compute-0 nova_compute[250018]: 2026-01-20 15:07:00.084 250022 DEBUG nova.network.neutron [req-4ced68b0-c32f-4170-b2a6-bedd549730aa req-ac7fb97b-5cc0-4d57-b2fc-d0dc29a7ceea 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 26cf4955-374b-4e19-992f-e9348d555edf] Updated VIF entry in instance network info cache for port 7656dd86-9d08-4781-86a7-c3f4abd07200. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 15:07:00 compute-0 nova_compute[250018]: 2026-01-20 15:07:00.084 250022 DEBUG nova.network.neutron [req-4ced68b0-c32f-4170-b2a6-bedd549730aa req-ac7fb97b-5cc0-4d57-b2fc-d0dc29a7ceea 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 26cf4955-374b-4e19-992f-e9348d555edf] Updating instance_info_cache with network_info: [{"id": "7656dd86-9d08-4781-86a7-c3f4abd07200", "address": "fa:16:3e:cb:54:6b", "network": {"id": "aca22591-2999-4ce4-8358-8365c76ef740", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-134827220-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.214", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "526db518ca3942b58ee346d4bd970e42", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7656dd86-9d", "ovs_interfaceid": "7656dd86-9d08-4781-86a7-c3f4abd07200", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:07:00 compute-0 nova_compute[250018]: 2026-01-20 15:07:00.103 250022 DEBUG oslo_concurrency.lockutils [req-4ced68b0-c32f-4170-b2a6-bedd549730aa req-ac7fb97b-5cc0-4d57-b2fc-d0dc29a7ceea 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-26cf4955-374b-4e19-992f-e9348d555edf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:07:00 compute-0 nova_compute[250018]: 2026-01-20 15:07:00.173 250022 DEBUG nova.objects.instance [None req-0f15c759-9583-42d5-afb2-5e9249162fc8 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Lazy-loading 'flavor' on Instance uuid 0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:07:00 compute-0 nova_compute[250018]: 2026-01-20 15:07:00.190 250022 DEBUG nova.virt.libvirt.driver [None req-0f15c759-9583-42d5-afb2-5e9249162fc8 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] [instance: 0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04] Attempting to attach volume 44e852b3-daf0-4085-aca9-bb61206f2ff9 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Jan 20 15:07:00 compute-0 nova_compute[250018]: 2026-01-20 15:07:00.193 250022 DEBUG nova.virt.libvirt.guest [None req-0f15c759-9583-42d5-afb2-5e9249162fc8 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] attach device xml: <disk type="network" device="disk">
Jan 20 15:07:00 compute-0 nova_compute[250018]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 20 15:07:00 compute-0 nova_compute[250018]:   <source protocol="rbd" name="volumes/volume-44e852b3-daf0-4085-aca9-bb61206f2ff9">
Jan 20 15:07:00 compute-0 nova_compute[250018]:     <host name="192.168.122.100" port="6789"/>
Jan 20 15:07:00 compute-0 nova_compute[250018]:     <host name="192.168.122.102" port="6789"/>
Jan 20 15:07:00 compute-0 nova_compute[250018]:     <host name="192.168.122.101" port="6789"/>
Jan 20 15:07:00 compute-0 nova_compute[250018]:   </source>
Jan 20 15:07:00 compute-0 nova_compute[250018]:   <auth username="openstack">
Jan 20 15:07:00 compute-0 nova_compute[250018]:     <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 15:07:00 compute-0 nova_compute[250018]:   </auth>
Jan 20 15:07:00 compute-0 nova_compute[250018]:   <target dev="vdb" bus="virtio"/>
Jan 20 15:07:00 compute-0 nova_compute[250018]:   <serial>44e852b3-daf0-4085-aca9-bb61206f2ff9</serial>
Jan 20 15:07:00 compute-0 nova_compute[250018]: </disk>
Jan 20 15:07:00 compute-0 nova_compute[250018]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Jan 20 15:07:00 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:07:00 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:07:00 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:07:00.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:07:00 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/3667091613' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:07:00 compute-0 nova_compute[250018]: 2026-01-20 15:07:00.330 250022 DEBUG nova.virt.libvirt.driver [None req-0f15c759-9583-42d5-afb2-5e9249162fc8 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:07:00 compute-0 nova_compute[250018]: 2026-01-20 15:07:00.330 250022 DEBUG nova.virt.libvirt.driver [None req-0f15c759-9583-42d5-afb2-5e9249162fc8 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:07:00 compute-0 nova_compute[250018]: 2026-01-20 15:07:00.330 250022 DEBUG nova.virt.libvirt.driver [None req-0f15c759-9583-42d5-afb2-5e9249162fc8 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:07:00 compute-0 nova_compute[250018]: 2026-01-20 15:07:00.330 250022 DEBUG nova.virt.libvirt.driver [None req-0f15c759-9583-42d5-afb2-5e9249162fc8 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] No VIF found with MAC fa:16:3e:18:5f:10, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 15:07:00 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2493: 321 pgs: 321 active+clean; 636 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 4.1 MiB/s wr, 156 op/s
Jan 20 15:07:00 compute-0 nova_compute[250018]: 2026-01-20 15:07:00.575 250022 DEBUG oslo_concurrency.lockutils [None req-0f15c759-9583-42d5-afb2-5e9249162fc8 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Lock "0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.585s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:07:00 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e366 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:07:01 compute-0 ceph-mon[74360]: pgmap v2493: 321 pgs: 321 active+clean; 636 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 4.1 MiB/s wr, 156 op/s
Jan 20 15:07:01 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:07:01 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:07:01 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:07:01.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:07:02 compute-0 nova_compute[250018]: 2026-01-20 15:07:02.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:07:02 compute-0 nova_compute[250018]: 2026-01-20 15:07:02.051 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 15:07:02 compute-0 nova_compute[250018]: 2026-01-20 15:07:02.053 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 15:07:02 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:07:02 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:07:02 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:07:02.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:07:02 compute-0 nova_compute[250018]: 2026-01-20 15:07:02.281 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "refresh_cache-0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:07:02 compute-0 nova_compute[250018]: 2026-01-20 15:07:02.282 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquired lock "refresh_cache-0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:07:02 compute-0 nova_compute[250018]: 2026-01-20 15:07:02.282 250022 DEBUG nova.network.neutron [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] [instance: 0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 20 15:07:02 compute-0 nova_compute[250018]: 2026-01-20 15:07:02.283 250022 DEBUG nova.objects.instance [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:07:02 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2494: 321 pgs: 321 active+clean; 637 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.6 MiB/s wr, 149 op/s
Jan 20 15:07:02 compute-0 nova_compute[250018]: 2026-01-20 15:07:02.775 250022 DEBUG oslo_concurrency.lockutils [None req-003693fa-c15a-41e3-8fa3-0b228836bdd9 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Acquiring lock "0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:07:02 compute-0 nova_compute[250018]: 2026-01-20 15:07:02.776 250022 DEBUG oslo_concurrency.lockutils [None req-003693fa-c15a-41e3-8fa3-0b228836bdd9 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Lock "0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:07:02 compute-0 nova_compute[250018]: 2026-01-20 15:07:02.787 250022 INFO nova.compute.manager [None req-003693fa-c15a-41e3-8fa3-0b228836bdd9 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] [instance: 0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04] Detaching volume 44e852b3-daf0-4085-aca9-bb61206f2ff9
Jan 20 15:07:02 compute-0 nova_compute[250018]: 2026-01-20 15:07:02.895 250022 INFO nova.virt.block_device [None req-003693fa-c15a-41e3-8fa3-0b228836bdd9 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] [instance: 0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04] Attempting to driver detach volume 44e852b3-daf0-4085-aca9-bb61206f2ff9 from mountpoint /dev/vdb
Jan 20 15:07:02 compute-0 nova_compute[250018]: 2026-01-20 15:07:02.903 250022 DEBUG nova.virt.libvirt.driver [None req-003693fa-c15a-41e3-8fa3-0b228836bdd9 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Attempting to detach device vdb from instance 0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Jan 20 15:07:02 compute-0 nova_compute[250018]: 2026-01-20 15:07:02.903 250022 DEBUG nova.virt.libvirt.guest [None req-003693fa-c15a-41e3-8fa3-0b228836bdd9 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] detach device xml: <disk type="network" device="disk">
Jan 20 15:07:02 compute-0 nova_compute[250018]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 20 15:07:02 compute-0 nova_compute[250018]:   <source protocol="rbd" name="volumes/volume-44e852b3-daf0-4085-aca9-bb61206f2ff9">
Jan 20 15:07:02 compute-0 nova_compute[250018]:     <host name="192.168.122.100" port="6789"/>
Jan 20 15:07:02 compute-0 nova_compute[250018]:     <host name="192.168.122.102" port="6789"/>
Jan 20 15:07:02 compute-0 nova_compute[250018]:     <host name="192.168.122.101" port="6789"/>
Jan 20 15:07:02 compute-0 nova_compute[250018]:   </source>
Jan 20 15:07:02 compute-0 nova_compute[250018]:   <target dev="vdb" bus="virtio"/>
Jan 20 15:07:02 compute-0 nova_compute[250018]:   <serial>44e852b3-daf0-4085-aca9-bb61206f2ff9</serial>
Jan 20 15:07:02 compute-0 nova_compute[250018]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Jan 20 15:07:02 compute-0 nova_compute[250018]: </disk>
Jan 20 15:07:02 compute-0 nova_compute[250018]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 20 15:07:02 compute-0 nova_compute[250018]: 2026-01-20 15:07:02.909 250022 INFO nova.virt.libvirt.driver [None req-003693fa-c15a-41e3-8fa3-0b228836bdd9 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Successfully detached device vdb from instance 0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04 from the persistent domain config.
Jan 20 15:07:02 compute-0 nova_compute[250018]: 2026-01-20 15:07:02.909 250022 DEBUG nova.virt.libvirt.driver [None req-003693fa-c15a-41e3-8fa3-0b228836bdd9 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Jan 20 15:07:02 compute-0 nova_compute[250018]: 2026-01-20 15:07:02.910 250022 DEBUG nova.virt.libvirt.guest [None req-003693fa-c15a-41e3-8fa3-0b228836bdd9 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] detach device xml: <disk type="network" device="disk">
Jan 20 15:07:02 compute-0 nova_compute[250018]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 20 15:07:02 compute-0 nova_compute[250018]:   <source protocol="rbd" name="volumes/volume-44e852b3-daf0-4085-aca9-bb61206f2ff9">
Jan 20 15:07:02 compute-0 nova_compute[250018]:     <host name="192.168.122.100" port="6789"/>
Jan 20 15:07:02 compute-0 nova_compute[250018]:     <host name="192.168.122.102" port="6789"/>
Jan 20 15:07:02 compute-0 nova_compute[250018]:     <host name="192.168.122.101" port="6789"/>
Jan 20 15:07:02 compute-0 nova_compute[250018]:   </source>
Jan 20 15:07:02 compute-0 nova_compute[250018]:   <target dev="vdb" bus="virtio"/>
Jan 20 15:07:02 compute-0 nova_compute[250018]:   <serial>44e852b3-daf0-4085-aca9-bb61206f2ff9</serial>
Jan 20 15:07:02 compute-0 nova_compute[250018]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Jan 20 15:07:02 compute-0 nova_compute[250018]: </disk>
Jan 20 15:07:02 compute-0 nova_compute[250018]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 20 15:07:03 compute-0 nova_compute[250018]: 2026-01-20 15:07:03.126 250022 DEBUG nova.virt.libvirt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Received event <DeviceRemovedEvent: 1768921623.1265132, 0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Jan 20 15:07:03 compute-0 nova_compute[250018]: 2026-01-20 15:07:03.130 250022 DEBUG nova.virt.libvirt.driver [None req-003693fa-c15a-41e3-8fa3-0b228836bdd9 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Jan 20 15:07:03 compute-0 nova_compute[250018]: 2026-01-20 15:07:03.132 250022 INFO nova.virt.libvirt.driver [None req-003693fa-c15a-41e3-8fa3-0b228836bdd9 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Successfully detached device vdb from instance 0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04 from the live domain config.
Jan 20 15:07:03 compute-0 nova_compute[250018]: 2026-01-20 15:07:03.298 250022 DEBUG nova.objects.instance [None req-003693fa-c15a-41e3-8fa3-0b228836bdd9 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Lazy-loading 'flavor' on Instance uuid 0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:07:03 compute-0 nova_compute[250018]: 2026-01-20 15:07:03.351 250022 DEBUG oslo_concurrency.lockutils [None req-003693fa-c15a-41e3-8fa3-0b228836bdd9 bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Lock "0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.576s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:07:03 compute-0 ceph-mon[74360]: pgmap v2494: 321 pgs: 321 active+clean; 637 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.6 MiB/s wr, 149 op/s
Jan 20 15:07:03 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:07:03 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:07:03 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:07:03.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:07:03 compute-0 nova_compute[250018]: 2026-01-20 15:07:03.742 250022 DEBUG nova.network.neutron [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] [instance: 0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04] Updating instance_info_cache with network_info: [{"id": "f58244ca-9b84-410e-a77e-f4bb0dc69691", "address": "fa:16:3e:18:5f:10", "network": {"id": "8472bae1-476b-4100-b9fa-e8827bc4f7bf", "bridge": "br-int", "label": "tempest-TestStampPattern-1138931002-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e142d118583b4f9ba3531bcf3838e256", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf58244ca-9b", "ovs_interfaceid": "f58244ca-9b84-410e-a77e-f4bb0dc69691", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:07:03 compute-0 nova_compute[250018]: 2026-01-20 15:07:03.748 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:07:03 compute-0 nova_compute[250018]: 2026-01-20 15:07:03.759 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Releasing lock "refresh_cache-0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:07:03 compute-0 nova_compute[250018]: 2026-01-20 15:07:03.759 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] [instance: 0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 20 15:07:03 compute-0 nova_compute[250018]: 2026-01-20 15:07:03.760 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:07:04 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:07:04 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:07:04 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:07:04.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:07:04 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2495: 321 pgs: 321 active+clean; 637 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 140 op/s
Jan 20 15:07:04 compute-0 ovn_controller[148666]: 2026-01-20T15:07:04Z|00584|memory|INFO|peak resident set size grew 50% in last 3543.2 seconds, from 16256 kB to 24464 kB
Jan 20 15:07:04 compute-0 ovn_controller[148666]: 2026-01-20T15:07:04Z|00585|memory|INFO|idl-cells-OVN_Southbound:11414 idl-cells-Open_vSwitch:984 if_status_mgr_ifaces_state_usage-KB:1 if_status_mgr_ifaces_usage-KB:1 lflow-cache-entries-cache-expr:419 lflow-cache-entries-cache-matches:296 lflow-cache-size-KB:1694 local_datapath_usage-KB:3 ofctrl_desired_flow_usage-KB:695 ofctrl_installed_flow_usage-KB:509 ofctrl_sb_flow_ref_usage-KB:260
Jan 20 15:07:04 compute-0 nova_compute[250018]: 2026-01-20 15:07:04.540 250022 DEBUG nova.compute.manager [req-d079cd98-a430-4bf4-8df3-c2dda32c31fc req-c6f92541-0b0f-4eb4-8f5d-82eb71737989 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04] Received event network-changed-f58244ca-9b84-410e-a77e-f4bb0dc69691 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:07:04 compute-0 nova_compute[250018]: 2026-01-20 15:07:04.541 250022 DEBUG nova.compute.manager [req-d079cd98-a430-4bf4-8df3-c2dda32c31fc req-c6f92541-0b0f-4eb4-8f5d-82eb71737989 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04] Refreshing instance network info cache due to event network-changed-f58244ca-9b84-410e-a77e-f4bb0dc69691. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 15:07:04 compute-0 nova_compute[250018]: 2026-01-20 15:07:04.542 250022 DEBUG oslo_concurrency.lockutils [req-d079cd98-a430-4bf4-8df3-c2dda32c31fc req-c6f92541-0b0f-4eb4-8f5d-82eb71737989 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:07:04 compute-0 nova_compute[250018]: 2026-01-20 15:07:04.542 250022 DEBUG oslo_concurrency.lockutils [req-d079cd98-a430-4bf4-8df3-c2dda32c31fc req-c6f92541-0b0f-4eb4-8f5d-82eb71737989 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:07:04 compute-0 nova_compute[250018]: 2026-01-20 15:07:04.542 250022 DEBUG nova.network.neutron [req-d079cd98-a430-4bf4-8df3-c2dda32c31fc req-c6f92541-0b0f-4eb4-8f5d-82eb71737989 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04] Refreshing network info cache for port f58244ca-9b84-410e-a77e-f4bb0dc69691 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 15:07:04 compute-0 nova_compute[250018]: 2026-01-20 15:07:04.588 250022 DEBUG oslo_concurrency.lockutils [None req-e4dbdd0d-4a02-44b5-8da1-3d341a93803f bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Acquiring lock "0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:07:04 compute-0 nova_compute[250018]: 2026-01-20 15:07:04.588 250022 DEBUG oslo_concurrency.lockutils [None req-e4dbdd0d-4a02-44b5-8da1-3d341a93803f bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Lock "0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:07:04 compute-0 nova_compute[250018]: 2026-01-20 15:07:04.589 250022 DEBUG oslo_concurrency.lockutils [None req-e4dbdd0d-4a02-44b5-8da1-3d341a93803f bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Acquiring lock "0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:07:04 compute-0 nova_compute[250018]: 2026-01-20 15:07:04.589 250022 DEBUG oslo_concurrency.lockutils [None req-e4dbdd0d-4a02-44b5-8da1-3d341a93803f bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Lock "0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:07:04 compute-0 nova_compute[250018]: 2026-01-20 15:07:04.589 250022 DEBUG oslo_concurrency.lockutils [None req-e4dbdd0d-4a02-44b5-8da1-3d341a93803f bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Lock "0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:07:04 compute-0 nova_compute[250018]: 2026-01-20 15:07:04.590 250022 INFO nova.compute.manager [None req-e4dbdd0d-4a02-44b5-8da1-3d341a93803f bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] [instance: 0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04] Terminating instance
Jan 20 15:07:04 compute-0 nova_compute[250018]: 2026-01-20 15:07:04.591 250022 DEBUG nova.compute.manager [None req-e4dbdd0d-4a02-44b5-8da1-3d341a93803f bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] [instance: 0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 15:07:04 compute-0 kernel: tapf58244ca-9b (unregistering): left promiscuous mode
Jan 20 15:07:04 compute-0 NetworkManager[48960]: <info>  [1768921624.6447] device (tapf58244ca-9b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 15:07:04 compute-0 ovn_controller[148666]: 2026-01-20T15:07:04Z|00586|binding|INFO|Releasing lport f58244ca-9b84-410e-a77e-f4bb0dc69691 from this chassis (sb_readonly=0)
Jan 20 15:07:04 compute-0 ovn_controller[148666]: 2026-01-20T15:07:04Z|00587|binding|INFO|Setting lport f58244ca-9b84-410e-a77e-f4bb0dc69691 down in Southbound
Jan 20 15:07:04 compute-0 ovn_controller[148666]: 2026-01-20T15:07:04Z|00588|binding|INFO|Removing iface tapf58244ca-9b ovn-installed in OVS
Jan 20 15:07:04 compute-0 nova_compute[250018]: 2026-01-20 15:07:04.654 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:07:04 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:07:04.659 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:18:5f:10 10.100.0.4'], port_security=['fa:16:3e:18:5f:10 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8472bae1-476b-4100-b9fa-e8827bc4f7bf', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e142d118583b4f9ba3531bcf3838e256', 'neutron:revision_number': '4', 'neutron:security_group_ids': '37efc868-18af-48b7-8d56-e37fd1ec4df0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f9deb561-4473-4aa7-8b6f-d70e20e7cf6d, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=f58244ca-9b84-410e-a77e-f4bb0dc69691) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:07:04 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:07:04.661 160071 INFO neutron.agent.ovn.metadata.agent [-] Port f58244ca-9b84-410e-a77e-f4bb0dc69691 in datapath 8472bae1-476b-4100-b9fa-e8827bc4f7bf unbound from our chassis
Jan 20 15:07:04 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:07:04.662 160071 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8472bae1-476b-4100-b9fa-e8827bc4f7bf, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 15:07:04 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:07:04.663 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[3a58981e-b930-4064-808a-766d722d49b8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:07:04 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:07:04.666 160071 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-8472bae1-476b-4100-b9fa-e8827bc4f7bf namespace which is not needed anymore
Jan 20 15:07:04 compute-0 nova_compute[250018]: 2026-01-20 15:07:04.677 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:07:04 compute-0 nova_compute[250018]: 2026-01-20 15:07:04.688 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:07:04 compute-0 systemd[1]: machine-qemu\x2d73\x2dinstance\x2d000000a4.scope: Deactivated successfully.
Jan 20 15:07:04 compute-0 systemd[1]: machine-qemu\x2d73\x2dinstance\x2d000000a4.scope: Consumed 17.026s CPU time.
Jan 20 15:07:04 compute-0 systemd-machined[216401]: Machine qemu-73-instance-000000a4 terminated.
Jan 20 15:07:04 compute-0 neutron-haproxy-ovnmeta-8472bae1-476b-4100-b9fa-e8827bc4f7bf[346375]: [NOTICE]   (346379) : haproxy version is 2.8.14-c23fe91
Jan 20 15:07:04 compute-0 neutron-haproxy-ovnmeta-8472bae1-476b-4100-b9fa-e8827bc4f7bf[346375]: [NOTICE]   (346379) : path to executable is /usr/sbin/haproxy
Jan 20 15:07:04 compute-0 neutron-haproxy-ovnmeta-8472bae1-476b-4100-b9fa-e8827bc4f7bf[346375]: [WARNING]  (346379) : Exiting Master process...
Jan 20 15:07:04 compute-0 neutron-haproxy-ovnmeta-8472bae1-476b-4100-b9fa-e8827bc4f7bf[346375]: [ALERT]    (346379) : Current worker (346381) exited with code 143 (Terminated)
Jan 20 15:07:04 compute-0 neutron-haproxy-ovnmeta-8472bae1-476b-4100-b9fa-e8827bc4f7bf[346375]: [WARNING]  (346379) : All workers exited. Exiting... (0)
Jan 20 15:07:04 compute-0 systemd[1]: libpod-eaef5bd202e1950c175cc2c02126f22c5dcb588cca20b8aff783ec5bc27a4cd1.scope: Deactivated successfully.
Jan 20 15:07:04 compute-0 podman[347968]: 2026-01-20 15:07:04.827724233 +0000 UTC m=+0.049526217 container died eaef5bd202e1950c175cc2c02126f22c5dcb588cca20b8aff783ec5bc27a4cd1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8472bae1-476b-4100-b9fa-e8827bc4f7bf, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 20 15:07:04 compute-0 nova_compute[250018]: 2026-01-20 15:07:04.830 250022 INFO nova.virt.libvirt.driver [-] [instance: 0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04] Instance destroyed successfully.
Jan 20 15:07:04 compute-0 nova_compute[250018]: 2026-01-20 15:07:04.831 250022 DEBUG nova.objects.instance [None req-e4dbdd0d-4a02-44b5-8da1-3d341a93803f bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Lazy-loading 'resources' on Instance uuid 0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:07:04 compute-0 nova_compute[250018]: 2026-01-20 15:07:04.850 250022 DEBUG nova.virt.libvirt.vif [None req-e4dbdd0d-4a02-44b5-8da1-3d341a93803f bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T15:06:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestStampPattern-server-69339560',display_name='tempest-TestStampPattern-server-69339560',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-teststamppattern-server-69339560',id=164,image_ref='8c970c65-2888-4da3-891e-c2b6eb3ea735',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHW99EAKkcMHbb6foGeGxm9beD/C9AeSuQLW3fqIuoocya0hep1/utcjh4cUxZzvt5K+5yMQG3K45jiLKihqKM6cawBqTQvgzcywKN5pk06AjS3tvq9GuiAvDAys6caVkA==',key_name='tempest-TestStampPattern-1928143162',keypairs=<?>,launch_index=0,launched_at=2026-01-20T15:06:21Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e142d118583b4f9ba3531bcf3838e256',ramdisk_id='',reservation_id='r-x2zvlk1c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_boot_roles='member,reader',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='33ba7a73-3233-40a3-a49a-e5bbd604dc3c',image_min_disk='1',image_min_ram='0',image_owner_id='e142d118583b4f9ba3531bcf3838e256',image_owner_project_name='tempest-TestStampPattern-487600181',image_owner_user_name='tempest-TestStampPattern-487600181-project-member',image_user_id='bc554998e71a4322bdd27ac727a9044c',owner_project_name='tempest-TestStampPattern-487600181',owner_user_name='tempest-TestStampPattern-487600181-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T15:06:21Z,user_data=None,user_id='bc554998e71a4322bdd27ac727a9044c',uuid=0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='ac
tive') vif={"id": "f58244ca-9b84-410e-a77e-f4bb0dc69691", "address": "fa:16:3e:18:5f:10", "network": {"id": "8472bae1-476b-4100-b9fa-e8827bc4f7bf", "bridge": "br-int", "label": "tempest-TestStampPattern-1138931002-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e142d118583b4f9ba3531bcf3838e256", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf58244ca-9b", "ovs_interfaceid": "f58244ca-9b84-410e-a77e-f4bb0dc69691", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 15:07:04 compute-0 nova_compute[250018]: 2026-01-20 15:07:04.850 250022 DEBUG nova.network.os_vif_util [None req-e4dbdd0d-4a02-44b5-8da1-3d341a93803f bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Converting VIF {"id": "f58244ca-9b84-410e-a77e-f4bb0dc69691", "address": "fa:16:3e:18:5f:10", "network": {"id": "8472bae1-476b-4100-b9fa-e8827bc4f7bf", "bridge": "br-int", "label": "tempest-TestStampPattern-1138931002-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e142d118583b4f9ba3531bcf3838e256", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf58244ca-9b", "ovs_interfaceid": "f58244ca-9b84-410e-a77e-f4bb0dc69691", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:07:04 compute-0 nova_compute[250018]: 2026-01-20 15:07:04.851 250022 DEBUG nova.network.os_vif_util [None req-e4dbdd0d-4a02-44b5-8da1-3d341a93803f bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:18:5f:10,bridge_name='br-int',has_traffic_filtering=True,id=f58244ca-9b84-410e-a77e-f4bb0dc69691,network=Network(8472bae1-476b-4100-b9fa-e8827bc4f7bf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf58244ca-9b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:07:04 compute-0 nova_compute[250018]: 2026-01-20 15:07:04.852 250022 DEBUG os_vif [None req-e4dbdd0d-4a02-44b5-8da1-3d341a93803f bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:18:5f:10,bridge_name='br-int',has_traffic_filtering=True,id=f58244ca-9b84-410e-a77e-f4bb0dc69691,network=Network(8472bae1-476b-4100-b9fa-e8827bc4f7bf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf58244ca-9b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 15:07:04 compute-0 nova_compute[250018]: 2026-01-20 15:07:04.854 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:07:04 compute-0 nova_compute[250018]: 2026-01-20 15:07:04.854 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf58244ca-9b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:07:04 compute-0 nova_compute[250018]: 2026-01-20 15:07:04.855 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:07:04 compute-0 nova_compute[250018]: 2026-01-20 15:07:04.856 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:07:04 compute-0 nova_compute[250018]: 2026-01-20 15:07:04.859 250022 INFO os_vif [None req-e4dbdd0d-4a02-44b5-8da1-3d341a93803f bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:18:5f:10,bridge_name='br-int',has_traffic_filtering=True,id=f58244ca-9b84-410e-a77e-f4bb0dc69691,network=Network(8472bae1-476b-4100-b9fa-e8827bc4f7bf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf58244ca-9b')
Jan 20 15:07:04 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-eaef5bd202e1950c175cc2c02126f22c5dcb588cca20b8aff783ec5bc27a4cd1-userdata-shm.mount: Deactivated successfully.
Jan 20 15:07:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-21fa6c352bd385a37a177a6641ff27860cc8422de6fef5e56f59b6ddcd6890b5-merged.mount: Deactivated successfully.
Jan 20 15:07:04 compute-0 nova_compute[250018]: 2026-01-20 15:07:04.881 250022 DEBUG nova.compute.manager [req-15d8b7a5-ac1c-4b04-905e-616e345ecea4 req-e1db3111-d5a4-4520-9e79-81f9e6c29027 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04] Received event network-vif-unplugged-f58244ca-9b84-410e-a77e-f4bb0dc69691 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:07:04 compute-0 nova_compute[250018]: 2026-01-20 15:07:04.882 250022 DEBUG oslo_concurrency.lockutils [req-15d8b7a5-ac1c-4b04-905e-616e345ecea4 req-e1db3111-d5a4-4520-9e79-81f9e6c29027 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:07:04 compute-0 nova_compute[250018]: 2026-01-20 15:07:04.882 250022 DEBUG oslo_concurrency.lockutils [req-15d8b7a5-ac1c-4b04-905e-616e345ecea4 req-e1db3111-d5a4-4520-9e79-81f9e6c29027 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:07:04 compute-0 nova_compute[250018]: 2026-01-20 15:07:04.882 250022 DEBUG oslo_concurrency.lockutils [req-15d8b7a5-ac1c-4b04-905e-616e345ecea4 req-e1db3111-d5a4-4520-9e79-81f9e6c29027 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:07:04 compute-0 nova_compute[250018]: 2026-01-20 15:07:04.883 250022 DEBUG nova.compute.manager [req-15d8b7a5-ac1c-4b04-905e-616e345ecea4 req-e1db3111-d5a4-4520-9e79-81f9e6c29027 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04] No waiting events found dispatching network-vif-unplugged-f58244ca-9b84-410e-a77e-f4bb0dc69691 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:07:04 compute-0 nova_compute[250018]: 2026-01-20 15:07:04.883 250022 DEBUG nova.compute.manager [req-15d8b7a5-ac1c-4b04-905e-616e345ecea4 req-e1db3111-d5a4-4520-9e79-81f9e6c29027 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04] Received event network-vif-unplugged-f58244ca-9b84-410e-a77e-f4bb0dc69691 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 20 15:07:04 compute-0 podman[347968]: 2026-01-20 15:07:04.887782133 +0000 UTC m=+0.109584117 container cleanup eaef5bd202e1950c175cc2c02126f22c5dcb588cca20b8aff783ec5bc27a4cd1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8472bae1-476b-4100-b9fa-e8827bc4f7bf, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 20 15:07:04 compute-0 systemd[1]: libpod-conmon-eaef5bd202e1950c175cc2c02126f22c5dcb588cca20b8aff783ec5bc27a4cd1.scope: Deactivated successfully.
Jan 20 15:07:04 compute-0 podman[348025]: 2026-01-20 15:07:04.958559503 +0000 UTC m=+0.045736645 container remove eaef5bd202e1950c175cc2c02126f22c5dcb588cca20b8aff783ec5bc27a4cd1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8472bae1-476b-4100-b9fa-e8827bc4f7bf, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS)
Jan 20 15:07:04 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:07:04.972 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[de5ae4e1-28f3-4fc2-9a6c-fa2c022109ff]: (4, ('Tue Jan 20 03:07:04 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-8472bae1-476b-4100-b9fa-e8827bc4f7bf (eaef5bd202e1950c175cc2c02126f22c5dcb588cca20b8aff783ec5bc27a4cd1)\neaef5bd202e1950c175cc2c02126f22c5dcb588cca20b8aff783ec5bc27a4cd1\nTue Jan 20 03:07:04 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-8472bae1-476b-4100-b9fa-e8827bc4f7bf (eaef5bd202e1950c175cc2c02126f22c5dcb588cca20b8aff783ec5bc27a4cd1)\neaef5bd202e1950c175cc2c02126f22c5dcb588cca20b8aff783ec5bc27a4cd1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:07:04 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:07:04.973 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[ea1cd88d-f976-49b5-895d-4dc45d351e4c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:07:04 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:07:04.975 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8472bae1-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:07:04 compute-0 nova_compute[250018]: 2026-01-20 15:07:04.977 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:07:04 compute-0 kernel: tap8472bae1-40: left promiscuous mode
Jan 20 15:07:04 compute-0 nova_compute[250018]: 2026-01-20 15:07:04.993 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:07:04 compute-0 nova_compute[250018]: 2026-01-20 15:07:04.996 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:07:04 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:07:04.997 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[d7b537bb-9ab6-46fc-904a-12a064eea1c9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:07:05 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:07:05.010 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[be652d30-2f51-44b7-b20c-8f3e8138a61e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:07:05 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:07:05.011 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[78bd970f-3d67-4efe-8f0d-d555e863a7fa]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:07:05 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:07:05.027 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[34a8e87c-a5f2-499f-b159-98e379dc3a6c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 752134, 'reachable_time': 27779, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 348043, 'error': None, 'target': 'ovnmeta-8472bae1-476b-4100-b9fa-e8827bc4f7bf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:07:05 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:07:05.030 160517 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-8472bae1-476b-4100-b9fa-e8827bc4f7bf deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 15:07:05 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:07:05.031 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[e28db27f-15fb-4138-8955-608a9e6006e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:07:05 compute-0 systemd[1]: run-netns-ovnmeta\x2d8472bae1\x2d476b\x2d4100\x2db9fa\x2de8827bc4f7bf.mount: Deactivated successfully.
Jan 20 15:07:05 compute-0 ceph-mon[74360]: pgmap v2495: 321 pgs: 321 active+clean; 637 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 140 op/s
Jan 20 15:07:05 compute-0 nova_compute[250018]: 2026-01-20 15:07:05.429 250022 INFO nova.virt.libvirt.driver [None req-e4dbdd0d-4a02-44b5-8da1-3d341a93803f bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] [instance: 0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04] Deleting instance files /var/lib/nova/instances/0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04_del
Jan 20 15:07:05 compute-0 nova_compute[250018]: 2026-01-20 15:07:05.431 250022 INFO nova.virt.libvirt.driver [None req-e4dbdd0d-4a02-44b5-8da1-3d341a93803f bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] [instance: 0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04] Deletion of /var/lib/nova/instances/0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04_del complete
Jan 20 15:07:05 compute-0 nova_compute[250018]: 2026-01-20 15:07:05.496 250022 INFO nova.compute.manager [None req-e4dbdd0d-4a02-44b5-8da1-3d341a93803f bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] [instance: 0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04] Took 0.90 seconds to destroy the instance on the hypervisor.
Jan 20 15:07:05 compute-0 nova_compute[250018]: 2026-01-20 15:07:05.496 250022 DEBUG oslo.service.loopingcall [None req-e4dbdd0d-4a02-44b5-8da1-3d341a93803f bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 15:07:05 compute-0 nova_compute[250018]: 2026-01-20 15:07:05.497 250022 DEBUG nova.compute.manager [-] [instance: 0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 15:07:05 compute-0 nova_compute[250018]: 2026-01-20 15:07:05.497 250022 DEBUG nova.network.neutron [-] [instance: 0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 15:07:05 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:07:05 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:07:05 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:07:05.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:07:05 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e366 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:07:06 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:07:06 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:07:06 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:07:06.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:07:06 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2496: 321 pgs: 321 active+clean; 634 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 2.3 MiB/s wr, 172 op/s
Jan 20 15:07:06 compute-0 nova_compute[250018]: 2026-01-20 15:07:06.956 250022 DEBUG nova.compute.manager [req-fb20d7f2-e229-49f1-8886-15a918857b22 req-cabf92ee-703c-419c-8a42-704e533d8981 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04] Received event network-vif-plugged-f58244ca-9b84-410e-a77e-f4bb0dc69691 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:07:06 compute-0 nova_compute[250018]: 2026-01-20 15:07:06.957 250022 DEBUG oslo_concurrency.lockutils [req-fb20d7f2-e229-49f1-8886-15a918857b22 req-cabf92ee-703c-419c-8a42-704e533d8981 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:07:06 compute-0 nova_compute[250018]: 2026-01-20 15:07:06.957 250022 DEBUG oslo_concurrency.lockutils [req-fb20d7f2-e229-49f1-8886-15a918857b22 req-cabf92ee-703c-419c-8a42-704e533d8981 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:07:06 compute-0 nova_compute[250018]: 2026-01-20 15:07:06.957 250022 DEBUG oslo_concurrency.lockutils [req-fb20d7f2-e229-49f1-8886-15a918857b22 req-cabf92ee-703c-419c-8a42-704e533d8981 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:07:06 compute-0 nova_compute[250018]: 2026-01-20 15:07:06.957 250022 DEBUG nova.compute.manager [req-fb20d7f2-e229-49f1-8886-15a918857b22 req-cabf92ee-703c-419c-8a42-704e533d8981 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04] No waiting events found dispatching network-vif-plugged-f58244ca-9b84-410e-a77e-f4bb0dc69691 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:07:06 compute-0 nova_compute[250018]: 2026-01-20 15:07:06.958 250022 WARNING nova.compute.manager [req-fb20d7f2-e229-49f1-8886-15a918857b22 req-cabf92ee-703c-419c-8a42-704e533d8981 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04] Received unexpected event network-vif-plugged-f58244ca-9b84-410e-a77e-f4bb0dc69691 for instance with vm_state active and task_state deleting.
Jan 20 15:07:07 compute-0 ceph-mon[74360]: pgmap v2496: 321 pgs: 321 active+clean; 634 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 2.3 MiB/s wr, 172 op/s
Jan 20 15:07:07 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:07:07 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:07:07 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:07:07.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:07:08 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:07:08 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:07:08 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:07:08.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:07:08 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2497: 321 pgs: 321 active+clean; 634 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 864 KiB/s wr, 135 op/s
Jan 20 15:07:08 compute-0 nova_compute[250018]: 2026-01-20 15:07:08.520 250022 DEBUG nova.network.neutron [req-d079cd98-a430-4bf4-8df3-c2dda32c31fc req-c6f92541-0b0f-4eb4-8f5d-82eb71737989 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04] Updated VIF entry in instance network info cache for port f58244ca-9b84-410e-a77e-f4bb0dc69691. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 15:07:08 compute-0 nova_compute[250018]: 2026-01-20 15:07:08.521 250022 DEBUG nova.network.neutron [req-d079cd98-a430-4bf4-8df3-c2dda32c31fc req-c6f92541-0b0f-4eb4-8f5d-82eb71737989 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04] Updating instance_info_cache with network_info: [{"id": "f58244ca-9b84-410e-a77e-f4bb0dc69691", "address": "fa:16:3e:18:5f:10", "network": {"id": "8472bae1-476b-4100-b9fa-e8827bc4f7bf", "bridge": "br-int", "label": "tempest-TestStampPattern-1138931002-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e142d118583b4f9ba3531bcf3838e256", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf58244ca-9b", "ovs_interfaceid": "f58244ca-9b84-410e-a77e-f4bb0dc69691", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:07:08 compute-0 nova_compute[250018]: 2026-01-20 15:07:08.548 250022 DEBUG oslo_concurrency.lockutils [req-d079cd98-a430-4bf4-8df3-c2dda32c31fc req-c6f92541-0b0f-4eb4-8f5d-82eb71737989 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:07:08 compute-0 nova_compute[250018]: 2026-01-20 15:07:08.855 250022 DEBUG nova.network.neutron [-] [instance: 0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:07:08 compute-0 nova_compute[250018]: 2026-01-20 15:07:08.878 250022 INFO nova.compute.manager [-] [instance: 0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04] Took 3.38 seconds to deallocate network for instance.
Jan 20 15:07:08 compute-0 nova_compute[250018]: 2026-01-20 15:07:08.924 250022 DEBUG oslo_concurrency.lockutils [None req-e4dbdd0d-4a02-44b5-8da1-3d341a93803f bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:07:08 compute-0 nova_compute[250018]: 2026-01-20 15:07:08.924 250022 DEBUG oslo_concurrency.lockutils [None req-e4dbdd0d-4a02-44b5-8da1-3d341a93803f bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:07:09 compute-0 nova_compute[250018]: 2026-01-20 15:07:09.011 250022 DEBUG oslo_concurrency.processutils [None req-e4dbdd0d-4a02-44b5-8da1-3d341a93803f bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:07:09 compute-0 nova_compute[250018]: 2026-01-20 15:07:09.048 250022 DEBUG nova.compute.manager [req-545408b4-58ab-4abc-b542-a6874fa00bf3 req-9d3fdaa7-2978-4034-b6b0-df47a2cb3cca 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04] Received event network-vif-deleted-f58244ca-9b84-410e-a77e-f4bb0dc69691 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:07:09 compute-0 ceph-mon[74360]: pgmap v2497: 321 pgs: 321 active+clean; 634 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 864 KiB/s wr, 135 op/s
Jan 20 15:07:09 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/3574389037' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:07:09 compute-0 nova_compute[250018]: 2026-01-20 15:07:09.480 250022 DEBUG oslo_concurrency.processutils [None req-e4dbdd0d-4a02-44b5-8da1-3d341a93803f bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:07:09 compute-0 nova_compute[250018]: 2026-01-20 15:07:09.485 250022 DEBUG nova.compute.provider_tree [None req-e4dbdd0d-4a02-44b5-8da1-3d341a93803f bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:07:09 compute-0 nova_compute[250018]: 2026-01-20 15:07:09.503 250022 DEBUG nova.scheduler.client.report [None req-e4dbdd0d-4a02-44b5-8da1-3d341a93803f bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:07:09 compute-0 nova_compute[250018]: 2026-01-20 15:07:09.522 250022 DEBUG oslo_concurrency.lockutils [None req-e4dbdd0d-4a02-44b5-8da1-3d341a93803f bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.598s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:07:09 compute-0 nova_compute[250018]: 2026-01-20 15:07:09.576 250022 INFO nova.scheduler.client.report [None req-e4dbdd0d-4a02-44b5-8da1-3d341a93803f bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Deleted allocations for instance 0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04
Jan 20 15:07:09 compute-0 ceph-osd[84815]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #51. Immutable memtables: 7.
Jan 20 15:07:09 compute-0 nova_compute[250018]: 2026-01-20 15:07:09.660 250022 DEBUG oslo_concurrency.lockutils [None req-e4dbdd0d-4a02-44b5-8da1-3d341a93803f bc554998e71a4322bdd27ac727a9044c e142d118583b4f9ba3531bcf3838e256 - - default default] Lock "0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.071s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:07:09 compute-0 nova_compute[250018]: 2026-01-20 15:07:09.690 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:07:09 compute-0 sshd-session[348047]: Connection closed by authenticating user root 134.122.57.138 port 37140 [preauth]
Jan 20 15:07:09 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:07:09 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:07:09 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:07:09.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:07:09 compute-0 nova_compute[250018]: 2026-01-20 15:07:09.754 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:07:09 compute-0 nova_compute[250018]: 2026-01-20 15:07:09.856 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:07:10 compute-0 ovn_controller[148666]: 2026-01-20T15:07:10Z|00074|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:cb:54:6b 10.100.0.3
Jan 20 15:07:10 compute-0 ovn_controller[148666]: 2026-01-20T15:07:10Z|00075|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:cb:54:6b 10.100.0.3
Jan 20 15:07:10 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:07:10 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:07:10 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:07:10.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:07:10 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2498: 321 pgs: 321 active+clean; 667 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 3.9 MiB/s wr, 220 op/s
Jan 20 15:07:10 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/2868866756' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:07:10 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/3150868772' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:07:10 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/272620387' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:07:10 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e366 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:07:11 compute-0 ceph-mon[74360]: pgmap v2498: 321 pgs: 321 active+clean; 667 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 3.9 MiB/s wr, 220 op/s
Jan 20 15:07:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 15:07:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:07:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 20 15:07:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:07:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.01018911106458217 of space, bias 1.0, pg target 3.056733319374651 quantized to 32 (current 32)
Jan 20 15:07:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:07:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.004530573876326075 of space, bias 1.0, pg target 1.3455804412688441 quantized to 32 (current 32)
Jan 20 15:07:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:07:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 15:07:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:07:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004070191815559278 of space, bias 1.0, pg target 1.2047767774055462 quantized to 32 (current 32)
Jan 20 15:07:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:07:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001715754699423041 quantized to 16 (current 32)
Jan 20 15:07:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:07:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 15:07:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:07:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021446933742788013 quantized to 32 (current 32)
Jan 20 15:07:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:07:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018229893681369813 quantized to 32 (current 32)
Jan 20 15:07:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:07:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 15:07:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:07:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00042893867485576027 quantized to 32 (current 32)
Jan 20 15:07:11 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:07:11 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:07:11 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:07:11.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:07:12 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:07:12 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:07:12 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:07:12.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:07:12 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2499: 321 pgs: 321 active+clean; 690 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.4 MiB/s rd, 4.0 MiB/s wr, 152 op/s
Jan 20 15:07:12 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/1005697138' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:07:12 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/1005697138' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:07:12 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/1766338785' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:07:12 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/3016473623' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:07:12 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/3016473623' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:07:12 compute-0 ceph-mon[74360]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #123. Immutable memtables: 0.
Jan 20 15:07:12 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:07:12.486017) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 15:07:12 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:856] [default] [JOB 73] Flushing memtable with next log file: 123
Jan 20 15:07:12 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768921632486456, "job": 73, "event": "flush_started", "num_memtables": 1, "num_entries": 738, "num_deletes": 251, "total_data_size": 941288, "memory_usage": 956280, "flush_reason": "Manual Compaction"}
Jan 20 15:07:12 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:885] [default] [JOB 73] Level-0 flush table #124: started
Jan 20 15:07:12 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768921632493592, "cf_name": "default", "job": 73, "event": "table_file_creation", "file_number": 124, "file_size": 930086, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 55668, "largest_seqno": 56405, "table_properties": {"data_size": 926352, "index_size": 1514, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 8947, "raw_average_key_size": 19, "raw_value_size": 918689, "raw_average_value_size": 2028, "num_data_blocks": 67, "num_entries": 453, "num_filter_entries": 453, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768921581, "oldest_key_time": 1768921581, "file_creation_time": 1768921632, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1688fb6f-f397-4aac-a0e8-874ff91d97a7", "db_session_id": "5V3N2TVXYZBCXP55EZHK", "orig_file_number": 124, "seqno_to_time_mapping": "N/A"}}
Jan 20 15:07:12 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 73] Flush lasted 7290 microseconds, and 3220 cpu microseconds.
Jan 20 15:07:12 compute-0 ceph-mon[74360]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 15:07:12 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:07:12.493628) [db/flush_job.cc:967] [default] [JOB 73] Level-0 flush table #124: 930086 bytes OK
Jan 20 15:07:12 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:07:12.493646) [db/memtable_list.cc:519] [default] Level-0 commit table #124 started
Jan 20 15:07:12 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:07:12.495860) [db/memtable_list.cc:722] [default] Level-0 commit table #124: memtable #1 done
Jan 20 15:07:12 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:07:12.495874) EVENT_LOG_v1 {"time_micros": 1768921632495869, "job": 73, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 15:07:12 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:07:12.495889) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 15:07:12 compute-0 ceph-mon[74360]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 73] Try to delete WAL files size 937539, prev total WAL file size 937539, number of live WAL files 2.
Jan 20 15:07:12 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000120.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:07:12 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:07:12.496572) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035303230' seq:72057594037927935, type:22 .. '7061786F730035323732' seq:0, type:0; will stop at (end)
Jan 20 15:07:12 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 74] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 15:07:12 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 73 Base level 0, inputs: [124(908KB)], [122(12MB)]
Jan 20 15:07:12 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768921632496622, "job": 74, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [124], "files_L6": [122], "score": -1, "input_data_size": 13518729, "oldest_snapshot_seqno": -1}
Jan 20 15:07:12 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 74] Generated table #125: 8531 keys, 11605985 bytes, temperature: kUnknown
Jan 20 15:07:12 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768921632564941, "cf_name": "default", "job": 74, "event": "table_file_creation", "file_number": 125, "file_size": 11605985, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11549587, "index_size": 33950, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21381, "raw_key_size": 222186, "raw_average_key_size": 26, "raw_value_size": 11398378, "raw_average_value_size": 1336, "num_data_blocks": 1323, "num_entries": 8531, "num_filter_entries": 8531, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768917305, "oldest_key_time": 0, "file_creation_time": 1768921632, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1688fb6f-f397-4aac-a0e8-874ff91d97a7", "db_session_id": "5V3N2TVXYZBCXP55EZHK", "orig_file_number": 125, "seqno_to_time_mapping": "N/A"}}
Jan 20 15:07:12 compute-0 ceph-mon[74360]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 15:07:12 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:07:12.565785) [db/compaction/compaction_job.cc:1663] [default] [JOB 74] Compacted 1@0 + 1@6 files to L6 => 11605985 bytes
Jan 20 15:07:12 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:07:12.566893) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 197.6 rd, 169.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 12.0 +0.0 blob) out(11.1 +0.0 blob), read-write-amplify(27.0) write-amplify(12.5) OK, records in: 9046, records dropped: 515 output_compression: NoCompression
Jan 20 15:07:12 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:07:12.566908) EVENT_LOG_v1 {"time_micros": 1768921632566901, "job": 74, "event": "compaction_finished", "compaction_time_micros": 68404, "compaction_time_cpu_micros": 32980, "output_level": 6, "num_output_files": 1, "total_output_size": 11605985, "num_input_records": 9046, "num_output_records": 8531, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 15:07:12 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000124.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:07:12 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768921632567189, "job": 74, "event": "table_file_deletion", "file_number": 124}
Jan 20 15:07:12 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000122.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:07:12 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768921632569582, "job": 74, "event": "table_file_deletion", "file_number": 122}
Jan 20 15:07:12 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:07:12.496495) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:07:12 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:07:12.569626) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:07:12 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:07:12.569632) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:07:12 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:07:12.569634) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:07:12 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:07:12.569636) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:07:12 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:07:12.569638) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:07:13 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e366 do_prune osdmap full prune enabled
Jan 20 15:07:13 compute-0 ceph-mon[74360]: pgmap v2499: 321 pgs: 321 active+clean; 690 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.4 MiB/s rd, 4.0 MiB/s wr, 152 op/s
Jan 20 15:07:13 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e367 e367: 3 total, 3 up, 3 in
Jan 20 15:07:13 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e367: 3 total, 3 up, 3 in
Jan 20 15:07:13 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:07:13 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:07:13 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:07:13.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:07:14 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:07:14 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:07:14 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:07:14.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:07:14 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2501: 321 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 316 active+clean; 677 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.4 MiB/s rd, 4.9 MiB/s wr, 189 op/s
Jan 20 15:07:14 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e367 do_prune osdmap full prune enabled
Jan 20 15:07:14 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e368 e368: 3 total, 3 up, 3 in
Jan 20 15:07:14 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e368: 3 total, 3 up, 3 in
Jan 20 15:07:14 compute-0 ceph-mon[74360]: osdmap e367: 3 total, 3 up, 3 in
Jan 20 15:07:14 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/2028028369' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:07:14 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/2028028369' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:07:14 compute-0 ceph-mon[74360]: pgmap v2501: 321 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 316 active+clean; 677 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.4 MiB/s rd, 4.9 MiB/s wr, 189 op/s
Jan 20 15:07:14 compute-0 nova_compute[250018]: 2026-01-20 15:07:14.693 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:07:14 compute-0 nova_compute[250018]: 2026-01-20 15:07:14.857 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:07:15 compute-0 ceph-mon[74360]: osdmap e368: 3 total, 3 up, 3 in
Jan 20 15:07:15 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:07:15 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:07:15 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:07:15.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:07:15 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e368 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:07:16 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:07:16 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:07:16 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:07:16.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:07:16 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:07:16.298 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=53, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=52) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:07:16 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:07:16.299 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 15:07:16 compute-0 nova_compute[250018]: 2026-01-20 15:07:16.299 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:07:16 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2503: 321 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 316 active+clean; 559 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 5.9 MiB/s wr, 375 op/s
Jan 20 15:07:16 compute-0 ceph-mon[74360]: pgmap v2503: 321 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 316 active+clean; 559 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 5.9 MiB/s wr, 375 op/s
Jan 20 15:07:16 compute-0 ceph-osd[84815]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 20 15:07:16 compute-0 ceph-osd[84815]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4200.0 total, 600.0 interval
                                           Cumulative writes: 46K writes, 173K keys, 46K commit groups, 1.0 writes per commit group, ingest: 0.16 GB, 0.04 MB/s
                                           Cumulative WAL: 46K writes, 16K syncs, 2.71 writes per sync, written: 0.16 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 7987 writes, 28K keys, 7987 commit groups, 1.0 writes per commit group, ingest: 28.84 MB, 0.05 MB/s
                                           Interval WAL: 7987 writes, 3298 syncs, 2.42 writes per sync, written: 0.03 GB, 0.05 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 20 15:07:17 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:07:17 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:07:17 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:07:17.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:07:17 compute-0 sudo[348075]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:07:17 compute-0 sudo[348075]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:07:17 compute-0 sudo[348075]: pam_unix(sudo:session): session closed for user root
Jan 20 15:07:17 compute-0 sudo[348101]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:07:17 compute-0 sudo[348101]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:07:17 compute-0 sudo[348101]: pam_unix(sudo:session): session closed for user root
Jan 20 15:07:18 compute-0 ovn_controller[148666]: 2026-01-20T15:07:18Z|00589|binding|INFO|Releasing lport dfacd08f-5802-4e49-92e6-0c908c90ddcd from this chassis (sb_readonly=0)
Jan 20 15:07:18 compute-0 nova_compute[250018]: 2026-01-20 15:07:18.147 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:07:18 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:07:18 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:07:18 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:07:18.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:07:18 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2504: 321 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 316 active+clean; 559 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 1.4 MiB/s wr, 248 op/s
Jan 20 15:07:19 compute-0 ceph-mon[74360]: pgmap v2504: 321 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 316 active+clean; 559 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 1.4 MiB/s wr, 248 op/s
Jan 20 15:07:19 compute-0 nova_compute[250018]: 2026-01-20 15:07:19.717 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:07:19 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:07:19 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:07:19 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:07:19.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:07:19 compute-0 nova_compute[250018]: 2026-01-20 15:07:19.828 250022 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768921624.8268855, 0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:07:19 compute-0 nova_compute[250018]: 2026-01-20 15:07:19.828 250022 INFO nova.compute.manager [-] [instance: 0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04] VM Stopped (Lifecycle Event)
Jan 20 15:07:19 compute-0 nova_compute[250018]: 2026-01-20 15:07:19.854 250022 DEBUG nova.compute.manager [None req-a2bd16cd-60c6-444e-934e-10f63399068d - - - - - -] [instance: 0c2c1ffa-3cdc-4edb-9a81-b38cfdc7cf04] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:07:19 compute-0 nova_compute[250018]: 2026-01-20 15:07:19.859 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:07:20 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:07:20 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:07:20 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:07:20.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:07:20 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2505: 321 pgs: 321 active+clean; 453 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 166 KiB/s wr, 285 op/s
Jan 20 15:07:20 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e368 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:07:20 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e368 do_prune osdmap full prune enabled
Jan 20 15:07:20 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e369 e369: 3 total, 3 up, 3 in
Jan 20 15:07:20 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e369: 3 total, 3 up, 3 in
Jan 20 15:07:21 compute-0 ceph-mon[74360]: pgmap v2505: 321 pgs: 321 active+clean; 453 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 166 KiB/s wr, 285 op/s
Jan 20 15:07:21 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/3481712973' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:07:21 compute-0 ceph-mon[74360]: osdmap e369: 3 total, 3 up, 3 in
Jan 20 15:07:21 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:07:21 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:07:21 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:07:21.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:07:22 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:07:22 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:07:22 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:07:22.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:07:22 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2507: 321 pgs: 321 active+clean; 421 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 43 KiB/s wr, 299 op/s
Jan 20 15:07:22 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/777705666' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:07:22 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/777705666' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:07:22 compute-0 podman[348129]: 2026-01-20 15:07:22.516560381 +0000 UTC m=+0.093378740 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, container_name=ovn_metadata_agent)
Jan 20 15:07:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:07:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:07:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:07:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:07:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:07:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:07:22 compute-0 podman[348128]: 2026-01-20 15:07:22.550365653 +0000 UTC m=+0.130149992 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, 
org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 20 15:07:23 compute-0 ovn_controller[148666]: 2026-01-20T15:07:23Z|00590|binding|INFO|Releasing lport dfacd08f-5802-4e49-92e6-0c908c90ddcd from this chassis (sb_readonly=0)
Jan 20 15:07:23 compute-0 nova_compute[250018]: 2026-01-20 15:07:23.347 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:07:23 compute-0 ceph-mon[74360]: pgmap v2507: 321 pgs: 321 active+clean; 421 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 43 KiB/s wr, 299 op/s
Jan 20 15:07:23 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:07:23 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:07:23 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:07:23.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:07:24 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:07:24 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:07:24 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:07:24.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:07:24 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2508: 321 pgs: 321 active+clean; 386 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 36 KiB/s wr, 282 op/s
Jan 20 15:07:24 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/1342745173' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:07:24 compute-0 nova_compute[250018]: 2026-01-20 15:07:24.719 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:07:24 compute-0 nova_compute[250018]: 2026-01-20 15:07:24.861 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:07:25 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:07:25.301 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=367c1a2c-b16a-4828-ab5a-626bb50023b4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '53'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:07:25 compute-0 ceph-mon[74360]: pgmap v2508: 321 pgs: 321 active+clean; 386 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 36 KiB/s wr, 282 op/s
Jan 20 15:07:25 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:07:25 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:07:25 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:07:25.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:07:25 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e369 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:07:25 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e369 do_prune osdmap full prune enabled
Jan 20 15:07:25 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e370 e370: 3 total, 3 up, 3 in
Jan 20 15:07:25 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e370: 3 total, 3 up, 3 in
Jan 20 15:07:26 compute-0 ovn_controller[148666]: 2026-01-20T15:07:26Z|00591|binding|INFO|Releasing lport dfacd08f-5802-4e49-92e6-0c908c90ddcd from this chassis (sb_readonly=0)
Jan 20 15:07:26 compute-0 nova_compute[250018]: 2026-01-20 15:07:26.296 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:07:26 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:07:26 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:07:26 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:07:26.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:07:26 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2510: 321 pgs: 321 active+clean; 306 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 31 KiB/s wr, 288 op/s
Jan 20 15:07:26 compute-0 ceph-mon[74360]: osdmap e370: 3 total, 3 up, 3 in
Jan 20 15:07:26 compute-0 ceph-mon[74360]: pgmap v2510: 321 pgs: 321 active+clean; 306 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 31 KiB/s wr, 288 op/s
Jan 20 15:07:27 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:07:27 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:07:27 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:07:27.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:07:28 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:07:28 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:07:28 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:07:28.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:07:28 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2511: 321 pgs: 321 active+clean; 306 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 11 KiB/s wr, 231 op/s
Jan 20 15:07:28 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/479322346' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:07:29 compute-0 ceph-mon[74360]: pgmap v2511: 321 pgs: 321 active+clean; 306 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 11 KiB/s wr, 231 op/s
Jan 20 15:07:29 compute-0 nova_compute[250018]: 2026-01-20 15:07:29.721 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:07:29 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:07:29 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:07:29 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:07:29.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:07:29 compute-0 nova_compute[250018]: 2026-01-20 15:07:29.863 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:07:30 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:07:30 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:07:30 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:07:30.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:07:30 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2512: 321 pgs: 321 active+clean; 224 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 11 KiB/s wr, 187 op/s
Jan 20 15:07:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:07:30.776 160071 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:07:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:07:30.777 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:07:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:07:30.777 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:07:30 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e370 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:07:31 compute-0 ceph-mon[74360]: pgmap v2512: 321 pgs: 321 active+clean; 224 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 11 KiB/s wr, 187 op/s
Jan 20 15:07:31 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:07:31 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:07:31 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:07:31.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:07:31 compute-0 ceph-mgr[74653]: [devicehealth INFO root] Check health
Jan 20 15:07:32 compute-0 nova_compute[250018]: 2026-01-20 15:07:32.107 250022 DEBUG oslo_concurrency.lockutils [None req-7fa80d64-e96e-4674-ab92-0ae383b0a6f2 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] Acquiring lock "26cf4955-374b-4e19-992f-e9348d555edf" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:07:32 compute-0 nova_compute[250018]: 2026-01-20 15:07:32.108 250022 DEBUG oslo_concurrency.lockutils [None req-7fa80d64-e96e-4674-ab92-0ae383b0a6f2 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] Lock "26cf4955-374b-4e19-992f-e9348d555edf" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:07:32 compute-0 nova_compute[250018]: 2026-01-20 15:07:32.161 250022 DEBUG nova.objects.instance [None req-7fa80d64-e96e-4674-ab92-0ae383b0a6f2 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] Lazy-loading 'flavor' on Instance uuid 26cf4955-374b-4e19-992f-e9348d555edf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:07:32 compute-0 nova_compute[250018]: 2026-01-20 15:07:32.210 250022 DEBUG oslo_concurrency.lockutils [None req-7fa80d64-e96e-4674-ab92-0ae383b0a6f2 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] Lock "26cf4955-374b-4e19-992f-e9348d555edf" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.102s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:07:32 compute-0 ovn_controller[148666]: 2026-01-20T15:07:32Z|00592|binding|INFO|Releasing lport dfacd08f-5802-4e49-92e6-0c908c90ddcd from this chassis (sb_readonly=0)
Jan 20 15:07:32 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:07:32 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:07:32 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:07:32.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:07:32 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2513: 321 pgs: 321 active+clean; 200 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 11 KiB/s wr, 181 op/s
Jan 20 15:07:32 compute-0 nova_compute[250018]: 2026-01-20 15:07:32.385 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:07:32 compute-0 nova_compute[250018]: 2026-01-20 15:07:32.434 250022 DEBUG oslo_concurrency.lockutils [None req-7fa80d64-e96e-4674-ab92-0ae383b0a6f2 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] Acquiring lock "26cf4955-374b-4e19-992f-e9348d555edf" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:07:32 compute-0 nova_compute[250018]: 2026-01-20 15:07:32.434 250022 DEBUG oslo_concurrency.lockutils [None req-7fa80d64-e96e-4674-ab92-0ae383b0a6f2 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] Lock "26cf4955-374b-4e19-992f-e9348d555edf" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:07:32 compute-0 nova_compute[250018]: 2026-01-20 15:07:32.434 250022 INFO nova.compute.manager [None req-7fa80d64-e96e-4674-ab92-0ae383b0a6f2 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] [instance: 26cf4955-374b-4e19-992f-e9348d555edf] Attaching volume 522e3761-1020-4931-8036-475bf6ddd148 to /dev/vdb
Jan 20 15:07:32 compute-0 ceph-mon[74360]: pgmap v2513: 321 pgs: 321 active+clean; 200 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 11 KiB/s wr, 181 op/s
Jan 20 15:07:32 compute-0 nova_compute[250018]: 2026-01-20 15:07:32.654 250022 DEBUG os_brick.utils [None req-7fa80d64-e96e-4674-ab92-0ae383b0a6f2 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Jan 20 15:07:32 compute-0 nova_compute[250018]: 2026-01-20 15:07:32.655 268150 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:07:32 compute-0 nova_compute[250018]: 2026-01-20 15:07:32.669 268150 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:07:32 compute-0 nova_compute[250018]: 2026-01-20 15:07:32.669 268150 DEBUG oslo.privsep.daemon [-] privsep: reply[4ef81379-fbac-43f1-ba6b-9517a708b9f7]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:07:32 compute-0 nova_compute[250018]: 2026-01-20 15:07:32.670 268150 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:07:32 compute-0 nova_compute[250018]: 2026-01-20 15:07:32.680 268150 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:07:32 compute-0 nova_compute[250018]: 2026-01-20 15:07:32.680 268150 DEBUG oslo.privsep.daemon [-] privsep: reply[6cf489b7-1b85-4ff4-8167-df154118e100]: (4, ('InitiatorName=iqn.1994-05.com.redhat:228389a1f17e', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:07:32 compute-0 nova_compute[250018]: 2026-01-20 15:07:32.681 268150 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:07:32 compute-0 nova_compute[250018]: 2026-01-20 15:07:32.691 268150 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:07:32 compute-0 nova_compute[250018]: 2026-01-20 15:07:32.691 268150 DEBUG oslo.privsep.daemon [-] privsep: reply[6af59965-f6fc-4905-9365-6481e9ce298c]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:07:32 compute-0 nova_compute[250018]: 2026-01-20 15:07:32.692 268150 DEBUG oslo.privsep.daemon [-] privsep: reply[a1382528-8c4d-42b8-a4ed-eea2e55c894c]: (4, '35085f33-1a27-41e3-805d-02c7ac6a1d7f') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:07:32 compute-0 nova_compute[250018]: 2026-01-20 15:07:32.693 250022 DEBUG oslo_concurrency.processutils [None req-7fa80d64-e96e-4674-ab92-0ae383b0a6f2 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:07:32 compute-0 nova_compute[250018]: 2026-01-20 15:07:32.729 250022 DEBUG oslo_concurrency.processutils [None req-7fa80d64-e96e-4674-ab92-0ae383b0a6f2 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] CMD "nvme version" returned: 0 in 0.036s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:07:32 compute-0 nova_compute[250018]: 2026-01-20 15:07:32.731 250022 DEBUG os_brick.initiator.connectors.lightos [None req-7fa80d64-e96e-4674-ab92-0ae383b0a6f2 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Jan 20 15:07:32 compute-0 nova_compute[250018]: 2026-01-20 15:07:32.731 250022 DEBUG os_brick.initiator.connectors.lightos [None req-7fa80d64-e96e-4674-ab92-0ae383b0a6f2 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Jan 20 15:07:32 compute-0 nova_compute[250018]: 2026-01-20 15:07:32.732 250022 DEBUG os_brick.initiator.connectors.lightos [None req-7fa80d64-e96e-4674-ab92-0ae383b0a6f2 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Jan 20 15:07:32 compute-0 nova_compute[250018]: 2026-01-20 15:07:32.732 250022 DEBUG os_brick.utils [None req-7fa80d64-e96e-4674-ab92-0ae383b0a6f2 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] <== get_connector_properties: return (76ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:228389a1f17e', 'do_local_attach': False, 'nvme_hostid': '5350774e-8b5e-4dba-80a9-92d405981c1d', 'system uuid': '35085f33-1a27-41e3-805d-02c7ac6a1d7f', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Jan 20 15:07:32 compute-0 nova_compute[250018]: 2026-01-20 15:07:32.732 250022 DEBUG nova.virt.block_device [None req-7fa80d64-e96e-4674-ab92-0ae383b0a6f2 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] [instance: 26cf4955-374b-4e19-992f-e9348d555edf] Updating existing volume attachment record: fc82522d-8641-4a39-b1dc-3d17b874cd70 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Jan 20 15:07:33 compute-0 ovn_controller[148666]: 2026-01-20T15:07:33Z|00593|binding|INFO|Releasing lport dfacd08f-5802-4e49-92e6-0c908c90ddcd from this chassis (sb_readonly=0)
Jan 20 15:07:33 compute-0 nova_compute[250018]: 2026-01-20 15:07:33.416 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:07:33 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:07:33 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:07:33 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:07:33.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:07:34 compute-0 nova_compute[250018]: 2026-01-20 15:07:34.012 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:07:34 compute-0 nova_compute[250018]: 2026-01-20 15:07:34.047 250022 DEBUG os_brick.encryptors [None req-7fa80d64-e96e-4674-ab92-0ae383b0a6f2 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] Using volume encryption metadata '{'encryption_key_id': '44fbaa9e-d63a-4bf0-9535-e5558eaaa7e4', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-522e3761-1020-4931-8036-475bf6ddd148', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '522e3761-1020-4931-8036-475bf6ddd148', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '26cf4955-374b-4e19-992f-e9348d555edf', 'attached_at': '', 'detached_at': '', 'volume_id': '522e3761-1020-4931-8036-475bf6ddd148', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135
Jan 20 15:07:34 compute-0 nova_compute[250018]: 2026-01-20 15:07:34.055 250022 DEBUG barbicanclient.client [None req-7fa80d64-e96e-4674-ab92-0ae383b0a6f2 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] Creating Client object Client /usr/lib/python3.9/site-packages/barbicanclient/client.py:163
Jan 20 15:07:34 compute-0 nova_compute[250018]: 2026-01-20 15:07:34.082 250022 DEBUG barbicanclient.v1.secrets [None req-7fa80d64-e96e-4674-ab92-0ae383b0a6f2 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] Getting secret - Secret href: https://barbican-internal.openstack.svc:9311/secrets/44fbaa9e-d63a-4bf0-9535-e5558eaaa7e4 get /usr/lib/python3.9/site-packages/barbicanclient/v1/secrets.py:514
Jan 20 15:07:34 compute-0 nova_compute[250018]: 2026-01-20 15:07:34.083 250022 INFO barbicanclient.base [None req-7fa80d64-e96e-4674-ab92-0ae383b0a6f2 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] Calculated Secrets uuid ref: secrets/44fbaa9e-d63a-4bf0-9535-e5558eaaa7e4
Jan 20 15:07:34 compute-0 nova_compute[250018]: 2026-01-20 15:07:34.111 250022 DEBUG barbicanclient.client [None req-7fa80d64-e96e-4674-ab92-0ae383b0a6f2 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Jan 20 15:07:34 compute-0 nova_compute[250018]: 2026-01-20 15:07:34.111 250022 INFO barbicanclient.base [None req-7fa80d64-e96e-4674-ab92-0ae383b0a6f2 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] Calculated Secrets uuid ref: secrets/44fbaa9e-d63a-4bf0-9535-e5558eaaa7e4
Jan 20 15:07:34 compute-0 nova_compute[250018]: 2026-01-20 15:07:34.142 250022 DEBUG barbicanclient.client [None req-7fa80d64-e96e-4674-ab92-0ae383b0a6f2 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Jan 20 15:07:34 compute-0 nova_compute[250018]: 2026-01-20 15:07:34.142 250022 INFO barbicanclient.base [None req-7fa80d64-e96e-4674-ab92-0ae383b0a6f2 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] Calculated Secrets uuid ref: secrets/44fbaa9e-d63a-4bf0-9535-e5558eaaa7e4
Jan 20 15:07:34 compute-0 nova_compute[250018]: 2026-01-20 15:07:34.173 250022 DEBUG barbicanclient.client [None req-7fa80d64-e96e-4674-ab92-0ae383b0a6f2 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Jan 20 15:07:34 compute-0 nova_compute[250018]: 2026-01-20 15:07:34.174 250022 INFO barbicanclient.base [None req-7fa80d64-e96e-4674-ab92-0ae383b0a6f2 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] Calculated Secrets uuid ref: secrets/44fbaa9e-d63a-4bf0-9535-e5558eaaa7e4
Jan 20 15:07:34 compute-0 nova_compute[250018]: 2026-01-20 15:07:34.213 250022 DEBUG barbicanclient.client [None req-7fa80d64-e96e-4674-ab92-0ae383b0a6f2 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Jan 20 15:07:34 compute-0 nova_compute[250018]: 2026-01-20 15:07:34.214 250022 INFO barbicanclient.base [None req-7fa80d64-e96e-4674-ab92-0ae383b0a6f2 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] Calculated Secrets uuid ref: secrets/44fbaa9e-d63a-4bf0-9535-e5558eaaa7e4
Jan 20 15:07:34 compute-0 nova_compute[250018]: 2026-01-20 15:07:34.240 250022 DEBUG barbicanclient.client [None req-7fa80d64-e96e-4674-ab92-0ae383b0a6f2 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Jan 20 15:07:34 compute-0 nova_compute[250018]: 2026-01-20 15:07:34.240 250022 INFO barbicanclient.base [None req-7fa80d64-e96e-4674-ab92-0ae383b0a6f2 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] Calculated Secrets uuid ref: secrets/44fbaa9e-d63a-4bf0-9535-e5558eaaa7e4
Jan 20 15:07:34 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/1685285972' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:07:34 compute-0 nova_compute[250018]: 2026-01-20 15:07:34.282 250022 DEBUG barbicanclient.client [None req-7fa80d64-e96e-4674-ab92-0ae383b0a6f2 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Jan 20 15:07:34 compute-0 nova_compute[250018]: 2026-01-20 15:07:34.282 250022 INFO barbicanclient.base [None req-7fa80d64-e96e-4674-ab92-0ae383b0a6f2 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] Calculated Secrets uuid ref: secrets/44fbaa9e-d63a-4bf0-9535-e5558eaaa7e4
Jan 20 15:07:34 compute-0 nova_compute[250018]: 2026-01-20 15:07:34.315 250022 DEBUG barbicanclient.client [None req-7fa80d64-e96e-4674-ab92-0ae383b0a6f2 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Jan 20 15:07:34 compute-0 nova_compute[250018]: 2026-01-20 15:07:34.316 250022 INFO barbicanclient.base [None req-7fa80d64-e96e-4674-ab92-0ae383b0a6f2 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] Calculated Secrets uuid ref: secrets/44fbaa9e-d63a-4bf0-9535-e5558eaaa7e4
Jan 20 15:07:34 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:07:34 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:07:34 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:07:34.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:07:34 compute-0 nova_compute[250018]: 2026-01-20 15:07:34.336 250022 DEBUG barbicanclient.client [None req-7fa80d64-e96e-4674-ab92-0ae383b0a6f2 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Jan 20 15:07:34 compute-0 nova_compute[250018]: 2026-01-20 15:07:34.336 250022 INFO barbicanclient.base [None req-7fa80d64-e96e-4674-ab92-0ae383b0a6f2 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] Calculated Secrets uuid ref: secrets/44fbaa9e-d63a-4bf0-9535-e5558eaaa7e4
Jan 20 15:07:34 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2514: 321 pgs: 321 active+clean; 200 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 10 KiB/s wr, 146 op/s
Jan 20 15:07:34 compute-0 nova_compute[250018]: 2026-01-20 15:07:34.373 250022 DEBUG barbicanclient.client [None req-7fa80d64-e96e-4674-ab92-0ae383b0a6f2 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Jan 20 15:07:34 compute-0 nova_compute[250018]: 2026-01-20 15:07:34.373 250022 INFO barbicanclient.base [None req-7fa80d64-e96e-4674-ab92-0ae383b0a6f2 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] Calculated Secrets uuid ref: secrets/44fbaa9e-d63a-4bf0-9535-e5558eaaa7e4
Jan 20 15:07:34 compute-0 nova_compute[250018]: 2026-01-20 15:07:34.401 250022 DEBUG barbicanclient.client [None req-7fa80d64-e96e-4674-ab92-0ae383b0a6f2 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Jan 20 15:07:34 compute-0 nova_compute[250018]: 2026-01-20 15:07:34.401 250022 INFO barbicanclient.base [None req-7fa80d64-e96e-4674-ab92-0ae383b0a6f2 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] Calculated Secrets uuid ref: secrets/44fbaa9e-d63a-4bf0-9535-e5558eaaa7e4
Jan 20 15:07:34 compute-0 nova_compute[250018]: 2026-01-20 15:07:34.419 250022 DEBUG barbicanclient.client [None req-7fa80d64-e96e-4674-ab92-0ae383b0a6f2 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Jan 20 15:07:34 compute-0 nova_compute[250018]: 2026-01-20 15:07:34.419 250022 INFO barbicanclient.base [None req-7fa80d64-e96e-4674-ab92-0ae383b0a6f2 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] Calculated Secrets uuid ref: secrets/44fbaa9e-d63a-4bf0-9535-e5558eaaa7e4
Jan 20 15:07:34 compute-0 nova_compute[250018]: 2026-01-20 15:07:34.454 250022 DEBUG barbicanclient.client [None req-7fa80d64-e96e-4674-ab92-0ae383b0a6f2 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Jan 20 15:07:34 compute-0 nova_compute[250018]: 2026-01-20 15:07:34.455 250022 INFO barbicanclient.base [None req-7fa80d64-e96e-4674-ab92-0ae383b0a6f2 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] Calculated Secrets uuid ref: secrets/44fbaa9e-d63a-4bf0-9535-e5558eaaa7e4
Jan 20 15:07:34 compute-0 nova_compute[250018]: 2026-01-20 15:07:34.482 250022 DEBUG barbicanclient.client [None req-7fa80d64-e96e-4674-ab92-0ae383b0a6f2 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Jan 20 15:07:34 compute-0 nova_compute[250018]: 2026-01-20 15:07:34.482 250022 INFO barbicanclient.base [None req-7fa80d64-e96e-4674-ab92-0ae383b0a6f2 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] Calculated Secrets uuid ref: secrets/44fbaa9e-d63a-4bf0-9535-e5558eaaa7e4
Jan 20 15:07:34 compute-0 nova_compute[250018]: 2026-01-20 15:07:34.506 250022 DEBUG barbicanclient.client [None req-7fa80d64-e96e-4674-ab92-0ae383b0a6f2 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Jan 20 15:07:34 compute-0 nova_compute[250018]: 2026-01-20 15:07:34.506 250022 INFO barbicanclient.base [None req-7fa80d64-e96e-4674-ab92-0ae383b0a6f2 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] Calculated Secrets uuid ref: secrets/44fbaa9e-d63a-4bf0-9535-e5558eaaa7e4
Jan 20 15:07:34 compute-0 nova_compute[250018]: 2026-01-20 15:07:34.525 250022 DEBUG barbicanclient.client [None req-7fa80d64-e96e-4674-ab92-0ae383b0a6f2 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Jan 20 15:07:34 compute-0 nova_compute[250018]: 2026-01-20 15:07:34.526 250022 DEBUG nova.virt.libvirt.host [None req-7fa80d64-e96e-4674-ab92-0ae383b0a6f2 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] Secret XML: <secret ephemeral="no" private="no">
Jan 20 15:07:34 compute-0 nova_compute[250018]:   <usage type="volume">
Jan 20 15:07:34 compute-0 nova_compute[250018]:     <volume>522e3761-1020-4931-8036-475bf6ddd148</volume>
Jan 20 15:07:34 compute-0 nova_compute[250018]:   </usage>
Jan 20 15:07:34 compute-0 nova_compute[250018]: </secret>
Jan 20 15:07:34 compute-0 nova_compute[250018]:  create_secret /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1131
Jan 20 15:07:34 compute-0 nova_compute[250018]: 2026-01-20 15:07:34.534 250022 DEBUG nova.objects.instance [None req-7fa80d64-e96e-4674-ab92-0ae383b0a6f2 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] Lazy-loading 'flavor' on Instance uuid 26cf4955-374b-4e19-992f-e9348d555edf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:07:34 compute-0 nova_compute[250018]: 2026-01-20 15:07:34.578 250022 DEBUG nova.virt.libvirt.driver [None req-7fa80d64-e96e-4674-ab92-0ae383b0a6f2 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] [instance: 26cf4955-374b-4e19-992f-e9348d555edf] Attempting to attach volume 522e3761-1020-4931-8036-475bf6ddd148 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Jan 20 15:07:34 compute-0 nova_compute[250018]: 2026-01-20 15:07:34.581 250022 DEBUG nova.virt.libvirt.guest [None req-7fa80d64-e96e-4674-ab92-0ae383b0a6f2 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] attach device xml: <disk type="network" device="disk">
Jan 20 15:07:34 compute-0 nova_compute[250018]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 20 15:07:34 compute-0 nova_compute[250018]:   <source protocol="rbd" name="volumes/volume-522e3761-1020-4931-8036-475bf6ddd148">
Jan 20 15:07:34 compute-0 nova_compute[250018]:     <host name="192.168.122.100" port="6789"/>
Jan 20 15:07:34 compute-0 nova_compute[250018]:     <host name="192.168.122.102" port="6789"/>
Jan 20 15:07:34 compute-0 nova_compute[250018]:     <host name="192.168.122.101" port="6789"/>
Jan 20 15:07:34 compute-0 nova_compute[250018]:   </source>
Jan 20 15:07:34 compute-0 nova_compute[250018]:   <auth username="openstack">
Jan 20 15:07:34 compute-0 nova_compute[250018]:     <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 15:07:34 compute-0 nova_compute[250018]:   </auth>
Jan 20 15:07:34 compute-0 nova_compute[250018]:   <target dev="vdb" bus="virtio"/>
Jan 20 15:07:34 compute-0 nova_compute[250018]:   <serial>522e3761-1020-4931-8036-475bf6ddd148</serial>
Jan 20 15:07:34 compute-0 nova_compute[250018]:   <encryption format="luks">
Jan 20 15:07:34 compute-0 nova_compute[250018]:     <secret type="passphrase" uuid="3d05513b-7632-404c-9f3b-95645f8d5700"/>
Jan 20 15:07:34 compute-0 nova_compute[250018]:   </encryption>
Jan 20 15:07:34 compute-0 nova_compute[250018]: </disk>
Jan 20 15:07:34 compute-0 nova_compute[250018]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Jan 20 15:07:34 compute-0 nova_compute[250018]: 2026-01-20 15:07:34.723 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:07:34 compute-0 nova_compute[250018]: 2026-01-20 15:07:34.864 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:07:35 compute-0 ceph-mon[74360]: pgmap v2514: 321 pgs: 321 active+clean; 200 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 10 KiB/s wr, 146 op/s
Jan 20 15:07:35 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:07:35 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:07:35 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:07:35.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:07:35 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e370 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:07:36 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:07:36 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:07:36 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:07:36.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:07:36 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2515: 321 pgs: 321 active+clean; 200 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 28 KiB/s rd, 3.0 KiB/s wr, 41 op/s
Jan 20 15:07:37 compute-0 nova_compute[250018]: 2026-01-20 15:07:37.010 250022 DEBUG nova.virt.libvirt.driver [None req-7fa80d64-e96e-4674-ab92-0ae383b0a6f2 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:07:37 compute-0 nova_compute[250018]: 2026-01-20 15:07:37.011 250022 DEBUG nova.virt.libvirt.driver [None req-7fa80d64-e96e-4674-ab92-0ae383b0a6f2 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:07:37 compute-0 nova_compute[250018]: 2026-01-20 15:07:37.011 250022 DEBUG nova.virt.libvirt.driver [None req-7fa80d64-e96e-4674-ab92-0ae383b0a6f2 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:07:37 compute-0 nova_compute[250018]: 2026-01-20 15:07:37.011 250022 DEBUG nova.virt.libvirt.driver [None req-7fa80d64-e96e-4674-ab92-0ae383b0a6f2 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] No VIF found with MAC fa:16:3e:cb:54:6b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 15:07:37 compute-0 nova_compute[250018]: 2026-01-20 15:07:37.406 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:07:37 compute-0 ceph-mon[74360]: pgmap v2515: 321 pgs: 321 active+clean; 200 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 28 KiB/s rd, 3.0 KiB/s wr, 41 op/s
Jan 20 15:07:37 compute-0 nova_compute[250018]: 2026-01-20 15:07:37.598 250022 DEBUG oslo_concurrency.lockutils [None req-7fa80d64-e96e-4674-ab92-0ae383b0a6f2 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] Lock "26cf4955-374b-4e19-992f-e9348d555edf" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 5.164s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:07:37 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:07:37 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:07:37 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:07:37.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:07:37 compute-0 sudo[348209]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:07:37 compute-0 sudo[348209]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:07:37 compute-0 sudo[348209]: pam_unix(sudo:session): session closed for user root
Jan 20 15:07:38 compute-0 sudo[348234]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:07:38 compute-0 sudo[348234]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:07:38 compute-0 sudo[348234]: pam_unix(sudo:session): session closed for user root
Jan 20 15:07:38 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:07:38 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:07:38 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:07:38.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:07:38 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2516: 321 pgs: 321 active+clean; 200 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 24 KiB/s rd, 2.6 KiB/s wr, 36 op/s
Jan 20 15:07:39 compute-0 ceph-mon[74360]: pgmap v2516: 321 pgs: 321 active+clean; 200 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 24 KiB/s rd, 2.6 KiB/s wr, 36 op/s
Jan 20 15:07:39 compute-0 nova_compute[250018]: 2026-01-20 15:07:39.725 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:07:39 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:07:39 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:07:39 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:07:39.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:07:39 compute-0 nova_compute[250018]: 2026-01-20 15:07:39.865 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:07:40 compute-0 nova_compute[250018]: 2026-01-20 15:07:40.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:07:40 compute-0 nova_compute[250018]: 2026-01-20 15:07:40.065 250022 DEBUG oslo_concurrency.lockutils [None req-aab8f288-7361-4af4-97eb-99b1b81614c4 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] Acquiring lock "26cf4955-374b-4e19-992f-e9348d555edf" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:07:40 compute-0 nova_compute[250018]: 2026-01-20 15:07:40.065 250022 DEBUG oslo_concurrency.lockutils [None req-aab8f288-7361-4af4-97eb-99b1b81614c4 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] Lock "26cf4955-374b-4e19-992f-e9348d555edf" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:07:40 compute-0 nova_compute[250018]: 2026-01-20 15:07:40.098 250022 INFO nova.compute.manager [None req-aab8f288-7361-4af4-97eb-99b1b81614c4 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] [instance: 26cf4955-374b-4e19-992f-e9348d555edf] Detaching volume 522e3761-1020-4931-8036-475bf6ddd148
Jan 20 15:07:40 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:07:40 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:07:40 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:07:40.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:07:40 compute-0 nova_compute[250018]: 2026-01-20 15:07:40.330 250022 INFO nova.virt.block_device [None req-aab8f288-7361-4af4-97eb-99b1b81614c4 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] [instance: 26cf4955-374b-4e19-992f-e9348d555edf] Attempting to driver detach volume 522e3761-1020-4931-8036-475bf6ddd148 from mountpoint /dev/vdb
Jan 20 15:07:40 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2517: 321 pgs: 321 active+clean; 200 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 24 KiB/s rd, 4.2 KiB/s wr, 36 op/s
Jan 20 15:07:40 compute-0 nova_compute[250018]: 2026-01-20 15:07:40.554 250022 DEBUG os_brick.encryptors [None req-aab8f288-7361-4af4-97eb-99b1b81614c4 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] Using volume encryption metadata '{'encryption_key_id': '44fbaa9e-d63a-4bf0-9535-e5558eaaa7e4', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-522e3761-1020-4931-8036-475bf6ddd148', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '522e3761-1020-4931-8036-475bf6ddd148', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '26cf4955-374b-4e19-992f-e9348d555edf', 'attached_at': '', 'detached_at': '', 'volume_id': '522e3761-1020-4931-8036-475bf6ddd148', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135
Jan 20 15:07:40 compute-0 nova_compute[250018]: 2026-01-20 15:07:40.564 250022 DEBUG nova.virt.libvirt.driver [None req-aab8f288-7361-4af4-97eb-99b1b81614c4 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] Attempting to detach device vdb from instance 26cf4955-374b-4e19-992f-e9348d555edf from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Jan 20 15:07:40 compute-0 nova_compute[250018]: 2026-01-20 15:07:40.564 250022 DEBUG nova.virt.libvirt.guest [None req-aab8f288-7361-4af4-97eb-99b1b81614c4 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] detach device xml: <disk type="network" device="disk">
Jan 20 15:07:40 compute-0 nova_compute[250018]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 20 15:07:40 compute-0 nova_compute[250018]:   <source protocol="rbd" name="volumes/volume-522e3761-1020-4931-8036-475bf6ddd148">
Jan 20 15:07:40 compute-0 nova_compute[250018]:     <host name="192.168.122.100" port="6789"/>
Jan 20 15:07:40 compute-0 nova_compute[250018]:     <host name="192.168.122.102" port="6789"/>
Jan 20 15:07:40 compute-0 nova_compute[250018]:     <host name="192.168.122.101" port="6789"/>
Jan 20 15:07:40 compute-0 nova_compute[250018]:   </source>
Jan 20 15:07:40 compute-0 nova_compute[250018]:   <target dev="vdb" bus="virtio"/>
Jan 20 15:07:40 compute-0 nova_compute[250018]:   <serial>522e3761-1020-4931-8036-475bf6ddd148</serial>
Jan 20 15:07:40 compute-0 nova_compute[250018]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Jan 20 15:07:40 compute-0 nova_compute[250018]:   <encryption format="luks">
Jan 20 15:07:40 compute-0 nova_compute[250018]:     <secret type="passphrase" uuid="3d05513b-7632-404c-9f3b-95645f8d5700"/>
Jan 20 15:07:40 compute-0 nova_compute[250018]:   </encryption>
Jan 20 15:07:40 compute-0 nova_compute[250018]: </disk>
Jan 20 15:07:40 compute-0 nova_compute[250018]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 20 15:07:40 compute-0 ceph-mon[74360]: pgmap v2517: 321 pgs: 321 active+clean; 200 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 24 KiB/s rd, 4.2 KiB/s wr, 36 op/s
Jan 20 15:07:40 compute-0 nova_compute[250018]: 2026-01-20 15:07:40.576 250022 INFO nova.virt.libvirt.driver [None req-aab8f288-7361-4af4-97eb-99b1b81614c4 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] Successfully detached device vdb from instance 26cf4955-374b-4e19-992f-e9348d555edf from the persistent domain config.
Jan 20 15:07:40 compute-0 nova_compute[250018]: 2026-01-20 15:07:40.576 250022 DEBUG nova.virt.libvirt.driver [None req-aab8f288-7361-4af4-97eb-99b1b81614c4 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 26cf4955-374b-4e19-992f-e9348d555edf from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Jan 20 15:07:40 compute-0 nova_compute[250018]: 2026-01-20 15:07:40.577 250022 DEBUG nova.virt.libvirt.guest [None req-aab8f288-7361-4af4-97eb-99b1b81614c4 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] detach device xml: <disk type="network" device="disk">
Jan 20 15:07:40 compute-0 nova_compute[250018]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 20 15:07:40 compute-0 nova_compute[250018]:   <source protocol="rbd" name="volumes/volume-522e3761-1020-4931-8036-475bf6ddd148">
Jan 20 15:07:40 compute-0 nova_compute[250018]:     <host name="192.168.122.100" port="6789"/>
Jan 20 15:07:40 compute-0 nova_compute[250018]:     <host name="192.168.122.102" port="6789"/>
Jan 20 15:07:40 compute-0 nova_compute[250018]:     <host name="192.168.122.101" port="6789"/>
Jan 20 15:07:40 compute-0 nova_compute[250018]:   </source>
Jan 20 15:07:40 compute-0 nova_compute[250018]:   <target dev="vdb" bus="virtio"/>
Jan 20 15:07:40 compute-0 nova_compute[250018]:   <serial>522e3761-1020-4931-8036-475bf6ddd148</serial>
Jan 20 15:07:40 compute-0 nova_compute[250018]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Jan 20 15:07:40 compute-0 nova_compute[250018]:   <encryption format="luks">
Jan 20 15:07:40 compute-0 nova_compute[250018]:     <secret type="passphrase" uuid="3d05513b-7632-404c-9f3b-95645f8d5700"/>
Jan 20 15:07:40 compute-0 nova_compute[250018]:   </encryption>
Jan 20 15:07:40 compute-0 nova_compute[250018]: </disk>
Jan 20 15:07:40 compute-0 nova_compute[250018]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 20 15:07:40 compute-0 nova_compute[250018]: 2026-01-20 15:07:40.704 250022 DEBUG nova.virt.libvirt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Received event <DeviceRemovedEvent: 1768921660.7042096, 26cf4955-374b-4e19-992f-e9348d555edf => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Jan 20 15:07:40 compute-0 nova_compute[250018]: 2026-01-20 15:07:40.706 250022 DEBUG nova.virt.libvirt.driver [None req-aab8f288-7361-4af4-97eb-99b1b81614c4 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 26cf4955-374b-4e19-992f-e9348d555edf _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Jan 20 15:07:40 compute-0 nova_compute[250018]: 2026-01-20 15:07:40.709 250022 INFO nova.virt.libvirt.driver [None req-aab8f288-7361-4af4-97eb-99b1b81614c4 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] Successfully detached device vdb from instance 26cf4955-374b-4e19-992f-e9348d555edf from the live domain config.
Jan 20 15:07:40 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e370 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:07:40 compute-0 nova_compute[250018]: 2026-01-20 15:07:40.967 250022 DEBUG nova.objects.instance [None req-aab8f288-7361-4af4-97eb-99b1b81614c4 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] Lazy-loading 'flavor' on Instance uuid 26cf4955-374b-4e19-992f-e9348d555edf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:07:41 compute-0 nova_compute[250018]: 2026-01-20 15:07:41.019 250022 DEBUG oslo_concurrency.lockutils [None req-aab8f288-7361-4af4-97eb-99b1b81614c4 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] Lock "26cf4955-374b-4e19-992f-e9348d555edf" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.954s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:07:41 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:07:41 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:07:41 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:07:41.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:07:41 compute-0 nova_compute[250018]: 2026-01-20 15:07:41.924 250022 DEBUG oslo_concurrency.lockutils [None req-268e4408-c66c-448f-a309-9b7feb8ca43c 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] Acquiring lock "26cf4955-374b-4e19-992f-e9348d555edf" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:07:41 compute-0 nova_compute[250018]: 2026-01-20 15:07:41.925 250022 DEBUG oslo_concurrency.lockutils [None req-268e4408-c66c-448f-a309-9b7feb8ca43c 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] Lock "26cf4955-374b-4e19-992f-e9348d555edf" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:07:41 compute-0 nova_compute[250018]: 2026-01-20 15:07:41.925 250022 DEBUG oslo_concurrency.lockutils [None req-268e4408-c66c-448f-a309-9b7feb8ca43c 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] Acquiring lock "26cf4955-374b-4e19-992f-e9348d555edf-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:07:41 compute-0 nova_compute[250018]: 2026-01-20 15:07:41.926 250022 DEBUG oslo_concurrency.lockutils [None req-268e4408-c66c-448f-a309-9b7feb8ca43c 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] Lock "26cf4955-374b-4e19-992f-e9348d555edf-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:07:41 compute-0 nova_compute[250018]: 2026-01-20 15:07:41.926 250022 DEBUG oslo_concurrency.lockutils [None req-268e4408-c66c-448f-a309-9b7feb8ca43c 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] Lock "26cf4955-374b-4e19-992f-e9348d555edf-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:07:41 compute-0 nova_compute[250018]: 2026-01-20 15:07:41.928 250022 INFO nova.compute.manager [None req-268e4408-c66c-448f-a309-9b7feb8ca43c 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] [instance: 26cf4955-374b-4e19-992f-e9348d555edf] Terminating instance
Jan 20 15:07:41 compute-0 nova_compute[250018]: 2026-01-20 15:07:41.929 250022 DEBUG nova.compute.manager [None req-268e4408-c66c-448f-a309-9b7feb8ca43c 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] [instance: 26cf4955-374b-4e19-992f-e9348d555edf] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 15:07:41 compute-0 kernel: tap7656dd86-9d (unregistering): left promiscuous mode
Jan 20 15:07:42 compute-0 NetworkManager[48960]: <info>  [1768921662.0005] device (tap7656dd86-9d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 15:07:42 compute-0 ovn_controller[148666]: 2026-01-20T15:07:42Z|00594|binding|INFO|Releasing lport 7656dd86-9d08-4781-86a7-c3f4abd07200 from this chassis (sb_readonly=0)
Jan 20 15:07:42 compute-0 nova_compute[250018]: 2026-01-20 15:07:42.013 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:07:42 compute-0 ovn_controller[148666]: 2026-01-20T15:07:42Z|00595|binding|INFO|Setting lport 7656dd86-9d08-4781-86a7-c3f4abd07200 down in Southbound
Jan 20 15:07:42 compute-0 ovn_controller[148666]: 2026-01-20T15:07:42Z|00596|binding|INFO|Removing iface tap7656dd86-9d ovn-installed in OVS
Jan 20 15:07:42 compute-0 nova_compute[250018]: 2026-01-20 15:07:42.017 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:07:42 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:07:42.022 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cb:54:6b 10.100.0.3'], port_security=['fa:16:3e:cb:54:6b 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '26cf4955-374b-4e19-992f-e9348d555edf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aca22591-2999-4ce4-8358-8365c76ef740', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '526db518ca3942b58ee346d4bd970e42', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9d3da700-4f45-4293-94f4-24e0902718f4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.214'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ff8b5d28-350a-493f-9b5e-8ff92d1d40ce, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=7656dd86-9d08-4781-86a7-c3f4abd07200) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:07:42 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:07:42.023 160071 INFO neutron.agent.ovn.metadata.agent [-] Port 7656dd86-9d08-4781-86a7-c3f4abd07200 in datapath aca22591-2999-4ce4-8358-8365c76ef740 unbound from our chassis
Jan 20 15:07:42 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:07:42.024 160071 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network aca22591-2999-4ce4-8358-8365c76ef740, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 15:07:42 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:07:42.025 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[29870f18-857b-49e6-a5ad-aa483ab85522]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:07:42 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:07:42.026 160071 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-aca22591-2999-4ce4-8358-8365c76ef740 namespace which is not needed anymore
Jan 20 15:07:42 compute-0 nova_compute[250018]: 2026-01-20 15:07:42.043 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:07:42 compute-0 systemd[1]: machine-qemu\x2d74\x2dinstance\x2d000000a6.scope: Deactivated successfully.
Jan 20 15:07:42 compute-0 systemd[1]: machine-qemu\x2d74\x2dinstance\x2d000000a6.scope: Consumed 17.802s CPU time.
Jan 20 15:07:42 compute-0 systemd-machined[216401]: Machine qemu-74-instance-000000a6 terminated.
Jan 20 15:07:42 compute-0 neutron-haproxy-ovnmeta-aca22591-2999-4ce4-8358-8365c76ef740[347800]: [NOTICE]   (347805) : haproxy version is 2.8.14-c23fe91
Jan 20 15:07:42 compute-0 neutron-haproxy-ovnmeta-aca22591-2999-4ce4-8358-8365c76ef740[347800]: [NOTICE]   (347805) : path to executable is /usr/sbin/haproxy
Jan 20 15:07:42 compute-0 neutron-haproxy-ovnmeta-aca22591-2999-4ce4-8358-8365c76ef740[347800]: [WARNING]  (347805) : Exiting Master process...
Jan 20 15:07:42 compute-0 neutron-haproxy-ovnmeta-aca22591-2999-4ce4-8358-8365c76ef740[347800]: [WARNING]  (347805) : Exiting Master process...
Jan 20 15:07:42 compute-0 neutron-haproxy-ovnmeta-aca22591-2999-4ce4-8358-8365c76ef740[347800]: [ALERT]    (347805) : Current worker (347807) exited with code 143 (Terminated)
Jan 20 15:07:42 compute-0 neutron-haproxy-ovnmeta-aca22591-2999-4ce4-8358-8365c76ef740[347800]: [WARNING]  (347805) : All workers exited. Exiting... (0)
Jan 20 15:07:42 compute-0 nova_compute[250018]: 2026-01-20 15:07:42.171 250022 INFO nova.virt.libvirt.driver [-] [instance: 26cf4955-374b-4e19-992f-e9348d555edf] Instance destroyed successfully.
Jan 20 15:07:42 compute-0 nova_compute[250018]: 2026-01-20 15:07:42.171 250022 DEBUG nova.objects.instance [None req-268e4408-c66c-448f-a309-9b7feb8ca43c 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] Lazy-loading 'resources' on Instance uuid 26cf4955-374b-4e19-992f-e9348d555edf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:07:42 compute-0 systemd[1]: libpod-77371b5b700686bff500d6e5821bb38a1b475b8cbbdaecaf2b7da5c177f30e8b.scope: Deactivated successfully.
Jan 20 15:07:42 compute-0 podman[348288]: 2026-01-20 15:07:42.183498828 +0000 UTC m=+0.063212606 container died 77371b5b700686bff500d6e5821bb38a1b475b8cbbdaecaf2b7da5c177f30e8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-aca22591-2999-4ce4-8358-8365c76ef740, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:07:42 compute-0 nova_compute[250018]: 2026-01-20 15:07:42.198 250022 DEBUG nova.virt.libvirt.vif [None req-268e4408-c66c-448f-a309-9b7feb8ca43c 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T15:06:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-1623179757',display_name='tempest-TestEncryptedCinderVolumes-server-1623179757',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-1623179757',id=166,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBvypThr9hk8H6PWhaLI5kNHP4GspwPBH/HTsm7o79l/I4wkXyvfHjS2+7YQopu7pgpa64VxpTqA3bKvRglz3oqm0zsBygBhRaMVrKGm+gAEA36pXMW5IBaDlXadpljSQg==',key_name='tempest-keypair-277062450',keypairs=<?>,launch_index=0,launched_at=2026-01-20T15:06:55Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='526db518ca3942b58ee346d4bd970e42',ramdisk_id='',reservation_id='r-2wppq000',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestEncryptedCinderVolumes-389718176',owner_user_name='tempest-TestEncryptedCinderVolumes-389718176-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T15:06:55Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1cdce555ec694255a154517f28a12ae5',uuid=26cf4955-374b-4e19-992f-e9348d555edf,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7656dd86-9d08-4781-86a7-c3f4abd07200", "address": "fa:16:3e:cb:54:6b", "network": {"id": "aca22591-2999-4ce4-8358-8365c76ef740", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-134827220-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": 
[], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.214", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "526db518ca3942b58ee346d4bd970e42", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7656dd86-9d", "ovs_interfaceid": "7656dd86-9d08-4781-86a7-c3f4abd07200", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 15:07:42 compute-0 nova_compute[250018]: 2026-01-20 15:07:42.199 250022 DEBUG nova.network.os_vif_util [None req-268e4408-c66c-448f-a309-9b7feb8ca43c 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] Converting VIF {"id": "7656dd86-9d08-4781-86a7-c3f4abd07200", "address": "fa:16:3e:cb:54:6b", "network": {"id": "aca22591-2999-4ce4-8358-8365c76ef740", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-134827220-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.214", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "526db518ca3942b58ee346d4bd970e42", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7656dd86-9d", "ovs_interfaceid": "7656dd86-9d08-4781-86a7-c3f4abd07200", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:07:42 compute-0 nova_compute[250018]: 2026-01-20 15:07:42.199 250022 DEBUG nova.network.os_vif_util [None req-268e4408-c66c-448f-a309-9b7feb8ca43c 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:cb:54:6b,bridge_name='br-int',has_traffic_filtering=True,id=7656dd86-9d08-4781-86a7-c3f4abd07200,network=Network(aca22591-2999-4ce4-8358-8365c76ef740),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7656dd86-9d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:07:42 compute-0 nova_compute[250018]: 2026-01-20 15:07:42.200 250022 DEBUG os_vif [None req-268e4408-c66c-448f-a309-9b7feb8ca43c 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:cb:54:6b,bridge_name='br-int',has_traffic_filtering=True,id=7656dd86-9d08-4781-86a7-c3f4abd07200,network=Network(aca22591-2999-4ce4-8358-8365c76ef740),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7656dd86-9d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 15:07:42 compute-0 nova_compute[250018]: 2026-01-20 15:07:42.203 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:07:42 compute-0 nova_compute[250018]: 2026-01-20 15:07:42.203 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7656dd86-9d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:07:42 compute-0 nova_compute[250018]: 2026-01-20 15:07:42.234 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:07:42 compute-0 nova_compute[250018]: 2026-01-20 15:07:42.236 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:07:42 compute-0 nova_compute[250018]: 2026-01-20 15:07:42.238 250022 INFO os_vif [None req-268e4408-c66c-448f-a309-9b7feb8ca43c 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:cb:54:6b,bridge_name='br-int',has_traffic_filtering=True,id=7656dd86-9d08-4781-86a7-c3f4abd07200,network=Network(aca22591-2999-4ce4-8358-8365c76ef740),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7656dd86-9d')
Jan 20 15:07:42 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-77371b5b700686bff500d6e5821bb38a1b475b8cbbdaecaf2b7da5c177f30e8b-userdata-shm.mount: Deactivated successfully.
Jan 20 15:07:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-5a91053e1f85d3aa07c363e6b5d42c3c432c6566479a3d61ca73af419e05a8c2-merged.mount: Deactivated successfully.
Jan 20 15:07:42 compute-0 podman[348288]: 2026-01-20 15:07:42.254234786 +0000 UTC m=+0.133948544 container cleanup 77371b5b700686bff500d6e5821bb38a1b475b8cbbdaecaf2b7da5c177f30e8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-aca22591-2999-4ce4-8358-8365c76ef740, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 20 15:07:42 compute-0 systemd[1]: libpod-conmon-77371b5b700686bff500d6e5821bb38a1b475b8cbbdaecaf2b7da5c177f30e8b.scope: Deactivated successfully.
Jan 20 15:07:42 compute-0 nova_compute[250018]: 2026-01-20 15:07:42.318 250022 DEBUG nova.compute.manager [req-73cda994-a884-40f3-bf97-26b1da024000 req-95c622aa-ae31-4a66-95e9-e38d02fb1cb1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 26cf4955-374b-4e19-992f-e9348d555edf] Received event network-vif-unplugged-7656dd86-9d08-4781-86a7-c3f4abd07200 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:07:42 compute-0 nova_compute[250018]: 2026-01-20 15:07:42.318 250022 DEBUG oslo_concurrency.lockutils [req-73cda994-a884-40f3-bf97-26b1da024000 req-95c622aa-ae31-4a66-95e9-e38d02fb1cb1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "26cf4955-374b-4e19-992f-e9348d555edf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:07:42 compute-0 nova_compute[250018]: 2026-01-20 15:07:42.318 250022 DEBUG oslo_concurrency.lockutils [req-73cda994-a884-40f3-bf97-26b1da024000 req-95c622aa-ae31-4a66-95e9-e38d02fb1cb1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "26cf4955-374b-4e19-992f-e9348d555edf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:07:42 compute-0 nova_compute[250018]: 2026-01-20 15:07:42.318 250022 DEBUG oslo_concurrency.lockutils [req-73cda994-a884-40f3-bf97-26b1da024000 req-95c622aa-ae31-4a66-95e9-e38d02fb1cb1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "26cf4955-374b-4e19-992f-e9348d555edf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:07:42 compute-0 nova_compute[250018]: 2026-01-20 15:07:42.319 250022 DEBUG nova.compute.manager [req-73cda994-a884-40f3-bf97-26b1da024000 req-95c622aa-ae31-4a66-95e9-e38d02fb1cb1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 26cf4955-374b-4e19-992f-e9348d555edf] No waiting events found dispatching network-vif-unplugged-7656dd86-9d08-4781-86a7-c3f4abd07200 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:07:42 compute-0 nova_compute[250018]: 2026-01-20 15:07:42.319 250022 DEBUG nova.compute.manager [req-73cda994-a884-40f3-bf97-26b1da024000 req-95c622aa-ae31-4a66-95e9-e38d02fb1cb1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 26cf4955-374b-4e19-992f-e9348d555edf] Received event network-vif-unplugged-7656dd86-9d08-4781-86a7-c3f4abd07200 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 20 15:07:42 compute-0 podman[348341]: 2026-01-20 15:07:42.325652173 +0000 UTC m=+0.044985674 container remove 77371b5b700686bff500d6e5821bb38a1b475b8cbbdaecaf2b7da5c177f30e8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-aca22591-2999-4ce4-8358-8365c76ef740, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 15:07:42 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:07:42 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:07:42 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:07:42.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:07:42 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:07:42.334 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[f8500668-91c0-470a-b11a-1e379aed3ae6]: (4, ('Tue Jan 20 03:07:42 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-aca22591-2999-4ce4-8358-8365c76ef740 (77371b5b700686bff500d6e5821bb38a1b475b8cbbdaecaf2b7da5c177f30e8b)\n77371b5b700686bff500d6e5821bb38a1b475b8cbbdaecaf2b7da5c177f30e8b\nTue Jan 20 03:07:42 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-aca22591-2999-4ce4-8358-8365c76ef740 (77371b5b700686bff500d6e5821bb38a1b475b8cbbdaecaf2b7da5c177f30e8b)\n77371b5b700686bff500d6e5821bb38a1b475b8cbbdaecaf2b7da5c177f30e8b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:07:42 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:07:42.336 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[841117b1-ce3e-4ba6-b2b8-43ca4ccf87eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:07:42 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:07:42.337 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaca22591-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:07:42 compute-0 nova_compute[250018]: 2026-01-20 15:07:42.338 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:07:42 compute-0 kernel: tapaca22591-20: left promiscuous mode
Jan 20 15:07:42 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2518: 321 pgs: 321 active+clean; 200 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 26 KiB/s rd, 2.2 KiB/s wr, 7 op/s
Jan 20 15:07:42 compute-0 nova_compute[250018]: 2026-01-20 15:07:42.353 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:07:42 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:07:42.355 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[97d97615-b4c7-4362-980c-e9d5446e5d6e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:07:42 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:07:42.371 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[d0587749-0eb6-4d17-a784-c5b9f288a1ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:07:42 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:07:42.373 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[0f7d3de1-53ca-4e1b-ade6-b02a169f1c4c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:07:42 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:07:42.389 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[a02b4ad0-a9d7-47b9-a4d6-10b59feb27c5]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 755551, 'reachable_time': 21440, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 348359, 'error': None, 'target': 'ovnmeta-aca22591-2999-4ce4-8358-8365c76ef740', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:07:42 compute-0 systemd[1]: run-netns-ovnmeta\x2daca22591\x2d2999\x2d4ce4\x2d8358\x2d8365c76ef740.mount: Deactivated successfully.
Jan 20 15:07:42 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:07:42.391 160517 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-aca22591-2999-4ce4-8358-8365c76ef740 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 15:07:42 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:07:42.391 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[3445d45d-340d-443e-afdb-dd6c11558ea3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:07:42 compute-0 nova_compute[250018]: 2026-01-20 15:07:42.694 250022 INFO nova.virt.libvirt.driver [None req-268e4408-c66c-448f-a309-9b7feb8ca43c 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] [instance: 26cf4955-374b-4e19-992f-e9348d555edf] Deleting instance files /var/lib/nova/instances/26cf4955-374b-4e19-992f-e9348d555edf_del
Jan 20 15:07:42 compute-0 nova_compute[250018]: 2026-01-20 15:07:42.695 250022 INFO nova.virt.libvirt.driver [None req-268e4408-c66c-448f-a309-9b7feb8ca43c 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] [instance: 26cf4955-374b-4e19-992f-e9348d555edf] Deletion of /var/lib/nova/instances/26cf4955-374b-4e19-992f-e9348d555edf_del complete
Jan 20 15:07:42 compute-0 nova_compute[250018]: 2026-01-20 15:07:42.751 250022 INFO nova.compute.manager [None req-268e4408-c66c-448f-a309-9b7feb8ca43c 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] [instance: 26cf4955-374b-4e19-992f-e9348d555edf] Took 0.82 seconds to destroy the instance on the hypervisor.
Jan 20 15:07:42 compute-0 nova_compute[250018]: 2026-01-20 15:07:42.752 250022 DEBUG oslo.service.loopingcall [None req-268e4408-c66c-448f-a309-9b7feb8ca43c 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 15:07:42 compute-0 nova_compute[250018]: 2026-01-20 15:07:42.752 250022 DEBUG nova.compute.manager [-] [instance: 26cf4955-374b-4e19-992f-e9348d555edf] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 15:07:42 compute-0 nova_compute[250018]: 2026-01-20 15:07:42.752 250022 DEBUG nova.network.neutron [-] [instance: 26cf4955-374b-4e19-992f-e9348d555edf] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 15:07:43 compute-0 ceph-mon[74360]: pgmap v2518: 321 pgs: 321 active+clean; 200 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 26 KiB/s rd, 2.2 KiB/s wr, 7 op/s
Jan 20 15:07:43 compute-0 nova_compute[250018]: 2026-01-20 15:07:43.751 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:07:43 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:07:43 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:07:43 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:07:43.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:07:43 compute-0 nova_compute[250018]: 2026-01-20 15:07:43.945 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:07:44 compute-0 nova_compute[250018]: 2026-01-20 15:07:44.222 250022 DEBUG nova.network.neutron [-] [instance: 26cf4955-374b-4e19-992f-e9348d555edf] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:07:44 compute-0 nova_compute[250018]: 2026-01-20 15:07:44.258 250022 INFO nova.compute.manager [-] [instance: 26cf4955-374b-4e19-992f-e9348d555edf] Took 1.51 seconds to deallocate network for instance.
Jan 20 15:07:44 compute-0 nova_compute[250018]: 2026-01-20 15:07:44.318 250022 DEBUG oslo_concurrency.lockutils [None req-268e4408-c66c-448f-a309-9b7feb8ca43c 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:07:44 compute-0 nova_compute[250018]: 2026-01-20 15:07:44.318 250022 DEBUG oslo_concurrency.lockutils [None req-268e4408-c66c-448f-a309-9b7feb8ca43c 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:07:44 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:07:44 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:07:44 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:07:44.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:07:44 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2519: 321 pgs: 321 active+clean; 177 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 23 KiB/s rd, 2.7 KiB/s wr, 3 op/s
Jan 20 15:07:44 compute-0 nova_compute[250018]: 2026-01-20 15:07:44.431 250022 DEBUG oslo_concurrency.processutils [None req-268e4408-c66c-448f-a309-9b7feb8ca43c 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:07:44 compute-0 nova_compute[250018]: 2026-01-20 15:07:44.463 250022 DEBUG nova.compute.manager [req-05ae0b83-b896-4f1a-a0f0-39b2032cb783 req-2b39dca2-444a-4df4-b07d-b4a85f04f426 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 26cf4955-374b-4e19-992f-e9348d555edf] Received event network-vif-plugged-7656dd86-9d08-4781-86a7-c3f4abd07200 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:07:44 compute-0 nova_compute[250018]: 2026-01-20 15:07:44.463 250022 DEBUG oslo_concurrency.lockutils [req-05ae0b83-b896-4f1a-a0f0-39b2032cb783 req-2b39dca2-444a-4df4-b07d-b4a85f04f426 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "26cf4955-374b-4e19-992f-e9348d555edf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:07:44 compute-0 nova_compute[250018]: 2026-01-20 15:07:44.463 250022 DEBUG oslo_concurrency.lockutils [req-05ae0b83-b896-4f1a-a0f0-39b2032cb783 req-2b39dca2-444a-4df4-b07d-b4a85f04f426 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "26cf4955-374b-4e19-992f-e9348d555edf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:07:44 compute-0 nova_compute[250018]: 2026-01-20 15:07:44.464 250022 DEBUG oslo_concurrency.lockutils [req-05ae0b83-b896-4f1a-a0f0-39b2032cb783 req-2b39dca2-444a-4df4-b07d-b4a85f04f426 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "26cf4955-374b-4e19-992f-e9348d555edf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:07:44 compute-0 nova_compute[250018]: 2026-01-20 15:07:44.464 250022 DEBUG nova.compute.manager [req-05ae0b83-b896-4f1a-a0f0-39b2032cb783 req-2b39dca2-444a-4df4-b07d-b4a85f04f426 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 26cf4955-374b-4e19-992f-e9348d555edf] No waiting events found dispatching network-vif-plugged-7656dd86-9d08-4781-86a7-c3f4abd07200 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:07:44 compute-0 nova_compute[250018]: 2026-01-20 15:07:44.464 250022 WARNING nova.compute.manager [req-05ae0b83-b896-4f1a-a0f0-39b2032cb783 req-2b39dca2-444a-4df4-b07d-b4a85f04f426 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 26cf4955-374b-4e19-992f-e9348d555edf] Received unexpected event network-vif-plugged-7656dd86-9d08-4781-86a7-c3f4abd07200 for instance with vm_state deleted and task_state None.
Jan 20 15:07:44 compute-0 nova_compute[250018]: 2026-01-20 15:07:44.464 250022 DEBUG nova.compute.manager [req-05ae0b83-b896-4f1a-a0f0-39b2032cb783 req-2b39dca2-444a-4df4-b07d-b4a85f04f426 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 26cf4955-374b-4e19-992f-e9348d555edf] Received event network-vif-deleted-7656dd86-9d08-4781-86a7-c3f4abd07200 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:07:44 compute-0 nova_compute[250018]: 2026-01-20 15:07:44.727 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:07:44 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:07:44 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2631843235' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:07:44 compute-0 nova_compute[250018]: 2026-01-20 15:07:44.880 250022 DEBUG oslo_concurrency.processutils [None req-268e4408-c66c-448f-a309-9b7feb8ca43c 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:07:44 compute-0 nova_compute[250018]: 2026-01-20 15:07:44.888 250022 DEBUG nova.compute.provider_tree [None req-268e4408-c66c-448f-a309-9b7feb8ca43c 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:07:44 compute-0 nova_compute[250018]: 2026-01-20 15:07:44.909 250022 DEBUG nova.scheduler.client.report [None req-268e4408-c66c-448f-a309-9b7feb8ca43c 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:07:44 compute-0 nova_compute[250018]: 2026-01-20 15:07:44.933 250022 DEBUG oslo_concurrency.lockutils [None req-268e4408-c66c-448f-a309-9b7feb8ca43c 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.614s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:07:44 compute-0 nova_compute[250018]: 2026-01-20 15:07:44.969 250022 INFO nova.scheduler.client.report [None req-268e4408-c66c-448f-a309-9b7feb8ca43c 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] Deleted allocations for instance 26cf4955-374b-4e19-992f-e9348d555edf
Jan 20 15:07:45 compute-0 nova_compute[250018]: 2026-01-20 15:07:45.041 250022 DEBUG oslo_concurrency.lockutils [None req-268e4408-c66c-448f-a309-9b7feb8ca43c 1cdce555ec694255a154517f28a12ae5 526db518ca3942b58ee346d4bd970e42 - - default default] Lock "26cf4955-374b-4e19-992f-e9348d555edf" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.117s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:07:45 compute-0 ceph-mon[74360]: pgmap v2519: 321 pgs: 321 active+clean; 177 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 23 KiB/s rd, 2.7 KiB/s wr, 3 op/s
Jan 20 15:07:45 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/2631843235' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:07:45 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:07:45 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:07:45 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:07:45.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:07:45 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e370 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:07:46 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:07:46 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:07:46 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:07:46.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:07:46 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2520: 321 pgs: 321 active+clean; 121 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 42 KiB/s rd, 5.2 KiB/s wr, 31 op/s
Jan 20 15:07:47 compute-0 nova_compute[250018]: 2026-01-20 15:07:47.235 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:07:47 compute-0 ceph-mon[74360]: pgmap v2520: 321 pgs: 321 active+clean; 121 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 42 KiB/s rd, 5.2 KiB/s wr, 31 op/s
Jan 20 15:07:47 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:07:47 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:07:47 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:07:47.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:07:48 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:07:48 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:07:48 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:07:48.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:07:48 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2521: 321 pgs: 321 active+clean; 121 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 40 KiB/s rd, 5.2 KiB/s wr, 28 op/s
Jan 20 15:07:48 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/2739921256' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:07:48 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/2739921256' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:07:49 compute-0 ceph-mon[74360]: pgmap v2521: 321 pgs: 321 active+clean; 121 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 40 KiB/s rd, 5.2 KiB/s wr, 28 op/s
Jan 20 15:07:49 compute-0 nova_compute[250018]: 2026-01-20 15:07:49.729 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:07:49 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:07:49 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:07:49 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:07:49.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:07:50 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:07:50 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:07:50 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:07:50.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:07:50 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2522: 321 pgs: 321 active+clean; 121 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 40 KiB/s rd, 5.2 KiB/s wr, 29 op/s
Jan 20 15:07:50 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e370 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:07:51 compute-0 ceph-mon[74360]: pgmap v2522: 321 pgs: 321 active+clean; 121 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 40 KiB/s rd, 5.2 KiB/s wr, 29 op/s
Jan 20 15:07:51 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:07:51 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:07:51 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:07:51.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:07:52 compute-0 nova_compute[250018]: 2026-01-20 15:07:52.238 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:07:52 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:07:52 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:07:52 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:07:52.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:07:52 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2523: 321 pgs: 321 active+clean; 120 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 51 KiB/s rd, 4.1 KiB/s wr, 43 op/s
Jan 20 15:07:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:07:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:07:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:07:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:07:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:07:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:07:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Optimize plan auto_2026-01-20_15:07:52
Jan 20 15:07:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 15:07:52 compute-0 ceph-mgr[74653]: [balancer INFO root] do_upmap
Jan 20 15:07:52 compute-0 ceph-mgr[74653]: [balancer INFO root] pools ['backups', 'default.rgw.log', 'cephfs.cephfs.data', 'volumes', 'images', '.mgr', 'default.rgw.meta', '.rgw.root', 'vms', 'cephfs.cephfs.meta', 'default.rgw.control']
Jan 20 15:07:52 compute-0 ceph-mgr[74653]: [balancer INFO root] prepared 0/10 changes
Jan 20 15:07:53 compute-0 podman[348390]: 2026-01-20 15:07:53.492696704 +0000 UTC m=+0.078401546 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 20 15:07:53 compute-0 ceph-mon[74360]: pgmap v2523: 321 pgs: 321 active+clean; 120 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 51 KiB/s rd, 4.1 KiB/s wr, 43 op/s
Jan 20 15:07:53 compute-0 podman[348389]: 2026-01-20 15:07:53.511724088 +0000 UTC m=+0.103536134 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:07:53 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:07:53 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:07:53 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:07:53.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:07:54 compute-0 sudo[348434]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:07:54 compute-0 sudo[348434]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:07:54 compute-0 sudo[348434]: pam_unix(sudo:session): session closed for user root
Jan 20 15:07:54 compute-0 sudo[348460]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:07:54 compute-0 sudo[348460]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:07:54 compute-0 sudo[348460]: pam_unix(sudo:session): session closed for user root
Jan 20 15:07:54 compute-0 sudo[348485]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:07:54 compute-0 sudo[348485]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:07:54 compute-0 sudo[348485]: pam_unix(sudo:session): session closed for user root
Jan 20 15:07:54 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:07:54 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:07:54 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:07:54.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:07:54 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2524: 321 pgs: 321 active+clean; 120 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 30 KiB/s rd, 4.1 KiB/s wr, 43 op/s
Jan 20 15:07:54 compute-0 sudo[348510]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 15:07:54 compute-0 sudo[348510]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:07:54 compute-0 nova_compute[250018]: 2026-01-20 15:07:54.733 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:07:54 compute-0 sudo[348510]: pam_unix(sudo:session): session closed for user root
Jan 20 15:07:54 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 15:07:54 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:07:54 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 20 15:07:54 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 15:07:54 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 20 15:07:54 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:07:54 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 8333a0d0-189b-47ac-ac1a-90edb90a2cc2 does not exist
Jan 20 15:07:54 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev de476e47-14dd-4d36-aeba-484cdc084246 does not exist
Jan 20 15:07:54 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 3623d878-644b-4112-a59d-d9f739472628 does not exist
Jan 20 15:07:54 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 20 15:07:54 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 15:07:54 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 20 15:07:54 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 15:07:54 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 15:07:54 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:07:54 compute-0 sudo[348566]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:07:54 compute-0 sudo[348566]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:07:54 compute-0 sudo[348566]: pam_unix(sudo:session): session closed for user root
Jan 20 15:07:55 compute-0 sudo[348591]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:07:55 compute-0 sudo[348591]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:07:55 compute-0 sudo[348591]: pam_unix(sudo:session): session closed for user root
Jan 20 15:07:55 compute-0 nova_compute[250018]: 2026-01-20 15:07:55.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:07:55 compute-0 sudo[348616]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:07:55 compute-0 sudo[348616]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:07:55 compute-0 sudo[348616]: pam_unix(sudo:session): session closed for user root
Jan 20 15:07:55 compute-0 sudo[348642]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 15:07:55 compute-0 sudo[348642]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:07:55 compute-0 ceph-mon[74360]: pgmap v2524: 321 pgs: 321 active+clean; 120 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 30 KiB/s rd, 4.1 KiB/s wr, 43 op/s
Jan 20 15:07:55 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:07:55 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 15:07:55 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:07:55 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 15:07:55 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 15:07:55 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:07:55 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/1077560602' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:07:55 compute-0 podman[348708]: 2026-01-20 15:07:55.535073908 +0000 UTC m=+0.081507730 container create 83462d44a31cb3cda27b8c795bd612abc9c7b796129d92fcecb56b8132e82a3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_diffie, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 15:07:55 compute-0 systemd[1]: Started libpod-conmon-83462d44a31cb3cda27b8c795bd612abc9c7b796129d92fcecb56b8132e82a3c.scope.
Jan 20 15:07:55 compute-0 podman[348708]: 2026-01-20 15:07:55.497850873 +0000 UTC m=+0.044284735 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:07:55 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:07:55 compute-0 podman[348708]: 2026-01-20 15:07:55.643258436 +0000 UTC m=+0.189692218 container init 83462d44a31cb3cda27b8c795bd612abc9c7b796129d92fcecb56b8132e82a3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_diffie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 20 15:07:55 compute-0 podman[348708]: 2026-01-20 15:07:55.651495028 +0000 UTC m=+0.197928810 container start 83462d44a31cb3cda27b8c795bd612abc9c7b796129d92fcecb56b8132e82a3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_diffie, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 15:07:55 compute-0 podman[348708]: 2026-01-20 15:07:55.65488777 +0000 UTC m=+0.201321552 container attach 83462d44a31cb3cda27b8c795bd612abc9c7b796129d92fcecb56b8132e82a3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_diffie, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 20 15:07:55 compute-0 inspiring_diffie[348724]: 167 167
Jan 20 15:07:55 compute-0 systemd[1]: libpod-83462d44a31cb3cda27b8c795bd612abc9c7b796129d92fcecb56b8132e82a3c.scope: Deactivated successfully.
Jan 20 15:07:55 compute-0 podman[348708]: 2026-01-20 15:07:55.662840354 +0000 UTC m=+0.209274136 container died 83462d44a31cb3cda27b8c795bd612abc9c7b796129d92fcecb56b8132e82a3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_diffie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 20 15:07:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-e72498ef19458439b991ada2388a25efc4bd6610657530c2df1e0e3c1bcc81c8-merged.mount: Deactivated successfully.
Jan 20 15:07:55 compute-0 podman[348708]: 2026-01-20 15:07:55.695559357 +0000 UTC m=+0.241993139 container remove 83462d44a31cb3cda27b8c795bd612abc9c7b796129d92fcecb56b8132e82a3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_diffie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 20 15:07:55 compute-0 systemd[1]: libpod-conmon-83462d44a31cb3cda27b8c795bd612abc9c7b796129d92fcecb56b8132e82a3c.scope: Deactivated successfully.
Jan 20 15:07:55 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:07:55 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:07:55 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:07:55.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:07:55 compute-0 podman[348748]: 2026-01-20 15:07:55.876308323 +0000 UTC m=+0.043726192 container create dafcf795b8e6cade40985787e603c4b30afbcedf44a602be36abe3cba4346429 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_heyrovsky, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 20 15:07:55 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e370 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:07:55 compute-0 systemd[1]: Started libpod-conmon-dafcf795b8e6cade40985787e603c4b30afbcedf44a602be36abe3cba4346429.scope.
Jan 20 15:07:55 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:07:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ffc1643264dbd989427ba99224ac05e6ba5ae95bfda45c4337b5347d8dfe66d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 15:07:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ffc1643264dbd989427ba99224ac05e6ba5ae95bfda45c4337b5347d8dfe66d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 15:07:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ffc1643264dbd989427ba99224ac05e6ba5ae95bfda45c4337b5347d8dfe66d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 15:07:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ffc1643264dbd989427ba99224ac05e6ba5ae95bfda45c4337b5347d8dfe66d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 15:07:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ffc1643264dbd989427ba99224ac05e6ba5ae95bfda45c4337b5347d8dfe66d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 15:07:55 compute-0 podman[348748]: 2026-01-20 15:07:55.859032486 +0000 UTC m=+0.026450365 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:07:55 compute-0 podman[348748]: 2026-01-20 15:07:55.956101695 +0000 UTC m=+0.123519584 container init dafcf795b8e6cade40985787e603c4b30afbcedf44a602be36abe3cba4346429 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_heyrovsky, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 15:07:55 compute-0 podman[348748]: 2026-01-20 15:07:55.96408055 +0000 UTC m=+0.131498429 container start dafcf795b8e6cade40985787e603c4b30afbcedf44a602be36abe3cba4346429 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_heyrovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 20 15:07:55 compute-0 podman[348748]: 2026-01-20 15:07:55.967139573 +0000 UTC m=+0.134557442 container attach dafcf795b8e6cade40985787e603c4b30afbcedf44a602be36abe3cba4346429 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_heyrovsky, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 20 15:07:56 compute-0 nova_compute[250018]: 2026-01-20 15:07:56.052 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:07:56 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:07:56 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:07:56 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:07:56.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:07:56 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2525: 321 pgs: 321 active+clean; 126 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 30 KiB/s rd, 178 KiB/s wr, 42 op/s
Jan 20 15:07:56 compute-0 sleepy_heyrovsky[348765]: --> passed data devices: 0 physical, 1 LVM
Jan 20 15:07:56 compute-0 sleepy_heyrovsky[348765]: --> relative data size: 1.0
Jan 20 15:07:56 compute-0 sleepy_heyrovsky[348765]: --> All data devices are unavailable
Jan 20 15:07:56 compute-0 systemd[1]: libpod-dafcf795b8e6cade40985787e603c4b30afbcedf44a602be36abe3cba4346429.scope: Deactivated successfully.
Jan 20 15:07:56 compute-0 podman[348748]: 2026-01-20 15:07:56.823504653 +0000 UTC m=+0.990922522 container died dafcf795b8e6cade40985787e603c4b30afbcedf44a602be36abe3cba4346429 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_heyrovsky, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 15:07:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-6ffc1643264dbd989427ba99224ac05e6ba5ae95bfda45c4337b5347d8dfe66d-merged.mount: Deactivated successfully.
Jan 20 15:07:56 compute-0 podman[348748]: 2026-01-20 15:07:56.879789111 +0000 UTC m=+1.047206980 container remove dafcf795b8e6cade40985787e603c4b30afbcedf44a602be36abe3cba4346429 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_heyrovsky, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 15:07:56 compute-0 systemd[1]: libpod-conmon-dafcf795b8e6cade40985787e603c4b30afbcedf44a602be36abe3cba4346429.scope: Deactivated successfully.
Jan 20 15:07:56 compute-0 sudo[348642]: pam_unix(sudo:session): session closed for user root
Jan 20 15:07:56 compute-0 sudo[348793]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:07:56 compute-0 sudo[348793]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:07:56 compute-0 sudo[348793]: pam_unix(sudo:session): session closed for user root
Jan 20 15:07:57 compute-0 sudo[348818]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:07:57 compute-0 sudo[348818]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:07:57 compute-0 sudo[348818]: pam_unix(sudo:session): session closed for user root
Jan 20 15:07:57 compute-0 nova_compute[250018]: 2026-01-20 15:07:57.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:07:57 compute-0 nova_compute[250018]: 2026-01-20 15:07:57.072 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:07:57 compute-0 nova_compute[250018]: 2026-01-20 15:07:57.073 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:07:57 compute-0 nova_compute[250018]: 2026-01-20 15:07:57.073 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:07:57 compute-0 nova_compute[250018]: 2026-01-20 15:07:57.073 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 15:07:57 compute-0 nova_compute[250018]: 2026-01-20 15:07:57.073 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:07:57 compute-0 sudo[348843]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:07:57 compute-0 sudo[348843]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:07:57 compute-0 sudo[348843]: pam_unix(sudo:session): session closed for user root
Jan 20 15:07:57 compute-0 sudo[348869]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- lvm list --format json
Jan 20 15:07:57 compute-0 sudo[348869]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:07:57 compute-0 nova_compute[250018]: 2026-01-20 15:07:57.168 250022 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768921662.1677928, 26cf4955-374b-4e19-992f-e9348d555edf => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:07:57 compute-0 nova_compute[250018]: 2026-01-20 15:07:57.169 250022 INFO nova.compute.manager [-] [instance: 26cf4955-374b-4e19-992f-e9348d555edf] VM Stopped (Lifecycle Event)
Jan 20 15:07:57 compute-0 nova_compute[250018]: 2026-01-20 15:07:57.190 250022 DEBUG nova.compute.manager [None req-649bda11-b0c1-4db4-b3cb-6df7bedeb87d - - - - - -] [instance: 26cf4955-374b-4e19-992f-e9348d555edf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:07:57 compute-0 nova_compute[250018]: 2026-01-20 15:07:57.240 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:07:57 compute-0 podman[348955]: 2026-01-20 15:07:57.502758586 +0000 UTC m=+0.049124486 container create 8dd03bb796abd91dd266f9987295044e137094cc28044c4aa02d0fe61ad7be33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_bose, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef)
Jan 20 15:07:57 compute-0 ceph-mon[74360]: pgmap v2525: 321 pgs: 321 active+clean; 126 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 30 KiB/s rd, 178 KiB/s wr, 42 op/s
Jan 20 15:07:57 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/1416750325' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:07:57 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/1565062573' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:07:57 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/3044378009' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:07:57 compute-0 systemd[1]: Started libpod-conmon-8dd03bb796abd91dd266f9987295044e137094cc28044c4aa02d0fe61ad7be33.scope.
Jan 20 15:07:57 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:07:57 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/694324407' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:07:57 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:07:57 compute-0 nova_compute[250018]: 2026-01-20 15:07:57.579 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:07:57 compute-0 podman[348955]: 2026-01-20 15:07:57.486530189 +0000 UTC m=+0.032896109 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:07:57 compute-0 podman[348955]: 2026-01-20 15:07:57.588777557 +0000 UTC m=+0.135143487 container init 8dd03bb796abd91dd266f9987295044e137094cc28044c4aa02d0fe61ad7be33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_bose, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 15:07:57 compute-0 podman[348955]: 2026-01-20 15:07:57.595823747 +0000 UTC m=+0.142189647 container start 8dd03bb796abd91dd266f9987295044e137094cc28044c4aa02d0fe61ad7be33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_bose, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 20 15:07:57 compute-0 podman[348955]: 2026-01-20 15:07:57.599204617 +0000 UTC m=+0.145570537 container attach 8dd03bb796abd91dd266f9987295044e137094cc28044c4aa02d0fe61ad7be33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_bose, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 20 15:07:57 compute-0 vigorous_bose[348971]: 167 167
Jan 20 15:07:57 compute-0 systemd[1]: libpod-8dd03bb796abd91dd266f9987295044e137094cc28044c4aa02d0fe61ad7be33.scope: Deactivated successfully.
Jan 20 15:07:57 compute-0 podman[348955]: 2026-01-20 15:07:57.60079662 +0000 UTC m=+0.147162520 container died 8dd03bb796abd91dd266f9987295044e137094cc28044c4aa02d0fe61ad7be33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_bose, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True)
Jan 20 15:07:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-5dc82ecfa5ef218867fc32fafc96696850bb8fdeaf5777a9bfc78e655c1c023b-merged.mount: Deactivated successfully.
Jan 20 15:07:57 compute-0 podman[348955]: 2026-01-20 15:07:57.639396542 +0000 UTC m=+0.185762452 container remove 8dd03bb796abd91dd266f9987295044e137094cc28044c4aa02d0fe61ad7be33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_bose, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True)
Jan 20 15:07:57 compute-0 systemd[1]: libpod-conmon-8dd03bb796abd91dd266f9987295044e137094cc28044c4aa02d0fe61ad7be33.scope: Deactivated successfully.
Jan 20 15:07:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 15:07:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 15:07:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 15:07:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 15:07:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 15:07:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 15:07:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 15:07:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 15:07:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 15:07:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 15:07:57 compute-0 nova_compute[250018]: 2026-01-20 15:07:57.758 250022 WARNING nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:07:57 compute-0 nova_compute[250018]: 2026-01-20 15:07:57.759 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4258MB free_disk=20.986278533935547GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 15:07:57 compute-0 nova_compute[250018]: 2026-01-20 15:07:57.759 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:07:57 compute-0 nova_compute[250018]: 2026-01-20 15:07:57.760 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:07:57 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:07:57 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:07:57 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:07:57.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:07:57 compute-0 nova_compute[250018]: 2026-01-20 15:07:57.816 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 15:07:57 compute-0 nova_compute[250018]: 2026-01-20 15:07:57.817 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 15:07:57 compute-0 podman[348999]: 2026-01-20 15:07:57.834466884 +0000 UTC m=+0.042151578 container create 33b1c533ea255c753ecc232e3cd046f9f9ac12682c9a4bbd7d24a428238cadbe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_rhodes, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 20 15:07:57 compute-0 nova_compute[250018]: 2026-01-20 15:07:57.842 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:07:57 compute-0 systemd[1]: Started libpod-conmon-33b1c533ea255c753ecc232e3cd046f9f9ac12682c9a4bbd7d24a428238cadbe.scope.
Jan 20 15:07:57 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:07:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c11940df3643b624eb7fb324db66b2f0d2759993ea64df0c26fb79b226b2c64b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 15:07:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c11940df3643b624eb7fb324db66b2f0d2759993ea64df0c26fb79b226b2c64b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 15:07:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c11940df3643b624eb7fb324db66b2f0d2759993ea64df0c26fb79b226b2c64b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 15:07:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c11940df3643b624eb7fb324db66b2f0d2759993ea64df0c26fb79b226b2c64b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 15:07:57 compute-0 podman[348999]: 2026-01-20 15:07:57.818665778 +0000 UTC m=+0.026350482 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:07:57 compute-0 podman[348999]: 2026-01-20 15:07:57.913474135 +0000 UTC m=+0.121158849 container init 33b1c533ea255c753ecc232e3cd046f9f9ac12682c9a4bbd7d24a428238cadbe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_rhodes, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 15:07:57 compute-0 podman[348999]: 2026-01-20 15:07:57.923029252 +0000 UTC m=+0.130713946 container start 33b1c533ea255c753ecc232e3cd046f9f9ac12682c9a4bbd7d24a428238cadbe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_rhodes, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 20 15:07:57 compute-0 podman[348999]: 2026-01-20 15:07:57.926214939 +0000 UTC m=+0.133899633 container attach 33b1c533ea255c753ecc232e3cd046f9f9ac12682c9a4bbd7d24a428238cadbe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_rhodes, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 20 15:07:58 compute-0 sudo[349040]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:07:58 compute-0 sudo[349040]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:07:58 compute-0 sudo[349040]: pam_unix(sudo:session): session closed for user root
Jan 20 15:07:58 compute-0 sudo[349065]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:07:58 compute-0 sudo[349065]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:07:58 compute-0 sudo[349065]: pam_unix(sudo:session): session closed for user root
Jan 20 15:07:58 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:07:58 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4227773044' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:07:58 compute-0 nova_compute[250018]: 2026-01-20 15:07:58.306 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:07:58 compute-0 nova_compute[250018]: 2026-01-20 15:07:58.312 250022 DEBUG nova.compute.provider_tree [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:07:58 compute-0 nova_compute[250018]: 2026-01-20 15:07:58.330 250022 DEBUG nova.scheduler.client.report [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:07:58 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:07:58 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:07:58 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:07:58.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:07:58 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2526: 321 pgs: 321 active+clean; 126 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 12 KiB/s rd, 175 KiB/s wr, 15 op/s
Jan 20 15:07:58 compute-0 nova_compute[250018]: 2026-01-20 15:07:58.368 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 15:07:58 compute-0 nova_compute[250018]: 2026-01-20 15:07:58.372 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.612s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:07:58 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/694324407' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:07:58 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/3322640165' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:07:58 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/4139490107' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:07:58 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/4227773044' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:07:58 compute-0 magical_rhodes[349016]: {
Jan 20 15:07:58 compute-0 magical_rhodes[349016]:     "0": [
Jan 20 15:07:58 compute-0 magical_rhodes[349016]:         {
Jan 20 15:07:58 compute-0 magical_rhodes[349016]:             "devices": [
Jan 20 15:07:58 compute-0 magical_rhodes[349016]:                 "/dev/loop3"
Jan 20 15:07:58 compute-0 magical_rhodes[349016]:             ],
Jan 20 15:07:58 compute-0 magical_rhodes[349016]:             "lv_name": "ceph_lv0",
Jan 20 15:07:58 compute-0 magical_rhodes[349016]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 15:07:58 compute-0 magical_rhodes[349016]:             "lv_size": "7511998464",
Jan 20 15:07:58 compute-0 magical_rhodes[349016]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e399cf45-e6b6-5393-99f1-75c601d3f188,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afc3740a-ccee-46f8-b343-22648cc89376,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 20 15:07:58 compute-0 magical_rhodes[349016]:             "lv_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 15:07:58 compute-0 magical_rhodes[349016]:             "name": "ceph_lv0",
Jan 20 15:07:58 compute-0 magical_rhodes[349016]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 15:07:58 compute-0 magical_rhodes[349016]:             "tags": {
Jan 20 15:07:58 compute-0 magical_rhodes[349016]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 15:07:58 compute-0 magical_rhodes[349016]:                 "ceph.block_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 15:07:58 compute-0 magical_rhodes[349016]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 15:07:58 compute-0 magical_rhodes[349016]:                 "ceph.cluster_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 15:07:58 compute-0 magical_rhodes[349016]:                 "ceph.cluster_name": "ceph",
Jan 20 15:07:58 compute-0 magical_rhodes[349016]:                 "ceph.crush_device_class": "",
Jan 20 15:07:58 compute-0 magical_rhodes[349016]:                 "ceph.encrypted": "0",
Jan 20 15:07:58 compute-0 magical_rhodes[349016]:                 "ceph.osd_fsid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 15:07:58 compute-0 magical_rhodes[349016]:                 "ceph.osd_id": "0",
Jan 20 15:07:58 compute-0 magical_rhodes[349016]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 15:07:58 compute-0 magical_rhodes[349016]:                 "ceph.type": "block",
Jan 20 15:07:58 compute-0 magical_rhodes[349016]:                 "ceph.vdo": "0"
Jan 20 15:07:58 compute-0 magical_rhodes[349016]:             },
Jan 20 15:07:58 compute-0 magical_rhodes[349016]:             "type": "block",
Jan 20 15:07:58 compute-0 magical_rhodes[349016]:             "vg_name": "ceph_vg0"
Jan 20 15:07:58 compute-0 magical_rhodes[349016]:         }
Jan 20 15:07:58 compute-0 magical_rhodes[349016]:     ]
Jan 20 15:07:58 compute-0 magical_rhodes[349016]: }
Jan 20 15:07:58 compute-0 systemd[1]: libpod-33b1c533ea255c753ecc232e3cd046f9f9ac12682c9a4bbd7d24a428238cadbe.scope: Deactivated successfully.
Jan 20 15:07:58 compute-0 podman[348999]: 2026-01-20 15:07:58.715805618 +0000 UTC m=+0.923490332 container died 33b1c533ea255c753ecc232e3cd046f9f9ac12682c9a4bbd7d24a428238cadbe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_rhodes, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 20 15:07:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-c11940df3643b624eb7fb324db66b2f0d2759993ea64df0c26fb79b226b2c64b-merged.mount: Deactivated successfully.
Jan 20 15:07:58 compute-0 podman[348999]: 2026-01-20 15:07:58.779230019 +0000 UTC m=+0.986914703 container remove 33b1c533ea255c753ecc232e3cd046f9f9ac12682c9a4bbd7d24a428238cadbe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_rhodes, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 15:07:58 compute-0 systemd[1]: libpod-conmon-33b1c533ea255c753ecc232e3cd046f9f9ac12682c9a4bbd7d24a428238cadbe.scope: Deactivated successfully.
Jan 20 15:07:58 compute-0 sudo[348869]: pam_unix(sudo:session): session closed for user root
Jan 20 15:07:58 compute-0 sudo[349111]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:07:58 compute-0 sudo[349111]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:07:58 compute-0 sudo[349111]: pam_unix(sudo:session): session closed for user root
Jan 20 15:07:58 compute-0 sudo[349136]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:07:58 compute-0 sudo[349136]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:07:58 compute-0 sudo[349136]: pam_unix(sudo:session): session closed for user root
Jan 20 15:07:59 compute-0 sudo[349161]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:07:59 compute-0 sudo[349161]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:07:59 compute-0 sudo[349161]: pam_unix(sudo:session): session closed for user root
Jan 20 15:07:59 compute-0 sudo[349186]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- raw list --format json
Jan 20 15:07:59 compute-0 sudo[349186]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:07:59 compute-0 podman[349251]: 2026-01-20 15:07:59.457205227 +0000 UTC m=+0.040546385 container create 2b6647c3d8f6cdbb4f4d073a65d9ddf895349d46fa6c11d3c3e06e59825a5d15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_lewin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 15:07:59 compute-0 systemd[1]: Started libpod-conmon-2b6647c3d8f6cdbb4f4d073a65d9ddf895349d46fa6c11d3c3e06e59825a5d15.scope.
Jan 20 15:07:59 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:07:59 compute-0 podman[349251]: 2026-01-20 15:07:59.531360297 +0000 UTC m=+0.114701475 container init 2b6647c3d8f6cdbb4f4d073a65d9ddf895349d46fa6c11d3c3e06e59825a5d15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_lewin, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2)
Jan 20 15:07:59 compute-0 podman[349251]: 2026-01-20 15:07:59.436058797 +0000 UTC m=+0.019400005 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:07:59 compute-0 podman[349251]: 2026-01-20 15:07:59.537315368 +0000 UTC m=+0.120656526 container start 2b6647c3d8f6cdbb4f4d073a65d9ddf895349d46fa6c11d3c3e06e59825a5d15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_lewin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3)
Jan 20 15:07:59 compute-0 podman[349251]: 2026-01-20 15:07:59.540644668 +0000 UTC m=+0.123985826 container attach 2b6647c3d8f6cdbb4f4d073a65d9ddf895349d46fa6c11d3c3e06e59825a5d15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_lewin, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 20 15:07:59 compute-0 wonderful_lewin[349267]: 167 167
Jan 20 15:07:59 compute-0 systemd[1]: libpod-2b6647c3d8f6cdbb4f4d073a65d9ddf895349d46fa6c11d3c3e06e59825a5d15.scope: Deactivated successfully.
Jan 20 15:07:59 compute-0 podman[349251]: 2026-01-20 15:07:59.54219297 +0000 UTC m=+0.125534158 container died 2b6647c3d8f6cdbb4f4d073a65d9ddf895349d46fa6c11d3c3e06e59825a5d15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_lewin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 15:07:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-b34bc77ce96379c216747f23abb557b631eb7244b7427529a2d63bf4878c3c06-merged.mount: Deactivated successfully.
Jan 20 15:07:59 compute-0 podman[349251]: 2026-01-20 15:07:59.583827083 +0000 UTC m=+0.167168251 container remove 2b6647c3d8f6cdbb4f4d073a65d9ddf895349d46fa6c11d3c3e06e59825a5d15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_lewin, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 20 15:07:59 compute-0 ceph-mon[74360]: pgmap v2526: 321 pgs: 321 active+clean; 126 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 12 KiB/s rd, 175 KiB/s wr, 15 op/s
Jan 20 15:07:59 compute-0 systemd[1]: libpod-conmon-2b6647c3d8f6cdbb4f4d073a65d9ddf895349d46fa6c11d3c3e06e59825a5d15.scope: Deactivated successfully.
Jan 20 15:07:59 compute-0 nova_compute[250018]: 2026-01-20 15:07:59.759 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:07:59 compute-0 podman[349291]: 2026-01-20 15:07:59.765453382 +0000 UTC m=+0.056484724 container create 9c44c34c2da3209d9118354b8f34dc40a01659e3064bf956b28c512e9742642e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_brown, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef)
Jan 20 15:07:59 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:07:59 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:07:59 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:07:59.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:07:59 compute-0 systemd[1]: Started libpod-conmon-9c44c34c2da3209d9118354b8f34dc40a01659e3064bf956b28c512e9742642e.scope.
Jan 20 15:07:59 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:07:59 compute-0 podman[349291]: 2026-01-20 15:07:59.733826099 +0000 UTC m=+0.024857491 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:07:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c88c96e0ba0c66c7e81f873817c193e90c376bd7e37d2d0661df4a305bc3cc23/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 15:07:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c88c96e0ba0c66c7e81f873817c193e90c376bd7e37d2d0661df4a305bc3cc23/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 15:07:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c88c96e0ba0c66c7e81f873817c193e90c376bd7e37d2d0661df4a305bc3cc23/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 15:07:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c88c96e0ba0c66c7e81f873817c193e90c376bd7e37d2d0661df4a305bc3cc23/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 15:07:59 compute-0 podman[349291]: 2026-01-20 15:07:59.846013636 +0000 UTC m=+0.137044998 container init 9c44c34c2da3209d9118354b8f34dc40a01659e3064bf956b28c512e9742642e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_brown, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 15:07:59 compute-0 podman[349291]: 2026-01-20 15:07:59.853538908 +0000 UTC m=+0.144570270 container start 9c44c34c2da3209d9118354b8f34dc40a01659e3064bf956b28c512e9742642e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_brown, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 20 15:07:59 compute-0 podman[349291]: 2026-01-20 15:07:59.859307114 +0000 UTC m=+0.150338446 container attach 9c44c34c2da3209d9118354b8f34dc40a01659e3064bf956b28c512e9742642e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_brown, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 15:08:00 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:08:00 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:08:00 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:08:00.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:08:00 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2527: 321 pgs: 321 active+clean; 196 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 29 KiB/s rd, 3.0 MiB/s wr, 45 op/s
Jan 20 15:08:00 compute-0 nova_compute[250018]: 2026-01-20 15:08:00.367 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:08:00 compute-0 nova_compute[250018]: 2026-01-20 15:08:00.368 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:08:00 compute-0 nova_compute[250018]: 2026-01-20 15:08:00.368 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:08:00 compute-0 nova_compute[250018]: 2026-01-20 15:08:00.368 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 15:08:00 compute-0 ceph-mon[74360]: pgmap v2527: 321 pgs: 321 active+clean; 196 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 29 KiB/s rd, 3.0 MiB/s wr, 45 op/s
Jan 20 15:08:00 compute-0 friendly_brown[349307]: {
Jan 20 15:08:00 compute-0 friendly_brown[349307]:     "afc3740a-ccee-46f8-b343-22648cc89376": {
Jan 20 15:08:00 compute-0 friendly_brown[349307]:         "ceph_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 15:08:00 compute-0 friendly_brown[349307]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 20 15:08:00 compute-0 friendly_brown[349307]:         "osd_id": 0,
Jan 20 15:08:00 compute-0 friendly_brown[349307]:         "osd_uuid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 15:08:00 compute-0 friendly_brown[349307]:         "type": "bluestore"
Jan 20 15:08:00 compute-0 friendly_brown[349307]:     }
Jan 20 15:08:00 compute-0 friendly_brown[349307]: }
Jan 20 15:08:00 compute-0 systemd[1]: libpod-9c44c34c2da3209d9118354b8f34dc40a01659e3064bf956b28c512e9742642e.scope: Deactivated successfully.
Jan 20 15:08:00 compute-0 podman[349291]: 2026-01-20 15:08:00.774280876 +0000 UTC m=+1.065312218 container died 9c44c34c2da3209d9118354b8f34dc40a01659e3064bf956b28c512e9742642e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_brown, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 20 15:08:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-c88c96e0ba0c66c7e81f873817c193e90c376bd7e37d2d0661df4a305bc3cc23-merged.mount: Deactivated successfully.
Jan 20 15:08:00 compute-0 podman[349291]: 2026-01-20 15:08:00.821800568 +0000 UTC m=+1.112831910 container remove 9c44c34c2da3209d9118354b8f34dc40a01659e3064bf956b28c512e9742642e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_brown, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Jan 20 15:08:00 compute-0 systemd[1]: libpod-conmon-9c44c34c2da3209d9118354b8f34dc40a01659e3064bf956b28c512e9742642e.scope: Deactivated successfully.
Jan 20 15:08:00 compute-0 sudo[349186]: pam_unix(sudo:session): session closed for user root
Jan 20 15:08:00 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 15:08:00 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:08:00 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 15:08:00 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:08:00 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 0fd52551-4e7d-445a-8f27-b29961c0c1f5 does not exist
Jan 20 15:08:00 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev afb8b9e4-8c59-4618-b076-cdea14cf886b does not exist
Jan 20 15:08:00 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 3c51cf16-b749-482f-b371-b65874916790 does not exist
Jan 20 15:08:00 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e370 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:08:00 compute-0 ceph-mon[74360]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #126. Immutable memtables: 0.
Jan 20 15:08:00 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:08:00.902285) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 15:08:00 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:856] [default] [JOB 75] Flushing memtable with next log file: 126
Jan 20 15:08:00 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768921680902311, "job": 75, "event": "flush_started", "num_memtables": 1, "num_entries": 744, "num_deletes": 253, "total_data_size": 927928, "memory_usage": 941848, "flush_reason": "Manual Compaction"}
Jan 20 15:08:00 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:885] [default] [JOB 75] Level-0 flush table #127: started
Jan 20 15:08:00 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768921680909177, "cf_name": "default", "job": 75, "event": "table_file_creation", "file_number": 127, "file_size": 917779, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 56406, "largest_seqno": 57149, "table_properties": {"data_size": 913954, "index_size": 1605, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 8198, "raw_average_key_size": 17, "raw_value_size": 906097, "raw_average_value_size": 1982, "num_data_blocks": 70, "num_entries": 457, "num_filter_entries": 457, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768921633, "oldest_key_time": 1768921633, "file_creation_time": 1768921680, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1688fb6f-f397-4aac-a0e8-874ff91d97a7", "db_session_id": "5V3N2TVXYZBCXP55EZHK", "orig_file_number": 127, "seqno_to_time_mapping": "N/A"}}
Jan 20 15:08:00 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 75] Flush lasted 6935 microseconds, and 3056 cpu microseconds.
Jan 20 15:08:00 compute-0 ceph-mon[74360]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 15:08:00 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:08:00.909217) [db/flush_job.cc:967] [default] [JOB 75] Level-0 flush table #127: 917779 bytes OK
Jan 20 15:08:00 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:08:00.909235) [db/memtable_list.cc:519] [default] Level-0 commit table #127 started
Jan 20 15:08:00 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:08:00.910715) [db/memtable_list.cc:722] [default] Level-0 commit table #127: memtable #1 done
Jan 20 15:08:00 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:08:00.910736) EVENT_LOG_v1 {"time_micros": 1768921680910730, "job": 75, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 15:08:00 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:08:00.910754) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 15:08:00 compute-0 ceph-mon[74360]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 75] Try to delete WAL files size 924135, prev total WAL file size 924135, number of live WAL files 2.
Jan 20 15:08:00 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000123.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:08:00 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:08:00.911364) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B7600323536' seq:72057594037927935, type:22 .. '6B7600353037' seq:0, type:0; will stop at (end)
Jan 20 15:08:00 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 76] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 15:08:00 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 75 Base level 0, inputs: [127(896KB)], [125(11MB)]
Jan 20 15:08:00 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768921680911426, "job": 76, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [127], "files_L6": [125], "score": -1, "input_data_size": 12523764, "oldest_snapshot_seqno": -1}
Jan 20 15:08:00 compute-0 sudo[349343]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:08:00 compute-0 sudo[349343]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:08:00 compute-0 sudo[349343]: pam_unix(sudo:session): session closed for user root
Jan 20 15:08:00 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 76] Generated table #128: 8466 keys, 11460618 bytes, temperature: kUnknown
Jan 20 15:08:00 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768921680981333, "cf_name": "default", "job": 76, "event": "table_file_creation", "file_number": 128, "file_size": 11460618, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11404697, "index_size": 33664, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21189, "raw_key_size": 222595, "raw_average_key_size": 26, "raw_value_size": 11254409, "raw_average_value_size": 1329, "num_data_blocks": 1293, "num_entries": 8466, "num_filter_entries": 8466, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768917305, "oldest_key_time": 0, "file_creation_time": 1768921680, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1688fb6f-f397-4aac-a0e8-874ff91d97a7", "db_session_id": "5V3N2TVXYZBCXP55EZHK", "orig_file_number": 128, "seqno_to_time_mapping": "N/A"}}
Jan 20 15:08:00 compute-0 ceph-mon[74360]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 15:08:00 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:08:00.981594) [db/compaction/compaction_job.cc:1663] [default] [JOB 76] Compacted 1@0 + 1@6 files to L6 => 11460618 bytes
Jan 20 15:08:00 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:08:00.982805) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 178.9 rd, 163.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 11.1 +0.0 blob) out(10.9 +0.0 blob), read-write-amplify(26.1) write-amplify(12.5) OK, records in: 8988, records dropped: 522 output_compression: NoCompression
Jan 20 15:08:00 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:08:00.982822) EVENT_LOG_v1 {"time_micros": 1768921680982814, "job": 76, "event": "compaction_finished", "compaction_time_micros": 70012, "compaction_time_cpu_micros": 35906, "output_level": 6, "num_output_files": 1, "total_output_size": 11460618, "num_input_records": 8988, "num_output_records": 8466, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 15:08:00 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000127.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:08:00 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768921680983174, "job": 76, "event": "table_file_deletion", "file_number": 127}
Jan 20 15:08:00 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000125.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:08:00 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768921680985254, "job": 76, "event": "table_file_deletion", "file_number": 125}
Jan 20 15:08:00 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:08:00.911276) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:08:00 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:08:00.985333) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:08:00 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:08:00.985337) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:08:00 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:08:00.985339) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:08:00 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:08:00.985341) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:08:00 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:08:00.985343) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:08:00 compute-0 sudo[349368]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 15:08:00 compute-0 sudo[349368]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:08:01 compute-0 sudo[349368]: pam_unix(sudo:session): session closed for user root
Jan 20 15:08:01 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/537198121' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:08:01 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:08:01 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:08:01 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/3470223084' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:08:01 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:08:01 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:08:01 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:08:01.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:08:02 compute-0 nova_compute[250018]: 2026-01-20 15:08:02.269 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:08:02 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:08:02 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:08:02 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:08:02.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:08:02 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2528: 321 pgs: 321 active+clean; 213 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 45 KiB/s rd, 3.5 MiB/s wr, 68 op/s
Jan 20 15:08:02 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/2382603208' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:08:02 compute-0 ceph-mon[74360]: pgmap v2528: 321 pgs: 321 active+clean; 213 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 45 KiB/s rd, 3.5 MiB/s wr, 68 op/s
Jan 20 15:08:02 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/1789117815' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:08:03 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:08:03 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:08:03 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:08:03.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:08:04 compute-0 nova_compute[250018]: 2026-01-20 15:08:04.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:08:04 compute-0 nova_compute[250018]: 2026-01-20 15:08:04.052 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 15:08:04 compute-0 nova_compute[250018]: 2026-01-20 15:08:04.083 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 20 15:08:04 compute-0 nova_compute[250018]: 2026-01-20 15:08:04.084 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:08:04 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:08:04 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:08:04 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:08:04.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:08:04 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2529: 321 pgs: 321 active+clean; 213 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 39 KiB/s rd, 3.6 MiB/s wr, 61 op/s
Jan 20 15:08:04 compute-0 nova_compute[250018]: 2026-01-20 15:08:04.760 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:08:05 compute-0 ceph-mon[74360]: pgmap v2529: 321 pgs: 321 active+clean; 213 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 39 KiB/s rd, 3.6 MiB/s wr, 61 op/s
Jan 20 15:08:05 compute-0 sshd-session[349394]: Connection closed by authenticating user root 134.122.57.138 port 46894 [preauth]
Jan 20 15:08:05 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:08:05 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:08:05 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:08:05.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:08:05 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e370 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:08:06 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:08:06 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2530: 321 pgs: 321 active+clean; 213 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 146 op/s
Jan 20 15:08:06 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:08:06 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:08:06.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:08:07 compute-0 nova_compute[250018]: 2026-01-20 15:08:07.273 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:08:07 compute-0 ceph-mon[74360]: pgmap v2530: 321 pgs: 321 active+clean; 213 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 146 op/s
Jan 20 15:08:07 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:08:07 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:08:07 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:08:07.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:08:08 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2531: 321 pgs: 321 active+clean; 213 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 3.4 MiB/s wr, 145 op/s
Jan 20 15:08:08 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:08:08 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:08:08 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:08:08.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:08:09 compute-0 ceph-mon[74360]: pgmap v2531: 321 pgs: 321 active+clean; 213 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 3.4 MiB/s wr, 145 op/s
Jan 20 15:08:09 compute-0 nova_compute[250018]: 2026-01-20 15:08:09.761 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:08:09 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:08:09 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:08:09 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:08:09.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:08:10 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2532: 321 pgs: 321 active+clean; 229 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 5.6 MiB/s rd, 3.8 MiB/s wr, 235 op/s
Jan 20 15:08:10 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:08:10 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:08:10 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:08:10.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:08:10 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e370 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:08:11 compute-0 ceph-mon[74360]: pgmap v2532: 321 pgs: 321 active+clean; 229 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 5.6 MiB/s rd, 3.8 MiB/s wr, 235 op/s
Jan 20 15:08:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 15:08:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:08:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 20 15:08:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:08:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0019914750255909173 of space, bias 1.0, pg target 0.5974425076772752 quantized to 32 (current 32)
Jan 20 15:08:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:08:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002376974571933743 of space, bias 1.0, pg target 0.713092371580123 quantized to 32 (current 32)
Jan 20 15:08:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:08:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 15:08:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:08:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 20 15:08:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:08:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 20 15:08:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:08:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 15:08:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:08:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 20 15:08:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:08:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 20 15:08:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:08:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 15:08:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:08:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 20 15:08:11 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:08:11 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:08:11 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:08:11.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:08:12 compute-0 nova_compute[250018]: 2026-01-20 15:08:12.275 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:08:12 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2533: 321 pgs: 321 active+clean; 241 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 5.6 MiB/s rd, 1.6 MiB/s wr, 210 op/s
Jan 20 15:08:12 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:08:12 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:08:12 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:08:12.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:08:12 compute-0 ceph-mon[74360]: pgmap v2533: 321 pgs: 321 active+clean; 241 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 5.6 MiB/s rd, 1.6 MiB/s wr, 210 op/s
Jan 20 15:08:12 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 15:08:12 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1594079858' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:08:13 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e370 do_prune osdmap full prune enabled
Jan 20 15:08:13 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e371 e371: 3 total, 3 up, 3 in
Jan 20 15:08:13 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e371: 3 total, 3 up, 3 in
Jan 20 15:08:13 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/1594079858' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:08:13 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:08:13 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:08:13 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:08:13.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:08:14 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2535: 321 pgs: 321 active+clean; 260 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 6.7 MiB/s rd, 2.1 MiB/s wr, 217 op/s
Jan 20 15:08:14 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:08:14 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:08:14 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:08:14.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:08:14 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e371 do_prune osdmap full prune enabled
Jan 20 15:08:14 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/3538750701' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:08:14 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/3538750701' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:08:14 compute-0 ceph-mon[74360]: osdmap e371: 3 total, 3 up, 3 in
Jan 20 15:08:14 compute-0 ceph-mon[74360]: pgmap v2535: 321 pgs: 321 active+clean; 260 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 6.7 MiB/s rd, 2.1 MiB/s wr, 217 op/s
Jan 20 15:08:14 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e372 e372: 3 total, 3 up, 3 in
Jan 20 15:08:14 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e372: 3 total, 3 up, 3 in
Jan 20 15:08:14 compute-0 nova_compute[250018]: 2026-01-20 15:08:14.763 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:08:15 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e372 do_prune osdmap full prune enabled
Jan 20 15:08:15 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e373 e373: 3 total, 3 up, 3 in
Jan 20 15:08:15 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e373: 3 total, 3 up, 3 in
Jan 20 15:08:15 compute-0 ceph-mon[74360]: osdmap e372: 3 total, 3 up, 3 in
Jan 20 15:08:15 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:08:15 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:08:15 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:08:15.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:08:15 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e373 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:08:16 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2538: 321 pgs: 321 active+clean; 281 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 202 KiB/s rd, 6.8 MiB/s wr, 74 op/s
Jan 20 15:08:16 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:08:16 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:08:16 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:08:16.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:08:16 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e373 do_prune osdmap full prune enabled
Jan 20 15:08:16 compute-0 ceph-mon[74360]: osdmap e373: 3 total, 3 up, 3 in
Jan 20 15:08:16 compute-0 ceph-mon[74360]: pgmap v2538: 321 pgs: 321 active+clean; 281 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 202 KiB/s rd, 6.8 MiB/s wr, 74 op/s
Jan 20 15:08:16 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e374 e374: 3 total, 3 up, 3 in
Jan 20 15:08:16 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e374: 3 total, 3 up, 3 in
Jan 20 15:08:17 compute-0 nova_compute[250018]: 2026-01-20 15:08:17.347 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:08:17 compute-0 ceph-mon[74360]: osdmap e374: 3 total, 3 up, 3 in
Jan 20 15:08:17 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:08:17 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:08:17 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:08:17.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:08:18 compute-0 sudo[349405]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:08:18 compute-0 sudo[349405]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:08:18 compute-0 sudo[349405]: pam_unix(sudo:session): session closed for user root
Jan 20 15:08:18 compute-0 sudo[349430]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:08:18 compute-0 sudo[349430]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:08:18 compute-0 sudo[349430]: pam_unix(sudo:session): session closed for user root
Jan 20 15:08:18 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2540: 321 pgs: 321 active+clean; 281 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 253 KiB/s rd, 7.1 MiB/s wr, 83 op/s
Jan 20 15:08:18 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:08:18 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:08:18 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:08:18.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:08:18 compute-0 ceph-mon[74360]: pgmap v2540: 321 pgs: 321 active+clean; 281 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 253 KiB/s rd, 7.1 MiB/s wr, 83 op/s
Jan 20 15:08:19 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:08:19.575 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=54, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=53) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:08:19 compute-0 nova_compute[250018]: 2026-01-20 15:08:19.575 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:08:19 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:08:19.576 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 15:08:19 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/2580296979' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:08:19 compute-0 nova_compute[250018]: 2026-01-20 15:08:19.765 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:08:19 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:08:19 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:08:19 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:08:19.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:08:20 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2541: 321 pgs: 321 active+clean; 360 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.9 MiB/s rd, 11 MiB/s wr, 278 op/s
Jan 20 15:08:20 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:08:20 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:08:20 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:08:20.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:08:20 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e374 do_prune osdmap full prune enabled
Jan 20 15:08:20 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e375 e375: 3 total, 3 up, 3 in
Jan 20 15:08:20 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e375: 3 total, 3 up, 3 in
Jan 20 15:08:20 compute-0 ceph-mon[74360]: pgmap v2541: 321 pgs: 321 active+clean; 360 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.9 MiB/s rd, 11 MiB/s wr, 278 op/s
Jan 20 15:08:20 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e375 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:08:20 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e375 do_prune osdmap full prune enabled
Jan 20 15:08:20 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e376 e376: 3 total, 3 up, 3 in
Jan 20 15:08:20 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e376: 3 total, 3 up, 3 in
Jan 20 15:08:21 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:08:21 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:08:21 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:08:21.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:08:22 compute-0 ceph-mon[74360]: osdmap e375: 3 total, 3 up, 3 in
Jan 20 15:08:22 compute-0 ceph-mon[74360]: osdmap e376: 3 total, 3 up, 3 in
Jan 20 15:08:22 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2544: 321 pgs: 321 active+clean; 390 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 6.2 MiB/s rd, 9.2 MiB/s wr, 263 op/s
Jan 20 15:08:22 compute-0 nova_compute[250018]: 2026-01-20 15:08:22.383 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:08:22 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:08:22 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:08:22 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:08:22.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:08:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:08:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:08:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:08:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:08:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:08:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:08:23 compute-0 ceph-mon[74360]: pgmap v2544: 321 pgs: 321 active+clean; 390 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 6.2 MiB/s rd, 9.2 MiB/s wr, 263 op/s
Jan 20 15:08:23 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:08:23 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:08:23 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:08:23.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:08:24 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2545: 321 pgs: 321 active+clean; 403 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.7 MiB/s rd, 8.4 MiB/s wr, 223 op/s
Jan 20 15:08:24 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:08:24 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:08:24 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:08:24.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:08:24 compute-0 podman[349459]: 2026-01-20 15:08:24.507437733 +0000 UTC m=+0.093742639 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 20 15:08:24 compute-0 podman[349458]: 2026-01-20 15:08:24.521230076 +0000 UTC m=+0.107535602 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 15:08:24 compute-0 nova_compute[250018]: 2026-01-20 15:08:24.766 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:08:25 compute-0 ceph-mon[74360]: pgmap v2545: 321 pgs: 321 active+clean; 403 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.7 MiB/s rd, 8.4 MiB/s wr, 223 op/s
Jan 20 15:08:25 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/2739269448' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:08:25 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:08:25 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:08:25 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:08:25.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:08:25 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e376 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:08:26 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2546: 321 pgs: 321 active+clean; 445 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 6.2 MiB/s rd, 10 MiB/s wr, 255 op/s
Jan 20 15:08:26 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:08:26 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:08:26 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:08:26.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:08:26 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/2989555521' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:08:26 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/2370305360' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:08:26 compute-0 nova_compute[250018]: 2026-01-20 15:08:26.924 250022 DEBUG oslo_concurrency.lockutils [None req-4a45862e-c4bb-4693-922a-457930f4f782 467630ad66f84c4ba21657f6e5db7d10 48aeeaa346bb4dfba4dfc598ae41f062 - - default default] Acquiring lock "9f36f762-4fcd-4996-a347-43e8704458c9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:08:26 compute-0 nova_compute[250018]: 2026-01-20 15:08:26.924 250022 DEBUG oslo_concurrency.lockutils [None req-4a45862e-c4bb-4693-922a-457930f4f782 467630ad66f84c4ba21657f6e5db7d10 48aeeaa346bb4dfba4dfc598ae41f062 - - default default] Lock "9f36f762-4fcd-4996-a347-43e8704458c9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:08:26 compute-0 nova_compute[250018]: 2026-01-20 15:08:26.945 250022 DEBUG nova.compute.manager [None req-4a45862e-c4bb-4693-922a-457930f4f782 467630ad66f84c4ba21657f6e5db7d10 48aeeaa346bb4dfba4dfc598ae41f062 - - default default] [instance: 9f36f762-4fcd-4996-a347-43e8704458c9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 15:08:27 compute-0 nova_compute[250018]: 2026-01-20 15:08:27.023 250022 DEBUG oslo_concurrency.lockutils [None req-4a45862e-c4bb-4693-922a-457930f4f782 467630ad66f84c4ba21657f6e5db7d10 48aeeaa346bb4dfba4dfc598ae41f062 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:08:27 compute-0 nova_compute[250018]: 2026-01-20 15:08:27.024 250022 DEBUG oslo_concurrency.lockutils [None req-4a45862e-c4bb-4693-922a-457930f4f782 467630ad66f84c4ba21657f6e5db7d10 48aeeaa346bb4dfba4dfc598ae41f062 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:08:27 compute-0 nova_compute[250018]: 2026-01-20 15:08:27.031 250022 DEBUG nova.virt.hardware [None req-4a45862e-c4bb-4693-922a-457930f4f782 467630ad66f84c4ba21657f6e5db7d10 48aeeaa346bb4dfba4dfc598ae41f062 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 15:08:27 compute-0 nova_compute[250018]: 2026-01-20 15:08:27.031 250022 INFO nova.compute.claims [None req-4a45862e-c4bb-4693-922a-457930f4f782 467630ad66f84c4ba21657f6e5db7d10 48aeeaa346bb4dfba4dfc598ae41f062 - - default default] [instance: 9f36f762-4fcd-4996-a347-43e8704458c9] Claim successful on node compute-0.ctlplane.example.com
Jan 20 15:08:27 compute-0 nova_compute[250018]: 2026-01-20 15:08:27.152 250022 DEBUG oslo_concurrency.processutils [None req-4a45862e-c4bb-4693-922a-457930f4f782 467630ad66f84c4ba21657f6e5db7d10 48aeeaa346bb4dfba4dfc598ae41f062 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:08:27 compute-0 nova_compute[250018]: 2026-01-20 15:08:27.386 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:08:27 compute-0 ceph-mon[74360]: pgmap v2546: 321 pgs: 321 active+clean; 445 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 6.2 MiB/s rd, 10 MiB/s wr, 255 op/s
Jan 20 15:08:27 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:08:27 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2684129964' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:08:27 compute-0 nova_compute[250018]: 2026-01-20 15:08:27.651 250022 DEBUG oslo_concurrency.processutils [None req-4a45862e-c4bb-4693-922a-457930f4f782 467630ad66f84c4ba21657f6e5db7d10 48aeeaa346bb4dfba4dfc598ae41f062 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:08:27 compute-0 nova_compute[250018]: 2026-01-20 15:08:27.659 250022 DEBUG nova.compute.provider_tree [None req-4a45862e-c4bb-4693-922a-457930f4f782 467630ad66f84c4ba21657f6e5db7d10 48aeeaa346bb4dfba4dfc598ae41f062 - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:08:27 compute-0 nova_compute[250018]: 2026-01-20 15:08:27.682 250022 DEBUG nova.scheduler.client.report [None req-4a45862e-c4bb-4693-922a-457930f4f782 467630ad66f84c4ba21657f6e5db7d10 48aeeaa346bb4dfba4dfc598ae41f062 - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:08:27 compute-0 nova_compute[250018]: 2026-01-20 15:08:27.709 250022 DEBUG oslo_concurrency.lockutils [None req-4a45862e-c4bb-4693-922a-457930f4f782 467630ad66f84c4ba21657f6e5db7d10 48aeeaa346bb4dfba4dfc598ae41f062 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.686s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:08:27 compute-0 nova_compute[250018]: 2026-01-20 15:08:27.711 250022 DEBUG nova.compute.manager [None req-4a45862e-c4bb-4693-922a-457930f4f782 467630ad66f84c4ba21657f6e5db7d10 48aeeaa346bb4dfba4dfc598ae41f062 - - default default] [instance: 9f36f762-4fcd-4996-a347-43e8704458c9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 15:08:27 compute-0 nova_compute[250018]: 2026-01-20 15:08:27.764 250022 DEBUG nova.compute.manager [None req-4a45862e-c4bb-4693-922a-457930f4f782 467630ad66f84c4ba21657f6e5db7d10 48aeeaa346bb4dfba4dfc598ae41f062 - - default default] [instance: 9f36f762-4fcd-4996-a347-43e8704458c9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 20 15:08:27 compute-0 nova_compute[250018]: 2026-01-20 15:08:27.765 250022 DEBUG nova.network.neutron [None req-4a45862e-c4bb-4693-922a-457930f4f782 467630ad66f84c4ba21657f6e5db7d10 48aeeaa346bb4dfba4dfc598ae41f062 - - default default] [instance: 9f36f762-4fcd-4996-a347-43e8704458c9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 15:08:27 compute-0 nova_compute[250018]: 2026-01-20 15:08:27.785 250022 INFO nova.virt.libvirt.driver [None req-4a45862e-c4bb-4693-922a-457930f4f782 467630ad66f84c4ba21657f6e5db7d10 48aeeaa346bb4dfba4dfc598ae41f062 - - default default] [instance: 9f36f762-4fcd-4996-a347-43e8704458c9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 15:08:27 compute-0 nova_compute[250018]: 2026-01-20 15:08:27.808 250022 DEBUG nova.compute.manager [None req-4a45862e-c4bb-4693-922a-457930f4f782 467630ad66f84c4ba21657f6e5db7d10 48aeeaa346bb4dfba4dfc598ae41f062 - - default default] [instance: 9f36f762-4fcd-4996-a347-43e8704458c9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 15:08:27 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:08:27 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:08:27 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:08:27.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:08:27 compute-0 nova_compute[250018]: 2026-01-20 15:08:27.862 250022 INFO nova.virt.block_device [None req-4a45862e-c4bb-4693-922a-457930f4f782 467630ad66f84c4ba21657f6e5db7d10 48aeeaa346bb4dfba4dfc598ae41f062 - - default default] [instance: 9f36f762-4fcd-4996-a347-43e8704458c9] Booting with volume 658f5854-e66e-4c02-a707-dcb4f1727205 at /dev/vda
Jan 20 15:08:28 compute-0 nova_compute[250018]: 2026-01-20 15:08:28.087 250022 DEBUG os_brick.utils [None req-4a45862e-c4bb-4693-922a-457930f4f782 467630ad66f84c4ba21657f6e5db7d10 48aeeaa346bb4dfba4dfc598ae41f062 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Jan 20 15:08:28 compute-0 nova_compute[250018]: 2026-01-20 15:08:28.088 268150 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:08:28 compute-0 nova_compute[250018]: 2026-01-20 15:08:28.099 268150 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:08:28 compute-0 nova_compute[250018]: 2026-01-20 15:08:28.099 268150 DEBUG oslo.privsep.daemon [-] privsep: reply[e2007a5c-1e22-48ef-b40e-7bb2dfbe0d4c]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:08:28 compute-0 nova_compute[250018]: 2026-01-20 15:08:28.101 268150 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:08:28 compute-0 nova_compute[250018]: 2026-01-20 15:08:28.110 268150 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:08:28 compute-0 nova_compute[250018]: 2026-01-20 15:08:28.110 268150 DEBUG oslo.privsep.daemon [-] privsep: reply[565b655b-b149-42f2-a688-453ae4538542]: (4, ('InitiatorName=iqn.1994-05.com.redhat:228389a1f17e', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:08:28 compute-0 nova_compute[250018]: 2026-01-20 15:08:28.118 250022 DEBUG nova.policy [None req-4a45862e-c4bb-4693-922a-457930f4f782 467630ad66f84c4ba21657f6e5db7d10 48aeeaa346bb4dfba4dfc598ae41f062 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '467630ad66f84c4ba21657f6e5db7d10', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '48aeeaa346bb4dfba4dfc598ae41f062', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 20 15:08:28 compute-0 nova_compute[250018]: 2026-01-20 15:08:28.114 268150 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:08:28 compute-0 nova_compute[250018]: 2026-01-20 15:08:28.122 268150 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:08:28 compute-0 nova_compute[250018]: 2026-01-20 15:08:28.122 268150 DEBUG oslo.privsep.daemon [-] privsep: reply[216f1659-be5b-44f0-a019-d7337360bf5d]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:08:28 compute-0 nova_compute[250018]: 2026-01-20 15:08:28.124 268150 DEBUG oslo.privsep.daemon [-] privsep: reply[a26aed49-8253-4f1e-96ce-63e86b326f77]: (4, '35085f33-1a27-41e3-805d-02c7ac6a1d7f') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:08:28 compute-0 nova_compute[250018]: 2026-01-20 15:08:28.125 250022 DEBUG oslo_concurrency.processutils [None req-4a45862e-c4bb-4693-922a-457930f4f782 467630ad66f84c4ba21657f6e5db7d10 48aeeaa346bb4dfba4dfc598ae41f062 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:08:28 compute-0 nova_compute[250018]: 2026-01-20 15:08:28.161 250022 DEBUG oslo_concurrency.processutils [None req-4a45862e-c4bb-4693-922a-457930f4f782 467630ad66f84c4ba21657f6e5db7d10 48aeeaa346bb4dfba4dfc598ae41f062 - - default default] CMD "nvme version" returned: 0 in 0.036s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:08:28 compute-0 nova_compute[250018]: 2026-01-20 15:08:28.163 250022 DEBUG os_brick.initiator.connectors.lightos [None req-4a45862e-c4bb-4693-922a-457930f4f782 467630ad66f84c4ba21657f6e5db7d10 48aeeaa346bb4dfba4dfc598ae41f062 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Jan 20 15:08:28 compute-0 nova_compute[250018]: 2026-01-20 15:08:28.163 250022 DEBUG os_brick.initiator.connectors.lightos [None req-4a45862e-c4bb-4693-922a-457930f4f782 467630ad66f84c4ba21657f6e5db7d10 48aeeaa346bb4dfba4dfc598ae41f062 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Jan 20 15:08:28 compute-0 nova_compute[250018]: 2026-01-20 15:08:28.163 250022 DEBUG os_brick.initiator.connectors.lightos [None req-4a45862e-c4bb-4693-922a-457930f4f782 467630ad66f84c4ba21657f6e5db7d10 48aeeaa346bb4dfba4dfc598ae41f062 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Jan 20 15:08:28 compute-0 nova_compute[250018]: 2026-01-20 15:08:28.164 250022 DEBUG os_brick.utils [None req-4a45862e-c4bb-4693-922a-457930f4f782 467630ad66f84c4ba21657f6e5db7d10 48aeeaa346bb4dfba4dfc598ae41f062 - - default default] <== get_connector_properties: return (76ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:228389a1f17e', 'do_local_attach': False, 'nvme_hostid': '5350774e-8b5e-4dba-80a9-92d405981c1d', 'system uuid': '35085f33-1a27-41e3-805d-02c7ac6a1d7f', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Jan 20 15:08:28 compute-0 nova_compute[250018]: 2026-01-20 15:08:28.164 250022 DEBUG nova.virt.block_device [None req-4a45862e-c4bb-4693-922a-457930f4f782 467630ad66f84c4ba21657f6e5db7d10 48aeeaa346bb4dfba4dfc598ae41f062 - - default default] [instance: 9f36f762-4fcd-4996-a347-43e8704458c9] Updating existing volume attachment record: ac014952-2475-4e81-98c3-d3d0118a657a _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Jan 20 15:08:28 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2547: 321 pgs: 321 active+clean; 445 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 5.2 MiB/s wr, 92 op/s
Jan 20 15:08:28 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:08:28 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:08:28 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:08:28.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:08:28 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/2684129964' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:08:28 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 15:08:28 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4189146071' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:08:28 compute-0 nova_compute[250018]: 2026-01-20 15:08:28.955 250022 DEBUG nova.network.neutron [None req-4a45862e-c4bb-4693-922a-457930f4f782 467630ad66f84c4ba21657f6e5db7d10 48aeeaa346bb4dfba4dfc598ae41f062 - - default default] [instance: 9f36f762-4fcd-4996-a347-43e8704458c9] Successfully created port: 4905566f-51f5-4050-b95c-545d1c4eca86 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 20 15:08:29 compute-0 ovn_controller[148666]: 2026-01-20T15:08:29Z|00597|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Jan 20 15:08:29 compute-0 nova_compute[250018]: 2026-01-20 15:08:29.410 250022 DEBUG nova.compute.manager [None req-4a45862e-c4bb-4693-922a-457930f4f782 467630ad66f84c4ba21657f6e5db7d10 48aeeaa346bb4dfba4dfc598ae41f062 - - default default] [instance: 9f36f762-4fcd-4996-a347-43e8704458c9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 15:08:29 compute-0 nova_compute[250018]: 2026-01-20 15:08:29.411 250022 DEBUG nova.virt.libvirt.driver [None req-4a45862e-c4bb-4693-922a-457930f4f782 467630ad66f84c4ba21657f6e5db7d10 48aeeaa346bb4dfba4dfc598ae41f062 - - default default] [instance: 9f36f762-4fcd-4996-a347-43e8704458c9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 15:08:29 compute-0 nova_compute[250018]: 2026-01-20 15:08:29.412 250022 INFO nova.virt.libvirt.driver [None req-4a45862e-c4bb-4693-922a-457930f4f782 467630ad66f84c4ba21657f6e5db7d10 48aeeaa346bb4dfba4dfc598ae41f062 - - default default] [instance: 9f36f762-4fcd-4996-a347-43e8704458c9] Creating image(s)
Jan 20 15:08:29 compute-0 nova_compute[250018]: 2026-01-20 15:08:29.412 250022 DEBUG nova.virt.libvirt.driver [None req-4a45862e-c4bb-4693-922a-457930f4f782 467630ad66f84c4ba21657f6e5db7d10 48aeeaa346bb4dfba4dfc598ae41f062 - - default default] [instance: 9f36f762-4fcd-4996-a347-43e8704458c9] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Jan 20 15:08:29 compute-0 nova_compute[250018]: 2026-01-20 15:08:29.413 250022 DEBUG nova.virt.libvirt.driver [None req-4a45862e-c4bb-4693-922a-457930f4f782 467630ad66f84c4ba21657f6e5db7d10 48aeeaa346bb4dfba4dfc598ae41f062 - - default default] [instance: 9f36f762-4fcd-4996-a347-43e8704458c9] Ensure instance console log exists: /var/lib/nova/instances/9f36f762-4fcd-4996-a347-43e8704458c9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 15:08:29 compute-0 nova_compute[250018]: 2026-01-20 15:08:29.413 250022 DEBUG oslo_concurrency.lockutils [None req-4a45862e-c4bb-4693-922a-457930f4f782 467630ad66f84c4ba21657f6e5db7d10 48aeeaa346bb4dfba4dfc598ae41f062 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:08:29 compute-0 nova_compute[250018]: 2026-01-20 15:08:29.413 250022 DEBUG oslo_concurrency.lockutils [None req-4a45862e-c4bb-4693-922a-457930f4f782 467630ad66f84c4ba21657f6e5db7d10 48aeeaa346bb4dfba4dfc598ae41f062 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:08:29 compute-0 nova_compute[250018]: 2026-01-20 15:08:29.414 250022 DEBUG oslo_concurrency.lockutils [None req-4a45862e-c4bb-4693-922a-457930f4f782 467630ad66f84c4ba21657f6e5db7d10 48aeeaa346bb4dfba4dfc598ae41f062 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:08:29 compute-0 ceph-mon[74360]: pgmap v2547: 321 pgs: 321 active+clean; 445 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 5.2 MiB/s wr, 92 op/s
Jan 20 15:08:29 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/4189146071' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:08:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:08:29.578 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=367c1a2c-b16a-4828-ab5a-626bb50023b4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '54'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:08:29 compute-0 nova_compute[250018]: 2026-01-20 15:08:29.769 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:08:29 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:08:29 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:08:29 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:08:29.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:08:30 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2548: 321 pgs: 321 active+clean; 464 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 5.2 MiB/s wr, 116 op/s
Jan 20 15:08:30 compute-0 nova_compute[250018]: 2026-01-20 15:08:30.398 250022 DEBUG nova.network.neutron [None req-4a45862e-c4bb-4693-922a-457930f4f782 467630ad66f84c4ba21657f6e5db7d10 48aeeaa346bb4dfba4dfc598ae41f062 - - default default] [instance: 9f36f762-4fcd-4996-a347-43e8704458c9] Successfully updated port: 4905566f-51f5-4050-b95c-545d1c4eca86 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 20 15:08:30 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:08:30 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:08:30 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:08:30.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:08:30 compute-0 nova_compute[250018]: 2026-01-20 15:08:30.415 250022 DEBUG oslo_concurrency.lockutils [None req-4a45862e-c4bb-4693-922a-457930f4f782 467630ad66f84c4ba21657f6e5db7d10 48aeeaa346bb4dfba4dfc598ae41f062 - - default default] Acquiring lock "refresh_cache-9f36f762-4fcd-4996-a347-43e8704458c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:08:30 compute-0 nova_compute[250018]: 2026-01-20 15:08:30.416 250022 DEBUG oslo_concurrency.lockutils [None req-4a45862e-c4bb-4693-922a-457930f4f782 467630ad66f84c4ba21657f6e5db7d10 48aeeaa346bb4dfba4dfc598ae41f062 - - default default] Acquired lock "refresh_cache-9f36f762-4fcd-4996-a347-43e8704458c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:08:30 compute-0 nova_compute[250018]: 2026-01-20 15:08:30.416 250022 DEBUG nova.network.neutron [None req-4a45862e-c4bb-4693-922a-457930f4f782 467630ad66f84c4ba21657f6e5db7d10 48aeeaa346bb4dfba4dfc598ae41f062 - - default default] [instance: 9f36f762-4fcd-4996-a347-43e8704458c9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 15:08:30 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e376 do_prune osdmap full prune enabled
Jan 20 15:08:30 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e377 e377: 3 total, 3 up, 3 in
Jan 20 15:08:30 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e377: 3 total, 3 up, 3 in
Jan 20 15:08:30 compute-0 nova_compute[250018]: 2026-01-20 15:08:30.525 250022 DEBUG nova.compute.manager [req-d81a6d11-2d8a-456e-9344-3154f09991ef req-6904e8e8-5398-4ae6-93c3-641d40935e09 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 9f36f762-4fcd-4996-a347-43e8704458c9] Received event network-changed-4905566f-51f5-4050-b95c-545d1c4eca86 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:08:30 compute-0 nova_compute[250018]: 2026-01-20 15:08:30.525 250022 DEBUG nova.compute.manager [req-d81a6d11-2d8a-456e-9344-3154f09991ef req-6904e8e8-5398-4ae6-93c3-641d40935e09 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 9f36f762-4fcd-4996-a347-43e8704458c9] Refreshing instance network info cache due to event network-changed-4905566f-51f5-4050-b95c-545d1c4eca86. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 15:08:30 compute-0 nova_compute[250018]: 2026-01-20 15:08:30.525 250022 DEBUG oslo_concurrency.lockutils [req-d81a6d11-2d8a-456e-9344-3154f09991ef req-6904e8e8-5398-4ae6-93c3-641d40935e09 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-9f36f762-4fcd-4996-a347-43e8704458c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:08:30 compute-0 nova_compute[250018]: 2026-01-20 15:08:30.617 250022 DEBUG nova.network.neutron [None req-4a45862e-c4bb-4693-922a-457930f4f782 467630ad66f84c4ba21657f6e5db7d10 48aeeaa346bb4dfba4dfc598ae41f062 - - default default] [instance: 9f36f762-4fcd-4996-a347-43e8704458c9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 15:08:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:08:30.778 160071 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:08:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:08:30.778 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:08:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:08:30.778 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:08:30 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e377 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:08:31 compute-0 ceph-mon[74360]: pgmap v2548: 321 pgs: 321 active+clean; 464 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 5.2 MiB/s wr, 116 op/s
Jan 20 15:08:31 compute-0 ceph-mon[74360]: osdmap e377: 3 total, 3 up, 3 in
Jan 20 15:08:31 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:08:31 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:08:31 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:08:31.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:08:32 compute-0 nova_compute[250018]: 2026-01-20 15:08:32.115 250022 DEBUG nova.network.neutron [None req-4a45862e-c4bb-4693-922a-457930f4f782 467630ad66f84c4ba21657f6e5db7d10 48aeeaa346bb4dfba4dfc598ae41f062 - - default default] [instance: 9f36f762-4fcd-4996-a347-43e8704458c9] Updating instance_info_cache with network_info: [{"id": "4905566f-51f5-4050-b95c-545d1c4eca86", "address": "fa:16:3e:24:50:fe", "network": {"id": "8ca08cef-0ad0-44f7-b2ed-8e7b0a2aa4ef", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-156652880-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48aeeaa346bb4dfba4dfc598ae41f062", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4905566f-51", "ovs_interfaceid": "4905566f-51f5-4050-b95c-545d1c4eca86", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:08:32 compute-0 nova_compute[250018]: 2026-01-20 15:08:32.142 250022 DEBUG oslo_concurrency.lockutils [None req-4a45862e-c4bb-4693-922a-457930f4f782 467630ad66f84c4ba21657f6e5db7d10 48aeeaa346bb4dfba4dfc598ae41f062 - - default default] Releasing lock "refresh_cache-9f36f762-4fcd-4996-a347-43e8704458c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:08:32 compute-0 nova_compute[250018]: 2026-01-20 15:08:32.143 250022 DEBUG nova.compute.manager [None req-4a45862e-c4bb-4693-922a-457930f4f782 467630ad66f84c4ba21657f6e5db7d10 48aeeaa346bb4dfba4dfc598ae41f062 - - default default] [instance: 9f36f762-4fcd-4996-a347-43e8704458c9] Instance network_info: |[{"id": "4905566f-51f5-4050-b95c-545d1c4eca86", "address": "fa:16:3e:24:50:fe", "network": {"id": "8ca08cef-0ad0-44f7-b2ed-8e7b0a2aa4ef", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-156652880-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48aeeaa346bb4dfba4dfc598ae41f062", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4905566f-51", "ovs_interfaceid": "4905566f-51f5-4050-b95c-545d1c4eca86", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 20 15:08:32 compute-0 nova_compute[250018]: 2026-01-20 15:08:32.143 250022 DEBUG oslo_concurrency.lockutils [req-d81a6d11-2d8a-456e-9344-3154f09991ef req-6904e8e8-5398-4ae6-93c3-641d40935e09 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-9f36f762-4fcd-4996-a347-43e8704458c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:08:32 compute-0 nova_compute[250018]: 2026-01-20 15:08:32.144 250022 DEBUG nova.network.neutron [req-d81a6d11-2d8a-456e-9344-3154f09991ef req-6904e8e8-5398-4ae6-93c3-641d40935e09 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 9f36f762-4fcd-4996-a347-43e8704458c9] Refreshing network info cache for port 4905566f-51f5-4050-b95c-545d1c4eca86 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 15:08:32 compute-0 nova_compute[250018]: 2026-01-20 15:08:32.150 250022 DEBUG nova.virt.libvirt.driver [None req-4a45862e-c4bb-4693-922a-457930f4f782 467630ad66f84c4ba21657f6e5db7d10 48aeeaa346bb4dfba4dfc598ae41f062 - - default default] [instance: 9f36f762-4fcd-4996-a347-43e8704458c9] Start _get_guest_xml network_info=[{"id": "4905566f-51f5-4050-b95c-545d1c4eca86", "address": "fa:16:3e:24:50:fe", "network": {"id": "8ca08cef-0ad0-44f7-b2ed-8e7b0a2aa4ef", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-156652880-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48aeeaa346bb4dfba4dfc598ae41f062", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4905566f-51", "ovs_interfaceid": "4905566f-51f5-4050-b95c-545d1c4eca86", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': 
'/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'mount_device': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'attachment_id': 'ac014952-2475-4e81-98c3-d3d0118a657a', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-658f5854-e66e-4c02-a707-dcb4f1727205', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '658f5854-e66e-4c02-a707-dcb4f1727205', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '9f36f762-4fcd-4996-a347-43e8704458c9', 'attached_at': '', 'detached_at': '', 'volume_id': '658f5854-e66e-4c02-a707-dcb4f1727205', 'serial': '658f5854-e66e-4c02-a707-dcb4f1727205'}, 'disk_bus': 'virtio', 'guest_format': None, 'delete_on_termination': False, 'volume_type': None}], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 15:08:32 compute-0 nova_compute[250018]: 2026-01-20 15:08:32.157 250022 WARNING nova.virt.libvirt.driver [None req-4a45862e-c4bb-4693-922a-457930f4f782 467630ad66f84c4ba21657f6e5db7d10 48aeeaa346bb4dfba4dfc598ae41f062 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:08:32 compute-0 nova_compute[250018]: 2026-01-20 15:08:32.173 250022 DEBUG nova.virt.libvirt.host [None req-4a45862e-c4bb-4693-922a-457930f4f782 467630ad66f84c4ba21657f6e5db7d10 48aeeaa346bb4dfba4dfc598ae41f062 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 15:08:32 compute-0 nova_compute[250018]: 2026-01-20 15:08:32.174 250022 DEBUG nova.virt.libvirt.host [None req-4a45862e-c4bb-4693-922a-457930f4f782 467630ad66f84c4ba21657f6e5db7d10 48aeeaa346bb4dfba4dfc598ae41f062 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 15:08:32 compute-0 nova_compute[250018]: 2026-01-20 15:08:32.185 250022 DEBUG nova.virt.libvirt.host [None req-4a45862e-c4bb-4693-922a-457930f4f782 467630ad66f84c4ba21657f6e5db7d10 48aeeaa346bb4dfba4dfc598ae41f062 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 15:08:32 compute-0 nova_compute[250018]: 2026-01-20 15:08:32.186 250022 DEBUG nova.virt.libvirt.host [None req-4a45862e-c4bb-4693-922a-457930f4f782 467630ad66f84c4ba21657f6e5db7d10 48aeeaa346bb4dfba4dfc598ae41f062 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 15:08:32 compute-0 nova_compute[250018]: 2026-01-20 15:08:32.188 250022 DEBUG nova.virt.libvirt.driver [None req-4a45862e-c4bb-4693-922a-457930f4f782 467630ad66f84c4ba21657f6e5db7d10 48aeeaa346bb4dfba4dfc598ae41f062 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 15:08:32 compute-0 nova_compute[250018]: 2026-01-20 15:08:32.189 250022 DEBUG nova.virt.hardware [None req-4a45862e-c4bb-4693-922a-457930f4f782 467630ad66f84c4ba21657f6e5db7d10 48aeeaa346bb4dfba4dfc598ae41f062 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 15:08:32 compute-0 nova_compute[250018]: 2026-01-20 15:08:32.189 250022 DEBUG nova.virt.hardware [None req-4a45862e-c4bb-4693-922a-457930f4f782 467630ad66f84c4ba21657f6e5db7d10 48aeeaa346bb4dfba4dfc598ae41f062 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 15:08:32 compute-0 nova_compute[250018]: 2026-01-20 15:08:32.190 250022 DEBUG nova.virt.hardware [None req-4a45862e-c4bb-4693-922a-457930f4f782 467630ad66f84c4ba21657f6e5db7d10 48aeeaa346bb4dfba4dfc598ae41f062 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 15:08:32 compute-0 nova_compute[250018]: 2026-01-20 15:08:32.190 250022 DEBUG nova.virt.hardware [None req-4a45862e-c4bb-4693-922a-457930f4f782 467630ad66f84c4ba21657f6e5db7d10 48aeeaa346bb4dfba4dfc598ae41f062 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 15:08:32 compute-0 nova_compute[250018]: 2026-01-20 15:08:32.191 250022 DEBUG nova.virt.hardware [None req-4a45862e-c4bb-4693-922a-457930f4f782 467630ad66f84c4ba21657f6e5db7d10 48aeeaa346bb4dfba4dfc598ae41f062 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 15:08:32 compute-0 nova_compute[250018]: 2026-01-20 15:08:32.191 250022 DEBUG nova.virt.hardware [None req-4a45862e-c4bb-4693-922a-457930f4f782 467630ad66f84c4ba21657f6e5db7d10 48aeeaa346bb4dfba4dfc598ae41f062 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 15:08:32 compute-0 nova_compute[250018]: 2026-01-20 15:08:32.192 250022 DEBUG nova.virt.hardware [None req-4a45862e-c4bb-4693-922a-457930f4f782 467630ad66f84c4ba21657f6e5db7d10 48aeeaa346bb4dfba4dfc598ae41f062 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 15:08:32 compute-0 nova_compute[250018]: 2026-01-20 15:08:32.193 250022 DEBUG nova.virt.hardware [None req-4a45862e-c4bb-4693-922a-457930f4f782 467630ad66f84c4ba21657f6e5db7d10 48aeeaa346bb4dfba4dfc598ae41f062 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 15:08:32 compute-0 nova_compute[250018]: 2026-01-20 15:08:32.193 250022 DEBUG nova.virt.hardware [None req-4a45862e-c4bb-4693-922a-457930f4f782 467630ad66f84c4ba21657f6e5db7d10 48aeeaa346bb4dfba4dfc598ae41f062 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 15:08:32 compute-0 nova_compute[250018]: 2026-01-20 15:08:32.193 250022 DEBUG nova.virt.hardware [None req-4a45862e-c4bb-4693-922a-457930f4f782 467630ad66f84c4ba21657f6e5db7d10 48aeeaa346bb4dfba4dfc598ae41f062 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 15:08:32 compute-0 nova_compute[250018]: 2026-01-20 15:08:32.194 250022 DEBUG nova.virt.hardware [None req-4a45862e-c4bb-4693-922a-457930f4f782 467630ad66f84c4ba21657f6e5db7d10 48aeeaa346bb4dfba4dfc598ae41f062 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 15:08:32 compute-0 nova_compute[250018]: 2026-01-20 15:08:32.237 250022 DEBUG nova.storage.rbd_utils [None req-4a45862e-c4bb-4693-922a-457930f4f782 467630ad66f84c4ba21657f6e5db7d10 48aeeaa346bb4dfba4dfc598ae41f062 - - default default] rbd image 9f36f762-4fcd-4996-a347-43e8704458c9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:08:32 compute-0 nova_compute[250018]: 2026-01-20 15:08:32.242 250022 DEBUG oslo_concurrency.processutils [None req-4a45862e-c4bb-4693-922a-457930f4f782 467630ad66f84c4ba21657f6e5db7d10 48aeeaa346bb4dfba4dfc598ae41f062 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:08:32 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2550: 321 pgs: 321 active+clean; 477 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 4.9 MiB/s wr, 135 op/s
Jan 20 15:08:32 compute-0 nova_compute[250018]: 2026-01-20 15:08:32.388 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:08:32 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:08:32 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:08:32 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:08:32.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:08:32 compute-0 ceph-mon[74360]: pgmap v2550: 321 pgs: 321 active+clean; 477 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 4.9 MiB/s wr, 135 op/s
Jan 20 15:08:32 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 15:08:32 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/243867422' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:08:32 compute-0 nova_compute[250018]: 2026-01-20 15:08:32.745 250022 DEBUG oslo_concurrency.processutils [None req-4a45862e-c4bb-4693-922a-457930f4f782 467630ad66f84c4ba21657f6e5db7d10 48aeeaa346bb4dfba4dfc598ae41f062 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:08:32 compute-0 nova_compute[250018]: 2026-01-20 15:08:32.770 250022 DEBUG nova.virt.libvirt.vif [None req-4a45862e-c4bb-4693-922a-457930f4f782 467630ad66f84c4ba21657f6e5db7d10 48aeeaa346bb4dfba4dfc598ae41f062 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T15:08:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBackupRestore-server-1812790462',display_name='tempest-TestVolumeBackupRestore-server-1812790462',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebackuprestore-server-1812790462',id=169,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGxLbPiDR7AwdjDmf86eGvFaH6h62+9wOP7b56n23VGMK1V8QSlWsSLfeNXOI/Y73QQp/5icyOUxYMpUaWFgqmFCeN4Fx0NUoAEImZuyySZ+kRnaZMcDsvRcAMwkQP9cMw==',key_name='tempest-TestVolumeBackupRestore-1250360555',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='48aeeaa346bb4dfba4dfc598ae41f062',ramdisk_id='',reservation_id='r-1fawb1vq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBackupRestore-61264633',owner_user_name='tempest-TestVolumeBackupRestore-61264633-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T15:08:27Z,user_data=None,user_id='467630ad66f84c4ba21657f6e5db7d10',uuid=9f36f762-4fcd-4996-a347-43e8704458c9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4905566f-51f5-4050-b95c-545d1c4eca86", "address": "fa:16:3e:24:50:fe", "network": {"id": "8ca08cef-0ad0-44f7-b2ed-8e7b0a2aa4ef", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-156652880-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"48aeeaa346bb4dfba4dfc598ae41f062", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4905566f-51", "ovs_interfaceid": "4905566f-51f5-4050-b95c-545d1c4eca86", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 15:08:32 compute-0 nova_compute[250018]: 2026-01-20 15:08:32.771 250022 DEBUG nova.network.os_vif_util [None req-4a45862e-c4bb-4693-922a-457930f4f782 467630ad66f84c4ba21657f6e5db7d10 48aeeaa346bb4dfba4dfc598ae41f062 - - default default] Converting VIF {"id": "4905566f-51f5-4050-b95c-545d1c4eca86", "address": "fa:16:3e:24:50:fe", "network": {"id": "8ca08cef-0ad0-44f7-b2ed-8e7b0a2aa4ef", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-156652880-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48aeeaa346bb4dfba4dfc598ae41f062", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4905566f-51", "ovs_interfaceid": "4905566f-51f5-4050-b95c-545d1c4eca86", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:08:32 compute-0 nova_compute[250018]: 2026-01-20 15:08:32.772 250022 DEBUG nova.network.os_vif_util [None req-4a45862e-c4bb-4693-922a-457930f4f782 467630ad66f84c4ba21657f6e5db7d10 48aeeaa346bb4dfba4dfc598ae41f062 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:24:50:fe,bridge_name='br-int',has_traffic_filtering=True,id=4905566f-51f5-4050-b95c-545d1c4eca86,network=Network(8ca08cef-0ad0-44f7-b2ed-8e7b0a2aa4ef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4905566f-51') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:08:32 compute-0 nova_compute[250018]: 2026-01-20 15:08:32.774 250022 DEBUG nova.objects.instance [None req-4a45862e-c4bb-4693-922a-457930f4f782 467630ad66f84c4ba21657f6e5db7d10 48aeeaa346bb4dfba4dfc598ae41f062 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9f36f762-4fcd-4996-a347-43e8704458c9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:08:32 compute-0 nova_compute[250018]: 2026-01-20 15:08:32.789 250022 DEBUG nova.virt.libvirt.driver [None req-4a45862e-c4bb-4693-922a-457930f4f782 467630ad66f84c4ba21657f6e5db7d10 48aeeaa346bb4dfba4dfc598ae41f062 - - default default] [instance: 9f36f762-4fcd-4996-a347-43e8704458c9] End _get_guest_xml xml=<domain type="kvm">
Jan 20 15:08:32 compute-0 nova_compute[250018]:   <uuid>9f36f762-4fcd-4996-a347-43e8704458c9</uuid>
Jan 20 15:08:32 compute-0 nova_compute[250018]:   <name>instance-000000a9</name>
Jan 20 15:08:32 compute-0 nova_compute[250018]:   <memory>131072</memory>
Jan 20 15:08:32 compute-0 nova_compute[250018]:   <vcpu>1</vcpu>
Jan 20 15:08:32 compute-0 nova_compute[250018]:   <metadata>
Jan 20 15:08:32 compute-0 nova_compute[250018]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 15:08:32 compute-0 nova_compute[250018]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 15:08:32 compute-0 nova_compute[250018]:       <nova:name>tempest-TestVolumeBackupRestore-server-1812790462</nova:name>
Jan 20 15:08:32 compute-0 nova_compute[250018]:       <nova:creationTime>2026-01-20 15:08:32</nova:creationTime>
Jan 20 15:08:32 compute-0 nova_compute[250018]:       <nova:flavor name="m1.nano">
Jan 20 15:08:32 compute-0 nova_compute[250018]:         <nova:memory>128</nova:memory>
Jan 20 15:08:32 compute-0 nova_compute[250018]:         <nova:disk>1</nova:disk>
Jan 20 15:08:32 compute-0 nova_compute[250018]:         <nova:swap>0</nova:swap>
Jan 20 15:08:32 compute-0 nova_compute[250018]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 15:08:32 compute-0 nova_compute[250018]:         <nova:vcpus>1</nova:vcpus>
Jan 20 15:08:32 compute-0 nova_compute[250018]:       </nova:flavor>
Jan 20 15:08:32 compute-0 nova_compute[250018]:       <nova:owner>
Jan 20 15:08:32 compute-0 nova_compute[250018]:         <nova:user uuid="467630ad66f84c4ba21657f6e5db7d10">tempest-TestVolumeBackupRestore-61264633-project-member</nova:user>
Jan 20 15:08:32 compute-0 nova_compute[250018]:         <nova:project uuid="48aeeaa346bb4dfba4dfc598ae41f062">tempest-TestVolumeBackupRestore-61264633</nova:project>
Jan 20 15:08:32 compute-0 nova_compute[250018]:       </nova:owner>
Jan 20 15:08:32 compute-0 nova_compute[250018]:       <nova:ports>
Jan 20 15:08:32 compute-0 nova_compute[250018]:         <nova:port uuid="4905566f-51f5-4050-b95c-545d1c4eca86">
Jan 20 15:08:32 compute-0 nova_compute[250018]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Jan 20 15:08:32 compute-0 nova_compute[250018]:         </nova:port>
Jan 20 15:08:32 compute-0 nova_compute[250018]:       </nova:ports>
Jan 20 15:08:32 compute-0 nova_compute[250018]:     </nova:instance>
Jan 20 15:08:32 compute-0 nova_compute[250018]:   </metadata>
Jan 20 15:08:32 compute-0 nova_compute[250018]:   <sysinfo type="smbios">
Jan 20 15:08:32 compute-0 nova_compute[250018]:     <system>
Jan 20 15:08:32 compute-0 nova_compute[250018]:       <entry name="manufacturer">RDO</entry>
Jan 20 15:08:32 compute-0 nova_compute[250018]:       <entry name="product">OpenStack Compute</entry>
Jan 20 15:08:32 compute-0 nova_compute[250018]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 15:08:32 compute-0 nova_compute[250018]:       <entry name="serial">9f36f762-4fcd-4996-a347-43e8704458c9</entry>
Jan 20 15:08:32 compute-0 nova_compute[250018]:       <entry name="uuid">9f36f762-4fcd-4996-a347-43e8704458c9</entry>
Jan 20 15:08:32 compute-0 nova_compute[250018]:       <entry name="family">Virtual Machine</entry>
Jan 20 15:08:32 compute-0 nova_compute[250018]:     </system>
Jan 20 15:08:32 compute-0 nova_compute[250018]:   </sysinfo>
Jan 20 15:08:32 compute-0 nova_compute[250018]:   <os>
Jan 20 15:08:32 compute-0 nova_compute[250018]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 15:08:32 compute-0 nova_compute[250018]:     <boot dev="hd"/>
Jan 20 15:08:32 compute-0 nova_compute[250018]:     <smbios mode="sysinfo"/>
Jan 20 15:08:32 compute-0 nova_compute[250018]:   </os>
Jan 20 15:08:32 compute-0 nova_compute[250018]:   <features>
Jan 20 15:08:32 compute-0 nova_compute[250018]:     <acpi/>
Jan 20 15:08:32 compute-0 nova_compute[250018]:     <apic/>
Jan 20 15:08:32 compute-0 nova_compute[250018]:     <vmcoreinfo/>
Jan 20 15:08:32 compute-0 nova_compute[250018]:   </features>
Jan 20 15:08:32 compute-0 nova_compute[250018]:   <clock offset="utc">
Jan 20 15:08:32 compute-0 nova_compute[250018]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 15:08:32 compute-0 nova_compute[250018]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 15:08:32 compute-0 nova_compute[250018]:     <timer name="hpet" present="no"/>
Jan 20 15:08:32 compute-0 nova_compute[250018]:   </clock>
Jan 20 15:08:32 compute-0 nova_compute[250018]:   <cpu mode="custom" match="exact">
Jan 20 15:08:32 compute-0 nova_compute[250018]:     <model>Nehalem</model>
Jan 20 15:08:32 compute-0 nova_compute[250018]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 15:08:32 compute-0 nova_compute[250018]:   </cpu>
Jan 20 15:08:32 compute-0 nova_compute[250018]:   <devices>
Jan 20 15:08:32 compute-0 nova_compute[250018]:     <disk type="network" device="cdrom">
Jan 20 15:08:32 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 15:08:32 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/9f36f762-4fcd-4996-a347-43e8704458c9_disk.config">
Jan 20 15:08:32 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 15:08:32 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 15:08:32 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 15:08:32 compute-0 nova_compute[250018]:       </source>
Jan 20 15:08:32 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 15:08:32 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 15:08:32 compute-0 nova_compute[250018]:       </auth>
Jan 20 15:08:32 compute-0 nova_compute[250018]:       <target dev="sda" bus="sata"/>
Jan 20 15:08:32 compute-0 nova_compute[250018]:     </disk>
Jan 20 15:08:32 compute-0 nova_compute[250018]:     <disk type="network" device="disk">
Jan 20 15:08:32 compute-0 nova_compute[250018]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 20 15:08:32 compute-0 nova_compute[250018]:       <source protocol="rbd" name="volumes/volume-658f5854-e66e-4c02-a707-dcb4f1727205">
Jan 20 15:08:32 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 15:08:32 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 15:08:32 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 15:08:32 compute-0 nova_compute[250018]:       </source>
Jan 20 15:08:32 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 15:08:32 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 15:08:32 compute-0 nova_compute[250018]:       </auth>
Jan 20 15:08:32 compute-0 nova_compute[250018]:       <target dev="vda" bus="virtio"/>
Jan 20 15:08:32 compute-0 nova_compute[250018]:       <serial>658f5854-e66e-4c02-a707-dcb4f1727205</serial>
Jan 20 15:08:32 compute-0 nova_compute[250018]:     </disk>
Jan 20 15:08:32 compute-0 nova_compute[250018]:     <interface type="ethernet">
Jan 20 15:08:32 compute-0 nova_compute[250018]:       <mac address="fa:16:3e:24:50:fe"/>
Jan 20 15:08:32 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 15:08:32 compute-0 nova_compute[250018]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 15:08:32 compute-0 nova_compute[250018]:       <mtu size="1442"/>
Jan 20 15:08:32 compute-0 nova_compute[250018]:       <target dev="tap4905566f-51"/>
Jan 20 15:08:32 compute-0 nova_compute[250018]:     </interface>
Jan 20 15:08:32 compute-0 nova_compute[250018]:     <serial type="pty">
Jan 20 15:08:32 compute-0 nova_compute[250018]:       <log file="/var/lib/nova/instances/9f36f762-4fcd-4996-a347-43e8704458c9/console.log" append="off"/>
Jan 20 15:08:32 compute-0 nova_compute[250018]:     </serial>
Jan 20 15:08:32 compute-0 nova_compute[250018]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 15:08:32 compute-0 nova_compute[250018]:     <video>
Jan 20 15:08:32 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 15:08:32 compute-0 nova_compute[250018]:     </video>
Jan 20 15:08:32 compute-0 nova_compute[250018]:     <input type="tablet" bus="usb"/>
Jan 20 15:08:32 compute-0 nova_compute[250018]:     <rng model="virtio">
Jan 20 15:08:32 compute-0 nova_compute[250018]:       <backend model="random">/dev/urandom</backend>
Jan 20 15:08:32 compute-0 nova_compute[250018]:     </rng>
Jan 20 15:08:32 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root"/>
Jan 20 15:08:32 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:08:32 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:08:32 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:08:32 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:08:32 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:08:32 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:08:32 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:08:32 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:08:32 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:08:32 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:08:32 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:08:32 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:08:32 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:08:32 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:08:32 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:08:32 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:08:32 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:08:32 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:08:32 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:08:32 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:08:32 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:08:32 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:08:32 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:08:32 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:08:32 compute-0 nova_compute[250018]:     <controller type="usb" index="0"/>
Jan 20 15:08:32 compute-0 nova_compute[250018]:     <memballoon model="virtio">
Jan 20 15:08:32 compute-0 nova_compute[250018]:       <stats period="10"/>
Jan 20 15:08:32 compute-0 nova_compute[250018]:     </memballoon>
Jan 20 15:08:32 compute-0 nova_compute[250018]:   </devices>
Jan 20 15:08:32 compute-0 nova_compute[250018]: </domain>
Jan 20 15:08:32 compute-0 nova_compute[250018]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 15:08:32 compute-0 nova_compute[250018]: 2026-01-20 15:08:32.791 250022 DEBUG nova.compute.manager [None req-4a45862e-c4bb-4693-922a-457930f4f782 467630ad66f84c4ba21657f6e5db7d10 48aeeaa346bb4dfba4dfc598ae41f062 - - default default] [instance: 9f36f762-4fcd-4996-a347-43e8704458c9] Preparing to wait for external event network-vif-plugged-4905566f-51f5-4050-b95c-545d1c4eca86 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 20 15:08:32 compute-0 nova_compute[250018]: 2026-01-20 15:08:32.791 250022 DEBUG oslo_concurrency.lockutils [None req-4a45862e-c4bb-4693-922a-457930f4f782 467630ad66f84c4ba21657f6e5db7d10 48aeeaa346bb4dfba4dfc598ae41f062 - - default default] Acquiring lock "9f36f762-4fcd-4996-a347-43e8704458c9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:08:32 compute-0 nova_compute[250018]: 2026-01-20 15:08:32.792 250022 DEBUG oslo_concurrency.lockutils [None req-4a45862e-c4bb-4693-922a-457930f4f782 467630ad66f84c4ba21657f6e5db7d10 48aeeaa346bb4dfba4dfc598ae41f062 - - default default] Lock "9f36f762-4fcd-4996-a347-43e8704458c9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:08:32 compute-0 nova_compute[250018]: 2026-01-20 15:08:32.792 250022 DEBUG oslo_concurrency.lockutils [None req-4a45862e-c4bb-4693-922a-457930f4f782 467630ad66f84c4ba21657f6e5db7d10 48aeeaa346bb4dfba4dfc598ae41f062 - - default default] Lock "9f36f762-4fcd-4996-a347-43e8704458c9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:08:32 compute-0 nova_compute[250018]: 2026-01-20 15:08:32.794 250022 DEBUG nova.virt.libvirt.vif [None req-4a45862e-c4bb-4693-922a-457930f4f782 467630ad66f84c4ba21657f6e5db7d10 48aeeaa346bb4dfba4dfc598ae41f062 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T15:08:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBackupRestore-server-1812790462',display_name='tempest-TestVolumeBackupRestore-server-1812790462',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebackuprestore-server-1812790462',id=169,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGxLbPiDR7AwdjDmf86eGvFaH6h62+9wOP7b56n23VGMK1V8QSlWsSLfeNXOI/Y73QQp/5icyOUxYMpUaWFgqmFCeN4Fx0NUoAEImZuyySZ+kRnaZMcDsvRcAMwkQP9cMw==',key_name='tempest-TestVolumeBackupRestore-1250360555',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='48aeeaa346bb4dfba4dfc598ae41f062',ramdisk_id='',reservation_id='r-1fawb1vq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBackupRestore-61264633',owner_user_name='tempest-TestVolumeBackupRestore-61264633-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T15:08:27Z,user_data=None,user_id='467630ad66f84c4ba21657f6e5db7d10',uuid=9f36f762-4fcd-4996-a347-43e8704458c9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4905566f-51f5-4050-b95c-545d1c4eca86", "address": "fa:16:3e:24:50:fe", "network": {"id": "8ca08cef-0ad0-44f7-b2ed-8e7b0a2aa4ef", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-156652880-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"48aeeaa346bb4dfba4dfc598ae41f062", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4905566f-51", "ovs_interfaceid": "4905566f-51f5-4050-b95c-545d1c4eca86", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 15:08:32 compute-0 nova_compute[250018]: 2026-01-20 15:08:32.794 250022 DEBUG nova.network.os_vif_util [None req-4a45862e-c4bb-4693-922a-457930f4f782 467630ad66f84c4ba21657f6e5db7d10 48aeeaa346bb4dfba4dfc598ae41f062 - - default default] Converting VIF {"id": "4905566f-51f5-4050-b95c-545d1c4eca86", "address": "fa:16:3e:24:50:fe", "network": {"id": "8ca08cef-0ad0-44f7-b2ed-8e7b0a2aa4ef", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-156652880-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48aeeaa346bb4dfba4dfc598ae41f062", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4905566f-51", "ovs_interfaceid": "4905566f-51f5-4050-b95c-545d1c4eca86", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:08:32 compute-0 nova_compute[250018]: 2026-01-20 15:08:32.795 250022 DEBUG nova.network.os_vif_util [None req-4a45862e-c4bb-4693-922a-457930f4f782 467630ad66f84c4ba21657f6e5db7d10 48aeeaa346bb4dfba4dfc598ae41f062 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:24:50:fe,bridge_name='br-int',has_traffic_filtering=True,id=4905566f-51f5-4050-b95c-545d1c4eca86,network=Network(8ca08cef-0ad0-44f7-b2ed-8e7b0a2aa4ef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4905566f-51') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:08:32 compute-0 nova_compute[250018]: 2026-01-20 15:08:32.796 250022 DEBUG os_vif [None req-4a45862e-c4bb-4693-922a-457930f4f782 467630ad66f84c4ba21657f6e5db7d10 48aeeaa346bb4dfba4dfc598ae41f062 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:24:50:fe,bridge_name='br-int',has_traffic_filtering=True,id=4905566f-51f5-4050-b95c-545d1c4eca86,network=Network(8ca08cef-0ad0-44f7-b2ed-8e7b0a2aa4ef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4905566f-51') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 15:08:32 compute-0 nova_compute[250018]: 2026-01-20 15:08:32.797 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:08:32 compute-0 nova_compute[250018]: 2026-01-20 15:08:32.797 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:08:32 compute-0 nova_compute[250018]: 2026-01-20 15:08:32.798 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:08:32 compute-0 nova_compute[250018]: 2026-01-20 15:08:32.802 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:08:32 compute-0 nova_compute[250018]: 2026-01-20 15:08:32.803 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4905566f-51, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:08:32 compute-0 nova_compute[250018]: 2026-01-20 15:08:32.804 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4905566f-51, col_values=(('external_ids', {'iface-id': '4905566f-51f5-4050-b95c-545d1c4eca86', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:24:50:fe', 'vm-uuid': '9f36f762-4fcd-4996-a347-43e8704458c9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:08:32 compute-0 nova_compute[250018]: 2026-01-20 15:08:32.806 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:08:32 compute-0 NetworkManager[48960]: <info>  [1768921712.8079] manager: (tap4905566f-51): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/288)
Jan 20 15:08:32 compute-0 nova_compute[250018]: 2026-01-20 15:08:32.809 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 15:08:32 compute-0 nova_compute[250018]: 2026-01-20 15:08:32.813 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:08:32 compute-0 nova_compute[250018]: 2026-01-20 15:08:32.814 250022 INFO os_vif [None req-4a45862e-c4bb-4693-922a-457930f4f782 467630ad66f84c4ba21657f6e5db7d10 48aeeaa346bb4dfba4dfc598ae41f062 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:24:50:fe,bridge_name='br-int',has_traffic_filtering=True,id=4905566f-51f5-4050-b95c-545d1c4eca86,network=Network(8ca08cef-0ad0-44f7-b2ed-8e7b0a2aa4ef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4905566f-51')
Jan 20 15:08:32 compute-0 nova_compute[250018]: 2026-01-20 15:08:32.869 250022 DEBUG nova.virt.libvirt.driver [None req-4a45862e-c4bb-4693-922a-457930f4f782 467630ad66f84c4ba21657f6e5db7d10 48aeeaa346bb4dfba4dfc598ae41f062 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:08:32 compute-0 nova_compute[250018]: 2026-01-20 15:08:32.869 250022 DEBUG nova.virt.libvirt.driver [None req-4a45862e-c4bb-4693-922a-457930f4f782 467630ad66f84c4ba21657f6e5db7d10 48aeeaa346bb4dfba4dfc598ae41f062 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:08:32 compute-0 nova_compute[250018]: 2026-01-20 15:08:32.870 250022 DEBUG nova.virt.libvirt.driver [None req-4a45862e-c4bb-4693-922a-457930f4f782 467630ad66f84c4ba21657f6e5db7d10 48aeeaa346bb4dfba4dfc598ae41f062 - - default default] No VIF found with MAC fa:16:3e:24:50:fe, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 15:08:32 compute-0 nova_compute[250018]: 2026-01-20 15:08:32.870 250022 INFO nova.virt.libvirt.driver [None req-4a45862e-c4bb-4693-922a-457930f4f782 467630ad66f84c4ba21657f6e5db7d10 48aeeaa346bb4dfba4dfc598ae41f062 - - default default] [instance: 9f36f762-4fcd-4996-a347-43e8704458c9] Using config drive
Jan 20 15:08:32 compute-0 nova_compute[250018]: 2026-01-20 15:08:32.891 250022 DEBUG nova.storage.rbd_utils [None req-4a45862e-c4bb-4693-922a-457930f4f782 467630ad66f84c4ba21657f6e5db7d10 48aeeaa346bb4dfba4dfc598ae41f062 - - default default] rbd image 9f36f762-4fcd-4996-a347-43e8704458c9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:08:33 compute-0 nova_compute[250018]: 2026-01-20 15:08:33.281 250022 INFO nova.virt.libvirt.driver [None req-4a45862e-c4bb-4693-922a-457930f4f782 467630ad66f84c4ba21657f6e5db7d10 48aeeaa346bb4dfba4dfc598ae41f062 - - default default] [instance: 9f36f762-4fcd-4996-a347-43e8704458c9] Creating config drive at /var/lib/nova/instances/9f36f762-4fcd-4996-a347-43e8704458c9/disk.config
Jan 20 15:08:33 compute-0 nova_compute[250018]: 2026-01-20 15:08:33.285 250022 DEBUG oslo_concurrency.processutils [None req-4a45862e-c4bb-4693-922a-457930f4f782 467630ad66f84c4ba21657f6e5db7d10 48aeeaa346bb4dfba4dfc598ae41f062 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9f36f762-4fcd-4996-a347-43e8704458c9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1vm73ilp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:08:33 compute-0 nova_compute[250018]: 2026-01-20 15:08:33.417 250022 DEBUG oslo_concurrency.processutils [None req-4a45862e-c4bb-4693-922a-457930f4f782 467630ad66f84c4ba21657f6e5db7d10 48aeeaa346bb4dfba4dfc598ae41f062 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9f36f762-4fcd-4996-a347-43e8704458c9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1vm73ilp" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:08:33 compute-0 nova_compute[250018]: 2026-01-20 15:08:33.444 250022 DEBUG nova.storage.rbd_utils [None req-4a45862e-c4bb-4693-922a-457930f4f782 467630ad66f84c4ba21657f6e5db7d10 48aeeaa346bb4dfba4dfc598ae41f062 - - default default] rbd image 9f36f762-4fcd-4996-a347-43e8704458c9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:08:33 compute-0 nova_compute[250018]: 2026-01-20 15:08:33.447 250022 DEBUG oslo_concurrency.processutils [None req-4a45862e-c4bb-4693-922a-457930f4f782 467630ad66f84c4ba21657f6e5db7d10 48aeeaa346bb4dfba4dfc598ae41f062 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9f36f762-4fcd-4996-a347-43e8704458c9/disk.config 9f36f762-4fcd-4996-a347-43e8704458c9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:08:33 compute-0 nova_compute[250018]: 2026-01-20 15:08:33.500 250022 DEBUG nova.network.neutron [req-d81a6d11-2d8a-456e-9344-3154f09991ef req-6904e8e8-5398-4ae6-93c3-641d40935e09 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 9f36f762-4fcd-4996-a347-43e8704458c9] Updated VIF entry in instance network info cache for port 4905566f-51f5-4050-b95c-545d1c4eca86. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 15:08:33 compute-0 nova_compute[250018]: 2026-01-20 15:08:33.501 250022 DEBUG nova.network.neutron [req-d81a6d11-2d8a-456e-9344-3154f09991ef req-6904e8e8-5398-4ae6-93c3-641d40935e09 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 9f36f762-4fcd-4996-a347-43e8704458c9] Updating instance_info_cache with network_info: [{"id": "4905566f-51f5-4050-b95c-545d1c4eca86", "address": "fa:16:3e:24:50:fe", "network": {"id": "8ca08cef-0ad0-44f7-b2ed-8e7b0a2aa4ef", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-156652880-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48aeeaa346bb4dfba4dfc598ae41f062", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4905566f-51", "ovs_interfaceid": "4905566f-51f5-4050-b95c-545d1c4eca86", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:08:33 compute-0 nova_compute[250018]: 2026-01-20 15:08:33.521 250022 DEBUG oslo_concurrency.lockutils [req-d81a6d11-2d8a-456e-9344-3154f09991ef req-6904e8e8-5398-4ae6-93c3-641d40935e09 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-9f36f762-4fcd-4996-a347-43e8704458c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:08:33 compute-0 nova_compute[250018]: 2026-01-20 15:08:33.615 250022 DEBUG oslo_concurrency.processutils [None req-4a45862e-c4bb-4693-922a-457930f4f782 467630ad66f84c4ba21657f6e5db7d10 48aeeaa346bb4dfba4dfc598ae41f062 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9f36f762-4fcd-4996-a347-43e8704458c9/disk.config 9f36f762-4fcd-4996-a347-43e8704458c9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.167s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:08:33 compute-0 nova_compute[250018]: 2026-01-20 15:08:33.615 250022 INFO nova.virt.libvirt.driver [None req-4a45862e-c4bb-4693-922a-457930f4f782 467630ad66f84c4ba21657f6e5db7d10 48aeeaa346bb4dfba4dfc598ae41f062 - - default default] [instance: 9f36f762-4fcd-4996-a347-43e8704458c9] Deleting local config drive /var/lib/nova/instances/9f36f762-4fcd-4996-a347-43e8704458c9/disk.config because it was imported into RBD.
Jan 20 15:08:33 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/243867422' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:08:33 compute-0 kernel: tap4905566f-51: entered promiscuous mode
Jan 20 15:08:33 compute-0 NetworkManager[48960]: <info>  [1768921713.6947] manager: (tap4905566f-51): new Tun device (/org/freedesktop/NetworkManager/Devices/289)
Jan 20 15:08:33 compute-0 ovn_controller[148666]: 2026-01-20T15:08:33Z|00598|binding|INFO|Claiming lport 4905566f-51f5-4050-b95c-545d1c4eca86 for this chassis.
Jan 20 15:08:33 compute-0 nova_compute[250018]: 2026-01-20 15:08:33.738 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:08:33 compute-0 ovn_controller[148666]: 2026-01-20T15:08:33Z|00599|binding|INFO|4905566f-51f5-4050-b95c-545d1c4eca86: Claiming fa:16:3e:24:50:fe 10.100.0.3
Jan 20 15:08:33 compute-0 nova_compute[250018]: 2026-01-20 15:08:33.743 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:08:33 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:08:33.761 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:24:50:fe 10.100.0.3'], port_security=['fa:16:3e:24:50:fe 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '9f36f762-4fcd-4996-a347-43e8704458c9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8ca08cef-0ad0-44f7-b2ed-8e7b0a2aa4ef', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '48aeeaa346bb4dfba4dfc598ae41f062', 'neutron:revision_number': '2', 'neutron:security_group_ids': '83634ac6-ccb2-4a9e-943b-5695245061bd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8eb1a11c-202c-4329-9f25-67a4bb6fa2ab, chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=4905566f-51f5-4050-b95c-545d1c4eca86) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:08:33 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:08:33.763 160071 INFO neutron.agent.ovn.metadata.agent [-] Port 4905566f-51f5-4050-b95c-545d1c4eca86 in datapath 8ca08cef-0ad0-44f7-b2ed-8e7b0a2aa4ef bound to our chassis
Jan 20 15:08:33 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:08:33.765 160071 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8ca08cef-0ad0-44f7-b2ed-8e7b0a2aa4ef
Jan 20 15:08:33 compute-0 systemd-udevd[349648]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 15:08:33 compute-0 systemd-machined[216401]: New machine qemu-75-instance-000000a9.
Jan 20 15:08:33 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:08:33.778 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[40cead76-66fd-408d-a633-46d375d80341]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:08:33 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:08:33.779 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap8ca08cef-01 in ovnmeta-8ca08cef-0ad0-44f7-b2ed-8e7b0a2aa4ef namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 15:08:33 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:08:33.781 257604 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap8ca08cef-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 15:08:33 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:08:33.781 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[f1452774-7aef-43a9-83bc-96b95157c2b5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:08:33 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:08:33.782 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[2442f9b7-58e9-4221-8abc-3a4e4d918867]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:08:33 compute-0 NetworkManager[48960]: <info>  [1768921713.7902] device (tap4905566f-51): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 15:08:33 compute-0 NetworkManager[48960]: <info>  [1768921713.7913] device (tap4905566f-51): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 15:08:33 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:08:33.793 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[7ae41966-15a8-46be-af51-cf3fd46eb2ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:08:33 compute-0 systemd[1]: Started Virtual Machine qemu-75-instance-000000a9.
Jan 20 15:08:33 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:08:33.818 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[627beee5-f645-4470-a91e-3e374b9cc24f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:08:33 compute-0 ovn_controller[148666]: 2026-01-20T15:08:33Z|00600|binding|INFO|Setting lport 4905566f-51f5-4050-b95c-545d1c4eca86 ovn-installed in OVS
Jan 20 15:08:33 compute-0 ovn_controller[148666]: 2026-01-20T15:08:33Z|00601|binding|INFO|Setting lport 4905566f-51f5-4050-b95c-545d1c4eca86 up in Southbound
Jan 20 15:08:33 compute-0 nova_compute[250018]: 2026-01-20 15:08:33.824 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:08:33 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:08:33.845 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[a4071094-54e6-4a72-aa83-2b4a858a5a71]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:08:33 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:08:33.849 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[276deb7c-25ef-4beb-a25f-33e9e6bb63f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:08:33 compute-0 NetworkManager[48960]: <info>  [1768921713.8507] manager: (tap8ca08cef-00): new Veth device (/org/freedesktop/NetworkManager/Devices/290)
Jan 20 15:08:33 compute-0 systemd-udevd[349652]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 15:08:33 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:08:33 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:08:33 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:08:33.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:08:33 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:08:33.880 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[77c226f8-db7d-4f9d-b9a5-979a6facb02b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:08:33 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:08:33.883 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[97465168-b0a8-4647-a9e8-3ee5b20f6061]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:08:33 compute-0 NetworkManager[48960]: <info>  [1768921713.9050] device (tap8ca08cef-00): carrier: link connected
Jan 20 15:08:33 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:08:33.908 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[fe923816-5b43-4247-8385-34901ed8925c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:08:33 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:08:33.923 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[c4430fc5-d93e-4d78-8636-b7536f764b27]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8ca08cef-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:be:6f:20'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 193], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 765462, 'reachable_time': 26800, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 349681, 'error': None, 'target': 'ovnmeta-8ca08cef-0ad0-44f7-b2ed-8e7b0a2aa4ef', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:08:33 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:08:33.940 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[6c6a00cc-3751-4857-a103-937137d8c9be]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:febe:6f20'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 765462, 'tstamp': 765462}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 349682, 'error': None, 'target': 'ovnmeta-8ca08cef-0ad0-44f7-b2ed-8e7b0a2aa4ef', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:08:33 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:08:33.956 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[81c5da16-d459-405e-84a5-6ff09c95948c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8ca08cef-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:be:6f:20'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 193], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 765462, 'reachable_time': 26800, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 349683, 'error': None, 'target': 'ovnmeta-8ca08cef-0ad0-44f7-b2ed-8e7b0a2aa4ef', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:08:34 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:08:33.999 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[0f700e22-d472-42f0-9d9a-96f79c387c4e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:08:34 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:08:34.065 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[12432e80-e848-4198-a38d-4d62cd27f170]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:08:34 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:08:34.066 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8ca08cef-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:08:34 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:08:34.066 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:08:34 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:08:34.066 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8ca08cef-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:08:34 compute-0 nova_compute[250018]: 2026-01-20 15:08:34.068 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:08:34 compute-0 kernel: tap8ca08cef-00: entered promiscuous mode
Jan 20 15:08:34 compute-0 NetworkManager[48960]: <info>  [1768921714.0691] manager: (tap8ca08cef-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/291)
Jan 20 15:08:34 compute-0 nova_compute[250018]: 2026-01-20 15:08:34.070 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:08:34 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:08:34.076 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8ca08cef-00, col_values=(('external_ids', {'iface-id': '7e4aa652-26e0-4920-84fc-daed942c5e74'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:08:34 compute-0 ovn_controller[148666]: 2026-01-20T15:08:34Z|00602|binding|INFO|Releasing lport 7e4aa652-26e0-4920-84fc-daed942c5e74 from this chassis (sb_readonly=0)
Jan 20 15:08:34 compute-0 nova_compute[250018]: 2026-01-20 15:08:34.077 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:08:34 compute-0 nova_compute[250018]: 2026-01-20 15:08:34.091 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:08:34 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:08:34.091 160071 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/8ca08cef-0ad0-44f7-b2ed-8e7b0a2aa4ef.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/8ca08cef-0ad0-44f7-b2ed-8e7b0a2aa4ef.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 15:08:34 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:08:34.092 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[5ec4725a-12d8-4f27-a3f6-13f99c420d5b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:08:34 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:08:34.092 160071 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 15:08:34 compute-0 ovn_metadata_agent[160049]: global
Jan 20 15:08:34 compute-0 ovn_metadata_agent[160049]:     log         /dev/log local0 debug
Jan 20 15:08:34 compute-0 ovn_metadata_agent[160049]:     log-tag     haproxy-metadata-proxy-8ca08cef-0ad0-44f7-b2ed-8e7b0a2aa4ef
Jan 20 15:08:34 compute-0 ovn_metadata_agent[160049]:     user        root
Jan 20 15:08:34 compute-0 ovn_metadata_agent[160049]:     group       root
Jan 20 15:08:34 compute-0 ovn_metadata_agent[160049]:     maxconn     1024
Jan 20 15:08:34 compute-0 ovn_metadata_agent[160049]:     pidfile     /var/lib/neutron/external/pids/8ca08cef-0ad0-44f7-b2ed-8e7b0a2aa4ef.pid.haproxy
Jan 20 15:08:34 compute-0 ovn_metadata_agent[160049]:     daemon
Jan 20 15:08:34 compute-0 ovn_metadata_agent[160049]: 
Jan 20 15:08:34 compute-0 ovn_metadata_agent[160049]: defaults
Jan 20 15:08:34 compute-0 ovn_metadata_agent[160049]:     log global
Jan 20 15:08:34 compute-0 ovn_metadata_agent[160049]:     mode http
Jan 20 15:08:34 compute-0 ovn_metadata_agent[160049]:     option httplog
Jan 20 15:08:34 compute-0 ovn_metadata_agent[160049]:     option dontlognull
Jan 20 15:08:34 compute-0 ovn_metadata_agent[160049]:     option http-server-close
Jan 20 15:08:34 compute-0 ovn_metadata_agent[160049]:     option forwardfor
Jan 20 15:08:34 compute-0 ovn_metadata_agent[160049]:     retries                 3
Jan 20 15:08:34 compute-0 ovn_metadata_agent[160049]:     timeout http-request    30s
Jan 20 15:08:34 compute-0 ovn_metadata_agent[160049]:     timeout connect         30s
Jan 20 15:08:34 compute-0 ovn_metadata_agent[160049]:     timeout client          32s
Jan 20 15:08:34 compute-0 ovn_metadata_agent[160049]:     timeout server          32s
Jan 20 15:08:34 compute-0 ovn_metadata_agent[160049]:     timeout http-keep-alive 30s
Jan 20 15:08:34 compute-0 ovn_metadata_agent[160049]: 
Jan 20 15:08:34 compute-0 ovn_metadata_agent[160049]: 
Jan 20 15:08:34 compute-0 ovn_metadata_agent[160049]: listen listener
Jan 20 15:08:34 compute-0 ovn_metadata_agent[160049]:     bind 169.254.169.254:80
Jan 20 15:08:34 compute-0 ovn_metadata_agent[160049]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 15:08:34 compute-0 ovn_metadata_agent[160049]:     http-request add-header X-OVN-Network-ID 8ca08cef-0ad0-44f7-b2ed-8e7b0a2aa4ef
Jan 20 15:08:34 compute-0 ovn_metadata_agent[160049]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 15:08:34 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:08:34.093 160071 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-8ca08cef-0ad0-44f7-b2ed-8e7b0a2aa4ef', 'env', 'PROCESS_TAG=haproxy-8ca08cef-0ad0-44f7-b2ed-8e7b0a2aa4ef', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/8ca08cef-0ad0-44f7-b2ed-8e7b0a2aa4ef.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 15:08:34 compute-0 nova_compute[250018]: 2026-01-20 15:08:34.269 250022 DEBUG nova.compute.manager [req-612c018c-6168-49df-87c5-2d0e484b9295 req-32be7b2f-81f9-44be-8033-9e85356b582a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 9f36f762-4fcd-4996-a347-43e8704458c9] Received event network-vif-plugged-4905566f-51f5-4050-b95c-545d1c4eca86 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:08:34 compute-0 nova_compute[250018]: 2026-01-20 15:08:34.269 250022 DEBUG oslo_concurrency.lockutils [req-612c018c-6168-49df-87c5-2d0e484b9295 req-32be7b2f-81f9-44be-8033-9e85356b582a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "9f36f762-4fcd-4996-a347-43e8704458c9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:08:34 compute-0 nova_compute[250018]: 2026-01-20 15:08:34.275 250022 DEBUG oslo_concurrency.lockutils [req-612c018c-6168-49df-87c5-2d0e484b9295 req-32be7b2f-81f9-44be-8033-9e85356b582a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "9f36f762-4fcd-4996-a347-43e8704458c9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.005s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:08:34 compute-0 nova_compute[250018]: 2026-01-20 15:08:34.275 250022 DEBUG oslo_concurrency.lockutils [req-612c018c-6168-49df-87c5-2d0e484b9295 req-32be7b2f-81f9-44be-8033-9e85356b582a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "9f36f762-4fcd-4996-a347-43e8704458c9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:08:34 compute-0 nova_compute[250018]: 2026-01-20 15:08:34.275 250022 DEBUG nova.compute.manager [req-612c018c-6168-49df-87c5-2d0e484b9295 req-32be7b2f-81f9-44be-8033-9e85356b582a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 9f36f762-4fcd-4996-a347-43e8704458c9] Processing event network-vif-plugged-4905566f-51f5-4050-b95c-545d1c4eca86 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 20 15:08:34 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2551: 321 pgs: 321 active+clean; 485 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 4.8 MiB/s wr, 157 op/s
Jan 20 15:08:34 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:08:34 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:08:34 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:08:34.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:08:34 compute-0 podman[349714]: 2026-01-20 15:08:34.450478091 +0000 UTC m=+0.050355760 container create 81092114eed80c3b729a20ee2746f03cb62ebb910fb95ad245700b24612d76aa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8ca08cef-0ad0-44f7-b2ed-8e7b0a2aa4ef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 20 15:08:34 compute-0 systemd[1]: Started libpod-conmon-81092114eed80c3b729a20ee2746f03cb62ebb910fb95ad245700b24612d76aa.scope.
Jan 20 15:08:34 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:08:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e69f2d7e13cf1e5e64998d07634dc821e111079fb88622e8d9d34240a491916/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 15:08:34 compute-0 podman[349714]: 2026-01-20 15:08:34.520322785 +0000 UTC m=+0.120200444 container init 81092114eed80c3b729a20ee2746f03cb62ebb910fb95ad245700b24612d76aa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8ca08cef-0ad0-44f7-b2ed-8e7b0a2aa4ef, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 20 15:08:34 compute-0 podman[349714]: 2026-01-20 15:08:34.427865021 +0000 UTC m=+0.027742720 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 15:08:34 compute-0 podman[349714]: 2026-01-20 15:08:34.526209553 +0000 UTC m=+0.126087222 container start 81092114eed80c3b729a20ee2746f03cb62ebb910fb95ad245700b24612d76aa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8ca08cef-0ad0-44f7-b2ed-8e7b0a2aa4ef, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:08:34 compute-0 neutron-haproxy-ovnmeta-8ca08cef-0ad0-44f7-b2ed-8e7b0a2aa4ef[349730]: [NOTICE]   (349734) : New worker (349736) forked
Jan 20 15:08:34 compute-0 neutron-haproxy-ovnmeta-8ca08cef-0ad0-44f7-b2ed-8e7b0a2aa4ef[349730]: [NOTICE]   (349734) : Loading success.
Jan 20 15:08:34 compute-0 nova_compute[250018]: 2026-01-20 15:08:34.771 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:08:34 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/982314999' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:08:34 compute-0 ceph-mon[74360]: pgmap v2551: 321 pgs: 321 active+clean; 485 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 4.8 MiB/s wr, 157 op/s
Jan 20 15:08:35 compute-0 nova_compute[250018]: 2026-01-20 15:08:35.108 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768921715.1081784, 9f36f762-4fcd-4996-a347-43e8704458c9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:08:35 compute-0 nova_compute[250018]: 2026-01-20 15:08:35.109 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 9f36f762-4fcd-4996-a347-43e8704458c9] VM Started (Lifecycle Event)
Jan 20 15:08:35 compute-0 nova_compute[250018]: 2026-01-20 15:08:35.111 250022 DEBUG nova.compute.manager [None req-4a45862e-c4bb-4693-922a-457930f4f782 467630ad66f84c4ba21657f6e5db7d10 48aeeaa346bb4dfba4dfc598ae41f062 - - default default] [instance: 9f36f762-4fcd-4996-a347-43e8704458c9] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 15:08:35 compute-0 nova_compute[250018]: 2026-01-20 15:08:35.114 250022 DEBUG nova.virt.libvirt.driver [None req-4a45862e-c4bb-4693-922a-457930f4f782 467630ad66f84c4ba21657f6e5db7d10 48aeeaa346bb4dfba4dfc598ae41f062 - - default default] [instance: 9f36f762-4fcd-4996-a347-43e8704458c9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 15:08:35 compute-0 nova_compute[250018]: 2026-01-20 15:08:35.117 250022 INFO nova.virt.libvirt.driver [-] [instance: 9f36f762-4fcd-4996-a347-43e8704458c9] Instance spawned successfully.
Jan 20 15:08:35 compute-0 nova_compute[250018]: 2026-01-20 15:08:35.118 250022 DEBUG nova.virt.libvirt.driver [None req-4a45862e-c4bb-4693-922a-457930f4f782 467630ad66f84c4ba21657f6e5db7d10 48aeeaa346bb4dfba4dfc598ae41f062 - - default default] [instance: 9f36f762-4fcd-4996-a347-43e8704458c9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 15:08:35 compute-0 nova_compute[250018]: 2026-01-20 15:08:35.133 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 9f36f762-4fcd-4996-a347-43e8704458c9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:08:35 compute-0 nova_compute[250018]: 2026-01-20 15:08:35.137 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 9f36f762-4fcd-4996-a347-43e8704458c9] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:08:35 compute-0 nova_compute[250018]: 2026-01-20 15:08:35.144 250022 DEBUG nova.virt.libvirt.driver [None req-4a45862e-c4bb-4693-922a-457930f4f782 467630ad66f84c4ba21657f6e5db7d10 48aeeaa346bb4dfba4dfc598ae41f062 - - default default] [instance: 9f36f762-4fcd-4996-a347-43e8704458c9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:08:35 compute-0 nova_compute[250018]: 2026-01-20 15:08:35.145 250022 DEBUG nova.virt.libvirt.driver [None req-4a45862e-c4bb-4693-922a-457930f4f782 467630ad66f84c4ba21657f6e5db7d10 48aeeaa346bb4dfba4dfc598ae41f062 - - default default] [instance: 9f36f762-4fcd-4996-a347-43e8704458c9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:08:35 compute-0 nova_compute[250018]: 2026-01-20 15:08:35.145 250022 DEBUG nova.virt.libvirt.driver [None req-4a45862e-c4bb-4693-922a-457930f4f782 467630ad66f84c4ba21657f6e5db7d10 48aeeaa346bb4dfba4dfc598ae41f062 - - default default] [instance: 9f36f762-4fcd-4996-a347-43e8704458c9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:08:35 compute-0 nova_compute[250018]: 2026-01-20 15:08:35.146 250022 DEBUG nova.virt.libvirt.driver [None req-4a45862e-c4bb-4693-922a-457930f4f782 467630ad66f84c4ba21657f6e5db7d10 48aeeaa346bb4dfba4dfc598ae41f062 - - default default] [instance: 9f36f762-4fcd-4996-a347-43e8704458c9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:08:35 compute-0 nova_compute[250018]: 2026-01-20 15:08:35.146 250022 DEBUG nova.virt.libvirt.driver [None req-4a45862e-c4bb-4693-922a-457930f4f782 467630ad66f84c4ba21657f6e5db7d10 48aeeaa346bb4dfba4dfc598ae41f062 - - default default] [instance: 9f36f762-4fcd-4996-a347-43e8704458c9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:08:35 compute-0 nova_compute[250018]: 2026-01-20 15:08:35.146 250022 DEBUG nova.virt.libvirt.driver [None req-4a45862e-c4bb-4693-922a-457930f4f782 467630ad66f84c4ba21657f6e5db7d10 48aeeaa346bb4dfba4dfc598ae41f062 - - default default] [instance: 9f36f762-4fcd-4996-a347-43e8704458c9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:08:35 compute-0 nova_compute[250018]: 2026-01-20 15:08:35.157 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 9f36f762-4fcd-4996-a347-43e8704458c9] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 15:08:35 compute-0 nova_compute[250018]: 2026-01-20 15:08:35.157 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768921715.1082838, 9f36f762-4fcd-4996-a347-43e8704458c9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:08:35 compute-0 nova_compute[250018]: 2026-01-20 15:08:35.158 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 9f36f762-4fcd-4996-a347-43e8704458c9] VM Paused (Lifecycle Event)
Jan 20 15:08:35 compute-0 nova_compute[250018]: 2026-01-20 15:08:35.186 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 9f36f762-4fcd-4996-a347-43e8704458c9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:08:35 compute-0 nova_compute[250018]: 2026-01-20 15:08:35.189 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768921715.1139598, 9f36f762-4fcd-4996-a347-43e8704458c9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:08:35 compute-0 nova_compute[250018]: 2026-01-20 15:08:35.189 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 9f36f762-4fcd-4996-a347-43e8704458c9] VM Resumed (Lifecycle Event)
Jan 20 15:08:35 compute-0 nova_compute[250018]: 2026-01-20 15:08:35.210 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 9f36f762-4fcd-4996-a347-43e8704458c9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:08:35 compute-0 nova_compute[250018]: 2026-01-20 15:08:35.213 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 9f36f762-4fcd-4996-a347-43e8704458c9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:08:35 compute-0 nova_compute[250018]: 2026-01-20 15:08:35.218 250022 INFO nova.compute.manager [None req-4a45862e-c4bb-4693-922a-457930f4f782 467630ad66f84c4ba21657f6e5db7d10 48aeeaa346bb4dfba4dfc598ae41f062 - - default default] [instance: 9f36f762-4fcd-4996-a347-43e8704458c9] Took 5.81 seconds to spawn the instance on the hypervisor.
Jan 20 15:08:35 compute-0 nova_compute[250018]: 2026-01-20 15:08:35.219 250022 DEBUG nova.compute.manager [None req-4a45862e-c4bb-4693-922a-457930f4f782 467630ad66f84c4ba21657f6e5db7d10 48aeeaa346bb4dfba4dfc598ae41f062 - - default default] [instance: 9f36f762-4fcd-4996-a347-43e8704458c9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:08:35 compute-0 nova_compute[250018]: 2026-01-20 15:08:35.258 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 9f36f762-4fcd-4996-a347-43e8704458c9] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 15:08:35 compute-0 nova_compute[250018]: 2026-01-20 15:08:35.286 250022 INFO nova.compute.manager [None req-4a45862e-c4bb-4693-922a-457930f4f782 467630ad66f84c4ba21657f6e5db7d10 48aeeaa346bb4dfba4dfc598ae41f062 - - default default] [instance: 9f36f762-4fcd-4996-a347-43e8704458c9] Took 8.29 seconds to build instance.
Jan 20 15:08:35 compute-0 nova_compute[250018]: 2026-01-20 15:08:35.311 250022 DEBUG oslo_concurrency.lockutils [None req-4a45862e-c4bb-4693-922a-457930f4f782 467630ad66f84c4ba21657f6e5db7d10 48aeeaa346bb4dfba4dfc598ae41f062 - - default default] Lock "9f36f762-4fcd-4996-a347-43e8704458c9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.387s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:08:35 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:08:35 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:08:35 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:08:35.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:08:35 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e377 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:08:36 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2552: 321 pgs: 321 active+clean; 485 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 3.0 MiB/s wr, 137 op/s
Jan 20 15:08:36 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:08:36 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:08:36 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:08:36.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:08:36 compute-0 nova_compute[250018]: 2026-01-20 15:08:36.431 250022 DEBUG nova.compute.manager [req-830a966c-4b88-4d8d-bd25-d2992cb53624 req-5f594d33-4222-4bae-8406-c54b9cb6db64 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 9f36f762-4fcd-4996-a347-43e8704458c9] Received event network-vif-plugged-4905566f-51f5-4050-b95c-545d1c4eca86 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:08:36 compute-0 nova_compute[250018]: 2026-01-20 15:08:36.432 250022 DEBUG oslo_concurrency.lockutils [req-830a966c-4b88-4d8d-bd25-d2992cb53624 req-5f594d33-4222-4bae-8406-c54b9cb6db64 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "9f36f762-4fcd-4996-a347-43e8704458c9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:08:36 compute-0 nova_compute[250018]: 2026-01-20 15:08:36.432 250022 DEBUG oslo_concurrency.lockutils [req-830a966c-4b88-4d8d-bd25-d2992cb53624 req-5f594d33-4222-4bae-8406-c54b9cb6db64 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "9f36f762-4fcd-4996-a347-43e8704458c9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:08:36 compute-0 nova_compute[250018]: 2026-01-20 15:08:36.432 250022 DEBUG oslo_concurrency.lockutils [req-830a966c-4b88-4d8d-bd25-d2992cb53624 req-5f594d33-4222-4bae-8406-c54b9cb6db64 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "9f36f762-4fcd-4996-a347-43e8704458c9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:08:36 compute-0 nova_compute[250018]: 2026-01-20 15:08:36.432 250022 DEBUG nova.compute.manager [req-830a966c-4b88-4d8d-bd25-d2992cb53624 req-5f594d33-4222-4bae-8406-c54b9cb6db64 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 9f36f762-4fcd-4996-a347-43e8704458c9] No waiting events found dispatching network-vif-plugged-4905566f-51f5-4050-b95c-545d1c4eca86 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:08:36 compute-0 nova_compute[250018]: 2026-01-20 15:08:36.433 250022 WARNING nova.compute.manager [req-830a966c-4b88-4d8d-bd25-d2992cb53624 req-5f594d33-4222-4bae-8406-c54b9cb6db64 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 9f36f762-4fcd-4996-a347-43e8704458c9] Received unexpected event network-vif-plugged-4905566f-51f5-4050-b95c-545d1c4eca86 for instance with vm_state active and task_state None.
Jan 20 15:08:37 compute-0 ceph-mon[74360]: pgmap v2552: 321 pgs: 321 active+clean; 485 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 3.0 MiB/s wr, 137 op/s
Jan 20 15:08:37 compute-0 nova_compute[250018]: 2026-01-20 15:08:37.807 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:08:37 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:08:37 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:08:37 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:08:37.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:08:38 compute-0 NetworkManager[48960]: <info>  [1768921718.3261] manager: (patch-br-int-to-provnet-b62c391b-f7a3-4a38-a0df-72ac0383ca74): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/292)
Jan 20 15:08:38 compute-0 NetworkManager[48960]: <info>  [1768921718.3269] manager: (patch-provnet-b62c391b-f7a3-4a38-a0df-72ac0383ca74-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/293)
Jan 20 15:08:38 compute-0 nova_compute[250018]: 2026-01-20 15:08:38.327 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:08:38 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2553: 321 pgs: 321 active+clean; 485 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 3.0 MiB/s wr, 137 op/s
Jan 20 15:08:38 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:08:38 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:08:38 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:08:38.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:08:38 compute-0 sudo[349789]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:08:38 compute-0 sudo[349789]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:08:38 compute-0 sudo[349789]: pam_unix(sudo:session): session closed for user root
Jan 20 15:08:38 compute-0 sudo[349814]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:08:38 compute-0 sudo[349814]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:08:38 compute-0 sudo[349814]: pam_unix(sudo:session): session closed for user root
Jan 20 15:08:38 compute-0 nova_compute[250018]: 2026-01-20 15:08:38.561 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:08:38 compute-0 ovn_controller[148666]: 2026-01-20T15:08:38Z|00603|binding|INFO|Releasing lport 7e4aa652-26e0-4920-84fc-daed942c5e74 from this chassis (sb_readonly=0)
Jan 20 15:08:38 compute-0 nova_compute[250018]: 2026-01-20 15:08:38.599 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:08:38 compute-0 ceph-mon[74360]: pgmap v2553: 321 pgs: 321 active+clean; 485 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 3.0 MiB/s wr, 137 op/s
Jan 20 15:08:39 compute-0 nova_compute[250018]: 2026-01-20 15:08:39.389 250022 DEBUG nova.compute.manager [req-d57fd0a1-f303-489e-a713-1436bad70fbf req-d80a4bf7-d108-46c4-8260-d09b55b56d4b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 9f36f762-4fcd-4996-a347-43e8704458c9] Received event network-changed-4905566f-51f5-4050-b95c-545d1c4eca86 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:08:39 compute-0 nova_compute[250018]: 2026-01-20 15:08:39.389 250022 DEBUG nova.compute.manager [req-d57fd0a1-f303-489e-a713-1436bad70fbf req-d80a4bf7-d108-46c4-8260-d09b55b56d4b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 9f36f762-4fcd-4996-a347-43e8704458c9] Refreshing instance network info cache due to event network-changed-4905566f-51f5-4050-b95c-545d1c4eca86. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 15:08:39 compute-0 nova_compute[250018]: 2026-01-20 15:08:39.390 250022 DEBUG oslo_concurrency.lockutils [req-d57fd0a1-f303-489e-a713-1436bad70fbf req-d80a4bf7-d108-46c4-8260-d09b55b56d4b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-9f36f762-4fcd-4996-a347-43e8704458c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:08:39 compute-0 nova_compute[250018]: 2026-01-20 15:08:39.390 250022 DEBUG oslo_concurrency.lockutils [req-d57fd0a1-f303-489e-a713-1436bad70fbf req-d80a4bf7-d108-46c4-8260-d09b55b56d4b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-9f36f762-4fcd-4996-a347-43e8704458c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:08:39 compute-0 nova_compute[250018]: 2026-01-20 15:08:39.390 250022 DEBUG nova.network.neutron [req-d57fd0a1-f303-489e-a713-1436bad70fbf req-d80a4bf7-d108-46c4-8260-d09b55b56d4b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 9f36f762-4fcd-4996-a347-43e8704458c9] Refreshing network info cache for port 4905566f-51f5-4050-b95c-545d1c4eca86 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 15:08:39 compute-0 nova_compute[250018]: 2026-01-20 15:08:39.683 250022 DEBUG nova.compute.manager [req-2d0e4bf6-4fa7-4c7a-b2b8-3e93eea725fd req-bfb96ef7-2e37-4a6e-9ca6-1f8639abd0a4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 9f36f762-4fcd-4996-a347-43e8704458c9] Received event network-changed-4905566f-51f5-4050-b95c-545d1c4eca86 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:08:39 compute-0 nova_compute[250018]: 2026-01-20 15:08:39.684 250022 DEBUG nova.compute.manager [req-2d0e4bf6-4fa7-4c7a-b2b8-3e93eea725fd req-bfb96ef7-2e37-4a6e-9ca6-1f8639abd0a4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 9f36f762-4fcd-4996-a347-43e8704458c9] Refreshing instance network info cache due to event network-changed-4905566f-51f5-4050-b95c-545d1c4eca86. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 15:08:39 compute-0 nova_compute[250018]: 2026-01-20 15:08:39.684 250022 DEBUG oslo_concurrency.lockutils [req-2d0e4bf6-4fa7-4c7a-b2b8-3e93eea725fd req-bfb96ef7-2e37-4a6e-9ca6-1f8639abd0a4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-9f36f762-4fcd-4996-a347-43e8704458c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:08:39 compute-0 nova_compute[250018]: 2026-01-20 15:08:39.773 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:08:39 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:08:39 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:08:39 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:08:39.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:08:40 compute-0 nova_compute[250018]: 2026-01-20 15:08:40.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:08:40 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/571252648' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:08:40 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/496706624' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:08:40 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/2722338201' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:08:40 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2554: 321 pgs: 321 active+clean; 553 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 5.2 MiB/s wr, 184 op/s
Jan 20 15:08:40 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:08:40 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:08:40 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:08:40.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:08:40 compute-0 nova_compute[250018]: 2026-01-20 15:08:40.791 250022 DEBUG nova.network.neutron [req-d57fd0a1-f303-489e-a713-1436bad70fbf req-d80a4bf7-d108-46c4-8260-d09b55b56d4b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 9f36f762-4fcd-4996-a347-43e8704458c9] Updated VIF entry in instance network info cache for port 4905566f-51f5-4050-b95c-545d1c4eca86. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 15:08:40 compute-0 nova_compute[250018]: 2026-01-20 15:08:40.791 250022 DEBUG nova.network.neutron [req-d57fd0a1-f303-489e-a713-1436bad70fbf req-d80a4bf7-d108-46c4-8260-d09b55b56d4b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 9f36f762-4fcd-4996-a347-43e8704458c9] Updating instance_info_cache with network_info: [{"id": "4905566f-51f5-4050-b95c-545d1c4eca86", "address": "fa:16:3e:24:50:fe", "network": {"id": "8ca08cef-0ad0-44f7-b2ed-8e7b0a2aa4ef", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-156652880-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48aeeaa346bb4dfba4dfc598ae41f062", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4905566f-51", "ovs_interfaceid": "4905566f-51f5-4050-b95c-545d1c4eca86", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:08:40 compute-0 nova_compute[250018]: 2026-01-20 15:08:40.818 250022 DEBUG oslo_concurrency.lockutils [req-d57fd0a1-f303-489e-a713-1436bad70fbf req-d80a4bf7-d108-46c4-8260-d09b55b56d4b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-9f36f762-4fcd-4996-a347-43e8704458c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:08:40 compute-0 nova_compute[250018]: 2026-01-20 15:08:40.819 250022 DEBUG oslo_concurrency.lockutils [req-2d0e4bf6-4fa7-4c7a-b2b8-3e93eea725fd req-bfb96ef7-2e37-4a6e-9ca6-1f8639abd0a4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-9f36f762-4fcd-4996-a347-43e8704458c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:08:40 compute-0 nova_compute[250018]: 2026-01-20 15:08:40.819 250022 DEBUG nova.network.neutron [req-2d0e4bf6-4fa7-4c7a-b2b8-3e93eea725fd req-bfb96ef7-2e37-4a6e-9ca6-1f8639abd0a4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 9f36f762-4fcd-4996-a347-43e8704458c9] Refreshing network info cache for port 4905566f-51f5-4050-b95c-545d1c4eca86 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 15:08:40 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e377 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:08:41 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/2140542339' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:08:41 compute-0 ceph-mon[74360]: pgmap v2554: 321 pgs: 321 active+clean; 553 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 5.2 MiB/s wr, 184 op/s
Jan 20 15:08:41 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/3093963373' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:08:41 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:08:41 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:08:41 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:08:41.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:08:41 compute-0 nova_compute[250018]: 2026-01-20 15:08:41.890 250022 DEBUG nova.compute.manager [req-2ce67267-2992-4385-bbef-b175e6ff0617 req-cfce092d-3f21-49cc-981b-e0f995905588 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 9f36f762-4fcd-4996-a347-43e8704458c9] Received event network-changed-4905566f-51f5-4050-b95c-545d1c4eca86 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:08:41 compute-0 nova_compute[250018]: 2026-01-20 15:08:41.891 250022 DEBUG nova.compute.manager [req-2ce67267-2992-4385-bbef-b175e6ff0617 req-cfce092d-3f21-49cc-981b-e0f995905588 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 9f36f762-4fcd-4996-a347-43e8704458c9] Refreshing instance network info cache due to event network-changed-4905566f-51f5-4050-b95c-545d1c4eca86. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 15:08:41 compute-0 nova_compute[250018]: 2026-01-20 15:08:41.891 250022 DEBUG oslo_concurrency.lockutils [req-2ce67267-2992-4385-bbef-b175e6ff0617 req-cfce092d-3f21-49cc-981b-e0f995905588 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-9f36f762-4fcd-4996-a347-43e8704458c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:08:42 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2555: 321 pgs: 321 active+clean; 570 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.3 MiB/s rd, 5.2 MiB/s wr, 217 op/s
Jan 20 15:08:42 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:08:42 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:08:42 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:08:42.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:08:42 compute-0 nova_compute[250018]: 2026-01-20 15:08:42.811 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:08:42 compute-0 nova_compute[250018]: 2026-01-20 15:08:42.877 250022 DEBUG nova.network.neutron [req-2d0e4bf6-4fa7-4c7a-b2b8-3e93eea725fd req-bfb96ef7-2e37-4a6e-9ca6-1f8639abd0a4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 9f36f762-4fcd-4996-a347-43e8704458c9] Updated VIF entry in instance network info cache for port 4905566f-51f5-4050-b95c-545d1c4eca86. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 15:08:42 compute-0 nova_compute[250018]: 2026-01-20 15:08:42.878 250022 DEBUG nova.network.neutron [req-2d0e4bf6-4fa7-4c7a-b2b8-3e93eea725fd req-bfb96ef7-2e37-4a6e-9ca6-1f8639abd0a4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 9f36f762-4fcd-4996-a347-43e8704458c9] Updating instance_info_cache with network_info: [{"id": "4905566f-51f5-4050-b95c-545d1c4eca86", "address": "fa:16:3e:24:50:fe", "network": {"id": "8ca08cef-0ad0-44f7-b2ed-8e7b0a2aa4ef", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-156652880-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48aeeaa346bb4dfba4dfc598ae41f062", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4905566f-51", "ovs_interfaceid": "4905566f-51f5-4050-b95c-545d1c4eca86", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:08:42 compute-0 nova_compute[250018]: 2026-01-20 15:08:42.900 250022 DEBUG oslo_concurrency.lockutils [req-2d0e4bf6-4fa7-4c7a-b2b8-3e93eea725fd req-bfb96ef7-2e37-4a6e-9ca6-1f8639abd0a4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-9f36f762-4fcd-4996-a347-43e8704458c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:08:42 compute-0 nova_compute[250018]: 2026-01-20 15:08:42.900 250022 DEBUG oslo_concurrency.lockutils [req-2ce67267-2992-4385-bbef-b175e6ff0617 req-cfce092d-3f21-49cc-981b-e0f995905588 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-9f36f762-4fcd-4996-a347-43e8704458c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:08:42 compute-0 nova_compute[250018]: 2026-01-20 15:08:42.901 250022 DEBUG nova.network.neutron [req-2ce67267-2992-4385-bbef-b175e6ff0617 req-cfce092d-3f21-49cc-981b-e0f995905588 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 9f36f762-4fcd-4996-a347-43e8704458c9] Refreshing network info cache for port 4905566f-51f5-4050-b95c-545d1c4eca86 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 15:08:43 compute-0 ceph-mon[74360]: pgmap v2555: 321 pgs: 321 active+clean; 570 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.3 MiB/s rd, 5.2 MiB/s wr, 217 op/s
Jan 20 15:08:43 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:08:43 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:08:43 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:08:43.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:08:44 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2556: 321 pgs: 321 active+clean; 577 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.6 MiB/s rd, 4.2 MiB/s wr, 227 op/s
Jan 20 15:08:44 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:08:44 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:08:44 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:08:44.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:08:44 compute-0 nova_compute[250018]: 2026-01-20 15:08:44.656 250022 DEBUG nova.network.neutron [req-2ce67267-2992-4385-bbef-b175e6ff0617 req-cfce092d-3f21-49cc-981b-e0f995905588 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 9f36f762-4fcd-4996-a347-43e8704458c9] Updated VIF entry in instance network info cache for port 4905566f-51f5-4050-b95c-545d1c4eca86. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 15:08:44 compute-0 nova_compute[250018]: 2026-01-20 15:08:44.656 250022 DEBUG nova.network.neutron [req-2ce67267-2992-4385-bbef-b175e6ff0617 req-cfce092d-3f21-49cc-981b-e0f995905588 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 9f36f762-4fcd-4996-a347-43e8704458c9] Updating instance_info_cache with network_info: [{"id": "4905566f-51f5-4050-b95c-545d1c4eca86", "address": "fa:16:3e:24:50:fe", "network": {"id": "8ca08cef-0ad0-44f7-b2ed-8e7b0a2aa4ef", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-156652880-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48aeeaa346bb4dfba4dfc598ae41f062", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4905566f-51", "ovs_interfaceid": "4905566f-51f5-4050-b95c-545d1c4eca86", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:08:44 compute-0 nova_compute[250018]: 2026-01-20 15:08:44.685 250022 DEBUG oslo_concurrency.lockutils [req-2ce67267-2992-4385-bbef-b175e6ff0617 req-cfce092d-3f21-49cc-981b-e0f995905588 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-9f36f762-4fcd-4996-a347-43e8704458c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:08:44 compute-0 nova_compute[250018]: 2026-01-20 15:08:44.775 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:08:45 compute-0 ceph-mon[74360]: pgmap v2556: 321 pgs: 321 active+clean; 577 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.6 MiB/s rd, 4.2 MiB/s wr, 227 op/s
Jan 20 15:08:45 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:08:45 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:08:45 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:08:45.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:08:45 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e377 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:08:46 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2557: 321 pgs: 321 active+clean; 580 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 8.0 MiB/s rd, 3.6 MiB/s wr, 338 op/s
Jan 20 15:08:46 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:08:46 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:08:46 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:08:46.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:08:47 compute-0 ceph-mon[74360]: pgmap v2557: 321 pgs: 321 active+clean; 580 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 8.0 MiB/s rd, 3.6 MiB/s wr, 338 op/s
Jan 20 15:08:47 compute-0 nova_compute[250018]: 2026-01-20 15:08:47.813 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:08:47 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:08:47 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:08:47 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:08:47.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:08:48 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2558: 321 pgs: 321 active+clean; 580 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 7.9 MiB/s rd, 3.6 MiB/s wr, 327 op/s
Jan 20 15:08:48 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:08:48 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:08:48 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:08:48.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:08:49 compute-0 ceph-mon[74360]: pgmap v2558: 321 pgs: 321 active+clean; 580 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 7.9 MiB/s rd, 3.6 MiB/s wr, 327 op/s
Jan 20 15:08:49 compute-0 nova_compute[250018]: 2026-01-20 15:08:49.777 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:08:49 compute-0 ovn_controller[148666]: 2026-01-20T15:08:49Z|00076|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:24:50:fe 10.100.0.3
Jan 20 15:08:49 compute-0 ovn_controller[148666]: 2026-01-20T15:08:49Z|00077|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:24:50:fe 10.100.0.3
Jan 20 15:08:49 compute-0 radosgw[93153]: INFO: RGWReshardLock::lock found lock on reshard.0000000009 to be held by another RGW process; skipping for now
Jan 20 15:08:49 compute-0 radosgw[93153]: INFO: RGWReshardLock::lock found lock on reshard.0000000011 to be held by another RGW process; skipping for now
Jan 20 15:08:49 compute-0 radosgw[93153]: INFO: RGWReshardLock::lock found lock on reshard.0000000013 to be held by another RGW process; skipping for now
Jan 20 15:08:49 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:08:49 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:08:49 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:08:49.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:08:49 compute-0 radosgw[93153]: INFO: RGWReshardLock::lock found lock on reshard.0000000015 to be held by another RGW process; skipping for now
Jan 20 15:08:50 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2559: 321 pgs: 321 active+clean; 599 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 8.1 MiB/s rd, 4.8 MiB/s wr, 379 op/s
Jan 20 15:08:50 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:08:50 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:08:50 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:08:50.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:08:50 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e377 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:08:51 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/2585228950' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:08:51 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:08:51 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:08:51 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:08:51.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:08:52 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e377 do_prune osdmap full prune enabled
Jan 20 15:08:52 compute-0 ceph-mon[74360]: pgmap v2559: 321 pgs: 321 active+clean; 599 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 8.1 MiB/s rd, 4.8 MiB/s wr, 379 op/s
Jan 20 15:08:52 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e378 e378: 3 total, 3 up, 3 in
Jan 20 15:08:52 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e378: 3 total, 3 up, 3 in
Jan 20 15:08:52 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2561: 321 pgs: 321 active+clean; 616 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 5.4 MiB/s rd, 2.6 MiB/s wr, 319 op/s
Jan 20 15:08:52 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:08:52 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:08:52 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:08:52.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:08:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:08:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:08:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:08:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:08:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:08:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:08:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Optimize plan auto_2026-01-20_15:08:52
Jan 20 15:08:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 15:08:52 compute-0 ceph-mgr[74653]: [balancer INFO root] do_upmap
Jan 20 15:08:52 compute-0 ceph-mgr[74653]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.log', 'default.rgw.control', 'images', '.mgr', 'cephfs.cephfs.data', 'volumes', 'backups', 'cephfs.cephfs.meta', 'vms', 'default.rgw.meta']
Jan 20 15:08:52 compute-0 ceph-mgr[74653]: [balancer INFO root] prepared 0/10 changes
Jan 20 15:08:52 compute-0 nova_compute[250018]: 2026-01-20 15:08:52.816 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:08:53 compute-0 ceph-mon[74360]: osdmap e378: 3 total, 3 up, 3 in
Jan 20 15:08:53 compute-0 ceph-mon[74360]: pgmap v2561: 321 pgs: 321 active+clean; 616 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 5.4 MiB/s rd, 2.6 MiB/s wr, 319 op/s
Jan 20 15:08:53 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:08:53 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:08:53 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:08:53.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:08:54 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2562: 321 pgs: 321 active+clean; 620 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 3.3 MiB/s wr, 349 op/s
Jan 20 15:08:54 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:08:54 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:08:54 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:08:54.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:08:54 compute-0 nova_compute[250018]: 2026-01-20 15:08:54.778 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:08:55 compute-0 nova_compute[250018]: 2026-01-20 15:08:55.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:08:55 compute-0 podman[349850]: 2026-01-20 15:08:55.464265404 +0000 UTC m=+0.051243562 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 20 15:08:55 compute-0 podman[349849]: 2026-01-20 15:08:55.521252872 +0000 UTC m=+0.106979087 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 20 15:08:55 compute-0 ceph-mon[74360]: pgmap v2562: 321 pgs: 321 active+clean; 620 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 3.3 MiB/s wr, 349 op/s
Jan 20 15:08:55 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:08:55 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:08:55 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:08:55.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:08:55 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:08:56 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2563: 321 pgs: 321 active+clean; 603 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 5.0 MiB/s wr, 444 op/s
Jan 20 15:08:56 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:08:56 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:08:56 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:08:56.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:08:56 compute-0 ceph-mon[74360]: pgmap v2563: 321 pgs: 321 active+clean; 603 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 5.0 MiB/s wr, 444 op/s
Jan 20 15:08:57 compute-0 nova_compute[250018]: 2026-01-20 15:08:57.174 250022 DEBUG nova.compute.manager [req-90aa8a58-4ece-4280-927b-cdf0ea8d5fa3 req-782d977c-e84c-40e3-a756-b10e5be70256 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 9f36f762-4fcd-4996-a347-43e8704458c9] Received event network-changed-4905566f-51f5-4050-b95c-545d1c4eca86 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:08:57 compute-0 nova_compute[250018]: 2026-01-20 15:08:57.174 250022 DEBUG nova.compute.manager [req-90aa8a58-4ece-4280-927b-cdf0ea8d5fa3 req-782d977c-e84c-40e3-a756-b10e5be70256 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 9f36f762-4fcd-4996-a347-43e8704458c9] Refreshing instance network info cache due to event network-changed-4905566f-51f5-4050-b95c-545d1c4eca86. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 15:08:57 compute-0 nova_compute[250018]: 2026-01-20 15:08:57.174 250022 DEBUG oslo_concurrency.lockutils [req-90aa8a58-4ece-4280-927b-cdf0ea8d5fa3 req-782d977c-e84c-40e3-a756-b10e5be70256 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-9f36f762-4fcd-4996-a347-43e8704458c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:08:57 compute-0 nova_compute[250018]: 2026-01-20 15:08:57.174 250022 DEBUG oslo_concurrency.lockutils [req-90aa8a58-4ece-4280-927b-cdf0ea8d5fa3 req-782d977c-e84c-40e3-a756-b10e5be70256 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-9f36f762-4fcd-4996-a347-43e8704458c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:08:57 compute-0 nova_compute[250018]: 2026-01-20 15:08:57.174 250022 DEBUG nova.network.neutron [req-90aa8a58-4ece-4280-927b-cdf0ea8d5fa3 req-782d977c-e84c-40e3-a756-b10e5be70256 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 9f36f762-4fcd-4996-a347-43e8704458c9] Refreshing network info cache for port 4905566f-51f5-4050-b95c-545d1c4eca86 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 15:08:57 compute-0 nova_compute[250018]: 2026-01-20 15:08:57.296 250022 DEBUG oslo_concurrency.lockutils [None req-424a2044-24b7-4722-b114-098a252d7823 467630ad66f84c4ba21657f6e5db7d10 48aeeaa346bb4dfba4dfc598ae41f062 - - default default] Acquiring lock "9f36f762-4fcd-4996-a347-43e8704458c9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:08:57 compute-0 nova_compute[250018]: 2026-01-20 15:08:57.297 250022 DEBUG oslo_concurrency.lockutils [None req-424a2044-24b7-4722-b114-098a252d7823 467630ad66f84c4ba21657f6e5db7d10 48aeeaa346bb4dfba4dfc598ae41f062 - - default default] Lock "9f36f762-4fcd-4996-a347-43e8704458c9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:08:57 compute-0 nova_compute[250018]: 2026-01-20 15:08:57.297 250022 DEBUG oslo_concurrency.lockutils [None req-424a2044-24b7-4722-b114-098a252d7823 467630ad66f84c4ba21657f6e5db7d10 48aeeaa346bb4dfba4dfc598ae41f062 - - default default] Acquiring lock "9f36f762-4fcd-4996-a347-43e8704458c9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:08:57 compute-0 nova_compute[250018]: 2026-01-20 15:08:57.297 250022 DEBUG oslo_concurrency.lockutils [None req-424a2044-24b7-4722-b114-098a252d7823 467630ad66f84c4ba21657f6e5db7d10 48aeeaa346bb4dfba4dfc598ae41f062 - - default default] Lock "9f36f762-4fcd-4996-a347-43e8704458c9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:08:57 compute-0 nova_compute[250018]: 2026-01-20 15:08:57.297 250022 DEBUG oslo_concurrency.lockutils [None req-424a2044-24b7-4722-b114-098a252d7823 467630ad66f84c4ba21657f6e5db7d10 48aeeaa346bb4dfba4dfc598ae41f062 - - default default] Lock "9f36f762-4fcd-4996-a347-43e8704458c9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:08:57 compute-0 nova_compute[250018]: 2026-01-20 15:08:57.298 250022 INFO nova.compute.manager [None req-424a2044-24b7-4722-b114-098a252d7823 467630ad66f84c4ba21657f6e5db7d10 48aeeaa346bb4dfba4dfc598ae41f062 - - default default] [instance: 9f36f762-4fcd-4996-a347-43e8704458c9] Terminating instance
Jan 20 15:08:57 compute-0 nova_compute[250018]: 2026-01-20 15:08:57.299 250022 DEBUG nova.compute.manager [None req-424a2044-24b7-4722-b114-098a252d7823 467630ad66f84c4ba21657f6e5db7d10 48aeeaa346bb4dfba4dfc598ae41f062 - - default default] [instance: 9f36f762-4fcd-4996-a347-43e8704458c9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 15:08:57 compute-0 kernel: tap4905566f-51 (unregistering): left promiscuous mode
Jan 20 15:08:57 compute-0 NetworkManager[48960]: <info>  [1768921737.4250] device (tap4905566f-51): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 15:08:57 compute-0 nova_compute[250018]: 2026-01-20 15:08:57.455 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:08:57 compute-0 ovn_controller[148666]: 2026-01-20T15:08:57Z|00604|binding|INFO|Releasing lport 4905566f-51f5-4050-b95c-545d1c4eca86 from this chassis (sb_readonly=0)
Jan 20 15:08:57 compute-0 ovn_controller[148666]: 2026-01-20T15:08:57Z|00605|binding|INFO|Setting lport 4905566f-51f5-4050-b95c-545d1c4eca86 down in Southbound
Jan 20 15:08:57 compute-0 ovn_controller[148666]: 2026-01-20T15:08:57Z|00606|binding|INFO|Removing iface tap4905566f-51 ovn-installed in OVS
Jan 20 15:08:57 compute-0 nova_compute[250018]: 2026-01-20 15:08:57.457 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:08:57 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:08:57.462 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:24:50:fe 10.100.0.3'], port_security=['fa:16:3e:24:50:fe 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '9f36f762-4fcd-4996-a347-43e8704458c9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8ca08cef-0ad0-44f7-b2ed-8e7b0a2aa4ef', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '48aeeaa346bb4dfba4dfc598ae41f062', 'neutron:revision_number': '4', 'neutron:security_group_ids': '83634ac6-ccb2-4a9e-943b-5695245061bd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8eb1a11c-202c-4329-9f25-67a4bb6fa2ab, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=4905566f-51f5-4050-b95c-545d1c4eca86) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:08:57 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:08:57.464 160071 INFO neutron.agent.ovn.metadata.agent [-] Port 4905566f-51f5-4050-b95c-545d1c4eca86 in datapath 8ca08cef-0ad0-44f7-b2ed-8e7b0a2aa4ef unbound from our chassis
Jan 20 15:08:57 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:08:57.467 160071 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8ca08cef-0ad0-44f7-b2ed-8e7b0a2aa4ef, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 15:08:57 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:08:57.468 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[a185db3e-828c-4ef0-8ad8-45612231ec50]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:08:57 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:08:57.468 160071 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-8ca08cef-0ad0-44f7-b2ed-8e7b0a2aa4ef namespace which is not needed anymore
Jan 20 15:08:57 compute-0 nova_compute[250018]: 2026-01-20 15:08:57.471 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:08:57 compute-0 systemd[1]: machine-qemu\x2d75\x2dinstance\x2d000000a9.scope: Deactivated successfully.
Jan 20 15:08:57 compute-0 systemd[1]: machine-qemu\x2d75\x2dinstance\x2d000000a9.scope: Consumed 15.661s CPU time.
Jan 20 15:08:57 compute-0 systemd-machined[216401]: Machine qemu-75-instance-000000a9 terminated.
Jan 20 15:08:57 compute-0 nova_compute[250018]: 2026-01-20 15:08:57.531 250022 INFO nova.virt.libvirt.driver [-] [instance: 9f36f762-4fcd-4996-a347-43e8704458c9] Instance destroyed successfully.
Jan 20 15:08:57 compute-0 nova_compute[250018]: 2026-01-20 15:08:57.532 250022 DEBUG nova.objects.instance [None req-424a2044-24b7-4722-b114-098a252d7823 467630ad66f84c4ba21657f6e5db7d10 48aeeaa346bb4dfba4dfc598ae41f062 - - default default] Lazy-loading 'resources' on Instance uuid 9f36f762-4fcd-4996-a347-43e8704458c9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:08:57 compute-0 nova_compute[250018]: 2026-01-20 15:08:57.553 250022 DEBUG nova.virt.libvirt.vif [None req-424a2044-24b7-4722-b114-098a252d7823 467630ad66f84c4ba21657f6e5db7d10 48aeeaa346bb4dfba4dfc598ae41f062 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T15:08:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBackupRestore-server-1812790462',display_name='tempest-TestVolumeBackupRestore-server-1812790462',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebackuprestore-server-1812790462',id=169,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGxLbPiDR7AwdjDmf86eGvFaH6h62+9wOP7b56n23VGMK1V8QSlWsSLfeNXOI/Y73QQp/5icyOUxYMpUaWFgqmFCeN4Fx0NUoAEImZuyySZ+kRnaZMcDsvRcAMwkQP9cMw==',key_name='tempest-TestVolumeBackupRestore-1250360555',keypairs=<?>,launch_index=0,launched_at=2026-01-20T15:08:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='48aeeaa346bb4dfba4dfc598ae41f062',ramdisk_id='',reservation_id='r-1fawb1vq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestVolumeBackupRestore-61264633',owner_user_name='tempest-TestVolumeBackupRestore-61264633-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T15:08:35Z,user_data=None,user_id='467630ad66f84c4ba21657f6e5db7d10',uuid=9f36f762-4fcd-4996-a347-43e8704458c9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4905566f-51f5-4050-b95c-545d1c4eca86", "address": "fa:16:3e:24:50:fe", "network": {"id": "8ca08cef-0ad0-44f7-b2ed-8e7b0a2aa4ef", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-156652880-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48aeeaa346bb4dfba4dfc598ae41f062", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4905566f-51", "ovs_interfaceid": "4905566f-51f5-4050-b95c-545d1c4eca86", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 15:08:57 compute-0 nova_compute[250018]: 2026-01-20 15:08:57.554 250022 DEBUG nova.network.os_vif_util [None req-424a2044-24b7-4722-b114-098a252d7823 467630ad66f84c4ba21657f6e5db7d10 48aeeaa346bb4dfba4dfc598ae41f062 - - default default] Converting VIF {"id": "4905566f-51f5-4050-b95c-545d1c4eca86", "address": "fa:16:3e:24:50:fe", "network": {"id": "8ca08cef-0ad0-44f7-b2ed-8e7b0a2aa4ef", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-156652880-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48aeeaa346bb4dfba4dfc598ae41f062", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4905566f-51", "ovs_interfaceid": "4905566f-51f5-4050-b95c-545d1c4eca86", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:08:57 compute-0 nova_compute[250018]: 2026-01-20 15:08:57.554 250022 DEBUG nova.network.os_vif_util [None req-424a2044-24b7-4722-b114-098a252d7823 467630ad66f84c4ba21657f6e5db7d10 48aeeaa346bb4dfba4dfc598ae41f062 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:24:50:fe,bridge_name='br-int',has_traffic_filtering=True,id=4905566f-51f5-4050-b95c-545d1c4eca86,network=Network(8ca08cef-0ad0-44f7-b2ed-8e7b0a2aa4ef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4905566f-51') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:08:57 compute-0 nova_compute[250018]: 2026-01-20 15:08:57.554 250022 DEBUG os_vif [None req-424a2044-24b7-4722-b114-098a252d7823 467630ad66f84c4ba21657f6e5db7d10 48aeeaa346bb4dfba4dfc598ae41f062 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:24:50:fe,bridge_name='br-int',has_traffic_filtering=True,id=4905566f-51f5-4050-b95c-545d1c4eca86,network=Network(8ca08cef-0ad0-44f7-b2ed-8e7b0a2aa4ef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4905566f-51') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 15:08:57 compute-0 nova_compute[250018]: 2026-01-20 15:08:57.556 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:08:57 compute-0 nova_compute[250018]: 2026-01-20 15:08:57.556 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4905566f-51, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:08:57 compute-0 nova_compute[250018]: 2026-01-20 15:08:57.557 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:08:57 compute-0 nova_compute[250018]: 2026-01-20 15:08:57.558 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:08:57 compute-0 nova_compute[250018]: 2026-01-20 15:08:57.560 250022 INFO os_vif [None req-424a2044-24b7-4722-b114-098a252d7823 467630ad66f84c4ba21657f6e5db7d10 48aeeaa346bb4dfba4dfc598ae41f062 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:24:50:fe,bridge_name='br-int',has_traffic_filtering=True,id=4905566f-51f5-4050-b95c-545d1c4eca86,network=Network(8ca08cef-0ad0-44f7-b2ed-8e7b0a2aa4ef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4905566f-51')
Jan 20 15:08:57 compute-0 neutron-haproxy-ovnmeta-8ca08cef-0ad0-44f7-b2ed-8e7b0a2aa4ef[349730]: [NOTICE]   (349734) : haproxy version is 2.8.14-c23fe91
Jan 20 15:08:57 compute-0 neutron-haproxy-ovnmeta-8ca08cef-0ad0-44f7-b2ed-8e7b0a2aa4ef[349730]: [NOTICE]   (349734) : path to executable is /usr/sbin/haproxy
Jan 20 15:08:57 compute-0 neutron-haproxy-ovnmeta-8ca08cef-0ad0-44f7-b2ed-8e7b0a2aa4ef[349730]: [WARNING]  (349734) : Exiting Master process...
Jan 20 15:08:57 compute-0 neutron-haproxy-ovnmeta-8ca08cef-0ad0-44f7-b2ed-8e7b0a2aa4ef[349730]: [ALERT]    (349734) : Current worker (349736) exited with code 143 (Terminated)
Jan 20 15:08:57 compute-0 neutron-haproxy-ovnmeta-8ca08cef-0ad0-44f7-b2ed-8e7b0a2aa4ef[349730]: [WARNING]  (349734) : All workers exited. Exiting... (0)
Jan 20 15:08:57 compute-0 systemd[1]: libpod-81092114eed80c3b729a20ee2746f03cb62ebb910fb95ad245700b24612d76aa.scope: Deactivated successfully.
Jan 20 15:08:57 compute-0 conmon[349730]: conmon 81092114eed80c3b729a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-81092114eed80c3b729a20ee2746f03cb62ebb910fb95ad245700b24612d76aa.scope/container/memory.events
Jan 20 15:08:57 compute-0 podman[349925]: 2026-01-20 15:08:57.619951445 +0000 UTC m=+0.055723334 container died 81092114eed80c3b729a20ee2746f03cb62ebb910fb95ad245700b24612d76aa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8ca08cef-0ad0-44f7-b2ed-8e7b0a2aa4ef, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Jan 20 15:08:57 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-81092114eed80c3b729a20ee2746f03cb62ebb910fb95ad245700b24612d76aa-userdata-shm.mount: Deactivated successfully.
Jan 20 15:08:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-2e69f2d7e13cf1e5e64998d07634dc821e111079fb88622e8d9d34240a491916-merged.mount: Deactivated successfully.
Jan 20 15:08:57 compute-0 podman[349925]: 2026-01-20 15:08:57.665395381 +0000 UTC m=+0.101167270 container cleanup 81092114eed80c3b729a20ee2746f03cb62ebb910fb95ad245700b24612d76aa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8ca08cef-0ad0-44f7-b2ed-8e7b0a2aa4ef, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 20 15:08:57 compute-0 systemd[1]: libpod-conmon-81092114eed80c3b729a20ee2746f03cb62ebb910fb95ad245700b24612d76aa.scope: Deactivated successfully.
Jan 20 15:08:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 15:08:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 15:08:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 15:08:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 15:08:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 15:08:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 15:08:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 15:08:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 15:08:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 15:08:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 15:08:57 compute-0 podman[349972]: 2026-01-20 15:08:57.731891225 +0000 UTC m=+0.047687537 container remove 81092114eed80c3b729a20ee2746f03cb62ebb910fb95ad245700b24612d76aa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8ca08cef-0ad0-44f7-b2ed-8e7b0a2aa4ef, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 20 15:08:57 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:08:57.739 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[c46af116-1dc7-40b2-a50b-04841a43c378]: (4, ('Tue Jan 20 03:08:57 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-8ca08cef-0ad0-44f7-b2ed-8e7b0a2aa4ef (81092114eed80c3b729a20ee2746f03cb62ebb910fb95ad245700b24612d76aa)\n81092114eed80c3b729a20ee2746f03cb62ebb910fb95ad245700b24612d76aa\nTue Jan 20 03:08:57 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-8ca08cef-0ad0-44f7-b2ed-8e7b0a2aa4ef (81092114eed80c3b729a20ee2746f03cb62ebb910fb95ad245700b24612d76aa)\n81092114eed80c3b729a20ee2746f03cb62ebb910fb95ad245700b24612d76aa\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:08:57 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:08:57.740 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[8dd15b05-fdd8-4c6a-8148-747d168375cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:08:57 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:08:57.741 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8ca08cef-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:08:57 compute-0 nova_compute[250018]: 2026-01-20 15:08:57.742 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:08:57 compute-0 kernel: tap8ca08cef-00: left promiscuous mode
Jan 20 15:08:57 compute-0 nova_compute[250018]: 2026-01-20 15:08:57.764 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:08:57 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:08:57.767 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[36445ed5-9b71-42e1-ab10-b533edd67b85]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:08:57 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:08:57.788 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[06fdf2a1-5616-46f3-ab22-36657ac8935b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:08:57 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:08:57.789 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[7ad37f1c-e609-4a8e-b925-7cc9e601ff9f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:08:57 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:08:57.808 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[73440485-36cf-4a1c-ace9-8432ac1c0775]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 765456, 'reachable_time': 34913, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 349985, 'error': None, 'target': 'ovnmeta-8ca08cef-0ad0-44f7-b2ed-8e7b0a2aa4ef', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:08:57 compute-0 systemd[1]: run-netns-ovnmeta\x2d8ca08cef\x2d0ad0\x2d44f7\x2db2ed\x2d8e7b0a2aa4ef.mount: Deactivated successfully.
Jan 20 15:08:57 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:08:57.811 160517 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-8ca08cef-0ad0-44f7-b2ed-8e7b0a2aa4ef deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 15:08:57 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:08:57.811 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[df3a4835-1a4b-49e4-b8e5-1b0537c381a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:08:57 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:08:57 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:08:57 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:08:57.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:08:57 compute-0 nova_compute[250018]: 2026-01-20 15:08:57.926 250022 DEBUG nova.compute.manager [req-916b103e-4ad6-4508-bd6a-9be5dbc66f34 req-05561167-f597-42fa-a749-5f9534388c9d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 9f36f762-4fcd-4996-a347-43e8704458c9] Received event network-vif-unplugged-4905566f-51f5-4050-b95c-545d1c4eca86 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:08:57 compute-0 nova_compute[250018]: 2026-01-20 15:08:57.926 250022 DEBUG oslo_concurrency.lockutils [req-916b103e-4ad6-4508-bd6a-9be5dbc66f34 req-05561167-f597-42fa-a749-5f9534388c9d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "9f36f762-4fcd-4996-a347-43e8704458c9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:08:57 compute-0 nova_compute[250018]: 2026-01-20 15:08:57.926 250022 DEBUG oslo_concurrency.lockutils [req-916b103e-4ad6-4508-bd6a-9be5dbc66f34 req-05561167-f597-42fa-a749-5f9534388c9d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "9f36f762-4fcd-4996-a347-43e8704458c9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:08:57 compute-0 nova_compute[250018]: 2026-01-20 15:08:57.927 250022 DEBUG oslo_concurrency.lockutils [req-916b103e-4ad6-4508-bd6a-9be5dbc66f34 req-05561167-f597-42fa-a749-5f9534388c9d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "9f36f762-4fcd-4996-a347-43e8704458c9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:08:57 compute-0 nova_compute[250018]: 2026-01-20 15:08:57.927 250022 DEBUG nova.compute.manager [req-916b103e-4ad6-4508-bd6a-9be5dbc66f34 req-05561167-f597-42fa-a749-5f9534388c9d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 9f36f762-4fcd-4996-a347-43e8704458c9] No waiting events found dispatching network-vif-unplugged-4905566f-51f5-4050-b95c-545d1c4eca86 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:08:57 compute-0 nova_compute[250018]: 2026-01-20 15:08:57.927 250022 DEBUG nova.compute.manager [req-916b103e-4ad6-4508-bd6a-9be5dbc66f34 req-05561167-f597-42fa-a749-5f9534388c9d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 9f36f762-4fcd-4996-a347-43e8704458c9] Received event network-vif-unplugged-4905566f-51f5-4050-b95c-545d1c4eca86 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 20 15:08:58 compute-0 nova_compute[250018]: 2026-01-20 15:08:58.052 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:08:58 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/3934644950' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:08:58 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2564: 321 pgs: 321 active+clean; 603 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 5.0 MiB/s wr, 444 op/s
Jan 20 15:08:58 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:08:58 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:08:58 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:08:58.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:08:58 compute-0 sudo[349990]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:08:58 compute-0 sudo[349990]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:08:58 compute-0 sudo[349990]: pam_unix(sudo:session): session closed for user root
Jan 20 15:08:58 compute-0 sudo[350015]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:08:58 compute-0 sudo[350015]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:08:58 compute-0 sudo[350015]: pam_unix(sudo:session): session closed for user root
Jan 20 15:08:58 compute-0 nova_compute[250018]: 2026-01-20 15:08:58.813 250022 INFO nova.virt.libvirt.driver [None req-424a2044-24b7-4722-b114-098a252d7823 467630ad66f84c4ba21657f6e5db7d10 48aeeaa346bb4dfba4dfc598ae41f062 - - default default] [instance: 9f36f762-4fcd-4996-a347-43e8704458c9] Deleting instance files /var/lib/nova/instances/9f36f762-4fcd-4996-a347-43e8704458c9_del
Jan 20 15:08:58 compute-0 nova_compute[250018]: 2026-01-20 15:08:58.813 250022 INFO nova.virt.libvirt.driver [None req-424a2044-24b7-4722-b114-098a252d7823 467630ad66f84c4ba21657f6e5db7d10 48aeeaa346bb4dfba4dfc598ae41f062 - - default default] [instance: 9f36f762-4fcd-4996-a347-43e8704458c9] Deletion of /var/lib/nova/instances/9f36f762-4fcd-4996-a347-43e8704458c9_del complete
Jan 20 15:08:58 compute-0 nova_compute[250018]: 2026-01-20 15:08:58.873 250022 INFO nova.compute.manager [None req-424a2044-24b7-4722-b114-098a252d7823 467630ad66f84c4ba21657f6e5db7d10 48aeeaa346bb4dfba4dfc598ae41f062 - - default default] [instance: 9f36f762-4fcd-4996-a347-43e8704458c9] Took 1.57 seconds to destroy the instance on the hypervisor.
Jan 20 15:08:58 compute-0 nova_compute[250018]: 2026-01-20 15:08:58.874 250022 DEBUG oslo.service.loopingcall [None req-424a2044-24b7-4722-b114-098a252d7823 467630ad66f84c4ba21657f6e5db7d10 48aeeaa346bb4dfba4dfc598ae41f062 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 15:08:58 compute-0 nova_compute[250018]: 2026-01-20 15:08:58.874 250022 DEBUG nova.compute.manager [-] [instance: 9f36f762-4fcd-4996-a347-43e8704458c9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 15:08:58 compute-0 nova_compute[250018]: 2026-01-20 15:08:58.874 250022 DEBUG nova.network.neutron [-] [instance: 9f36f762-4fcd-4996-a347-43e8704458c9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 15:08:59 compute-0 nova_compute[250018]: 2026-01-20 15:08:59.052 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:08:59 compute-0 nova_compute[250018]: 2026-01-20 15:08:59.071 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:08:59 compute-0 nova_compute[250018]: 2026-01-20 15:08:59.072 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:08:59 compute-0 nova_compute[250018]: 2026-01-20 15:08:59.072 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:08:59 compute-0 nova_compute[250018]: 2026-01-20 15:08:59.072 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 15:08:59 compute-0 nova_compute[250018]: 2026-01-20 15:08:59.073 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:08:59 compute-0 nova_compute[250018]: 2026-01-20 15:08:59.144 250022 DEBUG nova.network.neutron [req-90aa8a58-4ece-4280-927b-cdf0ea8d5fa3 req-782d977c-e84c-40e3-a756-b10e5be70256 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 9f36f762-4fcd-4996-a347-43e8704458c9] Updated VIF entry in instance network info cache for port 4905566f-51f5-4050-b95c-545d1c4eca86. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 15:08:59 compute-0 nova_compute[250018]: 2026-01-20 15:08:59.145 250022 DEBUG nova.network.neutron [req-90aa8a58-4ece-4280-927b-cdf0ea8d5fa3 req-782d977c-e84c-40e3-a756-b10e5be70256 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 9f36f762-4fcd-4996-a347-43e8704458c9] Updating instance_info_cache with network_info: [{"id": "4905566f-51f5-4050-b95c-545d1c4eca86", "address": "fa:16:3e:24:50:fe", "network": {"id": "8ca08cef-0ad0-44f7-b2ed-8e7b0a2aa4ef", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-156652880-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48aeeaa346bb4dfba4dfc598ae41f062", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4905566f-51", "ovs_interfaceid": "4905566f-51f5-4050-b95c-545d1c4eca86", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:08:59 compute-0 nova_compute[250018]: 2026-01-20 15:08:59.166 250022 DEBUG oslo_concurrency.lockutils [req-90aa8a58-4ece-4280-927b-cdf0ea8d5fa3 req-782d977c-e84c-40e3-a756-b10e5be70256 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-9f36f762-4fcd-4996-a347-43e8704458c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:08:59 compute-0 ceph-mon[74360]: pgmap v2564: 321 pgs: 321 active+clean; 603 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 5.0 MiB/s wr, 444 op/s
Jan 20 15:08:59 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/4045448645' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:08:59 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/698210959' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:08:59 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:08:59 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2808126759' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:08:59 compute-0 nova_compute[250018]: 2026-01-20 15:08:59.587 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:08:59 compute-0 nova_compute[250018]: 2026-01-20 15:08:59.731 250022 WARNING nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:08:59 compute-0 nova_compute[250018]: 2026-01-20 15:08:59.732 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4235MB free_disk=20.831497192382812GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 15:08:59 compute-0 nova_compute[250018]: 2026-01-20 15:08:59.733 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:08:59 compute-0 nova_compute[250018]: 2026-01-20 15:08:59.733 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:08:59 compute-0 nova_compute[250018]: 2026-01-20 15:08:59.809 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Instance 9f36f762-4fcd-4996-a347-43e8704458c9 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 15:08:59 compute-0 nova_compute[250018]: 2026-01-20 15:08:59.809 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 15:08:59 compute-0 nova_compute[250018]: 2026-01-20 15:08:59.809 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 15:08:59 compute-0 nova_compute[250018]: 2026-01-20 15:08:59.833 250022 DEBUG nova.scheduler.client.report [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Refreshing inventories for resource provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 20 15:08:59 compute-0 nova_compute[250018]: 2026-01-20 15:08:59.843 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:08:59 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:08:59 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:08:59 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:08:59.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:08:59 compute-0 nova_compute[250018]: 2026-01-20 15:08:59.913 250022 DEBUG nova.scheduler.client.report [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Updating ProviderTree inventory for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 20 15:08:59 compute-0 nova_compute[250018]: 2026-01-20 15:08:59.914 250022 DEBUG nova.compute.provider_tree [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Updating inventory in ProviderTree for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 20 15:08:59 compute-0 nova_compute[250018]: 2026-01-20 15:08:59.939 250022 DEBUG nova.scheduler.client.report [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Refreshing aggregate associations for resource provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 20 15:08:59 compute-0 nova_compute[250018]: 2026-01-20 15:08:59.966 250022 DEBUG nova.scheduler.client.report [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Refreshing trait associations for resource provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed, traits: COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_TRUSTED_CERTS,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NODE,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_RESCUE_BFV,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SSE2,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VOLUME_EXTEND,COMPUTE_ACCELERATORS,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_SATA,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE41,HW_CPU_X86_MMX,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_IDE,COMPUTE_DEVICE_TAGGING _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 20 15:09:00 compute-0 nova_compute[250018]: 2026-01-20 15:09:00.018 250022 DEBUG nova.compute.manager [req-6e2f06e2-36de-4081-a45c-a66941f1b4dc req-12d1a13b-327b-457f-9a7d-999378f8ccd4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 9f36f762-4fcd-4996-a347-43e8704458c9] Received event network-vif-plugged-4905566f-51f5-4050-b95c-545d1c4eca86 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:09:00 compute-0 nova_compute[250018]: 2026-01-20 15:09:00.019 250022 DEBUG oslo_concurrency.lockutils [req-6e2f06e2-36de-4081-a45c-a66941f1b4dc req-12d1a13b-327b-457f-9a7d-999378f8ccd4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "9f36f762-4fcd-4996-a347-43e8704458c9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:09:00 compute-0 nova_compute[250018]: 2026-01-20 15:09:00.019 250022 DEBUG oslo_concurrency.lockutils [req-6e2f06e2-36de-4081-a45c-a66941f1b4dc req-12d1a13b-327b-457f-9a7d-999378f8ccd4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "9f36f762-4fcd-4996-a347-43e8704458c9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:09:00 compute-0 nova_compute[250018]: 2026-01-20 15:09:00.020 250022 DEBUG oslo_concurrency.lockutils [req-6e2f06e2-36de-4081-a45c-a66941f1b4dc req-12d1a13b-327b-457f-9a7d-999378f8ccd4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "9f36f762-4fcd-4996-a347-43e8704458c9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:09:00 compute-0 nova_compute[250018]: 2026-01-20 15:09:00.020 250022 DEBUG nova.compute.manager [req-6e2f06e2-36de-4081-a45c-a66941f1b4dc req-12d1a13b-327b-457f-9a7d-999378f8ccd4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 9f36f762-4fcd-4996-a347-43e8704458c9] No waiting events found dispatching network-vif-plugged-4905566f-51f5-4050-b95c-545d1c4eca86 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:09:00 compute-0 nova_compute[250018]: 2026-01-20 15:09:00.020 250022 WARNING nova.compute.manager [req-6e2f06e2-36de-4081-a45c-a66941f1b4dc req-12d1a13b-327b-457f-9a7d-999378f8ccd4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 9f36f762-4fcd-4996-a347-43e8704458c9] Received unexpected event network-vif-plugged-4905566f-51f5-4050-b95c-545d1c4eca86 for instance with vm_state active and task_state deleting.
Jan 20 15:09:00 compute-0 nova_compute[250018]: 2026-01-20 15:09:00.021 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:09:00 compute-0 nova_compute[250018]: 2026-01-20 15:09:00.148 250022 DEBUG nova.network.neutron [-] [instance: 9f36f762-4fcd-4996-a347-43e8704458c9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:09:00 compute-0 nova_compute[250018]: 2026-01-20 15:09:00.168 250022 INFO nova.compute.manager [-] [instance: 9f36f762-4fcd-4996-a347-43e8704458c9] Took 1.29 seconds to deallocate network for instance.
Jan 20 15:09:00 compute-0 nova_compute[250018]: 2026-01-20 15:09:00.237 250022 DEBUG nova.compute.manager [req-03a61f9e-9b5f-42de-baf9-220bc7d38f81 req-1fb635fc-93d3-4af9-b1d9-34a8a3ce4929 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 9f36f762-4fcd-4996-a347-43e8704458c9] Received event network-vif-deleted-4905566f-51f5-4050-b95c-545d1c4eca86 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:09:00 compute-0 sshd-session[349986]: Connection closed by authenticating user root 134.122.57.138 port 39074 [preauth]
Jan 20 15:09:00 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2565: 321 pgs: 321 active+clean; 608 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 3.7 MiB/s wr, 408 op/s
Jan 20 15:09:00 compute-0 nova_compute[250018]: 2026-01-20 15:09:00.431 250022 INFO nova.compute.manager [None req-424a2044-24b7-4722-b114-098a252d7823 467630ad66f84c4ba21657f6e5db7d10 48aeeaa346bb4dfba4dfc598ae41f062 - - default default] [instance: 9f36f762-4fcd-4996-a347-43e8704458c9] Took 0.26 seconds to detach 1 volumes for instance.
Jan 20 15:09:00 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:09:00 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:09:00 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:09:00.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:09:00 compute-0 nova_compute[250018]: 2026-01-20 15:09:00.484 250022 DEBUG oslo_concurrency.lockutils [None req-424a2044-24b7-4722-b114-098a252d7823 467630ad66f84c4ba21657f6e5db7d10 48aeeaa346bb4dfba4dfc598ae41f062 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:09:00 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:09:00 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3931540506' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:09:00 compute-0 nova_compute[250018]: 2026-01-20 15:09:00.579 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.557s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:09:00 compute-0 nova_compute[250018]: 2026-01-20 15:09:00.584 250022 DEBUG nova.compute.provider_tree [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:09:00 compute-0 nova_compute[250018]: 2026-01-20 15:09:00.605 250022 DEBUG nova.scheduler.client.report [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:09:00 compute-0 nova_compute[250018]: 2026-01-20 15:09:00.630 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 15:09:00 compute-0 nova_compute[250018]: 2026-01-20 15:09:00.630 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.897s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:09:00 compute-0 nova_compute[250018]: 2026-01-20 15:09:00.630 250022 DEBUG oslo_concurrency.lockutils [None req-424a2044-24b7-4722-b114-098a252d7823 467630ad66f84c4ba21657f6e5db7d10 48aeeaa346bb4dfba4dfc598ae41f062 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.146s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:09:00 compute-0 nova_compute[250018]: 2026-01-20 15:09:00.673 250022 DEBUG oslo_concurrency.processutils [None req-424a2044-24b7-4722-b114-098a252d7823 467630ad66f84c4ba21657f6e5db7d10 48aeeaa346bb4dfba4dfc598ae41f062 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:09:00 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/2842859879' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:09:00 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/2808126759' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:09:01 compute-0 sudo[350097]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:09:01 compute-0 sudo[350097]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:09:01 compute-0 sudo[350097]: pam_unix(sudo:session): session closed for user root
Jan 20 15:09:01 compute-0 sudo[350122]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:09:01 compute-0 sudo[350122]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:09:01 compute-0 sudo[350122]: pam_unix(sudo:session): session closed for user root
Jan 20 15:09:01 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:09:01 compute-0 sudo[350147]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:09:01 compute-0 sudo[350147]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:09:01 compute-0 sudo[350147]: pam_unix(sudo:session): session closed for user root
Jan 20 15:09:01 compute-0 sudo[350181]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Jan 20 15:09:01 compute-0 sudo[350181]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:09:01 compute-0 nova_compute[250018]: 2026-01-20 15:09:01.625 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:09:01 compute-0 nova_compute[250018]: 2026-01-20 15:09:01.625 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:09:01 compute-0 nova_compute[250018]: 2026-01-20 15:09:01.625 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:09:01 compute-0 nova_compute[250018]: 2026-01-20 15:09:01.626 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 15:09:01 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:09:01 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3351445276' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:09:01 compute-0 ceph-mon[74360]: pgmap v2565: 321 pgs: 321 active+clean; 608 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 3.7 MiB/s wr, 408 op/s
Jan 20 15:09:01 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3931540506' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:09:01 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/3777703864' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:09:01 compute-0 nova_compute[250018]: 2026-01-20 15:09:01.784 250022 DEBUG oslo_concurrency.processutils [None req-424a2044-24b7-4722-b114-098a252d7823 467630ad66f84c4ba21657f6e5db7d10 48aeeaa346bb4dfba4dfc598ae41f062 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.110s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:09:01 compute-0 nova_compute[250018]: 2026-01-20 15:09:01.790 250022 DEBUG nova.compute.provider_tree [None req-424a2044-24b7-4722-b114-098a252d7823 467630ad66f84c4ba21657f6e5db7d10 48aeeaa346bb4dfba4dfc598ae41f062 - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:09:01 compute-0 sudo[350181]: pam_unix(sudo:session): session closed for user root
Jan 20 15:09:01 compute-0 nova_compute[250018]: 2026-01-20 15:09:01.810 250022 DEBUG nova.scheduler.client.report [None req-424a2044-24b7-4722-b114-098a252d7823 467630ad66f84c4ba21657f6e5db7d10 48aeeaa346bb4dfba4dfc598ae41f062 - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:09:01 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 15:09:01 compute-0 nova_compute[250018]: 2026-01-20 15:09:01.831 250022 DEBUG oslo_concurrency.lockutils [None req-424a2044-24b7-4722-b114-098a252d7823 467630ad66f84c4ba21657f6e5db7d10 48aeeaa346bb4dfba4dfc598ae41f062 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.201s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:09:01 compute-0 nova_compute[250018]: 2026-01-20 15:09:01.871 250022 INFO nova.scheduler.client.report [None req-424a2044-24b7-4722-b114-098a252d7823 467630ad66f84c4ba21657f6e5db7d10 48aeeaa346bb4dfba4dfc598ae41f062 - - default default] Deleted allocations for instance 9f36f762-4fcd-4996-a347-43e8704458c9
Jan 20 15:09:01 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:09:01 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:09:01 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:09:01.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:09:01 compute-0 nova_compute[250018]: 2026-01-20 15:09:01.930 250022 DEBUG oslo_concurrency.lockutils [None req-424a2044-24b7-4722-b114-098a252d7823 467630ad66f84c4ba21657f6e5db7d10 48aeeaa346bb4dfba4dfc598ae41f062 - - default default] Lock "9f36f762-4fcd-4996-a347-43e8704458c9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.633s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:09:02 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:09:02 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 15:09:02 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:09:02 compute-0 sudo[350228]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:09:02 compute-0 sudo[350228]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:09:02 compute-0 sudo[350228]: pam_unix(sudo:session): session closed for user root
Jan 20 15:09:02 compute-0 sudo[350254]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:09:02 compute-0 sudo[350254]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:09:02 compute-0 sudo[350254]: pam_unix(sudo:session): session closed for user root
Jan 20 15:09:02 compute-0 sudo[350279]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:09:02 compute-0 sudo[350279]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:09:02 compute-0 sudo[350279]: pam_unix(sudo:session): session closed for user root
Jan 20 15:09:02 compute-0 sudo[350304]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 15:09:02 compute-0 sudo[350304]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:09:02 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2566: 321 pgs: 321 active+clean; 608 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 2.6 MiB/s wr, 394 op/s
Jan 20 15:09:02 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:09:02 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:09:02 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:09:02.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:09:02 compute-0 nova_compute[250018]: 2026-01-20 15:09:02.559 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:09:02 compute-0 sudo[350304]: pam_unix(sudo:session): session closed for user root
Jan 20 15:09:02 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 15:09:02 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:09:02 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 20 15:09:02 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 15:09:02 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 20 15:09:02 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3351445276' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:09:02 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:09:02 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:09:02 compute-0 ceph-mon[74360]: pgmap v2566: 321 pgs: 321 active+clean; 608 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 2.6 MiB/s wr, 394 op/s
Jan 20 15:09:02 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:09:02 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 9b025b89-e81c-499c-82d1-9d4ad6c97a89 does not exist
Jan 20 15:09:02 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev e89753fc-785a-484d-92e8-fd0dcc7d7ae7 does not exist
Jan 20 15:09:02 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 92023f84-c23c-4b13-bf93-d17f70679bb2 does not exist
Jan 20 15:09:02 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 20 15:09:02 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 15:09:02 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 20 15:09:02 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 15:09:02 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 15:09:02 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:09:03 compute-0 sudo[350360]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:09:03 compute-0 sudo[350360]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:09:03 compute-0 sudo[350360]: pam_unix(sudo:session): session closed for user root
Jan 20 15:09:03 compute-0 nova_compute[250018]: 2026-01-20 15:09:03.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:09:03 compute-0 nova_compute[250018]: 2026-01-20 15:09:03.051 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 20 15:09:03 compute-0 sudo[350385]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:09:03 compute-0 sudo[350385]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:09:03 compute-0 sudo[350385]: pam_unix(sudo:session): session closed for user root
Jan 20 15:09:03 compute-0 sudo[350410]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:09:03 compute-0 sudo[350410]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:09:03 compute-0 sudo[350410]: pam_unix(sudo:session): session closed for user root
Jan 20 15:09:03 compute-0 sudo[350435]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 15:09:03 compute-0 sudo[350435]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:09:03 compute-0 podman[350500]: 2026-01-20 15:09:03.568167328 +0000 UTC m=+0.025980892 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:09:03 compute-0 podman[350500]: 2026-01-20 15:09:03.790665481 +0000 UTC m=+0.248479015 container create 222c9660252cb1285dc3d2ad2843e9c0756a336c0ffb3f92f066254cbfa64370 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_hellman, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 20 15:09:03 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:09:03 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:09:03 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:09:03.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:09:03 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:09:03 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 15:09:03 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:09:03 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 15:09:03 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 15:09:03 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:09:04 compute-0 systemd[1]: Started libpod-conmon-222c9660252cb1285dc3d2ad2843e9c0756a336c0ffb3f92f066254cbfa64370.scope.
Jan 20 15:09:04 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:09:04 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e378 do_prune osdmap full prune enabled
Jan 20 15:09:04 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2567: 321 pgs: 321 active+clean; 622 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.5 MiB/s rd, 2.8 MiB/s wr, 352 op/s
Jan 20 15:09:04 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:09:04 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:09:04 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:09:04.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:09:04 compute-0 podman[350500]: 2026-01-20 15:09:04.51054386 +0000 UTC m=+0.968357434 container init 222c9660252cb1285dc3d2ad2843e9c0756a336c0ffb3f92f066254cbfa64370 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_hellman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 20 15:09:04 compute-0 podman[350500]: 2026-01-20 15:09:04.518662649 +0000 UTC m=+0.976476193 container start 222c9660252cb1285dc3d2ad2843e9c0756a336c0ffb3f92f066254cbfa64370 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_hellman, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 20 15:09:04 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e379 e379: 3 total, 3 up, 3 in
Jan 20 15:09:04 compute-0 compassionate_hellman[350517]: 167 167
Jan 20 15:09:04 compute-0 systemd[1]: libpod-222c9660252cb1285dc3d2ad2843e9c0756a336c0ffb3f92f066254cbfa64370.scope: Deactivated successfully.
Jan 20 15:09:04 compute-0 conmon[350517]: conmon 222c9660252cb1285dc3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-222c9660252cb1285dc3d2ad2843e9c0756a336c0ffb3f92f066254cbfa64370.scope/container/memory.events
Jan 20 15:09:04 compute-0 podman[350500]: 2026-01-20 15:09:04.525610557 +0000 UTC m=+0.983424161 container attach 222c9660252cb1285dc3d2ad2843e9c0756a336c0ffb3f92f066254cbfa64370 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_hellman, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True)
Jan 20 15:09:04 compute-0 podman[350500]: 2026-01-20 15:09:04.527080336 +0000 UTC m=+0.984893880 container died 222c9660252cb1285dc3d2ad2843e9c0756a336c0ffb3f92f066254cbfa64370 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_hellman, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 15:09:04 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e379: 3 total, 3 up, 3 in
Jan 20 15:09:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-6497e320eb5d74fc29f2c34dfad38626b97e5fe6a4199498283a596af18b72f5-merged.mount: Deactivated successfully.
Jan 20 15:09:04 compute-0 podman[350500]: 2026-01-20 15:09:04.733563976 +0000 UTC m=+1.191377520 container remove 222c9660252cb1285dc3d2ad2843e9c0756a336c0ffb3f92f066254cbfa64370 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_hellman, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 20 15:09:04 compute-0 systemd[1]: libpod-conmon-222c9660252cb1285dc3d2ad2843e9c0756a336c0ffb3f92f066254cbfa64370.scope: Deactivated successfully.
Jan 20 15:09:04 compute-0 nova_compute[250018]: 2026-01-20 15:09:04.845 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:09:04 compute-0 podman[350544]: 2026-01-20 15:09:04.879712379 +0000 UTC m=+0.021866011 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:09:05 compute-0 podman[350544]: 2026-01-20 15:09:05.07478357 +0000 UTC m=+0.216937182 container create 23e237a69b8faa4d47b4207ef66c3175ea6fa3d9161d9c231e8ef3bf4ab046cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_goldstine, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 15:09:05 compute-0 systemd[1]: Started libpod-conmon-23e237a69b8faa4d47b4207ef66c3175ea6fa3d9161d9c231e8ef3bf4ab046cc.scope.
Jan 20 15:09:05 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:09:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cc6e702d87ebb0f247879b2ce90674215cdf314a8b8276266f6fd8a046a57ae/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 15:09:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cc6e702d87ebb0f247879b2ce90674215cdf314a8b8276266f6fd8a046a57ae/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 15:09:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cc6e702d87ebb0f247879b2ce90674215cdf314a8b8276266f6fd8a046a57ae/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 15:09:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cc6e702d87ebb0f247879b2ce90674215cdf314a8b8276266f6fd8a046a57ae/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 15:09:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cc6e702d87ebb0f247879b2ce90674215cdf314a8b8276266f6fd8a046a57ae/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 15:09:05 compute-0 podman[350544]: 2026-01-20 15:09:05.329612154 +0000 UTC m=+0.471765856 container init 23e237a69b8faa4d47b4207ef66c3175ea6fa3d9161d9c231e8ef3bf4ab046cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_goldstine, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 20 15:09:05 compute-0 podman[350544]: 2026-01-20 15:09:05.335665997 +0000 UTC m=+0.477819609 container start 23e237a69b8faa4d47b4207ef66c3175ea6fa3d9161d9c231e8ef3bf4ab046cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_goldstine, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 20 15:09:05 compute-0 podman[350544]: 2026-01-20 15:09:05.339700496 +0000 UTC m=+0.481854208 container attach 23e237a69b8faa4d47b4207ef66c3175ea6fa3d9161d9c231e8ef3bf4ab046cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_goldstine, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 20 15:09:05 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e379 do_prune osdmap full prune enabled
Jan 20 15:09:05 compute-0 ceph-mon[74360]: pgmap v2567: 321 pgs: 321 active+clean; 622 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.5 MiB/s rd, 2.8 MiB/s wr, 352 op/s
Jan 20 15:09:05 compute-0 ceph-mon[74360]: osdmap e379: 3 total, 3 up, 3 in
Jan 20 15:09:05 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/2658081429' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:09:05 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/2658081429' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:09:05 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e380 e380: 3 total, 3 up, 3 in
Jan 20 15:09:05 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e380: 3 total, 3 up, 3 in
Jan 20 15:09:05 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:09:05 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:09:05 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:09:05.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:09:06 compute-0 nova_compute[250018]: 2026-01-20 15:09:06.105 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:09:06 compute-0 nova_compute[250018]: 2026-01-20 15:09:06.106 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 15:09:06 compute-0 nova_compute[250018]: 2026-01-20 15:09:06.106 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 15:09:06 compute-0 nova_compute[250018]: 2026-01-20 15:09:06.127 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 20 15:09:06 compute-0 nova_compute[250018]: 2026-01-20 15:09:06.128 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:09:06 compute-0 gifted_goldstine[350561]: --> passed data devices: 0 physical, 1 LVM
Jan 20 15:09:06 compute-0 gifted_goldstine[350561]: --> relative data size: 1.0
Jan 20 15:09:06 compute-0 gifted_goldstine[350561]: --> All data devices are unavailable
Jan 20 15:09:06 compute-0 systemd[1]: libpod-23e237a69b8faa4d47b4207ef66c3175ea6fa3d9161d9c231e8ef3bf4ab046cc.scope: Deactivated successfully.
Jan 20 15:09:06 compute-0 podman[350544]: 2026-01-20 15:09:06.218605585 +0000 UTC m=+1.360759207 container died 23e237a69b8faa4d47b4207ef66c3175ea6fa3d9161d9c231e8ef3bf4ab046cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_goldstine, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 20 15:09:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-3cc6e702d87ebb0f247879b2ce90674215cdf314a8b8276266f6fd8a046a57ae-merged.mount: Deactivated successfully.
Jan 20 15:09:06 compute-0 podman[350544]: 2026-01-20 15:09:06.304649246 +0000 UTC m=+1.446802858 container remove 23e237a69b8faa4d47b4207ef66c3175ea6fa3d9161d9c231e8ef3bf4ab046cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_goldstine, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 20 15:09:06 compute-0 systemd[1]: libpod-conmon-23e237a69b8faa4d47b4207ef66c3175ea6fa3d9161d9c231e8ef3bf4ab046cc.scope: Deactivated successfully.
Jan 20 15:09:06 compute-0 sudo[350435]: pam_unix(sudo:session): session closed for user root
Jan 20 15:09:06 compute-0 sudo[350590]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:09:06 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2570: 321 pgs: 17 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 300 active+clean; 594 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 2.9 MiB/s wr, 214 op/s
Jan 20 15:09:06 compute-0 sudo[350590]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:09:06 compute-0 sudo[350590]: pam_unix(sudo:session): session closed for user root
Jan 20 15:09:06 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e380 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:09:06 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:09:06 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:09:06 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:09:06.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:09:06 compute-0 sudo[350615]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:09:06 compute-0 sudo[350615]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:09:06 compute-0 sudo[350615]: pam_unix(sudo:session): session closed for user root
Jan 20 15:09:06 compute-0 sudo[350640]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:09:06 compute-0 sudo[350640]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:09:06 compute-0 sudo[350640]: pam_unix(sudo:session): session closed for user root
Jan 20 15:09:06 compute-0 sudo[350665]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- lvm list --format json
Jan 20 15:09:06 compute-0 sudo[350665]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:09:06 compute-0 ceph-mon[74360]: osdmap e380: 3 total, 3 up, 3 in
Jan 20 15:09:06 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/3460090808' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:09:06 compute-0 ceph-mon[74360]: pgmap v2570: 321 pgs: 17 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 300 active+clean; 594 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 2.9 MiB/s wr, 214 op/s
Jan 20 15:09:06 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/330773330' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:09:07 compute-0 podman[350731]: 2026-01-20 15:09:07.007991729 +0000 UTC m=+0.059483686 container create 16dea60dacab0e50ba5a26b73d77f37a1703c7be06e51000e2a1b9a2e7251703 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_poincare, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 20 15:09:07 compute-0 systemd[1]: Started libpod-conmon-16dea60dacab0e50ba5a26b73d77f37a1703c7be06e51000e2a1b9a2e7251703.scope.
Jan 20 15:09:07 compute-0 podman[350731]: 2026-01-20 15:09:06.976040187 +0000 UTC m=+0.027532104 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:09:07 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:09:07 compute-0 podman[350731]: 2026-01-20 15:09:07.114542393 +0000 UTC m=+0.166034270 container init 16dea60dacab0e50ba5a26b73d77f37a1703c7be06e51000e2a1b9a2e7251703 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_poincare, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 15:09:07 compute-0 podman[350731]: 2026-01-20 15:09:07.123795033 +0000 UTC m=+0.175286890 container start 16dea60dacab0e50ba5a26b73d77f37a1703c7be06e51000e2a1b9a2e7251703 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_poincare, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 20 15:09:07 compute-0 podman[350731]: 2026-01-20 15:09:07.131413018 +0000 UTC m=+0.182904885 container attach 16dea60dacab0e50ba5a26b73d77f37a1703c7be06e51000e2a1b9a2e7251703 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_poincare, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS)
Jan 20 15:09:07 compute-0 trusting_poincare[350747]: 167 167
Jan 20 15:09:07 compute-0 systemd[1]: libpod-16dea60dacab0e50ba5a26b73d77f37a1703c7be06e51000e2a1b9a2e7251703.scope: Deactivated successfully.
Jan 20 15:09:07 compute-0 conmon[350747]: conmon 16dea60dacab0e50ba5a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-16dea60dacab0e50ba5a26b73d77f37a1703c7be06e51000e2a1b9a2e7251703.scope/container/memory.events
Jan 20 15:09:07 compute-0 podman[350731]: 2026-01-20 15:09:07.135358225 +0000 UTC m=+0.186850082 container died 16dea60dacab0e50ba5a26b73d77f37a1703c7be06e51000e2a1b9a2e7251703 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_poincare, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:09:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-4230c11416e17585821b4a4c2008c431f021d97d4ab1b36ddbbb4719a0e1caed-merged.mount: Deactivated successfully.
Jan 20 15:09:07 compute-0 podman[350731]: 2026-01-20 15:09:07.209115414 +0000 UTC m=+0.260607281 container remove 16dea60dacab0e50ba5a26b73d77f37a1703c7be06e51000e2a1b9a2e7251703 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_poincare, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 20 15:09:07 compute-0 systemd[1]: libpod-conmon-16dea60dacab0e50ba5a26b73d77f37a1703c7be06e51000e2a1b9a2e7251703.scope: Deactivated successfully.
Jan 20 15:09:07 compute-0 podman[350772]: 2026-01-20 15:09:07.400694022 +0000 UTC m=+0.054357568 container create 5f2142c5cd9e501b9d2a11c3b638f79e259f68a57edc95f6fbdeeeaef9593c32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_banach, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 15:09:07 compute-0 systemd[1]: Started libpod-conmon-5f2142c5cd9e501b9d2a11c3b638f79e259f68a57edc95f6fbdeeeaef9593c32.scope.
Jan 20 15:09:07 compute-0 podman[350772]: 2026-01-20 15:09:07.379799718 +0000 UTC m=+0.033463374 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:09:07 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:09:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c83d566678981870f476b0a4934340da2bf1947943095dc9edff1d4eb210bc7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 15:09:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c83d566678981870f476b0a4934340da2bf1947943095dc9edff1d4eb210bc7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 15:09:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c83d566678981870f476b0a4934340da2bf1947943095dc9edff1d4eb210bc7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 15:09:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c83d566678981870f476b0a4934340da2bf1947943095dc9edff1d4eb210bc7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 15:09:07 compute-0 podman[350772]: 2026-01-20 15:09:07.500606508 +0000 UTC m=+0.154270084 container init 5f2142c5cd9e501b9d2a11c3b638f79e259f68a57edc95f6fbdeeeaef9593c32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_banach, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 15:09:07 compute-0 podman[350772]: 2026-01-20 15:09:07.507464563 +0000 UTC m=+0.161128109 container start 5f2142c5cd9e501b9d2a11c3b638f79e259f68a57edc95f6fbdeeeaef9593c32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_banach, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 20 15:09:07 compute-0 podman[350772]: 2026-01-20 15:09:07.511067379 +0000 UTC m=+0.164731125 container attach 5f2142c5cd9e501b9d2a11c3b638f79e259f68a57edc95f6fbdeeeaef9593c32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_banach, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 15:09:07 compute-0 nova_compute[250018]: 2026-01-20 15:09:07.562 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:09:07 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/3862902750' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:09:07 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/3862902750' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:09:07 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/1328764846' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:09:07 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:09:07 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:09:07 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:09:07.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:09:08 compute-0 nova_compute[250018]: 2026-01-20 15:09:08.069 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:09:08 compute-0 distracted_banach[350789]: {
Jan 20 15:09:08 compute-0 distracted_banach[350789]:     "0": [
Jan 20 15:09:08 compute-0 distracted_banach[350789]:         {
Jan 20 15:09:08 compute-0 distracted_banach[350789]:             "devices": [
Jan 20 15:09:08 compute-0 distracted_banach[350789]:                 "/dev/loop3"
Jan 20 15:09:08 compute-0 distracted_banach[350789]:             ],
Jan 20 15:09:08 compute-0 distracted_banach[350789]:             "lv_name": "ceph_lv0",
Jan 20 15:09:08 compute-0 distracted_banach[350789]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 15:09:08 compute-0 distracted_banach[350789]:             "lv_size": "7511998464",
Jan 20 15:09:08 compute-0 distracted_banach[350789]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e399cf45-e6b6-5393-99f1-75c601d3f188,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afc3740a-ccee-46f8-b343-22648cc89376,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 20 15:09:08 compute-0 distracted_banach[350789]:             "lv_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 15:09:08 compute-0 distracted_banach[350789]:             "name": "ceph_lv0",
Jan 20 15:09:08 compute-0 distracted_banach[350789]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 15:09:08 compute-0 distracted_banach[350789]:             "tags": {
Jan 20 15:09:08 compute-0 distracted_banach[350789]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 15:09:08 compute-0 distracted_banach[350789]:                 "ceph.block_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 15:09:08 compute-0 distracted_banach[350789]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 15:09:08 compute-0 distracted_banach[350789]:                 "ceph.cluster_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 15:09:08 compute-0 distracted_banach[350789]:                 "ceph.cluster_name": "ceph",
Jan 20 15:09:08 compute-0 distracted_banach[350789]:                 "ceph.crush_device_class": "",
Jan 20 15:09:08 compute-0 distracted_banach[350789]:                 "ceph.encrypted": "0",
Jan 20 15:09:08 compute-0 distracted_banach[350789]:                 "ceph.osd_fsid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 15:09:08 compute-0 distracted_banach[350789]:                 "ceph.osd_id": "0",
Jan 20 15:09:08 compute-0 distracted_banach[350789]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 15:09:08 compute-0 distracted_banach[350789]:                 "ceph.type": "block",
Jan 20 15:09:08 compute-0 distracted_banach[350789]:                 "ceph.vdo": "0"
Jan 20 15:09:08 compute-0 distracted_banach[350789]:             },
Jan 20 15:09:08 compute-0 distracted_banach[350789]:             "type": "block",
Jan 20 15:09:08 compute-0 distracted_banach[350789]:             "vg_name": "ceph_vg0"
Jan 20 15:09:08 compute-0 distracted_banach[350789]:         }
Jan 20 15:09:08 compute-0 distracted_banach[350789]:     ]
Jan 20 15:09:08 compute-0 distracted_banach[350789]: }
Jan 20 15:09:08 compute-0 systemd[1]: libpod-5f2142c5cd9e501b9d2a11c3b638f79e259f68a57edc95f6fbdeeeaef9593c32.scope: Deactivated successfully.
Jan 20 15:09:08 compute-0 podman[350772]: 2026-01-20 15:09:08.318615683 +0000 UTC m=+0.972279299 container died 5f2142c5cd9e501b9d2a11c3b638f79e259f68a57edc95f6fbdeeeaef9593c32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_banach, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 15:09:08 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2571: 321 pgs: 17 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 300 active+clean; 594 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 868 KiB/s rd, 2.7 MiB/s wr, 181 op/s
Jan 20 15:09:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-5c83d566678981870f476b0a4934340da2bf1947943095dc9edff1d4eb210bc7-merged.mount: Deactivated successfully.
Jan 20 15:09:08 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:09:08 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:09:08 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:09:08.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:09:08 compute-0 podman[350772]: 2026-01-20 15:09:08.492499553 +0000 UTC m=+1.146163099 container remove 5f2142c5cd9e501b9d2a11c3b638f79e259f68a57edc95f6fbdeeeaef9593c32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_banach, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 15:09:08 compute-0 systemd[1]: libpod-conmon-5f2142c5cd9e501b9d2a11c3b638f79e259f68a57edc95f6fbdeeeaef9593c32.scope: Deactivated successfully.
Jan 20 15:09:08 compute-0 sudo[350665]: pam_unix(sudo:session): session closed for user root
Jan 20 15:09:08 compute-0 sudo[350811]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:09:08 compute-0 sudo[350811]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:09:08 compute-0 sudo[350811]: pam_unix(sudo:session): session closed for user root
Jan 20 15:09:08 compute-0 sudo[350836]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:09:08 compute-0 sudo[350836]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:09:08 compute-0 sudo[350836]: pam_unix(sudo:session): session closed for user root
Jan 20 15:09:08 compute-0 sudo[350861]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:09:08 compute-0 sudo[350861]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:09:08 compute-0 sudo[350861]: pam_unix(sudo:session): session closed for user root
Jan 20 15:09:08 compute-0 sudo[350886]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- raw list --format json
Jan 20 15:09:08 compute-0 sudo[350886]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:09:09 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/4222916982' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:09:09 compute-0 ceph-mon[74360]: pgmap v2571: 321 pgs: 17 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 300 active+clean; 594 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 868 KiB/s rd, 2.7 MiB/s wr, 181 op/s
Jan 20 15:09:09 compute-0 podman[350951]: 2026-01-20 15:09:09.220985504 +0000 UTC m=+0.039303971 container create bc45ba8c2190345b6dd98d9d75f0b69dd2f834233f9d319305dadc65be70dd06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_dirac, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 15:09:09 compute-0 systemd[1]: Started libpod-conmon-bc45ba8c2190345b6dd98d9d75f0b69dd2f834233f9d319305dadc65be70dd06.scope.
Jan 20 15:09:09 compute-0 podman[350951]: 2026-01-20 15:09:09.206043252 +0000 UTC m=+0.024361739 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:09:09 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:09:09 compute-0 podman[350951]: 2026-01-20 15:09:09.323639304 +0000 UTC m=+0.141957791 container init bc45ba8c2190345b6dd98d9d75f0b69dd2f834233f9d319305dadc65be70dd06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_dirac, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 15:09:09 compute-0 podman[350951]: 2026-01-20 15:09:09.331086234 +0000 UTC m=+0.149404721 container start bc45ba8c2190345b6dd98d9d75f0b69dd2f834233f9d319305dadc65be70dd06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_dirac, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 15:09:09 compute-0 systemd[1]: libpod-bc45ba8c2190345b6dd98d9d75f0b69dd2f834233f9d319305dadc65be70dd06.scope: Deactivated successfully.
Jan 20 15:09:09 compute-0 pensive_dirac[350968]: 167 167
Jan 20 15:09:09 compute-0 podman[350951]: 2026-01-20 15:09:09.335633397 +0000 UTC m=+0.153951884 container attach bc45ba8c2190345b6dd98d9d75f0b69dd2f834233f9d319305dadc65be70dd06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_dirac, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True)
Jan 20 15:09:09 compute-0 podman[350951]: 2026-01-20 15:09:09.335955405 +0000 UTC m=+0.154273872 container died bc45ba8c2190345b6dd98d9d75f0b69dd2f834233f9d319305dadc65be70dd06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_dirac, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 20 15:09:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-d8f9c7038231dc46d0bfac9836a4ee9b1383051c0d4b47269bfb0d8ad382472d-merged.mount: Deactivated successfully.
Jan 20 15:09:09 compute-0 podman[350951]: 2026-01-20 15:09:09.37097061 +0000 UTC m=+0.189289077 container remove bc45ba8c2190345b6dd98d9d75f0b69dd2f834233f9d319305dadc65be70dd06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_dirac, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True)
Jan 20 15:09:09 compute-0 systemd[1]: libpod-conmon-bc45ba8c2190345b6dd98d9d75f0b69dd2f834233f9d319305dadc65be70dd06.scope: Deactivated successfully.
Jan 20 15:09:09 compute-0 podman[350992]: 2026-01-20 15:09:09.543933856 +0000 UTC m=+0.042648491 container create ef0e4ef3075bba90f78a823b11d1dbd62304a2a180b168be3318cfa27a57bb93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_ganguly, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 20 15:09:09 compute-0 systemd[1]: Started libpod-conmon-ef0e4ef3075bba90f78a823b11d1dbd62304a2a180b168be3318cfa27a57bb93.scope.
Jan 20 15:09:09 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:09:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a047ac34d8894f0ff01bfabc2f207f41f15d47523ee55f4f07bd56adce9ce145/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 15:09:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a047ac34d8894f0ff01bfabc2f207f41f15d47523ee55f4f07bd56adce9ce145/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 15:09:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a047ac34d8894f0ff01bfabc2f207f41f15d47523ee55f4f07bd56adce9ce145/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 15:09:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a047ac34d8894f0ff01bfabc2f207f41f15d47523ee55f4f07bd56adce9ce145/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 15:09:09 compute-0 podman[350992]: 2026-01-20 15:09:09.52592509 +0000 UTC m=+0.024639755 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:09:09 compute-0 podman[350992]: 2026-01-20 15:09:09.627292675 +0000 UTC m=+0.126007310 container init ef0e4ef3075bba90f78a823b11d1dbd62304a2a180b168be3318cfa27a57bb93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_ganguly, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 20 15:09:09 compute-0 podman[350992]: 2026-01-20 15:09:09.63343186 +0000 UTC m=+0.132146495 container start ef0e4ef3075bba90f78a823b11d1dbd62304a2a180b168be3318cfa27a57bb93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_ganguly, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 20 15:09:09 compute-0 podman[350992]: 2026-01-20 15:09:09.637431148 +0000 UTC m=+0.136145783 container attach ef0e4ef3075bba90f78a823b11d1dbd62304a2a180b168be3318cfa27a57bb93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_ganguly, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 20 15:09:09 compute-0 nova_compute[250018]: 2026-01-20 15:09:09.847 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:09:09 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:09:09 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:09:09 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:09:09.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:09:10 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2572: 321 pgs: 16 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 303 active+clean; 490 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 2.8 MiB/s wr, 333 op/s
Jan 20 15:09:10 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:09:10 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:09:10 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:09:10.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:09:10 compute-0 goofy_ganguly[351009]: {
Jan 20 15:09:10 compute-0 goofy_ganguly[351009]:     "afc3740a-ccee-46f8-b343-22648cc89376": {
Jan 20 15:09:10 compute-0 goofy_ganguly[351009]:         "ceph_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 15:09:10 compute-0 goofy_ganguly[351009]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 20 15:09:10 compute-0 goofy_ganguly[351009]:         "osd_id": 0,
Jan 20 15:09:10 compute-0 goofy_ganguly[351009]:         "osd_uuid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 15:09:10 compute-0 goofy_ganguly[351009]:         "type": "bluestore"
Jan 20 15:09:10 compute-0 goofy_ganguly[351009]:     }
Jan 20 15:09:10 compute-0 goofy_ganguly[351009]: }
Jan 20 15:09:10 compute-0 systemd[1]: libpod-ef0e4ef3075bba90f78a823b11d1dbd62304a2a180b168be3318cfa27a57bb93.scope: Deactivated successfully.
Jan 20 15:09:10 compute-0 podman[350992]: 2026-01-20 15:09:10.522742769 +0000 UTC m=+1.021457444 container died ef0e4ef3075bba90f78a823b11d1dbd62304a2a180b168be3318cfa27a57bb93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_ganguly, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 20 15:09:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-a047ac34d8894f0ff01bfabc2f207f41f15d47523ee55f4f07bd56adce9ce145-merged.mount: Deactivated successfully.
Jan 20 15:09:10 compute-0 podman[350992]: 2026-01-20 15:09:10.584625738 +0000 UTC m=+1.083340383 container remove ef0e4ef3075bba90f78a823b11d1dbd62304a2a180b168be3318cfa27a57bb93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_ganguly, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 20 15:09:10 compute-0 systemd[1]: libpod-conmon-ef0e4ef3075bba90f78a823b11d1dbd62304a2a180b168be3318cfa27a57bb93.scope: Deactivated successfully.
Jan 20 15:09:10 compute-0 sudo[350886]: pam_unix(sudo:session): session closed for user root
Jan 20 15:09:10 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 15:09:10 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:09:10 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 15:09:10 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:09:10 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 48663e7a-e448-4044-81a0-0de6c1bac5df does not exist
Jan 20 15:09:10 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 4784b7b8-3d42-42b1-b62a-638f88765c28 does not exist
Jan 20 15:09:10 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev fecc44b3-293c-4086-91cb-2e5d8e006a2b does not exist
Jan 20 15:09:10 compute-0 sudo[351043]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:09:10 compute-0 sudo[351043]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:09:10 compute-0 sudo[351043]: pam_unix(sudo:session): session closed for user root
Jan 20 15:09:10 compute-0 sudo[351068]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 15:09:10 compute-0 sudo[351068]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:09:10 compute-0 sudo[351068]: pam_unix(sudo:session): session closed for user root
Jan 20 15:09:11 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e380 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:09:11 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e380 do_prune osdmap full prune enabled
Jan 20 15:09:11 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e381 e381: 3 total, 3 up, 3 in
Jan 20 15:09:11 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e381: 3 total, 3 up, 3 in
Jan 20 15:09:11 compute-0 ceph-mon[74360]: pgmap v2572: 321 pgs: 16 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 303 active+clean; 490 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 2.8 MiB/s wr, 333 op/s
Jan 20 15:09:11 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:09:11 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:09:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 15:09:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:09:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 20 15:09:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:09:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.008505163258421738 of space, bias 1.0, pg target 2.5515489775265214 quantized to 32 (current 32)
Jan 20 15:09:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:09:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021625052345058625 of space, bias 1.0, pg target 0.644426559882747 quantized to 32 (current 32)
Jan 20 15:09:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:09:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.416259538432905e-05 quantized to 32 (current 32)
Jan 20 15:09:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:09:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0028546232319002418 of space, bias 1.0, pg target 0.8506777231062721 quantized to 32 (current 32)
Jan 20 15:09:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:09:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017332030522985297 quantized to 16 (current 32)
Jan 20 15:09:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:09:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 15:09:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:09:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.0002166503815373162 quantized to 32 (current 32)
Jan 20 15:09:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:09:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018415282430671877 quantized to 32 (current 32)
Jan 20 15:09:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:09:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 15:09:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:09:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004333007630746324 quantized to 32 (current 32)
Jan 20 15:09:11 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:09:11 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:09:11 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:09:11.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:09:12 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2574: 321 pgs: 321 active+clean; 475 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 2.0 MiB/s wr, 361 op/s
Jan 20 15:09:12 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:09:12 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:09:12 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:09:12.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:09:12 compute-0 nova_compute[250018]: 2026-01-20 15:09:12.531 250022 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768921737.5293229, 9f36f762-4fcd-4996-a347-43e8704458c9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:09:12 compute-0 nova_compute[250018]: 2026-01-20 15:09:12.531 250022 INFO nova.compute.manager [-] [instance: 9f36f762-4fcd-4996-a347-43e8704458c9] VM Stopped (Lifecycle Event)
Jan 20 15:09:12 compute-0 nova_compute[250018]: 2026-01-20 15:09:12.569 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:09:12 compute-0 nova_compute[250018]: 2026-01-20 15:09:12.610 250022 DEBUG nova.compute.manager [None req-a9775559-30cd-4d5f-9dc8-79b0e21edf50 - - - - - -] [instance: 9f36f762-4fcd-4996-a347-43e8704458c9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:09:13 compute-0 nova_compute[250018]: 2026-01-20 15:09:13.099 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:09:13 compute-0 ceph-mon[74360]: osdmap e381: 3 total, 3 up, 3 in
Jan 20 15:09:13 compute-0 nova_compute[250018]: 2026-01-20 15:09:13.333 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:09:13 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:09:13 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:09:13 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:09:13.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:09:14 compute-0 nova_compute[250018]: 2026-01-20 15:09:14.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:09:14 compute-0 nova_compute[250018]: 2026-01-20 15:09:14.051 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 20 15:09:14 compute-0 nova_compute[250018]: 2026-01-20 15:09:14.064 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 20 15:09:14 compute-0 ceph-mon[74360]: pgmap v2574: 321 pgs: 321 active+clean; 475 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 2.0 MiB/s wr, 361 op/s
Jan 20 15:09:14 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/668410972' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:09:14 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/668410972' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:09:14 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2575: 321 pgs: 321 active+clean; 475 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.7 MiB/s rd, 574 KiB/s wr, 309 op/s
Jan 20 15:09:14 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:09:14 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:09:14 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:09:14.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:09:14 compute-0 nova_compute[250018]: 2026-01-20 15:09:14.848 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:09:15 compute-0 nova_compute[250018]: 2026-01-20 15:09:15.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:09:15 compute-0 ceph-mon[74360]: pgmap v2575: 321 pgs: 321 active+clean; 475 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.7 MiB/s rd, 574 KiB/s wr, 309 op/s
Jan 20 15:09:15 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:09:15 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:09:15 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:09:15.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:09:16 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2576: 321 pgs: 321 active+clean; 475 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 37 KiB/s wr, 224 op/s
Jan 20 15:09:16 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:09:16 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:09:16 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:09:16.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:09:16 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e381 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:09:16 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e381 do_prune osdmap full prune enabled
Jan 20 15:09:16 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e382 e382: 3 total, 3 up, 3 in
Jan 20 15:09:16 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e382: 3 total, 3 up, 3 in
Jan 20 15:09:17 compute-0 ceph-mon[74360]: pgmap v2576: 321 pgs: 321 active+clean; 475 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 37 KiB/s wr, 224 op/s
Jan 20 15:09:17 compute-0 ceph-mon[74360]: osdmap e382: 3 total, 3 up, 3 in
Jan 20 15:09:17 compute-0 nova_compute[250018]: 2026-01-20 15:09:17.572 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:09:17 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:09:17 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:09:17 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:09:17.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:09:18 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2578: 321 pgs: 321 active+clean; 475 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 8.7 KiB/s wr, 110 op/s
Jan 20 15:09:18 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:09:18 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:09:18 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:09:18.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:09:18 compute-0 sudo[351098]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:09:18 compute-0 sudo[351098]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:09:18 compute-0 sudo[351098]: pam_unix(sudo:session): session closed for user root
Jan 20 15:09:18 compute-0 sudo[351123]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:09:18 compute-0 sudo[351123]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:09:18 compute-0 sudo[351123]: pam_unix(sudo:session): session closed for user root
Jan 20 15:09:19 compute-0 ceph-mon[74360]: pgmap v2578: 321 pgs: 321 active+clean; 475 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 8.7 KiB/s wr, 110 op/s
Jan 20 15:09:19 compute-0 nova_compute[250018]: 2026-01-20 15:09:19.849 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:09:19 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:09:19 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:09:19 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:09:19.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:09:20 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2579: 321 pgs: 321 active+clean; 475 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 6.3 KiB/s wr, 55 op/s
Jan 20 15:09:20 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:09:20 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:09:20 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:09:20.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:09:21 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:09:21.385 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=55, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=54) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:09:21 compute-0 nova_compute[250018]: 2026-01-20 15:09:21.385 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:09:21 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:09:21.386 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 15:09:21 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:09:21 compute-0 ceph-mon[74360]: pgmap v2579: 321 pgs: 321 active+clean; 475 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 6.3 KiB/s wr, 55 op/s
Jan 20 15:09:21 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:09:21 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:09:21 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:09:21.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:09:22 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2580: 321 pgs: 321 active+clean; 475 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 5.6 KiB/s wr, 50 op/s
Jan 20 15:09:22 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:09:22 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:09:22 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:09:22.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:09:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:09:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:09:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:09:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:09:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:09:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:09:22 compute-0 nova_compute[250018]: 2026-01-20 15:09:22.574 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:09:23 compute-0 ceph-mon[74360]: pgmap v2580: 321 pgs: 321 active+clean; 475 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 5.6 KiB/s wr, 50 op/s
Jan 20 15:09:23 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:09:23 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:09:23 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:09:23.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:09:24 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2581: 321 pgs: 321 active+clean; 480 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 338 KiB/s rd, 246 KiB/s wr, 42 op/s
Jan 20 15:09:24 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:09:24 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:09:24 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:09:24.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:09:24 compute-0 nova_compute[250018]: 2026-01-20 15:09:24.852 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:09:25 compute-0 ceph-mon[74360]: pgmap v2581: 321 pgs: 321 active+clean; 480 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 338 KiB/s rd, 246 KiB/s wr, 42 op/s
Jan 20 15:09:25 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:09:25 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:09:25 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:09:25.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:09:26 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2582: 321 pgs: 321 active+clean; 508 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 2.6 MiB/s wr, 130 op/s
Jan 20 15:09:26 compute-0 podman[351153]: 2026-01-20 15:09:26.474454476 +0000 UTC m=+0.059410363 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 15:09:26 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:09:26 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:09:26 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:09:26.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:09:26 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:09:26 compute-0 podman[351152]: 2026-01-20 15:09:26.511471274 +0000 UTC m=+0.096589137 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 20 15:09:27 compute-0 nova_compute[250018]: 2026-01-20 15:09:27.576 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:09:27 compute-0 ceph-mon[74360]: pgmap v2582: 321 pgs: 321 active+clean; 508 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 2.6 MiB/s wr, 130 op/s
Jan 20 15:09:27 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:09:27 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:09:27 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:09:27.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:09:28 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2583: 321 pgs: 321 active+clean; 508 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 864 KiB/s rd, 2.2 MiB/s wr, 109 op/s
Jan 20 15:09:28 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:09:28 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:09:28 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:09:28.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:09:28 compute-0 ceph-mon[74360]: pgmap v2583: 321 pgs: 321 active+clean; 508 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 864 KiB/s rd, 2.2 MiB/s wr, 109 op/s
Jan 20 15:09:29 compute-0 nova_compute[250018]: 2026-01-20 15:09:29.853 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:09:29 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:09:29 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:09:29 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:09:29.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:09:30 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2584: 321 pgs: 321 active+clean; 510 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1002 KiB/s rd, 2.2 MiB/s wr, 114 op/s
Jan 20 15:09:30 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:09:30 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:09:30 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:09:30.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:09:30 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/1946179170' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:09:30.779 160071 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:09:30.780 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:09:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:09:30.780 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:09:31 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:09:31.389 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=367c1a2c-b16a-4828-ab5a-626bb50023b4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '55'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:09:31 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:09:31 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:09:31 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:09:31 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:09:31.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:09:31 compute-0 ceph-mon[74360]: pgmap v2584: 321 pgs: 321 active+clean; 510 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1002 KiB/s rd, 2.2 MiB/s wr, 114 op/s
Jan 20 15:09:32 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2585: 321 pgs: 321 active+clean; 510 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 2.2 MiB/s wr, 115 op/s
Jan 20 15:09:32 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:09:32 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:09:32 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:09:32.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:09:32 compute-0 nova_compute[250018]: 2026-01-20 15:09:32.578 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:09:32 compute-0 ceph-mon[74360]: pgmap v2585: 321 pgs: 321 active+clean; 510 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 2.2 MiB/s wr, 115 op/s
Jan 20 15:09:33 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:09:33 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:09:33 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:09:33.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:09:34 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2586: 321 pgs: 321 active+clean; 510 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 2.2 MiB/s wr, 114 op/s
Jan 20 15:09:34 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:09:34 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:09:34 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:09:34.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:09:34 compute-0 nova_compute[250018]: 2026-01-20 15:09:34.855 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:09:35 compute-0 ceph-mon[74360]: pgmap v2586: 321 pgs: 321 active+clean; 510 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 2.2 MiB/s wr, 114 op/s
Jan 20 15:09:35 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:09:35 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:09:35 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:09:35.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:09:36 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2587: 321 pgs: 321 active+clean; 510 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 801 KiB/s rd, 2.0 MiB/s wr, 98 op/s
Jan 20 15:09:36 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:09:36 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:09:36 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:09:36 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:09:36.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:09:36 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/2893274847' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:09:36 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/2893274847' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:09:37 compute-0 nova_compute[250018]: 2026-01-20 15:09:37.580 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:09:37 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:09:37 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:09:37 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:09:37.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:09:38 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2588: 321 pgs: 321 active+clean; 510 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 225 KiB/s rd, 46 KiB/s wr, 24 op/s
Jan 20 15:09:38 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:09:38 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:09:38 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:09:38.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:09:38 compute-0 sudo[351201]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:09:38 compute-0 sudo[351201]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:09:38 compute-0 sudo[351201]: pam_unix(sudo:session): session closed for user root
Jan 20 15:09:38 compute-0 sudo[351226]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:09:38 compute-0 sudo[351226]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:09:38 compute-0 sudo[351226]: pam_unix(sudo:session): session closed for user root
Jan 20 15:09:39 compute-0 ceph-mon[74360]: pgmap v2587: 321 pgs: 321 active+clean; 510 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 801 KiB/s rd, 2.0 MiB/s wr, 98 op/s
Jan 20 15:09:39 compute-0 nova_compute[250018]: 2026-01-20 15:09:39.858 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:09:39 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:09:39 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:09:39 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:09:39.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:09:40 compute-0 nova_compute[250018]: 2026-01-20 15:09:40.064 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:09:40 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2589: 321 pgs: 321 active+clean; 497 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 236 KiB/s rd, 1.3 MiB/s wr, 44 op/s
Jan 20 15:09:40 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:09:40 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:09:40 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:09:40.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:09:40 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/3710696462' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:09:40 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/2564933822' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:09:40 compute-0 ceph-mon[74360]: pgmap v2588: 321 pgs: 321 active+clean; 510 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 225 KiB/s rd, 46 KiB/s wr, 24 op/s
Jan 20 15:09:40 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/628547865' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:09:40 compute-0 ceph-mon[74360]: pgmap v2589: 321 pgs: 321 active+clean; 497 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 236 KiB/s rd, 1.3 MiB/s wr, 44 op/s
Jan 20 15:09:41 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:09:41 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/230950978' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:09:41 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:09:41 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:09:41 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:09:41.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:09:42 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2590: 321 pgs: 321 active+clean; 504 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 103 KiB/s rd, 1.8 MiB/s wr, 56 op/s
Jan 20 15:09:42 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:09:42 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:09:42 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:09:42.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:09:42 compute-0 nova_compute[250018]: 2026-01-20 15:09:42.582 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:09:42 compute-0 ceph-mon[74360]: pgmap v2590: 321 pgs: 321 active+clean; 504 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 103 KiB/s rd, 1.8 MiB/s wr, 56 op/s
Jan 20 15:09:43 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e382 do_prune osdmap full prune enabled
Jan 20 15:09:43 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e383 e383: 3 total, 3 up, 3 in
Jan 20 15:09:43 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e383: 3 total, 3 up, 3 in
Jan 20 15:09:43 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:09:43 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:09:43 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:09:43.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:09:44 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2592: 321 pgs: 321 active+clean; 475 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 785 KiB/s rd, 2.2 MiB/s wr, 100 op/s
Jan 20 15:09:44 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:09:44 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:09:44 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:09:44.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:09:44 compute-0 ceph-mon[74360]: osdmap e383: 3 total, 3 up, 3 in
Jan 20 15:09:44 compute-0 ceph-mon[74360]: pgmap v2592: 321 pgs: 321 active+clean; 475 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 785 KiB/s rd, 2.2 MiB/s wr, 100 op/s
Jan 20 15:09:44 compute-0 nova_compute[250018]: 2026-01-20 15:09:44.909 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:09:45 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e383 do_prune osdmap full prune enabled
Jan 20 15:09:45 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:09:45 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:09:45 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:09:45.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:09:45 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e384 e384: 3 total, 3 up, 3 in
Jan 20 15:09:45 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e384: 3 total, 3 up, 3 in
Jan 20 15:09:46 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2594: 321 pgs: 321 active+clean; 467 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 3.2 MiB/s wr, 233 op/s
Jan 20 15:09:46 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:09:46 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:09:46 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:09:46 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:09:46.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:09:46 compute-0 ceph-mon[74360]: osdmap e384: 3 total, 3 up, 3 in
Jan 20 15:09:46 compute-0 ceph-mon[74360]: pgmap v2594: 321 pgs: 321 active+clean; 467 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 3.2 MiB/s wr, 233 op/s
Jan 20 15:09:47 compute-0 nova_compute[250018]: 2026-01-20 15:09:47.584 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:09:47 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:09:47 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:09:47 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:09:47.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:09:48 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2595: 321 pgs: 321 active+clean; 467 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 1.4 MiB/s wr, 203 op/s
Jan 20 15:09:48 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:09:48 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:09:48 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:09:48.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:09:49 compute-0 ceph-mon[74360]: pgmap v2595: 321 pgs: 321 active+clean; 467 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 1.4 MiB/s wr, 203 op/s
Jan 20 15:09:49 compute-0 nova_compute[250018]: 2026-01-20 15:09:49.667 250022 DEBUG oslo_concurrency.lockutils [None req-56376146-df83-4e74-b3d3-48cb1955646e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Acquiring lock "7f408ee5-e644-4a37-9cd2-3db94e56c638" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:09:49 compute-0 nova_compute[250018]: 2026-01-20 15:09:49.668 250022 DEBUG oslo_concurrency.lockutils [None req-56376146-df83-4e74-b3d3-48cb1955646e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Lock "7f408ee5-e644-4a37-9cd2-3db94e56c638" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:09:49 compute-0 nova_compute[250018]: 2026-01-20 15:09:49.704 250022 DEBUG nova.compute.manager [None req-56376146-df83-4e74-b3d3-48cb1955646e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] [instance: 7f408ee5-e644-4a37-9cd2-3db94e56c638] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 15:09:49 compute-0 nova_compute[250018]: 2026-01-20 15:09:49.806 250022 DEBUG oslo_concurrency.lockutils [None req-56376146-df83-4e74-b3d3-48cb1955646e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:09:49 compute-0 nova_compute[250018]: 2026-01-20 15:09:49.806 250022 DEBUG oslo_concurrency.lockutils [None req-56376146-df83-4e74-b3d3-48cb1955646e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:09:49 compute-0 nova_compute[250018]: 2026-01-20 15:09:49.814 250022 DEBUG nova.virt.hardware [None req-56376146-df83-4e74-b3d3-48cb1955646e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 15:09:49 compute-0 nova_compute[250018]: 2026-01-20 15:09:49.814 250022 INFO nova.compute.claims [None req-56376146-df83-4e74-b3d3-48cb1955646e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] [instance: 7f408ee5-e644-4a37-9cd2-3db94e56c638] Claim successful on node compute-0.ctlplane.example.com
Jan 20 15:09:49 compute-0 nova_compute[250018]: 2026-01-20 15:09:49.938 250022 DEBUG oslo_concurrency.processutils [None req-56376146-df83-4e74-b3d3-48cb1955646e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:09:49 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:09:49 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:09:49 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:09:49.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:09:49 compute-0 nova_compute[250018]: 2026-01-20 15:09:49.969 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:09:50 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:09:50 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/867603612' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:09:50 compute-0 nova_compute[250018]: 2026-01-20 15:09:50.378 250022 DEBUG oslo_concurrency.processutils [None req-56376146-df83-4e74-b3d3-48cb1955646e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:09:50 compute-0 nova_compute[250018]: 2026-01-20 15:09:50.384 250022 DEBUG nova.compute.provider_tree [None req-56376146-df83-4e74-b3d3-48cb1955646e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:09:50 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2596: 321 pgs: 321 active+clean; 458 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 2.6 MiB/s wr, 262 op/s
Jan 20 15:09:50 compute-0 nova_compute[250018]: 2026-01-20 15:09:50.451 250022 DEBUG nova.scheduler.client.report [None req-56376146-df83-4e74-b3d3-48cb1955646e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:09:50 compute-0 nova_compute[250018]: 2026-01-20 15:09:50.475 250022 DEBUG oslo_concurrency.lockutils [None req-56376146-df83-4e74-b3d3-48cb1955646e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.668s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:09:50 compute-0 nova_compute[250018]: 2026-01-20 15:09:50.476 250022 DEBUG nova.compute.manager [None req-56376146-df83-4e74-b3d3-48cb1955646e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] [instance: 7f408ee5-e644-4a37-9cd2-3db94e56c638] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 15:09:50 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/867603612' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:09:50 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:09:50 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:09:50 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:09:50.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:09:50 compute-0 nova_compute[250018]: 2026-01-20 15:09:50.521 250022 DEBUG nova.compute.manager [None req-56376146-df83-4e74-b3d3-48cb1955646e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] [instance: 7f408ee5-e644-4a37-9cd2-3db94e56c638] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 20 15:09:50 compute-0 nova_compute[250018]: 2026-01-20 15:09:50.522 250022 DEBUG nova.network.neutron [None req-56376146-df83-4e74-b3d3-48cb1955646e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] [instance: 7f408ee5-e644-4a37-9cd2-3db94e56c638] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 15:09:50 compute-0 nova_compute[250018]: 2026-01-20 15:09:50.540 250022 INFO nova.virt.libvirt.driver [None req-56376146-df83-4e74-b3d3-48cb1955646e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] [instance: 7f408ee5-e644-4a37-9cd2-3db94e56c638] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 15:09:50 compute-0 nova_compute[250018]: 2026-01-20 15:09:50.558 250022 DEBUG nova.compute.manager [None req-56376146-df83-4e74-b3d3-48cb1955646e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] [instance: 7f408ee5-e644-4a37-9cd2-3db94e56c638] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 15:09:50 compute-0 nova_compute[250018]: 2026-01-20 15:09:50.737 250022 DEBUG nova.compute.manager [None req-56376146-df83-4e74-b3d3-48cb1955646e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] [instance: 7f408ee5-e644-4a37-9cd2-3db94e56c638] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 15:09:50 compute-0 nova_compute[250018]: 2026-01-20 15:09:50.738 250022 DEBUG nova.virt.libvirt.driver [None req-56376146-df83-4e74-b3d3-48cb1955646e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] [instance: 7f408ee5-e644-4a37-9cd2-3db94e56c638] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 15:09:50 compute-0 nova_compute[250018]: 2026-01-20 15:09:50.739 250022 INFO nova.virt.libvirt.driver [None req-56376146-df83-4e74-b3d3-48cb1955646e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] [instance: 7f408ee5-e644-4a37-9cd2-3db94e56c638] Creating image(s)
Jan 20 15:09:50 compute-0 nova_compute[250018]: 2026-01-20 15:09:50.770 250022 DEBUG nova.storage.rbd_utils [None req-56376146-df83-4e74-b3d3-48cb1955646e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] rbd image 7f408ee5-e644-4a37-9cd2-3db94e56c638_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:09:50 compute-0 nova_compute[250018]: 2026-01-20 15:09:50.801 250022 DEBUG nova.storage.rbd_utils [None req-56376146-df83-4e74-b3d3-48cb1955646e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] rbd image 7f408ee5-e644-4a37-9cd2-3db94e56c638_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:09:50 compute-0 nova_compute[250018]: 2026-01-20 15:09:50.828 250022 DEBUG nova.storage.rbd_utils [None req-56376146-df83-4e74-b3d3-48cb1955646e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] rbd image 7f408ee5-e644-4a37-9cd2-3db94e56c638_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:09:50 compute-0 nova_compute[250018]: 2026-01-20 15:09:50.832 250022 DEBUG oslo_concurrency.lockutils [None req-56376146-df83-4e74-b3d3-48cb1955646e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Acquiring lock "d098465a3e855b629d7a6148197a821861fe0d4b" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:09:50 compute-0 nova_compute[250018]: 2026-01-20 15:09:50.833 250022 DEBUG oslo_concurrency.lockutils [None req-56376146-df83-4e74-b3d3-48cb1955646e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Lock "d098465a3e855b629d7a6148197a821861fe0d4b" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:09:50 compute-0 nova_compute[250018]: 2026-01-20 15:09:50.837 250022 DEBUG nova.policy [None req-56376146-df83-4e74-b3d3-48cb1955646e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c98bd3f0904e48efa524d598bcad85e9', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5b43342be22543f79d4a56e26c6d0c96', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 20 15:09:51 compute-0 nova_compute[250018]: 2026-01-20 15:09:51.251 250022 DEBUG nova.virt.libvirt.imagebackend [None req-56376146-df83-4e74-b3d3-48cb1955646e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Image locations are: [{'url': 'rbd://e399cf45-e6b6-5393-99f1-75c601d3f188/images/f6b86f20-6a27-42dc-9911-14ae5e0ee2dc/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://e399cf45-e6b6-5393-99f1-75c601d3f188/images/f6b86f20-6a27-42dc-9911-14ae5e0ee2dc/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Jan 20 15:09:51 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:09:51 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e384 do_prune osdmap full prune enabled
Jan 20 15:09:51 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e385 e385: 3 total, 3 up, 3 in
Jan 20 15:09:51 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e385: 3 total, 3 up, 3 in
Jan 20 15:09:51 compute-0 ceph-mon[74360]: pgmap v2596: 321 pgs: 321 active+clean; 458 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 2.6 MiB/s wr, 262 op/s
Jan 20 15:09:51 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:09:51 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:09:51 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:09:51.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:09:52 compute-0 nova_compute[250018]: 2026-01-20 15:09:52.308 250022 DEBUG nova.network.neutron [None req-56376146-df83-4e74-b3d3-48cb1955646e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] [instance: 7f408ee5-e644-4a37-9cd2-3db94e56c638] Successfully created port: e51d2161-817b-46db-b0ce-4b313d293d7f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 20 15:09:52 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2598: 321 pgs: 321 active+clean; 437 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 2.6 MiB/s wr, 262 op/s
Jan 20 15:09:52 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:09:52 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:09:52 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:09:52.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:09:52 compute-0 ceph-mon[74360]: osdmap e385: 3 total, 3 up, 3 in
Jan 20 15:09:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:09:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:09:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:09:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:09:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:09:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:09:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Optimize plan auto_2026-01-20_15:09:52
Jan 20 15:09:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 15:09:52 compute-0 ceph-mgr[74653]: [balancer INFO root] do_upmap
Jan 20 15:09:52 compute-0 ceph-mgr[74653]: [balancer INFO root] pools ['default.rgw.log', 'volumes', 'images', 'default.rgw.control', 'cephfs.cephfs.meta', 'backups', '.mgr', 'cephfs.cephfs.data', 'vms', 'default.rgw.meta', '.rgw.root']
Jan 20 15:09:52 compute-0 ceph-mgr[74653]: [balancer INFO root] prepared 0/10 changes
Jan 20 15:09:52 compute-0 nova_compute[250018]: 2026-01-20 15:09:52.614 250022 DEBUG oslo_concurrency.processutils [None req-56376146-df83-4e74-b3d3-48cb1955646e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d098465a3e855b629d7a6148197a821861fe0d4b.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:09:52 compute-0 nova_compute[250018]: 2026-01-20 15:09:52.648 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:09:52 compute-0 nova_compute[250018]: 2026-01-20 15:09:52.695 250022 DEBUG oslo_concurrency.processutils [None req-56376146-df83-4e74-b3d3-48cb1955646e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d098465a3e855b629d7a6148197a821861fe0d4b.part --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:09:52 compute-0 nova_compute[250018]: 2026-01-20 15:09:52.696 250022 DEBUG nova.virt.images [None req-56376146-df83-4e74-b3d3-48cb1955646e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] f6b86f20-6a27-42dc-9911-14ae5e0ee2dc was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Jan 20 15:09:52 compute-0 nova_compute[250018]: 2026-01-20 15:09:52.697 250022 DEBUG nova.privsep.utils [None req-56376146-df83-4e74-b3d3-48cb1955646e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Jan 20 15:09:52 compute-0 nova_compute[250018]: 2026-01-20 15:09:52.697 250022 DEBUG oslo_concurrency.processutils [None req-56376146-df83-4e74-b3d3-48cb1955646e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/d098465a3e855b629d7a6148197a821861fe0d4b.part /var/lib/nova/instances/_base/d098465a3e855b629d7a6148197a821861fe0d4b.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:09:52 compute-0 nova_compute[250018]: 2026-01-20 15:09:52.922 250022 DEBUG oslo_concurrency.processutils [None req-56376146-df83-4e74-b3d3-48cb1955646e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/d098465a3e855b629d7a6148197a821861fe0d4b.part /var/lib/nova/instances/_base/d098465a3e855b629d7a6148197a821861fe0d4b.converted" returned: 0 in 0.225s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:09:52 compute-0 nova_compute[250018]: 2026-01-20 15:09:52.928 250022 DEBUG oslo_concurrency.processutils [None req-56376146-df83-4e74-b3d3-48cb1955646e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d098465a3e855b629d7a6148197a821861fe0d4b.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:09:53 compute-0 nova_compute[250018]: 2026-01-20 15:09:53.001 250022 DEBUG oslo_concurrency.processutils [None req-56376146-df83-4e74-b3d3-48cb1955646e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d098465a3e855b629d7a6148197a821861fe0d4b.converted --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:09:53 compute-0 nova_compute[250018]: 2026-01-20 15:09:53.003 250022 DEBUG oslo_concurrency.lockutils [None req-56376146-df83-4e74-b3d3-48cb1955646e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Lock "d098465a3e855b629d7a6148197a821861fe0d4b" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 2.170s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:09:53 compute-0 nova_compute[250018]: 2026-01-20 15:09:53.041 250022 DEBUG nova.storage.rbd_utils [None req-56376146-df83-4e74-b3d3-48cb1955646e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] rbd image 7f408ee5-e644-4a37-9cd2-3db94e56c638_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:09:53 compute-0 nova_compute[250018]: 2026-01-20 15:09:53.046 250022 DEBUG oslo_concurrency.processutils [None req-56376146-df83-4e74-b3d3-48cb1955646e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/d098465a3e855b629d7a6148197a821861fe0d4b 7f408ee5-e644-4a37-9cd2-3db94e56c638_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:09:53 compute-0 nova_compute[250018]: 2026-01-20 15:09:53.360 250022 DEBUG oslo_concurrency.processutils [None req-56376146-df83-4e74-b3d3-48cb1955646e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/d098465a3e855b629d7a6148197a821861fe0d4b 7f408ee5-e644-4a37-9cd2-3db94e56c638_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.315s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:09:53 compute-0 nova_compute[250018]: 2026-01-20 15:09:53.440 250022 DEBUG nova.storage.rbd_utils [None req-56376146-df83-4e74-b3d3-48cb1955646e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] resizing rbd image 7f408ee5-e644-4a37-9cd2-3db94e56c638_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 20 15:09:53 compute-0 nova_compute[250018]: 2026-01-20 15:09:53.529 250022 DEBUG nova.objects.instance [None req-56376146-df83-4e74-b3d3-48cb1955646e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Lazy-loading 'migration_context' on Instance uuid 7f408ee5-e644-4a37-9cd2-3db94e56c638 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:09:53 compute-0 nova_compute[250018]: 2026-01-20 15:09:53.543 250022 DEBUG nova.virt.libvirt.driver [None req-56376146-df83-4e74-b3d3-48cb1955646e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] [instance: 7f408ee5-e644-4a37-9cd2-3db94e56c638] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 15:09:53 compute-0 nova_compute[250018]: 2026-01-20 15:09:53.544 250022 DEBUG nova.virt.libvirt.driver [None req-56376146-df83-4e74-b3d3-48cb1955646e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] [instance: 7f408ee5-e644-4a37-9cd2-3db94e56c638] Ensure instance console log exists: /var/lib/nova/instances/7f408ee5-e644-4a37-9cd2-3db94e56c638/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 15:09:53 compute-0 ceph-mon[74360]: pgmap v2598: 321 pgs: 321 active+clean; 437 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 2.6 MiB/s wr, 262 op/s
Jan 20 15:09:53 compute-0 nova_compute[250018]: 2026-01-20 15:09:53.544 250022 DEBUG oslo_concurrency.lockutils [None req-56376146-df83-4e74-b3d3-48cb1955646e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:09:53 compute-0 nova_compute[250018]: 2026-01-20 15:09:53.545 250022 DEBUG oslo_concurrency.lockutils [None req-56376146-df83-4e74-b3d3-48cb1955646e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:09:53 compute-0 nova_compute[250018]: 2026-01-20 15:09:53.545 250022 DEBUG oslo_concurrency.lockutils [None req-56376146-df83-4e74-b3d3-48cb1955646e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:09:53 compute-0 nova_compute[250018]: 2026-01-20 15:09:53.886 250022 DEBUG nova.network.neutron [None req-56376146-df83-4e74-b3d3-48cb1955646e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] [instance: 7f408ee5-e644-4a37-9cd2-3db94e56c638] Successfully updated port: e51d2161-817b-46db-b0ce-4b313d293d7f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 20 15:09:53 compute-0 nova_compute[250018]: 2026-01-20 15:09:53.903 250022 DEBUG oslo_concurrency.lockutils [None req-56376146-df83-4e74-b3d3-48cb1955646e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Acquiring lock "refresh_cache-7f408ee5-e644-4a37-9cd2-3db94e56c638" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:09:53 compute-0 nova_compute[250018]: 2026-01-20 15:09:53.903 250022 DEBUG oslo_concurrency.lockutils [None req-56376146-df83-4e74-b3d3-48cb1955646e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Acquired lock "refresh_cache-7f408ee5-e644-4a37-9cd2-3db94e56c638" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:09:53 compute-0 nova_compute[250018]: 2026-01-20 15:09:53.903 250022 DEBUG nova.network.neutron [None req-56376146-df83-4e74-b3d3-48cb1955646e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] [instance: 7f408ee5-e644-4a37-9cd2-3db94e56c638] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 15:09:53 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:09:53 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:09:53 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:09:53.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:09:53 compute-0 sshd-session[351333]: Connection closed by authenticating user root 134.122.57.138 port 35786 [preauth]
Jan 20 15:09:54 compute-0 nova_compute[250018]: 2026-01-20 15:09:54.100 250022 DEBUG nova.network.neutron [None req-56376146-df83-4e74-b3d3-48cb1955646e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] [instance: 7f408ee5-e644-4a37-9cd2-3db94e56c638] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 15:09:54 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2599: 321 pgs: 321 active+clean; 440 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 2.4 MiB/s wr, 147 op/s
Jan 20 15:09:54 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:09:54 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:09:54 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:09:54.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:09:54 compute-0 nova_compute[250018]: 2026-01-20 15:09:54.949 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:09:54 compute-0 nova_compute[250018]: 2026-01-20 15:09:54.955 250022 DEBUG nova.network.neutron [None req-56376146-df83-4e74-b3d3-48cb1955646e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] [instance: 7f408ee5-e644-4a37-9cd2-3db94e56c638] Updating instance_info_cache with network_info: [{"id": "e51d2161-817b-46db-b0ce-4b313d293d7f", "address": "fa:16:3e:89:1a:96", "network": {"id": "e22d6ddc-0339-4395-bc21-95081825f05b", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1496899124-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5b43342be22543f79d4a56e26c6d0c96", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape51d2161-81", "ovs_interfaceid": "e51d2161-817b-46db-b0ce-4b313d293d7f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:09:54 compute-0 nova_compute[250018]: 2026-01-20 15:09:54.980 250022 DEBUG oslo_concurrency.lockutils [None req-56376146-df83-4e74-b3d3-48cb1955646e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Releasing lock "refresh_cache-7f408ee5-e644-4a37-9cd2-3db94e56c638" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:09:54 compute-0 nova_compute[250018]: 2026-01-20 15:09:54.981 250022 DEBUG nova.compute.manager [None req-56376146-df83-4e74-b3d3-48cb1955646e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] [instance: 7f408ee5-e644-4a37-9cd2-3db94e56c638] Instance network_info: |[{"id": "e51d2161-817b-46db-b0ce-4b313d293d7f", "address": "fa:16:3e:89:1a:96", "network": {"id": "e22d6ddc-0339-4395-bc21-95081825f05b", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1496899124-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5b43342be22543f79d4a56e26c6d0c96", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape51d2161-81", "ovs_interfaceid": "e51d2161-817b-46db-b0ce-4b313d293d7f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 20 15:09:54 compute-0 nova_compute[250018]: 2026-01-20 15:09:54.983 250022 DEBUG nova.virt.libvirt.driver [None req-56376146-df83-4e74-b3d3-48cb1955646e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] [instance: 7f408ee5-e644-4a37-9cd2-3db94e56c638] Start _get_guest_xml network_info=[{"id": "e51d2161-817b-46db-b0ce-4b313d293d7f", "address": "fa:16:3e:89:1a:96", "network": {"id": "e22d6ddc-0339-4395-bc21-95081825f05b", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1496899124-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5b43342be22543f79d4a56e26c6d0c96", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape51d2161-81", "ovs_interfaceid": "e51d2161-817b-46db-b0ce-4b313d293d7f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T15:09:44Z,direct_url=<?>,disk_format='qcow2',id=f6b86f20-6a27-42dc-9911-14ae5e0ee2dc,min_disk=0,min_ram=0,name='tempest-scenario-img--1212854051',owner='5b43342be22543f79d4a56e26c6d0c96',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T15:09:46Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encrypted': False, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'guest_format': None, 'size': 0, 'image_id': 'f6b86f20-6a27-42dc-9911-14ae5e0ee2dc'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 15:09:54 compute-0 nova_compute[250018]: 2026-01-20 15:09:54.988 250022 WARNING nova.virt.libvirt.driver [None req-56376146-df83-4e74-b3d3-48cb1955646e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:09:54 compute-0 nova_compute[250018]: 2026-01-20 15:09:54.993 250022 DEBUG nova.virt.libvirt.host [None req-56376146-df83-4e74-b3d3-48cb1955646e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 15:09:54 compute-0 nova_compute[250018]: 2026-01-20 15:09:54.994 250022 DEBUG nova.virt.libvirt.host [None req-56376146-df83-4e74-b3d3-48cb1955646e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 15:09:54 compute-0 nova_compute[250018]: 2026-01-20 15:09:54.997 250022 DEBUG nova.virt.libvirt.host [None req-56376146-df83-4e74-b3d3-48cb1955646e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 15:09:54 compute-0 nova_compute[250018]: 2026-01-20 15:09:54.997 250022 DEBUG nova.virt.libvirt.host [None req-56376146-df83-4e74-b3d3-48cb1955646e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 15:09:54 compute-0 nova_compute[250018]: 2026-01-20 15:09:54.998 250022 DEBUG nova.virt.libvirt.driver [None req-56376146-df83-4e74-b3d3-48cb1955646e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 15:09:54 compute-0 nova_compute[250018]: 2026-01-20 15:09:54.998 250022 DEBUG nova.virt.hardware [None req-56376146-df83-4e74-b3d3-48cb1955646e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T15:09:44Z,direct_url=<?>,disk_format='qcow2',id=f6b86f20-6a27-42dc-9911-14ae5e0ee2dc,min_disk=0,min_ram=0,name='tempest-scenario-img--1212854051',owner='5b43342be22543f79d4a56e26c6d0c96',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T15:09:46Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 15:09:54 compute-0 nova_compute[250018]: 2026-01-20 15:09:54.999 250022 DEBUG nova.virt.hardware [None req-56376146-df83-4e74-b3d3-48cb1955646e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 15:09:54 compute-0 nova_compute[250018]: 2026-01-20 15:09:54.999 250022 DEBUG nova.virt.hardware [None req-56376146-df83-4e74-b3d3-48cb1955646e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 15:09:54 compute-0 nova_compute[250018]: 2026-01-20 15:09:54.999 250022 DEBUG nova.virt.hardware [None req-56376146-df83-4e74-b3d3-48cb1955646e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 15:09:54 compute-0 nova_compute[250018]: 2026-01-20 15:09:54.999 250022 DEBUG nova.virt.hardware [None req-56376146-df83-4e74-b3d3-48cb1955646e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 15:09:54 compute-0 nova_compute[250018]: 2026-01-20 15:09:54.999 250022 DEBUG nova.virt.hardware [None req-56376146-df83-4e74-b3d3-48cb1955646e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 15:09:55 compute-0 nova_compute[250018]: 2026-01-20 15:09:55.000 250022 DEBUG nova.virt.hardware [None req-56376146-df83-4e74-b3d3-48cb1955646e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 15:09:55 compute-0 nova_compute[250018]: 2026-01-20 15:09:55.000 250022 DEBUG nova.virt.hardware [None req-56376146-df83-4e74-b3d3-48cb1955646e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 15:09:55 compute-0 nova_compute[250018]: 2026-01-20 15:09:55.000 250022 DEBUG nova.virt.hardware [None req-56376146-df83-4e74-b3d3-48cb1955646e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 15:09:55 compute-0 nova_compute[250018]: 2026-01-20 15:09:55.000 250022 DEBUG nova.virt.hardware [None req-56376146-df83-4e74-b3d3-48cb1955646e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 15:09:55 compute-0 nova_compute[250018]: 2026-01-20 15:09:55.000 250022 DEBUG nova.virt.hardware [None req-56376146-df83-4e74-b3d3-48cb1955646e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 15:09:55 compute-0 nova_compute[250018]: 2026-01-20 15:09:55.003 250022 DEBUG oslo_concurrency.processutils [None req-56376146-df83-4e74-b3d3-48cb1955646e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:09:55 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 15:09:55 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1171336010' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:09:55 compute-0 nova_compute[250018]: 2026-01-20 15:09:55.452 250022 DEBUG oslo_concurrency.processutils [None req-56376146-df83-4e74-b3d3-48cb1955646e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:09:55 compute-0 nova_compute[250018]: 2026-01-20 15:09:55.477 250022 DEBUG nova.storage.rbd_utils [None req-56376146-df83-4e74-b3d3-48cb1955646e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] rbd image 7f408ee5-e644-4a37-9cd2-3db94e56c638_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:09:55 compute-0 nova_compute[250018]: 2026-01-20 15:09:55.481 250022 DEBUG oslo_concurrency.processutils [None req-56376146-df83-4e74-b3d3-48cb1955646e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:09:55 compute-0 ceph-mon[74360]: pgmap v2599: 321 pgs: 321 active+clean; 440 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 2.4 MiB/s wr, 147 op/s
Jan 20 15:09:55 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/1171336010' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:09:55 compute-0 nova_compute[250018]: 2026-01-20 15:09:55.889 250022 DEBUG nova.compute.manager [req-f553bc03-71e8-46a8-a0a2-4409968760a5 req-d7831281-be40-49aa-8d6d-a0786b0e4fa2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7f408ee5-e644-4a37-9cd2-3db94e56c638] Received event network-changed-e51d2161-817b-46db-b0ce-4b313d293d7f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:09:55 compute-0 nova_compute[250018]: 2026-01-20 15:09:55.890 250022 DEBUG nova.compute.manager [req-f553bc03-71e8-46a8-a0a2-4409968760a5 req-d7831281-be40-49aa-8d6d-a0786b0e4fa2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7f408ee5-e644-4a37-9cd2-3db94e56c638] Refreshing instance network info cache due to event network-changed-e51d2161-817b-46db-b0ce-4b313d293d7f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 15:09:55 compute-0 nova_compute[250018]: 2026-01-20 15:09:55.891 250022 DEBUG oslo_concurrency.lockutils [req-f553bc03-71e8-46a8-a0a2-4409968760a5 req-d7831281-be40-49aa-8d6d-a0786b0e4fa2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-7f408ee5-e644-4a37-9cd2-3db94e56c638" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:09:55 compute-0 nova_compute[250018]: 2026-01-20 15:09:55.892 250022 DEBUG oslo_concurrency.lockutils [req-f553bc03-71e8-46a8-a0a2-4409968760a5 req-d7831281-be40-49aa-8d6d-a0786b0e4fa2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-7f408ee5-e644-4a37-9cd2-3db94e56c638" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:09:55 compute-0 nova_compute[250018]: 2026-01-20 15:09:55.892 250022 DEBUG nova.network.neutron [req-f553bc03-71e8-46a8-a0a2-4409968760a5 req-d7831281-be40-49aa-8d6d-a0786b0e4fa2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7f408ee5-e644-4a37-9cd2-3db94e56c638] Refreshing network info cache for port e51d2161-817b-46db-b0ce-4b313d293d7f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 15:09:55 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 15:09:55 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4065963856' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:09:55 compute-0 nova_compute[250018]: 2026-01-20 15:09:55.916 250022 DEBUG oslo_concurrency.processutils [None req-56376146-df83-4e74-b3d3-48cb1955646e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:09:55 compute-0 nova_compute[250018]: 2026-01-20 15:09:55.917 250022 DEBUG nova.virt.libvirt.vif [None req-56376146-df83-4e74-b3d3-48cb1955646e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T15:09:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestMinimumBasicScenario-server-1147518891',display_name='tempest-TestMinimumBasicScenario-server-1147518891',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testminimumbasicscenario-server-1147518891',id=172,image_ref='f6b86f20-6a27-42dc-9911-14ae5e0ee2dc',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOAQWrP+GDiNUoPOYescUGgo/yLOGt4CAUCa8ZMwCowpdu0oAlhZM5WeI21R8uP9lryKPNYsVj0oCABaSlnWGXpu5Hh3xUYyz5x2p4oOq+Z53WRgS8ou5qt7R4Dm4lN/5A==',key_name='tempest-TestMinimumBasicScenario-1568926112',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5b43342be22543f79d4a56e26c6d0c96',ramdisk_id='',reservation_id='r-bbkfjia9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='f6b86f20-6a27-42dc-9911-14ae5e0ee2dc',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestMinimumBasicScenario-1665080150',owner_user_name='tempest-TestMinimumBasicScenario-1665080150-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T15:09:50Z,user_data=None,user_id='c98bd3f0904e48efa524d598bcad85e9',uuid=7f408ee5-e644-4a37-9cd2-3db94e56c638,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e51d2161-817b-46db-b0ce-4b313d293d7f", "address": "fa:16:3e:89:1a:96", "network": {"id": "e22d6ddc-0339-4395-bc21-95081825f05b", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1496899124-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "5b43342be22543f79d4a56e26c6d0c96", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape51d2161-81", "ovs_interfaceid": "e51d2161-817b-46db-b0ce-4b313d293d7f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 15:09:55 compute-0 nova_compute[250018]: 2026-01-20 15:09:55.918 250022 DEBUG nova.network.os_vif_util [None req-56376146-df83-4e74-b3d3-48cb1955646e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Converting VIF {"id": "e51d2161-817b-46db-b0ce-4b313d293d7f", "address": "fa:16:3e:89:1a:96", "network": {"id": "e22d6ddc-0339-4395-bc21-95081825f05b", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1496899124-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5b43342be22543f79d4a56e26c6d0c96", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape51d2161-81", "ovs_interfaceid": "e51d2161-817b-46db-b0ce-4b313d293d7f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:09:55 compute-0 nova_compute[250018]: 2026-01-20 15:09:55.918 250022 DEBUG nova.network.os_vif_util [None req-56376146-df83-4e74-b3d3-48cb1955646e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:89:1a:96,bridge_name='br-int',has_traffic_filtering=True,id=e51d2161-817b-46db-b0ce-4b313d293d7f,network=Network(e22d6ddc-0339-4395-bc21-95081825f05b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape51d2161-81') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:09:55 compute-0 nova_compute[250018]: 2026-01-20 15:09:55.919 250022 DEBUG nova.objects.instance [None req-56376146-df83-4e74-b3d3-48cb1955646e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Lazy-loading 'pci_devices' on Instance uuid 7f408ee5-e644-4a37-9cd2-3db94e56c638 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:09:55 compute-0 nova_compute[250018]: 2026-01-20 15:09:55.935 250022 DEBUG nova.virt.libvirt.driver [None req-56376146-df83-4e74-b3d3-48cb1955646e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] [instance: 7f408ee5-e644-4a37-9cd2-3db94e56c638] End _get_guest_xml xml=<domain type="kvm">
Jan 20 15:09:55 compute-0 nova_compute[250018]:   <uuid>7f408ee5-e644-4a37-9cd2-3db94e56c638</uuid>
Jan 20 15:09:55 compute-0 nova_compute[250018]:   <name>instance-000000ac</name>
Jan 20 15:09:55 compute-0 nova_compute[250018]:   <memory>131072</memory>
Jan 20 15:09:55 compute-0 nova_compute[250018]:   <vcpu>1</vcpu>
Jan 20 15:09:55 compute-0 nova_compute[250018]:   <metadata>
Jan 20 15:09:55 compute-0 nova_compute[250018]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 15:09:55 compute-0 nova_compute[250018]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 15:09:55 compute-0 nova_compute[250018]:       <nova:name>tempest-TestMinimumBasicScenario-server-1147518891</nova:name>
Jan 20 15:09:55 compute-0 nova_compute[250018]:       <nova:creationTime>2026-01-20 15:09:54</nova:creationTime>
Jan 20 15:09:55 compute-0 nova_compute[250018]:       <nova:flavor name="m1.nano">
Jan 20 15:09:55 compute-0 nova_compute[250018]:         <nova:memory>128</nova:memory>
Jan 20 15:09:55 compute-0 nova_compute[250018]:         <nova:disk>1</nova:disk>
Jan 20 15:09:55 compute-0 nova_compute[250018]:         <nova:swap>0</nova:swap>
Jan 20 15:09:55 compute-0 nova_compute[250018]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 15:09:55 compute-0 nova_compute[250018]:         <nova:vcpus>1</nova:vcpus>
Jan 20 15:09:55 compute-0 nova_compute[250018]:       </nova:flavor>
Jan 20 15:09:55 compute-0 nova_compute[250018]:       <nova:owner>
Jan 20 15:09:55 compute-0 nova_compute[250018]:         <nova:user uuid="c98bd3f0904e48efa524d598bcad85e9">tempest-TestMinimumBasicScenario-1665080150-project-member</nova:user>
Jan 20 15:09:55 compute-0 nova_compute[250018]:         <nova:project uuid="5b43342be22543f79d4a56e26c6d0c96">tempest-TestMinimumBasicScenario-1665080150</nova:project>
Jan 20 15:09:55 compute-0 nova_compute[250018]:       </nova:owner>
Jan 20 15:09:55 compute-0 nova_compute[250018]:       <nova:root type="image" uuid="f6b86f20-6a27-42dc-9911-14ae5e0ee2dc"/>
Jan 20 15:09:55 compute-0 nova_compute[250018]:       <nova:ports>
Jan 20 15:09:55 compute-0 nova_compute[250018]:         <nova:port uuid="e51d2161-817b-46db-b0ce-4b313d293d7f">
Jan 20 15:09:55 compute-0 nova_compute[250018]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Jan 20 15:09:55 compute-0 nova_compute[250018]:         </nova:port>
Jan 20 15:09:55 compute-0 nova_compute[250018]:       </nova:ports>
Jan 20 15:09:55 compute-0 nova_compute[250018]:     </nova:instance>
Jan 20 15:09:55 compute-0 nova_compute[250018]:   </metadata>
Jan 20 15:09:55 compute-0 nova_compute[250018]:   <sysinfo type="smbios">
Jan 20 15:09:55 compute-0 nova_compute[250018]:     <system>
Jan 20 15:09:55 compute-0 nova_compute[250018]:       <entry name="manufacturer">RDO</entry>
Jan 20 15:09:55 compute-0 nova_compute[250018]:       <entry name="product">OpenStack Compute</entry>
Jan 20 15:09:55 compute-0 nova_compute[250018]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 15:09:55 compute-0 nova_compute[250018]:       <entry name="serial">7f408ee5-e644-4a37-9cd2-3db94e56c638</entry>
Jan 20 15:09:55 compute-0 nova_compute[250018]:       <entry name="uuid">7f408ee5-e644-4a37-9cd2-3db94e56c638</entry>
Jan 20 15:09:55 compute-0 nova_compute[250018]:       <entry name="family">Virtual Machine</entry>
Jan 20 15:09:55 compute-0 nova_compute[250018]:     </system>
Jan 20 15:09:55 compute-0 nova_compute[250018]:   </sysinfo>
Jan 20 15:09:55 compute-0 nova_compute[250018]:   <os>
Jan 20 15:09:55 compute-0 nova_compute[250018]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 15:09:55 compute-0 nova_compute[250018]:     <boot dev="hd"/>
Jan 20 15:09:55 compute-0 nova_compute[250018]:     <smbios mode="sysinfo"/>
Jan 20 15:09:55 compute-0 nova_compute[250018]:   </os>
Jan 20 15:09:55 compute-0 nova_compute[250018]:   <features>
Jan 20 15:09:55 compute-0 nova_compute[250018]:     <acpi/>
Jan 20 15:09:55 compute-0 nova_compute[250018]:     <apic/>
Jan 20 15:09:55 compute-0 nova_compute[250018]:     <vmcoreinfo/>
Jan 20 15:09:55 compute-0 nova_compute[250018]:   </features>
Jan 20 15:09:55 compute-0 nova_compute[250018]:   <clock offset="utc">
Jan 20 15:09:55 compute-0 nova_compute[250018]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 15:09:55 compute-0 nova_compute[250018]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 15:09:55 compute-0 nova_compute[250018]:     <timer name="hpet" present="no"/>
Jan 20 15:09:55 compute-0 nova_compute[250018]:   </clock>
Jan 20 15:09:55 compute-0 nova_compute[250018]:   <cpu mode="custom" match="exact">
Jan 20 15:09:55 compute-0 nova_compute[250018]:     <model>Nehalem</model>
Jan 20 15:09:55 compute-0 nova_compute[250018]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 15:09:55 compute-0 nova_compute[250018]:   </cpu>
Jan 20 15:09:55 compute-0 nova_compute[250018]:   <devices>
Jan 20 15:09:55 compute-0 nova_compute[250018]:     <disk type="network" device="disk">
Jan 20 15:09:55 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 15:09:55 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/7f408ee5-e644-4a37-9cd2-3db94e56c638_disk">
Jan 20 15:09:55 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 15:09:55 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 15:09:55 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 15:09:55 compute-0 nova_compute[250018]:       </source>
Jan 20 15:09:55 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 15:09:55 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 15:09:55 compute-0 nova_compute[250018]:       </auth>
Jan 20 15:09:55 compute-0 nova_compute[250018]:       <target dev="vda" bus="virtio"/>
Jan 20 15:09:55 compute-0 nova_compute[250018]:     </disk>
Jan 20 15:09:55 compute-0 nova_compute[250018]:     <disk type="network" device="cdrom">
Jan 20 15:09:55 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 15:09:55 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/7f408ee5-e644-4a37-9cd2-3db94e56c638_disk.config">
Jan 20 15:09:55 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 15:09:55 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 15:09:55 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 15:09:55 compute-0 nova_compute[250018]:       </source>
Jan 20 15:09:55 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 15:09:55 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 15:09:55 compute-0 nova_compute[250018]:       </auth>
Jan 20 15:09:55 compute-0 nova_compute[250018]:       <target dev="sda" bus="sata"/>
Jan 20 15:09:55 compute-0 nova_compute[250018]:     </disk>
Jan 20 15:09:55 compute-0 nova_compute[250018]:     <interface type="ethernet">
Jan 20 15:09:55 compute-0 nova_compute[250018]:       <mac address="fa:16:3e:89:1a:96"/>
Jan 20 15:09:55 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 15:09:55 compute-0 nova_compute[250018]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 15:09:55 compute-0 nova_compute[250018]:       <mtu size="1442"/>
Jan 20 15:09:55 compute-0 nova_compute[250018]:       <target dev="tape51d2161-81"/>
Jan 20 15:09:55 compute-0 nova_compute[250018]:     </interface>
Jan 20 15:09:55 compute-0 nova_compute[250018]:     <serial type="pty">
Jan 20 15:09:55 compute-0 nova_compute[250018]:       <log file="/var/lib/nova/instances/7f408ee5-e644-4a37-9cd2-3db94e56c638/console.log" append="off"/>
Jan 20 15:09:55 compute-0 nova_compute[250018]:     </serial>
Jan 20 15:09:55 compute-0 nova_compute[250018]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 15:09:55 compute-0 nova_compute[250018]:     <video>
Jan 20 15:09:55 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 15:09:55 compute-0 nova_compute[250018]:     </video>
Jan 20 15:09:55 compute-0 nova_compute[250018]:     <input type="tablet" bus="usb"/>
Jan 20 15:09:55 compute-0 nova_compute[250018]:     <rng model="virtio">
Jan 20 15:09:55 compute-0 nova_compute[250018]:       <backend model="random">/dev/urandom</backend>
Jan 20 15:09:55 compute-0 nova_compute[250018]:     </rng>
Jan 20 15:09:55 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root"/>
Jan 20 15:09:55 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:09:55 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:09:55 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:09:55 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:09:55 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:09:55 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:09:55 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:09:55 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:09:55 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:09:55 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:09:55 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:09:55 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:09:55 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:09:55 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:09:55 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:09:55 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:09:55 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:09:55 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:09:55 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:09:55 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:09:55 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:09:55 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:09:55 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:09:55 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:09:55 compute-0 nova_compute[250018]:     <controller type="usb" index="0"/>
Jan 20 15:09:55 compute-0 nova_compute[250018]:     <memballoon model="virtio">
Jan 20 15:09:55 compute-0 nova_compute[250018]:       <stats period="10"/>
Jan 20 15:09:55 compute-0 nova_compute[250018]:     </memballoon>
Jan 20 15:09:55 compute-0 nova_compute[250018]:   </devices>
Jan 20 15:09:55 compute-0 nova_compute[250018]: </domain>
Jan 20 15:09:55 compute-0 nova_compute[250018]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 15:09:55 compute-0 nova_compute[250018]: 2026-01-20 15:09:55.936 250022 DEBUG nova.compute.manager [None req-56376146-df83-4e74-b3d3-48cb1955646e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] [instance: 7f408ee5-e644-4a37-9cd2-3db94e56c638] Preparing to wait for external event network-vif-plugged-e51d2161-817b-46db-b0ce-4b313d293d7f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 20 15:09:55 compute-0 nova_compute[250018]: 2026-01-20 15:09:55.936 250022 DEBUG oslo_concurrency.lockutils [None req-56376146-df83-4e74-b3d3-48cb1955646e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Acquiring lock "7f408ee5-e644-4a37-9cd2-3db94e56c638-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:09:55 compute-0 nova_compute[250018]: 2026-01-20 15:09:55.936 250022 DEBUG oslo_concurrency.lockutils [None req-56376146-df83-4e74-b3d3-48cb1955646e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Lock "7f408ee5-e644-4a37-9cd2-3db94e56c638-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:09:55 compute-0 nova_compute[250018]: 2026-01-20 15:09:55.936 250022 DEBUG oslo_concurrency.lockutils [None req-56376146-df83-4e74-b3d3-48cb1955646e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Lock "7f408ee5-e644-4a37-9cd2-3db94e56c638-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:09:55 compute-0 nova_compute[250018]: 2026-01-20 15:09:55.937 250022 DEBUG nova.virt.libvirt.vif [None req-56376146-df83-4e74-b3d3-48cb1955646e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T15:09:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestMinimumBasicScenario-server-1147518891',display_name='tempest-TestMinimumBasicScenario-server-1147518891',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testminimumbasicscenario-server-1147518891',id=172,image_ref='f6b86f20-6a27-42dc-9911-14ae5e0ee2dc',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOAQWrP+GDiNUoPOYescUGgo/yLOGt4CAUCa8ZMwCowpdu0oAlhZM5WeI21R8uP9lryKPNYsVj0oCABaSlnWGXpu5Hh3xUYyz5x2p4oOq+Z53WRgS8ou5qt7R4Dm4lN/5A==',key_name='tempest-TestMinimumBasicScenario-1568926112',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5b43342be22543f79d4a56e26c6d0c96',ramdisk_id='',reservation_id='r-bbkfjia9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='f6b86f20-6a27-42dc-9911-14ae5e0ee2dc',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestMinimumBasicScenario-1665080150',owner_user_name='tempest-TestMinimumBasicScenario-1665080150-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T15:09:50Z,user_data=None,user_id='c98bd3f0904e48efa524d598bcad85e9',uuid=7f408ee5-e644-4a37-9cd2-3db94e56c638,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e51d2161-817b-46db-b0ce-4b313d293d7f", "address": "fa:16:3e:89:1a:96", "network": {"id": "e22d6ddc-0339-4395-bc21-95081825f05b", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1496899124-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "5b43342be22543f79d4a56e26c6d0c96", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape51d2161-81", "ovs_interfaceid": "e51d2161-817b-46db-b0ce-4b313d293d7f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 15:09:55 compute-0 nova_compute[250018]: 2026-01-20 15:09:55.937 250022 DEBUG nova.network.os_vif_util [None req-56376146-df83-4e74-b3d3-48cb1955646e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Converting VIF {"id": "e51d2161-817b-46db-b0ce-4b313d293d7f", "address": "fa:16:3e:89:1a:96", "network": {"id": "e22d6ddc-0339-4395-bc21-95081825f05b", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1496899124-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5b43342be22543f79d4a56e26c6d0c96", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape51d2161-81", "ovs_interfaceid": "e51d2161-817b-46db-b0ce-4b313d293d7f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:09:55 compute-0 nova_compute[250018]: 2026-01-20 15:09:55.938 250022 DEBUG nova.network.os_vif_util [None req-56376146-df83-4e74-b3d3-48cb1955646e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:89:1a:96,bridge_name='br-int',has_traffic_filtering=True,id=e51d2161-817b-46db-b0ce-4b313d293d7f,network=Network(e22d6ddc-0339-4395-bc21-95081825f05b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape51d2161-81') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:09:55 compute-0 nova_compute[250018]: 2026-01-20 15:09:55.938 250022 DEBUG os_vif [None req-56376146-df83-4e74-b3d3-48cb1955646e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:89:1a:96,bridge_name='br-int',has_traffic_filtering=True,id=e51d2161-817b-46db-b0ce-4b313d293d7f,network=Network(e22d6ddc-0339-4395-bc21-95081825f05b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape51d2161-81') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 15:09:55 compute-0 nova_compute[250018]: 2026-01-20 15:09:55.939 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:09:55 compute-0 nova_compute[250018]: 2026-01-20 15:09:55.939 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:09:55 compute-0 nova_compute[250018]: 2026-01-20 15:09:55.940 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:09:55 compute-0 nova_compute[250018]: 2026-01-20 15:09:55.943 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:09:55 compute-0 nova_compute[250018]: 2026-01-20 15:09:55.943 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape51d2161-81, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:09:55 compute-0 nova_compute[250018]: 2026-01-20 15:09:55.944 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape51d2161-81, col_values=(('external_ids', {'iface-id': 'e51d2161-817b-46db-b0ce-4b313d293d7f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:89:1a:96', 'vm-uuid': '7f408ee5-e644-4a37-9cd2-3db94e56c638'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:09:55 compute-0 nova_compute[250018]: 2026-01-20 15:09:55.945 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:09:55 compute-0 NetworkManager[48960]: <info>  [1768921795.9461] manager: (tape51d2161-81): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/294)
Jan 20 15:09:55 compute-0 nova_compute[250018]: 2026-01-20 15:09:55.948 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 15:09:55 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:09:55 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:09:55 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:09:55.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:09:55 compute-0 nova_compute[250018]: 2026-01-20 15:09:55.952 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:09:55 compute-0 nova_compute[250018]: 2026-01-20 15:09:55.953 250022 INFO os_vif [None req-56376146-df83-4e74-b3d3-48cb1955646e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:89:1a:96,bridge_name='br-int',has_traffic_filtering=True,id=e51d2161-817b-46db-b0ce-4b313d293d7f,network=Network(e22d6ddc-0339-4395-bc21-95081825f05b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape51d2161-81')
Jan 20 15:09:56 compute-0 nova_compute[250018]: 2026-01-20 15:09:56.007 250022 DEBUG nova.virt.libvirt.driver [None req-56376146-df83-4e74-b3d3-48cb1955646e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:09:56 compute-0 nova_compute[250018]: 2026-01-20 15:09:56.008 250022 DEBUG nova.virt.libvirt.driver [None req-56376146-df83-4e74-b3d3-48cb1955646e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:09:56 compute-0 nova_compute[250018]: 2026-01-20 15:09:56.008 250022 DEBUG nova.virt.libvirt.driver [None req-56376146-df83-4e74-b3d3-48cb1955646e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] No VIF found with MAC fa:16:3e:89:1a:96, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 15:09:56 compute-0 nova_compute[250018]: 2026-01-20 15:09:56.009 250022 INFO nova.virt.libvirt.driver [None req-56376146-df83-4e74-b3d3-48cb1955646e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] [instance: 7f408ee5-e644-4a37-9cd2-3db94e56c638] Using config drive
Jan 20 15:09:56 compute-0 nova_compute[250018]: 2026-01-20 15:09:56.032 250022 DEBUG nova.storage.rbd_utils [None req-56376146-df83-4e74-b3d3-48cb1955646e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] rbd image 7f408ee5-e644-4a37-9cd2-3db94e56c638_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:09:56 compute-0 nova_compute[250018]: 2026-01-20 15:09:56.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:09:56 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2600: 321 pgs: 321 active+clean; 475 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 3.7 MiB/s wr, 165 op/s
Jan 20 15:09:56 compute-0 nova_compute[250018]: 2026-01-20 15:09:56.490 250022 INFO nova.virt.libvirt.driver [None req-56376146-df83-4e74-b3d3-48cb1955646e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] [instance: 7f408ee5-e644-4a37-9cd2-3db94e56c638] Creating config drive at /var/lib/nova/instances/7f408ee5-e644-4a37-9cd2-3db94e56c638/disk.config
Jan 20 15:09:56 compute-0 nova_compute[250018]: 2026-01-20 15:09:56.494 250022 DEBUG oslo_concurrency.processutils [None req-56376146-df83-4e74-b3d3-48cb1955646e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7f408ee5-e644-4a37-9cd2-3db94e56c638/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpp2r12yof execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:09:56 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e385 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:09:56 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:09:56 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:09:56 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:09:56.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:09:56 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/4065963856' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:09:56 compute-0 nova_compute[250018]: 2026-01-20 15:09:56.631 250022 DEBUG oslo_concurrency.processutils [None req-56376146-df83-4e74-b3d3-48cb1955646e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7f408ee5-e644-4a37-9cd2-3db94e56c638/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpp2r12yof" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:09:56 compute-0 nova_compute[250018]: 2026-01-20 15:09:56.656 250022 DEBUG nova.storage.rbd_utils [None req-56376146-df83-4e74-b3d3-48cb1955646e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] rbd image 7f408ee5-e644-4a37-9cd2-3db94e56c638_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:09:56 compute-0 nova_compute[250018]: 2026-01-20 15:09:56.660 250022 DEBUG oslo_concurrency.processutils [None req-56376146-df83-4e74-b3d3-48cb1955646e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7f408ee5-e644-4a37-9cd2-3db94e56c638/disk.config 7f408ee5-e644-4a37-9cd2-3db94e56c638_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:09:56 compute-0 nova_compute[250018]: 2026-01-20 15:09:56.822 250022 DEBUG oslo_concurrency.processutils [None req-56376146-df83-4e74-b3d3-48cb1955646e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7f408ee5-e644-4a37-9cd2-3db94e56c638/disk.config 7f408ee5-e644-4a37-9cd2-3db94e56c638_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.162s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:09:56 compute-0 nova_compute[250018]: 2026-01-20 15:09:56.823 250022 INFO nova.virt.libvirt.driver [None req-56376146-df83-4e74-b3d3-48cb1955646e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] [instance: 7f408ee5-e644-4a37-9cd2-3db94e56c638] Deleting local config drive /var/lib/nova/instances/7f408ee5-e644-4a37-9cd2-3db94e56c638/disk.config because it was imported into RBD.
Jan 20 15:09:56 compute-0 kernel: tape51d2161-81: entered promiscuous mode
Jan 20 15:09:56 compute-0 NetworkManager[48960]: <info>  [1768921796.8760] manager: (tape51d2161-81): new Tun device (/org/freedesktop/NetworkManager/Devices/295)
Jan 20 15:09:56 compute-0 nova_compute[250018]: 2026-01-20 15:09:56.875 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:09:56 compute-0 ovn_controller[148666]: 2026-01-20T15:09:56Z|00607|binding|INFO|Claiming lport e51d2161-817b-46db-b0ce-4b313d293d7f for this chassis.
Jan 20 15:09:56 compute-0 ovn_controller[148666]: 2026-01-20T15:09:56Z|00608|binding|INFO|e51d2161-817b-46db-b0ce-4b313d293d7f: Claiming fa:16:3e:89:1a:96 10.100.0.5
Jan 20 15:09:56 compute-0 nova_compute[250018]: 2026-01-20 15:09:56.881 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:09:56 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:09:56.889 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:89:1a:96 10.100.0.5'], port_security=['fa:16:3e:89:1a:96 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '7f408ee5-e644-4a37-9cd2-3db94e56c638', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e22d6ddc-0339-4395-bc21-95081825f05b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5b43342be22543f79d4a56e26c6d0c96', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8514424f-703f-4374-a78e-584f6e7c233b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=11c19619-1e7e-40ee-be83-c9dbc347543e, chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=e51d2161-817b-46db-b0ce-4b313d293d7f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:09:56 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:09:56.890 160071 INFO neutron.agent.ovn.metadata.agent [-] Port e51d2161-817b-46db-b0ce-4b313d293d7f in datapath e22d6ddc-0339-4395-bc21-95081825f05b bound to our chassis
Jan 20 15:09:56 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:09:56.891 160071 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e22d6ddc-0339-4395-bc21-95081825f05b
Jan 20 15:09:56 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:09:56.904 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[591a4edc-8b43-4873-a5e6-ad69fc5d7e41]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:09:56 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:09:56.905 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape22d6ddc-01 in ovnmeta-e22d6ddc-0339-4395-bc21-95081825f05b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 15:09:56 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:09:56.907 257604 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape22d6ddc-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 15:09:56 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:09:56.907 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[d3babb57-8393-4a85-af01-51934f312c00]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:09:56 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:09:56.908 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[e74fbc6c-d91c-4ecb-a8d5-0d153d536ae3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:09:56 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:09:56.919 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[da39296d-a625-43af-968c-27f240b7333f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:09:56 compute-0 systemd-machined[216401]: New machine qemu-76-instance-000000ac.
Jan 20 15:09:56 compute-0 systemd[1]: Started Virtual Machine qemu-76-instance-000000ac.
Jan 20 15:09:56 compute-0 systemd-udevd[351619]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 15:09:56 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:09:56.944 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[ec354da5-dc4c-424c-b19d-c57612e20e54]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:09:56 compute-0 NetworkManager[48960]: <info>  [1768921796.9550] device (tape51d2161-81): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 15:09:56 compute-0 NetworkManager[48960]: <info>  [1768921796.9560] device (tape51d2161-81): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 15:09:56 compute-0 nova_compute[250018]: 2026-01-20 15:09:56.966 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:09:56 compute-0 ovn_controller[148666]: 2026-01-20T15:09:56Z|00609|binding|INFO|Setting lport e51d2161-817b-46db-b0ce-4b313d293d7f ovn-installed in OVS
Jan 20 15:09:56 compute-0 ovn_controller[148666]: 2026-01-20T15:09:56Z|00610|binding|INFO|Setting lport e51d2161-817b-46db-b0ce-4b313d293d7f up in Southbound
Jan 20 15:09:56 compute-0 nova_compute[250018]: 2026-01-20 15:09:56.976 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:09:56 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:09:56.978 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[108aa9b0-1ebd-4169-8963-2517cce0b67e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:09:56 compute-0 NetworkManager[48960]: <info>  [1768921796.9838] manager: (tape22d6ddc-00): new Veth device (/org/freedesktop/NetworkManager/Devices/296)
Jan 20 15:09:56 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:09:56.982 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[86192aea-6c27-4830-98c5-9e49082644fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:09:56 compute-0 systemd-udevd[351629]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 15:09:57 compute-0 podman[351598]: 2026-01-20 15:09:57.01084009 +0000 UTC m=+0.101574261 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 20 15:09:57 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:09:57.012 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[56f39703-0a14-484c-a6b2-77b07ed141a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:09:57 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:09:57.014 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[a55c3d25-ac25-4222-910c-6f0d83c7ecec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:09:57 compute-0 podman[351597]: 2026-01-20 15:09:57.017565191 +0000 UTC m=+0.109018342 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 20 15:09:57 compute-0 NetworkManager[48960]: <info>  [1768921797.0375] device (tape22d6ddc-00): carrier: link connected
Jan 20 15:09:57 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:09:57.043 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[d5994ab6-9adf-483e-a94a-b9a95f04515b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:09:57 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:09:57.061 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[2fe0baaf-e8da-429a-a3e2-86d02f466771]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape22d6ddc-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9e:3f:5c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 196], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 773776, 'reachable_time': 42796, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 351677, 'error': None, 'target': 'ovnmeta-e22d6ddc-0339-4395-bc21-95081825f05b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:09:57 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:09:57.074 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[adee6d3c-b587-445d-8e94-a99002b24635]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe9e:3f5c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 773776, 'tstamp': 773776}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 351678, 'error': None, 'target': 'ovnmeta-e22d6ddc-0339-4395-bc21-95081825f05b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:09:57 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:09:57.090 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[c9186e07-394d-4114-a6ca-f158e383e7b9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape22d6ddc-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9e:3f:5c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 196], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 773776, 'reachable_time': 42796, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 351679, 'error': None, 'target': 'ovnmeta-e22d6ddc-0339-4395-bc21-95081825f05b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:09:57 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:09:57.120 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[10ba98ac-1acc-4534-9485-7eaaf9daa087]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:09:57 compute-0 nova_compute[250018]: 2026-01-20 15:09:57.170 250022 DEBUG nova.network.neutron [req-f553bc03-71e8-46a8-a0a2-4409968760a5 req-d7831281-be40-49aa-8d6d-a0786b0e4fa2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7f408ee5-e644-4a37-9cd2-3db94e56c638] Updated VIF entry in instance network info cache for port e51d2161-817b-46db-b0ce-4b313d293d7f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 15:09:57 compute-0 nova_compute[250018]: 2026-01-20 15:09:57.170 250022 DEBUG nova.network.neutron [req-f553bc03-71e8-46a8-a0a2-4409968760a5 req-d7831281-be40-49aa-8d6d-a0786b0e4fa2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7f408ee5-e644-4a37-9cd2-3db94e56c638] Updating instance_info_cache with network_info: [{"id": "e51d2161-817b-46db-b0ce-4b313d293d7f", "address": "fa:16:3e:89:1a:96", "network": {"id": "e22d6ddc-0339-4395-bc21-95081825f05b", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1496899124-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5b43342be22543f79d4a56e26c6d0c96", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape51d2161-81", "ovs_interfaceid": "e51d2161-817b-46db-b0ce-4b313d293d7f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:09:57 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:09:57.177 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[daa94ee8-b86e-4b12-b63f-a27648a5c6e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:09:57 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:09:57.178 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape22d6ddc-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:09:57 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:09:57.179 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:09:57 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:09:57.179 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape22d6ddc-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:09:57 compute-0 nova_compute[250018]: 2026-01-20 15:09:57.180 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:09:57 compute-0 NetworkManager[48960]: <info>  [1768921797.1813] manager: (tape22d6ddc-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/297)
Jan 20 15:09:57 compute-0 kernel: tape22d6ddc-00: entered promiscuous mode
Jan 20 15:09:57 compute-0 nova_compute[250018]: 2026-01-20 15:09:57.183 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:09:57 compute-0 nova_compute[250018]: 2026-01-20 15:09:57.185 250022 DEBUG oslo_concurrency.lockutils [req-f553bc03-71e8-46a8-a0a2-4409968760a5 req-d7831281-be40-49aa-8d6d-a0786b0e4fa2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-7f408ee5-e644-4a37-9cd2-3db94e56c638" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:09:57 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:09:57.186 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape22d6ddc-00, col_values=(('external_ids', {'iface-id': '940a1442-b0ab-49a2-87e8-750659cdda8d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:09:57 compute-0 nova_compute[250018]: 2026-01-20 15:09:57.187 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:09:57 compute-0 ovn_controller[148666]: 2026-01-20T15:09:57Z|00611|binding|INFO|Releasing lport 940a1442-b0ab-49a2-87e8-750659cdda8d from this chassis (sb_readonly=0)
Jan 20 15:09:57 compute-0 nova_compute[250018]: 2026-01-20 15:09:57.203 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:09:57 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:09:57.205 160071 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e22d6ddc-0339-4395-bc21-95081825f05b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e22d6ddc-0339-4395-bc21-95081825f05b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 15:09:57 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:09:57.206 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[8d98a57c-f889-4881-8b74-324caa139e6d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:09:57 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:09:57.207 160071 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 15:09:57 compute-0 ovn_metadata_agent[160049]: global
Jan 20 15:09:57 compute-0 ovn_metadata_agent[160049]:     log         /dev/log local0 debug
Jan 20 15:09:57 compute-0 ovn_metadata_agent[160049]:     log-tag     haproxy-metadata-proxy-e22d6ddc-0339-4395-bc21-95081825f05b
Jan 20 15:09:57 compute-0 ovn_metadata_agent[160049]:     user        root
Jan 20 15:09:57 compute-0 ovn_metadata_agent[160049]:     group       root
Jan 20 15:09:57 compute-0 ovn_metadata_agent[160049]:     maxconn     1024
Jan 20 15:09:57 compute-0 ovn_metadata_agent[160049]:     pidfile     /var/lib/neutron/external/pids/e22d6ddc-0339-4395-bc21-95081825f05b.pid.haproxy
Jan 20 15:09:57 compute-0 ovn_metadata_agent[160049]:     daemon
Jan 20 15:09:57 compute-0 ovn_metadata_agent[160049]: 
Jan 20 15:09:57 compute-0 ovn_metadata_agent[160049]: defaults
Jan 20 15:09:57 compute-0 ovn_metadata_agent[160049]:     log global
Jan 20 15:09:57 compute-0 ovn_metadata_agent[160049]:     mode http
Jan 20 15:09:57 compute-0 ovn_metadata_agent[160049]:     option httplog
Jan 20 15:09:57 compute-0 ovn_metadata_agent[160049]:     option dontlognull
Jan 20 15:09:57 compute-0 ovn_metadata_agent[160049]:     option http-server-close
Jan 20 15:09:57 compute-0 ovn_metadata_agent[160049]:     option forwardfor
Jan 20 15:09:57 compute-0 ovn_metadata_agent[160049]:     retries                 3
Jan 20 15:09:57 compute-0 ovn_metadata_agent[160049]:     timeout http-request    30s
Jan 20 15:09:57 compute-0 ovn_metadata_agent[160049]:     timeout connect         30s
Jan 20 15:09:57 compute-0 ovn_metadata_agent[160049]:     timeout client          32s
Jan 20 15:09:57 compute-0 ovn_metadata_agent[160049]:     timeout server          32s
Jan 20 15:09:57 compute-0 ovn_metadata_agent[160049]:     timeout http-keep-alive 30s
Jan 20 15:09:57 compute-0 ovn_metadata_agent[160049]: 
Jan 20 15:09:57 compute-0 ovn_metadata_agent[160049]: 
Jan 20 15:09:57 compute-0 ovn_metadata_agent[160049]: listen listener
Jan 20 15:09:57 compute-0 ovn_metadata_agent[160049]:     bind 169.254.169.254:80
Jan 20 15:09:57 compute-0 ovn_metadata_agent[160049]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 15:09:57 compute-0 ovn_metadata_agent[160049]:     http-request add-header X-OVN-Network-ID e22d6ddc-0339-4395-bc21-95081825f05b
Jan 20 15:09:57 compute-0 ovn_metadata_agent[160049]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 15:09:57 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:09:57.207 160071 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e22d6ddc-0339-4395-bc21-95081825f05b', 'env', 'PROCESS_TAG=haproxy-e22d6ddc-0339-4395-bc21-95081825f05b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e22d6ddc-0339-4395-bc21-95081825f05b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 15:09:57 compute-0 nova_compute[250018]: 2026-01-20 15:09:57.210 250022 DEBUG nova.compute.manager [req-cf14978c-f7a4-4a1c-ac96-2bc7e5f403e1 req-d0dea821-b3fb-447a-b04f-d1e1a9736088 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7f408ee5-e644-4a37-9cd2-3db94e56c638] Received event network-vif-plugged-e51d2161-817b-46db-b0ce-4b313d293d7f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:09:57 compute-0 nova_compute[250018]: 2026-01-20 15:09:57.210 250022 DEBUG oslo_concurrency.lockutils [req-cf14978c-f7a4-4a1c-ac96-2bc7e5f403e1 req-d0dea821-b3fb-447a-b04f-d1e1a9736088 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "7f408ee5-e644-4a37-9cd2-3db94e56c638-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:09:57 compute-0 nova_compute[250018]: 2026-01-20 15:09:57.211 250022 DEBUG oslo_concurrency.lockutils [req-cf14978c-f7a4-4a1c-ac96-2bc7e5f403e1 req-d0dea821-b3fb-447a-b04f-d1e1a9736088 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "7f408ee5-e644-4a37-9cd2-3db94e56c638-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:09:57 compute-0 nova_compute[250018]: 2026-01-20 15:09:57.211 250022 DEBUG oslo_concurrency.lockutils [req-cf14978c-f7a4-4a1c-ac96-2bc7e5f403e1 req-d0dea821-b3fb-447a-b04f-d1e1a9736088 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "7f408ee5-e644-4a37-9cd2-3db94e56c638-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:09:57 compute-0 nova_compute[250018]: 2026-01-20 15:09:57.211 250022 DEBUG nova.compute.manager [req-cf14978c-f7a4-4a1c-ac96-2bc7e5f403e1 req-d0dea821-b3fb-447a-b04f-d1e1a9736088 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7f408ee5-e644-4a37-9cd2-3db94e56c638] Processing event network-vif-plugged-e51d2161-817b-46db-b0ce-4b313d293d7f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 20 15:09:57 compute-0 nova_compute[250018]: 2026-01-20 15:09:57.529 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768921797.5290751, 7f408ee5-e644-4a37-9cd2-3db94e56c638 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:09:57 compute-0 nova_compute[250018]: 2026-01-20 15:09:57.530 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 7f408ee5-e644-4a37-9cd2-3db94e56c638] VM Started (Lifecycle Event)
Jan 20 15:09:57 compute-0 nova_compute[250018]: 2026-01-20 15:09:57.532 250022 DEBUG nova.compute.manager [None req-56376146-df83-4e74-b3d3-48cb1955646e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] [instance: 7f408ee5-e644-4a37-9cd2-3db94e56c638] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 15:09:57 compute-0 nova_compute[250018]: 2026-01-20 15:09:57.540 250022 DEBUG nova.virt.libvirt.driver [None req-56376146-df83-4e74-b3d3-48cb1955646e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] [instance: 7f408ee5-e644-4a37-9cd2-3db94e56c638] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 15:09:57 compute-0 nova_compute[250018]: 2026-01-20 15:09:57.544 250022 INFO nova.virt.libvirt.driver [-] [instance: 7f408ee5-e644-4a37-9cd2-3db94e56c638] Instance spawned successfully.
Jan 20 15:09:57 compute-0 nova_compute[250018]: 2026-01-20 15:09:57.544 250022 DEBUG nova.virt.libvirt.driver [None req-56376146-df83-4e74-b3d3-48cb1955646e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] [instance: 7f408ee5-e644-4a37-9cd2-3db94e56c638] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 15:09:57 compute-0 nova_compute[250018]: 2026-01-20 15:09:57.549 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 7f408ee5-e644-4a37-9cd2-3db94e56c638] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:09:57 compute-0 podman[351752]: 2026-01-20 15:09:57.551801533 +0000 UTC m=+0.052896758 container create 709819e01cc4e258c449a81787e0b1b3ebde4333d29c1db40f42e9d4123dacbe (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e22d6ddc-0339-4395-bc21-95081825f05b, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Jan 20 15:09:57 compute-0 nova_compute[250018]: 2026-01-20 15:09:57.552 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 7f408ee5-e644-4a37-9cd2-3db94e56c638] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:09:57 compute-0 nova_compute[250018]: 2026-01-20 15:09:57.565 250022 DEBUG nova.virt.libvirt.driver [None req-56376146-df83-4e74-b3d3-48cb1955646e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] [instance: 7f408ee5-e644-4a37-9cd2-3db94e56c638] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:09:57 compute-0 nova_compute[250018]: 2026-01-20 15:09:57.566 250022 DEBUG nova.virt.libvirt.driver [None req-56376146-df83-4e74-b3d3-48cb1955646e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] [instance: 7f408ee5-e644-4a37-9cd2-3db94e56c638] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:09:57 compute-0 nova_compute[250018]: 2026-01-20 15:09:57.566 250022 DEBUG nova.virt.libvirt.driver [None req-56376146-df83-4e74-b3d3-48cb1955646e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] [instance: 7f408ee5-e644-4a37-9cd2-3db94e56c638] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:09:57 compute-0 nova_compute[250018]: 2026-01-20 15:09:57.567 250022 DEBUG nova.virt.libvirt.driver [None req-56376146-df83-4e74-b3d3-48cb1955646e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] [instance: 7f408ee5-e644-4a37-9cd2-3db94e56c638] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:09:57 compute-0 nova_compute[250018]: 2026-01-20 15:09:57.567 250022 DEBUG nova.virt.libvirt.driver [None req-56376146-df83-4e74-b3d3-48cb1955646e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] [instance: 7f408ee5-e644-4a37-9cd2-3db94e56c638] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:09:57 compute-0 nova_compute[250018]: 2026-01-20 15:09:57.567 250022 DEBUG nova.virt.libvirt.driver [None req-56376146-df83-4e74-b3d3-48cb1955646e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] [instance: 7f408ee5-e644-4a37-9cd2-3db94e56c638] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:09:57 compute-0 ceph-mon[74360]: pgmap v2600: 321 pgs: 321 active+clean; 475 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 3.7 MiB/s wr, 165 op/s
Jan 20 15:09:57 compute-0 systemd[1]: Started libpod-conmon-709819e01cc4e258c449a81787e0b1b3ebde4333d29c1db40f42e9d4123dacbe.scope.
Jan 20 15:09:57 compute-0 podman[351752]: 2026-01-20 15:09:57.525524374 +0000 UTC m=+0.026619619 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 15:09:57 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:09:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a95d882ea98acd6d487f0e82d68ed1123bba044faaf05427acc543da5f93c1b7/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 15:09:57 compute-0 nova_compute[250018]: 2026-01-20 15:09:57.642 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 7f408ee5-e644-4a37-9cd2-3db94e56c638] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 15:09:57 compute-0 nova_compute[250018]: 2026-01-20 15:09:57.643 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768921797.5300405, 7f408ee5-e644-4a37-9cd2-3db94e56c638 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:09:57 compute-0 nova_compute[250018]: 2026-01-20 15:09:57.643 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 7f408ee5-e644-4a37-9cd2-3db94e56c638] VM Paused (Lifecycle Event)
Jan 20 15:09:57 compute-0 podman[351752]: 2026-01-20 15:09:57.64548868 +0000 UTC m=+0.146583925 container init 709819e01cc4e258c449a81787e0b1b3ebde4333d29c1db40f42e9d4123dacbe (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e22d6ddc-0339-4395-bc21-95081825f05b, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 15:09:57 compute-0 podman[351752]: 2026-01-20 15:09:57.656587529 +0000 UTC m=+0.157682754 container start 709819e01cc4e258c449a81787e0b1b3ebde4333d29c1db40f42e9d4123dacbe (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e22d6ddc-0339-4395-bc21-95081825f05b, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 20 15:09:57 compute-0 nova_compute[250018]: 2026-01-20 15:09:57.677 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 7f408ee5-e644-4a37-9cd2-3db94e56c638] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:09:57 compute-0 nova_compute[250018]: 2026-01-20 15:09:57.681 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768921797.534114, 7f408ee5-e644-4a37-9cd2-3db94e56c638 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:09:57 compute-0 nova_compute[250018]: 2026-01-20 15:09:57.682 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 7f408ee5-e644-4a37-9cd2-3db94e56c638] VM Resumed (Lifecycle Event)
Jan 20 15:09:57 compute-0 neutron-haproxy-ovnmeta-e22d6ddc-0339-4395-bc21-95081825f05b[351767]: [NOTICE]   (351771) : New worker (351773) forked
Jan 20 15:09:57 compute-0 neutron-haproxy-ovnmeta-e22d6ddc-0339-4395-bc21-95081825f05b[351767]: [NOTICE]   (351771) : Loading success.
Jan 20 15:09:57 compute-0 nova_compute[250018]: 2026-01-20 15:09:57.707 250022 INFO nova.compute.manager [None req-56376146-df83-4e74-b3d3-48cb1955646e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] [instance: 7f408ee5-e644-4a37-9cd2-3db94e56c638] Took 6.97 seconds to spawn the instance on the hypervisor.
Jan 20 15:09:57 compute-0 nova_compute[250018]: 2026-01-20 15:09:57.708 250022 DEBUG nova.compute.manager [None req-56376146-df83-4e74-b3d3-48cb1955646e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] [instance: 7f408ee5-e644-4a37-9cd2-3db94e56c638] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:09:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 15:09:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 15:09:57 compute-0 nova_compute[250018]: 2026-01-20 15:09:57.718 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 7f408ee5-e644-4a37-9cd2-3db94e56c638] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:09:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 15:09:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 15:09:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 15:09:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 15:09:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 15:09:57 compute-0 nova_compute[250018]: 2026-01-20 15:09:57.721 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 7f408ee5-e644-4a37-9cd2-3db94e56c638] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:09:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 15:09:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 15:09:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 15:09:57 compute-0 nova_compute[250018]: 2026-01-20 15:09:57.770 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 7f408ee5-e644-4a37-9cd2-3db94e56c638] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 15:09:57 compute-0 nova_compute[250018]: 2026-01-20 15:09:57.792 250022 INFO nova.compute.manager [None req-56376146-df83-4e74-b3d3-48cb1955646e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] [instance: 7f408ee5-e644-4a37-9cd2-3db94e56c638] Took 8.02 seconds to build instance.
Jan 20 15:09:57 compute-0 nova_compute[250018]: 2026-01-20 15:09:57.827 250022 DEBUG oslo_concurrency.lockutils [None req-56376146-df83-4e74-b3d3-48cb1955646e c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Lock "7f408ee5-e644-4a37-9cd2-3db94e56c638" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.159s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:09:57 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:09:57 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:09:57 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:09:57.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:09:58 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2601: 321 pgs: 321 active+clean; 475 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 3.7 MiB/s wr, 165 op/s
Jan 20 15:09:58 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:09:58 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:09:58 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:09:58.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:09:58 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/3512189121' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:09:58 compute-0 sudo[351783]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:09:58 compute-0 sudo[351783]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:09:58 compute-0 sudo[351783]: pam_unix(sudo:session): session closed for user root
Jan 20 15:09:59 compute-0 sudo[351808]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:09:59 compute-0 sudo[351808]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:09:59 compute-0 sudo[351808]: pam_unix(sudo:session): session closed for user root
Jan 20 15:09:59 compute-0 ceph-mon[74360]: pgmap v2601: 321 pgs: 321 active+clean; 475 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 3.7 MiB/s wr, 165 op/s
Jan 20 15:09:59 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/984211214' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:09:59 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/225497052' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:09:59 compute-0 nova_compute[250018]: 2026-01-20 15:09:59.952 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:09:59 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:09:59 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:09:59 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:09:59.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:10:00 compute-0 ceph-mon[74360]: log_channel(cluster) log [INF] : overall HEALTH_OK
Jan 20 15:10:00 compute-0 nova_compute[250018]: 2026-01-20 15:10:00.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:10:00 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2602: 321 pgs: 321 active+clean; 476 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.9 MiB/s rd, 2.2 MiB/s wr, 170 op/s
Jan 20 15:10:00 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:10:00 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:10:00 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:10:00.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:10:00 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/3843724838' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:10:00 compute-0 ceph-mon[74360]: overall HEALTH_OK
Jan 20 15:10:00 compute-0 ceph-mon[74360]: pgmap v2602: 321 pgs: 321 active+clean; 476 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.9 MiB/s rd, 2.2 MiB/s wr, 170 op/s
Jan 20 15:10:00 compute-0 nova_compute[250018]: 2026-01-20 15:10:00.946 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:10:01 compute-0 nova_compute[250018]: 2026-01-20 15:10:01.045 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:10:01 compute-0 nova_compute[250018]: 2026-01-20 15:10:01.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:10:01 compute-0 nova_compute[250018]: 2026-01-20 15:10:01.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:10:01 compute-0 nova_compute[250018]: 2026-01-20 15:10:01.073 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:10:01 compute-0 nova_compute[250018]: 2026-01-20 15:10:01.073 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:10:01 compute-0 nova_compute[250018]: 2026-01-20 15:10:01.074 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:10:01 compute-0 nova_compute[250018]: 2026-01-20 15:10:01.074 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 15:10:01 compute-0 nova_compute[250018]: 2026-01-20 15:10:01.074 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:10:01 compute-0 nova_compute[250018]: 2026-01-20 15:10:01.262 250022 DEBUG nova.compute.manager [req-a17295bc-5de3-4118-82a5-5eb8566b92e3 req-de721d6b-3a17-4129-ba53-e58f97b6e1c4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7f408ee5-e644-4a37-9cd2-3db94e56c638] Received event network-vif-plugged-e51d2161-817b-46db-b0ce-4b313d293d7f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:10:01 compute-0 nova_compute[250018]: 2026-01-20 15:10:01.263 250022 DEBUG oslo_concurrency.lockutils [req-a17295bc-5de3-4118-82a5-5eb8566b92e3 req-de721d6b-3a17-4129-ba53-e58f97b6e1c4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "7f408ee5-e644-4a37-9cd2-3db94e56c638-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:10:01 compute-0 nova_compute[250018]: 2026-01-20 15:10:01.263 250022 DEBUG oslo_concurrency.lockutils [req-a17295bc-5de3-4118-82a5-5eb8566b92e3 req-de721d6b-3a17-4129-ba53-e58f97b6e1c4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "7f408ee5-e644-4a37-9cd2-3db94e56c638-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:10:01 compute-0 nova_compute[250018]: 2026-01-20 15:10:01.264 250022 DEBUG oslo_concurrency.lockutils [req-a17295bc-5de3-4118-82a5-5eb8566b92e3 req-de721d6b-3a17-4129-ba53-e58f97b6e1c4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "7f408ee5-e644-4a37-9cd2-3db94e56c638-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:10:01 compute-0 nova_compute[250018]: 2026-01-20 15:10:01.264 250022 DEBUG nova.compute.manager [req-a17295bc-5de3-4118-82a5-5eb8566b92e3 req-de721d6b-3a17-4129-ba53-e58f97b6e1c4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7f408ee5-e644-4a37-9cd2-3db94e56c638] No waiting events found dispatching network-vif-plugged-e51d2161-817b-46db-b0ce-4b313d293d7f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:10:01 compute-0 nova_compute[250018]: 2026-01-20 15:10:01.264 250022 WARNING nova.compute.manager [req-a17295bc-5de3-4118-82a5-5eb8566b92e3 req-de721d6b-3a17-4129-ba53-e58f97b6e1c4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7f408ee5-e644-4a37-9cd2-3db94e56c638] Received unexpected event network-vif-plugged-e51d2161-817b-46db-b0ce-4b313d293d7f for instance with vm_state active and task_state None.
Jan 20 15:10:01 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e385 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:10:01 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:10:01 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1240187548' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:10:01 compute-0 nova_compute[250018]: 2026-01-20 15:10:01.568 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:10:01 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/1240187548' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:10:01 compute-0 nova_compute[250018]: 2026-01-20 15:10:01.667 250022 DEBUG nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] skipping disk for instance-000000ac as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 15:10:01 compute-0 nova_compute[250018]: 2026-01-20 15:10:01.668 250022 DEBUG nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] skipping disk for instance-000000ac as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 15:10:01 compute-0 nova_compute[250018]: 2026-01-20 15:10:01.856 250022 WARNING nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:10:01 compute-0 nova_compute[250018]: 2026-01-20 15:10:01.857 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3971MB free_disk=20.809860229492188GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 15:10:01 compute-0 nova_compute[250018]: 2026-01-20 15:10:01.857 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:10:01 compute-0 nova_compute[250018]: 2026-01-20 15:10:01.857 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:10:01 compute-0 nova_compute[250018]: 2026-01-20 15:10:01.931 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Instance 7f408ee5-e644-4a37-9cd2-3db94e56c638 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 15:10:01 compute-0 nova_compute[250018]: 2026-01-20 15:10:01.931 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 15:10:01 compute-0 nova_compute[250018]: 2026-01-20 15:10:01.931 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 15:10:01 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:10:01 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:10:01 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:10:01.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:10:02 compute-0 nova_compute[250018]: 2026-01-20 15:10:02.078 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:10:02 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2603: 321 pgs: 321 active+clean; 476 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 2.0 MiB/s wr, 146 op/s
Jan 20 15:10:02 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:10:02 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2876936489' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:10:02 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:10:02 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:10:02 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:10:02.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:10:02 compute-0 nova_compute[250018]: 2026-01-20 15:10:02.544 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:10:02 compute-0 nova_compute[250018]: 2026-01-20 15:10:02.549 250022 DEBUG nova.compute.provider_tree [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:10:02 compute-0 nova_compute[250018]: 2026-01-20 15:10:02.564 250022 DEBUG nova.scheduler.client.report [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:10:02 compute-0 nova_compute[250018]: 2026-01-20 15:10:02.593 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 15:10:02 compute-0 nova_compute[250018]: 2026-01-20 15:10:02.594 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.736s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:10:02 compute-0 ceph-mon[74360]: pgmap v2603: 321 pgs: 321 active+clean; 476 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 2.0 MiB/s wr, 146 op/s
Jan 20 15:10:02 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/2876936489' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:10:03 compute-0 nova_compute[250018]: 2026-01-20 15:10:03.595 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:10:03 compute-0 nova_compute[250018]: 2026-01-20 15:10:03.596 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 15:10:03 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:10:03 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:10:03 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:10:03.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:10:04 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/1931767600' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:10:04 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2604: 321 pgs: 321 active+clean; 476 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 1.8 MiB/s wr, 149 op/s
Jan 20 15:10:04 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:10:04 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:10:04 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:10:04.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:10:04 compute-0 nova_compute[250018]: 2026-01-20 15:10:04.955 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:10:05 compute-0 ceph-mon[74360]: pgmap v2604: 321 pgs: 321 active+clean; 476 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 1.8 MiB/s wr, 149 op/s
Jan 20 15:10:05 compute-0 nova_compute[250018]: 2026-01-20 15:10:05.107 250022 DEBUG oslo_concurrency.lockutils [None req-22c3b2ac-7335-4881-9141-f027a6ff9888 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Acquiring lock "7f408ee5-e644-4a37-9cd2-3db94e56c638" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:10:05 compute-0 nova_compute[250018]: 2026-01-20 15:10:05.108 250022 DEBUG oslo_concurrency.lockutils [None req-22c3b2ac-7335-4881-9141-f027a6ff9888 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Lock "7f408ee5-e644-4a37-9cd2-3db94e56c638" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:10:05 compute-0 nova_compute[250018]: 2026-01-20 15:10:05.126 250022 DEBUG nova.objects.instance [None req-22c3b2ac-7335-4881-9141-f027a6ff9888 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Lazy-loading 'flavor' on Instance uuid 7f408ee5-e644-4a37-9cd2-3db94e56c638 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:10:05 compute-0 nova_compute[250018]: 2026-01-20 15:10:05.160 250022 DEBUG oslo_concurrency.lockutils [None req-22c3b2ac-7335-4881-9141-f027a6ff9888 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Lock "7f408ee5-e644-4a37-9cd2-3db94e56c638" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.053s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:10:05 compute-0 nova_compute[250018]: 2026-01-20 15:10:05.683 250022 DEBUG oslo_concurrency.lockutils [None req-22c3b2ac-7335-4881-9141-f027a6ff9888 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Acquiring lock "7f408ee5-e644-4a37-9cd2-3db94e56c638" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:10:05 compute-0 nova_compute[250018]: 2026-01-20 15:10:05.684 250022 DEBUG oslo_concurrency.lockutils [None req-22c3b2ac-7335-4881-9141-f027a6ff9888 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Lock "7f408ee5-e644-4a37-9cd2-3db94e56c638" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:10:05 compute-0 nova_compute[250018]: 2026-01-20 15:10:05.684 250022 INFO nova.compute.manager [None req-22c3b2ac-7335-4881-9141-f027a6ff9888 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] [instance: 7f408ee5-e644-4a37-9cd2-3db94e56c638] Attaching volume c53232c5-7839-4cc5-8acf-4257d1fb7c13 to /dev/vdb
Jan 20 15:10:05 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:10:05.722 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=56, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=55) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:10:05 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:10:05.723 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 15:10:05 compute-0 nova_compute[250018]: 2026-01-20 15:10:05.723 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:10:05 compute-0 nova_compute[250018]: 2026-01-20 15:10:05.841 250022 DEBUG os_brick.utils [None req-22c3b2ac-7335-4881-9141-f027a6ff9888 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Jan 20 15:10:05 compute-0 nova_compute[250018]: 2026-01-20 15:10:05.843 268150 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:10:05 compute-0 nova_compute[250018]: 2026-01-20 15:10:05.853 268150 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:10:05 compute-0 nova_compute[250018]: 2026-01-20 15:10:05.854 268150 DEBUG oslo.privsep.daemon [-] privsep: reply[6ec98ea9-be76-45d8-99d6-c20d3fdb50dc]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:10:05 compute-0 nova_compute[250018]: 2026-01-20 15:10:05.856 268150 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:10:05 compute-0 nova_compute[250018]: 2026-01-20 15:10:05.863 268150 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:10:05 compute-0 nova_compute[250018]: 2026-01-20 15:10:05.864 268150 DEBUG oslo.privsep.daemon [-] privsep: reply[043ad1e8-bc55-43a5-b08b-2e0ad5719c21]: (4, ('InitiatorName=iqn.1994-05.com.redhat:228389a1f17e', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:10:05 compute-0 nova_compute[250018]: 2026-01-20 15:10:05.866 268150 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:10:05 compute-0 nova_compute[250018]: 2026-01-20 15:10:05.875 268150 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:10:05 compute-0 nova_compute[250018]: 2026-01-20 15:10:05.875 268150 DEBUG oslo.privsep.daemon [-] privsep: reply[c2662ece-02ef-44fe-8dd5-c894630368f3]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:10:05 compute-0 nova_compute[250018]: 2026-01-20 15:10:05.877 268150 DEBUG oslo.privsep.daemon [-] privsep: reply[51508b66-fed0-4636-9a1e-14b31fe8946f]: (4, '35085f33-1a27-41e3-805d-02c7ac6a1d7f') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:10:05 compute-0 nova_compute[250018]: 2026-01-20 15:10:05.878 250022 DEBUG oslo_concurrency.processutils [None req-22c3b2ac-7335-4881-9141-f027a6ff9888 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:10:05 compute-0 nova_compute[250018]: 2026-01-20 15:10:05.913 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:10:05 compute-0 nova_compute[250018]: 2026-01-20 15:10:05.920 250022 DEBUG oslo_concurrency.processutils [None req-22c3b2ac-7335-4881-9141-f027a6ff9888 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] CMD "nvme version" returned: 0 in 0.042s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:10:05 compute-0 nova_compute[250018]: 2026-01-20 15:10:05.924 250022 DEBUG os_brick.initiator.connectors.lightos [None req-22c3b2ac-7335-4881-9141-f027a6ff9888 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Jan 20 15:10:05 compute-0 nova_compute[250018]: 2026-01-20 15:10:05.925 250022 DEBUG os_brick.initiator.connectors.lightos [None req-22c3b2ac-7335-4881-9141-f027a6ff9888 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Jan 20 15:10:05 compute-0 nova_compute[250018]: 2026-01-20 15:10:05.925 250022 DEBUG os_brick.initiator.connectors.lightos [None req-22c3b2ac-7335-4881-9141-f027a6ff9888 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Jan 20 15:10:05 compute-0 nova_compute[250018]: 2026-01-20 15:10:05.926 250022 DEBUG os_brick.utils [None req-22c3b2ac-7335-4881-9141-f027a6ff9888 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] <== get_connector_properties: return (83ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:228389a1f17e', 'do_local_attach': False, 'nvme_hostid': '5350774e-8b5e-4dba-80a9-92d405981c1d', 'system uuid': '35085f33-1a27-41e3-805d-02c7ac6a1d7f', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Jan 20 15:10:05 compute-0 nova_compute[250018]: 2026-01-20 15:10:05.926 250022 DEBUG nova.virt.block_device [None req-22c3b2ac-7335-4881-9141-f027a6ff9888 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] [instance: 7f408ee5-e644-4a37-9cd2-3db94e56c638] Updating existing volume attachment record: a48d10e0-1635-480d-9912-02d24834bdae _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Jan 20 15:10:05 compute-0 nova_compute[250018]: 2026-01-20 15:10:05.938 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Triggering sync for uuid 7f408ee5-e644-4a37-9cd2-3db94e56c638 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Jan 20 15:10:05 compute-0 nova_compute[250018]: 2026-01-20 15:10:05.939 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "7f408ee5-e644-4a37-9cd2-3db94e56c638" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:10:05 compute-0 nova_compute[250018]: 2026-01-20 15:10:05.948 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:10:05 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:10:05 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:10:05 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:10:05.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:10:06 compute-0 nova_compute[250018]: 2026-01-20 15:10:06.077 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:10:06 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/2067592141' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:10:06 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2605: 321 pgs: 321 active+clean; 476 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 1.5 MiB/s wr, 157 op/s
Jan 20 15:10:06 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e385 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:10:06 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:10:06 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:10:06 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:10:06.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:10:06 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 15:10:06 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3240538515' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:10:06 compute-0 nova_compute[250018]: 2026-01-20 15:10:06.753 250022 DEBUG nova.objects.instance [None req-22c3b2ac-7335-4881-9141-f027a6ff9888 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Lazy-loading 'flavor' on Instance uuid 7f408ee5-e644-4a37-9cd2-3db94e56c638 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:10:06 compute-0 nova_compute[250018]: 2026-01-20 15:10:06.789 250022 DEBUG nova.virt.libvirt.driver [None req-22c3b2ac-7335-4881-9141-f027a6ff9888 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] [instance: 7f408ee5-e644-4a37-9cd2-3db94e56c638] Attempting to attach volume c53232c5-7839-4cc5-8acf-4257d1fb7c13 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Jan 20 15:10:06 compute-0 nova_compute[250018]: 2026-01-20 15:10:06.792 250022 DEBUG nova.virt.libvirt.guest [None req-22c3b2ac-7335-4881-9141-f027a6ff9888 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] attach device xml: <disk type="network" device="disk">
Jan 20 15:10:06 compute-0 nova_compute[250018]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 20 15:10:06 compute-0 nova_compute[250018]:   <source protocol="rbd" name="volumes/volume-c53232c5-7839-4cc5-8acf-4257d1fb7c13">
Jan 20 15:10:06 compute-0 nova_compute[250018]:     <host name="192.168.122.100" port="6789"/>
Jan 20 15:10:06 compute-0 nova_compute[250018]:     <host name="192.168.122.102" port="6789"/>
Jan 20 15:10:06 compute-0 nova_compute[250018]:     <host name="192.168.122.101" port="6789"/>
Jan 20 15:10:06 compute-0 nova_compute[250018]:   </source>
Jan 20 15:10:06 compute-0 nova_compute[250018]:   <auth username="openstack">
Jan 20 15:10:06 compute-0 nova_compute[250018]:     <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 15:10:06 compute-0 nova_compute[250018]:   </auth>
Jan 20 15:10:06 compute-0 nova_compute[250018]:   <target dev="vdb" bus="virtio"/>
Jan 20 15:10:06 compute-0 nova_compute[250018]:   <serial>c53232c5-7839-4cc5-8acf-4257d1fb7c13</serial>
Jan 20 15:10:06 compute-0 nova_compute[250018]: </disk>
Jan 20 15:10:06 compute-0 nova_compute[250018]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Jan 20 15:10:06 compute-0 nova_compute[250018]: 2026-01-20 15:10:06.925 250022 DEBUG nova.virt.libvirt.driver [None req-22c3b2ac-7335-4881-9141-f027a6ff9888 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:10:06 compute-0 nova_compute[250018]: 2026-01-20 15:10:06.926 250022 DEBUG nova.virt.libvirt.driver [None req-22c3b2ac-7335-4881-9141-f027a6ff9888 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:10:06 compute-0 nova_compute[250018]: 2026-01-20 15:10:06.926 250022 DEBUG nova.virt.libvirt.driver [None req-22c3b2ac-7335-4881-9141-f027a6ff9888 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:10:06 compute-0 nova_compute[250018]: 2026-01-20 15:10:06.926 250022 DEBUG nova.virt.libvirt.driver [None req-22c3b2ac-7335-4881-9141-f027a6ff9888 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] No VIF found with MAC fa:16:3e:89:1a:96, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 15:10:07 compute-0 nova_compute[250018]: 2026-01-20 15:10:07.076 250022 DEBUG oslo_concurrency.lockutils [None req-22c3b2ac-7335-4881-9141-f027a6ff9888 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Lock "7f408ee5-e644-4a37-9cd2-3db94e56c638" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.393s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:10:07 compute-0 nova_compute[250018]: 2026-01-20 15:10:07.079 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "7f408ee5-e644-4a37-9cd2-3db94e56c638" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 1.140s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:10:07 compute-0 nova_compute[250018]: 2026-01-20 15:10:07.107 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "7f408ee5-e644-4a37-9cd2-3db94e56c638" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.028s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:10:07 compute-0 ceph-mon[74360]: pgmap v2605: 321 pgs: 321 active+clean; 476 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 1.5 MiB/s wr, 157 op/s
Jan 20 15:10:07 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/3240538515' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:10:07 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:10:07 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:10:07 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:10:07.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:10:08 compute-0 nova_compute[250018]: 2026-01-20 15:10:08.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:10:08 compute-0 nova_compute[250018]: 2026-01-20 15:10:08.051 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 15:10:08 compute-0 nova_compute[250018]: 2026-01-20 15:10:08.052 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 15:10:08 compute-0 nova_compute[250018]: 2026-01-20 15:10:08.289 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "refresh_cache-7f408ee5-e644-4a37-9cd2-3db94e56c638" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:10:08 compute-0 nova_compute[250018]: 2026-01-20 15:10:08.290 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquired lock "refresh_cache-7f408ee5-e644-4a37-9cd2-3db94e56c638" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:10:08 compute-0 nova_compute[250018]: 2026-01-20 15:10:08.290 250022 DEBUG nova.network.neutron [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] [instance: 7f408ee5-e644-4a37-9cd2-3db94e56c638] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 20 15:10:08 compute-0 nova_compute[250018]: 2026-01-20 15:10:08.290 250022 DEBUG nova.objects.instance [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 7f408ee5-e644-4a37-9cd2-3db94e56c638 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:10:08 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2606: 321 pgs: 321 active+clean; 476 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 52 KiB/s wr, 123 op/s
Jan 20 15:10:08 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:10:08 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:10:08 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:10:08.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:10:09 compute-0 ceph-mon[74360]: pgmap v2606: 321 pgs: 321 active+clean; 476 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 52 KiB/s wr, 123 op/s
Jan 20 15:10:09 compute-0 nova_compute[250018]: 2026-01-20 15:10:09.955 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:10:09 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:10:09 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:10:09 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:10:09.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:10:10 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2607: 321 pgs: 321 active+clean; 481 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 614 KiB/s wr, 130 op/s
Jan 20 15:10:10 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:10:10 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:10:10 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:10:10.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:10:10 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:10:10.725 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=367c1a2c-b16a-4828-ab5a-626bb50023b4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '56'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:10:10 compute-0 nova_compute[250018]: 2026-01-20 15:10:10.951 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:10:11 compute-0 sudo[351911]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:10:11 compute-0 sudo[351911]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:10:11 compute-0 sudo[351911]: pam_unix(sudo:session): session closed for user root
Jan 20 15:10:11 compute-0 sudo[351936]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:10:11 compute-0 sudo[351936]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:10:11 compute-0 sudo[351936]: pam_unix(sudo:session): session closed for user root
Jan 20 15:10:11 compute-0 sudo[351961]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:10:11 compute-0 sudo[351961]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:10:11 compute-0 sudo[351961]: pam_unix(sudo:session): session closed for user root
Jan 20 15:10:11 compute-0 sudo[351986]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 15:10:11 compute-0 sudo[351986]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:10:11 compute-0 ceph-mon[74360]: pgmap v2607: 321 pgs: 321 active+clean; 481 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 614 KiB/s wr, 130 op/s
Jan 20 15:10:11 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e385 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:10:11 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 20 15:10:11 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:10:11 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 20 15:10:11 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:10:11 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 20 15:10:11 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:10:11 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 20 15:10:11 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:10:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 15:10:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:10:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 20 15:10:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:10:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.008809600665363857 of space, bias 1.0, pg target 2.642880199609157 quantized to 32 (current 32)
Jan 20 15:10:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:10:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021745009771077612 of space, bias 1.0, pg target 0.6480012911781129 quantized to 32 (current 32)
Jan 20 15:10:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:10:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.416259538432905e-05 quantized to 32 (current 32)
Jan 20 15:10:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:10:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0028546232319002418 of space, bias 1.0, pg target 0.8506777231062721 quantized to 32 (current 32)
Jan 20 15:10:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:10:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017332030522985297 quantized to 16 (current 32)
Jan 20 15:10:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:10:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 15:10:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:10:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.0002166503815373162 quantized to 32 (current 32)
Jan 20 15:10:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:10:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018415282430671877 quantized to 32 (current 32)
Jan 20 15:10:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:10:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 15:10:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:10:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004333007630746324 quantized to 32 (current 32)
Jan 20 15:10:11 compute-0 sudo[351986]: pam_unix(sudo:session): session closed for user root
Jan 20 15:10:11 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Jan 20 15:10:11 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 20 15:10:11 compute-0 ovn_controller[148666]: 2026-01-20T15:10:11Z|00078|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:89:1a:96 10.100.0.5
Jan 20 15:10:11 compute-0 ovn_controller[148666]: 2026-01-20T15:10:11Z|00079|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:89:1a:96 10.100.0.5
Jan 20 15:10:11 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:10:11 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:10:11 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:10:11.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:10:12 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Jan 20 15:10:12 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 20 15:10:12 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 15:10:12 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:10:12 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 20 15:10:12 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 15:10:12 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 20 15:10:12 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:10:12 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 6cbf4e68-ce81-4833-8fbd-4528f03808dd does not exist
Jan 20 15:10:12 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 5bb7a212-bd7d-4254-8f67-34ae217504b6 does not exist
Jan 20 15:10:12 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev ecdc4246-9245-427f-adff-279ac55a8d4d does not exist
Jan 20 15:10:12 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 20 15:10:12 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 15:10:12 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 20 15:10:12 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 15:10:12 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 15:10:12 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:10:12 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2608: 321 pgs: 321 active+clean; 487 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 1007 KiB/s wr, 94 op/s
Jan 20 15:10:12 compute-0 sudo[352043]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:10:12 compute-0 sudo[352043]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:10:12 compute-0 sudo[352043]: pam_unix(sudo:session): session closed for user root
Jan 20 15:10:12 compute-0 sudo[352068]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:10:12 compute-0 sudo[352068]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:10:12 compute-0 sudo[352068]: pam_unix(sudo:session): session closed for user root
Jan 20 15:10:12 compute-0 sudo[352093]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:10:12 compute-0 sudo[352093]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:10:12 compute-0 sudo[352093]: pam_unix(sudo:session): session closed for user root
Jan 20 15:10:12 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:10:12 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:10:12 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:10:12.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:10:12 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:10:12 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:10:12 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:10:12 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:10:12 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 20 15:10:12 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/1681359816' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:10:12 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 20 15:10:12 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:10:12 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 15:10:12 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:10:12 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 15:10:12 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 15:10:12 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:10:12 compute-0 sudo[352118]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 15:10:12 compute-0 sudo[352118]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:10:12 compute-0 nova_compute[250018]: 2026-01-20 15:10:12.661 250022 DEBUG nova.network.neutron [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] [instance: 7f408ee5-e644-4a37-9cd2-3db94e56c638] Updating instance_info_cache with network_info: [{"id": "e51d2161-817b-46db-b0ce-4b313d293d7f", "address": "fa:16:3e:89:1a:96", "network": {"id": "e22d6ddc-0339-4395-bc21-95081825f05b", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1496899124-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5b43342be22543f79d4a56e26c6d0c96", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape51d2161-81", "ovs_interfaceid": "e51d2161-817b-46db-b0ce-4b313d293d7f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:10:12 compute-0 nova_compute[250018]: 2026-01-20 15:10:12.682 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Releasing lock "refresh_cache-7f408ee5-e644-4a37-9cd2-3db94e56c638" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:10:12 compute-0 nova_compute[250018]: 2026-01-20 15:10:12.682 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] [instance: 7f408ee5-e644-4a37-9cd2-3db94e56c638] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 20 15:10:12 compute-0 podman[352184]: 2026-01-20 15:10:12.907597778 +0000 UTC m=+0.041169381 container create 606a85ce8be64623eac7a5cb8469fba2b2d5cf442302fbcc2f743373c6c12bd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_williams, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 20 15:10:12 compute-0 systemd[1]: Started libpod-conmon-606a85ce8be64623eac7a5cb8469fba2b2d5cf442302fbcc2f743373c6c12bd4.scope.
Jan 20 15:10:12 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:10:12 compute-0 podman[352184]: 2026-01-20 15:10:12.971614486 +0000 UTC m=+0.105186089 container init 606a85ce8be64623eac7a5cb8469fba2b2d5cf442302fbcc2f743373c6c12bd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_williams, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 20 15:10:12 compute-0 podman[352184]: 2026-01-20 15:10:12.978578503 +0000 UTC m=+0.112150086 container start 606a85ce8be64623eac7a5cb8469fba2b2d5cf442302fbcc2f743373c6c12bd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_williams, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 20 15:10:12 compute-0 podman[352184]: 2026-01-20 15:10:12.982574561 +0000 UTC m=+0.116146154 container attach 606a85ce8be64623eac7a5cb8469fba2b2d5cf442302fbcc2f743373c6c12bd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_williams, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True)
Jan 20 15:10:12 compute-0 nifty_williams[352201]: 167 167
Jan 20 15:10:12 compute-0 systemd[1]: libpod-606a85ce8be64623eac7a5cb8469fba2b2d5cf442302fbcc2f743373c6c12bd4.scope: Deactivated successfully.
Jan 20 15:10:12 compute-0 podman[352184]: 2026-01-20 15:10:12.985091368 +0000 UTC m=+0.118662961 container died 606a85ce8be64623eac7a5cb8469fba2b2d5cf442302fbcc2f743373c6c12bd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_williams, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:10:12 compute-0 podman[352184]: 2026-01-20 15:10:12.892055109 +0000 UTC m=+0.025626752 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:10:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-440a34596fa2523c80252c814baf48827475e674e7fb0b61ff525d32804295cc-merged.mount: Deactivated successfully.
Jan 20 15:10:13 compute-0 podman[352184]: 2026-01-20 15:10:13.035493038 +0000 UTC m=+0.169064621 container remove 606a85ce8be64623eac7a5cb8469fba2b2d5cf442302fbcc2f743373c6c12bd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_williams, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 20 15:10:13 compute-0 systemd[1]: libpod-conmon-606a85ce8be64623eac7a5cb8469fba2b2d5cf442302fbcc2f743373c6c12bd4.scope: Deactivated successfully.
Jan 20 15:10:13 compute-0 podman[352226]: 2026-01-20 15:10:13.1949588 +0000 UTC m=+0.040630017 container create e8f8ec24851af35ea4fc1525d151f5a89e788a97beb90cef25cf7fb6894d0146 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_jang, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 15:10:13 compute-0 systemd[1]: Started libpod-conmon-e8f8ec24851af35ea4fc1525d151f5a89e788a97beb90cef25cf7fb6894d0146.scope.
Jan 20 15:10:13 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:10:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c95af1b2ec93d9541d3ff2a80b9f4ae6522d9c755d61dd0c51dd6b10c2a5caa2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 15:10:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c95af1b2ec93d9541d3ff2a80b9f4ae6522d9c755d61dd0c51dd6b10c2a5caa2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 15:10:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c95af1b2ec93d9541d3ff2a80b9f4ae6522d9c755d61dd0c51dd6b10c2a5caa2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 15:10:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c95af1b2ec93d9541d3ff2a80b9f4ae6522d9c755d61dd0c51dd6b10c2a5caa2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 15:10:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c95af1b2ec93d9541d3ff2a80b9f4ae6522d9c755d61dd0c51dd6b10c2a5caa2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 15:10:13 compute-0 podman[352226]: 2026-01-20 15:10:13.26241986 +0000 UTC m=+0.108091097 container init e8f8ec24851af35ea4fc1525d151f5a89e788a97beb90cef25cf7fb6894d0146 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_jang, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 15:10:13 compute-0 podman[352226]: 2026-01-20 15:10:13.270507948 +0000 UTC m=+0.116179165 container start e8f8ec24851af35ea4fc1525d151f5a89e788a97beb90cef25cf7fb6894d0146 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_jang, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 15:10:13 compute-0 podman[352226]: 2026-01-20 15:10:13.273839428 +0000 UTC m=+0.119510645 container attach e8f8ec24851af35ea4fc1525d151f5a89e788a97beb90cef25cf7fb6894d0146 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_jang, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 15:10:13 compute-0 podman[352226]: 2026-01-20 15:10:13.180466029 +0000 UTC m=+0.026137286 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:10:13 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 20 15:10:13 compute-0 ceph-mon[74360]: pgmap v2608: 321 pgs: 321 active+clean; 487 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 1007 KiB/s wr, 94 op/s
Jan 20 15:10:13 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1131765128' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:10:13 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 20 15:10:13 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1131765128' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:10:13 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:10:13 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:10:13 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:10:13.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:10:14 compute-0 nervous_jang[352242]: --> passed data devices: 0 physical, 1 LVM
Jan 20 15:10:14 compute-0 nervous_jang[352242]: --> relative data size: 1.0
Jan 20 15:10:14 compute-0 nervous_jang[352242]: --> All data devices are unavailable
Jan 20 15:10:14 compute-0 systemd[1]: libpod-e8f8ec24851af35ea4fc1525d151f5a89e788a97beb90cef25cf7fb6894d0146.scope: Deactivated successfully.
Jan 20 15:10:14 compute-0 podman[352257]: 2026-01-20 15:10:14.093507078 +0000 UTC m=+0.022893318 container died e8f8ec24851af35ea4fc1525d151f5a89e788a97beb90cef25cf7fb6894d0146 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_jang, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 15:10:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-c95af1b2ec93d9541d3ff2a80b9f4ae6522d9c755d61dd0c51dd6b10c2a5caa2-merged.mount: Deactivated successfully.
Jan 20 15:10:14 compute-0 podman[352257]: 2026-01-20 15:10:14.139067318 +0000 UTC m=+0.068453538 container remove e8f8ec24851af35ea4fc1525d151f5a89e788a97beb90cef25cf7fb6894d0146 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_jang, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True)
Jan 20 15:10:14 compute-0 systemd[1]: libpod-conmon-e8f8ec24851af35ea4fc1525d151f5a89e788a97beb90cef25cf7fb6894d0146.scope: Deactivated successfully.
Jan 20 15:10:14 compute-0 sudo[352118]: pam_unix(sudo:session): session closed for user root
Jan 20 15:10:14 compute-0 sudo[352273]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:10:14 compute-0 sudo[352273]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:10:14 compute-0 sudo[352273]: pam_unix(sudo:session): session closed for user root
Jan 20 15:10:14 compute-0 sudo[352298]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:10:14 compute-0 sudo[352298]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:10:14 compute-0 sudo[352298]: pam_unix(sudo:session): session closed for user root
Jan 20 15:10:14 compute-0 NetworkManager[48960]: <info>  [1768921814.3006] manager: (patch-br-int-to-provnet-b62c391b-f7a3-4a38-a0df-72ac0383ca74): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/298)
Jan 20 15:10:14 compute-0 NetworkManager[48960]: <info>  [1768921814.3015] manager: (patch-provnet-b62c391b-f7a3-4a38-a0df-72ac0383ca74-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/299)
Jan 20 15:10:14 compute-0 nova_compute[250018]: 2026-01-20 15:10:14.299 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:10:14 compute-0 sudo[352323]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:10:14 compute-0 sudo[352323]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:10:14 compute-0 sudo[352323]: pam_unix(sudo:session): session closed for user root
Jan 20 15:10:14 compute-0 sudo[352348]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- lvm list --format json
Jan 20 15:10:14 compute-0 sudo[352348]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:10:14 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2609: 321 pgs: 321 active+clean; 502 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 635 KiB/s rd, 2.1 MiB/s wr, 85 op/s
Jan 20 15:10:14 compute-0 nova_compute[250018]: 2026-01-20 15:10:14.532 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:10:14 compute-0 ovn_controller[148666]: 2026-01-20T15:10:14Z|00612|binding|INFO|Releasing lport 940a1442-b0ab-49a2-87e8-750659cdda8d from this chassis (sb_readonly=0)
Jan 20 15:10:14 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:10:14 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:10:14 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:10:14.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:10:14 compute-0 nova_compute[250018]: 2026-01-20 15:10:14.555 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:10:14 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/1131765128' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:10:14 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/1131765128' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:10:14 compute-0 podman[352414]: 2026-01-20 15:10:14.685888648 +0000 UTC m=+0.036213208 container create dbceb1a42a88bd5e0442c27c8430a33708fb960855f4f96eb438099d4c512a02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_ritchie, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 15:10:14 compute-0 systemd[1]: Started libpod-conmon-dbceb1a42a88bd5e0442c27c8430a33708fb960855f4f96eb438099d4c512a02.scope.
Jan 20 15:10:14 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:10:14 compute-0 podman[352414]: 2026-01-20 15:10:14.757393427 +0000 UTC m=+0.107717987 container init dbceb1a42a88bd5e0442c27c8430a33708fb960855f4f96eb438099d4c512a02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_ritchie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 15:10:14 compute-0 podman[352414]: 2026-01-20 15:10:14.76307512 +0000 UTC m=+0.113399680 container start dbceb1a42a88bd5e0442c27c8430a33708fb960855f4f96eb438099d4c512a02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_ritchie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3)
Jan 20 15:10:14 compute-0 podman[352414]: 2026-01-20 15:10:14.669266269 +0000 UTC m=+0.019590849 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:10:14 compute-0 podman[352414]: 2026-01-20 15:10:14.766275507 +0000 UTC m=+0.116600097 container attach dbceb1a42a88bd5e0442c27c8430a33708fb960855f4f96eb438099d4c512a02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_ritchie, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 15:10:14 compute-0 funny_ritchie[352431]: 167 167
Jan 20 15:10:14 compute-0 systemd[1]: libpod-dbceb1a42a88bd5e0442c27c8430a33708fb960855f4f96eb438099d4c512a02.scope: Deactivated successfully.
Jan 20 15:10:14 compute-0 podman[352414]: 2026-01-20 15:10:14.769542085 +0000 UTC m=+0.119866655 container died dbceb1a42a88bd5e0442c27c8430a33708fb960855f4f96eb438099d4c512a02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_ritchie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 15:10:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-fba6c3bd2fe5eecfb3114e65ea63453471912f9717f6ee4533a82f98e019478a-merged.mount: Deactivated successfully.
Jan 20 15:10:14 compute-0 podman[352414]: 2026-01-20 15:10:14.803520751 +0000 UTC m=+0.153845311 container remove dbceb1a42a88bd5e0442c27c8430a33708fb960855f4f96eb438099d4c512a02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_ritchie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 15:10:14 compute-0 systemd[1]: libpod-conmon-dbceb1a42a88bd5e0442c27c8430a33708fb960855f4f96eb438099d4c512a02.scope: Deactivated successfully.
Jan 20 15:10:14 compute-0 nova_compute[250018]: 2026-01-20 15:10:14.956 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:10:14 compute-0 podman[352454]: 2026-01-20 15:10:14.962552841 +0000 UTC m=+0.043536365 container create 2708f9f52cec872826b0a3d26bb33f1820d748c66e602dbac381dc8c0d243f2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_albattani, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 20 15:10:14 compute-0 systemd[1]: Started libpod-conmon-2708f9f52cec872826b0a3d26bb33f1820d748c66e602dbac381dc8c0d243f2c.scope.
Jan 20 15:10:15 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:10:15 compute-0 podman[352454]: 2026-01-20 15:10:14.942225973 +0000 UTC m=+0.023209497 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:10:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1217de1a9bb2dd14761889bc5232ac829a68e2855db5619996a3f547ad9df183/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 15:10:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1217de1a9bb2dd14761889bc5232ac829a68e2855db5619996a3f547ad9df183/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 15:10:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1217de1a9bb2dd14761889bc5232ac829a68e2855db5619996a3f547ad9df183/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 15:10:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1217de1a9bb2dd14761889bc5232ac829a68e2855db5619996a3f547ad9df183/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 15:10:15 compute-0 podman[352454]: 2026-01-20 15:10:15.065238331 +0000 UTC m=+0.146221835 container init 2708f9f52cec872826b0a3d26bb33f1820d748c66e602dbac381dc8c0d243f2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_albattani, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 15:10:15 compute-0 podman[352454]: 2026-01-20 15:10:15.071921181 +0000 UTC m=+0.152904675 container start 2708f9f52cec872826b0a3d26bb33f1820d748c66e602dbac381dc8c0d243f2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_albattani, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 15:10:15 compute-0 podman[352454]: 2026-01-20 15:10:15.089867356 +0000 UTC m=+0.170850850 container attach 2708f9f52cec872826b0a3d26bb33f1820d748c66e602dbac381dc8c0d243f2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_albattani, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 15:10:15 compute-0 ceph-mon[74360]: pgmap v2609: 321 pgs: 321 active+clean; 502 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 635 KiB/s rd, 2.1 MiB/s wr, 85 op/s
Jan 20 15:10:15 compute-0 pedantic_albattani[352470]: {
Jan 20 15:10:15 compute-0 pedantic_albattani[352470]:     "0": [
Jan 20 15:10:15 compute-0 pedantic_albattani[352470]:         {
Jan 20 15:10:15 compute-0 pedantic_albattani[352470]:             "devices": [
Jan 20 15:10:15 compute-0 pedantic_albattani[352470]:                 "/dev/loop3"
Jan 20 15:10:15 compute-0 pedantic_albattani[352470]:             ],
Jan 20 15:10:15 compute-0 pedantic_albattani[352470]:             "lv_name": "ceph_lv0",
Jan 20 15:10:15 compute-0 pedantic_albattani[352470]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 15:10:15 compute-0 pedantic_albattani[352470]:             "lv_size": "7511998464",
Jan 20 15:10:15 compute-0 pedantic_albattani[352470]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e399cf45-e6b6-5393-99f1-75c601d3f188,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afc3740a-ccee-46f8-b343-22648cc89376,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 20 15:10:15 compute-0 pedantic_albattani[352470]:             "lv_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 15:10:15 compute-0 pedantic_albattani[352470]:             "name": "ceph_lv0",
Jan 20 15:10:15 compute-0 pedantic_albattani[352470]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 15:10:15 compute-0 pedantic_albattani[352470]:             "tags": {
Jan 20 15:10:15 compute-0 pedantic_albattani[352470]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 15:10:15 compute-0 pedantic_albattani[352470]:                 "ceph.block_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 15:10:15 compute-0 pedantic_albattani[352470]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 15:10:15 compute-0 pedantic_albattani[352470]:                 "ceph.cluster_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 15:10:15 compute-0 pedantic_albattani[352470]:                 "ceph.cluster_name": "ceph",
Jan 20 15:10:15 compute-0 pedantic_albattani[352470]:                 "ceph.crush_device_class": "",
Jan 20 15:10:15 compute-0 pedantic_albattani[352470]:                 "ceph.encrypted": "0",
Jan 20 15:10:15 compute-0 pedantic_albattani[352470]:                 "ceph.osd_fsid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 15:10:15 compute-0 pedantic_albattani[352470]:                 "ceph.osd_id": "0",
Jan 20 15:10:15 compute-0 pedantic_albattani[352470]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 15:10:15 compute-0 pedantic_albattani[352470]:                 "ceph.type": "block",
Jan 20 15:10:15 compute-0 pedantic_albattani[352470]:                 "ceph.vdo": "0"
Jan 20 15:10:15 compute-0 pedantic_albattani[352470]:             },
Jan 20 15:10:15 compute-0 pedantic_albattani[352470]:             "type": "block",
Jan 20 15:10:15 compute-0 pedantic_albattani[352470]:             "vg_name": "ceph_vg0"
Jan 20 15:10:15 compute-0 pedantic_albattani[352470]:         }
Jan 20 15:10:15 compute-0 pedantic_albattani[352470]:     ]
Jan 20 15:10:15 compute-0 pedantic_albattani[352470]: }
Jan 20 15:10:15 compute-0 systemd[1]: libpod-2708f9f52cec872826b0a3d26bb33f1820d748c66e602dbac381dc8c0d243f2c.scope: Deactivated successfully.
Jan 20 15:10:15 compute-0 podman[352454]: 2026-01-20 15:10:15.783875006 +0000 UTC m=+0.864858500 container died 2708f9f52cec872826b0a3d26bb33f1820d748c66e602dbac381dc8c0d243f2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_albattani, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 20 15:10:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-1217de1a9bb2dd14761889bc5232ac829a68e2855db5619996a3f547ad9df183-merged.mount: Deactivated successfully.
Jan 20 15:10:15 compute-0 podman[352454]: 2026-01-20 15:10:15.825266393 +0000 UTC m=+0.906249887 container remove 2708f9f52cec872826b0a3d26bb33f1820d748c66e602dbac381dc8c0d243f2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_albattani, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 20 15:10:15 compute-0 systemd[1]: libpod-conmon-2708f9f52cec872826b0a3d26bb33f1820d748c66e602dbac381dc8c0d243f2c.scope: Deactivated successfully.
Jan 20 15:10:15 compute-0 sudo[352348]: pam_unix(sudo:session): session closed for user root
Jan 20 15:10:15 compute-0 sudo[352490]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:10:15 compute-0 sudo[352490]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:10:15 compute-0 sudo[352490]: pam_unix(sudo:session): session closed for user root
Jan 20 15:10:15 compute-0 sudo[352515]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:10:16 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:10:16 compute-0 sudo[352515]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:10:16 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:10:16 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:10:16.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:10:16 compute-0 nova_compute[250018]: 2026-01-20 15:10:16.006 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:10:16 compute-0 sudo[352515]: pam_unix(sudo:session): session closed for user root
Jan 20 15:10:16 compute-0 sudo[352540]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:10:16 compute-0 sudo[352540]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:10:16 compute-0 sudo[352540]: pam_unix(sudo:session): session closed for user root
Jan 20 15:10:16 compute-0 sudo[352565]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- raw list --format json
Jan 20 15:10:16 compute-0 sudo[352565]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:10:16 compute-0 podman[352631]: 2026-01-20 15:10:16.399354309 +0000 UTC m=+0.032570139 container create 067fe711b9cde62187c70e1e93339ecf0b58a46adf03f11290828a08ea1d0cba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_tesla, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 15:10:16 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2610: 321 pgs: 321 active+clean; 511 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 721 KiB/s rd, 2.2 MiB/s wr, 101 op/s
Jan 20 15:10:16 compute-0 systemd[1]: Started libpod-conmon-067fe711b9cde62187c70e1e93339ecf0b58a46adf03f11290828a08ea1d0cba.scope.
Jan 20 15:10:16 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:10:16 compute-0 podman[352631]: 2026-01-20 15:10:16.469641165 +0000 UTC m=+0.102857045 container init 067fe711b9cde62187c70e1e93339ecf0b58a46adf03f11290828a08ea1d0cba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_tesla, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 20 15:10:16 compute-0 podman[352631]: 2026-01-20 15:10:16.476820058 +0000 UTC m=+0.110035888 container start 067fe711b9cde62187c70e1e93339ecf0b58a46adf03f11290828a08ea1d0cba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_tesla, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 20 15:10:16 compute-0 podman[352631]: 2026-01-20 15:10:16.479537302 +0000 UTC m=+0.112753192 container attach 067fe711b9cde62187c70e1e93339ecf0b58a46adf03f11290828a08ea1d0cba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_tesla, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 15:10:16 compute-0 thirsty_tesla[352648]: 167 167
Jan 20 15:10:16 compute-0 podman[352631]: 2026-01-20 15:10:16.385586437 +0000 UTC m=+0.018802297 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:10:16 compute-0 systemd[1]: libpod-067fe711b9cde62187c70e1e93339ecf0b58a46adf03f11290828a08ea1d0cba.scope: Deactivated successfully.
Jan 20 15:10:16 compute-0 conmon[352648]: conmon 067fe711b9cde62187c7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-067fe711b9cde62187c70e1e93339ecf0b58a46adf03f11290828a08ea1d0cba.scope/container/memory.events
Jan 20 15:10:16 compute-0 podman[352631]: 2026-01-20 15:10:16.484448445 +0000 UTC m=+0.117664315 container died 067fe711b9cde62187c70e1e93339ecf0b58a46adf03f11290828a08ea1d0cba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_tesla, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:10:16 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e385 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:10:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-cfe6b09bdcc97177d8bb91010b6e7af307e1bc0ed44e03417b5c45409467b35a-merged.mount: Deactivated successfully.
Jan 20 15:10:16 compute-0 podman[352631]: 2026-01-20 15:10:16.524269238 +0000 UTC m=+0.157485068 container remove 067fe711b9cde62187c70e1e93339ecf0b58a46adf03f11290828a08ea1d0cba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_tesla, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 15:10:16 compute-0 systemd[1]: libpod-conmon-067fe711b9cde62187c70e1e93339ecf0b58a46adf03f11290828a08ea1d0cba.scope: Deactivated successfully.
Jan 20 15:10:16 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:10:16 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:10:16 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:10:16.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:10:16 compute-0 ceph-mon[74360]: pgmap v2610: 321 pgs: 321 active+clean; 511 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 721 KiB/s rd, 2.2 MiB/s wr, 101 op/s
Jan 20 15:10:16 compute-0 nova_compute[250018]: 2026-01-20 15:10:16.608 250022 DEBUG nova.compute.manager [req-b4be0a5d-d2d4-463a-882b-65187d28574a req-35613700-c0f8-4356-903c-5d673b4011a1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7f408ee5-e644-4a37-9cd2-3db94e56c638] Received event network-changed-e51d2161-817b-46db-b0ce-4b313d293d7f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:10:16 compute-0 nova_compute[250018]: 2026-01-20 15:10:16.609 250022 DEBUG nova.compute.manager [req-b4be0a5d-d2d4-463a-882b-65187d28574a req-35613700-c0f8-4356-903c-5d673b4011a1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7f408ee5-e644-4a37-9cd2-3db94e56c638] Refreshing instance network info cache due to event network-changed-e51d2161-817b-46db-b0ce-4b313d293d7f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 15:10:16 compute-0 nova_compute[250018]: 2026-01-20 15:10:16.609 250022 DEBUG oslo_concurrency.lockutils [req-b4be0a5d-d2d4-463a-882b-65187d28574a req-35613700-c0f8-4356-903c-5d673b4011a1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-7f408ee5-e644-4a37-9cd2-3db94e56c638" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:10:16 compute-0 nova_compute[250018]: 2026-01-20 15:10:16.609 250022 DEBUG oslo_concurrency.lockutils [req-b4be0a5d-d2d4-463a-882b-65187d28574a req-35613700-c0f8-4356-903c-5d673b4011a1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-7f408ee5-e644-4a37-9cd2-3db94e56c638" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:10:16 compute-0 nova_compute[250018]: 2026-01-20 15:10:16.609 250022 DEBUG nova.network.neutron [req-b4be0a5d-d2d4-463a-882b-65187d28574a req-35613700-c0f8-4356-903c-5d673b4011a1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7f408ee5-e644-4a37-9cd2-3db94e56c638] Refreshing network info cache for port e51d2161-817b-46db-b0ce-4b313d293d7f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 15:10:16 compute-0 podman[352672]: 2026-01-20 15:10:16.704277104 +0000 UTC m=+0.045689093 container create b59639250647a2f478eeb6bc93ef6c3af3b49ff96f1ad5c694b8f1995b1b91a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_haslett, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 15:10:16 compute-0 systemd[1]: Started libpod-conmon-b59639250647a2f478eeb6bc93ef6c3af3b49ff96f1ad5c694b8f1995b1b91a1.scope.
Jan 20 15:10:16 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:10:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d33f73a8c39cbf472ab73a6d55c4db1dfc7c4cd887d99530ebd9f402f76800f4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 15:10:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d33f73a8c39cbf472ab73a6d55c4db1dfc7c4cd887d99530ebd9f402f76800f4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 15:10:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d33f73a8c39cbf472ab73a6d55c4db1dfc7c4cd887d99530ebd9f402f76800f4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 15:10:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d33f73a8c39cbf472ab73a6d55c4db1dfc7c4cd887d99530ebd9f402f76800f4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 15:10:16 compute-0 podman[352672]: 2026-01-20 15:10:16.686075643 +0000 UTC m=+0.027487602 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:10:16 compute-0 podman[352672]: 2026-01-20 15:10:16.783546992 +0000 UTC m=+0.124958991 container init b59639250647a2f478eeb6bc93ef6c3af3b49ff96f1ad5c694b8f1995b1b91a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_haslett, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 20 15:10:16 compute-0 podman[352672]: 2026-01-20 15:10:16.793373927 +0000 UTC m=+0.134785886 container start b59639250647a2f478eeb6bc93ef6c3af3b49ff96f1ad5c694b8f1995b1b91a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_haslett, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 20 15:10:16 compute-0 podman[352672]: 2026-01-20 15:10:16.79640912 +0000 UTC m=+0.137821109 container attach b59639250647a2f478eeb6bc93ef6c3af3b49ff96f1ad5c694b8f1995b1b91a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_haslett, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 15:10:17 compute-0 nova_compute[250018]: 2026-01-20 15:10:17.033 250022 DEBUG nova.compute.manager [req-9fc007bd-da15-443e-bf5e-b2a582e76dbd req-e4760216-1b44-45ff-8851-30a6f82052a3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7f408ee5-e644-4a37-9cd2-3db94e56c638] Received event network-changed-e51d2161-817b-46db-b0ce-4b313d293d7f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:10:17 compute-0 nova_compute[250018]: 2026-01-20 15:10:17.033 250022 DEBUG nova.compute.manager [req-9fc007bd-da15-443e-bf5e-b2a582e76dbd req-e4760216-1b44-45ff-8851-30a6f82052a3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7f408ee5-e644-4a37-9cd2-3db94e56c638] Refreshing instance network info cache due to event network-changed-e51d2161-817b-46db-b0ce-4b313d293d7f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 15:10:17 compute-0 nova_compute[250018]: 2026-01-20 15:10:17.033 250022 DEBUG oslo_concurrency.lockutils [req-9fc007bd-da15-443e-bf5e-b2a582e76dbd req-e4760216-1b44-45ff-8851-30a6f82052a3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-7f408ee5-e644-4a37-9cd2-3db94e56c638" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:10:17 compute-0 fervent_haslett[352688]: {
Jan 20 15:10:17 compute-0 fervent_haslett[352688]:     "afc3740a-ccee-46f8-b343-22648cc89376": {
Jan 20 15:10:17 compute-0 fervent_haslett[352688]:         "ceph_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 15:10:17 compute-0 fervent_haslett[352688]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 20 15:10:17 compute-0 fervent_haslett[352688]:         "osd_id": 0,
Jan 20 15:10:17 compute-0 fervent_haslett[352688]:         "osd_uuid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 15:10:17 compute-0 fervent_haslett[352688]:         "type": "bluestore"
Jan 20 15:10:17 compute-0 fervent_haslett[352688]:     }
Jan 20 15:10:17 compute-0 fervent_haslett[352688]: }
Jan 20 15:10:17 compute-0 systemd[1]: libpod-b59639250647a2f478eeb6bc93ef6c3af3b49ff96f1ad5c694b8f1995b1b91a1.scope: Deactivated successfully.
Jan 20 15:10:17 compute-0 podman[352672]: 2026-01-20 15:10:17.571066576 +0000 UTC m=+0.912478555 container died b59639250647a2f478eeb6bc93ef6c3af3b49ff96f1ad5c694b8f1995b1b91a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_haslett, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default)
Jan 20 15:10:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-d33f73a8c39cbf472ab73a6d55c4db1dfc7c4cd887d99530ebd9f402f76800f4-merged.mount: Deactivated successfully.
Jan 20 15:10:17 compute-0 podman[352672]: 2026-01-20 15:10:17.627156649 +0000 UTC m=+0.968568608 container remove b59639250647a2f478eeb6bc93ef6c3af3b49ff96f1ad5c694b8f1995b1b91a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_haslett, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2)
Jan 20 15:10:17 compute-0 systemd[1]: libpod-conmon-b59639250647a2f478eeb6bc93ef6c3af3b49ff96f1ad5c694b8f1995b1b91a1.scope: Deactivated successfully.
Jan 20 15:10:17 compute-0 sudo[352565]: pam_unix(sudo:session): session closed for user root
Jan 20 15:10:17 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 15:10:17 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:10:17 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 15:10:17 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:10:17 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev fad8e667-15c5-4c85-ac15-0647f60c0be4 does not exist
Jan 20 15:10:17 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 75b8856a-888e-437b-8137-a448620e4154 does not exist
Jan 20 15:10:17 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev f6f66c0e-139b-498e-ba6f-9d45956d9974 does not exist
Jan 20 15:10:17 compute-0 ceph-mon[74360]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #129. Immutable memtables: 0.
Jan 20 15:10:17 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:10:17.696080) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 15:10:17 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:856] [default] [JOB 77] Flushing memtable with next log file: 129
Jan 20 15:10:17 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768921817696152, "job": 77, "event": "flush_started", "num_memtables": 1, "num_entries": 1743, "num_deletes": 255, "total_data_size": 2780515, "memory_usage": 2826048, "flush_reason": "Manual Compaction"}
Jan 20 15:10:17 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:885] [default] [JOB 77] Level-0 flush table #130: started
Jan 20 15:10:17 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768921817709637, "cf_name": "default", "job": 77, "event": "table_file_creation", "file_number": 130, "file_size": 2722828, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 57150, "largest_seqno": 58892, "table_properties": {"data_size": 2714791, "index_size": 4786, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2181, "raw_key_size": 17713, "raw_average_key_size": 20, "raw_value_size": 2698424, "raw_average_value_size": 3182, "num_data_blocks": 207, "num_entries": 848, "num_filter_entries": 848, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768921681, "oldest_key_time": 1768921681, "file_creation_time": 1768921817, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1688fb6f-f397-4aac-a0e8-874ff91d97a7", "db_session_id": "5V3N2TVXYZBCXP55EZHK", "orig_file_number": 130, "seqno_to_time_mapping": "N/A"}}
Jan 20 15:10:17 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 77] Flush lasted 13657 microseconds, and 5575 cpu microseconds.
Jan 20 15:10:17 compute-0 ceph-mon[74360]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 15:10:17 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:10:17.709757) [db/flush_job.cc:967] [default] [JOB 77] Level-0 flush table #130: 2722828 bytes OK
Jan 20 15:10:17 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:10:17.709799) [db/memtable_list.cc:519] [default] Level-0 commit table #130 started
Jan 20 15:10:17 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:10:17.711178) [db/memtable_list.cc:722] [default] Level-0 commit table #130: memtable #1 done
Jan 20 15:10:17 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:10:17.711194) EVENT_LOG_v1 {"time_micros": 1768921817711189, "job": 77, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 15:10:17 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:10:17.711211) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 15:10:17 compute-0 ceph-mon[74360]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 77] Try to delete WAL files size 2773075, prev total WAL file size 2773075, number of live WAL files 2.
Jan 20 15:10:17 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000126.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:10:17 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:10:17.712496) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035323731' seq:72057594037927935, type:22 .. '7061786F730035353233' seq:0, type:0; will stop at (end)
Jan 20 15:10:17 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 78] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 15:10:17 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 77 Base level 0, inputs: [130(2659KB)], [128(10MB)]
Jan 20 15:10:17 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768921817712580, "job": 78, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [130], "files_L6": [128], "score": -1, "input_data_size": 14183446, "oldest_snapshot_seqno": -1}
Jan 20 15:10:17 compute-0 sudo[352723]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:10:17 compute-0 sudo[352723]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:10:17 compute-0 sudo[352723]: pam_unix(sudo:session): session closed for user root
Jan 20 15:10:17 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 78] Generated table #131: 8785 keys, 12255164 bytes, temperature: kUnknown
Jan 20 15:10:17 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768921817794869, "cf_name": "default", "job": 78, "event": "table_file_creation", "file_number": 131, "file_size": 12255164, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12196412, "index_size": 35663, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22021, "raw_key_size": 230240, "raw_average_key_size": 26, "raw_value_size": 12039904, "raw_average_value_size": 1370, "num_data_blocks": 1372, "num_entries": 8785, "num_filter_entries": 8785, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768917305, "oldest_key_time": 0, "file_creation_time": 1768921817, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1688fb6f-f397-4aac-a0e8-874ff91d97a7", "db_session_id": "5V3N2TVXYZBCXP55EZHK", "orig_file_number": 131, "seqno_to_time_mapping": "N/A"}}
Jan 20 15:10:17 compute-0 ceph-mon[74360]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 15:10:17 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:10:17.795077) [db/compaction/compaction_job.cc:1663] [default] [JOB 78] Compacted 1@0 + 1@6 files to L6 => 12255164 bytes
Jan 20 15:10:17 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:10:17.796256) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 172.3 rd, 148.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.6, 10.9 +0.0 blob) out(11.7 +0.0 blob), read-write-amplify(9.7) write-amplify(4.5) OK, records in: 9314, records dropped: 529 output_compression: NoCompression
Jan 20 15:10:17 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:10:17.796270) EVENT_LOG_v1 {"time_micros": 1768921817796263, "job": 78, "event": "compaction_finished", "compaction_time_micros": 82336, "compaction_time_cpu_micros": 29174, "output_level": 6, "num_output_files": 1, "total_output_size": 12255164, "num_input_records": 9314, "num_output_records": 8785, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 15:10:17 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000130.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:10:17 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768921817796801, "job": 78, "event": "table_file_deletion", "file_number": 130}
Jan 20 15:10:17 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000128.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:10:17 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768921817798679, "job": 78, "event": "table_file_deletion", "file_number": 128}
Jan 20 15:10:17 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:10:17.712185) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:10:17 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:10:17.798728) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:10:17 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:10:17.798732) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:10:17 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:10:17.798734) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:10:17 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:10:17.798736) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:10:17 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:10:17.798737) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:10:17 compute-0 sudo[352748]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 15:10:17 compute-0 sudo[352748]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:10:17 compute-0 sudo[352748]: pam_unix(sudo:session): session closed for user root
Jan 20 15:10:18 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:10:18 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:10:18 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:10:18.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:10:18 compute-0 nova_compute[250018]: 2026-01-20 15:10:18.285 250022 DEBUG nova.network.neutron [req-b4be0a5d-d2d4-463a-882b-65187d28574a req-35613700-c0f8-4356-903c-5d673b4011a1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7f408ee5-e644-4a37-9cd2-3db94e56c638] Updated VIF entry in instance network info cache for port e51d2161-817b-46db-b0ce-4b313d293d7f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 15:10:18 compute-0 nova_compute[250018]: 2026-01-20 15:10:18.286 250022 DEBUG nova.network.neutron [req-b4be0a5d-d2d4-463a-882b-65187d28574a req-35613700-c0f8-4356-903c-5d673b4011a1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7f408ee5-e644-4a37-9cd2-3db94e56c638] Updating instance_info_cache with network_info: [{"id": "e51d2161-817b-46db-b0ce-4b313d293d7f", "address": "fa:16:3e:89:1a:96", "network": {"id": "e22d6ddc-0339-4395-bc21-95081825f05b", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1496899124-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5b43342be22543f79d4a56e26c6d0c96", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape51d2161-81", "ovs_interfaceid": "e51d2161-817b-46db-b0ce-4b313d293d7f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:10:18 compute-0 nova_compute[250018]: 2026-01-20 15:10:18.307 250022 DEBUG oslo_concurrency.lockutils [req-b4be0a5d-d2d4-463a-882b-65187d28574a req-35613700-c0f8-4356-903c-5d673b4011a1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-7f408ee5-e644-4a37-9cd2-3db94e56c638" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:10:18 compute-0 nova_compute[250018]: 2026-01-20 15:10:18.308 250022 DEBUG oslo_concurrency.lockutils [req-9fc007bd-da15-443e-bf5e-b2a582e76dbd req-e4760216-1b44-45ff-8851-30a6f82052a3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-7f408ee5-e644-4a37-9cd2-3db94e56c638" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:10:18 compute-0 nova_compute[250018]: 2026-01-20 15:10:18.308 250022 DEBUG nova.network.neutron [req-9fc007bd-da15-443e-bf5e-b2a582e76dbd req-e4760216-1b44-45ff-8851-30a6f82052a3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7f408ee5-e644-4a37-9cd2-3db94e56c638] Refreshing network info cache for port e51d2161-817b-46db-b0ce-4b313d293d7f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 15:10:18 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2611: 321 pgs: 321 active+clean; 511 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 376 KiB/s rd, 2.2 MiB/s wr, 74 op/s
Jan 20 15:10:18 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:10:18 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:10:18 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:10:18.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:10:18 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:10:18 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:10:18 compute-0 ceph-mon[74360]: pgmap v2611: 321 pgs: 321 active+clean; 511 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 376 KiB/s rd, 2.2 MiB/s wr, 74 op/s
Jan 20 15:10:19 compute-0 sudo[352774]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:10:19 compute-0 sudo[352774]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:10:19 compute-0 sudo[352774]: pam_unix(sudo:session): session closed for user root
Jan 20 15:10:19 compute-0 sudo[352799]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:10:19 compute-0 sudo[352799]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:10:19 compute-0 nova_compute[250018]: 2026-01-20 15:10:19.147 250022 DEBUG nova.compute.manager [req-3d7e9ee8-d5c2-43ca-864f-e80c561cbf4a req-76b8ed93-9260-4748-8080-88e6373e8b3d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7f408ee5-e644-4a37-9cd2-3db94e56c638] Received event network-changed-e51d2161-817b-46db-b0ce-4b313d293d7f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:10:19 compute-0 nova_compute[250018]: 2026-01-20 15:10:19.148 250022 DEBUG nova.compute.manager [req-3d7e9ee8-d5c2-43ca-864f-e80c561cbf4a req-76b8ed93-9260-4748-8080-88e6373e8b3d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7f408ee5-e644-4a37-9cd2-3db94e56c638] Refreshing instance network info cache due to event network-changed-e51d2161-817b-46db-b0ce-4b313d293d7f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 15:10:19 compute-0 nova_compute[250018]: 2026-01-20 15:10:19.148 250022 DEBUG oslo_concurrency.lockutils [req-3d7e9ee8-d5c2-43ca-864f-e80c561cbf4a req-76b8ed93-9260-4748-8080-88e6373e8b3d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-7f408ee5-e644-4a37-9cd2-3db94e56c638" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:10:19 compute-0 sudo[352799]: pam_unix(sudo:session): session closed for user root
Jan 20 15:10:19 compute-0 nova_compute[250018]: 2026-01-20 15:10:19.961 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:10:20 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:10:20 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:10:20 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:10:20.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:10:20 compute-0 nova_compute[250018]: 2026-01-20 15:10:20.274 250022 DEBUG nova.network.neutron [req-9fc007bd-da15-443e-bf5e-b2a582e76dbd req-e4760216-1b44-45ff-8851-30a6f82052a3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7f408ee5-e644-4a37-9cd2-3db94e56c638] Updated VIF entry in instance network info cache for port e51d2161-817b-46db-b0ce-4b313d293d7f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 15:10:20 compute-0 nova_compute[250018]: 2026-01-20 15:10:20.275 250022 DEBUG nova.network.neutron [req-9fc007bd-da15-443e-bf5e-b2a582e76dbd req-e4760216-1b44-45ff-8851-30a6f82052a3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7f408ee5-e644-4a37-9cd2-3db94e56c638] Updating instance_info_cache with network_info: [{"id": "e51d2161-817b-46db-b0ce-4b313d293d7f", "address": "fa:16:3e:89:1a:96", "network": {"id": "e22d6ddc-0339-4395-bc21-95081825f05b", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1496899124-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5b43342be22543f79d4a56e26c6d0c96", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape51d2161-81", "ovs_interfaceid": "e51d2161-817b-46db-b0ce-4b313d293d7f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:10:20 compute-0 nova_compute[250018]: 2026-01-20 15:10:20.290 250022 DEBUG oslo_concurrency.lockutils [req-9fc007bd-da15-443e-bf5e-b2a582e76dbd req-e4760216-1b44-45ff-8851-30a6f82052a3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-7f408ee5-e644-4a37-9cd2-3db94e56c638" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:10:20 compute-0 nova_compute[250018]: 2026-01-20 15:10:20.290 250022 DEBUG oslo_concurrency.lockutils [req-3d7e9ee8-d5c2-43ca-864f-e80c561cbf4a req-76b8ed93-9260-4748-8080-88e6373e8b3d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-7f408ee5-e644-4a37-9cd2-3db94e56c638" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:10:20 compute-0 nova_compute[250018]: 2026-01-20 15:10:20.291 250022 DEBUG nova.network.neutron [req-3d7e9ee8-d5c2-43ca-864f-e80c561cbf4a req-76b8ed93-9260-4748-8080-88e6373e8b3d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7f408ee5-e644-4a37-9cd2-3db94e56c638] Refreshing network info cache for port e51d2161-817b-46db-b0ce-4b313d293d7f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 15:10:20 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2612: 321 pgs: 321 active+clean; 511 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 399 KiB/s rd, 2.2 MiB/s wr, 78 op/s
Jan 20 15:10:20 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:10:20 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:10:20 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:10:20.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:10:21 compute-0 nova_compute[250018]: 2026-01-20 15:10:21.009 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:10:21 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e385 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:10:21 compute-0 ceph-mon[74360]: pgmap v2612: 321 pgs: 321 active+clean; 511 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 399 KiB/s rd, 2.2 MiB/s wr, 78 op/s
Jan 20 15:10:22 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:10:22 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:10:22 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:10:22.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:10:22 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2613: 321 pgs: 321 active+clean; 510 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 398 KiB/s rd, 1.6 MiB/s wr, 73 op/s
Jan 20 15:10:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:10:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:10:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:10:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:10:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:10:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:10:22 compute-0 nova_compute[250018]: 2026-01-20 15:10:22.559 250022 DEBUG nova.network.neutron [req-3d7e9ee8-d5c2-43ca-864f-e80c561cbf4a req-76b8ed93-9260-4748-8080-88e6373e8b3d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7f408ee5-e644-4a37-9cd2-3db94e56c638] Updated VIF entry in instance network info cache for port e51d2161-817b-46db-b0ce-4b313d293d7f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 15:10:22 compute-0 nova_compute[250018]: 2026-01-20 15:10:22.559 250022 DEBUG nova.network.neutron [req-3d7e9ee8-d5c2-43ca-864f-e80c561cbf4a req-76b8ed93-9260-4748-8080-88e6373e8b3d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7f408ee5-e644-4a37-9cd2-3db94e56c638] Updating instance_info_cache with network_info: [{"id": "e51d2161-817b-46db-b0ce-4b313d293d7f", "address": "fa:16:3e:89:1a:96", "network": {"id": "e22d6ddc-0339-4395-bc21-95081825f05b", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1496899124-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5b43342be22543f79d4a56e26c6d0c96", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape51d2161-81", "ovs_interfaceid": "e51d2161-817b-46db-b0ce-4b313d293d7f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:10:22 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:10:22 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:10:22 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:10:22.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:10:22 compute-0 nova_compute[250018]: 2026-01-20 15:10:22.575 250022 DEBUG oslo_concurrency.lockutils [req-3d7e9ee8-d5c2-43ca-864f-e80c561cbf4a req-76b8ed93-9260-4748-8080-88e6373e8b3d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-7f408ee5-e644-4a37-9cd2-3db94e56c638" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:10:22 compute-0 ceph-mon[74360]: pgmap v2613: 321 pgs: 321 active+clean; 510 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 398 KiB/s rd, 1.6 MiB/s wr, 73 op/s
Jan 20 15:10:23 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/3238774180' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:10:24 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:10:24 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:10:24 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:10:24.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:10:24 compute-0 nova_compute[250018]: 2026-01-20 15:10:24.393 250022 DEBUG nova.compute.manager [req-0134d1ab-799f-4ac9-9ac0-6b6a1ae3e9da req-e6487fcc-f485-48d6-af26-b899614bec6f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7f408ee5-e644-4a37-9cd2-3db94e56c638] Received event network-changed-e51d2161-817b-46db-b0ce-4b313d293d7f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:10:24 compute-0 nova_compute[250018]: 2026-01-20 15:10:24.393 250022 DEBUG nova.compute.manager [req-0134d1ab-799f-4ac9-9ac0-6b6a1ae3e9da req-e6487fcc-f485-48d6-af26-b899614bec6f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7f408ee5-e644-4a37-9cd2-3db94e56c638] Refreshing instance network info cache due to event network-changed-e51d2161-817b-46db-b0ce-4b313d293d7f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 15:10:24 compute-0 nova_compute[250018]: 2026-01-20 15:10:24.393 250022 DEBUG oslo_concurrency.lockutils [req-0134d1ab-799f-4ac9-9ac0-6b6a1ae3e9da req-e6487fcc-f485-48d6-af26-b899614bec6f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-7f408ee5-e644-4a37-9cd2-3db94e56c638" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:10:24 compute-0 nova_compute[250018]: 2026-01-20 15:10:24.393 250022 DEBUG oslo_concurrency.lockutils [req-0134d1ab-799f-4ac9-9ac0-6b6a1ae3e9da req-e6487fcc-f485-48d6-af26-b899614bec6f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-7f408ee5-e644-4a37-9cd2-3db94e56c638" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:10:24 compute-0 nova_compute[250018]: 2026-01-20 15:10:24.394 250022 DEBUG nova.network.neutron [req-0134d1ab-799f-4ac9-9ac0-6b6a1ae3e9da req-e6487fcc-f485-48d6-af26-b899614bec6f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7f408ee5-e644-4a37-9cd2-3db94e56c638] Refreshing network info cache for port e51d2161-817b-46db-b0ce-4b313d293d7f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 15:10:24 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2614: 321 pgs: 321 active+clean; 510 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 317 KiB/s rd, 1.2 MiB/s wr, 49 op/s
Jan 20 15:10:24 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:10:24 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:10:24 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:10:24.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:10:24 compute-0 nova_compute[250018]: 2026-01-20 15:10:24.864 250022 DEBUG oslo_concurrency.lockutils [None req-54efe364-484f-4436-be32-c8bd0ade6452 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Acquiring lock "7f408ee5-e644-4a37-9cd2-3db94e56c638" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:10:24 compute-0 nova_compute[250018]: 2026-01-20 15:10:24.864 250022 DEBUG oslo_concurrency.lockutils [None req-54efe364-484f-4436-be32-c8bd0ade6452 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Lock "7f408ee5-e644-4a37-9cd2-3db94e56c638" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:10:24 compute-0 nova_compute[250018]: 2026-01-20 15:10:24.870 250022 INFO nova.compute.manager [None req-54efe364-484f-4436-be32-c8bd0ade6452 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] [instance: 7f408ee5-e644-4a37-9cd2-3db94e56c638] Rebooting instance
Jan 20 15:10:24 compute-0 nova_compute[250018]: 2026-01-20 15:10:24.887 250022 DEBUG oslo_concurrency.lockutils [None req-54efe364-484f-4436-be32-c8bd0ade6452 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Acquiring lock "refresh_cache-7f408ee5-e644-4a37-9cd2-3db94e56c638" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:10:24 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/4078764243' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:10:24 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/4078764243' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:10:24 compute-0 ceph-mon[74360]: pgmap v2614: 321 pgs: 321 active+clean; 510 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 317 KiB/s rd, 1.2 MiB/s wr, 49 op/s
Jan 20 15:10:24 compute-0 nova_compute[250018]: 2026-01-20 15:10:24.963 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:10:25 compute-0 nova_compute[250018]: 2026-01-20 15:10:25.804 250022 DEBUG nova.network.neutron [req-0134d1ab-799f-4ac9-9ac0-6b6a1ae3e9da req-e6487fcc-f485-48d6-af26-b899614bec6f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7f408ee5-e644-4a37-9cd2-3db94e56c638] Updated VIF entry in instance network info cache for port e51d2161-817b-46db-b0ce-4b313d293d7f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 15:10:25 compute-0 nova_compute[250018]: 2026-01-20 15:10:25.805 250022 DEBUG nova.network.neutron [req-0134d1ab-799f-4ac9-9ac0-6b6a1ae3e9da req-e6487fcc-f485-48d6-af26-b899614bec6f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7f408ee5-e644-4a37-9cd2-3db94e56c638] Updating instance_info_cache with network_info: [{"id": "e51d2161-817b-46db-b0ce-4b313d293d7f", "address": "fa:16:3e:89:1a:96", "network": {"id": "e22d6ddc-0339-4395-bc21-95081825f05b", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1496899124-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5b43342be22543f79d4a56e26c6d0c96", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape51d2161-81", "ovs_interfaceid": "e51d2161-817b-46db-b0ce-4b313d293d7f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:10:25 compute-0 nova_compute[250018]: 2026-01-20 15:10:25.823 250022 DEBUG oslo_concurrency.lockutils [req-0134d1ab-799f-4ac9-9ac0-6b6a1ae3e9da req-e6487fcc-f485-48d6-af26-b899614bec6f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-7f408ee5-e644-4a37-9cd2-3db94e56c638" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:10:25 compute-0 nova_compute[250018]: 2026-01-20 15:10:25.824 250022 DEBUG oslo_concurrency.lockutils [None req-54efe364-484f-4436-be32-c8bd0ade6452 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Acquired lock "refresh_cache-7f408ee5-e644-4a37-9cd2-3db94e56c638" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:10:25 compute-0 nova_compute[250018]: 2026-01-20 15:10:25.824 250022 DEBUG nova.network.neutron [None req-54efe364-484f-4436-be32-c8bd0ade6452 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] [instance: 7f408ee5-e644-4a37-9cd2-3db94e56c638] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 15:10:26 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:10:26 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:10:26 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:10:26.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:10:26 compute-0 nova_compute[250018]: 2026-01-20 15:10:26.054 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:10:26 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2615: 321 pgs: 321 active+clean; 493 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 231 KiB/s rd, 146 KiB/s wr, 68 op/s
Jan 20 15:10:26 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e385 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:10:26 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:10:26 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:10:26 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:10:26.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:10:27 compute-0 ceph-mon[74360]: pgmap v2615: 321 pgs: 321 active+clean; 493 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 231 KiB/s rd, 146 KiB/s wr, 68 op/s
Jan 20 15:10:27 compute-0 podman[352829]: 2026-01-20 15:10:27.516633548 +0000 UTC m=+0.095311942 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2)
Jan 20 15:10:27 compute-0 podman[352828]: 2026-01-20 15:10:27.532968219 +0000 UTC m=+0.107630914 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 20 15:10:28 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:10:28 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:10:28 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:10:28.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:10:28 compute-0 nova_compute[250018]: 2026-01-20 15:10:28.196 250022 DEBUG nova.network.neutron [None req-54efe364-484f-4436-be32-c8bd0ade6452 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] [instance: 7f408ee5-e644-4a37-9cd2-3db94e56c638] Updating instance_info_cache with network_info: [{"id": "e51d2161-817b-46db-b0ce-4b313d293d7f", "address": "fa:16:3e:89:1a:96", "network": {"id": "e22d6ddc-0339-4395-bc21-95081825f05b", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1496899124-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5b43342be22543f79d4a56e26c6d0c96", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape51d2161-81", "ovs_interfaceid": "e51d2161-817b-46db-b0ce-4b313d293d7f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:10:28 compute-0 nova_compute[250018]: 2026-01-20 15:10:28.281 250022 DEBUG oslo_concurrency.lockutils [None req-54efe364-484f-4436-be32-c8bd0ade6452 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Releasing lock "refresh_cache-7f408ee5-e644-4a37-9cd2-3db94e56c638" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:10:28 compute-0 nova_compute[250018]: 2026-01-20 15:10:28.282 250022 DEBUG nova.compute.manager [None req-54efe364-484f-4436-be32-c8bd0ade6452 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] [instance: 7f408ee5-e644-4a37-9cd2-3db94e56c638] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:10:28 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2616: 321 pgs: 321 active+clean; 493 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 48 KiB/s rd, 20 KiB/s wr, 36 op/s
Jan 20 15:10:28 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:10:28 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:10:28 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:10:28.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:10:29 compute-0 ceph-mon[74360]: pgmap v2616: 321 pgs: 321 active+clean; 493 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 48 KiB/s rd, 20 KiB/s wr, 36 op/s
Jan 20 15:10:29 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/3315978454' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:10:29 compute-0 nova_compute[250018]: 2026-01-20 15:10:29.967 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:10:30 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:10:30 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:10:30 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:10:30.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:10:30 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2617: 321 pgs: 321 active+clean; 429 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 72 KiB/s rd, 29 KiB/s wr, 67 op/s
Jan 20 15:10:30 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:10:30 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:10:30 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:10:30.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:10:30 compute-0 kernel: tape51d2161-81 (unregistering): left promiscuous mode
Jan 20 15:10:30 compute-0 NetworkManager[48960]: <info>  [1768921830.7055] device (tape51d2161-81): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 15:10:30 compute-0 ovn_controller[148666]: 2026-01-20T15:10:30Z|00613|binding|INFO|Releasing lport e51d2161-817b-46db-b0ce-4b313d293d7f from this chassis (sb_readonly=0)
Jan 20 15:10:30 compute-0 ovn_controller[148666]: 2026-01-20T15:10:30Z|00614|binding|INFO|Setting lport e51d2161-817b-46db-b0ce-4b313d293d7f down in Southbound
Jan 20 15:10:30 compute-0 nova_compute[250018]: 2026-01-20 15:10:30.711 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:10:30 compute-0 ovn_controller[148666]: 2026-01-20T15:10:30Z|00615|binding|INFO|Removing iface tape51d2161-81 ovn-installed in OVS
Jan 20 15:10:30 compute-0 nova_compute[250018]: 2026-01-20 15:10:30.713 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:10:30 compute-0 nova_compute[250018]: 2026-01-20 15:10:30.730 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:10:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:10:30.741 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:89:1a:96 10.100.0.5'], port_security=['fa:16:3e:89:1a:96 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '7f408ee5-e644-4a37-9cd2-3db94e56c638', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e22d6ddc-0339-4395-bc21-95081825f05b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5b43342be22543f79d4a56e26c6d0c96', 'neutron:revision_number': '5', 'neutron:security_group_ids': '8514424f-703f-4374-a78e-584f6e7c233b b6ef6df3-455d-4449-87ef-cdff4c9a6aec', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.245'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=11c19619-1e7e-40ee-be83-c9dbc347543e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=e51d2161-817b-46db-b0ce-4b313d293d7f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:10:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:10:30.743 160071 INFO neutron.agent.ovn.metadata.agent [-] Port e51d2161-817b-46db-b0ce-4b313d293d7f in datapath e22d6ddc-0339-4395-bc21-95081825f05b unbound from our chassis
Jan 20 15:10:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:10:30.746 160071 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e22d6ddc-0339-4395-bc21-95081825f05b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 15:10:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:10:30.747 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[cd1df74e-3f65-4ab3-8ac9-0999d97dba23]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:10:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:10:30.748 160071 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e22d6ddc-0339-4395-bc21-95081825f05b namespace which is not needed anymore
Jan 20 15:10:30 compute-0 systemd[1]: machine-qemu\x2d76\x2dinstance\x2d000000ac.scope: Deactivated successfully.
Jan 20 15:10:30 compute-0 systemd[1]: machine-qemu\x2d76\x2dinstance\x2d000000ac.scope: Consumed 15.019s CPU time.
Jan 20 15:10:30 compute-0 systemd-machined[216401]: Machine qemu-76-instance-000000ac terminated.
Jan 20 15:10:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:10:30.780 160071 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:10:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:10:30.781 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:10:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:10:30.781 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:10:30 compute-0 neutron-haproxy-ovnmeta-e22d6ddc-0339-4395-bc21-95081825f05b[351767]: [NOTICE]   (351771) : haproxy version is 2.8.14-c23fe91
Jan 20 15:10:30 compute-0 neutron-haproxy-ovnmeta-e22d6ddc-0339-4395-bc21-95081825f05b[351767]: [NOTICE]   (351771) : path to executable is /usr/sbin/haproxy
Jan 20 15:10:30 compute-0 neutron-haproxy-ovnmeta-e22d6ddc-0339-4395-bc21-95081825f05b[351767]: [WARNING]  (351771) : Exiting Master process...
Jan 20 15:10:30 compute-0 neutron-haproxy-ovnmeta-e22d6ddc-0339-4395-bc21-95081825f05b[351767]: [ALERT]    (351771) : Current worker (351773) exited with code 143 (Terminated)
Jan 20 15:10:30 compute-0 neutron-haproxy-ovnmeta-e22d6ddc-0339-4395-bc21-95081825f05b[351767]: [WARNING]  (351771) : All workers exited. Exiting... (0)
Jan 20 15:10:30 compute-0 systemd[1]: libpod-709819e01cc4e258c449a81787e0b1b3ebde4333d29c1db40f42e9d4123dacbe.scope: Deactivated successfully.
Jan 20 15:10:30 compute-0 podman[352901]: 2026-01-20 15:10:30.865227668 +0000 UTC m=+0.046618930 container died 709819e01cc4e258c449a81787e0b1b3ebde4333d29c1db40f42e9d4123dacbe (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e22d6ddc-0339-4395-bc21-95081825f05b, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:10:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-a95d882ea98acd6d487f0e82d68ed1123bba044faaf05427acc543da5f93c1b7-merged.mount: Deactivated successfully.
Jan 20 15:10:30 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-709819e01cc4e258c449a81787e0b1b3ebde4333d29c1db40f42e9d4123dacbe-userdata-shm.mount: Deactivated successfully.
Jan 20 15:10:30 compute-0 podman[352901]: 2026-01-20 15:10:30.897479928 +0000 UTC m=+0.078871190 container cleanup 709819e01cc4e258c449a81787e0b1b3ebde4333d29c1db40f42e9d4123dacbe (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e22d6ddc-0339-4395-bc21-95081825f05b, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 20 15:10:30 compute-0 systemd[1]: libpod-conmon-709819e01cc4e258c449a81787e0b1b3ebde4333d29c1db40f42e9d4123dacbe.scope: Deactivated successfully.
Jan 20 15:10:30 compute-0 nova_compute[250018]: 2026-01-20 15:10:30.936 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:10:30 compute-0 nova_compute[250018]: 2026-01-20 15:10:30.940 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:10:30 compute-0 podman[352933]: 2026-01-20 15:10:30.970091786 +0000 UTC m=+0.048135560 container remove 709819e01cc4e258c449a81787e0b1b3ebde4333d29c1db40f42e9d4123dacbe (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e22d6ddc-0339-4395-bc21-95081825f05b, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 20 15:10:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:10:30.976 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[63bac7c6-8fa4-4dc8-8905-4a047b6944ac]: (4, ('Tue Jan 20 03:10:30 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-e22d6ddc-0339-4395-bc21-95081825f05b (709819e01cc4e258c449a81787e0b1b3ebde4333d29c1db40f42e9d4123dacbe)\n709819e01cc4e258c449a81787e0b1b3ebde4333d29c1db40f42e9d4123dacbe\nTue Jan 20 03:10:30 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-e22d6ddc-0339-4395-bc21-95081825f05b (709819e01cc4e258c449a81787e0b1b3ebde4333d29c1db40f42e9d4123dacbe)\n709819e01cc4e258c449a81787e0b1b3ebde4333d29c1db40f42e9d4123dacbe\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:10:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:10:30.978 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[a79b15bf-9912-4555-b867-82c51a25c344]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:10:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:10:30.979 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape22d6ddc-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:10:30 compute-0 nova_compute[250018]: 2026-01-20 15:10:30.981 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:10:30 compute-0 kernel: tape22d6ddc-00: left promiscuous mode
Jan 20 15:10:30 compute-0 nova_compute[250018]: 2026-01-20 15:10:30.998 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:10:31 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:10:31.000 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[f0b86e36-f87f-4398-afdc-c8fa07485c0f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:10:31 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:10:31.017 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[91237656-0dc2-47fc-b00a-1fddbedb53ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:10:31 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:10:31.018 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[8d76b01f-1095-49da-8d4f-b3a0a4f677e3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:10:31 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:10:31.032 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[8062e1ff-0874-4510-9ca6-3ba9fb55ef1a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 773769, 'reachable_time': 28277, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 352963, 'error': None, 'target': 'ovnmeta-e22d6ddc-0339-4395-bc21-95081825f05b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:10:31 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:10:31.036 160517 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e22d6ddc-0339-4395-bc21-95081825f05b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 15:10:31 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:10:31.036 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[284cf28b-1db2-4100-ac35-f7cd8f34ad8c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:10:31 compute-0 systemd[1]: run-netns-ovnmeta\x2de22d6ddc\x2d0339\x2d4395\x2dbc21\x2d95081825f05b.mount: Deactivated successfully.
Jan 20 15:10:31 compute-0 nova_compute[250018]: 2026-01-20 15:10:31.055 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:10:31 compute-0 nova_compute[250018]: 2026-01-20 15:10:31.436 250022 INFO nova.virt.libvirt.driver [None req-54efe364-484f-4436-be32-c8bd0ade6452 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] [instance: 7f408ee5-e644-4a37-9cd2-3db94e56c638] Instance shutdown successfully.
Jan 20 15:10:31 compute-0 kernel: tape51d2161-81: entered promiscuous mode
Jan 20 15:10:31 compute-0 systemd-udevd[352879]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 15:10:31 compute-0 NetworkManager[48960]: <info>  [1768921831.4884] manager: (tape51d2161-81): new Tun device (/org/freedesktop/NetworkManager/Devices/300)
Jan 20 15:10:31 compute-0 ovn_controller[148666]: 2026-01-20T15:10:31Z|00616|binding|INFO|Claiming lport e51d2161-817b-46db-b0ce-4b313d293d7f for this chassis.
Jan 20 15:10:31 compute-0 ovn_controller[148666]: 2026-01-20T15:10:31Z|00617|binding|INFO|e51d2161-817b-46db-b0ce-4b313d293d7f: Claiming fa:16:3e:89:1a:96 10.100.0.5
Jan 20 15:10:31 compute-0 nova_compute[250018]: 2026-01-20 15:10:31.489 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:10:31 compute-0 NetworkManager[48960]: <info>  [1768921831.4981] device (tape51d2161-81): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 15:10:31 compute-0 NetworkManager[48960]: <info>  [1768921831.4987] device (tape51d2161-81): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 15:10:31 compute-0 ovn_controller[148666]: 2026-01-20T15:10:31Z|00618|binding|INFO|Setting lport e51d2161-817b-46db-b0ce-4b313d293d7f ovn-installed in OVS
Jan 20 15:10:31 compute-0 nova_compute[250018]: 2026-01-20 15:10:31.512 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:10:31 compute-0 nova_compute[250018]: 2026-01-20 15:10:31.514 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:10:31 compute-0 systemd-machined[216401]: New machine qemu-77-instance-000000ac.
Jan 20 15:10:31 compute-0 ceph-mon[74360]: pgmap v2617: 321 pgs: 321 active+clean; 429 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 72 KiB/s rd, 29 KiB/s wr, 67 op/s
Jan 20 15:10:31 compute-0 systemd[1]: Started Virtual Machine qemu-77-instance-000000ac.
Jan 20 15:10:31 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e385 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:10:31 compute-0 ovn_controller[148666]: 2026-01-20T15:10:31Z|00619|binding|INFO|Setting lport e51d2161-817b-46db-b0ce-4b313d293d7f up in Southbound
Jan 20 15:10:31 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:10:31.756 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:89:1a:96 10.100.0.5'], port_security=['fa:16:3e:89:1a:96 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '7f408ee5-e644-4a37-9cd2-3db94e56c638', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e22d6ddc-0339-4395-bc21-95081825f05b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5b43342be22543f79d4a56e26c6d0c96', 'neutron:revision_number': '5', 'neutron:security_group_ids': '8514424f-703f-4374-a78e-584f6e7c233b b6ef6df3-455d-4449-87ef-cdff4c9a6aec', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.245'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=11c19619-1e7e-40ee-be83-c9dbc347543e, chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=e51d2161-817b-46db-b0ce-4b313d293d7f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:10:31 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:10:31.758 160071 INFO neutron.agent.ovn.metadata.agent [-] Port e51d2161-817b-46db-b0ce-4b313d293d7f in datapath e22d6ddc-0339-4395-bc21-95081825f05b bound to our chassis
Jan 20 15:10:31 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:10:31.761 160071 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e22d6ddc-0339-4395-bc21-95081825f05b
Jan 20 15:10:31 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:10:31.779 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[c7401ffc-d621-43e3-82d6-5f574f71b059]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:10:31 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:10:31.781 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape22d6ddc-01 in ovnmeta-e22d6ddc-0339-4395-bc21-95081825f05b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 15:10:31 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:10:31.783 257604 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape22d6ddc-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 15:10:31 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:10:31.783 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[ba3b04b2-f231-4a23-ad89-8ae372207841]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:10:31 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:10:31.784 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[009bd602-2b35-481e-922c-98a40bed4b55]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:10:31 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:10:31.803 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[f9a02bc2-54fa-4d1b-8dfd-e330c65d855f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:10:31 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:10:31.819 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[32957656-e402-4e63-82e5-a5ee5935ce0f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:10:31 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:10:31.865 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[f7518ab8-621a-422f-90fc-40ee81d33209]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:10:31 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:10:31.871 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[3e7dfb96-5fbe-4f56-b90e-bd294a87370f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:10:31 compute-0 NetworkManager[48960]: <info>  [1768921831.8721] manager: (tape22d6ddc-00): new Veth device (/org/freedesktop/NetworkManager/Devices/301)
Jan 20 15:10:31 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:10:31.924 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[58ffb821-d9de-4bb6-998c-829452bb54dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:10:31 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:10:31.927 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[395abbab-42e3-4d79-9869-9c250304e405]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:10:31 compute-0 NetworkManager[48960]: <info>  [1768921831.9525] device (tape22d6ddc-00): carrier: link connected
Jan 20 15:10:31 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:10:31.962 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[2b8c836f-0981-4fef-86c3-f629cfd75d86]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:10:31 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:10:31.982 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[502a66d9-09cb-41f9-9bc6-fdcc2d9229cd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape22d6ddc-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9e:3f:5c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 199], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 777267, 'reachable_time': 31325, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 353008, 'error': None, 'target': 'ovnmeta-e22d6ddc-0339-4395-bc21-95081825f05b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:10:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:10:32.005 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[b8d7c8f9-fe4d-4f08-9936-9a46e096f958]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe9e:3f5c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 777267, 'tstamp': 777267}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 353009, 'error': None, 'target': 'ovnmeta-e22d6ddc-0339-4395-bc21-95081825f05b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:10:32 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:10:32 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:10:32 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:10:32.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:10:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:10:32.035 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[290e95f0-f5b5-4087-9439-b5a41d13ab59]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape22d6ddc-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9e:3f:5c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 199], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 777267, 'reachable_time': 31325, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 353010, 'error': None, 'target': 'ovnmeta-e22d6ddc-0339-4395-bc21-95081825f05b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:10:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:10:32.067 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[950ed946-a981-43f3-8b33-4d0db449aabf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:10:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:10:32.141 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[f456dd6d-2f45-497d-83d4-0af853098b56]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:10:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:10:32.143 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape22d6ddc-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:10:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:10:32.143 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:10:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:10:32.143 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape22d6ddc-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:10:32 compute-0 nova_compute[250018]: 2026-01-20 15:10:32.145 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:10:32 compute-0 NetworkManager[48960]: <info>  [1768921832.1460] manager: (tape22d6ddc-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/302)
Jan 20 15:10:32 compute-0 kernel: tape22d6ddc-00: entered promiscuous mode
Jan 20 15:10:32 compute-0 nova_compute[250018]: 2026-01-20 15:10:32.148 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:10:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:10:32.150 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape22d6ddc-00, col_values=(('external_ids', {'iface-id': '940a1442-b0ab-49a2-87e8-750659cdda8d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:10:32 compute-0 ovn_controller[148666]: 2026-01-20T15:10:32Z|00620|binding|INFO|Releasing lport 940a1442-b0ab-49a2-87e8-750659cdda8d from this chassis (sb_readonly=0)
Jan 20 15:10:32 compute-0 nova_compute[250018]: 2026-01-20 15:10:32.152 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:10:32 compute-0 nova_compute[250018]: 2026-01-20 15:10:32.172 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:10:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:10:32.173 160071 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e22d6ddc-0339-4395-bc21-95081825f05b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e22d6ddc-0339-4395-bc21-95081825f05b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 15:10:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:10:32.174 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[a0f2be78-9f87-4e84-8caa-6a1d48b64a1f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:10:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:10:32.175 160071 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 15:10:32 compute-0 ovn_metadata_agent[160049]: global
Jan 20 15:10:32 compute-0 ovn_metadata_agent[160049]:     log         /dev/log local0 debug
Jan 20 15:10:32 compute-0 ovn_metadata_agent[160049]:     log-tag     haproxy-metadata-proxy-e22d6ddc-0339-4395-bc21-95081825f05b
Jan 20 15:10:32 compute-0 ovn_metadata_agent[160049]:     user        root
Jan 20 15:10:32 compute-0 ovn_metadata_agent[160049]:     group       root
Jan 20 15:10:32 compute-0 ovn_metadata_agent[160049]:     maxconn     1024
Jan 20 15:10:32 compute-0 ovn_metadata_agent[160049]:     pidfile     /var/lib/neutron/external/pids/e22d6ddc-0339-4395-bc21-95081825f05b.pid.haproxy
Jan 20 15:10:32 compute-0 ovn_metadata_agent[160049]:     daemon
Jan 20 15:10:32 compute-0 ovn_metadata_agent[160049]: 
Jan 20 15:10:32 compute-0 ovn_metadata_agent[160049]: defaults
Jan 20 15:10:32 compute-0 ovn_metadata_agent[160049]:     log global
Jan 20 15:10:32 compute-0 ovn_metadata_agent[160049]:     mode http
Jan 20 15:10:32 compute-0 ovn_metadata_agent[160049]:     option httplog
Jan 20 15:10:32 compute-0 ovn_metadata_agent[160049]:     option dontlognull
Jan 20 15:10:32 compute-0 ovn_metadata_agent[160049]:     option http-server-close
Jan 20 15:10:32 compute-0 ovn_metadata_agent[160049]:     option forwardfor
Jan 20 15:10:32 compute-0 ovn_metadata_agent[160049]:     retries                 3
Jan 20 15:10:32 compute-0 ovn_metadata_agent[160049]:     timeout http-request    30s
Jan 20 15:10:32 compute-0 ovn_metadata_agent[160049]:     timeout connect         30s
Jan 20 15:10:32 compute-0 ovn_metadata_agent[160049]:     timeout client          32s
Jan 20 15:10:32 compute-0 ovn_metadata_agent[160049]:     timeout server          32s
Jan 20 15:10:32 compute-0 ovn_metadata_agent[160049]:     timeout http-keep-alive 30s
Jan 20 15:10:32 compute-0 ovn_metadata_agent[160049]: 
Jan 20 15:10:32 compute-0 ovn_metadata_agent[160049]: 
Jan 20 15:10:32 compute-0 ovn_metadata_agent[160049]: listen listener
Jan 20 15:10:32 compute-0 ovn_metadata_agent[160049]:     bind 169.254.169.254:80
Jan 20 15:10:32 compute-0 ovn_metadata_agent[160049]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 15:10:32 compute-0 ovn_metadata_agent[160049]:     http-request add-header X-OVN-Network-ID e22d6ddc-0339-4395-bc21-95081825f05b
Jan 20 15:10:32 compute-0 ovn_metadata_agent[160049]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 15:10:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:10:32.176 160071 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e22d6ddc-0339-4395-bc21-95081825f05b', 'env', 'PROCESS_TAG=haproxy-e22d6ddc-0339-4395-bc21-95081825f05b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e22d6ddc-0339-4395-bc21-95081825f05b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 15:10:32 compute-0 nova_compute[250018]: 2026-01-20 15:10:32.303 250022 DEBUG nova.virt.libvirt.host [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Removed pending event for 7f408ee5-e644-4a37-9cd2-3db94e56c638 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Jan 20 15:10:32 compute-0 nova_compute[250018]: 2026-01-20 15:10:32.304 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768921832.3035743, 7f408ee5-e644-4a37-9cd2-3db94e56c638 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:10:32 compute-0 nova_compute[250018]: 2026-01-20 15:10:32.304 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 7f408ee5-e644-4a37-9cd2-3db94e56c638] VM Resumed (Lifecycle Event)
Jan 20 15:10:32 compute-0 nova_compute[250018]: 2026-01-20 15:10:32.311 250022 INFO nova.virt.libvirt.driver [-] [instance: 7f408ee5-e644-4a37-9cd2-3db94e56c638] Instance running successfully.
Jan 20 15:10:32 compute-0 nova_compute[250018]: 2026-01-20 15:10:32.311 250022 INFO nova.virt.libvirt.driver [None req-54efe364-484f-4436-be32-c8bd0ade6452 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] [instance: 7f408ee5-e644-4a37-9cd2-3db94e56c638] Instance soft rebooted successfully.
Jan 20 15:10:32 compute-0 nova_compute[250018]: 2026-01-20 15:10:32.311 250022 DEBUG nova.compute.manager [None req-54efe364-484f-4436-be32-c8bd0ade6452 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] [instance: 7f408ee5-e644-4a37-9cd2-3db94e56c638] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:10:32 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2618: 321 pgs: 321 active+clean; 429 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 21 KiB/s wr, 67 op/s
Jan 20 15:10:32 compute-0 podman[353103]: 2026-01-20 15:10:32.549521901 +0000 UTC m=+0.084305905 container create df7743ac7883b8064c7305dc9040b4ba50ba5846d1ba61b6d38c79ab6fbb4ef6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e22d6ddc-0339-4395-bc21-95081825f05b, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 15:10:32 compute-0 nova_compute[250018]: 2026-01-20 15:10:32.560 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 7f408ee5-e644-4a37-9cd2-3db94e56c638] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:10:32 compute-0 nova_compute[250018]: 2026-01-20 15:10:32.565 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 7f408ee5-e644-4a37-9cd2-3db94e56c638] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:10:32 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:10:32 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:10:32 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:10:32.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:10:32 compute-0 systemd[1]: Started libpod-conmon-df7743ac7883b8064c7305dc9040b4ba50ba5846d1ba61b6d38c79ab6fbb4ef6.scope.
Jan 20 15:10:32 compute-0 podman[353103]: 2026-01-20 15:10:32.518146135 +0000 UTC m=+0.052930239 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 15:10:32 compute-0 nova_compute[250018]: 2026-01-20 15:10:32.610 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 7f408ee5-e644-4a37-9cd2-3db94e56c638] During sync_power_state the instance has a pending task (reboot_started). Skip.
Jan 20 15:10:32 compute-0 nova_compute[250018]: 2026-01-20 15:10:32.611 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768921832.3067777, 7f408ee5-e644-4a37-9cd2-3db94e56c638 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:10:32 compute-0 nova_compute[250018]: 2026-01-20 15:10:32.611 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 7f408ee5-e644-4a37-9cd2-3db94e56c638] VM Started (Lifecycle Event)
Jan 20 15:10:32 compute-0 nova_compute[250018]: 2026-01-20 15:10:32.617 250022 DEBUG oslo_concurrency.lockutils [None req-54efe364-484f-4436-be32-c8bd0ade6452 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Lock "7f408ee5-e644-4a37-9cd2-3db94e56c638" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 7.753s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:10:32 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:10:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f6c0ab1bd1cc39604e16998f6f753fa85427f13aa1e2b42b0faa36e80b39118/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 15:10:32 compute-0 nova_compute[250018]: 2026-01-20 15:10:32.646 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 7f408ee5-e644-4a37-9cd2-3db94e56c638] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:10:32 compute-0 podman[353103]: 2026-01-20 15:10:32.652234472 +0000 UTC m=+0.187018496 container init df7743ac7883b8064c7305dc9040b4ba50ba5846d1ba61b6d38c79ab6fbb4ef6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e22d6ddc-0339-4395-bc21-95081825f05b, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202)
Jan 20 15:10:32 compute-0 nova_compute[250018]: 2026-01-20 15:10:32.652 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 7f408ee5-e644-4a37-9cd2-3db94e56c638] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:10:32 compute-0 podman[353103]: 2026-01-20 15:10:32.657460303 +0000 UTC m=+0.192244307 container start df7743ac7883b8064c7305dc9040b4ba50ba5846d1ba61b6d38c79ab6fbb4ef6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e22d6ddc-0339-4395-bc21-95081825f05b, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:10:32 compute-0 neutron-haproxy-ovnmeta-e22d6ddc-0339-4395-bc21-95081825f05b[353118]: [NOTICE]   (353122) : New worker (353124) forked
Jan 20 15:10:32 compute-0 neutron-haproxy-ovnmeta-e22d6ddc-0339-4395-bc21-95081825f05b[353118]: [NOTICE]   (353122) : Loading success.
Jan 20 15:10:32 compute-0 nova_compute[250018]: 2026-01-20 15:10:32.710 250022 DEBUG nova.compute.manager [req-e3f93ee8-a70a-42d4-966d-6f18a3b3b2b7 req-d699b61b-d3d4-44f4-a8f5-b9026d4102e8 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7f408ee5-e644-4a37-9cd2-3db94e56c638] Received event network-vif-unplugged-e51d2161-817b-46db-b0ce-4b313d293d7f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:10:32 compute-0 nova_compute[250018]: 2026-01-20 15:10:32.710 250022 DEBUG oslo_concurrency.lockutils [req-e3f93ee8-a70a-42d4-966d-6f18a3b3b2b7 req-d699b61b-d3d4-44f4-a8f5-b9026d4102e8 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "7f408ee5-e644-4a37-9cd2-3db94e56c638-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:10:32 compute-0 nova_compute[250018]: 2026-01-20 15:10:32.711 250022 DEBUG oslo_concurrency.lockutils [req-e3f93ee8-a70a-42d4-966d-6f18a3b3b2b7 req-d699b61b-d3d4-44f4-a8f5-b9026d4102e8 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "7f408ee5-e644-4a37-9cd2-3db94e56c638-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:10:32 compute-0 nova_compute[250018]: 2026-01-20 15:10:32.711 250022 DEBUG oslo_concurrency.lockutils [req-e3f93ee8-a70a-42d4-966d-6f18a3b3b2b7 req-d699b61b-d3d4-44f4-a8f5-b9026d4102e8 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "7f408ee5-e644-4a37-9cd2-3db94e56c638-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:10:32 compute-0 nova_compute[250018]: 2026-01-20 15:10:32.711 250022 DEBUG nova.compute.manager [req-e3f93ee8-a70a-42d4-966d-6f18a3b3b2b7 req-d699b61b-d3d4-44f4-a8f5-b9026d4102e8 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7f408ee5-e644-4a37-9cd2-3db94e56c638] No waiting events found dispatching network-vif-unplugged-e51d2161-817b-46db-b0ce-4b313d293d7f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:10:32 compute-0 nova_compute[250018]: 2026-01-20 15:10:32.711 250022 WARNING nova.compute.manager [req-e3f93ee8-a70a-42d4-966d-6f18a3b3b2b7 req-d699b61b-d3d4-44f4-a8f5-b9026d4102e8 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7f408ee5-e644-4a37-9cd2-3db94e56c638] Received unexpected event network-vif-unplugged-e51d2161-817b-46db-b0ce-4b313d293d7f for instance with vm_state active and task_state None.
Jan 20 15:10:33 compute-0 ceph-mon[74360]: pgmap v2618: 321 pgs: 321 active+clean; 429 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 21 KiB/s wr, 67 op/s
Jan 20 15:10:34 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:10:34 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:10:34 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:10:34.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:10:34 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2619: 321 pgs: 321 active+clean; 429 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 21 KiB/s wr, 82 op/s
Jan 20 15:10:34 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:10:34 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:10:34 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:10:34.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:10:34 compute-0 nova_compute[250018]: 2026-01-20 15:10:34.840 250022 DEBUG nova.compute.manager [req-92a7d52b-4ea3-4aaf-b74e-c864a0bbf2e3 req-4e3baeab-6afe-4124-9797-92edd40948ef 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7f408ee5-e644-4a37-9cd2-3db94e56c638] Received event network-vif-plugged-e51d2161-817b-46db-b0ce-4b313d293d7f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:10:34 compute-0 nova_compute[250018]: 2026-01-20 15:10:34.841 250022 DEBUG oslo_concurrency.lockutils [req-92a7d52b-4ea3-4aaf-b74e-c864a0bbf2e3 req-4e3baeab-6afe-4124-9797-92edd40948ef 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "7f408ee5-e644-4a37-9cd2-3db94e56c638-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:10:34 compute-0 nova_compute[250018]: 2026-01-20 15:10:34.841 250022 DEBUG oslo_concurrency.lockutils [req-92a7d52b-4ea3-4aaf-b74e-c864a0bbf2e3 req-4e3baeab-6afe-4124-9797-92edd40948ef 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "7f408ee5-e644-4a37-9cd2-3db94e56c638-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:10:34 compute-0 nova_compute[250018]: 2026-01-20 15:10:34.841 250022 DEBUG oslo_concurrency.lockutils [req-92a7d52b-4ea3-4aaf-b74e-c864a0bbf2e3 req-4e3baeab-6afe-4124-9797-92edd40948ef 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "7f408ee5-e644-4a37-9cd2-3db94e56c638-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:10:34 compute-0 nova_compute[250018]: 2026-01-20 15:10:34.842 250022 DEBUG nova.compute.manager [req-92a7d52b-4ea3-4aaf-b74e-c864a0bbf2e3 req-4e3baeab-6afe-4124-9797-92edd40948ef 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7f408ee5-e644-4a37-9cd2-3db94e56c638] No waiting events found dispatching network-vif-plugged-e51d2161-817b-46db-b0ce-4b313d293d7f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:10:34 compute-0 nova_compute[250018]: 2026-01-20 15:10:34.842 250022 WARNING nova.compute.manager [req-92a7d52b-4ea3-4aaf-b74e-c864a0bbf2e3 req-4e3baeab-6afe-4124-9797-92edd40948ef 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7f408ee5-e644-4a37-9cd2-3db94e56c638] Received unexpected event network-vif-plugged-e51d2161-817b-46db-b0ce-4b313d293d7f for instance with vm_state active and task_state None.
Jan 20 15:10:34 compute-0 nova_compute[250018]: 2026-01-20 15:10:34.842 250022 DEBUG nova.compute.manager [req-92a7d52b-4ea3-4aaf-b74e-c864a0bbf2e3 req-4e3baeab-6afe-4124-9797-92edd40948ef 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7f408ee5-e644-4a37-9cd2-3db94e56c638] Received event network-vif-plugged-e51d2161-817b-46db-b0ce-4b313d293d7f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:10:34 compute-0 nova_compute[250018]: 2026-01-20 15:10:34.842 250022 DEBUG oslo_concurrency.lockutils [req-92a7d52b-4ea3-4aaf-b74e-c864a0bbf2e3 req-4e3baeab-6afe-4124-9797-92edd40948ef 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "7f408ee5-e644-4a37-9cd2-3db94e56c638-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:10:34 compute-0 nova_compute[250018]: 2026-01-20 15:10:34.843 250022 DEBUG oslo_concurrency.lockutils [req-92a7d52b-4ea3-4aaf-b74e-c864a0bbf2e3 req-4e3baeab-6afe-4124-9797-92edd40948ef 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "7f408ee5-e644-4a37-9cd2-3db94e56c638-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:10:34 compute-0 nova_compute[250018]: 2026-01-20 15:10:34.843 250022 DEBUG oslo_concurrency.lockutils [req-92a7d52b-4ea3-4aaf-b74e-c864a0bbf2e3 req-4e3baeab-6afe-4124-9797-92edd40948ef 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "7f408ee5-e644-4a37-9cd2-3db94e56c638-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:10:34 compute-0 nova_compute[250018]: 2026-01-20 15:10:34.843 250022 DEBUG nova.compute.manager [req-92a7d52b-4ea3-4aaf-b74e-c864a0bbf2e3 req-4e3baeab-6afe-4124-9797-92edd40948ef 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7f408ee5-e644-4a37-9cd2-3db94e56c638] No waiting events found dispatching network-vif-plugged-e51d2161-817b-46db-b0ce-4b313d293d7f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:10:34 compute-0 nova_compute[250018]: 2026-01-20 15:10:34.843 250022 WARNING nova.compute.manager [req-92a7d52b-4ea3-4aaf-b74e-c864a0bbf2e3 req-4e3baeab-6afe-4124-9797-92edd40948ef 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7f408ee5-e644-4a37-9cd2-3db94e56c638] Received unexpected event network-vif-plugged-e51d2161-817b-46db-b0ce-4b313d293d7f for instance with vm_state active and task_state None.
Jan 20 15:10:34 compute-0 nova_compute[250018]: 2026-01-20 15:10:34.843 250022 DEBUG nova.compute.manager [req-92a7d52b-4ea3-4aaf-b74e-c864a0bbf2e3 req-4e3baeab-6afe-4124-9797-92edd40948ef 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7f408ee5-e644-4a37-9cd2-3db94e56c638] Received event network-vif-plugged-e51d2161-817b-46db-b0ce-4b313d293d7f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:10:34 compute-0 nova_compute[250018]: 2026-01-20 15:10:34.844 250022 DEBUG oslo_concurrency.lockutils [req-92a7d52b-4ea3-4aaf-b74e-c864a0bbf2e3 req-4e3baeab-6afe-4124-9797-92edd40948ef 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "7f408ee5-e644-4a37-9cd2-3db94e56c638-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:10:34 compute-0 nova_compute[250018]: 2026-01-20 15:10:34.844 250022 DEBUG oslo_concurrency.lockutils [req-92a7d52b-4ea3-4aaf-b74e-c864a0bbf2e3 req-4e3baeab-6afe-4124-9797-92edd40948ef 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "7f408ee5-e644-4a37-9cd2-3db94e56c638-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:10:34 compute-0 nova_compute[250018]: 2026-01-20 15:10:34.844 250022 DEBUG oslo_concurrency.lockutils [req-92a7d52b-4ea3-4aaf-b74e-c864a0bbf2e3 req-4e3baeab-6afe-4124-9797-92edd40948ef 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "7f408ee5-e644-4a37-9cd2-3db94e56c638-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:10:34 compute-0 nova_compute[250018]: 2026-01-20 15:10:34.844 250022 DEBUG nova.compute.manager [req-92a7d52b-4ea3-4aaf-b74e-c864a0bbf2e3 req-4e3baeab-6afe-4124-9797-92edd40948ef 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7f408ee5-e644-4a37-9cd2-3db94e56c638] No waiting events found dispatching network-vif-plugged-e51d2161-817b-46db-b0ce-4b313d293d7f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:10:34 compute-0 nova_compute[250018]: 2026-01-20 15:10:34.844 250022 WARNING nova.compute.manager [req-92a7d52b-4ea3-4aaf-b74e-c864a0bbf2e3 req-4e3baeab-6afe-4124-9797-92edd40948ef 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7f408ee5-e644-4a37-9cd2-3db94e56c638] Received unexpected event network-vif-plugged-e51d2161-817b-46db-b0ce-4b313d293d7f for instance with vm_state active and task_state None.
Jan 20 15:10:34 compute-0 nova_compute[250018]: 2026-01-20 15:10:34.969 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:10:35 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 20 15:10:35 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3420058915' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:10:35 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 20 15:10:35 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3420058915' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:10:35 compute-0 ceph-mon[74360]: pgmap v2619: 321 pgs: 321 active+clean; 429 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 21 KiB/s wr, 82 op/s
Jan 20 15:10:35 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/3420058915' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:10:35 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/3420058915' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:10:36 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:10:36 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:10:36 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:10:36.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:10:36 compute-0 nova_compute[250018]: 2026-01-20 15:10:36.099 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:10:36 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2620: 321 pgs: 321 active+clean; 429 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 21 KiB/s wr, 153 op/s
Jan 20 15:10:36 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e385 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:10:36 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:10:36 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:10:36 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:10:36.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:10:36 compute-0 ceph-mon[74360]: pgmap v2620: 321 pgs: 321 active+clean; 429 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 21 KiB/s wr, 153 op/s
Jan 20 15:10:37 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/1348110601' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:10:37 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/1348110601' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:10:38 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:10:38 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:10:38 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:10:38.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:10:38 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2621: 321 pgs: 321 active+clean; 429 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 20 KiB/s wr, 121 op/s
Jan 20 15:10:38 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:10:38 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:10:38 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:10:38.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:10:38 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e385 do_prune osdmap full prune enabled
Jan 20 15:10:38 compute-0 ceph-mon[74360]: pgmap v2621: 321 pgs: 321 active+clean; 429 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 20 KiB/s wr, 121 op/s
Jan 20 15:10:38 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e386 e386: 3 total, 3 up, 3 in
Jan 20 15:10:38 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e386: 3 total, 3 up, 3 in
Jan 20 15:10:39 compute-0 sudo[353136]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:10:39 compute-0 sudo[353136]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:10:39 compute-0 sudo[353136]: pam_unix(sudo:session): session closed for user root
Jan 20 15:10:39 compute-0 sudo[353161]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:10:39 compute-0 sudo[353161]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:10:39 compute-0 sudo[353161]: pam_unix(sudo:session): session closed for user root
Jan 20 15:10:39 compute-0 ceph-mon[74360]: osdmap e386: 3 total, 3 up, 3 in
Jan 20 15:10:39 compute-0 nova_compute[250018]: 2026-01-20 15:10:39.972 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:10:40 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:10:40 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:10:40 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:10:40.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:10:40 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2623: 321 pgs: 321 active+clean; 375 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 1.2 MiB/s wr, 176 op/s
Jan 20 15:10:40 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:10:40 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:10:40 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:10:40.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:10:40 compute-0 ceph-mon[74360]: pgmap v2623: 321 pgs: 321 active+clean; 375 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 1.2 MiB/s wr, 176 op/s
Jan 20 15:10:41 compute-0 nova_compute[250018]: 2026-01-20 15:10:41.102 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:10:41 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e386 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:10:41 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/1780300540' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:10:42 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:10:42 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:10:42 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:10:42.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:10:42 compute-0 nova_compute[250018]: 2026-01-20 15:10:42.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:10:42 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2624: 321 pgs: 321 active+clean; 348 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.1 MiB/s wr, 231 op/s
Jan 20 15:10:42 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:10:42 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:10:42 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:10:42.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:10:42 compute-0 ceph-mon[74360]: pgmap v2624: 321 pgs: 321 active+clean; 348 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.1 MiB/s wr, 231 op/s
Jan 20 15:10:44 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:10:44 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:10:44 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:10:44.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:10:44 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/1303016254' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:10:44 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2625: 321 pgs: 321 active+clean; 331 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.1 MiB/s wr, 228 op/s
Jan 20 15:10:44 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:10:44 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:10:44 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:10:44.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:10:44 compute-0 nova_compute[250018]: 2026-01-20 15:10:44.974 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:10:45 compute-0 ceph-mon[74360]: pgmap v2625: 321 pgs: 321 active+clean; 331 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.1 MiB/s wr, 228 op/s
Jan 20 15:10:45 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/1430308450' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:10:46 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:10:46 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:10:46 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:10:46.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:10:46 compute-0 nova_compute[250018]: 2026-01-20 15:10:46.105 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:10:46 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2626: 321 pgs: 321 active+clean; 266 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 406 KiB/s rd, 2.1 MiB/s wr, 205 op/s
Jan 20 15:10:46 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e386 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:10:46 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:10:46 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:10:46 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:10:46.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:10:46 compute-0 ovn_controller[148666]: 2026-01-20T15:10:46Z|00080|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:89:1a:96 10.100.0.5
Jan 20 15:10:47 compute-0 sshd-session[353189]: Connection closed by authenticating user root 134.122.57.138 port 60738 [preauth]
Jan 20 15:10:47 compute-0 ceph-mon[74360]: pgmap v2626: 321 pgs: 321 active+clean; 266 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 406 KiB/s rd, 2.1 MiB/s wr, 205 op/s
Jan 20 15:10:48 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:10:48 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:10:48 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:10:48.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:10:48 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2627: 321 pgs: 321 active+clean; 266 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 406 KiB/s rd, 2.1 MiB/s wr, 205 op/s
Jan 20 15:10:48 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:10:48 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:10:48 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:10:48.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:10:49 compute-0 ceph-mon[74360]: pgmap v2627: 321 pgs: 321 active+clean; 266 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 406 KiB/s rd, 2.1 MiB/s wr, 205 op/s
Jan 20 15:10:49 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 15:10:49 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1529396375' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:10:49 compute-0 nova_compute[250018]: 2026-01-20 15:10:49.976 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:10:50 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:10:50 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:10:50 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:10:50.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:10:50 compute-0 ovn_controller[148666]: 2026-01-20T15:10:50Z|00621|binding|INFO|Releasing lport 940a1442-b0ab-49a2-87e8-750659cdda8d from this chassis (sb_readonly=0)
Jan 20 15:10:50 compute-0 nova_compute[250018]: 2026-01-20 15:10:50.345 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:10:50 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2628: 321 pgs: 321 active+clean; 266 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 636 KiB/s rd, 1.8 MiB/s wr, 191 op/s
Jan 20 15:10:50 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/1529396375' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:10:50 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:10:50 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:10:50 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:10:50.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:10:51 compute-0 nova_compute[250018]: 2026-01-20 15:10:51.108 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:10:51 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e386 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:10:51 compute-0 ceph-mon[74360]: pgmap v2628: 321 pgs: 321 active+clean; 266 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 636 KiB/s rd, 1.8 MiB/s wr, 191 op/s
Jan 20 15:10:51 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/2095857021' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:10:52 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:10:52 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:10:52 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:10:52.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:10:52 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2629: 321 pgs: 321 active+clean; 268 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 586 KiB/s rd, 855 KiB/s wr, 132 op/s
Jan 20 15:10:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:10:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:10:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:10:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:10:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:10:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:10:52 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:10:52 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:10:52 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:10:52.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:10:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Optimize plan auto_2026-01-20_15:10:52
Jan 20 15:10:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 15:10:52 compute-0 ceph-mgr[74653]: [balancer INFO root] do_upmap
Jan 20 15:10:52 compute-0 ceph-mgr[74653]: [balancer INFO root] pools ['cephfs.cephfs.data', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.control', 'volumes', 'vms', 'images', 'backups', '.mgr', 'default.rgw.meta', 'default.rgw.log']
Jan 20 15:10:52 compute-0 ceph-mgr[74653]: [balancer INFO root] prepared 0/10 changes
Jan 20 15:10:53 compute-0 ceph-mgr[74653]: client.0 ms_handle_reset on v2:192.168.122.100:6800/2542147622
Jan 20 15:10:53 compute-0 ceph-mon[74360]: pgmap v2629: 321 pgs: 321 active+clean; 268 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 586 KiB/s rd, 855 KiB/s wr, 132 op/s
Jan 20 15:10:54 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:10:54 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:10:54 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:10:54.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:10:54 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2630: 321 pgs: 321 active+clean; 268 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 668 KiB/s rd, 36 KiB/s wr, 89 op/s
Jan 20 15:10:54 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:10:54 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:10:54 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:10:54.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:10:54 compute-0 nova_compute[250018]: 2026-01-20 15:10:54.978 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:10:55 compute-0 ceph-mon[74360]: pgmap v2630: 321 pgs: 321 active+clean; 268 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 668 KiB/s rd, 36 KiB/s wr, 89 op/s
Jan 20 15:10:56 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:10:56 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:10:56 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:10:56.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:10:56 compute-0 nova_compute[250018]: 2026-01-20 15:10:56.049 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:10:56 compute-0 nova_compute[250018]: 2026-01-20 15:10:56.154 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:10:56 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2631: 321 pgs: 321 active+clean; 268 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 36 KiB/s wr, 141 op/s
Jan 20 15:10:56 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e386 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:10:56 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:10:56 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:10:56 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:10:56.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:10:56 compute-0 ceph-mon[74360]: pgmap v2631: 321 pgs: 321 active+clean; 268 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 36 KiB/s wr, 141 op/s
Jan 20 15:10:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 15:10:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 15:10:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 15:10:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 15:10:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 15:10:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 15:10:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 15:10:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 15:10:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 15:10:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 15:10:58 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:10:58 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:10:58 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:10:58.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:10:58 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 20 15:10:58 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/93101636' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:10:58 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 20 15:10:58 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/93101636' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:10:58 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/93101636' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:10:58 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/93101636' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:10:58 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2632: 321 pgs: 321 active+clean; 268 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 24 KiB/s wr, 90 op/s
Jan 20 15:10:58 compute-0 podman[353200]: 2026-01-20 15:10:58.479866027 +0000 UTC m=+0.056938956 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent)
Jan 20 15:10:58 compute-0 podman[353199]: 2026-01-20 15:10:58.555674022 +0000 UTC m=+0.132746851 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 20 15:10:58 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:10:58 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:10:58 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:10:58.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:10:58 compute-0 nova_compute[250018]: 2026-01-20 15:10:58.802 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:10:59 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/2257383779' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:10:59 compute-0 ceph-mon[74360]: pgmap v2632: 321 pgs: 321 active+clean; 268 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 24 KiB/s wr, 90 op/s
Jan 20 15:10:59 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/4107587614' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:10:59 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/2215610935' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:10:59 compute-0 sudo[353244]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:10:59 compute-0 sudo[353244]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:10:59 compute-0 sudo[353244]: pam_unix(sudo:session): session closed for user root
Jan 20 15:10:59 compute-0 sudo[353269]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:10:59 compute-0 sudo[353269]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:10:59 compute-0 sudo[353269]: pam_unix(sudo:session): session closed for user root
Jan 20 15:10:59 compute-0 nova_compute[250018]: 2026-01-20 15:10:59.979 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:11:00 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:11:00 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:11:00 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:11:00.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:11:00 compute-0 nova_compute[250018]: 2026-01-20 15:11:00.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:11:00 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e386 do_prune osdmap full prune enabled
Jan 20 15:11:00 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e387 e387: 3 total, 3 up, 3 in
Jan 20 15:11:00 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e387: 3 total, 3 up, 3 in
Jan 20 15:11:00 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/1208327764' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:11:00 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/2920973194' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:11:00 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2634: 321 pgs: 321 active+clean; 268 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 28 KiB/s wr, 120 op/s
Jan 20 15:11:00 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:11:00 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:11:00 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:11:00.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:11:01 compute-0 nova_compute[250018]: 2026-01-20 15:11:01.045 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:11:01 compute-0 nova_compute[250018]: 2026-01-20 15:11:01.155 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:11:01 compute-0 ceph-mon[74360]: osdmap e387: 3 total, 3 up, 3 in
Jan 20 15:11:01 compute-0 ceph-mon[74360]: pgmap v2634: 321 pgs: 321 active+clean; 268 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 28 KiB/s wr, 120 op/s
Jan 20 15:11:01 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/1603287857' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:11:01 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/1603287857' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:11:01 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e387 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:11:02 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:11:02 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:11:02 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:11:02.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:11:02 compute-0 nova_compute[250018]: 2026-01-20 15:11:02.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:11:02 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2635: 321 pgs: 321 active+clean; 268 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 20 KiB/s wr, 142 op/s
Jan 20 15:11:02 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:11:02 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:11:02 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:11:02.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:11:03 compute-0 nova_compute[250018]: 2026-01-20 15:11:03.006 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:11:03 compute-0 nova_compute[250018]: 2026-01-20 15:11:03.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:11:03 compute-0 nova_compute[250018]: 2026-01-20 15:11:03.050 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 15:11:03 compute-0 nova_compute[250018]: 2026-01-20 15:11:03.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:11:03 compute-0 nova_compute[250018]: 2026-01-20 15:11:03.074 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:11:03 compute-0 nova_compute[250018]: 2026-01-20 15:11:03.075 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:11:03 compute-0 nova_compute[250018]: 2026-01-20 15:11:03.075 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:11:03 compute-0 nova_compute[250018]: 2026-01-20 15:11:03.075 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 15:11:03 compute-0 nova_compute[250018]: 2026-01-20 15:11:03.075 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:11:03 compute-0 ceph-mon[74360]: pgmap v2635: 321 pgs: 321 active+clean; 268 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 20 KiB/s wr, 142 op/s
Jan 20 15:11:03 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:11:03 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3353158919' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:11:03 compute-0 nova_compute[250018]: 2026-01-20 15:11:03.587 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:11:03 compute-0 nova_compute[250018]: 2026-01-20 15:11:03.665 250022 DEBUG nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] skipping disk for instance-000000ac as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 15:11:03 compute-0 nova_compute[250018]: 2026-01-20 15:11:03.665 250022 DEBUG nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] skipping disk for instance-000000ac as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 15:11:03 compute-0 nova_compute[250018]: 2026-01-20 15:11:03.666 250022 DEBUG nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] skipping disk for instance-000000ac as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 15:11:03 compute-0 nova_compute[250018]: 2026-01-20 15:11:03.833 250022 WARNING nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:11:03 compute-0 nova_compute[250018]: 2026-01-20 15:11:03.834 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4040MB free_disk=20.94265365600586GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 15:11:03 compute-0 nova_compute[250018]: 2026-01-20 15:11:03.834 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:11:03 compute-0 nova_compute[250018]: 2026-01-20 15:11:03.835 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:11:03 compute-0 nova_compute[250018]: 2026-01-20 15:11:03.923 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Instance 7f408ee5-e644-4a37-9cd2-3db94e56c638 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 15:11:03 compute-0 nova_compute[250018]: 2026-01-20 15:11:03.924 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 15:11:03 compute-0 nova_compute[250018]: 2026-01-20 15:11:03.924 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 15:11:04 compute-0 nova_compute[250018]: 2026-01-20 15:11:04.024 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:11:04 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:11:04 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:11:04 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:11:04.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:11:04 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2636: 321 pgs: 321 active+clean; 255 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 5.7 KiB/s wr, 135 op/s
Jan 20 15:11:04 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:11:04 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/416091169' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:11:04 compute-0 nova_compute[250018]: 2026-01-20 15:11:04.483 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:11:04 compute-0 nova_compute[250018]: 2026-01-20 15:11:04.491 250022 DEBUG nova.compute.provider_tree [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:11:04 compute-0 nova_compute[250018]: 2026-01-20 15:11:04.509 250022 DEBUG nova.scheduler.client.report [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:11:04 compute-0 nova_compute[250018]: 2026-01-20 15:11:04.516 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 15:11:04 compute-0 nova_compute[250018]: 2026-01-20 15:11:04.517 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.682s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:11:04 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3353158919' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:11:04 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/416091169' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:11:04 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:11:04 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:11:04 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:11:04.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:11:04 compute-0 nova_compute[250018]: 2026-01-20 15:11:04.982 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:11:05 compute-0 ceph-mon[74360]: pgmap v2636: 321 pgs: 321 active+clean; 255 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 5.7 KiB/s wr, 135 op/s
Jan 20 15:11:06 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:11:06 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:11:06 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:11:06.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:11:06 compute-0 nova_compute[250018]: 2026-01-20 15:11:06.158 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:11:06 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2637: 321 pgs: 321 active+clean; 222 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 7.1 KiB/s wr, 72 op/s
Jan 20 15:11:06 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e387 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:11:06 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e387 do_prune osdmap full prune enabled
Jan 20 15:11:06 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e388 e388: 3 total, 3 up, 3 in
Jan 20 15:11:06 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e388: 3 total, 3 up, 3 in
Jan 20 15:11:06 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:11:06 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:11:06 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:11:06.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:11:07 compute-0 nova_compute[250018]: 2026-01-20 15:11:07.107 250022 DEBUG nova.compute.manager [req-3687bdee-c017-4620-ac8e-918c676c459b req-b14156d8-1b85-40bf-ba43-4f89125d5027 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7f408ee5-e644-4a37-9cd2-3db94e56c638] Received event network-changed-e51d2161-817b-46db-b0ce-4b313d293d7f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:11:07 compute-0 nova_compute[250018]: 2026-01-20 15:11:07.108 250022 DEBUG nova.compute.manager [req-3687bdee-c017-4620-ac8e-918c676c459b req-b14156d8-1b85-40bf-ba43-4f89125d5027 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7f408ee5-e644-4a37-9cd2-3db94e56c638] Refreshing instance network info cache due to event network-changed-e51d2161-817b-46db-b0ce-4b313d293d7f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 15:11:07 compute-0 nova_compute[250018]: 2026-01-20 15:11:07.108 250022 DEBUG oslo_concurrency.lockutils [req-3687bdee-c017-4620-ac8e-918c676c459b req-b14156d8-1b85-40bf-ba43-4f89125d5027 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-7f408ee5-e644-4a37-9cd2-3db94e56c638" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:11:07 compute-0 nova_compute[250018]: 2026-01-20 15:11:07.108 250022 DEBUG oslo_concurrency.lockutils [req-3687bdee-c017-4620-ac8e-918c676c459b req-b14156d8-1b85-40bf-ba43-4f89125d5027 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-7f408ee5-e644-4a37-9cd2-3db94e56c638" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:11:07 compute-0 nova_compute[250018]: 2026-01-20 15:11:07.108 250022 DEBUG nova.network.neutron [req-3687bdee-c017-4620-ac8e-918c676c459b req-b14156d8-1b85-40bf-ba43-4f89125d5027 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7f408ee5-e644-4a37-9cd2-3db94e56c638] Refreshing network info cache for port e51d2161-817b-46db-b0ce-4b313d293d7f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 15:11:07 compute-0 ceph-mon[74360]: pgmap v2637: 321 pgs: 321 active+clean; 222 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 7.1 KiB/s wr, 72 op/s
Jan 20 15:11:07 compute-0 ceph-mon[74360]: osdmap e388: 3 total, 3 up, 3 in
Jan 20 15:11:08 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:11:08 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:11:08 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:11:08.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:11:08 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2639: 321 pgs: 321 active+clean; 222 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 8.2 KiB/s wr, 50 op/s
Jan 20 15:11:08 compute-0 nova_compute[250018]: 2026-01-20 15:11:08.518 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:11:08 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/932732216' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:11:08 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:11:08 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:11:08 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:11:08.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:11:09 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:11:09.220 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=57, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=56) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:11:09 compute-0 nova_compute[250018]: 2026-01-20 15:11:09.221 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:11:09 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:11:09.221 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 15:11:09 compute-0 nova_compute[250018]: 2026-01-20 15:11:09.510 250022 DEBUG nova.network.neutron [req-3687bdee-c017-4620-ac8e-918c676c459b req-b14156d8-1b85-40bf-ba43-4f89125d5027 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7f408ee5-e644-4a37-9cd2-3db94e56c638] Updated VIF entry in instance network info cache for port e51d2161-817b-46db-b0ce-4b313d293d7f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 15:11:09 compute-0 nova_compute[250018]: 2026-01-20 15:11:09.511 250022 DEBUG nova.network.neutron [req-3687bdee-c017-4620-ac8e-918c676c459b req-b14156d8-1b85-40bf-ba43-4f89125d5027 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7f408ee5-e644-4a37-9cd2-3db94e56c638] Updating instance_info_cache with network_info: [{"id": "e51d2161-817b-46db-b0ce-4b313d293d7f", "address": "fa:16:3e:89:1a:96", "network": {"id": "e22d6ddc-0339-4395-bc21-95081825f05b", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1496899124-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5b43342be22543f79d4a56e26c6d0c96", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape51d2161-81", "ovs_interfaceid": "e51d2161-817b-46db-b0ce-4b313d293d7f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:11:09 compute-0 nova_compute[250018]: 2026-01-20 15:11:09.532 250022 DEBUG oslo_concurrency.lockutils [req-3687bdee-c017-4620-ac8e-918c676c459b req-b14156d8-1b85-40bf-ba43-4f89125d5027 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-7f408ee5-e644-4a37-9cd2-3db94e56c638" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:11:09 compute-0 ceph-mon[74360]: pgmap v2639: 321 pgs: 321 active+clean; 222 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 8.2 KiB/s wr, 50 op/s
Jan 20 15:11:09 compute-0 nova_compute[250018]: 2026-01-20 15:11:09.983 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:11:10 compute-0 nova_compute[250018]: 2026-01-20 15:11:10.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:11:10 compute-0 nova_compute[250018]: 2026-01-20 15:11:10.050 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 15:11:10 compute-0 nova_compute[250018]: 2026-01-20 15:11:10.051 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 15:11:10 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:11:10 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:11:10 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:11:10.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:11:10 compute-0 nova_compute[250018]: 2026-01-20 15:11:10.230 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "refresh_cache-7f408ee5-e644-4a37-9cd2-3db94e56c638" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:11:10 compute-0 nova_compute[250018]: 2026-01-20 15:11:10.231 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquired lock "refresh_cache-7f408ee5-e644-4a37-9cd2-3db94e56c638" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:11:10 compute-0 nova_compute[250018]: 2026-01-20 15:11:10.231 250022 DEBUG nova.network.neutron [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] [instance: 7f408ee5-e644-4a37-9cd2-3db94e56c638] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 20 15:11:10 compute-0 nova_compute[250018]: 2026-01-20 15:11:10.231 250022 DEBUG nova.objects.instance [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 7f408ee5-e644-4a37-9cd2-3db94e56c638 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:11:10 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2640: 321 pgs: 321 active+clean; 262 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 81 op/s
Jan 20 15:11:10 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:11:10 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:11:10 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:11:10.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:11:10 compute-0 ceph-mon[74360]: pgmap v2640: 321 pgs: 321 active+clean; 262 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 81 op/s
Jan 20 15:11:11 compute-0 nova_compute[250018]: 2026-01-20 15:11:11.160 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:11:11 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e388 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:11:11 compute-0 nova_compute[250018]: 2026-01-20 15:11:11.600 250022 DEBUG nova.network.neutron [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] [instance: 7f408ee5-e644-4a37-9cd2-3db94e56c638] Updating instance_info_cache with network_info: [{"id": "e51d2161-817b-46db-b0ce-4b313d293d7f", "address": "fa:16:3e:89:1a:96", "network": {"id": "e22d6ddc-0339-4395-bc21-95081825f05b", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1496899124-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5b43342be22543f79d4a56e26c6d0c96", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape51d2161-81", "ovs_interfaceid": "e51d2161-817b-46db-b0ce-4b313d293d7f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:11:11 compute-0 nova_compute[250018]: 2026-01-20 15:11:11.627 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Releasing lock "refresh_cache-7f408ee5-e644-4a37-9cd2-3db94e56c638" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:11:11 compute-0 nova_compute[250018]: 2026-01-20 15:11:11.627 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] [instance: 7f408ee5-e644-4a37-9cd2-3db94e56c638] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 20 15:11:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 15:11:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:11:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 20 15:11:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:11:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0031336151242322727 of space, bias 1.0, pg target 0.9400845372696818 quantized to 32 (current 32)
Jan 20 15:11:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:11:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021628687418574354 of space, bias 1.0, pg target 0.6488606225572306 quantized to 32 (current 32)
Jan 20 15:11:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:11:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 20 15:11:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:11:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0028546232319002418 of space, bias 1.0, pg target 0.8563869695700725 quantized to 32 (current 32)
Jan 20 15:11:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:11:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 20 15:11:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:11:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 15:11:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:11:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 20 15:11:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:11:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 20 15:11:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:11:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 15:11:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:11:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 20 15:11:12 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:11:12 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:11:12 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:11:12.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:11:12 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/1314462438' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:11:12 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2641: 321 pgs: 321 active+clean; 293 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 3.3 MiB/s wr, 80 op/s
Jan 20 15:11:12 compute-0 nova_compute[250018]: 2026-01-20 15:11:12.526 250022 DEBUG nova.compute.manager [req-e038122d-5102-4f63-a01e-0fb065ee8cee req-e70e7d3f-6351-498e-bf46-e61682685000 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7f408ee5-e644-4a37-9cd2-3db94e56c638] Received event network-changed-e51d2161-817b-46db-b0ce-4b313d293d7f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:11:12 compute-0 nova_compute[250018]: 2026-01-20 15:11:12.526 250022 DEBUG nova.compute.manager [req-e038122d-5102-4f63-a01e-0fb065ee8cee req-e70e7d3f-6351-498e-bf46-e61682685000 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7f408ee5-e644-4a37-9cd2-3db94e56c638] Refreshing instance network info cache due to event network-changed-e51d2161-817b-46db-b0ce-4b313d293d7f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 15:11:12 compute-0 nova_compute[250018]: 2026-01-20 15:11:12.527 250022 DEBUG oslo_concurrency.lockutils [req-e038122d-5102-4f63-a01e-0fb065ee8cee req-e70e7d3f-6351-498e-bf46-e61682685000 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-7f408ee5-e644-4a37-9cd2-3db94e56c638" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:11:12 compute-0 nova_compute[250018]: 2026-01-20 15:11:12.527 250022 DEBUG oslo_concurrency.lockutils [req-e038122d-5102-4f63-a01e-0fb065ee8cee req-e70e7d3f-6351-498e-bf46-e61682685000 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-7f408ee5-e644-4a37-9cd2-3db94e56c638" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:11:12 compute-0 nova_compute[250018]: 2026-01-20 15:11:12.528 250022 DEBUG nova.network.neutron [req-e038122d-5102-4f63-a01e-0fb065ee8cee req-e70e7d3f-6351-498e-bf46-e61682685000 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7f408ee5-e644-4a37-9cd2-3db94e56c638] Refreshing network info cache for port e51d2161-817b-46db-b0ce-4b313d293d7f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 15:11:12 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:11:12 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:11:12 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:11:12.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:11:13 compute-0 nova_compute[250018]: 2026-01-20 15:11:13.299 250022 DEBUG oslo_concurrency.lockutils [None req-31d23a05-1235-4616-8cea-7ebe6e3cc067 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Acquiring lock "7f408ee5-e644-4a37-9cd2-3db94e56c638" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:11:13 compute-0 nova_compute[250018]: 2026-01-20 15:11:13.300 250022 DEBUG oslo_concurrency.lockutils [None req-31d23a05-1235-4616-8cea-7ebe6e3cc067 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Lock "7f408ee5-e644-4a37-9cd2-3db94e56c638" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:11:13 compute-0 nova_compute[250018]: 2026-01-20 15:11:13.318 250022 INFO nova.compute.manager [None req-31d23a05-1235-4616-8cea-7ebe6e3cc067 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] [instance: 7f408ee5-e644-4a37-9cd2-3db94e56c638] Detaching volume c53232c5-7839-4cc5-8acf-4257d1fb7c13
Jan 20 15:11:13 compute-0 ceph-mon[74360]: pgmap v2641: 321 pgs: 321 active+clean; 293 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 3.3 MiB/s wr, 80 op/s
Jan 20 15:11:13 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/2034628683' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:11:13 compute-0 nova_compute[250018]: 2026-01-20 15:11:13.478 250022 INFO nova.virt.block_device [None req-31d23a05-1235-4616-8cea-7ebe6e3cc067 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] [instance: 7f408ee5-e644-4a37-9cd2-3db94e56c638] Attempting to driver detach volume c53232c5-7839-4cc5-8acf-4257d1fb7c13 from mountpoint /dev/vdb
Jan 20 15:11:13 compute-0 nova_compute[250018]: 2026-01-20 15:11:13.485 250022 DEBUG nova.virt.libvirt.driver [None req-31d23a05-1235-4616-8cea-7ebe6e3cc067 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Attempting to detach device vdb from instance 7f408ee5-e644-4a37-9cd2-3db94e56c638 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Jan 20 15:11:13 compute-0 nova_compute[250018]: 2026-01-20 15:11:13.486 250022 DEBUG nova.virt.libvirt.guest [None req-31d23a05-1235-4616-8cea-7ebe6e3cc067 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] detach device xml: <disk type="network" device="disk">
Jan 20 15:11:13 compute-0 nova_compute[250018]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 20 15:11:13 compute-0 nova_compute[250018]:   <source protocol="rbd" name="volumes/volume-c53232c5-7839-4cc5-8acf-4257d1fb7c13">
Jan 20 15:11:13 compute-0 nova_compute[250018]:     <host name="192.168.122.100" port="6789"/>
Jan 20 15:11:13 compute-0 nova_compute[250018]:     <host name="192.168.122.102" port="6789"/>
Jan 20 15:11:13 compute-0 nova_compute[250018]:     <host name="192.168.122.101" port="6789"/>
Jan 20 15:11:13 compute-0 nova_compute[250018]:   </source>
Jan 20 15:11:13 compute-0 nova_compute[250018]:   <target dev="vdb" bus="virtio"/>
Jan 20 15:11:13 compute-0 nova_compute[250018]:   <serial>c53232c5-7839-4cc5-8acf-4257d1fb7c13</serial>
Jan 20 15:11:13 compute-0 nova_compute[250018]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Jan 20 15:11:13 compute-0 nova_compute[250018]: </disk>
Jan 20 15:11:13 compute-0 nova_compute[250018]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 20 15:11:13 compute-0 nova_compute[250018]: 2026-01-20 15:11:13.493 250022 INFO nova.virt.libvirt.driver [None req-31d23a05-1235-4616-8cea-7ebe6e3cc067 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Successfully detached device vdb from instance 7f408ee5-e644-4a37-9cd2-3db94e56c638 from the persistent domain config.
Jan 20 15:11:13 compute-0 nova_compute[250018]: 2026-01-20 15:11:13.493 250022 DEBUG nova.virt.libvirt.driver [None req-31d23a05-1235-4616-8cea-7ebe6e3cc067 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 7f408ee5-e644-4a37-9cd2-3db94e56c638 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Jan 20 15:11:13 compute-0 nova_compute[250018]: 2026-01-20 15:11:13.494 250022 DEBUG nova.virt.libvirt.guest [None req-31d23a05-1235-4616-8cea-7ebe6e3cc067 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] detach device xml: <disk type="network" device="disk">
Jan 20 15:11:13 compute-0 nova_compute[250018]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 20 15:11:13 compute-0 nova_compute[250018]:   <source protocol="rbd" name="volumes/volume-c53232c5-7839-4cc5-8acf-4257d1fb7c13">
Jan 20 15:11:13 compute-0 nova_compute[250018]:     <host name="192.168.122.100" port="6789"/>
Jan 20 15:11:13 compute-0 nova_compute[250018]:     <host name="192.168.122.102" port="6789"/>
Jan 20 15:11:13 compute-0 nova_compute[250018]:     <host name="192.168.122.101" port="6789"/>
Jan 20 15:11:13 compute-0 nova_compute[250018]:   </source>
Jan 20 15:11:13 compute-0 nova_compute[250018]:   <target dev="vdb" bus="virtio"/>
Jan 20 15:11:13 compute-0 nova_compute[250018]:   <serial>c53232c5-7839-4cc5-8acf-4257d1fb7c13</serial>
Jan 20 15:11:13 compute-0 nova_compute[250018]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Jan 20 15:11:13 compute-0 nova_compute[250018]: </disk>
Jan 20 15:11:13 compute-0 nova_compute[250018]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 20 15:11:13 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 20 15:11:13 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1024057289' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:11:13 compute-0 nova_compute[250018]: 2026-01-20 15:11:13.593 250022 DEBUG nova.virt.libvirt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Received event <DeviceRemovedEvent: 1768921873.5928564, 7f408ee5-e644-4a37-9cd2-3db94e56c638 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Jan 20 15:11:13 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 20 15:11:13 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1024057289' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:11:13 compute-0 nova_compute[250018]: 2026-01-20 15:11:13.598 250022 DEBUG nova.virt.libvirt.driver [None req-31d23a05-1235-4616-8cea-7ebe6e3cc067 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 7f408ee5-e644-4a37-9cd2-3db94e56c638 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Jan 20 15:11:13 compute-0 nova_compute[250018]: 2026-01-20 15:11:13.600 250022 INFO nova.virt.libvirt.driver [None req-31d23a05-1235-4616-8cea-7ebe6e3cc067 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Successfully detached device vdb from instance 7f408ee5-e644-4a37-9cd2-3db94e56c638 from the live domain config.
Jan 20 15:11:13 compute-0 nova_compute[250018]: 2026-01-20 15:11:13.812 250022 DEBUG nova.objects.instance [None req-31d23a05-1235-4616-8cea-7ebe6e3cc067 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Lazy-loading 'flavor' on Instance uuid 7f408ee5-e644-4a37-9cd2-3db94e56c638 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:11:13 compute-0 nova_compute[250018]: 2026-01-20 15:11:13.856 250022 DEBUG oslo_concurrency.lockutils [None req-31d23a05-1235-4616-8cea-7ebe6e3cc067 c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Lock "7f408ee5-e644-4a37-9cd2-3db94e56c638" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.556s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:11:13 compute-0 nova_compute[250018]: 2026-01-20 15:11:13.934 250022 DEBUG nova.network.neutron [req-e038122d-5102-4f63-a01e-0fb065ee8cee req-e70e7d3f-6351-498e-bf46-e61682685000 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7f408ee5-e644-4a37-9cd2-3db94e56c638] Updated VIF entry in instance network info cache for port e51d2161-817b-46db-b0ce-4b313d293d7f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 15:11:13 compute-0 nova_compute[250018]: 2026-01-20 15:11:13.935 250022 DEBUG nova.network.neutron [req-e038122d-5102-4f63-a01e-0fb065ee8cee req-e70e7d3f-6351-498e-bf46-e61682685000 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7f408ee5-e644-4a37-9cd2-3db94e56c638] Updating instance_info_cache with network_info: [{"id": "e51d2161-817b-46db-b0ce-4b313d293d7f", "address": "fa:16:3e:89:1a:96", "network": {"id": "e22d6ddc-0339-4395-bc21-95081825f05b", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1496899124-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5b43342be22543f79d4a56e26c6d0c96", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape51d2161-81", "ovs_interfaceid": "e51d2161-817b-46db-b0ce-4b313d293d7f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:11:13 compute-0 nova_compute[250018]: 2026-01-20 15:11:13.957 250022 DEBUG oslo_concurrency.lockutils [req-e038122d-5102-4f63-a01e-0fb065ee8cee req-e70e7d3f-6351-498e-bf46-e61682685000 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-7f408ee5-e644-4a37-9cd2-3db94e56c638" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:11:14 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:11:14 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:11:14 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:11:14.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:11:14 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/1024057289' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:11:14 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/1024057289' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:11:14 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2642: 321 pgs: 321 active+clean; 303 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 3.9 MiB/s wr, 100 op/s
Jan 20 15:11:14 compute-0 nova_compute[250018]: 2026-01-20 15:11:14.622 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:11:14 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:11:14 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:11:14 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:11:14.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:11:14 compute-0 nova_compute[250018]: 2026-01-20 15:11:14.985 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:11:15 compute-0 ceph-mon[74360]: pgmap v2642: 321 pgs: 321 active+clean; 303 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 3.9 MiB/s wr, 100 op/s
Jan 20 15:11:16 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:11:16 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:11:16 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:11:16.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:11:16 compute-0 nova_compute[250018]: 2026-01-20 15:11:16.163 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:11:16 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:11:16.224 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=367c1a2c-b16a-4828-ab5a-626bb50023b4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '57'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:11:16 compute-0 nova_compute[250018]: 2026-01-20 15:11:16.372 250022 DEBUG oslo_concurrency.lockutils [None req-84571ca2-368c-49ef-97a3-1850cd8df39a c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Acquiring lock "7f408ee5-e644-4a37-9cd2-3db94e56c638" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:11:16 compute-0 nova_compute[250018]: 2026-01-20 15:11:16.373 250022 DEBUG oslo_concurrency.lockutils [None req-84571ca2-368c-49ef-97a3-1850cd8df39a c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Lock "7f408ee5-e644-4a37-9cd2-3db94e56c638" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:11:16 compute-0 nova_compute[250018]: 2026-01-20 15:11:16.373 250022 DEBUG oslo_concurrency.lockutils [None req-84571ca2-368c-49ef-97a3-1850cd8df39a c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Acquiring lock "7f408ee5-e644-4a37-9cd2-3db94e56c638-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:11:16 compute-0 nova_compute[250018]: 2026-01-20 15:11:16.373 250022 DEBUG oslo_concurrency.lockutils [None req-84571ca2-368c-49ef-97a3-1850cd8df39a c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Lock "7f408ee5-e644-4a37-9cd2-3db94e56c638-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:11:16 compute-0 nova_compute[250018]: 2026-01-20 15:11:16.374 250022 DEBUG oslo_concurrency.lockutils [None req-84571ca2-368c-49ef-97a3-1850cd8df39a c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Lock "7f408ee5-e644-4a37-9cd2-3db94e56c638-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:11:16 compute-0 nova_compute[250018]: 2026-01-20 15:11:16.375 250022 INFO nova.compute.manager [None req-84571ca2-368c-49ef-97a3-1850cd8df39a c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] [instance: 7f408ee5-e644-4a37-9cd2-3db94e56c638] Terminating instance
Jan 20 15:11:16 compute-0 nova_compute[250018]: 2026-01-20 15:11:16.377 250022 DEBUG nova.compute.manager [None req-84571ca2-368c-49ef-97a3-1850cd8df39a c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] [instance: 7f408ee5-e644-4a37-9cd2-3db94e56c638] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 15:11:16 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/292567124' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:11:16 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/292567124' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:11:16 compute-0 kernel: tape51d2161-81 (unregistering): left promiscuous mode
Jan 20 15:11:16 compute-0 NetworkManager[48960]: <info>  [1768921876.4374] device (tape51d2161-81): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 15:11:16 compute-0 nova_compute[250018]: 2026-01-20 15:11:16.448 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:11:16 compute-0 ovn_controller[148666]: 2026-01-20T15:11:16Z|00622|binding|INFO|Releasing lport e51d2161-817b-46db-b0ce-4b313d293d7f from this chassis (sb_readonly=0)
Jan 20 15:11:16 compute-0 ovn_controller[148666]: 2026-01-20T15:11:16Z|00623|binding|INFO|Setting lport e51d2161-817b-46db-b0ce-4b313d293d7f down in Southbound
Jan 20 15:11:16 compute-0 ovn_controller[148666]: 2026-01-20T15:11:16Z|00624|binding|INFO|Removing iface tape51d2161-81 ovn-installed in OVS
Jan 20 15:11:16 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2643: 321 pgs: 321 active+clean; 315 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 4.3 MiB/s wr, 130 op/s
Jan 20 15:11:16 compute-0 nova_compute[250018]: 2026-01-20 15:11:16.451 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:11:16 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:11:16.459 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:89:1a:96 10.100.0.5'], port_security=['fa:16:3e:89:1a:96 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '7f408ee5-e644-4a37-9cd2-3db94e56c638', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e22d6ddc-0339-4395-bc21-95081825f05b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5b43342be22543f79d4a56e26c6d0c96', 'neutron:revision_number': '8', 'neutron:security_group_ids': '8514424f-703f-4374-a78e-584f6e7c233b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=11c19619-1e7e-40ee-be83-c9dbc347543e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=e51d2161-817b-46db-b0ce-4b313d293d7f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:11:16 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:11:16.460 160071 INFO neutron.agent.ovn.metadata.agent [-] Port e51d2161-817b-46db-b0ce-4b313d293d7f in datapath e22d6ddc-0339-4395-bc21-95081825f05b unbound from our chassis
Jan 20 15:11:16 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:11:16.462 160071 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e22d6ddc-0339-4395-bc21-95081825f05b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 15:11:16 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:11:16.463 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[514dc39c-46b9-49c8-b7a8-5a7676191554]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:11:16 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:11:16.463 160071 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e22d6ddc-0339-4395-bc21-95081825f05b namespace which is not needed anymore
Jan 20 15:11:16 compute-0 nova_compute[250018]: 2026-01-20 15:11:16.466 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:11:16 compute-0 systemd[1]: machine-qemu\x2d77\x2dinstance\x2d000000ac.scope: Deactivated successfully.
Jan 20 15:11:16 compute-0 systemd[1]: machine-qemu\x2d77\x2dinstance\x2d000000ac.scope: Consumed 15.104s CPU time.
Jan 20 15:11:16 compute-0 systemd-machined[216401]: Machine qemu-77-instance-000000ac terminated.
Jan 20 15:11:16 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e388 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:11:16 compute-0 nova_compute[250018]: 2026-01-20 15:11:16.598 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:11:16 compute-0 nova_compute[250018]: 2026-01-20 15:11:16.603 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:11:16 compute-0 nova_compute[250018]: 2026-01-20 15:11:16.611 250022 INFO nova.virt.libvirt.driver [-] [instance: 7f408ee5-e644-4a37-9cd2-3db94e56c638] Instance destroyed successfully.
Jan 20 15:11:16 compute-0 nova_compute[250018]: 2026-01-20 15:11:16.612 250022 DEBUG nova.objects.instance [None req-84571ca2-368c-49ef-97a3-1850cd8df39a c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Lazy-loading 'resources' on Instance uuid 7f408ee5-e644-4a37-9cd2-3db94e56c638 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:11:16 compute-0 neutron-haproxy-ovnmeta-e22d6ddc-0339-4395-bc21-95081825f05b[353118]: [NOTICE]   (353122) : haproxy version is 2.8.14-c23fe91
Jan 20 15:11:16 compute-0 neutron-haproxy-ovnmeta-e22d6ddc-0339-4395-bc21-95081825f05b[353118]: [NOTICE]   (353122) : path to executable is /usr/sbin/haproxy
Jan 20 15:11:16 compute-0 neutron-haproxy-ovnmeta-e22d6ddc-0339-4395-bc21-95081825f05b[353118]: [WARNING]  (353122) : Exiting Master process...
Jan 20 15:11:16 compute-0 neutron-haproxy-ovnmeta-e22d6ddc-0339-4395-bc21-95081825f05b[353118]: [WARNING]  (353122) : Exiting Master process...
Jan 20 15:11:16 compute-0 neutron-haproxy-ovnmeta-e22d6ddc-0339-4395-bc21-95081825f05b[353118]: [ALERT]    (353122) : Current worker (353124) exited with code 143 (Terminated)
Jan 20 15:11:16 compute-0 neutron-haproxy-ovnmeta-e22d6ddc-0339-4395-bc21-95081825f05b[353118]: [WARNING]  (353122) : All workers exited. Exiting... (0)
Jan 20 15:11:16 compute-0 systemd[1]: libpod-df7743ac7883b8064c7305dc9040b4ba50ba5846d1ba61b6d38c79ab6fbb4ef6.scope: Deactivated successfully.
Jan 20 15:11:16 compute-0 podman[353375]: 2026-01-20 15:11:16.634565424 +0000 UTC m=+0.056464644 container died df7743ac7883b8064c7305dc9040b4ba50ba5846d1ba61b6d38c79ab6fbb4ef6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e22d6ddc-0339-4395-bc21-95081825f05b, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:11:16 compute-0 nova_compute[250018]: 2026-01-20 15:11:16.634 250022 DEBUG nova.virt.libvirt.vif [None req-84571ca2-368c-49ef-97a3-1850cd8df39a c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T15:09:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestMinimumBasicScenario-server-1147518891',display_name='tempest-TestMinimumBasicScenario-server-1147518891',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testminimumbasicscenario-server-1147518891',id=172,image_ref='f6b86f20-6a27-42dc-9911-14ae5e0ee2dc',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOAQWrP+GDiNUoPOYescUGgo/yLOGt4CAUCa8ZMwCowpdu0oAlhZM5WeI21R8uP9lryKPNYsVj0oCABaSlnWGXpu5Hh3xUYyz5x2p4oOq+Z53WRgS8ou5qt7R4Dm4lN/5A==',key_name='tempest-TestMinimumBasicScenario-1568926112',keypairs=<?>,launch_index=0,launched_at=2026-01-20T15:09:57Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5b43342be22543f79d4a56e26c6d0c96',ramdisk_id='',reservation_id='r-bbkfjia9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='f6b86f20-6a27-42dc-9911-14ae5e0ee2dc',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestMinimumBasicScenario-1665080150',owner_user_name='tempest-TestMinimumBasicScenario-1665080150-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T15:10:32Z,user_data=None,user_id='c98bd3f0904e48efa524d598bcad85e9',uuid=7f408ee5-e644-4a37-9cd2-3db94e56c638,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e51d2161-817b-46db-b0ce-4b313d293d7f", "address": "fa:16:3e:89:1a:96", "network": {"id": "e22d6ddc-0339-4395-bc21-95081825f05b", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1496899124-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5b43342be22543f79d4a56e26c6d0c96", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape51d2161-81", "ovs_interfaceid": "e51d2161-817b-46db-b0ce-4b313d293d7f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 15:11:16 compute-0 nova_compute[250018]: 2026-01-20 15:11:16.635 250022 DEBUG nova.network.os_vif_util [None req-84571ca2-368c-49ef-97a3-1850cd8df39a c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Converting VIF {"id": "e51d2161-817b-46db-b0ce-4b313d293d7f", "address": "fa:16:3e:89:1a:96", "network": {"id": "e22d6ddc-0339-4395-bc21-95081825f05b", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1496899124-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5b43342be22543f79d4a56e26c6d0c96", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape51d2161-81", "ovs_interfaceid": "e51d2161-817b-46db-b0ce-4b313d293d7f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:11:16 compute-0 nova_compute[250018]: 2026-01-20 15:11:16.636 250022 DEBUG nova.network.os_vif_util [None req-84571ca2-368c-49ef-97a3-1850cd8df39a c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:89:1a:96,bridge_name='br-int',has_traffic_filtering=True,id=e51d2161-817b-46db-b0ce-4b313d293d7f,network=Network(e22d6ddc-0339-4395-bc21-95081825f05b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape51d2161-81') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:11:16 compute-0 nova_compute[250018]: 2026-01-20 15:11:16.636 250022 DEBUG os_vif [None req-84571ca2-368c-49ef-97a3-1850cd8df39a c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:89:1a:96,bridge_name='br-int',has_traffic_filtering=True,id=e51d2161-817b-46db-b0ce-4b313d293d7f,network=Network(e22d6ddc-0339-4395-bc21-95081825f05b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape51d2161-81') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 15:11:16 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:11:16 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:11:16 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:11:16.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:11:16 compute-0 nova_compute[250018]: 2026-01-20 15:11:16.638 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:11:16 compute-0 nova_compute[250018]: 2026-01-20 15:11:16.639 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape51d2161-81, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:11:16 compute-0 nova_compute[250018]: 2026-01-20 15:11:16.640 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:11:16 compute-0 nova_compute[250018]: 2026-01-20 15:11:16.643 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:11:16 compute-0 nova_compute[250018]: 2026-01-20 15:11:16.645 250022 INFO os_vif [None req-84571ca2-368c-49ef-97a3-1850cd8df39a c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:89:1a:96,bridge_name='br-int',has_traffic_filtering=True,id=e51d2161-817b-46db-b0ce-4b313d293d7f,network=Network(e22d6ddc-0339-4395-bc21-95081825f05b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape51d2161-81')
Jan 20 15:11:16 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-df7743ac7883b8064c7305dc9040b4ba50ba5846d1ba61b6d38c79ab6fbb4ef6-userdata-shm.mount: Deactivated successfully.
Jan 20 15:11:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-2f6c0ab1bd1cc39604e16998f6f753fa85427f13aa1e2b42b0faa36e80b39118-merged.mount: Deactivated successfully.
Jan 20 15:11:16 compute-0 podman[353375]: 2026-01-20 15:11:16.670755961 +0000 UTC m=+0.092655201 container cleanup df7743ac7883b8064c7305dc9040b4ba50ba5846d1ba61b6d38c79ab6fbb4ef6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e22d6ddc-0339-4395-bc21-95081825f05b, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 20 15:11:16 compute-0 systemd[1]: libpod-conmon-df7743ac7883b8064c7305dc9040b4ba50ba5846d1ba61b6d38c79ab6fbb4ef6.scope: Deactivated successfully.
Jan 20 15:11:16 compute-0 podman[353428]: 2026-01-20 15:11:16.744045267 +0000 UTC m=+0.044823310 container remove df7743ac7883b8064c7305dc9040b4ba50ba5846d1ba61b6d38c79ab6fbb4ef6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e22d6ddc-0339-4395-bc21-95081825f05b, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:11:16 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:11:16.750 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[90045d42-e50e-4209-8a30-abf392c3639b]: (4, ('Tue Jan 20 03:11:16 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-e22d6ddc-0339-4395-bc21-95081825f05b (df7743ac7883b8064c7305dc9040b4ba50ba5846d1ba61b6d38c79ab6fbb4ef6)\ndf7743ac7883b8064c7305dc9040b4ba50ba5846d1ba61b6d38c79ab6fbb4ef6\nTue Jan 20 03:11:16 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-e22d6ddc-0339-4395-bc21-95081825f05b (df7743ac7883b8064c7305dc9040b4ba50ba5846d1ba61b6d38c79ab6fbb4ef6)\ndf7743ac7883b8064c7305dc9040b4ba50ba5846d1ba61b6d38c79ab6fbb4ef6\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:11:16 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:11:16.751 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[f945f61b-3dd8-4f31-ad3d-06d918ef33cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:11:16 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:11:16.752 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape22d6ddc-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:11:16 compute-0 kernel: tape22d6ddc-00: left promiscuous mode
Jan 20 15:11:16 compute-0 nova_compute[250018]: 2026-01-20 15:11:16.759 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:11:16 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:11:16.761 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[0dd39d62-85f7-4fe6-915a-b31ba0b91fc9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:11:16 compute-0 nova_compute[250018]: 2026-01-20 15:11:16.771 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:11:16 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:11:16.783 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[ec9f9a0d-4dbd-4265-b00f-6a5cbc253b2c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:11:16 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:11:16.785 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[f3ca9e8a-60ff-43c9-ac9d-4350afb35d3c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:11:16 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:11:16.800 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[ee951242-7b49-46aa-9ef8-d82132c49004]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 777258, 'reachable_time': 27578, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 353446, 'error': None, 'target': 'ovnmeta-e22d6ddc-0339-4395-bc21-95081825f05b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:11:16 compute-0 systemd[1]: run-netns-ovnmeta\x2de22d6ddc\x2d0339\x2d4395\x2dbc21\x2d95081825f05b.mount: Deactivated successfully.
Jan 20 15:11:16 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:11:16.803 160517 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e22d6ddc-0339-4395-bc21-95081825f05b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 15:11:16 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:11:16.804 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[d864abe0-584e-4005-a2a5-21e2f009c3e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:11:17 compute-0 nova_compute[250018]: 2026-01-20 15:11:17.017 250022 INFO nova.virt.libvirt.driver [None req-84571ca2-368c-49ef-97a3-1850cd8df39a c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] [instance: 7f408ee5-e644-4a37-9cd2-3db94e56c638] Deleting instance files /var/lib/nova/instances/7f408ee5-e644-4a37-9cd2-3db94e56c638_del
Jan 20 15:11:17 compute-0 nova_compute[250018]: 2026-01-20 15:11:17.018 250022 INFO nova.virt.libvirt.driver [None req-84571ca2-368c-49ef-97a3-1850cd8df39a c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] [instance: 7f408ee5-e644-4a37-9cd2-3db94e56c638] Deletion of /var/lib/nova/instances/7f408ee5-e644-4a37-9cd2-3db94e56c638_del complete
Jan 20 15:11:17 compute-0 nova_compute[250018]: 2026-01-20 15:11:17.065 250022 INFO nova.compute.manager [None req-84571ca2-368c-49ef-97a3-1850cd8df39a c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] [instance: 7f408ee5-e644-4a37-9cd2-3db94e56c638] Took 0.69 seconds to destroy the instance on the hypervisor.
Jan 20 15:11:17 compute-0 nova_compute[250018]: 2026-01-20 15:11:17.066 250022 DEBUG oslo.service.loopingcall [None req-84571ca2-368c-49ef-97a3-1850cd8df39a c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 15:11:17 compute-0 nova_compute[250018]: 2026-01-20 15:11:17.066 250022 DEBUG nova.compute.manager [-] [instance: 7f408ee5-e644-4a37-9cd2-3db94e56c638] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 15:11:17 compute-0 nova_compute[250018]: 2026-01-20 15:11:17.067 250022 DEBUG nova.network.neutron [-] [instance: 7f408ee5-e644-4a37-9cd2-3db94e56c638] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 15:11:17 compute-0 ceph-mon[74360]: pgmap v2643: 321 pgs: 321 active+clean; 315 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 4.3 MiB/s wr, 130 op/s
Jan 20 15:11:17 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/2499990403' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:11:17 compute-0 nova_compute[250018]: 2026-01-20 15:11:17.991 250022 DEBUG nova.compute.manager [req-d5961128-ad7f-4e78-85db-0f42e94839c8 req-672ca164-5b92-4ddd-a779-d574b87bf4e3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7f408ee5-e644-4a37-9cd2-3db94e56c638] Received event network-vif-unplugged-e51d2161-817b-46db-b0ce-4b313d293d7f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:11:17 compute-0 nova_compute[250018]: 2026-01-20 15:11:17.992 250022 DEBUG oslo_concurrency.lockutils [req-d5961128-ad7f-4e78-85db-0f42e94839c8 req-672ca164-5b92-4ddd-a779-d574b87bf4e3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "7f408ee5-e644-4a37-9cd2-3db94e56c638-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:11:17 compute-0 nova_compute[250018]: 2026-01-20 15:11:17.993 250022 DEBUG oslo_concurrency.lockutils [req-d5961128-ad7f-4e78-85db-0f42e94839c8 req-672ca164-5b92-4ddd-a779-d574b87bf4e3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "7f408ee5-e644-4a37-9cd2-3db94e56c638-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:11:17 compute-0 nova_compute[250018]: 2026-01-20 15:11:17.993 250022 DEBUG oslo_concurrency.lockutils [req-d5961128-ad7f-4e78-85db-0f42e94839c8 req-672ca164-5b92-4ddd-a779-d574b87bf4e3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "7f408ee5-e644-4a37-9cd2-3db94e56c638-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:11:17 compute-0 nova_compute[250018]: 2026-01-20 15:11:17.993 250022 DEBUG nova.compute.manager [req-d5961128-ad7f-4e78-85db-0f42e94839c8 req-672ca164-5b92-4ddd-a779-d574b87bf4e3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7f408ee5-e644-4a37-9cd2-3db94e56c638] No waiting events found dispatching network-vif-unplugged-e51d2161-817b-46db-b0ce-4b313d293d7f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:11:17 compute-0 nova_compute[250018]: 2026-01-20 15:11:17.993 250022 DEBUG nova.compute.manager [req-d5961128-ad7f-4e78-85db-0f42e94839c8 req-672ca164-5b92-4ddd-a779-d574b87bf4e3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7f408ee5-e644-4a37-9cd2-3db94e56c638] Received event network-vif-unplugged-e51d2161-817b-46db-b0ce-4b313d293d7f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 20 15:11:17 compute-0 nova_compute[250018]: 2026-01-20 15:11:17.993 250022 DEBUG nova.compute.manager [req-d5961128-ad7f-4e78-85db-0f42e94839c8 req-672ca164-5b92-4ddd-a779-d574b87bf4e3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7f408ee5-e644-4a37-9cd2-3db94e56c638] Received event network-vif-plugged-e51d2161-817b-46db-b0ce-4b313d293d7f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:11:17 compute-0 nova_compute[250018]: 2026-01-20 15:11:17.993 250022 DEBUG oslo_concurrency.lockutils [req-d5961128-ad7f-4e78-85db-0f42e94839c8 req-672ca164-5b92-4ddd-a779-d574b87bf4e3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "7f408ee5-e644-4a37-9cd2-3db94e56c638-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:11:17 compute-0 nova_compute[250018]: 2026-01-20 15:11:17.994 250022 DEBUG oslo_concurrency.lockutils [req-d5961128-ad7f-4e78-85db-0f42e94839c8 req-672ca164-5b92-4ddd-a779-d574b87bf4e3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "7f408ee5-e644-4a37-9cd2-3db94e56c638-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:11:17 compute-0 nova_compute[250018]: 2026-01-20 15:11:17.994 250022 DEBUG oslo_concurrency.lockutils [req-d5961128-ad7f-4e78-85db-0f42e94839c8 req-672ca164-5b92-4ddd-a779-d574b87bf4e3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "7f408ee5-e644-4a37-9cd2-3db94e56c638-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:11:17 compute-0 nova_compute[250018]: 2026-01-20 15:11:17.994 250022 DEBUG nova.compute.manager [req-d5961128-ad7f-4e78-85db-0f42e94839c8 req-672ca164-5b92-4ddd-a779-d574b87bf4e3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7f408ee5-e644-4a37-9cd2-3db94e56c638] No waiting events found dispatching network-vif-plugged-e51d2161-817b-46db-b0ce-4b313d293d7f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:11:17 compute-0 nova_compute[250018]: 2026-01-20 15:11:17.994 250022 WARNING nova.compute.manager [req-d5961128-ad7f-4e78-85db-0f42e94839c8 req-672ca164-5b92-4ddd-a779-d574b87bf4e3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7f408ee5-e644-4a37-9cd2-3db94e56c638] Received unexpected event network-vif-plugged-e51d2161-817b-46db-b0ce-4b313d293d7f for instance with vm_state active and task_state deleting.
Jan 20 15:11:18 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:11:18 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:11:18 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:11:18.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:11:18 compute-0 sudo[353448]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:11:18 compute-0 sudo[353448]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:11:18 compute-0 sudo[353448]: pam_unix(sudo:session): session closed for user root
Jan 20 15:11:18 compute-0 sudo[353474]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:11:18 compute-0 sudo[353474]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:11:18 compute-0 sudo[353474]: pam_unix(sudo:session): session closed for user root
Jan 20 15:11:18 compute-0 nova_compute[250018]: 2026-01-20 15:11:18.265 250022 DEBUG nova.network.neutron [-] [instance: 7f408ee5-e644-4a37-9cd2-3db94e56c638] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:11:18 compute-0 nova_compute[250018]: 2026-01-20 15:11:18.281 250022 INFO nova.compute.manager [-] [instance: 7f408ee5-e644-4a37-9cd2-3db94e56c638] Took 1.21 seconds to deallocate network for instance.
Jan 20 15:11:18 compute-0 sudo[353499]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:11:18 compute-0 sudo[353499]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:11:18 compute-0 sudo[353499]: pam_unix(sudo:session): session closed for user root
Jan 20 15:11:18 compute-0 nova_compute[250018]: 2026-01-20 15:11:18.356 250022 DEBUG oslo_concurrency.lockutils [None req-84571ca2-368c-49ef-97a3-1850cd8df39a c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:11:18 compute-0 nova_compute[250018]: 2026-01-20 15:11:18.356 250022 DEBUG oslo_concurrency.lockutils [None req-84571ca2-368c-49ef-97a3-1850cd8df39a c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:11:18 compute-0 sudo[353524]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 15:11:18 compute-0 sudo[353524]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:11:18 compute-0 nova_compute[250018]: 2026-01-20 15:11:18.404 250022 DEBUG nova.compute.manager [req-e38d4822-74e5-444b-8fb6-bc78c60002a2 req-aafe32a1-fefd-4a5f-b709-d5a2fb921a1f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 7f408ee5-e644-4a37-9cd2-3db94e56c638] Received event network-vif-deleted-e51d2161-817b-46db-b0ce-4b313d293d7f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:11:18 compute-0 nova_compute[250018]: 2026-01-20 15:11:18.425 250022 DEBUG oslo_concurrency.processutils [None req-84571ca2-368c-49ef-97a3-1850cd8df39a c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:11:18 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2644: 321 pgs: 321 active+clean; 315 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 913 KiB/s rd, 3.6 MiB/s wr, 109 op/s
Jan 20 15:11:18 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 15:11:18 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/628713959' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:11:18 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:11:18 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:11:18 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:11:18.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:11:18 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:11:18 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1273009450' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:11:18 compute-0 nova_compute[250018]: 2026-01-20 15:11:18.863 250022 DEBUG oslo_concurrency.processutils [None req-84571ca2-368c-49ef-97a3-1850cd8df39a c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:11:18 compute-0 nova_compute[250018]: 2026-01-20 15:11:18.870 250022 DEBUG nova.compute.provider_tree [None req-84571ca2-368c-49ef-97a3-1850cd8df39a c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:11:18 compute-0 nova_compute[250018]: 2026-01-20 15:11:18.890 250022 DEBUG nova.scheduler.client.report [None req-84571ca2-368c-49ef-97a3-1850cd8df39a c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:11:18 compute-0 sudo[353524]: pam_unix(sudo:session): session closed for user root
Jan 20 15:11:18 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Jan 20 15:11:18 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 20 15:11:18 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 15:11:18 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:11:19 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 20 15:11:19 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 15:11:19 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 20 15:11:19 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:11:19 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 1b5f79a3-31c9-4595-b265-8fd514d833f0 does not exist
Jan 20 15:11:19 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev a3015253-954c-4689-bde2-60227fc5579c does not exist
Jan 20 15:11:19 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev a62bd624-3dca-4a56-93c6-c4cac1a34972 does not exist
Jan 20 15:11:19 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 20 15:11:19 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 15:11:19 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 20 15:11:19 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 15:11:19 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 15:11:19 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:11:19 compute-0 sudo[353602]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:11:19 compute-0 sudo[353602]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:11:19 compute-0 sudo[353602]: pam_unix(sudo:session): session closed for user root
Jan 20 15:11:19 compute-0 nova_compute[250018]: 2026-01-20 15:11:19.115 250022 DEBUG oslo_concurrency.lockutils [None req-84571ca2-368c-49ef-97a3-1850cd8df39a c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.759s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:11:19 compute-0 sudo[353627]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:11:19 compute-0 sudo[353627]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:11:19 compute-0 sudo[353627]: pam_unix(sudo:session): session closed for user root
Jan 20 15:11:19 compute-0 nova_compute[250018]: 2026-01-20 15:11:19.143 250022 INFO nova.scheduler.client.report [None req-84571ca2-368c-49ef-97a3-1850cd8df39a c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Deleted allocations for instance 7f408ee5-e644-4a37-9cd2-3db94e56c638
Jan 20 15:11:19 compute-0 sudo[353652]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:11:19 compute-0 sudo[353652]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:11:19 compute-0 sudo[353652]: pam_unix(sudo:session): session closed for user root
Jan 20 15:11:19 compute-0 nova_compute[250018]: 2026-01-20 15:11:19.220 250022 DEBUG oslo_concurrency.lockutils [None req-84571ca2-368c-49ef-97a3-1850cd8df39a c98bd3f0904e48efa524d598bcad85e9 5b43342be22543f79d4a56e26c6d0c96 - - default default] Lock "7f408ee5-e644-4a37-9cd2-3db94e56c638" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.847s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:11:19 compute-0 sudo[353677]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 15:11:19 compute-0 sudo[353677]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:11:19 compute-0 sudo[353738]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:11:19 compute-0 sudo[353738]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:11:19 compute-0 sudo[353738]: pam_unix(sudo:session): session closed for user root
Jan 20 15:11:19 compute-0 ceph-mon[74360]: pgmap v2644: 321 pgs: 321 active+clean; 315 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 913 KiB/s rd, 3.6 MiB/s wr, 109 op/s
Jan 20 15:11:19 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/628713959' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:11:19 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/1273009450' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:11:19 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 20 15:11:19 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:11:19 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 15:11:19 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:11:19 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 15:11:19 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 15:11:19 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:11:19 compute-0 podman[353764]: 2026-01-20 15:11:19.549868884 +0000 UTC m=+0.044842210 container create 461e35a9a94da7b3a0fbdc2b9327f8130f93214254be5c916d459833a31b33a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_hertz, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 20 15:11:19 compute-0 systemd[1]: Started libpod-conmon-461e35a9a94da7b3a0fbdc2b9327f8130f93214254be5c916d459833a31b33a4.scope.
Jan 20 15:11:19 compute-0 sudo[353780]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:11:19 compute-0 sudo[353780]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:11:19 compute-0 sudo[353780]: pam_unix(sudo:session): session closed for user root
Jan 20 15:11:19 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:11:19 compute-0 podman[353764]: 2026-01-20 15:11:19.532956538 +0000 UTC m=+0.027929884 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:11:19 compute-0 podman[353764]: 2026-01-20 15:11:19.630113439 +0000 UTC m=+0.125086785 container init 461e35a9a94da7b3a0fbdc2b9327f8130f93214254be5c916d459833a31b33a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_hertz, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 15:11:19 compute-0 podman[353764]: 2026-01-20 15:11:19.638823443 +0000 UTC m=+0.133796769 container start 461e35a9a94da7b3a0fbdc2b9327f8130f93214254be5c916d459833a31b33a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_hertz, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 15:11:19 compute-0 podman[353764]: 2026-01-20 15:11:19.642258517 +0000 UTC m=+0.137231843 container attach 461e35a9a94da7b3a0fbdc2b9327f8130f93214254be5c916d459833a31b33a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_hertz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:11:19 compute-0 stupefied_hertz[353807]: 167 167
Jan 20 15:11:19 compute-0 systemd[1]: libpod-461e35a9a94da7b3a0fbdc2b9327f8130f93214254be5c916d459833a31b33a4.scope: Deactivated successfully.
Jan 20 15:11:19 compute-0 podman[353764]: 2026-01-20 15:11:19.648043223 +0000 UTC m=+0.143016549 container died 461e35a9a94da7b3a0fbdc2b9327f8130f93214254be5c916d459833a31b33a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_hertz, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 15:11:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-4ba78ddfc11a5d5142fcd847db1d9298a2d81bcb3e2b87e3f86f6cde3ef3f5d4-merged.mount: Deactivated successfully.
Jan 20 15:11:19 compute-0 podman[353764]: 2026-01-20 15:11:19.684860215 +0000 UTC m=+0.179833541 container remove 461e35a9a94da7b3a0fbdc2b9327f8130f93214254be5c916d459833a31b33a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_hertz, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 20 15:11:19 compute-0 systemd[1]: libpod-conmon-461e35a9a94da7b3a0fbdc2b9327f8130f93214254be5c916d459833a31b33a4.scope: Deactivated successfully.
Jan 20 15:11:19 compute-0 podman[353833]: 2026-01-20 15:11:19.848178781 +0000 UTC m=+0.043172456 container create 075198a340a90c7009d38554a814f95bb0342e85add99c1da5c0e5c8c2ff4b8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_archimedes, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 15:11:19 compute-0 systemd[1]: Started libpod-conmon-075198a340a90c7009d38554a814f95bb0342e85add99c1da5c0e5c8c2ff4b8e.scope.
Jan 20 15:11:19 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:11:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a80bd76c3c9c6e4e61b8e0b2169846b586a09a9dd5c7422551b80358b427062/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 15:11:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a80bd76c3c9c6e4e61b8e0b2169846b586a09a9dd5c7422551b80358b427062/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 15:11:19 compute-0 podman[353833]: 2026-01-20 15:11:19.831812349 +0000 UTC m=+0.026806054 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:11:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a80bd76c3c9c6e4e61b8e0b2169846b586a09a9dd5c7422551b80358b427062/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 15:11:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a80bd76c3c9c6e4e61b8e0b2169846b586a09a9dd5c7422551b80358b427062/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 15:11:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a80bd76c3c9c6e4e61b8e0b2169846b586a09a9dd5c7422551b80358b427062/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 15:11:19 compute-0 podman[353833]: 2026-01-20 15:11:19.934994373 +0000 UTC m=+0.129988068 container init 075198a340a90c7009d38554a814f95bb0342e85add99c1da5c0e5c8c2ff4b8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_archimedes, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:11:19 compute-0 podman[353833]: 2026-01-20 15:11:19.948108017 +0000 UTC m=+0.143101692 container start 075198a340a90c7009d38554a814f95bb0342e85add99c1da5c0e5c8c2ff4b8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_archimedes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 15:11:19 compute-0 podman[353833]: 2026-01-20 15:11:19.951239051 +0000 UTC m=+0.146232746 container attach 075198a340a90c7009d38554a814f95bb0342e85add99c1da5c0e5c8c2ff4b8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_archimedes, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 15:11:19 compute-0 nova_compute[250018]: 2026-01-20 15:11:19.989 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:11:20 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:11:20 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:11:20 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:11:20.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:11:20 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2645: 321 pgs: 321 active+clean; 259 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 177 op/s
Jan 20 15:11:20 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e388 do_prune osdmap full prune enabled
Jan 20 15:11:20 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:11:20 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:11:20 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:11:20.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:11:21 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e389 e389: 3 total, 3 up, 3 in
Jan 20 15:11:21 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e389: 3 total, 3 up, 3 in
Jan 20 15:11:21 compute-0 ceph-mon[74360]: pgmap v2645: 321 pgs: 321 active+clean; 259 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 177 op/s
Jan 20 15:11:21 compute-0 ceph-mon[74360]: osdmap e389: 3 total, 3 up, 3 in
Jan 20 15:11:21 compute-0 charming_archimedes[353851]: --> passed data devices: 0 physical, 1 LVM
Jan 20 15:11:21 compute-0 charming_archimedes[353851]: --> relative data size: 1.0
Jan 20 15:11:21 compute-0 charming_archimedes[353851]: --> All data devices are unavailable
Jan 20 15:11:21 compute-0 systemd[1]: libpod-075198a340a90c7009d38554a814f95bb0342e85add99c1da5c0e5c8c2ff4b8e.scope: Deactivated successfully.
Jan 20 15:11:21 compute-0 podman[353833]: 2026-01-20 15:11:21.088971012 +0000 UTC m=+1.283964687 container died 075198a340a90c7009d38554a814f95bb0342e85add99c1da5c0e5c8c2ff4b8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_archimedes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 20 15:11:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-6a80bd76c3c9c6e4e61b8e0b2169846b586a09a9dd5c7422551b80358b427062-merged.mount: Deactivated successfully.
Jan 20 15:11:21 compute-0 podman[353833]: 2026-01-20 15:11:21.141584961 +0000 UTC m=+1.336578636 container remove 075198a340a90c7009d38554a814f95bb0342e85add99c1da5c0e5c8c2ff4b8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_archimedes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:11:21 compute-0 systemd[1]: libpod-conmon-075198a340a90c7009d38554a814f95bb0342e85add99c1da5c0e5c8c2ff4b8e.scope: Deactivated successfully.
Jan 20 15:11:21 compute-0 sudo[353677]: pam_unix(sudo:session): session closed for user root
Jan 20 15:11:21 compute-0 sudo[353880]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:11:21 compute-0 sudo[353880]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:11:21 compute-0 sudo[353880]: pam_unix(sudo:session): session closed for user root
Jan 20 15:11:21 compute-0 sudo[353905]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:11:21 compute-0 sudo[353905]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:11:21 compute-0 sudo[353905]: pam_unix(sudo:session): session closed for user root
Jan 20 15:11:21 compute-0 sudo[353930]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:11:21 compute-0 sudo[353930]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:11:21 compute-0 sudo[353930]: pam_unix(sudo:session): session closed for user root
Jan 20 15:11:21 compute-0 sudo[353955]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- lvm list --format json
Jan 20 15:11:21 compute-0 sudo[353955]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:11:21 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e389 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:11:21 compute-0 nova_compute[250018]: 2026-01-20 15:11:21.640 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:11:21 compute-0 podman[354021]: 2026-01-20 15:11:21.716392096 +0000 UTC m=+0.035175379 container create 6bb6d2a323bb1368c719d0279efff3751980b4f8cab10c7bb95a9232c443b650 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_robinson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 15:11:21 compute-0 systemd[1]: Started libpod-conmon-6bb6d2a323bb1368c719d0279efff3751980b4f8cab10c7bb95a9232c443b650.scope.
Jan 20 15:11:21 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:11:21 compute-0 podman[354021]: 2026-01-20 15:11:21.787230478 +0000 UTC m=+0.106013761 container init 6bb6d2a323bb1368c719d0279efff3751980b4f8cab10c7bb95a9232c443b650 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_robinson, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 20 15:11:21 compute-0 podman[354021]: 2026-01-20 15:11:21.793485416 +0000 UTC m=+0.112268699 container start 6bb6d2a323bb1368c719d0279efff3751980b4f8cab10c7bb95a9232c443b650 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_robinson, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 20 15:11:21 compute-0 podman[354021]: 2026-01-20 15:11:21.700357474 +0000 UTC m=+0.019140777 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:11:21 compute-0 podman[354021]: 2026-01-20 15:11:21.79661061 +0000 UTC m=+0.115393893 container attach 6bb6d2a323bb1368c719d0279efff3751980b4f8cab10c7bb95a9232c443b650 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_robinson, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 20 15:11:21 compute-0 loving_robinson[354037]: 167 167
Jan 20 15:11:21 compute-0 systemd[1]: libpod-6bb6d2a323bb1368c719d0279efff3751980b4f8cab10c7bb95a9232c443b650.scope: Deactivated successfully.
Jan 20 15:11:21 compute-0 podman[354021]: 2026-01-20 15:11:21.799316813 +0000 UTC m=+0.118100096 container died 6bb6d2a323bb1368c719d0279efff3751980b4f8cab10c7bb95a9232c443b650 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_robinson, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 20 15:11:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-43dcc425668023a4565668f260e64fda4c9612d0c5993599e2ced06331574778-merged.mount: Deactivated successfully.
Jan 20 15:11:21 compute-0 podman[354021]: 2026-01-20 15:11:21.831053309 +0000 UTC m=+0.149836592 container remove 6bb6d2a323bb1368c719d0279efff3751980b4f8cab10c7bb95a9232c443b650 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_robinson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 20 15:11:21 compute-0 systemd[1]: libpod-conmon-6bb6d2a323bb1368c719d0279efff3751980b4f8cab10c7bb95a9232c443b650.scope: Deactivated successfully.
Jan 20 15:11:21 compute-0 podman[354061]: 2026-01-20 15:11:21.976568625 +0000 UTC m=+0.038098118 container create 49bb0957ac7f1b7f34c80f3d00f3167d0c9e1bd68c324bac7db003f6c02aedb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_pike, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 20 15:11:22 compute-0 systemd[1]: Started libpod-conmon-49bb0957ac7f1b7f34c80f3d00f3167d0c9e1bd68c324bac7db003f6c02aedb2.scope.
Jan 20 15:11:22 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:11:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8654a161ab9800816fc0ebac81a013c2c3ed63824306d904a8acefe12ea508a4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 15:11:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8654a161ab9800816fc0ebac81a013c2c3ed63824306d904a8acefe12ea508a4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 15:11:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8654a161ab9800816fc0ebac81a013c2c3ed63824306d904a8acefe12ea508a4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 15:11:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8654a161ab9800816fc0ebac81a013c2c3ed63824306d904a8acefe12ea508a4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 15:11:22 compute-0 podman[354061]: 2026-01-20 15:11:22.037984742 +0000 UTC m=+0.099514265 container init 49bb0957ac7f1b7f34c80f3d00f3167d0c9e1bd68c324bac7db003f6c02aedb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_pike, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 20 15:11:22 compute-0 podman[354061]: 2026-01-20 15:11:22.049905443 +0000 UTC m=+0.111434936 container start 49bb0957ac7f1b7f34c80f3d00f3167d0c9e1bd68c324bac7db003f6c02aedb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_pike, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 15:11:22 compute-0 podman[354061]: 2026-01-20 15:11:22.053241373 +0000 UTC m=+0.114770866 container attach 49bb0957ac7f1b7f34c80f3d00f3167d0c9e1bd68c324bac7db003f6c02aedb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_pike, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 15:11:22 compute-0 podman[354061]: 2026-01-20 15:11:21.960137661 +0000 UTC m=+0.021667184 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:11:22 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:11:22 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:11:22 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:11:22.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:11:22 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/1118456220' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:11:22 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/1813241616' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:11:22 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2647: 321 pgs: 321 active+clean; 226 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 960 KiB/s wr, 169 op/s
Jan 20 15:11:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:11:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:11:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:11:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:11:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:11:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:11:22 compute-0 dazzling_pike[354077]: {
Jan 20 15:11:22 compute-0 dazzling_pike[354077]:     "0": [
Jan 20 15:11:22 compute-0 dazzling_pike[354077]:         {
Jan 20 15:11:22 compute-0 dazzling_pike[354077]:             "devices": [
Jan 20 15:11:22 compute-0 dazzling_pike[354077]:                 "/dev/loop3"
Jan 20 15:11:22 compute-0 dazzling_pike[354077]:             ],
Jan 20 15:11:22 compute-0 dazzling_pike[354077]:             "lv_name": "ceph_lv0",
Jan 20 15:11:22 compute-0 dazzling_pike[354077]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 15:11:22 compute-0 dazzling_pike[354077]:             "lv_size": "7511998464",
Jan 20 15:11:22 compute-0 dazzling_pike[354077]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e399cf45-e6b6-5393-99f1-75c601d3f188,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afc3740a-ccee-46f8-b343-22648cc89376,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 20 15:11:22 compute-0 dazzling_pike[354077]:             "lv_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 15:11:22 compute-0 dazzling_pike[354077]:             "name": "ceph_lv0",
Jan 20 15:11:22 compute-0 dazzling_pike[354077]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 15:11:22 compute-0 dazzling_pike[354077]:             "tags": {
Jan 20 15:11:22 compute-0 dazzling_pike[354077]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 15:11:22 compute-0 dazzling_pike[354077]:                 "ceph.block_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 15:11:22 compute-0 dazzling_pike[354077]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 15:11:22 compute-0 dazzling_pike[354077]:                 "ceph.cluster_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 15:11:22 compute-0 dazzling_pike[354077]:                 "ceph.cluster_name": "ceph",
Jan 20 15:11:22 compute-0 dazzling_pike[354077]:                 "ceph.crush_device_class": "",
Jan 20 15:11:22 compute-0 dazzling_pike[354077]:                 "ceph.encrypted": "0",
Jan 20 15:11:22 compute-0 dazzling_pike[354077]:                 "ceph.osd_fsid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 15:11:22 compute-0 dazzling_pike[354077]:                 "ceph.osd_id": "0",
Jan 20 15:11:22 compute-0 dazzling_pike[354077]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 15:11:22 compute-0 dazzling_pike[354077]:                 "ceph.type": "block",
Jan 20 15:11:22 compute-0 dazzling_pike[354077]:                 "ceph.vdo": "0"
Jan 20 15:11:22 compute-0 dazzling_pike[354077]:             },
Jan 20 15:11:22 compute-0 dazzling_pike[354077]:             "type": "block",
Jan 20 15:11:22 compute-0 dazzling_pike[354077]:             "vg_name": "ceph_vg0"
Jan 20 15:11:22 compute-0 dazzling_pike[354077]:         }
Jan 20 15:11:22 compute-0 dazzling_pike[354077]:     ]
Jan 20 15:11:22 compute-0 dazzling_pike[354077]: }
Jan 20 15:11:22 compute-0 systemd[1]: libpod-49bb0957ac7f1b7f34c80f3d00f3167d0c9e1bd68c324bac7db003f6c02aedb2.scope: Deactivated successfully.
Jan 20 15:11:22 compute-0 podman[354061]: 2026-01-20 15:11:22.789065182 +0000 UTC m=+0.850594675 container died 49bb0957ac7f1b7f34c80f3d00f3167d0c9e1bd68c324bac7db003f6c02aedb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_pike, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 15:11:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-8654a161ab9800816fc0ebac81a013c2c3ed63824306d904a8acefe12ea508a4-merged.mount: Deactivated successfully.
Jan 20 15:11:22 compute-0 podman[354061]: 2026-01-20 15:11:22.841280971 +0000 UTC m=+0.902810464 container remove 49bb0957ac7f1b7f34c80f3d00f3167d0c9e1bd68c324bac7db003f6c02aedb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_pike, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 20 15:11:22 compute-0 systemd[1]: libpod-conmon-49bb0957ac7f1b7f34c80f3d00f3167d0c9e1bd68c324bac7db003f6c02aedb2.scope: Deactivated successfully.
Jan 20 15:11:22 compute-0 sudo[353955]: pam_unix(sudo:session): session closed for user root
Jan 20 15:11:22 compute-0 sudo[354097]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:11:22 compute-0 sudo[354097]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:11:22 compute-0 sudo[354097]: pam_unix(sudo:session): session closed for user root
Jan 20 15:11:22 compute-0 sudo[354122]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:11:22 compute-0 sudo[354122]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:11:22 compute-0 sudo[354122]: pam_unix(sudo:session): session closed for user root
Jan 20 15:11:22 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:11:22 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:11:22 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:11:22.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:11:23 compute-0 sudo[354147]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:11:23 compute-0 sudo[354147]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:11:23 compute-0 sudo[354147]: pam_unix(sudo:session): session closed for user root
Jan 20 15:11:23 compute-0 sudo[354172]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- raw list --format json
Jan 20 15:11:23 compute-0 sudo[354172]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:11:23 compute-0 podman[354238]: 2026-01-20 15:11:23.348252336 +0000 UTC m=+0.020237697 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:11:23 compute-0 podman[354238]: 2026-01-20 15:11:23.927863051 +0000 UTC m=+0.599848392 container create af61a79e4359f1be0daf4651d7ab1d6fc080ded5b9f30b9f8f34f628883bb6c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_zhukovsky, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 15:11:23 compute-0 ceph-mon[74360]: pgmap v2647: 321 pgs: 321 active+clean; 226 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 960 KiB/s wr, 169 op/s
Jan 20 15:11:23 compute-0 systemd[1]: Started libpod-conmon-af61a79e4359f1be0daf4651d7ab1d6fc080ded5b9f30b9f8f34f628883bb6c0.scope.
Jan 20 15:11:23 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:11:24 compute-0 podman[354238]: 2026-01-20 15:11:24.010895591 +0000 UTC m=+0.682880932 container init af61a79e4359f1be0daf4651d7ab1d6fc080ded5b9f30b9f8f34f628883bb6c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_zhukovsky, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 20 15:11:24 compute-0 podman[354238]: 2026-01-20 15:11:24.016236945 +0000 UTC m=+0.688222266 container start af61a79e4359f1be0daf4651d7ab1d6fc080ded5b9f30b9f8f34f628883bb6c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_zhukovsky, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 15:11:24 compute-0 podman[354238]: 2026-01-20 15:11:24.019178835 +0000 UTC m=+0.691164166 container attach af61a79e4359f1be0daf4651d7ab1d6fc080ded5b9f30b9f8f34f628883bb6c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_zhukovsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:11:24 compute-0 systemd[1]: libpod-af61a79e4359f1be0daf4651d7ab1d6fc080ded5b9f30b9f8f34f628883bb6c0.scope: Deactivated successfully.
Jan 20 15:11:24 compute-0 elated_zhukovsky[354255]: 167 167
Jan 20 15:11:24 compute-0 conmon[354255]: conmon af61a79e4359f1be0daf <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-af61a79e4359f1be0daf4651d7ab1d6fc080ded5b9f30b9f8f34f628883bb6c0.scope/container/memory.events
Jan 20 15:11:24 compute-0 podman[354238]: 2026-01-20 15:11:24.022141714 +0000 UTC m=+0.694127055 container died af61a79e4359f1be0daf4651d7ab1d6fc080ded5b9f30b9f8f34f628883bb6c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_zhukovsky, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 15:11:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-1215ac34a4cdb3135009113ced87c3d7f66108c92057966a29da8e33c02ee9b9-merged.mount: Deactivated successfully.
Jan 20 15:11:24 compute-0 podman[354238]: 2026-01-20 15:11:24.054801815 +0000 UTC m=+0.726787146 container remove af61a79e4359f1be0daf4651d7ab1d6fc080ded5b9f30b9f8f34f628883bb6c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_zhukovsky, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 20 15:11:24 compute-0 systemd[1]: libpod-conmon-af61a79e4359f1be0daf4651d7ab1d6fc080ded5b9f30b9f8f34f628883bb6c0.scope: Deactivated successfully.
Jan 20 15:11:24 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:11:24 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:11:24 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:11:24.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:11:24 compute-0 podman[354278]: 2026-01-20 15:11:24.206273451 +0000 UTC m=+0.045402375 container create aac330f599dc214b35795f4770076e6eb03fee400331458f6e453c7ec5d09d0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_swirles, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 20 15:11:24 compute-0 systemd[1]: Started libpod-conmon-aac330f599dc214b35795f4770076e6eb03fee400331458f6e453c7ec5d09d0b.scope.
Jan 20 15:11:24 compute-0 podman[354278]: 2026-01-20 15:11:24.186032365 +0000 UTC m=+0.025161289 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:11:24 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:11:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ee21a0b69c023d34135a1442bf9a6fe5ac4fa625531ca85be2acd6623394c94/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 15:11:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ee21a0b69c023d34135a1442bf9a6fe5ac4fa625531ca85be2acd6623394c94/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 15:11:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ee21a0b69c023d34135a1442bf9a6fe5ac4fa625531ca85be2acd6623394c94/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 15:11:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ee21a0b69c023d34135a1442bf9a6fe5ac4fa625531ca85be2acd6623394c94/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 15:11:24 compute-0 podman[354278]: 2026-01-20 15:11:24.30518309 +0000 UTC m=+0.144312014 container init aac330f599dc214b35795f4770076e6eb03fee400331458f6e453c7ec5d09d0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_swirles, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 20 15:11:24 compute-0 podman[354278]: 2026-01-20 15:11:24.31298322 +0000 UTC m=+0.152112124 container start aac330f599dc214b35795f4770076e6eb03fee400331458f6e453c7ec5d09d0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_swirles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True)
Jan 20 15:11:24 compute-0 podman[354278]: 2026-01-20 15:11:24.316092584 +0000 UTC m=+0.155221488 container attach aac330f599dc214b35795f4770076e6eb03fee400331458f6e453c7ec5d09d0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_swirles, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 20 15:11:24 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2648: 321 pgs: 321 active+clean; 201 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 428 KiB/s wr, 154 op/s
Jan 20 15:11:24 compute-0 ceph-mon[74360]: pgmap v2648: 321 pgs: 321 active+clean; 201 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 428 KiB/s wr, 154 op/s
Jan 20 15:11:24 compute-0 nova_compute[250018]: 2026-01-20 15:11:24.989 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:11:24 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:11:24 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:11:24 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:11:24.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:11:25 compute-0 modest_swirles[354296]: {
Jan 20 15:11:25 compute-0 modest_swirles[354296]:     "afc3740a-ccee-46f8-b343-22648cc89376": {
Jan 20 15:11:25 compute-0 modest_swirles[354296]:         "ceph_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 15:11:25 compute-0 modest_swirles[354296]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 20 15:11:25 compute-0 modest_swirles[354296]:         "osd_id": 0,
Jan 20 15:11:25 compute-0 modest_swirles[354296]:         "osd_uuid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 15:11:25 compute-0 modest_swirles[354296]:         "type": "bluestore"
Jan 20 15:11:25 compute-0 modest_swirles[354296]:     }
Jan 20 15:11:25 compute-0 modest_swirles[354296]: }
Jan 20 15:11:25 compute-0 systemd[1]: libpod-aac330f599dc214b35795f4770076e6eb03fee400331458f6e453c7ec5d09d0b.scope: Deactivated successfully.
Jan 20 15:11:25 compute-0 podman[354278]: 2026-01-20 15:11:25.136063853 +0000 UTC m=+0.975192777 container died aac330f599dc214b35795f4770076e6eb03fee400331458f6e453c7ec5d09d0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_swirles, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 20 15:11:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-8ee21a0b69c023d34135a1442bf9a6fe5ac4fa625531ca85be2acd6623394c94-merged.mount: Deactivated successfully.
Jan 20 15:11:25 compute-0 podman[354278]: 2026-01-20 15:11:25.187168761 +0000 UTC m=+1.026297665 container remove aac330f599dc214b35795f4770076e6eb03fee400331458f6e453c7ec5d09d0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_swirles, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 20 15:11:25 compute-0 systemd[1]: libpod-conmon-aac330f599dc214b35795f4770076e6eb03fee400331458f6e453c7ec5d09d0b.scope: Deactivated successfully.
Jan 20 15:11:25 compute-0 sudo[354172]: pam_unix(sudo:session): session closed for user root
Jan 20 15:11:25 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 15:11:25 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:11:25 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 15:11:25 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:11:25 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 5898984a-66ed-47de-a1f3-6a39d0e0a3b6 does not exist
Jan 20 15:11:25 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 98469ade-58a5-4fc2-9c1b-ebaa46ffffb6 does not exist
Jan 20 15:11:25 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev b8dc9996-f2cd-4ff4-9fd1-c217c3dbf17d does not exist
Jan 20 15:11:25 compute-0 sudo[354329]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:11:25 compute-0 sudo[354329]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:11:25 compute-0 sudo[354329]: pam_unix(sudo:session): session closed for user root
Jan 20 15:11:25 compute-0 nova_compute[250018]: 2026-01-20 15:11:25.310 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:11:25 compute-0 sudo[354354]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 15:11:25 compute-0 sudo[354354]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:11:25 compute-0 sudo[354354]: pam_unix(sudo:session): session closed for user root
Jan 20 15:11:25 compute-0 nova_compute[250018]: 2026-01-20 15:11:25.582 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:11:26 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:11:26 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:11:26 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:11:26.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:11:26 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2649: 321 pgs: 321 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 22 KiB/s wr, 216 op/s
Jan 20 15:11:26 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:11:26 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:11:26 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e389 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:11:26 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e389 do_prune osdmap full prune enabled
Jan 20 15:11:26 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e390 e390: 3 total, 3 up, 3 in
Jan 20 15:11:26 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e390: 3 total, 3 up, 3 in
Jan 20 15:11:26 compute-0 nova_compute[250018]: 2026-01-20 15:11:26.641 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:11:27 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:11:27 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:11:27 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:11:26.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:11:27 compute-0 ceph-mon[74360]: pgmap v2649: 321 pgs: 321 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 22 KiB/s wr, 216 op/s
Jan 20 15:11:27 compute-0 ceph-mon[74360]: osdmap e390: 3 total, 3 up, 3 in
Jan 20 15:11:28 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:11:28 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:11:28 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:11:28.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:11:28 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2651: 321 pgs: 321 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 26 KiB/s wr, 167 op/s
Jan 20 15:11:29 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:11:29 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:11:29 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:11:29.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:11:29 compute-0 podman[354383]: 2026-01-20 15:11:29.458100251 +0000 UTC m=+0.051942742 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 20 15:11:29 compute-0 podman[354382]: 2026-01-20 15:11:29.489665563 +0000 UTC m=+0.083183715 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Jan 20 15:11:29 compute-0 ceph-mon[74360]: pgmap v2651: 321 pgs: 321 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 26 KiB/s wr, 167 op/s
Jan 20 15:11:29 compute-0 nova_compute[250018]: 2026-01-20 15:11:29.991 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:11:30 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:11:30 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:11:30 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:11:30.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:11:30 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2652: 321 pgs: 321 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 20 KiB/s wr, 141 op/s
Jan 20 15:11:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:11:30.781 160071 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:11:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:11:30.781 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:11:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:11:30.781 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:11:31 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:11:31 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:11:31 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:11:31.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:11:31 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e390 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:11:31 compute-0 nova_compute[250018]: 2026-01-20 15:11:31.611 250022 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768921876.6098228, 7f408ee5-e644-4a37-9cd2-3db94e56c638 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:11:31 compute-0 nova_compute[250018]: 2026-01-20 15:11:31.611 250022 INFO nova.compute.manager [-] [instance: 7f408ee5-e644-4a37-9cd2-3db94e56c638] VM Stopped (Lifecycle Event)
Jan 20 15:11:31 compute-0 nova_compute[250018]: 2026-01-20 15:11:31.628 250022 DEBUG nova.compute.manager [None req-2f7d0179-0d9e-4c83-8bd5-c7b1b0a6f6ee - - - - - -] [instance: 7f408ee5-e644-4a37-9cd2-3db94e56c638] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:11:31 compute-0 nova_compute[250018]: 2026-01-20 15:11:31.642 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:11:31 compute-0 ceph-mon[74360]: pgmap v2652: 321 pgs: 321 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 20 KiB/s wr, 141 op/s
Jan 20 15:11:32 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:11:32 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:11:32 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:11:32.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:11:32 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2653: 321 pgs: 321 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 19 KiB/s wr, 133 op/s
Jan 20 15:11:32 compute-0 ceph-mon[74360]: pgmap v2653: 321 pgs: 321 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 19 KiB/s wr, 133 op/s
Jan 20 15:11:33 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:11:33 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:11:33 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:11:33.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:11:34 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:11:34 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:11:34 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:11:34.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:11:34 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2654: 321 pgs: 321 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.5 KiB/s wr, 128 op/s
Jan 20 15:11:34 compute-0 ceph-mon[74360]: pgmap v2654: 321 pgs: 321 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.5 KiB/s wr, 128 op/s
Jan 20 15:11:34 compute-0 nova_compute[250018]: 2026-01-20 15:11:34.994 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:11:35 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:11:35 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:11:35 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:11:35.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:11:35 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/1087186560' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:11:36 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:11:36 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:11:36 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:11:36.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:11:36 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2655: 321 pgs: 321 active+clean; 172 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 496 KiB/s rd, 847 KiB/s wr, 28 op/s
Jan 20 15:11:36 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e390 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:11:36 compute-0 nova_compute[250018]: 2026-01-20 15:11:36.646 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:11:36 compute-0 ceph-mon[74360]: pgmap v2655: 321 pgs: 321 active+clean; 172 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 496 KiB/s rd, 847 KiB/s wr, 28 op/s
Jan 20 15:11:37 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:11:37 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:11:37 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:11:37.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:11:37 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/1084543734' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:11:37 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/3661868601' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:11:37 compute-0 sshd-session[354430]: Connection closed by authenticating user root 134.122.57.138 port 36230 [preauth]
Jan 20 15:11:38 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:11:38 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:11:38 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:11:38.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:11:38 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2656: 321 pgs: 321 active+clean; 172 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 418 KiB/s rd, 713 KiB/s wr, 23 op/s
Jan 20 15:11:38 compute-0 ceph-mon[74360]: pgmap v2656: 321 pgs: 321 active+clean; 172 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 418 KiB/s rd, 713 KiB/s wr, 23 op/s
Jan 20 15:11:39 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:11:39 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:11:39 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:11:39.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:11:39 compute-0 sudo[354434]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:11:39 compute-0 sudo[354434]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:11:39 compute-0 sudo[354434]: pam_unix(sudo:session): session closed for user root
Jan 20 15:11:39 compute-0 sudo[354459]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:11:39 compute-0 sudo[354459]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:11:39 compute-0 sudo[354459]: pam_unix(sudo:session): session closed for user root
Jan 20 15:11:39 compute-0 nova_compute[250018]: 2026-01-20 15:11:39.996 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:11:40 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:11:40 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:11:40 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:11:40.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:11:40 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2657: 321 pgs: 321 active+clean; 217 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 3.1 MiB/s wr, 128 op/s
Jan 20 15:11:41 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:11:41 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:11:41 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:11:41.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:11:41 compute-0 ceph-mon[74360]: pgmap v2657: 321 pgs: 321 active+clean; 217 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 3.1 MiB/s wr, 128 op/s
Jan 20 15:11:41 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e390 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:11:41 compute-0 nova_compute[250018]: 2026-01-20 15:11:41.650 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:11:42 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:11:42 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:11:42 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:11:42.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:11:42 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2658: 321 pgs: 321 active+clean; 246 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.9 MiB/s wr, 156 op/s
Jan 20 15:11:43 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:11:43 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:11:43 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:11:43.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:11:43 compute-0 ceph-mon[74360]: pgmap v2658: 321 pgs: 321 active+clean; 246 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.9 MiB/s wr, 156 op/s
Jan 20 15:11:44 compute-0 nova_compute[250018]: 2026-01-20 15:11:44.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:11:44 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:11:44 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:11:44 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:11:44.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:11:44 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2659: 321 pgs: 321 active+clean; 246 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 164 op/s
Jan 20 15:11:45 compute-0 nova_compute[250018]: 2026-01-20 15:11:45.016 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:11:45 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:11:45 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:11:45 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:11:45.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:11:45 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e390 do_prune osdmap full prune enabled
Jan 20 15:11:45 compute-0 ceph-mon[74360]: pgmap v2659: 321 pgs: 321 active+clean; 246 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 164 op/s
Jan 20 15:11:45 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e391 e391: 3 total, 3 up, 3 in
Jan 20 15:11:45 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e391: 3 total, 3 up, 3 in
Jan 20 15:11:46 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:11:46 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:11:46 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:11:46.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:11:46 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2661: 321 pgs: 321 active+clean; 246 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 3.9 MiB/s wr, 189 op/s
Jan 20 15:11:46 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e391 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:11:46 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e391 do_prune osdmap full prune enabled
Jan 20 15:11:46 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e392 e392: 3 total, 3 up, 3 in
Jan 20 15:11:46 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e392: 3 total, 3 up, 3 in
Jan 20 15:11:46 compute-0 ceph-mon[74360]: osdmap e391: 3 total, 3 up, 3 in
Jan 20 15:11:46 compute-0 nova_compute[250018]: 2026-01-20 15:11:46.652 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:11:47 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:11:47 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:11:47 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:11:47.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:11:47 compute-0 ceph-mon[74360]: pgmap v2661: 321 pgs: 321 active+clean; 246 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 3.9 MiB/s wr, 189 op/s
Jan 20 15:11:47 compute-0 ceph-mon[74360]: osdmap e392: 3 total, 3 up, 3 in
Jan 20 15:11:48 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:11:48 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:11:48 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:11:48.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:11:48 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2663: 321 pgs: 321 active+clean; 246 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 1.2 MiB/s wr, 79 op/s
Jan 20 15:11:48 compute-0 ceph-mon[74360]: pgmap v2663: 321 pgs: 321 active+clean; 246 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 1.2 MiB/s wr, 79 op/s
Jan 20 15:11:49 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:11:49 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:11:49 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:11:49.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:11:50 compute-0 nova_compute[250018]: 2026-01-20 15:11:50.017 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:11:50 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:11:50 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:11:50 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:11:50.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:11:50 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2664: 321 pgs: 321 active+clean; 249 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 435 KiB/s rd, 414 KiB/s wr, 45 op/s
Jan 20 15:11:50 compute-0 nova_compute[250018]: 2026-01-20 15:11:50.882 250022 DEBUG oslo_concurrency.lockutils [None req-40d4b039-0274-4cc5-86a5-aff9f7e1a487 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Acquiring lock "f3b0f200-2f57-4c25-bdf4-8d17165642fe" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:11:50 compute-0 nova_compute[250018]: 2026-01-20 15:11:50.883 250022 DEBUG oslo_concurrency.lockutils [None req-40d4b039-0274-4cc5-86a5-aff9f7e1a487 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lock "f3b0f200-2f57-4c25-bdf4-8d17165642fe" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:11:50 compute-0 nova_compute[250018]: 2026-01-20 15:11:50.901 250022 DEBUG nova.compute.manager [None req-40d4b039-0274-4cc5-86a5-aff9f7e1a487 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: f3b0f200-2f57-4c25-bdf4-8d17165642fe] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 15:11:50 compute-0 nova_compute[250018]: 2026-01-20 15:11:50.990 250022 DEBUG oslo_concurrency.lockutils [None req-40d4b039-0274-4cc5-86a5-aff9f7e1a487 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:11:50 compute-0 nova_compute[250018]: 2026-01-20 15:11:50.991 250022 DEBUG oslo_concurrency.lockutils [None req-40d4b039-0274-4cc5-86a5-aff9f7e1a487 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:11:51 compute-0 nova_compute[250018]: 2026-01-20 15:11:51.000 250022 DEBUG nova.virt.hardware [None req-40d4b039-0274-4cc5-86a5-aff9f7e1a487 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 15:11:51 compute-0 nova_compute[250018]: 2026-01-20 15:11:51.001 250022 INFO nova.compute.claims [None req-40d4b039-0274-4cc5-86a5-aff9f7e1a487 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: f3b0f200-2f57-4c25-bdf4-8d17165642fe] Claim successful on node compute-0.ctlplane.example.com
Jan 20 15:11:51 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:11:51 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:11:51 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:11:51.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:11:51 compute-0 nova_compute[250018]: 2026-01-20 15:11:51.132 250022 DEBUG oslo_concurrency.processutils [None req-40d4b039-0274-4cc5-86a5-aff9f7e1a487 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:11:51 compute-0 ceph-mon[74360]: pgmap v2664: 321 pgs: 321 active+clean; 249 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 435 KiB/s rd, 414 KiB/s wr, 45 op/s
Jan 20 15:11:51 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:11:51 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3676783533' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:11:51 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e392 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:11:51 compute-0 nova_compute[250018]: 2026-01-20 15:11:51.588 250022 DEBUG oslo_concurrency.processutils [None req-40d4b039-0274-4cc5-86a5-aff9f7e1a487 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:11:51 compute-0 nova_compute[250018]: 2026-01-20 15:11:51.596 250022 DEBUG nova.compute.provider_tree [None req-40d4b039-0274-4cc5-86a5-aff9f7e1a487 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:11:51 compute-0 nova_compute[250018]: 2026-01-20 15:11:51.629 250022 DEBUG nova.scheduler.client.report [None req-40d4b039-0274-4cc5-86a5-aff9f7e1a487 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:11:51 compute-0 nova_compute[250018]: 2026-01-20 15:11:51.654 250022 DEBUG oslo_concurrency.lockutils [None req-40d4b039-0274-4cc5-86a5-aff9f7e1a487 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.664s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:11:51 compute-0 nova_compute[250018]: 2026-01-20 15:11:51.655 250022 DEBUG nova.compute.manager [None req-40d4b039-0274-4cc5-86a5-aff9f7e1a487 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: f3b0f200-2f57-4c25-bdf4-8d17165642fe] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 15:11:51 compute-0 nova_compute[250018]: 2026-01-20 15:11:51.657 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:11:51 compute-0 nova_compute[250018]: 2026-01-20 15:11:51.725 250022 DEBUG nova.compute.manager [None req-40d4b039-0274-4cc5-86a5-aff9f7e1a487 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: f3b0f200-2f57-4c25-bdf4-8d17165642fe] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 20 15:11:51 compute-0 nova_compute[250018]: 2026-01-20 15:11:51.726 250022 DEBUG nova.network.neutron [None req-40d4b039-0274-4cc5-86a5-aff9f7e1a487 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: f3b0f200-2f57-4c25-bdf4-8d17165642fe] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 15:11:51 compute-0 nova_compute[250018]: 2026-01-20 15:11:51.761 250022 INFO nova.virt.libvirt.driver [None req-40d4b039-0274-4cc5-86a5-aff9f7e1a487 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: f3b0f200-2f57-4c25-bdf4-8d17165642fe] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 15:11:51 compute-0 nova_compute[250018]: 2026-01-20 15:11:51.785 250022 DEBUG nova.compute.manager [None req-40d4b039-0274-4cc5-86a5-aff9f7e1a487 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: f3b0f200-2f57-4c25-bdf4-8d17165642fe] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 15:11:51 compute-0 nova_compute[250018]: 2026-01-20 15:11:51.902 250022 DEBUG nova.compute.manager [None req-40d4b039-0274-4cc5-86a5-aff9f7e1a487 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: f3b0f200-2f57-4c25-bdf4-8d17165642fe] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 15:11:51 compute-0 nova_compute[250018]: 2026-01-20 15:11:51.903 250022 DEBUG nova.virt.libvirt.driver [None req-40d4b039-0274-4cc5-86a5-aff9f7e1a487 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: f3b0f200-2f57-4c25-bdf4-8d17165642fe] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 15:11:51 compute-0 nova_compute[250018]: 2026-01-20 15:11:51.904 250022 INFO nova.virt.libvirt.driver [None req-40d4b039-0274-4cc5-86a5-aff9f7e1a487 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: f3b0f200-2f57-4c25-bdf4-8d17165642fe] Creating image(s)
Jan 20 15:11:51 compute-0 nova_compute[250018]: 2026-01-20 15:11:51.929 250022 DEBUG nova.storage.rbd_utils [None req-40d4b039-0274-4cc5-86a5-aff9f7e1a487 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] rbd image f3b0f200-2f57-4c25-bdf4-8d17165642fe_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:11:51 compute-0 nova_compute[250018]: 2026-01-20 15:11:51.957 250022 DEBUG nova.storage.rbd_utils [None req-40d4b039-0274-4cc5-86a5-aff9f7e1a487 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] rbd image f3b0f200-2f57-4c25-bdf4-8d17165642fe_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:11:51 compute-0 nova_compute[250018]: 2026-01-20 15:11:51.984 250022 DEBUG nova.storage.rbd_utils [None req-40d4b039-0274-4cc5-86a5-aff9f7e1a487 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] rbd image f3b0f200-2f57-4c25-bdf4-8d17165642fe_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:11:51 compute-0 nova_compute[250018]: 2026-01-20 15:11:51.988 250022 DEBUG oslo_concurrency.processutils [None req-40d4b039-0274-4cc5-86a5-aff9f7e1a487 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:11:52 compute-0 nova_compute[250018]: 2026-01-20 15:11:52.019 250022 DEBUG nova.policy [None req-40d4b039-0274-4cc5-86a5-aff9f7e1a487 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '442a7a5cb8ea426a82be9762b262d171', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1ed5feeeafe7448a8efb47ab975b0ead', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 20 15:11:52 compute-0 nova_compute[250018]: 2026-01-20 15:11:52.055 250022 DEBUG oslo_concurrency.processutils [None req-40d4b039-0274-4cc5-86a5-aff9f7e1a487 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:11:52 compute-0 nova_compute[250018]: 2026-01-20 15:11:52.055 250022 DEBUG oslo_concurrency.lockutils [None req-40d4b039-0274-4cc5-86a5-aff9f7e1a487 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Acquiring lock "82d5c1918fd7c974214c7a48c1793a7a82560462" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:11:52 compute-0 nova_compute[250018]: 2026-01-20 15:11:52.056 250022 DEBUG oslo_concurrency.lockutils [None req-40d4b039-0274-4cc5-86a5-aff9f7e1a487 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:11:52 compute-0 nova_compute[250018]: 2026-01-20 15:11:52.056 250022 DEBUG oslo_concurrency.lockutils [None req-40d4b039-0274-4cc5-86a5-aff9f7e1a487 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:11:52 compute-0 nova_compute[250018]: 2026-01-20 15:11:52.082 250022 DEBUG nova.storage.rbd_utils [None req-40d4b039-0274-4cc5-86a5-aff9f7e1a487 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] rbd image f3b0f200-2f57-4c25-bdf4-8d17165642fe_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:11:52 compute-0 nova_compute[250018]: 2026-01-20 15:11:52.085 250022 DEBUG oslo_concurrency.processutils [None req-40d4b039-0274-4cc5-86a5-aff9f7e1a487 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 f3b0f200-2f57-4c25-bdf4-8d17165642fe_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:11:52 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:11:52 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:11:52 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:11:52.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:11:52 compute-0 nova_compute[250018]: 2026-01-20 15:11:52.406 250022 DEBUG oslo_concurrency.processutils [None req-40d4b039-0274-4cc5-86a5-aff9f7e1a487 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 f3b0f200-2f57-4c25-bdf4-8d17165642fe_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.321s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:11:52 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2665: 321 pgs: 321 active+clean; 260 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 143 KiB/s rd, 1.9 MiB/s wr, 54 op/s
Jan 20 15:11:52 compute-0 nova_compute[250018]: 2026-01-20 15:11:52.481 250022 DEBUG nova.storage.rbd_utils [None req-40d4b039-0274-4cc5-86a5-aff9f7e1a487 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] resizing rbd image f3b0f200-2f57-4c25-bdf4-8d17165642fe_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 20 15:11:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:11:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:11:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:11:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:11:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:11:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:11:52 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3676783533' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:11:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Optimize plan auto_2026-01-20_15:11:52
Jan 20 15:11:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 15:11:52 compute-0 ceph-mgr[74653]: [balancer INFO root] do_upmap
Jan 20 15:11:52 compute-0 ceph-mgr[74653]: [balancer INFO root] pools ['images', 'backups', '.rgw.root', 'cephfs.cephfs.data', 'volumes', 'vms', 'default.rgw.meta', 'default.rgw.control', 'default.rgw.log', '.mgr', 'cephfs.cephfs.meta']
Jan 20 15:11:52 compute-0 ceph-mgr[74653]: [balancer INFO root] prepared 0/10 changes
Jan 20 15:11:52 compute-0 nova_compute[250018]: 2026-01-20 15:11:52.620 250022 DEBUG nova.objects.instance [None req-40d4b039-0274-4cc5-86a5-aff9f7e1a487 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lazy-loading 'migration_context' on Instance uuid f3b0f200-2f57-4c25-bdf4-8d17165642fe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:11:52 compute-0 nova_compute[250018]: 2026-01-20 15:11:52.639 250022 DEBUG nova.virt.libvirt.driver [None req-40d4b039-0274-4cc5-86a5-aff9f7e1a487 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: f3b0f200-2f57-4c25-bdf4-8d17165642fe] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 15:11:52 compute-0 nova_compute[250018]: 2026-01-20 15:11:52.640 250022 DEBUG nova.virt.libvirt.driver [None req-40d4b039-0274-4cc5-86a5-aff9f7e1a487 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: f3b0f200-2f57-4c25-bdf4-8d17165642fe] Ensure instance console log exists: /var/lib/nova/instances/f3b0f200-2f57-4c25-bdf4-8d17165642fe/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 15:11:52 compute-0 nova_compute[250018]: 2026-01-20 15:11:52.640 250022 DEBUG oslo_concurrency.lockutils [None req-40d4b039-0274-4cc5-86a5-aff9f7e1a487 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:11:52 compute-0 nova_compute[250018]: 2026-01-20 15:11:52.641 250022 DEBUG oslo_concurrency.lockutils [None req-40d4b039-0274-4cc5-86a5-aff9f7e1a487 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:11:52 compute-0 nova_compute[250018]: 2026-01-20 15:11:52.641 250022 DEBUG oslo_concurrency.lockutils [None req-40d4b039-0274-4cc5-86a5-aff9f7e1a487 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:11:53 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:11:53 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:11:53 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:11:53.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:11:53 compute-0 nova_compute[250018]: 2026-01-20 15:11:53.082 250022 DEBUG oslo_concurrency.lockutils [None req-369fd57f-2b01-47ff-b473-c364dcb82184 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Acquiring lock "cbccef94-6ebf-4050-9b57-31486efe9e8f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:11:53 compute-0 nova_compute[250018]: 2026-01-20 15:11:53.083 250022 DEBUG oslo_concurrency.lockutils [None req-369fd57f-2b01-47ff-b473-c364dcb82184 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Lock "cbccef94-6ebf-4050-9b57-31486efe9e8f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:11:53 compute-0 nova_compute[250018]: 2026-01-20 15:11:53.108 250022 DEBUG nova.compute.manager [None req-369fd57f-2b01-47ff-b473-c364dcb82184 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: cbccef94-6ebf-4050-9b57-31486efe9e8f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 15:11:53 compute-0 nova_compute[250018]: 2026-01-20 15:11:53.178 250022 DEBUG oslo_concurrency.lockutils [None req-369fd57f-2b01-47ff-b473-c364dcb82184 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:11:53 compute-0 nova_compute[250018]: 2026-01-20 15:11:53.178 250022 DEBUG oslo_concurrency.lockutils [None req-369fd57f-2b01-47ff-b473-c364dcb82184 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:11:53 compute-0 nova_compute[250018]: 2026-01-20 15:11:53.186 250022 DEBUG nova.virt.hardware [None req-369fd57f-2b01-47ff-b473-c364dcb82184 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 15:11:53 compute-0 nova_compute[250018]: 2026-01-20 15:11:53.186 250022 INFO nova.compute.claims [None req-369fd57f-2b01-47ff-b473-c364dcb82184 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: cbccef94-6ebf-4050-9b57-31486efe9e8f] Claim successful on node compute-0.ctlplane.example.com
Jan 20 15:11:53 compute-0 nova_compute[250018]: 2026-01-20 15:11:53.301 250022 DEBUG oslo_concurrency.processutils [None req-369fd57f-2b01-47ff-b473-c364dcb82184 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:11:53 compute-0 ceph-mon[74360]: pgmap v2665: 321 pgs: 321 active+clean; 260 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 143 KiB/s rd, 1.9 MiB/s wr, 54 op/s
Jan 20 15:11:53 compute-0 nova_compute[250018]: 2026-01-20 15:11:53.686 250022 DEBUG nova.network.neutron [None req-40d4b039-0274-4cc5-86a5-aff9f7e1a487 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: f3b0f200-2f57-4c25-bdf4-8d17165642fe] Successfully created port: 25ad2c72-7d4d-4eb9-bf00-a5c42aa9d301 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 20 15:11:53 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:11:53 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/803720983' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:11:53 compute-0 nova_compute[250018]: 2026-01-20 15:11:53.752 250022 DEBUG oslo_concurrency.processutils [None req-369fd57f-2b01-47ff-b473-c364dcb82184 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:11:53 compute-0 nova_compute[250018]: 2026-01-20 15:11:53.758 250022 DEBUG nova.compute.provider_tree [None req-369fd57f-2b01-47ff-b473-c364dcb82184 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:11:53 compute-0 nova_compute[250018]: 2026-01-20 15:11:53.784 250022 DEBUG nova.scheduler.client.report [None req-369fd57f-2b01-47ff-b473-c364dcb82184 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:11:53 compute-0 nova_compute[250018]: 2026-01-20 15:11:53.801 250022 DEBUG oslo_concurrency.lockutils [None req-369fd57f-2b01-47ff-b473-c364dcb82184 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.623s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:11:53 compute-0 nova_compute[250018]: 2026-01-20 15:11:53.802 250022 DEBUG nova.compute.manager [None req-369fd57f-2b01-47ff-b473-c364dcb82184 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: cbccef94-6ebf-4050-9b57-31486efe9e8f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 15:11:53 compute-0 nova_compute[250018]: 2026-01-20 15:11:53.864 250022 INFO nova.virt.libvirt.driver [None req-369fd57f-2b01-47ff-b473-c364dcb82184 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: cbccef94-6ebf-4050-9b57-31486efe9e8f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 15:11:53 compute-0 nova_compute[250018]: 2026-01-20 15:11:53.867 250022 DEBUG nova.compute.manager [None req-369fd57f-2b01-47ff-b473-c364dcb82184 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: cbccef94-6ebf-4050-9b57-31486efe9e8f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 20 15:11:53 compute-0 nova_compute[250018]: 2026-01-20 15:11:53.867 250022 DEBUG nova.network.neutron [None req-369fd57f-2b01-47ff-b473-c364dcb82184 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: cbccef94-6ebf-4050-9b57-31486efe9e8f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 15:11:53 compute-0 nova_compute[250018]: 2026-01-20 15:11:53.923 250022 DEBUG nova.compute.manager [None req-369fd57f-2b01-47ff-b473-c364dcb82184 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: cbccef94-6ebf-4050-9b57-31486efe9e8f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 15:11:53 compute-0 nova_compute[250018]: 2026-01-20 15:11:53.983 250022 INFO nova.virt.block_device [None req-369fd57f-2b01-47ff-b473-c364dcb82184 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: cbccef94-6ebf-4050-9b57-31486efe9e8f] Booting with volume snapshot ac531e53-ece8-4166-8e41-ebb939119e9b at /dev/vda
Jan 20 15:11:54 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:11:54 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:11:54 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:11:54.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:11:54 compute-0 nova_compute[250018]: 2026-01-20 15:11:54.144 250022 DEBUG nova.policy [None req-369fd57f-2b01-47ff-b473-c364dcb82184 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'bf422e55e158420cbdae75f07a3bb97a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a49638950e1543fa8e0d251af5479623', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 20 15:11:54 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2666: 321 pgs: 321 active+clean; 276 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 158 KiB/s rd, 2.5 MiB/s wr, 64 op/s
Jan 20 15:11:54 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/803720983' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:11:55 compute-0 nova_compute[250018]: 2026-01-20 15:11:55.020 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:11:55 compute-0 nova_compute[250018]: 2026-01-20 15:11:55.047 250022 DEBUG nova.network.neutron [None req-40d4b039-0274-4cc5-86a5-aff9f7e1a487 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: f3b0f200-2f57-4c25-bdf4-8d17165642fe] Successfully updated port: 25ad2c72-7d4d-4eb9-bf00-a5c42aa9d301 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 20 15:11:55 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:11:55 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:11:55 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:11:55.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:11:55 compute-0 nova_compute[250018]: 2026-01-20 15:11:55.069 250022 DEBUG oslo_concurrency.lockutils [None req-40d4b039-0274-4cc5-86a5-aff9f7e1a487 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Acquiring lock "refresh_cache-f3b0f200-2f57-4c25-bdf4-8d17165642fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:11:55 compute-0 nova_compute[250018]: 2026-01-20 15:11:55.069 250022 DEBUG oslo_concurrency.lockutils [None req-40d4b039-0274-4cc5-86a5-aff9f7e1a487 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Acquired lock "refresh_cache-f3b0f200-2f57-4c25-bdf4-8d17165642fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:11:55 compute-0 nova_compute[250018]: 2026-01-20 15:11:55.070 250022 DEBUG nova.network.neutron [None req-40d4b039-0274-4cc5-86a5-aff9f7e1a487 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: f3b0f200-2f57-4c25-bdf4-8d17165642fe] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 15:11:55 compute-0 nova_compute[250018]: 2026-01-20 15:11:55.241 250022 DEBUG nova.compute.manager [req-eec3ca46-a784-4017-b883-abcdac8f9a63 req-04f1dce5-76b8-4071-a86e-7dc0b4da806c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f3b0f200-2f57-4c25-bdf4-8d17165642fe] Received event network-changed-25ad2c72-7d4d-4eb9-bf00-a5c42aa9d301 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:11:55 compute-0 nova_compute[250018]: 2026-01-20 15:11:55.241 250022 DEBUG nova.compute.manager [req-eec3ca46-a784-4017-b883-abcdac8f9a63 req-04f1dce5-76b8-4071-a86e-7dc0b4da806c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f3b0f200-2f57-4c25-bdf4-8d17165642fe] Refreshing instance network info cache due to event network-changed-25ad2c72-7d4d-4eb9-bf00-a5c42aa9d301. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 15:11:55 compute-0 nova_compute[250018]: 2026-01-20 15:11:55.241 250022 DEBUG oslo_concurrency.lockutils [req-eec3ca46-a784-4017-b883-abcdac8f9a63 req-04f1dce5-76b8-4071-a86e-7dc0b4da806c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-f3b0f200-2f57-4c25-bdf4-8d17165642fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:11:55 compute-0 ceph-mon[74360]: pgmap v2666: 321 pgs: 321 active+clean; 276 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 158 KiB/s rd, 2.5 MiB/s wr, 64 op/s
Jan 20 15:11:56 compute-0 nova_compute[250018]: 2026-01-20 15:11:56.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:11:56 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:11:56 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:11:56 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:11:56.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:11:56 compute-0 nova_compute[250018]: 2026-01-20 15:11:56.232 250022 DEBUG nova.network.neutron [None req-40d4b039-0274-4cc5-86a5-aff9f7e1a487 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: f3b0f200-2f57-4c25-bdf4-8d17165642fe] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 15:11:56 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2667: 321 pgs: 321 active+clean; 296 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 479 KiB/s rd, 4.7 MiB/s wr, 153 op/s
Jan 20 15:11:56 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e392 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:11:56 compute-0 nova_compute[250018]: 2026-01-20 15:11:56.659 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:11:56 compute-0 ceph-mon[74360]: pgmap v2667: 321 pgs: 321 active+clean; 296 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 479 KiB/s rd, 4.7 MiB/s wr, 153 op/s
Jan 20 15:11:57 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:11:57 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:11:57 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:11:57.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:11:57 compute-0 nova_compute[250018]: 2026-01-20 15:11:57.091 250022 DEBUG nova.network.neutron [None req-369fd57f-2b01-47ff-b473-c364dcb82184 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: cbccef94-6ebf-4050-9b57-31486efe9e8f] Successfully created port: 5de19241-ea15-4a94-8f92-497d86147111 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 20 15:11:57 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/4078018162' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:11:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 15:11:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 15:11:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 15:11:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 15:11:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 15:11:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 15:11:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 15:11:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 15:11:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 15:11:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 15:11:58 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:11:58 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:11:58 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:11:58.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:11:58 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2668: 321 pgs: 321 active+clean; 296 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 404 KiB/s rd, 4.0 MiB/s wr, 129 op/s
Jan 20 15:11:58 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/1131313242' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:11:58 compute-0 ceph-mon[74360]: pgmap v2668: 321 pgs: 321 active+clean; 296 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 404 KiB/s rd, 4.0 MiB/s wr, 129 op/s
Jan 20 15:11:58 compute-0 nova_compute[250018]: 2026-01-20 15:11:58.750 250022 DEBUG os_brick.utils [None req-369fd57f-2b01-47ff-b473-c364dcb82184 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Jan 20 15:11:58 compute-0 nova_compute[250018]: 2026-01-20 15:11:58.751 268150 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:11:58 compute-0 nova_compute[250018]: 2026-01-20 15:11:58.765 268150 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:11:58 compute-0 nova_compute[250018]: 2026-01-20 15:11:58.765 268150 DEBUG oslo.privsep.daemon [-] privsep: reply[4331c327-a1b6-42bb-b8bb-4547427ccd4d]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:11:58 compute-0 nova_compute[250018]: 2026-01-20 15:11:58.766 268150 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:11:58 compute-0 nova_compute[250018]: 2026-01-20 15:11:58.774 268150 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:11:58 compute-0 nova_compute[250018]: 2026-01-20 15:11:58.775 268150 DEBUG oslo.privsep.daemon [-] privsep: reply[9490e840-6d9a-4bbd-9a1d-a66456dd2fe5]: (4, ('InitiatorName=iqn.1994-05.com.redhat:228389a1f17e', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:11:58 compute-0 nova_compute[250018]: 2026-01-20 15:11:58.776 268150 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:11:58 compute-0 nova_compute[250018]: 2026-01-20 15:11:58.785 268150 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:11:58 compute-0 nova_compute[250018]: 2026-01-20 15:11:58.785 268150 DEBUG oslo.privsep.daemon [-] privsep: reply[15e7f905-9947-49c1-9152-6a278fb903c6]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:11:58 compute-0 nova_compute[250018]: 2026-01-20 15:11:58.786 268150 DEBUG oslo.privsep.daemon [-] privsep: reply[6634b848-3d64-47da-bb9b-263a0e0748d4]: (4, '35085f33-1a27-41e3-805d-02c7ac6a1d7f') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:11:58 compute-0 nova_compute[250018]: 2026-01-20 15:11:58.786 250022 DEBUG oslo_concurrency.processutils [None req-369fd57f-2b01-47ff-b473-c364dcb82184 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:11:58 compute-0 nova_compute[250018]: 2026-01-20 15:11:58.826 250022 DEBUG oslo_concurrency.processutils [None req-369fd57f-2b01-47ff-b473-c364dcb82184 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] CMD "nvme version" returned: 0 in 0.040s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:11:58 compute-0 nova_compute[250018]: 2026-01-20 15:11:58.830 250022 DEBUG os_brick.initiator.connectors.lightos [None req-369fd57f-2b01-47ff-b473-c364dcb82184 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Jan 20 15:11:58 compute-0 nova_compute[250018]: 2026-01-20 15:11:58.831 250022 DEBUG os_brick.initiator.connectors.lightos [None req-369fd57f-2b01-47ff-b473-c364dcb82184 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Jan 20 15:11:58 compute-0 nova_compute[250018]: 2026-01-20 15:11:58.831 250022 DEBUG os_brick.initiator.connectors.lightos [None req-369fd57f-2b01-47ff-b473-c364dcb82184 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Jan 20 15:11:58 compute-0 nova_compute[250018]: 2026-01-20 15:11:58.832 250022 DEBUG os_brick.utils [None req-369fd57f-2b01-47ff-b473-c364dcb82184 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] <== get_connector_properties: return (81ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:228389a1f17e', 'do_local_attach': False, 'nvme_hostid': '5350774e-8b5e-4dba-80a9-92d405981c1d', 'system uuid': '35085f33-1a27-41e3-805d-02c7ac6a1d7f', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Jan 20 15:11:58 compute-0 nova_compute[250018]: 2026-01-20 15:11:58.833 250022 DEBUG nova.virt.block_device [None req-369fd57f-2b01-47ff-b473-c364dcb82184 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: cbccef94-6ebf-4050-9b57-31486efe9e8f] Updating existing volume attachment record: d1f240f5-8538-43ab-a9bf-fe0f26d73875 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Jan 20 15:11:59 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:11:59 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:11:59 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:11:59.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:11:59 compute-0 nova_compute[250018]: 2026-01-20 15:11:59.124 250022 DEBUG nova.network.neutron [None req-40d4b039-0274-4cc5-86a5-aff9f7e1a487 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: f3b0f200-2f57-4c25-bdf4-8d17165642fe] Updating instance_info_cache with network_info: [{"id": "25ad2c72-7d4d-4eb9-bf00-a5c42aa9d301", "address": "fa:16:3e:ab:6b:07", "network": {"id": "ef6ea4cb-557a-4dec-844c-6c933ddba0b1", "bridge": "br-int", "label": "tempest-network-smoke--1896631991", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ed5feeeafe7448a8efb47ab975b0ead", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25ad2c72-7d", "ovs_interfaceid": "25ad2c72-7d4d-4eb9-bf00-a5c42aa9d301", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:11:59 compute-0 nova_compute[250018]: 2026-01-20 15:11:59.158 250022 DEBUG oslo_concurrency.lockutils [None req-40d4b039-0274-4cc5-86a5-aff9f7e1a487 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Releasing lock "refresh_cache-f3b0f200-2f57-4c25-bdf4-8d17165642fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:11:59 compute-0 nova_compute[250018]: 2026-01-20 15:11:59.159 250022 DEBUG nova.compute.manager [None req-40d4b039-0274-4cc5-86a5-aff9f7e1a487 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: f3b0f200-2f57-4c25-bdf4-8d17165642fe] Instance network_info: |[{"id": "25ad2c72-7d4d-4eb9-bf00-a5c42aa9d301", "address": "fa:16:3e:ab:6b:07", "network": {"id": "ef6ea4cb-557a-4dec-844c-6c933ddba0b1", "bridge": "br-int", "label": "tempest-network-smoke--1896631991", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ed5feeeafe7448a8efb47ab975b0ead", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25ad2c72-7d", "ovs_interfaceid": "25ad2c72-7d4d-4eb9-bf00-a5c42aa9d301", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 20 15:11:59 compute-0 nova_compute[250018]: 2026-01-20 15:11:59.159 250022 DEBUG oslo_concurrency.lockutils [req-eec3ca46-a784-4017-b883-abcdac8f9a63 req-04f1dce5-76b8-4071-a86e-7dc0b4da806c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-f3b0f200-2f57-4c25-bdf4-8d17165642fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:11:59 compute-0 nova_compute[250018]: 2026-01-20 15:11:59.159 250022 DEBUG nova.network.neutron [req-eec3ca46-a784-4017-b883-abcdac8f9a63 req-04f1dce5-76b8-4071-a86e-7dc0b4da806c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f3b0f200-2f57-4c25-bdf4-8d17165642fe] Refreshing network info cache for port 25ad2c72-7d4d-4eb9-bf00-a5c42aa9d301 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 15:11:59 compute-0 nova_compute[250018]: 2026-01-20 15:11:59.162 250022 DEBUG nova.virt.libvirt.driver [None req-40d4b039-0274-4cc5-86a5-aff9f7e1a487 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: f3b0f200-2f57-4c25-bdf4-8d17165642fe] Start _get_guest_xml network_info=[{"id": "25ad2c72-7d4d-4eb9-bf00-a5c42aa9d301", "address": "fa:16:3e:ab:6b:07", "network": {"id": "ef6ea4cb-557a-4dec-844c-6c933ddba0b1", "bridge": "br-int", "label": "tempest-network-smoke--1896631991", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ed5feeeafe7448a8efb47ab975b0ead", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25ad2c72-7d", "ovs_interfaceid": "25ad2c72-7d4d-4eb9-bf00-a5c42aa9d301", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encrypted': False, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'guest_format': None, 'size': 0, 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 15:11:59 compute-0 nova_compute[250018]: 2026-01-20 15:11:59.166 250022 WARNING nova.virt.libvirt.driver [None req-40d4b039-0274-4cc5-86a5-aff9f7e1a487 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:11:59 compute-0 nova_compute[250018]: 2026-01-20 15:11:59.170 250022 DEBUG nova.virt.libvirt.host [None req-40d4b039-0274-4cc5-86a5-aff9f7e1a487 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 15:11:59 compute-0 nova_compute[250018]: 2026-01-20 15:11:59.171 250022 DEBUG nova.virt.libvirt.host [None req-40d4b039-0274-4cc5-86a5-aff9f7e1a487 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 15:11:59 compute-0 nova_compute[250018]: 2026-01-20 15:11:59.174 250022 DEBUG nova.virt.libvirt.host [None req-40d4b039-0274-4cc5-86a5-aff9f7e1a487 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 15:11:59 compute-0 nova_compute[250018]: 2026-01-20 15:11:59.174 250022 DEBUG nova.virt.libvirt.host [None req-40d4b039-0274-4cc5-86a5-aff9f7e1a487 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 15:11:59 compute-0 nova_compute[250018]: 2026-01-20 15:11:59.176 250022 DEBUG nova.virt.libvirt.driver [None req-40d4b039-0274-4cc5-86a5-aff9f7e1a487 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 15:11:59 compute-0 nova_compute[250018]: 2026-01-20 15:11:59.176 250022 DEBUG nova.virt.hardware [None req-40d4b039-0274-4cc5-86a5-aff9f7e1a487 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 15:11:59 compute-0 nova_compute[250018]: 2026-01-20 15:11:59.177 250022 DEBUG nova.virt.hardware [None req-40d4b039-0274-4cc5-86a5-aff9f7e1a487 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 15:11:59 compute-0 nova_compute[250018]: 2026-01-20 15:11:59.177 250022 DEBUG nova.virt.hardware [None req-40d4b039-0274-4cc5-86a5-aff9f7e1a487 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 15:11:59 compute-0 nova_compute[250018]: 2026-01-20 15:11:59.177 250022 DEBUG nova.virt.hardware [None req-40d4b039-0274-4cc5-86a5-aff9f7e1a487 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 15:11:59 compute-0 nova_compute[250018]: 2026-01-20 15:11:59.177 250022 DEBUG nova.virt.hardware [None req-40d4b039-0274-4cc5-86a5-aff9f7e1a487 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 15:11:59 compute-0 nova_compute[250018]: 2026-01-20 15:11:59.178 250022 DEBUG nova.virt.hardware [None req-40d4b039-0274-4cc5-86a5-aff9f7e1a487 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 15:11:59 compute-0 nova_compute[250018]: 2026-01-20 15:11:59.178 250022 DEBUG nova.virt.hardware [None req-40d4b039-0274-4cc5-86a5-aff9f7e1a487 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 15:11:59 compute-0 nova_compute[250018]: 2026-01-20 15:11:59.178 250022 DEBUG nova.virt.hardware [None req-40d4b039-0274-4cc5-86a5-aff9f7e1a487 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 15:11:59 compute-0 nova_compute[250018]: 2026-01-20 15:11:59.178 250022 DEBUG nova.virt.hardware [None req-40d4b039-0274-4cc5-86a5-aff9f7e1a487 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 15:11:59 compute-0 nova_compute[250018]: 2026-01-20 15:11:59.178 250022 DEBUG nova.virt.hardware [None req-40d4b039-0274-4cc5-86a5-aff9f7e1a487 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 15:11:59 compute-0 nova_compute[250018]: 2026-01-20 15:11:59.179 250022 DEBUG nova.virt.hardware [None req-40d4b039-0274-4cc5-86a5-aff9f7e1a487 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 15:11:59 compute-0 nova_compute[250018]: 2026-01-20 15:11:59.182 250022 DEBUG oslo_concurrency.processutils [None req-40d4b039-0274-4cc5-86a5-aff9f7e1a487 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:11:59 compute-0 nova_compute[250018]: 2026-01-20 15:11:59.317 250022 DEBUG nova.network.neutron [None req-369fd57f-2b01-47ff-b473-c364dcb82184 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: cbccef94-6ebf-4050-9b57-31486efe9e8f] Successfully updated port: 5de19241-ea15-4a94-8f92-497d86147111 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 20 15:11:59 compute-0 nova_compute[250018]: 2026-01-20 15:11:59.340 250022 DEBUG oslo_concurrency.lockutils [None req-369fd57f-2b01-47ff-b473-c364dcb82184 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Acquiring lock "refresh_cache-cbccef94-6ebf-4050-9b57-31486efe9e8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:11:59 compute-0 nova_compute[250018]: 2026-01-20 15:11:59.340 250022 DEBUG oslo_concurrency.lockutils [None req-369fd57f-2b01-47ff-b473-c364dcb82184 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Acquired lock "refresh_cache-cbccef94-6ebf-4050-9b57-31486efe9e8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:11:59 compute-0 nova_compute[250018]: 2026-01-20 15:11:59.341 250022 DEBUG nova.network.neutron [None req-369fd57f-2b01-47ff-b473-c364dcb82184 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: cbccef94-6ebf-4050-9b57-31486efe9e8f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 15:11:59 compute-0 nova_compute[250018]: 2026-01-20 15:11:59.438 250022 DEBUG nova.compute.manager [req-b912ac4d-e2f7-4e03-bc66-8bd16fd636c9 req-0920d982-d87b-4caf-b703-021d4f325b0d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: cbccef94-6ebf-4050-9b57-31486efe9e8f] Received event network-changed-5de19241-ea15-4a94-8f92-497d86147111 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:11:59 compute-0 nova_compute[250018]: 2026-01-20 15:11:59.439 250022 DEBUG nova.compute.manager [req-b912ac4d-e2f7-4e03-bc66-8bd16fd636c9 req-0920d982-d87b-4caf-b703-021d4f325b0d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: cbccef94-6ebf-4050-9b57-31486efe9e8f] Refreshing instance network info cache due to event network-changed-5de19241-ea15-4a94-8f92-497d86147111. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 15:11:59 compute-0 nova_compute[250018]: 2026-01-20 15:11:59.439 250022 DEBUG oslo_concurrency.lockutils [req-b912ac4d-e2f7-4e03-bc66-8bd16fd636c9 req-0920d982-d87b-4caf-b703-021d4f325b0d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-cbccef94-6ebf-4050-9b57-31486efe9e8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:11:59 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 15:11:59 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3538423909' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:11:59 compute-0 nova_compute[250018]: 2026-01-20 15:11:59.584 250022 DEBUG nova.network.neutron [None req-369fd57f-2b01-47ff-b473-c364dcb82184 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: cbccef94-6ebf-4050-9b57-31486efe9e8f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 15:11:59 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 15:11:59 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4158599472' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:11:59 compute-0 nova_compute[250018]: 2026-01-20 15:11:59.631 250022 DEBUG oslo_concurrency.processutils [None req-40d4b039-0274-4cc5-86a5-aff9f7e1a487 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:11:59 compute-0 nova_compute[250018]: 2026-01-20 15:11:59.661 250022 DEBUG nova.storage.rbd_utils [None req-40d4b039-0274-4cc5-86a5-aff9f7e1a487 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] rbd image f3b0f200-2f57-4c25-bdf4-8d17165642fe_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:11:59 compute-0 nova_compute[250018]: 2026-01-20 15:11:59.666 250022 DEBUG oslo_concurrency.processutils [None req-40d4b039-0274-4cc5-86a5-aff9f7e1a487 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:11:59 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/3538423909' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:11:59 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/4158599472' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:11:59 compute-0 sudo[354755]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:11:59 compute-0 sudo[354755]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:11:59 compute-0 sudo[354755]: pam_unix(sudo:session): session closed for user root
Jan 20 15:11:59 compute-0 sudo[354799]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:11:59 compute-0 sudo[354799]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:11:59 compute-0 sudo[354799]: pam_unix(sudo:session): session closed for user root
Jan 20 15:11:59 compute-0 podman[354797]: 2026-01-20 15:11:59.936317515 +0000 UTC m=+0.067380618 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Jan 20 15:11:59 compute-0 nova_compute[250018]: 2026-01-20 15:11:59.939 250022 DEBUG nova.compute.manager [None req-369fd57f-2b01-47ff-b473-c364dcb82184 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: cbccef94-6ebf-4050-9b57-31486efe9e8f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 15:11:59 compute-0 nova_compute[250018]: 2026-01-20 15:11:59.940 250022 DEBUG nova.virt.libvirt.driver [None req-369fd57f-2b01-47ff-b473-c364dcb82184 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: cbccef94-6ebf-4050-9b57-31486efe9e8f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 15:11:59 compute-0 nova_compute[250018]: 2026-01-20 15:11:59.941 250022 INFO nova.virt.libvirt.driver [None req-369fd57f-2b01-47ff-b473-c364dcb82184 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: cbccef94-6ebf-4050-9b57-31486efe9e8f] Creating image(s)
Jan 20 15:11:59 compute-0 nova_compute[250018]: 2026-01-20 15:11:59.942 250022 DEBUG nova.virt.libvirt.driver [None req-369fd57f-2b01-47ff-b473-c364dcb82184 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: cbccef94-6ebf-4050-9b57-31486efe9e8f] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Jan 20 15:11:59 compute-0 nova_compute[250018]: 2026-01-20 15:11:59.942 250022 DEBUG nova.virt.libvirt.driver [None req-369fd57f-2b01-47ff-b473-c364dcb82184 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: cbccef94-6ebf-4050-9b57-31486efe9e8f] Ensure instance console log exists: /var/lib/nova/instances/cbccef94-6ebf-4050-9b57-31486efe9e8f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 15:11:59 compute-0 nova_compute[250018]: 2026-01-20 15:11:59.942 250022 DEBUG oslo_concurrency.lockutils [None req-369fd57f-2b01-47ff-b473-c364dcb82184 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:11:59 compute-0 nova_compute[250018]: 2026-01-20 15:11:59.943 250022 DEBUG oslo_concurrency.lockutils [None req-369fd57f-2b01-47ff-b473-c364dcb82184 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:11:59 compute-0 nova_compute[250018]: 2026-01-20 15:11:59.943 250022 DEBUG oslo_concurrency.lockutils [None req-369fd57f-2b01-47ff-b473-c364dcb82184 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:11:59 compute-0 podman[354796]: 2026-01-20 15:11:59.962605434 +0000 UTC m=+0.093667527 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, 
container_name=ovn_controller, managed_by=edpm_ansible, tcib_managed=true)
Jan 20 15:12:00 compute-0 nova_compute[250018]: 2026-01-20 15:12:00.021 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:12:00 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:12:00 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:12:00 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:12:00.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:12:00 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 15:12:00 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4086234804' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:12:00 compute-0 nova_compute[250018]: 2026-01-20 15:12:00.132 250022 DEBUG oslo_concurrency.processutils [None req-40d4b039-0274-4cc5-86a5-aff9f7e1a487 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:12:00 compute-0 nova_compute[250018]: 2026-01-20 15:12:00.133 250022 DEBUG nova.virt.libvirt.vif [None req-40d4b039-0274-4cc5-86a5-aff9f7e1a487 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T15:11:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-755379148',display_name='tempest-TestNetworkAdvancedServerOps-server-755379148',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-755379148',id=178,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP0NB6AtDQr7I3Hp0XR7ulJzBFloX/ApnUaNnswWSYzrrT8mFzgvIiFRhCWLiZ+TDOJfVtcwGCfevRbqTmLZ5wdo4P6v9G2NYca0swLwaNQ/zK8Zmxz5PIdul2BRm2ICrw==',key_name='tempest-TestNetworkAdvancedServerOps-1554431653',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1ed5feeeafe7448a8efb47ab975b0ead',ramdisk_id='',reservation_id='r-j0wavald',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-175282664',owner_user_name='tempest-TestNetworkAdvancedServerOps-175282664-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T15:11:51Z,user_data=None,user_id='442a7a5cb8ea426a82be9762b262d171',uuid=f3b0f200-2f57-4c25-bdf4-8d17165642fe,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "25ad2c72-7d4d-4eb9-bf00-a5c42aa9d301", "address": "fa:16:3e:ab:6b:07", "network": {"id": "ef6ea4cb-557a-4dec-844c-6c933ddba0b1", "bridge": "br-int", "label": "tempest-network-smoke--1896631991", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "1ed5feeeafe7448a8efb47ab975b0ead", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25ad2c72-7d", "ovs_interfaceid": "25ad2c72-7d4d-4eb9-bf00-a5c42aa9d301", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 15:12:00 compute-0 nova_compute[250018]: 2026-01-20 15:12:00.133 250022 DEBUG nova.network.os_vif_util [None req-40d4b039-0274-4cc5-86a5-aff9f7e1a487 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Converting VIF {"id": "25ad2c72-7d4d-4eb9-bf00-a5c42aa9d301", "address": "fa:16:3e:ab:6b:07", "network": {"id": "ef6ea4cb-557a-4dec-844c-6c933ddba0b1", "bridge": "br-int", "label": "tempest-network-smoke--1896631991", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ed5feeeafe7448a8efb47ab975b0ead", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25ad2c72-7d", "ovs_interfaceid": "25ad2c72-7d4d-4eb9-bf00-a5c42aa9d301", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:12:00 compute-0 nova_compute[250018]: 2026-01-20 15:12:00.134 250022 DEBUG nova.network.os_vif_util [None req-40d4b039-0274-4cc5-86a5-aff9f7e1a487 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ab:6b:07,bridge_name='br-int',has_traffic_filtering=True,id=25ad2c72-7d4d-4eb9-bf00-a5c42aa9d301,network=Network(ef6ea4cb-557a-4dec-844c-6c933ddba0b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25ad2c72-7d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:12:00 compute-0 nova_compute[250018]: 2026-01-20 15:12:00.136 250022 DEBUG nova.objects.instance [None req-40d4b039-0274-4cc5-86a5-aff9f7e1a487 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lazy-loading 'pci_devices' on Instance uuid f3b0f200-2f57-4c25-bdf4-8d17165642fe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:12:00 compute-0 nova_compute[250018]: 2026-01-20 15:12:00.153 250022 DEBUG nova.virt.libvirt.driver [None req-40d4b039-0274-4cc5-86a5-aff9f7e1a487 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: f3b0f200-2f57-4c25-bdf4-8d17165642fe] End _get_guest_xml xml=<domain type="kvm">
Jan 20 15:12:00 compute-0 nova_compute[250018]:   <uuid>f3b0f200-2f57-4c25-bdf4-8d17165642fe</uuid>
Jan 20 15:12:00 compute-0 nova_compute[250018]:   <name>instance-000000b2</name>
Jan 20 15:12:00 compute-0 nova_compute[250018]:   <memory>131072</memory>
Jan 20 15:12:00 compute-0 nova_compute[250018]:   <vcpu>1</vcpu>
Jan 20 15:12:00 compute-0 nova_compute[250018]:   <metadata>
Jan 20 15:12:00 compute-0 nova_compute[250018]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 15:12:00 compute-0 nova_compute[250018]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 15:12:00 compute-0 nova_compute[250018]:       <nova:name>tempest-TestNetworkAdvancedServerOps-server-755379148</nova:name>
Jan 20 15:12:00 compute-0 nova_compute[250018]:       <nova:creationTime>2026-01-20 15:11:59</nova:creationTime>
Jan 20 15:12:00 compute-0 nova_compute[250018]:       <nova:flavor name="m1.nano">
Jan 20 15:12:00 compute-0 nova_compute[250018]:         <nova:memory>128</nova:memory>
Jan 20 15:12:00 compute-0 nova_compute[250018]:         <nova:disk>1</nova:disk>
Jan 20 15:12:00 compute-0 nova_compute[250018]:         <nova:swap>0</nova:swap>
Jan 20 15:12:00 compute-0 nova_compute[250018]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 15:12:00 compute-0 nova_compute[250018]:         <nova:vcpus>1</nova:vcpus>
Jan 20 15:12:00 compute-0 nova_compute[250018]:       </nova:flavor>
Jan 20 15:12:00 compute-0 nova_compute[250018]:       <nova:owner>
Jan 20 15:12:00 compute-0 nova_compute[250018]:         <nova:user uuid="442a7a5cb8ea426a82be9762b262d171">tempest-TestNetworkAdvancedServerOps-175282664-project-member</nova:user>
Jan 20 15:12:00 compute-0 nova_compute[250018]:         <nova:project uuid="1ed5feeeafe7448a8efb47ab975b0ead">tempest-TestNetworkAdvancedServerOps-175282664</nova:project>
Jan 20 15:12:00 compute-0 nova_compute[250018]:       </nova:owner>
Jan 20 15:12:00 compute-0 nova_compute[250018]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 15:12:00 compute-0 nova_compute[250018]:       <nova:ports>
Jan 20 15:12:00 compute-0 nova_compute[250018]:         <nova:port uuid="25ad2c72-7d4d-4eb9-bf00-a5c42aa9d301">
Jan 20 15:12:00 compute-0 nova_compute[250018]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Jan 20 15:12:00 compute-0 nova_compute[250018]:         </nova:port>
Jan 20 15:12:00 compute-0 nova_compute[250018]:       </nova:ports>
Jan 20 15:12:00 compute-0 nova_compute[250018]:     </nova:instance>
Jan 20 15:12:00 compute-0 nova_compute[250018]:   </metadata>
Jan 20 15:12:00 compute-0 nova_compute[250018]:   <sysinfo type="smbios">
Jan 20 15:12:00 compute-0 nova_compute[250018]:     <system>
Jan 20 15:12:00 compute-0 nova_compute[250018]:       <entry name="manufacturer">RDO</entry>
Jan 20 15:12:00 compute-0 nova_compute[250018]:       <entry name="product">OpenStack Compute</entry>
Jan 20 15:12:00 compute-0 nova_compute[250018]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 15:12:00 compute-0 nova_compute[250018]:       <entry name="serial">f3b0f200-2f57-4c25-bdf4-8d17165642fe</entry>
Jan 20 15:12:00 compute-0 nova_compute[250018]:       <entry name="uuid">f3b0f200-2f57-4c25-bdf4-8d17165642fe</entry>
Jan 20 15:12:00 compute-0 nova_compute[250018]:       <entry name="family">Virtual Machine</entry>
Jan 20 15:12:00 compute-0 nova_compute[250018]:     </system>
Jan 20 15:12:00 compute-0 nova_compute[250018]:   </sysinfo>
Jan 20 15:12:00 compute-0 nova_compute[250018]:   <os>
Jan 20 15:12:00 compute-0 nova_compute[250018]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 15:12:00 compute-0 nova_compute[250018]:     <boot dev="hd"/>
Jan 20 15:12:00 compute-0 nova_compute[250018]:     <smbios mode="sysinfo"/>
Jan 20 15:12:00 compute-0 nova_compute[250018]:   </os>
Jan 20 15:12:00 compute-0 nova_compute[250018]:   <features>
Jan 20 15:12:00 compute-0 nova_compute[250018]:     <acpi/>
Jan 20 15:12:00 compute-0 nova_compute[250018]:     <apic/>
Jan 20 15:12:00 compute-0 nova_compute[250018]:     <vmcoreinfo/>
Jan 20 15:12:00 compute-0 nova_compute[250018]:   </features>
Jan 20 15:12:00 compute-0 nova_compute[250018]:   <clock offset="utc">
Jan 20 15:12:00 compute-0 nova_compute[250018]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 15:12:00 compute-0 nova_compute[250018]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 15:12:00 compute-0 nova_compute[250018]:     <timer name="hpet" present="no"/>
Jan 20 15:12:00 compute-0 nova_compute[250018]:   </clock>
Jan 20 15:12:00 compute-0 nova_compute[250018]:   <cpu mode="custom" match="exact">
Jan 20 15:12:00 compute-0 nova_compute[250018]:     <model>Nehalem</model>
Jan 20 15:12:00 compute-0 nova_compute[250018]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 15:12:00 compute-0 nova_compute[250018]:   </cpu>
Jan 20 15:12:00 compute-0 nova_compute[250018]:   <devices>
Jan 20 15:12:00 compute-0 nova_compute[250018]:     <disk type="network" device="disk">
Jan 20 15:12:00 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 15:12:00 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/f3b0f200-2f57-4c25-bdf4-8d17165642fe_disk">
Jan 20 15:12:00 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 15:12:00 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 15:12:00 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 15:12:00 compute-0 nova_compute[250018]:       </source>
Jan 20 15:12:00 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 15:12:00 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 15:12:00 compute-0 nova_compute[250018]:       </auth>
Jan 20 15:12:00 compute-0 nova_compute[250018]:       <target dev="vda" bus="virtio"/>
Jan 20 15:12:00 compute-0 nova_compute[250018]:     </disk>
Jan 20 15:12:00 compute-0 nova_compute[250018]:     <disk type="network" device="cdrom">
Jan 20 15:12:00 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 15:12:00 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/f3b0f200-2f57-4c25-bdf4-8d17165642fe_disk.config">
Jan 20 15:12:00 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 15:12:00 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 15:12:00 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 15:12:00 compute-0 nova_compute[250018]:       </source>
Jan 20 15:12:00 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 15:12:00 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 15:12:00 compute-0 nova_compute[250018]:       </auth>
Jan 20 15:12:00 compute-0 nova_compute[250018]:       <target dev="sda" bus="sata"/>
Jan 20 15:12:00 compute-0 nova_compute[250018]:     </disk>
Jan 20 15:12:00 compute-0 nova_compute[250018]:     <interface type="ethernet">
Jan 20 15:12:00 compute-0 nova_compute[250018]:       <mac address="fa:16:3e:ab:6b:07"/>
Jan 20 15:12:00 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 15:12:00 compute-0 nova_compute[250018]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 15:12:00 compute-0 nova_compute[250018]:       <mtu size="1442"/>
Jan 20 15:12:00 compute-0 nova_compute[250018]:       <target dev="tap25ad2c72-7d"/>
Jan 20 15:12:00 compute-0 nova_compute[250018]:     </interface>
Jan 20 15:12:00 compute-0 nova_compute[250018]:     <serial type="pty">
Jan 20 15:12:00 compute-0 nova_compute[250018]:       <log file="/var/lib/nova/instances/f3b0f200-2f57-4c25-bdf4-8d17165642fe/console.log" append="off"/>
Jan 20 15:12:00 compute-0 nova_compute[250018]:     </serial>
Jan 20 15:12:00 compute-0 nova_compute[250018]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 15:12:00 compute-0 nova_compute[250018]:     <video>
Jan 20 15:12:00 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 15:12:00 compute-0 nova_compute[250018]:     </video>
Jan 20 15:12:00 compute-0 nova_compute[250018]:     <input type="tablet" bus="usb"/>
Jan 20 15:12:00 compute-0 nova_compute[250018]:     <rng model="virtio">
Jan 20 15:12:00 compute-0 nova_compute[250018]:       <backend model="random">/dev/urandom</backend>
Jan 20 15:12:00 compute-0 nova_compute[250018]:     </rng>
Jan 20 15:12:00 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root"/>
Jan 20 15:12:00 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:12:00 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:12:00 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:12:00 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:12:00 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:12:00 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:12:00 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:12:00 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:12:00 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:12:00 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:12:00 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:12:00 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:12:00 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:12:00 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:12:00 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:12:00 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:12:00 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:12:00 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:12:00 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:12:00 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:12:00 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:12:00 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:12:00 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:12:00 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:12:00 compute-0 nova_compute[250018]:     <controller type="usb" index="0"/>
Jan 20 15:12:00 compute-0 nova_compute[250018]:     <memballoon model="virtio">
Jan 20 15:12:00 compute-0 nova_compute[250018]:       <stats period="10"/>
Jan 20 15:12:00 compute-0 nova_compute[250018]:     </memballoon>
Jan 20 15:12:00 compute-0 nova_compute[250018]:   </devices>
Jan 20 15:12:00 compute-0 nova_compute[250018]: </domain>
Jan 20 15:12:00 compute-0 nova_compute[250018]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 15:12:00 compute-0 nova_compute[250018]: 2026-01-20 15:12:00.154 250022 DEBUG nova.compute.manager [None req-40d4b039-0274-4cc5-86a5-aff9f7e1a487 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: f3b0f200-2f57-4c25-bdf4-8d17165642fe] Preparing to wait for external event network-vif-plugged-25ad2c72-7d4d-4eb9-bf00-a5c42aa9d301 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 20 15:12:00 compute-0 nova_compute[250018]: 2026-01-20 15:12:00.155 250022 DEBUG oslo_concurrency.lockutils [None req-40d4b039-0274-4cc5-86a5-aff9f7e1a487 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Acquiring lock "f3b0f200-2f57-4c25-bdf4-8d17165642fe-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:12:00 compute-0 nova_compute[250018]: 2026-01-20 15:12:00.155 250022 DEBUG oslo_concurrency.lockutils [None req-40d4b039-0274-4cc5-86a5-aff9f7e1a487 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lock "f3b0f200-2f57-4c25-bdf4-8d17165642fe-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:12:00 compute-0 nova_compute[250018]: 2026-01-20 15:12:00.155 250022 DEBUG oslo_concurrency.lockutils [None req-40d4b039-0274-4cc5-86a5-aff9f7e1a487 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lock "f3b0f200-2f57-4c25-bdf4-8d17165642fe-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:12:00 compute-0 nova_compute[250018]: 2026-01-20 15:12:00.156 250022 DEBUG nova.virt.libvirt.vif [None req-40d4b039-0274-4cc5-86a5-aff9f7e1a487 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T15:11:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-755379148',display_name='tempest-TestNetworkAdvancedServerOps-server-755379148',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-755379148',id=178,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP0NB6AtDQr7I3Hp0XR7ulJzBFloX/ApnUaNnswWSYzrrT8mFzgvIiFRhCWLiZ+TDOJfVtcwGCfevRbqTmLZ5wdo4P6v9G2NYca0swLwaNQ/zK8Zmxz5PIdul2BRm2ICrw==',key_name='tempest-TestNetworkAdvancedServerOps-1554431653',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1ed5feeeafe7448a8efb47ab975b0ead',ramdisk_id='',reservation_id='r-j0wavald',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-175282664',owner_user_name='tempest-TestNetworkAdvancedServerOps-175282664-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T15:11:51Z,user_data=None,user_id='442a7a5cb8ea426a82be9762b262d171',uuid=f3b0f200-2f57-4c25-bdf4-8d17165642fe,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "25ad2c72-7d4d-4eb9-bf00-a5c42aa9d301", "address": "fa:16:3e:ab:6b:07", "network": {"id": "ef6ea4cb-557a-4dec-844c-6c933ddba0b1", "bridge": "br-int", "label": "tempest-network-smoke--1896631991", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ed5feeeafe7448a8efb47ab975b0ead", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25ad2c72-7d", "ovs_interfaceid": "25ad2c72-7d4d-4eb9-bf00-a5c42aa9d301", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 15:12:00 compute-0 nova_compute[250018]: 2026-01-20 15:12:00.156 250022 DEBUG nova.network.os_vif_util [None req-40d4b039-0274-4cc5-86a5-aff9f7e1a487 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Converting VIF {"id": "25ad2c72-7d4d-4eb9-bf00-a5c42aa9d301", "address": "fa:16:3e:ab:6b:07", "network": {"id": "ef6ea4cb-557a-4dec-844c-6c933ddba0b1", "bridge": "br-int", "label": "tempest-network-smoke--1896631991", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ed5feeeafe7448a8efb47ab975b0ead", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25ad2c72-7d", "ovs_interfaceid": "25ad2c72-7d4d-4eb9-bf00-a5c42aa9d301", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:12:00 compute-0 nova_compute[250018]: 2026-01-20 15:12:00.157 250022 DEBUG nova.network.os_vif_util [None req-40d4b039-0274-4cc5-86a5-aff9f7e1a487 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ab:6b:07,bridge_name='br-int',has_traffic_filtering=True,id=25ad2c72-7d4d-4eb9-bf00-a5c42aa9d301,network=Network(ef6ea4cb-557a-4dec-844c-6c933ddba0b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25ad2c72-7d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:12:00 compute-0 nova_compute[250018]: 2026-01-20 15:12:00.157 250022 DEBUG os_vif [None req-40d4b039-0274-4cc5-86a5-aff9f7e1a487 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ab:6b:07,bridge_name='br-int',has_traffic_filtering=True,id=25ad2c72-7d4d-4eb9-bf00-a5c42aa9d301,network=Network(ef6ea4cb-557a-4dec-844c-6c933ddba0b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25ad2c72-7d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 15:12:00 compute-0 nova_compute[250018]: 2026-01-20 15:12:00.157 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:12:00 compute-0 nova_compute[250018]: 2026-01-20 15:12:00.158 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:12:00 compute-0 nova_compute[250018]: 2026-01-20 15:12:00.158 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:12:00 compute-0 nova_compute[250018]: 2026-01-20 15:12:00.162 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:12:00 compute-0 nova_compute[250018]: 2026-01-20 15:12:00.162 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap25ad2c72-7d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:12:00 compute-0 nova_compute[250018]: 2026-01-20 15:12:00.163 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap25ad2c72-7d, col_values=(('external_ids', {'iface-id': '25ad2c72-7d4d-4eb9-bf00-a5c42aa9d301', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ab:6b:07', 'vm-uuid': 'f3b0f200-2f57-4c25-bdf4-8d17165642fe'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:12:00 compute-0 nova_compute[250018]: 2026-01-20 15:12:00.164 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:12:00 compute-0 NetworkManager[48960]: <info>  [1768921920.1654] manager: (tap25ad2c72-7d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/303)
Jan 20 15:12:00 compute-0 nova_compute[250018]: 2026-01-20 15:12:00.166 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 15:12:00 compute-0 nova_compute[250018]: 2026-01-20 15:12:00.170 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:12:00 compute-0 nova_compute[250018]: 2026-01-20 15:12:00.171 250022 INFO os_vif [None req-40d4b039-0274-4cc5-86a5-aff9f7e1a487 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ab:6b:07,bridge_name='br-int',has_traffic_filtering=True,id=25ad2c72-7d4d-4eb9-bf00-a5c42aa9d301,network=Network(ef6ea4cb-557a-4dec-844c-6c933ddba0b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25ad2c72-7d')
Jan 20 15:12:00 compute-0 nova_compute[250018]: 2026-01-20 15:12:00.234 250022 DEBUG nova.virt.libvirt.driver [None req-40d4b039-0274-4cc5-86a5-aff9f7e1a487 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:12:00 compute-0 nova_compute[250018]: 2026-01-20 15:12:00.235 250022 DEBUG nova.virt.libvirt.driver [None req-40d4b039-0274-4cc5-86a5-aff9f7e1a487 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:12:00 compute-0 nova_compute[250018]: 2026-01-20 15:12:00.235 250022 DEBUG nova.virt.libvirt.driver [None req-40d4b039-0274-4cc5-86a5-aff9f7e1a487 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] No VIF found with MAC fa:16:3e:ab:6b:07, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 15:12:00 compute-0 nova_compute[250018]: 2026-01-20 15:12:00.236 250022 INFO nova.virt.libvirt.driver [None req-40d4b039-0274-4cc5-86a5-aff9f7e1a487 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: f3b0f200-2f57-4c25-bdf4-8d17165642fe] Using config drive
Jan 20 15:12:00 compute-0 nova_compute[250018]: 2026-01-20 15:12:00.259 250022 DEBUG nova.storage.rbd_utils [None req-40d4b039-0274-4cc5-86a5-aff9f7e1a487 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] rbd image f3b0f200-2f57-4c25-bdf4-8d17165642fe_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:12:00 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2669: 321 pgs: 321 active+clean; 289 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 881 KiB/s rd, 5.7 MiB/s wr, 181 op/s
Jan 20 15:12:00 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/1987985162' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:12:00 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/2331614767' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:12:00 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/4086234804' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:12:00 compute-0 ceph-mon[74360]: pgmap v2669: 321 pgs: 321 active+clean; 289 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 881 KiB/s rd, 5.7 MiB/s wr, 181 op/s
Jan 20 15:12:00 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/86498293' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:12:01 compute-0 nova_compute[250018]: 2026-01-20 15:12:01.045 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:12:01 compute-0 nova_compute[250018]: 2026-01-20 15:12:01.049 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:12:01 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:12:01 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:12:01 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:12:01.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:12:01 compute-0 nova_compute[250018]: 2026-01-20 15:12:01.377 250022 DEBUG nova.network.neutron [None req-369fd57f-2b01-47ff-b473-c364dcb82184 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: cbccef94-6ebf-4050-9b57-31486efe9e8f] Updating instance_info_cache with network_info: [{"id": "5de19241-ea15-4a94-8f92-497d86147111", "address": "fa:16:3e:b0:87:db", "network": {"id": "b677f1a9-dbaa-4373-8466-bd9ccf067b91", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-408170906-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a49638950e1543fa8e0d251af5479623", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5de19241-ea", "ovs_interfaceid": "5de19241-ea15-4a94-8f92-497d86147111", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:12:01 compute-0 nova_compute[250018]: 2026-01-20 15:12:01.411 250022 DEBUG oslo_concurrency.lockutils [None req-369fd57f-2b01-47ff-b473-c364dcb82184 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Releasing lock "refresh_cache-cbccef94-6ebf-4050-9b57-31486efe9e8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:12:01 compute-0 nova_compute[250018]: 2026-01-20 15:12:01.411 250022 DEBUG nova.compute.manager [None req-369fd57f-2b01-47ff-b473-c364dcb82184 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: cbccef94-6ebf-4050-9b57-31486efe9e8f] Instance network_info: |[{"id": "5de19241-ea15-4a94-8f92-497d86147111", "address": "fa:16:3e:b0:87:db", "network": {"id": "b677f1a9-dbaa-4373-8466-bd9ccf067b91", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-408170906-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a49638950e1543fa8e0d251af5479623", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5de19241-ea", "ovs_interfaceid": "5de19241-ea15-4a94-8f92-497d86147111", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 20 15:12:01 compute-0 nova_compute[250018]: 2026-01-20 15:12:01.414 250022 DEBUG oslo_concurrency.lockutils [req-b912ac4d-e2f7-4e03-bc66-8bd16fd636c9 req-0920d982-d87b-4caf-b703-021d4f325b0d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-cbccef94-6ebf-4050-9b57-31486efe9e8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:12:01 compute-0 nova_compute[250018]: 2026-01-20 15:12:01.414 250022 DEBUG nova.network.neutron [req-b912ac4d-e2f7-4e03-bc66-8bd16fd636c9 req-0920d982-d87b-4caf-b703-021d4f325b0d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: cbccef94-6ebf-4050-9b57-31486efe9e8f] Refreshing network info cache for port 5de19241-ea15-4a94-8f92-497d86147111 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 15:12:01 compute-0 nova_compute[250018]: 2026-01-20 15:12:01.421 250022 DEBUG nova.virt.libvirt.driver [None req-369fd57f-2b01-47ff-b473-c364dcb82184 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: cbccef94-6ebf-4050-9b57-31486efe9e8f] Start _get_guest_xml network_info=[{"id": "5de19241-ea15-4a94-8f92-497d86147111", "address": "fa:16:3e:b0:87:db", "network": {"id": "b677f1a9-dbaa-4373-8466-bd9ccf067b91", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-408170906-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a49638950e1543fa8e0d251af5479623", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5de19241-ea", "ovs_interfaceid": "5de19241-ea15-4a94-8f92-497d86147111", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='d41d8cd98f00b204e9800998ecf8427e',container_format='bare',created_at=2026-01-20T15:11:45Z,direct_url=<?>,disk_format='qcow2',id=172aeba8-ca4a-4189-856b-42b685d4813d,min_disk=1,min_ram=0,name='tempest-TestVolumeBootPatternsnapshot-431707316',owner='a49638950e1543fa8e0d251af5479623',properties=ImageMetaProps,protected=<?>,size=0,status='active',tags=<?>,updated_at=2026-01-20T15:11:46Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'mount_device': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'attachment_id': 'd1f240f5-8538-43ab-a9bf-fe0f26d73875', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-8692e9bd-7163-4f35-a28b-8a31b1691fc8', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '8692e9bd-7163-4f35-a28b-8a31b1691fc8', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': 'cbccef94-6ebf-4050-9b57-31486efe9e8f', 'attached_at': '', 'detached_at': '', 'volume_id': '8692e9bd-7163-4f35-a28b-8a31b1691fc8', 'serial': '8692e9bd-7163-4f35-a28b-8a31b1691fc8'}, 'disk_bus': 'virtio', 'guest_format': None, 'delete_on_termination': True, 'volume_type': None}], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 15:12:01 compute-0 nova_compute[250018]: 2026-01-20 15:12:01.427 250022 WARNING nova.virt.libvirt.driver [None req-369fd57f-2b01-47ff-b473-c364dcb82184 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:12:01 compute-0 nova_compute[250018]: 2026-01-20 15:12:01.432 250022 DEBUG nova.virt.libvirt.host [None req-369fd57f-2b01-47ff-b473-c364dcb82184 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 15:12:01 compute-0 nova_compute[250018]: 2026-01-20 15:12:01.433 250022 DEBUG nova.virt.libvirt.host [None req-369fd57f-2b01-47ff-b473-c364dcb82184 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 15:12:01 compute-0 nova_compute[250018]: 2026-01-20 15:12:01.437 250022 DEBUG nova.virt.libvirt.host [None req-369fd57f-2b01-47ff-b473-c364dcb82184 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 15:12:01 compute-0 nova_compute[250018]: 2026-01-20 15:12:01.437 250022 DEBUG nova.virt.libvirt.host [None req-369fd57f-2b01-47ff-b473-c364dcb82184 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 15:12:01 compute-0 nova_compute[250018]: 2026-01-20 15:12:01.439 250022 DEBUG nova.virt.libvirt.driver [None req-369fd57f-2b01-47ff-b473-c364dcb82184 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 15:12:01 compute-0 nova_compute[250018]: 2026-01-20 15:12:01.439 250022 DEBUG nova.virt.hardware [None req-369fd57f-2b01-47ff-b473-c364dcb82184 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='d41d8cd98f00b204e9800998ecf8427e',container_format='bare',created_at=2026-01-20T15:11:45Z,direct_url=<?>,disk_format='qcow2',id=172aeba8-ca4a-4189-856b-42b685d4813d,min_disk=1,min_ram=0,name='tempest-TestVolumeBootPatternsnapshot-431707316',owner='a49638950e1543fa8e0d251af5479623',properties=ImageMetaProps,protected=<?>,size=0,status='active',tags=<?>,updated_at=2026-01-20T15:11:46Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 15:12:01 compute-0 nova_compute[250018]: 2026-01-20 15:12:01.440 250022 DEBUG nova.virt.hardware [None req-369fd57f-2b01-47ff-b473-c364dcb82184 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 15:12:01 compute-0 nova_compute[250018]: 2026-01-20 15:12:01.440 250022 DEBUG nova.virt.hardware [None req-369fd57f-2b01-47ff-b473-c364dcb82184 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 15:12:01 compute-0 nova_compute[250018]: 2026-01-20 15:12:01.441 250022 DEBUG nova.virt.hardware [None req-369fd57f-2b01-47ff-b473-c364dcb82184 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 15:12:01 compute-0 nova_compute[250018]: 2026-01-20 15:12:01.441 250022 DEBUG nova.virt.hardware [None req-369fd57f-2b01-47ff-b473-c364dcb82184 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 15:12:01 compute-0 nova_compute[250018]: 2026-01-20 15:12:01.442 250022 DEBUG nova.virt.hardware [None req-369fd57f-2b01-47ff-b473-c364dcb82184 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 15:12:01 compute-0 nova_compute[250018]: 2026-01-20 15:12:01.442 250022 DEBUG nova.virt.hardware [None req-369fd57f-2b01-47ff-b473-c364dcb82184 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 15:12:01 compute-0 nova_compute[250018]: 2026-01-20 15:12:01.443 250022 DEBUG nova.virt.hardware [None req-369fd57f-2b01-47ff-b473-c364dcb82184 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 15:12:01 compute-0 nova_compute[250018]: 2026-01-20 15:12:01.444 250022 DEBUG nova.virt.hardware [None req-369fd57f-2b01-47ff-b473-c364dcb82184 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 15:12:01 compute-0 nova_compute[250018]: 2026-01-20 15:12:01.444 250022 DEBUG nova.virt.hardware [None req-369fd57f-2b01-47ff-b473-c364dcb82184 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 15:12:01 compute-0 nova_compute[250018]: 2026-01-20 15:12:01.444 250022 DEBUG nova.virt.hardware [None req-369fd57f-2b01-47ff-b473-c364dcb82184 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 15:12:01 compute-0 nova_compute[250018]: 2026-01-20 15:12:01.485 250022 DEBUG nova.storage.rbd_utils [None req-369fd57f-2b01-47ff-b473-c364dcb82184 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] rbd image cbccef94-6ebf-4050-9b57-31486efe9e8f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:12:01 compute-0 nova_compute[250018]: 2026-01-20 15:12:01.491 250022 DEBUG oslo_concurrency.processutils [None req-369fd57f-2b01-47ff-b473-c364dcb82184 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:12:01 compute-0 nova_compute[250018]: 2026-01-20 15:12:01.528 250022 INFO nova.virt.libvirt.driver [None req-40d4b039-0274-4cc5-86a5-aff9f7e1a487 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: f3b0f200-2f57-4c25-bdf4-8d17165642fe] Creating config drive at /var/lib/nova/instances/f3b0f200-2f57-4c25-bdf4-8d17165642fe/disk.config
Jan 20 15:12:01 compute-0 nova_compute[250018]: 2026-01-20 15:12:01.533 250022 DEBUG oslo_concurrency.processutils [None req-40d4b039-0274-4cc5-86a5-aff9f7e1a487 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f3b0f200-2f57-4c25-bdf4-8d17165642fe/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmz2dojo2 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:12:01 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e392 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:12:01 compute-0 nova_compute[250018]: 2026-01-20 15:12:01.673 250022 DEBUG oslo_concurrency.processutils [None req-40d4b039-0274-4cc5-86a5-aff9f7e1a487 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f3b0f200-2f57-4c25-bdf4-8d17165642fe/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmz2dojo2" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:12:01 compute-0 nova_compute[250018]: 2026-01-20 15:12:01.714 250022 DEBUG nova.storage.rbd_utils [None req-40d4b039-0274-4cc5-86a5-aff9f7e1a487 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] rbd image f3b0f200-2f57-4c25-bdf4-8d17165642fe_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:12:01 compute-0 nova_compute[250018]: 2026-01-20 15:12:01.717 250022 DEBUG oslo_concurrency.processutils [None req-40d4b039-0274-4cc5-86a5-aff9f7e1a487 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f3b0f200-2f57-4c25-bdf4-8d17165642fe/disk.config f3b0f200-2f57-4c25-bdf4-8d17165642fe_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:12:01 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/1794335812' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:12:01 compute-0 nova_compute[250018]: 2026-01-20 15:12:01.870 250022 DEBUG oslo_concurrency.processutils [None req-40d4b039-0274-4cc5-86a5-aff9f7e1a487 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f3b0f200-2f57-4c25-bdf4-8d17165642fe/disk.config f3b0f200-2f57-4c25-bdf4-8d17165642fe_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:12:01 compute-0 nova_compute[250018]: 2026-01-20 15:12:01.872 250022 INFO nova.virt.libvirt.driver [None req-40d4b039-0274-4cc5-86a5-aff9f7e1a487 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: f3b0f200-2f57-4c25-bdf4-8d17165642fe] Deleting local config drive /var/lib/nova/instances/f3b0f200-2f57-4c25-bdf4-8d17165642fe/disk.config because it was imported into RBD.
Jan 20 15:12:01 compute-0 kernel: tap25ad2c72-7d: entered promiscuous mode
Jan 20 15:12:01 compute-0 NetworkManager[48960]: <info>  [1768921921.9362] manager: (tap25ad2c72-7d): new Tun device (/org/freedesktop/NetworkManager/Devices/304)
Jan 20 15:12:01 compute-0 ovn_controller[148666]: 2026-01-20T15:12:01Z|00625|binding|INFO|Claiming lport 25ad2c72-7d4d-4eb9-bf00-a5c42aa9d301 for this chassis.
Jan 20 15:12:01 compute-0 ovn_controller[148666]: 2026-01-20T15:12:01Z|00626|binding|INFO|25ad2c72-7d4d-4eb9-bf00-a5c42aa9d301: Claiming fa:16:3e:ab:6b:07 10.100.0.12
Jan 20 15:12:01 compute-0 nova_compute[250018]: 2026-01-20 15:12:01.939 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:12:01 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:12:01.947 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ab:6b:07 10.100.0.12'], port_security=['fa:16:3e:ab:6b:07 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'f3b0f200-2f57-4c25-bdf4-8d17165642fe', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ef6ea4cb-557a-4dec-844c-6c933ddba0b1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1ed5feeeafe7448a8efb47ab975b0ead', 'neutron:revision_number': '2', 'neutron:security_group_ids': '3875f94a-ec8d-4588-90ca-c7ebe4dc6a1a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9a649097-9411-41ae-8903-e778937a7e59, chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=25ad2c72-7d4d-4eb9-bf00-a5c42aa9d301) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:12:01 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:12:01.948 160071 INFO neutron.agent.ovn.metadata.agent [-] Port 25ad2c72-7d4d-4eb9-bf00-a5c42aa9d301 in datapath ef6ea4cb-557a-4dec-844c-6c933ddba0b1 bound to our chassis
Jan 20 15:12:01 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:12:01.949 160071 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ef6ea4cb-557a-4dec-844c-6c933ddba0b1
Jan 20 15:12:01 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 15:12:01 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4256123011' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:12:01 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:12:01.964 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[3a268f7b-87fe-4de9-89e5-0a7c18b71044]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:12:01 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:12:01.965 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapef6ea4cb-51 in ovnmeta-ef6ea4cb-557a-4dec-844c-6c933ddba0b1 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 15:12:01 compute-0 systemd-udevd[354980]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 15:12:01 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:12:01.972 257604 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapef6ea4cb-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 15:12:01 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:12:01.972 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[541c0ee4-d888-4890-b696-10bbd19237d0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:12:01 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:12:01.973 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[693fa1b1-7bca-49bd-92e1-c9af63548d7d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:12:01 compute-0 nova_compute[250018]: 2026-01-20 15:12:01.979 250022 DEBUG oslo_concurrency.processutils [None req-369fd57f-2b01-47ff-b473-c364dcb82184 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:12:01 compute-0 NetworkManager[48960]: <info>  [1768921921.9835] device (tap25ad2c72-7d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 15:12:01 compute-0 NetworkManager[48960]: <info>  [1768921921.9847] device (tap25ad2c72-7d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 15:12:01 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:12:01.992 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[4c27adc4-8722-4127-8372-9a3c42c6ce6e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:12:01 compute-0 systemd-machined[216401]: New machine qemu-78-instance-000000b2.
Jan 20 15:12:02 compute-0 nova_compute[250018]: 2026-01-20 15:12:02.004 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:12:02 compute-0 ovn_controller[148666]: 2026-01-20T15:12:02Z|00627|binding|INFO|Setting lport 25ad2c72-7d4d-4eb9-bf00-a5c42aa9d301 ovn-installed in OVS
Jan 20 15:12:02 compute-0 ovn_controller[148666]: 2026-01-20T15:12:02Z|00628|binding|INFO|Setting lport 25ad2c72-7d4d-4eb9-bf00-a5c42aa9d301 up in Southbound
Jan 20 15:12:02 compute-0 nova_compute[250018]: 2026-01-20 15:12:02.010 250022 DEBUG nova.virt.libvirt.vif [None req-369fd57f-2b01-47ff-b473-c364dcb82184 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T15:11:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-image-snapshot-server-1922522280',display_name='tempest-TestVolumeBootPattern-image-snapshot-server-1922522280',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-image-snapshot-server-1922522280',id=179,image_ref='172aeba8-ca4a-4189-856b-42b685d4813d',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCd1i6ksDYk2orgFWClFZiNFO+H9XCYnHrQEiniQ0ayHQQpVJ6Y2R4z0iRJZMzCFIisxk+ekqiBti1LTegyW3GOJQA6J0X8zZkiACZPbuFQJ3znGbrrW2zIMVDTgR37CgQ==',key_name='tempest-keypair-261853780',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a49638950e1543fa8e0d251af5479623',ramdisk_id='',reservation_id='r-qnfo2jsp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_bdm_v2='True',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_project_name='tempest-TestVolumeBootPattern-194644003',image_owner_user_name='tempest-TestVolumeBootPattern-194644003-project-member',image_root_device_name='/dev/vda',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-194644003',owner_user_name='tempest-TestVolumeBootPattern-194644003-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T15:11:53Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='bf422e55e158420cbdae75f07a3bb97a',uuid=cbccef94-6ebf-4050-9b57-31486efe9e8f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') 
vif={"id": "5de19241-ea15-4a94-8f92-497d86147111", "address": "fa:16:3e:b0:87:db", "network": {"id": "b677f1a9-dbaa-4373-8466-bd9ccf067b91", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-408170906-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a49638950e1543fa8e0d251af5479623", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5de19241-ea", "ovs_interfaceid": "5de19241-ea15-4a94-8f92-497d86147111", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 15:12:02 compute-0 nova_compute[250018]: 2026-01-20 15:12:02.011 250022 DEBUG nova.network.os_vif_util [None req-369fd57f-2b01-47ff-b473-c364dcb82184 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Converting VIF {"id": "5de19241-ea15-4a94-8f92-497d86147111", "address": "fa:16:3e:b0:87:db", "network": {"id": "b677f1a9-dbaa-4373-8466-bd9ccf067b91", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-408170906-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a49638950e1543fa8e0d251af5479623", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5de19241-ea", "ovs_interfaceid": "5de19241-ea15-4a94-8f92-497d86147111", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:12:02 compute-0 nova_compute[250018]: 2026-01-20 15:12:02.011 250022 DEBUG nova.network.os_vif_util [None req-369fd57f-2b01-47ff-b473-c364dcb82184 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b0:87:db,bridge_name='br-int',has_traffic_filtering=True,id=5de19241-ea15-4a94-8f92-497d86147111,network=Network(b677f1a9-dbaa-4373-8466-bd9ccf067b91),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5de19241-ea') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:12:02 compute-0 nova_compute[250018]: 2026-01-20 15:12:02.012 250022 DEBUG nova.objects.instance [None req-369fd57f-2b01-47ff-b473-c364dcb82184 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Lazy-loading 'pci_devices' on Instance uuid cbccef94-6ebf-4050-9b57-31486efe9e8f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:12:02 compute-0 nova_compute[250018]: 2026-01-20 15:12:02.014 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:12:02 compute-0 systemd[1]: Started Virtual Machine qemu-78-instance-000000b2.
Jan 20 15:12:02 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:12:02.020 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[3d23da80-0701-4368-a1eb-3aa433a609e5]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:12:02 compute-0 nova_compute[250018]: 2026-01-20 15:12:02.044 250022 DEBUG nova.virt.libvirt.driver [None req-369fd57f-2b01-47ff-b473-c364dcb82184 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: cbccef94-6ebf-4050-9b57-31486efe9e8f] End _get_guest_xml xml=<domain type="kvm">
Jan 20 15:12:02 compute-0 nova_compute[250018]:   <uuid>cbccef94-6ebf-4050-9b57-31486efe9e8f</uuid>
Jan 20 15:12:02 compute-0 nova_compute[250018]:   <name>instance-000000b3</name>
Jan 20 15:12:02 compute-0 nova_compute[250018]:   <memory>131072</memory>
Jan 20 15:12:02 compute-0 nova_compute[250018]:   <vcpu>1</vcpu>
Jan 20 15:12:02 compute-0 nova_compute[250018]:   <metadata>
Jan 20 15:12:02 compute-0 nova_compute[250018]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 15:12:02 compute-0 nova_compute[250018]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 15:12:02 compute-0 nova_compute[250018]:       <nova:name>tempest-TestVolumeBootPattern-image-snapshot-server-1922522280</nova:name>
Jan 20 15:12:02 compute-0 nova_compute[250018]:       <nova:creationTime>2026-01-20 15:12:01</nova:creationTime>
Jan 20 15:12:02 compute-0 nova_compute[250018]:       <nova:flavor name="m1.nano">
Jan 20 15:12:02 compute-0 nova_compute[250018]:         <nova:memory>128</nova:memory>
Jan 20 15:12:02 compute-0 nova_compute[250018]:         <nova:disk>1</nova:disk>
Jan 20 15:12:02 compute-0 nova_compute[250018]:         <nova:swap>0</nova:swap>
Jan 20 15:12:02 compute-0 nova_compute[250018]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 15:12:02 compute-0 nova_compute[250018]:         <nova:vcpus>1</nova:vcpus>
Jan 20 15:12:02 compute-0 nova_compute[250018]:       </nova:flavor>
Jan 20 15:12:02 compute-0 nova_compute[250018]:       <nova:owner>
Jan 20 15:12:02 compute-0 nova_compute[250018]:         <nova:user uuid="bf422e55e158420cbdae75f07a3bb97a">tempest-TestVolumeBootPattern-194644003-project-member</nova:user>
Jan 20 15:12:02 compute-0 nova_compute[250018]:         <nova:project uuid="a49638950e1543fa8e0d251af5479623">tempest-TestVolumeBootPattern-194644003</nova:project>
Jan 20 15:12:02 compute-0 nova_compute[250018]:       </nova:owner>
Jan 20 15:12:02 compute-0 nova_compute[250018]:       <nova:root type="image" uuid="172aeba8-ca4a-4189-856b-42b685d4813d"/>
Jan 20 15:12:02 compute-0 nova_compute[250018]:       <nova:ports>
Jan 20 15:12:02 compute-0 nova_compute[250018]:         <nova:port uuid="5de19241-ea15-4a94-8f92-497d86147111">
Jan 20 15:12:02 compute-0 nova_compute[250018]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Jan 20 15:12:02 compute-0 nova_compute[250018]:         </nova:port>
Jan 20 15:12:02 compute-0 nova_compute[250018]:       </nova:ports>
Jan 20 15:12:02 compute-0 nova_compute[250018]:     </nova:instance>
Jan 20 15:12:02 compute-0 nova_compute[250018]:   </metadata>
Jan 20 15:12:02 compute-0 nova_compute[250018]:   <sysinfo type="smbios">
Jan 20 15:12:02 compute-0 nova_compute[250018]:     <system>
Jan 20 15:12:02 compute-0 nova_compute[250018]:       <entry name="manufacturer">RDO</entry>
Jan 20 15:12:02 compute-0 nova_compute[250018]:       <entry name="product">OpenStack Compute</entry>
Jan 20 15:12:02 compute-0 nova_compute[250018]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 15:12:02 compute-0 nova_compute[250018]:       <entry name="serial">cbccef94-6ebf-4050-9b57-31486efe9e8f</entry>
Jan 20 15:12:02 compute-0 nova_compute[250018]:       <entry name="uuid">cbccef94-6ebf-4050-9b57-31486efe9e8f</entry>
Jan 20 15:12:02 compute-0 nova_compute[250018]:       <entry name="family">Virtual Machine</entry>
Jan 20 15:12:02 compute-0 nova_compute[250018]:     </system>
Jan 20 15:12:02 compute-0 nova_compute[250018]:   </sysinfo>
Jan 20 15:12:02 compute-0 nova_compute[250018]:   <os>
Jan 20 15:12:02 compute-0 nova_compute[250018]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 15:12:02 compute-0 nova_compute[250018]:     <boot dev="hd"/>
Jan 20 15:12:02 compute-0 nova_compute[250018]:     <smbios mode="sysinfo"/>
Jan 20 15:12:02 compute-0 nova_compute[250018]:   </os>
Jan 20 15:12:02 compute-0 nova_compute[250018]:   <features>
Jan 20 15:12:02 compute-0 nova_compute[250018]:     <acpi/>
Jan 20 15:12:02 compute-0 nova_compute[250018]:     <apic/>
Jan 20 15:12:02 compute-0 nova_compute[250018]:     <vmcoreinfo/>
Jan 20 15:12:02 compute-0 nova_compute[250018]:   </features>
Jan 20 15:12:02 compute-0 nova_compute[250018]:   <clock offset="utc">
Jan 20 15:12:02 compute-0 nova_compute[250018]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 15:12:02 compute-0 nova_compute[250018]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 15:12:02 compute-0 nova_compute[250018]:     <timer name="hpet" present="no"/>
Jan 20 15:12:02 compute-0 nova_compute[250018]:   </clock>
Jan 20 15:12:02 compute-0 nova_compute[250018]:   <cpu mode="custom" match="exact">
Jan 20 15:12:02 compute-0 nova_compute[250018]:     <model>Nehalem</model>
Jan 20 15:12:02 compute-0 nova_compute[250018]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 15:12:02 compute-0 nova_compute[250018]:   </cpu>
Jan 20 15:12:02 compute-0 nova_compute[250018]:   <devices>
Jan 20 15:12:02 compute-0 nova_compute[250018]:     <disk type="network" device="cdrom">
Jan 20 15:12:02 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 15:12:02 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/cbccef94-6ebf-4050-9b57-31486efe9e8f_disk.config">
Jan 20 15:12:02 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 15:12:02 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 15:12:02 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 15:12:02 compute-0 nova_compute[250018]:       </source>
Jan 20 15:12:02 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 15:12:02 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 15:12:02 compute-0 nova_compute[250018]:       </auth>
Jan 20 15:12:02 compute-0 nova_compute[250018]:       <target dev="sda" bus="sata"/>
Jan 20 15:12:02 compute-0 nova_compute[250018]:     </disk>
Jan 20 15:12:02 compute-0 nova_compute[250018]:     <disk type="network" device="disk">
Jan 20 15:12:02 compute-0 nova_compute[250018]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 20 15:12:02 compute-0 nova_compute[250018]:       <source protocol="rbd" name="volumes/volume-8692e9bd-7163-4f35-a28b-8a31b1691fc8">
Jan 20 15:12:02 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 15:12:02 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 15:12:02 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 15:12:02 compute-0 nova_compute[250018]:       </source>
Jan 20 15:12:02 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 15:12:02 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 15:12:02 compute-0 nova_compute[250018]:       </auth>
Jan 20 15:12:02 compute-0 nova_compute[250018]:       <target dev="vda" bus="virtio"/>
Jan 20 15:12:02 compute-0 nova_compute[250018]:       <serial>8692e9bd-7163-4f35-a28b-8a31b1691fc8</serial>
Jan 20 15:12:02 compute-0 nova_compute[250018]:     </disk>
Jan 20 15:12:02 compute-0 nova_compute[250018]:     <interface type="ethernet">
Jan 20 15:12:02 compute-0 nova_compute[250018]:       <mac address="fa:16:3e:b0:87:db"/>
Jan 20 15:12:02 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 15:12:02 compute-0 nova_compute[250018]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 15:12:02 compute-0 nova_compute[250018]:       <mtu size="1442"/>
Jan 20 15:12:02 compute-0 nova_compute[250018]:       <target dev="tap5de19241-ea"/>
Jan 20 15:12:02 compute-0 nova_compute[250018]:     </interface>
Jan 20 15:12:02 compute-0 nova_compute[250018]:     <serial type="pty">
Jan 20 15:12:02 compute-0 nova_compute[250018]:       <log file="/var/lib/nova/instances/cbccef94-6ebf-4050-9b57-31486efe9e8f/console.log" append="off"/>
Jan 20 15:12:02 compute-0 nova_compute[250018]:     </serial>
Jan 20 15:12:02 compute-0 nova_compute[250018]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 15:12:02 compute-0 nova_compute[250018]:     <video>
Jan 20 15:12:02 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 15:12:02 compute-0 nova_compute[250018]:     </video>
Jan 20 15:12:02 compute-0 nova_compute[250018]:     <input type="tablet" bus="usb"/>
Jan 20 15:12:02 compute-0 nova_compute[250018]:     <input type="keyboard" bus="usb"/>
Jan 20 15:12:02 compute-0 nova_compute[250018]:     <rng model="virtio">
Jan 20 15:12:02 compute-0 nova_compute[250018]:       <backend model="random">/dev/urandom</backend>
Jan 20 15:12:02 compute-0 nova_compute[250018]:     </rng>
Jan 20 15:12:02 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root"/>
Jan 20 15:12:02 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:12:02 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:12:02 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:12:02 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:12:02 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:12:02 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:12:02 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:12:02 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:12:02 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:12:02 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:12:02 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:12:02 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:12:02 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:12:02 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:12:02 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:12:02 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:12:02 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:12:02 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:12:02 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:12:02 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:12:02 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:12:02 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:12:02 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:12:02 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:12:02 compute-0 nova_compute[250018]:     <controller type="usb" index="0"/>
Jan 20 15:12:02 compute-0 nova_compute[250018]:     <memballoon model="virtio">
Jan 20 15:12:02 compute-0 nova_compute[250018]:       <stats period="10"/>
Jan 20 15:12:02 compute-0 nova_compute[250018]:     </memballoon>
Jan 20 15:12:02 compute-0 nova_compute[250018]:   </devices>
Jan 20 15:12:02 compute-0 nova_compute[250018]: </domain>
Jan 20 15:12:02 compute-0 nova_compute[250018]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 15:12:02 compute-0 nova_compute[250018]: 2026-01-20 15:12:02.045 250022 DEBUG nova.compute.manager [None req-369fd57f-2b01-47ff-b473-c364dcb82184 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: cbccef94-6ebf-4050-9b57-31486efe9e8f] Preparing to wait for external event network-vif-plugged-5de19241-ea15-4a94-8f92-497d86147111 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 20 15:12:02 compute-0 nova_compute[250018]: 2026-01-20 15:12:02.045 250022 DEBUG oslo_concurrency.lockutils [None req-369fd57f-2b01-47ff-b473-c364dcb82184 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Acquiring lock "cbccef94-6ebf-4050-9b57-31486efe9e8f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:12:02 compute-0 nova_compute[250018]: 2026-01-20 15:12:02.045 250022 DEBUG oslo_concurrency.lockutils [None req-369fd57f-2b01-47ff-b473-c364dcb82184 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Lock "cbccef94-6ebf-4050-9b57-31486efe9e8f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:12:02 compute-0 nova_compute[250018]: 2026-01-20 15:12:02.046 250022 DEBUG oslo_concurrency.lockutils [None req-369fd57f-2b01-47ff-b473-c364dcb82184 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Lock "cbccef94-6ebf-4050-9b57-31486efe9e8f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:12:02 compute-0 nova_compute[250018]: 2026-01-20 15:12:02.046 250022 DEBUG nova.virt.libvirt.vif [None req-369fd57f-2b01-47ff-b473-c364dcb82184 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T15:11:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-image-snapshot-server-1922522280',display_name='tempest-TestVolumeBootPattern-image-snapshot-server-1922522280',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-image-snapshot-server-1922522280',id=179,image_ref='172aeba8-ca4a-4189-856b-42b685d4813d',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCd1i6ksDYk2orgFWClFZiNFO+H9XCYnHrQEiniQ0ayHQQpVJ6Y2R4z0iRJZMzCFIisxk+ekqiBti1LTegyW3GOJQA6J0X8zZkiACZPbuFQJ3znGbrrW2zIMVDTgR37CgQ==',key_name='tempest-keypair-261853780',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a49638950e1543fa8e0d251af5479623',ramdisk_id='',reservation_id='r-qnfo2jsp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_bdm_v2='True',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_project_name='tempest-TestVolumeBootPattern-194644003',image_owner_user_name='tempest-TestVolumeBootPattern-194644003-project-member',image_root_device_name='/dev/vda',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-194644003',owner_user_name='tempest-TestVolumeBootPattern-194644003-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T15:11:53Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='bf422e55e158420cbdae75f07a3bb97a',uuid=cbccef94-6ebf-4050-9b57-31486efe9e8f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='buil
ding') vif={"id": "5de19241-ea15-4a94-8f92-497d86147111", "address": "fa:16:3e:b0:87:db", "network": {"id": "b677f1a9-dbaa-4373-8466-bd9ccf067b91", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-408170906-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a49638950e1543fa8e0d251af5479623", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5de19241-ea", "ovs_interfaceid": "5de19241-ea15-4a94-8f92-497d86147111", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 15:12:02 compute-0 nova_compute[250018]: 2026-01-20 15:12:02.047 250022 DEBUG nova.network.os_vif_util [None req-369fd57f-2b01-47ff-b473-c364dcb82184 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Converting VIF {"id": "5de19241-ea15-4a94-8f92-497d86147111", "address": "fa:16:3e:b0:87:db", "network": {"id": "b677f1a9-dbaa-4373-8466-bd9ccf067b91", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-408170906-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a49638950e1543fa8e0d251af5479623", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5de19241-ea", "ovs_interfaceid": "5de19241-ea15-4a94-8f92-497d86147111", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:12:02 compute-0 nova_compute[250018]: 2026-01-20 15:12:02.047 250022 DEBUG nova.network.os_vif_util [None req-369fd57f-2b01-47ff-b473-c364dcb82184 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b0:87:db,bridge_name='br-int',has_traffic_filtering=True,id=5de19241-ea15-4a94-8f92-497d86147111,network=Network(b677f1a9-dbaa-4373-8466-bd9ccf067b91),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5de19241-ea') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:12:02 compute-0 nova_compute[250018]: 2026-01-20 15:12:02.048 250022 DEBUG os_vif [None req-369fd57f-2b01-47ff-b473-c364dcb82184 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b0:87:db,bridge_name='br-int',has_traffic_filtering=True,id=5de19241-ea15-4a94-8f92-497d86147111,network=Network(b677f1a9-dbaa-4373-8466-bd9ccf067b91),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5de19241-ea') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 15:12:02 compute-0 nova_compute[250018]: 2026-01-20 15:12:02.049 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:12:02 compute-0 nova_compute[250018]: 2026-01-20 15:12:02.049 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:12:02 compute-0 nova_compute[250018]: 2026-01-20 15:12:02.049 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:12:02 compute-0 nova_compute[250018]: 2026-01-20 15:12:02.053 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:12:02 compute-0 nova_compute[250018]: 2026-01-20 15:12:02.053 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5de19241-ea, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:12:02 compute-0 nova_compute[250018]: 2026-01-20 15:12:02.054 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5de19241-ea, col_values=(('external_ids', {'iface-id': '5de19241-ea15-4a94-8f92-497d86147111', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b0:87:db', 'vm-uuid': 'cbccef94-6ebf-4050-9b57-31486efe9e8f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:12:02 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:12:02.053 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[618c73c4-d6c8-4bba-860f-fc2f1bf19ddd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:12:02 compute-0 NetworkManager[48960]: <info>  [1768921922.0561] manager: (tap5de19241-ea): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/305)
Jan 20 15:12:02 compute-0 nova_compute[250018]: 2026-01-20 15:12:02.056 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:12:02 compute-0 nova_compute[250018]: 2026-01-20 15:12:02.058 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 15:12:02 compute-0 nova_compute[250018]: 2026-01-20 15:12:02.060 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:12:02 compute-0 NetworkManager[48960]: <info>  [1768921922.0614] manager: (tapef6ea4cb-50): new Veth device (/org/freedesktop/NetworkManager/Devices/306)
Jan 20 15:12:02 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:12:02.060 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[7ff67a66-08bd-445c-a187-1be9eb0ba84d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:12:02 compute-0 nova_compute[250018]: 2026-01-20 15:12:02.062 250022 INFO os_vif [None req-369fd57f-2b01-47ff-b473-c364dcb82184 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b0:87:db,bridge_name='br-int',has_traffic_filtering=True,id=5de19241-ea15-4a94-8f92-497d86147111,network=Network(b677f1a9-dbaa-4373-8466-bd9ccf067b91),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5de19241-ea')
Jan 20 15:12:02 compute-0 nova_compute[250018]: 2026-01-20 15:12:02.089 250022 DEBUG nova.network.neutron [req-eec3ca46-a784-4017-b883-abcdac8f9a63 req-04f1dce5-76b8-4071-a86e-7dc0b4da806c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f3b0f200-2f57-4c25-bdf4-8d17165642fe] Updated VIF entry in instance network info cache for port 25ad2c72-7d4d-4eb9-bf00-a5c42aa9d301. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 15:12:02 compute-0 nova_compute[250018]: 2026-01-20 15:12:02.090 250022 DEBUG nova.network.neutron [req-eec3ca46-a784-4017-b883-abcdac8f9a63 req-04f1dce5-76b8-4071-a86e-7dc0b4da806c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f3b0f200-2f57-4c25-bdf4-8d17165642fe] Updating instance_info_cache with network_info: [{"id": "25ad2c72-7d4d-4eb9-bf00-a5c42aa9d301", "address": "fa:16:3e:ab:6b:07", "network": {"id": "ef6ea4cb-557a-4dec-844c-6c933ddba0b1", "bridge": "br-int", "label": "tempest-network-smoke--1896631991", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ed5feeeafe7448a8efb47ab975b0ead", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25ad2c72-7d", "ovs_interfaceid": "25ad2c72-7d4d-4eb9-bf00-a5c42aa9d301", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:12:02 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:12:02.091 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[08d34a9f-ed4a-4cf2-948e-83068e9fd548]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:12:02 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:12:02.093 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[3752bf57-b04e-48eb-b93f-9341f3922a35]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:12:02 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:12:02 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:12:02 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:12:02.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:12:02 compute-0 nova_compute[250018]: 2026-01-20 15:12:02.114 250022 DEBUG oslo_concurrency.lockutils [req-eec3ca46-a784-4017-b883-abcdac8f9a63 req-04f1dce5-76b8-4071-a86e-7dc0b4da806c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-f3b0f200-2f57-4c25-bdf4-8d17165642fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:12:02 compute-0 NetworkManager[48960]: <info>  [1768921922.1162] device (tapef6ea4cb-50): carrier: link connected
Jan 20 15:12:02 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:12:02.123 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[6abdfd65-ec2b-41e5-a75e-bc6e1956adc9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:12:02 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:12:02.140 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[da7dd6ee-afeb-4ccd-add5-c533ea5dbf22]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapef6ea4cb-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ac:ba:b8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 202], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 786283, 'reachable_time': 39927, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 355017, 'error': None, 'target': 'ovnmeta-ef6ea4cb-557a-4dec-844c-6c933ddba0b1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:12:02 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:12:02.156 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[e70f6aab-b2e2-40f7-b395-d191664173bd]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feac:bab8'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 786283, 'tstamp': 786283}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 355018, 'error': None, 'target': 'ovnmeta-ef6ea4cb-557a-4dec-844c-6c933ddba0b1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:12:02 compute-0 nova_compute[250018]: 2026-01-20 15:12:02.164 250022 DEBUG nova.virt.libvirt.driver [None req-369fd57f-2b01-47ff-b473-c364dcb82184 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:12:02 compute-0 nova_compute[250018]: 2026-01-20 15:12:02.164 250022 DEBUG nova.virt.libvirt.driver [None req-369fd57f-2b01-47ff-b473-c364dcb82184 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:12:02 compute-0 nova_compute[250018]: 2026-01-20 15:12:02.165 250022 DEBUG nova.virt.libvirt.driver [None req-369fd57f-2b01-47ff-b473-c364dcb82184 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] No VIF found with MAC fa:16:3e:b0:87:db, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 15:12:02 compute-0 nova_compute[250018]: 2026-01-20 15:12:02.165 250022 INFO nova.virt.libvirt.driver [None req-369fd57f-2b01-47ff-b473-c364dcb82184 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: cbccef94-6ebf-4050-9b57-31486efe9e8f] Using config drive
Jan 20 15:12:02 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:12:02.171 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[9308ceb0-a385-4436-95b7-b155de52066a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapef6ea4cb-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ac:ba:b8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 202], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 786283, 'reachable_time': 39927, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 355019, 'error': None, 'target': 'ovnmeta-ef6ea4cb-557a-4dec-844c-6c933ddba0b1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:12:02 compute-0 nova_compute[250018]: 2026-01-20 15:12:02.196 250022 DEBUG nova.storage.rbd_utils [None req-369fd57f-2b01-47ff-b473-c364dcb82184 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] rbd image cbccef94-6ebf-4050-9b57-31486efe9e8f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:12:02 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:12:02.208 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[cd8c7306-74a4-4690-aaa0-ff0bd9f6cc14]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:12:02 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:12:02.265 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[5739ba98-8d0f-46b3-87c9-1a77dd6c445e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:12:02 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:12:02.267 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapef6ea4cb-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:12:02 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:12:02.267 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:12:02 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:12:02.267 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapef6ea4cb-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:12:02 compute-0 kernel: tapef6ea4cb-50: entered promiscuous mode
Jan 20 15:12:02 compute-0 NetworkManager[48960]: <info>  [1768921922.2696] manager: (tapef6ea4cb-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/307)
Jan 20 15:12:02 compute-0 nova_compute[250018]: 2026-01-20 15:12:02.272 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:12:02 compute-0 nova_compute[250018]: 2026-01-20 15:12:02.273 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:12:02 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:12:02.274 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapef6ea4cb-50, col_values=(('external_ids', {'iface-id': 'e83e13a6-6446-4245-ab88-80085510e40d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:12:02 compute-0 nova_compute[250018]: 2026-01-20 15:12:02.275 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:12:02 compute-0 ovn_controller[148666]: 2026-01-20T15:12:02Z|00629|binding|INFO|Releasing lport e83e13a6-6446-4245-ab88-80085510e40d from this chassis (sb_readonly=0)
Jan 20 15:12:02 compute-0 nova_compute[250018]: 2026-01-20 15:12:02.291 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:12:02 compute-0 nova_compute[250018]: 2026-01-20 15:12:02.296 250022 DEBUG nova.compute.manager [req-bda38dad-6e81-45ae-84b9-4cd02cdb4a98 req-442a56eb-897e-48b6-9bf4-3659f800536f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f3b0f200-2f57-4c25-bdf4-8d17165642fe] Received event network-vif-plugged-25ad2c72-7d4d-4eb9-bf00-a5c42aa9d301 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:12:02 compute-0 nova_compute[250018]: 2026-01-20 15:12:02.296 250022 DEBUG oslo_concurrency.lockutils [req-bda38dad-6e81-45ae-84b9-4cd02cdb4a98 req-442a56eb-897e-48b6-9bf4-3659f800536f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "f3b0f200-2f57-4c25-bdf4-8d17165642fe-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:12:02 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:12:02.296 160071 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ef6ea4cb-557a-4dec-844c-6c933ddba0b1.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ef6ea4cb-557a-4dec-844c-6c933ddba0b1.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 15:12:02 compute-0 nova_compute[250018]: 2026-01-20 15:12:02.296 250022 DEBUG oslo_concurrency.lockutils [req-bda38dad-6e81-45ae-84b9-4cd02cdb4a98 req-442a56eb-897e-48b6-9bf4-3659f800536f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "f3b0f200-2f57-4c25-bdf4-8d17165642fe-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:12:02 compute-0 nova_compute[250018]: 2026-01-20 15:12:02.297 250022 DEBUG oslo_concurrency.lockutils [req-bda38dad-6e81-45ae-84b9-4cd02cdb4a98 req-442a56eb-897e-48b6-9bf4-3659f800536f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "f3b0f200-2f57-4c25-bdf4-8d17165642fe-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:12:02 compute-0 nova_compute[250018]: 2026-01-20 15:12:02.297 250022 DEBUG nova.compute.manager [req-bda38dad-6e81-45ae-84b9-4cd02cdb4a98 req-442a56eb-897e-48b6-9bf4-3659f800536f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f3b0f200-2f57-4c25-bdf4-8d17165642fe] Processing event network-vif-plugged-25ad2c72-7d4d-4eb9-bf00-a5c42aa9d301 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 20 15:12:02 compute-0 nova_compute[250018]: 2026-01-20 15:12:02.297 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:12:02 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:12:02.297 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[ce506665-9ace-4c56-b3c7-9bc7c3c9c652]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:12:02 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:12:02.298 160071 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 15:12:02 compute-0 ovn_metadata_agent[160049]: global
Jan 20 15:12:02 compute-0 ovn_metadata_agent[160049]:     log         /dev/log local0 debug
Jan 20 15:12:02 compute-0 ovn_metadata_agent[160049]:     log-tag     haproxy-metadata-proxy-ef6ea4cb-557a-4dec-844c-6c933ddba0b1
Jan 20 15:12:02 compute-0 ovn_metadata_agent[160049]:     user        root
Jan 20 15:12:02 compute-0 ovn_metadata_agent[160049]:     group       root
Jan 20 15:12:02 compute-0 ovn_metadata_agent[160049]:     maxconn     1024
Jan 20 15:12:02 compute-0 ovn_metadata_agent[160049]:     pidfile     /var/lib/neutron/external/pids/ef6ea4cb-557a-4dec-844c-6c933ddba0b1.pid.haproxy
Jan 20 15:12:02 compute-0 ovn_metadata_agent[160049]:     daemon
Jan 20 15:12:02 compute-0 ovn_metadata_agent[160049]: 
Jan 20 15:12:02 compute-0 ovn_metadata_agent[160049]: defaults
Jan 20 15:12:02 compute-0 ovn_metadata_agent[160049]:     log global
Jan 20 15:12:02 compute-0 ovn_metadata_agent[160049]:     mode http
Jan 20 15:12:02 compute-0 ovn_metadata_agent[160049]:     option httplog
Jan 20 15:12:02 compute-0 ovn_metadata_agent[160049]:     option dontlognull
Jan 20 15:12:02 compute-0 ovn_metadata_agent[160049]:     option http-server-close
Jan 20 15:12:02 compute-0 ovn_metadata_agent[160049]:     option forwardfor
Jan 20 15:12:02 compute-0 ovn_metadata_agent[160049]:     retries                 3
Jan 20 15:12:02 compute-0 ovn_metadata_agent[160049]:     timeout http-request    30s
Jan 20 15:12:02 compute-0 ovn_metadata_agent[160049]:     timeout connect         30s
Jan 20 15:12:02 compute-0 ovn_metadata_agent[160049]:     timeout client          32s
Jan 20 15:12:02 compute-0 ovn_metadata_agent[160049]:     timeout server          32s
Jan 20 15:12:02 compute-0 ovn_metadata_agent[160049]:     timeout http-keep-alive 30s
Jan 20 15:12:02 compute-0 ovn_metadata_agent[160049]: 
Jan 20 15:12:02 compute-0 ovn_metadata_agent[160049]: 
Jan 20 15:12:02 compute-0 ovn_metadata_agent[160049]: listen listener
Jan 20 15:12:02 compute-0 ovn_metadata_agent[160049]:     bind 169.254.169.254:80
Jan 20 15:12:02 compute-0 ovn_metadata_agent[160049]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 15:12:02 compute-0 ovn_metadata_agent[160049]:     http-request add-header X-OVN-Network-ID ef6ea4cb-557a-4dec-844c-6c933ddba0b1
Jan 20 15:12:02 compute-0 ovn_metadata_agent[160049]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 15:12:02 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:12:02.299 160071 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ef6ea4cb-557a-4dec-844c-6c933ddba0b1', 'env', 'PROCESS_TAG=haproxy-ef6ea4cb-557a-4dec-844c-6c933ddba0b1', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ef6ea4cb-557a-4dec-844c-6c933ddba0b1.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 15:12:02 compute-0 nova_compute[250018]: 2026-01-20 15:12:02.431 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768921922.4312677, f3b0f200-2f57-4c25-bdf4-8d17165642fe => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:12:02 compute-0 nova_compute[250018]: 2026-01-20 15:12:02.432 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: f3b0f200-2f57-4c25-bdf4-8d17165642fe] VM Started (Lifecycle Event)
Jan 20 15:12:02 compute-0 nova_compute[250018]: 2026-01-20 15:12:02.434 250022 DEBUG nova.compute.manager [None req-40d4b039-0274-4cc5-86a5-aff9f7e1a487 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: f3b0f200-2f57-4c25-bdf4-8d17165642fe] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 15:12:02 compute-0 nova_compute[250018]: 2026-01-20 15:12:02.437 250022 DEBUG nova.virt.libvirt.driver [None req-40d4b039-0274-4cc5-86a5-aff9f7e1a487 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: f3b0f200-2f57-4c25-bdf4-8d17165642fe] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 15:12:02 compute-0 nova_compute[250018]: 2026-01-20 15:12:02.440 250022 INFO nova.virt.libvirt.driver [-] [instance: f3b0f200-2f57-4c25-bdf4-8d17165642fe] Instance spawned successfully.
Jan 20 15:12:02 compute-0 nova_compute[250018]: 2026-01-20 15:12:02.440 250022 DEBUG nova.virt.libvirt.driver [None req-40d4b039-0274-4cc5-86a5-aff9f7e1a487 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: f3b0f200-2f57-4c25-bdf4-8d17165642fe] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 15:12:02 compute-0 nova_compute[250018]: 2026-01-20 15:12:02.462 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: f3b0f200-2f57-4c25-bdf4-8d17165642fe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:12:02 compute-0 nova_compute[250018]: 2026-01-20 15:12:02.469 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: f3b0f200-2f57-4c25-bdf4-8d17165642fe] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:12:02 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2670: 321 pgs: 321 active+clean; 293 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 915 KiB/s rd, 5.5 MiB/s wr, 177 op/s
Jan 20 15:12:02 compute-0 nova_compute[250018]: 2026-01-20 15:12:02.476 250022 DEBUG nova.virt.libvirt.driver [None req-40d4b039-0274-4cc5-86a5-aff9f7e1a487 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: f3b0f200-2f57-4c25-bdf4-8d17165642fe] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:12:02 compute-0 nova_compute[250018]: 2026-01-20 15:12:02.477 250022 DEBUG nova.virt.libvirt.driver [None req-40d4b039-0274-4cc5-86a5-aff9f7e1a487 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: f3b0f200-2f57-4c25-bdf4-8d17165642fe] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:12:02 compute-0 nova_compute[250018]: 2026-01-20 15:12:02.478 250022 DEBUG nova.virt.libvirt.driver [None req-40d4b039-0274-4cc5-86a5-aff9f7e1a487 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: f3b0f200-2f57-4c25-bdf4-8d17165642fe] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:12:02 compute-0 nova_compute[250018]: 2026-01-20 15:12:02.479 250022 DEBUG nova.virt.libvirt.driver [None req-40d4b039-0274-4cc5-86a5-aff9f7e1a487 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: f3b0f200-2f57-4c25-bdf4-8d17165642fe] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:12:02 compute-0 nova_compute[250018]: 2026-01-20 15:12:02.480 250022 DEBUG nova.virt.libvirt.driver [None req-40d4b039-0274-4cc5-86a5-aff9f7e1a487 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: f3b0f200-2f57-4c25-bdf4-8d17165642fe] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:12:02 compute-0 nova_compute[250018]: 2026-01-20 15:12:02.481 250022 DEBUG nova.virt.libvirt.driver [None req-40d4b039-0274-4cc5-86a5-aff9f7e1a487 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: f3b0f200-2f57-4c25-bdf4-8d17165642fe] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:12:02 compute-0 nova_compute[250018]: 2026-01-20 15:12:02.491 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: f3b0f200-2f57-4c25-bdf4-8d17165642fe] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 15:12:02 compute-0 nova_compute[250018]: 2026-01-20 15:12:02.492 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768921922.4336483, f3b0f200-2f57-4c25-bdf4-8d17165642fe => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:12:02 compute-0 nova_compute[250018]: 2026-01-20 15:12:02.492 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: f3b0f200-2f57-4c25-bdf4-8d17165642fe] VM Paused (Lifecycle Event)
Jan 20 15:12:02 compute-0 nova_compute[250018]: 2026-01-20 15:12:02.531 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: f3b0f200-2f57-4c25-bdf4-8d17165642fe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:12:02 compute-0 nova_compute[250018]: 2026-01-20 15:12:02.535 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768921922.4365835, f3b0f200-2f57-4c25-bdf4-8d17165642fe => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:12:02 compute-0 nova_compute[250018]: 2026-01-20 15:12:02.535 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: f3b0f200-2f57-4c25-bdf4-8d17165642fe] VM Resumed (Lifecycle Event)
Jan 20 15:12:02 compute-0 nova_compute[250018]: 2026-01-20 15:12:02.589 250022 INFO nova.compute.manager [None req-40d4b039-0274-4cc5-86a5-aff9f7e1a487 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: f3b0f200-2f57-4c25-bdf4-8d17165642fe] Took 10.69 seconds to spawn the instance on the hypervisor.
Jan 20 15:12:02 compute-0 nova_compute[250018]: 2026-01-20 15:12:02.589 250022 DEBUG nova.compute.manager [None req-40d4b039-0274-4cc5-86a5-aff9f7e1a487 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: f3b0f200-2f57-4c25-bdf4-8d17165642fe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:12:02 compute-0 nova_compute[250018]: 2026-01-20 15:12:02.596 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: f3b0f200-2f57-4c25-bdf4-8d17165642fe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:12:02 compute-0 nova_compute[250018]: 2026-01-20 15:12:02.598 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: f3b0f200-2f57-4c25-bdf4-8d17165642fe] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:12:02 compute-0 nova_compute[250018]: 2026-01-20 15:12:02.644 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: f3b0f200-2f57-4c25-bdf4-8d17165642fe] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 15:12:02 compute-0 nova_compute[250018]: 2026-01-20 15:12:02.678 250022 INFO nova.compute.manager [None req-40d4b039-0274-4cc5-86a5-aff9f7e1a487 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: f3b0f200-2f57-4c25-bdf4-8d17165642fe] Took 11.72 seconds to build instance.
Jan 20 15:12:02 compute-0 podman[355115]: 2026-01-20 15:12:02.684717185 +0000 UTC m=+0.051466509 container create b043c4d077938bfd0d3800b9b6aae179208c52d07268a2d15dc1a086f842d37f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ef6ea4cb-557a-4dec-844c-6c933ddba0b1, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 20 15:12:02 compute-0 nova_compute[250018]: 2026-01-20 15:12:02.685 250022 INFO nova.virt.libvirt.driver [None req-369fd57f-2b01-47ff-b473-c364dcb82184 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: cbccef94-6ebf-4050-9b57-31486efe9e8f] Creating config drive at /var/lib/nova/instances/cbccef94-6ebf-4050-9b57-31486efe9e8f/disk.config
Jan 20 15:12:02 compute-0 nova_compute[250018]: 2026-01-20 15:12:02.696 250022 DEBUG oslo_concurrency.processutils [None req-369fd57f-2b01-47ff-b473-c364dcb82184 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/cbccef94-6ebf-4050-9b57-31486efe9e8f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0zm62nu8 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:12:02 compute-0 systemd[1]: Started libpod-conmon-b043c4d077938bfd0d3800b9b6aae179208c52d07268a2d15dc1a086f842d37f.scope.
Jan 20 15:12:02 compute-0 nova_compute[250018]: 2026-01-20 15:12:02.730 250022 DEBUG oslo_concurrency.lockutils [None req-40d4b039-0274-4cc5-86a5-aff9f7e1a487 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lock "f3b0f200-2f57-4c25-bdf4-8d17165642fe" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.847s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:12:02 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:12:02 compute-0 podman[355115]: 2026-01-20 15:12:02.659741681 +0000 UTC m=+0.026491025 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 15:12:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59aa1722068164d57c2d8f7686d1628f48f85a0771725d25979932125d2bde82/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 15:12:02 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/3816362426' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:12:02 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/4256123011' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:12:02 compute-0 ceph-mon[74360]: pgmap v2670: 321 pgs: 321 active+clean; 293 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 915 KiB/s rd, 5.5 MiB/s wr, 177 op/s
Jan 20 15:12:02 compute-0 podman[355115]: 2026-01-20 15:12:02.76940945 +0000 UTC m=+0.136158794 container init b043c4d077938bfd0d3800b9b6aae179208c52d07268a2d15dc1a086f842d37f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ef6ea4cb-557a-4dec-844c-6c933ddba0b1, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 20 15:12:02 compute-0 podman[355115]: 2026-01-20 15:12:02.774833927 +0000 UTC m=+0.141583251 container start b043c4d077938bfd0d3800b9b6aae179208c52d07268a2d15dc1a086f842d37f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ef6ea4cb-557a-4dec-844c-6c933ddba0b1, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 20 15:12:02 compute-0 neutron-haproxy-ovnmeta-ef6ea4cb-557a-4dec-844c-6c933ddba0b1[355131]: [NOTICE]   (355137) : New worker (355139) forked
Jan 20 15:12:02 compute-0 neutron-haproxy-ovnmeta-ef6ea4cb-557a-4dec-844c-6c933ddba0b1[355131]: [NOTICE]   (355137) : Loading success.
Jan 20 15:12:02 compute-0 nova_compute[250018]: 2026-01-20 15:12:02.837 250022 DEBUG oslo_concurrency.processutils [None req-369fd57f-2b01-47ff-b473-c364dcb82184 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/cbccef94-6ebf-4050-9b57-31486efe9e8f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0zm62nu8" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:12:02 compute-0 nova_compute[250018]: 2026-01-20 15:12:02.865 250022 DEBUG nova.storage.rbd_utils [None req-369fd57f-2b01-47ff-b473-c364dcb82184 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] rbd image cbccef94-6ebf-4050-9b57-31486efe9e8f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:12:02 compute-0 nova_compute[250018]: 2026-01-20 15:12:02.868 250022 DEBUG oslo_concurrency.processutils [None req-369fd57f-2b01-47ff-b473-c364dcb82184 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/cbccef94-6ebf-4050-9b57-31486efe9e8f/disk.config cbccef94-6ebf-4050-9b57-31486efe9e8f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:12:03 compute-0 nova_compute[250018]: 2026-01-20 15:12:03.028 250022 DEBUG oslo_concurrency.processutils [None req-369fd57f-2b01-47ff-b473-c364dcb82184 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/cbccef94-6ebf-4050-9b57-31486efe9e8f/disk.config cbccef94-6ebf-4050-9b57-31486efe9e8f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.159s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:12:03 compute-0 nova_compute[250018]: 2026-01-20 15:12:03.028 250022 INFO nova.virt.libvirt.driver [None req-369fd57f-2b01-47ff-b473-c364dcb82184 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: cbccef94-6ebf-4050-9b57-31486efe9e8f] Deleting local config drive /var/lib/nova/instances/cbccef94-6ebf-4050-9b57-31486efe9e8f/disk.config because it was imported into RBD.
Jan 20 15:12:03 compute-0 nova_compute[250018]: 2026-01-20 15:12:03.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:12:03 compute-0 nova_compute[250018]: 2026-01-20 15:12:03.052 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 15:12:03 compute-0 nova_compute[250018]: 2026-01-20 15:12:03.052 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:12:03 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:12:03 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:12:03 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:12:03.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:12:03 compute-0 NetworkManager[48960]: <info>  [1768921923.0723] manager: (tap5de19241-ea): new Tun device (/org/freedesktop/NetworkManager/Devices/308)
Jan 20 15:12:03 compute-0 kernel: tap5de19241-ea: entered promiscuous mode
Jan 20 15:12:03 compute-0 systemd-udevd[355006]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 15:12:03 compute-0 ovn_controller[148666]: 2026-01-20T15:12:03Z|00630|binding|INFO|Claiming lport 5de19241-ea15-4a94-8f92-497d86147111 for this chassis.
Jan 20 15:12:03 compute-0 ovn_controller[148666]: 2026-01-20T15:12:03Z|00631|binding|INFO|5de19241-ea15-4a94-8f92-497d86147111: Claiming fa:16:3e:b0:87:db 10.100.0.5
Jan 20 15:12:03 compute-0 nova_compute[250018]: 2026-01-20 15:12:03.076 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:12:03 compute-0 nova_compute[250018]: 2026-01-20 15:12:03.083 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:12:03 compute-0 NetworkManager[48960]: <info>  [1768921923.0863] manager: (patch-provnet-b62c391b-f7a3-4a38-a0df-72ac0383ca74-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/309)
Jan 20 15:12:03 compute-0 nova_compute[250018]: 2026-01-20 15:12:03.085 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:12:03 compute-0 NetworkManager[48960]: <info>  [1768921923.0873] manager: (patch-br-int-to-provnet-b62c391b-f7a3-4a38-a0df-72ac0383ca74): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/310)
Jan 20 15:12:03 compute-0 NetworkManager[48960]: <info>  [1768921923.0910] device (tap5de19241-ea): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 15:12:03 compute-0 NetworkManager[48960]: <info>  [1768921923.0922] device (tap5de19241-ea): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 15:12:03 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:12:03.096 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b0:87:db 10.100.0.5'], port_security=['fa:16:3e:b0:87:db 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'cbccef94-6ebf-4050-9b57-31486efe9e8f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b677f1a9-dbaa-4373-8466-bd9ccf067b91', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a49638950e1543fa8e0d251af5479623', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'feac6571-afda-4567-b3ec-7c23719c9e1e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=76ec1139-009f-49fe-bfde-07c0ef9e8b12, chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=5de19241-ea15-4a94-8f92-497d86147111) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:12:03 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:12:03.097 160071 INFO neutron.agent.ovn.metadata.agent [-] Port 5de19241-ea15-4a94-8f92-497d86147111 in datapath b677f1a9-dbaa-4373-8466-bd9ccf067b91 bound to our chassis
Jan 20 15:12:03 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:12:03.098 160071 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b677f1a9-dbaa-4373-8466-bd9ccf067b91
Jan 20 15:12:03 compute-0 nova_compute[250018]: 2026-01-20 15:12:03.103 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:12:03 compute-0 nova_compute[250018]: 2026-01-20 15:12:03.104 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:12:03 compute-0 nova_compute[250018]: 2026-01-20 15:12:03.104 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:12:03 compute-0 nova_compute[250018]: 2026-01-20 15:12:03.104 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 15:12:03 compute-0 nova_compute[250018]: 2026-01-20 15:12:03.104 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:12:03 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:12:03.109 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[493e848c-49f8-4865-8efc-5a42d761195f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:12:03 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:12:03.110 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb677f1a9-d1 in ovnmeta-b677f1a9-dbaa-4373-8466-bd9ccf067b91 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 15:12:03 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:12:03.113 257604 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb677f1a9-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 15:12:03 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:12:03.113 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[f99bdf84-09c0-4cc8-9f44-f26bd823c5c2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:12:03 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:12:03.114 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[544fe564-4b5e-4382-88da-edb7215c4174]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:12:03 compute-0 systemd-machined[216401]: New machine qemu-79-instance-000000b3.
Jan 20 15:12:03 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:12:03.125 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[9948f748-b745-4cc8-b098-8924e5964cab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:12:03 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:12:03.151 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[7c729638-d689-4a7a-a212-2559425c5334]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:12:03 compute-0 systemd[1]: Started Virtual Machine qemu-79-instance-000000b3.
Jan 20 15:12:03 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:12:03.189 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[eb4db83f-d8b2-47c9-8484-31da3b3434ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:12:03 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:12:03.199 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[30fb7d72-1829-4c5a-ba97-c30d587911ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:12:03 compute-0 NetworkManager[48960]: <info>  [1768921923.2006] manager: (tapb677f1a9-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/311)
Jan 20 15:12:03 compute-0 nova_compute[250018]: 2026-01-20 15:12:03.232 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:12:03 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:12:03.231 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[87e68e9e-a611-4be8-b713-c3f5c49d3090]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:12:03 compute-0 ovn_controller[148666]: 2026-01-20T15:12:03Z|00632|binding|INFO|Releasing lport e83e13a6-6446-4245-ab88-80085510e40d from this chassis (sb_readonly=0)
Jan 20 15:12:03 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:12:03.239 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[c0ca6aba-de74-4c98-9b85-49a9c432c41c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:12:03 compute-0 nova_compute[250018]: 2026-01-20 15:12:03.250 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:12:03 compute-0 ovn_controller[148666]: 2026-01-20T15:12:03Z|00633|binding|INFO|Setting lport 5de19241-ea15-4a94-8f92-497d86147111 ovn-installed in OVS
Jan 20 15:12:03 compute-0 ovn_controller[148666]: 2026-01-20T15:12:03Z|00634|binding|INFO|Setting lport 5de19241-ea15-4a94-8f92-497d86147111 up in Southbound
Jan 20 15:12:03 compute-0 nova_compute[250018]: 2026-01-20 15:12:03.263 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:12:03 compute-0 NetworkManager[48960]: <info>  [1768921923.2637] device (tapb677f1a9-d0): carrier: link connected
Jan 20 15:12:03 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:12:03.267 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[4365eb57-074a-4e4c-894f-dd5102b918a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:12:03 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:12:03.287 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[5249f2d5-9af3-4fa4-a0a6-9b2b4fff91f8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb677f1a9-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a4:c8:34'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 204], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 786398, 'reachable_time': 18063, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 355235, 'error': None, 'target': 'ovnmeta-b677f1a9-dbaa-4373-8466-bd9ccf067b91', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:12:03 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:12:03.304 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[2f65d52b-6455-4c4f-a954-356043440724]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea4:c834'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 786398, 'tstamp': 786398}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 355236, 'error': None, 'target': 'ovnmeta-b677f1a9-dbaa-4373-8466-bd9ccf067b91', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:12:03 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:12:03.322 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[d4b3c77b-ff8f-4bf2-bf81-41db3f5826b6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb677f1a9-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a4:c8:34'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 204], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 786398, 'reachable_time': 18063, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 355237, 'error': None, 'target': 'ovnmeta-b677f1a9-dbaa-4373-8466-bd9ccf067b91', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:12:03 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:12:03.356 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[7c144ecf-4ef3-4287-ab96-863b66e91db3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:12:03 compute-0 nova_compute[250018]: 2026-01-20 15:12:03.417 250022 DEBUG nova.network.neutron [req-b912ac4d-e2f7-4e03-bc66-8bd16fd636c9 req-0920d982-d87b-4caf-b703-021d4f325b0d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: cbccef94-6ebf-4050-9b57-31486efe9e8f] Updated VIF entry in instance network info cache for port 5de19241-ea15-4a94-8f92-497d86147111. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 15:12:03 compute-0 nova_compute[250018]: 2026-01-20 15:12:03.418 250022 DEBUG nova.network.neutron [req-b912ac4d-e2f7-4e03-bc66-8bd16fd636c9 req-0920d982-d87b-4caf-b703-021d4f325b0d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: cbccef94-6ebf-4050-9b57-31486efe9e8f] Updating instance_info_cache with network_info: [{"id": "5de19241-ea15-4a94-8f92-497d86147111", "address": "fa:16:3e:b0:87:db", "network": {"id": "b677f1a9-dbaa-4373-8466-bd9ccf067b91", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-408170906-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a49638950e1543fa8e0d251af5479623", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5de19241-ea", "ovs_interfaceid": "5de19241-ea15-4a94-8f92-497d86147111", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:12:03 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:12:03.421 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[1e9407ca-de32-4528-819d-7fe51ce8a642]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:12:03 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:12:03.423 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb677f1a9-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:12:03 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:12:03.423 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:12:03 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:12:03.424 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb677f1a9-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:12:03 compute-0 kernel: tapb677f1a9-d0: entered promiscuous mode
Jan 20 15:12:03 compute-0 NetworkManager[48960]: <info>  [1768921923.4267] manager: (tapb677f1a9-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/312)
Jan 20 15:12:03 compute-0 nova_compute[250018]: 2026-01-20 15:12:03.428 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:12:03 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:12:03.433 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb677f1a9-d0, col_values=(('external_ids', {'iface-id': '1aa285ce-a9ae-4d1e-b4b9-c72f4e0b8d65'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:12:03 compute-0 ovn_controller[148666]: 2026-01-20T15:12:03Z|00635|binding|INFO|Releasing lport 1aa285ce-a9ae-4d1e-b4b9-c72f4e0b8d65 from this chassis (sb_readonly=0)
Jan 20 15:12:03 compute-0 nova_compute[250018]: 2026-01-20 15:12:03.435 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:12:03 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:12:03.437 160071 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b677f1a9-dbaa-4373-8466-bd9ccf067b91.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b677f1a9-dbaa-4373-8466-bd9ccf067b91.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 15:12:03 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:12:03.438 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[c553f6bd-6650-4754-b8e8-74468e653fb5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:12:03 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:12:03.438 160071 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 15:12:03 compute-0 ovn_metadata_agent[160049]: global
Jan 20 15:12:03 compute-0 ovn_metadata_agent[160049]:     log         /dev/log local0 debug
Jan 20 15:12:03 compute-0 ovn_metadata_agent[160049]:     log-tag     haproxy-metadata-proxy-b677f1a9-dbaa-4373-8466-bd9ccf067b91
Jan 20 15:12:03 compute-0 ovn_metadata_agent[160049]:     user        root
Jan 20 15:12:03 compute-0 ovn_metadata_agent[160049]:     group       root
Jan 20 15:12:03 compute-0 ovn_metadata_agent[160049]:     maxconn     1024
Jan 20 15:12:03 compute-0 ovn_metadata_agent[160049]:     pidfile     /var/lib/neutron/external/pids/b677f1a9-dbaa-4373-8466-bd9ccf067b91.pid.haproxy
Jan 20 15:12:03 compute-0 ovn_metadata_agent[160049]:     daemon
Jan 20 15:12:03 compute-0 ovn_metadata_agent[160049]: 
Jan 20 15:12:03 compute-0 ovn_metadata_agent[160049]: defaults
Jan 20 15:12:03 compute-0 ovn_metadata_agent[160049]:     log global
Jan 20 15:12:03 compute-0 ovn_metadata_agent[160049]:     mode http
Jan 20 15:12:03 compute-0 ovn_metadata_agent[160049]:     option httplog
Jan 20 15:12:03 compute-0 ovn_metadata_agent[160049]:     option dontlognull
Jan 20 15:12:03 compute-0 ovn_metadata_agent[160049]:     option http-server-close
Jan 20 15:12:03 compute-0 ovn_metadata_agent[160049]:     option forwardfor
Jan 20 15:12:03 compute-0 ovn_metadata_agent[160049]:     retries                 3
Jan 20 15:12:03 compute-0 ovn_metadata_agent[160049]:     timeout http-request    30s
Jan 20 15:12:03 compute-0 ovn_metadata_agent[160049]:     timeout connect         30s
Jan 20 15:12:03 compute-0 ovn_metadata_agent[160049]:     timeout client          32s
Jan 20 15:12:03 compute-0 ovn_metadata_agent[160049]:     timeout server          32s
Jan 20 15:12:03 compute-0 ovn_metadata_agent[160049]:     timeout http-keep-alive 30s
Jan 20 15:12:03 compute-0 ovn_metadata_agent[160049]: 
Jan 20 15:12:03 compute-0 ovn_metadata_agent[160049]: 
Jan 20 15:12:03 compute-0 ovn_metadata_agent[160049]: listen listener
Jan 20 15:12:03 compute-0 ovn_metadata_agent[160049]:     bind 169.254.169.254:80
Jan 20 15:12:03 compute-0 ovn_metadata_agent[160049]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 15:12:03 compute-0 ovn_metadata_agent[160049]:     http-request add-header X-OVN-Network-ID b677f1a9-dbaa-4373-8466-bd9ccf067b91
Jan 20 15:12:03 compute-0 ovn_metadata_agent[160049]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 15:12:03 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:12:03.439 160071 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b677f1a9-dbaa-4373-8466-bd9ccf067b91', 'env', 'PROCESS_TAG=haproxy-b677f1a9-dbaa-4373-8466-bd9ccf067b91', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b677f1a9-dbaa-4373-8466-bd9ccf067b91.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 15:12:03 compute-0 nova_compute[250018]: 2026-01-20 15:12:03.449 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:12:03 compute-0 nova_compute[250018]: 2026-01-20 15:12:03.514 250022 DEBUG oslo_concurrency.lockutils [req-b912ac4d-e2f7-4e03-bc66-8bd16fd636c9 req-0920d982-d87b-4caf-b703-021d4f325b0d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-cbccef94-6ebf-4050-9b57-31486efe9e8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:12:03 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:12:03 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1600982363' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:12:03 compute-0 nova_compute[250018]: 2026-01-20 15:12:03.572 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:12:03 compute-0 nova_compute[250018]: 2026-01-20 15:12:03.699 250022 DEBUG nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] skipping disk for instance-000000b3 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 15:12:03 compute-0 nova_compute[250018]: 2026-01-20 15:12:03.700 250022 DEBUG nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] skipping disk for instance-000000b3 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 15:12:03 compute-0 nova_compute[250018]: 2026-01-20 15:12:03.702 250022 DEBUG nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] skipping disk for instance-000000b2 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 15:12:03 compute-0 nova_compute[250018]: 2026-01-20 15:12:03.702 250022 DEBUG nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] skipping disk for instance-000000b2 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 15:12:03 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/1600982363' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:12:03 compute-0 podman[355270]: 2026-01-20 15:12:03.828959601 +0000 UTC m=+0.059078364 container create df19398649d2934ed4a5d16070ea36088ebc4d820ae446436dba83d36e2b8a1b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b677f1a9-dbaa-4373-8466-bd9ccf067b91, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 20 15:12:03 compute-0 systemd[1]: Started libpod-conmon-df19398649d2934ed4a5d16070ea36088ebc4d820ae446436dba83d36e2b8a1b.scope.
Jan 20 15:12:03 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:12:03 compute-0 nova_compute[250018]: 2026-01-20 15:12:03.895 250022 WARNING nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:12:03 compute-0 podman[355270]: 2026-01-20 15:12:03.800478013 +0000 UTC m=+0.030596806 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 15:12:03 compute-0 nova_compute[250018]: 2026-01-20 15:12:03.896 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4103MB free_disk=20.946460723876953GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 15:12:03 compute-0 nova_compute[250018]: 2026-01-20 15:12:03.897 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:12:03 compute-0 nova_compute[250018]: 2026-01-20 15:12:03.897 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:12:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab1655e6cb05c4c545a543e40285e17bc3ce2743f3e29e06bbbd893905fa37e3/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 15:12:03 compute-0 podman[355270]: 2026-01-20 15:12:03.910145781 +0000 UTC m=+0.140264564 container init df19398649d2934ed4a5d16070ea36088ebc4d820ae446436dba83d36e2b8a1b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b677f1a9-dbaa-4373-8466-bd9ccf067b91, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 20 15:12:03 compute-0 podman[355270]: 2026-01-20 15:12:03.916153384 +0000 UTC m=+0.146272157 container start df19398649d2934ed4a5d16070ea36088ebc4d820ae446436dba83d36e2b8a1b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b677f1a9-dbaa-4373-8466-bd9ccf067b91, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 20 15:12:03 compute-0 neutron-haproxy-ovnmeta-b677f1a9-dbaa-4373-8466-bd9ccf067b91[355285]: [NOTICE]   (355289) : New worker (355291) forked
Jan 20 15:12:03 compute-0 neutron-haproxy-ovnmeta-b677f1a9-dbaa-4373-8466-bd9ccf067b91[355285]: [NOTICE]   (355289) : Loading success.
Jan 20 15:12:03 compute-0 nova_compute[250018]: 2026-01-20 15:12:03.998 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Instance f3b0f200-2f57-4c25-bdf4-8d17165642fe actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 15:12:03 compute-0 nova_compute[250018]: 2026-01-20 15:12:03.999 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Instance cbccef94-6ebf-4050-9b57-31486efe9e8f actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 15:12:03 compute-0 nova_compute[250018]: 2026-01-20 15:12:03.999 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 15:12:03 compute-0 nova_compute[250018]: 2026-01-20 15:12:03.999 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 15:12:04 compute-0 nova_compute[250018]: 2026-01-20 15:12:04.056 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:12:04 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:12:04 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:12:04 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:12:04.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:12:04 compute-0 nova_compute[250018]: 2026-01-20 15:12:04.343 250022 DEBUG nova.compute.manager [req-48da20cd-a7f5-429f-9591-c6184f2be045 req-73c92ad1-754c-406a-9817-3b1b5233976f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: cbccef94-6ebf-4050-9b57-31486efe9e8f] Received event network-vif-plugged-5de19241-ea15-4a94-8f92-497d86147111 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:12:04 compute-0 nova_compute[250018]: 2026-01-20 15:12:04.344 250022 DEBUG oslo_concurrency.lockutils [req-48da20cd-a7f5-429f-9591-c6184f2be045 req-73c92ad1-754c-406a-9817-3b1b5233976f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "cbccef94-6ebf-4050-9b57-31486efe9e8f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:12:04 compute-0 nova_compute[250018]: 2026-01-20 15:12:04.345 250022 DEBUG oslo_concurrency.lockutils [req-48da20cd-a7f5-429f-9591-c6184f2be045 req-73c92ad1-754c-406a-9817-3b1b5233976f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "cbccef94-6ebf-4050-9b57-31486efe9e8f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:12:04 compute-0 nova_compute[250018]: 2026-01-20 15:12:04.345 250022 DEBUG oslo_concurrency.lockutils [req-48da20cd-a7f5-429f-9591-c6184f2be045 req-73c92ad1-754c-406a-9817-3b1b5233976f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "cbccef94-6ebf-4050-9b57-31486efe9e8f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:12:04 compute-0 nova_compute[250018]: 2026-01-20 15:12:04.345 250022 DEBUG nova.compute.manager [req-48da20cd-a7f5-429f-9591-c6184f2be045 req-73c92ad1-754c-406a-9817-3b1b5233976f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: cbccef94-6ebf-4050-9b57-31486efe9e8f] Processing event network-vif-plugged-5de19241-ea15-4a94-8f92-497d86147111 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 20 15:12:04 compute-0 nova_compute[250018]: 2026-01-20 15:12:04.416 250022 DEBUG nova.compute.manager [req-7a818454-90c0-43e8-88c5-b6126faa132b req-69e547e4-1881-4914-bd28-88d70a307dca 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f3b0f200-2f57-4c25-bdf4-8d17165642fe] Received event network-vif-plugged-25ad2c72-7d4d-4eb9-bf00-a5c42aa9d301 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:12:04 compute-0 nova_compute[250018]: 2026-01-20 15:12:04.417 250022 DEBUG oslo_concurrency.lockutils [req-7a818454-90c0-43e8-88c5-b6126faa132b req-69e547e4-1881-4914-bd28-88d70a307dca 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "f3b0f200-2f57-4c25-bdf4-8d17165642fe-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:12:04 compute-0 nova_compute[250018]: 2026-01-20 15:12:04.417 250022 DEBUG oslo_concurrency.lockutils [req-7a818454-90c0-43e8-88c5-b6126faa132b req-69e547e4-1881-4914-bd28-88d70a307dca 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "f3b0f200-2f57-4c25-bdf4-8d17165642fe-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:12:04 compute-0 nova_compute[250018]: 2026-01-20 15:12:04.417 250022 DEBUG oslo_concurrency.lockutils [req-7a818454-90c0-43e8-88c5-b6126faa132b req-69e547e4-1881-4914-bd28-88d70a307dca 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "f3b0f200-2f57-4c25-bdf4-8d17165642fe-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:12:04 compute-0 nova_compute[250018]: 2026-01-20 15:12:04.418 250022 DEBUG nova.compute.manager [req-7a818454-90c0-43e8-88c5-b6126faa132b req-69e547e4-1881-4914-bd28-88d70a307dca 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f3b0f200-2f57-4c25-bdf4-8d17165642fe] No waiting events found dispatching network-vif-plugged-25ad2c72-7d4d-4eb9-bf00-a5c42aa9d301 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:12:04 compute-0 nova_compute[250018]: 2026-01-20 15:12:04.418 250022 WARNING nova.compute.manager [req-7a818454-90c0-43e8-88c5-b6126faa132b req-69e547e4-1881-4914-bd28-88d70a307dca 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f3b0f200-2f57-4c25-bdf4-8d17165642fe] Received unexpected event network-vif-plugged-25ad2c72-7d4d-4eb9-bf00-a5c42aa9d301 for instance with vm_state active and task_state None.
Jan 20 15:12:04 compute-0 nova_compute[250018]: 2026-01-20 15:12:04.436 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768921924.4360547, cbccef94-6ebf-4050-9b57-31486efe9e8f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:12:04 compute-0 nova_compute[250018]: 2026-01-20 15:12:04.436 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: cbccef94-6ebf-4050-9b57-31486efe9e8f] VM Started (Lifecycle Event)
Jan 20 15:12:04 compute-0 nova_compute[250018]: 2026-01-20 15:12:04.438 250022 DEBUG nova.compute.manager [None req-369fd57f-2b01-47ff-b473-c364dcb82184 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: cbccef94-6ebf-4050-9b57-31486efe9e8f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 15:12:04 compute-0 nova_compute[250018]: 2026-01-20 15:12:04.447 250022 DEBUG nova.virt.libvirt.driver [None req-369fd57f-2b01-47ff-b473-c364dcb82184 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: cbccef94-6ebf-4050-9b57-31486efe9e8f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 15:12:04 compute-0 nova_compute[250018]: 2026-01-20 15:12:04.459 250022 INFO nova.virt.libvirt.driver [-] [instance: cbccef94-6ebf-4050-9b57-31486efe9e8f] Instance spawned successfully.
Jan 20 15:12:04 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:12:04 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1997130354' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:12:04 compute-0 nova_compute[250018]: 2026-01-20 15:12:04.461 250022 INFO nova.compute.manager [None req-369fd57f-2b01-47ff-b473-c364dcb82184 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: cbccef94-6ebf-4050-9b57-31486efe9e8f] Took 4.52 seconds to spawn the instance on the hypervisor.
Jan 20 15:12:04 compute-0 nova_compute[250018]: 2026-01-20 15:12:04.462 250022 DEBUG nova.compute.manager [None req-369fd57f-2b01-47ff-b473-c364dcb82184 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: cbccef94-6ebf-4050-9b57-31486efe9e8f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:12:04 compute-0 nova_compute[250018]: 2026-01-20 15:12:04.463 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: cbccef94-6ebf-4050-9b57-31486efe9e8f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:12:04 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2671: 321 pgs: 321 active+clean; 275 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 4.4 MiB/s wr, 214 op/s
Jan 20 15:12:04 compute-0 nova_compute[250018]: 2026-01-20 15:12:04.471 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: cbccef94-6ebf-4050-9b57-31486efe9e8f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:12:04 compute-0 nova_compute[250018]: 2026-01-20 15:12:04.477 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:12:04 compute-0 nova_compute[250018]: 2026-01-20 15:12:04.483 250022 DEBUG nova.compute.provider_tree [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:12:04 compute-0 nova_compute[250018]: 2026-01-20 15:12:04.507 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: cbccef94-6ebf-4050-9b57-31486efe9e8f] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 15:12:04 compute-0 nova_compute[250018]: 2026-01-20 15:12:04.507 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768921924.4367104, cbccef94-6ebf-4050-9b57-31486efe9e8f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:12:04 compute-0 nova_compute[250018]: 2026-01-20 15:12:04.507 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: cbccef94-6ebf-4050-9b57-31486efe9e8f] VM Paused (Lifecycle Event)
Jan 20 15:12:04 compute-0 nova_compute[250018]: 2026-01-20 15:12:04.515 250022 DEBUG nova.scheduler.client.report [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:12:04 compute-0 nova_compute[250018]: 2026-01-20 15:12:04.550 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: cbccef94-6ebf-4050-9b57-31486efe9e8f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:12:04 compute-0 nova_compute[250018]: 2026-01-20 15:12:04.558 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 15:12:04 compute-0 nova_compute[250018]: 2026-01-20 15:12:04.558 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.661s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:12:04 compute-0 nova_compute[250018]: 2026-01-20 15:12:04.559 250022 INFO nova.compute.manager [None req-369fd57f-2b01-47ff-b473-c364dcb82184 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: cbccef94-6ebf-4050-9b57-31486efe9e8f] Took 11.40 seconds to build instance.
Jan 20 15:12:04 compute-0 nova_compute[250018]: 2026-01-20 15:12:04.561 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768921924.4467852, cbccef94-6ebf-4050-9b57-31486efe9e8f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:12:04 compute-0 nova_compute[250018]: 2026-01-20 15:12:04.561 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: cbccef94-6ebf-4050-9b57-31486efe9e8f] VM Resumed (Lifecycle Event)
Jan 20 15:12:04 compute-0 nova_compute[250018]: 2026-01-20 15:12:04.590 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: cbccef94-6ebf-4050-9b57-31486efe9e8f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:12:04 compute-0 nova_compute[250018]: 2026-01-20 15:12:04.591 250022 DEBUG oslo_concurrency.lockutils [None req-369fd57f-2b01-47ff-b473-c364dcb82184 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Lock "cbccef94-6ebf-4050-9b57-31486efe9e8f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.508s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:12:04 compute-0 nova_compute[250018]: 2026-01-20 15:12:04.594 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: cbccef94-6ebf-4050-9b57-31486efe9e8f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:12:04 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/1997130354' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:12:04 compute-0 ceph-mon[74360]: pgmap v2671: 321 pgs: 321 active+clean; 275 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 4.4 MiB/s wr, 214 op/s
Jan 20 15:12:05 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:12:05 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:12:05 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:12:05.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:12:05 compute-0 nova_compute[250018]: 2026-01-20 15:12:05.072 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:12:05 compute-0 nova_compute[250018]: 2026-01-20 15:12:05.559 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:12:06 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:12:06 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:12:06 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:12:06.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:12:06 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2672: 321 pgs: 321 active+clean; 247 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.9 MiB/s rd, 3.9 MiB/s wr, 336 op/s
Jan 20 15:12:06 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e392 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:12:06 compute-0 nova_compute[250018]: 2026-01-20 15:12:06.662 250022 DEBUG nova.compute.manager [req-057c2858-2d0d-4111-8863-43bbf558bf24 req-572a8618-1f8a-452c-bf40-7b6355ed463d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: cbccef94-6ebf-4050-9b57-31486efe9e8f] Received event network-vif-plugged-5de19241-ea15-4a94-8f92-497d86147111 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:12:06 compute-0 nova_compute[250018]: 2026-01-20 15:12:06.663 250022 DEBUG oslo_concurrency.lockutils [req-057c2858-2d0d-4111-8863-43bbf558bf24 req-572a8618-1f8a-452c-bf40-7b6355ed463d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "cbccef94-6ebf-4050-9b57-31486efe9e8f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:12:06 compute-0 nova_compute[250018]: 2026-01-20 15:12:06.663 250022 DEBUG oslo_concurrency.lockutils [req-057c2858-2d0d-4111-8863-43bbf558bf24 req-572a8618-1f8a-452c-bf40-7b6355ed463d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "cbccef94-6ebf-4050-9b57-31486efe9e8f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:12:06 compute-0 nova_compute[250018]: 2026-01-20 15:12:06.664 250022 DEBUG oslo_concurrency.lockutils [req-057c2858-2d0d-4111-8863-43bbf558bf24 req-572a8618-1f8a-452c-bf40-7b6355ed463d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "cbccef94-6ebf-4050-9b57-31486efe9e8f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:12:06 compute-0 nova_compute[250018]: 2026-01-20 15:12:06.664 250022 DEBUG nova.compute.manager [req-057c2858-2d0d-4111-8863-43bbf558bf24 req-572a8618-1f8a-452c-bf40-7b6355ed463d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: cbccef94-6ebf-4050-9b57-31486efe9e8f] No waiting events found dispatching network-vif-plugged-5de19241-ea15-4a94-8f92-497d86147111 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:12:06 compute-0 nova_compute[250018]: 2026-01-20 15:12:06.664 250022 WARNING nova.compute.manager [req-057c2858-2d0d-4111-8863-43bbf558bf24 req-572a8618-1f8a-452c-bf40-7b6355ed463d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: cbccef94-6ebf-4050-9b57-31486efe9e8f] Received unexpected event network-vif-plugged-5de19241-ea15-4a94-8f92-497d86147111 for instance with vm_state active and task_state None.
Jan 20 15:12:07 compute-0 nova_compute[250018]: 2026-01-20 15:12:07.055 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:12:07 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:12:07 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:12:07 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:12:07.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:12:07 compute-0 ceph-mon[74360]: pgmap v2672: 321 pgs: 321 active+clean; 247 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.9 MiB/s rd, 3.9 MiB/s wr, 336 op/s
Jan 20 15:12:08 compute-0 nova_compute[250018]: 2026-01-20 15:12:08.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:12:08 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:12:08 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:12:08 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:12:08.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:12:08 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2673: 321 pgs: 321 active+clean; 247 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 1.8 MiB/s wr, 255 op/s
Jan 20 15:12:09 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:12:09 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:12:09 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:12:09.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:12:09 compute-0 ceph-mon[74360]: pgmap v2673: 321 pgs: 321 active+clean; 247 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 1.8 MiB/s wr, 255 op/s
Jan 20 15:12:10 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:12:10 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:12:10 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:12:10.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:12:10 compute-0 nova_compute[250018]: 2026-01-20 15:12:10.114 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:12:10 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2674: 321 pgs: 321 active+clean; 247 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.5 MiB/s rd, 1.8 MiB/s wr, 287 op/s
Jan 20 15:12:11 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:12:11 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:12:11 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:12:11.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:12:11 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e392 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:12:11 compute-0 ceph-mon[74360]: pgmap v2674: 321 pgs: 321 active+clean; 247 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.5 MiB/s rd, 1.8 MiB/s wr, 287 op/s
Jan 20 15:12:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 15:12:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:12:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 20 15:12:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:12:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0010118227131025498 of space, bias 1.0, pg target 0.30354681393076494 quantized to 32 (current 32)
Jan 20 15:12:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:12:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.00432773677414852 of space, bias 1.0, pg target 1.298321032244556 quantized to 32 (current 32)
Jan 20 15:12:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:12:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4344349060115393e-05 quantized to 32 (current 32)
Jan 20 15:12:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:12:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019036880001861158 of space, bias 1.0, pg target 0.5692027120556487 quantized to 32 (current 32)
Jan 20 15:12:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:12:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 32)
Jan 20 15:12:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:12:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 15:12:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:12:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021737739624046157 quantized to 32 (current 32)
Jan 20 15:12:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:12:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Jan 20 15:12:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:12:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 15:12:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:12:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Jan 20 15:12:12 compute-0 nova_compute[250018]: 2026-01-20 15:12:12.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:12:12 compute-0 nova_compute[250018]: 2026-01-20 15:12:12.052 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 15:12:12 compute-0 nova_compute[250018]: 2026-01-20 15:12:12.052 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 15:12:12 compute-0 nova_compute[250018]: 2026-01-20 15:12:12.057 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:12:12 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:12:12 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:12:12 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:12:12.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:12:12 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2675: 321 pgs: 321 active+clean; 247 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.3 MiB/s rd, 77 KiB/s wr, 245 op/s
Jan 20 15:12:12 compute-0 nova_compute[250018]: 2026-01-20 15:12:12.820 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "refresh_cache-f3b0f200-2f57-4c25-bdf4-8d17165642fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:12:12 compute-0 nova_compute[250018]: 2026-01-20 15:12:12.820 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquired lock "refresh_cache-f3b0f200-2f57-4c25-bdf4-8d17165642fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:12:12 compute-0 nova_compute[250018]: 2026-01-20 15:12:12.820 250022 DEBUG nova.network.neutron [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] [instance: f3b0f200-2f57-4c25-bdf4-8d17165642fe] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 20 15:12:12 compute-0 nova_compute[250018]: 2026-01-20 15:12:12.821 250022 DEBUG nova.objects.instance [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lazy-loading 'info_cache' on Instance uuid f3b0f200-2f57-4c25-bdf4-8d17165642fe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:12:13 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:12:13 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:12:13 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:12:13.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:12:13 compute-0 ceph-mon[74360]: pgmap v2675: 321 pgs: 321 active+clean; 247 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.3 MiB/s rd, 77 KiB/s wr, 245 op/s
Jan 20 15:12:14 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:12:14 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:12:14 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:12:14.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:12:14 compute-0 nova_compute[250018]: 2026-01-20 15:12:14.132 250022 DEBUG nova.compute.manager [req-b8d08781-bf63-4d57-ab39-4759a788bc6b req-cf8f10cc-153c-47a7-b07c-31bc2d96467b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f3b0f200-2f57-4c25-bdf4-8d17165642fe] Received event network-changed-25ad2c72-7d4d-4eb9-bf00-a5c42aa9d301 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:12:14 compute-0 nova_compute[250018]: 2026-01-20 15:12:14.132 250022 DEBUG nova.compute.manager [req-b8d08781-bf63-4d57-ab39-4759a788bc6b req-cf8f10cc-153c-47a7-b07c-31bc2d96467b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f3b0f200-2f57-4c25-bdf4-8d17165642fe] Refreshing instance network info cache due to event network-changed-25ad2c72-7d4d-4eb9-bf00-a5c42aa9d301. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 15:12:14 compute-0 nova_compute[250018]: 2026-01-20 15:12:14.133 250022 DEBUG oslo_concurrency.lockutils [req-b8d08781-bf63-4d57-ab39-4759a788bc6b req-cf8f10cc-153c-47a7-b07c-31bc2d96467b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-f3b0f200-2f57-4c25-bdf4-8d17165642fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:12:14 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2676: 321 pgs: 321 active+clean; 247 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.3 MiB/s rd, 31 KiB/s wr, 228 op/s
Jan 20 15:12:14 compute-0 nova_compute[250018]: 2026-01-20 15:12:14.596 250022 DEBUG nova.compute.manager [req-f435ff12-98af-40e6-9e4f-0fa296aafdb0 req-a5c67be9-2afa-4e9a-99d3-5956d6b51bf4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: cbccef94-6ebf-4050-9b57-31486efe9e8f] Received event network-changed-5de19241-ea15-4a94-8f92-497d86147111 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:12:14 compute-0 nova_compute[250018]: 2026-01-20 15:12:14.597 250022 DEBUG nova.compute.manager [req-f435ff12-98af-40e6-9e4f-0fa296aafdb0 req-a5c67be9-2afa-4e9a-99d3-5956d6b51bf4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: cbccef94-6ebf-4050-9b57-31486efe9e8f] Refreshing instance network info cache due to event network-changed-5de19241-ea15-4a94-8f92-497d86147111. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 15:12:14 compute-0 nova_compute[250018]: 2026-01-20 15:12:14.597 250022 DEBUG oslo_concurrency.lockutils [req-f435ff12-98af-40e6-9e4f-0fa296aafdb0 req-a5c67be9-2afa-4e9a-99d3-5956d6b51bf4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-cbccef94-6ebf-4050-9b57-31486efe9e8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:12:14 compute-0 nova_compute[250018]: 2026-01-20 15:12:14.597 250022 DEBUG oslo_concurrency.lockutils [req-f435ff12-98af-40e6-9e4f-0fa296aafdb0 req-a5c67be9-2afa-4e9a-99d3-5956d6b51bf4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-cbccef94-6ebf-4050-9b57-31486efe9e8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:12:14 compute-0 nova_compute[250018]: 2026-01-20 15:12:14.598 250022 DEBUG nova.network.neutron [req-f435ff12-98af-40e6-9e4f-0fa296aafdb0 req-a5c67be9-2afa-4e9a-99d3-5956d6b51bf4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: cbccef94-6ebf-4050-9b57-31486efe9e8f] Refreshing network info cache for port 5de19241-ea15-4a94-8f92-497d86147111 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 15:12:14 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/1857185143' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:12:14 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/1857185143' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:12:15 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:12:15 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:12:15 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:12:15.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:12:15 compute-0 nova_compute[250018]: 2026-01-20 15:12:15.116 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:12:15 compute-0 ceph-mon[74360]: pgmap v2676: 321 pgs: 321 active+clean; 247 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.3 MiB/s rd, 31 KiB/s wr, 228 op/s
Jan 20 15:12:15 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/514934175' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:12:16 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:12:16 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:12:16 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:12:16.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:12:16 compute-0 ovn_controller[148666]: 2026-01-20T15:12:16Z|00081|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ab:6b:07 10.100.0.12
Jan 20 15:12:16 compute-0 ovn_controller[148666]: 2026-01-20T15:12:16Z|00082|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ab:6b:07 10.100.0.12
Jan 20 15:12:16 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2677: 321 pgs: 321 active+clean; 259 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 1.6 MiB/s wr, 203 op/s
Jan 20 15:12:16 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e392 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:12:16 compute-0 ceph-mon[74360]: pgmap v2677: 321 pgs: 321 active+clean; 259 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 1.6 MiB/s wr, 203 op/s
Jan 20 15:12:17 compute-0 nova_compute[250018]: 2026-01-20 15:12:17.059 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:12:17 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:12:17 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:12:17 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:12:17.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:12:17 compute-0 nova_compute[250018]: 2026-01-20 15:12:17.133 250022 DEBUG nova.network.neutron [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] [instance: f3b0f200-2f57-4c25-bdf4-8d17165642fe] Updating instance_info_cache with network_info: [{"id": "25ad2c72-7d4d-4eb9-bf00-a5c42aa9d301", "address": "fa:16:3e:ab:6b:07", "network": {"id": "ef6ea4cb-557a-4dec-844c-6c933ddba0b1", "bridge": "br-int", "label": "tempest-network-smoke--1896631991", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ed5feeeafe7448a8efb47ab975b0ead", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25ad2c72-7d", "ovs_interfaceid": "25ad2c72-7d4d-4eb9-bf00-a5c42aa9d301", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:12:17 compute-0 nova_compute[250018]: 2026-01-20 15:12:17.167 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Releasing lock "refresh_cache-f3b0f200-2f57-4c25-bdf4-8d17165642fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:12:17 compute-0 nova_compute[250018]: 2026-01-20 15:12:17.168 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] [instance: f3b0f200-2f57-4c25-bdf4-8d17165642fe] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 20 15:12:17 compute-0 nova_compute[250018]: 2026-01-20 15:12:17.168 250022 DEBUG oslo_concurrency.lockutils [req-b8d08781-bf63-4d57-ab39-4759a788bc6b req-cf8f10cc-153c-47a7-b07c-31bc2d96467b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-f3b0f200-2f57-4c25-bdf4-8d17165642fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:12:17 compute-0 nova_compute[250018]: 2026-01-20 15:12:17.168 250022 DEBUG nova.network.neutron [req-b8d08781-bf63-4d57-ab39-4759a788bc6b req-cf8f10cc-153c-47a7-b07c-31bc2d96467b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f3b0f200-2f57-4c25-bdf4-8d17165642fe] Refreshing network info cache for port 25ad2c72-7d4d-4eb9-bf00-a5c42aa9d301 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 15:12:17 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/394228248' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:12:17 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/636621959' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:12:18 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:12:18 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:12:18 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:12:18.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:12:18 compute-0 nova_compute[250018]: 2026-01-20 15:12:18.382 250022 DEBUG nova.network.neutron [req-f435ff12-98af-40e6-9e4f-0fa296aafdb0 req-a5c67be9-2afa-4e9a-99d3-5956d6b51bf4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: cbccef94-6ebf-4050-9b57-31486efe9e8f] Updated VIF entry in instance network info cache for port 5de19241-ea15-4a94-8f92-497d86147111. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 15:12:18 compute-0 nova_compute[250018]: 2026-01-20 15:12:18.382 250022 DEBUG nova.network.neutron [req-f435ff12-98af-40e6-9e4f-0fa296aafdb0 req-a5c67be9-2afa-4e9a-99d3-5956d6b51bf4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: cbccef94-6ebf-4050-9b57-31486efe9e8f] Updating instance_info_cache with network_info: [{"id": "5de19241-ea15-4a94-8f92-497d86147111", "address": "fa:16:3e:b0:87:db", "network": {"id": "b677f1a9-dbaa-4373-8466-bd9ccf067b91", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-408170906-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a49638950e1543fa8e0d251af5479623", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5de19241-ea", "ovs_interfaceid": "5de19241-ea15-4a94-8f92-497d86147111", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:12:18 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2678: 321 pgs: 321 active+clean; 259 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 1.5 MiB/s wr, 69 op/s
Jan 20 15:12:18 compute-0 nova_compute[250018]: 2026-01-20 15:12:18.625 250022 DEBUG oslo_concurrency.lockutils [req-f435ff12-98af-40e6-9e4f-0fa296aafdb0 req-a5c67be9-2afa-4e9a-99d3-5956d6b51bf4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-cbccef94-6ebf-4050-9b57-31486efe9e8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:12:18 compute-0 ceph-mon[74360]: pgmap v2678: 321 pgs: 321 active+clean; 259 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 1.5 MiB/s wr, 69 op/s
Jan 20 15:12:19 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:12:19 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:12:19 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:12:19.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:12:19 compute-0 ovn_controller[148666]: 2026-01-20T15:12:19Z|00083|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b0:87:db 10.100.0.5
Jan 20 15:12:19 compute-0 ovn_controller[148666]: 2026-01-20T15:12:19Z|00084|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b0:87:db 10.100.0.5
Jan 20 15:12:19 compute-0 sudo[355372]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:12:20 compute-0 sudo[355372]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:12:20 compute-0 sudo[355372]: pam_unix(sudo:session): session closed for user root
Jan 20 15:12:20 compute-0 sudo[355397]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:12:20 compute-0 sudo[355397]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:12:20 compute-0 sudo[355397]: pam_unix(sudo:session): session closed for user root
Jan 20 15:12:20 compute-0 nova_compute[250018]: 2026-01-20 15:12:20.079 250022 DEBUG nova.network.neutron [req-b8d08781-bf63-4d57-ab39-4759a788bc6b req-cf8f10cc-153c-47a7-b07c-31bc2d96467b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f3b0f200-2f57-4c25-bdf4-8d17165642fe] Updated VIF entry in instance network info cache for port 25ad2c72-7d4d-4eb9-bf00-a5c42aa9d301. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 15:12:20 compute-0 nova_compute[250018]: 2026-01-20 15:12:20.080 250022 DEBUG nova.network.neutron [req-b8d08781-bf63-4d57-ab39-4759a788bc6b req-cf8f10cc-153c-47a7-b07c-31bc2d96467b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f3b0f200-2f57-4c25-bdf4-8d17165642fe] Updating instance_info_cache with network_info: [{"id": "25ad2c72-7d4d-4eb9-bf00-a5c42aa9d301", "address": "fa:16:3e:ab:6b:07", "network": {"id": "ef6ea4cb-557a-4dec-844c-6c933ddba0b1", "bridge": "br-int", "label": "tempest-network-smoke--1896631991", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ed5feeeafe7448a8efb47ab975b0ead", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25ad2c72-7d", "ovs_interfaceid": "25ad2c72-7d4d-4eb9-bf00-a5c42aa9d301", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:12:20 compute-0 nova_compute[250018]: 2026-01-20 15:12:20.093 250022 DEBUG oslo_concurrency.lockutils [req-b8d08781-bf63-4d57-ab39-4759a788bc6b req-cf8f10cc-153c-47a7-b07c-31bc2d96467b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-f3b0f200-2f57-4c25-bdf4-8d17165642fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:12:20 compute-0 nova_compute[250018]: 2026-01-20 15:12:20.121 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:12:20 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:12:20 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:12:20 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:12:20.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:12:20 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2679: 321 pgs: 321 active+clean; 322 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 4.1 MiB/s wr, 190 op/s
Jan 20 15:12:21 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:12:21 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:12:21 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:12:21.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:12:21 compute-0 ceph-mon[74360]: pgmap v2679: 321 pgs: 321 active+clean; 322 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 4.1 MiB/s wr, 190 op/s
Jan 20 15:12:21 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e392 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:12:21 compute-0 ceph-mon[74360]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #132. Immutable memtables: 0.
Jan 20 15:12:21 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:12:21.592740) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 15:12:21 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:856] [default] [JOB 79] Flushing memtable with next log file: 132
Jan 20 15:12:21 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768921941592840, "job": 79, "event": "flush_started", "num_memtables": 1, "num_entries": 1481, "num_deletes": 257, "total_data_size": 2262891, "memory_usage": 2302768, "flush_reason": "Manual Compaction"}
Jan 20 15:12:21 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:885] [default] [JOB 79] Level-0 flush table #133: started
Jan 20 15:12:21 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768921941604991, "cf_name": "default", "job": 79, "event": "table_file_creation", "file_number": 133, "file_size": 2233908, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 58894, "largest_seqno": 60373, "table_properties": {"data_size": 2227138, "index_size": 3776, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 15019, "raw_average_key_size": 20, "raw_value_size": 2213210, "raw_average_value_size": 2970, "num_data_blocks": 166, "num_entries": 745, "num_filter_entries": 745, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768921817, "oldest_key_time": 1768921817, "file_creation_time": 1768921941, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1688fb6f-f397-4aac-a0e8-874ff91d97a7", "db_session_id": "5V3N2TVXYZBCXP55EZHK", "orig_file_number": 133, "seqno_to_time_mapping": "N/A"}}
Jan 20 15:12:21 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 79] Flush lasted 12274 microseconds, and 5430 cpu microseconds.
Jan 20 15:12:21 compute-0 ceph-mon[74360]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 15:12:21 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:12:21.605038) [db/flush_job.cc:967] [default] [JOB 79] Level-0 flush table #133: 2233908 bytes OK
Jan 20 15:12:21 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:12:21.605061) [db/memtable_list.cc:519] [default] Level-0 commit table #133 started
Jan 20 15:12:21 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:12:21.606552) [db/memtable_list.cc:722] [default] Level-0 commit table #133: memtable #1 done
Jan 20 15:12:21 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:12:21.606565) EVENT_LOG_v1 {"time_micros": 1768921941606561, "job": 79, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 15:12:21 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:12:21.606580) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 15:12:21 compute-0 ceph-mon[74360]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 79] Try to delete WAL files size 2256467, prev total WAL file size 2256467, number of live WAL files 2.
Jan 20 15:12:21 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000129.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:12:21 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:12:21.607294) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032323730' seq:72057594037927935, type:22 .. '6C6F676D0032353231' seq:0, type:0; will stop at (end)
Jan 20 15:12:21 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 80] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 15:12:21 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 79 Base level 0, inputs: [133(2181KB)], [131(11MB)]
Jan 20 15:12:21 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768921941607371, "job": 80, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [133], "files_L6": [131], "score": -1, "input_data_size": 14489072, "oldest_snapshot_seqno": -1}
Jan 20 15:12:21 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 80] Generated table #134: 8999 keys, 14341299 bytes, temperature: kUnknown
Jan 20 15:12:21 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768921941669265, "cf_name": "default", "job": 80, "event": "table_file_creation", "file_number": 134, "file_size": 14341299, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14278872, "index_size": 38826, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22533, "raw_key_size": 235818, "raw_average_key_size": 26, "raw_value_size": 14116396, "raw_average_value_size": 1568, "num_data_blocks": 1503, "num_entries": 8999, "num_filter_entries": 8999, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768917305, "oldest_key_time": 0, "file_creation_time": 1768921941, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1688fb6f-f397-4aac-a0e8-874ff91d97a7", "db_session_id": "5V3N2TVXYZBCXP55EZHK", "orig_file_number": 134, "seqno_to_time_mapping": "N/A"}}
Jan 20 15:12:21 compute-0 ceph-mon[74360]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 15:12:21 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:12:21.669524) [db/compaction/compaction_job.cc:1663] [default] [JOB 80] Compacted 1@0 + 1@6 files to L6 => 14341299 bytes
Jan 20 15:12:21 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:12:21.670768) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 233.9 rd, 231.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.1, 11.7 +0.0 blob) out(13.7 +0.0 blob), read-write-amplify(12.9) write-amplify(6.4) OK, records in: 9530, records dropped: 531 output_compression: NoCompression
Jan 20 15:12:21 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:12:21.670785) EVENT_LOG_v1 {"time_micros": 1768921941670777, "job": 80, "event": "compaction_finished", "compaction_time_micros": 61937, "compaction_time_cpu_micros": 29479, "output_level": 6, "num_output_files": 1, "total_output_size": 14341299, "num_input_records": 9530, "num_output_records": 8999, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 15:12:21 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000133.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:12:21 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768921941671184, "job": 80, "event": "table_file_deletion", "file_number": 133}
Jan 20 15:12:21 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000131.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:12:21 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768921941672945, "job": 80, "event": "table_file_deletion", "file_number": 131}
Jan 20 15:12:21 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:12:21.607162) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:12:21 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:12:21.673002) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:12:21 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:12:21.673008) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:12:21 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:12:21.673010) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:12:21 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:12:21.673012) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:12:21 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:12:21.673014) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:12:22 compute-0 nova_compute[250018]: 2026-01-20 15:12:22.060 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:12:22 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:12:22 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:12:22 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:12:22.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:12:22 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2680: 321 pgs: 321 active+clean; 340 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 4.4 MiB/s wr, 200 op/s
Jan 20 15:12:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:12:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:12:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:12:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:12:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:12:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:12:23 compute-0 nova_compute[250018]: 2026-01-20 15:12:23.055 250022 INFO nova.compute.manager [None req-aede52ec-4bf9-4ede-ba07-8741fa7595f8 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: f3b0f200-2f57-4c25-bdf4-8d17165642fe] Get console output
Jan 20 15:12:23 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:12:23 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:12:23 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:12:23.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:12:23 compute-0 nova_compute[250018]: 2026-01-20 15:12:23.188 331017 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Jan 20 15:12:23 compute-0 ceph-mon[74360]: pgmap v2680: 321 pgs: 321 active+clean; 340 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 4.4 MiB/s wr, 200 op/s
Jan 20 15:12:24 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:12:24 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:12:24 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:12:24.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:12:24 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2681: 321 pgs: 321 active+clean; 343 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 4.5 MiB/s wr, 204 op/s
Jan 20 15:12:25 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:12:25 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:12:25 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:12:25.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:12:25 compute-0 nova_compute[250018]: 2026-01-20 15:12:25.124 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:12:25 compute-0 ceph-mon[74360]: pgmap v2681: 321 pgs: 321 active+clean; 343 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 4.5 MiB/s wr, 204 op/s
Jan 20 15:12:25 compute-0 sudo[355427]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:12:25 compute-0 sudo[355427]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:12:25 compute-0 sudo[355427]: pam_unix(sudo:session): session closed for user root
Jan 20 15:12:25 compute-0 sudo[355452]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:12:25 compute-0 sudo[355452]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:12:25 compute-0 sudo[355452]: pam_unix(sudo:session): session closed for user root
Jan 20 15:12:25 compute-0 sudo[355477]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:12:25 compute-0 sudo[355477]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:12:25 compute-0 sudo[355477]: pam_unix(sudo:session): session closed for user root
Jan 20 15:12:25 compute-0 sudo[355502]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Jan 20 15:12:25 compute-0 sudo[355502]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:12:26 compute-0 nova_compute[250018]: 2026-01-20 15:12:26.094 250022 INFO nova.compute.manager [None req-ba80db3e-edc9-480c-8d68-d8c69ead10b4 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: f3b0f200-2f57-4c25-bdf4-8d17165642fe] Get console output
Jan 20 15:12:26 compute-0 nova_compute[250018]: 2026-01-20 15:12:26.100 331017 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Jan 20 15:12:26 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:12:26 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:12:26 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:12:26.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:12:26 compute-0 podman[355602]: 2026-01-20 15:12:26.339702731 +0000 UTC m=+0.052131757 container exec a602f19ce9ef2d4922e558036fcbd51fac25abd19d28d7df857e5fbe8f959ba3 (image=quay.io/ceph/ceph:v18, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 20 15:12:26 compute-0 podman[355602]: 2026-01-20 15:12:26.43275026 +0000 UTC m=+0.145179276 container exec_died a602f19ce9ef2d4922e558036fcbd51fac25abd19d28d7df857e5fbe8f959ba3 (image=quay.io/ceph/ceph:v18, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 20 15:12:26 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2682: 321 pgs: 321 active+clean; 344 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 4.5 MiB/s wr, 224 op/s
Jan 20 15:12:26 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e392 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:12:26 compute-0 ceph-mon[74360]: pgmap v2682: 321 pgs: 321 active+clean; 344 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 4.5 MiB/s wr, 224 op/s
Jan 20 15:12:26 compute-0 podman[355758]: 2026-01-20 15:12:26.996213221 +0000 UTC m=+0.046312131 container exec a2973a514c852ff316e6ad2ff84585210b4ad01d86cdf2de06f634d9390a88e8 (image=quay.io/ceph/haproxy:2.3, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-haproxy-rgw-default-compute-0-nqkboe)
Jan 20 15:12:27 compute-0 podman[355758]: 2026-01-20 15:12:27.007067514 +0000 UTC m=+0.057166444 container exec_died a2973a514c852ff316e6ad2ff84585210b4ad01d86cdf2de06f634d9390a88e8 (image=quay.io/ceph/haproxy:2.3, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-haproxy-rgw-default-compute-0-nqkboe)
Jan 20 15:12:27 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 20 15:12:27 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:12:27 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 20 15:12:27 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:12:27 compute-0 nova_compute[250018]: 2026-01-20 15:12:27.061 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:12:27 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:12:27 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:12:27 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:12:27.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:12:27 compute-0 podman[355823]: 2026-01-20 15:12:27.197005827 +0000 UTC m=+0.043276019 container exec 0c2042652fe8d88ae47b6333c235a533de4d966f44da3b69a5fc82baeacb10bf (image=quay.io/ceph/keepalived:2.2.4, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-keepalived-rgw-default-compute-0-gcjsxe, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, architecture=x86_64, release=1793, io.buildah.version=1.28.2, distribution-scope=public, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, build-date=2023-02-22T09:23:20, name=keepalived, vendor=Red Hat, Inc., io.k8s.display-name=Keepalived on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=keepalived for Ceph, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, version=2.2.4, com.redhat.component=keepalived-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 20 15:12:27 compute-0 podman[355823]: 2026-01-20 15:12:27.208563028 +0000 UTC m=+0.054833200 container exec_died 0c2042652fe8d88ae47b6333c235a533de4d966f44da3b69a5fc82baeacb10bf (image=quay.io/ceph/keepalived:2.2.4, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-keepalived-rgw-default-compute-0-gcjsxe, version=2.2.4, architecture=x86_64, com.redhat.component=keepalived-container, release=1793, build-date=2023-02-22T09:23:20, name=keepalived, summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-type=git, io.k8s.display-name=Keepalived on RHEL 9, description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.28.2)
Jan 20 15:12:27 compute-0 sudo[355502]: pam_unix(sudo:session): session closed for user root
Jan 20 15:12:27 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 15:12:27 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:12:27 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 15:12:27 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:12:27 compute-0 sudo[355854]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:12:27 compute-0 sudo[355854]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:12:27 compute-0 sudo[355854]: pam_unix(sudo:session): session closed for user root
Jan 20 15:12:27 compute-0 sudo[355879]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:12:27 compute-0 sudo[355879]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:12:27 compute-0 sudo[355879]: pam_unix(sudo:session): session closed for user root
Jan 20 15:12:27 compute-0 sudo[355904]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:12:27 compute-0 sudo[355904]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:12:27 compute-0 sudo[355904]: pam_unix(sudo:session): session closed for user root
Jan 20 15:12:27 compute-0 sudo[355929]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 15:12:27 compute-0 sudo[355929]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:12:27 compute-0 sshd-session[355425]: Connection closed by authenticating user root 134.122.57.138 port 60620 [preauth]
Jan 20 15:12:27 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 20 15:12:27 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:12:27 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 20 15:12:27 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:12:27 compute-0 sudo[355929]: pam_unix(sudo:session): session closed for user root
Jan 20 15:12:28 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:12:28 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:12:28 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:12:28 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:12:28 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:12:28 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:12:28 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:12:28 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:12:28 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:12:28.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:12:28 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 15:12:28 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:12:28 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 20 15:12:28 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 15:12:28 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 20 15:12:28 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:12:28 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev fa0c2b42-d987-402a-b9f0-184bdf0e4ec6 does not exist
Jan 20 15:12:28 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev f002c65b-0540-4283-b541-2973db373bb3 does not exist
Jan 20 15:12:28 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev b290a6b8-8dc0-4d3b-94f1-caecc3066482 does not exist
Jan 20 15:12:28 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 20 15:12:28 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 15:12:28 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 20 15:12:28 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 15:12:28 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 15:12:28 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:12:28 compute-0 sudo[355986]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:12:28 compute-0 sudo[355986]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:12:28 compute-0 sudo[355986]: pam_unix(sudo:session): session closed for user root
Jan 20 15:12:28 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2683: 321 pgs: 321 active+clean; 344 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 2.9 MiB/s wr, 198 op/s
Jan 20 15:12:28 compute-0 sudo[356011]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:12:28 compute-0 sudo[356011]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:12:28 compute-0 sudo[356011]: pam_unix(sudo:session): session closed for user root
Jan 20 15:12:28 compute-0 sudo[356036]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:12:28 compute-0 sudo[356036]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:12:28 compute-0 sudo[356036]: pam_unix(sudo:session): session closed for user root
Jan 20 15:12:28 compute-0 sudo[356061]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 15:12:28 compute-0 sudo[356061]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:12:29 compute-0 podman[356128]: 2026-01-20 15:12:28.908181345 +0000 UTC m=+0.023358640 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:12:29 compute-0 podman[356128]: 2026-01-20 15:12:29.026050366 +0000 UTC m=+0.141227651 container create 85985d4a48424ea1299053e18972a7918dc46eb031091ea94a35876be6b1a285 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_jennings, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 20 15:12:29 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:12:29 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 15:12:29 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:12:29 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 15:12:29 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 15:12:29 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:12:29 compute-0 ceph-mon[74360]: pgmap v2683: 321 pgs: 321 active+clean; 344 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 2.9 MiB/s wr, 198 op/s
Jan 20 15:12:29 compute-0 systemd[1]: Started libpod-conmon-85985d4a48424ea1299053e18972a7918dc46eb031091ea94a35876be6b1a285.scope.
Jan 20 15:12:29 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:12:29 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:12:29 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:12:29.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:12:29 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:12:29 compute-0 podman[356128]: 2026-01-20 15:12:29.215951578 +0000 UTC m=+0.331128873 container init 85985d4a48424ea1299053e18972a7918dc46eb031091ea94a35876be6b1a285 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_jennings, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 20 15:12:29 compute-0 podman[356128]: 2026-01-20 15:12:29.223122611 +0000 UTC m=+0.338299886 container start 85985d4a48424ea1299053e18972a7918dc46eb031091ea94a35876be6b1a285 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_jennings, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 20 15:12:29 compute-0 podman[356128]: 2026-01-20 15:12:29.226003609 +0000 UTC m=+0.341180864 container attach 85985d4a48424ea1299053e18972a7918dc46eb031091ea94a35876be6b1a285 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_jennings, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 20 15:12:29 compute-0 compassionate_jennings[356144]: 167 167
Jan 20 15:12:29 compute-0 systemd[1]: libpod-85985d4a48424ea1299053e18972a7918dc46eb031091ea94a35876be6b1a285.scope: Deactivated successfully.
Jan 20 15:12:29 compute-0 podman[356128]: 2026-01-20 15:12:29.230746937 +0000 UTC m=+0.345924202 container died 85985d4a48424ea1299053e18972a7918dc46eb031091ea94a35876be6b1a285 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_jennings, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 20 15:12:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-998431e6afdc2e8cbc0d3d526a74d61cd9e90c1eeee3d0b8fe2d7eec2b0771b6-merged.mount: Deactivated successfully.
Jan 20 15:12:29 compute-0 podman[356128]: 2026-01-20 15:12:29.465602892 +0000 UTC m=+0.580780157 container remove 85985d4a48424ea1299053e18972a7918dc46eb031091ea94a35876be6b1a285 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_jennings, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 20 15:12:29 compute-0 systemd[1]: libpod-conmon-85985d4a48424ea1299053e18972a7918dc46eb031091ea94a35876be6b1a285.scope: Deactivated successfully.
Jan 20 15:12:29 compute-0 podman[356170]: 2026-01-20 15:12:29.648522076 +0000 UTC m=+0.052303611 container create f657ae50260e777bb551d2e5c7c61d51e6be6067132cd6023bb74107893d85d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_brown, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 15:12:29 compute-0 systemd[1]: Started libpod-conmon-f657ae50260e777bb551d2e5c7c61d51e6be6067132cd6023bb74107893d85d3.scope.
Jan 20 15:12:29 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:12:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cbe26bf2325346f556b39dba0005ed254b074c56ea45a2dc14131fde4946b56/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 15:12:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cbe26bf2325346f556b39dba0005ed254b074c56ea45a2dc14131fde4946b56/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 15:12:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cbe26bf2325346f556b39dba0005ed254b074c56ea45a2dc14131fde4946b56/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 15:12:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cbe26bf2325346f556b39dba0005ed254b074c56ea45a2dc14131fde4946b56/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 15:12:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cbe26bf2325346f556b39dba0005ed254b074c56ea45a2dc14131fde4946b56/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 15:12:29 compute-0 podman[356170]: 2026-01-20 15:12:29.618710752 +0000 UTC m=+0.022492317 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:12:29 compute-0 podman[356170]: 2026-01-20 15:12:29.801789111 +0000 UTC m=+0.205570646 container init f657ae50260e777bb551d2e5c7c61d51e6be6067132cd6023bb74107893d85d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_brown, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 20 15:12:29 compute-0 podman[356170]: 2026-01-20 15:12:29.810032133 +0000 UTC m=+0.213813668 container start f657ae50260e777bb551d2e5c7c61d51e6be6067132cd6023bb74107893d85d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_brown, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 20 15:12:29 compute-0 podman[356170]: 2026-01-20 15:12:29.814122743 +0000 UTC m=+0.217904278 container attach f657ae50260e777bb551d2e5c7c61d51e6be6067132cd6023bb74107893d85d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_brown, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 20 15:12:30 compute-0 nova_compute[250018]: 2026-01-20 15:12:30.127 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:12:30 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:12:30 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:12:30 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:12:30.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:12:30 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2684: 321 pgs: 321 active+clean; 344 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 2.9 MiB/s wr, 199 op/s
Jan 20 15:12:30 compute-0 podman[356194]: 2026-01-20 15:12:30.487567769 +0000 UTC m=+0.080031540 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 20 15:12:30 compute-0 podman[356193]: 2026-01-20 15:12:30.510117947 +0000 UTC m=+0.102817334 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Jan 20 15:12:30 compute-0 awesome_brown[356187]: --> passed data devices: 0 physical, 1 LVM
Jan 20 15:12:30 compute-0 awesome_brown[356187]: --> relative data size: 1.0
Jan 20 15:12:30 compute-0 awesome_brown[356187]: --> All data devices are unavailable
Jan 20 15:12:30 compute-0 systemd[1]: libpod-f657ae50260e777bb551d2e5c7c61d51e6be6067132cd6023bb74107893d85d3.scope: Deactivated successfully.
Jan 20 15:12:30 compute-0 podman[356170]: 2026-01-20 15:12:30.614147294 +0000 UTC m=+1.017928829 container died f657ae50260e777bb551d2e5c7c61d51e6be6067132cd6023bb74107893d85d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_brown, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 15:12:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-4cbe26bf2325346f556b39dba0005ed254b074c56ea45a2dc14131fde4946b56-merged.mount: Deactivated successfully.
Jan 20 15:12:30 compute-0 podman[356170]: 2026-01-20 15:12:30.660423802 +0000 UTC m=+1.064205327 container remove f657ae50260e777bb551d2e5c7c61d51e6be6067132cd6023bb74107893d85d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_brown, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 20 15:12:30 compute-0 systemd[1]: libpod-conmon-f657ae50260e777bb551d2e5c7c61d51e6be6067132cd6023bb74107893d85d3.scope: Deactivated successfully.
Jan 20 15:12:30 compute-0 sudo[356061]: pam_unix(sudo:session): session closed for user root
Jan 20 15:12:30 compute-0 sudo[356260]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:12:30 compute-0 sudo[356260]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:12:30 compute-0 sudo[356260]: pam_unix(sudo:session): session closed for user root
Jan 20 15:12:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:12:30.782 160071 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:12:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:12:30.782 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:12:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:12:30.783 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:12:30 compute-0 sudo[356286]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:12:30 compute-0 sudo[356286]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:12:30 compute-0 sudo[356286]: pam_unix(sudo:session): session closed for user root
Jan 20 15:12:30 compute-0 sudo[356311]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:12:30 compute-0 sudo[356311]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:12:30 compute-0 sudo[356311]: pam_unix(sudo:session): session closed for user root
Jan 20 15:12:30 compute-0 sudo[356336]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- lvm list --format json
Jan 20 15:12:30 compute-0 sudo[356336]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:12:30 compute-0 nova_compute[250018]: 2026-01-20 15:12:30.950 250022 DEBUG oslo_concurrency.lockutils [None req-8c65db9e-36cf-4e64-9f15-397a9c5aa635 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Acquiring lock "cbccef94-6ebf-4050-9b57-31486efe9e8f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:12:30 compute-0 nova_compute[250018]: 2026-01-20 15:12:30.951 250022 DEBUG oslo_concurrency.lockutils [None req-8c65db9e-36cf-4e64-9f15-397a9c5aa635 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Lock "cbccef94-6ebf-4050-9b57-31486efe9e8f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:12:30 compute-0 nova_compute[250018]: 2026-01-20 15:12:30.952 250022 DEBUG oslo_concurrency.lockutils [None req-8c65db9e-36cf-4e64-9f15-397a9c5aa635 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Acquiring lock "cbccef94-6ebf-4050-9b57-31486efe9e8f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:12:30 compute-0 nova_compute[250018]: 2026-01-20 15:12:30.952 250022 DEBUG oslo_concurrency.lockutils [None req-8c65db9e-36cf-4e64-9f15-397a9c5aa635 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Lock "cbccef94-6ebf-4050-9b57-31486efe9e8f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:12:30 compute-0 nova_compute[250018]: 2026-01-20 15:12:30.952 250022 DEBUG oslo_concurrency.lockutils [None req-8c65db9e-36cf-4e64-9f15-397a9c5aa635 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Lock "cbccef94-6ebf-4050-9b57-31486efe9e8f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:12:30 compute-0 nova_compute[250018]: 2026-01-20 15:12:30.953 250022 INFO nova.compute.manager [None req-8c65db9e-36cf-4e64-9f15-397a9c5aa635 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: cbccef94-6ebf-4050-9b57-31486efe9e8f] Terminating instance
Jan 20 15:12:30 compute-0 nova_compute[250018]: 2026-01-20 15:12:30.955 250022 DEBUG nova.compute.manager [None req-8c65db9e-36cf-4e64-9f15-397a9c5aa635 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: cbccef94-6ebf-4050-9b57-31486efe9e8f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 15:12:31 compute-0 kernel: tap5de19241-ea (unregistering): left promiscuous mode
Jan 20 15:12:31 compute-0 NetworkManager[48960]: <info>  [1768921951.0169] device (tap5de19241-ea): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 15:12:31 compute-0 ovn_controller[148666]: 2026-01-20T15:12:31Z|00636|binding|INFO|Releasing lport 5de19241-ea15-4a94-8f92-497d86147111 from this chassis (sb_readonly=0)
Jan 20 15:12:31 compute-0 ovn_controller[148666]: 2026-01-20T15:12:31Z|00637|binding|INFO|Setting lport 5de19241-ea15-4a94-8f92-497d86147111 down in Southbound
Jan 20 15:12:31 compute-0 nova_compute[250018]: 2026-01-20 15:12:31.022 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:12:31 compute-0 ovn_controller[148666]: 2026-01-20T15:12:31Z|00638|binding|INFO|Removing iface tap5de19241-ea ovn-installed in OVS
Jan 20 15:12:31 compute-0 nova_compute[250018]: 2026-01-20 15:12:31.026 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:12:31 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:12:31.031 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b0:87:db 10.100.0.5'], port_security=['fa:16:3e:b0:87:db 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'cbccef94-6ebf-4050-9b57-31486efe9e8f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b677f1a9-dbaa-4373-8466-bd9ccf067b91', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a49638950e1543fa8e0d251af5479623', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'feac6571-afda-4567-b3ec-7c23719c9e1e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.203'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=76ec1139-009f-49fe-bfde-07c0ef9e8b12, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=5de19241-ea15-4a94-8f92-497d86147111) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:12:31 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:12:31.032 160071 INFO neutron.agent.ovn.metadata.agent [-] Port 5de19241-ea15-4a94-8f92-497d86147111 in datapath b677f1a9-dbaa-4373-8466-bd9ccf067b91 unbound from our chassis
Jan 20 15:12:31 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:12:31.033 160071 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b677f1a9-dbaa-4373-8466-bd9ccf067b91, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 15:12:31 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:12:31.034 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[9993df08-b902-4ad6-aeb3-30cc4c6042dc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:12:31 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:12:31.035 160071 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b677f1a9-dbaa-4373-8466-bd9ccf067b91 namespace which is not needed anymore
Jan 20 15:12:31 compute-0 nova_compute[250018]: 2026-01-20 15:12:31.042 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:12:31 compute-0 systemd[1]: machine-qemu\x2d79\x2dinstance\x2d000000b3.scope: Deactivated successfully.
Jan 20 15:12:31 compute-0 systemd[1]: machine-qemu\x2d79\x2dinstance\x2d000000b3.scope: Consumed 15.784s CPU time.
Jan 20 15:12:31 compute-0 systemd-machined[216401]: Machine qemu-79-instance-000000b3 terminated.
Jan 20 15:12:31 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:12:31 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:12:31 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:12:31.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:12:31 compute-0 neutron-haproxy-ovnmeta-b677f1a9-dbaa-4373-8466-bd9ccf067b91[355285]: [NOTICE]   (355289) : haproxy version is 2.8.14-c23fe91
Jan 20 15:12:31 compute-0 neutron-haproxy-ovnmeta-b677f1a9-dbaa-4373-8466-bd9ccf067b91[355285]: [NOTICE]   (355289) : path to executable is /usr/sbin/haproxy
Jan 20 15:12:31 compute-0 neutron-haproxy-ovnmeta-b677f1a9-dbaa-4373-8466-bd9ccf067b91[355285]: [WARNING]  (355289) : Exiting Master process...
Jan 20 15:12:31 compute-0 neutron-haproxy-ovnmeta-b677f1a9-dbaa-4373-8466-bd9ccf067b91[355285]: [ALERT]    (355289) : Current worker (355291) exited with code 143 (Terminated)
Jan 20 15:12:31 compute-0 neutron-haproxy-ovnmeta-b677f1a9-dbaa-4373-8466-bd9ccf067b91[355285]: [WARNING]  (355289) : All workers exited. Exiting... (0)
Jan 20 15:12:31 compute-0 systemd[1]: libpod-df19398649d2934ed4a5d16070ea36088ebc4d820ae446436dba83d36e2b8a1b.scope: Deactivated successfully.
Jan 20 15:12:31 compute-0 conmon[355285]: conmon df19398649d2934ed4a5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-df19398649d2934ed4a5d16070ea36088ebc4d820ae446436dba83d36e2b8a1b.scope/container/memory.events
Jan 20 15:12:31 compute-0 podman[356410]: 2026-01-20 15:12:31.171322104 +0000 UTC m=+0.045585141 container died df19398649d2934ed4a5d16070ea36088ebc4d820ae446436dba83d36e2b8a1b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b677f1a9-dbaa-4373-8466-bd9ccf067b91, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202)
Jan 20 15:12:31 compute-0 kernel: tap5de19241-ea: entered promiscuous mode
Jan 20 15:12:31 compute-0 systemd-udevd[356375]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 15:12:31 compute-0 NetworkManager[48960]: <info>  [1768921951.1734] manager: (tap5de19241-ea): new Tun device (/org/freedesktop/NetworkManager/Devices/313)
Jan 20 15:12:31 compute-0 kernel: tap5de19241-ea (unregistering): left promiscuous mode
Jan 20 15:12:31 compute-0 nova_compute[250018]: 2026-01-20 15:12:31.175 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:12:31 compute-0 ovn_controller[148666]: 2026-01-20T15:12:31Z|00639|binding|INFO|Claiming lport 5de19241-ea15-4a94-8f92-497d86147111 for this chassis.
Jan 20 15:12:31 compute-0 ovn_controller[148666]: 2026-01-20T15:12:31Z|00640|binding|INFO|5de19241-ea15-4a94-8f92-497d86147111: Claiming fa:16:3e:b0:87:db 10.100.0.5
Jan 20 15:12:31 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:12:31.186 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b0:87:db 10.100.0.5'], port_security=['fa:16:3e:b0:87:db 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'cbccef94-6ebf-4050-9b57-31486efe9e8f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b677f1a9-dbaa-4373-8466-bd9ccf067b91', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a49638950e1543fa8e0d251af5479623', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'feac6571-afda-4567-b3ec-7c23719c9e1e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.203'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=76ec1139-009f-49fe-bfde-07c0ef9e8b12, chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=5de19241-ea15-4a94-8f92-497d86147111) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:12:31 compute-0 nova_compute[250018]: 2026-01-20 15:12:31.196 250022 INFO nova.virt.libvirt.driver [-] [instance: cbccef94-6ebf-4050-9b57-31486efe9e8f] Instance destroyed successfully.
Jan 20 15:12:31 compute-0 nova_compute[250018]: 2026-01-20 15:12:31.197 250022 DEBUG nova.objects.instance [None req-8c65db9e-36cf-4e64-9f15-397a9c5aa635 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Lazy-loading 'resources' on Instance uuid cbccef94-6ebf-4050-9b57-31486efe9e8f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:12:31 compute-0 ovn_controller[148666]: 2026-01-20T15:12:31Z|00641|binding|INFO|Setting lport 5de19241-ea15-4a94-8f92-497d86147111 ovn-installed in OVS
Jan 20 15:12:31 compute-0 ovn_controller[148666]: 2026-01-20T15:12:31Z|00642|binding|INFO|Setting lport 5de19241-ea15-4a94-8f92-497d86147111 up in Southbound
Jan 20 15:12:31 compute-0 ovn_controller[148666]: 2026-01-20T15:12:31Z|00643|binding|INFO|Releasing lport 5de19241-ea15-4a94-8f92-497d86147111 from this chassis (sb_readonly=1)
Jan 20 15:12:31 compute-0 ovn_controller[148666]: 2026-01-20T15:12:31Z|00644|if_status|INFO|Not setting lport 5de19241-ea15-4a94-8f92-497d86147111 down as sb is readonly
Jan 20 15:12:31 compute-0 nova_compute[250018]: 2026-01-20 15:12:31.201 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:12:31 compute-0 ovn_controller[148666]: 2026-01-20T15:12:31Z|00645|binding|INFO|Removing iface tap5de19241-ea ovn-installed in OVS
Jan 20 15:12:31 compute-0 ovn_controller[148666]: 2026-01-20T15:12:31Z|00646|binding|INFO|Releasing lport 5de19241-ea15-4a94-8f92-497d86147111 from this chassis (sb_readonly=0)
Jan 20 15:12:31 compute-0 ovn_controller[148666]: 2026-01-20T15:12:31Z|00647|binding|INFO|Setting lport 5de19241-ea15-4a94-8f92-497d86147111 down in Southbound
Jan 20 15:12:31 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-df19398649d2934ed4a5d16070ea36088ebc4d820ae446436dba83d36e2b8a1b-userdata-shm.mount: Deactivated successfully.
Jan 20 15:12:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-ab1655e6cb05c4c545a543e40285e17bc3ce2743f3e29e06bbbd893905fa37e3-merged.mount: Deactivated successfully.
Jan 20 15:12:31 compute-0 nova_compute[250018]: 2026-01-20 15:12:31.217 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:12:31 compute-0 nova_compute[250018]: 2026-01-20 15:12:31.219 250022 DEBUG nova.virt.libvirt.vif [None req-8c65db9e-36cf-4e64-9f15-397a9c5aa635 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T15:11:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-image-snapshot-server-1922522280',display_name='tempest-TestVolumeBootPattern-image-snapshot-server-1922522280',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-image-snapshot-server-1922522280',id=179,image_ref='172aeba8-ca4a-4189-856b-42b685d4813d',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCd1i6ksDYk2orgFWClFZiNFO+H9XCYnHrQEiniQ0ayHQQpVJ6Y2R4z0iRJZMzCFIisxk+ekqiBti1LTegyW3GOJQA6J0X8zZkiACZPbuFQJ3znGbrrW2zIMVDTgR37CgQ==',key_name='tempest-keypair-261853780',keypairs=<?>,launch_index=0,launched_at=2026-01-20T15:12:04Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a49638950e1543fa8e0d251af5479623',ramdisk_id='',reservation_id='r-qnfo2jsp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_bdm_v2='True',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_project_name='tempest-TestVolumeBootPattern-194644003',image_owner_user_name='tempest-TestVolumeBootPattern-194644003-project-member',image_root_device_name='/dev/vda',image_signature_verified='False',owner_project_name='tempest-TestVolumeBootPattern-194644003',owner_user_name='tempest-TestVolumeBootPattern-194644003-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T15:12:04Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='bf422e55e158420cbdae75f07a3bb97a',uuid=cbccef94-6ebf-4050-9b57-31486efe9e8f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5de19241-ea15-4a94-8f92-497d86147111", 
"address": "fa:16:3e:b0:87:db", "network": {"id": "b677f1a9-dbaa-4373-8466-bd9ccf067b91", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-408170906-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a49638950e1543fa8e0d251af5479623", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5de19241-ea", "ovs_interfaceid": "5de19241-ea15-4a94-8f92-497d86147111", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 15:12:31 compute-0 nova_compute[250018]: 2026-01-20 15:12:31.220 250022 DEBUG nova.network.os_vif_util [None req-8c65db9e-36cf-4e64-9f15-397a9c5aa635 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Converting VIF {"id": "5de19241-ea15-4a94-8f92-497d86147111", "address": "fa:16:3e:b0:87:db", "network": {"id": "b677f1a9-dbaa-4373-8466-bd9ccf067b91", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-408170906-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a49638950e1543fa8e0d251af5479623", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5de19241-ea", "ovs_interfaceid": "5de19241-ea15-4a94-8f92-497d86147111", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:12:31 compute-0 nova_compute[250018]: 2026-01-20 15:12:31.220 250022 DEBUG nova.network.os_vif_util [None req-8c65db9e-36cf-4e64-9f15-397a9c5aa635 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b0:87:db,bridge_name='br-int',has_traffic_filtering=True,id=5de19241-ea15-4a94-8f92-497d86147111,network=Network(b677f1a9-dbaa-4373-8466-bd9ccf067b91),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5de19241-ea') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:12:31 compute-0 nova_compute[250018]: 2026-01-20 15:12:31.221 250022 DEBUG os_vif [None req-8c65db9e-36cf-4e64-9f15-397a9c5aa635 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b0:87:db,bridge_name='br-int',has_traffic_filtering=True,id=5de19241-ea15-4a94-8f92-497d86147111,network=Network(b677f1a9-dbaa-4373-8466-bd9ccf067b91),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5de19241-ea') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 15:12:31 compute-0 podman[356410]: 2026-01-20 15:12:31.222867075 +0000 UTC m=+0.097130112 container cleanup df19398649d2934ed4a5d16070ea36088ebc4d820ae446436dba83d36e2b8a1b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b677f1a9-dbaa-4373-8466-bd9ccf067b91, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team)
Jan 20 15:12:31 compute-0 nova_compute[250018]: 2026-01-20 15:12:31.223 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:12:31 compute-0 nova_compute[250018]: 2026-01-20 15:12:31.223 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5de19241-ea, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:12:31 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:12:31.226 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b0:87:db 10.100.0.5'], port_security=['fa:16:3e:b0:87:db 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'cbccef94-6ebf-4050-9b57-31486efe9e8f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b677f1a9-dbaa-4373-8466-bd9ccf067b91', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a49638950e1543fa8e0d251af5479623', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'feac6571-afda-4567-b3ec-7c23719c9e1e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.203'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=76ec1139-009f-49fe-bfde-07c0ef9e8b12, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=5de19241-ea15-4a94-8f92-497d86147111) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:12:31 compute-0 nova_compute[250018]: 2026-01-20 15:12:31.226 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 15:12:31 compute-0 nova_compute[250018]: 2026-01-20 15:12:31.229 250022 INFO os_vif [None req-8c65db9e-36cf-4e64-9f15-397a9c5aa635 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b0:87:db,bridge_name='br-int',has_traffic_filtering=True,id=5de19241-ea15-4a94-8f92-497d86147111,network=Network(b677f1a9-dbaa-4373-8466-bd9ccf067b91),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5de19241-ea')
Jan 20 15:12:31 compute-0 systemd[1]: libpod-conmon-df19398649d2934ed4a5d16070ea36088ebc4d820ae446436dba83d36e2b8a1b.scope: Deactivated successfully.
Jan 20 15:12:31 compute-0 podman[356456]: 2026-01-20 15:12:31.289480001 +0000 UTC m=+0.042751934 container remove df19398649d2934ed4a5d16070ea36088ebc4d820ae446436dba83d36e2b8a1b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b677f1a9-dbaa-4373-8466-bd9ccf067b91, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Jan 20 15:12:31 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:12:31.296 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[d8575748-d7cc-48d2-86d0-636a5973903b]: (4, ('Tue Jan 20 03:12:31 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-b677f1a9-dbaa-4373-8466-bd9ccf067b91 (df19398649d2934ed4a5d16070ea36088ebc4d820ae446436dba83d36e2b8a1b)\ndf19398649d2934ed4a5d16070ea36088ebc4d820ae446436dba83d36e2b8a1b\nTue Jan 20 03:12:31 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-b677f1a9-dbaa-4373-8466-bd9ccf067b91 (df19398649d2934ed4a5d16070ea36088ebc4d820ae446436dba83d36e2b8a1b)\ndf19398649d2934ed4a5d16070ea36088ebc4d820ae446436dba83d36e2b8a1b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:12:31 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:12:31.298 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[6366a182-a135-4073-9126-93991e4e051e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:12:31 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:12:31.298 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb677f1a9-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:12:31 compute-0 nova_compute[250018]: 2026-01-20 15:12:31.300 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:12:31 compute-0 kernel: tapb677f1a9-d0: left promiscuous mode
Jan 20 15:12:31 compute-0 podman[356464]: 2026-01-20 15:12:31.304509307 +0000 UTC m=+0.048852959 container create 9742ca7e25c7d070ad57d96499dc5e6c653b33429fb191198814a577a7dc20d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_brown, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 20 15:12:31 compute-0 nova_compute[250018]: 2026-01-20 15:12:31.314 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:12:31 compute-0 nova_compute[250018]: 2026-01-20 15:12:31.315 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:12:31 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:12:31.318 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[7397a546-50d0-4122-8e32-495a93c2a511]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:12:31 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:12:31.329 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[0c39dbe0-d13a-44c5-a0d7-61e0fa0ddf94]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:12:31 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:12:31.331 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[3fad510a-bd2a-4823-9999-e6650e87a282]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:12:31 compute-0 systemd[1]: Started libpod-conmon-9742ca7e25c7d070ad57d96499dc5e6c653b33429fb191198814a577a7dc20d5.scope.
Jan 20 15:12:31 compute-0 nova_compute[250018]: 2026-01-20 15:12:31.343 250022 DEBUG oslo_concurrency.lockutils [None req-9e5c7bcf-e8da-46ec-8456-e937cc9e653a 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] Acquiring lock "refresh_cache-f3b0f200-2f57-4c25-bdf4-8d17165642fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:12:31 compute-0 nova_compute[250018]: 2026-01-20 15:12:31.344 250022 DEBUG oslo_concurrency.lockutils [None req-9e5c7bcf-e8da-46ec-8456-e937cc9e653a 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] Acquired lock "refresh_cache-f3b0f200-2f57-4c25-bdf4-8d17165642fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:12:31 compute-0 nova_compute[250018]: 2026-01-20 15:12:31.345 250022 DEBUG nova.network.neutron [None req-9e5c7bcf-e8da-46ec-8456-e937cc9e653a 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] [instance: f3b0f200-2f57-4c25-bdf4-8d17165642fe] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 15:12:31 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:12:31.346 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[68733102-ff42-4770-9f22-8fe44fb989c4]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 786390, 'reachable_time': 35559, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 356506, 'error': None, 'target': 'ovnmeta-b677f1a9-dbaa-4373-8466-bd9ccf067b91', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:12:31 compute-0 systemd[1]: run-netns-ovnmeta\x2db677f1a9\x2ddbaa\x2d4373\x2d8466\x2dbd9ccf067b91.mount: Deactivated successfully.
Jan 20 15:12:31 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:12:31.349 160517 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b677f1a9-dbaa-4373-8466-bd9ccf067b91 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 15:12:31 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:12:31.349 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[ff725dd4-3509-492a-95bb-e0dccd005acf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:12:31 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:12:31.351 160071 INFO neutron.agent.ovn.metadata.agent [-] Port 5de19241-ea15-4a94-8f92-497d86147111 in datapath b677f1a9-dbaa-4373-8466-bd9ccf067b91 unbound from our chassis
Jan 20 15:12:31 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:12:31.352 160071 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b677f1a9-dbaa-4373-8466-bd9ccf067b91, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 15:12:31 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:12:31.353 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[70d213ce-bde7-4b56-af46-70e2ef290e5a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:12:31 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:12:31.353 160071 INFO neutron.agent.ovn.metadata.agent [-] Port 5de19241-ea15-4a94-8f92-497d86147111 in datapath b677f1a9-dbaa-4373-8466-bd9ccf067b91 unbound from our chassis
Jan 20 15:12:31 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:12:31.354 160071 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b677f1a9-dbaa-4373-8466-bd9ccf067b91, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 15:12:31 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:12:31.354 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[0ebe030b-d71a-4616-991a-ee4f632f89bb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:12:31 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:12:31 compute-0 podman[356464]: 2026-01-20 15:12:31.278057963 +0000 UTC m=+0.022401635 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:12:31 compute-0 podman[356464]: 2026-01-20 15:12:31.387927766 +0000 UTC m=+0.132271408 container init 9742ca7e25c7d070ad57d96499dc5e6c653b33429fb191198814a577a7dc20d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_brown, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 20 15:12:31 compute-0 podman[356464]: 2026-01-20 15:12:31.394221987 +0000 UTC m=+0.138565639 container start 9742ca7e25c7d070ad57d96499dc5e6c653b33429fb191198814a577a7dc20d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_brown, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 20 15:12:31 compute-0 podman[356464]: 2026-01-20 15:12:31.396968471 +0000 UTC m=+0.141312123 container attach 9742ca7e25c7d070ad57d96499dc5e6c653b33429fb191198814a577a7dc20d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_brown, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 20 15:12:31 compute-0 stupefied_brown[356507]: 167 167
Jan 20 15:12:31 compute-0 systemd[1]: libpod-9742ca7e25c7d070ad57d96499dc5e6c653b33429fb191198814a577a7dc20d5.scope: Deactivated successfully.
Jan 20 15:12:31 compute-0 conmon[356507]: conmon 9742ca7e25c7d070ad57 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9742ca7e25c7d070ad57d96499dc5e6c653b33429fb191198814a577a7dc20d5.scope/container/memory.events
Jan 20 15:12:31 compute-0 podman[356464]: 2026-01-20 15:12:31.400464965 +0000 UTC m=+0.144808627 container died 9742ca7e25c7d070ad57d96499dc5e6c653b33429fb191198814a577a7dc20d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_brown, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 20 15:12:31 compute-0 podman[356464]: 2026-01-20 15:12:31.431631046 +0000 UTC m=+0.175974698 container remove 9742ca7e25c7d070ad57d96499dc5e6c653b33429fb191198814a577a7dc20d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_brown, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 20 15:12:31 compute-0 systemd[1]: libpod-conmon-9742ca7e25c7d070ad57d96499dc5e6c653b33429fb191198814a577a7dc20d5.scope: Deactivated successfully.
Jan 20 15:12:31 compute-0 nova_compute[250018]: 2026-01-20 15:12:31.465 250022 INFO nova.virt.libvirt.driver [None req-8c65db9e-36cf-4e64-9f15-397a9c5aa635 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: cbccef94-6ebf-4050-9b57-31486efe9e8f] Deleting instance files /var/lib/nova/instances/cbccef94-6ebf-4050-9b57-31486efe9e8f_del
Jan 20 15:12:31 compute-0 nova_compute[250018]: 2026-01-20 15:12:31.466 250022 INFO nova.virt.libvirt.driver [None req-8c65db9e-36cf-4e64-9f15-397a9c5aa635 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: cbccef94-6ebf-4050-9b57-31486efe9e8f] Deletion of /var/lib/nova/instances/cbccef94-6ebf-4050-9b57-31486efe9e8f_del complete
Jan 20 15:12:31 compute-0 nova_compute[250018]: 2026-01-20 15:12:31.524 250022 INFO nova.compute.manager [None req-8c65db9e-36cf-4e64-9f15-397a9c5aa635 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: cbccef94-6ebf-4050-9b57-31486efe9e8f] Took 0.57 seconds to destroy the instance on the hypervisor.
Jan 20 15:12:31 compute-0 nova_compute[250018]: 2026-01-20 15:12:31.525 250022 DEBUG oslo.service.loopingcall [None req-8c65db9e-36cf-4e64-9f15-397a9c5aa635 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 15:12:31 compute-0 nova_compute[250018]: 2026-01-20 15:12:31.525 250022 DEBUG nova.compute.manager [-] [instance: cbccef94-6ebf-4050-9b57-31486efe9e8f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 15:12:31 compute-0 nova_compute[250018]: 2026-01-20 15:12:31.525 250022 DEBUG nova.network.neutron [-] [instance: cbccef94-6ebf-4050-9b57-31486efe9e8f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 15:12:31 compute-0 ceph-mon[74360]: pgmap v2684: 321 pgs: 321 active+clean; 344 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 2.9 MiB/s wr, 199 op/s
Jan 20 15:12:31 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/1500870900' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:12:31 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e392 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:12:31 compute-0 podman[356530]: 2026-01-20 15:12:31.595370913 +0000 UTC m=+0.044881252 container create 661490b2e2e2a6bdb5dc88adfd440b462c91f283c500479a91ba58e66dd48aa0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_wing, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 20 15:12:31 compute-0 systemd[1]: Started libpod-conmon-661490b2e2e2a6bdb5dc88adfd440b462c91f283c500479a91ba58e66dd48aa0.scope.
Jan 20 15:12:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-766f63df24ff59559158550f9c274b856f464ae9ec301ceaf56c0748e97fd20f-merged.mount: Deactivated successfully.
Jan 20 15:12:31 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:12:31 compute-0 podman[356530]: 2026-01-20 15:12:31.575944679 +0000 UTC m=+0.025455038 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:12:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/276ffb1324da5d8b9ef3ce4b3a4719218734ae9f6da6e58b6979e7d0b98930bc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 15:12:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/276ffb1324da5d8b9ef3ce4b3a4719218734ae9f6da6e58b6979e7d0b98930bc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 15:12:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/276ffb1324da5d8b9ef3ce4b3a4719218734ae9f6da6e58b6979e7d0b98930bc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 15:12:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/276ffb1324da5d8b9ef3ce4b3a4719218734ae9f6da6e58b6979e7d0b98930bc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 15:12:31 compute-0 podman[356530]: 2026-01-20 15:12:31.679126672 +0000 UTC m=+0.128637011 container init 661490b2e2e2a6bdb5dc88adfd440b462c91f283c500479a91ba58e66dd48aa0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_wing, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:12:31 compute-0 podman[356530]: 2026-01-20 15:12:31.68794242 +0000 UTC m=+0.137452759 container start 661490b2e2e2a6bdb5dc88adfd440b462c91f283c500479a91ba58e66dd48aa0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_wing, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 20 15:12:31 compute-0 podman[356530]: 2026-01-20 15:12:31.690557071 +0000 UTC m=+0.140067430 container attach 661490b2e2e2a6bdb5dc88adfd440b462c91f283c500479a91ba58e66dd48aa0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_wing, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 15:12:31 compute-0 nova_compute[250018]: 2026-01-20 15:12:31.698 250022 DEBUG nova.compute.manager [req-9b03334a-f6d4-40a8-95cf-a8d001eecc92 req-2eacf62e-dc4e-4924-8d94-7e1ae83c3d58 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: cbccef94-6ebf-4050-9b57-31486efe9e8f] Received event network-vif-unplugged-5de19241-ea15-4a94-8f92-497d86147111 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:12:31 compute-0 nova_compute[250018]: 2026-01-20 15:12:31.699 250022 DEBUG oslo_concurrency.lockutils [req-9b03334a-f6d4-40a8-95cf-a8d001eecc92 req-2eacf62e-dc4e-4924-8d94-7e1ae83c3d58 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "cbccef94-6ebf-4050-9b57-31486efe9e8f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:12:31 compute-0 nova_compute[250018]: 2026-01-20 15:12:31.699 250022 DEBUG oslo_concurrency.lockutils [req-9b03334a-f6d4-40a8-95cf-a8d001eecc92 req-2eacf62e-dc4e-4924-8d94-7e1ae83c3d58 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "cbccef94-6ebf-4050-9b57-31486efe9e8f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:12:31 compute-0 nova_compute[250018]: 2026-01-20 15:12:31.699 250022 DEBUG oslo_concurrency.lockutils [req-9b03334a-f6d4-40a8-95cf-a8d001eecc92 req-2eacf62e-dc4e-4924-8d94-7e1ae83c3d58 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "cbccef94-6ebf-4050-9b57-31486efe9e8f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:12:31 compute-0 nova_compute[250018]: 2026-01-20 15:12:31.700 250022 DEBUG nova.compute.manager [req-9b03334a-f6d4-40a8-95cf-a8d001eecc92 req-2eacf62e-dc4e-4924-8d94-7e1ae83c3d58 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: cbccef94-6ebf-4050-9b57-31486efe9e8f] No waiting events found dispatching network-vif-unplugged-5de19241-ea15-4a94-8f92-497d86147111 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:12:31 compute-0 nova_compute[250018]: 2026-01-20 15:12:31.700 250022 DEBUG nova.compute.manager [req-9b03334a-f6d4-40a8-95cf-a8d001eecc92 req-2eacf62e-dc4e-4924-8d94-7e1ae83c3d58 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: cbccef94-6ebf-4050-9b57-31486efe9e8f] Received event network-vif-unplugged-5de19241-ea15-4a94-8f92-497d86147111 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 20 15:12:32 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:12:32 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:12:32 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:12:32.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:12:32 compute-0 modest_wing[356547]: {
Jan 20 15:12:32 compute-0 modest_wing[356547]:     "0": [
Jan 20 15:12:32 compute-0 modest_wing[356547]:         {
Jan 20 15:12:32 compute-0 modest_wing[356547]:             "devices": [
Jan 20 15:12:32 compute-0 modest_wing[356547]:                 "/dev/loop3"
Jan 20 15:12:32 compute-0 modest_wing[356547]:             ],
Jan 20 15:12:32 compute-0 modest_wing[356547]:             "lv_name": "ceph_lv0",
Jan 20 15:12:32 compute-0 modest_wing[356547]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 15:12:32 compute-0 modest_wing[356547]:             "lv_size": "7511998464",
Jan 20 15:12:32 compute-0 modest_wing[356547]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e399cf45-e6b6-5393-99f1-75c601d3f188,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afc3740a-ccee-46f8-b343-22648cc89376,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 20 15:12:32 compute-0 modest_wing[356547]:             "lv_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 15:12:32 compute-0 modest_wing[356547]:             "name": "ceph_lv0",
Jan 20 15:12:32 compute-0 modest_wing[356547]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 15:12:32 compute-0 modest_wing[356547]:             "tags": {
Jan 20 15:12:32 compute-0 modest_wing[356547]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 15:12:32 compute-0 modest_wing[356547]:                 "ceph.block_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 15:12:32 compute-0 modest_wing[356547]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 15:12:32 compute-0 modest_wing[356547]:                 "ceph.cluster_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 15:12:32 compute-0 modest_wing[356547]:                 "ceph.cluster_name": "ceph",
Jan 20 15:12:32 compute-0 modest_wing[356547]:                 "ceph.crush_device_class": "",
Jan 20 15:12:32 compute-0 modest_wing[356547]:                 "ceph.encrypted": "0",
Jan 20 15:12:32 compute-0 modest_wing[356547]:                 "ceph.osd_fsid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 15:12:32 compute-0 modest_wing[356547]:                 "ceph.osd_id": "0",
Jan 20 15:12:32 compute-0 modest_wing[356547]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 15:12:32 compute-0 modest_wing[356547]:                 "ceph.type": "block",
Jan 20 15:12:32 compute-0 modest_wing[356547]:                 "ceph.vdo": "0"
Jan 20 15:12:32 compute-0 modest_wing[356547]:             },
Jan 20 15:12:32 compute-0 modest_wing[356547]:             "type": "block",
Jan 20 15:12:32 compute-0 modest_wing[356547]:             "vg_name": "ceph_vg0"
Jan 20 15:12:32 compute-0 modest_wing[356547]:         }
Jan 20 15:12:32 compute-0 modest_wing[356547]:     ]
Jan 20 15:12:32 compute-0 modest_wing[356547]: }
Jan 20 15:12:32 compute-0 systemd[1]: libpod-661490b2e2e2a6bdb5dc88adfd440b462c91f283c500479a91ba58e66dd48aa0.scope: Deactivated successfully.
Jan 20 15:12:32 compute-0 podman[356530]: 2026-01-20 15:12:32.464871158 +0000 UTC m=+0.914381497 container died 661490b2e2e2a6bdb5dc88adfd440b462c91f283c500479a91ba58e66dd48aa0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_wing, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 20 15:12:32 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2685: 321 pgs: 321 active+clean; 348 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.1 MiB/s wr, 91 op/s
Jan 20 15:12:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-276ffb1324da5d8b9ef3ce4b3a4719218734ae9f6da6e58b6979e7d0b98930bc-merged.mount: Deactivated successfully.
Jan 20 15:12:32 compute-0 podman[356530]: 2026-01-20 15:12:32.537491216 +0000 UTC m=+0.987001565 container remove 661490b2e2e2a6bdb5dc88adfd440b462c91f283c500479a91ba58e66dd48aa0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_wing, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 20 15:12:32 compute-0 systemd[1]: libpod-conmon-661490b2e2e2a6bdb5dc88adfd440b462c91f283c500479a91ba58e66dd48aa0.scope: Deactivated successfully.
Jan 20 15:12:32 compute-0 sudo[356336]: pam_unix(sudo:session): session closed for user root
Jan 20 15:12:32 compute-0 sudo[356571]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:12:32 compute-0 sudo[356571]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:12:32 compute-0 sudo[356571]: pam_unix(sudo:session): session closed for user root
Jan 20 15:12:32 compute-0 sudo[356596]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:12:32 compute-0 sudo[356596]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:12:32 compute-0 sudo[356596]: pam_unix(sudo:session): session closed for user root
Jan 20 15:12:32 compute-0 sudo[356621]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:12:32 compute-0 sudo[356621]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:12:32 compute-0 sudo[356621]: pam_unix(sudo:session): session closed for user root
Jan 20 15:12:32 compute-0 sudo[356646]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- raw list --format json
Jan 20 15:12:32 compute-0 sudo[356646]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:12:33 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:12:33 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:12:33 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:12:33.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:12:33 compute-0 podman[356713]: 2026-01-20 15:12:33.116900336 +0000 UTC m=+0.035653483 container create 409e354e58ec5446b7fa87474ee434313bfcc266e1343ede06a8949cb9fe63ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_curran, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 15:12:33 compute-0 systemd[1]: Started libpod-conmon-409e354e58ec5446b7fa87474ee434313bfcc266e1343ede06a8949cb9fe63ab.scope.
Jan 20 15:12:33 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:12:33 compute-0 podman[356713]: 2026-01-20 15:12:33.18971361 +0000 UTC m=+0.108466787 container init 409e354e58ec5446b7fa87474ee434313bfcc266e1343ede06a8949cb9fe63ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_curran, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 15:12:33 compute-0 podman[356713]: 2026-01-20 15:12:33.196496653 +0000 UTC m=+0.115249800 container start 409e354e58ec5446b7fa87474ee434313bfcc266e1343ede06a8949cb9fe63ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_curran, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 20 15:12:33 compute-0 podman[356713]: 2026-01-20 15:12:33.102422045 +0000 UTC m=+0.021175222 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:12:33 compute-0 elated_curran[356729]: 167 167
Jan 20 15:12:33 compute-0 podman[356713]: 2026-01-20 15:12:33.2000742 +0000 UTC m=+0.118827367 container attach 409e354e58ec5446b7fa87474ee434313bfcc266e1343ede06a8949cb9fe63ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_curran, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 20 15:12:33 compute-0 systemd[1]: libpod-409e354e58ec5446b7fa87474ee434313bfcc266e1343ede06a8949cb9fe63ab.scope: Deactivated successfully.
Jan 20 15:12:33 compute-0 podman[356713]: 2026-01-20 15:12:33.201627822 +0000 UTC m=+0.120380969 container died 409e354e58ec5446b7fa87474ee434313bfcc266e1343ede06a8949cb9fe63ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_curran, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:12:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-56afab1e02b3d6fda9c66071bd9d29d1dd0403fd11036b917f6d111fc46bf9db-merged.mount: Deactivated successfully.
Jan 20 15:12:33 compute-0 podman[356713]: 2026-01-20 15:12:33.235133365 +0000 UTC m=+0.153886512 container remove 409e354e58ec5446b7fa87474ee434313bfcc266e1343ede06a8949cb9fe63ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_curran, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 20 15:12:33 compute-0 systemd[1]: libpod-conmon-409e354e58ec5446b7fa87474ee434313bfcc266e1343ede06a8949cb9fe63ab.scope: Deactivated successfully.
Jan 20 15:12:33 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:12:33.317 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=58, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=57) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:12:33 compute-0 nova_compute[250018]: 2026-01-20 15:12:33.317 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:12:33 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:12:33.318 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 15:12:33 compute-0 podman[356753]: 2026-01-20 15:12:33.397266509 +0000 UTC m=+0.038833429 container create 0ea2658c2cb16d0ddce964dc4f4924b926414a1135019e919eb31c57661fc987 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_neumann, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 20 15:12:33 compute-0 systemd[1]: Started libpod-conmon-0ea2658c2cb16d0ddce964dc4f4924b926414a1135019e919eb31c57661fc987.scope.
Jan 20 15:12:33 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:12:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd01ed6fdf4a599d676c29bd27b6141427e36d9746d91ac8175ebba41b524afa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 15:12:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd01ed6fdf4a599d676c29bd27b6141427e36d9746d91ac8175ebba41b524afa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 15:12:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd01ed6fdf4a599d676c29bd27b6141427e36d9746d91ac8175ebba41b524afa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 15:12:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd01ed6fdf4a599d676c29bd27b6141427e36d9746d91ac8175ebba41b524afa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 15:12:33 compute-0 podman[356753]: 2026-01-20 15:12:33.456462636 +0000 UTC m=+0.098029586 container init 0ea2658c2cb16d0ddce964dc4f4924b926414a1135019e919eb31c57661fc987 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_neumann, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 20 15:12:33 compute-0 podman[356753]: 2026-01-20 15:12:33.464991346 +0000 UTC m=+0.106558266 container start 0ea2658c2cb16d0ddce964dc4f4924b926414a1135019e919eb31c57661fc987 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_neumann, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 20 15:12:33 compute-0 podman[356753]: 2026-01-20 15:12:33.468674006 +0000 UTC m=+0.110240966 container attach 0ea2658c2cb16d0ddce964dc4f4924b926414a1135019e919eb31c57661fc987 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_neumann, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 20 15:12:33 compute-0 podman[356753]: 2026-01-20 15:12:33.379843239 +0000 UTC m=+0.021410179 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:12:33 compute-0 nova_compute[250018]: 2026-01-20 15:12:33.484 250022 DEBUG nova.network.neutron [-] [instance: cbccef94-6ebf-4050-9b57-31486efe9e8f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:12:33 compute-0 ceph-mon[74360]: pgmap v2685: 321 pgs: 321 active+clean; 348 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.1 MiB/s wr, 91 op/s
Jan 20 15:12:33 compute-0 nova_compute[250018]: 2026-01-20 15:12:33.622 250022 INFO nova.compute.manager [-] [instance: cbccef94-6ebf-4050-9b57-31486efe9e8f] Took 2.10 seconds to deallocate network for instance.
Jan 20 15:12:33 compute-0 nova_compute[250018]: 2026-01-20 15:12:33.705 250022 DEBUG nova.compute.manager [req-239199ab-9e8f-4dfb-af7f-5e8aa038b221 req-3b769c70-cc4a-491a-b3f6-ef9a365079c3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: cbccef94-6ebf-4050-9b57-31486efe9e8f] Received event network-vif-deleted-5de19241-ea15-4a94-8f92-497d86147111 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:12:33 compute-0 nova_compute[250018]: 2026-01-20 15:12:33.882 250022 DEBUG nova.compute.manager [req-1433f502-e520-4295-96e7-46ea300a715d req-c2b1afa7-7f1d-4349-86bd-e13542ef4ecf 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: cbccef94-6ebf-4050-9b57-31486efe9e8f] Received event network-vif-plugged-5de19241-ea15-4a94-8f92-497d86147111 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:12:33 compute-0 nova_compute[250018]: 2026-01-20 15:12:33.882 250022 DEBUG oslo_concurrency.lockutils [req-1433f502-e520-4295-96e7-46ea300a715d req-c2b1afa7-7f1d-4349-86bd-e13542ef4ecf 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "cbccef94-6ebf-4050-9b57-31486efe9e8f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:12:33 compute-0 nova_compute[250018]: 2026-01-20 15:12:33.883 250022 DEBUG oslo_concurrency.lockutils [req-1433f502-e520-4295-96e7-46ea300a715d req-c2b1afa7-7f1d-4349-86bd-e13542ef4ecf 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "cbccef94-6ebf-4050-9b57-31486efe9e8f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:12:33 compute-0 nova_compute[250018]: 2026-01-20 15:12:33.883 250022 DEBUG oslo_concurrency.lockutils [req-1433f502-e520-4295-96e7-46ea300a715d req-c2b1afa7-7f1d-4349-86bd-e13542ef4ecf 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "cbccef94-6ebf-4050-9b57-31486efe9e8f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:12:33 compute-0 nova_compute[250018]: 2026-01-20 15:12:33.883 250022 DEBUG nova.compute.manager [req-1433f502-e520-4295-96e7-46ea300a715d req-c2b1afa7-7f1d-4349-86bd-e13542ef4ecf 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: cbccef94-6ebf-4050-9b57-31486efe9e8f] No waiting events found dispatching network-vif-plugged-5de19241-ea15-4a94-8f92-497d86147111 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:12:33 compute-0 nova_compute[250018]: 2026-01-20 15:12:33.883 250022 WARNING nova.compute.manager [req-1433f502-e520-4295-96e7-46ea300a715d req-c2b1afa7-7f1d-4349-86bd-e13542ef4ecf 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: cbccef94-6ebf-4050-9b57-31486efe9e8f] Received unexpected event network-vif-plugged-5de19241-ea15-4a94-8f92-497d86147111 for instance with vm_state active and task_state deleting.
Jan 20 15:12:33 compute-0 nova_compute[250018]: 2026-01-20 15:12:33.883 250022 DEBUG nova.compute.manager [req-1433f502-e520-4295-96e7-46ea300a715d req-c2b1afa7-7f1d-4349-86bd-e13542ef4ecf 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: cbccef94-6ebf-4050-9b57-31486efe9e8f] Received event network-vif-plugged-5de19241-ea15-4a94-8f92-497d86147111 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:12:33 compute-0 nova_compute[250018]: 2026-01-20 15:12:33.884 250022 DEBUG oslo_concurrency.lockutils [req-1433f502-e520-4295-96e7-46ea300a715d req-c2b1afa7-7f1d-4349-86bd-e13542ef4ecf 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "cbccef94-6ebf-4050-9b57-31486efe9e8f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:12:33 compute-0 nova_compute[250018]: 2026-01-20 15:12:33.884 250022 DEBUG oslo_concurrency.lockutils [req-1433f502-e520-4295-96e7-46ea300a715d req-c2b1afa7-7f1d-4349-86bd-e13542ef4ecf 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "cbccef94-6ebf-4050-9b57-31486efe9e8f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:12:33 compute-0 nova_compute[250018]: 2026-01-20 15:12:33.884 250022 DEBUG oslo_concurrency.lockutils [req-1433f502-e520-4295-96e7-46ea300a715d req-c2b1afa7-7f1d-4349-86bd-e13542ef4ecf 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "cbccef94-6ebf-4050-9b57-31486efe9e8f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:12:33 compute-0 nova_compute[250018]: 2026-01-20 15:12:33.884 250022 DEBUG nova.compute.manager [req-1433f502-e520-4295-96e7-46ea300a715d req-c2b1afa7-7f1d-4349-86bd-e13542ef4ecf 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: cbccef94-6ebf-4050-9b57-31486efe9e8f] No waiting events found dispatching network-vif-plugged-5de19241-ea15-4a94-8f92-497d86147111 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:12:33 compute-0 nova_compute[250018]: 2026-01-20 15:12:33.884 250022 WARNING nova.compute.manager [req-1433f502-e520-4295-96e7-46ea300a715d req-c2b1afa7-7f1d-4349-86bd-e13542ef4ecf 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: cbccef94-6ebf-4050-9b57-31486efe9e8f] Received unexpected event network-vif-plugged-5de19241-ea15-4a94-8f92-497d86147111 for instance with vm_state active and task_state deleting.
Jan 20 15:12:33 compute-0 nova_compute[250018]: 2026-01-20 15:12:33.885 250022 DEBUG nova.compute.manager [req-1433f502-e520-4295-96e7-46ea300a715d req-c2b1afa7-7f1d-4349-86bd-e13542ef4ecf 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: cbccef94-6ebf-4050-9b57-31486efe9e8f] Received event network-vif-plugged-5de19241-ea15-4a94-8f92-497d86147111 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:12:33 compute-0 nova_compute[250018]: 2026-01-20 15:12:33.885 250022 DEBUG oslo_concurrency.lockutils [req-1433f502-e520-4295-96e7-46ea300a715d req-c2b1afa7-7f1d-4349-86bd-e13542ef4ecf 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "cbccef94-6ebf-4050-9b57-31486efe9e8f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:12:33 compute-0 nova_compute[250018]: 2026-01-20 15:12:33.885 250022 DEBUG oslo_concurrency.lockutils [req-1433f502-e520-4295-96e7-46ea300a715d req-c2b1afa7-7f1d-4349-86bd-e13542ef4ecf 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "cbccef94-6ebf-4050-9b57-31486efe9e8f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:12:33 compute-0 nova_compute[250018]: 2026-01-20 15:12:33.885 250022 DEBUG oslo_concurrency.lockutils [req-1433f502-e520-4295-96e7-46ea300a715d req-c2b1afa7-7f1d-4349-86bd-e13542ef4ecf 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "cbccef94-6ebf-4050-9b57-31486efe9e8f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:12:33 compute-0 nova_compute[250018]: 2026-01-20 15:12:33.885 250022 DEBUG nova.compute.manager [req-1433f502-e520-4295-96e7-46ea300a715d req-c2b1afa7-7f1d-4349-86bd-e13542ef4ecf 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: cbccef94-6ebf-4050-9b57-31486efe9e8f] No waiting events found dispatching network-vif-plugged-5de19241-ea15-4a94-8f92-497d86147111 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:12:33 compute-0 nova_compute[250018]: 2026-01-20 15:12:33.886 250022 WARNING nova.compute.manager [req-1433f502-e520-4295-96e7-46ea300a715d req-c2b1afa7-7f1d-4349-86bd-e13542ef4ecf 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: cbccef94-6ebf-4050-9b57-31486efe9e8f] Received unexpected event network-vif-plugged-5de19241-ea15-4a94-8f92-497d86147111 for instance with vm_state active and task_state deleting.
Jan 20 15:12:33 compute-0 nova_compute[250018]: 2026-01-20 15:12:33.886 250022 DEBUG nova.compute.manager [req-1433f502-e520-4295-96e7-46ea300a715d req-c2b1afa7-7f1d-4349-86bd-e13542ef4ecf 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: cbccef94-6ebf-4050-9b57-31486efe9e8f] Received event network-vif-unplugged-5de19241-ea15-4a94-8f92-497d86147111 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:12:33 compute-0 nova_compute[250018]: 2026-01-20 15:12:33.886 250022 DEBUG oslo_concurrency.lockutils [req-1433f502-e520-4295-96e7-46ea300a715d req-c2b1afa7-7f1d-4349-86bd-e13542ef4ecf 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "cbccef94-6ebf-4050-9b57-31486efe9e8f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:12:33 compute-0 nova_compute[250018]: 2026-01-20 15:12:33.886 250022 DEBUG oslo_concurrency.lockutils [req-1433f502-e520-4295-96e7-46ea300a715d req-c2b1afa7-7f1d-4349-86bd-e13542ef4ecf 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "cbccef94-6ebf-4050-9b57-31486efe9e8f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:12:33 compute-0 nova_compute[250018]: 2026-01-20 15:12:33.887 250022 DEBUG oslo_concurrency.lockutils [req-1433f502-e520-4295-96e7-46ea300a715d req-c2b1afa7-7f1d-4349-86bd-e13542ef4ecf 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "cbccef94-6ebf-4050-9b57-31486efe9e8f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:12:33 compute-0 nova_compute[250018]: 2026-01-20 15:12:33.887 250022 DEBUG nova.compute.manager [req-1433f502-e520-4295-96e7-46ea300a715d req-c2b1afa7-7f1d-4349-86bd-e13542ef4ecf 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: cbccef94-6ebf-4050-9b57-31486efe9e8f] No waiting events found dispatching network-vif-unplugged-5de19241-ea15-4a94-8f92-497d86147111 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:12:33 compute-0 nova_compute[250018]: 2026-01-20 15:12:33.887 250022 DEBUG nova.compute.manager [req-1433f502-e520-4295-96e7-46ea300a715d req-c2b1afa7-7f1d-4349-86bd-e13542ef4ecf 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: cbccef94-6ebf-4050-9b57-31486efe9e8f] Received event network-vif-unplugged-5de19241-ea15-4a94-8f92-497d86147111 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 20 15:12:33 compute-0 nova_compute[250018]: 2026-01-20 15:12:33.887 250022 DEBUG nova.compute.manager [req-1433f502-e520-4295-96e7-46ea300a715d req-c2b1afa7-7f1d-4349-86bd-e13542ef4ecf 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: cbccef94-6ebf-4050-9b57-31486efe9e8f] Received event network-vif-plugged-5de19241-ea15-4a94-8f92-497d86147111 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:12:33 compute-0 nova_compute[250018]: 2026-01-20 15:12:33.887 250022 DEBUG oslo_concurrency.lockutils [req-1433f502-e520-4295-96e7-46ea300a715d req-c2b1afa7-7f1d-4349-86bd-e13542ef4ecf 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "cbccef94-6ebf-4050-9b57-31486efe9e8f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:12:33 compute-0 nova_compute[250018]: 2026-01-20 15:12:33.887 250022 DEBUG oslo_concurrency.lockutils [req-1433f502-e520-4295-96e7-46ea300a715d req-c2b1afa7-7f1d-4349-86bd-e13542ef4ecf 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "cbccef94-6ebf-4050-9b57-31486efe9e8f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:12:33 compute-0 nova_compute[250018]: 2026-01-20 15:12:33.888 250022 DEBUG oslo_concurrency.lockutils [req-1433f502-e520-4295-96e7-46ea300a715d req-c2b1afa7-7f1d-4349-86bd-e13542ef4ecf 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "cbccef94-6ebf-4050-9b57-31486efe9e8f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:12:33 compute-0 nova_compute[250018]: 2026-01-20 15:12:33.888 250022 DEBUG nova.compute.manager [req-1433f502-e520-4295-96e7-46ea300a715d req-c2b1afa7-7f1d-4349-86bd-e13542ef4ecf 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: cbccef94-6ebf-4050-9b57-31486efe9e8f] No waiting events found dispatching network-vif-plugged-5de19241-ea15-4a94-8f92-497d86147111 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:12:33 compute-0 nova_compute[250018]: 2026-01-20 15:12:33.888 250022 WARNING nova.compute.manager [req-1433f502-e520-4295-96e7-46ea300a715d req-c2b1afa7-7f1d-4349-86bd-e13542ef4ecf 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: cbccef94-6ebf-4050-9b57-31486efe9e8f] Received unexpected event network-vif-plugged-5de19241-ea15-4a94-8f92-497d86147111 for instance with vm_state active and task_state deleting.
Jan 20 15:12:33 compute-0 nova_compute[250018]: 2026-01-20 15:12:33.893 250022 DEBUG nova.network.neutron [None req-9e5c7bcf-e8da-46ec-8456-e937cc9e653a 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] [instance: f3b0f200-2f57-4c25-bdf4-8d17165642fe] Updating instance_info_cache with network_info: [{"id": "25ad2c72-7d4d-4eb9-bf00-a5c42aa9d301", "address": "fa:16:3e:ab:6b:07", "network": {"id": "ef6ea4cb-557a-4dec-844c-6c933ddba0b1", "bridge": "br-int", "label": "tempest-network-smoke--1896631991", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ed5feeeafe7448a8efb47ab975b0ead", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25ad2c72-7d", "ovs_interfaceid": "25ad2c72-7d4d-4eb9-bf00-a5c42aa9d301", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:12:33 compute-0 nova_compute[250018]: 2026-01-20 15:12:33.908 250022 DEBUG oslo_concurrency.lockutils [None req-9e5c7bcf-e8da-46ec-8456-e937cc9e653a 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] Releasing lock "refresh_cache-f3b0f200-2f57-4c25-bdf4-8d17165642fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:12:33 compute-0 nova_compute[250018]: 2026-01-20 15:12:33.957 250022 INFO nova.compute.manager [None req-8c65db9e-36cf-4e64-9f15-397a9c5aa635 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: cbccef94-6ebf-4050-9b57-31486efe9e8f] Took 0.33 seconds to detach 1 volumes for instance.
Jan 20 15:12:33 compute-0 nova_compute[250018]: 2026-01-20 15:12:33.958 250022 DEBUG nova.compute.manager [None req-8c65db9e-36cf-4e64-9f15-397a9c5aa635 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: cbccef94-6ebf-4050-9b57-31486efe9e8f] Deleting volume: 8692e9bd-7163-4f35-a28b-8a31b1691fc8 _cleanup_volumes /usr/lib/python3.9/site-packages/nova/compute/manager.py:3217
Jan 20 15:12:34 compute-0 nova_compute[250018]: 2026-01-20 15:12:34.013 250022 DEBUG nova.virt.libvirt.driver [None req-9e5c7bcf-e8da-46ec-8456-e937cc9e653a 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] [instance: f3b0f200-2f57-4c25-bdf4-8d17165642fe] Starting migrate_disk_and_power_off migrate_disk_and_power_off /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11511
Jan 20 15:12:34 compute-0 nova_compute[250018]: 2026-01-20 15:12:34.014 250022 DEBUG nova.virt.libvirt.volume.remotefs [None req-9e5c7bcf-e8da-46ec-8456-e937cc9e653a 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] Creating file /var/lib/nova/instances/f3b0f200-2f57-4c25-bdf4-8d17165642fe/d9340330478a4f02b2aabc04a1768b04.tmp on remote host 192.168.122.102 create_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:79
Jan 20 15:12:34 compute-0 nova_compute[250018]: 2026-01-20 15:12:34.014 250022 DEBUG oslo_concurrency.processutils [None req-9e5c7bcf-e8da-46ec-8456-e937cc9e653a 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.102 touch /var/lib/nova/instances/f3b0f200-2f57-4c25-bdf4-8d17165642fe/d9340330478a4f02b2aabc04a1768b04.tmp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:12:34 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:12:34 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:12:34 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:12:34.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:12:34 compute-0 nova_compute[250018]: 2026-01-20 15:12:34.257 250022 DEBUG oslo_concurrency.lockutils [None req-8c65db9e-36cf-4e64-9f15-397a9c5aa635 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:12:34 compute-0 nova_compute[250018]: 2026-01-20 15:12:34.258 250022 DEBUG oslo_concurrency.lockutils [None req-8c65db9e-36cf-4e64-9f15-397a9c5aa635 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:12:34 compute-0 agitated_neumann[356770]: {
Jan 20 15:12:34 compute-0 agitated_neumann[356770]:     "afc3740a-ccee-46f8-b343-22648cc89376": {
Jan 20 15:12:34 compute-0 agitated_neumann[356770]:         "ceph_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 15:12:34 compute-0 agitated_neumann[356770]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 20 15:12:34 compute-0 agitated_neumann[356770]:         "osd_id": 0,
Jan 20 15:12:34 compute-0 agitated_neumann[356770]:         "osd_uuid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 15:12:34 compute-0 agitated_neumann[356770]:         "type": "bluestore"
Jan 20 15:12:34 compute-0 agitated_neumann[356770]:     }
Jan 20 15:12:34 compute-0 agitated_neumann[356770]: }
Jan 20 15:12:34 compute-0 systemd[1]: libpod-0ea2658c2cb16d0ddce964dc4f4924b926414a1135019e919eb31c57661fc987.scope: Deactivated successfully.
Jan 20 15:12:34 compute-0 podman[356753]: 2026-01-20 15:12:34.302655212 +0000 UTC m=+0.944222152 container died 0ea2658c2cb16d0ddce964dc4f4924b926414a1135019e919eb31c57661fc987 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_neumann, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 15:12:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-bd01ed6fdf4a599d676c29bd27b6141427e36d9746d91ac8175ebba41b524afa-merged.mount: Deactivated successfully.
Jan 20 15:12:34 compute-0 podman[356753]: 2026-01-20 15:12:34.360023969 +0000 UTC m=+1.001590879 container remove 0ea2658c2cb16d0ddce964dc4f4924b926414a1135019e919eb31c57661fc987 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_neumann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 15:12:34 compute-0 systemd[1]: libpod-conmon-0ea2658c2cb16d0ddce964dc4f4924b926414a1135019e919eb31c57661fc987.scope: Deactivated successfully.
Jan 20 15:12:34 compute-0 sudo[356646]: pam_unix(sudo:session): session closed for user root
Jan 20 15:12:34 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 15:12:34 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:12:34 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 15:12:34 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:12:34 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 6d0f251f-9477-4522-bacc-6315d81079c3 does not exist
Jan 20 15:12:34 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev eca83293-029c-4e5a-aad7-3dac2b95c542 does not exist
Jan 20 15:12:34 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 7d0a2fab-0ba8-4729-9c57-a2070b03f1c4 does not exist
Jan 20 15:12:34 compute-0 nova_compute[250018]: 2026-01-20 15:12:34.447 250022 DEBUG oslo_concurrency.processutils [None req-8c65db9e-36cf-4e64-9f15-397a9c5aa635 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:12:34 compute-0 sudo[356804]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:12:34 compute-0 sudo[356804]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:12:34 compute-0 sudo[356804]: pam_unix(sudo:session): session closed for user root
Jan 20 15:12:34 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2686: 321 pgs: 321 active+clean; 351 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 936 KiB/s wr, 60 op/s
Jan 20 15:12:34 compute-0 nova_compute[250018]: 2026-01-20 15:12:34.505 250022 DEBUG oslo_concurrency.processutils [None req-9e5c7bcf-e8da-46ec-8456-e937cc9e653a 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] CMD "ssh -o BatchMode=yes 192.168.122.102 touch /var/lib/nova/instances/f3b0f200-2f57-4c25-bdf4-8d17165642fe/d9340330478a4f02b2aabc04a1768b04.tmp" returned: 1 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:12:34 compute-0 nova_compute[250018]: 2026-01-20 15:12:34.506 250022 DEBUG oslo_concurrency.processutils [None req-9e5c7bcf-e8da-46ec-8456-e937cc9e653a 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] 'ssh -o BatchMode=yes 192.168.122.102 touch /var/lib/nova/instances/f3b0f200-2f57-4c25-bdf4-8d17165642fe/d9340330478a4f02b2aabc04a1768b04.tmp' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Jan 20 15:12:34 compute-0 nova_compute[250018]: 2026-01-20 15:12:34.506 250022 DEBUG nova.virt.libvirt.volume.remotefs [None req-9e5c7bcf-e8da-46ec-8456-e937cc9e653a 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] Creating directory /var/lib/nova/instances/f3b0f200-2f57-4c25-bdf4-8d17165642fe on remote host 192.168.122.102 create_dir /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:91
Jan 20 15:12:34 compute-0 nova_compute[250018]: 2026-01-20 15:12:34.507 250022 DEBUG oslo_concurrency.processutils [None req-9e5c7bcf-e8da-46ec-8456-e937cc9e653a 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.102 mkdir -p /var/lib/nova/instances/f3b0f200-2f57-4c25-bdf4-8d17165642fe execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:12:34 compute-0 sudo[356830]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 15:12:34 compute-0 sudo[356830]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:12:34 compute-0 sudo[356830]: pam_unix(sudo:session): session closed for user root
Jan 20 15:12:34 compute-0 nova_compute[250018]: 2026-01-20 15:12:34.718 250022 DEBUG oslo_concurrency.processutils [None req-9e5c7bcf-e8da-46ec-8456-e937cc9e653a 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] CMD "ssh -o BatchMode=yes 192.168.122.102 mkdir -p /var/lib/nova/instances/f3b0f200-2f57-4c25-bdf4-8d17165642fe" returned: 0 in 0.211s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:12:34 compute-0 nova_compute[250018]: 2026-01-20 15:12:34.727 250022 DEBUG nova.virt.libvirt.driver [None req-9e5c7bcf-e8da-46ec-8456-e937cc9e653a 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] [instance: f3b0f200-2f57-4c25-bdf4-8d17165642fe] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Jan 20 15:12:34 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:12:34 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1655711995' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:12:34 compute-0 nova_compute[250018]: 2026-01-20 15:12:34.888 250022 DEBUG oslo_concurrency.processutils [None req-8c65db9e-36cf-4e64-9f15-397a9c5aa635 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:12:34 compute-0 nova_compute[250018]: 2026-01-20 15:12:34.894 250022 DEBUG nova.compute.provider_tree [None req-8c65db9e-36cf-4e64-9f15-397a9c5aa635 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:12:34 compute-0 nova_compute[250018]: 2026-01-20 15:12:34.932 250022 DEBUG nova.scheduler.client.report [None req-8c65db9e-36cf-4e64-9f15-397a9c5aa635 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:12:34 compute-0 nova_compute[250018]: 2026-01-20 15:12:34.956 250022 DEBUG oslo_concurrency.lockutils [None req-8c65db9e-36cf-4e64-9f15-397a9c5aa635 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.699s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:12:35 compute-0 nova_compute[250018]: 2026-01-20 15:12:35.001 250022 INFO nova.scheduler.client.report [None req-8c65db9e-36cf-4e64-9f15-397a9c5aa635 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Deleted allocations for instance cbccef94-6ebf-4050-9b57-31486efe9e8f
Jan 20 15:12:35 compute-0 nova_compute[250018]: 2026-01-20 15:12:35.079 250022 DEBUG oslo_concurrency.lockutils [None req-8c65db9e-36cf-4e64-9f15-397a9c5aa635 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Lock "cbccef94-6ebf-4050-9b57-31486efe9e8f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.128s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:12:35 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:12:35 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:12:35 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:12:35.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:12:35 compute-0 nova_compute[250018]: 2026-01-20 15:12:35.129 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:12:35 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:12:35 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:12:35 compute-0 ceph-mon[74360]: pgmap v2686: 321 pgs: 321 active+clean; 351 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 936 KiB/s wr, 60 op/s
Jan 20 15:12:35 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/1655711995' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:12:36 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:12:36 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:12:36 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:12:36.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:12:36 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 20 15:12:36 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2796264318' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:12:36 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 20 15:12:36 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2796264318' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:12:36 compute-0 nova_compute[250018]: 2026-01-20 15:12:36.226 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:12:36 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2687: 321 pgs: 321 active+clean; 376 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 925 KiB/s rd, 2.2 MiB/s wr, 124 op/s
Jan 20 15:12:36 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e392 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:12:36 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/2796264318' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:12:36 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/2796264318' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:12:36 compute-0 ceph-mon[74360]: pgmap v2687: 321 pgs: 321 active+clean; 376 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 925 KiB/s rd, 2.2 MiB/s wr, 124 op/s
Jan 20 15:12:37 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:12:37 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:12:37 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:12:37.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:12:37 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e392 do_prune osdmap full prune enabled
Jan 20 15:12:37 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e393 e393: 3 total, 3 up, 3 in
Jan 20 15:12:37 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e393: 3 total, 3 up, 3 in
Jan 20 15:12:37 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/2691945406' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:12:38 compute-0 kernel: tap25ad2c72-7d (unregistering): left promiscuous mode
Jan 20 15:12:38 compute-0 NetworkManager[48960]: <info>  [1768921958.0782] device (tap25ad2c72-7d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 15:12:38 compute-0 ovn_controller[148666]: 2026-01-20T15:12:38Z|00648|binding|INFO|Releasing lport 25ad2c72-7d4d-4eb9-bf00-a5c42aa9d301 from this chassis (sb_readonly=0)
Jan 20 15:12:38 compute-0 ovn_controller[148666]: 2026-01-20T15:12:38Z|00649|binding|INFO|Setting lport 25ad2c72-7d4d-4eb9-bf00-a5c42aa9d301 down in Southbound
Jan 20 15:12:38 compute-0 nova_compute[250018]: 2026-01-20 15:12:38.085 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:12:38 compute-0 ovn_controller[148666]: 2026-01-20T15:12:38Z|00650|binding|INFO|Removing iface tap25ad2c72-7d ovn-installed in OVS
Jan 20 15:12:38 compute-0 nova_compute[250018]: 2026-01-20 15:12:38.087 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:12:38 compute-0 nova_compute[250018]: 2026-01-20 15:12:38.116 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:12:38 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:12:38.122 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ab:6b:07 10.100.0.12'], port_security=['fa:16:3e:ab:6b:07 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'f3b0f200-2f57-4c25-bdf4-8d17165642fe', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ef6ea4cb-557a-4dec-844c-6c933ddba0b1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1ed5feeeafe7448a8efb47ab975b0ead', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3875f94a-ec8d-4588-90ca-c7ebe4dc6a1a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.226'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9a649097-9411-41ae-8903-e778937a7e59, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=25ad2c72-7d4d-4eb9-bf00-a5c42aa9d301) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:12:38 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:12:38.123 160071 INFO neutron.agent.ovn.metadata.agent [-] Port 25ad2c72-7d4d-4eb9-bf00-a5c42aa9d301 in datapath ef6ea4cb-557a-4dec-844c-6c933ddba0b1 unbound from our chassis
Jan 20 15:12:38 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:12:38.125 160071 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ef6ea4cb-557a-4dec-844c-6c933ddba0b1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 15:12:38 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:12:38.126 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[49e210ab-3209-4c48-b286-3010421d714b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:12:38 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:12:38.126 160071 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ef6ea4cb-557a-4dec-844c-6c933ddba0b1 namespace which is not needed anymore
Jan 20 15:12:38 compute-0 nova_compute[250018]: 2026-01-20 15:12:38.134 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:12:38 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:12:38 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:12:38 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:12:38.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:12:38 compute-0 systemd[1]: machine-qemu\x2d78\x2dinstance\x2d000000b2.scope: Deactivated successfully.
Jan 20 15:12:38 compute-0 systemd[1]: machine-qemu\x2d78\x2dinstance\x2d000000b2.scope: Consumed 15.811s CPU time.
Jan 20 15:12:38 compute-0 systemd-machined[216401]: Machine qemu-78-instance-000000b2 terminated.
Jan 20 15:12:38 compute-0 neutron-haproxy-ovnmeta-ef6ea4cb-557a-4dec-844c-6c933ddba0b1[355131]: [NOTICE]   (355137) : haproxy version is 2.8.14-c23fe91
Jan 20 15:12:38 compute-0 neutron-haproxy-ovnmeta-ef6ea4cb-557a-4dec-844c-6c933ddba0b1[355131]: [NOTICE]   (355137) : path to executable is /usr/sbin/haproxy
Jan 20 15:12:38 compute-0 neutron-haproxy-ovnmeta-ef6ea4cb-557a-4dec-844c-6c933ddba0b1[355131]: [WARNING]  (355137) : Exiting Master process...
Jan 20 15:12:38 compute-0 neutron-haproxy-ovnmeta-ef6ea4cb-557a-4dec-844c-6c933ddba0b1[355131]: [ALERT]    (355137) : Current worker (355139) exited with code 143 (Terminated)
Jan 20 15:12:38 compute-0 neutron-haproxy-ovnmeta-ef6ea4cb-557a-4dec-844c-6c933ddba0b1[355131]: [WARNING]  (355137) : All workers exited. Exiting... (0)
Jan 20 15:12:38 compute-0 systemd[1]: libpod-b043c4d077938bfd0d3800b9b6aae179208c52d07268a2d15dc1a086f842d37f.scope: Deactivated successfully.
Jan 20 15:12:38 compute-0 podman[356903]: 2026-01-20 15:12:38.268168441 +0000 UTC m=+0.048444677 container died b043c4d077938bfd0d3800b9b6aae179208c52d07268a2d15dc1a086f842d37f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ef6ea4cb-557a-4dec-844c-6c933ddba0b1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3)
Jan 20 15:12:38 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b043c4d077938bfd0d3800b9b6aae179208c52d07268a2d15dc1a086f842d37f-userdata-shm.mount: Deactivated successfully.
Jan 20 15:12:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-59aa1722068164d57c2d8f7686d1628f48f85a0771725d25979932125d2bde82-merged.mount: Deactivated successfully.
Jan 20 15:12:38 compute-0 podman[356903]: 2026-01-20 15:12:38.403281046 +0000 UTC m=+0.183557282 container cleanup b043c4d077938bfd0d3800b9b6aae179208c52d07268a2d15dc1a086f842d37f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ef6ea4cb-557a-4dec-844c-6c933ddba0b1, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 15:12:38 compute-0 systemd[1]: libpod-conmon-b043c4d077938bfd0d3800b9b6aae179208c52d07268a2d15dc1a086f842d37f.scope: Deactivated successfully.
Jan 20 15:12:38 compute-0 nova_compute[250018]: 2026-01-20 15:12:38.415 250022 DEBUG nova.compute.manager [req-2d728edf-5f04-4aaa-b80e-e171d9196a0b req-8c0a20cb-e8cf-4665-90a0-c4ca7cb7cdf3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f3b0f200-2f57-4c25-bdf4-8d17165642fe] Received event network-vif-unplugged-25ad2c72-7d4d-4eb9-bf00-a5c42aa9d301 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:12:38 compute-0 nova_compute[250018]: 2026-01-20 15:12:38.416 250022 DEBUG oslo_concurrency.lockutils [req-2d728edf-5f04-4aaa-b80e-e171d9196a0b req-8c0a20cb-e8cf-4665-90a0-c4ca7cb7cdf3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "f3b0f200-2f57-4c25-bdf4-8d17165642fe-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:12:38 compute-0 nova_compute[250018]: 2026-01-20 15:12:38.417 250022 DEBUG oslo_concurrency.lockutils [req-2d728edf-5f04-4aaa-b80e-e171d9196a0b req-8c0a20cb-e8cf-4665-90a0-c4ca7cb7cdf3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "f3b0f200-2f57-4c25-bdf4-8d17165642fe-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:12:38 compute-0 nova_compute[250018]: 2026-01-20 15:12:38.417 250022 DEBUG oslo_concurrency.lockutils [req-2d728edf-5f04-4aaa-b80e-e171d9196a0b req-8c0a20cb-e8cf-4665-90a0-c4ca7cb7cdf3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "f3b0f200-2f57-4c25-bdf4-8d17165642fe-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:12:38 compute-0 nova_compute[250018]: 2026-01-20 15:12:38.417 250022 DEBUG nova.compute.manager [req-2d728edf-5f04-4aaa-b80e-e171d9196a0b req-8c0a20cb-e8cf-4665-90a0-c4ca7cb7cdf3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f3b0f200-2f57-4c25-bdf4-8d17165642fe] No waiting events found dispatching network-vif-unplugged-25ad2c72-7d4d-4eb9-bf00-a5c42aa9d301 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:12:38 compute-0 nova_compute[250018]: 2026-01-20 15:12:38.417 250022 WARNING nova.compute.manager [req-2d728edf-5f04-4aaa-b80e-e171d9196a0b req-8c0a20cb-e8cf-4665-90a0-c4ca7cb7cdf3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f3b0f200-2f57-4c25-bdf4-8d17165642fe] Received unexpected event network-vif-unplugged-25ad2c72-7d4d-4eb9-bf00-a5c42aa9d301 for instance with vm_state active and task_state resize_migrating.
Jan 20 15:12:38 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2689: 321 pgs: 321 active+clean; 376 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 423 KiB/s rd, 2.6 MiB/s wr, 125 op/s
Jan 20 15:12:38 compute-0 podman[356945]: 2026-01-20 15:12:38.509549423 +0000 UTC m=+0.077513203 container remove b043c4d077938bfd0d3800b9b6aae179208c52d07268a2d15dc1a086f842d37f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ef6ea4cb-557a-4dec-844c-6c933ddba0b1, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0)
Jan 20 15:12:38 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:12:38.515 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[40e39e3e-d429-42f2-bac7-8b5524e8f6b6]: (4, ('Tue Jan 20 03:12:38 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-ef6ea4cb-557a-4dec-844c-6c933ddba0b1 (b043c4d077938bfd0d3800b9b6aae179208c52d07268a2d15dc1a086f842d37f)\nb043c4d077938bfd0d3800b9b6aae179208c52d07268a2d15dc1a086f842d37f\nTue Jan 20 03:12:38 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-ef6ea4cb-557a-4dec-844c-6c933ddba0b1 (b043c4d077938bfd0d3800b9b6aae179208c52d07268a2d15dc1a086f842d37f)\nb043c4d077938bfd0d3800b9b6aae179208c52d07268a2d15dc1a086f842d37f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:12:38 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:12:38.517 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[c6ecaf4f-96e7-4320-b05f-10fb682df3a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:12:38 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:12:38.518 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapef6ea4cb-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:12:38 compute-0 nova_compute[250018]: 2026-01-20 15:12:38.520 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:12:38 compute-0 kernel: tapef6ea4cb-50: left promiscuous mode
Jan 20 15:12:38 compute-0 nova_compute[250018]: 2026-01-20 15:12:38.538 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:12:38 compute-0 nova_compute[250018]: 2026-01-20 15:12:38.539 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:12:38 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:12:38.542 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[f91183d7-54c5-4228-8438-11fa35250a55]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:12:38 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:12:38.555 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[9b256c14-93a1-458d-b9ba-516fbd88a45a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:12:38 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:12:38.556 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[8fc7ca38-6b79-4bc4-9e6d-0286baa6befc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:12:38 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:12:38.573 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[d993b7ef-8451-4ee2-b781-663100a5b7f8]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 786277, 'reachable_time': 25948, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 356963, 'error': None, 'target': 'ovnmeta-ef6ea4cb-557a-4dec-844c-6c933ddba0b1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:12:38 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:12:38.576 160517 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ef6ea4cb-557a-4dec-844c-6c933ddba0b1 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 15:12:38 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:12:38.576 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[e2c02ab8-fa85-4d23-a862-7f1589e1339e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:12:38 compute-0 systemd[1]: run-netns-ovnmeta\x2def6ea4cb\x2d557a\x2d4dec\x2d844c\x2d6c933ddba0b1.mount: Deactivated successfully.
Jan 20 15:12:38 compute-0 nova_compute[250018]: 2026-01-20 15:12:38.749 250022 INFO nova.virt.libvirt.driver [None req-9e5c7bcf-e8da-46ec-8456-e937cc9e653a 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] [instance: f3b0f200-2f57-4c25-bdf4-8d17165642fe] Instance shutdown successfully after 4 seconds.
Jan 20 15:12:38 compute-0 nova_compute[250018]: 2026-01-20 15:12:38.754 250022 INFO nova.virt.libvirt.driver [-] [instance: f3b0f200-2f57-4c25-bdf4-8d17165642fe] Instance destroyed successfully.
Jan 20 15:12:38 compute-0 nova_compute[250018]: 2026-01-20 15:12:38.755 250022 DEBUG nova.virt.libvirt.vif [None req-9e5c7bcf-e8da-46ec-8456-e937cc9e653a 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T15:11:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-755379148',display_name='tempest-TestNetworkAdvancedServerOps-server-755379148',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-755379148',id=178,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP0NB6AtDQr7I3Hp0XR7ulJzBFloX/ApnUaNnswWSYzrrT8mFzgvIiFRhCWLiZ+TDOJfVtcwGCfevRbqTmLZ5wdo4P6v9G2NYca0swLwaNQ/zK8Zmxz5PIdul2BRm2ICrw==',key_name='tempest-TestNetworkAdvancedServerOps-1554431653',keypairs=<?>,launch_index=0,launched_at=2026-01-20T15:12:02Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(1),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='1ed5feeeafe7448a8efb47ab975b0ead',ramdisk_id='',reservation_id='r-j0wavald',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestNetworkAdvancedServerOps-175282664',owner_user_name='tempest-TestNetworkAdvancedServerOps-175282664-project-member'},tags=<?>,task_state='resize_migrating',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T15:12:30Z,user_data=None,user_id='442a7a5cb8ea426a82be9762b262d171',uuid=f3b0f200-2f57-4c25-bdf4-8d17165642fe,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "25ad2c72-7d4d-4eb9-bf00-a5c42aa9d301", "address": "fa:16:3e:ab:6b:07", "network": {"id": "ef6ea4cb-557a-4dec-844c-6c933ddba0b1", "bridge": "br-int", "label": "tempest-network-smoke--1896631991", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": 
"10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--1896631991", "vif_mac": "fa:16:3e:ab:6b:07"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ed5feeeafe7448a8efb47ab975b0ead", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25ad2c72-7d", "ovs_interfaceid": "25ad2c72-7d4d-4eb9-bf00-a5c42aa9d301", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 15:12:38 compute-0 nova_compute[250018]: 2026-01-20 15:12:38.756 250022 DEBUG nova.network.os_vif_util [None req-9e5c7bcf-e8da-46ec-8456-e937cc9e653a 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] Converting VIF {"id": "25ad2c72-7d4d-4eb9-bf00-a5c42aa9d301", "address": "fa:16:3e:ab:6b:07", "network": {"id": "ef6ea4cb-557a-4dec-844c-6c933ddba0b1", "bridge": "br-int", "label": "tempest-network-smoke--1896631991", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--1896631991", "vif_mac": "fa:16:3e:ab:6b:07"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ed5feeeafe7448a8efb47ab975b0ead", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25ad2c72-7d", "ovs_interfaceid": "25ad2c72-7d4d-4eb9-bf00-a5c42aa9d301", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:12:38 compute-0 nova_compute[250018]: 2026-01-20 15:12:38.756 250022 DEBUG nova.network.os_vif_util [None req-9e5c7bcf-e8da-46ec-8456-e937cc9e653a 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ab:6b:07,bridge_name='br-int',has_traffic_filtering=True,id=25ad2c72-7d4d-4eb9-bf00-a5c42aa9d301,network=Network(ef6ea4cb-557a-4dec-844c-6c933ddba0b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25ad2c72-7d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:12:38 compute-0 nova_compute[250018]: 2026-01-20 15:12:38.757 250022 DEBUG os_vif [None req-9e5c7bcf-e8da-46ec-8456-e937cc9e653a 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ab:6b:07,bridge_name='br-int',has_traffic_filtering=True,id=25ad2c72-7d4d-4eb9-bf00-a5c42aa9d301,network=Network(ef6ea4cb-557a-4dec-844c-6c933ddba0b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25ad2c72-7d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 15:12:38 compute-0 nova_compute[250018]: 2026-01-20 15:12:38.759 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:12:38 compute-0 nova_compute[250018]: 2026-01-20 15:12:38.759 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap25ad2c72-7d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:12:38 compute-0 nova_compute[250018]: 2026-01-20 15:12:38.761 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:12:38 compute-0 nova_compute[250018]: 2026-01-20 15:12:38.762 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 15:12:38 compute-0 nova_compute[250018]: 2026-01-20 15:12:38.763 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:12:38 compute-0 nova_compute[250018]: 2026-01-20 15:12:38.766 250022 INFO os_vif [None req-9e5c7bcf-e8da-46ec-8456-e937cc9e653a 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ab:6b:07,bridge_name='br-int',has_traffic_filtering=True,id=25ad2c72-7d4d-4eb9-bf00-a5c42aa9d301,network=Network(ef6ea4cb-557a-4dec-844c-6c933ddba0b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25ad2c72-7d')
Jan 20 15:12:38 compute-0 nova_compute[250018]: 2026-01-20 15:12:38.771 250022 DEBUG nova.virt.libvirt.driver [None req-9e5c7bcf-e8da-46ec-8456-e937cc9e653a 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] skipping disk for instance-000000b2 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 15:12:38 compute-0 nova_compute[250018]: 2026-01-20 15:12:38.771 250022 DEBUG nova.virt.libvirt.driver [None req-9e5c7bcf-e8da-46ec-8456-e937cc9e653a 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] skipping disk for instance-000000b2 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 15:12:38 compute-0 ceph-mon[74360]: osdmap e393: 3 total, 3 up, 3 in
Jan 20 15:12:38 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/1378670698' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:12:38 compute-0 ceph-mon[74360]: pgmap v2689: 321 pgs: 321 active+clean; 376 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 423 KiB/s rd, 2.6 MiB/s wr, 125 op/s
Jan 20 15:12:39 compute-0 nova_compute[250018]: 2026-01-20 15:12:39.024 250022 DEBUG neutronclient.v2_0.client [None req-9e5c7bcf-e8da-46ec-8456-e937cc9e653a 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] Error message: {"NeutronError": {"type": "PortBindingNotFound", "message": "Binding for port 25ad2c72-7d4d-4eb9-bf00-a5c42aa9d301 for host compute-2.ctlplane.example.com could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262
Jan 20 15:12:39 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:12:39 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:12:39 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:12:39.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:12:39 compute-0 nova_compute[250018]: 2026-01-20 15:12:39.159 250022 DEBUG oslo_concurrency.lockutils [None req-9e5c7bcf-e8da-46ec-8456-e937cc9e653a 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] Acquiring lock "f3b0f200-2f57-4c25-bdf4-8d17165642fe-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:12:39 compute-0 nova_compute[250018]: 2026-01-20 15:12:39.160 250022 DEBUG oslo_concurrency.lockutils [None req-9e5c7bcf-e8da-46ec-8456-e937cc9e653a 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] Lock "f3b0f200-2f57-4c25-bdf4-8d17165642fe-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:12:39 compute-0 nova_compute[250018]: 2026-01-20 15:12:39.160 250022 DEBUG oslo_concurrency.lockutils [None req-9e5c7bcf-e8da-46ec-8456-e937cc9e653a 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] Lock "f3b0f200-2f57-4c25-bdf4-8d17165642fe-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:12:40 compute-0 nova_compute[250018]: 2026-01-20 15:12:40.131 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:12:40 compute-0 sudo[356964]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:12:40 compute-0 sudo[356964]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:12:40 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:12:40 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:12:40 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:12:40.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:12:40 compute-0 sudo[356964]: pam_unix(sudo:session): session closed for user root
Jan 20 15:12:40 compute-0 sudo[356989]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:12:40 compute-0 sudo[356989]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:12:40 compute-0 sudo[356989]: pam_unix(sudo:session): session closed for user root
Jan 20 15:12:40 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2690: 321 pgs: 321 active+clean; 339 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 481 KiB/s rd, 4.1 MiB/s wr, 209 op/s
Jan 20 15:12:40 compute-0 nova_compute[250018]: 2026-01-20 15:12:40.546 250022 DEBUG nova.compute.manager [req-b30fb206-28f5-44f1-8f3f-c1f28b5fd6e9 req-c71a8222-51b1-427b-9bae-e83229426101 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f3b0f200-2f57-4c25-bdf4-8d17165642fe] Received event network-vif-plugged-25ad2c72-7d4d-4eb9-bf00-a5c42aa9d301 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:12:40 compute-0 nova_compute[250018]: 2026-01-20 15:12:40.546 250022 DEBUG oslo_concurrency.lockutils [req-b30fb206-28f5-44f1-8f3f-c1f28b5fd6e9 req-c71a8222-51b1-427b-9bae-e83229426101 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "f3b0f200-2f57-4c25-bdf4-8d17165642fe-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:12:40 compute-0 nova_compute[250018]: 2026-01-20 15:12:40.546 250022 DEBUG oslo_concurrency.lockutils [req-b30fb206-28f5-44f1-8f3f-c1f28b5fd6e9 req-c71a8222-51b1-427b-9bae-e83229426101 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "f3b0f200-2f57-4c25-bdf4-8d17165642fe-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:12:40 compute-0 nova_compute[250018]: 2026-01-20 15:12:40.547 250022 DEBUG oslo_concurrency.lockutils [req-b30fb206-28f5-44f1-8f3f-c1f28b5fd6e9 req-c71a8222-51b1-427b-9bae-e83229426101 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "f3b0f200-2f57-4c25-bdf4-8d17165642fe-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:12:40 compute-0 nova_compute[250018]: 2026-01-20 15:12:40.547 250022 DEBUG nova.compute.manager [req-b30fb206-28f5-44f1-8f3f-c1f28b5fd6e9 req-c71a8222-51b1-427b-9bae-e83229426101 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f3b0f200-2f57-4c25-bdf4-8d17165642fe] No waiting events found dispatching network-vif-plugged-25ad2c72-7d4d-4eb9-bf00-a5c42aa9d301 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:12:40 compute-0 nova_compute[250018]: 2026-01-20 15:12:40.547 250022 WARNING nova.compute.manager [req-b30fb206-28f5-44f1-8f3f-c1f28b5fd6e9 req-c71a8222-51b1-427b-9bae-e83229426101 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f3b0f200-2f57-4c25-bdf4-8d17165642fe] Received unexpected event network-vif-plugged-25ad2c72-7d4d-4eb9-bf00-a5c42aa9d301 for instance with vm_state active and task_state resize_migrated.
Jan 20 15:12:41 compute-0 ceph-mon[74360]: pgmap v2690: 321 pgs: 321 active+clean; 339 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 481 KiB/s rd, 4.1 MiB/s wr, 209 op/s
Jan 20 15:12:41 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:12:41 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:12:41 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:12:41.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:12:41 compute-0 nova_compute[250018]: 2026-01-20 15:12:41.307 250022 DEBUG nova.compute.manager [req-6c603b11-14d3-40b9-96d7-246e412b2044 req-acda59cc-eb49-4a67-9909-d104426f4b05 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f3b0f200-2f57-4c25-bdf4-8d17165642fe] Received event network-changed-25ad2c72-7d4d-4eb9-bf00-a5c42aa9d301 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:12:41 compute-0 nova_compute[250018]: 2026-01-20 15:12:41.307 250022 DEBUG nova.compute.manager [req-6c603b11-14d3-40b9-96d7-246e412b2044 req-acda59cc-eb49-4a67-9909-d104426f4b05 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f3b0f200-2f57-4c25-bdf4-8d17165642fe] Refreshing instance network info cache due to event network-changed-25ad2c72-7d4d-4eb9-bf00-a5c42aa9d301. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 15:12:41 compute-0 nova_compute[250018]: 2026-01-20 15:12:41.308 250022 DEBUG oslo_concurrency.lockutils [req-6c603b11-14d3-40b9-96d7-246e412b2044 req-acda59cc-eb49-4a67-9909-d104426f4b05 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-f3b0f200-2f57-4c25-bdf4-8d17165642fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:12:41 compute-0 nova_compute[250018]: 2026-01-20 15:12:41.308 250022 DEBUG oslo_concurrency.lockutils [req-6c603b11-14d3-40b9-96d7-246e412b2044 req-acda59cc-eb49-4a67-9909-d104426f4b05 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-f3b0f200-2f57-4c25-bdf4-8d17165642fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:12:41 compute-0 nova_compute[250018]: 2026-01-20 15:12:41.308 250022 DEBUG nova.network.neutron [req-6c603b11-14d3-40b9-96d7-246e412b2044 req-acda59cc-eb49-4a67-9909-d104426f4b05 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f3b0f200-2f57-4c25-bdf4-8d17165642fe] Refreshing network info cache for port 25ad2c72-7d4d-4eb9-bf00-a5c42aa9d301 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 15:12:41 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e393 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:12:41 compute-0 ceph-mon[74360]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #135. Immutable memtables: 0.
Jan 20 15:12:41 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:12:41.783573) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 15:12:41 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:856] [default] [JOB 81] Flushing memtable with next log file: 135
Jan 20 15:12:41 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768921961783623, "job": 81, "event": "flush_started", "num_memtables": 1, "num_entries": 508, "num_deletes": 252, "total_data_size": 549785, "memory_usage": 560112, "flush_reason": "Manual Compaction"}
Jan 20 15:12:41 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:885] [default] [JOB 81] Level-0 flush table #136: started
Jan 20 15:12:41 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768921961789478, "cf_name": "default", "job": 81, "event": "table_file_creation", "file_number": 136, "file_size": 472423, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 60374, "largest_seqno": 60881, "table_properties": {"data_size": 469520, "index_size": 874, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 7759, "raw_average_key_size": 21, "raw_value_size": 463546, "raw_average_value_size": 1266, "num_data_blocks": 36, "num_entries": 366, "num_filter_entries": 366, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768921942, "oldest_key_time": 1768921942, "file_creation_time": 1768921961, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1688fb6f-f397-4aac-a0e8-874ff91d97a7", "db_session_id": "5V3N2TVXYZBCXP55EZHK", "orig_file_number": 136, "seqno_to_time_mapping": "N/A"}}
Jan 20 15:12:41 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 81] Flush lasted 5935 microseconds, and 2838 cpu microseconds.
Jan 20 15:12:41 compute-0 ceph-mon[74360]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 15:12:41 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:12:41.789509) [db/flush_job.cc:967] [default] [JOB 81] Level-0 flush table #136: 472423 bytes OK
Jan 20 15:12:41 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:12:41.789530) [db/memtable_list.cc:519] [default] Level-0 commit table #136 started
Jan 20 15:12:41 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:12:41.790766) [db/memtable_list.cc:722] [default] Level-0 commit table #136: memtable #1 done
Jan 20 15:12:41 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:12:41.790781) EVENT_LOG_v1 {"time_micros": 1768921961790776, "job": 81, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 15:12:41 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:12:41.790797) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 15:12:41 compute-0 ceph-mon[74360]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 81] Try to delete WAL files size 546829, prev total WAL file size 546829, number of live WAL files 2.
Jan 20 15:12:41 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000132.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:12:41 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:12:41.791312) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032303035' seq:72057594037927935, type:22 .. '6D6772737461740032323538' seq:0, type:0; will stop at (end)
Jan 20 15:12:41 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 82] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 15:12:41 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 81 Base level 0, inputs: [136(461KB)], [134(13MB)]
Jan 20 15:12:41 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768921961791343, "job": 82, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [136], "files_L6": [134], "score": -1, "input_data_size": 14813722, "oldest_snapshot_seqno": -1}
Jan 20 15:12:41 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 82] Generated table #137: 8845 keys, 10986306 bytes, temperature: kUnknown
Jan 20 15:12:41 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768921961864281, "cf_name": "default", "job": 82, "event": "table_file_creation", "file_number": 137, "file_size": 10986306, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10929458, "index_size": 33631, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22149, "raw_key_size": 232849, "raw_average_key_size": 26, "raw_value_size": 10774229, "raw_average_value_size": 1218, "num_data_blocks": 1287, "num_entries": 8845, "num_filter_entries": 8845, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768917305, "oldest_key_time": 0, "file_creation_time": 1768921961, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1688fb6f-f397-4aac-a0e8-874ff91d97a7", "db_session_id": "5V3N2TVXYZBCXP55EZHK", "orig_file_number": 137, "seqno_to_time_mapping": "N/A"}}
Jan 20 15:12:41 compute-0 ceph-mon[74360]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 15:12:41 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:12:41.864926) [db/compaction/compaction_job.cc:1663] [default] [JOB 82] Compacted 1@0 + 1@6 files to L6 => 10986306 bytes
Jan 20 15:12:41 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:12:41.866656) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 202.9 rd, 150.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 13.7 +0.0 blob) out(10.5 +0.0 blob), read-write-amplify(54.6) write-amplify(23.3) OK, records in: 9365, records dropped: 520 output_compression: NoCompression
Jan 20 15:12:41 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:12:41.866682) EVENT_LOG_v1 {"time_micros": 1768921961866671, "job": 82, "event": "compaction_finished", "compaction_time_micros": 73019, "compaction_time_cpu_micros": 25506, "output_level": 6, "num_output_files": 1, "total_output_size": 10986306, "num_input_records": 9365, "num_output_records": 8845, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 15:12:41 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000136.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:12:41 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768921961867099, "job": 82, "event": "table_file_deletion", "file_number": 136}
Jan 20 15:12:41 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000134.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:12:41 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768921961869838, "job": 82, "event": "table_file_deletion", "file_number": 134}
Jan 20 15:12:41 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:12:41.791225) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:12:41 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:12:41.870048) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:12:41 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:12:41.870054) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:12:41 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:12:41.870056) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:12:41 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:12:41.870058) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:12:41 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:12:41.870060) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:12:42 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:12:42 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:12:42 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:12:42.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:12:42 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:12:42.320 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=367c1a2c-b16a-4828-ab5a-626bb50023b4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '58'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:12:42 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2691: 321 pgs: 321 active+clean; 326 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 418 KiB/s rd, 4.0 MiB/s wr, 214 op/s
Jan 20 15:12:42 compute-0 ceph-mon[74360]: pgmap v2691: 321 pgs: 321 active+clean; 326 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 418 KiB/s rd, 4.0 MiB/s wr, 214 op/s
Jan 20 15:12:43 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:12:43 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:12:43 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:12:43.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:12:43 compute-0 nova_compute[250018]: 2026-01-20 15:12:43.562 250022 DEBUG nova.network.neutron [req-6c603b11-14d3-40b9-96d7-246e412b2044 req-acda59cc-eb49-4a67-9909-d104426f4b05 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f3b0f200-2f57-4c25-bdf4-8d17165642fe] Updated VIF entry in instance network info cache for port 25ad2c72-7d4d-4eb9-bf00-a5c42aa9d301. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 15:12:43 compute-0 nova_compute[250018]: 2026-01-20 15:12:43.563 250022 DEBUG nova.network.neutron [req-6c603b11-14d3-40b9-96d7-246e412b2044 req-acda59cc-eb49-4a67-9909-d104426f4b05 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f3b0f200-2f57-4c25-bdf4-8d17165642fe] Updating instance_info_cache with network_info: [{"id": "25ad2c72-7d4d-4eb9-bf00-a5c42aa9d301", "address": "fa:16:3e:ab:6b:07", "network": {"id": "ef6ea4cb-557a-4dec-844c-6c933ddba0b1", "bridge": "br-int", "label": "tempest-network-smoke--1896631991", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ed5feeeafe7448a8efb47ab975b0ead", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25ad2c72-7d", "ovs_interfaceid": "25ad2c72-7d4d-4eb9-bf00-a5c42aa9d301", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:12:43 compute-0 nova_compute[250018]: 2026-01-20 15:12:43.616 250022 DEBUG oslo_concurrency.lockutils [req-6c603b11-14d3-40b9-96d7-246e412b2044 req-acda59cc-eb49-4a67-9909-d104426f4b05 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-f3b0f200-2f57-4c25-bdf4-8d17165642fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:12:43 compute-0 nova_compute[250018]: 2026-01-20 15:12:43.761 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:12:43 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/1046212052' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:12:43 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/1969015913' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:12:43 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/1969015913' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:12:44 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e393 do_prune osdmap full prune enabled
Jan 20 15:12:44 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e394 e394: 3 total, 3 up, 3 in
Jan 20 15:12:44 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e394: 3 total, 3 up, 3 in
Jan 20 15:12:44 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:12:44 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:12:44 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:12:44.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:12:44 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2693: 321 pgs: 321 active+clean; 308 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 2.7 MiB/s wr, 178 op/s
Jan 20 15:12:45 compute-0 nova_compute[250018]: 2026-01-20 15:12:45.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:12:45 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:12:45 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:12:45 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:12:45.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:12:45 compute-0 ceph-mon[74360]: osdmap e394: 3 total, 3 up, 3 in
Jan 20 15:12:45 compute-0 ceph-mon[74360]: pgmap v2693: 321 pgs: 321 active+clean; 308 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 2.7 MiB/s wr, 178 op/s
Jan 20 15:12:45 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/1692019823' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:12:45 compute-0 nova_compute[250018]: 2026-01-20 15:12:45.133 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:12:45 compute-0 nova_compute[250018]: 2026-01-20 15:12:45.925 250022 DEBUG nova.compute.manager [req-54b4d772-0214-48fd-9fcc-d453e445096b req-412c24a1-2118-40af-8ec5-8eb9080fcc42 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f3b0f200-2f57-4c25-bdf4-8d17165642fe] Received event network-vif-plugged-25ad2c72-7d4d-4eb9-bf00-a5c42aa9d301 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:12:45 compute-0 nova_compute[250018]: 2026-01-20 15:12:45.925 250022 DEBUG oslo_concurrency.lockutils [req-54b4d772-0214-48fd-9fcc-d453e445096b req-412c24a1-2118-40af-8ec5-8eb9080fcc42 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "f3b0f200-2f57-4c25-bdf4-8d17165642fe-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:12:45 compute-0 nova_compute[250018]: 2026-01-20 15:12:45.925 250022 DEBUG oslo_concurrency.lockutils [req-54b4d772-0214-48fd-9fcc-d453e445096b req-412c24a1-2118-40af-8ec5-8eb9080fcc42 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "f3b0f200-2f57-4c25-bdf4-8d17165642fe-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:12:45 compute-0 nova_compute[250018]: 2026-01-20 15:12:45.925 250022 DEBUG oslo_concurrency.lockutils [req-54b4d772-0214-48fd-9fcc-d453e445096b req-412c24a1-2118-40af-8ec5-8eb9080fcc42 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "f3b0f200-2f57-4c25-bdf4-8d17165642fe-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:12:45 compute-0 nova_compute[250018]: 2026-01-20 15:12:45.926 250022 DEBUG nova.compute.manager [req-54b4d772-0214-48fd-9fcc-d453e445096b req-412c24a1-2118-40af-8ec5-8eb9080fcc42 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f3b0f200-2f57-4c25-bdf4-8d17165642fe] No waiting events found dispatching network-vif-plugged-25ad2c72-7d4d-4eb9-bf00-a5c42aa9d301 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:12:45 compute-0 nova_compute[250018]: 2026-01-20 15:12:45.926 250022 WARNING nova.compute.manager [req-54b4d772-0214-48fd-9fcc-d453e445096b req-412c24a1-2118-40af-8ec5-8eb9080fcc42 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f3b0f200-2f57-4c25-bdf4-8d17165642fe] Received unexpected event network-vif-plugged-25ad2c72-7d4d-4eb9-bf00-a5c42aa9d301 for instance with vm_state active and task_state resize_finish.
Jan 20 15:12:46 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:12:46 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:12:46 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:12:46.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:12:46 compute-0 nova_compute[250018]: 2026-01-20 15:12:46.195 250022 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768921951.1944578, cbccef94-6ebf-4050-9b57-31486efe9e8f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:12:46 compute-0 nova_compute[250018]: 2026-01-20 15:12:46.196 250022 INFO nova.compute.manager [-] [instance: cbccef94-6ebf-4050-9b57-31486efe9e8f] VM Stopped (Lifecycle Event)
Jan 20 15:12:46 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/358433427' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:12:46 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/131253235' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:12:46 compute-0 nova_compute[250018]: 2026-01-20 15:12:46.222 250022 DEBUG nova.compute.manager [None req-2ded25f9-db30-4ba1-a7ad-13189b9094ad - - - - - -] [instance: cbccef94-6ebf-4050-9b57-31486efe9e8f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:12:46 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2694: 321 pgs: 321 active+clean; 216 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 2.5 MiB/s wr, 286 op/s
Jan 20 15:12:46 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e394 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:12:46 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e394 do_prune osdmap full prune enabled
Jan 20 15:12:46 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e395 e395: 3 total, 3 up, 3 in
Jan 20 15:12:46 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e395: 3 total, 3 up, 3 in
Jan 20 15:12:47 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:12:47 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:12:47 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:12:47.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:12:47 compute-0 nova_compute[250018]: 2026-01-20 15:12:47.184 250022 DEBUG oslo_concurrency.lockutils [None req-ba83ce46-5fb2-4ed2-9823-3b94b19a71aa 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Acquiring lock "f3b0f200-2f57-4c25-bdf4-8d17165642fe" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:12:47 compute-0 nova_compute[250018]: 2026-01-20 15:12:47.184 250022 DEBUG oslo_concurrency.lockutils [None req-ba83ce46-5fb2-4ed2-9823-3b94b19a71aa 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lock "f3b0f200-2f57-4c25-bdf4-8d17165642fe" acquired by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:12:47 compute-0 nova_compute[250018]: 2026-01-20 15:12:47.184 250022 DEBUG nova.compute.manager [None req-ba83ce46-5fb2-4ed2-9823-3b94b19a71aa 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: f3b0f200-2f57-4c25-bdf4-8d17165642fe] Going to confirm migration 19 do_confirm_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:4679
Jan 20 15:12:47 compute-0 ceph-mon[74360]: pgmap v2694: 321 pgs: 321 active+clean; 216 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 2.5 MiB/s wr, 286 op/s
Jan 20 15:12:47 compute-0 ceph-mon[74360]: osdmap e395: 3 total, 3 up, 3 in
Jan 20 15:12:47 compute-0 nova_compute[250018]: 2026-01-20 15:12:47.792 250022 DEBUG neutronclient.v2_0.client [None req-ba83ce46-5fb2-4ed2-9823-3b94b19a71aa 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Error message: {"NeutronError": {"type": "PortBindingNotFound", "message": "Binding for port 25ad2c72-7d4d-4eb9-bf00-a5c42aa9d301 for host compute-0.ctlplane.example.com could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262
Jan 20 15:12:47 compute-0 nova_compute[250018]: 2026-01-20 15:12:47.793 250022 DEBUG oslo_concurrency.lockutils [None req-ba83ce46-5fb2-4ed2-9823-3b94b19a71aa 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Acquiring lock "refresh_cache-f3b0f200-2f57-4c25-bdf4-8d17165642fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:12:47 compute-0 nova_compute[250018]: 2026-01-20 15:12:47.793 250022 DEBUG oslo_concurrency.lockutils [None req-ba83ce46-5fb2-4ed2-9823-3b94b19a71aa 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Acquired lock "refresh_cache-f3b0f200-2f57-4c25-bdf4-8d17165642fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:12:47 compute-0 nova_compute[250018]: 2026-01-20 15:12:47.794 250022 DEBUG nova.network.neutron [None req-ba83ce46-5fb2-4ed2-9823-3b94b19a71aa 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: f3b0f200-2f57-4c25-bdf4-8d17165642fe] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 15:12:47 compute-0 nova_compute[250018]: 2026-01-20 15:12:47.794 250022 DEBUG nova.objects.instance [None req-ba83ce46-5fb2-4ed2-9823-3b94b19a71aa 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lazy-loading 'info_cache' on Instance uuid f3b0f200-2f57-4c25-bdf4-8d17165642fe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:12:48 compute-0 nova_compute[250018]: 2026-01-20 15:12:48.112 250022 DEBUG nova.compute.manager [req-b7ad8032-2fa8-4c69-b375-829fc64f3b62 req-7f1c082c-01b5-44f0-8ad2-a6e4d2c57dfc 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f3b0f200-2f57-4c25-bdf4-8d17165642fe] Received event network-vif-plugged-25ad2c72-7d4d-4eb9-bf00-a5c42aa9d301 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:12:48 compute-0 nova_compute[250018]: 2026-01-20 15:12:48.114 250022 DEBUG oslo_concurrency.lockutils [req-b7ad8032-2fa8-4c69-b375-829fc64f3b62 req-7f1c082c-01b5-44f0-8ad2-a6e4d2c57dfc 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "f3b0f200-2f57-4c25-bdf4-8d17165642fe-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:12:48 compute-0 nova_compute[250018]: 2026-01-20 15:12:48.114 250022 DEBUG oslo_concurrency.lockutils [req-b7ad8032-2fa8-4c69-b375-829fc64f3b62 req-7f1c082c-01b5-44f0-8ad2-a6e4d2c57dfc 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "f3b0f200-2f57-4c25-bdf4-8d17165642fe-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:12:48 compute-0 nova_compute[250018]: 2026-01-20 15:12:48.114 250022 DEBUG oslo_concurrency.lockutils [req-b7ad8032-2fa8-4c69-b375-829fc64f3b62 req-7f1c082c-01b5-44f0-8ad2-a6e4d2c57dfc 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "f3b0f200-2f57-4c25-bdf4-8d17165642fe-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:12:48 compute-0 nova_compute[250018]: 2026-01-20 15:12:48.115 250022 DEBUG nova.compute.manager [req-b7ad8032-2fa8-4c69-b375-829fc64f3b62 req-7f1c082c-01b5-44f0-8ad2-a6e4d2c57dfc 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f3b0f200-2f57-4c25-bdf4-8d17165642fe] No waiting events found dispatching network-vif-plugged-25ad2c72-7d4d-4eb9-bf00-a5c42aa9d301 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:12:48 compute-0 nova_compute[250018]: 2026-01-20 15:12:48.115 250022 WARNING nova.compute.manager [req-b7ad8032-2fa8-4c69-b375-829fc64f3b62 req-7f1c082c-01b5-44f0-8ad2-a6e4d2c57dfc 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f3b0f200-2f57-4c25-bdf4-8d17165642fe] Received unexpected event network-vif-plugged-25ad2c72-7d4d-4eb9-bf00-a5c42aa9d301 for instance with vm_state resized and task_state None.
Jan 20 15:12:48 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:12:48 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:12:48 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:12:48.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:12:48 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2696: 321 pgs: 321 active+clean; 216 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 881 KiB/s wr, 199 op/s
Jan 20 15:12:48 compute-0 nova_compute[250018]: 2026-01-20 15:12:48.802 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:12:49 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:12:49 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:12:49 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:12:49.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:12:49 compute-0 ceph-mon[74360]: pgmap v2696: 321 pgs: 321 active+clean; 216 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 881 KiB/s wr, 199 op/s
Jan 20 15:12:50 compute-0 nova_compute[250018]: 2026-01-20 15:12:50.101 250022 DEBUG nova.network.neutron [None req-ba83ce46-5fb2-4ed2-9823-3b94b19a71aa 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: f3b0f200-2f57-4c25-bdf4-8d17165642fe] Updating instance_info_cache with network_info: [{"id": "25ad2c72-7d4d-4eb9-bf00-a5c42aa9d301", "address": "fa:16:3e:ab:6b:07", "network": {"id": "ef6ea4cb-557a-4dec-844c-6c933ddba0b1", "bridge": "br-int", "label": "tempest-network-smoke--1896631991", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ed5feeeafe7448a8efb47ab975b0ead", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25ad2c72-7d", "ovs_interfaceid": "25ad2c72-7d4d-4eb9-bf00-a5c42aa9d301", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:12:50 compute-0 nova_compute[250018]: 2026-01-20 15:12:50.132 250022 DEBUG oslo_concurrency.lockutils [None req-ba83ce46-5fb2-4ed2-9823-3b94b19a71aa 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Releasing lock "refresh_cache-f3b0f200-2f57-4c25-bdf4-8d17165642fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:12:50 compute-0 nova_compute[250018]: 2026-01-20 15:12:50.132 250022 DEBUG nova.objects.instance [None req-ba83ce46-5fb2-4ed2-9823-3b94b19a71aa 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lazy-loading 'migration_context' on Instance uuid f3b0f200-2f57-4c25-bdf4-8d17165642fe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:12:50 compute-0 nova_compute[250018]: 2026-01-20 15:12:50.137 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:12:50 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:12:50 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:12:50 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:12:50.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:12:50 compute-0 nova_compute[250018]: 2026-01-20 15:12:50.274 250022 DEBUG nova.storage.rbd_utils [None req-ba83ce46-5fb2-4ed2-9823-3b94b19a71aa 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] removing snapshot(nova-resize) on rbd image(f3b0f200-2f57-4c25-bdf4-8d17165642fe_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Jan 20 15:12:50 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2697: 321 pgs: 321 active+clean; 200 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 4.6 KiB/s wr, 267 op/s
Jan 20 15:12:50 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e395 do_prune osdmap full prune enabled
Jan 20 15:12:50 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e396 e396: 3 total, 3 up, 3 in
Jan 20 15:12:50 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e396: 3 total, 3 up, 3 in
Jan 20 15:12:50 compute-0 nova_compute[250018]: 2026-01-20 15:12:50.627 250022 DEBUG nova.virt.libvirt.vif [None req-ba83ce46-5fb2-4ed2-9823-3b94b19a71aa 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T15:11:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-755379148',display_name='tempest-TestNetworkAdvancedServerOps-server-755379148',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-755379148',id=178,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP0NB6AtDQr7I3Hp0XR7ulJzBFloX/ApnUaNnswWSYzrrT8mFzgvIiFRhCWLiZ+TDOJfVtcwGCfevRbqTmLZ5wdo4P6v9G2NYca0swLwaNQ/zK8Zmxz5PIdul2BRm2ICrw==',key_name='tempest-TestNetworkAdvancedServerOps-1554431653',keypairs=<?>,launch_index=0,launched_at=2026-01-20T15:12:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(1),node='compute-2.ctlplane.example.com',numa_topology=<?>,old_flavor=Flavor(1),os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1ed5feeeafe7448a8efb47ab975b0ead',ramdisk_id='',reservation_id='r-j0wavald',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestNetworkAdvancedServerOps-175282664',owner_user_name='tempest-TestNetworkAdvancedServerOps-175282664-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T15:12:46Z,user_data=None,user_id='442a7a5cb8ea426a82be9762b262d171',uuid=f3b0f200-2f57-4c25-bdf4-8d17165642fe,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='resized') vif={"id": "25ad2c72-7d4d-4eb9-bf00-a5c42aa9d301", "address": "fa:16:3e:ab:6b:07", "network": {"id": "ef6ea4cb-557a-4dec-844c-6c933ddba0b1", "bridge": "br-int", "label": "tempest-network-smoke--1896631991", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ed5feeeafe7448a8efb47ab975b0ead", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25ad2c72-7d", "ovs_interfaceid": "25ad2c72-7d4d-4eb9-bf00-a5c42aa9d301", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 15:12:50 compute-0 nova_compute[250018]: 2026-01-20 15:12:50.628 250022 DEBUG nova.network.os_vif_util [None req-ba83ce46-5fb2-4ed2-9823-3b94b19a71aa 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Converting VIF {"id": "25ad2c72-7d4d-4eb9-bf00-a5c42aa9d301", "address": "fa:16:3e:ab:6b:07", "network": {"id": "ef6ea4cb-557a-4dec-844c-6c933ddba0b1", "bridge": "br-int", "label": "tempest-network-smoke--1896631991", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ed5feeeafe7448a8efb47ab975b0ead", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25ad2c72-7d", "ovs_interfaceid": "25ad2c72-7d4d-4eb9-bf00-a5c42aa9d301", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:12:50 compute-0 nova_compute[250018]: 2026-01-20 15:12:50.629 250022 DEBUG nova.network.os_vif_util [None req-ba83ce46-5fb2-4ed2-9823-3b94b19a71aa 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ab:6b:07,bridge_name='br-int',has_traffic_filtering=True,id=25ad2c72-7d4d-4eb9-bf00-a5c42aa9d301,network=Network(ef6ea4cb-557a-4dec-844c-6c933ddba0b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25ad2c72-7d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:12:50 compute-0 nova_compute[250018]: 2026-01-20 15:12:50.630 250022 DEBUG os_vif [None req-ba83ce46-5fb2-4ed2-9823-3b94b19a71aa 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ab:6b:07,bridge_name='br-int',has_traffic_filtering=True,id=25ad2c72-7d4d-4eb9-bf00-a5c42aa9d301,network=Network(ef6ea4cb-557a-4dec-844c-6c933ddba0b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25ad2c72-7d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 15:12:50 compute-0 nova_compute[250018]: 2026-01-20 15:12:50.632 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:12:50 compute-0 nova_compute[250018]: 2026-01-20 15:12:50.632 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap25ad2c72-7d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:12:50 compute-0 nova_compute[250018]: 2026-01-20 15:12:50.632 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:12:50 compute-0 nova_compute[250018]: 2026-01-20 15:12:50.635 250022 INFO os_vif [None req-ba83ce46-5fb2-4ed2-9823-3b94b19a71aa 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ab:6b:07,bridge_name='br-int',has_traffic_filtering=True,id=25ad2c72-7d4d-4eb9-bf00-a5c42aa9d301,network=Network(ef6ea4cb-557a-4dec-844c-6c933ddba0b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25ad2c72-7d')
Jan 20 15:12:50 compute-0 nova_compute[250018]: 2026-01-20 15:12:50.636 250022 DEBUG oslo_concurrency.lockutils [None req-ba83ce46-5fb2-4ed2-9823-3b94b19a71aa 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:12:50 compute-0 nova_compute[250018]: 2026-01-20 15:12:50.636 250022 DEBUG oslo_concurrency.lockutils [None req-ba83ce46-5fb2-4ed2-9823-3b94b19a71aa 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:12:50 compute-0 nova_compute[250018]: 2026-01-20 15:12:50.738 250022 DEBUG oslo_concurrency.processutils [None req-ba83ce46-5fb2-4ed2-9823-3b94b19a71aa 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:12:51 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:12:51 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:12:51 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:12:51.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:12:51 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:12:51 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/57227263' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:12:51 compute-0 nova_compute[250018]: 2026-01-20 15:12:51.221 250022 DEBUG oslo_concurrency.processutils [None req-ba83ce46-5fb2-4ed2-9823-3b94b19a71aa 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:12:51 compute-0 nova_compute[250018]: 2026-01-20 15:12:51.228 250022 DEBUG nova.compute.provider_tree [None req-ba83ce46-5fb2-4ed2-9823-3b94b19a71aa 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:12:51 compute-0 nova_compute[250018]: 2026-01-20 15:12:51.261 250022 DEBUG nova.scheduler.client.report [None req-ba83ce46-5fb2-4ed2-9823-3b94b19a71aa 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:12:51 compute-0 nova_compute[250018]: 2026-01-20 15:12:51.359 250022 DEBUG oslo_concurrency.lockutils [None req-ba83ce46-5fb2-4ed2-9823-3b94b19a71aa 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: held 0.723s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:12:51 compute-0 nova_compute[250018]: 2026-01-20 15:12:51.499 250022 INFO nova.scheduler.client.report [None req-ba83ce46-5fb2-4ed2-9823-3b94b19a71aa 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Deleted allocation for migration fb286472-618a-41fb-a80d-8579de56af31
Jan 20 15:12:51 compute-0 nova_compute[250018]: 2026-01-20 15:12:51.578 250022 DEBUG oslo_concurrency.lockutils [None req-ba83ce46-5fb2-4ed2-9823-3b94b19a71aa 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lock "f3b0f200-2f57-4c25-bdf4-8d17165642fe" "released" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: held 4.394s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:12:51 compute-0 ceph-mon[74360]: pgmap v2697: 321 pgs: 321 active+clean; 200 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 4.6 KiB/s wr, 267 op/s
Jan 20 15:12:51 compute-0 ceph-mon[74360]: osdmap e396: 3 total, 3 up, 3 in
Jan 20 15:12:51 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/57227263' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:12:51 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e396 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:12:52 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:12:52 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:12:52 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:12:52.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:12:52 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2699: 321 pgs: 2 active+clean+snaptrim, 319 active+clean; 200 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.9 MiB/s rd, 4.4 KiB/s wr, 273 op/s
Jan 20 15:12:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:12:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:12:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:12:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:12:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:12:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:12:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Optimize plan auto_2026-01-20_15:12:52
Jan 20 15:12:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 15:12:52 compute-0 ceph-mgr[74653]: [balancer INFO root] do_upmap
Jan 20 15:12:52 compute-0 ceph-mgr[74653]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'images', 'default.rgw.log', 'cephfs.cephfs.data', 'default.rgw.meta', 'vms', 'backups', 'volumes', '.rgw.root', 'default.rgw.control', '.mgr']
Jan 20 15:12:52 compute-0 ceph-mgr[74653]: [balancer INFO root] prepared 0/10 changes
Jan 20 15:12:52 compute-0 ceph-mon[74360]: pgmap v2699: 321 pgs: 2 active+clean+snaptrim, 319 active+clean; 200 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.9 MiB/s rd, 4.4 KiB/s wr, 273 op/s
Jan 20 15:12:53 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:12:53 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:12:53 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:12:53.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:12:53 compute-0 nova_compute[250018]: 2026-01-20 15:12:53.363 250022 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768921958.3619523, f3b0f200-2f57-4c25-bdf4-8d17165642fe => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:12:53 compute-0 nova_compute[250018]: 2026-01-20 15:12:53.363 250022 INFO nova.compute.manager [-] [instance: f3b0f200-2f57-4c25-bdf4-8d17165642fe] VM Stopped (Lifecycle Event)
Jan 20 15:12:53 compute-0 nova_compute[250018]: 2026-01-20 15:12:53.391 250022 DEBUG nova.compute.manager [None req-b69eba6e-d791-4631-8be6-f261819d2f37 - - - - - -] [instance: f3b0f200-2f57-4c25-bdf4-8d17165642fe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:12:53 compute-0 nova_compute[250018]: 2026-01-20 15:12:53.804 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:12:54 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:12:54 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:12:54 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:12:54.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:12:54 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2700: 321 pgs: 2 active+clean+snaptrim, 319 active+clean; 200 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 2.2 KiB/s wr, 146 op/s
Jan 20 15:12:55 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:12:55 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:12:55 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:12:55.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:12:55 compute-0 nova_compute[250018]: 2026-01-20 15:12:55.138 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:12:55 compute-0 ceph-mon[74360]: pgmap v2700: 321 pgs: 2 active+clean+snaptrim, 319 active+clean; 200 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 2.2 KiB/s wr, 146 op/s
Jan 20 15:12:56 compute-0 nova_compute[250018]: 2026-01-20 15:12:56.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:12:56 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:12:56 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:12:56 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:12:56.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:12:56 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2701: 321 pgs: 321 active+clean; 200 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.3 KiB/s wr, 130 op/s
Jan 20 15:12:56 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e396 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:12:56 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e396 do_prune osdmap full prune enabled
Jan 20 15:12:56 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e397 e397: 3 total, 3 up, 3 in
Jan 20 15:12:56 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e397: 3 total, 3 up, 3 in
Jan 20 15:12:57 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:12:57 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:12:57 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:12:57.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:12:57 compute-0 ceph-mon[74360]: pgmap v2701: 321 pgs: 321 active+clean; 200 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.3 KiB/s wr, 130 op/s
Jan 20 15:12:57 compute-0 ceph-mon[74360]: osdmap e397: 3 total, 3 up, 3 in
Jan 20 15:12:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 15:12:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 15:12:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 15:12:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 15:12:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 15:12:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 15:12:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 15:12:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 15:12:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 15:12:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 15:12:58 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:12:58 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:12:58 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:12:58.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:12:58 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2703: 321 pgs: 321 active+clean; 200 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 639 B/s wr, 65 op/s
Jan 20 15:12:58 compute-0 ceph-mon[74360]: pgmap v2703: 321 pgs: 321 active+clean; 200 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 639 B/s wr, 65 op/s
Jan 20 15:12:58 compute-0 nova_compute[250018]: 2026-01-20 15:12:58.806 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:12:59 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:12:59 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:12:59 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:12:59.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:12:59 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/270906769' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:13:00 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:13:00 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:13:00 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:13:00.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:13:00 compute-0 nova_compute[250018]: 2026-01-20 15:13:00.176 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:13:00 compute-0 sudo[357084]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:13:00 compute-0 sudo[357084]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:13:00 compute-0 sudo[357084]: pam_unix(sudo:session): session closed for user root
Jan 20 15:13:00 compute-0 sudo[357109]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:13:00 compute-0 sudo[357109]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:13:00 compute-0 sudo[357109]: pam_unix(sudo:session): session closed for user root
Jan 20 15:13:00 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2704: 321 pgs: 321 active+clean; 200 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 2.3 KiB/s wr, 90 op/s
Jan 20 15:13:01 compute-0 ceph-mon[74360]: pgmap v2704: 321 pgs: 321 active+clean; 200 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 2.3 KiB/s wr, 90 op/s
Jan 20 15:13:01 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/3721231144' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:13:01 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:13:01 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:13:01 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:13:01.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:13:01 compute-0 podman[357135]: 2026-01-20 15:13:01.470957373 +0000 UTC m=+0.059189208 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true)
Jan 20 15:13:01 compute-0 podman[357134]: 2026-01-20 15:13:01.506882202 +0000 UTC m=+0.095124807 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 20 15:13:01 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e397 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:13:02 compute-0 nova_compute[250018]: 2026-01-20 15:13:02.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:13:02 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/2748790219' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:13:02 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/992957130' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:13:02 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:13:02 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:13:02 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:13:02.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:13:02 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2705: 321 pgs: 321 active+clean; 200 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 15 KiB/s wr, 64 op/s
Jan 20 15:13:03 compute-0 nova_compute[250018]: 2026-01-20 15:13:03.045 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:13:03 compute-0 ceph-mon[74360]: pgmap v2705: 321 pgs: 321 active+clean; 200 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 15 KiB/s wr, 64 op/s
Jan 20 15:13:03 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:13:03 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:13:03 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:13:03.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:13:03 compute-0 nova_compute[250018]: 2026-01-20 15:13:03.809 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:13:04 compute-0 nova_compute[250018]: 2026-01-20 15:13:04.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:13:04 compute-0 nova_compute[250018]: 2026-01-20 15:13:04.050 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 15:13:04 compute-0 nova_compute[250018]: 2026-01-20 15:13:04.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:13:04 compute-0 nova_compute[250018]: 2026-01-20 15:13:04.071 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:13:04 compute-0 nova_compute[250018]: 2026-01-20 15:13:04.071 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:13:04 compute-0 nova_compute[250018]: 2026-01-20 15:13:04.072 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:13:04 compute-0 nova_compute[250018]: 2026-01-20 15:13:04.072 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 15:13:04 compute-0 nova_compute[250018]: 2026-01-20 15:13:04.072 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:13:04 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:13:04 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:13:04 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:13:04.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:13:04 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2706: 321 pgs: 321 active+clean; 202 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 28 KiB/s wr, 73 op/s
Jan 20 15:13:04 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:13:04 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2926783721' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:13:04 compute-0 nova_compute[250018]: 2026-01-20 15:13:04.552 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:13:04 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/2926783721' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:13:04 compute-0 nova_compute[250018]: 2026-01-20 15:13:04.701 250022 WARNING nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:13:04 compute-0 nova_compute[250018]: 2026-01-20 15:13:04.702 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4213MB free_disk=20.942672729492188GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 15:13:04 compute-0 nova_compute[250018]: 2026-01-20 15:13:04.702 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:13:04 compute-0 nova_compute[250018]: 2026-01-20 15:13:04.702 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:13:04 compute-0 nova_compute[250018]: 2026-01-20 15:13:04.776 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 15:13:04 compute-0 nova_compute[250018]: 2026-01-20 15:13:04.776 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 15:13:04 compute-0 nova_compute[250018]: 2026-01-20 15:13:04.795 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:13:04 compute-0 nova_compute[250018]: 2026-01-20 15:13:04.891 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:13:05 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:13:05 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:13:05 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:13:05.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:13:05 compute-0 nova_compute[250018]: 2026-01-20 15:13:05.178 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:13:05 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:13:05 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/187333900' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:13:05 compute-0 nova_compute[250018]: 2026-01-20 15:13:05.223 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:13:05 compute-0 nova_compute[250018]: 2026-01-20 15:13:05.229 250022 DEBUG nova.compute.provider_tree [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:13:05 compute-0 nova_compute[250018]: 2026-01-20 15:13:05.342 250022 DEBUG nova.scheduler.client.report [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:13:05 compute-0 nova_compute[250018]: 2026-01-20 15:13:05.423 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 15:13:05 compute-0 nova_compute[250018]: 2026-01-20 15:13:05.424 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.721s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:13:05 compute-0 ceph-mon[74360]: pgmap v2706: 321 pgs: 321 active+clean; 202 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 28 KiB/s wr, 73 op/s
Jan 20 15:13:05 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/187333900' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:13:06 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:13:06 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:13:06 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:13:06.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:13:06 compute-0 nova_compute[250018]: 2026-01-20 15:13:06.425 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:13:06 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2707: 321 pgs: 321 active+clean; 227 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 1.1 MiB/s wr, 104 op/s
Jan 20 15:13:06 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e397 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:13:07 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:13:07 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:13:07 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:13:07.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:13:07 compute-0 ceph-mon[74360]: pgmap v2707: 321 pgs: 321 active+clean; 227 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 1.1 MiB/s wr, 104 op/s
Jan 20 15:13:08 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:13:08 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:13:08 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:13:08.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:13:08 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2708: 321 pgs: 321 active+clean; 227 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 976 KiB/s wr, 89 op/s
Jan 20 15:13:08 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/2234610671' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:13:08 compute-0 nova_compute[250018]: 2026-01-20 15:13:08.718 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:13:08 compute-0 nova_compute[250018]: 2026-01-20 15:13:08.810 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:13:09 compute-0 nova_compute[250018]: 2026-01-20 15:13:09.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:13:09 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:13:09 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:13:09 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:13:09.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:13:09 compute-0 ceph-mon[74360]: pgmap v2708: 321 pgs: 321 active+clean; 227 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 976 KiB/s wr, 89 op/s
Jan 20 15:13:09 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/221289373' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:13:10 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:13:10 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:13:10 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:13:10.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:13:10 compute-0 nova_compute[250018]: 2026-01-20 15:13:10.180 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:13:10 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 15:13:10 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1463661681' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:13:10 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2709: 321 pgs: 321 active+clean; 198 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.8 MiB/s wr, 106 op/s
Jan 20 15:13:10 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/1463661681' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:13:10 compute-0 ceph-mon[74360]: pgmap v2709: 321 pgs: 321 active+clean; 198 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.8 MiB/s wr, 106 op/s
Jan 20 15:13:11 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:13:11 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:13:11 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:13:11.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:13:11 compute-0 nova_compute[250018]: 2026-01-20 15:13:11.316 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:13:11 compute-0 nova_compute[250018]: 2026-01-20 15:13:11.573 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:13:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 15:13:11 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e397 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:13:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:13:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 20 15:13:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:13:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0009402117648427322 of space, bias 1.0, pg target 0.28206352945281965 quantized to 32 (current 32)
Jan 20 15:13:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:13:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.003150154708728829 of space, bias 1.0, pg target 0.9450464126186487 quantized to 32 (current 32)
Jan 20 15:13:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:13:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 20 15:13:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:13:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 20 15:13:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:13:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 20 15:13:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:13:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 15:13:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:13:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 20 15:13:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:13:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 20 15:13:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:13:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 15:13:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:13:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 20 15:13:12 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:13:12 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:13:12 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:13:12.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:13:12 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2710: 321 pgs: 321 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 85 op/s
Jan 20 15:13:13 compute-0 nova_compute[250018]: 2026-01-20 15:13:13.045 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:13:13 compute-0 nova_compute[250018]: 2026-01-20 15:13:13.068 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:13:13 compute-0 nova_compute[250018]: 2026-01-20 15:13:13.068 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 15:13:13 compute-0 nova_compute[250018]: 2026-01-20 15:13:13.092 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 20 15:13:13 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:13:13 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:13:13 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:13:13.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:13:13 compute-0 ceph-mon[74360]: pgmap v2710: 321 pgs: 321 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 85 op/s
Jan 20 15:13:13 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/2024585432' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:13:13 compute-0 nova_compute[250018]: 2026-01-20 15:13:13.812 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:13:14 compute-0 ceph-osd[84815]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/lock/cls_lock.cc:291: Could not read list of current lockers off disk: (2) No such file or directory
Jan 20 15:13:14 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:13:14 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:13:14 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:13:14.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:13:14 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2711: 321 pgs: 321 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 52 KiB/s rd, 1.8 MiB/s wr, 75 op/s
Jan 20 15:13:14 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/2794425772' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:13:14 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/2794425772' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:13:15 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:13:15 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:13:15 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:13:15.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:13:15 compute-0 nova_compute[250018]: 2026-01-20 15:13:15.182 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:13:15 compute-0 ceph-mon[74360]: pgmap v2711: 321 pgs: 321 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 52 KiB/s rd, 1.8 MiB/s wr, 75 op/s
Jan 20 15:13:16 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:13:16 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:13:16 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:13:16.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:13:16 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2712: 321 pgs: 321 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.8 MiB/s wr, 94 op/s
Jan 20 15:13:16 compute-0 ceph-mon[74360]: pgmap v2712: 321 pgs: 321 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.8 MiB/s wr, 94 op/s
Jan 20 15:13:16 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e397 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:13:16 compute-0 sshd-session[357230]: Connection closed by authenticating user root 134.122.57.138 port 45820 [preauth]
Jan 20 15:13:17 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:13:17 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:13:17 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:13:17.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:13:18 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:13:18 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:13:18 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:13:18.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:13:18 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2713: 321 pgs: 321 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 900 KiB/s wr, 60 op/s
Jan 20 15:13:18 compute-0 nova_compute[250018]: 2026-01-20 15:13:18.814 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:13:19 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:13:19 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:13:19 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:13:19.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:13:19 compute-0 ceph-mon[74360]: pgmap v2713: 321 pgs: 321 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 900 KiB/s wr, 60 op/s
Jan 20 15:13:20 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:13:20 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:13:20 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:13:20.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:13:20 compute-0 nova_compute[250018]: 2026-01-20 15:13:20.244 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:13:20 compute-0 sudo[357235]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:13:20 compute-0 sudo[357235]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:13:20 compute-0 sudo[357235]: pam_unix(sudo:session): session closed for user root
Jan 20 15:13:20 compute-0 sudo[357260]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:13:20 compute-0 sudo[357260]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:13:20 compute-0 sudo[357260]: pam_unix(sudo:session): session closed for user root
Jan 20 15:13:20 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2714: 321 pgs: 321 active+clean; 189 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 1.8 MiB/s wr, 135 op/s
Jan 20 15:13:21 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:13:21 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:13:21 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:13:21.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:13:21 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e397 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:13:21 compute-0 ceph-mon[74360]: pgmap v2714: 321 pgs: 321 active+clean; 189 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 1.8 MiB/s wr, 135 op/s
Jan 20 15:13:22 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:13:22 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:13:22 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:13:22.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:13:22 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2715: 321 pgs: 321 active+clean; 213 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 1.8 MiB/s wr, 128 op/s
Jan 20 15:13:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:13:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:13:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:13:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:13:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:13:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:13:22 compute-0 ceph-mon[74360]: pgmap v2715: 321 pgs: 321 active+clean; 213 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 1.8 MiB/s wr, 128 op/s
Jan 20 15:13:23 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:13:23 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:13:23 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:13:23.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:13:23 compute-0 nova_compute[250018]: 2026-01-20 15:13:23.329 250022 DEBUG oslo_concurrency.lockutils [None req-9f95432c-3112-4209-a1b8-fcc9cd7720b0 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Acquiring lock "e79c0704-f95e-422f-9c25-ed35fca7cb7c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:13:23 compute-0 nova_compute[250018]: 2026-01-20 15:13:23.329 250022 DEBUG oslo_concurrency.lockutils [None req-9f95432c-3112-4209-a1b8-fcc9cd7720b0 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Lock "e79c0704-f95e-422f-9c25-ed35fca7cb7c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:13:23 compute-0 nova_compute[250018]: 2026-01-20 15:13:23.391 250022 DEBUG nova.compute.manager [None req-9f95432c-3112-4209-a1b8-fcc9cd7720b0 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] [instance: e79c0704-f95e-422f-9c25-ed35fca7cb7c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 15:13:23 compute-0 nova_compute[250018]: 2026-01-20 15:13:23.507 250022 DEBUG oslo_concurrency.lockutils [None req-9f95432c-3112-4209-a1b8-fcc9cd7720b0 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:13:23 compute-0 nova_compute[250018]: 2026-01-20 15:13:23.508 250022 DEBUG oslo_concurrency.lockutils [None req-9f95432c-3112-4209-a1b8-fcc9cd7720b0 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:13:23 compute-0 nova_compute[250018]: 2026-01-20 15:13:23.514 250022 DEBUG nova.virt.hardware [None req-9f95432c-3112-4209-a1b8-fcc9cd7720b0 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 15:13:23 compute-0 nova_compute[250018]: 2026-01-20 15:13:23.514 250022 INFO nova.compute.claims [None req-9f95432c-3112-4209-a1b8-fcc9cd7720b0 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] [instance: e79c0704-f95e-422f-9c25-ed35fca7cb7c] Claim successful on node compute-0.ctlplane.example.com
Jan 20 15:13:23 compute-0 nova_compute[250018]: 2026-01-20 15:13:23.612 250022 DEBUG oslo_concurrency.processutils [None req-9f95432c-3112-4209-a1b8-fcc9cd7720b0 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:13:23 compute-0 nova_compute[250018]: 2026-01-20 15:13:23.816 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:13:24 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:13:24 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4043402433' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:13:24 compute-0 nova_compute[250018]: 2026-01-20 15:13:24.039 250022 DEBUG oslo_concurrency.processutils [None req-9f95432c-3112-4209-a1b8-fcc9cd7720b0 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:13:24 compute-0 nova_compute[250018]: 2026-01-20 15:13:24.046 250022 DEBUG nova.compute.provider_tree [None req-9f95432c-3112-4209-a1b8-fcc9cd7720b0 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:13:24 compute-0 nova_compute[250018]: 2026-01-20 15:13:24.065 250022 DEBUG nova.scheduler.client.report [None req-9f95432c-3112-4209-a1b8-fcc9cd7720b0 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:13:24 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/4043402433' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:13:24 compute-0 nova_compute[250018]: 2026-01-20 15:13:24.096 250022 DEBUG oslo_concurrency.lockutils [None req-9f95432c-3112-4209-a1b8-fcc9cd7720b0 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.589s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:13:24 compute-0 nova_compute[250018]: 2026-01-20 15:13:24.097 250022 DEBUG nova.compute.manager [None req-9f95432c-3112-4209-a1b8-fcc9cd7720b0 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] [instance: e79c0704-f95e-422f-9c25-ed35fca7cb7c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 15:13:24 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:13:24 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:13:24 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:13:24.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:13:24 compute-0 nova_compute[250018]: 2026-01-20 15:13:24.196 250022 DEBUG nova.compute.manager [None req-9f95432c-3112-4209-a1b8-fcc9cd7720b0 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] [instance: e79c0704-f95e-422f-9c25-ed35fca7cb7c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 20 15:13:24 compute-0 nova_compute[250018]: 2026-01-20 15:13:24.197 250022 DEBUG nova.network.neutron [None req-9f95432c-3112-4209-a1b8-fcc9cd7720b0 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] [instance: e79c0704-f95e-422f-9c25-ed35fca7cb7c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 15:13:24 compute-0 nova_compute[250018]: 2026-01-20 15:13:24.248 250022 INFO nova.virt.libvirt.driver [None req-9f95432c-3112-4209-a1b8-fcc9cd7720b0 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] [instance: e79c0704-f95e-422f-9c25-ed35fca7cb7c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 15:13:24 compute-0 nova_compute[250018]: 2026-01-20 15:13:24.320 250022 DEBUG nova.compute.manager [None req-9f95432c-3112-4209-a1b8-fcc9cd7720b0 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] [instance: e79c0704-f95e-422f-9c25-ed35fca7cb7c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 15:13:24 compute-0 nova_compute[250018]: 2026-01-20 15:13:24.359 250022 INFO nova.virt.block_device [None req-9f95432c-3112-4209-a1b8-fcc9cd7720b0 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] [instance: e79c0704-f95e-422f-9c25-ed35fca7cb7c] Booting with volume c46398b2-4b96-4ab6-ac9b-93d7ea349e92 at /dev/vda
Jan 20 15:13:24 compute-0 nova_compute[250018]: 2026-01-20 15:13:24.435 250022 DEBUG nova.policy [None req-9f95432c-3112-4209-a1b8-fcc9cd7720b0 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e9cc4ce3e069479ba9c789b378a68a1d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'fff727019f86407498e83d7948d54962', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 20 15:13:24 compute-0 nova_compute[250018]: 2026-01-20 15:13:24.501 250022 DEBUG os_brick.utils [None req-9f95432c-3112-4209-a1b8-fcc9cd7720b0 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Jan 20 15:13:24 compute-0 nova_compute[250018]: 2026-01-20 15:13:24.502 268150 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:13:24 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2716: 321 pgs: 321 active+clean; 213 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 1.8 MiB/s wr, 119 op/s
Jan 20 15:13:24 compute-0 nova_compute[250018]: 2026-01-20 15:13:24.517 268150 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:13:24 compute-0 nova_compute[250018]: 2026-01-20 15:13:24.518 268150 DEBUG oslo.privsep.daemon [-] privsep: reply[55a7f241-7c33-4e93-8028-eb1ef4efee1b]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:13:24 compute-0 nova_compute[250018]: 2026-01-20 15:13:24.519 268150 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:13:24 compute-0 nova_compute[250018]: 2026-01-20 15:13:24.527 268150 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:13:24 compute-0 nova_compute[250018]: 2026-01-20 15:13:24.527 268150 DEBUG oslo.privsep.daemon [-] privsep: reply[2a88ef83-1e50-4a1d-859b-30487ab72345]: (4, ('InitiatorName=iqn.1994-05.com.redhat:228389a1f17e', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:13:24 compute-0 nova_compute[250018]: 2026-01-20 15:13:24.529 268150 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:13:24 compute-0 nova_compute[250018]: 2026-01-20 15:13:24.536 268150 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:13:24 compute-0 nova_compute[250018]: 2026-01-20 15:13:24.536 268150 DEBUG oslo.privsep.daemon [-] privsep: reply[98839f87-6c42-469e-b151-1e228d76e666]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:13:24 compute-0 nova_compute[250018]: 2026-01-20 15:13:24.537 268150 DEBUG oslo.privsep.daemon [-] privsep: reply[3820b2e5-d821-4c1a-8fdd-f600a7d345ab]: (4, '35085f33-1a27-41e3-805d-02c7ac6a1d7f') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:13:24 compute-0 nova_compute[250018]: 2026-01-20 15:13:24.537 250022 DEBUG oslo_concurrency.processutils [None req-9f95432c-3112-4209-a1b8-fcc9cd7720b0 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:13:24 compute-0 nova_compute[250018]: 2026-01-20 15:13:24.565 250022 DEBUG oslo_concurrency.processutils [None req-9f95432c-3112-4209-a1b8-fcc9cd7720b0 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] CMD "nvme version" returned: 0 in 0.028s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:13:24 compute-0 nova_compute[250018]: 2026-01-20 15:13:24.568 250022 DEBUG os_brick.initiator.connectors.lightos [None req-9f95432c-3112-4209-a1b8-fcc9cd7720b0 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Jan 20 15:13:24 compute-0 nova_compute[250018]: 2026-01-20 15:13:24.568 250022 DEBUG os_brick.initiator.connectors.lightos [None req-9f95432c-3112-4209-a1b8-fcc9cd7720b0 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Jan 20 15:13:24 compute-0 nova_compute[250018]: 2026-01-20 15:13:24.569 250022 DEBUG os_brick.initiator.connectors.lightos [None req-9f95432c-3112-4209-a1b8-fcc9cd7720b0 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Jan 20 15:13:24 compute-0 nova_compute[250018]: 2026-01-20 15:13:24.569 250022 DEBUG os_brick.utils [None req-9f95432c-3112-4209-a1b8-fcc9cd7720b0 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] <== get_connector_properties: return (68ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:228389a1f17e', 'do_local_attach': False, 'nvme_hostid': '5350774e-8b5e-4dba-80a9-92d405981c1d', 'system uuid': '35085f33-1a27-41e3-805d-02c7ac6a1d7f', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Jan 20 15:13:24 compute-0 nova_compute[250018]: 2026-01-20 15:13:24.570 250022 DEBUG nova.virt.block_device [None req-9f95432c-3112-4209-a1b8-fcc9cd7720b0 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] [instance: e79c0704-f95e-422f-9c25-ed35fca7cb7c] Updating existing volume attachment record: e7dd1bf8-e492-41ff-9f83-dc8c2cd2340c _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Jan 20 15:13:25 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:13:25 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:13:25 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:13:25.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:13:25 compute-0 nova_compute[250018]: 2026-01-20 15:13:25.281 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:13:25 compute-0 ceph-mon[74360]: pgmap v2716: 321 pgs: 321 active+clean; 213 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 1.8 MiB/s wr, 119 op/s
Jan 20 15:13:25 compute-0 nova_compute[250018]: 2026-01-20 15:13:25.425 250022 DEBUG nova.network.neutron [None req-9f95432c-3112-4209-a1b8-fcc9cd7720b0 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] [instance: e79c0704-f95e-422f-9c25-ed35fca7cb7c] Successfully created port: 1421cc5f-9a45-447a-bb3a-3f13dcc5a309 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 20 15:13:25 compute-0 nova_compute[250018]: 2026-01-20 15:13:25.724 250022 DEBUG nova.compute.manager [None req-9f95432c-3112-4209-a1b8-fcc9cd7720b0 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] [instance: e79c0704-f95e-422f-9c25-ed35fca7cb7c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 15:13:25 compute-0 nova_compute[250018]: 2026-01-20 15:13:25.726 250022 DEBUG nova.virt.libvirt.driver [None req-9f95432c-3112-4209-a1b8-fcc9cd7720b0 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] [instance: e79c0704-f95e-422f-9c25-ed35fca7cb7c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 15:13:25 compute-0 nova_compute[250018]: 2026-01-20 15:13:25.726 250022 INFO nova.virt.libvirt.driver [None req-9f95432c-3112-4209-a1b8-fcc9cd7720b0 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] [instance: e79c0704-f95e-422f-9c25-ed35fca7cb7c] Creating image(s)
Jan 20 15:13:25 compute-0 nova_compute[250018]: 2026-01-20 15:13:25.727 250022 DEBUG nova.virt.libvirt.driver [None req-9f95432c-3112-4209-a1b8-fcc9cd7720b0 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] [instance: e79c0704-f95e-422f-9c25-ed35fca7cb7c] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Jan 20 15:13:25 compute-0 nova_compute[250018]: 2026-01-20 15:13:25.727 250022 DEBUG nova.virt.libvirt.driver [None req-9f95432c-3112-4209-a1b8-fcc9cd7720b0 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] [instance: e79c0704-f95e-422f-9c25-ed35fca7cb7c] Ensure instance console log exists: /var/lib/nova/instances/e79c0704-f95e-422f-9c25-ed35fca7cb7c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 15:13:25 compute-0 nova_compute[250018]: 2026-01-20 15:13:25.727 250022 DEBUG oslo_concurrency.lockutils [None req-9f95432c-3112-4209-a1b8-fcc9cd7720b0 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:13:25 compute-0 nova_compute[250018]: 2026-01-20 15:13:25.728 250022 DEBUG oslo_concurrency.lockutils [None req-9f95432c-3112-4209-a1b8-fcc9cd7720b0 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:13:25 compute-0 nova_compute[250018]: 2026-01-20 15:13:25.728 250022 DEBUG oslo_concurrency.lockutils [None req-9f95432c-3112-4209-a1b8-fcc9cd7720b0 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:13:26 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:13:26 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:13:26 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:13:26.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:13:26 compute-0 nova_compute[250018]: 2026-01-20 15:13:26.218 250022 DEBUG nova.network.neutron [None req-9f95432c-3112-4209-a1b8-fcc9cd7720b0 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] [instance: e79c0704-f95e-422f-9c25-ed35fca7cb7c] Successfully updated port: 1421cc5f-9a45-447a-bb3a-3f13dcc5a309 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 20 15:13:26 compute-0 nova_compute[250018]: 2026-01-20 15:13:26.245 250022 DEBUG oslo_concurrency.lockutils [None req-9f95432c-3112-4209-a1b8-fcc9cd7720b0 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Acquiring lock "refresh_cache-e79c0704-f95e-422f-9c25-ed35fca7cb7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:13:26 compute-0 nova_compute[250018]: 2026-01-20 15:13:26.245 250022 DEBUG oslo_concurrency.lockutils [None req-9f95432c-3112-4209-a1b8-fcc9cd7720b0 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Acquired lock "refresh_cache-e79c0704-f95e-422f-9c25-ed35fca7cb7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:13:26 compute-0 nova_compute[250018]: 2026-01-20 15:13:26.245 250022 DEBUG nova.network.neutron [None req-9f95432c-3112-4209-a1b8-fcc9cd7720b0 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] [instance: e79c0704-f95e-422f-9c25-ed35fca7cb7c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 15:13:26 compute-0 nova_compute[250018]: 2026-01-20 15:13:26.320 250022 DEBUG nova.compute.manager [req-dc519264-e9c8-4136-b635-d35a8628f147 req-d793afd0-4594-43a6-99ed-7c35482b52dc 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: e79c0704-f95e-422f-9c25-ed35fca7cb7c] Received event network-changed-1421cc5f-9a45-447a-bb3a-3f13dcc5a309 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:13:26 compute-0 nova_compute[250018]: 2026-01-20 15:13:26.320 250022 DEBUG nova.compute.manager [req-dc519264-e9c8-4136-b635-d35a8628f147 req-d793afd0-4594-43a6-99ed-7c35482b52dc 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: e79c0704-f95e-422f-9c25-ed35fca7cb7c] Refreshing instance network info cache due to event network-changed-1421cc5f-9a45-447a-bb3a-3f13dcc5a309. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 15:13:26 compute-0 nova_compute[250018]: 2026-01-20 15:13:26.320 250022 DEBUG oslo_concurrency.lockutils [req-dc519264-e9c8-4136-b635-d35a8628f147 req-d793afd0-4594-43a6-99ed-7c35482b52dc 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-e79c0704-f95e-422f-9c25-ed35fca7cb7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:13:26 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/3326666623' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:13:26 compute-0 nova_compute[250018]: 2026-01-20 15:13:26.464 250022 DEBUG nova.network.neutron [None req-9f95432c-3112-4209-a1b8-fcc9cd7720b0 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] [instance: e79c0704-f95e-422f-9c25-ed35fca7cb7c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 15:13:26 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2717: 321 pgs: 321 active+clean; 213 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 1.8 MiB/s wr, 113 op/s
Jan 20 15:13:26 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e397 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:13:27 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:13:27 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:13:27 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:13:27.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:13:27 compute-0 ceph-mon[74360]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #138. Immutable memtables: 0.
Jan 20 15:13:27 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:13:27.400462) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 15:13:27 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:856] [default] [JOB 83] Flushing memtable with next log file: 138
Jan 20 15:13:27 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768922007400513, "job": 83, "event": "flush_started", "num_memtables": 1, "num_entries": 714, "num_deletes": 254, "total_data_size": 850728, "memory_usage": 863848, "flush_reason": "Manual Compaction"}
Jan 20 15:13:27 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:885] [default] [JOB 83] Level-0 flush table #139: started
Jan 20 15:13:27 compute-0 ceph-mon[74360]: pgmap v2717: 321 pgs: 321 active+clean; 213 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 1.8 MiB/s wr, 113 op/s
Jan 20 15:13:27 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768922007407799, "cf_name": "default", "job": 83, "event": "table_file_creation", "file_number": 139, "file_size": 840017, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 60882, "largest_seqno": 61595, "table_properties": {"data_size": 836370, "index_size": 1426, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 8828, "raw_average_key_size": 19, "raw_value_size": 828875, "raw_average_value_size": 1871, "num_data_blocks": 63, "num_entries": 443, "num_filter_entries": 443, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768921962, "oldest_key_time": 1768921962, "file_creation_time": 1768922007, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1688fb6f-f397-4aac-a0e8-874ff91d97a7", "db_session_id": "5V3N2TVXYZBCXP55EZHK", "orig_file_number": 139, "seqno_to_time_mapping": "N/A"}}
Jan 20 15:13:27 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 83] Flush lasted 7369 microseconds, and 2585 cpu microseconds.
Jan 20 15:13:27 compute-0 ceph-mon[74360]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 15:13:27 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:13:27.407838) [db/flush_job.cc:967] [default] [JOB 83] Level-0 flush table #139: 840017 bytes OK
Jan 20 15:13:27 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:13:27.407854) [db/memtable_list.cc:519] [default] Level-0 commit table #139 started
Jan 20 15:13:27 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:13:27.409643) [db/memtable_list.cc:722] [default] Level-0 commit table #139: memtable #1 done
Jan 20 15:13:27 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:13:27.409657) EVENT_LOG_v1 {"time_micros": 1768922007409652, "job": 83, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 15:13:27 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:13:27.409670) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 15:13:27 compute-0 ceph-mon[74360]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 83] Try to delete WAL files size 847048, prev total WAL file size 847048, number of live WAL files 2.
Jan 20 15:13:27 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000135.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:13:27 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:13:27.410159) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035353232' seq:72057594037927935, type:22 .. '7061786F730035373734' seq:0, type:0; will stop at (end)
Jan 20 15:13:27 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 84] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 15:13:27 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 83 Base level 0, inputs: [139(820KB)], [137(10MB)]
Jan 20 15:13:27 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768922007410233, "job": 84, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [139], "files_L6": [137], "score": -1, "input_data_size": 11826323, "oldest_snapshot_seqno": -1}
Jan 20 15:13:27 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 84] Generated table #140: 8766 keys, 9951420 bytes, temperature: kUnknown
Jan 20 15:13:27 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768922007500484, "cf_name": "default", "job": 84, "event": "table_file_creation", "file_number": 140, "file_size": 9951420, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9896080, "index_size": 32315, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21957, "raw_key_size": 231962, "raw_average_key_size": 26, "raw_value_size": 9743152, "raw_average_value_size": 1111, "num_data_blocks": 1224, "num_entries": 8766, "num_filter_entries": 8766, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768917305, "oldest_key_time": 0, "file_creation_time": 1768922007, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1688fb6f-f397-4aac-a0e8-874ff91d97a7", "db_session_id": "5V3N2TVXYZBCXP55EZHK", "orig_file_number": 140, "seqno_to_time_mapping": "N/A"}}
Jan 20 15:13:27 compute-0 ceph-mon[74360]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 15:13:27 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:13:27.500734) [db/compaction/compaction_job.cc:1663] [default] [JOB 84] Compacted 1@0 + 1@6 files to L6 => 9951420 bytes
Jan 20 15:13:27 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:13:27.502847) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 130.9 rd, 110.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 10.5 +0.0 blob) out(9.5 +0.0 blob), read-write-amplify(25.9) write-amplify(11.8) OK, records in: 9288, records dropped: 522 output_compression: NoCompression
Jan 20 15:13:27 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:13:27.502862) EVENT_LOG_v1 {"time_micros": 1768922007502855, "job": 84, "event": "compaction_finished", "compaction_time_micros": 90322, "compaction_time_cpu_micros": 48552, "output_level": 6, "num_output_files": 1, "total_output_size": 9951420, "num_input_records": 9288, "num_output_records": 8766, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 15:13:27 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000139.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:13:27 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768922007503075, "job": 84, "event": "table_file_deletion", "file_number": 139}
Jan 20 15:13:27 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000137.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:13:27 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768922007504950, "job": 84, "event": "table_file_deletion", "file_number": 137}
Jan 20 15:13:27 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:13:27.410066) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:13:27 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:13:27.505092) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:13:27 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:13:27.505104) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:13:27 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:13:27.505108) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:13:27 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:13:27.505122) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:13:27 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:13:27.505127) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:13:27 compute-0 nova_compute[250018]: 2026-01-20 15:13:27.759 250022 DEBUG nova.network.neutron [None req-9f95432c-3112-4209-a1b8-fcc9cd7720b0 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] [instance: e79c0704-f95e-422f-9c25-ed35fca7cb7c] Updating instance_info_cache with network_info: [{"id": "1421cc5f-9a45-447a-bb3a-3f13dcc5a309", "address": "fa:16:3e:3b:1e:c2", "network": {"id": "c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1423306001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fff727019f86407498e83d7948d54962", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1421cc5f-9a", "ovs_interfaceid": "1421cc5f-9a45-447a-bb3a-3f13dcc5a309", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:13:28 compute-0 nova_compute[250018]: 2026-01-20 15:13:28.008 250022 DEBUG oslo_concurrency.lockutils [None req-9f95432c-3112-4209-a1b8-fcc9cd7720b0 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Releasing lock "refresh_cache-e79c0704-f95e-422f-9c25-ed35fca7cb7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:13:28 compute-0 nova_compute[250018]: 2026-01-20 15:13:28.009 250022 DEBUG nova.compute.manager [None req-9f95432c-3112-4209-a1b8-fcc9cd7720b0 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] [instance: e79c0704-f95e-422f-9c25-ed35fca7cb7c] Instance network_info: |[{"id": "1421cc5f-9a45-447a-bb3a-3f13dcc5a309", "address": "fa:16:3e:3b:1e:c2", "network": {"id": "c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1423306001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fff727019f86407498e83d7948d54962", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1421cc5f-9a", "ovs_interfaceid": "1421cc5f-9a45-447a-bb3a-3f13dcc5a309", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 20 15:13:28 compute-0 nova_compute[250018]: 2026-01-20 15:13:28.010 250022 DEBUG oslo_concurrency.lockutils [req-dc519264-e9c8-4136-b635-d35a8628f147 req-d793afd0-4594-43a6-99ed-7c35482b52dc 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-e79c0704-f95e-422f-9c25-ed35fca7cb7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:13:28 compute-0 nova_compute[250018]: 2026-01-20 15:13:28.011 250022 DEBUG nova.network.neutron [req-dc519264-e9c8-4136-b635-d35a8628f147 req-d793afd0-4594-43a6-99ed-7c35482b52dc 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: e79c0704-f95e-422f-9c25-ed35fca7cb7c] Refreshing network info cache for port 1421cc5f-9a45-447a-bb3a-3f13dcc5a309 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 15:13:28 compute-0 nova_compute[250018]: 2026-01-20 15:13:28.016 250022 DEBUG nova.virt.libvirt.driver [None req-9f95432c-3112-4209-a1b8-fcc9cd7720b0 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] [instance: e79c0704-f95e-422f-9c25-ed35fca7cb7c] Start _get_guest_xml network_info=[{"id": "1421cc5f-9a45-447a-bb3a-3f13dcc5a309", "address": "fa:16:3e:3b:1e:c2", "network": {"id": "c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1423306001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fff727019f86407498e83d7948d54962", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1421cc5f-9a", "ovs_interfaceid": "1421cc5f-9a45-447a-bb3a-3f13dcc5a309", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': 
'/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'mount_device': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'attachment_id': 'e7dd1bf8-e492-41ff-9f83-dc8c2cd2340c', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-c46398b2-4b96-4ab6-ac9b-93d7ea349e92', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'c46398b2-4b96-4ab6-ac9b-93d7ea349e92', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': 'e79c0704-f95e-422f-9c25-ed35fca7cb7c', 'attached_at': '', 'detached_at': '', 'volume_id': 'c46398b2-4b96-4ab6-ac9b-93d7ea349e92', 'serial': 'c46398b2-4b96-4ab6-ac9b-93d7ea349e92', 'multiattach': True}, 'disk_bus': 'virtio', 'guest_format': None, 'delete_on_termination': False, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 15:13:28 compute-0 nova_compute[250018]: 2026-01-20 15:13:28.022 250022 WARNING nova.virt.libvirt.driver [None req-9f95432c-3112-4209-a1b8-fcc9cd7720b0 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:13:28 compute-0 nova_compute[250018]: 2026-01-20 15:13:28.027 250022 DEBUG nova.virt.libvirt.host [None req-9f95432c-3112-4209-a1b8-fcc9cd7720b0 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 15:13:28 compute-0 nova_compute[250018]: 2026-01-20 15:13:28.027 250022 DEBUG nova.virt.libvirt.host [None req-9f95432c-3112-4209-a1b8-fcc9cd7720b0 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 15:13:28 compute-0 nova_compute[250018]: 2026-01-20 15:13:28.030 250022 DEBUG nova.virt.libvirt.host [None req-9f95432c-3112-4209-a1b8-fcc9cd7720b0 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 15:13:28 compute-0 nova_compute[250018]: 2026-01-20 15:13:28.030 250022 DEBUG nova.virt.libvirt.host [None req-9f95432c-3112-4209-a1b8-fcc9cd7720b0 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 15:13:28 compute-0 nova_compute[250018]: 2026-01-20 15:13:28.031 250022 DEBUG nova.virt.libvirt.driver [None req-9f95432c-3112-4209-a1b8-fcc9cd7720b0 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 15:13:28 compute-0 nova_compute[250018]: 2026-01-20 15:13:28.031 250022 DEBUG nova.virt.hardware [None req-9f95432c-3112-4209-a1b8-fcc9cd7720b0 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 15:13:28 compute-0 nova_compute[250018]: 2026-01-20 15:13:28.032 250022 DEBUG nova.virt.hardware [None req-9f95432c-3112-4209-a1b8-fcc9cd7720b0 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 15:13:28 compute-0 nova_compute[250018]: 2026-01-20 15:13:28.032 250022 DEBUG nova.virt.hardware [None req-9f95432c-3112-4209-a1b8-fcc9cd7720b0 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 15:13:28 compute-0 nova_compute[250018]: 2026-01-20 15:13:28.032 250022 DEBUG nova.virt.hardware [None req-9f95432c-3112-4209-a1b8-fcc9cd7720b0 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 15:13:28 compute-0 nova_compute[250018]: 2026-01-20 15:13:28.033 250022 DEBUG nova.virt.hardware [None req-9f95432c-3112-4209-a1b8-fcc9cd7720b0 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 15:13:28 compute-0 nova_compute[250018]: 2026-01-20 15:13:28.033 250022 DEBUG nova.virt.hardware [None req-9f95432c-3112-4209-a1b8-fcc9cd7720b0 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 15:13:28 compute-0 nova_compute[250018]: 2026-01-20 15:13:28.033 250022 DEBUG nova.virt.hardware [None req-9f95432c-3112-4209-a1b8-fcc9cd7720b0 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 15:13:28 compute-0 nova_compute[250018]: 2026-01-20 15:13:28.033 250022 DEBUG nova.virt.hardware [None req-9f95432c-3112-4209-a1b8-fcc9cd7720b0 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 15:13:28 compute-0 nova_compute[250018]: 2026-01-20 15:13:28.034 250022 DEBUG nova.virt.hardware [None req-9f95432c-3112-4209-a1b8-fcc9cd7720b0 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 15:13:28 compute-0 nova_compute[250018]: 2026-01-20 15:13:28.034 250022 DEBUG nova.virt.hardware [None req-9f95432c-3112-4209-a1b8-fcc9cd7720b0 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 15:13:28 compute-0 nova_compute[250018]: 2026-01-20 15:13:28.034 250022 DEBUG nova.virt.hardware [None req-9f95432c-3112-4209-a1b8-fcc9cd7720b0 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 15:13:28 compute-0 nova_compute[250018]: 2026-01-20 15:13:28.064 250022 DEBUG nova.storage.rbd_utils [None req-9f95432c-3112-4209-a1b8-fcc9cd7720b0 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] rbd image e79c0704-f95e-422f-9c25-ed35fca7cb7c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:13:28 compute-0 nova_compute[250018]: 2026-01-20 15:13:28.067 250022 DEBUG oslo_concurrency.processutils [None req-9f95432c-3112-4209-a1b8-fcc9cd7720b0 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:13:28 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:13:28 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:13:28 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:13:28.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:13:28 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/1030759791' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:13:28 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 15:13:28 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4028771629' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:13:28 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2718: 321 pgs: 321 active+clean; 213 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 1.8 MiB/s wr, 87 op/s
Jan 20 15:13:28 compute-0 nova_compute[250018]: 2026-01-20 15:13:28.511 250022 DEBUG oslo_concurrency.processutils [None req-9f95432c-3112-4209-a1b8-fcc9cd7720b0 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:13:28 compute-0 nova_compute[250018]: 2026-01-20 15:13:28.587 250022 DEBUG nova.virt.libvirt.vif [None req-9f95432c-3112-4209-a1b8-fcc9cd7720b0 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T15:13:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-AttachVolumeMultiAttachTest-server-102275358',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachvolumemultiattachtest-server-102275358',id=182,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fff727019f86407498e83d7948d54962',ramdisk_id='',reservation_id='r-hnemwa3f',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-AttachVolumeMultiAttachTest-418194625',owner_user_name='tempest-AttachVolumeMultiAttachTest-418194625-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-
20T15:13:24Z,user_data=None,user_id='e9cc4ce3e069479ba9c789b378a68a1d',uuid=e79c0704-f95e-422f-9c25-ed35fca7cb7c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1421cc5f-9a45-447a-bb3a-3f13dcc5a309", "address": "fa:16:3e:3b:1e:c2", "network": {"id": "c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1423306001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fff727019f86407498e83d7948d54962", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1421cc5f-9a", "ovs_interfaceid": "1421cc5f-9a45-447a-bb3a-3f13dcc5a309", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 15:13:28 compute-0 nova_compute[250018]: 2026-01-20 15:13:28.588 250022 DEBUG nova.network.os_vif_util [None req-9f95432c-3112-4209-a1b8-fcc9cd7720b0 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Converting VIF {"id": "1421cc5f-9a45-447a-bb3a-3f13dcc5a309", "address": "fa:16:3e:3b:1e:c2", "network": {"id": "c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1423306001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fff727019f86407498e83d7948d54962", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1421cc5f-9a", "ovs_interfaceid": "1421cc5f-9a45-447a-bb3a-3f13dcc5a309", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:13:28 compute-0 nova_compute[250018]: 2026-01-20 15:13:28.588 250022 DEBUG nova.network.os_vif_util [None req-9f95432c-3112-4209-a1b8-fcc9cd7720b0 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3b:1e:c2,bridge_name='br-int',has_traffic_filtering=True,id=1421cc5f-9a45-447a-bb3a-3f13dcc5a309,network=Network(c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1421cc5f-9a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:13:28 compute-0 nova_compute[250018]: 2026-01-20 15:13:28.589 250022 DEBUG nova.objects.instance [None req-9f95432c-3112-4209-a1b8-fcc9cd7720b0 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Lazy-loading 'pci_devices' on Instance uuid e79c0704-f95e-422f-9c25-ed35fca7cb7c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:13:28 compute-0 nova_compute[250018]: 2026-01-20 15:13:28.607 250022 DEBUG nova.virt.libvirt.driver [None req-9f95432c-3112-4209-a1b8-fcc9cd7720b0 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] [instance: e79c0704-f95e-422f-9c25-ed35fca7cb7c] End _get_guest_xml xml=<domain type="kvm">
Jan 20 15:13:28 compute-0 nova_compute[250018]:   <uuid>e79c0704-f95e-422f-9c25-ed35fca7cb7c</uuid>
Jan 20 15:13:28 compute-0 nova_compute[250018]:   <name>instance-000000b6</name>
Jan 20 15:13:28 compute-0 nova_compute[250018]:   <memory>131072</memory>
Jan 20 15:13:28 compute-0 nova_compute[250018]:   <vcpu>1</vcpu>
Jan 20 15:13:28 compute-0 nova_compute[250018]:   <metadata>
Jan 20 15:13:28 compute-0 nova_compute[250018]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 15:13:28 compute-0 nova_compute[250018]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 15:13:28 compute-0 nova_compute[250018]:       <nova:name>tempest-AttachVolumeMultiAttachTest-server-102275358</nova:name>
Jan 20 15:13:28 compute-0 nova_compute[250018]:       <nova:creationTime>2026-01-20 15:13:28</nova:creationTime>
Jan 20 15:13:28 compute-0 nova_compute[250018]:       <nova:flavor name="m1.nano">
Jan 20 15:13:28 compute-0 nova_compute[250018]:         <nova:memory>128</nova:memory>
Jan 20 15:13:28 compute-0 nova_compute[250018]:         <nova:disk>1</nova:disk>
Jan 20 15:13:28 compute-0 nova_compute[250018]:         <nova:swap>0</nova:swap>
Jan 20 15:13:28 compute-0 nova_compute[250018]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 15:13:28 compute-0 nova_compute[250018]:         <nova:vcpus>1</nova:vcpus>
Jan 20 15:13:28 compute-0 nova_compute[250018]:       </nova:flavor>
Jan 20 15:13:28 compute-0 nova_compute[250018]:       <nova:owner>
Jan 20 15:13:28 compute-0 nova_compute[250018]:         <nova:user uuid="e9cc4ce3e069479ba9c789b378a68a1d">tempest-AttachVolumeMultiAttachTest-418194625-project-member</nova:user>
Jan 20 15:13:28 compute-0 nova_compute[250018]:         <nova:project uuid="fff727019f86407498e83d7948d54962">tempest-AttachVolumeMultiAttachTest-418194625</nova:project>
Jan 20 15:13:28 compute-0 nova_compute[250018]:       </nova:owner>
Jan 20 15:13:28 compute-0 nova_compute[250018]:       <nova:ports>
Jan 20 15:13:28 compute-0 nova_compute[250018]:         <nova:port uuid="1421cc5f-9a45-447a-bb3a-3f13dcc5a309">
Jan 20 15:13:28 compute-0 nova_compute[250018]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Jan 20 15:13:28 compute-0 nova_compute[250018]:         </nova:port>
Jan 20 15:13:28 compute-0 nova_compute[250018]:       </nova:ports>
Jan 20 15:13:28 compute-0 nova_compute[250018]:     </nova:instance>
Jan 20 15:13:28 compute-0 nova_compute[250018]:   </metadata>
Jan 20 15:13:28 compute-0 nova_compute[250018]:   <sysinfo type="smbios">
Jan 20 15:13:28 compute-0 nova_compute[250018]:     <system>
Jan 20 15:13:28 compute-0 nova_compute[250018]:       <entry name="manufacturer">RDO</entry>
Jan 20 15:13:28 compute-0 nova_compute[250018]:       <entry name="product">OpenStack Compute</entry>
Jan 20 15:13:28 compute-0 nova_compute[250018]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 15:13:28 compute-0 nova_compute[250018]:       <entry name="serial">e79c0704-f95e-422f-9c25-ed35fca7cb7c</entry>
Jan 20 15:13:28 compute-0 nova_compute[250018]:       <entry name="uuid">e79c0704-f95e-422f-9c25-ed35fca7cb7c</entry>
Jan 20 15:13:28 compute-0 nova_compute[250018]:       <entry name="family">Virtual Machine</entry>
Jan 20 15:13:28 compute-0 nova_compute[250018]:     </system>
Jan 20 15:13:28 compute-0 nova_compute[250018]:   </sysinfo>
Jan 20 15:13:28 compute-0 nova_compute[250018]:   <os>
Jan 20 15:13:28 compute-0 nova_compute[250018]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 15:13:28 compute-0 nova_compute[250018]:     <boot dev="hd"/>
Jan 20 15:13:28 compute-0 nova_compute[250018]:     <smbios mode="sysinfo"/>
Jan 20 15:13:28 compute-0 nova_compute[250018]:   </os>
Jan 20 15:13:28 compute-0 nova_compute[250018]:   <features>
Jan 20 15:13:28 compute-0 nova_compute[250018]:     <acpi/>
Jan 20 15:13:28 compute-0 nova_compute[250018]:     <apic/>
Jan 20 15:13:28 compute-0 nova_compute[250018]:     <vmcoreinfo/>
Jan 20 15:13:28 compute-0 nova_compute[250018]:   </features>
Jan 20 15:13:28 compute-0 nova_compute[250018]:   <clock offset="utc">
Jan 20 15:13:28 compute-0 nova_compute[250018]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 15:13:28 compute-0 nova_compute[250018]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 15:13:28 compute-0 nova_compute[250018]:     <timer name="hpet" present="no"/>
Jan 20 15:13:28 compute-0 nova_compute[250018]:   </clock>
Jan 20 15:13:28 compute-0 nova_compute[250018]:   <cpu mode="custom" match="exact">
Jan 20 15:13:28 compute-0 nova_compute[250018]:     <model>Nehalem</model>
Jan 20 15:13:28 compute-0 nova_compute[250018]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 15:13:28 compute-0 nova_compute[250018]:   </cpu>
Jan 20 15:13:28 compute-0 nova_compute[250018]:   <devices>
Jan 20 15:13:28 compute-0 nova_compute[250018]:     <disk type="network" device="cdrom">
Jan 20 15:13:28 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 15:13:28 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/e79c0704-f95e-422f-9c25-ed35fca7cb7c_disk.config">
Jan 20 15:13:28 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 15:13:28 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 15:13:28 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 15:13:28 compute-0 nova_compute[250018]:       </source>
Jan 20 15:13:28 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 15:13:28 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 15:13:28 compute-0 nova_compute[250018]:       </auth>
Jan 20 15:13:28 compute-0 nova_compute[250018]:       <target dev="sda" bus="sata"/>
Jan 20 15:13:28 compute-0 nova_compute[250018]:     </disk>
Jan 20 15:13:28 compute-0 nova_compute[250018]:     <disk type="network" device="disk">
Jan 20 15:13:28 compute-0 nova_compute[250018]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 20 15:13:28 compute-0 nova_compute[250018]:       <source protocol="rbd" name="volumes/volume-c46398b2-4b96-4ab6-ac9b-93d7ea349e92">
Jan 20 15:13:28 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 15:13:28 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 15:13:28 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 15:13:28 compute-0 nova_compute[250018]:       </source>
Jan 20 15:13:28 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 15:13:28 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 15:13:28 compute-0 nova_compute[250018]:       </auth>
Jan 20 15:13:28 compute-0 nova_compute[250018]:       <target dev="vda" bus="virtio"/>
Jan 20 15:13:28 compute-0 nova_compute[250018]:       <serial>c46398b2-4b96-4ab6-ac9b-93d7ea349e92</serial>
Jan 20 15:13:28 compute-0 nova_compute[250018]:       <shareable/>
Jan 20 15:13:28 compute-0 nova_compute[250018]:     </disk>
Jan 20 15:13:28 compute-0 nova_compute[250018]:     <interface type="ethernet">
Jan 20 15:13:28 compute-0 nova_compute[250018]:       <mac address="fa:16:3e:3b:1e:c2"/>
Jan 20 15:13:28 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 15:13:28 compute-0 nova_compute[250018]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 15:13:28 compute-0 nova_compute[250018]:       <mtu size="1442"/>
Jan 20 15:13:28 compute-0 nova_compute[250018]:       <target dev="tap1421cc5f-9a"/>
Jan 20 15:13:28 compute-0 nova_compute[250018]:     </interface>
Jan 20 15:13:28 compute-0 nova_compute[250018]:     <serial type="pty">
Jan 20 15:13:28 compute-0 nova_compute[250018]:       <log file="/var/lib/nova/instances/e79c0704-f95e-422f-9c25-ed35fca7cb7c/console.log" append="off"/>
Jan 20 15:13:28 compute-0 nova_compute[250018]:     </serial>
Jan 20 15:13:28 compute-0 nova_compute[250018]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 15:13:28 compute-0 nova_compute[250018]:     <video>
Jan 20 15:13:28 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 15:13:28 compute-0 nova_compute[250018]:     </video>
Jan 20 15:13:28 compute-0 nova_compute[250018]:     <input type="tablet" bus="usb"/>
Jan 20 15:13:28 compute-0 nova_compute[250018]:     <rng model="virtio">
Jan 20 15:13:28 compute-0 nova_compute[250018]:       <backend model="random">/dev/urandom</backend>
Jan 20 15:13:28 compute-0 nova_compute[250018]:     </rng>
Jan 20 15:13:28 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root"/>
Jan 20 15:13:28 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:13:28 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:13:28 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:13:28 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:13:28 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:13:28 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:13:28 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:13:28 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:13:28 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:13:28 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:13:28 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:13:28 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:13:28 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:13:28 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:13:28 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:13:28 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:13:28 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:13:28 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:13:28 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:13:28 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:13:28 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:13:28 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:13:28 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:13:28 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:13:28 compute-0 nova_compute[250018]:     <controller type="usb" index="0"/>
Jan 20 15:13:28 compute-0 nova_compute[250018]:     <memballoon model="virtio">
Jan 20 15:13:28 compute-0 nova_compute[250018]:       <stats period="10"/>
Jan 20 15:13:28 compute-0 nova_compute[250018]:     </memballoon>
Jan 20 15:13:28 compute-0 nova_compute[250018]:   </devices>
Jan 20 15:13:28 compute-0 nova_compute[250018]: </domain>
Jan 20 15:13:28 compute-0 nova_compute[250018]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 15:13:28 compute-0 nova_compute[250018]: 2026-01-20 15:13:28.608 250022 DEBUG nova.compute.manager [None req-9f95432c-3112-4209-a1b8-fcc9cd7720b0 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] [instance: e79c0704-f95e-422f-9c25-ed35fca7cb7c] Preparing to wait for external event network-vif-plugged-1421cc5f-9a45-447a-bb3a-3f13dcc5a309 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 20 15:13:28 compute-0 nova_compute[250018]: 2026-01-20 15:13:28.609 250022 DEBUG oslo_concurrency.lockutils [None req-9f95432c-3112-4209-a1b8-fcc9cd7720b0 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Acquiring lock "e79c0704-f95e-422f-9c25-ed35fca7cb7c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:13:28 compute-0 nova_compute[250018]: 2026-01-20 15:13:28.609 250022 DEBUG oslo_concurrency.lockutils [None req-9f95432c-3112-4209-a1b8-fcc9cd7720b0 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Lock "e79c0704-f95e-422f-9c25-ed35fca7cb7c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:13:28 compute-0 nova_compute[250018]: 2026-01-20 15:13:28.610 250022 DEBUG oslo_concurrency.lockutils [None req-9f95432c-3112-4209-a1b8-fcc9cd7720b0 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Lock "e79c0704-f95e-422f-9c25-ed35fca7cb7c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:13:28 compute-0 nova_compute[250018]: 2026-01-20 15:13:28.610 250022 DEBUG nova.virt.libvirt.vif [None req-9f95432c-3112-4209-a1b8-fcc9cd7720b0 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T15:13:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-AttachVolumeMultiAttachTest-server-102275358',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachvolumemultiattachtest-server-102275358',id=182,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fff727019f86407498e83d7948d54962',ramdisk_id='',reservation_id='r-hnemwa3f',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-AttachVolumeMultiAttachTest-418194625',owner_user_name='tempest-AttachVolumeMultiAttachTest-418194625-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_a
t=2026-01-20T15:13:24Z,user_data=None,user_id='e9cc4ce3e069479ba9c789b378a68a1d',uuid=e79c0704-f95e-422f-9c25-ed35fca7cb7c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1421cc5f-9a45-447a-bb3a-3f13dcc5a309", "address": "fa:16:3e:3b:1e:c2", "network": {"id": "c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1423306001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fff727019f86407498e83d7948d54962", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1421cc5f-9a", "ovs_interfaceid": "1421cc5f-9a45-447a-bb3a-3f13dcc5a309", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 15:13:28 compute-0 nova_compute[250018]: 2026-01-20 15:13:28.611 250022 DEBUG nova.network.os_vif_util [None req-9f95432c-3112-4209-a1b8-fcc9cd7720b0 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Converting VIF {"id": "1421cc5f-9a45-447a-bb3a-3f13dcc5a309", "address": "fa:16:3e:3b:1e:c2", "network": {"id": "c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1423306001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fff727019f86407498e83d7948d54962", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1421cc5f-9a", "ovs_interfaceid": "1421cc5f-9a45-447a-bb3a-3f13dcc5a309", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:13:28 compute-0 nova_compute[250018]: 2026-01-20 15:13:28.612 250022 DEBUG nova.network.os_vif_util [None req-9f95432c-3112-4209-a1b8-fcc9cd7720b0 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3b:1e:c2,bridge_name='br-int',has_traffic_filtering=True,id=1421cc5f-9a45-447a-bb3a-3f13dcc5a309,network=Network(c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1421cc5f-9a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:13:28 compute-0 nova_compute[250018]: 2026-01-20 15:13:28.612 250022 DEBUG os_vif [None req-9f95432c-3112-4209-a1b8-fcc9cd7720b0 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3b:1e:c2,bridge_name='br-int',has_traffic_filtering=True,id=1421cc5f-9a45-447a-bb3a-3f13dcc5a309,network=Network(c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1421cc5f-9a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 15:13:28 compute-0 nova_compute[250018]: 2026-01-20 15:13:28.613 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:13:28 compute-0 nova_compute[250018]: 2026-01-20 15:13:28.613 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:13:28 compute-0 nova_compute[250018]: 2026-01-20 15:13:28.614 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:13:28 compute-0 nova_compute[250018]: 2026-01-20 15:13:28.616 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:13:28 compute-0 nova_compute[250018]: 2026-01-20 15:13:28.616 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1421cc5f-9a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:13:28 compute-0 nova_compute[250018]: 2026-01-20 15:13:28.617 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1421cc5f-9a, col_values=(('external_ids', {'iface-id': '1421cc5f-9a45-447a-bb3a-3f13dcc5a309', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3b:1e:c2', 'vm-uuid': 'e79c0704-f95e-422f-9c25-ed35fca7cb7c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:13:28 compute-0 nova_compute[250018]: 2026-01-20 15:13:28.638 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:13:28 compute-0 NetworkManager[48960]: <info>  [1768922008.6400] manager: (tap1421cc5f-9a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/314)
Jan 20 15:13:28 compute-0 nova_compute[250018]: 2026-01-20 15:13:28.641 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 15:13:28 compute-0 nova_compute[250018]: 2026-01-20 15:13:28.647 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:13:28 compute-0 nova_compute[250018]: 2026-01-20 15:13:28.648 250022 INFO os_vif [None req-9f95432c-3112-4209-a1b8-fcc9cd7720b0 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3b:1e:c2,bridge_name='br-int',has_traffic_filtering=True,id=1421cc5f-9a45-447a-bb3a-3f13dcc5a309,network=Network(c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1421cc5f-9a')
Jan 20 15:13:28 compute-0 nova_compute[250018]: 2026-01-20 15:13:28.704 250022 DEBUG nova.virt.libvirt.driver [None req-9f95432c-3112-4209-a1b8-fcc9cd7720b0 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:13:28 compute-0 nova_compute[250018]: 2026-01-20 15:13:28.704 250022 DEBUG nova.virt.libvirt.driver [None req-9f95432c-3112-4209-a1b8-fcc9cd7720b0 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:13:28 compute-0 nova_compute[250018]: 2026-01-20 15:13:28.704 250022 DEBUG nova.virt.libvirt.driver [None req-9f95432c-3112-4209-a1b8-fcc9cd7720b0 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] No VIF found with MAC fa:16:3e:3b:1e:c2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 15:13:28 compute-0 nova_compute[250018]: 2026-01-20 15:13:28.705 250022 INFO nova.virt.libvirt.driver [None req-9f95432c-3112-4209-a1b8-fcc9cd7720b0 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] [instance: e79c0704-f95e-422f-9c25-ed35fca7cb7c] Using config drive
Jan 20 15:13:28 compute-0 nova_compute[250018]: 2026-01-20 15:13:28.728 250022 DEBUG nova.storage.rbd_utils [None req-9f95432c-3112-4209-a1b8-fcc9cd7720b0 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] rbd image e79c0704-f95e-422f-9c25-ed35fca7cb7c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:13:29 compute-0 nova_compute[250018]: 2026-01-20 15:13:29.121 250022 INFO nova.virt.libvirt.driver [None req-9f95432c-3112-4209-a1b8-fcc9cd7720b0 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] [instance: e79c0704-f95e-422f-9c25-ed35fca7cb7c] Creating config drive at /var/lib/nova/instances/e79c0704-f95e-422f-9c25-ed35fca7cb7c/disk.config
Jan 20 15:13:29 compute-0 nova_compute[250018]: 2026-01-20 15:13:29.127 250022 DEBUG oslo_concurrency.processutils [None req-9f95432c-3112-4209-a1b8-fcc9cd7720b0 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e79c0704-f95e-422f-9c25-ed35fca7cb7c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3qm1n85v execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:13:29 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:13:29 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:13:29 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:13:29.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:13:29 compute-0 nova_compute[250018]: 2026-01-20 15:13:29.262 250022 DEBUG oslo_concurrency.processutils [None req-9f95432c-3112-4209-a1b8-fcc9cd7720b0 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e79c0704-f95e-422f-9c25-ed35fca7cb7c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3qm1n85v" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:13:29 compute-0 nova_compute[250018]: 2026-01-20 15:13:29.289 250022 DEBUG nova.storage.rbd_utils [None req-9f95432c-3112-4209-a1b8-fcc9cd7720b0 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] rbd image e79c0704-f95e-422f-9c25-ed35fca7cb7c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:13:29 compute-0 nova_compute[250018]: 2026-01-20 15:13:29.293 250022 DEBUG oslo_concurrency.processutils [None req-9f95432c-3112-4209-a1b8-fcc9cd7720b0 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e79c0704-f95e-422f-9c25-ed35fca7cb7c/disk.config e79c0704-f95e-422f-9c25-ed35fca7cb7c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:13:29 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/4028771629' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:13:29 compute-0 ceph-mon[74360]: pgmap v2718: 321 pgs: 321 active+clean; 213 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 1.8 MiB/s wr, 87 op/s
Jan 20 15:13:29 compute-0 nova_compute[250018]: 2026-01-20 15:13:29.448 250022 DEBUG oslo_concurrency.processutils [None req-9f95432c-3112-4209-a1b8-fcc9cd7720b0 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e79c0704-f95e-422f-9c25-ed35fca7cb7c/disk.config e79c0704-f95e-422f-9c25-ed35fca7cb7c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:13:29 compute-0 nova_compute[250018]: 2026-01-20 15:13:29.450 250022 INFO nova.virt.libvirt.driver [None req-9f95432c-3112-4209-a1b8-fcc9cd7720b0 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] [instance: e79c0704-f95e-422f-9c25-ed35fca7cb7c] Deleting local config drive /var/lib/nova/instances/e79c0704-f95e-422f-9c25-ed35fca7cb7c/disk.config because it was imported into RBD.
Jan 20 15:13:29 compute-0 nova_compute[250018]: 2026-01-20 15:13:29.473 250022 DEBUG nova.network.neutron [req-dc519264-e9c8-4136-b635-d35a8628f147 req-d793afd0-4594-43a6-99ed-7c35482b52dc 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: e79c0704-f95e-422f-9c25-ed35fca7cb7c] Updated VIF entry in instance network info cache for port 1421cc5f-9a45-447a-bb3a-3f13dcc5a309. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 15:13:29 compute-0 nova_compute[250018]: 2026-01-20 15:13:29.474 250022 DEBUG nova.network.neutron [req-dc519264-e9c8-4136-b635-d35a8628f147 req-d793afd0-4594-43a6-99ed-7c35482b52dc 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: e79c0704-f95e-422f-9c25-ed35fca7cb7c] Updating instance_info_cache with network_info: [{"id": "1421cc5f-9a45-447a-bb3a-3f13dcc5a309", "address": "fa:16:3e:3b:1e:c2", "network": {"id": "c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1423306001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fff727019f86407498e83d7948d54962", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1421cc5f-9a", "ovs_interfaceid": "1421cc5f-9a45-447a-bb3a-3f13dcc5a309", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:13:29 compute-0 nova_compute[250018]: 2026-01-20 15:13:29.491 250022 DEBUG oslo_concurrency.lockutils [req-dc519264-e9c8-4136-b635-d35a8628f147 req-d793afd0-4594-43a6-99ed-7c35482b52dc 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-e79c0704-f95e-422f-9c25-ed35fca7cb7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:13:29 compute-0 kernel: tap1421cc5f-9a: entered promiscuous mode
Jan 20 15:13:29 compute-0 NetworkManager[48960]: <info>  [1768922009.5038] manager: (tap1421cc5f-9a): new Tun device (/org/freedesktop/NetworkManager/Devices/315)
Jan 20 15:13:29 compute-0 nova_compute[250018]: 2026-01-20 15:13:29.506 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:13:29 compute-0 ovn_controller[148666]: 2026-01-20T15:13:29Z|00651|binding|INFO|Claiming lport 1421cc5f-9a45-447a-bb3a-3f13dcc5a309 for this chassis.
Jan 20 15:13:29 compute-0 ovn_controller[148666]: 2026-01-20T15:13:29Z|00652|binding|INFO|1421cc5f-9a45-447a-bb3a-3f13dcc5a309: Claiming fa:16:3e:3b:1e:c2 10.100.0.8
Jan 20 15:13:29 compute-0 nova_compute[250018]: 2026-01-20 15:13:29.512 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:13:29 compute-0 systemd-machined[216401]: New machine qemu-80-instance-000000b6.
Jan 20 15:13:29 compute-0 nova_compute[250018]: 2026-01-20 15:13:29.573 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:13:29 compute-0 systemd[1]: Started Virtual Machine qemu-80-instance-000000b6.
Jan 20 15:13:29 compute-0 ovn_controller[148666]: 2026-01-20T15:13:29Z|00653|binding|INFO|Setting lport 1421cc5f-9a45-447a-bb3a-3f13dcc5a309 ovn-installed in OVS
Jan 20 15:13:29 compute-0 nova_compute[250018]: 2026-01-20 15:13:29.579 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:13:29 compute-0 systemd-udevd[357430]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 15:13:29 compute-0 NetworkManager[48960]: <info>  [1768922009.6032] device (tap1421cc5f-9a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 15:13:29 compute-0 NetworkManager[48960]: <info>  [1768922009.6040] device (tap1421cc5f-9a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 15:13:29 compute-0 ovn_controller[148666]: 2026-01-20T15:13:29Z|00654|binding|INFO|Setting lport 1421cc5f-9a45-447a-bb3a-3f13dcc5a309 up in Southbound
Jan 20 15:13:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:13:29.620 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3b:1e:c2 10.100.0.8'], port_security=['fa:16:3e:3b:1e:c2 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'e79c0704-f95e-422f-9c25-ed35fca7cb7c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fff727019f86407498e83d7948d54962', 'neutron:revision_number': '2', 'neutron:security_group_ids': '278e6fc3-62b7-45c0-b1a3-c75cbe3171fc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=87d69a20-7690-494a-ac16-7c600840561a, chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=1421cc5f-9a45-447a-bb3a-3f13dcc5a309) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:13:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:13:29.621 160071 INFO neutron.agent.ovn.metadata.agent [-] Port 1421cc5f-9a45-447a-bb3a-3f13dcc5a309 in datapath c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab bound to our chassis
Jan 20 15:13:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:13:29.622 160071 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab
Jan 20 15:13:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:13:29.634 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[e350606c-4f89-47a8-8eb6-9a3333c2dd74]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:13:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:13:29.635 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc1f4a971-01 in ovnmeta-c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 15:13:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:13:29.638 257604 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc1f4a971-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 15:13:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:13:29.638 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[42e8b783-6346-4017-ad69-34587652ae01]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:13:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:13:29.639 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[f33b5313-60f9-4a28-a691-cd50ad5bbd6e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:13:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:13:29.651 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[c63cc14c-3999-4b3d-a73c-d72f8701a74d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:13:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:13:29.675 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[b8a394aa-ea4b-4bbc-9155-36e34ad68926]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:13:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:13:29.700 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[44772e07-aa29-42a9-8325-3ad427d1e5bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:13:29 compute-0 systemd-udevd[357435]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 15:13:29 compute-0 NetworkManager[48960]: <info>  [1768922009.7079] manager: (tapc1f4a971-00): new Veth device (/org/freedesktop/NetworkManager/Devices/316)
Jan 20 15:13:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:13:29.707 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[e4559424-88c5-4289-9b7b-dd47b2429e63]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:13:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:13:29.738 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[8252113c-dc13-4774-9fec-ee372e40cd57]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:13:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:13:29.743 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[cd972831-ff42-439b-864a-f853018072ac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:13:29 compute-0 NetworkManager[48960]: <info>  [1768922009.7646] device (tapc1f4a971-00): carrier: link connected
Jan 20 15:13:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:13:29.769 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[a130bfeb-ed1b-49fa-99d3-4f5267af6f62]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:13:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:13:29.784 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[252b613a-0197-4f51-82d9-02c191a82174]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc1f4a971-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fc:30:f0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 208], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 795048, 'reachable_time': 24995, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 357463, 'error': None, 'target': 'ovnmeta-c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:13:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:13:29.798 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[749ae7a3-aa92-4027-a671-43b97f6c15d6]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fefc:30f0'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 795048, 'tstamp': 795048}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 357464, 'error': None, 'target': 'ovnmeta-c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:13:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:13:29.813 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[30a44070-0ede-4be2-bb75-9be03038457d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc1f4a971-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fc:30:f0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 208], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 795048, 'reachable_time': 24995, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 357465, 'error': None, 'target': 'ovnmeta-c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:13:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:13:29.837 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[5ec4930a-aecd-4fe7-a27f-a8626e3b3c04]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:13:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:13:29.887 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[6ebfa76e-24c2-437d-95fc-296ef8cd2c78]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:13:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:13:29.888 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc1f4a971-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:13:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:13:29.888 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:13:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:13:29.889 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc1f4a971-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:13:29 compute-0 nova_compute[250018]: 2026-01-20 15:13:29.890 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:13:29 compute-0 NetworkManager[48960]: <info>  [1768922009.8912] manager: (tapc1f4a971-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/317)
Jan 20 15:13:29 compute-0 kernel: tapc1f4a971-00: entered promiscuous mode
Jan 20 15:13:29 compute-0 nova_compute[250018]: 2026-01-20 15:13:29.894 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:13:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:13:29.895 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc1f4a971-00, col_values=(('external_ids', {'iface-id': 'b20b0e27-0b08-4316-b6df-6784416f44c0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:13:29 compute-0 nova_compute[250018]: 2026-01-20 15:13:29.896 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:13:29 compute-0 ovn_controller[148666]: 2026-01-20T15:13:29Z|00655|binding|INFO|Releasing lport b20b0e27-0b08-4316-b6df-6784416f44c0 from this chassis (sb_readonly=0)
Jan 20 15:13:29 compute-0 nova_compute[250018]: 2026-01-20 15:13:29.910 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:13:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:13:29.912 160071 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 15:13:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:13:29.912 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[43ac79f7-8d26-4a69-ab14-baf754c46675]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:13:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:13:29.913 160071 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 15:13:29 compute-0 ovn_metadata_agent[160049]: global
Jan 20 15:13:29 compute-0 ovn_metadata_agent[160049]:     log         /dev/log local0 debug
Jan 20 15:13:29 compute-0 ovn_metadata_agent[160049]:     log-tag     haproxy-metadata-proxy-c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab
Jan 20 15:13:29 compute-0 ovn_metadata_agent[160049]:     user        root
Jan 20 15:13:29 compute-0 ovn_metadata_agent[160049]:     group       root
Jan 20 15:13:29 compute-0 ovn_metadata_agent[160049]:     maxconn     1024
Jan 20 15:13:29 compute-0 ovn_metadata_agent[160049]:     pidfile     /var/lib/neutron/external/pids/c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab.pid.haproxy
Jan 20 15:13:29 compute-0 ovn_metadata_agent[160049]:     daemon
Jan 20 15:13:29 compute-0 ovn_metadata_agent[160049]: 
Jan 20 15:13:29 compute-0 ovn_metadata_agent[160049]: defaults
Jan 20 15:13:29 compute-0 ovn_metadata_agent[160049]:     log global
Jan 20 15:13:29 compute-0 ovn_metadata_agent[160049]:     mode http
Jan 20 15:13:29 compute-0 ovn_metadata_agent[160049]:     option httplog
Jan 20 15:13:29 compute-0 ovn_metadata_agent[160049]:     option dontlognull
Jan 20 15:13:29 compute-0 ovn_metadata_agent[160049]:     option http-server-close
Jan 20 15:13:29 compute-0 ovn_metadata_agent[160049]:     option forwardfor
Jan 20 15:13:29 compute-0 ovn_metadata_agent[160049]:     retries                 3
Jan 20 15:13:29 compute-0 ovn_metadata_agent[160049]:     timeout http-request    30s
Jan 20 15:13:29 compute-0 ovn_metadata_agent[160049]:     timeout connect         30s
Jan 20 15:13:29 compute-0 ovn_metadata_agent[160049]:     timeout client          32s
Jan 20 15:13:29 compute-0 ovn_metadata_agent[160049]:     timeout server          32s
Jan 20 15:13:29 compute-0 ovn_metadata_agent[160049]:     timeout http-keep-alive 30s
Jan 20 15:13:29 compute-0 ovn_metadata_agent[160049]: 
Jan 20 15:13:29 compute-0 ovn_metadata_agent[160049]: 
Jan 20 15:13:29 compute-0 ovn_metadata_agent[160049]: listen listener
Jan 20 15:13:29 compute-0 ovn_metadata_agent[160049]:     bind 169.254.169.254:80
Jan 20 15:13:29 compute-0 ovn_metadata_agent[160049]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 15:13:29 compute-0 ovn_metadata_agent[160049]:     http-request add-header X-OVN-Network-ID c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab
Jan 20 15:13:29 compute-0 ovn_metadata_agent[160049]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 15:13:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:13:29.913 160071 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab', 'env', 'PROCESS_TAG=haproxy-c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 15:13:29 compute-0 nova_compute[250018]: 2026-01-20 15:13:29.990 250022 DEBUG nova.compute.manager [req-08193732-cf4b-4eef-9b65-dee448515c10 req-76786d30-1536-408a-9df9-8842d256509e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: e79c0704-f95e-422f-9c25-ed35fca7cb7c] Received event network-vif-plugged-1421cc5f-9a45-447a-bb3a-3f13dcc5a309 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:13:29 compute-0 nova_compute[250018]: 2026-01-20 15:13:29.990 250022 DEBUG oslo_concurrency.lockutils [req-08193732-cf4b-4eef-9b65-dee448515c10 req-76786d30-1536-408a-9df9-8842d256509e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "e79c0704-f95e-422f-9c25-ed35fca7cb7c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:13:29 compute-0 nova_compute[250018]: 2026-01-20 15:13:29.990 250022 DEBUG oslo_concurrency.lockutils [req-08193732-cf4b-4eef-9b65-dee448515c10 req-76786d30-1536-408a-9df9-8842d256509e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "e79c0704-f95e-422f-9c25-ed35fca7cb7c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:13:29 compute-0 nova_compute[250018]: 2026-01-20 15:13:29.991 250022 DEBUG oslo_concurrency.lockutils [req-08193732-cf4b-4eef-9b65-dee448515c10 req-76786d30-1536-408a-9df9-8842d256509e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "e79c0704-f95e-422f-9c25-ed35fca7cb7c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:13:29 compute-0 nova_compute[250018]: 2026-01-20 15:13:29.991 250022 DEBUG nova.compute.manager [req-08193732-cf4b-4eef-9b65-dee448515c10 req-76786d30-1536-408a-9df9-8842d256509e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: e79c0704-f95e-422f-9c25-ed35fca7cb7c] Processing event network-vif-plugged-1421cc5f-9a45-447a-bb3a-3f13dcc5a309 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 20 15:13:30 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:13:30 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:13:30 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:13:30.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:13:30 compute-0 podman[357497]: 2026-01-20 15:13:30.242206398 +0000 UTC m=+0.047160723 container create 0be11f2cf24553d1d3a945a9f160f0e4c4ed5506aebb99282db1fe72f2f964b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 20 15:13:30 compute-0 nova_compute[250018]: 2026-01-20 15:13:30.282 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:13:30 compute-0 systemd[1]: Started libpod-conmon-0be11f2cf24553d1d3a945a9f160f0e4c4ed5506aebb99282db1fe72f2f964b1.scope.
Jan 20 15:13:30 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:13:30 compute-0 podman[357497]: 2026-01-20 15:13:30.215658592 +0000 UTC m=+0.020612907 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 15:13:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9819d32ccd5dfe0ff2da99d01f7801b8fbd0bc6a449f02d71010f2caa7c9d1a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 15:13:30 compute-0 podman[357497]: 2026-01-20 15:13:30.326402819 +0000 UTC m=+0.131357144 container init 0be11f2cf24553d1d3a945a9f160f0e4c4ed5506aebb99282db1fe72f2f964b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 20 15:13:30 compute-0 podman[357497]: 2026-01-20 15:13:30.331155487 +0000 UTC m=+0.136109782 container start 0be11f2cf24553d1d3a945a9f160f0e4c4ed5506aebb99282db1fe72f2f964b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 20 15:13:30 compute-0 neutron-haproxy-ovnmeta-c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab[357514]: [NOTICE]   (357518) : New worker (357520) forked
Jan 20 15:13:30 compute-0 neutron-haproxy-ovnmeta-c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab[357514]: [NOTICE]   (357518) : Loading success.
Jan 20 15:13:30 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2719: 321 pgs: 321 active+clean; 244 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 3.4 MiB/s wr, 141 op/s
Jan 20 15:13:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:13:30.783 160071 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:13:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:13:30.783 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:13:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:13:30.784 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:13:30 compute-0 nova_compute[250018]: 2026-01-20 15:13:30.868 250022 DEBUG nova.compute.manager [None req-9f95432c-3112-4209-a1b8-fcc9cd7720b0 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] [instance: e79c0704-f95e-422f-9c25-ed35fca7cb7c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 15:13:30 compute-0 nova_compute[250018]: 2026-01-20 15:13:30.869 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768922010.8687074, e79c0704-f95e-422f-9c25-ed35fca7cb7c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:13:30 compute-0 nova_compute[250018]: 2026-01-20 15:13:30.869 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: e79c0704-f95e-422f-9c25-ed35fca7cb7c] VM Started (Lifecycle Event)
Jan 20 15:13:30 compute-0 nova_compute[250018]: 2026-01-20 15:13:30.872 250022 DEBUG nova.virt.libvirt.driver [None req-9f95432c-3112-4209-a1b8-fcc9cd7720b0 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] [instance: e79c0704-f95e-422f-9c25-ed35fca7cb7c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 15:13:30 compute-0 nova_compute[250018]: 2026-01-20 15:13:30.875 250022 INFO nova.virt.libvirt.driver [-] [instance: e79c0704-f95e-422f-9c25-ed35fca7cb7c] Instance spawned successfully.
Jan 20 15:13:30 compute-0 nova_compute[250018]: 2026-01-20 15:13:30.875 250022 DEBUG nova.virt.libvirt.driver [None req-9f95432c-3112-4209-a1b8-fcc9cd7720b0 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] [instance: e79c0704-f95e-422f-9c25-ed35fca7cb7c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 15:13:31 compute-0 nova_compute[250018]: 2026-01-20 15:13:31.147 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: e79c0704-f95e-422f-9c25-ed35fca7cb7c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:13:31 compute-0 nova_compute[250018]: 2026-01-20 15:13:31.150 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: e79c0704-f95e-422f-9c25-ed35fca7cb7c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:13:31 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:13:31 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:13:31 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:13:31.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:13:31 compute-0 nova_compute[250018]: 2026-01-20 15:13:31.315 250022 DEBUG nova.virt.libvirt.driver [None req-9f95432c-3112-4209-a1b8-fcc9cd7720b0 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] [instance: e79c0704-f95e-422f-9c25-ed35fca7cb7c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:13:31 compute-0 nova_compute[250018]: 2026-01-20 15:13:31.316 250022 DEBUG nova.virt.libvirt.driver [None req-9f95432c-3112-4209-a1b8-fcc9cd7720b0 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] [instance: e79c0704-f95e-422f-9c25-ed35fca7cb7c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:13:31 compute-0 nova_compute[250018]: 2026-01-20 15:13:31.317 250022 DEBUG nova.virt.libvirt.driver [None req-9f95432c-3112-4209-a1b8-fcc9cd7720b0 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] [instance: e79c0704-f95e-422f-9c25-ed35fca7cb7c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:13:31 compute-0 nova_compute[250018]: 2026-01-20 15:13:31.317 250022 DEBUG nova.virt.libvirt.driver [None req-9f95432c-3112-4209-a1b8-fcc9cd7720b0 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] [instance: e79c0704-f95e-422f-9c25-ed35fca7cb7c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:13:31 compute-0 nova_compute[250018]: 2026-01-20 15:13:31.318 250022 DEBUG nova.virt.libvirt.driver [None req-9f95432c-3112-4209-a1b8-fcc9cd7720b0 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] [instance: e79c0704-f95e-422f-9c25-ed35fca7cb7c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:13:31 compute-0 nova_compute[250018]: 2026-01-20 15:13:31.318 250022 DEBUG nova.virt.libvirt.driver [None req-9f95432c-3112-4209-a1b8-fcc9cd7720b0 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] [instance: e79c0704-f95e-422f-9c25-ed35fca7cb7c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:13:31 compute-0 nova_compute[250018]: 2026-01-20 15:13:31.521 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: e79c0704-f95e-422f-9c25-ed35fca7cb7c] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 15:13:31 compute-0 nova_compute[250018]: 2026-01-20 15:13:31.521 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768922010.8688192, e79c0704-f95e-422f-9c25-ed35fca7cb7c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:13:31 compute-0 nova_compute[250018]: 2026-01-20 15:13:31.522 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: e79c0704-f95e-422f-9c25-ed35fca7cb7c] VM Paused (Lifecycle Event)
Jan 20 15:13:31 compute-0 ceph-mon[74360]: pgmap v2719: 321 pgs: 321 active+clean; 244 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 3.4 MiB/s wr, 141 op/s
Jan 20 15:13:31 compute-0 nova_compute[250018]: 2026-01-20 15:13:31.766 250022 INFO nova.compute.manager [None req-9f95432c-3112-4209-a1b8-fcc9cd7720b0 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] [instance: e79c0704-f95e-422f-9c25-ed35fca7cb7c] Took 6.04 seconds to spawn the instance on the hypervisor.
Jan 20 15:13:31 compute-0 nova_compute[250018]: 2026-01-20 15:13:31.767 250022 DEBUG nova.compute.manager [None req-9f95432c-3112-4209-a1b8-fcc9cd7720b0 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] [instance: e79c0704-f95e-422f-9c25-ed35fca7cb7c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:13:31 compute-0 nova_compute[250018]: 2026-01-20 15:13:31.768 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: e79c0704-f95e-422f-9c25-ed35fca7cb7c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:13:31 compute-0 nova_compute[250018]: 2026-01-20 15:13:31.778 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768922010.8715923, e79c0704-f95e-422f-9c25-ed35fca7cb7c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:13:31 compute-0 nova_compute[250018]: 2026-01-20 15:13:31.779 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: e79c0704-f95e-422f-9c25-ed35fca7cb7c] VM Resumed (Lifecycle Event)
Jan 20 15:13:31 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e397 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:13:31 compute-0 nova_compute[250018]: 2026-01-20 15:13:31.804 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: e79c0704-f95e-422f-9c25-ed35fca7cb7c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:13:31 compute-0 nova_compute[250018]: 2026-01-20 15:13:31.807 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: e79c0704-f95e-422f-9c25-ed35fca7cb7c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:13:31 compute-0 nova_compute[250018]: 2026-01-20 15:13:31.845 250022 INFO nova.compute.manager [None req-9f95432c-3112-4209-a1b8-fcc9cd7720b0 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] [instance: e79c0704-f95e-422f-9c25-ed35fca7cb7c] Took 8.37 seconds to build instance.
Jan 20 15:13:31 compute-0 nova_compute[250018]: 2026-01-20 15:13:31.863 250022 DEBUG oslo_concurrency.lockutils [None req-9f95432c-3112-4209-a1b8-fcc9cd7720b0 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Lock "e79c0704-f95e-422f-9c25-ed35fca7cb7c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.534s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:13:32 compute-0 nova_compute[250018]: 2026-01-20 15:13:32.069 250022 DEBUG nova.compute.manager [req-de2a3c0f-8b93-454c-8732-1b48901607c0 req-dfc04484-6634-4fb7-90c0-e9c764b7da19 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: e79c0704-f95e-422f-9c25-ed35fca7cb7c] Received event network-vif-plugged-1421cc5f-9a45-447a-bb3a-3f13dcc5a309 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:13:32 compute-0 nova_compute[250018]: 2026-01-20 15:13:32.070 250022 DEBUG oslo_concurrency.lockutils [req-de2a3c0f-8b93-454c-8732-1b48901607c0 req-dfc04484-6634-4fb7-90c0-e9c764b7da19 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "e79c0704-f95e-422f-9c25-ed35fca7cb7c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:13:32 compute-0 nova_compute[250018]: 2026-01-20 15:13:32.070 250022 DEBUG oslo_concurrency.lockutils [req-de2a3c0f-8b93-454c-8732-1b48901607c0 req-dfc04484-6634-4fb7-90c0-e9c764b7da19 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "e79c0704-f95e-422f-9c25-ed35fca7cb7c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:13:32 compute-0 nova_compute[250018]: 2026-01-20 15:13:32.070 250022 DEBUG oslo_concurrency.lockutils [req-de2a3c0f-8b93-454c-8732-1b48901607c0 req-dfc04484-6634-4fb7-90c0-e9c764b7da19 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "e79c0704-f95e-422f-9c25-ed35fca7cb7c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:13:32 compute-0 nova_compute[250018]: 2026-01-20 15:13:32.071 250022 DEBUG nova.compute.manager [req-de2a3c0f-8b93-454c-8732-1b48901607c0 req-dfc04484-6634-4fb7-90c0-e9c764b7da19 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: e79c0704-f95e-422f-9c25-ed35fca7cb7c] No waiting events found dispatching network-vif-plugged-1421cc5f-9a45-447a-bb3a-3f13dcc5a309 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:13:32 compute-0 nova_compute[250018]: 2026-01-20 15:13:32.071 250022 WARNING nova.compute.manager [req-de2a3c0f-8b93-454c-8732-1b48901607c0 req-dfc04484-6634-4fb7-90c0-e9c764b7da19 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: e79c0704-f95e-422f-9c25-ed35fca7cb7c] Received unexpected event network-vif-plugged-1421cc5f-9a45-447a-bb3a-3f13dcc5a309 for instance with vm_state active and task_state None.
Jan 20 15:13:32 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:13:32 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:13:32 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:13:32.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:13:32 compute-0 podman[357573]: 2026-01-20 15:13:32.45989318 +0000 UTC m=+0.053714100 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 20 15:13:32 compute-0 podman[357572]: 2026-01-20 15:13:32.489184141 +0000 UTC m=+0.084744748 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Jan 20 15:13:32 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2720: 321 pgs: 321 active+clean; 278 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 269 KiB/s rd, 4.0 MiB/s wr, 106 op/s
Jan 20 15:13:32 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/1603427827' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:13:32 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/2022380678' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:13:33 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:13:33 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:13:33 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:13:33.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:13:33 compute-0 ceph-mon[74360]: pgmap v2720: 321 pgs: 321 active+clean; 278 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 269 KiB/s rd, 4.0 MiB/s wr, 106 op/s
Jan 20 15:13:33 compute-0 nova_compute[250018]: 2026-01-20 15:13:33.639 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:13:34 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:13:34 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:13:34 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:13:34.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:13:34 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2721: 321 pgs: 321 active+clean; 292 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 848 KiB/s rd, 3.9 MiB/s wr, 114 op/s
Jan 20 15:13:34 compute-0 sudo[357617]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:13:34 compute-0 sudo[357617]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:13:34 compute-0 sudo[357617]: pam_unix(sudo:session): session closed for user root
Jan 20 15:13:34 compute-0 sudo[357642]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:13:34 compute-0 sudo[357642]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:13:34 compute-0 sudo[357642]: pam_unix(sudo:session): session closed for user root
Jan 20 15:13:34 compute-0 sudo[357667]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:13:34 compute-0 sudo[357667]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:13:34 compute-0 sudo[357667]: pam_unix(sudo:session): session closed for user root
Jan 20 15:13:35 compute-0 sudo[357692]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 15:13:35 compute-0 sudo[357692]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:13:35 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:13:35 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:13:35 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:13:35.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:13:35 compute-0 nova_compute[250018]: 2026-01-20 15:13:35.284 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:13:35 compute-0 ceph-mon[74360]: pgmap v2721: 321 pgs: 321 active+clean; 292 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 848 KiB/s rd, 3.9 MiB/s wr, 114 op/s
Jan 20 15:13:35 compute-0 sudo[357692]: pam_unix(sudo:session): session closed for user root
Jan 20 15:13:35 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 15:13:35 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:13:35 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 20 15:13:35 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 15:13:35 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 20 15:13:35 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:13:35 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 7cb883c6-d126-42d7-82be-4d6eb63ed989 does not exist
Jan 20 15:13:35 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 8526f1c9-8cd3-46e7-a6f8-4e56c6a9aa7d does not exist
Jan 20 15:13:35 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 7bfe46b6-d9aa-4d82-bba4-aa74a3a5166f does not exist
Jan 20 15:13:35 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 20 15:13:35 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 15:13:35 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 20 15:13:35 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 15:13:35 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 15:13:35 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:13:35 compute-0 sudo[357747]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:13:35 compute-0 sudo[357747]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:13:35 compute-0 sudo[357747]: pam_unix(sudo:session): session closed for user root
Jan 20 15:13:35 compute-0 sudo[357772]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:13:35 compute-0 sudo[357772]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:13:36 compute-0 sudo[357772]: pam_unix(sudo:session): session closed for user root
Jan 20 15:13:36 compute-0 sudo[357797]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:13:36 compute-0 sudo[357797]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:13:36 compute-0 sudo[357797]: pam_unix(sudo:session): session closed for user root
Jan 20 15:13:36 compute-0 sudo[357822]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 15:13:36 compute-0 sudo[357822]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:13:36 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:13:36 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:13:36 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:13:36.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:13:36 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:13:36 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 15:13:36 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:13:36 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 15:13:36 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 15:13:36 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:13:36 compute-0 podman[357889]: 2026-01-20 15:13:36.450450316 +0000 UTC m=+0.042624861 container create a91abbf7da0ddd6176f7db0ed6cf6a45428ff4315b58fcab8c242ca51efba2f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_snyder, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 20 15:13:36 compute-0 systemd[1]: Started libpod-conmon-a91abbf7da0ddd6176f7db0ed6cf6a45428ff4315b58fcab8c242ca51efba2f2.scope.
Jan 20 15:13:36 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2722: 321 pgs: 321 active+clean; 293 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 166 op/s
Jan 20 15:13:36 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:13:36 compute-0 podman[357889]: 2026-01-20 15:13:36.429240714 +0000 UTC m=+0.021415279 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:13:36 compute-0 podman[357889]: 2026-01-20 15:13:36.535251193 +0000 UTC m=+0.127425748 container init a91abbf7da0ddd6176f7db0ed6cf6a45428ff4315b58fcab8c242ca51efba2f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_snyder, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 20 15:13:36 compute-0 podman[357889]: 2026-01-20 15:13:36.542149079 +0000 UTC m=+0.134323614 container start a91abbf7da0ddd6176f7db0ed6cf6a45428ff4315b58fcab8c242ca51efba2f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_snyder, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 20 15:13:36 compute-0 podman[357889]: 2026-01-20 15:13:36.545453639 +0000 UTC m=+0.137628174 container attach a91abbf7da0ddd6176f7db0ed6cf6a45428ff4315b58fcab8c242ca51efba2f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_snyder, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef)
Jan 20 15:13:36 compute-0 determined_snyder[357906]: 167 167
Jan 20 15:13:36 compute-0 systemd[1]: libpod-a91abbf7da0ddd6176f7db0ed6cf6a45428ff4315b58fcab8c242ca51efba2f2.scope: Deactivated successfully.
Jan 20 15:13:36 compute-0 conmon[357906]: conmon a91abbf7da0ddd6176f7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a91abbf7da0ddd6176f7db0ed6cf6a45428ff4315b58fcab8c242ca51efba2f2.scope/container/memory.events
Jan 20 15:13:36 compute-0 podman[357889]: 2026-01-20 15:13:36.549867468 +0000 UTC m=+0.142042023 container died a91abbf7da0ddd6176f7db0ed6cf6a45428ff4315b58fcab8c242ca51efba2f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_snyder, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 15:13:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-62eda5f5df328f3bab00bf5992ff16752dcb5f94a679617e87b9b3c57831aba1-merged.mount: Deactivated successfully.
Jan 20 15:13:36 compute-0 podman[357889]: 2026-01-20 15:13:36.593598717 +0000 UTC m=+0.185773252 container remove a91abbf7da0ddd6176f7db0ed6cf6a45428ff4315b58fcab8c242ca51efba2f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_snyder, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 15:13:36 compute-0 systemd[1]: libpod-conmon-a91abbf7da0ddd6176f7db0ed6cf6a45428ff4315b58fcab8c242ca51efba2f2.scope: Deactivated successfully.
Jan 20 15:13:36 compute-0 podman[357929]: 2026-01-20 15:13:36.760432507 +0000 UTC m=+0.046560176 container create c780f5274d79377db57811d0fdabd31ba6bfab0496831156c65e98f4203e218f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_zhukovsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 15:13:36 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e397 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:13:36 compute-0 systemd[1]: Started libpod-conmon-c780f5274d79377db57811d0fdabd31ba6bfab0496831156c65e98f4203e218f.scope.
Jan 20 15:13:36 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:13:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ec6146e3b050dc9df3d2c5fd1789807436ca77591224cf06662402e112c6e11/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 15:13:36 compute-0 podman[357929]: 2026-01-20 15:13:36.741823496 +0000 UTC m=+0.027951185 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:13:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ec6146e3b050dc9df3d2c5fd1789807436ca77591224cf06662402e112c6e11/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 15:13:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ec6146e3b050dc9df3d2c5fd1789807436ca77591224cf06662402e112c6e11/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 15:13:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ec6146e3b050dc9df3d2c5fd1789807436ca77591224cf06662402e112c6e11/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 15:13:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ec6146e3b050dc9df3d2c5fd1789807436ca77591224cf06662402e112c6e11/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 15:13:36 compute-0 podman[357929]: 2026-01-20 15:13:36.864219447 +0000 UTC m=+0.150347126 container init c780f5274d79377db57811d0fdabd31ba6bfab0496831156c65e98f4203e218f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_zhukovsky, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True)
Jan 20 15:13:36 compute-0 podman[357929]: 2026-01-20 15:13:36.876514029 +0000 UTC m=+0.162641698 container start c780f5274d79377db57811d0fdabd31ba6bfab0496831156c65e98f4203e218f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_zhukovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Jan 20 15:13:36 compute-0 podman[357929]: 2026-01-20 15:13:36.880079015 +0000 UTC m=+0.166206684 container attach c780f5274d79377db57811d0fdabd31ba6bfab0496831156c65e98f4203e218f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_zhukovsky, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 20 15:13:37 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:13:37 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:13:37 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:13:37.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:13:37 compute-0 ceph-mon[74360]: pgmap v2722: 321 pgs: 321 active+clean; 293 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 166 op/s
Jan 20 15:13:37 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:13:37.511 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=59, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=58) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:13:37 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:13:37.512 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 15:13:37 compute-0 nova_compute[250018]: 2026-01-20 15:13:37.513 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:13:37 compute-0 stupefied_zhukovsky[357946]: --> passed data devices: 0 physical, 1 LVM
Jan 20 15:13:37 compute-0 stupefied_zhukovsky[357946]: --> relative data size: 1.0
Jan 20 15:13:37 compute-0 stupefied_zhukovsky[357946]: --> All data devices are unavailable
Jan 20 15:13:37 compute-0 systemd[1]: libpod-c780f5274d79377db57811d0fdabd31ba6bfab0496831156c65e98f4203e218f.scope: Deactivated successfully.
Jan 20 15:13:37 compute-0 podman[357929]: 2026-01-20 15:13:37.64613862 +0000 UTC m=+0.932266289 container died c780f5274d79377db57811d0fdabd31ba6bfab0496831156c65e98f4203e218f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_zhukovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 15:13:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-0ec6146e3b050dc9df3d2c5fd1789807436ca77591224cf06662402e112c6e11-merged.mount: Deactivated successfully.
Jan 20 15:13:37 compute-0 podman[357929]: 2026-01-20 15:13:37.74845066 +0000 UTC m=+1.034578329 container remove c780f5274d79377db57811d0fdabd31ba6bfab0496831156c65e98f4203e218f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_zhukovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default)
Jan 20 15:13:37 compute-0 systemd[1]: libpod-conmon-c780f5274d79377db57811d0fdabd31ba6bfab0496831156c65e98f4203e218f.scope: Deactivated successfully.
Jan 20 15:13:37 compute-0 sudo[357822]: pam_unix(sudo:session): session closed for user root
Jan 20 15:13:37 compute-0 sudo[357973]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:13:37 compute-0 sudo[357973]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:13:37 compute-0 sudo[357973]: pam_unix(sudo:session): session closed for user root
Jan 20 15:13:37 compute-0 sudo[357998]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:13:37 compute-0 sudo[357998]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:13:37 compute-0 sudo[357998]: pam_unix(sudo:session): session closed for user root
Jan 20 15:13:37 compute-0 sudo[358023]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:13:37 compute-0 sudo[358023]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:13:37 compute-0 sudo[358023]: pam_unix(sudo:session): session closed for user root
Jan 20 15:13:37 compute-0 sudo[358048]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- lvm list --format json
Jan 20 15:13:37 compute-0 sudo[358048]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:13:38 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:13:38 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:13:38 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:13:38.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:13:38 compute-0 podman[358114]: 2026-01-20 15:13:38.318123897 +0000 UTC m=+0.052117917 container create 6fb2ac3ce3dfed60890f48c54ce2cdc2fef8fae79e5171c1258ce59ae1f73b83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_blackwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 20 15:13:38 compute-0 systemd[1]: Started libpod-conmon-6fb2ac3ce3dfed60890f48c54ce2cdc2fef8fae79e5171c1258ce59ae1f73b83.scope.
Jan 20 15:13:38 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:13:38 compute-0 podman[358114]: 2026-01-20 15:13:38.289895315 +0000 UTC m=+0.023889385 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:13:38 compute-0 podman[358114]: 2026-01-20 15:13:38.385920785 +0000 UTC m=+0.119914825 container init 6fb2ac3ce3dfed60890f48c54ce2cdc2fef8fae79e5171c1258ce59ae1f73b83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_blackwell, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 20 15:13:38 compute-0 podman[358114]: 2026-01-20 15:13:38.392190105 +0000 UTC m=+0.126184125 container start 6fb2ac3ce3dfed60890f48c54ce2cdc2fef8fae79e5171c1258ce59ae1f73b83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_blackwell, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 20 15:13:38 compute-0 podman[358114]: 2026-01-20 15:13:38.395243697 +0000 UTC m=+0.129237777 container attach 6fb2ac3ce3dfed60890f48c54ce2cdc2fef8fae79e5171c1258ce59ae1f73b83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_blackwell, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 15:13:38 compute-0 dazzling_blackwell[358130]: 167 167
Jan 20 15:13:38 compute-0 systemd[1]: libpod-6fb2ac3ce3dfed60890f48c54ce2cdc2fef8fae79e5171c1258ce59ae1f73b83.scope: Deactivated successfully.
Jan 20 15:13:38 compute-0 podman[358114]: 2026-01-20 15:13:38.398568016 +0000 UTC m=+0.132562036 container died 6fb2ac3ce3dfed60890f48c54ce2cdc2fef8fae79e5171c1258ce59ae1f73b83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_blackwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 15:13:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-3d3fdafcf3cf04ef8773d80a870cdf2221cd8c0992fed77233cd62588f30ae54-merged.mount: Deactivated successfully.
Jan 20 15:13:38 compute-0 podman[358114]: 2026-01-20 15:13:38.433624612 +0000 UTC m=+0.167618632 container remove 6fb2ac3ce3dfed60890f48c54ce2cdc2fef8fae79e5171c1258ce59ae1f73b83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_blackwell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 15:13:38 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/152969632' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:13:38 compute-0 systemd[1]: libpod-conmon-6fb2ac3ce3dfed60890f48c54ce2cdc2fef8fae79e5171c1258ce59ae1f73b83.scope: Deactivated successfully.
Jan 20 15:13:38 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2723: 321 pgs: 321 active+clean; 293 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 166 op/s
Jan 20 15:13:38 compute-0 podman[358153]: 2026-01-20 15:13:38.592057185 +0000 UTC m=+0.037752319 container create b7afff9a973defc6d9557339398f1dce30996a9c6c2de0c05ce0f755e7294e8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_mestorf, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:13:38 compute-0 systemd[1]: Started libpod-conmon-b7afff9a973defc6d9557339398f1dce30996a9c6c2de0c05ce0f755e7294e8a.scope.
Jan 20 15:13:38 compute-0 nova_compute[250018]: 2026-01-20 15:13:38.642 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:13:38 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:13:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e09d0ea547014d00402b38e4231906ea93444f3c4116e470e5eb3daa4321533d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 15:13:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e09d0ea547014d00402b38e4231906ea93444f3c4116e470e5eb3daa4321533d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 15:13:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e09d0ea547014d00402b38e4231906ea93444f3c4116e470e5eb3daa4321533d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 15:13:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e09d0ea547014d00402b38e4231906ea93444f3c4116e470e5eb3daa4321533d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 15:13:38 compute-0 podman[358153]: 2026-01-20 15:13:38.575720235 +0000 UTC m=+0.021415389 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:13:38 compute-0 podman[358153]: 2026-01-20 15:13:38.676658778 +0000 UTC m=+0.122353922 container init b7afff9a973defc6d9557339398f1dce30996a9c6c2de0c05ce0f755e7294e8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_mestorf, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 20 15:13:38 compute-0 podman[358153]: 2026-01-20 15:13:38.683212595 +0000 UTC m=+0.128907729 container start b7afff9a973defc6d9557339398f1dce30996a9c6c2de0c05ce0f755e7294e8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_mestorf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 20 15:13:38 compute-0 podman[358153]: 2026-01-20 15:13:38.686544635 +0000 UTC m=+0.132239789 container attach b7afff9a973defc6d9557339398f1dce30996a9c6c2de0c05ce0f755e7294e8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_mestorf, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 20 15:13:39 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:13:39 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:13:39 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:13:39.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:13:39 compute-0 musing_mestorf[358170]: {
Jan 20 15:13:39 compute-0 musing_mestorf[358170]:     "0": [
Jan 20 15:13:39 compute-0 musing_mestorf[358170]:         {
Jan 20 15:13:39 compute-0 musing_mestorf[358170]:             "devices": [
Jan 20 15:13:39 compute-0 musing_mestorf[358170]:                 "/dev/loop3"
Jan 20 15:13:39 compute-0 musing_mestorf[358170]:             ],
Jan 20 15:13:39 compute-0 musing_mestorf[358170]:             "lv_name": "ceph_lv0",
Jan 20 15:13:39 compute-0 musing_mestorf[358170]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 15:13:39 compute-0 musing_mestorf[358170]:             "lv_size": "7511998464",
Jan 20 15:13:39 compute-0 musing_mestorf[358170]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e399cf45-e6b6-5393-99f1-75c601d3f188,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afc3740a-ccee-46f8-b343-22648cc89376,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 20 15:13:39 compute-0 musing_mestorf[358170]:             "lv_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 15:13:39 compute-0 musing_mestorf[358170]:             "name": "ceph_lv0",
Jan 20 15:13:39 compute-0 musing_mestorf[358170]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 15:13:39 compute-0 musing_mestorf[358170]:             "tags": {
Jan 20 15:13:39 compute-0 musing_mestorf[358170]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 15:13:39 compute-0 musing_mestorf[358170]:                 "ceph.block_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 15:13:39 compute-0 musing_mestorf[358170]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 15:13:39 compute-0 musing_mestorf[358170]:                 "ceph.cluster_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 15:13:39 compute-0 musing_mestorf[358170]:                 "ceph.cluster_name": "ceph",
Jan 20 15:13:39 compute-0 musing_mestorf[358170]:                 "ceph.crush_device_class": "",
Jan 20 15:13:39 compute-0 musing_mestorf[358170]:                 "ceph.encrypted": "0",
Jan 20 15:13:39 compute-0 musing_mestorf[358170]:                 "ceph.osd_fsid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 15:13:39 compute-0 musing_mestorf[358170]:                 "ceph.osd_id": "0",
Jan 20 15:13:39 compute-0 musing_mestorf[358170]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 15:13:39 compute-0 musing_mestorf[358170]:                 "ceph.type": "block",
Jan 20 15:13:39 compute-0 musing_mestorf[358170]:                 "ceph.vdo": "0"
Jan 20 15:13:39 compute-0 musing_mestorf[358170]:             },
Jan 20 15:13:39 compute-0 musing_mestorf[358170]:             "type": "block",
Jan 20 15:13:39 compute-0 musing_mestorf[358170]:             "vg_name": "ceph_vg0"
Jan 20 15:13:39 compute-0 musing_mestorf[358170]:         }
Jan 20 15:13:39 compute-0 musing_mestorf[358170]:     ]
Jan 20 15:13:39 compute-0 musing_mestorf[358170]: }
Jan 20 15:13:39 compute-0 systemd[1]: libpod-b7afff9a973defc6d9557339398f1dce30996a9c6c2de0c05ce0f755e7294e8a.scope: Deactivated successfully.
Jan 20 15:13:39 compute-0 podman[358153]: 2026-01-20 15:13:39.407876763 +0000 UTC m=+0.853571887 container died b7afff9a973defc6d9557339398f1dce30996a9c6c2de0c05ce0f755e7294e8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_mestorf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 15:13:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-e09d0ea547014d00402b38e4231906ea93444f3c4116e470e5eb3daa4321533d-merged.mount: Deactivated successfully.
Jan 20 15:13:39 compute-0 podman[358153]: 2026-01-20 15:13:39.456635978 +0000 UTC m=+0.902331112 container remove b7afff9a973defc6d9557339398f1dce30996a9c6c2de0c05ce0f755e7294e8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_mestorf, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 15:13:39 compute-0 ceph-mon[74360]: pgmap v2723: 321 pgs: 321 active+clean; 293 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 166 op/s
Jan 20 15:13:39 compute-0 sudo[358048]: pam_unix(sudo:session): session closed for user root
Jan 20 15:13:39 compute-0 systemd[1]: libpod-conmon-b7afff9a973defc6d9557339398f1dce30996a9c6c2de0c05ce0f755e7294e8a.scope: Deactivated successfully.
Jan 20 15:13:39 compute-0 sudo[358194]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:13:39 compute-0 sudo[358194]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:13:39 compute-0 sudo[358194]: pam_unix(sudo:session): session closed for user root
Jan 20 15:13:39 compute-0 sudo[358219]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:13:39 compute-0 sudo[358219]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:13:39 compute-0 sudo[358219]: pam_unix(sudo:session): session closed for user root
Jan 20 15:13:39 compute-0 sudo[358244]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:13:39 compute-0 sudo[358244]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:13:39 compute-0 sudo[358244]: pam_unix(sudo:session): session closed for user root
Jan 20 15:13:39 compute-0 sudo[358269]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- raw list --format json
Jan 20 15:13:39 compute-0 sudo[358269]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:13:40 compute-0 podman[358334]: 2026-01-20 15:13:40.076582371 +0000 UTC m=+0.045735055 container create c1122884da9b6704de008d940a2a4be11ecfded1f234c38bd5053e0ece94e09e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_clarke, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 20 15:13:40 compute-0 systemd[1]: Started libpod-conmon-c1122884da9b6704de008d940a2a4be11ecfded1f234c38bd5053e0ece94e09e.scope.
Jan 20 15:13:40 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:13:40 compute-0 podman[358334]: 2026-01-20 15:13:40.058431772 +0000 UTC m=+0.027584516 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:13:40 compute-0 podman[358334]: 2026-01-20 15:13:40.15400616 +0000 UTC m=+0.123158844 container init c1122884da9b6704de008d940a2a4be11ecfded1f234c38bd5053e0ece94e09e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_clarke, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 20 15:13:40 compute-0 podman[358334]: 2026-01-20 15:13:40.158958574 +0000 UTC m=+0.128111258 container start c1122884da9b6704de008d940a2a4be11ecfded1f234c38bd5053e0ece94e09e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_clarke, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 20 15:13:40 compute-0 podman[358334]: 2026-01-20 15:13:40.162029836 +0000 UTC m=+0.131182610 container attach c1122884da9b6704de008d940a2a4be11ecfded1f234c38bd5053e0ece94e09e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_clarke, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 15:13:40 compute-0 systemd[1]: libpod-c1122884da9b6704de008d940a2a4be11ecfded1f234c38bd5053e0ece94e09e.scope: Deactivated successfully.
Jan 20 15:13:40 compute-0 pedantic_clarke[358350]: 167 167
Jan 20 15:13:40 compute-0 podman[358334]: 2026-01-20 15:13:40.169351124 +0000 UTC m=+0.138503808 container died c1122884da9b6704de008d940a2a4be11ecfded1f234c38bd5053e0ece94e09e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_clarke, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Jan 20 15:13:40 compute-0 conmon[358350]: conmon c1122884da9b6704de00 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c1122884da9b6704de008d940a2a4be11ecfded1f234c38bd5053e0ece94e09e.scope/container/memory.events
Jan 20 15:13:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-7e9060de1ccc53a6c32d1e94664401d48dc4791fc23d3d48bc486d82affc1fba-merged.mount: Deactivated successfully.
Jan 20 15:13:40 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:13:40 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:13:40 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:13:40.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:13:40 compute-0 podman[358334]: 2026-01-20 15:13:40.20891447 +0000 UTC m=+0.178067154 container remove c1122884da9b6704de008d940a2a4be11ecfded1f234c38bd5053e0ece94e09e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_clarke, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3)
Jan 20 15:13:40 compute-0 systemd[1]: libpod-conmon-c1122884da9b6704de008d940a2a4be11ecfded1f234c38bd5053e0ece94e09e.scope: Deactivated successfully.
Jan 20 15:13:40 compute-0 nova_compute[250018]: 2026-01-20 15:13:40.286 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:13:40 compute-0 podman[358375]: 2026-01-20 15:13:40.374309632 +0000 UTC m=+0.040187754 container create 849aca555f285098d5731b2a2fd323238ae84341f6789945e0dd6206416ee123 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_aryabhata, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 20 15:13:40 compute-0 systemd[1]: Started libpod-conmon-849aca555f285098d5731b2a2fd323238ae84341f6789945e0dd6206416ee123.scope.
Jan 20 15:13:40 compute-0 podman[358375]: 2026-01-20 15:13:40.35865148 +0000 UTC m=+0.024529622 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:13:40 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:13:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84d98dc45ba6d5b58055cede06207b8920a3ef747ca28d97631ba2c066a1b876/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 15:13:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84d98dc45ba6d5b58055cede06207b8920a3ef747ca28d97631ba2c066a1b876/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 15:13:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84d98dc45ba6d5b58055cede06207b8920a3ef747ca28d97631ba2c066a1b876/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 15:13:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84d98dc45ba6d5b58055cede06207b8920a3ef747ca28d97631ba2c066a1b876/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 15:13:40 compute-0 podman[358375]: 2026-01-20 15:13:40.501656407 +0000 UTC m=+0.167534559 container init 849aca555f285098d5731b2a2fd323238ae84341f6789945e0dd6206416ee123 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_aryabhata, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 20 15:13:40 compute-0 podman[358375]: 2026-01-20 15:13:40.51696174 +0000 UTC m=+0.182839862 container start 849aca555f285098d5731b2a2fd323238ae84341f6789945e0dd6206416ee123 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_aryabhata, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 20 15:13:40 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2724: 321 pgs: 321 active+clean; 293 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 4.0 MiB/s wr, 225 op/s
Jan 20 15:13:40 compute-0 podman[358375]: 2026-01-20 15:13:40.520751113 +0000 UTC m=+0.186629265 container attach 849aca555f285098d5731b2a2fd323238ae84341f6789945e0dd6206416ee123 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_aryabhata, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 20 15:13:40 compute-0 sudo[358394]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:13:40 compute-0 sudo[358394]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:13:40 compute-0 sudo[358394]: pam_unix(sudo:session): session closed for user root
Jan 20 15:13:40 compute-0 sudo[358421]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:13:40 compute-0 sudo[358421]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:13:40 compute-0 sudo[358421]: pam_unix(sudo:session): session closed for user root
Jan 20 15:13:41 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:13:41 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:13:41 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:13:41.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:13:41 compute-0 boring_aryabhata[358391]: {
Jan 20 15:13:41 compute-0 boring_aryabhata[358391]:     "afc3740a-ccee-46f8-b343-22648cc89376": {
Jan 20 15:13:41 compute-0 boring_aryabhata[358391]:         "ceph_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 15:13:41 compute-0 boring_aryabhata[358391]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 20 15:13:41 compute-0 boring_aryabhata[358391]:         "osd_id": 0,
Jan 20 15:13:41 compute-0 boring_aryabhata[358391]:         "osd_uuid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 15:13:41 compute-0 boring_aryabhata[358391]:         "type": "bluestore"
Jan 20 15:13:41 compute-0 boring_aryabhata[358391]:     }
Jan 20 15:13:41 compute-0 boring_aryabhata[358391]: }
Jan 20 15:13:41 compute-0 systemd[1]: libpod-849aca555f285098d5731b2a2fd323238ae84341f6789945e0dd6206416ee123.scope: Deactivated successfully.
Jan 20 15:13:41 compute-0 conmon[358391]: conmon 849aca555f285098d573 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-849aca555f285098d5731b2a2fd323238ae84341f6789945e0dd6206416ee123.scope/container/memory.events
Jan 20 15:13:41 compute-0 podman[358375]: 2026-01-20 15:13:41.451785047 +0000 UTC m=+1.117663179 container died 849aca555f285098d5731b2a2fd323238ae84341f6789945e0dd6206416ee123 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_aryabhata, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 20 15:13:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-84d98dc45ba6d5b58055cede06207b8920a3ef747ca28d97631ba2c066a1b876-merged.mount: Deactivated successfully.
Jan 20 15:13:41 compute-0 podman[358375]: 2026-01-20 15:13:41.505152847 +0000 UTC m=+1.171030969 container remove 849aca555f285098d5731b2a2fd323238ae84341f6789945e0dd6206416ee123 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_aryabhata, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 15:13:41 compute-0 systemd[1]: libpod-conmon-849aca555f285098d5731b2a2fd323238ae84341f6789945e0dd6206416ee123.scope: Deactivated successfully.
Jan 20 15:13:41 compute-0 sudo[358269]: pam_unix(sudo:session): session closed for user root
Jan 20 15:13:41 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 15:13:41 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:13:41 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 15:13:41 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:13:41 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 4a7fa1c1-7b21-4723-b9fc-db9288e4e3dc does not exist
Jan 20 15:13:41 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 3ec77d23-001f-451c-ab9d-e0ef7a749ef5 does not exist
Jan 20 15:13:41 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 42f5a94d-6a71-4b6d-8192-a599076b0ed7 does not exist
Jan 20 15:13:41 compute-0 sudo[358476]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:13:41 compute-0 sudo[358476]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:13:41 compute-0 sudo[358476]: pam_unix(sudo:session): session closed for user root
Jan 20 15:13:41 compute-0 ceph-mon[74360]: pgmap v2724: 321 pgs: 321 active+clean; 293 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 4.0 MiB/s wr, 225 op/s
Jan 20 15:13:41 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/40024272' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:13:41 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:13:41 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:13:41 compute-0 sudo[358501]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 15:13:41 compute-0 sudo[358501]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:13:41 compute-0 sudo[358501]: pam_unix(sudo:session): session closed for user root
Jan 20 15:13:41 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e397 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:13:42 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:13:42 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:13:42 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:13:42.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:13:42 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:13:42.514 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=367c1a2c-b16a-4828-ab5a-626bb50023b4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '59'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:13:42 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2725: 321 pgs: 321 active+clean; 292 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 2.3 MiB/s wr, 194 op/s
Jan 20 15:13:42 compute-0 nova_compute[250018]: 2026-01-20 15:13:42.570 250022 DEBUG oslo_concurrency.lockutils [None req-009e1aad-bf2b-43c0-8318-9dd6e620e746 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Acquiring lock "5380c3d8-edb4-4366-85ab-3dc76ecc1f43" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:13:42 compute-0 nova_compute[250018]: 2026-01-20 15:13:42.571 250022 DEBUG oslo_concurrency.lockutils [None req-009e1aad-bf2b-43c0-8318-9dd6e620e746 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Lock "5380c3d8-edb4-4366-85ab-3dc76ecc1f43" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:13:42 compute-0 nova_compute[250018]: 2026-01-20 15:13:42.586 250022 DEBUG nova.compute.manager [None req-009e1aad-bf2b-43c0-8318-9dd6e620e746 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 5380c3d8-edb4-4366-85ab-3dc76ecc1f43] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 15:13:42 compute-0 nova_compute[250018]: 2026-01-20 15:13:42.660 250022 DEBUG oslo_concurrency.lockutils [None req-009e1aad-bf2b-43c0-8318-9dd6e620e746 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:13:42 compute-0 nova_compute[250018]: 2026-01-20 15:13:42.660 250022 DEBUG oslo_concurrency.lockutils [None req-009e1aad-bf2b-43c0-8318-9dd6e620e746 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:13:42 compute-0 nova_compute[250018]: 2026-01-20 15:13:42.668 250022 DEBUG nova.virt.hardware [None req-009e1aad-bf2b-43c0-8318-9dd6e620e746 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 15:13:42 compute-0 nova_compute[250018]: 2026-01-20 15:13:42.669 250022 INFO nova.compute.claims [None req-009e1aad-bf2b-43c0-8318-9dd6e620e746 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 5380c3d8-edb4-4366-85ab-3dc76ecc1f43] Claim successful on node compute-0.ctlplane.example.com
Jan 20 15:13:42 compute-0 nova_compute[250018]: 2026-01-20 15:13:42.816 250022 DEBUG oslo_concurrency.processutils [None req-009e1aad-bf2b-43c0-8318-9dd6e620e746 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:13:43 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:13:43 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:13:43 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:13:43.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:13:43 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:13:43 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2719836672' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:13:43 compute-0 nova_compute[250018]: 2026-01-20 15:13:43.361 250022 DEBUG oslo_concurrency.processutils [None req-009e1aad-bf2b-43c0-8318-9dd6e620e746 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.544s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:13:43 compute-0 nova_compute[250018]: 2026-01-20 15:13:43.366 250022 DEBUG nova.compute.provider_tree [None req-009e1aad-bf2b-43c0-8318-9dd6e620e746 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:13:43 compute-0 nova_compute[250018]: 2026-01-20 15:13:43.382 250022 DEBUG nova.scheduler.client.report [None req-009e1aad-bf2b-43c0-8318-9dd6e620e746 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:13:43 compute-0 nova_compute[250018]: 2026-01-20 15:13:43.403 250022 DEBUG oslo_concurrency.lockutils [None req-009e1aad-bf2b-43c0-8318-9dd6e620e746 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.743s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:13:43 compute-0 nova_compute[250018]: 2026-01-20 15:13:43.404 250022 DEBUG nova.compute.manager [None req-009e1aad-bf2b-43c0-8318-9dd6e620e746 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 5380c3d8-edb4-4366-85ab-3dc76ecc1f43] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 15:13:43 compute-0 nova_compute[250018]: 2026-01-20 15:13:43.450 250022 DEBUG nova.compute.manager [None req-009e1aad-bf2b-43c0-8318-9dd6e620e746 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 5380c3d8-edb4-4366-85ab-3dc76ecc1f43] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 20 15:13:43 compute-0 nova_compute[250018]: 2026-01-20 15:13:43.453 250022 DEBUG nova.network.neutron [None req-009e1aad-bf2b-43c0-8318-9dd6e620e746 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 5380c3d8-edb4-4366-85ab-3dc76ecc1f43] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 15:13:43 compute-0 nova_compute[250018]: 2026-01-20 15:13:43.476 250022 INFO nova.virt.libvirt.driver [None req-009e1aad-bf2b-43c0-8318-9dd6e620e746 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 5380c3d8-edb4-4366-85ab-3dc76ecc1f43] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 15:13:43 compute-0 nova_compute[250018]: 2026-01-20 15:13:43.498 250022 DEBUG nova.compute.manager [None req-009e1aad-bf2b-43c0-8318-9dd6e620e746 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 5380c3d8-edb4-4366-85ab-3dc76ecc1f43] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 15:13:43 compute-0 nova_compute[250018]: 2026-01-20 15:13:43.564 250022 INFO nova.virt.block_device [None req-009e1aad-bf2b-43c0-8318-9dd6e620e746 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 5380c3d8-edb4-4366-85ab-3dc76ecc1f43] Booting with volume 670aaaab-6a81-487d-a346-d03d445d8abe at /dev/vda
Jan 20 15:13:43 compute-0 ceph-mon[74360]: pgmap v2725: 321 pgs: 321 active+clean; 292 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 2.3 MiB/s wr, 194 op/s
Jan 20 15:13:43 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/2719836672' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:13:43 compute-0 nova_compute[250018]: 2026-01-20 15:13:43.646 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:13:43 compute-0 nova_compute[250018]: 2026-01-20 15:13:43.680 250022 DEBUG nova.policy [None req-009e1aad-bf2b-43c0-8318-9dd6e620e746 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'bf422e55e158420cbdae75f07a3bb97a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a49638950e1543fa8e0d251af5479623', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 20 15:13:43 compute-0 nova_compute[250018]: 2026-01-20 15:13:43.709 250022 DEBUG os_brick.utils [None req-009e1aad-bf2b-43c0-8318-9dd6e620e746 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Jan 20 15:13:43 compute-0 nova_compute[250018]: 2026-01-20 15:13:43.710 268150 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:13:43 compute-0 nova_compute[250018]: 2026-01-20 15:13:43.721 268150 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:13:43 compute-0 nova_compute[250018]: 2026-01-20 15:13:43.721 268150 DEBUG oslo.privsep.daemon [-] privsep: reply[8f683041-167f-4e48-a47e-f4e0c478b486]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:13:43 compute-0 nova_compute[250018]: 2026-01-20 15:13:43.722 268150 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:13:43 compute-0 nova_compute[250018]: 2026-01-20 15:13:43.729 268150 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:13:43 compute-0 nova_compute[250018]: 2026-01-20 15:13:43.730 268150 DEBUG oslo.privsep.daemon [-] privsep: reply[22ee232e-81dd-4784-b7e1-c903164595cb]: (4, ('InitiatorName=iqn.1994-05.com.redhat:228389a1f17e', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:13:43 compute-0 nova_compute[250018]: 2026-01-20 15:13:43.731 268150 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:13:43 compute-0 nova_compute[250018]: 2026-01-20 15:13:43.738 268150 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:13:43 compute-0 nova_compute[250018]: 2026-01-20 15:13:43.739 268150 DEBUG oslo.privsep.daemon [-] privsep: reply[75761293-694d-4cc7-8624-6182c003903e]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:13:43 compute-0 nova_compute[250018]: 2026-01-20 15:13:43.740 268150 DEBUG oslo.privsep.daemon [-] privsep: reply[bc23e28c-d80d-4040-b2a3-61e975beb095]: (4, '35085f33-1a27-41e3-805d-02c7ac6a1d7f') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:13:43 compute-0 nova_compute[250018]: 2026-01-20 15:13:43.740 250022 DEBUG oslo_concurrency.processutils [None req-009e1aad-bf2b-43c0-8318-9dd6e620e746 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:13:43 compute-0 nova_compute[250018]: 2026-01-20 15:13:43.784 250022 DEBUG oslo_concurrency.processutils [None req-009e1aad-bf2b-43c0-8318-9dd6e620e746 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] CMD "nvme version" returned: 0 in 0.044s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:13:43 compute-0 nova_compute[250018]: 2026-01-20 15:13:43.787 250022 DEBUG os_brick.initiator.connectors.lightos [None req-009e1aad-bf2b-43c0-8318-9dd6e620e746 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Jan 20 15:13:43 compute-0 nova_compute[250018]: 2026-01-20 15:13:43.787 250022 DEBUG os_brick.initiator.connectors.lightos [None req-009e1aad-bf2b-43c0-8318-9dd6e620e746 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Jan 20 15:13:43 compute-0 nova_compute[250018]: 2026-01-20 15:13:43.788 250022 DEBUG os_brick.initiator.connectors.lightos [None req-009e1aad-bf2b-43c0-8318-9dd6e620e746 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Jan 20 15:13:43 compute-0 nova_compute[250018]: 2026-01-20 15:13:43.788 250022 DEBUG os_brick.utils [None req-009e1aad-bf2b-43c0-8318-9dd6e620e746 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] <== get_connector_properties: return (78ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:228389a1f17e', 'do_local_attach': False, 'nvme_hostid': '5350774e-8b5e-4dba-80a9-92d405981c1d', 'system uuid': '35085f33-1a27-41e3-805d-02c7ac6a1d7f', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Jan 20 15:13:43 compute-0 nova_compute[250018]: 2026-01-20 15:13:43.788 250022 DEBUG nova.virt.block_device [None req-009e1aad-bf2b-43c0-8318-9dd6e620e746 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 5380c3d8-edb4-4366-85ab-3dc76ecc1f43] Updating existing volume attachment record: ba0da680-f438-4117-8180-9febfc710658 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Jan 20 15:13:43 compute-0 ovn_controller[148666]: 2026-01-20T15:13:43Z|00085|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:3b:1e:c2 10.100.0.8
Jan 20 15:13:43 compute-0 ovn_controller[148666]: 2026-01-20T15:13:43Z|00086|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:3b:1e:c2 10.100.0.8
Jan 20 15:13:44 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:13:44 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:13:44 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:13:44.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:13:44 compute-0 nova_compute[250018]: 2026-01-20 15:13:44.495 250022 DEBUG nova.network.neutron [None req-009e1aad-bf2b-43c0-8318-9dd6e620e746 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 5380c3d8-edb4-4366-85ab-3dc76ecc1f43] Successfully created port: d593d88a-ba32-4023-9e92-973064a24fbe _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 20 15:13:44 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 15:13:44 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3096174409' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:13:44 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2726: 321 pgs: 321 active+clean; 307 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 1.8 MiB/s wr, 183 op/s
Jan 20 15:13:44 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/418648228' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:13:44 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/3096174409' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:13:44 compute-0 ceph-mon[74360]: pgmap v2726: 321 pgs: 321 active+clean; 307 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 1.8 MiB/s wr, 183 op/s
Jan 20 15:13:44 compute-0 nova_compute[250018]: 2026-01-20 15:13:44.900 250022 DEBUG nova.compute.manager [None req-009e1aad-bf2b-43c0-8318-9dd6e620e746 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 5380c3d8-edb4-4366-85ab-3dc76ecc1f43] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 15:13:44 compute-0 nova_compute[250018]: 2026-01-20 15:13:44.901 250022 DEBUG nova.virt.libvirt.driver [None req-009e1aad-bf2b-43c0-8318-9dd6e620e746 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 5380c3d8-edb4-4366-85ab-3dc76ecc1f43] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 15:13:44 compute-0 nova_compute[250018]: 2026-01-20 15:13:44.902 250022 INFO nova.virt.libvirt.driver [None req-009e1aad-bf2b-43c0-8318-9dd6e620e746 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 5380c3d8-edb4-4366-85ab-3dc76ecc1f43] Creating image(s)
Jan 20 15:13:44 compute-0 nova_compute[250018]: 2026-01-20 15:13:44.902 250022 DEBUG nova.virt.libvirt.driver [None req-009e1aad-bf2b-43c0-8318-9dd6e620e746 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 5380c3d8-edb4-4366-85ab-3dc76ecc1f43] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Jan 20 15:13:44 compute-0 nova_compute[250018]: 2026-01-20 15:13:44.902 250022 DEBUG nova.virt.libvirt.driver [None req-009e1aad-bf2b-43c0-8318-9dd6e620e746 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 5380c3d8-edb4-4366-85ab-3dc76ecc1f43] Ensure instance console log exists: /var/lib/nova/instances/5380c3d8-edb4-4366-85ab-3dc76ecc1f43/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 15:13:44 compute-0 nova_compute[250018]: 2026-01-20 15:13:44.903 250022 DEBUG oslo_concurrency.lockutils [None req-009e1aad-bf2b-43c0-8318-9dd6e620e746 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:13:44 compute-0 nova_compute[250018]: 2026-01-20 15:13:44.903 250022 DEBUG oslo_concurrency.lockutils [None req-009e1aad-bf2b-43c0-8318-9dd6e620e746 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:13:44 compute-0 nova_compute[250018]: 2026-01-20 15:13:44.903 250022 DEBUG oslo_concurrency.lockutils [None req-009e1aad-bf2b-43c0-8318-9dd6e620e746 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:13:45 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:13:45 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:13:45 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:13:45.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:13:45 compute-0 nova_compute[250018]: 2026-01-20 15:13:45.288 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:13:45 compute-0 nova_compute[250018]: 2026-01-20 15:13:45.641 250022 DEBUG nova.network.neutron [None req-009e1aad-bf2b-43c0-8318-9dd6e620e746 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 5380c3d8-edb4-4366-85ab-3dc76ecc1f43] Successfully updated port: d593d88a-ba32-4023-9e92-973064a24fbe _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 20 15:13:45 compute-0 nova_compute[250018]: 2026-01-20 15:13:45.656 250022 DEBUG oslo_concurrency.lockutils [None req-009e1aad-bf2b-43c0-8318-9dd6e620e746 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Acquiring lock "refresh_cache-5380c3d8-edb4-4366-85ab-3dc76ecc1f43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:13:45 compute-0 nova_compute[250018]: 2026-01-20 15:13:45.656 250022 DEBUG oslo_concurrency.lockutils [None req-009e1aad-bf2b-43c0-8318-9dd6e620e746 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Acquired lock "refresh_cache-5380c3d8-edb4-4366-85ab-3dc76ecc1f43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:13:45 compute-0 nova_compute[250018]: 2026-01-20 15:13:45.656 250022 DEBUG nova.network.neutron [None req-009e1aad-bf2b-43c0-8318-9dd6e620e746 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 5380c3d8-edb4-4366-85ab-3dc76ecc1f43] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 15:13:45 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/1877180579' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:13:45 compute-0 nova_compute[250018]: 2026-01-20 15:13:45.738 250022 DEBUG nova.compute.manager [req-bc2b9eec-3add-44c0-a18e-75d3a7264b91 req-fb362809-d746-4498-87fc-f5af4a8dac03 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 5380c3d8-edb4-4366-85ab-3dc76ecc1f43] Received event network-changed-d593d88a-ba32-4023-9e92-973064a24fbe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:13:45 compute-0 nova_compute[250018]: 2026-01-20 15:13:45.738 250022 DEBUG nova.compute.manager [req-bc2b9eec-3add-44c0-a18e-75d3a7264b91 req-fb362809-d746-4498-87fc-f5af4a8dac03 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 5380c3d8-edb4-4366-85ab-3dc76ecc1f43] Refreshing instance network info cache due to event network-changed-d593d88a-ba32-4023-9e92-973064a24fbe. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 15:13:45 compute-0 nova_compute[250018]: 2026-01-20 15:13:45.739 250022 DEBUG oslo_concurrency.lockutils [req-bc2b9eec-3add-44c0-a18e-75d3a7264b91 req-fb362809-d746-4498-87fc-f5af4a8dac03 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-5380c3d8-edb4-4366-85ab-3dc76ecc1f43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:13:45 compute-0 nova_compute[250018]: 2026-01-20 15:13:45.878 250022 DEBUG nova.network.neutron [None req-009e1aad-bf2b-43c0-8318-9dd6e620e746 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 5380c3d8-edb4-4366-85ab-3dc76ecc1f43] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 15:13:46 compute-0 nova_compute[250018]: 2026-01-20 15:13:46.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:13:46 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:13:46 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:13:46 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:13:46.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:13:46 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2727: 321 pgs: 321 active+clean; 365 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 3.9 MiB/s wr, 212 op/s
Jan 20 15:13:46 compute-0 nova_compute[250018]: 2026-01-20 15:13:46.640 250022 DEBUG nova.network.neutron [None req-009e1aad-bf2b-43c0-8318-9dd6e620e746 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 5380c3d8-edb4-4366-85ab-3dc76ecc1f43] Updating instance_info_cache with network_info: [{"id": "d593d88a-ba32-4023-9e92-973064a24fbe", "address": "fa:16:3e:b5:46:3f", "network": {"id": "b677f1a9-dbaa-4373-8466-bd9ccf067b91", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-408170906-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a49638950e1543fa8e0d251af5479623", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd593d88a-ba", "ovs_interfaceid": "d593d88a-ba32-4023-9e92-973064a24fbe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:13:46 compute-0 nova_compute[250018]: 2026-01-20 15:13:46.675 250022 DEBUG oslo_concurrency.lockutils [None req-009e1aad-bf2b-43c0-8318-9dd6e620e746 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Releasing lock "refresh_cache-5380c3d8-edb4-4366-85ab-3dc76ecc1f43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:13:46 compute-0 nova_compute[250018]: 2026-01-20 15:13:46.675 250022 DEBUG nova.compute.manager [None req-009e1aad-bf2b-43c0-8318-9dd6e620e746 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 5380c3d8-edb4-4366-85ab-3dc76ecc1f43] Instance network_info: |[{"id": "d593d88a-ba32-4023-9e92-973064a24fbe", "address": "fa:16:3e:b5:46:3f", "network": {"id": "b677f1a9-dbaa-4373-8466-bd9ccf067b91", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-408170906-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a49638950e1543fa8e0d251af5479623", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd593d88a-ba", "ovs_interfaceid": "d593d88a-ba32-4023-9e92-973064a24fbe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 20 15:13:46 compute-0 nova_compute[250018]: 2026-01-20 15:13:46.676 250022 DEBUG oslo_concurrency.lockutils [req-bc2b9eec-3add-44c0-a18e-75d3a7264b91 req-fb362809-d746-4498-87fc-f5af4a8dac03 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-5380c3d8-edb4-4366-85ab-3dc76ecc1f43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:13:46 compute-0 nova_compute[250018]: 2026-01-20 15:13:46.676 250022 DEBUG nova.network.neutron [req-bc2b9eec-3add-44c0-a18e-75d3a7264b91 req-fb362809-d746-4498-87fc-f5af4a8dac03 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 5380c3d8-edb4-4366-85ab-3dc76ecc1f43] Refreshing network info cache for port d593d88a-ba32-4023-9e92-973064a24fbe _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 15:13:46 compute-0 nova_compute[250018]: 2026-01-20 15:13:46.679 250022 DEBUG nova.virt.libvirt.driver [None req-009e1aad-bf2b-43c0-8318-9dd6e620e746 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 5380c3d8-edb4-4366-85ab-3dc76ecc1f43] Start _get_guest_xml network_info=[{"id": "d593d88a-ba32-4023-9e92-973064a24fbe", "address": "fa:16:3e:b5:46:3f", "network": {"id": "b677f1a9-dbaa-4373-8466-bd9ccf067b91", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-408170906-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a49638950e1543fa8e0d251af5479623", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd593d88a-ba", "ovs_interfaceid": "d593d88a-ba32-4023-9e92-973064a24fbe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': 
'/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'mount_device': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'attachment_id': 'ba0da680-f438-4117-8180-9febfc710658', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-670aaaab-6a81-487d-a346-d03d445d8abe', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '670aaaab-6a81-487d-a346-d03d445d8abe', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '5380c3d8-edb4-4366-85ab-3dc76ecc1f43', 'attached_at': '', 'detached_at': '', 'volume_id': '670aaaab-6a81-487d-a346-d03d445d8abe', 'serial': '670aaaab-6a81-487d-a346-d03d445d8abe'}, 'disk_bus': 'virtio', 'guest_format': None, 'delete_on_termination': False, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 15:13:46 compute-0 nova_compute[250018]: 2026-01-20 15:13:46.682 250022 WARNING nova.virt.libvirt.driver [None req-009e1aad-bf2b-43c0-8318-9dd6e620e746 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:13:46 compute-0 nova_compute[250018]: 2026-01-20 15:13:46.686 250022 DEBUG nova.virt.libvirt.host [None req-009e1aad-bf2b-43c0-8318-9dd6e620e746 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 15:13:46 compute-0 nova_compute[250018]: 2026-01-20 15:13:46.686 250022 DEBUG nova.virt.libvirt.host [None req-009e1aad-bf2b-43c0-8318-9dd6e620e746 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 15:13:46 compute-0 nova_compute[250018]: 2026-01-20 15:13:46.688 250022 DEBUG nova.virt.libvirt.host [None req-009e1aad-bf2b-43c0-8318-9dd6e620e746 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 15:13:46 compute-0 nova_compute[250018]: 2026-01-20 15:13:46.689 250022 DEBUG nova.virt.libvirt.host [None req-009e1aad-bf2b-43c0-8318-9dd6e620e746 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 15:13:46 compute-0 nova_compute[250018]: 2026-01-20 15:13:46.690 250022 DEBUG nova.virt.libvirt.driver [None req-009e1aad-bf2b-43c0-8318-9dd6e620e746 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 15:13:46 compute-0 nova_compute[250018]: 2026-01-20 15:13:46.690 250022 DEBUG nova.virt.hardware [None req-009e1aad-bf2b-43c0-8318-9dd6e620e746 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 15:13:46 compute-0 nova_compute[250018]: 2026-01-20 15:13:46.690 250022 DEBUG nova.virt.hardware [None req-009e1aad-bf2b-43c0-8318-9dd6e620e746 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 15:13:46 compute-0 nova_compute[250018]: 2026-01-20 15:13:46.690 250022 DEBUG nova.virt.hardware [None req-009e1aad-bf2b-43c0-8318-9dd6e620e746 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 15:13:46 compute-0 nova_compute[250018]: 2026-01-20 15:13:46.691 250022 DEBUG nova.virt.hardware [None req-009e1aad-bf2b-43c0-8318-9dd6e620e746 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 15:13:46 compute-0 nova_compute[250018]: 2026-01-20 15:13:46.691 250022 DEBUG nova.virt.hardware [None req-009e1aad-bf2b-43c0-8318-9dd6e620e746 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 15:13:46 compute-0 nova_compute[250018]: 2026-01-20 15:13:46.691 250022 DEBUG nova.virt.hardware [None req-009e1aad-bf2b-43c0-8318-9dd6e620e746 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 15:13:46 compute-0 nova_compute[250018]: 2026-01-20 15:13:46.691 250022 DEBUG nova.virt.hardware [None req-009e1aad-bf2b-43c0-8318-9dd6e620e746 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 15:13:46 compute-0 nova_compute[250018]: 2026-01-20 15:13:46.691 250022 DEBUG nova.virt.hardware [None req-009e1aad-bf2b-43c0-8318-9dd6e620e746 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 15:13:46 compute-0 nova_compute[250018]: 2026-01-20 15:13:46.692 250022 DEBUG nova.virt.hardware [None req-009e1aad-bf2b-43c0-8318-9dd6e620e746 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 15:13:46 compute-0 nova_compute[250018]: 2026-01-20 15:13:46.692 250022 DEBUG nova.virt.hardware [None req-009e1aad-bf2b-43c0-8318-9dd6e620e746 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 15:13:46 compute-0 nova_compute[250018]: 2026-01-20 15:13:46.692 250022 DEBUG nova.virt.hardware [None req-009e1aad-bf2b-43c0-8318-9dd6e620e746 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 15:13:46 compute-0 nova_compute[250018]: 2026-01-20 15:13:46.720 250022 DEBUG nova.storage.rbd_utils [None req-009e1aad-bf2b-43c0-8318-9dd6e620e746 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] rbd image 5380c3d8-edb4-4366-85ab-3dc76ecc1f43_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:13:46 compute-0 nova_compute[250018]: 2026-01-20 15:13:46.724 250022 DEBUG oslo_concurrency.processutils [None req-009e1aad-bf2b-43c0-8318-9dd6e620e746 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:13:46 compute-0 ceph-mon[74360]: pgmap v2727: 321 pgs: 321 active+clean; 365 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 3.9 MiB/s wr, 212 op/s
Jan 20 15:13:46 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e397 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:13:47 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 15:13:47 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3855612625' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:13:47 compute-0 nova_compute[250018]: 2026-01-20 15:13:47.174 250022 DEBUG oslo_concurrency.processutils [None req-009e1aad-bf2b-43c0-8318-9dd6e620e746 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:13:47 compute-0 nova_compute[250018]: 2026-01-20 15:13:47.198 250022 DEBUG nova.virt.libvirt.vif [None req-009e1aad-bf2b-43c0-8318-9dd6e620e746 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T15:13:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-1422997990',display_name='tempest-TestVolumeBootPattern-server-1422997990',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-1422997990',id=185,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBjkv3PM31l7/LOeidHCDov4vvdGwqOT15IVVbWearXBCn3jQz2xB6ix8iz1XP+iiPXyhWuw0LpMPT9jQN2b0mvhqeZTHErGcz1VZLskRcT6iqcekmFxWykFxr44bv68XA==',key_name='tempest-TestVolumeBootPattern-474773317',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a49638950e1543fa8e0d251af5479623',ramdisk_id='',reservation_id='r-fn8hs3j5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-194644003',owner_user_name='tempest-TestVolumeBootPattern-194644003-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T15:13:43Z,user_data=None,user_id='bf422e55e158420cbdae75f07a3bb97a',uuid=5380c3d8-edb4-4366-85ab-3dc76ecc1f43,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d593d88a-ba32-4023-9e92-973064a24fbe", "address": "fa:16:3e:b5:46:3f", "network": {"id": "b677f1a9-dbaa-4373-8466-bd9ccf067b91", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-408170906-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"a49638950e1543fa8e0d251af5479623", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd593d88a-ba", "ovs_interfaceid": "d593d88a-ba32-4023-9e92-973064a24fbe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 15:13:47 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:13:47 compute-0 nova_compute[250018]: 2026-01-20 15:13:47.199 250022 DEBUG nova.network.os_vif_util [None req-009e1aad-bf2b-43c0-8318-9dd6e620e746 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Converting VIF {"id": "d593d88a-ba32-4023-9e92-973064a24fbe", "address": "fa:16:3e:b5:46:3f", "network": {"id": "b677f1a9-dbaa-4373-8466-bd9ccf067b91", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-408170906-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a49638950e1543fa8e0d251af5479623", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd593d88a-ba", "ovs_interfaceid": "d593d88a-ba32-4023-9e92-973064a24fbe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:13:47 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:13:47 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:13:47.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:13:47 compute-0 nova_compute[250018]: 2026-01-20 15:13:47.200 250022 DEBUG nova.network.os_vif_util [None req-009e1aad-bf2b-43c0-8318-9dd6e620e746 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b5:46:3f,bridge_name='br-int',has_traffic_filtering=True,id=d593d88a-ba32-4023-9e92-973064a24fbe,network=Network(b677f1a9-dbaa-4373-8466-bd9ccf067b91),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd593d88a-ba') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:13:47 compute-0 nova_compute[250018]: 2026-01-20 15:13:47.201 250022 DEBUG nova.objects.instance [None req-009e1aad-bf2b-43c0-8318-9dd6e620e746 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Lazy-loading 'pci_devices' on Instance uuid 5380c3d8-edb4-4366-85ab-3dc76ecc1f43 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:13:47 compute-0 nova_compute[250018]: 2026-01-20 15:13:47.217 250022 DEBUG nova.virt.libvirt.driver [None req-009e1aad-bf2b-43c0-8318-9dd6e620e746 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 5380c3d8-edb4-4366-85ab-3dc76ecc1f43] End _get_guest_xml xml=<domain type="kvm">
Jan 20 15:13:47 compute-0 nova_compute[250018]:   <uuid>5380c3d8-edb4-4366-85ab-3dc76ecc1f43</uuid>
Jan 20 15:13:47 compute-0 nova_compute[250018]:   <name>instance-000000b9</name>
Jan 20 15:13:47 compute-0 nova_compute[250018]:   <memory>131072</memory>
Jan 20 15:13:47 compute-0 nova_compute[250018]:   <vcpu>1</vcpu>
Jan 20 15:13:47 compute-0 nova_compute[250018]:   <metadata>
Jan 20 15:13:47 compute-0 nova_compute[250018]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 15:13:47 compute-0 nova_compute[250018]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 15:13:47 compute-0 nova_compute[250018]:       <nova:name>tempest-TestVolumeBootPattern-server-1422997990</nova:name>
Jan 20 15:13:47 compute-0 nova_compute[250018]:       <nova:creationTime>2026-01-20 15:13:46</nova:creationTime>
Jan 20 15:13:47 compute-0 nova_compute[250018]:       <nova:flavor name="m1.nano">
Jan 20 15:13:47 compute-0 nova_compute[250018]:         <nova:memory>128</nova:memory>
Jan 20 15:13:47 compute-0 nova_compute[250018]:         <nova:disk>1</nova:disk>
Jan 20 15:13:47 compute-0 nova_compute[250018]:         <nova:swap>0</nova:swap>
Jan 20 15:13:47 compute-0 nova_compute[250018]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 15:13:47 compute-0 nova_compute[250018]:         <nova:vcpus>1</nova:vcpus>
Jan 20 15:13:47 compute-0 nova_compute[250018]:       </nova:flavor>
Jan 20 15:13:47 compute-0 nova_compute[250018]:       <nova:owner>
Jan 20 15:13:47 compute-0 nova_compute[250018]:         <nova:user uuid="bf422e55e158420cbdae75f07a3bb97a">tempest-TestVolumeBootPattern-194644003-project-member</nova:user>
Jan 20 15:13:47 compute-0 nova_compute[250018]:         <nova:project uuid="a49638950e1543fa8e0d251af5479623">tempest-TestVolumeBootPattern-194644003</nova:project>
Jan 20 15:13:47 compute-0 nova_compute[250018]:       </nova:owner>
Jan 20 15:13:47 compute-0 nova_compute[250018]:       <nova:ports>
Jan 20 15:13:47 compute-0 nova_compute[250018]:         <nova:port uuid="d593d88a-ba32-4023-9e92-973064a24fbe">
Jan 20 15:13:47 compute-0 nova_compute[250018]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Jan 20 15:13:47 compute-0 nova_compute[250018]:         </nova:port>
Jan 20 15:13:47 compute-0 nova_compute[250018]:       </nova:ports>
Jan 20 15:13:47 compute-0 nova_compute[250018]:     </nova:instance>
Jan 20 15:13:47 compute-0 nova_compute[250018]:   </metadata>
Jan 20 15:13:47 compute-0 nova_compute[250018]:   <sysinfo type="smbios">
Jan 20 15:13:47 compute-0 nova_compute[250018]:     <system>
Jan 20 15:13:47 compute-0 nova_compute[250018]:       <entry name="manufacturer">RDO</entry>
Jan 20 15:13:47 compute-0 nova_compute[250018]:       <entry name="product">OpenStack Compute</entry>
Jan 20 15:13:47 compute-0 nova_compute[250018]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 15:13:47 compute-0 nova_compute[250018]:       <entry name="serial">5380c3d8-edb4-4366-85ab-3dc76ecc1f43</entry>
Jan 20 15:13:47 compute-0 nova_compute[250018]:       <entry name="uuid">5380c3d8-edb4-4366-85ab-3dc76ecc1f43</entry>
Jan 20 15:13:47 compute-0 nova_compute[250018]:       <entry name="family">Virtual Machine</entry>
Jan 20 15:13:47 compute-0 nova_compute[250018]:     </system>
Jan 20 15:13:47 compute-0 nova_compute[250018]:   </sysinfo>
Jan 20 15:13:47 compute-0 nova_compute[250018]:   <os>
Jan 20 15:13:47 compute-0 nova_compute[250018]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 15:13:47 compute-0 nova_compute[250018]:     <boot dev="hd"/>
Jan 20 15:13:47 compute-0 nova_compute[250018]:     <smbios mode="sysinfo"/>
Jan 20 15:13:47 compute-0 nova_compute[250018]:   </os>
Jan 20 15:13:47 compute-0 nova_compute[250018]:   <features>
Jan 20 15:13:47 compute-0 nova_compute[250018]:     <acpi/>
Jan 20 15:13:47 compute-0 nova_compute[250018]:     <apic/>
Jan 20 15:13:47 compute-0 nova_compute[250018]:     <vmcoreinfo/>
Jan 20 15:13:47 compute-0 nova_compute[250018]:   </features>
Jan 20 15:13:47 compute-0 nova_compute[250018]:   <clock offset="utc">
Jan 20 15:13:47 compute-0 nova_compute[250018]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 15:13:47 compute-0 nova_compute[250018]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 15:13:47 compute-0 nova_compute[250018]:     <timer name="hpet" present="no"/>
Jan 20 15:13:47 compute-0 nova_compute[250018]:   </clock>
Jan 20 15:13:47 compute-0 nova_compute[250018]:   <cpu mode="custom" match="exact">
Jan 20 15:13:47 compute-0 nova_compute[250018]:     <model>Nehalem</model>
Jan 20 15:13:47 compute-0 nova_compute[250018]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 15:13:47 compute-0 nova_compute[250018]:   </cpu>
Jan 20 15:13:47 compute-0 nova_compute[250018]:   <devices>
Jan 20 15:13:47 compute-0 nova_compute[250018]:     <disk type="network" device="cdrom">
Jan 20 15:13:47 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 15:13:47 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/5380c3d8-edb4-4366-85ab-3dc76ecc1f43_disk.config">
Jan 20 15:13:47 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 15:13:47 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 15:13:47 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 15:13:47 compute-0 nova_compute[250018]:       </source>
Jan 20 15:13:47 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 15:13:47 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 15:13:47 compute-0 nova_compute[250018]:       </auth>
Jan 20 15:13:47 compute-0 nova_compute[250018]:       <target dev="sda" bus="sata"/>
Jan 20 15:13:47 compute-0 nova_compute[250018]:     </disk>
Jan 20 15:13:47 compute-0 nova_compute[250018]:     <disk type="network" device="disk">
Jan 20 15:13:47 compute-0 nova_compute[250018]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 20 15:13:47 compute-0 nova_compute[250018]:       <source protocol="rbd" name="volumes/volume-670aaaab-6a81-487d-a346-d03d445d8abe">
Jan 20 15:13:47 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 15:13:47 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 15:13:47 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 15:13:47 compute-0 nova_compute[250018]:       </source>
Jan 20 15:13:47 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 15:13:47 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 15:13:47 compute-0 nova_compute[250018]:       </auth>
Jan 20 15:13:47 compute-0 nova_compute[250018]:       <target dev="vda" bus="virtio"/>
Jan 20 15:13:47 compute-0 nova_compute[250018]:       <serial>670aaaab-6a81-487d-a346-d03d445d8abe</serial>
Jan 20 15:13:47 compute-0 nova_compute[250018]:     </disk>
Jan 20 15:13:47 compute-0 nova_compute[250018]:     <interface type="ethernet">
Jan 20 15:13:47 compute-0 nova_compute[250018]:       <mac address="fa:16:3e:b5:46:3f"/>
Jan 20 15:13:47 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 15:13:47 compute-0 nova_compute[250018]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 15:13:47 compute-0 nova_compute[250018]:       <mtu size="1442"/>
Jan 20 15:13:47 compute-0 nova_compute[250018]:       <target dev="tapd593d88a-ba"/>
Jan 20 15:13:47 compute-0 nova_compute[250018]:     </interface>
Jan 20 15:13:47 compute-0 nova_compute[250018]:     <serial type="pty">
Jan 20 15:13:47 compute-0 nova_compute[250018]:       <log file="/var/lib/nova/instances/5380c3d8-edb4-4366-85ab-3dc76ecc1f43/console.log" append="off"/>
Jan 20 15:13:47 compute-0 nova_compute[250018]:     </serial>
Jan 20 15:13:47 compute-0 nova_compute[250018]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 15:13:47 compute-0 nova_compute[250018]:     <video>
Jan 20 15:13:47 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 15:13:47 compute-0 nova_compute[250018]:     </video>
Jan 20 15:13:47 compute-0 nova_compute[250018]:     <input type="tablet" bus="usb"/>
Jan 20 15:13:47 compute-0 nova_compute[250018]:     <rng model="virtio">
Jan 20 15:13:47 compute-0 nova_compute[250018]:       <backend model="random">/dev/urandom</backend>
Jan 20 15:13:47 compute-0 nova_compute[250018]:     </rng>
Jan 20 15:13:47 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root"/>
Jan 20 15:13:47 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:13:47 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:13:47 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:13:47 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:13:47 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:13:47 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:13:47 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:13:47 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:13:47 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:13:47 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:13:47 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:13:47 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:13:47 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:13:47 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:13:47 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:13:47 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:13:47 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:13:47 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:13:47 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:13:47 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:13:47 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:13:47 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:13:47 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:13:47 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:13:47 compute-0 nova_compute[250018]:     <controller type="usb" index="0"/>
Jan 20 15:13:47 compute-0 nova_compute[250018]:     <memballoon model="virtio">
Jan 20 15:13:47 compute-0 nova_compute[250018]:       <stats period="10"/>
Jan 20 15:13:47 compute-0 nova_compute[250018]:     </memballoon>
Jan 20 15:13:47 compute-0 nova_compute[250018]:   </devices>
Jan 20 15:13:47 compute-0 nova_compute[250018]: </domain>
Jan 20 15:13:47 compute-0 nova_compute[250018]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 15:13:47 compute-0 nova_compute[250018]: 2026-01-20 15:13:47.219 250022 DEBUG nova.compute.manager [None req-009e1aad-bf2b-43c0-8318-9dd6e620e746 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 5380c3d8-edb4-4366-85ab-3dc76ecc1f43] Preparing to wait for external event network-vif-plugged-d593d88a-ba32-4023-9e92-973064a24fbe prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 20 15:13:47 compute-0 nova_compute[250018]: 2026-01-20 15:13:47.219 250022 DEBUG oslo_concurrency.lockutils [None req-009e1aad-bf2b-43c0-8318-9dd6e620e746 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Acquiring lock "5380c3d8-edb4-4366-85ab-3dc76ecc1f43-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:13:47 compute-0 nova_compute[250018]: 2026-01-20 15:13:47.220 250022 DEBUG oslo_concurrency.lockutils [None req-009e1aad-bf2b-43c0-8318-9dd6e620e746 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Lock "5380c3d8-edb4-4366-85ab-3dc76ecc1f43-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:13:47 compute-0 nova_compute[250018]: 2026-01-20 15:13:47.220 250022 DEBUG oslo_concurrency.lockutils [None req-009e1aad-bf2b-43c0-8318-9dd6e620e746 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Lock "5380c3d8-edb4-4366-85ab-3dc76ecc1f43-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:13:47 compute-0 nova_compute[250018]: 2026-01-20 15:13:47.221 250022 DEBUG nova.virt.libvirt.vif [None req-009e1aad-bf2b-43c0-8318-9dd6e620e746 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T15:13:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-1422997990',display_name='tempest-TestVolumeBootPattern-server-1422997990',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-1422997990',id=185,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBjkv3PM31l7/LOeidHCDov4vvdGwqOT15IVVbWearXBCn3jQz2xB6ix8iz1XP+iiPXyhWuw0LpMPT9jQN2b0mvhqeZTHErGcz1VZLskRcT6iqcekmFxWykFxr44bv68XA==',key_name='tempest-TestVolumeBootPattern-474773317',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a49638950e1543fa8e0d251af5479623',ramdisk_id='',reservation_id='r-fn8hs3j5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-194644003',owner_user_name='tempest-TestVolumeBootPattern-194644003-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T15:13:43Z,user_data=None,user_id='bf422e55e158420cbdae75f07a3bb97a',uuid=5380c3d8-edb4-4366-85ab-3dc76ecc1f43,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d593d88a-ba32-4023-9e92-973064a24fbe", "address": "fa:16:3e:b5:46:3f", "network": {"id": "b677f1a9-dbaa-4373-8466-bd9ccf067b91", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-408170906-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"a49638950e1543fa8e0d251af5479623", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd593d88a-ba", "ovs_interfaceid": "d593d88a-ba32-4023-9e92-973064a24fbe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 15:13:47 compute-0 nova_compute[250018]: 2026-01-20 15:13:47.221 250022 DEBUG nova.network.os_vif_util [None req-009e1aad-bf2b-43c0-8318-9dd6e620e746 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Converting VIF {"id": "d593d88a-ba32-4023-9e92-973064a24fbe", "address": "fa:16:3e:b5:46:3f", "network": {"id": "b677f1a9-dbaa-4373-8466-bd9ccf067b91", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-408170906-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a49638950e1543fa8e0d251af5479623", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd593d88a-ba", "ovs_interfaceid": "d593d88a-ba32-4023-9e92-973064a24fbe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:13:47 compute-0 nova_compute[250018]: 2026-01-20 15:13:47.222 250022 DEBUG nova.network.os_vif_util [None req-009e1aad-bf2b-43c0-8318-9dd6e620e746 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b5:46:3f,bridge_name='br-int',has_traffic_filtering=True,id=d593d88a-ba32-4023-9e92-973064a24fbe,network=Network(b677f1a9-dbaa-4373-8466-bd9ccf067b91),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd593d88a-ba') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:13:47 compute-0 nova_compute[250018]: 2026-01-20 15:13:47.222 250022 DEBUG os_vif [None req-009e1aad-bf2b-43c0-8318-9dd6e620e746 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b5:46:3f,bridge_name='br-int',has_traffic_filtering=True,id=d593d88a-ba32-4023-9e92-973064a24fbe,network=Network(b677f1a9-dbaa-4373-8466-bd9ccf067b91),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd593d88a-ba') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 15:13:47 compute-0 nova_compute[250018]: 2026-01-20 15:13:47.223 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:13:47 compute-0 nova_compute[250018]: 2026-01-20 15:13:47.223 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:13:47 compute-0 nova_compute[250018]: 2026-01-20 15:13:47.224 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:13:47 compute-0 nova_compute[250018]: 2026-01-20 15:13:47.227 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:13:47 compute-0 nova_compute[250018]: 2026-01-20 15:13:47.227 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd593d88a-ba, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:13:47 compute-0 nova_compute[250018]: 2026-01-20 15:13:47.227 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd593d88a-ba, col_values=(('external_ids', {'iface-id': 'd593d88a-ba32-4023-9e92-973064a24fbe', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b5:46:3f', 'vm-uuid': '5380c3d8-edb4-4366-85ab-3dc76ecc1f43'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:13:47 compute-0 NetworkManager[48960]: <info>  [1768922027.2297] manager: (tapd593d88a-ba): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/318)
Jan 20 15:13:47 compute-0 nova_compute[250018]: 2026-01-20 15:13:47.232 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 15:13:47 compute-0 nova_compute[250018]: 2026-01-20 15:13:47.235 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:13:47 compute-0 nova_compute[250018]: 2026-01-20 15:13:47.236 250022 INFO os_vif [None req-009e1aad-bf2b-43c0-8318-9dd6e620e746 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b5:46:3f,bridge_name='br-int',has_traffic_filtering=True,id=d593d88a-ba32-4023-9e92-973064a24fbe,network=Network(b677f1a9-dbaa-4373-8466-bd9ccf067b91),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd593d88a-ba')
Jan 20 15:13:47 compute-0 nova_compute[250018]: 2026-01-20 15:13:47.290 250022 DEBUG nova.virt.libvirt.driver [None req-009e1aad-bf2b-43c0-8318-9dd6e620e746 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:13:47 compute-0 nova_compute[250018]: 2026-01-20 15:13:47.291 250022 DEBUG nova.virt.libvirt.driver [None req-009e1aad-bf2b-43c0-8318-9dd6e620e746 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:13:47 compute-0 nova_compute[250018]: 2026-01-20 15:13:47.291 250022 DEBUG nova.virt.libvirt.driver [None req-009e1aad-bf2b-43c0-8318-9dd6e620e746 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] No VIF found with MAC fa:16:3e:b5:46:3f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 15:13:47 compute-0 nova_compute[250018]: 2026-01-20 15:13:47.291 250022 INFO nova.virt.libvirt.driver [None req-009e1aad-bf2b-43c0-8318-9dd6e620e746 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 5380c3d8-edb4-4366-85ab-3dc76ecc1f43] Using config drive
Jan 20 15:13:47 compute-0 nova_compute[250018]: 2026-01-20 15:13:47.316 250022 DEBUG nova.storage.rbd_utils [None req-009e1aad-bf2b-43c0-8318-9dd6e620e746 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] rbd image 5380c3d8-edb4-4366-85ab-3dc76ecc1f43_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:13:47 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3855612625' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:13:48 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:13:48 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:13:48 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:13:48.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:13:48 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2728: 321 pgs: 321 active+clean; 365 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 160 op/s
Jan 20 15:13:48 compute-0 ceph-mon[74360]: pgmap v2728: 321 pgs: 321 active+clean; 365 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 160 op/s
Jan 20 15:13:49 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:13:49 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:13:49 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:13:49.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:13:50 compute-0 nova_compute[250018]: 2026-01-20 15:13:50.093 250022 INFO nova.virt.libvirt.driver [None req-009e1aad-bf2b-43c0-8318-9dd6e620e746 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 5380c3d8-edb4-4366-85ab-3dc76ecc1f43] Creating config drive at /var/lib/nova/instances/5380c3d8-edb4-4366-85ab-3dc76ecc1f43/disk.config
Jan 20 15:13:50 compute-0 nova_compute[250018]: 2026-01-20 15:13:50.097 250022 DEBUG oslo_concurrency.processutils [None req-009e1aad-bf2b-43c0-8318-9dd6e620e746 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5380c3d8-edb4-4366-85ab-3dc76ecc1f43/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpn6j4_7v3 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:13:50 compute-0 nova_compute[250018]: 2026-01-20 15:13:50.125 250022 DEBUG nova.network.neutron [req-bc2b9eec-3add-44c0-a18e-75d3a7264b91 req-fb362809-d746-4498-87fc-f5af4a8dac03 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 5380c3d8-edb4-4366-85ab-3dc76ecc1f43] Updated VIF entry in instance network info cache for port d593d88a-ba32-4023-9e92-973064a24fbe. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 15:13:50 compute-0 nova_compute[250018]: 2026-01-20 15:13:50.126 250022 DEBUG nova.network.neutron [req-bc2b9eec-3add-44c0-a18e-75d3a7264b91 req-fb362809-d746-4498-87fc-f5af4a8dac03 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 5380c3d8-edb4-4366-85ab-3dc76ecc1f43] Updating instance_info_cache with network_info: [{"id": "d593d88a-ba32-4023-9e92-973064a24fbe", "address": "fa:16:3e:b5:46:3f", "network": {"id": "b677f1a9-dbaa-4373-8466-bd9ccf067b91", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-408170906-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a49638950e1543fa8e0d251af5479623", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd593d88a-ba", "ovs_interfaceid": "d593d88a-ba32-4023-9e92-973064a24fbe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:13:50 compute-0 nova_compute[250018]: 2026-01-20 15:13:50.153 250022 DEBUG oslo_concurrency.lockutils [req-bc2b9eec-3add-44c0-a18e-75d3a7264b91 req-fb362809-d746-4498-87fc-f5af4a8dac03 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-5380c3d8-edb4-4366-85ab-3dc76ecc1f43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:13:50 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:13:50 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:13:50 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:13:50.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:13:50 compute-0 nova_compute[250018]: 2026-01-20 15:13:50.232 250022 DEBUG oslo_concurrency.processutils [None req-009e1aad-bf2b-43c0-8318-9dd6e620e746 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5380c3d8-edb4-4366-85ab-3dc76ecc1f43/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpn6j4_7v3" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:13:50 compute-0 nova_compute[250018]: 2026-01-20 15:13:50.263 250022 DEBUG nova.storage.rbd_utils [None req-009e1aad-bf2b-43c0-8318-9dd6e620e746 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] rbd image 5380c3d8-edb4-4366-85ab-3dc76ecc1f43_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:13:50 compute-0 nova_compute[250018]: 2026-01-20 15:13:50.266 250022 DEBUG oslo_concurrency.processutils [None req-009e1aad-bf2b-43c0-8318-9dd6e620e746 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5380c3d8-edb4-4366-85ab-3dc76ecc1f43/disk.config 5380c3d8-edb4-4366-85ab-3dc76ecc1f43_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:13:50 compute-0 nova_compute[250018]: 2026-01-20 15:13:50.297 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:13:50 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2729: 321 pgs: 321 active+clean; 399 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 5.4 MiB/s wr, 271 op/s
Jan 20 15:13:50 compute-0 nova_compute[250018]: 2026-01-20 15:13:50.571 250022 DEBUG oslo_concurrency.processutils [None req-009e1aad-bf2b-43c0-8318-9dd6e620e746 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5380c3d8-edb4-4366-85ab-3dc76ecc1f43/disk.config 5380c3d8-edb4-4366-85ab-3dc76ecc1f43_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.305s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:13:50 compute-0 nova_compute[250018]: 2026-01-20 15:13:50.572 250022 INFO nova.virt.libvirt.driver [None req-009e1aad-bf2b-43c0-8318-9dd6e620e746 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 5380c3d8-edb4-4366-85ab-3dc76ecc1f43] Deleting local config drive /var/lib/nova/instances/5380c3d8-edb4-4366-85ab-3dc76ecc1f43/disk.config because it was imported into RBD.
Jan 20 15:13:50 compute-0 kernel: tapd593d88a-ba: entered promiscuous mode
Jan 20 15:13:50 compute-0 NetworkManager[48960]: <info>  [1768922030.6522] manager: (tapd593d88a-ba): new Tun device (/org/freedesktop/NetworkManager/Devices/319)
Jan 20 15:13:50 compute-0 ovn_controller[148666]: 2026-01-20T15:13:50Z|00656|binding|INFO|Claiming lport d593d88a-ba32-4023-9e92-973064a24fbe for this chassis.
Jan 20 15:13:50 compute-0 ovn_controller[148666]: 2026-01-20T15:13:50Z|00657|binding|INFO|d593d88a-ba32-4023-9e92-973064a24fbe: Claiming fa:16:3e:b5:46:3f 10.100.0.12
Jan 20 15:13:50 compute-0 nova_compute[250018]: 2026-01-20 15:13:50.654 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:13:50 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:13:50.664 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b5:46:3f 10.100.0.12'], port_security=['fa:16:3e:b5:46:3f 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '5380c3d8-edb4-4366-85ab-3dc76ecc1f43', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b677f1a9-dbaa-4373-8466-bd9ccf067b91', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a49638950e1543fa8e0d251af5479623', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c29da5ec-6cb2-4047-ba89-70fa67a96476', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=76ec1139-009f-49fe-bfde-07c0ef9e8b12, chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=d593d88a-ba32-4023-9e92-973064a24fbe) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:13:50 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:13:50.666 160071 INFO neutron.agent.ovn.metadata.agent [-] Port d593d88a-ba32-4023-9e92-973064a24fbe in datapath b677f1a9-dbaa-4373-8466-bd9ccf067b91 bound to our chassis
Jan 20 15:13:50 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:13:50.667 160071 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b677f1a9-dbaa-4373-8466-bd9ccf067b91
Jan 20 15:13:50 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:13:50.684 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[9e48d58f-14c0-4b8e-b2dc-64b737c93f46]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:13:50 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:13:50.685 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb677f1a9-d1 in ovnmeta-b677f1a9-dbaa-4373-8466-bd9ccf067b91 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 15:13:50 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:13:50.688 257604 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb677f1a9-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 15:13:50 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:13:50.688 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[ace24afd-faf0-462b-b64b-7e326decdb40]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:13:50 compute-0 systemd-udevd[358674]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 15:13:50 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:13:50.690 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[56dca850-ac3f-4dfc-8c7d-61ff411ac728]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:13:50 compute-0 systemd-machined[216401]: New machine qemu-81-instance-000000b9.
Jan 20 15:13:50 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:13:50.708 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[97e7e2b9-c8c6-43eb-8bc9-1bd243c07f59]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:13:50 compute-0 NetworkManager[48960]: <info>  [1768922030.7116] device (tapd593d88a-ba): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 15:13:50 compute-0 NetworkManager[48960]: <info>  [1768922030.7122] device (tapd593d88a-ba): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 15:13:50 compute-0 nova_compute[250018]: 2026-01-20 15:13:50.724 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:13:50 compute-0 systemd[1]: Started Virtual Machine qemu-81-instance-000000b9.
Jan 20 15:13:50 compute-0 ovn_controller[148666]: 2026-01-20T15:13:50Z|00658|binding|INFO|Setting lport d593d88a-ba32-4023-9e92-973064a24fbe ovn-installed in OVS
Jan 20 15:13:50 compute-0 ovn_controller[148666]: 2026-01-20T15:13:50Z|00659|binding|INFO|Setting lport d593d88a-ba32-4023-9e92-973064a24fbe up in Southbound
Jan 20 15:13:50 compute-0 nova_compute[250018]: 2026-01-20 15:13:50.732 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:13:50 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:13:50.736 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[530c5c35-4435-4a76-9d75-2ad9737c18d3]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:13:50 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:13:50.777 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[db7f7259-49eb-4aaf-ac3b-ef8a8e83bfc0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:13:50 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:13:50.782 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[aa290ad7-f617-4b2b-9e6c-d846602587b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:13:50 compute-0 NetworkManager[48960]: <info>  [1768922030.7838] manager: (tapb677f1a9-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/320)
Jan 20 15:13:50 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:13:50.822 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[3e08bbd3-71a6-4077-b92c-46213c2723e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:13:50 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:13:50.825 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[25722a55-e450-4ddb-9cfe-197bd95f3be0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:13:50 compute-0 NetworkManager[48960]: <info>  [1768922030.8540] device (tapb677f1a9-d0): carrier: link connected
Jan 20 15:13:50 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:13:50.860 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[fd4b5809-1742-4f0f-8320-165a4f8f6633]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:13:50 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:13:50.876 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[ea8e80ba-b0da-4473-928d-363c7a508277]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb677f1a9-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a4:c8:34'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 210], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 797157, 'reachable_time': 41005, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 358707, 'error': None, 'target': 'ovnmeta-b677f1a9-dbaa-4373-8466-bd9ccf067b91', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:13:50 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:13:50.888 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[0fe2e95e-6570-4e7b-af53-5538290f3982]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea4:c834'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 797157, 'tstamp': 797157}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 358708, 'error': None, 'target': 'ovnmeta-b677f1a9-dbaa-4373-8466-bd9ccf067b91', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:13:50 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:13:50.902 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[a55940db-11c9-4fd1-afa1-c7f463d047af]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb677f1a9-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a4:c8:34'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 210], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 797157, 'reachable_time': 41005, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 358709, 'error': None, 'target': 'ovnmeta-b677f1a9-dbaa-4373-8466-bd9ccf067b91', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:13:50 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:13:50.935 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[a9ca0118-a20f-4e72-9269-5a825338cbee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:13:50 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:13:50.996 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[c33307a5-bb47-4fd5-8de6-a8ef0c0d38bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:13:50 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:13:50.997 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb677f1a9-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:13:50 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:13:50.997 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:13:50 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:13:50.997 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb677f1a9-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:13:50 compute-0 nova_compute[250018]: 2026-01-20 15:13:50.999 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:13:50 compute-0 NetworkManager[48960]: <info>  [1768922030.9994] manager: (tapb677f1a9-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/321)
Jan 20 15:13:51 compute-0 kernel: tapb677f1a9-d0: entered promiscuous mode
Jan 20 15:13:51 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:13:51.001 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb677f1a9-d0, col_values=(('external_ids', {'iface-id': '1aa285ce-a9ae-4d1e-b4b9-c72f4e0b8d65'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:13:51 compute-0 ovn_controller[148666]: 2026-01-20T15:13:51Z|00660|binding|INFO|Releasing lport 1aa285ce-a9ae-4d1e-b4b9-c72f4e0b8d65 from this chassis (sb_readonly=0)
Jan 20 15:13:51 compute-0 nova_compute[250018]: 2026-01-20 15:13:51.015 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:13:51 compute-0 nova_compute[250018]: 2026-01-20 15:13:51.018 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:13:51 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:13:51.019 160071 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b677f1a9-dbaa-4373-8466-bd9ccf067b91.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b677f1a9-dbaa-4373-8466-bd9ccf067b91.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 15:13:51 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:13:51.020 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[d3945b3c-494b-40b7-819c-a6b389d1ba61]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:13:51 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:13:51.020 160071 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 15:13:51 compute-0 ovn_metadata_agent[160049]: global
Jan 20 15:13:51 compute-0 ovn_metadata_agent[160049]:     log         /dev/log local0 debug
Jan 20 15:13:51 compute-0 ovn_metadata_agent[160049]:     log-tag     haproxy-metadata-proxy-b677f1a9-dbaa-4373-8466-bd9ccf067b91
Jan 20 15:13:51 compute-0 ovn_metadata_agent[160049]:     user        root
Jan 20 15:13:51 compute-0 ovn_metadata_agent[160049]:     group       root
Jan 20 15:13:51 compute-0 ovn_metadata_agent[160049]:     maxconn     1024
Jan 20 15:13:51 compute-0 ovn_metadata_agent[160049]:     pidfile     /var/lib/neutron/external/pids/b677f1a9-dbaa-4373-8466-bd9ccf067b91.pid.haproxy
Jan 20 15:13:51 compute-0 ovn_metadata_agent[160049]:     daemon
Jan 20 15:13:51 compute-0 ovn_metadata_agent[160049]: 
Jan 20 15:13:51 compute-0 ovn_metadata_agent[160049]: defaults
Jan 20 15:13:51 compute-0 ovn_metadata_agent[160049]:     log global
Jan 20 15:13:51 compute-0 ovn_metadata_agent[160049]:     mode http
Jan 20 15:13:51 compute-0 ovn_metadata_agent[160049]:     option httplog
Jan 20 15:13:51 compute-0 ovn_metadata_agent[160049]:     option dontlognull
Jan 20 15:13:51 compute-0 ovn_metadata_agent[160049]:     option http-server-close
Jan 20 15:13:51 compute-0 ovn_metadata_agent[160049]:     option forwardfor
Jan 20 15:13:51 compute-0 ovn_metadata_agent[160049]:     retries                 3
Jan 20 15:13:51 compute-0 ovn_metadata_agent[160049]:     timeout http-request    30s
Jan 20 15:13:51 compute-0 ovn_metadata_agent[160049]:     timeout connect         30s
Jan 20 15:13:51 compute-0 ovn_metadata_agent[160049]:     timeout client          32s
Jan 20 15:13:51 compute-0 ovn_metadata_agent[160049]:     timeout server          32s
Jan 20 15:13:51 compute-0 ovn_metadata_agent[160049]:     timeout http-keep-alive 30s
Jan 20 15:13:51 compute-0 ovn_metadata_agent[160049]: 
Jan 20 15:13:51 compute-0 ovn_metadata_agent[160049]: 
Jan 20 15:13:51 compute-0 ovn_metadata_agent[160049]: listen listener
Jan 20 15:13:51 compute-0 ovn_metadata_agent[160049]:     bind 169.254.169.254:80
Jan 20 15:13:51 compute-0 ovn_metadata_agent[160049]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 15:13:51 compute-0 ovn_metadata_agent[160049]:     http-request add-header X-OVN-Network-ID b677f1a9-dbaa-4373-8466-bd9ccf067b91
Jan 20 15:13:51 compute-0 ovn_metadata_agent[160049]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 15:13:51 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:13:51.021 160071 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b677f1a9-dbaa-4373-8466-bd9ccf067b91', 'env', 'PROCESS_TAG=haproxy-b677f1a9-dbaa-4373-8466-bd9ccf067b91', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b677f1a9-dbaa-4373-8466-bd9ccf067b91.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 15:13:51 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:13:51 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:13:51 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:13:51.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:13:51 compute-0 podman[358776]: 2026-01-20 15:13:51.394605727 +0000 UTC m=+0.066262758 container create 78699a6190e449634e6c052d2a8e5b4779f2df8f9c4de6f19fc48bbe8927e876 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b677f1a9-dbaa-4373-8466-bd9ccf067b91, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, tcib_managed=true)
Jan 20 15:13:51 compute-0 nova_compute[250018]: 2026-01-20 15:13:51.440 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768922031.4399238, 5380c3d8-edb4-4366-85ab-3dc76ecc1f43 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:13:51 compute-0 nova_compute[250018]: 2026-01-20 15:13:51.441 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 5380c3d8-edb4-4366-85ab-3dc76ecc1f43] VM Started (Lifecycle Event)
Jan 20 15:13:51 compute-0 podman[358776]: 2026-01-20 15:13:51.349901671 +0000 UTC m=+0.021558722 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 15:13:51 compute-0 nova_compute[250018]: 2026-01-20 15:13:51.458 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 5380c3d8-edb4-4366-85ab-3dc76ecc1f43] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:13:51 compute-0 nova_compute[250018]: 2026-01-20 15:13:51.464 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768922031.440664, 5380c3d8-edb4-4366-85ab-3dc76ecc1f43 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:13:51 compute-0 nova_compute[250018]: 2026-01-20 15:13:51.464 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 5380c3d8-edb4-4366-85ab-3dc76ecc1f43] VM Paused (Lifecycle Event)
Jan 20 15:13:51 compute-0 NetworkManager[48960]: <info>  [1768922031.4831] manager: (patch-br-int-to-provnet-b62c391b-f7a3-4a38-a0df-72ac0383ca74): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/322)
Jan 20 15:13:51 compute-0 NetworkManager[48960]: <info>  [1768922031.4836] manager: (patch-provnet-b62c391b-f7a3-4a38-a0df-72ac0383ca74-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/323)
Jan 20 15:13:51 compute-0 nova_compute[250018]: 2026-01-20 15:13:51.484 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 5380c3d8-edb4-4366-85ab-3dc76ecc1f43] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:13:51 compute-0 nova_compute[250018]: 2026-01-20 15:13:51.485 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:13:51 compute-0 nova_compute[250018]: 2026-01-20 15:13:51.492 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 5380c3d8-edb4-4366-85ab-3dc76ecc1f43] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:13:51 compute-0 nova_compute[250018]: 2026-01-20 15:13:51.511 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 5380c3d8-edb4-4366-85ab-3dc76ecc1f43] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 15:13:51 compute-0 systemd[1]: Started libpod-conmon-78699a6190e449634e6c052d2a8e5b4779f2df8f9c4de6f19fc48bbe8927e876.scope.
Jan 20 15:13:51 compute-0 ceph-mon[74360]: pgmap v2729: 321 pgs: 321 active+clean; 399 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 5.4 MiB/s wr, 271 op/s
Jan 20 15:13:51 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:13:51 compute-0 nova_compute[250018]: 2026-01-20 15:13:51.609 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:13:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9deff99ba270527f612a48f0d03c948ddc109c49f0ef75c6a048849552407896/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 15:13:51 compute-0 ovn_controller[148666]: 2026-01-20T15:13:51Z|00661|binding|INFO|Releasing lport b20b0e27-0b08-4316-b6df-6784416f44c0 from this chassis (sb_readonly=0)
Jan 20 15:13:51 compute-0 ovn_controller[148666]: 2026-01-20T15:13:51Z|00662|binding|INFO|Releasing lport 1aa285ce-a9ae-4d1e-b4b9-c72f4e0b8d65 from this chassis (sb_readonly=0)
Jan 20 15:13:51 compute-0 nova_compute[250018]: 2026-01-20 15:13:51.625 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:13:51 compute-0 podman[358776]: 2026-01-20 15:13:51.625626259 +0000 UTC m=+0.297283310 container init 78699a6190e449634e6c052d2a8e5b4779f2df8f9c4de6f19fc48bbe8927e876 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b677f1a9-dbaa-4373-8466-bd9ccf067b91, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 20 15:13:51 compute-0 nova_compute[250018]: 2026-01-20 15:13:51.628 250022 DEBUG nova.compute.manager [req-0ce4f935-8307-424f-bb21-8c00de6cbc86 req-39b3cfe3-bde9-4642-9c30-e839a38658f3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 5380c3d8-edb4-4366-85ab-3dc76ecc1f43] Received event network-vif-plugged-d593d88a-ba32-4023-9e92-973064a24fbe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:13:51 compute-0 nova_compute[250018]: 2026-01-20 15:13:51.628 250022 DEBUG oslo_concurrency.lockutils [req-0ce4f935-8307-424f-bb21-8c00de6cbc86 req-39b3cfe3-bde9-4642-9c30-e839a38658f3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "5380c3d8-edb4-4366-85ab-3dc76ecc1f43-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:13:51 compute-0 nova_compute[250018]: 2026-01-20 15:13:51.628 250022 DEBUG oslo_concurrency.lockutils [req-0ce4f935-8307-424f-bb21-8c00de6cbc86 req-39b3cfe3-bde9-4642-9c30-e839a38658f3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "5380c3d8-edb4-4366-85ab-3dc76ecc1f43-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:13:51 compute-0 nova_compute[250018]: 2026-01-20 15:13:51.629 250022 DEBUG oslo_concurrency.lockutils [req-0ce4f935-8307-424f-bb21-8c00de6cbc86 req-39b3cfe3-bde9-4642-9c30-e839a38658f3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "5380c3d8-edb4-4366-85ab-3dc76ecc1f43-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:13:51 compute-0 nova_compute[250018]: 2026-01-20 15:13:51.629 250022 DEBUG nova.compute.manager [req-0ce4f935-8307-424f-bb21-8c00de6cbc86 req-39b3cfe3-bde9-4642-9c30-e839a38658f3 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 5380c3d8-edb4-4366-85ab-3dc76ecc1f43] Processing event network-vif-plugged-d593d88a-ba32-4023-9e92-973064a24fbe _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 20 15:13:51 compute-0 nova_compute[250018]: 2026-01-20 15:13:51.629 250022 DEBUG nova.compute.manager [None req-009e1aad-bf2b-43c0-8318-9dd6e620e746 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 5380c3d8-edb4-4366-85ab-3dc76ecc1f43] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 15:13:51 compute-0 podman[358776]: 2026-01-20 15:13:51.631442156 +0000 UTC m=+0.303099187 container start 78699a6190e449634e6c052d2a8e5b4779f2df8f9c4de6f19fc48bbe8927e876 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b677f1a9-dbaa-4373-8466-bd9ccf067b91, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 15:13:51 compute-0 nova_compute[250018]: 2026-01-20 15:13:51.633 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768922031.6328995, 5380c3d8-edb4-4366-85ab-3dc76ecc1f43 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:13:51 compute-0 nova_compute[250018]: 2026-01-20 15:13:51.634 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 5380c3d8-edb4-4366-85ab-3dc76ecc1f43] VM Resumed (Lifecycle Event)
Jan 20 15:13:51 compute-0 nova_compute[250018]: 2026-01-20 15:13:51.635 250022 DEBUG nova.virt.libvirt.driver [None req-009e1aad-bf2b-43c0-8318-9dd6e620e746 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 5380c3d8-edb4-4366-85ab-3dc76ecc1f43] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 15:13:51 compute-0 nova_compute[250018]: 2026-01-20 15:13:51.640 250022 INFO nova.virt.libvirt.driver [-] [instance: 5380c3d8-edb4-4366-85ab-3dc76ecc1f43] Instance spawned successfully.
Jan 20 15:13:51 compute-0 nova_compute[250018]: 2026-01-20 15:13:51.641 250022 DEBUG nova.virt.libvirt.driver [None req-009e1aad-bf2b-43c0-8318-9dd6e620e746 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 5380c3d8-edb4-4366-85ab-3dc76ecc1f43] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 15:13:51 compute-0 nova_compute[250018]: 2026-01-20 15:13:51.668 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 5380c3d8-edb4-4366-85ab-3dc76ecc1f43] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:13:51 compute-0 nova_compute[250018]: 2026-01-20 15:13:51.672 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 5380c3d8-edb4-4366-85ab-3dc76ecc1f43] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:13:51 compute-0 neutron-haproxy-ovnmeta-b677f1a9-dbaa-4373-8466-bd9ccf067b91[358797]: [NOTICE]   (358801) : New worker (358803) forked
Jan 20 15:13:51 compute-0 neutron-haproxy-ovnmeta-b677f1a9-dbaa-4373-8466-bd9ccf067b91[358797]: [NOTICE]   (358801) : Loading success.
Jan 20 15:13:51 compute-0 nova_compute[250018]: 2026-01-20 15:13:51.697 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 5380c3d8-edb4-4366-85ab-3dc76ecc1f43] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 15:13:51 compute-0 nova_compute[250018]: 2026-01-20 15:13:51.700 250022 DEBUG nova.virt.libvirt.driver [None req-009e1aad-bf2b-43c0-8318-9dd6e620e746 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 5380c3d8-edb4-4366-85ab-3dc76ecc1f43] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:13:51 compute-0 nova_compute[250018]: 2026-01-20 15:13:51.701 250022 DEBUG nova.virt.libvirt.driver [None req-009e1aad-bf2b-43c0-8318-9dd6e620e746 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 5380c3d8-edb4-4366-85ab-3dc76ecc1f43] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:13:51 compute-0 nova_compute[250018]: 2026-01-20 15:13:51.702 250022 DEBUG nova.virt.libvirt.driver [None req-009e1aad-bf2b-43c0-8318-9dd6e620e746 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 5380c3d8-edb4-4366-85ab-3dc76ecc1f43] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:13:51 compute-0 nova_compute[250018]: 2026-01-20 15:13:51.702 250022 DEBUG nova.virt.libvirt.driver [None req-009e1aad-bf2b-43c0-8318-9dd6e620e746 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 5380c3d8-edb4-4366-85ab-3dc76ecc1f43] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:13:51 compute-0 nova_compute[250018]: 2026-01-20 15:13:51.703 250022 DEBUG nova.virt.libvirt.driver [None req-009e1aad-bf2b-43c0-8318-9dd6e620e746 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 5380c3d8-edb4-4366-85ab-3dc76ecc1f43] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:13:51 compute-0 nova_compute[250018]: 2026-01-20 15:13:51.704 250022 DEBUG nova.virt.libvirt.driver [None req-009e1aad-bf2b-43c0-8318-9dd6e620e746 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 5380c3d8-edb4-4366-85ab-3dc76ecc1f43] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:13:51 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e397 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:13:51 compute-0 nova_compute[250018]: 2026-01-20 15:13:51.805 250022 INFO nova.compute.manager [None req-009e1aad-bf2b-43c0-8318-9dd6e620e746 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 5380c3d8-edb4-4366-85ab-3dc76ecc1f43] Took 6.90 seconds to spawn the instance on the hypervisor.
Jan 20 15:13:51 compute-0 nova_compute[250018]: 2026-01-20 15:13:51.805 250022 DEBUG nova.compute.manager [None req-009e1aad-bf2b-43c0-8318-9dd6e620e746 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 5380c3d8-edb4-4366-85ab-3dc76ecc1f43] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:13:51 compute-0 nova_compute[250018]: 2026-01-20 15:13:51.866 250022 INFO nova.compute.manager [None req-009e1aad-bf2b-43c0-8318-9dd6e620e746 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 5380c3d8-edb4-4366-85ab-3dc76ecc1f43] Took 9.23 seconds to build instance.
Jan 20 15:13:51 compute-0 nova_compute[250018]: 2026-01-20 15:13:51.882 250022 DEBUG oslo_concurrency.lockutils [None req-009e1aad-bf2b-43c0-8318-9dd6e620e746 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Lock "5380c3d8-edb4-4366-85ab-3dc76ecc1f43" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.311s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:13:52 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:13:52 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:13:52 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:13:52.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:13:52 compute-0 nova_compute[250018]: 2026-01-20 15:13:52.229 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:13:52 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2730: 321 pgs: 321 active+clean; 404 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 6.0 MiB/s wr, 238 op/s
Jan 20 15:13:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:13:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:13:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:13:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:13:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:13:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:13:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Optimize plan auto_2026-01-20_15:13:52
Jan 20 15:13:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 15:13:52 compute-0 ceph-mgr[74653]: [balancer INFO root] do_upmap
Jan 20 15:13:52 compute-0 ceph-mgr[74653]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.data', 'default.rgw.log', 'images', 'default.rgw.meta', 'default.rgw.control', '.rgw.root', 'backups', '.mgr', 'volumes', 'cephfs.cephfs.meta']
Jan 20 15:13:52 compute-0 ceph-mgr[74653]: [balancer INFO root] prepared 0/10 changes
Jan 20 15:13:53 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:13:53 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:13:53 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:13:53.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:13:53 compute-0 ceph-mon[74360]: pgmap v2730: 321 pgs: 321 active+clean; 404 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 6.0 MiB/s wr, 238 op/s
Jan 20 15:13:54 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:13:54 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:13:54 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:13:54.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:13:54 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2731: 321 pgs: 321 active+clean; 405 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 6.1 MiB/s wr, 251 op/s
Jan 20 15:13:54 compute-0 nova_compute[250018]: 2026-01-20 15:13:54.623 250022 DEBUG nova.compute.manager [req-5bcfe1e7-fd62-4e03-94e7-8eac5d91dec1 req-81778ee6-344e-401d-a43a-b3aec5c66d0f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 5380c3d8-edb4-4366-85ab-3dc76ecc1f43] Received event network-vif-plugged-d593d88a-ba32-4023-9e92-973064a24fbe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:13:54 compute-0 nova_compute[250018]: 2026-01-20 15:13:54.624 250022 DEBUG oslo_concurrency.lockutils [req-5bcfe1e7-fd62-4e03-94e7-8eac5d91dec1 req-81778ee6-344e-401d-a43a-b3aec5c66d0f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "5380c3d8-edb4-4366-85ab-3dc76ecc1f43-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:13:54 compute-0 nova_compute[250018]: 2026-01-20 15:13:54.624 250022 DEBUG oslo_concurrency.lockutils [req-5bcfe1e7-fd62-4e03-94e7-8eac5d91dec1 req-81778ee6-344e-401d-a43a-b3aec5c66d0f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "5380c3d8-edb4-4366-85ab-3dc76ecc1f43-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:13:54 compute-0 nova_compute[250018]: 2026-01-20 15:13:54.624 250022 DEBUG oslo_concurrency.lockutils [req-5bcfe1e7-fd62-4e03-94e7-8eac5d91dec1 req-81778ee6-344e-401d-a43a-b3aec5c66d0f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "5380c3d8-edb4-4366-85ab-3dc76ecc1f43-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:13:54 compute-0 nova_compute[250018]: 2026-01-20 15:13:54.624 250022 DEBUG nova.compute.manager [req-5bcfe1e7-fd62-4e03-94e7-8eac5d91dec1 req-81778ee6-344e-401d-a43a-b3aec5c66d0f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 5380c3d8-edb4-4366-85ab-3dc76ecc1f43] No waiting events found dispatching network-vif-plugged-d593d88a-ba32-4023-9e92-973064a24fbe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:13:54 compute-0 nova_compute[250018]: 2026-01-20 15:13:54.625 250022 WARNING nova.compute.manager [req-5bcfe1e7-fd62-4e03-94e7-8eac5d91dec1 req-81778ee6-344e-401d-a43a-b3aec5c66d0f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 5380c3d8-edb4-4366-85ab-3dc76ecc1f43] Received unexpected event network-vif-plugged-d593d88a-ba32-4023-9e92-973064a24fbe for instance with vm_state active and task_state None.
Jan 20 15:13:54 compute-0 ceph-mon[74360]: pgmap v2731: 321 pgs: 321 active+clean; 405 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 6.1 MiB/s wr, 251 op/s
Jan 20 15:13:55 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:13:55 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:13:55 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:13:55.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:13:55 compute-0 nova_compute[250018]: 2026-01-20 15:13:55.292 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:13:56 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:13:56 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:13:56 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:13:56.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:13:56 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2732: 321 pgs: 321 active+clean; 405 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 5.0 MiB/s wr, 262 op/s
Jan 20 15:13:56 compute-0 ceph-mon[74360]: pgmap v2732: 321 pgs: 321 active+clean; 405 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 5.0 MiB/s wr, 262 op/s
Jan 20 15:13:56 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e397 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:13:57 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:13:57 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:13:57 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:13:57.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:13:57 compute-0 nova_compute[250018]: 2026-01-20 15:13:57.232 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:13:57 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/3414784352' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:13:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 15:13:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 15:13:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 15:13:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 15:13:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 15:13:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 15:13:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 15:13:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 15:13:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 15:13:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 15:13:58 compute-0 nova_compute[250018]: 2026-01-20 15:13:58.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:13:58 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:13:58 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.002000053s ======
Jan 20 15:13:58 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:13:58.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Jan 20 15:13:58 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2733: 321 pgs: 321 active+clean; 405 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 2.2 MiB/s wr, 213 op/s
Jan 20 15:13:58 compute-0 ceph-mon[74360]: pgmap v2733: 321 pgs: 321 active+clean; 405 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 2.2 MiB/s wr, 213 op/s
Jan 20 15:13:58 compute-0 nova_compute[250018]: 2026-01-20 15:13:58.967 250022 DEBUG nova.compute.manager [req-041b58ed-c956-4497-8e28-f343a4a6d7ad req-3a5c58d9-e85a-4b2d-9436-6185acc694eb 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 5380c3d8-edb4-4366-85ab-3dc76ecc1f43] Received event network-changed-d593d88a-ba32-4023-9e92-973064a24fbe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:13:58 compute-0 nova_compute[250018]: 2026-01-20 15:13:58.968 250022 DEBUG nova.compute.manager [req-041b58ed-c956-4497-8e28-f343a4a6d7ad req-3a5c58d9-e85a-4b2d-9436-6185acc694eb 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 5380c3d8-edb4-4366-85ab-3dc76ecc1f43] Refreshing instance network info cache due to event network-changed-d593d88a-ba32-4023-9e92-973064a24fbe. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 15:13:58 compute-0 nova_compute[250018]: 2026-01-20 15:13:58.968 250022 DEBUG oslo_concurrency.lockutils [req-041b58ed-c956-4497-8e28-f343a4a6d7ad req-3a5c58d9-e85a-4b2d-9436-6185acc694eb 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-5380c3d8-edb4-4366-85ab-3dc76ecc1f43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:13:58 compute-0 nova_compute[250018]: 2026-01-20 15:13:58.969 250022 DEBUG oslo_concurrency.lockutils [req-041b58ed-c956-4497-8e28-f343a4a6d7ad req-3a5c58d9-e85a-4b2d-9436-6185acc694eb 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-5380c3d8-edb4-4366-85ab-3dc76ecc1f43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:13:58 compute-0 nova_compute[250018]: 2026-01-20 15:13:58.969 250022 DEBUG nova.network.neutron [req-041b58ed-c956-4497-8e28-f343a4a6d7ad req-3a5c58d9-e85a-4b2d-9436-6185acc694eb 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 5380c3d8-edb4-4366-85ab-3dc76ecc1f43] Refreshing network info cache for port d593d88a-ba32-4023-9e92-973064a24fbe _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 15:13:59 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:13:59 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:13:59 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:13:59.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:13:59 compute-0 nova_compute[250018]: 2026-01-20 15:13:59.449 250022 DEBUG nova.compute.manager [None req-285d398b-bc35-41f1-80eb-34f9b43ae976 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Stashing vm_state: active _prep_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:5560
Jan 20 15:13:59 compute-0 nova_compute[250018]: 2026-01-20 15:13:59.534 250022 DEBUG oslo_concurrency.lockutils [None req-285d398b-bc35-41f1-80eb-34f9b43ae976 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:13:59 compute-0 nova_compute[250018]: 2026-01-20 15:13:59.535 250022 DEBUG oslo_concurrency.lockutils [None req-285d398b-bc35-41f1-80eb-34f9b43ae976 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:13:59 compute-0 nova_compute[250018]: 2026-01-20 15:13:59.557 250022 DEBUG nova.objects.instance [None req-285d398b-bc35-41f1-80eb-34f9b43ae976 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] Lazy-loading 'pci_requests' on Instance uuid 128af7d9-155f-468d-9873-98c816f0df9e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:13:59 compute-0 nova_compute[250018]: 2026-01-20 15:13:59.570 250022 DEBUG nova.virt.hardware [None req-285d398b-bc35-41f1-80eb-34f9b43ae976 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 15:13:59 compute-0 nova_compute[250018]: 2026-01-20 15:13:59.571 250022 INFO nova.compute.claims [None req-285d398b-bc35-41f1-80eb-34f9b43ae976 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Claim successful on node compute-0.ctlplane.example.com
Jan 20 15:13:59 compute-0 nova_compute[250018]: 2026-01-20 15:13:59.571 250022 DEBUG nova.objects.instance [None req-285d398b-bc35-41f1-80eb-34f9b43ae976 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] Lazy-loading 'resources' on Instance uuid 128af7d9-155f-468d-9873-98c816f0df9e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:13:59 compute-0 nova_compute[250018]: 2026-01-20 15:13:59.581 250022 DEBUG nova.objects.instance [None req-285d398b-bc35-41f1-80eb-34f9b43ae976 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] Lazy-loading 'numa_topology' on Instance uuid 128af7d9-155f-468d-9873-98c816f0df9e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:13:59 compute-0 nova_compute[250018]: 2026-01-20 15:13:59.590 250022 DEBUG nova.objects.instance [None req-285d398b-bc35-41f1-80eb-34f9b43ae976 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] Lazy-loading 'pci_devices' on Instance uuid 128af7d9-155f-468d-9873-98c816f0df9e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:13:59 compute-0 nova_compute[250018]: 2026-01-20 15:13:59.626 250022 INFO nova.compute.resource_tracker [None req-285d398b-bc35-41f1-80eb-34f9b43ae976 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Updating resource usage from migration c8ea2eca-34f1-4b31-9699-90661d5995f9
Jan 20 15:13:59 compute-0 nova_compute[250018]: 2026-01-20 15:13:59.627 250022 DEBUG nova.compute.resource_tracker [None req-285d398b-bc35-41f1-80eb-34f9b43ae976 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Starting to track incoming migration c8ea2eca-34f1-4b31-9699-90661d5995f9 with flavor 522deaab-a741-4dbb-932d-d8b13a211c33 _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431
Jan 20 15:13:59 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/2197494371' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:13:59 compute-0 nova_compute[250018]: 2026-01-20 15:13:59.786 250022 DEBUG oslo_concurrency.processutils [None req-285d398b-bc35-41f1-80eb-34f9b43ae976 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:14:00 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:14:00 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:14:00 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:14:00.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:14:00 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:14:00 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3764944478' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:14:00 compute-0 nova_compute[250018]: 2026-01-20 15:14:00.253 250022 DEBUG oslo_concurrency.processutils [None req-285d398b-bc35-41f1-80eb-34f9b43ae976 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:14:00 compute-0 nova_compute[250018]: 2026-01-20 15:14:00.262 250022 DEBUG nova.compute.provider_tree [None req-285d398b-bc35-41f1-80eb-34f9b43ae976 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:14:00 compute-0 nova_compute[250018]: 2026-01-20 15:14:00.293 250022 DEBUG nova.scheduler.client.report [None req-285d398b-bc35-41f1-80eb-34f9b43ae976 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:14:00 compute-0 nova_compute[250018]: 2026-01-20 15:14:00.297 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:14:00 compute-0 nova_compute[250018]: 2026-01-20 15:14:00.326 250022 DEBUG oslo_concurrency.lockutils [None req-285d398b-bc35-41f1-80eb-34f9b43ae976 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: held 0.791s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:14:00 compute-0 nova_compute[250018]: 2026-01-20 15:14:00.326 250022 INFO nova.compute.manager [None req-285d398b-bc35-41f1-80eb-34f9b43ae976 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Migrating
Jan 20 15:14:00 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2734: 321 pgs: 321 active+clean; 454 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 4.6 MiB/s wr, 279 op/s
Jan 20 15:14:00 compute-0 sudo[358840]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:14:00 compute-0 sudo[358840]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:14:00 compute-0 sudo[358840]: pam_unix(sudo:session): session closed for user root
Jan 20 15:14:00 compute-0 sudo[358865]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:14:00 compute-0 sudo[358865]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:14:00 compute-0 sudo[358865]: pam_unix(sudo:session): session closed for user root
Jan 20 15:14:00 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3764944478' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:14:00 compute-0 ceph-mon[74360]: pgmap v2734: 321 pgs: 321 active+clean; 454 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 4.6 MiB/s wr, 279 op/s
Jan 20 15:14:00 compute-0 nova_compute[250018]: 2026-01-20 15:14:00.990 250022 DEBUG nova.network.neutron [req-041b58ed-c956-4497-8e28-f343a4a6d7ad req-3a5c58d9-e85a-4b2d-9436-6185acc694eb 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 5380c3d8-edb4-4366-85ab-3dc76ecc1f43] Updated VIF entry in instance network info cache for port d593d88a-ba32-4023-9e92-973064a24fbe. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 15:14:00 compute-0 nova_compute[250018]: 2026-01-20 15:14:00.991 250022 DEBUG nova.network.neutron [req-041b58ed-c956-4497-8e28-f343a4a6d7ad req-3a5c58d9-e85a-4b2d-9436-6185acc694eb 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 5380c3d8-edb4-4366-85ab-3dc76ecc1f43] Updating instance_info_cache with network_info: [{"id": "d593d88a-ba32-4023-9e92-973064a24fbe", "address": "fa:16:3e:b5:46:3f", "network": {"id": "b677f1a9-dbaa-4373-8466-bd9ccf067b91", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-408170906-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a49638950e1543fa8e0d251af5479623", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd593d88a-ba", "ovs_interfaceid": "d593d88a-ba32-4023-9e92-973064a24fbe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:14:01 compute-0 nova_compute[250018]: 2026-01-20 15:14:01.014 250022 DEBUG oslo_concurrency.lockutils [req-041b58ed-c956-4497-8e28-f343a4a6d7ad req-3a5c58d9-e85a-4b2d-9436-6185acc694eb 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-5380c3d8-edb4-4366-85ab-3dc76ecc1f43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:14:01 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:14:01 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:14:01 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:14:01.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:14:01 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/916155642' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:14:01 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/3060118312' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:14:01 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e397 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:14:02 compute-0 nova_compute[250018]: 2026-01-20 15:14:02.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:14:02 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:14:02 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:14:02 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:14:02.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:14:02 compute-0 nova_compute[250018]: 2026-01-20 15:14:02.235 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:14:02 compute-0 sshd-session[358891]: Accepted publickey for nova from 192.168.122.101 port 41168 ssh2: ECDSA SHA256:XnPnjIKlkePRv+YAV8ktjwWUWX9aekF80jIRGfdhjRU
Jan 20 15:14:02 compute-0 systemd[1]: Created slice User Slice of UID 42436.
Jan 20 15:14:02 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42436...
Jan 20 15:14:02 compute-0 systemd-logind[796]: New session 56 of user nova.
Jan 20 15:14:02 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2735: 321 pgs: 321 active+clean; 479 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 4.6 MiB/s wr, 183 op/s
Jan 20 15:14:02 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42436.
Jan 20 15:14:02 compute-0 systemd[1]: Starting User Manager for UID 42436...
Jan 20 15:14:02 compute-0 systemd[358913]: pam_unix(systemd-user:session): session opened for user nova(uid=42436) by nova(uid=0)
Jan 20 15:14:02 compute-0 podman[358894]: 2026-01-20 15:14:02.589301286 +0000 UTC m=+0.073493834 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 20 15:14:02 compute-0 podman[358895]: 2026-01-20 15:14:02.622279685 +0000 UTC m=+0.106471563 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 20 15:14:02 compute-0 systemd[358913]: Queued start job for default target Main User Target.
Jan 20 15:14:02 compute-0 systemd[358913]: Created slice User Application Slice.
Jan 20 15:14:02 compute-0 systemd[358913]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 20 15:14:02 compute-0 systemd[358913]: Started Daily Cleanup of User's Temporary Directories.
Jan 20 15:14:02 compute-0 systemd[358913]: Reached target Paths.
Jan 20 15:14:02 compute-0 systemd[358913]: Reached target Timers.
Jan 20 15:14:02 compute-0 systemd[358913]: Starting D-Bus User Message Bus Socket...
Jan 20 15:14:02 compute-0 systemd[358913]: Starting Create User's Volatile Files and Directories...
Jan 20 15:14:02 compute-0 systemd[358913]: Finished Create User's Volatile Files and Directories.
Jan 20 15:14:02 compute-0 systemd[358913]: Listening on D-Bus User Message Bus Socket.
Jan 20 15:14:02 compute-0 systemd[358913]: Reached target Sockets.
Jan 20 15:14:02 compute-0 systemd[358913]: Reached target Basic System.
Jan 20 15:14:02 compute-0 systemd[358913]: Reached target Main User Target.
Jan 20 15:14:02 compute-0 systemd[358913]: Startup finished in 153ms.
Jan 20 15:14:02 compute-0 systemd[1]: Started User Manager for UID 42436.
Jan 20 15:14:02 compute-0 systemd[1]: Started Session 56 of User nova.
Jan 20 15:14:02 compute-0 sshd-session[358891]: pam_unix(sshd:session): session opened for user nova(uid=42436) by nova(uid=0)
Jan 20 15:14:02 compute-0 sshd-session[358956]: Received disconnect from 192.168.122.101 port 41168:11: disconnected by user
Jan 20 15:14:02 compute-0 sshd-session[358956]: Disconnected from user nova 192.168.122.101 port 41168
Jan 20 15:14:02 compute-0 sshd-session[358891]: pam_unix(sshd:session): session closed for user nova
Jan 20 15:14:02 compute-0 systemd[1]: session-56.scope: Deactivated successfully.
Jan 20 15:14:02 compute-0 systemd-logind[796]: Session 56 logged out. Waiting for processes to exit.
Jan 20 15:14:02 compute-0 systemd-logind[796]: Removed session 56.
Jan 20 15:14:02 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/1147698371' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:14:02 compute-0 ceph-mon[74360]: pgmap v2735: 321 pgs: 321 active+clean; 479 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 4.6 MiB/s wr, 183 op/s
Jan 20 15:14:02 compute-0 sshd-session[358958]: Accepted publickey for nova from 192.168.122.101 port 41180 ssh2: ECDSA SHA256:XnPnjIKlkePRv+YAV8ktjwWUWX9aekF80jIRGfdhjRU
Jan 20 15:14:02 compute-0 systemd-logind[796]: New session 58 of user nova.
Jan 20 15:14:02 compute-0 systemd[1]: Started Session 58 of User nova.
Jan 20 15:14:02 compute-0 sshd-session[358958]: pam_unix(sshd:session): session opened for user nova(uid=42436) by nova(uid=0)
Jan 20 15:14:03 compute-0 sshd-session[358961]: Received disconnect from 192.168.122.101 port 41180:11: disconnected by user
Jan 20 15:14:03 compute-0 sshd-session[358961]: Disconnected from user nova 192.168.122.101 port 41180
Jan 20 15:14:03 compute-0 sshd-session[358958]: pam_unix(sshd:session): session closed for user nova
Jan 20 15:14:03 compute-0 systemd-logind[796]: Session 58 logged out. Waiting for processes to exit.
Jan 20 15:14:03 compute-0 systemd[1]: session-58.scope: Deactivated successfully.
Jan 20 15:14:03 compute-0 systemd-logind[796]: Removed session 58.
Jan 20 15:14:03 compute-0 nova_compute[250018]: 2026-01-20 15:14:03.045 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:14:03 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:14:03 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:14:03 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:14:03.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:14:03 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/1568540187' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:14:04 compute-0 nova_compute[250018]: 2026-01-20 15:14:04.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:14:04 compute-0 nova_compute[250018]: 2026-01-20 15:14:04.051 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 15:14:04 compute-0 nova_compute[250018]: 2026-01-20 15:14:04.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:14:04 compute-0 nova_compute[250018]: 2026-01-20 15:14:04.074 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:14:04 compute-0 nova_compute[250018]: 2026-01-20 15:14:04.075 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:14:04 compute-0 nova_compute[250018]: 2026-01-20 15:14:04.075 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:14:04 compute-0 nova_compute[250018]: 2026-01-20 15:14:04.075 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 15:14:04 compute-0 nova_compute[250018]: 2026-01-20 15:14:04.075 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:14:04 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:14:04 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:14:04 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:14:04.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:14:04 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:14:04 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1278568393' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:14:04 compute-0 nova_compute[250018]: 2026-01-20 15:14:04.497 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:14:04 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2736: 321 pgs: 321 active+clean; 481 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 169 op/s
Jan 20 15:14:04 compute-0 nova_compute[250018]: 2026-01-20 15:14:04.572 250022 DEBUG nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] skipping disk for instance-000000b6 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 15:14:04 compute-0 nova_compute[250018]: 2026-01-20 15:14:04.572 250022 DEBUG nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] skipping disk for instance-000000b6 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 15:14:04 compute-0 nova_compute[250018]: 2026-01-20 15:14:04.576 250022 DEBUG nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] skipping disk for instance-000000b9 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 15:14:04 compute-0 nova_compute[250018]: 2026-01-20 15:14:04.576 250022 DEBUG nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] skipping disk for instance-000000b9 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 15:14:04 compute-0 nova_compute[250018]: 2026-01-20 15:14:04.731 250022 WARNING nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:14:04 compute-0 nova_compute[250018]: 2026-01-20 15:14:04.732 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3776MB free_disk=20.876388549804688GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 15:14:04 compute-0 nova_compute[250018]: 2026-01-20 15:14:04.732 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:14:04 compute-0 nova_compute[250018]: 2026-01-20 15:14:04.732 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:14:04 compute-0 nova_compute[250018]: 2026-01-20 15:14:04.800 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Migration for instance 128af7d9-155f-468d-9873-98c816f0df9e refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903
Jan 20 15:14:04 compute-0 nova_compute[250018]: 2026-01-20 15:14:04.821 250022 INFO nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Updating resource usage from migration c8ea2eca-34f1-4b31-9699-90661d5995f9
Jan 20 15:14:04 compute-0 nova_compute[250018]: 2026-01-20 15:14:04.821 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Starting to track incoming migration c8ea2eca-34f1-4b31-9699-90661d5995f9 with flavor 522deaab-a741-4dbb-932d-d8b13a211c33 _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431
Jan 20 15:14:04 compute-0 nova_compute[250018]: 2026-01-20 15:14:04.846 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Instance e79c0704-f95e-422f-9c25-ed35fca7cb7c actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 15:14:04 compute-0 nova_compute[250018]: 2026-01-20 15:14:04.847 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Instance 5380c3d8-edb4-4366-85ab-3dc76ecc1f43 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 15:14:04 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/2257608367' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:14:04 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/1278568393' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:14:04 compute-0 ceph-mon[74360]: pgmap v2736: 321 pgs: 321 active+clean; 481 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 169 op/s
Jan 20 15:14:04 compute-0 nova_compute[250018]: 2026-01-20 15:14:04.867 250022 WARNING nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Instance 128af7d9-155f-468d-9873-98c816f0df9e has been moved to another host compute-1.ctlplane.example.com(compute-1.ctlplane.example.com). There are allocations remaining against the source host that might need to be removed: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}.
Jan 20 15:14:04 compute-0 nova_compute[250018]: 2026-01-20 15:14:04.867 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 15:14:04 compute-0 nova_compute[250018]: 2026-01-20 15:14:04.867 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 15:14:04 compute-0 nova_compute[250018]: 2026-01-20 15:14:04.887 250022 DEBUG nova.scheduler.client.report [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Refreshing inventories for resource provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 20 15:14:04 compute-0 nova_compute[250018]: 2026-01-20 15:14:04.903 250022 DEBUG nova.scheduler.client.report [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Updating ProviderTree inventory for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 20 15:14:04 compute-0 nova_compute[250018]: 2026-01-20 15:14:04.904 250022 DEBUG nova.compute.provider_tree [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Updating inventory in ProviderTree for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 20 15:14:04 compute-0 nova_compute[250018]: 2026-01-20 15:14:04.917 250022 DEBUG nova.scheduler.client.report [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Refreshing aggregate associations for resource provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 20 15:14:04 compute-0 nova_compute[250018]: 2026-01-20 15:14:04.936 250022 DEBUG nova.scheduler.client.report [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Refreshing trait associations for resource provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed, traits: COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_TRUSTED_CERTS,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NODE,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_RESCUE_BFV,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SSE2,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VOLUME_EXTEND,COMPUTE_ACCELERATORS,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_SATA,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE41,HW_CPU_X86_MMX,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_IDE,COMPUTE_DEVICE_TAGGING _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 20 15:14:05 compute-0 nova_compute[250018]: 2026-01-20 15:14:05.031 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:14:05 compute-0 sshd-session[358955]: Connection closed by authenticating user root 134.122.57.138 port 36850 [preauth]
Jan 20 15:14:05 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:14:05 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:14:05 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:14:05.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:14:05 compute-0 nova_compute[250018]: 2026-01-20 15:14:05.341 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:14:05 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:14:05 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3350738824' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:14:05 compute-0 nova_compute[250018]: 2026-01-20 15:14:05.514 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:14:05 compute-0 nova_compute[250018]: 2026-01-20 15:14:05.519 250022 DEBUG nova.compute.provider_tree [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:14:05 compute-0 nova_compute[250018]: 2026-01-20 15:14:05.534 250022 DEBUG nova.scheduler.client.report [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:14:05 compute-0 ovn_controller[148666]: 2026-01-20T15:14:05Z|00087|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.3 does not match offer 10.100.0.12
Jan 20 15:14:05 compute-0 ovn_controller[148666]: 2026-01-20T15:14:05Z|00088|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:b5:46:3f 10.100.0.12
Jan 20 15:14:05 compute-0 nova_compute[250018]: 2026-01-20 15:14:05.556 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 15:14:05 compute-0 nova_compute[250018]: 2026-01-20 15:14:05.556 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.824s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:14:05 compute-0 nova_compute[250018]: 2026-01-20 15:14:05.879 250022 DEBUG nova.compute.manager [req-a1eda262-5399-4ec9-924e-5d5597919829 req-789f59f2-89ed-49d3-b1d9-b47f31b6876e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Received event network-vif-unplugged-9de5453d-b548-429c-8fc2-7b012cb8ebdf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:14:05 compute-0 nova_compute[250018]: 2026-01-20 15:14:05.879 250022 DEBUG oslo_concurrency.lockutils [req-a1eda262-5399-4ec9-924e-5d5597919829 req-789f59f2-89ed-49d3-b1d9-b47f31b6876e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "128af7d9-155f-468d-9873-98c816f0df9e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:14:05 compute-0 nova_compute[250018]: 2026-01-20 15:14:05.879 250022 DEBUG oslo_concurrency.lockutils [req-a1eda262-5399-4ec9-924e-5d5597919829 req-789f59f2-89ed-49d3-b1d9-b47f31b6876e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "128af7d9-155f-468d-9873-98c816f0df9e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:14:05 compute-0 nova_compute[250018]: 2026-01-20 15:14:05.879 250022 DEBUG oslo_concurrency.lockutils [req-a1eda262-5399-4ec9-924e-5d5597919829 req-789f59f2-89ed-49d3-b1d9-b47f31b6876e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "128af7d9-155f-468d-9873-98c816f0df9e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:14:05 compute-0 nova_compute[250018]: 2026-01-20 15:14:05.880 250022 DEBUG nova.compute.manager [req-a1eda262-5399-4ec9-924e-5d5597919829 req-789f59f2-89ed-49d3-b1d9-b47f31b6876e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] No waiting events found dispatching network-vif-unplugged-9de5453d-b548-429c-8fc2-7b012cb8ebdf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:14:05 compute-0 nova_compute[250018]: 2026-01-20 15:14:05.880 250022 WARNING nova.compute.manager [req-a1eda262-5399-4ec9-924e-5d5597919829 req-789f59f2-89ed-49d3-b1d9-b47f31b6876e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Received unexpected event network-vif-unplugged-9de5453d-b548-429c-8fc2-7b012cb8ebdf for instance with vm_state active and task_state resize_migrating.
Jan 20 15:14:06 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:14:06 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:14:06 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:14:06.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:14:06 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3350738824' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:14:06 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2737: 321 pgs: 321 active+clean; 484 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 4.0 MiB/s wr, 168 op/s
Jan 20 15:14:06 compute-0 nova_compute[250018]: 2026-01-20 15:14:06.557 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:14:06 compute-0 nova_compute[250018]: 2026-01-20 15:14:06.782 250022 INFO nova.network.neutron [None req-285d398b-bc35-41f1-80eb-34f9b43ae976 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Updating port 9de5453d-b548-429c-8fc2-7b012cb8ebdf with attributes {'binding:host_id': 'compute-0.ctlplane.example.com', 'device_owner': 'compute:nova'}
Jan 20 15:14:06 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e397 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:14:07 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:14:07 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:14:07 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:14:07.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:14:07 compute-0 nova_compute[250018]: 2026-01-20 15:14:07.236 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:14:07 compute-0 ceph-mon[74360]: pgmap v2737: 321 pgs: 321 active+clean; 484 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 4.0 MiB/s wr, 168 op/s
Jan 20 15:14:07 compute-0 nova_compute[250018]: 2026-01-20 15:14:07.577 250022 DEBUG oslo_concurrency.lockutils [None req-285d398b-bc35-41f1-80eb-34f9b43ae976 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] Acquiring lock "refresh_cache-128af7d9-155f-468d-9873-98c816f0df9e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:14:07 compute-0 nova_compute[250018]: 2026-01-20 15:14:07.578 250022 DEBUG oslo_concurrency.lockutils [None req-285d398b-bc35-41f1-80eb-34f9b43ae976 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] Acquired lock "refresh_cache-128af7d9-155f-468d-9873-98c816f0df9e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:14:07 compute-0 nova_compute[250018]: 2026-01-20 15:14:07.578 250022 DEBUG nova.network.neutron [None req-285d398b-bc35-41f1-80eb-34f9b43ae976 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 15:14:07 compute-0 nova_compute[250018]: 2026-01-20 15:14:07.683 250022 DEBUG nova.compute.manager [req-6c55cbfe-8bca-4b99-b488-8c07175e349c req-10bdd897-4ba8-49bf-b9ce-254d974383ed 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Received event network-changed-9de5453d-b548-429c-8fc2-7b012cb8ebdf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:14:07 compute-0 nova_compute[250018]: 2026-01-20 15:14:07.683 250022 DEBUG nova.compute.manager [req-6c55cbfe-8bca-4b99-b488-8c07175e349c req-10bdd897-4ba8-49bf-b9ce-254d974383ed 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Refreshing instance network info cache due to event network-changed-9de5453d-b548-429c-8fc2-7b012cb8ebdf. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 15:14:07 compute-0 nova_compute[250018]: 2026-01-20 15:14:07.684 250022 DEBUG oslo_concurrency.lockutils [req-6c55cbfe-8bca-4b99-b488-8c07175e349c req-10bdd897-4ba8-49bf-b9ce-254d974383ed 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-128af7d9-155f-468d-9873-98c816f0df9e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:14:07 compute-0 nova_compute[250018]: 2026-01-20 15:14:07.994 250022 DEBUG nova.compute.manager [req-061373c2-4af0-43d6-9bcd-2aa411425f8d req-dd2b9761-cc89-488c-94cf-e4aef802ec64 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Received event network-vif-plugged-9de5453d-b548-429c-8fc2-7b012cb8ebdf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:14:07 compute-0 nova_compute[250018]: 2026-01-20 15:14:07.994 250022 DEBUG oslo_concurrency.lockutils [req-061373c2-4af0-43d6-9bcd-2aa411425f8d req-dd2b9761-cc89-488c-94cf-e4aef802ec64 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "128af7d9-155f-468d-9873-98c816f0df9e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:14:07 compute-0 nova_compute[250018]: 2026-01-20 15:14:07.994 250022 DEBUG oslo_concurrency.lockutils [req-061373c2-4af0-43d6-9bcd-2aa411425f8d req-dd2b9761-cc89-488c-94cf-e4aef802ec64 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "128af7d9-155f-468d-9873-98c816f0df9e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:14:07 compute-0 nova_compute[250018]: 2026-01-20 15:14:07.995 250022 DEBUG oslo_concurrency.lockutils [req-061373c2-4af0-43d6-9bcd-2aa411425f8d req-dd2b9761-cc89-488c-94cf-e4aef802ec64 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "128af7d9-155f-468d-9873-98c816f0df9e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:14:07 compute-0 nova_compute[250018]: 2026-01-20 15:14:07.995 250022 DEBUG nova.compute.manager [req-061373c2-4af0-43d6-9bcd-2aa411425f8d req-dd2b9761-cc89-488c-94cf-e4aef802ec64 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] No waiting events found dispatching network-vif-plugged-9de5453d-b548-429c-8fc2-7b012cb8ebdf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:14:07 compute-0 nova_compute[250018]: 2026-01-20 15:14:07.995 250022 WARNING nova.compute.manager [req-061373c2-4af0-43d6-9bcd-2aa411425f8d req-dd2b9761-cc89-488c-94cf-e4aef802ec64 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Received unexpected event network-vif-plugged-9de5453d-b548-429c-8fc2-7b012cb8ebdf for instance with vm_state active and task_state resize_migrated.
Jan 20 15:14:08 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:14:08 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:14:08 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:14:08.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:14:08 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2738: 321 pgs: 321 active+clean; 484 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 728 KiB/s rd, 4.0 MiB/s wr, 129 op/s
Jan 20 15:14:08 compute-0 ovn_controller[148666]: 2026-01-20T15:14:08Z|00089|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.3 does not match offer 10.100.0.12
Jan 20 15:14:08 compute-0 ovn_controller[148666]: 2026-01-20T15:14:08Z|00090|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:b5:46:3f 10.100.0.12
Jan 20 15:14:08 compute-0 nova_compute[250018]: 2026-01-20 15:14:08.902 250022 DEBUG nova.network.neutron [None req-285d398b-bc35-41f1-80eb-34f9b43ae976 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Updating instance_info_cache with network_info: [{"id": "9de5453d-b548-429c-8fc2-7b012cb8ebdf", "address": "fa:16:3e:a8:1d:e9", "network": {"id": "d07527d3-7363-453c-9902-c562bab626ba", "bridge": "br-int", "label": "tempest-network-smoke--1250108698", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ed5feeeafe7448a8efb47ab975b0ead", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9de5453d-b5", "ovs_interfaceid": "9de5453d-b548-429c-8fc2-7b012cb8ebdf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:14:08 compute-0 nova_compute[250018]: 2026-01-20 15:14:08.921 250022 DEBUG oslo_concurrency.lockutils [None req-285d398b-bc35-41f1-80eb-34f9b43ae976 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] Releasing lock "refresh_cache-128af7d9-155f-468d-9873-98c816f0df9e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:14:08 compute-0 nova_compute[250018]: 2026-01-20 15:14:08.924 250022 DEBUG oslo_concurrency.lockutils [req-6c55cbfe-8bca-4b99-b488-8c07175e349c req-10bdd897-4ba8-49bf-b9ce-254d974383ed 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-128af7d9-155f-468d-9873-98c816f0df9e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:14:08 compute-0 nova_compute[250018]: 2026-01-20 15:14:08.924 250022 DEBUG nova.network.neutron [req-6c55cbfe-8bca-4b99-b488-8c07175e349c req-10bdd897-4ba8-49bf-b9ce-254d974383ed 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Refreshing network info cache for port 9de5453d-b548-429c-8fc2-7b012cb8ebdf _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 15:14:09 compute-0 nova_compute[250018]: 2026-01-20 15:14:09.009 250022 DEBUG nova.virt.libvirt.driver [None req-285d398b-bc35-41f1-80eb-34f9b43ae976 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Starting finish_migration finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11698
Jan 20 15:14:09 compute-0 nova_compute[250018]: 2026-01-20 15:14:09.010 250022 DEBUG nova.virt.libvirt.driver [None req-285d398b-bc35-41f1-80eb-34f9b43ae976 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Jan 20 15:14:09 compute-0 nova_compute[250018]: 2026-01-20 15:14:09.011 250022 INFO nova.virt.libvirt.driver [None req-285d398b-bc35-41f1-80eb-34f9b43ae976 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Creating image(s)
Jan 20 15:14:09 compute-0 nova_compute[250018]: 2026-01-20 15:14:09.049 250022 DEBUG nova.storage.rbd_utils [None req-285d398b-bc35-41f1-80eb-34f9b43ae976 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] creating snapshot(nova-resize) on rbd image(128af7d9-155f-468d-9873-98c816f0df9e_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 20 15:14:09 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:14:09 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:14:09 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:14:09.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:14:09 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e397 do_prune osdmap full prune enabled
Jan 20 15:14:09 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e398 e398: 3 total, 3 up, 3 in
Jan 20 15:14:09 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e398: 3 total, 3 up, 3 in
Jan 20 15:14:09 compute-0 ceph-mon[74360]: pgmap v2738: 321 pgs: 321 active+clean; 484 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 728 KiB/s rd, 4.0 MiB/s wr, 129 op/s
Jan 20 15:14:09 compute-0 nova_compute[250018]: 2026-01-20 15:14:09.637 250022 DEBUG nova.objects.instance [None req-285d398b-bc35-41f1-80eb-34f9b43ae976 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 128af7d9-155f-468d-9873-98c816f0df9e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:14:09 compute-0 nova_compute[250018]: 2026-01-20 15:14:09.767 250022 DEBUG nova.virt.libvirt.driver [None req-285d398b-bc35-41f1-80eb-34f9b43ae976 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Jan 20 15:14:09 compute-0 nova_compute[250018]: 2026-01-20 15:14:09.768 250022 DEBUG nova.virt.libvirt.driver [None req-285d398b-bc35-41f1-80eb-34f9b43ae976 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Ensure instance console log exists: /var/lib/nova/instances/128af7d9-155f-468d-9873-98c816f0df9e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 15:14:09 compute-0 nova_compute[250018]: 2026-01-20 15:14:09.769 250022 DEBUG oslo_concurrency.lockutils [None req-285d398b-bc35-41f1-80eb-34f9b43ae976 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:14:09 compute-0 nova_compute[250018]: 2026-01-20 15:14:09.769 250022 DEBUG oslo_concurrency.lockutils [None req-285d398b-bc35-41f1-80eb-34f9b43ae976 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:14:09 compute-0 nova_compute[250018]: 2026-01-20 15:14:09.770 250022 DEBUG oslo_concurrency.lockutils [None req-285d398b-bc35-41f1-80eb-34f9b43ae976 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:14:09 compute-0 nova_compute[250018]: 2026-01-20 15:14:09.774 250022 DEBUG nova.virt.libvirt.driver [None req-285d398b-bc35-41f1-80eb-34f9b43ae976 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Start _get_guest_xml network_info=[{"id": "9de5453d-b548-429c-8fc2-7b012cb8ebdf", "address": "fa:16:3e:a8:1d:e9", "network": {"id": "d07527d3-7363-453c-9902-c562bab626ba", "bridge": "br-int", "label": "tempest-network-smoke--1250108698", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--1250108698", "vif_mac": "fa:16:3e:a8:1d:e9"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ed5feeeafe7448a8efb47ab975b0ead", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9de5453d-b5", "ovs_interfaceid": "9de5453d-b548-429c-8fc2-7b012cb8ebdf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encrypted': False, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'guest_format': None, 'size': 0, 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 15:14:09 compute-0 nova_compute[250018]: 2026-01-20 15:14:09.780 250022 WARNING nova.virt.libvirt.driver [None req-285d398b-bc35-41f1-80eb-34f9b43ae976 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:14:09 compute-0 nova_compute[250018]: 2026-01-20 15:14:09.785 250022 DEBUG nova.virt.libvirt.host [None req-285d398b-bc35-41f1-80eb-34f9b43ae976 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 15:14:09 compute-0 nova_compute[250018]: 2026-01-20 15:14:09.786 250022 DEBUG nova.virt.libvirt.host [None req-285d398b-bc35-41f1-80eb-34f9b43ae976 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 15:14:09 compute-0 nova_compute[250018]: 2026-01-20 15:14:09.789 250022 DEBUG nova.virt.libvirt.host [None req-285d398b-bc35-41f1-80eb-34f9b43ae976 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 15:14:09 compute-0 nova_compute[250018]: 2026-01-20 15:14:09.789 250022 DEBUG nova.virt.libvirt.host [None req-285d398b-bc35-41f1-80eb-34f9b43ae976 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 15:14:09 compute-0 nova_compute[250018]: 2026-01-20 15:14:09.791 250022 DEBUG nova.virt.libvirt.driver [None req-285d398b-bc35-41f1-80eb-34f9b43ae976 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 15:14:09 compute-0 nova_compute[250018]: 2026-01-20 15:14:09.791 250022 DEBUG nova.virt.hardware [None req-285d398b-bc35-41f1-80eb-34f9b43ae976 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 15:14:09 compute-0 nova_compute[250018]: 2026-01-20 15:14:09.791 250022 DEBUG nova.virt.hardware [None req-285d398b-bc35-41f1-80eb-34f9b43ae976 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 15:14:09 compute-0 nova_compute[250018]: 2026-01-20 15:14:09.791 250022 DEBUG nova.virt.hardware [None req-285d398b-bc35-41f1-80eb-34f9b43ae976 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 15:14:09 compute-0 nova_compute[250018]: 2026-01-20 15:14:09.792 250022 DEBUG nova.virt.hardware [None req-285d398b-bc35-41f1-80eb-34f9b43ae976 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 15:14:09 compute-0 nova_compute[250018]: 2026-01-20 15:14:09.792 250022 DEBUG nova.virt.hardware [None req-285d398b-bc35-41f1-80eb-34f9b43ae976 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 15:14:09 compute-0 nova_compute[250018]: 2026-01-20 15:14:09.792 250022 DEBUG nova.virt.hardware [None req-285d398b-bc35-41f1-80eb-34f9b43ae976 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 15:14:09 compute-0 nova_compute[250018]: 2026-01-20 15:14:09.792 250022 DEBUG nova.virt.hardware [None req-285d398b-bc35-41f1-80eb-34f9b43ae976 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 15:14:09 compute-0 nova_compute[250018]: 2026-01-20 15:14:09.793 250022 DEBUG nova.virt.hardware [None req-285d398b-bc35-41f1-80eb-34f9b43ae976 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 15:14:09 compute-0 nova_compute[250018]: 2026-01-20 15:14:09.793 250022 DEBUG nova.virt.hardware [None req-285d398b-bc35-41f1-80eb-34f9b43ae976 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 15:14:09 compute-0 nova_compute[250018]: 2026-01-20 15:14:09.793 250022 DEBUG nova.virt.hardware [None req-285d398b-bc35-41f1-80eb-34f9b43ae976 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 15:14:09 compute-0 nova_compute[250018]: 2026-01-20 15:14:09.793 250022 DEBUG nova.virt.hardware [None req-285d398b-bc35-41f1-80eb-34f9b43ae976 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 15:14:09 compute-0 nova_compute[250018]: 2026-01-20 15:14:09.793 250022 DEBUG nova.objects.instance [None req-285d398b-bc35-41f1-80eb-34f9b43ae976 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 128af7d9-155f-468d-9873-98c816f0df9e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:14:09 compute-0 nova_compute[250018]: 2026-01-20 15:14:09.820 250022 DEBUG oslo_concurrency.processutils [None req-285d398b-bc35-41f1-80eb-34f9b43ae976 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:14:10 compute-0 nova_compute[250018]: 2026-01-20 15:14:10.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:14:10 compute-0 nova_compute[250018]: 2026-01-20 15:14:10.052 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:14:10 compute-0 nova_compute[250018]: 2026-01-20 15:14:10.052 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 20 15:14:10 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:14:10 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:14:10 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:14:10.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:14:10 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 15:14:10 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/469366976' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:14:10 compute-0 nova_compute[250018]: 2026-01-20 15:14:10.254 250022 DEBUG oslo_concurrency.processutils [None req-285d398b-bc35-41f1-80eb-34f9b43ae976 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:14:10 compute-0 nova_compute[250018]: 2026-01-20 15:14:10.290 250022 DEBUG oslo_concurrency.processutils [None req-285d398b-bc35-41f1-80eb-34f9b43ae976 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:14:10 compute-0 nova_compute[250018]: 2026-01-20 15:14:10.342 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:14:10 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2740: 321 pgs: 321 active+clean; 486 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.9 MiB/s wr, 161 op/s
Jan 20 15:14:10 compute-0 ovn_controller[148666]: 2026-01-20T15:14:10Z|00091|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b5:46:3f 10.100.0.12
Jan 20 15:14:10 compute-0 ovn_controller[148666]: 2026-01-20T15:14:10Z|00092|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b5:46:3f 10.100.0.12
Jan 20 15:14:10 compute-0 ceph-mon[74360]: osdmap e398: 3 total, 3 up, 3 in
Jan 20 15:14:10 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/469366976' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:14:10 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 15:14:10 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3331031870' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:14:10 compute-0 nova_compute[250018]: 2026-01-20 15:14:10.724 250022 DEBUG oslo_concurrency.processutils [None req-285d398b-bc35-41f1-80eb-34f9b43ae976 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:14:10 compute-0 nova_compute[250018]: 2026-01-20 15:14:10.726 250022 DEBUG nova.virt.libvirt.vif [None req-285d398b-bc35-41f1-80eb-34f9b43ae976 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T15:13:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1998945962',display_name='tempest-TestNetworkAdvancedServerOps-server-1998945962',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1998945962',id=183,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHQtn9WZwWHpZ18eEtTY9zdPNbJgOayUdrvVmR1brDMxwKaiJ8tf9lOFdht6GjVy3Orpnh5Z5LatI7xEKad9rNtjFmwEczk5s4CmWp5ueE54bJ73h+pph+yq2VHvIP5rgg==',key_name='tempest-TestNetworkAdvancedServerOps-1645169738',keypairs=<?>,launch_index=0,launched_at=2026-01-20T15:13:35Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(1),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='1ed5feeeafe7448a8efb47ab975b0ead',ramdisk_id='',reservation_id='r-mvt7thmt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestNetworkAdvancedServerOps-175282664',owner_user_name='tempest-TestNetworkAdvancedServerOps-175282664-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T15:14:06Z,user_data=None,user_id='442a7a5cb8ea426a82be9762b262d171',uuid=128af7d9-155f-468d-9873-98c816f0df9e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9de5453d-b548-429c-8fc2-7b012cb8ebdf", "address": "fa:16:3e:a8:1d:e9", "network": {"id": "d07527d3-7363-453c-9902-c562bab626ba", "bridge": "br-int", "label": "tempest-network-smoke--1250108698", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--1250108698", "vif_mac": "fa:16:3e:a8:1d:e9"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ed5feeeafe7448a8efb47ab975b0ead", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9de5453d-b5", "ovs_interfaceid": "9de5453d-b548-429c-8fc2-7b012cb8ebdf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 15:14:10 compute-0 nova_compute[250018]: 2026-01-20 15:14:10.727 250022 DEBUG nova.network.os_vif_util [None req-285d398b-bc35-41f1-80eb-34f9b43ae976 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] Converting VIF {"id": "9de5453d-b548-429c-8fc2-7b012cb8ebdf", "address": "fa:16:3e:a8:1d:e9", "network": {"id": "d07527d3-7363-453c-9902-c562bab626ba", "bridge": "br-int", "label": "tempest-network-smoke--1250108698", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--1250108698", "vif_mac": "fa:16:3e:a8:1d:e9"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ed5feeeafe7448a8efb47ab975b0ead", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9de5453d-b5", "ovs_interfaceid": "9de5453d-b548-429c-8fc2-7b012cb8ebdf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:14:10 compute-0 nova_compute[250018]: 2026-01-20 15:14:10.728 250022 DEBUG nova.network.os_vif_util [None req-285d398b-bc35-41f1-80eb-34f9b43ae976 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a8:1d:e9,bridge_name='br-int',has_traffic_filtering=True,id=9de5453d-b548-429c-8fc2-7b012cb8ebdf,network=Network(d07527d3-7363-453c-9902-c562bab626ba),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9de5453d-b5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:14:10 compute-0 nova_compute[250018]: 2026-01-20 15:14:10.730 250022 DEBUG nova.virt.libvirt.driver [None req-285d398b-bc35-41f1-80eb-34f9b43ae976 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] End _get_guest_xml xml=<domain type="kvm">
Jan 20 15:14:10 compute-0 nova_compute[250018]:   <uuid>128af7d9-155f-468d-9873-98c816f0df9e</uuid>
Jan 20 15:14:10 compute-0 nova_compute[250018]:   <name>instance-000000b7</name>
Jan 20 15:14:10 compute-0 nova_compute[250018]:   <memory>131072</memory>
Jan 20 15:14:10 compute-0 nova_compute[250018]:   <vcpu>1</vcpu>
Jan 20 15:14:10 compute-0 nova_compute[250018]:   <metadata>
Jan 20 15:14:10 compute-0 nova_compute[250018]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 15:14:10 compute-0 nova_compute[250018]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 15:14:10 compute-0 nova_compute[250018]:       <nova:name>tempest-TestNetworkAdvancedServerOps-server-1998945962</nova:name>
Jan 20 15:14:10 compute-0 nova_compute[250018]:       <nova:creationTime>2026-01-20 15:14:09</nova:creationTime>
Jan 20 15:14:10 compute-0 nova_compute[250018]:       <nova:flavor name="m1.nano">
Jan 20 15:14:10 compute-0 nova_compute[250018]:         <nova:memory>128</nova:memory>
Jan 20 15:14:10 compute-0 nova_compute[250018]:         <nova:disk>1</nova:disk>
Jan 20 15:14:10 compute-0 nova_compute[250018]:         <nova:swap>0</nova:swap>
Jan 20 15:14:10 compute-0 nova_compute[250018]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 15:14:10 compute-0 nova_compute[250018]:         <nova:vcpus>1</nova:vcpus>
Jan 20 15:14:10 compute-0 nova_compute[250018]:       </nova:flavor>
Jan 20 15:14:10 compute-0 nova_compute[250018]:       <nova:owner>
Jan 20 15:14:10 compute-0 nova_compute[250018]:         <nova:user uuid="442a7a5cb8ea426a82be9762b262d171">tempest-TestNetworkAdvancedServerOps-175282664-project-member</nova:user>
Jan 20 15:14:10 compute-0 nova_compute[250018]:         <nova:project uuid="1ed5feeeafe7448a8efb47ab975b0ead">tempest-TestNetworkAdvancedServerOps-175282664</nova:project>
Jan 20 15:14:10 compute-0 nova_compute[250018]:       </nova:owner>
Jan 20 15:14:10 compute-0 nova_compute[250018]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 15:14:10 compute-0 nova_compute[250018]:       <nova:ports>
Jan 20 15:14:10 compute-0 nova_compute[250018]:         <nova:port uuid="9de5453d-b548-429c-8fc2-7b012cb8ebdf">
Jan 20 15:14:10 compute-0 nova_compute[250018]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Jan 20 15:14:10 compute-0 nova_compute[250018]:         </nova:port>
Jan 20 15:14:10 compute-0 nova_compute[250018]:       </nova:ports>
Jan 20 15:14:10 compute-0 nova_compute[250018]:     </nova:instance>
Jan 20 15:14:10 compute-0 nova_compute[250018]:   </metadata>
Jan 20 15:14:10 compute-0 nova_compute[250018]:   <sysinfo type="smbios">
Jan 20 15:14:10 compute-0 nova_compute[250018]:     <system>
Jan 20 15:14:10 compute-0 nova_compute[250018]:       <entry name="manufacturer">RDO</entry>
Jan 20 15:14:10 compute-0 nova_compute[250018]:       <entry name="product">OpenStack Compute</entry>
Jan 20 15:14:10 compute-0 nova_compute[250018]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 15:14:10 compute-0 nova_compute[250018]:       <entry name="serial">128af7d9-155f-468d-9873-98c816f0df9e</entry>
Jan 20 15:14:10 compute-0 nova_compute[250018]:       <entry name="uuid">128af7d9-155f-468d-9873-98c816f0df9e</entry>
Jan 20 15:14:10 compute-0 nova_compute[250018]:       <entry name="family">Virtual Machine</entry>
Jan 20 15:14:10 compute-0 nova_compute[250018]:     </system>
Jan 20 15:14:10 compute-0 nova_compute[250018]:   </sysinfo>
Jan 20 15:14:10 compute-0 nova_compute[250018]:   <os>
Jan 20 15:14:10 compute-0 nova_compute[250018]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 15:14:10 compute-0 nova_compute[250018]:     <boot dev="hd"/>
Jan 20 15:14:10 compute-0 nova_compute[250018]:     <smbios mode="sysinfo"/>
Jan 20 15:14:10 compute-0 nova_compute[250018]:   </os>
Jan 20 15:14:10 compute-0 nova_compute[250018]:   <features>
Jan 20 15:14:10 compute-0 nova_compute[250018]:     <acpi/>
Jan 20 15:14:10 compute-0 nova_compute[250018]:     <apic/>
Jan 20 15:14:10 compute-0 nova_compute[250018]:     <vmcoreinfo/>
Jan 20 15:14:10 compute-0 nova_compute[250018]:   </features>
Jan 20 15:14:10 compute-0 nova_compute[250018]:   <clock offset="utc">
Jan 20 15:14:10 compute-0 nova_compute[250018]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 15:14:10 compute-0 nova_compute[250018]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 15:14:10 compute-0 nova_compute[250018]:     <timer name="hpet" present="no"/>
Jan 20 15:14:10 compute-0 nova_compute[250018]:   </clock>
Jan 20 15:14:10 compute-0 nova_compute[250018]:   <cpu mode="custom" match="exact">
Jan 20 15:14:10 compute-0 nova_compute[250018]:     <model>Nehalem</model>
Jan 20 15:14:10 compute-0 nova_compute[250018]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 15:14:10 compute-0 nova_compute[250018]:   </cpu>
Jan 20 15:14:10 compute-0 nova_compute[250018]:   <devices>
Jan 20 15:14:10 compute-0 nova_compute[250018]:     <disk type="network" device="disk">
Jan 20 15:14:10 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 15:14:10 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/128af7d9-155f-468d-9873-98c816f0df9e_disk">
Jan 20 15:14:10 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 15:14:10 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 15:14:10 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 15:14:10 compute-0 nova_compute[250018]:       </source>
Jan 20 15:14:10 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 15:14:10 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 15:14:10 compute-0 nova_compute[250018]:       </auth>
Jan 20 15:14:10 compute-0 nova_compute[250018]:       <target dev="vda" bus="virtio"/>
Jan 20 15:14:10 compute-0 nova_compute[250018]:     </disk>
Jan 20 15:14:10 compute-0 nova_compute[250018]:     <disk type="network" device="cdrom">
Jan 20 15:14:10 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 15:14:10 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/128af7d9-155f-468d-9873-98c816f0df9e_disk.config">
Jan 20 15:14:10 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 15:14:10 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 15:14:10 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 15:14:10 compute-0 nova_compute[250018]:       </source>
Jan 20 15:14:10 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 15:14:10 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 15:14:10 compute-0 nova_compute[250018]:       </auth>
Jan 20 15:14:10 compute-0 nova_compute[250018]:       <target dev="sda" bus="sata"/>
Jan 20 15:14:10 compute-0 nova_compute[250018]:     </disk>
Jan 20 15:14:10 compute-0 nova_compute[250018]:     <interface type="ethernet">
Jan 20 15:14:10 compute-0 nova_compute[250018]:       <mac address="fa:16:3e:a8:1d:e9"/>
Jan 20 15:14:10 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 15:14:10 compute-0 nova_compute[250018]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 15:14:10 compute-0 nova_compute[250018]:       <mtu size="1442"/>
Jan 20 15:14:10 compute-0 nova_compute[250018]:       <target dev="tap9de5453d-b5"/>
Jan 20 15:14:10 compute-0 nova_compute[250018]:     </interface>
Jan 20 15:14:10 compute-0 nova_compute[250018]:     <serial type="pty">
Jan 20 15:14:10 compute-0 nova_compute[250018]:       <log file="/var/lib/nova/instances/128af7d9-155f-468d-9873-98c816f0df9e/console.log" append="off"/>
Jan 20 15:14:10 compute-0 nova_compute[250018]:     </serial>
Jan 20 15:14:10 compute-0 nova_compute[250018]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 15:14:10 compute-0 nova_compute[250018]:     <video>
Jan 20 15:14:10 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 15:14:10 compute-0 nova_compute[250018]:     </video>
Jan 20 15:14:10 compute-0 nova_compute[250018]:     <input type="tablet" bus="usb"/>
Jan 20 15:14:10 compute-0 nova_compute[250018]:     <rng model="virtio">
Jan 20 15:14:10 compute-0 nova_compute[250018]:       <backend model="random">/dev/urandom</backend>
Jan 20 15:14:10 compute-0 nova_compute[250018]:     </rng>
Jan 20 15:14:10 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root"/>
Jan 20 15:14:10 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:14:10 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:14:10 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:14:10 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:14:10 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:14:10 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:14:10 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:14:10 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:14:10 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:14:10 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:14:10 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:14:10 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:14:10 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:14:10 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:14:10 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:14:10 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:14:10 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:14:10 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:14:10 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:14:10 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:14:10 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:14:10 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:14:10 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:14:10 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:14:10 compute-0 nova_compute[250018]:     <controller type="usb" index="0"/>
Jan 20 15:14:10 compute-0 nova_compute[250018]:     <memballoon model="virtio">
Jan 20 15:14:10 compute-0 nova_compute[250018]:       <stats period="10"/>
Jan 20 15:14:10 compute-0 nova_compute[250018]:     </memballoon>
Jan 20 15:14:10 compute-0 nova_compute[250018]:   </devices>
Jan 20 15:14:10 compute-0 nova_compute[250018]: </domain>
Jan 20 15:14:10 compute-0 nova_compute[250018]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 15:14:10 compute-0 nova_compute[250018]: 2026-01-20 15:14:10.731 250022 DEBUG nova.virt.libvirt.vif [None req-285d398b-bc35-41f1-80eb-34f9b43ae976 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T15:13:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1998945962',display_name='tempest-TestNetworkAdvancedServerOps-server-1998945962',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1998945962',id=183,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHQtn9WZwWHpZ18eEtTY9zdPNbJgOayUdrvVmR1brDMxwKaiJ8tf9lOFdht6GjVy3Orpnh5Z5LatI7xEKad9rNtjFmwEczk5s4CmWp5ueE54bJ73h+pph+yq2VHvIP5rgg==',key_name='tempest-TestNetworkAdvancedServerOps-1645169738',keypairs=<?>,launch_index=0,launched_at=2026-01-20T15:13:35Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(1),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='1ed5feeeafe7448a8efb47ab975b0ead',ramdisk_id='',reservation_id='r-mvt7thmt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestNetworkAdvancedServerOps-175282664',owner_user_name='tempest-TestNetworkAdvancedServerOps-175282664-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T15:14:06Z,user_data=None,user_id='442a7a5cb8ea426a82be9762b262d171',uuid=128af7d9-155f-468d-9873-98c816f0df9e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9de5453d-b548-429c-8fc2-7b012cb8ebdf", "address": "fa:16:3e:a8:1d:e9", "network": {"id": "d07527d3-7363-453c-9902-c562bab626ba", "bridge": "br-int", "label": "tempest-network-smoke--1250108698", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--1250108698", "vif_mac": "fa:16:3e:a8:1d:e9"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ed5feeeafe7448a8efb47ab975b0ead", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9de5453d-b5", "ovs_interfaceid": "9de5453d-b548-429c-8fc2-7b012cb8ebdf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 15:14:10 compute-0 nova_compute[250018]: 2026-01-20 15:14:10.732 250022 DEBUG nova.network.os_vif_util [None req-285d398b-bc35-41f1-80eb-34f9b43ae976 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] Converting VIF {"id": "9de5453d-b548-429c-8fc2-7b012cb8ebdf", "address": "fa:16:3e:a8:1d:e9", "network": {"id": "d07527d3-7363-453c-9902-c562bab626ba", "bridge": "br-int", "label": "tempest-network-smoke--1250108698", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--1250108698", "vif_mac": "fa:16:3e:a8:1d:e9"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ed5feeeafe7448a8efb47ab975b0ead", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9de5453d-b5", "ovs_interfaceid": "9de5453d-b548-429c-8fc2-7b012cb8ebdf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:14:10 compute-0 nova_compute[250018]: 2026-01-20 15:14:10.732 250022 DEBUG nova.network.os_vif_util [None req-285d398b-bc35-41f1-80eb-34f9b43ae976 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a8:1d:e9,bridge_name='br-int',has_traffic_filtering=True,id=9de5453d-b548-429c-8fc2-7b012cb8ebdf,network=Network(d07527d3-7363-453c-9902-c562bab626ba),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9de5453d-b5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:14:10 compute-0 nova_compute[250018]: 2026-01-20 15:14:10.733 250022 DEBUG os_vif [None req-285d398b-bc35-41f1-80eb-34f9b43ae976 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a8:1d:e9,bridge_name='br-int',has_traffic_filtering=True,id=9de5453d-b548-429c-8fc2-7b012cb8ebdf,network=Network(d07527d3-7363-453c-9902-c562bab626ba),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9de5453d-b5') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 15:14:10 compute-0 nova_compute[250018]: 2026-01-20 15:14:10.733 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:14:10 compute-0 nova_compute[250018]: 2026-01-20 15:14:10.734 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:14:10 compute-0 nova_compute[250018]: 2026-01-20 15:14:10.734 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:14:10 compute-0 nova_compute[250018]: 2026-01-20 15:14:10.737 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:14:10 compute-0 nova_compute[250018]: 2026-01-20 15:14:10.737 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9de5453d-b5, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:14:10 compute-0 nova_compute[250018]: 2026-01-20 15:14:10.738 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9de5453d-b5, col_values=(('external_ids', {'iface-id': '9de5453d-b548-429c-8fc2-7b012cb8ebdf', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a8:1d:e9', 'vm-uuid': '128af7d9-155f-468d-9873-98c816f0df9e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:14:10 compute-0 nova_compute[250018]: 2026-01-20 15:14:10.739 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:14:10 compute-0 NetworkManager[48960]: <info>  [1768922050.7404] manager: (tap9de5453d-b5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/324)
Jan 20 15:14:10 compute-0 nova_compute[250018]: 2026-01-20 15:14:10.742 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 15:14:10 compute-0 nova_compute[250018]: 2026-01-20 15:14:10.746 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:14:10 compute-0 nova_compute[250018]: 2026-01-20 15:14:10.747 250022 INFO os_vif [None req-285d398b-bc35-41f1-80eb-34f9b43ae976 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a8:1d:e9,bridge_name='br-int',has_traffic_filtering=True,id=9de5453d-b548-429c-8fc2-7b012cb8ebdf,network=Network(d07527d3-7363-453c-9902-c562bab626ba),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9de5453d-b5')
Jan 20 15:14:10 compute-0 nova_compute[250018]: 2026-01-20 15:14:10.799 250022 DEBUG nova.virt.libvirt.driver [None req-285d398b-bc35-41f1-80eb-34f9b43ae976 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:14:10 compute-0 nova_compute[250018]: 2026-01-20 15:14:10.799 250022 DEBUG nova.virt.libvirt.driver [None req-285d398b-bc35-41f1-80eb-34f9b43ae976 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:14:10 compute-0 nova_compute[250018]: 2026-01-20 15:14:10.799 250022 DEBUG nova.virt.libvirt.driver [None req-285d398b-bc35-41f1-80eb-34f9b43ae976 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] No VIF found with MAC fa:16:3e:a8:1d:e9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 15:14:10 compute-0 nova_compute[250018]: 2026-01-20 15:14:10.800 250022 INFO nova.virt.libvirt.driver [None req-285d398b-bc35-41f1-80eb-34f9b43ae976 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Using config drive
Jan 20 15:14:10 compute-0 kernel: tap9de5453d-b5: entered promiscuous mode
Jan 20 15:14:10 compute-0 NetworkManager[48960]: <info>  [1768922050.8831] manager: (tap9de5453d-b5): new Tun device (/org/freedesktop/NetworkManager/Devices/325)
Jan 20 15:14:10 compute-0 ovn_controller[148666]: 2026-01-20T15:14:10Z|00663|binding|INFO|Claiming lport 9de5453d-b548-429c-8fc2-7b012cb8ebdf for this chassis.
Jan 20 15:14:10 compute-0 ovn_controller[148666]: 2026-01-20T15:14:10Z|00664|binding|INFO|9de5453d-b548-429c-8fc2-7b012cb8ebdf: Claiming fa:16:3e:a8:1d:e9 10.100.0.4
Jan 20 15:14:10 compute-0 nova_compute[250018]: 2026-01-20 15:14:10.884 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:14:10 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:14:10.891 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a8:1d:e9 10.100.0.4'], port_security=['fa:16:3e:a8:1d:e9 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '128af7d9-155f-468d-9873-98c816f0df9e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d07527d3-7363-453c-9902-c562bab626ba', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1ed5feeeafe7448a8efb47ab975b0ead', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'b4895263-5fc5-4c5a-ab8d-547f570bc095', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.233'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cf4d49b6-1d42-4171-8055-0d823fb37e66, chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=9de5453d-b548-429c-8fc2-7b012cb8ebdf) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:14:10 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:14:10.892 160071 INFO neutron.agent.ovn.metadata.agent [-] Port 9de5453d-b548-429c-8fc2-7b012cb8ebdf in datapath d07527d3-7363-453c-9902-c562bab626ba bound to our chassis
Jan 20 15:14:10 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:14:10.894 160071 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d07527d3-7363-453c-9902-c562bab626ba
Jan 20 15:14:10 compute-0 ovn_controller[148666]: 2026-01-20T15:14:10Z|00665|binding|INFO|Setting lport 9de5453d-b548-429c-8fc2-7b012cb8ebdf ovn-installed in OVS
Jan 20 15:14:10 compute-0 ovn_controller[148666]: 2026-01-20T15:14:10Z|00666|binding|INFO|Setting lport 9de5453d-b548-429c-8fc2-7b012cb8ebdf up in Southbound
Jan 20 15:14:10 compute-0 nova_compute[250018]: 2026-01-20 15:14:10.906 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:14:10 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:14:10.909 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[351d9080-9021-4978-879f-314112a589c5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:14:10 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:14:10.910 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd07527d3-71 in ovnmeta-d07527d3-7363-453c-9902-c562bab626ba namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 15:14:10 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:14:10.912 257604 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd07527d3-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 15:14:10 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:14:10.913 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[4fab57f5-3144-45d0-bfc7-af488e9349df]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:14:10 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:14:10.914 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[627bf8d2-9278-49a8-bdbe-b16153a2f4ff]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:14:10 compute-0 systemd-machined[216401]: New machine qemu-82-instance-000000b7.
Jan 20 15:14:10 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:14:10.926 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[360f0cde-568c-40ac-8d0e-796f0ddcd2b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:14:10 compute-0 systemd-udevd[359182]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 15:14:10 compute-0 NetworkManager[48960]: <info>  [1768922050.9446] device (tap9de5453d-b5): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 15:14:10 compute-0 NetworkManager[48960]: <info>  [1768922050.9451] device (tap9de5453d-b5): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 15:14:10 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:14:10.949 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[f53278ba-cdd2-4325-9d80-4fe28b7673cf]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:14:10 compute-0 systemd[1]: Started Virtual Machine qemu-82-instance-000000b7.
Jan 20 15:14:10 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:14:10.981 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[6c68d525-c274-40d9-92de-72b0a8574264]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:14:10 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:14:10.988 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[2d7d5208-92ad-47c0-8de7-dce5d9edc91b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:14:10 compute-0 NetworkManager[48960]: <info>  [1768922050.9888] manager: (tapd07527d3-70): new Veth device (/org/freedesktop/NetworkManager/Devices/326)
Jan 20 15:14:11 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:14:11.020 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[38f2bc76-93d5-4458-a421-338565236271]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:14:11 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:14:11.024 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[4d3b7e59-78e9-4a95-bddf-a99fff8da7e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:14:11 compute-0 NetworkManager[48960]: <info>  [1768922051.0487] device (tapd07527d3-70): carrier: link connected
Jan 20 15:14:11 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:14:11.052 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[4e1f2be2-71d3-4790-8bc6-2508ff4e99c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:14:11 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:14:11.071 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[662da89e-5ec3-4e36-8878-ae7a61e7ccd9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd07527d3-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6f:33:a8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 212], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 799177, 'reachable_time': 33121, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 359213, 'error': None, 'target': 'ovnmeta-d07527d3-7363-453c-9902-c562bab626ba', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:14:11 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:14:11.086 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[02505567-1908-4b57-be81-0bd53e82019d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe6f:33a8'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 799177, 'tstamp': 799177}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 359214, 'error': None, 'target': 'ovnmeta-d07527d3-7363-453c-9902-c562bab626ba', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:14:11 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:14:11.102 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[56c1cffa-d8f7-4eee-8558-99b54badeb9f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd07527d3-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6f:33:a8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 212], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 799177, 'reachable_time': 33121, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 359215, 'error': None, 'target': 'ovnmeta-d07527d3-7363-453c-9902-c562bab626ba', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:14:11 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:14:11.133 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[299a87be-b26a-4029-b4c3-71c25480759b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:14:11 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:14:11.201 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[43b92e49-4c78-428f-8266-0ed9660a69df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:14:11 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:14:11.202 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd07527d3-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:14:11 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:14:11.203 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:14:11 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:14:11.203 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd07527d3-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:14:11 compute-0 kernel: tapd07527d3-70: entered promiscuous mode
Jan 20 15:14:11 compute-0 nova_compute[250018]: 2026-01-20 15:14:11.204 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:14:11 compute-0 NetworkManager[48960]: <info>  [1768922051.2054] manager: (tapd07527d3-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/327)
Jan 20 15:14:11 compute-0 nova_compute[250018]: 2026-01-20 15:14:11.206 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:14:11 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:14:11.213 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd07527d3-70, col_values=(('external_ids', {'iface-id': '311d5bf2-0b44-4ce1-9ec1-e7458d5df232'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:14:11 compute-0 nova_compute[250018]: 2026-01-20 15:14:11.214 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:14:11 compute-0 ovn_controller[148666]: 2026-01-20T15:14:11Z|00667|binding|INFO|Releasing lport 311d5bf2-0b44-4ce1-9ec1-e7458d5df232 from this chassis (sb_readonly=0)
Jan 20 15:14:11 compute-0 nova_compute[250018]: 2026-01-20 15:14:11.215 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:14:11 compute-0 nova_compute[250018]: 2026-01-20 15:14:11.228 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:14:11 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:14:11.229 160071 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d07527d3-7363-453c-9902-c562bab626ba.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d07527d3-7363-453c-9902-c562bab626ba.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 15:14:11 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:14:11.229 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[c6ae7bc0-8470-461c-b681-669ba79ebd25]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:14:11 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:14:11.230 160071 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 15:14:11 compute-0 ovn_metadata_agent[160049]: global
Jan 20 15:14:11 compute-0 ovn_metadata_agent[160049]:     log         /dev/log local0 debug
Jan 20 15:14:11 compute-0 ovn_metadata_agent[160049]:     log-tag     haproxy-metadata-proxy-d07527d3-7363-453c-9902-c562bab626ba
Jan 20 15:14:11 compute-0 ovn_metadata_agent[160049]:     user        root
Jan 20 15:14:11 compute-0 ovn_metadata_agent[160049]:     group       root
Jan 20 15:14:11 compute-0 ovn_metadata_agent[160049]:     maxconn     1024
Jan 20 15:14:11 compute-0 ovn_metadata_agent[160049]:     pidfile     /var/lib/neutron/external/pids/d07527d3-7363-453c-9902-c562bab626ba.pid.haproxy
Jan 20 15:14:11 compute-0 ovn_metadata_agent[160049]:     daemon
Jan 20 15:14:11 compute-0 ovn_metadata_agent[160049]: 
Jan 20 15:14:11 compute-0 ovn_metadata_agent[160049]: defaults
Jan 20 15:14:11 compute-0 ovn_metadata_agent[160049]:     log global
Jan 20 15:14:11 compute-0 ovn_metadata_agent[160049]:     mode http
Jan 20 15:14:11 compute-0 ovn_metadata_agent[160049]:     option httplog
Jan 20 15:14:11 compute-0 ovn_metadata_agent[160049]:     option dontlognull
Jan 20 15:14:11 compute-0 ovn_metadata_agent[160049]:     option http-server-close
Jan 20 15:14:11 compute-0 ovn_metadata_agent[160049]:     option forwardfor
Jan 20 15:14:11 compute-0 ovn_metadata_agent[160049]:     retries                 3
Jan 20 15:14:11 compute-0 ovn_metadata_agent[160049]:     timeout http-request    30s
Jan 20 15:14:11 compute-0 ovn_metadata_agent[160049]:     timeout connect         30s
Jan 20 15:14:11 compute-0 ovn_metadata_agent[160049]:     timeout client          32s
Jan 20 15:14:11 compute-0 ovn_metadata_agent[160049]:     timeout server          32s
Jan 20 15:14:11 compute-0 ovn_metadata_agent[160049]:     timeout http-keep-alive 30s
Jan 20 15:14:11 compute-0 ovn_metadata_agent[160049]: 
Jan 20 15:14:11 compute-0 ovn_metadata_agent[160049]: 
Jan 20 15:14:11 compute-0 ovn_metadata_agent[160049]: listen listener
Jan 20 15:14:11 compute-0 ovn_metadata_agent[160049]:     bind 169.254.169.254:80
Jan 20 15:14:11 compute-0 ovn_metadata_agent[160049]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 15:14:11 compute-0 ovn_metadata_agent[160049]:     http-request add-header X-OVN-Network-ID d07527d3-7363-453c-9902-c562bab626ba
Jan 20 15:14:11 compute-0 ovn_metadata_agent[160049]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 15:14:11 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:14:11.230 160071 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d07527d3-7363-453c-9902-c562bab626ba', 'env', 'PROCESS_TAG=haproxy-d07527d3-7363-453c-9902-c562bab626ba', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d07527d3-7363-453c-9902-c562bab626ba.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 15:14:11 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:14:11 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:14:11 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:14:11.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:14:11 compute-0 podman[359247]: 2026-01-20 15:14:11.574917845 +0000 UTC m=+0.047266457 container create 93017a4df31f84f1c3b95d578b348dca2119306eada501b5230b4dba5b126f51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d07527d3-7363-453c-9902-c562bab626ba, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 20 15:14:11 compute-0 systemd[1]: Started libpod-conmon-93017a4df31f84f1c3b95d578b348dca2119306eada501b5230b4dba5b126f51.scope.
Jan 20 15:14:11 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:14:11 compute-0 podman[359247]: 2026-01-20 15:14:11.549848948 +0000 UTC m=+0.022197560 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 15:14:11 compute-0 ceph-mon[74360]: pgmap v2740: 321 pgs: 321 active+clean; 486 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.9 MiB/s wr, 161 op/s
Jan 20 15:14:11 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3331031870' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:14:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea40564ba393d1fb9e1e8d4be7c749148d2403cfbd736ca172e3ab669f281fed/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 15:14:11 compute-0 podman[359247]: 2026-01-20 15:14:11.672329233 +0000 UTC m=+0.144677845 container init 93017a4df31f84f1c3b95d578b348dca2119306eada501b5230b4dba5b126f51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d07527d3-7363-453c-9902-c562bab626ba, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:14:11 compute-0 podman[359247]: 2026-01-20 15:14:11.680372809 +0000 UTC m=+0.152721401 container start 93017a4df31f84f1c3b95d578b348dca2119306eada501b5230b4dba5b126f51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d07527d3-7363-453c-9902-c562bab626ba, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202)
Jan 20 15:14:11 compute-0 neutron-haproxy-ovnmeta-d07527d3-7363-453c-9902-c562bab626ba[359287]: [NOTICE]   (359307) : New worker (359309) forked
Jan 20 15:14:11 compute-0 neutron-haproxy-ovnmeta-d07527d3-7363-453c-9902-c562bab626ba[359287]: [NOTICE]   (359307) : Loading success.
Jan 20 15:14:11 compute-0 nova_compute[250018]: 2026-01-20 15:14:11.749 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768922051.7494009, 128af7d9-155f-468d-9873-98c816f0df9e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:14:11 compute-0 nova_compute[250018]: 2026-01-20 15:14:11.750 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] VM Resumed (Lifecycle Event)
Jan 20 15:14:11 compute-0 nova_compute[250018]: 2026-01-20 15:14:11.752 250022 DEBUG nova.compute.manager [None req-285d398b-bc35-41f1-80eb-34f9b43ae976 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 15:14:11 compute-0 nova_compute[250018]: 2026-01-20 15:14:11.754 250022 INFO nova.virt.libvirt.driver [-] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Instance running successfully.
Jan 20 15:14:11 compute-0 virtqemud[249565]: argument unsupported: QEMU guest agent is not configured
Jan 20 15:14:11 compute-0 nova_compute[250018]: 2026-01-20 15:14:11.756 250022 DEBUG nova.virt.libvirt.guest [None req-285d398b-bc35-41f1-80eb-34f9b43ae976 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Jan 20 15:14:11 compute-0 nova_compute[250018]: 2026-01-20 15:14:11.756 250022 DEBUG nova.virt.libvirt.driver [None req-285d398b-bc35-41f1-80eb-34f9b43ae976 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] finish_migration finished successfully. finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11793
Jan 20 15:14:11 compute-0 nova_compute[250018]: 2026-01-20 15:14:11.784 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:14:11 compute-0 nova_compute[250018]: 2026-01-20 15:14:11.787 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:14:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 15:14:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:14:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 20 15:14:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:14:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.005353918027638191 of space, bias 1.0, pg target 1.6061754082914574 quantized to 32 (current 32)
Jan 20 15:14:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:14:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.006489514993951237 of space, bias 1.0, pg target 1.94036498319142 quantized to 32 (current 32)
Jan 20 15:14:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:14:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.416259538432905e-05 quantized to 32 (current 32)
Jan 20 15:14:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:14:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5671365362693095 quantized to 32 (current 32)
Jan 20 15:14:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:14:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017332030522985297 quantized to 16 (current 32)
Jan 20 15:14:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:14:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 15:14:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:14:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.0002166503815373162 quantized to 32 (current 32)
Jan 20 15:14:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:14:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018415282430671877 quantized to 32 (current 32)
Jan 20 15:14:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:14:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 15:14:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:14:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004333007630746324 quantized to 32 (current 32)
Jan 20 15:14:11 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e398 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:14:11 compute-0 nova_compute[250018]: 2026-01-20 15:14:11.821 250022 DEBUG nova.network.neutron [req-6c55cbfe-8bca-4b99-b488-8c07175e349c req-10bdd897-4ba8-49bf-b9ce-254d974383ed 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Updated VIF entry in instance network info cache for port 9de5453d-b548-429c-8fc2-7b012cb8ebdf. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 15:14:11 compute-0 nova_compute[250018]: 2026-01-20 15:14:11.822 250022 DEBUG nova.network.neutron [req-6c55cbfe-8bca-4b99-b488-8c07175e349c req-10bdd897-4ba8-49bf-b9ce-254d974383ed 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Updating instance_info_cache with network_info: [{"id": "9de5453d-b548-429c-8fc2-7b012cb8ebdf", "address": "fa:16:3e:a8:1d:e9", "network": {"id": "d07527d3-7363-453c-9902-c562bab626ba", "bridge": "br-int", "label": "tempest-network-smoke--1250108698", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ed5feeeafe7448a8efb47ab975b0ead", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9de5453d-b5", "ovs_interfaceid": "9de5453d-b548-429c-8fc2-7b012cb8ebdf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:14:11 compute-0 nova_compute[250018]: 2026-01-20 15:14:11.842 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] During sync_power_state the instance has a pending task (resize_finish). Skip.
Jan 20 15:14:11 compute-0 nova_compute[250018]: 2026-01-20 15:14:11.842 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768922051.7502105, 128af7d9-155f-468d-9873-98c816f0df9e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:14:11 compute-0 nova_compute[250018]: 2026-01-20 15:14:11.842 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] VM Started (Lifecycle Event)
Jan 20 15:14:11 compute-0 nova_compute[250018]: 2026-01-20 15:14:11.847 250022 DEBUG oslo_concurrency.lockutils [req-6c55cbfe-8bca-4b99-b488-8c07175e349c req-10bdd897-4ba8-49bf-b9ce-254d974383ed 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-128af7d9-155f-468d-9873-98c816f0df9e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:14:11 compute-0 nova_compute[250018]: 2026-01-20 15:14:11.874 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:14:11 compute-0 nova_compute[250018]: 2026-01-20 15:14:11.877 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:14:11 compute-0 nova_compute[250018]: 2026-01-20 15:14:11.933 250022 DEBUG nova.compute.manager [req-93ecd443-7114-407f-9823-5fa6636357e0 req-4295da27-e483-4903-be84-8847c1316b75 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Received event network-vif-plugged-9de5453d-b548-429c-8fc2-7b012cb8ebdf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:14:11 compute-0 nova_compute[250018]: 2026-01-20 15:14:11.933 250022 DEBUG oslo_concurrency.lockutils [req-93ecd443-7114-407f-9823-5fa6636357e0 req-4295da27-e483-4903-be84-8847c1316b75 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "128af7d9-155f-468d-9873-98c816f0df9e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:14:11 compute-0 nova_compute[250018]: 2026-01-20 15:14:11.934 250022 DEBUG oslo_concurrency.lockutils [req-93ecd443-7114-407f-9823-5fa6636357e0 req-4295da27-e483-4903-be84-8847c1316b75 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "128af7d9-155f-468d-9873-98c816f0df9e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:14:11 compute-0 nova_compute[250018]: 2026-01-20 15:14:11.934 250022 DEBUG oslo_concurrency.lockutils [req-93ecd443-7114-407f-9823-5fa6636357e0 req-4295da27-e483-4903-be84-8847c1316b75 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "128af7d9-155f-468d-9873-98c816f0df9e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:14:11 compute-0 nova_compute[250018]: 2026-01-20 15:14:11.934 250022 DEBUG nova.compute.manager [req-93ecd443-7114-407f-9823-5fa6636357e0 req-4295da27-e483-4903-be84-8847c1316b75 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] No waiting events found dispatching network-vif-plugged-9de5453d-b548-429c-8fc2-7b012cb8ebdf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:14:11 compute-0 nova_compute[250018]: 2026-01-20 15:14:11.934 250022 WARNING nova.compute.manager [req-93ecd443-7114-407f-9823-5fa6636357e0 req-4295da27-e483-4903-be84-8847c1316b75 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Received unexpected event network-vif-plugged-9de5453d-b548-429c-8fc2-7b012cb8ebdf for instance with vm_state resized and task_state None.
Jan 20 15:14:12 compute-0 ceph-osd[84815]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/lock/cls_lock.cc:291: Could not read list of current lockers off disk: (2) No such file or directory
Jan 20 15:14:12 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:14:12 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:14:12 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:14:12.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:14:12 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2741: 321 pgs: 321 active+clean; 486 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 108 KiB/s wr, 182 op/s
Jan 20 15:14:12 compute-0 ceph-mon[74360]: pgmap v2741: 321 pgs: 321 active+clean; 486 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 108 KiB/s wr, 182 op/s
Jan 20 15:14:13 compute-0 systemd[1]: Stopping User Manager for UID 42436...
Jan 20 15:14:13 compute-0 systemd[358913]: Activating special unit Exit the Session...
Jan 20 15:14:13 compute-0 systemd[358913]: Stopped target Main User Target.
Jan 20 15:14:13 compute-0 systemd[358913]: Stopped target Basic System.
Jan 20 15:14:13 compute-0 systemd[358913]: Stopped target Paths.
Jan 20 15:14:13 compute-0 systemd[358913]: Stopped target Sockets.
Jan 20 15:14:13 compute-0 systemd[358913]: Stopped target Timers.
Jan 20 15:14:13 compute-0 systemd[358913]: Stopped Mark boot as successful after the user session has run 2 minutes.
Jan 20 15:14:13 compute-0 systemd[358913]: Stopped Daily Cleanup of User's Temporary Directories.
Jan 20 15:14:13 compute-0 systemd[358913]: Closed D-Bus User Message Bus Socket.
Jan 20 15:14:13 compute-0 systemd[358913]: Stopped Create User's Volatile Files and Directories.
Jan 20 15:14:13 compute-0 systemd[358913]: Removed slice User Application Slice.
Jan 20 15:14:13 compute-0 systemd[358913]: Reached target Shutdown.
Jan 20 15:14:13 compute-0 systemd[358913]: Finished Exit the Session.
Jan 20 15:14:13 compute-0 systemd[358913]: Reached target Exit the Session.
Jan 20 15:14:13 compute-0 systemd[1]: user@42436.service: Deactivated successfully.
Jan 20 15:14:13 compute-0 systemd[1]: Stopped User Manager for UID 42436.
Jan 20 15:14:13 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/42436...
Jan 20 15:14:13 compute-0 systemd[1]: run-user-42436.mount: Deactivated successfully.
Jan 20 15:14:13 compute-0 systemd[1]: user-runtime-dir@42436.service: Deactivated successfully.
Jan 20 15:14:13 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/42436.
Jan 20 15:14:13 compute-0 systemd[1]: Removed slice User Slice of UID 42436.
Jan 20 15:14:13 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:14:13 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:14:13 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:14:13.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:14:13 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 20 15:14:13 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1373685837' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:14:13 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 20 15:14:13 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1373685837' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:14:13 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/1373685837' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:14:13 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/1373685837' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:14:14 compute-0 nova_compute[250018]: 2026-01-20 15:14:14.035 250022 DEBUG nova.compute.manager [req-312b01be-d4e9-4906-9088-3b048ff4cb74 req-4f600fd2-4dcf-4a37-a270-b33776175ea4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Received event network-vif-plugged-9de5453d-b548-429c-8fc2-7b012cb8ebdf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:14:14 compute-0 nova_compute[250018]: 2026-01-20 15:14:14.035 250022 DEBUG oslo_concurrency.lockutils [req-312b01be-d4e9-4906-9088-3b048ff4cb74 req-4f600fd2-4dcf-4a37-a270-b33776175ea4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "128af7d9-155f-468d-9873-98c816f0df9e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:14:14 compute-0 nova_compute[250018]: 2026-01-20 15:14:14.035 250022 DEBUG oslo_concurrency.lockutils [req-312b01be-d4e9-4906-9088-3b048ff4cb74 req-4f600fd2-4dcf-4a37-a270-b33776175ea4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "128af7d9-155f-468d-9873-98c816f0df9e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:14:14 compute-0 nova_compute[250018]: 2026-01-20 15:14:14.035 250022 DEBUG oslo_concurrency.lockutils [req-312b01be-d4e9-4906-9088-3b048ff4cb74 req-4f600fd2-4dcf-4a37-a270-b33776175ea4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "128af7d9-155f-468d-9873-98c816f0df9e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:14:14 compute-0 nova_compute[250018]: 2026-01-20 15:14:14.036 250022 DEBUG nova.compute.manager [req-312b01be-d4e9-4906-9088-3b048ff4cb74 req-4f600fd2-4dcf-4a37-a270-b33776175ea4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] No waiting events found dispatching network-vif-plugged-9de5453d-b548-429c-8fc2-7b012cb8ebdf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:14:14 compute-0 nova_compute[250018]: 2026-01-20 15:14:14.036 250022 WARNING nova.compute.manager [req-312b01be-d4e9-4906-9088-3b048ff4cb74 req-4f600fd2-4dcf-4a37-a270-b33776175ea4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Received unexpected event network-vif-plugged-9de5453d-b548-429c-8fc2-7b012cb8ebdf for instance with vm_state resized and task_state resize_reverting.
Jan 20 15:14:14 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:14:14 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:14:14 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:14:14.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:14:14 compute-0 nova_compute[250018]: 2026-01-20 15:14:14.286 250022 DEBUG nova.network.neutron [None req-ee866e29-0f1e-4cbf-b426-74bd65a7b594 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Port 9de5453d-b548-429c-8fc2-7b012cb8ebdf binding to destination host compute-0.ctlplane.example.com is already ACTIVE migrate_instance_start /usr/lib/python3.9/site-packages/nova/network/neutron.py:3171
Jan 20 15:14:14 compute-0 nova_compute[250018]: 2026-01-20 15:14:14.286 250022 DEBUG oslo_concurrency.lockutils [None req-ee866e29-0f1e-4cbf-b426-74bd65a7b594 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Acquiring lock "refresh_cache-128af7d9-155f-468d-9873-98c816f0df9e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:14:14 compute-0 nova_compute[250018]: 2026-01-20 15:14:14.286 250022 DEBUG oslo_concurrency.lockutils [None req-ee866e29-0f1e-4cbf-b426-74bd65a7b594 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Acquired lock "refresh_cache-128af7d9-155f-468d-9873-98c816f0df9e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:14:14 compute-0 nova_compute[250018]: 2026-01-20 15:14:14.287 250022 DEBUG nova.network.neutron [None req-ee866e29-0f1e-4cbf-b426-74bd65a7b594 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 15:14:14 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2742: 321 pgs: 321 active+clean; 486 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 93 KiB/s wr, 201 op/s
Jan 20 15:14:14 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 15:14:14 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3444125027' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:14:14 compute-0 ceph-mon[74360]: pgmap v2742: 321 pgs: 321 active+clean; 486 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 93 KiB/s wr, 201 op/s
Jan 20 15:14:14 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/3444125027' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:14:15 compute-0 nova_compute[250018]: 2026-01-20 15:14:15.073 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:14:15 compute-0 nova_compute[250018]: 2026-01-20 15:14:15.074 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 15:14:15 compute-0 nova_compute[250018]: 2026-01-20 15:14:15.074 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 15:14:15 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:14:15 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:14:15 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:14:15.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:14:15 compute-0 nova_compute[250018]: 2026-01-20 15:14:15.253 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "refresh_cache-e79c0704-f95e-422f-9c25-ed35fca7cb7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:14:15 compute-0 nova_compute[250018]: 2026-01-20 15:14:15.253 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquired lock "refresh_cache-e79c0704-f95e-422f-9c25-ed35fca7cb7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:14:15 compute-0 nova_compute[250018]: 2026-01-20 15:14:15.254 250022 DEBUG nova.network.neutron [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] [instance: e79c0704-f95e-422f-9c25-ed35fca7cb7c] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 20 15:14:15 compute-0 nova_compute[250018]: 2026-01-20 15:14:15.254 250022 DEBUG nova.objects.instance [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lazy-loading 'info_cache' on Instance uuid e79c0704-f95e-422f-9c25-ed35fca7cb7c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:14:15 compute-0 nova_compute[250018]: 2026-01-20 15:14:15.312 250022 DEBUG nova.network.neutron [None req-ee866e29-0f1e-4cbf-b426-74bd65a7b594 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Updating instance_info_cache with network_info: [{"id": "9de5453d-b548-429c-8fc2-7b012cb8ebdf", "address": "fa:16:3e:a8:1d:e9", "network": {"id": "d07527d3-7363-453c-9902-c562bab626ba", "bridge": "br-int", "label": "tempest-network-smoke--1250108698", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ed5feeeafe7448a8efb47ab975b0ead", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9de5453d-b5", "ovs_interfaceid": "9de5453d-b548-429c-8fc2-7b012cb8ebdf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:14:15 compute-0 nova_compute[250018]: 2026-01-20 15:14:15.331 250022 DEBUG oslo_concurrency.lockutils [None req-ee866e29-0f1e-4cbf-b426-74bd65a7b594 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Releasing lock "refresh_cache-128af7d9-155f-468d-9873-98c816f0df9e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:14:15 compute-0 nova_compute[250018]: 2026-01-20 15:14:15.345 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:14:15 compute-0 kernel: tap9de5453d-b5 (unregistering): left promiscuous mode
Jan 20 15:14:15 compute-0 NetworkManager[48960]: <info>  [1768922055.3890] device (tap9de5453d-b5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 15:14:15 compute-0 nova_compute[250018]: 2026-01-20 15:14:15.397 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:14:15 compute-0 ovn_controller[148666]: 2026-01-20T15:14:15Z|00668|binding|INFO|Releasing lport 9de5453d-b548-429c-8fc2-7b012cb8ebdf from this chassis (sb_readonly=0)
Jan 20 15:14:15 compute-0 ovn_controller[148666]: 2026-01-20T15:14:15Z|00669|binding|INFO|Setting lport 9de5453d-b548-429c-8fc2-7b012cb8ebdf down in Southbound
Jan 20 15:14:15 compute-0 ovn_controller[148666]: 2026-01-20T15:14:15Z|00670|binding|INFO|Removing iface tap9de5453d-b5 ovn-installed in OVS
Jan 20 15:14:15 compute-0 nova_compute[250018]: 2026-01-20 15:14:15.399 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:14:15 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:14:15.404 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a8:1d:e9 10.100.0.4'], port_security=['fa:16:3e:a8:1d:e9 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '128af7d9-155f-468d-9873-98c816f0df9e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d07527d3-7363-453c-9902-c562bab626ba', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1ed5feeeafe7448a8efb47ab975b0ead', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'b4895263-5fc5-4c5a-ab8d-547f570bc095', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.233', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cf4d49b6-1d42-4171-8055-0d823fb37e66, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=9de5453d-b548-429c-8fc2-7b012cb8ebdf) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:14:15 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:14:15.405 160071 INFO neutron.agent.ovn.metadata.agent [-] Port 9de5453d-b548-429c-8fc2-7b012cb8ebdf in datapath d07527d3-7363-453c-9902-c562bab626ba unbound from our chassis
Jan 20 15:14:15 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:14:15.407 160071 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d07527d3-7363-453c-9902-c562bab626ba, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 15:14:15 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:14:15.408 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[0f4f0706-3ae4-4058-8aa2-68db67ef1a90]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:14:15 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:14:15.408 160071 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d07527d3-7363-453c-9902-c562bab626ba namespace which is not needed anymore
Jan 20 15:14:15 compute-0 nova_compute[250018]: 2026-01-20 15:14:15.415 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:14:15 compute-0 systemd[1]: machine-qemu\x2d82\x2dinstance\x2d000000b7.scope: Deactivated successfully.
Jan 20 15:14:15 compute-0 systemd[1]: machine-qemu\x2d82\x2dinstance\x2d000000b7.scope: Consumed 4.458s CPU time.
Jan 20 15:14:15 compute-0 systemd-machined[216401]: Machine qemu-82-instance-000000b7 terminated.
Jan 20 15:14:15 compute-0 neutron-haproxy-ovnmeta-d07527d3-7363-453c-9902-c562bab626ba[359287]: [NOTICE]   (359307) : haproxy version is 2.8.14-c23fe91
Jan 20 15:14:15 compute-0 neutron-haproxy-ovnmeta-d07527d3-7363-453c-9902-c562bab626ba[359287]: [NOTICE]   (359307) : path to executable is /usr/sbin/haproxy
Jan 20 15:14:15 compute-0 neutron-haproxy-ovnmeta-d07527d3-7363-453c-9902-c562bab626ba[359287]: [WARNING]  (359307) : Exiting Master process...
Jan 20 15:14:15 compute-0 neutron-haproxy-ovnmeta-d07527d3-7363-453c-9902-c562bab626ba[359287]: [WARNING]  (359307) : Exiting Master process...
Jan 20 15:14:15 compute-0 neutron-haproxy-ovnmeta-d07527d3-7363-453c-9902-c562bab626ba[359287]: [ALERT]    (359307) : Current worker (359309) exited with code 143 (Terminated)
Jan 20 15:14:15 compute-0 neutron-haproxy-ovnmeta-d07527d3-7363-453c-9902-c562bab626ba[359287]: [WARNING]  (359307) : All workers exited. Exiting... (0)
Jan 20 15:14:15 compute-0 systemd[1]: libpod-93017a4df31f84f1c3b95d578b348dca2119306eada501b5230b4dba5b126f51.scope: Deactivated successfully.
Jan 20 15:14:15 compute-0 podman[359346]: 2026-01-20 15:14:15.546265823 +0000 UTC m=+0.045717985 container died 93017a4df31f84f1c3b95d578b348dca2119306eada501b5230b4dba5b126f51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d07527d3-7363-453c-9902-c562bab626ba, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3)
Jan 20 15:14:15 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-93017a4df31f84f1c3b95d578b348dca2119306eada501b5230b4dba5b126f51-userdata-shm.mount: Deactivated successfully.
Jan 20 15:14:15 compute-0 nova_compute[250018]: 2026-01-20 15:14:15.573 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:14:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-ea40564ba393d1fb9e1e8d4be7c749148d2403cfbd736ca172e3ab669f281fed-merged.mount: Deactivated successfully.
Jan 20 15:14:15 compute-0 nova_compute[250018]: 2026-01-20 15:14:15.579 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:14:15 compute-0 nova_compute[250018]: 2026-01-20 15:14:15.584 250022 INFO nova.virt.libvirt.driver [-] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Instance destroyed successfully.
Jan 20 15:14:15 compute-0 nova_compute[250018]: 2026-01-20 15:14:15.584 250022 DEBUG nova.objects.instance [None req-ee866e29-0f1e-4cbf-b426-74bd65a7b594 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lazy-loading 'resources' on Instance uuid 128af7d9-155f-468d-9873-98c816f0df9e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:14:15 compute-0 podman[359346]: 2026-01-20 15:14:15.588747198 +0000 UTC m=+0.088199350 container cleanup 93017a4df31f84f1c3b95d578b348dca2119306eada501b5230b4dba5b126f51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d07527d3-7363-453c-9902-c562bab626ba, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, tcib_managed=true, io.buildah.version=1.41.3)
Jan 20 15:14:15 compute-0 nova_compute[250018]: 2026-01-20 15:14:15.599 250022 DEBUG nova.virt.libvirt.vif [None req-ee866e29-0f1e-4cbf-b426-74bd65a7b594 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T15:13:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1998945962',display_name='tempest-TestNetworkAdvancedServerOps-server-1998945962',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1998945962',id=183,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHQtn9WZwWHpZ18eEtTY9zdPNbJgOayUdrvVmR1brDMxwKaiJ8tf9lOFdht6GjVy3Orpnh5Z5LatI7xEKad9rNtjFmwEczk5s4CmWp5ueE54bJ73h+pph+yq2VHvIP5rgg==',key_name='tempest-TestNetworkAdvancedServerOps-1645169738',keypairs=<?>,launch_index=0,launched_at=2026-01-20T15:14:11Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=Flavor(1),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1ed5feeeafe7448a8efb47ab975b0ead',ramdisk_id='',reservation_id='r-mvt7thmt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestNetworkAdvancedServerOps-175282664',owner_user_name='tempest-TestNetworkAdvancedServerOps-175282664-project-member'},tags=<?>,task_state='resize_reverting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T15:14:11Z,user_data=None,user_id='442a7a5cb8ea426a82be9762b262d171',uuid=128af7d9-155f-468d-9873-98c816f0df9e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='resized') vif={"id": "9de5453d-b548-429c-8fc2-7b012cb8ebdf", "address": "fa:16:3e:a8:1d:e9", "network": {"id": "d07527d3-7363-453c-9902-c562bab626ba", "bridge": "br-int", "label": "tempest-network-smoke--1250108698", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, 
"meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ed5feeeafe7448a8efb47ab975b0ead", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9de5453d-b5", "ovs_interfaceid": "9de5453d-b548-429c-8fc2-7b012cb8ebdf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 15:14:15 compute-0 nova_compute[250018]: 2026-01-20 15:14:15.600 250022 DEBUG nova.network.os_vif_util [None req-ee866e29-0f1e-4cbf-b426-74bd65a7b594 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Converting VIF {"id": "9de5453d-b548-429c-8fc2-7b012cb8ebdf", "address": "fa:16:3e:a8:1d:e9", "network": {"id": "d07527d3-7363-453c-9902-c562bab626ba", "bridge": "br-int", "label": "tempest-network-smoke--1250108698", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ed5feeeafe7448a8efb47ab975b0ead", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9de5453d-b5", "ovs_interfaceid": "9de5453d-b548-429c-8fc2-7b012cb8ebdf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:14:15 compute-0 nova_compute[250018]: 2026-01-20 15:14:15.600 250022 DEBUG nova.network.os_vif_util [None req-ee866e29-0f1e-4cbf-b426-74bd65a7b594 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a8:1d:e9,bridge_name='br-int',has_traffic_filtering=True,id=9de5453d-b548-429c-8fc2-7b012cb8ebdf,network=Network(d07527d3-7363-453c-9902-c562bab626ba),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9de5453d-b5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:14:15 compute-0 nova_compute[250018]: 2026-01-20 15:14:15.600 250022 DEBUG os_vif [None req-ee866e29-0f1e-4cbf-b426-74bd65a7b594 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:a8:1d:e9,bridge_name='br-int',has_traffic_filtering=True,id=9de5453d-b548-429c-8fc2-7b012cb8ebdf,network=Network(d07527d3-7363-453c-9902-c562bab626ba),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9de5453d-b5') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 15:14:15 compute-0 systemd[1]: libpod-conmon-93017a4df31f84f1c3b95d578b348dca2119306eada501b5230b4dba5b126f51.scope: Deactivated successfully.
Jan 20 15:14:15 compute-0 nova_compute[250018]: 2026-01-20 15:14:15.602 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:14:15 compute-0 nova_compute[250018]: 2026-01-20 15:14:15.603 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9de5453d-b5, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:14:15 compute-0 nova_compute[250018]: 2026-01-20 15:14:15.607 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 15:14:15 compute-0 nova_compute[250018]: 2026-01-20 15:14:15.610 250022 INFO os_vif [None req-ee866e29-0f1e-4cbf-b426-74bd65a7b594 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:a8:1d:e9,bridge_name='br-int',has_traffic_filtering=True,id=9de5453d-b548-429c-8fc2-7b012cb8ebdf,network=Network(d07527d3-7363-453c-9902-c562bab626ba),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9de5453d-b5')
Jan 20 15:14:15 compute-0 nova_compute[250018]: 2026-01-20 15:14:15.613 250022 DEBUG oslo_concurrency.lockutils [None req-ee866e29-0f1e-4cbf-b426-74bd65a7b594 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_dest" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:14:15 compute-0 nova_compute[250018]: 2026-01-20 15:14:15.614 250022 DEBUG oslo_concurrency.lockutils [None req-ee866e29-0f1e-4cbf-b426-74bd65a7b594 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_dest" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:14:15 compute-0 nova_compute[250018]: 2026-01-20 15:14:15.631 250022 DEBUG nova.objects.instance [None req-ee866e29-0f1e-4cbf-b426-74bd65a7b594 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lazy-loading 'migration_context' on Instance uuid 128af7d9-155f-468d-9873-98c816f0df9e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:14:15 compute-0 podman[359384]: 2026-01-20 15:14:15.652416445 +0000 UTC m=+0.040651247 container remove 93017a4df31f84f1c3b95d578b348dca2119306eada501b5230b4dba5b126f51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d07527d3-7363-453c-9902-c562bab626ba, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:14:15 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:14:15.657 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[3630f3bc-4dc2-4b16-8b3d-2b29090ee278]: (4, ('Tue Jan 20 03:14:15 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-d07527d3-7363-453c-9902-c562bab626ba (93017a4df31f84f1c3b95d578b348dca2119306eada501b5230b4dba5b126f51)\n93017a4df31f84f1c3b95d578b348dca2119306eada501b5230b4dba5b126f51\nTue Jan 20 03:14:15 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-d07527d3-7363-453c-9902-c562bab626ba (93017a4df31f84f1c3b95d578b348dca2119306eada501b5230b4dba5b126f51)\n93017a4df31f84f1c3b95d578b348dca2119306eada501b5230b4dba5b126f51\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:14:15 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:14:15.659 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[7e4fc484-5142-41b3-a087-cd13d7ee4dc2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:14:15 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:14:15.660 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd07527d3-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:14:15 compute-0 nova_compute[250018]: 2026-01-20 15:14:15.662 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:14:15 compute-0 kernel: tapd07527d3-70: left promiscuous mode
Jan 20 15:14:15 compute-0 nova_compute[250018]: 2026-01-20 15:14:15.675 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:14:15 compute-0 nova_compute[250018]: 2026-01-20 15:14:15.676 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:14:15 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:14:15.677 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[751a819f-5fc6-40c9-81ec-d0210ee40c5b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:14:15 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:14:15.700 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[8666c206-4ca7-4033-8e4b-94bfcef379c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:14:15 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:14:15.702 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[c6058395-a4ee-486e-9e36-f362907c5bb8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:14:15 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:14:15.719 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[49005441-6469-4700-b2fe-2c4d0ff8fcb9]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 799169, 'reachable_time': 43644, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 359399, 'error': None, 'target': 'ovnmeta-d07527d3-7363-453c-9902-c562bab626ba', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:14:15 compute-0 systemd[1]: run-netns-ovnmeta\x2dd07527d3\x2d7363\x2d453c\x2d9902\x2dc562bab626ba.mount: Deactivated successfully.
Jan 20 15:14:15 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:14:15.722 160517 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d07527d3-7363-453c-9902-c562bab626ba deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 15:14:15 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:14:15.722 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[dc064a35-ce9d-4326-8e1c-50f96bc9ab67]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:14:15 compute-0 nova_compute[250018]: 2026-01-20 15:14:15.734 250022 DEBUG oslo_concurrency.processutils [None req-ee866e29-0f1e-4cbf-b426-74bd65a7b594 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:14:16 compute-0 nova_compute[250018]: 2026-01-20 15:14:16.152 250022 DEBUG nova.compute.manager [req-640f54c9-ef6f-4b56-a54a-7f6781e5e70a req-acf46d5b-f416-447d-a5de-eb042192bc60 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Received event network-vif-unplugged-9de5453d-b548-429c-8fc2-7b012cb8ebdf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:14:16 compute-0 nova_compute[250018]: 2026-01-20 15:14:16.153 250022 DEBUG oslo_concurrency.lockutils [req-640f54c9-ef6f-4b56-a54a-7f6781e5e70a req-acf46d5b-f416-447d-a5de-eb042192bc60 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "128af7d9-155f-468d-9873-98c816f0df9e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:14:16 compute-0 nova_compute[250018]: 2026-01-20 15:14:16.153 250022 DEBUG oslo_concurrency.lockutils [req-640f54c9-ef6f-4b56-a54a-7f6781e5e70a req-acf46d5b-f416-447d-a5de-eb042192bc60 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "128af7d9-155f-468d-9873-98c816f0df9e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:14:16 compute-0 nova_compute[250018]: 2026-01-20 15:14:16.154 250022 DEBUG oslo_concurrency.lockutils [req-640f54c9-ef6f-4b56-a54a-7f6781e5e70a req-acf46d5b-f416-447d-a5de-eb042192bc60 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "128af7d9-155f-468d-9873-98c816f0df9e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:14:16 compute-0 nova_compute[250018]: 2026-01-20 15:14:16.154 250022 DEBUG nova.compute.manager [req-640f54c9-ef6f-4b56-a54a-7f6781e5e70a req-acf46d5b-f416-447d-a5de-eb042192bc60 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] No waiting events found dispatching network-vif-unplugged-9de5453d-b548-429c-8fc2-7b012cb8ebdf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:14:16 compute-0 nova_compute[250018]: 2026-01-20 15:14:16.154 250022 WARNING nova.compute.manager [req-640f54c9-ef6f-4b56-a54a-7f6781e5e70a req-acf46d5b-f416-447d-a5de-eb042192bc60 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Received unexpected event network-vif-unplugged-9de5453d-b548-429c-8fc2-7b012cb8ebdf for instance with vm_state resized and task_state resize_reverting.
Jan 20 15:14:16 compute-0 nova_compute[250018]: 2026-01-20 15:14:16.155 250022 DEBUG nova.compute.manager [req-640f54c9-ef6f-4b56-a54a-7f6781e5e70a req-acf46d5b-f416-447d-a5de-eb042192bc60 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Received event network-vif-plugged-9de5453d-b548-429c-8fc2-7b012cb8ebdf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:14:16 compute-0 nova_compute[250018]: 2026-01-20 15:14:16.155 250022 DEBUG oslo_concurrency.lockutils [req-640f54c9-ef6f-4b56-a54a-7f6781e5e70a req-acf46d5b-f416-447d-a5de-eb042192bc60 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "128af7d9-155f-468d-9873-98c816f0df9e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:14:16 compute-0 nova_compute[250018]: 2026-01-20 15:14:16.155 250022 DEBUG oslo_concurrency.lockutils [req-640f54c9-ef6f-4b56-a54a-7f6781e5e70a req-acf46d5b-f416-447d-a5de-eb042192bc60 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "128af7d9-155f-468d-9873-98c816f0df9e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:14:16 compute-0 nova_compute[250018]: 2026-01-20 15:14:16.155 250022 DEBUG oslo_concurrency.lockutils [req-640f54c9-ef6f-4b56-a54a-7f6781e5e70a req-acf46d5b-f416-447d-a5de-eb042192bc60 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "128af7d9-155f-468d-9873-98c816f0df9e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:14:16 compute-0 nova_compute[250018]: 2026-01-20 15:14:16.156 250022 DEBUG nova.compute.manager [req-640f54c9-ef6f-4b56-a54a-7f6781e5e70a req-acf46d5b-f416-447d-a5de-eb042192bc60 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] No waiting events found dispatching network-vif-plugged-9de5453d-b548-429c-8fc2-7b012cb8ebdf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:14:16 compute-0 nova_compute[250018]: 2026-01-20 15:14:16.156 250022 WARNING nova.compute.manager [req-640f54c9-ef6f-4b56-a54a-7f6781e5e70a req-acf46d5b-f416-447d-a5de-eb042192bc60 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Received unexpected event network-vif-plugged-9de5453d-b548-429c-8fc2-7b012cb8ebdf for instance with vm_state resized and task_state resize_reverting.
Jan 20 15:14:16 compute-0 nova_compute[250018]: 2026-01-20 15:14:16.211 250022 DEBUG oslo_concurrency.processutils [None req-ee866e29-0f1e-4cbf-b426-74bd65a7b594 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:14:16 compute-0 nova_compute[250018]: 2026-01-20 15:14:16.216 250022 DEBUG nova.compute.provider_tree [None req-ee866e29-0f1e-4cbf-b426-74bd65a7b594 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:14:16 compute-0 nova_compute[250018]: 2026-01-20 15:14:16.234 250022 DEBUG nova.scheduler.client.report [None req-ee866e29-0f1e-4cbf-b426-74bd65a7b594 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:14:16 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:14:16 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:14:16 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:14:16.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:14:16 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3848990267' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:14:16 compute-0 nova_compute[250018]: 2026-01-20 15:14:16.294 250022 DEBUG oslo_concurrency.lockutils [None req-ee866e29-0f1e-4cbf-b426-74bd65a7b594 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_dest" :: held 0.681s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:14:16 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2743: 321 pgs: 321 active+clean; 486 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 4.8 MiB/s rd, 24 KiB/s wr, 217 op/s
Jan 20 15:14:16 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e398 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:14:17 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:14:17 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:14:17 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:14:17.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:14:17 compute-0 nova_compute[250018]: 2026-01-20 15:14:17.344 250022 DEBUG nova.network.neutron [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] [instance: e79c0704-f95e-422f-9c25-ed35fca7cb7c] Updating instance_info_cache with network_info: [{"id": "1421cc5f-9a45-447a-bb3a-3f13dcc5a309", "address": "fa:16:3e:3b:1e:c2", "network": {"id": "c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1423306001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fff727019f86407498e83d7948d54962", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1421cc5f-9a", "ovs_interfaceid": "1421cc5f-9a45-447a-bb3a-3f13dcc5a309", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:14:17 compute-0 nova_compute[250018]: 2026-01-20 15:14:17.365 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Releasing lock "refresh_cache-e79c0704-f95e-422f-9c25-ed35fca7cb7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:14:17 compute-0 nova_compute[250018]: 2026-01-20 15:14:17.366 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] [instance: e79c0704-f95e-422f-9c25-ed35fca7cb7c] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 20 15:14:17 compute-0 nova_compute[250018]: 2026-01-20 15:14:17.815 250022 DEBUG nova.compute.manager [req-5c8a6c42-d363-41d1-993b-3458673b597a req-e041dbdf-95b4-4d3f-a85c-5f3c6d6d7d01 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Received event network-changed-9de5453d-b548-429c-8fc2-7b012cb8ebdf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:14:17 compute-0 nova_compute[250018]: 2026-01-20 15:14:17.815 250022 DEBUG nova.compute.manager [req-5c8a6c42-d363-41d1-993b-3458673b597a req-e041dbdf-95b4-4d3f-a85c-5f3c6d6d7d01 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Refreshing instance network info cache due to event network-changed-9de5453d-b548-429c-8fc2-7b012cb8ebdf. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 15:14:17 compute-0 nova_compute[250018]: 2026-01-20 15:14:17.816 250022 DEBUG oslo_concurrency.lockutils [req-5c8a6c42-d363-41d1-993b-3458673b597a req-e041dbdf-95b4-4d3f-a85c-5f3c6d6d7d01 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-128af7d9-155f-468d-9873-98c816f0df9e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:14:17 compute-0 nova_compute[250018]: 2026-01-20 15:14:17.816 250022 DEBUG oslo_concurrency.lockutils [req-5c8a6c42-d363-41d1-993b-3458673b597a req-e041dbdf-95b4-4d3f-a85c-5f3c6d6d7d01 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-128af7d9-155f-468d-9873-98c816f0df9e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:14:17 compute-0 nova_compute[250018]: 2026-01-20 15:14:17.816 250022 DEBUG nova.network.neutron [req-5c8a6c42-d363-41d1-993b-3458673b597a req-e041dbdf-95b4-4d3f-a85c-5f3c6d6d7d01 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Refreshing network info cache for port 9de5453d-b548-429c-8fc2-7b012cb8ebdf _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 15:14:18 compute-0 nova_compute[250018]: 2026-01-20 15:14:18.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:14:18 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:14:18 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:14:18 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:14:18.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:14:18 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2744: 321 pgs: 321 active+clean; 486 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 4.8 MiB/s rd, 24 KiB/s wr, 217 op/s
Jan 20 15:14:18 compute-0 ceph-mon[74360]: pgmap v2743: 321 pgs: 321 active+clean; 486 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 4.8 MiB/s rd, 24 KiB/s wr, 217 op/s
Jan 20 15:14:19 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:14:19 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:14:19 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:14:19.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:14:20 compute-0 nova_compute[250018]: 2026-01-20 15:14:20.083 250022 DEBUG nova.network.neutron [req-5c8a6c42-d363-41d1-993b-3458673b597a req-e041dbdf-95b4-4d3f-a85c-5f3c6d6d7d01 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Updated VIF entry in instance network info cache for port 9de5453d-b548-429c-8fc2-7b012cb8ebdf. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 15:14:20 compute-0 nova_compute[250018]: 2026-01-20 15:14:20.084 250022 DEBUG nova.network.neutron [req-5c8a6c42-d363-41d1-993b-3458673b597a req-e041dbdf-95b4-4d3f-a85c-5f3c6d6d7d01 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Updating instance_info_cache with network_info: [{"id": "9de5453d-b548-429c-8fc2-7b012cb8ebdf", "address": "fa:16:3e:a8:1d:e9", "network": {"id": "d07527d3-7363-453c-9902-c562bab626ba", "bridge": "br-int", "label": "tempest-network-smoke--1250108698", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ed5feeeafe7448a8efb47ab975b0ead", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9de5453d-b5", "ovs_interfaceid": "9de5453d-b548-429c-8fc2-7b012cb8ebdf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:14:20 compute-0 nova_compute[250018]: 2026-01-20 15:14:20.098 250022 DEBUG oslo_concurrency.lockutils [req-5c8a6c42-d363-41d1-993b-3458673b597a req-e041dbdf-95b4-4d3f-a85c-5f3c6d6d7d01 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-128af7d9-155f-468d-9873-98c816f0df9e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:14:20 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/2888658016' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:14:20 compute-0 ceph-mon[74360]: pgmap v2744: 321 pgs: 321 active+clean; 486 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 4.8 MiB/s rd, 24 KiB/s wr, 217 op/s
Jan 20 15:14:20 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:14:20 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:14:20 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:14:20.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:14:20 compute-0 nova_compute[250018]: 2026-01-20 15:14:20.348 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:14:20 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2745: 321 pgs: 321 active+clean; 501 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 1.4 MiB/s wr, 212 op/s
Jan 20 15:14:20 compute-0 nova_compute[250018]: 2026-01-20 15:14:20.606 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:14:20 compute-0 sudo[359425]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:14:20 compute-0 sudo[359425]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:14:20 compute-0 sudo[359425]: pam_unix(sudo:session): session closed for user root
Jan 20 15:14:20 compute-0 sudo[359450]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:14:20 compute-0 sudo[359450]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:14:20 compute-0 sudo[359450]: pam_unix(sudo:session): session closed for user root
Jan 20 15:14:21 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e398 do_prune osdmap full prune enabled
Jan 20 15:14:21 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e399 e399: 3 total, 3 up, 3 in
Jan 20 15:14:21 compute-0 ceph-mon[74360]: pgmap v2745: 321 pgs: 321 active+clean; 501 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 1.4 MiB/s wr, 212 op/s
Jan 20 15:14:21 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e399: 3 total, 3 up, 3 in
Jan 20 15:14:21 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:14:21 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:14:21 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:14:21.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:14:21 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:14:22 compute-0 ceph-mon[74360]: osdmap e399: 3 total, 3 up, 3 in
Jan 20 15:14:22 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:14:22 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:14:22 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:14:22.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:14:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:14:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:14:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:14:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:14:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:14:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:14:22 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2747: 321 pgs: 321 active+clean; 505 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 1.6 MiB/s wr, 150 op/s
Jan 20 15:14:23 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:14:23 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:14:23 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:14:23.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:14:23 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/714890073' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:14:23 compute-0 ceph-mon[74360]: pgmap v2747: 321 pgs: 321 active+clean; 505 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 1.6 MiB/s wr, 150 op/s
Jan 20 15:14:23 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/4287932142' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:14:23 compute-0 nova_compute[250018]: 2026-01-20 15:14:23.682 250022 DEBUG nova.compute.manager [req-0bb21d69-33bb-4488-b61e-e71fda433142 req-af923cfc-861a-47cf-8b85-26ef2f466681 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Received event network-vif-plugged-9de5453d-b548-429c-8fc2-7b012cb8ebdf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:14:23 compute-0 nova_compute[250018]: 2026-01-20 15:14:23.682 250022 DEBUG oslo_concurrency.lockutils [req-0bb21d69-33bb-4488-b61e-e71fda433142 req-af923cfc-861a-47cf-8b85-26ef2f466681 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "128af7d9-155f-468d-9873-98c816f0df9e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:14:23 compute-0 nova_compute[250018]: 2026-01-20 15:14:23.683 250022 DEBUG oslo_concurrency.lockutils [req-0bb21d69-33bb-4488-b61e-e71fda433142 req-af923cfc-861a-47cf-8b85-26ef2f466681 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "128af7d9-155f-468d-9873-98c816f0df9e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:14:23 compute-0 nova_compute[250018]: 2026-01-20 15:14:23.683 250022 DEBUG oslo_concurrency.lockutils [req-0bb21d69-33bb-4488-b61e-e71fda433142 req-af923cfc-861a-47cf-8b85-26ef2f466681 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "128af7d9-155f-468d-9873-98c816f0df9e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:14:23 compute-0 nova_compute[250018]: 2026-01-20 15:14:23.683 250022 DEBUG nova.compute.manager [req-0bb21d69-33bb-4488-b61e-e71fda433142 req-af923cfc-861a-47cf-8b85-26ef2f466681 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] No waiting events found dispatching network-vif-plugged-9de5453d-b548-429c-8fc2-7b012cb8ebdf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:14:23 compute-0 nova_compute[250018]: 2026-01-20 15:14:23.683 250022 WARNING nova.compute.manager [req-0bb21d69-33bb-4488-b61e-e71fda433142 req-af923cfc-861a-47cf-8b85-26ef2f466681 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Received unexpected event network-vif-plugged-9de5453d-b548-429c-8fc2-7b012cb8ebdf for instance with vm_state resized and task_state resize_reverting.
Jan 20 15:14:24 compute-0 nova_compute[250018]: 2026-01-20 15:14:24.061 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:14:24 compute-0 nova_compute[250018]: 2026-01-20 15:14:24.062 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 20 15:14:24 compute-0 nova_compute[250018]: 2026-01-20 15:14:24.076 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 20 15:14:24 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:14:24 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:14:24 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:14:24.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:14:24 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2748: 321 pgs: 321 active+clean; 515 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 2.5 MiB/s wr, 130 op/s
Jan 20 15:14:25 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:14:25 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:14:25 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:14:25.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:14:25 compute-0 nova_compute[250018]: 2026-01-20 15:14:25.349 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:14:25 compute-0 nova_compute[250018]: 2026-01-20 15:14:25.607 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:14:25 compute-0 ceph-mon[74360]: pgmap v2748: 321 pgs: 321 active+clean; 515 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 2.5 MiB/s wr, 130 op/s
Jan 20 15:14:26 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:14:26 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:14:26 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:14:26.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:14:26 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2749: 321 pgs: 321 active+clean; 519 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 2.6 MiB/s wr, 151 op/s
Jan 20 15:14:26 compute-0 ceph-mon[74360]: pgmap v2749: 321 pgs: 321 active+clean; 519 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 2.6 MiB/s wr, 151 op/s
Jan 20 15:14:26 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:14:26 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e399 do_prune osdmap full prune enabled
Jan 20 15:14:26 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e400 e400: 3 total, 3 up, 3 in
Jan 20 15:14:26 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e400: 3 total, 3 up, 3 in
Jan 20 15:14:26 compute-0 nova_compute[250018]: 2026-01-20 15:14:26.915 250022 DEBUG oslo_concurrency.lockutils [None req-24c932b4-69ff-4b6a-95c6-525233a0dc4b e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Acquiring lock "f1ded131-d9a3-4e93-ad99-53ee2695d5c8" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:14:26 compute-0 nova_compute[250018]: 2026-01-20 15:14:26.915 250022 DEBUG oslo_concurrency.lockutils [None req-24c932b4-69ff-4b6a-95c6-525233a0dc4b e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Lock "f1ded131-d9a3-4e93-ad99-53ee2695d5c8" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:14:26 compute-0 nova_compute[250018]: 2026-01-20 15:14:26.930 250022 DEBUG nova.compute.manager [None req-24c932b4-69ff-4b6a-95c6-525233a0dc4b e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 15:14:27 compute-0 nova_compute[250018]: 2026-01-20 15:14:27.005 250022 DEBUG oslo_concurrency.lockutils [None req-24c932b4-69ff-4b6a-95c6-525233a0dc4b e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:14:27 compute-0 nova_compute[250018]: 2026-01-20 15:14:27.006 250022 DEBUG oslo_concurrency.lockutils [None req-24c932b4-69ff-4b6a-95c6-525233a0dc4b e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:14:27 compute-0 nova_compute[250018]: 2026-01-20 15:14:27.016 250022 DEBUG nova.virt.hardware [None req-24c932b4-69ff-4b6a-95c6-525233a0dc4b e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 15:14:27 compute-0 nova_compute[250018]: 2026-01-20 15:14:27.017 250022 INFO nova.compute.claims [None req-24c932b4-69ff-4b6a-95c6-525233a0dc4b e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] Claim successful on node compute-0.ctlplane.example.com
Jan 20 15:14:27 compute-0 nova_compute[250018]: 2026-01-20 15:14:27.163 250022 DEBUG oslo_concurrency.processutils [None req-24c932b4-69ff-4b6a-95c6-525233a0dc4b e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:14:27 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:14:27 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:14:27 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:14:27.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:14:27 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:14:27 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1626811342' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:14:27 compute-0 nova_compute[250018]: 2026-01-20 15:14:27.593 250022 DEBUG oslo_concurrency.processutils [None req-24c932b4-69ff-4b6a-95c6-525233a0dc4b e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:14:27 compute-0 nova_compute[250018]: 2026-01-20 15:14:27.601 250022 DEBUG nova.compute.provider_tree [None req-24c932b4-69ff-4b6a-95c6-525233a0dc4b e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:14:27 compute-0 nova_compute[250018]: 2026-01-20 15:14:27.621 250022 DEBUG nova.scheduler.client.report [None req-24c932b4-69ff-4b6a-95c6-525233a0dc4b e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:14:27 compute-0 nova_compute[250018]: 2026-01-20 15:14:27.713 250022 DEBUG oslo_concurrency.lockutils [None req-24c932b4-69ff-4b6a-95c6-525233a0dc4b e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.707s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:14:27 compute-0 nova_compute[250018]: 2026-01-20 15:14:27.714 250022 DEBUG nova.compute.manager [None req-24c932b4-69ff-4b6a-95c6-525233a0dc4b e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 15:14:27 compute-0 nova_compute[250018]: 2026-01-20 15:14:27.752 250022 DEBUG nova.compute.manager [None req-24c932b4-69ff-4b6a-95c6-525233a0dc4b e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 20 15:14:27 compute-0 nova_compute[250018]: 2026-01-20 15:14:27.752 250022 DEBUG nova.network.neutron [None req-24c932b4-69ff-4b6a-95c6-525233a0dc4b e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 15:14:27 compute-0 nova_compute[250018]: 2026-01-20 15:14:27.773 250022 INFO nova.virt.libvirt.driver [None req-24c932b4-69ff-4b6a-95c6-525233a0dc4b e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 15:14:27 compute-0 nova_compute[250018]: 2026-01-20 15:14:27.813 250022 DEBUG nova.compute.manager [None req-24c932b4-69ff-4b6a-95c6-525233a0dc4b e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 15:14:27 compute-0 ceph-mon[74360]: osdmap e400: 3 total, 3 up, 3 in
Jan 20 15:14:27 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/1626811342' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:14:28 compute-0 nova_compute[250018]: 2026-01-20 15:14:28.014 250022 DEBUG nova.compute.manager [None req-24c932b4-69ff-4b6a-95c6-525233a0dc4b e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 15:14:28 compute-0 nova_compute[250018]: 2026-01-20 15:14:28.015 250022 DEBUG nova.virt.libvirt.driver [None req-24c932b4-69ff-4b6a-95c6-525233a0dc4b e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 15:14:28 compute-0 nova_compute[250018]: 2026-01-20 15:14:28.016 250022 INFO nova.virt.libvirt.driver [None req-24c932b4-69ff-4b6a-95c6-525233a0dc4b e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] Creating image(s)
Jan 20 15:14:28 compute-0 nova_compute[250018]: 2026-01-20 15:14:28.039 250022 DEBUG nova.storage.rbd_utils [None req-24c932b4-69ff-4b6a-95c6-525233a0dc4b e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] rbd image f1ded131-d9a3-4e93-ad99-53ee2695d5c8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:14:28 compute-0 nova_compute[250018]: 2026-01-20 15:14:28.070 250022 DEBUG nova.storage.rbd_utils [None req-24c932b4-69ff-4b6a-95c6-525233a0dc4b e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] rbd image f1ded131-d9a3-4e93-ad99-53ee2695d5c8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:14:28 compute-0 nova_compute[250018]: 2026-01-20 15:14:28.095 250022 DEBUG nova.storage.rbd_utils [None req-24c932b4-69ff-4b6a-95c6-525233a0dc4b e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] rbd image f1ded131-d9a3-4e93-ad99-53ee2695d5c8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:14:28 compute-0 nova_compute[250018]: 2026-01-20 15:14:28.098 250022 DEBUG oslo_concurrency.processutils [None req-24c932b4-69ff-4b6a-95c6-525233a0dc4b e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:14:28 compute-0 nova_compute[250018]: 2026-01-20 15:14:28.130 250022 DEBUG nova.policy [None req-24c932b4-69ff-4b6a-95c6-525233a0dc4b e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e9cc4ce3e069479ba9c789b378a68a1d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'fff727019f86407498e83d7948d54962', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 20 15:14:28 compute-0 nova_compute[250018]: 2026-01-20 15:14:28.180 250022 DEBUG oslo_concurrency.processutils [None req-24c932b4-69ff-4b6a-95c6-525233a0dc4b e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:14:28 compute-0 nova_compute[250018]: 2026-01-20 15:14:28.181 250022 DEBUG oslo_concurrency.lockutils [None req-24c932b4-69ff-4b6a-95c6-525233a0dc4b e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Acquiring lock "82d5c1918fd7c974214c7a48c1793a7a82560462" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:14:28 compute-0 nova_compute[250018]: 2026-01-20 15:14:28.182 250022 DEBUG oslo_concurrency.lockutils [None req-24c932b4-69ff-4b6a-95c6-525233a0dc4b e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:14:28 compute-0 nova_compute[250018]: 2026-01-20 15:14:28.182 250022 DEBUG oslo_concurrency.lockutils [None req-24c932b4-69ff-4b6a-95c6-525233a0dc4b e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:14:28 compute-0 nova_compute[250018]: 2026-01-20 15:14:28.207 250022 DEBUG nova.storage.rbd_utils [None req-24c932b4-69ff-4b6a-95c6-525233a0dc4b e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] rbd image f1ded131-d9a3-4e93-ad99-53ee2695d5c8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:14:28 compute-0 nova_compute[250018]: 2026-01-20 15:14:28.211 250022 DEBUG oslo_concurrency.processutils [None req-24c932b4-69ff-4b6a-95c6-525233a0dc4b e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 f1ded131-d9a3-4e93-ad99-53ee2695d5c8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:14:28 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:14:28 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:14:28 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:14:28.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:14:28 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2751: 321 pgs: 321 active+clean; 519 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.4 MiB/s wr, 136 op/s
Jan 20 15:14:28 compute-0 ceph-mon[74360]: pgmap v2751: 321 pgs: 321 active+clean; 519 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.4 MiB/s wr, 136 op/s
Jan 20 15:14:28 compute-0 nova_compute[250018]: 2026-01-20 15:14:28.999 250022 DEBUG nova.network.neutron [None req-24c932b4-69ff-4b6a-95c6-525233a0dc4b e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] Successfully created port: 0e93d1de-671e-4e37-8e79-44bed7981254 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 20 15:14:29 compute-0 nova_compute[250018]: 2026-01-20 15:14:29.211 250022 DEBUG oslo_concurrency.processutils [None req-24c932b4-69ff-4b6a-95c6-525233a0dc4b e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 f1ded131-d9a3-4e93-ad99-53ee2695d5c8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.000s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:14:29 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:14:29 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:14:29 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:14:29.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:14:29 compute-0 nova_compute[250018]: 2026-01-20 15:14:29.275 250022 DEBUG nova.storage.rbd_utils [None req-24c932b4-69ff-4b6a-95c6-525233a0dc4b e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] resizing rbd image f1ded131-d9a3-4e93-ad99-53ee2695d5c8_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 20 15:14:29 compute-0 nova_compute[250018]: 2026-01-20 15:14:29.610 250022 DEBUG nova.objects.instance [None req-24c932b4-69ff-4b6a-95c6-525233a0dc4b e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Lazy-loading 'migration_context' on Instance uuid f1ded131-d9a3-4e93-ad99-53ee2695d5c8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:14:29 compute-0 nova_compute[250018]: 2026-01-20 15:14:29.623 250022 DEBUG nova.virt.libvirt.driver [None req-24c932b4-69ff-4b6a-95c6-525233a0dc4b e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 15:14:29 compute-0 nova_compute[250018]: 2026-01-20 15:14:29.623 250022 DEBUG nova.virt.libvirt.driver [None req-24c932b4-69ff-4b6a-95c6-525233a0dc4b e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] Ensure instance console log exists: /var/lib/nova/instances/f1ded131-d9a3-4e93-ad99-53ee2695d5c8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 15:14:29 compute-0 nova_compute[250018]: 2026-01-20 15:14:29.623 250022 DEBUG oslo_concurrency.lockutils [None req-24c932b4-69ff-4b6a-95c6-525233a0dc4b e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:14:29 compute-0 nova_compute[250018]: 2026-01-20 15:14:29.624 250022 DEBUG oslo_concurrency.lockutils [None req-24c932b4-69ff-4b6a-95c6-525233a0dc4b e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:14:29 compute-0 nova_compute[250018]: 2026-01-20 15:14:29.624 250022 DEBUG oslo_concurrency.lockutils [None req-24c932b4-69ff-4b6a-95c6-525233a0dc4b e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:14:30 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:14:30 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:14:30 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:14:30.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:14:30 compute-0 nova_compute[250018]: 2026-01-20 15:14:30.352 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:14:30 compute-0 nova_compute[250018]: 2026-01-20 15:14:30.370 250022 DEBUG nova.network.neutron [None req-24c932b4-69ff-4b6a-95c6-525233a0dc4b e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] Successfully updated port: 0e93d1de-671e-4e37-8e79-44bed7981254 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 20 15:14:30 compute-0 nova_compute[250018]: 2026-01-20 15:14:30.403 250022 DEBUG oslo_concurrency.lockutils [None req-24c932b4-69ff-4b6a-95c6-525233a0dc4b e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Acquiring lock "refresh_cache-f1ded131-d9a3-4e93-ad99-53ee2695d5c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:14:30 compute-0 nova_compute[250018]: 2026-01-20 15:14:30.403 250022 DEBUG oslo_concurrency.lockutils [None req-24c932b4-69ff-4b6a-95c6-525233a0dc4b e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Acquired lock "refresh_cache-f1ded131-d9a3-4e93-ad99-53ee2695d5c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:14:30 compute-0 nova_compute[250018]: 2026-01-20 15:14:30.403 250022 DEBUG nova.network.neutron [None req-24c932b4-69ff-4b6a-95c6-525233a0dc4b e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 15:14:30 compute-0 nova_compute[250018]: 2026-01-20 15:14:30.470 250022 DEBUG nova.compute.manager [req-c00a20e7-f272-4884-b8a3-9f85361f21cf req-be7de86c-2cdb-4920-846b-602225b70ac0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] Received event network-changed-0e93d1de-671e-4e37-8e79-44bed7981254 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:14:30 compute-0 nova_compute[250018]: 2026-01-20 15:14:30.470 250022 DEBUG nova.compute.manager [req-c00a20e7-f272-4884-b8a3-9f85361f21cf req-be7de86c-2cdb-4920-846b-602225b70ac0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] Refreshing instance network info cache due to event network-changed-0e93d1de-671e-4e37-8e79-44bed7981254. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 15:14:30 compute-0 nova_compute[250018]: 2026-01-20 15:14:30.470 250022 DEBUG oslo_concurrency.lockutils [req-c00a20e7-f272-4884-b8a3-9f85361f21cf req-be7de86c-2cdb-4920-846b-602225b70ac0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-f1ded131-d9a3-4e93-ad99-53ee2695d5c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:14:30 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2752: 321 pgs: 321 active+clean; 543 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.7 MiB/s rd, 2.0 MiB/s wr, 150 op/s
Jan 20 15:14:30 compute-0 nova_compute[250018]: 2026-01-20 15:14:30.582 250022 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768922055.5815806, 128af7d9-155f-468d-9873-98c816f0df9e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:14:30 compute-0 nova_compute[250018]: 2026-01-20 15:14:30.583 250022 INFO nova.compute.manager [-] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] VM Stopped (Lifecycle Event)
Jan 20 15:14:30 compute-0 nova_compute[250018]: 2026-01-20 15:14:30.604 250022 DEBUG nova.compute.manager [None req-a4dd1b85-c547-425b-88c7-d64e87b26ea5 - - - - - -] [instance: 128af7d9-155f-468d-9873-98c816f0df9e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:14:30 compute-0 nova_compute[250018]: 2026-01-20 15:14:30.608 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:14:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:14:30.784 160071 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:14:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:14:30.784 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:14:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:14:30.784 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:14:31 compute-0 nova_compute[250018]: 2026-01-20 15:14:31.057 250022 DEBUG nova.network.neutron [None req-24c932b4-69ff-4b6a-95c6-525233a0dc4b e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 15:14:31 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:14:31 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:14:31 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:14:31.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:14:31 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e400 do_prune osdmap full prune enabled
Jan 20 15:14:31 compute-0 ceph-mon[74360]: pgmap v2752: 321 pgs: 321 active+clean; 543 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.7 MiB/s rd, 2.0 MiB/s wr, 150 op/s
Jan 20 15:14:31 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e401 e401: 3 total, 3 up, 3 in
Jan 20 15:14:31 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e401: 3 total, 3 up, 3 in
Jan 20 15:14:31 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e401 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:14:32 compute-0 nova_compute[250018]: 2026-01-20 15:14:32.122 250022 DEBUG nova.network.neutron [None req-24c932b4-69ff-4b6a-95c6-525233a0dc4b e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] Updating instance_info_cache with network_info: [{"id": "0e93d1de-671e-4e37-8e79-44bed7981254", "address": "fa:16:3e:99:5e:ed", "network": {"id": "c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1423306001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fff727019f86407498e83d7948d54962", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e93d1de-67", "ovs_interfaceid": "0e93d1de-671e-4e37-8e79-44bed7981254", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:14:32 compute-0 nova_compute[250018]: 2026-01-20 15:14:32.150 250022 DEBUG oslo_concurrency.lockutils [None req-24c932b4-69ff-4b6a-95c6-525233a0dc4b e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Releasing lock "refresh_cache-f1ded131-d9a3-4e93-ad99-53ee2695d5c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:14:32 compute-0 nova_compute[250018]: 2026-01-20 15:14:32.151 250022 DEBUG nova.compute.manager [None req-24c932b4-69ff-4b6a-95c6-525233a0dc4b e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] Instance network_info: |[{"id": "0e93d1de-671e-4e37-8e79-44bed7981254", "address": "fa:16:3e:99:5e:ed", "network": {"id": "c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1423306001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fff727019f86407498e83d7948d54962", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e93d1de-67", "ovs_interfaceid": "0e93d1de-671e-4e37-8e79-44bed7981254", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 20 15:14:32 compute-0 nova_compute[250018]: 2026-01-20 15:14:32.151 250022 DEBUG oslo_concurrency.lockutils [req-c00a20e7-f272-4884-b8a3-9f85361f21cf req-be7de86c-2cdb-4920-846b-602225b70ac0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-f1ded131-d9a3-4e93-ad99-53ee2695d5c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:14:32 compute-0 nova_compute[250018]: 2026-01-20 15:14:32.152 250022 DEBUG nova.network.neutron [req-c00a20e7-f272-4884-b8a3-9f85361f21cf req-be7de86c-2cdb-4920-846b-602225b70ac0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] Refreshing network info cache for port 0e93d1de-671e-4e37-8e79-44bed7981254 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 15:14:32 compute-0 nova_compute[250018]: 2026-01-20 15:14:32.155 250022 DEBUG nova.virt.libvirt.driver [None req-24c932b4-69ff-4b6a-95c6-525233a0dc4b e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] Start _get_guest_xml network_info=[{"id": "0e93d1de-671e-4e37-8e79-44bed7981254", "address": "fa:16:3e:99:5e:ed", "network": {"id": "c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1423306001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fff727019f86407498e83d7948d54962", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e93d1de-67", "ovs_interfaceid": "0e93d1de-671e-4e37-8e79-44bed7981254", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encrypted': False, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'guest_format': None, 'size': 0, 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 15:14:32 compute-0 nova_compute[250018]: 2026-01-20 15:14:32.160 250022 WARNING nova.virt.libvirt.driver [None req-24c932b4-69ff-4b6a-95c6-525233a0dc4b e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:14:32 compute-0 nova_compute[250018]: 2026-01-20 15:14:32.164 250022 DEBUG nova.virt.libvirt.host [None req-24c932b4-69ff-4b6a-95c6-525233a0dc4b e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 15:14:32 compute-0 nova_compute[250018]: 2026-01-20 15:14:32.165 250022 DEBUG nova.virt.libvirt.host [None req-24c932b4-69ff-4b6a-95c6-525233a0dc4b e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 15:14:32 compute-0 nova_compute[250018]: 2026-01-20 15:14:32.169 250022 DEBUG nova.virt.libvirt.host [None req-24c932b4-69ff-4b6a-95c6-525233a0dc4b e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 15:14:32 compute-0 nova_compute[250018]: 2026-01-20 15:14:32.169 250022 DEBUG nova.virt.libvirt.host [None req-24c932b4-69ff-4b6a-95c6-525233a0dc4b e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 15:14:32 compute-0 nova_compute[250018]: 2026-01-20 15:14:32.171 250022 DEBUG nova.virt.libvirt.driver [None req-24c932b4-69ff-4b6a-95c6-525233a0dc4b e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 15:14:32 compute-0 nova_compute[250018]: 2026-01-20 15:14:32.171 250022 DEBUG nova.virt.hardware [None req-24c932b4-69ff-4b6a-95c6-525233a0dc4b e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 15:14:32 compute-0 nova_compute[250018]: 2026-01-20 15:14:32.172 250022 DEBUG nova.virt.hardware [None req-24c932b4-69ff-4b6a-95c6-525233a0dc4b e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 15:14:32 compute-0 nova_compute[250018]: 2026-01-20 15:14:32.172 250022 DEBUG nova.virt.hardware [None req-24c932b4-69ff-4b6a-95c6-525233a0dc4b e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 15:14:32 compute-0 nova_compute[250018]: 2026-01-20 15:14:32.172 250022 DEBUG nova.virt.hardware [None req-24c932b4-69ff-4b6a-95c6-525233a0dc4b e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 15:14:32 compute-0 nova_compute[250018]: 2026-01-20 15:14:32.173 250022 DEBUG nova.virt.hardware [None req-24c932b4-69ff-4b6a-95c6-525233a0dc4b e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 15:14:32 compute-0 nova_compute[250018]: 2026-01-20 15:14:32.173 250022 DEBUG nova.virt.hardware [None req-24c932b4-69ff-4b6a-95c6-525233a0dc4b e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 15:14:32 compute-0 nova_compute[250018]: 2026-01-20 15:14:32.173 250022 DEBUG nova.virt.hardware [None req-24c932b4-69ff-4b6a-95c6-525233a0dc4b e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 15:14:32 compute-0 nova_compute[250018]: 2026-01-20 15:14:32.174 250022 DEBUG nova.virt.hardware [None req-24c932b4-69ff-4b6a-95c6-525233a0dc4b e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 15:14:32 compute-0 nova_compute[250018]: 2026-01-20 15:14:32.174 250022 DEBUG nova.virt.hardware [None req-24c932b4-69ff-4b6a-95c6-525233a0dc4b e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 15:14:32 compute-0 nova_compute[250018]: 2026-01-20 15:14:32.174 250022 DEBUG nova.virt.hardware [None req-24c932b4-69ff-4b6a-95c6-525233a0dc4b e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 15:14:32 compute-0 nova_compute[250018]: 2026-01-20 15:14:32.175 250022 DEBUG nova.virt.hardware [None req-24c932b4-69ff-4b6a-95c6-525233a0dc4b e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 15:14:32 compute-0 nova_compute[250018]: 2026-01-20 15:14:32.178 250022 DEBUG oslo_concurrency.processutils [None req-24c932b4-69ff-4b6a-95c6-525233a0dc4b e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:14:32 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:14:32 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:14:32 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:14:32.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:14:32 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2754: 321 pgs: 321 active+clean; 565 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.2 MiB/s rd, 2.8 MiB/s wr, 186 op/s
Jan 20 15:14:32 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 15:14:32 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3869517748' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:14:32 compute-0 ceph-mon[74360]: osdmap e401: 3 total, 3 up, 3 in
Jan 20 15:14:32 compute-0 nova_compute[250018]: 2026-01-20 15:14:32.663 250022 DEBUG oslo_concurrency.processutils [None req-24c932b4-69ff-4b6a-95c6-525233a0dc4b e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:14:32 compute-0 nova_compute[250018]: 2026-01-20 15:14:32.689 250022 DEBUG nova.storage.rbd_utils [None req-24c932b4-69ff-4b6a-95c6-525233a0dc4b e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] rbd image f1ded131-d9a3-4e93-ad99-53ee2695d5c8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:14:32 compute-0 nova_compute[250018]: 2026-01-20 15:14:32.694 250022 DEBUG oslo_concurrency.processutils [None req-24c932b4-69ff-4b6a-95c6-525233a0dc4b e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:14:33 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 15:14:33 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2503689261' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:14:33 compute-0 nova_compute[250018]: 2026-01-20 15:14:33.124 250022 DEBUG oslo_concurrency.processutils [None req-24c932b4-69ff-4b6a-95c6-525233a0dc4b e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:14:33 compute-0 nova_compute[250018]: 2026-01-20 15:14:33.127 250022 DEBUG nova.virt.libvirt.vif [None req-24c932b4-69ff-4b6a-95c6-525233a0dc4b e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T15:14:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='multiattach-server-0',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='multiattach-server-0',id=187,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBL5L2o6o5dLcQyaIfhCZ5CKxQlecqNGmP68oHIQEsVoKIC2qfrMKjObT9GdMU8oznX9LVUwIWCShhlEJu9ZqPiutEL2afEJ1hQQamjERNcx9wWS2NfOgykA4yugQphfOtA==',key_name='tempest-keypair-1568469072',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fff727019f86407498e83d7948d54962',ramdisk_id='',reservation_id='r-h927541v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachVolumeMultiAttachTest-418194625',owner_user_name='tempest-AttachVolumeMultiAttachTest-418194625-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T15:14:27Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='e9cc4ce3e069479ba9c789b378a68a1d',uuid=f1ded131-d9a3-4e93-ad99-53ee2695d5c8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0e93d1de-671e-4e37-8e79-44bed7981254", "address": "fa:16:3e:99:5e:ed", "network": {"id": "c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1423306001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fff727019f86407498e83d7948d54962", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e93d1de-67", "ovs_interfaceid": "0e93d1de-671e-4e37-8e79-44bed7981254", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 15:14:33 compute-0 nova_compute[250018]: 2026-01-20 15:14:33.128 250022 DEBUG nova.network.os_vif_util [None req-24c932b4-69ff-4b6a-95c6-525233a0dc4b e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Converting VIF {"id": "0e93d1de-671e-4e37-8e79-44bed7981254", "address": "fa:16:3e:99:5e:ed", "network": {"id": "c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1423306001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fff727019f86407498e83d7948d54962", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e93d1de-67", "ovs_interfaceid": "0e93d1de-671e-4e37-8e79-44bed7981254", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:14:33 compute-0 nova_compute[250018]: 2026-01-20 15:14:33.130 250022 DEBUG nova.network.os_vif_util [None req-24c932b4-69ff-4b6a-95c6-525233a0dc4b e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:99:5e:ed,bridge_name='br-int',has_traffic_filtering=True,id=0e93d1de-671e-4e37-8e79-44bed7981254,network=Network(c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0e93d1de-67') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:14:33 compute-0 nova_compute[250018]: 2026-01-20 15:14:33.133 250022 DEBUG nova.objects.instance [None req-24c932b4-69ff-4b6a-95c6-525233a0dc4b e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Lazy-loading 'pci_devices' on Instance uuid f1ded131-d9a3-4e93-ad99-53ee2695d5c8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:14:33 compute-0 nova_compute[250018]: 2026-01-20 15:14:33.215 250022 DEBUG nova.network.neutron [req-c00a20e7-f272-4884-b8a3-9f85361f21cf req-be7de86c-2cdb-4920-846b-602225b70ac0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] Updated VIF entry in instance network info cache for port 0e93d1de-671e-4e37-8e79-44bed7981254. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 15:14:33 compute-0 nova_compute[250018]: 2026-01-20 15:14:33.216 250022 DEBUG nova.network.neutron [req-c00a20e7-f272-4884-b8a3-9f85361f21cf req-be7de86c-2cdb-4920-846b-602225b70ac0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] Updating instance_info_cache with network_info: [{"id": "0e93d1de-671e-4e37-8e79-44bed7981254", "address": "fa:16:3e:99:5e:ed", "network": {"id": "c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1423306001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fff727019f86407498e83d7948d54962", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e93d1de-67", "ovs_interfaceid": "0e93d1de-671e-4e37-8e79-44bed7981254", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:14:33 compute-0 nova_compute[250018]: 2026-01-20 15:14:33.220 250022 DEBUG nova.virt.libvirt.driver [None req-24c932b4-69ff-4b6a-95c6-525233a0dc4b e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] End _get_guest_xml xml=<domain type="kvm">
Jan 20 15:14:33 compute-0 nova_compute[250018]:   <uuid>f1ded131-d9a3-4e93-ad99-53ee2695d5c8</uuid>
Jan 20 15:14:33 compute-0 nova_compute[250018]:   <name>instance-000000bb</name>
Jan 20 15:14:33 compute-0 nova_compute[250018]:   <memory>131072</memory>
Jan 20 15:14:33 compute-0 nova_compute[250018]:   <vcpu>1</vcpu>
Jan 20 15:14:33 compute-0 nova_compute[250018]:   <metadata>
Jan 20 15:14:33 compute-0 nova_compute[250018]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 15:14:33 compute-0 nova_compute[250018]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 15:14:33 compute-0 nova_compute[250018]:       <nova:name>multiattach-server-0</nova:name>
Jan 20 15:14:33 compute-0 nova_compute[250018]:       <nova:creationTime>2026-01-20 15:14:32</nova:creationTime>
Jan 20 15:14:33 compute-0 nova_compute[250018]:       <nova:flavor name="m1.nano">
Jan 20 15:14:33 compute-0 nova_compute[250018]:         <nova:memory>128</nova:memory>
Jan 20 15:14:33 compute-0 nova_compute[250018]:         <nova:disk>1</nova:disk>
Jan 20 15:14:33 compute-0 nova_compute[250018]:         <nova:swap>0</nova:swap>
Jan 20 15:14:33 compute-0 nova_compute[250018]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 15:14:33 compute-0 nova_compute[250018]:         <nova:vcpus>1</nova:vcpus>
Jan 20 15:14:33 compute-0 nova_compute[250018]:       </nova:flavor>
Jan 20 15:14:33 compute-0 nova_compute[250018]:       <nova:owner>
Jan 20 15:14:33 compute-0 nova_compute[250018]:         <nova:user uuid="e9cc4ce3e069479ba9c789b378a68a1d">tempest-AttachVolumeMultiAttachTest-418194625-project-member</nova:user>
Jan 20 15:14:33 compute-0 nova_compute[250018]:         <nova:project uuid="fff727019f86407498e83d7948d54962">tempest-AttachVolumeMultiAttachTest-418194625</nova:project>
Jan 20 15:14:33 compute-0 nova_compute[250018]:       </nova:owner>
Jan 20 15:14:33 compute-0 nova_compute[250018]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 15:14:33 compute-0 nova_compute[250018]:       <nova:ports>
Jan 20 15:14:33 compute-0 nova_compute[250018]:         <nova:port uuid="0e93d1de-671e-4e37-8e79-44bed7981254">
Jan 20 15:14:33 compute-0 nova_compute[250018]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Jan 20 15:14:33 compute-0 nova_compute[250018]:         </nova:port>
Jan 20 15:14:33 compute-0 nova_compute[250018]:       </nova:ports>
Jan 20 15:14:33 compute-0 nova_compute[250018]:     </nova:instance>
Jan 20 15:14:33 compute-0 nova_compute[250018]:   </metadata>
Jan 20 15:14:33 compute-0 nova_compute[250018]:   <sysinfo type="smbios">
Jan 20 15:14:33 compute-0 nova_compute[250018]:     <system>
Jan 20 15:14:33 compute-0 nova_compute[250018]:       <entry name="manufacturer">RDO</entry>
Jan 20 15:14:33 compute-0 nova_compute[250018]:       <entry name="product">OpenStack Compute</entry>
Jan 20 15:14:33 compute-0 nova_compute[250018]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 15:14:33 compute-0 nova_compute[250018]:       <entry name="serial">f1ded131-d9a3-4e93-ad99-53ee2695d5c8</entry>
Jan 20 15:14:33 compute-0 nova_compute[250018]:       <entry name="uuid">f1ded131-d9a3-4e93-ad99-53ee2695d5c8</entry>
Jan 20 15:14:33 compute-0 nova_compute[250018]:       <entry name="family">Virtual Machine</entry>
Jan 20 15:14:33 compute-0 nova_compute[250018]:     </system>
Jan 20 15:14:33 compute-0 nova_compute[250018]:   </sysinfo>
Jan 20 15:14:33 compute-0 nova_compute[250018]:   <os>
Jan 20 15:14:33 compute-0 nova_compute[250018]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 15:14:33 compute-0 nova_compute[250018]:     <boot dev="hd"/>
Jan 20 15:14:33 compute-0 nova_compute[250018]:     <smbios mode="sysinfo"/>
Jan 20 15:14:33 compute-0 nova_compute[250018]:   </os>
Jan 20 15:14:33 compute-0 nova_compute[250018]:   <features>
Jan 20 15:14:33 compute-0 nova_compute[250018]:     <acpi/>
Jan 20 15:14:33 compute-0 nova_compute[250018]:     <apic/>
Jan 20 15:14:33 compute-0 nova_compute[250018]:     <vmcoreinfo/>
Jan 20 15:14:33 compute-0 nova_compute[250018]:   </features>
Jan 20 15:14:33 compute-0 nova_compute[250018]:   <clock offset="utc">
Jan 20 15:14:33 compute-0 nova_compute[250018]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 15:14:33 compute-0 nova_compute[250018]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 15:14:33 compute-0 nova_compute[250018]:     <timer name="hpet" present="no"/>
Jan 20 15:14:33 compute-0 nova_compute[250018]:   </clock>
Jan 20 15:14:33 compute-0 nova_compute[250018]:   <cpu mode="custom" match="exact">
Jan 20 15:14:33 compute-0 nova_compute[250018]:     <model>Nehalem</model>
Jan 20 15:14:33 compute-0 nova_compute[250018]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 15:14:33 compute-0 nova_compute[250018]:   </cpu>
Jan 20 15:14:33 compute-0 nova_compute[250018]:   <devices>
Jan 20 15:14:33 compute-0 nova_compute[250018]:     <disk type="network" device="disk">
Jan 20 15:14:33 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 15:14:33 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/f1ded131-d9a3-4e93-ad99-53ee2695d5c8_disk">
Jan 20 15:14:33 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 15:14:33 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 15:14:33 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 15:14:33 compute-0 nova_compute[250018]:       </source>
Jan 20 15:14:33 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 15:14:33 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 15:14:33 compute-0 nova_compute[250018]:       </auth>
Jan 20 15:14:33 compute-0 nova_compute[250018]:       <target dev="vda" bus="virtio"/>
Jan 20 15:14:33 compute-0 nova_compute[250018]:     </disk>
Jan 20 15:14:33 compute-0 nova_compute[250018]:     <disk type="network" device="cdrom">
Jan 20 15:14:33 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 15:14:33 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/f1ded131-d9a3-4e93-ad99-53ee2695d5c8_disk.config">
Jan 20 15:14:33 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 15:14:33 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 15:14:33 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 15:14:33 compute-0 nova_compute[250018]:       </source>
Jan 20 15:14:33 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 15:14:33 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 15:14:33 compute-0 nova_compute[250018]:       </auth>
Jan 20 15:14:33 compute-0 nova_compute[250018]:       <target dev="sda" bus="sata"/>
Jan 20 15:14:33 compute-0 nova_compute[250018]:     </disk>
Jan 20 15:14:33 compute-0 nova_compute[250018]:     <interface type="ethernet">
Jan 20 15:14:33 compute-0 nova_compute[250018]:       <mac address="fa:16:3e:99:5e:ed"/>
Jan 20 15:14:33 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 15:14:33 compute-0 nova_compute[250018]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 15:14:33 compute-0 nova_compute[250018]:       <mtu size="1442"/>
Jan 20 15:14:33 compute-0 nova_compute[250018]:       <target dev="tap0e93d1de-67"/>
Jan 20 15:14:33 compute-0 nova_compute[250018]:     </interface>
Jan 20 15:14:33 compute-0 nova_compute[250018]:     <serial type="pty">
Jan 20 15:14:33 compute-0 nova_compute[250018]:       <log file="/var/lib/nova/instances/f1ded131-d9a3-4e93-ad99-53ee2695d5c8/console.log" append="off"/>
Jan 20 15:14:33 compute-0 nova_compute[250018]:     </serial>
Jan 20 15:14:33 compute-0 nova_compute[250018]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 15:14:33 compute-0 nova_compute[250018]:     <video>
Jan 20 15:14:33 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 15:14:33 compute-0 nova_compute[250018]:     </video>
Jan 20 15:14:33 compute-0 nova_compute[250018]:     <input type="tablet" bus="usb"/>
Jan 20 15:14:33 compute-0 nova_compute[250018]:     <rng model="virtio">
Jan 20 15:14:33 compute-0 nova_compute[250018]:       <backend model="random">/dev/urandom</backend>
Jan 20 15:14:33 compute-0 nova_compute[250018]:     </rng>
Jan 20 15:14:33 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root"/>
Jan 20 15:14:33 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:14:33 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:14:33 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:14:33 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:14:33 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:14:33 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:14:33 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:14:33 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:14:33 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:14:33 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:14:33 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:14:33 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:14:33 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:14:33 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:14:33 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:14:33 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:14:33 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:14:33 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:14:33 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:14:33 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:14:33 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:14:33 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:14:33 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:14:33 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:14:33 compute-0 nova_compute[250018]:     <controller type="usb" index="0"/>
Jan 20 15:14:33 compute-0 nova_compute[250018]:     <memballoon model="virtio">
Jan 20 15:14:33 compute-0 nova_compute[250018]:       <stats period="10"/>
Jan 20 15:14:33 compute-0 nova_compute[250018]:     </memballoon>
Jan 20 15:14:33 compute-0 nova_compute[250018]:   </devices>
Jan 20 15:14:33 compute-0 nova_compute[250018]: </domain>
Jan 20 15:14:33 compute-0 nova_compute[250018]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 15:14:33 compute-0 nova_compute[250018]: 2026-01-20 15:14:33.222 250022 DEBUG nova.compute.manager [None req-24c932b4-69ff-4b6a-95c6-525233a0dc4b e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] Preparing to wait for external event network-vif-plugged-0e93d1de-671e-4e37-8e79-44bed7981254 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 20 15:14:33 compute-0 nova_compute[250018]: 2026-01-20 15:14:33.222 250022 DEBUG oslo_concurrency.lockutils [None req-24c932b4-69ff-4b6a-95c6-525233a0dc4b e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Acquiring lock "f1ded131-d9a3-4e93-ad99-53ee2695d5c8-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:14:33 compute-0 nova_compute[250018]: 2026-01-20 15:14:33.222 250022 DEBUG oslo_concurrency.lockutils [None req-24c932b4-69ff-4b6a-95c6-525233a0dc4b e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Lock "f1ded131-d9a3-4e93-ad99-53ee2695d5c8-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:14:33 compute-0 nova_compute[250018]: 2026-01-20 15:14:33.222 250022 DEBUG oslo_concurrency.lockutils [None req-24c932b4-69ff-4b6a-95c6-525233a0dc4b e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Lock "f1ded131-d9a3-4e93-ad99-53ee2695d5c8-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:14:33 compute-0 nova_compute[250018]: 2026-01-20 15:14:33.223 250022 DEBUG nova.virt.libvirt.vif [None req-24c932b4-69ff-4b6a-95c6-525233a0dc4b e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T15:14:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='multiattach-server-0',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='multiattach-server-0',id=187,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBL5L2o6o5dLcQyaIfhCZ5CKxQlecqNGmP68oHIQEsVoKIC2qfrMKjObT9GdMU8oznX9LVUwIWCShhlEJu9ZqPiutEL2afEJ1hQQamjERNcx9wWS2NfOgykA4yugQphfOtA==',key_name='tempest-keypair-1568469072',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fff727019f86407498e83d7948d54962',ramdisk_id='',reservation_id='r-h927541v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram=
'0',network_allocated='True',owner_project_name='tempest-AttachVolumeMultiAttachTest-418194625',owner_user_name='tempest-AttachVolumeMultiAttachTest-418194625-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T15:14:27Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='e9cc4ce3e069479ba9c789b378a68a1d',uuid=f1ded131-d9a3-4e93-ad99-53ee2695d5c8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0e93d1de-671e-4e37-8e79-44bed7981254", "address": "fa:16:3e:99:5e:ed", "network": {"id": "c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1423306001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fff727019f86407498e83d7948d54962", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e93d1de-67", "ovs_interfaceid": "0e93d1de-671e-4e37-8e79-44bed7981254", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 15:14:33 compute-0 nova_compute[250018]: 2026-01-20 15:14:33.223 250022 DEBUG nova.network.os_vif_util [None req-24c932b4-69ff-4b6a-95c6-525233a0dc4b e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Converting VIF {"id": "0e93d1de-671e-4e37-8e79-44bed7981254", "address": "fa:16:3e:99:5e:ed", "network": {"id": "c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1423306001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fff727019f86407498e83d7948d54962", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e93d1de-67", "ovs_interfaceid": "0e93d1de-671e-4e37-8e79-44bed7981254", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:14:33 compute-0 nova_compute[250018]: 2026-01-20 15:14:33.224 250022 DEBUG nova.network.os_vif_util [None req-24c932b4-69ff-4b6a-95c6-525233a0dc4b e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:99:5e:ed,bridge_name='br-int',has_traffic_filtering=True,id=0e93d1de-671e-4e37-8e79-44bed7981254,network=Network(c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0e93d1de-67') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:14:33 compute-0 nova_compute[250018]: 2026-01-20 15:14:33.224 250022 DEBUG os_vif [None req-24c932b4-69ff-4b6a-95c6-525233a0dc4b e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:99:5e:ed,bridge_name='br-int',has_traffic_filtering=True,id=0e93d1de-671e-4e37-8e79-44bed7981254,network=Network(c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0e93d1de-67') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 15:14:33 compute-0 nova_compute[250018]: 2026-01-20 15:14:33.225 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:14:33 compute-0 nova_compute[250018]: 2026-01-20 15:14:33.225 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:14:33 compute-0 nova_compute[250018]: 2026-01-20 15:14:33.225 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:14:33 compute-0 nova_compute[250018]: 2026-01-20 15:14:33.229 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:14:33 compute-0 nova_compute[250018]: 2026-01-20 15:14:33.229 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0e93d1de-67, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:14:33 compute-0 nova_compute[250018]: 2026-01-20 15:14:33.229 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0e93d1de-67, col_values=(('external_ids', {'iface-id': '0e93d1de-671e-4e37-8e79-44bed7981254', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:99:5e:ed', 'vm-uuid': 'f1ded131-d9a3-4e93-ad99-53ee2695d5c8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:14:33 compute-0 nova_compute[250018]: 2026-01-20 15:14:33.231 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:14:33 compute-0 NetworkManager[48960]: <info>  [1768922073.2318] manager: (tap0e93d1de-67): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/328)
Jan 20 15:14:33 compute-0 nova_compute[250018]: 2026-01-20 15:14:33.233 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 15:14:33 compute-0 nova_compute[250018]: 2026-01-20 15:14:33.237 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:14:33 compute-0 nova_compute[250018]: 2026-01-20 15:14:33.238 250022 INFO os_vif [None req-24c932b4-69ff-4b6a-95c6-525233a0dc4b e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:99:5e:ed,bridge_name='br-int',has_traffic_filtering=True,id=0e93d1de-671e-4e37-8e79-44bed7981254,network=Network(c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0e93d1de-67')
Jan 20 15:14:33 compute-0 nova_compute[250018]: 2026-01-20 15:14:33.239 250022 DEBUG oslo_concurrency.lockutils [req-c00a20e7-f272-4884-b8a3-9f85361f21cf req-be7de86c-2cdb-4920-846b-602225b70ac0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-f1ded131-d9a3-4e93-ad99-53ee2695d5c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:14:33 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:14:33 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:14:33 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:14:33.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:14:33 compute-0 nova_compute[250018]: 2026-01-20 15:14:33.299 250022 DEBUG nova.virt.libvirt.driver [None req-24c932b4-69ff-4b6a-95c6-525233a0dc4b e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:14:33 compute-0 nova_compute[250018]: 2026-01-20 15:14:33.300 250022 DEBUG nova.virt.libvirt.driver [None req-24c932b4-69ff-4b6a-95c6-525233a0dc4b e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:14:33 compute-0 nova_compute[250018]: 2026-01-20 15:14:33.300 250022 DEBUG nova.virt.libvirt.driver [None req-24c932b4-69ff-4b6a-95c6-525233a0dc4b e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] No VIF found with MAC fa:16:3e:99:5e:ed, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 15:14:33 compute-0 nova_compute[250018]: 2026-01-20 15:14:33.302 250022 INFO nova.virt.libvirt.driver [None req-24c932b4-69ff-4b6a-95c6-525233a0dc4b e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] Using config drive
Jan 20 15:14:33 compute-0 nova_compute[250018]: 2026-01-20 15:14:33.346 250022 DEBUG nova.storage.rbd_utils [None req-24c932b4-69ff-4b6a-95c6-525233a0dc4b e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] rbd image f1ded131-d9a3-4e93-ad99-53ee2695d5c8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:14:33 compute-0 podman[359735]: 2026-01-20 15:14:33.362279274 +0000 UTC m=+0.075686933 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 20 15:14:33 compute-0 podman[359734]: 2026-01-20 15:14:33.376968769 +0000 UTC m=+0.101516929 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, 
tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 15:14:33 compute-0 ceph-mon[74360]: pgmap v2754: 321 pgs: 321 active+clean; 565 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.2 MiB/s rd, 2.8 MiB/s wr, 186 op/s
Jan 20 15:14:33 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3869517748' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:14:33 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/2503689261' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:14:34 compute-0 nova_compute[250018]: 2026-01-20 15:14:34.138 250022 INFO nova.virt.libvirt.driver [None req-24c932b4-69ff-4b6a-95c6-525233a0dc4b e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] Creating config drive at /var/lib/nova/instances/f1ded131-d9a3-4e93-ad99-53ee2695d5c8/disk.config
Jan 20 15:14:34 compute-0 nova_compute[250018]: 2026-01-20 15:14:34.144 250022 DEBUG oslo_concurrency.processutils [None req-24c932b4-69ff-4b6a-95c6-525233a0dc4b e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f1ded131-d9a3-4e93-ad99-53ee2695d5c8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmplo1z6gyb execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:14:34 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:14:34 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:14:34 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:14:34.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:14:34 compute-0 nova_compute[250018]: 2026-01-20 15:14:34.289 250022 DEBUG oslo_concurrency.processutils [None req-24c932b4-69ff-4b6a-95c6-525233a0dc4b e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f1ded131-d9a3-4e93-ad99-53ee2695d5c8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmplo1z6gyb" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:14:34 compute-0 nova_compute[250018]: 2026-01-20 15:14:34.318 250022 DEBUG nova.storage.rbd_utils [None req-24c932b4-69ff-4b6a-95c6-525233a0dc4b e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] rbd image f1ded131-d9a3-4e93-ad99-53ee2695d5c8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:14:34 compute-0 nova_compute[250018]: 2026-01-20 15:14:34.322 250022 DEBUG oslo_concurrency.processutils [None req-24c932b4-69ff-4b6a-95c6-525233a0dc4b e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f1ded131-d9a3-4e93-ad99-53ee2695d5c8/disk.config f1ded131-d9a3-4e93-ad99-53ee2695d5c8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:14:34 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2755: 321 pgs: 321 active+clean; 565 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.4 MiB/s rd, 2.7 MiB/s wr, 87 op/s
Jan 20 15:14:35 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:14:35 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:14:35 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:14:35.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:14:35 compute-0 nova_compute[250018]: 2026-01-20 15:14:35.356 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:14:36 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:14:36 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:14:36 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:14:36.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:14:36 compute-0 ceph-mon[74360]: pgmap v2755: 321 pgs: 321 active+clean; 565 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.4 MiB/s rd, 2.7 MiB/s wr, 87 op/s
Jan 20 15:14:36 compute-0 nova_compute[250018]: 2026-01-20 15:14:36.514 250022 DEBUG oslo_concurrency.processutils [None req-24c932b4-69ff-4b6a-95c6-525233a0dc4b e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f1ded131-d9a3-4e93-ad99-53ee2695d5c8/disk.config f1ded131-d9a3-4e93-ad99-53ee2695d5c8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.192s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:14:36 compute-0 nova_compute[250018]: 2026-01-20 15:14:36.514 250022 INFO nova.virt.libvirt.driver [None req-24c932b4-69ff-4b6a-95c6-525233a0dc4b e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] Deleting local config drive /var/lib/nova/instances/f1ded131-d9a3-4e93-ad99-53ee2695d5c8/disk.config because it was imported into RBD.
Jan 20 15:14:36 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2756: 321 pgs: 321 active+clean; 565 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.2 MiB/s rd, 2.2 MiB/s wr, 100 op/s
Jan 20 15:14:36 compute-0 kernel: tap0e93d1de-67: entered promiscuous mode
Jan 20 15:14:36 compute-0 NetworkManager[48960]: <info>  [1768922076.5650] manager: (tap0e93d1de-67): new Tun device (/org/freedesktop/NetworkManager/Devices/329)
Jan 20 15:14:36 compute-0 nova_compute[250018]: 2026-01-20 15:14:36.566 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:14:36 compute-0 ovn_controller[148666]: 2026-01-20T15:14:36Z|00671|binding|INFO|Claiming lport 0e93d1de-671e-4e37-8e79-44bed7981254 for this chassis.
Jan 20 15:14:36 compute-0 ovn_controller[148666]: 2026-01-20T15:14:36Z|00672|binding|INFO|0e93d1de-671e-4e37-8e79-44bed7981254: Claiming fa:16:3e:99:5e:ed 10.100.0.3
Jan 20 15:14:36 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:14:36.574 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:99:5e:ed 10.100.0.3'], port_security=['fa:16:3e:99:5e:ed 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'f1ded131-d9a3-4e93-ad99-53ee2695d5c8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fff727019f86407498e83d7948d54962', 'neutron:revision_number': '2', 'neutron:security_group_ids': '5ace6a2f-56c6-4679-bb81-70ccb27ab312', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=87d69a20-7690-494a-ac16-7c600840561a, chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=0e93d1de-671e-4e37-8e79-44bed7981254) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:14:36 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:14:36.575 160071 INFO neutron.agent.ovn.metadata.agent [-] Port 0e93d1de-671e-4e37-8e79-44bed7981254 in datapath c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab bound to our chassis
Jan 20 15:14:36 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:14:36.576 160071 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab
Jan 20 15:14:36 compute-0 ovn_controller[148666]: 2026-01-20T15:14:36Z|00673|binding|INFO|Setting lport 0e93d1de-671e-4e37-8e79-44bed7981254 ovn-installed in OVS
Jan 20 15:14:36 compute-0 ovn_controller[148666]: 2026-01-20T15:14:36Z|00674|binding|INFO|Setting lport 0e93d1de-671e-4e37-8e79-44bed7981254 up in Southbound
Jan 20 15:14:36 compute-0 nova_compute[250018]: 2026-01-20 15:14:36.586 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:14:36 compute-0 nova_compute[250018]: 2026-01-20 15:14:36.589 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:14:36 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:14:36.591 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[1c06da20-95dc-4ba4-ae28-ebc31a03dc87]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:14:36 compute-0 systemd-udevd[359853]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 15:14:36 compute-0 systemd-machined[216401]: New machine qemu-83-instance-000000bb.
Jan 20 15:14:36 compute-0 NetworkManager[48960]: <info>  [1768922076.6106] device (tap0e93d1de-67): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 15:14:36 compute-0 NetworkManager[48960]: <info>  [1768922076.6112] device (tap0e93d1de-67): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 15:14:36 compute-0 systemd[1]: Started Virtual Machine qemu-83-instance-000000bb.
Jan 20 15:14:36 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:14:36.627 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[e1619fd8-f06a-41bd-a74e-8fd9307efdcf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:14:36 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:14:36.630 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[d9625cf6-d62b-4c7f-99c3-86d01d5ce95f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:14:36 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:14:36.655 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[468f6ede-aa7f-4f10-8bae-002db1c7246a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:14:36 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:14:36.674 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[c3cd20c4-4225-4779-8d01-8b42e96369b5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc1f4a971-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fc:30:f0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 208], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 795048, 'reachable_time': 24995, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 359862, 'error': None, 'target': 'ovnmeta-c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:14:36 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:14:36.689 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[b326396a-ad68-43c4-89ee-a19ecfbe8d35]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapc1f4a971-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 795057, 'tstamp': 795057}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 359866, 'error': None, 'target': 'ovnmeta-c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapc1f4a971-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 795060, 'tstamp': 795060}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 359866, 'error': None, 'target': 'ovnmeta-c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:14:36 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:14:36.691 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc1f4a971-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:14:36 compute-0 nova_compute[250018]: 2026-01-20 15:14:36.693 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:14:36 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:14:36.694 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc1f4a971-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:14:36 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:14:36.694 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:14:36 compute-0 nova_compute[250018]: 2026-01-20 15:14:36.694 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:14:36 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:14:36.695 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc1f4a971-00, col_values=(('external_ids', {'iface-id': 'b20b0e27-0b08-4316-b6df-6784416f44c0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:14:36 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:14:36.695 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:14:36 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e401 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:14:36 compute-0 nova_compute[250018]: 2026-01-20 15:14:36.866 250022 DEBUG nova.compute.manager [req-ee8a1fd4-e0cd-48d2-ab7e-750709cd3460 req-077d009b-4e3f-4193-b981-0dded7a0c846 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] Received event network-vif-plugged-0e93d1de-671e-4e37-8e79-44bed7981254 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:14:36 compute-0 nova_compute[250018]: 2026-01-20 15:14:36.866 250022 DEBUG oslo_concurrency.lockutils [req-ee8a1fd4-e0cd-48d2-ab7e-750709cd3460 req-077d009b-4e3f-4193-b981-0dded7a0c846 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "f1ded131-d9a3-4e93-ad99-53ee2695d5c8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:14:36 compute-0 nova_compute[250018]: 2026-01-20 15:14:36.867 250022 DEBUG oslo_concurrency.lockutils [req-ee8a1fd4-e0cd-48d2-ab7e-750709cd3460 req-077d009b-4e3f-4193-b981-0dded7a0c846 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "f1ded131-d9a3-4e93-ad99-53ee2695d5c8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:14:36 compute-0 nova_compute[250018]: 2026-01-20 15:14:36.867 250022 DEBUG oslo_concurrency.lockutils [req-ee8a1fd4-e0cd-48d2-ab7e-750709cd3460 req-077d009b-4e3f-4193-b981-0dded7a0c846 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "f1ded131-d9a3-4e93-ad99-53ee2695d5c8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:14:36 compute-0 nova_compute[250018]: 2026-01-20 15:14:36.867 250022 DEBUG nova.compute.manager [req-ee8a1fd4-e0cd-48d2-ab7e-750709cd3460 req-077d009b-4e3f-4193-b981-0dded7a0c846 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] Processing event network-vif-plugged-0e93d1de-671e-4e37-8e79-44bed7981254 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 20 15:14:37 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:14:37 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:14:37 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:14:37.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:14:37 compute-0 ceph-mon[74360]: pgmap v2756: 321 pgs: 321 active+clean; 565 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.2 MiB/s rd, 2.2 MiB/s wr, 100 op/s
Jan 20 15:14:38 compute-0 nova_compute[250018]: 2026-01-20 15:14:38.232 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:14:38 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:14:38 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:14:38 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:14:38.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:14:38 compute-0 nova_compute[250018]: 2026-01-20 15:14:38.282 250022 DEBUG nova.compute.manager [None req-24c932b4-69ff-4b6a-95c6-525233a0dc4b e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 15:14:38 compute-0 nova_compute[250018]: 2026-01-20 15:14:38.283 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768922078.282821, f1ded131-d9a3-4e93-ad99-53ee2695d5c8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:14:38 compute-0 nova_compute[250018]: 2026-01-20 15:14:38.283 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] VM Started (Lifecycle Event)
Jan 20 15:14:38 compute-0 nova_compute[250018]: 2026-01-20 15:14:38.285 250022 DEBUG nova.virt.libvirt.driver [None req-24c932b4-69ff-4b6a-95c6-525233a0dc4b e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 15:14:38 compute-0 nova_compute[250018]: 2026-01-20 15:14:38.288 250022 INFO nova.virt.libvirt.driver [-] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] Instance spawned successfully.
Jan 20 15:14:38 compute-0 nova_compute[250018]: 2026-01-20 15:14:38.288 250022 DEBUG nova.virt.libvirt.driver [None req-24c932b4-69ff-4b6a-95c6-525233a0dc4b e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 15:14:38 compute-0 nova_compute[250018]: 2026-01-20 15:14:38.329 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:14:38 compute-0 nova_compute[250018]: 2026-01-20 15:14:38.332 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:14:38 compute-0 nova_compute[250018]: 2026-01-20 15:14:38.338 250022 DEBUG nova.virt.libvirt.driver [None req-24c932b4-69ff-4b6a-95c6-525233a0dc4b e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:14:38 compute-0 nova_compute[250018]: 2026-01-20 15:14:38.338 250022 DEBUG nova.virt.libvirt.driver [None req-24c932b4-69ff-4b6a-95c6-525233a0dc4b e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:14:38 compute-0 nova_compute[250018]: 2026-01-20 15:14:38.338 250022 DEBUG nova.virt.libvirt.driver [None req-24c932b4-69ff-4b6a-95c6-525233a0dc4b e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:14:38 compute-0 nova_compute[250018]: 2026-01-20 15:14:38.339 250022 DEBUG nova.virt.libvirt.driver [None req-24c932b4-69ff-4b6a-95c6-525233a0dc4b e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:14:38 compute-0 nova_compute[250018]: 2026-01-20 15:14:38.339 250022 DEBUG nova.virt.libvirt.driver [None req-24c932b4-69ff-4b6a-95c6-525233a0dc4b e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:14:38 compute-0 nova_compute[250018]: 2026-01-20 15:14:38.339 250022 DEBUG nova.virt.libvirt.driver [None req-24c932b4-69ff-4b6a-95c6-525233a0dc4b e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:14:38 compute-0 nova_compute[250018]: 2026-01-20 15:14:38.370 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 15:14:38 compute-0 nova_compute[250018]: 2026-01-20 15:14:38.370 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768922078.2829373, f1ded131-d9a3-4e93-ad99-53ee2695d5c8 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:14:38 compute-0 nova_compute[250018]: 2026-01-20 15:14:38.370 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] VM Paused (Lifecycle Event)
Jan 20 15:14:38 compute-0 nova_compute[250018]: 2026-01-20 15:14:38.409 250022 INFO nova.compute.manager [None req-24c932b4-69ff-4b6a-95c6-525233a0dc4b e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] Took 10.39 seconds to spawn the instance on the hypervisor.
Jan 20 15:14:38 compute-0 nova_compute[250018]: 2026-01-20 15:14:38.410 250022 DEBUG nova.compute.manager [None req-24c932b4-69ff-4b6a-95c6-525233a0dc4b e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:14:38 compute-0 nova_compute[250018]: 2026-01-20 15:14:38.411 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:14:38 compute-0 nova_compute[250018]: 2026-01-20 15:14:38.419 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768922078.2852216, f1ded131-d9a3-4e93-ad99-53ee2695d5c8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:14:38 compute-0 nova_compute[250018]: 2026-01-20 15:14:38.419 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] VM Resumed (Lifecycle Event)
Jan 20 15:14:38 compute-0 nova_compute[250018]: 2026-01-20 15:14:38.444 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:14:38 compute-0 nova_compute[250018]: 2026-01-20 15:14:38.448 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:14:38 compute-0 nova_compute[250018]: 2026-01-20 15:14:38.479 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 15:14:38 compute-0 nova_compute[250018]: 2026-01-20 15:14:38.482 250022 INFO nova.compute.manager [None req-24c932b4-69ff-4b6a-95c6-525233a0dc4b e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] Took 11.51 seconds to build instance.
Jan 20 15:14:38 compute-0 nova_compute[250018]: 2026-01-20 15:14:38.501 250022 DEBUG oslo_concurrency.lockutils [None req-24c932b4-69ff-4b6a-95c6-525233a0dc4b e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Lock "f1ded131-d9a3-4e93-ad99-53ee2695d5c8" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.586s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:14:38 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2757: 321 pgs: 321 active+clean; 565 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.2 MiB/s rd, 2.2 MiB/s wr, 97 op/s
Jan 20 15:14:38 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/3112583804' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:14:38 compute-0 nova_compute[250018]: 2026-01-20 15:14:38.927 250022 DEBUG nova.compute.manager [req-f6fe1052-ceb5-4679-956b-81f381c8867b req-e6f08568-f818-4036-a02d-3b1a1842ee8b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] Received event network-vif-plugged-0e93d1de-671e-4e37-8e79-44bed7981254 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:14:38 compute-0 nova_compute[250018]: 2026-01-20 15:14:38.928 250022 DEBUG oslo_concurrency.lockutils [req-f6fe1052-ceb5-4679-956b-81f381c8867b req-e6f08568-f818-4036-a02d-3b1a1842ee8b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "f1ded131-d9a3-4e93-ad99-53ee2695d5c8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:14:38 compute-0 nova_compute[250018]: 2026-01-20 15:14:38.928 250022 DEBUG oslo_concurrency.lockutils [req-f6fe1052-ceb5-4679-956b-81f381c8867b req-e6f08568-f818-4036-a02d-3b1a1842ee8b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "f1ded131-d9a3-4e93-ad99-53ee2695d5c8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:14:38 compute-0 nova_compute[250018]: 2026-01-20 15:14:38.928 250022 DEBUG oslo_concurrency.lockutils [req-f6fe1052-ceb5-4679-956b-81f381c8867b req-e6f08568-f818-4036-a02d-3b1a1842ee8b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "f1ded131-d9a3-4e93-ad99-53ee2695d5c8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:14:38 compute-0 nova_compute[250018]: 2026-01-20 15:14:38.928 250022 DEBUG nova.compute.manager [req-f6fe1052-ceb5-4679-956b-81f381c8867b req-e6f08568-f818-4036-a02d-3b1a1842ee8b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] No waiting events found dispatching network-vif-plugged-0e93d1de-671e-4e37-8e79-44bed7981254 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:14:38 compute-0 nova_compute[250018]: 2026-01-20 15:14:38.929 250022 WARNING nova.compute.manager [req-f6fe1052-ceb5-4679-956b-81f381c8867b req-e6f08568-f818-4036-a02d-3b1a1842ee8b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] Received unexpected event network-vif-plugged-0e93d1de-671e-4e37-8e79-44bed7981254 for instance with vm_state active and task_state None.
Jan 20 15:14:39 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:14:39 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:14:39 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:14:39.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:14:39 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 15:14:39 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/185856105' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:14:40 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:14:40 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:14:40 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:14:40.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:14:40 compute-0 ceph-mon[74360]: pgmap v2757: 321 pgs: 321 active+clean; 565 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.2 MiB/s rd, 2.2 MiB/s wr, 97 op/s
Jan 20 15:14:40 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/185856105' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:14:40 compute-0 nova_compute[250018]: 2026-01-20 15:14:40.358 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:14:40 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2758: 321 pgs: 321 active+clean; 566 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.5 MiB/s rd, 1.2 MiB/s wr, 122 op/s
Jan 20 15:14:40 compute-0 sudo[359912]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:14:40 compute-0 sudo[359912]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:14:40 compute-0 sudo[359912]: pam_unix(sudo:session): session closed for user root
Jan 20 15:14:41 compute-0 sudo[359937]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:14:41 compute-0 sudo[359937]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:14:41 compute-0 sudo[359937]: pam_unix(sudo:session): session closed for user root
Jan 20 15:14:41 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:14:41 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:14:41 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:14:41.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:14:41 compute-0 ceph-mon[74360]: pgmap v2758: 321 pgs: 321 active+clean; 566 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.5 MiB/s rd, 1.2 MiB/s wr, 122 op/s
Jan 20 15:14:41 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e401 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:14:41 compute-0 sudo[359962]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:14:41 compute-0 sudo[359962]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:14:41 compute-0 sudo[359962]: pam_unix(sudo:session): session closed for user root
Jan 20 15:14:42 compute-0 sudo[359987]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:14:42 compute-0 sudo[359987]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:14:42 compute-0 sudo[359987]: pam_unix(sudo:session): session closed for user root
Jan 20 15:14:42 compute-0 sudo[360012]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:14:42 compute-0 sudo[360012]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:14:42 compute-0 sudo[360012]: pam_unix(sudo:session): session closed for user root
Jan 20 15:14:42 compute-0 sudo[360037]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 15:14:42 compute-0 sudo[360037]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:14:42 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:14:42 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:14:42 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:14:42.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:14:42 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2759: 321 pgs: 321 active+clean; 566 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.6 MiB/s rd, 46 KiB/s wr, 151 op/s
Jan 20 15:14:42 compute-0 sudo[360037]: pam_unix(sudo:session): session closed for user root
Jan 20 15:14:42 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 15:14:42 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:14:42 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 20 15:14:42 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 15:14:42 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 20 15:14:42 compute-0 ceph-mon[74360]: pgmap v2759: 321 pgs: 321 active+clean; 566 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.6 MiB/s rd, 46 KiB/s wr, 151 op/s
Jan 20 15:14:42 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:14:42 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev b6f927d1-746b-4f18-b056-3869f918a20e does not exist
Jan 20 15:14:42 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 79d4ba59-d499-4310-adb8-cffebe85718f does not exist
Jan 20 15:14:42 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 16fbf609-31e1-4146-b0d0-516bba0d5a48 does not exist
Jan 20 15:14:42 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 20 15:14:42 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 15:14:42 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 20 15:14:42 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 15:14:42 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 15:14:42 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:14:42 compute-0 sudo[360095]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:14:42 compute-0 sudo[360095]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:14:42 compute-0 sudo[360095]: pam_unix(sudo:session): session closed for user root
Jan 20 15:14:43 compute-0 sudo[360120]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:14:43 compute-0 sudo[360120]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:14:43 compute-0 sudo[360120]: pam_unix(sudo:session): session closed for user root
Jan 20 15:14:43 compute-0 sudo[360145]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:14:43 compute-0 sudo[360145]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:14:43 compute-0 sudo[360145]: pam_unix(sudo:session): session closed for user root
Jan 20 15:14:43 compute-0 sudo[360170]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 15:14:43 compute-0 sudo[360170]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:14:43 compute-0 nova_compute[250018]: 2026-01-20 15:14:43.234 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:14:43 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:14:43 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:14:43 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:14:43.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:14:43 compute-0 podman[360237]: 2026-01-20 15:14:43.46699824 +0000 UTC m=+0.042844267 container create dd6275549798a923d74982dca72d902a60105f364bcc984c3248c1684e721a22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_chebyshev, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 20 15:14:43 compute-0 systemd[1]: Started libpod-conmon-dd6275549798a923d74982dca72d902a60105f364bcc984c3248c1684e721a22.scope.
Jan 20 15:14:43 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:14:43 compute-0 podman[360237]: 2026-01-20 15:14:43.450833763 +0000 UTC m=+0.026679820 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:14:43 compute-0 podman[360237]: 2026-01-20 15:14:43.558507648 +0000 UTC m=+0.134353695 container init dd6275549798a923d74982dca72d902a60105f364bcc984c3248c1684e721a22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_chebyshev, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 15:14:43 compute-0 podman[360237]: 2026-01-20 15:14:43.565737353 +0000 UTC m=+0.141583380 container start dd6275549798a923d74982dca72d902a60105f364bcc984c3248c1684e721a22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_chebyshev, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 20 15:14:43 compute-0 podman[360237]: 2026-01-20 15:14:43.568395244 +0000 UTC m=+0.144241271 container attach dd6275549798a923d74982dca72d902a60105f364bcc984c3248c1684e721a22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_chebyshev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 20 15:14:43 compute-0 gracious_chebyshev[360253]: 167 167
Jan 20 15:14:43 compute-0 systemd[1]: libpod-dd6275549798a923d74982dca72d902a60105f364bcc984c3248c1684e721a22.scope: Deactivated successfully.
Jan 20 15:14:43 compute-0 podman[360237]: 2026-01-20 15:14:43.571846198 +0000 UTC m=+0.147692235 container died dd6275549798a923d74982dca72d902a60105f364bcc984c3248c1684e721a22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_chebyshev, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 15:14:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-cc442b32dab3c50183f241fa602623060ffd7ffadf8562d64015c0b102afaebf-merged.mount: Deactivated successfully.
Jan 20 15:14:43 compute-0 podman[360237]: 2026-01-20 15:14:43.605476105 +0000 UTC m=+0.181322132 container remove dd6275549798a923d74982dca72d902a60105f364bcc984c3248c1684e721a22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_chebyshev, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 15:14:43 compute-0 systemd[1]: libpod-conmon-dd6275549798a923d74982dca72d902a60105f364bcc984c3248c1684e721a22.scope: Deactivated successfully.
Jan 20 15:14:43 compute-0 podman[360277]: 2026-01-20 15:14:43.810160246 +0000 UTC m=+0.046953128 container create f5b82239247dde58ac806d2a99185ac1b8c7580f94fa092ee294f1c6f3c7c592 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_hawking, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 15:14:43 compute-0 systemd[1]: Started libpod-conmon-f5b82239247dde58ac806d2a99185ac1b8c7580f94fa092ee294f1c6f3c7c592.scope.
Jan 20 15:14:43 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:14:43 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 15:14:43 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:14:43 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 15:14:43 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 15:14:43 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:14:43 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/2934712307' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:14:43 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:14:43 compute-0 podman[360277]: 2026-01-20 15:14:43.789046297 +0000 UTC m=+0.025839229 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:14:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e2cd9e78778852d3f94887b5cc128d5bc70175f4fe234f44d3c7a08afd0bf6a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 15:14:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e2cd9e78778852d3f94887b5cc128d5bc70175f4fe234f44d3c7a08afd0bf6a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 15:14:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e2cd9e78778852d3f94887b5cc128d5bc70175f4fe234f44d3c7a08afd0bf6a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 15:14:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e2cd9e78778852d3f94887b5cc128d5bc70175f4fe234f44d3c7a08afd0bf6a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 15:14:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e2cd9e78778852d3f94887b5cc128d5bc70175f4fe234f44d3c7a08afd0bf6a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 15:14:43 compute-0 podman[360277]: 2026-01-20 15:14:43.898709085 +0000 UTC m=+0.135501987 container init f5b82239247dde58ac806d2a99185ac1b8c7580f94fa092ee294f1c6f3c7c592 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_hawking, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Jan 20 15:14:43 compute-0 podman[360277]: 2026-01-20 15:14:43.907256365 +0000 UTC m=+0.144049237 container start f5b82239247dde58ac806d2a99185ac1b8c7580f94fa092ee294f1c6f3c7c592 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_hawking, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 15:14:43 compute-0 podman[360277]: 2026-01-20 15:14:43.910254396 +0000 UTC m=+0.147047278 container attach f5b82239247dde58ac806d2a99185ac1b8c7580f94fa092ee294f1c6f3c7c592 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_hawking, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 20 15:14:44 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:14:44 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:14:44 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:14:44.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:14:44 compute-0 nova_compute[250018]: 2026-01-20 15:14:44.499 250022 DEBUG nova.compute.manager [req-64f77f08-e2ba-47d8-a8ee-94c368cda85a req-894df13b-fea7-43c3-91f9-36601bc264a1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] Received event network-changed-0e93d1de-671e-4e37-8e79-44bed7981254 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:14:44 compute-0 nova_compute[250018]: 2026-01-20 15:14:44.500 250022 DEBUG nova.compute.manager [req-64f77f08-e2ba-47d8-a8ee-94c368cda85a req-894df13b-fea7-43c3-91f9-36601bc264a1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] Refreshing instance network info cache due to event network-changed-0e93d1de-671e-4e37-8e79-44bed7981254. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 15:14:44 compute-0 nova_compute[250018]: 2026-01-20 15:14:44.500 250022 DEBUG oslo_concurrency.lockutils [req-64f77f08-e2ba-47d8-a8ee-94c368cda85a req-894df13b-fea7-43c3-91f9-36601bc264a1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-f1ded131-d9a3-4e93-ad99-53ee2695d5c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:14:44 compute-0 nova_compute[250018]: 2026-01-20 15:14:44.500 250022 DEBUG oslo_concurrency.lockutils [req-64f77f08-e2ba-47d8-a8ee-94c368cda85a req-894df13b-fea7-43c3-91f9-36601bc264a1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-f1ded131-d9a3-4e93-ad99-53ee2695d5c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:14:44 compute-0 nova_compute[250018]: 2026-01-20 15:14:44.501 250022 DEBUG nova.network.neutron [req-64f77f08-e2ba-47d8-a8ee-94c368cda85a req-894df13b-fea7-43c3-91f9-36601bc264a1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] Refreshing network info cache for port 0e93d1de-671e-4e37-8e79-44bed7981254 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 15:14:44 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2760: 321 pgs: 321 active+clean; 567 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.5 MiB/s rd, 44 KiB/s wr, 143 op/s
Jan 20 15:14:44 compute-0 distracted_hawking[360294]: --> passed data devices: 0 physical, 1 LVM
Jan 20 15:14:44 compute-0 distracted_hawking[360294]: --> relative data size: 1.0
Jan 20 15:14:44 compute-0 distracted_hawking[360294]: --> All data devices are unavailable
Jan 20 15:14:44 compute-0 systemd[1]: libpod-f5b82239247dde58ac806d2a99185ac1b8c7580f94fa092ee294f1c6f3c7c592.scope: Deactivated successfully.
Jan 20 15:14:44 compute-0 podman[360310]: 2026-01-20 15:14:44.760685727 +0000 UTC m=+0.027153004 container died f5b82239247dde58ac806d2a99185ac1b8c7580f94fa092ee294f1c6f3c7c592 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_hawking, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 15:14:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-2e2cd9e78778852d3f94887b5cc128d5bc70175f4fe234f44d3c7a08afd0bf6a-merged.mount: Deactivated successfully.
Jan 20 15:14:44 compute-0 podman[360310]: 2026-01-20 15:14:44.815396193 +0000 UTC m=+0.081863460 container remove f5b82239247dde58ac806d2a99185ac1b8c7580f94fa092ee294f1c6f3c7c592 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_hawking, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 20 15:14:44 compute-0 systemd[1]: libpod-conmon-f5b82239247dde58ac806d2a99185ac1b8c7580f94fa092ee294f1c6f3c7c592.scope: Deactivated successfully.
Jan 20 15:14:44 compute-0 sudo[360170]: pam_unix(sudo:session): session closed for user root
Jan 20 15:14:44 compute-0 sudo[360325]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:14:44 compute-0 sudo[360325]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:14:44 compute-0 sudo[360325]: pam_unix(sudo:session): session closed for user root
Jan 20 15:14:44 compute-0 ceph-mon[74360]: pgmap v2760: 321 pgs: 321 active+clean; 567 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.5 MiB/s rd, 44 KiB/s wr, 143 op/s
Jan 20 15:14:44 compute-0 sudo[360350]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:14:44 compute-0 sudo[360350]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:14:44 compute-0 sudo[360350]: pam_unix(sudo:session): session closed for user root
Jan 20 15:14:45 compute-0 sudo[360375]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:14:45 compute-0 sudo[360375]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:14:45 compute-0 sudo[360375]: pam_unix(sudo:session): session closed for user root
Jan 20 15:14:45 compute-0 sudo[360400]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- lvm list --format json
Jan 20 15:14:45 compute-0 sudo[360400]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:14:45 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:14:45 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:14:45 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:14:45.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:14:45 compute-0 nova_compute[250018]: 2026-01-20 15:14:45.359 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:14:45 compute-0 podman[360465]: 2026-01-20 15:14:45.410481485 +0000 UTC m=+0.042115957 container create b07402d3f71bf42b6959895a804ee55da1680d7f33a6760ebffd14bee8581f75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_sanderson, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 15:14:45 compute-0 systemd[1]: Started libpod-conmon-b07402d3f71bf42b6959895a804ee55da1680d7f33a6760ebffd14bee8581f75.scope.
Jan 20 15:14:45 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:14:45 compute-0 podman[360465]: 2026-01-20 15:14:45.392354176 +0000 UTC m=+0.023988668 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:14:45 compute-0 podman[360465]: 2026-01-20 15:14:45.496133766 +0000 UTC m=+0.127768248 container init b07402d3f71bf42b6959895a804ee55da1680d7f33a6760ebffd14bee8581f75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_sanderson, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 20 15:14:45 compute-0 podman[360465]: 2026-01-20 15:14:45.505247532 +0000 UTC m=+0.136882004 container start b07402d3f71bf42b6959895a804ee55da1680d7f33a6760ebffd14bee8581f75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_sanderson, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 20 15:14:45 compute-0 podman[360465]: 2026-01-20 15:14:45.508405387 +0000 UTC m=+0.140039859 container attach b07402d3f71bf42b6959895a804ee55da1680d7f33a6760ebffd14bee8581f75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_sanderson, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default)
Jan 20 15:14:45 compute-0 goofy_sanderson[360482]: 167 167
Jan 20 15:14:45 compute-0 systemd[1]: libpod-b07402d3f71bf42b6959895a804ee55da1680d7f33a6760ebffd14bee8581f75.scope: Deactivated successfully.
Jan 20 15:14:45 compute-0 podman[360465]: 2026-01-20 15:14:45.512760674 +0000 UTC m=+0.144395166 container died b07402d3f71bf42b6959895a804ee55da1680d7f33a6760ebffd14bee8581f75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_sanderson, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 20 15:14:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-2194f2b0f41813c76ca6dbe6eb06ccde3214d83ccb0e52bbd33e2898da4189de-merged.mount: Deactivated successfully.
Jan 20 15:14:45 compute-0 podman[360465]: 2026-01-20 15:14:45.553310748 +0000 UTC m=+0.184945230 container remove b07402d3f71bf42b6959895a804ee55da1680d7f33a6760ebffd14bee8581f75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_sanderson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 20 15:14:45 compute-0 systemd[1]: libpod-conmon-b07402d3f71bf42b6959895a804ee55da1680d7f33a6760ebffd14bee8581f75.scope: Deactivated successfully.
Jan 20 15:14:45 compute-0 nova_compute[250018]: 2026-01-20 15:14:45.716 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:14:45 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:14:45.719 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=60, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=59) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:14:45 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:14:45.719 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 15:14:45 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:14:45.720 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=367c1a2c-b16a-4828-ab5a-626bb50023b4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '60'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:14:45 compute-0 podman[360506]: 2026-01-20 15:14:45.736596692 +0000 UTC m=+0.044222644 container create d4bd321b2e92b4b0d1b9b6b124e08840d16c7f2417cfea8b155e331d353fee01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_brahmagupta, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 20 15:14:45 compute-0 systemd[1]: Started libpod-conmon-d4bd321b2e92b4b0d1b9b6b124e08840d16c7f2417cfea8b155e331d353fee01.scope.
Jan 20 15:14:45 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:14:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42cc6bb11542ae9f24f40954079408883139a038ad4bb3a7112e4fec2e46b3a7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 15:14:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42cc6bb11542ae9f24f40954079408883139a038ad4bb3a7112e4fec2e46b3a7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 15:14:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42cc6bb11542ae9f24f40954079408883139a038ad4bb3a7112e4fec2e46b3a7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 15:14:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42cc6bb11542ae9f24f40954079408883139a038ad4bb3a7112e4fec2e46b3a7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 15:14:45 compute-0 podman[360506]: 2026-01-20 15:14:45.717652801 +0000 UTC m=+0.025278773 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:14:45 compute-0 podman[360506]: 2026-01-20 15:14:45.815284135 +0000 UTC m=+0.122910117 container init d4bd321b2e92b4b0d1b9b6b124e08840d16c7f2417cfea8b155e331d353fee01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_brahmagupta, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default)
Jan 20 15:14:45 compute-0 podman[360506]: 2026-01-20 15:14:45.823855656 +0000 UTC m=+0.131481618 container start d4bd321b2e92b4b0d1b9b6b124e08840d16c7f2417cfea8b155e331d353fee01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_brahmagupta, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 20 15:14:45 compute-0 podman[360506]: 2026-01-20 15:14:45.82731159 +0000 UTC m=+0.134937542 container attach d4bd321b2e92b4b0d1b9b6b124e08840d16c7f2417cfea8b155e331d353fee01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_brahmagupta, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 20 15:14:46 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:14:46 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:14:46 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:14:46.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:14:46 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2761: 321 pgs: 321 active+clean; 568 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.5 MiB/s rd, 65 KiB/s wr, 155 op/s
Jan 20 15:14:46 compute-0 competent_brahmagupta[360523]: {
Jan 20 15:14:46 compute-0 competent_brahmagupta[360523]:     "0": [
Jan 20 15:14:46 compute-0 competent_brahmagupta[360523]:         {
Jan 20 15:14:46 compute-0 competent_brahmagupta[360523]:             "devices": [
Jan 20 15:14:46 compute-0 competent_brahmagupta[360523]:                 "/dev/loop3"
Jan 20 15:14:46 compute-0 competent_brahmagupta[360523]:             ],
Jan 20 15:14:46 compute-0 competent_brahmagupta[360523]:             "lv_name": "ceph_lv0",
Jan 20 15:14:46 compute-0 competent_brahmagupta[360523]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 15:14:46 compute-0 competent_brahmagupta[360523]:             "lv_size": "7511998464",
Jan 20 15:14:46 compute-0 competent_brahmagupta[360523]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e399cf45-e6b6-5393-99f1-75c601d3f188,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afc3740a-ccee-46f8-b343-22648cc89376,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 20 15:14:46 compute-0 competent_brahmagupta[360523]:             "lv_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 15:14:46 compute-0 competent_brahmagupta[360523]:             "name": "ceph_lv0",
Jan 20 15:14:46 compute-0 competent_brahmagupta[360523]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 15:14:46 compute-0 competent_brahmagupta[360523]:             "tags": {
Jan 20 15:14:46 compute-0 competent_brahmagupta[360523]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 15:14:46 compute-0 competent_brahmagupta[360523]:                 "ceph.block_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 15:14:46 compute-0 competent_brahmagupta[360523]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 15:14:46 compute-0 competent_brahmagupta[360523]:                 "ceph.cluster_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 15:14:46 compute-0 competent_brahmagupta[360523]:                 "ceph.cluster_name": "ceph",
Jan 20 15:14:46 compute-0 competent_brahmagupta[360523]:                 "ceph.crush_device_class": "",
Jan 20 15:14:46 compute-0 competent_brahmagupta[360523]:                 "ceph.encrypted": "0",
Jan 20 15:14:46 compute-0 competent_brahmagupta[360523]:                 "ceph.osd_fsid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 15:14:46 compute-0 competent_brahmagupta[360523]:                 "ceph.osd_id": "0",
Jan 20 15:14:46 compute-0 competent_brahmagupta[360523]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 15:14:46 compute-0 competent_brahmagupta[360523]:                 "ceph.type": "block",
Jan 20 15:14:46 compute-0 competent_brahmagupta[360523]:                 "ceph.vdo": "0"
Jan 20 15:14:46 compute-0 competent_brahmagupta[360523]:             },
Jan 20 15:14:46 compute-0 competent_brahmagupta[360523]:             "type": "block",
Jan 20 15:14:46 compute-0 competent_brahmagupta[360523]:             "vg_name": "ceph_vg0"
Jan 20 15:14:46 compute-0 competent_brahmagupta[360523]:         }
Jan 20 15:14:46 compute-0 competent_brahmagupta[360523]:     ]
Jan 20 15:14:46 compute-0 competent_brahmagupta[360523]: }
Jan 20 15:14:46 compute-0 systemd[1]: libpod-d4bd321b2e92b4b0d1b9b6b124e08840d16c7f2417cfea8b155e331d353fee01.scope: Deactivated successfully.
Jan 20 15:14:46 compute-0 podman[360506]: 2026-01-20 15:14:46.602273554 +0000 UTC m=+0.909899506 container died d4bd321b2e92b4b0d1b9b6b124e08840d16c7f2417cfea8b155e331d353fee01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_brahmagupta, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 20 15:14:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-42cc6bb11542ae9f24f40954079408883139a038ad4bb3a7112e4fec2e46b3a7-merged.mount: Deactivated successfully.
Jan 20 15:14:46 compute-0 podman[360506]: 2026-01-20 15:14:46.656851996 +0000 UTC m=+0.964477948 container remove d4bd321b2e92b4b0d1b9b6b124e08840d16c7f2417cfea8b155e331d353fee01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_brahmagupta, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 20 15:14:46 compute-0 systemd[1]: libpod-conmon-d4bd321b2e92b4b0d1b9b6b124e08840d16c7f2417cfea8b155e331d353fee01.scope: Deactivated successfully.
Jan 20 15:14:46 compute-0 sudo[360400]: pam_unix(sudo:session): session closed for user root
Jan 20 15:14:46 compute-0 sudo[360544]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:14:46 compute-0 sudo[360544]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:14:46 compute-0 sudo[360544]: pam_unix(sudo:session): session closed for user root
Jan 20 15:14:46 compute-0 sudo[360569]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:14:46 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e401 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:14:46 compute-0 sudo[360569]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:14:46 compute-0 sudo[360569]: pam_unix(sudo:session): session closed for user root
Jan 20 15:14:46 compute-0 sudo[360594]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:14:46 compute-0 sudo[360594]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:14:46 compute-0 sudo[360594]: pam_unix(sudo:session): session closed for user root
Jan 20 15:14:46 compute-0 sudo[360619]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- raw list --format json
Jan 20 15:14:46 compute-0 sudo[360619]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:14:47 compute-0 nova_compute[250018]: 2026-01-20 15:14:47.065 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:14:47 compute-0 nova_compute[250018]: 2026-01-20 15:14:47.177 250022 DEBUG nova.network.neutron [req-64f77f08-e2ba-47d8-a8ee-94c368cda85a req-894df13b-fea7-43c3-91f9-36601bc264a1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] Updated VIF entry in instance network info cache for port 0e93d1de-671e-4e37-8e79-44bed7981254. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 15:14:47 compute-0 nova_compute[250018]: 2026-01-20 15:14:47.178 250022 DEBUG nova.network.neutron [req-64f77f08-e2ba-47d8-a8ee-94c368cda85a req-894df13b-fea7-43c3-91f9-36601bc264a1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] Updating instance_info_cache with network_info: [{"id": "0e93d1de-671e-4e37-8e79-44bed7981254", "address": "fa:16:3e:99:5e:ed", "network": {"id": "c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1423306001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.179", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fff727019f86407498e83d7948d54962", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e93d1de-67", "ovs_interfaceid": "0e93d1de-671e-4e37-8e79-44bed7981254", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:14:47 compute-0 nova_compute[250018]: 2026-01-20 15:14:47.199 250022 DEBUG oslo_concurrency.lockutils [req-64f77f08-e2ba-47d8-a8ee-94c368cda85a req-894df13b-fea7-43c3-91f9-36601bc264a1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-f1ded131-d9a3-4e93-ad99-53ee2695d5c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:14:47 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:14:47 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:14:47 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:14:47.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:14:47 compute-0 podman[360686]: 2026-01-20 15:14:47.394907605 +0000 UTC m=+0.052281501 container create 5c996d7f65e8a46bfedd137f2fa2050a84b6b407de325e406360e411d2e3132f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_bouman, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 15:14:47 compute-0 systemd[1]: Started libpod-conmon-5c996d7f65e8a46bfedd137f2fa2050a84b6b407de325e406360e411d2e3132f.scope.
Jan 20 15:14:47 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:14:47 compute-0 podman[360686]: 2026-01-20 15:14:47.373350144 +0000 UTC m=+0.030724070 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:14:47 compute-0 podman[360686]: 2026-01-20 15:14:47.477824361 +0000 UTC m=+0.135198257 container init 5c996d7f65e8a46bfedd137f2fa2050a84b6b407de325e406360e411d2e3132f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_bouman, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 15:14:47 compute-0 podman[360686]: 2026-01-20 15:14:47.485055327 +0000 UTC m=+0.142429203 container start 5c996d7f65e8a46bfedd137f2fa2050a84b6b407de325e406360e411d2e3132f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_bouman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 20 15:14:47 compute-0 podman[360686]: 2026-01-20 15:14:47.488334675 +0000 UTC m=+0.145708571 container attach 5c996d7f65e8a46bfedd137f2fa2050a84b6b407de325e406360e411d2e3132f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_bouman, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 20 15:14:47 compute-0 systemd[1]: libpod-5c996d7f65e8a46bfedd137f2fa2050a84b6b407de325e406360e411d2e3132f.scope: Deactivated successfully.
Jan 20 15:14:47 compute-0 zealous_bouman[360702]: 167 167
Jan 20 15:14:47 compute-0 conmon[360702]: conmon 5c996d7f65e8a46bfedd <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5c996d7f65e8a46bfedd137f2fa2050a84b6b407de325e406360e411d2e3132f.scope/container/memory.events
Jan 20 15:14:47 compute-0 podman[360707]: 2026-01-20 15:14:47.537696387 +0000 UTC m=+0.028980902 container died 5c996d7f65e8a46bfedd137f2fa2050a84b6b407de325e406360e411d2e3132f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_bouman, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 20 15:14:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-cb36afdfd3f7bf8af95514c859639fa3b7c5e72f321b8377afee4e6381d5d217-merged.mount: Deactivated successfully.
Jan 20 15:14:47 compute-0 podman[360707]: 2026-01-20 15:14:47.575372683 +0000 UTC m=+0.066657178 container remove 5c996d7f65e8a46bfedd137f2fa2050a84b6b407de325e406360e411d2e3132f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_bouman, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 20 15:14:47 compute-0 systemd[1]: libpod-conmon-5c996d7f65e8a46bfedd137f2fa2050a84b6b407de325e406360e411d2e3132f.scope: Deactivated successfully.
Jan 20 15:14:47 compute-0 ceph-mon[74360]: pgmap v2761: 321 pgs: 321 active+clean; 568 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.5 MiB/s rd, 65 KiB/s wr, 155 op/s
Jan 20 15:14:47 compute-0 podman[360729]: 2026-01-20 15:14:47.770064775 +0000 UTC m=+0.047582315 container create 72093b55c870f7f5dda10f76ecec7f501eb5bfc9c2664adca018c4ef5f795814 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_buck, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 15:14:47 compute-0 systemd[1]: Started libpod-conmon-72093b55c870f7f5dda10f76ecec7f501eb5bfc9c2664adca018c4ef5f795814.scope.
Jan 20 15:14:47 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:14:47 compute-0 podman[360729]: 2026-01-20 15:14:47.749736707 +0000 UTC m=+0.027254267 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:14:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f72748783b2015914ef0cc640a3d6b498072b1e1e1beef6a037526c48ad4a1a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 15:14:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f72748783b2015914ef0cc640a3d6b498072b1e1e1beef6a037526c48ad4a1a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 15:14:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f72748783b2015914ef0cc640a3d6b498072b1e1e1beef6a037526c48ad4a1a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 15:14:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f72748783b2015914ef0cc640a3d6b498072b1e1e1beef6a037526c48ad4a1a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 15:14:47 compute-0 podman[360729]: 2026-01-20 15:14:47.861827171 +0000 UTC m=+0.139344721 container init 72093b55c870f7f5dda10f76ecec7f501eb5bfc9c2664adca018c4ef5f795814 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_buck, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 20 15:14:47 compute-0 podman[360729]: 2026-01-20 15:14:47.869168808 +0000 UTC m=+0.146686338 container start 72093b55c870f7f5dda10f76ecec7f501eb5bfc9c2664adca018c4ef5f795814 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_buck, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 20 15:14:47 compute-0 podman[360729]: 2026-01-20 15:14:47.872547009 +0000 UTC m=+0.150064539 container attach 72093b55c870f7f5dda10f76ecec7f501eb5bfc9c2664adca018c4ef5f795814 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_buck, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef)
Jan 20 15:14:48 compute-0 nova_compute[250018]: 2026-01-20 15:14:48.236 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:14:48 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:14:48 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:14:48 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:14:48.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:14:48 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2762: 321 pgs: 321 active+clean; 568 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.4 MiB/s rd, 54 KiB/s wr, 132 op/s
Jan 20 15:14:48 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/1149942708' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:14:48 compute-0 pedantic_buck[360746]: {
Jan 20 15:14:48 compute-0 pedantic_buck[360746]:     "afc3740a-ccee-46f8-b343-22648cc89376": {
Jan 20 15:14:48 compute-0 pedantic_buck[360746]:         "ceph_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 15:14:48 compute-0 pedantic_buck[360746]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 20 15:14:48 compute-0 pedantic_buck[360746]:         "osd_id": 0,
Jan 20 15:14:48 compute-0 pedantic_buck[360746]:         "osd_uuid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 15:14:48 compute-0 pedantic_buck[360746]:         "type": "bluestore"
Jan 20 15:14:48 compute-0 pedantic_buck[360746]:     }
Jan 20 15:14:48 compute-0 pedantic_buck[360746]: }
Jan 20 15:14:48 compute-0 systemd[1]: libpod-72093b55c870f7f5dda10f76ecec7f501eb5bfc9c2664adca018c4ef5f795814.scope: Deactivated successfully.
Jan 20 15:14:48 compute-0 podman[360768]: 2026-01-20 15:14:48.765712113 +0000 UTC m=+0.039602730 container died 72093b55c870f7f5dda10f76ecec7f501eb5bfc9c2664adca018c4ef5f795814 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_buck, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 20 15:14:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-7f72748783b2015914ef0cc640a3d6b498072b1e1e1beef6a037526c48ad4a1a-merged.mount: Deactivated successfully.
Jan 20 15:14:48 compute-0 podman[360768]: 2026-01-20 15:14:48.815871226 +0000 UTC m=+0.089761833 container remove 72093b55c870f7f5dda10f76ecec7f501eb5bfc9c2664adca018c4ef5f795814 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_buck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:14:48 compute-0 systemd[1]: libpod-conmon-72093b55c870f7f5dda10f76ecec7f501eb5bfc9c2664adca018c4ef5f795814.scope: Deactivated successfully.
Jan 20 15:14:48 compute-0 sudo[360619]: pam_unix(sudo:session): session closed for user root
Jan 20 15:14:48 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 15:14:48 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:14:48 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 15:14:48 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:14:48 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 7c431c88-2578-463e-8852-32312f17d4a5 does not exist
Jan 20 15:14:48 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 2c043969-f0fb-4c99-800e-05e439f16715 does not exist
Jan 20 15:14:48 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 9c69c484-84b0-48ac-af7a-dfea25a40b08 does not exist
Jan 20 15:14:48 compute-0 sudo[360781]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:14:48 compute-0 sudo[360781]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:14:48 compute-0 sudo[360781]: pam_unix(sudo:session): session closed for user root
Jan 20 15:14:49 compute-0 sudo[360806]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 15:14:49 compute-0 sudo[360806]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:14:49 compute-0 sudo[360806]: pam_unix(sudo:session): session closed for user root
Jan 20 15:14:49 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:14:49 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:14:49 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:14:49.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:14:49 compute-0 ceph-mon[74360]: pgmap v2762: 321 pgs: 321 active+clean; 568 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.4 MiB/s rd, 54 KiB/s wr, 132 op/s
Jan 20 15:14:49 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:14:49 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:14:50 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:14:50 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:14:50 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:14:50.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:14:50 compute-0 nova_compute[250018]: 2026-01-20 15:14:50.361 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:14:50 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2763: 321 pgs: 321 active+clean; 532 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.1 MiB/s rd, 1.3 MiB/s wr, 229 op/s
Jan 20 15:14:50 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/1825238396' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:14:51 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:14:51 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:14:51 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:14:51.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:14:51 compute-0 sshd-session[360832]: Invalid user admin from 134.122.57.138 port 39256
Jan 20 15:14:51 compute-0 sshd-session[360832]: Connection closed by invalid user admin 134.122.57.138 port 39256 [preauth]
Jan 20 15:14:51 compute-0 ceph-mon[74360]: pgmap v2763: 321 pgs: 321 active+clean; 532 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.1 MiB/s rd, 1.3 MiB/s wr, 229 op/s
Jan 20 15:14:51 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e401 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:14:52 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:14:52 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:14:52 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:14:52.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:14:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:14:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:14:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:14:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:14:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:14:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:14:52 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2764: 321 pgs: 321 active+clean; 533 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.3 MiB/s rd, 1.8 MiB/s wr, 193 op/s
Jan 20 15:14:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Optimize plan auto_2026-01-20_15:14:52
Jan 20 15:14:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 15:14:52 compute-0 ceph-mgr[74653]: [balancer INFO root] do_upmap
Jan 20 15:14:52 compute-0 ceph-mgr[74653]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.meta', 'default.rgw.control', 'vms', 'default.rgw.meta', 'default.rgw.log', 'backups', 'volumes', '.rgw.root', 'images', 'cephfs.cephfs.data']
Jan 20 15:14:52 compute-0 ceph-mgr[74653]: [balancer INFO root] prepared 0/10 changes
Jan 20 15:14:52 compute-0 ceph-mon[74360]: pgmap v2764: 321 pgs: 321 active+clean; 533 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.3 MiB/s rd, 1.8 MiB/s wr, 193 op/s
Jan 20 15:14:52 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/1450403844' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:14:53 compute-0 nova_compute[250018]: 2026-01-20 15:14:53.239 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:14:53 compute-0 ovn_controller[148666]: 2026-01-20T15:14:53Z|00093|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:99:5e:ed 10.100.0.3
Jan 20 15:14:53 compute-0 ovn_controller[148666]: 2026-01-20T15:14:53Z|00094|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:99:5e:ed 10.100.0.3
Jan 20 15:14:53 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:14:53 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:14:53 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:14:53.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:14:53 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/2924848263' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:14:54 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:14:54 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:14:54 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:14:54.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:14:54 compute-0 ovn_controller[148666]: 2026-01-20T15:14:54Z|00675|binding|INFO|Releasing lport b20b0e27-0b08-4316-b6df-6784416f44c0 from this chassis (sb_readonly=0)
Jan 20 15:14:54 compute-0 ovn_controller[148666]: 2026-01-20T15:14:54Z|00676|binding|INFO|Releasing lport 1aa285ce-a9ae-4d1e-b4b9-c72f4e0b8d65 from this chassis (sb_readonly=0)
Jan 20 15:14:54 compute-0 nova_compute[250018]: 2026-01-20 15:14:54.499 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:14:54 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2765: 321 pgs: 321 active+clean; 547 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.6 MiB/s wr, 149 op/s
Jan 20 15:14:54 compute-0 ceph-mon[74360]: pgmap v2765: 321 pgs: 321 active+clean; 547 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.6 MiB/s wr, 149 op/s
Jan 20 15:14:55 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:14:55 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:14:55 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:14:55.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:14:55 compute-0 nova_compute[250018]: 2026-01-20 15:14:55.363 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:14:56 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:14:56 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:14:56 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:14:56.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:14:56 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2766: 321 pgs: 321 active+clean; 566 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.8 MiB/s rd, 3.9 MiB/s wr, 222 op/s
Jan 20 15:14:56 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e401 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:14:57 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:14:57 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:14:57 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:14:57.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:14:57 compute-0 ceph-mon[74360]: pgmap v2766: 321 pgs: 321 active+clean; 566 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.8 MiB/s rd, 3.9 MiB/s wr, 222 op/s
Jan 20 15:14:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 15:14:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 15:14:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 15:14:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 15:14:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 15:14:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 15:14:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 15:14:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 15:14:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 15:14:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 15:14:58 compute-0 nova_compute[250018]: 2026-01-20 15:14:58.244 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:14:58 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:14:58 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:14:58 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:14:58.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:14:58 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2767: 321 pgs: 321 active+clean; 566 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.8 MiB/s rd, 3.9 MiB/s wr, 209 op/s
Jan 20 15:14:59 compute-0 nova_compute[250018]: 2026-01-20 15:14:59.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:14:59 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:14:59 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:14:59 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:14:59.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:14:59 compute-0 ceph-mon[74360]: pgmap v2767: 321 pgs: 321 active+clean; 566 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.8 MiB/s rd, 3.9 MiB/s wr, 209 op/s
Jan 20 15:15:00 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:15:00 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:15:00 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:15:00.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:15:00 compute-0 nova_compute[250018]: 2026-01-20 15:15:00.365 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:15:00 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2768: 321 pgs: 321 active+clean; 568 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.6 MiB/s rd, 4.0 MiB/s wr, 280 op/s
Jan 20 15:15:00 compute-0 ceph-mon[74360]: pgmap v2768: 321 pgs: 321 active+clean; 568 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.6 MiB/s rd, 4.0 MiB/s wr, 280 op/s
Jan 20 15:15:01 compute-0 sudo[360841]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:15:01 compute-0 sudo[360841]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:15:01 compute-0 sudo[360841]: pam_unix(sudo:session): session closed for user root
Jan 20 15:15:01 compute-0 sudo[360866]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:15:01 compute-0 sudo[360866]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:15:01 compute-0 sudo[360866]: pam_unix(sudo:session): session closed for user root
Jan 20 15:15:01 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:15:01 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:15:01 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:15:01.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:15:01 compute-0 nova_compute[250018]: 2026-01-20 15:15:01.388 250022 DEBUG nova.compute.manager [req-07f328b8-84ba-41ae-b681-c186d691a5bd req-b3ff03d5-6887-4ce2-902d-ffe82e9a9f2d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] Received event network-changed-0e93d1de-671e-4e37-8e79-44bed7981254 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:15:01 compute-0 nova_compute[250018]: 2026-01-20 15:15:01.389 250022 DEBUG nova.compute.manager [req-07f328b8-84ba-41ae-b681-c186d691a5bd req-b3ff03d5-6887-4ce2-902d-ffe82e9a9f2d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] Refreshing instance network info cache due to event network-changed-0e93d1de-671e-4e37-8e79-44bed7981254. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 15:15:01 compute-0 nova_compute[250018]: 2026-01-20 15:15:01.389 250022 DEBUG oslo_concurrency.lockutils [req-07f328b8-84ba-41ae-b681-c186d691a5bd req-b3ff03d5-6887-4ce2-902d-ffe82e9a9f2d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-f1ded131-d9a3-4e93-ad99-53ee2695d5c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:15:01 compute-0 nova_compute[250018]: 2026-01-20 15:15:01.389 250022 DEBUG oslo_concurrency.lockutils [req-07f328b8-84ba-41ae-b681-c186d691a5bd req-b3ff03d5-6887-4ce2-902d-ffe82e9a9f2d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-f1ded131-d9a3-4e93-ad99-53ee2695d5c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:15:01 compute-0 nova_compute[250018]: 2026-01-20 15:15:01.389 250022 DEBUG nova.network.neutron [req-07f328b8-84ba-41ae-b681-c186d691a5bd req-b3ff03d5-6887-4ce2-902d-ffe82e9a9f2d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] Refreshing network info cache for port 0e93d1de-671e-4e37-8e79-44bed7981254 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 15:15:01 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e401 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:15:02 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:15:02 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:15:02 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:15:02.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:15:02 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2769: 321 pgs: 321 active+clean; 580 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.4 MiB/s rd, 3.2 MiB/s wr, 207 op/s
Jan 20 15:15:03 compute-0 nova_compute[250018]: 2026-01-20 15:15:03.246 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:15:03 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:15:03 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:15:03 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:15:03.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:15:03 compute-0 podman[360892]: 2026-01-20 15:15:03.486042435 +0000 UTC m=+0.066769472 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Jan 20 15:15:03 compute-0 podman[360893]: 2026-01-20 15:15:03.506355182 +0000 UTC m=+0.087082109 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, 
io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 20 15:15:03 compute-0 ceph-mon[74360]: pgmap v2769: 321 pgs: 321 active+clean; 580 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.4 MiB/s rd, 3.2 MiB/s wr, 207 op/s
Jan 20 15:15:04 compute-0 nova_compute[250018]: 2026-01-20 15:15:04.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:15:04 compute-0 nova_compute[250018]: 2026-01-20 15:15:04.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:15:04 compute-0 nova_compute[250018]: 2026-01-20 15:15:04.051 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 15:15:04 compute-0 nova_compute[250018]: 2026-01-20 15:15:04.052 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:15:04 compute-0 nova_compute[250018]: 2026-01-20 15:15:04.159 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:15:04 compute-0 nova_compute[250018]: 2026-01-20 15:15:04.160 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:15:04 compute-0 nova_compute[250018]: 2026-01-20 15:15:04.160 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:15:04 compute-0 nova_compute[250018]: 2026-01-20 15:15:04.160 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 15:15:04 compute-0 nova_compute[250018]: 2026-01-20 15:15:04.161 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:15:04 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:15:04 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:15:04 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:15:04.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:15:04 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2770: 321 pgs: 321 active+clean; 580 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.1 MiB/s rd, 2.6 MiB/s wr, 185 op/s
Jan 20 15:15:04 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:15:04 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2590486040' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:15:04 compute-0 nova_compute[250018]: 2026-01-20 15:15:04.609 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:15:04 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/1795334721' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:15:04 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/2590486040' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:15:05 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:15:05 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:15:05 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:15:05.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:15:05 compute-0 nova_compute[250018]: 2026-01-20 15:15:05.419 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:15:05 compute-0 ceph-mon[74360]: pgmap v2770: 321 pgs: 321 active+clean; 580 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.1 MiB/s rd, 2.6 MiB/s wr, 185 op/s
Jan 20 15:15:05 compute-0 nova_compute[250018]: 2026-01-20 15:15:05.987 250022 DEBUG nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] skipping disk for instance-000000bb as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 15:15:05 compute-0 nova_compute[250018]: 2026-01-20 15:15:05.988 250022 DEBUG nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] skipping disk for instance-000000bb as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 15:15:05 compute-0 nova_compute[250018]: 2026-01-20 15:15:05.992 250022 DEBUG nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] skipping disk for instance-000000b6 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 15:15:05 compute-0 nova_compute[250018]: 2026-01-20 15:15:05.993 250022 DEBUG nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] skipping disk for instance-000000b6 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 15:15:05 compute-0 nova_compute[250018]: 2026-01-20 15:15:05.995 250022 DEBUG nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] skipping disk for instance-000000b9 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 15:15:05 compute-0 nova_compute[250018]: 2026-01-20 15:15:05.995 250022 DEBUG nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] skipping disk for instance-000000b9 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 15:15:06 compute-0 nova_compute[250018]: 2026-01-20 15:15:06.154 250022 WARNING nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:15:06 compute-0 nova_compute[250018]: 2026-01-20 15:15:06.155 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3640MB free_disk=20.830204010009766GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 15:15:06 compute-0 nova_compute[250018]: 2026-01-20 15:15:06.156 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:15:06 compute-0 nova_compute[250018]: 2026-01-20 15:15:06.156 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:15:06 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:15:06 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:15:06 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:15:06.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:15:06 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2771: 321 pgs: 321 active+clean; 584 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.1 MiB/s rd, 1.9 MiB/s wr, 181 op/s
Jan 20 15:15:06 compute-0 nova_compute[250018]: 2026-01-20 15:15:06.605 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Instance e79c0704-f95e-422f-9c25-ed35fca7cb7c actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 15:15:06 compute-0 nova_compute[250018]: 2026-01-20 15:15:06.605 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Instance 5380c3d8-edb4-4366-85ab-3dc76ecc1f43 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 15:15:06 compute-0 nova_compute[250018]: 2026-01-20 15:15:06.606 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Instance f1ded131-d9a3-4e93-ad99-53ee2695d5c8 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 15:15:06 compute-0 nova_compute[250018]: 2026-01-20 15:15:06.606 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 15:15:06 compute-0 nova_compute[250018]: 2026-01-20 15:15:06.607 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 15:15:06 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/3818984052' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:15:06 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e401 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:15:06 compute-0 nova_compute[250018]: 2026-01-20 15:15:06.852 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:15:07 compute-0 nova_compute[250018]: 2026-01-20 15:15:07.126 250022 DEBUG nova.network.neutron [req-07f328b8-84ba-41ae-b681-c186d691a5bd req-b3ff03d5-6887-4ce2-902d-ffe82e9a9f2d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] Updated VIF entry in instance network info cache for port 0e93d1de-671e-4e37-8e79-44bed7981254. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 15:15:07 compute-0 nova_compute[250018]: 2026-01-20 15:15:07.127 250022 DEBUG nova.network.neutron [req-07f328b8-84ba-41ae-b681-c186d691a5bd req-b3ff03d5-6887-4ce2-902d-ffe82e9a9f2d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] Updating instance_info_cache with network_info: [{"id": "0e93d1de-671e-4e37-8e79-44bed7981254", "address": "fa:16:3e:99:5e:ed", "network": {"id": "c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1423306001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fff727019f86407498e83d7948d54962", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e93d1de-67", "ovs_interfaceid": "0e93d1de-671e-4e37-8e79-44bed7981254", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:15:07 compute-0 nova_compute[250018]: 2026-01-20 15:15:07.193 250022 DEBUG oslo_concurrency.lockutils [req-07f328b8-84ba-41ae-b681-c186d691a5bd req-b3ff03d5-6887-4ce2-902d-ffe82e9a9f2d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-f1ded131-d9a3-4e93-ad99-53ee2695d5c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:15:07 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:15:07 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2126827095' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:15:07 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:15:07 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:15:07 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:15:07.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:15:07 compute-0 nova_compute[250018]: 2026-01-20 15:15:07.310 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:15:07 compute-0 nova_compute[250018]: 2026-01-20 15:15:07.318 250022 DEBUG nova.compute.provider_tree [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:15:07 compute-0 nova_compute[250018]: 2026-01-20 15:15:07.378 250022 DEBUG nova.scheduler.client.report [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:15:07 compute-0 nova_compute[250018]: 2026-01-20 15:15:07.450 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 15:15:07 compute-0 nova_compute[250018]: 2026-01-20 15:15:07.451 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.295s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:15:07 compute-0 ceph-mon[74360]: pgmap v2771: 321 pgs: 321 active+clean; 584 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.1 MiB/s rd, 1.9 MiB/s wr, 181 op/s
Jan 20 15:15:07 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/3239820126' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:15:07 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/2592459286' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:15:07 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/2126827095' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:15:07 compute-0 ceph-mon[74360]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 20 15:15:07 compute-0 ceph-mon[74360]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4800.0 total, 600.0 interval
                                           Cumulative writes: 14K writes, 62K keys, 14K commit groups, 1.0 writes per commit group, ingest: 0.09 GB, 0.02 MB/s
                                           Cumulative WAL: 14K writes, 14K syncs, 1.00 writes per sync, written: 0.09 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1753 writes, 7985 keys, 1753 commit groups, 1.0 writes per commit group, ingest: 11.06 MB, 0.02 MB/s
                                           Interval WAL: 1753 writes, 1753 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     49.0      1.64              0.26        42    0.039       0      0       0.0       0.0
                                             L6      1/0    9.49 MB   0.0      0.5     0.1      0.4       0.4      0.0       0.0   5.0    139.0    118.2      3.39              1.26        41    0.083    283K    22K       0.0       0.0
                                            Sum      1/0    9.49 MB   0.0      0.5     0.1      0.4       0.5      0.1       0.0   6.0     93.7     95.6      5.02              1.52        83    0.061    283K    22K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   8.9    152.5    151.9      0.59              0.26        14    0.042     64K   3683       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.5     0.1      0.4       0.4      0.0       0.0   0.0    139.0    118.2      3.39              1.26        41    0.083    283K    22K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     49.0      1.63              0.26        41    0.040       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     14.7      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 4800.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.078, interval 0.010
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.47 GB write, 0.10 MB/s write, 0.46 GB read, 0.10 MB/s read, 5.0 seconds
                                           Interval compaction: 0.09 GB write, 0.15 MB/s write, 0.09 GB read, 0.15 MB/s read, 0.6 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5576af6451f0#2 capacity: 304.00 MB usage: 52.48 MB table_size: 0 occupancy: 18446744073709551615 collections: 9 last_copies: 0 last_secs: 0.000738 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3051,50.40 MB,16.5789%) FilterBlock(84,794.73 KB,0.255299%) IndexBlock(84,1.30 MB,0.429274%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 20 15:15:08 compute-0 nova_compute[250018]: 2026-01-20 15:15:08.248 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:15:08 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:15:08 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:15:08 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:15:08.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:15:08 compute-0 nova_compute[250018]: 2026-01-20 15:15:08.445 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:15:08 compute-0 nova_compute[250018]: 2026-01-20 15:15:08.446 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:15:08 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2772: 321 pgs: 321 active+clean; 584 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 598 KiB/s wr, 102 op/s
Jan 20 15:15:08 compute-0 ceph-mon[74360]: pgmap v2772: 321 pgs: 321 active+clean; 584 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 598 KiB/s wr, 102 op/s
Jan 20 15:15:09 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:15:09 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:15:09 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:15:09.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:15:09 compute-0 nova_compute[250018]: 2026-01-20 15:15:09.475 250022 DEBUG oslo_concurrency.lockutils [None req-562ec470-8057-4ad2-96db-5f7d941e8de0 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Acquiring lock "f1ded131-d9a3-4e93-ad99-53ee2695d5c8" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:15:09 compute-0 nova_compute[250018]: 2026-01-20 15:15:09.476 250022 DEBUG oslo_concurrency.lockutils [None req-562ec470-8057-4ad2-96db-5f7d941e8de0 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Lock "f1ded131-d9a3-4e93-ad99-53ee2695d5c8" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:15:09 compute-0 nova_compute[250018]: 2026-01-20 15:15:09.511 250022 DEBUG nova.objects.instance [None req-562ec470-8057-4ad2-96db-5f7d941e8de0 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Lazy-loading 'flavor' on Instance uuid f1ded131-d9a3-4e93-ad99-53ee2695d5c8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:15:09 compute-0 nova_compute[250018]: 2026-01-20 15:15:09.550 250022 DEBUG oslo_concurrency.lockutils [None req-562ec470-8057-4ad2-96db-5f7d941e8de0 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Lock "f1ded131-d9a3-4e93-ad99-53ee2695d5c8" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.074s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:15:10 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:15:10 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:15:10 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:15:10.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:15:10 compute-0 nova_compute[250018]: 2026-01-20 15:15:10.420 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:15:10 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2773: 321 pgs: 321 active+clean; 606 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.7 MiB/s rd, 1.8 MiB/s wr, 159 op/s
Jan 20 15:15:10 compute-0 nova_compute[250018]: 2026-01-20 15:15:10.928 250022 DEBUG oslo_concurrency.lockutils [None req-562ec470-8057-4ad2-96db-5f7d941e8de0 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Acquiring lock "f1ded131-d9a3-4e93-ad99-53ee2695d5c8" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:15:10 compute-0 nova_compute[250018]: 2026-01-20 15:15:10.928 250022 DEBUG oslo_concurrency.lockutils [None req-562ec470-8057-4ad2-96db-5f7d941e8de0 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Lock "f1ded131-d9a3-4e93-ad99-53ee2695d5c8" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:15:10 compute-0 nova_compute[250018]: 2026-01-20 15:15:10.929 250022 INFO nova.compute.manager [None req-562ec470-8057-4ad2-96db-5f7d941e8de0 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] Attaching volume 933c5c7a-f496-4bcc-b304-68156c235fe5 to /dev/vdb
Jan 20 15:15:11 compute-0 nova_compute[250018]: 2026-01-20 15:15:11.074 250022 DEBUG os_brick.utils [None req-562ec470-8057-4ad2-96db-5f7d941e8de0 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Jan 20 15:15:11 compute-0 nova_compute[250018]: 2026-01-20 15:15:11.075 268150 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:15:11 compute-0 nova_compute[250018]: 2026-01-20 15:15:11.088 268150 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:15:11 compute-0 nova_compute[250018]: 2026-01-20 15:15:11.088 268150 DEBUG oslo.privsep.daemon [-] privsep: reply[a9e1775c-aa29-416c-b8ae-900ee1bbac4f]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:15:11 compute-0 nova_compute[250018]: 2026-01-20 15:15:11.089 268150 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:15:11 compute-0 nova_compute[250018]: 2026-01-20 15:15:11.099 268150 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:15:11 compute-0 nova_compute[250018]: 2026-01-20 15:15:11.099 268150 DEBUG oslo.privsep.daemon [-] privsep: reply[226f3ec4-d9f9-4cac-9787-fd9737107fae]: (4, ('InitiatorName=iqn.1994-05.com.redhat:228389a1f17e', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:15:11 compute-0 nova_compute[250018]: 2026-01-20 15:15:11.100 268150 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:15:11 compute-0 nova_compute[250018]: 2026-01-20 15:15:11.108 268150 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:15:11 compute-0 nova_compute[250018]: 2026-01-20 15:15:11.108 268150 DEBUG oslo.privsep.daemon [-] privsep: reply[3e4642c4-6d35-415c-aabe-f17c9421f15a]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:15:11 compute-0 nova_compute[250018]: 2026-01-20 15:15:11.110 268150 DEBUG oslo.privsep.daemon [-] privsep: reply[fd798e1e-a814-4b94-ae06-561c46394406]: (4, '35085f33-1a27-41e3-805d-02c7ac6a1d7f') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:15:11 compute-0 nova_compute[250018]: 2026-01-20 15:15:11.111 250022 DEBUG oslo_concurrency.processutils [None req-562ec470-8057-4ad2-96db-5f7d941e8de0 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:15:11 compute-0 nova_compute[250018]: 2026-01-20 15:15:11.147 250022 DEBUG oslo_concurrency.processutils [None req-562ec470-8057-4ad2-96db-5f7d941e8de0 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] CMD "nvme version" returned: 0 in 0.036s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:15:11 compute-0 nova_compute[250018]: 2026-01-20 15:15:11.151 250022 DEBUG os_brick.initiator.connectors.lightos [None req-562ec470-8057-4ad2-96db-5f7d941e8de0 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Jan 20 15:15:11 compute-0 nova_compute[250018]: 2026-01-20 15:15:11.152 250022 DEBUG os_brick.initiator.connectors.lightos [None req-562ec470-8057-4ad2-96db-5f7d941e8de0 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Jan 20 15:15:11 compute-0 nova_compute[250018]: 2026-01-20 15:15:11.152 250022 DEBUG os_brick.initiator.connectors.lightos [None req-562ec470-8057-4ad2-96db-5f7d941e8de0 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Jan 20 15:15:11 compute-0 nova_compute[250018]: 2026-01-20 15:15:11.153 250022 DEBUG os_brick.utils [None req-562ec470-8057-4ad2-96db-5f7d941e8de0 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] <== get_connector_properties: return (78ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:228389a1f17e', 'do_local_attach': False, 'nvme_hostid': '5350774e-8b5e-4dba-80a9-92d405981c1d', 'system uuid': '35085f33-1a27-41e3-805d-02c7ac6a1d7f', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Jan 20 15:15:11 compute-0 nova_compute[250018]: 2026-01-20 15:15:11.154 250022 DEBUG nova.virt.block_device [None req-562ec470-8057-4ad2-96db-5f7d941e8de0 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] Updating existing volume attachment record: 8dd443c2-f151-4144-b3c5-f1165910696f _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Jan 20 15:15:11 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:15:11 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:15:11 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:15:11.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:15:11 compute-0 ceph-mon[74360]: pgmap v2773: 321 pgs: 321 active+clean; 606 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.7 MiB/s rd, 1.8 MiB/s wr, 159 op/s
Jan 20 15:15:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 15:15:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:15:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 20 15:15:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:15:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00817000948027173 of space, bias 1.0, pg target 2.451002844081519 quantized to 32 (current 32)
Jan 20 15:15:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:15:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.006806311650846827 of space, bias 1.0, pg target 2.0282808719523544 quantized to 32 (current 32)
Jan 20 15:15:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:15:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.3799088032756375e-05 quantized to 32 (current 32)
Jan 20 15:15:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:15:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.563330250790992 quantized to 32 (current 32)
Jan 20 15:15:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:15:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001721570817048204 quantized to 16 (current 32)
Jan 20 15:15:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:15:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 15:15:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:15:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.0002151963521310255 quantized to 32 (current 32)
Jan 20 15:15:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:15:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018291689931137169 quantized to 32 (current 32)
Jan 20 15:15:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:15:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 15:15:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:15:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.000430392704262051 quantized to 32 (current 32)
Jan 20 15:15:11 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e401 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:15:12 compute-0 nova_compute[250018]: 2026-01-20 15:15:12.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:15:12 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:15:12 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:15:12 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:15:12.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:15:12 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2774: 321 pgs: 321 active+clean; 617 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 912 KiB/s rd, 2.7 MiB/s wr, 100 op/s
Jan 20 15:15:12 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/4149569028' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:15:13 compute-0 nova_compute[250018]: 2026-01-20 15:15:13.045 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:15:13 compute-0 nova_compute[250018]: 2026-01-20 15:15:13.251 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:15:13 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:15:13 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:15:13 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:15:13.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:15:13 compute-0 ceph-mon[74360]: pgmap v2774: 321 pgs: 321 active+clean; 617 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 912 KiB/s rd, 2.7 MiB/s wr, 100 op/s
Jan 20 15:15:13 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/1223840242' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:15:13 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/1223840242' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:15:14 compute-0 nova_compute[250018]: 2026-01-20 15:15:14.260 250022 DEBUG nova.objects.instance [None req-562ec470-8057-4ad2-96db-5f7d941e8de0 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Lazy-loading 'flavor' on Instance uuid f1ded131-d9a3-4e93-ad99-53ee2695d5c8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:15:14 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:15:14 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:15:14 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:15:14.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:15:14 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2775: 321 pgs: 321 active+clean; 617 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 428 KiB/s rd, 2.2 MiB/s wr, 76 op/s
Jan 20 15:15:14 compute-0 ceph-mon[74360]: pgmap v2775: 321 pgs: 321 active+clean; 617 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 428 KiB/s rd, 2.2 MiB/s wr, 76 op/s
Jan 20 15:15:14 compute-0 nova_compute[250018]: 2026-01-20 15:15:14.982 250022 DEBUG nova.virt.libvirt.driver [None req-562ec470-8057-4ad2-96db-5f7d941e8de0 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] Attempting to attach volume 933c5c7a-f496-4bcc-b304-68156c235fe5 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Jan 20 15:15:14 compute-0 nova_compute[250018]: 2026-01-20 15:15:14.985 250022 DEBUG nova.virt.libvirt.guest [None req-562ec470-8057-4ad2-96db-5f7d941e8de0 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] attach device xml: <disk type="network" device="disk">
Jan 20 15:15:14 compute-0 nova_compute[250018]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 20 15:15:14 compute-0 nova_compute[250018]:   <source protocol="rbd" name="volumes/volume-933c5c7a-f496-4bcc-b304-68156c235fe5">
Jan 20 15:15:14 compute-0 nova_compute[250018]:     <host name="192.168.122.100" port="6789"/>
Jan 20 15:15:14 compute-0 nova_compute[250018]:     <host name="192.168.122.102" port="6789"/>
Jan 20 15:15:14 compute-0 nova_compute[250018]:     <host name="192.168.122.101" port="6789"/>
Jan 20 15:15:14 compute-0 nova_compute[250018]:   </source>
Jan 20 15:15:14 compute-0 nova_compute[250018]:   <auth username="openstack">
Jan 20 15:15:14 compute-0 nova_compute[250018]:     <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 15:15:14 compute-0 nova_compute[250018]:   </auth>
Jan 20 15:15:14 compute-0 nova_compute[250018]:   <target dev="vdb" bus="virtio"/>
Jan 20 15:15:14 compute-0 nova_compute[250018]:   <serial>933c5c7a-f496-4bcc-b304-68156c235fe5</serial>
Jan 20 15:15:14 compute-0 nova_compute[250018]:   <shareable/>
Jan 20 15:15:14 compute-0 nova_compute[250018]: </disk>
Jan 20 15:15:14 compute-0 nova_compute[250018]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Jan 20 15:15:15 compute-0 nova_compute[250018]: 2026-01-20 15:15:15.203 250022 DEBUG nova.virt.libvirt.driver [None req-562ec470-8057-4ad2-96db-5f7d941e8de0 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:15:15 compute-0 nova_compute[250018]: 2026-01-20 15:15:15.204 250022 DEBUG nova.virt.libvirt.driver [None req-562ec470-8057-4ad2-96db-5f7d941e8de0 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:15:15 compute-0 nova_compute[250018]: 2026-01-20 15:15:15.204 250022 DEBUG nova.virt.libvirt.driver [None req-562ec470-8057-4ad2-96db-5f7d941e8de0 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:15:15 compute-0 nova_compute[250018]: 2026-01-20 15:15:15.205 250022 DEBUG nova.virt.libvirt.driver [None req-562ec470-8057-4ad2-96db-5f7d941e8de0 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] No VIF found with MAC fa:16:3e:99:5e:ed, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 15:15:15 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:15:15 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:15:15 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:15:15.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:15:15 compute-0 nova_compute[250018]: 2026-01-20 15:15:15.423 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:15:15 compute-0 nova_compute[250018]: 2026-01-20 15:15:15.577 250022 DEBUG oslo_concurrency.lockutils [None req-562ec470-8057-4ad2-96db-5f7d941e8de0 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Lock "f1ded131-d9a3-4e93-ad99-53ee2695d5c8" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 4.649s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:15:16 compute-0 nova_compute[250018]: 2026-01-20 15:15:16.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:15:16 compute-0 nova_compute[250018]: 2026-01-20 15:15:16.050 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 15:15:16 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:15:16 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:15:16 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:15:16.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:15:16 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2776: 321 pgs: 321 active+clean; 617 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 399 KiB/s rd, 2.2 MiB/s wr, 76 op/s
Jan 20 15:15:16 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e401 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:15:17 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:15:17 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:15:17 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:15:17.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:15:17 compute-0 ceph-mon[74360]: pgmap v2776: 321 pgs: 321 active+clean; 617 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 399 KiB/s rd, 2.2 MiB/s wr, 76 op/s
Jan 20 15:15:18 compute-0 nova_compute[250018]: 2026-01-20 15:15:18.253 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:15:18 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:15:18 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:15:18 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:15:18.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:15:18 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2777: 321 pgs: 321 active+clean; 617 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 397 KiB/s rd, 2.1 MiB/s wr, 70 op/s
Jan 20 15:15:18 compute-0 nova_compute[250018]: 2026-01-20 15:15:18.615 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "refresh_cache-5380c3d8-edb4-4366-85ab-3dc76ecc1f43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:15:18 compute-0 nova_compute[250018]: 2026-01-20 15:15:18.616 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquired lock "refresh_cache-5380c3d8-edb4-4366-85ab-3dc76ecc1f43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:15:18 compute-0 nova_compute[250018]: 2026-01-20 15:15:18.616 250022 DEBUG nova.network.neutron [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] [instance: 5380c3d8-edb4-4366-85ab-3dc76ecc1f43] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 20 15:15:19 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:15:19 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:15:19 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:15:19.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:15:19 compute-0 ceph-mon[74360]: pgmap v2777: 321 pgs: 321 active+clean; 617 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 397 KiB/s rd, 2.1 MiB/s wr, 70 op/s
Jan 20 15:15:20 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:15:20 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:15:20 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:15:20.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:15:20 compute-0 nova_compute[250018]: 2026-01-20 15:15:20.426 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:15:20 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2778: 321 pgs: 321 active+clean; 617 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 436 KiB/s rd, 2.2 MiB/s wr, 74 op/s
Jan 20 15:15:20 compute-0 ceph-mon[74360]: pgmap v2778: 321 pgs: 321 active+clean; 617 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 436 KiB/s rd, 2.2 MiB/s wr, 74 op/s
Jan 20 15:15:20 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/1049306304' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:15:21 compute-0 sudo[361017]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:15:21 compute-0 sudo[361017]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:15:21 compute-0 sudo[361017]: pam_unix(sudo:session): session closed for user root
Jan 20 15:15:21 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:15:21 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:15:21 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:15:21.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:15:21 compute-0 sudo[361042]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:15:21 compute-0 sudo[361042]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:15:21 compute-0 sudo[361042]: pam_unix(sudo:session): session closed for user root
Jan 20 15:15:21 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e401 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:15:22 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:15:22 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:15:22 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:15:22.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:15:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:15:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:15:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:15:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:15:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:15:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:15:22 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2779: 321 pgs: 321 active+clean; 620 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 82 KiB/s rd, 1.0 MiB/s wr, 19 op/s
Jan 20 15:15:22 compute-0 nova_compute[250018]: 2026-01-20 15:15:22.644 250022 DEBUG nova.network.neutron [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] [instance: 5380c3d8-edb4-4366-85ab-3dc76ecc1f43] Updating instance_info_cache with network_info: [{"id": "d593d88a-ba32-4023-9e92-973064a24fbe", "address": "fa:16:3e:b5:46:3f", "network": {"id": "b677f1a9-dbaa-4373-8466-bd9ccf067b91", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-408170906-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a49638950e1543fa8e0d251af5479623", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd593d88a-ba", "ovs_interfaceid": "d593d88a-ba32-4023-9e92-973064a24fbe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:15:22 compute-0 ceph-mon[74360]: pgmap v2779: 321 pgs: 321 active+clean; 620 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 82 KiB/s rd, 1.0 MiB/s wr, 19 op/s
Jan 20 15:15:23 compute-0 nova_compute[250018]: 2026-01-20 15:15:23.003 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Releasing lock "refresh_cache-5380c3d8-edb4-4366-85ab-3dc76ecc1f43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:15:23 compute-0 nova_compute[250018]: 2026-01-20 15:15:23.004 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] [instance: 5380c3d8-edb4-4366-85ab-3dc76ecc1f43] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 20 15:15:23 compute-0 nova_compute[250018]: 2026-01-20 15:15:23.256 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:15:23 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:15:23 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:15:23 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:15:23.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:15:23 compute-0 nova_compute[250018]: 2026-01-20 15:15:23.905 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:15:24 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:15:24 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:15:24 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:15:24.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:15:24 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2780: 321 pgs: 321 active+clean; 620 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 44 KiB/s rd, 68 KiB/s wr, 8 op/s
Jan 20 15:15:25 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:15:25 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:15:25 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:15:25.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:15:25 compute-0 nova_compute[250018]: 2026-01-20 15:15:25.427 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:15:25 compute-0 ceph-mon[74360]: pgmap v2780: 321 pgs: 321 active+clean; 620 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 44 KiB/s rd, 68 KiB/s wr, 8 op/s
Jan 20 15:15:26 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:15:26 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:15:26 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:15:26.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:15:26 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2781: 321 pgs: 321 active+clean; 620 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 252 KiB/s rd, 78 KiB/s wr, 17 op/s
Jan 20 15:15:26 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e401 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:15:26 compute-0 ceph-mon[74360]: pgmap v2781: 321 pgs: 321 active+clean; 620 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 252 KiB/s rd, 78 KiB/s wr, 17 op/s
Jan 20 15:15:27 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:15:27 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:15:27 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:15:27.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:15:28 compute-0 nova_compute[250018]: 2026-01-20 15:15:28.258 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:15:28 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:15:28 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:15:28 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:15:28.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:15:28 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/3564192023' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:15:28 compute-0 nova_compute[250018]: 2026-01-20 15:15:28.547 250022 DEBUG oslo_concurrency.lockutils [None req-b335d546-cb29-41df-a4fc-7fc39a94104e e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Acquiring lock "refresh_cache-f1ded131-d9a3-4e93-ad99-53ee2695d5c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:15:28 compute-0 nova_compute[250018]: 2026-01-20 15:15:28.548 250022 DEBUG oslo_concurrency.lockutils [None req-b335d546-cb29-41df-a4fc-7fc39a94104e e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Acquired lock "refresh_cache-f1ded131-d9a3-4e93-ad99-53ee2695d5c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:15:28 compute-0 nova_compute[250018]: 2026-01-20 15:15:28.548 250022 DEBUG nova.network.neutron [None req-b335d546-cb29-41df-a4fc-7fc39a94104e e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 15:15:28 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2782: 321 pgs: 321 active+clean; 620 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 252 KiB/s rd, 61 KiB/s wr, 16 op/s
Jan 20 15:15:29 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:15:29 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:15:29 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:15:29.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:15:29 compute-0 ceph-mon[74360]: pgmap v2782: 321 pgs: 321 active+clean; 620 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 252 KiB/s rd, 61 KiB/s wr, 16 op/s
Jan 20 15:15:30 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:15:30 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:15:30 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:15:30.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:15:30 compute-0 nova_compute[250018]: 2026-01-20 15:15:30.428 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:15:30 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2783: 321 pgs: 321 active+clean; 620 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 259 KiB/s rd, 61 KiB/s wr, 24 op/s
Jan 20 15:15:30 compute-0 nova_compute[250018]: 2026-01-20 15:15:30.722 250022 DEBUG nova.network.neutron [None req-b335d546-cb29-41df-a4fc-7fc39a94104e e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] Updating instance_info_cache with network_info: [{"id": "0e93d1de-671e-4e37-8e79-44bed7981254", "address": "fa:16:3e:99:5e:ed", "network": {"id": "c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1423306001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fff727019f86407498e83d7948d54962", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e93d1de-67", "ovs_interfaceid": "0e93d1de-671e-4e37-8e79-44bed7981254", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:15:30 compute-0 nova_compute[250018]: 2026-01-20 15:15:30.771 250022 DEBUG oslo_concurrency.lockutils [None req-b335d546-cb29-41df-a4fc-7fc39a94104e e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Releasing lock "refresh_cache-f1ded131-d9a3-4e93-ad99-53ee2695d5c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:15:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:15:30.784 160071 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:15:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:15:30.784 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:15:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:15:30.785 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:15:30 compute-0 nova_compute[250018]: 2026-01-20 15:15:30.954 250022 DEBUG nova.virt.libvirt.driver [None req-b335d546-cb29-41df-a4fc-7fc39a94104e e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] Starting migrate_disk_and_power_off migrate_disk_and_power_off /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11511
Jan 20 15:15:30 compute-0 nova_compute[250018]: 2026-01-20 15:15:30.955 250022 DEBUG nova.virt.libvirt.volume.remotefs [None req-b335d546-cb29-41df-a4fc-7fc39a94104e e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Creating file /var/lib/nova/instances/f1ded131-d9a3-4e93-ad99-53ee2695d5c8/d279e44eea9548899d8b875c79d8c7e8.tmp on remote host 192.168.122.101 create_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:79
Jan 20 15:15:30 compute-0 nova_compute[250018]: 2026-01-20 15:15:30.955 250022 DEBUG oslo_concurrency.processutils [None req-b335d546-cb29-41df-a4fc-7fc39a94104e e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.101 touch /var/lib/nova/instances/f1ded131-d9a3-4e93-ad99-53ee2695d5c8/d279e44eea9548899d8b875c79d8c7e8.tmp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:15:31 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:15:31 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:15:31 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:15:31.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:15:31 compute-0 nova_compute[250018]: 2026-01-20 15:15:31.394 250022 DEBUG oslo_concurrency.processutils [None req-b335d546-cb29-41df-a4fc-7fc39a94104e e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] CMD "ssh -o BatchMode=yes 192.168.122.101 touch /var/lib/nova/instances/f1ded131-d9a3-4e93-ad99-53ee2695d5c8/d279e44eea9548899d8b875c79d8c7e8.tmp" returned: 1 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:15:31 compute-0 nova_compute[250018]: 2026-01-20 15:15:31.395 250022 DEBUG oslo_concurrency.processutils [None req-b335d546-cb29-41df-a4fc-7fc39a94104e e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] 'ssh -o BatchMode=yes 192.168.122.101 touch /var/lib/nova/instances/f1ded131-d9a3-4e93-ad99-53ee2695d5c8/d279e44eea9548899d8b875c79d8c7e8.tmp' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Jan 20 15:15:31 compute-0 nova_compute[250018]: 2026-01-20 15:15:31.395 250022 DEBUG nova.virt.libvirt.volume.remotefs [None req-b335d546-cb29-41df-a4fc-7fc39a94104e e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Creating directory /var/lib/nova/instances/f1ded131-d9a3-4e93-ad99-53ee2695d5c8 on remote host 192.168.122.101 create_dir /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:91
Jan 20 15:15:31 compute-0 nova_compute[250018]: 2026-01-20 15:15:31.395 250022 DEBUG oslo_concurrency.processutils [None req-b335d546-cb29-41df-a4fc-7fc39a94104e e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.101 mkdir -p /var/lib/nova/instances/f1ded131-d9a3-4e93-ad99-53ee2695d5c8 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:15:31 compute-0 ceph-mon[74360]: pgmap v2783: 321 pgs: 321 active+clean; 620 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 259 KiB/s rd, 61 KiB/s wr, 24 op/s
Jan 20 15:15:31 compute-0 nova_compute[250018]: 2026-01-20 15:15:31.636 250022 DEBUG oslo_concurrency.processutils [None req-b335d546-cb29-41df-a4fc-7fc39a94104e e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] CMD "ssh -o BatchMode=yes 192.168.122.101 mkdir -p /var/lib/nova/instances/f1ded131-d9a3-4e93-ad99-53ee2695d5c8" returned: 0 in 0.240s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:15:31 compute-0 nova_compute[250018]: 2026-01-20 15:15:31.640 250022 DEBUG nova.virt.libvirt.driver [None req-b335d546-cb29-41df-a4fc-7fc39a94104e e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Jan 20 15:15:31 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e401 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:15:32 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:15:32 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:15:32 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:15:32.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:15:32 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2784: 321 pgs: 321 active+clean; 620 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 222 KiB/s rd, 56 KiB/s wr, 25 op/s
Jan 20 15:15:32 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/1592438209' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:15:33 compute-0 nova_compute[250018]: 2026-01-20 15:15:33.260 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:15:33 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:15:33 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:15:33 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:15:33.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:15:33 compute-0 ceph-mon[74360]: pgmap v2784: 321 pgs: 321 active+clean; 620 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 222 KiB/s rd, 56 KiB/s wr, 25 op/s
Jan 20 15:15:33 compute-0 kernel: tap0e93d1de-67 (unregistering): left promiscuous mode
Jan 20 15:15:33 compute-0 NetworkManager[48960]: <info>  [1768922133.9220] device (tap0e93d1de-67): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 15:15:33 compute-0 ovn_controller[148666]: 2026-01-20T15:15:33Z|00677|binding|INFO|Releasing lport 0e93d1de-671e-4e37-8e79-44bed7981254 from this chassis (sb_readonly=0)
Jan 20 15:15:33 compute-0 ovn_controller[148666]: 2026-01-20T15:15:33Z|00678|binding|INFO|Setting lport 0e93d1de-671e-4e37-8e79-44bed7981254 down in Southbound
Jan 20 15:15:33 compute-0 ovn_controller[148666]: 2026-01-20T15:15:33Z|00679|binding|INFO|Removing iface tap0e93d1de-67 ovn-installed in OVS
Jan 20 15:15:33 compute-0 nova_compute[250018]: 2026-01-20 15:15:33.928 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:15:33 compute-0 nova_compute[250018]: 2026-01-20 15:15:33.931 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:15:33 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:15:33.946 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:99:5e:ed 10.100.0.3'], port_security=['fa:16:3e:99:5e:ed 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'f1ded131-d9a3-4e93-ad99-53ee2695d5c8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fff727019f86407498e83d7948d54962', 'neutron:revision_number': '4', 'neutron:security_group_ids': '5ace6a2f-56c6-4679-bb81-70ccb27ab312', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=87d69a20-7690-494a-ac16-7c600840561a, chassis=[], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=0e93d1de-671e-4e37-8e79-44bed7981254) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:15:33 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:15:33.947 160071 INFO neutron.agent.ovn.metadata.agent [-] Port 0e93d1de-671e-4e37-8e79-44bed7981254 in datapath c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab unbound from our chassis
Jan 20 15:15:33 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:15:33.948 160071 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab
Jan 20 15:15:33 compute-0 nova_compute[250018]: 2026-01-20 15:15:33.948 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:15:33 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:15:33.968 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[7d77c0da-ebda-444a-a1ce-bc1141ba6448]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:15:34 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:15:34.000 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[9191c718-5eb2-4365-99f8-f4056e04bb1c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:15:34 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:15:34.003 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[03ab8bcb-183a-453d-bda9-8ede8415f477]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:15:34 compute-0 systemd[1]: machine-qemu\x2d83\x2dinstance\x2d000000bb.scope: Deactivated successfully.
Jan 20 15:15:34 compute-0 systemd[1]: machine-qemu\x2d83\x2dinstance\x2d000000bb.scope: Consumed 17.288s CPU time.
Jan 20 15:15:34 compute-0 podman[361078]: 2026-01-20 15:15:34.007869417 +0000 UTC m=+0.061883020 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Jan 20 15:15:34 compute-0 systemd-machined[216401]: Machine qemu-83-instance-000000bb terminated.
Jan 20 15:15:34 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:15:34.038 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[d573db62-e96a-43d4-b68d-1c53fff1ae8b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:15:34 compute-0 podman[361077]: 2026-01-20 15:15:34.042249744 +0000 UTC m=+0.095761534 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 20 15:15:34 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:15:34.057 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[b8e3b092-e13a-44f6-a9fd-a1cc3e8ca291]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc1f4a971-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fc:30:f0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 208], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 795048, 'reachable_time': 24995, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 361129, 'error': None, 'target': 'ovnmeta-c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:15:34 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:15:34.075 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[684c5e7a-7c84-432a-b9c4-e5b9a44074cf]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapc1f4a971-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 795057, 'tstamp': 795057}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 361131, 'error': None, 'target': 'ovnmeta-c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapc1f4a971-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 795060, 'tstamp': 795060}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 361131, 'error': None, 'target': 'ovnmeta-c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:15:34 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:15:34.077 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc1f4a971-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:15:34 compute-0 nova_compute[250018]: 2026-01-20 15:15:34.078 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:15:34 compute-0 nova_compute[250018]: 2026-01-20 15:15:34.083 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:15:34 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:15:34.084 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc1f4a971-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:15:34 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:15:34.084 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:15:34 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:15:34.085 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc1f4a971-00, col_values=(('external_ids', {'iface-id': 'b20b0e27-0b08-4316-b6df-6784416f44c0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:15:34 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:15:34.085 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:15:34 compute-0 nova_compute[250018]: 2026-01-20 15:15:34.298 250022 DEBUG nova.compute.manager [req-e4c946d2-8be3-403c-961a-3a75924fbe4e req-46ac77b8-0c6a-420a-bb6c-49d1a41ef030 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] Received event network-vif-unplugged-0e93d1de-671e-4e37-8e79-44bed7981254 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:15:34 compute-0 nova_compute[250018]: 2026-01-20 15:15:34.299 250022 DEBUG oslo_concurrency.lockutils [req-e4c946d2-8be3-403c-961a-3a75924fbe4e req-46ac77b8-0c6a-420a-bb6c-49d1a41ef030 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "f1ded131-d9a3-4e93-ad99-53ee2695d5c8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:15:34 compute-0 nova_compute[250018]: 2026-01-20 15:15:34.299 250022 DEBUG oslo_concurrency.lockutils [req-e4c946d2-8be3-403c-961a-3a75924fbe4e req-46ac77b8-0c6a-420a-bb6c-49d1a41ef030 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "f1ded131-d9a3-4e93-ad99-53ee2695d5c8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:15:34 compute-0 nova_compute[250018]: 2026-01-20 15:15:34.299 250022 DEBUG oslo_concurrency.lockutils [req-e4c946d2-8be3-403c-961a-3a75924fbe4e req-46ac77b8-0c6a-420a-bb6c-49d1a41ef030 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "f1ded131-d9a3-4e93-ad99-53ee2695d5c8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:15:34 compute-0 nova_compute[250018]: 2026-01-20 15:15:34.299 250022 DEBUG nova.compute.manager [req-e4c946d2-8be3-403c-961a-3a75924fbe4e req-46ac77b8-0c6a-420a-bb6c-49d1a41ef030 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] No waiting events found dispatching network-vif-unplugged-0e93d1de-671e-4e37-8e79-44bed7981254 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:15:34 compute-0 nova_compute[250018]: 2026-01-20 15:15:34.300 250022 WARNING nova.compute.manager [req-e4c946d2-8be3-403c-961a-3a75924fbe4e req-46ac77b8-0c6a-420a-bb6c-49d1a41ef030 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] Received unexpected event network-vif-unplugged-0e93d1de-671e-4e37-8e79-44bed7981254 for instance with vm_state active and task_state resize_migrating.
Jan 20 15:15:34 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:15:34 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:15:34 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:15:34.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:15:34 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2785: 321 pgs: 321 active+clean; 620 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 222 KiB/s rd, 13 KiB/s wr, 22 op/s
Jan 20 15:15:34 compute-0 nova_compute[250018]: 2026-01-20 15:15:34.595 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:15:34 compute-0 nova_compute[250018]: 2026-01-20 15:15:34.657 250022 INFO nova.virt.libvirt.driver [None req-b335d546-cb29-41df-a4fc-7fc39a94104e e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] Instance shutdown successfully after 3 seconds.
Jan 20 15:15:34 compute-0 nova_compute[250018]: 2026-01-20 15:15:34.661 250022 INFO nova.virt.libvirt.driver [-] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] Instance destroyed successfully.
Jan 20 15:15:34 compute-0 nova_compute[250018]: 2026-01-20 15:15:34.662 250022 DEBUG nova.virt.libvirt.vif [None req-b335d546-cb29-41df-a4fc-7fc39a94104e e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T15:14:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='multiattach-server-0',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='multiattach-server-0',id=187,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBL5L2o6o5dLcQyaIfhCZ5CKxQlecqNGmP68oHIQEsVoKIC2qfrMKjObT9GdMU8oznX9LVUwIWCShhlEJu9ZqPiutEL2afEJ1hQQamjERNcx9wWS2NfOgykA4yugQphfOtA==',key_name='tempest-keypair-1568469072',keypairs=<?>,launch_index=0,launched_at=2026-01-20T15:14:38Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='fff727019f86407498e83d7948d54962',ramdisk_id='',reservation_id='r-h927541v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-AttachVolumeMultiAttachTest-418194625',owner_user_name='tempest-AttachVolumeMultiAttachTest-418194625-project-member'},tags=<?>,task_state='resize_migrating',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T15:15:27Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='e9cc4ce3e069479ba9c789b378a68a1d',uuid=f1ded131-d9a3-4e93-ad99-53ee2695d5c8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0e93d1de-671e-4e37-8e79-44bed7981254", "address": "fa:16:3e:99:5e:ed", "network": {"id": "c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1423306001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-AttachVolumeMultiAttachTest-1423306001-network", "vif_mac": "fa:16:3e:99:5e:ed"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fff727019f86407498e83d7948d54962", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e93d1de-67", "ovs_interfaceid": "0e93d1de-671e-4e37-8e79-44bed7981254", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 15:15:34 compute-0 nova_compute[250018]: 2026-01-20 15:15:34.663 250022 DEBUG nova.network.os_vif_util [None req-b335d546-cb29-41df-a4fc-7fc39a94104e e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Converting VIF {"id": "0e93d1de-671e-4e37-8e79-44bed7981254", "address": "fa:16:3e:99:5e:ed", "network": {"id": "c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1423306001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-AttachVolumeMultiAttachTest-1423306001-network", "vif_mac": "fa:16:3e:99:5e:ed"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fff727019f86407498e83d7948d54962", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e93d1de-67", "ovs_interfaceid": "0e93d1de-671e-4e37-8e79-44bed7981254", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:15:34 compute-0 nova_compute[250018]: 2026-01-20 15:15:34.663 250022 DEBUG nova.network.os_vif_util [None req-b335d546-cb29-41df-a4fc-7fc39a94104e e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:99:5e:ed,bridge_name='br-int',has_traffic_filtering=True,id=0e93d1de-671e-4e37-8e79-44bed7981254,network=Network(c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0e93d1de-67') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:15:34 compute-0 nova_compute[250018]: 2026-01-20 15:15:34.664 250022 DEBUG os_vif [None req-b335d546-cb29-41df-a4fc-7fc39a94104e e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:99:5e:ed,bridge_name='br-int',has_traffic_filtering=True,id=0e93d1de-671e-4e37-8e79-44bed7981254,network=Network(c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0e93d1de-67') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 15:15:34 compute-0 nova_compute[250018]: 2026-01-20 15:15:34.666 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:15:34 compute-0 nova_compute[250018]: 2026-01-20 15:15:34.666 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0e93d1de-67, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:15:34 compute-0 nova_compute[250018]: 2026-01-20 15:15:34.668 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:15:34 compute-0 nova_compute[250018]: 2026-01-20 15:15:34.671 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 15:15:34 compute-0 nova_compute[250018]: 2026-01-20 15:15:34.674 250022 INFO os_vif [None req-b335d546-cb29-41df-a4fc-7fc39a94104e e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:99:5e:ed,bridge_name='br-int',has_traffic_filtering=True,id=0e93d1de-671e-4e37-8e79-44bed7981254,network=Network(c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0e93d1de-67')
Jan 20 15:15:34 compute-0 ceph-mon[74360]: pgmap v2785: 321 pgs: 321 active+clean; 620 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 222 KiB/s rd, 13 KiB/s wr, 22 op/s
Jan 20 15:15:34 compute-0 nova_compute[250018]: 2026-01-20 15:15:34.884 250022 DEBUG nova.virt.libvirt.driver [None req-b335d546-cb29-41df-a4fc-7fc39a94104e e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] skipping disk for instance-000000bb as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 15:15:34 compute-0 nova_compute[250018]: 2026-01-20 15:15:34.884 250022 DEBUG nova.virt.libvirt.driver [None req-b335d546-cb29-41df-a4fc-7fc39a94104e e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] skipping disk for instance-000000bb as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 15:15:34 compute-0 nova_compute[250018]: 2026-01-20 15:15:34.884 250022 DEBUG nova.virt.libvirt.driver [None req-b335d546-cb29-41df-a4fc-7fc39a94104e e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] skipping disk for instance-000000bb as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 15:15:35 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:15:35 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:15:35 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:15:35.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:15:35 compute-0 nova_compute[250018]: 2026-01-20 15:15:35.474 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:15:35 compute-0 nova_compute[250018]: 2026-01-20 15:15:35.532 250022 DEBUG neutronclient.v2_0.client [None req-b335d546-cb29-41df-a4fc-7fc39a94104e e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Error message: {"NeutronError": {"type": "PortBindingNotFound", "message": "Binding for port 0e93d1de-671e-4e37-8e79-44bed7981254 for host compute-1.ctlplane.example.com could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262
Jan 20 15:15:35 compute-0 nova_compute[250018]: 2026-01-20 15:15:35.626 250022 DEBUG oslo_concurrency.lockutils [None req-b335d546-cb29-41df-a4fc-7fc39a94104e e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Acquiring lock "f1ded131-d9a3-4e93-ad99-53ee2695d5c8-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:15:35 compute-0 nova_compute[250018]: 2026-01-20 15:15:35.626 250022 DEBUG oslo_concurrency.lockutils [None req-b335d546-cb29-41df-a4fc-7fc39a94104e e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Lock "f1ded131-d9a3-4e93-ad99-53ee2695d5c8-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:15:35 compute-0 nova_compute[250018]: 2026-01-20 15:15:35.627 250022 DEBUG oslo_concurrency.lockutils [None req-b335d546-cb29-41df-a4fc-7fc39a94104e e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Lock "f1ded131-d9a3-4e93-ad99-53ee2695d5c8-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:15:35 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/1108637769' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:15:35 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/1108637769' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:15:36 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:15:36 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:15:36 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:15:36.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:15:36 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2786: 321 pgs: 321 active+clean; 605 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 232 KiB/s rd, 28 KiB/s wr, 43 op/s
Jan 20 15:15:36 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e401 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:15:36 compute-0 ceph-mon[74360]: pgmap v2786: 321 pgs: 321 active+clean; 605 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 232 KiB/s rd, 28 KiB/s wr, 43 op/s
Jan 20 15:15:37 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:15:37 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:15:37 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:15:37.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:15:37 compute-0 sshd-session[361145]: Invalid user admin from 134.122.57.138 port 56706
Jan 20 15:15:37 compute-0 sshd-session[361145]: Connection closed by invalid user admin 134.122.57.138 port 56706 [preauth]
Jan 20 15:15:37 compute-0 nova_compute[250018]: 2026-01-20 15:15:37.547 250022 DEBUG nova.compute.manager [req-1000111b-15d0-45d2-ac83-d49fd1cbbc37 req-33a57693-0eff-4d7f-8520-32926828c084 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] Received event network-vif-plugged-0e93d1de-671e-4e37-8e79-44bed7981254 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:15:37 compute-0 nova_compute[250018]: 2026-01-20 15:15:37.548 250022 DEBUG oslo_concurrency.lockutils [req-1000111b-15d0-45d2-ac83-d49fd1cbbc37 req-33a57693-0eff-4d7f-8520-32926828c084 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "f1ded131-d9a3-4e93-ad99-53ee2695d5c8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:15:37 compute-0 nova_compute[250018]: 2026-01-20 15:15:37.548 250022 DEBUG oslo_concurrency.lockutils [req-1000111b-15d0-45d2-ac83-d49fd1cbbc37 req-33a57693-0eff-4d7f-8520-32926828c084 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "f1ded131-d9a3-4e93-ad99-53ee2695d5c8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:15:37 compute-0 nova_compute[250018]: 2026-01-20 15:15:37.548 250022 DEBUG oslo_concurrency.lockutils [req-1000111b-15d0-45d2-ac83-d49fd1cbbc37 req-33a57693-0eff-4d7f-8520-32926828c084 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "f1ded131-d9a3-4e93-ad99-53ee2695d5c8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:15:37 compute-0 nova_compute[250018]: 2026-01-20 15:15:37.548 250022 DEBUG nova.compute.manager [req-1000111b-15d0-45d2-ac83-d49fd1cbbc37 req-33a57693-0eff-4d7f-8520-32926828c084 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] No waiting events found dispatching network-vif-plugged-0e93d1de-671e-4e37-8e79-44bed7981254 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:15:37 compute-0 nova_compute[250018]: 2026-01-20 15:15:37.548 250022 WARNING nova.compute.manager [req-1000111b-15d0-45d2-ac83-d49fd1cbbc37 req-33a57693-0eff-4d7f-8520-32926828c084 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] Received unexpected event network-vif-plugged-0e93d1de-671e-4e37-8e79-44bed7981254 for instance with vm_state active and task_state resize_migrated.
Jan 20 15:15:37 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e401 do_prune osdmap full prune enabled
Jan 20 15:15:37 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e402 e402: 3 total, 3 up, 3 in
Jan 20 15:15:37 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e402: 3 total, 3 up, 3 in
Jan 20 15:15:38 compute-0 nova_compute[250018]: 2026-01-20 15:15:38.286 250022 DEBUG nova.compute.manager [req-87897e1d-ecd1-4510-b9de-886f9ed8c75b req-f4ce1150-6021-408f-8ead-d1dfd55983da 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] Received event network-changed-0e93d1de-671e-4e37-8e79-44bed7981254 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:15:38 compute-0 nova_compute[250018]: 2026-01-20 15:15:38.287 250022 DEBUG nova.compute.manager [req-87897e1d-ecd1-4510-b9de-886f9ed8c75b req-f4ce1150-6021-408f-8ead-d1dfd55983da 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] Refreshing instance network info cache due to event network-changed-0e93d1de-671e-4e37-8e79-44bed7981254. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 15:15:38 compute-0 nova_compute[250018]: 2026-01-20 15:15:38.287 250022 DEBUG oslo_concurrency.lockutils [req-87897e1d-ecd1-4510-b9de-886f9ed8c75b req-f4ce1150-6021-408f-8ead-d1dfd55983da 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-f1ded131-d9a3-4e93-ad99-53ee2695d5c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:15:38 compute-0 nova_compute[250018]: 2026-01-20 15:15:38.287 250022 DEBUG oslo_concurrency.lockutils [req-87897e1d-ecd1-4510-b9de-886f9ed8c75b req-f4ce1150-6021-408f-8ead-d1dfd55983da 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-f1ded131-d9a3-4e93-ad99-53ee2695d5c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:15:38 compute-0 nova_compute[250018]: 2026-01-20 15:15:38.287 250022 DEBUG nova.network.neutron [req-87897e1d-ecd1-4510-b9de-886f9ed8c75b req-f4ce1150-6021-408f-8ead-d1dfd55983da 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] Refreshing network info cache for port 0e93d1de-671e-4e37-8e79-44bed7981254 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 15:15:38 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:15:38 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:15:38 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:15:38.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:15:38 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2788: 321 pgs: 321 active+clean; 605 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 29 KiB/s rd, 21 KiB/s wr, 42 op/s
Jan 20 15:15:38 compute-0 ceph-mon[74360]: osdmap e402: 3 total, 3 up, 3 in
Jan 20 15:15:38 compute-0 ceph-mon[74360]: pgmap v2788: 321 pgs: 321 active+clean; 605 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 29 KiB/s rd, 21 KiB/s wr, 42 op/s
Jan 20 15:15:39 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:15:39 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:15:39 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:15:39.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:15:39 compute-0 nova_compute[250018]: 2026-01-20 15:15:39.668 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:15:39 compute-0 nova_compute[250018]: 2026-01-20 15:15:39.963 250022 DEBUG oslo_concurrency.lockutils [None req-a9dce372-776e-4aa4-8a25-7cab70e5d506 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Acquiring lock "5380c3d8-edb4-4366-85ab-3dc76ecc1f43" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:15:39 compute-0 nova_compute[250018]: 2026-01-20 15:15:39.964 250022 DEBUG oslo_concurrency.lockutils [None req-a9dce372-776e-4aa4-8a25-7cab70e5d506 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Lock "5380c3d8-edb4-4366-85ab-3dc76ecc1f43" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:15:39 compute-0 nova_compute[250018]: 2026-01-20 15:15:39.964 250022 DEBUG oslo_concurrency.lockutils [None req-a9dce372-776e-4aa4-8a25-7cab70e5d506 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Acquiring lock "5380c3d8-edb4-4366-85ab-3dc76ecc1f43-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:15:39 compute-0 nova_compute[250018]: 2026-01-20 15:15:39.965 250022 DEBUG oslo_concurrency.lockutils [None req-a9dce372-776e-4aa4-8a25-7cab70e5d506 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Lock "5380c3d8-edb4-4366-85ab-3dc76ecc1f43-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:15:39 compute-0 nova_compute[250018]: 2026-01-20 15:15:39.965 250022 DEBUG oslo_concurrency.lockutils [None req-a9dce372-776e-4aa4-8a25-7cab70e5d506 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Lock "5380c3d8-edb4-4366-85ab-3dc76ecc1f43-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:15:39 compute-0 nova_compute[250018]: 2026-01-20 15:15:39.966 250022 INFO nova.compute.manager [None req-a9dce372-776e-4aa4-8a25-7cab70e5d506 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 5380c3d8-edb4-4366-85ab-3dc76ecc1f43] Terminating instance
Jan 20 15:15:39 compute-0 nova_compute[250018]: 2026-01-20 15:15:39.967 250022 DEBUG nova.compute.manager [None req-a9dce372-776e-4aa4-8a25-7cab70e5d506 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 5380c3d8-edb4-4366-85ab-3dc76ecc1f43] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 15:15:40 compute-0 kernel: tapd593d88a-ba (unregistering): left promiscuous mode
Jan 20 15:15:40 compute-0 NetworkManager[48960]: <info>  [1768922140.1871] device (tapd593d88a-ba): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 15:15:40 compute-0 ovn_controller[148666]: 2026-01-20T15:15:40Z|00680|binding|INFO|Releasing lport d593d88a-ba32-4023-9e92-973064a24fbe from this chassis (sb_readonly=0)
Jan 20 15:15:40 compute-0 ovn_controller[148666]: 2026-01-20T15:15:40Z|00681|binding|INFO|Setting lport d593d88a-ba32-4023-9e92-973064a24fbe down in Southbound
Jan 20 15:15:40 compute-0 nova_compute[250018]: 2026-01-20 15:15:40.225 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:15:40 compute-0 ovn_controller[148666]: 2026-01-20T15:15:40Z|00682|binding|INFO|Removing iface tapd593d88a-ba ovn-installed in OVS
Jan 20 15:15:40 compute-0 nova_compute[250018]: 2026-01-20 15:15:40.227 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:15:40 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:15:40.240 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b5:46:3f 10.100.0.12'], port_security=['fa:16:3e:b5:46:3f 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '5380c3d8-edb4-4366-85ab-3dc76ecc1f43', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b677f1a9-dbaa-4373-8466-bd9ccf067b91', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a49638950e1543fa8e0d251af5479623', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c29da5ec-6cb2-4047-ba89-70fa67a96476', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=76ec1139-009f-49fe-bfde-07c0ef9e8b12, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=d593d88a-ba32-4023-9e92-973064a24fbe) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:15:40 compute-0 nova_compute[250018]: 2026-01-20 15:15:40.241 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:15:40 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:15:40.241 160071 INFO neutron.agent.ovn.metadata.agent [-] Port d593d88a-ba32-4023-9e92-973064a24fbe in datapath b677f1a9-dbaa-4373-8466-bd9ccf067b91 unbound from our chassis
Jan 20 15:15:40 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:15:40.242 160071 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b677f1a9-dbaa-4373-8466-bd9ccf067b91, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 15:15:40 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:15:40.243 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[9485e55f-ad10-4cdb-82f9-29a935ce9b03]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:15:40 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:15:40.243 160071 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b677f1a9-dbaa-4373-8466-bd9ccf067b91 namespace which is not needed anymore
Jan 20 15:15:40 compute-0 systemd[1]: machine-qemu\x2d81\x2dinstance\x2d000000b9.scope: Deactivated successfully.
Jan 20 15:15:40 compute-0 systemd[1]: machine-qemu\x2d81\x2dinstance\x2d000000b9.scope: Consumed 17.297s CPU time.
Jan 20 15:15:40 compute-0 systemd-machined[216401]: Machine qemu-81-instance-000000b9 terminated.
Jan 20 15:15:40 compute-0 nova_compute[250018]: 2026-01-20 15:15:40.276 250022 DEBUG nova.network.neutron [req-87897e1d-ecd1-4510-b9de-886f9ed8c75b req-f4ce1150-6021-408f-8ead-d1dfd55983da 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] Updated VIF entry in instance network info cache for port 0e93d1de-671e-4e37-8e79-44bed7981254. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 15:15:40 compute-0 nova_compute[250018]: 2026-01-20 15:15:40.276 250022 DEBUG nova.network.neutron [req-87897e1d-ecd1-4510-b9de-886f9ed8c75b req-f4ce1150-6021-408f-8ead-d1dfd55983da 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] Updating instance_info_cache with network_info: [{"id": "0e93d1de-671e-4e37-8e79-44bed7981254", "address": "fa:16:3e:99:5e:ed", "network": {"id": "c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1423306001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fff727019f86407498e83d7948d54962", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e93d1de-67", "ovs_interfaceid": "0e93d1de-671e-4e37-8e79-44bed7981254", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:15:40 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:15:40 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:15:40 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:15:40.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:15:40 compute-0 nova_compute[250018]: 2026-01-20 15:15:40.325 250022 DEBUG oslo_concurrency.lockutils [req-87897e1d-ecd1-4510-b9de-886f9ed8c75b req-f4ce1150-6021-408f-8ead-d1dfd55983da 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-f1ded131-d9a3-4e93-ad99-53ee2695d5c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:15:40 compute-0 nova_compute[250018]: 2026-01-20 15:15:40.388 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:15:40 compute-0 neutron-haproxy-ovnmeta-b677f1a9-dbaa-4373-8466-bd9ccf067b91[358797]: [NOTICE]   (358801) : haproxy version is 2.8.14-c23fe91
Jan 20 15:15:40 compute-0 neutron-haproxy-ovnmeta-b677f1a9-dbaa-4373-8466-bd9ccf067b91[358797]: [NOTICE]   (358801) : path to executable is /usr/sbin/haproxy
Jan 20 15:15:40 compute-0 neutron-haproxy-ovnmeta-b677f1a9-dbaa-4373-8466-bd9ccf067b91[358797]: [WARNING]  (358801) : Exiting Master process...
Jan 20 15:15:40 compute-0 neutron-haproxy-ovnmeta-b677f1a9-dbaa-4373-8466-bd9ccf067b91[358797]: [ALERT]    (358801) : Current worker (358803) exited with code 143 (Terminated)
Jan 20 15:15:40 compute-0 neutron-haproxy-ovnmeta-b677f1a9-dbaa-4373-8466-bd9ccf067b91[358797]: [WARNING]  (358801) : All workers exited. Exiting... (0)
Jan 20 15:15:40 compute-0 nova_compute[250018]: 2026-01-20 15:15:40.394 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:15:40 compute-0 systemd[1]: libpod-78699a6190e449634e6c052d2a8e5b4779f2df8f9c4de6f19fc48bbe8927e876.scope: Deactivated successfully.
Jan 20 15:15:40 compute-0 nova_compute[250018]: 2026-01-20 15:15:40.399 250022 INFO nova.virt.libvirt.driver [-] [instance: 5380c3d8-edb4-4366-85ab-3dc76ecc1f43] Instance destroyed successfully.
Jan 20 15:15:40 compute-0 nova_compute[250018]: 2026-01-20 15:15:40.400 250022 DEBUG nova.objects.instance [None req-a9dce372-776e-4aa4-8a25-7cab70e5d506 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Lazy-loading 'resources' on Instance uuid 5380c3d8-edb4-4366-85ab-3dc76ecc1f43 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:15:40 compute-0 podman[361174]: 2026-01-20 15:15:40.403915241 +0000 UTC m=+0.060854193 container died 78699a6190e449634e6c052d2a8e5b4779f2df8f9c4de6f19fc48bbe8927e876 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b677f1a9-dbaa-4373-8466-bd9ccf067b91, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 20 15:15:40 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-78699a6190e449634e6c052d2a8e5b4779f2df8f9c4de6f19fc48bbe8927e876-userdata-shm.mount: Deactivated successfully.
Jan 20 15:15:40 compute-0 nova_compute[250018]: 2026-01-20 15:15:40.435 250022 DEBUG nova.virt.libvirt.vif [None req-a9dce372-776e-4aa4-8a25-7cab70e5d506 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T15:13:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-1422997990',display_name='tempest-TestVolumeBootPattern-server-1422997990',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-1422997990',id=185,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBjkv3PM31l7/LOeidHCDov4vvdGwqOT15IVVbWearXBCn3jQz2xB6ix8iz1XP+iiPXyhWuw0LpMPT9jQN2b0mvhqeZTHErGcz1VZLskRcT6iqcekmFxWykFxr44bv68XA==',key_name='tempest-TestVolumeBootPattern-474773317',keypairs=<?>,launch_index=0,launched_at=2026-01-20T15:13:51Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a49638950e1543fa8e0d251af5479623',ramdisk_id='',reservation_id='r-fn8hs3j5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestVolumeBootPattern-194644003',owner_user_name='tempest-TestVolumeBootPattern-194644003-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T15:13:51Z,user_data=None,user_id='bf422e55e158420cbdae75f07a3bb97a',uuid=5380c3d8-edb4-4366-85ab-3dc76ecc1f43,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d593d88a-ba32-4023-9e92-973064a24fbe", "address": "fa:16:3e:b5:46:3f", "network": {"id": "b677f1a9-dbaa-4373-8466-bd9ccf067b91", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-408170906-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": 
[{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a49638950e1543fa8e0d251af5479623", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd593d88a-ba", "ovs_interfaceid": "d593d88a-ba32-4023-9e92-973064a24fbe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 15:15:40 compute-0 nova_compute[250018]: 2026-01-20 15:15:40.436 250022 DEBUG nova.network.os_vif_util [None req-a9dce372-776e-4aa4-8a25-7cab70e5d506 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Converting VIF {"id": "d593d88a-ba32-4023-9e92-973064a24fbe", "address": "fa:16:3e:b5:46:3f", "network": {"id": "b677f1a9-dbaa-4373-8466-bd9ccf067b91", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-408170906-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a49638950e1543fa8e0d251af5479623", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd593d88a-ba", "ovs_interfaceid": "d593d88a-ba32-4023-9e92-973064a24fbe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:15:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-9deff99ba270527f612a48f0d03c948ddc109c49f0ef75c6a048849552407896-merged.mount: Deactivated successfully.
Jan 20 15:15:40 compute-0 nova_compute[250018]: 2026-01-20 15:15:40.438 250022 DEBUG nova.network.os_vif_util [None req-a9dce372-776e-4aa4-8a25-7cab70e5d506 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b5:46:3f,bridge_name='br-int',has_traffic_filtering=True,id=d593d88a-ba32-4023-9e92-973064a24fbe,network=Network(b677f1a9-dbaa-4373-8466-bd9ccf067b91),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd593d88a-ba') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:15:40 compute-0 nova_compute[250018]: 2026-01-20 15:15:40.439 250022 DEBUG os_vif [None req-a9dce372-776e-4aa4-8a25-7cab70e5d506 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b5:46:3f,bridge_name='br-int',has_traffic_filtering=True,id=d593d88a-ba32-4023-9e92-973064a24fbe,network=Network(b677f1a9-dbaa-4373-8466-bd9ccf067b91),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd593d88a-ba') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 15:15:40 compute-0 nova_compute[250018]: 2026-01-20 15:15:40.442 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:15:40 compute-0 nova_compute[250018]: 2026-01-20 15:15:40.443 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd593d88a-ba, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:15:40 compute-0 podman[361174]: 2026-01-20 15:15:40.444999309 +0000 UTC m=+0.101938251 container cleanup 78699a6190e449634e6c052d2a8e5b4779f2df8f9c4de6f19fc48bbe8927e876 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b677f1a9-dbaa-4373-8466-bd9ccf067b91, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Jan 20 15:15:40 compute-0 nova_compute[250018]: 2026-01-20 15:15:40.446 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:15:40 compute-0 nova_compute[250018]: 2026-01-20 15:15:40.448 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:15:40 compute-0 nova_compute[250018]: 2026-01-20 15:15:40.457 250022 DEBUG nova.compute.manager [req-60ace831-787a-4e39-8d55-a0673b316dde req-187addbc-803f-4ccb-be9e-f05c247b5354 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 5380c3d8-edb4-4366-85ab-3dc76ecc1f43] Received event network-changed-d593d88a-ba32-4023-9e92-973064a24fbe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:15:40 compute-0 nova_compute[250018]: 2026-01-20 15:15:40.458 250022 DEBUG nova.compute.manager [req-60ace831-787a-4e39-8d55-a0673b316dde req-187addbc-803f-4ccb-be9e-f05c247b5354 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 5380c3d8-edb4-4366-85ab-3dc76ecc1f43] Refreshing instance network info cache due to event network-changed-d593d88a-ba32-4023-9e92-973064a24fbe. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 15:15:40 compute-0 nova_compute[250018]: 2026-01-20 15:15:40.458 250022 DEBUG oslo_concurrency.lockutils [req-60ace831-787a-4e39-8d55-a0673b316dde req-187addbc-803f-4ccb-be9e-f05c247b5354 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-5380c3d8-edb4-4366-85ab-3dc76ecc1f43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:15:40 compute-0 nova_compute[250018]: 2026-01-20 15:15:40.459 250022 DEBUG oslo_concurrency.lockutils [req-60ace831-787a-4e39-8d55-a0673b316dde req-187addbc-803f-4ccb-be9e-f05c247b5354 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-5380c3d8-edb4-4366-85ab-3dc76ecc1f43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:15:40 compute-0 nova_compute[250018]: 2026-01-20 15:15:40.459 250022 DEBUG nova.network.neutron [req-60ace831-787a-4e39-8d55-a0673b316dde req-187addbc-803f-4ccb-be9e-f05c247b5354 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 5380c3d8-edb4-4366-85ab-3dc76ecc1f43] Refreshing network info cache for port d593d88a-ba32-4023-9e92-973064a24fbe _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 15:15:40 compute-0 nova_compute[250018]: 2026-01-20 15:15:40.462 250022 INFO os_vif [None req-a9dce372-776e-4aa4-8a25-7cab70e5d506 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b5:46:3f,bridge_name='br-int',has_traffic_filtering=True,id=d593d88a-ba32-4023-9e92-973064a24fbe,network=Network(b677f1a9-dbaa-4373-8466-bd9ccf067b91),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd593d88a-ba')
Jan 20 15:15:40 compute-0 systemd[1]: libpod-conmon-78699a6190e449634e6c052d2a8e5b4779f2df8f9c4de6f19fc48bbe8927e876.scope: Deactivated successfully.
Jan 20 15:15:40 compute-0 nova_compute[250018]: 2026-01-20 15:15:40.489 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:15:40 compute-0 podman[361212]: 2026-01-20 15:15:40.531671547 +0000 UTC m=+0.059307150 container remove 78699a6190e449634e6c052d2a8e5b4779f2df8f9c4de6f19fc48bbe8927e876 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b677f1a9-dbaa-4373-8466-bd9ccf067b91, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 15:15:40 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:15:40.541 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[c97241bd-fd44-45b9-9a28-eafc5937a878]: (4, ('Tue Jan 20 03:15:40 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-b677f1a9-dbaa-4373-8466-bd9ccf067b91 (78699a6190e449634e6c052d2a8e5b4779f2df8f9c4de6f19fc48bbe8927e876)\n78699a6190e449634e6c052d2a8e5b4779f2df8f9c4de6f19fc48bbe8927e876\nTue Jan 20 03:15:40 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-b677f1a9-dbaa-4373-8466-bd9ccf067b91 (78699a6190e449634e6c052d2a8e5b4779f2df8f9c4de6f19fc48bbe8927e876)\n78699a6190e449634e6c052d2a8e5b4779f2df8f9c4de6f19fc48bbe8927e876\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:15:40 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:15:40.543 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[28b9bff5-ac5b-4c4f-87ad-f309a2cba4af]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:15:40 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:15:40.544 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb677f1a9-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:15:40 compute-0 nova_compute[250018]: 2026-01-20 15:15:40.545 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:15:40 compute-0 kernel: tapb677f1a9-d0: left promiscuous mode
Jan 20 15:15:40 compute-0 nova_compute[250018]: 2026-01-20 15:15:40.560 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:15:40 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:15:40.563 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[383e5552-db99-477b-8aa8-9e3581e3eb51]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:15:40 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2789: 321 pgs: 321 active+clean; 599 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 22 KiB/s rd, 23 KiB/s wr, 33 op/s
Jan 20 15:15:40 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:15:40.577 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[4585bffd-8cc3-48c1-9ffe-245dfab422ac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:15:40 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:15:40.578 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[fccb5de8-dc20-4635-b0e0-2b9f4abb2c5f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:15:40 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:15:40.595 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[4d57f95a-1d00-48cd-909c-6b84f9698c68]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 797149, 'reachable_time': 42166, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 361246, 'error': None, 'target': 'ovnmeta-b677f1a9-dbaa-4373-8466-bd9ccf067b91', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:15:40 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:15:40.598 160517 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b677f1a9-dbaa-4373-8466-bd9ccf067b91 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 15:15:40 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:15:40.598 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[bd8c83b9-8c7b-4e1e-8150-4a7c3de5b08f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:15:40 compute-0 systemd[1]: run-netns-ovnmeta\x2db677f1a9\x2ddbaa\x2d4373\x2d8466\x2dbd9ccf067b91.mount: Deactivated successfully.
Jan 20 15:15:40 compute-0 nova_compute[250018]: 2026-01-20 15:15:40.667 250022 INFO nova.virt.libvirt.driver [None req-a9dce372-776e-4aa4-8a25-7cab70e5d506 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 5380c3d8-edb4-4366-85ab-3dc76ecc1f43] Deleting instance files /var/lib/nova/instances/5380c3d8-edb4-4366-85ab-3dc76ecc1f43_del
Jan 20 15:15:40 compute-0 nova_compute[250018]: 2026-01-20 15:15:40.668 250022 INFO nova.virt.libvirt.driver [None req-a9dce372-776e-4aa4-8a25-7cab70e5d506 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 5380c3d8-edb4-4366-85ab-3dc76ecc1f43] Deletion of /var/lib/nova/instances/5380c3d8-edb4-4366-85ab-3dc76ecc1f43_del complete
Jan 20 15:15:40 compute-0 nova_compute[250018]: 2026-01-20 15:15:40.735 250022 INFO nova.compute.manager [None req-a9dce372-776e-4aa4-8a25-7cab70e5d506 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 5380c3d8-edb4-4366-85ab-3dc76ecc1f43] Took 0.77 seconds to destroy the instance on the hypervisor.
Jan 20 15:15:40 compute-0 nova_compute[250018]: 2026-01-20 15:15:40.736 250022 DEBUG oslo.service.loopingcall [None req-a9dce372-776e-4aa4-8a25-7cab70e5d506 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 15:15:40 compute-0 nova_compute[250018]: 2026-01-20 15:15:40.736 250022 DEBUG nova.compute.manager [-] [instance: 5380c3d8-edb4-4366-85ab-3dc76ecc1f43] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 15:15:40 compute-0 nova_compute[250018]: 2026-01-20 15:15:40.736 250022 DEBUG nova.network.neutron [-] [instance: 5380c3d8-edb4-4366-85ab-3dc76ecc1f43] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 15:15:40 compute-0 nova_compute[250018]: 2026-01-20 15:15:40.870 250022 DEBUG nova.compute.manager [req-3d27a804-7a4d-40a4-a719-85e9a9785d5e req-42780836-1c3b-4cc5-919c-8c4b9f26fc0f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 5380c3d8-edb4-4366-85ab-3dc76ecc1f43] Received event network-vif-unplugged-d593d88a-ba32-4023-9e92-973064a24fbe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:15:40 compute-0 nova_compute[250018]: 2026-01-20 15:15:40.871 250022 DEBUG oslo_concurrency.lockutils [req-3d27a804-7a4d-40a4-a719-85e9a9785d5e req-42780836-1c3b-4cc5-919c-8c4b9f26fc0f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "5380c3d8-edb4-4366-85ab-3dc76ecc1f43-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:15:40 compute-0 nova_compute[250018]: 2026-01-20 15:15:40.871 250022 DEBUG oslo_concurrency.lockutils [req-3d27a804-7a4d-40a4-a719-85e9a9785d5e req-42780836-1c3b-4cc5-919c-8c4b9f26fc0f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "5380c3d8-edb4-4366-85ab-3dc76ecc1f43-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:15:40 compute-0 nova_compute[250018]: 2026-01-20 15:15:40.871 250022 DEBUG oslo_concurrency.lockutils [req-3d27a804-7a4d-40a4-a719-85e9a9785d5e req-42780836-1c3b-4cc5-919c-8c4b9f26fc0f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "5380c3d8-edb4-4366-85ab-3dc76ecc1f43-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:15:40 compute-0 nova_compute[250018]: 2026-01-20 15:15:40.871 250022 DEBUG nova.compute.manager [req-3d27a804-7a4d-40a4-a719-85e9a9785d5e req-42780836-1c3b-4cc5-919c-8c4b9f26fc0f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 5380c3d8-edb4-4366-85ab-3dc76ecc1f43] No waiting events found dispatching network-vif-unplugged-d593d88a-ba32-4023-9e92-973064a24fbe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:15:40 compute-0 nova_compute[250018]: 2026-01-20 15:15:40.872 250022 DEBUG nova.compute.manager [req-3d27a804-7a4d-40a4-a719-85e9a9785d5e req-42780836-1c3b-4cc5-919c-8c4b9f26fc0f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 5380c3d8-edb4-4366-85ab-3dc76ecc1f43] Received event network-vif-unplugged-d593d88a-ba32-4023-9e92-973064a24fbe for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 20 15:15:41 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:15:41 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:15:41 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:15:41.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:15:41 compute-0 sudo[361248]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:15:41 compute-0 sudo[361248]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:15:41 compute-0 sudo[361248]: pam_unix(sudo:session): session closed for user root
Jan 20 15:15:41 compute-0 sudo[361273]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:15:41 compute-0 sudo[361273]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:15:41 compute-0 sudo[361273]: pam_unix(sudo:session): session closed for user root
Jan 20 15:15:41 compute-0 ceph-mon[74360]: pgmap v2789: 321 pgs: 321 active+clean; 599 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 22 KiB/s rd, 23 KiB/s wr, 33 op/s
Jan 20 15:15:41 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/1691770121' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:15:41 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e402 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:15:41 compute-0 nova_compute[250018]: 2026-01-20 15:15:41.891 250022 DEBUG nova.network.neutron [-] [instance: 5380c3d8-edb4-4366-85ab-3dc76ecc1f43] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:15:41 compute-0 nova_compute[250018]: 2026-01-20 15:15:41.923 250022 INFO nova.compute.manager [-] [instance: 5380c3d8-edb4-4366-85ab-3dc76ecc1f43] Took 1.19 seconds to deallocate network for instance.
Jan 20 15:15:41 compute-0 nova_compute[250018]: 2026-01-20 15:15:41.995 250022 DEBUG nova.compute.manager [req-2210e806-3b06-4cbe-adb8-60173f6a22c5 req-5d1a5118-38d3-49d7-bdbe-dbdf8e1e828d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 5380c3d8-edb4-4366-85ab-3dc76ecc1f43] Received event network-vif-deleted-d593d88a-ba32-4023-9e92-973064a24fbe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:15:42 compute-0 nova_compute[250018]: 2026-01-20 15:15:42.214 250022 INFO nova.compute.manager [None req-a9dce372-776e-4aa4-8a25-7cab70e5d506 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] [instance: 5380c3d8-edb4-4366-85ab-3dc76ecc1f43] Took 0.29 seconds to detach 1 volumes for instance.
Jan 20 15:15:42 compute-0 nova_compute[250018]: 2026-01-20 15:15:42.269 250022 DEBUG nova.network.neutron [req-60ace831-787a-4e39-8d55-a0673b316dde req-187addbc-803f-4ccb-be9e-f05c247b5354 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 5380c3d8-edb4-4366-85ab-3dc76ecc1f43] Updated VIF entry in instance network info cache for port d593d88a-ba32-4023-9e92-973064a24fbe. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 15:15:42 compute-0 nova_compute[250018]: 2026-01-20 15:15:42.270 250022 DEBUG nova.network.neutron [req-60ace831-787a-4e39-8d55-a0673b316dde req-187addbc-803f-4ccb-be9e-f05c247b5354 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 5380c3d8-edb4-4366-85ab-3dc76ecc1f43] Updating instance_info_cache with network_info: [{"id": "d593d88a-ba32-4023-9e92-973064a24fbe", "address": "fa:16:3e:b5:46:3f", "network": {"id": "b677f1a9-dbaa-4373-8466-bd9ccf067b91", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-408170906-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a49638950e1543fa8e0d251af5479623", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd593d88a-ba", "ovs_interfaceid": "d593d88a-ba32-4023-9e92-973064a24fbe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:15:42 compute-0 nova_compute[250018]: 2026-01-20 15:15:42.302 250022 DEBUG oslo_concurrency.lockutils [None req-a9dce372-776e-4aa4-8a25-7cab70e5d506 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:15:42 compute-0 nova_compute[250018]: 2026-01-20 15:15:42.303 250022 DEBUG oslo_concurrency.lockutils [None req-a9dce372-776e-4aa4-8a25-7cab70e5d506 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:15:42 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:15:42 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:15:42 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:15:42.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:15:42 compute-0 nova_compute[250018]: 2026-01-20 15:15:42.357 250022 DEBUG oslo_concurrency.lockutils [req-60ace831-787a-4e39-8d55-a0673b316dde req-187addbc-803f-4ccb-be9e-f05c247b5354 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-5380c3d8-edb4-4366-85ab-3dc76ecc1f43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:15:42 compute-0 nova_compute[250018]: 2026-01-20 15:15:42.494 250022 DEBUG oslo_concurrency.processutils [None req-a9dce372-776e-4aa4-8a25-7cab70e5d506 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:15:42 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2790: 321 pgs: 321 active+clean; 599 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 30 KiB/s rd, 23 KiB/s wr, 43 op/s
Jan 20 15:15:42 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e402 do_prune osdmap full prune enabled
Jan 20 15:15:42 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e403 e403: 3 total, 3 up, 3 in
Jan 20 15:15:42 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e403: 3 total, 3 up, 3 in
Jan 20 15:15:42 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/810396698' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:15:42 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:15:42 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/856547777' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:15:42 compute-0 nova_compute[250018]: 2026-01-20 15:15:42.954 250022 DEBUG oslo_concurrency.processutils [None req-a9dce372-776e-4aa4-8a25-7cab70e5d506 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:15:42 compute-0 nova_compute[250018]: 2026-01-20 15:15:42.962 250022 DEBUG nova.compute.provider_tree [None req-a9dce372-776e-4aa4-8a25-7cab70e5d506 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:15:43 compute-0 nova_compute[250018]: 2026-01-20 15:15:43.003 250022 DEBUG nova.scheduler.client.report [None req-a9dce372-776e-4aa4-8a25-7cab70e5d506 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:15:43 compute-0 nova_compute[250018]: 2026-01-20 15:15:43.026 250022 DEBUG nova.compute.manager [req-1655395d-4884-4af2-a07a-e4920f12541c req-91c2d61e-eaf6-4324-85e9-29be52c9c557 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 5380c3d8-edb4-4366-85ab-3dc76ecc1f43] Received event network-vif-plugged-d593d88a-ba32-4023-9e92-973064a24fbe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:15:43 compute-0 nova_compute[250018]: 2026-01-20 15:15:43.026 250022 DEBUG oslo_concurrency.lockutils [req-1655395d-4884-4af2-a07a-e4920f12541c req-91c2d61e-eaf6-4324-85e9-29be52c9c557 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "5380c3d8-edb4-4366-85ab-3dc76ecc1f43-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:15:43 compute-0 nova_compute[250018]: 2026-01-20 15:15:43.026 250022 DEBUG oslo_concurrency.lockutils [req-1655395d-4884-4af2-a07a-e4920f12541c req-91c2d61e-eaf6-4324-85e9-29be52c9c557 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "5380c3d8-edb4-4366-85ab-3dc76ecc1f43-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:15:43 compute-0 nova_compute[250018]: 2026-01-20 15:15:43.027 250022 DEBUG oslo_concurrency.lockutils [req-1655395d-4884-4af2-a07a-e4920f12541c req-91c2d61e-eaf6-4324-85e9-29be52c9c557 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "5380c3d8-edb4-4366-85ab-3dc76ecc1f43-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:15:43 compute-0 nova_compute[250018]: 2026-01-20 15:15:43.027 250022 DEBUG nova.compute.manager [req-1655395d-4884-4af2-a07a-e4920f12541c req-91c2d61e-eaf6-4324-85e9-29be52c9c557 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 5380c3d8-edb4-4366-85ab-3dc76ecc1f43] No waiting events found dispatching network-vif-plugged-d593d88a-ba32-4023-9e92-973064a24fbe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:15:43 compute-0 nova_compute[250018]: 2026-01-20 15:15:43.027 250022 WARNING nova.compute.manager [req-1655395d-4884-4af2-a07a-e4920f12541c req-91c2d61e-eaf6-4324-85e9-29be52c9c557 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 5380c3d8-edb4-4366-85ab-3dc76ecc1f43] Received unexpected event network-vif-plugged-d593d88a-ba32-4023-9e92-973064a24fbe for instance with vm_state deleted and task_state None.
Jan 20 15:15:43 compute-0 nova_compute[250018]: 2026-01-20 15:15:43.054 250022 DEBUG oslo_concurrency.lockutils [None req-a9dce372-776e-4aa4-8a25-7cab70e5d506 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.751s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:15:43 compute-0 nova_compute[250018]: 2026-01-20 15:15:43.097 250022 INFO nova.scheduler.client.report [None req-a9dce372-776e-4aa4-8a25-7cab70e5d506 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Deleted allocations for instance 5380c3d8-edb4-4366-85ab-3dc76ecc1f43
Jan 20 15:15:43 compute-0 nova_compute[250018]: 2026-01-20 15:15:43.253 250022 DEBUG oslo_concurrency.lockutils [None req-a9dce372-776e-4aa4-8a25-7cab70e5d506 bf422e55e158420cbdae75f07a3bb97a a49638950e1543fa8e0d251af5479623 - - default default] Lock "5380c3d8-edb4-4366-85ab-3dc76ecc1f43" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.289s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:15:43 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:15:43 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:15:43 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:15:43.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:15:43 compute-0 ceph-mon[74360]: pgmap v2790: 321 pgs: 321 active+clean; 599 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 30 KiB/s rd, 23 KiB/s wr, 43 op/s
Jan 20 15:15:43 compute-0 ceph-mon[74360]: osdmap e403: 3 total, 3 up, 3 in
Jan 20 15:15:43 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/856547777' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:15:43 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/2943588033' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:15:44 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:15:44 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:15:44 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:15:44.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:15:44 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2792: 321 pgs: 321 active+clean; 623 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 36 KiB/s rd, 1.2 MiB/s wr, 54 op/s
Jan 20 15:15:44 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/1384480633' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:15:44 compute-0 ceph-mon[74360]: pgmap v2792: 321 pgs: 321 active+clean; 623 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 36 KiB/s rd, 1.2 MiB/s wr, 54 op/s
Jan 20 15:15:45 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:15:45 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:15:45 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:15:45.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:15:45 compute-0 nova_compute[250018]: 2026-01-20 15:15:45.446 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:15:45 compute-0 nova_compute[250018]: 2026-01-20 15:15:45.477 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:15:46 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:15:46 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:15:46 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:15:46.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:15:46 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/3184790100' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:15:46 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/3184790100' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:15:46 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2793: 321 pgs: 321 active+clean; 645 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.6 MiB/s rd, 2.5 MiB/s wr, 171 op/s
Jan 20 15:15:46 compute-0 nova_compute[250018]: 2026-01-20 15:15:46.639 250022 DEBUG nova.compute.manager [req-4e636554-da77-4e96-a836-a9a962da2e40 req-cf8118bb-0a23-4f14-878f-090c53e009ce 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] Received event network-vif-plugged-0e93d1de-671e-4e37-8e79-44bed7981254 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:15:46 compute-0 nova_compute[250018]: 2026-01-20 15:15:46.640 250022 DEBUG oslo_concurrency.lockutils [req-4e636554-da77-4e96-a836-a9a962da2e40 req-cf8118bb-0a23-4f14-878f-090c53e009ce 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "f1ded131-d9a3-4e93-ad99-53ee2695d5c8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:15:46 compute-0 nova_compute[250018]: 2026-01-20 15:15:46.640 250022 DEBUG oslo_concurrency.lockutils [req-4e636554-da77-4e96-a836-a9a962da2e40 req-cf8118bb-0a23-4f14-878f-090c53e009ce 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "f1ded131-d9a3-4e93-ad99-53ee2695d5c8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:15:46 compute-0 nova_compute[250018]: 2026-01-20 15:15:46.640 250022 DEBUG oslo_concurrency.lockutils [req-4e636554-da77-4e96-a836-a9a962da2e40 req-cf8118bb-0a23-4f14-878f-090c53e009ce 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "f1ded131-d9a3-4e93-ad99-53ee2695d5c8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:15:46 compute-0 nova_compute[250018]: 2026-01-20 15:15:46.640 250022 DEBUG nova.compute.manager [req-4e636554-da77-4e96-a836-a9a962da2e40 req-cf8118bb-0a23-4f14-878f-090c53e009ce 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] No waiting events found dispatching network-vif-plugged-0e93d1de-671e-4e37-8e79-44bed7981254 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:15:46 compute-0 nova_compute[250018]: 2026-01-20 15:15:46.641 250022 WARNING nova.compute.manager [req-4e636554-da77-4e96-a836-a9a962da2e40 req-cf8118bb-0a23-4f14-878f-090c53e009ce 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] Received unexpected event network-vif-plugged-0e93d1de-671e-4e37-8e79-44bed7981254 for instance with vm_state resized and task_state None.
Jan 20 15:15:46 compute-0 nova_compute[250018]: 2026-01-20 15:15:46.641 250022 DEBUG nova.compute.manager [req-4e636554-da77-4e96-a836-a9a962da2e40 req-cf8118bb-0a23-4f14-878f-090c53e009ce 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] Received event network-vif-plugged-0e93d1de-671e-4e37-8e79-44bed7981254 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:15:46 compute-0 nova_compute[250018]: 2026-01-20 15:15:46.641 250022 DEBUG oslo_concurrency.lockutils [req-4e636554-da77-4e96-a836-a9a962da2e40 req-cf8118bb-0a23-4f14-878f-090c53e009ce 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "f1ded131-d9a3-4e93-ad99-53ee2695d5c8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:15:46 compute-0 nova_compute[250018]: 2026-01-20 15:15:46.641 250022 DEBUG oslo_concurrency.lockutils [req-4e636554-da77-4e96-a836-a9a962da2e40 req-cf8118bb-0a23-4f14-878f-090c53e009ce 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "f1ded131-d9a3-4e93-ad99-53ee2695d5c8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:15:46 compute-0 nova_compute[250018]: 2026-01-20 15:15:46.642 250022 DEBUG oslo_concurrency.lockutils [req-4e636554-da77-4e96-a836-a9a962da2e40 req-cf8118bb-0a23-4f14-878f-090c53e009ce 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "f1ded131-d9a3-4e93-ad99-53ee2695d5c8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:15:46 compute-0 nova_compute[250018]: 2026-01-20 15:15:46.642 250022 DEBUG nova.compute.manager [req-4e636554-da77-4e96-a836-a9a962da2e40 req-cf8118bb-0a23-4f14-878f-090c53e009ce 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] No waiting events found dispatching network-vif-plugged-0e93d1de-671e-4e37-8e79-44bed7981254 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:15:46 compute-0 nova_compute[250018]: 2026-01-20 15:15:46.642 250022 WARNING nova.compute.manager [req-4e636554-da77-4e96-a836-a9a962da2e40 req-cf8118bb-0a23-4f14-878f-090c53e009ce 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] Received unexpected event network-vif-plugged-0e93d1de-671e-4e37-8e79-44bed7981254 for instance with vm_state resized and task_state None.
Jan 20 15:15:46 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e403 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:15:46 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e403 do_prune osdmap full prune enabled
Jan 20 15:15:46 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e404 e404: 3 total, 3 up, 3 in
Jan 20 15:15:46 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e404: 3 total, 3 up, 3 in
Jan 20 15:15:47 compute-0 ceph-mon[74360]: pgmap v2793: 321 pgs: 321 active+clean; 645 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.6 MiB/s rd, 2.5 MiB/s wr, 171 op/s
Jan 20 15:15:47 compute-0 ceph-mon[74360]: osdmap e404: 3 total, 3 up, 3 in
Jan 20 15:15:47 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/3505090184' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:15:47 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:15:47 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:15:47 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:15:47.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:15:48 compute-0 nova_compute[250018]: 2026-01-20 15:15:48.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:15:48 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:15:48.161 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=61, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=60) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:15:48 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:15:48.162 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 15:15:48 compute-0 nova_compute[250018]: 2026-01-20 15:15:48.163 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:15:48 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:15:48 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:15:48 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:15:48.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:15:48 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/257267576' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:15:48 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2795: 321 pgs: 321 active+clean; 645 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 2.7 MiB/s wr, 181 op/s
Jan 20 15:15:48 compute-0 nova_compute[250018]: 2026-01-20 15:15:48.836 250022 DEBUG oslo_concurrency.lockutils [None req-14f9682e-83a0-4f1e-8166-084f69b8b753 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Acquiring lock "f1ded131-d9a3-4e93-ad99-53ee2695d5c8" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:15:48 compute-0 nova_compute[250018]: 2026-01-20 15:15:48.836 250022 DEBUG oslo_concurrency.lockutils [None req-14f9682e-83a0-4f1e-8166-084f69b8b753 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Lock "f1ded131-d9a3-4e93-ad99-53ee2695d5c8" acquired by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:15:48 compute-0 nova_compute[250018]: 2026-01-20 15:15:48.837 250022 DEBUG nova.compute.manager [None req-14f9682e-83a0-4f1e-8166-084f69b8b753 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] Going to confirm migration 21 do_confirm_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:4679
Jan 20 15:15:49 compute-0 nova_compute[250018]: 2026-01-20 15:15:49.185 250022 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768922134.1837225, f1ded131-d9a3-4e93-ad99-53ee2695d5c8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:15:49 compute-0 nova_compute[250018]: 2026-01-20 15:15:49.185 250022 INFO nova.compute.manager [-] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] VM Stopped (Lifecycle Event)
Jan 20 15:15:49 compute-0 nova_compute[250018]: 2026-01-20 15:15:49.205 250022 DEBUG nova.compute.manager [None req-e69229de-ebd4-4a89-9537-0f340e00e717 - - - - - -] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:15:49 compute-0 nova_compute[250018]: 2026-01-20 15:15:49.208 250022 DEBUG nova.compute.manager [None req-e69229de-ebd4-4a89-9537-0f340e00e717 - - - - - -] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: resized, current task_state: None, current DB power_state: 1, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:15:49 compute-0 nova_compute[250018]: 2026-01-20 15:15:49.239 250022 INFO nova.compute.manager [None req-e69229de-ebd4-4a89-9537-0f340e00e717 - - - - - -] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] During the sync_power process the instance has moved from host compute-1.ctlplane.example.com to host compute-0.ctlplane.example.com
Jan 20 15:15:49 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:15:49 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.002000053s ======
Jan 20 15:15:49 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:15:49.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Jan 20 15:15:49 compute-0 ceph-mon[74360]: pgmap v2795: 321 pgs: 321 active+clean; 645 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 2.7 MiB/s wr, 181 op/s
Jan 20 15:15:49 compute-0 sudo[361325]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:15:49 compute-0 sudo[361325]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:15:49 compute-0 sudo[361325]: pam_unix(sudo:session): session closed for user root
Jan 20 15:15:49 compute-0 sudo[361350]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:15:49 compute-0 sudo[361350]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:15:49 compute-0 sudo[361350]: pam_unix(sudo:session): session closed for user root
Jan 20 15:15:49 compute-0 sudo[361375]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:15:49 compute-0 sudo[361375]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:15:49 compute-0 sudo[361375]: pam_unix(sudo:session): session closed for user root
Jan 20 15:15:49 compute-0 sudo[361400]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 15:15:49 compute-0 sudo[361400]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:15:50 compute-0 sudo[361400]: pam_unix(sudo:session): session closed for user root
Jan 20 15:15:50 compute-0 nova_compute[250018]: 2026-01-20 15:15:50.092 250022 DEBUG neutronclient.v2_0.client [None req-14f9682e-83a0-4f1e-8166-084f69b8b753 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Error message: {"NeutronError": {"type": "PortBindingNotFound", "message": "Binding for port 0e93d1de-671e-4e37-8e79-44bed7981254 for host compute-0.ctlplane.example.com could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262
Jan 20 15:15:50 compute-0 nova_compute[250018]: 2026-01-20 15:15:50.093 250022 DEBUG oslo_concurrency.lockutils [None req-14f9682e-83a0-4f1e-8166-084f69b8b753 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Acquiring lock "refresh_cache-f1ded131-d9a3-4e93-ad99-53ee2695d5c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:15:50 compute-0 nova_compute[250018]: 2026-01-20 15:15:50.093 250022 DEBUG oslo_concurrency.lockutils [None req-14f9682e-83a0-4f1e-8166-084f69b8b753 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Acquired lock "refresh_cache-f1ded131-d9a3-4e93-ad99-53ee2695d5c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:15:50 compute-0 nova_compute[250018]: 2026-01-20 15:15:50.094 250022 DEBUG nova.network.neutron [None req-14f9682e-83a0-4f1e-8166-084f69b8b753 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 15:15:50 compute-0 nova_compute[250018]: 2026-01-20 15:15:50.094 250022 DEBUG nova.objects.instance [None req-14f9682e-83a0-4f1e-8166-084f69b8b753 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Lazy-loading 'info_cache' on Instance uuid f1ded131-d9a3-4e93-ad99-53ee2695d5c8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:15:50 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:15:50.163 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=367c1a2c-b16a-4828-ab5a-626bb50023b4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '61'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:15:50 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 15:15:50 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:15:50 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 20 15:15:50 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 15:15:50 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 20 15:15:50 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:15:50 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 170d303b-8fc0-4928-9bdb-80cf1f096dce does not exist
Jan 20 15:15:50 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 6c613213-e3cd-43d5-ab0d-536d1800f4fb does not exist
Jan 20 15:15:50 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev d1665ba3-a70f-4de5-a3ba-97ecf1948c45 does not exist
Jan 20 15:15:50 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 20 15:15:50 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 15:15:50 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 20 15:15:50 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 15:15:50 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 15:15:50 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:15:50 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:15:50 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:15:50 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:15:50.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:15:50 compute-0 sudo[361456]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:15:50 compute-0 sudo[361456]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:15:50 compute-0 sudo[361456]: pam_unix(sudo:session): session closed for user root
Jan 20 15:15:50 compute-0 sudo[361481]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:15:50 compute-0 sudo[361481]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:15:50 compute-0 sudo[361481]: pam_unix(sudo:session): session closed for user root
Jan 20 15:15:50 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:15:50 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 15:15:50 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:15:50 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 15:15:50 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 15:15:50 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:15:50 compute-0 sudo[361506]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:15:50 compute-0 sudo[361506]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:15:50 compute-0 sudo[361506]: pam_unix(sudo:session): session closed for user root
Jan 20 15:15:50 compute-0 nova_compute[250018]: 2026-01-20 15:15:50.447 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:15:50 compute-0 nova_compute[250018]: 2026-01-20 15:15:50.478 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:15:50 compute-0 sudo[361531]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 15:15:50 compute-0 sudo[361531]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:15:50 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2796: 321 pgs: 321 active+clean; 596 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.0 MiB/s rd, 2.7 MiB/s wr, 219 op/s
Jan 20 15:15:50 compute-0 podman[361595]: 2026-01-20 15:15:50.836895704 +0000 UTC m=+0.044615894 container create 5f28ded9c6aabd66764f7496e95bfd19314ed084fcdc14ecbecacf7c68d24a5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_swirles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:15:50 compute-0 systemd[1]: Started libpod-conmon-5f28ded9c6aabd66764f7496e95bfd19314ed084fcdc14ecbecacf7c68d24a5e.scope.
Jan 20 15:15:50 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:15:50 compute-0 podman[361595]: 2026-01-20 15:15:50.816452024 +0000 UTC m=+0.024172264 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:15:50 compute-0 podman[361595]: 2026-01-20 15:15:50.925281679 +0000 UTC m=+0.133001889 container init 5f28ded9c6aabd66764f7496e95bfd19314ed084fcdc14ecbecacf7c68d24a5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_swirles, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 15:15:50 compute-0 podman[361595]: 2026-01-20 15:15:50.933415258 +0000 UTC m=+0.141135448 container start 5f28ded9c6aabd66764f7496e95bfd19314ed084fcdc14ecbecacf7c68d24a5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_swirles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 20 15:15:50 compute-0 podman[361595]: 2026-01-20 15:15:50.936936074 +0000 UTC m=+0.144656294 container attach 5f28ded9c6aabd66764f7496e95bfd19314ed084fcdc14ecbecacf7c68d24a5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_swirles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 20 15:15:50 compute-0 beautiful_swirles[361611]: 167 167
Jan 20 15:15:50 compute-0 systemd[1]: libpod-5f28ded9c6aabd66764f7496e95bfd19314ed084fcdc14ecbecacf7c68d24a5e.scope: Deactivated successfully.
Jan 20 15:15:50 compute-0 podman[361595]: 2026-01-20 15:15:50.94237462 +0000 UTC m=+0.150094830 container died 5f28ded9c6aabd66764f7496e95bfd19314ed084fcdc14ecbecacf7c68d24a5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_swirles, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 20 15:15:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-bf7e7ba9948f30eed3a1c24284d50f50416db5e4282435f254a945d0123e46f9-merged.mount: Deactivated successfully.
Jan 20 15:15:50 compute-0 podman[361595]: 2026-01-20 15:15:50.978536685 +0000 UTC m=+0.186256875 container remove 5f28ded9c6aabd66764f7496e95bfd19314ed084fcdc14ecbecacf7c68d24a5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_swirles, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 15:15:50 compute-0 systemd[1]: libpod-conmon-5f28ded9c6aabd66764f7496e95bfd19314ed084fcdc14ecbecacf7c68d24a5e.scope: Deactivated successfully.
Jan 20 15:15:51 compute-0 podman[361635]: 2026-01-20 15:15:51.19112281 +0000 UTC m=+0.042546508 container create d81c9ba78ddfe07a404f08fb655ec742e3618dff2b7b418574dd22ee3b8a7ed6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_wilbur, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2)
Jan 20 15:15:51 compute-0 systemd[1]: Started libpod-conmon-d81c9ba78ddfe07a404f08fb655ec742e3618dff2b7b418574dd22ee3b8a7ed6.scope.
Jan 20 15:15:51 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:15:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b427c3784fb3f7bdbfc5a57d18770fede6824e6ecdecbc8ec2dd5faa825e335a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 15:15:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b427c3784fb3f7bdbfc5a57d18770fede6824e6ecdecbc8ec2dd5faa825e335a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 15:15:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b427c3784fb3f7bdbfc5a57d18770fede6824e6ecdecbc8ec2dd5faa825e335a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 15:15:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b427c3784fb3f7bdbfc5a57d18770fede6824e6ecdecbc8ec2dd5faa825e335a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 15:15:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b427c3784fb3f7bdbfc5a57d18770fede6824e6ecdecbc8ec2dd5faa825e335a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 15:15:51 compute-0 podman[361635]: 2026-01-20 15:15:51.17258389 +0000 UTC m=+0.024007608 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:15:51 compute-0 podman[361635]: 2026-01-20 15:15:51.311063986 +0000 UTC m=+0.162487684 container init d81c9ba78ddfe07a404f08fb655ec742e3618dff2b7b418574dd22ee3b8a7ed6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_wilbur, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 20 15:15:51 compute-0 podman[361635]: 2026-01-20 15:15:51.317459698 +0000 UTC m=+0.168883396 container start d81c9ba78ddfe07a404f08fb655ec742e3618dff2b7b418574dd22ee3b8a7ed6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_wilbur, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 15:15:51 compute-0 podman[361635]: 2026-01-20 15:15:51.330818678 +0000 UTC m=+0.182242376 container attach d81c9ba78ddfe07a404f08fb655ec742e3618dff2b7b418574dd22ee3b8a7ed6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_wilbur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 20 15:15:51 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:15:51 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:15:51 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:15:51.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:15:51 compute-0 ceph-mon[74360]: pgmap v2796: 321 pgs: 321 active+clean; 596 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.0 MiB/s rd, 2.7 MiB/s wr, 219 op/s
Jan 20 15:15:51 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:15:52 compute-0 gifted_wilbur[361651]: --> passed data devices: 0 physical, 1 LVM
Jan 20 15:15:52 compute-0 gifted_wilbur[361651]: --> relative data size: 1.0
Jan 20 15:15:52 compute-0 gifted_wilbur[361651]: --> All data devices are unavailable
Jan 20 15:15:52 compute-0 systemd[1]: libpod-d81c9ba78ddfe07a404f08fb655ec742e3618dff2b7b418574dd22ee3b8a7ed6.scope: Deactivated successfully.
Jan 20 15:15:52 compute-0 podman[361635]: 2026-01-20 15:15:52.143646005 +0000 UTC m=+0.995069693 container died d81c9ba78ddfe07a404f08fb655ec742e3618dff2b7b418574dd22ee3b8a7ed6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_wilbur, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 15:15:52 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:15:52 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:15:52 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:15:52.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:15:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-b427c3784fb3f7bdbfc5a57d18770fede6824e6ecdecbc8ec2dd5faa825e335a-merged.mount: Deactivated successfully.
Jan 20 15:15:52 compute-0 podman[361635]: 2026-01-20 15:15:52.359761515 +0000 UTC m=+1.211185213 container remove d81c9ba78ddfe07a404f08fb655ec742e3618dff2b7b418574dd22ee3b8a7ed6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_wilbur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 15:15:52 compute-0 sudo[361531]: pam_unix(sudo:session): session closed for user root
Jan 20 15:15:52 compute-0 systemd[1]: libpod-conmon-d81c9ba78ddfe07a404f08fb655ec742e3618dff2b7b418574dd22ee3b8a7ed6.scope: Deactivated successfully.
Jan 20 15:15:52 compute-0 sudo[361679]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:15:52 compute-0 sudo[361679]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:15:52 compute-0 sudo[361679]: pam_unix(sudo:session): session closed for user root
Jan 20 15:15:52 compute-0 sudo[361704]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:15:52 compute-0 sudo[361704]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:15:52 compute-0 sudo[361704]: pam_unix(sudo:session): session closed for user root
Jan 20 15:15:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:15:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:15:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:15:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:15:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:15:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:15:52 compute-0 sudo[361729]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:15:52 compute-0 sudo[361729]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:15:52 compute-0 sudo[361729]: pam_unix(sudo:session): session closed for user root
Jan 20 15:15:52 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2797: 321 pgs: 321 active+clean; 564 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.2 MiB/s wr, 187 op/s
Jan 20 15:15:52 compute-0 sudo[361754]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- lvm list --format json
Jan 20 15:15:52 compute-0 sudo[361754]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:15:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Optimize plan auto_2026-01-20_15:15:52
Jan 20 15:15:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 15:15:52 compute-0 ceph-mgr[74653]: [balancer INFO root] do_upmap
Jan 20 15:15:52 compute-0 ceph-mgr[74653]: [balancer INFO root] pools ['cephfs.cephfs.data', 'images', 'vms', 'default.rgw.meta', 'default.rgw.log', 'cephfs.cephfs.meta', 'default.rgw.control', '.rgw.root', '.mgr', 'backups', 'volumes']
Jan 20 15:15:52 compute-0 ceph-mgr[74653]: [balancer INFO root] prepared 0/10 changes
Jan 20 15:15:52 compute-0 podman[361817]: 2026-01-20 15:15:52.938857846 +0000 UTC m=+0.044372048 container create f6a77a3b42beabae2ebd61fe350f63302d0d33fea0482e19b48059f1b4964810 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_hodgkin, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 15:15:52 compute-0 systemd[1]: Started libpod-conmon-f6a77a3b42beabae2ebd61fe350f63302d0d33fea0482e19b48059f1b4964810.scope.
Jan 20 15:15:53 compute-0 podman[361817]: 2026-01-20 15:15:52.920707987 +0000 UTC m=+0.026222209 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:15:53 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:15:53 compute-0 podman[361817]: 2026-01-20 15:15:53.044010983 +0000 UTC m=+0.149525175 container init f6a77a3b42beabae2ebd61fe350f63302d0d33fea0482e19b48059f1b4964810 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_hodgkin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 20 15:15:53 compute-0 podman[361817]: 2026-01-20 15:15:53.053875599 +0000 UTC m=+0.159389801 container start f6a77a3b42beabae2ebd61fe350f63302d0d33fea0482e19b48059f1b4964810 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_hodgkin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 15:15:53 compute-0 podman[361817]: 2026-01-20 15:15:53.056750326 +0000 UTC m=+0.162264528 container attach f6a77a3b42beabae2ebd61fe350f63302d0d33fea0482e19b48059f1b4964810 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_hodgkin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 15:15:53 compute-0 admiring_hodgkin[361834]: 167 167
Jan 20 15:15:53 compute-0 systemd[1]: libpod-f6a77a3b42beabae2ebd61fe350f63302d0d33fea0482e19b48059f1b4964810.scope: Deactivated successfully.
Jan 20 15:15:53 compute-0 podman[361817]: 2026-01-20 15:15:53.060846417 +0000 UTC m=+0.166360619 container died f6a77a3b42beabae2ebd61fe350f63302d0d33fea0482e19b48059f1b4964810 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_hodgkin, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 20 15:15:53 compute-0 nova_compute[250018]: 2026-01-20 15:15:53.076 250022 DEBUG nova.network.neutron [None req-14f9682e-83a0-4f1e-8166-084f69b8b753 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] [instance: f1ded131-d9a3-4e93-ad99-53ee2695d5c8] Updating instance_info_cache with network_info: [{"id": "0e93d1de-671e-4e37-8e79-44bed7981254", "address": "fa:16:3e:99:5e:ed", "network": {"id": "c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1423306001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fff727019f86407498e83d7948d54962", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e93d1de-67", "ovs_interfaceid": "0e93d1de-671e-4e37-8e79-44bed7981254", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:15:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-0f62545f5a18bab44e08ec42cdbe075bfa5f675dc70a9a8d0fb83f2fc6e01db2-merged.mount: Deactivated successfully.
Jan 20 15:15:53 compute-0 podman[361817]: 2026-01-20 15:15:53.096418386 +0000 UTC m=+0.201932568 container remove f6a77a3b42beabae2ebd61fe350f63302d0d33fea0482e19b48059f1b4964810 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_hodgkin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 15:15:53 compute-0 systemd[1]: libpod-conmon-f6a77a3b42beabae2ebd61fe350f63302d0d33fea0482e19b48059f1b4964810.scope: Deactivated successfully.
Jan 20 15:15:53 compute-0 nova_compute[250018]: 2026-01-20 15:15:53.108 250022 DEBUG oslo_concurrency.lockutils [None req-14f9682e-83a0-4f1e-8166-084f69b8b753 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Releasing lock "refresh_cache-f1ded131-d9a3-4e93-ad99-53ee2695d5c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:15:53 compute-0 nova_compute[250018]: 2026-01-20 15:15:53.109 250022 DEBUG nova.objects.instance [None req-14f9682e-83a0-4f1e-8166-084f69b8b753 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Lazy-loading 'migration_context' on Instance uuid f1ded131-d9a3-4e93-ad99-53ee2695d5c8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:15:53 compute-0 nova_compute[250018]: 2026-01-20 15:15:53.214 250022 DEBUG nova.storage.rbd_utils [None req-14f9682e-83a0-4f1e-8166-084f69b8b753 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] removing snapshot(nova-resize) on rbd image(f1ded131-d9a3-4e93-ad99-53ee2695d5c8_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Jan 20 15:15:53 compute-0 podman[361894]: 2026-01-20 15:15:53.264552001 +0000 UTC m=+0.045910359 container create 89bc997b7b1f83c6ffdb80018c4e35a9012c138b1521622a811435798e22c688 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_cannon, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 20 15:15:53 compute-0 systemd[1]: Started libpod-conmon-89bc997b7b1f83c6ffdb80018c4e35a9012c138b1521622a811435798e22c688.scope.
Jan 20 15:15:53 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:15:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ea62352db71312be05762afaf9cd58d2f8d91723932c10d1d7d92172e8f51f1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 15:15:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ea62352db71312be05762afaf9cd58d2f8d91723932c10d1d7d92172e8f51f1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 15:15:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ea62352db71312be05762afaf9cd58d2f8d91723932c10d1d7d92172e8f51f1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 15:15:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ea62352db71312be05762afaf9cd58d2f8d91723932c10d1d7d92172e8f51f1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 15:15:53 compute-0 podman[361894]: 2026-01-20 15:15:53.334544079 +0000 UTC m=+0.115902467 container init 89bc997b7b1f83c6ffdb80018c4e35a9012c138b1521622a811435798e22c688 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_cannon, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True)
Jan 20 15:15:53 compute-0 podman[361894]: 2026-01-20 15:15:53.341571969 +0000 UTC m=+0.122930327 container start 89bc997b7b1f83c6ffdb80018c4e35a9012c138b1521622a811435798e22c688 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_cannon, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 20 15:15:53 compute-0 podman[361894]: 2026-01-20 15:15:53.24632489 +0000 UTC m=+0.027683268 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:15:53 compute-0 podman[361894]: 2026-01-20 15:15:53.345144386 +0000 UTC m=+0.126502774 container attach 89bc997b7b1f83c6ffdb80018c4e35a9012c138b1521622a811435798e22c688 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_cannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 15:15:53 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:15:53 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:15:53 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:15:53.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:15:53 compute-0 ovn_controller[148666]: 2026-01-20T15:15:53Z|00683|binding|INFO|Releasing lport b20b0e27-0b08-4316-b6df-6784416f44c0 from this chassis (sb_readonly=0)
Jan 20 15:15:53 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e404 do_prune osdmap full prune enabled
Jan 20 15:15:53 compute-0 ceph-mon[74360]: pgmap v2797: 321 pgs: 321 active+clean; 564 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.2 MiB/s wr, 187 op/s
Jan 20 15:15:53 compute-0 nova_compute[250018]: 2026-01-20 15:15:53.652 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:15:53 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e405 e405: 3 total, 3 up, 3 in
Jan 20 15:15:53 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e405: 3 total, 3 up, 3 in
Jan 20 15:15:53 compute-0 nova_compute[250018]: 2026-01-20 15:15:53.705 250022 DEBUG nova.virt.libvirt.vif [None req-14f9682e-83a0-4f1e-8166-084f69b8b753 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T15:14:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='multiattach-server-0',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-1.ctlplane.example.com',hostname='multiattach-server-0',id=187,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBL5L2o6o5dLcQyaIfhCZ5CKxQlecqNGmP68oHIQEsVoKIC2qfrMKjObT9GdMU8oznX9LVUwIWCShhlEJu9ZqPiutEL2afEJ1hQQamjERNcx9wWS2NfOgykA4yugQphfOtA==',key_name='tempest-keypair-1568469072',keypairs=<?>,launch_index=0,launched_at=2026-01-20T15:15:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=Flavor(1),os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='fff727019f86407498e83d7948d54962',ramdisk_id='',reservation_id='r-h927541v',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-AttachVolumeMultiAttachTest-418194625',owner_user_name='tempest-AttachVolumeMultiAttachTest-418194625-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T15:15:45Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='e9cc4ce3e069479ba9c789b378a68a1d',uuid=f1ded131-d9a3-4e93-ad99-53ee2695d5c8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='resized') vif={"id": "0e93d1de-671e-4e37-8e79-44bed7981254", "address": "fa:16:3e:99:5e:ed", "network": {"id": "c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1423306001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fff727019f86407498e83d7948d54962", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e93d1de-67", "ovs_interfaceid": "0e93d1de-671e-4e37-8e79-44bed7981254", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 15:15:53 compute-0 nova_compute[250018]: 2026-01-20 15:15:53.706 250022 DEBUG nova.network.os_vif_util [None req-14f9682e-83a0-4f1e-8166-084f69b8b753 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Converting VIF {"id": "0e93d1de-671e-4e37-8e79-44bed7981254", "address": "fa:16:3e:99:5e:ed", "network": {"id": "c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1423306001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fff727019f86407498e83d7948d54962", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e93d1de-67", "ovs_interfaceid": "0e93d1de-671e-4e37-8e79-44bed7981254", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:15:53 compute-0 nova_compute[250018]: 2026-01-20 15:15:53.706 250022 DEBUG nova.network.os_vif_util [None req-14f9682e-83a0-4f1e-8166-084f69b8b753 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:99:5e:ed,bridge_name='br-int',has_traffic_filtering=True,id=0e93d1de-671e-4e37-8e79-44bed7981254,network=Network(c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0e93d1de-67') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:15:53 compute-0 nova_compute[250018]: 2026-01-20 15:15:53.707 250022 DEBUG os_vif [None req-14f9682e-83a0-4f1e-8166-084f69b8b753 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:99:5e:ed,bridge_name='br-int',has_traffic_filtering=True,id=0e93d1de-671e-4e37-8e79-44bed7981254,network=Network(c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0e93d1de-67') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 15:15:53 compute-0 nova_compute[250018]: 2026-01-20 15:15:53.709 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:15:53 compute-0 nova_compute[250018]: 2026-01-20 15:15:53.709 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0e93d1de-67, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:15:53 compute-0 nova_compute[250018]: 2026-01-20 15:15:53.709 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:15:53 compute-0 nova_compute[250018]: 2026-01-20 15:15:53.711 250022 INFO os_vif [None req-14f9682e-83a0-4f1e-8166-084f69b8b753 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:99:5e:ed,bridge_name='br-int',has_traffic_filtering=True,id=0e93d1de-671e-4e37-8e79-44bed7981254,network=Network(c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0e93d1de-67')
Jan 20 15:15:53 compute-0 nova_compute[250018]: 2026-01-20 15:15:53.712 250022 DEBUG oslo_concurrency.lockutils [None req-14f9682e-83a0-4f1e-8166-084f69b8b753 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:15:53 compute-0 nova_compute[250018]: 2026-01-20 15:15:53.712 250022 DEBUG oslo_concurrency.lockutils [None req-14f9682e-83a0-4f1e-8166-084f69b8b753 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:15:53 compute-0 nova_compute[250018]: 2026-01-20 15:15:53.996 250022 DEBUG oslo_concurrency.processutils [None req-14f9682e-83a0-4f1e-8166-084f69b8b753 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:15:54 compute-0 sleepy_cannon[361911]: {
Jan 20 15:15:54 compute-0 sleepy_cannon[361911]:     "0": [
Jan 20 15:15:54 compute-0 sleepy_cannon[361911]:         {
Jan 20 15:15:54 compute-0 sleepy_cannon[361911]:             "devices": [
Jan 20 15:15:54 compute-0 sleepy_cannon[361911]:                 "/dev/loop3"
Jan 20 15:15:54 compute-0 sleepy_cannon[361911]:             ],
Jan 20 15:15:54 compute-0 sleepy_cannon[361911]:             "lv_name": "ceph_lv0",
Jan 20 15:15:54 compute-0 sleepy_cannon[361911]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 15:15:54 compute-0 sleepy_cannon[361911]:             "lv_size": "7511998464",
Jan 20 15:15:54 compute-0 sleepy_cannon[361911]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e399cf45-e6b6-5393-99f1-75c601d3f188,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afc3740a-ccee-46f8-b343-22648cc89376,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 20 15:15:54 compute-0 sleepy_cannon[361911]:             "lv_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 15:15:54 compute-0 sleepy_cannon[361911]:             "name": "ceph_lv0",
Jan 20 15:15:54 compute-0 sleepy_cannon[361911]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 15:15:54 compute-0 sleepy_cannon[361911]:             "tags": {
Jan 20 15:15:54 compute-0 sleepy_cannon[361911]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 15:15:54 compute-0 sleepy_cannon[361911]:                 "ceph.block_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 15:15:54 compute-0 sleepy_cannon[361911]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 15:15:54 compute-0 sleepy_cannon[361911]:                 "ceph.cluster_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 15:15:54 compute-0 sleepy_cannon[361911]:                 "ceph.cluster_name": "ceph",
Jan 20 15:15:54 compute-0 sleepy_cannon[361911]:                 "ceph.crush_device_class": "",
Jan 20 15:15:54 compute-0 sleepy_cannon[361911]:                 "ceph.encrypted": "0",
Jan 20 15:15:54 compute-0 sleepy_cannon[361911]:                 "ceph.osd_fsid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 15:15:54 compute-0 sleepy_cannon[361911]:                 "ceph.osd_id": "0",
Jan 20 15:15:54 compute-0 sleepy_cannon[361911]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 15:15:54 compute-0 sleepy_cannon[361911]:                 "ceph.type": "block",
Jan 20 15:15:54 compute-0 sleepy_cannon[361911]:                 "ceph.vdo": "0"
Jan 20 15:15:54 compute-0 sleepy_cannon[361911]:             },
Jan 20 15:15:54 compute-0 sleepy_cannon[361911]:             "type": "block",
Jan 20 15:15:54 compute-0 sleepy_cannon[361911]:             "vg_name": "ceph_vg0"
Jan 20 15:15:54 compute-0 sleepy_cannon[361911]:         }
Jan 20 15:15:54 compute-0 sleepy_cannon[361911]:     ]
Jan 20 15:15:54 compute-0 sleepy_cannon[361911]: }
Jan 20 15:15:54 compute-0 systemd[1]: libpod-89bc997b7b1f83c6ffdb80018c4e35a9012c138b1521622a811435798e22c688.scope: Deactivated successfully.
Jan 20 15:15:54 compute-0 podman[361894]: 2026-01-20 15:15:54.108338483 +0000 UTC m=+0.889696881 container died 89bc997b7b1f83c6ffdb80018c4e35a9012c138b1521622a811435798e22c688 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_cannon, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 20 15:15:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-9ea62352db71312be05762afaf9cd58d2f8d91723932c10d1d7d92172e8f51f1-merged.mount: Deactivated successfully.
Jan 20 15:15:54 compute-0 podman[361894]: 2026-01-20 15:15:54.171719652 +0000 UTC m=+0.953078010 container remove 89bc997b7b1f83c6ffdb80018c4e35a9012c138b1521622a811435798e22c688 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_cannon, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 20 15:15:54 compute-0 systemd[1]: libpod-conmon-89bc997b7b1f83c6ffdb80018c4e35a9012c138b1521622a811435798e22c688.scope: Deactivated successfully.
Jan 20 15:15:54 compute-0 sudo[361754]: pam_unix(sudo:session): session closed for user root
Jan 20 15:15:54 compute-0 sudo[361955]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:15:54 compute-0 sudo[361955]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:15:54 compute-0 sudo[361955]: pam_unix(sudo:session): session closed for user root
Jan 20 15:15:54 compute-0 sudo[361980]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:15:54 compute-0 sudo[361980]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:15:54 compute-0 sudo[361980]: pam_unix(sudo:session): session closed for user root
Jan 20 15:15:54 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:15:54 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:15:54 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:15:54.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:15:54 compute-0 sudo[362005]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:15:54 compute-0 sudo[362005]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:15:54 compute-0 sudo[362005]: pam_unix(sudo:session): session closed for user root
Jan 20 15:15:54 compute-0 sudo[362030]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- raw list --format json
Jan 20 15:15:54 compute-0 sudo[362030]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:15:54 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:15:54 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2040095600' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:15:54 compute-0 nova_compute[250018]: 2026-01-20 15:15:54.464 250022 DEBUG oslo_concurrency.processutils [None req-14f9682e-83a0-4f1e-8166-084f69b8b753 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:15:54 compute-0 nova_compute[250018]: 2026-01-20 15:15:54.475 250022 DEBUG nova.compute.provider_tree [None req-14f9682e-83a0-4f1e-8166-084f69b8b753 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:15:54 compute-0 nova_compute[250018]: 2026-01-20 15:15:54.508 250022 DEBUG nova.scheduler.client.report [None req-14f9682e-83a0-4f1e-8166-084f69b8b753 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:15:54 compute-0 nova_compute[250018]: 2026-01-20 15:15:54.559 250022 DEBUG oslo_concurrency.lockutils [None req-14f9682e-83a0-4f1e-8166-084f69b8b753 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: held 0.847s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:15:54 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2799: 321 pgs: 2 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 317 active+clean; 564 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.6 MiB/s rd, 20 KiB/s wr, 116 op/s
Jan 20 15:15:54 compute-0 ceph-mon[74360]: osdmap e405: 3 total, 3 up, 3 in
Jan 20 15:15:54 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/2040095600' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:15:54 compute-0 nova_compute[250018]: 2026-01-20 15:15:54.698 250022 INFO nova.scheduler.client.report [None req-14f9682e-83a0-4f1e-8166-084f69b8b753 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Deleted allocation for migration 57f22c5f-c3c6-4f11-afbc-5b3fc1752f60
Jan 20 15:15:54 compute-0 podman[362098]: 2026-01-20 15:15:54.748286396 +0000 UTC m=+0.040991337 container create ed26dd7eb21ced7a056355d11ce8cf416b13a1c7f242fa637940864611c0a4e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_spence, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 15:15:54 compute-0 nova_compute[250018]: 2026-01-20 15:15:54.775 250022 DEBUG oslo_concurrency.lockutils [None req-14f9682e-83a0-4f1e-8166-084f69b8b753 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Lock "f1ded131-d9a3-4e93-ad99-53ee2695d5c8" "released" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: held 5.939s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:15:54 compute-0 systemd[1]: Started libpod-conmon-ed26dd7eb21ced7a056355d11ce8cf416b13a1c7f242fa637940864611c0a4e9.scope.
Jan 20 15:15:54 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:15:54 compute-0 podman[362098]: 2026-01-20 15:15:54.730456334 +0000 UTC m=+0.023161305 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:15:54 compute-0 podman[362098]: 2026-01-20 15:15:54.82852048 +0000 UTC m=+0.121225441 container init ed26dd7eb21ced7a056355d11ce8cf416b13a1c7f242fa637940864611c0a4e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_spence, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 20 15:15:54 compute-0 podman[362098]: 2026-01-20 15:15:54.834872461 +0000 UTC m=+0.127577392 container start ed26dd7eb21ced7a056355d11ce8cf416b13a1c7f242fa637940864611c0a4e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_spence, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 15:15:54 compute-0 podman[362098]: 2026-01-20 15:15:54.837766509 +0000 UTC m=+0.130471450 container attach ed26dd7eb21ced7a056355d11ce8cf416b13a1c7f242fa637940864611c0a4e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_spence, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 15:15:54 compute-0 festive_spence[362115]: 167 167
Jan 20 15:15:54 compute-0 systemd[1]: libpod-ed26dd7eb21ced7a056355d11ce8cf416b13a1c7f242fa637940864611c0a4e9.scope: Deactivated successfully.
Jan 20 15:15:54 compute-0 conmon[362115]: conmon ed26dd7eb21ced7a0563 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ed26dd7eb21ced7a056355d11ce8cf416b13a1c7f242fa637940864611c0a4e9.scope/container/memory.events
Jan 20 15:15:54 compute-0 podman[362098]: 2026-01-20 15:15:54.841836589 +0000 UTC m=+0.134541530 container died ed26dd7eb21ced7a056355d11ce8cf416b13a1c7f242fa637940864611c0a4e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_spence, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 20 15:15:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-fac6f8bf90102d0895778bdc3b14fae2fc07e365f2d59f8b50b4ff3e7720d7ef-merged.mount: Deactivated successfully.
Jan 20 15:15:54 compute-0 podman[362098]: 2026-01-20 15:15:54.87636185 +0000 UTC m=+0.169066801 container remove ed26dd7eb21ced7a056355d11ce8cf416b13a1c7f242fa637940864611c0a4e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_spence, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 20 15:15:54 compute-0 systemd[1]: libpod-conmon-ed26dd7eb21ced7a056355d11ce8cf416b13a1c7f242fa637940864611c0a4e9.scope: Deactivated successfully.
Jan 20 15:15:55 compute-0 podman[362138]: 2026-01-20 15:15:55.042726078 +0000 UTC m=+0.045333224 container create cbd7bd75d7123fe5cd0b4eeaa4c4ec9ef3f5a10b867d43c47036d7ec39f4bca1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_lichterman, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 20 15:15:55 compute-0 systemd[1]: Started libpod-conmon-cbd7bd75d7123fe5cd0b4eeaa4c4ec9ef3f5a10b867d43c47036d7ec39f4bca1.scope.
Jan 20 15:15:55 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:15:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9cfa4f1184e3df54306fe70baa558ff0710398b595d0f6897634603cfd3f4b3b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 15:15:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9cfa4f1184e3df54306fe70baa558ff0710398b595d0f6897634603cfd3f4b3b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 15:15:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9cfa4f1184e3df54306fe70baa558ff0710398b595d0f6897634603cfd3f4b3b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 15:15:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9cfa4f1184e3df54306fe70baa558ff0710398b595d0f6897634603cfd3f4b3b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 15:15:55 compute-0 podman[362138]: 2026-01-20 15:15:55.114148674 +0000 UTC m=+0.116755850 container init cbd7bd75d7123fe5cd0b4eeaa4c4ec9ef3f5a10b867d43c47036d7ec39f4bca1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_lichterman, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 20 15:15:55 compute-0 podman[362138]: 2026-01-20 15:15:55.025469333 +0000 UTC m=+0.028076529 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:15:55 compute-0 podman[362138]: 2026-01-20 15:15:55.120122735 +0000 UTC m=+0.122729881 container start cbd7bd75d7123fe5cd0b4eeaa4c4ec9ef3f5a10b867d43c47036d7ec39f4bca1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_lichterman, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 15:15:55 compute-0 podman[362138]: 2026-01-20 15:15:55.123398814 +0000 UTC m=+0.126005960 container attach cbd7bd75d7123fe5cd0b4eeaa4c4ec9ef3f5a10b867d43c47036d7ec39f4bca1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_lichterman, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 15:15:55 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:15:55 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:15:55 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:15:55.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:15:55 compute-0 nova_compute[250018]: 2026-01-20 15:15:55.399 250022 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768922140.3981242, 5380c3d8-edb4-4366-85ab-3dc76ecc1f43 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:15:55 compute-0 nova_compute[250018]: 2026-01-20 15:15:55.399 250022 INFO nova.compute.manager [-] [instance: 5380c3d8-edb4-4366-85ab-3dc76ecc1f43] VM Stopped (Lifecycle Event)
Jan 20 15:15:55 compute-0 nova_compute[250018]: 2026-01-20 15:15:55.488 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:15:55 compute-0 nova_compute[250018]: 2026-01-20 15:15:55.491 250022 DEBUG nova.compute.manager [None req-0b7c587c-532e-4301-8a37-175e08d4182d - - - - - -] [instance: 5380c3d8-edb4-4366-85ab-3dc76ecc1f43] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:15:55 compute-0 ceph-mon[74360]: pgmap v2799: 321 pgs: 2 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 317 active+clean; 564 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.6 MiB/s rd, 20 KiB/s wr, 116 op/s
Jan 20 15:15:55 compute-0 blissful_lichterman[362154]: {
Jan 20 15:15:55 compute-0 blissful_lichterman[362154]:     "afc3740a-ccee-46f8-b343-22648cc89376": {
Jan 20 15:15:55 compute-0 blissful_lichterman[362154]:         "ceph_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 15:15:55 compute-0 blissful_lichterman[362154]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 20 15:15:55 compute-0 blissful_lichterman[362154]:         "osd_id": 0,
Jan 20 15:15:55 compute-0 blissful_lichterman[362154]:         "osd_uuid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 15:15:55 compute-0 blissful_lichterman[362154]:         "type": "bluestore"
Jan 20 15:15:55 compute-0 blissful_lichterman[362154]:     }
Jan 20 15:15:55 compute-0 blissful_lichterman[362154]: }
Jan 20 15:15:55 compute-0 systemd[1]: libpod-cbd7bd75d7123fe5cd0b4eeaa4c4ec9ef3f5a10b867d43c47036d7ec39f4bca1.scope: Deactivated successfully.
Jan 20 15:15:56 compute-0 podman[362176]: 2026-01-20 15:15:56.020782911 +0000 UTC m=+0.025304023 container died cbd7bd75d7123fe5cd0b4eeaa4c4ec9ef3f5a10b867d43c47036d7ec39f4bca1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_lichterman, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 15:15:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-9cfa4f1184e3df54306fe70baa558ff0710398b595d0f6897634603cfd3f4b3b-merged.mount: Deactivated successfully.
Jan 20 15:15:56 compute-0 podman[362176]: 2026-01-20 15:15:56.066781452 +0000 UTC m=+0.071302534 container remove cbd7bd75d7123fe5cd0b4eeaa4c4ec9ef3f5a10b867d43c47036d7ec39f4bca1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_lichterman, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 15:15:56 compute-0 systemd[1]: libpod-conmon-cbd7bd75d7123fe5cd0b4eeaa4c4ec9ef3f5a10b867d43c47036d7ec39f4bca1.scope: Deactivated successfully.
Jan 20 15:15:56 compute-0 sudo[362030]: pam_unix(sudo:session): session closed for user root
Jan 20 15:15:56 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 15:15:56 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:15:56 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 15:15:56 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:15:56 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 59bf15a2-404b-4389-b31a-f44c437b3131 does not exist
Jan 20 15:15:56 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev afee79d2-0aa0-4149-a9ac-07cf75a9df2b does not exist
Jan 20 15:15:56 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev c637053f-1e14-4d3d-a69c-db09d15de9f2 does not exist
Jan 20 15:15:56 compute-0 sudo[362191]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:15:56 compute-0 sudo[362191]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:15:56 compute-0 sudo[362191]: pam_unix(sudo:session): session closed for user root
Jan 20 15:15:56 compute-0 sudo[362217]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 15:15:56 compute-0 sudo[362217]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:15:56 compute-0 sudo[362217]: pam_unix(sudo:session): session closed for user root
Jan 20 15:15:56 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:15:56 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:15:56 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:15:56.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:15:56 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2800: 321 pgs: 2 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 317 active+clean; 564 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.4 MiB/s rd, 27 KiB/s wr, 148 op/s
Jan 20 15:15:56 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e405 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:15:57 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:15:57 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:15:57 compute-0 ceph-mon[74360]: pgmap v2800: 321 pgs: 2 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 317 active+clean; 564 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.4 MiB/s rd, 27 KiB/s wr, 148 op/s
Jan 20 15:15:57 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:15:57 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:15:57 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:15:57.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:15:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 15:15:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 15:15:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 15:15:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 15:15:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 15:15:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 15:15:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 15:15:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 15:15:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 15:15:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 15:15:58 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:15:58 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:15:58 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:15:58.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:15:58 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2801: 321 pgs: 2 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 317 active+clean; 564 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.3 MiB/s rd, 26 KiB/s wr, 144 op/s
Jan 20 15:15:59 compute-0 nova_compute[250018]: 2026-01-20 15:15:59.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:15:59 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:15:59 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:15:59 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:15:59.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:15:59 compute-0 ceph-mon[74360]: pgmap v2801: 321 pgs: 2 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 317 active+clean; 564 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.3 MiB/s rd, 26 KiB/s wr, 144 op/s
Jan 20 15:16:00 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:16:00 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:16:00 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:16:00.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:16:00 compute-0 nova_compute[250018]: 2026-01-20 15:16:00.490 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 15:16:00 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2802: 321 pgs: 321 active+clean; 564 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.7 MiB/s rd, 14 KiB/s wr, 133 op/s
Jan 20 15:16:00 compute-0 ceph-mon[74360]: pgmap v2802: 321 pgs: 321 active+clean; 564 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.7 MiB/s rd, 14 KiB/s wr, 133 op/s
Jan 20 15:16:01 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:16:01 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:16:01 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:16:01.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:16:01 compute-0 sudo[362244]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:16:01 compute-0 sudo[362244]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:16:01 compute-0 sudo[362244]: pam_unix(sudo:session): session closed for user root
Jan 20 15:16:01 compute-0 sudo[362269]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:16:01 compute-0 sudo[362269]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:16:01 compute-0 sudo[362269]: pam_unix(sudo:session): session closed for user root
Jan 20 15:16:01 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e405 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:16:01 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e405 do_prune osdmap full prune enabled
Jan 20 15:16:01 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e406 e406: 3 total, 3 up, 3 in
Jan 20 15:16:01 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e406: 3 total, 3 up, 3 in
Jan 20 15:16:01 compute-0 ovn_controller[148666]: 2026-01-20T15:16:01Z|00684|binding|INFO|Releasing lport b20b0e27-0b08-4316-b6df-6784416f44c0 from this chassis (sb_readonly=0)
Jan 20 15:16:02 compute-0 nova_compute[250018]: 2026-01-20 15:16:02.071 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:16:02 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:16:02 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:16:02 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:16:02.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:16:02 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2804: 321 pgs: 321 active+clean; 564 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.3 MiB/s rd, 28 KiB/s wr, 161 op/s
Jan 20 15:16:02 compute-0 ceph-osd[84815]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #52. Immutable memtables: 8.
Jan 20 15:16:02 compute-0 ceph-mon[74360]: osdmap e406: 3 total, 3 up, 3 in
Jan 20 15:16:02 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/1509921417' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:16:02 compute-0 ceph-mon[74360]: pgmap v2804: 321 pgs: 321 active+clean; 564 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.3 MiB/s rd, 28 KiB/s wr, 161 op/s
Jan 20 15:16:03 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:16:03 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:16:03 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:16:03.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:16:03 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/2411011844' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:16:03 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/3815396852' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:16:04 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:16:04 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:16:04 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:16:04.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:16:04 compute-0 podman[362297]: 2026-01-20 15:16:04.46224645 +0000 UTC m=+0.053287458 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0)
Jan 20 15:16:04 compute-0 podman[362296]: 2026-01-20 15:16:04.486241447 +0000 UTC m=+0.077133751 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_id=ovn_controller, org.label-schema.schema-version=1.0, 
tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 15:16:04 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2805: 321 pgs: 321 active+clean; 569 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 727 KiB/s wr, 135 op/s
Jan 20 15:16:04 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/4156676619' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:16:04 compute-0 ceph-mon[74360]: pgmap v2805: 321 pgs: 321 active+clean; 569 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 727 KiB/s wr, 135 op/s
Jan 20 15:16:05 compute-0 nova_compute[250018]: 2026-01-20 15:16:05.045 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:16:05 compute-0 nova_compute[250018]: 2026-01-20 15:16:05.049 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:16:05 compute-0 nova_compute[250018]: 2026-01-20 15:16:05.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:16:05 compute-0 nova_compute[250018]: 2026-01-20 15:16:05.050 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 15:16:05 compute-0 nova_compute[250018]: 2026-01-20 15:16:05.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:16:05 compute-0 nova_compute[250018]: 2026-01-20 15:16:05.084 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:16:05 compute-0 nova_compute[250018]: 2026-01-20 15:16:05.085 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:16:05 compute-0 nova_compute[250018]: 2026-01-20 15:16:05.085 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:16:05 compute-0 nova_compute[250018]: 2026-01-20 15:16:05.086 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 15:16:05 compute-0 nova_compute[250018]: 2026-01-20 15:16:05.086 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:16:05 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:16:05 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:16:05 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:16:05.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:16:05 compute-0 nova_compute[250018]: 2026-01-20 15:16:05.492 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:16:05 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:16:05 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3931726910' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:16:05 compute-0 nova_compute[250018]: 2026-01-20 15:16:05.525 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:16:05 compute-0 nova_compute[250018]: 2026-01-20 15:16:05.624 250022 DEBUG nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] skipping disk for instance-000000b6 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 15:16:05 compute-0 nova_compute[250018]: 2026-01-20 15:16:05.625 250022 DEBUG nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] skipping disk for instance-000000b6 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 15:16:05 compute-0 nova_compute[250018]: 2026-01-20 15:16:05.784 250022 WARNING nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:16:05 compute-0 nova_compute[250018]: 2026-01-20 15:16:05.785 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3964MB free_disk=20.77831268310547GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 15:16:05 compute-0 nova_compute[250018]: 2026-01-20 15:16:05.785 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:16:05 compute-0 nova_compute[250018]: 2026-01-20 15:16:05.786 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:16:05 compute-0 nova_compute[250018]: 2026-01-20 15:16:05.873 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Instance e79c0704-f95e-422f-9c25-ed35fca7cb7c actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 15:16:05 compute-0 nova_compute[250018]: 2026-01-20 15:16:05.873 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 15:16:05 compute-0 nova_compute[250018]: 2026-01-20 15:16:05.873 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 15:16:06 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3931726910' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:16:06 compute-0 nova_compute[250018]: 2026-01-20 15:16:06.074 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:16:06 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:16:06 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:16:06 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:16:06.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:16:06 compute-0 nova_compute[250018]: 2026-01-20 15:16:06.508 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:16:06 compute-0 nova_compute[250018]: 2026-01-20 15:16:06.513 250022 DEBUG nova.compute.provider_tree [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:16:06 compute-0 nova_compute[250018]: 2026-01-20 15:16:06.529 250022 DEBUG nova.scheduler.client.report [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:16:06 compute-0 nova_compute[250018]: 2026-01-20 15:16:06.567 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 15:16:06 compute-0 nova_compute[250018]: 2026-01-20 15:16:06.567 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.782s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:16:06 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2806: 321 pgs: 321 active+clean; 598 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 2.6 MiB/s wr, 133 op/s
Jan 20 15:16:06 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e406 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:16:07 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/2847299956' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:16:07 compute-0 ceph-mon[74360]: pgmap v2806: 321 pgs: 321 active+clean; 598 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 2.6 MiB/s wr, 133 op/s
Jan 20 15:16:07 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:16:07 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:16:07 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:16:07.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:16:07 compute-0 nova_compute[250018]: 2026-01-20 15:16:07.569 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:16:08 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:16:08 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:16:08 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:16:08.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:16:08 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2807: 321 pgs: 321 active+clean; 598 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 2.6 MiB/s wr, 133 op/s
Jan 20 15:16:09 compute-0 nova_compute[250018]: 2026-01-20 15:16:09.071 250022 DEBUG nova.compute.manager [None req-5364e26b-38e9-42a0-8c37-45b28590c7f3 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] [instance: b3504af3-390e-4ab0-8af6-15749a887d8f] Stashing vm_state: active _prep_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:5560
Jan 20 15:16:09 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:16:09 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:16:09 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:16:09.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:16:09 compute-0 ceph-mon[74360]: pgmap v2807: 321 pgs: 321 active+clean; 598 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 2.6 MiB/s wr, 133 op/s
Jan 20 15:16:09 compute-0 nova_compute[250018]: 2026-01-20 15:16:09.726 250022 DEBUG oslo_concurrency.lockutils [None req-5364e26b-38e9-42a0-8c37-45b28590c7f3 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:16:09 compute-0 nova_compute[250018]: 2026-01-20 15:16:09.727 250022 DEBUG oslo_concurrency.lockutils [None req-5364e26b-38e9-42a0-8c37-45b28590c7f3 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:16:09 compute-0 nova_compute[250018]: 2026-01-20 15:16:09.756 250022 DEBUG nova.objects.instance [None req-5364e26b-38e9-42a0-8c37-45b28590c7f3 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Lazy-loading 'pci_requests' on Instance uuid b3504af3-390e-4ab0-8af6-15749a887d8f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:16:09 compute-0 nova_compute[250018]: 2026-01-20 15:16:09.781 250022 DEBUG nova.virt.hardware [None req-5364e26b-38e9-42a0-8c37-45b28590c7f3 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 15:16:09 compute-0 nova_compute[250018]: 2026-01-20 15:16:09.782 250022 INFO nova.compute.claims [None req-5364e26b-38e9-42a0-8c37-45b28590c7f3 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] [instance: b3504af3-390e-4ab0-8af6-15749a887d8f] Claim successful on node compute-0.ctlplane.example.com
Jan 20 15:16:09 compute-0 nova_compute[250018]: 2026-01-20 15:16:09.782 250022 DEBUG nova.objects.instance [None req-5364e26b-38e9-42a0-8c37-45b28590c7f3 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Lazy-loading 'resources' on Instance uuid b3504af3-390e-4ab0-8af6-15749a887d8f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:16:09 compute-0 nova_compute[250018]: 2026-01-20 15:16:09.808 250022 DEBUG nova.objects.instance [None req-5364e26b-38e9-42a0-8c37-45b28590c7f3 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Lazy-loading 'pci_devices' on Instance uuid b3504af3-390e-4ab0-8af6-15749a887d8f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:16:09 compute-0 nova_compute[250018]: 2026-01-20 15:16:09.853 250022 INFO nova.compute.resource_tracker [None req-5364e26b-38e9-42a0-8c37-45b28590c7f3 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] [instance: b3504af3-390e-4ab0-8af6-15749a887d8f] Updating resource usage from migration 80437f3e-3a02-4f66-bb65-e87ce242092f
Jan 20 15:16:09 compute-0 nova_compute[250018]: 2026-01-20 15:16:09.854 250022 DEBUG nova.compute.resource_tracker [None req-5364e26b-38e9-42a0-8c37-45b28590c7f3 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] [instance: b3504af3-390e-4ab0-8af6-15749a887d8f] Starting to track incoming migration 80437f3e-3a02-4f66-bb65-e87ce242092f with flavor 30c26a27-d918-46d8-a512-4ef3b4ce5955 _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431
Jan 20 15:16:10 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:16:10 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:16:10 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:16:10.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:16:10 compute-0 nova_compute[250018]: 2026-01-20 15:16:10.495 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:16:10 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2808: 321 pgs: 321 active+clean; 598 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 615 KiB/s rd, 2.6 MiB/s wr, 99 op/s
Jan 20 15:16:10 compute-0 nova_compute[250018]: 2026-01-20 15:16:10.698 250022 DEBUG oslo_concurrency.processutils [None req-5364e26b-38e9-42a0-8c37-45b28590c7f3 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:16:11 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:16:11 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3136970445' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:16:11 compute-0 nova_compute[250018]: 2026-01-20 15:16:11.157 250022 DEBUG oslo_concurrency.processutils [None req-5364e26b-38e9-42a0-8c37-45b28590c7f3 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:16:11 compute-0 nova_compute[250018]: 2026-01-20 15:16:11.164 250022 DEBUG nova.compute.provider_tree [None req-5364e26b-38e9-42a0-8c37-45b28590c7f3 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:16:11 compute-0 nova_compute[250018]: 2026-01-20 15:16:11.193 250022 DEBUG nova.scheduler.client.report [None req-5364e26b-38e9-42a0-8c37-45b28590c7f3 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:16:11 compute-0 nova_compute[250018]: 2026-01-20 15:16:11.218 250022 DEBUG oslo_concurrency.lockutils [None req-5364e26b-38e9-42a0-8c37-45b28590c7f3 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: held 1.491s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:16:11 compute-0 nova_compute[250018]: 2026-01-20 15:16:11.219 250022 INFO nova.compute.manager [None req-5364e26b-38e9-42a0-8c37-45b28590c7f3 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] [instance: b3504af3-390e-4ab0-8af6-15749a887d8f] Migrating
Jan 20 15:16:11 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:16:11 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:16:11 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:16:11.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:16:11 compute-0 ceph-mon[74360]: pgmap v2808: 321 pgs: 321 active+clean; 598 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 615 KiB/s rd, 2.6 MiB/s wr, 99 op/s
Jan 20 15:16:11 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3136970445' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:16:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 15:16:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:16:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 20 15:16:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:16:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.010861236157640052 of space, bias 1.0, pg target 3.2583708472920154 quantized to 32 (current 32)
Jan 20 15:16:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:16:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.004322102410199144 of space, bias 1.0, pg target 1.283664415829146 quantized to 32 (current 32)
Jan 20 15:16:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:16:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.3799088032756375e-05 quantized to 32 (current 32)
Jan 20 15:16:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:16:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.563330250790992 quantized to 32 (current 32)
Jan 20 15:16:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:16:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001721570817048204 quantized to 16 (current 32)
Jan 20 15:16:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:16:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 15:16:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:16:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.0002151963521310255 quantized to 32 (current 32)
Jan 20 15:16:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:16:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018291689931137169 quantized to 32 (current 32)
Jan 20 15:16:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:16:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 15:16:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:16:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.000430392704262051 quantized to 32 (current 32)
Jan 20 15:16:11 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e406 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:16:12 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:16:12 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:16:12 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:16:12.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:16:12 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2809: 321 pgs: 321 active+clean; 599 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 365 KiB/s rd, 2.4 MiB/s wr, 72 op/s
Jan 20 15:16:12 compute-0 ceph-mon[74360]: pgmap v2809: 321 pgs: 321 active+clean; 599 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 365 KiB/s rd, 2.4 MiB/s wr, 72 op/s
Jan 20 15:16:13 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:16:13 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:16:13 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:16:13.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:16:13 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 20 15:16:13 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1846740269' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:16:13 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 20 15:16:13 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1846740269' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:16:13 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/1846740269' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:16:13 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/1846740269' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:16:14 compute-0 nova_compute[250018]: 2026-01-20 15:16:14.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:16:14 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:16:14 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:16:14 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:16:14.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:16:14 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2810: 321 pgs: 321 active+clean; 599 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 328 KiB/s rd, 2.2 MiB/s wr, 65 op/s
Jan 20 15:16:14 compute-0 ceph-mon[74360]: pgmap v2810: 321 pgs: 321 active+clean; 599 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 328 KiB/s rd, 2.2 MiB/s wr, 65 op/s
Jan 20 15:16:14 compute-0 sshd-session[362415]: Accepted publickey for nova from 192.168.122.102 port 33664 ssh2: ECDSA SHA256:XnPnjIKlkePRv+YAV8ktjwWUWX9aekF80jIRGfdhjRU
Jan 20 15:16:14 compute-0 systemd[1]: Created slice User Slice of UID 42436.
Jan 20 15:16:14 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42436...
Jan 20 15:16:14 compute-0 systemd-logind[796]: New session 59 of user nova.
Jan 20 15:16:14 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42436.
Jan 20 15:16:14 compute-0 systemd[1]: Starting User Manager for UID 42436...
Jan 20 15:16:15 compute-0 systemd[362419]: pam_unix(systemd-user:session): session opened for user nova(uid=42436) by nova(uid=0)
Jan 20 15:16:15 compute-0 systemd[362419]: Queued start job for default target Main User Target.
Jan 20 15:16:15 compute-0 systemd[362419]: Created slice User Application Slice.
Jan 20 15:16:15 compute-0 systemd[362419]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 20 15:16:15 compute-0 systemd[362419]: Started Daily Cleanup of User's Temporary Directories.
Jan 20 15:16:15 compute-0 systemd[362419]: Reached target Paths.
Jan 20 15:16:15 compute-0 systemd[362419]: Reached target Timers.
Jan 20 15:16:15 compute-0 systemd[362419]: Starting D-Bus User Message Bus Socket...
Jan 20 15:16:15 compute-0 systemd[362419]: Starting Create User's Volatile Files and Directories...
Jan 20 15:16:15 compute-0 systemd[362419]: Listening on D-Bus User Message Bus Socket.
Jan 20 15:16:15 compute-0 systemd[362419]: Reached target Sockets.
Jan 20 15:16:15 compute-0 systemd[362419]: Finished Create User's Volatile Files and Directories.
Jan 20 15:16:15 compute-0 systemd[362419]: Reached target Basic System.
Jan 20 15:16:15 compute-0 systemd[362419]: Reached target Main User Target.
Jan 20 15:16:15 compute-0 systemd[362419]: Startup finished in 176ms.
Jan 20 15:16:15 compute-0 systemd[1]: Started User Manager for UID 42436.
Jan 20 15:16:15 compute-0 systemd[1]: Started Session 59 of User nova.
Jan 20 15:16:15 compute-0 sshd-session[362415]: pam_unix(sshd:session): session opened for user nova(uid=42436) by nova(uid=0)
Jan 20 15:16:15 compute-0 sshd-session[362434]: Received disconnect from 192.168.122.102 port 33664:11: disconnected by user
Jan 20 15:16:15 compute-0 sshd-session[362434]: Disconnected from user nova 192.168.122.102 port 33664
Jan 20 15:16:15 compute-0 sshd-session[362415]: pam_unix(sshd:session): session closed for user nova
Jan 20 15:16:15 compute-0 systemd[1]: session-59.scope: Deactivated successfully.
Jan 20 15:16:15 compute-0 systemd-logind[796]: Session 59 logged out. Waiting for processes to exit.
Jan 20 15:16:15 compute-0 systemd-logind[796]: Removed session 59.
Jan 20 15:16:15 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:16:15 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:16:15 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:16:15.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:16:15 compute-0 sshd-session[362436]: Accepted publickey for nova from 192.168.122.102 port 33678 ssh2: ECDSA SHA256:XnPnjIKlkePRv+YAV8ktjwWUWX9aekF80jIRGfdhjRU
Jan 20 15:16:15 compute-0 systemd-logind[796]: New session 61 of user nova.
Jan 20 15:16:15 compute-0 systemd[1]: Started Session 61 of User nova.
Jan 20 15:16:15 compute-0 sshd-session[362436]: pam_unix(sshd:session): session opened for user nova(uid=42436) by nova(uid=0)
Jan 20 15:16:15 compute-0 sshd-session[362439]: Received disconnect from 192.168.122.102 port 33678:11: disconnected by user
Jan 20 15:16:15 compute-0 sshd-session[362439]: Disconnected from user nova 192.168.122.102 port 33678
Jan 20 15:16:15 compute-0 sshd-session[362436]: pam_unix(sshd:session): session closed for user nova
Jan 20 15:16:15 compute-0 systemd[1]: session-61.scope: Deactivated successfully.
Jan 20 15:16:15 compute-0 systemd-logind[796]: Session 61 logged out. Waiting for processes to exit.
Jan 20 15:16:15 compute-0 systemd-logind[796]: Removed session 61.
Jan 20 15:16:15 compute-0 nova_compute[250018]: 2026-01-20 15:16:15.496 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:16:16 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:16:16 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:16:16 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:16:16.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:16:16 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2811: 321 pgs: 321 active+clean; 599 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 250 KiB/s rd, 1.6 MiB/s wr, 42 op/s
Jan 20 15:16:16 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e406 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:16:17 compute-0 nova_compute[250018]: 2026-01-20 15:16:17.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:16:17 compute-0 nova_compute[250018]: 2026-01-20 15:16:17.051 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 15:16:17 compute-0 nova_compute[250018]: 2026-01-20 15:16:17.052 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 15:16:17 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:16:17 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:16:17 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:16:17.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:16:17 compute-0 nova_compute[250018]: 2026-01-20 15:16:17.473 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "refresh_cache-e79c0704-f95e-422f-9c25-ed35fca7cb7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:16:17 compute-0 nova_compute[250018]: 2026-01-20 15:16:17.474 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquired lock "refresh_cache-e79c0704-f95e-422f-9c25-ed35fca7cb7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:16:17 compute-0 nova_compute[250018]: 2026-01-20 15:16:17.474 250022 DEBUG nova.network.neutron [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] [instance: e79c0704-f95e-422f-9c25-ed35fca7cb7c] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 20 15:16:17 compute-0 nova_compute[250018]: 2026-01-20 15:16:17.474 250022 DEBUG nova.objects.instance [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lazy-loading 'info_cache' on Instance uuid e79c0704-f95e-422f-9c25-ed35fca7cb7c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:16:17 compute-0 ceph-mon[74360]: pgmap v2811: 321 pgs: 321 active+clean; 599 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 250 KiB/s rd, 1.6 MiB/s wr, 42 op/s
Jan 20 15:16:18 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:16:18 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:16:18 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:16:18.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:16:18 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2812: 321 pgs: 321 active+clean; 599 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.3 KiB/s rd, 20 KiB/s wr, 1 op/s
Jan 20 15:16:18 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-crash-compute-0[81580]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
Jan 20 15:16:19 compute-0 nova_compute[250018]: 2026-01-20 15:16:19.258 250022 DEBUG nova.compute.manager [req-8fa38861-3989-4ada-9654-1f0d33a06532 req-a3dad340-5358-4541-b7f8-e83b413f4838 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b3504af3-390e-4ab0-8af6-15749a887d8f] Received event network-vif-unplugged-1b18c40e-cce7-4971-98d2-c95ec41c9040 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:16:19 compute-0 nova_compute[250018]: 2026-01-20 15:16:19.258 250022 DEBUG oslo_concurrency.lockutils [req-8fa38861-3989-4ada-9654-1f0d33a06532 req-a3dad340-5358-4541-b7f8-e83b413f4838 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "b3504af3-390e-4ab0-8af6-15749a887d8f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:16:19 compute-0 nova_compute[250018]: 2026-01-20 15:16:19.258 250022 DEBUG oslo_concurrency.lockutils [req-8fa38861-3989-4ada-9654-1f0d33a06532 req-a3dad340-5358-4541-b7f8-e83b413f4838 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "b3504af3-390e-4ab0-8af6-15749a887d8f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:16:19 compute-0 nova_compute[250018]: 2026-01-20 15:16:19.258 250022 DEBUG oslo_concurrency.lockutils [req-8fa38861-3989-4ada-9654-1f0d33a06532 req-a3dad340-5358-4541-b7f8-e83b413f4838 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "b3504af3-390e-4ab0-8af6-15749a887d8f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:16:19 compute-0 nova_compute[250018]: 2026-01-20 15:16:19.259 250022 DEBUG nova.compute.manager [req-8fa38861-3989-4ada-9654-1f0d33a06532 req-a3dad340-5358-4541-b7f8-e83b413f4838 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b3504af3-390e-4ab0-8af6-15749a887d8f] No waiting events found dispatching network-vif-unplugged-1b18c40e-cce7-4971-98d2-c95ec41c9040 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:16:19 compute-0 nova_compute[250018]: 2026-01-20 15:16:19.259 250022 WARNING nova.compute.manager [req-8fa38861-3989-4ada-9654-1f0d33a06532 req-a3dad340-5358-4541-b7f8-e83b413f4838 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b3504af3-390e-4ab0-8af6-15749a887d8f] Received unexpected event network-vif-unplugged-1b18c40e-cce7-4971-98d2-c95ec41c9040 for instance with vm_state active and task_state resize_migrating.
Jan 20 15:16:19 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:16:19 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:16:19 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:16:19.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:16:19 compute-0 nova_compute[250018]: 2026-01-20 15:16:19.407 250022 DEBUG nova.virt.libvirt.driver [None req-e9459132-ba13-41d9-a7cf-f952f09929e6 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] Creating tmpfile /var/lib/nova/instances/tmpwqr60y7t to notify to other compute nodes that they should mount the same storage. _create_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10041
Jan 20 15:16:19 compute-0 nova_compute[250018]: 2026-01-20 15:16:19.604 250022 DEBUG nova.compute.manager [None req-e9459132-ba13-41d9-a7cf-f952f09929e6 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] destination check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=18432,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpwqr60y7t',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path=<?>,is_shared_block_storage=<?>,is_shared_instance_path=<?>,is_volume_backed=<?>,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_destination /usr/lib/python3.9/site-packages/nova/compute/manager.py:8476
Jan 20 15:16:19 compute-0 ceph-mon[74360]: pgmap v2812: 321 pgs: 321 active+clean; 599 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.3 KiB/s rd, 20 KiB/s wr, 1 op/s
Jan 20 15:16:20 compute-0 nova_compute[250018]: 2026-01-20 15:16:20.046 250022 INFO nova.network.neutron [None req-5364e26b-38e9-42a0-8c37-45b28590c7f3 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] [instance: b3504af3-390e-4ab0-8af6-15749a887d8f] Updating port 1b18c40e-cce7-4971-98d2-c95ec41c9040 with attributes {'binding:host_id': 'compute-0.ctlplane.example.com', 'device_owner': 'compute:nova'}
Jan 20 15:16:20 compute-0 nova_compute[250018]: 2026-01-20 15:16:20.054 250022 DEBUG nova.network.neutron [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] [instance: e79c0704-f95e-422f-9c25-ed35fca7cb7c] Updating instance_info_cache with network_info: [{"id": "1421cc5f-9a45-447a-bb3a-3f13dcc5a309", "address": "fa:16:3e:3b:1e:c2", "network": {"id": "c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1423306001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fff727019f86407498e83d7948d54962", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1421cc5f-9a", "ovs_interfaceid": "1421cc5f-9a45-447a-bb3a-3f13dcc5a309", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:16:20 compute-0 nova_compute[250018]: 2026-01-20 15:16:20.067 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Releasing lock "refresh_cache-e79c0704-f95e-422f-9c25-ed35fca7cb7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:16:20 compute-0 nova_compute[250018]: 2026-01-20 15:16:20.067 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] [instance: e79c0704-f95e-422f-9c25-ed35fca7cb7c] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 20 15:16:20 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:16:20 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:16:20 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:16:20.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:16:20 compute-0 nova_compute[250018]: 2026-01-20 15:16:20.499 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:16:20 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2813: 321 pgs: 321 active+clean; 599 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.5 KiB/s rd, 27 KiB/s wr, 3 op/s
Jan 20 15:16:20 compute-0 nova_compute[250018]: 2026-01-20 15:16:20.710 250022 DEBUG nova.compute.manager [None req-e9459132-ba13-41d9-a7cf-f952f09929e6 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] pre_live_migration data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=18432,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpwqr60y7t',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='b5656c1b-5ac7-4b93-a25d-420e1e294678',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8604
Jan 20 15:16:20 compute-0 ceph-mon[74360]: pgmap v2813: 321 pgs: 321 active+clean; 599 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.5 KiB/s rd, 27 KiB/s wr, 3 op/s
Jan 20 15:16:20 compute-0 nova_compute[250018]: 2026-01-20 15:16:20.741 250022 DEBUG oslo_concurrency.lockutils [None req-e9459132-ba13-41d9-a7cf-f952f09929e6 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] Acquiring lock "refresh_cache-b5656c1b-5ac7-4b93-a25d-420e1e294678" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:16:20 compute-0 nova_compute[250018]: 2026-01-20 15:16:20.741 250022 DEBUG oslo_concurrency.lockutils [None req-e9459132-ba13-41d9-a7cf-f952f09929e6 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] Acquired lock "refresh_cache-b5656c1b-5ac7-4b93-a25d-420e1e294678" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:16:20 compute-0 nova_compute[250018]: 2026-01-20 15:16:20.741 250022 DEBUG nova.network.neutron [None req-e9459132-ba13-41d9-a7cf-f952f09929e6 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 15:16:21 compute-0 nova_compute[250018]: 2026-01-20 15:16:21.386 250022 DEBUG nova.compute.manager [req-aeb0375b-ef7d-43fa-93fe-3d0b035fe22d req-c5f9169a-8c31-4949-b3b4-3ea5ec07c753 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b3504af3-390e-4ab0-8af6-15749a887d8f] Received event network-vif-plugged-1b18c40e-cce7-4971-98d2-c95ec41c9040 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:16:21 compute-0 nova_compute[250018]: 2026-01-20 15:16:21.386 250022 DEBUG oslo_concurrency.lockutils [req-aeb0375b-ef7d-43fa-93fe-3d0b035fe22d req-c5f9169a-8c31-4949-b3b4-3ea5ec07c753 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "b3504af3-390e-4ab0-8af6-15749a887d8f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:16:21 compute-0 nova_compute[250018]: 2026-01-20 15:16:21.387 250022 DEBUG oslo_concurrency.lockutils [req-aeb0375b-ef7d-43fa-93fe-3d0b035fe22d req-c5f9169a-8c31-4949-b3b4-3ea5ec07c753 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "b3504af3-390e-4ab0-8af6-15749a887d8f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:16:21 compute-0 nova_compute[250018]: 2026-01-20 15:16:21.387 250022 DEBUG oslo_concurrency.lockutils [req-aeb0375b-ef7d-43fa-93fe-3d0b035fe22d req-c5f9169a-8c31-4949-b3b4-3ea5ec07c753 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "b3504af3-390e-4ab0-8af6-15749a887d8f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:16:21 compute-0 nova_compute[250018]: 2026-01-20 15:16:21.387 250022 DEBUG nova.compute.manager [req-aeb0375b-ef7d-43fa-93fe-3d0b035fe22d req-c5f9169a-8c31-4949-b3b4-3ea5ec07c753 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b3504af3-390e-4ab0-8af6-15749a887d8f] No waiting events found dispatching network-vif-plugged-1b18c40e-cce7-4971-98d2-c95ec41c9040 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:16:21 compute-0 nova_compute[250018]: 2026-01-20 15:16:21.388 250022 WARNING nova.compute.manager [req-aeb0375b-ef7d-43fa-93fe-3d0b035fe22d req-c5f9169a-8c31-4949-b3b4-3ea5ec07c753 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b3504af3-390e-4ab0-8af6-15749a887d8f] Received unexpected event network-vif-plugged-1b18c40e-cce7-4971-98d2-c95ec41c9040 for instance with vm_state active and task_state resize_migrated.
Jan 20 15:16:21 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:16:21 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:16:21 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:16:21.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:16:21 compute-0 nova_compute[250018]: 2026-01-20 15:16:21.788 250022 DEBUG oslo_concurrency.lockutils [None req-5364e26b-38e9-42a0-8c37-45b28590c7f3 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Acquiring lock "refresh_cache-b3504af3-390e-4ab0-8af6-15749a887d8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:16:21 compute-0 nova_compute[250018]: 2026-01-20 15:16:21.788 250022 DEBUG oslo_concurrency.lockutils [None req-5364e26b-38e9-42a0-8c37-45b28590c7f3 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Acquired lock "refresh_cache-b3504af3-390e-4ab0-8af6-15749a887d8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:16:21 compute-0 nova_compute[250018]: 2026-01-20 15:16:21.788 250022 DEBUG nova.network.neutron [None req-5364e26b-38e9-42a0-8c37-45b28590c7f3 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] [instance: b3504af3-390e-4ab0-8af6-15749a887d8f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 15:16:21 compute-0 sudo[362446]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:16:21 compute-0 sudo[362446]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:16:21 compute-0 sudo[362446]: pam_unix(sudo:session): session closed for user root
Jan 20 15:16:21 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e406 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:16:21 compute-0 sudo[362471]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:16:21 compute-0 sudo[362471]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:16:21 compute-0 sudo[362471]: pam_unix(sudo:session): session closed for user root
Jan 20 15:16:22 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:16:22 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:16:22 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:16:22.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:16:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:16:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:16:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:16:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:16:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:16:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:16:22 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2814: 321 pgs: 321 active+clean; 599 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.8 KiB/s rd, 23 KiB/s wr, 2 op/s
Jan 20 15:16:23 compute-0 sshd-session[362444]: Invalid user admin from 134.122.57.138 port 36368
Jan 20 15:16:23 compute-0 sshd-session[362444]: Connection closed by invalid user admin 134.122.57.138 port 36368 [preauth]
Jan 20 15:16:23 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:16:23 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:16:23 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:16:23.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:16:23 compute-0 nova_compute[250018]: 2026-01-20 15:16:23.497 250022 DEBUG nova.compute.manager [req-bcc71ad0-6110-4605-a304-ec4c6225d80a req-5078ae13-45a0-45b6-85a8-1329f79908da 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b3504af3-390e-4ab0-8af6-15749a887d8f] Received event network-changed-1b18c40e-cce7-4971-98d2-c95ec41c9040 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:16:23 compute-0 nova_compute[250018]: 2026-01-20 15:16:23.498 250022 DEBUG nova.compute.manager [req-bcc71ad0-6110-4605-a304-ec4c6225d80a req-5078ae13-45a0-45b6-85a8-1329f79908da 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b3504af3-390e-4ab0-8af6-15749a887d8f] Refreshing instance network info cache due to event network-changed-1b18c40e-cce7-4971-98d2-c95ec41c9040. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 15:16:23 compute-0 nova_compute[250018]: 2026-01-20 15:16:23.498 250022 DEBUG oslo_concurrency.lockutils [req-bcc71ad0-6110-4605-a304-ec4c6225d80a req-5078ae13-45a0-45b6-85a8-1329f79908da 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-b3504af3-390e-4ab0-8af6-15749a887d8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:16:23 compute-0 ceph-mon[74360]: pgmap v2814: 321 pgs: 321 active+clean; 599 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.8 KiB/s rd, 23 KiB/s wr, 2 op/s
Jan 20 15:16:23 compute-0 nova_compute[250018]: 2026-01-20 15:16:23.944 250022 DEBUG nova.network.neutron [None req-e9459132-ba13-41d9-a7cf-f952f09929e6 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] Updating instance_info_cache with network_info: [{"id": "ebbe6083-de9d-43ca-9ab2-cf306ea0be4d", "address": "fa:16:3e:a9:77:ea", "network": {"id": "be008398-8f36-4967-9cc8-6412553c79f3", "bridge": "br-int", "label": "tempest-network-smoke--259749923", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ed5feeeafe7448a8efb47ab975b0ead", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapebbe6083-de", "ovs_interfaceid": "ebbe6083-de9d-43ca-9ab2-cf306ea0be4d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:16:23 compute-0 nova_compute[250018]: 2026-01-20 15:16:23.960 250022 DEBUG oslo_concurrency.lockutils [None req-e9459132-ba13-41d9-a7cf-f952f09929e6 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] Releasing lock "refresh_cache-b5656c1b-5ac7-4b93-a25d-420e1e294678" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:16:23 compute-0 nova_compute[250018]: 2026-01-20 15:16:23.962 250022 DEBUG nova.virt.libvirt.driver [None req-e9459132-ba13-41d9-a7cf-f952f09929e6 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] migrate_data in pre_live_migration: LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=18432,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpwqr60y7t',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='b5656c1b-5ac7-4b93-a25d-420e1e294678',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10827
Jan 20 15:16:23 compute-0 nova_compute[250018]: 2026-01-20 15:16:23.962 250022 DEBUG nova.virt.libvirt.driver [None req-e9459132-ba13-41d9-a7cf-f952f09929e6 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] Creating instance directory: /var/lib/nova/instances/b5656c1b-5ac7-4b93-a25d-420e1e294678 pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10840
Jan 20 15:16:23 compute-0 nova_compute[250018]: 2026-01-20 15:16:23.962 250022 DEBUG nova.virt.libvirt.driver [None req-e9459132-ba13-41d9-a7cf-f952f09929e6 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] Ensure instance console log exists: /var/lib/nova/instances/b5656c1b-5ac7-4b93-a25d-420e1e294678/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 15:16:23 compute-0 nova_compute[250018]: 2026-01-20 15:16:23.963 250022 DEBUG nova.virt.libvirt.driver [None req-e9459132-ba13-41d9-a7cf-f952f09929e6 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] Plugging VIFs using destination host port bindings before live migration. _pre_live_migration_plug_vifs /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10794
Jan 20 15:16:23 compute-0 nova_compute[250018]: 2026-01-20 15:16:23.964 250022 DEBUG nova.virt.libvirt.vif [None req-e9459132-ba13-41d9-a7cf-f952f09929e6 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T15:15:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-470752205',display_name='tempest-TestNetworkAdvancedServerOps-server-470752205',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-470752205',id=190,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAmgIhRv7SylyyDoXsoPKcXNSjcMbMCF7/IROo74ZCmTo9LbWE2Sanv271vjV+ounImSggkddfPFTxsQsqOAeGJ63UOB1CVRLYAEgvPLI8ngnO4k9hlNWAKjL0F7Yejx9A==',key_name='tempest-TestNetworkAdvancedServerOps-131135683',keypairs=<?>,launch_index=0,launched_at=2026-01-20T15:15:50Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='1ed5feeeafe7448a8efb47ab975b0ead',ramdisk_id='',reservation_id='r-kjhk913w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-175282664',owner_user_name='tempest-TestNetworkAdvancedServerOps-175282664-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T15:15:50Z,user_data=None,user_id='442a7a5cb8ea426a82be9762b262d171',uuid=b5656c1b-5ac7-4b93-a25d-420e1e294678,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ebbe6083-de9d-43ca-9ab2-cf306ea0be4d", "address": "fa:16:3e:a9:77:ea", "network": {"id": "be008398-8f36-4967-9cc8-6412553c79f3", "bridge": "br-int", "label": "tempest-network-smoke--259749923", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ed5feeeafe7448a8efb47ab975b0ead", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tapebbe6083-de", "ovs_interfaceid": "ebbe6083-de9d-43ca-9ab2-cf306ea0be4d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 15:16:23 compute-0 nova_compute[250018]: 2026-01-20 15:16:23.964 250022 DEBUG nova.network.os_vif_util [None req-e9459132-ba13-41d9-a7cf-f952f09929e6 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] Converting VIF {"id": "ebbe6083-de9d-43ca-9ab2-cf306ea0be4d", "address": "fa:16:3e:a9:77:ea", "network": {"id": "be008398-8f36-4967-9cc8-6412553c79f3", "bridge": "br-int", "label": "tempest-network-smoke--259749923", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ed5feeeafe7448a8efb47ab975b0ead", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tapebbe6083-de", "ovs_interfaceid": "ebbe6083-de9d-43ca-9ab2-cf306ea0be4d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:16:23 compute-0 nova_compute[250018]: 2026-01-20 15:16:23.964 250022 DEBUG nova.network.os_vif_util [None req-e9459132-ba13-41d9-a7cf-f952f09929e6 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a9:77:ea,bridge_name='br-int',has_traffic_filtering=True,id=ebbe6083-de9d-43ca-9ab2-cf306ea0be4d,network=Network(be008398-8f36-4967-9cc8-6412553c79f3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapebbe6083-de') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:16:23 compute-0 nova_compute[250018]: 2026-01-20 15:16:23.965 250022 DEBUG os_vif [None req-e9459132-ba13-41d9-a7cf-f952f09929e6 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:a9:77:ea,bridge_name='br-int',has_traffic_filtering=True,id=ebbe6083-de9d-43ca-9ab2-cf306ea0be4d,network=Network(be008398-8f36-4967-9cc8-6412553c79f3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapebbe6083-de') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 15:16:23 compute-0 nova_compute[250018]: 2026-01-20 15:16:23.965 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:16:23 compute-0 nova_compute[250018]: 2026-01-20 15:16:23.966 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:16:23 compute-0 nova_compute[250018]: 2026-01-20 15:16:23.966 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:16:23 compute-0 nova_compute[250018]: 2026-01-20 15:16:23.969 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:16:23 compute-0 nova_compute[250018]: 2026-01-20 15:16:23.969 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapebbe6083-de, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:16:23 compute-0 nova_compute[250018]: 2026-01-20 15:16:23.969 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapebbe6083-de, col_values=(('external_ids', {'iface-id': 'ebbe6083-de9d-43ca-9ab2-cf306ea0be4d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a9:77:ea', 'vm-uuid': 'b5656c1b-5ac7-4b93-a25d-420e1e294678'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:16:24 compute-0 NetworkManager[48960]: <info>  [1768922184.0180] manager: (tapebbe6083-de): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/330)
Jan 20 15:16:24 compute-0 nova_compute[250018]: 2026-01-20 15:16:24.018 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:16:24 compute-0 nova_compute[250018]: 2026-01-20 15:16:24.020 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 15:16:24 compute-0 nova_compute[250018]: 2026-01-20 15:16:24.023 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:16:24 compute-0 nova_compute[250018]: 2026-01-20 15:16:24.025 250022 INFO os_vif [None req-e9459132-ba13-41d9-a7cf-f952f09929e6 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:a9:77:ea,bridge_name='br-int',has_traffic_filtering=True,id=ebbe6083-de9d-43ca-9ab2-cf306ea0be4d,network=Network(be008398-8f36-4967-9cc8-6412553c79f3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapebbe6083-de')
Jan 20 15:16:24 compute-0 nova_compute[250018]: 2026-01-20 15:16:24.026 250022 DEBUG nova.virt.libvirt.driver [None req-e9459132-ba13-41d9-a7cf-f952f09929e6 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] No dst_numa_info in migrate_data, no cores to power up in pre_live_migration. pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10954
Jan 20 15:16:24 compute-0 nova_compute[250018]: 2026-01-20 15:16:24.026 250022 DEBUG nova.compute.manager [None req-e9459132-ba13-41d9-a7cf-f952f09929e6 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] driver pre_live_migration data is LibvirtLiveMigrateData(bdms=[],block_migration=False,disk_available_mb=18432,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpwqr60y7t',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='b5656c1b-5ac7-4b93-a25d-420e1e294678',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8668
Jan 20 15:16:24 compute-0 nova_compute[250018]: 2026-01-20 15:16:24.200 250022 DEBUG nova.network.neutron [None req-5364e26b-38e9-42a0-8c37-45b28590c7f3 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] [instance: b3504af3-390e-4ab0-8af6-15749a887d8f] Updating instance_info_cache with network_info: [{"id": "1b18c40e-cce7-4971-98d2-c95ec41c9040", "address": "fa:16:3e:eb:05:c3", "network": {"id": "c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1423306001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fff727019f86407498e83d7948d54962", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b18c40e-cc", "ovs_interfaceid": "1b18c40e-cce7-4971-98d2-c95ec41c9040", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:16:24 compute-0 nova_compute[250018]: 2026-01-20 15:16:24.233 250022 DEBUG oslo_concurrency.lockutils [None req-5364e26b-38e9-42a0-8c37-45b28590c7f3 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Releasing lock "refresh_cache-b3504af3-390e-4ab0-8af6-15749a887d8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:16:24 compute-0 nova_compute[250018]: 2026-01-20 15:16:24.238 250022 DEBUG oslo_concurrency.lockutils [req-bcc71ad0-6110-4605-a304-ec4c6225d80a req-5078ae13-45a0-45b6-85a8-1329f79908da 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-b3504af3-390e-4ab0-8af6-15749a887d8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:16:24 compute-0 nova_compute[250018]: 2026-01-20 15:16:24.238 250022 DEBUG nova.network.neutron [req-bcc71ad0-6110-4605-a304-ec4c6225d80a req-5078ae13-45a0-45b6-85a8-1329f79908da 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b3504af3-390e-4ab0-8af6-15749a887d8f] Refreshing network info cache for port 1b18c40e-cce7-4971-98d2-c95ec41c9040 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 15:16:24 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:16:24 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:16:24 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:16:24.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:16:24 compute-0 nova_compute[250018]: 2026-01-20 15:16:24.413 250022 DEBUG os_brick.utils [None req-5364e26b-38e9-42a0-8c37-45b28590c7f3 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Jan 20 15:16:24 compute-0 nova_compute[250018]: 2026-01-20 15:16:24.414 268150 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:16:24 compute-0 nova_compute[250018]: 2026-01-20 15:16:24.425 268150 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:16:24 compute-0 nova_compute[250018]: 2026-01-20 15:16:24.425 268150 DEBUG oslo.privsep.daemon [-] privsep: reply[cb31be8e-394f-48f4-80c0-23827c0b5b44]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:16:24 compute-0 nova_compute[250018]: 2026-01-20 15:16:24.426 268150 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:16:24 compute-0 nova_compute[250018]: 2026-01-20 15:16:24.434 268150 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:16:24 compute-0 nova_compute[250018]: 2026-01-20 15:16:24.434 268150 DEBUG oslo.privsep.daemon [-] privsep: reply[61f6e0c0-de65-47b1-a63f-465fd3098de6]: (4, ('InitiatorName=iqn.1994-05.com.redhat:228389a1f17e', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:16:24 compute-0 nova_compute[250018]: 2026-01-20 15:16:24.435 268150 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:16:24 compute-0 nova_compute[250018]: 2026-01-20 15:16:24.443 268150 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:16:24 compute-0 nova_compute[250018]: 2026-01-20 15:16:24.443 268150 DEBUG oslo.privsep.daemon [-] privsep: reply[58e6a9d4-7665-4bbe-9e6f-92c708f8c2bf]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:16:24 compute-0 nova_compute[250018]: 2026-01-20 15:16:24.444 268150 DEBUG oslo.privsep.daemon [-] privsep: reply[47018de2-4e93-4ff0-9cd5-3ccff827b0c4]: (4, '35085f33-1a27-41e3-805d-02c7ac6a1d7f') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:16:24 compute-0 nova_compute[250018]: 2026-01-20 15:16:24.445 250022 DEBUG oslo_concurrency.processutils [None req-5364e26b-38e9-42a0-8c37-45b28590c7f3 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:16:24 compute-0 nova_compute[250018]: 2026-01-20 15:16:24.475 250022 DEBUG oslo_concurrency.processutils [None req-5364e26b-38e9-42a0-8c37-45b28590c7f3 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] CMD "nvme version" returned: 0 in 0.029s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:16:24 compute-0 nova_compute[250018]: 2026-01-20 15:16:24.477 250022 DEBUG os_brick.initiator.connectors.lightos [None req-5364e26b-38e9-42a0-8c37-45b28590c7f3 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Jan 20 15:16:24 compute-0 nova_compute[250018]: 2026-01-20 15:16:24.477 250022 DEBUG os_brick.initiator.connectors.lightos [None req-5364e26b-38e9-42a0-8c37-45b28590c7f3 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Jan 20 15:16:24 compute-0 nova_compute[250018]: 2026-01-20 15:16:24.477 250022 DEBUG os_brick.initiator.connectors.lightos [None req-5364e26b-38e9-42a0-8c37-45b28590c7f3 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Jan 20 15:16:24 compute-0 nova_compute[250018]: 2026-01-20 15:16:24.478 250022 DEBUG os_brick.utils [None req-5364e26b-38e9-42a0-8c37-45b28590c7f3 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] <== get_connector_properties: return (64ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:228389a1f17e', 'do_local_attach': False, 'nvme_hostid': '5350774e-8b5e-4dba-80a9-92d405981c1d', 'system uuid': '35085f33-1a27-41e3-805d-02c7ac6a1d7f', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Jan 20 15:16:24 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2815: 321 pgs: 321 active+clean; 599 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.8 KiB/s rd, 23 KiB/s wr, 2 op/s
Jan 20 15:16:25 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 15:16:25 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1205900117' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:16:25 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:16:25 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:16:25 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:16:25.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:16:25 compute-0 nova_compute[250018]: 2026-01-20 15:16:25.501 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:16:25 compute-0 systemd[1]: Stopping User Manager for UID 42436...
Jan 20 15:16:25 compute-0 systemd[362419]: Activating special unit Exit the Session...
Jan 20 15:16:25 compute-0 systemd[362419]: Stopped target Main User Target.
Jan 20 15:16:25 compute-0 systemd[362419]: Stopped target Basic System.
Jan 20 15:16:25 compute-0 systemd[362419]: Stopped target Paths.
Jan 20 15:16:25 compute-0 systemd[362419]: Stopped target Sockets.
Jan 20 15:16:25 compute-0 systemd[362419]: Stopped target Timers.
Jan 20 15:16:25 compute-0 systemd[362419]: Stopped Mark boot as successful after the user session has run 2 minutes.
Jan 20 15:16:25 compute-0 systemd[362419]: Stopped Daily Cleanup of User's Temporary Directories.
Jan 20 15:16:25 compute-0 systemd[362419]: Closed D-Bus User Message Bus Socket.
Jan 20 15:16:25 compute-0 systemd[362419]: Stopped Create User's Volatile Files and Directories.
Jan 20 15:16:25 compute-0 systemd[362419]: Removed slice User Application Slice.
Jan 20 15:16:25 compute-0 systemd[362419]: Reached target Shutdown.
Jan 20 15:16:25 compute-0 nova_compute[250018]: 2026-01-20 15:16:25.634 250022 DEBUG nova.network.neutron [None req-e9459132-ba13-41d9-a7cf-f952f09929e6 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] Port ebbe6083-de9d-43ca-9ab2-cf306ea0be4d updated with migration profile {'migrating_to': 'compute-0.ctlplane.example.com'} successfully _setup_migration_port_profile /usr/lib/python3.9/site-packages/nova/network/neutron.py:354
Jan 20 15:16:25 compute-0 systemd[362419]: Finished Exit the Session.
Jan 20 15:16:25 compute-0 systemd[362419]: Reached target Exit the Session.
Jan 20 15:16:25 compute-0 nova_compute[250018]: 2026-01-20 15:16:25.636 250022 DEBUG nova.compute.manager [None req-e9459132-ba13-41d9-a7cf-f952f09929e6 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] pre_live_migration result data is LibvirtLiveMigrateData(bdms=[],block_migration=False,disk_available_mb=18432,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpwqr60y7t',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='b5656c1b-5ac7-4b93-a25d-420e1e294678',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8723
Jan 20 15:16:25 compute-0 systemd[1]: user@42436.service: Deactivated successfully.
Jan 20 15:16:25 compute-0 systemd[1]: Stopped User Manager for UID 42436.
Jan 20 15:16:25 compute-0 nova_compute[250018]: 2026-01-20 15:16:25.657 250022 DEBUG nova.virt.libvirt.driver [None req-5364e26b-38e9-42a0-8c37-45b28590c7f3 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] [instance: b3504af3-390e-4ab0-8af6-15749a887d8f] Starting finish_migration finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11698
Jan 20 15:16:25 compute-0 nova_compute[250018]: 2026-01-20 15:16:25.659 250022 DEBUG nova.virt.libvirt.driver [None req-5364e26b-38e9-42a0-8c37-45b28590c7f3 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] [instance: b3504af3-390e-4ab0-8af6-15749a887d8f] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Jan 20 15:16:25 compute-0 nova_compute[250018]: 2026-01-20 15:16:25.659 250022 INFO nova.virt.libvirt.driver [None req-5364e26b-38e9-42a0-8c37-45b28590c7f3 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] [instance: b3504af3-390e-4ab0-8af6-15749a887d8f] Creating image(s)
Jan 20 15:16:25 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/42436...
Jan 20 15:16:25 compute-0 ceph-mon[74360]: pgmap v2815: 321 pgs: 321 active+clean; 599 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.8 KiB/s rd, 23 KiB/s wr, 2 op/s
Jan 20 15:16:25 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/1205900117' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:16:25 compute-0 systemd[1]: run-user-42436.mount: Deactivated successfully.
Jan 20 15:16:25 compute-0 systemd[1]: user-runtime-dir@42436.service: Deactivated successfully.
Jan 20 15:16:25 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/42436.
Jan 20 15:16:25 compute-0 systemd[1]: Removed slice User Slice of UID 42436.
Jan 20 15:16:25 compute-0 nova_compute[250018]: 2026-01-20 15:16:25.696 250022 DEBUG nova.storage.rbd_utils [None req-5364e26b-38e9-42a0-8c37-45b28590c7f3 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] creating snapshot(nova-resize) on rbd image(b3504af3-390e-4ab0-8af6-15749a887d8f_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 20 15:16:25 compute-0 systemd[1]: Starting libvirt proxy daemon...
Jan 20 15:16:25 compute-0 systemd[1]: Started libvirt proxy daemon.
Jan 20 15:16:25 compute-0 kernel: tapebbe6083-de: entered promiscuous mode
Jan 20 15:16:25 compute-0 NetworkManager[48960]: <info>  [1768922185.9977] manager: (tapebbe6083-de): new Tun device (/org/freedesktop/NetworkManager/Devices/331)
Jan 20 15:16:25 compute-0 ovn_controller[148666]: 2026-01-20T15:16:25Z|00685|binding|INFO|Claiming lport ebbe6083-de9d-43ca-9ab2-cf306ea0be4d for this additional chassis.
Jan 20 15:16:25 compute-0 ovn_controller[148666]: 2026-01-20T15:16:25Z|00686|binding|INFO|ebbe6083-de9d-43ca-9ab2-cf306ea0be4d: Claiming fa:16:3e:a9:77:ea 10.100.0.5
Jan 20 15:16:25 compute-0 nova_compute[250018]: 2026-01-20 15:16:25.998 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:16:26 compute-0 ovn_controller[148666]: 2026-01-20T15:16:26Z|00687|binding|INFO|Setting lport ebbe6083-de9d-43ca-9ab2-cf306ea0be4d ovn-installed in OVS
Jan 20 15:16:26 compute-0 nova_compute[250018]: 2026-01-20 15:16:26.018 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:16:26 compute-0 nova_compute[250018]: 2026-01-20 15:16:26.020 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:16:26 compute-0 systemd-udevd[362575]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 15:16:26 compute-0 systemd-machined[216401]: New machine qemu-84-instance-000000be.
Jan 20 15:16:26 compute-0 NetworkManager[48960]: <info>  [1768922186.0448] device (tapebbe6083-de): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 15:16:26 compute-0 NetworkManager[48960]: <info>  [1768922186.0455] device (tapebbe6083-de): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 15:16:26 compute-0 systemd[1]: Started Virtual Machine qemu-84-instance-000000be.
Jan 20 15:16:26 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:16:26 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:16:26 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:16:26.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:16:26 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2816: 321 pgs: 321 active+clean; 599 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.3 KiB/s rd, 11 KiB/s wr, 2 op/s
Jan 20 15:16:26 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e406 do_prune osdmap full prune enabled
Jan 20 15:16:26 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e407 e407: 3 total, 3 up, 3 in
Jan 20 15:16:26 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e407: 3 total, 3 up, 3 in
Jan 20 15:16:26 compute-0 ceph-mon[74360]: pgmap v2816: 321 pgs: 321 active+clean; 599 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.3 KiB/s rd, 11 KiB/s wr, 2 op/s
Jan 20 15:16:26 compute-0 nova_compute[250018]: 2026-01-20 15:16:26.737 250022 DEBUG nova.objects.instance [None req-5364e26b-38e9-42a0-8c37-45b28590c7f3 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Lazy-loading 'trusted_certs' on Instance uuid b3504af3-390e-4ab0-8af6-15749a887d8f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:16:26 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:16:26 compute-0 nova_compute[250018]: 2026-01-20 15:16:26.847 250022 DEBUG nova.virt.libvirt.driver [None req-5364e26b-38e9-42a0-8c37-45b28590c7f3 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] [instance: b3504af3-390e-4ab0-8af6-15749a887d8f] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Jan 20 15:16:26 compute-0 nova_compute[250018]: 2026-01-20 15:16:26.848 250022 DEBUG nova.virt.libvirt.driver [None req-5364e26b-38e9-42a0-8c37-45b28590c7f3 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] [instance: b3504af3-390e-4ab0-8af6-15749a887d8f] Ensure instance console log exists: /var/lib/nova/instances/b3504af3-390e-4ab0-8af6-15749a887d8f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 15:16:26 compute-0 nova_compute[250018]: 2026-01-20 15:16:26.849 250022 DEBUG oslo_concurrency.lockutils [None req-5364e26b-38e9-42a0-8c37-45b28590c7f3 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:16:26 compute-0 nova_compute[250018]: 2026-01-20 15:16:26.849 250022 DEBUG oslo_concurrency.lockutils [None req-5364e26b-38e9-42a0-8c37-45b28590c7f3 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:16:26 compute-0 nova_compute[250018]: 2026-01-20 15:16:26.849 250022 DEBUG oslo_concurrency.lockutils [None req-5364e26b-38e9-42a0-8c37-45b28590c7f3 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:16:26 compute-0 nova_compute[250018]: 2026-01-20 15:16:26.852 250022 DEBUG nova.virt.libvirt.driver [None req-5364e26b-38e9-42a0-8c37-45b28590c7f3 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] [instance: b3504af3-390e-4ab0-8af6-15749a887d8f] Start _get_guest_xml network_info=[{"id": "1b18c40e-cce7-4971-98d2-c95ec41c9040", "address": "fa:16:3e:eb:05:c3", "network": {"id": "c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1423306001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-AttachVolumeMultiAttachTest-1423306001-network", "vif_mac": "fa:16:3e:eb:05:c3"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fff727019f86407498e83d7948d54962", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b18c40e-cc", "ovs_interfaceid": "1b18c40e-cce7-4971-98d2-c95ec41c9040", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vdb': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encrypted': False, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'guest_format': None, 'size': 0, 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [{'mount_device': '/dev/vdb', 'boot_index': None, 'device_type': 'disk', 'attachment_id': '0d9a1862-1a05-483d-a1d0-ba414e1ffed2', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-933c5c7a-f496-4bcc-b304-68156c235fe5', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '933c5c7a-f496-4bcc-b304-68156c235fe5', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'attaching', 'instance': 'b3504af3-390e-4ab0-8af6-15749a887d8f', 'attached_at': '2026-01-20T15:16:25.000000', 'detached_at': '', 'volume_id': '933c5c7a-f496-4bcc-b304-68156c235fe5', 'multiattach': True, 'serial': '933c5c7a-f496-4bcc-b304-68156c235fe5'}, 'disk_bus': 'virtio', 'guest_format': None, 'delete_on_termination': False, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 15:16:26 compute-0 nova_compute[250018]: 2026-01-20 15:16:26.857 250022 WARNING nova.virt.libvirt.driver [None req-5364e26b-38e9-42a0-8c37-45b28590c7f3 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:16:26 compute-0 nova_compute[250018]: 2026-01-20 15:16:26.864 250022 DEBUG nova.virt.libvirt.host [None req-5364e26b-38e9-42a0-8c37-45b28590c7f3 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 15:16:26 compute-0 nova_compute[250018]: 2026-01-20 15:16:26.865 250022 DEBUG nova.virt.libvirt.host [None req-5364e26b-38e9-42a0-8c37-45b28590c7f3 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 15:16:26 compute-0 nova_compute[250018]: 2026-01-20 15:16:26.867 250022 DEBUG nova.virt.libvirt.host [None req-5364e26b-38e9-42a0-8c37-45b28590c7f3 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 15:16:26 compute-0 nova_compute[250018]: 2026-01-20 15:16:26.868 250022 DEBUG nova.virt.libvirt.host [None req-5364e26b-38e9-42a0-8c37-45b28590c7f3 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 15:16:26 compute-0 nova_compute[250018]: 2026-01-20 15:16:26.869 250022 DEBUG nova.virt.libvirt.driver [None req-5364e26b-38e9-42a0-8c37-45b28590c7f3 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 15:16:26 compute-0 nova_compute[250018]: 2026-01-20 15:16:26.869 250022 DEBUG nova.virt.hardware [None req-5364e26b-38e9-42a0-8c37-45b28590c7f3 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='30c26a27-d918-46d8-a512-4ef3b4ce5955',id=2,is_public=True,memory_mb=192,name='m1.micro',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 15:16:26 compute-0 nova_compute[250018]: 2026-01-20 15:16:26.871 250022 DEBUG nova.virt.hardware [None req-5364e26b-38e9-42a0-8c37-45b28590c7f3 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 15:16:26 compute-0 nova_compute[250018]: 2026-01-20 15:16:26.871 250022 DEBUG nova.virt.hardware [None req-5364e26b-38e9-42a0-8c37-45b28590c7f3 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 15:16:26 compute-0 nova_compute[250018]: 2026-01-20 15:16:26.871 250022 DEBUG nova.virt.hardware [None req-5364e26b-38e9-42a0-8c37-45b28590c7f3 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 15:16:26 compute-0 nova_compute[250018]: 2026-01-20 15:16:26.871 250022 DEBUG nova.virt.hardware [None req-5364e26b-38e9-42a0-8c37-45b28590c7f3 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 15:16:26 compute-0 nova_compute[250018]: 2026-01-20 15:16:26.872 250022 DEBUG nova.virt.hardware [None req-5364e26b-38e9-42a0-8c37-45b28590c7f3 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 15:16:26 compute-0 nova_compute[250018]: 2026-01-20 15:16:26.872 250022 DEBUG nova.virt.hardware [None req-5364e26b-38e9-42a0-8c37-45b28590c7f3 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 15:16:26 compute-0 nova_compute[250018]: 2026-01-20 15:16:26.872 250022 DEBUG nova.virt.hardware [None req-5364e26b-38e9-42a0-8c37-45b28590c7f3 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 15:16:26 compute-0 nova_compute[250018]: 2026-01-20 15:16:26.872 250022 DEBUG nova.virt.hardware [None req-5364e26b-38e9-42a0-8c37-45b28590c7f3 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 15:16:26 compute-0 nova_compute[250018]: 2026-01-20 15:16:26.872 250022 DEBUG nova.virt.hardware [None req-5364e26b-38e9-42a0-8c37-45b28590c7f3 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 15:16:26 compute-0 nova_compute[250018]: 2026-01-20 15:16:26.873 250022 DEBUG nova.virt.hardware [None req-5364e26b-38e9-42a0-8c37-45b28590c7f3 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 15:16:26 compute-0 nova_compute[250018]: 2026-01-20 15:16:26.873 250022 DEBUG nova.objects.instance [None req-5364e26b-38e9-42a0-8c37-45b28590c7f3 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Lazy-loading 'vcpu_model' on Instance uuid b3504af3-390e-4ab0-8af6-15749a887d8f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:16:26 compute-0 nova_compute[250018]: 2026-01-20 15:16:26.898 250022 DEBUG oslo_concurrency.processutils [None req-5364e26b-38e9-42a0-8c37-45b28590c7f3 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:16:27 compute-0 nova_compute[250018]: 2026-01-20 15:16:27.016 250022 DEBUG nova.network.neutron [req-bcc71ad0-6110-4605-a304-ec4c6225d80a req-5078ae13-45a0-45b6-85a8-1329f79908da 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b3504af3-390e-4ab0-8af6-15749a887d8f] Updated VIF entry in instance network info cache for port 1b18c40e-cce7-4971-98d2-c95ec41c9040. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 15:16:27 compute-0 nova_compute[250018]: 2026-01-20 15:16:27.019 250022 DEBUG nova.network.neutron [req-bcc71ad0-6110-4605-a304-ec4c6225d80a req-5078ae13-45a0-45b6-85a8-1329f79908da 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b3504af3-390e-4ab0-8af6-15749a887d8f] Updating instance_info_cache with network_info: [{"id": "1b18c40e-cce7-4971-98d2-c95ec41c9040", "address": "fa:16:3e:eb:05:c3", "network": {"id": "c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1423306001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fff727019f86407498e83d7948d54962", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b18c40e-cc", "ovs_interfaceid": "1b18c40e-cce7-4971-98d2-c95ec41c9040", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:16:27 compute-0 nova_compute[250018]: 2026-01-20 15:16:27.040 250022 DEBUG oslo_concurrency.lockutils [req-bcc71ad0-6110-4605-a304-ec4c6225d80a req-5078ae13-45a0-45b6-85a8-1329f79908da 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-b3504af3-390e-4ab0-8af6-15749a887d8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:16:27 compute-0 nova_compute[250018]: 2026-01-20 15:16:27.164 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768922187.1639273, b5656c1b-5ac7-4b93-a25d-420e1e294678 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:16:27 compute-0 nova_compute[250018]: 2026-01-20 15:16:27.165 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] VM Started (Lifecycle Event)
Jan 20 15:16:27 compute-0 nova_compute[250018]: 2026-01-20 15:16:27.187 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:16:27 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 15:16:27 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3605721892' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:16:27 compute-0 nova_compute[250018]: 2026-01-20 15:16:27.342 250022 DEBUG oslo_concurrency.processutils [None req-5364e26b-38e9-42a0-8c37-45b28590c7f3 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:16:27 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:16:27 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:16:27 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:16:27.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:16:27 compute-0 nova_compute[250018]: 2026-01-20 15:16:27.414 250022 DEBUG oslo_concurrency.processutils [None req-5364e26b-38e9-42a0-8c37-45b28590c7f3 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:16:27 compute-0 nova_compute[250018]: 2026-01-20 15:16:27.609 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768922187.6090283, b5656c1b-5ac7-4b93-a25d-420e1e294678 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:16:27 compute-0 nova_compute[250018]: 2026-01-20 15:16:27.610 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] VM Resumed (Lifecycle Event)
Jan 20 15:16:27 compute-0 nova_compute[250018]: 2026-01-20 15:16:27.629 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:16:27 compute-0 nova_compute[250018]: 2026-01-20 15:16:27.633 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: migrating, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:16:27 compute-0 nova_compute[250018]: 2026-01-20 15:16:27.656 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] During the sync_power process the instance has moved from host compute-1.ctlplane.example.com to host compute-0.ctlplane.example.com
Jan 20 15:16:27 compute-0 ceph-mon[74360]: osdmap e407: 3 total, 3 up, 3 in
Jan 20 15:16:27 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3605721892' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:16:27 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 15:16:27 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1173701695' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:16:27 compute-0 nova_compute[250018]: 2026-01-20 15:16:27.867 250022 DEBUG oslo_concurrency.processutils [None req-5364e26b-38e9-42a0-8c37-45b28590c7f3 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:16:27 compute-0 nova_compute[250018]: 2026-01-20 15:16:27.896 250022 DEBUG nova.virt.libvirt.vif [None req-5364e26b-38e9-42a0-8c37-45b28590c7f3 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T15:14:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='multiattach-server-1',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='multiattach-server-1',id=189,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBL5L2o6o5dLcQyaIfhCZ5CKxQlecqNGmP68oHIQEsVoKIC2qfrMKjObT9GdMU8oznX9LVUwIWCShhlEJu9ZqPiutEL2afEJ1hQQamjERNcx9wWS2NfOgykA4yugQphfOtA==',key_name='tempest-keypair-1568469072',keypairs=<?>,launch_index=0,launched_at=2026-01-20T15:14:54Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='fff727019f86407498e83d7948d54962',ramdisk_id='',reservation_id='r-akp879yg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio
',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-AttachVolumeMultiAttachTest-418194625',owner_user_name='tempest-AttachVolumeMultiAttachTest-418194625-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T15:16:19Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='e9cc4ce3e069479ba9c789b378a68a1d',uuid=b3504af3-390e-4ab0-8af6-15749a887d8f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1b18c40e-cce7-4971-98d2-c95ec41c9040", "address": "fa:16:3e:eb:05:c3", "network": {"id": "c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1423306001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-AttachVolumeMultiAttachTest-1423306001-network", "vif_mac": "fa:16:3e:eb:05:c3"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fff727019f86407498e83d7948d54962", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b18c40e-cc", "ovs_interfaceid": "1b18c40e-cce7-4971-98d2-c95ec41c9040", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 15:16:27 compute-0 nova_compute[250018]: 2026-01-20 15:16:27.897 250022 DEBUG nova.network.os_vif_util [None req-5364e26b-38e9-42a0-8c37-45b28590c7f3 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Converting VIF {"id": "1b18c40e-cce7-4971-98d2-c95ec41c9040", "address": "fa:16:3e:eb:05:c3", "network": {"id": "c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1423306001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-AttachVolumeMultiAttachTest-1423306001-network", "vif_mac": "fa:16:3e:eb:05:c3"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fff727019f86407498e83d7948d54962", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b18c40e-cc", "ovs_interfaceid": "1b18c40e-cce7-4971-98d2-c95ec41c9040", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:16:27 compute-0 nova_compute[250018]: 2026-01-20 15:16:27.898 250022 DEBUG nova.network.os_vif_util [None req-5364e26b-38e9-42a0-8c37-45b28590c7f3 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:eb:05:c3,bridge_name='br-int',has_traffic_filtering=True,id=1b18c40e-cce7-4971-98d2-c95ec41c9040,network=Network(c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1b18c40e-cc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:16:27 compute-0 nova_compute[250018]: 2026-01-20 15:16:27.901 250022 DEBUG nova.virt.libvirt.driver [None req-5364e26b-38e9-42a0-8c37-45b28590c7f3 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] [instance: b3504af3-390e-4ab0-8af6-15749a887d8f] End _get_guest_xml xml=<domain type="kvm">
Jan 20 15:16:27 compute-0 nova_compute[250018]:   <uuid>b3504af3-390e-4ab0-8af6-15749a887d8f</uuid>
Jan 20 15:16:27 compute-0 nova_compute[250018]:   <name>instance-000000bd</name>
Jan 20 15:16:27 compute-0 nova_compute[250018]:   <memory>196608</memory>
Jan 20 15:16:27 compute-0 nova_compute[250018]:   <vcpu>1</vcpu>
Jan 20 15:16:27 compute-0 nova_compute[250018]:   <metadata>
Jan 20 15:16:27 compute-0 nova_compute[250018]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 15:16:27 compute-0 nova_compute[250018]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 15:16:27 compute-0 nova_compute[250018]:       <nova:name>multiattach-server-1</nova:name>
Jan 20 15:16:27 compute-0 nova_compute[250018]:       <nova:creationTime>2026-01-20 15:16:26</nova:creationTime>
Jan 20 15:16:27 compute-0 nova_compute[250018]:       <nova:flavor name="m1.micro">
Jan 20 15:16:27 compute-0 nova_compute[250018]:         <nova:memory>192</nova:memory>
Jan 20 15:16:27 compute-0 nova_compute[250018]:         <nova:disk>1</nova:disk>
Jan 20 15:16:27 compute-0 nova_compute[250018]:         <nova:swap>0</nova:swap>
Jan 20 15:16:27 compute-0 nova_compute[250018]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 15:16:27 compute-0 nova_compute[250018]:         <nova:vcpus>1</nova:vcpus>
Jan 20 15:16:27 compute-0 nova_compute[250018]:       </nova:flavor>
Jan 20 15:16:27 compute-0 nova_compute[250018]:       <nova:owner>
Jan 20 15:16:27 compute-0 nova_compute[250018]:         <nova:user uuid="e9cc4ce3e069479ba9c789b378a68a1d">tempest-AttachVolumeMultiAttachTest-418194625-project-member</nova:user>
Jan 20 15:16:27 compute-0 nova_compute[250018]:         <nova:project uuid="fff727019f86407498e83d7948d54962">tempest-AttachVolumeMultiAttachTest-418194625</nova:project>
Jan 20 15:16:27 compute-0 nova_compute[250018]:       </nova:owner>
Jan 20 15:16:27 compute-0 nova_compute[250018]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 15:16:27 compute-0 nova_compute[250018]:       <nova:ports>
Jan 20 15:16:27 compute-0 nova_compute[250018]:         <nova:port uuid="1b18c40e-cce7-4971-98d2-c95ec41c9040">
Jan 20 15:16:27 compute-0 nova_compute[250018]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 20 15:16:27 compute-0 nova_compute[250018]:         </nova:port>
Jan 20 15:16:27 compute-0 nova_compute[250018]:       </nova:ports>
Jan 20 15:16:27 compute-0 nova_compute[250018]:     </nova:instance>
Jan 20 15:16:27 compute-0 nova_compute[250018]:   </metadata>
Jan 20 15:16:27 compute-0 nova_compute[250018]:   <sysinfo type="smbios">
Jan 20 15:16:27 compute-0 nova_compute[250018]:     <system>
Jan 20 15:16:27 compute-0 nova_compute[250018]:       <entry name="manufacturer">RDO</entry>
Jan 20 15:16:27 compute-0 nova_compute[250018]:       <entry name="product">OpenStack Compute</entry>
Jan 20 15:16:27 compute-0 nova_compute[250018]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 15:16:27 compute-0 nova_compute[250018]:       <entry name="serial">b3504af3-390e-4ab0-8af6-15749a887d8f</entry>
Jan 20 15:16:27 compute-0 nova_compute[250018]:       <entry name="uuid">b3504af3-390e-4ab0-8af6-15749a887d8f</entry>
Jan 20 15:16:27 compute-0 nova_compute[250018]:       <entry name="family">Virtual Machine</entry>
Jan 20 15:16:27 compute-0 nova_compute[250018]:     </system>
Jan 20 15:16:27 compute-0 nova_compute[250018]:   </sysinfo>
Jan 20 15:16:27 compute-0 nova_compute[250018]:   <os>
Jan 20 15:16:27 compute-0 nova_compute[250018]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 15:16:27 compute-0 nova_compute[250018]:     <boot dev="hd"/>
Jan 20 15:16:27 compute-0 nova_compute[250018]:     <smbios mode="sysinfo"/>
Jan 20 15:16:27 compute-0 nova_compute[250018]:   </os>
Jan 20 15:16:27 compute-0 nova_compute[250018]:   <features>
Jan 20 15:16:27 compute-0 nova_compute[250018]:     <acpi/>
Jan 20 15:16:27 compute-0 nova_compute[250018]:     <apic/>
Jan 20 15:16:27 compute-0 nova_compute[250018]:     <vmcoreinfo/>
Jan 20 15:16:27 compute-0 nova_compute[250018]:   </features>
Jan 20 15:16:27 compute-0 nova_compute[250018]:   <clock offset="utc">
Jan 20 15:16:27 compute-0 nova_compute[250018]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 15:16:27 compute-0 nova_compute[250018]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 15:16:27 compute-0 nova_compute[250018]:     <timer name="hpet" present="no"/>
Jan 20 15:16:27 compute-0 nova_compute[250018]:   </clock>
Jan 20 15:16:27 compute-0 nova_compute[250018]:   <cpu mode="custom" match="exact">
Jan 20 15:16:27 compute-0 nova_compute[250018]:     <model>Nehalem</model>
Jan 20 15:16:27 compute-0 nova_compute[250018]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 15:16:27 compute-0 nova_compute[250018]:   </cpu>
Jan 20 15:16:27 compute-0 nova_compute[250018]:   <devices>
Jan 20 15:16:27 compute-0 nova_compute[250018]:     <disk type="network" device="disk">
Jan 20 15:16:27 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 15:16:27 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/b3504af3-390e-4ab0-8af6-15749a887d8f_disk">
Jan 20 15:16:27 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 15:16:27 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 15:16:27 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 15:16:27 compute-0 nova_compute[250018]:       </source>
Jan 20 15:16:27 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 15:16:27 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 15:16:27 compute-0 nova_compute[250018]:       </auth>
Jan 20 15:16:27 compute-0 nova_compute[250018]:       <target dev="vda" bus="virtio"/>
Jan 20 15:16:27 compute-0 nova_compute[250018]:     </disk>
Jan 20 15:16:27 compute-0 nova_compute[250018]:     <disk type="network" device="cdrom">
Jan 20 15:16:27 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 15:16:27 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/b3504af3-390e-4ab0-8af6-15749a887d8f_disk.config">
Jan 20 15:16:27 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 15:16:27 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 15:16:27 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 15:16:27 compute-0 nova_compute[250018]:       </source>
Jan 20 15:16:27 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 15:16:27 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 15:16:27 compute-0 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 20 15:16:27 compute-0 nova_compute[250018]:       </auth>
Jan 20 15:16:27 compute-0 nova_compute[250018]:       <target dev="sda" bus="sata"/>
Jan 20 15:16:27 compute-0 nova_compute[250018]:     </disk>
Jan 20 15:16:27 compute-0 nova_compute[250018]:     <disk type="network" device="disk">
Jan 20 15:16:27 compute-0 nova_compute[250018]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 20 15:16:27 compute-0 nova_compute[250018]:       <source protocol="rbd" name="volumes/volume-933c5c7a-f496-4bcc-b304-68156c235fe5">
Jan 20 15:16:27 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 15:16:27 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 15:16:27 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 15:16:27 compute-0 nova_compute[250018]:       </source>
Jan 20 15:16:27 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 15:16:27 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 15:16:27 compute-0 nova_compute[250018]:       </auth>
Jan 20 15:16:27 compute-0 nova_compute[250018]:       <target dev="vdb" bus="virtio"/>
Jan 20 15:16:27 compute-0 nova_compute[250018]:       <serial>933c5c7a-f496-4bcc-b304-68156c235fe5</serial>
Jan 20 15:16:27 compute-0 nova_compute[250018]:       <shareable/>
Jan 20 15:16:27 compute-0 nova_compute[250018]:     </disk>
Jan 20 15:16:27 compute-0 nova_compute[250018]:     <interface type="ethernet">
Jan 20 15:16:27 compute-0 nova_compute[250018]:       <mac address="fa:16:3e:eb:05:c3"/>
Jan 20 15:16:27 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 15:16:27 compute-0 nova_compute[250018]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 15:16:27 compute-0 nova_compute[250018]:       <mtu size="1442"/>
Jan 20 15:16:27 compute-0 nova_compute[250018]:       <target dev="tap1b18c40e-cc"/>
Jan 20 15:16:27 compute-0 nova_compute[250018]:     </interface>
Jan 20 15:16:27 compute-0 nova_compute[250018]:     <serial type="pty">
Jan 20 15:16:27 compute-0 nova_compute[250018]:       <log file="/var/lib/nova/instances/b3504af3-390e-4ab0-8af6-15749a887d8f/console.log" append="off"/>
Jan 20 15:16:27 compute-0 nova_compute[250018]:     </serial>
Jan 20 15:16:27 compute-0 nova_compute[250018]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 15:16:27 compute-0 nova_compute[250018]:     <video>
Jan 20 15:16:27 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 15:16:27 compute-0 nova_compute[250018]:     </video>
Jan 20 15:16:27 compute-0 nova_compute[250018]:     <input type="tablet" bus="usb"/>
Jan 20 15:16:27 compute-0 nova_compute[250018]:     <rng model="virtio">
Jan 20 15:16:27 compute-0 nova_compute[250018]:       <backend model="random">/dev/urandom</backend>
Jan 20 15:16:27 compute-0 nova_compute[250018]:     </rng>
Jan 20 15:16:27 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root"/>
Jan 20 15:16:27 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:16:27 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:16:27 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:16:27 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:16:27 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:16:27 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:16:27 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:16:27 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:16:27 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:16:27 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:16:27 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:16:27 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:16:27 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:16:27 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:16:27 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:16:27 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:16:27 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:16:27 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:16:27 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:16:27 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:16:27 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:16:27 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:16:27 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:16:27 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:16:27 compute-0 nova_compute[250018]:     <controller type="usb" index="0"/>
Jan 20 15:16:27 compute-0 nova_compute[250018]:     <memballoon model="virtio">
Jan 20 15:16:27 compute-0 nova_compute[250018]:       <stats period="10"/>
Jan 20 15:16:27 compute-0 nova_compute[250018]:     </memballoon>
Jan 20 15:16:27 compute-0 nova_compute[250018]:   </devices>
Jan 20 15:16:27 compute-0 nova_compute[250018]: </domain>
Jan 20 15:16:27 compute-0 nova_compute[250018]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 15:16:27 compute-0 nova_compute[250018]: 2026-01-20 15:16:27.902 250022 DEBUG nova.virt.libvirt.vif [None req-5364e26b-38e9-42a0-8c37-45b28590c7f3 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T15:14:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='multiattach-server-1',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='multiattach-server-1',id=189,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBL5L2o6o5dLcQyaIfhCZ5CKxQlecqNGmP68oHIQEsVoKIC2qfrMKjObT9GdMU8oznX9LVUwIWCShhlEJu9ZqPiutEL2afEJ1hQQamjERNcx9wWS2NfOgykA4yugQphfOtA==',key_name='tempest-keypair-1568469072',keypairs=<?>,launch_index=0,launched_at=2026-01-20T15:14:54Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='fff727019f86407498e83d7948d54962',ramdisk_id='',reservation_id='r-akp879yg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio
',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-AttachVolumeMultiAttachTest-418194625',owner_user_name='tempest-AttachVolumeMultiAttachTest-418194625-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T15:16:19Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='e9cc4ce3e069479ba9c789b378a68a1d',uuid=b3504af3-390e-4ab0-8af6-15749a887d8f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1b18c40e-cce7-4971-98d2-c95ec41c9040", "address": "fa:16:3e:eb:05:c3", "network": {"id": "c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1423306001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-AttachVolumeMultiAttachTest-1423306001-network", "vif_mac": "fa:16:3e:eb:05:c3"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fff727019f86407498e83d7948d54962", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b18c40e-cc", "ovs_interfaceid": "1b18c40e-cce7-4971-98d2-c95ec41c9040", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 15:16:27 compute-0 nova_compute[250018]: 2026-01-20 15:16:27.903 250022 DEBUG nova.network.os_vif_util [None req-5364e26b-38e9-42a0-8c37-45b28590c7f3 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Converting VIF {"id": "1b18c40e-cce7-4971-98d2-c95ec41c9040", "address": "fa:16:3e:eb:05:c3", "network": {"id": "c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1423306001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-AttachVolumeMultiAttachTest-1423306001-network", "vif_mac": "fa:16:3e:eb:05:c3"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fff727019f86407498e83d7948d54962", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b18c40e-cc", "ovs_interfaceid": "1b18c40e-cce7-4971-98d2-c95ec41c9040", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:16:27 compute-0 nova_compute[250018]: 2026-01-20 15:16:27.903 250022 DEBUG nova.network.os_vif_util [None req-5364e26b-38e9-42a0-8c37-45b28590c7f3 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:eb:05:c3,bridge_name='br-int',has_traffic_filtering=True,id=1b18c40e-cce7-4971-98d2-c95ec41c9040,network=Network(c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1b18c40e-cc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:16:27 compute-0 nova_compute[250018]: 2026-01-20 15:16:27.904 250022 DEBUG os_vif [None req-5364e26b-38e9-42a0-8c37-45b28590c7f3 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:eb:05:c3,bridge_name='br-int',has_traffic_filtering=True,id=1b18c40e-cce7-4971-98d2-c95ec41c9040,network=Network(c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1b18c40e-cc') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 15:16:27 compute-0 nova_compute[250018]: 2026-01-20 15:16:27.904 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:16:27 compute-0 nova_compute[250018]: 2026-01-20 15:16:27.906 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:16:27 compute-0 nova_compute[250018]: 2026-01-20 15:16:27.906 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:16:27 compute-0 nova_compute[250018]: 2026-01-20 15:16:27.909 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:16:27 compute-0 nova_compute[250018]: 2026-01-20 15:16:27.909 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1b18c40e-cc, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:16:27 compute-0 nova_compute[250018]: 2026-01-20 15:16:27.909 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1b18c40e-cc, col_values=(('external_ids', {'iface-id': '1b18c40e-cce7-4971-98d2-c95ec41c9040', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:eb:05:c3', 'vm-uuid': 'b3504af3-390e-4ab0-8af6-15749a887d8f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:16:27 compute-0 nova_compute[250018]: 2026-01-20 15:16:27.911 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:16:27 compute-0 NetworkManager[48960]: <info>  [1768922187.9121] manager: (tap1b18c40e-cc): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/332)
Jan 20 15:16:27 compute-0 nova_compute[250018]: 2026-01-20 15:16:27.914 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 15:16:27 compute-0 nova_compute[250018]: 2026-01-20 15:16:27.918 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:16:27 compute-0 nova_compute[250018]: 2026-01-20 15:16:27.919 250022 INFO os_vif [None req-5364e26b-38e9-42a0-8c37-45b28590c7f3 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:eb:05:c3,bridge_name='br-int',has_traffic_filtering=True,id=1b18c40e-cce7-4971-98d2-c95ec41c9040,network=Network(c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1b18c40e-cc')
Jan 20 15:16:28 compute-0 nova_compute[250018]: 2026-01-20 15:16:28.063 250022 DEBUG nova.virt.libvirt.driver [None req-5364e26b-38e9-42a0-8c37-45b28590c7f3 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:16:28 compute-0 nova_compute[250018]: 2026-01-20 15:16:28.064 250022 DEBUG nova.virt.libvirt.driver [None req-5364e26b-38e9-42a0-8c37-45b28590c7f3 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:16:28 compute-0 nova_compute[250018]: 2026-01-20 15:16:28.064 250022 DEBUG nova.virt.libvirt.driver [None req-5364e26b-38e9-42a0-8c37-45b28590c7f3 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:16:28 compute-0 nova_compute[250018]: 2026-01-20 15:16:28.064 250022 DEBUG nova.virt.libvirt.driver [None req-5364e26b-38e9-42a0-8c37-45b28590c7f3 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] No VIF found with MAC fa:16:3e:eb:05:c3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 15:16:28 compute-0 nova_compute[250018]: 2026-01-20 15:16:28.065 250022 INFO nova.virt.libvirt.driver [None req-5364e26b-38e9-42a0-8c37-45b28590c7f3 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] [instance: b3504af3-390e-4ab0-8af6-15749a887d8f] Using config drive
Jan 20 15:16:28 compute-0 kernel: tap1b18c40e-cc: entered promiscuous mode
Jan 20 15:16:28 compute-0 NetworkManager[48960]: <info>  [1768922188.1580] manager: (tap1b18c40e-cc): new Tun device (/org/freedesktop/NetworkManager/Devices/333)
Jan 20 15:16:28 compute-0 ovn_controller[148666]: 2026-01-20T15:16:28Z|00688|binding|INFO|Claiming lport 1b18c40e-cce7-4971-98d2-c95ec41c9040 for this chassis.
Jan 20 15:16:28 compute-0 ovn_controller[148666]: 2026-01-20T15:16:28Z|00689|binding|INFO|1b18c40e-cce7-4971-98d2-c95ec41c9040: Claiming fa:16:3e:eb:05:c3 10.100.0.7
Jan 20 15:16:28 compute-0 nova_compute[250018]: 2026-01-20 15:16:28.159 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:16:28 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:16:28.165 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:eb:05:c3 10.100.0.7'], port_security=['fa:16:3e:eb:05:c3 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'b3504af3-390e-4ab0-8af6-15749a887d8f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fff727019f86407498e83d7948d54962', 'neutron:revision_number': '6', 'neutron:security_group_ids': '5ace6a2f-56c6-4679-bb81-70ccb27ab312', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=87d69a20-7690-494a-ac16-7c600840561a, chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], tunnel_key=7, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=1b18c40e-cce7-4971-98d2-c95ec41c9040) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:16:28 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:16:28.166 160071 INFO neutron.agent.ovn.metadata.agent [-] Port 1b18c40e-cce7-4971-98d2-c95ec41c9040 in datapath c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab bound to our chassis
Jan 20 15:16:28 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:16:28.167 160071 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab
Jan 20 15:16:28 compute-0 NetworkManager[48960]: <info>  [1768922188.1720] device (tap1b18c40e-cc): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 15:16:28 compute-0 NetworkManager[48960]: <info>  [1768922188.1729] device (tap1b18c40e-cc): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 15:16:28 compute-0 ovn_controller[148666]: 2026-01-20T15:16:28Z|00690|binding|INFO|Setting lport 1b18c40e-cce7-4971-98d2-c95ec41c9040 ovn-installed in OVS
Jan 20 15:16:28 compute-0 ovn_controller[148666]: 2026-01-20T15:16:28Z|00691|binding|INFO|Setting lport 1b18c40e-cce7-4971-98d2-c95ec41c9040 up in Southbound
Jan 20 15:16:28 compute-0 nova_compute[250018]: 2026-01-20 15:16:28.178 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:16:28 compute-0 nova_compute[250018]: 2026-01-20 15:16:28.181 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:16:28 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:16:28.185 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[0794c722-0b83-4241-a938-72f57a505098]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:16:28 compute-0 systemd-machined[216401]: New machine qemu-85-instance-000000bd.
Jan 20 15:16:28 compute-0 systemd[1]: Started Virtual Machine qemu-85-instance-000000bd.
Jan 20 15:16:28 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:16:28.213 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[388ca05e-6979-479e-99a7-c256ea24c764]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:16:28 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:16:28.215 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[1bf5f659-751c-4d40-ad8d-2b791a5c16ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:16:28 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:16:28.243 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[316b13de-6838-4521-89a1-dd3933e07757]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:16:28 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:16:28.260 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[9d70f8da-fe5f-4e2c-aadf-cb7f59f99d24]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc1f4a971-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fc:30:f0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 700, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 700, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 208], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 795048, 'reachable_time': 24995, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 362772, 'error': None, 'target': 'ovnmeta-c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:16:28 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:16:28.276 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[7dd7dfc4-e18c-4bb0-ace0-da24396f845d]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapc1f4a971-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 795057, 'tstamp': 795057}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 362774, 'error': None, 'target': 'ovnmeta-c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapc1f4a971-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 795060, 'tstamp': 795060}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 362774, 'error': None, 'target': 'ovnmeta-c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:16:28 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:16:28.278 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc1f4a971-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:16:28 compute-0 nova_compute[250018]: 2026-01-20 15:16:28.279 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:16:28 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:16:28.280 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc1f4a971-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:16:28 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:16:28.280 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:16:28 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:16:28.281 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc1f4a971-00, col_values=(('external_ids', {'iface-id': 'b20b0e27-0b08-4316-b6df-6784416f44c0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:16:28 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:16:28.281 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:16:28 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:16:28 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:16:28 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:16:28.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:16:28 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2818: 321 pgs: 321 active+clean; 599 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.6 KiB/s rd, 8.5 KiB/s wr, 2 op/s
Jan 20 15:16:28 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/1173701695' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:16:28 compute-0 ceph-mon[74360]: pgmap v2818: 321 pgs: 321 active+clean; 599 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.6 KiB/s rd, 8.5 KiB/s wr, 2 op/s
Jan 20 15:16:28 compute-0 nova_compute[250018]: 2026-01-20 15:16:28.750 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768922188.749588, b3504af3-390e-4ab0-8af6-15749a887d8f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:16:28 compute-0 nova_compute[250018]: 2026-01-20 15:16:28.751 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: b3504af3-390e-4ab0-8af6-15749a887d8f] VM Resumed (Lifecycle Event)
Jan 20 15:16:28 compute-0 nova_compute[250018]: 2026-01-20 15:16:28.755 250022 DEBUG nova.compute.manager [None req-5364e26b-38e9-42a0-8c37-45b28590c7f3 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] [instance: b3504af3-390e-4ab0-8af6-15749a887d8f] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 15:16:28 compute-0 nova_compute[250018]: 2026-01-20 15:16:28.759 250022 INFO nova.virt.libvirt.driver [-] [instance: b3504af3-390e-4ab0-8af6-15749a887d8f] Instance running successfully.
Jan 20 15:16:28 compute-0 virtqemud[249565]: argument unsupported: QEMU guest agent is not configured
Jan 20 15:16:28 compute-0 nova_compute[250018]: 2026-01-20 15:16:28.762 250022 DEBUG nova.virt.libvirt.guest [None req-5364e26b-38e9-42a0-8c37-45b28590c7f3 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] [instance: b3504af3-390e-4ab0-8af6-15749a887d8f] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Jan 20 15:16:28 compute-0 nova_compute[250018]: 2026-01-20 15:16:28.763 250022 DEBUG nova.virt.libvirt.driver [None req-5364e26b-38e9-42a0-8c37-45b28590c7f3 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] [instance: b3504af3-390e-4ab0-8af6-15749a887d8f] finish_migration finished successfully. finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11793
Jan 20 15:16:28 compute-0 nova_compute[250018]: 2026-01-20 15:16:28.788 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: b3504af3-390e-4ab0-8af6-15749a887d8f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:16:28 compute-0 nova_compute[250018]: 2026-01-20 15:16:28.792 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: b3504af3-390e-4ab0-8af6-15749a887d8f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:16:28 compute-0 nova_compute[250018]: 2026-01-20 15:16:28.825 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: b3504af3-390e-4ab0-8af6-15749a887d8f] During sync_power_state the instance has a pending task (resize_finish). Skip.
Jan 20 15:16:28 compute-0 nova_compute[250018]: 2026-01-20 15:16:28.825 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768922188.7516081, b3504af3-390e-4ab0-8af6-15749a887d8f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:16:28 compute-0 nova_compute[250018]: 2026-01-20 15:16:28.826 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: b3504af3-390e-4ab0-8af6-15749a887d8f] VM Started (Lifecycle Event)
Jan 20 15:16:28 compute-0 nova_compute[250018]: 2026-01-20 15:16:28.855 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: b3504af3-390e-4ab0-8af6-15749a887d8f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:16:28 compute-0 nova_compute[250018]: 2026-01-20 15:16:28.858 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: b3504af3-390e-4ab0-8af6-15749a887d8f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:16:28 compute-0 nova_compute[250018]: 2026-01-20 15:16:28.904 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: b3504af3-390e-4ab0-8af6-15749a887d8f] During sync_power_state the instance has a pending task (resize_finish). Skip.
Jan 20 15:16:29 compute-0 nova_compute[250018]: 2026-01-20 15:16:29.067 250022 DEBUG nova.compute.manager [req-0262bb10-7234-4790-8a15-b7122d118195 req-a6c76d11-4074-483b-a5a3-a4506ecce1bd 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b3504af3-390e-4ab0-8af6-15749a887d8f] Received event network-vif-plugged-1b18c40e-cce7-4971-98d2-c95ec41c9040 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:16:29 compute-0 nova_compute[250018]: 2026-01-20 15:16:29.068 250022 DEBUG oslo_concurrency.lockutils [req-0262bb10-7234-4790-8a15-b7122d118195 req-a6c76d11-4074-483b-a5a3-a4506ecce1bd 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "b3504af3-390e-4ab0-8af6-15749a887d8f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:16:29 compute-0 nova_compute[250018]: 2026-01-20 15:16:29.068 250022 DEBUG oslo_concurrency.lockutils [req-0262bb10-7234-4790-8a15-b7122d118195 req-a6c76d11-4074-483b-a5a3-a4506ecce1bd 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "b3504af3-390e-4ab0-8af6-15749a887d8f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:16:29 compute-0 nova_compute[250018]: 2026-01-20 15:16:29.069 250022 DEBUG oslo_concurrency.lockutils [req-0262bb10-7234-4790-8a15-b7122d118195 req-a6c76d11-4074-483b-a5a3-a4506ecce1bd 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "b3504af3-390e-4ab0-8af6-15749a887d8f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:16:29 compute-0 nova_compute[250018]: 2026-01-20 15:16:29.069 250022 DEBUG nova.compute.manager [req-0262bb10-7234-4790-8a15-b7122d118195 req-a6c76d11-4074-483b-a5a3-a4506ecce1bd 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b3504af3-390e-4ab0-8af6-15749a887d8f] No waiting events found dispatching network-vif-plugged-1b18c40e-cce7-4971-98d2-c95ec41c9040 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:16:29 compute-0 nova_compute[250018]: 2026-01-20 15:16:29.069 250022 WARNING nova.compute.manager [req-0262bb10-7234-4790-8a15-b7122d118195 req-a6c76d11-4074-483b-a5a3-a4506ecce1bd 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b3504af3-390e-4ab0-8af6-15749a887d8f] Received unexpected event network-vif-plugged-1b18c40e-cce7-4971-98d2-c95ec41c9040 for instance with vm_state active and task_state resize_finish.
Jan 20 15:16:29 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:16:29 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:16:29 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:16:29.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:16:29 compute-0 ovn_controller[148666]: 2026-01-20T15:16:29Z|00692|binding|INFO|Claiming lport ebbe6083-de9d-43ca-9ab2-cf306ea0be4d for this chassis.
Jan 20 15:16:29 compute-0 ovn_controller[148666]: 2026-01-20T15:16:29Z|00693|binding|INFO|ebbe6083-de9d-43ca-9ab2-cf306ea0be4d: Claiming fa:16:3e:a9:77:ea 10.100.0.5
Jan 20 15:16:29 compute-0 ovn_controller[148666]: 2026-01-20T15:16:29Z|00694|binding|INFO|Setting lport ebbe6083-de9d-43ca-9ab2-cf306ea0be4d up in Southbound
Jan 20 15:16:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:16:29.636 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a9:77:ea 10.100.0.5'], port_security=['fa:16:3e:a9:77:ea 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'b5656c1b-5ac7-4b93-a25d-420e1e294678', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-be008398-8f36-4967-9cc8-6412553c79f3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1ed5feeeafe7448a8efb47ab975b0ead', 'neutron:revision_number': '11', 'neutron:security_group_ids': '1f7c21ab-d630-47d9-a822-01d8ee3b1d55', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.206'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cdafc2c8-f418-454c-b49a-dbb24d8d2298, chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=ebbe6083-de9d-43ca-9ab2-cf306ea0be4d) old=Port_Binding(up=[False], additional_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:16:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:16:29.637 160071 INFO neutron.agent.ovn.metadata.agent [-] Port ebbe6083-de9d-43ca-9ab2-cf306ea0be4d in datapath be008398-8f36-4967-9cc8-6412553c79f3 bound to our chassis
Jan 20 15:16:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:16:29.638 160071 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network be008398-8f36-4967-9cc8-6412553c79f3
Jan 20 15:16:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:16:29.649 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[627b917c-1b02-4d4f-97ec-b75f9c1b0b5b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:16:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:16:29.650 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapbe008398-81 in ovnmeta-be008398-8f36-4967-9cc8-6412553c79f3 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 15:16:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:16:29.654 257604 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapbe008398-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 15:16:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:16:29.654 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[b4eec931-e8b7-48c0-80ae-7c9c9a0b5020]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:16:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:16:29.656 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[92c19fb5-fb69-4cf7-9245-7a1589ab2895]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:16:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:16:29.669 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[900f2bae-91e1-475c-a5a4-2d62bb9fbea2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:16:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:16:29.696 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[d2509fb9-c17f-4910-8aae-551cfa272bab]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:16:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:16:29.724 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[4fc90517-24c2-4464-9883-a6c65f1d857f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:16:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:16:29.736 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[9af43519-ecff-4d3a-b122-c466acf6b0a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:16:29 compute-0 NetworkManager[48960]: <info>  [1768922189.7373] manager: (tapbe008398-80): new Veth device (/org/freedesktop/NetworkManager/Devices/334)
Jan 20 15:16:29 compute-0 systemd-udevd[362842]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 15:16:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:16:29.761 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[a7b1a7cb-4217-4dbc-b800-8895806adb4b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:16:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:16:29.765 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[e6c3fd39-1224-4c14-932b-040c25d128d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:16:29 compute-0 NetworkManager[48960]: <info>  [1768922189.7950] device (tapbe008398-80): carrier: link connected
Jan 20 15:16:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:16:29.802 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[fb23ab6f-8865-45d3-9cfb-4bbc1ade8282]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:16:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:16:29.819 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[29dabcb8-1a86-4b2a-8ae3-5b60c8e0d774]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbe008398-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:12:1d:8a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 219], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 813051, 'reachable_time': 28883, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 362861, 'error': None, 'target': 'ovnmeta-be008398-8f36-4967-9cc8-6412553c79f3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:16:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:16:29.835 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[620a2601-8bb8-4a38-8a2b-a46006e377ce]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe12:1d8a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 813051, 'tstamp': 813051}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 362862, 'error': None, 'target': 'ovnmeta-be008398-8f36-4967-9cc8-6412553c79f3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:16:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:16:29.851 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[bfcacbbb-130e-450c-b2d0-a7d21d2a750d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbe008398-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:12:1d:8a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 219], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 813051, 'reachable_time': 28883, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 362863, 'error': None, 'target': 'ovnmeta-be008398-8f36-4967-9cc8-6412553c79f3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:16:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:16:29.878 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[6a40e56f-5e4b-45d8-8c15-98da4f9f96d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:16:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:16:29.938 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[cfdf747d-43e8-419a-b761-7970aa644a0a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:16:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:16:29.940 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbe008398-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:16:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:16:29.940 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:16:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:16:29.941 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbe008398-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:16:29 compute-0 nova_compute[250018]: 2026-01-20 15:16:29.942 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:16:29 compute-0 NetworkManager[48960]: <info>  [1768922189.9433] manager: (tapbe008398-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/335)
Jan 20 15:16:29 compute-0 kernel: tapbe008398-80: entered promiscuous mode
Jan 20 15:16:29 compute-0 nova_compute[250018]: 2026-01-20 15:16:29.946 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:16:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:16:29.947 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapbe008398-80, col_values=(('external_ids', {'iface-id': 'f3fd8b5d-b152-40f2-b571-88de4b49c77e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:16:29 compute-0 nova_compute[250018]: 2026-01-20 15:16:29.948 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:16:29 compute-0 ovn_controller[148666]: 2026-01-20T15:16:29Z|00695|binding|INFO|Releasing lport f3fd8b5d-b152-40f2-b571-88de4b49c77e from this chassis (sb_readonly=0)
Jan 20 15:16:29 compute-0 nova_compute[250018]: 2026-01-20 15:16:29.949 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:16:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:16:29.951 160071 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/be008398-8f36-4967-9cc8-6412553c79f3.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/be008398-8f36-4967-9cc8-6412553c79f3.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 15:16:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:16:29.952 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[2f94ab03-40ae-45fb-8059-e3c984106775]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:16:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:16:29.953 160071 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 15:16:29 compute-0 ovn_metadata_agent[160049]: global
Jan 20 15:16:29 compute-0 ovn_metadata_agent[160049]:     log         /dev/log local0 debug
Jan 20 15:16:29 compute-0 ovn_metadata_agent[160049]:     log-tag     haproxy-metadata-proxy-be008398-8f36-4967-9cc8-6412553c79f3
Jan 20 15:16:29 compute-0 ovn_metadata_agent[160049]:     user        root
Jan 20 15:16:29 compute-0 ovn_metadata_agent[160049]:     group       root
Jan 20 15:16:29 compute-0 ovn_metadata_agent[160049]:     maxconn     1024
Jan 20 15:16:29 compute-0 ovn_metadata_agent[160049]:     pidfile     /var/lib/neutron/external/pids/be008398-8f36-4967-9cc8-6412553c79f3.pid.haproxy
Jan 20 15:16:29 compute-0 ovn_metadata_agent[160049]:     daemon
Jan 20 15:16:29 compute-0 ovn_metadata_agent[160049]: 
Jan 20 15:16:29 compute-0 ovn_metadata_agent[160049]: defaults
Jan 20 15:16:29 compute-0 ovn_metadata_agent[160049]:     log global
Jan 20 15:16:29 compute-0 ovn_metadata_agent[160049]:     mode http
Jan 20 15:16:29 compute-0 ovn_metadata_agent[160049]:     option httplog
Jan 20 15:16:29 compute-0 ovn_metadata_agent[160049]:     option dontlognull
Jan 20 15:16:29 compute-0 ovn_metadata_agent[160049]:     option http-server-close
Jan 20 15:16:29 compute-0 ovn_metadata_agent[160049]:     option forwardfor
Jan 20 15:16:29 compute-0 ovn_metadata_agent[160049]:     retries                 3
Jan 20 15:16:29 compute-0 ovn_metadata_agent[160049]:     timeout http-request    30s
Jan 20 15:16:29 compute-0 ovn_metadata_agent[160049]:     timeout connect         30s
Jan 20 15:16:29 compute-0 ovn_metadata_agent[160049]:     timeout client          32s
Jan 20 15:16:29 compute-0 ovn_metadata_agent[160049]:     timeout server          32s
Jan 20 15:16:29 compute-0 ovn_metadata_agent[160049]:     timeout http-keep-alive 30s
Jan 20 15:16:29 compute-0 ovn_metadata_agent[160049]: 
Jan 20 15:16:29 compute-0 ovn_metadata_agent[160049]: 
Jan 20 15:16:29 compute-0 ovn_metadata_agent[160049]: listen listener
Jan 20 15:16:29 compute-0 ovn_metadata_agent[160049]:     bind 169.254.169.254:80
Jan 20 15:16:29 compute-0 ovn_metadata_agent[160049]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 15:16:29 compute-0 ovn_metadata_agent[160049]:     http-request add-header X-OVN-Network-ID be008398-8f36-4967-9cc8-6412553c79f3
Jan 20 15:16:29 compute-0 ovn_metadata_agent[160049]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 15:16:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:16:29.954 160071 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-be008398-8f36-4967-9cc8-6412553c79f3', 'env', 'PROCESS_TAG=haproxy-be008398-8f36-4967-9cc8-6412553c79f3', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/be008398-8f36-4967-9cc8-6412553c79f3.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 15:16:29 compute-0 nova_compute[250018]: 2026-01-20 15:16:29.962 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:16:30 compute-0 nova_compute[250018]: 2026-01-20 15:16:30.055 250022 INFO nova.compute.manager [None req-e9459132-ba13-41d9-a7cf-f952f09929e6 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] Post operation of migration started
Jan 20 15:16:30 compute-0 podman[362897]: 2026-01-20 15:16:30.35725362 +0000 UTC m=+0.057609845 container create df694e2266f31bba8b240bd676daae6ef0a0632cc8df44905f1c615c3a35d538 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-be008398-8f36-4967-9cc8-6412553c79f3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:16:30 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:16:30 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:16:30 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:16:30.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:16:30 compute-0 systemd[1]: Started libpod-conmon-df694e2266f31bba8b240bd676daae6ef0a0632cc8df44905f1c615c3a35d538.scope.
Jan 20 15:16:30 compute-0 podman[362897]: 2026-01-20 15:16:30.323652804 +0000 UTC m=+0.024009049 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 15:16:30 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:16:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3bf605f61c565a99028c23024d162a3ddef217c4ac36dcfddf8a856bf20c9776/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 15:16:30 compute-0 podman[362897]: 2026-01-20 15:16:30.450207618 +0000 UTC m=+0.150563863 container init df694e2266f31bba8b240bd676daae6ef0a0632cc8df44905f1c615c3a35d538 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-be008398-8f36-4967-9cc8-6412553c79f3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 20 15:16:30 compute-0 podman[362897]: 2026-01-20 15:16:30.455635214 +0000 UTC m=+0.155991439 container start df694e2266f31bba8b240bd676daae6ef0a0632cc8df44905f1c615c3a35d538 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-be008398-8f36-4967-9cc8-6412553c79f3, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS)
Jan 20 15:16:30 compute-0 neutron-haproxy-ovnmeta-be008398-8f36-4967-9cc8-6412553c79f3[362912]: [NOTICE]   (362916) : New worker (362918) forked
Jan 20 15:16:30 compute-0 neutron-haproxy-ovnmeta-be008398-8f36-4967-9cc8-6412553c79f3[362912]: [NOTICE]   (362916) : Loading success.
Jan 20 15:16:30 compute-0 nova_compute[250018]: 2026-01-20 15:16:30.503 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:16:30 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2819: 321 pgs: 321 active+clean; 599 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 630 KiB/s rd, 3.8 KiB/s wr, 54 op/s
Jan 20 15:16:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:16:30.787 160071 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:16:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:16:30.787 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:16:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:16:30.788 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:16:30 compute-0 ceph-mon[74360]: pgmap v2819: 321 pgs: 321 active+clean; 599 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 630 KiB/s rd, 3.8 KiB/s wr, 54 op/s
Jan 20 15:16:30 compute-0 nova_compute[250018]: 2026-01-20 15:16:30.972 250022 DEBUG oslo_concurrency.lockutils [None req-e9459132-ba13-41d9-a7cf-f952f09929e6 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] Acquiring lock "refresh_cache-b5656c1b-5ac7-4b93-a25d-420e1e294678" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:16:30 compute-0 nova_compute[250018]: 2026-01-20 15:16:30.973 250022 DEBUG oslo_concurrency.lockutils [None req-e9459132-ba13-41d9-a7cf-f952f09929e6 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] Acquired lock "refresh_cache-b5656c1b-5ac7-4b93-a25d-420e1e294678" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:16:30 compute-0 nova_compute[250018]: 2026-01-20 15:16:30.973 250022 DEBUG nova.network.neutron [None req-e9459132-ba13-41d9-a7cf-f952f09929e6 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 15:16:31 compute-0 nova_compute[250018]: 2026-01-20 15:16:31.204 250022 DEBUG nova.compute.manager [req-b54914f9-8397-4a36-85f4-29fe705e20f3 req-c13f15bc-9c4b-4a6b-a0b0-cef4e8d2c506 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b3504af3-390e-4ab0-8af6-15749a887d8f] Received event network-vif-plugged-1b18c40e-cce7-4971-98d2-c95ec41c9040 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:16:31 compute-0 nova_compute[250018]: 2026-01-20 15:16:31.205 250022 DEBUG oslo_concurrency.lockutils [req-b54914f9-8397-4a36-85f4-29fe705e20f3 req-c13f15bc-9c4b-4a6b-a0b0-cef4e8d2c506 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "b3504af3-390e-4ab0-8af6-15749a887d8f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:16:31 compute-0 nova_compute[250018]: 2026-01-20 15:16:31.205 250022 DEBUG oslo_concurrency.lockutils [req-b54914f9-8397-4a36-85f4-29fe705e20f3 req-c13f15bc-9c4b-4a6b-a0b0-cef4e8d2c506 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "b3504af3-390e-4ab0-8af6-15749a887d8f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:16:31 compute-0 nova_compute[250018]: 2026-01-20 15:16:31.205 250022 DEBUG oslo_concurrency.lockutils [req-b54914f9-8397-4a36-85f4-29fe705e20f3 req-c13f15bc-9c4b-4a6b-a0b0-cef4e8d2c506 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "b3504af3-390e-4ab0-8af6-15749a887d8f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:16:31 compute-0 nova_compute[250018]: 2026-01-20 15:16:31.206 250022 DEBUG nova.compute.manager [req-b54914f9-8397-4a36-85f4-29fe705e20f3 req-c13f15bc-9c4b-4a6b-a0b0-cef4e8d2c506 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b3504af3-390e-4ab0-8af6-15749a887d8f] No waiting events found dispatching network-vif-plugged-1b18c40e-cce7-4971-98d2-c95ec41c9040 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:16:31 compute-0 nova_compute[250018]: 2026-01-20 15:16:31.206 250022 WARNING nova.compute.manager [req-b54914f9-8397-4a36-85f4-29fe705e20f3 req-c13f15bc-9c4b-4a6b-a0b0-cef4e8d2c506 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b3504af3-390e-4ab0-8af6-15749a887d8f] Received unexpected event network-vif-plugged-1b18c40e-cce7-4971-98d2-c95ec41c9040 for instance with vm_state resized and task_state None.
Jan 20 15:16:31 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:16:31 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:16:31 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:16:31.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:16:31 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:16:32 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:16:32 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:16:32 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:16:32.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:16:32 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2820: 321 pgs: 321 active+clean; 599 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.4 MiB/s rd, 4.2 KiB/s wr, 82 op/s
Jan 20 15:16:32 compute-0 nova_compute[250018]: 2026-01-20 15:16:32.912 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:16:33 compute-0 nova_compute[250018]: 2026-01-20 15:16:33.196 250022 DEBUG nova.network.neutron [None req-e9459132-ba13-41d9-a7cf-f952f09929e6 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] Updating instance_info_cache with network_info: [{"id": "ebbe6083-de9d-43ca-9ab2-cf306ea0be4d", "address": "fa:16:3e:a9:77:ea", "network": {"id": "be008398-8f36-4967-9cc8-6412553c79f3", "bridge": "br-int", "label": "tempest-network-smoke--259749923", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ed5feeeafe7448a8efb47ab975b0ead", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapebbe6083-de", "ovs_interfaceid": "ebbe6083-de9d-43ca-9ab2-cf306ea0be4d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:16:33 compute-0 nova_compute[250018]: 2026-01-20 15:16:33.214 250022 DEBUG oslo_concurrency.lockutils [None req-e9459132-ba13-41d9-a7cf-f952f09929e6 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] Releasing lock "refresh_cache-b5656c1b-5ac7-4b93-a25d-420e1e294678" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:16:33 compute-0 nova_compute[250018]: 2026-01-20 15:16:33.232 250022 DEBUG oslo_concurrency.lockutils [None req-e9459132-ba13-41d9-a7cf-f952f09929e6 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:16:33 compute-0 nova_compute[250018]: 2026-01-20 15:16:33.232 250022 DEBUG oslo_concurrency.lockutils [None req-e9459132-ba13-41d9-a7cf-f952f09929e6 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:16:33 compute-0 nova_compute[250018]: 2026-01-20 15:16:33.233 250022 DEBUG oslo_concurrency.lockutils [None req-e9459132-ba13-41d9-a7cf-f952f09929e6 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:16:33 compute-0 nova_compute[250018]: 2026-01-20 15:16:33.238 250022 INFO nova.virt.libvirt.driver [None req-e9459132-ba13-41d9-a7cf-f952f09929e6 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] Sending announce-self command to QEMU monitor. Attempt 1 of 3
Jan 20 15:16:33 compute-0 virtqemud[249565]: Domain id=84 name='instance-000000be' uuid=b5656c1b-5ac7-4b93-a25d-420e1e294678 is tainted: custom-monitor
Jan 20 15:16:33 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:16:33 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:16:33 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:16:33.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:16:33 compute-0 ceph-mon[74360]: pgmap v2820: 321 pgs: 321 active+clean; 599 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.4 MiB/s rd, 4.2 KiB/s wr, 82 op/s
Jan 20 15:16:34 compute-0 nova_compute[250018]: 2026-01-20 15:16:34.247 250022 INFO nova.virt.libvirt.driver [None req-e9459132-ba13-41d9-a7cf-f952f09929e6 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] Sending announce-self command to QEMU monitor. Attempt 2 of 3
Jan 20 15:16:34 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:16:34 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:16:34 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:16:34.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:16:34 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2821: 321 pgs: 321 active+clean; 599 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 4.5 KiB/s wr, 116 op/s
Jan 20 15:16:34 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e407 do_prune osdmap full prune enabled
Jan 20 15:16:34 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e408 e408: 3 total, 3 up, 3 in
Jan 20 15:16:34 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e408: 3 total, 3 up, 3 in
Jan 20 15:16:35 compute-0 nova_compute[250018]: 2026-01-20 15:16:35.251 250022 INFO nova.virt.libvirt.driver [None req-e9459132-ba13-41d9-a7cf-f952f09929e6 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] Sending announce-self command to QEMU monitor. Attempt 3 of 3
Jan 20 15:16:35 compute-0 nova_compute[250018]: 2026-01-20 15:16:35.256 250022 DEBUG nova.compute.manager [None req-e9459132-ba13-41d9-a7cf-f952f09929e6 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:16:35 compute-0 nova_compute[250018]: 2026-01-20 15:16:35.280 250022 DEBUG nova.objects.instance [None req-e9459132-ba13-41d9-a7cf-f952f09929e6 1998f6e29a51438c82e65b66da23d380 22d14e9a73254c8981e4a13fa61158c4 - - default default] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Jan 20 15:16:35 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:16:35 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:16:35 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:16:35.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:16:35 compute-0 podman[362930]: 2026-01-20 15:16:35.501825986 +0000 UTC m=+0.069031763 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Jan 20 15:16:35 compute-0 nova_compute[250018]: 2026-01-20 15:16:35.505 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:16:35 compute-0 podman[362929]: 2026-01-20 15:16:35.515991199 +0000 UTC m=+0.078194700 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, 
org.label-schema.vendor=CentOS, container_name=ovn_controller)
Jan 20 15:16:35 compute-0 ceph-mon[74360]: pgmap v2821: 321 pgs: 321 active+clean; 599 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 4.5 KiB/s wr, 116 op/s
Jan 20 15:16:35 compute-0 ceph-mon[74360]: osdmap e408: 3 total, 3 up, 3 in
Jan 20 15:16:36 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:16:36 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:16:36 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:16:36.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:16:36 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2823: 321 pgs: 2 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 317 active+clean; 599 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.4 MiB/s rd, 5.9 KiB/s wr, 124 op/s
Jan 20 15:16:36 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e408 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:16:36 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/796769356' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:16:36 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/2719345878' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:16:36 compute-0 ceph-mon[74360]: pgmap v2823: 321 pgs: 2 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 317 active+clean; 599 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.4 MiB/s rd, 5.9 KiB/s wr, 124 op/s
Jan 20 15:16:37 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:16:37 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:16:37 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:16:37.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:16:37 compute-0 nova_compute[250018]: 2026-01-20 15:16:37.663 250022 INFO nova.compute.manager [None req-4010624e-26da-4167-811a-dca75b62d989 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] Get console output
Jan 20 15:16:37 compute-0 nova_compute[250018]: 2026-01-20 15:16:37.669 331017 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Jan 20 15:16:37 compute-0 nova_compute[250018]: 2026-01-20 15:16:37.921 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:16:37 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/512128344' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:16:38 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:16:38 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:16:38 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:16:38.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:16:38 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2824: 321 pgs: 2 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 317 active+clean; 599 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 5.8 KiB/s wr, 123 op/s
Jan 20 15:16:38 compute-0 ceph-mon[74360]: pgmap v2824: 321 pgs: 2 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 317 active+clean; 599 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 5.8 KiB/s wr, 123 op/s
Jan 20 15:16:39 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:16:39 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:16:39 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:16:39.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:16:40 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:16:40 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:16:40 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:16:40.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:16:40 compute-0 nova_compute[250018]: 2026-01-20 15:16:40.507 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:16:40 compute-0 nova_compute[250018]: 2026-01-20 15:16:40.575 250022 DEBUG nova.compute.manager [req-9520b4d0-b91f-4025-9b6c-f8bebab7f7f2 req-74f006bf-afa8-43b0-bee1-598f09a67215 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] Received event network-changed-ebbe6083-de9d-43ca-9ab2-cf306ea0be4d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:16:40 compute-0 nova_compute[250018]: 2026-01-20 15:16:40.576 250022 DEBUG nova.compute.manager [req-9520b4d0-b91f-4025-9b6c-f8bebab7f7f2 req-74f006bf-afa8-43b0-bee1-598f09a67215 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] Refreshing instance network info cache due to event network-changed-ebbe6083-de9d-43ca-9ab2-cf306ea0be4d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 15:16:40 compute-0 nova_compute[250018]: 2026-01-20 15:16:40.576 250022 DEBUG oslo_concurrency.lockutils [req-9520b4d0-b91f-4025-9b6c-f8bebab7f7f2 req-74f006bf-afa8-43b0-bee1-598f09a67215 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-b5656c1b-5ac7-4b93-a25d-420e1e294678" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:16:40 compute-0 nova_compute[250018]: 2026-01-20 15:16:40.576 250022 DEBUG oslo_concurrency.lockutils [req-9520b4d0-b91f-4025-9b6c-f8bebab7f7f2 req-74f006bf-afa8-43b0-bee1-598f09a67215 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-b5656c1b-5ac7-4b93-a25d-420e1e294678" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:16:40 compute-0 nova_compute[250018]: 2026-01-20 15:16:40.576 250022 DEBUG nova.network.neutron [req-9520b4d0-b91f-4025-9b6c-f8bebab7f7f2 req-74f006bf-afa8-43b0-bee1-598f09a67215 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] Refreshing network info cache for port ebbe6083-de9d-43ca-9ab2-cf306ea0be4d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 15:16:40 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2825: 321 pgs: 321 active+clean; 599 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 3.7 KiB/s wr, 70 op/s
Jan 20 15:16:40 compute-0 nova_compute[250018]: 2026-01-20 15:16:40.620 250022 DEBUG oslo_concurrency.lockutils [None req-a393efb1-8788-4663-b54f-2dcb5a26b436 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Acquiring lock "b5656c1b-5ac7-4b93-a25d-420e1e294678" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:16:40 compute-0 nova_compute[250018]: 2026-01-20 15:16:40.621 250022 DEBUG oslo_concurrency.lockutils [None req-a393efb1-8788-4663-b54f-2dcb5a26b436 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lock "b5656c1b-5ac7-4b93-a25d-420e1e294678" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:16:40 compute-0 nova_compute[250018]: 2026-01-20 15:16:40.621 250022 DEBUG oslo_concurrency.lockutils [None req-a393efb1-8788-4663-b54f-2dcb5a26b436 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Acquiring lock "b5656c1b-5ac7-4b93-a25d-420e1e294678-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:16:40 compute-0 nova_compute[250018]: 2026-01-20 15:16:40.621 250022 DEBUG oslo_concurrency.lockutils [None req-a393efb1-8788-4663-b54f-2dcb5a26b436 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lock "b5656c1b-5ac7-4b93-a25d-420e1e294678-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:16:40 compute-0 nova_compute[250018]: 2026-01-20 15:16:40.622 250022 DEBUG oslo_concurrency.lockutils [None req-a393efb1-8788-4663-b54f-2dcb5a26b436 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lock "b5656c1b-5ac7-4b93-a25d-420e1e294678-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:16:40 compute-0 nova_compute[250018]: 2026-01-20 15:16:40.622 250022 INFO nova.compute.manager [None req-a393efb1-8788-4663-b54f-2dcb5a26b436 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] Terminating instance
Jan 20 15:16:40 compute-0 nova_compute[250018]: 2026-01-20 15:16:40.623 250022 DEBUG nova.compute.manager [None req-a393efb1-8788-4663-b54f-2dcb5a26b436 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 15:16:41 compute-0 ceph-mon[74360]: pgmap v2825: 321 pgs: 321 active+clean; 599 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 3.7 KiB/s wr, 70 op/s
Jan 20 15:16:41 compute-0 kernel: tapebbe6083-de (unregistering): left promiscuous mode
Jan 20 15:16:41 compute-0 NetworkManager[48960]: <info>  [1768922201.0751] device (tapebbe6083-de): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 15:16:41 compute-0 ovn_controller[148666]: 2026-01-20T15:16:41Z|00696|binding|INFO|Releasing lport ebbe6083-de9d-43ca-9ab2-cf306ea0be4d from this chassis (sb_readonly=0)
Jan 20 15:16:41 compute-0 ovn_controller[148666]: 2026-01-20T15:16:41Z|00697|binding|INFO|Setting lport ebbe6083-de9d-43ca-9ab2-cf306ea0be4d down in Southbound
Jan 20 15:16:41 compute-0 ovn_controller[148666]: 2026-01-20T15:16:41Z|00698|binding|INFO|Removing iface tapebbe6083-de ovn-installed in OVS
Jan 20 15:16:41 compute-0 nova_compute[250018]: 2026-01-20 15:16:41.086 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:16:41 compute-0 nova_compute[250018]: 2026-01-20 15:16:41.089 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:16:41 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:16:41.094 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a9:77:ea 10.100.0.5'], port_security=['fa:16:3e:a9:77:ea 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'b5656c1b-5ac7-4b93-a25d-420e1e294678', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-be008398-8f36-4967-9cc8-6412553c79f3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1ed5feeeafe7448a8efb47ab975b0ead', 'neutron:revision_number': '13', 'neutron:security_group_ids': '1f7c21ab-d630-47d9-a822-01d8ee3b1d55', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cdafc2c8-f418-454c-b49a-dbb24d8d2298, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=ebbe6083-de9d-43ca-9ab2-cf306ea0be4d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:16:41 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:16:41.095 160071 INFO neutron.agent.ovn.metadata.agent [-] Port ebbe6083-de9d-43ca-9ab2-cf306ea0be4d in datapath be008398-8f36-4967-9cc8-6412553c79f3 unbound from our chassis
Jan 20 15:16:41 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:16:41.096 160071 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network be008398-8f36-4967-9cc8-6412553c79f3, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 15:16:41 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:16:41.097 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[fa7b0196-cf83-4f4d-8286-4059c9353581]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:16:41 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:16:41.098 160071 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-be008398-8f36-4967-9cc8-6412553c79f3 namespace which is not needed anymore
Jan 20 15:16:41 compute-0 nova_compute[250018]: 2026-01-20 15:16:41.105 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:16:41 compute-0 systemd[1]: machine-qemu\x2d84\x2dinstance\x2d000000be.scope: Deactivated successfully.
Jan 20 15:16:41 compute-0 systemd[1]: machine-qemu\x2d84\x2dinstance\x2d000000be.scope: Consumed 1.954s CPU time.
Jan 20 15:16:41 compute-0 systemd-machined[216401]: Machine qemu-84-instance-000000be terminated.
Jan 20 15:16:41 compute-0 neutron-haproxy-ovnmeta-be008398-8f36-4967-9cc8-6412553c79f3[362912]: [NOTICE]   (362916) : haproxy version is 2.8.14-c23fe91
Jan 20 15:16:41 compute-0 neutron-haproxy-ovnmeta-be008398-8f36-4967-9cc8-6412553c79f3[362912]: [NOTICE]   (362916) : path to executable is /usr/sbin/haproxy
Jan 20 15:16:41 compute-0 neutron-haproxy-ovnmeta-be008398-8f36-4967-9cc8-6412553c79f3[362912]: [WARNING]  (362916) : Exiting Master process...
Jan 20 15:16:41 compute-0 neutron-haproxy-ovnmeta-be008398-8f36-4967-9cc8-6412553c79f3[362912]: [ALERT]    (362916) : Current worker (362918) exited with code 143 (Terminated)
Jan 20 15:16:41 compute-0 neutron-haproxy-ovnmeta-be008398-8f36-4967-9cc8-6412553c79f3[362912]: [WARNING]  (362916) : All workers exited. Exiting... (0)
Jan 20 15:16:41 compute-0 systemd[1]: libpod-df694e2266f31bba8b240bd676daae6ef0a0632cc8df44905f1c615c3a35d538.scope: Deactivated successfully.
Jan 20 15:16:41 compute-0 nova_compute[250018]: 2026-01-20 15:16:41.269 250022 INFO nova.virt.libvirt.driver [-] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] Instance destroyed successfully.
Jan 20 15:16:41 compute-0 nova_compute[250018]: 2026-01-20 15:16:41.270 250022 DEBUG nova.objects.instance [None req-a393efb1-8788-4663-b54f-2dcb5a26b436 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lazy-loading 'resources' on Instance uuid b5656c1b-5ac7-4b93-a25d-420e1e294678 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:16:41 compute-0 podman[362999]: 2026-01-20 15:16:41.272638635 +0000 UTC m=+0.056471404 container died df694e2266f31bba8b240bd676daae6ef0a0632cc8df44905f1c615c3a35d538 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-be008398-8f36-4967-9cc8-6412553c79f3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:16:41 compute-0 nova_compute[250018]: 2026-01-20 15:16:41.296 250022 DEBUG nova.virt.libvirt.vif [None req-a393efb1-8788-4663-b54f-2dcb5a26b436 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-20T15:15:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-470752205',display_name='tempest-TestNetworkAdvancedServerOps-server-470752205',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-470752205',id=190,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAmgIhRv7SylyyDoXsoPKcXNSjcMbMCF7/IROo74ZCmTo9LbWE2Sanv271vjV+ounImSggkddfPFTxsQsqOAeGJ63UOB1CVRLYAEgvPLI8ngnO4k9hlNWAKjL0F7Yejx9A==',key_name='tempest-TestNetworkAdvancedServerOps-131135683',keypairs=<?>,launch_index=0,launched_at=2026-01-20T15:15:50Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1ed5feeeafe7448a8efb47ab975b0ead',ramdisk_id='',reservation_id='r-kjhk913w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-175282664',owner_user_name='tempest-TestNetworkAdvancedServerOps-175282664-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T15:16:35Z,user_data=None,user_id='442a7a5cb8ea426a82be9762b262d171',uuid=b5656c1b-5ac7-4b93-a25d-420e1e294678,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ebbe6083-de9d-43ca-9ab2-cf306ea0be4d", "address": "fa:16:3e:a9:77:ea", "network": {"id": "be008398-8f36-4967-9cc8-6412553c79f3", "bridge": "br-int", "label": "tempest-network-smoke--259749923", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ed5feeeafe7448a8efb47ab975b0ead", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapebbe6083-de", "ovs_interfaceid": "ebbe6083-de9d-43ca-9ab2-cf306ea0be4d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 15:16:41 compute-0 nova_compute[250018]: 2026-01-20 15:16:41.297 250022 DEBUG nova.network.os_vif_util [None req-a393efb1-8788-4663-b54f-2dcb5a26b436 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Converting VIF {"id": "ebbe6083-de9d-43ca-9ab2-cf306ea0be4d", "address": "fa:16:3e:a9:77:ea", "network": {"id": "be008398-8f36-4967-9cc8-6412553c79f3", "bridge": "br-int", "label": "tempest-network-smoke--259749923", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ed5feeeafe7448a8efb47ab975b0ead", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapebbe6083-de", "ovs_interfaceid": "ebbe6083-de9d-43ca-9ab2-cf306ea0be4d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:16:41 compute-0 nova_compute[250018]: 2026-01-20 15:16:41.297 250022 DEBUG nova.network.os_vif_util [None req-a393efb1-8788-4663-b54f-2dcb5a26b436 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a9:77:ea,bridge_name='br-int',has_traffic_filtering=True,id=ebbe6083-de9d-43ca-9ab2-cf306ea0be4d,network=Network(be008398-8f36-4967-9cc8-6412553c79f3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapebbe6083-de') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:16:41 compute-0 nova_compute[250018]: 2026-01-20 15:16:41.298 250022 DEBUG os_vif [None req-a393efb1-8788-4663-b54f-2dcb5a26b436 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:a9:77:ea,bridge_name='br-int',has_traffic_filtering=True,id=ebbe6083-de9d-43ca-9ab2-cf306ea0be4d,network=Network(be008398-8f36-4967-9cc8-6412553c79f3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapebbe6083-de') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 15:16:41 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-df694e2266f31bba8b240bd676daae6ef0a0632cc8df44905f1c615c3a35d538-userdata-shm.mount: Deactivated successfully.
Jan 20 15:16:41 compute-0 nova_compute[250018]: 2026-01-20 15:16:41.299 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:16:41 compute-0 nova_compute[250018]: 2026-01-20 15:16:41.300 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapebbe6083-de, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:16:41 compute-0 nova_compute[250018]: 2026-01-20 15:16:41.301 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:16:41 compute-0 nova_compute[250018]: 2026-01-20 15:16:41.303 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:16:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-3bf605f61c565a99028c23024d162a3ddef217c4ac36dcfddf8a856bf20c9776-merged.mount: Deactivated successfully.
Jan 20 15:16:41 compute-0 nova_compute[250018]: 2026-01-20 15:16:41.307 250022 INFO os_vif [None req-a393efb1-8788-4663-b54f-2dcb5a26b436 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:a9:77:ea,bridge_name='br-int',has_traffic_filtering=True,id=ebbe6083-de9d-43ca-9ab2-cf306ea0be4d,network=Network(be008398-8f36-4967-9cc8-6412553c79f3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapebbe6083-de')
Jan 20 15:16:41 compute-0 podman[362999]: 2026-01-20 15:16:41.317806243 +0000 UTC m=+0.101638982 container cleanup df694e2266f31bba8b240bd676daae6ef0a0632cc8df44905f1c615c3a35d538 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-be008398-8f36-4967-9cc8-6412553c79f3, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 15:16:41 compute-0 systemd[1]: libpod-conmon-df694e2266f31bba8b240bd676daae6ef0a0632cc8df44905f1c615c3a35d538.scope: Deactivated successfully.
Jan 20 15:16:41 compute-0 podman[363050]: 2026-01-20 15:16:41.390100693 +0000 UTC m=+0.049471285 container remove df694e2266f31bba8b240bd676daae6ef0a0632cc8df44905f1c615c3a35d538 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-be008398-8f36-4967-9cc8-6412553c79f3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Jan 20 15:16:41 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:16:41.401 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[0f38c02c-e711-4e4c-ac85-01d0c53c7904]: (4, ('Tue Jan 20 03:16:41 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-be008398-8f36-4967-9cc8-6412553c79f3 (df694e2266f31bba8b240bd676daae6ef0a0632cc8df44905f1c615c3a35d538)\ndf694e2266f31bba8b240bd676daae6ef0a0632cc8df44905f1c615c3a35d538\nTue Jan 20 03:16:41 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-be008398-8f36-4967-9cc8-6412553c79f3 (df694e2266f31bba8b240bd676daae6ef0a0632cc8df44905f1c615c3a35d538)\ndf694e2266f31bba8b240bd676daae6ef0a0632cc8df44905f1c615c3a35d538\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:16:41 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:16:41.403 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[939c2441-3599-4bc2-98d6-84f9a6876002]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:16:41 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:16:41.405 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbe008398-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:16:41 compute-0 kernel: tapbe008398-80: left promiscuous mode
Jan 20 15:16:41 compute-0 nova_compute[250018]: 2026-01-20 15:16:41.408 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:16:41 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:16:41 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:16:41 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:16:41.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:16:41 compute-0 nova_compute[250018]: 2026-01-20 15:16:41.421 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:16:41 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:16:41.424 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[239a103c-2c59-45fd-bf2e-0727416cb6aa]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:16:41 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:16:41.439 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[0c7be347-48b4-46a0-a304-63fcd1b86e00]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:16:41 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:16:41.440 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[47b1ab33-af53-46ff-aa7f-3c95da9a2d4d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:16:41 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:16:41.458 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[85340bb9-f2e1-4983-847b-e4246ff2a661]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 813044, 'reachable_time': 25851, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 363067, 'error': None, 'target': 'ovnmeta-be008398-8f36-4967-9cc8-6412553c79f3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:16:41 compute-0 systemd[1]: run-netns-ovnmeta\x2dbe008398\x2d8f36\x2d4967\x2d9cc8\x2d6412553c79f3.mount: Deactivated successfully.
Jan 20 15:16:41 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:16:41.462 160517 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-be008398-8f36-4967-9cc8-6412553c79f3 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 15:16:41 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:16:41.462 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[a3189272-8637-4727-a93d-95daef3e01a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:16:41 compute-0 nova_compute[250018]: 2026-01-20 15:16:41.711 250022 INFO nova.virt.libvirt.driver [None req-a393efb1-8788-4663-b54f-2dcb5a26b436 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] Deleting instance files /var/lib/nova/instances/b5656c1b-5ac7-4b93-a25d-420e1e294678_del
Jan 20 15:16:41 compute-0 nova_compute[250018]: 2026-01-20 15:16:41.712 250022 INFO nova.virt.libvirt.driver [None req-a393efb1-8788-4663-b54f-2dcb5a26b436 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] Deletion of /var/lib/nova/instances/b5656c1b-5ac7-4b93-a25d-420e1e294678_del complete
Jan 20 15:16:41 compute-0 nova_compute[250018]: 2026-01-20 15:16:41.787 250022 INFO nova.compute.manager [None req-a393efb1-8788-4663-b54f-2dcb5a26b436 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] Took 1.16 seconds to destroy the instance on the hypervisor.
Jan 20 15:16:41 compute-0 nova_compute[250018]: 2026-01-20 15:16:41.788 250022 DEBUG oslo.service.loopingcall [None req-a393efb1-8788-4663-b54f-2dcb5a26b436 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 15:16:41 compute-0 nova_compute[250018]: 2026-01-20 15:16:41.788 250022 DEBUG nova.compute.manager [-] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 15:16:41 compute-0 nova_compute[250018]: 2026-01-20 15:16:41.788 250022 DEBUG nova.network.neutron [-] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 15:16:41 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e408 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:16:41 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e408 do_prune osdmap full prune enabled
Jan 20 15:16:41 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e409 e409: 3 total, 3 up, 3 in
Jan 20 15:16:41 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e409: 3 total, 3 up, 3 in
Jan 20 15:16:41 compute-0 sudo[363069]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:16:41 compute-0 sudo[363069]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:16:41 compute-0 sudo[363069]: pam_unix(sudo:session): session closed for user root
Jan 20 15:16:42 compute-0 sudo[363094]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:16:42 compute-0 sudo[363094]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:16:42 compute-0 sudo[363094]: pam_unix(sudo:session): session closed for user root
Jan 20 15:16:42 compute-0 ovn_controller[148666]: 2026-01-20T15:16:42Z|00095|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:eb:05:c3 10.100.0.7
Jan 20 15:16:42 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:16:42 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:16:42 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:16:42.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:16:42 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2827: 321 pgs: 321 active+clean; 599 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 211 KiB/s rd, 3.7 KiB/s wr, 11 op/s
Jan 20 15:16:42 compute-0 ceph-mon[74360]: osdmap e409: 3 total, 3 up, 3 in
Jan 20 15:16:42 compute-0 ceph-mon[74360]: pgmap v2827: 321 pgs: 321 active+clean; 599 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 211 KiB/s rd, 3.7 KiB/s wr, 11 op/s
Jan 20 15:16:42 compute-0 ceph-mon[74360]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #141. Immutable memtables: 0.
Jan 20 15:16:42 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:16:42.875034) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 15:16:42 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:856] [default] [JOB 85] Flushing memtable with next log file: 141
Jan 20 15:16:42 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768922202875085, "job": 85, "event": "flush_started", "num_memtables": 1, "num_entries": 2194, "num_deletes": 255, "total_data_size": 3662135, "memory_usage": 3728712, "flush_reason": "Manual Compaction"}
Jan 20 15:16:42 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:885] [default] [JOB 85] Level-0 flush table #142: started
Jan 20 15:16:42 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768922202896684, "cf_name": "default", "job": 85, "event": "table_file_creation", "file_number": 142, "file_size": 3592662, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 61597, "largest_seqno": 63789, "table_properties": {"data_size": 3582980, "index_size": 6047, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2565, "raw_key_size": 20909, "raw_average_key_size": 20, "raw_value_size": 3563296, "raw_average_value_size": 3524, "num_data_blocks": 263, "num_entries": 1011, "num_filter_entries": 1011, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768922008, "oldest_key_time": 1768922008, "file_creation_time": 1768922202, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1688fb6f-f397-4aac-a0e8-874ff91d97a7", "db_session_id": "5V3N2TVXYZBCXP55EZHK", "orig_file_number": 142, "seqno_to_time_mapping": "N/A"}}
Jan 20 15:16:42 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 85] Flush lasted 21682 microseconds, and 7746 cpu microseconds.
Jan 20 15:16:42 compute-0 ceph-mon[74360]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 15:16:42 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:16:42.896722) [db/flush_job.cc:967] [default] [JOB 85] Level-0 flush table #142: 3592662 bytes OK
Jan 20 15:16:42 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:16:42.896740) [db/memtable_list.cc:519] [default] Level-0 commit table #142 started
Jan 20 15:16:42 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:16:42.898207) [db/memtable_list.cc:722] [default] Level-0 commit table #142: memtable #1 done
Jan 20 15:16:42 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:16:42.898221) EVENT_LOG_v1 {"time_micros": 1768922202898217, "job": 85, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 15:16:42 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:16:42.898239) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 15:16:42 compute-0 ceph-mon[74360]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 85] Try to delete WAL files size 3653170, prev total WAL file size 3653170, number of live WAL files 2.
Jan 20 15:16:42 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000138.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:16:42 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:16:42.899414) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035373733' seq:72057594037927935, type:22 .. '7061786F730036303235' seq:0, type:0; will stop at (end)
Jan 20 15:16:42 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 86] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 15:16:42 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 85 Base level 0, inputs: [142(3508KB)], [140(9718KB)]
Jan 20 15:16:42 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768922202899483, "job": 86, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [142], "files_L6": [140], "score": -1, "input_data_size": 13544082, "oldest_snapshot_seqno": -1}
Jan 20 15:16:42 compute-0 nova_compute[250018]: 2026-01-20 15:16:42.924 250022 DEBUG nova.compute.manager [req-29e7c14e-9348-4636-8df9-07466b40578b req-410ab073-b8e8-4aa3-a2f3-9cff23588ac2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] Received event network-vif-unplugged-ebbe6083-de9d-43ca-9ab2-cf306ea0be4d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:16:42 compute-0 nova_compute[250018]: 2026-01-20 15:16:42.924 250022 DEBUG oslo_concurrency.lockutils [req-29e7c14e-9348-4636-8df9-07466b40578b req-410ab073-b8e8-4aa3-a2f3-9cff23588ac2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "b5656c1b-5ac7-4b93-a25d-420e1e294678-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:16:42 compute-0 nova_compute[250018]: 2026-01-20 15:16:42.925 250022 DEBUG oslo_concurrency.lockutils [req-29e7c14e-9348-4636-8df9-07466b40578b req-410ab073-b8e8-4aa3-a2f3-9cff23588ac2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "b5656c1b-5ac7-4b93-a25d-420e1e294678-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:16:42 compute-0 nova_compute[250018]: 2026-01-20 15:16:42.925 250022 DEBUG oslo_concurrency.lockutils [req-29e7c14e-9348-4636-8df9-07466b40578b req-410ab073-b8e8-4aa3-a2f3-9cff23588ac2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "b5656c1b-5ac7-4b93-a25d-420e1e294678-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:16:42 compute-0 nova_compute[250018]: 2026-01-20 15:16:42.925 250022 DEBUG nova.compute.manager [req-29e7c14e-9348-4636-8df9-07466b40578b req-410ab073-b8e8-4aa3-a2f3-9cff23588ac2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] No waiting events found dispatching network-vif-unplugged-ebbe6083-de9d-43ca-9ab2-cf306ea0be4d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:16:42 compute-0 nova_compute[250018]: 2026-01-20 15:16:42.925 250022 DEBUG nova.compute.manager [req-29e7c14e-9348-4636-8df9-07466b40578b req-410ab073-b8e8-4aa3-a2f3-9cff23588ac2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] Received event network-vif-unplugged-ebbe6083-de9d-43ca-9ab2-cf306ea0be4d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 20 15:16:42 compute-0 nova_compute[250018]: 2026-01-20 15:16:42.927 250022 DEBUG nova.compute.manager [req-b5463199-456f-44b5-8c97-8b16f76abae2 req-1dcf461c-feb1-4954-a966-fdb417c4341f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b3504af3-390e-4ab0-8af6-15749a887d8f] Received event network-changed-1b18c40e-cce7-4971-98d2-c95ec41c9040 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:16:42 compute-0 nova_compute[250018]: 2026-01-20 15:16:42.927 250022 DEBUG nova.compute.manager [req-b5463199-456f-44b5-8c97-8b16f76abae2 req-1dcf461c-feb1-4954-a966-fdb417c4341f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b3504af3-390e-4ab0-8af6-15749a887d8f] Refreshing instance network info cache due to event network-changed-1b18c40e-cce7-4971-98d2-c95ec41c9040. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 15:16:42 compute-0 nova_compute[250018]: 2026-01-20 15:16:42.927 250022 DEBUG oslo_concurrency.lockutils [req-b5463199-456f-44b5-8c97-8b16f76abae2 req-1dcf461c-feb1-4954-a966-fdb417c4341f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-b3504af3-390e-4ab0-8af6-15749a887d8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:16:42 compute-0 nova_compute[250018]: 2026-01-20 15:16:42.928 250022 DEBUG oslo_concurrency.lockutils [req-b5463199-456f-44b5-8c97-8b16f76abae2 req-1dcf461c-feb1-4954-a966-fdb417c4341f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-b3504af3-390e-4ab0-8af6-15749a887d8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:16:42 compute-0 nova_compute[250018]: 2026-01-20 15:16:42.928 250022 DEBUG nova.network.neutron [req-b5463199-456f-44b5-8c97-8b16f76abae2 req-1dcf461c-feb1-4954-a966-fdb417c4341f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b3504af3-390e-4ab0-8af6-15749a887d8f] Refreshing network info cache for port 1b18c40e-cce7-4971-98d2-c95ec41c9040 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 15:16:42 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 86] Generated table #143: 9252 keys, 11672877 bytes, temperature: kUnknown
Jan 20 15:16:42 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768922202974819, "cf_name": "default", "job": 86, "event": "table_file_creation", "file_number": 143, "file_size": 11672877, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11612871, "index_size": 35765, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 23173, "raw_key_size": 243019, "raw_average_key_size": 26, "raw_value_size": 11450158, "raw_average_value_size": 1237, "num_data_blocks": 1364, "num_entries": 9252, "num_filter_entries": 9252, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768917305, "oldest_key_time": 0, "file_creation_time": 1768922202, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1688fb6f-f397-4aac-a0e8-874ff91d97a7", "db_session_id": "5V3N2TVXYZBCXP55EZHK", "orig_file_number": 143, "seqno_to_time_mapping": "N/A"}}
Jan 20 15:16:42 compute-0 ceph-mon[74360]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 15:16:42 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:16:42.975273) [db/compaction/compaction_job.cc:1663] [default] [JOB 86] Compacted 1@0 + 1@6 files to L6 => 11672877 bytes
Jan 20 15:16:42 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:16:42.976990) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 179.5 rd, 154.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.4, 9.5 +0.0 blob) out(11.1 +0.0 blob), read-write-amplify(7.0) write-amplify(3.2) OK, records in: 9777, records dropped: 525 output_compression: NoCompression
Jan 20 15:16:42 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:16:42.977016) EVENT_LOG_v1 {"time_micros": 1768922202977003, "job": 86, "event": "compaction_finished", "compaction_time_micros": 75473, "compaction_time_cpu_micros": 28589, "output_level": 6, "num_output_files": 1, "total_output_size": 11672877, "num_input_records": 9777, "num_output_records": 9252, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 15:16:42 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000142.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:16:42 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768922202977789, "job": 86, "event": "table_file_deletion", "file_number": 142}
Jan 20 15:16:42 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000140.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:16:42 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768922202979484, "job": 86, "event": "table_file_deletion", "file_number": 140}
Jan 20 15:16:42 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:16:42.899295) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:16:42 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:16:42.979578) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:16:42 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:16:42.979585) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:16:42 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:16:42.979587) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:16:42 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:16:42.979590) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:16:42 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:16:42.979592) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:16:43 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:16:43 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:16:43 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:16:43.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:16:43 compute-0 nova_compute[250018]: 2026-01-20 15:16:43.844 250022 DEBUG nova.network.neutron [req-9520b4d0-b91f-4025-9b6c-f8bebab7f7f2 req-74f006bf-afa8-43b0-bee1-598f09a67215 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] Updated VIF entry in instance network info cache for port ebbe6083-de9d-43ca-9ab2-cf306ea0be4d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 15:16:43 compute-0 nova_compute[250018]: 2026-01-20 15:16:43.845 250022 DEBUG nova.network.neutron [req-9520b4d0-b91f-4025-9b6c-f8bebab7f7f2 req-74f006bf-afa8-43b0-bee1-598f09a67215 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] Updating instance_info_cache with network_info: [{"id": "ebbe6083-de9d-43ca-9ab2-cf306ea0be4d", "address": "fa:16:3e:a9:77:ea", "network": {"id": "be008398-8f36-4967-9cc8-6412553c79f3", "bridge": "br-int", "label": "tempest-network-smoke--259749923", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ed5feeeafe7448a8efb47ab975b0ead", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapebbe6083-de", "ovs_interfaceid": "ebbe6083-de9d-43ca-9ab2-cf306ea0be4d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:16:43 compute-0 nova_compute[250018]: 2026-01-20 15:16:43.888 250022 DEBUG oslo_concurrency.lockutils [req-9520b4d0-b91f-4025-9b6c-f8bebab7f7f2 req-74f006bf-afa8-43b0-bee1-598f09a67215 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-b5656c1b-5ac7-4b93-a25d-420e1e294678" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:16:44 compute-0 nova_compute[250018]: 2026-01-20 15:16:44.154 250022 DEBUG nova.network.neutron [-] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:16:44 compute-0 nova_compute[250018]: 2026-01-20 15:16:44.171 250022 INFO nova.compute.manager [-] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] Took 2.38 seconds to deallocate network for instance.
Jan 20 15:16:44 compute-0 nova_compute[250018]: 2026-01-20 15:16:44.262 250022 DEBUG oslo_concurrency.lockutils [None req-a393efb1-8788-4663-b54f-2dcb5a26b436 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:16:44 compute-0 nova_compute[250018]: 2026-01-20 15:16:44.262 250022 DEBUG oslo_concurrency.lockutils [None req-a393efb1-8788-4663-b54f-2dcb5a26b436 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:16:44 compute-0 nova_compute[250018]: 2026-01-20 15:16:44.269 250022 DEBUG oslo_concurrency.lockutils [None req-a393efb1-8788-4663-b54f-2dcb5a26b436 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.007s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:16:44 compute-0 nova_compute[250018]: 2026-01-20 15:16:44.334 250022 INFO nova.scheduler.client.report [None req-a393efb1-8788-4663-b54f-2dcb5a26b436 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Deleted allocations for instance b5656c1b-5ac7-4b93-a25d-420e1e294678
Jan 20 15:16:44 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:16:44 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:16:44 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:16:44.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:16:44 compute-0 nova_compute[250018]: 2026-01-20 15:16:44.411 250022 DEBUG oslo_concurrency.lockutils [None req-a393efb1-8788-4663-b54f-2dcb5a26b436 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lock "b5656c1b-5ac7-4b93-a25d-420e1e294678" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.790s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:16:44 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2828: 321 pgs: 321 active+clean; 568 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 175 KiB/s rd, 3.4 KiB/s wr, 16 op/s
Jan 20 15:16:44 compute-0 nova_compute[250018]: 2026-01-20 15:16:44.732 250022 DEBUG nova.network.neutron [req-b5463199-456f-44b5-8c97-8b16f76abae2 req-1dcf461c-feb1-4954-a966-fdb417c4341f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b3504af3-390e-4ab0-8af6-15749a887d8f] Updated VIF entry in instance network info cache for port 1b18c40e-cce7-4971-98d2-c95ec41c9040. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 15:16:44 compute-0 nova_compute[250018]: 2026-01-20 15:16:44.732 250022 DEBUG nova.network.neutron [req-b5463199-456f-44b5-8c97-8b16f76abae2 req-1dcf461c-feb1-4954-a966-fdb417c4341f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b3504af3-390e-4ab0-8af6-15749a887d8f] Updating instance_info_cache with network_info: [{"id": "1b18c40e-cce7-4971-98d2-c95ec41c9040", "address": "fa:16:3e:eb:05:c3", "network": {"id": "c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1423306001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.179", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fff727019f86407498e83d7948d54962", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b18c40e-cc", "ovs_interfaceid": "1b18c40e-cce7-4971-98d2-c95ec41c9040", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:16:44 compute-0 ceph-mon[74360]: pgmap v2828: 321 pgs: 321 active+clean; 568 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 175 KiB/s rd, 3.4 KiB/s wr, 16 op/s
Jan 20 15:16:44 compute-0 nova_compute[250018]: 2026-01-20 15:16:44.760 250022 DEBUG oslo_concurrency.lockutils [req-b5463199-456f-44b5-8c97-8b16f76abae2 req-1dcf461c-feb1-4954-a966-fdb417c4341f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-b3504af3-390e-4ab0-8af6-15749a887d8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:16:45 compute-0 nova_compute[250018]: 2026-01-20 15:16:45.032 250022 DEBUG nova.compute.manager [req-05c19c5d-b7bd-4a14-80fc-f09cb9eb90b1 req-5df5be84-e0fe-455b-a217-fe81c96b5ff2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] Received event network-vif-plugged-ebbe6083-de9d-43ca-9ab2-cf306ea0be4d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:16:45 compute-0 nova_compute[250018]: 2026-01-20 15:16:45.032 250022 DEBUG oslo_concurrency.lockutils [req-05c19c5d-b7bd-4a14-80fc-f09cb9eb90b1 req-5df5be84-e0fe-455b-a217-fe81c96b5ff2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "b5656c1b-5ac7-4b93-a25d-420e1e294678-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:16:45 compute-0 nova_compute[250018]: 2026-01-20 15:16:45.033 250022 DEBUG oslo_concurrency.lockutils [req-05c19c5d-b7bd-4a14-80fc-f09cb9eb90b1 req-5df5be84-e0fe-455b-a217-fe81c96b5ff2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "b5656c1b-5ac7-4b93-a25d-420e1e294678-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:16:45 compute-0 nova_compute[250018]: 2026-01-20 15:16:45.033 250022 DEBUG oslo_concurrency.lockutils [req-05c19c5d-b7bd-4a14-80fc-f09cb9eb90b1 req-5df5be84-e0fe-455b-a217-fe81c96b5ff2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "b5656c1b-5ac7-4b93-a25d-420e1e294678-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:16:45 compute-0 nova_compute[250018]: 2026-01-20 15:16:45.033 250022 DEBUG nova.compute.manager [req-05c19c5d-b7bd-4a14-80fc-f09cb9eb90b1 req-5df5be84-e0fe-455b-a217-fe81c96b5ff2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] No waiting events found dispatching network-vif-plugged-ebbe6083-de9d-43ca-9ab2-cf306ea0be4d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:16:45 compute-0 nova_compute[250018]: 2026-01-20 15:16:45.034 250022 WARNING nova.compute.manager [req-05c19c5d-b7bd-4a14-80fc-f09cb9eb90b1 req-5df5be84-e0fe-455b-a217-fe81c96b5ff2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] Received unexpected event network-vif-plugged-ebbe6083-de9d-43ca-9ab2-cf306ea0be4d for instance with vm_state deleted and task_state None.
Jan 20 15:16:45 compute-0 nova_compute[250018]: 2026-01-20 15:16:45.034 250022 DEBUG nova.compute.manager [req-05c19c5d-b7bd-4a14-80fc-f09cb9eb90b1 req-5df5be84-e0fe-455b-a217-fe81c96b5ff2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] Received event network-vif-deleted-ebbe6083-de9d-43ca-9ab2-cf306ea0be4d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:16:45 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:16:45 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:16:45 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:16:45.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:16:45 compute-0 nova_compute[250018]: 2026-01-20 15:16:45.508 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:16:46 compute-0 nova_compute[250018]: 2026-01-20 15:16:46.302 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:16:46 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:16:46 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:16:46 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:16:46.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:16:46 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2829: 321 pgs: 321 active+clean; 519 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 662 KiB/s rd, 17 KiB/s wr, 90 op/s
Jan 20 15:16:46 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e409 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:16:47 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:16:47 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:16:47 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:16:47.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:16:47 compute-0 ceph-mon[74360]: pgmap v2829: 321 pgs: 321 active+clean; 519 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 662 KiB/s rd, 17 KiB/s wr, 90 op/s
Jan 20 15:16:48 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:16:48 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:16:48 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:16:48.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:16:48 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2830: 321 pgs: 321 active+clean; 519 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 662 KiB/s rd, 17 KiB/s wr, 90 op/s
Jan 20 15:16:48 compute-0 ovn_controller[148666]: 2026-01-20T15:16:48Z|00699|binding|INFO|Releasing lport b20b0e27-0b08-4316-b6df-6784416f44c0 from this chassis (sb_readonly=0)
Jan 20 15:16:48 compute-0 nova_compute[250018]: 2026-01-20 15:16:48.689 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:16:49 compute-0 nova_compute[250018]: 2026-01-20 15:16:49.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:16:49 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:16:49 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:16:49 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:16:49.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:16:49 compute-0 ceph-mon[74360]: pgmap v2830: 321 pgs: 321 active+clean; 519 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 662 KiB/s rd, 17 KiB/s wr, 90 op/s
Jan 20 15:16:50 compute-0 nova_compute[250018]: 2026-01-20 15:16:50.267 250022 DEBUG oslo_concurrency.lockutils [None req-26e381bf-c044-40d1-bf8c-b95e3a2c62c9 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Acquiring lock "b3504af3-390e-4ab0-8af6-15749a887d8f" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:16:50 compute-0 nova_compute[250018]: 2026-01-20 15:16:50.267 250022 DEBUG oslo_concurrency.lockutils [None req-26e381bf-c044-40d1-bf8c-b95e3a2c62c9 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Lock "b3504af3-390e-4ab0-8af6-15749a887d8f" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:16:50 compute-0 nova_compute[250018]: 2026-01-20 15:16:50.283 250022 INFO nova.compute.manager [None req-26e381bf-c044-40d1-bf8c-b95e3a2c62c9 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] [instance: b3504af3-390e-4ab0-8af6-15749a887d8f] Detaching volume 933c5c7a-f496-4bcc-b304-68156c235fe5
Jan 20 15:16:50 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:16:50 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:16:50 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:16:50.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:16:50 compute-0 nova_compute[250018]: 2026-01-20 15:16:50.443 250022 INFO nova.virt.block_device [None req-26e381bf-c044-40d1-bf8c-b95e3a2c62c9 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] [instance: b3504af3-390e-4ab0-8af6-15749a887d8f] Attempting to driver detach volume 933c5c7a-f496-4bcc-b304-68156c235fe5 from mountpoint /dev/vdb
Jan 20 15:16:50 compute-0 nova_compute[250018]: 2026-01-20 15:16:50.452 250022 DEBUG nova.virt.libvirt.driver [None req-26e381bf-c044-40d1-bf8c-b95e3a2c62c9 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Attempting to detach device vdb from instance b3504af3-390e-4ab0-8af6-15749a887d8f from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Jan 20 15:16:50 compute-0 nova_compute[250018]: 2026-01-20 15:16:50.453 250022 DEBUG nova.virt.libvirt.guest [None req-26e381bf-c044-40d1-bf8c-b95e3a2c62c9 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] detach device xml: <disk type="network" device="disk">
Jan 20 15:16:50 compute-0 nova_compute[250018]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 20 15:16:50 compute-0 nova_compute[250018]:   <source protocol="rbd" name="volumes/volume-933c5c7a-f496-4bcc-b304-68156c235fe5">
Jan 20 15:16:50 compute-0 nova_compute[250018]:     <host name="192.168.122.100" port="6789"/>
Jan 20 15:16:50 compute-0 nova_compute[250018]:     <host name="192.168.122.102" port="6789"/>
Jan 20 15:16:50 compute-0 nova_compute[250018]:     <host name="192.168.122.101" port="6789"/>
Jan 20 15:16:50 compute-0 nova_compute[250018]:   </source>
Jan 20 15:16:50 compute-0 nova_compute[250018]:   <target dev="vdb" bus="virtio"/>
Jan 20 15:16:50 compute-0 nova_compute[250018]:   <serial>933c5c7a-f496-4bcc-b304-68156c235fe5</serial>
Jan 20 15:16:50 compute-0 nova_compute[250018]:   <shareable/>
Jan 20 15:16:50 compute-0 nova_compute[250018]:   <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Jan 20 15:16:50 compute-0 nova_compute[250018]: </disk>
Jan 20 15:16:50 compute-0 nova_compute[250018]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 20 15:16:50 compute-0 nova_compute[250018]: 2026-01-20 15:16:50.460 250022 INFO nova.virt.libvirt.driver [None req-26e381bf-c044-40d1-bf8c-b95e3a2c62c9 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Successfully detached device vdb from instance b3504af3-390e-4ab0-8af6-15749a887d8f from the persistent domain config.
Jan 20 15:16:50 compute-0 nova_compute[250018]: 2026-01-20 15:16:50.461 250022 DEBUG nova.virt.libvirt.driver [None req-26e381bf-c044-40d1-bf8c-b95e3a2c62c9 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance b3504af3-390e-4ab0-8af6-15749a887d8f from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Jan 20 15:16:50 compute-0 nova_compute[250018]: 2026-01-20 15:16:50.461 250022 DEBUG nova.virt.libvirt.guest [None req-26e381bf-c044-40d1-bf8c-b95e3a2c62c9 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] detach device xml: <disk type="network" device="disk">
Jan 20 15:16:50 compute-0 nova_compute[250018]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 20 15:16:50 compute-0 nova_compute[250018]:   <source protocol="rbd" name="volumes/volume-933c5c7a-f496-4bcc-b304-68156c235fe5">
Jan 20 15:16:50 compute-0 nova_compute[250018]:     <host name="192.168.122.100" port="6789"/>
Jan 20 15:16:50 compute-0 nova_compute[250018]:     <host name="192.168.122.102" port="6789"/>
Jan 20 15:16:50 compute-0 nova_compute[250018]:     <host name="192.168.122.101" port="6789"/>
Jan 20 15:16:50 compute-0 nova_compute[250018]:   </source>
Jan 20 15:16:50 compute-0 nova_compute[250018]:   <target dev="vdb" bus="virtio"/>
Jan 20 15:16:50 compute-0 nova_compute[250018]:   <serial>933c5c7a-f496-4bcc-b304-68156c235fe5</serial>
Jan 20 15:16:50 compute-0 nova_compute[250018]:   <shareable/>
Jan 20 15:16:50 compute-0 nova_compute[250018]:   <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Jan 20 15:16:50 compute-0 nova_compute[250018]: </disk>
Jan 20 15:16:50 compute-0 nova_compute[250018]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 20 15:16:50 compute-0 nova_compute[250018]: 2026-01-20 15:16:50.510 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:16:50 compute-0 nova_compute[250018]: 2026-01-20 15:16:50.514 250022 DEBUG nova.virt.libvirt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Received event <DeviceRemovedEvent: 1768922210.5142126, b3504af3-390e-4ab0-8af6-15749a887d8f => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Jan 20 15:16:50 compute-0 nova_compute[250018]: 2026-01-20 15:16:50.516 250022 DEBUG nova.virt.libvirt.driver [None req-26e381bf-c044-40d1-bf8c-b95e3a2c62c9 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance b3504af3-390e-4ab0-8af6-15749a887d8f _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Jan 20 15:16:50 compute-0 nova_compute[250018]: 2026-01-20 15:16:50.519 250022 INFO nova.virt.libvirt.driver [None req-26e381bf-c044-40d1-bf8c-b95e3a2c62c9 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Successfully detached device vdb from instance b3504af3-390e-4ab0-8af6-15749a887d8f from the live domain config.
Jan 20 15:16:50 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:16:50.536 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=62, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=61) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:16:50 compute-0 nova_compute[250018]: 2026-01-20 15:16:50.536 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:16:50 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:16:50.537 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 15:16:50 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2831: 321 pgs: 321 active+clean; 521 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 664 KiB/s rd, 17 KiB/s wr, 89 op/s
Jan 20 15:16:50 compute-0 ceph-mon[74360]: pgmap v2831: 321 pgs: 321 active+clean; 521 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 664 KiB/s rd, 17 KiB/s wr, 89 op/s
Jan 20 15:16:50 compute-0 nova_compute[250018]: 2026-01-20 15:16:50.795 250022 DEBUG nova.objects.instance [None req-26e381bf-c044-40d1-bf8c-b95e3a2c62c9 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Lazy-loading 'flavor' on Instance uuid b3504af3-390e-4ab0-8af6-15749a887d8f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:16:50 compute-0 nova_compute[250018]: 2026-01-20 15:16:50.831 250022 DEBUG oslo_concurrency.lockutils [None req-26e381bf-c044-40d1-bf8c-b95e3a2c62c9 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Lock "b3504af3-390e-4ab0-8af6-15749a887d8f" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.564s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:16:51 compute-0 nova_compute[250018]: 2026-01-20 15:16:51.304 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:16:51 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:16:51 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:16:51 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:16:51.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:16:51 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e409 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:16:52 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:16:52 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:16:52 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:16:52.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:16:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:16:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:16:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:16:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:16:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:16:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:16:52 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2832: 321 pgs: 321 active+clean; 521 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 616 KiB/s rd, 26 KiB/s wr, 83 op/s
Jan 20 15:16:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Optimize plan auto_2026-01-20_15:16:52
Jan 20 15:16:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 15:16:52 compute-0 ceph-mgr[74653]: [balancer INFO root] do_upmap
Jan 20 15:16:52 compute-0 ceph-mgr[74653]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.log', 'cephfs.cephfs.meta', 'volumes', '.mgr', 'vms', 'default.rgw.meta', 'default.rgw.control', '.rgw.root', 'images', 'backups']
Jan 20 15:16:52 compute-0 ceph-mgr[74653]: [balancer INFO root] prepared 0/10 changes
Jan 20 15:16:52 compute-0 ceph-mon[74360]: pgmap v2832: 321 pgs: 321 active+clean; 521 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 616 KiB/s rd, 26 KiB/s wr, 83 op/s
Jan 20 15:16:53 compute-0 nova_compute[250018]: 2026-01-20 15:16:53.345 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:16:53 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:16:53 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:16:53 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:16:53.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:16:54 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:16:54 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:16:54 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:16:54.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:16:54 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:16:54.539 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=367c1a2c-b16a-4828-ab5a-626bb50023b4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '62'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:16:54 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2833: 321 pgs: 321 active+clean; 521 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 551 KiB/s rd, 24 KiB/s wr, 74 op/s
Jan 20 15:16:54 compute-0 ceph-mon[74360]: pgmap v2833: 321 pgs: 321 active+clean; 521 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 551 KiB/s rd, 24 KiB/s wr, 74 op/s
Jan 20 15:16:55 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:16:55 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:16:55 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:16:55.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:16:55 compute-0 nova_compute[250018]: 2026-01-20 15:16:55.512 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:16:56 compute-0 nova_compute[250018]: 2026-01-20 15:16:56.268 250022 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768922201.266106, b5656c1b-5ac7-4b93-a25d-420e1e294678 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:16:56 compute-0 nova_compute[250018]: 2026-01-20 15:16:56.269 250022 INFO nova.compute.manager [-] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] VM Stopped (Lifecycle Event)
Jan 20 15:16:56 compute-0 nova_compute[250018]: 2026-01-20 15:16:56.288 250022 DEBUG nova.compute.manager [None req-0d4b9371-86ff-4116-9bb7-516eac4587ab - - - - - -] [instance: b5656c1b-5ac7-4b93-a25d-420e1e294678] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:16:56 compute-0 nova_compute[250018]: 2026-01-20 15:16:56.306 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:16:56 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:16:56 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:16:56 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:16:56.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:16:56 compute-0 sudo[363130]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:16:56 compute-0 sudo[363130]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:16:56 compute-0 sudo[363130]: pam_unix(sudo:session): session closed for user root
Jan 20 15:16:56 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2834: 321 pgs: 321 active+clean; 521 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 27 KiB/s wr, 70 op/s
Jan 20 15:16:56 compute-0 sudo[363155]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:16:56 compute-0 sudo[363155]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:16:56 compute-0 sudo[363155]: pam_unix(sudo:session): session closed for user root
Jan 20 15:16:56 compute-0 sudo[363180]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:16:56 compute-0 sudo[363180]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:16:56 compute-0 sudo[363180]: pam_unix(sudo:session): session closed for user root
Jan 20 15:16:56 compute-0 sudo[363205]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 15:16:56 compute-0 sudo[363205]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:16:56 compute-0 ceph-mon[74360]: pgmap v2834: 321 pgs: 321 active+clean; 521 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 27 KiB/s wr, 70 op/s
Jan 20 15:16:56 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e409 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:16:57 compute-0 sudo[363205]: pam_unix(sudo:session): session closed for user root
Jan 20 15:16:57 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 15:16:57 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:16:57 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 20 15:16:57 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 15:16:57 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 20 15:16:57 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:16:57 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 6b184720-ba43-45ed-95b1-2853aa4abf95 does not exist
Jan 20 15:16:57 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 5753861f-d4d3-4f82-900e-5269857f4442 does not exist
Jan 20 15:16:57 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev d8d977cc-564c-4b17-9a1d-dfcdb7597ac5 does not exist
Jan 20 15:16:57 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 20 15:16:57 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 15:16:57 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 20 15:16:57 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 15:16:57 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 15:16:57 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:16:57 compute-0 sudo[363261]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:16:57 compute-0 sudo[363261]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:16:57 compute-0 sudo[363261]: pam_unix(sudo:session): session closed for user root
Jan 20 15:16:57 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:16:57 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:16:57 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:16:57.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:16:57 compute-0 sudo[363286]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:16:57 compute-0 sudo[363286]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:16:57 compute-0 sudo[363286]: pam_unix(sudo:session): session closed for user root
Jan 20 15:16:57 compute-0 sudo[363311]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:16:57 compute-0 sudo[363311]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:16:57 compute-0 sudo[363311]: pam_unix(sudo:session): session closed for user root
Jan 20 15:16:57 compute-0 sudo[363336]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 15:16:57 compute-0 sudo[363336]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:16:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 15:16:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 15:16:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 15:16:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 15:16:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 15:16:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 15:16:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 15:16:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 15:16:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 15:16:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 15:16:57 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:16:57 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 15:16:57 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:16:57 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 15:16:57 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 15:16:57 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:16:57 compute-0 nova_compute[250018]: 2026-01-20 15:16:57.827 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:16:57 compute-0 podman[363402]: 2026-01-20 15:16:57.908289775 +0000 UTC m=+0.035776706 container create bfcb117f01f303162eddde18095966ba8d0df3cc666cf2d36ad26f6d9ae173a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_faraday, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 20 15:16:57 compute-0 systemd[1]: Started libpod-conmon-bfcb117f01f303162eddde18095966ba8d0df3cc666cf2d36ad26f6d9ae173a7.scope.
Jan 20 15:16:57 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:16:57 compute-0 podman[363402]: 2026-01-20 15:16:57.892582991 +0000 UTC m=+0.020069942 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:16:57 compute-0 podman[363402]: 2026-01-20 15:16:57.993194845 +0000 UTC m=+0.120681806 container init bfcb117f01f303162eddde18095966ba8d0df3cc666cf2d36ad26f6d9ae173a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_faraday, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef)
Jan 20 15:16:58 compute-0 podman[363402]: 2026-01-20 15:16:58.000318957 +0000 UTC m=+0.127805898 container start bfcb117f01f303162eddde18095966ba8d0df3cc666cf2d36ad26f6d9ae173a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_faraday, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 20 15:16:58 compute-0 podman[363402]: 2026-01-20 15:16:58.004748807 +0000 UTC m=+0.132235748 container attach bfcb117f01f303162eddde18095966ba8d0df3cc666cf2d36ad26f6d9ae173a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_faraday, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 15:16:58 compute-0 fervent_faraday[363418]: 167 167
Jan 20 15:16:58 compute-0 systemd[1]: libpod-bfcb117f01f303162eddde18095966ba8d0df3cc666cf2d36ad26f6d9ae173a7.scope: Deactivated successfully.
Jan 20 15:16:58 compute-0 conmon[363418]: conmon bfcb117f01f303162edd <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-bfcb117f01f303162eddde18095966ba8d0df3cc666cf2d36ad26f6d9ae173a7.scope/container/memory.events
Jan 20 15:16:58 compute-0 podman[363402]: 2026-01-20 15:16:58.009061593 +0000 UTC m=+0.136548524 container died bfcb117f01f303162eddde18095966ba8d0df3cc666cf2d36ad26f6d9ae173a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_faraday, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 15:16:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-0e85608cc7a744800a5436182c512e29665c1a4cdd3ccedb89dc9c33f5964045-merged.mount: Deactivated successfully.
Jan 20 15:16:58 compute-0 podman[363402]: 2026-01-20 15:16:58.044186771 +0000 UTC m=+0.171673702 container remove bfcb117f01f303162eddde18095966ba8d0df3cc666cf2d36ad26f6d9ae173a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_faraday, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 20 15:16:58 compute-0 systemd[1]: libpod-conmon-bfcb117f01f303162eddde18095966ba8d0df3cc666cf2d36ad26f6d9ae173a7.scope: Deactivated successfully.
Jan 20 15:16:58 compute-0 podman[363442]: 2026-01-20 15:16:58.211603717 +0000 UTC m=+0.039451915 container create 0a93e3d8826fdf76e48b333c6be8a774d0f1cae6f461c693fec964833de8780f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_goodall, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 20 15:16:58 compute-0 systemd[1]: Started libpod-conmon-0a93e3d8826fdf76e48b333c6be8a774d0f1cae6f461c693fec964833de8780f.scope.
Jan 20 15:16:58 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:16:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36449c3f81ad634e1005ad1ef0702fd0ff2b3de5f3b2c9cfec4e317fb2ca4224/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 15:16:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36449c3f81ad634e1005ad1ef0702fd0ff2b3de5f3b2c9cfec4e317fb2ca4224/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 15:16:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36449c3f81ad634e1005ad1ef0702fd0ff2b3de5f3b2c9cfec4e317fb2ca4224/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 15:16:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36449c3f81ad634e1005ad1ef0702fd0ff2b3de5f3b2c9cfec4e317fb2ca4224/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 15:16:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36449c3f81ad634e1005ad1ef0702fd0ff2b3de5f3b2c9cfec4e317fb2ca4224/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 15:16:58 compute-0 podman[363442]: 2026-01-20 15:16:58.196115259 +0000 UTC m=+0.023963487 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:16:58 compute-0 podman[363442]: 2026-01-20 15:16:58.291850971 +0000 UTC m=+0.119699179 container init 0a93e3d8826fdf76e48b333c6be8a774d0f1cae6f461c693fec964833de8780f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_goodall, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 15:16:58 compute-0 podman[363442]: 2026-01-20 15:16:58.299017085 +0000 UTC m=+0.126865293 container start 0a93e3d8826fdf76e48b333c6be8a774d0f1cae6f461c693fec964833de8780f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_goodall, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 15:16:58 compute-0 podman[363442]: 2026-01-20 15:16:58.302224352 +0000 UTC m=+0.130072560 container attach 0a93e3d8826fdf76e48b333c6be8a774d0f1cae6f461c693fec964833de8780f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_goodall, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 15:16:58 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:16:58 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:16:58 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:16:58.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:16:58 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2835: 321 pgs: 321 active+clean; 521 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 14 KiB/s wr, 2 op/s
Jan 20 15:16:58 compute-0 ceph-mon[74360]: pgmap v2835: 321 pgs: 321 active+clean; 521 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 14 KiB/s wr, 2 op/s
Jan 20 15:16:59 compute-0 modest_goodall[363460]: --> passed data devices: 0 physical, 1 LVM
Jan 20 15:16:59 compute-0 modest_goodall[363460]: --> relative data size: 1.0
Jan 20 15:16:59 compute-0 modest_goodall[363460]: --> All data devices are unavailable
Jan 20 15:16:59 compute-0 systemd[1]: libpod-0a93e3d8826fdf76e48b333c6be8a774d0f1cae6f461c693fec964833de8780f.scope: Deactivated successfully.
Jan 20 15:16:59 compute-0 podman[363442]: 2026-01-20 15:16:59.068951774 +0000 UTC m=+0.896799982 container died 0a93e3d8826fdf76e48b333c6be8a774d0f1cae6f461c693fec964833de8780f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_goodall, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 20 15:16:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-36449c3f81ad634e1005ad1ef0702fd0ff2b3de5f3b2c9cfec4e317fb2ca4224-merged.mount: Deactivated successfully.
Jan 20 15:16:59 compute-0 podman[363442]: 2026-01-20 15:16:59.121958924 +0000 UTC m=+0.949807122 container remove 0a93e3d8826fdf76e48b333c6be8a774d0f1cae6f461c693fec964833de8780f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_goodall, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 20 15:16:59 compute-0 systemd[1]: libpod-conmon-0a93e3d8826fdf76e48b333c6be8a774d0f1cae6f461c693fec964833de8780f.scope: Deactivated successfully.
Jan 20 15:16:59 compute-0 sudo[363336]: pam_unix(sudo:session): session closed for user root
Jan 20 15:16:59 compute-0 sudo[363489]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:16:59 compute-0 sudo[363489]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:16:59 compute-0 sudo[363489]: pam_unix(sudo:session): session closed for user root
Jan 20 15:16:59 compute-0 sudo[363514]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:16:59 compute-0 sudo[363514]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:16:59 compute-0 sudo[363514]: pam_unix(sudo:session): session closed for user root
Jan 20 15:16:59 compute-0 sudo[363539]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:16:59 compute-0 sudo[363539]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:16:59 compute-0 sudo[363539]: pam_unix(sudo:session): session closed for user root
Jan 20 15:16:59 compute-0 sudo[363564]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- lvm list --format json
Jan 20 15:16:59 compute-0 sudo[363564]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:16:59 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:16:59 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:16:59 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:16:59.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:16:59 compute-0 podman[363629]: 2026-01-20 15:16:59.742622177 +0000 UTC m=+0.043798613 container create ed2560f63a9cfcfc3b331e7ae7b4060761331a6fc413d2b34217a42056f8ba00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_mirzakhani, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 20 15:16:59 compute-0 systemd[1]: Started libpod-conmon-ed2560f63a9cfcfc3b331e7ae7b4060761331a6fc413d2b34217a42056f8ba00.scope.
Jan 20 15:16:59 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:16:59 compute-0 podman[363629]: 2026-01-20 15:16:59.722776721 +0000 UTC m=+0.023953177 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:16:59 compute-0 podman[363629]: 2026-01-20 15:16:59.82430882 +0000 UTC m=+0.125485276 container init ed2560f63a9cfcfc3b331e7ae7b4060761331a6fc413d2b34217a42056f8ba00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_mirzakhani, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 20 15:16:59 compute-0 podman[363629]: 2026-01-20 15:16:59.830309021 +0000 UTC m=+0.131485457 container start ed2560f63a9cfcfc3b331e7ae7b4060761331a6fc413d2b34217a42056f8ba00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_mirzakhani, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 15:16:59 compute-0 podman[363629]: 2026-01-20 15:16:59.833921019 +0000 UTC m=+0.135097455 container attach ed2560f63a9cfcfc3b331e7ae7b4060761331a6fc413d2b34217a42056f8ba00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_mirzakhani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 20 15:16:59 compute-0 dazzling_mirzakhani[363645]: 167 167
Jan 20 15:16:59 compute-0 systemd[1]: libpod-ed2560f63a9cfcfc3b331e7ae7b4060761331a6fc413d2b34217a42056f8ba00.scope: Deactivated successfully.
Jan 20 15:16:59 compute-0 podman[363629]: 2026-01-20 15:16:59.837911377 +0000 UTC m=+0.139087853 container died ed2560f63a9cfcfc3b331e7ae7b4060761331a6fc413d2b34217a42056f8ba00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_mirzakhani, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 15:16:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-91b6ced88bb09569d814383b1a977c3214c535b895942ab3b2b9583af8fc6e3c-merged.mount: Deactivated successfully.
Jan 20 15:16:59 compute-0 podman[363629]: 2026-01-20 15:16:59.882568742 +0000 UTC m=+0.183745178 container remove ed2560f63a9cfcfc3b331e7ae7b4060761331a6fc413d2b34217a42056f8ba00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_mirzakhani, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True)
Jan 20 15:16:59 compute-0 systemd[1]: libpod-conmon-ed2560f63a9cfcfc3b331e7ae7b4060761331a6fc413d2b34217a42056f8ba00.scope: Deactivated successfully.
Jan 20 15:17:00 compute-0 nova_compute[250018]: 2026-01-20 15:17:00.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:17:00 compute-0 podman[363670]: 2026-01-20 15:17:00.05527651 +0000 UTC m=+0.043242717 container create f0f05f1efd5699a9142a2d0612466edc9cfd0353c4a20a0e3c3f7d16537cb1a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_yonath, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 20 15:17:00 compute-0 systemd[1]: Started libpod-conmon-f0f05f1efd5699a9142a2d0612466edc9cfd0353c4a20a0e3c3f7d16537cb1a6.scope.
Jan 20 15:17:00 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:17:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e54ba40ee58de386ee9747dbbc6163631300e98e7fe014be17167deebee1069b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 15:17:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e54ba40ee58de386ee9747dbbc6163631300e98e7fe014be17167deebee1069b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 15:17:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e54ba40ee58de386ee9747dbbc6163631300e98e7fe014be17167deebee1069b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 15:17:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e54ba40ee58de386ee9747dbbc6163631300e98e7fe014be17167deebee1069b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 15:17:00 compute-0 podman[363670]: 2026-01-20 15:17:00.12423481 +0000 UTC m=+0.112201027 container init f0f05f1efd5699a9142a2d0612466edc9cfd0353c4a20a0e3c3f7d16537cb1a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_yonath, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 15:17:00 compute-0 podman[363670]: 2026-01-20 15:17:00.034657674 +0000 UTC m=+0.022623901 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:17:00 compute-0 podman[363670]: 2026-01-20 15:17:00.13533221 +0000 UTC m=+0.123298417 container start f0f05f1efd5699a9142a2d0612466edc9cfd0353c4a20a0e3c3f7d16537cb1a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_yonath, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 15:17:00 compute-0 podman[363670]: 2026-01-20 15:17:00.142910064 +0000 UTC m=+0.130876271 container attach f0f05f1efd5699a9142a2d0612466edc9cfd0353c4a20a0e3c3f7d16537cb1a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_yonath, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 20 15:17:00 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:17:00 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:17:00 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:17:00.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:17:00 compute-0 nova_compute[250018]: 2026-01-20 15:17:00.513 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:17:00 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2836: 321 pgs: 321 active+clean; 521 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 15 KiB/s wr, 10 op/s
Jan 20 15:17:00 compute-0 laughing_yonath[363686]: {
Jan 20 15:17:00 compute-0 laughing_yonath[363686]:     "0": [
Jan 20 15:17:00 compute-0 laughing_yonath[363686]:         {
Jan 20 15:17:00 compute-0 laughing_yonath[363686]:             "devices": [
Jan 20 15:17:00 compute-0 laughing_yonath[363686]:                 "/dev/loop3"
Jan 20 15:17:00 compute-0 laughing_yonath[363686]:             ],
Jan 20 15:17:00 compute-0 laughing_yonath[363686]:             "lv_name": "ceph_lv0",
Jan 20 15:17:00 compute-0 laughing_yonath[363686]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 15:17:00 compute-0 laughing_yonath[363686]:             "lv_size": "7511998464",
Jan 20 15:17:00 compute-0 laughing_yonath[363686]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e399cf45-e6b6-5393-99f1-75c601d3f188,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afc3740a-ccee-46f8-b343-22648cc89376,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 20 15:17:00 compute-0 laughing_yonath[363686]:             "lv_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 15:17:00 compute-0 laughing_yonath[363686]:             "name": "ceph_lv0",
Jan 20 15:17:00 compute-0 laughing_yonath[363686]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 15:17:00 compute-0 laughing_yonath[363686]:             "tags": {
Jan 20 15:17:00 compute-0 laughing_yonath[363686]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 15:17:00 compute-0 laughing_yonath[363686]:                 "ceph.block_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 15:17:00 compute-0 laughing_yonath[363686]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 15:17:00 compute-0 laughing_yonath[363686]:                 "ceph.cluster_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 15:17:00 compute-0 laughing_yonath[363686]:                 "ceph.cluster_name": "ceph",
Jan 20 15:17:00 compute-0 laughing_yonath[363686]:                 "ceph.crush_device_class": "",
Jan 20 15:17:00 compute-0 laughing_yonath[363686]:                 "ceph.encrypted": "0",
Jan 20 15:17:00 compute-0 laughing_yonath[363686]:                 "ceph.osd_fsid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 15:17:00 compute-0 laughing_yonath[363686]:                 "ceph.osd_id": "0",
Jan 20 15:17:00 compute-0 laughing_yonath[363686]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 15:17:00 compute-0 laughing_yonath[363686]:                 "ceph.type": "block",
Jan 20 15:17:00 compute-0 laughing_yonath[363686]:                 "ceph.vdo": "0"
Jan 20 15:17:00 compute-0 laughing_yonath[363686]:             },
Jan 20 15:17:00 compute-0 laughing_yonath[363686]:             "type": "block",
Jan 20 15:17:00 compute-0 laughing_yonath[363686]:             "vg_name": "ceph_vg0"
Jan 20 15:17:00 compute-0 laughing_yonath[363686]:         }
Jan 20 15:17:00 compute-0 laughing_yonath[363686]:     ]
Jan 20 15:17:00 compute-0 laughing_yonath[363686]: }
Jan 20 15:17:00 compute-0 systemd[1]: libpod-f0f05f1efd5699a9142a2d0612466edc9cfd0353c4a20a0e3c3f7d16537cb1a6.scope: Deactivated successfully.
Jan 20 15:17:00 compute-0 podman[363696]: 2026-01-20 15:17:00.987805985 +0000 UTC m=+0.023677729 container died f0f05f1efd5699a9142a2d0612466edc9cfd0353c4a20a0e3c3f7d16537cb1a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_yonath, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 15:17:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-e54ba40ee58de386ee9747dbbc6163631300e98e7fe014be17167deebee1069b-merged.mount: Deactivated successfully.
Jan 20 15:17:01 compute-0 podman[363696]: 2026-01-20 15:17:01.036608062 +0000 UTC m=+0.072479776 container remove f0f05f1efd5699a9142a2d0612466edc9cfd0353c4a20a0e3c3f7d16537cb1a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_yonath, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 20 15:17:01 compute-0 systemd[1]: libpod-conmon-f0f05f1efd5699a9142a2d0612466edc9cfd0353c4a20a0e3c3f7d16537cb1a6.scope: Deactivated successfully.
Jan 20 15:17:01 compute-0 sudo[363564]: pam_unix(sudo:session): session closed for user root
Jan 20 15:17:01 compute-0 sudo[363711]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:17:01 compute-0 sudo[363711]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:17:01 compute-0 sudo[363711]: pam_unix(sudo:session): session closed for user root
Jan 20 15:17:01 compute-0 sudo[363736]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:17:01 compute-0 sudo[363736]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:17:01 compute-0 sudo[363736]: pam_unix(sudo:session): session closed for user root
Jan 20 15:17:01 compute-0 sudo[363761]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:17:01 compute-0 sudo[363761]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:17:01 compute-0 sudo[363761]: pam_unix(sudo:session): session closed for user root
Jan 20 15:17:01 compute-0 nova_compute[250018]: 2026-01-20 15:17:01.308 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:17:01 compute-0 sudo[363786]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- raw list --format json
Jan 20 15:17:01 compute-0 sudo[363786]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:17:01 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:17:01 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:17:01 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:17:01.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:17:01 compute-0 podman[363851]: 2026-01-20 15:17:01.627340257 +0000 UTC m=+0.039766464 container create 780a378da459b1b495ac1f0d11d39ccb9ce69e7e3627a0fb129952ef4d0a1284 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_keller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 20 15:17:01 compute-0 systemd[1]: Started libpod-conmon-780a378da459b1b495ac1f0d11d39ccb9ce69e7e3627a0fb129952ef4d0a1284.scope.
Jan 20 15:17:01 compute-0 ceph-mon[74360]: pgmap v2836: 321 pgs: 321 active+clean; 521 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 15 KiB/s wr, 10 op/s
Jan 20 15:17:01 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:17:01 compute-0 podman[363851]: 2026-01-20 15:17:01.70347092 +0000 UTC m=+0.115897137 container init 780a378da459b1b495ac1f0d11d39ccb9ce69e7e3627a0fb129952ef4d0a1284 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_keller, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Jan 20 15:17:01 compute-0 podman[363851]: 2026-01-20 15:17:01.609750492 +0000 UTC m=+0.022176749 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:17:01 compute-0 podman[363851]: 2026-01-20 15:17:01.709551974 +0000 UTC m=+0.121978181 container start 780a378da459b1b495ac1f0d11d39ccb9ce69e7e3627a0fb129952ef4d0a1284 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_keller, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 20 15:17:01 compute-0 podman[363851]: 2026-01-20 15:17:01.712638748 +0000 UTC m=+0.125064975 container attach 780a378da459b1b495ac1f0d11d39ccb9ce69e7e3627a0fb129952ef4d0a1284 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_keller, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 20 15:17:01 compute-0 silly_keller[363867]: 167 167
Jan 20 15:17:01 compute-0 systemd[1]: libpod-780a378da459b1b495ac1f0d11d39ccb9ce69e7e3627a0fb129952ef4d0a1284.scope: Deactivated successfully.
Jan 20 15:17:01 compute-0 podman[363851]: 2026-01-20 15:17:01.714728884 +0000 UTC m=+0.127155081 container died 780a378da459b1b495ac1f0d11d39ccb9ce69e7e3627a0fb129952ef4d0a1284 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_keller, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef)
Jan 20 15:17:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-13126ba8cb3d8bb8af7e19fc9f6e337acf81efc680c369783c4548f8bf0b0eb5-merged.mount: Deactivated successfully.
Jan 20 15:17:01 compute-0 podman[363851]: 2026-01-20 15:17:01.746487301 +0000 UTC m=+0.158913508 container remove 780a378da459b1b495ac1f0d11d39ccb9ce69e7e3627a0fb129952ef4d0a1284 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_keller, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 20 15:17:01 compute-0 systemd[1]: libpod-conmon-780a378da459b1b495ac1f0d11d39ccb9ce69e7e3627a0fb129952ef4d0a1284.scope: Deactivated successfully.
Jan 20 15:17:01 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e409 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:17:01 compute-0 podman[363890]: 2026-01-20 15:17:01.913337212 +0000 UTC m=+0.042460586 container create 7c55ef18ba6a4e9d465b51ecb51ff1fe4075b846e02d61f883f16f5ba6797869 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_liskov, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 15:17:01 compute-0 systemd[1]: Started libpod-conmon-7c55ef18ba6a4e9d465b51ecb51ff1fe4075b846e02d61f883f16f5ba6797869.scope.
Jan 20 15:17:01 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:17:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f32976a1ec8df1218804bd0d41f2811974441f5665aa3531cd62c8a0b795791/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 15:17:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f32976a1ec8df1218804bd0d41f2811974441f5665aa3531cd62c8a0b795791/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 15:17:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f32976a1ec8df1218804bd0d41f2811974441f5665aa3531cd62c8a0b795791/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 15:17:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f32976a1ec8df1218804bd0d41f2811974441f5665aa3531cd62c8a0b795791/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 15:17:01 compute-0 podman[363890]: 2026-01-20 15:17:01.894027051 +0000 UTC m=+0.023150445 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:17:01 compute-0 podman[363890]: 2026-01-20 15:17:01.997568104 +0000 UTC m=+0.126691498 container init 7c55ef18ba6a4e9d465b51ecb51ff1fe4075b846e02d61f883f16f5ba6797869 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_liskov, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True)
Jan 20 15:17:02 compute-0 podman[363890]: 2026-01-20 15:17:02.003633827 +0000 UTC m=+0.132757201 container start 7c55ef18ba6a4e9d465b51ecb51ff1fe4075b846e02d61f883f16f5ba6797869 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_liskov, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:17:02 compute-0 podman[363890]: 2026-01-20 15:17:02.007109371 +0000 UTC m=+0.136232745 container attach 7c55ef18ba6a4e9d465b51ecb51ff1fe4075b846e02d61f883f16f5ba6797869 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_liskov, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 20 15:17:02 compute-0 sudo[363912]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:17:02 compute-0 sudo[363912]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:17:02 compute-0 sudo[363912]: pam_unix(sudo:session): session closed for user root
Jan 20 15:17:02 compute-0 sudo[363937]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:17:02 compute-0 sudo[363937]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:17:02 compute-0 sudo[363937]: pam_unix(sudo:session): session closed for user root
Jan 20 15:17:02 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:17:02 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:17:02 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:17:02.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:17:02 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2837: 321 pgs: 321 active+clean; 537 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 986 KiB/s wr, 35 op/s
Jan 20 15:17:02 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/1955683945' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:17:02 compute-0 romantic_liskov[363907]: {
Jan 20 15:17:02 compute-0 romantic_liskov[363907]:     "afc3740a-ccee-46f8-b343-22648cc89376": {
Jan 20 15:17:02 compute-0 romantic_liskov[363907]:         "ceph_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 15:17:02 compute-0 romantic_liskov[363907]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 20 15:17:02 compute-0 romantic_liskov[363907]:         "osd_id": 0,
Jan 20 15:17:02 compute-0 romantic_liskov[363907]:         "osd_uuid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 15:17:02 compute-0 romantic_liskov[363907]:         "type": "bluestore"
Jan 20 15:17:02 compute-0 romantic_liskov[363907]:     }
Jan 20 15:17:02 compute-0 romantic_liskov[363907]: }
Jan 20 15:17:02 compute-0 systemd[1]: libpod-7c55ef18ba6a4e9d465b51ecb51ff1fe4075b846e02d61f883f16f5ba6797869.scope: Deactivated successfully.
Jan 20 15:17:02 compute-0 podman[363890]: 2026-01-20 15:17:02.820992006 +0000 UTC m=+0.950115400 container died 7c55ef18ba6a4e9d465b51ecb51ff1fe4075b846e02d61f883f16f5ba6797869 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_liskov, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 20 15:17:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-8f32976a1ec8df1218804bd0d41f2811974441f5665aa3531cd62c8a0b795791-merged.mount: Deactivated successfully.
Jan 20 15:17:02 compute-0 podman[363890]: 2026-01-20 15:17:02.870763669 +0000 UTC m=+0.999887043 container remove 7c55ef18ba6a4e9d465b51ecb51ff1fe4075b846e02d61f883f16f5ba6797869 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_liskov, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 20 15:17:02 compute-0 systemd[1]: libpod-conmon-7c55ef18ba6a4e9d465b51ecb51ff1fe4075b846e02d61f883f16f5ba6797869.scope: Deactivated successfully.
Jan 20 15:17:02 compute-0 sudo[363786]: pam_unix(sudo:session): session closed for user root
Jan 20 15:17:02 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 15:17:02 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:17:02 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 15:17:02 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:17:02 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 872abff2-42a0-43a5-a34f-b70abc8cd9ad does not exist
Jan 20 15:17:02 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 6864505c-b049-4794-a34c-f1e117b0b8eb does not exist
Jan 20 15:17:02 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 38531c12-f7b8-4545-a666-1a183852f3e7 does not exist
Jan 20 15:17:02 compute-0 sudo[363993]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:17:02 compute-0 sudo[363993]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:17:02 compute-0 sudo[363993]: pam_unix(sudo:session): session closed for user root
Jan 20 15:17:03 compute-0 sudo[364018]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 15:17:03 compute-0 sudo[364018]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:17:03 compute-0 sudo[364018]: pam_unix(sudo:session): session closed for user root
Jan 20 15:17:03 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:17:03 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:17:03 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:17:03.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:17:03 compute-0 ceph-mon[74360]: pgmap v2837: 321 pgs: 321 active+clean; 537 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 986 KiB/s wr, 35 op/s
Jan 20 15:17:03 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:17:03 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:17:03 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/548683424' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:17:04 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:17:04 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:17:04 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:17:04.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:17:04 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2838: 321 pgs: 321 active+clean; 560 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.6 MiB/s wr, 38 op/s
Jan 20 15:17:04 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/2580642773' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:17:04 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/4051330510' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:17:04 compute-0 ceph-mon[74360]: pgmap v2838: 321 pgs: 321 active+clean; 560 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.6 MiB/s wr, 38 op/s
Jan 20 15:17:05 compute-0 nova_compute[250018]: 2026-01-20 15:17:05.046 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:17:05 compute-0 nova_compute[250018]: 2026-01-20 15:17:05.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:17:05 compute-0 nova_compute[250018]: 2026-01-20 15:17:05.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:17:05 compute-0 nova_compute[250018]: 2026-01-20 15:17:05.076 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:17:05 compute-0 nova_compute[250018]: 2026-01-20 15:17:05.076 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:17:05 compute-0 nova_compute[250018]: 2026-01-20 15:17:05.076 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:17:05 compute-0 nova_compute[250018]: 2026-01-20 15:17:05.077 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 15:17:05 compute-0 nova_compute[250018]: 2026-01-20 15:17:05.077 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:17:05 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:17:05 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:17:05 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:17:05.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:17:05 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:17:05 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2608648706' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:17:05 compute-0 nova_compute[250018]: 2026-01-20 15:17:05.507 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:17:05 compute-0 nova_compute[250018]: 2026-01-20 15:17:05.514 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:17:05 compute-0 podman[364067]: 2026-01-20 15:17:05.607858291 +0000 UTC m=+0.050572066 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 20 15:17:05 compute-0 nova_compute[250018]: 2026-01-20 15:17:05.626 250022 DEBUG nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] skipping disk for instance-000000bd as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 15:17:05 compute-0 nova_compute[250018]: 2026-01-20 15:17:05.627 250022 DEBUG nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] skipping disk for instance-000000bd as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 15:17:05 compute-0 nova_compute[250018]: 2026-01-20 15:17:05.630 250022 DEBUG nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] skipping disk for instance-000000b6 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 15:17:05 compute-0 nova_compute[250018]: 2026-01-20 15:17:05.631 250022 DEBUG nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] skipping disk for instance-000000b6 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 15:17:05 compute-0 podman[364066]: 2026-01-20 15:17:05.646581065 +0000 UTC m=+0.089038572 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, 
org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller)
Jan 20 15:17:05 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/2900254021' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:17:05 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/2608648706' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:17:05 compute-0 nova_compute[250018]: 2026-01-20 15:17:05.784 250022 WARNING nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:17:05 compute-0 nova_compute[250018]: 2026-01-20 15:17:05.785 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3718MB free_disk=20.805618286132812GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 15:17:05 compute-0 nova_compute[250018]: 2026-01-20 15:17:05.785 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:17:05 compute-0 nova_compute[250018]: 2026-01-20 15:17:05.785 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:17:05 compute-0 nova_compute[250018]: 2026-01-20 15:17:05.872 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Instance e79c0704-f95e-422f-9c25-ed35fca7cb7c actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 15:17:05 compute-0 nova_compute[250018]: 2026-01-20 15:17:05.872 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Instance b3504af3-390e-4ab0-8af6-15749a887d8f actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 192, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 15:17:05 compute-0 nova_compute[250018]: 2026-01-20 15:17:05.872 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 15:17:05 compute-0 nova_compute[250018]: 2026-01-20 15:17:05.872 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=832MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 15:17:05 compute-0 nova_compute[250018]: 2026-01-20 15:17:05.941 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:17:06 compute-0 nova_compute[250018]: 2026-01-20 15:17:06.310 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:17:06 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:17:06 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/104006290' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:17:06 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:17:06 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:17:06 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:17:06.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:17:06 compute-0 nova_compute[250018]: 2026-01-20 15:17:06.405 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:17:06 compute-0 nova_compute[250018]: 2026-01-20 15:17:06.410 250022 DEBUG nova.compute.provider_tree [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:17:06 compute-0 nova_compute[250018]: 2026-01-20 15:17:06.443 250022 DEBUG nova.scheduler.client.report [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:17:06 compute-0 nova_compute[250018]: 2026-01-20 15:17:06.461 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 15:17:06 compute-0 nova_compute[250018]: 2026-01-20 15:17:06.461 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.676s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:17:06 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2839: 321 pgs: 321 active+clean; 601 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 3.2 MiB/s wr, 64 op/s
Jan 20 15:17:06 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/3529627429' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:17:06 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/104006290' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:17:06 compute-0 ceph-mon[74360]: pgmap v2839: 321 pgs: 321 active+clean; 601 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 3.2 MiB/s wr, 64 op/s
Jan 20 15:17:06 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e409 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:17:07 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:17:07 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:17:07 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:17:07.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:17:07 compute-0 nova_compute[250018]: 2026-01-20 15:17:07.461 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:17:07 compute-0 nova_compute[250018]: 2026-01-20 15:17:07.462 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 15:17:07 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/3133455725' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:17:08 compute-0 nova_compute[250018]: 2026-01-20 15:17:08.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:17:08 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:17:08 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:17:08 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:17:08.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:17:08 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2840: 321 pgs: 321 active+clean; 601 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 40 KiB/s rd, 3.2 MiB/s wr, 63 op/s
Jan 20 15:17:08 compute-0 sshd-session[364133]: Invalid user admin from 134.122.57.138 port 47052
Jan 20 15:17:08 compute-0 ceph-mon[74360]: pgmap v2840: 321 pgs: 321 active+clean; 601 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 40 KiB/s rd, 3.2 MiB/s wr, 63 op/s
Jan 20 15:17:08 compute-0 sshd-session[364133]: Connection closed by invalid user admin 134.122.57.138 port 47052 [preauth]
Jan 20 15:17:09 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:17:09 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:17:09 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:17:09.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:17:09 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/630892543' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:17:09 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/1661905444' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:17:10 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:17:10 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:17:10 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:17:10.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:17:10 compute-0 nova_compute[250018]: 2026-01-20 15:17:10.515 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:17:10 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2841: 321 pgs: 321 active+clean; 614 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 47 KiB/s rd, 3.6 MiB/s wr, 73 op/s
Jan 20 15:17:10 compute-0 ceph-mon[74360]: pgmap v2841: 321 pgs: 321 active+clean; 614 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 47 KiB/s rd, 3.6 MiB/s wr, 73 op/s
Jan 20 15:17:10 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/3979331012' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:17:11 compute-0 nova_compute[250018]: 2026-01-20 15:17:11.313 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:17:11 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:17:11 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:17:11 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:17:11.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:17:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 15:17:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:17:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 20 15:17:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:17:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.009691833007630747 of space, bias 1.0, pg target 2.907549902289224 quantized to 32 (current 32)
Jan 20 15:17:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:17:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.005310660652801042 of space, bias 1.0, pg target 1.5825768745347106 quantized to 32 (current 32)
Jan 20 15:17:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:17:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.398084170854272e-05 quantized to 32 (current 32)
Jan 20 15:17:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:17:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5652333935301508 quantized to 32 (current 32)
Jan 20 15:17:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:17:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001727386934673367 quantized to 16 (current 32)
Jan 20 15:17:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:17:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 15:17:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:17:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021592336683417087 quantized to 32 (current 32)
Jan 20 15:17:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:17:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018353486180904522 quantized to 32 (current 32)
Jan 20 15:17:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:17:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 15:17:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:17:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043184673366834174 quantized to 32 (current 32)
Jan 20 15:17:11 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e409 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:17:12 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:17:12 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:17:12 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:17:12.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:17:12 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2842: 321 pgs: 321 active+clean; 614 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 46 KiB/s rd, 3.6 MiB/s wr, 70 op/s
Jan 20 15:17:13 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:17:13 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:17:13 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:17:13.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:17:13 compute-0 ceph-mon[74360]: pgmap v2842: 321 pgs: 321 active+clean; 614 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 46 KiB/s rd, 3.6 MiB/s wr, 70 op/s
Jan 20 15:17:13 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/1693259685' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:17:13 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/1693259685' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:17:14 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:17:14 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:17:14 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:17:14.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:17:14 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2843: 321 pgs: 321 active+clean; 614 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 782 KiB/s rd, 2.6 MiB/s wr, 77 op/s
Jan 20 15:17:14 compute-0 ceph-mon[74360]: pgmap v2843: 321 pgs: 321 active+clean; 614 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 782 KiB/s rd, 2.6 MiB/s wr, 77 op/s
Jan 20 15:17:15 compute-0 nova_compute[250018]: 2026-01-20 15:17:15.046 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:17:15 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:17:15 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:17:15 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:17:15.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:17:15 compute-0 nova_compute[250018]: 2026-01-20 15:17:15.516 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:17:15 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e409 do_prune osdmap full prune enabled
Jan 20 15:17:15 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e410 e410: 3 total, 3 up, 3 in
Jan 20 15:17:15 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e410: 3 total, 3 up, 3 in
Jan 20 15:17:16 compute-0 nova_compute[250018]: 2026-01-20 15:17:16.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:17:16 compute-0 nova_compute[250018]: 2026-01-20 15:17:16.316 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:17:16 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:17:16 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:17:16 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:17:16.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:17:16 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2845: 321 pgs: 321 active+clean; 614 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.3 MiB/s rd, 447 KiB/s wr, 182 op/s
Jan 20 15:17:16 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e410 do_prune osdmap full prune enabled
Jan 20 15:17:16 compute-0 ceph-mon[74360]: osdmap e410: 3 total, 3 up, 3 in
Jan 20 15:17:16 compute-0 ceph-mon[74360]: pgmap v2845: 321 pgs: 321 active+clean; 614 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.3 MiB/s rd, 447 KiB/s wr, 182 op/s
Jan 20 15:17:16 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e411 e411: 3 total, 3 up, 3 in
Jan 20 15:17:16 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e411: 3 total, 3 up, 3 in
Jan 20 15:17:16 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e411 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:17:16 compute-0 ceph-osd[84815]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 20 15:17:16 compute-0 ceph-osd[84815]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4800.1 total, 600.0 interval
                                           Cumulative writes: 51K writes, 195K keys, 51K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.04 MB/s
                                           Cumulative WAL: 51K writes, 19K syncs, 2.68 writes per sync, written: 0.18 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5949 writes, 22K keys, 5949 commit groups, 1.0 writes per commit group, ingest: 21.30 MB, 0.04 MB/s
                                           Interval WAL: 5949 writes, 2410 syncs, 2.47 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 20 15:17:17 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:17:17 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:17:17 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:17:17.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:17:17 compute-0 ceph-mon[74360]: osdmap e411: 3 total, 3 up, 3 in
Jan 20 15:17:18 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:17:18 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:17:18 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:17:18.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:17:18 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2847: 321 pgs: 321 active+clean; 614 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.4 MiB/s rd, 42 KiB/s wr, 213 op/s
Jan 20 15:17:18 compute-0 ceph-mon[74360]: pgmap v2847: 321 pgs: 321 active+clean; 614 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.4 MiB/s rd, 42 KiB/s wr, 213 op/s
Jan 20 15:17:19 compute-0 nova_compute[250018]: 2026-01-20 15:17:19.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:17:19 compute-0 nova_compute[250018]: 2026-01-20 15:17:19.051 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 15:17:19 compute-0 nova_compute[250018]: 2026-01-20 15:17:19.051 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 15:17:19 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:17:19 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:17:19 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:17:19.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:17:19 compute-0 nova_compute[250018]: 2026-01-20 15:17:19.838 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "refresh_cache-e79c0704-f95e-422f-9c25-ed35fca7cb7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:17:19 compute-0 nova_compute[250018]: 2026-01-20 15:17:19.838 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquired lock "refresh_cache-e79c0704-f95e-422f-9c25-ed35fca7cb7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:17:19 compute-0 nova_compute[250018]: 2026-01-20 15:17:19.838 250022 DEBUG nova.network.neutron [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] [instance: e79c0704-f95e-422f-9c25-ed35fca7cb7c] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 20 15:17:19 compute-0 nova_compute[250018]: 2026-01-20 15:17:19.838 250022 DEBUG nova.objects.instance [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lazy-loading 'info_cache' on Instance uuid e79c0704-f95e-422f-9c25-ed35fca7cb7c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:17:20 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:17:20 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:17:20 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:17:20.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:17:20 compute-0 nova_compute[250018]: 2026-01-20 15:17:20.519 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:17:20 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2848: 321 pgs: 321 active+clean; 614 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.7 MiB/s rd, 43 KiB/s wr, 231 op/s
Jan 20 15:17:20 compute-0 ceph-mon[74360]: pgmap v2848: 321 pgs: 321 active+clean; 614 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.7 MiB/s rd, 43 KiB/s wr, 231 op/s
Jan 20 15:17:21 compute-0 nova_compute[250018]: 2026-01-20 15:17:21.317 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:17:21 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:17:21 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:17:21 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:17:21.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:17:21 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e411 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:17:21 compute-0 nova_compute[250018]: 2026-01-20 15:17:21.894 250022 DEBUG nova.network.neutron [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] [instance: e79c0704-f95e-422f-9c25-ed35fca7cb7c] Updating instance_info_cache with network_info: [{"id": "1421cc5f-9a45-447a-bb3a-3f13dcc5a309", "address": "fa:16:3e:3b:1e:c2", "network": {"id": "c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1423306001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fff727019f86407498e83d7948d54962", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1421cc5f-9a", "ovs_interfaceid": "1421cc5f-9a45-447a-bb3a-3f13dcc5a309", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:17:21 compute-0 nova_compute[250018]: 2026-01-20 15:17:21.909 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Releasing lock "refresh_cache-e79c0704-f95e-422f-9c25-ed35fca7cb7c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:17:21 compute-0 nova_compute[250018]: 2026-01-20 15:17:21.909 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] [instance: e79c0704-f95e-422f-9c25-ed35fca7cb7c] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 20 15:17:22 compute-0 sudo[364142]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:17:22 compute-0 sudo[364142]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:17:22 compute-0 sudo[364142]: pam_unix(sudo:session): session closed for user root
Jan 20 15:17:22 compute-0 sudo[364168]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:17:22 compute-0 sudo[364168]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:17:22 compute-0 sudo[364168]: pam_unix(sudo:session): session closed for user root
Jan 20 15:17:22 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:17:22 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:17:22 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:17:22.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:17:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:17:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:17:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:17:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:17:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:17:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:17:22 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2849: 321 pgs: 321 active+clean; 614 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.6 MiB/s rd, 42 KiB/s wr, 184 op/s
Jan 20 15:17:22 compute-0 ceph-mon[74360]: pgmap v2849: 321 pgs: 321 active+clean; 614 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.6 MiB/s rd, 42 KiB/s wr, 184 op/s
Jan 20 15:17:23 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:17:23 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:17:23 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:17:23.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:17:24 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/3656409958' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:17:24 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:17:24 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:17:24 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:17:24.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:17:24 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2850: 321 pgs: 321 active+clean; 618 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.1 MiB/s rd, 365 KiB/s wr, 138 op/s
Jan 20 15:17:25 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e411 do_prune osdmap full prune enabled
Jan 20 15:17:25 compute-0 ceph-mon[74360]: pgmap v2850: 321 pgs: 321 active+clean; 618 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.1 MiB/s rd, 365 KiB/s wr, 138 op/s
Jan 20 15:17:25 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e412 e412: 3 total, 3 up, 3 in
Jan 20 15:17:25 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e412: 3 total, 3 up, 3 in
Jan 20 15:17:25 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:17:25 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:17:25 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:17:25.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:17:25 compute-0 nova_compute[250018]: 2026-01-20 15:17:25.520 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:17:26 compute-0 ceph-mon[74360]: osdmap e412: 3 total, 3 up, 3 in
Jan 20 15:17:26 compute-0 nova_compute[250018]: 2026-01-20 15:17:26.320 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:17:26 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:17:26 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:17:26 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:17:26.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:17:26 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2852: 321 pgs: 321 active+clean; 630 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 651 KiB/s rd, 2.6 MiB/s wr, 119 op/s
Jan 20 15:17:26 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e412 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:17:27 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e412 do_prune osdmap full prune enabled
Jan 20 15:17:27 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/2594788816' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:17:27 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/2594788816' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:17:27 compute-0 ceph-mon[74360]: pgmap v2852: 321 pgs: 321 active+clean; 630 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 651 KiB/s rd, 2.6 MiB/s wr, 119 op/s
Jan 20 15:17:27 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e413 e413: 3 total, 3 up, 3 in
Jan 20 15:17:27 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e413: 3 total, 3 up, 3 in
Jan 20 15:17:27 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:17:27 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:17:27 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:17:27.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:17:28 compute-0 ceph-mon[74360]: osdmap e413: 3 total, 3 up, 3 in
Jan 20 15:17:28 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:17:28 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:17:28 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:17:28.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:17:28 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 20 15:17:28 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1785409203' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:17:28 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 20 15:17:28 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1785409203' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:17:28 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2854: 321 pgs: 321 active+clean; 630 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 401 KiB/s rd, 3.2 MiB/s wr, 120 op/s
Jan 20 15:17:29 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/1785409203' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:17:29 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/1785409203' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:17:29 compute-0 ceph-mon[74360]: pgmap v2854: 321 pgs: 321 active+clean; 630 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 401 KiB/s rd, 3.2 MiB/s wr, 120 op/s
Jan 20 15:17:29 compute-0 nova_compute[250018]: 2026-01-20 15:17:29.416 250022 DEBUG oslo_concurrency.lockutils [None req-2d4e7b10-5bfb-4ed1-964e-b9de8d4eef01 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Acquiring lock "b3504af3-390e-4ab0-8af6-15749a887d8f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:17:29 compute-0 nova_compute[250018]: 2026-01-20 15:17:29.416 250022 DEBUG oslo_concurrency.lockutils [None req-2d4e7b10-5bfb-4ed1-964e-b9de8d4eef01 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Lock "b3504af3-390e-4ab0-8af6-15749a887d8f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:17:29 compute-0 nova_compute[250018]: 2026-01-20 15:17:29.416 250022 DEBUG oslo_concurrency.lockutils [None req-2d4e7b10-5bfb-4ed1-964e-b9de8d4eef01 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Acquiring lock "b3504af3-390e-4ab0-8af6-15749a887d8f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:17:29 compute-0 nova_compute[250018]: 2026-01-20 15:17:29.417 250022 DEBUG oslo_concurrency.lockutils [None req-2d4e7b10-5bfb-4ed1-964e-b9de8d4eef01 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Lock "b3504af3-390e-4ab0-8af6-15749a887d8f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:17:29 compute-0 nova_compute[250018]: 2026-01-20 15:17:29.417 250022 DEBUG oslo_concurrency.lockutils [None req-2d4e7b10-5bfb-4ed1-964e-b9de8d4eef01 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Lock "b3504af3-390e-4ab0-8af6-15749a887d8f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:17:29 compute-0 nova_compute[250018]: 2026-01-20 15:17:29.418 250022 INFO nova.compute.manager [None req-2d4e7b10-5bfb-4ed1-964e-b9de8d4eef01 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] [instance: b3504af3-390e-4ab0-8af6-15749a887d8f] Terminating instance
Jan 20 15:17:29 compute-0 nova_compute[250018]: 2026-01-20 15:17:29.419 250022 DEBUG nova.compute.manager [None req-2d4e7b10-5bfb-4ed1-964e-b9de8d4eef01 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] [instance: b3504af3-390e-4ab0-8af6-15749a887d8f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 15:17:29 compute-0 kernel: tap1b18c40e-cc (unregistering): left promiscuous mode
Jan 20 15:17:29 compute-0 NetworkManager[48960]: <info>  [1768922249.4680] device (tap1b18c40e-cc): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 15:17:29 compute-0 ovn_controller[148666]: 2026-01-20T15:17:29Z|00700|binding|INFO|Releasing lport 1b18c40e-cce7-4971-98d2-c95ec41c9040 from this chassis (sb_readonly=0)
Jan 20 15:17:29 compute-0 ovn_controller[148666]: 2026-01-20T15:17:29Z|00701|binding|INFO|Setting lport 1b18c40e-cce7-4971-98d2-c95ec41c9040 down in Southbound
Jan 20 15:17:29 compute-0 nova_compute[250018]: 2026-01-20 15:17:29.476 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:17:29 compute-0 ovn_controller[148666]: 2026-01-20T15:17:29Z|00702|binding|INFO|Removing iface tap1b18c40e-cc ovn-installed in OVS
Jan 20 15:17:29 compute-0 nova_compute[250018]: 2026-01-20 15:17:29.478 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:17:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:17:29.483 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:eb:05:c3 10.100.0.7'], port_security=['fa:16:3e:eb:05:c3 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'b3504af3-390e-4ab0-8af6-15749a887d8f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fff727019f86407498e83d7948d54962', 'neutron:revision_number': '8', 'neutron:security_group_ids': '5ace6a2f-56c6-4679-bb81-70ccb27ab312', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.179'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=87d69a20-7690-494a-ac16-7c600840561a, chassis=[], tunnel_key=7, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=1b18c40e-cce7-4971-98d2-c95ec41c9040) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:17:29 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:17:29 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:17:29 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:17:29.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:17:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:17:29.484 160071 INFO neutron.agent.ovn.metadata.agent [-] Port 1b18c40e-cce7-4971-98d2-c95ec41c9040 in datapath c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab unbound from our chassis
Jan 20 15:17:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:17:29.485 160071 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab
Jan 20 15:17:29 compute-0 nova_compute[250018]: 2026-01-20 15:17:29.493 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:17:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:17:29.500 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[4870a4ec-2b5c-4a0d-9879-a0ed908d7078]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:17:29 compute-0 systemd[1]: machine-qemu\x2d85\x2dinstance\x2d000000bd.scope: Deactivated successfully.
Jan 20 15:17:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:17:29.527 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[bbb68554-3e3e-4e9d-9017-04d2a8bef248]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:17:29 compute-0 systemd[1]: machine-qemu\x2d85\x2dinstance\x2d000000bd.scope: Consumed 16.182s CPU time.
Jan 20 15:17:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:17:29.530 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[66ad74b8-21f5-449a-9bfe-589e98c5b1f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:17:29 compute-0 systemd-machined[216401]: Machine qemu-85-instance-000000bd terminated.
Jan 20 15:17:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:17:29.555 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[593830af-dd07-4848-bacf-ee4aa1b6f560]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:17:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:17:29.571 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[717c02d6-986c-4d31-b4bb-ef5c16e488cd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc1f4a971-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fc:30:f0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 11, 'rx_bytes': 784, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 11, 'rx_bytes': 784, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 208], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 795048, 'reachable_time': 42886, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 364207, 'error': None, 'target': 'ovnmeta-c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:17:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:17:29.584 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[90987252-8bd6-4bd8-b65c-cf7a7b7a7a11]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapc1f4a971-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 795057, 'tstamp': 795057}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 364208, 'error': None, 'target': 'ovnmeta-c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapc1f4a971-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 795060, 'tstamp': 795060}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 364208, 'error': None, 'target': 'ovnmeta-c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:17:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:17:29.585 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc1f4a971-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:17:29 compute-0 nova_compute[250018]: 2026-01-20 15:17:29.587 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:17:29 compute-0 nova_compute[250018]: 2026-01-20 15:17:29.591 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:17:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:17:29.591 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc1f4a971-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:17:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:17:29.592 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:17:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:17:29.592 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc1f4a971-00, col_values=(('external_ids', {'iface-id': 'b20b0e27-0b08-4316-b6df-6784416f44c0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:17:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:17:29.592 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:17:29 compute-0 nova_compute[250018]: 2026-01-20 15:17:29.653 250022 INFO nova.virt.libvirt.driver [-] [instance: b3504af3-390e-4ab0-8af6-15749a887d8f] Instance destroyed successfully.
Jan 20 15:17:29 compute-0 nova_compute[250018]: 2026-01-20 15:17:29.653 250022 DEBUG nova.objects.instance [None req-2d4e7b10-5bfb-4ed1-964e-b9de8d4eef01 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Lazy-loading 'resources' on Instance uuid b3504af3-390e-4ab0-8af6-15749a887d8f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:17:29 compute-0 nova_compute[250018]: 2026-01-20 15:17:29.669 250022 DEBUG nova.virt.libvirt.vif [None req-2d4e7b10-5bfb-4ed1-964e-b9de8d4eef01 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T15:14:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='multiattach-server-1',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='multiattach-server-1',id=189,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBL5L2o6o5dLcQyaIfhCZ5CKxQlecqNGmP68oHIQEsVoKIC2qfrMKjObT9GdMU8oznX9LVUwIWCShhlEJu9ZqPiutEL2afEJ1hQQamjERNcx9wWS2NfOgykA4yugQphfOtA==',key_name='tempest-keypair-1568469072',keypairs=<?>,launch_index=0,launched_at=2026-01-20T15:16:29Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='fff727019f86407498e83d7948d54962',ramdisk_id='',reservation_id='r-akp879yg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeMultiAttachTest-418194625',owner_user_name='tempest-AttachVolumeMultiAttachTest-418194625-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T15:16:36Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='e9cc4ce3e069479ba9c789b378a68a1d',uuid=b3504af3-390e-4ab0-8af6-15749a887d8f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1b18c40e-cce7-4971-98d2-c95ec41c9040", "address": "fa:16:3e:eb:05:c3", "network": {"id": "c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1423306001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.179", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fff727019f86407498e83d7948d54962", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b18c40e-cc", "ovs_interfaceid": "1b18c40e-cce7-4971-98d2-c95ec41c9040", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 15:17:29 compute-0 nova_compute[250018]: 2026-01-20 15:17:29.669 250022 DEBUG nova.network.os_vif_util [None req-2d4e7b10-5bfb-4ed1-964e-b9de8d4eef01 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Converting VIF {"id": "1b18c40e-cce7-4971-98d2-c95ec41c9040", "address": "fa:16:3e:eb:05:c3", "network": {"id": "c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1423306001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.179", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fff727019f86407498e83d7948d54962", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b18c40e-cc", "ovs_interfaceid": "1b18c40e-cce7-4971-98d2-c95ec41c9040", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:17:29 compute-0 nova_compute[250018]: 2026-01-20 15:17:29.670 250022 DEBUG nova.network.os_vif_util [None req-2d4e7b10-5bfb-4ed1-964e-b9de8d4eef01 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:eb:05:c3,bridge_name='br-int',has_traffic_filtering=True,id=1b18c40e-cce7-4971-98d2-c95ec41c9040,network=Network(c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1b18c40e-cc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:17:29 compute-0 nova_compute[250018]: 2026-01-20 15:17:29.670 250022 DEBUG os_vif [None req-2d4e7b10-5bfb-4ed1-964e-b9de8d4eef01 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:eb:05:c3,bridge_name='br-int',has_traffic_filtering=True,id=1b18c40e-cce7-4971-98d2-c95ec41c9040,network=Network(c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1b18c40e-cc') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 15:17:29 compute-0 nova_compute[250018]: 2026-01-20 15:17:29.672 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:17:29 compute-0 nova_compute[250018]: 2026-01-20 15:17:29.672 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1b18c40e-cc, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:17:29 compute-0 nova_compute[250018]: 2026-01-20 15:17:29.674 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:17:29 compute-0 nova_compute[250018]: 2026-01-20 15:17:29.675 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:17:29 compute-0 nova_compute[250018]: 2026-01-20 15:17:29.677 250022 INFO os_vif [None req-2d4e7b10-5bfb-4ed1-964e-b9de8d4eef01 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:eb:05:c3,bridge_name='br-int',has_traffic_filtering=True,id=1b18c40e-cce7-4971-98d2-c95ec41c9040,network=Network(c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1b18c40e-cc')
Jan 20 15:17:29 compute-0 nova_compute[250018]: 2026-01-20 15:17:29.780 250022 DEBUG nova.compute.manager [req-2cea2c2f-1e11-4682-bfd9-841f2cd64bcc req-92691a3e-8370-4e6f-b57b-a01ea284c76d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b3504af3-390e-4ab0-8af6-15749a887d8f] Received event network-vif-unplugged-1b18c40e-cce7-4971-98d2-c95ec41c9040 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:17:29 compute-0 nova_compute[250018]: 2026-01-20 15:17:29.780 250022 DEBUG oslo_concurrency.lockutils [req-2cea2c2f-1e11-4682-bfd9-841f2cd64bcc req-92691a3e-8370-4e6f-b57b-a01ea284c76d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "b3504af3-390e-4ab0-8af6-15749a887d8f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:17:29 compute-0 nova_compute[250018]: 2026-01-20 15:17:29.781 250022 DEBUG oslo_concurrency.lockutils [req-2cea2c2f-1e11-4682-bfd9-841f2cd64bcc req-92691a3e-8370-4e6f-b57b-a01ea284c76d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "b3504af3-390e-4ab0-8af6-15749a887d8f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:17:29 compute-0 nova_compute[250018]: 2026-01-20 15:17:29.781 250022 DEBUG oslo_concurrency.lockutils [req-2cea2c2f-1e11-4682-bfd9-841f2cd64bcc req-92691a3e-8370-4e6f-b57b-a01ea284c76d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "b3504af3-390e-4ab0-8af6-15749a887d8f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:17:29 compute-0 nova_compute[250018]: 2026-01-20 15:17:29.781 250022 DEBUG nova.compute.manager [req-2cea2c2f-1e11-4682-bfd9-841f2cd64bcc req-92691a3e-8370-4e6f-b57b-a01ea284c76d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b3504af3-390e-4ab0-8af6-15749a887d8f] No waiting events found dispatching network-vif-unplugged-1b18c40e-cce7-4971-98d2-c95ec41c9040 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:17:29 compute-0 nova_compute[250018]: 2026-01-20 15:17:29.781 250022 DEBUG nova.compute.manager [req-2cea2c2f-1e11-4682-bfd9-841f2cd64bcc req-92691a3e-8370-4e6f-b57b-a01ea284c76d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b3504af3-390e-4ab0-8af6-15749a887d8f] Received event network-vif-unplugged-1b18c40e-cce7-4971-98d2-c95ec41c9040 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 20 15:17:30 compute-0 nova_compute[250018]: 2026-01-20 15:17:30.078 250022 INFO nova.virt.libvirt.driver [None req-2d4e7b10-5bfb-4ed1-964e-b9de8d4eef01 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] [instance: b3504af3-390e-4ab0-8af6-15749a887d8f] Deleting instance files /var/lib/nova/instances/b3504af3-390e-4ab0-8af6-15749a887d8f_del
Jan 20 15:17:30 compute-0 nova_compute[250018]: 2026-01-20 15:17:30.079 250022 INFO nova.virt.libvirt.driver [None req-2d4e7b10-5bfb-4ed1-964e-b9de8d4eef01 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] [instance: b3504af3-390e-4ab0-8af6-15749a887d8f] Deletion of /var/lib/nova/instances/b3504af3-390e-4ab0-8af6-15749a887d8f_del complete
Jan 20 15:17:30 compute-0 nova_compute[250018]: 2026-01-20 15:17:30.138 250022 INFO nova.compute.manager [None req-2d4e7b10-5bfb-4ed1-964e-b9de8d4eef01 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] [instance: b3504af3-390e-4ab0-8af6-15749a887d8f] Took 0.72 seconds to destroy the instance on the hypervisor.
Jan 20 15:17:30 compute-0 nova_compute[250018]: 2026-01-20 15:17:30.139 250022 DEBUG oslo.service.loopingcall [None req-2d4e7b10-5bfb-4ed1-964e-b9de8d4eef01 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 15:17:30 compute-0 nova_compute[250018]: 2026-01-20 15:17:30.139 250022 DEBUG nova.compute.manager [-] [instance: b3504af3-390e-4ab0-8af6-15749a887d8f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 15:17:30 compute-0 nova_compute[250018]: 2026-01-20 15:17:30.139 250022 DEBUG nova.network.neutron [-] [instance: b3504af3-390e-4ab0-8af6-15749a887d8f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 15:17:30 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:17:30 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:17:30 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:17:30.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:17:30 compute-0 nova_compute[250018]: 2026-01-20 15:17:30.555 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:17:30 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2855: 321 pgs: 321 active+clean; 565 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 490 KiB/s rd, 3.2 MiB/s wr, 195 op/s
Jan 20 15:17:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:17:30.787 160071 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:17:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:17:30.787 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:17:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:17:30.788 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:17:30 compute-0 nova_compute[250018]: 2026-01-20 15:17:30.985 250022 DEBUG nova.network.neutron [-] [instance: b3504af3-390e-4ab0-8af6-15749a887d8f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:17:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:17:30.993 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=63, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=62) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:17:30 compute-0 nova_compute[250018]: 2026-01-20 15:17:30.993 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:17:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:17:30.994 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 15:17:31 compute-0 nova_compute[250018]: 2026-01-20 15:17:31.032 250022 INFO nova.compute.manager [-] [instance: b3504af3-390e-4ab0-8af6-15749a887d8f] Took 0.89 seconds to deallocate network for instance.
Jan 20 15:17:31 compute-0 nova_compute[250018]: 2026-01-20 15:17:31.077 250022 DEBUG nova.compute.manager [req-47785f2b-a3a1-4553-b548-c5a8ec99e1d2 req-b488c59c-d214-4e2b-8e96-185666804ea4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b3504af3-390e-4ab0-8af6-15749a887d8f] Received event network-vif-deleted-1b18c40e-cce7-4971-98d2-c95ec41c9040 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:17:31 compute-0 nova_compute[250018]: 2026-01-20 15:17:31.122 250022 DEBUG oslo_concurrency.lockutils [None req-2d4e7b10-5bfb-4ed1-964e-b9de8d4eef01 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:17:31 compute-0 nova_compute[250018]: 2026-01-20 15:17:31.122 250022 DEBUG oslo_concurrency.lockutils [None req-2d4e7b10-5bfb-4ed1-964e-b9de8d4eef01 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:17:31 compute-0 nova_compute[250018]: 2026-01-20 15:17:31.201 250022 DEBUG oslo_concurrency.processutils [None req-2d4e7b10-5bfb-4ed1-964e-b9de8d4eef01 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:17:31 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:17:31 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:17:31 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:17:31.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:17:31 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:17:31 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/13352825' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:17:31 compute-0 nova_compute[250018]: 2026-01-20 15:17:31.667 250022 DEBUG oslo_concurrency.processutils [None req-2d4e7b10-5bfb-4ed1-964e-b9de8d4eef01 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:17:31 compute-0 nova_compute[250018]: 2026-01-20 15:17:31.674 250022 DEBUG nova.compute.provider_tree [None req-2d4e7b10-5bfb-4ed1-964e-b9de8d4eef01 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:17:31 compute-0 ceph-mon[74360]: pgmap v2855: 321 pgs: 321 active+clean; 565 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 490 KiB/s rd, 3.2 MiB/s wr, 195 op/s
Jan 20 15:17:31 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/13352825' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:17:31 compute-0 nova_compute[250018]: 2026-01-20 15:17:31.696 250022 DEBUG nova.scheduler.client.report [None req-2d4e7b10-5bfb-4ed1-964e-b9de8d4eef01 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:17:31 compute-0 nova_compute[250018]: 2026-01-20 15:17:31.722 250022 DEBUG oslo_concurrency.lockutils [None req-2d4e7b10-5bfb-4ed1-964e-b9de8d4eef01 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.600s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:17:31 compute-0 nova_compute[250018]: 2026-01-20 15:17:31.752 250022 INFO nova.scheduler.client.report [None req-2d4e7b10-5bfb-4ed1-964e-b9de8d4eef01 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Deleted allocations for instance b3504af3-390e-4ab0-8af6-15749a887d8f
Jan 20 15:17:31 compute-0 nova_compute[250018]: 2026-01-20 15:17:31.825 250022 DEBUG oslo_concurrency.lockutils [None req-2d4e7b10-5bfb-4ed1-964e-b9de8d4eef01 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Lock "b3504af3-390e-4ab0-8af6-15749a887d8f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.409s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:17:31 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:17:31 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e413 do_prune osdmap full prune enabled
Jan 20 15:17:31 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e414 e414: 3 total, 3 up, 3 in
Jan 20 15:17:31 compute-0 nova_compute[250018]: 2026-01-20 15:17:31.876 250022 DEBUG nova.compute.manager [req-d4ff7d46-c42a-483b-96e7-d1c9efa7f85d req-07630f5c-a5df-4259-baa4-d1f5feaa1c86 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b3504af3-390e-4ab0-8af6-15749a887d8f] Received event network-vif-plugged-1b18c40e-cce7-4971-98d2-c95ec41c9040 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:17:31 compute-0 nova_compute[250018]: 2026-01-20 15:17:31.876 250022 DEBUG oslo_concurrency.lockutils [req-d4ff7d46-c42a-483b-96e7-d1c9efa7f85d req-07630f5c-a5df-4259-baa4-d1f5feaa1c86 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "b3504af3-390e-4ab0-8af6-15749a887d8f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:17:31 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e414: 3 total, 3 up, 3 in
Jan 20 15:17:31 compute-0 nova_compute[250018]: 2026-01-20 15:17:31.877 250022 DEBUG oslo_concurrency.lockutils [req-d4ff7d46-c42a-483b-96e7-d1c9efa7f85d req-07630f5c-a5df-4259-baa4-d1f5feaa1c86 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "b3504af3-390e-4ab0-8af6-15749a887d8f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:17:31 compute-0 nova_compute[250018]: 2026-01-20 15:17:31.877 250022 DEBUG oslo_concurrency.lockutils [req-d4ff7d46-c42a-483b-96e7-d1c9efa7f85d req-07630f5c-a5df-4259-baa4-d1f5feaa1c86 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "b3504af3-390e-4ab0-8af6-15749a887d8f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:17:31 compute-0 nova_compute[250018]: 2026-01-20 15:17:31.877 250022 DEBUG nova.compute.manager [req-d4ff7d46-c42a-483b-96e7-d1c9efa7f85d req-07630f5c-a5df-4259-baa4-d1f5feaa1c86 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b3504af3-390e-4ab0-8af6-15749a887d8f] No waiting events found dispatching network-vif-plugged-1b18c40e-cce7-4971-98d2-c95ec41c9040 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:17:31 compute-0 nova_compute[250018]: 2026-01-20 15:17:31.878 250022 WARNING nova.compute.manager [req-d4ff7d46-c42a-483b-96e7-d1c9efa7f85d req-07630f5c-a5df-4259-baa4-d1f5feaa1c86 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b3504af3-390e-4ab0-8af6-15749a887d8f] Received unexpected event network-vif-plugged-1b18c40e-cce7-4971-98d2-c95ec41c9040 for instance with vm_state deleted and task_state None.
Jan 20 15:17:31 compute-0 ceph-mgr[74653]: [devicehealth INFO root] Check health
Jan 20 15:17:32 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:17:32 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:17:32 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:17:32.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:17:32 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2857: 321 pgs: 321 active+clean; 532 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 226 KiB/s rd, 982 KiB/s wr, 142 op/s
Jan 20 15:17:32 compute-0 ceph-mon[74360]: osdmap e414: 3 total, 3 up, 3 in
Jan 20 15:17:32 compute-0 ceph-mon[74360]: pgmap v2857: 321 pgs: 321 active+clean; 532 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 226 KiB/s rd, 982 KiB/s wr, 142 op/s
Jan 20 15:17:33 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:17:33 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:17:33 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:17:33.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:17:34 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:17:34 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:17:34 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:17:34.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:17:34 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2858: 321 pgs: 321 active+clean; 500 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 109 KiB/s rd, 42 KiB/s wr, 106 op/s
Jan 20 15:17:34 compute-0 nova_compute[250018]: 2026-01-20 15:17:34.674 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:17:35 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:17:35 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:17:35 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:17:35.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:17:35 compute-0 nova_compute[250018]: 2026-01-20 15:17:35.557 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:17:35 compute-0 ceph-mon[74360]: pgmap v2858: 321 pgs: 321 active+clean; 500 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 109 KiB/s rd, 42 KiB/s wr, 106 op/s
Jan 20 15:17:35 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/3058422568' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:17:36 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:17:36 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:17:36 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:17:36.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:17:36 compute-0 podman[364267]: 2026-01-20 15:17:36.481623432 +0000 UTC m=+0.067311947 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 20 15:17:36 compute-0 podman[364266]: 2026-01-20 15:17:36.51530963 +0000 UTC m=+0.100998325 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:17:36 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2859: 321 pgs: 321 active+clean; 438 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 118 KiB/s rd, 53 KiB/s wr, 127 op/s
Jan 20 15:17:36 compute-0 ceph-mon[74360]: pgmap v2859: 321 pgs: 321 active+clean; 438 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 118 KiB/s rd, 53 KiB/s wr, 127 op/s
Jan 20 15:17:36 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:17:36 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e414 do_prune osdmap full prune enabled
Jan 20 15:17:36 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e415 e415: 3 total, 3 up, 3 in
Jan 20 15:17:37 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e415: 3 total, 3 up, 3 in
Jan 20 15:17:37 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:17:37 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:17:37 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:17:37.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:17:37 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/1154417224' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:17:37 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/1154417224' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:17:37 compute-0 ceph-mon[74360]: osdmap e415: 3 total, 3 up, 3 in
Jan 20 15:17:38 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:17:38 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:17:38 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:17:38.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:17:38 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2861: 321 pgs: 321 active+clean; 438 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 47 KiB/s rd, 22 KiB/s wr, 71 op/s
Jan 20 15:17:39 compute-0 ceph-mon[74360]: pgmap v2861: 321 pgs: 321 active+clean; 438 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 47 KiB/s rd, 22 KiB/s wr, 71 op/s
Jan 20 15:17:39 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:17:39 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:17:39 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:17:39.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:17:39 compute-0 nova_compute[250018]: 2026-01-20 15:17:39.676 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:17:39 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:17:39.996 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=367c1a2c-b16a-4828-ab5a-626bb50023b4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '63'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:17:40 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/1237641148' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:17:40 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:17:40 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:17:40 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:17:40.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:17:40 compute-0 nova_compute[250018]: 2026-01-20 15:17:40.559 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:17:40 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2862: 321 pgs: 321 active+clean; 337 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 69 KiB/s rd, 19 KiB/s wr, 99 op/s
Jan 20 15:17:41 compute-0 ceph-mon[74360]: pgmap v2862: 321 pgs: 321 active+clean; 337 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 69 KiB/s rd, 19 KiB/s wr, 99 op/s
Jan 20 15:17:41 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/3071866721' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:17:41 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:17:41 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.002000053s ======
Jan 20 15:17:41 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:17:41.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Jan 20 15:17:41 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:17:42 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:17:42 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:17:42 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:17:42.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:17:42 compute-0 sudo[364318]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:17:42 compute-0 sudo[364318]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:17:42 compute-0 sudo[364318]: pam_unix(sudo:session): session closed for user root
Jan 20 15:17:42 compute-0 sudo[364343]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:17:42 compute-0 sudo[364343]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:17:42 compute-0 sudo[364343]: pam_unix(sudo:session): session closed for user root
Jan 20 15:17:42 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2863: 321 pgs: 321 active+clean; 289 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 85 KiB/s rd, 18 KiB/s wr, 119 op/s
Jan 20 15:17:43 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:17:43 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:17:43 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:17:43.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:17:43 compute-0 ceph-mon[74360]: pgmap v2863: 321 pgs: 321 active+clean; 289 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 85 KiB/s rd, 18 KiB/s wr, 119 op/s
Jan 20 15:17:44 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:17:44 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:17:44 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:17:44.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:17:44 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2864: 321 pgs: 321 active+clean; 262 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 86 KiB/s rd, 19 KiB/s wr, 123 op/s
Jan 20 15:17:44 compute-0 nova_compute[250018]: 2026-01-20 15:17:44.652 250022 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768922249.650829, b3504af3-390e-4ab0-8af6-15749a887d8f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:17:44 compute-0 nova_compute[250018]: 2026-01-20 15:17:44.652 250022 INFO nova.compute.manager [-] [instance: b3504af3-390e-4ab0-8af6-15749a887d8f] VM Stopped (Lifecycle Event)
Jan 20 15:17:44 compute-0 nova_compute[250018]: 2026-01-20 15:17:44.671 250022 DEBUG nova.compute.manager [None req-db11fb5c-bb4a-4e38-89e1-de55d2df22fb - - - - - -] [instance: b3504af3-390e-4ab0-8af6-15749a887d8f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:17:44 compute-0 nova_compute[250018]: 2026-01-20 15:17:44.677 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:17:44 compute-0 ceph-mon[74360]: pgmap v2864: 321 pgs: 321 active+clean; 262 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 86 KiB/s rd, 19 KiB/s wr, 123 op/s
Jan 20 15:17:44 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/3826035582' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:17:45 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:17:45 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:17:45 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:17:45.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:17:45 compute-0 nova_compute[250018]: 2026-01-20 15:17:45.560 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:17:46 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:17:46 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:17:46 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:17:46.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:17:46 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2865: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 78 KiB/s rd, 4.5 KiB/s wr, 112 op/s
Jan 20 15:17:46 compute-0 ceph-mon[74360]: pgmap v2865: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 78 KiB/s rd, 4.5 KiB/s wr, 112 op/s
Jan 20 15:17:46 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:17:46 compute-0 ovn_controller[148666]: 2026-01-20T15:17:46Z|00703|binding|INFO|Releasing lport b20b0e27-0b08-4316-b6df-6784416f44c0 from this chassis (sb_readonly=0)
Jan 20 15:17:46 compute-0 nova_compute[250018]: 2026-01-20 15:17:46.977 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:17:47 compute-0 ovn_controller[148666]: 2026-01-20T15:17:47Z|00704|binding|INFO|Releasing lport b20b0e27-0b08-4316-b6df-6784416f44c0 from this chassis (sb_readonly=0)
Jan 20 15:17:47 compute-0 nova_compute[250018]: 2026-01-20 15:17:47.058 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:17:47 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:17:47 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:17:47 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:17:47.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:17:47 compute-0 nova_compute[250018]: 2026-01-20 15:17:47.574 250022 DEBUG oslo_concurrency.lockutils [None req-7d72e24a-a569-49d0-825a-0ca3e7db15e3 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Acquiring lock "e79c0704-f95e-422f-9c25-ed35fca7cb7c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:17:47 compute-0 nova_compute[250018]: 2026-01-20 15:17:47.574 250022 DEBUG oslo_concurrency.lockutils [None req-7d72e24a-a569-49d0-825a-0ca3e7db15e3 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Lock "e79c0704-f95e-422f-9c25-ed35fca7cb7c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:17:47 compute-0 nova_compute[250018]: 2026-01-20 15:17:47.574 250022 DEBUG oslo_concurrency.lockutils [None req-7d72e24a-a569-49d0-825a-0ca3e7db15e3 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Acquiring lock "e79c0704-f95e-422f-9c25-ed35fca7cb7c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:17:47 compute-0 nova_compute[250018]: 2026-01-20 15:17:47.575 250022 DEBUG oslo_concurrency.lockutils [None req-7d72e24a-a569-49d0-825a-0ca3e7db15e3 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Lock "e79c0704-f95e-422f-9c25-ed35fca7cb7c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:17:47 compute-0 nova_compute[250018]: 2026-01-20 15:17:47.575 250022 DEBUG oslo_concurrency.lockutils [None req-7d72e24a-a569-49d0-825a-0ca3e7db15e3 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Lock "e79c0704-f95e-422f-9c25-ed35fca7cb7c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:17:47 compute-0 nova_compute[250018]: 2026-01-20 15:17:47.576 250022 INFO nova.compute.manager [None req-7d72e24a-a569-49d0-825a-0ca3e7db15e3 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] [instance: e79c0704-f95e-422f-9c25-ed35fca7cb7c] Terminating instance
Jan 20 15:17:47 compute-0 nova_compute[250018]: 2026-01-20 15:17:47.577 250022 DEBUG nova.compute.manager [None req-7d72e24a-a569-49d0-825a-0ca3e7db15e3 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] [instance: e79c0704-f95e-422f-9c25-ed35fca7cb7c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 15:17:47 compute-0 kernel: tap1421cc5f-9a (unregistering): left promiscuous mode
Jan 20 15:17:47 compute-0 NetworkManager[48960]: <info>  [1768922267.6275] device (tap1421cc5f-9a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 15:17:47 compute-0 nova_compute[250018]: 2026-01-20 15:17:47.633 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:17:47 compute-0 ovn_controller[148666]: 2026-01-20T15:17:47Z|00705|binding|INFO|Releasing lport 1421cc5f-9a45-447a-bb3a-3f13dcc5a309 from this chassis (sb_readonly=0)
Jan 20 15:17:47 compute-0 ovn_controller[148666]: 2026-01-20T15:17:47Z|00706|binding|INFO|Setting lport 1421cc5f-9a45-447a-bb3a-3f13dcc5a309 down in Southbound
Jan 20 15:17:47 compute-0 ovn_controller[148666]: 2026-01-20T15:17:47Z|00707|binding|INFO|Removing iface tap1421cc5f-9a ovn-installed in OVS
Jan 20 15:17:47 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:17:47.641 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3b:1e:c2 10.100.0.8'], port_security=['fa:16:3e:3b:1e:c2 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'e79c0704-f95e-422f-9c25-ed35fca7cb7c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fff727019f86407498e83d7948d54962', 'neutron:revision_number': '4', 'neutron:security_group_ids': '278e6fc3-62b7-45c0-b1a3-c75cbe3171fc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=87d69a20-7690-494a-ac16-7c600840561a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=1421cc5f-9a45-447a-bb3a-3f13dcc5a309) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:17:47 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:17:47.642 160071 INFO neutron.agent.ovn.metadata.agent [-] Port 1421cc5f-9a45-447a-bb3a-3f13dcc5a309 in datapath c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab unbound from our chassis
Jan 20 15:17:47 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:17:47.643 160071 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 15:17:47 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:17:47.644 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[fe74c569-66b2-4a79-b34c-1497e32e121e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:17:47 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:17:47.645 160071 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab namespace which is not needed anymore
Jan 20 15:17:47 compute-0 nova_compute[250018]: 2026-01-20 15:17:47.652 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:17:47 compute-0 systemd[1]: machine-qemu\x2d80\x2dinstance\x2d000000b6.scope: Deactivated successfully.
Jan 20 15:17:47 compute-0 systemd[1]: machine-qemu\x2d80\x2dinstance\x2d000000b6.scope: Consumed 24.514s CPU time.
Jan 20 15:17:47 compute-0 systemd-machined[216401]: Machine qemu-80-instance-000000b6 terminated.
Jan 20 15:17:47 compute-0 neutron-haproxy-ovnmeta-c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab[357514]: [NOTICE]   (357518) : haproxy version is 2.8.14-c23fe91
Jan 20 15:17:47 compute-0 neutron-haproxy-ovnmeta-c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab[357514]: [NOTICE]   (357518) : path to executable is /usr/sbin/haproxy
Jan 20 15:17:47 compute-0 neutron-haproxy-ovnmeta-c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab[357514]: [WARNING]  (357518) : Exiting Master process...
Jan 20 15:17:47 compute-0 neutron-haproxy-ovnmeta-c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab[357514]: [WARNING]  (357518) : Exiting Master process...
Jan 20 15:17:47 compute-0 neutron-haproxy-ovnmeta-c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab[357514]: [ALERT]    (357518) : Current worker (357520) exited with code 143 (Terminated)
Jan 20 15:17:47 compute-0 neutron-haproxy-ovnmeta-c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab[357514]: [WARNING]  (357518) : All workers exited. Exiting... (0)
Jan 20 15:17:47 compute-0 systemd[1]: libpod-0be11f2cf24553d1d3a945a9f160f0e4c4ed5506aebb99282db1fe72f2f964b1.scope: Deactivated successfully.
Jan 20 15:17:47 compute-0 podman[364394]: 2026-01-20 15:17:47.802794079 +0000 UTC m=+0.065535449 container died 0be11f2cf24553d1d3a945a9f160f0e4c4ed5506aebb99282db1fe72f2f964b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 20 15:17:47 compute-0 nova_compute[250018]: 2026-01-20 15:17:47.812 250022 INFO nova.virt.libvirt.driver [-] [instance: e79c0704-f95e-422f-9c25-ed35fca7cb7c] Instance destroyed successfully.
Jan 20 15:17:47 compute-0 nova_compute[250018]: 2026-01-20 15:17:47.812 250022 DEBUG nova.objects.instance [None req-7d72e24a-a569-49d0-825a-0ca3e7db15e3 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Lazy-loading 'resources' on Instance uuid e79c0704-f95e-422f-9c25-ed35fca7cb7c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:17:47 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0be11f2cf24553d1d3a945a9f160f0e4c4ed5506aebb99282db1fe72f2f964b1-userdata-shm.mount: Deactivated successfully.
Jan 20 15:17:47 compute-0 nova_compute[250018]: 2026-01-20 15:17:47.835 250022 DEBUG nova.virt.libvirt.vif [None req-7d72e24a-a569-49d0-825a-0ca3e7db15e3 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T15:13:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachVolumeMultiAttachTest-server-102275358',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachvolumemultiattachtest-server-102275358',id=182,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-20T15:13:31Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='fff727019f86407498e83d7948d54962',ramdisk_id='',reservation_id='r-hnemwa3f',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-AttachVolumeMultiAttachTest-418194625',owner_user_name='tempest-AttachVolumeMultiAttachTest-418194625-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T15:13:31Z,user_data=None,user_id='e9cc4ce3e069479ba9c789b378a68a1d',uuid=e79c0704-f95e-422f-9c25-ed35fca7cb7c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1421cc5f-9a45-447a-bb3a-3f13dcc5a309", "address": "fa:16:3e:3b:1e:c2", "network": {"id": "c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1423306001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fff727019f86407498e83d7948d54962", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1421cc5f-9a", "ovs_interfaceid": "1421cc5f-9a45-447a-bb3a-3f13dcc5a309", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 15:17:47 compute-0 nova_compute[250018]: 2026-01-20 15:17:47.836 250022 DEBUG nova.network.os_vif_util [None req-7d72e24a-a569-49d0-825a-0ca3e7db15e3 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Converting VIF {"id": "1421cc5f-9a45-447a-bb3a-3f13dcc5a309", "address": "fa:16:3e:3b:1e:c2", "network": {"id": "c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1423306001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fff727019f86407498e83d7948d54962", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1421cc5f-9a", "ovs_interfaceid": "1421cc5f-9a45-447a-bb3a-3f13dcc5a309", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:17:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-d9819d32ccd5dfe0ff2da99d01f7801b8fbd0bc6a449f02d71010f2caa7c9d1a-merged.mount: Deactivated successfully.
Jan 20 15:17:47 compute-0 nova_compute[250018]: 2026-01-20 15:17:47.838 250022 DEBUG nova.network.os_vif_util [None req-7d72e24a-a569-49d0-825a-0ca3e7db15e3 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:3b:1e:c2,bridge_name='br-int',has_traffic_filtering=True,id=1421cc5f-9a45-447a-bb3a-3f13dcc5a309,network=Network(c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1421cc5f-9a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:17:47 compute-0 nova_compute[250018]: 2026-01-20 15:17:47.838 250022 DEBUG os_vif [None req-7d72e24a-a569-49d0-825a-0ca3e7db15e3 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:3b:1e:c2,bridge_name='br-int',has_traffic_filtering=True,id=1421cc5f-9a45-447a-bb3a-3f13dcc5a309,network=Network(c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1421cc5f-9a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 15:17:47 compute-0 nova_compute[250018]: 2026-01-20 15:17:47.840 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:17:47 compute-0 nova_compute[250018]: 2026-01-20 15:17:47.840 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1421cc5f-9a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:17:47 compute-0 nova_compute[250018]: 2026-01-20 15:17:47.843 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:17:47 compute-0 nova_compute[250018]: 2026-01-20 15:17:47.846 250022 INFO os_vif [None req-7d72e24a-a569-49d0-825a-0ca3e7db15e3 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:3b:1e:c2,bridge_name='br-int',has_traffic_filtering=True,id=1421cc5f-9a45-447a-bb3a-3f13dcc5a309,network=Network(c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1421cc5f-9a')
Jan 20 15:17:47 compute-0 nova_compute[250018]: 2026-01-20 15:17:47.875 250022 DEBUG nova.compute.manager [req-43f53e65-a98c-4188-b2a8-6aba6b75a40d req-31e4c617-41fe-4878-99cb-ef02012aa823 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: e79c0704-f95e-422f-9c25-ed35fca7cb7c] Received event network-vif-unplugged-1421cc5f-9a45-447a-bb3a-3f13dcc5a309 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:17:47 compute-0 nova_compute[250018]: 2026-01-20 15:17:47.876 250022 DEBUG oslo_concurrency.lockutils [req-43f53e65-a98c-4188-b2a8-6aba6b75a40d req-31e4c617-41fe-4878-99cb-ef02012aa823 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "e79c0704-f95e-422f-9c25-ed35fca7cb7c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:17:47 compute-0 nova_compute[250018]: 2026-01-20 15:17:47.876 250022 DEBUG oslo_concurrency.lockutils [req-43f53e65-a98c-4188-b2a8-6aba6b75a40d req-31e4c617-41fe-4878-99cb-ef02012aa823 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "e79c0704-f95e-422f-9c25-ed35fca7cb7c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:17:47 compute-0 nova_compute[250018]: 2026-01-20 15:17:47.876 250022 DEBUG oslo_concurrency.lockutils [req-43f53e65-a98c-4188-b2a8-6aba6b75a40d req-31e4c617-41fe-4878-99cb-ef02012aa823 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "e79c0704-f95e-422f-9c25-ed35fca7cb7c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:17:47 compute-0 nova_compute[250018]: 2026-01-20 15:17:47.876 250022 DEBUG nova.compute.manager [req-43f53e65-a98c-4188-b2a8-6aba6b75a40d req-31e4c617-41fe-4878-99cb-ef02012aa823 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: e79c0704-f95e-422f-9c25-ed35fca7cb7c] No waiting events found dispatching network-vif-unplugged-1421cc5f-9a45-447a-bb3a-3f13dcc5a309 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:17:47 compute-0 nova_compute[250018]: 2026-01-20 15:17:47.877 250022 DEBUG nova.compute.manager [req-43f53e65-a98c-4188-b2a8-6aba6b75a40d req-31e4c617-41fe-4878-99cb-ef02012aa823 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: e79c0704-f95e-422f-9c25-ed35fca7cb7c] Received event network-vif-unplugged-1421cc5f-9a45-447a-bb3a-3f13dcc5a309 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 20 15:17:47 compute-0 podman[364394]: 2026-01-20 15:17:47.885583683 +0000 UTC m=+0.148325043 container cleanup 0be11f2cf24553d1d3a945a9f160f0e4c4ed5506aebb99282db1fe72f2f964b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 20 15:17:47 compute-0 systemd[1]: libpod-conmon-0be11f2cf24553d1d3a945a9f160f0e4c4ed5506aebb99282db1fe72f2f964b1.scope: Deactivated successfully.
Jan 20 15:17:48 compute-0 podman[364433]: 2026-01-20 15:17:48.13016011 +0000 UTC m=+0.222828912 container remove 0be11f2cf24553d1d3a945a9f160f0e4c4ed5506aebb99282db1fe72f2f964b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Jan 20 15:17:48 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:17:48.136 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[9cb16d4e-ed8c-4580-805b-9e484b07a01d]: (4, ('Tue Jan 20 03:17:47 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab (0be11f2cf24553d1d3a945a9f160f0e4c4ed5506aebb99282db1fe72f2f964b1)\n0be11f2cf24553d1d3a945a9f160f0e4c4ed5506aebb99282db1fe72f2f964b1\nTue Jan 20 03:17:47 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab (0be11f2cf24553d1d3a945a9f160f0e4c4ed5506aebb99282db1fe72f2f964b1)\n0be11f2cf24553d1d3a945a9f160f0e4c4ed5506aebb99282db1fe72f2f964b1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:17:48 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:17:48.139 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[b63a64b2-3cd1-4aff-b8ba-24073b734998]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:17:48 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:17:48.140 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc1f4a971-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:17:48 compute-0 nova_compute[250018]: 2026-01-20 15:17:48.142 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:17:48 compute-0 kernel: tapc1f4a971-00: left promiscuous mode
Jan 20 15:17:48 compute-0 nova_compute[250018]: 2026-01-20 15:17:48.157 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:17:48 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:17:48.160 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[73df3609-3edf-4033-a0b5-4892d5ba1dcb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:17:48 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:17:48.173 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[f1646f13-00b6-42ca-a9ee-160b6708537b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:17:48 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:17:48.174 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[5c862e95-d2ba-4079-bdc3-50280160ccac]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:17:48 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:17:48.191 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[396aef73-e2a6-437d-a4ee-42ffd28b9d31]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 795041, 'reachable_time': 40852, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 364465, 'error': None, 'target': 'ovnmeta-c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:17:48 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:17:48.194 160517 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c1f4a971-0bd7-41ce-bdf6-5acb2b1b4bab deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 15:17:48 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:17:48.194 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[b65fa69e-e81d-493d-8c8e-887211b4886a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:17:48 compute-0 systemd[1]: run-netns-ovnmeta\x2dc1f4a971\x2d0bd7\x2d41ce\x2dbdf6\x2d5acb2b1b4bab.mount: Deactivated successfully.
Jan 20 15:17:48 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:17:48 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:17:48 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:17:48.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:17:48 compute-0 nova_compute[250018]: 2026-01-20 15:17:48.501 250022 INFO nova.virt.libvirt.driver [None req-7d72e24a-a569-49d0-825a-0ca3e7db15e3 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] [instance: e79c0704-f95e-422f-9c25-ed35fca7cb7c] Deleting instance files /var/lib/nova/instances/e79c0704-f95e-422f-9c25-ed35fca7cb7c_del
Jan 20 15:17:48 compute-0 nova_compute[250018]: 2026-01-20 15:17:48.502 250022 INFO nova.virt.libvirt.driver [None req-7d72e24a-a569-49d0-825a-0ca3e7db15e3 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] [instance: e79c0704-f95e-422f-9c25-ed35fca7cb7c] Deletion of /var/lib/nova/instances/e79c0704-f95e-422f-9c25-ed35fca7cb7c_del complete
Jan 20 15:17:48 compute-0 nova_compute[250018]: 2026-01-20 15:17:48.557 250022 INFO nova.compute.manager [None req-7d72e24a-a569-49d0-825a-0ca3e7db15e3 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] [instance: e79c0704-f95e-422f-9c25-ed35fca7cb7c] Took 0.98 seconds to destroy the instance on the hypervisor.
Jan 20 15:17:48 compute-0 nova_compute[250018]: 2026-01-20 15:17:48.558 250022 DEBUG oslo.service.loopingcall [None req-7d72e24a-a569-49d0-825a-0ca3e7db15e3 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 15:17:48 compute-0 nova_compute[250018]: 2026-01-20 15:17:48.558 250022 DEBUG nova.compute.manager [-] [instance: e79c0704-f95e-422f-9c25-ed35fca7cb7c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 15:17:48 compute-0 nova_compute[250018]: 2026-01-20 15:17:48.558 250022 DEBUG nova.network.neutron [-] [instance: e79c0704-f95e-422f-9c25-ed35fca7cb7c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 15:17:48 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2866: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 68 KiB/s rd, 3.9 KiB/s wr, 96 op/s
Jan 20 15:17:48 compute-0 ceph-mon[74360]: pgmap v2866: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 68 KiB/s rd, 3.9 KiB/s wr, 96 op/s
Jan 20 15:17:49 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:17:49 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:17:49 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:17:49.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:17:50 compute-0 nova_compute[250018]: 2026-01-20 15:17:50.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:17:50 compute-0 nova_compute[250018]: 2026-01-20 15:17:50.087 250022 DEBUG nova.compute.manager [req-99a7d478-b4bb-49cf-b507-f5ebebe4afe8 req-c8ca0b22-1998-434a-9e96-4ad3c71640e4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: e79c0704-f95e-422f-9c25-ed35fca7cb7c] Received event network-vif-plugged-1421cc5f-9a45-447a-bb3a-3f13dcc5a309 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:17:50 compute-0 nova_compute[250018]: 2026-01-20 15:17:50.087 250022 DEBUG oslo_concurrency.lockutils [req-99a7d478-b4bb-49cf-b507-f5ebebe4afe8 req-c8ca0b22-1998-434a-9e96-4ad3c71640e4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "e79c0704-f95e-422f-9c25-ed35fca7cb7c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:17:50 compute-0 nova_compute[250018]: 2026-01-20 15:17:50.088 250022 DEBUG oslo_concurrency.lockutils [req-99a7d478-b4bb-49cf-b507-f5ebebe4afe8 req-c8ca0b22-1998-434a-9e96-4ad3c71640e4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "e79c0704-f95e-422f-9c25-ed35fca7cb7c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:17:50 compute-0 nova_compute[250018]: 2026-01-20 15:17:50.088 250022 DEBUG oslo_concurrency.lockutils [req-99a7d478-b4bb-49cf-b507-f5ebebe4afe8 req-c8ca0b22-1998-434a-9e96-4ad3c71640e4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "e79c0704-f95e-422f-9c25-ed35fca7cb7c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:17:50 compute-0 nova_compute[250018]: 2026-01-20 15:17:50.088 250022 DEBUG nova.compute.manager [req-99a7d478-b4bb-49cf-b507-f5ebebe4afe8 req-c8ca0b22-1998-434a-9e96-4ad3c71640e4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: e79c0704-f95e-422f-9c25-ed35fca7cb7c] No waiting events found dispatching network-vif-plugged-1421cc5f-9a45-447a-bb3a-3f13dcc5a309 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:17:50 compute-0 nova_compute[250018]: 2026-01-20 15:17:50.088 250022 WARNING nova.compute.manager [req-99a7d478-b4bb-49cf-b507-f5ebebe4afe8 req-c8ca0b22-1998-434a-9e96-4ad3c71640e4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: e79c0704-f95e-422f-9c25-ed35fca7cb7c] Received unexpected event network-vif-plugged-1421cc5f-9a45-447a-bb3a-3f13dcc5a309 for instance with vm_state active and task_state deleting.
Jan 20 15:17:50 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:17:50 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:17:50 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:17:50.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:17:50 compute-0 nova_compute[250018]: 2026-01-20 15:17:50.562 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:17:50 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2867: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 66 KiB/s rd, 3.7 KiB/s wr, 94 op/s
Jan 20 15:17:50 compute-0 ceph-mon[74360]: pgmap v2867: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 66 KiB/s rd, 3.7 KiB/s wr, 94 op/s
Jan 20 15:17:51 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:17:51 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:17:51 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:17:51.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:17:51 compute-0 nova_compute[250018]: 2026-01-20 15:17:51.520 250022 DEBUG nova.network.neutron [-] [instance: e79c0704-f95e-422f-9c25-ed35fca7cb7c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:17:51 compute-0 nova_compute[250018]: 2026-01-20 15:17:51.730 250022 DEBUG nova.compute.manager [req-d34453c7-ebf2-4324-9d4c-f9f09aa565e0 req-b2c2beba-859d-4dbf-b703-7895348466cf 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: e79c0704-f95e-422f-9c25-ed35fca7cb7c] Received event network-vif-deleted-1421cc5f-9a45-447a-bb3a-3f13dcc5a309 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:17:51 compute-0 nova_compute[250018]: 2026-01-20 15:17:51.730 250022 INFO nova.compute.manager [req-d34453c7-ebf2-4324-9d4c-f9f09aa565e0 req-b2c2beba-859d-4dbf-b703-7895348466cf 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: e79c0704-f95e-422f-9c25-ed35fca7cb7c] Neutron deleted interface 1421cc5f-9a45-447a-bb3a-3f13dcc5a309; detaching it from the instance and deleting it from the info cache
Jan 20 15:17:51 compute-0 nova_compute[250018]: 2026-01-20 15:17:51.731 250022 DEBUG nova.network.neutron [req-d34453c7-ebf2-4324-9d4c-f9f09aa565e0 req-b2c2beba-859d-4dbf-b703-7895348466cf 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: e79c0704-f95e-422f-9c25-ed35fca7cb7c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:17:51 compute-0 nova_compute[250018]: 2026-01-20 15:17:51.733 250022 INFO nova.compute.manager [-] [instance: e79c0704-f95e-422f-9c25-ed35fca7cb7c] Took 3.17 seconds to deallocate network for instance.
Jan 20 15:17:51 compute-0 nova_compute[250018]: 2026-01-20 15:17:51.751 250022 DEBUG nova.compute.manager [req-d34453c7-ebf2-4324-9d4c-f9f09aa565e0 req-b2c2beba-859d-4dbf-b703-7895348466cf 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: e79c0704-f95e-422f-9c25-ed35fca7cb7c] Detach interface failed, port_id=1421cc5f-9a45-447a-bb3a-3f13dcc5a309, reason: Instance e79c0704-f95e-422f-9c25-ed35fca7cb7c could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Jan 20 15:17:51 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:17:52 compute-0 nova_compute[250018]: 2026-01-20 15:17:52.204 250022 INFO nova.compute.manager [None req-7d72e24a-a569-49d0-825a-0ca3e7db15e3 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] [instance: e79c0704-f95e-422f-9c25-ed35fca7cb7c] Took 0.47 seconds to detach 1 volumes for instance.
Jan 20 15:17:52 compute-0 nova_compute[250018]: 2026-01-20 15:17:52.250 250022 DEBUG oslo_concurrency.lockutils [None req-7d72e24a-a569-49d0-825a-0ca3e7db15e3 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:17:52 compute-0 nova_compute[250018]: 2026-01-20 15:17:52.251 250022 DEBUG oslo_concurrency.lockutils [None req-7d72e24a-a569-49d0-825a-0ca3e7db15e3 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:17:52 compute-0 nova_compute[250018]: 2026-01-20 15:17:52.314 250022 DEBUG oslo_concurrency.processutils [None req-7d72e24a-a569-49d0-825a-0ca3e7db15e3 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:17:52 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:17:52 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:17:52 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:17:52.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:17:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:17:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:17:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:17:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:17:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:17:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:17:52 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2868: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 50 KiB/s rd, 3.7 KiB/s wr, 70 op/s
Jan 20 15:17:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Optimize plan auto_2026-01-20_15:17:52
Jan 20 15:17:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 15:17:52 compute-0 ceph-mgr[74653]: [balancer INFO root] do_upmap
Jan 20 15:17:52 compute-0 ceph-mgr[74653]: [balancer INFO root] pools ['default.rgw.meta', 'default.rgw.control', 'default.rgw.log', 'volumes', 'cephfs.cephfs.data', 'images', '.rgw.root', 'cephfs.cephfs.meta', '.mgr', 'backups', 'vms']
Jan 20 15:17:52 compute-0 ceph-mgr[74653]: [balancer INFO root] prepared 0/10 changes
Jan 20 15:17:52 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:17:52 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1775843392' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:17:52 compute-0 nova_compute[250018]: 2026-01-20 15:17:52.819 250022 DEBUG oslo_concurrency.processutils [None req-7d72e24a-a569-49d0-825a-0ca3e7db15e3 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:17:52 compute-0 nova_compute[250018]: 2026-01-20 15:17:52.825 250022 DEBUG nova.compute.provider_tree [None req-7d72e24a-a569-49d0-825a-0ca3e7db15e3 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:17:52 compute-0 ceph-mon[74360]: pgmap v2868: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 50 KiB/s rd, 3.7 KiB/s wr, 70 op/s
Jan 20 15:17:52 compute-0 nova_compute[250018]: 2026-01-20 15:17:52.843 250022 DEBUG nova.scheduler.client.report [None req-7d72e24a-a569-49d0-825a-0ca3e7db15e3 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:17:52 compute-0 nova_compute[250018]: 2026-01-20 15:17:52.846 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:17:52 compute-0 nova_compute[250018]: 2026-01-20 15:17:52.864 250022 DEBUG oslo_concurrency.lockutils [None req-7d72e24a-a569-49d0-825a-0ca3e7db15e3 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.613s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:17:52 compute-0 nova_compute[250018]: 2026-01-20 15:17:52.891 250022 INFO nova.scheduler.client.report [None req-7d72e24a-a569-49d0-825a-0ca3e7db15e3 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Deleted allocations for instance e79c0704-f95e-422f-9c25-ed35fca7cb7c
Jan 20 15:17:52 compute-0 nova_compute[250018]: 2026-01-20 15:17:52.983 250022 DEBUG oslo_concurrency.lockutils [None req-7d72e24a-a569-49d0-825a-0ca3e7db15e3 e9cc4ce3e069479ba9c789b378a68a1d fff727019f86407498e83d7948d54962 - - default default] Lock "e79c0704-f95e-422f-9c25-ed35fca7cb7c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.409s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:17:53 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:17:53 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:17:53 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:17:53.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:17:53 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/1775843392' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:17:54 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:17:54 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:17:54 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:17:54.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:17:54 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2869: 321 pgs: 321 active+clean; 176 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 30 KiB/s rd, 2.1 KiB/s wr, 44 op/s
Jan 20 15:17:55 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/1456547554' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:17:55 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/1456547554' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:17:55 compute-0 ceph-mon[74360]: pgmap v2869: 321 pgs: 321 active+clean; 176 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 30 KiB/s rd, 2.1 KiB/s wr, 44 op/s
Jan 20 15:17:55 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:17:55 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:17:55 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:17:55.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:17:55 compute-0 nova_compute[250018]: 2026-01-20 15:17:55.565 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:17:55 compute-0 sshd-session[364493]: Invalid user admin from 134.122.57.138 port 51996
Jan 20 15:17:55 compute-0 sshd-session[364493]: Connection closed by invalid user admin 134.122.57.138 port 51996 [preauth]
Jan 20 15:17:56 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:17:56 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:17:56 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:17:56.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:17:56 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2870: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 31 KiB/s rd, 2.0 KiB/s wr, 44 op/s
Jan 20 15:17:56 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:17:57 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:17:57 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:17:57 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:17:57.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:17:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 15:17:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 15:17:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 15:17:57 compute-0 ceph-mon[74360]: pgmap v2870: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 31 KiB/s rd, 2.0 KiB/s wr, 44 op/s
Jan 20 15:17:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 15:17:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 15:17:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 15:17:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 15:17:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 15:17:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 15:17:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 15:17:57 compute-0 nova_compute[250018]: 2026-01-20 15:17:57.848 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:17:58 compute-0 nova_compute[250018]: 2026-01-20 15:17:58.438 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:17:58 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:17:58 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:17:58 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:17:58.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:17:58 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2871: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 18 KiB/s rd, 852 B/s wr, 25 op/s
Jan 20 15:17:58 compute-0 ceph-mon[74360]: pgmap v2871: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 18 KiB/s rd, 852 B/s wr, 25 op/s
Jan 20 15:17:59 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:17:59 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:17:59 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:17:59.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:18:00 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:18:00 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:18:00 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:18:00.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:18:00 compute-0 nova_compute[250018]: 2026-01-20 15:18:00.566 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:18:00 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2872: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 18 KiB/s rd, 852 B/s wr, 25 op/s
Jan 20 15:18:01 compute-0 nova_compute[250018]: 2026-01-20 15:18:01.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:18:01 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:18:01 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:18:01 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:18:01.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:18:01 compute-0 ceph-mon[74360]: pgmap v2872: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 18 KiB/s rd, 852 B/s wr, 25 op/s
Jan 20 15:18:01 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:18:02 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:18:02 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:18:02 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:18:02.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:18:02 compute-0 sudo[364501]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:18:02 compute-0 sudo[364501]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:18:02 compute-0 sudo[364501]: pam_unix(sudo:session): session closed for user root
Jan 20 15:18:02 compute-0 sudo[364526]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:18:02 compute-0 sudo[364526]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:18:02 compute-0 sudo[364526]: pam_unix(sudo:session): session closed for user root
Jan 20 15:18:02 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2873: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 18 KiB/s rd, 852 B/s wr, 24 op/s
Jan 20 15:18:02 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/2471681010' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:18:02 compute-0 ceph-mon[74360]: pgmap v2873: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 18 KiB/s rd, 852 B/s wr, 24 op/s
Jan 20 15:18:02 compute-0 nova_compute[250018]: 2026-01-20 15:18:02.812 250022 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768922267.8106368, e79c0704-f95e-422f-9c25-ed35fca7cb7c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:18:02 compute-0 nova_compute[250018]: 2026-01-20 15:18:02.812 250022 INFO nova.compute.manager [-] [instance: e79c0704-f95e-422f-9c25-ed35fca7cb7c] VM Stopped (Lifecycle Event)
Jan 20 15:18:02 compute-0 nova_compute[250018]: 2026-01-20 15:18:02.849 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:18:02 compute-0 nova_compute[250018]: 2026-01-20 15:18:02.909 250022 DEBUG nova.compute.manager [None req-d02ba1f3-d2ae-4b18-986d-8a876b051f2f - - - - - -] [instance: e79c0704-f95e-422f-9c25-ed35fca7cb7c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:18:03 compute-0 sudo[364551]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:18:03 compute-0 sudo[364551]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:18:03 compute-0 sudo[364551]: pam_unix(sudo:session): session closed for user root
Jan 20 15:18:03 compute-0 sudo[364576]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:18:03 compute-0 sudo[364576]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:18:03 compute-0 sudo[364576]: pam_unix(sudo:session): session closed for user root
Jan 20 15:18:03 compute-0 sudo[364601]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:18:03 compute-0 sudo[364601]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:18:03 compute-0 sudo[364601]: pam_unix(sudo:session): session closed for user root
Jan 20 15:18:03 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:18:03 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:18:03 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:18:03.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:18:03 compute-0 sudo[364626]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 15:18:03 compute-0 sudo[364626]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:18:03 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/1995102232' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:18:03 compute-0 sudo[364626]: pam_unix(sudo:session): session closed for user root
Jan 20 15:18:04 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 15:18:04 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:18:04 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 20 15:18:04 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 15:18:04 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 20 15:18:04 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:18:04 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev fce260c0-6256-479f-8810-ab551eb0b24c does not exist
Jan 20 15:18:04 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 5e1fe2b9-2b08-469b-b8d4-274a1d800729 does not exist
Jan 20 15:18:04 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev f2982038-9aea-4d33-b256-008e83e7c8fc does not exist
Jan 20 15:18:04 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 20 15:18:04 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 15:18:04 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 20 15:18:04 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 15:18:04 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 15:18:04 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:18:04 compute-0 sudo[364682]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:18:04 compute-0 sudo[364682]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:18:04 compute-0 sudo[364682]: pam_unix(sudo:session): session closed for user root
Jan 20 15:18:04 compute-0 sudo[364708]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:18:04 compute-0 sudo[364708]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:18:04 compute-0 sudo[364708]: pam_unix(sudo:session): session closed for user root
Jan 20 15:18:04 compute-0 sudo[364733]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:18:04 compute-0 sudo[364733]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:18:04 compute-0 sudo[364733]: pam_unix(sudo:session): session closed for user root
Jan 20 15:18:04 compute-0 sudo[364758]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 15:18:04 compute-0 sudo[364758]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:18:04 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:18:04 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:18:04 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:18:04.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:18:04 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2874: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 8.9 KiB/s rd, 255 B/s wr, 12 op/s
Jan 20 15:18:04 compute-0 podman[364824]: 2026-01-20 15:18:04.733983348 +0000 UTC m=+0.042496337 container create 47a392ed66c08237403a2b47f1b57303f8ca67ca58a65372d7cd01dba901aa25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_curie, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 15:18:04 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:18:04 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 15:18:04 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:18:04 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 15:18:04 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 15:18:04 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:18:04 compute-0 ceph-mon[74360]: pgmap v2874: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 8.9 KiB/s rd, 255 B/s wr, 12 op/s
Jan 20 15:18:04 compute-0 systemd[1]: Started libpod-conmon-47a392ed66c08237403a2b47f1b57303f8ca67ca58a65372d7cd01dba901aa25.scope.
Jan 20 15:18:04 compute-0 podman[364824]: 2026-01-20 15:18:04.71404049 +0000 UTC m=+0.022553489 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:18:04 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:18:04 compute-0 podman[364824]: 2026-01-20 15:18:04.842429124 +0000 UTC m=+0.150942143 container init 47a392ed66c08237403a2b47f1b57303f8ca67ca58a65372d7cd01dba901aa25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_curie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 15:18:04 compute-0 podman[364824]: 2026-01-20 15:18:04.85010181 +0000 UTC m=+0.158614799 container start 47a392ed66c08237403a2b47f1b57303f8ca67ca58a65372d7cd01dba901aa25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_curie, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 20 15:18:04 compute-0 podman[364824]: 2026-01-20 15:18:04.853402049 +0000 UTC m=+0.161915068 container attach 47a392ed66c08237403a2b47f1b57303f8ca67ca58a65372d7cd01dba901aa25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_curie, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 20 15:18:04 compute-0 xenodochial_curie[364840]: 167 167
Jan 20 15:18:04 compute-0 systemd[1]: libpod-47a392ed66c08237403a2b47f1b57303f8ca67ca58a65372d7cd01dba901aa25.scope: Deactivated successfully.
Jan 20 15:18:04 compute-0 podman[364824]: 2026-01-20 15:18:04.860352677 +0000 UTC m=+0.168865666 container died 47a392ed66c08237403a2b47f1b57303f8ca67ca58a65372d7cd01dba901aa25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_curie, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 15:18:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-2a5cc5401b134f14ed14af49a7b9802f87b3e126c948b4e830455ef1e7ab758d-merged.mount: Deactivated successfully.
Jan 20 15:18:04 compute-0 podman[364824]: 2026-01-20 15:18:04.897319854 +0000 UTC m=+0.205832843 container remove 47a392ed66c08237403a2b47f1b57303f8ca67ca58a65372d7cd01dba901aa25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_curie, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 15:18:04 compute-0 systemd[1]: libpod-conmon-47a392ed66c08237403a2b47f1b57303f8ca67ca58a65372d7cd01dba901aa25.scope: Deactivated successfully.
Jan 20 15:18:05 compute-0 nova_compute[250018]: 2026-01-20 15:18:05.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:18:05 compute-0 podman[364864]: 2026-01-20 15:18:05.070695141 +0000 UTC m=+0.048089588 container create 53772b370501a2ef2006ac4867d95117b1c6fc0c827ecd46c8e39569a651166e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_thompson, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 20 15:18:05 compute-0 systemd[1]: Started libpod-conmon-53772b370501a2ef2006ac4867d95117b1c6fc0c827ecd46c8e39569a651166e.scope.
Jan 20 15:18:05 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:18:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f90b30fd4cfba2068927988ab50e029f039028d59a1c222dafbe008a48fadabb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 15:18:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f90b30fd4cfba2068927988ab50e029f039028d59a1c222dafbe008a48fadabb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 15:18:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f90b30fd4cfba2068927988ab50e029f039028d59a1c222dafbe008a48fadabb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 15:18:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f90b30fd4cfba2068927988ab50e029f039028d59a1c222dafbe008a48fadabb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 15:18:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f90b30fd4cfba2068927988ab50e029f039028d59a1c222dafbe008a48fadabb/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 15:18:05 compute-0 podman[364864]: 2026-01-20 15:18:05.054831683 +0000 UTC m=+0.032226150 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:18:05 compute-0 podman[364864]: 2026-01-20 15:18:05.163756291 +0000 UTC m=+0.141150758 container init 53772b370501a2ef2006ac4867d95117b1c6fc0c827ecd46c8e39569a651166e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_thompson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True)
Jan 20 15:18:05 compute-0 podman[364864]: 2026-01-20 15:18:05.170673988 +0000 UTC m=+0.148068435 container start 53772b370501a2ef2006ac4867d95117b1c6fc0c827ecd46c8e39569a651166e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_thompson, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 15:18:05 compute-0 podman[364864]: 2026-01-20 15:18:05.174060009 +0000 UTC m=+0.151454476 container attach 53772b370501a2ef2006ac4867d95117b1c6fc0c827ecd46c8e39569a651166e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_thompson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Jan 20 15:18:05 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:18:05 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:18:05 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:18:05.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:18:05 compute-0 nova_compute[250018]: 2026-01-20 15:18:05.569 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:18:05 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/593386575' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:18:05 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/2904258732' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:18:05 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/3326096982' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:18:05 compute-0 affectionate_thompson[364880]: --> passed data devices: 0 physical, 1 LVM
Jan 20 15:18:05 compute-0 affectionate_thompson[364880]: --> relative data size: 1.0
Jan 20 15:18:05 compute-0 affectionate_thompson[364880]: --> All data devices are unavailable
Jan 20 15:18:05 compute-0 systemd[1]: libpod-53772b370501a2ef2006ac4867d95117b1c6fc0c827ecd46c8e39569a651166e.scope: Deactivated successfully.
Jan 20 15:18:05 compute-0 podman[364864]: 2026-01-20 15:18:05.993607317 +0000 UTC m=+0.971001804 container died 53772b370501a2ef2006ac4867d95117b1c6fc0c827ecd46c8e39569a651166e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_thompson, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 20 15:18:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-f90b30fd4cfba2068927988ab50e029f039028d59a1c222dafbe008a48fadabb-merged.mount: Deactivated successfully.
Jan 20 15:18:06 compute-0 podman[364864]: 2026-01-20 15:18:06.046683389 +0000 UTC m=+1.024077826 container remove 53772b370501a2ef2006ac4867d95117b1c6fc0c827ecd46c8e39569a651166e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_thompson, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Jan 20 15:18:06 compute-0 systemd[1]: libpod-conmon-53772b370501a2ef2006ac4867d95117b1c6fc0c827ecd46c8e39569a651166e.scope: Deactivated successfully.
Jan 20 15:18:06 compute-0 nova_compute[250018]: 2026-01-20 15:18:06.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:18:06 compute-0 nova_compute[250018]: 2026-01-20 15:18:06.053 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 15:18:06 compute-0 nova_compute[250018]: 2026-01-20 15:18:06.053 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:18:06 compute-0 sudo[364758]: pam_unix(sudo:session): session closed for user root
Jan 20 15:18:06 compute-0 sudo[364908]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:18:06 compute-0 sudo[364908]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:18:06 compute-0 sudo[364908]: pam_unix(sudo:session): session closed for user root
Jan 20 15:18:06 compute-0 sudo[364933]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:18:06 compute-0 sudo[364933]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:18:06 compute-0 sudo[364933]: pam_unix(sudo:session): session closed for user root
Jan 20 15:18:06 compute-0 sudo[364959]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:18:06 compute-0 sudo[364959]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:18:06 compute-0 sudo[364959]: pam_unix(sudo:session): session closed for user root
Jan 20 15:18:06 compute-0 sudo[364984]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- lvm list --format json
Jan 20 15:18:06 compute-0 sudo[364984]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:18:06 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:18:06 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:18:06 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:18:06.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:18:06 compute-0 podman[365047]: 2026-01-20 15:18:06.623642612 +0000 UTC m=+0.037694118 container create a5bbc0960a90ab475e7c732319bbbb5ba45b1e5ec4d4c21c69f74952c8e9b281 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_nightingale, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:18:06 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2875: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 8.9 KiB/s rd, 255 B/s wr, 11 op/s
Jan 20 15:18:06 compute-0 systemd[1]: Started libpod-conmon-a5bbc0960a90ab475e7c732319bbbb5ba45b1e5ec4d4c21c69f74952c8e9b281.scope.
Jan 20 15:18:06 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:18:06 compute-0 podman[365047]: 2026-01-20 15:18:06.685515741 +0000 UTC m=+0.099567257 container init a5bbc0960a90ab475e7c732319bbbb5ba45b1e5ec4d4c21c69f74952c8e9b281 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_nightingale, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True)
Jan 20 15:18:06 compute-0 podman[365047]: 2026-01-20 15:18:06.693841336 +0000 UTC m=+0.107892852 container start a5bbc0960a90ab475e7c732319bbbb5ba45b1e5ec4d4c21c69f74952c8e9b281 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_nightingale, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 15:18:06 compute-0 podman[365047]: 2026-01-20 15:18:06.698432839 +0000 UTC m=+0.112484355 container attach a5bbc0960a90ab475e7c732319bbbb5ba45b1e5ec4d4c21c69f74952c8e9b281 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_nightingale, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 20 15:18:06 compute-0 stoic_nightingale[365066]: 167 167
Jan 20 15:18:06 compute-0 systemd[1]: libpod-a5bbc0960a90ab475e7c732319bbbb5ba45b1e5ec4d4c21c69f74952c8e9b281.scope: Deactivated successfully.
Jan 20 15:18:06 compute-0 podman[365047]: 2026-01-20 15:18:06.7002842 +0000 UTC m=+0.114335716 container died a5bbc0960a90ab475e7c732319bbbb5ba45b1e5ec4d4c21c69f74952c8e9b281 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_nightingale, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 20 15:18:06 compute-0 podman[365047]: 2026-01-20 15:18:06.608013911 +0000 UTC m=+0.022065447 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:18:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-45218df4a810d40437f5ea2869d043a496cfff1e0e80e3b05ac7373b218452fb-merged.mount: Deactivated successfully.
Jan 20 15:18:06 compute-0 podman[365047]: 2026-01-20 15:18:06.741872051 +0000 UTC m=+0.155923567 container remove a5bbc0960a90ab475e7c732319bbbb5ba45b1e5ec4d4c21c69f74952c8e9b281 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_nightingale, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 15:18:06 compute-0 podman[365065]: 2026-01-20 15:18:06.743823394 +0000 UTC m=+0.081757577 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 20 15:18:06 compute-0 systemd[1]: libpod-conmon-a5bbc0960a90ab475e7c732319bbbb5ba45b1e5ec4d4c21c69f74952c8e9b281.scope: Deactivated successfully.
Jan 20 15:18:06 compute-0 podman[365062]: 2026-01-20 15:18:06.759164098 +0000 UTC m=+0.097744578 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 20 15:18:06 compute-0 ceph-mon[74360]: pgmap v2875: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 8.9 KiB/s rd, 255 B/s wr, 11 op/s
Jan 20 15:18:06 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:18:06 compute-0 podman[365134]: 2026-01-20 15:18:06.921549768 +0000 UTC m=+0.057184783 container create 7d30066b4c46816a4db3c8d89f3af565879bef772ca97cf5725dce1e1701c7bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_curran, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 15:18:06 compute-0 systemd[1]: Started libpod-conmon-7d30066b4c46816a4db3c8d89f3af565879bef772ca97cf5725dce1e1701c7bd.scope.
Jan 20 15:18:06 compute-0 podman[365134]: 2026-01-20 15:18:06.89233662 +0000 UTC m=+0.027971675 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:18:07 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:18:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b785c0233dde60f56a971f2f32c92a5fe4f18acf14039d56dc50831d64a3d47/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 15:18:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b785c0233dde60f56a971f2f32c92a5fe4f18acf14039d56dc50831d64a3d47/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 15:18:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b785c0233dde60f56a971f2f32c92a5fe4f18acf14039d56dc50831d64a3d47/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 15:18:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b785c0233dde60f56a971f2f32c92a5fe4f18acf14039d56dc50831d64a3d47/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 15:18:07 compute-0 podman[365134]: 2026-01-20 15:18:07.044748382 +0000 UTC m=+0.180383407 container init 7d30066b4c46816a4db3c8d89f3af565879bef772ca97cf5725dce1e1701c7bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_curran, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 20 15:18:07 compute-0 podman[365134]: 2026-01-20 15:18:07.0584497 +0000 UTC m=+0.194084745 container start 7d30066b4c46816a4db3c8d89f3af565879bef772ca97cf5725dce1e1701c7bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_curran, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:18:07 compute-0 podman[365134]: 2026-01-20 15:18:07.06249378 +0000 UTC m=+0.198128835 container attach 7d30066b4c46816a4db3c8d89f3af565879bef772ca97cf5725dce1e1701c7bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_curran, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 15:18:07 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:18:07 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:18:07 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:18:07.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:18:07 compute-0 nova_compute[250018]: 2026-01-20 15:18:07.570 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:18:07 compute-0 nova_compute[250018]: 2026-01-20 15:18:07.572 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:18:07 compute-0 nova_compute[250018]: 2026-01-20 15:18:07.572 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:18:07 compute-0 nova_compute[250018]: 2026-01-20 15:18:07.572 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 15:18:07 compute-0 nova_compute[250018]: 2026-01-20 15:18:07.573 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:18:07 compute-0 vigilant_curran[365150]: {
Jan 20 15:18:07 compute-0 vigilant_curran[365150]:     "0": [
Jan 20 15:18:07 compute-0 vigilant_curran[365150]:         {
Jan 20 15:18:07 compute-0 vigilant_curran[365150]:             "devices": [
Jan 20 15:18:07 compute-0 vigilant_curran[365150]:                 "/dev/loop3"
Jan 20 15:18:07 compute-0 vigilant_curran[365150]:             ],
Jan 20 15:18:07 compute-0 vigilant_curran[365150]:             "lv_name": "ceph_lv0",
Jan 20 15:18:07 compute-0 vigilant_curran[365150]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 15:18:07 compute-0 vigilant_curran[365150]:             "lv_size": "7511998464",
Jan 20 15:18:07 compute-0 vigilant_curran[365150]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e399cf45-e6b6-5393-99f1-75c601d3f188,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afc3740a-ccee-46f8-b343-22648cc89376,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 20 15:18:07 compute-0 vigilant_curran[365150]:             "lv_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 15:18:07 compute-0 vigilant_curran[365150]:             "name": "ceph_lv0",
Jan 20 15:18:07 compute-0 vigilant_curran[365150]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 15:18:07 compute-0 vigilant_curran[365150]:             "tags": {
Jan 20 15:18:07 compute-0 vigilant_curran[365150]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 15:18:07 compute-0 vigilant_curran[365150]:                 "ceph.block_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 15:18:07 compute-0 vigilant_curran[365150]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 15:18:07 compute-0 vigilant_curran[365150]:                 "ceph.cluster_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 15:18:07 compute-0 vigilant_curran[365150]:                 "ceph.cluster_name": "ceph",
Jan 20 15:18:07 compute-0 vigilant_curran[365150]:                 "ceph.crush_device_class": "",
Jan 20 15:18:07 compute-0 vigilant_curran[365150]:                 "ceph.encrypted": "0",
Jan 20 15:18:07 compute-0 vigilant_curran[365150]:                 "ceph.osd_fsid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 15:18:07 compute-0 vigilant_curran[365150]:                 "ceph.osd_id": "0",
Jan 20 15:18:07 compute-0 vigilant_curran[365150]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 15:18:07 compute-0 vigilant_curran[365150]:                 "ceph.type": "block",
Jan 20 15:18:07 compute-0 vigilant_curran[365150]:                 "ceph.vdo": "0"
Jan 20 15:18:07 compute-0 vigilant_curran[365150]:             },
Jan 20 15:18:07 compute-0 vigilant_curran[365150]:             "type": "block",
Jan 20 15:18:07 compute-0 vigilant_curran[365150]:             "vg_name": "ceph_vg0"
Jan 20 15:18:07 compute-0 vigilant_curran[365150]:         }
Jan 20 15:18:07 compute-0 vigilant_curran[365150]:     ]
Jan 20 15:18:07 compute-0 vigilant_curran[365150]: }
Jan 20 15:18:07 compute-0 systemd[1]: libpod-7d30066b4c46816a4db3c8d89f3af565879bef772ca97cf5725dce1e1701c7bd.scope: Deactivated successfully.
Jan 20 15:18:07 compute-0 podman[365134]: 2026-01-20 15:18:07.846155889 +0000 UTC m=+0.981790924 container died 7d30066b4c46816a4db3c8d89f3af565879bef772ca97cf5725dce1e1701c7bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_curran, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 20 15:18:07 compute-0 nova_compute[250018]: 2026-01-20 15:18:07.851 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:18:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-9b785c0233dde60f56a971f2f32c92a5fe4f18acf14039d56dc50831d64a3d47-merged.mount: Deactivated successfully.
Jan 20 15:18:07 compute-0 podman[365134]: 2026-01-20 15:18:07.910442404 +0000 UTC m=+1.046077419 container remove 7d30066b4c46816a4db3c8d89f3af565879bef772ca97cf5725dce1e1701c7bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_curran, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 15:18:07 compute-0 sudo[364984]: pam_unix(sudo:session): session closed for user root
Jan 20 15:18:07 compute-0 systemd[1]: libpod-conmon-7d30066b4c46816a4db3c8d89f3af565879bef772ca97cf5725dce1e1701c7bd.scope: Deactivated successfully.
Jan 20 15:18:08 compute-0 sudo[365191]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:18:08 compute-0 sudo[365191]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:18:08 compute-0 sudo[365191]: pam_unix(sudo:session): session closed for user root
Jan 20 15:18:08 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:18:08 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/563328679' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:18:08 compute-0 nova_compute[250018]: 2026-01-20 15:18:08.058 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:18:08 compute-0 sudo[365217]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:18:08 compute-0 sudo[365217]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:18:08 compute-0 sudo[365217]: pam_unix(sudo:session): session closed for user root
Jan 20 15:18:08 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/563328679' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:18:08 compute-0 sudo[365244]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:18:08 compute-0 sudo[365244]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:18:08 compute-0 sudo[365244]: pam_unix(sudo:session): session closed for user root
Jan 20 15:18:08 compute-0 sudo[365269]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- raw list --format json
Jan 20 15:18:08 compute-0 sudo[365269]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:18:08 compute-0 nova_compute[250018]: 2026-01-20 15:18:08.261 250022 WARNING nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:18:08 compute-0 nova_compute[250018]: 2026-01-20 15:18:08.263 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4200MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 15:18:08 compute-0 nova_compute[250018]: 2026-01-20 15:18:08.263 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:18:08 compute-0 nova_compute[250018]: 2026-01-20 15:18:08.263 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:18:08 compute-0 nova_compute[250018]: 2026-01-20 15:18:08.350 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 15:18:08 compute-0 nova_compute[250018]: 2026-01-20 15:18:08.352 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 15:18:08 compute-0 nova_compute[250018]: 2026-01-20 15:18:08.374 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:18:08 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:18:08 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:18:08 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:18:08.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:18:08 compute-0 podman[365353]: 2026-01-20 15:18:08.594524626 +0000 UTC m=+0.038062297 container create 1733946bde7289135cd0a94866d3c8e1d7a8c8f82da97b09957ba6c669978be6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_gates, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 15:18:08 compute-0 systemd[1]: Started libpod-conmon-1733946bde7289135cd0a94866d3c8e1d7a8c8f82da97b09957ba6c669978be6.scope.
Jan 20 15:18:08 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:18:08 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2876: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:18:08 compute-0 podman[365353]: 2026-01-20 15:18:08.649011866 +0000 UTC m=+0.092549537 container init 1733946bde7289135cd0a94866d3c8e1d7a8c8f82da97b09957ba6c669978be6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_gates, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 15:18:08 compute-0 podman[365353]: 2026-01-20 15:18:08.656513339 +0000 UTC m=+0.100051010 container start 1733946bde7289135cd0a94866d3c8e1d7a8c8f82da97b09957ba6c669978be6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_gates, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 20 15:18:08 compute-0 podman[365353]: 2026-01-20 15:18:08.659444738 +0000 UTC m=+0.102982409 container attach 1733946bde7289135cd0a94866d3c8e1d7a8c8f82da97b09957ba6c669978be6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_gates, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 15:18:08 compute-0 xenodochial_gates[365369]: 167 167
Jan 20 15:18:08 compute-0 systemd[1]: libpod-1733946bde7289135cd0a94866d3c8e1d7a8c8f82da97b09957ba6c669978be6.scope: Deactivated successfully.
Jan 20 15:18:08 compute-0 conmon[365369]: conmon 1733946bde7289135cd0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1733946bde7289135cd0a94866d3c8e1d7a8c8f82da97b09957ba6c669978be6.scope/container/memory.events
Jan 20 15:18:08 compute-0 podman[365353]: 2026-01-20 15:18:08.663091076 +0000 UTC m=+0.106628747 container died 1733946bde7289135cd0a94866d3c8e1d7a8c8f82da97b09957ba6c669978be6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_gates, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 15:18:08 compute-0 podman[365353]: 2026-01-20 15:18:08.57762317 +0000 UTC m=+0.021160861 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:18:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-0fb372072f422d06ecb3fb6cddc18d22b18af92fdac1a166452b05c1756f7c9f-merged.mount: Deactivated successfully.
Jan 20 15:18:08 compute-0 podman[365353]: 2026-01-20 15:18:08.696724893 +0000 UTC m=+0.140262564 container remove 1733946bde7289135cd0a94866d3c8e1d7a8c8f82da97b09957ba6c669978be6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_gates, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True)
Jan 20 15:18:08 compute-0 systemd[1]: libpod-conmon-1733946bde7289135cd0a94866d3c8e1d7a8c8f82da97b09957ba6c669978be6.scope: Deactivated successfully.
Jan 20 15:18:08 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:18:08 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/83561485' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:18:08 compute-0 nova_compute[250018]: 2026-01-20 15:18:08.848 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:18:08 compute-0 nova_compute[250018]: 2026-01-20 15:18:08.857 250022 DEBUG nova.compute.provider_tree [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:18:08 compute-0 podman[365393]: 2026-01-20 15:18:08.85748929 +0000 UTC m=+0.044293225 container create a653c0c637be3d67bf72edd3db086cfdd6a57bdaa333d7ea8e380077c88bf135 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_grothendieck, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 20 15:18:08 compute-0 nova_compute[250018]: 2026-01-20 15:18:08.874 250022 DEBUG nova.scheduler.client.report [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:18:08 compute-0 nova_compute[250018]: 2026-01-20 15:18:08.892 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 15:18:08 compute-0 nova_compute[250018]: 2026-01-20 15:18:08.892 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.629s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:18:08 compute-0 systemd[1]: Started libpod-conmon-a653c0c637be3d67bf72edd3db086cfdd6a57bdaa333d7ea8e380077c88bf135.scope.
Jan 20 15:18:08 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:18:08 compute-0 podman[365393]: 2026-01-20 15:18:08.836252497 +0000 UTC m=+0.023056462 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:18:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7395422ab1383f5f40f48c3981ace7629b21bea97f17677c4f5f6c5b302525ba/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 15:18:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7395422ab1383f5f40f48c3981ace7629b21bea97f17677c4f5f6c5b302525ba/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 15:18:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7395422ab1383f5f40f48c3981ace7629b21bea97f17677c4f5f6c5b302525ba/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 15:18:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7395422ab1383f5f40f48c3981ace7629b21bea97f17677c4f5f6c5b302525ba/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 15:18:08 compute-0 podman[365393]: 2026-01-20 15:18:08.949527873 +0000 UTC m=+0.136331828 container init a653c0c637be3d67bf72edd3db086cfdd6a57bdaa333d7ea8e380077c88bf135 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_grothendieck, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 15:18:08 compute-0 podman[365393]: 2026-01-20 15:18:08.955791021 +0000 UTC m=+0.142594956 container start a653c0c637be3d67bf72edd3db086cfdd6a57bdaa333d7ea8e380077c88bf135 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_grothendieck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 15:18:08 compute-0 podman[365393]: 2026-01-20 15:18:08.960286483 +0000 UTC m=+0.147090418 container attach a653c0c637be3d67bf72edd3db086cfdd6a57bdaa333d7ea8e380077c88bf135 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_grothendieck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 15:18:09 compute-0 ceph-mon[74360]: pgmap v2876: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:18:09 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/83561485' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:18:09 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:18:09 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:18:09 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:18:09.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:18:09 compute-0 inspiring_grothendieck[365410]: {
Jan 20 15:18:09 compute-0 inspiring_grothendieck[365410]:     "afc3740a-ccee-46f8-b343-22648cc89376": {
Jan 20 15:18:09 compute-0 inspiring_grothendieck[365410]:         "ceph_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 15:18:09 compute-0 inspiring_grothendieck[365410]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 20 15:18:09 compute-0 inspiring_grothendieck[365410]:         "osd_id": 0,
Jan 20 15:18:09 compute-0 inspiring_grothendieck[365410]:         "osd_uuid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 15:18:09 compute-0 inspiring_grothendieck[365410]:         "type": "bluestore"
Jan 20 15:18:09 compute-0 inspiring_grothendieck[365410]:     }
Jan 20 15:18:09 compute-0 inspiring_grothendieck[365410]: }
Jan 20 15:18:09 compute-0 systemd[1]: libpod-a653c0c637be3d67bf72edd3db086cfdd6a57bdaa333d7ea8e380077c88bf135.scope: Deactivated successfully.
Jan 20 15:18:09 compute-0 podman[365393]: 2026-01-20 15:18:09.806549901 +0000 UTC m=+0.993353836 container died a653c0c637be3d67bf72edd3db086cfdd6a57bdaa333d7ea8e380077c88bf135 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_grothendieck, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 15:18:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-7395422ab1383f5f40f48c3981ace7629b21bea97f17677c4f5f6c5b302525ba-merged.mount: Deactivated successfully.
Jan 20 15:18:09 compute-0 podman[365393]: 2026-01-20 15:18:09.860432265 +0000 UTC m=+1.047236200 container remove a653c0c637be3d67bf72edd3db086cfdd6a57bdaa333d7ea8e380077c88bf135 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_grothendieck, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 20 15:18:09 compute-0 systemd[1]: libpod-conmon-a653c0c637be3d67bf72edd3db086cfdd6a57bdaa333d7ea8e380077c88bf135.scope: Deactivated successfully.
Jan 20 15:18:09 compute-0 sudo[365269]: pam_unix(sudo:session): session closed for user root
Jan 20 15:18:09 compute-0 nova_compute[250018]: 2026-01-20 15:18:09.886 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:18:09 compute-0 nova_compute[250018]: 2026-01-20 15:18:09.887 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_shelved_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:18:09 compute-0 nova_compute[250018]: 2026-01-20 15:18:09.888 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:18:09 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 15:18:09 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:18:09 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 15:18:09 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:18:09 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev e0ffb18a-6ecd-4fbe-bb54-300170668c23 does not exist
Jan 20 15:18:09 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 5727eb53-4fad-4dc6-9308-1de1dde86313 does not exist
Jan 20 15:18:09 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 0107c294-a60c-4fa2-8bcd-7b163259aa64 does not exist
Jan 20 15:18:09 compute-0 sudo[365446]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:18:09 compute-0 sudo[365446]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:18:09 compute-0 sudo[365446]: pam_unix(sudo:session): session closed for user root
Jan 20 15:18:10 compute-0 sudo[365471]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 15:18:10 compute-0 sudo[365471]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:18:10 compute-0 sudo[365471]: pam_unix(sudo:session): session closed for user root
Jan 20 15:18:10 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:18:10 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:18:10 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:18:10.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:18:10 compute-0 nova_compute[250018]: 2026-01-20 15:18:10.571 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:18:10 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2877: 321 pgs: 321 active+clean; 155 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.7 MiB/s wr, 25 op/s
Jan 20 15:18:10 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:18:10 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:18:10 compute-0 ceph-mon[74360]: pgmap v2877: 321 pgs: 321 active+clean; 155 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.7 MiB/s wr, 25 op/s
Jan 20 15:18:11 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:18:11 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:18:11 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:18:11.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:18:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 15:18:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:18:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 20 15:18:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:18:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0009454826214405361 of space, bias 1.0, pg target 0.28364478643216084 quantized to 32 (current 32)
Jan 20 15:18:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:18:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021614147124511445 of space, bias 1.0, pg target 0.6484244137353433 quantized to 32 (current 32)
Jan 20 15:18:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:18:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 20 15:18:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:18:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 20 15:18:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:18:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 20 15:18:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:18:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 15:18:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:18:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 20 15:18:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:18:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 20 15:18:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:18:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 15:18:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:18:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 20 15:18:11 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:18:12 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:18:12 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:18:12 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:18:12.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:18:12 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2878: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:18:12 compute-0 ceph-mon[74360]: pgmap v2878: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:18:12 compute-0 nova_compute[250018]: 2026-01-20 15:18:12.858 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:18:13 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:18:13 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:18:13 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:18:13.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:18:13 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 20 15:18:13 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/149369571' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:18:13 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 20 15:18:13 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/149369571' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:18:13 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/149369571' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:18:13 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/149369571' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:18:14 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:18:14 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:18:14 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:18:14.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:18:14 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2879: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:18:14 compute-0 ceph-mon[74360]: pgmap v2879: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:18:15 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:18:15 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:18:15 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:18:15.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:18:15 compute-0 nova_compute[250018]: 2026-01-20 15:18:15.572 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:18:15 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/2456508678' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:18:16 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:18:16 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:18:16 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:18:16.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:18:16 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2880: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:18:16 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/594431206' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:18:16 compute-0 ceph-mon[74360]: pgmap v2880: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:18:16 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:18:17 compute-0 nova_compute[250018]: 2026-01-20 15:18:17.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:18:17 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:18:17 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:18:17 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:18:17.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:18:17 compute-0 nova_compute[250018]: 2026-01-20 15:18:17.860 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:18:18 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:18:18 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:18:18 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:18:18.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:18:18 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2881: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:18:18 compute-0 ceph-mon[74360]: pgmap v2881: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:18:19 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:18:19 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:18:19 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:18:19.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:18:20 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:18:20 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:18:20 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:18:20.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:18:20 compute-0 nova_compute[250018]: 2026-01-20 15:18:20.574 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:18:20 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2882: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 924 KiB/s rd, 1.8 MiB/s wr, 59 op/s
Jan 20 15:18:20 compute-0 ceph-mon[74360]: pgmap v2882: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 924 KiB/s rd, 1.8 MiB/s wr, 59 op/s
Jan 20 15:18:21 compute-0 nova_compute[250018]: 2026-01-20 15:18:21.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:18:21 compute-0 nova_compute[250018]: 2026-01-20 15:18:21.051 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 15:18:21 compute-0 nova_compute[250018]: 2026-01-20 15:18:21.078 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 20 15:18:21 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:18:21 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:18:21 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:18:21.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:18:21 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:18:22 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:18:22 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:18:22 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:18:22.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:18:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:18:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:18:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:18:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:18:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:18:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:18:22 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2883: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 92 KiB/s wr, 48 op/s
Jan 20 15:18:22 compute-0 sudo[365503]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:18:22 compute-0 sudo[365503]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:18:22 compute-0 sudo[365503]: pam_unix(sudo:session): session closed for user root
Jan 20 15:18:22 compute-0 sudo[365528]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:18:22 compute-0 sudo[365528]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:18:22 compute-0 sudo[365528]: pam_unix(sudo:session): session closed for user root
Jan 20 15:18:22 compute-0 ceph-mon[74360]: pgmap v2883: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 92 KiB/s wr, 48 op/s
Jan 20 15:18:22 compute-0 nova_compute[250018]: 2026-01-20 15:18:22.861 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:18:23 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:18:23 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:18:23 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:18:23.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:18:24 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:18:24 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:18:24 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:18:24.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:18:24 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2884: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 20 15:18:24 compute-0 ceph-mon[74360]: pgmap v2884: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 20 15:18:25 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:18:25 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:18:25 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:18:25.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:18:25 compute-0 nova_compute[250018]: 2026-01-20 15:18:25.575 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:18:26 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:18:26 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:18:26 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:18:26.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:18:26 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2885: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 20 15:18:26 compute-0 ceph-mon[74360]: pgmap v2885: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 20 15:18:26 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:18:27 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:18:27 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:18:27 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:18:27.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:18:27 compute-0 nova_compute[250018]: 2026-01-20 15:18:27.929 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:18:27 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Jan 20 15:18:28 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:18:28 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:18:28 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:18:28.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:18:28 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2886: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 20 15:18:29 compute-0 ceph-mon[74360]: pgmap v2886: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 20 15:18:29 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:18:29 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:18:29 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:18:29.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:18:30 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:18:30 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:18:30 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:18:30.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:18:30 compute-0 nova_compute[250018]: 2026-01-20 15:18:30.577 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:18:30 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2887: 321 pgs: 321 active+clean; 176 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1024 KiB/s wr, 84 op/s
Jan 20 15:18:30 compute-0 ceph-mon[74360]: pgmap v2887: 321 pgs: 321 active+clean; 176 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1024 KiB/s wr, 84 op/s
Jan 20 15:18:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:18:30.787 160071 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:18:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:18:30.788 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:18:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:18:30.788 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:18:31 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:18:31 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:18:31 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:18:31.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:18:31 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:18:31 compute-0 ceph-mon[74360]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #144. Immutable memtables: 0.
Jan 20 15:18:31 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:18:31.909017) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 15:18:31 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:856] [default] [JOB 87] Flushing memtable with next log file: 144
Jan 20 15:18:31 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768922311909104, "job": 87, "event": "flush_started", "num_memtables": 1, "num_entries": 1304, "num_deletes": 258, "total_data_size": 2015460, "memory_usage": 2045248, "flush_reason": "Manual Compaction"}
Jan 20 15:18:31 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:885] [default] [JOB 87] Level-0 flush table #145: started
Jan 20 15:18:31 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768922311920056, "cf_name": "default", "job": 87, "event": "table_file_creation", "file_number": 145, "file_size": 1979813, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 63790, "largest_seqno": 65093, "table_properties": {"data_size": 1973758, "index_size": 3257, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 13517, "raw_average_key_size": 20, "raw_value_size": 1961240, "raw_average_value_size": 2905, "num_data_blocks": 144, "num_entries": 675, "num_filter_entries": 675, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768922204, "oldest_key_time": 1768922204, "file_creation_time": 1768922311, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1688fb6f-f397-4aac-a0e8-874ff91d97a7", "db_session_id": "5V3N2TVXYZBCXP55EZHK", "orig_file_number": 145, "seqno_to_time_mapping": "N/A"}}
Jan 20 15:18:31 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 87] Flush lasted 11070 microseconds, and 4782 cpu microseconds.
Jan 20 15:18:31 compute-0 ceph-mon[74360]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 15:18:31 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:18:31.920113) [db/flush_job.cc:967] [default] [JOB 87] Level-0 flush table #145: 1979813 bytes OK
Jan 20 15:18:31 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:18:31.920130) [db/memtable_list.cc:519] [default] Level-0 commit table #145 started
Jan 20 15:18:31 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:18:31.922316) [db/memtable_list.cc:722] [default] Level-0 commit table #145: memtable #1 done
Jan 20 15:18:31 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:18:31.922329) EVENT_LOG_v1 {"time_micros": 1768922311922325, "job": 87, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 15:18:31 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:18:31.922344) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 15:18:31 compute-0 ceph-mon[74360]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 87] Try to delete WAL files size 2009638, prev total WAL file size 2009638, number of live WAL files 2.
Jan 20 15:18:31 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000141.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:18:31 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:18:31.923088) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032353230' seq:72057594037927935, type:22 .. '6C6F676D0032373732' seq:0, type:0; will stop at (end)
Jan 20 15:18:31 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 88] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 15:18:31 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 87 Base level 0, inputs: [145(1933KB)], [143(11MB)]
Jan 20 15:18:31 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768922311923174, "job": 88, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [145], "files_L6": [143], "score": -1, "input_data_size": 13652690, "oldest_snapshot_seqno": -1}
Jan 20 15:18:32 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 88] Generated table #146: 9394 keys, 13514204 bytes, temperature: kUnknown
Jan 20 15:18:32 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768922312013311, "cf_name": "default", "job": 88, "event": "table_file_creation", "file_number": 146, "file_size": 13514204, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13451115, "index_size": 38528, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 23493, "raw_key_size": 247061, "raw_average_key_size": 26, "raw_value_size": 13283829, "raw_average_value_size": 1414, "num_data_blocks": 1478, "num_entries": 9394, "num_filter_entries": 9394, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768917305, "oldest_key_time": 0, "file_creation_time": 1768922311, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1688fb6f-f397-4aac-a0e8-874ff91d97a7", "db_session_id": "5V3N2TVXYZBCXP55EZHK", "orig_file_number": 146, "seqno_to_time_mapping": "N/A"}}
Jan 20 15:18:32 compute-0 ceph-mon[74360]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 15:18:32 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:18:32.013632) [db/compaction/compaction_job.cc:1663] [default] [JOB 88] Compacted 1@0 + 1@6 files to L6 => 13514204 bytes
Jan 20 15:18:32 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:18:32.015115) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 151.7 rd, 150.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.9, 11.1 +0.0 blob) out(12.9 +0.0 blob), read-write-amplify(13.7) write-amplify(6.8) OK, records in: 9927, records dropped: 533 output_compression: NoCompression
Jan 20 15:18:32 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:18:32.015136) EVENT_LOG_v1 {"time_micros": 1768922312015126, "job": 88, "event": "compaction_finished", "compaction_time_micros": 90018, "compaction_time_cpu_micros": 43244, "output_level": 6, "num_output_files": 1, "total_output_size": 13514204, "num_input_records": 9927, "num_output_records": 9394, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 15:18:32 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000145.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:18:32 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768922312015684, "job": 88, "event": "table_file_deletion", "file_number": 145}
Jan 20 15:18:32 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000143.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:18:32 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768922312018292, "job": 88, "event": "table_file_deletion", "file_number": 143}
Jan 20 15:18:32 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:18:31.922917) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:18:32 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:18:32.018323) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:18:32 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:18:32.018327) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:18:32 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:18:32.018329) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:18:32 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:18:32.018330) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:18:32 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:18:32.018331) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:18:32 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:18:32 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:18:32 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:18:32.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:18:32 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2888: 321 pgs: 321 active+clean; 182 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 1.4 MiB/s wr, 65 op/s
Jan 20 15:18:32 compute-0 nova_compute[250018]: 2026-01-20 15:18:32.930 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:18:32 compute-0 ceph-mon[74360]: pgmap v2888: 321 pgs: 321 active+clean; 182 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 1.4 MiB/s wr, 65 op/s
Jan 20 15:18:33 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:18:33 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:18:33 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:18:33.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:18:34 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/2134992538' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:18:34 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:18:34 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:18:34 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:18:34.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:18:34 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2889: 321 pgs: 321 active+clean; 190 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 2.1 MiB/s wr, 72 op/s
Jan 20 15:18:35 compute-0 ceph-mon[74360]: pgmap v2889: 321 pgs: 321 active+clean; 190 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 2.1 MiB/s wr, 72 op/s
Jan 20 15:18:35 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:18:35 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:18:35 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:18:35.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:18:35 compute-0 nova_compute[250018]: 2026-01-20 15:18:35.580 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:18:36 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:18:36 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:18:36 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:18:36.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:18:36 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2890: 321 pgs: 321 active+clean; 208 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 343 KiB/s rd, 2.3 MiB/s wr, 86 op/s
Jan 20 15:18:36 compute-0 ceph-mon[74360]: pgmap v2890: 321 pgs: 321 active+clean; 208 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 343 KiB/s rd, 2.3 MiB/s wr, 86 op/s
Jan 20 15:18:36 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:18:36 compute-0 nova_compute[250018]: 2026-01-20 15:18:36.914 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:18:36 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:18:36.915 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=64, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=63) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:18:36 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:18:36.916 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 15:18:37 compute-0 podman[365562]: 2026-01-20 15:18:37.459034418 +0000 UTC m=+0.049426945 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 20 15:18:37 compute-0 podman[365561]: 2026-01-20 15:18:37.489709756 +0000 UTC m=+0.079898897 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Jan 20 15:18:37 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:18:37 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:18:37 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:18:37.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:18:37 compute-0 nova_compute[250018]: 2026-01-20 15:18:37.933 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:18:38 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:18:38 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:18:38 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:18:38.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:18:38 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2891: 321 pgs: 321 active+clean; 208 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 343 KiB/s rd, 2.3 MiB/s wr, 86 op/s
Jan 20 15:18:38 compute-0 ceph-mon[74360]: pgmap v2891: 321 pgs: 321 active+clean; 208 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 343 KiB/s rd, 2.3 MiB/s wr, 86 op/s
Jan 20 15:18:39 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:18:39 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:18:39 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:18:39.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:18:40 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:18:40 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:18:40 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:18:40.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:18:40 compute-0 nova_compute[250018]: 2026-01-20 15:18:40.582 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:18:40 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2892: 321 pgs: 321 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 344 KiB/s rd, 3.9 MiB/s wr, 90 op/s
Jan 20 15:18:40 compute-0 ceph-mon[74360]: pgmap v2892: 321 pgs: 321 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 344 KiB/s rd, 3.9 MiB/s wr, 90 op/s
Jan 20 15:18:40 compute-0 ovn_controller[148666]: 2026-01-20T15:18:40Z|00708|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Jan 20 15:18:41 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:18:41 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:18:41 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:18:41.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:18:41 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/2986932180' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:18:41 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:18:42 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:18:42 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:18:42 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:18:42.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:18:42 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2893: 321 pgs: 321 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 338 KiB/s rd, 2.9 MiB/s wr, 82 op/s
Jan 20 15:18:42 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/2338561239' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:18:42 compute-0 ceph-mon[74360]: pgmap v2893: 321 pgs: 321 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 338 KiB/s rd, 2.9 MiB/s wr, 82 op/s
Jan 20 15:18:42 compute-0 sudo[365611]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:18:42 compute-0 sudo[365611]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:18:42 compute-0 sudo[365611]: pam_unix(sudo:session): session closed for user root
Jan 20 15:18:42 compute-0 sudo[365636]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:18:42 compute-0 sudo[365636]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:18:42 compute-0 sudo[365636]: pam_unix(sudo:session): session closed for user root
Jan 20 15:18:42 compute-0 nova_compute[250018]: 2026-01-20 15:18:42.934 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:18:43 compute-0 sshd-session[365608]: Invalid user admin from 134.122.57.138 port 40522
Jan 20 15:18:43 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:18:43 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:18:43 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:18:43.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:18:43 compute-0 sshd-session[365608]: Connection closed by invalid user admin 134.122.57.138 port 40522 [preauth]
Jan 20 15:18:44 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:18:44 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:18:44 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:18:44.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:18:44 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2894: 321 pgs: 321 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 270 KiB/s rd, 2.6 MiB/s wr, 71 op/s
Jan 20 15:18:44 compute-0 ceph-mon[74360]: pgmap v2894: 321 pgs: 321 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 270 KiB/s rd, 2.6 MiB/s wr, 71 op/s
Jan 20 15:18:45 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:18:45 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:18:45 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:18:45.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:18:45 compute-0 nova_compute[250018]: 2026-01-20 15:18:45.584 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:18:45 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:18:45.918 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=367c1a2c-b16a-4828-ab5a-626bb50023b4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '64'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:18:46 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:18:46 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:18:46 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:18:46.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:18:46 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2895: 321 pgs: 321 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 542 KiB/s rd, 1.9 MiB/s wr, 75 op/s
Jan 20 15:18:46 compute-0 ceph-mon[74360]: pgmap v2895: 321 pgs: 321 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 542 KiB/s rd, 1.9 MiB/s wr, 75 op/s
Jan 20 15:18:46 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:18:47 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:18:47 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:18:47 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:18:47.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:18:47 compute-0 nova_compute[250018]: 2026-01-20 15:18:47.937 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:18:48 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:18:48 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:18:48 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:18:48.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:18:48 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2896: 321 pgs: 321 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 469 KiB/s rd, 1.6 MiB/s wr, 34 op/s
Jan 20 15:18:48 compute-0 ceph-mon[74360]: pgmap v2896: 321 pgs: 321 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 469 KiB/s rd, 1.6 MiB/s wr, 34 op/s
Jan 20 15:18:49 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:18:49 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:18:49 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:18:49.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:18:49 compute-0 radosgw[93153]: INFO: RGWReshardLock::lock found lock on reshard.0000000001 to be held by another RGW process; skipping for now
Jan 20 15:18:49 compute-0 radosgw[93153]: INFO: RGWReshardLock::lock found lock on reshard.0000000003 to be held by another RGW process; skipping for now
Jan 20 15:18:49 compute-0 radosgw[93153]: INFO: RGWReshardLock::lock found lock on reshard.0000000006 to be held by another RGW process; skipping for now
Jan 20 15:18:49 compute-0 radosgw[93153]: INFO: RGWReshardLock::lock found lock on reshard.0000000007 to be held by another RGW process; skipping for now
Jan 20 15:18:49 compute-0 radosgw[93153]: INFO: RGWReshardLock::lock found lock on reshard.0000000009 to be held by another RGW process; skipping for now
Jan 20 15:18:49 compute-0 radosgw[93153]: INFO: RGWReshardLock::lock found lock on reshard.0000000011 to be held by another RGW process; skipping for now
Jan 20 15:18:49 compute-0 radosgw[93153]: INFO: RGWReshardLock::lock found lock on reshard.0000000013 to be held by another RGW process; skipping for now
Jan 20 15:18:50 compute-0 radosgw[93153]: INFO: RGWReshardLock::lock found lock on reshard.0000000015 to be held by another RGW process; skipping for now
Jan 20 15:18:50 compute-0 nova_compute[250018]: 2026-01-20 15:18:50.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:18:50 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:18:50 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:18:50 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:18:50.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:18:50 compute-0 nova_compute[250018]: 2026-01-20 15:18:50.586 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:18:50 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2897: 321 pgs: 321 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 1.6 MiB/s wr, 153 op/s
Jan 20 15:18:50 compute-0 ceph-mon[74360]: pgmap v2897: 321 pgs: 321 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 1.6 MiB/s wr, 153 op/s
Jan 20 15:18:51 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:18:51 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:18:51 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:18:51.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:18:51 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:18:52 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:18:52 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:18:52 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:18:52.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:18:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:18:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:18:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:18:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:18:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:18:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:18:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Optimize plan auto_2026-01-20_15:18:52
Jan 20 15:18:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 15:18:52 compute-0 ceph-mgr[74653]: [balancer INFO root] do_upmap
Jan 20 15:18:52 compute-0 ceph-mgr[74653]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.log', '.mgr', 'backups', 'volumes', 'vms', 'default.rgw.meta', 'cephfs.cephfs.meta', 'default.rgw.control', 'images', 'cephfs.cephfs.data']
Jan 20 15:18:52 compute-0 ceph-mgr[74653]: [balancer INFO root] prepared 0/10 changes
Jan 20 15:18:52 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2898: 321 pgs: 321 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 45 KiB/s wr, 157 op/s
Jan 20 15:18:52 compute-0 ceph-mon[74360]: pgmap v2898: 321 pgs: 321 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 45 KiB/s wr, 157 op/s
Jan 20 15:18:52 compute-0 nova_compute[250018]: 2026-01-20 15:18:52.939 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:18:53 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:18:53 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:18:53 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:18:53.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:18:54 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:18:54 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:18:54 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:18:54.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:18:54 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2899: 321 pgs: 321 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 22 KiB/s wr, 223 op/s
Jan 20 15:18:54 compute-0 ceph-mon[74360]: pgmap v2899: 321 pgs: 321 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 22 KiB/s wr, 223 op/s
Jan 20 15:18:55 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:18:55 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:18:55 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:18:55.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:18:55 compute-0 nova_compute[250018]: 2026-01-20 15:18:55.588 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:18:56 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:18:56 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:18:56 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:18:56.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:18:56 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2900: 321 pgs: 321 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 22 KiB/s wr, 302 op/s
Jan 20 15:18:56 compute-0 ceph-mon[74360]: pgmap v2900: 321 pgs: 321 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 22 KiB/s wr, 302 op/s
Jan 20 15:18:56 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:18:57 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:18:57 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:18:57 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:18:57.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:18:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 15:18:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 15:18:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 15:18:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 15:18:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 15:18:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 15:18:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 15:18:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 15:18:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 15:18:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 15:18:57 compute-0 nova_compute[250018]: 2026-01-20 15:18:57.942 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:18:58 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:18:58 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:18:58 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:18:58.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:18:58 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2901: 321 pgs: 321 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 0 B/s wr, 277 op/s
Jan 20 15:18:58 compute-0 ceph-mon[74360]: pgmap v2901: 321 pgs: 321 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 0 B/s wr, 277 op/s
Jan 20 15:18:59 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:18:59 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:18:59 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:18:59.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:19:00 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:19:00 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:19:00 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:19:00.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:19:00 compute-0 nova_compute[250018]: 2026-01-20 15:19:00.591 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:19:00 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2902: 321 pgs: 321 active+clean; 271 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 1.5 MiB/s wr, 343 op/s
Jan 20 15:19:00 compute-0 ceph-mon[74360]: pgmap v2902: 321 pgs: 321 active+clean; 271 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 1.5 MiB/s wr, 343 op/s
Jan 20 15:19:01 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:19:01 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:19:01 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:19:01.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:19:01 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:19:02 compute-0 nova_compute[250018]: 2026-01-20 15:19:02.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:19:02 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/412186716' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:19:02 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:19:02 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:19:02 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:19:02.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:19:02 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2903: 321 pgs: 321 active+clean; 279 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 2.1 MiB/s wr, 266 op/s
Jan 20 15:19:02 compute-0 nova_compute[250018]: 2026-01-20 15:19:02.943 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:19:02 compute-0 sudo[365672]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:19:02 compute-0 sudo[365672]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:19:02 compute-0 sudo[365672]: pam_unix(sudo:session): session closed for user root
Jan 20 15:19:03 compute-0 sudo[365697]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:19:03 compute-0 sudo[365697]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:19:03 compute-0 sudo[365697]: pam_unix(sudo:session): session closed for user root
Jan 20 15:19:03 compute-0 ceph-mon[74360]: pgmap v2903: 321 pgs: 321 active+clean; 279 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 2.1 MiB/s wr, 266 op/s
Jan 20 15:19:03 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/817734252' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:19:03 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:19:03 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:19:03 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:19:03.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:19:04 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:19:04 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:19:04 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:19:04.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:19:04 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2904: 321 pgs: 321 active+clean; 279 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 950 KiB/s rd, 2.1 MiB/s wr, 258 op/s
Jan 20 15:19:04 compute-0 ceph-mon[74360]: pgmap v2904: 321 pgs: 321 active+clean; 279 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 950 KiB/s rd, 2.1 MiB/s wr, 258 op/s
Jan 20 15:19:05 compute-0 nova_compute[250018]: 2026-01-20 15:19:05.593 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:19:05 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:19:05 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:19:05 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:19:05.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:19:06 compute-0 nova_compute[250018]: 2026-01-20 15:19:06.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:19:06 compute-0 nova_compute[250018]: 2026-01-20 15:19:06.050 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 15:19:06 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:19:06 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:19:06 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:19:06.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:19:06 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2905: 321 pgs: 321 active+clean; 281 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 910 KiB/s rd, 2.2 MiB/s wr, 192 op/s
Jan 20 15:19:06 compute-0 ceph-mon[74360]: pgmap v2905: 321 pgs: 321 active+clean; 281 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 910 KiB/s rd, 2.2 MiB/s wr, 192 op/s
Jan 20 15:19:06 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:19:07 compute-0 nova_compute[250018]: 2026-01-20 15:19:07.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:19:07 compute-0 nova_compute[250018]: 2026-01-20 15:19:07.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:19:07 compute-0 nova_compute[250018]: 2026-01-20 15:19:07.091 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:19:07 compute-0 nova_compute[250018]: 2026-01-20 15:19:07.091 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:19:07 compute-0 nova_compute[250018]: 2026-01-20 15:19:07.092 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:19:07 compute-0 nova_compute[250018]: 2026-01-20 15:19:07.092 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 15:19:07 compute-0 nova_compute[250018]: 2026-01-20 15:19:07.092 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:19:07 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:19:07 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:19:07 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:19:07.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:19:07 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:19:07 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3414621009' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:19:07 compute-0 nova_compute[250018]: 2026-01-20 15:19:07.633 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.541s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:19:07 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/3146294823' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:19:07 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3414621009' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:19:07 compute-0 nova_compute[250018]: 2026-01-20 15:19:07.795 250022 WARNING nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:19:07 compute-0 nova_compute[250018]: 2026-01-20 15:19:07.797 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4209MB free_disk=20.897098541259766GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 15:19:07 compute-0 nova_compute[250018]: 2026-01-20 15:19:07.797 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:19:07 compute-0 nova_compute[250018]: 2026-01-20 15:19:07.798 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:19:07 compute-0 nova_compute[250018]: 2026-01-20 15:19:07.885 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 15:19:07 compute-0 nova_compute[250018]: 2026-01-20 15:19:07.886 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 15:19:07 compute-0 nova_compute[250018]: 2026-01-20 15:19:07.904 250022 DEBUG nova.scheduler.client.report [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Refreshing inventories for resource provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 20 15:19:07 compute-0 nova_compute[250018]: 2026-01-20 15:19:07.931 250022 DEBUG nova.scheduler.client.report [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Updating ProviderTree inventory for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 20 15:19:07 compute-0 nova_compute[250018]: 2026-01-20 15:19:07.931 250022 DEBUG nova.compute.provider_tree [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Updating inventory in ProviderTree for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 20 15:19:07 compute-0 nova_compute[250018]: 2026-01-20 15:19:07.945 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:19:07 compute-0 nova_compute[250018]: 2026-01-20 15:19:07.951 250022 DEBUG nova.scheduler.client.report [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Refreshing aggregate associations for resource provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 20 15:19:07 compute-0 nova_compute[250018]: 2026-01-20 15:19:07.973 250022 DEBUG nova.scheduler.client.report [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Refreshing trait associations for resource provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed, traits: COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_TRUSTED_CERTS,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NODE,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_RESCUE_BFV,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SSE2,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VOLUME_EXTEND,COMPUTE_ACCELERATORS,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_SATA,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE41,HW_CPU_X86_MMX,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_IDE,COMPUTE_DEVICE_TAGGING _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 20 15:19:08 compute-0 nova_compute[250018]: 2026-01-20 15:19:08.001 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:19:08 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:19:08 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3736625337' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:19:08 compute-0 nova_compute[250018]: 2026-01-20 15:19:08.443 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:19:08 compute-0 nova_compute[250018]: 2026-01-20 15:19:08.448 250022 DEBUG nova.compute.provider_tree [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:19:08 compute-0 podman[365768]: 2026-01-20 15:19:08.46120016 +0000 UTC m=+0.053092844 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 15:19:08 compute-0 nova_compute[250018]: 2026-01-20 15:19:08.469 250022 DEBUG nova.scheduler.client.report [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:19:08 compute-0 nova_compute[250018]: 2026-01-20 15:19:08.471 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 15:19:08 compute-0 nova_compute[250018]: 2026-01-20 15:19:08.471 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.674s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:19:08 compute-0 podman[365767]: 2026-01-20 15:19:08.509331678 +0000 UTC m=+0.096972727 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 20 15:19:08 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:19:08 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:19:08 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:19:08.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:19:08 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2906: 321 pgs: 321 active+clean; 281 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 861 KiB/s rd, 2.2 MiB/s wr, 110 op/s
Jan 20 15:19:08 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/222608440' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:19:08 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/750879127' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:19:08 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3736625337' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:19:08 compute-0 ceph-mon[74360]: pgmap v2906: 321 pgs: 321 active+clean; 281 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 861 KiB/s rd, 2.2 MiB/s wr, 110 op/s
Jan 20 15:19:08 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/3200555813' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:19:09 compute-0 nova_compute[250018]: 2026-01-20 15:19:09.466 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:19:09 compute-0 nova_compute[250018]: 2026-01-20 15:19:09.467 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:19:09 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:19:09 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:19:09 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:19:09.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:19:10 compute-0 sudo[365809]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:19:10 compute-0 sudo[365809]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:19:10 compute-0 sudo[365809]: pam_unix(sudo:session): session closed for user root
Jan 20 15:19:10 compute-0 sudo[365834]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:19:10 compute-0 sudo[365834]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:19:10 compute-0 sudo[365834]: pam_unix(sudo:session): session closed for user root
Jan 20 15:19:10 compute-0 sudo[365859]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:19:10 compute-0 sudo[365859]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:19:10 compute-0 sudo[365859]: pam_unix(sudo:session): session closed for user root
Jan 20 15:19:10 compute-0 sudo[365884]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Jan 20 15:19:10 compute-0 sudo[365884]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:19:10 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:19:10 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:19:10 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:19:10.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:19:10 compute-0 nova_compute[250018]: 2026-01-20 15:19:10.595 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:19:10 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2907: 321 pgs: 321 active+clean; 222 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 874 KiB/s rd, 2.2 MiB/s wr, 128 op/s
Jan 20 15:19:10 compute-0 sudo[365884]: pam_unix(sudo:session): session closed for user root
Jan 20 15:19:10 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 15:19:10 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:19:10 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 15:19:10 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:19:10 compute-0 ceph-mon[74360]: pgmap v2907: 321 pgs: 321 active+clean; 222 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 874 KiB/s rd, 2.2 MiB/s wr, 128 op/s
Jan 20 15:19:10 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:19:10 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:19:10 compute-0 sudo[365929]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:19:10 compute-0 sudo[365929]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:19:10 compute-0 sudo[365929]: pam_unix(sudo:session): session closed for user root
Jan 20 15:19:10 compute-0 sudo[365954]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:19:10 compute-0 sudo[365954]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:19:10 compute-0 sudo[365954]: pam_unix(sudo:session): session closed for user root
Jan 20 15:19:10 compute-0 sudo[365979]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:19:10 compute-0 sudo[365979]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:19:10 compute-0 sudo[365979]: pam_unix(sudo:session): session closed for user root
Jan 20 15:19:10 compute-0 sudo[366004]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 15:19:10 compute-0 sudo[366004]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:19:11 compute-0 sudo[366004]: pam_unix(sudo:session): session closed for user root
Jan 20 15:19:11 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 15:19:11 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:19:11 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 20 15:19:11 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 15:19:11 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 20 15:19:11 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:19:11 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev c52d5916-631d-46e9-9da5-d53beef12180 does not exist
Jan 20 15:19:11 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev d2c3f98e-0899-4167-9bf9-737c24959337 does not exist
Jan 20 15:19:11 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 96ba542d-917d-4e3b-a72d-f49733f213e5 does not exist
Jan 20 15:19:11 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 20 15:19:11 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 15:19:11 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 20 15:19:11 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 15:19:11 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 15:19:11 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:19:11 compute-0 sudo[366061]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:19:11 compute-0 sudo[366061]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:19:11 compute-0 sudo[366061]: pam_unix(sudo:session): session closed for user root
Jan 20 15:19:11 compute-0 sudo[366086]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:19:11 compute-0 sudo[366086]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:19:11 compute-0 sudo[366086]: pam_unix(sudo:session): session closed for user root
Jan 20 15:19:11 compute-0 sudo[366111]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:19:11 compute-0 sudo[366111]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:19:11 compute-0 sudo[366111]: pam_unix(sudo:session): session closed for user root
Jan 20 15:19:11 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:19:11 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:19:11 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:19:11.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:19:11 compute-0 sudo[366136]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 15:19:11 compute-0 sudo[366136]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:19:11 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:19:11 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 15:19:11 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:19:11 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 15:19:11 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 15:19:11 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:19:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 15:19:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:19:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 20 15:19:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:19:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0024562191745765866 of space, bias 1.0, pg target 0.736865752372976 quantized to 32 (current 32)
Jan 20 15:19:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:19:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021617782198027173 of space, bias 1.0, pg target 0.6485334659408152 quantized to 32 (current 32)
Jan 20 15:19:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:19:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 20 15:19:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:19:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 20 15:19:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:19:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 20 15:19:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:19:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 15:19:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:19:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 20 15:19:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:19:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 20 15:19:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:19:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 15:19:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:19:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 20 15:19:11 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:19:11 compute-0 podman[366201]: 2026-01-20 15:19:11.942058496 +0000 UTC m=+0.044835620 container create 2512bdb2ec1c0af63062c29e29e0aa983b6f05ae1554ff01b9f3632ae8c8c7ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_williams, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:19:11 compute-0 systemd[1]: Started libpod-conmon-2512bdb2ec1c0af63062c29e29e0aa983b6f05ae1554ff01b9f3632ae8c8c7ba.scope.
Jan 20 15:19:11 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:19:12 compute-0 podman[366201]: 2026-01-20 15:19:12.013440652 +0000 UTC m=+0.116217786 container init 2512bdb2ec1c0af63062c29e29e0aa983b6f05ae1554ff01b9f3632ae8c8c7ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_williams, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 15:19:12 compute-0 podman[366201]: 2026-01-20 15:19:11.920556076 +0000 UTC m=+0.023333230 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:19:12 compute-0 podman[366201]: 2026-01-20 15:19:12.025660212 +0000 UTC m=+0.128437336 container start 2512bdb2ec1c0af63062c29e29e0aa983b6f05ae1554ff01b9f3632ae8c8c7ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_williams, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 20 15:19:12 compute-0 podman[366201]: 2026-01-20 15:19:12.028910349 +0000 UTC m=+0.131687493 container attach 2512bdb2ec1c0af63062c29e29e0aa983b6f05ae1554ff01b9f3632ae8c8c7ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_williams, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:19:12 compute-0 stupefied_williams[366217]: 167 167
Jan 20 15:19:12 compute-0 systemd[1]: libpod-2512bdb2ec1c0af63062c29e29e0aa983b6f05ae1554ff01b9f3632ae8c8c7ba.scope: Deactivated successfully.
Jan 20 15:19:12 compute-0 podman[366201]: 2026-01-20 15:19:12.030923743 +0000 UTC m=+0.133700867 container died 2512bdb2ec1c0af63062c29e29e0aa983b6f05ae1554ff01b9f3632ae8c8c7ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_williams, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 20 15:19:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-8eafc4cd3c1d8e4c0dd63a956a2f6f42a2e43b7a60634064dfd3d85b2edc7d9d-merged.mount: Deactivated successfully.
Jan 20 15:19:12 compute-0 podman[366201]: 2026-01-20 15:19:12.070587023 +0000 UTC m=+0.173364177 container remove 2512bdb2ec1c0af63062c29e29e0aa983b6f05ae1554ff01b9f3632ae8c8c7ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_williams, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 15:19:12 compute-0 systemd[1]: libpod-conmon-2512bdb2ec1c0af63062c29e29e0aa983b6f05ae1554ff01b9f3632ae8c8c7ba.scope: Deactivated successfully.
Jan 20 15:19:12 compute-0 podman[366242]: 2026-01-20 15:19:12.266687233 +0000 UTC m=+0.054485071 container create 2d91125fdbf11101f763f63ac737e5513a5e70e4c9547cc55439153e00690c71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_hugle, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 20 15:19:12 compute-0 systemd[1]: Started libpod-conmon-2d91125fdbf11101f763f63ac737e5513a5e70e4c9547cc55439153e00690c71.scope.
Jan 20 15:19:12 compute-0 podman[366242]: 2026-01-20 15:19:12.247505445 +0000 UTC m=+0.035303303 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:19:12 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:19:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60e4b2b7aea6f0f24855807892766b287b566652de322b9f6cdbd0d47eec0554/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 15:19:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60e4b2b7aea6f0f24855807892766b287b566652de322b9f6cdbd0d47eec0554/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 15:19:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60e4b2b7aea6f0f24855807892766b287b566652de322b9f6cdbd0d47eec0554/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 15:19:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60e4b2b7aea6f0f24855807892766b287b566652de322b9f6cdbd0d47eec0554/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 15:19:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60e4b2b7aea6f0f24855807892766b287b566652de322b9f6cdbd0d47eec0554/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 15:19:12 compute-0 podman[366242]: 2026-01-20 15:19:12.377322628 +0000 UTC m=+0.165120466 container init 2d91125fdbf11101f763f63ac737e5513a5e70e4c9547cc55439153e00690c71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_hugle, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 15:19:12 compute-0 podman[366242]: 2026-01-20 15:19:12.385624812 +0000 UTC m=+0.173422690 container start 2d91125fdbf11101f763f63ac737e5513a5e70e4c9547cc55439153e00690c71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_hugle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 15:19:12 compute-0 podman[366242]: 2026-01-20 15:19:12.389996279 +0000 UTC m=+0.177794137 container attach 2d91125fdbf11101f763f63ac737e5513a5e70e4c9547cc55439153e00690c71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_hugle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 20 15:19:12 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:19:12 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:19:12 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:19:12.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:19:12 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2908: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 268 KiB/s rd, 654 KiB/s wr, 75 op/s
Jan 20 15:19:12 compute-0 ceph-mon[74360]: pgmap v2908: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 268 KiB/s rd, 654 KiB/s wr, 75 op/s
Jan 20 15:19:12 compute-0 nova_compute[250018]: 2026-01-20 15:19:12.947 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:19:13 compute-0 eloquent_hugle[366259]: --> passed data devices: 0 physical, 1 LVM
Jan 20 15:19:13 compute-0 eloquent_hugle[366259]: --> relative data size: 1.0
Jan 20 15:19:13 compute-0 eloquent_hugle[366259]: --> All data devices are unavailable
Jan 20 15:19:13 compute-0 systemd[1]: libpod-2d91125fdbf11101f763f63ac737e5513a5e70e4c9547cc55439153e00690c71.scope: Deactivated successfully.
Jan 20 15:19:13 compute-0 podman[366274]: 2026-01-20 15:19:13.223080072 +0000 UTC m=+0.029988201 container died 2d91125fdbf11101f763f63ac737e5513a5e70e4c9547cc55439153e00690c71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_hugle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS)
Jan 20 15:19:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-60e4b2b7aea6f0f24855807892766b287b566652de322b9f6cdbd0d47eec0554-merged.mount: Deactivated successfully.
Jan 20 15:19:13 compute-0 podman[366274]: 2026-01-20 15:19:13.274500419 +0000 UTC m=+0.081408528 container remove 2d91125fdbf11101f763f63ac737e5513a5e70e4c9547cc55439153e00690c71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_hugle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 20 15:19:13 compute-0 systemd[1]: libpod-conmon-2d91125fdbf11101f763f63ac737e5513a5e70e4c9547cc55439153e00690c71.scope: Deactivated successfully.
Jan 20 15:19:13 compute-0 sudo[366136]: pam_unix(sudo:session): session closed for user root
Jan 20 15:19:13 compute-0 sudo[366289]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:19:13 compute-0 sudo[366289]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:19:13 compute-0 sudo[366289]: pam_unix(sudo:session): session closed for user root
Jan 20 15:19:13 compute-0 sudo[366314]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:19:13 compute-0 sudo[366314]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:19:13 compute-0 sudo[366314]: pam_unix(sudo:session): session closed for user root
Jan 20 15:19:13 compute-0 sudo[366339]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:19:13 compute-0 sudo[366339]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:19:13 compute-0 sudo[366339]: pam_unix(sudo:session): session closed for user root
Jan 20 15:19:13 compute-0 sudo[366364]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- lvm list --format json
Jan 20 15:19:13 compute-0 sudo[366364]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:19:13 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 20 15:19:13 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:19:13 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2789919308' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:19:13 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 20 15:19:13 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2789919308' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:19:13 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:19:13 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:19:13.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:19:13 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/2789919308' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:19:13 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/2789919308' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:19:13 compute-0 podman[366430]: 2026-01-20 15:19:13.875600944 +0000 UTC m=+0.023409103 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:19:13 compute-0 podman[366430]: 2026-01-20 15:19:13.975075597 +0000 UTC m=+0.122883736 container create 9de1615ae413df6fac18afaa80e3fc0e23d746ae6ca7d1cc06b7255cba9d741c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_hermann, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:19:14 compute-0 systemd[1]: Started libpod-conmon-9de1615ae413df6fac18afaa80e3fc0e23d746ae6ca7d1cc06b7255cba9d741c.scope.
Jan 20 15:19:14 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:19:14 compute-0 podman[366430]: 2026-01-20 15:19:14.273782464 +0000 UTC m=+0.421590623 container init 9de1615ae413df6fac18afaa80e3fc0e23d746ae6ca7d1cc06b7255cba9d741c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_hermann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 20 15:19:14 compute-0 podman[366430]: 2026-01-20 15:19:14.284624407 +0000 UTC m=+0.432432546 container start 9de1615ae413df6fac18afaa80e3fc0e23d746ae6ca7d1cc06b7255cba9d741c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_hermann, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 15:19:14 compute-0 busy_hermann[366446]: 167 167
Jan 20 15:19:14 compute-0 systemd[1]: libpod-9de1615ae413df6fac18afaa80e3fc0e23d746ae6ca7d1cc06b7255cba9d741c.scope: Deactivated successfully.
Jan 20 15:19:14 compute-0 podman[366430]: 2026-01-20 15:19:14.395478827 +0000 UTC m=+0.543286986 container attach 9de1615ae413df6fac18afaa80e3fc0e23d746ae6ca7d1cc06b7255cba9d741c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_hermann, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 15:19:14 compute-0 podman[366430]: 2026-01-20 15:19:14.395894769 +0000 UTC m=+0.543702908 container died 9de1615ae413df6fac18afaa80e3fc0e23d746ae6ca7d1cc06b7255cba9d741c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_hermann, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 15:19:14 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:19:14 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:19:14 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:19:14.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:19:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-38bb7c559bc0239d70f38ea088d0d0dd4481b21ecef22019c617604a6d1b6450-merged.mount: Deactivated successfully.
Jan 20 15:19:14 compute-0 podman[366430]: 2026-01-20 15:19:14.591472954 +0000 UTC m=+0.739281103 container remove 9de1615ae413df6fac18afaa80e3fc0e23d746ae6ca7d1cc06b7255cba9d741c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_hermann, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 20 15:19:14 compute-0 systemd[1]: libpod-conmon-9de1615ae413df6fac18afaa80e3fc0e23d746ae6ca7d1cc06b7255cba9d741c.scope: Deactivated successfully.
Jan 20 15:19:14 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2909: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 24 KiB/s rd, 24 KiB/s wr, 32 op/s
Jan 20 15:19:14 compute-0 podman[366470]: 2026-01-20 15:19:14.75407718 +0000 UTC m=+0.044618444 container create 84afaa97732e675f3aa0648a1175f9703db54893b308ab1330c20d3dc1bb2987 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_pike, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 15:19:14 compute-0 systemd[1]: Started libpod-conmon-84afaa97732e675f3aa0648a1175f9703db54893b308ab1330c20d3dc1bb2987.scope.
Jan 20 15:19:14 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:19:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30c770139408b967d03cd5d66016fc4139adc7545e727756a0730789a431369d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 15:19:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30c770139408b967d03cd5d66016fc4139adc7545e727756a0730789a431369d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 15:19:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30c770139408b967d03cd5d66016fc4139adc7545e727756a0730789a431369d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 15:19:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30c770139408b967d03cd5d66016fc4139adc7545e727756a0730789a431369d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 15:19:14 compute-0 ceph-mon[74360]: pgmap v2909: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 24 KiB/s rd, 24 KiB/s wr, 32 op/s
Jan 20 15:19:14 compute-0 podman[366470]: 2026-01-20 15:19:14.733674821 +0000 UTC m=+0.024216135 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:19:14 compute-0 podman[366470]: 2026-01-20 15:19:14.827731678 +0000 UTC m=+0.118272962 container init 84afaa97732e675f3aa0648a1175f9703db54893b308ab1330c20d3dc1bb2987 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_pike, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 15:19:14 compute-0 podman[366470]: 2026-01-20 15:19:14.835152287 +0000 UTC m=+0.125693551 container start 84afaa97732e675f3aa0648a1175f9703db54893b308ab1330c20d3dc1bb2987 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_pike, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 20 15:19:14 compute-0 podman[366470]: 2026-01-20 15:19:14.838423646 +0000 UTC m=+0.128964910 container attach 84afaa97732e675f3aa0648a1175f9703db54893b308ab1330c20d3dc1bb2987 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_pike, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 15:19:15 compute-0 distracted_pike[366486]: {
Jan 20 15:19:15 compute-0 distracted_pike[366486]:     "0": [
Jan 20 15:19:15 compute-0 distracted_pike[366486]:         {
Jan 20 15:19:15 compute-0 distracted_pike[366486]:             "devices": [
Jan 20 15:19:15 compute-0 distracted_pike[366486]:                 "/dev/loop3"
Jan 20 15:19:15 compute-0 distracted_pike[366486]:             ],
Jan 20 15:19:15 compute-0 distracted_pike[366486]:             "lv_name": "ceph_lv0",
Jan 20 15:19:15 compute-0 distracted_pike[366486]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 15:19:15 compute-0 distracted_pike[366486]:             "lv_size": "7511998464",
Jan 20 15:19:15 compute-0 distracted_pike[366486]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e399cf45-e6b6-5393-99f1-75c601d3f188,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afc3740a-ccee-46f8-b343-22648cc89376,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 20 15:19:15 compute-0 distracted_pike[366486]:             "lv_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 15:19:15 compute-0 distracted_pike[366486]:             "name": "ceph_lv0",
Jan 20 15:19:15 compute-0 distracted_pike[366486]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 15:19:15 compute-0 distracted_pike[366486]:             "tags": {
Jan 20 15:19:15 compute-0 distracted_pike[366486]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 15:19:15 compute-0 distracted_pike[366486]:                 "ceph.block_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 15:19:15 compute-0 distracted_pike[366486]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 15:19:15 compute-0 distracted_pike[366486]:                 "ceph.cluster_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 15:19:15 compute-0 distracted_pike[366486]:                 "ceph.cluster_name": "ceph",
Jan 20 15:19:15 compute-0 distracted_pike[366486]:                 "ceph.crush_device_class": "",
Jan 20 15:19:15 compute-0 distracted_pike[366486]:                 "ceph.encrypted": "0",
Jan 20 15:19:15 compute-0 distracted_pike[366486]:                 "ceph.osd_fsid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 15:19:15 compute-0 distracted_pike[366486]:                 "ceph.osd_id": "0",
Jan 20 15:19:15 compute-0 distracted_pike[366486]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 15:19:15 compute-0 distracted_pike[366486]:                 "ceph.type": "block",
Jan 20 15:19:15 compute-0 distracted_pike[366486]:                 "ceph.vdo": "0"
Jan 20 15:19:15 compute-0 distracted_pike[366486]:             },
Jan 20 15:19:15 compute-0 distracted_pike[366486]:             "type": "block",
Jan 20 15:19:15 compute-0 distracted_pike[366486]:             "vg_name": "ceph_vg0"
Jan 20 15:19:15 compute-0 distracted_pike[366486]:         }
Jan 20 15:19:15 compute-0 distracted_pike[366486]:     ]
Jan 20 15:19:15 compute-0 distracted_pike[366486]: }
Jan 20 15:19:15 compute-0 systemd[1]: libpod-84afaa97732e675f3aa0648a1175f9703db54893b308ab1330c20d3dc1bb2987.scope: Deactivated successfully.
Jan 20 15:19:15 compute-0 podman[366470]: 2026-01-20 15:19:15.555269233 +0000 UTC m=+0.845810537 container died 84afaa97732e675f3aa0648a1175f9703db54893b308ab1330c20d3dc1bb2987 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_pike, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 20 15:19:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-30c770139408b967d03cd5d66016fc4139adc7545e727756a0730789a431369d-merged.mount: Deactivated successfully.
Jan 20 15:19:15 compute-0 nova_compute[250018]: 2026-01-20 15:19:15.597 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:19:15 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:19:15 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:19:15 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:19:15.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:19:15 compute-0 podman[366470]: 2026-01-20 15:19:15.614296515 +0000 UTC m=+0.904837779 container remove 84afaa97732e675f3aa0648a1175f9703db54893b308ab1330c20d3dc1bb2987 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_pike, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True)
Jan 20 15:19:15 compute-0 systemd[1]: libpod-conmon-84afaa97732e675f3aa0648a1175f9703db54893b308ab1330c20d3dc1bb2987.scope: Deactivated successfully.
Jan 20 15:19:15 compute-0 sudo[366364]: pam_unix(sudo:session): session closed for user root
Jan 20 15:19:15 compute-0 sudo[366509]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:19:15 compute-0 sudo[366509]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:19:15 compute-0 sudo[366509]: pam_unix(sudo:session): session closed for user root
Jan 20 15:19:15 compute-0 sudo[366534]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:19:15 compute-0 sudo[366534]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:19:15 compute-0 sudo[366534]: pam_unix(sudo:session): session closed for user root
Jan 20 15:19:15 compute-0 sudo[366559]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:19:15 compute-0 sudo[366559]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:19:15 compute-0 sudo[366559]: pam_unix(sudo:session): session closed for user root
Jan 20 15:19:15 compute-0 sudo[366584]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- raw list --format json
Jan 20 15:19:15 compute-0 sudo[366584]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:19:16 compute-0 nova_compute[250018]: 2026-01-20 15:19:16.045 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:19:16 compute-0 podman[366646]: 2026-01-20 15:19:16.216392367 +0000 UTC m=+0.037301878 container create 1ff1f24372322d26e5e3861e7a778d5c80e853998e760dd1e8c69e6ff41d3d99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_chatelet, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3)
Jan 20 15:19:16 compute-0 systemd[1]: Started libpod-conmon-1ff1f24372322d26e5e3861e7a778d5c80e853998e760dd1e8c69e6ff41d3d99.scope.
Jan 20 15:19:16 compute-0 podman[366646]: 2026-01-20 15:19:16.200540439 +0000 UTC m=+0.021449970 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:19:16 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:19:16 compute-0 podman[366646]: 2026-01-20 15:19:16.315196722 +0000 UTC m=+0.136106263 container init 1ff1f24372322d26e5e3861e7a778d5c80e853998e760dd1e8c69e6ff41d3d99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_chatelet, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 20 15:19:16 compute-0 podman[366646]: 2026-01-20 15:19:16.32289713 +0000 UTC m=+0.143806641 container start 1ff1f24372322d26e5e3861e7a778d5c80e853998e760dd1e8c69e6ff41d3d99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_chatelet, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 15:19:16 compute-0 podman[366646]: 2026-01-20 15:19:16.325936982 +0000 UTC m=+0.146846493 container attach 1ff1f24372322d26e5e3861e7a778d5c80e853998e760dd1e8c69e6ff41d3d99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_chatelet, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 20 15:19:16 compute-0 angry_chatelet[366664]: 167 167
Jan 20 15:19:16 compute-0 systemd[1]: libpod-1ff1f24372322d26e5e3861e7a778d5c80e853998e760dd1e8c69e6ff41d3d99.scope: Deactivated successfully.
Jan 20 15:19:16 compute-0 podman[366646]: 2026-01-20 15:19:16.3303388 +0000 UTC m=+0.151248311 container died 1ff1f24372322d26e5e3861e7a778d5c80e853998e760dd1e8c69e6ff41d3d99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_chatelet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3)
Jan 20 15:19:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-72fe37b4a0120d0f4c3f594c15c58f52e7e7f9ea1ec7fdd59c992a65c00cc573-merged.mount: Deactivated successfully.
Jan 20 15:19:16 compute-0 podman[366646]: 2026-01-20 15:19:16.365663583 +0000 UTC m=+0.186573094 container remove 1ff1f24372322d26e5e3861e7a778d5c80e853998e760dd1e8c69e6ff41d3d99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_chatelet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:19:16 compute-0 systemd[1]: libpod-conmon-1ff1f24372322d26e5e3861e7a778d5c80e853998e760dd1e8c69e6ff41d3d99.scope: Deactivated successfully.
Jan 20 15:19:16 compute-0 podman[366688]: 2026-01-20 15:19:16.510896141 +0000 UTC m=+0.035339644 container create 1a4e2fec834b5c9b9a41b81072be83955cdc62681696ea8c5f6bb4c9f8ff0ab1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_hertz, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 20 15:19:16 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:19:16 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:19:16 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:19:16.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:19:16 compute-0 systemd[1]: Started libpod-conmon-1a4e2fec834b5c9b9a41b81072be83955cdc62681696ea8c5f6bb4c9f8ff0ab1.scope.
Jan 20 15:19:16 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:19:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1ddfff5a30664caac755fba45997a7691e1459662d60c076d5b94500685cd1e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 15:19:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1ddfff5a30664caac755fba45997a7691e1459662d60c076d5b94500685cd1e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 15:19:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1ddfff5a30664caac755fba45997a7691e1459662d60c076d5b94500685cd1e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 15:19:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1ddfff5a30664caac755fba45997a7691e1459662d60c076d5b94500685cd1e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 15:19:16 compute-0 podman[366688]: 2026-01-20 15:19:16.590871198 +0000 UTC m=+0.115314741 container init 1a4e2fec834b5c9b9a41b81072be83955cdc62681696ea8c5f6bb4c9f8ff0ab1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_hertz, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 15:19:16 compute-0 podman[366688]: 2026-01-20 15:19:16.495643129 +0000 UTC m=+0.020086662 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:19:16 compute-0 podman[366688]: 2026-01-20 15:19:16.599232704 +0000 UTC m=+0.123676217 container start 1a4e2fec834b5c9b9a41b81072be83955cdc62681696ea8c5f6bb4c9f8ff0ab1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_hertz, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 15:19:16 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2910: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 24 KiB/s rd, 26 KiB/s wr, 32 op/s
Jan 20 15:19:16 compute-0 podman[366688]: 2026-01-20 15:19:16.808633183 +0000 UTC m=+0.333076716 container attach 1a4e2fec834b5c9b9a41b81072be83955cdc62681696ea8c5f6bb4c9f8ff0ab1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_hertz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 20 15:19:16 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:19:17 compute-0 nova_compute[250018]: 2026-01-20 15:19:17.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:19:17 compute-0 nova_compute[250018]: 2026-01-20 15:19:17.052 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:19:17 compute-0 nova_compute[250018]: 2026-01-20 15:19:17.052 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 20 15:19:17 compute-0 ceph-mon[74360]: pgmap v2910: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 24 KiB/s rd, 26 KiB/s wr, 32 op/s
Jan 20 15:19:17 compute-0 flamboyant_hertz[366704]: {
Jan 20 15:19:17 compute-0 flamboyant_hertz[366704]:     "afc3740a-ccee-46f8-b343-22648cc89376": {
Jan 20 15:19:17 compute-0 flamboyant_hertz[366704]:         "ceph_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 15:19:17 compute-0 flamboyant_hertz[366704]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 20 15:19:17 compute-0 flamboyant_hertz[366704]:         "osd_id": 0,
Jan 20 15:19:17 compute-0 flamboyant_hertz[366704]:         "osd_uuid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 15:19:17 compute-0 flamboyant_hertz[366704]:         "type": "bluestore"
Jan 20 15:19:17 compute-0 flamboyant_hertz[366704]:     }
Jan 20 15:19:17 compute-0 flamboyant_hertz[366704]: }
Jan 20 15:19:17 compute-0 systemd[1]: libpod-1a4e2fec834b5c9b9a41b81072be83955cdc62681696ea8c5f6bb4c9f8ff0ab1.scope: Deactivated successfully.
Jan 20 15:19:17 compute-0 podman[366688]: 2026-01-20 15:19:17.423706474 +0000 UTC m=+0.948150007 container died 1a4e2fec834b5c9b9a41b81072be83955cdc62681696ea8c5f6bb4c9f8ff0ab1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_hertz, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 20 15:19:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-d1ddfff5a30664caac755fba45997a7691e1459662d60c076d5b94500685cd1e-merged.mount: Deactivated successfully.
Jan 20 15:19:17 compute-0 podman[366688]: 2026-01-20 15:19:17.474100264 +0000 UTC m=+0.998543777 container remove 1a4e2fec834b5c9b9a41b81072be83955cdc62681696ea8c5f6bb4c9f8ff0ab1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_hertz, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 20 15:19:17 compute-0 systemd[1]: libpod-conmon-1a4e2fec834b5c9b9a41b81072be83955cdc62681696ea8c5f6bb4c9f8ff0ab1.scope: Deactivated successfully.
Jan 20 15:19:17 compute-0 sudo[366584]: pam_unix(sudo:session): session closed for user root
Jan 20 15:19:17 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 15:19:17 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:19:17 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 15:19:17 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:19:17 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 058c3dd8-db22-45f0-9f3f-6c2bf41dd05f does not exist
Jan 20 15:19:17 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 798d4f43-65e3-451f-a03c-c547432b8b2e does not exist
Jan 20 15:19:17 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 9fd24ed3-4110-4d34-81da-cd21f3b539ef does not exist
Jan 20 15:19:17 compute-0 sudo[366739]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:19:17 compute-0 sudo[366739]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:19:17 compute-0 sudo[366739]: pam_unix(sudo:session): session closed for user root
Jan 20 15:19:17 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:19:17 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:19:17 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:19:17.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:19:17 compute-0 sudo[366764]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 15:19:17 compute-0 sudo[366764]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:19:17 compute-0 sudo[366764]: pam_unix(sudo:session): session closed for user root
Jan 20 15:19:17 compute-0 nova_compute[250018]: 2026-01-20 15:19:17.950 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:19:18 compute-0 nova_compute[250018]: 2026-01-20 15:19:18.441 250022 DEBUG oslo_concurrency.lockutils [None req-a1353496-d6f8-452b-bfe1-42a2dda56c93 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Acquiring lock "3fe66973-cc26-4136-af61-3a600ec10948" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:19:18 compute-0 nova_compute[250018]: 2026-01-20 15:19:18.442 250022 DEBUG oslo_concurrency.lockutils [None req-a1353496-d6f8-452b-bfe1-42a2dda56c93 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Lock "3fe66973-cc26-4136-af61-3a600ec10948" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:19:18 compute-0 nova_compute[250018]: 2026-01-20 15:19:18.470 250022 DEBUG nova.compute.manager [None req-a1353496-d6f8-452b-bfe1-42a2dda56c93 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] [instance: 3fe66973-cc26-4136-af61-3a600ec10948] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 15:19:18 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:19:18 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:19:18 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:19:18 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:19:18 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:19:18.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:19:18 compute-0 nova_compute[250018]: 2026-01-20 15:19:18.561 250022 DEBUG oslo_concurrency.lockutils [None req-a1353496-d6f8-452b-bfe1-42a2dda56c93 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:19:18 compute-0 nova_compute[250018]: 2026-01-20 15:19:18.562 250022 DEBUG oslo_concurrency.lockutils [None req-a1353496-d6f8-452b-bfe1-42a2dda56c93 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:19:18 compute-0 nova_compute[250018]: 2026-01-20 15:19:18.568 250022 DEBUG nova.virt.hardware [None req-a1353496-d6f8-452b-bfe1-42a2dda56c93 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 15:19:18 compute-0 nova_compute[250018]: 2026-01-20 15:19:18.568 250022 INFO nova.compute.claims [None req-a1353496-d6f8-452b-bfe1-42a2dda56c93 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] [instance: 3fe66973-cc26-4136-af61-3a600ec10948] Claim successful on node compute-0.ctlplane.example.com
Jan 20 15:19:18 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2911: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 23 KiB/s rd, 2.9 KiB/s wr, 30 op/s
Jan 20 15:19:18 compute-0 nova_compute[250018]: 2026-01-20 15:19:18.689 250022 DEBUG oslo_concurrency.processutils [None req-a1353496-d6f8-452b-bfe1-42a2dda56c93 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:19:19 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:19:19 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2879300125' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:19:19 compute-0 nova_compute[250018]: 2026-01-20 15:19:19.116 250022 DEBUG oslo_concurrency.processutils [None req-a1353496-d6f8-452b-bfe1-42a2dda56c93 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:19:19 compute-0 nova_compute[250018]: 2026-01-20 15:19:19.123 250022 DEBUG nova.compute.provider_tree [None req-a1353496-d6f8-452b-bfe1-42a2dda56c93 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:19:19 compute-0 nova_compute[250018]: 2026-01-20 15:19:19.338 250022 DEBUG nova.scheduler.client.report [None req-a1353496-d6f8-452b-bfe1-42a2dda56c93 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:19:19 compute-0 nova_compute[250018]: 2026-01-20 15:19:19.359 250022 DEBUG oslo_concurrency.lockutils [None req-a1353496-d6f8-452b-bfe1-42a2dda56c93 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.797s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:19:19 compute-0 nova_compute[250018]: 2026-01-20 15:19:19.360 250022 DEBUG nova.compute.manager [None req-a1353496-d6f8-452b-bfe1-42a2dda56c93 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] [instance: 3fe66973-cc26-4136-af61-3a600ec10948] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 15:19:19 compute-0 nova_compute[250018]: 2026-01-20 15:19:19.456 250022 DEBUG nova.compute.manager [None req-a1353496-d6f8-452b-bfe1-42a2dda56c93 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] [instance: 3fe66973-cc26-4136-af61-3a600ec10948] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 20 15:19:19 compute-0 nova_compute[250018]: 2026-01-20 15:19:19.457 250022 DEBUG nova.network.neutron [None req-a1353496-d6f8-452b-bfe1-42a2dda56c93 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] [instance: 3fe66973-cc26-4136-af61-3a600ec10948] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 15:19:19 compute-0 nova_compute[250018]: 2026-01-20 15:19:19.477 250022 INFO nova.virt.libvirt.driver [None req-a1353496-d6f8-452b-bfe1-42a2dda56c93 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] [instance: 3fe66973-cc26-4136-af61-3a600ec10948] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 15:19:19 compute-0 nova_compute[250018]: 2026-01-20 15:19:19.514 250022 DEBUG nova.compute.manager [None req-a1353496-d6f8-452b-bfe1-42a2dda56c93 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] [instance: 3fe66973-cc26-4136-af61-3a600ec10948] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 15:19:19 compute-0 ceph-mon[74360]: pgmap v2911: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 23 KiB/s rd, 2.9 KiB/s wr, 30 op/s
Jan 20 15:19:19 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/2879300125' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:19:19 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:19:19 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:19:19 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:19:19.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:19:19 compute-0 nova_compute[250018]: 2026-01-20 15:19:19.623 250022 DEBUG nova.compute.manager [None req-a1353496-d6f8-452b-bfe1-42a2dda56c93 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] [instance: 3fe66973-cc26-4136-af61-3a600ec10948] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 15:19:19 compute-0 nova_compute[250018]: 2026-01-20 15:19:19.624 250022 DEBUG nova.virt.libvirt.driver [None req-a1353496-d6f8-452b-bfe1-42a2dda56c93 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] [instance: 3fe66973-cc26-4136-af61-3a600ec10948] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 15:19:19 compute-0 nova_compute[250018]: 2026-01-20 15:19:19.625 250022 INFO nova.virt.libvirt.driver [None req-a1353496-d6f8-452b-bfe1-42a2dda56c93 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] [instance: 3fe66973-cc26-4136-af61-3a600ec10948] Creating image(s)
Jan 20 15:19:19 compute-0 nova_compute[250018]: 2026-01-20 15:19:19.649 250022 DEBUG nova.storage.rbd_utils [None req-a1353496-d6f8-452b-bfe1-42a2dda56c93 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] rbd image 3fe66973-cc26-4136-af61-3a600ec10948_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:19:19 compute-0 nova_compute[250018]: 2026-01-20 15:19:19.675 250022 DEBUG nova.storage.rbd_utils [None req-a1353496-d6f8-452b-bfe1-42a2dda56c93 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] rbd image 3fe66973-cc26-4136-af61-3a600ec10948_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:19:19 compute-0 nova_compute[250018]: 2026-01-20 15:19:19.699 250022 DEBUG nova.storage.rbd_utils [None req-a1353496-d6f8-452b-bfe1-42a2dda56c93 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] rbd image 3fe66973-cc26-4136-af61-3a600ec10948_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:19:19 compute-0 nova_compute[250018]: 2026-01-20 15:19:19.703 250022 DEBUG oslo_concurrency.processutils [None req-a1353496-d6f8-452b-bfe1-42a2dda56c93 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:19:19 compute-0 nova_compute[250018]: 2026-01-20 15:19:19.772 250022 DEBUG oslo_concurrency.processutils [None req-a1353496-d6f8-452b-bfe1-42a2dda56c93 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:19:19 compute-0 nova_compute[250018]: 2026-01-20 15:19:19.773 250022 DEBUG oslo_concurrency.lockutils [None req-a1353496-d6f8-452b-bfe1-42a2dda56c93 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Acquiring lock "82d5c1918fd7c974214c7a48c1793a7a82560462" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:19:19 compute-0 nova_compute[250018]: 2026-01-20 15:19:19.773 250022 DEBUG oslo_concurrency.lockutils [None req-a1353496-d6f8-452b-bfe1-42a2dda56c93 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:19:19 compute-0 nova_compute[250018]: 2026-01-20 15:19:19.774 250022 DEBUG oslo_concurrency.lockutils [None req-a1353496-d6f8-452b-bfe1-42a2dda56c93 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:19:19 compute-0 nova_compute[250018]: 2026-01-20 15:19:19.800 250022 DEBUG nova.storage.rbd_utils [None req-a1353496-d6f8-452b-bfe1-42a2dda56c93 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] rbd image 3fe66973-cc26-4136-af61-3a600ec10948_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:19:19 compute-0 nova_compute[250018]: 2026-01-20 15:19:19.804 250022 DEBUG oslo_concurrency.processutils [None req-a1353496-d6f8-452b-bfe1-42a2dda56c93 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 3fe66973-cc26-4136-af61-3a600ec10948_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:19:19 compute-0 nova_compute[250018]: 2026-01-20 15:19:19.985 250022 DEBUG nova.policy [None req-a1353496-d6f8-452b-bfe1-42a2dda56c93 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'cd9a8f26b71f4631a387e555e6b18428', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '9156c0a9920c4721843416b9a44404f9', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 20 15:19:20 compute-0 nova_compute[250018]: 2026-01-20 15:19:20.250 250022 DEBUG oslo_concurrency.processutils [None req-a1353496-d6f8-452b-bfe1-42a2dda56c93 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 3fe66973-cc26-4136-af61-3a600ec10948_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:19:20 compute-0 nova_compute[250018]: 2026-01-20 15:19:20.323 250022 DEBUG nova.storage.rbd_utils [None req-a1353496-d6f8-452b-bfe1-42a2dda56c93 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] resizing rbd image 3fe66973-cc26-4136-af61-3a600ec10948_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 20 15:19:20 compute-0 nova_compute[250018]: 2026-01-20 15:19:20.431 250022 DEBUG nova.objects.instance [None req-a1353496-d6f8-452b-bfe1-42a2dda56c93 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Lazy-loading 'migration_context' on Instance uuid 3fe66973-cc26-4136-af61-3a600ec10948 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:19:20 compute-0 nova_compute[250018]: 2026-01-20 15:19:20.448 250022 DEBUG nova.virt.libvirt.driver [None req-a1353496-d6f8-452b-bfe1-42a2dda56c93 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] [instance: 3fe66973-cc26-4136-af61-3a600ec10948] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 15:19:20 compute-0 nova_compute[250018]: 2026-01-20 15:19:20.449 250022 DEBUG nova.virt.libvirt.driver [None req-a1353496-d6f8-452b-bfe1-42a2dda56c93 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] [instance: 3fe66973-cc26-4136-af61-3a600ec10948] Ensure instance console log exists: /var/lib/nova/instances/3fe66973-cc26-4136-af61-3a600ec10948/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 15:19:20 compute-0 nova_compute[250018]: 2026-01-20 15:19:20.449 250022 DEBUG oslo_concurrency.lockutils [None req-a1353496-d6f8-452b-bfe1-42a2dda56c93 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:19:20 compute-0 nova_compute[250018]: 2026-01-20 15:19:20.450 250022 DEBUG oslo_concurrency.lockutils [None req-a1353496-d6f8-452b-bfe1-42a2dda56c93 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:19:20 compute-0 nova_compute[250018]: 2026-01-20 15:19:20.450 250022 DEBUG oslo_concurrency.lockutils [None req-a1353496-d6f8-452b-bfe1-42a2dda56c93 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:19:20 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:19:20 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:19:20 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:19:20.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:19:20 compute-0 nova_compute[250018]: 2026-01-20 15:19:20.600 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:19:20 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2912: 321 pgs: 321 active+clean; 214 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 23 KiB/s rd, 699 KiB/s wr, 33 op/s
Jan 20 15:19:20 compute-0 ceph-mon[74360]: pgmap v2912: 321 pgs: 321 active+clean; 214 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 23 KiB/s rd, 699 KiB/s wr, 33 op/s
Jan 20 15:19:21 compute-0 nova_compute[250018]: 2026-01-20 15:19:21.188 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:19:21 compute-0 nova_compute[250018]: 2026-01-20 15:19:21.189 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 15:19:21 compute-0 nova_compute[250018]: 2026-01-20 15:19:21.189 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 15:19:21 compute-0 nova_compute[250018]: 2026-01-20 15:19:21.222 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] [instance: 3fe66973-cc26-4136-af61-3a600ec10948] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 20 15:19:21 compute-0 nova_compute[250018]: 2026-01-20 15:19:21.223 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 20 15:19:21 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:19:21 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:19:21 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:19:21.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:19:21 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:19:21 compute-0 nova_compute[250018]: 2026-01-20 15:19:21.941 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:19:22 compute-0 nova_compute[250018]: 2026-01-20 15:19:22.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:19:22 compute-0 nova_compute[250018]: 2026-01-20 15:19:22.255 250022 DEBUG nova.network.neutron [None req-a1353496-d6f8-452b-bfe1-42a2dda56c93 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] [instance: 3fe66973-cc26-4136-af61-3a600ec10948] Successfully created port: 1ef1abd1-3781-489f-a588-0ddbcb6f4192 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 20 15:19:22 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:19:22 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:19:22 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:19:22.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:19:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:19:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:19:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:19:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:19:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:19:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:19:22 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2913: 321 pgs: 321 active+clean; 221 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 20 KiB/s rd, 948 KiB/s wr, 27 op/s
Jan 20 15:19:22 compute-0 nova_compute[250018]: 2026-01-20 15:19:22.951 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:19:22 compute-0 ceph-mon[74360]: pgmap v2913: 321 pgs: 321 active+clean; 221 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 20 KiB/s rd, 948 KiB/s wr, 27 op/s
Jan 20 15:19:23 compute-0 sudo[366980]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:19:23 compute-0 sudo[366980]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:19:23 compute-0 sudo[366980]: pam_unix(sudo:session): session closed for user root
Jan 20 15:19:23 compute-0 sudo[367005]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:19:23 compute-0 sudo[367005]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:19:23 compute-0 sudo[367005]: pam_unix(sudo:session): session closed for user root
Jan 20 15:19:23 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:19:23 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:19:23 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:19:23.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:19:23 compute-0 nova_compute[250018]: 2026-01-20 15:19:23.994 250022 DEBUG nova.network.neutron [None req-a1353496-d6f8-452b-bfe1-42a2dda56c93 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] [instance: 3fe66973-cc26-4136-af61-3a600ec10948] Successfully updated port: 1ef1abd1-3781-489f-a588-0ddbcb6f4192 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 20 15:19:24 compute-0 nova_compute[250018]: 2026-01-20 15:19:24.032 250022 DEBUG oslo_concurrency.lockutils [None req-a1353496-d6f8-452b-bfe1-42a2dda56c93 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Acquiring lock "refresh_cache-3fe66973-cc26-4136-af61-3a600ec10948" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:19:24 compute-0 nova_compute[250018]: 2026-01-20 15:19:24.032 250022 DEBUG oslo_concurrency.lockutils [None req-a1353496-d6f8-452b-bfe1-42a2dda56c93 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Acquired lock "refresh_cache-3fe66973-cc26-4136-af61-3a600ec10948" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:19:24 compute-0 nova_compute[250018]: 2026-01-20 15:19:24.033 250022 DEBUG nova.network.neutron [None req-a1353496-d6f8-452b-bfe1-42a2dda56c93 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] [instance: 3fe66973-cc26-4136-af61-3a600ec10948] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 15:19:24 compute-0 nova_compute[250018]: 2026-01-20 15:19:24.297 250022 DEBUG nova.compute.manager [req-5674bad6-ee40-476a-bbeb-c08ed8bda35c req-5fb63fc8-c9e0-466b-95e4-8d0eaf3d193e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 3fe66973-cc26-4136-af61-3a600ec10948] Received event network-changed-1ef1abd1-3781-489f-a588-0ddbcb6f4192 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:19:24 compute-0 nova_compute[250018]: 2026-01-20 15:19:24.297 250022 DEBUG nova.compute.manager [req-5674bad6-ee40-476a-bbeb-c08ed8bda35c req-5fb63fc8-c9e0-466b-95e4-8d0eaf3d193e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 3fe66973-cc26-4136-af61-3a600ec10948] Refreshing instance network info cache due to event network-changed-1ef1abd1-3781-489f-a588-0ddbcb6f4192. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 15:19:24 compute-0 nova_compute[250018]: 2026-01-20 15:19:24.297 250022 DEBUG oslo_concurrency.lockutils [req-5674bad6-ee40-476a-bbeb-c08ed8bda35c req-5fb63fc8-c9e0-466b-95e4-8d0eaf3d193e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-3fe66973-cc26-4136-af61-3a600ec10948" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:19:24 compute-0 nova_compute[250018]: 2026-01-20 15:19:24.469 250022 DEBUG nova.network.neutron [None req-a1353496-d6f8-452b-bfe1-42a2dda56c93 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] [instance: 3fe66973-cc26-4136-af61-3a600ec10948] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 15:19:24 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:19:24 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:19:24 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:19:24.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:19:24 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2914: 321 pgs: 321 active+clean; 241 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.3 MiB/s wr, 26 op/s
Jan 20 15:19:25 compute-0 ceph-mon[74360]: pgmap v2914: 321 pgs: 321 active+clean; 241 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.3 MiB/s wr, 26 op/s
Jan 20 15:19:25 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:19:25 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:19:25 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:19:25.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:19:25 compute-0 nova_compute[250018]: 2026-01-20 15:19:25.643 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:19:26 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:19:26 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:19:26 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:19:26.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:19:26 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2915: 321 pgs: 321 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:19:26 compute-0 ceph-mon[74360]: pgmap v2915: 321 pgs: 321 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:19:26 compute-0 nova_compute[250018]: 2026-01-20 15:19:26.905 250022 DEBUG nova.network.neutron [None req-a1353496-d6f8-452b-bfe1-42a2dda56c93 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] [instance: 3fe66973-cc26-4136-af61-3a600ec10948] Updating instance_info_cache with network_info: [{"id": "1ef1abd1-3781-489f-a588-0ddbcb6f4192", "address": "fa:16:3e:a1:4e:99", "network": {"id": "76c2d716-7d14-4bc1-b83b-a3290ee99d9a", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-782760714-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9156c0a9920c4721843416b9a44404f9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ef1abd1-37", "ovs_interfaceid": "1ef1abd1-3781-489f-a588-0ddbcb6f4192", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:19:26 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:19:26 compute-0 nova_compute[250018]: 2026-01-20 15:19:26.960 250022 DEBUG oslo_concurrency.lockutils [None req-a1353496-d6f8-452b-bfe1-42a2dda56c93 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Releasing lock "refresh_cache-3fe66973-cc26-4136-af61-3a600ec10948" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:19:26 compute-0 nova_compute[250018]: 2026-01-20 15:19:26.960 250022 DEBUG nova.compute.manager [None req-a1353496-d6f8-452b-bfe1-42a2dda56c93 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] [instance: 3fe66973-cc26-4136-af61-3a600ec10948] Instance network_info: |[{"id": "1ef1abd1-3781-489f-a588-0ddbcb6f4192", "address": "fa:16:3e:a1:4e:99", "network": {"id": "76c2d716-7d14-4bc1-b83b-a3290ee99d9a", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-782760714-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9156c0a9920c4721843416b9a44404f9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ef1abd1-37", "ovs_interfaceid": "1ef1abd1-3781-489f-a588-0ddbcb6f4192", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 20 15:19:26 compute-0 nova_compute[250018]: 2026-01-20 15:19:26.961 250022 DEBUG oslo_concurrency.lockutils [req-5674bad6-ee40-476a-bbeb-c08ed8bda35c req-5fb63fc8-c9e0-466b-95e4-8d0eaf3d193e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-3fe66973-cc26-4136-af61-3a600ec10948" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:19:26 compute-0 nova_compute[250018]: 2026-01-20 15:19:26.961 250022 DEBUG nova.network.neutron [req-5674bad6-ee40-476a-bbeb-c08ed8bda35c req-5fb63fc8-c9e0-466b-95e4-8d0eaf3d193e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 3fe66973-cc26-4136-af61-3a600ec10948] Refreshing network info cache for port 1ef1abd1-3781-489f-a588-0ddbcb6f4192 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 15:19:26 compute-0 nova_compute[250018]: 2026-01-20 15:19:26.964 250022 DEBUG nova.virt.libvirt.driver [None req-a1353496-d6f8-452b-bfe1-42a2dda56c93 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] [instance: 3fe66973-cc26-4136-af61-3a600ec10948] Start _get_guest_xml network_info=[{"id": "1ef1abd1-3781-489f-a588-0ddbcb6f4192", "address": "fa:16:3e:a1:4e:99", "network": {"id": "76c2d716-7d14-4bc1-b83b-a3290ee99d9a", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-782760714-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9156c0a9920c4721843416b9a44404f9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ef1abd1-37", "ovs_interfaceid": "1ef1abd1-3781-489f-a588-0ddbcb6f4192", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encrypted': False, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'guest_format': None, 'size': 0, 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 15:19:26 compute-0 nova_compute[250018]: 2026-01-20 15:19:26.970 250022 WARNING nova.virt.libvirt.driver [None req-a1353496-d6f8-452b-bfe1-42a2dda56c93 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:19:26 compute-0 nova_compute[250018]: 2026-01-20 15:19:26.978 250022 DEBUG nova.virt.libvirt.host [None req-a1353496-d6f8-452b-bfe1-42a2dda56c93 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 15:19:26 compute-0 nova_compute[250018]: 2026-01-20 15:19:26.979 250022 DEBUG nova.virt.libvirt.host [None req-a1353496-d6f8-452b-bfe1-42a2dda56c93 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 15:19:26 compute-0 nova_compute[250018]: 2026-01-20 15:19:26.986 250022 DEBUG nova.virt.libvirt.host [None req-a1353496-d6f8-452b-bfe1-42a2dda56c93 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 15:19:26 compute-0 nova_compute[250018]: 2026-01-20 15:19:26.986 250022 DEBUG nova.virt.libvirt.host [None req-a1353496-d6f8-452b-bfe1-42a2dda56c93 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 15:19:26 compute-0 nova_compute[250018]: 2026-01-20 15:19:26.988 250022 DEBUG nova.virt.libvirt.driver [None req-a1353496-d6f8-452b-bfe1-42a2dda56c93 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 15:19:26 compute-0 nova_compute[250018]: 2026-01-20 15:19:26.988 250022 DEBUG nova.virt.hardware [None req-a1353496-d6f8-452b-bfe1-42a2dda56c93 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 15:19:26 compute-0 nova_compute[250018]: 2026-01-20 15:19:26.989 250022 DEBUG nova.virt.hardware [None req-a1353496-d6f8-452b-bfe1-42a2dda56c93 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 15:19:26 compute-0 nova_compute[250018]: 2026-01-20 15:19:26.989 250022 DEBUG nova.virt.hardware [None req-a1353496-d6f8-452b-bfe1-42a2dda56c93 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 15:19:26 compute-0 nova_compute[250018]: 2026-01-20 15:19:26.989 250022 DEBUG nova.virt.hardware [None req-a1353496-d6f8-452b-bfe1-42a2dda56c93 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 15:19:26 compute-0 nova_compute[250018]: 2026-01-20 15:19:26.990 250022 DEBUG nova.virt.hardware [None req-a1353496-d6f8-452b-bfe1-42a2dda56c93 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 15:19:26 compute-0 nova_compute[250018]: 2026-01-20 15:19:26.990 250022 DEBUG nova.virt.hardware [None req-a1353496-d6f8-452b-bfe1-42a2dda56c93 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 15:19:26 compute-0 nova_compute[250018]: 2026-01-20 15:19:26.990 250022 DEBUG nova.virt.hardware [None req-a1353496-d6f8-452b-bfe1-42a2dda56c93 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 15:19:26 compute-0 nova_compute[250018]: 2026-01-20 15:19:26.991 250022 DEBUG nova.virt.hardware [None req-a1353496-d6f8-452b-bfe1-42a2dda56c93 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 15:19:26 compute-0 nova_compute[250018]: 2026-01-20 15:19:26.991 250022 DEBUG nova.virt.hardware [None req-a1353496-d6f8-452b-bfe1-42a2dda56c93 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 15:19:26 compute-0 nova_compute[250018]: 2026-01-20 15:19:26.991 250022 DEBUG nova.virt.hardware [None req-a1353496-d6f8-452b-bfe1-42a2dda56c93 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 15:19:26 compute-0 nova_compute[250018]: 2026-01-20 15:19:26.991 250022 DEBUG nova.virt.hardware [None req-a1353496-d6f8-452b-bfe1-42a2dda56c93 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 15:19:26 compute-0 nova_compute[250018]: 2026-01-20 15:19:26.995 250022 DEBUG oslo_concurrency.processutils [None req-a1353496-d6f8-452b-bfe1-42a2dda56c93 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:19:27 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 15:19:27 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/774464195' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:19:27 compute-0 nova_compute[250018]: 2026-01-20 15:19:27.435 250022 DEBUG oslo_concurrency.processutils [None req-a1353496-d6f8-452b-bfe1-42a2dda56c93 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:19:27 compute-0 nova_compute[250018]: 2026-01-20 15:19:27.463 250022 DEBUG nova.storage.rbd_utils [None req-a1353496-d6f8-452b-bfe1-42a2dda56c93 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] rbd image 3fe66973-cc26-4136-af61-3a600ec10948_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:19:27 compute-0 nova_compute[250018]: 2026-01-20 15:19:27.467 250022 DEBUG oslo_concurrency.processutils [None req-a1353496-d6f8-452b-bfe1-42a2dda56c93 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:19:27 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:19:27 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:19:27 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:19:27.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:19:27 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/774464195' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:19:27 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 15:19:27 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3252354878' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:19:27 compute-0 nova_compute[250018]: 2026-01-20 15:19:27.921 250022 DEBUG oslo_concurrency.processutils [None req-a1353496-d6f8-452b-bfe1-42a2dda56c93 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:19:27 compute-0 nova_compute[250018]: 2026-01-20 15:19:27.925 250022 DEBUG nova.virt.libvirt.vif [None req-a1353496-d6f8-452b-bfe1-42a2dda56c93 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T15:19:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachVolumeNegativeTest-server-998097081',display_name='tempest-AttachVolumeNegativeTest-server-998097081',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachvolumenegativetest-server-998097081',id=195,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNfaK16zx4hGL4X06Tx3ufhY5e5we/GkGDDP0xP64L4n/DaigaofrP+riSKAvoz/7obxzJJuCTTIvoKdXhAEKZ+WnHpkoCJxpFQScPA3LmqJKTG5B0+342102Iu2o5ADTw==',key_name='tempest-keypair-224036004',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9156c0a9920c4721843416b9a44404f9',ramdisk_id='',reservation_id='r-diglwi2a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachVolumeNegativeTest-1505789262',owner_user_name='tempest-AttachVolumeNegativeTest-1505789262-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T15:19:19Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='cd9a8f26b71f4631a387e555e6b18428',uuid=3fe66973-cc26-4136-af61-3a600ec10948,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1ef1abd1-3781-489f-a588-0ddbcb6f4192", "address": "fa:16:3e:a1:4e:99", "network": {"id": "76c2d716-7d14-4bc1-b83b-a3290ee99d9a", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-782760714-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9156c0a9920c4721843416b9a44404f9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ef1abd1-37", "ovs_interfaceid": "1ef1abd1-3781-489f-a588-0ddbcb6f4192", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 15:19:27 compute-0 nova_compute[250018]: 2026-01-20 15:19:27.926 250022 DEBUG nova.network.os_vif_util [None req-a1353496-d6f8-452b-bfe1-42a2dda56c93 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Converting VIF {"id": "1ef1abd1-3781-489f-a588-0ddbcb6f4192", "address": "fa:16:3e:a1:4e:99", "network": {"id": "76c2d716-7d14-4bc1-b83b-a3290ee99d9a", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-782760714-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9156c0a9920c4721843416b9a44404f9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ef1abd1-37", "ovs_interfaceid": "1ef1abd1-3781-489f-a588-0ddbcb6f4192", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:19:27 compute-0 nova_compute[250018]: 2026-01-20 15:19:27.928 250022 DEBUG nova.network.os_vif_util [None req-a1353496-d6f8-452b-bfe1-42a2dda56c93 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a1:4e:99,bridge_name='br-int',has_traffic_filtering=True,id=1ef1abd1-3781-489f-a588-0ddbcb6f4192,network=Network(76c2d716-7d14-4bc1-b83b-a3290ee99d9a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1ef1abd1-37') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:19:27 compute-0 nova_compute[250018]: 2026-01-20 15:19:27.930 250022 DEBUG nova.objects.instance [None req-a1353496-d6f8-452b-bfe1-42a2dda56c93 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Lazy-loading 'pci_devices' on Instance uuid 3fe66973-cc26-4136-af61-3a600ec10948 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:19:27 compute-0 nova_compute[250018]: 2026-01-20 15:19:27.952 250022 DEBUG nova.virt.libvirt.driver [None req-a1353496-d6f8-452b-bfe1-42a2dda56c93 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] [instance: 3fe66973-cc26-4136-af61-3a600ec10948] End _get_guest_xml xml=<domain type="kvm">
Jan 20 15:19:27 compute-0 nova_compute[250018]:   <uuid>3fe66973-cc26-4136-af61-3a600ec10948</uuid>
Jan 20 15:19:27 compute-0 nova_compute[250018]:   <name>instance-000000c3</name>
Jan 20 15:19:27 compute-0 nova_compute[250018]:   <memory>131072</memory>
Jan 20 15:19:27 compute-0 nova_compute[250018]:   <vcpu>1</vcpu>
Jan 20 15:19:27 compute-0 nova_compute[250018]:   <metadata>
Jan 20 15:19:27 compute-0 nova_compute[250018]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 15:19:27 compute-0 nova_compute[250018]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 15:19:27 compute-0 nova_compute[250018]:       <nova:name>tempest-AttachVolumeNegativeTest-server-998097081</nova:name>
Jan 20 15:19:27 compute-0 nova_compute[250018]:       <nova:creationTime>2026-01-20 15:19:26</nova:creationTime>
Jan 20 15:19:27 compute-0 nova_compute[250018]:       <nova:flavor name="m1.nano">
Jan 20 15:19:27 compute-0 nova_compute[250018]:         <nova:memory>128</nova:memory>
Jan 20 15:19:27 compute-0 nova_compute[250018]:         <nova:disk>1</nova:disk>
Jan 20 15:19:27 compute-0 nova_compute[250018]:         <nova:swap>0</nova:swap>
Jan 20 15:19:27 compute-0 nova_compute[250018]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 15:19:27 compute-0 nova_compute[250018]:         <nova:vcpus>1</nova:vcpus>
Jan 20 15:19:27 compute-0 nova_compute[250018]:       </nova:flavor>
Jan 20 15:19:27 compute-0 nova_compute[250018]:       <nova:owner>
Jan 20 15:19:27 compute-0 nova_compute[250018]:         <nova:user uuid="cd9a8f26b71f4631a387e555e6b18428">tempest-AttachVolumeNegativeTest-1505789262-project-member</nova:user>
Jan 20 15:19:27 compute-0 nova_compute[250018]:         <nova:project uuid="9156c0a9920c4721843416b9a44404f9">tempest-AttachVolumeNegativeTest-1505789262</nova:project>
Jan 20 15:19:27 compute-0 nova_compute[250018]:       </nova:owner>
Jan 20 15:19:27 compute-0 nova_compute[250018]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 15:19:27 compute-0 nova_compute[250018]:       <nova:ports>
Jan 20 15:19:27 compute-0 nova_compute[250018]:         <nova:port uuid="1ef1abd1-3781-489f-a588-0ddbcb6f4192">
Jan 20 15:19:27 compute-0 nova_compute[250018]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Jan 20 15:19:27 compute-0 nova_compute[250018]:         </nova:port>
Jan 20 15:19:27 compute-0 nova_compute[250018]:       </nova:ports>
Jan 20 15:19:27 compute-0 nova_compute[250018]:     </nova:instance>
Jan 20 15:19:27 compute-0 nova_compute[250018]:   </metadata>
Jan 20 15:19:27 compute-0 nova_compute[250018]:   <sysinfo type="smbios">
Jan 20 15:19:27 compute-0 nova_compute[250018]:     <system>
Jan 20 15:19:27 compute-0 nova_compute[250018]:       <entry name="manufacturer">RDO</entry>
Jan 20 15:19:27 compute-0 nova_compute[250018]:       <entry name="product">OpenStack Compute</entry>
Jan 20 15:19:27 compute-0 nova_compute[250018]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 15:19:27 compute-0 nova_compute[250018]:       <entry name="serial">3fe66973-cc26-4136-af61-3a600ec10948</entry>
Jan 20 15:19:27 compute-0 nova_compute[250018]:       <entry name="uuid">3fe66973-cc26-4136-af61-3a600ec10948</entry>
Jan 20 15:19:27 compute-0 nova_compute[250018]:       <entry name="family">Virtual Machine</entry>
Jan 20 15:19:27 compute-0 nova_compute[250018]:     </system>
Jan 20 15:19:27 compute-0 nova_compute[250018]:   </sysinfo>
Jan 20 15:19:27 compute-0 nova_compute[250018]:   <os>
Jan 20 15:19:27 compute-0 nova_compute[250018]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 15:19:27 compute-0 nova_compute[250018]:     <boot dev="hd"/>
Jan 20 15:19:27 compute-0 nova_compute[250018]:     <smbios mode="sysinfo"/>
Jan 20 15:19:27 compute-0 nova_compute[250018]:   </os>
Jan 20 15:19:27 compute-0 nova_compute[250018]:   <features>
Jan 20 15:19:27 compute-0 nova_compute[250018]:     <acpi/>
Jan 20 15:19:27 compute-0 nova_compute[250018]:     <apic/>
Jan 20 15:19:27 compute-0 nova_compute[250018]:     <vmcoreinfo/>
Jan 20 15:19:27 compute-0 nova_compute[250018]:   </features>
Jan 20 15:19:27 compute-0 nova_compute[250018]:   <clock offset="utc">
Jan 20 15:19:27 compute-0 nova_compute[250018]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 15:19:27 compute-0 nova_compute[250018]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 15:19:27 compute-0 nova_compute[250018]:     <timer name="hpet" present="no"/>
Jan 20 15:19:27 compute-0 nova_compute[250018]:   </clock>
Jan 20 15:19:27 compute-0 nova_compute[250018]:   <cpu mode="custom" match="exact">
Jan 20 15:19:27 compute-0 nova_compute[250018]:     <model>Nehalem</model>
Jan 20 15:19:27 compute-0 nova_compute[250018]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 15:19:27 compute-0 nova_compute[250018]:   </cpu>
Jan 20 15:19:27 compute-0 nova_compute[250018]:   <devices>
Jan 20 15:19:27 compute-0 nova_compute[250018]:     <disk type="network" device="disk">
Jan 20 15:19:27 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 15:19:27 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/3fe66973-cc26-4136-af61-3a600ec10948_disk">
Jan 20 15:19:27 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 15:19:27 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 15:19:27 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 15:19:27 compute-0 nova_compute[250018]:       </source>
Jan 20 15:19:27 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 15:19:27 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 15:19:27 compute-0 nova_compute[250018]:       </auth>
Jan 20 15:19:27 compute-0 nova_compute[250018]:       <target dev="vda" bus="virtio"/>
Jan 20 15:19:27 compute-0 nova_compute[250018]:     </disk>
Jan 20 15:19:27 compute-0 nova_compute[250018]:     <disk type="network" device="cdrom">
Jan 20 15:19:27 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 15:19:27 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/3fe66973-cc26-4136-af61-3a600ec10948_disk.config">
Jan 20 15:19:27 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 15:19:27 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 15:19:27 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 15:19:27 compute-0 nova_compute[250018]:       </source>
Jan 20 15:19:27 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 15:19:27 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 15:19:27 compute-0 nova_compute[250018]:       </auth>
Jan 20 15:19:27 compute-0 nova_compute[250018]:       <target dev="sda" bus="sata"/>
Jan 20 15:19:27 compute-0 nova_compute[250018]:     </disk>
Jan 20 15:19:27 compute-0 nova_compute[250018]:     <interface type="ethernet">
Jan 20 15:19:27 compute-0 nova_compute[250018]:       <mac address="fa:16:3e:a1:4e:99"/>
Jan 20 15:19:27 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 15:19:27 compute-0 nova_compute[250018]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 15:19:27 compute-0 nova_compute[250018]:       <mtu size="1442"/>
Jan 20 15:19:27 compute-0 nova_compute[250018]:       <target dev="tap1ef1abd1-37"/>
Jan 20 15:19:27 compute-0 nova_compute[250018]:     </interface>
Jan 20 15:19:27 compute-0 nova_compute[250018]:     <serial type="pty">
Jan 20 15:19:27 compute-0 nova_compute[250018]:       <log file="/var/lib/nova/instances/3fe66973-cc26-4136-af61-3a600ec10948/console.log" append="off"/>
Jan 20 15:19:27 compute-0 nova_compute[250018]:     </serial>
Jan 20 15:19:27 compute-0 nova_compute[250018]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 15:19:27 compute-0 nova_compute[250018]:     <video>
Jan 20 15:19:27 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 15:19:27 compute-0 nova_compute[250018]:     </video>
Jan 20 15:19:27 compute-0 nova_compute[250018]:     <input type="tablet" bus="usb"/>
Jan 20 15:19:27 compute-0 nova_compute[250018]:     <rng model="virtio">
Jan 20 15:19:27 compute-0 nova_compute[250018]:       <backend model="random">/dev/urandom</backend>
Jan 20 15:19:27 compute-0 nova_compute[250018]:     </rng>
Jan 20 15:19:27 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root"/>
Jan 20 15:19:27 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:19:27 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:19:27 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:19:27 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:19:27 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:19:27 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:19:27 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:19:27 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:19:27 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:19:27 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:19:27 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:19:27 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:19:27 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:19:27 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:19:27 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:19:27 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:19:27 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:19:27 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:19:27 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:19:27 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:19:27 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:19:27 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:19:27 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:19:27 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:19:27 compute-0 nova_compute[250018]:     <controller type="usb" index="0"/>
Jan 20 15:19:27 compute-0 nova_compute[250018]:     <memballoon model="virtio">
Jan 20 15:19:27 compute-0 nova_compute[250018]:       <stats period="10"/>
Jan 20 15:19:27 compute-0 nova_compute[250018]:     </memballoon>
Jan 20 15:19:27 compute-0 nova_compute[250018]:   </devices>
Jan 20 15:19:27 compute-0 nova_compute[250018]: </domain>
Jan 20 15:19:27 compute-0 nova_compute[250018]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 15:19:27 compute-0 nova_compute[250018]: 2026-01-20 15:19:27.954 250022 DEBUG nova.compute.manager [None req-a1353496-d6f8-452b-bfe1-42a2dda56c93 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] [instance: 3fe66973-cc26-4136-af61-3a600ec10948] Preparing to wait for external event network-vif-plugged-1ef1abd1-3781-489f-a588-0ddbcb6f4192 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 20 15:19:27 compute-0 nova_compute[250018]: 2026-01-20 15:19:27.954 250022 DEBUG oslo_concurrency.lockutils [None req-a1353496-d6f8-452b-bfe1-42a2dda56c93 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Acquiring lock "3fe66973-cc26-4136-af61-3a600ec10948-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:19:27 compute-0 nova_compute[250018]: 2026-01-20 15:19:27.955 250022 DEBUG oslo_concurrency.lockutils [None req-a1353496-d6f8-452b-bfe1-42a2dda56c93 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Lock "3fe66973-cc26-4136-af61-3a600ec10948-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:19:27 compute-0 nova_compute[250018]: 2026-01-20 15:19:27.955 250022 DEBUG oslo_concurrency.lockutils [None req-a1353496-d6f8-452b-bfe1-42a2dda56c93 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Lock "3fe66973-cc26-4136-af61-3a600ec10948-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:19:27 compute-0 nova_compute[250018]: 2026-01-20 15:19:27.956 250022 DEBUG nova.virt.libvirt.vif [None req-a1353496-d6f8-452b-bfe1-42a2dda56c93 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T15:19:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachVolumeNegativeTest-server-998097081',display_name='tempest-AttachVolumeNegativeTest-server-998097081',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachvolumenegativetest-server-998097081',id=195,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNfaK16zx4hGL4X06Tx3ufhY5e5we/GkGDDP0xP64L4n/DaigaofrP+riSKAvoz/7obxzJJuCTTIvoKdXhAEKZ+WnHpkoCJxpFQScPA3LmqJKTG5B0+342102Iu2o5ADTw==',key_name='tempest-keypair-224036004',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9156c0a9920c4721843416b9a44404f9',ramdisk_id='',reservation_id='r-diglwi2a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachVolumeNegativeTest-1505789262',owner_user_name='tempest-AttachVolumeNegativeTest-1505789262-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T15:19:19Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='cd9a8f26b71f4631a387e555e6b18428',uuid=3fe66973-cc26-4136-af61-3a600ec10948,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1ef1abd1-3781-489f-a588-0ddbcb6f4192", "address": "fa:16:3e:a1:4e:99", "network": {"id": "76c2d716-7d14-4bc1-b83b-a3290ee99d9a", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-782760714-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9156c0a9920c4721843416b9a44404f9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ef1abd1-37", "ovs_interfaceid": "1ef1abd1-3781-489f-a588-0ddbcb6f4192", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 15:19:27 compute-0 nova_compute[250018]: 2026-01-20 15:19:27.956 250022 DEBUG nova.network.os_vif_util [None req-a1353496-d6f8-452b-bfe1-42a2dda56c93 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Converting VIF {"id": "1ef1abd1-3781-489f-a588-0ddbcb6f4192", "address": "fa:16:3e:a1:4e:99", "network": {"id": "76c2d716-7d14-4bc1-b83b-a3290ee99d9a", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-782760714-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9156c0a9920c4721843416b9a44404f9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ef1abd1-37", "ovs_interfaceid": "1ef1abd1-3781-489f-a588-0ddbcb6f4192", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:19:27 compute-0 nova_compute[250018]: 2026-01-20 15:19:27.957 250022 DEBUG nova.network.os_vif_util [None req-a1353496-d6f8-452b-bfe1-42a2dda56c93 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a1:4e:99,bridge_name='br-int',has_traffic_filtering=True,id=1ef1abd1-3781-489f-a588-0ddbcb6f4192,network=Network(76c2d716-7d14-4bc1-b83b-a3290ee99d9a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1ef1abd1-37') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:19:27 compute-0 nova_compute[250018]: 2026-01-20 15:19:27.957 250022 DEBUG os_vif [None req-a1353496-d6f8-452b-bfe1-42a2dda56c93 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a1:4e:99,bridge_name='br-int',has_traffic_filtering=True,id=1ef1abd1-3781-489f-a588-0ddbcb6f4192,network=Network(76c2d716-7d14-4bc1-b83b-a3290ee99d9a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1ef1abd1-37') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 15:19:27 compute-0 nova_compute[250018]: 2026-01-20 15:19:27.958 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:19:27 compute-0 nova_compute[250018]: 2026-01-20 15:19:27.959 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:19:27 compute-0 nova_compute[250018]: 2026-01-20 15:19:27.960 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:19:27 compute-0 nova_compute[250018]: 2026-01-20 15:19:27.960 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:19:27 compute-0 nova_compute[250018]: 2026-01-20 15:19:27.965 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:19:27 compute-0 nova_compute[250018]: 2026-01-20 15:19:27.965 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1ef1abd1-37, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:19:27 compute-0 nova_compute[250018]: 2026-01-20 15:19:27.966 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1ef1abd1-37, col_values=(('external_ids', {'iface-id': '1ef1abd1-3781-489f-a588-0ddbcb6f4192', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a1:4e:99', 'vm-uuid': '3fe66973-cc26-4136-af61-3a600ec10948'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:19:27 compute-0 nova_compute[250018]: 2026-01-20 15:19:27.968 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:19:27 compute-0 NetworkManager[48960]: <info>  [1768922367.9694] manager: (tap1ef1abd1-37): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/336)
Jan 20 15:19:27 compute-0 nova_compute[250018]: 2026-01-20 15:19:27.970 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 15:19:27 compute-0 nova_compute[250018]: 2026-01-20 15:19:27.975 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:19:27 compute-0 nova_compute[250018]: 2026-01-20 15:19:27.975 250022 INFO os_vif [None req-a1353496-d6f8-452b-bfe1-42a2dda56c93 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a1:4e:99,bridge_name='br-int',has_traffic_filtering=True,id=1ef1abd1-3781-489f-a588-0ddbcb6f4192,network=Network(76c2d716-7d14-4bc1-b83b-a3290ee99d9a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1ef1abd1-37')
Jan 20 15:19:28 compute-0 nova_compute[250018]: 2026-01-20 15:19:28.079 250022 DEBUG nova.virt.libvirt.driver [None req-a1353496-d6f8-452b-bfe1-42a2dda56c93 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:19:28 compute-0 nova_compute[250018]: 2026-01-20 15:19:28.079 250022 DEBUG nova.virt.libvirt.driver [None req-a1353496-d6f8-452b-bfe1-42a2dda56c93 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:19:28 compute-0 nova_compute[250018]: 2026-01-20 15:19:28.079 250022 DEBUG nova.virt.libvirt.driver [None req-a1353496-d6f8-452b-bfe1-42a2dda56c93 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] No VIF found with MAC fa:16:3e:a1:4e:99, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 15:19:28 compute-0 nova_compute[250018]: 2026-01-20 15:19:28.080 250022 INFO nova.virt.libvirt.driver [None req-a1353496-d6f8-452b-bfe1-42a2dda56c93 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] [instance: 3fe66973-cc26-4136-af61-3a600ec10948] Using config drive
Jan 20 15:19:28 compute-0 nova_compute[250018]: 2026-01-20 15:19:28.103 250022 DEBUG nova.storage.rbd_utils [None req-a1353496-d6f8-452b-bfe1-42a2dda56c93 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] rbd image 3fe66973-cc26-4136-af61-3a600ec10948_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:19:28 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:19:28 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:19:28 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:19:28.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:19:28 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2916: 321 pgs: 321 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:19:28 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3252354878' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:19:28 compute-0 ceph-mon[74360]: pgmap v2916: 321 pgs: 321 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:19:29 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:19:29 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:19:29 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:19:29.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:19:30 compute-0 nova_compute[250018]: 2026-01-20 15:19:30.072 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:19:30 compute-0 nova_compute[250018]: 2026-01-20 15:19:30.073 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 20 15:19:30 compute-0 nova_compute[250018]: 2026-01-20 15:19:30.128 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 20 15:19:30 compute-0 nova_compute[250018]: 2026-01-20 15:19:30.512 250022 INFO nova.virt.libvirt.driver [None req-a1353496-d6f8-452b-bfe1-42a2dda56c93 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] [instance: 3fe66973-cc26-4136-af61-3a600ec10948] Creating config drive at /var/lib/nova/instances/3fe66973-cc26-4136-af61-3a600ec10948/disk.config
Jan 20 15:19:30 compute-0 nova_compute[250018]: 2026-01-20 15:19:30.518 250022 DEBUG oslo_concurrency.processutils [None req-a1353496-d6f8-452b-bfe1-42a2dda56c93 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3fe66973-cc26-4136-af61-3a600ec10948/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzx85ogom execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:19:30 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:19:30 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:19:30 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:19:30.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:19:30 compute-0 nova_compute[250018]: 2026-01-20 15:19:30.644 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:19:30 compute-0 nova_compute[250018]: 2026-01-20 15:19:30.656 250022 DEBUG oslo_concurrency.processutils [None req-a1353496-d6f8-452b-bfe1-42a2dda56c93 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3fe66973-cc26-4136-af61-3a600ec10948/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzx85ogom" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:19:30 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2917: 321 pgs: 321 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:19:30 compute-0 nova_compute[250018]: 2026-01-20 15:19:30.690 250022 DEBUG nova.storage.rbd_utils [None req-a1353496-d6f8-452b-bfe1-42a2dda56c93 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] rbd image 3fe66973-cc26-4136-af61-3a600ec10948_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:19:30 compute-0 nova_compute[250018]: 2026-01-20 15:19:30.694 250022 DEBUG oslo_concurrency.processutils [None req-a1353496-d6f8-452b-bfe1-42a2dda56c93 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3fe66973-cc26-4136-af61-3a600ec10948/disk.config 3fe66973-cc26-4136-af61-3a600ec10948_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:19:30 compute-0 sshd-session[367115]: Invalid user admin from 134.122.57.138 port 50734
Jan 20 15:19:30 compute-0 ceph-mon[74360]: pgmap v2917: 321 pgs: 321 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:19:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:19:30.788 160071 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:19:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:19:30.789 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:19:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:19:30.789 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:19:30 compute-0 nova_compute[250018]: 2026-01-20 15:19:30.850 250022 DEBUG oslo_concurrency.processutils [None req-a1353496-d6f8-452b-bfe1-42a2dda56c93 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3fe66973-cc26-4136-af61-3a600ec10948/disk.config 3fe66973-cc26-4136-af61-3a600ec10948_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:19:30 compute-0 nova_compute[250018]: 2026-01-20 15:19:30.852 250022 INFO nova.virt.libvirt.driver [None req-a1353496-d6f8-452b-bfe1-42a2dda56c93 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] [instance: 3fe66973-cc26-4136-af61-3a600ec10948] Deleting local config drive /var/lib/nova/instances/3fe66973-cc26-4136-af61-3a600ec10948/disk.config because it was imported into RBD.
Jan 20 15:19:30 compute-0 sshd-session[367115]: Connection closed by invalid user admin 134.122.57.138 port 50734 [preauth]
Jan 20 15:19:30 compute-0 kernel: tap1ef1abd1-37: entered promiscuous mode
Jan 20 15:19:30 compute-0 ovn_controller[148666]: 2026-01-20T15:19:30Z|00709|binding|INFO|Claiming lport 1ef1abd1-3781-489f-a588-0ddbcb6f4192 for this chassis.
Jan 20 15:19:30 compute-0 NetworkManager[48960]: <info>  [1768922370.9142] manager: (tap1ef1abd1-37): new Tun device (/org/freedesktop/NetworkManager/Devices/337)
Jan 20 15:19:30 compute-0 nova_compute[250018]: 2026-01-20 15:19:30.913 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:19:30 compute-0 ovn_controller[148666]: 2026-01-20T15:19:30Z|00710|binding|INFO|1ef1abd1-3781-489f-a588-0ddbcb6f4192: Claiming fa:16:3e:a1:4e:99 10.100.0.5
Jan 20 15:19:30 compute-0 nova_compute[250018]: 2026-01-20 15:19:30.923 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:19:30 compute-0 nova_compute[250018]: 2026-01-20 15:19:30.924 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:19:30 compute-0 NetworkManager[48960]: <info>  [1768922370.9259] manager: (patch-provnet-b62c391b-f7a3-4a38-a0df-72ac0383ca74-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/338)
Jan 20 15:19:30 compute-0 NetworkManager[48960]: <info>  [1768922370.9302] manager: (patch-br-int-to-provnet-b62c391b-f7a3-4a38-a0df-72ac0383ca74): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/339)
Jan 20 15:19:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:19:30.930 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a1:4e:99 10.100.0.5'], port_security=['fa:16:3e:a1:4e:99 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '3fe66973-cc26-4136-af61-3a600ec10948', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-76c2d716-7d14-4bc1-b83b-a3290ee99d9a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9156c0a9920c4721843416b9a44404f9', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ec7b0ec7-6cdf-4a7c-bed3-5b907626a659', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2bfc4e2a-eeed-480e-aa18-68fc6c8f2cc2, chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=1ef1abd1-3781-489f-a588-0ddbcb6f4192) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:19:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:19:30.932 160071 INFO neutron.agent.ovn.metadata.agent [-] Port 1ef1abd1-3781-489f-a588-0ddbcb6f4192 in datapath 76c2d716-7d14-4bc1-b83b-a3290ee99d9a bound to our chassis
Jan 20 15:19:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:19:30.934 160071 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 76c2d716-7d14-4bc1-b83b-a3290ee99d9a
Jan 20 15:19:30 compute-0 systemd-udevd[367170]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 15:19:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:19:30.950 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[ff4028e3-8908-45d5-9399-7f369ad81ed2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:19:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:19:30.951 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap76c2d716-71 in ovnmeta-76c2d716-7d14-4bc1-b83b-a3290ee99d9a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 15:19:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:19:30.953 257604 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap76c2d716-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 15:19:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:19:30.953 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[bb8cd71f-9dda-47ce-b574-6ea627dd71f4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:19:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:19:30.954 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[9763eecf-0292-408b-8c90-b3ea1aacdb27]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:19:30 compute-0 systemd-machined[216401]: New machine qemu-86-instance-000000c3.
Jan 20 15:19:30 compute-0 NetworkManager[48960]: <info>  [1768922370.9662] device (tap1ef1abd1-37): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 15:19:30 compute-0 NetworkManager[48960]: <info>  [1768922370.9667] device (tap1ef1abd1-37): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 15:19:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:19:30.965 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[16e497da-de7a-48c7-88ad-f42bb8afd2a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:19:30 compute-0 systemd[1]: Started Virtual Machine qemu-86-instance-000000c3.
Jan 20 15:19:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:19:30.991 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[7326436e-6c1e-48e6-997f-05bc4e71956d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:19:31 compute-0 nova_compute[250018]: 2026-01-20 15:19:31.023 250022 DEBUG nova.network.neutron [req-5674bad6-ee40-476a-bbeb-c08ed8bda35c req-5fb63fc8-c9e0-466b-95e4-8d0eaf3d193e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 3fe66973-cc26-4136-af61-3a600ec10948] Updated VIF entry in instance network info cache for port 1ef1abd1-3781-489f-a588-0ddbcb6f4192. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 15:19:31 compute-0 nova_compute[250018]: 2026-01-20 15:19:31.023 250022 DEBUG nova.network.neutron [req-5674bad6-ee40-476a-bbeb-c08ed8bda35c req-5fb63fc8-c9e0-466b-95e4-8d0eaf3d193e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 3fe66973-cc26-4136-af61-3a600ec10948] Updating instance_info_cache with network_info: [{"id": "1ef1abd1-3781-489f-a588-0ddbcb6f4192", "address": "fa:16:3e:a1:4e:99", "network": {"id": "76c2d716-7d14-4bc1-b83b-a3290ee99d9a", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-782760714-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9156c0a9920c4721843416b9a44404f9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ef1abd1-37", "ovs_interfaceid": "1ef1abd1-3781-489f-a588-0ddbcb6f4192", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:19:31 compute-0 nova_compute[250018]: 2026-01-20 15:19:31.025 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:19:31 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:19:31.023 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[1875baf1-a0ec-472f-ac5a-9acc91b91faa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:19:31 compute-0 NetworkManager[48960]: <info>  [1768922371.0289] manager: (tap76c2d716-70): new Veth device (/org/freedesktop/NetworkManager/Devices/340)
Jan 20 15:19:31 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:19:31.028 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[8647b1d7-a593-4a49-adce-bb5899d7d681]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:19:31 compute-0 nova_compute[250018]: 2026-01-20 15:19:31.038 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:19:31 compute-0 nova_compute[250018]: 2026-01-20 15:19:31.044 250022 DEBUG oslo_concurrency.lockutils [req-5674bad6-ee40-476a-bbeb-c08ed8bda35c req-5fb63fc8-c9e0-466b-95e4-8d0eaf3d193e 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-3fe66973-cc26-4136-af61-3a600ec10948" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:19:31 compute-0 ovn_controller[148666]: 2026-01-20T15:19:31Z|00711|binding|INFO|Setting lport 1ef1abd1-3781-489f-a588-0ddbcb6f4192 ovn-installed in OVS
Jan 20 15:19:31 compute-0 ovn_controller[148666]: 2026-01-20T15:19:31Z|00712|binding|INFO|Setting lport 1ef1abd1-3781-489f-a588-0ddbcb6f4192 up in Southbound
Jan 20 15:19:31 compute-0 nova_compute[250018]: 2026-01-20 15:19:31.055 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:19:31 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:19:31.068 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[cc19a002-9d70-4b1e-91eb-67457a02eefc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:19:31 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:19:31.070 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[cbaca94a-4938-4d48-b9b3-7d62a7e0f6fe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:19:31 compute-0 NetworkManager[48960]: <info>  [1768922371.0896] device (tap76c2d716-70): carrier: link connected
Jan 20 15:19:31 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:19:31.093 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[04a72c31-920b-46d4-8fa2-e742d1a4c394]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:19:31 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:19:31.108 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[865b9944-1a10-4e0d-b8a5-9b8f5719dc6c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap76c2d716-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2e:44:ab'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 224], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 831181, 'reachable_time': 36169, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 367204, 'error': None, 'target': 'ovnmeta-76c2d716-7d14-4bc1-b83b-a3290ee99d9a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:19:31 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:19:31.119 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[c51c18f7-3c94-4051-8496-36c552452131]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe2e:44ab'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 831181, 'tstamp': 831181}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 367205, 'error': None, 'target': 'ovnmeta-76c2d716-7d14-4bc1-b83b-a3290ee99d9a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:19:31 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:19:31.132 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[beb8f442-8d69-4d6d-ab4b-c8d8cd189fb8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap76c2d716-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2e:44:ab'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 224], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 831181, 'reachable_time': 36169, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 367206, 'error': None, 'target': 'ovnmeta-76c2d716-7d14-4bc1-b83b-a3290ee99d9a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:19:31 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:19:31.157 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[2a425d12-ed86-4abe-aa6e-07768c506345]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:19:31 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:19:31.206 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[7501b4a0-91cf-4147-b240-4cd7d7409dab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:19:31 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:19:31.207 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap76c2d716-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:19:31 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:19:31.207 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:19:31 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:19:31.208 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap76c2d716-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:19:31 compute-0 NetworkManager[48960]: <info>  [1768922371.2099] manager: (tap76c2d716-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/341)
Jan 20 15:19:31 compute-0 nova_compute[250018]: 2026-01-20 15:19:31.209 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:19:31 compute-0 kernel: tap76c2d716-70: entered promiscuous mode
Jan 20 15:19:31 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:19:31.212 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap76c2d716-70, col_values=(('external_ids', {'iface-id': '2c0bba0e-e9b6-4ece-8349-62642b94d91d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:19:31 compute-0 nova_compute[250018]: 2026-01-20 15:19:31.211 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:19:31 compute-0 ovn_controller[148666]: 2026-01-20T15:19:31Z|00713|binding|INFO|Releasing lport 2c0bba0e-e9b6-4ece-8349-62642b94d91d from this chassis (sb_readonly=0)
Jan 20 15:19:31 compute-0 nova_compute[250018]: 2026-01-20 15:19:31.213 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:19:31 compute-0 nova_compute[250018]: 2026-01-20 15:19:31.226 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:19:31 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:19:31.227 160071 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/76c2d716-7d14-4bc1-b83b-a3290ee99d9a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/76c2d716-7d14-4bc1-b83b-a3290ee99d9a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 15:19:31 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:19:31.229 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[ed8f25cd-c647-4380-91dd-d94caba6f3d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:19:31 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:19:31.229 160071 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 15:19:31 compute-0 ovn_metadata_agent[160049]: global
Jan 20 15:19:31 compute-0 ovn_metadata_agent[160049]:     log         /dev/log local0 debug
Jan 20 15:19:31 compute-0 ovn_metadata_agent[160049]:     log-tag     haproxy-metadata-proxy-76c2d716-7d14-4bc1-b83b-a3290ee99d9a
Jan 20 15:19:31 compute-0 ovn_metadata_agent[160049]:     user        root
Jan 20 15:19:31 compute-0 ovn_metadata_agent[160049]:     group       root
Jan 20 15:19:31 compute-0 ovn_metadata_agent[160049]:     maxconn     1024
Jan 20 15:19:31 compute-0 ovn_metadata_agent[160049]:     pidfile     /var/lib/neutron/external/pids/76c2d716-7d14-4bc1-b83b-a3290ee99d9a.pid.haproxy
Jan 20 15:19:31 compute-0 ovn_metadata_agent[160049]:     daemon
Jan 20 15:19:31 compute-0 ovn_metadata_agent[160049]: 
Jan 20 15:19:31 compute-0 ovn_metadata_agent[160049]: defaults
Jan 20 15:19:31 compute-0 ovn_metadata_agent[160049]:     log global
Jan 20 15:19:31 compute-0 ovn_metadata_agent[160049]:     mode http
Jan 20 15:19:31 compute-0 ovn_metadata_agent[160049]:     option httplog
Jan 20 15:19:31 compute-0 ovn_metadata_agent[160049]:     option dontlognull
Jan 20 15:19:31 compute-0 ovn_metadata_agent[160049]:     option http-server-close
Jan 20 15:19:31 compute-0 ovn_metadata_agent[160049]:     option forwardfor
Jan 20 15:19:31 compute-0 ovn_metadata_agent[160049]:     retries                 3
Jan 20 15:19:31 compute-0 ovn_metadata_agent[160049]:     timeout http-request    30s
Jan 20 15:19:31 compute-0 ovn_metadata_agent[160049]:     timeout connect         30s
Jan 20 15:19:31 compute-0 ovn_metadata_agent[160049]:     timeout client          32s
Jan 20 15:19:31 compute-0 ovn_metadata_agent[160049]:     timeout server          32s
Jan 20 15:19:31 compute-0 ovn_metadata_agent[160049]:     timeout http-keep-alive 30s
Jan 20 15:19:31 compute-0 ovn_metadata_agent[160049]: 
Jan 20 15:19:31 compute-0 ovn_metadata_agent[160049]: 
Jan 20 15:19:31 compute-0 ovn_metadata_agent[160049]: listen listener
Jan 20 15:19:31 compute-0 ovn_metadata_agent[160049]:     bind 169.254.169.254:80
Jan 20 15:19:31 compute-0 ovn_metadata_agent[160049]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 15:19:31 compute-0 ovn_metadata_agent[160049]:     http-request add-header X-OVN-Network-ID 76c2d716-7d14-4bc1-b83b-a3290ee99d9a
Jan 20 15:19:31 compute-0 ovn_metadata_agent[160049]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 15:19:31 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:19:31.231 160071 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-76c2d716-7d14-4bc1-b83b-a3290ee99d9a', 'env', 'PROCESS_TAG=haproxy-76c2d716-7d14-4bc1-b83b-a3290ee99d9a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/76c2d716-7d14-4bc1-b83b-a3290ee99d9a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 15:19:31 compute-0 nova_compute[250018]: 2026-01-20 15:19:31.378 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768922371.3780913, 3fe66973-cc26-4136-af61-3a600ec10948 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:19:31 compute-0 nova_compute[250018]: 2026-01-20 15:19:31.379 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 3fe66973-cc26-4136-af61-3a600ec10948] VM Started (Lifecycle Event)
Jan 20 15:19:31 compute-0 podman[367280]: 2026-01-20 15:19:31.598521642 +0000 UTC m=+0.048713375 container create 5f4428faef815d44d75ae17121ebafbd0325f6359ebc3ae35d89d62dfd427549 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-76c2d716-7d14-4bc1-b83b-a3290ee99d9a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 15:19:31 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:19:31 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:19:31 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:19:31.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:19:31 compute-0 systemd[1]: Started libpod-conmon-5f4428faef815d44d75ae17121ebafbd0325f6359ebc3ae35d89d62dfd427549.scope.
Jan 20 15:19:31 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:19:31 compute-0 podman[367280]: 2026-01-20 15:19:31.571421221 +0000 UTC m=+0.021612984 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 15:19:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eca00e0f467dbdf8505afd41197a022169f569e7e7d81208624a51120aee7d37/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 15:19:31 compute-0 podman[367280]: 2026-01-20 15:19:31.678296484 +0000 UTC m=+0.128488237 container init 5f4428faef815d44d75ae17121ebafbd0325f6359ebc3ae35d89d62dfd427549 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-76c2d716-7d14-4bc1-b83b-a3290ee99d9a, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 20 15:19:31 compute-0 podman[367280]: 2026-01-20 15:19:31.685062967 +0000 UTC m=+0.135254710 container start 5f4428faef815d44d75ae17121ebafbd0325f6359ebc3ae35d89d62dfd427549 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-76c2d716-7d14-4bc1-b83b-a3290ee99d9a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 15:19:31 compute-0 neutron-haproxy-ovnmeta-76c2d716-7d14-4bc1-b83b-a3290ee99d9a[367295]: [NOTICE]   (367299) : New worker (367301) forked
Jan 20 15:19:31 compute-0 neutron-haproxy-ovnmeta-76c2d716-7d14-4bc1-b83b-a3290ee99d9a[367295]: [NOTICE]   (367299) : Loading success.
Jan 20 15:19:31 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:19:32 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:19:32 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:19:32 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:19:32.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:19:32 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2918: 321 pgs: 321 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.1 MiB/s wr, 26 op/s
Jan 20 15:19:32 compute-0 ceph-mon[74360]: pgmap v2918: 321 pgs: 321 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.1 MiB/s wr, 26 op/s
Jan 20 15:19:32 compute-0 nova_compute[250018]: 2026-01-20 15:19:32.968 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:19:33 compute-0 nova_compute[250018]: 2026-01-20 15:19:33.346 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 3fe66973-cc26-4136-af61-3a600ec10948] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:19:33 compute-0 nova_compute[250018]: 2026-01-20 15:19:33.350 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768922371.3789744, 3fe66973-cc26-4136-af61-3a600ec10948 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:19:33 compute-0 nova_compute[250018]: 2026-01-20 15:19:33.350 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 3fe66973-cc26-4136-af61-3a600ec10948] VM Paused (Lifecycle Event)
Jan 20 15:19:33 compute-0 nova_compute[250018]: 2026-01-20 15:19:33.406 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 3fe66973-cc26-4136-af61-3a600ec10948] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:19:33 compute-0 nova_compute[250018]: 2026-01-20 15:19:33.410 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 3fe66973-cc26-4136-af61-3a600ec10948] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:19:33 compute-0 nova_compute[250018]: 2026-01-20 15:19:33.474 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 3fe66973-cc26-4136-af61-3a600ec10948] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 15:19:33 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:19:33 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:19:33 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:19:33.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:19:34 compute-0 nova_compute[250018]: 2026-01-20 15:19:34.457 250022 DEBUG nova.compute.manager [req-ec33e155-64bc-4c97-9a18-b60314c5d0a3 req-0fea42f3-8769-4ab1-a17d-c9afcb09b1d0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 3fe66973-cc26-4136-af61-3a600ec10948] Received event network-vif-plugged-1ef1abd1-3781-489f-a588-0ddbcb6f4192 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:19:34 compute-0 nova_compute[250018]: 2026-01-20 15:19:34.457 250022 DEBUG oslo_concurrency.lockutils [req-ec33e155-64bc-4c97-9a18-b60314c5d0a3 req-0fea42f3-8769-4ab1-a17d-c9afcb09b1d0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "3fe66973-cc26-4136-af61-3a600ec10948-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:19:34 compute-0 nova_compute[250018]: 2026-01-20 15:19:34.457 250022 DEBUG oslo_concurrency.lockutils [req-ec33e155-64bc-4c97-9a18-b60314c5d0a3 req-0fea42f3-8769-4ab1-a17d-c9afcb09b1d0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "3fe66973-cc26-4136-af61-3a600ec10948-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:19:34 compute-0 nova_compute[250018]: 2026-01-20 15:19:34.457 250022 DEBUG oslo_concurrency.lockutils [req-ec33e155-64bc-4c97-9a18-b60314c5d0a3 req-0fea42f3-8769-4ab1-a17d-c9afcb09b1d0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "3fe66973-cc26-4136-af61-3a600ec10948-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:19:34 compute-0 nova_compute[250018]: 2026-01-20 15:19:34.457 250022 DEBUG nova.compute.manager [req-ec33e155-64bc-4c97-9a18-b60314c5d0a3 req-0fea42f3-8769-4ab1-a17d-c9afcb09b1d0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 3fe66973-cc26-4136-af61-3a600ec10948] Processing event network-vif-plugged-1ef1abd1-3781-489f-a588-0ddbcb6f4192 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 20 15:19:34 compute-0 nova_compute[250018]: 2026-01-20 15:19:34.458 250022 DEBUG nova.compute.manager [None req-a1353496-d6f8-452b-bfe1-42a2dda56c93 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] [instance: 3fe66973-cc26-4136-af61-3a600ec10948] Instance event wait completed in 3 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 15:19:34 compute-0 nova_compute[250018]: 2026-01-20 15:19:34.461 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768922374.461395, 3fe66973-cc26-4136-af61-3a600ec10948 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:19:34 compute-0 nova_compute[250018]: 2026-01-20 15:19:34.461 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 3fe66973-cc26-4136-af61-3a600ec10948] VM Resumed (Lifecycle Event)
Jan 20 15:19:34 compute-0 nova_compute[250018]: 2026-01-20 15:19:34.463 250022 DEBUG nova.virt.libvirt.driver [None req-a1353496-d6f8-452b-bfe1-42a2dda56c93 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] [instance: 3fe66973-cc26-4136-af61-3a600ec10948] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 15:19:34 compute-0 nova_compute[250018]: 2026-01-20 15:19:34.466 250022 INFO nova.virt.libvirt.driver [-] [instance: 3fe66973-cc26-4136-af61-3a600ec10948] Instance spawned successfully.
Jan 20 15:19:34 compute-0 nova_compute[250018]: 2026-01-20 15:19:34.467 250022 DEBUG nova.virt.libvirt.driver [None req-a1353496-d6f8-452b-bfe1-42a2dda56c93 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] [instance: 3fe66973-cc26-4136-af61-3a600ec10948] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 15:19:34 compute-0 nova_compute[250018]: 2026-01-20 15:19:34.487 250022 DEBUG nova.virt.libvirt.driver [None req-a1353496-d6f8-452b-bfe1-42a2dda56c93 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] [instance: 3fe66973-cc26-4136-af61-3a600ec10948] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:19:34 compute-0 nova_compute[250018]: 2026-01-20 15:19:34.488 250022 DEBUG nova.virt.libvirt.driver [None req-a1353496-d6f8-452b-bfe1-42a2dda56c93 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] [instance: 3fe66973-cc26-4136-af61-3a600ec10948] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:19:34 compute-0 nova_compute[250018]: 2026-01-20 15:19:34.488 250022 DEBUG nova.virt.libvirt.driver [None req-a1353496-d6f8-452b-bfe1-42a2dda56c93 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] [instance: 3fe66973-cc26-4136-af61-3a600ec10948] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:19:34 compute-0 nova_compute[250018]: 2026-01-20 15:19:34.489 250022 DEBUG nova.virt.libvirt.driver [None req-a1353496-d6f8-452b-bfe1-42a2dda56c93 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] [instance: 3fe66973-cc26-4136-af61-3a600ec10948] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:19:34 compute-0 nova_compute[250018]: 2026-01-20 15:19:34.489 250022 DEBUG nova.virt.libvirt.driver [None req-a1353496-d6f8-452b-bfe1-42a2dda56c93 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] [instance: 3fe66973-cc26-4136-af61-3a600ec10948] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:19:34 compute-0 nova_compute[250018]: 2026-01-20 15:19:34.490 250022 DEBUG nova.virt.libvirt.driver [None req-a1353496-d6f8-452b-bfe1-42a2dda56c93 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] [instance: 3fe66973-cc26-4136-af61-3a600ec10948] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:19:34 compute-0 nova_compute[250018]: 2026-01-20 15:19:34.525 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 3fe66973-cc26-4136-af61-3a600ec10948] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:19:34 compute-0 nova_compute[250018]: 2026-01-20 15:19:34.530 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 3fe66973-cc26-4136-af61-3a600ec10948] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:19:34 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:19:34 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:19:34 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:19:34.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:19:34 compute-0 nova_compute[250018]: 2026-01-20 15:19:34.558 250022 INFO nova.compute.manager [None req-a1353496-d6f8-452b-bfe1-42a2dda56c93 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] [instance: 3fe66973-cc26-4136-af61-3a600ec10948] Took 14.93 seconds to spawn the instance on the hypervisor.
Jan 20 15:19:34 compute-0 nova_compute[250018]: 2026-01-20 15:19:34.559 250022 DEBUG nova.compute.manager [None req-a1353496-d6f8-452b-bfe1-42a2dda56c93 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] [instance: 3fe66973-cc26-4136-af61-3a600ec10948] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:19:34 compute-0 nova_compute[250018]: 2026-01-20 15:19:34.559 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 3fe66973-cc26-4136-af61-3a600ec10948] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 15:19:34 compute-0 nova_compute[250018]: 2026-01-20 15:19:34.623 250022 INFO nova.compute.manager [None req-a1353496-d6f8-452b-bfe1-42a2dda56c93 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] [instance: 3fe66973-cc26-4136-af61-3a600ec10948] Took 16.09 seconds to build instance.
Jan 20 15:19:34 compute-0 nova_compute[250018]: 2026-01-20 15:19:34.658 250022 DEBUG oslo_concurrency.lockutils [None req-a1353496-d6f8-452b-bfe1-42a2dda56c93 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Lock "3fe66973-cc26-4136-af61-3a600ec10948" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.216s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:19:34 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2919: 321 pgs: 321 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 7.2 KiB/s rd, 887 KiB/s wr, 14 op/s
Jan 20 15:19:34 compute-0 ceph-mon[74360]: pgmap v2919: 321 pgs: 321 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 7.2 KiB/s rd, 887 KiB/s wr, 14 op/s
Jan 20 15:19:35 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:19:35 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:19:35 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:19:35.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:19:35 compute-0 nova_compute[250018]: 2026-01-20 15:19:35.645 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:19:36 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:19:36 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:19:36 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:19:36.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:19:36 compute-0 nova_compute[250018]: 2026-01-20 15:19:36.607 250022 DEBUG nova.compute.manager [req-09605db5-9c01-4967-a823-58eb184abefa req-364a3b30-664d-42c3-acb5-fd8fa027aded 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 3fe66973-cc26-4136-af61-3a600ec10948] Received event network-vif-plugged-1ef1abd1-3781-489f-a588-0ddbcb6f4192 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:19:36 compute-0 nova_compute[250018]: 2026-01-20 15:19:36.607 250022 DEBUG oslo_concurrency.lockutils [req-09605db5-9c01-4967-a823-58eb184abefa req-364a3b30-664d-42c3-acb5-fd8fa027aded 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "3fe66973-cc26-4136-af61-3a600ec10948-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:19:36 compute-0 nova_compute[250018]: 2026-01-20 15:19:36.607 250022 DEBUG oslo_concurrency.lockutils [req-09605db5-9c01-4967-a823-58eb184abefa req-364a3b30-664d-42c3-acb5-fd8fa027aded 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "3fe66973-cc26-4136-af61-3a600ec10948-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:19:36 compute-0 nova_compute[250018]: 2026-01-20 15:19:36.608 250022 DEBUG oslo_concurrency.lockutils [req-09605db5-9c01-4967-a823-58eb184abefa req-364a3b30-664d-42c3-acb5-fd8fa027aded 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "3fe66973-cc26-4136-af61-3a600ec10948-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:19:36 compute-0 nova_compute[250018]: 2026-01-20 15:19:36.608 250022 DEBUG nova.compute.manager [req-09605db5-9c01-4967-a823-58eb184abefa req-364a3b30-664d-42c3-acb5-fd8fa027aded 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 3fe66973-cc26-4136-af61-3a600ec10948] No waiting events found dispatching network-vif-plugged-1ef1abd1-3781-489f-a588-0ddbcb6f4192 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:19:36 compute-0 nova_compute[250018]: 2026-01-20 15:19:36.608 250022 WARNING nova.compute.manager [req-09605db5-9c01-4967-a823-58eb184abefa req-364a3b30-664d-42c3-acb5-fd8fa027aded 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 3fe66973-cc26-4136-af61-3a600ec10948] Received unexpected event network-vif-plugged-1ef1abd1-3781-489f-a588-0ddbcb6f4192 for instance with vm_state active and task_state None.
Jan 20 15:19:36 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2920: 321 pgs: 321 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 389 KiB/s rd, 489 KiB/s wr, 24 op/s
Jan 20 15:19:36 compute-0 ceph-mon[74360]: pgmap v2920: 321 pgs: 321 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 389 KiB/s rd, 489 KiB/s wr, 24 op/s
Jan 20 15:19:36 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:19:37 compute-0 nova_compute[250018]: 2026-01-20 15:19:37.101 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:19:37 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:19:37 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:19:37 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:19:37.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:19:37 compute-0 nova_compute[250018]: 2026-01-20 15:19:37.970 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:19:38 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:19:38 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:19:38 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:19:38.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:19:38 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2921: 321 pgs: 321 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 389 KiB/s rd, 17 KiB/s wr, 23 op/s
Jan 20 15:19:38 compute-0 ceph-mon[74360]: pgmap v2921: 321 pgs: 321 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 389 KiB/s rd, 17 KiB/s wr, 23 op/s
Jan 20 15:19:39 compute-0 podman[367315]: 2026-01-20 15:19:39.471178658 +0000 UTC m=+0.059059365 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:19:39 compute-0 podman[367314]: 2026-01-20 15:19:39.499232595 +0000 UTC m=+0.093287008 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 15:19:39 compute-0 nova_compute[250018]: 2026-01-20 15:19:39.616 250022 DEBUG nova.compute.manager [req-e43c3249-b6dd-40ee-b0c1-d9cabbf2f2a1 req-7f71e9a9-749c-439d-a117-3c2b544afdb7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 3fe66973-cc26-4136-af61-3a600ec10948] Received event network-changed-1ef1abd1-3781-489f-a588-0ddbcb6f4192 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:19:39 compute-0 nova_compute[250018]: 2026-01-20 15:19:39.616 250022 DEBUG nova.compute.manager [req-e43c3249-b6dd-40ee-b0c1-d9cabbf2f2a1 req-7f71e9a9-749c-439d-a117-3c2b544afdb7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 3fe66973-cc26-4136-af61-3a600ec10948] Refreshing instance network info cache due to event network-changed-1ef1abd1-3781-489f-a588-0ddbcb6f4192. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 15:19:39 compute-0 nova_compute[250018]: 2026-01-20 15:19:39.617 250022 DEBUG oslo_concurrency.lockutils [req-e43c3249-b6dd-40ee-b0c1-d9cabbf2f2a1 req-7f71e9a9-749c-439d-a117-3c2b544afdb7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-3fe66973-cc26-4136-af61-3a600ec10948" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:19:39 compute-0 nova_compute[250018]: 2026-01-20 15:19:39.617 250022 DEBUG oslo_concurrency.lockutils [req-e43c3249-b6dd-40ee-b0c1-d9cabbf2f2a1 req-7f71e9a9-749c-439d-a117-3c2b544afdb7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-3fe66973-cc26-4136-af61-3a600ec10948" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:19:39 compute-0 nova_compute[250018]: 2026-01-20 15:19:39.617 250022 DEBUG nova.network.neutron [req-e43c3249-b6dd-40ee-b0c1-d9cabbf2f2a1 req-7f71e9a9-749c-439d-a117-3c2b544afdb7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 3fe66973-cc26-4136-af61-3a600ec10948] Refreshing network info cache for port 1ef1abd1-3781-489f-a588-0ddbcb6f4192 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 15:19:39 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:19:39 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:19:39 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:19:39.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:19:40 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:19:40 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:19:40 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:19:40.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:19:40 compute-0 nova_compute[250018]: 2026-01-20 15:19:40.647 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:19:40 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2922: 321 pgs: 321 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 17 KiB/s wr, 74 op/s
Jan 20 15:19:40 compute-0 ceph-mon[74360]: pgmap v2922: 321 pgs: 321 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 17 KiB/s wr, 74 op/s
Jan 20 15:19:41 compute-0 nova_compute[250018]: 2026-01-20 15:19:41.347 250022 DEBUG nova.network.neutron [req-e43c3249-b6dd-40ee-b0c1-d9cabbf2f2a1 req-7f71e9a9-749c-439d-a117-3c2b544afdb7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 3fe66973-cc26-4136-af61-3a600ec10948] Updated VIF entry in instance network info cache for port 1ef1abd1-3781-489f-a588-0ddbcb6f4192. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 15:19:41 compute-0 nova_compute[250018]: 2026-01-20 15:19:41.347 250022 DEBUG nova.network.neutron [req-e43c3249-b6dd-40ee-b0c1-d9cabbf2f2a1 req-7f71e9a9-749c-439d-a117-3c2b544afdb7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 3fe66973-cc26-4136-af61-3a600ec10948] Updating instance_info_cache with network_info: [{"id": "1ef1abd1-3781-489f-a588-0ddbcb6f4192", "address": "fa:16:3e:a1:4e:99", "network": {"id": "76c2d716-7d14-4bc1-b83b-a3290ee99d9a", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-782760714-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9156c0a9920c4721843416b9a44404f9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ef1abd1-37", "ovs_interfaceid": "1ef1abd1-3781-489f-a588-0ddbcb6f4192", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:19:41 compute-0 nova_compute[250018]: 2026-01-20 15:19:41.371 250022 DEBUG oslo_concurrency.lockutils [req-e43c3249-b6dd-40ee-b0c1-d9cabbf2f2a1 req-7f71e9a9-749c-439d-a117-3c2b544afdb7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-3fe66973-cc26-4136-af61-3a600ec10948" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:19:41 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:19:41 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:19:41 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:19:41.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:19:41 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:19:42 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/2107288519' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:19:42 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:19:42 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:19:42 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:19:42.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:19:42 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2923: 321 pgs: 321 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 73 op/s
Jan 20 15:19:42 compute-0 nova_compute[250018]: 2026-01-20 15:19:42.972 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:19:43 compute-0 sudo[367359]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:19:43 compute-0 sudo[367359]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:19:43 compute-0 sudo[367359]: pam_unix(sudo:session): session closed for user root
Jan 20 15:19:43 compute-0 sudo[367384]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:19:43 compute-0 sudo[367384]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:19:43 compute-0 sudo[367384]: pam_unix(sudo:session): session closed for user root
Jan 20 15:19:43 compute-0 ceph-mon[74360]: pgmap v2923: 321 pgs: 321 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 73 op/s
Jan 20 15:19:43 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:19:43 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:19:43 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:19:43.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:19:44 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:19:44 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:19:44 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:19:44.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:19:44 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2924: 321 pgs: 321 active+clean; 254 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 68 KiB/s wr, 84 op/s
Jan 20 15:19:44 compute-0 ceph-mon[74360]: pgmap v2924: 321 pgs: 321 active+clean; 254 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 68 KiB/s wr, 84 op/s
Jan 20 15:19:45 compute-0 nova_compute[250018]: 2026-01-20 15:19:45.197 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:19:45 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:19:45.198 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=65, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=64) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:19:45 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:19:45.199 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 15:19:45 compute-0 nova_compute[250018]: 2026-01-20 15:19:45.649 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:19:45 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:19:45 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:19:45 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:19:45.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:19:46 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:19:46 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:19:46 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:19:46.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:19:46 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2925: 321 pgs: 321 active+clean; 293 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 99 op/s
Jan 20 15:19:46 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:19:47 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:19:47.200 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=367c1a2c-b16a-4828-ab5a-626bb50023b4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '65'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:19:47 compute-0 ceph-mon[74360]: pgmap v2925: 321 pgs: 321 active+clean; 293 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 99 op/s
Jan 20 15:19:47 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:19:47 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:19:47 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:19:47.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:19:47 compute-0 ovn_controller[148666]: 2026-01-20T15:19:47Z|00096|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:a1:4e:99 10.100.0.5
Jan 20 15:19:47 compute-0 ovn_controller[148666]: 2026-01-20T15:19:47Z|00097|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a1:4e:99 10.100.0.5
Jan 20 15:19:47 compute-0 nova_compute[250018]: 2026-01-20 15:19:47.974 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:19:48 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:19:48 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:19:48 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:19:48.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:19:48 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2926: 321 pgs: 321 active+clean; 293 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 1.8 MiB/s wr, 77 op/s
Jan 20 15:19:48 compute-0 ceph-mon[74360]: pgmap v2926: 321 pgs: 321 active+clean; 293 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 1.8 MiB/s wr, 77 op/s
Jan 20 15:19:49 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:19:49 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:19:49 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:19:49.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:19:49 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/3438431235' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:19:50 compute-0 nova_compute[250018]: 2026-01-20 15:19:50.107 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:19:50 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:19:50 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:19:50 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:19:50.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:19:50 compute-0 nova_compute[250018]: 2026-01-20 15:19:50.654 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:19:50 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2927: 321 pgs: 321 active+clean; 312 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 3.4 MiB/s wr, 124 op/s
Jan 20 15:19:50 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/3654454720' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:19:50 compute-0 ceph-mon[74360]: pgmap v2927: 321 pgs: 321 active+clean; 312 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 3.4 MiB/s wr, 124 op/s
Jan 20 15:19:51 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:19:51 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:19:51 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:19:51.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:19:51 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:19:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:19:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:19:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:19:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:19:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:19:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:19:52 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:19:52 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:19:52 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:19:52.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:19:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Optimize plan auto_2026-01-20_15:19:52
Jan 20 15:19:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 15:19:52 compute-0 ceph-mgr[74653]: [balancer INFO root] do_upmap
Jan 20 15:19:52 compute-0 ceph-mgr[74653]: [balancer INFO root] pools ['images', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.log', 'default.rgw.control', 'cephfs.cephfs.data', '.rgw.root', 'volumes', 'backups', 'vms', 'default.rgw.meta']
Jan 20 15:19:52 compute-0 ceph-mgr[74653]: [balancer INFO root] prepared 0/10 changes
Jan 20 15:19:52 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2928: 321 pgs: 321 active+clean; 325 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 346 KiB/s rd, 3.9 MiB/s wr, 92 op/s
Jan 20 15:19:52 compute-0 nova_compute[250018]: 2026-01-20 15:19:52.976 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:19:53 compute-0 ceph-mon[74360]: pgmap v2928: 321 pgs: 321 active+clean; 325 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 346 KiB/s rd, 3.9 MiB/s wr, 92 op/s
Jan 20 15:19:53 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:19:53 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:19:53 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:19:53.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:19:54 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:19:54 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:19:54 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:19:54.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:19:54 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2929: 321 pgs: 321 active+clean; 325 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 346 KiB/s rd, 3.9 MiB/s wr, 92 op/s
Jan 20 15:19:54 compute-0 ceph-mon[74360]: pgmap v2929: 321 pgs: 321 active+clean; 325 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 346 KiB/s rd, 3.9 MiB/s wr, 92 op/s
Jan 20 15:19:55 compute-0 nova_compute[250018]: 2026-01-20 15:19:55.656 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:19:55 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:19:55 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:19:55 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:19:55.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:19:56 compute-0 nova_compute[250018]: 2026-01-20 15:19:56.048 250022 DEBUG oslo_concurrency.lockutils [None req-24cf239e-c210-4b0e-9db4-d7ee16d82a06 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Acquiring lock "3fe66973-cc26-4136-af61-3a600ec10948" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:19:56 compute-0 nova_compute[250018]: 2026-01-20 15:19:56.049 250022 DEBUG oslo_concurrency.lockutils [None req-24cf239e-c210-4b0e-9db4-d7ee16d82a06 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Lock "3fe66973-cc26-4136-af61-3a600ec10948" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:19:56 compute-0 nova_compute[250018]: 2026-01-20 15:19:56.050 250022 DEBUG oslo_concurrency.lockutils [None req-24cf239e-c210-4b0e-9db4-d7ee16d82a06 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Acquiring lock "3fe66973-cc26-4136-af61-3a600ec10948-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:19:56 compute-0 nova_compute[250018]: 2026-01-20 15:19:56.050 250022 DEBUG oslo_concurrency.lockutils [None req-24cf239e-c210-4b0e-9db4-d7ee16d82a06 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Lock "3fe66973-cc26-4136-af61-3a600ec10948-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:19:56 compute-0 nova_compute[250018]: 2026-01-20 15:19:56.050 250022 DEBUG oslo_concurrency.lockutils [None req-24cf239e-c210-4b0e-9db4-d7ee16d82a06 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Lock "3fe66973-cc26-4136-af61-3a600ec10948-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:19:56 compute-0 nova_compute[250018]: 2026-01-20 15:19:56.052 250022 INFO nova.compute.manager [None req-24cf239e-c210-4b0e-9db4-d7ee16d82a06 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] [instance: 3fe66973-cc26-4136-af61-3a600ec10948] Terminating instance
Jan 20 15:19:56 compute-0 nova_compute[250018]: 2026-01-20 15:19:56.053 250022 DEBUG nova.compute.manager [None req-24cf239e-c210-4b0e-9db4-d7ee16d82a06 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] [instance: 3fe66973-cc26-4136-af61-3a600ec10948] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 15:19:56 compute-0 kernel: tap1ef1abd1-37 (unregistering): left promiscuous mode
Jan 20 15:19:56 compute-0 NetworkManager[48960]: <info>  [1768922396.1580] device (tap1ef1abd1-37): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 15:19:56 compute-0 nova_compute[250018]: 2026-01-20 15:19:56.168 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:19:56 compute-0 ovn_controller[148666]: 2026-01-20T15:19:56Z|00714|binding|INFO|Releasing lport 1ef1abd1-3781-489f-a588-0ddbcb6f4192 from this chassis (sb_readonly=0)
Jan 20 15:19:56 compute-0 ovn_controller[148666]: 2026-01-20T15:19:56Z|00715|binding|INFO|Setting lport 1ef1abd1-3781-489f-a588-0ddbcb6f4192 down in Southbound
Jan 20 15:19:56 compute-0 ovn_controller[148666]: 2026-01-20T15:19:56Z|00716|binding|INFO|Removing iface tap1ef1abd1-37 ovn-installed in OVS
Jan 20 15:19:56 compute-0 nova_compute[250018]: 2026-01-20 15:19:56.171 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:19:56 compute-0 nova_compute[250018]: 2026-01-20 15:19:56.190 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:19:56 compute-0 systemd[1]: machine-qemu\x2d86\x2dinstance\x2d000000c3.scope: Deactivated successfully.
Jan 20 15:19:56 compute-0 systemd[1]: machine-qemu\x2d86\x2dinstance\x2d000000c3.scope: Consumed 13.581s CPU time.
Jan 20 15:19:56 compute-0 systemd-machined[216401]: Machine qemu-86-instance-000000c3 terminated.
Jan 20 15:19:56 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:19:56.226 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a1:4e:99 10.100.0.5'], port_security=['fa:16:3e:a1:4e:99 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '3fe66973-cc26-4136-af61-3a600ec10948', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-76c2d716-7d14-4bc1-b83b-a3290ee99d9a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9156c0a9920c4721843416b9a44404f9', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ec7b0ec7-6cdf-4a7c-bed3-5b907626a659', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.180'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2bfc4e2a-eeed-480e-aa18-68fc6c8f2cc2, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=1ef1abd1-3781-489f-a588-0ddbcb6f4192) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:19:56 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:19:56.227 160071 INFO neutron.agent.ovn.metadata.agent [-] Port 1ef1abd1-3781-489f-a588-0ddbcb6f4192 in datapath 76c2d716-7d14-4bc1-b83b-a3290ee99d9a unbound from our chassis
Jan 20 15:19:56 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:19:56.228 160071 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 76c2d716-7d14-4bc1-b83b-a3290ee99d9a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 15:19:56 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:19:56.229 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[f869077f-8f14-46b1-a3e5-5ecabcbf01c0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:19:56 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:19:56.230 160071 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-76c2d716-7d14-4bc1-b83b-a3290ee99d9a namespace which is not needed anymore
Jan 20 15:19:56 compute-0 nova_compute[250018]: 2026-01-20 15:19:56.294 250022 INFO nova.virt.libvirt.driver [-] [instance: 3fe66973-cc26-4136-af61-3a600ec10948] Instance destroyed successfully.
Jan 20 15:19:56 compute-0 nova_compute[250018]: 2026-01-20 15:19:56.295 250022 DEBUG nova.objects.instance [None req-24cf239e-c210-4b0e-9db4-d7ee16d82a06 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Lazy-loading 'resources' on Instance uuid 3fe66973-cc26-4136-af61-3a600ec10948 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:19:56 compute-0 nova_compute[250018]: 2026-01-20 15:19:56.328 250022 DEBUG nova.virt.libvirt.vif [None req-24cf239e-c210-4b0e-9db4-d7ee16d82a06 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T15:19:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachVolumeNegativeTest-server-998097081',display_name='tempest-AttachVolumeNegativeTest-server-998097081',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachvolumenegativetest-server-998097081',id=195,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNfaK16zx4hGL4X06Tx3ufhY5e5we/GkGDDP0xP64L4n/DaigaofrP+riSKAvoz/7obxzJJuCTTIvoKdXhAEKZ+WnHpkoCJxpFQScPA3LmqJKTG5B0+342102Iu2o5ADTw==',key_name='tempest-keypair-224036004',keypairs=<?>,launch_index=0,launched_at=2026-01-20T15:19:34Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9156c0a9920c4721843416b9a44404f9',ramdisk_id='',reservation_id='r-diglwi2a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeNegativeTest-1505789262',owner_user_name='tempest-AttachVolumeNegativeTest-1505789262-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T15:19:34Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='cd9a8f26b71f4631a387e555e6b18428',uuid=3fe66973-cc26-4136-af61-3a600ec10948,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1ef1abd1-3781-489f-a588-0ddbcb6f4192", "address": "fa:16:3e:a1:4e:99", "network": {"id": "76c2d716-7d14-4bc1-b83b-a3290ee99d9a", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-782760714-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9156c0a9920c4721843416b9a44404f9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ef1abd1-37", "ovs_interfaceid": "1ef1abd1-3781-489f-a588-0ddbcb6f4192", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 15:19:56 compute-0 nova_compute[250018]: 2026-01-20 15:19:56.328 250022 DEBUG nova.network.os_vif_util [None req-24cf239e-c210-4b0e-9db4-d7ee16d82a06 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Converting VIF {"id": "1ef1abd1-3781-489f-a588-0ddbcb6f4192", "address": "fa:16:3e:a1:4e:99", "network": {"id": "76c2d716-7d14-4bc1-b83b-a3290ee99d9a", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-782760714-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9156c0a9920c4721843416b9a44404f9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ef1abd1-37", "ovs_interfaceid": "1ef1abd1-3781-489f-a588-0ddbcb6f4192", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:19:56 compute-0 nova_compute[250018]: 2026-01-20 15:19:56.329 250022 DEBUG nova.network.os_vif_util [None req-24cf239e-c210-4b0e-9db4-d7ee16d82a06 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a1:4e:99,bridge_name='br-int',has_traffic_filtering=True,id=1ef1abd1-3781-489f-a588-0ddbcb6f4192,network=Network(76c2d716-7d14-4bc1-b83b-a3290ee99d9a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1ef1abd1-37') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:19:56 compute-0 nova_compute[250018]: 2026-01-20 15:19:56.330 250022 DEBUG os_vif [None req-24cf239e-c210-4b0e-9db4-d7ee16d82a06 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:a1:4e:99,bridge_name='br-int',has_traffic_filtering=True,id=1ef1abd1-3781-489f-a588-0ddbcb6f4192,network=Network(76c2d716-7d14-4bc1-b83b-a3290ee99d9a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1ef1abd1-37') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 15:19:56 compute-0 nova_compute[250018]: 2026-01-20 15:19:56.331 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:19:56 compute-0 nova_compute[250018]: 2026-01-20 15:19:56.332 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1ef1abd1-37, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:19:56 compute-0 nova_compute[250018]: 2026-01-20 15:19:56.335 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:19:56 compute-0 nova_compute[250018]: 2026-01-20 15:19:56.336 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 15:19:56 compute-0 nova_compute[250018]: 2026-01-20 15:19:56.339 250022 INFO os_vif [None req-24cf239e-c210-4b0e-9db4-d7ee16d82a06 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:a1:4e:99,bridge_name='br-int',has_traffic_filtering=True,id=1ef1abd1-3781-489f-a588-0ddbcb6f4192,network=Network(76c2d716-7d14-4bc1-b83b-a3290ee99d9a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1ef1abd1-37')
Jan 20 15:19:56 compute-0 neutron-haproxy-ovnmeta-76c2d716-7d14-4bc1-b83b-a3290ee99d9a[367295]: [NOTICE]   (367299) : haproxy version is 2.8.14-c23fe91
Jan 20 15:19:56 compute-0 neutron-haproxy-ovnmeta-76c2d716-7d14-4bc1-b83b-a3290ee99d9a[367295]: [NOTICE]   (367299) : path to executable is /usr/sbin/haproxy
Jan 20 15:19:56 compute-0 neutron-haproxy-ovnmeta-76c2d716-7d14-4bc1-b83b-a3290ee99d9a[367295]: [WARNING]  (367299) : Exiting Master process...
Jan 20 15:19:56 compute-0 neutron-haproxy-ovnmeta-76c2d716-7d14-4bc1-b83b-a3290ee99d9a[367295]: [WARNING]  (367299) : Exiting Master process...
Jan 20 15:19:56 compute-0 neutron-haproxy-ovnmeta-76c2d716-7d14-4bc1-b83b-a3290ee99d9a[367295]: [ALERT]    (367299) : Current worker (367301) exited with code 143 (Terminated)
Jan 20 15:19:56 compute-0 neutron-haproxy-ovnmeta-76c2d716-7d14-4bc1-b83b-a3290ee99d9a[367295]: [WARNING]  (367299) : All workers exited. Exiting... (0)
Jan 20 15:19:56 compute-0 systemd[1]: libpod-5f4428faef815d44d75ae17121ebafbd0325f6359ebc3ae35d89d62dfd427549.scope: Deactivated successfully.
Jan 20 15:19:56 compute-0 podman[367453]: 2026-01-20 15:19:56.37679567 +0000 UTC m=+0.053277548 container died 5f4428faef815d44d75ae17121ebafbd0325f6359ebc3ae35d89d62dfd427549 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-76c2d716-7d14-4bc1-b83b-a3290ee99d9a, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 20 15:19:56 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5f4428faef815d44d75ae17121ebafbd0325f6359ebc3ae35d89d62dfd427549-userdata-shm.mount: Deactivated successfully.
Jan 20 15:19:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-eca00e0f467dbdf8505afd41197a022169f569e7e7d81208624a51120aee7d37-merged.mount: Deactivated successfully.
Jan 20 15:19:56 compute-0 podman[367453]: 2026-01-20 15:19:56.419298206 +0000 UTC m=+0.095780074 container cleanup 5f4428faef815d44d75ae17121ebafbd0325f6359ebc3ae35d89d62dfd427549 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-76c2d716-7d14-4bc1-b83b-a3290ee99d9a, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 20 15:19:56 compute-0 systemd[1]: libpod-conmon-5f4428faef815d44d75ae17121ebafbd0325f6359ebc3ae35d89d62dfd427549.scope: Deactivated successfully.
Jan 20 15:19:56 compute-0 podman[367499]: 2026-01-20 15:19:56.47801016 +0000 UTC m=+0.038987173 container remove 5f4428faef815d44d75ae17121ebafbd0325f6359ebc3ae35d89d62dfd427549 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-76c2d716-7d14-4bc1-b83b-a3290ee99d9a, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 20 15:19:56 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:19:56.483 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[62f62d78-7084-4042-975b-b7ab0566037a]: (4, ('Tue Jan 20 03:19:56 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-76c2d716-7d14-4bc1-b83b-a3290ee99d9a (5f4428faef815d44d75ae17121ebafbd0325f6359ebc3ae35d89d62dfd427549)\n5f4428faef815d44d75ae17121ebafbd0325f6359ebc3ae35d89d62dfd427549\nTue Jan 20 03:19:56 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-76c2d716-7d14-4bc1-b83b-a3290ee99d9a (5f4428faef815d44d75ae17121ebafbd0325f6359ebc3ae35d89d62dfd427549)\n5f4428faef815d44d75ae17121ebafbd0325f6359ebc3ae35d89d62dfd427549\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:19:56 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:19:56.485 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[471741d5-25e3-42b5-ab4b-380c00acd383]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:19:56 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:19:56.486 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap76c2d716-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:19:56 compute-0 nova_compute[250018]: 2026-01-20 15:19:56.488 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:19:56 compute-0 kernel: tap76c2d716-70: left promiscuous mode
Jan 20 15:19:56 compute-0 nova_compute[250018]: 2026-01-20 15:19:56.499 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:19:56 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:19:56.502 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[0488e67c-31d2-43bb-8617-fd5ed63902f5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:19:56 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:19:56.515 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[c48c0838-4d88-430f-a8ed-df6567e76ca3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:19:56 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:19:56.517 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[d06834e6-1370-4079-a874-3ab77a72e8b2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:19:56 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:19:56.531 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[09f9f928-bb3e-4e1e-8ba8-deb0cc9b9652]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 831173, 'reachable_time': 21204, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 367514, 'error': None, 'target': 'ovnmeta-76c2d716-7d14-4bc1-b83b-a3290ee99d9a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:19:56 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:19:56.533 160517 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-76c2d716-7d14-4bc1-b83b-a3290ee99d9a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 15:19:56 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:19:56.533 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[bb0e5ad9-6df3-4b96-8035-51439f806e22]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:19:56 compute-0 systemd[1]: run-netns-ovnmeta\x2d76c2d716\x2d7d14\x2d4bc1\x2db83b\x2da3290ee99d9a.mount: Deactivated successfully.
Jan 20 15:19:56 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:19:56 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:19:56 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:19:56.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:19:56 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2930: 321 pgs: 321 active+clean; 326 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 393 KiB/s rd, 3.9 MiB/s wr, 91 op/s
Jan 20 15:19:56 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:19:57 compute-0 ceph-mon[74360]: pgmap v2930: 321 pgs: 321 active+clean; 326 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 393 KiB/s rd, 3.9 MiB/s wr, 91 op/s
Jan 20 15:19:57 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:19:57 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:19:57 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:19:57.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:19:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 15:19:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 15:19:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 15:19:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 15:19:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 15:19:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 15:19:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 15:19:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 15:19:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 15:19:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 15:19:58 compute-0 nova_compute[250018]: 2026-01-20 15:19:58.051 250022 DEBUG nova.compute.manager [req-db8be280-5617-4c2a-b1e1-f7cde706e8ea req-17078b66-1810-40cc-8bc6-c9d1c22398a0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 3fe66973-cc26-4136-af61-3a600ec10948] Received event network-vif-unplugged-1ef1abd1-3781-489f-a588-0ddbcb6f4192 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:19:58 compute-0 nova_compute[250018]: 2026-01-20 15:19:58.052 250022 DEBUG oslo_concurrency.lockutils [req-db8be280-5617-4c2a-b1e1-f7cde706e8ea req-17078b66-1810-40cc-8bc6-c9d1c22398a0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "3fe66973-cc26-4136-af61-3a600ec10948-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:19:58 compute-0 nova_compute[250018]: 2026-01-20 15:19:58.052 250022 DEBUG oslo_concurrency.lockutils [req-db8be280-5617-4c2a-b1e1-f7cde706e8ea req-17078b66-1810-40cc-8bc6-c9d1c22398a0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "3fe66973-cc26-4136-af61-3a600ec10948-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:19:58 compute-0 nova_compute[250018]: 2026-01-20 15:19:58.053 250022 DEBUG oslo_concurrency.lockutils [req-db8be280-5617-4c2a-b1e1-f7cde706e8ea req-17078b66-1810-40cc-8bc6-c9d1c22398a0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "3fe66973-cc26-4136-af61-3a600ec10948-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:19:58 compute-0 nova_compute[250018]: 2026-01-20 15:19:58.053 250022 DEBUG nova.compute.manager [req-db8be280-5617-4c2a-b1e1-f7cde706e8ea req-17078b66-1810-40cc-8bc6-c9d1c22398a0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 3fe66973-cc26-4136-af61-3a600ec10948] No waiting events found dispatching network-vif-unplugged-1ef1abd1-3781-489f-a588-0ddbcb6f4192 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:19:58 compute-0 nova_compute[250018]: 2026-01-20 15:19:58.053 250022 DEBUG nova.compute.manager [req-db8be280-5617-4c2a-b1e1-f7cde706e8ea req-17078b66-1810-40cc-8bc6-c9d1c22398a0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 3fe66973-cc26-4136-af61-3a600ec10948] Received event network-vif-unplugged-1ef1abd1-3781-489f-a588-0ddbcb6f4192 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 20 15:19:58 compute-0 nova_compute[250018]: 2026-01-20 15:19:58.109 250022 INFO nova.virt.libvirt.driver [None req-24cf239e-c210-4b0e-9db4-d7ee16d82a06 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] [instance: 3fe66973-cc26-4136-af61-3a600ec10948] Deleting instance files /var/lib/nova/instances/3fe66973-cc26-4136-af61-3a600ec10948_del
Jan 20 15:19:58 compute-0 nova_compute[250018]: 2026-01-20 15:19:58.110 250022 INFO nova.virt.libvirt.driver [None req-24cf239e-c210-4b0e-9db4-d7ee16d82a06 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] [instance: 3fe66973-cc26-4136-af61-3a600ec10948] Deletion of /var/lib/nova/instances/3fe66973-cc26-4136-af61-3a600ec10948_del complete
Jan 20 15:19:58 compute-0 nova_compute[250018]: 2026-01-20 15:19:58.184 250022 INFO nova.compute.manager [None req-24cf239e-c210-4b0e-9db4-d7ee16d82a06 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] [instance: 3fe66973-cc26-4136-af61-3a600ec10948] Took 2.13 seconds to destroy the instance on the hypervisor.
Jan 20 15:19:58 compute-0 nova_compute[250018]: 2026-01-20 15:19:58.184 250022 DEBUG oslo.service.loopingcall [None req-24cf239e-c210-4b0e-9db4-d7ee16d82a06 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 15:19:58 compute-0 nova_compute[250018]: 2026-01-20 15:19:58.185 250022 DEBUG nova.compute.manager [-] [instance: 3fe66973-cc26-4136-af61-3a600ec10948] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 15:19:58 compute-0 nova_compute[250018]: 2026-01-20 15:19:58.185 250022 DEBUG nova.network.neutron [-] [instance: 3fe66973-cc26-4136-af61-3a600ec10948] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 15:19:58 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:19:58 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:19:58 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:19:58.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:19:58 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2931: 321 pgs: 321 active+clean; 326 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 385 KiB/s rd, 2.2 MiB/s wr, 75 op/s
Jan 20 15:19:58 compute-0 ceph-mon[74360]: pgmap v2931: 321 pgs: 321 active+clean; 326 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 385 KiB/s rd, 2.2 MiB/s wr, 75 op/s
Jan 20 15:19:59 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:19:59 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:19:59 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:19:59.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:19:59 compute-0 nova_compute[250018]: 2026-01-20 15:19:59.687 250022 DEBUG nova.network.neutron [-] [instance: 3fe66973-cc26-4136-af61-3a600ec10948] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:19:59 compute-0 nova_compute[250018]: 2026-01-20 15:19:59.705 250022 INFO nova.compute.manager [-] [instance: 3fe66973-cc26-4136-af61-3a600ec10948] Took 1.52 seconds to deallocate network for instance.
Jan 20 15:19:59 compute-0 nova_compute[250018]: 2026-01-20 15:19:59.780 250022 DEBUG oslo_concurrency.lockutils [None req-24cf239e-c210-4b0e-9db4-d7ee16d82a06 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:19:59 compute-0 nova_compute[250018]: 2026-01-20 15:19:59.780 250022 DEBUG oslo_concurrency.lockutils [None req-24cf239e-c210-4b0e-9db4-d7ee16d82a06 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:19:59 compute-0 nova_compute[250018]: 2026-01-20 15:19:59.818 250022 DEBUG nova.compute.manager [req-636fc2a5-09f6-4613-82e9-e4da79e0408d req-3240d33c-6a07-4450-a617-d2536bd79927 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 3fe66973-cc26-4136-af61-3a600ec10948] Received event network-vif-deleted-1ef1abd1-3781-489f-a588-0ddbcb6f4192 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:19:59 compute-0 nova_compute[250018]: 2026-01-20 15:19:59.831 250022 DEBUG oslo_concurrency.processutils [None req-24cf239e-c210-4b0e-9db4-d7ee16d82a06 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:20:00 compute-0 ceph-mon[74360]: log_channel(cluster) log [INF] : overall HEALTH_OK
Jan 20 15:20:00 compute-0 nova_compute[250018]: 2026-01-20 15:20:00.116 250022 DEBUG nova.compute.manager [req-c026c5fd-59ed-48c2-a257-b06ca5d9293a req-c39ef26b-e510-490d-9a81-46e926b5aeb1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 3fe66973-cc26-4136-af61-3a600ec10948] Received event network-vif-plugged-1ef1abd1-3781-489f-a588-0ddbcb6f4192 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:20:00 compute-0 nova_compute[250018]: 2026-01-20 15:20:00.116 250022 DEBUG oslo_concurrency.lockutils [req-c026c5fd-59ed-48c2-a257-b06ca5d9293a req-c39ef26b-e510-490d-9a81-46e926b5aeb1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "3fe66973-cc26-4136-af61-3a600ec10948-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:20:00 compute-0 nova_compute[250018]: 2026-01-20 15:20:00.117 250022 DEBUG oslo_concurrency.lockutils [req-c026c5fd-59ed-48c2-a257-b06ca5d9293a req-c39ef26b-e510-490d-9a81-46e926b5aeb1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "3fe66973-cc26-4136-af61-3a600ec10948-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:20:00 compute-0 nova_compute[250018]: 2026-01-20 15:20:00.117 250022 DEBUG oslo_concurrency.lockutils [req-c026c5fd-59ed-48c2-a257-b06ca5d9293a req-c39ef26b-e510-490d-9a81-46e926b5aeb1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "3fe66973-cc26-4136-af61-3a600ec10948-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:20:00 compute-0 nova_compute[250018]: 2026-01-20 15:20:00.117 250022 DEBUG nova.compute.manager [req-c026c5fd-59ed-48c2-a257-b06ca5d9293a req-c39ef26b-e510-490d-9a81-46e926b5aeb1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 3fe66973-cc26-4136-af61-3a600ec10948] No waiting events found dispatching network-vif-plugged-1ef1abd1-3781-489f-a588-0ddbcb6f4192 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:20:00 compute-0 nova_compute[250018]: 2026-01-20 15:20:00.117 250022 WARNING nova.compute.manager [req-c026c5fd-59ed-48c2-a257-b06ca5d9293a req-c39ef26b-e510-490d-9a81-46e926b5aeb1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 3fe66973-cc26-4136-af61-3a600ec10948] Received unexpected event network-vif-plugged-1ef1abd1-3781-489f-a588-0ddbcb6f4192 for instance with vm_state deleted and task_state None.
Jan 20 15:20:00 compute-0 ceph-mon[74360]: overall HEALTH_OK
Jan 20 15:20:00 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:20:00 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/18900664' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:20:00 compute-0 nova_compute[250018]: 2026-01-20 15:20:00.289 250022 DEBUG oslo_concurrency.processutils [None req-24cf239e-c210-4b0e-9db4-d7ee16d82a06 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:20:00 compute-0 nova_compute[250018]: 2026-01-20 15:20:00.294 250022 DEBUG nova.compute.provider_tree [None req-24cf239e-c210-4b0e-9db4-d7ee16d82a06 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:20:00 compute-0 nova_compute[250018]: 2026-01-20 15:20:00.309 250022 DEBUG nova.scheduler.client.report [None req-24cf239e-c210-4b0e-9db4-d7ee16d82a06 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:20:00 compute-0 nova_compute[250018]: 2026-01-20 15:20:00.341 250022 DEBUG oslo_concurrency.lockutils [None req-24cf239e-c210-4b0e-9db4-d7ee16d82a06 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.560s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:20:00 compute-0 nova_compute[250018]: 2026-01-20 15:20:00.366 250022 INFO nova.scheduler.client.report [None req-24cf239e-c210-4b0e-9db4-d7ee16d82a06 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Deleted allocations for instance 3fe66973-cc26-4136-af61-3a600ec10948
Jan 20 15:20:00 compute-0 nova_compute[250018]: 2026-01-20 15:20:00.421 250022 DEBUG oslo_concurrency.lockutils [None req-24cf239e-c210-4b0e-9db4-d7ee16d82a06 cd9a8f26b71f4631a387e555e6b18428 9156c0a9920c4721843416b9a44404f9 - - default default] Lock "3fe66973-cc26-4136-af61-3a600ec10948" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.371s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:20:00 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:20:00 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:20:00 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:20:00.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:20:00 compute-0 nova_compute[250018]: 2026-01-20 15:20:00.660 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:20:00 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2932: 321 pgs: 321 active+clean; 267 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 2.2 MiB/s wr, 125 op/s
Jan 20 15:20:01 compute-0 nova_compute[250018]: 2026-01-20 15:20:01.335 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:20:01 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/18900664' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:20:01 compute-0 ceph-mon[74360]: pgmap v2932: 321 pgs: 321 active+clean; 267 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 2.2 MiB/s wr, 125 op/s
Jan 20 15:20:01 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:20:01 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:20:01 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:20:01.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:20:01 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:20:02 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:20:02 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:20:02 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:20:02.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:20:02 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2933: 321 pgs: 321 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 593 KiB/s wr, 121 op/s
Jan 20 15:20:03 compute-0 nova_compute[250018]: 2026-01-20 15:20:03.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:20:03 compute-0 ceph-mon[74360]: pgmap v2933: 321 pgs: 321 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 593 KiB/s wr, 121 op/s
Jan 20 15:20:03 compute-0 sudo[367541]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:20:03 compute-0 sudo[367541]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:20:03 compute-0 sudo[367541]: pam_unix(sudo:session): session closed for user root
Jan 20 15:20:03 compute-0 sudo[367566]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:20:03 compute-0 sudo[367566]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:20:03 compute-0 sudo[367566]: pam_unix(sudo:session): session closed for user root
Jan 20 15:20:03 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:20:03 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:20:03 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:20:03.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:20:04 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:20:04 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:20:04 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:20:04.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:20:04 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2934: 321 pgs: 321 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 35 KiB/s wr, 103 op/s
Jan 20 15:20:04 compute-0 ceph-mon[74360]: pgmap v2934: 321 pgs: 321 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 35 KiB/s wr, 103 op/s
Jan 20 15:20:05 compute-0 nova_compute[250018]: 2026-01-20 15:20:05.659 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:20:05 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:20:05 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:20:05 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:20:05.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:20:05 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/3916580390' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:20:06 compute-0 nova_compute[250018]: 2026-01-20 15:20:06.337 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:20:06 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:20:06 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:20:06 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:20:06.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:20:06 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2935: 321 pgs: 321 active+clean; 230 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 36 KiB/s wr, 120 op/s
Jan 20 15:20:06 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/102669052' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:20:06 compute-0 ceph-mon[74360]: pgmap v2935: 321 pgs: 321 active+clean; 230 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 36 KiB/s wr, 120 op/s
Jan 20 15:20:06 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:20:07 compute-0 nova_compute[250018]: 2026-01-20 15:20:07.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:20:07 compute-0 nova_compute[250018]: 2026-01-20 15:20:07.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:20:07 compute-0 nova_compute[250018]: 2026-01-20 15:20:07.050 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 15:20:07 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:20:07 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:20:07 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:20:07.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:20:07 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/35866287' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:20:07 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/3825334878' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:20:07 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/703266583' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:20:08 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:20:08 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:20:08 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:20:08.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:20:08 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2936: 321 pgs: 321 active+clean; 230 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 11 KiB/s wr, 110 op/s
Jan 20 15:20:08 compute-0 ceph-mon[74360]: pgmap v2936: 321 pgs: 321 active+clean; 230 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 11 KiB/s wr, 110 op/s
Jan 20 15:20:08 compute-0 ceph-mon[74360]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #147. Immutable memtables: 0.
Jan 20 15:20:08 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:20:08.863559) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 15:20:08 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:856] [default] [JOB 89] Flushing memtable with next log file: 147
Jan 20 15:20:08 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768922408863658, "job": 89, "event": "flush_started", "num_memtables": 1, "num_entries": 1109, "num_deletes": 251, "total_data_size": 1730172, "memory_usage": 1764704, "flush_reason": "Manual Compaction"}
Jan 20 15:20:08 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:885] [default] [JOB 89] Level-0 flush table #148: started
Jan 20 15:20:08 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768922408878092, "cf_name": "default", "job": 89, "event": "table_file_creation", "file_number": 148, "file_size": 1698431, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 65095, "largest_seqno": 66202, "table_properties": {"data_size": 1693229, "index_size": 2661, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1477, "raw_key_size": 11553, "raw_average_key_size": 19, "raw_value_size": 1682677, "raw_average_value_size": 2891, "num_data_blocks": 118, "num_entries": 582, "num_filter_entries": 582, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768922312, "oldest_key_time": 1768922312, "file_creation_time": 1768922408, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1688fb6f-f397-4aac-a0e8-874ff91d97a7", "db_session_id": "5V3N2TVXYZBCXP55EZHK", "orig_file_number": 148, "seqno_to_time_mapping": "N/A"}}
Jan 20 15:20:08 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 89] Flush lasted 14599 microseconds, and 5963 cpu microseconds.
Jan 20 15:20:08 compute-0 ceph-mon[74360]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 15:20:08 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:20:08.878167) [db/flush_job.cc:967] [default] [JOB 89] Level-0 flush table #148: 1698431 bytes OK
Jan 20 15:20:08 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:20:08.878189) [db/memtable_list.cc:519] [default] Level-0 commit table #148 started
Jan 20 15:20:08 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:20:08.880059) [db/memtable_list.cc:722] [default] Level-0 commit table #148: memtable #1 done
Jan 20 15:20:08 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:20:08.880079) EVENT_LOG_v1 {"time_micros": 1768922408880073, "job": 89, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 15:20:08 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:20:08.880099) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 15:20:08 compute-0 ceph-mon[74360]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 89] Try to delete WAL files size 1725182, prev total WAL file size 1725182, number of live WAL files 2.
Jan 20 15:20:08 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000144.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:20:08 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:20:08.881346) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036303234' seq:72057594037927935, type:22 .. '7061786F730036323736' seq:0, type:0; will stop at (end)
Jan 20 15:20:08 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 90] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 15:20:08 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 89 Base level 0, inputs: [148(1658KB)], [146(12MB)]
Jan 20 15:20:08 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768922408881439, "job": 90, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [148], "files_L6": [146], "score": -1, "input_data_size": 15212635, "oldest_snapshot_seqno": -1}
Jan 20 15:20:08 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 90] Generated table #149: 9461 keys, 13363212 bytes, temperature: kUnknown
Jan 20 15:20:08 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768922408970771, "cf_name": "default", "job": 90, "event": "table_file_creation", "file_number": 149, "file_size": 13363212, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13299918, "index_size": 38541, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 23685, "raw_key_size": 249152, "raw_average_key_size": 26, "raw_value_size": 13131681, "raw_average_value_size": 1387, "num_data_blocks": 1475, "num_entries": 9461, "num_filter_entries": 9461, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768917305, "oldest_key_time": 0, "file_creation_time": 1768922408, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1688fb6f-f397-4aac-a0e8-874ff91d97a7", "db_session_id": "5V3N2TVXYZBCXP55EZHK", "orig_file_number": 149, "seqno_to_time_mapping": "N/A"}}
Jan 20 15:20:08 compute-0 ceph-mon[74360]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 15:20:08 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:20:08.971038) [db/compaction/compaction_job.cc:1663] [default] [JOB 90] Compacted 1@0 + 1@6 files to L6 => 13363212 bytes
Jan 20 15:20:08 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:20:08.972189) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 170.2 rd, 149.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.6, 12.9 +0.0 blob) out(12.7 +0.0 blob), read-write-amplify(16.8) write-amplify(7.9) OK, records in: 9976, records dropped: 515 output_compression: NoCompression
Jan 20 15:20:08 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:20:08.972203) EVENT_LOG_v1 {"time_micros": 1768922408972196, "job": 90, "event": "compaction_finished", "compaction_time_micros": 89402, "compaction_time_cpu_micros": 31612, "output_level": 6, "num_output_files": 1, "total_output_size": 13363212, "num_input_records": 9976, "num_output_records": 9461, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 15:20:08 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000148.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:20:08 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768922408972725, "job": 90, "event": "table_file_deletion", "file_number": 148}
Jan 20 15:20:08 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000146.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:20:08 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768922408975133, "job": 90, "event": "table_file_deletion", "file_number": 146}
Jan 20 15:20:08 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:20:08.881244) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:20:08 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:20:08.975195) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:20:08 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:20:08.975199) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:20:08 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:20:08.975201) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:20:08 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:20:08.975202) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:20:08 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:20:08.975203) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:20:09 compute-0 nova_compute[250018]: 2026-01-20 15:20:09.045 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:20:09 compute-0 nova_compute[250018]: 2026-01-20 15:20:09.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:20:09 compute-0 nova_compute[250018]: 2026-01-20 15:20:09.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:20:09 compute-0 nova_compute[250018]: 2026-01-20 15:20:09.069 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:20:09 compute-0 nova_compute[250018]: 2026-01-20 15:20:09.069 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:20:09 compute-0 nova_compute[250018]: 2026-01-20 15:20:09.069 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:20:09 compute-0 nova_compute[250018]: 2026-01-20 15:20:09.070 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 15:20:09 compute-0 nova_compute[250018]: 2026-01-20 15:20:09.070 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:20:09 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:20:09 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3480512486' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:20:09 compute-0 nova_compute[250018]: 2026-01-20 15:20:09.513 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:20:09 compute-0 podman[367619]: 2026-01-20 15:20:09.649455512 +0000 UTC m=+0.090774600 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Jan 20 15:20:09 compute-0 podman[367618]: 2026-01-20 15:20:09.681877236 +0000 UTC m=+0.123065290 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 20 15:20:09 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:20:09 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:20:09 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:20:09.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:20:09 compute-0 nova_compute[250018]: 2026-01-20 15:20:09.722 250022 WARNING nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:20:09 compute-0 nova_compute[250018]: 2026-01-20 15:20:09.723 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4231MB free_disk=20.931278228759766GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 15:20:09 compute-0 nova_compute[250018]: 2026-01-20 15:20:09.724 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:20:09 compute-0 nova_compute[250018]: 2026-01-20 15:20:09.724 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:20:09 compute-0 nova_compute[250018]: 2026-01-20 15:20:09.783 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 15:20:09 compute-0 nova_compute[250018]: 2026-01-20 15:20:09.783 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 15:20:09 compute-0 nova_compute[250018]: 2026-01-20 15:20:09.799 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:20:09 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3480512486' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:20:10 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:20:10 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3688265108' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:20:10 compute-0 nova_compute[250018]: 2026-01-20 15:20:10.211 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.412s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:20:10 compute-0 nova_compute[250018]: 2026-01-20 15:20:10.216 250022 DEBUG nova.compute.provider_tree [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:20:10 compute-0 nova_compute[250018]: 2026-01-20 15:20:10.232 250022 DEBUG nova.scheduler.client.report [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:20:10 compute-0 nova_compute[250018]: 2026-01-20 15:20:10.260 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 15:20:10 compute-0 nova_compute[250018]: 2026-01-20 15:20:10.261 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.537s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:20:10 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:20:10 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:20:10 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:20:10.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:20:10 compute-0 nova_compute[250018]: 2026-01-20 15:20:10.662 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:20:10 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2937: 321 pgs: 321 active+clean; 190 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.4 MiB/s wr, 171 op/s
Jan 20 15:20:10 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3688265108' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:20:10 compute-0 ceph-mon[74360]: pgmap v2937: 321 pgs: 321 active+clean; 190 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.4 MiB/s wr, 171 op/s
Jan 20 15:20:11 compute-0 nova_compute[250018]: 2026-01-20 15:20:11.292 250022 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768922396.2908924, 3fe66973-cc26-4136-af61-3a600ec10948 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:20:11 compute-0 nova_compute[250018]: 2026-01-20 15:20:11.293 250022 INFO nova.compute.manager [-] [instance: 3fe66973-cc26-4136-af61-3a600ec10948] VM Stopped (Lifecycle Event)
Jan 20 15:20:11 compute-0 nova_compute[250018]: 2026-01-20 15:20:11.333 250022 DEBUG nova.compute.manager [None req-825c1e24-d50d-4a77-8e65-bc7fbe5961d3 - - - - - -] [instance: 3fe66973-cc26-4136-af61-3a600ec10948] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:20:11 compute-0 nova_compute[250018]: 2026-01-20 15:20:11.338 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:20:11 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:20:11 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:20:11 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:20:11.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:20:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 15:20:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:20:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 20 15:20:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:20:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.001774097629350456 of space, bias 1.0, pg target 0.5322292888051368 quantized to 32 (current 32)
Jan 20 15:20:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:20:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021617782198027173 of space, bias 1.0, pg target 0.6485334659408152 quantized to 32 (current 32)
Jan 20 15:20:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:20:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 20 15:20:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:20:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 20 15:20:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:20:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 20 15:20:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:20:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 15:20:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:20:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 20 15:20:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:20:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 20 15:20:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:20:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 15:20:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:20:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 20 15:20:11 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:20:12 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:20:12 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:20:12 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:20:12.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:20:12 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2938: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 2.1 MiB/s wr, 135 op/s
Jan 20 15:20:13 compute-0 ceph-mon[74360]: pgmap v2938: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 2.1 MiB/s wr, 135 op/s
Jan 20 15:20:13 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:20:13 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:20:13 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:20:13.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:20:14 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/3382626955' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:20:14 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/3382626955' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:20:14 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/2383700053' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:20:14 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:20:14 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:20:14 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:20:14.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:20:14 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2939: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 345 KiB/s rd, 2.1 MiB/s wr, 91 op/s
Jan 20 15:20:15 compute-0 ceph-mon[74360]: pgmap v2939: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 345 KiB/s rd, 2.1 MiB/s wr, 91 op/s
Jan 20 15:20:15 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:20:15 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:20:15 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:20:15.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:20:15 compute-0 nova_compute[250018]: 2026-01-20 15:20:15.706 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:20:16 compute-0 nova_compute[250018]: 2026-01-20 15:20:16.340 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:20:16 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:20:16 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:20:16 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:20:16.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:20:16 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2940: 321 pgs: 321 active+clean; 240 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 363 KiB/s rd, 3.6 MiB/s wr, 119 op/s
Jan 20 15:20:16 compute-0 ceph-mon[74360]: pgmap v2940: 321 pgs: 321 active+clean; 240 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 363 KiB/s rd, 3.6 MiB/s wr, 119 op/s
Jan 20 15:20:16 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:20:16 compute-0 sshd-session[367688]: Invalid user admin from 134.122.57.138 port 58066
Jan 20 15:20:17 compute-0 sshd-session[367688]: Connection closed by invalid user admin 134.122.57.138 port 58066 [preauth]
Jan 20 15:20:17 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:20:17 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:20:17 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:20:17.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:20:17 compute-0 sudo[367691]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:20:17 compute-0 sudo[367691]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:20:17 compute-0 sudo[367691]: pam_unix(sudo:session): session closed for user root
Jan 20 15:20:17 compute-0 sudo[367716]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:20:17 compute-0 sudo[367716]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:20:18 compute-0 sudo[367716]: pam_unix(sudo:session): session closed for user root
Jan 20 15:20:18 compute-0 sudo[367741]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:20:18 compute-0 sudo[367741]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:20:18 compute-0 sudo[367741]: pam_unix(sudo:session): session closed for user root
Jan 20 15:20:18 compute-0 sudo[367766]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 15:20:18 compute-0 sudo[367766]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:20:18 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 20 15:20:18 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 20 15:20:18 compute-0 sudo[367766]: pam_unix(sudo:session): session closed for user root
Jan 20 15:20:18 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:20:18 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:20:18 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:20:18 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:20:18.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:20:18 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 20 15:20:18 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Jan 20 15:20:18 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 20 15:20:18 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2941: 321 pgs: 321 active+clean; 240 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 351 KiB/s rd, 3.6 MiB/s wr, 101 op/s
Jan 20 15:20:18 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:20:18 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 20 15:20:19 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:20:19 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:20:19 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:20:19 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 20 15:20:19 compute-0 ceph-mon[74360]: pgmap v2941: 321 pgs: 321 active+clean; 240 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 351 KiB/s rd, 3.6 MiB/s wr, 101 op/s
Jan 20 15:20:19 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:20:19 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:20:19 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:20:19 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:20:19 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:20:19 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:20:19.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:20:20 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Jan 20 15:20:20 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 20 15:20:20 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 15:20:20 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:20:20 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 20 15:20:20 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 15:20:20 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 20 15:20:20 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:20:20 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev a2a12c0c-60a7-4c22-9910-48c9be57e401 does not exist
Jan 20 15:20:20 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev cc0e3a0c-ac74-40dd-8b8d-d11a0d7c5a67 does not exist
Jan 20 15:20:20 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 0bc7b702-c20e-4294-8be2-77be49f99ff9 does not exist
Jan 20 15:20:20 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 20 15:20:20 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 15:20:20 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 20 15:20:20 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 15:20:20 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 15:20:20 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:20:20 compute-0 sudo[367824]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:20:20 compute-0 sudo[367824]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:20:20 compute-0 sudo[367824]: pam_unix(sudo:session): session closed for user root
Jan 20 15:20:20 compute-0 sudo[367849]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:20:20 compute-0 sudo[367849]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:20:20 compute-0 sudo[367849]: pam_unix(sudo:session): session closed for user root
Jan 20 15:20:20 compute-0 sudo[367875]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:20:20 compute-0 sudo[367875]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:20:20 compute-0 sudo[367875]: pam_unix(sudo:session): session closed for user root
Jan 20 15:20:20 compute-0 nova_compute[250018]: 2026-01-20 15:20:20.262 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:20:20 compute-0 sudo[367900]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 15:20:20 compute-0 sudo[367900]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:20:20 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 20 15:20:20 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:20:20 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 15:20:20 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:20:20 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 15:20:20 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 15:20:20 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:20:20 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:20:20 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:20:20 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:20:20.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:20:20 compute-0 podman[367965]: 2026-01-20 15:20:20.599728566 +0000 UTC m=+0.035960101 container create 22fba31086f41b7f7a81d668616727832636b9bca3e66697c4304c3fde9d6b41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_greider, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 15:20:20 compute-0 systemd[1]: Started libpod-conmon-22fba31086f41b7f7a81d668616727832636b9bca3e66697c4304c3fde9d6b41.scope.
Jan 20 15:20:20 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:20:20 compute-0 podman[367965]: 2026-01-20 15:20:20.583345934 +0000 UTC m=+0.019577489 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:20:20 compute-0 podman[367965]: 2026-01-20 15:20:20.681009409 +0000 UTC m=+0.117240974 container init 22fba31086f41b7f7a81d668616727832636b9bca3e66697c4304c3fde9d6b41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_greider, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 20 15:20:20 compute-0 podman[367965]: 2026-01-20 15:20:20.687679189 +0000 UTC m=+0.123910724 container start 22fba31086f41b7f7a81d668616727832636b9bca3e66697c4304c3fde9d6b41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_greider, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 20 15:20:20 compute-0 podman[367965]: 2026-01-20 15:20:20.690950007 +0000 UTC m=+0.127181582 container attach 22fba31086f41b7f7a81d668616727832636b9bca3e66697c4304c3fde9d6b41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_greider, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 20 15:20:20 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2942: 321 pgs: 321 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 352 KiB/s rd, 3.9 MiB/s wr, 105 op/s
Jan 20 15:20:20 compute-0 relaxed_greider[367981]: 167 167
Jan 20 15:20:20 compute-0 systemd[1]: libpod-22fba31086f41b7f7a81d668616727832636b9bca3e66697c4304c3fde9d6b41.scope: Deactivated successfully.
Jan 20 15:20:20 compute-0 podman[367965]: 2026-01-20 15:20:20.69585578 +0000 UTC m=+0.132087315 container died 22fba31086f41b7f7a81d668616727832636b9bca3e66697c4304c3fde9d6b41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_greider, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 20 15:20:20 compute-0 nova_compute[250018]: 2026-01-20 15:20:20.707 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:20:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-a99a32c6310052240fab6fc328d26a3bc90707230c0967a3ed355948211d6595-merged.mount: Deactivated successfully.
Jan 20 15:20:20 compute-0 podman[367965]: 2026-01-20 15:20:20.730811123 +0000 UTC m=+0.167042658 container remove 22fba31086f41b7f7a81d668616727832636b9bca3e66697c4304c3fde9d6b41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_greider, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 20 15:20:20 compute-0 systemd[1]: libpod-conmon-22fba31086f41b7f7a81d668616727832636b9bca3e66697c4304c3fde9d6b41.scope: Deactivated successfully.
Jan 20 15:20:20 compute-0 podman[368005]: 2026-01-20 15:20:20.882996377 +0000 UTC m=+0.041868539 container create cc3ed9f3e841dad00137e8e1781eab120545ca01f32a5d14c887c423c303ad58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_cori, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 20 15:20:20 compute-0 systemd[1]: Started libpod-conmon-cc3ed9f3e841dad00137e8e1781eab120545ca01f32a5d14c887c423c303ad58.scope.
Jan 20 15:20:20 compute-0 podman[368005]: 2026-01-20 15:20:20.866583575 +0000 UTC m=+0.025455757 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:20:20 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:20:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9254d5267ae005677ce2493f517fb63fb0086fe75aa2be542873da968dedcc1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 15:20:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9254d5267ae005677ce2493f517fb63fb0086fe75aa2be542873da968dedcc1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 15:20:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9254d5267ae005677ce2493f517fb63fb0086fe75aa2be542873da968dedcc1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 15:20:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9254d5267ae005677ce2493f517fb63fb0086fe75aa2be542873da968dedcc1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 15:20:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9254d5267ae005677ce2493f517fb63fb0086fe75aa2be542873da968dedcc1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 15:20:20 compute-0 podman[368005]: 2026-01-20 15:20:20.981575897 +0000 UTC m=+0.140448079 container init cc3ed9f3e841dad00137e8e1781eab120545ca01f32a5d14c887c423c303ad58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_cori, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 15:20:20 compute-0 podman[368005]: 2026-01-20 15:20:20.99170038 +0000 UTC m=+0.150572542 container start cc3ed9f3e841dad00137e8e1781eab120545ca01f32a5d14c887c423c303ad58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_cori, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 15:20:20 compute-0 podman[368005]: 2026-01-20 15:20:20.995104521 +0000 UTC m=+0.153976713 container attach cc3ed9f3e841dad00137e8e1781eab120545ca01f32a5d14c887c423c303ad58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_cori, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 20 15:20:21 compute-0 nova_compute[250018]: 2026-01-20 15:20:21.381 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:20:21 compute-0 ceph-mon[74360]: pgmap v2942: 321 pgs: 321 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 352 KiB/s rd, 3.9 MiB/s wr, 105 op/s
Jan 20 15:20:21 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/3496344543' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:20:21 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/243034538' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:20:21 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:20:21 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:20:21 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:20:21.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:20:21 compute-0 optimistic_cori[368021]: --> passed data devices: 0 physical, 1 LVM
Jan 20 15:20:21 compute-0 optimistic_cori[368021]: --> relative data size: 1.0
Jan 20 15:20:21 compute-0 optimistic_cori[368021]: --> All data devices are unavailable
Jan 20 15:20:21 compute-0 systemd[1]: libpod-cc3ed9f3e841dad00137e8e1781eab120545ca01f32a5d14c887c423c303ad58.scope: Deactivated successfully.
Jan 20 15:20:21 compute-0 podman[368005]: 2026-01-20 15:20:21.809788588 +0000 UTC m=+0.968660770 container died cc3ed9f3e841dad00137e8e1781eab120545ca01f32a5d14c887c423c303ad58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_cori, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 20 15:20:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-e9254d5267ae005677ce2493f517fb63fb0086fe75aa2be542873da968dedcc1-merged.mount: Deactivated successfully.
Jan 20 15:20:21 compute-0 podman[368005]: 2026-01-20 15:20:21.857890796 +0000 UTC m=+1.016762948 container remove cc3ed9f3e841dad00137e8e1781eab120545ca01f32a5d14c887c423c303ad58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_cori, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 15:20:21 compute-0 systemd[1]: libpod-conmon-cc3ed9f3e841dad00137e8e1781eab120545ca01f32a5d14c887c423c303ad58.scope: Deactivated successfully.
Jan 20 15:20:21 compute-0 sudo[367900]: pam_unix(sudo:session): session closed for user root
Jan 20 15:20:21 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:20:21 compute-0 sudo[368048]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:20:21 compute-0 sudo[368048]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:20:21 compute-0 sudo[368048]: pam_unix(sudo:session): session closed for user root
Jan 20 15:20:21 compute-0 sudo[368073]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:20:21 compute-0 sudo[368073]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:20:21 compute-0 sudo[368073]: pam_unix(sudo:session): session closed for user root
Jan 20 15:20:22 compute-0 sudo[368098]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:20:22 compute-0 sudo[368098]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:20:22 compute-0 sudo[368098]: pam_unix(sudo:session): session closed for user root
Jan 20 15:20:22 compute-0 sudo[368123]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- lvm list --format json
Jan 20 15:20:22 compute-0 sudo[368123]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:20:22 compute-0 podman[368190]: 2026-01-20 15:20:22.396747132 +0000 UTC m=+0.037166594 container create 713027ea623a7b8e2ae532cc0179463a2e9eed21e81907edbf4fa9bd375e0d14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_heisenberg, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 20 15:20:22 compute-0 systemd[1]: Started libpod-conmon-713027ea623a7b8e2ae532cc0179463a2e9eed21e81907edbf4fa9bd375e0d14.scope.
Jan 20 15:20:22 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:20:22 compute-0 podman[368190]: 2026-01-20 15:20:22.467984652 +0000 UTC m=+0.108404134 container init 713027ea623a7b8e2ae532cc0179463a2e9eed21e81907edbf4fa9bd375e0d14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_heisenberg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 15:20:22 compute-0 podman[368190]: 2026-01-20 15:20:22.379329771 +0000 UTC m=+0.019749253 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:20:22 compute-0 podman[368190]: 2026-01-20 15:20:22.475829735 +0000 UTC m=+0.116249197 container start 713027ea623a7b8e2ae532cc0179463a2e9eed21e81907edbf4fa9bd375e0d14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_heisenberg, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 20 15:20:22 compute-0 podman[368190]: 2026-01-20 15:20:22.479707989 +0000 UTC m=+0.120127451 container attach 713027ea623a7b8e2ae532cc0179463a2e9eed21e81907edbf4fa9bd375e0d14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_heisenberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 15:20:22 compute-0 hardcore_heisenberg[368206]: 167 167
Jan 20 15:20:22 compute-0 systemd[1]: libpod-713027ea623a7b8e2ae532cc0179463a2e9eed21e81907edbf4fa9bd375e0d14.scope: Deactivated successfully.
Jan 20 15:20:22 compute-0 podman[368190]: 2026-01-20 15:20:22.480952053 +0000 UTC m=+0.121371525 container died 713027ea623a7b8e2ae532cc0179463a2e9eed21e81907edbf4fa9bd375e0d14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_heisenberg, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 15:20:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-f2b1c16d5c585ed9726a6631c96e6f99992778f3815bf85fee4c98573ebe7f24-merged.mount: Deactivated successfully.
Jan 20 15:20:22 compute-0 podman[368190]: 2026-01-20 15:20:22.518262619 +0000 UTC m=+0.158682081 container remove 713027ea623a7b8e2ae532cc0179463a2e9eed21e81907edbf4fa9bd375e0d14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_heisenberg, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True)
Jan 20 15:20:22 compute-0 systemd[1]: libpod-conmon-713027ea623a7b8e2ae532cc0179463a2e9eed21e81907edbf4fa9bd375e0d14.scope: Deactivated successfully.
Jan 20 15:20:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:20:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:20:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:20:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:20:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:20:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:20:22 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:20:22 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:20:22 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:20:22.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:20:22 compute-0 podman[368233]: 2026-01-20 15:20:22.663510637 +0000 UTC m=+0.039358092 container create a472abe493cd1b87ad733c25034591047f8a3086d16788254f27564d637c1245 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_brahmagupta, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 20 15:20:22 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/409901699' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:20:22 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2943: 321 pgs: 321 active+clean; 215 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 105 KiB/s rd, 2.5 MiB/s wr, 58 op/s
Jan 20 15:20:22 compute-0 systemd[1]: Started libpod-conmon-a472abe493cd1b87ad733c25034591047f8a3086d16788254f27564d637c1245.scope.
Jan 20 15:20:22 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:20:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/314ed6555f760716647a49f59fac664d587eabbda37859b4e25e049f0d5451d2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 15:20:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/314ed6555f760716647a49f59fac664d587eabbda37859b4e25e049f0d5451d2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 15:20:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/314ed6555f760716647a49f59fac664d587eabbda37859b4e25e049f0d5451d2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 15:20:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/314ed6555f760716647a49f59fac664d587eabbda37859b4e25e049f0d5451d2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 15:20:22 compute-0 podman[368233]: 2026-01-20 15:20:22.648255116 +0000 UTC m=+0.024102591 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:20:22 compute-0 podman[368233]: 2026-01-20 15:20:22.743941678 +0000 UTC m=+0.119789133 container init a472abe493cd1b87ad733c25034591047f8a3086d16788254f27564d637c1245 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_brahmagupta, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 20 15:20:22 compute-0 podman[368233]: 2026-01-20 15:20:22.752753855 +0000 UTC m=+0.128601310 container start a472abe493cd1b87ad733c25034591047f8a3086d16788254f27564d637c1245 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_brahmagupta, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True)
Jan 20 15:20:22 compute-0 podman[368233]: 2026-01-20 15:20:22.756067314 +0000 UTC m=+0.131914799 container attach a472abe493cd1b87ad733c25034591047f8a3086d16788254f27564d637c1245 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_brahmagupta, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 20 15:20:23 compute-0 nova_compute[250018]: 2026-01-20 15:20:23.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:20:23 compute-0 nova_compute[250018]: 2026-01-20 15:20:23.052 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 15:20:23 compute-0 nova_compute[250018]: 2026-01-20 15:20:23.052 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 15:20:23 compute-0 nova_compute[250018]: 2026-01-20 15:20:23.082 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 20 15:20:23 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:20:23.124 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=66, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=65) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:20:23 compute-0 nova_compute[250018]: 2026-01-20 15:20:23.124 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:20:23 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:20:23.125 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 15:20:23 compute-0 epic_brahmagupta[368249]: {
Jan 20 15:20:23 compute-0 epic_brahmagupta[368249]:     "0": [
Jan 20 15:20:23 compute-0 epic_brahmagupta[368249]:         {
Jan 20 15:20:23 compute-0 epic_brahmagupta[368249]:             "devices": [
Jan 20 15:20:23 compute-0 epic_brahmagupta[368249]:                 "/dev/loop3"
Jan 20 15:20:23 compute-0 epic_brahmagupta[368249]:             ],
Jan 20 15:20:23 compute-0 epic_brahmagupta[368249]:             "lv_name": "ceph_lv0",
Jan 20 15:20:23 compute-0 epic_brahmagupta[368249]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 15:20:23 compute-0 epic_brahmagupta[368249]:             "lv_size": "7511998464",
Jan 20 15:20:23 compute-0 epic_brahmagupta[368249]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e399cf45-e6b6-5393-99f1-75c601d3f188,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afc3740a-ccee-46f8-b343-22648cc89376,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 20 15:20:23 compute-0 epic_brahmagupta[368249]:             "lv_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 15:20:23 compute-0 epic_brahmagupta[368249]:             "name": "ceph_lv0",
Jan 20 15:20:23 compute-0 epic_brahmagupta[368249]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 15:20:23 compute-0 epic_brahmagupta[368249]:             "tags": {
Jan 20 15:20:23 compute-0 epic_brahmagupta[368249]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 15:20:23 compute-0 epic_brahmagupta[368249]:                 "ceph.block_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 15:20:23 compute-0 epic_brahmagupta[368249]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 15:20:23 compute-0 epic_brahmagupta[368249]:                 "ceph.cluster_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 15:20:23 compute-0 epic_brahmagupta[368249]:                 "ceph.cluster_name": "ceph",
Jan 20 15:20:23 compute-0 epic_brahmagupta[368249]:                 "ceph.crush_device_class": "",
Jan 20 15:20:23 compute-0 epic_brahmagupta[368249]:                 "ceph.encrypted": "0",
Jan 20 15:20:23 compute-0 epic_brahmagupta[368249]:                 "ceph.osd_fsid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 15:20:23 compute-0 epic_brahmagupta[368249]:                 "ceph.osd_id": "0",
Jan 20 15:20:23 compute-0 epic_brahmagupta[368249]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 15:20:23 compute-0 epic_brahmagupta[368249]:                 "ceph.type": "block",
Jan 20 15:20:23 compute-0 epic_brahmagupta[368249]:                 "ceph.vdo": "0"
Jan 20 15:20:23 compute-0 epic_brahmagupta[368249]:             },
Jan 20 15:20:23 compute-0 epic_brahmagupta[368249]:             "type": "block",
Jan 20 15:20:23 compute-0 epic_brahmagupta[368249]:             "vg_name": "ceph_vg0"
Jan 20 15:20:23 compute-0 epic_brahmagupta[368249]:         }
Jan 20 15:20:23 compute-0 epic_brahmagupta[368249]:     ]
Jan 20 15:20:23 compute-0 epic_brahmagupta[368249]: }
Jan 20 15:20:23 compute-0 systemd[1]: libpod-a472abe493cd1b87ad733c25034591047f8a3086d16788254f27564d637c1245.scope: Deactivated successfully.
Jan 20 15:20:23 compute-0 podman[368258]: 2026-01-20 15:20:23.530057653 +0000 UTC m=+0.025025327 container died a472abe493cd1b87ad733c25034591047f8a3086d16788254f27564d637c1245 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_brahmagupta, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 20 15:20:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-314ed6555f760716647a49f59fac664d587eabbda37859b4e25e049f0d5451d2-merged.mount: Deactivated successfully.
Jan 20 15:20:23 compute-0 sudo[368264]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:20:23 compute-0 sudo[368264]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:20:23 compute-0 sudo[368264]: pam_unix(sudo:session): session closed for user root
Jan 20 15:20:23 compute-0 podman[368258]: 2026-01-20 15:20:23.578280553 +0000 UTC m=+0.073248197 container remove a472abe493cd1b87ad733c25034591047f8a3086d16788254f27564d637c1245 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_brahmagupta, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 20 15:20:23 compute-0 systemd[1]: libpod-conmon-a472abe493cd1b87ad733c25034591047f8a3086d16788254f27564d637c1245.scope: Deactivated successfully.
Jan 20 15:20:23 compute-0 sudo[368123]: pam_unix(sudo:session): session closed for user root
Jan 20 15:20:23 compute-0 sudo[368296]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:20:23 compute-0 sudo[368296]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:20:23 compute-0 sudo[368296]: pam_unix(sudo:session): session closed for user root
Jan 20 15:20:23 compute-0 sudo[368319]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:20:23 compute-0 sudo[368319]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:20:23 compute-0 sudo[368319]: pam_unix(sudo:session): session closed for user root
Jan 20 15:20:23 compute-0 ceph-mon[74360]: pgmap v2943: 321 pgs: 321 active+clean; 215 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 105 KiB/s rd, 2.5 MiB/s wr, 58 op/s
Jan 20 15:20:23 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/3883154021' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:20:23 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:20:23 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:20:23 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:20:23.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:20:23 compute-0 sudo[368346]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:20:23 compute-0 sudo[368346]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:20:23 compute-0 sudo[368346]: pam_unix(sudo:session): session closed for user root
Jan 20 15:20:23 compute-0 sudo[368371]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:20:23 compute-0 sudo[368371]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:20:23 compute-0 sudo[368371]: pam_unix(sudo:session): session closed for user root
Jan 20 15:20:23 compute-0 sudo[368396]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- raw list --format json
Jan 20 15:20:23 compute-0 sudo[368396]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:20:24 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:20:24.127 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=367c1a2c-b16a-4828-ab5a-626bb50023b4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '66'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:20:24 compute-0 podman[368462]: 2026-01-20 15:20:24.16533622 +0000 UTC m=+0.041738428 container create 8919a9fcfa61d66050412ddf1d3828f1049cc2e47bf3851c0af753ddd757b33a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_yonath, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 15:20:24 compute-0 systemd[1]: Started libpod-conmon-8919a9fcfa61d66050412ddf1d3828f1049cc2e47bf3851c0af753ddd757b33a.scope.
Jan 20 15:20:24 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:20:24 compute-0 podman[368462]: 2026-01-20 15:20:24.237625059 +0000 UTC m=+0.114027257 container init 8919a9fcfa61d66050412ddf1d3828f1049cc2e47bf3851c0af753ddd757b33a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_yonath, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 15:20:24 compute-0 podman[368462]: 2026-01-20 15:20:24.14498158 +0000 UTC m=+0.021383798 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:20:24 compute-0 podman[368462]: 2026-01-20 15:20:24.244145115 +0000 UTC m=+0.120547313 container start 8919a9fcfa61d66050412ddf1d3828f1049cc2e47bf3851c0af753ddd757b33a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_yonath, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 20 15:20:24 compute-0 crazy_yonath[368479]: 167 167
Jan 20 15:20:24 compute-0 podman[368462]: 2026-01-20 15:20:24.247726952 +0000 UTC m=+0.124129150 container attach 8919a9fcfa61d66050412ddf1d3828f1049cc2e47bf3851c0af753ddd757b33a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_yonath, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 15:20:24 compute-0 systemd[1]: libpod-8919a9fcfa61d66050412ddf1d3828f1049cc2e47bf3851c0af753ddd757b33a.scope: Deactivated successfully.
Jan 20 15:20:24 compute-0 podman[368462]: 2026-01-20 15:20:24.249055358 +0000 UTC m=+0.125457566 container died 8919a9fcfa61d66050412ddf1d3828f1049cc2e47bf3851c0af753ddd757b33a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_yonath, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 20 15:20:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-e167f9f9536b1540209de464ff120905c6bb1a6883eb1f1dce7a185606258ea2-merged.mount: Deactivated successfully.
Jan 20 15:20:24 compute-0 podman[368462]: 2026-01-20 15:20:24.290136046 +0000 UTC m=+0.166538244 container remove 8919a9fcfa61d66050412ddf1d3828f1049cc2e47bf3851c0af753ddd757b33a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_yonath, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 20 15:20:24 compute-0 systemd[1]: libpod-conmon-8919a9fcfa61d66050412ddf1d3828f1049cc2e47bf3851c0af753ddd757b33a.scope: Deactivated successfully.
Jan 20 15:20:24 compute-0 podman[368505]: 2026-01-20 15:20:24.436279988 +0000 UTC m=+0.036778713 container create 6ba60847a2c01f08b0026c285114d275a41879112fec7f0c1ddd205b95ca9a3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_matsumoto, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 15:20:24 compute-0 systemd[1]: Started libpod-conmon-6ba60847a2c01f08b0026c285114d275a41879112fec7f0c1ddd205b95ca9a3c.scope.
Jan 20 15:20:24 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:20:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01e7742a8aa782e89e54d09f96cf64487c65e9c13f3a541c1f67cb20da2c2e92/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 15:20:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01e7742a8aa782e89e54d09f96cf64487c65e9c13f3a541c1f67cb20da2c2e92/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 15:20:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01e7742a8aa782e89e54d09f96cf64487c65e9c13f3a541c1f67cb20da2c2e92/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 15:20:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01e7742a8aa782e89e54d09f96cf64487c65e9c13f3a541c1f67cb20da2c2e92/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 15:20:24 compute-0 podman[368505]: 2026-01-20 15:20:24.50714043 +0000 UTC m=+0.107639185 container init 6ba60847a2c01f08b0026c285114d275a41879112fec7f0c1ddd205b95ca9a3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_matsumoto, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 20 15:20:24 compute-0 podman[368505]: 2026-01-20 15:20:24.516119081 +0000 UTC m=+0.116617806 container start 6ba60847a2c01f08b0026c285114d275a41879112fec7f0c1ddd205b95ca9a3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_matsumoto, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 15:20:24 compute-0 podman[368505]: 2026-01-20 15:20:24.420222545 +0000 UTC m=+0.020721290 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:20:24 compute-0 podman[368505]: 2026-01-20 15:20:24.519451482 +0000 UTC m=+0.119950237 container attach 6ba60847a2c01f08b0026c285114d275a41879112fec7f0c1ddd205b95ca9a3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_matsumoto, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 20 15:20:24 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:20:24 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:20:24 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:20:24.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:20:24 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2944: 321 pgs: 321 active+clean; 215 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 318 KiB/s rd, 2.5 MiB/s wr, 66 op/s
Jan 20 15:20:24 compute-0 ceph-mon[74360]: pgmap v2944: 321 pgs: 321 active+clean; 215 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 318 KiB/s rd, 2.5 MiB/s wr, 66 op/s
Jan 20 15:20:25 compute-0 eager_matsumoto[368522]: {
Jan 20 15:20:25 compute-0 eager_matsumoto[368522]:     "afc3740a-ccee-46f8-b343-22648cc89376": {
Jan 20 15:20:25 compute-0 eager_matsumoto[368522]:         "ceph_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 15:20:25 compute-0 eager_matsumoto[368522]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 20 15:20:25 compute-0 eager_matsumoto[368522]:         "osd_id": 0,
Jan 20 15:20:25 compute-0 eager_matsumoto[368522]:         "osd_uuid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 15:20:25 compute-0 eager_matsumoto[368522]:         "type": "bluestore"
Jan 20 15:20:25 compute-0 eager_matsumoto[368522]:     }
Jan 20 15:20:25 compute-0 eager_matsumoto[368522]: }
Jan 20 15:20:25 compute-0 systemd[1]: libpod-6ba60847a2c01f08b0026c285114d275a41879112fec7f0c1ddd205b95ca9a3c.scope: Deactivated successfully.
Jan 20 15:20:25 compute-0 podman[368505]: 2026-01-20 15:20:25.319517923 +0000 UTC m=+0.920016648 container died 6ba60847a2c01f08b0026c285114d275a41879112fec7f0c1ddd205b95ca9a3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_matsumoto, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 20 15:20:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-01e7742a8aa782e89e54d09f96cf64487c65e9c13f3a541c1f67cb20da2c2e92-merged.mount: Deactivated successfully.
Jan 20 15:20:25 compute-0 podman[368505]: 2026-01-20 15:20:25.373307534 +0000 UTC m=+0.973806279 container remove 6ba60847a2c01f08b0026c285114d275a41879112fec7f0c1ddd205b95ca9a3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_matsumoto, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 20 15:20:25 compute-0 systemd[1]: libpod-conmon-6ba60847a2c01f08b0026c285114d275a41879112fec7f0c1ddd205b95ca9a3c.scope: Deactivated successfully.
Jan 20 15:20:25 compute-0 sudo[368396]: pam_unix(sudo:session): session closed for user root
Jan 20 15:20:25 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 15:20:25 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:20:25 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 15:20:25 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:20:25 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 35c45de1-1019-4f90-a902-f5fb2ad4cbce does not exist
Jan 20 15:20:25 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev a1be9b51-77b3-4092-85b5-c5dc3a00395b does not exist
Jan 20 15:20:25 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 1404fb40-5ba3-47e9-9148-67cdeeef61a9 does not exist
Jan 20 15:20:25 compute-0 sudo[368557]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:20:25 compute-0 sudo[368557]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:20:25 compute-0 sudo[368557]: pam_unix(sudo:session): session closed for user root
Jan 20 15:20:25 compute-0 sudo[368582]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 15:20:25 compute-0 sudo[368582]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:20:25 compute-0 sudo[368582]: pam_unix(sudo:session): session closed for user root
Jan 20 15:20:25 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:20:25 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:20:25 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:20:25.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:20:25 compute-0 nova_compute[250018]: 2026-01-20 15:20:25.710 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:20:26 compute-0 nova_compute[250018]: 2026-01-20 15:20:26.382 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:20:26 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:20:26 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:20:26 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:20:26 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:20:26 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:20:26.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:20:26 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2945: 321 pgs: 321 active+clean; 213 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 167 op/s
Jan 20 15:20:26 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:20:27 compute-0 ceph-mon[74360]: pgmap v2945: 321 pgs: 321 active+clean; 213 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 167 op/s
Jan 20 15:20:27 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:20:27 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:20:27 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:20:27.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:20:27 compute-0 nova_compute[250018]: 2026-01-20 15:20:27.890 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:20:28 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:20:28 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:20:28 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:20:28.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:20:28 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2946: 321 pgs: 321 active+clean; 213 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.2 MiB/s wr, 140 op/s
Jan 20 15:20:28 compute-0 ceph-mon[74360]: pgmap v2946: 321 pgs: 321 active+clean; 213 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.2 MiB/s wr, 140 op/s
Jan 20 15:20:29 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:20:29 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:20:29 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:20:29.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:20:30 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:20:30 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:20:30 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:20:30.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:20:30 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2947: 321 pgs: 321 active+clean; 213 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 2.2 MiB/s wr, 206 op/s
Jan 20 15:20:30 compute-0 nova_compute[250018]: 2026-01-20 15:20:30.712 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:20:30 compute-0 ceph-mon[74360]: pgmap v2947: 321 pgs: 321 active+clean; 213 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 2.2 MiB/s wr, 206 op/s
Jan 20 15:20:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:20:30.790 160071 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:20:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:20:30.790 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:20:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:20:30.790 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:20:31 compute-0 nova_compute[250018]: 2026-01-20 15:20:31.385 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:20:31 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:20:31 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:20:31 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:20:31.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:20:31 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:20:32 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:20:32 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:20:32 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:20:32.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:20:32 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2948: 321 pgs: 321 active+clean; 213 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 202 op/s
Jan 20 15:20:32 compute-0 ceph-mon[74360]: pgmap v2948: 321 pgs: 321 active+clean; 213 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 202 op/s
Jan 20 15:20:33 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:20:33 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:20:33 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:20:33.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:20:34 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:20:34 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:20:34 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:20:34.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:20:34 compute-0 systemd[1]: Starting dnf makecache...
Jan 20 15:20:34 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2949: 321 pgs: 321 active+clean; 213 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 188 op/s
Jan 20 15:20:34 compute-0 ceph-mon[74360]: pgmap v2949: 321 pgs: 321 active+clean; 213 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 188 op/s
Jan 20 15:20:34 compute-0 dnf[368612]: Metadata cache refreshed recently.
Jan 20 15:20:34 compute-0 systemd[1]: dnf-makecache.service: Deactivated successfully.
Jan 20 15:20:34 compute-0 systemd[1]: Finished dnf makecache.
Jan 20 15:20:35 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:20:35 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:20:35 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:20:35.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:20:35 compute-0 nova_compute[250018]: 2026-01-20 15:20:35.752 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:20:36 compute-0 nova_compute[250018]: 2026-01-20 15:20:36.386 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:20:36 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:20:36 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:20:36 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:20:36.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:20:36 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2950: 321 pgs: 321 active+clean; 235 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 2.8 MiB/s wr, 192 op/s
Jan 20 15:20:36 compute-0 ceph-mon[74360]: pgmap v2950: 321 pgs: 321 active+clean; 235 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 2.8 MiB/s wr, 192 op/s
Jan 20 15:20:36 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:20:37 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:20:37 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:20:37 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:20:37.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:20:38 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:20:38 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:20:38 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:20:38.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:20:38 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2951: 321 pgs: 321 active+clean; 235 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.7 MiB/s wr, 90 op/s
Jan 20 15:20:38 compute-0 ceph-mon[74360]: pgmap v2951: 321 pgs: 321 active+clean; 235 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.7 MiB/s wr, 90 op/s
Jan 20 15:20:39 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:20:39 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:20:39 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:20:39.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:20:40 compute-0 podman[368617]: 2026-01-20 15:20:40.464278948 +0000 UTC m=+0.057679638 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 20 15:20:40 compute-0 podman[368616]: 2026-01-20 15:20:40.501231264 +0000 UTC m=+0.094341445 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Jan 20 15:20:40 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:20:40 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:20:40 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:20:40.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:20:40 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2952: 321 pgs: 321 active+clean; 267 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 3.9 MiB/s wr, 178 op/s
Jan 20 15:20:40 compute-0 nova_compute[250018]: 2026-01-20 15:20:40.753 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:20:40 compute-0 ceph-mon[74360]: pgmap v2952: 321 pgs: 321 active+clean; 267 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 3.9 MiB/s wr, 178 op/s
Jan 20 15:20:41 compute-0 nova_compute[250018]: 2026-01-20 15:20:41.388 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:20:41 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:20:41 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:20:41 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:20:41.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:20:41 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:20:42 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:20:42 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:20:42 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:20:42.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:20:42 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2953: 321 pgs: 321 active+clean; 279 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 654 KiB/s rd, 4.3 MiB/s wr, 128 op/s
Jan 20 15:20:42 compute-0 ceph-mon[74360]: pgmap v2953: 321 pgs: 321 active+clean; 279 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 654 KiB/s rd, 4.3 MiB/s wr, 128 op/s
Jan 20 15:20:43 compute-0 sudo[368662]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:20:43 compute-0 sudo[368662]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:20:43 compute-0 sudo[368662]: pam_unix(sudo:session): session closed for user root
Jan 20 15:20:43 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:20:43 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:20:43 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:20:43.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:20:43 compute-0 sudo[368687]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:20:43 compute-0 sudo[368687]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:20:43 compute-0 sudo[368687]: pam_unix(sudo:session): session closed for user root
Jan 20 15:20:44 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:20:44 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:20:44 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:20:44.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:20:44 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2954: 321 pgs: 321 active+clean; 279 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 654 KiB/s rd, 4.3 MiB/s wr, 128 op/s
Jan 20 15:20:44 compute-0 ceph-mon[74360]: pgmap v2954: 321 pgs: 321 active+clean; 279 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 654 KiB/s rd, 4.3 MiB/s wr, 128 op/s
Jan 20 15:20:45 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:20:45 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:20:45 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:20:45.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:20:45 compute-0 nova_compute[250018]: 2026-01-20 15:20:45.755 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:20:46 compute-0 nova_compute[250018]: 2026-01-20 15:20:46.389 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:20:46 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:20:46 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:20:46 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:20:46.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:20:46 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2955: 321 pgs: 321 active+clean; 279 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 654 KiB/s rd, 4.3 MiB/s wr, 129 op/s
Jan 20 15:20:46 compute-0 ceph-mon[74360]: pgmap v2955: 321 pgs: 321 active+clean; 279 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 654 KiB/s rd, 4.3 MiB/s wr, 129 op/s
Jan 20 15:20:46 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:20:47 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:20:47 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:20:47 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:20:47.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:20:48 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:20:48 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:20:48 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:20:48.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:20:48 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2956: 321 pgs: 321 active+clean; 279 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 589 KiB/s rd, 2.6 MiB/s wr, 104 op/s
Jan 20 15:20:48 compute-0 ceph-mon[74360]: pgmap v2956: 321 pgs: 321 active+clean; 279 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 589 KiB/s rd, 2.6 MiB/s wr, 104 op/s
Jan 20 15:20:49 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:20:49 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:20:49 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:20:49.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:20:49 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/2647159744' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:20:50 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:20:50 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:20:50 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:20:50.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:20:50 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2957: 321 pgs: 321 active+clean; 213 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 598 KiB/s rd, 2.6 MiB/s wr, 118 op/s
Jan 20 15:20:50 compute-0 nova_compute[250018]: 2026-01-20 15:20:50.757 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:20:50 compute-0 ceph-mon[74360]: pgmap v2957: 321 pgs: 321 active+clean; 213 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 598 KiB/s rd, 2.6 MiB/s wr, 118 op/s
Jan 20 15:20:51 compute-0 nova_compute[250018]: 2026-01-20 15:20:51.432 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:20:51 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:20:51 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:20:51 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:20:51.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:20:51 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:20:52 compute-0 nova_compute[250018]: 2026-01-20 15:20:52.070 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:20:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:20:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:20:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:20:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:20:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:20:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:20:52 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:20:52 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:20:52 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:20:52.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:20:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Optimize plan auto_2026-01-20_15:20:52
Jan 20 15:20:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 15:20:52 compute-0 ceph-mgr[74653]: [balancer INFO root] do_upmap
Jan 20 15:20:52 compute-0 ceph-mgr[74653]: [balancer INFO root] pools ['images', 'default.rgw.log', 'default.rgw.control', 'default.rgw.meta', '.rgw.root', 'backups', 'cephfs.cephfs.meta', 'vms', 'cephfs.cephfs.data', '.mgr', 'volumes']
Jan 20 15:20:52 compute-0 ceph-mgr[74653]: [balancer INFO root] prepared 0/10 changes
Jan 20 15:20:52 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2958: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 66 KiB/s rd, 373 KiB/s wr, 43 op/s
Jan 20 15:20:52 compute-0 ceph-mon[74360]: pgmap v2958: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 66 KiB/s rd, 373 KiB/s wr, 43 op/s
Jan 20 15:20:53 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:20:53 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:20:53 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:20:53.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:20:54 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:20:54 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:20:54 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:20:54.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:20:54 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2959: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 26 KiB/s wr, 28 op/s
Jan 20 15:20:54 compute-0 ceph-mon[74360]: pgmap v2959: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 26 KiB/s wr, 28 op/s
Jan 20 15:20:55 compute-0 nova_compute[250018]: 2026-01-20 15:20:55.345 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:20:55 compute-0 nova_compute[250018]: 2026-01-20 15:20:55.475 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:20:55 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:20:55 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:20:55 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:20:55.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:20:55 compute-0 nova_compute[250018]: 2026-01-20 15:20:55.759 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:20:56 compute-0 nova_compute[250018]: 2026-01-20 15:20:56.433 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:20:56 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:20:56 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:20:56 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:20:56.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:20:56 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2960: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 27 KiB/s wr, 28 op/s
Jan 20 15:20:56 compute-0 ceph-mon[74360]: pgmap v2960: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 27 KiB/s wr, 28 op/s
Jan 20 15:20:56 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:20:57 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:20:57 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:20:57 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:20:57.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:20:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 15:20:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 15:20:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 15:20:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 15:20:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 15:20:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 15:20:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 15:20:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 15:20:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 15:20:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 15:20:58 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:20:58 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:20:58 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:20:58.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:20:58 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2961: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 3.2 KiB/s wr, 28 op/s
Jan 20 15:20:58 compute-0 ceph-mon[74360]: pgmap v2961: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 3.2 KiB/s wr, 28 op/s
Jan 20 15:20:59 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:20:59 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:20:59 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:20:59.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:21:00 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:21:00 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:21:00 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:21:00.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:21:00 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2962: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 3.3 KiB/s wr, 28 op/s
Jan 20 15:21:00 compute-0 nova_compute[250018]: 2026-01-20 15:21:00.761 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:21:00 compute-0 ceph-mon[74360]: pgmap v2962: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 3.3 KiB/s wr, 28 op/s
Jan 20 15:21:01 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 15:21:01 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/54920815' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:21:01 compute-0 nova_compute[250018]: 2026-01-20 15:21:01.435 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:21:01 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:21:01 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:21:01 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:21:01.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:21:01 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/54920815' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:21:01 compute-0 sshd-session[368721]: Invalid user admin from 134.122.57.138 port 55078
Jan 20 15:21:01 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:21:01 compute-0 ceph-mon[74360]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #150. Immutable memtables: 0.
Jan 20 15:21:01 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:21:01.950330) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 15:21:01 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:856] [default] [JOB 91] Flushing memtable with next log file: 150
Jan 20 15:21:01 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768922461950353, "job": 91, "event": "flush_started", "num_memtables": 1, "num_entries": 747, "num_deletes": 250, "total_data_size": 1014586, "memory_usage": 1027920, "flush_reason": "Manual Compaction"}
Jan 20 15:21:01 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:885] [default] [JOB 91] Level-0 flush table #151: started
Jan 20 15:21:01 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768922461956533, "cf_name": "default", "job": 91, "event": "table_file_creation", "file_number": 151, "file_size": 697690, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 66203, "largest_seqno": 66949, "table_properties": {"data_size": 694314, "index_size": 1219, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 9080, "raw_average_key_size": 21, "raw_value_size": 687182, "raw_average_value_size": 1590, "num_data_blocks": 52, "num_entries": 432, "num_filter_entries": 432, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768922409, "oldest_key_time": 1768922409, "file_creation_time": 1768922461, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1688fb6f-f397-4aac-a0e8-874ff91d97a7", "db_session_id": "5V3N2TVXYZBCXP55EZHK", "orig_file_number": 151, "seqno_to_time_mapping": "N/A"}}
Jan 20 15:21:01 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 91] Flush lasted 6246 microseconds, and 2501 cpu microseconds.
Jan 20 15:21:01 compute-0 ceph-mon[74360]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 15:21:01 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:21:01.956570) [db/flush_job.cc:967] [default] [JOB 91] Level-0 flush table #151: 697690 bytes OK
Jan 20 15:21:01 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:21:01.956591) [db/memtable_list.cc:519] [default] Level-0 commit table #151 started
Jan 20 15:21:01 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:21:01.957836) [db/memtable_list.cc:722] [default] Level-0 commit table #151: memtable #1 done
Jan 20 15:21:01 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:21:01.957851) EVENT_LOG_v1 {"time_micros": 1768922461957846, "job": 91, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 15:21:01 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:21:01.957867) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 15:21:01 compute-0 ceph-mon[74360]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 91] Try to delete WAL files size 1010835, prev total WAL file size 1010835, number of live WAL files 2.
Jan 20 15:21:01 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000147.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:21:01 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:21:01.958556) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032323537' seq:72057594037927935, type:22 .. '6D6772737461740032353038' seq:0, type:0; will stop at (end)
Jan 20 15:21:01 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 92] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 15:21:01 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 91 Base level 0, inputs: [151(681KB)], [149(12MB)]
Jan 20 15:21:01 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768922461958583, "job": 92, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [151], "files_L6": [149], "score": -1, "input_data_size": 14060902, "oldest_snapshot_seqno": -1}
Jan 20 15:21:02 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 92] Generated table #152: 9396 keys, 10497601 bytes, temperature: kUnknown
Jan 20 15:21:02 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768922462027168, "cf_name": "default", "job": 92, "event": "table_file_creation", "file_number": 152, "file_size": 10497601, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10438811, "index_size": 34165, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 23557, "raw_key_size": 247988, "raw_average_key_size": 26, "raw_value_size": 10275843, "raw_average_value_size": 1093, "num_data_blocks": 1293, "num_entries": 9396, "num_filter_entries": 9396, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768917305, "oldest_key_time": 0, "file_creation_time": 1768922461, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1688fb6f-f397-4aac-a0e8-874ff91d97a7", "db_session_id": "5V3N2TVXYZBCXP55EZHK", "orig_file_number": 152, "seqno_to_time_mapping": "N/A"}}
Jan 20 15:21:02 compute-0 ceph-mon[74360]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 15:21:02 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:21:02.027491) [db/compaction/compaction_job.cc:1663] [default] [JOB 92] Compacted 1@0 + 1@6 files to L6 => 10497601 bytes
Jan 20 15:21:02 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:21:02.028975) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 204.7 rd, 152.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 12.7 +0.0 blob) out(10.0 +0.0 blob), read-write-amplify(35.2) write-amplify(15.0) OK, records in: 9893, records dropped: 497 output_compression: NoCompression
Jan 20 15:21:02 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:21:02.029035) EVENT_LOG_v1 {"time_micros": 1768922462029025, "job": 92, "event": "compaction_finished", "compaction_time_micros": 68674, "compaction_time_cpu_micros": 25093, "output_level": 6, "num_output_files": 1, "total_output_size": 10497601, "num_input_records": 9893, "num_output_records": 9396, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 15:21:02 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000151.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:21:02 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768922462029341, "job": 92, "event": "table_file_deletion", "file_number": 151}
Jan 20 15:21:02 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000149.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:21:02 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768922462031811, "job": 92, "event": "table_file_deletion", "file_number": 149}
Jan 20 15:21:02 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:21:01.958470) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:21:02 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:21:02.031887) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:21:02 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:21:02.031893) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:21:02 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:21:02.031894) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:21:02 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:21:02.031896) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:21:02 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:21:02.031898) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:21:02 compute-0 sshd-session[368721]: Connection closed by invalid user admin 134.122.57.138 port 55078 [preauth]
Jan 20 15:21:02 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:21:02 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:21:02 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:21:02.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:21:02 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2963: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 2.2 KiB/s wr, 14 op/s
Jan 20 15:21:02 compute-0 ceph-mon[74360]: pgmap v2963: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 2.2 KiB/s wr, 14 op/s
Jan 20 15:21:03 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:21:03 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:21:03 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:21:03.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:21:03 compute-0 sudo[368725]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:21:03 compute-0 sudo[368725]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:21:03 compute-0 sudo[368725]: pam_unix(sudo:session): session closed for user root
Jan 20 15:21:03 compute-0 sudo[368750]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:21:03 compute-0 sudo[368750]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:21:03 compute-0 sudo[368750]: pam_unix(sudo:session): session closed for user root
Jan 20 15:21:03 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/373551683' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:21:04 compute-0 nova_compute[250018]: 2026-01-20 15:21:04.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:21:04 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:21:04 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:21:04 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:21:04.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:21:04 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2964: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.2 KiB/s rd, 1.3 KiB/s wr, 1 op/s
Jan 20 15:21:05 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/3237900193' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:21:05 compute-0 ceph-mon[74360]: pgmap v2964: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.2 KiB/s rd, 1.3 KiB/s wr, 1 op/s
Jan 20 15:21:05 compute-0 nova_compute[250018]: 2026-01-20 15:21:05.763 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:21:05 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:21:05 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:21:05 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:21:05.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:21:06 compute-0 nova_compute[250018]: 2026-01-20 15:21:06.438 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:21:06 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:21:06 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:21:06 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:21:06.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:21:06 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2965: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.0 KiB/s rd, 3.0 KiB/s wr, 3 op/s
Jan 20 15:21:06 compute-0 ceph-mon[74360]: pgmap v2965: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.0 KiB/s rd, 3.0 KiB/s wr, 3 op/s
Jan 20 15:21:06 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:21:07 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:21:07 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:21:07 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:21:07.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:21:08 compute-0 nova_compute[250018]: 2026-01-20 15:21:08.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:21:08 compute-0 nova_compute[250018]: 2026-01-20 15:21:08.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:21:08 compute-0 nova_compute[250018]: 2026-01-20 15:21:08.051 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 15:21:08 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:21:08.142 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=67, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=66) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:21:08 compute-0 nova_compute[250018]: 2026-01-20 15:21:08.142 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:21:08 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:21:08.143 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 15:21:08 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/1320144358' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:21:08 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:21:08 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:21:08 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:21:08.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:21:08 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2966: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.0 KiB/s rd, 2.0 KiB/s wr, 3 op/s
Jan 20 15:21:09 compute-0 nova_compute[250018]: 2026-01-20 15:21:09.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:21:09 compute-0 nova_compute[250018]: 2026-01-20 15:21:09.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:21:09 compute-0 nova_compute[250018]: 2026-01-20 15:21:09.179 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:21:09 compute-0 nova_compute[250018]: 2026-01-20 15:21:09.180 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:21:09 compute-0 nova_compute[250018]: 2026-01-20 15:21:09.180 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:21:09 compute-0 nova_compute[250018]: 2026-01-20 15:21:09.180 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 15:21:09 compute-0 nova_compute[250018]: 2026-01-20 15:21:09.181 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:21:09 compute-0 ceph-mon[74360]: pgmap v2966: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.0 KiB/s rd, 2.0 KiB/s wr, 3 op/s
Jan 20 15:21:09 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/2027985971' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:21:09 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:21:09 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1519889462' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:21:09 compute-0 nova_compute[250018]: 2026-01-20 15:21:09.639 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:21:09 compute-0 nova_compute[250018]: 2026-01-20 15:21:09.769 250022 WARNING nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:21:09 compute-0 nova_compute[250018]: 2026-01-20 15:21:09.770 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4212MB free_disk=20.94268798828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 15:21:09 compute-0 nova_compute[250018]: 2026-01-20 15:21:09.770 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:21:09 compute-0 nova_compute[250018]: 2026-01-20 15:21:09.770 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:21:09 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:21:09 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:21:09 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:21:09.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:21:09 compute-0 nova_compute[250018]: 2026-01-20 15:21:09.990 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 15:21:09 compute-0 nova_compute[250018]: 2026-01-20 15:21:09.990 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 15:21:10 compute-0 nova_compute[250018]: 2026-01-20 15:21:10.008 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:21:10 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/1519889462' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:21:10 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/3327566445' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:21:10 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:21:10 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/212363485' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:21:10 compute-0 nova_compute[250018]: 2026-01-20 15:21:10.473 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:21:10 compute-0 nova_compute[250018]: 2026-01-20 15:21:10.479 250022 DEBUG nova.compute.provider_tree [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:21:10 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:21:10 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:21:10 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:21:10.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:21:10 compute-0 nova_compute[250018]: 2026-01-20 15:21:10.667 250022 DEBUG nova.scheduler.client.report [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:21:10 compute-0 nova_compute[250018]: 2026-01-20 15:21:10.669 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 15:21:10 compute-0 nova_compute[250018]: 2026-01-20 15:21:10.670 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.899s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:21:10 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2967: 321 pgs: 321 active+clean; 142 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 14 KiB/s rd, 2.7 KiB/s wr, 20 op/s
Jan 20 15:21:10 compute-0 nova_compute[250018]: 2026-01-20 15:21:10.765 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:21:11 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/212363485' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:21:11 compute-0 ceph-mon[74360]: pgmap v2967: 321 pgs: 321 active+clean; 142 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 14 KiB/s rd, 2.7 KiB/s wr, 20 op/s
Jan 20 15:21:11 compute-0 nova_compute[250018]: 2026-01-20 15:21:11.459 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:21:11 compute-0 podman[368824]: 2026-01-20 15:21:11.514366438 +0000 UTC m=+0.101423286 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 15:21:11 compute-0 podman[368823]: 2026-01-20 15:21:11.516758943 +0000 UTC m=+0.103817181 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, 
managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Jan 20 15:21:11 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:21:11 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:21:11 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:21:11.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:21:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 15:21:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:21:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 20 15:21:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:21:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0004700150055834729 of space, bias 1.0, pg target 0.1410045016750419 quantized to 32 (current 32)
Jan 20 15:21:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:21:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.00216214172715429 of space, bias 1.0, pg target 0.648642518146287 quantized to 32 (current 32)
Jan 20 15:21:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:21:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 20 15:21:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:21:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 20 15:21:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:21:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 20 15:21:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:21:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 15:21:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:21:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 20 15:21:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:21:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 20 15:21:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:21:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 15:21:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:21:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 20 15:21:11 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:21:12 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:21:12 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:21:12 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:21:12.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:21:12 compute-0 nova_compute[250018]: 2026-01-20 15:21:12.665 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:21:12 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2968: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 22 KiB/s rd, 3.0 KiB/s wr, 30 op/s
Jan 20 15:21:12 compute-0 ceph-mon[74360]: pgmap v2968: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 22 KiB/s rd, 3.0 KiB/s wr, 30 op/s
Jan 20 15:21:13 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 20 15:21:13 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/204930984' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:21:13 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 20 15:21:13 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/204930984' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:21:13 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:21:13 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:21:13 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:21:13.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:21:13 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/204930984' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:21:13 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/204930984' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:21:14 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:21:14.144 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=367c1a2c-b16a-4828-ab5a-626bb50023b4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '67'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:21:14 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:21:14 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:21:14 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:21:14.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:21:14 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2969: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 21 KiB/s rd, 2.8 KiB/s wr, 30 op/s
Jan 20 15:21:14 compute-0 ceph-mon[74360]: pgmap v2969: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 21 KiB/s rd, 2.8 KiB/s wr, 30 op/s
Jan 20 15:21:15 compute-0 nova_compute[250018]: 2026-01-20 15:21:15.766 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:21:15 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:21:15 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:21:15 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:21:15.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:21:16 compute-0 nova_compute[250018]: 2026-01-20 15:21:16.044 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:21:16 compute-0 nova_compute[250018]: 2026-01-20 15:21:16.363 250022 DEBUG oslo_concurrency.lockutils [None req-7b3d52d4-965e-4d09-a13b-04f08c244a60 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Acquiring lock "65aa2157-f058-4e5c-b448-64cf956310ba" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:21:16 compute-0 nova_compute[250018]: 2026-01-20 15:21:16.364 250022 DEBUG oslo_concurrency.lockutils [None req-7b3d52d4-965e-4d09-a13b-04f08c244a60 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lock "65aa2157-f058-4e5c-b448-64cf956310ba" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:21:16 compute-0 nova_compute[250018]: 2026-01-20 15:21:16.426 250022 DEBUG nova.compute.manager [None req-7b3d52d4-965e-4d09-a13b-04f08c244a60 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 15:21:16 compute-0 nova_compute[250018]: 2026-01-20 15:21:16.461 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:21:16 compute-0 nova_compute[250018]: 2026-01-20 15:21:16.559 250022 DEBUG oslo_concurrency.lockutils [None req-7b3d52d4-965e-4d09-a13b-04f08c244a60 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:21:16 compute-0 nova_compute[250018]: 2026-01-20 15:21:16.559 250022 DEBUG oslo_concurrency.lockutils [None req-7b3d52d4-965e-4d09-a13b-04f08c244a60 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:21:16 compute-0 nova_compute[250018]: 2026-01-20 15:21:16.566 250022 DEBUG nova.virt.hardware [None req-7b3d52d4-965e-4d09-a13b-04f08c244a60 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 15:21:16 compute-0 nova_compute[250018]: 2026-01-20 15:21:16.566 250022 INFO nova.compute.claims [None req-7b3d52d4-965e-4d09-a13b-04f08c244a60 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] Claim successful on node compute-0.ctlplane.example.com
Jan 20 15:21:16 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:21:16 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:21:16 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:21:16.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:21:16 compute-0 nova_compute[250018]: 2026-01-20 15:21:16.688 250022 DEBUG oslo_concurrency.processutils [None req-7b3d52d4-965e-4d09-a13b-04f08c244a60 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:21:16 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2970: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 21 KiB/s rd, 2.8 KiB/s wr, 29 op/s
Jan 20 15:21:16 compute-0 ceph-mon[74360]: pgmap v2970: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 21 KiB/s rd, 2.8 KiB/s wr, 29 op/s
Jan 20 15:21:16 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:21:17 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:21:17 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3995444439' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:21:17 compute-0 nova_compute[250018]: 2026-01-20 15:21:17.162 250022 DEBUG oslo_concurrency.processutils [None req-7b3d52d4-965e-4d09-a13b-04f08c244a60 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:21:17 compute-0 nova_compute[250018]: 2026-01-20 15:21:17.168 250022 DEBUG nova.compute.provider_tree [None req-7b3d52d4-965e-4d09-a13b-04f08c244a60 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:21:17 compute-0 nova_compute[250018]: 2026-01-20 15:21:17.184 250022 DEBUG nova.scheduler.client.report [None req-7b3d52d4-965e-4d09-a13b-04f08c244a60 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:21:17 compute-0 nova_compute[250018]: 2026-01-20 15:21:17.207 250022 DEBUG oslo_concurrency.lockutils [None req-7b3d52d4-965e-4d09-a13b-04f08c244a60 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.647s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:21:17 compute-0 nova_compute[250018]: 2026-01-20 15:21:17.207 250022 DEBUG nova.compute.manager [None req-7b3d52d4-965e-4d09-a13b-04f08c244a60 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 15:21:17 compute-0 nova_compute[250018]: 2026-01-20 15:21:17.303 250022 DEBUG nova.compute.manager [None req-7b3d52d4-965e-4d09-a13b-04f08c244a60 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 20 15:21:17 compute-0 nova_compute[250018]: 2026-01-20 15:21:17.304 250022 DEBUG nova.network.neutron [None req-7b3d52d4-965e-4d09-a13b-04f08c244a60 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 15:21:17 compute-0 nova_compute[250018]: 2026-01-20 15:21:17.335 250022 INFO nova.virt.libvirt.driver [None req-7b3d52d4-965e-4d09-a13b-04f08c244a60 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 15:21:17 compute-0 nova_compute[250018]: 2026-01-20 15:21:17.353 250022 DEBUG nova.compute.manager [None req-7b3d52d4-965e-4d09-a13b-04f08c244a60 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 15:21:17 compute-0 nova_compute[250018]: 2026-01-20 15:21:17.437 250022 DEBUG nova.compute.manager [None req-7b3d52d4-965e-4d09-a13b-04f08c244a60 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 15:21:17 compute-0 nova_compute[250018]: 2026-01-20 15:21:17.439 250022 DEBUG nova.virt.libvirt.driver [None req-7b3d52d4-965e-4d09-a13b-04f08c244a60 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 15:21:17 compute-0 nova_compute[250018]: 2026-01-20 15:21:17.440 250022 INFO nova.virt.libvirt.driver [None req-7b3d52d4-965e-4d09-a13b-04f08c244a60 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] Creating image(s)
Jan 20 15:21:17 compute-0 nova_compute[250018]: 2026-01-20 15:21:17.480 250022 DEBUG nova.storage.rbd_utils [None req-7b3d52d4-965e-4d09-a13b-04f08c244a60 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] rbd image 65aa2157-f058-4e5c-b448-64cf956310ba_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:21:17 compute-0 nova_compute[250018]: 2026-01-20 15:21:17.509 250022 DEBUG nova.storage.rbd_utils [None req-7b3d52d4-965e-4d09-a13b-04f08c244a60 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] rbd image 65aa2157-f058-4e5c-b448-64cf956310ba_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:21:17 compute-0 nova_compute[250018]: 2026-01-20 15:21:17.542 250022 DEBUG nova.storage.rbd_utils [None req-7b3d52d4-965e-4d09-a13b-04f08c244a60 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] rbd image 65aa2157-f058-4e5c-b448-64cf956310ba_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:21:17 compute-0 nova_compute[250018]: 2026-01-20 15:21:17.545 250022 DEBUG oslo_concurrency.processutils [None req-7b3d52d4-965e-4d09-a13b-04f08c244a60 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:21:17 compute-0 nova_compute[250018]: 2026-01-20 15:21:17.618 250022 DEBUG oslo_concurrency.processutils [None req-7b3d52d4-965e-4d09-a13b-04f08c244a60 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:21:17 compute-0 nova_compute[250018]: 2026-01-20 15:21:17.619 250022 DEBUG oslo_concurrency.lockutils [None req-7b3d52d4-965e-4d09-a13b-04f08c244a60 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Acquiring lock "82d5c1918fd7c974214c7a48c1793a7a82560462" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:21:17 compute-0 nova_compute[250018]: 2026-01-20 15:21:17.620 250022 DEBUG oslo_concurrency.lockutils [None req-7b3d52d4-965e-4d09-a13b-04f08c244a60 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:21:17 compute-0 nova_compute[250018]: 2026-01-20 15:21:17.620 250022 DEBUG oslo_concurrency.lockutils [None req-7b3d52d4-965e-4d09-a13b-04f08c244a60 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:21:17 compute-0 nova_compute[250018]: 2026-01-20 15:21:17.648 250022 DEBUG nova.storage.rbd_utils [None req-7b3d52d4-965e-4d09-a13b-04f08c244a60 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] rbd image 65aa2157-f058-4e5c-b448-64cf956310ba_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:21:17 compute-0 nova_compute[250018]: 2026-01-20 15:21:17.652 250022 DEBUG oslo_concurrency.processutils [None req-7b3d52d4-965e-4d09-a13b-04f08c244a60 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 65aa2157-f058-4e5c-b448-64cf956310ba_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:21:17 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:21:17 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:21:17 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:21:17.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:21:17 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3995444439' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:21:17 compute-0 nova_compute[250018]: 2026-01-20 15:21:17.952 250022 DEBUG oslo_concurrency.processutils [None req-7b3d52d4-965e-4d09-a13b-04f08c244a60 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 65aa2157-f058-4e5c-b448-64cf956310ba_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.301s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:21:18 compute-0 nova_compute[250018]: 2026-01-20 15:21:18.008 250022 DEBUG nova.storage.rbd_utils [None req-7b3d52d4-965e-4d09-a13b-04f08c244a60 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] resizing rbd image 65aa2157-f058-4e5c-b448-64cf956310ba_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 20 15:21:18 compute-0 nova_compute[250018]: 2026-01-20 15:21:18.048 250022 DEBUG nova.policy [None req-7b3d52d4-965e-4d09-a13b-04f08c244a60 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '442a7a5cb8ea426a82be9762b262d171', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1ed5feeeafe7448a8efb47ab975b0ead', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 20 15:21:18 compute-0 nova_compute[250018]: 2026-01-20 15:21:18.102 250022 DEBUG nova.objects.instance [None req-7b3d52d4-965e-4d09-a13b-04f08c244a60 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lazy-loading 'migration_context' on Instance uuid 65aa2157-f058-4e5c-b448-64cf956310ba obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:21:18 compute-0 nova_compute[250018]: 2026-01-20 15:21:18.123 250022 DEBUG nova.virt.libvirt.driver [None req-7b3d52d4-965e-4d09-a13b-04f08c244a60 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 15:21:18 compute-0 nova_compute[250018]: 2026-01-20 15:21:18.123 250022 DEBUG nova.virt.libvirt.driver [None req-7b3d52d4-965e-4d09-a13b-04f08c244a60 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] Ensure instance console log exists: /var/lib/nova/instances/65aa2157-f058-4e5c-b448-64cf956310ba/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 15:21:18 compute-0 nova_compute[250018]: 2026-01-20 15:21:18.124 250022 DEBUG oslo_concurrency.lockutils [None req-7b3d52d4-965e-4d09-a13b-04f08c244a60 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:21:18 compute-0 nova_compute[250018]: 2026-01-20 15:21:18.124 250022 DEBUG oslo_concurrency.lockutils [None req-7b3d52d4-965e-4d09-a13b-04f08c244a60 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:21:18 compute-0 nova_compute[250018]: 2026-01-20 15:21:18.124 250022 DEBUG oslo_concurrency.lockutils [None req-7b3d52d4-965e-4d09-a13b-04f08c244a60 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:21:18 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:21:18 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:21:18 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:21:18.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:21:18 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2971: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 20 15:21:18 compute-0 ceph-mon[74360]: pgmap v2971: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 20 15:21:19 compute-0 nova_compute[250018]: 2026-01-20 15:21:19.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:21:19 compute-0 nova_compute[250018]: 2026-01-20 15:21:19.182 250022 DEBUG nova.network.neutron [None req-7b3d52d4-965e-4d09-a13b-04f08c244a60 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] Successfully created port: 2eefbfcb-7c22-4c45-bb7b-75319242796c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 20 15:21:19 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:21:19 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:21:19 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:21:19.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:21:20 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/1618258914' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:21:20 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:21:20 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:21:20 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:21:20.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:21:20 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2972: 321 pgs: 321 active+clean; 161 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 35 KiB/s rd, 1.6 MiB/s wr, 52 op/s
Jan 20 15:21:20 compute-0 nova_compute[250018]: 2026-01-20 15:21:20.768 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:21:21 compute-0 nova_compute[250018]: 2026-01-20 15:21:21.018 250022 DEBUG nova.network.neutron [None req-7b3d52d4-965e-4d09-a13b-04f08c244a60 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] Successfully updated port: 2eefbfcb-7c22-4c45-bb7b-75319242796c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 20 15:21:21 compute-0 nova_compute[250018]: 2026-01-20 15:21:21.046 250022 DEBUG oslo_concurrency.lockutils [None req-7b3d52d4-965e-4d09-a13b-04f08c244a60 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Acquiring lock "refresh_cache-65aa2157-f058-4e5c-b448-64cf956310ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:21:21 compute-0 nova_compute[250018]: 2026-01-20 15:21:21.046 250022 DEBUG oslo_concurrency.lockutils [None req-7b3d52d4-965e-4d09-a13b-04f08c244a60 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Acquired lock "refresh_cache-65aa2157-f058-4e5c-b448-64cf956310ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:21:21 compute-0 nova_compute[250018]: 2026-01-20 15:21:21.046 250022 DEBUG nova.network.neutron [None req-7b3d52d4-965e-4d09-a13b-04f08c244a60 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 15:21:21 compute-0 nova_compute[250018]: 2026-01-20 15:21:21.247 250022 DEBUG nova.network.neutron [None req-7b3d52d4-965e-4d09-a13b-04f08c244a60 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 15:21:21 compute-0 ceph-mon[74360]: pgmap v2972: 321 pgs: 321 active+clean; 161 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 35 KiB/s rd, 1.6 MiB/s wr, 52 op/s
Jan 20 15:21:21 compute-0 nova_compute[250018]: 2026-01-20 15:21:21.463 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:21:21 compute-0 nova_compute[250018]: 2026-01-20 15:21:21.484 250022 DEBUG nova.compute.manager [req-0b3eac2c-3a40-4fda-b222-6577e0f10cd7 req-39843c97-aee5-486c-b2be-c28d0dcaa427 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] Received event network-changed-2eefbfcb-7c22-4c45-bb7b-75319242796c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:21:21 compute-0 nova_compute[250018]: 2026-01-20 15:21:21.484 250022 DEBUG nova.compute.manager [req-0b3eac2c-3a40-4fda-b222-6577e0f10cd7 req-39843c97-aee5-486c-b2be-c28d0dcaa427 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] Refreshing instance network info cache due to event network-changed-2eefbfcb-7c22-4c45-bb7b-75319242796c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 15:21:21 compute-0 nova_compute[250018]: 2026-01-20 15:21:21.484 250022 DEBUG oslo_concurrency.lockutils [req-0b3eac2c-3a40-4fda-b222-6577e0f10cd7 req-39843c97-aee5-486c-b2be-c28d0dcaa427 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-65aa2157-f058-4e5c-b448-64cf956310ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:21:21 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:21:21 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:21:21 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:21:21.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:21:21 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:21:22 compute-0 nova_compute[250018]: 2026-01-20 15:21:22.112 250022 DEBUG nova.network.neutron [None req-7b3d52d4-965e-4d09-a13b-04f08c244a60 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] Updating instance_info_cache with network_info: [{"id": "2eefbfcb-7c22-4c45-bb7b-75319242796c", "address": "fa:16:3e:7a:99:f8", "network": {"id": "6eb3ab38-e480-46b8-ae2d-d286fe61de3c", "bridge": "br-int", "label": "tempest-network-smoke--1810657086", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ed5feeeafe7448a8efb47ab975b0ead", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2eefbfcb-7c", "ovs_interfaceid": "2eefbfcb-7c22-4c45-bb7b-75319242796c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:21:22 compute-0 nova_compute[250018]: 2026-01-20 15:21:22.130 250022 DEBUG oslo_concurrency.lockutils [None req-7b3d52d4-965e-4d09-a13b-04f08c244a60 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Releasing lock "refresh_cache-65aa2157-f058-4e5c-b448-64cf956310ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:21:22 compute-0 nova_compute[250018]: 2026-01-20 15:21:22.131 250022 DEBUG nova.compute.manager [None req-7b3d52d4-965e-4d09-a13b-04f08c244a60 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] Instance network_info: |[{"id": "2eefbfcb-7c22-4c45-bb7b-75319242796c", "address": "fa:16:3e:7a:99:f8", "network": {"id": "6eb3ab38-e480-46b8-ae2d-d286fe61de3c", "bridge": "br-int", "label": "tempest-network-smoke--1810657086", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ed5feeeafe7448a8efb47ab975b0ead", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2eefbfcb-7c", "ovs_interfaceid": "2eefbfcb-7c22-4c45-bb7b-75319242796c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 20 15:21:22 compute-0 nova_compute[250018]: 2026-01-20 15:21:22.131 250022 DEBUG oslo_concurrency.lockutils [req-0b3eac2c-3a40-4fda-b222-6577e0f10cd7 req-39843c97-aee5-486c-b2be-c28d0dcaa427 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-65aa2157-f058-4e5c-b448-64cf956310ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:21:22 compute-0 nova_compute[250018]: 2026-01-20 15:21:22.131 250022 DEBUG nova.network.neutron [req-0b3eac2c-3a40-4fda-b222-6577e0f10cd7 req-39843c97-aee5-486c-b2be-c28d0dcaa427 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] Refreshing network info cache for port 2eefbfcb-7c22-4c45-bb7b-75319242796c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 15:21:22 compute-0 nova_compute[250018]: 2026-01-20 15:21:22.134 250022 DEBUG nova.virt.libvirt.driver [None req-7b3d52d4-965e-4d09-a13b-04f08c244a60 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] Start _get_guest_xml network_info=[{"id": "2eefbfcb-7c22-4c45-bb7b-75319242796c", "address": "fa:16:3e:7a:99:f8", "network": {"id": "6eb3ab38-e480-46b8-ae2d-d286fe61de3c", "bridge": "br-int", "label": "tempest-network-smoke--1810657086", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ed5feeeafe7448a8efb47ab975b0ead", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2eefbfcb-7c", "ovs_interfaceid": "2eefbfcb-7c22-4c45-bb7b-75319242796c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encrypted': False, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'guest_format': None, 'size': 0, 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 15:21:22 compute-0 nova_compute[250018]: 2026-01-20 15:21:22.138 250022 WARNING nova.virt.libvirt.driver [None req-7b3d52d4-965e-4d09-a13b-04f08c244a60 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:21:22 compute-0 nova_compute[250018]: 2026-01-20 15:21:22.145 250022 DEBUG nova.virt.libvirt.host [None req-7b3d52d4-965e-4d09-a13b-04f08c244a60 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 15:21:22 compute-0 nova_compute[250018]: 2026-01-20 15:21:22.145 250022 DEBUG nova.virt.libvirt.host [None req-7b3d52d4-965e-4d09-a13b-04f08c244a60 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 15:21:22 compute-0 nova_compute[250018]: 2026-01-20 15:21:22.152 250022 DEBUG nova.virt.libvirt.host [None req-7b3d52d4-965e-4d09-a13b-04f08c244a60 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 15:21:22 compute-0 nova_compute[250018]: 2026-01-20 15:21:22.152 250022 DEBUG nova.virt.libvirt.host [None req-7b3d52d4-965e-4d09-a13b-04f08c244a60 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 15:21:22 compute-0 nova_compute[250018]: 2026-01-20 15:21:22.154 250022 DEBUG nova.virt.libvirt.driver [None req-7b3d52d4-965e-4d09-a13b-04f08c244a60 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 15:21:22 compute-0 nova_compute[250018]: 2026-01-20 15:21:22.154 250022 DEBUG nova.virt.hardware [None req-7b3d52d4-965e-4d09-a13b-04f08c244a60 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 15:21:22 compute-0 nova_compute[250018]: 2026-01-20 15:21:22.155 250022 DEBUG nova.virt.hardware [None req-7b3d52d4-965e-4d09-a13b-04f08c244a60 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 15:21:22 compute-0 nova_compute[250018]: 2026-01-20 15:21:22.155 250022 DEBUG nova.virt.hardware [None req-7b3d52d4-965e-4d09-a13b-04f08c244a60 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 15:21:22 compute-0 nova_compute[250018]: 2026-01-20 15:21:22.155 250022 DEBUG nova.virt.hardware [None req-7b3d52d4-965e-4d09-a13b-04f08c244a60 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 15:21:22 compute-0 nova_compute[250018]: 2026-01-20 15:21:22.155 250022 DEBUG nova.virt.hardware [None req-7b3d52d4-965e-4d09-a13b-04f08c244a60 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 15:21:22 compute-0 nova_compute[250018]: 2026-01-20 15:21:22.156 250022 DEBUG nova.virt.hardware [None req-7b3d52d4-965e-4d09-a13b-04f08c244a60 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 15:21:22 compute-0 nova_compute[250018]: 2026-01-20 15:21:22.156 250022 DEBUG nova.virt.hardware [None req-7b3d52d4-965e-4d09-a13b-04f08c244a60 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 15:21:22 compute-0 nova_compute[250018]: 2026-01-20 15:21:22.156 250022 DEBUG nova.virt.hardware [None req-7b3d52d4-965e-4d09-a13b-04f08c244a60 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 15:21:22 compute-0 nova_compute[250018]: 2026-01-20 15:21:22.156 250022 DEBUG nova.virt.hardware [None req-7b3d52d4-965e-4d09-a13b-04f08c244a60 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 15:21:22 compute-0 nova_compute[250018]: 2026-01-20 15:21:22.157 250022 DEBUG nova.virt.hardware [None req-7b3d52d4-965e-4d09-a13b-04f08c244a60 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 15:21:22 compute-0 nova_compute[250018]: 2026-01-20 15:21:22.157 250022 DEBUG nova.virt.hardware [None req-7b3d52d4-965e-4d09-a13b-04f08c244a60 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 15:21:22 compute-0 nova_compute[250018]: 2026-01-20 15:21:22.159 250022 DEBUG oslo_concurrency.processutils [None req-7b3d52d4-965e-4d09-a13b-04f08c244a60 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:21:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:21:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:21:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:21:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:21:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:21:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:21:22 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 15:21:22 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2770881495' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:21:22 compute-0 nova_compute[250018]: 2026-01-20 15:21:22.624 250022 DEBUG oslo_concurrency.processutils [None req-7b3d52d4-965e-4d09-a13b-04f08c244a60 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:21:22 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:21:22 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:21:22 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:21:22.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:21:22 compute-0 nova_compute[250018]: 2026-01-20 15:21:22.650 250022 DEBUG nova.storage.rbd_utils [None req-7b3d52d4-965e-4d09-a13b-04f08c244a60 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] rbd image 65aa2157-f058-4e5c-b448-64cf956310ba_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:21:22 compute-0 nova_compute[250018]: 2026-01-20 15:21:22.654 250022 DEBUG oslo_concurrency.processutils [None req-7b3d52d4-965e-4d09-a13b-04f08c244a60 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:21:22 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/2770881495' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:21:22 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2973: 321 pgs: 321 active+clean; 170 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 30 KiB/s rd, 1.8 MiB/s wr, 43 op/s
Jan 20 15:21:23 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 15:21:23 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3554796984' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:21:23 compute-0 nova_compute[250018]: 2026-01-20 15:21:23.078 250022 DEBUG oslo_concurrency.processutils [None req-7b3d52d4-965e-4d09-a13b-04f08c244a60 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:21:23 compute-0 nova_compute[250018]: 2026-01-20 15:21:23.081 250022 DEBUG nova.virt.libvirt.vif [None req-7b3d52d4-965e-4d09-a13b-04f08c244a60 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T15:21:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-890684112',display_name='tempest-TestNetworkAdvancedServerOps-server-890684112',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-890684112',id=198,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBESV7XYmzz1neUUH7k/g2EXDk6RAN24jF19myyoRv6wDjFXd5E2VXPhzcf3Q2CFmKA+oZARXh9ZLZnZRzD1iPeEGFbgLb8nt50MGrmQlAcYMGRSCqrzrniFYSfPnybQWNg==',key_name='tempest-TestNetworkAdvancedServerOps-1160843308',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1ed5feeeafe7448a8efb47ab975b0ead',ramdisk_id='',reservation_id='r-n64n905g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-175282664',owner_user_name='tempest-TestNetworkAdvancedServerOps-175282664-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T15:21:17Z,user_data=None,user_id='442a7a5cb8ea426a82be9762b262d171',uuid=65aa2157-f058-4e5c-b448-64cf956310ba,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2eefbfcb-7c22-4c45-bb7b-75319242796c", "address": "fa:16:3e:7a:99:f8", "network": {"id": "6eb3ab38-e480-46b8-ae2d-d286fe61de3c", "bridge": "br-int", "label": "tempest-network-smoke--1810657086", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ed5feeeafe7448a8efb47ab975b0ead", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2eefbfcb-7c", "ovs_interfaceid": "2eefbfcb-7c22-4c45-bb7b-75319242796c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 15:21:23 compute-0 nova_compute[250018]: 2026-01-20 15:21:23.082 250022 DEBUG nova.network.os_vif_util [None req-7b3d52d4-965e-4d09-a13b-04f08c244a60 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Converting VIF {"id": "2eefbfcb-7c22-4c45-bb7b-75319242796c", "address": "fa:16:3e:7a:99:f8", "network": {"id": "6eb3ab38-e480-46b8-ae2d-d286fe61de3c", "bridge": "br-int", "label": "tempest-network-smoke--1810657086", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ed5feeeafe7448a8efb47ab975b0ead", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2eefbfcb-7c", "ovs_interfaceid": "2eefbfcb-7c22-4c45-bb7b-75319242796c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:21:23 compute-0 nova_compute[250018]: 2026-01-20 15:21:23.083 250022 DEBUG nova.network.os_vif_util [None req-7b3d52d4-965e-4d09-a13b-04f08c244a60 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7a:99:f8,bridge_name='br-int',has_traffic_filtering=True,id=2eefbfcb-7c22-4c45-bb7b-75319242796c,network=Network(6eb3ab38-e480-46b8-ae2d-d286fe61de3c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2eefbfcb-7c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:21:23 compute-0 nova_compute[250018]: 2026-01-20 15:21:23.085 250022 DEBUG nova.objects.instance [None req-7b3d52d4-965e-4d09-a13b-04f08c244a60 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lazy-loading 'pci_devices' on Instance uuid 65aa2157-f058-4e5c-b448-64cf956310ba obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:21:23 compute-0 nova_compute[250018]: 2026-01-20 15:21:23.109 250022 DEBUG nova.virt.libvirt.driver [None req-7b3d52d4-965e-4d09-a13b-04f08c244a60 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] End _get_guest_xml xml=<domain type="kvm">
Jan 20 15:21:23 compute-0 nova_compute[250018]:   <uuid>65aa2157-f058-4e5c-b448-64cf956310ba</uuid>
Jan 20 15:21:23 compute-0 nova_compute[250018]:   <name>instance-000000c6</name>
Jan 20 15:21:23 compute-0 nova_compute[250018]:   <memory>131072</memory>
Jan 20 15:21:23 compute-0 nova_compute[250018]:   <vcpu>1</vcpu>
Jan 20 15:21:23 compute-0 nova_compute[250018]:   <metadata>
Jan 20 15:21:23 compute-0 nova_compute[250018]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 15:21:23 compute-0 nova_compute[250018]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 15:21:23 compute-0 nova_compute[250018]:       <nova:name>tempest-TestNetworkAdvancedServerOps-server-890684112</nova:name>
Jan 20 15:21:23 compute-0 nova_compute[250018]:       <nova:creationTime>2026-01-20 15:21:22</nova:creationTime>
Jan 20 15:21:23 compute-0 nova_compute[250018]:       <nova:flavor name="m1.nano">
Jan 20 15:21:23 compute-0 nova_compute[250018]:         <nova:memory>128</nova:memory>
Jan 20 15:21:23 compute-0 nova_compute[250018]:         <nova:disk>1</nova:disk>
Jan 20 15:21:23 compute-0 nova_compute[250018]:         <nova:swap>0</nova:swap>
Jan 20 15:21:23 compute-0 nova_compute[250018]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 15:21:23 compute-0 nova_compute[250018]:         <nova:vcpus>1</nova:vcpus>
Jan 20 15:21:23 compute-0 nova_compute[250018]:       </nova:flavor>
Jan 20 15:21:23 compute-0 nova_compute[250018]:       <nova:owner>
Jan 20 15:21:23 compute-0 nova_compute[250018]:         <nova:user uuid="442a7a5cb8ea426a82be9762b262d171">tempest-TestNetworkAdvancedServerOps-175282664-project-member</nova:user>
Jan 20 15:21:23 compute-0 nova_compute[250018]:         <nova:project uuid="1ed5feeeafe7448a8efb47ab975b0ead">tempest-TestNetworkAdvancedServerOps-175282664</nova:project>
Jan 20 15:21:23 compute-0 nova_compute[250018]:       </nova:owner>
Jan 20 15:21:23 compute-0 nova_compute[250018]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 15:21:23 compute-0 nova_compute[250018]:       <nova:ports>
Jan 20 15:21:23 compute-0 nova_compute[250018]:         <nova:port uuid="2eefbfcb-7c22-4c45-bb7b-75319242796c">
Jan 20 15:21:23 compute-0 nova_compute[250018]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Jan 20 15:21:23 compute-0 nova_compute[250018]:         </nova:port>
Jan 20 15:21:23 compute-0 nova_compute[250018]:       </nova:ports>
Jan 20 15:21:23 compute-0 nova_compute[250018]:     </nova:instance>
Jan 20 15:21:23 compute-0 nova_compute[250018]:   </metadata>
Jan 20 15:21:23 compute-0 nova_compute[250018]:   <sysinfo type="smbios">
Jan 20 15:21:23 compute-0 nova_compute[250018]:     <system>
Jan 20 15:21:23 compute-0 nova_compute[250018]:       <entry name="manufacturer">RDO</entry>
Jan 20 15:21:23 compute-0 nova_compute[250018]:       <entry name="product">OpenStack Compute</entry>
Jan 20 15:21:23 compute-0 nova_compute[250018]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 15:21:23 compute-0 nova_compute[250018]:       <entry name="serial">65aa2157-f058-4e5c-b448-64cf956310ba</entry>
Jan 20 15:21:23 compute-0 nova_compute[250018]:       <entry name="uuid">65aa2157-f058-4e5c-b448-64cf956310ba</entry>
Jan 20 15:21:23 compute-0 nova_compute[250018]:       <entry name="family">Virtual Machine</entry>
Jan 20 15:21:23 compute-0 nova_compute[250018]:     </system>
Jan 20 15:21:23 compute-0 nova_compute[250018]:   </sysinfo>
Jan 20 15:21:23 compute-0 nova_compute[250018]:   <os>
Jan 20 15:21:23 compute-0 nova_compute[250018]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 15:21:23 compute-0 nova_compute[250018]:     <boot dev="hd"/>
Jan 20 15:21:23 compute-0 nova_compute[250018]:     <smbios mode="sysinfo"/>
Jan 20 15:21:23 compute-0 nova_compute[250018]:   </os>
Jan 20 15:21:23 compute-0 nova_compute[250018]:   <features>
Jan 20 15:21:23 compute-0 nova_compute[250018]:     <acpi/>
Jan 20 15:21:23 compute-0 nova_compute[250018]:     <apic/>
Jan 20 15:21:23 compute-0 nova_compute[250018]:     <vmcoreinfo/>
Jan 20 15:21:23 compute-0 nova_compute[250018]:   </features>
Jan 20 15:21:23 compute-0 nova_compute[250018]:   <clock offset="utc">
Jan 20 15:21:23 compute-0 nova_compute[250018]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 15:21:23 compute-0 nova_compute[250018]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 15:21:23 compute-0 nova_compute[250018]:     <timer name="hpet" present="no"/>
Jan 20 15:21:23 compute-0 nova_compute[250018]:   </clock>
Jan 20 15:21:23 compute-0 nova_compute[250018]:   <cpu mode="custom" match="exact">
Jan 20 15:21:23 compute-0 nova_compute[250018]:     <model>Nehalem</model>
Jan 20 15:21:23 compute-0 nova_compute[250018]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 15:21:23 compute-0 nova_compute[250018]:   </cpu>
Jan 20 15:21:23 compute-0 nova_compute[250018]:   <devices>
Jan 20 15:21:23 compute-0 nova_compute[250018]:     <disk type="network" device="disk">
Jan 20 15:21:23 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 15:21:23 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/65aa2157-f058-4e5c-b448-64cf956310ba_disk">
Jan 20 15:21:23 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 15:21:23 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 15:21:23 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 15:21:23 compute-0 nova_compute[250018]:       </source>
Jan 20 15:21:23 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 15:21:23 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 15:21:23 compute-0 nova_compute[250018]:       </auth>
Jan 20 15:21:23 compute-0 nova_compute[250018]:       <target dev="vda" bus="virtio"/>
Jan 20 15:21:23 compute-0 nova_compute[250018]:     </disk>
Jan 20 15:21:23 compute-0 nova_compute[250018]:     <disk type="network" device="cdrom">
Jan 20 15:21:23 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 15:21:23 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/65aa2157-f058-4e5c-b448-64cf956310ba_disk.config">
Jan 20 15:21:23 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 15:21:23 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 15:21:23 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 15:21:23 compute-0 nova_compute[250018]:       </source>
Jan 20 15:21:23 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 15:21:23 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 15:21:23 compute-0 nova_compute[250018]:       </auth>
Jan 20 15:21:23 compute-0 nova_compute[250018]:       <target dev="sda" bus="sata"/>
Jan 20 15:21:23 compute-0 nova_compute[250018]:     </disk>
Jan 20 15:21:23 compute-0 nova_compute[250018]:     <interface type="ethernet">
Jan 20 15:21:23 compute-0 nova_compute[250018]:       <mac address="fa:16:3e:7a:99:f8"/>
Jan 20 15:21:23 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 15:21:23 compute-0 nova_compute[250018]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 15:21:23 compute-0 nova_compute[250018]:       <mtu size="1442"/>
Jan 20 15:21:23 compute-0 nova_compute[250018]:       <target dev="tap2eefbfcb-7c"/>
Jan 20 15:21:23 compute-0 nova_compute[250018]:     </interface>
Jan 20 15:21:23 compute-0 nova_compute[250018]:     <serial type="pty">
Jan 20 15:21:23 compute-0 nova_compute[250018]:       <log file="/var/lib/nova/instances/65aa2157-f058-4e5c-b448-64cf956310ba/console.log" append="off"/>
Jan 20 15:21:23 compute-0 nova_compute[250018]:     </serial>
Jan 20 15:21:23 compute-0 nova_compute[250018]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 15:21:23 compute-0 nova_compute[250018]:     <video>
Jan 20 15:21:23 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 15:21:23 compute-0 nova_compute[250018]:     </video>
Jan 20 15:21:23 compute-0 nova_compute[250018]:     <input type="tablet" bus="usb"/>
Jan 20 15:21:23 compute-0 nova_compute[250018]:     <rng model="virtio">
Jan 20 15:21:23 compute-0 nova_compute[250018]:       <backend model="random">/dev/urandom</backend>
Jan 20 15:21:23 compute-0 nova_compute[250018]:     </rng>
Jan 20 15:21:23 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root"/>
Jan 20 15:21:23 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:21:23 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:21:23 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:21:23 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:21:23 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:21:23 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:21:23 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:21:23 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:21:23 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:21:23 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:21:23 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:21:23 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:21:23 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:21:23 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:21:23 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:21:23 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:21:23 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:21:23 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:21:23 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:21:23 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:21:23 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:21:23 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:21:23 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:21:23 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:21:23 compute-0 nova_compute[250018]:     <controller type="usb" index="0"/>
Jan 20 15:21:23 compute-0 nova_compute[250018]:     <memballoon model="virtio">
Jan 20 15:21:23 compute-0 nova_compute[250018]:       <stats period="10"/>
Jan 20 15:21:23 compute-0 nova_compute[250018]:     </memballoon>
Jan 20 15:21:23 compute-0 nova_compute[250018]:   </devices>
Jan 20 15:21:23 compute-0 nova_compute[250018]: </domain>
Jan 20 15:21:23 compute-0 nova_compute[250018]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 15:21:23 compute-0 nova_compute[250018]: 2026-01-20 15:21:23.110 250022 DEBUG nova.compute.manager [None req-7b3d52d4-965e-4d09-a13b-04f08c244a60 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] Preparing to wait for external event network-vif-plugged-2eefbfcb-7c22-4c45-bb7b-75319242796c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 20 15:21:23 compute-0 nova_compute[250018]: 2026-01-20 15:21:23.110 250022 DEBUG oslo_concurrency.lockutils [None req-7b3d52d4-965e-4d09-a13b-04f08c244a60 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Acquiring lock "65aa2157-f058-4e5c-b448-64cf956310ba-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:21:23 compute-0 nova_compute[250018]: 2026-01-20 15:21:23.111 250022 DEBUG oslo_concurrency.lockutils [None req-7b3d52d4-965e-4d09-a13b-04f08c244a60 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lock "65aa2157-f058-4e5c-b448-64cf956310ba-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:21:23 compute-0 nova_compute[250018]: 2026-01-20 15:21:23.111 250022 DEBUG oslo_concurrency.lockutils [None req-7b3d52d4-965e-4d09-a13b-04f08c244a60 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lock "65aa2157-f058-4e5c-b448-64cf956310ba-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:21:23 compute-0 nova_compute[250018]: 2026-01-20 15:21:23.111 250022 DEBUG nova.virt.libvirt.vif [None req-7b3d52d4-965e-4d09-a13b-04f08c244a60 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T15:21:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-890684112',display_name='tempest-TestNetworkAdvancedServerOps-server-890684112',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-890684112',id=198,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBESV7XYmzz1neUUH7k/g2EXDk6RAN24jF19myyoRv6wDjFXd5E2VXPhzcf3Q2CFmKA+oZARXh9ZLZnZRzD1iPeEGFbgLb8nt50MGrmQlAcYMGRSCqrzrniFYSfPnybQWNg==',key_name='tempest-TestNetworkAdvancedServerOps-1160843308',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1ed5feeeafe7448a8efb47ab975b0ead',ramdisk_id='',reservation_id='r-n64n905g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-175282664',owner_user_name='tempest-TestNetworkAdvancedServerOps-175282664-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T15:21:17Z,user_data=None,user_id='442a7a5cb8ea426a82be9762b262d171',uuid=65aa2157-f058-4e5c-b448-64cf956310ba,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2eefbfcb-7c22-4c45-bb7b-75319242796c", "address": "fa:16:3e:7a:99:f8", "network": {"id": "6eb3ab38-e480-46b8-ae2d-d286fe61de3c", "bridge": "br-int", "label": "tempest-network-smoke--1810657086", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ed5feeeafe7448a8efb47ab975b0ead", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2eefbfcb-7c", "ovs_interfaceid": "2eefbfcb-7c22-4c45-bb7b-75319242796c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 15:21:23 compute-0 nova_compute[250018]: 2026-01-20 15:21:23.112 250022 DEBUG nova.network.os_vif_util [None req-7b3d52d4-965e-4d09-a13b-04f08c244a60 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Converting VIF {"id": "2eefbfcb-7c22-4c45-bb7b-75319242796c", "address": "fa:16:3e:7a:99:f8", "network": {"id": "6eb3ab38-e480-46b8-ae2d-d286fe61de3c", "bridge": "br-int", "label": "tempest-network-smoke--1810657086", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ed5feeeafe7448a8efb47ab975b0ead", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2eefbfcb-7c", "ovs_interfaceid": "2eefbfcb-7c22-4c45-bb7b-75319242796c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:21:23 compute-0 nova_compute[250018]: 2026-01-20 15:21:23.112 250022 DEBUG nova.network.os_vif_util [None req-7b3d52d4-965e-4d09-a13b-04f08c244a60 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7a:99:f8,bridge_name='br-int',has_traffic_filtering=True,id=2eefbfcb-7c22-4c45-bb7b-75319242796c,network=Network(6eb3ab38-e480-46b8-ae2d-d286fe61de3c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2eefbfcb-7c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:21:23 compute-0 nova_compute[250018]: 2026-01-20 15:21:23.112 250022 DEBUG os_vif [None req-7b3d52d4-965e-4d09-a13b-04f08c244a60 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7a:99:f8,bridge_name='br-int',has_traffic_filtering=True,id=2eefbfcb-7c22-4c45-bb7b-75319242796c,network=Network(6eb3ab38-e480-46b8-ae2d-d286fe61de3c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2eefbfcb-7c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 15:21:23 compute-0 nova_compute[250018]: 2026-01-20 15:21:23.113 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:21:23 compute-0 nova_compute[250018]: 2026-01-20 15:21:23.113 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:21:23 compute-0 nova_compute[250018]: 2026-01-20 15:21:23.114 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:21:23 compute-0 nova_compute[250018]: 2026-01-20 15:21:23.119 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:21:23 compute-0 nova_compute[250018]: 2026-01-20 15:21:23.119 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2eefbfcb-7c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:21:23 compute-0 nova_compute[250018]: 2026-01-20 15:21:23.119 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2eefbfcb-7c, col_values=(('external_ids', {'iface-id': '2eefbfcb-7c22-4c45-bb7b-75319242796c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7a:99:f8', 'vm-uuid': '65aa2157-f058-4e5c-b448-64cf956310ba'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:21:23 compute-0 nova_compute[250018]: 2026-01-20 15:21:23.121 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:21:23 compute-0 NetworkManager[48960]: <info>  [1768922483.1221] manager: (tap2eefbfcb-7c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/342)
Jan 20 15:21:23 compute-0 nova_compute[250018]: 2026-01-20 15:21:23.124 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 15:21:23 compute-0 nova_compute[250018]: 2026-01-20 15:21:23.129 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:21:23 compute-0 nova_compute[250018]: 2026-01-20 15:21:23.130 250022 INFO os_vif [None req-7b3d52d4-965e-4d09-a13b-04f08c244a60 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7a:99:f8,bridge_name='br-int',has_traffic_filtering=True,id=2eefbfcb-7c22-4c45-bb7b-75319242796c,network=Network(6eb3ab38-e480-46b8-ae2d-d286fe61de3c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2eefbfcb-7c')
Jan 20 15:21:23 compute-0 nova_compute[250018]: 2026-01-20 15:21:23.191 250022 DEBUG nova.virt.libvirt.driver [None req-7b3d52d4-965e-4d09-a13b-04f08c244a60 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:21:23 compute-0 nova_compute[250018]: 2026-01-20 15:21:23.192 250022 DEBUG nova.virt.libvirt.driver [None req-7b3d52d4-965e-4d09-a13b-04f08c244a60 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:21:23 compute-0 nova_compute[250018]: 2026-01-20 15:21:23.192 250022 DEBUG nova.virt.libvirt.driver [None req-7b3d52d4-965e-4d09-a13b-04f08c244a60 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] No VIF found with MAC fa:16:3e:7a:99:f8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 15:21:23 compute-0 nova_compute[250018]: 2026-01-20 15:21:23.192 250022 INFO nova.virt.libvirt.driver [None req-7b3d52d4-965e-4d09-a13b-04f08c244a60 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] Using config drive
Jan 20 15:21:23 compute-0 nova_compute[250018]: 2026-01-20 15:21:23.220 250022 DEBUG nova.storage.rbd_utils [None req-7b3d52d4-965e-4d09-a13b-04f08c244a60 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] rbd image 65aa2157-f058-4e5c-b448-64cf956310ba_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:21:23 compute-0 nova_compute[250018]: 2026-01-20 15:21:23.379 250022 DEBUG nova.network.neutron [req-0b3eac2c-3a40-4fda-b222-6577e0f10cd7 req-39843c97-aee5-486c-b2be-c28d0dcaa427 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] Updated VIF entry in instance network info cache for port 2eefbfcb-7c22-4c45-bb7b-75319242796c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 15:21:23 compute-0 nova_compute[250018]: 2026-01-20 15:21:23.380 250022 DEBUG nova.network.neutron [req-0b3eac2c-3a40-4fda-b222-6577e0f10cd7 req-39843c97-aee5-486c-b2be-c28d0dcaa427 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] Updating instance_info_cache with network_info: [{"id": "2eefbfcb-7c22-4c45-bb7b-75319242796c", "address": "fa:16:3e:7a:99:f8", "network": {"id": "6eb3ab38-e480-46b8-ae2d-d286fe61de3c", "bridge": "br-int", "label": "tempest-network-smoke--1810657086", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ed5feeeafe7448a8efb47ab975b0ead", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2eefbfcb-7c", "ovs_interfaceid": "2eefbfcb-7c22-4c45-bb7b-75319242796c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:21:23 compute-0 nova_compute[250018]: 2026-01-20 15:21:23.416 250022 DEBUG oslo_concurrency.lockutils [req-0b3eac2c-3a40-4fda-b222-6577e0f10cd7 req-39843c97-aee5-486c-b2be-c28d0dcaa427 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-65aa2157-f058-4e5c-b448-64cf956310ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:21:23 compute-0 nova_compute[250018]: 2026-01-20 15:21:23.550 250022 INFO nova.virt.libvirt.driver [None req-7b3d52d4-965e-4d09-a13b-04f08c244a60 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] Creating config drive at /var/lib/nova/instances/65aa2157-f058-4e5c-b448-64cf956310ba/disk.config
Jan 20 15:21:23 compute-0 nova_compute[250018]: 2026-01-20 15:21:23.554 250022 DEBUG oslo_concurrency.processutils [None req-7b3d52d4-965e-4d09-a13b-04f08c244a60 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/65aa2157-f058-4e5c-b448-64cf956310ba/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp070fun_o execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:21:23 compute-0 ceph-mon[74360]: pgmap v2973: 321 pgs: 321 active+clean; 170 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 30 KiB/s rd, 1.8 MiB/s wr, 43 op/s
Jan 20 15:21:23 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3554796984' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:21:23 compute-0 nova_compute[250018]: 2026-01-20 15:21:23.686 250022 DEBUG oslo_concurrency.processutils [None req-7b3d52d4-965e-4d09-a13b-04f08c244a60 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/65aa2157-f058-4e5c-b448-64cf956310ba/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp070fun_o" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:21:23 compute-0 nova_compute[250018]: 2026-01-20 15:21:23.714 250022 DEBUG nova.storage.rbd_utils [None req-7b3d52d4-965e-4d09-a13b-04f08c244a60 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] rbd image 65aa2157-f058-4e5c-b448-64cf956310ba_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:21:23 compute-0 nova_compute[250018]: 2026-01-20 15:21:23.717 250022 DEBUG oslo_concurrency.processutils [None req-7b3d52d4-965e-4d09-a13b-04f08c244a60 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/65aa2157-f058-4e5c-b448-64cf956310ba/disk.config 65aa2157-f058-4e5c-b448-64cf956310ba_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:21:23 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:21:23 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:21:23 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:21:23.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:21:23 compute-0 nova_compute[250018]: 2026-01-20 15:21:23.871 250022 DEBUG oslo_concurrency.processutils [None req-7b3d52d4-965e-4d09-a13b-04f08c244a60 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/65aa2157-f058-4e5c-b448-64cf956310ba/disk.config 65aa2157-f058-4e5c-b448-64cf956310ba_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:21:23 compute-0 nova_compute[250018]: 2026-01-20 15:21:23.872 250022 INFO nova.virt.libvirt.driver [None req-7b3d52d4-965e-4d09-a13b-04f08c244a60 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] Deleting local config drive /var/lib/nova/instances/65aa2157-f058-4e5c-b448-64cf956310ba/disk.config because it was imported into RBD.
Jan 20 15:21:23 compute-0 kernel: tap2eefbfcb-7c: entered promiscuous mode
Jan 20 15:21:23 compute-0 NetworkManager[48960]: <info>  [1768922483.9364] manager: (tap2eefbfcb-7c): new Tun device (/org/freedesktop/NetworkManager/Devices/343)
Jan 20 15:21:23 compute-0 nova_compute[250018]: 2026-01-20 15:21:23.985 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:21:23 compute-0 ovn_controller[148666]: 2026-01-20T15:21:23Z|00717|binding|INFO|Claiming lport 2eefbfcb-7c22-4c45-bb7b-75319242796c for this chassis.
Jan 20 15:21:23 compute-0 ovn_controller[148666]: 2026-01-20T15:21:23Z|00718|binding|INFO|2eefbfcb-7c22-4c45-bb7b-75319242796c: Claiming fa:16:3e:7a:99:f8 10.100.0.4
Jan 20 15:21:23 compute-0 nova_compute[250018]: 2026-01-20 15:21:23.990 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:21:23 compute-0 systemd-udevd[369211]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 15:21:24 compute-0 sudo[369186]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:21:24 compute-0 NetworkManager[48960]: <info>  [1768922484.0054] device (tap2eefbfcb-7c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 15:21:24 compute-0 sudo[369186]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:21:24 compute-0 NetworkManager[48960]: <info>  [1768922484.0062] device (tap2eefbfcb-7c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 15:21:24 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:21:24.002 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7a:99:f8 10.100.0.4'], port_security=['fa:16:3e:7a:99:f8 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '65aa2157-f058-4e5c-b448-64cf956310ba', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6eb3ab38-e480-46b8-ae2d-d286fe61de3c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1ed5feeeafe7448a8efb47ab975b0ead', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a9217303-0a2c-4a19-a65b-396cb455c1f3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=93c7ac88-5c28-4609-8d16-8949ae99e457, chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=2eefbfcb-7c22-4c45-bb7b-75319242796c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:21:24 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:21:24.004 160071 INFO neutron.agent.ovn.metadata.agent [-] Port 2eefbfcb-7c22-4c45-bb7b-75319242796c in datapath 6eb3ab38-e480-46b8-ae2d-d286fe61de3c bound to our chassis
Jan 20 15:21:24 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:21:24.006 160071 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6eb3ab38-e480-46b8-ae2d-d286fe61de3c
Jan 20 15:21:24 compute-0 sudo[369186]: pam_unix(sudo:session): session closed for user root
Jan 20 15:21:24 compute-0 systemd-machined[216401]: New machine qemu-87-instance-000000c6.
Jan 20 15:21:24 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:21:24.019 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[9be12ff4-4047-48af-a695-17b29d41b202]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:21:24 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:21:24.020 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6eb3ab38-e1 in ovnmeta-6eb3ab38-e480-46b8-ae2d-d286fe61de3c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 15:21:24 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:21:24.023 257604 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6eb3ab38-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 15:21:24 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:21:24.023 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[1c737ead-16ca-4d8f-b187-bf270a1edfd8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:21:24 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:21:24.024 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[8c64823c-67d7-48e3-a3e3-cb6246e8ca8f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:21:24 compute-0 systemd[1]: Started Virtual Machine qemu-87-instance-000000c6.
Jan 20 15:21:24 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:21:24.038 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[d19c599f-8bf2-4229-a72a-7ed9c4cf0eb2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:21:24 compute-0 ovn_controller[148666]: 2026-01-20T15:21:24Z|00719|binding|INFO|Setting lport 2eefbfcb-7c22-4c45-bb7b-75319242796c ovn-installed in OVS
Jan 20 15:21:24 compute-0 ovn_controller[148666]: 2026-01-20T15:21:24Z|00720|binding|INFO|Setting lport 2eefbfcb-7c22-4c45-bb7b-75319242796c up in Southbound
Jan 20 15:21:24 compute-0 nova_compute[250018]: 2026-01-20 15:21:24.065 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:21:24 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:21:24.067 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[d0671c01-0c37-4295-aec6-23147aad1f23]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:21:24 compute-0 sudo[369220]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:21:24 compute-0 sudo[369220]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:21:24 compute-0 sudo[369220]: pam_unix(sudo:session): session closed for user root
Jan 20 15:21:24 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:21:24.098 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[39f484d6-798e-477b-8338-9abb0d6b13e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:21:24 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:21:24.102 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[280b7452-e5f5-4488-b7e2-91d7cfa143c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:21:24 compute-0 NetworkManager[48960]: <info>  [1768922484.1040] manager: (tap6eb3ab38-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/344)
Jan 20 15:21:24 compute-0 systemd-udevd[369218]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 15:21:24 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:21:24.130 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[201ee17f-97c8-4706-97fc-d8351aaeb7ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:21:24 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:21:24.133 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[9722765a-9612-487c-8317-701955e66e8d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:21:24 compute-0 NetworkManager[48960]: <info>  [1768922484.1555] device (tap6eb3ab38-e0): carrier: link connected
Jan 20 15:21:24 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:21:24.161 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[3639b723-84b4-4288-8a3e-4f792da8752c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:21:24 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:21:24.176 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[73e99a00-a9a9-41bf-99ba-5e8aed85cfde]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6eb3ab38-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:de:42:8f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 227], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 842487, 'reachable_time': 37370, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 369274, 'error': None, 'target': 'ovnmeta-6eb3ab38-e480-46b8-ae2d-d286fe61de3c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:21:24 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:21:24.191 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[bb97490a-c498-4cdb-b41f-f020450d41cb]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fede:428f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 842487, 'tstamp': 842487}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 369275, 'error': None, 'target': 'ovnmeta-6eb3ab38-e480-46b8-ae2d-d286fe61de3c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:21:24 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:21:24.206 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[3dec7735-46fd-413d-b58d-d04f0ec2d823]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6eb3ab38-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:de:42:8f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 227], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 842487, 'reachable_time': 37370, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 369276, 'error': None, 'target': 'ovnmeta-6eb3ab38-e480-46b8-ae2d-d286fe61de3c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:21:24 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:21:24.233 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[e7d4b08e-5eb3-4a76-a9f8-b8b4ed2693c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:21:24 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:21:24.292 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[eba52a6e-f5cd-4029-bc55-720189ce3027]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:21:24 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:21:24.293 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6eb3ab38-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:21:24 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:21:24.293 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:21:24 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:21:24.294 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6eb3ab38-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:21:24 compute-0 kernel: tap6eb3ab38-e0: entered promiscuous mode
Jan 20 15:21:24 compute-0 NetworkManager[48960]: <info>  [1768922484.2964] manager: (tap6eb3ab38-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/345)
Jan 20 15:21:24 compute-0 nova_compute[250018]: 2026-01-20 15:21:24.296 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:21:24 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:21:24.300 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6eb3ab38-e0, col_values=(('external_ids', {'iface-id': 'f6896e14-17f7-4c25-9eea-77cd7f8fe02c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:21:24 compute-0 ovn_controller[148666]: 2026-01-20T15:21:24Z|00721|binding|INFO|Releasing lport f6896e14-17f7-4c25-9eea-77cd7f8fe02c from this chassis (sb_readonly=0)
Jan 20 15:21:24 compute-0 nova_compute[250018]: 2026-01-20 15:21:24.301 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:21:24 compute-0 nova_compute[250018]: 2026-01-20 15:21:24.313 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:21:24 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:21:24.314 160071 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6eb3ab38-e480-46b8-ae2d-d286fe61de3c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6eb3ab38-e480-46b8-ae2d-d286fe61de3c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 15:21:24 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:21:24.315 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[6e77fd79-9d69-4dc3-923a-30ee11b52867]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:21:24 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:21:24.316 160071 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 15:21:24 compute-0 ovn_metadata_agent[160049]: global
Jan 20 15:21:24 compute-0 ovn_metadata_agent[160049]:     log         /dev/log local0 debug
Jan 20 15:21:24 compute-0 ovn_metadata_agent[160049]:     log-tag     haproxy-metadata-proxy-6eb3ab38-e480-46b8-ae2d-d286fe61de3c
Jan 20 15:21:24 compute-0 ovn_metadata_agent[160049]:     user        root
Jan 20 15:21:24 compute-0 ovn_metadata_agent[160049]:     group       root
Jan 20 15:21:24 compute-0 ovn_metadata_agent[160049]:     maxconn     1024
Jan 20 15:21:24 compute-0 ovn_metadata_agent[160049]:     pidfile     /var/lib/neutron/external/pids/6eb3ab38-e480-46b8-ae2d-d286fe61de3c.pid.haproxy
Jan 20 15:21:24 compute-0 ovn_metadata_agent[160049]:     daemon
Jan 20 15:21:24 compute-0 ovn_metadata_agent[160049]: 
Jan 20 15:21:24 compute-0 ovn_metadata_agent[160049]: defaults
Jan 20 15:21:24 compute-0 ovn_metadata_agent[160049]:     log global
Jan 20 15:21:24 compute-0 ovn_metadata_agent[160049]:     mode http
Jan 20 15:21:24 compute-0 ovn_metadata_agent[160049]:     option httplog
Jan 20 15:21:24 compute-0 ovn_metadata_agent[160049]:     option dontlognull
Jan 20 15:21:24 compute-0 ovn_metadata_agent[160049]:     option http-server-close
Jan 20 15:21:24 compute-0 ovn_metadata_agent[160049]:     option forwardfor
Jan 20 15:21:24 compute-0 ovn_metadata_agent[160049]:     retries                 3
Jan 20 15:21:24 compute-0 ovn_metadata_agent[160049]:     timeout http-request    30s
Jan 20 15:21:24 compute-0 ovn_metadata_agent[160049]:     timeout connect         30s
Jan 20 15:21:24 compute-0 ovn_metadata_agent[160049]:     timeout client          32s
Jan 20 15:21:24 compute-0 ovn_metadata_agent[160049]:     timeout server          32s
Jan 20 15:21:24 compute-0 ovn_metadata_agent[160049]:     timeout http-keep-alive 30s
Jan 20 15:21:24 compute-0 ovn_metadata_agent[160049]: 
Jan 20 15:21:24 compute-0 ovn_metadata_agent[160049]: 
Jan 20 15:21:24 compute-0 ovn_metadata_agent[160049]: listen listener
Jan 20 15:21:24 compute-0 ovn_metadata_agent[160049]:     bind 169.254.169.254:80
Jan 20 15:21:24 compute-0 ovn_metadata_agent[160049]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 15:21:24 compute-0 ovn_metadata_agent[160049]:     http-request add-header X-OVN-Network-ID 6eb3ab38-e480-46b8-ae2d-d286fe61de3c
Jan 20 15:21:24 compute-0 ovn_metadata_agent[160049]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 15:21:24 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:21:24.316 160071 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6eb3ab38-e480-46b8-ae2d-d286fe61de3c', 'env', 'PROCESS_TAG=haproxy-6eb3ab38-e480-46b8-ae2d-d286fe61de3c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6eb3ab38-e480-46b8-ae2d-d286fe61de3c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 15:21:24 compute-0 nova_compute[250018]: 2026-01-20 15:21:24.416 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768922484.4159908, 65aa2157-f058-4e5c-b448-64cf956310ba => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:21:24 compute-0 nova_compute[250018]: 2026-01-20 15:21:24.417 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] VM Started (Lifecycle Event)
Jan 20 15:21:24 compute-0 nova_compute[250018]: 2026-01-20 15:21:24.434 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:21:24 compute-0 nova_compute[250018]: 2026-01-20 15:21:24.438 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768922484.4187784, 65aa2157-f058-4e5c-b448-64cf956310ba => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:21:24 compute-0 nova_compute[250018]: 2026-01-20 15:21:24.439 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] VM Paused (Lifecycle Event)
Jan 20 15:21:24 compute-0 nova_compute[250018]: 2026-01-20 15:21:24.458 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:21:24 compute-0 nova_compute[250018]: 2026-01-20 15:21:24.461 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:21:24 compute-0 nova_compute[250018]: 2026-01-20 15:21:24.482 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 15:21:24 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:21:24 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:21:24 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:21:24.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:21:24 compute-0 podman[369350]: 2026-01-20 15:21:24.664225547 +0000 UTC m=+0.049426785 container create 59d1c9efeaf418c21559ec0b01717e93a48eceb45b731a830af419d7e9f21d64 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6eb3ab38-e480-46b8-ae2d-d286fe61de3c, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Jan 20 15:21:24 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/300643478' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:21:24 compute-0 systemd[1]: Started libpod-conmon-59d1c9efeaf418c21559ec0b01717e93a48eceb45b731a830af419d7e9f21d64.scope.
Jan 20 15:21:24 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2974: 321 pgs: 321 active+clean; 189 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 21 KiB/s rd, 2.8 MiB/s wr, 34 op/s
Jan 20 15:21:24 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:21:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/663691cd2f49b6ff492cf063fe0812ee494b21691abd9b0065cb2e42d531ea6c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 15:21:24 compute-0 podman[369350]: 2026-01-20 15:21:24.642171802 +0000 UTC m=+0.027373060 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 15:21:24 compute-0 podman[369350]: 2026-01-20 15:21:24.745535821 +0000 UTC m=+0.130737079 container init 59d1c9efeaf418c21559ec0b01717e93a48eceb45b731a830af419d7e9f21d64 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6eb3ab38-e480-46b8-ae2d-d286fe61de3c, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0)
Jan 20 15:21:24 compute-0 podman[369350]: 2026-01-20 15:21:24.750841134 +0000 UTC m=+0.136042372 container start 59d1c9efeaf418c21559ec0b01717e93a48eceb45b731a830af419d7e9f21d64 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6eb3ab38-e480-46b8-ae2d-d286fe61de3c, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 15:21:24 compute-0 neutron-haproxy-ovnmeta-6eb3ab38-e480-46b8-ae2d-d286fe61de3c[369365]: [NOTICE]   (369369) : New worker (369371) forked
Jan 20 15:21:24 compute-0 neutron-haproxy-ovnmeta-6eb3ab38-e480-46b8-ae2d-d286fe61de3c[369365]: [NOTICE]   (369369) : Loading success.
Jan 20 15:21:25 compute-0 nova_compute[250018]: 2026-01-20 15:21:25.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:21:25 compute-0 nova_compute[250018]: 2026-01-20 15:21:25.051 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 15:21:25 compute-0 nova_compute[250018]: 2026-01-20 15:21:25.051 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 15:21:25 compute-0 nova_compute[250018]: 2026-01-20 15:21:25.069 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 20 15:21:25 compute-0 nova_compute[250018]: 2026-01-20 15:21:25.069 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 20 15:21:25 compute-0 nova_compute[250018]: 2026-01-20 15:21:25.145 250022 DEBUG nova.compute.manager [req-db9315f7-baa6-4b25-9775-b65ec7077f04 req-3922874d-e29c-4383-a4eb-f723d803645d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] Received event network-vif-plugged-2eefbfcb-7c22-4c45-bb7b-75319242796c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:21:25 compute-0 nova_compute[250018]: 2026-01-20 15:21:25.145 250022 DEBUG oslo_concurrency.lockutils [req-db9315f7-baa6-4b25-9775-b65ec7077f04 req-3922874d-e29c-4383-a4eb-f723d803645d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "65aa2157-f058-4e5c-b448-64cf956310ba-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:21:25 compute-0 nova_compute[250018]: 2026-01-20 15:21:25.145 250022 DEBUG oslo_concurrency.lockutils [req-db9315f7-baa6-4b25-9775-b65ec7077f04 req-3922874d-e29c-4383-a4eb-f723d803645d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "65aa2157-f058-4e5c-b448-64cf956310ba-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:21:25 compute-0 nova_compute[250018]: 2026-01-20 15:21:25.146 250022 DEBUG oslo_concurrency.lockutils [req-db9315f7-baa6-4b25-9775-b65ec7077f04 req-3922874d-e29c-4383-a4eb-f723d803645d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "65aa2157-f058-4e5c-b448-64cf956310ba-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:21:25 compute-0 nova_compute[250018]: 2026-01-20 15:21:25.146 250022 DEBUG nova.compute.manager [req-db9315f7-baa6-4b25-9775-b65ec7077f04 req-3922874d-e29c-4383-a4eb-f723d803645d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] Processing event network-vif-plugged-2eefbfcb-7c22-4c45-bb7b-75319242796c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 20 15:21:25 compute-0 nova_compute[250018]: 2026-01-20 15:21:25.146 250022 DEBUG nova.compute.manager [req-db9315f7-baa6-4b25-9775-b65ec7077f04 req-3922874d-e29c-4383-a4eb-f723d803645d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] Received event network-vif-plugged-2eefbfcb-7c22-4c45-bb7b-75319242796c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:21:25 compute-0 nova_compute[250018]: 2026-01-20 15:21:25.146 250022 DEBUG oslo_concurrency.lockutils [req-db9315f7-baa6-4b25-9775-b65ec7077f04 req-3922874d-e29c-4383-a4eb-f723d803645d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "65aa2157-f058-4e5c-b448-64cf956310ba-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:21:25 compute-0 nova_compute[250018]: 2026-01-20 15:21:25.147 250022 DEBUG oslo_concurrency.lockutils [req-db9315f7-baa6-4b25-9775-b65ec7077f04 req-3922874d-e29c-4383-a4eb-f723d803645d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "65aa2157-f058-4e5c-b448-64cf956310ba-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:21:25 compute-0 nova_compute[250018]: 2026-01-20 15:21:25.147 250022 DEBUG oslo_concurrency.lockutils [req-db9315f7-baa6-4b25-9775-b65ec7077f04 req-3922874d-e29c-4383-a4eb-f723d803645d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "65aa2157-f058-4e5c-b448-64cf956310ba-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:21:25 compute-0 nova_compute[250018]: 2026-01-20 15:21:25.147 250022 DEBUG nova.compute.manager [req-db9315f7-baa6-4b25-9775-b65ec7077f04 req-3922874d-e29c-4383-a4eb-f723d803645d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] No waiting events found dispatching network-vif-plugged-2eefbfcb-7c22-4c45-bb7b-75319242796c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:21:25 compute-0 nova_compute[250018]: 2026-01-20 15:21:25.147 250022 WARNING nova.compute.manager [req-db9315f7-baa6-4b25-9775-b65ec7077f04 req-3922874d-e29c-4383-a4eb-f723d803645d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] Received unexpected event network-vif-plugged-2eefbfcb-7c22-4c45-bb7b-75319242796c for instance with vm_state building and task_state spawning.
Jan 20 15:21:25 compute-0 nova_compute[250018]: 2026-01-20 15:21:25.148 250022 DEBUG nova.compute.manager [None req-7b3d52d4-965e-4d09-a13b-04f08c244a60 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 15:21:25 compute-0 nova_compute[250018]: 2026-01-20 15:21:25.153 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768922485.153492, 65aa2157-f058-4e5c-b448-64cf956310ba => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:21:25 compute-0 nova_compute[250018]: 2026-01-20 15:21:25.154 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] VM Resumed (Lifecycle Event)
Jan 20 15:21:25 compute-0 nova_compute[250018]: 2026-01-20 15:21:25.156 250022 DEBUG nova.virt.libvirt.driver [None req-7b3d52d4-965e-4d09-a13b-04f08c244a60 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 15:21:25 compute-0 nova_compute[250018]: 2026-01-20 15:21:25.160 250022 INFO nova.virt.libvirt.driver [-] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] Instance spawned successfully.
Jan 20 15:21:25 compute-0 nova_compute[250018]: 2026-01-20 15:21:25.160 250022 DEBUG nova.virt.libvirt.driver [None req-7b3d52d4-965e-4d09-a13b-04f08c244a60 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 15:21:25 compute-0 nova_compute[250018]: 2026-01-20 15:21:25.189 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:21:25 compute-0 nova_compute[250018]: 2026-01-20 15:21:25.195 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:21:25 compute-0 nova_compute[250018]: 2026-01-20 15:21:25.198 250022 DEBUG nova.virt.libvirt.driver [None req-7b3d52d4-965e-4d09-a13b-04f08c244a60 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:21:25 compute-0 nova_compute[250018]: 2026-01-20 15:21:25.198 250022 DEBUG nova.virt.libvirt.driver [None req-7b3d52d4-965e-4d09-a13b-04f08c244a60 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:21:25 compute-0 nova_compute[250018]: 2026-01-20 15:21:25.198 250022 DEBUG nova.virt.libvirt.driver [None req-7b3d52d4-965e-4d09-a13b-04f08c244a60 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:21:25 compute-0 nova_compute[250018]: 2026-01-20 15:21:25.199 250022 DEBUG nova.virt.libvirt.driver [None req-7b3d52d4-965e-4d09-a13b-04f08c244a60 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:21:25 compute-0 nova_compute[250018]: 2026-01-20 15:21:25.199 250022 DEBUG nova.virt.libvirt.driver [None req-7b3d52d4-965e-4d09-a13b-04f08c244a60 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:21:25 compute-0 nova_compute[250018]: 2026-01-20 15:21:25.199 250022 DEBUG nova.virt.libvirt.driver [None req-7b3d52d4-965e-4d09-a13b-04f08c244a60 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:21:25 compute-0 nova_compute[250018]: 2026-01-20 15:21:25.233 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 15:21:25 compute-0 nova_compute[250018]: 2026-01-20 15:21:25.271 250022 INFO nova.compute.manager [None req-7b3d52d4-965e-4d09-a13b-04f08c244a60 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] Took 7.83 seconds to spawn the instance on the hypervisor.
Jan 20 15:21:25 compute-0 nova_compute[250018]: 2026-01-20 15:21:25.272 250022 DEBUG nova.compute.manager [None req-7b3d52d4-965e-4d09-a13b-04f08c244a60 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:21:25 compute-0 nova_compute[250018]: 2026-01-20 15:21:25.381 250022 INFO nova.compute.manager [None req-7b3d52d4-965e-4d09-a13b-04f08c244a60 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] Took 8.88 seconds to build instance.
Jan 20 15:21:25 compute-0 nova_compute[250018]: 2026-01-20 15:21:25.406 250022 DEBUG oslo_concurrency.lockutils [None req-7b3d52d4-965e-4d09-a13b-04f08c244a60 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lock "65aa2157-f058-4e5c-b448-64cf956310ba" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.043s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:21:25 compute-0 ceph-mon[74360]: pgmap v2974: 321 pgs: 321 active+clean; 189 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 21 KiB/s rd, 2.8 MiB/s wr, 34 op/s
Jan 20 15:21:25 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/3508581801' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:21:25 compute-0 nova_compute[250018]: 2026-01-20 15:21:25.770 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:21:25 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:21:25 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:21:25 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:21:25.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:21:25 compute-0 sudo[369380]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:21:25 compute-0 sudo[369380]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:21:25 compute-0 sudo[369380]: pam_unix(sudo:session): session closed for user root
Jan 20 15:21:25 compute-0 sudo[369405]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:21:25 compute-0 sudo[369405]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:21:25 compute-0 sudo[369405]: pam_unix(sudo:session): session closed for user root
Jan 20 15:21:25 compute-0 sudo[369430]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:21:25 compute-0 sudo[369430]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:21:25 compute-0 sudo[369430]: pam_unix(sudo:session): session closed for user root
Jan 20 15:21:26 compute-0 sudo[369455]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 15:21:26 compute-0 sudo[369455]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:21:26 compute-0 sudo[369455]: pam_unix(sudo:session): session closed for user root
Jan 20 15:21:26 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Jan 20 15:21:26 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 20 15:21:26 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 15:21:26 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:21:26 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 20 15:21:26 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 15:21:26 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 20 15:21:26 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:21:26 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 897869e1-2da6-4e3e-8628-8a40dcd55a9e does not exist
Jan 20 15:21:26 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 39563dcc-17cc-4f2d-afbb-cd3d583931bd does not exist
Jan 20 15:21:26 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev ab5f4299-4fde-4481-87e4-f73ccdcb723d does not exist
Jan 20 15:21:26 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 20 15:21:26 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 15:21:26 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 20 15:21:26 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 15:21:26 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 15:21:26 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:21:26 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:21:26 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:21:26 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:21:26.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:21:26 compute-0 sudo[369511]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:21:26 compute-0 sudo[369511]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:21:26 compute-0 sudo[369511]: pam_unix(sudo:session): session closed for user root
Jan 20 15:21:26 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 20 15:21:26 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:21:26 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 15:21:26 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:21:26 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 15:21:26 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 15:21:26 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:21:26 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2975: 321 pgs: 321 active+clean; 213 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 90 KiB/s rd, 3.6 MiB/s wr, 66 op/s
Jan 20 15:21:26 compute-0 sudo[369536]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:21:26 compute-0 sudo[369536]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:21:26 compute-0 sudo[369536]: pam_unix(sudo:session): session closed for user root
Jan 20 15:21:26 compute-0 sudo[369561]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:21:26 compute-0 sudo[369561]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:21:26 compute-0 sudo[369561]: pam_unix(sudo:session): session closed for user root
Jan 20 15:21:26 compute-0 sudo[369586]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 15:21:26 compute-0 sudo[369586]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:21:26 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:21:27 compute-0 podman[369652]: 2026-01-20 15:21:27.262500356 +0000 UTC m=+0.067963334 container create 8dadf167981428ffff7f2f7e9e27f584a12705f0d18cab3abb9b043e0ee1fd5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_perlman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 20 15:21:27 compute-0 systemd[1]: Started libpod-conmon-8dadf167981428ffff7f2f7e9e27f584a12705f0d18cab3abb9b043e0ee1fd5d.scope.
Jan 20 15:21:27 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:21:27 compute-0 podman[369652]: 2026-01-20 15:21:27.239573038 +0000 UTC m=+0.045036056 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:21:27 compute-0 podman[369652]: 2026-01-20 15:21:27.33825889 +0000 UTC m=+0.143721888 container init 8dadf167981428ffff7f2f7e9e27f584a12705f0d18cab3abb9b043e0ee1fd5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_perlman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 20 15:21:27 compute-0 podman[369652]: 2026-01-20 15:21:27.347047967 +0000 UTC m=+0.152510945 container start 8dadf167981428ffff7f2f7e9e27f584a12705f0d18cab3abb9b043e0ee1fd5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_perlman, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True)
Jan 20 15:21:27 compute-0 podman[369652]: 2026-01-20 15:21:27.350441378 +0000 UTC m=+0.155904376 container attach 8dadf167981428ffff7f2f7e9e27f584a12705f0d18cab3abb9b043e0ee1fd5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_perlman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 20 15:21:27 compute-0 adoring_perlman[369668]: 167 167
Jan 20 15:21:27 compute-0 systemd[1]: libpod-8dadf167981428ffff7f2f7e9e27f584a12705f0d18cab3abb9b043e0ee1fd5d.scope: Deactivated successfully.
Jan 20 15:21:27 compute-0 podman[369652]: 2026-01-20 15:21:27.356548103 +0000 UTC m=+0.162011111 container died 8dadf167981428ffff7f2f7e9e27f584a12705f0d18cab3abb9b043e0ee1fd5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_perlman, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 20 15:21:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-76f19d9971cd43edd8f2e28e472a596276ca3f4ae94f4235ba18fa4564c2b078-merged.mount: Deactivated successfully.
Jan 20 15:21:27 compute-0 podman[369652]: 2026-01-20 15:21:27.411013383 +0000 UTC m=+0.216476361 container remove 8dadf167981428ffff7f2f7e9e27f584a12705f0d18cab3abb9b043e0ee1fd5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_perlman, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 20 15:21:27 compute-0 systemd[1]: libpod-conmon-8dadf167981428ffff7f2f7e9e27f584a12705f0d18cab3abb9b043e0ee1fd5d.scope: Deactivated successfully.
Jan 20 15:21:27 compute-0 podman[369692]: 2026-01-20 15:21:27.579002574 +0000 UTC m=+0.044168973 container create 31cfa6af93800cd5e80fe4c52975a76d4bb05853f9d8ed609a32938e3893bca9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_heisenberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 20 15:21:27 compute-0 systemd[1]: Started libpod-conmon-31cfa6af93800cd5e80fe4c52975a76d4bb05853f9d8ed609a32938e3893bca9.scope.
Jan 20 15:21:27 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:21:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b02317abdc029fa58d7d35d54133d43b058bc7fb6b62a677695ee0c455130e0c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 15:21:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b02317abdc029fa58d7d35d54133d43b058bc7fb6b62a677695ee0c455130e0c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 15:21:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b02317abdc029fa58d7d35d54133d43b058bc7fb6b62a677695ee0c455130e0c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 15:21:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b02317abdc029fa58d7d35d54133d43b058bc7fb6b62a677695ee0c455130e0c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 15:21:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b02317abdc029fa58d7d35d54133d43b058bc7fb6b62a677695ee0c455130e0c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 15:21:27 compute-0 podman[369692]: 2026-01-20 15:21:27.65411613 +0000 UTC m=+0.119282539 container init 31cfa6af93800cd5e80fe4c52975a76d4bb05853f9d8ed609a32938e3893bca9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_heisenberg, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 15:21:27 compute-0 podman[369692]: 2026-01-20 15:21:27.560416392 +0000 UTC m=+0.025582821 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:21:27 compute-0 podman[369692]: 2026-01-20 15:21:27.662524517 +0000 UTC m=+0.127690916 container start 31cfa6af93800cd5e80fe4c52975a76d4bb05853f9d8ed609a32938e3893bca9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_heisenberg, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 20 15:21:27 compute-0 podman[369692]: 2026-01-20 15:21:27.665400784 +0000 UTC m=+0.130567183 container attach 31cfa6af93800cd5e80fe4c52975a76d4bb05853f9d8ed609a32938e3893bca9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_heisenberg, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 15:21:27 compute-0 ceph-mon[74360]: pgmap v2975: 321 pgs: 321 active+clean; 213 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 90 KiB/s rd, 3.6 MiB/s wr, 66 op/s
Jan 20 15:21:27 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:21:27 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:21:27 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:21:27.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:21:28 compute-0 nova_compute[250018]: 2026-01-20 15:21:28.121 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:21:28 compute-0 NetworkManager[48960]: <info>  [1768922488.1533] manager: (patch-br-int-to-provnet-b62c391b-f7a3-4a38-a0df-72ac0383ca74): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/346)
Jan 20 15:21:28 compute-0 NetworkManager[48960]: <info>  [1768922488.1545] manager: (patch-provnet-b62c391b-f7a3-4a38-a0df-72ac0383ca74-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/347)
Jan 20 15:21:28 compute-0 nova_compute[250018]: 2026-01-20 15:21:28.152 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:21:28 compute-0 nova_compute[250018]: 2026-01-20 15:21:28.233 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:21:28 compute-0 ovn_controller[148666]: 2026-01-20T15:21:28Z|00722|binding|INFO|Releasing lport f6896e14-17f7-4c25-9eea-77cd7f8fe02c from this chassis (sb_readonly=0)
Jan 20 15:21:28 compute-0 nova_compute[250018]: 2026-01-20 15:21:28.243 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:21:28 compute-0 kind_heisenberg[369708]: --> passed data devices: 0 physical, 1 LVM
Jan 20 15:21:28 compute-0 kind_heisenberg[369708]: --> relative data size: 1.0
Jan 20 15:21:28 compute-0 kind_heisenberg[369708]: --> All data devices are unavailable
Jan 20 15:21:28 compute-0 systemd[1]: libpod-31cfa6af93800cd5e80fe4c52975a76d4bb05853f9d8ed609a32938e3893bca9.scope: Deactivated successfully.
Jan 20 15:21:28 compute-0 podman[369692]: 2026-01-20 15:21:28.513960184 +0000 UTC m=+0.979126593 container died 31cfa6af93800cd5e80fe4c52975a76d4bb05853f9d8ed609a32938e3893bca9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_heisenberg, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 15:21:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-b02317abdc029fa58d7d35d54133d43b058bc7fb6b62a677695ee0c455130e0c-merged.mount: Deactivated successfully.
Jan 20 15:21:28 compute-0 nova_compute[250018]: 2026-01-20 15:21:28.542 250022 DEBUG nova.compute.manager [req-0cc86bf5-9671-4909-a43a-845703a4c07f req-2fc072ea-3341-4c72-a2f7-ba4954c7deb9 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] Received event network-changed-2eefbfcb-7c22-4c45-bb7b-75319242796c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:21:28 compute-0 nova_compute[250018]: 2026-01-20 15:21:28.543 250022 DEBUG nova.compute.manager [req-0cc86bf5-9671-4909-a43a-845703a4c07f req-2fc072ea-3341-4c72-a2f7-ba4954c7deb9 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] Refreshing instance network info cache due to event network-changed-2eefbfcb-7c22-4c45-bb7b-75319242796c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 15:21:28 compute-0 nova_compute[250018]: 2026-01-20 15:21:28.543 250022 DEBUG oslo_concurrency.lockutils [req-0cc86bf5-9671-4909-a43a-845703a4c07f req-2fc072ea-3341-4c72-a2f7-ba4954c7deb9 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-65aa2157-f058-4e5c-b448-64cf956310ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:21:28 compute-0 nova_compute[250018]: 2026-01-20 15:21:28.544 250022 DEBUG oslo_concurrency.lockutils [req-0cc86bf5-9671-4909-a43a-845703a4c07f req-2fc072ea-3341-4c72-a2f7-ba4954c7deb9 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-65aa2157-f058-4e5c-b448-64cf956310ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:21:28 compute-0 nova_compute[250018]: 2026-01-20 15:21:28.544 250022 DEBUG nova.network.neutron [req-0cc86bf5-9671-4909-a43a-845703a4c07f req-2fc072ea-3341-4c72-a2f7-ba4954c7deb9 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] Refreshing network info cache for port 2eefbfcb-7c22-4c45-bb7b-75319242796c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 15:21:28 compute-0 podman[369692]: 2026-01-20 15:21:28.564248341 +0000 UTC m=+1.029414740 container remove 31cfa6af93800cd5e80fe4c52975a76d4bb05853f9d8ed609a32938e3893bca9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_heisenberg, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 15:21:28 compute-0 systemd[1]: libpod-conmon-31cfa6af93800cd5e80fe4c52975a76d4bb05853f9d8ed609a32938e3893bca9.scope: Deactivated successfully.
Jan 20 15:21:28 compute-0 sudo[369586]: pam_unix(sudo:session): session closed for user root
Jan 20 15:21:28 compute-0 sudo[369736]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:21:28 compute-0 sudo[369736]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:21:28 compute-0 sudo[369736]: pam_unix(sudo:session): session closed for user root
Jan 20 15:21:28 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:21:28 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:21:28 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:21:28.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:21:28 compute-0 sudo[369761]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:21:28 compute-0 sudo[369761]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:21:28 compute-0 sudo[369761]: pam_unix(sudo:session): session closed for user root
Jan 20 15:21:28 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2976: 321 pgs: 321 active+clean; 213 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 90 KiB/s rd, 3.6 MiB/s wr, 66 op/s
Jan 20 15:21:28 compute-0 sudo[369786]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:21:28 compute-0 sudo[369786]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:21:28 compute-0 sudo[369786]: pam_unix(sudo:session): session closed for user root
Jan 20 15:21:28 compute-0 sudo[369811]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- lvm list --format json
Jan 20 15:21:28 compute-0 sudo[369811]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:21:28 compute-0 ceph-mon[74360]: pgmap v2976: 321 pgs: 321 active+clean; 213 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 90 KiB/s rd, 3.6 MiB/s wr, 66 op/s
Jan 20 15:21:29 compute-0 podman[369877]: 2026-01-20 15:21:29.095257044 +0000 UTC m=+0.041170811 container create aa2458a173503b413286ff405767f57709d56b12a2db7658cb16752728cfd680 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_hodgkin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 15:21:29 compute-0 systemd[1]: Started libpod-conmon-aa2458a173503b413286ff405767f57709d56b12a2db7658cb16752728cfd680.scope.
Jan 20 15:21:29 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:21:29 compute-0 podman[369877]: 2026-01-20 15:21:29.077610679 +0000 UTC m=+0.023524466 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:21:29 compute-0 podman[369877]: 2026-01-20 15:21:29.179278071 +0000 UTC m=+0.125191848 container init aa2458a173503b413286ff405767f57709d56b12a2db7658cb16752728cfd680 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_hodgkin, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 20 15:21:29 compute-0 podman[369877]: 2026-01-20 15:21:29.186856336 +0000 UTC m=+0.132770113 container start aa2458a173503b413286ff405767f57709d56b12a2db7658cb16752728cfd680 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_hodgkin, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 20 15:21:29 compute-0 condescending_hodgkin[369893]: 167 167
Jan 20 15:21:29 compute-0 podman[369877]: 2026-01-20 15:21:29.190315919 +0000 UTC m=+0.136229736 container attach aa2458a173503b413286ff405767f57709d56b12a2db7658cb16752728cfd680 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_hodgkin, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 20 15:21:29 compute-0 systemd[1]: libpod-aa2458a173503b413286ff405767f57709d56b12a2db7658cb16752728cfd680.scope: Deactivated successfully.
Jan 20 15:21:29 compute-0 podman[369877]: 2026-01-20 15:21:29.190960987 +0000 UTC m=+0.136874754 container died aa2458a173503b413286ff405767f57709d56b12a2db7658cb16752728cfd680 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_hodgkin, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 20 15:21:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-ac490c1ae6492bb4a21857403da9ad9e91272db5ee89c2c2a394d33595694b04-merged.mount: Deactivated successfully.
Jan 20 15:21:29 compute-0 podman[369877]: 2026-01-20 15:21:29.227771119 +0000 UTC m=+0.173684886 container remove aa2458a173503b413286ff405767f57709d56b12a2db7658cb16752728cfd680 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_hodgkin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 15:21:29 compute-0 systemd[1]: libpod-conmon-aa2458a173503b413286ff405767f57709d56b12a2db7658cb16752728cfd680.scope: Deactivated successfully.
Jan 20 15:21:29 compute-0 podman[369917]: 2026-01-20 15:21:29.393755987 +0000 UTC m=+0.040208116 container create f450dc8dc5f4ab0be2305c918be72c61b6095b3c540c4f65a053430b998e8dde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_gould, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 20 15:21:29 compute-0 systemd[1]: Started libpod-conmon-f450dc8dc5f4ab0be2305c918be72c61b6095b3c540c4f65a053430b998e8dde.scope.
Jan 20 15:21:29 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:21:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d3b1d0a52dae875d89ece0e705d461abdf1908ce7b0e851b74f0ef71d140b38/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 15:21:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d3b1d0a52dae875d89ece0e705d461abdf1908ce7b0e851b74f0ef71d140b38/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 15:21:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d3b1d0a52dae875d89ece0e705d461abdf1908ce7b0e851b74f0ef71d140b38/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 15:21:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d3b1d0a52dae875d89ece0e705d461abdf1908ce7b0e851b74f0ef71d140b38/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 15:21:29 compute-0 podman[369917]: 2026-01-20 15:21:29.3749691 +0000 UTC m=+0.021421249 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:21:29 compute-0 podman[369917]: 2026-01-20 15:21:29.481724559 +0000 UTC m=+0.128176718 container init f450dc8dc5f4ab0be2305c918be72c61b6095b3c540c4f65a053430b998e8dde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_gould, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 15:21:29 compute-0 podman[369917]: 2026-01-20 15:21:29.492252224 +0000 UTC m=+0.138704353 container start f450dc8dc5f4ab0be2305c918be72c61b6095b3c540c4f65a053430b998e8dde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_gould, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 20 15:21:29 compute-0 podman[369917]: 2026-01-20 15:21:29.500429945 +0000 UTC m=+0.146882084 container attach f450dc8dc5f4ab0be2305c918be72c61b6095b3c540c4f65a053430b998e8dde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_gould, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 15:21:29 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:21:29 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:21:29 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:21:29.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:21:29 compute-0 nova_compute[250018]: 2026-01-20 15:21:29.864 250022 DEBUG nova.network.neutron [req-0cc86bf5-9671-4909-a43a-845703a4c07f req-2fc072ea-3341-4c72-a2f7-ba4954c7deb9 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] Updated VIF entry in instance network info cache for port 2eefbfcb-7c22-4c45-bb7b-75319242796c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 15:21:29 compute-0 nova_compute[250018]: 2026-01-20 15:21:29.868 250022 DEBUG nova.network.neutron [req-0cc86bf5-9671-4909-a43a-845703a4c07f req-2fc072ea-3341-4c72-a2f7-ba4954c7deb9 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] Updating instance_info_cache with network_info: [{"id": "2eefbfcb-7c22-4c45-bb7b-75319242796c", "address": "fa:16:3e:7a:99:f8", "network": {"id": "6eb3ab38-e480-46b8-ae2d-d286fe61de3c", "bridge": "br-int", "label": "tempest-network-smoke--1810657086", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ed5feeeafe7448a8efb47ab975b0ead", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2eefbfcb-7c", "ovs_interfaceid": "2eefbfcb-7c22-4c45-bb7b-75319242796c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:21:29 compute-0 nova_compute[250018]: 2026-01-20 15:21:29.888 250022 DEBUG oslo_concurrency.lockutils [req-0cc86bf5-9671-4909-a43a-845703a4c07f req-2fc072ea-3341-4c72-a2f7-ba4954c7deb9 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-65aa2157-f058-4e5c-b448-64cf956310ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:21:30 compute-0 pensive_gould[369933]: {
Jan 20 15:21:30 compute-0 pensive_gould[369933]:     "0": [
Jan 20 15:21:30 compute-0 pensive_gould[369933]:         {
Jan 20 15:21:30 compute-0 pensive_gould[369933]:             "devices": [
Jan 20 15:21:30 compute-0 pensive_gould[369933]:                 "/dev/loop3"
Jan 20 15:21:30 compute-0 pensive_gould[369933]:             ],
Jan 20 15:21:30 compute-0 pensive_gould[369933]:             "lv_name": "ceph_lv0",
Jan 20 15:21:30 compute-0 pensive_gould[369933]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 15:21:30 compute-0 pensive_gould[369933]:             "lv_size": "7511998464",
Jan 20 15:21:30 compute-0 pensive_gould[369933]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e399cf45-e6b6-5393-99f1-75c601d3f188,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afc3740a-ccee-46f8-b343-22648cc89376,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 20 15:21:30 compute-0 pensive_gould[369933]:             "lv_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 15:21:30 compute-0 pensive_gould[369933]:             "name": "ceph_lv0",
Jan 20 15:21:30 compute-0 pensive_gould[369933]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 15:21:30 compute-0 pensive_gould[369933]:             "tags": {
Jan 20 15:21:30 compute-0 pensive_gould[369933]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 15:21:30 compute-0 pensive_gould[369933]:                 "ceph.block_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 15:21:30 compute-0 pensive_gould[369933]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 15:21:30 compute-0 pensive_gould[369933]:                 "ceph.cluster_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 15:21:30 compute-0 pensive_gould[369933]:                 "ceph.cluster_name": "ceph",
Jan 20 15:21:30 compute-0 pensive_gould[369933]:                 "ceph.crush_device_class": "",
Jan 20 15:21:30 compute-0 pensive_gould[369933]:                 "ceph.encrypted": "0",
Jan 20 15:21:30 compute-0 pensive_gould[369933]:                 "ceph.osd_fsid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 15:21:30 compute-0 pensive_gould[369933]:                 "ceph.osd_id": "0",
Jan 20 15:21:30 compute-0 pensive_gould[369933]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 15:21:30 compute-0 pensive_gould[369933]:                 "ceph.type": "block",
Jan 20 15:21:30 compute-0 pensive_gould[369933]:                 "ceph.vdo": "0"
Jan 20 15:21:30 compute-0 pensive_gould[369933]:             },
Jan 20 15:21:30 compute-0 pensive_gould[369933]:             "type": "block",
Jan 20 15:21:30 compute-0 pensive_gould[369933]:             "vg_name": "ceph_vg0"
Jan 20 15:21:30 compute-0 pensive_gould[369933]:         }
Jan 20 15:21:30 compute-0 pensive_gould[369933]:     ]
Jan 20 15:21:30 compute-0 pensive_gould[369933]: }
Jan 20 15:21:30 compute-0 systemd[1]: libpod-f450dc8dc5f4ab0be2305c918be72c61b6095b3c540c4f65a053430b998e8dde.scope: Deactivated successfully.
Jan 20 15:21:30 compute-0 podman[369917]: 2026-01-20 15:21:30.317995939 +0000 UTC m=+0.964448088 container died f450dc8dc5f4ab0be2305c918be72c61b6095b3c540c4f65a053430b998e8dde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_gould, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:21:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-8d3b1d0a52dae875d89ece0e705d461abdf1908ce7b0e851b74f0ef71d140b38-merged.mount: Deactivated successfully.
Jan 20 15:21:30 compute-0 podman[369917]: 2026-01-20 15:21:30.377189965 +0000 UTC m=+1.023642094 container remove f450dc8dc5f4ab0be2305c918be72c61b6095b3c540c4f65a053430b998e8dde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_gould, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 15:21:30 compute-0 systemd[1]: libpod-conmon-f450dc8dc5f4ab0be2305c918be72c61b6095b3c540c4f65a053430b998e8dde.scope: Deactivated successfully.
Jan 20 15:21:30 compute-0 sudo[369811]: pam_unix(sudo:session): session closed for user root
Jan 20 15:21:30 compute-0 sudo[369955]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:21:30 compute-0 sudo[369955]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:21:30 compute-0 sudo[369955]: pam_unix(sudo:session): session closed for user root
Jan 20 15:21:30 compute-0 sudo[369980]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:21:30 compute-0 sudo[369980]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:21:30 compute-0 sudo[369980]: pam_unix(sudo:session): session closed for user root
Jan 20 15:21:30 compute-0 sudo[370005]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:21:30 compute-0 sudo[370005]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:21:30 compute-0 sudo[370005]: pam_unix(sudo:session): session closed for user root
Jan 20 15:21:30 compute-0 sudo[370030]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- raw list --format json
Jan 20 15:21:30 compute-0 sudo[370030]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:21:30 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:21:30 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:21:30 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:21:30.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:21:30 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2977: 321 pgs: 321 active+clean; 213 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 3.6 MiB/s wr, 188 op/s
Jan 20 15:21:30 compute-0 nova_compute[250018]: 2026-01-20 15:21:30.771 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:21:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:21:30.790 160071 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:21:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:21:30.791 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:21:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:21:30.792 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:21:30 compute-0 ceph-mon[74360]: pgmap v2977: 321 pgs: 321 active+clean; 213 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 3.6 MiB/s wr, 188 op/s
Jan 20 15:21:30 compute-0 podman[370096]: 2026-01-20 15:21:30.937734785 +0000 UTC m=+0.038546310 container create 11485e09d36f5c34e910b3b040aafcbf63d96b828896a6e9e5a111f03a45ce24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_pascal, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 15:21:30 compute-0 systemd[1]: Started libpod-conmon-11485e09d36f5c34e910b3b040aafcbf63d96b828896a6e9e5a111f03a45ce24.scope.
Jan 20 15:21:31 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:21:31 compute-0 podman[370096]: 2026-01-20 15:21:30.920693036 +0000 UTC m=+0.021504581 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:21:31 compute-0 podman[370096]: 2026-01-20 15:21:31.025709059 +0000 UTC m=+0.126520594 container init 11485e09d36f5c34e910b3b040aafcbf63d96b828896a6e9e5a111f03a45ce24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_pascal, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 15:21:31 compute-0 podman[370096]: 2026-01-20 15:21:31.034187767 +0000 UTC m=+0.134999292 container start 11485e09d36f5c34e910b3b040aafcbf63d96b828896a6e9e5a111f03a45ce24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_pascal, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 15:21:31 compute-0 podman[370096]: 2026-01-20 15:21:31.039504911 +0000 UTC m=+0.140316456 container attach 11485e09d36f5c34e910b3b040aafcbf63d96b828896a6e9e5a111f03a45ce24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_pascal, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 15:21:31 compute-0 keen_pascal[370113]: 167 167
Jan 20 15:21:31 compute-0 systemd[1]: libpod-11485e09d36f5c34e910b3b040aafcbf63d96b828896a6e9e5a111f03a45ce24.scope: Deactivated successfully.
Jan 20 15:21:31 compute-0 conmon[370113]: conmon 11485e09d36f5c34e910 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-11485e09d36f5c34e910b3b040aafcbf63d96b828896a6e9e5a111f03a45ce24.scope/container/memory.events
Jan 20 15:21:31 compute-0 podman[370096]: 2026-01-20 15:21:31.041968238 +0000 UTC m=+0.142779763 container died 11485e09d36f5c34e910b3b040aafcbf63d96b828896a6e9e5a111f03a45ce24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_pascal, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 15:21:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-a8cd79fd50c031f3fe892d5d0cfa17eae80ac3193e50d4f76a744545a092de5b-merged.mount: Deactivated successfully.
Jan 20 15:21:31 compute-0 podman[370096]: 2026-01-20 15:21:31.079293634 +0000 UTC m=+0.180105159 container remove 11485e09d36f5c34e910b3b040aafcbf63d96b828896a6e9e5a111f03a45ce24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_pascal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:21:31 compute-0 systemd[1]: libpod-conmon-11485e09d36f5c34e910b3b040aafcbf63d96b828896a6e9e5a111f03a45ce24.scope: Deactivated successfully.
Jan 20 15:21:31 compute-0 podman[370137]: 2026-01-20 15:21:31.250354149 +0000 UTC m=+0.049867996 container create 06e038a5c5ef029896d0f7f7d68c9ab7d28c1fda0497a252ae6bcddb167ea79c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_pare, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 15:21:31 compute-0 systemd[1]: Started libpod-conmon-06e038a5c5ef029896d0f7f7d68c9ab7d28c1fda0497a252ae6bcddb167ea79c.scope.
Jan 20 15:21:31 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:21:31 compute-0 podman[370137]: 2026-01-20 15:21:31.222553939 +0000 UTC m=+0.022067806 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:21:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f66318193b3b4ebed0157c0ba0294683f001e66cb8aa3b6cec68e6f53e1b3e30/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 15:21:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f66318193b3b4ebed0157c0ba0294683f001e66cb8aa3b6cec68e6f53e1b3e30/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 15:21:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f66318193b3b4ebed0157c0ba0294683f001e66cb8aa3b6cec68e6f53e1b3e30/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 15:21:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f66318193b3b4ebed0157c0ba0294683f001e66cb8aa3b6cec68e6f53e1b3e30/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 15:21:31 compute-0 podman[370137]: 2026-01-20 15:21:31.349131753 +0000 UTC m=+0.148645630 container init 06e038a5c5ef029896d0f7f7d68c9ab7d28c1fda0497a252ae6bcddb167ea79c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_pare, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2)
Jan 20 15:21:31 compute-0 podman[370137]: 2026-01-20 15:21:31.35753869 +0000 UTC m=+0.157052557 container start 06e038a5c5ef029896d0f7f7d68c9ab7d28c1fda0497a252ae6bcddb167ea79c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_pare, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True)
Jan 20 15:21:31 compute-0 podman[370137]: 2026-01-20 15:21:31.360932102 +0000 UTC m=+0.160445979 container attach 06e038a5c5ef029896d0f7f7d68c9ab7d28c1fda0497a252ae6bcddb167ea79c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_pare, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 15:21:31 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:21:31 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:21:31 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:21:31.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:21:31 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:21:32 compute-0 awesome_pare[370154]: {
Jan 20 15:21:32 compute-0 awesome_pare[370154]:     "afc3740a-ccee-46f8-b343-22648cc89376": {
Jan 20 15:21:32 compute-0 awesome_pare[370154]:         "ceph_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 15:21:32 compute-0 awesome_pare[370154]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 20 15:21:32 compute-0 awesome_pare[370154]:         "osd_id": 0,
Jan 20 15:21:32 compute-0 awesome_pare[370154]:         "osd_uuid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 15:21:32 compute-0 awesome_pare[370154]:         "type": "bluestore"
Jan 20 15:21:32 compute-0 awesome_pare[370154]:     }
Jan 20 15:21:32 compute-0 awesome_pare[370154]: }
Jan 20 15:21:32 compute-0 systemd[1]: libpod-06e038a5c5ef029896d0f7f7d68c9ab7d28c1fda0497a252ae6bcddb167ea79c.scope: Deactivated successfully.
Jan 20 15:21:32 compute-0 podman[370137]: 2026-01-20 15:21:32.175945137 +0000 UTC m=+0.975459004 container died 06e038a5c5ef029896d0f7f7d68c9ab7d28c1fda0497a252ae6bcddb167ea79c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_pare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 15:21:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-f66318193b3b4ebed0157c0ba0294683f001e66cb8aa3b6cec68e6f53e1b3e30-merged.mount: Deactivated successfully.
Jan 20 15:21:32 compute-0 podman[370137]: 2026-01-20 15:21:32.228430673 +0000 UTC m=+1.027944520 container remove 06e038a5c5ef029896d0f7f7d68c9ab7d28c1fda0497a252ae6bcddb167ea79c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_pare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 20 15:21:32 compute-0 systemd[1]: libpod-conmon-06e038a5c5ef029896d0f7f7d68c9ab7d28c1fda0497a252ae6bcddb167ea79c.scope: Deactivated successfully.
Jan 20 15:21:32 compute-0 sudo[370030]: pam_unix(sudo:session): session closed for user root
Jan 20 15:21:32 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 15:21:32 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:21:32 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 15:21:32 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:21:32 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev ca984e83-e0ef-4641-bb5d-9ebaeff3a866 does not exist
Jan 20 15:21:32 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 14ee3a49-f319-432b-a3ca-db5b25ad14e5 does not exist
Jan 20 15:21:32 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev c9def5b6-81fa-4316-9bd9-bd085aec6a71 does not exist
Jan 20 15:21:32 compute-0 sudo[370187]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:21:32 compute-0 sudo[370187]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:21:32 compute-0 sudo[370187]: pam_unix(sudo:session): session closed for user root
Jan 20 15:21:32 compute-0 sudo[370212]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 15:21:32 compute-0 sudo[370212]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:21:32 compute-0 sudo[370212]: pam_unix(sudo:session): session closed for user root
Jan 20 15:21:32 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:21:32 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:21:32 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:21:32.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:21:32 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2978: 321 pgs: 321 active+clean; 213 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 2.0 MiB/s wr, 176 op/s
Jan 20 15:21:33 compute-0 nova_compute[250018]: 2026-01-20 15:21:33.124 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:21:33 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:21:33 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:21:33 compute-0 ceph-mon[74360]: pgmap v2978: 321 pgs: 321 active+clean; 213 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 2.0 MiB/s wr, 176 op/s
Jan 20 15:21:33 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:21:33 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:21:33 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:21:33.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:21:34 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:21:34 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:21:34 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:21:34.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:21:34 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2979: 321 pgs: 321 active+clean; 213 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 168 op/s
Jan 20 15:21:34 compute-0 ceph-mon[74360]: pgmap v2979: 321 pgs: 321 active+clean; 213 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 168 op/s
Jan 20 15:21:35 compute-0 nova_compute[250018]: 2026-01-20 15:21:35.773 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:21:35 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:21:35 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:21:35 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:21:35.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:21:36 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:21:36 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:21:36 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:21:36.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:21:36 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2980: 321 pgs: 321 active+clean; 213 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 762 KiB/s wr, 166 op/s
Jan 20 15:21:36 compute-0 ceph-mon[74360]: pgmap v2980: 321 pgs: 321 active+clean; 213 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 762 KiB/s wr, 166 op/s
Jan 20 15:21:36 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:21:37 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:21:37 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:21:37 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:21:37.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:21:38 compute-0 nova_compute[250018]: 2026-01-20 15:21:38.125 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:21:38 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:21:38 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:21:38 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:21:38.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:21:38 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2981: 321 pgs: 321 active+clean; 213 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 14 KiB/s wr, 134 op/s
Jan 20 15:21:38 compute-0 ceph-mon[74360]: pgmap v2981: 321 pgs: 321 active+clean; 213 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 14 KiB/s wr, 134 op/s
Jan 20 15:21:38 compute-0 ovn_controller[148666]: 2026-01-20T15:21:38Z|00098|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:7a:99:f8 10.100.0.4
Jan 20 15:21:38 compute-0 ovn_controller[148666]: 2026-01-20T15:21:38Z|00099|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:7a:99:f8 10.100.0.4
Jan 20 15:21:39 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:21:39 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:21:39 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:21:39.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:21:40 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:21:40 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:21:40 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:21:40.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:21:40 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2982: 321 pgs: 321 active+clean; 258 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 3.3 MiB/s wr, 243 op/s
Jan 20 15:21:40 compute-0 nova_compute[250018]: 2026-01-20 15:21:40.777 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:21:40 compute-0 ceph-mon[74360]: pgmap v2982: 321 pgs: 321 active+clean; 258 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 3.3 MiB/s wr, 243 op/s
Jan 20 15:21:41 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:21:41 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:21:41 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:21:41.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:21:41 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:21:42 compute-0 podman[370243]: 2026-01-20 15:21:42.4714548 +0000 UTC m=+0.058374566 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 20 15:21:42 compute-0 podman[370242]: 2026-01-20 15:21:42.517335908 +0000 UTC m=+0.102047974 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 20 15:21:42 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:21:42 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:21:42 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:21:42.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:21:42 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2983: 321 pgs: 321 active+clean; 277 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 854 KiB/s rd, 4.2 MiB/s wr, 137 op/s
Jan 20 15:21:42 compute-0 ceph-mon[74360]: pgmap v2983: 321 pgs: 321 active+clean; 277 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 854 KiB/s rd, 4.2 MiB/s wr, 137 op/s
Jan 20 15:21:43 compute-0 nova_compute[250018]: 2026-01-20 15:21:43.126 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:21:43 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:21:43 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:21:43 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:21:43.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:21:44 compute-0 sudo[370288]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:21:44 compute-0 sudo[370288]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:21:44 compute-0 sudo[370288]: pam_unix(sudo:session): session closed for user root
Jan 20 15:21:44 compute-0 sudo[370313]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:21:44 compute-0 sudo[370313]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:21:44 compute-0 sudo[370313]: pam_unix(sudo:session): session closed for user root
Jan 20 15:21:44 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:21:44 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:21:44 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:21:44.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:21:44 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2984: 321 pgs: 321 active+clean; 279 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 654 KiB/s rd, 4.3 MiB/s wr, 129 op/s
Jan 20 15:21:44 compute-0 ceph-mon[74360]: pgmap v2984: 321 pgs: 321 active+clean; 279 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 654 KiB/s rd, 4.3 MiB/s wr, 129 op/s
Jan 20 15:21:45 compute-0 nova_compute[250018]: 2026-01-20 15:21:45.779 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:21:45 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:21:45 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:21:45 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:21:45.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:21:46 compute-0 nova_compute[250018]: 2026-01-20 15:21:46.093 250022 INFO nova.compute.manager [None req-975901db-fdfb-4bf7-aef4-1ee3fb65d26a 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] Get console output
Jan 20 15:21:46 compute-0 nova_compute[250018]: 2026-01-20 15:21:46.097 331017 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Jan 20 15:21:46 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:21:46 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:21:46 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:21:46.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:21:46 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2985: 321 pgs: 321 active+clean; 279 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 654 KiB/s rd, 4.3 MiB/s wr, 130 op/s
Jan 20 15:21:46 compute-0 ceph-mon[74360]: pgmap v2985: 321 pgs: 321 active+clean; 279 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 654 KiB/s rd, 4.3 MiB/s wr, 130 op/s
Jan 20 15:21:46 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:21:47 compute-0 sshd-session[370339]: Invalid user test from 134.122.57.138 port 44718
Jan 20 15:21:47 compute-0 sshd-session[370339]: Connection closed by invalid user test 134.122.57.138 port 44718 [preauth]
Jan 20 15:21:47 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:21:47 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:21:47 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:21:47.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:21:48 compute-0 nova_compute[250018]: 2026-01-20 15:21:48.128 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:21:48 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:21:48 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:21:48 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:21:48.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:21:48 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2986: 321 pgs: 321 active+clean; 279 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 654 KiB/s rd, 4.3 MiB/s wr, 130 op/s
Jan 20 15:21:48 compute-0 ceph-mon[74360]: pgmap v2986: 321 pgs: 321 active+clean; 279 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 654 KiB/s rd, 4.3 MiB/s wr, 130 op/s
Jan 20 15:21:49 compute-0 nova_compute[250018]: 2026-01-20 15:21:49.621 250022 DEBUG oslo_concurrency.lockutils [None req-2dfa6b97-6b8c-4dc9-a6b3-7f0c65eddc84 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Acquiring lock "refresh_cache-65aa2157-f058-4e5c-b448-64cf956310ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:21:49 compute-0 nova_compute[250018]: 2026-01-20 15:21:49.622 250022 DEBUG oslo_concurrency.lockutils [None req-2dfa6b97-6b8c-4dc9-a6b3-7f0c65eddc84 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Acquired lock "refresh_cache-65aa2157-f058-4e5c-b448-64cf956310ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:21:49 compute-0 nova_compute[250018]: 2026-01-20 15:21:49.622 250022 DEBUG nova.network.neutron [None req-2dfa6b97-6b8c-4dc9-a6b3-7f0c65eddc84 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 15:21:49 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/3610823315' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:21:49 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:21:49 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:21:49 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:21:49.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:21:50 compute-0 nova_compute[250018]: 2026-01-20 15:21:50.575 250022 DEBUG nova.network.neutron [None req-2dfa6b97-6b8c-4dc9-a6b3-7f0c65eddc84 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] Updating instance_info_cache with network_info: [{"id": "2eefbfcb-7c22-4c45-bb7b-75319242796c", "address": "fa:16:3e:7a:99:f8", "network": {"id": "6eb3ab38-e480-46b8-ae2d-d286fe61de3c", "bridge": "br-int", "label": "tempest-network-smoke--1810657086", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ed5feeeafe7448a8efb47ab975b0ead", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2eefbfcb-7c", "ovs_interfaceid": "2eefbfcb-7c22-4c45-bb7b-75319242796c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:21:50 compute-0 nova_compute[250018]: 2026-01-20 15:21:50.593 250022 DEBUG oslo_concurrency.lockutils [None req-2dfa6b97-6b8c-4dc9-a6b3-7f0c65eddc84 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Releasing lock "refresh_cache-65aa2157-f058-4e5c-b448-64cf956310ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:21:50 compute-0 nova_compute[250018]: 2026-01-20 15:21:50.671 250022 DEBUG nova.virt.libvirt.driver [None req-2dfa6b97-6b8c-4dc9-a6b3-7f0c65eddc84 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] Starting migrate_disk_and_power_off migrate_disk_and_power_off /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11511
Jan 20 15:21:50 compute-0 nova_compute[250018]: 2026-01-20 15:21:50.671 250022 DEBUG nova.virt.libvirt.volume.remotefs [None req-2dfa6b97-6b8c-4dc9-a6b3-7f0c65eddc84 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Creating file /var/lib/nova/instances/65aa2157-f058-4e5c-b448-64cf956310ba/de4af2c4098640ab929b91459b1461ef.tmp on remote host 192.168.122.101 create_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:79
Jan 20 15:21:50 compute-0 nova_compute[250018]: 2026-01-20 15:21:50.672 250022 DEBUG oslo_concurrency.processutils [None req-2dfa6b97-6b8c-4dc9-a6b3-7f0c65eddc84 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.101 touch /var/lib/nova/instances/65aa2157-f058-4e5c-b448-64cf956310ba/de4af2c4098640ab929b91459b1461ef.tmp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:21:50 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:21:50 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:21:50 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:21:50.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:21:50 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2987: 321 pgs: 321 active+clean; 279 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 654 KiB/s rd, 4.3 MiB/s wr, 130 op/s
Jan 20 15:21:50 compute-0 nova_compute[250018]: 2026-01-20 15:21:50.781 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:21:50 compute-0 ceph-mon[74360]: pgmap v2987: 321 pgs: 321 active+clean; 279 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 654 KiB/s rd, 4.3 MiB/s wr, 130 op/s
Jan 20 15:21:51 compute-0 nova_compute[250018]: 2026-01-20 15:21:51.115 250022 DEBUG oslo_concurrency.processutils [None req-2dfa6b97-6b8c-4dc9-a6b3-7f0c65eddc84 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] CMD "ssh -o BatchMode=yes 192.168.122.101 touch /var/lib/nova/instances/65aa2157-f058-4e5c-b448-64cf956310ba/de4af2c4098640ab929b91459b1461ef.tmp" returned: 1 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:21:51 compute-0 nova_compute[250018]: 2026-01-20 15:21:51.116 250022 DEBUG oslo_concurrency.processutils [None req-2dfa6b97-6b8c-4dc9-a6b3-7f0c65eddc84 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] 'ssh -o BatchMode=yes 192.168.122.101 touch /var/lib/nova/instances/65aa2157-f058-4e5c-b448-64cf956310ba/de4af2c4098640ab929b91459b1461ef.tmp' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Jan 20 15:21:51 compute-0 nova_compute[250018]: 2026-01-20 15:21:51.116 250022 DEBUG nova.virt.libvirt.volume.remotefs [None req-2dfa6b97-6b8c-4dc9-a6b3-7f0c65eddc84 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Creating directory /var/lib/nova/instances/65aa2157-f058-4e5c-b448-64cf956310ba on remote host 192.168.122.101 create_dir /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:91
Jan 20 15:21:51 compute-0 nova_compute[250018]: 2026-01-20 15:21:51.116 250022 DEBUG oslo_concurrency.processutils [None req-2dfa6b97-6b8c-4dc9-a6b3-7f0c65eddc84 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.101 mkdir -p /var/lib/nova/instances/65aa2157-f058-4e5c-b448-64cf956310ba execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:21:51 compute-0 nova_compute[250018]: 2026-01-20 15:21:51.331 250022 DEBUG oslo_concurrency.processutils [None req-2dfa6b97-6b8c-4dc9-a6b3-7f0c65eddc84 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] CMD "ssh -o BatchMode=yes 192.168.122.101 mkdir -p /var/lib/nova/instances/65aa2157-f058-4e5c-b448-64cf956310ba" returned: 0 in 0.215s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:21:51 compute-0 nova_compute[250018]: 2026-01-20 15:21:51.337 250022 DEBUG nova.virt.libvirt.driver [None req-2dfa6b97-6b8c-4dc9-a6b3-7f0c65eddc84 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Jan 20 15:21:51 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:21:51 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:21:51 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:21:51.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:21:51 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:21:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:21:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:21:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:21:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:21:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:21:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:21:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Optimize plan auto_2026-01-20_15:21:52
Jan 20 15:21:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 15:21:52 compute-0 ceph-mgr[74653]: [balancer INFO root] do_upmap
Jan 20 15:21:52 compute-0 ceph-mgr[74653]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.log', 'images', 'default.rgw.control', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.meta', 'volumes', 'backups', '.mgr', 'vms']
Jan 20 15:21:52 compute-0 ceph-mgr[74653]: [balancer INFO root] prepared 0/10 changes
Jan 20 15:21:52 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:21:52 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:21:52 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:21:52.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:21:52 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2988: 321 pgs: 321 active+clean; 279 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 96 KiB/s rd, 1001 KiB/s wr, 21 op/s
Jan 20 15:21:52 compute-0 ceph-mon[74360]: pgmap v2988: 321 pgs: 321 active+clean; 279 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 96 KiB/s rd, 1001 KiB/s wr, 21 op/s
Jan 20 15:21:53 compute-0 nova_compute[250018]: 2026-01-20 15:21:53.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:21:53 compute-0 nova_compute[250018]: 2026-01-20 15:21:53.131 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:21:53 compute-0 kernel: tap2eefbfcb-7c (unregistering): left promiscuous mode
Jan 20 15:21:53 compute-0 NetworkManager[48960]: <info>  [1768922513.5876] device (tap2eefbfcb-7c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 15:21:53 compute-0 ovn_controller[148666]: 2026-01-20T15:21:53Z|00723|binding|INFO|Releasing lport 2eefbfcb-7c22-4c45-bb7b-75319242796c from this chassis (sb_readonly=0)
Jan 20 15:21:53 compute-0 ovn_controller[148666]: 2026-01-20T15:21:53Z|00724|binding|INFO|Setting lport 2eefbfcb-7c22-4c45-bb7b-75319242796c down in Southbound
Jan 20 15:21:53 compute-0 nova_compute[250018]: 2026-01-20 15:21:53.595 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:21:53 compute-0 ovn_controller[148666]: 2026-01-20T15:21:53Z|00725|binding|INFO|Removing iface tap2eefbfcb-7c ovn-installed in OVS
Jan 20 15:21:53 compute-0 nova_compute[250018]: 2026-01-20 15:21:53.617 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:21:53 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:21:53.624 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7a:99:f8 10.100.0.4'], port_security=['fa:16:3e:7a:99:f8 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '65aa2157-f058-4e5c-b448-64cf956310ba', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6eb3ab38-e480-46b8-ae2d-d286fe61de3c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1ed5feeeafe7448a8efb47ab975b0ead', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a9217303-0a2c-4a19-a65b-396cb455c1f3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.219'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=93c7ac88-5c28-4609-8d16-8949ae99e457, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=2eefbfcb-7c22-4c45-bb7b-75319242796c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:21:53 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:21:53.625 160071 INFO neutron.agent.ovn.metadata.agent [-] Port 2eefbfcb-7c22-4c45-bb7b-75319242796c in datapath 6eb3ab38-e480-46b8-ae2d-d286fe61de3c unbound from our chassis
Jan 20 15:21:53 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:21:53.626 160071 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6eb3ab38-e480-46b8-ae2d-d286fe61de3c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 15:21:53 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:21:53.628 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[d07c1b01-c688-46ba-9549-3b738d01caee]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:21:53 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:21:53.628 160071 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6eb3ab38-e480-46b8-ae2d-d286fe61de3c namespace which is not needed anymore
Jan 20 15:21:53 compute-0 systemd[1]: machine-qemu\x2d87\x2dinstance\x2d000000c6.scope: Deactivated successfully.
Jan 20 15:21:53 compute-0 systemd[1]: machine-qemu\x2d87\x2dinstance\x2d000000c6.scope: Consumed 14.578s CPU time.
Jan 20 15:21:53 compute-0 systemd-machined[216401]: Machine qemu-87-instance-000000c6 terminated.
Jan 20 15:21:53 compute-0 neutron-haproxy-ovnmeta-6eb3ab38-e480-46b8-ae2d-d286fe61de3c[369365]: [NOTICE]   (369369) : haproxy version is 2.8.14-c23fe91
Jan 20 15:21:53 compute-0 neutron-haproxy-ovnmeta-6eb3ab38-e480-46b8-ae2d-d286fe61de3c[369365]: [NOTICE]   (369369) : path to executable is /usr/sbin/haproxy
Jan 20 15:21:53 compute-0 neutron-haproxy-ovnmeta-6eb3ab38-e480-46b8-ae2d-d286fe61de3c[369365]: [WARNING]  (369369) : Exiting Master process...
Jan 20 15:21:53 compute-0 neutron-haproxy-ovnmeta-6eb3ab38-e480-46b8-ae2d-d286fe61de3c[369365]: [ALERT]    (369369) : Current worker (369371) exited with code 143 (Terminated)
Jan 20 15:21:53 compute-0 neutron-haproxy-ovnmeta-6eb3ab38-e480-46b8-ae2d-d286fe61de3c[369365]: [WARNING]  (369369) : All workers exited. Exiting... (0)
Jan 20 15:21:53 compute-0 systemd[1]: libpod-59d1c9efeaf418c21559ec0b01717e93a48eceb45b731a830af419d7e9f21d64.scope: Deactivated successfully.
Jan 20 15:21:53 compute-0 conmon[369365]: conmon 59d1c9efeaf418c21559 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-59d1c9efeaf418c21559ec0b01717e93a48eceb45b731a830af419d7e9f21d64.scope/container/memory.events
Jan 20 15:21:53 compute-0 podman[370372]: 2026-01-20 15:21:53.756465736 +0000 UTC m=+0.042760675 container died 59d1c9efeaf418c21559ec0b01717e93a48eceb45b731a830af419d7e9f21d64 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6eb3ab38-e480-46b8-ae2d-d286fe61de3c, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Jan 20 15:21:53 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-59d1c9efeaf418c21559ec0b01717e93a48eceb45b731a830af419d7e9f21d64-userdata-shm.mount: Deactivated successfully.
Jan 20 15:21:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-663691cd2f49b6ff492cf063fe0812ee494b21691abd9b0065cb2e42d531ea6c-merged.mount: Deactivated successfully.
Jan 20 15:21:53 compute-0 podman[370372]: 2026-01-20 15:21:53.795119718 +0000 UTC m=+0.081414647 container cleanup 59d1c9efeaf418c21559ec0b01717e93a48eceb45b731a830af419d7e9f21d64 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6eb3ab38-e480-46b8-ae2d-d286fe61de3c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 15:21:53 compute-0 systemd[1]: libpod-conmon-59d1c9efeaf418c21559ec0b01717e93a48eceb45b731a830af419d7e9f21d64.scope: Deactivated successfully.
Jan 20 15:21:53 compute-0 nova_compute[250018]: 2026-01-20 15:21:53.871 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:21:53 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:21:53 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:21:53 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:21:53.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:21:53 compute-0 nova_compute[250018]: 2026-01-20 15:21:53.876 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:21:53 compute-0 podman[370404]: 2026-01-20 15:21:53.911178139 +0000 UTC m=+0.040605058 container remove 59d1c9efeaf418c21559ec0b01717e93a48eceb45b731a830af419d7e9f21d64 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6eb3ab38-e480-46b8-ae2d-d286fe61de3c, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2)
Jan 20 15:21:53 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:21:53.917 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[6f66fd70-7524-4fef-bdec-021bfe8233b1]: (4, ('Tue Jan 20 03:21:53 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-6eb3ab38-e480-46b8-ae2d-d286fe61de3c (59d1c9efeaf418c21559ec0b01717e93a48eceb45b731a830af419d7e9f21d64)\n59d1c9efeaf418c21559ec0b01717e93a48eceb45b731a830af419d7e9f21d64\nTue Jan 20 03:21:53 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-6eb3ab38-e480-46b8-ae2d-d286fe61de3c (59d1c9efeaf418c21559ec0b01717e93a48eceb45b731a830af419d7e9f21d64)\n59d1c9efeaf418c21559ec0b01717e93a48eceb45b731a830af419d7e9f21d64\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:21:53 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:21:53.918 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[d6aaace6-0a82-4f4c-b6c3-2626269d7169]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:21:53 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:21:53.919 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6eb3ab38-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:21:53 compute-0 nova_compute[250018]: 2026-01-20 15:21:53.920 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:21:53 compute-0 kernel: tap6eb3ab38-e0: left promiscuous mode
Jan 20 15:21:53 compute-0 nova_compute[250018]: 2026-01-20 15:21:53.936 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:21:53 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:21:53.939 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[17fed794-724e-4c27-b38d-c9353910fe66]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:21:53 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:21:53.966 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[1b339db2-bcb4-45d3-b579-deb786985c3e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:21:53 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:21:53.967 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[ddca5148-4512-4a36-8bec-fa49cde5c036]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:21:53 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:21:53.982 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[82172ca9-1670-45fe-bb61-2c2acc34f18e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 842481, 'reachable_time': 40157, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 370431, 'error': None, 'target': 'ovnmeta-6eb3ab38-e480-46b8-ae2d-d286fe61de3c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:21:53 compute-0 systemd[1]: run-netns-ovnmeta\x2d6eb3ab38\x2de480\x2d46b8\x2dae2d\x2dd286fe61de3c.mount: Deactivated successfully.
Jan 20 15:21:53 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:21:53.985 160517 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6eb3ab38-e480-46b8-ae2d-d286fe61de3c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 15:21:53 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:21:53.985 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[e7c53c4e-52d2-49c8-8340-a3fd7eda9ce5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:21:54 compute-0 nova_compute[250018]: 2026-01-20 15:21:54.238 250022 DEBUG nova.compute.manager [req-384c106f-c8da-4c7d-9edd-b7cf32361a1d req-5b8050a1-452d-45ac-bafa-e98f6c2fa345 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] Received event network-vif-unplugged-2eefbfcb-7c22-4c45-bb7b-75319242796c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:21:54 compute-0 nova_compute[250018]: 2026-01-20 15:21:54.239 250022 DEBUG oslo_concurrency.lockutils [req-384c106f-c8da-4c7d-9edd-b7cf32361a1d req-5b8050a1-452d-45ac-bafa-e98f6c2fa345 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "65aa2157-f058-4e5c-b448-64cf956310ba-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:21:54 compute-0 nova_compute[250018]: 2026-01-20 15:21:54.239 250022 DEBUG oslo_concurrency.lockutils [req-384c106f-c8da-4c7d-9edd-b7cf32361a1d req-5b8050a1-452d-45ac-bafa-e98f6c2fa345 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "65aa2157-f058-4e5c-b448-64cf956310ba-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:21:54 compute-0 nova_compute[250018]: 2026-01-20 15:21:54.239 250022 DEBUG oslo_concurrency.lockutils [req-384c106f-c8da-4c7d-9edd-b7cf32361a1d req-5b8050a1-452d-45ac-bafa-e98f6c2fa345 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "65aa2157-f058-4e5c-b448-64cf956310ba-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:21:54 compute-0 nova_compute[250018]: 2026-01-20 15:21:54.240 250022 DEBUG nova.compute.manager [req-384c106f-c8da-4c7d-9edd-b7cf32361a1d req-5b8050a1-452d-45ac-bafa-e98f6c2fa345 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] No waiting events found dispatching network-vif-unplugged-2eefbfcb-7c22-4c45-bb7b-75319242796c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:21:54 compute-0 nova_compute[250018]: 2026-01-20 15:21:54.240 250022 WARNING nova.compute.manager [req-384c106f-c8da-4c7d-9edd-b7cf32361a1d req-5b8050a1-452d-45ac-bafa-e98f6c2fa345 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] Received unexpected event network-vif-unplugged-2eefbfcb-7c22-4c45-bb7b-75319242796c for instance with vm_state active and task_state resize_migrating.
Jan 20 15:21:54 compute-0 nova_compute[250018]: 2026-01-20 15:21:54.354 250022 INFO nova.virt.libvirt.driver [None req-2dfa6b97-6b8c-4dc9-a6b3-7f0c65eddc84 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] Instance shutdown successfully after 3 seconds.
Jan 20 15:21:54 compute-0 nova_compute[250018]: 2026-01-20 15:21:54.358 250022 INFO nova.virt.libvirt.driver [-] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] Instance destroyed successfully.
Jan 20 15:21:54 compute-0 nova_compute[250018]: 2026-01-20 15:21:54.359 250022 DEBUG nova.virt.libvirt.vif [None req-2dfa6b97-6b8c-4dc9-a6b3-7f0c65eddc84 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T15:21:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-890684112',display_name='tempest-TestNetworkAdvancedServerOps-server-890684112',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-890684112',id=198,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBESV7XYmzz1neUUH7k/g2EXDk6RAN24jF19myyoRv6wDjFXd5E2VXPhzcf3Q2CFmKA+oZARXh9ZLZnZRzD1iPeEGFbgLb8nt50MGrmQlAcYMGRSCqrzrniFYSfPnybQWNg==',key_name='tempest-TestNetworkAdvancedServerOps-1160843308',keypairs=<?>,launch_index=0,launched_at=2026-01-20T15:21:25Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='1ed5feeeafe7448a8efb47ab975b0ead',ramdisk_id='',reservation_id='r-n64n905g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestNetworkAdvancedServerOps-175282664',owner_user_name='tempest-TestNetworkAdvancedServerOps-175282664-project-member'},tags=<?>,task_state='resize_migrating',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T15:21:48Z,user_data=None,user_id='442a7a5cb8ea426a82be9762b262d171',uuid=65aa2157-f058-4e5c-b448-64cf956310ba,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2eefbfcb-7c22-4c45-bb7b-75319242796c", "address": "fa:16:3e:7a:99:f8", "network": {"id": "6eb3ab38-e480-46b8-ae2d-d286fe61de3c", "bridge": "br-int", "label": "tempest-network-smoke--1810657086", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": 
"10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--1810657086", "vif_mac": "fa:16:3e:7a:99:f8"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ed5feeeafe7448a8efb47ab975b0ead", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2eefbfcb-7c", "ovs_interfaceid": "2eefbfcb-7c22-4c45-bb7b-75319242796c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 15:21:54 compute-0 nova_compute[250018]: 2026-01-20 15:21:54.360 250022 DEBUG nova.network.os_vif_util [None req-2dfa6b97-6b8c-4dc9-a6b3-7f0c65eddc84 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Converting VIF {"id": "2eefbfcb-7c22-4c45-bb7b-75319242796c", "address": "fa:16:3e:7a:99:f8", "network": {"id": "6eb3ab38-e480-46b8-ae2d-d286fe61de3c", "bridge": "br-int", "label": "tempest-network-smoke--1810657086", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--1810657086", "vif_mac": "fa:16:3e:7a:99:f8"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ed5feeeafe7448a8efb47ab975b0ead", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2eefbfcb-7c", "ovs_interfaceid": "2eefbfcb-7c22-4c45-bb7b-75319242796c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:21:54 compute-0 nova_compute[250018]: 2026-01-20 15:21:54.360 250022 DEBUG nova.network.os_vif_util [None req-2dfa6b97-6b8c-4dc9-a6b3-7f0c65eddc84 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:7a:99:f8,bridge_name='br-int',has_traffic_filtering=True,id=2eefbfcb-7c22-4c45-bb7b-75319242796c,network=Network(6eb3ab38-e480-46b8-ae2d-d286fe61de3c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2eefbfcb-7c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:21:54 compute-0 nova_compute[250018]: 2026-01-20 15:21:54.361 250022 DEBUG os_vif [None req-2dfa6b97-6b8c-4dc9-a6b3-7f0c65eddc84 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:7a:99:f8,bridge_name='br-int',has_traffic_filtering=True,id=2eefbfcb-7c22-4c45-bb7b-75319242796c,network=Network(6eb3ab38-e480-46b8-ae2d-d286fe61de3c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2eefbfcb-7c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 15:21:54 compute-0 nova_compute[250018]: 2026-01-20 15:21:54.362 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:21:54 compute-0 nova_compute[250018]: 2026-01-20 15:21:54.363 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2eefbfcb-7c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:21:54 compute-0 nova_compute[250018]: 2026-01-20 15:21:54.364 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:21:54 compute-0 nova_compute[250018]: 2026-01-20 15:21:54.366 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:21:54 compute-0 nova_compute[250018]: 2026-01-20 15:21:54.368 250022 INFO os_vif [None req-2dfa6b97-6b8c-4dc9-a6b3-7f0c65eddc84 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:7a:99:f8,bridge_name='br-int',has_traffic_filtering=True,id=2eefbfcb-7c22-4c45-bb7b-75319242796c,network=Network(6eb3ab38-e480-46b8-ae2d-d286fe61de3c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2eefbfcb-7c')
Jan 20 15:21:54 compute-0 nova_compute[250018]: 2026-01-20 15:21:54.372 250022 DEBUG nova.virt.libvirt.driver [None req-2dfa6b97-6b8c-4dc9-a6b3-7f0c65eddc84 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] skipping disk for instance-000000c6 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 15:21:54 compute-0 nova_compute[250018]: 2026-01-20 15:21:54.372 250022 DEBUG nova.virt.libvirt.driver [None req-2dfa6b97-6b8c-4dc9-a6b3-7f0c65eddc84 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] skipping disk for instance-000000c6 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 15:21:54 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:21:54 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:21:54 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:21:54.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:21:54 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2989: 321 pgs: 321 active+clean; 279 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 12 KiB/s rd, 78 KiB/s wr, 6 op/s
Jan 20 15:21:54 compute-0 ceph-mon[74360]: pgmap v2989: 321 pgs: 321 active+clean; 279 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 12 KiB/s rd, 78 KiB/s wr, 6 op/s
Jan 20 15:21:55 compute-0 nova_compute[250018]: 2026-01-20 15:21:55.025 250022 DEBUG neutronclient.v2_0.client [None req-2dfa6b97-6b8c-4dc9-a6b3-7f0c65eddc84 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Error message: {"NeutronError": {"type": "PortBindingNotFound", "message": "Binding for port 2eefbfcb-7c22-4c45-bb7b-75319242796c for host compute-1.ctlplane.example.com could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262
Jan 20 15:21:55 compute-0 nova_compute[250018]: 2026-01-20 15:21:55.782 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:21:55 compute-0 nova_compute[250018]: 2026-01-20 15:21:55.867 250022 DEBUG oslo_concurrency.lockutils [None req-2dfa6b97-6b8c-4dc9-a6b3-7f0c65eddc84 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Acquiring lock "65aa2157-f058-4e5c-b448-64cf956310ba-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:21:55 compute-0 nova_compute[250018]: 2026-01-20 15:21:55.867 250022 DEBUG oslo_concurrency.lockutils [None req-2dfa6b97-6b8c-4dc9-a6b3-7f0c65eddc84 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lock "65aa2157-f058-4e5c-b448-64cf956310ba-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:21:55 compute-0 nova_compute[250018]: 2026-01-20 15:21:55.868 250022 DEBUG oslo_concurrency.lockutils [None req-2dfa6b97-6b8c-4dc9-a6b3-7f0c65eddc84 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lock "65aa2157-f058-4e5c-b448-64cf956310ba-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:21:55 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:21:55 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:21:55 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:21:55.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:21:56 compute-0 nova_compute[250018]: 2026-01-20 15:21:56.403 250022 DEBUG nova.compute.manager [req-c0dd9026-9437-4203-a4c8-b0b1f917be34 req-116c3512-6ea9-4060-b99e-5eeb18df2d6b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] Received event network-vif-plugged-2eefbfcb-7c22-4c45-bb7b-75319242796c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:21:56 compute-0 nova_compute[250018]: 2026-01-20 15:21:56.403 250022 DEBUG oslo_concurrency.lockutils [req-c0dd9026-9437-4203-a4c8-b0b1f917be34 req-116c3512-6ea9-4060-b99e-5eeb18df2d6b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "65aa2157-f058-4e5c-b448-64cf956310ba-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:21:56 compute-0 nova_compute[250018]: 2026-01-20 15:21:56.404 250022 DEBUG oslo_concurrency.lockutils [req-c0dd9026-9437-4203-a4c8-b0b1f917be34 req-116c3512-6ea9-4060-b99e-5eeb18df2d6b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "65aa2157-f058-4e5c-b448-64cf956310ba-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:21:56 compute-0 nova_compute[250018]: 2026-01-20 15:21:56.404 250022 DEBUG oslo_concurrency.lockutils [req-c0dd9026-9437-4203-a4c8-b0b1f917be34 req-116c3512-6ea9-4060-b99e-5eeb18df2d6b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "65aa2157-f058-4e5c-b448-64cf956310ba-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:21:56 compute-0 nova_compute[250018]: 2026-01-20 15:21:56.404 250022 DEBUG nova.compute.manager [req-c0dd9026-9437-4203-a4c8-b0b1f917be34 req-116c3512-6ea9-4060-b99e-5eeb18df2d6b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] No waiting events found dispatching network-vif-plugged-2eefbfcb-7c22-4c45-bb7b-75319242796c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:21:56 compute-0 nova_compute[250018]: 2026-01-20 15:21:56.404 250022 WARNING nova.compute.manager [req-c0dd9026-9437-4203-a4c8-b0b1f917be34 req-116c3512-6ea9-4060-b99e-5eeb18df2d6b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] Received unexpected event network-vif-plugged-2eefbfcb-7c22-4c45-bb7b-75319242796c for instance with vm_state active and task_state resize_migrated.
Jan 20 15:21:56 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:21:56 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:21:56 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:21:56.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:21:56 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2990: 321 pgs: 321 active+clean; 279 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.8 KiB/s rd, 44 KiB/s wr, 4 op/s
Jan 20 15:21:56 compute-0 ceph-mon[74360]: pgmap v2990: 321 pgs: 321 active+clean; 279 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.8 KiB/s rd, 44 KiB/s wr, 4 op/s
Jan 20 15:21:56 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:21:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 15:21:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 15:21:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 15:21:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 15:21:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 15:21:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 15:21:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 15:21:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 15:21:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 15:21:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 15:21:57 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:21:57 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:21:57 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:21:57.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:21:58 compute-0 nova_compute[250018]: 2026-01-20 15:21:58.414 250022 DEBUG nova.compute.manager [req-77276c5d-236f-4ee6-af46-35521532af4d req-d09a6e41-cd28-413f-ae3a-b5d4adcb2c3c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] Received event network-changed-2eefbfcb-7c22-4c45-bb7b-75319242796c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:21:58 compute-0 nova_compute[250018]: 2026-01-20 15:21:58.414 250022 DEBUG nova.compute.manager [req-77276c5d-236f-4ee6-af46-35521532af4d req-d09a6e41-cd28-413f-ae3a-b5d4adcb2c3c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] Refreshing instance network info cache due to event network-changed-2eefbfcb-7c22-4c45-bb7b-75319242796c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 15:21:58 compute-0 nova_compute[250018]: 2026-01-20 15:21:58.415 250022 DEBUG oslo_concurrency.lockutils [req-77276c5d-236f-4ee6-af46-35521532af4d req-d09a6e41-cd28-413f-ae3a-b5d4adcb2c3c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-65aa2157-f058-4e5c-b448-64cf956310ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:21:58 compute-0 nova_compute[250018]: 2026-01-20 15:21:58.415 250022 DEBUG oslo_concurrency.lockutils [req-77276c5d-236f-4ee6-af46-35521532af4d req-d09a6e41-cd28-413f-ae3a-b5d4adcb2c3c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-65aa2157-f058-4e5c-b448-64cf956310ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:21:58 compute-0 nova_compute[250018]: 2026-01-20 15:21:58.415 250022 DEBUG nova.network.neutron [req-77276c5d-236f-4ee6-af46-35521532af4d req-d09a6e41-cd28-413f-ae3a-b5d4adcb2c3c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] Refreshing network info cache for port 2eefbfcb-7c22-4c45-bb7b-75319242796c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 15:21:58 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:21:58 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:21:58 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:21:58.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:21:58 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2991: 321 pgs: 321 active+clean; 279 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.8 KiB/s rd, 30 KiB/s wr, 4 op/s
Jan 20 15:21:58 compute-0 ceph-mon[74360]: pgmap v2991: 321 pgs: 321 active+clean; 279 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.8 KiB/s rd, 30 KiB/s wr, 4 op/s
Jan 20 15:21:59 compute-0 nova_compute[250018]: 2026-01-20 15:21:59.364 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:21:59 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:21:59 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:21:59 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:21:59.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:22:00 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:22:00 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:22:00 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:22:00.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:22:00 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2992: 321 pgs: 321 active+clean; 279 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.8 KiB/s rd, 30 KiB/s wr, 4 op/s
Jan 20 15:22:00 compute-0 nova_compute[250018]: 2026-01-20 15:22:00.784 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:22:00 compute-0 ceph-mon[74360]: pgmap v2992: 321 pgs: 321 active+clean; 279 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.8 KiB/s rd, 30 KiB/s wr, 4 op/s
Jan 20 15:22:01 compute-0 nova_compute[250018]: 2026-01-20 15:22:01.585 250022 DEBUG nova.network.neutron [req-77276c5d-236f-4ee6-af46-35521532af4d req-d09a6e41-cd28-413f-ae3a-b5d4adcb2c3c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] Updated VIF entry in instance network info cache for port 2eefbfcb-7c22-4c45-bb7b-75319242796c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 15:22:01 compute-0 nova_compute[250018]: 2026-01-20 15:22:01.586 250022 DEBUG nova.network.neutron [req-77276c5d-236f-4ee6-af46-35521532af4d req-d09a6e41-cd28-413f-ae3a-b5d4adcb2c3c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] Updating instance_info_cache with network_info: [{"id": "2eefbfcb-7c22-4c45-bb7b-75319242796c", "address": "fa:16:3e:7a:99:f8", "network": {"id": "6eb3ab38-e480-46b8-ae2d-d286fe61de3c", "bridge": "br-int", "label": "tempest-network-smoke--1810657086", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ed5feeeafe7448a8efb47ab975b0ead", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2eefbfcb-7c", "ovs_interfaceid": "2eefbfcb-7c22-4c45-bb7b-75319242796c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:22:01 compute-0 nova_compute[250018]: 2026-01-20 15:22:01.603 250022 DEBUG oslo_concurrency.lockutils [req-77276c5d-236f-4ee6-af46-35521532af4d req-d09a6e41-cd28-413f-ae3a-b5d4adcb2c3c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-65aa2157-f058-4e5c-b448-64cf956310ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:22:01 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e415 do_prune osdmap full prune enabled
Jan 20 15:22:01 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e416 e416: 3 total, 3 up, 3 in
Jan 20 15:22:01 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e416: 3 total, 3 up, 3 in
Jan 20 15:22:01 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:22:01 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:22:01 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:22:01.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:22:01 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:22:02 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:22:02 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:22:02 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:22:02.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:22:02 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2994: 321 pgs: 321 active+clean; 279 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.4 KiB/s rd, 24 KiB/s wr, 4 op/s
Jan 20 15:22:02 compute-0 ceph-mon[74360]: osdmap e416: 3 total, 3 up, 3 in
Jan 20 15:22:02 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/1804081147' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:22:02 compute-0 ceph-mon[74360]: pgmap v2994: 321 pgs: 321 active+clean; 279 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.4 KiB/s rd, 24 KiB/s wr, 4 op/s
Jan 20 15:22:03 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 15:22:03 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1944454760' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:22:03 compute-0 nova_compute[250018]: 2026-01-20 15:22:03.767 250022 DEBUG nova.compute.manager [req-83a6fbbf-3571-4f8e-88b6-d8bc4a976c99 req-b7ac87d6-24d0-4a16-b5ee-50c2882cb493 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] Received event network-vif-plugged-2eefbfcb-7c22-4c45-bb7b-75319242796c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:22:03 compute-0 nova_compute[250018]: 2026-01-20 15:22:03.767 250022 DEBUG oslo_concurrency.lockutils [req-83a6fbbf-3571-4f8e-88b6-d8bc4a976c99 req-b7ac87d6-24d0-4a16-b5ee-50c2882cb493 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "65aa2157-f058-4e5c-b448-64cf956310ba-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:22:03 compute-0 nova_compute[250018]: 2026-01-20 15:22:03.768 250022 DEBUG oslo_concurrency.lockutils [req-83a6fbbf-3571-4f8e-88b6-d8bc4a976c99 req-b7ac87d6-24d0-4a16-b5ee-50c2882cb493 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "65aa2157-f058-4e5c-b448-64cf956310ba-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:22:03 compute-0 nova_compute[250018]: 2026-01-20 15:22:03.768 250022 DEBUG oslo_concurrency.lockutils [req-83a6fbbf-3571-4f8e-88b6-d8bc4a976c99 req-b7ac87d6-24d0-4a16-b5ee-50c2882cb493 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "65aa2157-f058-4e5c-b448-64cf956310ba-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:22:03 compute-0 nova_compute[250018]: 2026-01-20 15:22:03.768 250022 DEBUG nova.compute.manager [req-83a6fbbf-3571-4f8e-88b6-d8bc4a976c99 req-b7ac87d6-24d0-4a16-b5ee-50c2882cb493 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] No waiting events found dispatching network-vif-plugged-2eefbfcb-7c22-4c45-bb7b-75319242796c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:22:03 compute-0 nova_compute[250018]: 2026-01-20 15:22:03.768 250022 WARNING nova.compute.manager [req-83a6fbbf-3571-4f8e-88b6-d8bc4a976c99 req-b7ac87d6-24d0-4a16-b5ee-50c2882cb493 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] Received unexpected event network-vif-plugged-2eefbfcb-7c22-4c45-bb7b-75319242796c for instance with vm_state active and task_state resize_finish.
Jan 20 15:22:03 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:22:03 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:22:03 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:22:03.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:22:04 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/1822320099' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:22:04 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/1944454760' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:22:04 compute-0 sudo[370439]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:22:04 compute-0 sudo[370439]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:22:04 compute-0 sudo[370439]: pam_unix(sudo:session): session closed for user root
Jan 20 15:22:04 compute-0 sudo[370465]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:22:04 compute-0 sudo[370465]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:22:04 compute-0 sudo[370465]: pam_unix(sudo:session): session closed for user root
Jan 20 15:22:04 compute-0 nova_compute[250018]: 2026-01-20 15:22:04.367 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:22:04 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:22:04 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:22:04 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:22:04.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:22:04 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2995: 321 pgs: 321 active+clean; 279 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 12 KiB/s rd, 15 KiB/s wr, 16 op/s
Jan 20 15:22:05 compute-0 ceph-mon[74360]: pgmap v2995: 321 pgs: 321 active+clean; 279 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 12 KiB/s rd, 15 KiB/s wr, 16 op/s
Jan 20 15:22:05 compute-0 nova_compute[250018]: 2026-01-20 15:22:05.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:22:05 compute-0 nova_compute[250018]: 2026-01-20 15:22:05.785 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:22:05 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:22:05 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:22:05 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:22:05.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:22:06 compute-0 nova_compute[250018]: 2026-01-20 15:22:06.135 250022 DEBUG nova.compute.manager [req-f7e34805-2e49-4fd7-98e5-6b9b544da6bd req-9a930593-e911-4620-9244-ae9145ec9e0f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] Received event network-vif-plugged-2eefbfcb-7c22-4c45-bb7b-75319242796c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:22:06 compute-0 nova_compute[250018]: 2026-01-20 15:22:06.135 250022 DEBUG oslo_concurrency.lockutils [req-f7e34805-2e49-4fd7-98e5-6b9b544da6bd req-9a930593-e911-4620-9244-ae9145ec9e0f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "65aa2157-f058-4e5c-b448-64cf956310ba-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:22:06 compute-0 nova_compute[250018]: 2026-01-20 15:22:06.135 250022 DEBUG oslo_concurrency.lockutils [req-f7e34805-2e49-4fd7-98e5-6b9b544da6bd req-9a930593-e911-4620-9244-ae9145ec9e0f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "65aa2157-f058-4e5c-b448-64cf956310ba-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:22:06 compute-0 nova_compute[250018]: 2026-01-20 15:22:06.135 250022 DEBUG oslo_concurrency.lockutils [req-f7e34805-2e49-4fd7-98e5-6b9b544da6bd req-9a930593-e911-4620-9244-ae9145ec9e0f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "65aa2157-f058-4e5c-b448-64cf956310ba-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:22:06 compute-0 nova_compute[250018]: 2026-01-20 15:22:06.136 250022 DEBUG nova.compute.manager [req-f7e34805-2e49-4fd7-98e5-6b9b544da6bd req-9a930593-e911-4620-9244-ae9145ec9e0f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] No waiting events found dispatching network-vif-plugged-2eefbfcb-7c22-4c45-bb7b-75319242796c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:22:06 compute-0 nova_compute[250018]: 2026-01-20 15:22:06.136 250022 WARNING nova.compute.manager [req-f7e34805-2e49-4fd7-98e5-6b9b544da6bd req-9a930593-e911-4620-9244-ae9145ec9e0f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] Received unexpected event network-vif-plugged-2eefbfcb-7c22-4c45-bb7b-75319242796c for instance with vm_state resized and task_state None.
Jan 20 15:22:06 compute-0 nova_compute[250018]: 2026-01-20 15:22:06.196 250022 DEBUG oslo_concurrency.lockutils [None req-603b7cf9-7915-4e6b-a65c-0f8ce13270ba 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Acquiring lock "65aa2157-f058-4e5c-b448-64cf956310ba" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:22:06 compute-0 nova_compute[250018]: 2026-01-20 15:22:06.196 250022 DEBUG oslo_concurrency.lockutils [None req-603b7cf9-7915-4e6b-a65c-0f8ce13270ba 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lock "65aa2157-f058-4e5c-b448-64cf956310ba" acquired by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:22:06 compute-0 nova_compute[250018]: 2026-01-20 15:22:06.197 250022 DEBUG nova.compute.manager [None req-603b7cf9-7915-4e6b-a65c-0f8ce13270ba 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] Going to confirm migration 24 do_confirm_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:4679
Jan 20 15:22:06 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:22:06 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:22:06 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:22:06.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:22:06 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2996: 321 pgs: 321 active+clean; 279 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 749 KiB/s rd, 2.0 KiB/s wr, 51 op/s
Jan 20 15:22:06 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/4076641553' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:22:07 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:22:07 compute-0 nova_compute[250018]: 2026-01-20 15:22:07.367 250022 DEBUG neutronclient.v2_0.client [None req-603b7cf9-7915-4e6b-a65c-0f8ce13270ba 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Error message: {"NeutronError": {"type": "PortBindingNotFound", "message": "Binding for port 2eefbfcb-7c22-4c45-bb7b-75319242796c for host compute-0.ctlplane.example.com could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262
Jan 20 15:22:07 compute-0 nova_compute[250018]: 2026-01-20 15:22:07.368 250022 DEBUG oslo_concurrency.lockutils [None req-603b7cf9-7915-4e6b-a65c-0f8ce13270ba 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Acquiring lock "refresh_cache-65aa2157-f058-4e5c-b448-64cf956310ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:22:07 compute-0 nova_compute[250018]: 2026-01-20 15:22:07.368 250022 DEBUG oslo_concurrency.lockutils [None req-603b7cf9-7915-4e6b-a65c-0f8ce13270ba 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Acquired lock "refresh_cache-65aa2157-f058-4e5c-b448-64cf956310ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:22:07 compute-0 nova_compute[250018]: 2026-01-20 15:22:07.368 250022 DEBUG nova.network.neutron [None req-603b7cf9-7915-4e6b-a65c-0f8ce13270ba 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 15:22:07 compute-0 nova_compute[250018]: 2026-01-20 15:22:07.369 250022 DEBUG nova.objects.instance [None req-603b7cf9-7915-4e6b-a65c-0f8ce13270ba 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lazy-loading 'info_cache' on Instance uuid 65aa2157-f058-4e5c-b448-64cf956310ba obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:22:07 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:22:07 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:22:07 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:22:07.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:22:07 compute-0 ceph-mon[74360]: pgmap v2996: 321 pgs: 321 active+clean; 279 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 749 KiB/s rd, 2.0 KiB/s wr, 51 op/s
Jan 20 15:22:08 compute-0 nova_compute[250018]: 2026-01-20 15:22:08.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:22:08 compute-0 nova_compute[250018]: 2026-01-20 15:22:08.050 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 15:22:08 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:22:08 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:22:08 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:22:08.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:22:08 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2997: 321 pgs: 321 active+clean; 279 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 749 KiB/s rd, 2.0 KiB/s wr, 51 op/s
Jan 20 15:22:08 compute-0 nova_compute[250018]: 2026-01-20 15:22:08.880 250022 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768922513.8793304, 65aa2157-f058-4e5c-b448-64cf956310ba => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:22:08 compute-0 nova_compute[250018]: 2026-01-20 15:22:08.881 250022 INFO nova.compute.manager [-] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] VM Stopped (Lifecycle Event)
Jan 20 15:22:08 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/3036926870' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:22:08 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/3733158156' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:22:08 compute-0 ceph-mon[74360]: pgmap v2997: 321 pgs: 321 active+clean; 279 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 749 KiB/s rd, 2.0 KiB/s wr, 51 op/s
Jan 20 15:22:08 compute-0 nova_compute[250018]: 2026-01-20 15:22:08.910 250022 DEBUG nova.compute.manager [None req-6b5148e5-e0ad-4f26-be93-cd5b2f154e01 - - - - - -] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:22:08 compute-0 nova_compute[250018]: 2026-01-20 15:22:08.912 250022 DEBUG nova.compute.manager [None req-6b5148e5-e0ad-4f26-be93-cd5b2f154e01 - - - - - -] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: resized, current task_state: None, current DB power_state: 1, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:22:08 compute-0 nova_compute[250018]: 2026-01-20 15:22:08.947 250022 INFO nova.compute.manager [None req-6b5148e5-e0ad-4f26-be93-cd5b2f154e01 - - - - - -] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] During the sync_power process the instance has moved from host compute-1.ctlplane.example.com to host compute-0.ctlplane.example.com
Jan 20 15:22:09 compute-0 nova_compute[250018]: 2026-01-20 15:22:09.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:22:09 compute-0 nova_compute[250018]: 2026-01-20 15:22:09.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:22:09 compute-0 nova_compute[250018]: 2026-01-20 15:22:09.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:22:09 compute-0 nova_compute[250018]: 2026-01-20 15:22:09.092 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:22:09 compute-0 nova_compute[250018]: 2026-01-20 15:22:09.092 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:22:09 compute-0 nova_compute[250018]: 2026-01-20 15:22:09.092 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:22:09 compute-0 nova_compute[250018]: 2026-01-20 15:22:09.093 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 15:22:09 compute-0 nova_compute[250018]: 2026-01-20 15:22:09.093 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:22:09 compute-0 nova_compute[250018]: 2026-01-20 15:22:09.369 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:22:09 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:22:09 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/797713413' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:22:09 compute-0 nova_compute[250018]: 2026-01-20 15:22:09.540 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:22:09 compute-0 nova_compute[250018]: 2026-01-20 15:22:09.662 250022 DEBUG nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] skipping disk for instance-000000c6 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 15:22:09 compute-0 nova_compute[250018]: 2026-01-20 15:22:09.663 250022 DEBUG nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] skipping disk for instance-000000c6 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 15:22:09 compute-0 nova_compute[250018]: 2026-01-20 15:22:09.785 250022 WARNING nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:22:09 compute-0 nova_compute[250018]: 2026-01-20 15:22:09.786 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4189MB free_disk=20.897098541259766GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 15:22:09 compute-0 nova_compute[250018]: 2026-01-20 15:22:09.786 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:22:09 compute-0 nova_compute[250018]: 2026-01-20 15:22:09.787 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:22:09 compute-0 nova_compute[250018]: 2026-01-20 15:22:09.819 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Migration for instance 65aa2157-f058-4e5c-b448-64cf956310ba refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903
Jan 20 15:22:09 compute-0 nova_compute[250018]: 2026-01-20 15:22:09.841 250022 INFO nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] Updating resource usage from migration 9b9a1c84-0aa6-44a5-9e94-a94a60dd6646
Jan 20 15:22:09 compute-0 nova_compute[250018]: 2026-01-20 15:22:09.842 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] Starting to track outgoing migration 9b9a1c84-0aa6-44a5-9e94-a94a60dd6646 with flavor 522deaab-a741-4dbb-932d-d8b13a211c33 _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1444
Jan 20 15:22:09 compute-0 nova_compute[250018]: 2026-01-20 15:22:09.869 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Migration 9b9a1c84-0aa6-44a5-9e94-a94a60dd6646 is active on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640
Jan 20 15:22:09 compute-0 nova_compute[250018]: 2026-01-20 15:22:09.869 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 15:22:09 compute-0 nova_compute[250018]: 2026-01-20 15:22:09.870 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 15:22:09 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:22:09 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:22:09 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:22:09.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:22:09 compute-0 nova_compute[250018]: 2026-01-20 15:22:09.908 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:22:09 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/4092149460' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:22:09 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/797713413' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:22:10 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:22:10 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2090225516' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:22:10 compute-0 nova_compute[250018]: 2026-01-20 15:22:10.327 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.419s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:22:10 compute-0 nova_compute[250018]: 2026-01-20 15:22:10.333 250022 DEBUG nova.compute.provider_tree [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:22:10 compute-0 nova_compute[250018]: 2026-01-20 15:22:10.362 250022 DEBUG nova.scheduler.client.report [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:22:10 compute-0 nova_compute[250018]: 2026-01-20 15:22:10.401 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 15:22:10 compute-0 nova_compute[250018]: 2026-01-20 15:22:10.402 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.615s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:22:10 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:22:10.593 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=68, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=67) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:22:10 compute-0 nova_compute[250018]: 2026-01-20 15:22:10.593 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:22:10 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:22:10.594 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 15:22:10 compute-0 nova_compute[250018]: 2026-01-20 15:22:10.664 250022 DEBUG nova.network.neutron [None req-603b7cf9-7915-4e6b-a65c-0f8ce13270ba 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: 65aa2157-f058-4e5c-b448-64cf956310ba] Updating instance_info_cache with network_info: [{"id": "2eefbfcb-7c22-4c45-bb7b-75319242796c", "address": "fa:16:3e:7a:99:f8", "network": {"id": "6eb3ab38-e480-46b8-ae2d-d286fe61de3c", "bridge": "br-int", "label": "tempest-network-smoke--1810657086", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ed5feeeafe7448a8efb47ab975b0ead", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2eefbfcb-7c", "ovs_interfaceid": "2eefbfcb-7c22-4c45-bb7b-75319242796c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:22:10 compute-0 nova_compute[250018]: 2026-01-20 15:22:10.687 250022 DEBUG oslo_concurrency.lockutils [None req-603b7cf9-7915-4e6b-a65c-0f8ce13270ba 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Releasing lock "refresh_cache-65aa2157-f058-4e5c-b448-64cf956310ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:22:10 compute-0 nova_compute[250018]: 2026-01-20 15:22:10.687 250022 DEBUG nova.objects.instance [None req-603b7cf9-7915-4e6b-a65c-0f8ce13270ba 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lazy-loading 'migration_context' on Instance uuid 65aa2157-f058-4e5c-b448-64cf956310ba obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:22:10 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:22:10 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:22:10 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:22:10.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:22:10 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v2998: 321 pgs: 321 active+clean; 208 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.8 KiB/s wr, 119 op/s
Jan 20 15:22:10 compute-0 nova_compute[250018]: 2026-01-20 15:22:10.773 250022 DEBUG nova.storage.rbd_utils [None req-603b7cf9-7915-4e6b-a65c-0f8ce13270ba 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] removing snapshot(nova-resize) on rbd image(65aa2157-f058-4e5c-b448-64cf956310ba_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Jan 20 15:22:10 compute-0 nova_compute[250018]: 2026-01-20 15:22:10.786 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:22:10 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e416 do_prune osdmap full prune enabled
Jan 20 15:22:10 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/2090225516' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:22:10 compute-0 ceph-mon[74360]: pgmap v2998: 321 pgs: 321 active+clean; 208 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.8 KiB/s wr, 119 op/s
Jan 20 15:22:10 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e417 e417: 3 total, 3 up, 3 in
Jan 20 15:22:10 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e417: 3 total, 3 up, 3 in
Jan 20 15:22:10 compute-0 nova_compute[250018]: 2026-01-20 15:22:10.981 250022 DEBUG nova.virt.libvirt.vif [None req-603b7cf9-7915-4e6b-a65c-0f8ce13270ba 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T15:21:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-890684112',display_name='tempest-TestNetworkAdvancedServerOps-server-890684112',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-890684112',id=198,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBESV7XYmzz1neUUH7k/g2EXDk6RAN24jF19myyoRv6wDjFXd5E2VXPhzcf3Q2CFmKA+oZARXh9ZLZnZRzD1iPeEGFbgLb8nt50MGrmQlAcYMGRSCqrzrniFYSfPnybQWNg==',key_name='tempest-TestNetworkAdvancedServerOps-1160843308',keypairs=<?>,launch_index=0,launched_at=2026-01-20T15:22:04Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=Flavor(1),os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1ed5feeeafe7448a8efb47ab975b0ead',ramdisk_id='',reservation_id='r-n64n905g',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestNetworkAdvancedServerOps-175282664',owner_user_name='tempest-TestNetworkAdvancedServerOps-175282664-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T15:22:04Z,user_data=None,user_id='442a7a5cb8ea426a82be9762b262d171',uuid=65aa2157-f058-4e5c-b448-64cf956310ba,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='resized') vif={"id": "2eefbfcb-7c22-4c45-bb7b-75319242796c", "address": "fa:16:3e:7a:99:f8", "network": {"id": "6eb3ab38-e480-46b8-ae2d-d286fe61de3c", "bridge": "br-int", "label": "tempest-network-smoke--1810657086", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ed5feeeafe7448a8efb47ab975b0ead", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2eefbfcb-7c", "ovs_interfaceid": "2eefbfcb-7c22-4c45-bb7b-75319242796c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 15:22:10 compute-0 nova_compute[250018]: 2026-01-20 15:22:10.982 250022 DEBUG nova.network.os_vif_util [None req-603b7cf9-7915-4e6b-a65c-0f8ce13270ba 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Converting VIF {"id": "2eefbfcb-7c22-4c45-bb7b-75319242796c", "address": "fa:16:3e:7a:99:f8", "network": {"id": "6eb3ab38-e480-46b8-ae2d-d286fe61de3c", "bridge": "br-int", "label": "tempest-network-smoke--1810657086", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ed5feeeafe7448a8efb47ab975b0ead", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2eefbfcb-7c", "ovs_interfaceid": "2eefbfcb-7c22-4c45-bb7b-75319242796c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:22:10 compute-0 nova_compute[250018]: 2026-01-20 15:22:10.983 250022 DEBUG nova.network.os_vif_util [None req-603b7cf9-7915-4e6b-a65c-0f8ce13270ba 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:7a:99:f8,bridge_name='br-int',has_traffic_filtering=True,id=2eefbfcb-7c22-4c45-bb7b-75319242796c,network=Network(6eb3ab38-e480-46b8-ae2d-d286fe61de3c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2eefbfcb-7c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:22:10 compute-0 nova_compute[250018]: 2026-01-20 15:22:10.983 250022 DEBUG os_vif [None req-603b7cf9-7915-4e6b-a65c-0f8ce13270ba 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:7a:99:f8,bridge_name='br-int',has_traffic_filtering=True,id=2eefbfcb-7c22-4c45-bb7b-75319242796c,network=Network(6eb3ab38-e480-46b8-ae2d-d286fe61de3c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2eefbfcb-7c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 15:22:10 compute-0 nova_compute[250018]: 2026-01-20 15:22:10.984 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:22:10 compute-0 nova_compute[250018]: 2026-01-20 15:22:10.985 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2eefbfcb-7c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:22:10 compute-0 nova_compute[250018]: 2026-01-20 15:22:10.985 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:22:10 compute-0 nova_compute[250018]: 2026-01-20 15:22:10.987 250022 INFO os_vif [None req-603b7cf9-7915-4e6b-a65c-0f8ce13270ba 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:7a:99:f8,bridge_name='br-int',has_traffic_filtering=True,id=2eefbfcb-7c22-4c45-bb7b-75319242796c,network=Network(6eb3ab38-e480-46b8-ae2d-d286fe61de3c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2eefbfcb-7c')
Jan 20 15:22:10 compute-0 nova_compute[250018]: 2026-01-20 15:22:10.987 250022 DEBUG oslo_concurrency.lockutils [None req-603b7cf9-7915-4e6b-a65c-0f8ce13270ba 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:22:10 compute-0 nova_compute[250018]: 2026-01-20 15:22:10.987 250022 DEBUG oslo_concurrency.lockutils [None req-603b7cf9-7915-4e6b-a65c-0f8ce13270ba 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:22:11 compute-0 nova_compute[250018]: 2026-01-20 15:22:11.041 250022 DEBUG oslo_concurrency.processutils [None req-603b7cf9-7915-4e6b-a65c-0f8ce13270ba 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:22:11 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:22:11 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/891151784' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:22:11 compute-0 nova_compute[250018]: 2026-01-20 15:22:11.504 250022 DEBUG oslo_concurrency.processutils [None req-603b7cf9-7915-4e6b-a65c-0f8ce13270ba 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:22:11 compute-0 nova_compute[250018]: 2026-01-20 15:22:11.509 250022 DEBUG nova.compute.provider_tree [None req-603b7cf9-7915-4e6b-a65c-0f8ce13270ba 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:22:11 compute-0 nova_compute[250018]: 2026-01-20 15:22:11.522 250022 DEBUG nova.scheduler.client.report [None req-603b7cf9-7915-4e6b-a65c-0f8ce13270ba 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:22:11 compute-0 nova_compute[250018]: 2026-01-20 15:22:11.565 250022 DEBUG oslo_concurrency.lockutils [None req-603b7cf9-7915-4e6b-a65c-0f8ce13270ba 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: held 0.578s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:22:11 compute-0 nova_compute[250018]: 2026-01-20 15:22:11.733 250022 INFO nova.scheduler.client.report [None req-603b7cf9-7915-4e6b-a65c-0f8ce13270ba 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Deleted allocation for migration 9b9a1c84-0aa6-44a5-9e94-a94a60dd6646
Jan 20 15:22:11 compute-0 nova_compute[250018]: 2026-01-20 15:22:11.794 250022 DEBUG oslo_concurrency.lockutils [None req-603b7cf9-7915-4e6b-a65c-0f8ce13270ba 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lock "65aa2157-f058-4e5c-b448-64cf956310ba" "released" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: held 5.597s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:22:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 15:22:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:22:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 20 15:22:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:22:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0022557448701842546 of space, bias 1.0, pg target 0.6767234610552764 quantized to 32 (current 32)
Jan 20 15:22:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:22:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021625052345058625 of space, bias 1.0, pg target 0.6487515703517588 quantized to 32 (current 32)
Jan 20 15:22:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:22:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 20 15:22:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:22:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 20 15:22:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:22:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 20 15:22:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:22:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 15:22:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:22:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 20 15:22:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:22:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 20 15:22:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:22:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 15:22:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:22:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 20 15:22:11 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:22:11 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:22:11 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:22:11.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:22:11 compute-0 ceph-mon[74360]: osdmap e417: 3 total, 3 up, 3 in
Jan 20 15:22:11 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/971058922' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:22:11 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/891151784' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:22:12 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e417 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:22:12 compute-0 nova_compute[250018]: 2026-01-20 15:22:12.397 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:22:12 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:22:12 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:22:12 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:22:12.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:22:12 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3000: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 4.2 KiB/s wr, 142 op/s
Jan 20 15:22:12 compute-0 ceph-mon[74360]: pgmap v3000: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 4.2 KiB/s wr, 142 op/s
Jan 20 15:22:13 compute-0 podman[370598]: 2026-01-20 15:22:13.48750368 +0000 UTC m=+0.072005104 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent)
Jan 20 15:22:13 compute-0 podman[370597]: 2026-01-20 15:22:13.508425024 +0000 UTC m=+0.093423761 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, 
config_id=ovn_controller, container_name=ovn_controller)
Jan 20 15:22:13 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 20 15:22:13 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3059706914' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:22:13 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 20 15:22:13 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3059706914' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:22:13 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:22:13 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:22:13 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:22:13.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:22:13 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/1403667597' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:22:13 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/1403667597' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:22:13 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/3059706914' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:22:13 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/3059706914' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:22:14 compute-0 nova_compute[250018]: 2026-01-20 15:22:14.372 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:22:14 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:22:14.597 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=367c1a2c-b16a-4828-ab5a-626bb50023b4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '68'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:22:14 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:22:14 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:22:14 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:22:14.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:22:14 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3001: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 4.3 KiB/s wr, 137 op/s
Jan 20 15:22:15 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/2030652367' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:22:15 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/2030652367' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:22:15 compute-0 ceph-mon[74360]: pgmap v3001: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 4.3 KiB/s wr, 137 op/s
Jan 20 15:22:15 compute-0 nova_compute[250018]: 2026-01-20 15:22:15.789 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:22:15 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:22:15 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:22:15 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:22:15.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:22:16 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/170002193' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:22:16 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/170002193' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:22:16 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:22:16 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:22:16 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:22:16.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:22:16 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3002: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 4.5 KiB/s wr, 141 op/s
Jan 20 15:22:17 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e417 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:22:17 compute-0 ceph-mon[74360]: pgmap v3002: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 4.5 KiB/s wr, 141 op/s
Jan 20 15:22:17 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:22:17 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:22:17 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:22:17.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:22:18 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:22:18 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:22:18 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:22:18.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:22:18 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3003: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 4.5 KiB/s wr, 141 op/s
Jan 20 15:22:18 compute-0 ceph-mon[74360]: pgmap v3003: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 4.5 KiB/s wr, 141 op/s
Jan 20 15:22:19 compute-0 nova_compute[250018]: 2026-01-20 15:22:19.374 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:22:19 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:22:19 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:22:19 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:22:19.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:22:20 compute-0 nova_compute[250018]: 2026-01-20 15:22:20.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:22:20 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:22:20 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:22:20 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:22:20.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:22:20 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3004: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 531 KiB/s rd, 18 KiB/s wr, 122 op/s
Jan 20 15:22:20 compute-0 nova_compute[250018]: 2026-01-20 15:22:20.790 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:22:20 compute-0 ceph-mon[74360]: pgmap v3004: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 531 KiB/s rd, 18 KiB/s wr, 122 op/s
Jan 20 15:22:21 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:22:21 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:22:21 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:22:21.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:22:22 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e417 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:22:22 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e417 do_prune osdmap full prune enabled
Jan 20 15:22:22 compute-0 nova_compute[250018]: 2026-01-20 15:22:22.387 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:22:22 compute-0 nova_compute[250018]: 2026-01-20 15:22:22.510 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:22:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:22:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:22:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:22:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:22:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:22:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:22:22 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 e418: 3 total, 3 up, 3 in
Jan 20 15:22:22 compute-0 ceph-mon[74360]: log_channel(cluster) log [DBG] : osdmap e418: 3 total, 3 up, 3 in
Jan 20 15:22:22 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:22:22 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:22:22 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:22:22.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:22:22 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3006: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 679 KiB/s rd, 16 KiB/s wr, 112 op/s
Jan 20 15:22:23 compute-0 ceph-mon[74360]: osdmap e418: 3 total, 3 up, 3 in
Jan 20 15:22:23 compute-0 ceph-mon[74360]: pgmap v3006: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 679 KiB/s rd, 16 KiB/s wr, 112 op/s
Jan 20 15:22:23 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:22:23 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:22:23 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:22:23.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:22:24 compute-0 nova_compute[250018]: 2026-01-20 15:22:24.376 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:22:24 compute-0 sudo[370651]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:22:24 compute-0 sudo[370651]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:22:24 compute-0 sudo[370651]: pam_unix(sudo:session): session closed for user root
Jan 20 15:22:24 compute-0 sudo[370676]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:22:24 compute-0 sudo[370676]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:22:24 compute-0 sudo[370676]: pam_unix(sudo:session): session closed for user root
Jan 20 15:22:24 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:22:24 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:22:24 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:22:24.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:22:24 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3007: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 672 KiB/s rd, 15 KiB/s wr, 103 op/s
Jan 20 15:22:24 compute-0 ceph-mon[74360]: pgmap v3007: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 672 KiB/s rd, 15 KiB/s wr, 103 op/s
Jan 20 15:22:25 compute-0 nova_compute[250018]: 2026-01-20 15:22:25.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:22:25 compute-0 nova_compute[250018]: 2026-01-20 15:22:25.051 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 15:22:25 compute-0 nova_compute[250018]: 2026-01-20 15:22:25.052 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 15:22:25 compute-0 nova_compute[250018]: 2026-01-20 15:22:25.066 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 20 15:22:25 compute-0 nova_compute[250018]: 2026-01-20 15:22:25.792 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:22:25 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:22:25 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:22:25 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:22:25.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:22:26 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:22:26 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:22:26 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:22:26.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:22:26 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3008: 321 pgs: 321 active+clean; 202 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 603 KiB/s rd, 27 KiB/s wr, 63 op/s
Jan 20 15:22:26 compute-0 ceph-mon[74360]: pgmap v3008: 321 pgs: 321 active+clean; 202 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 603 KiB/s rd, 27 KiB/s wr, 63 op/s
Jan 20 15:22:27 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:22:27 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/1554924812' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:22:27 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:22:27 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:22:27 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:22:27.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:22:28 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:22:28 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:22:28 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:22:28.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:22:28 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3009: 321 pgs: 321 active+clean; 202 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 603 KiB/s rd, 27 KiB/s wr, 63 op/s
Jan 20 15:22:28 compute-0 ceph-mon[74360]: pgmap v3009: 321 pgs: 321 active+clean; 202 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 603 KiB/s rd, 27 KiB/s wr, 63 op/s
Jan 20 15:22:29 compute-0 nova_compute[250018]: 2026-01-20 15:22:29.377 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:22:29 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:22:29 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:22:29 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:22:29.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:22:30 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:22:30 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:22:30 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:22:30.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:22:30 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3010: 321 pgs: 321 active+clean; 142 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 189 KiB/s rd, 14 KiB/s wr, 44 op/s
Jan 20 15:22:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:22:30.791 160071 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:22:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:22:30.791 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:22:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:22:30.791 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:22:30 compute-0 nova_compute[250018]: 2026-01-20 15:22:30.794 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:22:30 compute-0 ceph-mon[74360]: pgmap v3010: 321 pgs: 321 active+clean; 142 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 189 KiB/s rd, 14 KiB/s wr, 44 op/s
Jan 20 15:22:31 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:22:31 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:22:31 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:22:31.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:22:32 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:22:32 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:22:32 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:22:32 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:22:32.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:22:32 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3011: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 25 KiB/s rd, 14 KiB/s wr, 34 op/s
Jan 20 15:22:32 compute-0 sudo[370707]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:22:32 compute-0 sudo[370707]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:22:32 compute-0 sudo[370707]: pam_unix(sudo:session): session closed for user root
Jan 20 15:22:32 compute-0 sshd-session[370704]: Invalid user test from 134.122.57.138 port 33146
Jan 20 15:22:32 compute-0 ceph-mon[74360]: pgmap v3011: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 25 KiB/s rd, 14 KiB/s wr, 34 op/s
Jan 20 15:22:32 compute-0 sudo[370732]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:22:32 compute-0 sudo[370732]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:22:32 compute-0 sudo[370732]: pam_unix(sudo:session): session closed for user root
Jan 20 15:22:32 compute-0 sudo[370757]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:22:32 compute-0 sudo[370757]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:22:32 compute-0 sudo[370757]: pam_unix(sudo:session): session closed for user root
Jan 20 15:22:32 compute-0 sudo[370782]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Jan 20 15:22:32 compute-0 sudo[370782]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:22:33 compute-0 sshd-session[370704]: Connection closed by invalid user test 134.122.57.138 port 33146 [preauth]
Jan 20 15:22:33 compute-0 podman[370877]: 2026-01-20 15:22:33.408397971 +0000 UTC m=+0.059896237 container exec a602f19ce9ef2d4922e558036fcbd51fac25abd19d28d7df857e5fbe8f959ba3 (image=quay.io/ceph/ceph:v18, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Jan 20 15:22:33 compute-0 podman[370877]: 2026-01-20 15:22:33.497646657 +0000 UTC m=+0.149144883 container exec_died a602f19ce9ef2d4922e558036fcbd51fac25abd19d28d7df857e5fbe8f959ba3 (image=quay.io/ceph/ceph:v18, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mon-compute-0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 20 15:22:33 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:22:33 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:22:33 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:22:33.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:22:34 compute-0 podman[371032]: 2026-01-20 15:22:34.042511796 +0000 UTC m=+0.047219115 container exec a2973a514c852ff316e6ad2ff84585210b4ad01d86cdf2de06f634d9390a88e8 (image=quay.io/ceph/haproxy:2.3, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-haproxy-rgw-default-compute-0-nqkboe)
Jan 20 15:22:34 compute-0 podman[371032]: 2026-01-20 15:22:34.051318843 +0000 UTC m=+0.056026152 container exec_died a2973a514c852ff316e6ad2ff84585210b4ad01d86cdf2de06f634d9390a88e8 (image=quay.io/ceph/haproxy:2.3, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-haproxy-rgw-default-compute-0-nqkboe)
Jan 20 15:22:34 compute-0 podman[371097]: 2026-01-20 15:22:34.307809312 +0000 UTC m=+0.054475640 container exec 0c2042652fe8d88ae47b6333c235a533de4d966f44da3b69a5fc82baeacb10bf (image=quay.io/ceph/keepalived:2.2.4, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-keepalived-rgw-default-compute-0-gcjsxe, version=2.2.4, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.tags=Ceph keepalived, com.redhat.component=keepalived-container, io.k8s.display-name=Keepalived on RHEL 9, name=keepalived, architecture=x86_64, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.28.2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git, vendor=Red Hat, Inc., build-date=2023-02-22T09:23:20, release=1793, description=keepalived for Ceph, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Jan 20 15:22:34 compute-0 podman[371097]: 2026-01-20 15:22:34.348672534 +0000 UTC m=+0.095338842 container exec_died 0c2042652fe8d88ae47b6333c235a533de4d966f44da3b69a5fc82baeacb10bf (image=quay.io/ceph/keepalived:2.2.4, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-keepalived-rgw-default-compute-0-gcjsxe, io.buildah.version=1.28.2, description=keepalived for Ceph, io.openshift.expose-services=, io.k8s.display-name=Keepalived on RHEL 9, architecture=x86_64, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, summary=Provides keepalived on RHEL 9 for Ceph., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.tags=Ceph keepalived, version=2.2.4, vendor=Red Hat, Inc., build-date=2023-02-22T09:23:20, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=keepalived-container, name=keepalived)
Jan 20 15:22:34 compute-0 nova_compute[250018]: 2026-01-20 15:22:34.379 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:22:34 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 20 15:22:34 compute-0 sudo[370782]: pam_unix(sudo:session): session closed for user root
Jan 20 15:22:34 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 15:22:34 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:22:34 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:22:34 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:22:34.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:22:34 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3012: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 21 KiB/s rd, 12 KiB/s wr, 29 op/s
Jan 20 15:22:35 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:22:35 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 20 15:22:35 compute-0 ceph-mon[74360]: pgmap v3012: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 21 KiB/s rd, 12 KiB/s wr, 29 op/s
Jan 20 15:22:35 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:22:35 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 15:22:35 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:22:35 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:22:35 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 20 15:22:35 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:22:35 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 20 15:22:35 compute-0 sudo[371130]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:22:35 compute-0 sudo[371130]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:22:35 compute-0 sudo[371130]: pam_unix(sudo:session): session closed for user root
Jan 20 15:22:35 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:22:35 compute-0 sudo[371155]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:22:35 compute-0 sudo[371155]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:22:35 compute-0 sudo[371155]: pam_unix(sudo:session): session closed for user root
Jan 20 15:22:35 compute-0 sudo[371180]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:22:35 compute-0 sudo[371180]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:22:35 compute-0 sudo[371180]: pam_unix(sudo:session): session closed for user root
Jan 20 15:22:35 compute-0 sudo[371205]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 15:22:35 compute-0 sudo[371205]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:22:35 compute-0 nova_compute[250018]: 2026-01-20 15:22:35.796 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:22:35 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:22:35 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:22:35 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:22:35.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:22:36 compute-0 sudo[371205]: pam_unix(sudo:session): session closed for user root
Jan 20 15:22:36 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 15:22:36 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:22:36 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 20 15:22:36 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 15:22:36 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 20 15:22:36 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:22:36 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev de6e5b9f-439d-439e-989e-081e6e410f50 does not exist
Jan 20 15:22:36 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 12ed9a18-aee9-4871-9ef0-ed468e45c557 does not exist
Jan 20 15:22:36 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 07bc323a-7f7b-4361-b2ba-3db89569565b does not exist
Jan 20 15:22:36 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 20 15:22:36 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 15:22:36 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 20 15:22:36 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 15:22:36 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 15:22:36 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:22:36 compute-0 sudo[371262]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:22:36 compute-0 sudo[371262]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:22:36 compute-0 sudo[371262]: pam_unix(sudo:session): session closed for user root
Jan 20 15:22:36 compute-0 sudo[371287]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:22:36 compute-0 sudo[371287]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:22:36 compute-0 sudo[371287]: pam_unix(sudo:session): session closed for user root
Jan 20 15:22:36 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:22:36 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:22:36 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:22:36 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:22:36 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:22:36 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:22:36 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:22:36 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 15:22:36 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:22:36 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 15:22:36 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 15:22:36 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:22:36 compute-0 sudo[371312]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:22:36 compute-0 sudo[371312]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:22:36 compute-0 sudo[371312]: pam_unix(sudo:session): session closed for user root
Jan 20 15:22:36 compute-0 sudo[371337]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 15:22:36 compute-0 sudo[371337]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:22:36 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:22:36 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:22:36 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:22:36.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:22:36 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3013: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 21 KiB/s rd, 12 KiB/s wr, 29 op/s
Jan 20 15:22:36 compute-0 podman[371404]: 2026-01-20 15:22:36.876268146 +0000 UTC m=+0.041349496 container create 95b12798ad8b3e2aeb37daddc4a1000258c43e41948b07dd669a8346783d4302 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_liskov, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True)
Jan 20 15:22:36 compute-0 systemd[1]: Started libpod-conmon-95b12798ad8b3e2aeb37daddc4a1000258c43e41948b07dd669a8346783d4302.scope.
Jan 20 15:22:36 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:22:36 compute-0 podman[371404]: 2026-01-20 15:22:36.942087052 +0000 UTC m=+0.107168432 container init 95b12798ad8b3e2aeb37daddc4a1000258c43e41948b07dd669a8346783d4302 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_liskov, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 15:22:36 compute-0 podman[371404]: 2026-01-20 15:22:36.951837715 +0000 UTC m=+0.116919065 container start 95b12798ad8b3e2aeb37daddc4a1000258c43e41948b07dd669a8346783d4302 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_liskov, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 20 15:22:36 compute-0 podman[371404]: 2026-01-20 15:22:36.858309983 +0000 UTC m=+0.023391363 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:22:36 compute-0 podman[371404]: 2026-01-20 15:22:36.958662279 +0000 UTC m=+0.123743649 container attach 95b12798ad8b3e2aeb37daddc4a1000258c43e41948b07dd669a8346783d4302 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_liskov, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:22:36 compute-0 practical_liskov[371420]: 167 167
Jan 20 15:22:36 compute-0 systemd[1]: libpod-95b12798ad8b3e2aeb37daddc4a1000258c43e41948b07dd669a8346783d4302.scope: Deactivated successfully.
Jan 20 15:22:36 compute-0 podman[371404]: 2026-01-20 15:22:36.960495979 +0000 UTC m=+0.125577329 container died 95b12798ad8b3e2aeb37daddc4a1000258c43e41948b07dd669a8346783d4302 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_liskov, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 15:22:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-d8eba571933bb87b717d10887a4daa306cea04530ee4b02c6d20327734fc9053-merged.mount: Deactivated successfully.
Jan 20 15:22:36 compute-0 podman[371404]: 2026-01-20 15:22:36.995891484 +0000 UTC m=+0.160972834 container remove 95b12798ad8b3e2aeb37daddc4a1000258c43e41948b07dd669a8346783d4302 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_liskov, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 20 15:22:37 compute-0 systemd[1]: libpod-conmon-95b12798ad8b3e2aeb37daddc4a1000258c43e41948b07dd669a8346783d4302.scope: Deactivated successfully.
Jan 20 15:22:37 compute-0 podman[371442]: 2026-01-20 15:22:37.165956601 +0000 UTC m=+0.047893093 container create 09debd0c5e383ea7483fd26f55533f575092c69a584b4f7529fc645cd9cc172f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_albattani, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 15:22:37 compute-0 systemd[1]: Started libpod-conmon-09debd0c5e383ea7483fd26f55533f575092c69a584b4f7529fc645cd9cc172f.scope.
Jan 20 15:22:37 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:22:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2ef772258e73ad3c8045da8d96f04752e2b7d5d78fd622dfea57bb0e04d47c2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 15:22:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2ef772258e73ad3c8045da8d96f04752e2b7d5d78fd622dfea57bb0e04d47c2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 15:22:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2ef772258e73ad3c8045da8d96f04752e2b7d5d78fd622dfea57bb0e04d47c2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 15:22:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2ef772258e73ad3c8045da8d96f04752e2b7d5d78fd622dfea57bb0e04d47c2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 15:22:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2ef772258e73ad3c8045da8d96f04752e2b7d5d78fd622dfea57bb0e04d47c2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 15:22:37 compute-0 podman[371442]: 2026-01-20 15:22:37.144694797 +0000 UTC m=+0.026631269 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:22:37 compute-0 podman[371442]: 2026-01-20 15:22:37.250264476 +0000 UTC m=+0.132200928 container init 09debd0c5e383ea7483fd26f55533f575092c69a584b4f7529fc645cd9cc172f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_albattani, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 20 15:22:37 compute-0 podman[371442]: 2026-01-20 15:22:37.264464139 +0000 UTC m=+0.146400631 container start 09debd0c5e383ea7483fd26f55533f575092c69a584b4f7529fc645cd9cc172f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_albattani, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:22:37 compute-0 podman[371442]: 2026-01-20 15:22:37.268776735 +0000 UTC m=+0.150713187 container attach 09debd0c5e383ea7483fd26f55533f575092c69a584b4f7529fc645cd9cc172f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_albattani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 15:22:37 compute-0 ceph-mon[74360]: pgmap v3013: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 21 KiB/s rd, 12 KiB/s wr, 29 op/s
Jan 20 15:22:37 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:22:37 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:22:37 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:22:37 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:22:37.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:22:38 compute-0 eloquent_albattani[371458]: --> passed data devices: 0 physical, 1 LVM
Jan 20 15:22:38 compute-0 eloquent_albattani[371458]: --> relative data size: 1.0
Jan 20 15:22:38 compute-0 eloquent_albattani[371458]: --> All data devices are unavailable
Jan 20 15:22:38 compute-0 systemd[1]: libpod-09debd0c5e383ea7483fd26f55533f575092c69a584b4f7529fc645cd9cc172f.scope: Deactivated successfully.
Jan 20 15:22:38 compute-0 podman[371442]: 2026-01-20 15:22:38.058943769 +0000 UTC m=+0.940880221 container died 09debd0c5e383ea7483fd26f55533f575092c69a584b4f7529fc645cd9cc172f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_albattani, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 15:22:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-c2ef772258e73ad3c8045da8d96f04752e2b7d5d78fd622dfea57bb0e04d47c2-merged.mount: Deactivated successfully.
Jan 20 15:22:38 compute-0 podman[371442]: 2026-01-20 15:22:38.110318816 +0000 UTC m=+0.992255268 container remove 09debd0c5e383ea7483fd26f55533f575092c69a584b4f7529fc645cd9cc172f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_albattani, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 20 15:22:38 compute-0 systemd[1]: libpod-conmon-09debd0c5e383ea7483fd26f55533f575092c69a584b4f7529fc645cd9cc172f.scope: Deactivated successfully.
Jan 20 15:22:38 compute-0 sudo[371337]: pam_unix(sudo:session): session closed for user root
Jan 20 15:22:38 compute-0 sudo[371487]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:22:38 compute-0 sudo[371487]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:22:38 compute-0 sudo[371487]: pam_unix(sudo:session): session closed for user root
Jan 20 15:22:38 compute-0 sudo[371512]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:22:38 compute-0 sudo[371512]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:22:38 compute-0 sudo[371512]: pam_unix(sudo:session): session closed for user root
Jan 20 15:22:38 compute-0 sudo[371538]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:22:38 compute-0 sudo[371538]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:22:38 compute-0 sudo[371538]: pam_unix(sudo:session): session closed for user root
Jan 20 15:22:38 compute-0 sudo[371563]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- lvm list --format json
Jan 20 15:22:38 compute-0 sudo[371563]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:22:38 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:22:38 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:22:38 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:22:38.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:22:38 compute-0 podman[371627]: 2026-01-20 15:22:38.730138846 +0000 UTC m=+0.038644624 container create 0baa89783b86f7a6b0ac87aedfb6124a49e3fdcf3e5e3f3754a3ae895adedd94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_elbakyan, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 20 15:22:38 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3014: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 20 15:22:38 compute-0 systemd[1]: Started libpod-conmon-0baa89783b86f7a6b0ac87aedfb6124a49e3fdcf3e5e3f3754a3ae895adedd94.scope.
Jan 20 15:22:38 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:22:38 compute-0 podman[371627]: 2026-01-20 15:22:38.806716231 +0000 UTC m=+0.115222019 container init 0baa89783b86f7a6b0ac87aedfb6124a49e3fdcf3e5e3f3754a3ae895adedd94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_elbakyan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 15:22:38 compute-0 podman[371627]: 2026-01-20 15:22:38.715454279 +0000 UTC m=+0.023960087 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:22:38 compute-0 podman[371627]: 2026-01-20 15:22:38.814176602 +0000 UTC m=+0.122682380 container start 0baa89783b86f7a6b0ac87aedfb6124a49e3fdcf3e5e3f3754a3ae895adedd94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_elbakyan, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 20 15:22:38 compute-0 podman[371627]: 2026-01-20 15:22:38.81852414 +0000 UTC m=+0.127029918 container attach 0baa89783b86f7a6b0ac87aedfb6124a49e3fdcf3e5e3f3754a3ae895adedd94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_elbakyan, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 20 15:22:38 compute-0 ceph-mon[74360]: pgmap v3014: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 20 15:22:38 compute-0 sharp_elbakyan[371643]: 167 167
Jan 20 15:22:38 compute-0 systemd[1]: libpod-0baa89783b86f7a6b0ac87aedfb6124a49e3fdcf3e5e3f3754a3ae895adedd94.scope: Deactivated successfully.
Jan 20 15:22:38 compute-0 podman[371627]: 2026-01-20 15:22:38.822961229 +0000 UTC m=+0.131466997 container died 0baa89783b86f7a6b0ac87aedfb6124a49e3fdcf3e5e3f3754a3ae895adedd94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_elbakyan, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:22:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-6d79ba7b8b31ae3c390614d57efa1d8a99dc80262d850c3314760667d93daa9a-merged.mount: Deactivated successfully.
Jan 20 15:22:38 compute-0 podman[371627]: 2026-01-20 15:22:38.856299019 +0000 UTC m=+0.164804787 container remove 0baa89783b86f7a6b0ac87aedfb6124a49e3fdcf3e5e3f3754a3ae895adedd94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_elbakyan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 20 15:22:38 compute-0 systemd[1]: libpod-conmon-0baa89783b86f7a6b0ac87aedfb6124a49e3fdcf3e5e3f3754a3ae895adedd94.scope: Deactivated successfully.
Jan 20 15:22:39 compute-0 podman[371669]: 2026-01-20 15:22:39.011263768 +0000 UTC m=+0.041478769 container create a30d450aaacbcc57eca5a55dd7dbf811b07d2511c2de6e28c4a7c92731aad08d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_bhabha, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True)
Jan 20 15:22:39 compute-0 systemd[1]: Started libpod-conmon-a30d450aaacbcc57eca5a55dd7dbf811b07d2511c2de6e28c4a7c92731aad08d.scope.
Jan 20 15:22:39 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:22:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0f4d9d89ea0ccf21c590789f469e146eb42e7ddd315c872d547d77b357cd12a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 15:22:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0f4d9d89ea0ccf21c590789f469e146eb42e7ddd315c872d547d77b357cd12a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 15:22:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0f4d9d89ea0ccf21c590789f469e146eb42e7ddd315c872d547d77b357cd12a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 15:22:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0f4d9d89ea0ccf21c590789f469e146eb42e7ddd315c872d547d77b357cd12a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 15:22:39 compute-0 podman[371669]: 2026-01-20 15:22:38.993687445 +0000 UTC m=+0.023902466 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:22:39 compute-0 podman[371669]: 2026-01-20 15:22:39.093274681 +0000 UTC m=+0.123489712 container init a30d450aaacbcc57eca5a55dd7dbf811b07d2511c2de6e28c4a7c92731aad08d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_bhabha, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 15:22:39 compute-0 podman[371669]: 2026-01-20 15:22:39.099805117 +0000 UTC m=+0.130020118 container start a30d450aaacbcc57eca5a55dd7dbf811b07d2511c2de6e28c4a7c92731aad08d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_bhabha, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 20 15:22:39 compute-0 podman[371669]: 2026-01-20 15:22:39.104334569 +0000 UTC m=+0.134549590 container attach a30d450aaacbcc57eca5a55dd7dbf811b07d2511c2de6e28c4a7c92731aad08d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_bhabha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 15:22:39 compute-0 nova_compute[250018]: 2026-01-20 15:22:39.381 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:22:39 compute-0 stoic_bhabha[371686]: {
Jan 20 15:22:39 compute-0 stoic_bhabha[371686]:     "0": [
Jan 20 15:22:39 compute-0 stoic_bhabha[371686]:         {
Jan 20 15:22:39 compute-0 stoic_bhabha[371686]:             "devices": [
Jan 20 15:22:39 compute-0 stoic_bhabha[371686]:                 "/dev/loop3"
Jan 20 15:22:39 compute-0 stoic_bhabha[371686]:             ],
Jan 20 15:22:39 compute-0 stoic_bhabha[371686]:             "lv_name": "ceph_lv0",
Jan 20 15:22:39 compute-0 stoic_bhabha[371686]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 15:22:39 compute-0 stoic_bhabha[371686]:             "lv_size": "7511998464",
Jan 20 15:22:39 compute-0 stoic_bhabha[371686]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e399cf45-e6b6-5393-99f1-75c601d3f188,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afc3740a-ccee-46f8-b343-22648cc89376,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 20 15:22:39 compute-0 stoic_bhabha[371686]:             "lv_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 15:22:39 compute-0 stoic_bhabha[371686]:             "name": "ceph_lv0",
Jan 20 15:22:39 compute-0 stoic_bhabha[371686]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 15:22:39 compute-0 stoic_bhabha[371686]:             "tags": {
Jan 20 15:22:39 compute-0 stoic_bhabha[371686]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 15:22:39 compute-0 stoic_bhabha[371686]:                 "ceph.block_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 15:22:39 compute-0 stoic_bhabha[371686]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 15:22:39 compute-0 stoic_bhabha[371686]:                 "ceph.cluster_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 15:22:39 compute-0 stoic_bhabha[371686]:                 "ceph.cluster_name": "ceph",
Jan 20 15:22:39 compute-0 stoic_bhabha[371686]:                 "ceph.crush_device_class": "",
Jan 20 15:22:39 compute-0 stoic_bhabha[371686]:                 "ceph.encrypted": "0",
Jan 20 15:22:39 compute-0 stoic_bhabha[371686]:                 "ceph.osd_fsid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 15:22:39 compute-0 stoic_bhabha[371686]:                 "ceph.osd_id": "0",
Jan 20 15:22:39 compute-0 stoic_bhabha[371686]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 15:22:39 compute-0 stoic_bhabha[371686]:                 "ceph.type": "block",
Jan 20 15:22:39 compute-0 stoic_bhabha[371686]:                 "ceph.vdo": "0"
Jan 20 15:22:39 compute-0 stoic_bhabha[371686]:             },
Jan 20 15:22:39 compute-0 stoic_bhabha[371686]:             "type": "block",
Jan 20 15:22:39 compute-0 stoic_bhabha[371686]:             "vg_name": "ceph_vg0"
Jan 20 15:22:39 compute-0 stoic_bhabha[371686]:         }
Jan 20 15:22:39 compute-0 stoic_bhabha[371686]:     ]
Jan 20 15:22:39 compute-0 stoic_bhabha[371686]: }
Jan 20 15:22:39 compute-0 systemd[1]: libpod-a30d450aaacbcc57eca5a55dd7dbf811b07d2511c2de6e28c4a7c92731aad08d.scope: Deactivated successfully.
Jan 20 15:22:39 compute-0 podman[371669]: 2026-01-20 15:22:39.871226166 +0000 UTC m=+0.901441167 container died a30d450aaacbcc57eca5a55dd7dbf811b07d2511c2de6e28c4a7c92731aad08d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_bhabha, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 15:22:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-f0f4d9d89ea0ccf21c590789f469e146eb42e7ddd315c872d547d77b357cd12a-merged.mount: Deactivated successfully.
Jan 20 15:22:39 compute-0 podman[371669]: 2026-01-20 15:22:39.928133461 +0000 UTC m=+0.958348462 container remove a30d450aaacbcc57eca5a55dd7dbf811b07d2511c2de6e28c4a7c92731aad08d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_bhabha, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2)
Jan 20 15:22:39 compute-0 systemd[1]: libpod-conmon-a30d450aaacbcc57eca5a55dd7dbf811b07d2511c2de6e28c4a7c92731aad08d.scope: Deactivated successfully.
Jan 20 15:22:39 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:22:39 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:22:39 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:22:39.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:22:39 compute-0 sudo[371563]: pam_unix(sudo:session): session closed for user root
Jan 20 15:22:40 compute-0 sudo[371709]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:22:40 compute-0 sudo[371709]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:22:40 compute-0 sudo[371709]: pam_unix(sudo:session): session closed for user root
Jan 20 15:22:40 compute-0 sudo[371734]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:22:40 compute-0 sudo[371734]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:22:40 compute-0 sudo[371734]: pam_unix(sudo:session): session closed for user root
Jan 20 15:22:40 compute-0 sudo[371759]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:22:40 compute-0 sudo[371759]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:22:40 compute-0 sudo[371759]: pam_unix(sudo:session): session closed for user root
Jan 20 15:22:40 compute-0 sudo[371784]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- raw list --format json
Jan 20 15:22:40 compute-0 sudo[371784]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:22:40 compute-0 podman[371852]: 2026-01-20 15:22:40.464874279 +0000 UTC m=+0.034385258 container create be7ee29e00ef45bf5e455855dee2bbf40c41c555444dace81b06c07811f2d5da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_sutherland, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:22:40 compute-0 systemd[1]: Started libpod-conmon-be7ee29e00ef45bf5e455855dee2bbf40c41c555444dace81b06c07811f2d5da.scope.
Jan 20 15:22:40 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:22:40 compute-0 podman[371852]: 2026-01-20 15:22:40.540691245 +0000 UTC m=+0.110202244 container init be7ee29e00ef45bf5e455855dee2bbf40c41c555444dace81b06c07811f2d5da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_sutherland, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 15:22:40 compute-0 podman[371852]: 2026-01-20 15:22:40.450084671 +0000 UTC m=+0.019595660 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:22:40 compute-0 podman[371852]: 2026-01-20 15:22:40.547647253 +0000 UTC m=+0.117158232 container start be7ee29e00ef45bf5e455855dee2bbf40c41c555444dace81b06c07811f2d5da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_sutherland, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 15:22:40 compute-0 podman[371852]: 2026-01-20 15:22:40.55125749 +0000 UTC m=+0.120768499 container attach be7ee29e00ef45bf5e455855dee2bbf40c41c555444dace81b06c07811f2d5da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_sutherland, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 15:22:40 compute-0 objective_sutherland[371868]: 167 167
Jan 20 15:22:40 compute-0 systemd[1]: libpod-be7ee29e00ef45bf5e455855dee2bbf40c41c555444dace81b06c07811f2d5da.scope: Deactivated successfully.
Jan 20 15:22:40 compute-0 podman[371852]: 2026-01-20 15:22:40.553139691 +0000 UTC m=+0.122650670 container died be7ee29e00ef45bf5e455855dee2bbf40c41c555444dace81b06c07811f2d5da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_sutherland, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 20 15:22:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-e1062b6000b24248b00a225d6f7a2bb1ae7686b7f5023ebcab0b675acbfd8dd5-merged.mount: Deactivated successfully.
Jan 20 15:22:40 compute-0 podman[371852]: 2026-01-20 15:22:40.583745296 +0000 UTC m=+0.153256265 container remove be7ee29e00ef45bf5e455855dee2bbf40c41c555444dace81b06c07811f2d5da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_sutherland, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 15:22:40 compute-0 systemd[1]: libpod-conmon-be7ee29e00ef45bf5e455855dee2bbf40c41c555444dace81b06c07811f2d5da.scope: Deactivated successfully.
Jan 20 15:22:40 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:22:40 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:22:40 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:22:40.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:22:40 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3015: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 20 15:22:40 compute-0 podman[371892]: 2026-01-20 15:22:40.74922555 +0000 UTC m=+0.042129698 container create 26ed386d63076f9815cef6f73395f010daa597ca054fb4c0a024c32e8a287070 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_bell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 15:22:40 compute-0 systemd[1]: Started libpod-conmon-26ed386d63076f9815cef6f73395f010daa597ca054fb4c0a024c32e8a287070.scope.
Jan 20 15:22:40 compute-0 nova_compute[250018]: 2026-01-20 15:22:40.797 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:22:40 compute-0 ceph-mon[74360]: pgmap v3015: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 20 15:22:40 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:22:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7259fd7a32de079c1cfc4fbba6c9b200c7bb4606eb245a14da0d0e373bf174c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 15:22:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7259fd7a32de079c1cfc4fbba6c9b200c7bb4606eb245a14da0d0e373bf174c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 15:22:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7259fd7a32de079c1cfc4fbba6c9b200c7bb4606eb245a14da0d0e373bf174c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 15:22:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7259fd7a32de079c1cfc4fbba6c9b200c7bb4606eb245a14da0d0e373bf174c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 15:22:40 compute-0 podman[371892]: 2026-01-20 15:22:40.730900875 +0000 UTC m=+0.023805013 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:22:40 compute-0 podman[371892]: 2026-01-20 15:22:40.836562496 +0000 UTC m=+0.129466624 container init 26ed386d63076f9815cef6f73395f010daa597ca054fb4c0a024c32e8a287070 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_bell, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 15:22:40 compute-0 podman[371892]: 2026-01-20 15:22:40.843868323 +0000 UTC m=+0.136772451 container start 26ed386d63076f9815cef6f73395f010daa597ca054fb4c0a024c32e8a287070 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_bell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 20 15:22:40 compute-0 podman[371892]: 2026-01-20 15:22:40.84712203 +0000 UTC m=+0.140026208 container attach 26ed386d63076f9815cef6f73395f010daa597ca054fb4c0a024c32e8a287070 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_bell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:22:41 compute-0 laughing_bell[371908]: {
Jan 20 15:22:41 compute-0 laughing_bell[371908]:     "afc3740a-ccee-46f8-b343-22648cc89376": {
Jan 20 15:22:41 compute-0 laughing_bell[371908]:         "ceph_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 15:22:41 compute-0 laughing_bell[371908]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 20 15:22:41 compute-0 laughing_bell[371908]:         "osd_id": 0,
Jan 20 15:22:41 compute-0 laughing_bell[371908]:         "osd_uuid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 15:22:41 compute-0 laughing_bell[371908]:         "type": "bluestore"
Jan 20 15:22:41 compute-0 laughing_bell[371908]:     }
Jan 20 15:22:41 compute-0 laughing_bell[371908]: }
Jan 20 15:22:41 compute-0 systemd[1]: libpod-26ed386d63076f9815cef6f73395f010daa597ca054fb4c0a024c32e8a287070.scope: Deactivated successfully.
Jan 20 15:22:41 compute-0 podman[371892]: 2026-01-20 15:22:41.645862217 +0000 UTC m=+0.938766345 container died 26ed386d63076f9815cef6f73395f010daa597ca054fb4c0a024c32e8a287070 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_bell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True)
Jan 20 15:22:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-c7259fd7a32de079c1cfc4fbba6c9b200c7bb4606eb245a14da0d0e373bf174c-merged.mount: Deactivated successfully.
Jan 20 15:22:41 compute-0 podman[371892]: 2026-01-20 15:22:41.696818441 +0000 UTC m=+0.989722569 container remove 26ed386d63076f9815cef6f73395f010daa597ca054fb4c0a024c32e8a287070 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_bell, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 20 15:22:41 compute-0 systemd[1]: libpod-conmon-26ed386d63076f9815cef6f73395f010daa597ca054fb4c0a024c32e8a287070.scope: Deactivated successfully.
Jan 20 15:22:41 compute-0 sudo[371784]: pam_unix(sudo:session): session closed for user root
Jan 20 15:22:41 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 15:22:41 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:22:41 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 15:22:41 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:22:41 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev dc15dae7-f395-4970-8034-2529c6427cd2 does not exist
Jan 20 15:22:41 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev f09b7852-f66c-40f3-9190-3ca8f5bbc387 does not exist
Jan 20 15:22:41 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 79f47c43-d752-464f-aca3-9df61f44efae does not exist
Jan 20 15:22:41 compute-0 sudo[371941]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:22:41 compute-0 sudo[371941]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:22:41 compute-0 sudo[371941]: pam_unix(sudo:session): session closed for user root
Jan 20 15:22:41 compute-0 sudo[371966]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 15:22:41 compute-0 sudo[371966]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:22:41 compute-0 sudo[371966]: pam_unix(sudo:session): session closed for user root
Jan 20 15:22:41 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:22:41 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:22:41 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:22:41.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:22:42 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:22:42 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:22:42 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:22:42 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:22:42.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:22:42 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3016: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.7 KiB/s rd, 511 B/s wr, 3 op/s
Jan 20 15:22:42 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:22:42 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:22:43 compute-0 ceph-mon[74360]: pgmap v3016: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.7 KiB/s rd, 511 B/s wr, 3 op/s
Jan 20 15:22:43 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:22:43 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:22:43 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:22:43.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:22:44 compute-0 nova_compute[250018]: 2026-01-20 15:22:44.384 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:22:44 compute-0 podman[371994]: 2026-01-20 15:22:44.469175636 +0000 UTC m=+0.060626446 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent)
Jan 20 15:22:44 compute-0 podman[371993]: 2026-01-20 15:22:44.492927197 +0000 UTC m=+0.084427649 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:22:44 compute-0 sudo[372037]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:22:44 compute-0 sudo[372037]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:22:44 compute-0 sudo[372037]: pam_unix(sudo:session): session closed for user root
Jan 20 15:22:44 compute-0 sudo[372065]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:22:44 compute-0 sudo[372065]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:22:44 compute-0 sudo[372065]: pam_unix(sudo:session): session closed for user root
Jan 20 15:22:44 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:22:44 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:22:44 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:22:44.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:22:44 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3017: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:22:44 compute-0 ceph-mon[74360]: pgmap v3017: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:22:45 compute-0 nova_compute[250018]: 2026-01-20 15:22:45.797 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:22:45 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:22:45 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:22:45 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:22:45.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:22:46 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:22:46 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:22:46 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:22:46.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:22:46 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3018: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:22:46 compute-0 ceph-mon[74360]: pgmap v3018: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:22:47 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:22:47 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:22:47 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:22:47 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:22:47.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:22:48 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:22:48 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:22:48 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:22:48.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:22:48 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3019: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:22:49 compute-0 nova_compute[250018]: 2026-01-20 15:22:49.386 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:22:49 compute-0 ceph-mon[74360]: pgmap v3019: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:22:49 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:22:49 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:22:49 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:22:49.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:22:50 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:22:50 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:22:50 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:22:50.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:22:50 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3020: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:22:50 compute-0 nova_compute[250018]: 2026-01-20 15:22:50.799 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:22:51 compute-0 ceph-mon[74360]: pgmap v3020: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:22:51 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:22:51 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:22:51 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:22:51.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:22:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:22:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:22:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:22:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:22:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:22:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:22:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Optimize plan auto_2026-01-20_15:22:52
Jan 20 15:22:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 15:22:52 compute-0 ceph-mgr[74653]: [balancer INFO root] do_upmap
Jan 20 15:22:52 compute-0 ceph-mgr[74653]: [balancer INFO root] pools ['default.rgw.log', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.meta', 'default.rgw.control', 'images', 'backups', '.rgw.root', 'volumes', 'vms', 'cephfs.cephfs.data']
Jan 20 15:22:52 compute-0 ceph-mgr[74653]: [balancer INFO root] prepared 0/10 changes
Jan 20 15:22:52 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:22:52 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:22:52 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:22:52 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:22:52.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:22:52 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3021: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:22:53 compute-0 nova_compute[250018]: 2026-01-20 15:22:53.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:22:53 compute-0 ceph-mon[74360]: pgmap v3021: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:22:53 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/3700219704' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:22:53 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:22:53 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:22:53 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:22:53.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:22:54 compute-0 nova_compute[250018]: 2026-01-20 15:22:54.387 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:22:54 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:22:54 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:22:54 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:22:54.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:22:54 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3022: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:22:55 compute-0 nova_compute[250018]: 2026-01-20 15:22:55.802 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:22:55 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:22:55 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:22:55 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:22:55.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:22:56 compute-0 ceph-mon[74360]: pgmap v3022: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:22:56 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:22:56 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:22:56 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:22:56.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:22:56 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3023: 321 pgs: 321 active+clean; 151 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 1.3 MiB/s wr, 15 op/s
Jan 20 15:22:57 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:22:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 15:22:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 15:22:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 15:22:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 15:22:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 15:22:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 15:22:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 15:22:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 15:22:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 15:22:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 15:22:57 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:22:57 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:22:57 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:22:57.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:22:58 compute-0 ceph-mon[74360]: pgmap v3023: 321 pgs: 321 active+clean; 151 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 1.3 MiB/s wr, 15 op/s
Jan 20 15:22:58 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:22:58 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:22:58 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:22:58.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:22:58 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3024: 321 pgs: 321 active+clean; 151 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 1.3 MiB/s wr, 15 op/s
Jan 20 15:22:59 compute-0 nova_compute[250018]: 2026-01-20 15:22:59.390 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:22:59 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:22:59 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:22:59 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:22:59.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:23:00 compute-0 ceph-mon[74360]: pgmap v3024: 321 pgs: 321 active+clean; 151 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 1.3 MiB/s wr, 15 op/s
Jan 20 15:23:00 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:23:00 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:23:00 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:23:00.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:23:00 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3025: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:23:00 compute-0 nova_compute[250018]: 2026-01-20 15:23:00.803 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:23:01 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:23:01 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:23:01 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:23:01.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:23:02 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:23:02 compute-0 ceph-mon[74360]: pgmap v3025: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:23:02 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/3095742531' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:23:02 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:23:02 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:23:02 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:23:02.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:23:02 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3026: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:23:03 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/1782031866' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:23:03 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:23:03 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:23:03 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:23:03.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:23:04 compute-0 nova_compute[250018]: 2026-01-20 15:23:04.391 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:23:04 compute-0 sudo[372101]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:23:04 compute-0 sudo[372101]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:23:04 compute-0 sudo[372101]: pam_unix(sudo:session): session closed for user root
Jan 20 15:23:04 compute-0 sudo[372126]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:23:04 compute-0 sudo[372126]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:23:04 compute-0 sudo[372126]: pam_unix(sudo:session): session closed for user root
Jan 20 15:23:04 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:23:04 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:23:04 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:23:04.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:23:04 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3027: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:23:04 compute-0 ceph-mon[74360]: pgmap v3026: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:23:05 compute-0 nova_compute[250018]: 2026-01-20 15:23:05.804 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:23:05 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:23:05 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:23:05 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:23:05.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:23:05 compute-0 ceph-mon[74360]: pgmap v3027: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:23:05 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/147687748' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:23:06 compute-0 nova_compute[250018]: 2026-01-20 15:23:06.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:23:06 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:23:06 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:23:06 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:23:06.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:23:06 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3028: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Jan 20 15:23:06 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/2287325465' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:23:07 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:23:07 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:23:07 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:23:07 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:23:07.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:23:08 compute-0 ceph-mon[74360]: pgmap v3028: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Jan 20 15:23:08 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/3144324644' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:23:08 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:23:08 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:23:08 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:23:08.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:23:08 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3029: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 14 KiB/s rd, 507 KiB/s wr, 21 op/s
Jan 20 15:23:09 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/2587878333' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:23:09 compute-0 nova_compute[250018]: 2026-01-20 15:23:09.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:23:09 compute-0 nova_compute[250018]: 2026-01-20 15:23:09.073 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:23:09 compute-0 nova_compute[250018]: 2026-01-20 15:23:09.073 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:23:09 compute-0 nova_compute[250018]: 2026-01-20 15:23:09.073 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:23:09 compute-0 nova_compute[250018]: 2026-01-20 15:23:09.074 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 15:23:09 compute-0 nova_compute[250018]: 2026-01-20 15:23:09.074 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:23:09 compute-0 nova_compute[250018]: 2026-01-20 15:23:09.392 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:23:09 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:23:09 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/467097017' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:23:09 compute-0 nova_compute[250018]: 2026-01-20 15:23:09.514 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:23:09 compute-0 nova_compute[250018]: 2026-01-20 15:23:09.662 250022 WARNING nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:23:09 compute-0 nova_compute[250018]: 2026-01-20 15:23:09.664 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4208MB free_disk=20.96738052368164GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 15:23:09 compute-0 nova_compute[250018]: 2026-01-20 15:23:09.664 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:23:09 compute-0 nova_compute[250018]: 2026-01-20 15:23:09.664 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:23:09 compute-0 nova_compute[250018]: 2026-01-20 15:23:09.738 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 15:23:09 compute-0 nova_compute[250018]: 2026-01-20 15:23:09.738 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 15:23:09 compute-0 nova_compute[250018]: 2026-01-20 15:23:09.751 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:23:09 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:23:09 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:23:09 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:23:09.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:23:10 compute-0 ceph-mon[74360]: pgmap v3029: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 14 KiB/s rd, 507 KiB/s wr, 21 op/s
Jan 20 15:23:10 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/467097017' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:23:10 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:23:10 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/619901193' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:23:10 compute-0 nova_compute[250018]: 2026-01-20 15:23:10.177 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:23:10 compute-0 nova_compute[250018]: 2026-01-20 15:23:10.183 250022 DEBUG nova.compute.provider_tree [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:23:10 compute-0 nova_compute[250018]: 2026-01-20 15:23:10.201 250022 DEBUG nova.scheduler.client.report [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:23:10 compute-0 nova_compute[250018]: 2026-01-20 15:23:10.203 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 15:23:10 compute-0 nova_compute[250018]: 2026-01-20 15:23:10.203 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.539s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:23:10 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3030: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 507 KiB/s wr, 74 op/s
Jan 20 15:23:10 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:23:10 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:23:10 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:23:10.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:23:10 compute-0 nova_compute[250018]: 2026-01-20 15:23:10.806 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:23:11 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/619901193' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:23:11 compute-0 nova_compute[250018]: 2026-01-20 15:23:11.202 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:23:11 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:23:11.202 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=69, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=68) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:23:11 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:23:11.203 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 15:23:11 compute-0 nova_compute[250018]: 2026-01-20 15:23:11.203 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:23:11 compute-0 nova_compute[250018]: 2026-01-20 15:23:11.203 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:23:11 compute-0 nova_compute[250018]: 2026-01-20 15:23:11.204 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:23:11 compute-0 nova_compute[250018]: 2026-01-20 15:23:11.204 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:23:11 compute-0 nova_compute[250018]: 2026-01-20 15:23:11.204 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 15:23:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 15:23:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:23:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 20 15:23:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:23:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0009958283896333519 of space, bias 1.0, pg target 0.2987485168900056 quantized to 32 (current 32)
Jan 20 15:23:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:23:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021614147124511445 of space, bias 1.0, pg target 0.6484244137353433 quantized to 32 (current 32)
Jan 20 15:23:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:23:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 20 15:23:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:23:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 20 15:23:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:23:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 20 15:23:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:23:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 15:23:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:23:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 20 15:23:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:23:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 20 15:23:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:23:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 15:23:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:23:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 20 15:23:11 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:23:11 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:23:11 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:23:11.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:23:12 compute-0 ceph-mon[74360]: pgmap v3030: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 507 KiB/s wr, 74 op/s
Jan 20 15:23:12 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:23:12 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3031: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 20 15:23:12 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:23:12 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:23:12 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:23:12.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:23:13 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:23:13 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:23:13 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:23:13.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:23:14 compute-0 ceph-mon[74360]: pgmap v3031: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 20 15:23:14 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/3891755442' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:23:14 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/3891755442' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:23:14 compute-0 nova_compute[250018]: 2026-01-20 15:23:14.394 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:23:14 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3032: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 20 15:23:14 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:23:14 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:23:14 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:23:14.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:23:15 compute-0 podman[372201]: 2026-01-20 15:23:15.456090981 +0000 UTC m=+0.051103670 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_managed=true)
Jan 20 15:23:15 compute-0 podman[372200]: 2026-01-20 15:23:15.486077739 +0000 UTC m=+0.084858880 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, 
io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 20 15:23:15 compute-0 nova_compute[250018]: 2026-01-20 15:23:15.807 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:23:15 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:23:15 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:23:15 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:23:15.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:23:16 compute-0 ceph-mon[74360]: pgmap v3032: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 20 15:23:16 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3033: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 20 15:23:16 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:23:16 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:23:16 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:23:16.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:23:17 compute-0 ovn_controller[148666]: 2026-01-20T15:23:17Z|00726|memory_trim|INFO|Detected inactivity (last active 30003 ms ago): trimming memory
Jan 20 15:23:17 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:23:17 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:23:17 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:23:17 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:23:17.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:23:18 compute-0 ceph-mon[74360]: pgmap v3033: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 20 15:23:18 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:23:18.205 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=367c1a2c-b16a-4828-ab5a-626bb50023b4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '69'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:23:18 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3034: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 64 op/s
Jan 20 15:23:18 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:23:18 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:23:18 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:23:18.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:23:19 compute-0 nova_compute[250018]: 2026-01-20 15:23:19.396 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:23:19 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:23:19 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:23:19 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:23:19.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:23:20 compute-0 nova_compute[250018]: 2026-01-20 15:23:20.045 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:23:20 compute-0 ceph-mon[74360]: pgmap v3034: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 64 op/s
Jan 20 15:23:20 compute-0 sshd-session[372246]: Invalid user test from 134.122.57.138 port 49502
Jan 20 15:23:20 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3035: 321 pgs: 321 active+clean; 180 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.2 MiB/s wr, 104 op/s
Jan 20 15:23:20 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:23:20 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:23:20 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:23:20.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:23:20 compute-0 nova_compute[250018]: 2026-01-20 15:23:20.809 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:23:20 compute-0 sshd-session[372246]: Connection closed by invalid user test 134.122.57.138 port 49502 [preauth]
Jan 20 15:23:21 compute-0 nova_compute[250018]: 2026-01-20 15:23:21.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:23:21 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:23:21 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:23:21 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:23:21.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:23:22 compute-0 ceph-mon[74360]: pgmap v3035: 321 pgs: 321 active+clean; 180 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.2 MiB/s wr, 104 op/s
Jan 20 15:23:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:23:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:23:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:23:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:23:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:23:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:23:22 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:23:22 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3036: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 680 KiB/s rd, 2.1 MiB/s wr, 74 op/s
Jan 20 15:23:22 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:23:22 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:23:22 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:23:22.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:23:24 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:23:24 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:23:24 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:23:24.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:23:24 compute-0 ceph-mon[74360]: pgmap v3036: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 680 KiB/s rd, 2.1 MiB/s wr, 74 op/s
Jan 20 15:23:24 compute-0 nova_compute[250018]: 2026-01-20 15:23:24.398 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:23:24 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3037: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 381 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 20 15:23:24 compute-0 sudo[372253]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:23:24 compute-0 sudo[372253]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:23:24 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:23:24 compute-0 sudo[372253]: pam_unix(sudo:session): session closed for user root
Jan 20 15:23:24 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:23:24 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:23:24.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:23:24 compute-0 sudo[372278]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:23:24 compute-0 sudo[372278]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:23:24 compute-0 sudo[372278]: pam_unix(sudo:session): session closed for user root
Jan 20 15:23:25 compute-0 nova_compute[250018]: 2026-01-20 15:23:25.812 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:23:26 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:23:26 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:23:26 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:23:26.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:23:26 compute-0 nova_compute[250018]: 2026-01-20 15:23:26.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:23:26 compute-0 nova_compute[250018]: 2026-01-20 15:23:26.050 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 15:23:26 compute-0 nova_compute[250018]: 2026-01-20 15:23:26.051 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 15:23:26 compute-0 nova_compute[250018]: 2026-01-20 15:23:26.067 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 20 15:23:26 compute-0 ceph-mon[74360]: pgmap v3037: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 381 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 20 15:23:26 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3038: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 381 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Jan 20 15:23:26 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:23:26 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:23:26 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:23:26.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:23:27 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:23:28 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:23:28 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:23:28 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:23:28.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:23:28 compute-0 ceph-mon[74360]: pgmap v3038: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 381 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Jan 20 15:23:28 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3039: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 381 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Jan 20 15:23:28 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:23:28 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:23:28 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:23:28.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:23:29 compute-0 nova_compute[250018]: 2026-01-20 15:23:29.400 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:23:30 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:23:30 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:23:30 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:23:30.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:23:30 compute-0 ceph-mon[74360]: pgmap v3039: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 381 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Jan 20 15:23:30 compute-0 ceph-mon[74360]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #153. Immutable memtables: 0.
Jan 20 15:23:30 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:23:30.523559) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 15:23:30 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:856] [default] [JOB 93] Flushing memtable with next log file: 153
Jan 20 15:23:30 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768922610523638, "job": 93, "event": "flush_started", "num_memtables": 1, "num_entries": 1641, "num_deletes": 252, "total_data_size": 2754114, "memory_usage": 2796904, "flush_reason": "Manual Compaction"}
Jan 20 15:23:30 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:885] [default] [JOB 93] Level-0 flush table #154: started
Jan 20 15:23:30 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768922610536834, "cf_name": "default", "job": 93, "event": "table_file_creation", "file_number": 154, "file_size": 2698868, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 66951, "largest_seqno": 68590, "table_properties": {"data_size": 2691378, "index_size": 4368, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2053, "raw_key_size": 16277, "raw_average_key_size": 20, "raw_value_size": 2676151, "raw_average_value_size": 3361, "num_data_blocks": 190, "num_entries": 796, "num_filter_entries": 796, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768922462, "oldest_key_time": 1768922462, "file_creation_time": 1768922610, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1688fb6f-f397-4aac-a0e8-874ff91d97a7", "db_session_id": "5V3N2TVXYZBCXP55EZHK", "orig_file_number": 154, "seqno_to_time_mapping": "N/A"}}
Jan 20 15:23:30 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 93] Flush lasted 13309 microseconds, and 5611 cpu microseconds.
Jan 20 15:23:30 compute-0 ceph-mon[74360]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 15:23:30 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:23:30.536882) [db/flush_job.cc:967] [default] [JOB 93] Level-0 flush table #154: 2698868 bytes OK
Jan 20 15:23:30 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:23:30.536897) [db/memtable_list.cc:519] [default] Level-0 commit table #154 started
Jan 20 15:23:30 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:23:30.538307) [db/memtable_list.cc:722] [default] Level-0 commit table #154: memtable #1 done
Jan 20 15:23:30 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:23:30.538317) EVENT_LOG_v1 {"time_micros": 1768922610538315, "job": 93, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 15:23:30 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:23:30.538331) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 15:23:30 compute-0 ceph-mon[74360]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 93] Try to delete WAL files size 2747187, prev total WAL file size 2747187, number of live WAL files 2.
Jan 20 15:23:30 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000150.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:23:30 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:23:30.538994) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036323735' seq:72057594037927935, type:22 .. '7061786F730036353237' seq:0, type:0; will stop at (end)
Jan 20 15:23:30 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 94] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 15:23:30 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 93 Base level 0, inputs: [154(2635KB)], [152(10MB)]
Jan 20 15:23:30 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768922610539034, "job": 94, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [154], "files_L6": [152], "score": -1, "input_data_size": 13196469, "oldest_snapshot_seqno": -1}
Jan 20 15:23:30 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 94] Generated table #155: 9667 keys, 11330093 bytes, temperature: kUnknown
Jan 20 15:23:30 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768922610598150, "cf_name": "default", "job": 94, "event": "table_file_creation", "file_number": 155, "file_size": 11330093, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11268910, "index_size": 35918, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 24197, "raw_key_size": 254385, "raw_average_key_size": 26, "raw_value_size": 11100614, "raw_average_value_size": 1148, "num_data_blocks": 1363, "num_entries": 9667, "num_filter_entries": 9667, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768917305, "oldest_key_time": 0, "file_creation_time": 1768922610, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1688fb6f-f397-4aac-a0e8-874ff91d97a7", "db_session_id": "5V3N2TVXYZBCXP55EZHK", "orig_file_number": 155, "seqno_to_time_mapping": "N/A"}}
Jan 20 15:23:30 compute-0 ceph-mon[74360]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 15:23:30 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:23:30.598640) [db/compaction/compaction_job.cc:1663] [default] [JOB 94] Compacted 1@0 + 1@6 files to L6 => 11330093 bytes
Jan 20 15:23:30 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:23:30.600455) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 222.5 rd, 191.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.6, 10.0 +0.0 blob) out(10.8 +0.0 blob), read-write-amplify(9.1) write-amplify(4.2) OK, records in: 10192, records dropped: 525 output_compression: NoCompression
Jan 20 15:23:30 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:23:30.600471) EVENT_LOG_v1 {"time_micros": 1768922610600463, "job": 94, "event": "compaction_finished", "compaction_time_micros": 59315, "compaction_time_cpu_micros": 26173, "output_level": 6, "num_output_files": 1, "total_output_size": 11330093, "num_input_records": 10192, "num_output_records": 9667, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 15:23:30 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000154.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:23:30 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768922610601131, "job": 94, "event": "table_file_deletion", "file_number": 154}
Jan 20 15:23:30 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000152.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:23:30 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768922610602994, "job": 94, "event": "table_file_deletion", "file_number": 152}
Jan 20 15:23:30 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:23:30.538928) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:23:30 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:23:30.603141) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:23:30 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:23:30.603147) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:23:30 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:23:30.603149) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:23:30 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:23:30.603154) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:23:30 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:23:30.603156) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:23:30 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3040: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 383 KiB/s rd, 2.2 MiB/s wr, 68 op/s
Jan 20 15:23:30 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:23:30 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:23:30 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:23:30.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:23:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:23:30.792 160071 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:23:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:23:30.793 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:23:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:23:30.793 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:23:30 compute-0 nova_compute[250018]: 2026-01-20 15:23:30.814 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:23:32 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:23:32 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:23:32 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:23:32.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:23:32 compute-0 sshd-session[372250]: Connection closed by authenticating user root 121.36.32.179 port 51920 [preauth]
Jan 20 15:23:32 compute-0 ceph-mon[74360]: pgmap v3040: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 383 KiB/s rd, 2.2 MiB/s wr, 68 op/s
Jan 20 15:23:32 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:23:32 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3041: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 150 KiB/s rd, 950 KiB/s wr, 28 op/s
Jan 20 15:23:32 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:23:32 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:23:32 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:23:32.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:23:34 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:23:34 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:23:34 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:23:34.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:23:34 compute-0 nova_compute[250018]: 2026-01-20 15:23:34.401 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:23:34 compute-0 ceph-mon[74360]: pgmap v3041: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 150 KiB/s rd, 950 KiB/s wr, 28 op/s
Jan 20 15:23:34 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/156660179' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:23:34 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3042: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 37 KiB/s rd, 30 KiB/s wr, 5 op/s
Jan 20 15:23:34 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:23:34 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:23:34 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:23:34.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:23:35 compute-0 nova_compute[250018]: 2026-01-20 15:23:35.816 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:23:35 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/316765263' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:23:36 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:23:36 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:23:36 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:23:36.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:23:36 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3043: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 6.0 KiB/s rd, 30 KiB/s wr, 8 op/s
Jan 20 15:23:36 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:23:36 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:23:36 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:23:36.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:23:37 compute-0 ceph-mon[74360]: pgmap v3042: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 37 KiB/s rd, 30 KiB/s wr, 5 op/s
Jan 20 15:23:37 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:23:38 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:23:38 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:23:38 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:23:38.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:23:38 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3044: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 6.0 KiB/s rd, 18 KiB/s wr, 6 op/s
Jan 20 15:23:38 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:23:38 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:23:38 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:23:38.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:23:38 compute-0 ceph-mon[74360]: pgmap v3043: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 6.0 KiB/s rd, 30 KiB/s wr, 8 op/s
Jan 20 15:23:39 compute-0 nova_compute[250018]: 2026-01-20 15:23:39.403 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:23:40 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:23:40 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:23:40 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:23:40.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:23:40 compute-0 ceph-mon[74360]: pgmap v3044: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 6.0 KiB/s rd, 18 KiB/s wr, 6 op/s
Jan 20 15:23:40 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3045: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 18 KiB/s wr, 61 op/s
Jan 20 15:23:40 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:23:40 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:23:40 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:23:40.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:23:40 compute-0 nova_compute[250018]: 2026-01-20 15:23:40.818 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:23:42 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:23:42 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:23:42 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:23:42.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:23:42 compute-0 ceph-mon[74360]: pgmap v3045: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 18 KiB/s wr, 61 op/s
Jan 20 15:23:42 compute-0 sudo[372311]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:23:42 compute-0 sudo[372311]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:23:42 compute-0 sudo[372311]: pam_unix(sudo:session): session closed for user root
Jan 20 15:23:42 compute-0 sudo[372336]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:23:42 compute-0 sudo[372336]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:23:42 compute-0 sudo[372336]: pam_unix(sudo:session): session closed for user root
Jan 20 15:23:42 compute-0 sudo[372362]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:23:42 compute-0 sudo[372362]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:23:42 compute-0 sudo[372362]: pam_unix(sudo:session): session closed for user root
Jan 20 15:23:42 compute-0 sudo[372387]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 15:23:42 compute-0 sudo[372387]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:23:42 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:23:42 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3046: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 0 B/s wr, 71 op/s
Jan 20 15:23:42 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:23:42 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:23:42 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:23:42.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:23:42 compute-0 sudo[372387]: pam_unix(sudo:session): session closed for user root
Jan 20 15:23:42 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 15:23:42 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:23:42 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 20 15:23:42 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 15:23:42 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 20 15:23:42 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:23:42 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 00817a6e-a38d-4543-bdae-859744893239 does not exist
Jan 20 15:23:42 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 9cd1309b-3674-44a6-aebb-bc2b758c176d does not exist
Jan 20 15:23:42 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev f51da1ad-0491-46a4-951a-2b27b314aa02 does not exist
Jan 20 15:23:42 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 20 15:23:42 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 15:23:42 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 20 15:23:42 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 15:23:42 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 15:23:42 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:23:43 compute-0 sudo[372443]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:23:43 compute-0 sudo[372443]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:23:43 compute-0 sudo[372443]: pam_unix(sudo:session): session closed for user root
Jan 20 15:23:43 compute-0 sudo[372468]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:23:43 compute-0 sudo[372468]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:23:43 compute-0 sudo[372468]: pam_unix(sudo:session): session closed for user root
Jan 20 15:23:43 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:23:43 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 15:23:43 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:23:43 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 15:23:43 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 15:23:43 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:23:43 compute-0 sudo[372493]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:23:43 compute-0 sudo[372493]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:23:43 compute-0 sudo[372493]: pam_unix(sudo:session): session closed for user root
Jan 20 15:23:43 compute-0 sudo[372518]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 15:23:43 compute-0 sudo[372518]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:23:43 compute-0 podman[372583]: 2026-01-20 15:23:43.506260196 +0000 UTC m=+0.049695191 container create db191eb576503788ae35d0d450880fcd8525a293eea755178a7b8c6347ca7e23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_jemison, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 20 15:23:43 compute-0 systemd[1]: Started libpod-conmon-db191eb576503788ae35d0d450880fcd8525a293eea755178a7b8c6347ca7e23.scope.
Jan 20 15:23:43 compute-0 podman[372583]: 2026-01-20 15:23:43.473751389 +0000 UTC m=+0.017186404 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:23:43 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:23:43 compute-0 podman[372583]: 2026-01-20 15:23:43.600816927 +0000 UTC m=+0.144251912 container init db191eb576503788ae35d0d450880fcd8525a293eea755178a7b8c6347ca7e23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_jemison, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 20 15:23:43 compute-0 podman[372583]: 2026-01-20 15:23:43.608149384 +0000 UTC m=+0.151584379 container start db191eb576503788ae35d0d450880fcd8525a293eea755178a7b8c6347ca7e23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_jemison, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 20 15:23:43 compute-0 podman[372583]: 2026-01-20 15:23:43.611575517 +0000 UTC m=+0.155010512 container attach db191eb576503788ae35d0d450880fcd8525a293eea755178a7b8c6347ca7e23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_jemison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 15:23:43 compute-0 nice_jemison[372599]: 167 167
Jan 20 15:23:43 compute-0 systemd[1]: libpod-db191eb576503788ae35d0d450880fcd8525a293eea755178a7b8c6347ca7e23.scope: Deactivated successfully.
Jan 20 15:23:43 compute-0 podman[372583]: 2026-01-20 15:23:43.613395006 +0000 UTC m=+0.156830021 container died db191eb576503788ae35d0d450880fcd8525a293eea755178a7b8c6347ca7e23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_jemison, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 15:23:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-ced1a54f4abbcaf447f06c964196b4cda21440af676f2f037c4beba4695318b1-merged.mount: Deactivated successfully.
Jan 20 15:23:43 compute-0 podman[372583]: 2026-01-20 15:23:43.649452079 +0000 UTC m=+0.192887084 container remove db191eb576503788ae35d0d450880fcd8525a293eea755178a7b8c6347ca7e23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_jemison, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 20 15:23:43 compute-0 systemd[1]: libpod-conmon-db191eb576503788ae35d0d450880fcd8525a293eea755178a7b8c6347ca7e23.scope: Deactivated successfully.
Jan 20 15:23:43 compute-0 podman[372622]: 2026-01-20 15:23:43.786233168 +0000 UTC m=+0.038211622 container create 0d4fbc03fdbad4fd3f57766c51878f47182f958d74df4c5c2fbc03d253cfe745 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_dubinsky, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 15:23:43 compute-0 systemd[1]: Started libpod-conmon-0d4fbc03fdbad4fd3f57766c51878f47182f958d74df4c5c2fbc03d253cfe745.scope.
Jan 20 15:23:43 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:23:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab018f7b942990d14aa2ba3753941a2ddaf9c2ea4ef96ac5b9a2f028fe8f4ebe/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 15:23:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab018f7b942990d14aa2ba3753941a2ddaf9c2ea4ef96ac5b9a2f028fe8f4ebe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 15:23:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab018f7b942990d14aa2ba3753941a2ddaf9c2ea4ef96ac5b9a2f028fe8f4ebe/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 15:23:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab018f7b942990d14aa2ba3753941a2ddaf9c2ea4ef96ac5b9a2f028fe8f4ebe/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 15:23:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab018f7b942990d14aa2ba3753941a2ddaf9c2ea4ef96ac5b9a2f028fe8f4ebe/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 15:23:43 compute-0 podman[372622]: 2026-01-20 15:23:43.853869683 +0000 UTC m=+0.105848157 container init 0d4fbc03fdbad4fd3f57766c51878f47182f958d74df4c5c2fbc03d253cfe745 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_dubinsky, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 15:23:43 compute-0 podman[372622]: 2026-01-20 15:23:43.859953867 +0000 UTC m=+0.111932311 container start 0d4fbc03fdbad4fd3f57766c51878f47182f958d74df4c5c2fbc03d253cfe745 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_dubinsky, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 20 15:23:43 compute-0 podman[372622]: 2026-01-20 15:23:43.862957488 +0000 UTC m=+0.114936022 container attach 0d4fbc03fdbad4fd3f57766c51878f47182f958d74df4c5c2fbc03d253cfe745 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_dubinsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 20 15:23:43 compute-0 podman[372622]: 2026-01-20 15:23:43.769770665 +0000 UTC m=+0.021749139 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:23:44 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:23:44 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:23:44 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:23:44.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:23:44 compute-0 ceph-mon[74360]: pgmap v3046: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 0 B/s wr, 71 op/s
Jan 20 15:23:44 compute-0 nova_compute[250018]: 2026-01-20 15:23:44.405 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:23:44 compute-0 busy_dubinsky[372639]: --> passed data devices: 0 physical, 1 LVM
Jan 20 15:23:44 compute-0 busy_dubinsky[372639]: --> relative data size: 1.0
Jan 20 15:23:44 compute-0 busy_dubinsky[372639]: --> All data devices are unavailable
Jan 20 15:23:44 compute-0 systemd[1]: libpod-0d4fbc03fdbad4fd3f57766c51878f47182f958d74df4c5c2fbc03d253cfe745.scope: Deactivated successfully.
Jan 20 15:23:44 compute-0 podman[372622]: 2026-01-20 15:23:44.692575957 +0000 UTC m=+0.944554411 container died 0d4fbc03fdbad4fd3f57766c51878f47182f958d74df4c5c2fbc03d253cfe745 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_dubinsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507)
Jan 20 15:23:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-ab018f7b942990d14aa2ba3753941a2ddaf9c2ea4ef96ac5b9a2f028fe8f4ebe-merged.mount: Deactivated successfully.
Jan 20 15:23:44 compute-0 podman[372622]: 2026-01-20 15:23:44.745133784 +0000 UTC m=+0.997112238 container remove 0d4fbc03fdbad4fd3f57766c51878f47182f958d74df4c5c2fbc03d253cfe745 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_dubinsky, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 15:23:44 compute-0 systemd[1]: libpod-conmon-0d4fbc03fdbad4fd3f57766c51878f47182f958d74df4c5c2fbc03d253cfe745.scope: Deactivated successfully.
Jan 20 15:23:44 compute-0 sudo[372518]: pam_unix(sudo:session): session closed for user root
Jan 20 15:23:44 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3047: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 71 op/s
Jan 20 15:23:44 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:23:44 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:23:44 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:23:44.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:23:44 compute-0 sudo[372669]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:23:44 compute-0 sudo[372669]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:23:44 compute-0 sudo[372669]: pam_unix(sudo:session): session closed for user root
Jan 20 15:23:44 compute-0 sudo[372694]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:23:44 compute-0 sudo[372694]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:23:44 compute-0 sudo[372697]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:23:44 compute-0 sudo[372694]: pam_unix(sudo:session): session closed for user root
Jan 20 15:23:44 compute-0 sudo[372697]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:23:44 compute-0 sudo[372697]: pam_unix(sudo:session): session closed for user root
Jan 20 15:23:44 compute-0 sudo[372744]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:23:44 compute-0 sudo[372744]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:23:44 compute-0 sudo[372745]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:23:44 compute-0 sudo[372744]: pam_unix(sudo:session): session closed for user root
Jan 20 15:23:44 compute-0 sudo[372745]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:23:44 compute-0 sudo[372745]: pam_unix(sudo:session): session closed for user root
Jan 20 15:23:44 compute-0 sudo[372794]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- lvm list --format json
Jan 20 15:23:45 compute-0 sudo[372794]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:23:45 compute-0 podman[372858]: 2026-01-20 15:23:45.300006793 +0000 UTC m=+0.038231973 container create 661695626d27dab6dc34689a64e1b4640c363d269c5865423b460e19abb96c67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_perlman, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0)
Jan 20 15:23:45 compute-0 systemd[1]: Started libpod-conmon-661695626d27dab6dc34689a64e1b4640c363d269c5865423b460e19abb96c67.scope.
Jan 20 15:23:45 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:23:45 compute-0 podman[372858]: 2026-01-20 15:23:45.367670698 +0000 UTC m=+0.105895908 container init 661695626d27dab6dc34689a64e1b4640c363d269c5865423b460e19abb96c67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_perlman, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:23:45 compute-0 podman[372858]: 2026-01-20 15:23:45.373025132 +0000 UTC m=+0.111250312 container start 661695626d27dab6dc34689a64e1b4640c363d269c5865423b460e19abb96c67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_perlman, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True)
Jan 20 15:23:45 compute-0 podman[372858]: 2026-01-20 15:23:45.376022853 +0000 UTC m=+0.114248023 container attach 661695626d27dab6dc34689a64e1b4640c363d269c5865423b460e19abb96c67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_perlman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 15:23:45 compute-0 wizardly_perlman[372874]: 167 167
Jan 20 15:23:45 compute-0 systemd[1]: libpod-661695626d27dab6dc34689a64e1b4640c363d269c5865423b460e19abb96c67.scope: Deactivated successfully.
Jan 20 15:23:45 compute-0 conmon[372874]: conmon 661695626d27dab6dc34 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-661695626d27dab6dc34689a64e1b4640c363d269c5865423b460e19abb96c67.scope/container/memory.events
Jan 20 15:23:45 compute-0 podman[372858]: 2026-01-20 15:23:45.378093219 +0000 UTC m=+0.116318399 container died 661695626d27dab6dc34689a64e1b4640c363d269c5865423b460e19abb96c67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_perlman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 15:23:45 compute-0 podman[372858]: 2026-01-20 15:23:45.282951442 +0000 UTC m=+0.021176642 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:23:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-8bf96c550be8de5a787e21c9172b8858f3e8c7d15ff50940fa2fd075ebe0f8aa-merged.mount: Deactivated successfully.
Jan 20 15:23:45 compute-0 podman[372858]: 2026-01-20 15:23:45.410464662 +0000 UTC m=+0.148689842 container remove 661695626d27dab6dc34689a64e1b4640c363d269c5865423b460e19abb96c67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_perlman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 20 15:23:45 compute-0 systemd[1]: libpod-conmon-661695626d27dab6dc34689a64e1b4640c363d269c5865423b460e19abb96c67.scope: Deactivated successfully.
Jan 20 15:23:45 compute-0 podman[372898]: 2026-01-20 15:23:45.559663887 +0000 UTC m=+0.041049398 container create c3a7ef252ea8615614eaf24a75a9827e77002d96b2a7b5c7d1358f53d89dfbab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_cohen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 20 15:23:45 compute-0 systemd[1]: Started libpod-conmon-c3a7ef252ea8615614eaf24a75a9827e77002d96b2a7b5c7d1358f53d89dfbab.scope.
Jan 20 15:23:45 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:23:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57ad6eb01d83e475011461d888a3e67ab50d4e572d12824b8a140e0020335be6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 15:23:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57ad6eb01d83e475011461d888a3e67ab50d4e572d12824b8a140e0020335be6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 15:23:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57ad6eb01d83e475011461d888a3e67ab50d4e572d12824b8a140e0020335be6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 15:23:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57ad6eb01d83e475011461d888a3e67ab50d4e572d12824b8a140e0020335be6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 15:23:45 compute-0 podman[372898]: 2026-01-20 15:23:45.628188895 +0000 UTC m=+0.109574426 container init c3a7ef252ea8615614eaf24a75a9827e77002d96b2a7b5c7d1358f53d89dfbab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_cohen, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 20 15:23:45 compute-0 podman[372898]: 2026-01-20 15:23:45.539334798 +0000 UTC m=+0.020720339 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:23:45 compute-0 podman[372898]: 2026-01-20 15:23:45.636308375 +0000 UTC m=+0.117693896 container start c3a7ef252ea8615614eaf24a75a9827e77002d96b2a7b5c7d1358f53d89dfbab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_cohen, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 20 15:23:45 compute-0 podman[372898]: 2026-01-20 15:23:45.640201889 +0000 UTC m=+0.121587430 container attach c3a7ef252ea8615614eaf24a75a9827e77002d96b2a7b5c7d1358f53d89dfbab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_cohen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 20 15:23:45 compute-0 podman[372915]: 2026-01-20 15:23:45.66430622 +0000 UTC m=+0.068864009 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 20 15:23:45 compute-0 podman[372912]: 2026-01-20 15:23:45.690225008 +0000 UTC m=+0.094797317 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 20 15:23:45 compute-0 nova_compute[250018]: 2026-01-20 15:23:45.819 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:23:46 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:23:46 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:23:46 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:23:46.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:23:46 compute-0 ceph-mon[74360]: pgmap v3047: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 71 op/s
Jan 20 15:23:46 compute-0 goofy_cohen[372916]: {
Jan 20 15:23:46 compute-0 goofy_cohen[372916]:     "0": [
Jan 20 15:23:46 compute-0 goofy_cohen[372916]:         {
Jan 20 15:23:46 compute-0 goofy_cohen[372916]:             "devices": [
Jan 20 15:23:46 compute-0 goofy_cohen[372916]:                 "/dev/loop3"
Jan 20 15:23:46 compute-0 goofy_cohen[372916]:             ],
Jan 20 15:23:46 compute-0 goofy_cohen[372916]:             "lv_name": "ceph_lv0",
Jan 20 15:23:46 compute-0 goofy_cohen[372916]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 15:23:46 compute-0 goofy_cohen[372916]:             "lv_size": "7511998464",
Jan 20 15:23:46 compute-0 goofy_cohen[372916]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e399cf45-e6b6-5393-99f1-75c601d3f188,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afc3740a-ccee-46f8-b343-22648cc89376,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 20 15:23:46 compute-0 goofy_cohen[372916]:             "lv_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 15:23:46 compute-0 goofy_cohen[372916]:             "name": "ceph_lv0",
Jan 20 15:23:46 compute-0 goofy_cohen[372916]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 15:23:46 compute-0 goofy_cohen[372916]:             "tags": {
Jan 20 15:23:46 compute-0 goofy_cohen[372916]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 15:23:46 compute-0 goofy_cohen[372916]:                 "ceph.block_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 15:23:46 compute-0 goofy_cohen[372916]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 15:23:46 compute-0 goofy_cohen[372916]:                 "ceph.cluster_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 15:23:46 compute-0 goofy_cohen[372916]:                 "ceph.cluster_name": "ceph",
Jan 20 15:23:46 compute-0 goofy_cohen[372916]:                 "ceph.crush_device_class": "",
Jan 20 15:23:46 compute-0 goofy_cohen[372916]:                 "ceph.encrypted": "0",
Jan 20 15:23:46 compute-0 goofy_cohen[372916]:                 "ceph.osd_fsid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 15:23:46 compute-0 goofy_cohen[372916]:                 "ceph.osd_id": "0",
Jan 20 15:23:46 compute-0 goofy_cohen[372916]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 15:23:46 compute-0 goofy_cohen[372916]:                 "ceph.type": "block",
Jan 20 15:23:46 compute-0 goofy_cohen[372916]:                 "ceph.vdo": "0"
Jan 20 15:23:46 compute-0 goofy_cohen[372916]:             },
Jan 20 15:23:46 compute-0 goofy_cohen[372916]:             "type": "block",
Jan 20 15:23:46 compute-0 goofy_cohen[372916]:             "vg_name": "ceph_vg0"
Jan 20 15:23:46 compute-0 goofy_cohen[372916]:         }
Jan 20 15:23:46 compute-0 goofy_cohen[372916]:     ]
Jan 20 15:23:46 compute-0 goofy_cohen[372916]: }
Jan 20 15:23:46 compute-0 systemd[1]: libpod-c3a7ef252ea8615614eaf24a75a9827e77002d96b2a7b5c7d1358f53d89dfbab.scope: Deactivated successfully.
Jan 20 15:23:46 compute-0 podman[372898]: 2026-01-20 15:23:46.434917387 +0000 UTC m=+0.916302938 container died c3a7ef252ea8615614eaf24a75a9827e77002d96b2a7b5c7d1358f53d89dfbab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_cohen, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 15:23:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-57ad6eb01d83e475011461d888a3e67ab50d4e572d12824b8a140e0020335be6-merged.mount: Deactivated successfully.
Jan 20 15:23:46 compute-0 podman[372898]: 2026-01-20 15:23:46.483815566 +0000 UTC m=+0.965201087 container remove c3a7ef252ea8615614eaf24a75a9827e77002d96b2a7b5c7d1358f53d89dfbab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_cohen, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 15:23:46 compute-0 systemd[1]: libpod-conmon-c3a7ef252ea8615614eaf24a75a9827e77002d96b2a7b5c7d1358f53d89dfbab.scope: Deactivated successfully.
Jan 20 15:23:46 compute-0 sudo[372794]: pam_unix(sudo:session): session closed for user root
Jan 20 15:23:46 compute-0 sudo[372978]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:23:46 compute-0 sudo[372978]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:23:46 compute-0 sudo[372978]: pam_unix(sudo:session): session closed for user root
Jan 20 15:23:46 compute-0 sudo[373003]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:23:46 compute-0 sudo[373003]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:23:46 compute-0 sudo[373003]: pam_unix(sudo:session): session closed for user root
Jan 20 15:23:46 compute-0 sudo[373028]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:23:46 compute-0 sudo[373028]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:23:46 compute-0 sudo[373028]: pam_unix(sudo:session): session closed for user root
Jan 20 15:23:46 compute-0 sudo[373053]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- raw list --format json
Jan 20 15:23:46 compute-0 sudo[373053]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:23:46 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3048: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 71 op/s
Jan 20 15:23:46 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:23:46 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:23:46 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:23:46.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:23:47 compute-0 podman[373117]: 2026-01-20 15:23:47.013593236 +0000 UTC m=+0.035551329 container create ca984ff21b980742aa2e8dd74aaa08d54420781c6f3f779b4106cbf0bdf67e8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_spence, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 15:23:47 compute-0 systemd[1]: Started libpod-conmon-ca984ff21b980742aa2e8dd74aaa08d54420781c6f3f779b4106cbf0bdf67e8c.scope.
Jan 20 15:23:47 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:23:47 compute-0 podman[373117]: 2026-01-20 15:23:47.077780798 +0000 UTC m=+0.099738891 container init ca984ff21b980742aa2e8dd74aaa08d54420781c6f3f779b4106cbf0bdf67e8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_spence, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 15:23:47 compute-0 podman[373117]: 2026-01-20 15:23:47.08603252 +0000 UTC m=+0.107990613 container start ca984ff21b980742aa2e8dd74aaa08d54420781c6f3f779b4106cbf0bdf67e8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_spence, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 15:23:47 compute-0 podman[373117]: 2026-01-20 15:23:47.089614688 +0000 UTC m=+0.111572811 container attach ca984ff21b980742aa2e8dd74aaa08d54420781c6f3f779b4106cbf0bdf67e8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_spence, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 15:23:47 compute-0 vigilant_spence[373133]: 167 167
Jan 20 15:23:47 compute-0 systemd[1]: libpod-ca984ff21b980742aa2e8dd74aaa08d54420781c6f3f779b4106cbf0bdf67e8c.scope: Deactivated successfully.
Jan 20 15:23:47 compute-0 podman[373117]: 2026-01-20 15:23:47.092203287 +0000 UTC m=+0.114161400 container died ca984ff21b980742aa2e8dd74aaa08d54420781c6f3f779b4106cbf0bdf67e8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_spence, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 15:23:47 compute-0 podman[373117]: 2026-01-20 15:23:46.999577498 +0000 UTC m=+0.021535611 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:23:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-18093c2cefa61263383183b2001298f1d794da12f9a39d60bea2a06d8c94af82-merged.mount: Deactivated successfully.
Jan 20 15:23:47 compute-0 podman[373117]: 2026-01-20 15:23:47.127771277 +0000 UTC m=+0.149729370 container remove ca984ff21b980742aa2e8dd74aaa08d54420781c6f3f779b4106cbf0bdf67e8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_spence, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 20 15:23:47 compute-0 systemd[1]: libpod-conmon-ca984ff21b980742aa2e8dd74aaa08d54420781c6f3f779b4106cbf0bdf67e8c.scope: Deactivated successfully.
Jan 20 15:23:47 compute-0 podman[373157]: 2026-01-20 15:23:47.273039445 +0000 UTC m=+0.040161984 container create 165130bc523d21f8ebcc6beb0f91cbe13cfe819dacac48d9b0fc241616c02bb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_swanson, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 20 15:23:47 compute-0 systemd[1]: Started libpod-conmon-165130bc523d21f8ebcc6beb0f91cbe13cfe819dacac48d9b0fc241616c02bb2.scope.
Jan 20 15:23:47 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:23:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7640b274d60cc923a129f663474774e8064f9eb0941c58d46c72ce44e744d2e6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 15:23:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7640b274d60cc923a129f663474774e8064f9eb0941c58d46c72ce44e744d2e6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 15:23:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7640b274d60cc923a129f663474774e8064f9eb0941c58d46c72ce44e744d2e6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 15:23:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7640b274d60cc923a129f663474774e8064f9eb0941c58d46c72ce44e744d2e6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 15:23:47 compute-0 podman[373157]: 2026-01-20 15:23:47.253775046 +0000 UTC m=+0.020897615 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:23:47 compute-0 podman[373157]: 2026-01-20 15:23:47.587564415 +0000 UTC m=+0.354686964 container init 165130bc523d21f8ebcc6beb0f91cbe13cfe819dacac48d9b0fc241616c02bb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_swanson, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 15:23:47 compute-0 podman[373157]: 2026-01-20 15:23:47.59579611 +0000 UTC m=+0.362918689 container start 165130bc523d21f8ebcc6beb0f91cbe13cfe819dacac48d9b0fc241616c02bb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_swanson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 15:23:47 compute-0 podman[373157]: 2026-01-20 15:23:47.599418319 +0000 UTC m=+0.366540878 container attach 165130bc523d21f8ebcc6beb0f91cbe13cfe819dacac48d9b0fc241616c02bb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_swanson, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 15:23:47 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:23:48 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:23:48 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:23:48 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:23:48.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:23:48 compute-0 inspiring_swanson[373173]: {
Jan 20 15:23:48 compute-0 inspiring_swanson[373173]:     "afc3740a-ccee-46f8-b343-22648cc89376": {
Jan 20 15:23:48 compute-0 inspiring_swanson[373173]:         "ceph_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 15:23:48 compute-0 inspiring_swanson[373173]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 20 15:23:48 compute-0 inspiring_swanson[373173]:         "osd_id": 0,
Jan 20 15:23:48 compute-0 inspiring_swanson[373173]:         "osd_uuid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 15:23:48 compute-0 inspiring_swanson[373173]:         "type": "bluestore"
Jan 20 15:23:48 compute-0 inspiring_swanson[373173]:     }
Jan 20 15:23:48 compute-0 inspiring_swanson[373173]: }
Jan 20 15:23:48 compute-0 systemd[1]: libpod-165130bc523d21f8ebcc6beb0f91cbe13cfe819dacac48d9b0fc241616c02bb2.scope: Deactivated successfully.
Jan 20 15:23:48 compute-0 conmon[373173]: conmon 165130bc523d21f8ebcc <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-165130bc523d21f8ebcc6beb0f91cbe13cfe819dacac48d9b0fc241616c02bb2.scope/container/memory.events
Jan 20 15:23:48 compute-0 podman[373157]: 2026-01-20 15:23:48.41828507 +0000 UTC m=+1.185407629 container died 165130bc523d21f8ebcc6beb0f91cbe13cfe819dacac48d9b0fc241616c02bb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_swanson, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 20 15:23:48 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3049: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 66 op/s
Jan 20 15:23:48 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:23:48 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:23:48 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:23:48.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:23:48 compute-0 ceph-mon[74360]: pgmap v3048: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 71 op/s
Jan 20 15:23:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-7640b274d60cc923a129f663474774e8064f9eb0941c58d46c72ce44e744d2e6-merged.mount: Deactivated successfully.
Jan 20 15:23:49 compute-0 podman[373157]: 2026-01-20 15:23:49.145431951 +0000 UTC m=+1.912554500 container remove 165130bc523d21f8ebcc6beb0f91cbe13cfe819dacac48d9b0fc241616c02bb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_swanson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 15:23:49 compute-0 systemd[1]: libpod-conmon-165130bc523d21f8ebcc6beb0f91cbe13cfe819dacac48d9b0fc241616c02bb2.scope: Deactivated successfully.
Jan 20 15:23:49 compute-0 sudo[373053]: pam_unix(sudo:session): session closed for user root
Jan 20 15:23:49 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 15:23:49 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:23:49 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 15:23:49 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:23:49 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 1228d9e7-7754-4746-9124-757aea6cc6e1 does not exist
Jan 20 15:23:49 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 3b3f22fb-7af5-47e7-bb49-079323380976 does not exist
Jan 20 15:23:49 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev cbf4418b-69a4-4271-a06f-c99d5733debe does not exist
Jan 20 15:23:49 compute-0 nova_compute[250018]: 2026-01-20 15:23:49.408 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:23:49 compute-0 sudo[373210]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:23:49 compute-0 sudo[373210]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:23:49 compute-0 sudo[373210]: pam_unix(sudo:session): session closed for user root
Jan 20 15:23:49 compute-0 sudo[373235]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 15:23:49 compute-0 sudo[373235]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:23:49 compute-0 sudo[373235]: pam_unix(sudo:session): session closed for user root
Jan 20 15:23:50 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:23:50 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:23:50 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:23:50.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:23:50 compute-0 ceph-mon[74360]: pgmap v3049: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 66 op/s
Jan 20 15:23:50 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:23:50 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:23:50 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3050: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.0 KiB/s wr, 91 op/s
Jan 20 15:23:50 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:23:50 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:23:50 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:23:50.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:23:50 compute-0 nova_compute[250018]: 2026-01-20 15:23:50.822 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:23:52 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:23:52 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:23:52 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:23:52.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:23:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:23:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:23:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:23:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:23:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:23:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:23:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Optimize plan auto_2026-01-20_15:23:52
Jan 20 15:23:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 15:23:52 compute-0 ceph-mgr[74653]: [balancer INFO root] do_upmap
Jan 20 15:23:52 compute-0 ceph-mgr[74653]: [balancer INFO root] pools ['images', 'default.rgw.log', '.rgw.root', 'default.rgw.meta', 'default.rgw.control', 'vms', 'cephfs.cephfs.meta', 'backups', 'cephfs.cephfs.data', 'volumes', '.mgr']
Jan 20 15:23:52 compute-0 ceph-mgr[74653]: [balancer INFO root] prepared 0/10 changes
Jan 20 15:23:52 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3051: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 871 KiB/s rd, 12 KiB/s wr, 53 op/s
Jan 20 15:23:52 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:23:52 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:23:52 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:23:52 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:23:52.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:23:52 compute-0 ceph-mon[74360]: pgmap v3050: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.0 KiB/s wr, 91 op/s
Jan 20 15:23:54 compute-0 nova_compute[250018]: 2026-01-20 15:23:54.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:23:54 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:23:54 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:23:54 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:23:54.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:23:54 compute-0 nova_compute[250018]: 2026-01-20 15:23:54.436 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:23:54 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3052: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 529 KiB/s rd, 12 KiB/s wr, 42 op/s
Jan 20 15:23:54 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:23:54 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:23:54 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:23:54.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:23:55 compute-0 nova_compute[250018]: 2026-01-20 15:23:55.824 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:23:56 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:23:56 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:23:56 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:23:56.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:23:56 compute-0 nova_compute[250018]: 2026-01-20 15:23:56.155 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:23:56 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:23:56.155 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=70, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=69) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:23:56 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:23:56.156 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 15:23:56 compute-0 ceph-mon[74360]: pgmap v3051: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 871 KiB/s rd, 12 KiB/s wr, 53 op/s
Jan 20 15:23:56 compute-0 ceph-mon[74360]: pgmap v3052: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 529 KiB/s rd, 12 KiB/s wr, 42 op/s
Jan 20 15:23:56 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3053: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 529 KiB/s rd, 12 KiB/s wr, 42 op/s
Jan 20 15:23:56 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:23:56 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:23:56 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:23:56.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:23:57 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:23:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 15:23:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 15:23:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 15:23:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 15:23:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 15:23:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 15:23:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 15:23:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 15:23:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 15:23:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 15:23:58 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:23:58 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:23:58 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:23:58.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:23:58 compute-0 ceph-mon[74360]: pgmap v3053: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 529 KiB/s rd, 12 KiB/s wr, 42 op/s
Jan 20 15:23:58 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3054: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 529 KiB/s rd, 12 KiB/s wr, 42 op/s
Jan 20 15:23:58 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:23:58 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:23:58 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:23:58.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:23:59 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:23:59.158 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=367c1a2c-b16a-4828-ab5a-626bb50023b4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '70'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:23:59 compute-0 nova_compute[250018]: 2026-01-20 15:23:59.438 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:24:00 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:24:00 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:24:00 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:24:00.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:24:00 compute-0 ceph-mon[74360]: pgmap v3054: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 529 KiB/s rd, 12 KiB/s wr, 42 op/s
Jan 20 15:24:00 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3055: 321 pgs: 321 active+clean; 175 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 540 KiB/s rd, 21 KiB/s wr, 57 op/s
Jan 20 15:24:00 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:24:00 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:24:00 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:24:00.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:24:00 compute-0 nova_compute[250018]: 2026-01-20 15:24:00.824 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:24:02 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:24:02 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:24:02 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:24:02.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:24:02 compute-0 ceph-mon[74360]: pgmap v3055: 321 pgs: 321 active+clean; 175 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 540 KiB/s rd, 21 KiB/s wr, 57 op/s
Jan 20 15:24:02 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3056: 321 pgs: 321 active+clean; 144 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 210 KiB/s rd, 22 KiB/s wr, 44 op/s
Jan 20 15:24:02 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:24:02 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:24:02 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:24:02 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:24:02.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:24:03 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/2239924417' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:24:04 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:24:04 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:24:04 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:24:04.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:24:04 compute-0 nova_compute[250018]: 2026-01-20 15:24:04.440 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:24:04 compute-0 ceph-mon[74360]: pgmap v3056: 321 pgs: 321 active+clean; 144 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 210 KiB/s rd, 22 KiB/s wr, 44 op/s
Jan 20 15:24:04 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3057: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 21 KiB/s rd, 12 KiB/s wr, 29 op/s
Jan 20 15:24:04 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:24:04 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:24:04 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:24:04.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:24:05 compute-0 sudo[373268]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:24:05 compute-0 sudo[373268]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:24:05 compute-0 sudo[373268]: pam_unix(sudo:session): session closed for user root
Jan 20 15:24:05 compute-0 sudo[373293]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:24:05 compute-0 sudo[373293]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:24:05 compute-0 sudo[373293]: pam_unix(sudo:session): session closed for user root
Jan 20 15:24:05 compute-0 nova_compute[250018]: 2026-01-20 15:24:05.825 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:24:06 compute-0 nova_compute[250018]: 2026-01-20 15:24:06.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:24:06 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:24:06 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:24:06 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:24:06.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:24:06 compute-0 ceph-mon[74360]: pgmap v3057: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 21 KiB/s rd, 12 KiB/s wr, 29 op/s
Jan 20 15:24:06 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/861187482' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:24:06 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3058: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 21 KiB/s rd, 12 KiB/s wr, 29 op/s
Jan 20 15:24:06 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:24:06 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:24:06 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:24:06.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:24:07 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/95525792' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:24:07 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:24:08 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:24:08 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:24:08 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:24:08.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:24:08 compute-0 ceph-mon[74360]: pgmap v3058: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 21 KiB/s rd, 12 KiB/s wr, 29 op/s
Jan 20 15:24:08 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3059: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 20 KiB/s rd, 12 KiB/s wr, 28 op/s
Jan 20 15:24:08 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:24:08 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:24:08 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:24:08.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:24:08 compute-0 sshd-session[373319]: Invalid user test from 134.122.57.138 port 32884
Jan 20 15:24:09 compute-0 sshd-session[373319]: Connection closed by invalid user test 134.122.57.138 port 32884 [preauth]
Jan 20 15:24:09 compute-0 nova_compute[250018]: 2026-01-20 15:24:09.442 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:24:09 compute-0 ceph-mon[74360]: pgmap v3059: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 20 KiB/s rd, 12 KiB/s wr, 28 op/s
Jan 20 15:24:10 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:24:10 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:24:10 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:24:10.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:24:10 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3060: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 20 KiB/s rd, 12 KiB/s wr, 28 op/s
Jan 20 15:24:10 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/960585230' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:24:10 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/110819138' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:24:10 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:24:10 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:24:10 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:24:10.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:24:10 compute-0 nova_compute[250018]: 2026-01-20 15:24:10.827 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:24:11 compute-0 nova_compute[250018]: 2026-01-20 15:24:11.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:24:11 compute-0 nova_compute[250018]: 2026-01-20 15:24:11.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:24:11 compute-0 nova_compute[250018]: 2026-01-20 15:24:11.073 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:24:11 compute-0 nova_compute[250018]: 2026-01-20 15:24:11.073 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:24:11 compute-0 nova_compute[250018]: 2026-01-20 15:24:11.073 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:24:11 compute-0 nova_compute[250018]: 2026-01-20 15:24:11.074 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 15:24:11 compute-0 nova_compute[250018]: 2026-01-20 15:24:11.074 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:24:11 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:24:11 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2015578960' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:24:11 compute-0 nova_compute[250018]: 2026-01-20 15:24:11.531 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:24:11 compute-0 nova_compute[250018]: 2026-01-20 15:24:11.684 250022 WARNING nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:24:11 compute-0 nova_compute[250018]: 2026-01-20 15:24:11.686 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4227MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 15:24:11 compute-0 nova_compute[250018]: 2026-01-20 15:24:11.686 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:24:11 compute-0 nova_compute[250018]: 2026-01-20 15:24:11.686 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:24:11 compute-0 nova_compute[250018]: 2026-01-20 15:24:11.749 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 15:24:11 compute-0 nova_compute[250018]: 2026-01-20 15:24:11.750 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 15:24:11 compute-0 nova_compute[250018]: 2026-01-20 15:24:11.775 250022 DEBUG nova.scheduler.client.report [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Refreshing inventories for resource provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 20 15:24:11 compute-0 nova_compute[250018]: 2026-01-20 15:24:11.804 250022 DEBUG nova.scheduler.client.report [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Updating ProviderTree inventory for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 20 15:24:11 compute-0 nova_compute[250018]: 2026-01-20 15:24:11.805 250022 DEBUG nova.compute.provider_tree [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Updating inventory in ProviderTree for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 20 15:24:11 compute-0 nova_compute[250018]: 2026-01-20 15:24:11.818 250022 DEBUG nova.scheduler.client.report [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Refreshing aggregate associations for resource provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 20 15:24:11 compute-0 ceph-mon[74360]: pgmap v3060: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 20 KiB/s rd, 12 KiB/s wr, 28 op/s
Jan 20 15:24:11 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/2015578960' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:24:11 compute-0 nova_compute[250018]: 2026-01-20 15:24:11.841 250022 DEBUG nova.scheduler.client.report [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Refreshing trait associations for resource provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed, traits: COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_TRUSTED_CERTS,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NODE,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_RESCUE_BFV,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SSE2,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VOLUME_EXTEND,COMPUTE_ACCELERATORS,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_SATA,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE41,HW_CPU_X86_MMX,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_IDE,COMPUTE_DEVICE_TAGGING _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 20 15:24:11 compute-0 nova_compute[250018]: 2026-01-20 15:24:11.857 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:24:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 15:24:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:24:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 20 15:24:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:24:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 20 15:24:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:24:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021614147124511445 of space, bias 1.0, pg target 0.6484244137353433 quantized to 32 (current 32)
Jan 20 15:24:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:24:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 20 15:24:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:24:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 20 15:24:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:24:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 20 15:24:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:24:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 15:24:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:24:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 20 15:24:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:24:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 20 15:24:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:24:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 15:24:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:24:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 20 15:24:12 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:24:12 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:24:12 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:24:12.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:24:12 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:24:12 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/171582387' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:24:12 compute-0 nova_compute[250018]: 2026-01-20 15:24:12.332 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:24:12 compute-0 nova_compute[250018]: 2026-01-20 15:24:12.339 250022 DEBUG nova.compute.provider_tree [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:24:12 compute-0 nova_compute[250018]: 2026-01-20 15:24:12.364 250022 DEBUG nova.scheduler.client.report [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:24:12 compute-0 nova_compute[250018]: 2026-01-20 15:24:12.368 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 15:24:12 compute-0 nova_compute[250018]: 2026-01-20 15:24:12.368 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.682s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:24:12 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3061: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 9.2 KiB/s rd, 2.9 KiB/s wr, 14 op/s
Jan 20 15:24:12 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:24:12 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:24:12 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:24:12 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:24:12.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:24:12 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/171582387' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:24:13 compute-0 nova_compute[250018]: 2026-01-20 15:24:13.369 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:24:13 compute-0 nova_compute[250018]: 2026-01-20 15:24:13.370 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:24:13 compute-0 nova_compute[250018]: 2026-01-20 15:24:13.370 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:24:13 compute-0 nova_compute[250018]: 2026-01-20 15:24:13.371 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 15:24:13 compute-0 ceph-mon[74360]: pgmap v3061: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 9.2 KiB/s rd, 2.9 KiB/s wr, 14 op/s
Jan 20 15:24:13 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/2614624376' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:24:13 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/2614624376' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:24:14 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:24:14 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:24:14 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:24:14.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:24:14 compute-0 nova_compute[250018]: 2026-01-20 15:24:14.446 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:24:14 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3062: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 938 B/s rd, 341 B/s wr, 2 op/s
Jan 20 15:24:14 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:24:14 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:24:14 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:24:14.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:24:15 compute-0 nova_compute[250018]: 2026-01-20 15:24:15.829 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:24:15 compute-0 ceph-mon[74360]: pgmap v3062: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 938 B/s rd, 341 B/s wr, 2 op/s
Jan 20 15:24:16 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:24:16 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:24:16 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:24:16.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:24:16 compute-0 podman[373371]: 2026-01-20 15:24:16.467336421 +0000 UTC m=+0.059038351 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Jan 20 15:24:16 compute-0 podman[373370]: 2026-01-20 15:24:16.539462947 +0000 UTC m=+0.132652047 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:24:16 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3063: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:24:16 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:24:16 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:24:16 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:24:16.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:24:17 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:24:17 compute-0 ceph-mon[74360]: pgmap v3063: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:24:18 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:24:18 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:24:18 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:24:18.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:24:18 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3064: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:24:18 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:24:18 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:24:18 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:24:18.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:24:19 compute-0 nova_compute[250018]: 2026-01-20 15:24:19.448 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:24:20 compute-0 ceph-mon[74360]: pgmap v3064: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:24:20 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:24:20 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:24:20 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:24:20.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:24:20 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3065: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:24:20 compute-0 nova_compute[250018]: 2026-01-20 15:24:20.830 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:24:20 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:24:20 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:24:20 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:24:20.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:24:21 compute-0 nova_compute[250018]: 2026-01-20 15:24:21.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:24:22 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:24:22 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:24:22 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:24:22.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:24:22 compute-0 ceph-mon[74360]: pgmap v3065: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:24:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:24:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:24:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:24:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:24:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:24:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:24:22 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3066: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:24:22 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:24:22 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:24:22 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:24:22 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:24:22.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:24:23 compute-0 nova_compute[250018]: 2026-01-20 15:24:23.416 250022 DEBUG oslo_concurrency.lockutils [None req-dc762a23-ee5f-4f6b-965b-f46d22af86a3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Acquiring lock "d92c0bb3-a866-40ac-a664-d100a6dbbc6b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:24:23 compute-0 nova_compute[250018]: 2026-01-20 15:24:23.417 250022 DEBUG oslo_concurrency.lockutils [None req-dc762a23-ee5f-4f6b-965b-f46d22af86a3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lock "d92c0bb3-a866-40ac-a664-d100a6dbbc6b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:24:23 compute-0 nova_compute[250018]: 2026-01-20 15:24:23.430 250022 DEBUG nova.compute.manager [None req-dc762a23-ee5f-4f6b-965b-f46d22af86a3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: d92c0bb3-a866-40ac-a664-d100a6dbbc6b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 15:24:23 compute-0 nova_compute[250018]: 2026-01-20 15:24:23.500 250022 DEBUG oslo_concurrency.lockutils [None req-dc762a23-ee5f-4f6b-965b-f46d22af86a3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:24:23 compute-0 nova_compute[250018]: 2026-01-20 15:24:23.501 250022 DEBUG oslo_concurrency.lockutils [None req-dc762a23-ee5f-4f6b-965b-f46d22af86a3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:24:23 compute-0 nova_compute[250018]: 2026-01-20 15:24:23.512 250022 DEBUG nova.virt.hardware [None req-dc762a23-ee5f-4f6b-965b-f46d22af86a3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 15:24:23 compute-0 nova_compute[250018]: 2026-01-20 15:24:23.512 250022 INFO nova.compute.claims [None req-dc762a23-ee5f-4f6b-965b-f46d22af86a3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: d92c0bb3-a866-40ac-a664-d100a6dbbc6b] Claim successful on node compute-0.ctlplane.example.com
Jan 20 15:24:23 compute-0 nova_compute[250018]: 2026-01-20 15:24:23.613 250022 DEBUG oslo_concurrency.processutils [None req-dc762a23-ee5f-4f6b-965b-f46d22af86a3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:24:24 compute-0 nova_compute[250018]: 2026-01-20 15:24:24.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:24:24 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:24:24 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1399840892' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:24:24 compute-0 nova_compute[250018]: 2026-01-20 15:24:24.074 250022 DEBUG oslo_concurrency.processutils [None req-dc762a23-ee5f-4f6b-965b-f46d22af86a3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:24:24 compute-0 nova_compute[250018]: 2026-01-20 15:24:24.082 250022 DEBUG nova.compute.provider_tree [None req-dc762a23-ee5f-4f6b-965b-f46d22af86a3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:24:24 compute-0 nova_compute[250018]: 2026-01-20 15:24:24.099 250022 DEBUG nova.scheduler.client.report [None req-dc762a23-ee5f-4f6b-965b-f46d22af86a3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:24:24 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:24:24 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:24:24 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:24:24.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:24:24 compute-0 nova_compute[250018]: 2026-01-20 15:24:24.131 250022 DEBUG oslo_concurrency.lockutils [None req-dc762a23-ee5f-4f6b-965b-f46d22af86a3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.630s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:24:24 compute-0 nova_compute[250018]: 2026-01-20 15:24:24.132 250022 DEBUG nova.compute.manager [None req-dc762a23-ee5f-4f6b-965b-f46d22af86a3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: d92c0bb3-a866-40ac-a664-d100a6dbbc6b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 15:24:24 compute-0 nova_compute[250018]: 2026-01-20 15:24:24.207 250022 DEBUG nova.compute.manager [None req-dc762a23-ee5f-4f6b-965b-f46d22af86a3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: d92c0bb3-a866-40ac-a664-d100a6dbbc6b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 20 15:24:24 compute-0 nova_compute[250018]: 2026-01-20 15:24:24.207 250022 DEBUG nova.network.neutron [None req-dc762a23-ee5f-4f6b-965b-f46d22af86a3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: d92c0bb3-a866-40ac-a664-d100a6dbbc6b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 15:24:24 compute-0 nova_compute[250018]: 2026-01-20 15:24:24.230 250022 INFO nova.virt.libvirt.driver [None req-dc762a23-ee5f-4f6b-965b-f46d22af86a3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: d92c0bb3-a866-40ac-a664-d100a6dbbc6b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 15:24:24 compute-0 nova_compute[250018]: 2026-01-20 15:24:24.260 250022 DEBUG nova.compute.manager [None req-dc762a23-ee5f-4f6b-965b-f46d22af86a3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: d92c0bb3-a866-40ac-a664-d100a6dbbc6b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 15:24:24 compute-0 nova_compute[250018]: 2026-01-20 15:24:24.361 250022 DEBUG nova.compute.manager [None req-dc762a23-ee5f-4f6b-965b-f46d22af86a3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: d92c0bb3-a866-40ac-a664-d100a6dbbc6b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 15:24:24 compute-0 nova_compute[250018]: 2026-01-20 15:24:24.362 250022 DEBUG nova.virt.libvirt.driver [None req-dc762a23-ee5f-4f6b-965b-f46d22af86a3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: d92c0bb3-a866-40ac-a664-d100a6dbbc6b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 15:24:24 compute-0 nova_compute[250018]: 2026-01-20 15:24:24.362 250022 INFO nova.virt.libvirt.driver [None req-dc762a23-ee5f-4f6b-965b-f46d22af86a3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: d92c0bb3-a866-40ac-a664-d100a6dbbc6b] Creating image(s)
Jan 20 15:24:24 compute-0 nova_compute[250018]: 2026-01-20 15:24:24.390 250022 DEBUG nova.storage.rbd_utils [None req-dc762a23-ee5f-4f6b-965b-f46d22af86a3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] rbd image d92c0bb3-a866-40ac-a664-d100a6dbbc6b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:24:24 compute-0 nova_compute[250018]: 2026-01-20 15:24:24.416 250022 DEBUG nova.storage.rbd_utils [None req-dc762a23-ee5f-4f6b-965b-f46d22af86a3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] rbd image d92c0bb3-a866-40ac-a664-d100a6dbbc6b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:24:24 compute-0 nova_compute[250018]: 2026-01-20 15:24:24.440 250022 DEBUG nova.storage.rbd_utils [None req-dc762a23-ee5f-4f6b-965b-f46d22af86a3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] rbd image d92c0bb3-a866-40ac-a664-d100a6dbbc6b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:24:24 compute-0 nova_compute[250018]: 2026-01-20 15:24:24.444 250022 DEBUG oslo_concurrency.processutils [None req-dc762a23-ee5f-4f6b-965b-f46d22af86a3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:24:24 compute-0 nova_compute[250018]: 2026-01-20 15:24:24.470 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:24:24 compute-0 nova_compute[250018]: 2026-01-20 15:24:24.508 250022 DEBUG oslo_concurrency.processutils [None req-dc762a23-ee5f-4f6b-965b-f46d22af86a3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:24:24 compute-0 nova_compute[250018]: 2026-01-20 15:24:24.509 250022 DEBUG oslo_concurrency.lockutils [None req-dc762a23-ee5f-4f6b-965b-f46d22af86a3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Acquiring lock "82d5c1918fd7c974214c7a48c1793a7a82560462" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:24:24 compute-0 nova_compute[250018]: 2026-01-20 15:24:24.509 250022 DEBUG oslo_concurrency.lockutils [None req-dc762a23-ee5f-4f6b-965b-f46d22af86a3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:24:24 compute-0 nova_compute[250018]: 2026-01-20 15:24:24.510 250022 DEBUG oslo_concurrency.lockutils [None req-dc762a23-ee5f-4f6b-965b-f46d22af86a3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:24:24 compute-0 ceph-mon[74360]: pgmap v3066: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:24:24 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/1399840892' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:24:24 compute-0 nova_compute[250018]: 2026-01-20 15:24:24.537 250022 DEBUG nova.storage.rbd_utils [None req-dc762a23-ee5f-4f6b-965b-f46d22af86a3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] rbd image d92c0bb3-a866-40ac-a664-d100a6dbbc6b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:24:24 compute-0 nova_compute[250018]: 2026-01-20 15:24:24.540 250022 DEBUG oslo_concurrency.processutils [None req-dc762a23-ee5f-4f6b-965b-f46d22af86a3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 d92c0bb3-a866-40ac-a664-d100a6dbbc6b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:24:24 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3067: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:24:24 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:24:24 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:24:24 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:24:24.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:24:25 compute-0 nova_compute[250018]: 2026-01-20 15:24:25.058 250022 DEBUG oslo_concurrency.processutils [None req-dc762a23-ee5f-4f6b-965b-f46d22af86a3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 d92c0bb3-a866-40ac-a664-d100a6dbbc6b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:24:25 compute-0 nova_compute[250018]: 2026-01-20 15:24:25.123 250022 DEBUG nova.storage.rbd_utils [None req-dc762a23-ee5f-4f6b-965b-f46d22af86a3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] resizing rbd image d92c0bb3-a866-40ac-a664-d100a6dbbc6b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 20 15:24:25 compute-0 nova_compute[250018]: 2026-01-20 15:24:25.159 250022 DEBUG nova.policy [None req-dc762a23-ee5f-4f6b-965b-f46d22af86a3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '442a7a5cb8ea426a82be9762b262d171', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1ed5feeeafe7448a8efb47ab975b0ead', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 20 15:24:25 compute-0 sudo[373590]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:24:25 compute-0 sudo[373590]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:24:25 compute-0 sudo[373590]: pam_unix(sudo:session): session closed for user root
Jan 20 15:24:25 compute-0 nova_compute[250018]: 2026-01-20 15:24:25.226 250022 DEBUG nova.objects.instance [None req-dc762a23-ee5f-4f6b-965b-f46d22af86a3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lazy-loading 'migration_context' on Instance uuid d92c0bb3-a866-40ac-a664-d100a6dbbc6b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:24:25 compute-0 nova_compute[250018]: 2026-01-20 15:24:25.240 250022 DEBUG nova.virt.libvirt.driver [None req-dc762a23-ee5f-4f6b-965b-f46d22af86a3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: d92c0bb3-a866-40ac-a664-d100a6dbbc6b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 15:24:25 compute-0 nova_compute[250018]: 2026-01-20 15:24:25.240 250022 DEBUG nova.virt.libvirt.driver [None req-dc762a23-ee5f-4f6b-965b-f46d22af86a3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: d92c0bb3-a866-40ac-a664-d100a6dbbc6b] Ensure instance console log exists: /var/lib/nova/instances/d92c0bb3-a866-40ac-a664-d100a6dbbc6b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 15:24:25 compute-0 nova_compute[250018]: 2026-01-20 15:24:25.241 250022 DEBUG oslo_concurrency.lockutils [None req-dc762a23-ee5f-4f6b-965b-f46d22af86a3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:24:25 compute-0 nova_compute[250018]: 2026-01-20 15:24:25.241 250022 DEBUG oslo_concurrency.lockutils [None req-dc762a23-ee5f-4f6b-965b-f46d22af86a3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:24:25 compute-0 nova_compute[250018]: 2026-01-20 15:24:25.241 250022 DEBUG oslo_concurrency.lockutils [None req-dc762a23-ee5f-4f6b-965b-f46d22af86a3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:24:25 compute-0 sudo[373632]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:24:25 compute-0 sudo[373632]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:24:25 compute-0 sudo[373632]: pam_unix(sudo:session): session closed for user root
Jan 20 15:24:25 compute-0 nova_compute[250018]: 2026-01-20 15:24:25.832 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:24:26 compute-0 nova_compute[250018]: 2026-01-20 15:24:26.063 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:24:26 compute-0 nova_compute[250018]: 2026-01-20 15:24:26.064 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 20 15:24:26 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:24:26 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:24:26 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:24:26.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:24:26 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3068: 321 pgs: 321 active+clean; 161 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 7.0 KiB/s rd, 1.3 MiB/s wr, 14 op/s
Jan 20 15:24:26 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:24:26 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:24:26 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:24:26.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:24:27 compute-0 nova_compute[250018]: 2026-01-20 15:24:27.080 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:24:27 compute-0 nova_compute[250018]: 2026-01-20 15:24:27.080 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 15:24:27 compute-0 nova_compute[250018]: 2026-01-20 15:24:27.081 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 15:24:27 compute-0 nova_compute[250018]: 2026-01-20 15:24:27.097 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] [instance: d92c0bb3-a866-40ac-a664-d100a6dbbc6b] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 20 15:24:27 compute-0 nova_compute[250018]: 2026-01-20 15:24:27.097 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 20 15:24:27 compute-0 ceph-mon[74360]: pgmap v3067: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:24:27 compute-0 nova_compute[250018]: 2026-01-20 15:24:27.429 250022 DEBUG nova.network.neutron [None req-dc762a23-ee5f-4f6b-965b-f46d22af86a3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: d92c0bb3-a866-40ac-a664-d100a6dbbc6b] Successfully created port: 3a19935a-bf2c-4d28-b687-e8294d1773dc _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 20 15:24:27 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:24:28 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:24:28 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:24:28 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:24:28.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:24:28 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3069: 321 pgs: 321 active+clean; 161 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 7.0 KiB/s rd, 1.3 MiB/s wr, 14 op/s
Jan 20 15:24:28 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:24:28 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:24:28 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:24:28.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:24:29 compute-0 ceph-mon[74360]: pgmap v3068: 321 pgs: 321 active+clean; 161 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 7.0 KiB/s rd, 1.3 MiB/s wr, 14 op/s
Jan 20 15:24:29 compute-0 nova_compute[250018]: 2026-01-20 15:24:29.312 250022 DEBUG nova.network.neutron [None req-dc762a23-ee5f-4f6b-965b-f46d22af86a3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: d92c0bb3-a866-40ac-a664-d100a6dbbc6b] Successfully updated port: 3a19935a-bf2c-4d28-b687-e8294d1773dc _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 20 15:24:29 compute-0 nova_compute[250018]: 2026-01-20 15:24:29.333 250022 DEBUG oslo_concurrency.lockutils [None req-dc762a23-ee5f-4f6b-965b-f46d22af86a3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Acquiring lock "refresh_cache-d92c0bb3-a866-40ac-a664-d100a6dbbc6b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:24:29 compute-0 nova_compute[250018]: 2026-01-20 15:24:29.333 250022 DEBUG oslo_concurrency.lockutils [None req-dc762a23-ee5f-4f6b-965b-f46d22af86a3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Acquired lock "refresh_cache-d92c0bb3-a866-40ac-a664-d100a6dbbc6b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:24:29 compute-0 nova_compute[250018]: 2026-01-20 15:24:29.333 250022 DEBUG nova.network.neutron [None req-dc762a23-ee5f-4f6b-965b-f46d22af86a3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: d92c0bb3-a866-40ac-a664-d100a6dbbc6b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 15:24:29 compute-0 nova_compute[250018]: 2026-01-20 15:24:29.402 250022 DEBUG nova.compute.manager [req-920a8768-5d48-4fa3-9e05-56195eecddcf req-fe40ec0c-3175-43d9-b486-7abb9332551d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d92c0bb3-a866-40ac-a664-d100a6dbbc6b] Received event network-changed-3a19935a-bf2c-4d28-b687-e8294d1773dc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:24:29 compute-0 nova_compute[250018]: 2026-01-20 15:24:29.402 250022 DEBUG nova.compute.manager [req-920a8768-5d48-4fa3-9e05-56195eecddcf req-fe40ec0c-3175-43d9-b486-7abb9332551d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d92c0bb3-a866-40ac-a664-d100a6dbbc6b] Refreshing instance network info cache due to event network-changed-3a19935a-bf2c-4d28-b687-e8294d1773dc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 15:24:29 compute-0 nova_compute[250018]: 2026-01-20 15:24:29.402 250022 DEBUG oslo_concurrency.lockutils [req-920a8768-5d48-4fa3-9e05-56195eecddcf req-fe40ec0c-3175-43d9-b486-7abb9332551d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-d92c0bb3-a866-40ac-a664-d100a6dbbc6b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:24:29 compute-0 nova_compute[250018]: 2026-01-20 15:24:29.472 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:24:29 compute-0 nova_compute[250018]: 2026-01-20 15:24:29.477 250022 DEBUG nova.network.neutron [None req-dc762a23-ee5f-4f6b-965b-f46d22af86a3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: d92c0bb3-a866-40ac-a664-d100a6dbbc6b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 15:24:30 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:24:30 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:24:30 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:24:30.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:24:30 compute-0 ceph-mon[74360]: pgmap v3069: 321 pgs: 321 active+clean; 161 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 7.0 KiB/s rd, 1.3 MiB/s wr, 14 op/s
Jan 20 15:24:30 compute-0 nova_compute[250018]: 2026-01-20 15:24:30.395 250022 DEBUG nova.network.neutron [None req-dc762a23-ee5f-4f6b-965b-f46d22af86a3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: d92c0bb3-a866-40ac-a664-d100a6dbbc6b] Updating instance_info_cache with network_info: [{"id": "3a19935a-bf2c-4d28-b687-e8294d1773dc", "address": "fa:16:3e:04:8d:2f", "network": {"id": "9aa12a4e-d8a3-46c3-928f-fca08b856d45", "bridge": "br-int", "label": "tempest-network-smoke--1838033144", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ed5feeeafe7448a8efb47ab975b0ead", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3a19935a-bf", "ovs_interfaceid": "3a19935a-bf2c-4d28-b687-e8294d1773dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:24:30 compute-0 nova_compute[250018]: 2026-01-20 15:24:30.412 250022 DEBUG oslo_concurrency.lockutils [None req-dc762a23-ee5f-4f6b-965b-f46d22af86a3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Releasing lock "refresh_cache-d92c0bb3-a866-40ac-a664-d100a6dbbc6b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:24:30 compute-0 nova_compute[250018]: 2026-01-20 15:24:30.412 250022 DEBUG nova.compute.manager [None req-dc762a23-ee5f-4f6b-965b-f46d22af86a3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: d92c0bb3-a866-40ac-a664-d100a6dbbc6b] Instance network_info: |[{"id": "3a19935a-bf2c-4d28-b687-e8294d1773dc", "address": "fa:16:3e:04:8d:2f", "network": {"id": "9aa12a4e-d8a3-46c3-928f-fca08b856d45", "bridge": "br-int", "label": "tempest-network-smoke--1838033144", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ed5feeeafe7448a8efb47ab975b0ead", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3a19935a-bf", "ovs_interfaceid": "3a19935a-bf2c-4d28-b687-e8294d1773dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 20 15:24:30 compute-0 nova_compute[250018]: 2026-01-20 15:24:30.413 250022 DEBUG oslo_concurrency.lockutils [req-920a8768-5d48-4fa3-9e05-56195eecddcf req-fe40ec0c-3175-43d9-b486-7abb9332551d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-d92c0bb3-a866-40ac-a664-d100a6dbbc6b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:24:30 compute-0 nova_compute[250018]: 2026-01-20 15:24:30.413 250022 DEBUG nova.network.neutron [req-920a8768-5d48-4fa3-9e05-56195eecddcf req-fe40ec0c-3175-43d9-b486-7abb9332551d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d92c0bb3-a866-40ac-a664-d100a6dbbc6b] Refreshing network info cache for port 3a19935a-bf2c-4d28-b687-e8294d1773dc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 15:24:30 compute-0 nova_compute[250018]: 2026-01-20 15:24:30.416 250022 DEBUG nova.virt.libvirt.driver [None req-dc762a23-ee5f-4f6b-965b-f46d22af86a3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: d92c0bb3-a866-40ac-a664-d100a6dbbc6b] Start _get_guest_xml network_info=[{"id": "3a19935a-bf2c-4d28-b687-e8294d1773dc", "address": "fa:16:3e:04:8d:2f", "network": {"id": "9aa12a4e-d8a3-46c3-928f-fca08b856d45", "bridge": "br-int", "label": "tempest-network-smoke--1838033144", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ed5feeeafe7448a8efb47ab975b0ead", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3a19935a-bf", "ovs_interfaceid": "3a19935a-bf2c-4d28-b687-e8294d1773dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encrypted': False, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'guest_format': None, 'size': 0, 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 15:24:30 compute-0 nova_compute[250018]: 2026-01-20 15:24:30.419 250022 WARNING nova.virt.libvirt.driver [None req-dc762a23-ee5f-4f6b-965b-f46d22af86a3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:24:30 compute-0 nova_compute[250018]: 2026-01-20 15:24:30.423 250022 DEBUG nova.virt.libvirt.host [None req-dc762a23-ee5f-4f6b-965b-f46d22af86a3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 15:24:30 compute-0 nova_compute[250018]: 2026-01-20 15:24:30.424 250022 DEBUG nova.virt.libvirt.host [None req-dc762a23-ee5f-4f6b-965b-f46d22af86a3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 15:24:30 compute-0 nova_compute[250018]: 2026-01-20 15:24:30.431 250022 DEBUG nova.virt.libvirt.host [None req-dc762a23-ee5f-4f6b-965b-f46d22af86a3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 15:24:30 compute-0 nova_compute[250018]: 2026-01-20 15:24:30.431 250022 DEBUG nova.virt.libvirt.host [None req-dc762a23-ee5f-4f6b-965b-f46d22af86a3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 15:24:30 compute-0 nova_compute[250018]: 2026-01-20 15:24:30.432 250022 DEBUG nova.virt.libvirt.driver [None req-dc762a23-ee5f-4f6b-965b-f46d22af86a3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 15:24:30 compute-0 nova_compute[250018]: 2026-01-20 15:24:30.432 250022 DEBUG nova.virt.hardware [None req-dc762a23-ee5f-4f6b-965b-f46d22af86a3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 15:24:30 compute-0 nova_compute[250018]: 2026-01-20 15:24:30.433 250022 DEBUG nova.virt.hardware [None req-dc762a23-ee5f-4f6b-965b-f46d22af86a3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 15:24:30 compute-0 nova_compute[250018]: 2026-01-20 15:24:30.433 250022 DEBUG nova.virt.hardware [None req-dc762a23-ee5f-4f6b-965b-f46d22af86a3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 15:24:30 compute-0 nova_compute[250018]: 2026-01-20 15:24:30.433 250022 DEBUG nova.virt.hardware [None req-dc762a23-ee5f-4f6b-965b-f46d22af86a3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 15:24:30 compute-0 nova_compute[250018]: 2026-01-20 15:24:30.433 250022 DEBUG nova.virt.hardware [None req-dc762a23-ee5f-4f6b-965b-f46d22af86a3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 15:24:30 compute-0 nova_compute[250018]: 2026-01-20 15:24:30.434 250022 DEBUG nova.virt.hardware [None req-dc762a23-ee5f-4f6b-965b-f46d22af86a3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 15:24:30 compute-0 nova_compute[250018]: 2026-01-20 15:24:30.434 250022 DEBUG nova.virt.hardware [None req-dc762a23-ee5f-4f6b-965b-f46d22af86a3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 15:24:30 compute-0 nova_compute[250018]: 2026-01-20 15:24:30.434 250022 DEBUG nova.virt.hardware [None req-dc762a23-ee5f-4f6b-965b-f46d22af86a3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 15:24:30 compute-0 nova_compute[250018]: 2026-01-20 15:24:30.434 250022 DEBUG nova.virt.hardware [None req-dc762a23-ee5f-4f6b-965b-f46d22af86a3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 15:24:30 compute-0 nova_compute[250018]: 2026-01-20 15:24:30.435 250022 DEBUG nova.virt.hardware [None req-dc762a23-ee5f-4f6b-965b-f46d22af86a3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 15:24:30 compute-0 nova_compute[250018]: 2026-01-20 15:24:30.435 250022 DEBUG nova.virt.hardware [None req-dc762a23-ee5f-4f6b-965b-f46d22af86a3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 15:24:30 compute-0 nova_compute[250018]: 2026-01-20 15:24:30.437 250022 DEBUG oslo_concurrency.processutils [None req-dc762a23-ee5f-4f6b-965b-f46d22af86a3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:24:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:24:30.794 160071 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:24:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:24:30.794 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:24:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:24:30.795 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:24:30 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3070: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:24:30 compute-0 nova_compute[250018]: 2026-01-20 15:24:30.834 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:24:30 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 15:24:30 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/758575018' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:24:30 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:24:30 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:24:30 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:24:30.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:24:30 compute-0 nova_compute[250018]: 2026-01-20 15:24:30.867 250022 DEBUG oslo_concurrency.processutils [None req-dc762a23-ee5f-4f6b-965b-f46d22af86a3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:24:30 compute-0 nova_compute[250018]: 2026-01-20 15:24:30.899 250022 DEBUG nova.storage.rbd_utils [None req-dc762a23-ee5f-4f6b-965b-f46d22af86a3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] rbd image d92c0bb3-a866-40ac-a664-d100a6dbbc6b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:24:30 compute-0 nova_compute[250018]: 2026-01-20 15:24:30.906 250022 DEBUG oslo_concurrency.processutils [None req-dc762a23-ee5f-4f6b-965b-f46d22af86a3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:24:31 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 15:24:31 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/592825777' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:24:31 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/758575018' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:24:31 compute-0 nova_compute[250018]: 2026-01-20 15:24:31.328 250022 DEBUG oslo_concurrency.processutils [None req-dc762a23-ee5f-4f6b-965b-f46d22af86a3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:24:31 compute-0 nova_compute[250018]: 2026-01-20 15:24:31.331 250022 DEBUG nova.virt.libvirt.vif [None req-dc762a23-ee5f-4f6b-965b-f46d22af86a3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T15:24:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1934224565',display_name='tempest-TestNetworkAdvancedServerOps-server-1934224565',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1934224565',id=201,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBC7eRRiZX7Z0gwhADeTQp/GmdK+Gan6DSxYDy+ge9zWSe9WjzThIejOjnXbubm42+2V6mvuyTko1B/LIcp7s5Ksj2VFdJX32syZeRGa674Ht1jUDA1vmPNUafeYfNfojGA==',key_name='tempest-TestNetworkAdvancedServerOps-1914065002',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1ed5feeeafe7448a8efb47ab975b0ead',ramdisk_id='',reservation_id='r-i99bna7o',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-175282664',owner_user_name='tempest-TestNetworkAdvancedServerOps-175282664-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T15:24:24Z,user_data=None,user_id='442a7a5cb8ea426a82be9762b262d171',uuid=d92c0bb3-a866-40ac-a664-d100a6dbbc6b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3a19935a-bf2c-4d28-b687-e8294d1773dc", "address": "fa:16:3e:04:8d:2f", "network": {"id": "9aa12a4e-d8a3-46c3-928f-fca08b856d45", "bridge": "br-int", "label": "tempest-network-smoke--1838033144", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "1ed5feeeafe7448a8efb47ab975b0ead", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3a19935a-bf", "ovs_interfaceid": "3a19935a-bf2c-4d28-b687-e8294d1773dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 15:24:31 compute-0 nova_compute[250018]: 2026-01-20 15:24:31.331 250022 DEBUG nova.network.os_vif_util [None req-dc762a23-ee5f-4f6b-965b-f46d22af86a3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Converting VIF {"id": "3a19935a-bf2c-4d28-b687-e8294d1773dc", "address": "fa:16:3e:04:8d:2f", "network": {"id": "9aa12a4e-d8a3-46c3-928f-fca08b856d45", "bridge": "br-int", "label": "tempest-network-smoke--1838033144", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ed5feeeafe7448a8efb47ab975b0ead", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3a19935a-bf", "ovs_interfaceid": "3a19935a-bf2c-4d28-b687-e8294d1773dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:24:31 compute-0 nova_compute[250018]: 2026-01-20 15:24:31.332 250022 DEBUG nova.network.os_vif_util [None req-dc762a23-ee5f-4f6b-965b-f46d22af86a3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:04:8d:2f,bridge_name='br-int',has_traffic_filtering=True,id=3a19935a-bf2c-4d28-b687-e8294d1773dc,network=Network(9aa12a4e-d8a3-46c3-928f-fca08b856d45),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3a19935a-bf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:24:31 compute-0 nova_compute[250018]: 2026-01-20 15:24:31.334 250022 DEBUG nova.objects.instance [None req-dc762a23-ee5f-4f6b-965b-f46d22af86a3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lazy-loading 'pci_devices' on Instance uuid d92c0bb3-a866-40ac-a664-d100a6dbbc6b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:24:31 compute-0 nova_compute[250018]: 2026-01-20 15:24:31.349 250022 DEBUG nova.virt.libvirt.driver [None req-dc762a23-ee5f-4f6b-965b-f46d22af86a3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: d92c0bb3-a866-40ac-a664-d100a6dbbc6b] End _get_guest_xml xml=<domain type="kvm">
Jan 20 15:24:31 compute-0 nova_compute[250018]:   <uuid>d92c0bb3-a866-40ac-a664-d100a6dbbc6b</uuid>
Jan 20 15:24:31 compute-0 nova_compute[250018]:   <name>instance-000000c9</name>
Jan 20 15:24:31 compute-0 nova_compute[250018]:   <memory>131072</memory>
Jan 20 15:24:31 compute-0 nova_compute[250018]:   <vcpu>1</vcpu>
Jan 20 15:24:31 compute-0 nova_compute[250018]:   <metadata>
Jan 20 15:24:31 compute-0 nova_compute[250018]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 15:24:31 compute-0 nova_compute[250018]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 15:24:31 compute-0 nova_compute[250018]:       <nova:name>tempest-TestNetworkAdvancedServerOps-server-1934224565</nova:name>
Jan 20 15:24:31 compute-0 nova_compute[250018]:       <nova:creationTime>2026-01-20 15:24:30</nova:creationTime>
Jan 20 15:24:31 compute-0 nova_compute[250018]:       <nova:flavor name="m1.nano">
Jan 20 15:24:31 compute-0 nova_compute[250018]:         <nova:memory>128</nova:memory>
Jan 20 15:24:31 compute-0 nova_compute[250018]:         <nova:disk>1</nova:disk>
Jan 20 15:24:31 compute-0 nova_compute[250018]:         <nova:swap>0</nova:swap>
Jan 20 15:24:31 compute-0 nova_compute[250018]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 15:24:31 compute-0 nova_compute[250018]:         <nova:vcpus>1</nova:vcpus>
Jan 20 15:24:31 compute-0 nova_compute[250018]:       </nova:flavor>
Jan 20 15:24:31 compute-0 nova_compute[250018]:       <nova:owner>
Jan 20 15:24:31 compute-0 nova_compute[250018]:         <nova:user uuid="442a7a5cb8ea426a82be9762b262d171">tempest-TestNetworkAdvancedServerOps-175282664-project-member</nova:user>
Jan 20 15:24:31 compute-0 nova_compute[250018]:         <nova:project uuid="1ed5feeeafe7448a8efb47ab975b0ead">tempest-TestNetworkAdvancedServerOps-175282664</nova:project>
Jan 20 15:24:31 compute-0 nova_compute[250018]:       </nova:owner>
Jan 20 15:24:31 compute-0 nova_compute[250018]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 15:24:31 compute-0 nova_compute[250018]:       <nova:ports>
Jan 20 15:24:31 compute-0 nova_compute[250018]:         <nova:port uuid="3a19935a-bf2c-4d28-b687-e8294d1773dc">
Jan 20 15:24:31 compute-0 nova_compute[250018]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Jan 20 15:24:31 compute-0 nova_compute[250018]:         </nova:port>
Jan 20 15:24:31 compute-0 nova_compute[250018]:       </nova:ports>
Jan 20 15:24:31 compute-0 nova_compute[250018]:     </nova:instance>
Jan 20 15:24:31 compute-0 nova_compute[250018]:   </metadata>
Jan 20 15:24:31 compute-0 nova_compute[250018]:   <sysinfo type="smbios">
Jan 20 15:24:31 compute-0 nova_compute[250018]:     <system>
Jan 20 15:24:31 compute-0 nova_compute[250018]:       <entry name="manufacturer">RDO</entry>
Jan 20 15:24:31 compute-0 nova_compute[250018]:       <entry name="product">OpenStack Compute</entry>
Jan 20 15:24:31 compute-0 nova_compute[250018]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 15:24:31 compute-0 nova_compute[250018]:       <entry name="serial">d92c0bb3-a866-40ac-a664-d100a6dbbc6b</entry>
Jan 20 15:24:31 compute-0 nova_compute[250018]:       <entry name="uuid">d92c0bb3-a866-40ac-a664-d100a6dbbc6b</entry>
Jan 20 15:24:31 compute-0 nova_compute[250018]:       <entry name="family">Virtual Machine</entry>
Jan 20 15:24:31 compute-0 nova_compute[250018]:     </system>
Jan 20 15:24:31 compute-0 nova_compute[250018]:   </sysinfo>
Jan 20 15:24:31 compute-0 nova_compute[250018]:   <os>
Jan 20 15:24:31 compute-0 nova_compute[250018]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 15:24:31 compute-0 nova_compute[250018]:     <boot dev="hd"/>
Jan 20 15:24:31 compute-0 nova_compute[250018]:     <smbios mode="sysinfo"/>
Jan 20 15:24:31 compute-0 nova_compute[250018]:   </os>
Jan 20 15:24:31 compute-0 nova_compute[250018]:   <features>
Jan 20 15:24:31 compute-0 nova_compute[250018]:     <acpi/>
Jan 20 15:24:31 compute-0 nova_compute[250018]:     <apic/>
Jan 20 15:24:31 compute-0 nova_compute[250018]:     <vmcoreinfo/>
Jan 20 15:24:31 compute-0 nova_compute[250018]:   </features>
Jan 20 15:24:31 compute-0 nova_compute[250018]:   <clock offset="utc">
Jan 20 15:24:31 compute-0 nova_compute[250018]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 15:24:31 compute-0 nova_compute[250018]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 15:24:31 compute-0 nova_compute[250018]:     <timer name="hpet" present="no"/>
Jan 20 15:24:31 compute-0 nova_compute[250018]:   </clock>
Jan 20 15:24:31 compute-0 nova_compute[250018]:   <cpu mode="custom" match="exact">
Jan 20 15:24:31 compute-0 nova_compute[250018]:     <model>Nehalem</model>
Jan 20 15:24:31 compute-0 nova_compute[250018]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 15:24:31 compute-0 nova_compute[250018]:   </cpu>
Jan 20 15:24:31 compute-0 nova_compute[250018]:   <devices>
Jan 20 15:24:31 compute-0 nova_compute[250018]:     <disk type="network" device="disk">
Jan 20 15:24:31 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 15:24:31 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/d92c0bb3-a866-40ac-a664-d100a6dbbc6b_disk">
Jan 20 15:24:31 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 15:24:31 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 15:24:31 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 15:24:31 compute-0 nova_compute[250018]:       </source>
Jan 20 15:24:31 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 15:24:31 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 15:24:31 compute-0 nova_compute[250018]:       </auth>
Jan 20 15:24:31 compute-0 nova_compute[250018]:       <target dev="vda" bus="virtio"/>
Jan 20 15:24:31 compute-0 nova_compute[250018]:     </disk>
Jan 20 15:24:31 compute-0 nova_compute[250018]:     <disk type="network" device="cdrom">
Jan 20 15:24:31 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 15:24:31 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/d92c0bb3-a866-40ac-a664-d100a6dbbc6b_disk.config">
Jan 20 15:24:31 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 15:24:31 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 15:24:31 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 15:24:31 compute-0 nova_compute[250018]:       </source>
Jan 20 15:24:31 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 15:24:31 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 15:24:31 compute-0 nova_compute[250018]:       </auth>
Jan 20 15:24:31 compute-0 nova_compute[250018]:       <target dev="sda" bus="sata"/>
Jan 20 15:24:31 compute-0 nova_compute[250018]:     </disk>
Jan 20 15:24:31 compute-0 nova_compute[250018]:     <interface type="ethernet">
Jan 20 15:24:31 compute-0 nova_compute[250018]:       <mac address="fa:16:3e:04:8d:2f"/>
Jan 20 15:24:31 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 15:24:31 compute-0 nova_compute[250018]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 15:24:31 compute-0 nova_compute[250018]:       <mtu size="1442"/>
Jan 20 15:24:31 compute-0 nova_compute[250018]:       <target dev="tap3a19935a-bf"/>
Jan 20 15:24:31 compute-0 nova_compute[250018]:     </interface>
Jan 20 15:24:31 compute-0 nova_compute[250018]:     <serial type="pty">
Jan 20 15:24:31 compute-0 nova_compute[250018]:       <log file="/var/lib/nova/instances/d92c0bb3-a866-40ac-a664-d100a6dbbc6b/console.log" append="off"/>
Jan 20 15:24:31 compute-0 nova_compute[250018]:     </serial>
Jan 20 15:24:31 compute-0 nova_compute[250018]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 15:24:31 compute-0 nova_compute[250018]:     <video>
Jan 20 15:24:31 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 15:24:31 compute-0 nova_compute[250018]:     </video>
Jan 20 15:24:31 compute-0 nova_compute[250018]:     <input type="tablet" bus="usb"/>
Jan 20 15:24:31 compute-0 nova_compute[250018]:     <rng model="virtio">
Jan 20 15:24:31 compute-0 nova_compute[250018]:       <backend model="random">/dev/urandom</backend>
Jan 20 15:24:31 compute-0 nova_compute[250018]:     </rng>
Jan 20 15:24:31 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root"/>
Jan 20 15:24:31 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:24:31 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:24:31 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:24:31 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:24:31 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:24:31 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:24:31 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:24:31 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:24:31 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:24:31 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:24:31 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:24:31 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:24:31 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:24:31 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:24:31 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:24:31 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:24:31 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:24:31 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:24:31 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:24:31 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:24:31 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:24:31 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:24:31 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:24:31 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:24:31 compute-0 nova_compute[250018]:     <controller type="usb" index="0"/>
Jan 20 15:24:31 compute-0 nova_compute[250018]:     <memballoon model="virtio">
Jan 20 15:24:31 compute-0 nova_compute[250018]:       <stats period="10"/>
Jan 20 15:24:31 compute-0 nova_compute[250018]:     </memballoon>
Jan 20 15:24:31 compute-0 nova_compute[250018]:   </devices>
Jan 20 15:24:31 compute-0 nova_compute[250018]: </domain>
Jan 20 15:24:31 compute-0 nova_compute[250018]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 15:24:31 compute-0 nova_compute[250018]: 2026-01-20 15:24:31.350 250022 DEBUG nova.compute.manager [None req-dc762a23-ee5f-4f6b-965b-f46d22af86a3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: d92c0bb3-a866-40ac-a664-d100a6dbbc6b] Preparing to wait for external event network-vif-plugged-3a19935a-bf2c-4d28-b687-e8294d1773dc prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 20 15:24:31 compute-0 nova_compute[250018]: 2026-01-20 15:24:31.351 250022 DEBUG oslo_concurrency.lockutils [None req-dc762a23-ee5f-4f6b-965b-f46d22af86a3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Acquiring lock "d92c0bb3-a866-40ac-a664-d100a6dbbc6b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:24:31 compute-0 nova_compute[250018]: 2026-01-20 15:24:31.351 250022 DEBUG oslo_concurrency.lockutils [None req-dc762a23-ee5f-4f6b-965b-f46d22af86a3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lock "d92c0bb3-a866-40ac-a664-d100a6dbbc6b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:24:31 compute-0 nova_compute[250018]: 2026-01-20 15:24:31.352 250022 DEBUG oslo_concurrency.lockutils [None req-dc762a23-ee5f-4f6b-965b-f46d22af86a3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lock "d92c0bb3-a866-40ac-a664-d100a6dbbc6b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:24:31 compute-0 nova_compute[250018]: 2026-01-20 15:24:31.352 250022 DEBUG nova.virt.libvirt.vif [None req-dc762a23-ee5f-4f6b-965b-f46d22af86a3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T15:24:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1934224565',display_name='tempest-TestNetworkAdvancedServerOps-server-1934224565',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1934224565',id=201,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBC7eRRiZX7Z0gwhADeTQp/GmdK+Gan6DSxYDy+ge9zWSe9WjzThIejOjnXbubm42+2V6mvuyTko1B/LIcp7s5Ksj2VFdJX32syZeRGa674Ht1jUDA1vmPNUafeYfNfojGA==',key_name='tempest-TestNetworkAdvancedServerOps-1914065002',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1ed5feeeafe7448a8efb47ab975b0ead',ramdisk_id='',reservation_id='r-i99bna7o',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-175282664',owner_user_name='tempest-TestNetworkAdvancedServerOps-175282664-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T15:24:24Z,user_data=None,user_id='442a7a5cb8ea426a82be9762b262d171',uuid=d92c0bb3-a866-40ac-a664-d100a6dbbc6b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3a19935a-bf2c-4d28-b687-e8294d1773dc", "address": "fa:16:3e:04:8d:2f", "network": {"id": "9aa12a4e-d8a3-46c3-928f-fca08b856d45", "bridge": "br-int", "label": "tempest-network-smoke--1838033144", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ed5feeeafe7448a8efb47ab975b0ead", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3a19935a-bf", "ovs_interfaceid": "3a19935a-bf2c-4d28-b687-e8294d1773dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 15:24:31 compute-0 nova_compute[250018]: 2026-01-20 15:24:31.353 250022 DEBUG nova.network.os_vif_util [None req-dc762a23-ee5f-4f6b-965b-f46d22af86a3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Converting VIF {"id": "3a19935a-bf2c-4d28-b687-e8294d1773dc", "address": "fa:16:3e:04:8d:2f", "network": {"id": "9aa12a4e-d8a3-46c3-928f-fca08b856d45", "bridge": "br-int", "label": "tempest-network-smoke--1838033144", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ed5feeeafe7448a8efb47ab975b0ead", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3a19935a-bf", "ovs_interfaceid": "3a19935a-bf2c-4d28-b687-e8294d1773dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:24:31 compute-0 nova_compute[250018]: 2026-01-20 15:24:31.353 250022 DEBUG nova.network.os_vif_util [None req-dc762a23-ee5f-4f6b-965b-f46d22af86a3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:04:8d:2f,bridge_name='br-int',has_traffic_filtering=True,id=3a19935a-bf2c-4d28-b687-e8294d1773dc,network=Network(9aa12a4e-d8a3-46c3-928f-fca08b856d45),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3a19935a-bf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:24:31 compute-0 nova_compute[250018]: 2026-01-20 15:24:31.354 250022 DEBUG os_vif [None req-dc762a23-ee5f-4f6b-965b-f46d22af86a3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:04:8d:2f,bridge_name='br-int',has_traffic_filtering=True,id=3a19935a-bf2c-4d28-b687-e8294d1773dc,network=Network(9aa12a4e-d8a3-46c3-928f-fca08b856d45),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3a19935a-bf') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 15:24:31 compute-0 nova_compute[250018]: 2026-01-20 15:24:31.355 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:24:31 compute-0 nova_compute[250018]: 2026-01-20 15:24:31.355 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:24:31 compute-0 nova_compute[250018]: 2026-01-20 15:24:31.356 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:24:31 compute-0 nova_compute[250018]: 2026-01-20 15:24:31.359 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:24:31 compute-0 nova_compute[250018]: 2026-01-20 15:24:31.360 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3a19935a-bf, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:24:31 compute-0 nova_compute[250018]: 2026-01-20 15:24:31.360 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3a19935a-bf, col_values=(('external_ids', {'iface-id': '3a19935a-bf2c-4d28-b687-e8294d1773dc', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:04:8d:2f', 'vm-uuid': 'd92c0bb3-a866-40ac-a664-d100a6dbbc6b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:24:31 compute-0 nova_compute[250018]: 2026-01-20 15:24:31.361 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:24:31 compute-0 NetworkManager[48960]: <info>  [1768922671.3627] manager: (tap3a19935a-bf): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/348)
Jan 20 15:24:31 compute-0 nova_compute[250018]: 2026-01-20 15:24:31.364 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 15:24:31 compute-0 nova_compute[250018]: 2026-01-20 15:24:31.368 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:24:31 compute-0 nova_compute[250018]: 2026-01-20 15:24:31.369 250022 INFO os_vif [None req-dc762a23-ee5f-4f6b-965b-f46d22af86a3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:04:8d:2f,bridge_name='br-int',has_traffic_filtering=True,id=3a19935a-bf2c-4d28-b687-e8294d1773dc,network=Network(9aa12a4e-d8a3-46c3-928f-fca08b856d45),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3a19935a-bf')
Jan 20 15:24:31 compute-0 nova_compute[250018]: 2026-01-20 15:24:31.418 250022 DEBUG nova.virt.libvirt.driver [None req-dc762a23-ee5f-4f6b-965b-f46d22af86a3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:24:31 compute-0 nova_compute[250018]: 2026-01-20 15:24:31.418 250022 DEBUG nova.virt.libvirt.driver [None req-dc762a23-ee5f-4f6b-965b-f46d22af86a3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:24:31 compute-0 nova_compute[250018]: 2026-01-20 15:24:31.418 250022 DEBUG nova.virt.libvirt.driver [None req-dc762a23-ee5f-4f6b-965b-f46d22af86a3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] No VIF found with MAC fa:16:3e:04:8d:2f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 15:24:31 compute-0 nova_compute[250018]: 2026-01-20 15:24:31.419 250022 INFO nova.virt.libvirt.driver [None req-dc762a23-ee5f-4f6b-965b-f46d22af86a3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: d92c0bb3-a866-40ac-a664-d100a6dbbc6b] Using config drive
Jan 20 15:24:31 compute-0 nova_compute[250018]: 2026-01-20 15:24:31.445 250022 DEBUG nova.storage.rbd_utils [None req-dc762a23-ee5f-4f6b-965b-f46d22af86a3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] rbd image d92c0bb3-a866-40ac-a664-d100a6dbbc6b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:24:32 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:24:32 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:24:32 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:24:32.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:24:32 compute-0 nova_compute[250018]: 2026-01-20 15:24:32.153 250022 INFO nova.virt.libvirt.driver [None req-dc762a23-ee5f-4f6b-965b-f46d22af86a3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: d92c0bb3-a866-40ac-a664-d100a6dbbc6b] Creating config drive at /var/lib/nova/instances/d92c0bb3-a866-40ac-a664-d100a6dbbc6b/disk.config
Jan 20 15:24:32 compute-0 nova_compute[250018]: 2026-01-20 15:24:32.158 250022 DEBUG oslo_concurrency.processutils [None req-dc762a23-ee5f-4f6b-965b-f46d22af86a3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d92c0bb3-a866-40ac-a664-d100a6dbbc6b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp61s1brvb execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:24:32 compute-0 nova_compute[250018]: 2026-01-20 15:24:32.293 250022 DEBUG oslo_concurrency.processutils [None req-dc762a23-ee5f-4f6b-965b-f46d22af86a3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d92c0bb3-a866-40ac-a664-d100a6dbbc6b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp61s1brvb" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:24:32 compute-0 ceph-mon[74360]: pgmap v3070: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:24:32 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/592825777' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:24:32 compute-0 nova_compute[250018]: 2026-01-20 15:24:32.333 250022 DEBUG nova.storage.rbd_utils [None req-dc762a23-ee5f-4f6b-965b-f46d22af86a3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] rbd image d92c0bb3-a866-40ac-a664-d100a6dbbc6b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:24:32 compute-0 nova_compute[250018]: 2026-01-20 15:24:32.336 250022 DEBUG oslo_concurrency.processutils [None req-dc762a23-ee5f-4f6b-965b-f46d22af86a3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d92c0bb3-a866-40ac-a664-d100a6dbbc6b/disk.config d92c0bb3-a866-40ac-a664-d100a6dbbc6b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:24:32 compute-0 nova_compute[250018]: 2026-01-20 15:24:32.365 250022 DEBUG nova.network.neutron [req-920a8768-5d48-4fa3-9e05-56195eecddcf req-fe40ec0c-3175-43d9-b486-7abb9332551d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d92c0bb3-a866-40ac-a664-d100a6dbbc6b] Updated VIF entry in instance network info cache for port 3a19935a-bf2c-4d28-b687-e8294d1773dc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 15:24:32 compute-0 nova_compute[250018]: 2026-01-20 15:24:32.365 250022 DEBUG nova.network.neutron [req-920a8768-5d48-4fa3-9e05-56195eecddcf req-fe40ec0c-3175-43d9-b486-7abb9332551d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d92c0bb3-a866-40ac-a664-d100a6dbbc6b] Updating instance_info_cache with network_info: [{"id": "3a19935a-bf2c-4d28-b687-e8294d1773dc", "address": "fa:16:3e:04:8d:2f", "network": {"id": "9aa12a4e-d8a3-46c3-928f-fca08b856d45", "bridge": "br-int", "label": "tempest-network-smoke--1838033144", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ed5feeeafe7448a8efb47ab975b0ead", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3a19935a-bf", "ovs_interfaceid": "3a19935a-bf2c-4d28-b687-e8294d1773dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:24:32 compute-0 nova_compute[250018]: 2026-01-20 15:24:32.385 250022 DEBUG oslo_concurrency.lockutils [req-920a8768-5d48-4fa3-9e05-56195eecddcf req-fe40ec0c-3175-43d9-b486-7abb9332551d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-d92c0bb3-a866-40ac-a664-d100a6dbbc6b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:24:32 compute-0 nova_compute[250018]: 2026-01-20 15:24:32.511 250022 DEBUG oslo_concurrency.processutils [None req-dc762a23-ee5f-4f6b-965b-f46d22af86a3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d92c0bb3-a866-40ac-a664-d100a6dbbc6b/disk.config d92c0bb3-a866-40ac-a664-d100a6dbbc6b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.174s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:24:32 compute-0 nova_compute[250018]: 2026-01-20 15:24:32.511 250022 INFO nova.virt.libvirt.driver [None req-dc762a23-ee5f-4f6b-965b-f46d22af86a3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: d92c0bb3-a866-40ac-a664-d100a6dbbc6b] Deleting local config drive /var/lib/nova/instances/d92c0bb3-a866-40ac-a664-d100a6dbbc6b/disk.config because it was imported into RBD.
Jan 20 15:24:32 compute-0 kernel: tap3a19935a-bf: entered promiscuous mode
Jan 20 15:24:32 compute-0 NetworkManager[48960]: <info>  [1768922672.5700] manager: (tap3a19935a-bf): new Tun device (/org/freedesktop/NetworkManager/Devices/349)
Jan 20 15:24:32 compute-0 ovn_controller[148666]: 2026-01-20T15:24:32Z|00727|binding|INFO|Claiming lport 3a19935a-bf2c-4d28-b687-e8294d1773dc for this chassis.
Jan 20 15:24:32 compute-0 ovn_controller[148666]: 2026-01-20T15:24:32Z|00728|binding|INFO|3a19935a-bf2c-4d28-b687-e8294d1773dc: Claiming fa:16:3e:04:8d:2f 10.100.0.6
Jan 20 15:24:32 compute-0 nova_compute[250018]: 2026-01-20 15:24:32.574 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:24:32 compute-0 nova_compute[250018]: 2026-01-20 15:24:32.578 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:24:32 compute-0 nova_compute[250018]: 2026-01-20 15:24:32.585 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:24:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:24:32.592 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:04:8d:2f 10.100.0.6'], port_security=['fa:16:3e:04:8d:2f 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'd92c0bb3-a866-40ac-a664-d100a6dbbc6b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9aa12a4e-d8a3-46c3-928f-fca08b856d45', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1ed5feeeafe7448a8efb47ab975b0ead', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f1c1516c-e4c9-43f2-a8ac-732f711a4188', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0478e54e-d17b-4524-b6e2-b1a7413ccfd8, chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=3a19935a-bf2c-4d28-b687-e8294d1773dc) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:24:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:24:32.593 160071 INFO neutron.agent.ovn.metadata.agent [-] Port 3a19935a-bf2c-4d28-b687-e8294d1773dc in datapath 9aa12a4e-d8a3-46c3-928f-fca08b856d45 bound to our chassis
Jan 20 15:24:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:24:32.594 160071 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9aa12a4e-d8a3-46c3-928f-fca08b856d45
Jan 20 15:24:32 compute-0 systemd-udevd[373800]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 15:24:32 compute-0 systemd-machined[216401]: New machine qemu-88-instance-000000c9.
Jan 20 15:24:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:24:32.608 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[dbed0b45-532a-4bc2-8547-c4d6761cca77]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:24:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:24:32.609 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9aa12a4e-d1 in ovnmeta-9aa12a4e-d8a3-46c3-928f-fca08b856d45 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 15:24:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:24:32.612 257604 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9aa12a4e-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 15:24:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:24:32.612 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[cdc26030-4e7e-4b11-935e-6f4e8ca3bdc6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:24:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:24:32.613 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[60934cab-f596-4f76-b3f9-c2b11a44a005]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:24:32 compute-0 NetworkManager[48960]: <info>  [1768922672.6219] device (tap3a19935a-bf): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 15:24:32 compute-0 NetworkManager[48960]: <info>  [1768922672.6230] device (tap3a19935a-bf): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 15:24:32 compute-0 systemd[1]: Started Virtual Machine qemu-88-instance-000000c9.
Jan 20 15:24:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:24:32.627 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[d190cb98-7479-4fa5-b634-96fbb91dfb27]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:24:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:24:32.656 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[d651e4e9-3c89-4d14-9b39-586096e4e858]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:24:32 compute-0 nova_compute[250018]: 2026-01-20 15:24:32.662 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:24:32 compute-0 ovn_controller[148666]: 2026-01-20T15:24:32Z|00729|binding|INFO|Setting lport 3a19935a-bf2c-4d28-b687-e8294d1773dc ovn-installed in OVS
Jan 20 15:24:32 compute-0 ovn_controller[148666]: 2026-01-20T15:24:32Z|00730|binding|INFO|Setting lport 3a19935a-bf2c-4d28-b687-e8294d1773dc up in Southbound
Jan 20 15:24:32 compute-0 nova_compute[250018]: 2026-01-20 15:24:32.668 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:24:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:24:32.684 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[812a9c38-d37b-4da5-85ee-d19f6ff980cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:24:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:24:32.689 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[88235450-9de7-4981-bb6c-2fb6bc0a2588]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:24:32 compute-0 NetworkManager[48960]: <info>  [1768922672.6905] manager: (tap9aa12a4e-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/350)
Jan 20 15:24:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:24:32.721 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[d2bfdd8a-a413-4a94-8126-d6fde28de9d6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:24:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:24:32.723 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[2192ec62-b69f-49d8-8ca6-2f4a6146575d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:24:32 compute-0 NetworkManager[48960]: <info>  [1768922672.7416] device (tap9aa12a4e-d0): carrier: link connected
Jan 20 15:24:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:24:32.747 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[fd93c15f-bb34-4090-8466-e00213a38d0a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:24:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:24:32.762 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[26d7580e-33df-4c76-ab8e-bb1bf3702686]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9aa12a4e-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6e:6a:1b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 230], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 861346, 'reachable_time': 26356, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 373832, 'error': None, 'target': 'ovnmeta-9aa12a4e-d8a3-46c3-928f-fca08b856d45', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:24:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:24:32.777 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[69b71591-2b28-42f9-8526-aad3a4f5b55b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe6e:6a1b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 861346, 'tstamp': 861346}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 373833, 'error': None, 'target': 'ovnmeta-9aa12a4e-d8a3-46c3-928f-fca08b856d45', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:24:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:24:32.794 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[f38794dc-efdb-4c99-a1ed-aa0fd36be9ad]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9aa12a4e-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6e:6a:1b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 230], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 861346, 'reachable_time': 26356, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 373834, 'error': None, 'target': 'ovnmeta-9aa12a4e-d8a3-46c3-928f-fca08b856d45', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:24:32 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:24:32 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3071: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:24:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:24:32.825 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[ff25611a-ebaf-47ec-a3e3-09437c18e4ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:24:32 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:24:32 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:24:32 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:24:32.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:24:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:24:32.883 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[ef7dec0a-1669-47bc-90ce-7d6f9d29086d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:24:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:24:32.884 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9aa12a4e-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:24:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:24:32.885 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:24:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:24:32.885 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9aa12a4e-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:24:32 compute-0 NetworkManager[48960]: <info>  [1768922672.8881] manager: (tap9aa12a4e-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/351)
Jan 20 15:24:32 compute-0 kernel: tap9aa12a4e-d0: entered promiscuous mode
Jan 20 15:24:32 compute-0 nova_compute[250018]: 2026-01-20 15:24:32.887 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:24:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:24:32.893 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9aa12a4e-d0, col_values=(('external_ids', {'iface-id': '66a8e378-713b-4681-aec0-15a66f9670f1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:24:32 compute-0 nova_compute[250018]: 2026-01-20 15:24:32.895 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:24:32 compute-0 ovn_controller[148666]: 2026-01-20T15:24:32Z|00731|binding|INFO|Releasing lport 66a8e378-713b-4681-aec0-15a66f9670f1 from this chassis (sb_readonly=0)
Jan 20 15:24:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:24:32.896 160071 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9aa12a4e-d8a3-46c3-928f-fca08b856d45.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9aa12a4e-d8a3-46c3-928f-fca08b856d45.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 15:24:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:24:32.897 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[3919ba71-d311-4174-a5a3-0261b3d5b5a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:24:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:24:32.898 160071 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 15:24:32 compute-0 ovn_metadata_agent[160049]: global
Jan 20 15:24:32 compute-0 ovn_metadata_agent[160049]:     log         /dev/log local0 debug
Jan 20 15:24:32 compute-0 ovn_metadata_agent[160049]:     log-tag     haproxy-metadata-proxy-9aa12a4e-d8a3-46c3-928f-fca08b856d45
Jan 20 15:24:32 compute-0 ovn_metadata_agent[160049]:     user        root
Jan 20 15:24:32 compute-0 ovn_metadata_agent[160049]:     group       root
Jan 20 15:24:32 compute-0 ovn_metadata_agent[160049]:     maxconn     1024
Jan 20 15:24:32 compute-0 ovn_metadata_agent[160049]:     pidfile     /var/lib/neutron/external/pids/9aa12a4e-d8a3-46c3-928f-fca08b856d45.pid.haproxy
Jan 20 15:24:32 compute-0 ovn_metadata_agent[160049]:     daemon
Jan 20 15:24:32 compute-0 ovn_metadata_agent[160049]: 
Jan 20 15:24:32 compute-0 ovn_metadata_agent[160049]: defaults
Jan 20 15:24:32 compute-0 ovn_metadata_agent[160049]:     log global
Jan 20 15:24:32 compute-0 ovn_metadata_agent[160049]:     mode http
Jan 20 15:24:32 compute-0 ovn_metadata_agent[160049]:     option httplog
Jan 20 15:24:32 compute-0 ovn_metadata_agent[160049]:     option dontlognull
Jan 20 15:24:32 compute-0 ovn_metadata_agent[160049]:     option http-server-close
Jan 20 15:24:32 compute-0 ovn_metadata_agent[160049]:     option forwardfor
Jan 20 15:24:32 compute-0 ovn_metadata_agent[160049]:     retries                 3
Jan 20 15:24:32 compute-0 ovn_metadata_agent[160049]:     timeout http-request    30s
Jan 20 15:24:32 compute-0 ovn_metadata_agent[160049]:     timeout connect         30s
Jan 20 15:24:32 compute-0 ovn_metadata_agent[160049]:     timeout client          32s
Jan 20 15:24:32 compute-0 ovn_metadata_agent[160049]:     timeout server          32s
Jan 20 15:24:32 compute-0 ovn_metadata_agent[160049]:     timeout http-keep-alive 30s
Jan 20 15:24:32 compute-0 ovn_metadata_agent[160049]: 
Jan 20 15:24:32 compute-0 ovn_metadata_agent[160049]: 
Jan 20 15:24:32 compute-0 ovn_metadata_agent[160049]: listen listener
Jan 20 15:24:32 compute-0 ovn_metadata_agent[160049]:     bind 169.254.169.254:80
Jan 20 15:24:32 compute-0 ovn_metadata_agent[160049]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 15:24:32 compute-0 ovn_metadata_agent[160049]:     http-request add-header X-OVN-Network-ID 9aa12a4e-d8a3-46c3-928f-fca08b856d45
Jan 20 15:24:32 compute-0 ovn_metadata_agent[160049]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 15:24:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:24:32.899 160071 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9aa12a4e-d8a3-46c3-928f-fca08b856d45', 'env', 'PROCESS_TAG=haproxy-9aa12a4e-d8a3-46c3-928f-fca08b856d45', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9aa12a4e-d8a3-46c3-928f-fca08b856d45.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 15:24:32 compute-0 nova_compute[250018]: 2026-01-20 15:24:32.908 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:24:32 compute-0 nova_compute[250018]: 2026-01-20 15:24:32.946 250022 DEBUG nova.compute.manager [req-633d267d-f6cc-4561-9542-f7f7fb56c6bf req-ad46a623-fae8-4965-b52f-5ef29ade8a3a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d92c0bb3-a866-40ac-a664-d100a6dbbc6b] Received event network-vif-plugged-3a19935a-bf2c-4d28-b687-e8294d1773dc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:24:32 compute-0 nova_compute[250018]: 2026-01-20 15:24:32.946 250022 DEBUG oslo_concurrency.lockutils [req-633d267d-f6cc-4561-9542-f7f7fb56c6bf req-ad46a623-fae8-4965-b52f-5ef29ade8a3a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "d92c0bb3-a866-40ac-a664-d100a6dbbc6b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:24:32 compute-0 nova_compute[250018]: 2026-01-20 15:24:32.946 250022 DEBUG oslo_concurrency.lockutils [req-633d267d-f6cc-4561-9542-f7f7fb56c6bf req-ad46a623-fae8-4965-b52f-5ef29ade8a3a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d92c0bb3-a866-40ac-a664-d100a6dbbc6b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:24:32 compute-0 nova_compute[250018]: 2026-01-20 15:24:32.947 250022 DEBUG oslo_concurrency.lockutils [req-633d267d-f6cc-4561-9542-f7f7fb56c6bf req-ad46a623-fae8-4965-b52f-5ef29ade8a3a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d92c0bb3-a866-40ac-a664-d100a6dbbc6b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:24:32 compute-0 nova_compute[250018]: 2026-01-20 15:24:32.947 250022 DEBUG nova.compute.manager [req-633d267d-f6cc-4561-9542-f7f7fb56c6bf req-ad46a623-fae8-4965-b52f-5ef29ade8a3a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d92c0bb3-a866-40ac-a664-d100a6dbbc6b] Processing event network-vif-plugged-3a19935a-bf2c-4d28-b687-e8294d1773dc _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 20 15:24:33 compute-0 nova_compute[250018]: 2026-01-20 15:24:33.122 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768922673.1222274, d92c0bb3-a866-40ac-a664-d100a6dbbc6b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:24:33 compute-0 nova_compute[250018]: 2026-01-20 15:24:33.123 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: d92c0bb3-a866-40ac-a664-d100a6dbbc6b] VM Started (Lifecycle Event)
Jan 20 15:24:33 compute-0 nova_compute[250018]: 2026-01-20 15:24:33.124 250022 DEBUG nova.compute.manager [None req-dc762a23-ee5f-4f6b-965b-f46d22af86a3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: d92c0bb3-a866-40ac-a664-d100a6dbbc6b] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 15:24:33 compute-0 nova_compute[250018]: 2026-01-20 15:24:33.128 250022 DEBUG nova.virt.libvirt.driver [None req-dc762a23-ee5f-4f6b-965b-f46d22af86a3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: d92c0bb3-a866-40ac-a664-d100a6dbbc6b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 15:24:33 compute-0 nova_compute[250018]: 2026-01-20 15:24:33.131 250022 INFO nova.virt.libvirt.driver [-] [instance: d92c0bb3-a866-40ac-a664-d100a6dbbc6b] Instance spawned successfully.
Jan 20 15:24:33 compute-0 nova_compute[250018]: 2026-01-20 15:24:33.132 250022 DEBUG nova.virt.libvirt.driver [None req-dc762a23-ee5f-4f6b-965b-f46d22af86a3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: d92c0bb3-a866-40ac-a664-d100a6dbbc6b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 15:24:33 compute-0 nova_compute[250018]: 2026-01-20 15:24:33.161 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: d92c0bb3-a866-40ac-a664-d100a6dbbc6b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:24:33 compute-0 nova_compute[250018]: 2026-01-20 15:24:33.165 250022 DEBUG nova.virt.libvirt.driver [None req-dc762a23-ee5f-4f6b-965b-f46d22af86a3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: d92c0bb3-a866-40ac-a664-d100a6dbbc6b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:24:33 compute-0 nova_compute[250018]: 2026-01-20 15:24:33.166 250022 DEBUG nova.virt.libvirt.driver [None req-dc762a23-ee5f-4f6b-965b-f46d22af86a3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: d92c0bb3-a866-40ac-a664-d100a6dbbc6b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:24:33 compute-0 nova_compute[250018]: 2026-01-20 15:24:33.166 250022 DEBUG nova.virt.libvirt.driver [None req-dc762a23-ee5f-4f6b-965b-f46d22af86a3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: d92c0bb3-a866-40ac-a664-d100a6dbbc6b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:24:33 compute-0 nova_compute[250018]: 2026-01-20 15:24:33.167 250022 DEBUG nova.virt.libvirt.driver [None req-dc762a23-ee5f-4f6b-965b-f46d22af86a3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: d92c0bb3-a866-40ac-a664-d100a6dbbc6b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:24:33 compute-0 nova_compute[250018]: 2026-01-20 15:24:33.167 250022 DEBUG nova.virt.libvirt.driver [None req-dc762a23-ee5f-4f6b-965b-f46d22af86a3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: d92c0bb3-a866-40ac-a664-d100a6dbbc6b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:24:33 compute-0 nova_compute[250018]: 2026-01-20 15:24:33.168 250022 DEBUG nova.virt.libvirt.driver [None req-dc762a23-ee5f-4f6b-965b-f46d22af86a3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: d92c0bb3-a866-40ac-a664-d100a6dbbc6b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:24:33 compute-0 nova_compute[250018]: 2026-01-20 15:24:33.172 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: d92c0bb3-a866-40ac-a664-d100a6dbbc6b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:24:33 compute-0 nova_compute[250018]: 2026-01-20 15:24:33.209 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: d92c0bb3-a866-40ac-a664-d100a6dbbc6b] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 15:24:33 compute-0 nova_compute[250018]: 2026-01-20 15:24:33.210 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768922673.1224132, d92c0bb3-a866-40ac-a664-d100a6dbbc6b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:24:33 compute-0 nova_compute[250018]: 2026-01-20 15:24:33.210 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: d92c0bb3-a866-40ac-a664-d100a6dbbc6b] VM Paused (Lifecycle Event)
Jan 20 15:24:33 compute-0 nova_compute[250018]: 2026-01-20 15:24:33.227 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: d92c0bb3-a866-40ac-a664-d100a6dbbc6b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:24:33 compute-0 nova_compute[250018]: 2026-01-20 15:24:33.230 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768922673.1269484, d92c0bb3-a866-40ac-a664-d100a6dbbc6b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:24:33 compute-0 nova_compute[250018]: 2026-01-20 15:24:33.230 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: d92c0bb3-a866-40ac-a664-d100a6dbbc6b] VM Resumed (Lifecycle Event)
Jan 20 15:24:33 compute-0 nova_compute[250018]: 2026-01-20 15:24:33.234 250022 INFO nova.compute.manager [None req-dc762a23-ee5f-4f6b-965b-f46d22af86a3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: d92c0bb3-a866-40ac-a664-d100a6dbbc6b] Took 8.87 seconds to spawn the instance on the hypervisor.
Jan 20 15:24:33 compute-0 nova_compute[250018]: 2026-01-20 15:24:33.234 250022 DEBUG nova.compute.manager [None req-dc762a23-ee5f-4f6b-965b-f46d22af86a3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: d92c0bb3-a866-40ac-a664-d100a6dbbc6b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:24:33 compute-0 podman[373905]: 2026-01-20 15:24:33.251688986 +0000 UTC m=+0.050909678 container create bc5d9bde5be7084acb4bcb8f400a02f5b61e4f5800171396ee22363c720932f4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9aa12a4e-d8a3-46c3-928f-fca08b856d45, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 15:24:33 compute-0 nova_compute[250018]: 2026-01-20 15:24:33.263 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: d92c0bb3-a866-40ac-a664-d100a6dbbc6b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:24:33 compute-0 nova_compute[250018]: 2026-01-20 15:24:33.266 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: d92c0bb3-a866-40ac-a664-d100a6dbbc6b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:24:33 compute-0 nova_compute[250018]: 2026-01-20 15:24:33.294 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: d92c0bb3-a866-40ac-a664-d100a6dbbc6b] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 15:24:33 compute-0 systemd[1]: Started libpod-conmon-bc5d9bde5be7084acb4bcb8f400a02f5b61e4f5800171396ee22363c720932f4.scope.
Jan 20 15:24:33 compute-0 nova_compute[250018]: 2026-01-20 15:24:33.308 250022 INFO nova.compute.manager [None req-dc762a23-ee5f-4f6b-965b-f46d22af86a3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: d92c0bb3-a866-40ac-a664-d100a6dbbc6b] Took 9.84 seconds to build instance.
Jan 20 15:24:33 compute-0 podman[373905]: 2026-01-20 15:24:33.221789991 +0000 UTC m=+0.021010703 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 15:24:33 compute-0 nova_compute[250018]: 2026-01-20 15:24:33.324 250022 DEBUG oslo_concurrency.lockutils [None req-dc762a23-ee5f-4f6b-965b-f46d22af86a3 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lock "d92c0bb3-a866-40ac-a664-d100a6dbbc6b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.907s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:24:33 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:24:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d50503b85b34fdeedd2441add50bb915a133803ada60bd8256407195608f30e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 15:24:33 compute-0 podman[373905]: 2026-01-20 15:24:33.34172568 +0000 UTC m=+0.140946392 container init bc5d9bde5be7084acb4bcb8f400a02f5b61e4f5800171396ee22363c720932f4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9aa12a4e-d8a3-46c3-928f-fca08b856d45, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS)
Jan 20 15:24:33 compute-0 podman[373905]: 2026-01-20 15:24:33.347629712 +0000 UTC m=+0.146850404 container start bc5d9bde5be7084acb4bcb8f400a02f5b61e4f5800171396ee22363c720932f4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9aa12a4e-d8a3-46c3-928f-fca08b856d45, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 20 15:24:33 compute-0 neutron-haproxy-ovnmeta-9aa12a4e-d8a3-46c3-928f-fca08b856d45[373920]: [NOTICE]   (373924) : New worker (373926) forked
Jan 20 15:24:33 compute-0 neutron-haproxy-ovnmeta-9aa12a4e-d8a3-46c3-928f-fca08b856d45[373920]: [NOTICE]   (373924) : Loading success.
Jan 20 15:24:34 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:24:34 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:24:34 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:24:34.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:24:34 compute-0 ceph-mon[74360]: pgmap v3071: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:24:34 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3072: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Jan 20 15:24:34 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:24:34 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:24:34 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:24:34.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:24:35 compute-0 nova_compute[250018]: 2026-01-20 15:24:35.067 250022 DEBUG nova.compute.manager [req-a5be6540-1967-47c2-8de7-a732496f6bf7 req-d33f65ea-63a1-4868-940d-f0fd079e7716 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d92c0bb3-a866-40ac-a664-d100a6dbbc6b] Received event network-vif-plugged-3a19935a-bf2c-4d28-b687-e8294d1773dc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:24:35 compute-0 nova_compute[250018]: 2026-01-20 15:24:35.067 250022 DEBUG oslo_concurrency.lockutils [req-a5be6540-1967-47c2-8de7-a732496f6bf7 req-d33f65ea-63a1-4868-940d-f0fd079e7716 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "d92c0bb3-a866-40ac-a664-d100a6dbbc6b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:24:35 compute-0 nova_compute[250018]: 2026-01-20 15:24:35.067 250022 DEBUG oslo_concurrency.lockutils [req-a5be6540-1967-47c2-8de7-a732496f6bf7 req-d33f65ea-63a1-4868-940d-f0fd079e7716 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d92c0bb3-a866-40ac-a664-d100a6dbbc6b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:24:35 compute-0 nova_compute[250018]: 2026-01-20 15:24:35.067 250022 DEBUG oslo_concurrency.lockutils [req-a5be6540-1967-47c2-8de7-a732496f6bf7 req-d33f65ea-63a1-4868-940d-f0fd079e7716 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d92c0bb3-a866-40ac-a664-d100a6dbbc6b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:24:35 compute-0 nova_compute[250018]: 2026-01-20 15:24:35.068 250022 DEBUG nova.compute.manager [req-a5be6540-1967-47c2-8de7-a732496f6bf7 req-d33f65ea-63a1-4868-940d-f0fd079e7716 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d92c0bb3-a866-40ac-a664-d100a6dbbc6b] No waiting events found dispatching network-vif-plugged-3a19935a-bf2c-4d28-b687-e8294d1773dc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:24:35 compute-0 nova_compute[250018]: 2026-01-20 15:24:35.068 250022 WARNING nova.compute.manager [req-a5be6540-1967-47c2-8de7-a732496f6bf7 req-d33f65ea-63a1-4868-940d-f0fd079e7716 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d92c0bb3-a866-40ac-a664-d100a6dbbc6b] Received unexpected event network-vif-plugged-3a19935a-bf2c-4d28-b687-e8294d1773dc for instance with vm_state active and task_state None.
Jan 20 15:24:35 compute-0 ovn_controller[148666]: 2026-01-20T15:24:35Z|00732|binding|INFO|Releasing lport 66a8e378-713b-4681-aec0-15a66f9670f1 from this chassis (sb_readonly=0)
Jan 20 15:24:35 compute-0 nova_compute[250018]: 2026-01-20 15:24:35.772 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:24:35 compute-0 NetworkManager[48960]: <info>  [1768922675.7762] manager: (patch-provnet-b62c391b-f7a3-4a38-a0df-72ac0383ca74-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/352)
Jan 20 15:24:35 compute-0 NetworkManager[48960]: <info>  [1768922675.7778] manager: (patch-br-int-to-provnet-b62c391b-f7a3-4a38-a0df-72ac0383ca74): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/353)
Jan 20 15:24:35 compute-0 ovn_controller[148666]: 2026-01-20T15:24:35Z|00733|binding|INFO|Releasing lport 66a8e378-713b-4681-aec0-15a66f9670f1 from this chassis (sb_readonly=0)
Jan 20 15:24:35 compute-0 nova_compute[250018]: 2026-01-20 15:24:35.798 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:24:35 compute-0 nova_compute[250018]: 2026-01-20 15:24:35.802 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:24:35 compute-0 nova_compute[250018]: 2026-01-20 15:24:35.836 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:24:36 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:24:36 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:24:36 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:24:36.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:24:36 compute-0 nova_compute[250018]: 2026-01-20 15:24:36.183 250022 DEBUG nova.compute.manager [req-4bf34d38-6a25-4ac5-9ce8-b2aebc48e28d req-49ab3709-4603-47d9-af7e-54fb6f784c17 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d92c0bb3-a866-40ac-a664-d100a6dbbc6b] Received event network-changed-3a19935a-bf2c-4d28-b687-e8294d1773dc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:24:36 compute-0 nova_compute[250018]: 2026-01-20 15:24:36.183 250022 DEBUG nova.compute.manager [req-4bf34d38-6a25-4ac5-9ce8-b2aebc48e28d req-49ab3709-4603-47d9-af7e-54fb6f784c17 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d92c0bb3-a866-40ac-a664-d100a6dbbc6b] Refreshing instance network info cache due to event network-changed-3a19935a-bf2c-4d28-b687-e8294d1773dc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 15:24:36 compute-0 nova_compute[250018]: 2026-01-20 15:24:36.183 250022 DEBUG oslo_concurrency.lockutils [req-4bf34d38-6a25-4ac5-9ce8-b2aebc48e28d req-49ab3709-4603-47d9-af7e-54fb6f784c17 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-d92c0bb3-a866-40ac-a664-d100a6dbbc6b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:24:36 compute-0 nova_compute[250018]: 2026-01-20 15:24:36.184 250022 DEBUG oslo_concurrency.lockutils [req-4bf34d38-6a25-4ac5-9ce8-b2aebc48e28d req-49ab3709-4603-47d9-af7e-54fb6f784c17 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-d92c0bb3-a866-40ac-a664-d100a6dbbc6b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:24:36 compute-0 nova_compute[250018]: 2026-01-20 15:24:36.184 250022 DEBUG nova.network.neutron [req-4bf34d38-6a25-4ac5-9ce8-b2aebc48e28d req-49ab3709-4603-47d9-af7e-54fb6f784c17 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d92c0bb3-a866-40ac-a664-d100a6dbbc6b] Refreshing network info cache for port 3a19935a-bf2c-4d28-b687-e8294d1773dc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 15:24:36 compute-0 ceph-mon[74360]: pgmap v3072: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Jan 20 15:24:36 compute-0 nova_compute[250018]: 2026-01-20 15:24:36.362 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:24:36 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3073: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 1.8 MiB/s wr, 85 op/s
Jan 20 15:24:36 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:24:36 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:24:36 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:24:36.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:24:37 compute-0 nova_compute[250018]: 2026-01-20 15:24:37.349 250022 DEBUG nova.network.neutron [req-4bf34d38-6a25-4ac5-9ce8-b2aebc48e28d req-49ab3709-4603-47d9-af7e-54fb6f784c17 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d92c0bb3-a866-40ac-a664-d100a6dbbc6b] Updated VIF entry in instance network info cache for port 3a19935a-bf2c-4d28-b687-e8294d1773dc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 15:24:37 compute-0 nova_compute[250018]: 2026-01-20 15:24:37.350 250022 DEBUG nova.network.neutron [req-4bf34d38-6a25-4ac5-9ce8-b2aebc48e28d req-49ab3709-4603-47d9-af7e-54fb6f784c17 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d92c0bb3-a866-40ac-a664-d100a6dbbc6b] Updating instance_info_cache with network_info: [{"id": "3a19935a-bf2c-4d28-b687-e8294d1773dc", "address": "fa:16:3e:04:8d:2f", "network": {"id": "9aa12a4e-d8a3-46c3-928f-fca08b856d45", "bridge": "br-int", "label": "tempest-network-smoke--1838033144", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ed5feeeafe7448a8efb47ab975b0ead", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3a19935a-bf", "ovs_interfaceid": "3a19935a-bf2c-4d28-b687-e8294d1773dc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:24:37 compute-0 nova_compute[250018]: 2026-01-20 15:24:37.372 250022 DEBUG oslo_concurrency.lockutils [req-4bf34d38-6a25-4ac5-9ce8-b2aebc48e28d req-49ab3709-4603-47d9-af7e-54fb6f784c17 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-d92c0bb3-a866-40ac-a664-d100a6dbbc6b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:24:37 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:24:38 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:24:38 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:24:38 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:24:38.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:24:38 compute-0 ceph-mon[74360]: pgmap v3073: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 1.8 MiB/s wr, 85 op/s
Jan 20 15:24:38 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3074: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 467 KiB/s wr, 71 op/s
Jan 20 15:24:38 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:24:38 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:24:38 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:24:38.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:24:40 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:24:40 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:24:40 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:24:40.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:24:40 compute-0 ceph-mon[74360]: pgmap v3074: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 467 KiB/s wr, 71 op/s
Jan 20 15:24:40 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3075: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 467 KiB/s wr, 86 op/s
Jan 20 15:24:40 compute-0 nova_compute[250018]: 2026-01-20 15:24:40.842 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:24:40 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:24:40 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:24:40 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:24:40.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:24:41 compute-0 nova_compute[250018]: 2026-01-20 15:24:41.365 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:24:42 compute-0 nova_compute[250018]: 2026-01-20 15:24:42.052 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:24:42 compute-0 nova_compute[250018]: 2026-01-20 15:24:42.053 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 20 15:24:42 compute-0 nova_compute[250018]: 2026-01-20 15:24:42.088 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 20 15:24:42 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:24:42 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:24:42 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:24:42.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:24:42 compute-0 ceph-mon[74360]: pgmap v3075: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 467 KiB/s wr, 86 op/s
Jan 20 15:24:42 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:24:42 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3076: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 20 15:24:42 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:24:42 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:24:42 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:24:42.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:24:44 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:24:44 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:24:44 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:24:44.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:24:44 compute-0 ceph-mon[74360]: pgmap v3076: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 20 15:24:44 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3077: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 20 15:24:44 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:24:44 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:24:44 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:24:44.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:24:45 compute-0 sudo[373943]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:24:45 compute-0 sudo[373943]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:24:45 compute-0 sudo[373943]: pam_unix(sudo:session): session closed for user root
Jan 20 15:24:45 compute-0 sudo[373968]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:24:45 compute-0 sudo[373968]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:24:45 compute-0 sudo[373968]: pam_unix(sudo:session): session closed for user root
Jan 20 15:24:45 compute-0 nova_compute[250018]: 2026-01-20 15:24:45.854 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:24:46 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:24:46 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:24:46 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:24:46.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:24:46 compute-0 nova_compute[250018]: 2026-01-20 15:24:46.368 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:24:46 compute-0 ceph-mon[74360]: pgmap v3077: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 20 15:24:46 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3078: 321 pgs: 321 active+clean; 171 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 577 KiB/s wr, 84 op/s
Jan 20 15:24:46 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:24:46 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:24:46 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:24:46.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:24:47 compute-0 podman[373995]: 2026-01-20 15:24:47.476470639 +0000 UTC m=+0.064105209 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Jan 20 15:24:47 compute-0 podman[373994]: 2026-01-20 15:24:47.50513523 +0000 UTC m=+0.092769860 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Jan 20 15:24:47 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:24:47 compute-0 ovn_controller[148666]: 2026-01-20T15:24:47Z|00100|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:04:8d:2f 10.100.0.6
Jan 20 15:24:47 compute-0 ovn_controller[148666]: 2026-01-20T15:24:47Z|00101|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:04:8d:2f 10.100.0.6
Jan 20 15:24:48 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:24:48 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:24:48 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:24:48.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:24:48 compute-0 ceph-mon[74360]: pgmap v3078: 321 pgs: 321 active+clean; 171 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 577 KiB/s wr, 84 op/s
Jan 20 15:24:48 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3079: 321 pgs: 321 active+clean; 171 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 545 KiB/s rd, 576 KiB/s wr, 27 op/s
Jan 20 15:24:48 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:24:48 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:24:48 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:24:48.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:24:49 compute-0 sudo[374037]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:24:49 compute-0 sudo[374037]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:24:49 compute-0 sudo[374037]: pam_unix(sudo:session): session closed for user root
Jan 20 15:24:49 compute-0 sudo[374062]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:24:49 compute-0 sudo[374062]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:24:49 compute-0 sudo[374062]: pam_unix(sudo:session): session closed for user root
Jan 20 15:24:49 compute-0 sudo[374087]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:24:49 compute-0 sudo[374087]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:24:49 compute-0 sudo[374087]: pam_unix(sudo:session): session closed for user root
Jan 20 15:24:50 compute-0 sudo[374112]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 15:24:50 compute-0 sudo[374112]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:24:50 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:24:50 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:24:50 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:24:50.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:24:50 compute-0 ceph-mon[74360]: pgmap v3079: 321 pgs: 321 active+clean; 171 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 545 KiB/s rd, 576 KiB/s wr, 27 op/s
Jan 20 15:24:50 compute-0 sudo[374112]: pam_unix(sudo:session): session closed for user root
Jan 20 15:24:50 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 15:24:50 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:24:50 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 20 15:24:50 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 15:24:50 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 20 15:24:50 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:24:50 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev c907504d-6462-469c-b0f1-2e97db89accf does not exist
Jan 20 15:24:50 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 48281225-4442-4985-8376-5e3a9f2c39cb does not exist
Jan 20 15:24:50 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 0977d4e4-b4ea-458c-a130-56e0cdae0ec2 does not exist
Jan 20 15:24:50 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 20 15:24:50 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 15:24:50 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 20 15:24:50 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 15:24:50 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 15:24:50 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:24:50 compute-0 sudo[374169]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:24:50 compute-0 sudo[374169]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:24:50 compute-0 sudo[374169]: pam_unix(sudo:session): session closed for user root
Jan 20 15:24:50 compute-0 sudo[374194]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:24:50 compute-0 sudo[374194]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:24:50 compute-0 sudo[374194]: pam_unix(sudo:session): session closed for user root
Jan 20 15:24:50 compute-0 sudo[374219]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:24:50 compute-0 sudo[374219]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:24:50 compute-0 sudo[374219]: pam_unix(sudo:session): session closed for user root
Jan 20 15:24:50 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3080: 321 pgs: 321 active+clean; 198 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 754 KiB/s rd, 2.1 MiB/s wr, 70 op/s
Jan 20 15:24:50 compute-0 sudo[374244]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 15:24:50 compute-0 sudo[374244]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:24:50 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:24:50 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:24:50 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:24:50.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:24:50 compute-0 nova_compute[250018]: 2026-01-20 15:24:50.901 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:24:51 compute-0 podman[374308]: 2026-01-20 15:24:51.147423165 +0000 UTC m=+0.039201729 container create c7a4e44283c707e50ea9e38a66aa164a3a3599e7e8a4818773fe4ade8d8f9e10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_kalam, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 20 15:24:51 compute-0 systemd[1]: Started libpod-conmon-c7a4e44283c707e50ea9e38a66aa164a3a3599e7e8a4818773fe4ade8d8f9e10.scope.
Jan 20 15:24:51 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:24:51 compute-0 podman[374308]: 2026-01-20 15:24:51.225521413 +0000 UTC m=+0.117299977 container init c7a4e44283c707e50ea9e38a66aa164a3a3599e7e8a4818773fe4ade8d8f9e10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_kalam, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 20 15:24:51 compute-0 podman[374308]: 2026-01-20 15:24:51.13002084 +0000 UTC m=+0.021799414 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:24:51 compute-0 podman[374308]: 2026-01-20 15:24:51.233211454 +0000 UTC m=+0.124990018 container start c7a4e44283c707e50ea9e38a66aa164a3a3599e7e8a4818773fe4ade8d8f9e10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_kalam, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 20 15:24:51 compute-0 podman[374308]: 2026-01-20 15:24:51.236676378 +0000 UTC m=+0.128454972 container attach c7a4e44283c707e50ea9e38a66aa164a3a3599e7e8a4818773fe4ade8d8f9e10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_kalam, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 20 15:24:51 compute-0 brave_kalam[374325]: 167 167
Jan 20 15:24:51 compute-0 systemd[1]: libpod-c7a4e44283c707e50ea9e38a66aa164a3a3599e7e8a4818773fe4ade8d8f9e10.scope: Deactivated successfully.
Jan 20 15:24:51 compute-0 podman[374308]: 2026-01-20 15:24:51.240095821 +0000 UTC m=+0.131874385 container died c7a4e44283c707e50ea9e38a66aa164a3a3599e7e8a4818773fe4ade8d8f9e10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_kalam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 20 15:24:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-f3ab36916b8c66b6c9b7351e04214202308aa2dd270aac136820484e8d700a3f-merged.mount: Deactivated successfully.
Jan 20 15:24:51 compute-0 podman[374308]: 2026-01-20 15:24:51.279064094 +0000 UTC m=+0.170842658 container remove c7a4e44283c707e50ea9e38a66aa164a3a3599e7e8a4818773fe4ade8d8f9e10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_kalam, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 15:24:51 compute-0 systemd[1]: libpod-conmon-c7a4e44283c707e50ea9e38a66aa164a3a3599e7e8a4818773fe4ade8d8f9e10.scope: Deactivated successfully.
Jan 20 15:24:51 compute-0 nova_compute[250018]: 2026-01-20 15:24:51.370 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:24:51 compute-0 podman[374351]: 2026-01-20 15:24:51.439815525 +0000 UTC m=+0.043564979 container create 4117426ac024b11b7c364b17c3d47ec305671d3da92c49cb080364ff02ac58c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_heyrovsky, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0)
Jan 20 15:24:51 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:24:51 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 15:24:51 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:24:51 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 15:24:51 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 15:24:51 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:24:51 compute-0 systemd[1]: Started libpod-conmon-4117426ac024b11b7c364b17c3d47ec305671d3da92c49cb080364ff02ac58c0.scope.
Jan 20 15:24:51 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:24:51 compute-0 podman[374351]: 2026-01-20 15:24:51.41981589 +0000 UTC m=+0.023565344 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:24:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a27b753452060e55dcad92eb510b83aa8a35812cdd36ad2a854ab9d2a354663/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 15:24:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a27b753452060e55dcad92eb510b83aa8a35812cdd36ad2a854ab9d2a354663/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 15:24:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a27b753452060e55dcad92eb510b83aa8a35812cdd36ad2a854ab9d2a354663/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 15:24:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a27b753452060e55dcad92eb510b83aa8a35812cdd36ad2a854ab9d2a354663/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 15:24:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a27b753452060e55dcad92eb510b83aa8a35812cdd36ad2a854ab9d2a354663/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 15:24:51 compute-0 podman[374351]: 2026-01-20 15:24:51.53356769 +0000 UTC m=+0.137317184 container init 4117426ac024b11b7c364b17c3d47ec305671d3da92c49cb080364ff02ac58c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_heyrovsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 20 15:24:51 compute-0 podman[374351]: 2026-01-20 15:24:51.540821588 +0000 UTC m=+0.144571022 container start 4117426ac024b11b7c364b17c3d47ec305671d3da92c49cb080364ff02ac58c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_heyrovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 15:24:51 compute-0 podman[374351]: 2026-01-20 15:24:51.544108828 +0000 UTC m=+0.147858282 container attach 4117426ac024b11b7c364b17c3d47ec305671d3da92c49cb080364ff02ac58c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_heyrovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 20 15:24:52 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:24:52 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:24:52 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:24:52.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:24:52 compute-0 pensive_heyrovsky[374367]: --> passed data devices: 0 physical, 1 LVM
Jan 20 15:24:52 compute-0 pensive_heyrovsky[374367]: --> relative data size: 1.0
Jan 20 15:24:52 compute-0 pensive_heyrovsky[374367]: --> All data devices are unavailable
Jan 20 15:24:52 compute-0 systemd[1]: libpod-4117426ac024b11b7c364b17c3d47ec305671d3da92c49cb080364ff02ac58c0.scope: Deactivated successfully.
Jan 20 15:24:52 compute-0 podman[374351]: 2026-01-20 15:24:52.329932229 +0000 UTC m=+0.933681723 container died 4117426ac024b11b7c364b17c3d47ec305671d3da92c49cb080364ff02ac58c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_heyrovsky, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 15:24:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-8a27b753452060e55dcad92eb510b83aa8a35812cdd36ad2a854ab9d2a354663-merged.mount: Deactivated successfully.
Jan 20 15:24:52 compute-0 podman[374351]: 2026-01-20 15:24:52.39602011 +0000 UTC m=+0.999769544 container remove 4117426ac024b11b7c364b17c3d47ec305671d3da92c49cb080364ff02ac58c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_heyrovsky, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 15:24:52 compute-0 systemd[1]: libpod-conmon-4117426ac024b11b7c364b17c3d47ec305671d3da92c49cb080364ff02ac58c0.scope: Deactivated successfully.
Jan 20 15:24:52 compute-0 sudo[374244]: pam_unix(sudo:session): session closed for user root
Jan 20 15:24:52 compute-0 sudo[374396]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:24:52 compute-0 sudo[374396]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:24:52 compute-0 sudo[374396]: pam_unix(sudo:session): session closed for user root
Jan 20 15:24:52 compute-0 sudo[374421]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:24:52 compute-0 sudo[374421]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:24:52 compute-0 sudo[374421]: pam_unix(sudo:session): session closed for user root
Jan 20 15:24:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:24:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:24:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:24:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:24:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:24:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:24:52 compute-0 sudo[374446]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:24:52 compute-0 sudo[374446]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:24:52 compute-0 sudo[374446]: pam_unix(sudo:session): session closed for user root
Jan 20 15:24:52 compute-0 ceph-mon[74360]: pgmap v3080: 321 pgs: 321 active+clean; 198 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 754 KiB/s rd, 2.1 MiB/s wr, 70 op/s
Jan 20 15:24:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Optimize plan auto_2026-01-20_15:24:52
Jan 20 15:24:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 15:24:52 compute-0 ceph-mgr[74653]: [balancer INFO root] do_upmap
Jan 20 15:24:52 compute-0 ceph-mgr[74653]: [balancer INFO root] pools ['default.rgw.meta', 'default.rgw.control', 'cephfs.cephfs.data', 'volumes', '.rgw.root', '.mgr', 'vms', 'cephfs.cephfs.meta', 'images', 'backups', 'default.rgw.log']
Jan 20 15:24:52 compute-0 ceph-mgr[74653]: [balancer INFO root] prepared 0/10 changes
Jan 20 15:24:52 compute-0 sudo[374471]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- lvm list --format json
Jan 20 15:24:52 compute-0 sudo[374471]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:24:52 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:24:52 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3081: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Jan 20 15:24:52 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:24:52 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:24:52 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:24:52.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:24:53 compute-0 podman[374536]: 2026-01-20 15:24:53.005058832 +0000 UTC m=+0.041294676 container create d9dd1f03749968ea3fea74ebc75184e455501ad62494aea024f9fcad0efda424 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_brattain, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 20 15:24:53 compute-0 systemd[1]: Started libpod-conmon-d9dd1f03749968ea3fea74ebc75184e455501ad62494aea024f9fcad0efda424.scope.
Jan 20 15:24:53 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:24:53 compute-0 podman[374536]: 2026-01-20 15:24:52.987677879 +0000 UTC m=+0.023913743 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:24:53 compute-0 podman[374536]: 2026-01-20 15:24:53.092333212 +0000 UTC m=+0.128569076 container init d9dd1f03749968ea3fea74ebc75184e455501ad62494aea024f9fcad0efda424 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_brattain, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 20 15:24:53 compute-0 podman[374536]: 2026-01-20 15:24:53.098366216 +0000 UTC m=+0.134602060 container start d9dd1f03749968ea3fea74ebc75184e455501ad62494aea024f9fcad0efda424 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_brattain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 15:24:53 compute-0 podman[374536]: 2026-01-20 15:24:53.102031376 +0000 UTC m=+0.138267220 container attach d9dd1f03749968ea3fea74ebc75184e455501ad62494aea024f9fcad0efda424 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_brattain, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 20 15:24:53 compute-0 fervent_brattain[374552]: 167 167
Jan 20 15:24:53 compute-0 systemd[1]: libpod-d9dd1f03749968ea3fea74ebc75184e455501ad62494aea024f9fcad0efda424.scope: Deactivated successfully.
Jan 20 15:24:53 compute-0 podman[374536]: 2026-01-20 15:24:53.104833353 +0000 UTC m=+0.141069237 container died d9dd1f03749968ea3fea74ebc75184e455501ad62494aea024f9fcad0efda424 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_brattain, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 15:24:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-ff1f8c2be1e012f12e79498053e35796f048f28b2d3dae6bf70e7060461753b0-merged.mount: Deactivated successfully.
Jan 20 15:24:53 compute-0 podman[374536]: 2026-01-20 15:24:53.146891659 +0000 UTC m=+0.183127503 container remove d9dd1f03749968ea3fea74ebc75184e455501ad62494aea024f9fcad0efda424 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_brattain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 20 15:24:53 compute-0 systemd[1]: libpod-conmon-d9dd1f03749968ea3fea74ebc75184e455501ad62494aea024f9fcad0efda424.scope: Deactivated successfully.
Jan 20 15:24:53 compute-0 podman[374577]: 2026-01-20 15:24:53.313250343 +0000 UTC m=+0.041923694 container create d2f721446a6ccd764016bfe42d2128c23b7218830ac3b5a5109df909b08f7710 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_saha, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:24:53 compute-0 systemd[1]: Started libpod-conmon-d2f721446a6ccd764016bfe42d2128c23b7218830ac3b5a5109df909b08f7710.scope.
Jan 20 15:24:53 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:24:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6527fe3b6c238369a6990cfb85d8948f2dd19345f055b6326eef915482bcf510/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 15:24:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6527fe3b6c238369a6990cfb85d8948f2dd19345f055b6326eef915482bcf510/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 15:24:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6527fe3b6c238369a6990cfb85d8948f2dd19345f055b6326eef915482bcf510/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 15:24:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6527fe3b6c238369a6990cfb85d8948f2dd19345f055b6326eef915482bcf510/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 15:24:53 compute-0 podman[374577]: 2026-01-20 15:24:53.296016814 +0000 UTC m=+0.024690185 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:24:53 compute-0 podman[374577]: 2026-01-20 15:24:53.403110953 +0000 UTC m=+0.131784334 container init d2f721446a6ccd764016bfe42d2128c23b7218830ac3b5a5109df909b08f7710 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_saha, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 20 15:24:53 compute-0 podman[374577]: 2026-01-20 15:24:53.412852129 +0000 UTC m=+0.141525490 container start d2f721446a6ccd764016bfe42d2128c23b7218830ac3b5a5109df909b08f7710 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_saha, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 20 15:24:53 compute-0 podman[374577]: 2026-01-20 15:24:53.416280342 +0000 UTC m=+0.144953693 container attach d2f721446a6ccd764016bfe42d2128c23b7218830ac3b5a5109df909b08f7710 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_saha, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 20 15:24:53 compute-0 nova_compute[250018]: 2026-01-20 15:24:53.431 250022 INFO nova.compute.manager [None req-ac4a2319-c4cc-4cf3-b162-faf756670ce4 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: d92c0bb3-a866-40ac-a664-d100a6dbbc6b] Get console output
Jan 20 15:24:53 compute-0 nova_compute[250018]: 2026-01-20 15:24:53.440 331017 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Jan 20 15:24:53 compute-0 nova_compute[250018]: 2026-01-20 15:24:53.708 250022 DEBUG nova.objects.instance [None req-27fc402a-1d33-4e6e-b269-555df75dd0c7 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lazy-loading 'pci_devices' on Instance uuid d92c0bb3-a866-40ac-a664-d100a6dbbc6b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:24:53 compute-0 nova_compute[250018]: 2026-01-20 15:24:53.745 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768922693.7435696, d92c0bb3-a866-40ac-a664-d100a6dbbc6b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:24:53 compute-0 nova_compute[250018]: 2026-01-20 15:24:53.747 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: d92c0bb3-a866-40ac-a664-d100a6dbbc6b] VM Paused (Lifecycle Event)
Jan 20 15:24:53 compute-0 nova_compute[250018]: 2026-01-20 15:24:53.777 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: d92c0bb3-a866-40ac-a664-d100a6dbbc6b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:24:53 compute-0 nova_compute[250018]: 2026-01-20 15:24:53.789 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: d92c0bb3-a866-40ac-a664-d100a6dbbc6b] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:24:53 compute-0 nova_compute[250018]: 2026-01-20 15:24:53.814 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: d92c0bb3-a866-40ac-a664-d100a6dbbc6b] During sync_power_state the instance has a pending task (suspending). Skip.
Jan 20 15:24:54 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:24:54 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:24:54 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:24:54.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:24:54 compute-0 nice_saha[374594]: {
Jan 20 15:24:54 compute-0 nice_saha[374594]:     "0": [
Jan 20 15:24:54 compute-0 nice_saha[374594]:         {
Jan 20 15:24:54 compute-0 nice_saha[374594]:             "devices": [
Jan 20 15:24:54 compute-0 nice_saha[374594]:                 "/dev/loop3"
Jan 20 15:24:54 compute-0 nice_saha[374594]:             ],
Jan 20 15:24:54 compute-0 nice_saha[374594]:             "lv_name": "ceph_lv0",
Jan 20 15:24:54 compute-0 nice_saha[374594]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 15:24:54 compute-0 nice_saha[374594]:             "lv_size": "7511998464",
Jan 20 15:24:54 compute-0 nice_saha[374594]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e399cf45-e6b6-5393-99f1-75c601d3f188,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afc3740a-ccee-46f8-b343-22648cc89376,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 20 15:24:54 compute-0 nice_saha[374594]:             "lv_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 15:24:54 compute-0 nice_saha[374594]:             "name": "ceph_lv0",
Jan 20 15:24:54 compute-0 nice_saha[374594]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 15:24:54 compute-0 nice_saha[374594]:             "tags": {
Jan 20 15:24:54 compute-0 nice_saha[374594]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 15:24:54 compute-0 nice_saha[374594]:                 "ceph.block_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 15:24:54 compute-0 nice_saha[374594]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 15:24:54 compute-0 nice_saha[374594]:                 "ceph.cluster_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 15:24:54 compute-0 nice_saha[374594]:                 "ceph.cluster_name": "ceph",
Jan 20 15:24:54 compute-0 nice_saha[374594]:                 "ceph.crush_device_class": "",
Jan 20 15:24:54 compute-0 nice_saha[374594]:                 "ceph.encrypted": "0",
Jan 20 15:24:54 compute-0 nice_saha[374594]:                 "ceph.osd_fsid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 15:24:54 compute-0 nice_saha[374594]:                 "ceph.osd_id": "0",
Jan 20 15:24:54 compute-0 nice_saha[374594]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 15:24:54 compute-0 nice_saha[374594]:                 "ceph.type": "block",
Jan 20 15:24:54 compute-0 nice_saha[374594]:                 "ceph.vdo": "0"
Jan 20 15:24:54 compute-0 nice_saha[374594]:             },
Jan 20 15:24:54 compute-0 nice_saha[374594]:             "type": "block",
Jan 20 15:24:54 compute-0 nice_saha[374594]:             "vg_name": "ceph_vg0"
Jan 20 15:24:54 compute-0 nice_saha[374594]:         }
Jan 20 15:24:54 compute-0 nice_saha[374594]:     ]
Jan 20 15:24:54 compute-0 nice_saha[374594]: }
Jan 20 15:24:54 compute-0 systemd[1]: libpod-d2f721446a6ccd764016bfe42d2128c23b7218830ac3b5a5109df909b08f7710.scope: Deactivated successfully.
Jan 20 15:24:54 compute-0 podman[374577]: 2026-01-20 15:24:54.221317336 +0000 UTC m=+0.949990727 container died d2f721446a6ccd764016bfe42d2128c23b7218830ac3b5a5109df909b08f7710 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_saha, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 15:24:54 compute-0 ceph-mon[74360]: pgmap v3081: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Jan 20 15:24:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-6527fe3b6c238369a6990cfb85d8948f2dd19345f055b6326eef915482bcf510-merged.mount: Deactivated successfully.
Jan 20 15:24:54 compute-0 podman[374577]: 2026-01-20 15:24:54.796184847 +0000 UTC m=+1.524858208 container remove d2f721446a6ccd764016bfe42d2128c23b7218830ac3b5a5109df909b08f7710 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_saha, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 20 15:24:54 compute-0 kernel: tap3a19935a-bf (unregistering): left promiscuous mode
Jan 20 15:24:54 compute-0 systemd[1]: libpod-conmon-d2f721446a6ccd764016bfe42d2128c23b7218830ac3b5a5109df909b08f7710.scope: Deactivated successfully.
Jan 20 15:24:54 compute-0 NetworkManager[48960]: <info>  [1768922694.8061] device (tap3a19935a-bf): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 15:24:54 compute-0 ovn_controller[148666]: 2026-01-20T15:24:54Z|00734|binding|INFO|Releasing lport 3a19935a-bf2c-4d28-b687-e8294d1773dc from this chassis (sb_readonly=0)
Jan 20 15:24:54 compute-0 ovn_controller[148666]: 2026-01-20T15:24:54Z|00735|binding|INFO|Setting lport 3a19935a-bf2c-4d28-b687-e8294d1773dc down in Southbound
Jan 20 15:24:54 compute-0 ovn_controller[148666]: 2026-01-20T15:24:54Z|00736|binding|INFO|Removing iface tap3a19935a-bf ovn-installed in OVS
Jan 20 15:24:54 compute-0 nova_compute[250018]: 2026-01-20 15:24:54.814 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:24:54 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3082: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Jan 20 15:24:54 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:24:54.823 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:04:8d:2f 10.100.0.6'], port_security=['fa:16:3e:04:8d:2f 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'd92c0bb3-a866-40ac-a664-d100a6dbbc6b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9aa12a4e-d8a3-46c3-928f-fca08b856d45', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1ed5feeeafe7448a8efb47ab975b0ead', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f1c1516c-e4c9-43f2-a8ac-732f711a4188', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.246'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0478e54e-d17b-4524-b6e2-b1a7413ccfd8, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=3a19935a-bf2c-4d28-b687-e8294d1773dc) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:24:54 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:24:54.824 160071 INFO neutron.agent.ovn.metadata.agent [-] Port 3a19935a-bf2c-4d28-b687-e8294d1773dc in datapath 9aa12a4e-d8a3-46c3-928f-fca08b856d45 unbound from our chassis
Jan 20 15:24:54 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:24:54.825 160071 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9aa12a4e-d8a3-46c3-928f-fca08b856d45, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 15:24:54 compute-0 sudo[374471]: pam_unix(sudo:session): session closed for user root
Jan 20 15:24:54 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:24:54.828 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[894386ef-7af4-4a55-8b54-a28bdfa20b13]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:24:54 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:24:54.829 160071 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9aa12a4e-d8a3-46c3-928f-fca08b856d45 namespace which is not needed anymore
Jan 20 15:24:54 compute-0 nova_compute[250018]: 2026-01-20 15:24:54.840 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:24:54 compute-0 systemd[1]: machine-qemu\x2d88\x2dinstance\x2d000000c9.scope: Deactivated successfully.
Jan 20 15:24:54 compute-0 systemd[1]: machine-qemu\x2d88\x2dinstance\x2d000000c9.scope: Consumed 14.429s CPU time.
Jan 20 15:24:54 compute-0 systemd-machined[216401]: Machine qemu-88-instance-000000c9 terminated.
Jan 20 15:24:54 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:24:54 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:24:54 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:24:54.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:24:54 compute-0 sudo[374623]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:24:54 compute-0 sudo[374623]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:24:54 compute-0 sudo[374623]: pam_unix(sudo:session): session closed for user root
Jan 20 15:24:54 compute-0 sshd-session[374534]: Invalid user test from 134.122.57.138 port 50032
Jan 20 15:24:54 compute-0 sudo[374667]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:24:54 compute-0 sudo[374667]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:24:54 compute-0 nova_compute[250018]: 2026-01-20 15:24:54.977 250022 DEBUG nova.compute.manager [None req-27fc402a-1d33-4e6e-b269-555df75dd0c7 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: d92c0bb3-a866-40ac-a664-d100a6dbbc6b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:24:54 compute-0 sudo[374667]: pam_unix(sudo:session): session closed for user root
Jan 20 15:24:54 compute-0 neutron-haproxy-ovnmeta-9aa12a4e-d8a3-46c3-928f-fca08b856d45[373920]: [NOTICE]   (373924) : haproxy version is 2.8.14-c23fe91
Jan 20 15:24:54 compute-0 neutron-haproxy-ovnmeta-9aa12a4e-d8a3-46c3-928f-fca08b856d45[373920]: [NOTICE]   (373924) : path to executable is /usr/sbin/haproxy
Jan 20 15:24:54 compute-0 neutron-haproxy-ovnmeta-9aa12a4e-d8a3-46c3-928f-fca08b856d45[373920]: [WARNING]  (373924) : Exiting Master process...
Jan 20 15:24:54 compute-0 neutron-haproxy-ovnmeta-9aa12a4e-d8a3-46c3-928f-fca08b856d45[373920]: [ALERT]    (373924) : Current worker (373926) exited with code 143 (Terminated)
Jan 20 15:24:54 compute-0 neutron-haproxy-ovnmeta-9aa12a4e-d8a3-46c3-928f-fca08b856d45[373920]: [WARNING]  (373924) : All workers exited. Exiting... (0)
Jan 20 15:24:54 compute-0 systemd[1]: libpod-bc5d9bde5be7084acb4bcb8f400a02f5b61e4f5800171396ee22363c720932f4.scope: Deactivated successfully.
Jan 20 15:24:55 compute-0 podman[374668]: 2026-01-20 15:24:55.000937518 +0000 UTC m=+0.061759504 container died bc5d9bde5be7084acb4bcb8f400a02f5b61e4f5800171396ee22363c720932f4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9aa12a4e-d8a3-46c3-928f-fca08b856d45, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:24:55 compute-0 nova_compute[250018]: 2026-01-20 15:24:55.025 250022 DEBUG nova.compute.manager [req-fa9d89a9-4a7b-4363-a587-8f78416b754a req-6cb39ee4-4ea7-4bea-b594-b56288c01e34 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d92c0bb3-a866-40ac-a664-d100a6dbbc6b] Received event network-vif-unplugged-3a19935a-bf2c-4d28-b687-e8294d1773dc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:24:55 compute-0 nova_compute[250018]: 2026-01-20 15:24:55.026 250022 DEBUG oslo_concurrency.lockutils [req-fa9d89a9-4a7b-4363-a587-8f78416b754a req-6cb39ee4-4ea7-4bea-b594-b56288c01e34 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "d92c0bb3-a866-40ac-a664-d100a6dbbc6b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:24:55 compute-0 nova_compute[250018]: 2026-01-20 15:24:55.026 250022 DEBUG oslo_concurrency.lockutils [req-fa9d89a9-4a7b-4363-a587-8f78416b754a req-6cb39ee4-4ea7-4bea-b594-b56288c01e34 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d92c0bb3-a866-40ac-a664-d100a6dbbc6b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:24:55 compute-0 nova_compute[250018]: 2026-01-20 15:24:55.026 250022 DEBUG oslo_concurrency.lockutils [req-fa9d89a9-4a7b-4363-a587-8f78416b754a req-6cb39ee4-4ea7-4bea-b594-b56288c01e34 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d92c0bb3-a866-40ac-a664-d100a6dbbc6b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:24:55 compute-0 nova_compute[250018]: 2026-01-20 15:24:55.027 250022 DEBUG nova.compute.manager [req-fa9d89a9-4a7b-4363-a587-8f78416b754a req-6cb39ee4-4ea7-4bea-b594-b56288c01e34 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d92c0bb3-a866-40ac-a664-d100a6dbbc6b] No waiting events found dispatching network-vif-unplugged-3a19935a-bf2c-4d28-b687-e8294d1773dc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:24:55 compute-0 nova_compute[250018]: 2026-01-20 15:24:55.027 250022 WARNING nova.compute.manager [req-fa9d89a9-4a7b-4363-a587-8f78416b754a req-6cb39ee4-4ea7-4bea-b594-b56288c01e34 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d92c0bb3-a866-40ac-a664-d100a6dbbc6b] Received unexpected event network-vif-unplugged-3a19935a-bf2c-4d28-b687-e8294d1773dc for instance with vm_state active and task_state suspending.
Jan 20 15:24:55 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-bc5d9bde5be7084acb4bcb8f400a02f5b61e4f5800171396ee22363c720932f4-userdata-shm.mount: Deactivated successfully.
Jan 20 15:24:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-4d50503b85b34fdeedd2441add50bb915a133803ada60bd8256407195608f30e-merged.mount: Deactivated successfully.
Jan 20 15:24:55 compute-0 podman[374668]: 2026-01-20 15:24:55.045340018 +0000 UTC m=+0.106161994 container cleanup bc5d9bde5be7084acb4bcb8f400a02f5b61e4f5800171396ee22363c720932f4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9aa12a4e-d8a3-46c3-928f-fca08b856d45, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Jan 20 15:24:55 compute-0 systemd[1]: libpod-conmon-bc5d9bde5be7084acb4bcb8f400a02f5b61e4f5800171396ee22363c720932f4.scope: Deactivated successfully.
Jan 20 15:24:55 compute-0 sudo[374715]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:24:55 compute-0 sudo[374715]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:24:55 compute-0 sudo[374715]: pam_unix(sudo:session): session closed for user root
Jan 20 15:24:55 compute-0 nova_compute[250018]: 2026-01-20 15:24:55.089 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:24:55 compute-0 podman[374756]: 2026-01-20 15:24:55.116521268 +0000 UTC m=+0.045771558 container remove bc5d9bde5be7084acb4bcb8f400a02f5b61e4f5800171396ee22363c720932f4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9aa12a4e-d8a3-46c3-928f-fca08b856d45, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 20 15:24:55 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:24:55.123 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[e649fb30-b64f-4c97-910f-71a1a678aa74]: (4, ('Tue Jan 20 03:24:54 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-9aa12a4e-d8a3-46c3-928f-fca08b856d45 (bc5d9bde5be7084acb4bcb8f400a02f5b61e4f5800171396ee22363c720932f4)\nbc5d9bde5be7084acb4bcb8f400a02f5b61e4f5800171396ee22363c720932f4\nTue Jan 20 03:24:55 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-9aa12a4e-d8a3-46c3-928f-fca08b856d45 (bc5d9bde5be7084acb4bcb8f400a02f5b61e4f5800171396ee22363c720932f4)\nbc5d9bde5be7084acb4bcb8f400a02f5b61e4f5800171396ee22363c720932f4\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:24:55 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:24:55.126 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[77cb20a9-0374-41b0-8551-419a3d1e2172]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:24:55 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:24:55.127 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9aa12a4e-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:24:55 compute-0 nova_compute[250018]: 2026-01-20 15:24:55.129 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:24:55 compute-0 kernel: tap9aa12a4e-d0: left promiscuous mode
Jan 20 15:24:55 compute-0 sudo[374763]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- raw list --format json
Jan 20 15:24:55 compute-0 sudo[374763]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:24:55 compute-0 nova_compute[250018]: 2026-01-20 15:24:55.154 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:24:55 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:24:55.158 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[7d4fcd28-73a0-41a5-ab14-ecd809c9d133]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:24:55 compute-0 sshd-session[374534]: Connection closed by invalid user test 134.122.57.138 port 50032 [preauth]
Jan 20 15:24:55 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:24:55.175 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[918a8140-834e-4767-a5a5-7fbf89cc90b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:24:55 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:24:55.178 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[817b993b-deda-43af-96a6-ac33ebd78de3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:24:55 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:24:55.197 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[1841f571-faa7-4ba1-9c7f-d0f43f375190]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 861340, 'reachable_time': 31628, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 374800, 'error': None, 'target': 'ovnmeta-9aa12a4e-d8a3-46c3-928f-fca08b856d45', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:24:55 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:24:55.201 160517 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9aa12a4e-d8a3-46c3-928f-fca08b856d45 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 15:24:55 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:24:55.201 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[d2bb45ff-90fb-4ce0-9961-9f887f966461]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:24:55 compute-0 systemd[1]: run-netns-ovnmeta\x2d9aa12a4e\x2dd8a3\x2d46c3\x2d928f\x2dfca08b856d45.mount: Deactivated successfully.
Jan 20 15:24:55 compute-0 podman[374840]: 2026-01-20 15:24:55.493881735 +0000 UTC m=+0.046899030 container create 85b78500b45ad830b3603e8722e90ed5edbc5babbf9861ccd782840c3880f155 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_varahamihira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 15:24:55 compute-0 systemd[1]: Started libpod-conmon-85b78500b45ad830b3603e8722e90ed5edbc5babbf9861ccd782840c3880f155.scope.
Jan 20 15:24:55 compute-0 podman[374840]: 2026-01-20 15:24:55.474795865 +0000 UTC m=+0.027813150 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:24:55 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:24:55 compute-0 podman[374840]: 2026-01-20 15:24:55.590111609 +0000 UTC m=+0.143128914 container init 85b78500b45ad830b3603e8722e90ed5edbc5babbf9861ccd782840c3880f155 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_varahamihira, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 15:24:55 compute-0 podman[374840]: 2026-01-20 15:24:55.596370969 +0000 UTC m=+0.149388234 container start 85b78500b45ad830b3603e8722e90ed5edbc5babbf9861ccd782840c3880f155 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_varahamihira, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 20 15:24:55 compute-0 podman[374840]: 2026-01-20 15:24:55.599625587 +0000 UTC m=+0.152642852 container attach 85b78500b45ad830b3603e8722e90ed5edbc5babbf9861ccd782840c3880f155 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_varahamihira, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 20 15:24:55 compute-0 systemd[1]: libpod-85b78500b45ad830b3603e8722e90ed5edbc5babbf9861ccd782840c3880f155.scope: Deactivated successfully.
Jan 20 15:24:55 compute-0 silly_varahamihira[374856]: 167 167
Jan 20 15:24:55 compute-0 conmon[374856]: conmon 85b78500b45ad830b360 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-85b78500b45ad830b3603e8722e90ed5edbc5babbf9861ccd782840c3880f155.scope/container/memory.events
Jan 20 15:24:55 compute-0 podman[374840]: 2026-01-20 15:24:55.603228876 +0000 UTC m=+0.156246141 container died 85b78500b45ad830b3603e8722e90ed5edbc5babbf9861ccd782840c3880f155 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_varahamihira, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 20 15:24:55 compute-0 podman[374840]: 2026-01-20 15:24:55.639548876 +0000 UTC m=+0.192566141 container remove 85b78500b45ad830b3603e8722e90ed5edbc5babbf9861ccd782840c3880f155 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_varahamihira, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 15:24:55 compute-0 systemd[1]: libpod-conmon-85b78500b45ad830b3603e8722e90ed5edbc5babbf9861ccd782840c3880f155.scope: Deactivated successfully.
Jan 20 15:24:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-43047750aa7416061b2b853d0f2a6e7e6263ce0854eaa3bfa11685712debf1ea-merged.mount: Deactivated successfully.
Jan 20 15:24:55 compute-0 podman[374881]: 2026-01-20 15:24:55.800165125 +0000 UTC m=+0.038272875 container create 5ed30d620ad7fcff63462eb72f2984df4fd2d4e84080de772b7b28bc3c92b7b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_golick, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 15:24:55 compute-0 systemd[1]: Started libpod-conmon-5ed30d620ad7fcff63462eb72f2984df4fd2d4e84080de772b7b28bc3c92b7b5.scope.
Jan 20 15:24:55 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:24:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f939efc934cb45e6d49affbea994ccadde5d78cf9e6b5ba5031b86bc9cd81d3f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 15:24:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f939efc934cb45e6d49affbea994ccadde5d78cf9e6b5ba5031b86bc9cd81d3f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 15:24:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f939efc934cb45e6d49affbea994ccadde5d78cf9e6b5ba5031b86bc9cd81d3f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 15:24:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f939efc934cb45e6d49affbea994ccadde5d78cf9e6b5ba5031b86bc9cd81d3f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 15:24:55 compute-0 podman[374881]: 2026-01-20 15:24:55.876617868 +0000 UTC m=+0.114725638 container init 5ed30d620ad7fcff63462eb72f2984df4fd2d4e84080de772b7b28bc3c92b7b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_golick, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:24:55 compute-0 podman[374881]: 2026-01-20 15:24:55.783551772 +0000 UTC m=+0.021659542 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:24:55 compute-0 podman[374881]: 2026-01-20 15:24:55.884632647 +0000 UTC m=+0.122740397 container start 5ed30d620ad7fcff63462eb72f2984df4fd2d4e84080de772b7b28bc3c92b7b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_golick, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 20 15:24:55 compute-0 podman[374881]: 2026-01-20 15:24:55.888274965 +0000 UTC m=+0.126382735 container attach 5ed30d620ad7fcff63462eb72f2984df4fd2d4e84080de772b7b28bc3c92b7b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_golick, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 20 15:24:55 compute-0 nova_compute[250018]: 2026-01-20 15:24:55.903 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:24:56 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:24:56 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:24:56 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:24:56.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:24:56 compute-0 nova_compute[250018]: 2026-01-20 15:24:56.373 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:24:56 compute-0 youthful_golick[374898]: {
Jan 20 15:24:56 compute-0 youthful_golick[374898]:     "afc3740a-ccee-46f8-b343-22648cc89376": {
Jan 20 15:24:56 compute-0 youthful_golick[374898]:         "ceph_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 15:24:56 compute-0 youthful_golick[374898]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 20 15:24:56 compute-0 youthful_golick[374898]:         "osd_id": 0,
Jan 20 15:24:56 compute-0 youthful_golick[374898]:         "osd_uuid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 15:24:56 compute-0 youthful_golick[374898]:         "type": "bluestore"
Jan 20 15:24:56 compute-0 youthful_golick[374898]:     }
Jan 20 15:24:56 compute-0 youthful_golick[374898]: }
Jan 20 15:24:56 compute-0 systemd[1]: libpod-5ed30d620ad7fcff63462eb72f2984df4fd2d4e84080de772b7b28bc3c92b7b5.scope: Deactivated successfully.
Jan 20 15:24:56 compute-0 podman[374881]: 2026-01-20 15:24:56.692762975 +0000 UTC m=+0.930870735 container died 5ed30d620ad7fcff63462eb72f2984df4fd2d4e84080de772b7b28bc3c92b7b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_golick, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 15:24:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-f939efc934cb45e6d49affbea994ccadde5d78cf9e6b5ba5031b86bc9cd81d3f-merged.mount: Deactivated successfully.
Jan 20 15:24:56 compute-0 podman[374881]: 2026-01-20 15:24:56.746661805 +0000 UTC m=+0.984769545 container remove 5ed30d620ad7fcff63462eb72f2984df4fd2d4e84080de772b7b28bc3c92b7b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_golick, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 20 15:24:56 compute-0 systemd[1]: libpod-conmon-5ed30d620ad7fcff63462eb72f2984df4fd2d4e84080de772b7b28bc3c92b7b5.scope: Deactivated successfully.
Jan 20 15:24:56 compute-0 ceph-mon[74360]: pgmap v3082: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Jan 20 15:24:56 compute-0 sudo[374763]: pam_unix(sudo:session): session closed for user root
Jan 20 15:24:56 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 15:24:56 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:24:56 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 15:24:56 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:24:56 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev a07c816b-6bd5-4c29-9610-dd215db09af1 does not exist
Jan 20 15:24:56 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 27e3a4d6-7e05-44e4-9504-b6f35a787431 does not exist
Jan 20 15:24:56 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev c8a3019c-2e64-4440-a720-72736715709a does not exist
Jan 20 15:24:56 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3083: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 20 15:24:56 compute-0 sudo[374934]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:24:56 compute-0 sudo[374934]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:24:56 compute-0 sudo[374934]: pam_unix(sudo:session): session closed for user root
Jan 20 15:24:56 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:24:56 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:24:56 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:24:56.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:24:56 compute-0 sudo[374959]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 15:24:56 compute-0 sudo[374959]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:24:56 compute-0 sudo[374959]: pam_unix(sudo:session): session closed for user root
Jan 20 15:24:56 compute-0 nova_compute[250018]: 2026-01-20 15:24:56.984 250022 INFO nova.compute.manager [None req-a2c27049-e3ad-4852-9c2b-ffa2197f0df4 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: d92c0bb3-a866-40ac-a664-d100a6dbbc6b] Get console output
Jan 20 15:24:57 compute-0 nova_compute[250018]: 2026-01-20 15:24:57.130 250022 DEBUG nova.compute.manager [req-3faa408e-0a42-4195-a61a-153f76c0c948 req-4029d8f2-2629-4b7a-aa9b-728a7d44ead9 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d92c0bb3-a866-40ac-a664-d100a6dbbc6b] Received event network-vif-plugged-3a19935a-bf2c-4d28-b687-e8294d1773dc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:24:57 compute-0 nova_compute[250018]: 2026-01-20 15:24:57.131 250022 DEBUG oslo_concurrency.lockutils [req-3faa408e-0a42-4195-a61a-153f76c0c948 req-4029d8f2-2629-4b7a-aa9b-728a7d44ead9 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "d92c0bb3-a866-40ac-a664-d100a6dbbc6b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:24:57 compute-0 nova_compute[250018]: 2026-01-20 15:24:57.131 250022 DEBUG oslo_concurrency.lockutils [req-3faa408e-0a42-4195-a61a-153f76c0c948 req-4029d8f2-2629-4b7a-aa9b-728a7d44ead9 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d92c0bb3-a866-40ac-a664-d100a6dbbc6b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:24:57 compute-0 nova_compute[250018]: 2026-01-20 15:24:57.131 250022 DEBUG oslo_concurrency.lockutils [req-3faa408e-0a42-4195-a61a-153f76c0c948 req-4029d8f2-2629-4b7a-aa9b-728a7d44ead9 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d92c0bb3-a866-40ac-a664-d100a6dbbc6b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:24:57 compute-0 nova_compute[250018]: 2026-01-20 15:24:57.132 250022 DEBUG nova.compute.manager [req-3faa408e-0a42-4195-a61a-153f76c0c948 req-4029d8f2-2629-4b7a-aa9b-728a7d44ead9 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d92c0bb3-a866-40ac-a664-d100a6dbbc6b] No waiting events found dispatching network-vif-plugged-3a19935a-bf2c-4d28-b687-e8294d1773dc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:24:57 compute-0 nova_compute[250018]: 2026-01-20 15:24:57.132 250022 WARNING nova.compute.manager [req-3faa408e-0a42-4195-a61a-153f76c0c948 req-4029d8f2-2629-4b7a-aa9b-728a7d44ead9 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d92c0bb3-a866-40ac-a664-d100a6dbbc6b] Received unexpected event network-vif-plugged-3a19935a-bf2c-4d28-b687-e8294d1773dc for instance with vm_state suspended and task_state None.
Jan 20 15:24:57 compute-0 nova_compute[250018]: 2026-01-20 15:24:57.193 250022 INFO nova.compute.manager [None req-b73b3761-4af7-46de-9758-1e1bef9175c5 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: d92c0bb3-a866-40ac-a664-d100a6dbbc6b] Resuming
Jan 20 15:24:57 compute-0 nova_compute[250018]: 2026-01-20 15:24:57.194 250022 DEBUG nova.objects.instance [None req-b73b3761-4af7-46de-9758-1e1bef9175c5 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lazy-loading 'flavor' on Instance uuid d92c0bb3-a866-40ac-a664-d100a6dbbc6b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:24:57 compute-0 nova_compute[250018]: 2026-01-20 15:24:57.225 250022 DEBUG oslo_concurrency.lockutils [None req-b73b3761-4af7-46de-9758-1e1bef9175c5 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Acquiring lock "refresh_cache-d92c0bb3-a866-40ac-a664-d100a6dbbc6b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:24:57 compute-0 nova_compute[250018]: 2026-01-20 15:24:57.225 250022 DEBUG oslo_concurrency.lockutils [None req-b73b3761-4af7-46de-9758-1e1bef9175c5 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Acquired lock "refresh_cache-d92c0bb3-a866-40ac-a664-d100a6dbbc6b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:24:57 compute-0 nova_compute[250018]: 2026-01-20 15:24:57.225 250022 DEBUG nova.network.neutron [None req-b73b3761-4af7-46de-9758-1e1bef9175c5 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: d92c0bb3-a866-40ac-a664-d100a6dbbc6b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 15:24:57 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:24:57 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:24:57 compute-0 ceph-mon[74360]: pgmap v3083: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 20 15:24:57 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:24:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 15:24:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 15:24:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 15:24:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 15:24:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 15:24:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 15:24:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 15:24:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 15:24:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 15:24:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 15:24:58 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:24:58 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:24:58 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:24:58.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:24:58 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3084: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 259 KiB/s rd, 1.6 MiB/s wr, 51 op/s
Jan 20 15:24:58 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:24:58 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:24:58 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:24:58.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:24:59 compute-0 ceph-mon[74360]: pgmap v3084: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 259 KiB/s rd, 1.6 MiB/s wr, 51 op/s
Jan 20 15:25:00 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:25:00 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:25:00 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:25:00.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:25:00 compute-0 nova_compute[250018]: 2026-01-20 15:25:00.378 250022 DEBUG nova.network.neutron [None req-b73b3761-4af7-46de-9758-1e1bef9175c5 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: d92c0bb3-a866-40ac-a664-d100a6dbbc6b] Updating instance_info_cache with network_info: [{"id": "3a19935a-bf2c-4d28-b687-e8294d1773dc", "address": "fa:16:3e:04:8d:2f", "network": {"id": "9aa12a4e-d8a3-46c3-928f-fca08b856d45", "bridge": "br-int", "label": "tempest-network-smoke--1838033144", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ed5feeeafe7448a8efb47ab975b0ead", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3a19935a-bf", "ovs_interfaceid": "3a19935a-bf2c-4d28-b687-e8294d1773dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:25:00 compute-0 nova_compute[250018]: 2026-01-20 15:25:00.395 250022 DEBUG oslo_concurrency.lockutils [None req-b73b3761-4af7-46de-9758-1e1bef9175c5 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Releasing lock "refresh_cache-d92c0bb3-a866-40ac-a664-d100a6dbbc6b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:25:00 compute-0 nova_compute[250018]: 2026-01-20 15:25:00.399 250022 DEBUG nova.virt.libvirt.vif [None req-b73b3761-4af7-46de-9758-1e1bef9175c5 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T15:24:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1934224565',display_name='tempest-TestNetworkAdvancedServerOps-server-1934224565',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1934224565',id=201,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBC7eRRiZX7Z0gwhADeTQp/GmdK+Gan6DSxYDy+ge9zWSe9WjzThIejOjnXbubm42+2V6mvuyTko1B/LIcp7s5Ksj2VFdJX32syZeRGa674Ht1jUDA1vmPNUafeYfNfojGA==',key_name='tempest-TestNetworkAdvancedServerOps-1914065002',keypairs=<?>,launch_index=0,launched_at=2026-01-20T15:24:33Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='1ed5feeeafe7448a8efb47ab975b0ead',ramdisk_id='',reservation_id='r-i99bna7o',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestNetworkAdvancedServerOps-175282664',owner_user_name='tempest-TestNetworkAdvancedServerOps-175282664-project-member'},tags=<?>,task_state='resuming',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T15:24:55Z,user_data=None,user_id='442a7a5cb8ea426a82be9762b262d171',uuid=d92c0bb3-a866-40ac-a664-d100a6dbbc6b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "3a19935a-bf2c-4d28-b687-e8294d1773dc", "address": "fa:16:3e:04:8d:2f", "network": {"id": "9aa12a4e-d8a3-46c3-928f-fca08b856d45", "bridge": "br-int", "label": "tempest-network-smoke--1838033144", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ed5feeeafe7448a8efb47ab975b0ead", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3a19935a-bf", "ovs_interfaceid": "3a19935a-bf2c-4d28-b687-e8294d1773dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 15:25:00 compute-0 nova_compute[250018]: 2026-01-20 15:25:00.400 250022 DEBUG nova.network.os_vif_util [None req-b73b3761-4af7-46de-9758-1e1bef9175c5 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Converting VIF {"id": "3a19935a-bf2c-4d28-b687-e8294d1773dc", "address": "fa:16:3e:04:8d:2f", "network": {"id": "9aa12a4e-d8a3-46c3-928f-fca08b856d45", "bridge": "br-int", "label": "tempest-network-smoke--1838033144", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ed5feeeafe7448a8efb47ab975b0ead", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3a19935a-bf", "ovs_interfaceid": "3a19935a-bf2c-4d28-b687-e8294d1773dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:25:00 compute-0 nova_compute[250018]: 2026-01-20 15:25:00.400 250022 DEBUG nova.network.os_vif_util [None req-b73b3761-4af7-46de-9758-1e1bef9175c5 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:04:8d:2f,bridge_name='br-int',has_traffic_filtering=True,id=3a19935a-bf2c-4d28-b687-e8294d1773dc,network=Network(9aa12a4e-d8a3-46c3-928f-fca08b856d45),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3a19935a-bf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:25:00 compute-0 nova_compute[250018]: 2026-01-20 15:25:00.401 250022 DEBUG os_vif [None req-b73b3761-4af7-46de-9758-1e1bef9175c5 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:04:8d:2f,bridge_name='br-int',has_traffic_filtering=True,id=3a19935a-bf2c-4d28-b687-e8294d1773dc,network=Network(9aa12a4e-d8a3-46c3-928f-fca08b856d45),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3a19935a-bf') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 15:25:00 compute-0 nova_compute[250018]: 2026-01-20 15:25:00.401 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:25:00 compute-0 nova_compute[250018]: 2026-01-20 15:25:00.402 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:25:00 compute-0 nova_compute[250018]: 2026-01-20 15:25:00.402 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:25:00 compute-0 nova_compute[250018]: 2026-01-20 15:25:00.404 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:25:00 compute-0 nova_compute[250018]: 2026-01-20 15:25:00.405 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3a19935a-bf, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:25:00 compute-0 nova_compute[250018]: 2026-01-20 15:25:00.405 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3a19935a-bf, col_values=(('external_ids', {'iface-id': '3a19935a-bf2c-4d28-b687-e8294d1773dc', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:04:8d:2f', 'vm-uuid': 'd92c0bb3-a866-40ac-a664-d100a6dbbc6b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:25:00 compute-0 nova_compute[250018]: 2026-01-20 15:25:00.406 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:25:00 compute-0 nova_compute[250018]: 2026-01-20 15:25:00.406 250022 INFO os_vif [None req-b73b3761-4af7-46de-9758-1e1bef9175c5 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:04:8d:2f,bridge_name='br-int',has_traffic_filtering=True,id=3a19935a-bf2c-4d28-b687-e8294d1773dc,network=Network(9aa12a4e-d8a3-46c3-928f-fca08b856d45),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3a19935a-bf')
Jan 20 15:25:00 compute-0 nova_compute[250018]: 2026-01-20 15:25:00.425 250022 DEBUG nova.objects.instance [None req-b73b3761-4af7-46de-9758-1e1bef9175c5 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lazy-loading 'numa_topology' on Instance uuid d92c0bb3-a866-40ac-a664-d100a6dbbc6b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:25:00 compute-0 kernel: tap3a19935a-bf: entered promiscuous mode
Jan 20 15:25:00 compute-0 NetworkManager[48960]: <info>  [1768922700.4877] manager: (tap3a19935a-bf): new Tun device (/org/freedesktop/NetworkManager/Devices/354)
Jan 20 15:25:00 compute-0 ovn_controller[148666]: 2026-01-20T15:25:00Z|00737|binding|INFO|Claiming lport 3a19935a-bf2c-4d28-b687-e8294d1773dc for this chassis.
Jan 20 15:25:00 compute-0 nova_compute[250018]: 2026-01-20 15:25:00.488 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:25:00 compute-0 ovn_controller[148666]: 2026-01-20T15:25:00Z|00738|binding|INFO|3a19935a-bf2c-4d28-b687-e8294d1773dc: Claiming fa:16:3e:04:8d:2f 10.100.0.6
Jan 20 15:25:00 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:25:00.497 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:04:8d:2f 10.100.0.6'], port_security=['fa:16:3e:04:8d:2f 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'd92c0bb3-a866-40ac-a664-d100a6dbbc6b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9aa12a4e-d8a3-46c3-928f-fca08b856d45', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1ed5feeeafe7448a8efb47ab975b0ead', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'f1c1516c-e4c9-43f2-a8ac-732f711a4188', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.246'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0478e54e-d17b-4524-b6e2-b1a7413ccfd8, chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=3a19935a-bf2c-4d28-b687-e8294d1773dc) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:25:00 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:25:00.498 160071 INFO neutron.agent.ovn.metadata.agent [-] Port 3a19935a-bf2c-4d28-b687-e8294d1773dc in datapath 9aa12a4e-d8a3-46c3-928f-fca08b856d45 bound to our chassis
Jan 20 15:25:00 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:25:00.499 160071 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9aa12a4e-d8a3-46c3-928f-fca08b856d45
Jan 20 15:25:00 compute-0 ovn_controller[148666]: 2026-01-20T15:25:00Z|00739|binding|INFO|Setting lport 3a19935a-bf2c-4d28-b687-e8294d1773dc ovn-installed in OVS
Jan 20 15:25:00 compute-0 ovn_controller[148666]: 2026-01-20T15:25:00Z|00740|binding|INFO|Setting lport 3a19935a-bf2c-4d28-b687-e8294d1773dc up in Southbound
Jan 20 15:25:00 compute-0 nova_compute[250018]: 2026-01-20 15:25:00.504 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:25:00 compute-0 nova_compute[250018]: 2026-01-20 15:25:00.509 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:25:00 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:25:00.510 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[0f45cc9b-1cec-4217-9813-375abda5578e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:25:00 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:25:00.511 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9aa12a4e-d1 in ovnmeta-9aa12a4e-d8a3-46c3-928f-fca08b856d45 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 15:25:00 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:25:00.513 257604 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9aa12a4e-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 15:25:00 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:25:00.513 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[3814f06b-8a86-4042-b37d-bbfc2cd6d49b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:25:00 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:25:00.514 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[7b7b90a5-d73a-4e0a-906e-34954904d578]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:25:00 compute-0 systemd-udevd[375000]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 15:25:00 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:25:00.524 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[23a00e36-eeea-4c39-b9f7-212d1564f7e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:25:00 compute-0 systemd-machined[216401]: New machine qemu-89-instance-000000c9.
Jan 20 15:25:00 compute-0 NetworkManager[48960]: <info>  [1768922700.5385] device (tap3a19935a-bf): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 15:25:00 compute-0 NetworkManager[48960]: <info>  [1768922700.5393] device (tap3a19935a-bf): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 15:25:00 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:25:00.541 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[e2d6a2ed-11e1-4dc5-ba56-7fc4035d9367]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:25:00 compute-0 systemd[1]: Started Virtual Machine qemu-89-instance-000000c9.
Jan 20 15:25:00 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:25:00.570 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[e1315b82-0ae7-4a2f-8d9e-45aacebb5b40]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:25:00 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:25:00.575 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[c791291e-1a6a-4fbd-88ea-c1435e1324cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:25:00 compute-0 systemd-udevd[375005]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 15:25:00 compute-0 NetworkManager[48960]: <info>  [1768922700.5765] manager: (tap9aa12a4e-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/355)
Jan 20 15:25:00 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:25:00.605 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[06c92da4-a3e5-481d-b7cd-e70618a4009c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:25:00 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:25:00.607 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[5ee607a5-f70b-4bbd-b7b5-762563f0b5ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:25:00 compute-0 NetworkManager[48960]: <info>  [1768922700.6309] device (tap9aa12a4e-d0): carrier: link connected
Jan 20 15:25:00 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:25:00.636 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[f5f5bf3a-d83b-4c31-a53b-1545fb87aa94]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:25:00 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:25:00.652 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[aa9b2c67-d492-45b0-841a-253d683544f3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9aa12a4e-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6e:6a:1b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 233], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 864135, 'reachable_time': 27606, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 375033, 'error': None, 'target': 'ovnmeta-9aa12a4e-d8a3-46c3-928f-fca08b856d45', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:25:00 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:25:00.666 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[e0b6ec00-3bd7-43ee-90aa-33ae5bde5878]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe6e:6a1b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 864135, 'tstamp': 864135}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 375034, 'error': None, 'target': 'ovnmeta-9aa12a4e-d8a3-46c3-928f-fca08b856d45', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:25:00 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:25:00.682 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[fd2eafef-f01c-4b76-a666-35560fb4366b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9aa12a4e-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6e:6a:1b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 233], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 864135, 'reachable_time': 27606, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 375035, 'error': None, 'target': 'ovnmeta-9aa12a4e-d8a3-46c3-928f-fca08b856d45', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:25:00 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:25:00.708 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[a868ec8b-dd1e-41aa-beb7-88e20880a1e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:25:00 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:25:00.768 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[4569d087-5b09-4564-aa0d-57de6239a1f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:25:00 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:25:00.769 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9aa12a4e-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:25:00 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:25:00.770 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:25:00 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:25:00.770 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9aa12a4e-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:25:00 compute-0 nova_compute[250018]: 2026-01-20 15:25:00.771 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:25:00 compute-0 NetworkManager[48960]: <info>  [1768922700.7724] manager: (tap9aa12a4e-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/356)
Jan 20 15:25:00 compute-0 kernel: tap9aa12a4e-d0: entered promiscuous mode
Jan 20 15:25:00 compute-0 nova_compute[250018]: 2026-01-20 15:25:00.774 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:25:00 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:25:00.775 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9aa12a4e-d0, col_values=(('external_ids', {'iface-id': '66a8e378-713b-4681-aec0-15a66f9670f1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:25:00 compute-0 nova_compute[250018]: 2026-01-20 15:25:00.776 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:25:00 compute-0 ovn_controller[148666]: 2026-01-20T15:25:00Z|00741|binding|INFO|Releasing lport 66a8e378-713b-4681-aec0-15a66f9670f1 from this chassis (sb_readonly=0)
Jan 20 15:25:00 compute-0 nova_compute[250018]: 2026-01-20 15:25:00.789 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:25:00 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:25:00.791 160071 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9aa12a4e-d8a3-46c3-928f-fca08b856d45.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9aa12a4e-d8a3-46c3-928f-fca08b856d45.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 15:25:00 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:25:00.792 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[a1288fb5-dfa4-4017-aaf3-cfe569580f8e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:25:00 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:25:00.793 160071 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 15:25:00 compute-0 ovn_metadata_agent[160049]: global
Jan 20 15:25:00 compute-0 ovn_metadata_agent[160049]:     log         /dev/log local0 debug
Jan 20 15:25:00 compute-0 ovn_metadata_agent[160049]:     log-tag     haproxy-metadata-proxy-9aa12a4e-d8a3-46c3-928f-fca08b856d45
Jan 20 15:25:00 compute-0 ovn_metadata_agent[160049]:     user        root
Jan 20 15:25:00 compute-0 ovn_metadata_agent[160049]:     group       root
Jan 20 15:25:00 compute-0 ovn_metadata_agent[160049]:     maxconn     1024
Jan 20 15:25:00 compute-0 ovn_metadata_agent[160049]:     pidfile     /var/lib/neutron/external/pids/9aa12a4e-d8a3-46c3-928f-fca08b856d45.pid.haproxy
Jan 20 15:25:00 compute-0 ovn_metadata_agent[160049]:     daemon
Jan 20 15:25:00 compute-0 ovn_metadata_agent[160049]: 
Jan 20 15:25:00 compute-0 ovn_metadata_agent[160049]: defaults
Jan 20 15:25:00 compute-0 ovn_metadata_agent[160049]:     log global
Jan 20 15:25:00 compute-0 ovn_metadata_agent[160049]:     mode http
Jan 20 15:25:00 compute-0 ovn_metadata_agent[160049]:     option httplog
Jan 20 15:25:00 compute-0 ovn_metadata_agent[160049]:     option dontlognull
Jan 20 15:25:00 compute-0 ovn_metadata_agent[160049]:     option http-server-close
Jan 20 15:25:00 compute-0 ovn_metadata_agent[160049]:     option forwardfor
Jan 20 15:25:00 compute-0 ovn_metadata_agent[160049]:     retries                 3
Jan 20 15:25:00 compute-0 ovn_metadata_agent[160049]:     timeout http-request    30s
Jan 20 15:25:00 compute-0 ovn_metadata_agent[160049]:     timeout connect         30s
Jan 20 15:25:00 compute-0 ovn_metadata_agent[160049]:     timeout client          32s
Jan 20 15:25:00 compute-0 ovn_metadata_agent[160049]:     timeout server          32s
Jan 20 15:25:00 compute-0 ovn_metadata_agent[160049]:     timeout http-keep-alive 30s
Jan 20 15:25:00 compute-0 ovn_metadata_agent[160049]: 
Jan 20 15:25:00 compute-0 ovn_metadata_agent[160049]: 
Jan 20 15:25:00 compute-0 ovn_metadata_agent[160049]: listen listener
Jan 20 15:25:00 compute-0 ovn_metadata_agent[160049]:     bind 169.254.169.254:80
Jan 20 15:25:00 compute-0 ovn_metadata_agent[160049]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 15:25:00 compute-0 ovn_metadata_agent[160049]:     http-request add-header X-OVN-Network-ID 9aa12a4e-d8a3-46c3-928f-fca08b856d45
Jan 20 15:25:00 compute-0 ovn_metadata_agent[160049]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 15:25:00 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:25:00.794 160071 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9aa12a4e-d8a3-46c3-928f-fca08b856d45', 'env', 'PROCESS_TAG=haproxy-9aa12a4e-d8a3-46c3-928f-fca08b856d45', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9aa12a4e-d8a3-46c3-928f-fca08b856d45.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 15:25:00 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3085: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 259 KiB/s rd, 1.6 MiB/s wr, 52 op/s
Jan 20 15:25:00 compute-0 nova_compute[250018]: 2026-01-20 15:25:00.906 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:25:00 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:25:00 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:25:00 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:25:00.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:25:01 compute-0 nova_compute[250018]: 2026-01-20 15:25:01.016 250022 DEBUG nova.virt.libvirt.host [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Removed pending event for d92c0bb3-a866-40ac-a664-d100a6dbbc6b due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Jan 20 15:25:01 compute-0 nova_compute[250018]: 2026-01-20 15:25:01.016 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768922701.0160832, d92c0bb3-a866-40ac-a664-d100a6dbbc6b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:25:01 compute-0 nova_compute[250018]: 2026-01-20 15:25:01.017 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: d92c0bb3-a866-40ac-a664-d100a6dbbc6b] VM Started (Lifecycle Event)
Jan 20 15:25:01 compute-0 nova_compute[250018]: 2026-01-20 15:25:01.035 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: d92c0bb3-a866-40ac-a664-d100a6dbbc6b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:25:01 compute-0 nova_compute[250018]: 2026-01-20 15:25:01.055 250022 DEBUG nova.compute.manager [None req-b73b3761-4af7-46de-9758-1e1bef9175c5 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: d92c0bb3-a866-40ac-a664-d100a6dbbc6b] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 15:25:01 compute-0 nova_compute[250018]: 2026-01-20 15:25:01.055 250022 DEBUG nova.objects.instance [None req-b73b3761-4af7-46de-9758-1e1bef9175c5 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lazy-loading 'pci_devices' on Instance uuid d92c0bb3-a866-40ac-a664-d100a6dbbc6b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:25:01 compute-0 nova_compute[250018]: 2026-01-20 15:25:01.058 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: d92c0bb3-a866-40ac-a664-d100a6dbbc6b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:25:01 compute-0 nova_compute[250018]: 2026-01-20 15:25:01.073 250022 INFO nova.virt.libvirt.driver [-] [instance: d92c0bb3-a866-40ac-a664-d100a6dbbc6b] Instance running successfully.
Jan 20 15:25:01 compute-0 virtqemud[249565]: argument unsupported: QEMU guest agent is not configured
Jan 20 15:25:01 compute-0 nova_compute[250018]: 2026-01-20 15:25:01.076 250022 DEBUG nova.virt.libvirt.guest [None req-b73b3761-4af7-46de-9758-1e1bef9175c5 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: d92c0bb3-a866-40ac-a664-d100a6dbbc6b] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Jan 20 15:25:01 compute-0 nova_compute[250018]: 2026-01-20 15:25:01.076 250022 DEBUG nova.compute.manager [None req-b73b3761-4af7-46de-9758-1e1bef9175c5 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: d92c0bb3-a866-40ac-a664-d100a6dbbc6b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:25:01 compute-0 nova_compute[250018]: 2026-01-20 15:25:01.077 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: d92c0bb3-a866-40ac-a664-d100a6dbbc6b] During sync_power_state the instance has a pending task (resuming). Skip.
Jan 20 15:25:01 compute-0 nova_compute[250018]: 2026-01-20 15:25:01.077 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768922701.022458, d92c0bb3-a866-40ac-a664-d100a6dbbc6b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:25:01 compute-0 nova_compute[250018]: 2026-01-20 15:25:01.078 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: d92c0bb3-a866-40ac-a664-d100a6dbbc6b] VM Resumed (Lifecycle Event)
Jan 20 15:25:01 compute-0 nova_compute[250018]: 2026-01-20 15:25:01.166 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: d92c0bb3-a866-40ac-a664-d100a6dbbc6b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:25:01 compute-0 podman[375109]: 2026-01-20 15:25:01.16849274 +0000 UTC m=+0.044559416 container create fd299582833713766a2392f37b35647d1654a4350301cc6f75d90b03207e4e00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9aa12a4e-d8a3-46c3-928f-fca08b856d45, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 15:25:01 compute-0 nova_compute[250018]: 2026-01-20 15:25:01.169 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: d92c0bb3-a866-40ac-a664-d100a6dbbc6b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:25:01 compute-0 nova_compute[250018]: 2026-01-20 15:25:01.194 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: d92c0bb3-a866-40ac-a664-d100a6dbbc6b] During sync_power_state the instance has a pending task (resuming). Skip.
Jan 20 15:25:01 compute-0 systemd[1]: Started libpod-conmon-fd299582833713766a2392f37b35647d1654a4350301cc6f75d90b03207e4e00.scope.
Jan 20 15:25:01 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:25:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8079fc6f56869b1006b2f142b918d7f1236fdd007d76d9740217467e81d2ad2a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 15:25:01 compute-0 podman[375109]: 2026-01-20 15:25:01.239635528 +0000 UTC m=+0.115702204 container init fd299582833713766a2392f37b35647d1654a4350301cc6f75d90b03207e4e00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9aa12a4e-d8a3-46c3-928f-fca08b856d45, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 20 15:25:01 compute-0 podman[375109]: 2026-01-20 15:25:01.146743736 +0000 UTC m=+0.022810432 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 15:25:01 compute-0 podman[375109]: 2026-01-20 15:25:01.245456718 +0000 UTC m=+0.121523394 container start fd299582833713766a2392f37b35647d1654a4350301cc6f75d90b03207e4e00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9aa12a4e-d8a3-46c3-928f-fca08b856d45, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 20 15:25:01 compute-0 neutron-haproxy-ovnmeta-9aa12a4e-d8a3-46c3-928f-fca08b856d45[375124]: [NOTICE]   (375128) : New worker (375130) forked
Jan 20 15:25:01 compute-0 neutron-haproxy-ovnmeta-9aa12a4e-d8a3-46c3-928f-fca08b856d45[375124]: [NOTICE]   (375128) : Loading success.
Jan 20 15:25:01 compute-0 nova_compute[250018]: 2026-01-20 15:25:01.335 250022 DEBUG nova.compute.manager [req-5e66a9da-cc8b-465c-a303-c24cc71b3b17 req-d4aec59e-7cc9-46c3-b69e-3cc7743cc8b8 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d92c0bb3-a866-40ac-a664-d100a6dbbc6b] Received event network-vif-plugged-3a19935a-bf2c-4d28-b687-e8294d1773dc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:25:01 compute-0 nova_compute[250018]: 2026-01-20 15:25:01.335 250022 DEBUG oslo_concurrency.lockutils [req-5e66a9da-cc8b-465c-a303-c24cc71b3b17 req-d4aec59e-7cc9-46c3-b69e-3cc7743cc8b8 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "d92c0bb3-a866-40ac-a664-d100a6dbbc6b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:25:01 compute-0 nova_compute[250018]: 2026-01-20 15:25:01.335 250022 DEBUG oslo_concurrency.lockutils [req-5e66a9da-cc8b-465c-a303-c24cc71b3b17 req-d4aec59e-7cc9-46c3-b69e-3cc7743cc8b8 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d92c0bb3-a866-40ac-a664-d100a6dbbc6b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:25:01 compute-0 nova_compute[250018]: 2026-01-20 15:25:01.335 250022 DEBUG oslo_concurrency.lockutils [req-5e66a9da-cc8b-465c-a303-c24cc71b3b17 req-d4aec59e-7cc9-46c3-b69e-3cc7743cc8b8 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d92c0bb3-a866-40ac-a664-d100a6dbbc6b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:25:01 compute-0 nova_compute[250018]: 2026-01-20 15:25:01.336 250022 DEBUG nova.compute.manager [req-5e66a9da-cc8b-465c-a303-c24cc71b3b17 req-d4aec59e-7cc9-46c3-b69e-3cc7743cc8b8 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d92c0bb3-a866-40ac-a664-d100a6dbbc6b] No waiting events found dispatching network-vif-plugged-3a19935a-bf2c-4d28-b687-e8294d1773dc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:25:01 compute-0 nova_compute[250018]: 2026-01-20 15:25:01.336 250022 WARNING nova.compute.manager [req-5e66a9da-cc8b-465c-a303-c24cc71b3b17 req-d4aec59e-7cc9-46c3-b69e-3cc7743cc8b8 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d92c0bb3-a866-40ac-a664-d100a6dbbc6b] Received unexpected event network-vif-plugged-3a19935a-bf2c-4d28-b687-e8294d1773dc for instance with vm_state active and task_state None.
Jan 20 15:25:01 compute-0 nova_compute[250018]: 2026-01-20 15:25:01.375 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:25:01 compute-0 ceph-mon[74360]: pgmap v3085: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 259 KiB/s rd, 1.6 MiB/s wr, 52 op/s
Jan 20 15:25:02 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:25:02 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:25:02 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:25:02.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:25:02 compute-0 nova_compute[250018]: 2026-01-20 15:25:02.465 250022 INFO nova.compute.manager [None req-458ec72d-c47d-47f9-88cf-1317bce3b6a0 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: d92c0bb3-a866-40ac-a664-d100a6dbbc6b] Get console output
Jan 20 15:25:02 compute-0 nova_compute[250018]: 2026-01-20 15:25:02.471 331017 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Jan 20 15:25:02 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:25:02 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3086: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 50 KiB/s rd, 42 KiB/s wr, 9 op/s
Jan 20 15:25:02 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:25:02 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:25:02 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:25:02.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:25:03 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:25:03.391 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=71, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=70) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:25:03 compute-0 nova_compute[250018]: 2026-01-20 15:25:03.391 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:25:03 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:25:03.392 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 15:25:03 compute-0 nova_compute[250018]: 2026-01-20 15:25:03.450 250022 DEBUG nova.compute.manager [req-3c334533-faa7-40cf-83cf-403979292da5 req-4429945b-98e6-42ce-8a11-710cc3a7c290 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d92c0bb3-a866-40ac-a664-d100a6dbbc6b] Received event network-vif-plugged-3a19935a-bf2c-4d28-b687-e8294d1773dc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:25:03 compute-0 nova_compute[250018]: 2026-01-20 15:25:03.451 250022 DEBUG oslo_concurrency.lockutils [req-3c334533-faa7-40cf-83cf-403979292da5 req-4429945b-98e6-42ce-8a11-710cc3a7c290 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "d92c0bb3-a866-40ac-a664-d100a6dbbc6b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:25:03 compute-0 nova_compute[250018]: 2026-01-20 15:25:03.451 250022 DEBUG oslo_concurrency.lockutils [req-3c334533-faa7-40cf-83cf-403979292da5 req-4429945b-98e6-42ce-8a11-710cc3a7c290 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d92c0bb3-a866-40ac-a664-d100a6dbbc6b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:25:03 compute-0 nova_compute[250018]: 2026-01-20 15:25:03.451 250022 DEBUG oslo_concurrency.lockutils [req-3c334533-faa7-40cf-83cf-403979292da5 req-4429945b-98e6-42ce-8a11-710cc3a7c290 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d92c0bb3-a866-40ac-a664-d100a6dbbc6b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:25:03 compute-0 nova_compute[250018]: 2026-01-20 15:25:03.451 250022 DEBUG nova.compute.manager [req-3c334533-faa7-40cf-83cf-403979292da5 req-4429945b-98e6-42ce-8a11-710cc3a7c290 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d92c0bb3-a866-40ac-a664-d100a6dbbc6b] No waiting events found dispatching network-vif-plugged-3a19935a-bf2c-4d28-b687-e8294d1773dc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:25:03 compute-0 nova_compute[250018]: 2026-01-20 15:25:03.451 250022 WARNING nova.compute.manager [req-3c334533-faa7-40cf-83cf-403979292da5 req-4429945b-98e6-42ce-8a11-710cc3a7c290 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d92c0bb3-a866-40ac-a664-d100a6dbbc6b] Received unexpected event network-vif-plugged-3a19935a-bf2c-4d28-b687-e8294d1773dc for instance with vm_state active and task_state None.
Jan 20 15:25:03 compute-0 nova_compute[250018]: 2026-01-20 15:25:03.586 250022 DEBUG nova.compute.manager [req-21a5db7c-20aa-42ef-bb2e-429ec7d0a283 req-f4f4e143-6d11-412e-bba9-d7821f1ad987 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d92c0bb3-a866-40ac-a664-d100a6dbbc6b] Received event network-changed-3a19935a-bf2c-4d28-b687-e8294d1773dc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:25:03 compute-0 nova_compute[250018]: 2026-01-20 15:25:03.586 250022 DEBUG nova.compute.manager [req-21a5db7c-20aa-42ef-bb2e-429ec7d0a283 req-f4f4e143-6d11-412e-bba9-d7821f1ad987 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d92c0bb3-a866-40ac-a664-d100a6dbbc6b] Refreshing instance network info cache due to event network-changed-3a19935a-bf2c-4d28-b687-e8294d1773dc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 15:25:03 compute-0 nova_compute[250018]: 2026-01-20 15:25:03.587 250022 DEBUG oslo_concurrency.lockutils [req-21a5db7c-20aa-42ef-bb2e-429ec7d0a283 req-f4f4e143-6d11-412e-bba9-d7821f1ad987 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-d92c0bb3-a866-40ac-a664-d100a6dbbc6b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:25:03 compute-0 nova_compute[250018]: 2026-01-20 15:25:03.587 250022 DEBUG oslo_concurrency.lockutils [req-21a5db7c-20aa-42ef-bb2e-429ec7d0a283 req-f4f4e143-6d11-412e-bba9-d7821f1ad987 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-d92c0bb3-a866-40ac-a664-d100a6dbbc6b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:25:03 compute-0 nova_compute[250018]: 2026-01-20 15:25:03.587 250022 DEBUG nova.network.neutron [req-21a5db7c-20aa-42ef-bb2e-429ec7d0a283 req-f4f4e143-6d11-412e-bba9-d7821f1ad987 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d92c0bb3-a866-40ac-a664-d100a6dbbc6b] Refreshing network info cache for port 3a19935a-bf2c-4d28-b687-e8294d1773dc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 15:25:03 compute-0 nova_compute[250018]: 2026-01-20 15:25:03.647 250022 DEBUG oslo_concurrency.lockutils [None req-e6d281c5-9fe6-497f-a584-a4048f7e70e8 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Acquiring lock "d92c0bb3-a866-40ac-a664-d100a6dbbc6b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:25:03 compute-0 nova_compute[250018]: 2026-01-20 15:25:03.647 250022 DEBUG oslo_concurrency.lockutils [None req-e6d281c5-9fe6-497f-a584-a4048f7e70e8 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lock "d92c0bb3-a866-40ac-a664-d100a6dbbc6b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:25:03 compute-0 nova_compute[250018]: 2026-01-20 15:25:03.647 250022 DEBUG oslo_concurrency.lockutils [None req-e6d281c5-9fe6-497f-a584-a4048f7e70e8 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Acquiring lock "d92c0bb3-a866-40ac-a664-d100a6dbbc6b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:25:03 compute-0 nova_compute[250018]: 2026-01-20 15:25:03.648 250022 DEBUG oslo_concurrency.lockutils [None req-e6d281c5-9fe6-497f-a584-a4048f7e70e8 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lock "d92c0bb3-a866-40ac-a664-d100a6dbbc6b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:25:03 compute-0 nova_compute[250018]: 2026-01-20 15:25:03.648 250022 DEBUG oslo_concurrency.lockutils [None req-e6d281c5-9fe6-497f-a584-a4048f7e70e8 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lock "d92c0bb3-a866-40ac-a664-d100a6dbbc6b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:25:03 compute-0 nova_compute[250018]: 2026-01-20 15:25:03.649 250022 INFO nova.compute.manager [None req-e6d281c5-9fe6-497f-a584-a4048f7e70e8 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: d92c0bb3-a866-40ac-a664-d100a6dbbc6b] Terminating instance
Jan 20 15:25:03 compute-0 nova_compute[250018]: 2026-01-20 15:25:03.650 250022 DEBUG nova.compute.manager [None req-e6d281c5-9fe6-497f-a584-a4048f7e70e8 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: d92c0bb3-a866-40ac-a664-d100a6dbbc6b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 15:25:03 compute-0 kernel: tap3a19935a-bf (unregistering): left promiscuous mode
Jan 20 15:25:03 compute-0 NetworkManager[48960]: <info>  [1768922703.7013] device (tap3a19935a-bf): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 15:25:03 compute-0 ovn_controller[148666]: 2026-01-20T15:25:03Z|00742|binding|INFO|Releasing lport 3a19935a-bf2c-4d28-b687-e8294d1773dc from this chassis (sb_readonly=0)
Jan 20 15:25:03 compute-0 ovn_controller[148666]: 2026-01-20T15:25:03Z|00743|binding|INFO|Setting lport 3a19935a-bf2c-4d28-b687-e8294d1773dc down in Southbound
Jan 20 15:25:03 compute-0 ovn_controller[148666]: 2026-01-20T15:25:03Z|00744|binding|INFO|Removing iface tap3a19935a-bf ovn-installed in OVS
Jan 20 15:25:03 compute-0 nova_compute[250018]: 2026-01-20 15:25:03.712 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:25:03 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:25:03.735 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:04:8d:2f 10.100.0.6'], port_security=['fa:16:3e:04:8d:2f 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'd92c0bb3-a866-40ac-a664-d100a6dbbc6b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9aa12a4e-d8a3-46c3-928f-fca08b856d45', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1ed5feeeafe7448a8efb47ab975b0ead', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'f1c1516c-e4c9-43f2-a8ac-732f711a4188', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0478e54e-d17b-4524-b6e2-b1a7413ccfd8, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=3a19935a-bf2c-4d28-b687-e8294d1773dc) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:25:03 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:25:03.736 160071 INFO neutron.agent.ovn.metadata.agent [-] Port 3a19935a-bf2c-4d28-b687-e8294d1773dc in datapath 9aa12a4e-d8a3-46c3-928f-fca08b856d45 unbound from our chassis
Jan 20 15:25:03 compute-0 nova_compute[250018]: 2026-01-20 15:25:03.736 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:25:03 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:25:03.737 160071 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9aa12a4e-d8a3-46c3-928f-fca08b856d45, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 15:25:03 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:25:03.738 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[3c0031f2-dddf-48dc-9ecb-04930aa74079]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:25:03 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:25:03.739 160071 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9aa12a4e-d8a3-46c3-928f-fca08b856d45 namespace which is not needed anymore
Jan 20 15:25:03 compute-0 systemd[1]: machine-qemu\x2d89\x2dinstance\x2d000000c9.scope: Deactivated successfully.
Jan 20 15:25:03 compute-0 systemd-machined[216401]: Machine qemu-89-instance-000000c9 terminated.
Jan 20 15:25:03 compute-0 nova_compute[250018]: 2026-01-20 15:25:03.889 250022 INFO nova.virt.libvirt.driver [-] [instance: d92c0bb3-a866-40ac-a664-d100a6dbbc6b] Instance destroyed successfully.
Jan 20 15:25:03 compute-0 nova_compute[250018]: 2026-01-20 15:25:03.890 250022 DEBUG nova.objects.instance [None req-e6d281c5-9fe6-497f-a584-a4048f7e70e8 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lazy-loading 'resources' on Instance uuid d92c0bb3-a866-40ac-a664-d100a6dbbc6b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:25:03 compute-0 neutron-haproxy-ovnmeta-9aa12a4e-d8a3-46c3-928f-fca08b856d45[375124]: [NOTICE]   (375128) : haproxy version is 2.8.14-c23fe91
Jan 20 15:25:03 compute-0 neutron-haproxy-ovnmeta-9aa12a4e-d8a3-46c3-928f-fca08b856d45[375124]: [NOTICE]   (375128) : path to executable is /usr/sbin/haproxy
Jan 20 15:25:03 compute-0 neutron-haproxy-ovnmeta-9aa12a4e-d8a3-46c3-928f-fca08b856d45[375124]: [WARNING]  (375128) : Exiting Master process...
Jan 20 15:25:03 compute-0 neutron-haproxy-ovnmeta-9aa12a4e-d8a3-46c3-928f-fca08b856d45[375124]: [ALERT]    (375128) : Current worker (375130) exited with code 143 (Terminated)
Jan 20 15:25:03 compute-0 neutron-haproxy-ovnmeta-9aa12a4e-d8a3-46c3-928f-fca08b856d45[375124]: [WARNING]  (375128) : All workers exited. Exiting... (0)
Jan 20 15:25:03 compute-0 systemd[1]: libpod-fd299582833713766a2392f37b35647d1654a4350301cc6f75d90b03207e4e00.scope: Deactivated successfully.
Jan 20 15:25:03 compute-0 conmon[375124]: conmon fd299582833713766a23 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fd299582833713766a2392f37b35647d1654a4350301cc6f75d90b03207e4e00.scope/container/memory.events
Jan 20 15:25:03 compute-0 podman[375166]: 2026-01-20 15:25:03.905661672 +0000 UTC m=+0.055918886 container died fd299582833713766a2392f37b35647d1654a4350301cc6f75d90b03207e4e00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9aa12a4e-d8a3-46c3-928f-fca08b856d45, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 20 15:25:03 compute-0 nova_compute[250018]: 2026-01-20 15:25:03.905 250022 DEBUG nova.virt.libvirt.vif [None req-e6d281c5-9fe6-497f-a584-a4048f7e70e8 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T15:24:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1934224565',display_name='tempest-TestNetworkAdvancedServerOps-server-1934224565',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1934224565',id=201,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBC7eRRiZX7Z0gwhADeTQp/GmdK+Gan6DSxYDy+ge9zWSe9WjzThIejOjnXbubm42+2V6mvuyTko1B/LIcp7s5Ksj2VFdJX32syZeRGa674Ht1jUDA1vmPNUafeYfNfojGA==',key_name='tempest-TestNetworkAdvancedServerOps-1914065002',keypairs=<?>,launch_index=0,launched_at=2026-01-20T15:24:33Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1ed5feeeafe7448a8efb47ab975b0ead',ramdisk_id='',reservation_id='r-i99bna7o',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-175282664',owner_user_name='tempest-TestNetworkAdvancedServerOps-175282664-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T15:25:01Z,user_data=None,user_id='442a7a5cb8ea426a82be9762b262d171',uuid=d92c0bb3-a866-40ac-a664-d100a6dbbc6b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3a19935a-bf2c-4d28-b687-e8294d1773dc", "address": "fa:16:3e:04:8d:2f", "network": {"id": "9aa12a4e-d8a3-46c3-928f-fca08b856d45", "bridge": "br-int", "label": "tempest-network-smoke--1838033144", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ed5feeeafe7448a8efb47ab975b0ead", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3a19935a-bf", "ovs_interfaceid": "3a19935a-bf2c-4d28-b687-e8294d1773dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 15:25:03 compute-0 nova_compute[250018]: 2026-01-20 15:25:03.905 250022 DEBUG nova.network.os_vif_util [None req-e6d281c5-9fe6-497f-a584-a4048f7e70e8 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Converting VIF {"id": "3a19935a-bf2c-4d28-b687-e8294d1773dc", "address": "fa:16:3e:04:8d:2f", "network": {"id": "9aa12a4e-d8a3-46c3-928f-fca08b856d45", "bridge": "br-int", "label": "tempest-network-smoke--1838033144", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ed5feeeafe7448a8efb47ab975b0ead", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3a19935a-bf", "ovs_interfaceid": "3a19935a-bf2c-4d28-b687-e8294d1773dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:25:03 compute-0 nova_compute[250018]: 2026-01-20 15:25:03.906 250022 DEBUG nova.network.os_vif_util [None req-e6d281c5-9fe6-497f-a584-a4048f7e70e8 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:04:8d:2f,bridge_name='br-int',has_traffic_filtering=True,id=3a19935a-bf2c-4d28-b687-e8294d1773dc,network=Network(9aa12a4e-d8a3-46c3-928f-fca08b856d45),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3a19935a-bf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:25:03 compute-0 nova_compute[250018]: 2026-01-20 15:25:03.906 250022 DEBUG os_vif [None req-e6d281c5-9fe6-497f-a584-a4048f7e70e8 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:04:8d:2f,bridge_name='br-int',has_traffic_filtering=True,id=3a19935a-bf2c-4d28-b687-e8294d1773dc,network=Network(9aa12a4e-d8a3-46c3-928f-fca08b856d45),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3a19935a-bf') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 15:25:03 compute-0 nova_compute[250018]: 2026-01-20 15:25:03.908 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:25:03 compute-0 nova_compute[250018]: 2026-01-20 15:25:03.908 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3a19935a-bf, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:25:03 compute-0 ceph-mon[74360]: pgmap v3086: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 50 KiB/s rd, 42 KiB/s wr, 9 op/s
Jan 20 15:25:03 compute-0 nova_compute[250018]: 2026-01-20 15:25:03.986 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:25:03 compute-0 nova_compute[250018]: 2026-01-20 15:25:03.989 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 15:25:03 compute-0 nova_compute[250018]: 2026-01-20 15:25:03.991 250022 INFO os_vif [None req-e6d281c5-9fe6-497f-a584-a4048f7e70e8 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:04:8d:2f,bridge_name='br-int',has_traffic_filtering=True,id=3a19935a-bf2c-4d28-b687-e8294d1773dc,network=Network(9aa12a4e-d8a3-46c3-928f-fca08b856d45),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3a19935a-bf')
Jan 20 15:25:03 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-fd299582833713766a2392f37b35647d1654a4350301cc6f75d90b03207e4e00-userdata-shm.mount: Deactivated successfully.
Jan 20 15:25:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-8079fc6f56869b1006b2f142b918d7f1236fdd007d76d9740217467e81d2ad2a-merged.mount: Deactivated successfully.
Jan 20 15:25:04 compute-0 podman[375166]: 2026-01-20 15:25:04.00909998 +0000 UTC m=+0.159357194 container cleanup fd299582833713766a2392f37b35647d1654a4350301cc6f75d90b03207e4e00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9aa12a4e-d8a3-46c3-928f-fca08b856d45, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 20 15:25:04 compute-0 systemd[1]: libpod-conmon-fd299582833713766a2392f37b35647d1654a4350301cc6f75d90b03207e4e00.scope: Deactivated successfully.
Jan 20 15:25:04 compute-0 podman[375218]: 2026-01-20 15:25:04.071683307 +0000 UTC m=+0.043755304 container remove fd299582833713766a2392f37b35647d1654a4350301cc6f75d90b03207e4e00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9aa12a4e-d8a3-46c3-928f-fca08b856d45, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:25:04 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:25:04.077 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[7621cae5-646d-4ee7-980d-2adbf0cf3196]: (4, ('Tue Jan 20 03:25:03 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-9aa12a4e-d8a3-46c3-928f-fca08b856d45 (fd299582833713766a2392f37b35647d1654a4350301cc6f75d90b03207e4e00)\nfd299582833713766a2392f37b35647d1654a4350301cc6f75d90b03207e4e00\nTue Jan 20 03:25:04 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-9aa12a4e-d8a3-46c3-928f-fca08b856d45 (fd299582833713766a2392f37b35647d1654a4350301cc6f75d90b03207e4e00)\nfd299582833713766a2392f37b35647d1654a4350301cc6f75d90b03207e4e00\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:25:04 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:25:04.079 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[c6c28187-4261-40d8-8603-71d14bc8374a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:25:04 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:25:04.080 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9aa12a4e-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:25:04 compute-0 nova_compute[250018]: 2026-01-20 15:25:04.082 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:25:04 compute-0 kernel: tap9aa12a4e-d0: left promiscuous mode
Jan 20 15:25:04 compute-0 nova_compute[250018]: 2026-01-20 15:25:04.094 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:25:04 compute-0 nova_compute[250018]: 2026-01-20 15:25:04.095 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:25:04 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:25:04.097 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[ec5175f0-1857-48b0-90eb-843a30504270]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:25:04 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:25:04.112 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[8f82f0f3-c6f3-417c-876b-513fa0b14167]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:25:04 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:25:04.113 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[a6df23d5-b94c-4a11-8e0a-d70393b9ee4b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:25:04 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:25:04.127 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[444b6e17-19ce-4392-bca3-153baeecb746]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 864128, 'reachable_time': 31906, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 375237, 'error': None, 'target': 'ovnmeta-9aa12a4e-d8a3-46c3-928f-fca08b856d45', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:25:04 compute-0 systemd[1]: run-netns-ovnmeta\x2d9aa12a4e\x2dd8a3\x2d46c3\x2d928f\x2dfca08b856d45.mount: Deactivated successfully.
Jan 20 15:25:04 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:25:04.132 160517 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9aa12a4e-d8a3-46c3-928f-fca08b856d45 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 15:25:04 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:25:04.132 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[689ee490-8cb9-40d0-be0d-4cdbca6c914e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:25:04 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:25:04 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:25:04 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:25:04.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:25:04 compute-0 nova_compute[250018]: 2026-01-20 15:25:04.357 250022 INFO nova.virt.libvirt.driver [None req-e6d281c5-9fe6-497f-a584-a4048f7e70e8 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: d92c0bb3-a866-40ac-a664-d100a6dbbc6b] Deleting instance files /var/lib/nova/instances/d92c0bb3-a866-40ac-a664-d100a6dbbc6b_del
Jan 20 15:25:04 compute-0 nova_compute[250018]: 2026-01-20 15:25:04.358 250022 INFO nova.virt.libvirt.driver [None req-e6d281c5-9fe6-497f-a584-a4048f7e70e8 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: d92c0bb3-a866-40ac-a664-d100a6dbbc6b] Deletion of /var/lib/nova/instances/d92c0bb3-a866-40ac-a664-d100a6dbbc6b_del complete
Jan 20 15:25:04 compute-0 nova_compute[250018]: 2026-01-20 15:25:04.436 250022 INFO nova.compute.manager [None req-e6d281c5-9fe6-497f-a584-a4048f7e70e8 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] [instance: d92c0bb3-a866-40ac-a664-d100a6dbbc6b] Took 0.79 seconds to destroy the instance on the hypervisor.
Jan 20 15:25:04 compute-0 nova_compute[250018]: 2026-01-20 15:25:04.437 250022 DEBUG oslo.service.loopingcall [None req-e6d281c5-9fe6-497f-a584-a4048f7e70e8 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 15:25:04 compute-0 nova_compute[250018]: 2026-01-20 15:25:04.438 250022 DEBUG nova.compute.manager [-] [instance: d92c0bb3-a866-40ac-a664-d100a6dbbc6b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 15:25:04 compute-0 nova_compute[250018]: 2026-01-20 15:25:04.438 250022 DEBUG nova.network.neutron [-] [instance: d92c0bb3-a866-40ac-a664-d100a6dbbc6b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 15:25:04 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3087: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 7.4 KiB/s rd, 12 KiB/s wr, 8 op/s
Jan 20 15:25:04 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:25:04 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:25:04 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:25:04.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:25:05 compute-0 sudo[375240]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:25:05 compute-0 sudo[375240]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:25:05 compute-0 sudo[375240]: pam_unix(sudo:session): session closed for user root
Jan 20 15:25:05 compute-0 sudo[375265]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:25:05 compute-0 sudo[375265]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:25:05 compute-0 sudo[375265]: pam_unix(sudo:session): session closed for user root
Jan 20 15:25:05 compute-0 nova_compute[250018]: 2026-01-20 15:25:05.637 250022 DEBUG nova.compute.manager [req-670addf1-5fa3-4a70-97a2-a6a3d588f492 req-d3cac45b-d4dd-48a3-8fda-8bac36502660 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d92c0bb3-a866-40ac-a664-d100a6dbbc6b] Received event network-vif-unplugged-3a19935a-bf2c-4d28-b687-e8294d1773dc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:25:05 compute-0 nova_compute[250018]: 2026-01-20 15:25:05.638 250022 DEBUG oslo_concurrency.lockutils [req-670addf1-5fa3-4a70-97a2-a6a3d588f492 req-d3cac45b-d4dd-48a3-8fda-8bac36502660 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "d92c0bb3-a866-40ac-a664-d100a6dbbc6b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:25:05 compute-0 nova_compute[250018]: 2026-01-20 15:25:05.638 250022 DEBUG oslo_concurrency.lockutils [req-670addf1-5fa3-4a70-97a2-a6a3d588f492 req-d3cac45b-d4dd-48a3-8fda-8bac36502660 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d92c0bb3-a866-40ac-a664-d100a6dbbc6b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:25:05 compute-0 nova_compute[250018]: 2026-01-20 15:25:05.638 250022 DEBUG oslo_concurrency.lockutils [req-670addf1-5fa3-4a70-97a2-a6a3d588f492 req-d3cac45b-d4dd-48a3-8fda-8bac36502660 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d92c0bb3-a866-40ac-a664-d100a6dbbc6b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:25:05 compute-0 nova_compute[250018]: 2026-01-20 15:25:05.639 250022 DEBUG nova.compute.manager [req-670addf1-5fa3-4a70-97a2-a6a3d588f492 req-d3cac45b-d4dd-48a3-8fda-8bac36502660 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d92c0bb3-a866-40ac-a664-d100a6dbbc6b] No waiting events found dispatching network-vif-unplugged-3a19935a-bf2c-4d28-b687-e8294d1773dc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:25:05 compute-0 nova_compute[250018]: 2026-01-20 15:25:05.639 250022 DEBUG nova.compute.manager [req-670addf1-5fa3-4a70-97a2-a6a3d588f492 req-d3cac45b-d4dd-48a3-8fda-8bac36502660 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d92c0bb3-a866-40ac-a664-d100a6dbbc6b] Received event network-vif-unplugged-3a19935a-bf2c-4d28-b687-e8294d1773dc for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 20 15:25:05 compute-0 nova_compute[250018]: 2026-01-20 15:25:05.639 250022 DEBUG nova.compute.manager [req-670addf1-5fa3-4a70-97a2-a6a3d588f492 req-d3cac45b-d4dd-48a3-8fda-8bac36502660 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d92c0bb3-a866-40ac-a664-d100a6dbbc6b] Received event network-vif-plugged-3a19935a-bf2c-4d28-b687-e8294d1773dc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:25:05 compute-0 nova_compute[250018]: 2026-01-20 15:25:05.640 250022 DEBUG oslo_concurrency.lockutils [req-670addf1-5fa3-4a70-97a2-a6a3d588f492 req-d3cac45b-d4dd-48a3-8fda-8bac36502660 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "d92c0bb3-a866-40ac-a664-d100a6dbbc6b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:25:05 compute-0 nova_compute[250018]: 2026-01-20 15:25:05.640 250022 DEBUG oslo_concurrency.lockutils [req-670addf1-5fa3-4a70-97a2-a6a3d588f492 req-d3cac45b-d4dd-48a3-8fda-8bac36502660 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d92c0bb3-a866-40ac-a664-d100a6dbbc6b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:25:05 compute-0 nova_compute[250018]: 2026-01-20 15:25:05.640 250022 DEBUG oslo_concurrency.lockutils [req-670addf1-5fa3-4a70-97a2-a6a3d588f492 req-d3cac45b-d4dd-48a3-8fda-8bac36502660 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "d92c0bb3-a866-40ac-a664-d100a6dbbc6b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:25:05 compute-0 nova_compute[250018]: 2026-01-20 15:25:05.641 250022 DEBUG nova.compute.manager [req-670addf1-5fa3-4a70-97a2-a6a3d588f492 req-d3cac45b-d4dd-48a3-8fda-8bac36502660 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d92c0bb3-a866-40ac-a664-d100a6dbbc6b] No waiting events found dispatching network-vif-plugged-3a19935a-bf2c-4d28-b687-e8294d1773dc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:25:05 compute-0 nova_compute[250018]: 2026-01-20 15:25:05.641 250022 WARNING nova.compute.manager [req-670addf1-5fa3-4a70-97a2-a6a3d588f492 req-d3cac45b-d4dd-48a3-8fda-8bac36502660 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d92c0bb3-a866-40ac-a664-d100a6dbbc6b] Received unexpected event network-vif-plugged-3a19935a-bf2c-4d28-b687-e8294d1773dc for instance with vm_state active and task_state deleting.
Jan 20 15:25:05 compute-0 nova_compute[250018]: 2026-01-20 15:25:05.709 250022 DEBUG nova.network.neutron [-] [instance: d92c0bb3-a866-40ac-a664-d100a6dbbc6b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:25:05 compute-0 nova_compute[250018]: 2026-01-20 15:25:05.728 250022 INFO nova.compute.manager [-] [instance: d92c0bb3-a866-40ac-a664-d100a6dbbc6b] Took 1.29 seconds to deallocate network for instance.
Jan 20 15:25:05 compute-0 nova_compute[250018]: 2026-01-20 15:25:05.799 250022 DEBUG oslo_concurrency.lockutils [None req-e6d281c5-9fe6-497f-a584-a4048f7e70e8 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:25:05 compute-0 nova_compute[250018]: 2026-01-20 15:25:05.803 250022 DEBUG oslo_concurrency.lockutils [None req-e6d281c5-9fe6-497f-a584-a4048f7e70e8 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:25:05 compute-0 nova_compute[250018]: 2026-01-20 15:25:05.855 250022 DEBUG nova.compute.manager [req-9eb9eef8-c83a-4fd7-8cb4-531a0bd67c10 req-07b60cf2-240b-43eb-ab7e-a6080a9ef229 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d92c0bb3-a866-40ac-a664-d100a6dbbc6b] Received event network-vif-deleted-3a19935a-bf2c-4d28-b687-e8294d1773dc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:25:05 compute-0 nova_compute[250018]: 2026-01-20 15:25:05.856 250022 DEBUG nova.network.neutron [req-21a5db7c-20aa-42ef-bb2e-429ec7d0a283 req-f4f4e143-6d11-412e-bba9-d7821f1ad987 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d92c0bb3-a866-40ac-a664-d100a6dbbc6b] Updated VIF entry in instance network info cache for port 3a19935a-bf2c-4d28-b687-e8294d1773dc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 15:25:05 compute-0 nova_compute[250018]: 2026-01-20 15:25:05.856 250022 DEBUG nova.network.neutron [req-21a5db7c-20aa-42ef-bb2e-429ec7d0a283 req-f4f4e143-6d11-412e-bba9-d7821f1ad987 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: d92c0bb3-a866-40ac-a664-d100a6dbbc6b] Updating instance_info_cache with network_info: [{"id": "3a19935a-bf2c-4d28-b687-e8294d1773dc", "address": "fa:16:3e:04:8d:2f", "network": {"id": "9aa12a4e-d8a3-46c3-928f-fca08b856d45", "bridge": "br-int", "label": "tempest-network-smoke--1838033144", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1ed5feeeafe7448a8efb47ab975b0ead", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3a19935a-bf", "ovs_interfaceid": "3a19935a-bf2c-4d28-b687-e8294d1773dc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:25:05 compute-0 nova_compute[250018]: 2026-01-20 15:25:05.868 250022 DEBUG oslo_concurrency.processutils [None req-e6d281c5-9fe6-497f-a584-a4048f7e70e8 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:25:05 compute-0 nova_compute[250018]: 2026-01-20 15:25:05.897 250022 DEBUG oslo_concurrency.lockutils [req-21a5db7c-20aa-42ef-bb2e-429ec7d0a283 req-f4f4e143-6d11-412e-bba9-d7821f1ad987 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-d92c0bb3-a866-40ac-a664-d100a6dbbc6b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:25:05 compute-0 nova_compute[250018]: 2026-01-20 15:25:05.907 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:25:05 compute-0 ceph-mon[74360]: pgmap v3087: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 7.4 KiB/s rd, 12 KiB/s wr, 8 op/s
Jan 20 15:25:06 compute-0 nova_compute[250018]: 2026-01-20 15:25:06.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:25:06 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:25:06 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:25:06 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:25:06.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:25:06 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:25:06 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3941704266' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:25:06 compute-0 nova_compute[250018]: 2026-01-20 15:25:06.313 250022 DEBUG oslo_concurrency.processutils [None req-e6d281c5-9fe6-497f-a584-a4048f7e70e8 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:25:06 compute-0 nova_compute[250018]: 2026-01-20 15:25:06.319 250022 DEBUG nova.compute.provider_tree [None req-e6d281c5-9fe6-497f-a584-a4048f7e70e8 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:25:06 compute-0 nova_compute[250018]: 2026-01-20 15:25:06.384 250022 DEBUG nova.scheduler.client.report [None req-e6d281c5-9fe6-497f-a584-a4048f7e70e8 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:25:06 compute-0 nova_compute[250018]: 2026-01-20 15:25:06.419 250022 DEBUG oslo_concurrency.lockutils [None req-e6d281c5-9fe6-497f-a584-a4048f7e70e8 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.616s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:25:06 compute-0 nova_compute[250018]: 2026-01-20 15:25:06.450 250022 INFO nova.scheduler.client.report [None req-e6d281c5-9fe6-497f-a584-a4048f7e70e8 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Deleted allocations for instance d92c0bb3-a866-40ac-a664-d100a6dbbc6b
Jan 20 15:25:06 compute-0 nova_compute[250018]: 2026-01-20 15:25:06.511 250022 DEBUG oslo_concurrency.lockutils [None req-e6d281c5-9fe6-497f-a584-a4048f7e70e8 442a7a5cb8ea426a82be9762b262d171 1ed5feeeafe7448a8efb47ab975b0ead - - default default] Lock "d92c0bb3-a866-40ac-a664-d100a6dbbc6b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.864s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:25:06 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3088: 321 pgs: 321 active+clean; 134 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 21 KiB/s rd, 13 KiB/s wr, 29 op/s
Jan 20 15:25:06 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:25:06 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:25:06 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:25:06.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:25:06 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3941704266' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:25:07 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:25:07 compute-0 ceph-mon[74360]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 20 15:25:07 compute-0 ceph-mon[74360]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 5400.0 total, 600.0 interval
                                           Cumulative writes: 15K writes, 69K keys, 15K commit groups, 1.0 writes per commit group, ingest: 0.10 GB, 0.02 MB/s
                                           Cumulative WAL: 15K writes, 15K syncs, 1.00 writes per sync, written: 0.10 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1598 writes, 6800 keys, 1598 commit groups, 1.0 writes per commit group, ingest: 10.45 MB, 0.02 MB/s
                                           Interval WAL: 1598 writes, 1598 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     53.0      1.70              0.29        47    0.036       0      0       0.0       0.0
                                             L6      1/0   10.81 MB   0.0      0.5     0.1      0.4       0.4      0.0       0.0   5.1    142.5    121.5      3.77              1.41        46    0.082    333K    25K       0.0       0.0
                                            Sum      1/0   10.81 MB   0.0      0.5     0.1      0.4       0.5      0.1       0.0   6.1     98.2    100.2      5.47              1.70        93    0.059    333K    25K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   6.7    147.7    150.6      0.45              0.18        10    0.045     49K   2595       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.5     0.1      0.4       0.4      0.0       0.0   0.0    142.5    121.5      3.77              1.41        46    0.082    333K    25K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     53.1      1.70              0.29        46    0.037       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     14.7      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 5400.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.088, interval 0.010
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.54 GB write, 0.10 MB/s write, 0.52 GB read, 0.10 MB/s read, 5.5 seconds
                                           Interval compaction: 0.07 GB write, 0.11 MB/s write, 0.06 GB read, 0.11 MB/s read, 0.4 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5576af6451f0#2 capacity: 304.00 MB usage: 60.12 MB table_size: 0 occupancy: 18446744073709551615 collections: 10 last_copies: 0 last_secs: 0.000475 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3484,57.70 MB,18.9794%) FilterBlock(94,924.42 KB,0.296959%) IndexBlock(94,1.52 MB,0.499474%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 20 15:25:07 compute-0 ceph-mon[74360]: pgmap v3088: 321 pgs: 321 active+clean; 134 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 21 KiB/s rd, 13 KiB/s wr, 29 op/s
Jan 20 15:25:07 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/1643207792' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:25:07 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/216630041' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:25:08 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:25:08 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:25:08 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:25:08.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:25:08 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3089: 321 pgs: 321 active+clean; 134 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 21 KiB/s rd, 1.2 KiB/s wr, 29 op/s
Jan 20 15:25:08 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:25:08 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:25:08 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:25:08.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:25:08 compute-0 nova_compute[250018]: 2026-01-20 15:25:08.985 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:25:09 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:25:09.394 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=367c1a2c-b16a-4828-ab5a-626bb50023b4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '71'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:25:09 compute-0 ceph-mon[74360]: pgmap v3089: 321 pgs: 321 active+clean; 134 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 21 KiB/s rd, 1.2 KiB/s wr, 29 op/s
Jan 20 15:25:09 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/4144926388' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:25:10 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:25:10 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:25:10 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:25:10.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:25:10 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3090: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 23 KiB/s rd, 1.2 KiB/s wr, 32 op/s
Jan 20 15:25:10 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:25:10 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:25:10 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:25:10.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:25:10 compute-0 nova_compute[250018]: 2026-01-20 15:25:10.941 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:25:11 compute-0 nova_compute[250018]: 2026-01-20 15:25:11.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:25:11 compute-0 nova_compute[250018]: 2026-01-20 15:25:11.536 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:25:11 compute-0 nova_compute[250018]: 2026-01-20 15:25:11.537 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:25:11 compute-0 nova_compute[250018]: 2026-01-20 15:25:11.537 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:25:11 compute-0 nova_compute[250018]: 2026-01-20 15:25:11.537 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 15:25:11 compute-0 nova_compute[250018]: 2026-01-20 15:25:11.537 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:25:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 15:25:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:25:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 20 15:25:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:25:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 20 15:25:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:25:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021614147124511445 of space, bias 1.0, pg target 0.6484244137353433 quantized to 32 (current 32)
Jan 20 15:25:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:25:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 20 15:25:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:25:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 20 15:25:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:25:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 20 15:25:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:25:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 15:25:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:25:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 20 15:25:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:25:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 20 15:25:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:25:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 15:25:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:25:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 20 15:25:11 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:25:11 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/100076280' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:25:11 compute-0 ceph-mon[74360]: pgmap v3090: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 23 KiB/s rd, 1.2 KiB/s wr, 32 op/s
Jan 20 15:25:11 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/100076280' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:25:11 compute-0 nova_compute[250018]: 2026-01-20 15:25:11.992 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:25:12 compute-0 nova_compute[250018]: 2026-01-20 15:25:12.163 250022 WARNING nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:25:12 compute-0 nova_compute[250018]: 2026-01-20 15:25:12.164 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4135MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 15:25:12 compute-0 nova_compute[250018]: 2026-01-20 15:25:12.164 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:25:12 compute-0 nova_compute[250018]: 2026-01-20 15:25:12.165 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:25:12 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:25:12 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:25:12 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:25:12.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:25:12 compute-0 nova_compute[250018]: 2026-01-20 15:25:12.236 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 15:25:12 compute-0 nova_compute[250018]: 2026-01-20 15:25:12.237 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 15:25:12 compute-0 nova_compute[250018]: 2026-01-20 15:25:12.263 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:25:12 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:25:12 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/955133676' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:25:12 compute-0 nova_compute[250018]: 2026-01-20 15:25:12.727 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:25:12 compute-0 nova_compute[250018]: 2026-01-20 15:25:12.732 250022 DEBUG nova.compute.provider_tree [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:25:12 compute-0 nova_compute[250018]: 2026-01-20 15:25:12.753 250022 DEBUG nova.scheduler.client.report [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:25:12 compute-0 nova_compute[250018]: 2026-01-20 15:25:12.794 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 15:25:12 compute-0 nova_compute[250018]: 2026-01-20 15:25:12.795 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.630s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:25:12 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:25:12 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3091: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 23 KiB/s rd, 1.2 KiB/s wr, 32 op/s
Jan 20 15:25:12 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:25:12 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:25:12 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:25:12.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:25:12 compute-0 ceph-mon[74360]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #156. Immutable memtables: 0.
Jan 20 15:25:12 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:25:12.988618) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 15:25:12 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:856] [default] [JOB 95] Flushing memtable with next log file: 156
Jan 20 15:25:12 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768922712988675, "job": 95, "event": "flush_started", "num_memtables": 1, "num_entries": 1117, "num_deletes": 256, "total_data_size": 1764103, "memory_usage": 1786736, "flush_reason": "Manual Compaction"}
Jan 20 15:25:12 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:885] [default] [JOB 95] Level-0 flush table #157: started
Jan 20 15:25:12 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768922712998894, "cf_name": "default", "job": 95, "event": "table_file_creation", "file_number": 157, "file_size": 1732615, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 68591, "largest_seqno": 69707, "table_properties": {"data_size": 1727336, "index_size": 2738, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1477, "raw_key_size": 11445, "raw_average_key_size": 19, "raw_value_size": 1716618, "raw_average_value_size": 2919, "num_data_blocks": 122, "num_entries": 588, "num_filter_entries": 588, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768922611, "oldest_key_time": 1768922611, "file_creation_time": 1768922712, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1688fb6f-f397-4aac-a0e8-874ff91d97a7", "db_session_id": "5V3N2TVXYZBCXP55EZHK", "orig_file_number": 157, "seqno_to_time_mapping": "N/A"}}
Jan 20 15:25:12 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 95] Flush lasted 10312 microseconds, and 4634 cpu microseconds.
Jan 20 15:25:12 compute-0 ceph-mon[74360]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 15:25:13 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:25:12.998934) [db/flush_job.cc:967] [default] [JOB 95] Level-0 flush table #157: 1732615 bytes OK
Jan 20 15:25:13 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:25:12.998954) [db/memtable_list.cc:519] [default] Level-0 commit table #157 started
Jan 20 15:25:13 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:25:13.000658) [db/memtable_list.cc:722] [default] Level-0 commit table #157: memtable #1 done
Jan 20 15:25:13 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:25:13.000674) EVENT_LOG_v1 {"time_micros": 1768922713000669, "job": 95, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 15:25:13 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:25:13.000693) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 15:25:13 compute-0 ceph-mon[74360]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 95] Try to delete WAL files size 1759052, prev total WAL file size 1775701, number of live WAL files 2.
Jan 20 15:25:13 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000153.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:25:13 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:25:13.001485) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032373731' seq:72057594037927935, type:22 .. '6C6F676D0033303233' seq:0, type:0; will stop at (end)
Jan 20 15:25:13 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 96] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 15:25:13 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 95 Base level 0, inputs: [157(1692KB)], [155(10MB)]
Jan 20 15:25:13 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768922713001565, "job": 96, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [157], "files_L6": [155], "score": -1, "input_data_size": 13062708, "oldest_snapshot_seqno": -1}
Jan 20 15:25:13 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/1344433681' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:25:13 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/955133676' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:25:13 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 96] Generated table #158: 9730 keys, 12927152 bytes, temperature: kUnknown
Jan 20 15:25:13 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768922713076013, "cf_name": "default", "job": 96, "event": "table_file_creation", "file_number": 158, "file_size": 12927152, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12863633, "index_size": 38062, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 24389, "raw_key_size": 256651, "raw_average_key_size": 26, "raw_value_size": 12692348, "raw_average_value_size": 1304, "num_data_blocks": 1454, "num_entries": 9730, "num_filter_entries": 9730, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768917305, "oldest_key_time": 0, "file_creation_time": 1768922713, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1688fb6f-f397-4aac-a0e8-874ff91d97a7", "db_session_id": "5V3N2TVXYZBCXP55EZHK", "orig_file_number": 158, "seqno_to_time_mapping": "N/A"}}
Jan 20 15:25:13 compute-0 ceph-mon[74360]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 15:25:13 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:25:13.076259) [db/compaction/compaction_job.cc:1663] [default] [JOB 96] Compacted 1@0 + 1@6 files to L6 => 12927152 bytes
Jan 20 15:25:13 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:25:13.077363) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 175.3 rd, 173.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 10.8 +0.0 blob) out(12.3 +0.0 blob), read-write-amplify(15.0) write-amplify(7.5) OK, records in: 10255, records dropped: 525 output_compression: NoCompression
Jan 20 15:25:13 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:25:13.077393) EVENT_LOG_v1 {"time_micros": 1768922713077370, "job": 96, "event": "compaction_finished", "compaction_time_micros": 74514, "compaction_time_cpu_micros": 31725, "output_level": 6, "num_output_files": 1, "total_output_size": 12927152, "num_input_records": 10255, "num_output_records": 9730, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 15:25:13 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000157.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:25:13 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768922713077760, "job": 96, "event": "table_file_deletion", "file_number": 157}
Jan 20 15:25:13 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000155.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:25:13 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768922713079958, "job": 96, "event": "table_file_deletion", "file_number": 155}
Jan 20 15:25:13 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:25:13.001304) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:25:13 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:25:13.080038) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:25:13 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:25:13.080045) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:25:13 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:25:13.080047) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:25:13 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:25:13.080049) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:25:13 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:25:13.080050) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:25:13 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 20 15:25:13 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3963659316' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:25:13 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 20 15:25:13 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3963659316' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:25:13 compute-0 nova_compute[250018]: 2026-01-20 15:25:13.829 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:25:13 compute-0 sshd-session[375361]: userauth_pubkey: signature algorithm ssh-rsa not in PubkeyAcceptedAlgorithms [preauth]
Jan 20 15:25:13 compute-0 nova_compute[250018]: 2026-01-20 15:25:13.903 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:25:13 compute-0 nova_compute[250018]: 2026-01-20 15:25:13.987 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:25:14 compute-0 ceph-mon[74360]: pgmap v3091: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 23 KiB/s rd, 1.2 KiB/s wr, 32 op/s
Jan 20 15:25:14 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/3963659316' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:25:14 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/3963659316' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:25:14 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:25:14 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:25:14 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:25:14.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:25:14 compute-0 nova_compute[250018]: 2026-01-20 15:25:14.795 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:25:14 compute-0 nova_compute[250018]: 2026-01-20 15:25:14.796 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:25:14 compute-0 nova_compute[250018]: 2026-01-20 15:25:14.796 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:25:14 compute-0 nova_compute[250018]: 2026-01-20 15:25:14.796 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 15:25:14 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3092: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 22 KiB/s rd, 1.2 KiB/s wr, 31 op/s
Jan 20 15:25:14 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:25:14 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:25:14 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:25:14.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:25:15 compute-0 nova_compute[250018]: 2026-01-20 15:25:15.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:25:15 compute-0 nova_compute[250018]: 2026-01-20 15:25:15.943 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:25:16 compute-0 ceph-mon[74360]: pgmap v3092: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 22 KiB/s rd, 1.2 KiB/s wr, 31 op/s
Jan 20 15:25:16 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:25:16 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:25:16 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:25:16.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:25:16 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3093: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 16 KiB/s rd, 1.2 KiB/s wr, 24 op/s
Jan 20 15:25:16 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:25:16 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:25:16 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:25:16.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:25:17 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:25:18 compute-0 ceph-mon[74360]: pgmap v3093: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 16 KiB/s rd, 1.2 KiB/s wr, 24 op/s
Jan 20 15:25:18 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:25:18 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:25:18 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:25:18.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:25:18 compute-0 podman[375368]: 2026-01-20 15:25:18.451772205 +0000 UTC m=+0.047948989 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 20 15:25:18 compute-0 podman[375367]: 2026-01-20 15:25:18.48279137 +0000 UTC m=+0.078960204 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 15:25:18 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3094: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 3 op/s
Jan 20 15:25:18 compute-0 nova_compute[250018]: 2026-01-20 15:25:18.887 250022 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768922703.8868072, d92c0bb3-a866-40ac-a664-d100a6dbbc6b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:25:18 compute-0 nova_compute[250018]: 2026-01-20 15:25:18.888 250022 INFO nova.compute.manager [-] [instance: d92c0bb3-a866-40ac-a664-d100a6dbbc6b] VM Stopped (Lifecycle Event)
Jan 20 15:25:18 compute-0 nova_compute[250018]: 2026-01-20 15:25:18.915 250022 DEBUG nova.compute.manager [None req-d17fbc0b-3887-470c-abf9-c6adb77e607d - - - - - -] [instance: d92c0bb3-a866-40ac-a664-d100a6dbbc6b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:25:18 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:25:18 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:25:18 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:25:18.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:25:19 compute-0 nova_compute[250018]: 2026-01-20 15:25:19.045 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:25:20 compute-0 ceph-mon[74360]: pgmap v3094: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 3 op/s
Jan 20 15:25:20 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:25:20 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:25:20 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:25:20.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:25:20 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3095: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 3 op/s
Jan 20 15:25:20 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:25:20 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:25:20 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:25:20.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:25:20 compute-0 nova_compute[250018]: 2026-01-20 15:25:20.945 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:25:22 compute-0 ceph-mon[74360]: pgmap v3095: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 3 op/s
Jan 20 15:25:22 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:25:22 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:25:22 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:25:22.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:25:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:25:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:25:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:25:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:25:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:25:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:25:22 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3096: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:25:22 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:25:22 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:25:22 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:25:22.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:25:22 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:25:23 compute-0 nova_compute[250018]: 2026-01-20 15:25:23.044 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:25:23 compute-0 nova_compute[250018]: 2026-01-20 15:25:23.067 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:25:23 compute-0 sshd-session[375361]: Connection closed by authenticating user root 139.19.117.129 port 54234 [preauth]
Jan 20 15:25:24 compute-0 nova_compute[250018]: 2026-01-20 15:25:24.081 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:25:24 compute-0 ceph-mon[74360]: pgmap v3096: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:25:24 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:25:24 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:25:24 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:25:24.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:25:24 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3097: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:25:24 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:25:24 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:25:24 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:25:24.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:25:25 compute-0 sudo[375416]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:25:25 compute-0 sudo[375416]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:25:25 compute-0 sudo[375416]: pam_unix(sudo:session): session closed for user root
Jan 20 15:25:25 compute-0 sudo[375441]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:25:25 compute-0 sudo[375441]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:25:25 compute-0 sudo[375441]: pam_unix(sudo:session): session closed for user root
Jan 20 15:25:25 compute-0 nova_compute[250018]: 2026-01-20 15:25:25.945 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:25:26 compute-0 ceph-mon[74360]: pgmap v3097: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:25:26 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:25:26 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:25:26 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:25:26.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:25:26 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3098: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:25:26 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:25:26 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:25:26 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:25:26.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:25:27 compute-0 nova_compute[250018]: 2026-01-20 15:25:27.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:25:27 compute-0 nova_compute[250018]: 2026-01-20 15:25:27.051 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 15:25:27 compute-0 nova_compute[250018]: 2026-01-20 15:25:27.051 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 15:25:27 compute-0 nova_compute[250018]: 2026-01-20 15:25:27.070 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 20 15:25:27 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:25:28 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:25:28 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:25:28 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:25:28.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:25:28 compute-0 ceph-mon[74360]: pgmap v3098: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:25:28 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3099: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:25:28 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:25:28 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:25:28 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:25:28.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:25:29 compute-0 nova_compute[250018]: 2026-01-20 15:25:29.141 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:25:30 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:25:30 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:25:30 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:25:30.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:25:30 compute-0 ceph-mon[74360]: pgmap v3099: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:25:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:25:30.795 160071 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:25:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:25:30.795 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:25:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:25:30.796 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:25:30 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3100: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:25:30 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:25:30 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:25:30 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:25:30.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:25:30 compute-0 nova_compute[250018]: 2026-01-20 15:25:30.946 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:25:32 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:25:32 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:25:32 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:25:32.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:25:32 compute-0 ceph-mon[74360]: pgmap v3100: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:25:32 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3101: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:25:32 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:25:32 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:25:32 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:25:32.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:25:32 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:25:34 compute-0 nova_compute[250018]: 2026-01-20 15:25:34.185 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:25:34 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:25:34 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:25:34 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:25:34.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:25:34 compute-0 ceph-mon[74360]: pgmap v3101: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:25:34 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3102: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:25:34 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:25:34 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:25:34 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:25:34.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:25:35 compute-0 nova_compute[250018]: 2026-01-20 15:25:35.947 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:25:36 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:25:36 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:25:36 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:25:36.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:25:36 compute-0 ceph-mon[74360]: pgmap v3102: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:25:36 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3103: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:25:36 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:25:36 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:25:36 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:25:36.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:25:37 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:25:38 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:25:38 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:25:38 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:25:38.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:25:38 compute-0 ceph-mon[74360]: pgmap v3103: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:25:38 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3104: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:25:38 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:25:38 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:25:38 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:25:38.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:25:39 compute-0 nova_compute[250018]: 2026-01-20 15:25:39.187 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:25:40 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:25:40 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:25:40 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:25:40.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:25:40 compute-0 ceph-mon[74360]: pgmap v3104: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:25:40 compute-0 sshd-session[375473]: Invalid user test from 134.122.57.138 port 57820
Jan 20 15:25:40 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3105: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:25:40 compute-0 nova_compute[250018]: 2026-01-20 15:25:40.950 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:25:40 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:25:40 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:25:40 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:25:40.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:25:41 compute-0 sshd-session[375473]: Connection closed by invalid user test 134.122.57.138 port 57820 [preauth]
Jan 20 15:25:42 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:25:42 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:25:42 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:25:42.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:25:42 compute-0 ceph-mon[74360]: pgmap v3105: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:25:42 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3106: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:25:42 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:25:42 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:25:42 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:25:42.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:25:42 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:25:44 compute-0 nova_compute[250018]: 2026-01-20 15:25:44.189 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:25:44 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:25:44 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:25:44 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:25:44.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:25:44 compute-0 ceph-mon[74360]: pgmap v3106: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:25:44 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3107: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:25:44 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:25:44 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:25:44 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:25:44.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:25:45 compute-0 sudo[375479]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:25:45 compute-0 sudo[375479]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:25:45 compute-0 sudo[375479]: pam_unix(sudo:session): session closed for user root
Jan 20 15:25:45 compute-0 sudo[375504]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:25:45 compute-0 sudo[375504]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:25:45 compute-0 sudo[375504]: pam_unix(sudo:session): session closed for user root
Jan 20 15:25:45 compute-0 nova_compute[250018]: 2026-01-20 15:25:45.952 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:25:46 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:25:46 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:25:46 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:25:46.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:25:46 compute-0 ceph-mon[74360]: pgmap v3107: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:25:46 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3108: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:25:46 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:25:46 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:25:46 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:25:46.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:25:47 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/3360934547' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:25:47 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:25:48 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:25:48 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:25:48 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:25:48.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:25:48 compute-0 ceph-mon[74360]: pgmap v3108: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:25:48 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3109: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:25:48 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:25:48 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:25:48 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:25:48.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:25:49 compute-0 nova_compute[250018]: 2026-01-20 15:25:49.190 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:25:49 compute-0 podman[375532]: 2026-01-20 15:25:49.504881171 +0000 UTC m=+0.093272434 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller)
Jan 20 15:25:49 compute-0 podman[375533]: 2026-01-20 15:25:49.524693721 +0000 UTC m=+0.097836198 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 20 15:25:50 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:25:50 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:25:50 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:25:50.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:25:50 compute-0 ceph-mon[74360]: pgmap v3109: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:25:50 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3110: 321 pgs: 321 active+clean; 154 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 10 KiB/s rd, 1.5 MiB/s wr, 14 op/s
Jan 20 15:25:50 compute-0 nova_compute[250018]: 2026-01-20 15:25:50.954 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:25:50 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:25:50 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:25:50 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:25:50.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:25:52 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:25:52 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:25:52 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:25:52.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:25:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:25:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:25:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:25:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:25:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:25:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:25:52 compute-0 ceph-mon[74360]: pgmap v3110: 321 pgs: 321 active+clean; 154 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 10 KiB/s rd, 1.5 MiB/s wr, 14 op/s
Jan 20 15:25:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Optimize plan auto_2026-01-20_15:25:52
Jan 20 15:25:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 15:25:52 compute-0 ceph-mgr[74653]: [balancer INFO root] do_upmap
Jan 20 15:25:52 compute-0 ceph-mgr[74653]: [balancer INFO root] pools ['cephfs.cephfs.data', '.rgw.root', '.mgr', 'backups', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.meta', 'default.rgw.log', 'default.rgw.control', 'vms', 'images']
Jan 20 15:25:52 compute-0 ceph-mgr[74653]: [balancer INFO root] prepared 0/10 changes
Jan 20 15:25:52 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3111: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:25:52 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:25:52 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:25:52 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:25:52 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:25:52.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:25:53 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/3613554405' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:25:53 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/4237637953' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:25:54 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:25:54 compute-0 nova_compute[250018]: 2026-01-20 15:25:54.239 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:25:54 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:25:54 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:25:54.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:25:54 compute-0 ceph-mon[74360]: pgmap v3111: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:25:54 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3112: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:25:54 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:25:54 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:25:54 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:25:54.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:25:55 compute-0 nova_compute[250018]: 2026-01-20 15:25:55.957 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:25:56 compute-0 nova_compute[250018]: 2026-01-20 15:25:56.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:25:56 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:25:56 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:25:56 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:25:56.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:25:56 compute-0 ceph-mon[74360]: pgmap v3112: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:25:56 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3113: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 149 KiB/s rd, 1.8 MiB/s wr, 41 op/s
Jan 20 15:25:56 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:25:56 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:25:56 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:25:56.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:25:57 compute-0 sudo[375580]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:25:57 compute-0 sudo[375580]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:25:57 compute-0 sudo[375580]: pam_unix(sudo:session): session closed for user root
Jan 20 15:25:57 compute-0 sudo[375605]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:25:57 compute-0 sudo[375605]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:25:57 compute-0 sudo[375605]: pam_unix(sudo:session): session closed for user root
Jan 20 15:25:57 compute-0 sudo[375630]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:25:57 compute-0 sudo[375630]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:25:57 compute-0 sudo[375630]: pam_unix(sudo:session): session closed for user root
Jan 20 15:25:57 compute-0 sudo[375655]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 15:25:57 compute-0 sudo[375655]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:25:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 15:25:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 15:25:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 15:25:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 15:25:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 15:25:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 15:25:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 15:25:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 15:25:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 15:25:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 15:25:57 compute-0 sudo[375655]: pam_unix(sudo:session): session closed for user root
Jan 20 15:25:57 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:25:58 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 15:25:58 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:25:58 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 20 15:25:58 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 15:25:58 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 20 15:25:58 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:25:58 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 776850a3-fb4a-4029-a190-f3837fb4bbbd does not exist
Jan 20 15:25:58 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev c3d04ecc-ed96-4427-a008-1b2d0c3c241c does not exist
Jan 20 15:25:58 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 6d2f8aee-d63d-45db-a332-ea848e7b5400 does not exist
Jan 20 15:25:58 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 20 15:25:58 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 15:25:58 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 20 15:25:58 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 15:25:58 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 15:25:58 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:25:58 compute-0 sudo[375711]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:25:58 compute-0 sudo[375711]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:25:58 compute-0 sudo[375711]: pam_unix(sudo:session): session closed for user root
Jan 20 15:25:58 compute-0 sudo[375736]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:25:58 compute-0 sudo[375736]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:25:58 compute-0 sudo[375736]: pam_unix(sudo:session): session closed for user root
Jan 20 15:25:58 compute-0 sudo[375761]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:25:58 compute-0 sudo[375761]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:25:58 compute-0 sudo[375761]: pam_unix(sudo:session): session closed for user root
Jan 20 15:25:58 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:25:58 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:25:58 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:25:58.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:25:58 compute-0 sudo[375786]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 15:25:58 compute-0 sudo[375786]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:25:58 compute-0 podman[375849]: 2026-01-20 15:25:58.546051652 +0000 UTC m=+0.038971823 container create 6f120358f5479453ca6c1b33df36317acddbe68149d64e929be02037ae8e3fc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_mirzakhani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 20 15:25:58 compute-0 systemd[1]: Started libpod-conmon-6f120358f5479453ca6c1b33df36317acddbe68149d64e929be02037ae8e3fc8.scope.
Jan 20 15:25:58 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:25:58 compute-0 podman[375849]: 2026-01-20 15:25:58.621859369 +0000 UTC m=+0.114779550 container init 6f120358f5479453ca6c1b33df36317acddbe68149d64e929be02037ae8e3fc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_mirzakhani, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 15:25:58 compute-0 podman[375849]: 2026-01-20 15:25:58.528965496 +0000 UTC m=+0.021885687 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:25:58 compute-0 podman[375849]: 2026-01-20 15:25:58.628369066 +0000 UTC m=+0.121289237 container start 6f120358f5479453ca6c1b33df36317acddbe68149d64e929be02037ae8e3fc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_mirzakhani, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:25:58 compute-0 podman[375849]: 2026-01-20 15:25:58.631425389 +0000 UTC m=+0.124345560 container attach 6f120358f5479453ca6c1b33df36317acddbe68149d64e929be02037ae8e3fc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_mirzakhani, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 15:25:58 compute-0 pedantic_mirzakhani[375866]: 167 167
Jan 20 15:25:58 compute-0 systemd[1]: libpod-6f120358f5479453ca6c1b33df36317acddbe68149d64e929be02037ae8e3fc8.scope: Deactivated successfully.
Jan 20 15:25:58 compute-0 conmon[375866]: conmon 6f120358f5479453ca6c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6f120358f5479453ca6c1b33df36317acddbe68149d64e929be02037ae8e3fc8.scope/container/memory.events
Jan 20 15:25:58 compute-0 podman[375849]: 2026-01-20 15:25:58.633602499 +0000 UTC m=+0.126522670 container died 6f120358f5479453ca6c1b33df36317acddbe68149d64e929be02037ae8e3fc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_mirzakhani, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 15:25:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-357f977655b799ee5cf959fd280b0c7961aac8e1edfce5491a49b43bba971e01-merged.mount: Deactivated successfully.
Jan 20 15:25:58 compute-0 podman[375849]: 2026-01-20 15:25:58.67252254 +0000 UTC m=+0.165442711 container remove 6f120358f5479453ca6c1b33df36317acddbe68149d64e929be02037ae8e3fc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_mirzakhani, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:25:58 compute-0 systemd[1]: libpod-conmon-6f120358f5479453ca6c1b33df36317acddbe68149d64e929be02037ae8e3fc8.scope: Deactivated successfully.
Jan 20 15:25:58 compute-0 ceph-mon[74360]: pgmap v3113: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 149 KiB/s rd, 1.8 MiB/s wr, 41 op/s
Jan 20 15:25:58 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:25:58 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 15:25:58 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:25:58 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 15:25:58 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 15:25:58 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:25:58 compute-0 podman[375889]: 2026-01-20 15:25:58.82992758 +0000 UTC m=+0.040752722 container create 0ab796d7910685b52d401365dbe35ec50061ba729ba1b1af8ebda3d862fea8a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_lehmann, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 20 15:25:58 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3114: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 149 KiB/s rd, 1.8 MiB/s wr, 41 op/s
Jan 20 15:25:58 compute-0 systemd[1]: Started libpod-conmon-0ab796d7910685b52d401365dbe35ec50061ba729ba1b1af8ebda3d862fea8a1.scope.
Jan 20 15:25:58 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:25:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bbdd8f03615273dc19da30f7fe64697c2d2bfdad06203996b4d28a2bda4c09a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 15:25:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bbdd8f03615273dc19da30f7fe64697c2d2bfdad06203996b4d28a2bda4c09a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 15:25:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bbdd8f03615273dc19da30f7fe64697c2d2bfdad06203996b4d28a2bda4c09a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 15:25:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bbdd8f03615273dc19da30f7fe64697c2d2bfdad06203996b4d28a2bda4c09a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 15:25:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bbdd8f03615273dc19da30f7fe64697c2d2bfdad06203996b4d28a2bda4c09a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 15:25:58 compute-0 podman[375889]: 2026-01-20 15:25:58.894918712 +0000 UTC m=+0.105743884 container init 0ab796d7910685b52d401365dbe35ec50061ba729ba1b1af8ebda3d862fea8a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_lehmann, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True)
Jan 20 15:25:58 compute-0 podman[375889]: 2026-01-20 15:25:58.902660923 +0000 UTC m=+0.113486075 container start 0ab796d7910685b52d401365dbe35ec50061ba729ba1b1af8ebda3d862fea8a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_lehmann, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 15:25:58 compute-0 podman[375889]: 2026-01-20 15:25:58.905920721 +0000 UTC m=+0.116745923 container attach 0ab796d7910685b52d401365dbe35ec50061ba729ba1b1af8ebda3d862fea8a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_lehmann, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 20 15:25:58 compute-0 podman[375889]: 2026-01-20 15:25:58.813988886 +0000 UTC m=+0.024814068 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:25:58 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:25:58 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:25:58 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:25:58.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:25:59 compute-0 nova_compute[250018]: 2026-01-20 15:25:59.266 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:25:59 compute-0 keen_lehmann[375905]: --> passed data devices: 0 physical, 1 LVM
Jan 20 15:25:59 compute-0 keen_lehmann[375905]: --> relative data size: 1.0
Jan 20 15:25:59 compute-0 keen_lehmann[375905]: --> All data devices are unavailable
Jan 20 15:25:59 compute-0 systemd[1]: libpod-0ab796d7910685b52d401365dbe35ec50061ba729ba1b1af8ebda3d862fea8a1.scope: Deactivated successfully.
Jan 20 15:25:59 compute-0 podman[375889]: 2026-01-20 15:25:59.717226507 +0000 UTC m=+0.928051669 container died 0ab796d7910685b52d401365dbe35ec50061ba729ba1b1af8ebda3d862fea8a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_lehmann, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 15:25:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-5bbdd8f03615273dc19da30f7fe64697c2d2bfdad06203996b4d28a2bda4c09a-merged.mount: Deactivated successfully.
Jan 20 15:25:59 compute-0 podman[375889]: 2026-01-20 15:25:59.777598853 +0000 UTC m=+0.988424005 container remove 0ab796d7910685b52d401365dbe35ec50061ba729ba1b1af8ebda3d862fea8a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_lehmann, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 20 15:25:59 compute-0 systemd[1]: libpod-conmon-0ab796d7910685b52d401365dbe35ec50061ba729ba1b1af8ebda3d862fea8a1.scope: Deactivated successfully.
Jan 20 15:25:59 compute-0 sudo[375786]: pam_unix(sudo:session): session closed for user root
Jan 20 15:25:59 compute-0 sudo[375932]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:25:59 compute-0 sudo[375932]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:25:59 compute-0 sudo[375932]: pam_unix(sudo:session): session closed for user root
Jan 20 15:25:59 compute-0 sudo[375957]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:25:59 compute-0 sudo[375957]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:25:59 compute-0 sudo[375957]: pam_unix(sudo:session): session closed for user root
Jan 20 15:26:00 compute-0 sudo[375982]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:26:00 compute-0 sudo[375982]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:26:00 compute-0 sudo[375982]: pam_unix(sudo:session): session closed for user root
Jan 20 15:26:00 compute-0 sudo[376007]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- lvm list --format json
Jan 20 15:26:00 compute-0 sudo[376007]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:26:00 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:26:00 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:26:00 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:26:00.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:26:00 compute-0 podman[376072]: 2026-01-20 15:26:00.428368362 +0000 UTC m=+0.036435584 container create 9183972dbc7aa605b79b5c8e45123695289c6ef3c44783ab577f29a6ec22ffb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_lichterman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2)
Jan 20 15:26:00 compute-0 systemd[1]: Started libpod-conmon-9183972dbc7aa605b79b5c8e45123695289c6ef3c44783ab577f29a6ec22ffb1.scope.
Jan 20 15:26:00 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:26:00 compute-0 podman[376072]: 2026-01-20 15:26:00.478481849 +0000 UTC m=+0.086549101 container init 9183972dbc7aa605b79b5c8e45123695289c6ef3c44783ab577f29a6ec22ffb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_lichterman, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 15:26:00 compute-0 podman[376072]: 2026-01-20 15:26:00.484090701 +0000 UTC m=+0.092157923 container start 9183972dbc7aa605b79b5c8e45123695289c6ef3c44783ab577f29a6ec22ffb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_lichterman, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 15:26:00 compute-0 cranky_lichterman[376088]: 167 167
Jan 20 15:26:00 compute-0 systemd[1]: libpod-9183972dbc7aa605b79b5c8e45123695289c6ef3c44783ab577f29a6ec22ffb1.scope: Deactivated successfully.
Jan 20 15:26:00 compute-0 conmon[376088]: conmon 9183972dbc7aa605b79b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9183972dbc7aa605b79b5c8e45123695289c6ef3c44783ab577f29a6ec22ffb1.scope/container/memory.events
Jan 20 15:26:00 compute-0 podman[376072]: 2026-01-20 15:26:00.489695554 +0000 UTC m=+0.097762796 container attach 9183972dbc7aa605b79b5c8e45123695289c6ef3c44783ab577f29a6ec22ffb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_lichterman, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 15:26:00 compute-0 podman[376072]: 2026-01-20 15:26:00.490134296 +0000 UTC m=+0.098201508 container died 9183972dbc7aa605b79b5c8e45123695289c6ef3c44783ab577f29a6ec22ffb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_lichterman, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 20 15:26:00 compute-0 podman[376072]: 2026-01-20 15:26:00.412835109 +0000 UTC m=+0.020902351 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:26:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-ffb9da5533e2641e6b249b5dd88155c0936ea0bc6db754563a74c2f6e9e445a7-merged.mount: Deactivated successfully.
Jan 20 15:26:00 compute-0 podman[376072]: 2026-01-20 15:26:00.521642544 +0000 UTC m=+0.129709766 container remove 9183972dbc7aa605b79b5c8e45123695289c6ef3c44783ab577f29a6ec22ffb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_lichterman, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 15:26:00 compute-0 systemd[1]: libpod-conmon-9183972dbc7aa605b79b5c8e45123695289c6ef3c44783ab577f29a6ec22ffb1.scope: Deactivated successfully.
Jan 20 15:26:00 compute-0 podman[376111]: 2026-01-20 15:26:00.665139596 +0000 UTC m=+0.035303553 container create e24c3f39384aa45d2838ceb942a95e79c0a2c8cae7817d08e90dee68a4e46548 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_antonelli, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 20 15:26:00 compute-0 systemd[1]: Started libpod-conmon-e24c3f39384aa45d2838ceb942a95e79c0a2c8cae7817d08e90dee68a4e46548.scope.
Jan 20 15:26:00 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:26:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec72c80f0775b8f5174db69be2a522f9019cd5021daafb3af7c3a9efe58d9ec8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 15:26:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec72c80f0775b8f5174db69be2a522f9019cd5021daafb3af7c3a9efe58d9ec8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 15:26:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec72c80f0775b8f5174db69be2a522f9019cd5021daafb3af7c3a9efe58d9ec8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 15:26:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec72c80f0775b8f5174db69be2a522f9019cd5021daafb3af7c3a9efe58d9ec8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 15:26:00 compute-0 podman[376111]: 2026-01-20 15:26:00.739278797 +0000 UTC m=+0.109442764 container init e24c3f39384aa45d2838ceb942a95e79c0a2c8cae7817d08e90dee68a4e46548 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_antonelli, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 15:26:00 compute-0 podman[376111]: 2026-01-20 15:26:00.649956652 +0000 UTC m=+0.020120639 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:26:00 compute-0 podman[376111]: 2026-01-20 15:26:00.745745123 +0000 UTC m=+0.115909080 container start e24c3f39384aa45d2838ceb942a95e79c0a2c8cae7817d08e90dee68a4e46548 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_antonelli, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 15:26:00 compute-0 podman[376111]: 2026-01-20 15:26:00.74891737 +0000 UTC m=+0.119081327 container attach e24c3f39384aa45d2838ceb942a95e79c0a2c8cae7817d08e90dee68a4e46548 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_antonelli, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 20 15:26:00 compute-0 ceph-mon[74360]: pgmap v3114: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 149 KiB/s rd, 1.8 MiB/s wr, 41 op/s
Jan 20 15:26:00 compute-0 ovn_controller[148666]: 2026-01-20T15:26:00Z|00745|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Jan 20 15:26:00 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3115: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 1.8 MiB/s wr, 89 op/s
Jan 20 15:26:00 compute-0 nova_compute[250018]: 2026-01-20 15:26:00.957 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:26:00 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:26:00 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:26:00 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:26:00.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:26:01 compute-0 vigilant_antonelli[376127]: {
Jan 20 15:26:01 compute-0 vigilant_antonelli[376127]:     "0": [
Jan 20 15:26:01 compute-0 vigilant_antonelli[376127]:         {
Jan 20 15:26:01 compute-0 vigilant_antonelli[376127]:             "devices": [
Jan 20 15:26:01 compute-0 vigilant_antonelli[376127]:                 "/dev/loop3"
Jan 20 15:26:01 compute-0 vigilant_antonelli[376127]:             ],
Jan 20 15:26:01 compute-0 vigilant_antonelli[376127]:             "lv_name": "ceph_lv0",
Jan 20 15:26:01 compute-0 vigilant_antonelli[376127]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 15:26:01 compute-0 vigilant_antonelli[376127]:             "lv_size": "7511998464",
Jan 20 15:26:01 compute-0 vigilant_antonelli[376127]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e399cf45-e6b6-5393-99f1-75c601d3f188,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afc3740a-ccee-46f8-b343-22648cc89376,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 20 15:26:01 compute-0 vigilant_antonelli[376127]:             "lv_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 15:26:01 compute-0 vigilant_antonelli[376127]:             "name": "ceph_lv0",
Jan 20 15:26:01 compute-0 vigilant_antonelli[376127]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 15:26:01 compute-0 vigilant_antonelli[376127]:             "tags": {
Jan 20 15:26:01 compute-0 vigilant_antonelli[376127]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 15:26:01 compute-0 vigilant_antonelli[376127]:                 "ceph.block_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 15:26:01 compute-0 vigilant_antonelli[376127]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 15:26:01 compute-0 vigilant_antonelli[376127]:                 "ceph.cluster_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 15:26:01 compute-0 vigilant_antonelli[376127]:                 "ceph.cluster_name": "ceph",
Jan 20 15:26:01 compute-0 vigilant_antonelli[376127]:                 "ceph.crush_device_class": "",
Jan 20 15:26:01 compute-0 vigilant_antonelli[376127]:                 "ceph.encrypted": "0",
Jan 20 15:26:01 compute-0 vigilant_antonelli[376127]:                 "ceph.osd_fsid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 15:26:01 compute-0 vigilant_antonelli[376127]:                 "ceph.osd_id": "0",
Jan 20 15:26:01 compute-0 vigilant_antonelli[376127]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 15:26:01 compute-0 vigilant_antonelli[376127]:                 "ceph.type": "block",
Jan 20 15:26:01 compute-0 vigilant_antonelli[376127]:                 "ceph.vdo": "0"
Jan 20 15:26:01 compute-0 vigilant_antonelli[376127]:             },
Jan 20 15:26:01 compute-0 vigilant_antonelli[376127]:             "type": "block",
Jan 20 15:26:01 compute-0 vigilant_antonelli[376127]:             "vg_name": "ceph_vg0"
Jan 20 15:26:01 compute-0 vigilant_antonelli[376127]:         }
Jan 20 15:26:01 compute-0 vigilant_antonelli[376127]:     ]
Jan 20 15:26:01 compute-0 vigilant_antonelli[376127]: }
Jan 20 15:26:01 compute-0 systemd[1]: libpod-e24c3f39384aa45d2838ceb942a95e79c0a2c8cae7817d08e90dee68a4e46548.scope: Deactivated successfully.
Jan 20 15:26:01 compute-0 podman[376111]: 2026-01-20 15:26:01.523806362 +0000 UTC m=+0.893970319 container died e24c3f39384aa45d2838ceb942a95e79c0a2c8cae7817d08e90dee68a4e46548 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_antonelli, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 15:26:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-ec72c80f0775b8f5174db69be2a522f9019cd5021daafb3af7c3a9efe58d9ec8-merged.mount: Deactivated successfully.
Jan 20 15:26:01 compute-0 podman[376111]: 2026-01-20 15:26:01.572461419 +0000 UTC m=+0.942625376 container remove e24c3f39384aa45d2838ceb942a95e79c0a2c8cae7817d08e90dee68a4e46548 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_antonelli, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507)
Jan 20 15:26:01 compute-0 systemd[1]: libpod-conmon-e24c3f39384aa45d2838ceb942a95e79c0a2c8cae7817d08e90dee68a4e46548.scope: Deactivated successfully.
Jan 20 15:26:01 compute-0 sudo[376007]: pam_unix(sudo:session): session closed for user root
Jan 20 15:26:01 compute-0 sudo[376147]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:26:01 compute-0 sudo[376147]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:26:01 compute-0 sudo[376147]: pam_unix(sudo:session): session closed for user root
Jan 20 15:26:01 compute-0 sudo[376172]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:26:01 compute-0 sudo[376172]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:26:01 compute-0 sudo[376172]: pam_unix(sudo:session): session closed for user root
Jan 20 15:26:01 compute-0 sudo[376197]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:26:01 compute-0 sudo[376197]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:26:01 compute-0 sudo[376197]: pam_unix(sudo:session): session closed for user root
Jan 20 15:26:01 compute-0 sudo[376222]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- raw list --format json
Jan 20 15:26:01 compute-0 sudo[376222]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:26:02 compute-0 podman[376285]: 2026-01-20 15:26:02.132486855 +0000 UTC m=+0.037684279 container create 3fa77486f281d6caab40a466fd017335562092fd43197e375a95535f5e1b9e09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_jemison, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 20 15:26:02 compute-0 systemd[1]: Started libpod-conmon-3fa77486f281d6caab40a466fd017335562092fd43197e375a95535f5e1b9e09.scope.
Jan 20 15:26:02 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:26:02 compute-0 podman[376285]: 2026-01-20 15:26:02.202209435 +0000 UTC m=+0.107406899 container init 3fa77486f281d6caab40a466fd017335562092fd43197e375a95535f5e1b9e09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_jemison, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:26:02 compute-0 podman[376285]: 2026-01-20 15:26:02.208172338 +0000 UTC m=+0.113369762 container start 3fa77486f281d6caab40a466fd017335562092fd43197e375a95535f5e1b9e09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_jemison, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 15:26:02 compute-0 podman[376285]: 2026-01-20 15:26:02.115511972 +0000 UTC m=+0.020709426 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:26:02 compute-0 podman[376285]: 2026-01-20 15:26:02.211509378 +0000 UTC m=+0.116706802 container attach 3fa77486f281d6caab40a466fd017335562092fd43197e375a95535f5e1b9e09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_jemison, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 20 15:26:02 compute-0 affectionate_jemison[376301]: 167 167
Jan 20 15:26:02 compute-0 systemd[1]: libpod-3fa77486f281d6caab40a466fd017335562092fd43197e375a95535f5e1b9e09.scope: Deactivated successfully.
Jan 20 15:26:02 compute-0 conmon[376301]: conmon 3fa77486f281d6caab40 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3fa77486f281d6caab40a466fd017335562092fd43197e375a95535f5e1b9e09.scope/container/memory.events
Jan 20 15:26:02 compute-0 podman[376285]: 2026-01-20 15:26:02.218448498 +0000 UTC m=+0.123645922 container died 3fa77486f281d6caab40a466fd017335562092fd43197e375a95535f5e1b9e09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_jemison, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3)
Jan 20 15:26:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-4b131be73b4fc85b018730f794d059e54419b24e4bfed44b71df800b51425aed-merged.mount: Deactivated successfully.
Jan 20 15:26:02 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:26:02 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:26:02 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:26:02.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:26:02 compute-0 podman[376285]: 2026-01-20 15:26:02.257254516 +0000 UTC m=+0.162451940 container remove 3fa77486f281d6caab40a466fd017335562092fd43197e375a95535f5e1b9e09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_jemison, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 20 15:26:02 compute-0 systemd[1]: libpod-conmon-3fa77486f281d6caab40a466fd017335562092fd43197e375a95535f5e1b9e09.scope: Deactivated successfully.
Jan 20 15:26:02 compute-0 podman[376328]: 2026-01-20 15:26:02.399648227 +0000 UTC m=+0.037395950 container create bb863d93f63e8c49c620aaa8b57d7f54804a91738d9703aa1cdd8fe97b299a0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_turing, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 20 15:26:02 compute-0 systemd[1]: Started libpod-conmon-bb863d93f63e8c49c620aaa8b57d7f54804a91738d9703aa1cdd8fe97b299a0f.scope.
Jan 20 15:26:02 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:26:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0c6c77794325dea8db03e2771012c414257639ac99ec2d41a33046bf152296a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 15:26:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0c6c77794325dea8db03e2771012c414257639ac99ec2d41a33046bf152296a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 15:26:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0c6c77794325dea8db03e2771012c414257639ac99ec2d41a33046bf152296a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 15:26:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0c6c77794325dea8db03e2771012c414257639ac99ec2d41a33046bf152296a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 15:26:02 compute-0 podman[376328]: 2026-01-20 15:26:02.383530498 +0000 UTC m=+0.021278241 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:26:02 compute-0 podman[376328]: 2026-01-20 15:26:02.482789733 +0000 UTC m=+0.120537486 container init bb863d93f63e8c49c620aaa8b57d7f54804a91738d9703aa1cdd8fe97b299a0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_turing, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 20 15:26:02 compute-0 podman[376328]: 2026-01-20 15:26:02.491911433 +0000 UTC m=+0.129659156 container start bb863d93f63e8c49c620aaa8b57d7f54804a91738d9703aa1cdd8fe97b299a0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_turing, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 20 15:26:02 compute-0 podman[376328]: 2026-01-20 15:26:02.538495692 +0000 UTC m=+0.176243455 container attach bb863d93f63e8c49c620aaa8b57d7f54804a91738d9703aa1cdd8fe97b299a0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_turing, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Jan 20 15:26:02 compute-0 ceph-mon[74360]: pgmap v3115: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 1.8 MiB/s wr, 89 op/s
Jan 20 15:26:02 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3116: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 340 KiB/s wr, 85 op/s
Jan 20 15:26:02 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:26:02 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:26:02 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:26:02 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:26:02.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:26:03 compute-0 silly_turing[376344]: {
Jan 20 15:26:03 compute-0 silly_turing[376344]:     "afc3740a-ccee-46f8-b343-22648cc89376": {
Jan 20 15:26:03 compute-0 silly_turing[376344]:         "ceph_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 15:26:03 compute-0 silly_turing[376344]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 20 15:26:03 compute-0 silly_turing[376344]:         "osd_id": 0,
Jan 20 15:26:03 compute-0 silly_turing[376344]:         "osd_uuid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 15:26:03 compute-0 silly_turing[376344]:         "type": "bluestore"
Jan 20 15:26:03 compute-0 silly_turing[376344]:     }
Jan 20 15:26:03 compute-0 silly_turing[376344]: }
Jan 20 15:26:03 compute-0 systemd[1]: libpod-bb863d93f63e8c49c620aaa8b57d7f54804a91738d9703aa1cdd8fe97b299a0f.scope: Deactivated successfully.
Jan 20 15:26:03 compute-0 podman[376328]: 2026-01-20 15:26:03.32165707 +0000 UTC m=+0.959404793 container died bb863d93f63e8c49c620aaa8b57d7f54804a91738d9703aa1cdd8fe97b299a0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_turing, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True)
Jan 20 15:26:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-b0c6c77794325dea8db03e2771012c414257639ac99ec2d41a33046bf152296a-merged.mount: Deactivated successfully.
Jan 20 15:26:03 compute-0 podman[376328]: 2026-01-20 15:26:03.368060545 +0000 UTC m=+1.005808268 container remove bb863d93f63e8c49c620aaa8b57d7f54804a91738d9703aa1cdd8fe97b299a0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_turing, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 20 15:26:03 compute-0 systemd[1]: libpod-conmon-bb863d93f63e8c49c620aaa8b57d7f54804a91738d9703aa1cdd8fe97b299a0f.scope: Deactivated successfully.
Jan 20 15:26:03 compute-0 sudo[376222]: pam_unix(sudo:session): session closed for user root
Jan 20 15:26:03 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 15:26:03 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:26:03 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 15:26:03 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:26:03 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 8e9b768a-0e07-4040-8137-694173bd0875 does not exist
Jan 20 15:26:03 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev e08fc929-9c11-4e72-bf3d-b5fc8d58fdee does not exist
Jan 20 15:26:03 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 10aa0028-ecc7-4a96-bdbe-a8b15f1d0e62 does not exist
Jan 20 15:26:03 compute-0 sudo[376380]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:26:03 compute-0 sudo[376380]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:26:03 compute-0 sudo[376380]: pam_unix(sudo:session): session closed for user root
Jan 20 15:26:03 compute-0 sudo[376405]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 15:26:03 compute-0 sudo[376405]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:26:03 compute-0 sudo[376405]: pam_unix(sudo:session): session closed for user root
Jan 20 15:26:04 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:26:04 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:26:04 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:26:04.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:26:04 compute-0 nova_compute[250018]: 2026-01-20 15:26:04.307 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:26:04 compute-0 ceph-mon[74360]: pgmap v3116: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 340 KiB/s wr, 85 op/s
Jan 20 15:26:04 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:26:04 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:26:04 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3117: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 20 15:26:04 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:26:04 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:26:04 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:26:04.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:26:05 compute-0 sudo[376431]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:26:05 compute-0 sudo[376431]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:26:05 compute-0 sudo[376431]: pam_unix(sudo:session): session closed for user root
Jan 20 15:26:05 compute-0 sudo[376456]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:26:05 compute-0 sudo[376456]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:26:05 compute-0 sudo[376456]: pam_unix(sudo:session): session closed for user root
Jan 20 15:26:05 compute-0 nova_compute[250018]: 2026-01-20 15:26:05.959 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:26:06 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:26:06 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:26:06 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:26:06.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:26:06 compute-0 ceph-mon[74360]: pgmap v3117: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 20 15:26:06 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3118: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 511 B/s wr, 73 op/s
Jan 20 15:26:06 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:26:06 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:26:06 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:26:06.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:26:07 compute-0 nova_compute[250018]: 2026-01-20 15:26:07.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:26:07 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:26:08 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:26:08 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:26:08 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:26:08.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:26:08 compute-0 ceph-mon[74360]: pgmap v3118: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 511 B/s wr, 73 op/s
Jan 20 15:26:08 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/2608940846' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:26:08 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3119: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 58 op/s
Jan 20 15:26:08 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:26:08 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:26:08 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:26:08.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:26:09 compute-0 nova_compute[250018]: 2026-01-20 15:26:09.324 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:26:09 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/1694225511' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:26:10 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:26:10 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:26:10 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:26:10.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:26:10 compute-0 ceph-mon[74360]: pgmap v3119: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 58 op/s
Jan 20 15:26:10 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3120: 321 pgs: 321 active+clean; 184 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.2 MiB/s wr, 95 op/s
Jan 20 15:26:10 compute-0 nova_compute[250018]: 2026-01-20 15:26:10.960 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:26:10 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:26:10 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:26:10 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:26:10.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:26:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 15:26:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:26:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 20 15:26:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:26:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0016863106039456542 of space, bias 1.0, pg target 0.5058931811836963 quantized to 32 (current 32)
Jan 20 15:26:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:26:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021614147124511445 of space, bias 1.0, pg target 0.6484244137353433 quantized to 32 (current 32)
Jan 20 15:26:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:26:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 20 15:26:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:26:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 20 15:26:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:26:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 20 15:26:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:26:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 15:26:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:26:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 20 15:26:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:26:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 20 15:26:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:26:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 15:26:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:26:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 20 15:26:12 compute-0 nova_compute[250018]: 2026-01-20 15:26:12.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:26:12 compute-0 nova_compute[250018]: 2026-01-20 15:26:12.175 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:26:12 compute-0 nova_compute[250018]: 2026-01-20 15:26:12.176 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:26:12 compute-0 nova_compute[250018]: 2026-01-20 15:26:12.176 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:26:12 compute-0 nova_compute[250018]: 2026-01-20 15:26:12.176 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 15:26:12 compute-0 nova_compute[250018]: 2026-01-20 15:26:12.176 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:26:12 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:26:12 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:26:12 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:26:12.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:26:12 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:26:12 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2255426292' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:26:12 compute-0 nova_compute[250018]: 2026-01-20 15:26:12.631 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:26:12 compute-0 ceph-mon[74360]: pgmap v3120: 321 pgs: 321 active+clean; 184 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.2 MiB/s wr, 95 op/s
Jan 20 15:26:12 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/1555682087' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:26:12 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/2255426292' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:26:12 compute-0 nova_compute[250018]: 2026-01-20 15:26:12.762 250022 WARNING nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:26:12 compute-0 nova_compute[250018]: 2026-01-20 15:26:12.763 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4168MB free_disk=20.95288848876953GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 15:26:12 compute-0 nova_compute[250018]: 2026-01-20 15:26:12.764 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:26:12 compute-0 nova_compute[250018]: 2026-01-20 15:26:12.764 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:26:12 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3121: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 668 KiB/s rd, 2.1 MiB/s wr, 73 op/s
Jan 20 15:26:12 compute-0 nova_compute[250018]: 2026-01-20 15:26:12.894 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 15:26:12 compute-0 nova_compute[250018]: 2026-01-20 15:26:12.894 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 15:26:12 compute-0 nova_compute[250018]: 2026-01-20 15:26:12.910 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:26:12 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:26:12 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:26:12 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:26:12 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:26:12.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:26:13 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:26:13 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1911327537' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:26:13 compute-0 nova_compute[250018]: 2026-01-20 15:26:13.347 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:26:13 compute-0 nova_compute[250018]: 2026-01-20 15:26:13.354 250022 DEBUG nova.compute.provider_tree [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:26:13 compute-0 nova_compute[250018]: 2026-01-20 15:26:13.397 250022 DEBUG nova.scheduler.client.report [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:26:13 compute-0 nova_compute[250018]: 2026-01-20 15:26:13.398 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 15:26:13 compute-0 nova_compute[250018]: 2026-01-20 15:26:13.399 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.635s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:26:13 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 20 15:26:13 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2660355829' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:26:13 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 20 15:26:13 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2660355829' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:26:13 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/1246091011' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:26:13 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/1911327537' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:26:13 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/2660355829' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:26:13 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/2660355829' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:26:14 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:26:14 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:26:14 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:26:14.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:26:14 compute-0 nova_compute[250018]: 2026-01-20 15:26:14.326 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:26:14 compute-0 ceph-mon[74360]: pgmap v3121: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 668 KiB/s rd, 2.1 MiB/s wr, 73 op/s
Jan 20 15:26:14 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3122: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 322 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Jan 20 15:26:14 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:26:14 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:26:14 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:26:14.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:26:15 compute-0 nova_compute[250018]: 2026-01-20 15:26:15.400 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:26:15 compute-0 nova_compute[250018]: 2026-01-20 15:26:15.400 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:26:15 compute-0 nova_compute[250018]: 2026-01-20 15:26:15.400 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:26:15 compute-0 nova_compute[250018]: 2026-01-20 15:26:15.400 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 15:26:15 compute-0 nova_compute[250018]: 2026-01-20 15:26:15.966 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:26:16 compute-0 nova_compute[250018]: 2026-01-20 15:26:16.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:26:16 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:26:16 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:26:16 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:26:16.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:26:16 compute-0 ceph-mon[74360]: pgmap v3122: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 322 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Jan 20 15:26:16 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3123: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 322 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Jan 20 15:26:17 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:26:17 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:26:17 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:26:16.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:26:17 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:26:18 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:26:18.275 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=72, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=71) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:26:18 compute-0 nova_compute[250018]: 2026-01-20 15:26:18.275 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:26:18 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:26:18.276 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 15:26:18 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:26:18 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:26:18 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:26:18.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:26:18 compute-0 ceph-mon[74360]: pgmap v3123: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 322 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Jan 20 15:26:18 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-crash-compute-0[81580]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
Jan 20 15:26:18 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3124: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 322 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Jan 20 15:26:19 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:26:19 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:26:19 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:26:19.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:26:19 compute-0 nova_compute[250018]: 2026-01-20 15:26:19.328 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:26:20 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:26:20 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:26:20 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:26:20.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:26:20 compute-0 podman[376535]: 2026-01-20 15:26:20.458412409 +0000 UTC m=+0.049559611 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 20 15:26:20 compute-0 podman[376534]: 2026-01-20 15:26:20.492038426 +0000 UTC m=+0.082907281 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 20 15:26:20 compute-0 ceph-mon[74360]: pgmap v3124: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 322 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Jan 20 15:26:20 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3125: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Jan 20 15:26:20 compute-0 nova_compute[250018]: 2026-01-20 15:26:20.965 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:26:21 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:26:21 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:26:21 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:26:21.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:26:22 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:26:22 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:26:22 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:26:22.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:26:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:26:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:26:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:26:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:26:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:26:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:26:22 compute-0 ceph-mon[74360]: pgmap v3125: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Jan 20 15:26:22 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3126: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 91 KiB/s rd, 916 KiB/s wr, 26 op/s
Jan 20 15:26:22 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:26:23 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:26:23 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:26:23 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:26:23.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:26:24 compute-0 nova_compute[250018]: 2026-01-20 15:26:24.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:26:24 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:26:24 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:26:24 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:26:24.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:26:24 compute-0 nova_compute[250018]: 2026-01-20 15:26:24.330 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:26:24 compute-0 ceph-mon[74360]: pgmap v3126: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 91 KiB/s rd, 916 KiB/s wr, 26 op/s
Jan 20 15:26:24 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3127: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.3 KiB/s rd, 15 KiB/s wr, 0 op/s
Jan 20 15:26:25 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:26:25 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:26:25 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:26:25.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:26:25 compute-0 sshd-session[376578]: Invalid user test from 134.122.57.138 port 35894
Jan 20 15:26:25 compute-0 sshd-session[376578]: Connection closed by invalid user test 134.122.57.138 port 35894 [preauth]
Jan 20 15:26:25 compute-0 sudo[376581]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:26:25 compute-0 sudo[376581]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:26:25 compute-0 sudo[376581]: pam_unix(sudo:session): session closed for user root
Jan 20 15:26:26 compute-0 nova_compute[250018]: 2026-01-20 15:26:26.001 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:26:26 compute-0 sudo[376606]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:26:26 compute-0 sudo[376606]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:26:26 compute-0 sudo[376606]: pam_unix(sudo:session): session closed for user root
Jan 20 15:26:26 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:26:26.278 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=367c1a2c-b16a-4828-ab5a-626bb50023b4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '72'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:26:26 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:26:26 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:26:26 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:26:26.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:26:26 compute-0 ceph-mon[74360]: pgmap v3127: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.3 KiB/s rd, 15 KiB/s wr, 0 op/s
Jan 20 15:26:26 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3128: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.3 KiB/s rd, 15 KiB/s wr, 0 op/s
Jan 20 15:26:27 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:26:27 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:26:27 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:26:27.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:26:27 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:26:28 compute-0 nova_compute[250018]: 2026-01-20 15:26:28.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:26:28 compute-0 nova_compute[250018]: 2026-01-20 15:26:28.052 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 15:26:28 compute-0 nova_compute[250018]: 2026-01-20 15:26:28.052 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 15:26:28 compute-0 nova_compute[250018]: 2026-01-20 15:26:28.074 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 20 15:26:28 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:26:28 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:26:28 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:26:28.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:26:28 compute-0 ceph-mon[74360]: pgmap v3128: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.3 KiB/s rd, 15 KiB/s wr, 0 op/s
Jan 20 15:26:28 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3129: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.3 KiB/s rd, 3.3 KiB/s wr, 0 op/s
Jan 20 15:26:29 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:26:29 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:26:29 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:26:29.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:26:29 compute-0 nova_compute[250018]: 2026-01-20 15:26:29.331 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:26:30 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:26:30 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:26:30 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:26:30.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:26:30 compute-0 ceph-mon[74360]: pgmap v3129: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.3 KiB/s rd, 3.3 KiB/s wr, 0 op/s
Jan 20 15:26:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:26:30.795 160071 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:26:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:26:30.796 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:26:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:26:30.796 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:26:30 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3130: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.3 KiB/s rd, 3.3 KiB/s wr, 0 op/s
Jan 20 15:26:31 compute-0 nova_compute[250018]: 2026-01-20 15:26:31.002 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:26:31 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:26:31 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:26:31 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:26:31.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:26:32 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:26:32 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:26:32 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:26:32.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:26:32 compute-0 ceph-mon[74360]: pgmap v3130: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.3 KiB/s rd, 3.3 KiB/s wr, 0 op/s
Jan 20 15:26:32 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3131: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.3 KiB/s wr, 0 op/s
Jan 20 15:26:32 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:26:33 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:26:33 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:26:33 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:26:33.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:26:34 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:26:34 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:26:34 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:26:34.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:26:34 compute-0 nova_compute[250018]: 2026-01-20 15:26:34.352 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:26:34 compute-0 ceph-mon[74360]: pgmap v3131: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.3 KiB/s wr, 0 op/s
Jan 20 15:26:34 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3132: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 341 B/s wr, 0 op/s
Jan 20 15:26:35 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:26:35 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:26:35 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:26:35.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:26:36 compute-0 nova_compute[250018]: 2026-01-20 15:26:36.004 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:26:36 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:26:36 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:26:36 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:26:36.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:26:36 compute-0 ceph-mon[74360]: pgmap v3132: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 341 B/s wr, 0 op/s
Jan 20 15:26:36 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3133: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 341 B/s wr, 0 op/s
Jan 20 15:26:37 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:26:37 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:26:37 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:26:37.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:26:37 compute-0 ceph-mon[74360]: pgmap v3133: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 341 B/s wr, 0 op/s
Jan 20 15:26:37 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:26:38 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:26:38 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:26:38 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:26:38.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:26:38 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3134: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:26:39 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:26:39 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:26:39 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:26:39.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:26:39 compute-0 nova_compute[250018]: 2026-01-20 15:26:39.354 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:26:39 compute-0 ceph-mon[74360]: pgmap v3134: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:26:40 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:26:40 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:26:40 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:26:40.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:26:40 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3135: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.7 KiB/s wr, 0 op/s
Jan 20 15:26:41 compute-0 nova_compute[250018]: 2026-01-20 15:26:41.005 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:26:41 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:26:41 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:26:41 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:26:41.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:26:42 compute-0 ceph-mon[74360]: pgmap v3135: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.7 KiB/s wr, 0 op/s
Jan 20 15:26:42 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:26:42 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:26:42 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:26:42.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:26:42 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3136: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.3 KiB/s wr, 0 op/s
Jan 20 15:26:42 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:26:43 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:26:43 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:26:43 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:26:43.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:26:43 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/3507037067' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:26:44 compute-0 ceph-mon[74360]: pgmap v3136: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.3 KiB/s wr, 0 op/s
Jan 20 15:26:44 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:26:44 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:26:44 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:26:44.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:26:44 compute-0 nova_compute[250018]: 2026-01-20 15:26:44.356 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:26:44 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3137: 321 pgs: 321 active+clean; 217 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 426 B/s rd, 817 KiB/s wr, 2 op/s
Jan 20 15:26:45 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:26:45 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:26:45 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:26:45.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:26:46 compute-0 nova_compute[250018]: 2026-01-20 15:26:46.055 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:26:46 compute-0 sudo[376641]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:26:46 compute-0 sudo[376641]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:26:46 compute-0 sudo[376641]: pam_unix(sudo:session): session closed for user root
Jan 20 15:26:46 compute-0 sudo[376666]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:26:46 compute-0 sudo[376666]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:26:46 compute-0 sudo[376666]: pam_unix(sudo:session): session closed for user root
Jan 20 15:26:46 compute-0 ceph-mon[74360]: pgmap v3137: 321 pgs: 321 active+clean; 217 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 426 B/s rd, 817 KiB/s wr, 2 op/s
Jan 20 15:26:46 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:26:46 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:26:46 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:26:46.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:26:46 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3138: 321 pgs: 321 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:26:47 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:26:47 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:26:47 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:26:47.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:26:47 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:26:48 compute-0 ceph-mon[74360]: pgmap v3138: 321 pgs: 321 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:26:48 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:26:48 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:26:48 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:26:48.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:26:48 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3139: 321 pgs: 321 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:26:49 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:26:49 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:26:49 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:26:49.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:26:49 compute-0 nova_compute[250018]: 2026-01-20 15:26:49.357 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:26:50 compute-0 ceph-mon[74360]: pgmap v3139: 321 pgs: 321 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:26:50 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:26:50 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:26:50 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:26:50.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:26:50 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3140: 321 pgs: 321 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:26:51 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:26:51 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:26:51 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:26:51.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:26:51 compute-0 nova_compute[250018]: 2026-01-20 15:26:51.058 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:26:51 compute-0 podman[376695]: 2026-01-20 15:26:51.488221007 +0000 UTC m=+0.055638457 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 20 15:26:51 compute-0 podman[376694]: 2026-01-20 15:26:51.517400373 +0000 UTC m=+0.084817763 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:26:52 compute-0 ceph-mon[74360]: pgmap v3140: 321 pgs: 321 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:26:52 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:26:52 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:26:52 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:26:52.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:26:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:26:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:26:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:26:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:26:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:26:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:26:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Optimize plan auto_2026-01-20_15:26:52
Jan 20 15:26:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 15:26:52 compute-0 ceph-mgr[74653]: [balancer INFO root] do_upmap
Jan 20 15:26:52 compute-0 ceph-mgr[74653]: [balancer INFO root] pools ['images', 'default.rgw.log', 'default.rgw.meta', '.rgw.root', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'backups', 'default.rgw.control', 'vms', 'volumes', '.mgr']
Jan 20 15:26:52 compute-0 ceph-mgr[74653]: [balancer INFO root] prepared 0/10 changes
Jan 20 15:26:52 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3141: 321 pgs: 321 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:26:52 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:26:53 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:26:53 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:26:53 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:26:53.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:26:54 compute-0 ceph-mon[74360]: pgmap v3141: 321 pgs: 321 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:26:54 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/1693400051' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:26:54 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:26:54 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:26:54 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:26:54.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:26:54 compute-0 nova_compute[250018]: 2026-01-20 15:26:54.359 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:26:54 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3142: 321 pgs: 321 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:26:55 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:26:55 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:26:55 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:26:55.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:26:55 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/252485601' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:26:56 compute-0 nova_compute[250018]: 2026-01-20 15:26:56.060 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:26:56 compute-0 ceph-mon[74360]: pgmap v3142: 321 pgs: 321 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:26:56 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:26:56 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:26:56 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:26:56.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:26:56 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3143: 321 pgs: 321 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1000 KiB/s wr, 24 op/s
Jan 20 15:26:57 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:26:57 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:26:57 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:26:57.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:26:57 compute-0 nova_compute[250018]: 2026-01-20 15:26:57.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:26:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 15:26:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 15:26:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 15:26:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 15:26:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 15:26:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 15:26:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 15:26:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 15:26:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 15:26:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 15:26:57 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:26:58 compute-0 ceph-mon[74360]: pgmap v3143: 321 pgs: 321 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1000 KiB/s wr, 24 op/s
Jan 20 15:26:58 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:26:58 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:26:58 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:26:58.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:26:58 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3144: 321 pgs: 321 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:26:59 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:26:59 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:26:59 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:26:59.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:26:59 compute-0 nova_compute[250018]: 2026-01-20 15:26:59.362 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:27:00 compute-0 ceph-mon[74360]: pgmap v3144: 321 pgs: 321 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:27:00 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:27:00 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:27:00 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:27:00.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:27:00 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3145: 321 pgs: 321 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 661 KiB/s rd, 12 KiB/s wr, 31 op/s
Jan 20 15:27:01 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:27:01 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:27:01 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:27:01.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:27:01 compute-0 nova_compute[250018]: 2026-01-20 15:27:01.062 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:27:02 compute-0 ceph-mon[74360]: pgmap v3145: 321 pgs: 321 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 661 KiB/s rd, 12 KiB/s wr, 31 op/s
Jan 20 15:27:02 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:27:02 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:27:02 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:27:02.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:27:02 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3146: 321 pgs: 321 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 844 KiB/s rd, 12 KiB/s wr, 38 op/s
Jan 20 15:27:02 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:27:03 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:27:03 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:27:03 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:27:03.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:27:03 compute-0 sudo[376746]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:27:03 compute-0 sudo[376746]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:27:03 compute-0 sudo[376746]: pam_unix(sudo:session): session closed for user root
Jan 20 15:27:03 compute-0 sudo[376771]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:27:03 compute-0 sudo[376771]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:27:03 compute-0 sudo[376771]: pam_unix(sudo:session): session closed for user root
Jan 20 15:27:03 compute-0 sudo[376796]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:27:03 compute-0 sudo[376796]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:27:03 compute-0 sudo[376796]: pam_unix(sudo:session): session closed for user root
Jan 20 15:27:04 compute-0 sudo[376821]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 15:27:04 compute-0 sudo[376821]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:27:04 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:27:04 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:27:04 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:27:04.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:27:04 compute-0 nova_compute[250018]: 2026-01-20 15:27:04.364 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:27:04 compute-0 ceph-mon[74360]: pgmap v3146: 321 pgs: 321 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 844 KiB/s rd, 12 KiB/s wr, 38 op/s
Jan 20 15:27:04 compute-0 sudo[376821]: pam_unix(sudo:session): session closed for user root
Jan 20 15:27:04 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 15:27:04 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:27:04 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 20 15:27:04 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 15:27:04 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 20 15:27:04 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:27:04 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 4d8f7c10-5237-4aa8-b447-234c0657c05a does not exist
Jan 20 15:27:04 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev ec1c05d6-3cbb-413a-865b-7e8a42642384 does not exist
Jan 20 15:27:04 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 7df8ab15-a220-4132-9330-47ffe8897e14 does not exist
Jan 20 15:27:04 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 20 15:27:04 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 15:27:04 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 20 15:27:04 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 15:27:04 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 15:27:04 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:27:04 compute-0 sudo[376878]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:27:04 compute-0 sudo[376878]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:27:04 compute-0 sudo[376878]: pam_unix(sudo:session): session closed for user root
Jan 20 15:27:04 compute-0 sudo[376903]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:27:04 compute-0 sudo[376903]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:27:04 compute-0 sudo[376903]: pam_unix(sudo:session): session closed for user root
Jan 20 15:27:04 compute-0 sudo[376928]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:27:04 compute-0 sudo[376928]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:27:04 compute-0 sudo[376928]: pam_unix(sudo:session): session closed for user root
Jan 20 15:27:04 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3147: 321 pgs: 321 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 12 KiB/s wr, 65 op/s
Jan 20 15:27:04 compute-0 sudo[376953]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 15:27:04 compute-0 sudo[376953]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:27:05 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:27:05 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:27:05 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:27:05.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:27:05 compute-0 podman[377016]: 2026-01-20 15:27:05.216586057 +0000 UTC m=+0.038856270 container create 9c1dd7916bd43a359eb9ebfae9a760900f70714a14da589204ca24bc651868ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_yonath, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 15:27:05 compute-0 systemd[1]: Started libpod-conmon-9c1dd7916bd43a359eb9ebfae9a760900f70714a14da589204ca24bc651868ea.scope.
Jan 20 15:27:05 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:27:05 compute-0 podman[377016]: 2026-01-20 15:27:05.19908999 +0000 UTC m=+0.021360223 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:27:05 compute-0 podman[377016]: 2026-01-20 15:27:05.298429378 +0000 UTC m=+0.120699621 container init 9c1dd7916bd43a359eb9ebfae9a760900f70714a14da589204ca24bc651868ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_yonath, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 20 15:27:05 compute-0 podman[377016]: 2026-01-20 15:27:05.305039098 +0000 UTC m=+0.127309331 container start 9c1dd7916bd43a359eb9ebfae9a760900f70714a14da589204ca24bc651868ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_yonath, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:27:05 compute-0 podman[377016]: 2026-01-20 15:27:05.308609515 +0000 UTC m=+0.130879758 container attach 9c1dd7916bd43a359eb9ebfae9a760900f70714a14da589204ca24bc651868ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_yonath, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 20 15:27:05 compute-0 kind_yonath[377032]: 167 167
Jan 20 15:27:05 compute-0 systemd[1]: libpod-9c1dd7916bd43a359eb9ebfae9a760900f70714a14da589204ca24bc651868ea.scope: Deactivated successfully.
Jan 20 15:27:05 compute-0 podman[377016]: 2026-01-20 15:27:05.312959734 +0000 UTC m=+0.135229947 container died 9c1dd7916bd43a359eb9ebfae9a760900f70714a14da589204ca24bc651868ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_yonath, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 15:27:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-0b5ec30caf1d784525ebae5bf7bef916574f64ff3756b0794932c2c72354694e-merged.mount: Deactivated successfully.
Jan 20 15:27:05 compute-0 podman[377016]: 2026-01-20 15:27:05.376951459 +0000 UTC m=+0.199221672 container remove 9c1dd7916bd43a359eb9ebfae9a760900f70714a14da589204ca24bc651868ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_yonath, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 20 15:27:05 compute-0 systemd[1]: libpod-conmon-9c1dd7916bd43a359eb9ebfae9a760900f70714a14da589204ca24bc651868ea.scope: Deactivated successfully.
Jan 20 15:27:05 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:27:05 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 15:27:05 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:27:05 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 15:27:05 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 15:27:05 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:27:05 compute-0 podman[377056]: 2026-01-20 15:27:05.53293474 +0000 UTC m=+0.043471376 container create 20682ce89db1ea82cb2421d43565e5a705000ea9d78abff24c0fbf4664cf1098 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_morse, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 15:27:05 compute-0 systemd[1]: Started libpod-conmon-20682ce89db1ea82cb2421d43565e5a705000ea9d78abff24c0fbf4664cf1098.scope.
Jan 20 15:27:05 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:27:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c56481bfdb696f1f3af42adaca0f02935cae62016fba081514113efd7d18be56/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 15:27:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c56481bfdb696f1f3af42adaca0f02935cae62016fba081514113efd7d18be56/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 15:27:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c56481bfdb696f1f3af42adaca0f02935cae62016fba081514113efd7d18be56/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 15:27:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c56481bfdb696f1f3af42adaca0f02935cae62016fba081514113efd7d18be56/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 15:27:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c56481bfdb696f1f3af42adaca0f02935cae62016fba081514113efd7d18be56/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 15:27:05 compute-0 podman[377056]: 2026-01-20 15:27:05.513330466 +0000 UTC m=+0.023867122 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:27:05 compute-0 podman[377056]: 2026-01-20 15:27:05.609912978 +0000 UTC m=+0.120449634 container init 20682ce89db1ea82cb2421d43565e5a705000ea9d78abff24c0fbf4664cf1098 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_morse, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 20 15:27:05 compute-0 podman[377056]: 2026-01-20 15:27:05.618725369 +0000 UTC m=+0.129262005 container start 20682ce89db1ea82cb2421d43565e5a705000ea9d78abff24c0fbf4664cf1098 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_morse, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True)
Jan 20 15:27:05 compute-0 podman[377056]: 2026-01-20 15:27:05.621772302 +0000 UTC m=+0.132308938 container attach 20682ce89db1ea82cb2421d43565e5a705000ea9d78abff24c0fbf4664cf1098 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_morse, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 20 15:27:06 compute-0 nova_compute[250018]: 2026-01-20 15:27:06.064 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:27:06 compute-0 sudo[377079]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:27:06 compute-0 sudo[377079]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:27:06 compute-0 sudo[377079]: pam_unix(sudo:session): session closed for user root
Jan 20 15:27:06 compute-0 sudo[377106]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:27:06 compute-0 sudo[377106]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:27:06 compute-0 sudo[377106]: pam_unix(sudo:session): session closed for user root
Jan 20 15:27:06 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:27:06 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:27:06 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:27:06.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:27:06 compute-0 vibrant_morse[377072]: --> passed data devices: 0 physical, 1 LVM
Jan 20 15:27:06 compute-0 vibrant_morse[377072]: --> relative data size: 1.0
Jan 20 15:27:06 compute-0 vibrant_morse[377072]: --> All data devices are unavailable
Jan 20 15:27:06 compute-0 systemd[1]: libpod-20682ce89db1ea82cb2421d43565e5a705000ea9d78abff24c0fbf4664cf1098.scope: Deactivated successfully.
Jan 20 15:27:06 compute-0 podman[377056]: 2026-01-20 15:27:06.42916324 +0000 UTC m=+0.939699896 container died 20682ce89db1ea82cb2421d43565e5a705000ea9d78abff24c0fbf4664cf1098 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_morse, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 15:27:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-c56481bfdb696f1f3af42adaca0f02935cae62016fba081514113efd7d18be56-merged.mount: Deactivated successfully.
Jan 20 15:27:06 compute-0 podman[377056]: 2026-01-20 15:27:06.482997978 +0000 UTC m=+0.993534614 container remove 20682ce89db1ea82cb2421d43565e5a705000ea9d78abff24c0fbf4664cf1098 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_morse, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 20 15:27:06 compute-0 systemd[1]: libpod-conmon-20682ce89db1ea82cb2421d43565e5a705000ea9d78abff24c0fbf4664cf1098.scope: Deactivated successfully.
Jan 20 15:27:06 compute-0 sudo[376953]: pam_unix(sudo:session): session closed for user root
Jan 20 15:27:06 compute-0 sudo[377153]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:27:06 compute-0 sudo[377153]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:27:06 compute-0 sudo[377153]: pam_unix(sudo:session): session closed for user root
Jan 20 15:27:06 compute-0 sudo[377178]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:27:06 compute-0 sudo[377178]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:27:06 compute-0 sudo[377178]: pam_unix(sudo:session): session closed for user root
Jan 20 15:27:06 compute-0 sudo[377203]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:27:06 compute-0 sudo[377203]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:27:06 compute-0 sudo[377203]: pam_unix(sudo:session): session closed for user root
Jan 20 15:27:06 compute-0 ceph-mon[74360]: pgmap v3147: 321 pgs: 321 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 12 KiB/s wr, 65 op/s
Jan 20 15:27:06 compute-0 sudo[377228]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- lvm list --format json
Jan 20 15:27:06 compute-0 sudo[377228]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:27:06 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3148: 321 pgs: 321 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 20 15:27:07 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:27:07 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:27:07 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:27:07.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:27:07 compute-0 podman[377295]: 2026-01-20 15:27:07.06306699 +0000 UTC m=+0.043545318 container create f9c28471d1d6fb08b55a105c95ee43885aa3e6cf067ed10824d80996cc300196 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_mcnulty, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 20 15:27:07 compute-0 systemd[1]: Started libpod-conmon-f9c28471d1d6fb08b55a105c95ee43885aa3e6cf067ed10824d80996cc300196.scope.
Jan 20 15:27:07 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:27:07 compute-0 podman[377295]: 2026-01-20 15:27:07.132883323 +0000 UTC m=+0.113361661 container init f9c28471d1d6fb08b55a105c95ee43885aa3e6cf067ed10824d80996cc300196 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_mcnulty, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3)
Jan 20 15:27:07 compute-0 podman[377295]: 2026-01-20 15:27:07.042506949 +0000 UTC m=+0.022985317 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:27:07 compute-0 podman[377295]: 2026-01-20 15:27:07.141988341 +0000 UTC m=+0.122466669 container start f9c28471d1d6fb08b55a105c95ee43885aa3e6cf067ed10824d80996cc300196 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_mcnulty, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 15:27:07 compute-0 podman[377295]: 2026-01-20 15:27:07.145682702 +0000 UTC m=+0.126161020 container attach f9c28471d1d6fb08b55a105c95ee43885aa3e6cf067ed10824d80996cc300196 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_mcnulty, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 20 15:27:07 compute-0 jovial_mcnulty[377311]: 167 167
Jan 20 15:27:07 compute-0 systemd[1]: libpod-f9c28471d1d6fb08b55a105c95ee43885aa3e6cf067ed10824d80996cc300196.scope: Deactivated successfully.
Jan 20 15:27:07 compute-0 podman[377295]: 2026-01-20 15:27:07.148124059 +0000 UTC m=+0.128602377 container died f9c28471d1d6fb08b55a105c95ee43885aa3e6cf067ed10824d80996cc300196 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_mcnulty, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 15:27:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-24393b401fb001a483f15c7c0f5775c6d246c95e30a9d361db731c54f216311a-merged.mount: Deactivated successfully.
Jan 20 15:27:07 compute-0 podman[377295]: 2026-01-20 15:27:07.181329904 +0000 UTC m=+0.161808222 container remove f9c28471d1d6fb08b55a105c95ee43885aa3e6cf067ed10824d80996cc300196 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_mcnulty, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 15:27:07 compute-0 systemd[1]: libpod-conmon-f9c28471d1d6fb08b55a105c95ee43885aa3e6cf067ed10824d80996cc300196.scope: Deactivated successfully.
Jan 20 15:27:07 compute-0 podman[377337]: 2026-01-20 15:27:07.332922886 +0000 UTC m=+0.039394134 container create 620596fe06b4dfad3c918d506275a0300d8169621e2865034fa5818119b6c077 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_feistel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 20 15:27:07 compute-0 systemd[1]: Started libpod-conmon-620596fe06b4dfad3c918d506275a0300d8169621e2865034fa5818119b6c077.scope.
Jan 20 15:27:07 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:27:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a42d4a19248fc21394698b620375f9241d28ae28a8621d5ccdd781ac5a65bced/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 15:27:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a42d4a19248fc21394698b620375f9241d28ae28a8621d5ccdd781ac5a65bced/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 15:27:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a42d4a19248fc21394698b620375f9241d28ae28a8621d5ccdd781ac5a65bced/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 15:27:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a42d4a19248fc21394698b620375f9241d28ae28a8621d5ccdd781ac5a65bced/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 15:27:07 compute-0 podman[377337]: 2026-01-20 15:27:07.409958046 +0000 UTC m=+0.116429324 container init 620596fe06b4dfad3c918d506275a0300d8169621e2865034fa5818119b6c077 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_feistel, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 20 15:27:07 compute-0 podman[377337]: 2026-01-20 15:27:07.316754035 +0000 UTC m=+0.023225313 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:27:07 compute-0 podman[377337]: 2026-01-20 15:27:07.417329637 +0000 UTC m=+0.123800885 container start 620596fe06b4dfad3c918d506275a0300d8169621e2865034fa5818119b6c077 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_feistel, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 20 15:27:07 compute-0 podman[377337]: 2026-01-20 15:27:07.420507383 +0000 UTC m=+0.126978651 container attach 620596fe06b4dfad3c918d506275a0300d8169621e2865034fa5818119b6c077 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_feistel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 15:27:07 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:27:08 compute-0 nova_compute[250018]: 2026-01-20 15:27:08.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:27:08 compute-0 crazy_feistel[377353]: {
Jan 20 15:27:08 compute-0 crazy_feistel[377353]:     "0": [
Jan 20 15:27:08 compute-0 crazy_feistel[377353]:         {
Jan 20 15:27:08 compute-0 crazy_feistel[377353]:             "devices": [
Jan 20 15:27:08 compute-0 crazy_feistel[377353]:                 "/dev/loop3"
Jan 20 15:27:08 compute-0 crazy_feistel[377353]:             ],
Jan 20 15:27:08 compute-0 crazy_feistel[377353]:             "lv_name": "ceph_lv0",
Jan 20 15:27:08 compute-0 crazy_feistel[377353]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 15:27:08 compute-0 crazy_feistel[377353]:             "lv_size": "7511998464",
Jan 20 15:27:08 compute-0 crazy_feistel[377353]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e399cf45-e6b6-5393-99f1-75c601d3f188,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afc3740a-ccee-46f8-b343-22648cc89376,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 20 15:27:08 compute-0 crazy_feistel[377353]:             "lv_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 15:27:08 compute-0 crazy_feistel[377353]:             "name": "ceph_lv0",
Jan 20 15:27:08 compute-0 crazy_feistel[377353]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 15:27:08 compute-0 crazy_feistel[377353]:             "tags": {
Jan 20 15:27:08 compute-0 crazy_feistel[377353]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 15:27:08 compute-0 crazy_feistel[377353]:                 "ceph.block_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 15:27:08 compute-0 crazy_feistel[377353]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 15:27:08 compute-0 crazy_feistel[377353]:                 "ceph.cluster_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 15:27:08 compute-0 crazy_feistel[377353]:                 "ceph.cluster_name": "ceph",
Jan 20 15:27:08 compute-0 crazy_feistel[377353]:                 "ceph.crush_device_class": "",
Jan 20 15:27:08 compute-0 crazy_feistel[377353]:                 "ceph.encrypted": "0",
Jan 20 15:27:08 compute-0 crazy_feistel[377353]:                 "ceph.osd_fsid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 15:27:08 compute-0 crazy_feistel[377353]:                 "ceph.osd_id": "0",
Jan 20 15:27:08 compute-0 crazy_feistel[377353]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 15:27:08 compute-0 crazy_feistel[377353]:                 "ceph.type": "block",
Jan 20 15:27:08 compute-0 crazy_feistel[377353]:                 "ceph.vdo": "0"
Jan 20 15:27:08 compute-0 crazy_feistel[377353]:             },
Jan 20 15:27:08 compute-0 crazy_feistel[377353]:             "type": "block",
Jan 20 15:27:08 compute-0 crazy_feistel[377353]:             "vg_name": "ceph_vg0"
Jan 20 15:27:08 compute-0 crazy_feistel[377353]:         }
Jan 20 15:27:08 compute-0 crazy_feistel[377353]:     ]
Jan 20 15:27:08 compute-0 crazy_feistel[377353]: }
Jan 20 15:27:08 compute-0 systemd[1]: libpod-620596fe06b4dfad3c918d506275a0300d8169621e2865034fa5818119b6c077.scope: Deactivated successfully.
Jan 20 15:27:08 compute-0 podman[377337]: 2026-01-20 15:27:08.18818707 +0000 UTC m=+0.894658338 container died 620596fe06b4dfad3c918d506275a0300d8169621e2865034fa5818119b6c077 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_feistel, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 20 15:27:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-a42d4a19248fc21394698b620375f9241d28ae28a8621d5ccdd781ac5a65bced-merged.mount: Deactivated successfully.
Jan 20 15:27:08 compute-0 podman[377337]: 2026-01-20 15:27:08.237996387 +0000 UTC m=+0.944467635 container remove 620596fe06b4dfad3c918d506275a0300d8169621e2865034fa5818119b6c077 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_feistel, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:27:08 compute-0 systemd[1]: libpod-conmon-620596fe06b4dfad3c918d506275a0300d8169621e2865034fa5818119b6c077.scope: Deactivated successfully.
Jan 20 15:27:08 compute-0 sudo[377228]: pam_unix(sudo:session): session closed for user root
Jan 20 15:27:08 compute-0 sudo[377377]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:27:08 compute-0 sudo[377377]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:27:08 compute-0 sudo[377377]: pam_unix(sudo:session): session closed for user root
Jan 20 15:27:08 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:27:08 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:27:08 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:27:08.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:27:08 compute-0 sudo[377402]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:27:08 compute-0 sudo[377402]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:27:08 compute-0 sudo[377402]: pam_unix(sudo:session): session closed for user root
Jan 20 15:27:08 compute-0 sudo[377427]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:27:08 compute-0 sudo[377427]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:27:08 compute-0 sudo[377427]: pam_unix(sudo:session): session closed for user root
Jan 20 15:27:08 compute-0 sudo[377452]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- raw list --format json
Jan 20 15:27:08 compute-0 sudo[377452]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:27:08 compute-0 ceph-mon[74360]: pgmap v3148: 321 pgs: 321 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 20 15:27:08 compute-0 podman[377517]: 2026-01-20 15:27:08.771346166 +0000 UTC m=+0.037493893 container create f0157e684e3d0d2f95b4f8bcdd56e39154c402c186da5971fd9ba8afed77f97b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_galois, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 20 15:27:08 compute-0 systemd[1]: Started libpod-conmon-f0157e684e3d0d2f95b4f8bcdd56e39154c402c186da5971fd9ba8afed77f97b.scope.
Jan 20 15:27:08 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:27:08 compute-0 sshd-session[377280]: Invalid user test from 134.122.57.138 port 44836
Jan 20 15:27:08 compute-0 podman[377517]: 2026-01-20 15:27:08.846811973 +0000 UTC m=+0.112959730 container init f0157e684e3d0d2f95b4f8bcdd56e39154c402c186da5971fd9ba8afed77f97b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_galois, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 20 15:27:08 compute-0 podman[377517]: 2026-01-20 15:27:08.755065523 +0000 UTC m=+0.021213260 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:27:08 compute-0 podman[377517]: 2026-01-20 15:27:08.853967298 +0000 UTC m=+0.120115025 container start f0157e684e3d0d2f95b4f8bcdd56e39154c402c186da5971fd9ba8afed77f97b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_galois, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 15:27:08 compute-0 podman[377517]: 2026-01-20 15:27:08.858210414 +0000 UTC m=+0.124358171 container attach f0157e684e3d0d2f95b4f8bcdd56e39154c402c186da5971fd9ba8afed77f97b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_galois, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 15:27:08 compute-0 dazzling_galois[377533]: 167 167
Jan 20 15:27:08 compute-0 systemd[1]: libpod-f0157e684e3d0d2f95b4f8bcdd56e39154c402c186da5971fd9ba8afed77f97b.scope: Deactivated successfully.
Jan 20 15:27:08 compute-0 podman[377517]: 2026-01-20 15:27:08.860040373 +0000 UTC m=+0.126188110 container died f0157e684e3d0d2f95b4f8bcdd56e39154c402c186da5971fd9ba8afed77f97b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_galois, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 15:27:08 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3149: 321 pgs: 321 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 20 15:27:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-0d3338dbac82bc7762c5456cda17d04477b93f8011179234e458c14fa1f5feed-merged.mount: Deactivated successfully.
Jan 20 15:27:08 compute-0 podman[377517]: 2026-01-20 15:27:08.890787742 +0000 UTC m=+0.156935469 container remove f0157e684e3d0d2f95b4f8bcdd56e39154c402c186da5971fd9ba8afed77f97b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_galois, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 20 15:27:08 compute-0 systemd[1]: libpod-conmon-f0157e684e3d0d2f95b4f8bcdd56e39154c402c186da5971fd9ba8afed77f97b.scope: Deactivated successfully.
Jan 20 15:27:09 compute-0 podman[377556]: 2026-01-20 15:27:09.04551517 +0000 UTC m=+0.040846564 container create ed09f7e290934cf9bc84d5c5d90451a22a97bd76e1c50991aee7c80076ed10b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_dijkstra, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 15:27:09 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:27:09 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:27:09 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:27:09.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:27:09 compute-0 systemd[1]: Started libpod-conmon-ed09f7e290934cf9bc84d5c5d90451a22a97bd76e1c50991aee7c80076ed10b5.scope.
Jan 20 15:27:09 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:27:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0dcd720a45848229ccfa6d20906fd3444e4efa3335cc1d526c6e23f5cc511366/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 15:27:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0dcd720a45848229ccfa6d20906fd3444e4efa3335cc1d526c6e23f5cc511366/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 15:27:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0dcd720a45848229ccfa6d20906fd3444e4efa3335cc1d526c6e23f5cc511366/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 15:27:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0dcd720a45848229ccfa6d20906fd3444e4efa3335cc1d526c6e23f5cc511366/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 15:27:09 compute-0 podman[377556]: 2026-01-20 15:27:09.026767549 +0000 UTC m=+0.022098973 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:27:09 compute-0 podman[377556]: 2026-01-20 15:27:09.123270169 +0000 UTC m=+0.118601563 container init ed09f7e290934cf9bc84d5c5d90451a22a97bd76e1c50991aee7c80076ed10b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_dijkstra, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 15:27:09 compute-0 podman[377556]: 2026-01-20 15:27:09.136279554 +0000 UTC m=+0.131610938 container start ed09f7e290934cf9bc84d5c5d90451a22a97bd76e1c50991aee7c80076ed10b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_dijkstra, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:27:09 compute-0 podman[377556]: 2026-01-20 15:27:09.139392699 +0000 UTC m=+0.134724103 container attach ed09f7e290934cf9bc84d5c5d90451a22a97bd76e1c50991aee7c80076ed10b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_dijkstra, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:27:09 compute-0 sshd-session[377280]: Connection closed by invalid user test 134.122.57.138 port 44836 [preauth]
Jan 20 15:27:09 compute-0 nova_compute[250018]: 2026-01-20 15:27:09.365 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:27:09 compute-0 dreamy_dijkstra[377572]: {
Jan 20 15:27:09 compute-0 dreamy_dijkstra[377572]:     "afc3740a-ccee-46f8-b343-22648cc89376": {
Jan 20 15:27:09 compute-0 dreamy_dijkstra[377572]:         "ceph_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 15:27:09 compute-0 dreamy_dijkstra[377572]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 20 15:27:09 compute-0 dreamy_dijkstra[377572]:         "osd_id": 0,
Jan 20 15:27:09 compute-0 dreamy_dijkstra[377572]:         "osd_uuid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 15:27:09 compute-0 dreamy_dijkstra[377572]:         "type": "bluestore"
Jan 20 15:27:09 compute-0 dreamy_dijkstra[377572]:     }
Jan 20 15:27:09 compute-0 dreamy_dijkstra[377572]: }
Jan 20 15:27:09 compute-0 systemd[1]: libpod-ed09f7e290934cf9bc84d5c5d90451a22a97bd76e1c50991aee7c80076ed10b5.scope: Deactivated successfully.
Jan 20 15:27:09 compute-0 podman[377556]: 2026-01-20 15:27:09.959505705 +0000 UTC m=+0.954837099 container died ed09f7e290934cf9bc84d5c5d90451a22a97bd76e1c50991aee7c80076ed10b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_dijkstra, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 20 15:27:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-0dcd720a45848229ccfa6d20906fd3444e4efa3335cc1d526c6e23f5cc511366-merged.mount: Deactivated successfully.
Jan 20 15:27:10 compute-0 podman[377556]: 2026-01-20 15:27:10.015490381 +0000 UTC m=+1.010821795 container remove ed09f7e290934cf9bc84d5c5d90451a22a97bd76e1c50991aee7c80076ed10b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_dijkstra, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 15:27:10 compute-0 systemd[1]: libpod-conmon-ed09f7e290934cf9bc84d5c5d90451a22a97bd76e1c50991aee7c80076ed10b5.scope: Deactivated successfully.
Jan 20 15:27:10 compute-0 sudo[377452]: pam_unix(sudo:session): session closed for user root
Jan 20 15:27:10 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 15:27:10 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:27:10 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 15:27:10 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:27:10 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev eaed56b5-49f8-4a10-9e5b-a3d2f57229d7 does not exist
Jan 20 15:27:10 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 59e640c3-2da9-4fd5-87ee-e8e6401333e8 does not exist
Jan 20 15:27:10 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev fa40ab18-eec9-4742-822b-62d9a73935dd does not exist
Jan 20 15:27:10 compute-0 sudo[377608]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:27:10 compute-0 sudo[377608]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:27:10 compute-0 sudo[377608]: pam_unix(sudo:session): session closed for user root
Jan 20 15:27:10 compute-0 sudo[377633]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 15:27:10 compute-0 sudo[377633]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:27:10 compute-0 sudo[377633]: pam_unix(sudo:session): session closed for user root
Jan 20 15:27:10 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:27:10 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:27:10 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:27:10.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:27:10 compute-0 ceph-mon[74360]: pgmap v3149: 321 pgs: 321 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 20 15:27:10 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:27:10 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:27:10 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3150: 321 pgs: 321 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Jan 20 15:27:11 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:27:11 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:27:11 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:27:11.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:27:11 compute-0 nova_compute[250018]: 2026-01-20 15:27:11.065 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:27:11 compute-0 ceph-mon[74360]: pgmap v3150: 321 pgs: 321 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Jan 20 15:27:11 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/1964989034' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:27:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 15:27:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:27:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 20 15:27:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:27:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0031668760469011725 of space, bias 1.0, pg target 0.9500628140703518 quantized to 32 (current 32)
Jan 20 15:27:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:27:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021614147124511445 of space, bias 1.0, pg target 0.6484244137353433 quantized to 32 (current 32)
Jan 20 15:27:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:27:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 20 15:27:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:27:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 20 15:27:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:27:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 20 15:27:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:27:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 15:27:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:27:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 20 15:27:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:27:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 20 15:27:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:27:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 15:27:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:27:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 20 15:27:12 compute-0 nova_compute[250018]: 2026-01-20 15:27:12.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:27:12 compute-0 nova_compute[250018]: 2026-01-20 15:27:12.072 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:27:12 compute-0 nova_compute[250018]: 2026-01-20 15:27:12.073 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:27:12 compute-0 nova_compute[250018]: 2026-01-20 15:27:12.073 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:27:12 compute-0 nova_compute[250018]: 2026-01-20 15:27:12.073 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 15:27:12 compute-0 nova_compute[250018]: 2026-01-20 15:27:12.073 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:27:12 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:27:12 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.002000054s ======
Jan 20 15:27:12 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:27:12.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Jan 20 15:27:12 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:27:12 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1263708301' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:27:12 compute-0 nova_compute[250018]: 2026-01-20 15:27:12.510 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:27:12 compute-0 nova_compute[250018]: 2026-01-20 15:27:12.667 250022 WARNING nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:27:12 compute-0 nova_compute[250018]: 2026-01-20 15:27:12.669 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4172MB free_disk=20.92181396484375GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 15:27:12 compute-0 nova_compute[250018]: 2026-01-20 15:27:12.669 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:27:12 compute-0 nova_compute[250018]: 2026-01-20 15:27:12.669 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:27:12 compute-0 nova_compute[250018]: 2026-01-20 15:27:12.837 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 15:27:12 compute-0 nova_compute[250018]: 2026-01-20 15:27:12.838 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 15:27:12 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3151: 321 pgs: 321 active+clean; 254 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 725 KiB/s wr, 53 op/s
Jan 20 15:27:12 compute-0 ceph-mon[74360]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #159. Immutable memtables: 0.
Jan 20 15:27:12 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:27:12.904298) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 15:27:12 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:856] [default] [JOB 97] Flushing memtable with next log file: 159
Jan 20 15:27:12 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768922832904433, "job": 97, "event": "flush_started", "num_memtables": 1, "num_entries": 1254, "num_deletes": 251, "total_data_size": 2113395, "memory_usage": 2150216, "flush_reason": "Manual Compaction"}
Jan 20 15:27:12 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:885] [default] [JOB 97] Level-0 flush table #160: started
Jan 20 15:27:12 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/3229525351' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:27:12 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/1263708301' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:27:12 compute-0 nova_compute[250018]: 2026-01-20 15:27:12.915 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:27:12 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768922832916764, "cf_name": "default", "job": 97, "event": "table_file_creation", "file_number": 160, "file_size": 2078124, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 69708, "largest_seqno": 70961, "table_properties": {"data_size": 2072208, "index_size": 3246, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1605, "raw_key_size": 12612, "raw_average_key_size": 19, "raw_value_size": 2060353, "raw_average_value_size": 3254, "num_data_blocks": 145, "num_entries": 633, "num_filter_entries": 633, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768922712, "oldest_key_time": 1768922712, "file_creation_time": 1768922832, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1688fb6f-f397-4aac-a0e8-874ff91d97a7", "db_session_id": "5V3N2TVXYZBCXP55EZHK", "orig_file_number": 160, "seqno_to_time_mapping": "N/A"}}
Jan 20 15:27:12 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 97] Flush lasted 12483 microseconds, and 5440 cpu microseconds.
Jan 20 15:27:12 compute-0 ceph-mon[74360]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 15:27:12 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:27:12.916792) [db/flush_job.cc:967] [default] [JOB 97] Level-0 flush table #160: 2078124 bytes OK
Jan 20 15:27:12 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:27:12.916808) [db/memtable_list.cc:519] [default] Level-0 commit table #160 started
Jan 20 15:27:12 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:27:12.918195) [db/memtable_list.cc:722] [default] Level-0 commit table #160: memtable #1 done
Jan 20 15:27:12 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:27:12.918208) EVENT_LOG_v1 {"time_micros": 1768922832918204, "job": 97, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 15:27:12 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:27:12.918223) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 15:27:12 compute-0 ceph-mon[74360]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 97] Try to delete WAL files size 2107894, prev total WAL file size 2107894, number of live WAL files 2.
Jan 20 15:27:12 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000156.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:27:12 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:27:12.919002) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036353236' seq:72057594037927935, type:22 .. '7061786F730036373738' seq:0, type:0; will stop at (end)
Jan 20 15:27:12 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 98] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 15:27:12 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 97 Base level 0, inputs: [160(2029KB)], [158(12MB)]
Jan 20 15:27:12 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768922832919083, "job": 98, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [160], "files_L6": [158], "score": -1, "input_data_size": 15005276, "oldest_snapshot_seqno": -1}
Jan 20 15:27:12 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:27:13 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 98] Generated table #161: 9848 keys, 13116927 bytes, temperature: kUnknown
Jan 20 15:27:13 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768922833010122, "cf_name": "default", "job": 98, "event": "table_file_creation", "file_number": 161, "file_size": 13116927, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13052395, "index_size": 38819, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 24645, "raw_key_size": 259801, "raw_average_key_size": 26, "raw_value_size": 12878822, "raw_average_value_size": 1307, "num_data_blocks": 1480, "num_entries": 9848, "num_filter_entries": 9848, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768917305, "oldest_key_time": 0, "file_creation_time": 1768922832, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1688fb6f-f397-4aac-a0e8-874ff91d97a7", "db_session_id": "5V3N2TVXYZBCXP55EZHK", "orig_file_number": 161, "seqno_to_time_mapping": "N/A"}}
Jan 20 15:27:13 compute-0 ceph-mon[74360]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 15:27:13 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:27:13.011205) [db/compaction/compaction_job.cc:1663] [default] [JOB 98] Compacted 1@0 + 1@6 files to L6 => 13116927 bytes
Jan 20 15:27:13 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:27:13.012898) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 164.7 rd, 144.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 12.3 +0.0 blob) out(12.5 +0.0 blob), read-write-amplify(13.5) write-amplify(6.3) OK, records in: 10363, records dropped: 515 output_compression: NoCompression
Jan 20 15:27:13 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:27:13.012916) EVENT_LOG_v1 {"time_micros": 1768922833012907, "job": 98, "event": "compaction_finished", "compaction_time_micros": 91104, "compaction_time_cpu_micros": 50971, "output_level": 6, "num_output_files": 1, "total_output_size": 13116927, "num_input_records": 10363, "num_output_records": 9848, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 15:27:13 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000160.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:27:13 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768922833013614, "job": 98, "event": "table_file_deletion", "file_number": 160}
Jan 20 15:27:13 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000158.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:27:13 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768922833016013, "job": 98, "event": "table_file_deletion", "file_number": 158}
Jan 20 15:27:13 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:27:12.918894) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:27:13 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:27:13.016043) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:27:13 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:27:13.016047) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:27:13 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:27:13.016049) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:27:13 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:27:13.016051) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:27:13 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:27:13.016053) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:27:13 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:27:13 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:27:13 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:27:13.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:27:13 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:27:13 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2737404840' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:27:13 compute-0 nova_compute[250018]: 2026-01-20 15:27:13.363 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:27:13 compute-0 nova_compute[250018]: 2026-01-20 15:27:13.368 250022 DEBUG nova.compute.provider_tree [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:27:13 compute-0 nova_compute[250018]: 2026-01-20 15:27:13.387 250022 DEBUG nova.scheduler.client.report [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:27:13 compute-0 nova_compute[250018]: 2026-01-20 15:27:13.389 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 15:27:13 compute-0 nova_compute[250018]: 2026-01-20 15:27:13.389 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.720s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:27:13 compute-0 ceph-mon[74360]: pgmap v3151: 321 pgs: 321 active+clean; 254 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 725 KiB/s wr, 53 op/s
Jan 20 15:27:13 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/2737404840' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:27:13 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/2380053491' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:27:13 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/523948789' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:27:13 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/523948789' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:27:14 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:27:14 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:27:14 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:27:14.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:27:14 compute-0 nova_compute[250018]: 2026-01-20 15:27:14.370 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:27:14 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3152: 321 pgs: 321 active+clean; 264 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 1.3 MiB/s wr, 58 op/s
Jan 20 15:27:14 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/1376628842' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:27:15 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:27:15 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:27:15 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:27:15.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:27:16 compute-0 nova_compute[250018]: 2026-01-20 15:27:16.082 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:27:16 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:27:16 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:27:16 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:27:16.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:27:16 compute-0 nova_compute[250018]: 2026-01-20 15:27:16.389 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:27:16 compute-0 nova_compute[250018]: 2026-01-20 15:27:16.389 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:27:16 compute-0 nova_compute[250018]: 2026-01-20 15:27:16.390 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:27:16 compute-0 nova_compute[250018]: 2026-01-20 15:27:16.390 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 15:27:16 compute-0 ceph-mon[74360]: pgmap v3152: 321 pgs: 321 active+clean; 264 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 1.3 MiB/s wr, 58 op/s
Jan 20 15:27:16 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3153: 321 pgs: 321 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 598 KiB/s rd, 2.1 MiB/s wr, 73 op/s
Jan 20 15:27:16 compute-0 ceph-osd[84815]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 20 15:27:16 compute-0 ceph-osd[84815]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 5400.1 total, 600.0 interval
                                           Cumulative writes: 55K writes, 207K keys, 55K commit groups, 1.0 writes per commit group, ingest: 0.19 GB, 0.04 MB/s
                                           Cumulative WAL: 55K writes, 20K syncs, 2.66 writes per sync, written: 0.19 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 3703 writes, 11K keys, 3703 commit groups, 1.0 writes per commit group, ingest: 11.83 MB, 0.02 MB/s
                                           Interval WAL: 3703 writes, 1503 syncs, 2.46 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 20 15:27:17 compute-0 nova_compute[250018]: 2026-01-20 15:27:17.046 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:27:17 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:27:17 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:27:17 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:27:17.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:27:17 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:27:18 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:27:18 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:27:18 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:27:18.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:27:18 compute-0 ceph-mon[74360]: pgmap v3153: 321 pgs: 321 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 598 KiB/s rd, 2.1 MiB/s wr, 73 op/s
Jan 20 15:27:18 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3154: 321 pgs: 321 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 20 15:27:19 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:27:19 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:27:19 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:27:19.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:27:19 compute-0 nova_compute[250018]: 2026-01-20 15:27:19.372 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:27:20 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:27:20 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:27:20 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:27:20.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:27:20 compute-0 ceph-mon[74360]: pgmap v3154: 321 pgs: 321 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 20 15:27:20 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3155: 321 pgs: 321 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 325 KiB/s rd, 2.2 MiB/s wr, 65 op/s
Jan 20 15:27:21 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:27:21 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:27:21 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:27:21.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:27:21 compute-0 nova_compute[250018]: 2026-01-20 15:27:21.084 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:27:22 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:27:22 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:27:22 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:27:22.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:27:22 compute-0 podman[377710]: 2026-01-20 15:27:22.473376118 +0000 UTC m=+0.057541219 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Jan 20 15:27:22 compute-0 podman[377709]: 2026-01-20 15:27:22.545125055 +0000 UTC m=+0.133449489 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, 
config_id=ovn_controller, io.buildah.version=1.41.3)
Jan 20 15:27:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:27:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:27:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:27:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:27:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:27:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:27:22 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3156: 321 pgs: 321 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 325 KiB/s rd, 2.2 MiB/s wr, 65 op/s
Jan 20 15:27:23 compute-0 nova_compute[250018]: 2026-01-20 15:27:23.045 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:27:23 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:27:23 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:27:23 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:27:23.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:27:23 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:27:23 compute-0 ceph-mon[74360]: pgmap v3155: 321 pgs: 321 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 325 KiB/s rd, 2.2 MiB/s wr, 65 op/s
Jan 20 15:27:24 compute-0 nova_compute[250018]: 2026-01-20 15:27:24.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:27:24 compute-0 ceph-mon[74360]: pgmap v3156: 321 pgs: 321 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 325 KiB/s rd, 2.2 MiB/s wr, 65 op/s
Jan 20 15:27:24 compute-0 nova_compute[250018]: 2026-01-20 15:27:24.375 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:27:24 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:27:24 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:27:24 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:27:24.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:27:24 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3157: 321 pgs: 321 active+clean; 252 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 240 KiB/s rd, 1.4 MiB/s wr, 54 op/s
Jan 20 15:27:25 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:27:25 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:27:25 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:27:25.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:27:26 compute-0 nova_compute[250018]: 2026-01-20 15:27:26.087 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:27:26 compute-0 ceph-mon[74360]: pgmap v3157: 321 pgs: 321 active+clean; 252 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 240 KiB/s rd, 1.4 MiB/s wr, 54 op/s
Jan 20 15:27:26 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/3885040948' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:27:26 compute-0 sudo[377757]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:27:26 compute-0 sudo[377757]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:27:26 compute-0 sudo[377757]: pam_unix(sudo:session): session closed for user root
Jan 20 15:27:26 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:27:26 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:27:26 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:27:26.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:27:26 compute-0 sudo[377782]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:27:26 compute-0 sudo[377782]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:27:26 compute-0 sudo[377782]: pam_unix(sudo:session): session closed for user root
Jan 20 15:27:26 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3158: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 193 KiB/s rd, 885 KiB/s wr, 69 op/s
Jan 20 15:27:27 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:27:27 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:27:27 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:27:27.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:27:28 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:27:28 compute-0 ceph-mon[74360]: pgmap v3158: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 193 KiB/s rd, 885 KiB/s wr, 69 op/s
Jan 20 15:27:28 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:27:28 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:27:28 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:27:28.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:27:28 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3159: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 14 KiB/s wr, 28 op/s
Jan 20 15:27:29 compute-0 nova_compute[250018]: 2026-01-20 15:27:29.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:27:29 compute-0 nova_compute[250018]: 2026-01-20 15:27:29.051 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 15:27:29 compute-0 nova_compute[250018]: 2026-01-20 15:27:29.051 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 15:27:29 compute-0 nova_compute[250018]: 2026-01-20 15:27:29.079 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 20 15:27:29 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:27:29 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:27:29 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:27:29.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:27:29 compute-0 nova_compute[250018]: 2026-01-20 15:27:29.379 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:27:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:27:30.089 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=73, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=72) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:27:30 compute-0 nova_compute[250018]: 2026-01-20 15:27:30.090 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:27:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:27:30.091 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 15:27:30 compute-0 ceph-mon[74360]: pgmap v3159: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 14 KiB/s wr, 28 op/s
Jan 20 15:27:30 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:27:30 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:27:30 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:27:30.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:27:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:27:30.797 160071 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:27:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:27:30.797 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:27:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:27:30.797 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:27:30 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3160: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 14 KiB/s wr, 28 op/s
Jan 20 15:27:31 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:27:31 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:27:31 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:27:31.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:27:31 compute-0 nova_compute[250018]: 2026-01-20 15:27:31.142 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:27:31 compute-0 ceph-mgr[74653]: [devicehealth INFO root] Check health
Jan 20 15:27:32 compute-0 ceph-mon[74360]: pgmap v3160: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 14 KiB/s wr, 28 op/s
Jan 20 15:27:32 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:27:32 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:27:32 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:27:32.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:27:32 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3161: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 2.2 KiB/s wr, 27 op/s
Jan 20 15:27:33 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:27:33 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:27:33 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:27:33.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:27:33 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:27:34 compute-0 nova_compute[250018]: 2026-01-20 15:27:34.381 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:27:34 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:27:34 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:27:34 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:27:34.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:27:34 compute-0 ceph-mon[74360]: pgmap v3161: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 2.2 KiB/s wr, 27 op/s
Jan 20 15:27:34 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3162: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 20 15:27:35 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:27:35 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:27:35 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:27:35.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:27:36 compute-0 nova_compute[250018]: 2026-01-20 15:27:36.145 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:27:36 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:27:36 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:27:36 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:27:36.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:27:36 compute-0 ceph-mon[74360]: pgmap v3162: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 20 15:27:36 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3163: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 20 15:27:37 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:27:37 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:27:37 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:27:37.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:27:38 compute-0 ceph-mon[74360]: pgmap v3163: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 20 15:27:38 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:27:38 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:27:38 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:27:38 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:27:38.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:27:38 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3164: 321 pgs: 321 active+clean; 177 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 426 B/s rd, 0 B/s wr, 1 op/s
Jan 20 15:27:39 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:27:39.093 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=367c1a2c-b16a-4828-ab5a-626bb50023b4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '73'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:27:39 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:27:39 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:27:39 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:27:39.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:27:39 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/2202569858' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:27:39 compute-0 nova_compute[250018]: 2026-01-20 15:27:39.383 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:27:40 compute-0 ceph-mon[74360]: pgmap v3164: 321 pgs: 321 active+clean; 177 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 426 B/s rd, 0 B/s wr, 1 op/s
Jan 20 15:27:40 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:27:40 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:27:40 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:27:40.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:27:40 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3165: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 20 15:27:41 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:27:41 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:27:41 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:27:41.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:27:41 compute-0 nova_compute[250018]: 2026-01-20 15:27:41.148 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:27:42 compute-0 ceph-mon[74360]: pgmap v3165: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 20 15:27:42 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:27:42 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:27:42 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:27:42.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:27:42 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3166: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 20 15:27:43 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:27:43 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:27:43 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:27:43.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:27:43 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:27:44 compute-0 ceph-mon[74360]: pgmap v3166: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 20 15:27:44 compute-0 nova_compute[250018]: 2026-01-20 15:27:44.385 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:27:44 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:27:44 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:27:44 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:27:44.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:27:44 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3167: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 20 15:27:45 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:27:45 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:27:45 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:27:45.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:27:46 compute-0 nova_compute[250018]: 2026-01-20 15:27:46.149 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:27:46 compute-0 ceph-mon[74360]: pgmap v3167: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 20 15:27:46 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:27:46 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:27:46 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:27:46.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:27:46 compute-0 sudo[377817]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:27:46 compute-0 sudo[377817]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:27:46 compute-0 sudo[377817]: pam_unix(sudo:session): session closed for user root
Jan 20 15:27:46 compute-0 sudo[377842]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:27:46 compute-0 sudo[377842]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:27:46 compute-0 sudo[377842]: pam_unix(sudo:session): session closed for user root
Jan 20 15:27:46 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3168: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 20 15:27:47 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:27:47 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:27:47 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:27:47.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:27:48 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:27:48 compute-0 ceph-mon[74360]: pgmap v3168: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 20 15:27:48 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:27:48 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:27:48 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:27:48.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:27:48 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3169: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 18 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 20 15:27:49 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:27:49 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:27:49 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:27:49.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:27:49 compute-0 nova_compute[250018]: 2026-01-20 15:27:49.387 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:27:50 compute-0 ceph-mon[74360]: pgmap v3169: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 18 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 20 15:27:50 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:27:50 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:27:50 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:27:50.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:27:50 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3170: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 18 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Jan 20 15:27:50 compute-0 sshd-session[377868]: Invalid user test from 134.122.57.138 port 50224
Jan 20 15:27:51 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:27:51 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:27:51 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:27:51.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:27:51 compute-0 nova_compute[250018]: 2026-01-20 15:27:51.149 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:27:51 compute-0 sshd-session[377868]: Connection closed by invalid user test 134.122.57.138 port 50224 [preauth]
Jan 20 15:27:52 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:27:52 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:27:52 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:27:52.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:27:52 compute-0 ceph-mon[74360]: pgmap v3170: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 18 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Jan 20 15:27:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:27:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:27:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:27:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:27:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:27:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:27:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Optimize plan auto_2026-01-20_15:27:52
Jan 20 15:27:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 15:27:52 compute-0 ceph-mgr[74653]: [balancer INFO root] do_upmap
Jan 20 15:27:52 compute-0 ceph-mgr[74653]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.control', 'images', '.mgr', 'vms', 'default.rgw.log', 'default.rgw.meta', 'volumes', 'cephfs.cephfs.meta', 'backups', '.rgw.root']
Jan 20 15:27:52 compute-0 ceph-mgr[74653]: [balancer INFO root] prepared 0/10 changes
Jan 20 15:27:52 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3171: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:27:53 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:27:53 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:27:53 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:27:53.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:27:53 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:27:53 compute-0 podman[377873]: 2026-01-20 15:27:53.490945615 +0000 UTC m=+0.087713032 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 20 15:27:53 compute-0 podman[377872]: 2026-01-20 15:27:53.497058312 +0000 UTC m=+0.095728821 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Jan 20 15:27:54 compute-0 nova_compute[250018]: 2026-01-20 15:27:54.387 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:27:54 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:27:54 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:27:54 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:27:54.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:27:54 compute-0 ceph-mon[74360]: pgmap v3171: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:27:54 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3172: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:27:55 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:27:55 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:27:55 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:27:55.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:27:56 compute-0 nova_compute[250018]: 2026-01-20 15:27:56.151 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:27:56 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:27:56 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:27:56 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:27:56.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:27:56 compute-0 ceph-mon[74360]: pgmap v3172: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:27:56 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3173: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:27:57 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:27:57 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:27:57 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:27:57.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:27:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 15:27:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 15:27:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 15:27:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 15:27:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 15:27:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 15:27:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 15:27:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 15:27:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 15:27:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 15:27:58 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:27:58 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:27:58 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:27:58 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:27:58.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:27:58 compute-0 ceph-mon[74360]: pgmap v3173: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:27:58 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3174: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:27:59 compute-0 nova_compute[250018]: 2026-01-20 15:27:59.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:27:59 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:27:59 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:27:59 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:27:59.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:27:59 compute-0 nova_compute[250018]: 2026-01-20 15:27:59.388 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:28:00 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:28:00 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:28:00 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:28:00.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:28:00 compute-0 ceph-mon[74360]: pgmap v3174: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:28:00 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3175: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:28:01 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:28:01 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:28:01 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:28:01.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:28:01 compute-0 nova_compute[250018]: 2026-01-20 15:28:01.153 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:28:02 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:28:02 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:28:02 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:28:02.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:28:02 compute-0 ceph-mon[74360]: pgmap v3175: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:28:02 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3176: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:28:03 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:28:03 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:28:03 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:28:03.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:28:03 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:28:04 compute-0 nova_compute[250018]: 2026-01-20 15:28:04.389 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:28:04 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:28:04 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:28:04 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:28:04.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:28:04 compute-0 ceph-mon[74360]: pgmap v3176: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:28:04 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3177: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:28:05 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:28:05 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:28:05 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:28:05.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:28:06 compute-0 nova_compute[250018]: 2026-01-20 15:28:06.154 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:28:06 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:28:06 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:28:06 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:28:06.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:28:06 compute-0 sudo[377922]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:28:06 compute-0 ceph-mon[74360]: pgmap v3177: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:28:06 compute-0 sudo[377922]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:28:06 compute-0 sudo[377922]: pam_unix(sudo:session): session closed for user root
Jan 20 15:28:06 compute-0 sudo[377947]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:28:06 compute-0 sudo[377947]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:28:06 compute-0 sudo[377947]: pam_unix(sudo:session): session closed for user root
Jan 20 15:28:06 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3178: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:28:07 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:28:07 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:28:07 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:28:07.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:28:08 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:28:08 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:28:08 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:28:08 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:28:08.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:28:08 compute-0 ceph-mon[74360]: pgmap v3178: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:28:08 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/185543081' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:28:08 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3179: 321 pgs: 321 active+clean; 140 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 7.6 KiB/s rd, 821 KiB/s wr, 9 op/s
Jan 20 15:28:09 compute-0 nova_compute[250018]: 2026-01-20 15:28:09.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:28:09 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:28:09 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:28:09 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:28:09.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:28:09 compute-0 nova_compute[250018]: 2026-01-20 15:28:09.391 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:28:09 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/566218198' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:28:10 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:28:10 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:28:10 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:28:10.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:28:10 compute-0 sudo[377975]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:28:10 compute-0 sudo[377975]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:28:10 compute-0 sudo[377975]: pam_unix(sudo:session): session closed for user root
Jan 20 15:28:10 compute-0 sudo[378000]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:28:10 compute-0 sudo[378000]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:28:10 compute-0 sudo[378000]: pam_unix(sudo:session): session closed for user root
Jan 20 15:28:10 compute-0 sudo[378025]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:28:10 compute-0 sudo[378025]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:28:10 compute-0 sudo[378025]: pam_unix(sudo:session): session closed for user root
Jan 20 15:28:10 compute-0 sudo[378050]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 15:28:10 compute-0 sudo[378050]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:28:10 compute-0 ceph-mon[74360]: pgmap v3179: 321 pgs: 321 active+clean; 140 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 7.6 KiB/s rd, 821 KiB/s wr, 9 op/s
Jan 20 15:28:10 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/1912177260' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:28:10 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3180: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 15 KiB/s rd, 1.8 MiB/s wr, 24 op/s
Jan 20 15:28:11 compute-0 sudo[378050]: pam_unix(sudo:session): session closed for user root
Jan 20 15:28:11 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:28:11 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:28:11 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:28:11.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:28:11 compute-0 nova_compute[250018]: 2026-01-20 15:28:11.156 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:28:11 compute-0 sudo[378106]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:28:11 compute-0 sudo[378106]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:28:11 compute-0 sudo[378106]: pam_unix(sudo:session): session closed for user root
Jan 20 15:28:11 compute-0 sudo[378131]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:28:11 compute-0 sudo[378131]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:28:11 compute-0 sudo[378131]: pam_unix(sudo:session): session closed for user root
Jan 20 15:28:11 compute-0 sudo[378156]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:28:11 compute-0 sudo[378156]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:28:11 compute-0 sudo[378156]: pam_unix(sudo:session): session closed for user root
Jan 20 15:28:11 compute-0 sudo[378181]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 list-networks
Jan 20 15:28:11 compute-0 sudo[378181]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:28:11 compute-0 sudo[378181]: pam_unix(sudo:session): session closed for user root
Jan 20 15:28:11 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 15:28:11 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 20 15:28:11 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:28:11 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 15:28:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 15:28:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:28:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 20 15:28:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:28:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.000988921749953471 of space, bias 1.0, pg target 0.29667652498604136 quantized to 32 (current 32)
Jan 20 15:28:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:28:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021614147124511445 of space, bias 1.0, pg target 0.6484244137353433 quantized to 32 (current 32)
Jan 20 15:28:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:28:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 20 15:28:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:28:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 20 15:28:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:28:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 20 15:28:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:28:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 15:28:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:28:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 20 15:28:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:28:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 20 15:28:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:28:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 15:28:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:28:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 20 15:28:11 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/864390893' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:28:11 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:28:11 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 20 15:28:11 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:28:12 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:28:12 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 15:28:12 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:28:12 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 20 15:28:12 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 15:28:12 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 20 15:28:12 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:28:12 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev d325c560-59aa-4f82-a753-a0f31e977e58 does not exist
Jan 20 15:28:12 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 496a31e2-6b2a-4654-af1a-179af2094cf0 does not exist
Jan 20 15:28:12 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 739bcb0c-55ee-494c-8c74-1ea7a17ce342 does not exist
Jan 20 15:28:12 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 20 15:28:12 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 15:28:12 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 20 15:28:12 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 15:28:12 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 15:28:12 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:28:12 compute-0 nova_compute[250018]: 2026-01-20 15:28:12.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:28:12 compute-0 nova_compute[250018]: 2026-01-20 15:28:12.073 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:28:12 compute-0 nova_compute[250018]: 2026-01-20 15:28:12.073 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:28:12 compute-0 nova_compute[250018]: 2026-01-20 15:28:12.074 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:28:12 compute-0 nova_compute[250018]: 2026-01-20 15:28:12.074 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 15:28:12 compute-0 nova_compute[250018]: 2026-01-20 15:28:12.074 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:28:12 compute-0 sudo[378224]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:28:12 compute-0 sudo[378224]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:28:12 compute-0 sudo[378224]: pam_unix(sudo:session): session closed for user root
Jan 20 15:28:12 compute-0 sudo[378250]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:28:12 compute-0 sudo[378250]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:28:12 compute-0 sudo[378250]: pam_unix(sudo:session): session closed for user root
Jan 20 15:28:12 compute-0 sudo[378275]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:28:12 compute-0 sudo[378275]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:28:12 compute-0 sudo[378275]: pam_unix(sudo:session): session closed for user root
Jan 20 15:28:12 compute-0 sudo[378302]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 15:28:12 compute-0 sudo[378302]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:28:12 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:28:12 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:28:12 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:28:12.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:28:12 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:28:12 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/323984659' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:28:12 compute-0 nova_compute[250018]: 2026-01-20 15:28:12.517 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:28:12 compute-0 podman[378384]: 2026-01-20 15:28:12.538697757 +0000 UTC m=+0.043064185 container create e024c06e5c3b1a73ede80f0beb3974f8a78d170a3943e6baafd0138366465fb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_shtern, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 15:28:12 compute-0 systemd[1]: Started libpod-conmon-e024c06e5c3b1a73ede80f0beb3974f8a78d170a3943e6baafd0138366465fb2.scope.
Jan 20 15:28:12 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:28:12 compute-0 podman[378384]: 2026-01-20 15:28:12.517997753 +0000 UTC m=+0.022364201 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:28:12 compute-0 podman[378384]: 2026-01-20 15:28:12.622369487 +0000 UTC m=+0.126735935 container init e024c06e5c3b1a73ede80f0beb3974f8a78d170a3943e6baafd0138366465fb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_shtern, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 20 15:28:12 compute-0 podman[378384]: 2026-01-20 15:28:12.63090588 +0000 UTC m=+0.135272308 container start e024c06e5c3b1a73ede80f0beb3974f8a78d170a3943e6baafd0138366465fb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_shtern, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 20 15:28:12 compute-0 podman[378384]: 2026-01-20 15:28:12.633873932 +0000 UTC m=+0.138240390 container attach e024c06e5c3b1a73ede80f0beb3974f8a78d170a3943e6baafd0138366465fb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_shtern, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 15:28:12 compute-0 vigilant_shtern[378402]: 167 167
Jan 20 15:28:12 compute-0 systemd[1]: libpod-e024c06e5c3b1a73ede80f0beb3974f8a78d170a3943e6baafd0138366465fb2.scope: Deactivated successfully.
Jan 20 15:28:12 compute-0 podman[378407]: 2026-01-20 15:28:12.689667782 +0000 UTC m=+0.024060187 container died e024c06e5c3b1a73ede80f0beb3974f8a78d170a3943e6baafd0138366465fb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_shtern, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 15:28:12 compute-0 nova_compute[250018]: 2026-01-20 15:28:12.693 250022 WARNING nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:28:12 compute-0 nova_compute[250018]: 2026-01-20 15:28:12.697 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4197MB free_disk=20.967525482177734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 15:28:12 compute-0 nova_compute[250018]: 2026-01-20 15:28:12.698 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:28:12 compute-0 nova_compute[250018]: 2026-01-20 15:28:12.698 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:28:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-048336dc814e34c7897a25cf476dea981330e8224e1b611277504ec942d60d56-merged.mount: Deactivated successfully.
Jan 20 15:28:12 compute-0 podman[378407]: 2026-01-20 15:28:12.730271779 +0000 UTC m=+0.064664164 container remove e024c06e5c3b1a73ede80f0beb3974f8a78d170a3943e6baafd0138366465fb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_shtern, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 15:28:12 compute-0 systemd[1]: libpod-conmon-e024c06e5c3b1a73ede80f0beb3974f8a78d170a3943e6baafd0138366465fb2.scope: Deactivated successfully.
Jan 20 15:28:12 compute-0 nova_compute[250018]: 2026-01-20 15:28:12.768 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 15:28:12 compute-0 nova_compute[250018]: 2026-01-20 15:28:12.769 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 15:28:12 compute-0 nova_compute[250018]: 2026-01-20 15:28:12.787 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:28:12 compute-0 podman[378428]: 2026-01-20 15:28:12.893883079 +0000 UTC m=+0.044240147 container create c88dba3b9239c65251cf508bbc42fc25bb5177674dc9e05b528b670e95e6fe0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_margulis, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 15:28:12 compute-0 systemd[1]: Started libpod-conmon-c88dba3b9239c65251cf508bbc42fc25bb5177674dc9e05b528b670e95e6fe0c.scope.
Jan 20 15:28:12 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:28:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b92ad42d18f240096899baed523939c0578b5ab5fce3a15ba6e6b72c9d7613ed/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 15:28:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b92ad42d18f240096899baed523939c0578b5ab5fce3a15ba6e6b72c9d7613ed/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 15:28:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b92ad42d18f240096899baed523939c0578b5ab5fce3a15ba6e6b72c9d7613ed/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 15:28:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b92ad42d18f240096899baed523939c0578b5ab5fce3a15ba6e6b72c9d7613ed/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 15:28:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b92ad42d18f240096899baed523939c0578b5ab5fce3a15ba6e6b72c9d7613ed/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 15:28:12 compute-0 podman[378428]: 2026-01-20 15:28:12.961925234 +0000 UTC m=+0.112282332 container init c88dba3b9239c65251cf508bbc42fc25bb5177674dc9e05b528b670e95e6fe0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_margulis, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 20 15:28:12 compute-0 podman[378428]: 2026-01-20 15:28:12.874193262 +0000 UTC m=+0.024550350 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:28:12 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3181: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 15 KiB/s rd, 1.8 MiB/s wr, 24 op/s
Jan 20 15:28:12 compute-0 podman[378428]: 2026-01-20 15:28:12.972636305 +0000 UTC m=+0.122993373 container start c88dba3b9239c65251cf508bbc42fc25bb5177674dc9e05b528b670e95e6fe0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_margulis, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:28:12 compute-0 podman[378428]: 2026-01-20 15:28:12.975500294 +0000 UTC m=+0.125857372 container attach c88dba3b9239c65251cf508bbc42fc25bb5177674dc9e05b528b670e95e6fe0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_margulis, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 20 15:28:12 compute-0 ceph-mon[74360]: pgmap v3180: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 15 KiB/s rd, 1.8 MiB/s wr, 24 op/s
Jan 20 15:28:12 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/3409456068' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:28:12 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:28:12 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:28:12 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:28:12 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:28:12 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:28:12 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 15:28:12 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:28:12 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 15:28:12 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 15:28:12 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:28:12 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/1982473906' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:28:12 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/323984659' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:28:12 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/1083406231' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:28:13 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:28:13 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:28:13 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:28:13.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:28:13 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:28:13 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2276571353' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:28:13 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:28:13 compute-0 nova_compute[250018]: 2026-01-20 15:28:13.239 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:28:13 compute-0 nova_compute[250018]: 2026-01-20 15:28:13.247 250022 DEBUG nova.compute.provider_tree [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:28:13 compute-0 nova_compute[250018]: 2026-01-20 15:28:13.264 250022 DEBUG nova.scheduler.client.report [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:28:13 compute-0 nova_compute[250018]: 2026-01-20 15:28:13.266 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 15:28:13 compute-0 nova_compute[250018]: 2026-01-20 15:28:13.266 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.568s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:28:13 compute-0 pedantic_margulis[378464]: --> passed data devices: 0 physical, 1 LVM
Jan 20 15:28:13 compute-0 pedantic_margulis[378464]: --> relative data size: 1.0
Jan 20 15:28:13 compute-0 pedantic_margulis[378464]: --> All data devices are unavailable
Jan 20 15:28:13 compute-0 systemd[1]: libpod-c88dba3b9239c65251cf508bbc42fc25bb5177674dc9e05b528b670e95e6fe0c.scope: Deactivated successfully.
Jan 20 15:28:13 compute-0 podman[378428]: 2026-01-20 15:28:13.779247953 +0000 UTC m=+0.929605062 container died c88dba3b9239c65251cf508bbc42fc25bb5177674dc9e05b528b670e95e6fe0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_margulis, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True)
Jan 20 15:28:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-b92ad42d18f240096899baed523939c0578b5ab5fce3a15ba6e6b72c9d7613ed-merged.mount: Deactivated successfully.
Jan 20 15:28:13 compute-0 podman[378428]: 2026-01-20 15:28:13.857681382 +0000 UTC m=+1.008038450 container remove c88dba3b9239c65251cf508bbc42fc25bb5177674dc9e05b528b670e95e6fe0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_margulis, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 15:28:13 compute-0 systemd[1]: libpod-conmon-c88dba3b9239c65251cf508bbc42fc25bb5177674dc9e05b528b670e95e6fe0c.scope: Deactivated successfully.
Jan 20 15:28:13 compute-0 sudo[378302]: pam_unix(sudo:session): session closed for user root
Jan 20 15:28:13 compute-0 sudo[378495]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:28:13 compute-0 sudo[378495]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:28:13 compute-0 sudo[378495]: pam_unix(sudo:session): session closed for user root
Jan 20 15:28:14 compute-0 sudo[378520]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:28:14 compute-0 ceph-mon[74360]: pgmap v3181: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 15 KiB/s rd, 1.8 MiB/s wr, 24 op/s
Jan 20 15:28:14 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/2276571353' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:28:14 compute-0 sudo[378520]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:28:14 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/3549410671' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:28:14 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/3549410671' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:28:14 compute-0 sudo[378520]: pam_unix(sudo:session): session closed for user root
Jan 20 15:28:14 compute-0 sudo[378545]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:28:14 compute-0 sudo[378545]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:28:14 compute-0 sudo[378545]: pam_unix(sudo:session): session closed for user root
Jan 20 15:28:14 compute-0 sudo[378570]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- lvm list --format json
Jan 20 15:28:14 compute-0 sudo[378570]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:28:14 compute-0 nova_compute[250018]: 2026-01-20 15:28:14.392 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:28:14 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:28:14 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:28:14 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:28:14.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:28:14 compute-0 podman[378637]: 2026-01-20 15:28:14.488847296 +0000 UTC m=+0.039631761 container create 2ef599e294d44c4bda83ba8bfd77cc498f5c3e4d66bf250529596e44353daec5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_dirac, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 20 15:28:14 compute-0 systemd[1]: Started libpod-conmon-2ef599e294d44c4bda83ba8bfd77cc498f5c3e4d66bf250529596e44353daec5.scope.
Jan 20 15:28:14 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:28:14 compute-0 podman[378637]: 2026-01-20 15:28:14.471598846 +0000 UTC m=+0.022383331 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:28:14 compute-0 podman[378637]: 2026-01-20 15:28:14.565712671 +0000 UTC m=+0.116497136 container init 2ef599e294d44c4bda83ba8bfd77cc498f5c3e4d66bf250529596e44353daec5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_dirac, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef)
Jan 20 15:28:14 compute-0 podman[378637]: 2026-01-20 15:28:14.57263948 +0000 UTC m=+0.123423945 container start 2ef599e294d44c4bda83ba8bfd77cc498f5c3e4d66bf250529596e44353daec5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_dirac, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 15:28:14 compute-0 podman[378637]: 2026-01-20 15:28:14.575741004 +0000 UTC m=+0.126525539 container attach 2ef599e294d44c4bda83ba8bfd77cc498f5c3e4d66bf250529596e44353daec5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_dirac, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:28:14 compute-0 zen_dirac[378654]: 167 167
Jan 20 15:28:14 compute-0 systemd[1]: libpod-2ef599e294d44c4bda83ba8bfd77cc498f5c3e4d66bf250529596e44353daec5.scope: Deactivated successfully.
Jan 20 15:28:14 compute-0 conmon[378654]: conmon 2ef599e294d44c4bda83 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2ef599e294d44c4bda83ba8bfd77cc498f5c3e4d66bf250529596e44353daec5.scope/container/memory.events
Jan 20 15:28:14 compute-0 podman[378637]: 2026-01-20 15:28:14.578788288 +0000 UTC m=+0.129572773 container died 2ef599e294d44c4bda83ba8bfd77cc498f5c3e4d66bf250529596e44353daec5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_dirac, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 20 15:28:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-88c7f884d7d25a6e0a4b57b918bd8e2338e42d1cdc441d0e2ee1beb99ea28f98-merged.mount: Deactivated successfully.
Jan 20 15:28:14 compute-0 podman[378637]: 2026-01-20 15:28:14.611017757 +0000 UTC m=+0.161802222 container remove 2ef599e294d44c4bda83ba8bfd77cc498f5c3e4d66bf250529596e44353daec5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_dirac, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 20 15:28:14 compute-0 systemd[1]: libpod-conmon-2ef599e294d44c4bda83ba8bfd77cc498f5c3e4d66bf250529596e44353daec5.scope: Deactivated successfully.
Jan 20 15:28:14 compute-0 podman[378675]: 2026-01-20 15:28:14.763307467 +0000 UTC m=+0.038652484 container create 916e46ae5507d102fff386ddf5bca1630e75f1ffa6b68e3c68a67c18853c13c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_chebyshev, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 20 15:28:14 compute-0 systemd[1]: Started libpod-conmon-916e46ae5507d102fff386ddf5bca1630e75f1ffa6b68e3c68a67c18853c13c7.scope.
Jan 20 15:28:14 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:28:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70762c1d89b2a3a431db1da8446c8269d1f048b192a626493ee443c6f620d4c5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 15:28:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70762c1d89b2a3a431db1da8446c8269d1f048b192a626493ee443c6f620d4c5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 15:28:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70762c1d89b2a3a431db1da8446c8269d1f048b192a626493ee443c6f620d4c5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 15:28:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70762c1d89b2a3a431db1da8446c8269d1f048b192a626493ee443c6f620d4c5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 15:28:14 compute-0 podman[378675]: 2026-01-20 15:28:14.832547025 +0000 UTC m=+0.107892052 container init 916e46ae5507d102fff386ddf5bca1630e75f1ffa6b68e3c68a67c18853c13c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_chebyshev, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Jan 20 15:28:14 compute-0 podman[378675]: 2026-01-20 15:28:14.838330083 +0000 UTC m=+0.113675100 container start 916e46ae5507d102fff386ddf5bca1630e75f1ffa6b68e3c68a67c18853c13c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_chebyshev, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 20 15:28:14 compute-0 podman[378675]: 2026-01-20 15:28:14.748145105 +0000 UTC m=+0.023490152 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:28:14 compute-0 podman[378675]: 2026-01-20 15:28:14.842167918 +0000 UTC m=+0.117512955 container attach 916e46ae5507d102fff386ddf5bca1630e75f1ffa6b68e3c68a67c18853c13c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_chebyshev, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 20 15:28:14 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3182: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Jan 20 15:28:15 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:28:15 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:28:15 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:28:15.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:28:15 compute-0 epic_chebyshev[378692]: {
Jan 20 15:28:15 compute-0 epic_chebyshev[378692]:     "0": [
Jan 20 15:28:15 compute-0 epic_chebyshev[378692]:         {
Jan 20 15:28:15 compute-0 epic_chebyshev[378692]:             "devices": [
Jan 20 15:28:15 compute-0 epic_chebyshev[378692]:                 "/dev/loop3"
Jan 20 15:28:15 compute-0 epic_chebyshev[378692]:             ],
Jan 20 15:28:15 compute-0 epic_chebyshev[378692]:             "lv_name": "ceph_lv0",
Jan 20 15:28:15 compute-0 epic_chebyshev[378692]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 15:28:15 compute-0 epic_chebyshev[378692]:             "lv_size": "7511998464",
Jan 20 15:28:15 compute-0 epic_chebyshev[378692]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e399cf45-e6b6-5393-99f1-75c601d3f188,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afc3740a-ccee-46f8-b343-22648cc89376,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 20 15:28:15 compute-0 epic_chebyshev[378692]:             "lv_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 15:28:15 compute-0 epic_chebyshev[378692]:             "name": "ceph_lv0",
Jan 20 15:28:15 compute-0 epic_chebyshev[378692]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 15:28:15 compute-0 epic_chebyshev[378692]:             "tags": {
Jan 20 15:28:15 compute-0 epic_chebyshev[378692]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 15:28:15 compute-0 epic_chebyshev[378692]:                 "ceph.block_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 15:28:15 compute-0 epic_chebyshev[378692]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 15:28:15 compute-0 epic_chebyshev[378692]:                 "ceph.cluster_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 15:28:15 compute-0 epic_chebyshev[378692]:                 "ceph.cluster_name": "ceph",
Jan 20 15:28:15 compute-0 epic_chebyshev[378692]:                 "ceph.crush_device_class": "",
Jan 20 15:28:15 compute-0 epic_chebyshev[378692]:                 "ceph.encrypted": "0",
Jan 20 15:28:15 compute-0 epic_chebyshev[378692]:                 "ceph.osd_fsid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 15:28:15 compute-0 epic_chebyshev[378692]:                 "ceph.osd_id": "0",
Jan 20 15:28:15 compute-0 epic_chebyshev[378692]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 15:28:15 compute-0 epic_chebyshev[378692]:                 "ceph.type": "block",
Jan 20 15:28:15 compute-0 epic_chebyshev[378692]:                 "ceph.vdo": "0"
Jan 20 15:28:15 compute-0 epic_chebyshev[378692]:             },
Jan 20 15:28:15 compute-0 epic_chebyshev[378692]:             "type": "block",
Jan 20 15:28:15 compute-0 epic_chebyshev[378692]:             "vg_name": "ceph_vg0"
Jan 20 15:28:15 compute-0 epic_chebyshev[378692]:         }
Jan 20 15:28:15 compute-0 epic_chebyshev[378692]:     ]
Jan 20 15:28:15 compute-0 epic_chebyshev[378692]: }
Jan 20 15:28:15 compute-0 systemd[1]: libpod-916e46ae5507d102fff386ddf5bca1630e75f1ffa6b68e3c68a67c18853c13c7.scope: Deactivated successfully.
Jan 20 15:28:15 compute-0 podman[378675]: 2026-01-20 15:28:15.606364549 +0000 UTC m=+0.881709566 container died 916e46ae5507d102fff386ddf5bca1630e75f1ffa6b68e3c68a67c18853c13c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_chebyshev, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 20 15:28:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-70762c1d89b2a3a431db1da8446c8269d1f048b192a626493ee443c6f620d4c5-merged.mount: Deactivated successfully.
Jan 20 15:28:15 compute-0 podman[378675]: 2026-01-20 15:28:15.653332649 +0000 UTC m=+0.928677666 container remove 916e46ae5507d102fff386ddf5bca1630e75f1ffa6b68e3c68a67c18853c13c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_chebyshev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 20 15:28:15 compute-0 systemd[1]: libpod-conmon-916e46ae5507d102fff386ddf5bca1630e75f1ffa6b68e3c68a67c18853c13c7.scope: Deactivated successfully.
Jan 20 15:28:15 compute-0 sudo[378570]: pam_unix(sudo:session): session closed for user root
Jan 20 15:28:15 compute-0 sudo[378715]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:28:15 compute-0 sudo[378715]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:28:15 compute-0 sudo[378715]: pam_unix(sudo:session): session closed for user root
Jan 20 15:28:15 compute-0 sudo[378740]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:28:15 compute-0 sudo[378740]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:28:15 compute-0 sudo[378740]: pam_unix(sudo:session): session closed for user root
Jan 20 15:28:15 compute-0 sudo[378765]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:28:15 compute-0 sudo[378765]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:28:15 compute-0 sudo[378765]: pam_unix(sudo:session): session closed for user root
Jan 20 15:28:15 compute-0 sudo[378790]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- raw list --format json
Jan 20 15:28:15 compute-0 sudo[378790]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:28:16 compute-0 nova_compute[250018]: 2026-01-20 15:28:16.158 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:28:16 compute-0 podman[378853]: 2026-01-20 15:28:16.198267863 +0000 UTC m=+0.038637474 container create c70581e6e4556b879fbb73790fd7e0b367e853c7690a7dc6cc900b57d68142b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_wilbur, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 15:28:16 compute-0 systemd[1]: Started libpod-conmon-c70581e6e4556b879fbb73790fd7e0b367e853c7690a7dc6cc900b57d68142b7.scope.
Jan 20 15:28:16 compute-0 nova_compute[250018]: 2026-01-20 15:28:16.266 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:28:16 compute-0 nova_compute[250018]: 2026-01-20 15:28:16.267 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:28:16 compute-0 nova_compute[250018]: 2026-01-20 15:28:16.267 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 15:28:16 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:28:16 compute-0 podman[378853]: 2026-01-20 15:28:16.180644883 +0000 UTC m=+0.021014504 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:28:16 compute-0 podman[378853]: 2026-01-20 15:28:16.283153847 +0000 UTC m=+0.123523458 container init c70581e6e4556b879fbb73790fd7e0b367e853c7690a7dc6cc900b57d68142b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_wilbur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 20 15:28:16 compute-0 podman[378853]: 2026-01-20 15:28:16.28985485 +0000 UTC m=+0.130224451 container start c70581e6e4556b879fbb73790fd7e0b367e853c7690a7dc6cc900b57d68142b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_wilbur, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 15:28:16 compute-0 podman[378853]: 2026-01-20 15:28:16.292820681 +0000 UTC m=+0.133190282 container attach c70581e6e4556b879fbb73790fd7e0b367e853c7690a7dc6cc900b57d68142b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_wilbur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 15:28:16 compute-0 confident_wilbur[378869]: 167 167
Jan 20 15:28:16 compute-0 systemd[1]: libpod-c70581e6e4556b879fbb73790fd7e0b367e853c7690a7dc6cc900b57d68142b7.scope: Deactivated successfully.
Jan 20 15:28:16 compute-0 podman[378853]: 2026-01-20 15:28:16.295501904 +0000 UTC m=+0.135871505 container died c70581e6e4556b879fbb73790fd7e0b367e853c7690a7dc6cc900b57d68142b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_wilbur, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 20 15:28:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-d00bc7b33fdd41766f28d3d87188c061a99488a7c4a6e677acf45de707d56ca5-merged.mount: Deactivated successfully.
Jan 20 15:28:16 compute-0 podman[378853]: 2026-01-20 15:28:16.329755977 +0000 UTC m=+0.170125578 container remove c70581e6e4556b879fbb73790fd7e0b367e853c7690a7dc6cc900b57d68142b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_wilbur, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 20 15:28:16 compute-0 systemd[1]: libpod-conmon-c70581e6e4556b879fbb73790fd7e0b367e853c7690a7dc6cc900b57d68142b7.scope: Deactivated successfully.
Jan 20 15:28:16 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:28:16 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:28:16 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:28:16.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:28:16 compute-0 podman[378893]: 2026-01-20 15:28:16.474550715 +0000 UTC m=+0.038310016 container create ad532081855a6b4702d21701a388519f9cf0670f69aa33ce74cb8319495ad631 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_mirzakhani, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 15:28:16 compute-0 ceph-mon[74360]: pgmap v3182: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Jan 20 15:28:16 compute-0 systemd[1]: Started libpod-conmon-ad532081855a6b4702d21701a388519f9cf0670f69aa33ce74cb8319495ad631.scope.
Jan 20 15:28:16 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:28:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9bec7195ff0c232a3766312fd8877f6420c5c0d922da291652bc2ec179bffb5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 15:28:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9bec7195ff0c232a3766312fd8877f6420c5c0d922da291652bc2ec179bffb5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 15:28:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9bec7195ff0c232a3766312fd8877f6420c5c0d922da291652bc2ec179bffb5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 15:28:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9bec7195ff0c232a3766312fd8877f6420c5c0d922da291652bc2ec179bffb5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 15:28:16 compute-0 podman[378893]: 2026-01-20 15:28:16.535115636 +0000 UTC m=+0.098874957 container init ad532081855a6b4702d21701a388519f9cf0670f69aa33ce74cb8319495ad631 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_mirzakhani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 20 15:28:16 compute-0 podman[378893]: 2026-01-20 15:28:16.542077565 +0000 UTC m=+0.105836866 container start ad532081855a6b4702d21701a388519f9cf0670f69aa33ce74cb8319495ad631 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_mirzakhani, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 15:28:16 compute-0 podman[378893]: 2026-01-20 15:28:16.545528709 +0000 UTC m=+0.109288010 container attach ad532081855a6b4702d21701a388519f9cf0670f69aa33ce74cb8319495ad631 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_mirzakhani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 15:28:16 compute-0 podman[378893]: 2026-01-20 15:28:16.458657421 +0000 UTC m=+0.022416742 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:28:16 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3183: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 1.8 MiB/s wr, 73 op/s
Jan 20 15:28:17 compute-0 nova_compute[250018]: 2026-01-20 15:28:17.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:28:17 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:28:17 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:28:17 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:28:17.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:28:17 compute-0 recursing_mirzakhani[378909]: {
Jan 20 15:28:17 compute-0 recursing_mirzakhani[378909]:     "afc3740a-ccee-46f8-b343-22648cc89376": {
Jan 20 15:28:17 compute-0 recursing_mirzakhani[378909]:         "ceph_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 15:28:17 compute-0 recursing_mirzakhani[378909]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 20 15:28:17 compute-0 recursing_mirzakhani[378909]:         "osd_id": 0,
Jan 20 15:28:17 compute-0 recursing_mirzakhani[378909]:         "osd_uuid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 15:28:17 compute-0 recursing_mirzakhani[378909]:         "type": "bluestore"
Jan 20 15:28:17 compute-0 recursing_mirzakhani[378909]:     }
Jan 20 15:28:17 compute-0 recursing_mirzakhani[378909]: }
Jan 20 15:28:17 compute-0 systemd[1]: libpod-ad532081855a6b4702d21701a388519f9cf0670f69aa33ce74cb8319495ad631.scope: Deactivated successfully.
Jan 20 15:28:17 compute-0 podman[378893]: 2026-01-20 15:28:17.328129512 +0000 UTC m=+0.891888803 container died ad532081855a6b4702d21701a388519f9cf0670f69aa33ce74cb8319495ad631 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_mirzakhani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 15:28:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-d9bec7195ff0c232a3766312fd8877f6420c5c0d922da291652bc2ec179bffb5-merged.mount: Deactivated successfully.
Jan 20 15:28:17 compute-0 podman[378893]: 2026-01-20 15:28:17.38126502 +0000 UTC m=+0.945024331 container remove ad532081855a6b4702d21701a388519f9cf0670f69aa33ce74cb8319495ad631 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_mirzakhani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:28:17 compute-0 systemd[1]: libpod-conmon-ad532081855a6b4702d21701a388519f9cf0670f69aa33ce74cb8319495ad631.scope: Deactivated successfully.
Jan 20 15:28:17 compute-0 sudo[378790]: pam_unix(sudo:session): session closed for user root
Jan 20 15:28:17 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 15:28:17 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:28:17 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 15:28:17 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:28:17 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev fa9380b6-0d9a-4d8d-86ac-e87a75bd616d does not exist
Jan 20 15:28:17 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev ac066829-a6e0-4c65-9874-37ad0f4eeb03 does not exist
Jan 20 15:28:17 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 3291df9e-22b8-4009-951e-9add481abf85 does not exist
Jan 20 15:28:17 compute-0 sudo[378943]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:28:17 compute-0 sudo[378943]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:28:17 compute-0 sudo[378943]: pam_unix(sudo:session): session closed for user root
Jan 20 15:28:17 compute-0 sudo[378968]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 15:28:17 compute-0 sudo[378968]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:28:17 compute-0 sudo[378968]: pam_unix(sudo:session): session closed for user root
Jan 20 15:28:18 compute-0 nova_compute[250018]: 2026-01-20 15:28:18.044 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:28:18 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:28:18 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:28:18 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:28:18 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:28:18.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:28:18 compute-0 ceph-mon[74360]: pgmap v3183: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 1.8 MiB/s wr, 73 op/s
Jan 20 15:28:18 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:28:18 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:28:18 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3184: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Jan 20 15:28:19 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:28:19 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:28:19 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:28:19.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:28:19 compute-0 nova_compute[250018]: 2026-01-20 15:28:19.393 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:28:20 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:28:20 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:28:20 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:28:20.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:28:20 compute-0 ceph-mon[74360]: pgmap v3184: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Jan 20 15:28:20 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3185: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1006 KiB/s wr, 90 op/s
Jan 20 15:28:21 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:28:21 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:28:21 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:28:21.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:28:21 compute-0 nova_compute[250018]: 2026-01-20 15:28:21.159 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:28:21 compute-0 ceph-mon[74360]: pgmap v3185: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1006 KiB/s wr, 90 op/s
Jan 20 15:28:22 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:28:22 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:28:22 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:28:22.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:28:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:28:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:28:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:28:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:28:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:28:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:28:22 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3186: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 76 op/s
Jan 20 15:28:23 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:28:23 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:28:23 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:28:23.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:28:23 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:28:24 compute-0 ceph-mon[74360]: pgmap v3186: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 76 op/s
Jan 20 15:28:24 compute-0 nova_compute[250018]: 2026-01-20 15:28:24.395 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:28:24 compute-0 podman[378998]: 2026-01-20 15:28:24.467259297 +0000 UTC m=+0.052279865 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent)
Jan 20 15:28:24 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:28:24 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:28:24 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:28:24.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:28:24 compute-0 podman[378997]: 2026-01-20 15:28:24.51797453 +0000 UTC m=+0.105204529 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 20 15:28:24 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3187: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 76 op/s
Jan 20 15:28:25 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:28:25 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:28:25 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:28:25.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:28:26 compute-0 ceph-mon[74360]: pgmap v3187: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 76 op/s
Jan 20 15:28:26 compute-0 nova_compute[250018]: 2026-01-20 15:28:26.049 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:28:26 compute-0 nova_compute[250018]: 2026-01-20 15:28:26.163 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:28:26 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:28:26 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:28:26 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:28:26.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:28:26 compute-0 sudo[379043]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:28:26 compute-0 sudo[379043]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:28:26 compute-0 sudo[379043]: pam_unix(sudo:session): session closed for user root
Jan 20 15:28:26 compute-0 sudo[379068]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:28:26 compute-0 sudo[379068]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:28:26 compute-0 sudo[379068]: pam_unix(sudo:session): session closed for user root
Jan 20 15:28:26 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3188: 321 pgs: 321 active+clean; 174 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 550 KiB/s wr, 87 op/s
Jan 20 15:28:27 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:28:27 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:28:27 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:28:27.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:28:28 compute-0 ceph-mon[74360]: pgmap v3188: 321 pgs: 321 active+clean; 174 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 550 KiB/s wr, 87 op/s
Jan 20 15:28:28 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:28:28 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:28:28 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:28:28 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:28:28.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:28:28 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3189: 321 pgs: 321 active+clean; 176 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 1.1 MiB/s wr, 60 op/s
Jan 20 15:28:29 compute-0 nova_compute[250018]: 2026-01-20 15:28:29.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:28:29 compute-0 nova_compute[250018]: 2026-01-20 15:28:29.051 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 15:28:29 compute-0 nova_compute[250018]: 2026-01-20 15:28:29.051 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 15:28:29 compute-0 nova_compute[250018]: 2026-01-20 15:28:29.078 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 20 15:28:29 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:28:29 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:28:29 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:28:29.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:28:29 compute-0 nova_compute[250018]: 2026-01-20 15:28:29.396 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:28:30 compute-0 ceph-mon[74360]: pgmap v3189: 321 pgs: 321 active+clean; 176 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 1.1 MiB/s wr, 60 op/s
Jan 20 15:28:30 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:28:30 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:28:30 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:28:30.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:28:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:28:30.798 160071 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:28:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:28:30.798 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:28:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:28:30.799 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:28:30 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3190: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Jan 20 15:28:31 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:28:31 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:28:31 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:28:31.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:28:31 compute-0 nova_compute[250018]: 2026-01-20 15:28:31.165 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:28:32 compute-0 ceph-mon[74360]: pgmap v3190: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Jan 20 15:28:32 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:28:32 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:28:32 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:28:32.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:28:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:28:32.527 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=74, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=73) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:28:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:28:32.528 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 15:28:32 compute-0 nova_compute[250018]: 2026-01-20 15:28:32.527 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:28:32 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3191: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Jan 20 15:28:33 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:28:33 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:28:33 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:28:33.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:28:33 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:28:33 compute-0 sshd-session[379095]: Invalid user user from 134.122.57.138 port 52590
Jan 20 15:28:34 compute-0 ceph-mon[74360]: pgmap v3191: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Jan 20 15:28:34 compute-0 sshd-session[379095]: Connection closed by invalid user user 134.122.57.138 port 52590 [preauth]
Jan 20 15:28:34 compute-0 nova_compute[250018]: 2026-01-20 15:28:34.397 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:28:34 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:28:34 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:28:34 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:28:34.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:28:34 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3192: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Jan 20 15:28:35 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:28:35 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:28:35 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:28:35.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:28:36 compute-0 ceph-mon[74360]: pgmap v3192: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Jan 20 15:28:36 compute-0 nova_compute[250018]: 2026-01-20 15:28:36.229 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:28:36 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:28:36 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:28:36 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:28:36.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:28:36 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3193: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 20 15:28:37 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:28:37 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:28:37 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:28:37.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:28:37 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:28:37.530 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=367c1a2c-b16a-4828-ab5a-626bb50023b4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '74'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:28:38 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:28:38 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:28:38 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:28:38 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:28:38.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:28:38 compute-0 ceph-mon[74360]: pgmap v3193: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 20 15:28:38 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3194: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 266 KiB/s rd, 1.6 MiB/s wr, 48 op/s
Jan 20 15:28:39 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:28:39 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:28:39 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:28:39.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:28:39 compute-0 nova_compute[250018]: 2026-01-20 15:28:39.398 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:28:40 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:28:40 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:28:40 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:28:40.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:28:40 compute-0 ceph-mon[74360]: pgmap v3194: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 266 KiB/s rd, 1.6 MiB/s wr, 48 op/s
Jan 20 15:28:40 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3195: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 131 KiB/s rd, 1.1 MiB/s wr, 31 op/s
Jan 20 15:28:41 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:28:41 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:28:41 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:28:41.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:28:41 compute-0 nova_compute[250018]: 2026-01-20 15:28:41.231 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:28:42 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:28:42 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:28:42 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:28:42.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:28:42 compute-0 ceph-mon[74360]: pgmap v3195: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 131 KiB/s rd, 1.1 MiB/s wr, 31 op/s
Jan 20 15:28:42 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3196: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 682 B/s rd, 14 KiB/s wr, 0 op/s
Jan 20 15:28:43 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:28:43 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:28:43 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:28:43.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:28:43 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:28:44 compute-0 nova_compute[250018]: 2026-01-20 15:28:44.399 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:28:44 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:28:44 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:28:44 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:28:44.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:28:44 compute-0 ceph-mon[74360]: pgmap v3196: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 682 B/s rd, 14 KiB/s wr, 0 op/s
Jan 20 15:28:44 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3197: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 8.0 KiB/s rd, 14 KiB/s wr, 1 op/s
Jan 20 15:28:45 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:28:45 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:28:45 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:28:45.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:28:46 compute-0 nova_compute[250018]: 2026-01-20 15:28:46.232 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:28:46 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:28:46 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:28:46 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:28:46.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:28:46 compute-0 ceph-mon[74360]: pgmap v3197: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 8.0 KiB/s rd, 14 KiB/s wr, 1 op/s
Jan 20 15:28:46 compute-0 sudo[379105]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:28:46 compute-0 sudo[379105]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:28:46 compute-0 sudo[379105]: pam_unix(sudo:session): session closed for user root
Jan 20 15:28:46 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3198: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 7.3 KiB/s rd, 2.7 KiB/s wr, 1 op/s
Jan 20 15:28:46 compute-0 sudo[379130]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:28:46 compute-0 sudo[379130]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:28:47 compute-0 sudo[379130]: pam_unix(sudo:session): session closed for user root
Jan 20 15:28:47 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:28:47 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:28:47 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:28:47.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:28:48 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:28:48 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:28:48 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:28:48 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:28:48.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:28:48 compute-0 ceph-mon[74360]: pgmap v3198: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 7.3 KiB/s rd, 2.7 KiB/s wr, 1 op/s
Jan 20 15:28:48 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3199: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 7.3 KiB/s rd, 1.3 KiB/s wr, 0 op/s
Jan 20 15:28:49 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:28:49 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:28:49 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:28:49.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:28:49 compute-0 nova_compute[250018]: 2026-01-20 15:28:49.401 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:28:50 compute-0 radosgw[93153]: INFO: RGWReshardLock::lock found lock on reshard.0000000000 to be held by another RGW process; skipping for now
Jan 20 15:28:50 compute-0 radosgw[93153]: INFO: RGWReshardLock::lock found lock on reshard.0000000002 to be held by another RGW process; skipping for now
Jan 20 15:28:50 compute-0 radosgw[93153]: INFO: RGWReshardLock::lock found lock on reshard.0000000003 to be held by another RGW process; skipping for now
Jan 20 15:28:50 compute-0 radosgw[93153]: INFO: RGWReshardLock::lock found lock on reshard.0000000005 to be held by another RGW process; skipping for now
Jan 20 15:28:50 compute-0 radosgw[93153]: INFO: RGWReshardLock::lock found lock on reshard.0000000006 to be held by another RGW process; skipping for now
Jan 20 15:28:50 compute-0 radosgw[93153]: INFO: RGWReshardLock::lock found lock on reshard.0000000007 to be held by another RGW process; skipping for now
Jan 20 15:28:50 compute-0 radosgw[93153]: INFO: RGWReshardLock::lock found lock on reshard.0000000009 to be held by another RGW process; skipping for now
Jan 20 15:28:50 compute-0 radosgw[93153]: INFO: RGWReshardLock::lock found lock on reshard.0000000010 to be held by another RGW process; skipping for now
Jan 20 15:28:50 compute-0 radosgw[93153]: INFO: RGWReshardLock::lock found lock on reshard.0000000012 to be held by another RGW process; skipping for now
Jan 20 15:28:50 compute-0 radosgw[93153]: INFO: RGWReshardLock::lock found lock on reshard.0000000013 to be held by another RGW process; skipping for now
Jan 20 15:28:50 compute-0 radosgw[93153]: INFO: RGWReshardLock::lock found lock on reshard.0000000014 to be held by another RGW process; skipping for now
Jan 20 15:28:50 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:28:50 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:28:50 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:28:50.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:28:50 compute-0 ceph-mon[74360]: pgmap v3199: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 7.3 KiB/s rd, 1.3 KiB/s wr, 0 op/s
Jan 20 15:28:50 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3200: 321 pgs: 321 active+clean; 142 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 20 KiB/s rd, 6.2 KiB/s wr, 21 op/s
Jan 20 15:28:51 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:28:51 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:28:51 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:28:51.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:28:51 compute-0 nova_compute[250018]: 2026-01-20 15:28:51.234 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:28:51 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/194072997' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:28:52 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:28:52 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:28:52 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:28:52.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:28:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:28:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:28:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:28:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:28:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:28:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:28:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Optimize plan auto_2026-01-20_15:28:52
Jan 20 15:28:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 15:28:52 compute-0 ceph-mgr[74653]: [balancer INFO root] do_upmap
Jan 20 15:28:52 compute-0 ceph-mgr[74653]: [balancer INFO root] pools ['default.rgw.control', 'backups', 'cephfs.cephfs.meta', '.rgw.root', 'images', '.mgr', 'vms', 'default.rgw.meta', 'default.rgw.log', 'cephfs.cephfs.data', 'volumes']
Jan 20 15:28:52 compute-0 ceph-mgr[74653]: [balancer INFO root] prepared 0/10 changes
Jan 20 15:28:52 compute-0 ceph-mon[74360]: pgmap v3200: 321 pgs: 321 active+clean; 142 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 20 KiB/s rd, 6.2 KiB/s wr, 21 op/s
Jan 20 15:28:52 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3201: 321 pgs: 321 active+clean; 142 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 20 KiB/s rd, 5.2 KiB/s wr, 21 op/s
Jan 20 15:28:53 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:28:53 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:28:53 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:28:53.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:28:53 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:28:54 compute-0 nova_compute[250018]: 2026-01-20 15:28:54.403 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:28:54 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:28:54 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:28:54 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:28:54.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:28:54 compute-0 ceph-mon[74360]: pgmap v3201: 321 pgs: 321 active+clean; 142 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 20 KiB/s rd, 5.2 KiB/s wr, 21 op/s
Jan 20 15:28:54 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3202: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 85 KiB/s rd, 5.2 KiB/s wr, 126 op/s
Jan 20 15:28:55 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:28:55 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:28:55 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:28:55.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:28:55 compute-0 podman[379160]: 2026-01-20 15:28:55.515196405 +0000 UTC m=+0.097579632 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 20 15:28:55 compute-0 podman[379159]: 2026-01-20 15:28:55.529081552 +0000 UTC m=+0.121716199 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller)
Jan 20 15:28:55 compute-0 ceph-mon[74360]: pgmap v3202: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 85 KiB/s rd, 5.2 KiB/s wr, 126 op/s
Jan 20 15:28:56 compute-0 nova_compute[250018]: 2026-01-20 15:28:56.235 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:28:56 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:28:56 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:28:56 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:28:56.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:28:56 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3203: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 104 KiB/s rd, 5.2 KiB/s wr, 169 op/s
Jan 20 15:28:57 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:28:57 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:28:57 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:28:57.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:28:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 15:28:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 15:28:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 15:28:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 15:28:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 15:28:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 15:28:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 15:28:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 15:28:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 15:28:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 15:28:58 compute-0 ceph-mon[74360]: pgmap v3203: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 104 KiB/s rd, 5.2 KiB/s wr, 169 op/s
Jan 20 15:28:58 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:28:58 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:28:58 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.002000053s ======
Jan 20 15:28:58 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:28:58.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Jan 20 15:28:58 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3204: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 104 KiB/s rd, 4.8 KiB/s wr, 169 op/s
Jan 20 15:28:59 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:28:59 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:28:59 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:28:59.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:28:59 compute-0 nova_compute[250018]: 2026-01-20 15:28:59.404 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:29:00 compute-0 nova_compute[250018]: 2026-01-20 15:29:00.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:29:00 compute-0 ceph-mon[74360]: pgmap v3204: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 104 KiB/s rd, 4.8 KiB/s wr, 169 op/s
Jan 20 15:29:00 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:29:00 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:29:00 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:29:00.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:29:00 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3205: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 104 KiB/s rd, 4.8 KiB/s wr, 169 op/s
Jan 20 15:29:01 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:29:01 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:29:01 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:29:01.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:29:01 compute-0 nova_compute[250018]: 2026-01-20 15:29:01.237 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:29:02 compute-0 ceph-mon[74360]: pgmap v3205: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 104 KiB/s rd, 4.8 KiB/s wr, 169 op/s
Jan 20 15:29:02 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:29:02 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:29:02 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:29:02.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:29:02 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3206: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 90 KiB/s rd, 0 B/s wr, 148 op/s
Jan 20 15:29:03 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:29:03 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:29:03 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:29:03.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:29:03 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:29:04 compute-0 ceph-mon[74360]: pgmap v3206: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 90 KiB/s rd, 0 B/s wr, 148 op/s
Jan 20 15:29:04 compute-0 nova_compute[250018]: 2026-01-20 15:29:04.405 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:29:04 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:29:04 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:29:04 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:29:04.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:29:04 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3207: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 90 KiB/s rd, 0 B/s wr, 148 op/s
Jan 20 15:29:05 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:29:05 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:29:05 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:29:05.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:29:06 compute-0 ceph-mon[74360]: pgmap v3207: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 90 KiB/s rd, 0 B/s wr, 148 op/s
Jan 20 15:29:06 compute-0 nova_compute[250018]: 2026-01-20 15:29:06.238 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:29:06 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:29:06 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:29:06 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:29:06.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:29:06 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3208: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 26 KiB/s rd, 0 B/s wr, 43 op/s
Jan 20 15:29:07 compute-0 sudo[379210]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:29:07 compute-0 sudo[379210]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:29:07 compute-0 sudo[379210]: pam_unix(sudo:session): session closed for user root
Jan 20 15:29:07 compute-0 sudo[379235]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:29:07 compute-0 sudo[379235]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:29:07 compute-0 sudo[379235]: pam_unix(sudo:session): session closed for user root
Jan 20 15:29:07 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:29:07 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:29:07 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:29:07.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:29:07 compute-0 sshd-session[379260]: banner exchange: Connection from 3.134.148.59 port 57850: invalid format
Jan 20 15:29:08 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:29:08 compute-0 ceph-mon[74360]: pgmap v3208: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 26 KiB/s rd, 0 B/s wr, 43 op/s
Jan 20 15:29:08 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:29:08 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:29:08 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:29:08.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:29:08 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3209: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:29:09 compute-0 nova_compute[250018]: 2026-01-20 15:29:09.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:29:09 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:29:09 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:29:09 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:29:09.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:29:09 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/4223010889' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:29:09 compute-0 nova_compute[250018]: 2026-01-20 15:29:09.407 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:29:10 compute-0 ceph-mon[74360]: pgmap v3209: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:29:10 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/4249767932' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:29:10 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:29:10 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:29:10 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:29:10.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:29:10 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3210: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:29:11 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:29:11 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:29:11 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:29:11.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:29:11 compute-0 nova_compute[250018]: 2026-01-20 15:29:11.240 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:29:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 15:29:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:29:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 20 15:29:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:29:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 20 15:29:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:29:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021614147124511445 of space, bias 1.0, pg target 0.6484244137353433 quantized to 32 (current 32)
Jan 20 15:29:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:29:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 20 15:29:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:29:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 20 15:29:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:29:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 20 15:29:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:29:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 15:29:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:29:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 20 15:29:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:29:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 20 15:29:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:29:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 15:29:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:29:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 20 15:29:12 compute-0 ceph-mon[74360]: pgmap v3210: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:29:12 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/1352108626' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:29:12 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:29:12 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:29:12 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:29:12.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:29:12 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3211: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:29:13 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:29:13 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:29:13 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:29:13.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:29:13 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:29:13 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/439750638' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:29:14 compute-0 nova_compute[250018]: 2026-01-20 15:29:14.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:29:14 compute-0 nova_compute[250018]: 2026-01-20 15:29:14.082 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:29:14 compute-0 nova_compute[250018]: 2026-01-20 15:29:14.082 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:29:14 compute-0 nova_compute[250018]: 2026-01-20 15:29:14.082 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:29:14 compute-0 nova_compute[250018]: 2026-01-20 15:29:14.082 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 15:29:14 compute-0 nova_compute[250018]: 2026-01-20 15:29:14.083 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:29:14 compute-0 ceph-mon[74360]: pgmap v3211: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:29:14 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/640351222' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:29:14 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/640351222' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:29:14 compute-0 nova_compute[250018]: 2026-01-20 15:29:14.407 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:29:14 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:29:14 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4191349382' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:29:14 compute-0 nova_compute[250018]: 2026-01-20 15:29:14.507 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:29:14 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:29:14 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:29:14 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:29:14.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:29:14 compute-0 nova_compute[250018]: 2026-01-20 15:29:14.671 250022 WARNING nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:29:14 compute-0 nova_compute[250018]: 2026-01-20 15:29:14.672 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4256MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 15:29:14 compute-0 nova_compute[250018]: 2026-01-20 15:29:14.673 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:29:14 compute-0 nova_compute[250018]: 2026-01-20 15:29:14.673 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:29:14 compute-0 nova_compute[250018]: 2026-01-20 15:29:14.738 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 15:29:14 compute-0 nova_compute[250018]: 2026-01-20 15:29:14.739 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 15:29:14 compute-0 nova_compute[250018]: 2026-01-20 15:29:14.757 250022 DEBUG nova.scheduler.client.report [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Refreshing inventories for resource provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 20 15:29:14 compute-0 nova_compute[250018]: 2026-01-20 15:29:14.773 250022 DEBUG nova.scheduler.client.report [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Updating ProviderTree inventory for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 20 15:29:14 compute-0 nova_compute[250018]: 2026-01-20 15:29:14.773 250022 DEBUG nova.compute.provider_tree [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Updating inventory in ProviderTree for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 20 15:29:14 compute-0 nova_compute[250018]: 2026-01-20 15:29:14.785 250022 DEBUG nova.scheduler.client.report [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Refreshing aggregate associations for resource provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 20 15:29:14 compute-0 nova_compute[250018]: 2026-01-20 15:29:14.802 250022 DEBUG nova.scheduler.client.report [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Refreshing trait associations for resource provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed, traits: COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_TRUSTED_CERTS,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NODE,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_RESCUE_BFV,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SSE2,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VOLUME_EXTEND,COMPUTE_ACCELERATORS,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_SATA,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE41,HW_CPU_X86_MMX,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_IDE,COMPUTE_DEVICE_TAGGING _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 20 15:29:14 compute-0 nova_compute[250018]: 2026-01-20 15:29:14.815 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:29:14 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3212: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:29:15 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:29:15 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:29:15 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:29:15.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:29:15 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:29:15 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/953815649' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:29:15 compute-0 nova_compute[250018]: 2026-01-20 15:29:15.266 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:29:15 compute-0 nova_compute[250018]: 2026-01-20 15:29:15.273 250022 DEBUG nova.compute.provider_tree [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:29:15 compute-0 nova_compute[250018]: 2026-01-20 15:29:15.298 250022 DEBUG nova.scheduler.client.report [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:29:15 compute-0 nova_compute[250018]: 2026-01-20 15:29:15.299 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 15:29:15 compute-0 nova_compute[250018]: 2026-01-20 15:29:15.300 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.627s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:29:15 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/4191349382' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:29:15 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/953815649' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:29:16 compute-0 nova_compute[250018]: 2026-01-20 15:29:16.271 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:29:16 compute-0 ceph-mon[74360]: pgmap v3212: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:29:16 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:29:16 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:29:16 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:29:16.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:29:16 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3213: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:29:17 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:29:17 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:29:17 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:29:17.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:29:17 compute-0 nova_compute[250018]: 2026-01-20 15:29:17.299 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:29:17 compute-0 nova_compute[250018]: 2026-01-20 15:29:17.300 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:29:17 compute-0 nova_compute[250018]: 2026-01-20 15:29:17.300 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 15:29:17 compute-0 sshd-session[379307]: Invalid user user from 134.122.57.138 port 53542
Jan 20 15:29:17 compute-0 sshd-session[379307]: Connection closed by invalid user user 134.122.57.138 port 53542 [preauth]
Jan 20 15:29:18 compute-0 nova_compute[250018]: 2026-01-20 15:29:18.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:29:18 compute-0 sudo[379312]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:29:18 compute-0 sudo[379312]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:29:18 compute-0 sudo[379312]: pam_unix(sudo:session): session closed for user root
Jan 20 15:29:18 compute-0 sudo[379337]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:29:18 compute-0 sudo[379337]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:29:18 compute-0 sudo[379337]: pam_unix(sudo:session): session closed for user root
Jan 20 15:29:18 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:29:18 compute-0 sudo[379362]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:29:18 compute-0 sudo[379362]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:29:18 compute-0 sudo[379362]: pam_unix(sudo:session): session closed for user root
Jan 20 15:29:18 compute-0 sudo[379388]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Jan 20 15:29:18 compute-0 sudo[379388]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:29:18 compute-0 ceph-mon[74360]: pgmap v3213: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:29:18 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:29:18 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:29:18 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:29:18.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:29:18 compute-0 sudo[379388]: pam_unix(sudo:session): session closed for user root
Jan 20 15:29:18 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 15:29:18 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:29:18 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 15:29:18 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:29:18 compute-0 sudo[379433]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:29:18 compute-0 sudo[379433]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:29:18 compute-0 sudo[379433]: pam_unix(sudo:session): session closed for user root
Jan 20 15:29:18 compute-0 sudo[379458]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:29:18 compute-0 sudo[379458]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:29:18 compute-0 sudo[379458]: pam_unix(sudo:session): session closed for user root
Jan 20 15:29:18 compute-0 sudo[379483]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:29:18 compute-0 sudo[379483]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:29:18 compute-0 sudo[379483]: pam_unix(sudo:session): session closed for user root
Jan 20 15:29:18 compute-0 sudo[379508]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 15:29:18 compute-0 sudo[379508]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:29:18 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3214: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:29:19 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:29:19 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:29:19 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:29:19.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:29:19 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 20 15:29:19 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:29:19 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 20 15:29:19 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:29:19 compute-0 sudo[379508]: pam_unix(sudo:session): session closed for user root
Jan 20 15:29:19 compute-0 nova_compute[250018]: 2026-01-20 15:29:19.409 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:29:19 compute-0 sudo[379564]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:29:19 compute-0 sudo[379564]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:29:19 compute-0 sudo[379564]: pam_unix(sudo:session): session closed for user root
Jan 20 15:29:19 compute-0 sudo[379589]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:29:19 compute-0 sudo[379589]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:29:19 compute-0 sudo[379589]: pam_unix(sudo:session): session closed for user root
Jan 20 15:29:19 compute-0 sudo[379614]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:29:19 compute-0 sudo[379614]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:29:19 compute-0 sudo[379614]: pam_unix(sudo:session): session closed for user root
Jan 20 15:29:19 compute-0 sudo[379639]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- inventory --format=json-pretty --filter-for-batch
Jan 20 15:29:19 compute-0 sudo[379639]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:29:19 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:29:19 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:29:19 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:29:19 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:29:19 compute-0 podman[379704]: 2026-01-20 15:29:19.892285538 +0000 UTC m=+0.045808179 container create 6493f368a7f593f01b44364d1213f383e05a1fc427e584f93a9060dbd1a8b636 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_yalow, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 15:29:19 compute-0 systemd[1]: Started libpod-conmon-6493f368a7f593f01b44364d1213f383e05a1fc427e584f93a9060dbd1a8b636.scope.
Jan 20 15:29:19 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:29:19 compute-0 podman[379704]: 2026-01-20 15:29:19.875569262 +0000 UTC m=+0.029091923 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:29:19 compute-0 podman[379704]: 2026-01-20 15:29:19.984707607 +0000 UTC m=+0.138230268 container init 6493f368a7f593f01b44364d1213f383e05a1fc427e584f93a9060dbd1a8b636 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_yalow, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 15:29:19 compute-0 podman[379704]: 2026-01-20 15:29:19.992260464 +0000 UTC m=+0.145783105 container start 6493f368a7f593f01b44364d1213f383e05a1fc427e584f93a9060dbd1a8b636 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_yalow, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 15:29:19 compute-0 podman[379704]: 2026-01-20 15:29:19.995787609 +0000 UTC m=+0.149310270 container attach 6493f368a7f593f01b44364d1213f383e05a1fc427e584f93a9060dbd1a8b636 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_yalow, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 20 15:29:19 compute-0 sweet_yalow[379720]: 167 167
Jan 20 15:29:19 compute-0 systemd[1]: libpod-6493f368a7f593f01b44364d1213f383e05a1fc427e584f93a9060dbd1a8b636.scope: Deactivated successfully.
Jan 20 15:29:19 compute-0 podman[379704]: 2026-01-20 15:29:19.99910434 +0000 UTC m=+0.152626991 container died 6493f368a7f593f01b44364d1213f383e05a1fc427e584f93a9060dbd1a8b636 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_yalow, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 20 15:29:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-f1edb15b88bfc87801e61a684cd13e2398ccc2372e28bd3f90071993f9112a61-merged.mount: Deactivated successfully.
Jan 20 15:29:20 compute-0 podman[379704]: 2026-01-20 15:29:20.042512634 +0000 UTC m=+0.196035265 container remove 6493f368a7f593f01b44364d1213f383e05a1fc427e584f93a9060dbd1a8b636 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_yalow, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 20 15:29:20 compute-0 nova_compute[250018]: 2026-01-20 15:29:20.045 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:29:20 compute-0 systemd[1]: libpod-conmon-6493f368a7f593f01b44364d1213f383e05a1fc427e584f93a9060dbd1a8b636.scope: Deactivated successfully.
Jan 20 15:29:20 compute-0 podman[379744]: 2026-01-20 15:29:20.219791605 +0000 UTC m=+0.044023490 container create 7094635e3170900764e7f39c54424a39091c4c024dbbe4bd505e4d04de03450a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_murdock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 20 15:29:20 compute-0 podman[379744]: 2026-01-20 15:29:20.199224915 +0000 UTC m=+0.023456820 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:29:20 compute-0 systemd[1]: Started libpod-conmon-7094635e3170900764e7f39c54424a39091c4c024dbbe4bd505e4d04de03450a.scope.
Jan 20 15:29:20 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:29:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bcf3b99ed4378bc6100efbb90516bfab67dd78ebab08112a4534881f11daf49/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 15:29:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bcf3b99ed4378bc6100efbb90516bfab67dd78ebab08112a4534881f11daf49/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 15:29:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bcf3b99ed4378bc6100efbb90516bfab67dd78ebab08112a4534881f11daf49/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 15:29:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bcf3b99ed4378bc6100efbb90516bfab67dd78ebab08112a4534881f11daf49/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 15:29:20 compute-0 podman[379744]: 2026-01-20 15:29:20.393548392 +0000 UTC m=+0.217780297 container init 7094635e3170900764e7f39c54424a39091c4c024dbbe4bd505e4d04de03450a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_murdock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 15:29:20 compute-0 podman[379744]: 2026-01-20 15:29:20.406305679 +0000 UTC m=+0.230537564 container start 7094635e3170900764e7f39c54424a39091c4c024dbbe4bd505e4d04de03450a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_murdock, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 20 15:29:20 compute-0 podman[379744]: 2026-01-20 15:29:20.452360945 +0000 UTC m=+0.276592850 container attach 7094635e3170900764e7f39c54424a39091c4c024dbbe4bd505e4d04de03450a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_murdock, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True)
Jan 20 15:29:20 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:29:20 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:29:20 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:29:20.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:29:20 compute-0 ceph-mon[74360]: pgmap v3214: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:29:20 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/4253199867' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:29:20 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3215: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:29:21 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 20 15:29:21 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:29:21 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 20 15:29:21 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:29:21 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:29:21 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:29:21 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:29:21.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:29:21 compute-0 nova_compute[250018]: 2026-01-20 15:29:21.273 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:29:21 compute-0 goofy_murdock[379761]: [
Jan 20 15:29:21 compute-0 goofy_murdock[379761]:     {
Jan 20 15:29:21 compute-0 goofy_murdock[379761]:         "available": false,
Jan 20 15:29:21 compute-0 goofy_murdock[379761]:         "ceph_device": false,
Jan 20 15:29:21 compute-0 goofy_murdock[379761]:         "device_id": "QEMU_DVD-ROM_QM00001",
Jan 20 15:29:21 compute-0 goofy_murdock[379761]:         "lsm_data": {},
Jan 20 15:29:21 compute-0 goofy_murdock[379761]:         "lvs": [],
Jan 20 15:29:21 compute-0 goofy_murdock[379761]:         "path": "/dev/sr0",
Jan 20 15:29:21 compute-0 goofy_murdock[379761]:         "rejected_reasons": [
Jan 20 15:29:21 compute-0 goofy_murdock[379761]:             "Has a FileSystem",
Jan 20 15:29:21 compute-0 goofy_murdock[379761]:             "Insufficient space (<5GB)"
Jan 20 15:29:21 compute-0 goofy_murdock[379761]:         ],
Jan 20 15:29:21 compute-0 goofy_murdock[379761]:         "sys_api": {
Jan 20 15:29:21 compute-0 goofy_murdock[379761]:             "actuators": null,
Jan 20 15:29:21 compute-0 goofy_murdock[379761]:             "device_nodes": "sr0",
Jan 20 15:29:21 compute-0 goofy_murdock[379761]:             "devname": "sr0",
Jan 20 15:29:21 compute-0 goofy_murdock[379761]:             "human_readable_size": "482.00 KB",
Jan 20 15:29:21 compute-0 goofy_murdock[379761]:             "id_bus": "ata",
Jan 20 15:29:21 compute-0 goofy_murdock[379761]:             "model": "QEMU DVD-ROM",
Jan 20 15:29:21 compute-0 goofy_murdock[379761]:             "nr_requests": "2",
Jan 20 15:29:21 compute-0 goofy_murdock[379761]:             "parent": "/dev/sr0",
Jan 20 15:29:21 compute-0 goofy_murdock[379761]:             "partitions": {},
Jan 20 15:29:21 compute-0 goofy_murdock[379761]:             "path": "/dev/sr0",
Jan 20 15:29:21 compute-0 goofy_murdock[379761]:             "removable": "1",
Jan 20 15:29:21 compute-0 goofy_murdock[379761]:             "rev": "2.5+",
Jan 20 15:29:21 compute-0 goofy_murdock[379761]:             "ro": "0",
Jan 20 15:29:21 compute-0 goofy_murdock[379761]:             "rotational": "1",
Jan 20 15:29:21 compute-0 goofy_murdock[379761]:             "sas_address": "",
Jan 20 15:29:21 compute-0 goofy_murdock[379761]:             "sas_device_handle": "",
Jan 20 15:29:21 compute-0 goofy_murdock[379761]:             "scheduler_mode": "mq-deadline",
Jan 20 15:29:21 compute-0 goofy_murdock[379761]:             "sectors": 0,
Jan 20 15:29:21 compute-0 goofy_murdock[379761]:             "sectorsize": "2048",
Jan 20 15:29:21 compute-0 goofy_murdock[379761]:             "size": 493568.0,
Jan 20 15:29:21 compute-0 goofy_murdock[379761]:             "support_discard": "2048",
Jan 20 15:29:21 compute-0 goofy_murdock[379761]:             "type": "disk",
Jan 20 15:29:21 compute-0 goofy_murdock[379761]:             "vendor": "QEMU"
Jan 20 15:29:21 compute-0 goofy_murdock[379761]:         }
Jan 20 15:29:21 compute-0 goofy_murdock[379761]:     }
Jan 20 15:29:21 compute-0 goofy_murdock[379761]: ]
Jan 20 15:29:21 compute-0 systemd[1]: libpod-7094635e3170900764e7f39c54424a39091c4c024dbbe4bd505e4d04de03450a.scope: Deactivated successfully.
Jan 20 15:29:21 compute-0 systemd[1]: libpod-7094635e3170900764e7f39c54424a39091c4c024dbbe4bd505e4d04de03450a.scope: Consumed 1.173s CPU time.
Jan 20 15:29:21 compute-0 podman[379744]: 2026-01-20 15:29:21.567930035 +0000 UTC m=+1.392161920 container died 7094635e3170900764e7f39c54424a39091c4c024dbbe4bd505e4d04de03450a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_murdock, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 20 15:29:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-4bcf3b99ed4378bc6100efbb90516bfab67dd78ebab08112a4534881f11daf49-merged.mount: Deactivated successfully.
Jan 20 15:29:21 compute-0 podman[379744]: 2026-01-20 15:29:21.617253639 +0000 UTC m=+1.441485524 container remove 7094635e3170900764e7f39c54424a39091c4c024dbbe4bd505e4d04de03450a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_murdock, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 15:29:21 compute-0 systemd[1]: libpod-conmon-7094635e3170900764e7f39c54424a39091c4c024dbbe4bd505e4d04de03450a.scope: Deactivated successfully.
Jan 20 15:29:21 compute-0 sudo[379639]: pam_unix(sudo:session): session closed for user root
Jan 20 15:29:21 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 15:29:21 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:29:21 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 15:29:21 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:29:21 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 15:29:21 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:29:21 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 20 15:29:21 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 15:29:21 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 20 15:29:21 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:29:21 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 8974c5be-5257-404a-8f79-63afb91db8f1 does not exist
Jan 20 15:29:21 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 12eba4b9-1a3e-4707-8e7f-0c861e20b0ce does not exist
Jan 20 15:29:21 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 6a728375-f2e4-4428-bece-962c1666ae21 does not exist
Jan 20 15:29:21 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 20 15:29:21 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 15:29:21 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 20 15:29:21 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 15:29:21 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 15:29:21 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:29:21 compute-0 sudo[381055]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:29:21 compute-0 sudo[381055]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:29:21 compute-0 sudo[381055]: pam_unix(sudo:session): session closed for user root
Jan 20 15:29:21 compute-0 sudo[381080]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:29:21 compute-0 sudo[381080]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:29:21 compute-0 sudo[381080]: pam_unix(sudo:session): session closed for user root
Jan 20 15:29:22 compute-0 sudo[381105]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:29:22 compute-0 sudo[381105]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:29:22 compute-0 sudo[381105]: pam_unix(sudo:session): session closed for user root
Jan 20 15:29:22 compute-0 sudo[381130]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 15:29:22 compute-0 sudo[381130]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:29:22 compute-0 ceph-mon[74360]: pgmap v3215: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:29:22 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:29:22 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:29:22 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:29:22 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:29:22 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:29:22 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 15:29:22 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:29:22 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 15:29:22 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 15:29:22 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:29:22 compute-0 podman[381198]: 2026-01-20 15:29:22.443127521 +0000 UTC m=+0.035114708 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:29:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:29:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:29:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:29:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:29:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:29:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:29:22 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:29:22 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:29:22 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:29:22.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:29:22 compute-0 podman[381198]: 2026-01-20 15:29:22.618161083 +0000 UTC m=+0.210148250 container create d758f7a39956611f4054472668bc38691cb9a18d2c59ab8df427eb86fd759977 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_snyder, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 15:29:22 compute-0 systemd[1]: Started libpod-conmon-d758f7a39956611f4054472668bc38691cb9a18d2c59ab8df427eb86fd759977.scope.
Jan 20 15:29:22 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:29:22 compute-0 podman[381198]: 2026-01-20 15:29:22.683822993 +0000 UTC m=+0.275810170 container init d758f7a39956611f4054472668bc38691cb9a18d2c59ab8df427eb86fd759977 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_snyder, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 20 15:29:22 compute-0 podman[381198]: 2026-01-20 15:29:22.694236146 +0000 UTC m=+0.286223313 container start d758f7a39956611f4054472668bc38691cb9a18d2c59ab8df427eb86fd759977 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_snyder, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 15:29:22 compute-0 podman[381198]: 2026-01-20 15:29:22.697946488 +0000 UTC m=+0.289933655 container attach d758f7a39956611f4054472668bc38691cb9a18d2c59ab8df427eb86fd759977 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_snyder, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 20 15:29:22 compute-0 eager_snyder[381214]: 167 167
Jan 20 15:29:22 compute-0 systemd[1]: libpod-d758f7a39956611f4054472668bc38691cb9a18d2c59ab8df427eb86fd759977.scope: Deactivated successfully.
Jan 20 15:29:22 compute-0 podman[381198]: 2026-01-20 15:29:22.699789888 +0000 UTC m=+0.291777055 container died d758f7a39956611f4054472668bc38691cb9a18d2c59ab8df427eb86fd759977 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_snyder, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 15:29:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-8e85666d0e86efb8985529826bc62bf1e7c7ecfa08c21a36af29d95644bc6c4e-merged.mount: Deactivated successfully.
Jan 20 15:29:22 compute-0 podman[381198]: 2026-01-20 15:29:22.733815116 +0000 UTC m=+0.325802283 container remove d758f7a39956611f4054472668bc38691cb9a18d2c59ab8df427eb86fd759977 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_snyder, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 20 15:29:22 compute-0 systemd[1]: libpod-conmon-d758f7a39956611f4054472668bc38691cb9a18d2c59ab8df427eb86fd759977.scope: Deactivated successfully.
Jan 20 15:29:22 compute-0 podman[381236]: 2026-01-20 15:29:22.880557076 +0000 UTC m=+0.038341277 container create 729f2ad521fd952565c8b98ba4ff5e93cd87d7d0c5406d15ceff5445e3fabef5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_dewdney, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True)
Jan 20 15:29:22 compute-0 systemd[1]: Started libpod-conmon-729f2ad521fd952565c8b98ba4ff5e93cd87d7d0c5406d15ceff5445e3fabef5.scope.
Jan 20 15:29:22 compute-0 podman[381236]: 2026-01-20 15:29:22.863167551 +0000 UTC m=+0.020951772 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:29:22 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:29:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/335b97b3d42579d0861e6945780dc61d6a535f6a01537774ec5680b5317053e3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 15:29:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/335b97b3d42579d0861e6945780dc61d6a535f6a01537774ec5680b5317053e3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 15:29:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/335b97b3d42579d0861e6945780dc61d6a535f6a01537774ec5680b5317053e3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 15:29:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/335b97b3d42579d0861e6945780dc61d6a535f6a01537774ec5680b5317053e3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 15:29:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/335b97b3d42579d0861e6945780dc61d6a535f6a01537774ec5680b5317053e3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 15:29:22 compute-0 podman[381236]: 2026-01-20 15:29:22.982973818 +0000 UTC m=+0.140758039 container init 729f2ad521fd952565c8b98ba4ff5e93cd87d7d0c5406d15ceff5445e3fabef5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_dewdney, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 20 15:29:22 compute-0 podman[381236]: 2026-01-20 15:29:22.990224025 +0000 UTC m=+0.148008226 container start 729f2ad521fd952565c8b98ba4ff5e93cd87d7d0c5406d15ceff5445e3fabef5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_dewdney, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 15:29:22 compute-0 podman[381236]: 2026-01-20 15:29:22.994213094 +0000 UTC m=+0.151997295 container attach 729f2ad521fd952565c8b98ba4ff5e93cd87d7d0c5406d15ceff5445e3fabef5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_dewdney, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 20 15:29:22 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3216: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:29:23 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:29:23 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:29:23 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:29:23.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:29:23 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:29:23 compute-0 ceph-mon[74360]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #162. Immutable memtables: 0.
Jan 20 15:29:23 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:29:23.258081) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 15:29:23 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:856] [default] [JOB 99] Flushing memtable with next log file: 162
Jan 20 15:29:23 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768922963258140, "job": 99, "event": "flush_started", "num_memtables": 1, "num_entries": 1405, "num_deletes": 250, "total_data_size": 2497989, "memory_usage": 2543944, "flush_reason": "Manual Compaction"}
Jan 20 15:29:23 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:885] [default] [JOB 99] Level-0 flush table #163: started
Jan 20 15:29:23 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768922963269800, "cf_name": "default", "job": 99, "event": "table_file_creation", "file_number": 163, "file_size": 1517641, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 70962, "largest_seqno": 72366, "table_properties": {"data_size": 1512548, "index_size": 2424, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1605, "raw_key_size": 13383, "raw_average_key_size": 21, "raw_value_size": 1501453, "raw_average_value_size": 2357, "num_data_blocks": 108, "num_entries": 637, "num_filter_entries": 637, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768922833, "oldest_key_time": 1768922833, "file_creation_time": 1768922963, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1688fb6f-f397-4aac-a0e8-874ff91d97a7", "db_session_id": "5V3N2TVXYZBCXP55EZHK", "orig_file_number": 163, "seqno_to_time_mapping": "N/A"}}
Jan 20 15:29:23 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 99] Flush lasted 11741 microseconds, and 4880 cpu microseconds.
Jan 20 15:29:23 compute-0 ceph-mon[74360]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 15:29:23 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:29:23.269832) [db/flush_job.cc:967] [default] [JOB 99] Level-0 flush table #163: 1517641 bytes OK
Jan 20 15:29:23 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:29:23.269848) [db/memtable_list.cc:519] [default] Level-0 commit table #163 started
Jan 20 15:29:23 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:29:23.271730) [db/memtable_list.cc:722] [default] Level-0 commit table #163: memtable #1 done
Jan 20 15:29:23 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:29:23.271746) EVENT_LOG_v1 {"time_micros": 1768922963271741, "job": 99, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 15:29:23 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:29:23.271762) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 15:29:23 compute-0 ceph-mon[74360]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 99] Try to delete WAL files size 2491912, prev total WAL file size 2491912, number of live WAL files 2.
Jan 20 15:29:23 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000159.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:29:23 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:29:23.273032) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032353037' seq:72057594037927935, type:22 .. '6D6772737461740032373538' seq:0, type:0; will stop at (end)
Jan 20 15:29:23 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 100] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 15:29:23 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 99 Base level 0, inputs: [163(1482KB)], [161(12MB)]
Jan 20 15:29:23 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768922963273078, "job": 100, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [163], "files_L6": [161], "score": -1, "input_data_size": 14634568, "oldest_snapshot_seqno": -1}
Jan 20 15:29:23 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 100] Generated table #164: 10027 keys, 11680054 bytes, temperature: kUnknown
Jan 20 15:29:23 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768922963345752, "cf_name": "default", "job": 100, "event": "table_file_creation", "file_number": 164, "file_size": 11680054, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11617332, "index_size": 36561, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25093, "raw_key_size": 263720, "raw_average_key_size": 26, "raw_value_size": 11443631, "raw_average_value_size": 1141, "num_data_blocks": 1392, "num_entries": 10027, "num_filter_entries": 10027, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768917305, "oldest_key_time": 0, "file_creation_time": 1768922963, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1688fb6f-f397-4aac-a0e8-874ff91d97a7", "db_session_id": "5V3N2TVXYZBCXP55EZHK", "orig_file_number": 164, "seqno_to_time_mapping": "N/A"}}
Jan 20 15:29:23 compute-0 ceph-mon[74360]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 15:29:23 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:29:23.346047) [db/compaction/compaction_job.cc:1663] [default] [JOB 100] Compacted 1@0 + 1@6 files to L6 => 11680054 bytes
Jan 20 15:29:23 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:29:23.347458) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 201.2 rd, 160.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.4, 12.5 +0.0 blob) out(11.1 +0.0 blob), read-write-amplify(17.3) write-amplify(7.7) OK, records in: 10485, records dropped: 458 output_compression: NoCompression
Jan 20 15:29:23 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:29:23.347476) EVENT_LOG_v1 {"time_micros": 1768922963347466, "job": 100, "event": "compaction_finished", "compaction_time_micros": 72753, "compaction_time_cpu_micros": 32772, "output_level": 6, "num_output_files": 1, "total_output_size": 11680054, "num_input_records": 10485, "num_output_records": 10027, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 15:29:23 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000163.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:29:23 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768922963347804, "job": 100, "event": "table_file_deletion", "file_number": 163}
Jan 20 15:29:23 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000161.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:29:23 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768922963349656, "job": 100, "event": "table_file_deletion", "file_number": 161}
Jan 20 15:29:23 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:29:23.272972) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:29:23 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:29:23.349681) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:29:23 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:29:23.349684) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:29:23 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:29:23.349685) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:29:23 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:29:23.349687) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:29:23 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:29:23.349689) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:29:23 compute-0 sharp_dewdney[381253]: --> passed data devices: 0 physical, 1 LVM
Jan 20 15:29:23 compute-0 sharp_dewdney[381253]: --> relative data size: 1.0
Jan 20 15:29:23 compute-0 sharp_dewdney[381253]: --> All data devices are unavailable
Jan 20 15:29:23 compute-0 systemd[1]: libpod-729f2ad521fd952565c8b98ba4ff5e93cd87d7d0c5406d15ceff5445e3fabef5.scope: Deactivated successfully.
Jan 20 15:29:23 compute-0 podman[381268]: 2026-01-20 15:29:23.826131831 +0000 UTC m=+0.029942078 container died 729f2ad521fd952565c8b98ba4ff5e93cd87d7d0c5406d15ceff5445e3fabef5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_dewdney, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 20 15:29:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-335b97b3d42579d0861e6945780dc61d6a535f6a01537774ec5680b5317053e3-merged.mount: Deactivated successfully.
Jan 20 15:29:23 compute-0 podman[381268]: 2026-01-20 15:29:23.875501237 +0000 UTC m=+0.079311464 container remove 729f2ad521fd952565c8b98ba4ff5e93cd87d7d0c5406d15ceff5445e3fabef5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_dewdney, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 20 15:29:23 compute-0 systemd[1]: libpod-conmon-729f2ad521fd952565c8b98ba4ff5e93cd87d7d0c5406d15ceff5445e3fabef5.scope: Deactivated successfully.
Jan 20 15:29:23 compute-0 sudo[381130]: pam_unix(sudo:session): session closed for user root
Jan 20 15:29:23 compute-0 sudo[381283]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:29:23 compute-0 sudo[381283]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:29:23 compute-0 sudo[381283]: pam_unix(sudo:session): session closed for user root
Jan 20 15:29:24 compute-0 sudo[381308]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:29:24 compute-0 sudo[381308]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:29:24 compute-0 sudo[381308]: pam_unix(sudo:session): session closed for user root
Jan 20 15:29:24 compute-0 nova_compute[250018]: 2026-01-20 15:29:24.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:29:24 compute-0 sudo[381333]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:29:24 compute-0 sudo[381333]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:29:24 compute-0 sudo[381333]: pam_unix(sudo:session): session closed for user root
Jan 20 15:29:24 compute-0 sshd-session[381381]: banner exchange: Connection from 3.134.148.59 port 44230: invalid format
Jan 20 15:29:24 compute-0 sudo[381358]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- lvm list --format json
Jan 20 15:29:24 compute-0 sudo[381358]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:29:24 compute-0 ceph-mon[74360]: pgmap v3216: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:29:24 compute-0 nova_compute[250018]: 2026-01-20 15:29:24.410 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:29:24 compute-0 podman[381426]: 2026-01-20 15:29:24.471352949 +0000 UTC m=+0.036840756 container create 8a65ea99be4cb9a50f20f160a7866821e81fa804fbb8f26764159b156e6e532c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_murdock, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 15:29:24 compute-0 systemd[1]: Started libpod-conmon-8a65ea99be4cb9a50f20f160a7866821e81fa804fbb8f26764159b156e6e532c.scope.
Jan 20 15:29:24 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:29:24 compute-0 podman[381426]: 2026-01-20 15:29:24.531584721 +0000 UTC m=+0.097072548 container init 8a65ea99be4cb9a50f20f160a7866821e81fa804fbb8f26764159b156e6e532c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_murdock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 15:29:24 compute-0 podman[381426]: 2026-01-20 15:29:24.539426024 +0000 UTC m=+0.104913831 container start 8a65ea99be4cb9a50f20f160a7866821e81fa804fbb8f26764159b156e6e532c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_murdock, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 20 15:29:24 compute-0 podman[381426]: 2026-01-20 15:29:24.542726465 +0000 UTC m=+0.108214302 container attach 8a65ea99be4cb9a50f20f160a7866821e81fa804fbb8f26764159b156e6e532c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_murdock, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 15:29:24 compute-0 hardcore_murdock[381443]: 167 167
Jan 20 15:29:24 compute-0 systemd[1]: libpod-8a65ea99be4cb9a50f20f160a7866821e81fa804fbb8f26764159b156e6e532c.scope: Deactivated successfully.
Jan 20 15:29:24 compute-0 podman[381426]: 2026-01-20 15:29:24.545951463 +0000 UTC m=+0.111439280 container died 8a65ea99be4cb9a50f20f160a7866821e81fa804fbb8f26764159b156e6e532c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_murdock, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 20 15:29:24 compute-0 podman[381426]: 2026-01-20 15:29:24.45597839 +0000 UTC m=+0.021466227 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:29:24 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:29:24 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:29:24 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:29:24.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:29:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-752d1313a5c4abb23e53b554f337d3c884064a34e53f36dda1f9266778b64059-merged.mount: Deactivated successfully.
Jan 20 15:29:24 compute-0 podman[381426]: 2026-01-20 15:29:24.584267217 +0000 UTC m=+0.149755044 container remove 8a65ea99be4cb9a50f20f160a7866821e81fa804fbb8f26764159b156e6e532c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_murdock, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 20 15:29:24 compute-0 systemd[1]: libpod-conmon-8a65ea99be4cb9a50f20f160a7866821e81fa804fbb8f26764159b156e6e532c.scope: Deactivated successfully.
Jan 20 15:29:24 compute-0 podman[381465]: 2026-01-20 15:29:24.73112731 +0000 UTC m=+0.037754390 container create 11226ef0e6e20d38f6c3ce35072399a347b1a939a4673660d73d17b86f4bdb25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_sammet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:29:24 compute-0 systemd[1]: Started libpod-conmon-11226ef0e6e20d38f6c3ce35072399a347b1a939a4673660d73d17b86f4bdb25.scope.
Jan 20 15:29:24 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:29:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ae20d0ce86d331849f7e95813fe0bb6769e5f66f27a28a122bb5f4c9f16fc41/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 15:29:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ae20d0ce86d331849f7e95813fe0bb6769e5f66f27a28a122bb5f4c9f16fc41/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 15:29:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ae20d0ce86d331849f7e95813fe0bb6769e5f66f27a28a122bb5f4c9f16fc41/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 15:29:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ae20d0ce86d331849f7e95813fe0bb6769e5f66f27a28a122bb5f4c9f16fc41/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 15:29:24 compute-0 podman[381465]: 2026-01-20 15:29:24.808816478 +0000 UTC m=+0.115443578 container init 11226ef0e6e20d38f6c3ce35072399a347b1a939a4673660d73d17b86f4bdb25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_sammet, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 20 15:29:24 compute-0 podman[381465]: 2026-01-20 15:29:24.714733014 +0000 UTC m=+0.021360114 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:29:24 compute-0 podman[381465]: 2026-01-20 15:29:24.815282324 +0000 UTC m=+0.121909404 container start 11226ef0e6e20d38f6c3ce35072399a347b1a939a4673660d73d17b86f4bdb25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_sammet, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 20 15:29:24 compute-0 podman[381465]: 2026-01-20 15:29:24.818321178 +0000 UTC m=+0.124948278 container attach 11226ef0e6e20d38f6c3ce35072399a347b1a939a4673660d73d17b86f4bdb25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_sammet, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:29:24 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3217: 321 pgs: 321 active+clean; 142 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 7.0 KiB/s rd, 904 KiB/s wr, 13 op/s
Jan 20 15:29:25 compute-0 nova_compute[250018]: 2026-01-20 15:29:25.057 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:29:25 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:29:25 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:29:25 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:29:25.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:29:25 compute-0 exciting_sammet[381482]: {
Jan 20 15:29:25 compute-0 exciting_sammet[381482]:     "0": [
Jan 20 15:29:25 compute-0 exciting_sammet[381482]:         {
Jan 20 15:29:25 compute-0 exciting_sammet[381482]:             "devices": [
Jan 20 15:29:25 compute-0 exciting_sammet[381482]:                 "/dev/loop3"
Jan 20 15:29:25 compute-0 exciting_sammet[381482]:             ],
Jan 20 15:29:25 compute-0 exciting_sammet[381482]:             "lv_name": "ceph_lv0",
Jan 20 15:29:25 compute-0 exciting_sammet[381482]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 15:29:25 compute-0 exciting_sammet[381482]:             "lv_size": "7511998464",
Jan 20 15:29:25 compute-0 exciting_sammet[381482]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e399cf45-e6b6-5393-99f1-75c601d3f188,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afc3740a-ccee-46f8-b343-22648cc89376,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 20 15:29:25 compute-0 exciting_sammet[381482]:             "lv_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 15:29:25 compute-0 exciting_sammet[381482]:             "name": "ceph_lv0",
Jan 20 15:29:25 compute-0 exciting_sammet[381482]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 15:29:25 compute-0 exciting_sammet[381482]:             "tags": {
Jan 20 15:29:25 compute-0 exciting_sammet[381482]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 15:29:25 compute-0 exciting_sammet[381482]:                 "ceph.block_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 15:29:25 compute-0 exciting_sammet[381482]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 15:29:25 compute-0 exciting_sammet[381482]:                 "ceph.cluster_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 15:29:25 compute-0 exciting_sammet[381482]:                 "ceph.cluster_name": "ceph",
Jan 20 15:29:25 compute-0 exciting_sammet[381482]:                 "ceph.crush_device_class": "",
Jan 20 15:29:25 compute-0 exciting_sammet[381482]:                 "ceph.encrypted": "0",
Jan 20 15:29:25 compute-0 exciting_sammet[381482]:                 "ceph.osd_fsid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 15:29:25 compute-0 exciting_sammet[381482]:                 "ceph.osd_id": "0",
Jan 20 15:29:25 compute-0 exciting_sammet[381482]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 15:29:25 compute-0 exciting_sammet[381482]:                 "ceph.type": "block",
Jan 20 15:29:25 compute-0 exciting_sammet[381482]:                 "ceph.vdo": "0"
Jan 20 15:29:25 compute-0 exciting_sammet[381482]:             },
Jan 20 15:29:25 compute-0 exciting_sammet[381482]:             "type": "block",
Jan 20 15:29:25 compute-0 exciting_sammet[381482]:             "vg_name": "ceph_vg0"
Jan 20 15:29:25 compute-0 exciting_sammet[381482]:         }
Jan 20 15:29:25 compute-0 exciting_sammet[381482]:     ]
Jan 20 15:29:25 compute-0 exciting_sammet[381482]: }
Jan 20 15:29:25 compute-0 systemd[1]: libpod-11226ef0e6e20d38f6c3ce35072399a347b1a939a4673660d73d17b86f4bdb25.scope: Deactivated successfully.
Jan 20 15:29:25 compute-0 podman[381465]: 2026-01-20 15:29:25.543544605 +0000 UTC m=+0.850171705 container died 11226ef0e6e20d38f6c3ce35072399a347b1a939a4673660d73d17b86f4bdb25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_sammet, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 20 15:29:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-7ae20d0ce86d331849f7e95813fe0bb6769e5f66f27a28a122bb5f4c9f16fc41-merged.mount: Deactivated successfully.
Jan 20 15:29:25 compute-0 podman[381465]: 2026-01-20 15:29:25.608777374 +0000 UTC m=+0.915404444 container remove 11226ef0e6e20d38f6c3ce35072399a347b1a939a4673660d73d17b86f4bdb25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_sammet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 15:29:25 compute-0 systemd[1]: libpod-conmon-11226ef0e6e20d38f6c3ce35072399a347b1a939a4673660d73d17b86f4bdb25.scope: Deactivated successfully.
Jan 20 15:29:25 compute-0 podman[381494]: 2026-01-20 15:29:25.625834219 +0000 UTC m=+0.050151119 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 20 15:29:25 compute-0 sudo[381358]: pam_unix(sudo:session): session closed for user root
Jan 20 15:29:25 compute-0 podman[381491]: 2026-01-20 15:29:25.655670423 +0000 UTC m=+0.083545989 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible)
Jan 20 15:29:25 compute-0 sudo[381546]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:29:25 compute-0 sudo[381546]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:29:25 compute-0 sudo[381546]: pam_unix(sudo:session): session closed for user root
Jan 20 15:29:25 compute-0 sudo[381572]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:29:25 compute-0 sudo[381572]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:29:25 compute-0 sudo[381572]: pam_unix(sudo:session): session closed for user root
Jan 20 15:29:25 compute-0 sudo[381597]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:29:25 compute-0 sudo[381597]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:29:25 compute-0 sudo[381597]: pam_unix(sudo:session): session closed for user root
Jan 20 15:29:25 compute-0 sudo[381622]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- raw list --format json
Jan 20 15:29:25 compute-0 sudo[381622]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:29:26 compute-0 podman[381686]: 2026-01-20 15:29:26.155200589 +0000 UTC m=+0.034099440 container create b4acd0bb25b1ad179d3110c225c23472194a9eb79eb0802ffb47206fe9774e60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_bassi, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 20 15:29:26 compute-0 systemd[1]: Started libpod-conmon-b4acd0bb25b1ad179d3110c225c23472194a9eb79eb0802ffb47206fe9774e60.scope.
Jan 20 15:29:26 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:29:26 compute-0 podman[381686]: 2026-01-20 15:29:26.224803076 +0000 UTC m=+0.103701947 container init b4acd0bb25b1ad179d3110c225c23472194a9eb79eb0802ffb47206fe9774e60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_bassi, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 20 15:29:26 compute-0 podman[381686]: 2026-01-20 15:29:26.230761918 +0000 UTC m=+0.109660769 container start b4acd0bb25b1ad179d3110c225c23472194a9eb79eb0802ffb47206fe9774e60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_bassi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 20 15:29:26 compute-0 podman[381686]: 2026-01-20 15:29:26.23411113 +0000 UTC m=+0.113010011 container attach b4acd0bb25b1ad179d3110c225c23472194a9eb79eb0802ffb47206fe9774e60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_bassi, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 20 15:29:26 compute-0 naughty_bassi[381702]: 167 167
Jan 20 15:29:26 compute-0 systemd[1]: libpod-b4acd0bb25b1ad179d3110c225c23472194a9eb79eb0802ffb47206fe9774e60.scope: Deactivated successfully.
Jan 20 15:29:26 compute-0 conmon[381702]: conmon b4acd0bb25b1ad179d31 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b4acd0bb25b1ad179d3110c225c23472194a9eb79eb0802ffb47206fe9774e60.scope/container/memory.events
Jan 20 15:29:26 compute-0 podman[381686]: 2026-01-20 15:29:26.140524079 +0000 UTC m=+0.019422960 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:29:26 compute-0 podman[381686]: 2026-01-20 15:29:26.237140143 +0000 UTC m=+0.116038994 container died b4acd0bb25b1ad179d3110c225c23472194a9eb79eb0802ffb47206fe9774e60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_bassi, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:29:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-b0960650a125d3fd7528023ac649c35835bce242148d68a10ea429b244363f86-merged.mount: Deactivated successfully.
Jan 20 15:29:26 compute-0 podman[381686]: 2026-01-20 15:29:26.268554659 +0000 UTC m=+0.147453530 container remove b4acd0bb25b1ad179d3110c225c23472194a9eb79eb0802ffb47206fe9774e60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_bassi, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 15:29:26 compute-0 systemd[1]: libpod-conmon-b4acd0bb25b1ad179d3110c225c23472194a9eb79eb0802ffb47206fe9774e60.scope: Deactivated successfully.
Jan 20 15:29:26 compute-0 nova_compute[250018]: 2026-01-20 15:29:26.275 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:29:26 compute-0 ceph-mon[74360]: pgmap v3217: 321 pgs: 321 active+clean; 142 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 7.0 KiB/s rd, 904 KiB/s wr, 13 op/s
Jan 20 15:29:26 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/1146710376' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:29:26 compute-0 podman[381726]: 2026-01-20 15:29:26.423207265 +0000 UTC m=+0.046070717 container create b6612d4f7715dd2964030bae371fd33b27d10c65459bfb1aa5fe1006254df9ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_archimedes, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 15:29:26 compute-0 systemd[1]: Started libpod-conmon-b6612d4f7715dd2964030bae371fd33b27d10c65459bfb1aa5fe1006254df9ce.scope.
Jan 20 15:29:26 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:29:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9add796273f4a48201a4b22f98165fe023695e077e927aaf2458084a43c9dce/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 15:29:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9add796273f4a48201a4b22f98165fe023695e077e927aaf2458084a43c9dce/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 15:29:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9add796273f4a48201a4b22f98165fe023695e077e927aaf2458084a43c9dce/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 15:29:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9add796273f4a48201a4b22f98165fe023695e077e927aaf2458084a43c9dce/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 15:29:26 compute-0 podman[381726]: 2026-01-20 15:29:26.494988731 +0000 UTC m=+0.117852223 container init b6612d4f7715dd2964030bae371fd33b27d10c65459bfb1aa5fe1006254df9ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_archimedes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 15:29:26 compute-0 podman[381726]: 2026-01-20 15:29:26.403776345 +0000 UTC m=+0.026639897 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:29:26 compute-0 podman[381726]: 2026-01-20 15:29:26.503663678 +0000 UTC m=+0.126527130 container start b6612d4f7715dd2964030bae371fd33b27d10c65459bfb1aa5fe1006254df9ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_archimedes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 20 15:29:26 compute-0 podman[381726]: 2026-01-20 15:29:26.506847634 +0000 UTC m=+0.129711126 container attach b6612d4f7715dd2964030bae371fd33b27d10c65459bfb1aa5fe1006254df9ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_archimedes, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 15:29:26 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:29:26 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:29:26 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:29:26.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:29:26 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3218: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:29:27 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:29:27 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:29:27 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:29:27.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:29:27 compute-0 sudo[381752]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:29:27 compute-0 sudo[381752]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:29:27 compute-0 sudo[381752]: pam_unix(sudo:session): session closed for user root
Jan 20 15:29:27 compute-0 ecstatic_archimedes[381742]: {
Jan 20 15:29:27 compute-0 ecstatic_archimedes[381742]:     "afc3740a-ccee-46f8-b343-22648cc89376": {
Jan 20 15:29:27 compute-0 ecstatic_archimedes[381742]:         "ceph_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 15:29:27 compute-0 ecstatic_archimedes[381742]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 20 15:29:27 compute-0 ecstatic_archimedes[381742]:         "osd_id": 0,
Jan 20 15:29:27 compute-0 ecstatic_archimedes[381742]:         "osd_uuid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 15:29:27 compute-0 ecstatic_archimedes[381742]:         "type": "bluestore"
Jan 20 15:29:27 compute-0 ecstatic_archimedes[381742]:     }
Jan 20 15:29:27 compute-0 ecstatic_archimedes[381742]: }
Jan 20 15:29:27 compute-0 sudo[381785]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:29:27 compute-0 sudo[381785]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:29:27 compute-0 sudo[381785]: pam_unix(sudo:session): session closed for user root
Jan 20 15:29:27 compute-0 systemd[1]: libpod-b6612d4f7715dd2964030bae371fd33b27d10c65459bfb1aa5fe1006254df9ce.scope: Deactivated successfully.
Jan 20 15:29:27 compute-0 podman[381726]: 2026-01-20 15:29:27.307068228 +0000 UTC m=+0.929931680 container died b6612d4f7715dd2964030bae371fd33b27d10c65459bfb1aa5fe1006254df9ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_archimedes, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 20 15:29:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-c9add796273f4a48201a4b22f98165fe023695e077e927aaf2458084a43c9dce-merged.mount: Deactivated successfully.
Jan 20 15:29:27 compute-0 podman[381726]: 2026-01-20 15:29:27.353787191 +0000 UTC m=+0.976650643 container remove b6612d4f7715dd2964030bae371fd33b27d10c65459bfb1aa5fe1006254df9ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_archimedes, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 15:29:27 compute-0 systemd[1]: libpod-conmon-b6612d4f7715dd2964030bae371fd33b27d10c65459bfb1aa5fe1006254df9ce.scope: Deactivated successfully.
Jan 20 15:29:27 compute-0 sudo[381622]: pam_unix(sudo:session): session closed for user root
Jan 20 15:29:27 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 15:29:27 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/475366063' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:29:27 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:29:27 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 15:29:27 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:29:27 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev f3937345-103f-4773-b859-51377e66a765 does not exist
Jan 20 15:29:27 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 81df92c1-55ba-44a7-af1f-9bdfd039d724 does not exist
Jan 20 15:29:27 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev dae5aa87-2a21-4361-aa62-8b82fe450852 does not exist
Jan 20 15:29:27 compute-0 sudo[381823]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:29:27 compute-0 sudo[381823]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:29:27 compute-0 sudo[381823]: pam_unix(sudo:session): session closed for user root
Jan 20 15:29:27 compute-0 sudo[381848]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 15:29:27 compute-0 sudo[381848]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:29:27 compute-0 sudo[381848]: pam_unix(sudo:session): session closed for user root
Jan 20 15:29:28 compute-0 nova_compute[250018]: 2026-01-20 15:29:28.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:29:28 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:29:28 compute-0 ceph-mon[74360]: pgmap v3218: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:29:28 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:29:28 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:29:28 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:29:28 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:29:28 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:29:28.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:29:29 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3219: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:29:29 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:29:29 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:29:29 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:29:29.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:29:29 compute-0 nova_compute[250018]: 2026-01-20 15:29:29.411 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:29:30 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:29:30 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:29:30 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:29:30.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:29:30 compute-0 ceph-mon[74360]: pgmap v3219: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:29:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:29:30.799 160071 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:29:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:29:30.799 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:29:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:29:30.799 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:29:31 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3220: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 91 op/s
Jan 20 15:29:31 compute-0 nova_compute[250018]: 2026-01-20 15:29:31.053 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:29:31 compute-0 nova_compute[250018]: 2026-01-20 15:29:31.053 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 15:29:31 compute-0 nova_compute[250018]: 2026-01-20 15:29:31.053 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 15:29:31 compute-0 nova_compute[250018]: 2026-01-20 15:29:31.070 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 20 15:29:31 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:29:31 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:29:31 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:29:31.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:29:31 compute-0 nova_compute[250018]: 2026-01-20 15:29:31.277 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:29:32 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:29:32 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:29:32 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:29:32.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:29:32 compute-0 ceph-mon[74360]: pgmap v3220: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 91 op/s
Jan 20 15:29:33 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3221: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 91 op/s
Jan 20 15:29:33 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:29:33 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:29:33 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:29:33.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:29:33 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:29:34 compute-0 nova_compute[250018]: 2026-01-20 15:29:34.412 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:29:34 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:29:34 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:29:34 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:29:34.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:29:34 compute-0 ceph-mon[74360]: pgmap v3221: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 91 op/s
Jan 20 15:29:35 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3222: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Jan 20 15:29:35 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:29:35 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:29:35 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:29:35.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:29:36 compute-0 nova_compute[250018]: 2026-01-20 15:29:36.279 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:29:36 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:29:36 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:29:36 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:29:36.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:29:36 compute-0 ceph-mon[74360]: pgmap v3222: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Jan 20 15:29:37 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3223: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 924 KiB/s wr, 86 op/s
Jan 20 15:29:37 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:29:37 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:29:37 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:29:37.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:29:38 compute-0 nova_compute[250018]: 2026-01-20 15:29:38.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:29:38 compute-0 nova_compute[250018]: 2026-01-20 15:29:38.051 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 20 15:29:38 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:29:38 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:29:38 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:29:38 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:29:38.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:29:38 compute-0 ceph-mon[74360]: pgmap v3223: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 924 KiB/s wr, 86 op/s
Jan 20 15:29:39 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3224: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 20 15:29:39 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:29:39 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:29:39 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:29:39.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:29:39 compute-0 nova_compute[250018]: 2026-01-20 15:29:39.413 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:29:40 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:29:40 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:29:40 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:29:40.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:29:40 compute-0 ceph-mon[74360]: pgmap v3224: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 20 15:29:41 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3225: 321 pgs: 321 active+clean; 175 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.1 MiB/s wr, 97 op/s
Jan 20 15:29:41 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:29:41 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:29:41 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:29:41.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:29:41 compute-0 nova_compute[250018]: 2026-01-20 15:29:41.280 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:29:42 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:29:42 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:29:42 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:29:42.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:29:42 compute-0 ceph-mon[74360]: pgmap v3225: 321 pgs: 321 active+clean; 175 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.1 MiB/s wr, 97 op/s
Jan 20 15:29:43 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3226: 321 pgs: 321 active+clean; 175 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 418 KiB/s rd, 1.1 MiB/s wr, 32 op/s
Jan 20 15:29:43 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:29:43 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:29:43 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:29:43.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:29:43 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:29:44 compute-0 nova_compute[250018]: 2026-01-20 15:29:44.413 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:29:44 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:29:44 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:29:44 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:29:44.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:29:44 compute-0 ceph-mon[74360]: pgmap v3226: 321 pgs: 321 active+clean; 175 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 418 KiB/s rd, 1.1 MiB/s wr, 32 op/s
Jan 20 15:29:45 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3227: 321 pgs: 321 active+clean; 198 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 545 KiB/s rd, 2.1 MiB/s wr, 68 op/s
Jan 20 15:29:45 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:29:45 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:29:45 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:29:45.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:29:45 compute-0 ceph-mon[74360]: pgmap v3227: 321 pgs: 321 active+clean; 198 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 545 KiB/s rd, 2.1 MiB/s wr, 68 op/s
Jan 20 15:29:46 compute-0 nova_compute[250018]: 2026-01-20 15:29:46.330 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:29:46 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:29:46 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:29:46 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:29:46.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:29:47 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3228: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 359 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 20 15:29:47 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:29:47 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:29:47 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:29:47.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:29:47 compute-0 sudo[381883]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:29:47 compute-0 sudo[381883]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:29:47 compute-0 sudo[381883]: pam_unix(sudo:session): session closed for user root
Jan 20 15:29:47 compute-0 sudo[381908]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:29:47 compute-0 sudo[381908]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:29:47 compute-0 sudo[381908]: pam_unix(sudo:session): session closed for user root
Jan 20 15:29:48 compute-0 ceph-mon[74360]: pgmap v3228: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 359 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 20 15:29:48 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:29:48 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:29:48 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:29:48 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:29:48.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:29:49 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3229: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 359 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 20 15:29:49 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:29:49 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:29:49 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:29:49.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:29:49 compute-0 nova_compute[250018]: 2026-01-20 15:29:49.415 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:29:50 compute-0 ceph-mon[74360]: pgmap v3229: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 359 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 20 15:29:50 compute-0 nova_compute[250018]: 2026-01-20 15:29:50.439 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:29:50 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:29:50.440 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=75, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=74) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:29:50 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:29:50.441 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 15:29:50 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:29:50 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:29:50 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:29:50.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:29:51 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3230: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 360 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 20 15:29:51 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:29:51 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:29:51 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:29:51.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:29:51 compute-0 nova_compute[250018]: 2026-01-20 15:29:51.360 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:29:52 compute-0 ceph-mon[74360]: pgmap v3230: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 360 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 20 15:29:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:29:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:29:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:29:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:29:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:29:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:29:52 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:29:52 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:29:52 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:29:52.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:29:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Optimize plan auto_2026-01-20_15:29:52
Jan 20 15:29:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 15:29:52 compute-0 ceph-mgr[74653]: [balancer INFO root] do_upmap
Jan 20 15:29:52 compute-0 ceph-mgr[74653]: [balancer INFO root] pools ['cephfs.cephfs.data', 'volumes', 'images', '.mgr', '.rgw.root', 'backups', 'cephfs.cephfs.meta', 'default.rgw.log', 'default.rgw.control', 'vms', 'default.rgw.meta']
Jan 20 15:29:52 compute-0 ceph-mgr[74653]: [balancer INFO root] prepared 0/10 changes
Jan 20 15:29:53 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3231: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 199 KiB/s rd, 1.1 MiB/s wr, 41 op/s
Jan 20 15:29:53 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:29:53 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:29:53 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:29:53 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:29:53.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:29:54 compute-0 nova_compute[250018]: 2026-01-20 15:29:54.417 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:29:54 compute-0 ceph-mon[74360]: pgmap v3231: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 199 KiB/s rd, 1.1 MiB/s wr, 41 op/s
Jan 20 15:29:54 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/2846343753' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:29:54 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:29:54 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:29:54 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:29:54.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:29:55 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3232: 321 pgs: 321 active+clean; 156 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 210 KiB/s rd, 1.1 MiB/s wr, 58 op/s
Jan 20 15:29:55 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:29:55 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:29:55 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:29:55.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:29:56 compute-0 nova_compute[250018]: 2026-01-20 15:29:56.065 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:29:56 compute-0 nova_compute[250018]: 2026-01-20 15:29:56.065 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 20 15:29:56 compute-0 nova_compute[250018]: 2026-01-20 15:29:56.084 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 20 15:29:56 compute-0 nova_compute[250018]: 2026-01-20 15:29:56.371 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:29:56 compute-0 podman[381939]: 2026-01-20 15:29:56.50447981 +0000 UTC m=+0.074766759 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Jan 20 15:29:56 compute-0 ceph-mon[74360]: pgmap v3232: 321 pgs: 321 active+clean; 156 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 210 KiB/s rd, 1.1 MiB/s wr, 58 op/s
Jan 20 15:29:56 compute-0 podman[381938]: 2026-01-20 15:29:56.546783943 +0000 UTC m=+0.127312211 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.build-date=20251202, tcib_managed=true, io.buildah.version=1.41.3)
Jan 20 15:29:56 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:29:56 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:29:56 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:29:56.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:29:57 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3233: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 91 KiB/s rd, 18 KiB/s wr, 34 op/s
Jan 20 15:29:57 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:29:57 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:29:57 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:29:57.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:29:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 15:29:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 15:29:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 15:29:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 15:29:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 15:29:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 15:29:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 15:29:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 15:29:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 15:29:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 15:29:58 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:29:58 compute-0 ceph-mon[74360]: pgmap v3233: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 91 KiB/s rd, 18 KiB/s wr, 34 op/s
Jan 20 15:29:58 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:29:58 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:29:58 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:29:58.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:29:59 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3234: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 12 KiB/s wr, 28 op/s
Jan 20 15:29:59 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:29:59 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:29:59 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:29:59.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:29:59 compute-0 nova_compute[250018]: 2026-01-20 15:29:59.419 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:29:59 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:29:59.443 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=367c1a2c-b16a-4828-ab5a-626bb50023b4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '75'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:30:00 compute-0 ceph-mon[74360]: log_channel(cluster) log [INF] : overall HEALTH_OK
Jan 20 15:30:00 compute-0 nova_compute[250018]: 2026-01-20 15:30:00.070 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:30:00 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:30:00 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:30:00 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:30:00.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:30:00 compute-0 ceph-mon[74360]: pgmap v3234: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 12 KiB/s wr, 28 op/s
Jan 20 15:30:00 compute-0 ceph-mon[74360]: overall HEALTH_OK
Jan 20 15:30:01 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3235: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.5 KiB/s wr, 27 op/s
Jan 20 15:30:01 compute-0 sshd-session[381984]: Invalid user user from 134.122.57.138 port 51196
Jan 20 15:30:01 compute-0 sshd-session[381984]: Connection closed by invalid user user 134.122.57.138 port 51196 [preauth]
Jan 20 15:30:01 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:30:01 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:30:01 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:30:01.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:30:01 compute-0 nova_compute[250018]: 2026-01-20 15:30:01.373 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:30:02 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:30:02 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:30:02 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:30:02.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:30:02 compute-0 ceph-mon[74360]: pgmap v3235: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.5 KiB/s wr, 27 op/s
Jan 20 15:30:03 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3236: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 20 15:30:03 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:30:03 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:30:03 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:30:03 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:30:03.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:30:04 compute-0 nova_compute[250018]: 2026-01-20 15:30:04.420 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:30:04 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:30:04 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:30:04 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:30:04.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:30:04 compute-0 ceph-mon[74360]: pgmap v3236: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 20 15:30:05 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3237: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 20 15:30:05 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:30:05 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:30:05 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:30:05.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:30:06 compute-0 nova_compute[250018]: 2026-01-20 15:30:06.374 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:30:06 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:30:06 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:30:06 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:30:06.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:30:06 compute-0 ceph-mon[74360]: pgmap v3237: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 20 15:30:07 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3238: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 8.2 KiB/s rd, 511 B/s wr, 11 op/s
Jan 20 15:30:07 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:30:07 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:30:07 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:30:07.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:30:07 compute-0 sudo[381990]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:30:07 compute-0 sudo[381990]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:30:07 compute-0 sudo[381990]: pam_unix(sudo:session): session closed for user root
Jan 20 15:30:07 compute-0 sudo[382015]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:30:07 compute-0 sudo[382015]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:30:07 compute-0 sudo[382015]: pam_unix(sudo:session): session closed for user root
Jan 20 15:30:08 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:30:08 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:30:08 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:30:08 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:30:08.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:30:08 compute-0 ceph-mon[74360]: pgmap v3238: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 8.2 KiB/s rd, 511 B/s wr, 11 op/s
Jan 20 15:30:09 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3239: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:30:09 compute-0 nova_compute[250018]: 2026-01-20 15:30:09.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:30:09 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:30:09 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:30:09 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:30:09.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:30:09 compute-0 nova_compute[250018]: 2026-01-20 15:30:09.422 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:30:10 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:30:10 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:30:10 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:30:10.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:30:10 compute-0 ceph-mon[74360]: pgmap v3239: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:30:10 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/3785469698' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:30:11 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3240: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:30:11 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:30:11 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:30:11 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:30:11.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:30:11 compute-0 nova_compute[250018]: 2026-01-20 15:30:11.377 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:30:11 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/1052730623' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:30:11 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/3373613097' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:30:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 15:30:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:30:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 20 15:30:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:30:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 20 15:30:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:30:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021614147124511445 of space, bias 1.0, pg target 0.6484244137353433 quantized to 32 (current 32)
Jan 20 15:30:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:30:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 20 15:30:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:30:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 20 15:30:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:30:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 20 15:30:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:30:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 15:30:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:30:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 20 15:30:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:30:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 20 15:30:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:30:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 15:30:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:30:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 20 15:30:12 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:30:12 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:30:12 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:30:12.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:30:12 compute-0 ceph-mon[74360]: pgmap v3240: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:30:12 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/1559663311' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:30:13 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3241: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:30:13 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:30:13 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:30:13 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:30:13 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:30:13.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:30:13 compute-0 ceph-mon[74360]: pgmap v3241: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:30:13 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/705460975' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:30:13 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/705460975' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:30:14 compute-0 nova_compute[250018]: 2026-01-20 15:30:14.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:30:14 compute-0 nova_compute[250018]: 2026-01-20 15:30:14.075 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:30:14 compute-0 nova_compute[250018]: 2026-01-20 15:30:14.075 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:30:14 compute-0 nova_compute[250018]: 2026-01-20 15:30:14.075 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:30:14 compute-0 nova_compute[250018]: 2026-01-20 15:30:14.076 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 15:30:14 compute-0 nova_compute[250018]: 2026-01-20 15:30:14.076 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:30:14 compute-0 nova_compute[250018]: 2026-01-20 15:30:14.426 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:30:14 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:30:14 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1980788337' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:30:14 compute-0 nova_compute[250018]: 2026-01-20 15:30:14.516 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:30:14 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:30:14 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:30:14 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:30:14.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:30:14 compute-0 nova_compute[250018]: 2026-01-20 15:30:14.673 250022 WARNING nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:30:14 compute-0 nova_compute[250018]: 2026-01-20 15:30:14.675 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4250MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 15:30:14 compute-0 nova_compute[250018]: 2026-01-20 15:30:14.675 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:30:14 compute-0 nova_compute[250018]: 2026-01-20 15:30:14.675 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:30:14 compute-0 nova_compute[250018]: 2026-01-20 15:30:14.793 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 15:30:14 compute-0 nova_compute[250018]: 2026-01-20 15:30:14.793 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 15:30:14 compute-0 nova_compute[250018]: 2026-01-20 15:30:14.827 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:30:14 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/1980788337' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:30:15 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3242: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:30:15 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:30:15 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/23335784' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:30:15 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:30:15 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:30:15 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:30:15.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:30:15 compute-0 nova_compute[250018]: 2026-01-20 15:30:15.293 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:30:15 compute-0 nova_compute[250018]: 2026-01-20 15:30:15.299 250022 DEBUG nova.compute.provider_tree [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:30:15 compute-0 nova_compute[250018]: 2026-01-20 15:30:15.321 250022 DEBUG nova.scheduler.client.report [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:30:15 compute-0 nova_compute[250018]: 2026-01-20 15:30:15.323 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 15:30:15 compute-0 nova_compute[250018]: 2026-01-20 15:30:15.323 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.648s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:30:15 compute-0 ceph-mon[74360]: pgmap v3242: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:30:15 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/23335784' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:30:16 compute-0 nova_compute[250018]: 2026-01-20 15:30:16.412 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:30:16 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:30:16 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:30:16 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:30:16.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:30:17 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3243: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:30:17 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:30:17 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:30:17 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:30:17.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:30:17 compute-0 nova_compute[250018]: 2026-01-20 15:30:17.323 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:30:17 compute-0 nova_compute[250018]: 2026-01-20 15:30:17.323 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 15:30:18 compute-0 nova_compute[250018]: 2026-01-20 15:30:18.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:30:18 compute-0 nova_compute[250018]: 2026-01-20 15:30:18.052 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:30:18 compute-0 ceph-mon[74360]: pgmap v3243: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:30:18 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:30:18 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:30:18 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:30:18 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:30:18.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:30:19 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3244: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:30:19 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:30:19 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:30:19 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:30:19.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:30:19 compute-0 nova_compute[250018]: 2026-01-20 15:30:19.430 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:30:20 compute-0 ceph-mon[74360]: pgmap v3244: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:30:20 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:30:20 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:30:20 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:30:20.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:30:21 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3245: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:30:21 compute-0 nova_compute[250018]: 2026-01-20 15:30:21.044 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:30:21 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:30:21 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:30:21 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:30:21.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:30:21 compute-0 nova_compute[250018]: 2026-01-20 15:30:21.450 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:30:22 compute-0 ceph-mon[74360]: pgmap v3245: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:30:22 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/1059682167' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:30:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:30:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:30:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:30:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:30:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:30:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:30:22 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:30:22 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:30:22 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:30:22.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:30:23 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3246: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:30:23 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:30:23 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:30:23 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:30:23 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:30:23.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:30:24 compute-0 ceph-mon[74360]: pgmap v3246: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:30:24 compute-0 nova_compute[250018]: 2026-01-20 15:30:24.472 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:30:24 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:30:24 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:30:24 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:30:24.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:30:25 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3247: 321 pgs: 321 active+clean; 145 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 957 KiB/s wr, 25 op/s
Jan 20 15:30:25 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:30:25 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:30:25 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:30:25.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:30:25 compute-0 sshd-session[382089]: Invalid user ubuntu from 36.137.141.10 port 58676
Jan 20 15:30:26 compute-0 sshd-session[382089]: Received disconnect from 36.137.141.10 port 58676:11:  [preauth]
Jan 20 15:30:26 compute-0 sshd-session[382089]: Disconnected from invalid user ubuntu 36.137.141.10 port 58676 [preauth]
Jan 20 15:30:26 compute-0 ceph-mon[74360]: pgmap v3247: 321 pgs: 321 active+clean; 145 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 957 KiB/s wr, 25 op/s
Jan 20 15:30:26 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/2182023721' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:30:26 compute-0 nova_compute[250018]: 2026-01-20 15:30:26.452 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:30:26 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:30:26 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:30:26 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:30:26.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:30:27 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3248: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:30:27 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:30:27 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:30:27 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:30:27.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:30:27 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/3549875571' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:30:27 compute-0 podman[382098]: 2026-01-20 15:30:27.468068582 +0000 UTC m=+0.051095175 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:30:27 compute-0 podman[382097]: 2026-01-20 15:30:27.49516499 +0000 UTC m=+0.080081464 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, 
org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Jan 20 15:30:27 compute-0 sudo[382144]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:30:27 compute-0 sudo[382144]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:30:27 compute-0 sudo[382144]: pam_unix(sudo:session): session closed for user root
Jan 20 15:30:27 compute-0 sudo[382169]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:30:27 compute-0 sudo[382169]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:30:27 compute-0 sudo[382169]: pam_unix(sudo:session): session closed for user root
Jan 20 15:30:27 compute-0 sudo[382194]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:30:27 compute-0 sudo[382194]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:30:27 compute-0 sudo[382194]: pam_unix(sudo:session): session closed for user root
Jan 20 15:30:27 compute-0 sudo[382219]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:30:27 compute-0 sudo[382219]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:30:27 compute-0 sudo[382219]: pam_unix(sudo:session): session closed for user root
Jan 20 15:30:28 compute-0 sudo[382244]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:30:28 compute-0 sudo[382244]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:30:28 compute-0 sudo[382244]: pam_unix(sudo:session): session closed for user root
Jan 20 15:30:28 compute-0 sudo[382269]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 15:30:28 compute-0 sudo[382269]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:30:28 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:30:28 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 20 15:30:28 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:30:28 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 20 15:30:28 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:30:28 compute-0 ceph-mon[74360]: pgmap v3248: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:30:28 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:30:28 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:30:28 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 20 15:30:28 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:30:28 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 20 15:30:28 compute-0 sudo[382269]: pam_unix(sudo:session): session closed for user root
Jan 20 15:30:28 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:30:28 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Jan 20 15:30:28 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 20 15:30:28 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:30:28 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:30:28 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:30:28.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:30:29 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3249: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:30:29 compute-0 nova_compute[250018]: 2026-01-20 15:30:29.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:30:29 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Jan 20 15:30:29 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 20 15:30:29 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:30:29 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:30:29 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:30:29.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:30:29 compute-0 nova_compute[250018]: 2026-01-20 15:30:29.475 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:30:29 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:30:29 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:30:29 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 20 15:30:29 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 20 15:30:30 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:30:30 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:30:30 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:30:30.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:30:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:30:30.800 160071 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:30:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:30:30.800 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:30:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:30:30.801 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:30:31 compute-0 ceph-mon[74360]: pgmap v3249: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:30:31 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3250: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 777 KiB/s rd, 1.8 MiB/s wr, 57 op/s
Jan 20 15:30:31 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:30:31 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:30:31 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:30:31.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:30:31 compute-0 nova_compute[250018]: 2026-01-20 15:30:31.453 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:30:32 compute-0 nova_compute[250018]: 2026-01-20 15:30:32.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:30:32 compute-0 nova_compute[250018]: 2026-01-20 15:30:32.052 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 15:30:32 compute-0 nova_compute[250018]: 2026-01-20 15:30:32.052 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 15:30:32 compute-0 nova_compute[250018]: 2026-01-20 15:30:32.074 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 20 15:30:32 compute-0 ceph-mon[74360]: pgmap v3250: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 777 KiB/s rd, 1.8 MiB/s wr, 57 op/s
Jan 20 15:30:32 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:30:32 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:30:32 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:30:32.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:30:33 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3251: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 777 KiB/s rd, 1.8 MiB/s wr, 57 op/s
Jan 20 15:30:33 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:30:33 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 20 15:30:33 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:30:33 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 20 15:30:33 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:30:33 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 15:30:33 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:30:33 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 20 15:30:33 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 15:30:33 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 20 15:30:33 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:30:33 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:30:33 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:30:33.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:30:33 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:30:33 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 5733c2a4-8e39-4d62-8ed2-5753d8f57ee9 does not exist
Jan 20 15:30:33 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev c9bc0907-ae6a-4055-bd5d-3f57469b9cff does not exist
Jan 20 15:30:33 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev bad39200-774f-412d-8295-0558504c61ae does not exist
Jan 20 15:30:33 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 20 15:30:33 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 15:30:33 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 20 15:30:33 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 15:30:33 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 15:30:33 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:30:33 compute-0 sudo[382328]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:30:33 compute-0 sudo[382328]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:30:33 compute-0 sudo[382328]: pam_unix(sudo:session): session closed for user root
Jan 20 15:30:33 compute-0 sudo[382353]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:30:33 compute-0 sudo[382353]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:30:33 compute-0 sudo[382353]: pam_unix(sudo:session): session closed for user root
Jan 20 15:30:33 compute-0 sudo[382378]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:30:33 compute-0 sudo[382378]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:30:33 compute-0 sudo[382378]: pam_unix(sudo:session): session closed for user root
Jan 20 15:30:33 compute-0 sudo[382403]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 15:30:33 compute-0 sudo[382403]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:30:33 compute-0 podman[382467]: 2026-01-20 15:30:33.831466031 +0000 UTC m=+0.039620302 container create 8e76f4442de8ff54954423dc4920e96d3f524e0b0820dce92a032bf35e0104fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_northcutt, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 15:30:33 compute-0 systemd[1]: Started libpod-conmon-8e76f4442de8ff54954423dc4920e96d3f524e0b0820dce92a032bf35e0104fc.scope.
Jan 20 15:30:33 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:30:33 compute-0 podman[382467]: 2026-01-20 15:30:33.904665226 +0000 UTC m=+0.112819517 container init 8e76f4442de8ff54954423dc4920e96d3f524e0b0820dce92a032bf35e0104fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_northcutt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0)
Jan 20 15:30:33 compute-0 podman[382467]: 2026-01-20 15:30:33.813184432 +0000 UTC m=+0.021338713 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:30:33 compute-0 podman[382467]: 2026-01-20 15:30:33.910748382 +0000 UTC m=+0.118902643 container start 8e76f4442de8ff54954423dc4920e96d3f524e0b0820dce92a032bf35e0104fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_northcutt, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 20 15:30:33 compute-0 podman[382467]: 2026-01-20 15:30:33.913529298 +0000 UTC m=+0.121683589 container attach 8e76f4442de8ff54954423dc4920e96d3f524e0b0820dce92a032bf35e0104fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_northcutt, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 15:30:33 compute-0 jovial_northcutt[382484]: 167 167
Jan 20 15:30:33 compute-0 systemd[1]: libpod-8e76f4442de8ff54954423dc4920e96d3f524e0b0820dce92a032bf35e0104fc.scope: Deactivated successfully.
Jan 20 15:30:33 compute-0 conmon[382484]: conmon 8e76f4442de8ff549544 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8e76f4442de8ff54954423dc4920e96d3f524e0b0820dce92a032bf35e0104fc.scope/container/memory.events
Jan 20 15:30:33 compute-0 podman[382467]: 2026-01-20 15:30:33.917057614 +0000 UTC m=+0.125211865 container died 8e76f4442de8ff54954423dc4920e96d3f524e0b0820dce92a032bf35e0104fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_northcutt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 15:30:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-558ab4fbc6c7601c1101cc57f835d8d7fd0ba2ec247b981decd266f8b1f7d018-merged.mount: Deactivated successfully.
Jan 20 15:30:33 compute-0 podman[382467]: 2026-01-20 15:30:33.956397556 +0000 UTC m=+0.164551837 container remove 8e76f4442de8ff54954423dc4920e96d3f524e0b0820dce92a032bf35e0104fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_northcutt, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Jan 20 15:30:33 compute-0 systemd[1]: libpod-conmon-8e76f4442de8ff54954423dc4920e96d3f524e0b0820dce92a032bf35e0104fc.scope: Deactivated successfully.
Jan 20 15:30:34 compute-0 podman[382509]: 2026-01-20 15:30:34.122937596 +0000 UTC m=+0.042246312 container create a6213710073e430d418962f31cb9b0f9b29216878e721eac47697580b898832a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_liskov, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 20 15:30:34 compute-0 systemd[1]: Started libpod-conmon-a6213710073e430d418962f31cb9b0f9b29216878e721eac47697580b898832a.scope.
Jan 20 15:30:34 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:30:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3acf3dd531b6e4279c0f6f922237320132b22237afd8a86bd28f2073222c0955/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 15:30:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3acf3dd531b6e4279c0f6f922237320132b22237afd8a86bd28f2073222c0955/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 15:30:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3acf3dd531b6e4279c0f6f922237320132b22237afd8a86bd28f2073222c0955/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 15:30:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3acf3dd531b6e4279c0f6f922237320132b22237afd8a86bd28f2073222c0955/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 15:30:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3acf3dd531b6e4279c0f6f922237320132b22237afd8a86bd28f2073222c0955/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 15:30:34 compute-0 podman[382509]: 2026-01-20 15:30:34.105932493 +0000 UTC m=+0.025241229 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:30:34 compute-0 podman[382509]: 2026-01-20 15:30:34.204013516 +0000 UTC m=+0.123322252 container init a6213710073e430d418962f31cb9b0f9b29216878e721eac47697580b898832a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_liskov, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 20 15:30:34 compute-0 podman[382509]: 2026-01-20 15:30:34.21111041 +0000 UTC m=+0.130419126 container start a6213710073e430d418962f31cb9b0f9b29216878e721eac47697580b898832a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_liskov, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Jan 20 15:30:34 compute-0 podman[382509]: 2026-01-20 15:30:34.214012639 +0000 UTC m=+0.133321365 container attach a6213710073e430d418962f31cb9b0f9b29216878e721eac47697580b898832a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_liskov, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 15:30:34 compute-0 ceph-mon[74360]: pgmap v3251: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 777 KiB/s rd, 1.8 MiB/s wr, 57 op/s
Jan 20 15:30:34 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:30:34 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:30:34 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:30:34 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 15:30:34 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:30:34 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 15:30:34 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 15:30:34 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:30:34 compute-0 nova_compute[250018]: 2026-01-20 15:30:34.479 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:30:34 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:30:34 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:30:34 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:30:34.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:30:34 compute-0 eloquent_liskov[382525]: --> passed data devices: 0 physical, 1 LVM
Jan 20 15:30:34 compute-0 eloquent_liskov[382525]: --> relative data size: 1.0
Jan 20 15:30:34 compute-0 eloquent_liskov[382525]: --> All data devices are unavailable
Jan 20 15:30:35 compute-0 systemd[1]: libpod-a6213710073e430d418962f31cb9b0f9b29216878e721eac47697580b898832a.scope: Deactivated successfully.
Jan 20 15:30:35 compute-0 podman[382509]: 2026-01-20 15:30:35.00609909 +0000 UTC m=+0.925407806 container died a6213710073e430d418962f31cb9b0f9b29216878e721eac47697580b898832a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_liskov, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 20 15:30:35 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3252: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 96 op/s
Jan 20 15:30:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-3acf3dd531b6e4279c0f6f922237320132b22237afd8a86bd28f2073222c0955-merged.mount: Deactivated successfully.
Jan 20 15:30:35 compute-0 podman[382509]: 2026-01-20 15:30:35.057583513 +0000 UTC m=+0.976892229 container remove a6213710073e430d418962f31cb9b0f9b29216878e721eac47697580b898832a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_liskov, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True)
Jan 20 15:30:35 compute-0 systemd[1]: libpod-conmon-a6213710073e430d418962f31cb9b0f9b29216878e721eac47697580b898832a.scope: Deactivated successfully.
Jan 20 15:30:35 compute-0 sudo[382403]: pam_unix(sudo:session): session closed for user root
Jan 20 15:30:35 compute-0 sudo[382556]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:30:35 compute-0 sudo[382556]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:30:35 compute-0 sudo[382556]: pam_unix(sudo:session): session closed for user root
Jan 20 15:30:35 compute-0 sudo[382581]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:30:35 compute-0 sudo[382581]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:30:35 compute-0 sudo[382581]: pam_unix(sudo:session): session closed for user root
Jan 20 15:30:35 compute-0 sudo[382606]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:30:35 compute-0 sudo[382606]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:30:35 compute-0 sudo[382606]: pam_unix(sudo:session): session closed for user root
Jan 20 15:30:35 compute-0 sudo[382631]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- lvm list --format json
Jan 20 15:30:35 compute-0 sudo[382631]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:30:35 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:30:35 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:30:35 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:30:35.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:30:35 compute-0 podman[382697]: 2026-01-20 15:30:35.574646929 +0000 UTC m=+0.038220394 container create 729135e621cdc63af1ae9b1d5da7b637d648a661ed4ac49c389e838076b0798e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_maxwell, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 15:30:35 compute-0 systemd[1]: Started libpod-conmon-729135e621cdc63af1ae9b1d5da7b637d648a661ed4ac49c389e838076b0798e.scope.
Jan 20 15:30:35 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:30:35 compute-0 podman[382697]: 2026-01-20 15:30:35.633557054 +0000 UTC m=+0.097130539 container init 729135e621cdc63af1ae9b1d5da7b637d648a661ed4ac49c389e838076b0798e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_maxwell, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 15:30:35 compute-0 podman[382697]: 2026-01-20 15:30:35.640238736 +0000 UTC m=+0.103812201 container start 729135e621cdc63af1ae9b1d5da7b637d648a661ed4ac49c389e838076b0798e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_maxwell, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 15:30:35 compute-0 naughty_maxwell[382713]: 167 167
Jan 20 15:30:35 compute-0 podman[382697]: 2026-01-20 15:30:35.643932997 +0000 UTC m=+0.107506492 container attach 729135e621cdc63af1ae9b1d5da7b637d648a661ed4ac49c389e838076b0798e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_maxwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 20 15:30:35 compute-0 systemd[1]: libpod-729135e621cdc63af1ae9b1d5da7b637d648a661ed4ac49c389e838076b0798e.scope: Deactivated successfully.
Jan 20 15:30:35 compute-0 podman[382697]: 2026-01-20 15:30:35.645226082 +0000 UTC m=+0.108799547 container died 729135e621cdc63af1ae9b1d5da7b637d648a661ed4ac49c389e838076b0798e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_maxwell, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 20 15:30:35 compute-0 podman[382697]: 2026-01-20 15:30:35.558344774 +0000 UTC m=+0.021918239 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:30:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-c1f18edbfae0552c0b1aea628af26da3a6dc8a1040b5b25f68eb58166053f0ba-merged.mount: Deactivated successfully.
Jan 20 15:30:35 compute-0 podman[382697]: 2026-01-20 15:30:35.677525342 +0000 UTC m=+0.141098807 container remove 729135e621cdc63af1ae9b1d5da7b637d648a661ed4ac49c389e838076b0798e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_maxwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 20 15:30:35 compute-0 systemd[1]: libpod-conmon-729135e621cdc63af1ae9b1d5da7b637d648a661ed4ac49c389e838076b0798e.scope: Deactivated successfully.
Jan 20 15:30:35 compute-0 podman[382735]: 2026-01-20 15:30:35.819215445 +0000 UTC m=+0.036039774 container create a15d38302a708191ce8d028b39d266d636dcfc5a843ba0c1bee92d24467e544c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_shamir, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 15:30:35 compute-0 systemd[1]: Started libpod-conmon-a15d38302a708191ce8d028b39d266d636dcfc5a843ba0c1bee92d24467e544c.scope.
Jan 20 15:30:35 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:30:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf0098c648bc477761c907680b196c4221bfe944074a27fb4948e4ca0300d2ce/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 15:30:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf0098c648bc477761c907680b196c4221bfe944074a27fb4948e4ca0300d2ce/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 15:30:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf0098c648bc477761c907680b196c4221bfe944074a27fb4948e4ca0300d2ce/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 15:30:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf0098c648bc477761c907680b196c4221bfe944074a27fb4948e4ca0300d2ce/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 15:30:35 compute-0 podman[382735]: 2026-01-20 15:30:35.880503035 +0000 UTC m=+0.097327384 container init a15d38302a708191ce8d028b39d266d636dcfc5a843ba0c1bee92d24467e544c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_shamir, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 20 15:30:35 compute-0 podman[382735]: 2026-01-20 15:30:35.888309719 +0000 UTC m=+0.105134048 container start a15d38302a708191ce8d028b39d266d636dcfc5a843ba0c1bee92d24467e544c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_shamir, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 15:30:35 compute-0 podman[382735]: 2026-01-20 15:30:35.891080264 +0000 UTC m=+0.107904593 container attach a15d38302a708191ce8d028b39d266d636dcfc5a843ba0c1bee92d24467e544c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_shamir, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:30:35 compute-0 podman[382735]: 2026-01-20 15:30:35.804441882 +0000 UTC m=+0.021266241 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:30:36 compute-0 nova_compute[250018]: 2026-01-20 15:30:36.585 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:30:36 compute-0 ceph-mon[74360]: pgmap v3252: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 96 op/s
Jan 20 15:30:36 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:30:36 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:30:36 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:30:36.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:30:36 compute-0 flamboyant_shamir[382752]: {
Jan 20 15:30:36 compute-0 flamboyant_shamir[382752]:     "0": [
Jan 20 15:30:36 compute-0 flamboyant_shamir[382752]:         {
Jan 20 15:30:36 compute-0 flamboyant_shamir[382752]:             "devices": [
Jan 20 15:30:36 compute-0 flamboyant_shamir[382752]:                 "/dev/loop3"
Jan 20 15:30:36 compute-0 flamboyant_shamir[382752]:             ],
Jan 20 15:30:36 compute-0 flamboyant_shamir[382752]:             "lv_name": "ceph_lv0",
Jan 20 15:30:36 compute-0 flamboyant_shamir[382752]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 15:30:36 compute-0 flamboyant_shamir[382752]:             "lv_size": "7511998464",
Jan 20 15:30:36 compute-0 flamboyant_shamir[382752]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e399cf45-e6b6-5393-99f1-75c601d3f188,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afc3740a-ccee-46f8-b343-22648cc89376,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 20 15:30:36 compute-0 flamboyant_shamir[382752]:             "lv_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 15:30:36 compute-0 flamboyant_shamir[382752]:             "name": "ceph_lv0",
Jan 20 15:30:36 compute-0 flamboyant_shamir[382752]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 15:30:36 compute-0 flamboyant_shamir[382752]:             "tags": {
Jan 20 15:30:36 compute-0 flamboyant_shamir[382752]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 15:30:36 compute-0 flamboyant_shamir[382752]:                 "ceph.block_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 15:30:36 compute-0 flamboyant_shamir[382752]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 15:30:36 compute-0 flamboyant_shamir[382752]:                 "ceph.cluster_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 15:30:36 compute-0 flamboyant_shamir[382752]:                 "ceph.cluster_name": "ceph",
Jan 20 15:30:36 compute-0 flamboyant_shamir[382752]:                 "ceph.crush_device_class": "",
Jan 20 15:30:36 compute-0 flamboyant_shamir[382752]:                 "ceph.encrypted": "0",
Jan 20 15:30:36 compute-0 flamboyant_shamir[382752]:                 "ceph.osd_fsid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 15:30:36 compute-0 flamboyant_shamir[382752]:                 "ceph.osd_id": "0",
Jan 20 15:30:36 compute-0 flamboyant_shamir[382752]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 15:30:36 compute-0 flamboyant_shamir[382752]:                 "ceph.type": "block",
Jan 20 15:30:36 compute-0 flamboyant_shamir[382752]:                 "ceph.vdo": "0"
Jan 20 15:30:36 compute-0 flamboyant_shamir[382752]:             },
Jan 20 15:30:36 compute-0 flamboyant_shamir[382752]:             "type": "block",
Jan 20 15:30:36 compute-0 flamboyant_shamir[382752]:             "vg_name": "ceph_vg0"
Jan 20 15:30:36 compute-0 flamboyant_shamir[382752]:         }
Jan 20 15:30:36 compute-0 flamboyant_shamir[382752]:     ]
Jan 20 15:30:36 compute-0 flamboyant_shamir[382752]: }
Jan 20 15:30:36 compute-0 systemd[1]: libpod-a15d38302a708191ce8d028b39d266d636dcfc5a843ba0c1bee92d24467e544c.scope: Deactivated successfully.
Jan 20 15:30:36 compute-0 podman[382735]: 2026-01-20 15:30:36.690466144 +0000 UTC m=+0.907290473 container died a15d38302a708191ce8d028b39d266d636dcfc5a843ba0c1bee92d24467e544c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_shamir, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 20 15:30:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-bf0098c648bc477761c907680b196c4221bfe944074a27fb4948e4ca0300d2ce-merged.mount: Deactivated successfully.
Jan 20 15:30:36 compute-0 podman[382735]: 2026-01-20 15:30:36.736653633 +0000 UTC m=+0.953477962 container remove a15d38302a708191ce8d028b39d266d636dcfc5a843ba0c1bee92d24467e544c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_shamir, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 20 15:30:36 compute-0 systemd[1]: libpod-conmon-a15d38302a708191ce8d028b39d266d636dcfc5a843ba0c1bee92d24467e544c.scope: Deactivated successfully.
Jan 20 15:30:36 compute-0 sudo[382631]: pam_unix(sudo:session): session closed for user root
Jan 20 15:30:36 compute-0 sudo[382776]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:30:36 compute-0 sudo[382776]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:30:36 compute-0 sudo[382776]: pam_unix(sudo:session): session closed for user root
Jan 20 15:30:36 compute-0 sudo[382801]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:30:36 compute-0 sudo[382801]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:30:36 compute-0 sudo[382801]: pam_unix(sudo:session): session closed for user root
Jan 20 15:30:36 compute-0 sudo[382826]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:30:36 compute-0 sudo[382826]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:30:36 compute-0 sudo[382826]: pam_unix(sudo:session): session closed for user root
Jan 20 15:30:36 compute-0 sudo[382851]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- raw list --format json
Jan 20 15:30:36 compute-0 sudo[382851]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:30:37 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3253: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 871 KiB/s wr, 75 op/s
Jan 20 15:30:37 compute-0 podman[382916]: 2026-01-20 15:30:37.270522706 +0000 UTC m=+0.032034634 container create 1c66e8168c866f4965d74ab4040d004112e27ffd96fc2e6f643005103288ecbf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_feynman, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 20 15:30:37 compute-0 systemd[1]: Started libpod-conmon-1c66e8168c866f4965d74ab4040d004112e27ffd96fc2e6f643005103288ecbf.scope.
Jan 20 15:30:37 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:30:37 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:30:37 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:30:37 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:30:37.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:30:37 compute-0 podman[382916]: 2026-01-20 15:30:37.325957397 +0000 UTC m=+0.087469345 container init 1c66e8168c866f4965d74ab4040d004112e27ffd96fc2e6f643005103288ecbf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_feynman, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 15:30:37 compute-0 podman[382916]: 2026-01-20 15:30:37.331721314 +0000 UTC m=+0.093233242 container start 1c66e8168c866f4965d74ab4040d004112e27ffd96fc2e6f643005103288ecbf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_feynman, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 20 15:30:37 compute-0 podman[382916]: 2026-01-20 15:30:37.334893001 +0000 UTC m=+0.096404939 container attach 1c66e8168c866f4965d74ab4040d004112e27ffd96fc2e6f643005103288ecbf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_feynman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 15:30:37 compute-0 ecstatic_feynman[382933]: 167 167
Jan 20 15:30:37 compute-0 systemd[1]: libpod-1c66e8168c866f4965d74ab4040d004112e27ffd96fc2e6f643005103288ecbf.scope: Deactivated successfully.
Jan 20 15:30:37 compute-0 podman[382916]: 2026-01-20 15:30:37.336942586 +0000 UTC m=+0.098454524 container died 1c66e8168c866f4965d74ab4040d004112e27ffd96fc2e6f643005103288ecbf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_feynman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 20 15:30:37 compute-0 podman[382916]: 2026-01-20 15:30:37.257080199 +0000 UTC m=+0.018592147 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:30:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-830f80db852f9a0f3e2ccf5eb08717651b48bc23c6376c72e80badc562dd6d63-merged.mount: Deactivated successfully.
Jan 20 15:30:37 compute-0 podman[382916]: 2026-01-20 15:30:37.373179544 +0000 UTC m=+0.134691472 container remove 1c66e8168c866f4965d74ab4040d004112e27ffd96fc2e6f643005103288ecbf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_feynman, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 20 15:30:37 compute-0 systemd[1]: libpod-conmon-1c66e8168c866f4965d74ab4040d004112e27ffd96fc2e6f643005103288ecbf.scope: Deactivated successfully.
Jan 20 15:30:37 compute-0 podman[382957]: 2026-01-20 15:30:37.523420969 +0000 UTC m=+0.042072417 container create 30eea266a5c546054b9e106f8828fcd0f45f8ef9997c1f2fc134ec360df77295 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_shaw, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 20 15:30:37 compute-0 systemd[1]: Started libpod-conmon-30eea266a5c546054b9e106f8828fcd0f45f8ef9997c1f2fc134ec360df77295.scope.
Jan 20 15:30:37 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:30:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4aef427debb955be4d22b01a5e9165d5620f8e947761436486bd9c4870314e2a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 15:30:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4aef427debb955be4d22b01a5e9165d5620f8e947761436486bd9c4870314e2a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 15:30:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4aef427debb955be4d22b01a5e9165d5620f8e947761436486bd9c4870314e2a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 15:30:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4aef427debb955be4d22b01a5e9165d5620f8e947761436486bd9c4870314e2a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 15:30:37 compute-0 podman[382957]: 2026-01-20 15:30:37.599531785 +0000 UTC m=+0.118183243 container init 30eea266a5c546054b9e106f8828fcd0f45f8ef9997c1f2fc134ec360df77295 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_shaw, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True)
Jan 20 15:30:37 compute-0 podman[382957]: 2026-01-20 15:30:37.507007513 +0000 UTC m=+0.025658981 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:30:37 compute-0 podman[382957]: 2026-01-20 15:30:37.606087243 +0000 UTC m=+0.124738691 container start 30eea266a5c546054b9e106f8828fcd0f45f8ef9997c1f2fc134ec360df77295 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_shaw, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 20 15:30:37 compute-0 podman[382957]: 2026-01-20 15:30:37.608761276 +0000 UTC m=+0.127412724 container attach 30eea266a5c546054b9e106f8828fcd0f45f8ef9997c1f2fc134ec360df77295 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_shaw, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 20 15:30:38 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:30:38 compute-0 practical_shaw[382973]: {
Jan 20 15:30:38 compute-0 practical_shaw[382973]:     "afc3740a-ccee-46f8-b343-22648cc89376": {
Jan 20 15:30:38 compute-0 practical_shaw[382973]:         "ceph_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 15:30:38 compute-0 practical_shaw[382973]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 20 15:30:38 compute-0 practical_shaw[382973]:         "osd_id": 0,
Jan 20 15:30:38 compute-0 practical_shaw[382973]:         "osd_uuid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 15:30:38 compute-0 practical_shaw[382973]:         "type": "bluestore"
Jan 20 15:30:38 compute-0 practical_shaw[382973]:     }
Jan 20 15:30:38 compute-0 practical_shaw[382973]: }
Jan 20 15:30:38 compute-0 systemd[1]: libpod-30eea266a5c546054b9e106f8828fcd0f45f8ef9997c1f2fc134ec360df77295.scope: Deactivated successfully.
Jan 20 15:30:38 compute-0 podman[382957]: 2026-01-20 15:30:38.426840216 +0000 UTC m=+0.945491664 container died 30eea266a5c546054b9e106f8828fcd0f45f8ef9997c1f2fc134ec360df77295 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_shaw, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 15:30:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-4aef427debb955be4d22b01a5e9165d5620f8e947761436486bd9c4870314e2a-merged.mount: Deactivated successfully.
Jan 20 15:30:38 compute-0 podman[382957]: 2026-01-20 15:30:38.473343784 +0000 UTC m=+0.991995232 container remove 30eea266a5c546054b9e106f8828fcd0f45f8ef9997c1f2fc134ec360df77295 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_shaw, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 15:30:38 compute-0 systemd[1]: libpod-conmon-30eea266a5c546054b9e106f8828fcd0f45f8ef9997c1f2fc134ec360df77295.scope: Deactivated successfully.
Jan 20 15:30:38 compute-0 sudo[382851]: pam_unix(sudo:session): session closed for user root
Jan 20 15:30:38 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 15:30:38 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:30:38 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 15:30:38 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:30:38 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 8928f2d5-1a25-4443-bce7-07e11aef6d14 does not exist
Jan 20 15:30:38 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 01deaa6c-4433-4a7e-a8a4-5d04c62c8d15 does not exist
Jan 20 15:30:38 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 4ecdfcf9-9bb9-4a05-bbfa-de3e2b34bd77 does not exist
Jan 20 15:30:38 compute-0 sudo[383006]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:30:38 compute-0 sudo[383006]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:30:38 compute-0 sudo[383006]: pam_unix(sudo:session): session closed for user root
Jan 20 15:30:38 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:30:38 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:30:38 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:30:38.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:30:38 compute-0 sudo[383031]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 15:30:38 compute-0 sudo[383031]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:30:38 compute-0 sudo[383031]: pam_unix(sudo:session): session closed for user root
Jan 20 15:30:38 compute-0 ceph-mon[74360]: pgmap v3253: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 871 KiB/s wr, 75 op/s
Jan 20 15:30:38 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:30:38 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:30:39 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3254: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 20 15:30:39 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:30:39 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:30:39 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:30:39.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:30:39 compute-0 nova_compute[250018]: 2026-01-20 15:30:39.482 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:30:40 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:30:40 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:30:40 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:30:40.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:30:40 compute-0 ceph-mon[74360]: pgmap v3254: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 20 15:30:41 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3255: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Jan 20 15:30:41 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:30:41 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:30:41 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:30:41.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:30:41 compute-0 nova_compute[250018]: 2026-01-20 15:30:41.588 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:30:42 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:30:42 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:30:42 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:30:42.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:30:42 compute-0 ceph-mon[74360]: pgmap v3255: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Jan 20 15:30:43 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3256: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 85 B/s wr, 43 op/s
Jan 20 15:30:43 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:30:43 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:30:43 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:30:43 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:30:43.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:30:44 compute-0 nova_compute[250018]: 2026-01-20 15:30:44.486 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:30:44 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:30:44 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:30:44 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:30:44.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:30:44 compute-0 ceph-mon[74360]: pgmap v3256: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 85 B/s wr, 43 op/s
Jan 20 15:30:44 compute-0 ceph-mon[74360]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #165. Immutable memtables: 0.
Jan 20 15:30:44 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:30:44.734751) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 15:30:44 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:856] [default] [JOB 101] Flushing memtable with next log file: 165
Jan 20 15:30:44 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768923044734846, "job": 101, "event": "flush_started", "num_memtables": 1, "num_entries": 977, "num_deletes": 251, "total_data_size": 1558206, "memory_usage": 1581056, "flush_reason": "Manual Compaction"}
Jan 20 15:30:44 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:885] [default] [JOB 101] Level-0 flush table #166: started
Jan 20 15:30:44 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768923044748543, "cf_name": "default", "job": 101, "event": "table_file_creation", "file_number": 166, "file_size": 1521192, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 72367, "largest_seqno": 73343, "table_properties": {"data_size": 1516323, "index_size": 2456, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1349, "raw_key_size": 10746, "raw_average_key_size": 20, "raw_value_size": 1506527, "raw_average_value_size": 2805, "num_data_blocks": 107, "num_entries": 537, "num_filter_entries": 537, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768922964, "oldest_key_time": 1768922964, "file_creation_time": 1768923044, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1688fb6f-f397-4aac-a0e8-874ff91d97a7", "db_session_id": "5V3N2TVXYZBCXP55EZHK", "orig_file_number": 166, "seqno_to_time_mapping": "N/A"}}
Jan 20 15:30:44 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 101] Flush lasted 13868 microseconds, and 8131 cpu microseconds.
Jan 20 15:30:44 compute-0 ceph-mon[74360]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 15:30:44 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:30:44.748621) [db/flush_job.cc:967] [default] [JOB 101] Level-0 flush table #166: 1521192 bytes OK
Jan 20 15:30:44 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:30:44.748646) [db/memtable_list.cc:519] [default] Level-0 commit table #166 started
Jan 20 15:30:44 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:30:44.750739) [db/memtable_list.cc:722] [default] Level-0 commit table #166: memtable #1 done
Jan 20 15:30:44 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:30:44.750768) EVENT_LOG_v1 {"time_micros": 1768923044750759, "job": 101, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 15:30:44 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:30:44.750791) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 15:30:44 compute-0 ceph-mon[74360]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 101] Try to delete WAL files size 1553636, prev total WAL file size 1553636, number of live WAL files 2.
Jan 20 15:30:44 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000162.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:30:44 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:30:44.751715) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036373737' seq:72057594037927935, type:22 .. '7061786F730037303239' seq:0, type:0; will stop at (end)
Jan 20 15:30:44 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 102] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 15:30:44 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 101 Base level 0, inputs: [166(1485KB)], [164(11MB)]
Jan 20 15:30:44 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768923044751766, "job": 102, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [166], "files_L6": [164], "score": -1, "input_data_size": 13201246, "oldest_snapshot_seqno": -1}
Jan 20 15:30:44 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 102] Generated table #167: 10045 keys, 11280661 bytes, temperature: kUnknown
Jan 20 15:30:44 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768923044861642, "cf_name": "default", "job": 102, "event": "table_file_creation", "file_number": 167, "file_size": 11280661, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11218114, "index_size": 36322, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25157, "raw_key_size": 264788, "raw_average_key_size": 26, "raw_value_size": 11044419, "raw_average_value_size": 1099, "num_data_blocks": 1377, "num_entries": 10045, "num_filter_entries": 10045, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768917305, "oldest_key_time": 0, "file_creation_time": 1768923044, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1688fb6f-f397-4aac-a0e8-874ff91d97a7", "db_session_id": "5V3N2TVXYZBCXP55EZHK", "orig_file_number": 167, "seqno_to_time_mapping": "N/A"}}
Jan 20 15:30:44 compute-0 ceph-mon[74360]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 15:30:44 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:30:44.862192) [db/compaction/compaction_job.cc:1663] [default] [JOB 102] Compacted 1@0 + 1@6 files to L6 => 11280661 bytes
Jan 20 15:30:44 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:30:44.863666) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 119.8 rd, 102.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 11.1 +0.0 blob) out(10.8 +0.0 blob), read-write-amplify(16.1) write-amplify(7.4) OK, records in: 10564, records dropped: 519 output_compression: NoCompression
Jan 20 15:30:44 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:30:44.863687) EVENT_LOG_v1 {"time_micros": 1768923044863678, "job": 102, "event": "compaction_finished", "compaction_time_micros": 110215, "compaction_time_cpu_micros": 51378, "output_level": 6, "num_output_files": 1, "total_output_size": 11280661, "num_input_records": 10564, "num_output_records": 10045, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 15:30:44 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000166.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:30:44 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768923044864305, "job": 102, "event": "table_file_deletion", "file_number": 166}
Jan 20 15:30:44 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000164.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:30:44 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768923044867349, "job": 102, "event": "table_file_deletion", "file_number": 164}
Jan 20 15:30:44 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:30:44.751629) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:30:44 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:30:44.867465) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:30:44 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:30:44.867473) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:30:44 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:30:44.867476) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:30:44 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:30:44.867479) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:30:44 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:30:44.867482) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:30:45 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3257: 321 pgs: 321 active+clean; 191 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 1.4 MiB/s wr, 93 op/s
Jan 20 15:30:45 compute-0 sshd-session[383058]: Invalid user user from 134.122.57.138 port 38566
Jan 20 15:30:45 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:30:45 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:30:45 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:30:45.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:30:45 compute-0 sshd-session[383058]: Connection closed by invalid user user 134.122.57.138 port 38566 [preauth]
Jan 20 15:30:45 compute-0 nova_compute[250018]: 2026-01-20 15:30:45.892 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:30:46 compute-0 nova_compute[250018]: 2026-01-20 15:30:46.590 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:30:46 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:30:46 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:30:46 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:30:46.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:30:46 compute-0 ceph-mon[74360]: pgmap v3257: 321 pgs: 321 active+clean; 191 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 1.4 MiB/s wr, 93 op/s
Jan 20 15:30:47 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3258: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 461 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Jan 20 15:30:47 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:30:47 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:30:47 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:30:47.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:30:47 compute-0 sudo[383062]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:30:47 compute-0 sudo[383062]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:30:47 compute-0 sudo[383062]: pam_unix(sudo:session): session closed for user root
Jan 20 15:30:47 compute-0 sudo[383087]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:30:47 compute-0 sudo[383087]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:30:47 compute-0 sudo[383087]: pam_unix(sudo:session): session closed for user root
Jan 20 15:30:48 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:30:48 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:30:48 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:30:48 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:30:48.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:30:48 compute-0 ceph-mon[74360]: pgmap v3258: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 461 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Jan 20 15:30:49 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3259: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Jan 20 15:30:49 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:30:49 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:30:49 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:30:49.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:30:49 compute-0 nova_compute[250018]: 2026-01-20 15:30:49.490 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:30:50 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:30:50 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:30:50 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:30:50.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:30:50 compute-0 ceph-mon[74360]: pgmap v3259: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Jan 20 15:30:50 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:30:50.869 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=76, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=75) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:30:50 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:30:50.871 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 15:30:50 compute-0 nova_compute[250018]: 2026-01-20 15:30:50.883 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:30:51 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3260: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 331 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 20 15:30:51 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:30:51 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:30:51 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:30:51.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:30:51 compute-0 nova_compute[250018]: 2026-01-20 15:30:51.591 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:30:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:30:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:30:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:30:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:30:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:30:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:30:52 compute-0 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 20 15:30:52 compute-0 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 20 15:30:52 compute-0 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 20 15:30:52 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:30:52 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:30:52 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:30:52.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:30:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Optimize plan auto_2026-01-20_15:30:52
Jan 20 15:30:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 15:30:52 compute-0 ceph-mgr[74653]: [balancer INFO root] do_upmap
Jan 20 15:30:52 compute-0 ceph-mgr[74653]: [balancer INFO root] pools ['images', 'default.rgw.control', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.meta', 'backups', 'volumes', 'vms', 'cephfs.cephfs.data', 'default.rgw.log', '.mgr']
Jan 20 15:30:52 compute-0 ceph-mgr[74653]: [balancer INFO root] prepared 0/10 changes
Jan 20 15:30:52 compute-0 ceph-mon[74360]: pgmap v3260: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 331 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 20 15:30:53 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3261: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 329 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Jan 20 15:30:53 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:30:53 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:30:53 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:30:53 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:30:53.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:30:54 compute-0 nova_compute[250018]: 2026-01-20 15:30:54.531 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:30:54 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:30:54 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:30:54 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:30:54.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:30:54 compute-0 ceph-mon[74360]: pgmap v3261: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 329 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Jan 20 15:30:55 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3262: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 329 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Jan 20 15:30:55 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:30:55 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:30:55 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:30:55.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:30:56 compute-0 nova_compute[250018]: 2026-01-20 15:30:56.593 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:30:56 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:30:56 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:30:56 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:30:56.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:30:56 compute-0 ceph-mon[74360]: pgmap v3262: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 329 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Jan 20 15:30:57 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3263: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 85 KiB/s rd, 776 KiB/s wr, 14 op/s
Jan 20 15:30:57 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:30:57 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:30:57 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:30:57.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:30:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 15:30:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 15:30:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 15:30:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 15:30:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 15:30:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 15:30:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 15:30:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 15:30:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 15:30:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 15:30:58 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:30:58 compute-0 podman[383120]: 2026-01-20 15:30:58.457823775 +0000 UTC m=+0.052128322 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202)
Jan 20 15:30:58 compute-0 podman[383119]: 2026-01-20 15:30:58.494325019 +0000 UTC m=+0.090852117 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Jan 20 15:30:58 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:30:58 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:30:58 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:30:58.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:30:58 compute-0 ceph-mon[74360]: pgmap v3263: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 85 KiB/s rd, 776 KiB/s wr, 14 op/s
Jan 20 15:30:59 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3264: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 5.7 KiB/s rd, 13 KiB/s wr, 0 op/s
Jan 20 15:30:59 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:30:59 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:30:59 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:30:59.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:30:59 compute-0 nova_compute[250018]: 2026-01-20 15:30:59.533 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:31:00 compute-0 nova_compute[250018]: 2026-01-20 15:31:00.027 250022 DEBUG oslo_concurrency.lockutils [None req-c8ab9db1-9b53-4506-99d8-af52b7a2a7ca 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Acquiring lock "e974e40a-a93c-491e-b30e-6bb589348dc8" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:31:00 compute-0 nova_compute[250018]: 2026-01-20 15:31:00.027 250022 DEBUG oslo_concurrency.lockutils [None req-c8ab9db1-9b53-4506-99d8-af52b7a2a7ca 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "e974e40a-a93c-491e-b30e-6bb589348dc8" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:31:00 compute-0 nova_compute[250018]: 2026-01-20 15:31:00.044 250022 DEBUG nova.compute.manager [None req-c8ab9db1-9b53-4506-99d8-af52b7a2a7ca 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: e974e40a-a93c-491e-b30e-6bb589348dc8] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 15:31:00 compute-0 nova_compute[250018]: 2026-01-20 15:31:00.116 250022 DEBUG oslo_concurrency.lockutils [None req-c8ab9db1-9b53-4506-99d8-af52b7a2a7ca 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:31:00 compute-0 nova_compute[250018]: 2026-01-20 15:31:00.117 250022 DEBUG oslo_concurrency.lockutils [None req-c8ab9db1-9b53-4506-99d8-af52b7a2a7ca 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:31:00 compute-0 nova_compute[250018]: 2026-01-20 15:31:00.125 250022 DEBUG nova.virt.hardware [None req-c8ab9db1-9b53-4506-99d8-af52b7a2a7ca 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 15:31:00 compute-0 nova_compute[250018]: 2026-01-20 15:31:00.125 250022 INFO nova.compute.claims [None req-c8ab9db1-9b53-4506-99d8-af52b7a2a7ca 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: e974e40a-a93c-491e-b30e-6bb589348dc8] Claim successful on node compute-0.ctlplane.example.com
Jan 20 15:31:00 compute-0 nova_compute[250018]: 2026-01-20 15:31:00.239 250022 DEBUG oslo_concurrency.processutils [None req-c8ab9db1-9b53-4506-99d8-af52b7a2a7ca 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:31:00 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:31:00 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/450668054' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:31:00 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:31:00 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:31:00 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:31:00.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:31:00 compute-0 nova_compute[250018]: 2026-01-20 15:31:00.686 250022 DEBUG oslo_concurrency.processutils [None req-c8ab9db1-9b53-4506-99d8-af52b7a2a7ca 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:31:00 compute-0 nova_compute[250018]: 2026-01-20 15:31:00.692 250022 DEBUG nova.compute.provider_tree [None req-c8ab9db1-9b53-4506-99d8-af52b7a2a7ca 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:31:00 compute-0 nova_compute[250018]: 2026-01-20 15:31:00.712 250022 DEBUG nova.scheduler.client.report [None req-c8ab9db1-9b53-4506-99d8-af52b7a2a7ca 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:31:00 compute-0 nova_compute[250018]: 2026-01-20 15:31:00.742 250022 DEBUG oslo_concurrency.lockutils [None req-c8ab9db1-9b53-4506-99d8-af52b7a2a7ca 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.625s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:31:00 compute-0 nova_compute[250018]: 2026-01-20 15:31:00.742 250022 DEBUG nova.compute.manager [None req-c8ab9db1-9b53-4506-99d8-af52b7a2a7ca 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: e974e40a-a93c-491e-b30e-6bb589348dc8] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 15:31:00 compute-0 nova_compute[250018]: 2026-01-20 15:31:00.790 250022 DEBUG nova.compute.manager [None req-c8ab9db1-9b53-4506-99d8-af52b7a2a7ca 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: e974e40a-a93c-491e-b30e-6bb589348dc8] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 20 15:31:00 compute-0 nova_compute[250018]: 2026-01-20 15:31:00.791 250022 DEBUG nova.network.neutron [None req-c8ab9db1-9b53-4506-99d8-af52b7a2a7ca 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: e974e40a-a93c-491e-b30e-6bb589348dc8] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 15:31:00 compute-0 nova_compute[250018]: 2026-01-20 15:31:00.809 250022 INFO nova.virt.libvirt.driver [None req-c8ab9db1-9b53-4506-99d8-af52b7a2a7ca 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: e974e40a-a93c-491e-b30e-6bb589348dc8] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 15:31:00 compute-0 nova_compute[250018]: 2026-01-20 15:31:00.831 250022 DEBUG nova.compute.manager [None req-c8ab9db1-9b53-4506-99d8-af52b7a2a7ca 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: e974e40a-a93c-491e-b30e-6bb589348dc8] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 15:31:00 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:31:00.873 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=367c1a2c-b16a-4828-ab5a-626bb50023b4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '76'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:31:00 compute-0 ceph-mon[74360]: pgmap v3264: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 5.7 KiB/s rd, 13 KiB/s wr, 0 op/s
Jan 20 15:31:00 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/450668054' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:31:00 compute-0 nova_compute[250018]: 2026-01-20 15:31:00.931 250022 DEBUG nova.compute.manager [None req-c8ab9db1-9b53-4506-99d8-af52b7a2a7ca 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: e974e40a-a93c-491e-b30e-6bb589348dc8] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 15:31:00 compute-0 nova_compute[250018]: 2026-01-20 15:31:00.932 250022 DEBUG nova.virt.libvirt.driver [None req-c8ab9db1-9b53-4506-99d8-af52b7a2a7ca 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: e974e40a-a93c-491e-b30e-6bb589348dc8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 15:31:00 compute-0 nova_compute[250018]: 2026-01-20 15:31:00.933 250022 INFO nova.virt.libvirt.driver [None req-c8ab9db1-9b53-4506-99d8-af52b7a2a7ca 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: e974e40a-a93c-491e-b30e-6bb589348dc8] Creating image(s)
Jan 20 15:31:00 compute-0 nova_compute[250018]: 2026-01-20 15:31:00.965 250022 DEBUG nova.storage.rbd_utils [None req-c8ab9db1-9b53-4506-99d8-af52b7a2a7ca 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] rbd image e974e40a-a93c-491e-b30e-6bb589348dc8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:31:00 compute-0 nova_compute[250018]: 2026-01-20 15:31:00.991 250022 DEBUG nova.storage.rbd_utils [None req-c8ab9db1-9b53-4506-99d8-af52b7a2a7ca 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] rbd image e974e40a-a93c-491e-b30e-6bb589348dc8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:31:01 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3265: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 5.7 KiB/s rd, 16 KiB/s wr, 1 op/s
Jan 20 15:31:01 compute-0 nova_compute[250018]: 2026-01-20 15:31:01.042 250022 DEBUG nova.storage.rbd_utils [None req-c8ab9db1-9b53-4506-99d8-af52b7a2a7ca 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] rbd image e974e40a-a93c-491e-b30e-6bb589348dc8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:31:01 compute-0 nova_compute[250018]: 2026-01-20 15:31:01.046 250022 DEBUG oslo_concurrency.processutils [None req-c8ab9db1-9b53-4506-99d8-af52b7a2a7ca 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:31:01 compute-0 nova_compute[250018]: 2026-01-20 15:31:01.110 250022 DEBUG oslo_concurrency.processutils [None req-c8ab9db1-9b53-4506-99d8-af52b7a2a7ca 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:31:01 compute-0 nova_compute[250018]: 2026-01-20 15:31:01.111 250022 DEBUG oslo_concurrency.lockutils [None req-c8ab9db1-9b53-4506-99d8-af52b7a2a7ca 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Acquiring lock "82d5c1918fd7c974214c7a48c1793a7a82560462" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:31:01 compute-0 nova_compute[250018]: 2026-01-20 15:31:01.112 250022 DEBUG oslo_concurrency.lockutils [None req-c8ab9db1-9b53-4506-99d8-af52b7a2a7ca 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:31:01 compute-0 nova_compute[250018]: 2026-01-20 15:31:01.112 250022 DEBUG oslo_concurrency.lockutils [None req-c8ab9db1-9b53-4506-99d8-af52b7a2a7ca 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:31:01 compute-0 nova_compute[250018]: 2026-01-20 15:31:01.147 250022 DEBUG nova.storage.rbd_utils [None req-c8ab9db1-9b53-4506-99d8-af52b7a2a7ca 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] rbd image e974e40a-a93c-491e-b30e-6bb589348dc8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:31:01 compute-0 nova_compute[250018]: 2026-01-20 15:31:01.152 250022 DEBUG oslo_concurrency.processutils [None req-c8ab9db1-9b53-4506-99d8-af52b7a2a7ca 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 e974e40a-a93c-491e-b30e-6bb589348dc8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:31:01 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:31:01 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:31:01 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:31:01.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:31:01 compute-0 nova_compute[250018]: 2026-01-20 15:31:01.549 250022 DEBUG nova.policy [None req-c8ab9db1-9b53-4506-99d8-af52b7a2a7ca 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5338aa65dc0e4326a66ce79053787f14', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3168f57421fb49bfb94b85daedd1fe7d', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 20 15:31:01 compute-0 nova_compute[250018]: 2026-01-20 15:31:01.723 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:31:01 compute-0 ceph-mon[74360]: pgmap v3265: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 5.7 KiB/s rd, 16 KiB/s wr, 1 op/s
Jan 20 15:31:01 compute-0 nova_compute[250018]: 2026-01-20 15:31:01.935 250022 DEBUG oslo_concurrency.processutils [None req-c8ab9db1-9b53-4506-99d8-af52b7a2a7ca 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 e974e40a-a93c-491e-b30e-6bb589348dc8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.783s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:31:02 compute-0 nova_compute[250018]: 2026-01-20 15:31:02.000 250022 DEBUG nova.storage.rbd_utils [None req-c8ab9db1-9b53-4506-99d8-af52b7a2a7ca 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] resizing rbd image e974e40a-a93c-491e-b30e-6bb589348dc8_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 20 15:31:02 compute-0 nova_compute[250018]: 2026-01-20 15:31:02.095 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:31:02 compute-0 nova_compute[250018]: 2026-01-20 15:31:02.101 250022 DEBUG nova.objects.instance [None req-c8ab9db1-9b53-4506-99d8-af52b7a2a7ca 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lazy-loading 'migration_context' on Instance uuid e974e40a-a93c-491e-b30e-6bb589348dc8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:31:02 compute-0 nova_compute[250018]: 2026-01-20 15:31:02.121 250022 DEBUG nova.virt.libvirt.driver [None req-c8ab9db1-9b53-4506-99d8-af52b7a2a7ca 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: e974e40a-a93c-491e-b30e-6bb589348dc8] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 15:31:02 compute-0 nova_compute[250018]: 2026-01-20 15:31:02.121 250022 DEBUG nova.virt.libvirt.driver [None req-c8ab9db1-9b53-4506-99d8-af52b7a2a7ca 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: e974e40a-a93c-491e-b30e-6bb589348dc8] Ensure instance console log exists: /var/lib/nova/instances/e974e40a-a93c-491e-b30e-6bb589348dc8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 15:31:02 compute-0 nova_compute[250018]: 2026-01-20 15:31:02.122 250022 DEBUG oslo_concurrency.lockutils [None req-c8ab9db1-9b53-4506-99d8-af52b7a2a7ca 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:31:02 compute-0 nova_compute[250018]: 2026-01-20 15:31:02.122 250022 DEBUG oslo_concurrency.lockutils [None req-c8ab9db1-9b53-4506-99d8-af52b7a2a7ca 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:31:02 compute-0 nova_compute[250018]: 2026-01-20 15:31:02.123 250022 DEBUG oslo_concurrency.lockutils [None req-c8ab9db1-9b53-4506-99d8-af52b7a2a7ca 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:31:02 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:31:02 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:31:02 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:31:02.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:31:03 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3266: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.3 KiB/s wr, 0 op/s
Jan 20 15:31:03 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:31:03 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:31:03 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:31:03 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:31:03.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:31:03 compute-0 nova_compute[250018]: 2026-01-20 15:31:03.576 250022 DEBUG nova.network.neutron [None req-c8ab9db1-9b53-4506-99d8-af52b7a2a7ca 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: e974e40a-a93c-491e-b30e-6bb589348dc8] Successfully created port: a244be5a-cd0d-47e6-bb18-4fc628b6e913 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 20 15:31:04 compute-0 ceph-mon[74360]: pgmap v3266: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.3 KiB/s wr, 0 op/s
Jan 20 15:31:04 compute-0 nova_compute[250018]: 2026-01-20 15:31:04.537 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:31:04 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:31:04 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:31:04 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:31:04.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:31:04 compute-0 nova_compute[250018]: 2026-01-20 15:31:04.712 250022 DEBUG nova.network.neutron [None req-c8ab9db1-9b53-4506-99d8-af52b7a2a7ca 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: e974e40a-a93c-491e-b30e-6bb589348dc8] Successfully updated port: a244be5a-cd0d-47e6-bb18-4fc628b6e913 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 20 15:31:04 compute-0 nova_compute[250018]: 2026-01-20 15:31:04.741 250022 DEBUG oslo_concurrency.lockutils [None req-c8ab9db1-9b53-4506-99d8-af52b7a2a7ca 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Acquiring lock "refresh_cache-e974e40a-a93c-491e-b30e-6bb589348dc8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:31:04 compute-0 nova_compute[250018]: 2026-01-20 15:31:04.741 250022 DEBUG oslo_concurrency.lockutils [None req-c8ab9db1-9b53-4506-99d8-af52b7a2a7ca 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Acquired lock "refresh_cache-e974e40a-a93c-491e-b30e-6bb589348dc8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:31:04 compute-0 nova_compute[250018]: 2026-01-20 15:31:04.741 250022 DEBUG nova.network.neutron [None req-c8ab9db1-9b53-4506-99d8-af52b7a2a7ca 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: e974e40a-a93c-491e-b30e-6bb589348dc8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 15:31:04 compute-0 nova_compute[250018]: 2026-01-20 15:31:04.847 250022 DEBUG nova.compute.manager [req-99e3c105-9574-4a7e-af40-3836ae6e1ed1 req-1890e78d-e427-495b-99cc-652d5a1f27b6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: e974e40a-a93c-491e-b30e-6bb589348dc8] Received event network-changed-a244be5a-cd0d-47e6-bb18-4fc628b6e913 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:31:04 compute-0 nova_compute[250018]: 2026-01-20 15:31:04.847 250022 DEBUG nova.compute.manager [req-99e3c105-9574-4a7e-af40-3836ae6e1ed1 req-1890e78d-e427-495b-99cc-652d5a1f27b6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: e974e40a-a93c-491e-b30e-6bb589348dc8] Refreshing instance network info cache due to event network-changed-a244be5a-cd0d-47e6-bb18-4fc628b6e913. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 15:31:04 compute-0 nova_compute[250018]: 2026-01-20 15:31:04.849 250022 DEBUG oslo_concurrency.lockutils [req-99e3c105-9574-4a7e-af40-3836ae6e1ed1 req-1890e78d-e427-495b-99cc-652d5a1f27b6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-e974e40a-a93c-491e-b30e-6bb589348dc8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:31:04 compute-0 nova_compute[250018]: 2026-01-20 15:31:04.940 250022 DEBUG nova.network.neutron [None req-c8ab9db1-9b53-4506-99d8-af52b7a2a7ca 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: e974e40a-a93c-491e-b30e-6bb589348dc8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 15:31:05 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3267: 321 pgs: 321 active+clean; 228 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 992 KiB/s wr, 25 op/s
Jan 20 15:31:05 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:31:05 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:31:05 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:31:05.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:31:06 compute-0 nova_compute[250018]: 2026-01-20 15:31:06.000 250022 DEBUG nova.network.neutron [None req-c8ab9db1-9b53-4506-99d8-af52b7a2a7ca 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: e974e40a-a93c-491e-b30e-6bb589348dc8] Updating instance_info_cache with network_info: [{"id": "a244be5a-cd0d-47e6-bb18-4fc628b6e913", "address": "fa:16:3e:c9:e8:d8", "network": {"id": "0228362f-0ced-4cac-bb89-96bd472df47f", "bridge": "br-int", "label": "tempest-network-smoke--197661202", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa244be5a-cd", "ovs_interfaceid": "a244be5a-cd0d-47e6-bb18-4fc628b6e913", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:31:06 compute-0 nova_compute[250018]: 2026-01-20 15:31:06.026 250022 DEBUG oslo_concurrency.lockutils [None req-c8ab9db1-9b53-4506-99d8-af52b7a2a7ca 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Releasing lock "refresh_cache-e974e40a-a93c-491e-b30e-6bb589348dc8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:31:06 compute-0 nova_compute[250018]: 2026-01-20 15:31:06.027 250022 DEBUG nova.compute.manager [None req-c8ab9db1-9b53-4506-99d8-af52b7a2a7ca 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: e974e40a-a93c-491e-b30e-6bb589348dc8] Instance network_info: |[{"id": "a244be5a-cd0d-47e6-bb18-4fc628b6e913", "address": "fa:16:3e:c9:e8:d8", "network": {"id": "0228362f-0ced-4cac-bb89-96bd472df47f", "bridge": "br-int", "label": "tempest-network-smoke--197661202", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa244be5a-cd", "ovs_interfaceid": "a244be5a-cd0d-47e6-bb18-4fc628b6e913", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 20 15:31:06 compute-0 nova_compute[250018]: 2026-01-20 15:31:06.028 250022 DEBUG oslo_concurrency.lockutils [req-99e3c105-9574-4a7e-af40-3836ae6e1ed1 req-1890e78d-e427-495b-99cc-652d5a1f27b6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-e974e40a-a93c-491e-b30e-6bb589348dc8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:31:06 compute-0 nova_compute[250018]: 2026-01-20 15:31:06.029 250022 DEBUG nova.network.neutron [req-99e3c105-9574-4a7e-af40-3836ae6e1ed1 req-1890e78d-e427-495b-99cc-652d5a1f27b6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: e974e40a-a93c-491e-b30e-6bb589348dc8] Refreshing network info cache for port a244be5a-cd0d-47e6-bb18-4fc628b6e913 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 15:31:06 compute-0 nova_compute[250018]: 2026-01-20 15:31:06.033 250022 DEBUG nova.virt.libvirt.driver [None req-c8ab9db1-9b53-4506-99d8-af52b7a2a7ca 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: e974e40a-a93c-491e-b30e-6bb589348dc8] Start _get_guest_xml network_info=[{"id": "a244be5a-cd0d-47e6-bb18-4fc628b6e913", "address": "fa:16:3e:c9:e8:d8", "network": {"id": "0228362f-0ced-4cac-bb89-96bd472df47f", "bridge": "br-int", "label": "tempest-network-smoke--197661202", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa244be5a-cd", "ovs_interfaceid": "a244be5a-cd0d-47e6-bb18-4fc628b6e913", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encrypted': False, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'guest_format': None, 'size': 0, 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 15:31:06 compute-0 nova_compute[250018]: 2026-01-20 15:31:06.039 250022 WARNING nova.virt.libvirt.driver [None req-c8ab9db1-9b53-4506-99d8-af52b7a2a7ca 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:31:06 compute-0 nova_compute[250018]: 2026-01-20 15:31:06.046 250022 DEBUG nova.virt.libvirt.host [None req-c8ab9db1-9b53-4506-99d8-af52b7a2a7ca 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 15:31:06 compute-0 nova_compute[250018]: 2026-01-20 15:31:06.047 250022 DEBUG nova.virt.libvirt.host [None req-c8ab9db1-9b53-4506-99d8-af52b7a2a7ca 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 15:31:06 compute-0 nova_compute[250018]: 2026-01-20 15:31:06.051 250022 DEBUG nova.virt.libvirt.host [None req-c8ab9db1-9b53-4506-99d8-af52b7a2a7ca 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 15:31:06 compute-0 nova_compute[250018]: 2026-01-20 15:31:06.051 250022 DEBUG nova.virt.libvirt.host [None req-c8ab9db1-9b53-4506-99d8-af52b7a2a7ca 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 15:31:06 compute-0 nova_compute[250018]: 2026-01-20 15:31:06.052 250022 DEBUG nova.virt.libvirt.driver [None req-c8ab9db1-9b53-4506-99d8-af52b7a2a7ca 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 15:31:06 compute-0 nova_compute[250018]: 2026-01-20 15:31:06.052 250022 DEBUG nova.virt.hardware [None req-c8ab9db1-9b53-4506-99d8-af52b7a2a7ca 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 15:31:06 compute-0 nova_compute[250018]: 2026-01-20 15:31:06.053 250022 DEBUG nova.virt.hardware [None req-c8ab9db1-9b53-4506-99d8-af52b7a2a7ca 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 15:31:06 compute-0 nova_compute[250018]: 2026-01-20 15:31:06.053 250022 DEBUG nova.virt.hardware [None req-c8ab9db1-9b53-4506-99d8-af52b7a2a7ca 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 15:31:06 compute-0 nova_compute[250018]: 2026-01-20 15:31:06.053 250022 DEBUG nova.virt.hardware [None req-c8ab9db1-9b53-4506-99d8-af52b7a2a7ca 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 15:31:06 compute-0 nova_compute[250018]: 2026-01-20 15:31:06.053 250022 DEBUG nova.virt.hardware [None req-c8ab9db1-9b53-4506-99d8-af52b7a2a7ca 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 15:31:06 compute-0 nova_compute[250018]: 2026-01-20 15:31:06.054 250022 DEBUG nova.virt.hardware [None req-c8ab9db1-9b53-4506-99d8-af52b7a2a7ca 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 15:31:06 compute-0 nova_compute[250018]: 2026-01-20 15:31:06.054 250022 DEBUG nova.virt.hardware [None req-c8ab9db1-9b53-4506-99d8-af52b7a2a7ca 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 15:31:06 compute-0 nova_compute[250018]: 2026-01-20 15:31:06.054 250022 DEBUG nova.virt.hardware [None req-c8ab9db1-9b53-4506-99d8-af52b7a2a7ca 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 15:31:06 compute-0 nova_compute[250018]: 2026-01-20 15:31:06.054 250022 DEBUG nova.virt.hardware [None req-c8ab9db1-9b53-4506-99d8-af52b7a2a7ca 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 15:31:06 compute-0 nova_compute[250018]: 2026-01-20 15:31:06.055 250022 DEBUG nova.virt.hardware [None req-c8ab9db1-9b53-4506-99d8-af52b7a2a7ca 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 15:31:06 compute-0 nova_compute[250018]: 2026-01-20 15:31:06.055 250022 DEBUG nova.virt.hardware [None req-c8ab9db1-9b53-4506-99d8-af52b7a2a7ca 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 15:31:06 compute-0 nova_compute[250018]: 2026-01-20 15:31:06.057 250022 DEBUG oslo_concurrency.processutils [None req-c8ab9db1-9b53-4506-99d8-af52b7a2a7ca 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:31:06 compute-0 ceph-mon[74360]: pgmap v3267: 321 pgs: 321 active+clean; 228 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 992 KiB/s wr, 25 op/s
Jan 20 15:31:06 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 15:31:06 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3621508670' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:31:06 compute-0 nova_compute[250018]: 2026-01-20 15:31:06.519 250022 DEBUG oslo_concurrency.processutils [None req-c8ab9db1-9b53-4506-99d8-af52b7a2a7ca 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:31:06 compute-0 nova_compute[250018]: 2026-01-20 15:31:06.547 250022 DEBUG nova.storage.rbd_utils [None req-c8ab9db1-9b53-4506-99d8-af52b7a2a7ca 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] rbd image e974e40a-a93c-491e-b30e-6bb589348dc8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:31:06 compute-0 nova_compute[250018]: 2026-01-20 15:31:06.551 250022 DEBUG oslo_concurrency.processutils [None req-c8ab9db1-9b53-4506-99d8-af52b7a2a7ca 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:31:06 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:31:06 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:31:06 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:31:06.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:31:06 compute-0 nova_compute[250018]: 2026-01-20 15:31:06.725 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:31:06 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 15:31:06 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/393004237' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:31:07 compute-0 nova_compute[250018]: 2026-01-20 15:31:07.000 250022 DEBUG oslo_concurrency.processutils [None req-c8ab9db1-9b53-4506-99d8-af52b7a2a7ca 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:31:07 compute-0 nova_compute[250018]: 2026-01-20 15:31:07.001 250022 DEBUG nova.virt.libvirt.vif [None req-c8ab9db1-9b53-4506-99d8-af52b7a2a7ca 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T15:30:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-921631331',display_name='tempest-TestNetworkBasicOps-server-921631331',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-921631331',id=207,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEAqDxVBp2h33eXX3Z4Y6GndT0eZUbki+q/rsKyZMhEDLXEtj5ICn9UzsdwtliCCgWYBy3qMABSYl4IjBd7AXsPnZQPt4zv3JvdUZsACexftU/zkJ1lgqA5ni1oDpXzkXw==',key_name='tempest-TestNetworkBasicOps-245405952',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3168f57421fb49bfb94b85daedd1fe7d',ramdisk_id='',reservation_id='r-76m8m06w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-807695970',owner_user_name='tempest-TestNetworkBasicOps-807695970-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T15:31:00Z,user_data=None,user_id='5338aa65dc0e4326a66ce79053787f14',uuid=e974e40a-a93c-491e-b30e-6bb589348dc8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a244be5a-cd0d-47e6-bb18-4fc628b6e913", "address": "fa:16:3e:c9:e8:d8", "network": {"id": "0228362f-0ced-4cac-bb89-96bd472df47f", "bridge": "br-int", "label": "tempest-network-smoke--197661202", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa244be5a-cd", "ovs_interfaceid": "a244be5a-cd0d-47e6-bb18-4fc628b6e913", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 15:31:07 compute-0 nova_compute[250018]: 2026-01-20 15:31:07.002 250022 DEBUG nova.network.os_vif_util [None req-c8ab9db1-9b53-4506-99d8-af52b7a2a7ca 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Converting VIF {"id": "a244be5a-cd0d-47e6-bb18-4fc628b6e913", "address": "fa:16:3e:c9:e8:d8", "network": {"id": "0228362f-0ced-4cac-bb89-96bd472df47f", "bridge": "br-int", "label": "tempest-network-smoke--197661202", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa244be5a-cd", "ovs_interfaceid": "a244be5a-cd0d-47e6-bb18-4fc628b6e913", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:31:07 compute-0 nova_compute[250018]: 2026-01-20 15:31:07.002 250022 DEBUG nova.network.os_vif_util [None req-c8ab9db1-9b53-4506-99d8-af52b7a2a7ca 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c9:e8:d8,bridge_name='br-int',has_traffic_filtering=True,id=a244be5a-cd0d-47e6-bb18-4fc628b6e913,network=Network(0228362f-0ced-4cac-bb89-96bd472df47f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa244be5a-cd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:31:07 compute-0 nova_compute[250018]: 2026-01-20 15:31:07.003 250022 DEBUG nova.objects.instance [None req-c8ab9db1-9b53-4506-99d8-af52b7a2a7ca 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lazy-loading 'pci_devices' on Instance uuid e974e40a-a93c-491e-b30e-6bb589348dc8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:31:07 compute-0 nova_compute[250018]: 2026-01-20 15:31:07.016 250022 DEBUG nova.virt.libvirt.driver [None req-c8ab9db1-9b53-4506-99d8-af52b7a2a7ca 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: e974e40a-a93c-491e-b30e-6bb589348dc8] End _get_guest_xml xml=<domain type="kvm">
Jan 20 15:31:07 compute-0 nova_compute[250018]:   <uuid>e974e40a-a93c-491e-b30e-6bb589348dc8</uuid>
Jan 20 15:31:07 compute-0 nova_compute[250018]:   <name>instance-000000cf</name>
Jan 20 15:31:07 compute-0 nova_compute[250018]:   <memory>131072</memory>
Jan 20 15:31:07 compute-0 nova_compute[250018]:   <vcpu>1</vcpu>
Jan 20 15:31:07 compute-0 nova_compute[250018]:   <metadata>
Jan 20 15:31:07 compute-0 nova_compute[250018]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 15:31:07 compute-0 nova_compute[250018]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 15:31:07 compute-0 nova_compute[250018]:       <nova:name>tempest-TestNetworkBasicOps-server-921631331</nova:name>
Jan 20 15:31:07 compute-0 nova_compute[250018]:       <nova:creationTime>2026-01-20 15:31:06</nova:creationTime>
Jan 20 15:31:07 compute-0 nova_compute[250018]:       <nova:flavor name="m1.nano">
Jan 20 15:31:07 compute-0 nova_compute[250018]:         <nova:memory>128</nova:memory>
Jan 20 15:31:07 compute-0 nova_compute[250018]:         <nova:disk>1</nova:disk>
Jan 20 15:31:07 compute-0 nova_compute[250018]:         <nova:swap>0</nova:swap>
Jan 20 15:31:07 compute-0 nova_compute[250018]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 15:31:07 compute-0 nova_compute[250018]:         <nova:vcpus>1</nova:vcpus>
Jan 20 15:31:07 compute-0 nova_compute[250018]:       </nova:flavor>
Jan 20 15:31:07 compute-0 nova_compute[250018]:       <nova:owner>
Jan 20 15:31:07 compute-0 nova_compute[250018]:         <nova:user uuid="5338aa65dc0e4326a66ce79053787f14">tempest-TestNetworkBasicOps-807695970-project-member</nova:user>
Jan 20 15:31:07 compute-0 nova_compute[250018]:         <nova:project uuid="3168f57421fb49bfb94b85daedd1fe7d">tempest-TestNetworkBasicOps-807695970</nova:project>
Jan 20 15:31:07 compute-0 nova_compute[250018]:       </nova:owner>
Jan 20 15:31:07 compute-0 nova_compute[250018]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 15:31:07 compute-0 nova_compute[250018]:       <nova:ports>
Jan 20 15:31:07 compute-0 nova_compute[250018]:         <nova:port uuid="a244be5a-cd0d-47e6-bb18-4fc628b6e913">
Jan 20 15:31:07 compute-0 nova_compute[250018]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 20 15:31:07 compute-0 nova_compute[250018]:         </nova:port>
Jan 20 15:31:07 compute-0 nova_compute[250018]:       </nova:ports>
Jan 20 15:31:07 compute-0 nova_compute[250018]:     </nova:instance>
Jan 20 15:31:07 compute-0 nova_compute[250018]:   </metadata>
Jan 20 15:31:07 compute-0 nova_compute[250018]:   <sysinfo type="smbios">
Jan 20 15:31:07 compute-0 nova_compute[250018]:     <system>
Jan 20 15:31:07 compute-0 nova_compute[250018]:       <entry name="manufacturer">RDO</entry>
Jan 20 15:31:07 compute-0 nova_compute[250018]:       <entry name="product">OpenStack Compute</entry>
Jan 20 15:31:07 compute-0 nova_compute[250018]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 15:31:07 compute-0 nova_compute[250018]:       <entry name="serial">e974e40a-a93c-491e-b30e-6bb589348dc8</entry>
Jan 20 15:31:07 compute-0 nova_compute[250018]:       <entry name="uuid">e974e40a-a93c-491e-b30e-6bb589348dc8</entry>
Jan 20 15:31:07 compute-0 nova_compute[250018]:       <entry name="family">Virtual Machine</entry>
Jan 20 15:31:07 compute-0 nova_compute[250018]:     </system>
Jan 20 15:31:07 compute-0 nova_compute[250018]:   </sysinfo>
Jan 20 15:31:07 compute-0 nova_compute[250018]:   <os>
Jan 20 15:31:07 compute-0 nova_compute[250018]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 15:31:07 compute-0 nova_compute[250018]:     <boot dev="hd"/>
Jan 20 15:31:07 compute-0 nova_compute[250018]:     <smbios mode="sysinfo"/>
Jan 20 15:31:07 compute-0 nova_compute[250018]:   </os>
Jan 20 15:31:07 compute-0 nova_compute[250018]:   <features>
Jan 20 15:31:07 compute-0 nova_compute[250018]:     <acpi/>
Jan 20 15:31:07 compute-0 nova_compute[250018]:     <apic/>
Jan 20 15:31:07 compute-0 nova_compute[250018]:     <vmcoreinfo/>
Jan 20 15:31:07 compute-0 nova_compute[250018]:   </features>
Jan 20 15:31:07 compute-0 nova_compute[250018]:   <clock offset="utc">
Jan 20 15:31:07 compute-0 nova_compute[250018]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 15:31:07 compute-0 nova_compute[250018]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 15:31:07 compute-0 nova_compute[250018]:     <timer name="hpet" present="no"/>
Jan 20 15:31:07 compute-0 nova_compute[250018]:   </clock>
Jan 20 15:31:07 compute-0 nova_compute[250018]:   <cpu mode="custom" match="exact">
Jan 20 15:31:07 compute-0 nova_compute[250018]:     <model>Nehalem</model>
Jan 20 15:31:07 compute-0 nova_compute[250018]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 15:31:07 compute-0 nova_compute[250018]:   </cpu>
Jan 20 15:31:07 compute-0 nova_compute[250018]:   <devices>
Jan 20 15:31:07 compute-0 nova_compute[250018]:     <disk type="network" device="disk">
Jan 20 15:31:07 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 15:31:07 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/e974e40a-a93c-491e-b30e-6bb589348dc8_disk">
Jan 20 15:31:07 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 15:31:07 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 15:31:07 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 15:31:07 compute-0 nova_compute[250018]:       </source>
Jan 20 15:31:07 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 15:31:07 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 15:31:07 compute-0 nova_compute[250018]:       </auth>
Jan 20 15:31:07 compute-0 nova_compute[250018]:       <target dev="vda" bus="virtio"/>
Jan 20 15:31:07 compute-0 nova_compute[250018]:     </disk>
Jan 20 15:31:07 compute-0 nova_compute[250018]:     <disk type="network" device="cdrom">
Jan 20 15:31:07 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 15:31:07 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/e974e40a-a93c-491e-b30e-6bb589348dc8_disk.config">
Jan 20 15:31:07 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 15:31:07 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 15:31:07 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 15:31:07 compute-0 nova_compute[250018]:       </source>
Jan 20 15:31:07 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 15:31:07 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 15:31:07 compute-0 nova_compute[250018]:       </auth>
Jan 20 15:31:07 compute-0 nova_compute[250018]:       <target dev="sda" bus="sata"/>
Jan 20 15:31:07 compute-0 nova_compute[250018]:     </disk>
Jan 20 15:31:07 compute-0 nova_compute[250018]:     <interface type="ethernet">
Jan 20 15:31:07 compute-0 nova_compute[250018]:       <mac address="fa:16:3e:c9:e8:d8"/>
Jan 20 15:31:07 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 15:31:07 compute-0 nova_compute[250018]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 15:31:07 compute-0 nova_compute[250018]:       <mtu size="1442"/>
Jan 20 15:31:07 compute-0 nova_compute[250018]:       <target dev="tapa244be5a-cd"/>
Jan 20 15:31:07 compute-0 nova_compute[250018]:     </interface>
Jan 20 15:31:07 compute-0 nova_compute[250018]:     <serial type="pty">
Jan 20 15:31:07 compute-0 nova_compute[250018]:       <log file="/var/lib/nova/instances/e974e40a-a93c-491e-b30e-6bb589348dc8/console.log" append="off"/>
Jan 20 15:31:07 compute-0 nova_compute[250018]:     </serial>
Jan 20 15:31:07 compute-0 nova_compute[250018]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 15:31:07 compute-0 nova_compute[250018]:     <video>
Jan 20 15:31:07 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 15:31:07 compute-0 nova_compute[250018]:     </video>
Jan 20 15:31:07 compute-0 nova_compute[250018]:     <input type="tablet" bus="usb"/>
Jan 20 15:31:07 compute-0 nova_compute[250018]:     <rng model="virtio">
Jan 20 15:31:07 compute-0 nova_compute[250018]:       <backend model="random">/dev/urandom</backend>
Jan 20 15:31:07 compute-0 nova_compute[250018]:     </rng>
Jan 20 15:31:07 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root"/>
Jan 20 15:31:07 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:31:07 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:31:07 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:31:07 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:31:07 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:31:07 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:31:07 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:31:07 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:31:07 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:31:07 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:31:07 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:31:07 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:31:07 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:31:07 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:31:07 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:31:07 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:31:07 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:31:07 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:31:07 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:31:07 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:31:07 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:31:07 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:31:07 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:31:07 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:31:07 compute-0 nova_compute[250018]:     <controller type="usb" index="0"/>
Jan 20 15:31:07 compute-0 nova_compute[250018]:     <memballoon model="virtio">
Jan 20 15:31:07 compute-0 nova_compute[250018]:       <stats period="10"/>
Jan 20 15:31:07 compute-0 nova_compute[250018]:     </memballoon>
Jan 20 15:31:07 compute-0 nova_compute[250018]:   </devices>
Jan 20 15:31:07 compute-0 nova_compute[250018]: </domain>
Jan 20 15:31:07 compute-0 nova_compute[250018]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 15:31:07 compute-0 nova_compute[250018]: 2026-01-20 15:31:07.018 250022 DEBUG nova.compute.manager [None req-c8ab9db1-9b53-4506-99d8-af52b7a2a7ca 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: e974e40a-a93c-491e-b30e-6bb589348dc8] Preparing to wait for external event network-vif-plugged-a244be5a-cd0d-47e6-bb18-4fc628b6e913 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 20 15:31:07 compute-0 nova_compute[250018]: 2026-01-20 15:31:07.018 250022 DEBUG oslo_concurrency.lockutils [None req-c8ab9db1-9b53-4506-99d8-af52b7a2a7ca 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Acquiring lock "e974e40a-a93c-491e-b30e-6bb589348dc8-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:31:07 compute-0 nova_compute[250018]: 2026-01-20 15:31:07.018 250022 DEBUG oslo_concurrency.lockutils [None req-c8ab9db1-9b53-4506-99d8-af52b7a2a7ca 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "e974e40a-a93c-491e-b30e-6bb589348dc8-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:31:07 compute-0 nova_compute[250018]: 2026-01-20 15:31:07.018 250022 DEBUG oslo_concurrency.lockutils [None req-c8ab9db1-9b53-4506-99d8-af52b7a2a7ca 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "e974e40a-a93c-491e-b30e-6bb589348dc8-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:31:07 compute-0 nova_compute[250018]: 2026-01-20 15:31:07.019 250022 DEBUG nova.virt.libvirt.vif [None req-c8ab9db1-9b53-4506-99d8-af52b7a2a7ca 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T15:30:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-921631331',display_name='tempest-TestNetworkBasicOps-server-921631331',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-921631331',id=207,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEAqDxVBp2h33eXX3Z4Y6GndT0eZUbki+q/rsKyZMhEDLXEtj5ICn9UzsdwtliCCgWYBy3qMABSYl4IjBd7AXsPnZQPt4zv3JvdUZsACexftU/zkJ1lgqA5ni1oDpXzkXw==',key_name='tempest-TestNetworkBasicOps-245405952',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3168f57421fb49bfb94b85daedd1fe7d',ramdisk_id='',reservation_id='r-76m8m06w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-807695970',owner_user_name='tempest-TestNetworkBasicOps-807695970-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T15:31:00Z,user_data=None,user_id='5338aa65dc0e4326a66ce79053787f14',uuid=e974e40a-a93c-491e-b30e-6bb589348dc8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a244be5a-cd0d-47e6-bb18-4fc628b6e913", "address": "fa:16:3e:c9:e8:d8", "network": {"id": "0228362f-0ced-4cac-bb89-96bd472df47f", "bridge": "br-int", "label": "tempest-network-smoke--197661202", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa244be5a-cd", "ovs_interfaceid": "a244be5a-cd0d-47e6-bb18-4fc628b6e913", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 15:31:07 compute-0 nova_compute[250018]: 2026-01-20 15:31:07.019 250022 DEBUG nova.network.os_vif_util [None req-c8ab9db1-9b53-4506-99d8-af52b7a2a7ca 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Converting VIF {"id": "a244be5a-cd0d-47e6-bb18-4fc628b6e913", "address": "fa:16:3e:c9:e8:d8", "network": {"id": "0228362f-0ced-4cac-bb89-96bd472df47f", "bridge": "br-int", "label": "tempest-network-smoke--197661202", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa244be5a-cd", "ovs_interfaceid": "a244be5a-cd0d-47e6-bb18-4fc628b6e913", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:31:07 compute-0 nova_compute[250018]: 2026-01-20 15:31:07.020 250022 DEBUG nova.network.os_vif_util [None req-c8ab9db1-9b53-4506-99d8-af52b7a2a7ca 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c9:e8:d8,bridge_name='br-int',has_traffic_filtering=True,id=a244be5a-cd0d-47e6-bb18-4fc628b6e913,network=Network(0228362f-0ced-4cac-bb89-96bd472df47f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa244be5a-cd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:31:07 compute-0 nova_compute[250018]: 2026-01-20 15:31:07.020 250022 DEBUG os_vif [None req-c8ab9db1-9b53-4506-99d8-af52b7a2a7ca 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c9:e8:d8,bridge_name='br-int',has_traffic_filtering=True,id=a244be5a-cd0d-47e6-bb18-4fc628b6e913,network=Network(0228362f-0ced-4cac-bb89-96bd472df47f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa244be5a-cd') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 15:31:07 compute-0 nova_compute[250018]: 2026-01-20 15:31:07.020 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:31:07 compute-0 nova_compute[250018]: 2026-01-20 15:31:07.021 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:31:07 compute-0 nova_compute[250018]: 2026-01-20 15:31:07.021 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:31:07 compute-0 nova_compute[250018]: 2026-01-20 15:31:07.025 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:31:07 compute-0 nova_compute[250018]: 2026-01-20 15:31:07.025 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa244be5a-cd, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:31:07 compute-0 nova_compute[250018]: 2026-01-20 15:31:07.025 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa244be5a-cd, col_values=(('external_ids', {'iface-id': 'a244be5a-cd0d-47e6-bb18-4fc628b6e913', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c9:e8:d8', 'vm-uuid': 'e974e40a-a93c-491e-b30e-6bb589348dc8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:31:07 compute-0 nova_compute[250018]: 2026-01-20 15:31:07.026 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:31:07 compute-0 NetworkManager[48960]: <info>  [1768923067.0282] manager: (tapa244be5a-cd): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/357)
Jan 20 15:31:07 compute-0 nova_compute[250018]: 2026-01-20 15:31:07.029 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 15:31:07 compute-0 nova_compute[250018]: 2026-01-20 15:31:07.033 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:31:07 compute-0 nova_compute[250018]: 2026-01-20 15:31:07.034 250022 INFO os_vif [None req-c8ab9db1-9b53-4506-99d8-af52b7a2a7ca 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c9:e8:d8,bridge_name='br-int',has_traffic_filtering=True,id=a244be5a-cd0d-47e6-bb18-4fc628b6e913,network=Network(0228362f-0ced-4cac-bb89-96bd472df47f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa244be5a-cd')
Jan 20 15:31:07 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3268: 321 pgs: 321 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:31:07 compute-0 nova_compute[250018]: 2026-01-20 15:31:07.126 250022 DEBUG nova.virt.libvirt.driver [None req-c8ab9db1-9b53-4506-99d8-af52b7a2a7ca 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:31:07 compute-0 nova_compute[250018]: 2026-01-20 15:31:07.126 250022 DEBUG nova.virt.libvirt.driver [None req-c8ab9db1-9b53-4506-99d8-af52b7a2a7ca 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:31:07 compute-0 nova_compute[250018]: 2026-01-20 15:31:07.126 250022 DEBUG nova.virt.libvirt.driver [None req-c8ab9db1-9b53-4506-99d8-af52b7a2a7ca 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] No VIF found with MAC fa:16:3e:c9:e8:d8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 15:31:07 compute-0 nova_compute[250018]: 2026-01-20 15:31:07.127 250022 INFO nova.virt.libvirt.driver [None req-c8ab9db1-9b53-4506-99d8-af52b7a2a7ca 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: e974e40a-a93c-491e-b30e-6bb589348dc8] Using config drive
Jan 20 15:31:07 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3621508670' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:31:07 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/393004237' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:31:07 compute-0 nova_compute[250018]: 2026-01-20 15:31:07.182 250022 DEBUG nova.storage.rbd_utils [None req-c8ab9db1-9b53-4506-99d8-af52b7a2a7ca 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] rbd image e974e40a-a93c-491e-b30e-6bb589348dc8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:31:07 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:31:07 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:31:07 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:31:07.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:31:07 compute-0 nova_compute[250018]: 2026-01-20 15:31:07.533 250022 DEBUG nova.network.neutron [req-99e3c105-9574-4a7e-af40-3836ae6e1ed1 req-1890e78d-e427-495b-99cc-652d5a1f27b6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: e974e40a-a93c-491e-b30e-6bb589348dc8] Updated VIF entry in instance network info cache for port a244be5a-cd0d-47e6-bb18-4fc628b6e913. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 15:31:07 compute-0 nova_compute[250018]: 2026-01-20 15:31:07.534 250022 DEBUG nova.network.neutron [req-99e3c105-9574-4a7e-af40-3836ae6e1ed1 req-1890e78d-e427-495b-99cc-652d5a1f27b6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: e974e40a-a93c-491e-b30e-6bb589348dc8] Updating instance_info_cache with network_info: [{"id": "a244be5a-cd0d-47e6-bb18-4fc628b6e913", "address": "fa:16:3e:c9:e8:d8", "network": {"id": "0228362f-0ced-4cac-bb89-96bd472df47f", "bridge": "br-int", "label": "tempest-network-smoke--197661202", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa244be5a-cd", "ovs_interfaceid": "a244be5a-cd0d-47e6-bb18-4fc628b6e913", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:31:07 compute-0 nova_compute[250018]: 2026-01-20 15:31:07.554 250022 DEBUG oslo_concurrency.lockutils [req-99e3c105-9574-4a7e-af40-3836ae6e1ed1 req-1890e78d-e427-495b-99cc-652d5a1f27b6 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-e974e40a-a93c-491e-b30e-6bb589348dc8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:31:07 compute-0 nova_compute[250018]: 2026-01-20 15:31:07.631 250022 INFO nova.virt.libvirt.driver [None req-c8ab9db1-9b53-4506-99d8-af52b7a2a7ca 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: e974e40a-a93c-491e-b30e-6bb589348dc8] Creating config drive at /var/lib/nova/instances/e974e40a-a93c-491e-b30e-6bb589348dc8/disk.config
Jan 20 15:31:07 compute-0 nova_compute[250018]: 2026-01-20 15:31:07.640 250022 DEBUG oslo_concurrency.processutils [None req-c8ab9db1-9b53-4506-99d8-af52b7a2a7ca 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e974e40a-a93c-491e-b30e-6bb589348dc8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppg0gpitv execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:31:07 compute-0 nova_compute[250018]: 2026-01-20 15:31:07.796 250022 DEBUG oslo_concurrency.processutils [None req-c8ab9db1-9b53-4506-99d8-af52b7a2a7ca 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e974e40a-a93c-491e-b30e-6bb589348dc8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppg0gpitv" returned: 0 in 0.157s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:31:07 compute-0 nova_compute[250018]: 2026-01-20 15:31:07.832 250022 DEBUG nova.storage.rbd_utils [None req-c8ab9db1-9b53-4506-99d8-af52b7a2a7ca 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] rbd image e974e40a-a93c-491e-b30e-6bb589348dc8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:31:07 compute-0 nova_compute[250018]: 2026-01-20 15:31:07.836 250022 DEBUG oslo_concurrency.processutils [None req-c8ab9db1-9b53-4506-99d8-af52b7a2a7ca 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e974e40a-a93c-491e-b30e-6bb589348dc8/disk.config e974e40a-a93c-491e-b30e-6bb589348dc8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:31:07 compute-0 sudo[383459]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:31:07 compute-0 sudo[383459]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:31:07 compute-0 sudo[383459]: pam_unix(sudo:session): session closed for user root
Jan 20 15:31:07 compute-0 sudo[383499]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:31:07 compute-0 sudo[383499]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:31:08 compute-0 sudo[383499]: pam_unix(sudo:session): session closed for user root
Jan 20 15:31:08 compute-0 nova_compute[250018]: 2026-01-20 15:31:08.069 250022 DEBUG oslo_concurrency.processutils [None req-c8ab9db1-9b53-4506-99d8-af52b7a2a7ca 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e974e40a-a93c-491e-b30e-6bb589348dc8/disk.config e974e40a-a93c-491e-b30e-6bb589348dc8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.233s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:31:08 compute-0 nova_compute[250018]: 2026-01-20 15:31:08.070 250022 INFO nova.virt.libvirt.driver [None req-c8ab9db1-9b53-4506-99d8-af52b7a2a7ca 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: e974e40a-a93c-491e-b30e-6bb589348dc8] Deleting local config drive /var/lib/nova/instances/e974e40a-a93c-491e-b30e-6bb589348dc8/disk.config because it was imported into RBD.
Jan 20 15:31:08 compute-0 kernel: tapa244be5a-cd: entered promiscuous mode
Jan 20 15:31:08 compute-0 NetworkManager[48960]: <info>  [1768923068.1285] manager: (tapa244be5a-cd): new Tun device (/org/freedesktop/NetworkManager/Devices/358)
Jan 20 15:31:08 compute-0 ovn_controller[148666]: 2026-01-20T15:31:08Z|00746|binding|INFO|Claiming lport a244be5a-cd0d-47e6-bb18-4fc628b6e913 for this chassis.
Jan 20 15:31:08 compute-0 ovn_controller[148666]: 2026-01-20T15:31:08Z|00747|binding|INFO|a244be5a-cd0d-47e6-bb18-4fc628b6e913: Claiming fa:16:3e:c9:e8:d8 10.100.0.7
Jan 20 15:31:08 compute-0 nova_compute[250018]: 2026-01-20 15:31:08.128 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:31:08 compute-0 nova_compute[250018]: 2026-01-20 15:31:08.132 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:31:08 compute-0 nova_compute[250018]: 2026-01-20 15:31:08.138 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:31:08 compute-0 systemd-udevd[383539]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 15:31:08 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:31:08.159 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c9:e8:d8 10.100.0.7'], port_security=['fa:16:3e:c9:e8:d8 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'e974e40a-a93c-491e-b30e-6bb589348dc8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0228362f-0ced-4cac-bb89-96bd472df47f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3168f57421fb49bfb94b85daedd1fe7d', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8928ac80-99a4-4162-8778-e854bb24c9a4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=306abd7d-c001-4e00-b2a1-8a251fd6a022, chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=a244be5a-cd0d-47e6-bb18-4fc628b6e913) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:31:08 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:31:08.160 160071 INFO neutron.agent.ovn.metadata.agent [-] Port a244be5a-cd0d-47e6-bb18-4fc628b6e913 in datapath 0228362f-0ced-4cac-bb89-96bd472df47f bound to our chassis
Jan 20 15:31:08 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:31:08.161 160071 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0228362f-0ced-4cac-bb89-96bd472df47f
Jan 20 15:31:08 compute-0 NetworkManager[48960]: <info>  [1768923068.1714] device (tapa244be5a-cd): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 15:31:08 compute-0 NetworkManager[48960]: <info>  [1768923068.1722] device (tapa244be5a-cd): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 15:31:08 compute-0 systemd-machined[216401]: New machine qemu-90-instance-000000cf.
Jan 20 15:31:08 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:31:08.175 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[6f5c06de-6936-48c3-96c7-4e96b0eb2d3b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:31:08 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:31:08.177 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap0228362f-01 in ovnmeta-0228362f-0ced-4cac-bb89-96bd472df47f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 15:31:08 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:31:08.179 257604 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap0228362f-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 15:31:08 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:31:08.179 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[7b3d1664-8463-470f-912d-c1e2f92e4887]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:31:08 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:31:08.181 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[c012b3be-c0f1-4d68-9a7c-87b246b66e70]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:31:08 compute-0 ceph-mon[74360]: pgmap v3268: 321 pgs: 321 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:31:08 compute-0 systemd[1]: Started Virtual Machine qemu-90-instance-000000cf.
Jan 20 15:31:08 compute-0 nova_compute[250018]: 2026-01-20 15:31:08.195 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:31:08 compute-0 ovn_controller[148666]: 2026-01-20T15:31:08Z|00748|binding|INFO|Setting lport a244be5a-cd0d-47e6-bb18-4fc628b6e913 ovn-installed in OVS
Jan 20 15:31:08 compute-0 ovn_controller[148666]: 2026-01-20T15:31:08Z|00749|binding|INFO|Setting lport a244be5a-cd0d-47e6-bb18-4fc628b6e913 up in Southbound
Jan 20 15:31:08 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:31:08.197 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[e94504fb-12ca-4267-9897-fc61a2491bbc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:31:08 compute-0 nova_compute[250018]: 2026-01-20 15:31:08.198 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:31:08 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:31:08.221 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[b9d3baaa-b75c-41bb-8ee8-fc9b1fb076fe]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:31:08 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:31:08.251 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[750da0fc-1ba1-415e-9857-34c58609918a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:31:08 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:31:08.255 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[ec4bbc3e-99c8-481c-93a4-82120c223269]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:31:08 compute-0 NetworkManager[48960]: <info>  [1768923068.2568] manager: (tap0228362f-00): new Veth device (/org/freedesktop/NetworkManager/Devices/359)
Jan 20 15:31:08 compute-0 systemd-udevd[383543]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 15:31:08 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:31:08 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:31:08.282 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[96e70c7f-ffc8-4717-bcad-bc8c4834b86c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:31:08 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:31:08.285 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[90def683-67f6-4bf1-b834-8a297e18e3dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:31:08 compute-0 NetworkManager[48960]: <info>  [1768923068.3094] device (tap0228362f-00): carrier: link connected
Jan 20 15:31:08 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:31:08.313 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[9aa7d469-5e95-424f-adc9-a92d4f790feb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:31:08 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:31:08.333 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[9fc3fac7-219c-45ba-82f0-6e054dd193f6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0228362f-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c9:13:71'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 236], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 900903, 'reachable_time': 24092, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 383574, 'error': None, 'target': 'ovnmeta-0228362f-0ced-4cac-bb89-96bd472df47f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:31:08 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:31:08.349 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[87b46092-7d5c-40fe-ac28-2fd5de24a6c0]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec9:1371'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 900903, 'tstamp': 900903}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 383575, 'error': None, 'target': 'ovnmeta-0228362f-0ced-4cac-bb89-96bd472df47f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:31:08 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:31:08.470 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[6c36bc3b-7c1a-4a69-815a-f70e0d770b98]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0228362f-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c9:13:71'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 236], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 900903, 'reachable_time': 24092, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 383594, 'error': None, 'target': 'ovnmeta-0228362f-0ced-4cac-bb89-96bd472df47f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:31:08 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:31:08.508 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[29f3307b-6508-47f4-87d0-09c5e7cd528c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:31:08 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:31:08.563 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[80995093-e0d9-4bce-ae6b-175e11d3bf13]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:31:08 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:31:08.564 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0228362f-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:31:08 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:31:08.564 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:31:08 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:31:08.565 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0228362f-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:31:08 compute-0 nova_compute[250018]: 2026-01-20 15:31:08.566 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:31:08 compute-0 kernel: tap0228362f-00: entered promiscuous mode
Jan 20 15:31:08 compute-0 nova_compute[250018]: 2026-01-20 15:31:08.568 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:31:08 compute-0 NetworkManager[48960]: <info>  [1768923068.5685] manager: (tap0228362f-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/360)
Jan 20 15:31:08 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:31:08.568 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0228362f-00, col_values=(('external_ids', {'iface-id': 'cd551c37-a4a7-45aa-9507-04cb570a94af'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:31:08 compute-0 nova_compute[250018]: 2026-01-20 15:31:08.569 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:31:08 compute-0 ovn_controller[148666]: 2026-01-20T15:31:08Z|00750|binding|INFO|Releasing lport cd551c37-a4a7-45aa-9507-04cb570a94af from this chassis (sb_readonly=0)
Jan 20 15:31:08 compute-0 nova_compute[250018]: 2026-01-20 15:31:08.582 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:31:08 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:31:08.583 160071 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/0228362f-0ced-4cac-bb89-96bd472df47f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/0228362f-0ced-4cac-bb89-96bd472df47f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 15:31:08 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:31:08.584 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[914e89cf-7b61-4a00-a258-2e4f331cb691]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:31:08 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:31:08.585 160071 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 15:31:08 compute-0 ovn_metadata_agent[160049]: global
Jan 20 15:31:08 compute-0 ovn_metadata_agent[160049]:     log         /dev/log local0 debug
Jan 20 15:31:08 compute-0 ovn_metadata_agent[160049]:     log-tag     haproxy-metadata-proxy-0228362f-0ced-4cac-bb89-96bd472df47f
Jan 20 15:31:08 compute-0 ovn_metadata_agent[160049]:     user        root
Jan 20 15:31:08 compute-0 ovn_metadata_agent[160049]:     group       root
Jan 20 15:31:08 compute-0 ovn_metadata_agent[160049]:     maxconn     1024
Jan 20 15:31:08 compute-0 ovn_metadata_agent[160049]:     pidfile     /var/lib/neutron/external/pids/0228362f-0ced-4cac-bb89-96bd472df47f.pid.haproxy
Jan 20 15:31:08 compute-0 ovn_metadata_agent[160049]:     daemon
Jan 20 15:31:08 compute-0 ovn_metadata_agent[160049]: 
Jan 20 15:31:08 compute-0 ovn_metadata_agent[160049]: defaults
Jan 20 15:31:08 compute-0 ovn_metadata_agent[160049]:     log global
Jan 20 15:31:08 compute-0 ovn_metadata_agent[160049]:     mode http
Jan 20 15:31:08 compute-0 ovn_metadata_agent[160049]:     option httplog
Jan 20 15:31:08 compute-0 ovn_metadata_agent[160049]:     option dontlognull
Jan 20 15:31:08 compute-0 ovn_metadata_agent[160049]:     option http-server-close
Jan 20 15:31:08 compute-0 ovn_metadata_agent[160049]:     option forwardfor
Jan 20 15:31:08 compute-0 ovn_metadata_agent[160049]:     retries                 3
Jan 20 15:31:08 compute-0 ovn_metadata_agent[160049]:     timeout http-request    30s
Jan 20 15:31:08 compute-0 ovn_metadata_agent[160049]:     timeout connect         30s
Jan 20 15:31:08 compute-0 ovn_metadata_agent[160049]:     timeout client          32s
Jan 20 15:31:08 compute-0 ovn_metadata_agent[160049]:     timeout server          32s
Jan 20 15:31:08 compute-0 ovn_metadata_agent[160049]:     timeout http-keep-alive 30s
Jan 20 15:31:08 compute-0 ovn_metadata_agent[160049]: 
Jan 20 15:31:08 compute-0 ovn_metadata_agent[160049]: 
Jan 20 15:31:08 compute-0 ovn_metadata_agent[160049]: listen listener
Jan 20 15:31:08 compute-0 ovn_metadata_agent[160049]:     bind 169.254.169.254:80
Jan 20 15:31:08 compute-0 ovn_metadata_agent[160049]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 15:31:08 compute-0 ovn_metadata_agent[160049]:     http-request add-header X-OVN-Network-ID 0228362f-0ced-4cac-bb89-96bd472df47f
Jan 20 15:31:08 compute-0 ovn_metadata_agent[160049]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 15:31:08 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:31:08.586 160071 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-0228362f-0ced-4cac-bb89-96bd472df47f', 'env', 'PROCESS_TAG=haproxy-0228362f-0ced-4cac-bb89-96bd472df47f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/0228362f-0ced-4cac-bb89-96bd472df47f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 15:31:08 compute-0 nova_compute[250018]: 2026-01-20 15:31:08.599 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768923068.598681, e974e40a-a93c-491e-b30e-6bb589348dc8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:31:08 compute-0 nova_compute[250018]: 2026-01-20 15:31:08.599 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: e974e40a-a93c-491e-b30e-6bb589348dc8] VM Started (Lifecycle Event)
Jan 20 15:31:08 compute-0 nova_compute[250018]: 2026-01-20 15:31:08.628 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: e974e40a-a93c-491e-b30e-6bb589348dc8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:31:08 compute-0 nova_compute[250018]: 2026-01-20 15:31:08.633 250022 DEBUG nova.compute.manager [req-fcb52ce2-33a0-4c4d-b0a3-fcd903fde484 req-d742c79a-96de-4e45-bcf4-dfe178fd494b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: e974e40a-a93c-491e-b30e-6bb589348dc8] Received event network-vif-plugged-a244be5a-cd0d-47e6-bb18-4fc628b6e913 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:31:08 compute-0 nova_compute[250018]: 2026-01-20 15:31:08.634 250022 DEBUG oslo_concurrency.lockutils [req-fcb52ce2-33a0-4c4d-b0a3-fcd903fde484 req-d742c79a-96de-4e45-bcf4-dfe178fd494b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "e974e40a-a93c-491e-b30e-6bb589348dc8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:31:08 compute-0 nova_compute[250018]: 2026-01-20 15:31:08.634 250022 DEBUG oslo_concurrency.lockutils [req-fcb52ce2-33a0-4c4d-b0a3-fcd903fde484 req-d742c79a-96de-4e45-bcf4-dfe178fd494b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "e974e40a-a93c-491e-b30e-6bb589348dc8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:31:08 compute-0 nova_compute[250018]: 2026-01-20 15:31:08.634 250022 DEBUG oslo_concurrency.lockutils [req-fcb52ce2-33a0-4c4d-b0a3-fcd903fde484 req-d742c79a-96de-4e45-bcf4-dfe178fd494b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "e974e40a-a93c-491e-b30e-6bb589348dc8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:31:08 compute-0 nova_compute[250018]: 2026-01-20 15:31:08.635 250022 DEBUG nova.compute.manager [req-fcb52ce2-33a0-4c4d-b0a3-fcd903fde484 req-d742c79a-96de-4e45-bcf4-dfe178fd494b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: e974e40a-a93c-491e-b30e-6bb589348dc8] Processing event network-vif-plugged-a244be5a-cd0d-47e6-bb18-4fc628b6e913 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 20 15:31:08 compute-0 nova_compute[250018]: 2026-01-20 15:31:08.635 250022 DEBUG nova.compute.manager [None req-c8ab9db1-9b53-4506-99d8-af52b7a2a7ca 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: e974e40a-a93c-491e-b30e-6bb589348dc8] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 15:31:08 compute-0 nova_compute[250018]: 2026-01-20 15:31:08.636 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768923068.5987973, e974e40a-a93c-491e-b30e-6bb589348dc8 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:31:08 compute-0 nova_compute[250018]: 2026-01-20 15:31:08.637 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: e974e40a-a93c-491e-b30e-6bb589348dc8] VM Paused (Lifecycle Event)
Jan 20 15:31:08 compute-0 nova_compute[250018]: 2026-01-20 15:31:08.640 250022 DEBUG nova.virt.libvirt.driver [None req-c8ab9db1-9b53-4506-99d8-af52b7a2a7ca 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: e974e40a-a93c-491e-b30e-6bb589348dc8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 15:31:08 compute-0 nova_compute[250018]: 2026-01-20 15:31:08.644 250022 INFO nova.virt.libvirt.driver [-] [instance: e974e40a-a93c-491e-b30e-6bb589348dc8] Instance spawned successfully.
Jan 20 15:31:08 compute-0 nova_compute[250018]: 2026-01-20 15:31:08.644 250022 DEBUG nova.virt.libvirt.driver [None req-c8ab9db1-9b53-4506-99d8-af52b7a2a7ca 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: e974e40a-a93c-491e-b30e-6bb589348dc8] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 15:31:08 compute-0 nova_compute[250018]: 2026-01-20 15:31:08.661 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: e974e40a-a93c-491e-b30e-6bb589348dc8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:31:08 compute-0 nova_compute[250018]: 2026-01-20 15:31:08.664 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768923068.6392908, e974e40a-a93c-491e-b30e-6bb589348dc8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:31:08 compute-0 nova_compute[250018]: 2026-01-20 15:31:08.664 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: e974e40a-a93c-491e-b30e-6bb589348dc8] VM Resumed (Lifecycle Event)
Jan 20 15:31:08 compute-0 nova_compute[250018]: 2026-01-20 15:31:08.673 250022 DEBUG nova.virt.libvirt.driver [None req-c8ab9db1-9b53-4506-99d8-af52b7a2a7ca 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: e974e40a-a93c-491e-b30e-6bb589348dc8] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:31:08 compute-0 nova_compute[250018]: 2026-01-20 15:31:08.673 250022 DEBUG nova.virt.libvirt.driver [None req-c8ab9db1-9b53-4506-99d8-af52b7a2a7ca 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: e974e40a-a93c-491e-b30e-6bb589348dc8] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:31:08 compute-0 nova_compute[250018]: 2026-01-20 15:31:08.674 250022 DEBUG nova.virt.libvirt.driver [None req-c8ab9db1-9b53-4506-99d8-af52b7a2a7ca 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: e974e40a-a93c-491e-b30e-6bb589348dc8] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:31:08 compute-0 nova_compute[250018]: 2026-01-20 15:31:08.674 250022 DEBUG nova.virt.libvirt.driver [None req-c8ab9db1-9b53-4506-99d8-af52b7a2a7ca 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: e974e40a-a93c-491e-b30e-6bb589348dc8] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:31:08 compute-0 nova_compute[250018]: 2026-01-20 15:31:08.675 250022 DEBUG nova.virt.libvirt.driver [None req-c8ab9db1-9b53-4506-99d8-af52b7a2a7ca 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: e974e40a-a93c-491e-b30e-6bb589348dc8] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:31:08 compute-0 nova_compute[250018]: 2026-01-20 15:31:08.675 250022 DEBUG nova.virt.libvirt.driver [None req-c8ab9db1-9b53-4506-99d8-af52b7a2a7ca 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: e974e40a-a93c-491e-b30e-6bb589348dc8] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:31:08 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:31:08 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:31:08 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:31:08.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:31:08 compute-0 nova_compute[250018]: 2026-01-20 15:31:08.689 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: e974e40a-a93c-491e-b30e-6bb589348dc8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:31:08 compute-0 nova_compute[250018]: 2026-01-20 15:31:08.692 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: e974e40a-a93c-491e-b30e-6bb589348dc8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:31:08 compute-0 nova_compute[250018]: 2026-01-20 15:31:08.721 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: e974e40a-a93c-491e-b30e-6bb589348dc8] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 15:31:08 compute-0 nova_compute[250018]: 2026-01-20 15:31:08.735 250022 INFO nova.compute.manager [None req-c8ab9db1-9b53-4506-99d8-af52b7a2a7ca 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: e974e40a-a93c-491e-b30e-6bb589348dc8] Took 7.80 seconds to spawn the instance on the hypervisor.
Jan 20 15:31:08 compute-0 nova_compute[250018]: 2026-01-20 15:31:08.735 250022 DEBUG nova.compute.manager [None req-c8ab9db1-9b53-4506-99d8-af52b7a2a7ca 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: e974e40a-a93c-491e-b30e-6bb589348dc8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:31:08 compute-0 nova_compute[250018]: 2026-01-20 15:31:08.857 250022 INFO nova.compute.manager [None req-c8ab9db1-9b53-4506-99d8-af52b7a2a7ca 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: e974e40a-a93c-491e-b30e-6bb589348dc8] Took 8.78 seconds to build instance.
Jan 20 15:31:08 compute-0 nova_compute[250018]: 2026-01-20 15:31:08.876 250022 DEBUG oslo_concurrency.lockutils [None req-c8ab9db1-9b53-4506-99d8-af52b7a2a7ca 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "e974e40a-a93c-491e-b30e-6bb589348dc8" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.849s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:31:08 compute-0 podman[383647]: 2026-01-20 15:31:08.958635426 +0000 UTC m=+0.064950392 container create 6dc2041d654f0e10b5098079238a9f279e586a97b6aab940ca22e13dd7178591 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0228362f-0ced-4cac-bb89-96bd472df47f, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 20 15:31:09 compute-0 systemd[1]: Started libpod-conmon-6dc2041d654f0e10b5098079238a9f279e586a97b6aab940ca22e13dd7178591.scope.
Jan 20 15:31:09 compute-0 podman[383647]: 2026-01-20 15:31:08.932324618 +0000 UTC m=+0.038639594 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 15:31:09 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:31:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a438908fc7859bd0392e23aa372ff17571f4c60f917a88b37e225b83311d96fa/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 15:31:09 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3269: 321 pgs: 321 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 23 KiB/s rd, 1.8 MiB/s wr, 35 op/s
Jan 20 15:31:09 compute-0 podman[383647]: 2026-01-20 15:31:09.052465523 +0000 UTC m=+0.158780509 container init 6dc2041d654f0e10b5098079238a9f279e586a97b6aab940ca22e13dd7178591 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0228362f-0ced-4cac-bb89-96bd472df47f, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 20 15:31:09 compute-0 podman[383647]: 2026-01-20 15:31:09.059490945 +0000 UTC m=+0.165805921 container start 6dc2041d654f0e10b5098079238a9f279e586a97b6aab940ca22e13dd7178591 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0228362f-0ced-4cac-bb89-96bd472df47f, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 20 15:31:09 compute-0 neutron-haproxy-ovnmeta-0228362f-0ced-4cac-bb89-96bd472df47f[383661]: [NOTICE]   (383665) : New worker (383667) forked
Jan 20 15:31:09 compute-0 neutron-haproxy-ovnmeta-0228362f-0ced-4cac-bb89-96bd472df47f[383661]: [NOTICE]   (383665) : Loading success.
Jan 20 15:31:09 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:31:09 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:31:09 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:31:09.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:31:10 compute-0 nova_compute[250018]: 2026-01-20 15:31:10.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:31:10 compute-0 ceph-mon[74360]: pgmap v3269: 321 pgs: 321 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 23 KiB/s rd, 1.8 MiB/s wr, 35 op/s
Jan 20 15:31:10 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:31:10 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:31:10 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:31:10.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:31:10 compute-0 nova_compute[250018]: 2026-01-20 15:31:10.746 250022 DEBUG nova.compute.manager [req-67f21fbe-347a-484f-b631-c03faa23aa9e req-862d041b-5c81-4450-84ba-d49bb6b2d86a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: e974e40a-a93c-491e-b30e-6bb589348dc8] Received event network-vif-plugged-a244be5a-cd0d-47e6-bb18-4fc628b6e913 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:31:10 compute-0 nova_compute[250018]: 2026-01-20 15:31:10.747 250022 DEBUG oslo_concurrency.lockutils [req-67f21fbe-347a-484f-b631-c03faa23aa9e req-862d041b-5c81-4450-84ba-d49bb6b2d86a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "e974e40a-a93c-491e-b30e-6bb589348dc8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:31:10 compute-0 nova_compute[250018]: 2026-01-20 15:31:10.747 250022 DEBUG oslo_concurrency.lockutils [req-67f21fbe-347a-484f-b631-c03faa23aa9e req-862d041b-5c81-4450-84ba-d49bb6b2d86a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "e974e40a-a93c-491e-b30e-6bb589348dc8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:31:10 compute-0 nova_compute[250018]: 2026-01-20 15:31:10.747 250022 DEBUG oslo_concurrency.lockutils [req-67f21fbe-347a-484f-b631-c03faa23aa9e req-862d041b-5c81-4450-84ba-d49bb6b2d86a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "e974e40a-a93c-491e-b30e-6bb589348dc8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:31:10 compute-0 nova_compute[250018]: 2026-01-20 15:31:10.748 250022 DEBUG nova.compute.manager [req-67f21fbe-347a-484f-b631-c03faa23aa9e req-862d041b-5c81-4450-84ba-d49bb6b2d86a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: e974e40a-a93c-491e-b30e-6bb589348dc8] No waiting events found dispatching network-vif-plugged-a244be5a-cd0d-47e6-bb18-4fc628b6e913 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:31:10 compute-0 nova_compute[250018]: 2026-01-20 15:31:10.748 250022 WARNING nova.compute.manager [req-67f21fbe-347a-484f-b631-c03faa23aa9e req-862d041b-5c81-4450-84ba-d49bb6b2d86a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: e974e40a-a93c-491e-b30e-6bb589348dc8] Received unexpected event network-vif-plugged-a244be5a-cd0d-47e6-bb18-4fc628b6e913 for instance with vm_state active and task_state None.
Jan 20 15:31:11 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3270: 321 pgs: 321 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.4 MiB/s rd, 1.8 MiB/s wr, 82 op/s
Jan 20 15:31:11 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/165161131' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:31:11 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:31:11 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:31:11 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:31:11.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:31:11 compute-0 nova_compute[250018]: 2026-01-20 15:31:11.726 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:31:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 15:31:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:31:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 20 15:31:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:31:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.003166694293225386 of space, bias 1.0, pg target 0.9500082879676158 quantized to 32 (current 32)
Jan 20 15:31:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:31:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021614147124511445 of space, bias 1.0, pg target 0.6484244137353433 quantized to 32 (current 32)
Jan 20 15:31:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:31:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 20 15:31:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:31:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 20 15:31:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:31:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 20 15:31:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:31:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 15:31:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:31:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 20 15:31:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:31:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 20 15:31:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:31:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 15:31:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:31:11 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 20 15:31:12 compute-0 nova_compute[250018]: 2026-01-20 15:31:12.028 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:31:12 compute-0 ceph-mon[74360]: pgmap v3270: 321 pgs: 321 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.4 MiB/s rd, 1.8 MiB/s wr, 82 op/s
Jan 20 15:31:12 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/3566983214' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:31:12 compute-0 ovn_controller[148666]: 2026-01-20T15:31:12Z|00751|binding|INFO|Releasing lport cd551c37-a4a7-45aa-9507-04cb570a94af from this chassis (sb_readonly=0)
Jan 20 15:31:12 compute-0 NetworkManager[48960]: <info>  [1768923072.5207] manager: (patch-br-int-to-provnet-b62c391b-f7a3-4a38-a0df-72ac0383ca74): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/361)
Jan 20 15:31:12 compute-0 NetworkManager[48960]: <info>  [1768923072.5216] manager: (patch-provnet-b62c391b-f7a3-4a38-a0df-72ac0383ca74-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/362)
Jan 20 15:31:12 compute-0 nova_compute[250018]: 2026-01-20 15:31:12.524 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:31:12 compute-0 nova_compute[250018]: 2026-01-20 15:31:12.585 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:31:12 compute-0 ovn_controller[148666]: 2026-01-20T15:31:12Z|00752|binding|INFO|Releasing lport cd551c37-a4a7-45aa-9507-04cb570a94af from this chassis (sb_readonly=0)
Jan 20 15:31:12 compute-0 nova_compute[250018]: 2026-01-20 15:31:12.588 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:31:12 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:31:12 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:31:12 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:31:12.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:31:12 compute-0 nova_compute[250018]: 2026-01-20 15:31:12.991 250022 DEBUG nova.compute.manager [req-41cfd40a-0b9e-45af-90c9-063ab11939da req-6d40d25e-6a6a-4987-afb1-c83c8bc3e716 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: e974e40a-a93c-491e-b30e-6bb589348dc8] Received event network-changed-a244be5a-cd0d-47e6-bb18-4fc628b6e913 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:31:12 compute-0 nova_compute[250018]: 2026-01-20 15:31:12.991 250022 DEBUG nova.compute.manager [req-41cfd40a-0b9e-45af-90c9-063ab11939da req-6d40d25e-6a6a-4987-afb1-c83c8bc3e716 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: e974e40a-a93c-491e-b30e-6bb589348dc8] Refreshing instance network info cache due to event network-changed-a244be5a-cd0d-47e6-bb18-4fc628b6e913. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 15:31:12 compute-0 nova_compute[250018]: 2026-01-20 15:31:12.991 250022 DEBUG oslo_concurrency.lockutils [req-41cfd40a-0b9e-45af-90c9-063ab11939da req-6d40d25e-6a6a-4987-afb1-c83c8bc3e716 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-e974e40a-a93c-491e-b30e-6bb589348dc8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:31:12 compute-0 nova_compute[250018]: 2026-01-20 15:31:12.991 250022 DEBUG oslo_concurrency.lockutils [req-41cfd40a-0b9e-45af-90c9-063ab11939da req-6d40d25e-6a6a-4987-afb1-c83c8bc3e716 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-e974e40a-a93c-491e-b30e-6bb589348dc8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:31:12 compute-0 nova_compute[250018]: 2026-01-20 15:31:12.992 250022 DEBUG nova.network.neutron [req-41cfd40a-0b9e-45af-90c9-063ab11939da req-6d40d25e-6a6a-4987-afb1-c83c8bc3e716 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: e974e40a-a93c-491e-b30e-6bb589348dc8] Refreshing network info cache for port a244be5a-cd0d-47e6-bb18-4fc628b6e913 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 15:31:13 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3271: 321 pgs: 321 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.4 MiB/s rd, 1.8 MiB/s wr, 82 op/s
Jan 20 15:31:13 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/46401669' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:31:13 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:31:13 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:31:13 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:31:13 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:31:13.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:31:13 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 20 15:31:13 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1528520629' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:31:13 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 20 15:31:13 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1528520629' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:31:14 compute-0 ceph-mon[74360]: pgmap v3271: 321 pgs: 321 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.4 MiB/s rd, 1.8 MiB/s wr, 82 op/s
Jan 20 15:31:14 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/3338679589' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:31:14 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/1528520629' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:31:14 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/1528520629' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:31:14 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:31:14 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:31:14 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:31:14.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:31:14 compute-0 nova_compute[250018]: 2026-01-20 15:31:14.880 250022 DEBUG nova.network.neutron [req-41cfd40a-0b9e-45af-90c9-063ab11939da req-6d40d25e-6a6a-4987-afb1-c83c8bc3e716 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: e974e40a-a93c-491e-b30e-6bb589348dc8] Updated VIF entry in instance network info cache for port a244be5a-cd0d-47e6-bb18-4fc628b6e913. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 15:31:14 compute-0 nova_compute[250018]: 2026-01-20 15:31:14.881 250022 DEBUG nova.network.neutron [req-41cfd40a-0b9e-45af-90c9-063ab11939da req-6d40d25e-6a6a-4987-afb1-c83c8bc3e716 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: e974e40a-a93c-491e-b30e-6bb589348dc8] Updating instance_info_cache with network_info: [{"id": "a244be5a-cd0d-47e6-bb18-4fc628b6e913", "address": "fa:16:3e:c9:e8:d8", "network": {"id": "0228362f-0ced-4cac-bb89-96bd472df47f", "bridge": "br-int", "label": "tempest-network-smoke--197661202", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa244be5a-cd", "ovs_interfaceid": "a244be5a-cd0d-47e6-bb18-4fc628b6e913", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:31:14 compute-0 nova_compute[250018]: 2026-01-20 15:31:14.904 250022 DEBUG oslo_concurrency.lockutils [req-41cfd40a-0b9e-45af-90c9-063ab11939da req-6d40d25e-6a6a-4987-afb1-c83c8bc3e716 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-e974e40a-a93c-491e-b30e-6bb589348dc8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:31:15 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3272: 321 pgs: 321 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Jan 20 15:31:15 compute-0 nova_compute[250018]: 2026-01-20 15:31:15.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:31:15 compute-0 nova_compute[250018]: 2026-01-20 15:31:15.078 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:31:15 compute-0 nova_compute[250018]: 2026-01-20 15:31:15.079 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:31:15 compute-0 nova_compute[250018]: 2026-01-20 15:31:15.079 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:31:15 compute-0 nova_compute[250018]: 2026-01-20 15:31:15.079 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 15:31:15 compute-0 nova_compute[250018]: 2026-01-20 15:31:15.080 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:31:15 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:31:15 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:31:15 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:31:15.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:31:15 compute-0 nova_compute[250018]: 2026-01-20 15:31:15.516 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:31:15 compute-0 nova_compute[250018]: 2026-01-20 15:31:15.594 250022 DEBUG nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] skipping disk for instance-000000cf as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 15:31:15 compute-0 nova_compute[250018]: 2026-01-20 15:31:15.594 250022 DEBUG nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] skipping disk for instance-000000cf as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 15:31:15 compute-0 nova_compute[250018]: 2026-01-20 15:31:15.733 250022 WARNING nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:31:15 compute-0 nova_compute[250018]: 2026-01-20 15:31:15.734 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4045MB free_disk=20.92181396484375GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 15:31:15 compute-0 nova_compute[250018]: 2026-01-20 15:31:15.734 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:31:15 compute-0 nova_compute[250018]: 2026-01-20 15:31:15.734 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:31:15 compute-0 nova_compute[250018]: 2026-01-20 15:31:15.815 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Instance e974e40a-a93c-491e-b30e-6bb589348dc8 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 15:31:15 compute-0 nova_compute[250018]: 2026-01-20 15:31:15.815 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 15:31:15 compute-0 nova_compute[250018]: 2026-01-20 15:31:15.816 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 15:31:15 compute-0 nova_compute[250018]: 2026-01-20 15:31:15.897 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:31:16 compute-0 ceph-mon[74360]: pgmap v3272: 321 pgs: 321 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Jan 20 15:31:16 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/2264811836' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:31:16 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:31:16 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1451184599' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:31:16 compute-0 nova_compute[250018]: 2026-01-20 15:31:16.328 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:31:16 compute-0 nova_compute[250018]: 2026-01-20 15:31:16.334 250022 DEBUG nova.compute.provider_tree [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:31:16 compute-0 nova_compute[250018]: 2026-01-20 15:31:16.391 250022 DEBUG nova.scheduler.client.report [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:31:16 compute-0 nova_compute[250018]: 2026-01-20 15:31:16.415 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 15:31:16 compute-0 nova_compute[250018]: 2026-01-20 15:31:16.416 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.682s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:31:16 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:31:16 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:31:16 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:31:16.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:31:16 compute-0 nova_compute[250018]: 2026-01-20 15:31:16.727 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:31:17 compute-0 nova_compute[250018]: 2026-01-20 15:31:17.029 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:31:17 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3273: 321 pgs: 321 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 841 KiB/s wr, 75 op/s
Jan 20 15:31:17 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/1451184599' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:31:17 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:31:17 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:31:17 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:31:17.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:31:18 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:31:18 compute-0 ceph-mon[74360]: pgmap v3273: 321 pgs: 321 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 841 KiB/s wr, 75 op/s
Jan 20 15:31:18 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:31:18 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:31:18 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:31:18.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:31:19 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3274: 321 pgs: 321 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Jan 20 15:31:19 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:31:19 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:31:19 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:31:19.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:31:19 compute-0 nova_compute[250018]: 2026-01-20 15:31:19.418 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:31:19 compute-0 nova_compute[250018]: 2026-01-20 15:31:19.419 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:31:19 compute-0 nova_compute[250018]: 2026-01-20 15:31:19.419 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 15:31:20 compute-0 nova_compute[250018]: 2026-01-20 15:31:20.052 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:31:20 compute-0 ceph-mon[74360]: pgmap v3274: 321 pgs: 321 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Jan 20 15:31:20 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:31:20 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:31:20 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:31:20.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:31:21 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3275: 321 pgs: 321 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.5 KiB/s wr, 65 op/s
Jan 20 15:31:21 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:31:21 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:31:21 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:31:21.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:31:21 compute-0 nova_compute[250018]: 2026-01-20 15:31:21.729 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:31:22 compute-0 nova_compute[250018]: 2026-01-20 15:31:22.031 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:31:22 compute-0 ovn_controller[148666]: 2026-01-20T15:31:22Z|00102|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c9:e8:d8 10.100.0.7
Jan 20 15:31:22 compute-0 ovn_controller[148666]: 2026-01-20T15:31:22Z|00103|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c9:e8:d8 10.100.0.7
Jan 20 15:31:22 compute-0 ceph-mon[74360]: pgmap v3275: 321 pgs: 321 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.5 KiB/s wr, 65 op/s
Jan 20 15:31:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:31:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:31:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:31:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:31:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:31:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:31:22 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:31:22 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:31:22 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:31:22.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:31:23 compute-0 nova_compute[250018]: 2026-01-20 15:31:23.046 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:31:23 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3276: 321 pgs: 321 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 576 KiB/s rd, 2.3 KiB/s wr, 18 op/s
Jan 20 15:31:23 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:31:23 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:31:23 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:31:23 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:31:23.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:31:24 compute-0 ceph-mon[74360]: pgmap v3276: 321 pgs: 321 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 576 KiB/s rd, 2.3 KiB/s wr, 18 op/s
Jan 20 15:31:24 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:31:24 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:31:24 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:31:24.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:31:25 compute-0 nova_compute[250018]: 2026-01-20 15:31:25.044 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:31:25 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3277: 321 pgs: 321 active+clean; 269 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 687 KiB/s rd, 1.3 MiB/s wr, 57 op/s
Jan 20 15:31:25 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:31:25 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:31:25 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:31:25.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:31:26 compute-0 ceph-mon[74360]: pgmap v3277: 321 pgs: 321 active+clean; 269 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 687 KiB/s rd, 1.3 MiB/s wr, 57 op/s
Jan 20 15:31:26 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:31:26 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:31:26 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:31:26.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:31:26 compute-0 nova_compute[250018]: 2026-01-20 15:31:26.733 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:31:27 compute-0 nova_compute[250018]: 2026-01-20 15:31:27.074 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:31:27 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3278: 321 pgs: 321 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 323 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Jan 20 15:31:27 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:31:27 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:31:27 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:31:27.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:31:28 compute-0 sudo[383733]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:31:28 compute-0 sudo[383733]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:31:28 compute-0 sudo[383733]: pam_unix(sudo:session): session closed for user root
Jan 20 15:31:28 compute-0 sudo[383758]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:31:28 compute-0 sudo[383758]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:31:28 compute-0 sudo[383758]: pam_unix(sudo:session): session closed for user root
Jan 20 15:31:28 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:31:28 compute-0 nova_compute[250018]: 2026-01-20 15:31:28.314 250022 INFO nova.compute.manager [None req-106f3250-1bfc-4982-b5c7-4a909b695b68 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: e974e40a-a93c-491e-b30e-6bb589348dc8] Get console output
Jan 20 15:31:28 compute-0 nova_compute[250018]: 2026-01-20 15:31:28.323 331017 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Jan 20 15:31:28 compute-0 ceph-mon[74360]: pgmap v3278: 321 pgs: 321 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 323 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Jan 20 15:31:28 compute-0 nova_compute[250018]: 2026-01-20 15:31:28.652 250022 DEBUG oslo_concurrency.lockutils [None req-132d8e04-39a9-42c3-9489-3659a1c517a0 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Acquiring lock "e974e40a-a93c-491e-b30e-6bb589348dc8" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:31:28 compute-0 nova_compute[250018]: 2026-01-20 15:31:28.652 250022 DEBUG oslo_concurrency.lockutils [None req-132d8e04-39a9-42c3-9489-3659a1c517a0 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "e974e40a-a93c-491e-b30e-6bb589348dc8" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:31:28 compute-0 nova_compute[250018]: 2026-01-20 15:31:28.652 250022 DEBUG oslo_concurrency.lockutils [None req-132d8e04-39a9-42c3-9489-3659a1c517a0 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Acquiring lock "e974e40a-a93c-491e-b30e-6bb589348dc8-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:31:28 compute-0 nova_compute[250018]: 2026-01-20 15:31:28.653 250022 DEBUG oslo_concurrency.lockutils [None req-132d8e04-39a9-42c3-9489-3659a1c517a0 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "e974e40a-a93c-491e-b30e-6bb589348dc8-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:31:28 compute-0 nova_compute[250018]: 2026-01-20 15:31:28.653 250022 DEBUG oslo_concurrency.lockutils [None req-132d8e04-39a9-42c3-9489-3659a1c517a0 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "e974e40a-a93c-491e-b30e-6bb589348dc8-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:31:28 compute-0 nova_compute[250018]: 2026-01-20 15:31:28.655 250022 INFO nova.compute.manager [None req-132d8e04-39a9-42c3-9489-3659a1c517a0 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: e974e40a-a93c-491e-b30e-6bb589348dc8] Terminating instance
Jan 20 15:31:28 compute-0 nova_compute[250018]: 2026-01-20 15:31:28.656 250022 DEBUG nova.compute.manager [None req-132d8e04-39a9-42c3-9489-3659a1c517a0 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: e974e40a-a93c-491e-b30e-6bb589348dc8] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 15:31:28 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:31:28 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:31:28 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:31:28.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:31:28 compute-0 kernel: tapa244be5a-cd (unregistering): left promiscuous mode
Jan 20 15:31:28 compute-0 NetworkManager[48960]: <info>  [1768923088.7138] device (tapa244be5a-cd): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 15:31:28 compute-0 nova_compute[250018]: 2026-01-20 15:31:28.719 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:31:28 compute-0 ovn_controller[148666]: 2026-01-20T15:31:28Z|00753|binding|INFO|Releasing lport a244be5a-cd0d-47e6-bb18-4fc628b6e913 from this chassis (sb_readonly=0)
Jan 20 15:31:28 compute-0 ovn_controller[148666]: 2026-01-20T15:31:28Z|00754|binding|INFO|Setting lport a244be5a-cd0d-47e6-bb18-4fc628b6e913 down in Southbound
Jan 20 15:31:28 compute-0 ovn_controller[148666]: 2026-01-20T15:31:28Z|00755|binding|INFO|Removing iface tapa244be5a-cd ovn-installed in OVS
Jan 20 15:31:28 compute-0 nova_compute[250018]: 2026-01-20 15:31:28.722 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:31:28 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:31:28.727 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c9:e8:d8 10.100.0.7'], port_security=['fa:16:3e:c9:e8:d8 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'e974e40a-a93c-491e-b30e-6bb589348dc8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0228362f-0ced-4cac-bb89-96bd472df47f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3168f57421fb49bfb94b85daedd1fe7d', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8928ac80-99a4-4162-8778-e854bb24c9a4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.250'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=306abd7d-c001-4e00-b2a1-8a251fd6a022, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=a244be5a-cd0d-47e6-bb18-4fc628b6e913) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:31:28 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:31:28.729 160071 INFO neutron.agent.ovn.metadata.agent [-] Port a244be5a-cd0d-47e6-bb18-4fc628b6e913 in datapath 0228362f-0ced-4cac-bb89-96bd472df47f unbound from our chassis
Jan 20 15:31:28 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:31:28.729 160071 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0228362f-0ced-4cac-bb89-96bd472df47f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 15:31:28 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:31:28.731 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[61ecac69-deb4-4a90-b3b8-1cbc459d116a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:31:28 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:31:28.731 160071 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-0228362f-0ced-4cac-bb89-96bd472df47f namespace which is not needed anymore
Jan 20 15:31:28 compute-0 nova_compute[250018]: 2026-01-20 15:31:28.737 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:31:28 compute-0 systemd[1]: machine-qemu\x2d90\x2dinstance\x2d000000cf.scope: Deactivated successfully.
Jan 20 15:31:28 compute-0 systemd[1]: machine-qemu\x2d90\x2dinstance\x2d000000cf.scope: Consumed 14.635s CPU time.
Jan 20 15:31:28 compute-0 systemd-machined[216401]: Machine qemu-90-instance-000000cf terminated.
Jan 20 15:31:28 compute-0 podman[383789]: 2026-01-20 15:31:28.812524751 +0000 UTC m=+0.063011259 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 20 15:31:28 compute-0 podman[383786]: 2026-01-20 15:31:28.841955113 +0000 UTC m=+0.093431078 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 20 15:31:28 compute-0 neutron-haproxy-ovnmeta-0228362f-0ced-4cac-bb89-96bd472df47f[383661]: [NOTICE]   (383665) : haproxy version is 2.8.14-c23fe91
Jan 20 15:31:28 compute-0 neutron-haproxy-ovnmeta-0228362f-0ced-4cac-bb89-96bd472df47f[383661]: [NOTICE]   (383665) : path to executable is /usr/sbin/haproxy
Jan 20 15:31:28 compute-0 neutron-haproxy-ovnmeta-0228362f-0ced-4cac-bb89-96bd472df47f[383661]: [WARNING]  (383665) : Exiting Master process...
Jan 20 15:31:28 compute-0 neutron-haproxy-ovnmeta-0228362f-0ced-4cac-bb89-96bd472df47f[383661]: [WARNING]  (383665) : Exiting Master process...
Jan 20 15:31:28 compute-0 neutron-haproxy-ovnmeta-0228362f-0ced-4cac-bb89-96bd472df47f[383661]: [ALERT]    (383665) : Current worker (383667) exited with code 143 (Terminated)
Jan 20 15:31:28 compute-0 neutron-haproxy-ovnmeta-0228362f-0ced-4cac-bb89-96bd472df47f[383661]: [WARNING]  (383665) : All workers exited. Exiting... (0)
Jan 20 15:31:28 compute-0 systemd[1]: libpod-6dc2041d654f0e10b5098079238a9f279e586a97b6aab940ca22e13dd7178591.scope: Deactivated successfully.
Jan 20 15:31:28 compute-0 conmon[383661]: conmon 6dc2041d654f0e10b509 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6dc2041d654f0e10b5098079238a9f279e586a97b6aab940ca22e13dd7178591.scope/container/memory.events
Jan 20 15:31:28 compute-0 nova_compute[250018]: 2026-01-20 15:31:28.879 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:31:28 compute-0 podman[383844]: 2026-01-20 15:31:28.882964461 +0000 UTC m=+0.050720154 container died 6dc2041d654f0e10b5098079238a9f279e586a97b6aab940ca22e13dd7178591 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0228362f-0ced-4cac-bb89-96bd472df47f, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2)
Jan 20 15:31:28 compute-0 nova_compute[250018]: 2026-01-20 15:31:28.885 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:31:28 compute-0 nova_compute[250018]: 2026-01-20 15:31:28.895 250022 INFO nova.virt.libvirt.driver [-] [instance: e974e40a-a93c-491e-b30e-6bb589348dc8] Instance destroyed successfully.
Jan 20 15:31:28 compute-0 nova_compute[250018]: 2026-01-20 15:31:28.896 250022 DEBUG nova.objects.instance [None req-132d8e04-39a9-42c3-9489-3659a1c517a0 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lazy-loading 'resources' on Instance uuid e974e40a-a93c-491e-b30e-6bb589348dc8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:31:28 compute-0 nova_compute[250018]: 2026-01-20 15:31:28.911 250022 DEBUG nova.virt.libvirt.vif [None req-132d8e04-39a9-42c3-9489-3659a1c517a0 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T15:30:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-921631331',display_name='tempest-TestNetworkBasicOps-server-921631331',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-921631331',id=207,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEAqDxVBp2h33eXX3Z4Y6GndT0eZUbki+q/rsKyZMhEDLXEtj5ICn9UzsdwtliCCgWYBy3qMABSYl4IjBd7AXsPnZQPt4zv3JvdUZsACexftU/zkJ1lgqA5ni1oDpXzkXw==',key_name='tempest-TestNetworkBasicOps-245405952',keypairs=<?>,launch_index=0,launched_at=2026-01-20T15:31:08Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3168f57421fb49bfb94b85daedd1fe7d',ramdisk_id='',reservation_id='r-76m8m06w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-807695970',owner_user_name='tempest-TestNetworkBasicOps-807695970-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T15:31:08Z,user_data=None,user_id='5338aa65dc0e4326a66ce79053787f14',uuid=e974e40a-a93c-491e-b30e-6bb589348dc8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a244be5a-cd0d-47e6-bb18-4fc628b6e913", "address": "fa:16:3e:c9:e8:d8", "network": {"id": "0228362f-0ced-4cac-bb89-96bd472df47f", "bridge": "br-int", "label": "tempest-network-smoke--197661202", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa244be5a-cd", "ovs_interfaceid": "a244be5a-cd0d-47e6-bb18-4fc628b6e913", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 15:31:28 compute-0 nova_compute[250018]: 2026-01-20 15:31:28.912 250022 DEBUG nova.network.os_vif_util [None req-132d8e04-39a9-42c3-9489-3659a1c517a0 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Converting VIF {"id": "a244be5a-cd0d-47e6-bb18-4fc628b6e913", "address": "fa:16:3e:c9:e8:d8", "network": {"id": "0228362f-0ced-4cac-bb89-96bd472df47f", "bridge": "br-int", "label": "tempest-network-smoke--197661202", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa244be5a-cd", "ovs_interfaceid": "a244be5a-cd0d-47e6-bb18-4fc628b6e913", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:31:28 compute-0 nova_compute[250018]: 2026-01-20 15:31:28.913 250022 DEBUG nova.network.os_vif_util [None req-132d8e04-39a9-42c3-9489-3659a1c517a0 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c9:e8:d8,bridge_name='br-int',has_traffic_filtering=True,id=a244be5a-cd0d-47e6-bb18-4fc628b6e913,network=Network(0228362f-0ced-4cac-bb89-96bd472df47f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa244be5a-cd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:31:28 compute-0 nova_compute[250018]: 2026-01-20 15:31:28.913 250022 DEBUG os_vif [None req-132d8e04-39a9-42c3-9489-3659a1c517a0 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c9:e8:d8,bridge_name='br-int',has_traffic_filtering=True,id=a244be5a-cd0d-47e6-bb18-4fc628b6e913,network=Network(0228362f-0ced-4cac-bb89-96bd472df47f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa244be5a-cd') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 15:31:28 compute-0 nova_compute[250018]: 2026-01-20 15:31:28.916 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:31:28 compute-0 nova_compute[250018]: 2026-01-20 15:31:28.916 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa244be5a-cd, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:31:28 compute-0 nova_compute[250018]: 2026-01-20 15:31:28.918 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:31:28 compute-0 nova_compute[250018]: 2026-01-20 15:31:28.919 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:31:28 compute-0 nova_compute[250018]: 2026-01-20 15:31:28.922 250022 INFO os_vif [None req-132d8e04-39a9-42c3-9489-3659a1c517a0 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c9:e8:d8,bridge_name='br-int',has_traffic_filtering=True,id=a244be5a-cd0d-47e6-bb18-4fc628b6e913,network=Network(0228362f-0ced-4cac-bb89-96bd472df47f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa244be5a-cd')
Jan 20 15:31:28 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6dc2041d654f0e10b5098079238a9f279e586a97b6aab940ca22e13dd7178591-userdata-shm.mount: Deactivated successfully.
Jan 20 15:31:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-a438908fc7859bd0392e23aa372ff17571f4c60f917a88b37e225b83311d96fa-merged.mount: Deactivated successfully.
Jan 20 15:31:28 compute-0 podman[383844]: 2026-01-20 15:31:28.974203048 +0000 UTC m=+0.141958741 container cleanup 6dc2041d654f0e10b5098079238a9f279e586a97b6aab940ca22e13dd7178591 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0228362f-0ced-4cac-bb89-96bd472df47f, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 15:31:28 compute-0 systemd[1]: libpod-conmon-6dc2041d654f0e10b5098079238a9f279e586a97b6aab940ca22e13dd7178591.scope: Deactivated successfully.
Jan 20 15:31:29 compute-0 nova_compute[250018]: 2026-01-20 15:31:29.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:31:29 compute-0 podman[383906]: 2026-01-20 15:31:29.095222557 +0000 UTC m=+0.096085260 container remove 6dc2041d654f0e10b5098079238a9f279e586a97b6aab940ca22e13dd7178591 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0228362f-0ced-4cac-bb89-96bd472df47f, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Jan 20 15:31:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:31:29.100 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[ccfbfee5-a504-42fd-9657-8cc5422f31ca]: (4, ('Tue Jan 20 03:31:28 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-0228362f-0ced-4cac-bb89-96bd472df47f (6dc2041d654f0e10b5098079238a9f279e586a97b6aab940ca22e13dd7178591)\n6dc2041d654f0e10b5098079238a9f279e586a97b6aab940ca22e13dd7178591\nTue Jan 20 03:31:28 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-0228362f-0ced-4cac-bb89-96bd472df47f (6dc2041d654f0e10b5098079238a9f279e586a97b6aab940ca22e13dd7178591)\n6dc2041d654f0e10b5098079238a9f279e586a97b6aab940ca22e13dd7178591\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:31:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:31:29.102 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[34ef1311-eda8-4fc3-805a-1bd8cda6b75e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:31:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:31:29.103 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0228362f-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:31:29 compute-0 nova_compute[250018]: 2026-01-20 15:31:29.104 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:31:29 compute-0 kernel: tap0228362f-00: left promiscuous mode
Jan 20 15:31:29 compute-0 nova_compute[250018]: 2026-01-20 15:31:29.116 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:31:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:31:29.120 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[8597d286-7da5-4c06-b97c-4aab5dd10c99]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:31:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:31:29.134 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[52ac5a4b-1231-4943-ac66-e262d7ff99d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:31:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:31:29.135 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[c9101064-a55b-4522-9a27-1e7a8be9b580]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:31:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:31:29.150 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[dc99695f-c6e4-4dc1-bfe0-92bd36637440]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 900896, 'reachable_time': 21432, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 383921, 'error': None, 'target': 'ovnmeta-0228362f-0ced-4cac-bb89-96bd472df47f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:31:29 compute-0 systemd[1]: run-netns-ovnmeta\x2d0228362f\x2d0ced\x2d4cac\x2dbb89\x2d96bd472df47f.mount: Deactivated successfully.
Jan 20 15:31:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:31:29.155 160517 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-0228362f-0ced-4cac-bb89-96bd472df47f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 15:31:29 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:31:29.155 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[b74b5849-ad13-4dde-8003-c07ed49c84fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:31:29 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3279: 321 pgs: 321 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 323 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Jan 20 15:31:29 compute-0 nova_compute[250018]: 2026-01-20 15:31:29.374 250022 INFO nova.virt.libvirt.driver [None req-132d8e04-39a9-42c3-9489-3659a1c517a0 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: e974e40a-a93c-491e-b30e-6bb589348dc8] Deleting instance files /var/lib/nova/instances/e974e40a-a93c-491e-b30e-6bb589348dc8_del
Jan 20 15:31:29 compute-0 nova_compute[250018]: 2026-01-20 15:31:29.376 250022 INFO nova.virt.libvirt.driver [None req-132d8e04-39a9-42c3-9489-3659a1c517a0 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: e974e40a-a93c-491e-b30e-6bb589348dc8] Deletion of /var/lib/nova/instances/e974e40a-a93c-491e-b30e-6bb589348dc8_del complete
Jan 20 15:31:29 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:31:29 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:31:29 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:31:29.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:31:29 compute-0 nova_compute[250018]: 2026-01-20 15:31:29.444 250022 INFO nova.compute.manager [None req-132d8e04-39a9-42c3-9489-3659a1c517a0 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: e974e40a-a93c-491e-b30e-6bb589348dc8] Took 0.79 seconds to destroy the instance on the hypervisor.
Jan 20 15:31:29 compute-0 nova_compute[250018]: 2026-01-20 15:31:29.446 250022 DEBUG oslo.service.loopingcall [None req-132d8e04-39a9-42c3-9489-3659a1c517a0 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 15:31:29 compute-0 nova_compute[250018]: 2026-01-20 15:31:29.447 250022 DEBUG nova.compute.manager [-] [instance: e974e40a-a93c-491e-b30e-6bb589348dc8] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 15:31:29 compute-0 nova_compute[250018]: 2026-01-20 15:31:29.447 250022 DEBUG nova.network.neutron [-] [instance: e974e40a-a93c-491e-b30e-6bb589348dc8] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 15:31:29 compute-0 nova_compute[250018]: 2026-01-20 15:31:29.854 250022 DEBUG nova.compute.manager [req-ddd59124-2cff-416d-9269-fee04b135771 req-698257e1-e42e-49b4-b451-77d01776b941 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: e974e40a-a93c-491e-b30e-6bb589348dc8] Received event network-vif-unplugged-a244be5a-cd0d-47e6-bb18-4fc628b6e913 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:31:29 compute-0 nova_compute[250018]: 2026-01-20 15:31:29.855 250022 DEBUG oslo_concurrency.lockutils [req-ddd59124-2cff-416d-9269-fee04b135771 req-698257e1-e42e-49b4-b451-77d01776b941 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "e974e40a-a93c-491e-b30e-6bb589348dc8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:31:29 compute-0 nova_compute[250018]: 2026-01-20 15:31:29.855 250022 DEBUG oslo_concurrency.lockutils [req-ddd59124-2cff-416d-9269-fee04b135771 req-698257e1-e42e-49b4-b451-77d01776b941 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "e974e40a-a93c-491e-b30e-6bb589348dc8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:31:29 compute-0 nova_compute[250018]: 2026-01-20 15:31:29.856 250022 DEBUG oslo_concurrency.lockutils [req-ddd59124-2cff-416d-9269-fee04b135771 req-698257e1-e42e-49b4-b451-77d01776b941 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "e974e40a-a93c-491e-b30e-6bb589348dc8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:31:29 compute-0 nova_compute[250018]: 2026-01-20 15:31:29.856 250022 DEBUG nova.compute.manager [req-ddd59124-2cff-416d-9269-fee04b135771 req-698257e1-e42e-49b4-b451-77d01776b941 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: e974e40a-a93c-491e-b30e-6bb589348dc8] No waiting events found dispatching network-vif-unplugged-a244be5a-cd0d-47e6-bb18-4fc628b6e913 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:31:29 compute-0 nova_compute[250018]: 2026-01-20 15:31:29.856 250022 DEBUG nova.compute.manager [req-ddd59124-2cff-416d-9269-fee04b135771 req-698257e1-e42e-49b4-b451-77d01776b941 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: e974e40a-a93c-491e-b30e-6bb589348dc8] Received event network-vif-unplugged-a244be5a-cd0d-47e6-bb18-4fc628b6e913 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 20 15:31:30 compute-0 ceph-mon[74360]: pgmap v3279: 321 pgs: 321 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 323 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Jan 20 15:31:30 compute-0 sshd-session[383784]: Invalid user user from 134.122.57.138 port 34910
Jan 20 15:31:30 compute-0 nova_compute[250018]: 2026-01-20 15:31:30.637 250022 DEBUG nova.network.neutron [-] [instance: e974e40a-a93c-491e-b30e-6bb589348dc8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:31:30 compute-0 nova_compute[250018]: 2026-01-20 15:31:30.656 250022 INFO nova.compute.manager [-] [instance: e974e40a-a93c-491e-b30e-6bb589348dc8] Took 1.21 seconds to deallocate network for instance.
Jan 20 15:31:30 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:31:30 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:31:30 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:31:30.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:31:30 compute-0 nova_compute[250018]: 2026-01-20 15:31:30.708 250022 DEBUG oslo_concurrency.lockutils [None req-132d8e04-39a9-42c3-9489-3659a1c517a0 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:31:30 compute-0 nova_compute[250018]: 2026-01-20 15:31:30.709 250022 DEBUG oslo_concurrency.lockutils [None req-132d8e04-39a9-42c3-9489-3659a1c517a0 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:31:30 compute-0 sshd-session[383784]: Connection closed by invalid user user 134.122.57.138 port 34910 [preauth]
Jan 20 15:31:30 compute-0 nova_compute[250018]: 2026-01-20 15:31:30.779 250022 DEBUG nova.compute.manager [req-361a634b-bbbc-480a-a846-928eec66189e req-2d7e251f-a38c-4365-9d73-3fed76da9618 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: e974e40a-a93c-491e-b30e-6bb589348dc8] Received event network-vif-deleted-a244be5a-cd0d-47e6-bb18-4fc628b6e913 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:31:30 compute-0 nova_compute[250018]: 2026-01-20 15:31:30.782 250022 DEBUG oslo_concurrency.processutils [None req-132d8e04-39a9-42c3-9489-3659a1c517a0 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:31:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:31:30.801 160071 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:31:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:31:30.802 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:31:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:31:30.802 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:31:31 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3280: 321 pgs: 321 active+clean; 215 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 328 KiB/s rd, 2.1 MiB/s wr, 72 op/s
Jan 20 15:31:31 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:31:31 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4116014989' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:31:31 compute-0 nova_compute[250018]: 2026-01-20 15:31:31.214 250022 DEBUG oslo_concurrency.processutils [None req-132d8e04-39a9-42c3-9489-3659a1c517a0 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:31:31 compute-0 nova_compute[250018]: 2026-01-20 15:31:31.220 250022 DEBUG nova.compute.provider_tree [None req-132d8e04-39a9-42c3-9489-3659a1c517a0 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:31:31 compute-0 nova_compute[250018]: 2026-01-20 15:31:31.243 250022 DEBUG nova.scheduler.client.report [None req-132d8e04-39a9-42c3-9489-3659a1c517a0 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:31:31 compute-0 nova_compute[250018]: 2026-01-20 15:31:31.312 250022 DEBUG oslo_concurrency.lockutils [None req-132d8e04-39a9-42c3-9489-3659a1c517a0 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.602s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:31:31 compute-0 nova_compute[250018]: 2026-01-20 15:31:31.346 250022 INFO nova.scheduler.client.report [None req-132d8e04-39a9-42c3-9489-3659a1c517a0 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Deleted allocations for instance e974e40a-a93c-491e-b30e-6bb589348dc8
Jan 20 15:31:31 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:31:31 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:31:31 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:31:31.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:31:31 compute-0 nova_compute[250018]: 2026-01-20 15:31:31.423 250022 DEBUG oslo_concurrency.lockutils [None req-132d8e04-39a9-42c3-9489-3659a1c517a0 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "e974e40a-a93c-491e-b30e-6bb589348dc8" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.770s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:31:31 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/4116014989' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:31:31 compute-0 nova_compute[250018]: 2026-01-20 15:31:31.736 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:31:31 compute-0 nova_compute[250018]: 2026-01-20 15:31:31.938 250022 DEBUG nova.compute.manager [req-9807d132-6206-45be-babc-daf38875ad41 req-8232eb78-7281-4192-b0f4-05450517f5d9 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: e974e40a-a93c-491e-b30e-6bb589348dc8] Received event network-vif-plugged-a244be5a-cd0d-47e6-bb18-4fc628b6e913 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:31:31 compute-0 nova_compute[250018]: 2026-01-20 15:31:31.939 250022 DEBUG oslo_concurrency.lockutils [req-9807d132-6206-45be-babc-daf38875ad41 req-8232eb78-7281-4192-b0f4-05450517f5d9 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "e974e40a-a93c-491e-b30e-6bb589348dc8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:31:31 compute-0 nova_compute[250018]: 2026-01-20 15:31:31.939 250022 DEBUG oslo_concurrency.lockutils [req-9807d132-6206-45be-babc-daf38875ad41 req-8232eb78-7281-4192-b0f4-05450517f5d9 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "e974e40a-a93c-491e-b30e-6bb589348dc8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:31:31 compute-0 nova_compute[250018]: 2026-01-20 15:31:31.939 250022 DEBUG oslo_concurrency.lockutils [req-9807d132-6206-45be-babc-daf38875ad41 req-8232eb78-7281-4192-b0f4-05450517f5d9 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "e974e40a-a93c-491e-b30e-6bb589348dc8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:31:31 compute-0 nova_compute[250018]: 2026-01-20 15:31:31.940 250022 DEBUG nova.compute.manager [req-9807d132-6206-45be-babc-daf38875ad41 req-8232eb78-7281-4192-b0f4-05450517f5d9 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: e974e40a-a93c-491e-b30e-6bb589348dc8] No waiting events found dispatching network-vif-plugged-a244be5a-cd0d-47e6-bb18-4fc628b6e913 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:31:31 compute-0 nova_compute[250018]: 2026-01-20 15:31:31.940 250022 WARNING nova.compute.manager [req-9807d132-6206-45be-babc-daf38875ad41 req-8232eb78-7281-4192-b0f4-05450517f5d9 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: e974e40a-a93c-491e-b30e-6bb589348dc8] Received unexpected event network-vif-plugged-a244be5a-cd0d-47e6-bb18-4fc628b6e913 for instance with vm_state deleted and task_state None.
Jan 20 15:31:32 compute-0 ceph-mon[74360]: pgmap v3280: 321 pgs: 321 active+clean; 215 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 328 KiB/s rd, 2.1 MiB/s wr, 72 op/s
Jan 20 15:31:32 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:31:32 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:31:32 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:31:32.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:31:33 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3281: 321 pgs: 321 active+clean; 215 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 331 KiB/s rd, 2.2 MiB/s wr, 73 op/s
Jan 20 15:31:33 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:31:33 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:31:33 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:31:33 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:31:33.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:31:33 compute-0 nova_compute[250018]: 2026-01-20 15:31:33.920 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:31:34 compute-0 nova_compute[250018]: 2026-01-20 15:31:34.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:31:34 compute-0 nova_compute[250018]: 2026-01-20 15:31:34.051 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 15:31:34 compute-0 nova_compute[250018]: 2026-01-20 15:31:34.051 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 15:31:34 compute-0 nova_compute[250018]: 2026-01-20 15:31:34.077 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 20 15:31:34 compute-0 ceph-mon[74360]: pgmap v3281: 321 pgs: 321 active+clean; 215 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 331 KiB/s rd, 2.2 MiB/s wr, 73 op/s
Jan 20 15:31:34 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:31:34 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:31:34 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:31:34.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:31:35 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3282: 321 pgs: 321 active+clean; 169 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 347 KiB/s rd, 2.2 MiB/s wr, 95 op/s
Jan 20 15:31:35 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:31:35 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:31:35 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:31:35.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:31:36 compute-0 ceph-mon[74360]: pgmap v3282: 321 pgs: 321 active+clean; 169 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 347 KiB/s rd, 2.2 MiB/s wr, 95 op/s
Jan 20 15:31:36 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/768340541' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:31:36 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:31:36 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:31:36 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:31:36.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:31:36 compute-0 nova_compute[250018]: 2026-01-20 15:31:36.779 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:31:37 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3283: 321 pgs: 321 active+clean; 139 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 245 KiB/s rd, 881 KiB/s wr, 69 op/s
Jan 20 15:31:37 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:31:37 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:31:37 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:31:37.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:31:38 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:31:38 compute-0 ceph-mon[74360]: pgmap v3283: 321 pgs: 321 active+clean; 139 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 245 KiB/s rd, 881 KiB/s wr, 69 op/s
Jan 20 15:31:38 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:31:38 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:31:38 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:31:38.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:31:38 compute-0 nova_compute[250018]: 2026-01-20 15:31:38.967 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:31:39 compute-0 sudo[383950]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:31:39 compute-0 sudo[383950]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:31:39 compute-0 sudo[383950]: pam_unix(sudo:session): session closed for user root
Jan 20 15:31:39 compute-0 sudo[383975]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:31:39 compute-0 sudo[383975]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:31:39 compute-0 sudo[383975]: pam_unix(sudo:session): session closed for user root
Jan 20 15:31:39 compute-0 sudo[384000]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:31:39 compute-0 sudo[384000]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:31:39 compute-0 sudo[384000]: pam_unix(sudo:session): session closed for user root
Jan 20 15:31:39 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3284: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 38 KiB/s rd, 23 KiB/s wr, 57 op/s
Jan 20 15:31:39 compute-0 sudo[384025]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 15:31:39 compute-0 sudo[384025]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:31:39 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:31:39 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:31:39 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:31:39.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:31:39 compute-0 sudo[384025]: pam_unix(sudo:session): session closed for user root
Jan 20 15:31:39 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Jan 20 15:31:39 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 20 15:31:39 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 15:31:39 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:31:39 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 20 15:31:39 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 15:31:39 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 20 15:31:39 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:31:39 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev c586d8c3-80e8-4ca4-a5bc-47f2fb18693d does not exist
Jan 20 15:31:39 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 52595e92-8796-4249-8be4-31fefcf2c08a does not exist
Jan 20 15:31:39 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev d86f641e-d79b-4fb0-9aed-307dc58aa791 does not exist
Jan 20 15:31:39 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 20 15:31:39 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 15:31:39 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 20 15:31:39 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 15:31:39 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 15:31:39 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:31:39 compute-0 sudo[384081]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:31:39 compute-0 sudo[384081]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:31:39 compute-0 sudo[384081]: pam_unix(sudo:session): session closed for user root
Jan 20 15:31:39 compute-0 sudo[384106]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:31:39 compute-0 sudo[384106]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:31:39 compute-0 sudo[384106]: pam_unix(sudo:session): session closed for user root
Jan 20 15:31:39 compute-0 sudo[384131]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:31:39 compute-0 sudo[384131]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:31:39 compute-0 sudo[384131]: pam_unix(sudo:session): session closed for user root
Jan 20 15:31:40 compute-0 sudo[384156]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 15:31:40 compute-0 sudo[384156]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:31:40 compute-0 podman[384222]: 2026-01-20 15:31:40.363400827 +0000 UTC m=+0.046443156 container create fedc6f1bce82e2b72d13b797c106989d8684e3b0d60b30542224fd8ed2275771 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_hopper, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 20 15:31:40 compute-0 systemd[1]: Started libpod-conmon-fedc6f1bce82e2b72d13b797c106989d8684e3b0d60b30542224fd8ed2275771.scope.
Jan 20 15:31:40 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:31:40 compute-0 podman[384222]: 2026-01-20 15:31:40.341360957 +0000 UTC m=+0.024403256 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:31:40 compute-0 podman[384222]: 2026-01-20 15:31:40.445587158 +0000 UTC m=+0.128629547 container init fedc6f1bce82e2b72d13b797c106989d8684e3b0d60b30542224fd8ed2275771 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_hopper, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 15:31:40 compute-0 podman[384222]: 2026-01-20 15:31:40.453225597 +0000 UTC m=+0.136267896 container start fedc6f1bce82e2b72d13b797c106989d8684e3b0d60b30542224fd8ed2275771 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_hopper, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 20 15:31:40 compute-0 podman[384222]: 2026-01-20 15:31:40.456482385 +0000 UTC m=+0.139524714 container attach fedc6f1bce82e2b72d13b797c106989d8684e3b0d60b30542224fd8ed2275771 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_hopper, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 20 15:31:40 compute-0 kind_hopper[384238]: 167 167
Jan 20 15:31:40 compute-0 systemd[1]: libpod-fedc6f1bce82e2b72d13b797c106989d8684e3b0d60b30542224fd8ed2275771.scope: Deactivated successfully.
Jan 20 15:31:40 compute-0 podman[384222]: 2026-01-20 15:31:40.460290829 +0000 UTC m=+0.143333128 container died fedc6f1bce82e2b72d13b797c106989d8684e3b0d60b30542224fd8ed2275771 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_hopper, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 15:31:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-323fac1090c8c61f84ffc24037a1e765e478315f0e9b31f2705ef2f6d26c7b88-merged.mount: Deactivated successfully.
Jan 20 15:31:40 compute-0 podman[384222]: 2026-01-20 15:31:40.494928313 +0000 UTC m=+0.177970612 container remove fedc6f1bce82e2b72d13b797c106989d8684e3b0d60b30542224fd8ed2275771 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_hopper, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 15:31:40 compute-0 systemd[1]: libpod-conmon-fedc6f1bce82e2b72d13b797c106989d8684e3b0d60b30542224fd8ed2275771.scope: Deactivated successfully.
Jan 20 15:31:40 compute-0 ceph-mon[74360]: pgmap v3284: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 38 KiB/s rd, 23 KiB/s wr, 57 op/s
Jan 20 15:31:40 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 20 15:31:40 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:31:40 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 15:31:40 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:31:40 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 15:31:40 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 15:31:40 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:31:40 compute-0 podman[384260]: 2026-01-20 15:31:40.640904743 +0000 UTC m=+0.036110096 container create 8cde33c7edd279ac516cf71841b675bc44ce1c92c33a90d76d9b422ca9650248 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_cray, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 20 15:31:40 compute-0 systemd[1]: Started libpod-conmon-8cde33c7edd279ac516cf71841b675bc44ce1c92c33a90d76d9b422ca9650248.scope.
Jan 20 15:31:40 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:31:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfd17104afdab9925b452612bbba7cb93945159077f284856d83e58b0dede19b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 15:31:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfd17104afdab9925b452612bbba7cb93945159077f284856d83e58b0dede19b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 15:31:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfd17104afdab9925b452612bbba7cb93945159077f284856d83e58b0dede19b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 15:31:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfd17104afdab9925b452612bbba7cb93945159077f284856d83e58b0dede19b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 15:31:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfd17104afdab9925b452612bbba7cb93945159077f284856d83e58b0dede19b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 15:31:40 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:31:40 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:31:40 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:31:40.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:31:40 compute-0 podman[384260]: 2026-01-20 15:31:40.719366562 +0000 UTC m=+0.114571925 container init 8cde33c7edd279ac516cf71841b675bc44ce1c92c33a90d76d9b422ca9650248 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_cray, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 15:31:40 compute-0 podman[384260]: 2026-01-20 15:31:40.625663687 +0000 UTC m=+0.020869060 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:31:40 compute-0 podman[384260]: 2026-01-20 15:31:40.730972828 +0000 UTC m=+0.126178181 container start 8cde33c7edd279ac516cf71841b675bc44ce1c92c33a90d76d9b422ca9650248 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_cray, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 15:31:40 compute-0 podman[384260]: 2026-01-20 15:31:40.735038248 +0000 UTC m=+0.130243611 container attach 8cde33c7edd279ac516cf71841b675bc44ce1c92c33a90d76d9b422ca9650248 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_cray, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 15:31:40 compute-0 nova_compute[250018]: 2026-01-20 15:31:40.932 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:31:41 compute-0 nova_compute[250018]: 2026-01-20 15:31:41.008 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:31:41 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3285: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 38 KiB/s rd, 23 KiB/s wr, 57 op/s
Jan 20 15:31:41 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:31:41 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:31:41 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:31:41.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:31:41 compute-0 elastic_cray[384277]: --> passed data devices: 0 physical, 1 LVM
Jan 20 15:31:41 compute-0 elastic_cray[384277]: --> relative data size: 1.0
Jan 20 15:31:41 compute-0 elastic_cray[384277]: --> All data devices are unavailable
Jan 20 15:31:41 compute-0 systemd[1]: libpod-8cde33c7edd279ac516cf71841b675bc44ce1c92c33a90d76d9b422ca9650248.scope: Deactivated successfully.
Jan 20 15:31:41 compute-0 podman[384293]: 2026-01-20 15:31:41.627609888 +0000 UTC m=+0.022372661 container died 8cde33c7edd279ac516cf71841b675bc44ce1c92c33a90d76d9b422ca9650248 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_cray, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 15:31:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-bfd17104afdab9925b452612bbba7cb93945159077f284856d83e58b0dede19b-merged.mount: Deactivated successfully.
Jan 20 15:31:41 compute-0 podman[384293]: 2026-01-20 15:31:41.670416906 +0000 UTC m=+0.065179689 container remove 8cde33c7edd279ac516cf71841b675bc44ce1c92c33a90d76d9b422ca9650248 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_cray, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 20 15:31:41 compute-0 systemd[1]: libpod-conmon-8cde33c7edd279ac516cf71841b675bc44ce1c92c33a90d76d9b422ca9650248.scope: Deactivated successfully.
Jan 20 15:31:41 compute-0 nova_compute[250018]: 2026-01-20 15:31:41.687 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:31:41 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:31:41.688 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=77, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=76) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:31:41 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:31:41.689 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 15:31:41 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:31:41.690 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=367c1a2c-b16a-4828-ab5a-626bb50023b4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '77'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:31:41 compute-0 sudo[384156]: pam_unix(sudo:session): session closed for user root
Jan 20 15:31:41 compute-0 sudo[384308]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:31:41 compute-0 sudo[384308]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:31:41 compute-0 sudo[384308]: pam_unix(sudo:session): session closed for user root
Jan 20 15:31:41 compute-0 nova_compute[250018]: 2026-01-20 15:31:41.781 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:31:41 compute-0 sudo[384333]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:31:41 compute-0 sudo[384333]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:31:41 compute-0 sudo[384333]: pam_unix(sudo:session): session closed for user root
Jan 20 15:31:41 compute-0 sudo[384358]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:31:41 compute-0 sudo[384358]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:31:41 compute-0 sudo[384358]: pam_unix(sudo:session): session closed for user root
Jan 20 15:31:41 compute-0 sudo[384383]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- lvm list --format json
Jan 20 15:31:41 compute-0 sudo[384383]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:31:42 compute-0 podman[384449]: 2026-01-20 15:31:42.319962932 +0000 UTC m=+0.039661813 container create dc11648ad30a9a86cc2bcf8a7e19e34f660af28efb2e9ac8727573b91d309b4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_almeida, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 20 15:31:42 compute-0 systemd[1]: Started libpod-conmon-dc11648ad30a9a86cc2bcf8a7e19e34f660af28efb2e9ac8727573b91d309b4d.scope.
Jan 20 15:31:42 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:31:42 compute-0 podman[384449]: 2026-01-20 15:31:42.394131423 +0000 UTC m=+0.113830354 container init dc11648ad30a9a86cc2bcf8a7e19e34f660af28efb2e9ac8727573b91d309b4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_almeida, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 20 15:31:42 compute-0 podman[384449]: 2026-01-20 15:31:42.302819115 +0000 UTC m=+0.022518026 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:31:42 compute-0 podman[384449]: 2026-01-20 15:31:42.404589408 +0000 UTC m=+0.124288299 container start dc11648ad30a9a86cc2bcf8a7e19e34f660af28efb2e9ac8727573b91d309b4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_almeida, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 20 15:31:42 compute-0 podman[384449]: 2026-01-20 15:31:42.408000232 +0000 UTC m=+0.127699163 container attach dc11648ad30a9a86cc2bcf8a7e19e34f660af28efb2e9ac8727573b91d309b4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_almeida, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 15:31:42 compute-0 distracted_almeida[384466]: 167 167
Jan 20 15:31:42 compute-0 systemd[1]: libpod-dc11648ad30a9a86cc2bcf8a7e19e34f660af28efb2e9ac8727573b91d309b4d.scope: Deactivated successfully.
Jan 20 15:31:42 compute-0 podman[384449]: 2026-01-20 15:31:42.412034132 +0000 UTC m=+0.131733133 container died dc11648ad30a9a86cc2bcf8a7e19e34f660af28efb2e9ac8727573b91d309b4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_almeida, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 20 15:31:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-288b8406d1580445e180a4bc2332b74ca60ba5660ee825adcb5b35ed0e2c310c-merged.mount: Deactivated successfully.
Jan 20 15:31:42 compute-0 podman[384449]: 2026-01-20 15:31:42.455711622 +0000 UTC m=+0.175410513 container remove dc11648ad30a9a86cc2bcf8a7e19e34f660af28efb2e9ac8727573b91d309b4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_almeida, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 20 15:31:42 compute-0 systemd[1]: libpod-conmon-dc11648ad30a9a86cc2bcf8a7e19e34f660af28efb2e9ac8727573b91d309b4d.scope: Deactivated successfully.
Jan 20 15:31:42 compute-0 ceph-mon[74360]: pgmap v3285: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 38 KiB/s rd, 23 KiB/s wr, 57 op/s
Jan 20 15:31:42 compute-0 podman[384491]: 2026-01-20 15:31:42.603295705 +0000 UTC m=+0.021689252 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:31:42 compute-0 podman[384491]: 2026-01-20 15:31:42.700055512 +0000 UTC m=+0.118449039 container create 8e8994e3a15a6667b549d5660dda97d3b64438f5fe4547d192c6c3a6180de7eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_visvesvaraya, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 15:31:42 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:31:42 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:31:42 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:31:42.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:31:42 compute-0 systemd[1]: Started libpod-conmon-8e8994e3a15a6667b549d5660dda97d3b64438f5fe4547d192c6c3a6180de7eb.scope.
Jan 20 15:31:42 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:31:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d8332bece23fb18b732cf59811bfd955359882d74bd170667e95b69eddbc905/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 15:31:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d8332bece23fb18b732cf59811bfd955359882d74bd170667e95b69eddbc905/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 15:31:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d8332bece23fb18b732cf59811bfd955359882d74bd170667e95b69eddbc905/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 15:31:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d8332bece23fb18b732cf59811bfd955359882d74bd170667e95b69eddbc905/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 15:31:42 compute-0 podman[384491]: 2026-01-20 15:31:42.831601838 +0000 UTC m=+0.249995395 container init 8e8994e3a15a6667b549d5660dda97d3b64438f5fe4547d192c6c3a6180de7eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_visvesvaraya, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 15:31:42 compute-0 podman[384491]: 2026-01-20 15:31:42.839914995 +0000 UTC m=+0.258308522 container start 8e8994e3a15a6667b549d5660dda97d3b64438f5fe4547d192c6c3a6180de7eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_visvesvaraya, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 20 15:31:42 compute-0 podman[384491]: 2026-01-20 15:31:42.843419991 +0000 UTC m=+0.261813508 container attach 8e8994e3a15a6667b549d5660dda97d3b64438f5fe4547d192c6c3a6180de7eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_visvesvaraya, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 20 15:31:43 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3286: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 33 KiB/s rd, 1.6 KiB/s wr, 46 op/s
Jan 20 15:31:43 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:31:43 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:31:43 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:31:43 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:31:43.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:31:43 compute-0 vibrant_visvesvaraya[384507]: {
Jan 20 15:31:43 compute-0 vibrant_visvesvaraya[384507]:     "0": [
Jan 20 15:31:43 compute-0 vibrant_visvesvaraya[384507]:         {
Jan 20 15:31:43 compute-0 vibrant_visvesvaraya[384507]:             "devices": [
Jan 20 15:31:43 compute-0 vibrant_visvesvaraya[384507]:                 "/dev/loop3"
Jan 20 15:31:43 compute-0 vibrant_visvesvaraya[384507]:             ],
Jan 20 15:31:43 compute-0 vibrant_visvesvaraya[384507]:             "lv_name": "ceph_lv0",
Jan 20 15:31:43 compute-0 vibrant_visvesvaraya[384507]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 15:31:43 compute-0 vibrant_visvesvaraya[384507]:             "lv_size": "7511998464",
Jan 20 15:31:43 compute-0 vibrant_visvesvaraya[384507]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e399cf45-e6b6-5393-99f1-75c601d3f188,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afc3740a-ccee-46f8-b343-22648cc89376,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 20 15:31:43 compute-0 vibrant_visvesvaraya[384507]:             "lv_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 15:31:43 compute-0 vibrant_visvesvaraya[384507]:             "name": "ceph_lv0",
Jan 20 15:31:43 compute-0 vibrant_visvesvaraya[384507]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 15:31:43 compute-0 vibrant_visvesvaraya[384507]:             "tags": {
Jan 20 15:31:43 compute-0 vibrant_visvesvaraya[384507]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 15:31:43 compute-0 vibrant_visvesvaraya[384507]:                 "ceph.block_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 15:31:43 compute-0 vibrant_visvesvaraya[384507]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 15:31:43 compute-0 vibrant_visvesvaraya[384507]:                 "ceph.cluster_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 15:31:43 compute-0 vibrant_visvesvaraya[384507]:                 "ceph.cluster_name": "ceph",
Jan 20 15:31:43 compute-0 vibrant_visvesvaraya[384507]:                 "ceph.crush_device_class": "",
Jan 20 15:31:43 compute-0 vibrant_visvesvaraya[384507]:                 "ceph.encrypted": "0",
Jan 20 15:31:43 compute-0 vibrant_visvesvaraya[384507]:                 "ceph.osd_fsid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 15:31:43 compute-0 vibrant_visvesvaraya[384507]:                 "ceph.osd_id": "0",
Jan 20 15:31:43 compute-0 vibrant_visvesvaraya[384507]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 15:31:43 compute-0 vibrant_visvesvaraya[384507]:                 "ceph.type": "block",
Jan 20 15:31:43 compute-0 vibrant_visvesvaraya[384507]:                 "ceph.vdo": "0"
Jan 20 15:31:43 compute-0 vibrant_visvesvaraya[384507]:             },
Jan 20 15:31:43 compute-0 vibrant_visvesvaraya[384507]:             "type": "block",
Jan 20 15:31:43 compute-0 vibrant_visvesvaraya[384507]:             "vg_name": "ceph_vg0"
Jan 20 15:31:43 compute-0 vibrant_visvesvaraya[384507]:         }
Jan 20 15:31:43 compute-0 vibrant_visvesvaraya[384507]:     ]
Jan 20 15:31:43 compute-0 vibrant_visvesvaraya[384507]: }
Jan 20 15:31:43 compute-0 systemd[1]: libpod-8e8994e3a15a6667b549d5660dda97d3b64438f5fe4547d192c6c3a6180de7eb.scope: Deactivated successfully.
Jan 20 15:31:43 compute-0 podman[384491]: 2026-01-20 15:31:43.607331674 +0000 UTC m=+1.025725201 container died 8e8994e3a15a6667b549d5660dda97d3b64438f5fe4547d192c6c3a6180de7eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_visvesvaraya, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 20 15:31:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-5d8332bece23fb18b732cf59811bfd955359882d74bd170667e95b69eddbc905-merged.mount: Deactivated successfully.
Jan 20 15:31:43 compute-0 podman[384491]: 2026-01-20 15:31:43.661270074 +0000 UTC m=+1.079663601 container remove 8e8994e3a15a6667b549d5660dda97d3b64438f5fe4547d192c6c3a6180de7eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_visvesvaraya, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 20 15:31:43 compute-0 systemd[1]: libpod-conmon-8e8994e3a15a6667b549d5660dda97d3b64438f5fe4547d192c6c3a6180de7eb.scope: Deactivated successfully.
Jan 20 15:31:43 compute-0 sudo[384383]: pam_unix(sudo:session): session closed for user root
Jan 20 15:31:43 compute-0 sudo[384526]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:31:43 compute-0 sudo[384526]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:31:43 compute-0 sudo[384526]: pam_unix(sudo:session): session closed for user root
Jan 20 15:31:43 compute-0 sudo[384551]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:31:43 compute-0 sudo[384551]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:31:43 compute-0 sudo[384551]: pam_unix(sudo:session): session closed for user root
Jan 20 15:31:43 compute-0 sudo[384576]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:31:43 compute-0 sudo[384576]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:31:43 compute-0 sudo[384576]: pam_unix(sudo:session): session closed for user root
Jan 20 15:31:43 compute-0 nova_compute[250018]: 2026-01-20 15:31:43.893 250022 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768923088.8920596, e974e40a-a93c-491e-b30e-6bb589348dc8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:31:43 compute-0 nova_compute[250018]: 2026-01-20 15:31:43.894 250022 INFO nova.compute.manager [-] [instance: e974e40a-a93c-491e-b30e-6bb589348dc8] VM Stopped (Lifecycle Event)
Jan 20 15:31:43 compute-0 nova_compute[250018]: 2026-01-20 15:31:43.914 250022 DEBUG nova.compute.manager [None req-3432331c-f928-4e98-ba50-e1ff33dfff87 - - - - - -] [instance: e974e40a-a93c-491e-b30e-6bb589348dc8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:31:43 compute-0 sudo[384601]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- raw list --format json
Jan 20 15:31:43 compute-0 sudo[384601]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:31:43 compute-0 sshd-session[384625]: banner exchange: Connection from 3.134.148.59 port 60258: invalid format
Jan 20 15:31:44 compute-0 nova_compute[250018]: 2026-01-20 15:31:44.014 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:31:44 compute-0 podman[384665]: 2026-01-20 15:31:44.257156497 +0000 UTC m=+0.075171779 container create 97d89b54380b7d8128cf6dc73cac8445b4a9a7a0d6253ea5fe000d6eb1e7302b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_leakey, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 20 15:31:44 compute-0 podman[384665]: 2026-01-20 15:31:44.203781482 +0000 UTC m=+0.021796844 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:31:44 compute-0 systemd[1]: Started libpod-conmon-97d89b54380b7d8128cf6dc73cac8445b4a9a7a0d6253ea5fe000d6eb1e7302b.scope.
Jan 20 15:31:44 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:31:44 compute-0 podman[384665]: 2026-01-20 15:31:44.378492035 +0000 UTC m=+0.196507337 container init 97d89b54380b7d8128cf6dc73cac8445b4a9a7a0d6253ea5fe000d6eb1e7302b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_leakey, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 20 15:31:44 compute-0 podman[384665]: 2026-01-20 15:31:44.384782327 +0000 UTC m=+0.202797599 container start 97d89b54380b7d8128cf6dc73cac8445b4a9a7a0d6253ea5fe000d6eb1e7302b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_leakey, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 15:31:44 compute-0 podman[384665]: 2026-01-20 15:31:44.389530736 +0000 UTC m=+0.207546028 container attach 97d89b54380b7d8128cf6dc73cac8445b4a9a7a0d6253ea5fe000d6eb1e7302b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_leakey, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 20 15:31:44 compute-0 kind_leakey[384682]: 167 167
Jan 20 15:31:44 compute-0 systemd[1]: libpod-97d89b54380b7d8128cf6dc73cac8445b4a9a7a0d6253ea5fe000d6eb1e7302b.scope: Deactivated successfully.
Jan 20 15:31:44 compute-0 podman[384665]: 2026-01-20 15:31:44.39114683 +0000 UTC m=+0.209162112 container died 97d89b54380b7d8128cf6dc73cac8445b4a9a7a0d6253ea5fe000d6eb1e7302b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_leakey, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 20 15:31:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-fa2fb4bafb8a5a3915663d869eca39820e8000130da77441f8bb3cb210225671-merged.mount: Deactivated successfully.
Jan 20 15:31:44 compute-0 podman[384665]: 2026-01-20 15:31:44.439784656 +0000 UTC m=+0.257799928 container remove 97d89b54380b7d8128cf6dc73cac8445b4a9a7a0d6253ea5fe000d6eb1e7302b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_leakey, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 20 15:31:44 compute-0 systemd[1]: libpod-conmon-97d89b54380b7d8128cf6dc73cac8445b4a9a7a0d6253ea5fe000d6eb1e7302b.scope: Deactivated successfully.
Jan 20 15:31:44 compute-0 podman[384706]: 2026-01-20 15:31:44.591046769 +0000 UTC m=+0.041192444 container create 025ccf387af25b0bd719c9f239802c2f1f73f51d5535f1f3cb422546a3f91781 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_burnell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 15:31:44 compute-0 ceph-mon[74360]: pgmap v3286: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 33 KiB/s rd, 1.6 KiB/s wr, 46 op/s
Jan 20 15:31:44 compute-0 systemd[1]: Started libpod-conmon-025ccf387af25b0bd719c9f239802c2f1f73f51d5535f1f3cb422546a3f91781.scope.
Jan 20 15:31:44 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:31:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5878977fad2eedc8968fe163173847a744565105e981ad393aa638c5c0c93023/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 15:31:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5878977fad2eedc8968fe163173847a744565105e981ad393aa638c5c0c93023/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 15:31:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5878977fad2eedc8968fe163173847a744565105e981ad393aa638c5c0c93023/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 15:31:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5878977fad2eedc8968fe163173847a744565105e981ad393aa638c5c0c93023/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 15:31:44 compute-0 podman[384706]: 2026-01-20 15:31:44.656798782 +0000 UTC m=+0.106944477 container init 025ccf387af25b0bd719c9f239802c2f1f73f51d5535f1f3cb422546a3f91781 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_burnell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 15:31:44 compute-0 podman[384706]: 2026-01-20 15:31:44.662804215 +0000 UTC m=+0.112949880 container start 025ccf387af25b0bd719c9f239802c2f1f73f51d5535f1f3cb422546a3f91781 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_burnell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 15:31:44 compute-0 podman[384706]: 2026-01-20 15:31:44.666175137 +0000 UTC m=+0.116320842 container attach 025ccf387af25b0bd719c9f239802c2f1f73f51d5535f1f3cb422546a3f91781 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_burnell, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 20 15:31:44 compute-0 podman[384706]: 2026-01-20 15:31:44.57310982 +0000 UTC m=+0.023255535 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:31:44 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:31:44 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:31:44 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:31:44.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:31:45 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3287: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 33 KiB/s rd, 1.6 KiB/s wr, 46 op/s
Jan 20 15:31:45 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:31:45 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:31:45 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:31:45.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:31:45 compute-0 quirky_burnell[384723]: {
Jan 20 15:31:45 compute-0 quirky_burnell[384723]:     "afc3740a-ccee-46f8-b343-22648cc89376": {
Jan 20 15:31:45 compute-0 quirky_burnell[384723]:         "ceph_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 15:31:45 compute-0 quirky_burnell[384723]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 20 15:31:45 compute-0 quirky_burnell[384723]:         "osd_id": 0,
Jan 20 15:31:45 compute-0 quirky_burnell[384723]:         "osd_uuid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 15:31:45 compute-0 quirky_burnell[384723]:         "type": "bluestore"
Jan 20 15:31:45 compute-0 quirky_burnell[384723]:     }
Jan 20 15:31:45 compute-0 quirky_burnell[384723]: }
Jan 20 15:31:45 compute-0 systemd[1]: libpod-025ccf387af25b0bd719c9f239802c2f1f73f51d5535f1f3cb422546a3f91781.scope: Deactivated successfully.
Jan 20 15:31:45 compute-0 podman[384706]: 2026-01-20 15:31:45.556026803 +0000 UTC m=+1.006172478 container died 025ccf387af25b0bd719c9f239802c2f1f73f51d5535f1f3cb422546a3f91781 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_burnell, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:31:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-5878977fad2eedc8968fe163173847a744565105e981ad393aa638c5c0c93023-merged.mount: Deactivated successfully.
Jan 20 15:31:45 compute-0 podman[384706]: 2026-01-20 15:31:45.704608783 +0000 UTC m=+1.154754458 container remove 025ccf387af25b0bd719c9f239802c2f1f73f51d5535f1f3cb422546a3f91781 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_burnell, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 20 15:31:45 compute-0 systemd[1]: libpod-conmon-025ccf387af25b0bd719c9f239802c2f1f73f51d5535f1f3cb422546a3f91781.scope: Deactivated successfully.
Jan 20 15:31:45 compute-0 sudo[384601]: pam_unix(sudo:session): session closed for user root
Jan 20 15:31:45 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 15:31:45 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:31:45 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 15:31:46 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:31:46 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev a18877b1-6908-45ca-af5b-9eaa5c0cae67 does not exist
Jan 20 15:31:46 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 8ea999b3-ab33-473f-b0f4-754d5b9fa713 does not exist
Jan 20 15:31:46 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 7964a158-eda6-4468-b525-fe17baee590b does not exist
Jan 20 15:31:46 compute-0 sudo[384756]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:31:46 compute-0 sudo[384756]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:31:46 compute-0 sudo[384756]: pam_unix(sudo:session): session closed for user root
Jan 20 15:31:46 compute-0 sudo[384781]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 15:31:46 compute-0 sudo[384781]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:31:46 compute-0 sudo[384781]: pam_unix(sudo:session): session closed for user root
Jan 20 15:31:46 compute-0 ceph-mon[74360]: pgmap v3287: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 33 KiB/s rd, 1.6 KiB/s wr, 46 op/s
Jan 20 15:31:46 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:31:46 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:31:46 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:31:46 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:31:46 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:31:46.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:31:46 compute-0 nova_compute[250018]: 2026-01-20 15:31:46.783 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:31:47 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3288: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 18 KiB/s rd, 1.2 KiB/s wr, 25 op/s
Jan 20 15:31:47 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:31:47 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:31:47 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:31:47.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:31:48 compute-0 ceph-mon[74360]: pgmap v3288: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 18 KiB/s rd, 1.2 KiB/s wr, 25 op/s
Jan 20 15:31:48 compute-0 sudo[384807]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:31:48 compute-0 sudo[384807]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:31:48 compute-0 sudo[384807]: pam_unix(sudo:session): session closed for user root
Jan 20 15:31:48 compute-0 sudo[384832]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:31:48 compute-0 sudo[384832]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:31:48 compute-0 sudo[384832]: pam_unix(sudo:session): session closed for user root
Jan 20 15:31:48 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:31:48 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:31:48 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:31:48 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:31:48.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:31:49 compute-0 nova_compute[250018]: 2026-01-20 15:31:49.056 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:31:49 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3289: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 8.2 KiB/s rd, 341 B/s wr, 11 op/s
Jan 20 15:31:49 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:31:49 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:31:49 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:31:49.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:31:50 compute-0 ceph-mon[74360]: pgmap v3289: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 8.2 KiB/s rd, 341 B/s wr, 11 op/s
Jan 20 15:31:50 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:31:50 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:31:50 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:31:50.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:31:51 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3290: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:31:51 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:31:51 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:31:51 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:31:51.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:31:51 compute-0 nova_compute[250018]: 2026-01-20 15:31:51.785 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:31:52 compute-0 ceph-mon[74360]: pgmap v3290: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:31:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:31:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:31:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:31:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:31:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:31:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:31:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Optimize plan auto_2026-01-20_15:31:52
Jan 20 15:31:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 15:31:52 compute-0 ceph-mgr[74653]: [balancer INFO root] do_upmap
Jan 20 15:31:52 compute-0 ceph-mgr[74653]: [balancer INFO root] pools ['default.rgw.control', 'vms', 'default.rgw.log', 'cephfs.cephfs.data', 'volumes', '.rgw.root', 'backups', 'cephfs.cephfs.meta', 'default.rgw.meta', 'images', '.mgr']
Jan 20 15:31:52 compute-0 ceph-mgr[74653]: [balancer INFO root] prepared 0/10 changes
Jan 20 15:31:52 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:31:52 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:31:52 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:31:52.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:31:53 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3291: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:31:53 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:31:53 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:31:53 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:31:53 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:31:53.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:31:54 compute-0 nova_compute[250018]: 2026-01-20 15:31:54.059 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:31:54 compute-0 ceph-mon[74360]: pgmap v3291: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:31:54 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:31:54 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:31:54 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:31:54.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:31:55 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3292: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:31:55 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:31:55 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:31:55 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:31:55.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:31:56 compute-0 ceph-mon[74360]: pgmap v3292: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:31:56 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:31:56 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:31:56 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:31:56.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:31:56 compute-0 nova_compute[250018]: 2026-01-20 15:31:56.787 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:31:57 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3293: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:31:57 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:31:57 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:31:57 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:31:57.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:31:57 compute-0 nova_compute[250018]: 2026-01-20 15:31:57.746 250022 DEBUG oslo_concurrency.lockutils [None req-525fbeaf-5033-4725-a850-d379255d9b07 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Acquiring lock "00832e9b-36fa-4d41-be21-ca6b05cd493f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:31:57 compute-0 nova_compute[250018]: 2026-01-20 15:31:57.746 250022 DEBUG oslo_concurrency.lockutils [None req-525fbeaf-5033-4725-a850-d379255d9b07 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "00832e9b-36fa-4d41-be21-ca6b05cd493f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:31:57 compute-0 nova_compute[250018]: 2026-01-20 15:31:57.763 250022 DEBUG nova.compute.manager [None req-525fbeaf-5033-4725-a850-d379255d9b07 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 00832e9b-36fa-4d41-be21-ca6b05cd493f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 15:31:57 compute-0 nova_compute[250018]: 2026-01-20 15:31:57.831 250022 DEBUG oslo_concurrency.lockutils [None req-525fbeaf-5033-4725-a850-d379255d9b07 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:31:57 compute-0 nova_compute[250018]: 2026-01-20 15:31:57.832 250022 DEBUG oslo_concurrency.lockutils [None req-525fbeaf-5033-4725-a850-d379255d9b07 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:31:57 compute-0 nova_compute[250018]: 2026-01-20 15:31:57.843 250022 DEBUG nova.virt.hardware [None req-525fbeaf-5033-4725-a850-d379255d9b07 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 15:31:57 compute-0 nova_compute[250018]: 2026-01-20 15:31:57.843 250022 INFO nova.compute.claims [None req-525fbeaf-5033-4725-a850-d379255d9b07 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 00832e9b-36fa-4d41-be21-ca6b05cd493f] Claim successful on node compute-0.ctlplane.example.com
Jan 20 15:31:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 15:31:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 15:31:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 15:31:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 15:31:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 15:31:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 15:31:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 15:31:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 15:31:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 15:31:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 15:31:57 compute-0 nova_compute[250018]: 2026-01-20 15:31:57.946 250022 DEBUG oslo_concurrency.processutils [None req-525fbeaf-5033-4725-a850-d379255d9b07 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:31:58 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:31:58 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:31:58 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1442367220' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:31:58 compute-0 nova_compute[250018]: 2026-01-20 15:31:58.370 250022 DEBUG oslo_concurrency.processutils [None req-525fbeaf-5033-4725-a850-d379255d9b07 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:31:58 compute-0 nova_compute[250018]: 2026-01-20 15:31:58.377 250022 DEBUG nova.compute.provider_tree [None req-525fbeaf-5033-4725-a850-d379255d9b07 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:31:58 compute-0 nova_compute[250018]: 2026-01-20 15:31:58.392 250022 DEBUG nova.scheduler.client.report [None req-525fbeaf-5033-4725-a850-d379255d9b07 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:31:58 compute-0 nova_compute[250018]: 2026-01-20 15:31:58.412 250022 DEBUG oslo_concurrency.lockutils [None req-525fbeaf-5033-4725-a850-d379255d9b07 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.580s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:31:58 compute-0 nova_compute[250018]: 2026-01-20 15:31:58.413 250022 DEBUG nova.compute.manager [None req-525fbeaf-5033-4725-a850-d379255d9b07 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 00832e9b-36fa-4d41-be21-ca6b05cd493f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 15:31:58 compute-0 ceph-mon[74360]: pgmap v3293: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:31:58 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/1442367220' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:31:58 compute-0 nova_compute[250018]: 2026-01-20 15:31:58.453 250022 DEBUG nova.compute.manager [None req-525fbeaf-5033-4725-a850-d379255d9b07 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 00832e9b-36fa-4d41-be21-ca6b05cd493f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 20 15:31:58 compute-0 nova_compute[250018]: 2026-01-20 15:31:58.454 250022 DEBUG nova.network.neutron [None req-525fbeaf-5033-4725-a850-d379255d9b07 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 00832e9b-36fa-4d41-be21-ca6b05cd493f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 15:31:58 compute-0 nova_compute[250018]: 2026-01-20 15:31:58.471 250022 INFO nova.virt.libvirt.driver [None req-525fbeaf-5033-4725-a850-d379255d9b07 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 00832e9b-36fa-4d41-be21-ca6b05cd493f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 15:31:58 compute-0 nova_compute[250018]: 2026-01-20 15:31:58.490 250022 DEBUG nova.compute.manager [None req-525fbeaf-5033-4725-a850-d379255d9b07 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 00832e9b-36fa-4d41-be21-ca6b05cd493f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 15:31:58 compute-0 nova_compute[250018]: 2026-01-20 15:31:58.592 250022 DEBUG nova.compute.manager [None req-525fbeaf-5033-4725-a850-d379255d9b07 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 00832e9b-36fa-4d41-be21-ca6b05cd493f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 15:31:58 compute-0 nova_compute[250018]: 2026-01-20 15:31:58.593 250022 DEBUG nova.virt.libvirt.driver [None req-525fbeaf-5033-4725-a850-d379255d9b07 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 00832e9b-36fa-4d41-be21-ca6b05cd493f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 15:31:58 compute-0 nova_compute[250018]: 2026-01-20 15:31:58.594 250022 INFO nova.virt.libvirt.driver [None req-525fbeaf-5033-4725-a850-d379255d9b07 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 00832e9b-36fa-4d41-be21-ca6b05cd493f] Creating image(s)
Jan 20 15:31:58 compute-0 nova_compute[250018]: 2026-01-20 15:31:58.617 250022 DEBUG nova.storage.rbd_utils [None req-525fbeaf-5033-4725-a850-d379255d9b07 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] rbd image 00832e9b-36fa-4d41-be21-ca6b05cd493f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:31:58 compute-0 nova_compute[250018]: 2026-01-20 15:31:58.639 250022 DEBUG nova.storage.rbd_utils [None req-525fbeaf-5033-4725-a850-d379255d9b07 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] rbd image 00832e9b-36fa-4d41-be21-ca6b05cd493f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:31:58 compute-0 nova_compute[250018]: 2026-01-20 15:31:58.662 250022 DEBUG nova.storage.rbd_utils [None req-525fbeaf-5033-4725-a850-d379255d9b07 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] rbd image 00832e9b-36fa-4d41-be21-ca6b05cd493f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:31:58 compute-0 nova_compute[250018]: 2026-01-20 15:31:58.666 250022 DEBUG oslo_concurrency.processutils [None req-525fbeaf-5033-4725-a850-d379255d9b07 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:31:58 compute-0 nova_compute[250018]: 2026-01-20 15:31:58.730 250022 DEBUG oslo_concurrency.processutils [None req-525fbeaf-5033-4725-a850-d379255d9b07 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:31:58 compute-0 nova_compute[250018]: 2026-01-20 15:31:58.731 250022 DEBUG oslo_concurrency.lockutils [None req-525fbeaf-5033-4725-a850-d379255d9b07 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Acquiring lock "82d5c1918fd7c974214c7a48c1793a7a82560462" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:31:58 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:31:58 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:31:58 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:31:58.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:31:58 compute-0 nova_compute[250018]: 2026-01-20 15:31:58.732 250022 DEBUG oslo_concurrency.lockutils [None req-525fbeaf-5033-4725-a850-d379255d9b07 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:31:58 compute-0 nova_compute[250018]: 2026-01-20 15:31:58.732 250022 DEBUG oslo_concurrency.lockutils [None req-525fbeaf-5033-4725-a850-d379255d9b07 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:31:58 compute-0 nova_compute[250018]: 2026-01-20 15:31:58.760 250022 DEBUG nova.storage.rbd_utils [None req-525fbeaf-5033-4725-a850-d379255d9b07 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] rbd image 00832e9b-36fa-4d41-be21-ca6b05cd493f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:31:58 compute-0 nova_compute[250018]: 2026-01-20 15:31:58.764 250022 DEBUG oslo_concurrency.processutils [None req-525fbeaf-5033-4725-a850-d379255d9b07 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 00832e9b-36fa-4d41-be21-ca6b05cd493f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:31:59 compute-0 nova_compute[250018]: 2026-01-20 15:31:59.060 250022 DEBUG oslo_concurrency.processutils [None req-525fbeaf-5033-4725-a850-d379255d9b07 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 00832e9b-36fa-4d41-be21-ca6b05cd493f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.296s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:31:59 compute-0 nova_compute[250018]: 2026-01-20 15:31:59.092 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:31:59 compute-0 nova_compute[250018]: 2026-01-20 15:31:59.136 250022 DEBUG nova.storage.rbd_utils [None req-525fbeaf-5033-4725-a850-d379255d9b07 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] resizing rbd image 00832e9b-36fa-4d41-be21-ca6b05cd493f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 20 15:31:59 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3294: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:31:59 compute-0 nova_compute[250018]: 2026-01-20 15:31:59.244 250022 DEBUG nova.objects.instance [None req-525fbeaf-5033-4725-a850-d379255d9b07 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lazy-loading 'migration_context' on Instance uuid 00832e9b-36fa-4d41-be21-ca6b05cd493f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:31:59 compute-0 nova_compute[250018]: 2026-01-20 15:31:59.267 250022 DEBUG nova.virt.libvirt.driver [None req-525fbeaf-5033-4725-a850-d379255d9b07 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 00832e9b-36fa-4d41-be21-ca6b05cd493f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 15:31:59 compute-0 nova_compute[250018]: 2026-01-20 15:31:59.267 250022 DEBUG nova.virt.libvirt.driver [None req-525fbeaf-5033-4725-a850-d379255d9b07 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 00832e9b-36fa-4d41-be21-ca6b05cd493f] Ensure instance console log exists: /var/lib/nova/instances/00832e9b-36fa-4d41-be21-ca6b05cd493f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 15:31:59 compute-0 nova_compute[250018]: 2026-01-20 15:31:59.267 250022 DEBUG oslo_concurrency.lockutils [None req-525fbeaf-5033-4725-a850-d379255d9b07 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:31:59 compute-0 nova_compute[250018]: 2026-01-20 15:31:59.268 250022 DEBUG oslo_concurrency.lockutils [None req-525fbeaf-5033-4725-a850-d379255d9b07 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:31:59 compute-0 nova_compute[250018]: 2026-01-20 15:31:59.268 250022 DEBUG oslo_concurrency.lockutils [None req-525fbeaf-5033-4725-a850-d379255d9b07 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:31:59 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:31:59 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:31:59 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:31:59.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:31:59 compute-0 podman[385052]: 2026-01-20 15:31:59.473165662 +0000 UTC m=+0.064082888 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 15:31:59 compute-0 podman[385051]: 2026-01-20 15:31:59.501531295 +0000 UTC m=+0.091843045 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true)
Jan 20 15:31:59 compute-0 nova_compute[250018]: 2026-01-20 15:31:59.545 250022 DEBUG nova.policy [None req-525fbeaf-5033-4725-a850-d379255d9b07 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5338aa65dc0e4326a66ce79053787f14', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3168f57421fb49bfb94b85daedd1fe7d', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 20 15:32:00 compute-0 ceph-mon[74360]: pgmap v3294: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:32:00 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:32:00 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:32:00 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:32:00.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:32:01 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3295: 321 pgs: 321 active+clean; 155 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.6 MiB/s wr, 26 op/s
Jan 20 15:32:01 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:32:01 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:32:01 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:32:01.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:32:01 compute-0 nova_compute[250018]: 2026-01-20 15:32:01.563 250022 DEBUG nova.network.neutron [None req-525fbeaf-5033-4725-a850-d379255d9b07 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 00832e9b-36fa-4d41-be21-ca6b05cd493f] Successfully created port: 3d9d0177-0374-4df6-b536-6a369c25c060 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 20 15:32:01 compute-0 nova_compute[250018]: 2026-01-20 15:32:01.789 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:32:02 compute-0 ceph-mon[74360]: pgmap v3295: 321 pgs: 321 active+clean; 155 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.6 MiB/s wr, 26 op/s
Jan 20 15:32:02 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:32:02 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:32:02 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:32:02.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:32:03 compute-0 nova_compute[250018]: 2026-01-20 15:32:03.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:32:03 compute-0 nova_compute[250018]: 2026-01-20 15:32:03.174 250022 DEBUG nova.network.neutron [None req-525fbeaf-5033-4725-a850-d379255d9b07 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 00832e9b-36fa-4d41-be21-ca6b05cd493f] Successfully updated port: 3d9d0177-0374-4df6-b536-6a369c25c060 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 20 15:32:03 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3296: 321 pgs: 321 active+clean; 155 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.6 MiB/s wr, 26 op/s
Jan 20 15:32:03 compute-0 nova_compute[250018]: 2026-01-20 15:32:03.192 250022 DEBUG oslo_concurrency.lockutils [None req-525fbeaf-5033-4725-a850-d379255d9b07 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Acquiring lock "refresh_cache-00832e9b-36fa-4d41-be21-ca6b05cd493f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:32:03 compute-0 nova_compute[250018]: 2026-01-20 15:32:03.192 250022 DEBUG oslo_concurrency.lockutils [None req-525fbeaf-5033-4725-a850-d379255d9b07 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Acquired lock "refresh_cache-00832e9b-36fa-4d41-be21-ca6b05cd493f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:32:03 compute-0 nova_compute[250018]: 2026-01-20 15:32:03.193 250022 DEBUG nova.network.neutron [None req-525fbeaf-5033-4725-a850-d379255d9b07 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 00832e9b-36fa-4d41-be21-ca6b05cd493f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 15:32:03 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:32:03 compute-0 nova_compute[250018]: 2026-01-20 15:32:03.331 250022 DEBUG nova.compute.manager [req-4be5c1d8-7c2d-469c-85ef-9778539c9910 req-06b3404a-39b8-42b6-8528-c1eea53c7355 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 00832e9b-36fa-4d41-be21-ca6b05cd493f] Received event network-changed-3d9d0177-0374-4df6-b536-6a369c25c060 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:32:03 compute-0 nova_compute[250018]: 2026-01-20 15:32:03.332 250022 DEBUG nova.compute.manager [req-4be5c1d8-7c2d-469c-85ef-9778539c9910 req-06b3404a-39b8-42b6-8528-c1eea53c7355 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 00832e9b-36fa-4d41-be21-ca6b05cd493f] Refreshing instance network info cache due to event network-changed-3d9d0177-0374-4df6-b536-6a369c25c060. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 15:32:03 compute-0 nova_compute[250018]: 2026-01-20 15:32:03.332 250022 DEBUG oslo_concurrency.lockutils [req-4be5c1d8-7c2d-469c-85ef-9778539c9910 req-06b3404a-39b8-42b6-8528-c1eea53c7355 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-00832e9b-36fa-4d41-be21-ca6b05cd493f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:32:03 compute-0 nova_compute[250018]: 2026-01-20 15:32:03.366 250022 DEBUG nova.network.neutron [None req-525fbeaf-5033-4725-a850-d379255d9b07 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 00832e9b-36fa-4d41-be21-ca6b05cd493f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 15:32:03 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:32:03 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:32:03 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:32:03.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:32:04 compute-0 nova_compute[250018]: 2026-01-20 15:32:04.096 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:32:04 compute-0 ceph-mon[74360]: pgmap v3296: 321 pgs: 321 active+clean; 155 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.6 MiB/s wr, 26 op/s
Jan 20 15:32:04 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:32:04 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:32:04 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:32:04.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:32:04 compute-0 nova_compute[250018]: 2026-01-20 15:32:04.792 250022 DEBUG nova.network.neutron [None req-525fbeaf-5033-4725-a850-d379255d9b07 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 00832e9b-36fa-4d41-be21-ca6b05cd493f] Updating instance_info_cache with network_info: [{"id": "3d9d0177-0374-4df6-b536-6a369c25c060", "address": "fa:16:3e:e7:ba:22", "network": {"id": "2ec0e47d-339a-4483-a67a-b500a294d021", "bridge": "br-int", "label": "tempest-network-smoke--347448443", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3d9d0177-03", "ovs_interfaceid": "3d9d0177-0374-4df6-b536-6a369c25c060", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:32:04 compute-0 nova_compute[250018]: 2026-01-20 15:32:04.810 250022 DEBUG oslo_concurrency.lockutils [None req-525fbeaf-5033-4725-a850-d379255d9b07 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Releasing lock "refresh_cache-00832e9b-36fa-4d41-be21-ca6b05cd493f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:32:04 compute-0 nova_compute[250018]: 2026-01-20 15:32:04.811 250022 DEBUG nova.compute.manager [None req-525fbeaf-5033-4725-a850-d379255d9b07 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 00832e9b-36fa-4d41-be21-ca6b05cd493f] Instance network_info: |[{"id": "3d9d0177-0374-4df6-b536-6a369c25c060", "address": "fa:16:3e:e7:ba:22", "network": {"id": "2ec0e47d-339a-4483-a67a-b500a294d021", "bridge": "br-int", "label": "tempest-network-smoke--347448443", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3d9d0177-03", "ovs_interfaceid": "3d9d0177-0374-4df6-b536-6a369c25c060", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 20 15:32:04 compute-0 nova_compute[250018]: 2026-01-20 15:32:04.812 250022 DEBUG oslo_concurrency.lockutils [req-4be5c1d8-7c2d-469c-85ef-9778539c9910 req-06b3404a-39b8-42b6-8528-c1eea53c7355 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-00832e9b-36fa-4d41-be21-ca6b05cd493f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:32:04 compute-0 nova_compute[250018]: 2026-01-20 15:32:04.813 250022 DEBUG nova.network.neutron [req-4be5c1d8-7c2d-469c-85ef-9778539c9910 req-06b3404a-39b8-42b6-8528-c1eea53c7355 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 00832e9b-36fa-4d41-be21-ca6b05cd493f] Refreshing network info cache for port 3d9d0177-0374-4df6-b536-6a369c25c060 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 15:32:04 compute-0 nova_compute[250018]: 2026-01-20 15:32:04.817 250022 DEBUG nova.virt.libvirt.driver [None req-525fbeaf-5033-4725-a850-d379255d9b07 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 00832e9b-36fa-4d41-be21-ca6b05cd493f] Start _get_guest_xml network_info=[{"id": "3d9d0177-0374-4df6-b536-6a369c25c060", "address": "fa:16:3e:e7:ba:22", "network": {"id": "2ec0e47d-339a-4483-a67a-b500a294d021", "bridge": "br-int", "label": "tempest-network-smoke--347448443", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3d9d0177-03", "ovs_interfaceid": "3d9d0177-0374-4df6-b536-6a369c25c060", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encrypted': False, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'guest_format': None, 'size': 0, 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 15:32:04 compute-0 nova_compute[250018]: 2026-01-20 15:32:04.824 250022 WARNING nova.virt.libvirt.driver [None req-525fbeaf-5033-4725-a850-d379255d9b07 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:32:04 compute-0 nova_compute[250018]: 2026-01-20 15:32:04.830 250022 DEBUG nova.virt.libvirt.host [None req-525fbeaf-5033-4725-a850-d379255d9b07 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 15:32:04 compute-0 nova_compute[250018]: 2026-01-20 15:32:04.831 250022 DEBUG nova.virt.libvirt.host [None req-525fbeaf-5033-4725-a850-d379255d9b07 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 15:32:04 compute-0 nova_compute[250018]: 2026-01-20 15:32:04.842 250022 DEBUG nova.virt.libvirt.host [None req-525fbeaf-5033-4725-a850-d379255d9b07 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 15:32:04 compute-0 nova_compute[250018]: 2026-01-20 15:32:04.843 250022 DEBUG nova.virt.libvirt.host [None req-525fbeaf-5033-4725-a850-d379255d9b07 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 15:32:04 compute-0 nova_compute[250018]: 2026-01-20 15:32:04.845 250022 DEBUG nova.virt.libvirt.driver [None req-525fbeaf-5033-4725-a850-d379255d9b07 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 15:32:04 compute-0 nova_compute[250018]: 2026-01-20 15:32:04.846 250022 DEBUG nova.virt.hardware [None req-525fbeaf-5033-4725-a850-d379255d9b07 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 15:32:04 compute-0 nova_compute[250018]: 2026-01-20 15:32:04.847 250022 DEBUG nova.virt.hardware [None req-525fbeaf-5033-4725-a850-d379255d9b07 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 15:32:04 compute-0 nova_compute[250018]: 2026-01-20 15:32:04.847 250022 DEBUG nova.virt.hardware [None req-525fbeaf-5033-4725-a850-d379255d9b07 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 15:32:04 compute-0 nova_compute[250018]: 2026-01-20 15:32:04.848 250022 DEBUG nova.virt.hardware [None req-525fbeaf-5033-4725-a850-d379255d9b07 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 15:32:04 compute-0 nova_compute[250018]: 2026-01-20 15:32:04.848 250022 DEBUG nova.virt.hardware [None req-525fbeaf-5033-4725-a850-d379255d9b07 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 15:32:04 compute-0 nova_compute[250018]: 2026-01-20 15:32:04.849 250022 DEBUG nova.virt.hardware [None req-525fbeaf-5033-4725-a850-d379255d9b07 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 15:32:04 compute-0 nova_compute[250018]: 2026-01-20 15:32:04.849 250022 DEBUG nova.virt.hardware [None req-525fbeaf-5033-4725-a850-d379255d9b07 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 15:32:04 compute-0 nova_compute[250018]: 2026-01-20 15:32:04.850 250022 DEBUG nova.virt.hardware [None req-525fbeaf-5033-4725-a850-d379255d9b07 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 15:32:04 compute-0 nova_compute[250018]: 2026-01-20 15:32:04.850 250022 DEBUG nova.virt.hardware [None req-525fbeaf-5033-4725-a850-d379255d9b07 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 15:32:04 compute-0 nova_compute[250018]: 2026-01-20 15:32:04.851 250022 DEBUG nova.virt.hardware [None req-525fbeaf-5033-4725-a850-d379255d9b07 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 15:32:04 compute-0 nova_compute[250018]: 2026-01-20 15:32:04.851 250022 DEBUG nova.virt.hardware [None req-525fbeaf-5033-4725-a850-d379255d9b07 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 15:32:04 compute-0 nova_compute[250018]: 2026-01-20 15:32:04.856 250022 DEBUG oslo_concurrency.processutils [None req-525fbeaf-5033-4725-a850-d379255d9b07 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:32:05 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3297: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:32:05 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 15:32:05 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3767820426' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:32:05 compute-0 nova_compute[250018]: 2026-01-20 15:32:05.295 250022 DEBUG oslo_concurrency.processutils [None req-525fbeaf-5033-4725-a850-d379255d9b07 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:32:05 compute-0 nova_compute[250018]: 2026-01-20 15:32:05.322 250022 DEBUG nova.storage.rbd_utils [None req-525fbeaf-5033-4725-a850-d379255d9b07 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] rbd image 00832e9b-36fa-4d41-be21-ca6b05cd493f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:32:05 compute-0 nova_compute[250018]: 2026-01-20 15:32:05.326 250022 DEBUG oslo_concurrency.processutils [None req-525fbeaf-5033-4725-a850-d379255d9b07 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:32:05 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:32:05 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:32:05 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:32:05.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:32:05 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3767820426' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:32:05 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 15:32:05 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3040056216' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:32:05 compute-0 nova_compute[250018]: 2026-01-20 15:32:05.747 250022 DEBUG oslo_concurrency.processutils [None req-525fbeaf-5033-4725-a850-d379255d9b07 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:32:05 compute-0 nova_compute[250018]: 2026-01-20 15:32:05.750 250022 DEBUG nova.virt.libvirt.vif [None req-525fbeaf-5033-4725-a850-d379255d9b07 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T15:31:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-294660269',display_name='tempest-TestNetworkBasicOps-server-294660269',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-294660269',id=208,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBM/wcy1u/OW984noISwEOwb5LNHq9kMm6H4gDYMOeRNg80CMD0xPaXGSJjRjqJjcmlGZ4ls4SDDoXkG2XEqy3wx1zeFMwT8eQgW0fV48l9Sd8ax0F4mvMlYvHLkKrxuyhw==',key_name='tempest-TestNetworkBasicOps-525328153',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3168f57421fb49bfb94b85daedd1fe7d',ramdisk_id='',reservation_id='r-yd9tjeqi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-807695970',owner_user_name='tempest-TestNetworkBasicOps-807695970-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T15:31:58Z,user_data=None,user_id='5338aa65dc0e4326a66ce79053787f14',uuid=00832e9b-36fa-4d41-be21-ca6b05cd493f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3d9d0177-0374-4df6-b536-6a369c25c060", "address": "fa:16:3e:e7:ba:22", "network": {"id": "2ec0e47d-339a-4483-a67a-b500a294d021", "bridge": "br-int", "label": "tempest-network-smoke--347448443", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3d9d0177-03", "ovs_interfaceid": "3d9d0177-0374-4df6-b536-6a369c25c060", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 15:32:05 compute-0 nova_compute[250018]: 2026-01-20 15:32:05.751 250022 DEBUG nova.network.os_vif_util [None req-525fbeaf-5033-4725-a850-d379255d9b07 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Converting VIF {"id": "3d9d0177-0374-4df6-b536-6a369c25c060", "address": "fa:16:3e:e7:ba:22", "network": {"id": "2ec0e47d-339a-4483-a67a-b500a294d021", "bridge": "br-int", "label": "tempest-network-smoke--347448443", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3d9d0177-03", "ovs_interfaceid": "3d9d0177-0374-4df6-b536-6a369c25c060", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:32:05 compute-0 nova_compute[250018]: 2026-01-20 15:32:05.753 250022 DEBUG nova.network.os_vif_util [None req-525fbeaf-5033-4725-a850-d379255d9b07 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e7:ba:22,bridge_name='br-int',has_traffic_filtering=True,id=3d9d0177-0374-4df6-b536-6a369c25c060,network=Network(2ec0e47d-339a-4483-a67a-b500a294d021),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3d9d0177-03') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:32:05 compute-0 nova_compute[250018]: 2026-01-20 15:32:05.755 250022 DEBUG nova.objects.instance [None req-525fbeaf-5033-4725-a850-d379255d9b07 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lazy-loading 'pci_devices' on Instance uuid 00832e9b-36fa-4d41-be21-ca6b05cd493f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:32:05 compute-0 nova_compute[250018]: 2026-01-20 15:32:05.775 250022 DEBUG nova.virt.libvirt.driver [None req-525fbeaf-5033-4725-a850-d379255d9b07 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 00832e9b-36fa-4d41-be21-ca6b05cd493f] End _get_guest_xml xml=<domain type="kvm">
Jan 20 15:32:05 compute-0 nova_compute[250018]:   <uuid>00832e9b-36fa-4d41-be21-ca6b05cd493f</uuid>
Jan 20 15:32:05 compute-0 nova_compute[250018]:   <name>instance-000000d0</name>
Jan 20 15:32:05 compute-0 nova_compute[250018]:   <memory>131072</memory>
Jan 20 15:32:05 compute-0 nova_compute[250018]:   <vcpu>1</vcpu>
Jan 20 15:32:05 compute-0 nova_compute[250018]:   <metadata>
Jan 20 15:32:05 compute-0 nova_compute[250018]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 15:32:05 compute-0 nova_compute[250018]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 15:32:05 compute-0 nova_compute[250018]:       <nova:name>tempest-TestNetworkBasicOps-server-294660269</nova:name>
Jan 20 15:32:05 compute-0 nova_compute[250018]:       <nova:creationTime>2026-01-20 15:32:04</nova:creationTime>
Jan 20 15:32:05 compute-0 nova_compute[250018]:       <nova:flavor name="m1.nano">
Jan 20 15:32:05 compute-0 nova_compute[250018]:         <nova:memory>128</nova:memory>
Jan 20 15:32:05 compute-0 nova_compute[250018]:         <nova:disk>1</nova:disk>
Jan 20 15:32:05 compute-0 nova_compute[250018]:         <nova:swap>0</nova:swap>
Jan 20 15:32:05 compute-0 nova_compute[250018]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 15:32:05 compute-0 nova_compute[250018]:         <nova:vcpus>1</nova:vcpus>
Jan 20 15:32:05 compute-0 nova_compute[250018]:       </nova:flavor>
Jan 20 15:32:05 compute-0 nova_compute[250018]:       <nova:owner>
Jan 20 15:32:05 compute-0 nova_compute[250018]:         <nova:user uuid="5338aa65dc0e4326a66ce79053787f14">tempest-TestNetworkBasicOps-807695970-project-member</nova:user>
Jan 20 15:32:05 compute-0 nova_compute[250018]:         <nova:project uuid="3168f57421fb49bfb94b85daedd1fe7d">tempest-TestNetworkBasicOps-807695970</nova:project>
Jan 20 15:32:05 compute-0 nova_compute[250018]:       </nova:owner>
Jan 20 15:32:05 compute-0 nova_compute[250018]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 15:32:05 compute-0 nova_compute[250018]:       <nova:ports>
Jan 20 15:32:05 compute-0 nova_compute[250018]:         <nova:port uuid="3d9d0177-0374-4df6-b536-6a369c25c060">
Jan 20 15:32:05 compute-0 nova_compute[250018]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 20 15:32:05 compute-0 nova_compute[250018]:         </nova:port>
Jan 20 15:32:05 compute-0 nova_compute[250018]:       </nova:ports>
Jan 20 15:32:05 compute-0 nova_compute[250018]:     </nova:instance>
Jan 20 15:32:05 compute-0 nova_compute[250018]:   </metadata>
Jan 20 15:32:05 compute-0 nova_compute[250018]:   <sysinfo type="smbios">
Jan 20 15:32:05 compute-0 nova_compute[250018]:     <system>
Jan 20 15:32:05 compute-0 nova_compute[250018]:       <entry name="manufacturer">RDO</entry>
Jan 20 15:32:05 compute-0 nova_compute[250018]:       <entry name="product">OpenStack Compute</entry>
Jan 20 15:32:05 compute-0 nova_compute[250018]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 15:32:05 compute-0 nova_compute[250018]:       <entry name="serial">00832e9b-36fa-4d41-be21-ca6b05cd493f</entry>
Jan 20 15:32:05 compute-0 nova_compute[250018]:       <entry name="uuid">00832e9b-36fa-4d41-be21-ca6b05cd493f</entry>
Jan 20 15:32:05 compute-0 nova_compute[250018]:       <entry name="family">Virtual Machine</entry>
Jan 20 15:32:05 compute-0 nova_compute[250018]:     </system>
Jan 20 15:32:05 compute-0 nova_compute[250018]:   </sysinfo>
Jan 20 15:32:05 compute-0 nova_compute[250018]:   <os>
Jan 20 15:32:05 compute-0 nova_compute[250018]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 15:32:05 compute-0 nova_compute[250018]:     <boot dev="hd"/>
Jan 20 15:32:05 compute-0 nova_compute[250018]:     <smbios mode="sysinfo"/>
Jan 20 15:32:05 compute-0 nova_compute[250018]:   </os>
Jan 20 15:32:05 compute-0 nova_compute[250018]:   <features>
Jan 20 15:32:05 compute-0 nova_compute[250018]:     <acpi/>
Jan 20 15:32:05 compute-0 nova_compute[250018]:     <apic/>
Jan 20 15:32:05 compute-0 nova_compute[250018]:     <vmcoreinfo/>
Jan 20 15:32:05 compute-0 nova_compute[250018]:   </features>
Jan 20 15:32:05 compute-0 nova_compute[250018]:   <clock offset="utc">
Jan 20 15:32:05 compute-0 nova_compute[250018]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 15:32:05 compute-0 nova_compute[250018]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 15:32:05 compute-0 nova_compute[250018]:     <timer name="hpet" present="no"/>
Jan 20 15:32:05 compute-0 nova_compute[250018]:   </clock>
Jan 20 15:32:05 compute-0 nova_compute[250018]:   <cpu mode="custom" match="exact">
Jan 20 15:32:05 compute-0 nova_compute[250018]:     <model>Nehalem</model>
Jan 20 15:32:05 compute-0 nova_compute[250018]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 15:32:05 compute-0 nova_compute[250018]:   </cpu>
Jan 20 15:32:05 compute-0 nova_compute[250018]:   <devices>
Jan 20 15:32:05 compute-0 nova_compute[250018]:     <disk type="network" device="disk">
Jan 20 15:32:05 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 15:32:05 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/00832e9b-36fa-4d41-be21-ca6b05cd493f_disk">
Jan 20 15:32:05 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 15:32:05 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 15:32:05 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 15:32:05 compute-0 nova_compute[250018]:       </source>
Jan 20 15:32:05 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 15:32:05 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 15:32:05 compute-0 nova_compute[250018]:       </auth>
Jan 20 15:32:05 compute-0 nova_compute[250018]:       <target dev="vda" bus="virtio"/>
Jan 20 15:32:05 compute-0 nova_compute[250018]:     </disk>
Jan 20 15:32:05 compute-0 nova_compute[250018]:     <disk type="network" device="cdrom">
Jan 20 15:32:05 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 15:32:05 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/00832e9b-36fa-4d41-be21-ca6b05cd493f_disk.config">
Jan 20 15:32:05 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 15:32:05 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 15:32:05 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 15:32:05 compute-0 nova_compute[250018]:       </source>
Jan 20 15:32:05 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 15:32:05 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 15:32:05 compute-0 nova_compute[250018]:       </auth>
Jan 20 15:32:05 compute-0 nova_compute[250018]:       <target dev="sda" bus="sata"/>
Jan 20 15:32:05 compute-0 nova_compute[250018]:     </disk>
Jan 20 15:32:05 compute-0 nova_compute[250018]:     <interface type="ethernet">
Jan 20 15:32:05 compute-0 nova_compute[250018]:       <mac address="fa:16:3e:e7:ba:22"/>
Jan 20 15:32:05 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 15:32:05 compute-0 nova_compute[250018]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 15:32:05 compute-0 nova_compute[250018]:       <mtu size="1442"/>
Jan 20 15:32:05 compute-0 nova_compute[250018]:       <target dev="tap3d9d0177-03"/>
Jan 20 15:32:05 compute-0 nova_compute[250018]:     </interface>
Jan 20 15:32:05 compute-0 nova_compute[250018]:     <serial type="pty">
Jan 20 15:32:05 compute-0 nova_compute[250018]:       <log file="/var/lib/nova/instances/00832e9b-36fa-4d41-be21-ca6b05cd493f/console.log" append="off"/>
Jan 20 15:32:05 compute-0 nova_compute[250018]:     </serial>
Jan 20 15:32:05 compute-0 nova_compute[250018]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 15:32:05 compute-0 nova_compute[250018]:     <video>
Jan 20 15:32:05 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 15:32:05 compute-0 nova_compute[250018]:     </video>
Jan 20 15:32:05 compute-0 nova_compute[250018]:     <input type="tablet" bus="usb"/>
Jan 20 15:32:05 compute-0 nova_compute[250018]:     <rng model="virtio">
Jan 20 15:32:05 compute-0 nova_compute[250018]:       <backend model="random">/dev/urandom</backend>
Jan 20 15:32:05 compute-0 nova_compute[250018]:     </rng>
Jan 20 15:32:05 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root"/>
Jan 20 15:32:05 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:32:05 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:32:05 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:32:05 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:32:05 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:32:05 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:32:05 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:32:05 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:32:05 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:32:05 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:32:05 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:32:05 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:32:05 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:32:05 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:32:05 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:32:05 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:32:05 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:32:05 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:32:05 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:32:05 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:32:05 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:32:05 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:32:05 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:32:05 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:32:05 compute-0 nova_compute[250018]:     <controller type="usb" index="0"/>
Jan 20 15:32:05 compute-0 nova_compute[250018]:     <memballoon model="virtio">
Jan 20 15:32:05 compute-0 nova_compute[250018]:       <stats period="10"/>
Jan 20 15:32:05 compute-0 nova_compute[250018]:     </memballoon>
Jan 20 15:32:05 compute-0 nova_compute[250018]:   </devices>
Jan 20 15:32:05 compute-0 nova_compute[250018]: </domain>
Jan 20 15:32:05 compute-0 nova_compute[250018]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 15:32:05 compute-0 nova_compute[250018]: 2026-01-20 15:32:05.777 250022 DEBUG nova.compute.manager [None req-525fbeaf-5033-4725-a850-d379255d9b07 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 00832e9b-36fa-4d41-be21-ca6b05cd493f] Preparing to wait for external event network-vif-plugged-3d9d0177-0374-4df6-b536-6a369c25c060 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 20 15:32:05 compute-0 nova_compute[250018]: 2026-01-20 15:32:05.778 250022 DEBUG oslo_concurrency.lockutils [None req-525fbeaf-5033-4725-a850-d379255d9b07 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Acquiring lock "00832e9b-36fa-4d41-be21-ca6b05cd493f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:32:05 compute-0 nova_compute[250018]: 2026-01-20 15:32:05.779 250022 DEBUG oslo_concurrency.lockutils [None req-525fbeaf-5033-4725-a850-d379255d9b07 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "00832e9b-36fa-4d41-be21-ca6b05cd493f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:32:05 compute-0 nova_compute[250018]: 2026-01-20 15:32:05.779 250022 DEBUG oslo_concurrency.lockutils [None req-525fbeaf-5033-4725-a850-d379255d9b07 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "00832e9b-36fa-4d41-be21-ca6b05cd493f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:32:05 compute-0 nova_compute[250018]: 2026-01-20 15:32:05.781 250022 DEBUG nova.virt.libvirt.vif [None req-525fbeaf-5033-4725-a850-d379255d9b07 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T15:31:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-294660269',display_name='tempest-TestNetworkBasicOps-server-294660269',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-294660269',id=208,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBM/wcy1u/OW984noISwEOwb5LNHq9kMm6H4gDYMOeRNg80CMD0xPaXGSJjRjqJjcmlGZ4ls4SDDoXkG2XEqy3wx1zeFMwT8eQgW0fV48l9Sd8ax0F4mvMlYvHLkKrxuyhw==',key_name='tempest-TestNetworkBasicOps-525328153',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3168f57421fb49bfb94b85daedd1fe7d',ramdisk_id='',reservation_id='r-yd9tjeqi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-807695970',owner_user_name='tempest-TestNetworkBasicOps-807695970-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T15:31:58Z,user_data=None,user_id='5338aa65dc0e4326a66ce79053787f14',uuid=00832e9b-36fa-4d41-be21-ca6b05cd493f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3d9d0177-0374-4df6-b536-6a369c25c060", "address": "fa:16:3e:e7:ba:22", "network": {"id": "2ec0e47d-339a-4483-a67a-b500a294d021", "bridge": "br-int", "label": "tempest-network-smoke--347448443", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3d9d0177-03", "ovs_interfaceid": "3d9d0177-0374-4df6-b536-6a369c25c060", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 15:32:05 compute-0 nova_compute[250018]: 2026-01-20 15:32:05.781 250022 DEBUG nova.network.os_vif_util [None req-525fbeaf-5033-4725-a850-d379255d9b07 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Converting VIF {"id": "3d9d0177-0374-4df6-b536-6a369c25c060", "address": "fa:16:3e:e7:ba:22", "network": {"id": "2ec0e47d-339a-4483-a67a-b500a294d021", "bridge": "br-int", "label": "tempest-network-smoke--347448443", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3d9d0177-03", "ovs_interfaceid": "3d9d0177-0374-4df6-b536-6a369c25c060", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:32:05 compute-0 nova_compute[250018]: 2026-01-20 15:32:05.782 250022 DEBUG nova.network.os_vif_util [None req-525fbeaf-5033-4725-a850-d379255d9b07 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e7:ba:22,bridge_name='br-int',has_traffic_filtering=True,id=3d9d0177-0374-4df6-b536-6a369c25c060,network=Network(2ec0e47d-339a-4483-a67a-b500a294d021),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3d9d0177-03') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:32:05 compute-0 nova_compute[250018]: 2026-01-20 15:32:05.783 250022 DEBUG os_vif [None req-525fbeaf-5033-4725-a850-d379255d9b07 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e7:ba:22,bridge_name='br-int',has_traffic_filtering=True,id=3d9d0177-0374-4df6-b536-6a369c25c060,network=Network(2ec0e47d-339a-4483-a67a-b500a294d021),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3d9d0177-03') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 15:32:05 compute-0 nova_compute[250018]: 2026-01-20 15:32:05.784 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:32:05 compute-0 nova_compute[250018]: 2026-01-20 15:32:05.785 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:32:05 compute-0 nova_compute[250018]: 2026-01-20 15:32:05.786 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:32:05 compute-0 nova_compute[250018]: 2026-01-20 15:32:05.789 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:32:05 compute-0 nova_compute[250018]: 2026-01-20 15:32:05.790 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3d9d0177-03, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:32:05 compute-0 nova_compute[250018]: 2026-01-20 15:32:05.790 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3d9d0177-03, col_values=(('external_ids', {'iface-id': '3d9d0177-0374-4df6-b536-6a369c25c060', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e7:ba:22', 'vm-uuid': '00832e9b-36fa-4d41-be21-ca6b05cd493f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:32:05 compute-0 nova_compute[250018]: 2026-01-20 15:32:05.792 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:32:05 compute-0 NetworkManager[48960]: <info>  [1768923125.7937] manager: (tap3d9d0177-03): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/363)
Jan 20 15:32:05 compute-0 nova_compute[250018]: 2026-01-20 15:32:05.796 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 15:32:05 compute-0 nova_compute[250018]: 2026-01-20 15:32:05.801 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:32:05 compute-0 nova_compute[250018]: 2026-01-20 15:32:05.802 250022 INFO os_vif [None req-525fbeaf-5033-4725-a850-d379255d9b07 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e7:ba:22,bridge_name='br-int',has_traffic_filtering=True,id=3d9d0177-0374-4df6-b536-6a369c25c060,network=Network(2ec0e47d-339a-4483-a67a-b500a294d021),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3d9d0177-03')
Jan 20 15:32:05 compute-0 nova_compute[250018]: 2026-01-20 15:32:05.851 250022 DEBUG nova.virt.libvirt.driver [None req-525fbeaf-5033-4725-a850-d379255d9b07 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:32:05 compute-0 nova_compute[250018]: 2026-01-20 15:32:05.852 250022 DEBUG nova.virt.libvirt.driver [None req-525fbeaf-5033-4725-a850-d379255d9b07 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:32:05 compute-0 nova_compute[250018]: 2026-01-20 15:32:05.852 250022 DEBUG nova.virt.libvirt.driver [None req-525fbeaf-5033-4725-a850-d379255d9b07 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] No VIF found with MAC fa:16:3e:e7:ba:22, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 15:32:05 compute-0 nova_compute[250018]: 2026-01-20 15:32:05.853 250022 INFO nova.virt.libvirt.driver [None req-525fbeaf-5033-4725-a850-d379255d9b07 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 00832e9b-36fa-4d41-be21-ca6b05cd493f] Using config drive
Jan 20 15:32:05 compute-0 nova_compute[250018]: 2026-01-20 15:32:05.876 250022 DEBUG nova.storage.rbd_utils [None req-525fbeaf-5033-4725-a850-d379255d9b07 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] rbd image 00832e9b-36fa-4d41-be21-ca6b05cd493f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:32:06 compute-0 ceph-mon[74360]: pgmap v3297: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:32:06 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3040056216' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:32:06 compute-0 nova_compute[250018]: 2026-01-20 15:32:06.612 250022 INFO nova.virt.libvirt.driver [None req-525fbeaf-5033-4725-a850-d379255d9b07 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 00832e9b-36fa-4d41-be21-ca6b05cd493f] Creating config drive at /var/lib/nova/instances/00832e9b-36fa-4d41-be21-ca6b05cd493f/disk.config
Jan 20 15:32:06 compute-0 nova_compute[250018]: 2026-01-20 15:32:06.616 250022 DEBUG oslo_concurrency.processutils [None req-525fbeaf-5033-4725-a850-d379255d9b07 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/00832e9b-36fa-4d41-be21-ca6b05cd493f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpof645gr5 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:32:06 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:32:06 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:32:06 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:32:06.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:32:06 compute-0 nova_compute[250018]: 2026-01-20 15:32:06.750 250022 DEBUG oslo_concurrency.processutils [None req-525fbeaf-5033-4725-a850-d379255d9b07 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/00832e9b-36fa-4d41-be21-ca6b05cd493f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpof645gr5" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:32:06 compute-0 nova_compute[250018]: 2026-01-20 15:32:06.788 250022 DEBUG nova.storage.rbd_utils [None req-525fbeaf-5033-4725-a850-d379255d9b07 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] rbd image 00832e9b-36fa-4d41-be21-ca6b05cd493f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:32:06 compute-0 nova_compute[250018]: 2026-01-20 15:32:06.793 250022 DEBUG oslo_concurrency.processutils [None req-525fbeaf-5033-4725-a850-d379255d9b07 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/00832e9b-36fa-4d41-be21-ca6b05cd493f/disk.config 00832e9b-36fa-4d41-be21-ca6b05cd493f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:32:06 compute-0 nova_compute[250018]: 2026-01-20 15:32:06.824 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:32:06 compute-0 nova_compute[250018]: 2026-01-20 15:32:06.980 250022 DEBUG oslo_concurrency.processutils [None req-525fbeaf-5033-4725-a850-d379255d9b07 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/00832e9b-36fa-4d41-be21-ca6b05cd493f/disk.config 00832e9b-36fa-4d41-be21-ca6b05cd493f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.186s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:32:06 compute-0 nova_compute[250018]: 2026-01-20 15:32:06.980 250022 INFO nova.virt.libvirt.driver [None req-525fbeaf-5033-4725-a850-d379255d9b07 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 00832e9b-36fa-4d41-be21-ca6b05cd493f] Deleting local config drive /var/lib/nova/instances/00832e9b-36fa-4d41-be21-ca6b05cd493f/disk.config because it was imported into RBD.
Jan 20 15:32:07 compute-0 kernel: tap3d9d0177-03: entered promiscuous mode
Jan 20 15:32:07 compute-0 NetworkManager[48960]: <info>  [1768923127.0344] manager: (tap3d9d0177-03): new Tun device (/org/freedesktop/NetworkManager/Devices/364)
Jan 20 15:32:07 compute-0 ovn_controller[148666]: 2026-01-20T15:32:07Z|00756|binding|INFO|Claiming lport 3d9d0177-0374-4df6-b536-6a369c25c060 for this chassis.
Jan 20 15:32:07 compute-0 ovn_controller[148666]: 2026-01-20T15:32:07Z|00757|binding|INFO|3d9d0177-0374-4df6-b536-6a369c25c060: Claiming fa:16:3e:e7:ba:22 10.100.0.7
Jan 20 15:32:07 compute-0 nova_compute[250018]: 2026-01-20 15:32:07.034 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:32:07 compute-0 nova_compute[250018]: 2026-01-20 15:32:07.037 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:32:07 compute-0 nova_compute[250018]: 2026-01-20 15:32:07.040 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:32:07 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:32:07.049 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e7:ba:22 10.100.0.7'], port_security=['fa:16:3e:e7:ba:22 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '00832e9b-36fa-4d41-be21-ca6b05cd493f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2ec0e47d-339a-4483-a67a-b500a294d021', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3168f57421fb49bfb94b85daedd1fe7d', 'neutron:revision_number': '2', 'neutron:security_group_ids': '41c4163e-420d-44a5-bbd3-fcb6dc547b9d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=248a3941-38d3-4ab1-8c64-6993152de5bd, chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=3d9d0177-0374-4df6-b536-6a369c25c060) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:32:07 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:32:07.051 160071 INFO neutron.agent.ovn.metadata.agent [-] Port 3d9d0177-0374-4df6-b536-6a369c25c060 in datapath 2ec0e47d-339a-4483-a67a-b500a294d021 bound to our chassis
Jan 20 15:32:07 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:32:07.053 160071 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2ec0e47d-339a-4483-a67a-b500a294d021
Jan 20 15:32:07 compute-0 systemd-machined[216401]: New machine qemu-91-instance-000000d0.
Jan 20 15:32:07 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:32:07.072 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[e873beb2-928a-4649-bc11-d60241ac6674]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:32:07 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:32:07.073 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2ec0e47d-31 in ovnmeta-2ec0e47d-339a-4483-a67a-b500a294d021 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 15:32:07 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:32:07.075 257604 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2ec0e47d-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 15:32:07 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:32:07.075 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[218bf825-e08a-4600-89c5-3fd85c05a754]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:32:07 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:32:07.076 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[691abfb1-987b-4d5d-88c8-cc0a78e977a5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:32:07 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:32:07.093 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[689d1560-2d05-46eb-bba5-bab4760e719a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:32:07 compute-0 systemd[1]: Started Virtual Machine qemu-91-instance-000000d0.
Jan 20 15:32:07 compute-0 nova_compute[250018]: 2026-01-20 15:32:07.105 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:32:07 compute-0 systemd-udevd[385239]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 15:32:07 compute-0 ovn_controller[148666]: 2026-01-20T15:32:07Z|00758|binding|INFO|Setting lport 3d9d0177-0374-4df6-b536-6a369c25c060 ovn-installed in OVS
Jan 20 15:32:07 compute-0 ovn_controller[148666]: 2026-01-20T15:32:07Z|00759|binding|INFO|Setting lport 3d9d0177-0374-4df6-b536-6a369c25c060 up in Southbound
Jan 20 15:32:07 compute-0 nova_compute[250018]: 2026-01-20 15:32:07.114 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:32:07 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:32:07.116 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[4383c3ca-1c47-4fb0-9766-fd525e66c337]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:32:07 compute-0 NetworkManager[48960]: <info>  [1768923127.1232] device (tap3d9d0177-03): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 15:32:07 compute-0 NetworkManager[48960]: <info>  [1768923127.1241] device (tap3d9d0177-03): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 15:32:07 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:32:07.146 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[166f2afc-744d-4112-9b24-a7d3513b2888]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:32:07 compute-0 systemd-udevd[385242]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 15:32:07 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:32:07.151 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[baef0337-88f9-4cf8-a97a-b28c259aa8d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:32:07 compute-0 NetworkManager[48960]: <info>  [1768923127.1528] manager: (tap2ec0e47d-30): new Veth device (/org/freedesktop/NetworkManager/Devices/365)
Jan 20 15:32:07 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:32:07.178 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[c1294b50-4285-46c2-b4bd-71d88aec66eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:32:07 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:32:07.181 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[81cf1dff-3d71-4693-9f9c-251427c5ac19]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:32:07 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3298: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:32:07 compute-0 NetworkManager[48960]: <info>  [1768923127.2024] device (tap2ec0e47d-30): carrier: link connected
Jan 20 15:32:07 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:32:07.209 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[444356bb-b551-4f62-a511-77d7c16cf11e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:32:07 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:32:07.224 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[c858b51e-156b-4047-b9cd-337ff7680366]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2ec0e47d-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1c:ea:af'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 239], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 906792, 'reachable_time': 19221, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 385269, 'error': None, 'target': 'ovnmeta-2ec0e47d-339a-4483-a67a-b500a294d021', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:32:07 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:32:07.241 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[5c8f8f6b-4f7c-4bac-b0b0-ddadd448387a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe1c:eaaf'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 906792, 'tstamp': 906792}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 385270, 'error': None, 'target': 'ovnmeta-2ec0e47d-339a-4483-a67a-b500a294d021', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:32:07 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:32:07.259 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[30c99d92-e642-499c-9556-eddb2c5de1c2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2ec0e47d-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1c:ea:af'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 239], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 906792, 'reachable_time': 19221, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 385271, 'error': None, 'target': 'ovnmeta-2ec0e47d-339a-4483-a67a-b500a294d021', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:32:07 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:32:07.289 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[94a03a7e-060f-4a1f-8a46-22c1a9ae9dea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:32:07 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:32:07.350 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[d44fc02b-ac00-45c4-8b69-dd6a1ce3f57a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:32:07 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:32:07.351 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2ec0e47d-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:32:07 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:32:07.351 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:32:07 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:32:07.352 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2ec0e47d-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:32:07 compute-0 NetworkManager[48960]: <info>  [1768923127.3538] manager: (tap2ec0e47d-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/366)
Jan 20 15:32:07 compute-0 kernel: tap2ec0e47d-30: entered promiscuous mode
Jan 20 15:32:07 compute-0 nova_compute[250018]: 2026-01-20 15:32:07.354 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:32:07 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:32:07.355 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2ec0e47d-30, col_values=(('external_ids', {'iface-id': '3cb40d6c-abbe-41b5-94ed-cb8dcac5474e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:32:07 compute-0 ovn_controller[148666]: 2026-01-20T15:32:07Z|00760|binding|INFO|Releasing lport 3cb40d6c-abbe-41b5-94ed-cb8dcac5474e from this chassis (sb_readonly=0)
Jan 20 15:32:07 compute-0 nova_compute[250018]: 2026-01-20 15:32:07.357 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:32:07 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:32:07.358 160071 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2ec0e47d-339a-4483-a67a-b500a294d021.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2ec0e47d-339a-4483-a67a-b500a294d021.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 15:32:07 compute-0 nova_compute[250018]: 2026-01-20 15:32:07.369 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:32:07 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:32:07.370 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[fa918e37-cbb0-45f4-80a8-42fbf8c572f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:32:07 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:32:07.371 160071 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 15:32:07 compute-0 ovn_metadata_agent[160049]: global
Jan 20 15:32:07 compute-0 ovn_metadata_agent[160049]:     log         /dev/log local0 debug
Jan 20 15:32:07 compute-0 ovn_metadata_agent[160049]:     log-tag     haproxy-metadata-proxy-2ec0e47d-339a-4483-a67a-b500a294d021
Jan 20 15:32:07 compute-0 ovn_metadata_agent[160049]:     user        root
Jan 20 15:32:07 compute-0 ovn_metadata_agent[160049]:     group       root
Jan 20 15:32:07 compute-0 ovn_metadata_agent[160049]:     maxconn     1024
Jan 20 15:32:07 compute-0 ovn_metadata_agent[160049]:     pidfile     /var/lib/neutron/external/pids/2ec0e47d-339a-4483-a67a-b500a294d021.pid.haproxy
Jan 20 15:32:07 compute-0 ovn_metadata_agent[160049]:     daemon
Jan 20 15:32:07 compute-0 ovn_metadata_agent[160049]: 
Jan 20 15:32:07 compute-0 ovn_metadata_agent[160049]: defaults
Jan 20 15:32:07 compute-0 ovn_metadata_agent[160049]:     log global
Jan 20 15:32:07 compute-0 ovn_metadata_agent[160049]:     mode http
Jan 20 15:32:07 compute-0 ovn_metadata_agent[160049]:     option httplog
Jan 20 15:32:07 compute-0 ovn_metadata_agent[160049]:     option dontlognull
Jan 20 15:32:07 compute-0 ovn_metadata_agent[160049]:     option http-server-close
Jan 20 15:32:07 compute-0 ovn_metadata_agent[160049]:     option forwardfor
Jan 20 15:32:07 compute-0 ovn_metadata_agent[160049]:     retries                 3
Jan 20 15:32:07 compute-0 ovn_metadata_agent[160049]:     timeout http-request    30s
Jan 20 15:32:07 compute-0 ovn_metadata_agent[160049]:     timeout connect         30s
Jan 20 15:32:07 compute-0 ovn_metadata_agent[160049]:     timeout client          32s
Jan 20 15:32:07 compute-0 ovn_metadata_agent[160049]:     timeout server          32s
Jan 20 15:32:07 compute-0 ovn_metadata_agent[160049]:     timeout http-keep-alive 30s
Jan 20 15:32:07 compute-0 ovn_metadata_agent[160049]: 
Jan 20 15:32:07 compute-0 ovn_metadata_agent[160049]: 
Jan 20 15:32:07 compute-0 ovn_metadata_agent[160049]: listen listener
Jan 20 15:32:07 compute-0 ovn_metadata_agent[160049]:     bind 169.254.169.254:80
Jan 20 15:32:07 compute-0 ovn_metadata_agent[160049]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 15:32:07 compute-0 ovn_metadata_agent[160049]:     http-request add-header X-OVN-Network-ID 2ec0e47d-339a-4483-a67a-b500a294d021
Jan 20 15:32:07 compute-0 ovn_metadata_agent[160049]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 15:32:07 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:32:07.372 160071 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2ec0e47d-339a-4483-a67a-b500a294d021', 'env', 'PROCESS_TAG=haproxy-2ec0e47d-339a-4483-a67a-b500a294d021', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2ec0e47d-339a-4483-a67a-b500a294d021.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 15:32:07 compute-0 nova_compute[250018]: 2026-01-20 15:32:07.373 250022 DEBUG nova.network.neutron [req-4be5c1d8-7c2d-469c-85ef-9778539c9910 req-06b3404a-39b8-42b6-8528-c1eea53c7355 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 00832e9b-36fa-4d41-be21-ca6b05cd493f] Updated VIF entry in instance network info cache for port 3d9d0177-0374-4df6-b536-6a369c25c060. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 15:32:07 compute-0 nova_compute[250018]: 2026-01-20 15:32:07.374 250022 DEBUG nova.network.neutron [req-4be5c1d8-7c2d-469c-85ef-9778539c9910 req-06b3404a-39b8-42b6-8528-c1eea53c7355 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 00832e9b-36fa-4d41-be21-ca6b05cd493f] Updating instance_info_cache with network_info: [{"id": "3d9d0177-0374-4df6-b536-6a369c25c060", "address": "fa:16:3e:e7:ba:22", "network": {"id": "2ec0e47d-339a-4483-a67a-b500a294d021", "bridge": "br-int", "label": "tempest-network-smoke--347448443", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3d9d0177-03", "ovs_interfaceid": "3d9d0177-0374-4df6-b536-6a369c25c060", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:32:07 compute-0 nova_compute[250018]: 2026-01-20 15:32:07.391 250022 DEBUG oslo_concurrency.lockutils [req-4be5c1d8-7c2d-469c-85ef-9778539c9910 req-06b3404a-39b8-42b6-8528-c1eea53c7355 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-00832e9b-36fa-4d41-be21-ca6b05cd493f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:32:07 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:32:07 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:32:07 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:32:07.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:32:07 compute-0 nova_compute[250018]: 2026-01-20 15:32:07.522 250022 DEBUG nova.compute.manager [req-05c7e228-674d-407e-8c76-3001eaa9c8a8 req-c64f92af-396f-43e9-9bba-e30f490001aa 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 00832e9b-36fa-4d41-be21-ca6b05cd493f] Received event network-vif-plugged-3d9d0177-0374-4df6-b536-6a369c25c060 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:32:07 compute-0 nova_compute[250018]: 2026-01-20 15:32:07.523 250022 DEBUG oslo_concurrency.lockutils [req-05c7e228-674d-407e-8c76-3001eaa9c8a8 req-c64f92af-396f-43e9-9bba-e30f490001aa 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "00832e9b-36fa-4d41-be21-ca6b05cd493f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:32:07 compute-0 nova_compute[250018]: 2026-01-20 15:32:07.523 250022 DEBUG oslo_concurrency.lockutils [req-05c7e228-674d-407e-8c76-3001eaa9c8a8 req-c64f92af-396f-43e9-9bba-e30f490001aa 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "00832e9b-36fa-4d41-be21-ca6b05cd493f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:32:07 compute-0 nova_compute[250018]: 2026-01-20 15:32:07.523 250022 DEBUG oslo_concurrency.lockutils [req-05c7e228-674d-407e-8c76-3001eaa9c8a8 req-c64f92af-396f-43e9-9bba-e30f490001aa 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "00832e9b-36fa-4d41-be21-ca6b05cd493f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:32:07 compute-0 nova_compute[250018]: 2026-01-20 15:32:07.523 250022 DEBUG nova.compute.manager [req-05c7e228-674d-407e-8c76-3001eaa9c8a8 req-c64f92af-396f-43e9-9bba-e30f490001aa 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 00832e9b-36fa-4d41-be21-ca6b05cd493f] Processing event network-vif-plugged-3d9d0177-0374-4df6-b536-6a369c25c060 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 20 15:32:07 compute-0 podman[385303]: 2026-01-20 15:32:07.75695525 +0000 UTC m=+0.053726655 container create 4f6f3770a0af28bc002c57d10931c994e230c831ef5ad0114b53a4c88a95cbf8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2ec0e47d-339a-4483-a67a-b500a294d021, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Jan 20 15:32:07 compute-0 systemd[1]: Started libpod-conmon-4f6f3770a0af28bc002c57d10931c994e230c831ef5ad0114b53a4c88a95cbf8.scope.
Jan 20 15:32:07 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:32:07 compute-0 podman[385303]: 2026-01-20 15:32:07.730757046 +0000 UTC m=+0.027528441 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 15:32:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/918fd2f60ca748264a0344ae1f79df4a2d533fa1aff64c09c6d284c00785cba8/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 15:32:07 compute-0 podman[385303]: 2026-01-20 15:32:07.844509957 +0000 UTC m=+0.141281382 container init 4f6f3770a0af28bc002c57d10931c994e230c831ef5ad0114b53a4c88a95cbf8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2ec0e47d-339a-4483-a67a-b500a294d021, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 20 15:32:07 compute-0 podman[385303]: 2026-01-20 15:32:07.849477452 +0000 UTC m=+0.146248857 container start 4f6f3770a0af28bc002c57d10931c994e230c831ef5ad0114b53a4c88a95cbf8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2ec0e47d-339a-4483-a67a-b500a294d021, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202)
Jan 20 15:32:07 compute-0 neutron-haproxy-ovnmeta-2ec0e47d-339a-4483-a67a-b500a294d021[385318]: [NOTICE]   (385322) : New worker (385339) forked
Jan 20 15:32:07 compute-0 neutron-haproxy-ovnmeta-2ec0e47d-339a-4483-a67a-b500a294d021[385318]: [NOTICE]   (385322) : Loading success.
Jan 20 15:32:08 compute-0 nova_compute[250018]: 2026-01-20 15:32:08.047 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768923128.0471504, 00832e9b-36fa-4d41-be21-ca6b05cd493f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:32:08 compute-0 nova_compute[250018]: 2026-01-20 15:32:08.048 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 00832e9b-36fa-4d41-be21-ca6b05cd493f] VM Started (Lifecycle Event)
Jan 20 15:32:08 compute-0 nova_compute[250018]: 2026-01-20 15:32:08.049 250022 DEBUG nova.compute.manager [None req-525fbeaf-5033-4725-a850-d379255d9b07 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 00832e9b-36fa-4d41-be21-ca6b05cd493f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 15:32:08 compute-0 nova_compute[250018]: 2026-01-20 15:32:08.052 250022 DEBUG nova.virt.libvirt.driver [None req-525fbeaf-5033-4725-a850-d379255d9b07 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 00832e9b-36fa-4d41-be21-ca6b05cd493f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 15:32:08 compute-0 nova_compute[250018]: 2026-01-20 15:32:08.055 250022 INFO nova.virt.libvirt.driver [-] [instance: 00832e9b-36fa-4d41-be21-ca6b05cd493f] Instance spawned successfully.
Jan 20 15:32:08 compute-0 nova_compute[250018]: 2026-01-20 15:32:08.055 250022 DEBUG nova.virt.libvirt.driver [None req-525fbeaf-5033-4725-a850-d379255d9b07 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 00832e9b-36fa-4d41-be21-ca6b05cd493f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 15:32:08 compute-0 nova_compute[250018]: 2026-01-20 15:32:08.072 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 00832e9b-36fa-4d41-be21-ca6b05cd493f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:32:08 compute-0 nova_compute[250018]: 2026-01-20 15:32:08.079 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 00832e9b-36fa-4d41-be21-ca6b05cd493f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:32:08 compute-0 nova_compute[250018]: 2026-01-20 15:32:08.084 250022 DEBUG nova.virt.libvirt.driver [None req-525fbeaf-5033-4725-a850-d379255d9b07 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 00832e9b-36fa-4d41-be21-ca6b05cd493f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:32:08 compute-0 nova_compute[250018]: 2026-01-20 15:32:08.085 250022 DEBUG nova.virt.libvirt.driver [None req-525fbeaf-5033-4725-a850-d379255d9b07 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 00832e9b-36fa-4d41-be21-ca6b05cd493f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:32:08 compute-0 nova_compute[250018]: 2026-01-20 15:32:08.086 250022 DEBUG nova.virt.libvirt.driver [None req-525fbeaf-5033-4725-a850-d379255d9b07 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 00832e9b-36fa-4d41-be21-ca6b05cd493f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:32:08 compute-0 nova_compute[250018]: 2026-01-20 15:32:08.086 250022 DEBUG nova.virt.libvirt.driver [None req-525fbeaf-5033-4725-a850-d379255d9b07 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 00832e9b-36fa-4d41-be21-ca6b05cd493f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:32:08 compute-0 nova_compute[250018]: 2026-01-20 15:32:08.087 250022 DEBUG nova.virt.libvirt.driver [None req-525fbeaf-5033-4725-a850-d379255d9b07 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 00832e9b-36fa-4d41-be21-ca6b05cd493f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:32:08 compute-0 nova_compute[250018]: 2026-01-20 15:32:08.088 250022 DEBUG nova.virt.libvirt.driver [None req-525fbeaf-5033-4725-a850-d379255d9b07 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 00832e9b-36fa-4d41-be21-ca6b05cd493f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:32:08 compute-0 nova_compute[250018]: 2026-01-20 15:32:08.121 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 00832e9b-36fa-4d41-be21-ca6b05cd493f] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 15:32:08 compute-0 nova_compute[250018]: 2026-01-20 15:32:08.121 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768923128.0472443, 00832e9b-36fa-4d41-be21-ca6b05cd493f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:32:08 compute-0 nova_compute[250018]: 2026-01-20 15:32:08.122 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 00832e9b-36fa-4d41-be21-ca6b05cd493f] VM Paused (Lifecycle Event)
Jan 20 15:32:08 compute-0 nova_compute[250018]: 2026-01-20 15:32:08.168 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 00832e9b-36fa-4d41-be21-ca6b05cd493f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:32:08 compute-0 nova_compute[250018]: 2026-01-20 15:32:08.172 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768923128.051668, 00832e9b-36fa-4d41-be21-ca6b05cd493f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:32:08 compute-0 nova_compute[250018]: 2026-01-20 15:32:08.173 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 00832e9b-36fa-4d41-be21-ca6b05cd493f] VM Resumed (Lifecycle Event)
Jan 20 15:32:08 compute-0 nova_compute[250018]: 2026-01-20 15:32:08.193 250022 INFO nova.compute.manager [None req-525fbeaf-5033-4725-a850-d379255d9b07 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 00832e9b-36fa-4d41-be21-ca6b05cd493f] Took 9.60 seconds to spawn the instance on the hypervisor.
Jan 20 15:32:08 compute-0 nova_compute[250018]: 2026-01-20 15:32:08.194 250022 DEBUG nova.compute.manager [None req-525fbeaf-5033-4725-a850-d379255d9b07 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 00832e9b-36fa-4d41-be21-ca6b05cd493f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:32:08 compute-0 nova_compute[250018]: 2026-01-20 15:32:08.195 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 00832e9b-36fa-4d41-be21-ca6b05cd493f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:32:08 compute-0 nova_compute[250018]: 2026-01-20 15:32:08.201 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 00832e9b-36fa-4d41-be21-ca6b05cd493f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:32:08 compute-0 nova_compute[250018]: 2026-01-20 15:32:08.235 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 00832e9b-36fa-4d41-be21-ca6b05cd493f] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 15:32:08 compute-0 nova_compute[250018]: 2026-01-20 15:32:08.256 250022 INFO nova.compute.manager [None req-525fbeaf-5033-4725-a850-d379255d9b07 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 00832e9b-36fa-4d41-be21-ca6b05cd493f] Took 10.45 seconds to build instance.
Jan 20 15:32:08 compute-0 nova_compute[250018]: 2026-01-20 15:32:08.294 250022 DEBUG oslo_concurrency.lockutils [None req-525fbeaf-5033-4725-a850-d379255d9b07 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "00832e9b-36fa-4d41-be21-ca6b05cd493f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.548s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:32:08 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:32:08 compute-0 sudo[385376]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:32:08 compute-0 sudo[385376]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:32:08 compute-0 sudo[385376]: pam_unix(sudo:session): session closed for user root
Jan 20 15:32:08 compute-0 sudo[385401]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:32:08 compute-0 sudo[385401]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:32:08 compute-0 sudo[385401]: pam_unix(sudo:session): session closed for user root
Jan 20 15:32:08 compute-0 ceph-mon[74360]: pgmap v3298: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:32:08 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:32:08 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:32:08 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:32:08.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:32:09 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3299: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 49 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Jan 20 15:32:09 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:32:09 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:32:09 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:32:09.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:32:09 compute-0 nova_compute[250018]: 2026-01-20 15:32:09.624 250022 DEBUG nova.compute.manager [req-c9070def-af50-416c-a5a0-403e9c0b4095 req-a31bb945-5333-4c02-82df-66bf61eade4d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 00832e9b-36fa-4d41-be21-ca6b05cd493f] Received event network-vif-plugged-3d9d0177-0374-4df6-b536-6a369c25c060 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:32:09 compute-0 nova_compute[250018]: 2026-01-20 15:32:09.624 250022 DEBUG oslo_concurrency.lockutils [req-c9070def-af50-416c-a5a0-403e9c0b4095 req-a31bb945-5333-4c02-82df-66bf61eade4d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "00832e9b-36fa-4d41-be21-ca6b05cd493f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:32:09 compute-0 nova_compute[250018]: 2026-01-20 15:32:09.625 250022 DEBUG oslo_concurrency.lockutils [req-c9070def-af50-416c-a5a0-403e9c0b4095 req-a31bb945-5333-4c02-82df-66bf61eade4d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "00832e9b-36fa-4d41-be21-ca6b05cd493f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:32:09 compute-0 nova_compute[250018]: 2026-01-20 15:32:09.625 250022 DEBUG oslo_concurrency.lockutils [req-c9070def-af50-416c-a5a0-403e9c0b4095 req-a31bb945-5333-4c02-82df-66bf61eade4d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "00832e9b-36fa-4d41-be21-ca6b05cd493f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:32:09 compute-0 nova_compute[250018]: 2026-01-20 15:32:09.625 250022 DEBUG nova.compute.manager [req-c9070def-af50-416c-a5a0-403e9c0b4095 req-a31bb945-5333-4c02-82df-66bf61eade4d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 00832e9b-36fa-4d41-be21-ca6b05cd493f] No waiting events found dispatching network-vif-plugged-3d9d0177-0374-4df6-b536-6a369c25c060 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:32:09 compute-0 nova_compute[250018]: 2026-01-20 15:32:09.625 250022 WARNING nova.compute.manager [req-c9070def-af50-416c-a5a0-403e9c0b4095 req-a31bb945-5333-4c02-82df-66bf61eade4d 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 00832e9b-36fa-4d41-be21-ca6b05cd493f] Received unexpected event network-vif-plugged-3d9d0177-0374-4df6-b536-6a369c25c060 for instance with vm_state active and task_state None.
Jan 20 15:32:10 compute-0 nova_compute[250018]: 2026-01-20 15:32:10.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:32:10 compute-0 ceph-mon[74360]: pgmap v3299: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 49 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Jan 20 15:32:10 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:32:10 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:32:10 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:32:10.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:32:10 compute-0 nova_compute[250018]: 2026-01-20 15:32:10.793 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:32:11 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3300: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 94 op/s
Jan 20 15:32:11 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:32:11 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:32:11 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:32:11.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:32:11 compute-0 ovn_controller[148666]: 2026-01-20T15:32:11Z|00761|binding|INFO|Releasing lport 3cb40d6c-abbe-41b5-94ed-cb8dcac5474e from this chassis (sb_readonly=0)
Jan 20 15:32:11 compute-0 nova_compute[250018]: 2026-01-20 15:32:11.766 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:32:11 compute-0 NetworkManager[48960]: <info>  [1768923131.7704] manager: (patch-br-int-to-provnet-b62c391b-f7a3-4a38-a0df-72ac0383ca74): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/367)
Jan 20 15:32:11 compute-0 NetworkManager[48960]: <info>  [1768923131.7711] manager: (patch-provnet-b62c391b-f7a3-4a38-a0df-72ac0383ca74-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/368)
Jan 20 15:32:11 compute-0 nova_compute[250018]: 2026-01-20 15:32:11.819 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:32:11 compute-0 ovn_controller[148666]: 2026-01-20T15:32:11Z|00762|binding|INFO|Releasing lport 3cb40d6c-abbe-41b5-94ed-cb8dcac5474e from this chassis (sb_readonly=0)
Jan 20 15:32:11 compute-0 nova_compute[250018]: 2026-01-20 15:32:11.826 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:32:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 15:32:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:32:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 20 15:32:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:32:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0009958283896333519 of space, bias 1.0, pg target 0.2987485168900056 quantized to 32 (current 32)
Jan 20 15:32:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:32:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021614147124511445 of space, bias 1.0, pg target 0.6484244137353433 quantized to 32 (current 32)
Jan 20 15:32:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:32:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 20 15:32:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:32:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 20 15:32:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:32:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 20 15:32:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:32:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 15:32:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:32:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 20 15:32:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:32:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 20 15:32:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:32:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 15:32:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:32:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 20 15:32:12 compute-0 nova_compute[250018]: 2026-01-20 15:32:12.459 250022 DEBUG nova.compute.manager [req-0cea5cf8-7243-4834-9d43-35f174975fa0 req-fd1b917f-46c5-4ddd-9b52-1f440a3ac96a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 00832e9b-36fa-4d41-be21-ca6b05cd493f] Received event network-changed-3d9d0177-0374-4df6-b536-6a369c25c060 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:32:12 compute-0 nova_compute[250018]: 2026-01-20 15:32:12.459 250022 DEBUG nova.compute.manager [req-0cea5cf8-7243-4834-9d43-35f174975fa0 req-fd1b917f-46c5-4ddd-9b52-1f440a3ac96a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 00832e9b-36fa-4d41-be21-ca6b05cd493f] Refreshing instance network info cache due to event network-changed-3d9d0177-0374-4df6-b536-6a369c25c060. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 15:32:12 compute-0 nova_compute[250018]: 2026-01-20 15:32:12.459 250022 DEBUG oslo_concurrency.lockutils [req-0cea5cf8-7243-4834-9d43-35f174975fa0 req-fd1b917f-46c5-4ddd-9b52-1f440a3ac96a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-00832e9b-36fa-4d41-be21-ca6b05cd493f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:32:12 compute-0 nova_compute[250018]: 2026-01-20 15:32:12.459 250022 DEBUG oslo_concurrency.lockutils [req-0cea5cf8-7243-4834-9d43-35f174975fa0 req-fd1b917f-46c5-4ddd-9b52-1f440a3ac96a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-00832e9b-36fa-4d41-be21-ca6b05cd493f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:32:12 compute-0 nova_compute[250018]: 2026-01-20 15:32:12.460 250022 DEBUG nova.network.neutron [req-0cea5cf8-7243-4834-9d43-35f174975fa0 req-fd1b917f-46c5-4ddd-9b52-1f440a3ac96a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 00832e9b-36fa-4d41-be21-ca6b05cd493f] Refreshing network info cache for port 3d9d0177-0374-4df6-b536-6a369c25c060 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 15:32:12 compute-0 ceph-mon[74360]: pgmap v3300: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 94 op/s
Jan 20 15:32:12 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:32:12 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:32:12 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:32:12.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:32:13 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3301: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 151 KiB/s wr, 68 op/s
Jan 20 15:32:13 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:32:13 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:32:13 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:32:13 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:32:13.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:32:13 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/779206332' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:32:13 compute-0 nova_compute[250018]: 2026-01-20 15:32:13.650 250022 DEBUG nova.network.neutron [req-0cea5cf8-7243-4834-9d43-35f174975fa0 req-fd1b917f-46c5-4ddd-9b52-1f440a3ac96a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 00832e9b-36fa-4d41-be21-ca6b05cd493f] Updated VIF entry in instance network info cache for port 3d9d0177-0374-4df6-b536-6a369c25c060. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 15:32:13 compute-0 nova_compute[250018]: 2026-01-20 15:32:13.651 250022 DEBUG nova.network.neutron [req-0cea5cf8-7243-4834-9d43-35f174975fa0 req-fd1b917f-46c5-4ddd-9b52-1f440a3ac96a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 00832e9b-36fa-4d41-be21-ca6b05cd493f] Updating instance_info_cache with network_info: [{"id": "3d9d0177-0374-4df6-b536-6a369c25c060", "address": "fa:16:3e:e7:ba:22", "network": {"id": "2ec0e47d-339a-4483-a67a-b500a294d021", "bridge": "br-int", "label": "tempest-network-smoke--347448443", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.220", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3d9d0177-03", "ovs_interfaceid": "3d9d0177-0374-4df6-b536-6a369c25c060", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:32:13 compute-0 nova_compute[250018]: 2026-01-20 15:32:13.671 250022 DEBUG oslo_concurrency.lockutils [req-0cea5cf8-7243-4834-9d43-35f174975fa0 req-fd1b917f-46c5-4ddd-9b52-1f440a3ac96a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-00832e9b-36fa-4d41-be21-ca6b05cd493f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:32:14 compute-0 ceph-mon[74360]: pgmap v3301: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 151 KiB/s wr, 68 op/s
Jan 20 15:32:14 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/2050766060' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:32:14 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/2050766060' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:32:14 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/3001522942' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:32:14 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/1817760916' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:32:14 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/3050195879' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:32:14 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:32:14 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:32:14 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:32:14.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:32:15 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3302: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 151 KiB/s wr, 74 op/s
Jan 20 15:32:15 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:32:15 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:32:15 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:32:15.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:32:15 compute-0 nova_compute[250018]: 2026-01-20 15:32:15.795 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:32:15 compute-0 sshd-session[385429]: Invalid user user from 134.122.57.138 port 53042
Jan 20 15:32:16 compute-0 nova_compute[250018]: 2026-01-20 15:32:16.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:32:16 compute-0 nova_compute[250018]: 2026-01-20 15:32:16.094 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:32:16 compute-0 nova_compute[250018]: 2026-01-20 15:32:16.094 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:32:16 compute-0 nova_compute[250018]: 2026-01-20 15:32:16.095 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:32:16 compute-0 nova_compute[250018]: 2026-01-20 15:32:16.095 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 15:32:16 compute-0 nova_compute[250018]: 2026-01-20 15:32:16.096 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:32:16 compute-0 sshd-session[385429]: Connection closed by invalid user user 134.122.57.138 port 53042 [preauth]
Jan 20 15:32:16 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:32:16 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2230532106' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:32:16 compute-0 nova_compute[250018]: 2026-01-20 15:32:16.522 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:32:16 compute-0 nova_compute[250018]: 2026-01-20 15:32:16.599 250022 DEBUG nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] skipping disk for instance-000000d0 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 15:32:16 compute-0 nova_compute[250018]: 2026-01-20 15:32:16.599 250022 DEBUG nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] skipping disk for instance-000000d0 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 15:32:16 compute-0 nova_compute[250018]: 2026-01-20 15:32:16.740 250022 WARNING nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:32:16 compute-0 nova_compute[250018]: 2026-01-20 15:32:16.741 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4026MB free_disk=20.96738052368164GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 15:32:16 compute-0 nova_compute[250018]: 2026-01-20 15:32:16.741 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:32:16 compute-0 nova_compute[250018]: 2026-01-20 15:32:16.742 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:32:16 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:32:16 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:32:16 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:32:16.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:32:16 compute-0 nova_compute[250018]: 2026-01-20 15:32:16.823 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:32:16 compute-0 ceph-mon[74360]: pgmap v3302: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 151 KiB/s wr, 74 op/s
Jan 20 15:32:16 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/2230532106' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:32:16 compute-0 nova_compute[250018]: 2026-01-20 15:32:16.874 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Instance 00832e9b-36fa-4d41-be21-ca6b05cd493f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 15:32:16 compute-0 nova_compute[250018]: 2026-01-20 15:32:16.875 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 15:32:16 compute-0 nova_compute[250018]: 2026-01-20 15:32:16.876 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 15:32:16 compute-0 nova_compute[250018]: 2026-01-20 15:32:16.948 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:32:17 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3303: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 20 15:32:17 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:32:17 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1791023127' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:32:17 compute-0 nova_compute[250018]: 2026-01-20 15:32:17.385 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:32:17 compute-0 nova_compute[250018]: 2026-01-20 15:32:17.392 250022 DEBUG nova.compute.provider_tree [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:32:17 compute-0 nova_compute[250018]: 2026-01-20 15:32:17.407 250022 DEBUG nova.scheduler.client.report [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:32:17 compute-0 nova_compute[250018]: 2026-01-20 15:32:17.425 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 15:32:17 compute-0 nova_compute[250018]: 2026-01-20 15:32:17.426 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.684s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:32:17 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:32:17 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:32:17 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:32:17.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:32:17 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/1791023127' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:32:18 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:32:18 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:32:18 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:32:18 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:32:18.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:32:18 compute-0 ceph-mon[74360]: pgmap v3303: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 20 15:32:19 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3304: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 20 15:32:19 compute-0 nova_compute[250018]: 2026-01-20 15:32:19.427 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:32:19 compute-0 nova_compute[250018]: 2026-01-20 15:32:19.427 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:32:19 compute-0 nova_compute[250018]: 2026-01-20 15:32:19.428 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 15:32:19 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:32:19 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:32:19 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:32:19.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:32:19 compute-0 ceph-mon[74360]: pgmap v3304: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 20 15:32:20 compute-0 nova_compute[250018]: 2026-01-20 15:32:20.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:32:20 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:32:20 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:32:20 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:32:20.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:32:20 compute-0 nova_compute[250018]: 2026-01-20 15:32:20.798 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:32:21 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3305: 321 pgs: 321 active+clean; 168 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 488 KiB/s wr, 84 op/s
Jan 20 15:32:21 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:32:21 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:32:21 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:32:21.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:32:21 compute-0 ovn_controller[148666]: 2026-01-20T15:32:21Z|00104|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e7:ba:22 10.100.0.7
Jan 20 15:32:21 compute-0 ovn_controller[148666]: 2026-01-20T15:32:21Z|00105|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e7:ba:22 10.100.0.7
Jan 20 15:32:21 compute-0 nova_compute[250018]: 2026-01-20 15:32:21.826 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:32:22 compute-0 ceph-mon[74360]: pgmap v3305: 321 pgs: 321 active+clean; 168 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 488 KiB/s wr, 84 op/s
Jan 20 15:32:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:32:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:32:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:32:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:32:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:32:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:32:22 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:32:22 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:32:22 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:32:22.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:32:23 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3306: 321 pgs: 321 active+clean; 168 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 203 KiB/s rd, 475 KiB/s wr, 18 op/s
Jan 20 15:32:23 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:32:23 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:32:23 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:32:23 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:32:23.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:32:24 compute-0 ceph-mon[74360]: pgmap v3306: 321 pgs: 321 active+clean; 168 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 203 KiB/s rd, 475 KiB/s wr, 18 op/s
Jan 20 15:32:24 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:32:24 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:32:24 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:32:24.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:32:25 compute-0 nova_compute[250018]: 2026-01-20 15:32:25.044 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:32:25 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3307: 321 pgs: 321 active+clean; 199 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 528 KiB/s rd, 2.1 MiB/s wr, 68 op/s
Jan 20 15:32:25 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:32:25 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:32:25 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:32:25.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:32:25 compute-0 nova_compute[250018]: 2026-01-20 15:32:25.839 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:32:26 compute-0 ceph-mon[74360]: pgmap v3307: 321 pgs: 321 active+clean; 199 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 528 KiB/s rd, 2.1 MiB/s wr, 68 op/s
Jan 20 15:32:26 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:32:26 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:32:26 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:32:26.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:32:26 compute-0 nova_compute[250018]: 2026-01-20 15:32:26.828 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:32:27 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3308: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 390 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 20 15:32:27 compute-0 nova_compute[250018]: 2026-01-20 15:32:27.357 250022 INFO nova.compute.manager [None req-3af96364-8c0a-440c-9b39-2957ba3dd6d5 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 00832e9b-36fa-4d41-be21-ca6b05cd493f] Get console output
Jan 20 15:32:27 compute-0 nova_compute[250018]: 2026-01-20 15:32:27.363 331017 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Jan 20 15:32:27 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:32:27 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:32:27 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:32:27.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:32:28 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:32:28 compute-0 ceph-mon[74360]: pgmap v3308: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 390 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 20 15:32:28 compute-0 sudo[385485]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:32:28 compute-0 sudo[385485]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:32:28 compute-0 sudo[385485]: pam_unix(sudo:session): session closed for user root
Jan 20 15:32:28 compute-0 sudo[385510]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:32:28 compute-0 sudo[385510]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:32:28 compute-0 sudo[385510]: pam_unix(sudo:session): session closed for user root
Jan 20 15:32:28 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:32:28 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:32:28 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:32:28.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:32:29 compute-0 nova_compute[250018]: 2026-01-20 15:32:29.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:32:29 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3309: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 390 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 20 15:32:29 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:32:29 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:32:29 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:32:29.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:32:30 compute-0 ceph-mon[74360]: pgmap v3309: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 390 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 20 15:32:30 compute-0 podman[385537]: 2026-01-20 15:32:30.452517637 +0000 UTC m=+0.048331769 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 20 15:32:30 compute-0 podman[385536]: 2026-01-20 15:32:30.537306707 +0000 UTC m=+0.134734233 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 20 15:32:30 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:32:30 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:32:30 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:32:30.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:32:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:32:30.805 160071 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:32:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:32:30.806 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:32:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:32:30.807 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:32:30 compute-0 nova_compute[250018]: 2026-01-20 15:32:30.840 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:32:31 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3310: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 390 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 20 15:32:31 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:32:31 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:32:31 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:32:31.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:32:31 compute-0 nova_compute[250018]: 2026-01-20 15:32:31.830 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:32:32 compute-0 ceph-mon[74360]: pgmap v3310: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 390 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 20 15:32:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:32:32.510 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=78, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=77) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:32:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:32:32.511 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 15:32:32 compute-0 nova_compute[250018]: 2026-01-20 15:32:32.510 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:32:32 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:32:32 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:32:32 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:32:32.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:32:33 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3311: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 352 KiB/s rd, 1.7 MiB/s wr, 53 op/s
Jan 20 15:32:33 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:32:33 compute-0 ceph-mon[74360]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #168. Immutable memtables: 0.
Jan 20 15:32:33 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:32:33.311738) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 15:32:33 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:856] [default] [JOB 103] Flushing memtable with next log file: 168
Jan 20 15:32:33 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768923153311864, "job": 103, "event": "flush_started", "num_memtables": 1, "num_entries": 1143, "num_deletes": 256, "total_data_size": 1840873, "memory_usage": 1873088, "flush_reason": "Manual Compaction"}
Jan 20 15:32:33 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:885] [default] [JOB 103] Level-0 flush table #169: started
Jan 20 15:32:33 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768923153323545, "cf_name": "default", "job": 103, "event": "table_file_creation", "file_number": 169, "file_size": 1819587, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 73345, "largest_seqno": 74486, "table_properties": {"data_size": 1814121, "index_size": 2861, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 11617, "raw_average_key_size": 19, "raw_value_size": 1803194, "raw_average_value_size": 3025, "num_data_blocks": 127, "num_entries": 596, "num_filter_entries": 596, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768923045, "oldest_key_time": 1768923045, "file_creation_time": 1768923153, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1688fb6f-f397-4aac-a0e8-874ff91d97a7", "db_session_id": "5V3N2TVXYZBCXP55EZHK", "orig_file_number": 169, "seqno_to_time_mapping": "N/A"}}
Jan 20 15:32:33 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 103] Flush lasted 11803 microseconds, and 4801 cpu microseconds.
Jan 20 15:32:33 compute-0 ceph-mon[74360]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 15:32:33 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:32:33.323574) [db/flush_job.cc:967] [default] [JOB 103] Level-0 flush table #169: 1819587 bytes OK
Jan 20 15:32:33 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:32:33.323589) [db/memtable_list.cc:519] [default] Level-0 commit table #169 started
Jan 20 15:32:33 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:32:33.325060) [db/memtable_list.cc:722] [default] Level-0 commit table #169: memtable #1 done
Jan 20 15:32:33 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:32:33.325072) EVENT_LOG_v1 {"time_micros": 1768923153325069, "job": 103, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 15:32:33 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:32:33.325086) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 15:32:33 compute-0 ceph-mon[74360]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 103] Try to delete WAL files size 1835760, prev total WAL file size 1835760, number of live WAL files 2.
Jan 20 15:32:33 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000165.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:32:33 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:32:33.325715) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033303232' seq:72057594037927935, type:22 .. '6C6F676D0033323734' seq:0, type:0; will stop at (end)
Jan 20 15:32:33 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 104] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 15:32:33 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 103 Base level 0, inputs: [169(1776KB)], [167(10MB)]
Jan 20 15:32:33 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768923153325807, "job": 104, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [169], "files_L6": [167], "score": -1, "input_data_size": 13100248, "oldest_snapshot_seqno": -1}
Jan 20 15:32:33 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 104] Generated table #170: 10116 keys, 12968978 bytes, temperature: kUnknown
Jan 20 15:32:33 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768923153402862, "cf_name": "default", "job": 104, "event": "table_file_creation", "file_number": 170, "file_size": 12968978, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12903908, "index_size": 38653, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25349, "raw_key_size": 267226, "raw_average_key_size": 26, "raw_value_size": 12726949, "raw_average_value_size": 1258, "num_data_blocks": 1473, "num_entries": 10116, "num_filter_entries": 10116, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768917305, "oldest_key_time": 0, "file_creation_time": 1768923153, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1688fb6f-f397-4aac-a0e8-874ff91d97a7", "db_session_id": "5V3N2TVXYZBCXP55EZHK", "orig_file_number": 170, "seqno_to_time_mapping": "N/A"}}
Jan 20 15:32:33 compute-0 ceph-mon[74360]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 15:32:33 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:32:33.403716) [db/compaction/compaction_job.cc:1663] [default] [JOB 104] Compacted 1@0 + 1@6 files to L6 => 12968978 bytes
Jan 20 15:32:33 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:32:33.405301) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 169.1 rd, 167.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 10.8 +0.0 blob) out(12.4 +0.0 blob), read-write-amplify(14.3) write-amplify(7.1) OK, records in: 10641, records dropped: 525 output_compression: NoCompression
Jan 20 15:32:33 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:32:33.405346) EVENT_LOG_v1 {"time_micros": 1768923153405328, "job": 104, "event": "compaction_finished", "compaction_time_micros": 77483, "compaction_time_cpu_micros": 33515, "output_level": 6, "num_output_files": 1, "total_output_size": 12968978, "num_input_records": 10641, "num_output_records": 10116, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 15:32:33 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000169.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:32:33 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768923153406169, "job": 104, "event": "table_file_deletion", "file_number": 169}
Jan 20 15:32:33 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000167.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:32:33 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768923153409496, "job": 104, "event": "table_file_deletion", "file_number": 167}
Jan 20 15:32:33 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:32:33.325572) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:32:33 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:32:33.409637) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:32:33 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:32:33.409642) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:32:33 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:32:33.409644) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:32:33 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:32:33.409645) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:32:33 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:32:33.409647) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:32:33 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:32:33 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:32:33 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:32:33.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:32:33 compute-0 nova_compute[250018]: 2026-01-20 15:32:33.694 250022 DEBUG oslo_concurrency.lockutils [None req-f9d7379c-f362-4fd2-96e4-83b805a38df8 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Acquiring lock "interface-00832e9b-36fa-4d41-be21-ca6b05cd493f-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:32:33 compute-0 nova_compute[250018]: 2026-01-20 15:32:33.694 250022 DEBUG oslo_concurrency.lockutils [None req-f9d7379c-f362-4fd2-96e4-83b805a38df8 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "interface-00832e9b-36fa-4d41-be21-ca6b05cd493f-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:32:33 compute-0 nova_compute[250018]: 2026-01-20 15:32:33.695 250022 DEBUG nova.objects.instance [None req-f9d7379c-f362-4fd2-96e4-83b805a38df8 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lazy-loading 'flavor' on Instance uuid 00832e9b-36fa-4d41-be21-ca6b05cd493f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:32:34 compute-0 nova_compute[250018]: 2026-01-20 15:32:34.081 250022 DEBUG nova.objects.instance [None req-f9d7379c-f362-4fd2-96e4-83b805a38df8 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lazy-loading 'pci_requests' on Instance uuid 00832e9b-36fa-4d41-be21-ca6b05cd493f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:32:34 compute-0 nova_compute[250018]: 2026-01-20 15:32:34.093 250022 DEBUG nova.network.neutron [None req-f9d7379c-f362-4fd2-96e4-83b805a38df8 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 00832e9b-36fa-4d41-be21-ca6b05cd493f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 15:32:34 compute-0 nova_compute[250018]: 2026-01-20 15:32:34.331 250022 DEBUG nova.policy [None req-f9d7379c-f362-4fd2-96e4-83b805a38df8 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5338aa65dc0e4326a66ce79053787f14', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3168f57421fb49bfb94b85daedd1fe7d', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 20 15:32:34 compute-0 ceph-mon[74360]: pgmap v3311: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 352 KiB/s rd, 1.7 MiB/s wr, 53 op/s
Jan 20 15:32:34 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:32:34 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:32:34 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:32:34.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:32:34 compute-0 nova_compute[250018]: 2026-01-20 15:32:34.821 250022 DEBUG nova.network.neutron [None req-f9d7379c-f362-4fd2-96e4-83b805a38df8 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 00832e9b-36fa-4d41-be21-ca6b05cd493f] Successfully created port: cecb01b7-0ffd-44bf-bc66-2bdc91b95936 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 20 15:32:35 compute-0 nova_compute[250018]: 2026-01-20 15:32:35.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:32:35 compute-0 nova_compute[250018]: 2026-01-20 15:32:35.052 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 15:32:35 compute-0 nova_compute[250018]: 2026-01-20 15:32:35.052 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 15:32:35 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3312: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 352 KiB/s rd, 1.7 MiB/s wr, 54 op/s
Jan 20 15:32:35 compute-0 nova_compute[250018]: 2026-01-20 15:32:35.276 250022 INFO nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] [instance: 00832e9b-36fa-4d41-be21-ca6b05cd493f] Updating ports in neutron
Jan 20 15:32:35 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:32:35 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:32:35 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:32:35.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:32:35 compute-0 nova_compute[250018]: 2026-01-20 15:32:35.526 250022 INFO nova.network.neutron [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] [instance: 00832e9b-36fa-4d41-be21-ca6b05cd493f] Updating port cecb01b7-0ffd-44bf-bc66-2bdc91b95936 with attributes {'binding:host_id': 'compute-0.ctlplane.example.com', 'device_owner': 'compute:nova'}
Jan 20 15:32:35 compute-0 nova_compute[250018]: 2026-01-20 15:32:35.610 250022 DEBUG nova.network.neutron [None req-f9d7379c-f362-4fd2-96e4-83b805a38df8 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 00832e9b-36fa-4d41-be21-ca6b05cd493f] Successfully updated port: cecb01b7-0ffd-44bf-bc66-2bdc91b95936 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 20 15:32:35 compute-0 nova_compute[250018]: 2026-01-20 15:32:35.632 250022 DEBUG oslo_concurrency.lockutils [None req-f9d7379c-f362-4fd2-96e4-83b805a38df8 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Acquiring lock "refresh_cache-00832e9b-36fa-4d41-be21-ca6b05cd493f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:32:35 compute-0 nova_compute[250018]: 2026-01-20 15:32:35.632 250022 DEBUG oslo_concurrency.lockutils [None req-f9d7379c-f362-4fd2-96e4-83b805a38df8 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Acquired lock "refresh_cache-00832e9b-36fa-4d41-be21-ca6b05cd493f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:32:35 compute-0 nova_compute[250018]: 2026-01-20 15:32:35.632 250022 DEBUG nova.network.neutron [None req-f9d7379c-f362-4fd2-96e4-83b805a38df8 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 00832e9b-36fa-4d41-be21-ca6b05cd493f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 15:32:35 compute-0 nova_compute[250018]: 2026-01-20 15:32:35.722 250022 DEBUG nova.compute.manager [req-89856f14-89cc-4b67-8692-1e8233ddba0a req-a753d67e-7621-4197-b61f-2382d1cab1f0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 00832e9b-36fa-4d41-be21-ca6b05cd493f] Received event network-changed-cecb01b7-0ffd-44bf-bc66-2bdc91b95936 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:32:35 compute-0 nova_compute[250018]: 2026-01-20 15:32:35.723 250022 DEBUG nova.compute.manager [req-89856f14-89cc-4b67-8692-1e8233ddba0a req-a753d67e-7621-4197-b61f-2382d1cab1f0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 00832e9b-36fa-4d41-be21-ca6b05cd493f] Refreshing instance network info cache due to event network-changed-cecb01b7-0ffd-44bf-bc66-2bdc91b95936. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 15:32:35 compute-0 nova_compute[250018]: 2026-01-20 15:32:35.723 250022 DEBUG oslo_concurrency.lockutils [req-89856f14-89cc-4b67-8692-1e8233ddba0a req-a753d67e-7621-4197-b61f-2382d1cab1f0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-00832e9b-36fa-4d41-be21-ca6b05cd493f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:32:35 compute-0 nova_compute[250018]: 2026-01-20 15:32:35.843 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:32:35 compute-0 nova_compute[250018]: 2026-01-20 15:32:35.845 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "refresh_cache-00832e9b-36fa-4d41-be21-ca6b05cd493f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:32:35 compute-0 nova_compute[250018]: 2026-01-20 15:32:35.937 250022 DEBUG nova.compute.manager [req-cb10ec80-90cc-45c3-90fc-416e8c59015d req-5e657704-1020-4eaf-8657-260f85a350de 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 00832e9b-36fa-4d41-be21-ca6b05cd493f] Received event network-changed-cecb01b7-0ffd-44bf-bc66-2bdc91b95936 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:32:35 compute-0 nova_compute[250018]: 2026-01-20 15:32:35.938 250022 DEBUG nova.compute.manager [req-cb10ec80-90cc-45c3-90fc-416e8c59015d req-5e657704-1020-4eaf-8657-260f85a350de 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 00832e9b-36fa-4d41-be21-ca6b05cd493f] Refreshing instance network info cache due to event network-changed-cecb01b7-0ffd-44bf-bc66-2bdc91b95936. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 15:32:35 compute-0 nova_compute[250018]: 2026-01-20 15:32:35.938 250022 DEBUG oslo_concurrency.lockutils [req-cb10ec80-90cc-45c3-90fc-416e8c59015d req-5e657704-1020-4eaf-8657-260f85a350de 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-00832e9b-36fa-4d41-be21-ca6b05cd493f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:32:36 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:32:36 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:32:36 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:32:36.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:32:36 compute-0 nova_compute[250018]: 2026-01-20 15:32:36.831 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:32:36 compute-0 ceph-mon[74360]: pgmap v3312: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 352 KiB/s rd, 1.7 MiB/s wr, 54 op/s
Jan 20 15:32:37 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3313: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 27 KiB/s rd, 17 KiB/s wr, 3 op/s
Jan 20 15:32:37 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:32:37 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:32:37 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:32:37.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:32:37 compute-0 ceph-mon[74360]: pgmap v3313: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 27 KiB/s rd, 17 KiB/s wr, 3 op/s
Jan 20 15:32:38 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:32:38 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:32:38 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:32:38 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:32:38.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:32:39 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3314: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 682 B/s rd, 14 KiB/s wr, 0 op/s
Jan 20 15:32:39 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:32:39 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:32:39 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:32:39.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:32:39 compute-0 nova_compute[250018]: 2026-01-20 15:32:39.757 250022 DEBUG nova.network.neutron [None req-f9d7379c-f362-4fd2-96e4-83b805a38df8 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 00832e9b-36fa-4d41-be21-ca6b05cd493f] Updating instance_info_cache with network_info: [{"id": "3d9d0177-0374-4df6-b536-6a369c25c060", "address": "fa:16:3e:e7:ba:22", "network": {"id": "2ec0e47d-339a-4483-a67a-b500a294d021", "bridge": "br-int", "label": "tempest-network-smoke--347448443", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.220", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3d9d0177-03", "ovs_interfaceid": "3d9d0177-0374-4df6-b536-6a369c25c060", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "cecb01b7-0ffd-44bf-bc66-2bdc91b95936", "address": "fa:16:3e:66:b2:7c", "network": {"id": "e1610a22-2f29-4495-85e7-ab2081f73701", "bridge": "br-int", "label": "tempest-network-smoke--313389391", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.19", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcecb01b7-0f", "ovs_interfaceid": "cecb01b7-0ffd-44bf-bc66-2bdc91b95936", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:32:39 compute-0 nova_compute[250018]: 2026-01-20 15:32:39.790 250022 DEBUG oslo_concurrency.lockutils [None req-f9d7379c-f362-4fd2-96e4-83b805a38df8 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Releasing lock "refresh_cache-00832e9b-36fa-4d41-be21-ca6b05cd493f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:32:39 compute-0 nova_compute[250018]: 2026-01-20 15:32:39.792 250022 DEBUG oslo_concurrency.lockutils [req-89856f14-89cc-4b67-8692-1e8233ddba0a req-a753d67e-7621-4197-b61f-2382d1cab1f0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-00832e9b-36fa-4d41-be21-ca6b05cd493f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:32:39 compute-0 nova_compute[250018]: 2026-01-20 15:32:39.792 250022 DEBUG nova.network.neutron [req-89856f14-89cc-4b67-8692-1e8233ddba0a req-a753d67e-7621-4197-b61f-2382d1cab1f0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 00832e9b-36fa-4d41-be21-ca6b05cd493f] Refreshing network info cache for port cecb01b7-0ffd-44bf-bc66-2bdc91b95936 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 15:32:39 compute-0 nova_compute[250018]: 2026-01-20 15:32:39.797 250022 DEBUG nova.virt.libvirt.vif [None req-f9d7379c-f362-4fd2-96e4-83b805a38df8 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T15:31:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-294660269',display_name='tempest-TestNetworkBasicOps-server-294660269',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-294660269',id=208,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBM/wcy1u/OW984noISwEOwb5LNHq9kMm6H4gDYMOeRNg80CMD0xPaXGSJjRjqJjcmlGZ4ls4SDDoXkG2XEqy3wx1zeFMwT8eQgW0fV48l9Sd8ax0F4mvMlYvHLkKrxuyhw==',key_name='tempest-TestNetworkBasicOps-525328153',keypairs=<?>,launch_index=0,launched_at=2026-01-20T15:32:08Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='3168f57421fb49bfb94b85daedd1fe7d',ramdisk_id='',reservation_id='r-yd9tjeqi',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-807695970',owner_user_name='tempest-TestNetworkBasicOps-807695970-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T15:32:08Z,user_data=None,user_id='5338aa65dc0e4326a66ce79053787f14',uuid=00832e9b-36fa-4d41-be21-ca6b05cd493f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "cecb01b7-0ffd-44bf-bc66-2bdc91b95936", "address": "fa:16:3e:66:b2:7c", "network": {"id": "e1610a22-2f29-4495-85e7-ab2081f73701", "bridge": "br-int", "label": "tempest-network-smoke--313389391", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.19", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcecb01b7-0f", "ovs_interfaceid": "cecb01b7-0ffd-44bf-bc66-2bdc91b95936", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 15:32:39 compute-0 nova_compute[250018]: 2026-01-20 15:32:39.797 250022 DEBUG nova.network.os_vif_util [None req-f9d7379c-f362-4fd2-96e4-83b805a38df8 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Converting VIF {"id": "cecb01b7-0ffd-44bf-bc66-2bdc91b95936", "address": "fa:16:3e:66:b2:7c", "network": {"id": "e1610a22-2f29-4495-85e7-ab2081f73701", "bridge": "br-int", "label": "tempest-network-smoke--313389391", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.19", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcecb01b7-0f", "ovs_interfaceid": "cecb01b7-0ffd-44bf-bc66-2bdc91b95936", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:32:39 compute-0 nova_compute[250018]: 2026-01-20 15:32:39.798 250022 DEBUG nova.network.os_vif_util [None req-f9d7379c-f362-4fd2-96e4-83b805a38df8 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:66:b2:7c,bridge_name='br-int',has_traffic_filtering=True,id=cecb01b7-0ffd-44bf-bc66-2bdc91b95936,network=Network(e1610a22-2f29-4495-85e7-ab2081f73701),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcecb01b7-0f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:32:39 compute-0 nova_compute[250018]: 2026-01-20 15:32:39.798 250022 DEBUG os_vif [None req-f9d7379c-f362-4fd2-96e4-83b805a38df8 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:66:b2:7c,bridge_name='br-int',has_traffic_filtering=True,id=cecb01b7-0ffd-44bf-bc66-2bdc91b95936,network=Network(e1610a22-2f29-4495-85e7-ab2081f73701),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcecb01b7-0f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 15:32:39 compute-0 nova_compute[250018]: 2026-01-20 15:32:39.798 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:32:39 compute-0 nova_compute[250018]: 2026-01-20 15:32:39.799 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:32:39 compute-0 nova_compute[250018]: 2026-01-20 15:32:39.799 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:32:39 compute-0 nova_compute[250018]: 2026-01-20 15:32:39.802 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:32:39 compute-0 nova_compute[250018]: 2026-01-20 15:32:39.802 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcecb01b7-0f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:32:39 compute-0 nova_compute[250018]: 2026-01-20 15:32:39.803 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapcecb01b7-0f, col_values=(('external_ids', {'iface-id': 'cecb01b7-0ffd-44bf-bc66-2bdc91b95936', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:66:b2:7c', 'vm-uuid': '00832e9b-36fa-4d41-be21-ca6b05cd493f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:32:39 compute-0 nova_compute[250018]: 2026-01-20 15:32:39.804 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:32:39 compute-0 NetworkManager[48960]: <info>  [1768923159.8056] manager: (tapcecb01b7-0f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/369)
Jan 20 15:32:39 compute-0 nova_compute[250018]: 2026-01-20 15:32:39.806 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 15:32:39 compute-0 nova_compute[250018]: 2026-01-20 15:32:39.815 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:32:39 compute-0 nova_compute[250018]: 2026-01-20 15:32:39.816 250022 INFO os_vif [None req-f9d7379c-f362-4fd2-96e4-83b805a38df8 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:66:b2:7c,bridge_name='br-int',has_traffic_filtering=True,id=cecb01b7-0ffd-44bf-bc66-2bdc91b95936,network=Network(e1610a22-2f29-4495-85e7-ab2081f73701),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcecb01b7-0f')
Jan 20 15:32:39 compute-0 nova_compute[250018]: 2026-01-20 15:32:39.816 250022 DEBUG nova.virt.libvirt.vif [None req-f9d7379c-f362-4fd2-96e4-83b805a38df8 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T15:31:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-294660269',display_name='tempest-TestNetworkBasicOps-server-294660269',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-294660269',id=208,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBM/wcy1u/OW984noISwEOwb5LNHq9kMm6H4gDYMOeRNg80CMD0xPaXGSJjRjqJjcmlGZ4ls4SDDoXkG2XEqy3wx1zeFMwT8eQgW0fV48l9Sd8ax0F4mvMlYvHLkKrxuyhw==',key_name='tempest-TestNetworkBasicOps-525328153',keypairs=<?>,launch_index=0,launched_at=2026-01-20T15:32:08Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='3168f57421fb49bfb94b85daedd1fe7d',ramdisk_id='',reservation_id='r-yd9tjeqi',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-807695970',owner_user_name='tempest-TestNetworkBasicOps-807695970-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T15:32:08Z,user_data=None,user_id='5338aa65dc0e4326a66ce79053787f14',uuid=00832e9b-36fa-4d41-be21-ca6b05cd493f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "cecb01b7-0ffd-44bf-bc66-2bdc91b95936", "address": "fa:16:3e:66:b2:7c", "network": {"id": "e1610a22-2f29-4495-85e7-ab2081f73701", "bridge": "br-int", "label": "tempest-network-smoke--313389391", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.19", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcecb01b7-0f", "ovs_interfaceid": "cecb01b7-0ffd-44bf-bc66-2bdc91b95936", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 15:32:39 compute-0 nova_compute[250018]: 2026-01-20 15:32:39.817 250022 DEBUG nova.network.os_vif_util [None req-f9d7379c-f362-4fd2-96e4-83b805a38df8 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Converting VIF {"id": "cecb01b7-0ffd-44bf-bc66-2bdc91b95936", "address": "fa:16:3e:66:b2:7c", "network": {"id": "e1610a22-2f29-4495-85e7-ab2081f73701", "bridge": "br-int", "label": "tempest-network-smoke--313389391", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.19", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcecb01b7-0f", "ovs_interfaceid": "cecb01b7-0ffd-44bf-bc66-2bdc91b95936", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:32:39 compute-0 nova_compute[250018]: 2026-01-20 15:32:39.817 250022 DEBUG nova.network.os_vif_util [None req-f9d7379c-f362-4fd2-96e4-83b805a38df8 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:66:b2:7c,bridge_name='br-int',has_traffic_filtering=True,id=cecb01b7-0ffd-44bf-bc66-2bdc91b95936,network=Network(e1610a22-2f29-4495-85e7-ab2081f73701),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcecb01b7-0f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:32:39 compute-0 nova_compute[250018]: 2026-01-20 15:32:39.820 250022 DEBUG nova.virt.libvirt.guest [None req-f9d7379c-f362-4fd2-96e4-83b805a38df8 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] attach device xml: <interface type="ethernet">
Jan 20 15:32:39 compute-0 nova_compute[250018]:   <mac address="fa:16:3e:66:b2:7c"/>
Jan 20 15:32:39 compute-0 nova_compute[250018]:   <model type="virtio"/>
Jan 20 15:32:39 compute-0 nova_compute[250018]:   <driver name="vhost" rx_queue_size="512"/>
Jan 20 15:32:39 compute-0 nova_compute[250018]:   <mtu size="1442"/>
Jan 20 15:32:39 compute-0 nova_compute[250018]:   <target dev="tapcecb01b7-0f"/>
Jan 20 15:32:39 compute-0 nova_compute[250018]: </interface>
Jan 20 15:32:39 compute-0 nova_compute[250018]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Jan 20 15:32:39 compute-0 kernel: tapcecb01b7-0f: entered promiscuous mode
Jan 20 15:32:39 compute-0 NetworkManager[48960]: <info>  [1768923159.8339] manager: (tapcecb01b7-0f): new Tun device (/org/freedesktop/NetworkManager/Devices/370)
Jan 20 15:32:39 compute-0 nova_compute[250018]: 2026-01-20 15:32:39.835 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:32:39 compute-0 ovn_controller[148666]: 2026-01-20T15:32:39Z|00763|binding|INFO|Claiming lport cecb01b7-0ffd-44bf-bc66-2bdc91b95936 for this chassis.
Jan 20 15:32:39 compute-0 ovn_controller[148666]: 2026-01-20T15:32:39Z|00764|binding|INFO|cecb01b7-0ffd-44bf-bc66-2bdc91b95936: Claiming fa:16:3e:66:b2:7c 10.100.0.19
Jan 20 15:32:39 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:32:39.843 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:66:b2:7c 10.100.0.19'], port_security=['fa:16:3e:66:b2:7c 10.100.0.19'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.19/28', 'neutron:device_id': '00832e9b-36fa-4d41-be21-ca6b05cd493f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e1610a22-2f29-4495-85e7-ab2081f73701', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3168f57421fb49bfb94b85daedd1fe7d', 'neutron:revision_number': '3', 'neutron:security_group_ids': 'aaa69ba6-9a27-441e-877e-2cd188322a42', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=733f71a0-4d98-4c07-b692-f20cf2a632ed, chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=cecb01b7-0ffd-44bf-bc66-2bdc91b95936) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:32:39 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:32:39.844 160071 INFO neutron.agent.ovn.metadata.agent [-] Port cecb01b7-0ffd-44bf-bc66-2bdc91b95936 in datapath e1610a22-2f29-4495-85e7-ab2081f73701 bound to our chassis
Jan 20 15:32:39 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:32:39.845 160071 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e1610a22-2f29-4495-85e7-ab2081f73701
Jan 20 15:32:39 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:32:39.859 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[3e87301c-00ba-4c97-9d97-46d8c355264a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:32:39 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:32:39.860 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape1610a22-21 in ovnmeta-e1610a22-2f29-4495-85e7-ab2081f73701 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 15:32:39 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:32:39.863 257604 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape1610a22-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 15:32:39 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:32:39.863 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[91065658-c84d-49c6-9e3d-8c4d5b5da994]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:32:39 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:32:39.865 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[cf369e04-4a36-43c8-9ab7-c7c21b0758d1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:32:39 compute-0 nova_compute[250018]: 2026-01-20 15:32:39.873 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:32:39 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:32:39.878 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[4d1a1dcb-4102-42cf-8da8-8892d75b1c28]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:32:39 compute-0 ovn_controller[148666]: 2026-01-20T15:32:39Z|00765|binding|INFO|Setting lport cecb01b7-0ffd-44bf-bc66-2bdc91b95936 ovn-installed in OVS
Jan 20 15:32:39 compute-0 ovn_controller[148666]: 2026-01-20T15:32:39Z|00766|binding|INFO|Setting lport cecb01b7-0ffd-44bf-bc66-2bdc91b95936 up in Southbound
Jan 20 15:32:39 compute-0 systemd-udevd[385589]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 15:32:39 compute-0 nova_compute[250018]: 2026-01-20 15:32:39.882 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:32:39 compute-0 NetworkManager[48960]: <info>  [1768923159.8974] device (tapcecb01b7-0f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 15:32:39 compute-0 NetworkManager[48960]: <info>  [1768923159.8991] device (tapcecb01b7-0f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 15:32:39 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:32:39.906 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[75fdd163-af0c-4721-8a6a-00f5d54ec64f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:32:39 compute-0 nova_compute[250018]: 2026-01-20 15:32:39.918 250022 DEBUG nova.virt.libvirt.driver [None req-f9d7379c-f362-4fd2-96e4-83b805a38df8 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:32:39 compute-0 nova_compute[250018]: 2026-01-20 15:32:39.918 250022 DEBUG nova.virt.libvirt.driver [None req-f9d7379c-f362-4fd2-96e4-83b805a38df8 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:32:39 compute-0 nova_compute[250018]: 2026-01-20 15:32:39.918 250022 DEBUG nova.virt.libvirt.driver [None req-f9d7379c-f362-4fd2-96e4-83b805a38df8 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] No VIF found with MAC fa:16:3e:e7:ba:22, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 15:32:39 compute-0 nova_compute[250018]: 2026-01-20 15:32:39.918 250022 DEBUG nova.virt.libvirt.driver [None req-f9d7379c-f362-4fd2-96e4-83b805a38df8 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] No VIF found with MAC fa:16:3e:66:b2:7c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 15:32:39 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:32:39.939 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[c6458ec7-4a87-492f-8cb9-06064cbbf146]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:32:39 compute-0 nova_compute[250018]: 2026-01-20 15:32:39.941 250022 DEBUG nova.virt.libvirt.guest [None req-f9d7379c-f362-4fd2-96e4-83b805a38df8 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 15:32:39 compute-0 nova_compute[250018]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 15:32:39 compute-0 nova_compute[250018]:   <nova:name>tempest-TestNetworkBasicOps-server-294660269</nova:name>
Jan 20 15:32:39 compute-0 nova_compute[250018]:   <nova:creationTime>2026-01-20 15:32:39</nova:creationTime>
Jan 20 15:32:39 compute-0 nova_compute[250018]:   <nova:flavor name="m1.nano">
Jan 20 15:32:39 compute-0 nova_compute[250018]:     <nova:memory>128</nova:memory>
Jan 20 15:32:39 compute-0 nova_compute[250018]:     <nova:disk>1</nova:disk>
Jan 20 15:32:39 compute-0 nova_compute[250018]:     <nova:swap>0</nova:swap>
Jan 20 15:32:39 compute-0 nova_compute[250018]:     <nova:ephemeral>0</nova:ephemeral>
Jan 20 15:32:39 compute-0 nova_compute[250018]:     <nova:vcpus>1</nova:vcpus>
Jan 20 15:32:39 compute-0 nova_compute[250018]:   </nova:flavor>
Jan 20 15:32:39 compute-0 nova_compute[250018]:   <nova:owner>
Jan 20 15:32:39 compute-0 nova_compute[250018]:     <nova:user uuid="5338aa65dc0e4326a66ce79053787f14">tempest-TestNetworkBasicOps-807695970-project-member</nova:user>
Jan 20 15:32:39 compute-0 nova_compute[250018]:     <nova:project uuid="3168f57421fb49bfb94b85daedd1fe7d">tempest-TestNetworkBasicOps-807695970</nova:project>
Jan 20 15:32:39 compute-0 nova_compute[250018]:   </nova:owner>
Jan 20 15:32:39 compute-0 nova_compute[250018]:   <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 15:32:39 compute-0 nova_compute[250018]:   <nova:ports>
Jan 20 15:32:39 compute-0 nova_compute[250018]:     <nova:port uuid="3d9d0177-0374-4df6-b536-6a369c25c060">
Jan 20 15:32:39 compute-0 nova_compute[250018]:       <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 20 15:32:39 compute-0 nova_compute[250018]:     </nova:port>
Jan 20 15:32:39 compute-0 nova_compute[250018]:     <nova:port uuid="cecb01b7-0ffd-44bf-bc66-2bdc91b95936">
Jan 20 15:32:39 compute-0 nova_compute[250018]:       <nova:ip type="fixed" address="10.100.0.19" ipVersion="4"/>
Jan 20 15:32:39 compute-0 nova_compute[250018]:     </nova:port>
Jan 20 15:32:39 compute-0 nova_compute[250018]:   </nova:ports>
Jan 20 15:32:39 compute-0 nova_compute[250018]: </nova:instance>
Jan 20 15:32:39 compute-0 nova_compute[250018]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Jan 20 15:32:39 compute-0 NetworkManager[48960]: <info>  [1768923159.9454] manager: (tape1610a22-20): new Veth device (/org/freedesktop/NetworkManager/Devices/371)
Jan 20 15:32:39 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:32:39.944 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[ac783ad6-4881-4f95-9ece-532ca3eb3a85]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:32:39 compute-0 nova_compute[250018]: 2026-01-20 15:32:39.970 250022 DEBUG oslo_concurrency.lockutils [None req-f9d7379c-f362-4fd2-96e4-83b805a38df8 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "interface-00832e9b-36fa-4d41-be21-ca6b05cd493f-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 6.275s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:32:39 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:32:39.974 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[b9aaf1b2-c4bc-4760-8f54-d9cb7bd4a397]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:32:39 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:32:39.977 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[ea2f55d7-0c5a-4ca7-ade7-440db6b41a14]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:32:39 compute-0 NetworkManager[48960]: <info>  [1768923159.9996] device (tape1610a22-20): carrier: link connected
Jan 20 15:32:40 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:32:40.005 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[9b8e0ea0-3459-45b9-9c00-2af5501e38f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:32:40 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:32:40.025 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[771c8fc3-cae0-4e09-aafe-1b31b964296b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape1610a22-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d5:09:61'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 241], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 910072, 'reachable_time': 27306, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 385614, 'error': None, 'target': 'ovnmeta-e1610a22-2f29-4495-85e7-ab2081f73701', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:32:40 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:32:40.041 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[27a4f5d8-6167-4f34-bd53-7114699baba3]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed5:961'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 910072, 'tstamp': 910072}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 385615, 'error': None, 'target': 'ovnmeta-e1610a22-2f29-4495-85e7-ab2081f73701', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:32:40 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:32:40.055 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[cef21f37-a620-4c83-a94c-a136908e70f0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape1610a22-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d5:09:61'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 241], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 910072, 'reachable_time': 27306, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 385616, 'error': None, 'target': 'ovnmeta-e1610a22-2f29-4495-85e7-ab2081f73701', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:32:40 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:32:40.080 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[3a2ecad1-8f05-4bbc-af0f-8140e678e7ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:32:40 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:32:40.134 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[017f4240-815c-4f78-b39a-211a589439a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:32:40 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:32:40.136 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape1610a22-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:32:40 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:32:40.136 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:32:40 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:32:40.137 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape1610a22-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:32:40 compute-0 nova_compute[250018]: 2026-01-20 15:32:40.139 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:32:40 compute-0 NetworkManager[48960]: <info>  [1768923160.1410] manager: (tape1610a22-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/372)
Jan 20 15:32:40 compute-0 kernel: tape1610a22-20: entered promiscuous mode
Jan 20 15:32:40 compute-0 nova_compute[250018]: 2026-01-20 15:32:40.143 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:32:40 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:32:40.145 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape1610a22-20, col_values=(('external_ids', {'iface-id': '6d7499a4-3049-4825-9ec9-301fdceff3a8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:32:40 compute-0 ovn_controller[148666]: 2026-01-20T15:32:40Z|00767|binding|INFO|Releasing lport 6d7499a4-3049-4825-9ec9-301fdceff3a8 from this chassis (sb_readonly=0)
Jan 20 15:32:40 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:32:40.159 160071 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e1610a22-2f29-4495-85e7-ab2081f73701.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e1610a22-2f29-4495-85e7-ab2081f73701.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 15:32:40 compute-0 nova_compute[250018]: 2026-01-20 15:32:40.159 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:32:40 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:32:40.160 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[f17b6a74-cb85-45fc-a003-0a26e01dc229]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:32:40 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:32:40.161 160071 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 15:32:40 compute-0 ovn_metadata_agent[160049]: global
Jan 20 15:32:40 compute-0 ovn_metadata_agent[160049]:     log         /dev/log local0 debug
Jan 20 15:32:40 compute-0 ovn_metadata_agent[160049]:     log-tag     haproxy-metadata-proxy-e1610a22-2f29-4495-85e7-ab2081f73701
Jan 20 15:32:40 compute-0 ovn_metadata_agent[160049]:     user        root
Jan 20 15:32:40 compute-0 ovn_metadata_agent[160049]:     group       root
Jan 20 15:32:40 compute-0 ovn_metadata_agent[160049]:     maxconn     1024
Jan 20 15:32:40 compute-0 ovn_metadata_agent[160049]:     pidfile     /var/lib/neutron/external/pids/e1610a22-2f29-4495-85e7-ab2081f73701.pid.haproxy
Jan 20 15:32:40 compute-0 ovn_metadata_agent[160049]:     daemon
Jan 20 15:32:40 compute-0 ovn_metadata_agent[160049]: 
Jan 20 15:32:40 compute-0 ovn_metadata_agent[160049]: defaults
Jan 20 15:32:40 compute-0 ovn_metadata_agent[160049]:     log global
Jan 20 15:32:40 compute-0 ovn_metadata_agent[160049]:     mode http
Jan 20 15:32:40 compute-0 ovn_metadata_agent[160049]:     option httplog
Jan 20 15:32:40 compute-0 ovn_metadata_agent[160049]:     option dontlognull
Jan 20 15:32:40 compute-0 ovn_metadata_agent[160049]:     option http-server-close
Jan 20 15:32:40 compute-0 ovn_metadata_agent[160049]:     option forwardfor
Jan 20 15:32:40 compute-0 ovn_metadata_agent[160049]:     retries                 3
Jan 20 15:32:40 compute-0 ovn_metadata_agent[160049]:     timeout http-request    30s
Jan 20 15:32:40 compute-0 ovn_metadata_agent[160049]:     timeout connect         30s
Jan 20 15:32:40 compute-0 ovn_metadata_agent[160049]:     timeout client          32s
Jan 20 15:32:40 compute-0 ovn_metadata_agent[160049]:     timeout server          32s
Jan 20 15:32:40 compute-0 ovn_metadata_agent[160049]:     timeout http-keep-alive 30s
Jan 20 15:32:40 compute-0 ovn_metadata_agent[160049]: 
Jan 20 15:32:40 compute-0 ovn_metadata_agent[160049]: 
Jan 20 15:32:40 compute-0 ovn_metadata_agent[160049]: listen listener
Jan 20 15:32:40 compute-0 ovn_metadata_agent[160049]:     bind 169.254.169.254:80
Jan 20 15:32:40 compute-0 ovn_metadata_agent[160049]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 15:32:40 compute-0 ovn_metadata_agent[160049]:     http-request add-header X-OVN-Network-ID e1610a22-2f29-4495-85e7-ab2081f73701
Jan 20 15:32:40 compute-0 ovn_metadata_agent[160049]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 15:32:40 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:32:40.161 160071 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e1610a22-2f29-4495-85e7-ab2081f73701', 'env', 'PROCESS_TAG=haproxy-e1610a22-2f29-4495-85e7-ab2081f73701', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e1610a22-2f29-4495-85e7-ab2081f73701.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 15:32:40 compute-0 ceph-mon[74360]: pgmap v3314: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 682 B/s rd, 14 KiB/s wr, 0 op/s
Jan 20 15:32:40 compute-0 podman[385649]: 2026-01-20 15:32:40.511435401 +0000 UTC m=+0.050953200 container create c490b071800694799e701ab59fb033d2bbda9be37bc2aae6edbc16605ff41849 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e1610a22-2f29-4495-85e7-ab2081f73701, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 20 15:32:40 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:32:40.514 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=367c1a2c-b16a-4828-ab5a-626bb50023b4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '78'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:32:40 compute-0 systemd[1]: Started libpod-conmon-c490b071800694799e701ab59fb033d2bbda9be37bc2aae6edbc16605ff41849.scope.
Jan 20 15:32:40 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:32:40 compute-0 podman[385649]: 2026-01-20 15:32:40.486512972 +0000 UTC m=+0.026030791 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 15:32:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78e66f18d24985eed188a470d8a263fcf112701d8f54051261cc1a4d4c1a6d94/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 15:32:40 compute-0 nova_compute[250018]: 2026-01-20 15:32:40.600 250022 DEBUG nova.compute.manager [req-7a49f303-d760-438f-b834-1efda03bcdf6 req-dcd14a6c-4d23-43cd-8523-23dae48e641f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 00832e9b-36fa-4d41-be21-ca6b05cd493f] Received event network-vif-plugged-cecb01b7-0ffd-44bf-bc66-2bdc91b95936 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:32:40 compute-0 nova_compute[250018]: 2026-01-20 15:32:40.601 250022 DEBUG oslo_concurrency.lockutils [req-7a49f303-d760-438f-b834-1efda03bcdf6 req-dcd14a6c-4d23-43cd-8523-23dae48e641f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "00832e9b-36fa-4d41-be21-ca6b05cd493f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:32:40 compute-0 nova_compute[250018]: 2026-01-20 15:32:40.601 250022 DEBUG oslo_concurrency.lockutils [req-7a49f303-d760-438f-b834-1efda03bcdf6 req-dcd14a6c-4d23-43cd-8523-23dae48e641f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "00832e9b-36fa-4d41-be21-ca6b05cd493f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:32:40 compute-0 nova_compute[250018]: 2026-01-20 15:32:40.601 250022 DEBUG oslo_concurrency.lockutils [req-7a49f303-d760-438f-b834-1efda03bcdf6 req-dcd14a6c-4d23-43cd-8523-23dae48e641f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "00832e9b-36fa-4d41-be21-ca6b05cd493f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:32:40 compute-0 nova_compute[250018]: 2026-01-20 15:32:40.601 250022 DEBUG nova.compute.manager [req-7a49f303-d760-438f-b834-1efda03bcdf6 req-dcd14a6c-4d23-43cd-8523-23dae48e641f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 00832e9b-36fa-4d41-be21-ca6b05cd493f] No waiting events found dispatching network-vif-plugged-cecb01b7-0ffd-44bf-bc66-2bdc91b95936 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:32:40 compute-0 podman[385649]: 2026-01-20 15:32:40.601803294 +0000 UTC m=+0.141321123 container init c490b071800694799e701ab59fb033d2bbda9be37bc2aae6edbc16605ff41849 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e1610a22-2f29-4495-85e7-ab2081f73701, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 15:32:40 compute-0 nova_compute[250018]: 2026-01-20 15:32:40.601 250022 WARNING nova.compute.manager [req-7a49f303-d760-438f-b834-1efda03bcdf6 req-dcd14a6c-4d23-43cd-8523-23dae48e641f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 00832e9b-36fa-4d41-be21-ca6b05cd493f] Received unexpected event network-vif-plugged-cecb01b7-0ffd-44bf-bc66-2bdc91b95936 for instance with vm_state active and task_state None.
Jan 20 15:32:40 compute-0 podman[385649]: 2026-01-20 15:32:40.606696178 +0000 UTC m=+0.146213977 container start c490b071800694799e701ab59fb033d2bbda9be37bc2aae6edbc16605ff41849 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e1610a22-2f29-4495-85e7-ab2081f73701, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 20 15:32:40 compute-0 neutron-haproxy-ovnmeta-e1610a22-2f29-4495-85e7-ab2081f73701[385664]: [NOTICE]   (385668) : New worker (385670) forked
Jan 20 15:32:40 compute-0 neutron-haproxy-ovnmeta-e1610a22-2f29-4495-85e7-ab2081f73701[385664]: [NOTICE]   (385668) : Loading success.
Jan 20 15:32:40 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:32:40 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:32:40 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:32:40.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:32:40 compute-0 ovn_controller[148666]: 2026-01-20T15:32:40Z|00106|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:66:b2:7c 10.100.0.19
Jan 20 15:32:40 compute-0 ovn_controller[148666]: 2026-01-20T15:32:40Z|00107|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:66:b2:7c 10.100.0.19
Jan 20 15:32:41 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3315: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 682 B/s rd, 3.3 KiB/s wr, 0 op/s
Jan 20 15:32:41 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:32:41 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:32:41 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:32:41.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:32:41 compute-0 nova_compute[250018]: 2026-01-20 15:32:41.833 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:32:41 compute-0 nova_compute[250018]: 2026-01-20 15:32:41.881 250022 DEBUG nova.network.neutron [req-89856f14-89cc-4b67-8692-1e8233ddba0a req-a753d67e-7621-4197-b61f-2382d1cab1f0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 00832e9b-36fa-4d41-be21-ca6b05cd493f] Updated VIF entry in instance network info cache for port cecb01b7-0ffd-44bf-bc66-2bdc91b95936. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 15:32:41 compute-0 nova_compute[250018]: 2026-01-20 15:32:41.881 250022 DEBUG nova.network.neutron [req-89856f14-89cc-4b67-8692-1e8233ddba0a req-a753d67e-7621-4197-b61f-2382d1cab1f0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 00832e9b-36fa-4d41-be21-ca6b05cd493f] Updating instance_info_cache with network_info: [{"id": "3d9d0177-0374-4df6-b536-6a369c25c060", "address": "fa:16:3e:e7:ba:22", "network": {"id": "2ec0e47d-339a-4483-a67a-b500a294d021", "bridge": "br-int", "label": "tempest-network-smoke--347448443", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.220", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3d9d0177-03", "ovs_interfaceid": "3d9d0177-0374-4df6-b536-6a369c25c060", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "cecb01b7-0ffd-44bf-bc66-2bdc91b95936", "address": "fa:16:3e:66:b2:7c", "network": {"id": "e1610a22-2f29-4495-85e7-ab2081f73701", "bridge": "br-int", "label": "tempest-network-smoke--313389391", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.19", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcecb01b7-0f", "ovs_interfaceid": "cecb01b7-0ffd-44bf-bc66-2bdc91b95936", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:32:41 compute-0 nova_compute[250018]: 2026-01-20 15:32:41.899 250022 DEBUG oslo_concurrency.lockutils [req-89856f14-89cc-4b67-8692-1e8233ddba0a req-a753d67e-7621-4197-b61f-2382d1cab1f0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-00832e9b-36fa-4d41-be21-ca6b05cd493f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:32:41 compute-0 nova_compute[250018]: 2026-01-20 15:32:41.899 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquired lock "refresh_cache-00832e9b-36fa-4d41-be21-ca6b05cd493f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:32:41 compute-0 nova_compute[250018]: 2026-01-20 15:32:41.900 250022 DEBUG nova.network.neutron [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] [instance: 00832e9b-36fa-4d41-be21-ca6b05cd493f] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 20 15:32:41 compute-0 nova_compute[250018]: 2026-01-20 15:32:41.900 250022 DEBUG nova.objects.instance [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 00832e9b-36fa-4d41-be21-ca6b05cd493f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:32:42 compute-0 ceph-mon[74360]: pgmap v3315: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 682 B/s rd, 3.3 KiB/s wr, 0 op/s
Jan 20 15:32:42 compute-0 nova_compute[250018]: 2026-01-20 15:32:42.722 250022 DEBUG nova.compute.manager [req-2274389a-e0e9-4f2c-9166-116266b83329 req-fbf4a82d-0e53-494f-89f1-e3acc268968b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 00832e9b-36fa-4d41-be21-ca6b05cd493f] Received event network-vif-plugged-cecb01b7-0ffd-44bf-bc66-2bdc91b95936 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:32:42 compute-0 nova_compute[250018]: 2026-01-20 15:32:42.723 250022 DEBUG oslo_concurrency.lockutils [req-2274389a-e0e9-4f2c-9166-116266b83329 req-fbf4a82d-0e53-494f-89f1-e3acc268968b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "00832e9b-36fa-4d41-be21-ca6b05cd493f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:32:42 compute-0 nova_compute[250018]: 2026-01-20 15:32:42.723 250022 DEBUG oslo_concurrency.lockutils [req-2274389a-e0e9-4f2c-9166-116266b83329 req-fbf4a82d-0e53-494f-89f1-e3acc268968b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "00832e9b-36fa-4d41-be21-ca6b05cd493f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:32:42 compute-0 nova_compute[250018]: 2026-01-20 15:32:42.723 250022 DEBUG oslo_concurrency.lockutils [req-2274389a-e0e9-4f2c-9166-116266b83329 req-fbf4a82d-0e53-494f-89f1-e3acc268968b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "00832e9b-36fa-4d41-be21-ca6b05cd493f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:32:42 compute-0 nova_compute[250018]: 2026-01-20 15:32:42.723 250022 DEBUG nova.compute.manager [req-2274389a-e0e9-4f2c-9166-116266b83329 req-fbf4a82d-0e53-494f-89f1-e3acc268968b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 00832e9b-36fa-4d41-be21-ca6b05cd493f] No waiting events found dispatching network-vif-plugged-cecb01b7-0ffd-44bf-bc66-2bdc91b95936 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:32:42 compute-0 nova_compute[250018]: 2026-01-20 15:32:42.724 250022 WARNING nova.compute.manager [req-2274389a-e0e9-4f2c-9166-116266b83329 req-fbf4a82d-0e53-494f-89f1-e3acc268968b 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 00832e9b-36fa-4d41-be21-ca6b05cd493f] Received unexpected event network-vif-plugged-cecb01b7-0ffd-44bf-bc66-2bdc91b95936 for instance with vm_state active and task_state None.
Jan 20 15:32:42 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:32:42 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:32:42 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:32:42.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:32:43 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3316: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.0 KiB/s wr, 0 op/s
Jan 20 15:32:43 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:32:43 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:32:43 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:32:43 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:32:43.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:32:44 compute-0 ceph-mon[74360]: pgmap v3316: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.0 KiB/s wr, 0 op/s
Jan 20 15:32:44 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:32:44 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:32:44 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:32:44.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:32:44 compute-0 nova_compute[250018]: 2026-01-20 15:32:44.804 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:32:45 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3317: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.3 KiB/s rd, 3.0 KiB/s wr, 0 op/s
Jan 20 15:32:45 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:32:45 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:32:45 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:32:45.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:32:45 compute-0 nova_compute[250018]: 2026-01-20 15:32:45.524 250022 DEBUG nova.network.neutron [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] [instance: 00832e9b-36fa-4d41-be21-ca6b05cd493f] Updating instance_info_cache with network_info: [{"id": "3d9d0177-0374-4df6-b536-6a369c25c060", "address": "fa:16:3e:e7:ba:22", "network": {"id": "2ec0e47d-339a-4483-a67a-b500a294d021", "bridge": "br-int", "label": "tempest-network-smoke--347448443", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.220", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3d9d0177-03", "ovs_interfaceid": "3d9d0177-0374-4df6-b536-6a369c25c060", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "cecb01b7-0ffd-44bf-bc66-2bdc91b95936", "address": "fa:16:3e:66:b2:7c", "network": {"id": "e1610a22-2f29-4495-85e7-ab2081f73701", "bridge": "br-int", "label": "tempest-network-smoke--313389391", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.19", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcecb01b7-0f", "ovs_interfaceid": "cecb01b7-0ffd-44bf-bc66-2bdc91b95936", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:32:45 compute-0 nova_compute[250018]: 2026-01-20 15:32:45.542 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Releasing lock "refresh_cache-00832e9b-36fa-4d41-be21-ca6b05cd493f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:32:45 compute-0 nova_compute[250018]: 2026-01-20 15:32:45.543 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] [instance: 00832e9b-36fa-4d41-be21-ca6b05cd493f] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 20 15:32:45 compute-0 nova_compute[250018]: 2026-01-20 15:32:45.543 250022 DEBUG oslo_concurrency.lockutils [req-cb10ec80-90cc-45c3-90fc-416e8c59015d req-5e657704-1020-4eaf-8657-260f85a350de 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-00832e9b-36fa-4d41-be21-ca6b05cd493f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:32:45 compute-0 nova_compute[250018]: 2026-01-20 15:32:45.543 250022 DEBUG nova.network.neutron [req-cb10ec80-90cc-45c3-90fc-416e8c59015d req-5e657704-1020-4eaf-8657-260f85a350de 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 00832e9b-36fa-4d41-be21-ca6b05cd493f] Refreshing network info cache for port cecb01b7-0ffd-44bf-bc66-2bdc91b95936 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 15:32:46 compute-0 ceph-mon[74360]: pgmap v3317: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.3 KiB/s rd, 3.0 KiB/s wr, 0 op/s
Jan 20 15:32:46 compute-0 sudo[385682]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:32:46 compute-0 sudo[385682]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:32:46 compute-0 sudo[385682]: pam_unix(sudo:session): session closed for user root
Jan 20 15:32:46 compute-0 sudo[385707]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:32:46 compute-0 sudo[385707]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:32:46 compute-0 sudo[385707]: pam_unix(sudo:session): session closed for user root
Jan 20 15:32:46 compute-0 sudo[385732]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:32:46 compute-0 sudo[385732]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:32:46 compute-0 sudo[385732]: pam_unix(sudo:session): session closed for user root
Jan 20 15:32:46 compute-0 sudo[385757]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Jan 20 15:32:46 compute-0 sudo[385757]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:32:46 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:32:46 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:32:46 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:32:46.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:32:46 compute-0 nova_compute[250018]: 2026-01-20 15:32:46.835 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:32:46 compute-0 nova_compute[250018]: 2026-01-20 15:32:46.902 250022 DEBUG nova.network.neutron [req-cb10ec80-90cc-45c3-90fc-416e8c59015d req-5e657704-1020-4eaf-8657-260f85a350de 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 00832e9b-36fa-4d41-be21-ca6b05cd493f] Updated VIF entry in instance network info cache for port cecb01b7-0ffd-44bf-bc66-2bdc91b95936. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 15:32:46 compute-0 nova_compute[250018]: 2026-01-20 15:32:46.902 250022 DEBUG nova.network.neutron [req-cb10ec80-90cc-45c3-90fc-416e8c59015d req-5e657704-1020-4eaf-8657-260f85a350de 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 00832e9b-36fa-4d41-be21-ca6b05cd493f] Updating instance_info_cache with network_info: [{"id": "3d9d0177-0374-4df6-b536-6a369c25c060", "address": "fa:16:3e:e7:ba:22", "network": {"id": "2ec0e47d-339a-4483-a67a-b500a294d021", "bridge": "br-int", "label": "tempest-network-smoke--347448443", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.220", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3d9d0177-03", "ovs_interfaceid": "3d9d0177-0374-4df6-b536-6a369c25c060", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "cecb01b7-0ffd-44bf-bc66-2bdc91b95936", "address": "fa:16:3e:66:b2:7c", "network": {"id": "e1610a22-2f29-4495-85e7-ab2081f73701", "bridge": "br-int", "label": "tempest-network-smoke--313389391", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.19", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcecb01b7-0f", "ovs_interfaceid": "cecb01b7-0ffd-44bf-bc66-2bdc91b95936", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:32:46 compute-0 nova_compute[250018]: 2026-01-20 15:32:46.919 250022 DEBUG oslo_concurrency.lockutils [req-cb10ec80-90cc-45c3-90fc-416e8c59015d req-5e657704-1020-4eaf-8657-260f85a350de 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-00832e9b-36fa-4d41-be21-ca6b05cd493f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:32:47 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3318: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 0 op/s
Jan 20 15:32:47 compute-0 podman[385854]: 2026-01-20 15:32:47.252982778 +0000 UTC m=+0.146347831 container exec a602f19ce9ef2d4922e558036fcbd51fac25abd19d28d7df857e5fbe8f959ba3 (image=quay.io/ceph/ceph:v18, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mon-compute-0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 20 15:32:47 compute-0 podman[385854]: 2026-01-20 15:32:47.401110266 +0000 UTC m=+0.294475279 container exec_died a602f19ce9ef2d4922e558036fcbd51fac25abd19d28d7df857e5fbe8f959ba3 (image=quay.io/ceph/ceph:v18, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mon-compute-0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 15:32:47 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:32:47 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:32:47 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:32:47.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:32:47 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 20 15:32:47 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:32:47 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 20 15:32:47 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:32:48 compute-0 podman[386007]: 2026-01-20 15:32:48.006826606 +0000 UTC m=+0.061251260 container exec a2973a514c852ff316e6ad2ff84585210b4ad01d86cdf2de06f634d9390a88e8 (image=quay.io/ceph/haproxy:2.3, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-haproxy-rgw-default-compute-0-nqkboe)
Jan 20 15:32:48 compute-0 podman[386007]: 2026-01-20 15:32:48.016745797 +0000 UTC m=+0.071170421 container exec_died a2973a514c852ff316e6ad2ff84585210b4ad01d86cdf2de06f634d9390a88e8 (image=quay.io/ceph/haproxy:2.3, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-haproxy-rgw-default-compute-0-nqkboe)
Jan 20 15:32:48 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 20 15:32:48 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:32:48 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 20 15:32:48 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:32:48 compute-0 podman[386072]: 2026-01-20 15:32:48.222615649 +0000 UTC m=+0.059286897 container exec 0c2042652fe8d88ae47b6333c235a533de4d966f44da3b69a5fc82baeacb10bf (image=quay.io/ceph/keepalived:2.2.4, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-keepalived-rgw-default-compute-0-gcjsxe, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=2.2.4, release=1793, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, build-date=2023-02-22T09:23:20, distribution-scope=public, name=keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2, vcs-type=git, com.redhat.component=keepalived-container, description=keepalived for Ceph, summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc., architecture=x86_64)
Jan 20 15:32:48 compute-0 podman[386072]: 2026-01-20 15:32:48.234709459 +0000 UTC m=+0.071380677 container exec_died 0c2042652fe8d88ae47b6333c235a533de4d966f44da3b69a5fc82baeacb10bf (image=quay.io/ceph/keepalived:2.2.4, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-keepalived-rgw-default-compute-0-gcjsxe, summary=Provides keepalived on RHEL 9 for Ceph., architecture=x86_64, com.redhat.component=keepalived-container, io.openshift.tags=Ceph keepalived, distribution-scope=public, release=1793, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.k8s.display-name=Keepalived on RHEL 9, version=2.2.4, vendor=Red Hat, Inc., build-date=2023-02-22T09:23:20, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, name=keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=keepalived for Ceph, io.buildah.version=1.28.2)
Jan 20 15:32:48 compute-0 sudo[385757]: pam_unix(sudo:session): session closed for user root
Jan 20 15:32:48 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 15:32:48 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:32:48 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 15:32:48 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:32:48 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:32:48 compute-0 sudo[386106]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:32:48 compute-0 sudo[386106]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:32:48 compute-0 sudo[386106]: pam_unix(sudo:session): session closed for user root
Jan 20 15:32:48 compute-0 sudo[386131]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:32:48 compute-0 sudo[386131]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:32:48 compute-0 sudo[386131]: pam_unix(sudo:session): session closed for user root
Jan 20 15:32:48 compute-0 sudo[386156]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:32:48 compute-0 sudo[386156]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:32:48 compute-0 sudo[386156]: pam_unix(sudo:session): session closed for user root
Jan 20 15:32:48 compute-0 sudo[386181]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 15:32:48 compute-0 sudo[386181]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:32:48 compute-0 sudo[386206]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:32:48 compute-0 sudo[386206]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:32:48 compute-0 sudo[386206]: pam_unix(sudo:session): session closed for user root
Jan 20 15:32:48 compute-0 sudo[386231]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:32:48 compute-0 sudo[386231]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:32:48 compute-0 sudo[386231]: pam_unix(sudo:session): session closed for user root
Jan 20 15:32:48 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:32:48 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:32:48 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:32:48.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:32:48 compute-0 ceph-mon[74360]: pgmap v3318: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 0 op/s
Jan 20 15:32:48 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:32:48 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:32:48 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/1085350766' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:32:48 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:32:48 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:32:48 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:32:48 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:32:49 compute-0 sudo[386181]: pam_unix(sudo:session): session closed for user root
Jan 20 15:32:49 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 15:32:49 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:32:49 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 20 15:32:49 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 15:32:49 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 20 15:32:49 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3319: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.3 KiB/s rd, 5.7 KiB/s wr, 0 op/s
Jan 20 15:32:49 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:32:49 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev b20f378e-36b0-4943-ab4c-e468c146b740 does not exist
Jan 20 15:32:49 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 89901f7d-dcfa-4e39-b4ca-97511120b387 does not exist
Jan 20 15:32:49 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 44f0a680-4250-4a2c-8295-b5afd56a6a23 does not exist
Jan 20 15:32:49 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 20 15:32:49 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 15:32:49 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 20 15:32:49 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 15:32:49 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 15:32:49 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:32:49 compute-0 sudo[386287]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:32:49 compute-0 sudo[386287]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:32:49 compute-0 sudo[386287]: pam_unix(sudo:session): session closed for user root
Jan 20 15:32:49 compute-0 sudo[386312]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:32:49 compute-0 sudo[386312]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:32:49 compute-0 sudo[386312]: pam_unix(sudo:session): session closed for user root
Jan 20 15:32:49 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:32:49 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:32:49 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:32:49.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:32:49 compute-0 sudo[386337]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:32:49 compute-0 sudo[386337]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:32:49 compute-0 sudo[386337]: pam_unix(sudo:session): session closed for user root
Jan 20 15:32:49 compute-0 sudo[386362]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 15:32:49 compute-0 sudo[386362]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:32:49 compute-0 nova_compute[250018]: 2026-01-20 15:32:49.806 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:32:49 compute-0 podman[386427]: 2026-01-20 15:32:49.848927951 +0000 UTC m=+0.041298127 container create 1a5ff955defebcd3dd5b98a68d1d0bb0e2be18470ab29d47da435fc80f1e8853 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_rosalind, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True)
Jan 20 15:32:49 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:32:49 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 15:32:49 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:32:49 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 15:32:49 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 15:32:49 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:32:49 compute-0 systemd[1]: Started libpod-conmon-1a5ff955defebcd3dd5b98a68d1d0bb0e2be18470ab29d47da435fc80f1e8853.scope.
Jan 20 15:32:49 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:32:49 compute-0 podman[386427]: 2026-01-20 15:32:49.926790483 +0000 UTC m=+0.119160659 container init 1a5ff955defebcd3dd5b98a68d1d0bb0e2be18470ab29d47da435fc80f1e8853 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_rosalind, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 20 15:32:49 compute-0 podman[386427]: 2026-01-20 15:32:49.831953488 +0000 UTC m=+0.024323684 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:32:49 compute-0 podman[386427]: 2026-01-20 15:32:49.933698731 +0000 UTC m=+0.126068907 container start 1a5ff955defebcd3dd5b98a68d1d0bb0e2be18470ab29d47da435fc80f1e8853 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_rosalind, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 20 15:32:49 compute-0 podman[386427]: 2026-01-20 15:32:49.936826867 +0000 UTC m=+0.129197043 container attach 1a5ff955defebcd3dd5b98a68d1d0bb0e2be18470ab29d47da435fc80f1e8853 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_rosalind, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 20 15:32:49 compute-0 gallant_rosalind[386443]: 167 167
Jan 20 15:32:49 compute-0 systemd[1]: libpod-1a5ff955defebcd3dd5b98a68d1d0bb0e2be18470ab29d47da435fc80f1e8853.scope: Deactivated successfully.
Jan 20 15:32:49 compute-0 podman[386427]: 2026-01-20 15:32:49.939189561 +0000 UTC m=+0.131559727 container died 1a5ff955defebcd3dd5b98a68d1d0bb0e2be18470ab29d47da435fc80f1e8853 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_rosalind, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 20 15:32:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-520753197b13d6e11a5ae242e660e4b2eaf37cefaacccd607dd06e77f9cbce07-merged.mount: Deactivated successfully.
Jan 20 15:32:49 compute-0 podman[386427]: 2026-01-20 15:32:49.977853045 +0000 UTC m=+0.170223221 container remove 1a5ff955defebcd3dd5b98a68d1d0bb0e2be18470ab29d47da435fc80f1e8853 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_rosalind, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 20 15:32:49 compute-0 systemd[1]: libpod-conmon-1a5ff955defebcd3dd5b98a68d1d0bb0e2be18470ab29d47da435fc80f1e8853.scope: Deactivated successfully.
Jan 20 15:32:50 compute-0 podman[386471]: 2026-01-20 15:32:50.158038107 +0000 UTC m=+0.040884546 container create f0549da1009af37a4973448538d14ebe2e9c096ed4406b692a7850c58bf53529 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_torvalds, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:32:50 compute-0 systemd[1]: Started libpod-conmon-f0549da1009af37a4973448538d14ebe2e9c096ed4406b692a7850c58bf53529.scope.
Jan 20 15:32:50 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:32:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a055903c8dc5eddec6a397de4d7a8729f8f3afa7ac2e2f95f9b85625ef6ab538/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 15:32:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a055903c8dc5eddec6a397de4d7a8729f8f3afa7ac2e2f95f9b85625ef6ab538/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 15:32:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a055903c8dc5eddec6a397de4d7a8729f8f3afa7ac2e2f95f9b85625ef6ab538/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 15:32:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a055903c8dc5eddec6a397de4d7a8729f8f3afa7ac2e2f95f9b85625ef6ab538/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 15:32:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a055903c8dc5eddec6a397de4d7a8729f8f3afa7ac2e2f95f9b85625ef6ab538/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 15:32:50 compute-0 podman[386471]: 2026-01-20 15:32:50.142461662 +0000 UTC m=+0.025308121 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:32:50 compute-0 podman[386471]: 2026-01-20 15:32:50.238014827 +0000 UTC m=+0.120861316 container init f0549da1009af37a4973448538d14ebe2e9c096ed4406b692a7850c58bf53529 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_torvalds, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True)
Jan 20 15:32:50 compute-0 podman[386471]: 2026-01-20 15:32:50.246744605 +0000 UTC m=+0.129591054 container start f0549da1009af37a4973448538d14ebe2e9c096ed4406b692a7850c58bf53529 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_torvalds, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Jan 20 15:32:50 compute-0 podman[386471]: 2026-01-20 15:32:50.249805978 +0000 UTC m=+0.132652437 container attach f0549da1009af37a4973448538d14ebe2e9c096ed4406b692a7850c58bf53529 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_torvalds, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 15:32:50 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:32:50 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:32:50 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:32:50.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:32:50 compute-0 ceph-mon[74360]: pgmap v3319: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.3 KiB/s rd, 5.7 KiB/s wr, 0 op/s
Jan 20 15:32:51 compute-0 youthful_torvalds[386487]: --> passed data devices: 0 physical, 1 LVM
Jan 20 15:32:51 compute-0 youthful_torvalds[386487]: --> relative data size: 1.0
Jan 20 15:32:51 compute-0 youthful_torvalds[386487]: --> All data devices are unavailable
Jan 20 15:32:51 compute-0 systemd[1]: libpod-f0549da1009af37a4973448538d14ebe2e9c096ed4406b692a7850c58bf53529.scope: Deactivated successfully.
Jan 20 15:32:51 compute-0 podman[386471]: 2026-01-20 15:32:51.085336454 +0000 UTC m=+0.968182913 container died f0549da1009af37a4973448538d14ebe2e9c096ed4406b692a7850c58bf53529 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_torvalds, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 15:32:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-a055903c8dc5eddec6a397de4d7a8729f8f3afa7ac2e2f95f9b85625ef6ab538-merged.mount: Deactivated successfully.
Jan 20 15:32:51 compute-0 podman[386471]: 2026-01-20 15:32:51.144428515 +0000 UTC m=+1.027274964 container remove f0549da1009af37a4973448538d14ebe2e9c096ed4406b692a7850c58bf53529 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_torvalds, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 15:32:51 compute-0 systemd[1]: libpod-conmon-f0549da1009af37a4973448538d14ebe2e9c096ed4406b692a7850c58bf53529.scope: Deactivated successfully.
Jan 20 15:32:51 compute-0 sudo[386362]: pam_unix(sudo:session): session closed for user root
Jan 20 15:32:51 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3320: 321 pgs: 321 active+clean; 239 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.7 MiB/s wr, 27 op/s
Jan 20 15:32:51 compute-0 sudo[386516]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:32:51 compute-0 sudo[386516]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:32:51 compute-0 sudo[386516]: pam_unix(sudo:session): session closed for user root
Jan 20 15:32:51 compute-0 sudo[386541]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:32:51 compute-0 sudo[386541]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:32:51 compute-0 sudo[386541]: pam_unix(sudo:session): session closed for user root
Jan 20 15:32:51 compute-0 sudo[386566]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:32:51 compute-0 sudo[386566]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:32:51 compute-0 sudo[386566]: pam_unix(sudo:session): session closed for user root
Jan 20 15:32:51 compute-0 sudo[386591]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- lvm list --format json
Jan 20 15:32:51 compute-0 sudo[386591]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:32:51 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:32:51 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:32:51 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:32:51.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:32:51 compute-0 podman[386657]: 2026-01-20 15:32:51.765084242 +0000 UTC m=+0.036417744 container create 394724bb850349f6c5774db830b96acef3dd5369fd6679fb547e80744c62f47d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_jang, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 15:32:51 compute-0 systemd[1]: Started libpod-conmon-394724bb850349f6c5774db830b96acef3dd5369fd6679fb547e80744c62f47d.scope.
Jan 20 15:32:51 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:32:51 compute-0 nova_compute[250018]: 2026-01-20 15:32:51.836 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:32:51 compute-0 podman[386657]: 2026-01-20 15:32:51.748336535 +0000 UTC m=+0.019670047 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:32:51 compute-0 podman[386657]: 2026-01-20 15:32:51.853198063 +0000 UTC m=+0.124531605 container init 394724bb850349f6c5774db830b96acef3dd5369fd6679fb547e80744c62f47d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_jang, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 15:32:51 compute-0 podman[386657]: 2026-01-20 15:32:51.859707161 +0000 UTC m=+0.131040653 container start 394724bb850349f6c5774db830b96acef3dd5369fd6679fb547e80744c62f47d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_jang, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 20 15:32:51 compute-0 podman[386657]: 2026-01-20 15:32:51.862812116 +0000 UTC m=+0.134145658 container attach 394724bb850349f6c5774db830b96acef3dd5369fd6679fb547e80744c62f47d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_jang, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 20 15:32:51 compute-0 vigilant_jang[386673]: 167 167
Jan 20 15:32:51 compute-0 systemd[1]: libpod-394724bb850349f6c5774db830b96acef3dd5369fd6679fb547e80744c62f47d.scope: Deactivated successfully.
Jan 20 15:32:51 compute-0 conmon[386673]: conmon 394724bb850349f6c577 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-394724bb850349f6c5774db830b96acef3dd5369fd6679fb547e80744c62f47d.scope/container/memory.events
Jan 20 15:32:51 compute-0 podman[386657]: 2026-01-20 15:32:51.866035933 +0000 UTC m=+0.137369435 container died 394724bb850349f6c5774db830b96acef3dd5369fd6679fb547e80744c62f47d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_jang, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:32:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-7aa13f5f24ce7051bb4fb1ffe529f66a20160a4998de014efc5c15c9760f3cef-merged.mount: Deactivated successfully.
Jan 20 15:32:51 compute-0 podman[386657]: 2026-01-20 15:32:51.905449958 +0000 UTC m=+0.176783450 container remove 394724bb850349f6c5774db830b96acef3dd5369fd6679fb547e80744c62f47d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_jang, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 20 15:32:51 compute-0 systemd[1]: libpod-conmon-394724bb850349f6c5774db830b96acef3dd5369fd6679fb547e80744c62f47d.scope: Deactivated successfully.
Jan 20 15:32:52 compute-0 podman[386697]: 2026-01-20 15:32:52.08163035 +0000 UTC m=+0.040269038 container create 76f088563e98fc14804ecd35f4d61a82f2e630616a68d7ab2b7b2aceab33fd91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_lederberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:32:52 compute-0 systemd[1]: Started libpod-conmon-76f088563e98fc14804ecd35f4d61a82f2e630616a68d7ab2b7b2aceab33fd91.scope.
Jan 20 15:32:52 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:32:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5874ac370415609e5d7f41105ced8a09923a7943d6d797793d71300d8aa9151e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 15:32:52 compute-0 podman[386697]: 2026-01-20 15:32:52.064629587 +0000 UTC m=+0.023268305 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:32:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5874ac370415609e5d7f41105ced8a09923a7943d6d797793d71300d8aa9151e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 15:32:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5874ac370415609e5d7f41105ced8a09923a7943d6d797793d71300d8aa9151e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 15:32:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5874ac370415609e5d7f41105ced8a09923a7943d6d797793d71300d8aa9151e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 15:32:52 compute-0 podman[386697]: 2026-01-20 15:32:52.175008856 +0000 UTC m=+0.133647554 container init 76f088563e98fc14804ecd35f4d61a82f2e630616a68d7ab2b7b2aceab33fd91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_lederberg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 20 15:32:52 compute-0 podman[386697]: 2026-01-20 15:32:52.185577524 +0000 UTC m=+0.144216222 container start 76f088563e98fc14804ecd35f4d61a82f2e630616a68d7ab2b7b2aceab33fd91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_lederberg, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True)
Jan 20 15:32:52 compute-0 podman[386697]: 2026-01-20 15:32:52.18874937 +0000 UTC m=+0.147388068 container attach 76f088563e98fc14804ecd35f4d61a82f2e630616a68d7ab2b7b2aceab33fd91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_lederberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 20 15:32:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:32:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:32:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:32:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:32:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:32:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:32:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Optimize plan auto_2026-01-20_15:32:52
Jan 20 15:32:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 15:32:52 compute-0 ceph-mgr[74653]: [balancer INFO root] do_upmap
Jan 20 15:32:52 compute-0 ceph-mgr[74653]: [balancer INFO root] pools ['.rgw.root', 'backups', 'cephfs.cephfs.data', 'default.rgw.meta', 'default.rgw.log', '.mgr', 'vms', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.control', 'images']
Jan 20 15:32:52 compute-0 ceph-mgr[74653]: [balancer INFO root] prepared 0/10 changes
Jan 20 15:32:52 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:32:52 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:32:52 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:32:52.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:32:52 compute-0 ceph-mon[74360]: pgmap v3320: 321 pgs: 321 active+clean; 239 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.7 MiB/s wr, 27 op/s
Jan 20 15:32:52 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/62047383' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:32:52 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/1025644885' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:32:52 compute-0 cranky_lederberg[386713]: {
Jan 20 15:32:52 compute-0 cranky_lederberg[386713]:     "0": [
Jan 20 15:32:52 compute-0 cranky_lederberg[386713]:         {
Jan 20 15:32:52 compute-0 cranky_lederberg[386713]:             "devices": [
Jan 20 15:32:52 compute-0 cranky_lederberg[386713]:                 "/dev/loop3"
Jan 20 15:32:52 compute-0 cranky_lederberg[386713]:             ],
Jan 20 15:32:52 compute-0 cranky_lederberg[386713]:             "lv_name": "ceph_lv0",
Jan 20 15:32:52 compute-0 cranky_lederberg[386713]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 15:32:52 compute-0 cranky_lederberg[386713]:             "lv_size": "7511998464",
Jan 20 15:32:52 compute-0 cranky_lederberg[386713]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e399cf45-e6b6-5393-99f1-75c601d3f188,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afc3740a-ccee-46f8-b343-22648cc89376,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 20 15:32:52 compute-0 cranky_lederberg[386713]:             "lv_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 15:32:52 compute-0 cranky_lederberg[386713]:             "name": "ceph_lv0",
Jan 20 15:32:52 compute-0 cranky_lederberg[386713]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 15:32:52 compute-0 cranky_lederberg[386713]:             "tags": {
Jan 20 15:32:52 compute-0 cranky_lederberg[386713]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 15:32:52 compute-0 cranky_lederberg[386713]:                 "ceph.block_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 15:32:52 compute-0 cranky_lederberg[386713]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 15:32:52 compute-0 cranky_lederberg[386713]:                 "ceph.cluster_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 15:32:52 compute-0 cranky_lederberg[386713]:                 "ceph.cluster_name": "ceph",
Jan 20 15:32:52 compute-0 cranky_lederberg[386713]:                 "ceph.crush_device_class": "",
Jan 20 15:32:52 compute-0 cranky_lederberg[386713]:                 "ceph.encrypted": "0",
Jan 20 15:32:52 compute-0 cranky_lederberg[386713]:                 "ceph.osd_fsid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 15:32:52 compute-0 cranky_lederberg[386713]:                 "ceph.osd_id": "0",
Jan 20 15:32:52 compute-0 cranky_lederberg[386713]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 15:32:52 compute-0 cranky_lederberg[386713]:                 "ceph.type": "block",
Jan 20 15:32:52 compute-0 cranky_lederberg[386713]:                 "ceph.vdo": "0"
Jan 20 15:32:52 compute-0 cranky_lederberg[386713]:             },
Jan 20 15:32:52 compute-0 cranky_lederberg[386713]:             "type": "block",
Jan 20 15:32:52 compute-0 cranky_lederberg[386713]:             "vg_name": "ceph_vg0"
Jan 20 15:32:52 compute-0 cranky_lederberg[386713]:         }
Jan 20 15:32:52 compute-0 cranky_lederberg[386713]:     ]
Jan 20 15:32:52 compute-0 cranky_lederberg[386713]: }
Jan 20 15:32:52 compute-0 systemd[1]: libpod-76f088563e98fc14804ecd35f4d61a82f2e630616a68d7ab2b7b2aceab33fd91.scope: Deactivated successfully.
Jan 20 15:32:52 compute-0 podman[386697]: 2026-01-20 15:32:52.951607345 +0000 UTC m=+0.910246033 container died 76f088563e98fc14804ecd35f4d61a82f2e630616a68d7ab2b7b2aceab33fd91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_lederberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 20 15:32:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-5874ac370415609e5d7f41105ced8a09923a7943d6d797793d71300d8aa9151e-merged.mount: Deactivated successfully.
Jan 20 15:32:53 compute-0 podman[386697]: 2026-01-20 15:32:53.003997043 +0000 UTC m=+0.962635741 container remove 76f088563e98fc14804ecd35f4d61a82f2e630616a68d7ab2b7b2aceab33fd91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_lederberg, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 20 15:32:53 compute-0 systemd[1]: libpod-conmon-76f088563e98fc14804ecd35f4d61a82f2e630616a68d7ab2b7b2aceab33fd91.scope: Deactivated successfully.
Jan 20 15:32:53 compute-0 sudo[386591]: pam_unix(sudo:session): session closed for user root
Jan 20 15:32:53 compute-0 sudo[386733]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:32:53 compute-0 sudo[386733]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:32:53 compute-0 sudo[386733]: pam_unix(sudo:session): session closed for user root
Jan 20 15:32:53 compute-0 sudo[386758]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:32:53 compute-0 sudo[386758]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:32:53 compute-0 sudo[386758]: pam_unix(sudo:session): session closed for user root
Jan 20 15:32:53 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3321: 321 pgs: 321 active+clean; 239 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.7 MiB/s wr, 27 op/s
Jan 20 15:32:53 compute-0 sudo[386783]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:32:53 compute-0 sudo[386783]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:32:53 compute-0 sudo[386783]: pam_unix(sudo:session): session closed for user root
Jan 20 15:32:53 compute-0 sudo[386808]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- raw list --format json
Jan 20 15:32:53 compute-0 sudo[386808]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:32:53 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:32:53 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:32:53 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:32:53 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:32:53.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:32:53 compute-0 podman[386874]: 2026-01-20 15:32:53.639877716 +0000 UTC m=+0.058863415 container create a6798c8f4395d8938cafd5f85c286e94c7f9f3cd4aa96b21dcbd23c431a32fc3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_chatelet, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True)
Jan 20 15:32:53 compute-0 systemd[1]: Started libpod-conmon-a6798c8f4395d8938cafd5f85c286e94c7f9f3cd4aa96b21dcbd23c431a32fc3.scope.
Jan 20 15:32:53 compute-0 podman[386874]: 2026-01-20 15:32:53.619956913 +0000 UTC m=+0.038942662 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:32:53 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:32:53 compute-0 podman[386874]: 2026-01-20 15:32:53.742160214 +0000 UTC m=+0.161145953 container init a6798c8f4395d8938cafd5f85c286e94c7f9f3cd4aa96b21dcbd23c431a32fc3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_chatelet, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 15:32:53 compute-0 podman[386874]: 2026-01-20 15:32:53.75044052 +0000 UTC m=+0.169426229 container start a6798c8f4395d8938cafd5f85c286e94c7f9f3cd4aa96b21dcbd23c431a32fc3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_chatelet, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 15:32:53 compute-0 elegant_chatelet[386890]: 167 167
Jan 20 15:32:53 compute-0 podman[386874]: 2026-01-20 15:32:53.753684239 +0000 UTC m=+0.172669978 container attach a6798c8f4395d8938cafd5f85c286e94c7f9f3cd4aa96b21dcbd23c431a32fc3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_chatelet, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 20 15:32:53 compute-0 systemd[1]: libpod-a6798c8f4395d8938cafd5f85c286e94c7f9f3cd4aa96b21dcbd23c431a32fc3.scope: Deactivated successfully.
Jan 20 15:32:53 compute-0 podman[386874]: 2026-01-20 15:32:53.755436727 +0000 UTC m=+0.174422476 container died a6798c8f4395d8938cafd5f85c286e94c7f9f3cd4aa96b21dcbd23c431a32fc3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_chatelet, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 20 15:32:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-d815d4805cb798fec80fdfd1d28fb0ff60dd6d16cd47a6259db9f84c0b173292-merged.mount: Deactivated successfully.
Jan 20 15:32:53 compute-0 podman[386874]: 2026-01-20 15:32:53.793889944 +0000 UTC m=+0.212875643 container remove a6798c8f4395d8938cafd5f85c286e94c7f9f3cd4aa96b21dcbd23c431a32fc3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_chatelet, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 15:32:53 compute-0 systemd[1]: libpod-conmon-a6798c8f4395d8938cafd5f85c286e94c7f9f3cd4aa96b21dcbd23c431a32fc3.scope: Deactivated successfully.
Jan 20 15:32:53 compute-0 ceph-mon[74360]: pgmap v3321: 321 pgs: 321 active+clean; 239 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.7 MiB/s wr, 27 op/s
Jan 20 15:32:53 compute-0 podman[386914]: 2026-01-20 15:32:53.997701171 +0000 UTC m=+0.053015087 container create bca17da27a94d6a2b0dbe9b1652451130ae33608df9d835315642298711b1653 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_rhodes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 20 15:32:54 compute-0 systemd[1]: Started libpod-conmon-bca17da27a94d6a2b0dbe9b1652451130ae33608df9d835315642298711b1653.scope.
Jan 20 15:32:54 compute-0 podman[386914]: 2026-01-20 15:32:53.968736141 +0000 UTC m=+0.024050137 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:32:54 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:32:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb2067fb1275c5473811ca563c71f3835bc249846b1d3d1a13d9fdfbb8efd51f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 15:32:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb2067fb1275c5473811ca563c71f3835bc249846b1d3d1a13d9fdfbb8efd51f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 15:32:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb2067fb1275c5473811ca563c71f3835bc249846b1d3d1a13d9fdfbb8efd51f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 15:32:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb2067fb1275c5473811ca563c71f3835bc249846b1d3d1a13d9fdfbb8efd51f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 15:32:54 compute-0 podman[386914]: 2026-01-20 15:32:54.081122924 +0000 UTC m=+0.136436870 container init bca17da27a94d6a2b0dbe9b1652451130ae33608df9d835315642298711b1653 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_rhodes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 20 15:32:54 compute-0 podman[386914]: 2026-01-20 15:32:54.089954545 +0000 UTC m=+0.145268451 container start bca17da27a94d6a2b0dbe9b1652451130ae33608df9d835315642298711b1653 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_rhodes, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 15:32:54 compute-0 podman[386914]: 2026-01-20 15:32:54.09306096 +0000 UTC m=+0.148374876 container attach bca17da27a94d6a2b0dbe9b1652451130ae33608df9d835315642298711b1653 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_rhodes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True)
Jan 20 15:32:54 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:32:54 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:32:54 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:32:54.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:32:54 compute-0 nova_compute[250018]: 2026-01-20 15:32:54.808 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:32:54 compute-0 intelligent_rhodes[386931]: {
Jan 20 15:32:54 compute-0 intelligent_rhodes[386931]:     "afc3740a-ccee-46f8-b343-22648cc89376": {
Jan 20 15:32:54 compute-0 intelligent_rhodes[386931]:         "ceph_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 15:32:54 compute-0 intelligent_rhodes[386931]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 20 15:32:54 compute-0 intelligent_rhodes[386931]:         "osd_id": 0,
Jan 20 15:32:54 compute-0 intelligent_rhodes[386931]:         "osd_uuid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 15:32:54 compute-0 intelligent_rhodes[386931]:         "type": "bluestore"
Jan 20 15:32:54 compute-0 intelligent_rhodes[386931]:     }
Jan 20 15:32:54 compute-0 intelligent_rhodes[386931]: }
Jan 20 15:32:54 compute-0 systemd[1]: libpod-bca17da27a94d6a2b0dbe9b1652451130ae33608df9d835315642298711b1653.scope: Deactivated successfully.
Jan 20 15:32:54 compute-0 podman[386914]: 2026-01-20 15:32:54.927594838 +0000 UTC m=+0.982908754 container died bca17da27a94d6a2b0dbe9b1652451130ae33608df9d835315642298711b1653 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_rhodes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 20 15:32:55 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3322: 321 pgs: 321 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Jan 20 15:32:55 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:32:55 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:32:55 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:32:55.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:32:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-cb2067fb1275c5473811ca563c71f3835bc249846b1d3d1a13d9fdfbb8efd51f-merged.mount: Deactivated successfully.
Jan 20 15:32:55 compute-0 podman[386914]: 2026-01-20 15:32:55.928583833 +0000 UTC m=+1.983897749 container remove bca17da27a94d6a2b0dbe9b1652451130ae33608df9d835315642298711b1653 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_rhodes, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 20 15:32:55 compute-0 sudo[386808]: pam_unix(sudo:session): session closed for user root
Jan 20 15:32:55 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 15:32:55 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:32:55 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 15:32:55 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:32:55 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev f41e15ae-ef66-4ad5-8da7-ec1b64678247 does not exist
Jan 20 15:32:55 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 2dd3f2f3-d923-4f44-9ffa-fa896f6ecd68 does not exist
Jan 20 15:32:55 compute-0 systemd[1]: libpod-conmon-bca17da27a94d6a2b0dbe9b1652451130ae33608df9d835315642298711b1653.scope: Deactivated successfully.
Jan 20 15:32:55 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 5155fcdd-ae61-4342-b4b2-6082fb08b1e3 does not exist
Jan 20 15:32:56 compute-0 sudo[386967]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:32:56 compute-0 sudo[386967]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:32:56 compute-0 sudo[386967]: pam_unix(sudo:session): session closed for user root
Jan 20 15:32:56 compute-0 sudo[386992]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 15:32:56 compute-0 sudo[386992]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:32:56 compute-0 sudo[386992]: pam_unix(sudo:session): session closed for user root
Jan 20 15:32:56 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:32:56 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:32:56 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:32:56.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:32:56 compute-0 nova_compute[250018]: 2026-01-20 15:32:56.838 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:32:56 compute-0 ceph-mon[74360]: pgmap v3322: 321 pgs: 321 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Jan 20 15:32:56 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:32:56 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:32:57 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3323: 321 pgs: 321 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 334 KiB/s rd, 1.8 MiB/s wr, 47 op/s
Jan 20 15:32:57 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:32:57 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:32:57 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:32:57.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:32:57 compute-0 sshd-session[386963]: Invalid user user from 134.122.57.138 port 60152
Jan 20 15:32:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 15:32:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 15:32:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 15:32:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 15:32:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 15:32:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 15:32:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 15:32:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 15:32:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 15:32:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 15:32:57 compute-0 ceph-mon[74360]: pgmap v3323: 321 pgs: 321 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 334 KiB/s rd, 1.8 MiB/s wr, 47 op/s
Jan 20 15:32:57 compute-0 sshd-session[386963]: Connection closed by invalid user user 134.122.57.138 port 60152 [preauth]
Jan 20 15:32:58 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:32:58 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:32:58 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:32:58 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:32:58.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:32:59 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3324: 321 pgs: 321 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 508 KiB/s rd, 1.8 MiB/s wr, 55 op/s
Jan 20 15:32:59 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:32:59 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:32:59 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:32:59.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:32:59 compute-0 nova_compute[250018]: 2026-01-20 15:32:59.810 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:33:00 compute-0 ceph-mon[74360]: pgmap v3324: 321 pgs: 321 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 508 KiB/s rd, 1.8 MiB/s wr, 55 op/s
Jan 20 15:33:00 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:33:00 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:33:00 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:33:00.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:33:01 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3325: 321 pgs: 321 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Jan 20 15:33:01 compute-0 podman[387021]: 2026-01-20 15:33:01.474487597 +0000 UTC m=+0.062682050 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:33:01 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:33:01 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:33:01 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:33:01.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:33:01 compute-0 podman[387020]: 2026-01-20 15:33:01.566811534 +0000 UTC m=+0.148984793 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, 
managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 20 15:33:01 compute-0 nova_compute[250018]: 2026-01-20 15:33:01.841 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:33:02 compute-0 ceph-mon[74360]: pgmap v3325: 321 pgs: 321 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Jan 20 15:33:02 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:33:02 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:33:02 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:33:02.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:33:03 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3326: 321 pgs: 321 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 133 KiB/s wr, 75 op/s
Jan 20 15:33:03 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:33:03 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:33:03 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:33:03 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:33:03.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:33:04 compute-0 nova_compute[250018]: 2026-01-20 15:33:04.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:33:04 compute-0 ceph-mon[74360]: pgmap v3326: 321 pgs: 321 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 133 KiB/s wr, 75 op/s
Jan 20 15:33:04 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:33:04 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:33:04 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:33:04.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:33:04 compute-0 nova_compute[250018]: 2026-01-20 15:33:04.811 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:33:05 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3327: 321 pgs: 321 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 135 KiB/s wr, 75 op/s
Jan 20 15:33:05 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:33:05 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:33:05 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:33:05.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:33:06 compute-0 ceph-mon[74360]: pgmap v3327: 321 pgs: 321 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 135 KiB/s wr, 75 op/s
Jan 20 15:33:06 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:33:06 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:33:06 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:33:06.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:33:06 compute-0 nova_compute[250018]: 2026-01-20 15:33:06.844 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:33:07 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3328: 321 pgs: 321 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.2 KiB/s wr, 72 op/s
Jan 20 15:33:07 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:33:07 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:33:07 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:33:07.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:33:08 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:33:08 compute-0 ceph-mon[74360]: pgmap v3328: 321 pgs: 321 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.2 KiB/s wr, 72 op/s
Jan 20 15:33:08 compute-0 sudo[387066]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:33:08 compute-0 sudo[387066]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:33:08 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:33:08 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:33:08 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:33:08.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:33:08 compute-0 sudo[387066]: pam_unix(sudo:session): session closed for user root
Jan 20 15:33:08 compute-0 sudo[387091]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:33:08 compute-0 sudo[387091]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:33:08 compute-0 sudo[387091]: pam_unix(sudo:session): session closed for user root
Jan 20 15:33:09 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3329: 321 pgs: 321 active+clean; 253 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 363 KiB/s wr, 82 op/s
Jan 20 15:33:09 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:33:09 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:33:09 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:33:09.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:33:09 compute-0 nova_compute[250018]: 2026-01-20 15:33:09.812 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:33:10 compute-0 ceph-mon[74360]: pgmap v3329: 321 pgs: 321 active+clean; 253 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 363 KiB/s wr, 82 op/s
Jan 20 15:33:10 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:33:10 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:33:10 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:33:10.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:33:11 compute-0 nova_compute[250018]: 2026-01-20 15:33:11.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:33:11 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3330: 321 pgs: 321 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 2.1 MiB/s wr, 110 op/s
Jan 20 15:33:11 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:33:11 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:33:11 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:33:11.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:33:11 compute-0 nova_compute[250018]: 2026-01-20 15:33:11.848 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:33:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 15:33:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:33:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 20 15:33:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:33:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.004338278487344128 of space, bias 1.0, pg target 1.3014835462032384 quantized to 32 (current 32)
Jan 20 15:33:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:33:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021614147124511445 of space, bias 1.0, pg target 0.6462629990228922 quantized to 32 (current 32)
Jan 20 15:33:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:33:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4344349060115393e-05 quantized to 32 (current 32)
Jan 20 15:33:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:33:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Jan 20 15:33:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:33:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 32)
Jan 20 15:33:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:33:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 15:33:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:33:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021737739624046157 quantized to 32 (current 32)
Jan 20 15:33:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:33:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Jan 20 15:33:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:33:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 15:33:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:33:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Jan 20 15:33:12 compute-0 ceph-mon[74360]: pgmap v3330: 321 pgs: 321 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 2.1 MiB/s wr, 110 op/s
Jan 20 15:33:12 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:33:12 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:33:12 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:33:12.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:33:13 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3331: 321 pgs: 321 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Jan 20 15:33:13 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:33:13 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:33:13 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:33:13 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:33:13.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:33:13 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 20 15:33:13 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1396469670' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:33:13 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 20 15:33:13 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1396469670' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:33:13 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/1396469670' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:33:13 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/1396469670' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:33:14 compute-0 ceph-mon[74360]: pgmap v3331: 321 pgs: 321 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Jan 20 15:33:14 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/2273594203' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:33:14 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:33:14 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:33:14 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:33:14.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:33:14 compute-0 nova_compute[250018]: 2026-01-20 15:33:14.814 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:33:15 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3332: 321 pgs: 321 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 20 15:33:15 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:33:15 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:33:15 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:33:15.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:33:15 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/959369576' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:33:16 compute-0 ceph-mon[74360]: pgmap v3332: 321 pgs: 321 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 20 15:33:16 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:33:16 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:33:16 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:33:16.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:33:16 compute-0 nova_compute[250018]: 2026-01-20 15:33:16.850 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:33:17 compute-0 nova_compute[250018]: 2026-01-20 15:33:17.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:33:17 compute-0 nova_compute[250018]: 2026-01-20 15:33:17.085 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:33:17 compute-0 nova_compute[250018]: 2026-01-20 15:33:17.085 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:33:17 compute-0 nova_compute[250018]: 2026-01-20 15:33:17.086 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:33:17 compute-0 nova_compute[250018]: 2026-01-20 15:33:17.086 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 15:33:17 compute-0 nova_compute[250018]: 2026-01-20 15:33:17.086 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:33:17 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3333: 321 pgs: 321 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 20 15:33:17 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:33:17 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:33:17 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:33:17.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:33:17 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:33:17 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4217607316' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:33:17 compute-0 nova_compute[250018]: 2026-01-20 15:33:17.562 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:33:17 compute-0 nova_compute[250018]: 2026-01-20 15:33:17.618 250022 DEBUG nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] skipping disk for instance-000000d0 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 15:33:17 compute-0 nova_compute[250018]: 2026-01-20 15:33:17.618 250022 DEBUG nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] skipping disk for instance-000000d0 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 15:33:17 compute-0 nova_compute[250018]: 2026-01-20 15:33:17.764 250022 WARNING nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:33:17 compute-0 nova_compute[250018]: 2026-01-20 15:33:17.765 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4001MB free_disk=20.897216796875GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 15:33:17 compute-0 nova_compute[250018]: 2026-01-20 15:33:17.765 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:33:17 compute-0 nova_compute[250018]: 2026-01-20 15:33:17.766 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:33:17 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/4217607316' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:33:17 compute-0 nova_compute[250018]: 2026-01-20 15:33:17.883 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Instance 00832e9b-36fa-4d41-be21-ca6b05cd493f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 15:33:17 compute-0 nova_compute[250018]: 2026-01-20 15:33:17.884 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 15:33:17 compute-0 nova_compute[250018]: 2026-01-20 15:33:17.885 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 15:33:17 compute-0 nova_compute[250018]: 2026-01-20 15:33:17.943 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:33:17 compute-0 nova_compute[250018]: 2026-01-20 15:33:17.977 250022 DEBUG nova.compute.manager [req-ac159d10-8fe8-4231-84e3-b0c98a22f89d req-1797d365-5b72-4617-99a0-c5d636477f65 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 00832e9b-36fa-4d41-be21-ca6b05cd493f] Received event network-changed-cecb01b7-0ffd-44bf-bc66-2bdc91b95936 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:33:17 compute-0 nova_compute[250018]: 2026-01-20 15:33:17.978 250022 DEBUG nova.compute.manager [req-ac159d10-8fe8-4231-84e3-b0c98a22f89d req-1797d365-5b72-4617-99a0-c5d636477f65 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 00832e9b-36fa-4d41-be21-ca6b05cd493f] Refreshing instance network info cache due to event network-changed-cecb01b7-0ffd-44bf-bc66-2bdc91b95936. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 15:33:17 compute-0 nova_compute[250018]: 2026-01-20 15:33:17.979 250022 DEBUG oslo_concurrency.lockutils [req-ac159d10-8fe8-4231-84e3-b0c98a22f89d req-1797d365-5b72-4617-99a0-c5d636477f65 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-00832e9b-36fa-4d41-be21-ca6b05cd493f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:33:17 compute-0 nova_compute[250018]: 2026-01-20 15:33:17.979 250022 DEBUG oslo_concurrency.lockutils [req-ac159d10-8fe8-4231-84e3-b0c98a22f89d req-1797d365-5b72-4617-99a0-c5d636477f65 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-00832e9b-36fa-4d41-be21-ca6b05cd493f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:33:17 compute-0 nova_compute[250018]: 2026-01-20 15:33:17.979 250022 DEBUG nova.network.neutron [req-ac159d10-8fe8-4231-84e3-b0c98a22f89d req-1797d365-5b72-4617-99a0-c5d636477f65 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 00832e9b-36fa-4d41-be21-ca6b05cd493f] Refreshing network info cache for port cecb01b7-0ffd-44bf-bc66-2bdc91b95936 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 15:33:18 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:33:18 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:33:18 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/700645636' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:33:18 compute-0 nova_compute[250018]: 2026-01-20 15:33:18.365 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:33:18 compute-0 nova_compute[250018]: 2026-01-20 15:33:18.371 250022 DEBUG nova.compute.provider_tree [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:33:18 compute-0 nova_compute[250018]: 2026-01-20 15:33:18.392 250022 DEBUG nova.scheduler.client.report [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:33:18 compute-0 nova_compute[250018]: 2026-01-20 15:33:18.394 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 15:33:18 compute-0 nova_compute[250018]: 2026-01-20 15:33:18.394 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.628s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:33:18 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:33:18 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:33:18 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:33:18.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:33:18 compute-0 ceph-mon[74360]: pgmap v3333: 321 pgs: 321 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 20 15:33:18 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/700645636' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:33:19 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3334: 321 pgs: 321 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 20 15:33:19 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:33:19 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:33:19 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:33:19.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:33:19 compute-0 nova_compute[250018]: 2026-01-20 15:33:19.843 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:33:19 compute-0 ceph-mon[74360]: pgmap v3334: 321 pgs: 321 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 20 15:33:20 compute-0 nova_compute[250018]: 2026-01-20 15:33:20.395 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:33:20 compute-0 nova_compute[250018]: 2026-01-20 15:33:20.395 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:33:20 compute-0 nova_compute[250018]: 2026-01-20 15:33:20.396 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:33:20 compute-0 nova_compute[250018]: 2026-01-20 15:33:20.396 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 15:33:20 compute-0 nova_compute[250018]: 2026-01-20 15:33:20.733 250022 DEBUG nova.network.neutron [req-ac159d10-8fe8-4231-84e3-b0c98a22f89d req-1797d365-5b72-4617-99a0-c5d636477f65 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 00832e9b-36fa-4d41-be21-ca6b05cd493f] Updated VIF entry in instance network info cache for port cecb01b7-0ffd-44bf-bc66-2bdc91b95936. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 15:33:20 compute-0 nova_compute[250018]: 2026-01-20 15:33:20.733 250022 DEBUG nova.network.neutron [req-ac159d10-8fe8-4231-84e3-b0c98a22f89d req-1797d365-5b72-4617-99a0-c5d636477f65 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 00832e9b-36fa-4d41-be21-ca6b05cd493f] Updating instance_info_cache with network_info: [{"id": "3d9d0177-0374-4df6-b536-6a369c25c060", "address": "fa:16:3e:e7:ba:22", "network": {"id": "2ec0e47d-339a-4483-a67a-b500a294d021", "bridge": "br-int", "label": "tempest-network-smoke--347448443", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.220", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3d9d0177-03", "ovs_interfaceid": "3d9d0177-0374-4df6-b536-6a369c25c060", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "cecb01b7-0ffd-44bf-bc66-2bdc91b95936", "address": "fa:16:3e:66:b2:7c", "network": {"id": "e1610a22-2f29-4495-85e7-ab2081f73701", "bridge": "br-int", "label": "tempest-network-smoke--313389391", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.19", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcecb01b7-0f", "ovs_interfaceid": "cecb01b7-0ffd-44bf-bc66-2bdc91b95936", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:33:20 compute-0 nova_compute[250018]: 2026-01-20 15:33:20.796 250022 DEBUG oslo_concurrency.lockutils [req-ac159d10-8fe8-4231-84e3-b0c98a22f89d req-1797d365-5b72-4617-99a0-c5d636477f65 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-00832e9b-36fa-4d41-be21-ca6b05cd493f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:33:20 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:33:20 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:33:20 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:33:20.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:33:20 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/1180419469' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:33:20 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/3427014143' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:33:21 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3335: 321 pgs: 321 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 206 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Jan 20 15:33:21 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:33:21 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:33:21 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:33:21.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:33:21 compute-0 nova_compute[250018]: 2026-01-20 15:33:21.854 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:33:21 compute-0 ceph-mon[74360]: pgmap v3335: 321 pgs: 321 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 206 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Jan 20 15:33:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:33:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:33:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:33:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:33:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:33:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:33:22 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:33:22 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:33:22 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:33:22.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:33:23 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3336: 321 pgs: 321 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 15 KiB/s wr, 0 op/s
Jan 20 15:33:23 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:33:23 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:33:23 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:33:23 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:33:23.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:33:24 compute-0 ceph-mon[74360]: pgmap v3336: 321 pgs: 321 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 15 KiB/s wr, 0 op/s
Jan 20 15:33:24 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:33:24 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:33:24 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:33:24.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:33:24 compute-0 nova_compute[250018]: 2026-01-20 15:33:24.846 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:33:25 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3337: 321 pgs: 321 active+clean; 217 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.5 KiB/s rd, 27 KiB/s wr, 8 op/s
Jan 20 15:33:25 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/2267802654' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:33:25 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:33:25 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:33:25 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:33:25.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:33:26 compute-0 ceph-mon[74360]: pgmap v3337: 321 pgs: 321 active+clean; 217 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.5 KiB/s rd, 27 KiB/s wr, 8 op/s
Jan 20 15:33:26 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:33:26 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:33:26 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:33:26.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:33:26 compute-0 nova_compute[250018]: 2026-01-20 15:33:26.879 250022 DEBUG oslo_concurrency.lockutils [None req-d21d35fe-9687-4f04-aba5-39982cbb8549 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Acquiring lock "interface-00832e9b-36fa-4d41-be21-ca6b05cd493f-cecb01b7-0ffd-44bf-bc66-2bdc91b95936" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:33:26 compute-0 nova_compute[250018]: 2026-01-20 15:33:26.880 250022 DEBUG oslo_concurrency.lockutils [None req-d21d35fe-9687-4f04-aba5-39982cbb8549 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "interface-00832e9b-36fa-4d41-be21-ca6b05cd493f-cecb01b7-0ffd-44bf-bc66-2bdc91b95936" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:33:26 compute-0 nova_compute[250018]: 2026-01-20 15:33:26.888 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:33:26 compute-0 nova_compute[250018]: 2026-01-20 15:33:26.897 250022 DEBUG nova.objects.instance [None req-d21d35fe-9687-4f04-aba5-39982cbb8549 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lazy-loading 'flavor' on Instance uuid 00832e9b-36fa-4d41-be21-ca6b05cd493f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:33:26 compute-0 nova_compute[250018]: 2026-01-20 15:33:26.914 250022 DEBUG nova.virt.libvirt.vif [None req-d21d35fe-9687-4f04-aba5-39982cbb8549 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T15:31:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-294660269',display_name='tempest-TestNetworkBasicOps-server-294660269',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-294660269',id=208,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBM/wcy1u/OW984noISwEOwb5LNHq9kMm6H4gDYMOeRNg80CMD0xPaXGSJjRjqJjcmlGZ4ls4SDDoXkG2XEqy3wx1zeFMwT8eQgW0fV48l9Sd8ax0F4mvMlYvHLkKrxuyhw==',key_name='tempest-TestNetworkBasicOps-525328153',keypairs=<?>,launch_index=0,launched_at=2026-01-20T15:32:08Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3168f57421fb49bfb94b85daedd1fe7d',ramdisk_id='',reservation_id='r-yd9tjeqi',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-807695970',owner_user_name='tempest-TestNetworkBasicOps-807695970-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T15:32:08Z,user_data=None,user_id='5338aa65dc0e4326a66ce79053787f14',uuid=00832e9b-36fa-4d41-be21-ca6b05cd493f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "cecb01b7-0ffd-44bf-bc66-2bdc91b95936", "address": "fa:16:3e:66:b2:7c", "network": {"id": "e1610a22-2f29-4495-85e7-ab2081f73701", "bridge": "br-int", "label": "tempest-network-smoke--313389391", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.19", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcecb01b7-0f", "ovs_interfaceid": "cecb01b7-0ffd-44bf-bc66-2bdc91b95936", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 15:33:26 compute-0 nova_compute[250018]: 2026-01-20 15:33:26.915 250022 DEBUG nova.network.os_vif_util [None req-d21d35fe-9687-4f04-aba5-39982cbb8549 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Converting VIF {"id": "cecb01b7-0ffd-44bf-bc66-2bdc91b95936", "address": "fa:16:3e:66:b2:7c", "network": {"id": "e1610a22-2f29-4495-85e7-ab2081f73701", "bridge": "br-int", "label": "tempest-network-smoke--313389391", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.19", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcecb01b7-0f", "ovs_interfaceid": "cecb01b7-0ffd-44bf-bc66-2bdc91b95936", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:33:26 compute-0 nova_compute[250018]: 2026-01-20 15:33:26.915 250022 DEBUG nova.network.os_vif_util [None req-d21d35fe-9687-4f04-aba5-39982cbb8549 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:66:b2:7c,bridge_name='br-int',has_traffic_filtering=True,id=cecb01b7-0ffd-44bf-bc66-2bdc91b95936,network=Network(e1610a22-2f29-4495-85e7-ab2081f73701),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcecb01b7-0f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:33:26 compute-0 nova_compute[250018]: 2026-01-20 15:33:26.918 250022 DEBUG nova.virt.libvirt.guest [None req-d21d35fe-9687-4f04-aba5-39982cbb8549 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:66:b2:7c"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapcecb01b7-0f"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Jan 20 15:33:26 compute-0 nova_compute[250018]: 2026-01-20 15:33:26.921 250022 DEBUG nova.virt.libvirt.guest [None req-d21d35fe-9687-4f04-aba5-39982cbb8549 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:66:b2:7c"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapcecb01b7-0f"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Jan 20 15:33:26 compute-0 nova_compute[250018]: 2026-01-20 15:33:26.925 250022 DEBUG nova.virt.libvirt.driver [None req-d21d35fe-9687-4f04-aba5-39982cbb8549 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Attempting to detach device tapcecb01b7-0f from instance 00832e9b-36fa-4d41-be21-ca6b05cd493f from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Jan 20 15:33:26 compute-0 nova_compute[250018]: 2026-01-20 15:33:26.925 250022 DEBUG nova.virt.libvirt.guest [None req-d21d35fe-9687-4f04-aba5-39982cbb8549 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] detach device xml: <interface type="ethernet">
Jan 20 15:33:26 compute-0 nova_compute[250018]:   <mac address="fa:16:3e:66:b2:7c"/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:   <model type="virtio"/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:   <driver name="vhost" rx_queue_size="512"/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:   <mtu size="1442"/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:   <target dev="tapcecb01b7-0f"/>
Jan 20 15:33:26 compute-0 nova_compute[250018]: </interface>
Jan 20 15:33:26 compute-0 nova_compute[250018]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 20 15:33:26 compute-0 nova_compute[250018]: 2026-01-20 15:33:26.932 250022 DEBUG nova.virt.libvirt.guest [None req-d21d35fe-9687-4f04-aba5-39982cbb8549 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:66:b2:7c"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapcecb01b7-0f"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Jan 20 15:33:26 compute-0 nova_compute[250018]: 2026-01-20 15:33:26.936 250022 DEBUG nova.virt.libvirt.guest [None req-d21d35fe-9687-4f04-aba5-39982cbb8549 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:66:b2:7c"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapcecb01b7-0f"/></interface>not found in domain: <domain type='kvm' id='91'>
Jan 20 15:33:26 compute-0 nova_compute[250018]:   <name>instance-000000d0</name>
Jan 20 15:33:26 compute-0 nova_compute[250018]:   <uuid>00832e9b-36fa-4d41-be21-ca6b05cd493f</uuid>
Jan 20 15:33:26 compute-0 nova_compute[250018]:   <metadata>
Jan 20 15:33:26 compute-0 nova_compute[250018]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 15:33:26 compute-0 nova_compute[250018]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:   <nova:name>tempest-TestNetworkBasicOps-server-294660269</nova:name>
Jan 20 15:33:26 compute-0 nova_compute[250018]:   <nova:creationTime>2026-01-20 15:32:39</nova:creationTime>
Jan 20 15:33:26 compute-0 nova_compute[250018]:   <nova:flavor name="m1.nano">
Jan 20 15:33:26 compute-0 nova_compute[250018]:     <nova:memory>128</nova:memory>
Jan 20 15:33:26 compute-0 nova_compute[250018]:     <nova:disk>1</nova:disk>
Jan 20 15:33:26 compute-0 nova_compute[250018]:     <nova:swap>0</nova:swap>
Jan 20 15:33:26 compute-0 nova_compute[250018]:     <nova:ephemeral>0</nova:ephemeral>
Jan 20 15:33:26 compute-0 nova_compute[250018]:     <nova:vcpus>1</nova:vcpus>
Jan 20 15:33:26 compute-0 nova_compute[250018]:   </nova:flavor>
Jan 20 15:33:26 compute-0 nova_compute[250018]:   <nova:owner>
Jan 20 15:33:26 compute-0 nova_compute[250018]:     <nova:user uuid="5338aa65dc0e4326a66ce79053787f14">tempest-TestNetworkBasicOps-807695970-project-member</nova:user>
Jan 20 15:33:26 compute-0 nova_compute[250018]:     <nova:project uuid="3168f57421fb49bfb94b85daedd1fe7d">tempest-TestNetworkBasicOps-807695970</nova:project>
Jan 20 15:33:26 compute-0 nova_compute[250018]:   </nova:owner>
Jan 20 15:33:26 compute-0 nova_compute[250018]:   <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:   <nova:ports>
Jan 20 15:33:26 compute-0 nova_compute[250018]:     <nova:port uuid="3d9d0177-0374-4df6-b536-6a369c25c060">
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:     </nova:port>
Jan 20 15:33:26 compute-0 nova_compute[250018]:     <nova:port uuid="cecb01b7-0ffd-44bf-bc66-2bdc91b95936">
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <nova:ip type="fixed" address="10.100.0.19" ipVersion="4"/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:     </nova:port>
Jan 20 15:33:26 compute-0 nova_compute[250018]:   </nova:ports>
Jan 20 15:33:26 compute-0 nova_compute[250018]: </nova:instance>
Jan 20 15:33:26 compute-0 nova_compute[250018]:   </metadata>
Jan 20 15:33:26 compute-0 nova_compute[250018]:   <memory unit='KiB'>131072</memory>
Jan 20 15:33:26 compute-0 nova_compute[250018]:   <currentMemory unit='KiB'>131072</currentMemory>
Jan 20 15:33:26 compute-0 nova_compute[250018]:   <vcpu placement='static'>1</vcpu>
Jan 20 15:33:26 compute-0 nova_compute[250018]:   <resource>
Jan 20 15:33:26 compute-0 nova_compute[250018]:     <partition>/machine</partition>
Jan 20 15:33:26 compute-0 nova_compute[250018]:   </resource>
Jan 20 15:33:26 compute-0 nova_compute[250018]:   <sysinfo type='smbios'>
Jan 20 15:33:26 compute-0 nova_compute[250018]:     <system>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <entry name='manufacturer'>RDO</entry>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <entry name='product'>OpenStack Compute</entry>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <entry name='serial'>00832e9b-36fa-4d41-be21-ca6b05cd493f</entry>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <entry name='uuid'>00832e9b-36fa-4d41-be21-ca6b05cd493f</entry>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <entry name='family'>Virtual Machine</entry>
Jan 20 15:33:26 compute-0 nova_compute[250018]:     </system>
Jan 20 15:33:26 compute-0 nova_compute[250018]:   </sysinfo>
Jan 20 15:33:26 compute-0 nova_compute[250018]:   <os>
Jan 20 15:33:26 compute-0 nova_compute[250018]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Jan 20 15:33:26 compute-0 nova_compute[250018]:     <boot dev='hd'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:     <smbios mode='sysinfo'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:   </os>
Jan 20 15:33:26 compute-0 nova_compute[250018]:   <features>
Jan 20 15:33:26 compute-0 nova_compute[250018]:     <acpi/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:     <apic/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:     <vmcoreinfo state='on'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:   </features>
Jan 20 15:33:26 compute-0 nova_compute[250018]:   <cpu mode='custom' match='exact' check='full'>
Jan 20 15:33:26 compute-0 nova_compute[250018]:     <model fallback='forbid'>Nehalem</model>
Jan 20 15:33:26 compute-0 nova_compute[250018]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:     <feature policy='require' name='x2apic'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:     <feature policy='require' name='hypervisor'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:     <feature policy='require' name='vme'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:   </cpu>
Jan 20 15:33:26 compute-0 nova_compute[250018]:   <clock offset='utc'>
Jan 20 15:33:26 compute-0 nova_compute[250018]:     <timer name='pit' tickpolicy='delay'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:     <timer name='rtc' tickpolicy='catchup'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:     <timer name='hpet' present='no'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:   </clock>
Jan 20 15:33:26 compute-0 nova_compute[250018]:   <on_poweroff>destroy</on_poweroff>
Jan 20 15:33:26 compute-0 nova_compute[250018]:   <on_reboot>restart</on_reboot>
Jan 20 15:33:26 compute-0 nova_compute[250018]:   <on_crash>destroy</on_crash>
Jan 20 15:33:26 compute-0 nova_compute[250018]:   <devices>
Jan 20 15:33:26 compute-0 nova_compute[250018]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 20 15:33:26 compute-0 nova_compute[250018]:     <disk type='network' device='disk'>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <driver name='qemu' type='raw' cache='none'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <auth username='openstack'>
Jan 20 15:33:26 compute-0 nova_compute[250018]:         <secret type='ceph' uuid='e399cf45-e6b6-5393-99f1-75c601d3f188'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       </auth>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <source protocol='rbd' name='vms/00832e9b-36fa-4d41-be21-ca6b05cd493f_disk' index='2'>
Jan 20 15:33:26 compute-0 nova_compute[250018]:         <host name='192.168.122.100' port='6789'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:         <host name='192.168.122.102' port='6789'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:         <host name='192.168.122.101' port='6789'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       </source>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <target dev='vda' bus='virtio'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <alias name='virtio-disk0'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:     </disk>
Jan 20 15:33:26 compute-0 nova_compute[250018]:     <disk type='network' device='cdrom'>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <driver name='qemu' type='raw' cache='none'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <auth username='openstack'>
Jan 20 15:33:26 compute-0 nova_compute[250018]:         <secret type='ceph' uuid='e399cf45-e6b6-5393-99f1-75c601d3f188'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       </auth>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <source protocol='rbd' name='vms/00832e9b-36fa-4d41-be21-ca6b05cd493f_disk.config' index='1'>
Jan 20 15:33:26 compute-0 nova_compute[250018]:         <host name='192.168.122.100' port='6789'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:         <host name='192.168.122.102' port='6789'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:         <host name='192.168.122.101' port='6789'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       </source>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <target dev='sda' bus='sata'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <readonly/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <alias name='sata0-0-0'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:     </disk>
Jan 20 15:33:26 compute-0 nova_compute[250018]:     <controller type='pci' index='0' model='pcie-root'>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <alias name='pcie.0'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:     </controller>
Jan 20 15:33:26 compute-0 nova_compute[250018]:     <controller type='pci' index='1' model='pcie-root-port'>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <model name='pcie-root-port'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <target chassis='1' port='0x10'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <alias name='pci.1'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:     </controller>
Jan 20 15:33:26 compute-0 nova_compute[250018]:     <controller type='pci' index='2' model='pcie-root-port'>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <model name='pcie-root-port'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <target chassis='2' port='0x11'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <alias name='pci.2'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:     </controller>
Jan 20 15:33:26 compute-0 nova_compute[250018]:     <controller type='pci' index='3' model='pcie-root-port'>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <model name='pcie-root-port'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <target chassis='3' port='0x12'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <alias name='pci.3'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:     </controller>
Jan 20 15:33:26 compute-0 nova_compute[250018]:     <controller type='pci' index='4' model='pcie-root-port'>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <model name='pcie-root-port'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <target chassis='4' port='0x13'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <alias name='pci.4'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:     </controller>
Jan 20 15:33:26 compute-0 nova_compute[250018]:     <controller type='pci' index='5' model='pcie-root-port'>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <model name='pcie-root-port'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <target chassis='5' port='0x14'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <alias name='pci.5'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:     </controller>
Jan 20 15:33:26 compute-0 nova_compute[250018]:     <controller type='pci' index='6' model='pcie-root-port'>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <model name='pcie-root-port'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <target chassis='6' port='0x15'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <alias name='pci.6'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:     </controller>
Jan 20 15:33:26 compute-0 nova_compute[250018]:     <controller type='pci' index='7' model='pcie-root-port'>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <model name='pcie-root-port'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <target chassis='7' port='0x16'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <alias name='pci.7'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:     </controller>
Jan 20 15:33:26 compute-0 nova_compute[250018]:     <controller type='pci' index='8' model='pcie-root-port'>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <model name='pcie-root-port'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <target chassis='8' port='0x17'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <alias name='pci.8'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:     </controller>
Jan 20 15:33:26 compute-0 nova_compute[250018]:     <controller type='pci' index='9' model='pcie-root-port'>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <model name='pcie-root-port'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <target chassis='9' port='0x18'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <alias name='pci.9'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:     </controller>
Jan 20 15:33:26 compute-0 nova_compute[250018]:     <controller type='pci' index='10' model='pcie-root-port'>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <model name='pcie-root-port'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <target chassis='10' port='0x19'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <alias name='pci.10'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:     </controller>
Jan 20 15:33:26 compute-0 nova_compute[250018]:     <controller type='pci' index='11' model='pcie-root-port'>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <model name='pcie-root-port'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <target chassis='11' port='0x1a'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <alias name='pci.11'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:     </controller>
Jan 20 15:33:26 compute-0 nova_compute[250018]:     <controller type='pci' index='12' model='pcie-root-port'>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <model name='pcie-root-port'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <target chassis='12' port='0x1b'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <alias name='pci.12'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:     </controller>
Jan 20 15:33:26 compute-0 nova_compute[250018]:     <controller type='pci' index='13' model='pcie-root-port'>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <model name='pcie-root-port'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <target chassis='13' port='0x1c'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <alias name='pci.13'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:     </controller>
Jan 20 15:33:26 compute-0 nova_compute[250018]:     <controller type='pci' index='14' model='pcie-root-port'>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <model name='pcie-root-port'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <target chassis='14' port='0x1d'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <alias name='pci.14'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:     </controller>
Jan 20 15:33:26 compute-0 nova_compute[250018]:     <controller type='pci' index='15' model='pcie-root-port'>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <model name='pcie-root-port'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <target chassis='15' port='0x1e'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <alias name='pci.15'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:     </controller>
Jan 20 15:33:26 compute-0 nova_compute[250018]:     <controller type='pci' index='16' model='pcie-root-port'>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <model name='pcie-root-port'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <target chassis='16' port='0x1f'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <alias name='pci.16'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:     </controller>
Jan 20 15:33:26 compute-0 nova_compute[250018]:     <controller type='pci' index='17' model='pcie-root-port'>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <model name='pcie-root-port'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <target chassis='17' port='0x20'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <alias name='pci.17'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:     </controller>
Jan 20 15:33:26 compute-0 nova_compute[250018]:     <controller type='pci' index='18' model='pcie-root-port'>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <model name='pcie-root-port'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <target chassis='18' port='0x21'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <alias name='pci.18'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:     </controller>
Jan 20 15:33:26 compute-0 nova_compute[250018]:     <controller type='pci' index='19' model='pcie-root-port'>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <model name='pcie-root-port'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <target chassis='19' port='0x22'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <alias name='pci.19'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:     </controller>
Jan 20 15:33:26 compute-0 nova_compute[250018]:     <controller type='pci' index='20' model='pcie-root-port'>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <model name='pcie-root-port'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <target chassis='20' port='0x23'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <alias name='pci.20'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:     </controller>
Jan 20 15:33:26 compute-0 nova_compute[250018]:     <controller type='pci' index='21' model='pcie-root-port'>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <model name='pcie-root-port'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <target chassis='21' port='0x24'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <alias name='pci.21'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:     </controller>
Jan 20 15:33:26 compute-0 nova_compute[250018]:     <controller type='pci' index='22' model='pcie-root-port'>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <model name='pcie-root-port'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <target chassis='22' port='0x25'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <alias name='pci.22'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:     </controller>
Jan 20 15:33:26 compute-0 nova_compute[250018]:     <controller type='pci' index='23' model='pcie-root-port'>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <model name='pcie-root-port'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <target chassis='23' port='0x26'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <alias name='pci.23'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:     </controller>
Jan 20 15:33:26 compute-0 nova_compute[250018]:     <controller type='pci' index='24' model='pcie-root-port'>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <model name='pcie-root-port'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <target chassis='24' port='0x27'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <alias name='pci.24'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:     </controller>
Jan 20 15:33:26 compute-0 nova_compute[250018]:     <controller type='pci' index='25' model='pcie-root-port'>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <model name='pcie-root-port'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <target chassis='25' port='0x28'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <alias name='pci.25'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:     </controller>
Jan 20 15:33:26 compute-0 nova_compute[250018]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <model name='pcie-pci-bridge'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <alias name='pci.26'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:     </controller>
Jan 20 15:33:26 compute-0 nova_compute[250018]:     <controller type='usb' index='0' model='piix3-uhci'>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <alias name='usb'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:     </controller>
Jan 20 15:33:26 compute-0 nova_compute[250018]:     <controller type='sata' index='0'>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <alias name='ide'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:     </controller>
Jan 20 15:33:26 compute-0 nova_compute[250018]:     <interface type='ethernet'>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <mac address='fa:16:3e:e7:ba:22'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <target dev='tap3d9d0177-03'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <model type='virtio'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <driver name='vhost' rx_queue_size='512'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <mtu size='1442'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <alias name='net0'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:     </interface>
Jan 20 15:33:26 compute-0 nova_compute[250018]:     <interface type='ethernet'>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <mac address='fa:16:3e:66:b2:7c'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <target dev='tapcecb01b7-0f'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <model type='virtio'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <driver name='vhost' rx_queue_size='512'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <mtu size='1442'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <alias name='net1'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:     </interface>
Jan 20 15:33:26 compute-0 nova_compute[250018]:     <serial type='pty'>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <source path='/dev/pts/0'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <log file='/var/lib/nova/instances/00832e9b-36fa-4d41-be21-ca6b05cd493f/console.log' append='off'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <target type='isa-serial' port='0'>
Jan 20 15:33:26 compute-0 nova_compute[250018]:         <model name='isa-serial'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       </target>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <alias name='serial0'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:     </serial>
Jan 20 15:33:26 compute-0 nova_compute[250018]:     <console type='pty' tty='/dev/pts/0'>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <source path='/dev/pts/0'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <log file='/var/lib/nova/instances/00832e9b-36fa-4d41-be21-ca6b05cd493f/console.log' append='off'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <target type='serial' port='0'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <alias name='serial0'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:     </console>
Jan 20 15:33:26 compute-0 nova_compute[250018]:     <input type='tablet' bus='usb'>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <alias name='input0'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <address type='usb' bus='0' port='1'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:     </input>
Jan 20 15:33:26 compute-0 nova_compute[250018]:     <input type='mouse' bus='ps2'>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <alias name='input1'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:     </input>
Jan 20 15:33:26 compute-0 nova_compute[250018]:     <input type='keyboard' bus='ps2'>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <alias name='input2'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:     </input>
Jan 20 15:33:26 compute-0 nova_compute[250018]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <listen type='address' address='::0'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:     </graphics>
Jan 20 15:33:26 compute-0 nova_compute[250018]:     <audio id='1' type='none'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:     <video>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <model type='virtio' heads='1' primary='yes'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <alias name='video0'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:     </video>
Jan 20 15:33:26 compute-0 nova_compute[250018]:     <watchdog model='itco' action='reset'>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <alias name='watchdog0'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:     </watchdog>
Jan 20 15:33:26 compute-0 nova_compute[250018]:     <memballoon model='virtio'>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <stats period='10'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <alias name='balloon0'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:     </memballoon>
Jan 20 15:33:26 compute-0 nova_compute[250018]:     <rng model='virtio'>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <backend model='random'>/dev/urandom</backend>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <alias name='rng0'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:     </rng>
Jan 20 15:33:26 compute-0 nova_compute[250018]:   </devices>
Jan 20 15:33:26 compute-0 nova_compute[250018]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Jan 20 15:33:26 compute-0 nova_compute[250018]:     <label>system_u:system_r:svirt_t:s0:c492,c593</label>
Jan 20 15:33:26 compute-0 nova_compute[250018]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c492,c593</imagelabel>
Jan 20 15:33:26 compute-0 nova_compute[250018]:   </seclabel>
Jan 20 15:33:26 compute-0 nova_compute[250018]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Jan 20 15:33:26 compute-0 nova_compute[250018]:     <label>+107:+107</label>
Jan 20 15:33:26 compute-0 nova_compute[250018]:     <imagelabel>+107:+107</imagelabel>
Jan 20 15:33:26 compute-0 nova_compute[250018]:   </seclabel>
Jan 20 15:33:26 compute-0 nova_compute[250018]: </domain>
Jan 20 15:33:26 compute-0 nova_compute[250018]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Jan 20 15:33:26 compute-0 nova_compute[250018]: 2026-01-20 15:33:26.938 250022 INFO nova.virt.libvirt.driver [None req-d21d35fe-9687-4f04-aba5-39982cbb8549 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Successfully detached device tapcecb01b7-0f from instance 00832e9b-36fa-4d41-be21-ca6b05cd493f from the persistent domain config.
Jan 20 15:33:26 compute-0 nova_compute[250018]: 2026-01-20 15:33:26.938 250022 DEBUG nova.virt.libvirt.driver [None req-d21d35fe-9687-4f04-aba5-39982cbb8549 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] (1/8): Attempting to detach device tapcecb01b7-0f with device alias net1 from instance 00832e9b-36fa-4d41-be21-ca6b05cd493f from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Jan 20 15:33:26 compute-0 nova_compute[250018]: 2026-01-20 15:33:26.938 250022 DEBUG nova.virt.libvirt.guest [None req-d21d35fe-9687-4f04-aba5-39982cbb8549 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] detach device xml: <interface type="ethernet">
Jan 20 15:33:26 compute-0 nova_compute[250018]:   <mac address="fa:16:3e:66:b2:7c"/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:   <model type="virtio"/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:   <driver name="vhost" rx_queue_size="512"/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:   <mtu size="1442"/>
Jan 20 15:33:26 compute-0 nova_compute[250018]:   <target dev="tapcecb01b7-0f"/>
Jan 20 15:33:26 compute-0 nova_compute[250018]: </interface>
Jan 20 15:33:26 compute-0 nova_compute[250018]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 20 15:33:26 compute-0 kernel: tapcecb01b7-0f (unregistering): left promiscuous mode
Jan 20 15:33:26 compute-0 NetworkManager[48960]: <info>  [1768923206.9849] device (tapcecb01b7-0f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 15:33:26 compute-0 nova_compute[250018]: 2026-01-20 15:33:26.992 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:33:26 compute-0 ovn_controller[148666]: 2026-01-20T15:33:26Z|00768|binding|INFO|Releasing lport cecb01b7-0ffd-44bf-bc66-2bdc91b95936 from this chassis (sb_readonly=0)
Jan 20 15:33:26 compute-0 ovn_controller[148666]: 2026-01-20T15:33:26Z|00769|binding|INFO|Setting lport cecb01b7-0ffd-44bf-bc66-2bdc91b95936 down in Southbound
Jan 20 15:33:26 compute-0 ovn_controller[148666]: 2026-01-20T15:33:26Z|00770|binding|INFO|Removing iface tapcecb01b7-0f ovn-installed in OVS
Jan 20 15:33:26 compute-0 nova_compute[250018]: 2026-01-20 15:33:26.994 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:33:26 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:33:26.999 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:66:b2:7c 10.100.0.19', 'unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.19/28', 'neutron:device_id': '00832e9b-36fa-4d41-be21-ca6b05cd493f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e1610a22-2f29-4495-85e7-ab2081f73701', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3168f57421fb49bfb94b85daedd1fe7d', 'neutron:revision_number': '5', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=733f71a0-4d98-4c07-b692-f20cf2a632ed, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=cecb01b7-0ffd-44bf-bc66-2bdc91b95936) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:33:27 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:33:27.000 160071 INFO neutron.agent.ovn.metadata.agent [-] Port cecb01b7-0ffd-44bf-bc66-2bdc91b95936 in datapath e1610a22-2f29-4495-85e7-ab2081f73701 unbound from our chassis
Jan 20 15:33:27 compute-0 nova_compute[250018]: 2026-01-20 15:33:27.000 250022 DEBUG nova.virt.libvirt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Received event <DeviceRemovedEvent: 1768923207.000201, 00832e9b-36fa-4d41-be21-ca6b05cd493f => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Jan 20 15:33:27 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:33:27.001 160071 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e1610a22-2f29-4495-85e7-ab2081f73701, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 15:33:27 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:33:27.002 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[29530286-f760-4f1c-976c-e0b241dd0b6d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:33:27 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:33:27.003 160071 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e1610a22-2f29-4495-85e7-ab2081f73701 namespace which is not needed anymore
Jan 20 15:33:27 compute-0 nova_compute[250018]: 2026-01-20 15:33:27.003 250022 DEBUG nova.virt.libvirt.driver [None req-d21d35fe-9687-4f04-aba5-39982cbb8549 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Start waiting for the detach event from libvirt for device tapcecb01b7-0f with device alias net1 for instance 00832e9b-36fa-4d41-be21-ca6b05cd493f _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Jan 20 15:33:27 compute-0 nova_compute[250018]: 2026-01-20 15:33:27.003 250022 DEBUG nova.virt.libvirt.guest [None req-d21d35fe-9687-4f04-aba5-39982cbb8549 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:66:b2:7c"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapcecb01b7-0f"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Jan 20 15:33:27 compute-0 nova_compute[250018]: 2026-01-20 15:33:27.005 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:33:27 compute-0 nova_compute[250018]: 2026-01-20 15:33:27.009 250022 DEBUG nova.virt.libvirt.guest [None req-d21d35fe-9687-4f04-aba5-39982cbb8549 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:66:b2:7c"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapcecb01b7-0f"/></interface>not found in domain: <domain type='kvm' id='91'>
Jan 20 15:33:27 compute-0 nova_compute[250018]:   <name>instance-000000d0</name>
Jan 20 15:33:27 compute-0 nova_compute[250018]:   <uuid>00832e9b-36fa-4d41-be21-ca6b05cd493f</uuid>
Jan 20 15:33:27 compute-0 nova_compute[250018]:   <metadata>
Jan 20 15:33:27 compute-0 nova_compute[250018]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 15:33:27 compute-0 nova_compute[250018]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:   <nova:name>tempest-TestNetworkBasicOps-server-294660269</nova:name>
Jan 20 15:33:27 compute-0 nova_compute[250018]:   <nova:creationTime>2026-01-20 15:32:39</nova:creationTime>
Jan 20 15:33:27 compute-0 nova_compute[250018]:   <nova:flavor name="m1.nano">
Jan 20 15:33:27 compute-0 nova_compute[250018]:     <nova:memory>128</nova:memory>
Jan 20 15:33:27 compute-0 nova_compute[250018]:     <nova:disk>1</nova:disk>
Jan 20 15:33:27 compute-0 nova_compute[250018]:     <nova:swap>0</nova:swap>
Jan 20 15:33:27 compute-0 nova_compute[250018]:     <nova:ephemeral>0</nova:ephemeral>
Jan 20 15:33:27 compute-0 nova_compute[250018]:     <nova:vcpus>1</nova:vcpus>
Jan 20 15:33:27 compute-0 nova_compute[250018]:   </nova:flavor>
Jan 20 15:33:27 compute-0 nova_compute[250018]:   <nova:owner>
Jan 20 15:33:27 compute-0 nova_compute[250018]:     <nova:user uuid="5338aa65dc0e4326a66ce79053787f14">tempest-TestNetworkBasicOps-807695970-project-member</nova:user>
Jan 20 15:33:27 compute-0 nova_compute[250018]:     <nova:project uuid="3168f57421fb49bfb94b85daedd1fe7d">tempest-TestNetworkBasicOps-807695970</nova:project>
Jan 20 15:33:27 compute-0 nova_compute[250018]:   </nova:owner>
Jan 20 15:33:27 compute-0 nova_compute[250018]:   <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:   <nova:ports>
Jan 20 15:33:27 compute-0 nova_compute[250018]:     <nova:port uuid="3d9d0177-0374-4df6-b536-6a369c25c060">
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:     </nova:port>
Jan 20 15:33:27 compute-0 nova_compute[250018]:     <nova:port uuid="cecb01b7-0ffd-44bf-bc66-2bdc91b95936">
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <nova:ip type="fixed" address="10.100.0.19" ipVersion="4"/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:     </nova:port>
Jan 20 15:33:27 compute-0 nova_compute[250018]:   </nova:ports>
Jan 20 15:33:27 compute-0 nova_compute[250018]: </nova:instance>
Jan 20 15:33:27 compute-0 nova_compute[250018]:   </metadata>
Jan 20 15:33:27 compute-0 nova_compute[250018]:   <memory unit='KiB'>131072</memory>
Jan 20 15:33:27 compute-0 nova_compute[250018]:   <currentMemory unit='KiB'>131072</currentMemory>
Jan 20 15:33:27 compute-0 nova_compute[250018]:   <vcpu placement='static'>1</vcpu>
Jan 20 15:33:27 compute-0 nova_compute[250018]:   <resource>
Jan 20 15:33:27 compute-0 nova_compute[250018]:     <partition>/machine</partition>
Jan 20 15:33:27 compute-0 nova_compute[250018]:   </resource>
Jan 20 15:33:27 compute-0 nova_compute[250018]:   <sysinfo type='smbios'>
Jan 20 15:33:27 compute-0 nova_compute[250018]:     <system>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <entry name='manufacturer'>RDO</entry>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <entry name='product'>OpenStack Compute</entry>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <entry name='serial'>00832e9b-36fa-4d41-be21-ca6b05cd493f</entry>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <entry name='uuid'>00832e9b-36fa-4d41-be21-ca6b05cd493f</entry>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <entry name='family'>Virtual Machine</entry>
Jan 20 15:33:27 compute-0 nova_compute[250018]:     </system>
Jan 20 15:33:27 compute-0 nova_compute[250018]:   </sysinfo>
Jan 20 15:33:27 compute-0 nova_compute[250018]:   <os>
Jan 20 15:33:27 compute-0 nova_compute[250018]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Jan 20 15:33:27 compute-0 nova_compute[250018]:     <boot dev='hd'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:     <smbios mode='sysinfo'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:   </os>
Jan 20 15:33:27 compute-0 nova_compute[250018]:   <features>
Jan 20 15:33:27 compute-0 nova_compute[250018]:     <acpi/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:     <apic/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:     <vmcoreinfo state='on'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:   </features>
Jan 20 15:33:27 compute-0 nova_compute[250018]:   <cpu mode='custom' match='exact' check='full'>
Jan 20 15:33:27 compute-0 nova_compute[250018]:     <model fallback='forbid'>Nehalem</model>
Jan 20 15:33:27 compute-0 nova_compute[250018]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:     <feature policy='require' name='x2apic'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:     <feature policy='require' name='hypervisor'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:     <feature policy='require' name='vme'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:   </cpu>
Jan 20 15:33:27 compute-0 nova_compute[250018]:   <clock offset='utc'>
Jan 20 15:33:27 compute-0 nova_compute[250018]:     <timer name='pit' tickpolicy='delay'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:     <timer name='rtc' tickpolicy='catchup'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:     <timer name='hpet' present='no'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:   </clock>
Jan 20 15:33:27 compute-0 nova_compute[250018]:   <on_poweroff>destroy</on_poweroff>
Jan 20 15:33:27 compute-0 nova_compute[250018]:   <on_reboot>restart</on_reboot>
Jan 20 15:33:27 compute-0 nova_compute[250018]:   <on_crash>destroy</on_crash>
Jan 20 15:33:27 compute-0 nova_compute[250018]:   <devices>
Jan 20 15:33:27 compute-0 nova_compute[250018]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 20 15:33:27 compute-0 nova_compute[250018]:     <disk type='network' device='disk'>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <driver name='qemu' type='raw' cache='none'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <auth username='openstack'>
Jan 20 15:33:27 compute-0 nova_compute[250018]:         <secret type='ceph' uuid='e399cf45-e6b6-5393-99f1-75c601d3f188'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       </auth>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <source protocol='rbd' name='vms/00832e9b-36fa-4d41-be21-ca6b05cd493f_disk' index='2'>
Jan 20 15:33:27 compute-0 nova_compute[250018]:         <host name='192.168.122.100' port='6789'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:         <host name='192.168.122.102' port='6789'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:         <host name='192.168.122.101' port='6789'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       </source>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <target dev='vda' bus='virtio'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <alias name='virtio-disk0'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:     </disk>
Jan 20 15:33:27 compute-0 nova_compute[250018]:     <disk type='network' device='cdrom'>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <driver name='qemu' type='raw' cache='none'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <auth username='openstack'>
Jan 20 15:33:27 compute-0 nova_compute[250018]:         <secret type='ceph' uuid='e399cf45-e6b6-5393-99f1-75c601d3f188'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       </auth>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <source protocol='rbd' name='vms/00832e9b-36fa-4d41-be21-ca6b05cd493f_disk.config' index='1'>
Jan 20 15:33:27 compute-0 nova_compute[250018]:         <host name='192.168.122.100' port='6789'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:         <host name='192.168.122.102' port='6789'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:         <host name='192.168.122.101' port='6789'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       </source>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <target dev='sda' bus='sata'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <readonly/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <alias name='sata0-0-0'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:     </disk>
Jan 20 15:33:27 compute-0 nova_compute[250018]:     <controller type='pci' index='0' model='pcie-root'>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <alias name='pcie.0'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:     </controller>
Jan 20 15:33:27 compute-0 nova_compute[250018]:     <controller type='pci' index='1' model='pcie-root-port'>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <model name='pcie-root-port'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <target chassis='1' port='0x10'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <alias name='pci.1'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:     </controller>
Jan 20 15:33:27 compute-0 nova_compute[250018]:     <controller type='pci' index='2' model='pcie-root-port'>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <model name='pcie-root-port'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <target chassis='2' port='0x11'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <alias name='pci.2'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:     </controller>
Jan 20 15:33:27 compute-0 nova_compute[250018]:     <controller type='pci' index='3' model='pcie-root-port'>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <model name='pcie-root-port'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <target chassis='3' port='0x12'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <alias name='pci.3'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:     </controller>
Jan 20 15:33:27 compute-0 nova_compute[250018]:     <controller type='pci' index='4' model='pcie-root-port'>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <model name='pcie-root-port'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <target chassis='4' port='0x13'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <alias name='pci.4'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:     </controller>
Jan 20 15:33:27 compute-0 nova_compute[250018]:     <controller type='pci' index='5' model='pcie-root-port'>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <model name='pcie-root-port'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <target chassis='5' port='0x14'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <alias name='pci.5'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:     </controller>
Jan 20 15:33:27 compute-0 nova_compute[250018]:     <controller type='pci' index='6' model='pcie-root-port'>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <model name='pcie-root-port'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <target chassis='6' port='0x15'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <alias name='pci.6'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:     </controller>
Jan 20 15:33:27 compute-0 nova_compute[250018]:     <controller type='pci' index='7' model='pcie-root-port'>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <model name='pcie-root-port'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <target chassis='7' port='0x16'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <alias name='pci.7'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:     </controller>
Jan 20 15:33:27 compute-0 nova_compute[250018]:     <controller type='pci' index='8' model='pcie-root-port'>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <model name='pcie-root-port'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <target chassis='8' port='0x17'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <alias name='pci.8'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:     </controller>
Jan 20 15:33:27 compute-0 nova_compute[250018]:     <controller type='pci' index='9' model='pcie-root-port'>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <model name='pcie-root-port'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <target chassis='9' port='0x18'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <alias name='pci.9'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:     </controller>
Jan 20 15:33:27 compute-0 nova_compute[250018]:     <controller type='pci' index='10' model='pcie-root-port'>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <model name='pcie-root-port'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <target chassis='10' port='0x19'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <alias name='pci.10'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:     </controller>
Jan 20 15:33:27 compute-0 nova_compute[250018]:     <controller type='pci' index='11' model='pcie-root-port'>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <model name='pcie-root-port'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <target chassis='11' port='0x1a'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <alias name='pci.11'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:     </controller>
Jan 20 15:33:27 compute-0 nova_compute[250018]:     <controller type='pci' index='12' model='pcie-root-port'>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <model name='pcie-root-port'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <target chassis='12' port='0x1b'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <alias name='pci.12'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:     </controller>
Jan 20 15:33:27 compute-0 nova_compute[250018]:     <controller type='pci' index='13' model='pcie-root-port'>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <model name='pcie-root-port'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <target chassis='13' port='0x1c'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <alias name='pci.13'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:     </controller>
Jan 20 15:33:27 compute-0 nova_compute[250018]:     <controller type='pci' index='14' model='pcie-root-port'>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <model name='pcie-root-port'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <target chassis='14' port='0x1d'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <alias name='pci.14'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:     </controller>
Jan 20 15:33:27 compute-0 nova_compute[250018]:     <controller type='pci' index='15' model='pcie-root-port'>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <model name='pcie-root-port'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <target chassis='15' port='0x1e'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <alias name='pci.15'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:     </controller>
Jan 20 15:33:27 compute-0 nova_compute[250018]:     <controller type='pci' index='16' model='pcie-root-port'>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <model name='pcie-root-port'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <target chassis='16' port='0x1f'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <alias name='pci.16'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:     </controller>
Jan 20 15:33:27 compute-0 nova_compute[250018]:     <controller type='pci' index='17' model='pcie-root-port'>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <model name='pcie-root-port'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <target chassis='17' port='0x20'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <alias name='pci.17'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:     </controller>
Jan 20 15:33:27 compute-0 nova_compute[250018]:     <controller type='pci' index='18' model='pcie-root-port'>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <model name='pcie-root-port'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <target chassis='18' port='0x21'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <alias name='pci.18'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:     </controller>
Jan 20 15:33:27 compute-0 nova_compute[250018]:     <controller type='pci' index='19' model='pcie-root-port'>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <model name='pcie-root-port'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <target chassis='19' port='0x22'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <alias name='pci.19'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:     </controller>
Jan 20 15:33:27 compute-0 nova_compute[250018]:     <controller type='pci' index='20' model='pcie-root-port'>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <model name='pcie-root-port'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <target chassis='20' port='0x23'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <alias name='pci.20'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:     </controller>
Jan 20 15:33:27 compute-0 nova_compute[250018]:     <controller type='pci' index='21' model='pcie-root-port'>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <model name='pcie-root-port'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <target chassis='21' port='0x24'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <alias name='pci.21'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:     </controller>
Jan 20 15:33:27 compute-0 nova_compute[250018]:     <controller type='pci' index='22' model='pcie-root-port'>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <model name='pcie-root-port'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <target chassis='22' port='0x25'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <alias name='pci.22'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:     </controller>
Jan 20 15:33:27 compute-0 nova_compute[250018]:     <controller type='pci' index='23' model='pcie-root-port'>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <model name='pcie-root-port'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <target chassis='23' port='0x26'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <alias name='pci.23'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:     </controller>
Jan 20 15:33:27 compute-0 nova_compute[250018]:     <controller type='pci' index='24' model='pcie-root-port'>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <model name='pcie-root-port'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <target chassis='24' port='0x27'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <alias name='pci.24'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:     </controller>
Jan 20 15:33:27 compute-0 nova_compute[250018]:     <controller type='pci' index='25' model='pcie-root-port'>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <model name='pcie-root-port'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <target chassis='25' port='0x28'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <alias name='pci.25'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:     </controller>
Jan 20 15:33:27 compute-0 nova_compute[250018]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <model name='pcie-pci-bridge'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <alias name='pci.26'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:     </controller>
Jan 20 15:33:27 compute-0 nova_compute[250018]:     <controller type='usb' index='0' model='piix3-uhci'>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <alias name='usb'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:     </controller>
Jan 20 15:33:27 compute-0 nova_compute[250018]:     <controller type='sata' index='0'>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <alias name='ide'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:     </controller>
Jan 20 15:33:27 compute-0 nova_compute[250018]:     <interface type='ethernet'>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <mac address='fa:16:3e:e7:ba:22'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <target dev='tap3d9d0177-03'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <model type='virtio'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <driver name='vhost' rx_queue_size='512'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <mtu size='1442'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <alias name='net0'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:     </interface>
Jan 20 15:33:27 compute-0 nova_compute[250018]:     <serial type='pty'>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <source path='/dev/pts/0'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <log file='/var/lib/nova/instances/00832e9b-36fa-4d41-be21-ca6b05cd493f/console.log' append='off'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <target type='isa-serial' port='0'>
Jan 20 15:33:27 compute-0 nova_compute[250018]:         <model name='isa-serial'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       </target>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <alias name='serial0'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:     </serial>
Jan 20 15:33:27 compute-0 nova_compute[250018]:     <console type='pty' tty='/dev/pts/0'>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <source path='/dev/pts/0'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <log file='/var/lib/nova/instances/00832e9b-36fa-4d41-be21-ca6b05cd493f/console.log' append='off'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <target type='serial' port='0'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <alias name='serial0'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:     </console>
Jan 20 15:33:27 compute-0 nova_compute[250018]:     <input type='tablet' bus='usb'>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <alias name='input0'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <address type='usb' bus='0' port='1'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:     </input>
Jan 20 15:33:27 compute-0 nova_compute[250018]:     <input type='mouse' bus='ps2'>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <alias name='input1'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:     </input>
Jan 20 15:33:27 compute-0 nova_compute[250018]:     <input type='keyboard' bus='ps2'>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <alias name='input2'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:     </input>
Jan 20 15:33:27 compute-0 nova_compute[250018]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <listen type='address' address='::0'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:     </graphics>
Jan 20 15:33:27 compute-0 nova_compute[250018]:     <audio id='1' type='none'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:     <video>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <model type='virtio' heads='1' primary='yes'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <alias name='video0'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:     </video>
Jan 20 15:33:27 compute-0 nova_compute[250018]:     <watchdog model='itco' action='reset'>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <alias name='watchdog0'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:     </watchdog>
Jan 20 15:33:27 compute-0 nova_compute[250018]:     <memballoon model='virtio'>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <stats period='10'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <alias name='balloon0'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:     </memballoon>
Jan 20 15:33:27 compute-0 nova_compute[250018]:     <rng model='virtio'>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <backend model='random'>/dev/urandom</backend>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <alias name='rng0'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:     </rng>
Jan 20 15:33:27 compute-0 nova_compute[250018]:   </devices>
Jan 20 15:33:27 compute-0 nova_compute[250018]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Jan 20 15:33:27 compute-0 nova_compute[250018]:     <label>system_u:system_r:svirt_t:s0:c492,c593</label>
Jan 20 15:33:27 compute-0 nova_compute[250018]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c492,c593</imagelabel>
Jan 20 15:33:27 compute-0 nova_compute[250018]:   </seclabel>
Jan 20 15:33:27 compute-0 nova_compute[250018]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Jan 20 15:33:27 compute-0 nova_compute[250018]:     <label>+107:+107</label>
Jan 20 15:33:27 compute-0 nova_compute[250018]:     <imagelabel>+107:+107</imagelabel>
Jan 20 15:33:27 compute-0 nova_compute[250018]:   </seclabel>
Jan 20 15:33:27 compute-0 nova_compute[250018]: </domain>
Jan 20 15:33:27 compute-0 nova_compute[250018]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Jan 20 15:33:27 compute-0 nova_compute[250018]: 2026-01-20 15:33:27.009 250022 INFO nova.virt.libvirt.driver [None req-d21d35fe-9687-4f04-aba5-39982cbb8549 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Successfully detached device tapcecb01b7-0f from instance 00832e9b-36fa-4d41-be21-ca6b05cd493f from the live domain config.
Jan 20 15:33:27 compute-0 nova_compute[250018]: 2026-01-20 15:33:27.009 250022 DEBUG nova.virt.libvirt.vif [None req-d21d35fe-9687-4f04-aba5-39982cbb8549 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T15:31:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-294660269',display_name='tempest-TestNetworkBasicOps-server-294660269',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-294660269',id=208,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBM/wcy1u/OW984noISwEOwb5LNHq9kMm6H4gDYMOeRNg80CMD0xPaXGSJjRjqJjcmlGZ4ls4SDDoXkG2XEqy3wx1zeFMwT8eQgW0fV48l9Sd8ax0F4mvMlYvHLkKrxuyhw==',key_name='tempest-TestNetworkBasicOps-525328153',keypairs=<?>,launch_index=0,launched_at=2026-01-20T15:32:08Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3168f57421fb49bfb94b85daedd1fe7d',ramdisk_id='',reservation_id='r-yd9tjeqi',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-807695970',owner_user_name='tempest-TestNetworkBasicOps-807695970-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T15:32:08Z,user_data=None,user_id='5338aa65dc0e4326a66ce79053787f14',uuid=00832e9b-36fa-4d41-be21-ca6b05cd493f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "cecb01b7-0ffd-44bf-bc66-2bdc91b95936", "address": "fa:16:3e:66:b2:7c", "network": {"id": "e1610a22-2f29-4495-85e7-ab2081f73701", "bridge": "br-int", "label": "tempest-network-smoke--313389391", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.19", "type": "fixed", "version": 4, "meta": 
{}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcecb01b7-0f", "ovs_interfaceid": "cecb01b7-0ffd-44bf-bc66-2bdc91b95936", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 15:33:27 compute-0 nova_compute[250018]: 2026-01-20 15:33:27.010 250022 DEBUG nova.network.os_vif_util [None req-d21d35fe-9687-4f04-aba5-39982cbb8549 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Converting VIF {"id": "cecb01b7-0ffd-44bf-bc66-2bdc91b95936", "address": "fa:16:3e:66:b2:7c", "network": {"id": "e1610a22-2f29-4495-85e7-ab2081f73701", "bridge": "br-int", "label": "tempest-network-smoke--313389391", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.19", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcecb01b7-0f", "ovs_interfaceid": "cecb01b7-0ffd-44bf-bc66-2bdc91b95936", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:33:27 compute-0 nova_compute[250018]: 2026-01-20 15:33:27.010 250022 DEBUG nova.network.os_vif_util [None req-d21d35fe-9687-4f04-aba5-39982cbb8549 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:66:b2:7c,bridge_name='br-int',has_traffic_filtering=True,id=cecb01b7-0ffd-44bf-bc66-2bdc91b95936,network=Network(e1610a22-2f29-4495-85e7-ab2081f73701),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcecb01b7-0f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:33:27 compute-0 nova_compute[250018]: 2026-01-20 15:33:27.011 250022 DEBUG os_vif [None req-d21d35fe-9687-4f04-aba5-39982cbb8549 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:66:b2:7c,bridge_name='br-int',has_traffic_filtering=True,id=cecb01b7-0ffd-44bf-bc66-2bdc91b95936,network=Network(e1610a22-2f29-4495-85e7-ab2081f73701),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcecb01b7-0f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 15:33:27 compute-0 nova_compute[250018]: 2026-01-20 15:33:27.012 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:33:27 compute-0 nova_compute[250018]: 2026-01-20 15:33:27.012 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcecb01b7-0f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:33:27 compute-0 nova_compute[250018]: 2026-01-20 15:33:27.014 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:33:27 compute-0 nova_compute[250018]: 2026-01-20 15:33:27.016 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:33:27 compute-0 nova_compute[250018]: 2026-01-20 15:33:27.018 250022 INFO os_vif [None req-d21d35fe-9687-4f04-aba5-39982cbb8549 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:66:b2:7c,bridge_name='br-int',has_traffic_filtering=True,id=cecb01b7-0ffd-44bf-bc66-2bdc91b95936,network=Network(e1610a22-2f29-4495-85e7-ab2081f73701),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcecb01b7-0f')
Jan 20 15:33:27 compute-0 nova_compute[250018]: 2026-01-20 15:33:27.019 250022 DEBUG nova.virt.libvirt.guest [None req-d21d35fe-9687-4f04-aba5-39982cbb8549 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 15:33:27 compute-0 nova_compute[250018]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:   <nova:name>tempest-TestNetworkBasicOps-server-294660269</nova:name>
Jan 20 15:33:27 compute-0 nova_compute[250018]:   <nova:creationTime>2026-01-20 15:33:27</nova:creationTime>
Jan 20 15:33:27 compute-0 nova_compute[250018]:   <nova:flavor name="m1.nano">
Jan 20 15:33:27 compute-0 nova_compute[250018]:     <nova:memory>128</nova:memory>
Jan 20 15:33:27 compute-0 nova_compute[250018]:     <nova:disk>1</nova:disk>
Jan 20 15:33:27 compute-0 nova_compute[250018]:     <nova:swap>0</nova:swap>
Jan 20 15:33:27 compute-0 nova_compute[250018]:     <nova:ephemeral>0</nova:ephemeral>
Jan 20 15:33:27 compute-0 nova_compute[250018]:     <nova:vcpus>1</nova:vcpus>
Jan 20 15:33:27 compute-0 nova_compute[250018]:   </nova:flavor>
Jan 20 15:33:27 compute-0 nova_compute[250018]:   <nova:owner>
Jan 20 15:33:27 compute-0 nova_compute[250018]:     <nova:user uuid="5338aa65dc0e4326a66ce79053787f14">tempest-TestNetworkBasicOps-807695970-project-member</nova:user>
Jan 20 15:33:27 compute-0 nova_compute[250018]:     <nova:project uuid="3168f57421fb49bfb94b85daedd1fe7d">tempest-TestNetworkBasicOps-807695970</nova:project>
Jan 20 15:33:27 compute-0 nova_compute[250018]:   </nova:owner>
Jan 20 15:33:27 compute-0 nova_compute[250018]:   <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:   <nova:ports>
Jan 20 15:33:27 compute-0 nova_compute[250018]:     <nova:port uuid="3d9d0177-0374-4df6-b536-6a369c25c060">
Jan 20 15:33:27 compute-0 nova_compute[250018]:       <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 20 15:33:27 compute-0 nova_compute[250018]:     </nova:port>
Jan 20 15:33:27 compute-0 nova_compute[250018]:   </nova:ports>
Jan 20 15:33:27 compute-0 nova_compute[250018]: </nova:instance>
Jan 20 15:33:27 compute-0 nova_compute[250018]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Jan 20 15:33:27 compute-0 nova_compute[250018]: 2026-01-20 15:33:27.045 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:33:27 compute-0 nova_compute[250018]: 2026-01-20 15:33:27.046 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:33:27 compute-0 neutron-haproxy-ovnmeta-e1610a22-2f29-4495-85e7-ab2081f73701[385664]: [NOTICE]   (385668) : haproxy version is 2.8.14-c23fe91
Jan 20 15:33:27 compute-0 neutron-haproxy-ovnmeta-e1610a22-2f29-4495-85e7-ab2081f73701[385664]: [NOTICE]   (385668) : path to executable is /usr/sbin/haproxy
Jan 20 15:33:27 compute-0 neutron-haproxy-ovnmeta-e1610a22-2f29-4495-85e7-ab2081f73701[385664]: [WARNING]  (385668) : Exiting Master process...
Jan 20 15:33:27 compute-0 neutron-haproxy-ovnmeta-e1610a22-2f29-4495-85e7-ab2081f73701[385664]: [WARNING]  (385668) : Exiting Master process...
Jan 20 15:33:27 compute-0 neutron-haproxy-ovnmeta-e1610a22-2f29-4495-85e7-ab2081f73701[385664]: [ALERT]    (385668) : Current worker (385670) exited with code 143 (Terminated)
Jan 20 15:33:27 compute-0 neutron-haproxy-ovnmeta-e1610a22-2f29-4495-85e7-ab2081f73701[385664]: [WARNING]  (385668) : All workers exited. Exiting... (0)
Jan 20 15:33:27 compute-0 systemd[1]: libpod-c490b071800694799e701ab59fb033d2bbda9be37bc2aae6edbc16605ff41849.scope: Deactivated successfully.
Jan 20 15:33:27 compute-0 podman[387195]: 2026-01-20 15:33:27.163955606 +0000 UTC m=+0.046347664 container died c490b071800694799e701ab59fb033d2bbda9be37bc2aae6edbc16605ff41849 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e1610a22-2f29-4495-85e7-ab2081f73701, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 20 15:33:27 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c490b071800694799e701ab59fb033d2bbda9be37bc2aae6edbc16605ff41849-userdata-shm.mount: Deactivated successfully.
Jan 20 15:33:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-78e66f18d24985eed188a470d8a263fcf112701d8f54051261cc1a4d4c1a6d94-merged.mount: Deactivated successfully.
Jan 20 15:33:27 compute-0 podman[387195]: 2026-01-20 15:33:27.20592953 +0000 UTC m=+0.088321588 container cleanup c490b071800694799e701ab59fb033d2bbda9be37bc2aae6edbc16605ff41849 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e1610a22-2f29-4495-85e7-ab2081f73701, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 20 15:33:27 compute-0 systemd[1]: libpod-conmon-c490b071800694799e701ab59fb033d2bbda9be37bc2aae6edbc16605ff41849.scope: Deactivated successfully.
Jan 20 15:33:27 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3338: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 26 KiB/s wr, 29 op/s
Jan 20 15:33:27 compute-0 podman[387227]: 2026-01-20 15:33:27.260644561 +0000 UTC m=+0.036351441 container remove c490b071800694799e701ab59fb033d2bbda9be37bc2aae6edbc16605ff41849 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e1610a22-2f29-4495-85e7-ab2081f73701, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2)
Jan 20 15:33:27 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:33:27.266 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[5bd01e01-93a9-4552-a46c-b18f3685fabe]: (4, ('Tue Jan 20 03:33:27 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-e1610a22-2f29-4495-85e7-ab2081f73701 (c490b071800694799e701ab59fb033d2bbda9be37bc2aae6edbc16605ff41849)\nc490b071800694799e701ab59fb033d2bbda9be37bc2aae6edbc16605ff41849\nTue Jan 20 03:33:27 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-e1610a22-2f29-4495-85e7-ab2081f73701 (c490b071800694799e701ab59fb033d2bbda9be37bc2aae6edbc16605ff41849)\nc490b071800694799e701ab59fb033d2bbda9be37bc2aae6edbc16605ff41849\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:33:27 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:33:27.268 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[e818b254-607d-4b01-a832-c56691fd9484]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:33:27 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:33:27.269 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape1610a22-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:33:27 compute-0 kernel: tape1610a22-20: left promiscuous mode
Jan 20 15:33:27 compute-0 nova_compute[250018]: 2026-01-20 15:33:27.273 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:33:27 compute-0 nova_compute[250018]: 2026-01-20 15:33:27.284 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:33:27 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:33:27.287 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[2ad457cf-30aa-4cf7-a6fb-09275a1b2b53]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:33:27 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:33:27.301 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[71a04f98-d042-4b5b-aec3-862873b6507f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:33:27 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:33:27.302 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[dd89e155-2363-46f6-b005-588df29a1d40]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:33:27 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:33:27.317 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[35e5c942-bc32-452b-871c-dc46515a8b17]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 910065, 'reachable_time': 22826, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 387242, 'error': None, 'target': 'ovnmeta-e1610a22-2f29-4495-85e7-ab2081f73701', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:33:27 compute-0 systemd[1]: run-netns-ovnmeta\x2de1610a22\x2d2f29\x2d4495\x2d85e7\x2dab2081f73701.mount: Deactivated successfully.
Jan 20 15:33:27 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:33:27.321 160517 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e1610a22-2f29-4495-85e7-ab2081f73701 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 15:33:27 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:33:27.321 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[30b660c6-0b06-4ca1-b673-0c2407adeddd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:33:27 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:33:27 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:33:27 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:33:27.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:33:27 compute-0 nova_compute[250018]: 2026-01-20 15:33:27.597 250022 DEBUG nova.compute.manager [req-817acfd3-8f15-4e50-adc1-a5ddce2553dd req-17ead75d-ed2d-4680-97c4-1370fd0bb790 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 00832e9b-36fa-4d41-be21-ca6b05cd493f] Received event network-vif-unplugged-cecb01b7-0ffd-44bf-bc66-2bdc91b95936 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:33:27 compute-0 nova_compute[250018]: 2026-01-20 15:33:27.597 250022 DEBUG oslo_concurrency.lockutils [req-817acfd3-8f15-4e50-adc1-a5ddce2553dd req-17ead75d-ed2d-4680-97c4-1370fd0bb790 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "00832e9b-36fa-4d41-be21-ca6b05cd493f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:33:27 compute-0 nova_compute[250018]: 2026-01-20 15:33:27.598 250022 DEBUG oslo_concurrency.lockutils [req-817acfd3-8f15-4e50-adc1-a5ddce2553dd req-17ead75d-ed2d-4680-97c4-1370fd0bb790 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "00832e9b-36fa-4d41-be21-ca6b05cd493f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:33:27 compute-0 nova_compute[250018]: 2026-01-20 15:33:27.598 250022 DEBUG oslo_concurrency.lockutils [req-817acfd3-8f15-4e50-adc1-a5ddce2553dd req-17ead75d-ed2d-4680-97c4-1370fd0bb790 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "00832e9b-36fa-4d41-be21-ca6b05cd493f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:33:27 compute-0 nova_compute[250018]: 2026-01-20 15:33:27.598 250022 DEBUG nova.compute.manager [req-817acfd3-8f15-4e50-adc1-a5ddce2553dd req-17ead75d-ed2d-4680-97c4-1370fd0bb790 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 00832e9b-36fa-4d41-be21-ca6b05cd493f] No waiting events found dispatching network-vif-unplugged-cecb01b7-0ffd-44bf-bc66-2bdc91b95936 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:33:27 compute-0 nova_compute[250018]: 2026-01-20 15:33:27.598 250022 WARNING nova.compute.manager [req-817acfd3-8f15-4e50-adc1-a5ddce2553dd req-17ead75d-ed2d-4680-97c4-1370fd0bb790 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 00832e9b-36fa-4d41-be21-ca6b05cd493f] Received unexpected event network-vif-unplugged-cecb01b7-0ffd-44bf-bc66-2bdc91b95936 for instance with vm_state active and task_state None.
Jan 20 15:33:27 compute-0 sshd-session[387243]: banner exchange: Connection from 3.134.148.59 port 45790: invalid format
Jan 20 15:33:27 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:33:27.747 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=79, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=78) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:33:27 compute-0 nova_compute[250018]: 2026-01-20 15:33:27.748 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:33:27 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:33:27.748 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 15:33:27 compute-0 nova_compute[250018]: 2026-01-20 15:33:27.975 250022 DEBUG oslo_concurrency.lockutils [None req-d21d35fe-9687-4f04-aba5-39982cbb8549 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Acquiring lock "refresh_cache-00832e9b-36fa-4d41-be21-ca6b05cd493f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:33:27 compute-0 nova_compute[250018]: 2026-01-20 15:33:27.976 250022 DEBUG oslo_concurrency.lockutils [None req-d21d35fe-9687-4f04-aba5-39982cbb8549 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Acquired lock "refresh_cache-00832e9b-36fa-4d41-be21-ca6b05cd493f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:33:27 compute-0 nova_compute[250018]: 2026-01-20 15:33:27.976 250022 DEBUG nova.network.neutron [None req-d21d35fe-9687-4f04-aba5-39982cbb8549 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 00832e9b-36fa-4d41-be21-ca6b05cd493f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 15:33:28 compute-0 nova_compute[250018]: 2026-01-20 15:33:28.066 250022 DEBUG nova.compute.manager [req-ee09af10-19f6-435f-83e7-bb7be89efe28 req-88dd0185-a524-4d9b-b10a-afcd7b81cb64 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 00832e9b-36fa-4d41-be21-ca6b05cd493f] Received event network-vif-deleted-cecb01b7-0ffd-44bf-bc66-2bdc91b95936 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:33:28 compute-0 nova_compute[250018]: 2026-01-20 15:33:28.066 250022 INFO nova.compute.manager [req-ee09af10-19f6-435f-83e7-bb7be89efe28 req-88dd0185-a524-4d9b-b10a-afcd7b81cb64 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 00832e9b-36fa-4d41-be21-ca6b05cd493f] Neutron deleted interface cecb01b7-0ffd-44bf-bc66-2bdc91b95936; detaching it from the instance and deleting it from the info cache
Jan 20 15:33:28 compute-0 nova_compute[250018]: 2026-01-20 15:33:28.067 250022 DEBUG nova.network.neutron [req-ee09af10-19f6-435f-83e7-bb7be89efe28 req-88dd0185-a524-4d9b-b10a-afcd7b81cb64 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 00832e9b-36fa-4d41-be21-ca6b05cd493f] Updating instance_info_cache with network_info: [{"id": "3d9d0177-0374-4df6-b536-6a369c25c060", "address": "fa:16:3e:e7:ba:22", "network": {"id": "2ec0e47d-339a-4483-a67a-b500a294d021", "bridge": "br-int", "label": "tempest-network-smoke--347448443", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.220", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3d9d0177-03", "ovs_interfaceid": "3d9d0177-0374-4df6-b536-6a369c25c060", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:33:28 compute-0 nova_compute[250018]: 2026-01-20 15:33:28.118 250022 DEBUG nova.objects.instance [req-ee09af10-19f6-435f-83e7-bb7be89efe28 req-88dd0185-a524-4d9b-b10a-afcd7b81cb64 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lazy-loading 'system_metadata' on Instance uuid 00832e9b-36fa-4d41-be21-ca6b05cd493f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:33:28 compute-0 nova_compute[250018]: 2026-01-20 15:33:28.156 250022 DEBUG nova.objects.instance [req-ee09af10-19f6-435f-83e7-bb7be89efe28 req-88dd0185-a524-4d9b-b10a-afcd7b81cb64 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lazy-loading 'flavor' on Instance uuid 00832e9b-36fa-4d41-be21-ca6b05cd493f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:33:28 compute-0 nova_compute[250018]: 2026-01-20 15:33:28.176 250022 DEBUG nova.virt.libvirt.vif [req-ee09af10-19f6-435f-83e7-bb7be89efe28 req-88dd0185-a524-4d9b-b10a-afcd7b81cb64 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T15:31:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-294660269',display_name='tempest-TestNetworkBasicOps-server-294660269',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-294660269',id=208,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBM/wcy1u/OW984noISwEOwb5LNHq9kMm6H4gDYMOeRNg80CMD0xPaXGSJjRjqJjcmlGZ4ls4SDDoXkG2XEqy3wx1zeFMwT8eQgW0fV48l9Sd8ax0F4mvMlYvHLkKrxuyhw==',key_name='tempest-TestNetworkBasicOps-525328153',keypairs=<?>,launch_index=0,launched_at=2026-01-20T15:32:08Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3168f57421fb49bfb94b85daedd1fe7d',ramdisk_id='',reservation_id='r-yd9tjeqi',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-807695970',owner_user_name='tempest-TestNetworkBasicOps-807695970-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T15:32:08Z,user_data=None,user_id='5338aa65dc0e4326a66ce79053787f14',uuid=00832e9b-36fa-4d41-be21-ca6b05cd493f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "cecb01b7-0ffd-44bf-bc66-2bdc91b95936", "address": "fa:16:3e:66:b2:7c", "network": {"id": "e1610a22-2f29-4495-85e7-ab2081f73701", "bridge": "br-int", "label": "tempest-network-smoke--313389391", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.19", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcecb01b7-0f", "ovs_interfaceid": "cecb01b7-0ffd-44bf-bc66-2bdc91b95936", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 15:33:28 compute-0 nova_compute[250018]: 2026-01-20 15:33:28.176 250022 DEBUG nova.network.os_vif_util [req-ee09af10-19f6-435f-83e7-bb7be89efe28 req-88dd0185-a524-4d9b-b10a-afcd7b81cb64 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Converting VIF {"id": "cecb01b7-0ffd-44bf-bc66-2bdc91b95936", "address": "fa:16:3e:66:b2:7c", "network": {"id": "e1610a22-2f29-4495-85e7-ab2081f73701", "bridge": "br-int", "label": "tempest-network-smoke--313389391", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.19", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcecb01b7-0f", "ovs_interfaceid": "cecb01b7-0ffd-44bf-bc66-2bdc91b95936", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:33:28 compute-0 nova_compute[250018]: 2026-01-20 15:33:28.177 250022 DEBUG nova.network.os_vif_util [req-ee09af10-19f6-435f-83e7-bb7be89efe28 req-88dd0185-a524-4d9b-b10a-afcd7b81cb64 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:66:b2:7c,bridge_name='br-int',has_traffic_filtering=True,id=cecb01b7-0ffd-44bf-bc66-2bdc91b95936,network=Network(e1610a22-2f29-4495-85e7-ab2081f73701),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcecb01b7-0f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:33:28 compute-0 nova_compute[250018]: 2026-01-20 15:33:28.179 250022 DEBUG nova.virt.libvirt.guest [req-ee09af10-19f6-435f-83e7-bb7be89efe28 req-88dd0185-a524-4d9b-b10a-afcd7b81cb64 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:66:b2:7c"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapcecb01b7-0f"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Jan 20 15:33:28 compute-0 nova_compute[250018]: 2026-01-20 15:33:28.182 250022 DEBUG nova.virt.libvirt.guest [req-ee09af10-19f6-435f-83e7-bb7be89efe28 req-88dd0185-a524-4d9b-b10a-afcd7b81cb64 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:66:b2:7c"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapcecb01b7-0f"/></interface>not found in domain: <domain type='kvm' id='91'>
Jan 20 15:33:28 compute-0 nova_compute[250018]:   <name>instance-000000d0</name>
Jan 20 15:33:28 compute-0 nova_compute[250018]:   <uuid>00832e9b-36fa-4d41-be21-ca6b05cd493f</uuid>
Jan 20 15:33:28 compute-0 nova_compute[250018]:   <metadata>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 15:33:28 compute-0 nova_compute[250018]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:   <nova:name>tempest-TestNetworkBasicOps-server-294660269</nova:name>
Jan 20 15:33:28 compute-0 nova_compute[250018]:   <nova:creationTime>2026-01-20 15:33:27</nova:creationTime>
Jan 20 15:33:28 compute-0 nova_compute[250018]:   <nova:flavor name="m1.nano">
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <nova:memory>128</nova:memory>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <nova:disk>1</nova:disk>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <nova:swap>0</nova:swap>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <nova:ephemeral>0</nova:ephemeral>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <nova:vcpus>1</nova:vcpus>
Jan 20 15:33:28 compute-0 nova_compute[250018]:   </nova:flavor>
Jan 20 15:33:28 compute-0 nova_compute[250018]:   <nova:owner>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <nova:user uuid="5338aa65dc0e4326a66ce79053787f14">tempest-TestNetworkBasicOps-807695970-project-member</nova:user>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <nova:project uuid="3168f57421fb49bfb94b85daedd1fe7d">tempest-TestNetworkBasicOps-807695970</nova:project>
Jan 20 15:33:28 compute-0 nova_compute[250018]:   </nova:owner>
Jan 20 15:33:28 compute-0 nova_compute[250018]:   <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:   <nova:ports>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <nova:port uuid="3d9d0177-0374-4df6-b536-6a369c25c060">
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     </nova:port>
Jan 20 15:33:28 compute-0 nova_compute[250018]:   </nova:ports>
Jan 20 15:33:28 compute-0 nova_compute[250018]: </nova:instance>
Jan 20 15:33:28 compute-0 nova_compute[250018]:   </metadata>
Jan 20 15:33:28 compute-0 nova_compute[250018]:   <memory unit='KiB'>131072</memory>
Jan 20 15:33:28 compute-0 nova_compute[250018]:   <currentMemory unit='KiB'>131072</currentMemory>
Jan 20 15:33:28 compute-0 nova_compute[250018]:   <vcpu placement='static'>1</vcpu>
Jan 20 15:33:28 compute-0 nova_compute[250018]:   <resource>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <partition>/machine</partition>
Jan 20 15:33:28 compute-0 nova_compute[250018]:   </resource>
Jan 20 15:33:28 compute-0 nova_compute[250018]:   <sysinfo type='smbios'>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <system>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <entry name='manufacturer'>RDO</entry>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <entry name='product'>OpenStack Compute</entry>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <entry name='serial'>00832e9b-36fa-4d41-be21-ca6b05cd493f</entry>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <entry name='uuid'>00832e9b-36fa-4d41-be21-ca6b05cd493f</entry>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <entry name='family'>Virtual Machine</entry>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     </system>
Jan 20 15:33:28 compute-0 nova_compute[250018]:   </sysinfo>
Jan 20 15:33:28 compute-0 nova_compute[250018]:   <os>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <boot dev='hd'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <smbios mode='sysinfo'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:   </os>
Jan 20 15:33:28 compute-0 nova_compute[250018]:   <features>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <acpi/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <apic/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <vmcoreinfo state='on'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:   </features>
Jan 20 15:33:28 compute-0 nova_compute[250018]:   <cpu mode='custom' match='exact' check='full'>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <model fallback='forbid'>Nehalem</model>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <feature policy='require' name='x2apic'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <feature policy='require' name='hypervisor'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <feature policy='require' name='vme'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:   </cpu>
Jan 20 15:33:28 compute-0 nova_compute[250018]:   <clock offset='utc'>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <timer name='pit' tickpolicy='delay'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <timer name='rtc' tickpolicy='catchup'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <timer name='hpet' present='no'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:   </clock>
Jan 20 15:33:28 compute-0 nova_compute[250018]:   <on_poweroff>destroy</on_poweroff>
Jan 20 15:33:28 compute-0 nova_compute[250018]:   <on_reboot>restart</on_reboot>
Jan 20 15:33:28 compute-0 nova_compute[250018]:   <on_crash>destroy</on_crash>
Jan 20 15:33:28 compute-0 nova_compute[250018]:   <devices>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <disk type='network' device='disk'>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <driver name='qemu' type='raw' cache='none'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <auth username='openstack'>
Jan 20 15:33:28 compute-0 nova_compute[250018]:         <secret type='ceph' uuid='e399cf45-e6b6-5393-99f1-75c601d3f188'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       </auth>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <source protocol='rbd' name='vms/00832e9b-36fa-4d41-be21-ca6b05cd493f_disk' index='2'>
Jan 20 15:33:28 compute-0 nova_compute[250018]:         <host name='192.168.122.100' port='6789'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:         <host name='192.168.122.102' port='6789'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:         <host name='192.168.122.101' port='6789'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       </source>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <target dev='vda' bus='virtio'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <alias name='virtio-disk0'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     </disk>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <disk type='network' device='cdrom'>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <driver name='qemu' type='raw' cache='none'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <auth username='openstack'>
Jan 20 15:33:28 compute-0 nova_compute[250018]:         <secret type='ceph' uuid='e399cf45-e6b6-5393-99f1-75c601d3f188'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       </auth>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <source protocol='rbd' name='vms/00832e9b-36fa-4d41-be21-ca6b05cd493f_disk.config' index='1'>
Jan 20 15:33:28 compute-0 nova_compute[250018]:         <host name='192.168.122.100' port='6789'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:         <host name='192.168.122.102' port='6789'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:         <host name='192.168.122.101' port='6789'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       </source>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <target dev='sda' bus='sata'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <readonly/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <alias name='sata0-0-0'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     </disk>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <controller type='pci' index='0' model='pcie-root'>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <alias name='pcie.0'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     </controller>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <controller type='pci' index='1' model='pcie-root-port'>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <model name='pcie-root-port'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <target chassis='1' port='0x10'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <alias name='pci.1'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     </controller>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <controller type='pci' index='2' model='pcie-root-port'>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <model name='pcie-root-port'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <target chassis='2' port='0x11'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <alias name='pci.2'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     </controller>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <controller type='pci' index='3' model='pcie-root-port'>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <model name='pcie-root-port'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <target chassis='3' port='0x12'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <alias name='pci.3'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     </controller>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <controller type='pci' index='4' model='pcie-root-port'>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <model name='pcie-root-port'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <target chassis='4' port='0x13'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <alias name='pci.4'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     </controller>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <controller type='pci' index='5' model='pcie-root-port'>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <model name='pcie-root-port'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <target chassis='5' port='0x14'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <alias name='pci.5'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     </controller>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <controller type='pci' index='6' model='pcie-root-port'>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <model name='pcie-root-port'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <target chassis='6' port='0x15'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <alias name='pci.6'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     </controller>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <controller type='pci' index='7' model='pcie-root-port'>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <model name='pcie-root-port'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <target chassis='7' port='0x16'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <alias name='pci.7'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     </controller>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <controller type='pci' index='8' model='pcie-root-port'>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <model name='pcie-root-port'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <target chassis='8' port='0x17'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <alias name='pci.8'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     </controller>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <controller type='pci' index='9' model='pcie-root-port'>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <model name='pcie-root-port'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <target chassis='9' port='0x18'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <alias name='pci.9'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     </controller>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <controller type='pci' index='10' model='pcie-root-port'>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <model name='pcie-root-port'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <target chassis='10' port='0x19'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <alias name='pci.10'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     </controller>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <controller type='pci' index='11' model='pcie-root-port'>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <model name='pcie-root-port'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <target chassis='11' port='0x1a'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <alias name='pci.11'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     </controller>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <controller type='pci' index='12' model='pcie-root-port'>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <model name='pcie-root-port'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <target chassis='12' port='0x1b'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <alias name='pci.12'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     </controller>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <controller type='pci' index='13' model='pcie-root-port'>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <model name='pcie-root-port'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <target chassis='13' port='0x1c'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <alias name='pci.13'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     </controller>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <controller type='pci' index='14' model='pcie-root-port'>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <model name='pcie-root-port'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <target chassis='14' port='0x1d'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <alias name='pci.14'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     </controller>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <controller type='pci' index='15' model='pcie-root-port'>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <model name='pcie-root-port'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <target chassis='15' port='0x1e'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <alias name='pci.15'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     </controller>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <controller type='pci' index='16' model='pcie-root-port'>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <model name='pcie-root-port'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <target chassis='16' port='0x1f'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <alias name='pci.16'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     </controller>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <controller type='pci' index='17' model='pcie-root-port'>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <model name='pcie-root-port'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <target chassis='17' port='0x20'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <alias name='pci.17'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     </controller>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <controller type='pci' index='18' model='pcie-root-port'>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <model name='pcie-root-port'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <target chassis='18' port='0x21'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <alias name='pci.18'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     </controller>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <controller type='pci' index='19' model='pcie-root-port'>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <model name='pcie-root-port'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <target chassis='19' port='0x22'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <alias name='pci.19'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     </controller>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <controller type='pci' index='20' model='pcie-root-port'>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <model name='pcie-root-port'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <target chassis='20' port='0x23'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <alias name='pci.20'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     </controller>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <controller type='pci' index='21' model='pcie-root-port'>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <model name='pcie-root-port'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <target chassis='21' port='0x24'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <alias name='pci.21'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     </controller>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <controller type='pci' index='22' model='pcie-root-port'>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <model name='pcie-root-port'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <target chassis='22' port='0x25'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <alias name='pci.22'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     </controller>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <controller type='pci' index='23' model='pcie-root-port'>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <model name='pcie-root-port'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <target chassis='23' port='0x26'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <alias name='pci.23'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     </controller>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <controller type='pci' index='24' model='pcie-root-port'>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <model name='pcie-root-port'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <target chassis='24' port='0x27'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <alias name='pci.24'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     </controller>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <controller type='pci' index='25' model='pcie-root-port'>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <model name='pcie-root-port'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <target chassis='25' port='0x28'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <alias name='pci.25'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     </controller>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <model name='pcie-pci-bridge'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <alias name='pci.26'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     </controller>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <controller type='usb' index='0' model='piix3-uhci'>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <alias name='usb'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     </controller>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <controller type='sata' index='0'>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <alias name='ide'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     </controller>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <interface type='ethernet'>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <mac address='fa:16:3e:e7:ba:22'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <target dev='tap3d9d0177-03'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <model type='virtio'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <driver name='vhost' rx_queue_size='512'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <mtu size='1442'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <alias name='net0'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     </interface>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <serial type='pty'>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <source path='/dev/pts/0'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <log file='/var/lib/nova/instances/00832e9b-36fa-4d41-be21-ca6b05cd493f/console.log' append='off'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <target type='isa-serial' port='0'>
Jan 20 15:33:28 compute-0 nova_compute[250018]:         <model name='isa-serial'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       </target>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <alias name='serial0'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     </serial>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <console type='pty' tty='/dev/pts/0'>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <source path='/dev/pts/0'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <log file='/var/lib/nova/instances/00832e9b-36fa-4d41-be21-ca6b05cd493f/console.log' append='off'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <target type='serial' port='0'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <alias name='serial0'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     </console>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <input type='tablet' bus='usb'>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <alias name='input0'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <address type='usb' bus='0' port='1'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     </input>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <input type='mouse' bus='ps2'>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <alias name='input1'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     </input>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <input type='keyboard' bus='ps2'>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <alias name='input2'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     </input>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <listen type='address' address='::0'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     </graphics>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <audio id='1' type='none'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <video>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <model type='virtio' heads='1' primary='yes'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <alias name='video0'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     </video>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <watchdog model='itco' action='reset'>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <alias name='watchdog0'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     </watchdog>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <memballoon model='virtio'>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <stats period='10'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <alias name='balloon0'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     </memballoon>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <rng model='virtio'>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <backend model='random'>/dev/urandom</backend>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <alias name='rng0'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     </rng>
Jan 20 15:33:28 compute-0 nova_compute[250018]:   </devices>
Jan 20 15:33:28 compute-0 nova_compute[250018]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <label>system_u:system_r:svirt_t:s0:c492,c593</label>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c492,c593</imagelabel>
Jan 20 15:33:28 compute-0 nova_compute[250018]:   </seclabel>
Jan 20 15:33:28 compute-0 nova_compute[250018]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <label>+107:+107</label>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <imagelabel>+107:+107</imagelabel>
Jan 20 15:33:28 compute-0 nova_compute[250018]:   </seclabel>
Jan 20 15:33:28 compute-0 nova_compute[250018]: </domain>
Jan 20 15:33:28 compute-0 nova_compute[250018]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Jan 20 15:33:28 compute-0 nova_compute[250018]: 2026-01-20 15:33:28.183 250022 DEBUG nova.virt.libvirt.guest [req-ee09af10-19f6-435f-83e7-bb7be89efe28 req-88dd0185-a524-4d9b-b10a-afcd7b81cb64 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:66:b2:7c"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapcecb01b7-0f"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Jan 20 15:33:28 compute-0 nova_compute[250018]: 2026-01-20 15:33:28.190 250022 DEBUG nova.virt.libvirt.guest [req-ee09af10-19f6-435f-83e7-bb7be89efe28 req-88dd0185-a524-4d9b-b10a-afcd7b81cb64 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:66:b2:7c"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapcecb01b7-0f"/></interface>not found in domain: <domain type='kvm' id='91'>
Jan 20 15:33:28 compute-0 nova_compute[250018]:   <name>instance-000000d0</name>
Jan 20 15:33:28 compute-0 nova_compute[250018]:   <uuid>00832e9b-36fa-4d41-be21-ca6b05cd493f</uuid>
Jan 20 15:33:28 compute-0 nova_compute[250018]:   <metadata>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 15:33:28 compute-0 nova_compute[250018]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:   <nova:name>tempest-TestNetworkBasicOps-server-294660269</nova:name>
Jan 20 15:33:28 compute-0 nova_compute[250018]:   <nova:creationTime>2026-01-20 15:33:27</nova:creationTime>
Jan 20 15:33:28 compute-0 nova_compute[250018]:   <nova:flavor name="m1.nano">
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <nova:memory>128</nova:memory>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <nova:disk>1</nova:disk>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <nova:swap>0</nova:swap>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <nova:ephemeral>0</nova:ephemeral>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <nova:vcpus>1</nova:vcpus>
Jan 20 15:33:28 compute-0 nova_compute[250018]:   </nova:flavor>
Jan 20 15:33:28 compute-0 nova_compute[250018]:   <nova:owner>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <nova:user uuid="5338aa65dc0e4326a66ce79053787f14">tempest-TestNetworkBasicOps-807695970-project-member</nova:user>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <nova:project uuid="3168f57421fb49bfb94b85daedd1fe7d">tempest-TestNetworkBasicOps-807695970</nova:project>
Jan 20 15:33:28 compute-0 nova_compute[250018]:   </nova:owner>
Jan 20 15:33:28 compute-0 nova_compute[250018]:   <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:   <nova:ports>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <nova:port uuid="3d9d0177-0374-4df6-b536-6a369c25c060">
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     </nova:port>
Jan 20 15:33:28 compute-0 nova_compute[250018]:   </nova:ports>
Jan 20 15:33:28 compute-0 nova_compute[250018]: </nova:instance>
Jan 20 15:33:28 compute-0 nova_compute[250018]:   </metadata>
Jan 20 15:33:28 compute-0 nova_compute[250018]:   <memory unit='KiB'>131072</memory>
Jan 20 15:33:28 compute-0 nova_compute[250018]:   <currentMemory unit='KiB'>131072</currentMemory>
Jan 20 15:33:28 compute-0 nova_compute[250018]:   <vcpu placement='static'>1</vcpu>
Jan 20 15:33:28 compute-0 nova_compute[250018]:   <resource>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <partition>/machine</partition>
Jan 20 15:33:28 compute-0 nova_compute[250018]:   </resource>
Jan 20 15:33:28 compute-0 nova_compute[250018]:   <sysinfo type='smbios'>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <system>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <entry name='manufacturer'>RDO</entry>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <entry name='product'>OpenStack Compute</entry>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <entry name='serial'>00832e9b-36fa-4d41-be21-ca6b05cd493f</entry>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <entry name='uuid'>00832e9b-36fa-4d41-be21-ca6b05cd493f</entry>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <entry name='family'>Virtual Machine</entry>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     </system>
Jan 20 15:33:28 compute-0 nova_compute[250018]:   </sysinfo>
Jan 20 15:33:28 compute-0 nova_compute[250018]:   <os>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <boot dev='hd'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <smbios mode='sysinfo'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:   </os>
Jan 20 15:33:28 compute-0 nova_compute[250018]:   <features>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <acpi/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <apic/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <vmcoreinfo state='on'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:   </features>
Jan 20 15:33:28 compute-0 nova_compute[250018]:   <cpu mode='custom' match='exact' check='full'>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <model fallback='forbid'>Nehalem</model>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <feature policy='require' name='x2apic'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <feature policy='require' name='hypervisor'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <feature policy='require' name='vme'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:   </cpu>
Jan 20 15:33:28 compute-0 nova_compute[250018]:   <clock offset='utc'>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <timer name='pit' tickpolicy='delay'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <timer name='rtc' tickpolicy='catchup'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <timer name='hpet' present='no'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:   </clock>
Jan 20 15:33:28 compute-0 nova_compute[250018]:   <on_poweroff>destroy</on_poweroff>
Jan 20 15:33:28 compute-0 nova_compute[250018]:   <on_reboot>restart</on_reboot>
Jan 20 15:33:28 compute-0 nova_compute[250018]:   <on_crash>destroy</on_crash>
Jan 20 15:33:28 compute-0 nova_compute[250018]:   <devices>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <disk type='network' device='disk'>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <driver name='qemu' type='raw' cache='none'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <auth username='openstack'>
Jan 20 15:33:28 compute-0 nova_compute[250018]:         <secret type='ceph' uuid='e399cf45-e6b6-5393-99f1-75c601d3f188'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       </auth>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <source protocol='rbd' name='vms/00832e9b-36fa-4d41-be21-ca6b05cd493f_disk' index='2'>
Jan 20 15:33:28 compute-0 nova_compute[250018]:         <host name='192.168.122.100' port='6789'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:         <host name='192.168.122.102' port='6789'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:         <host name='192.168.122.101' port='6789'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       </source>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <target dev='vda' bus='virtio'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <alias name='virtio-disk0'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     </disk>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <disk type='network' device='cdrom'>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <driver name='qemu' type='raw' cache='none'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <auth username='openstack'>
Jan 20 15:33:28 compute-0 nova_compute[250018]:         <secret type='ceph' uuid='e399cf45-e6b6-5393-99f1-75c601d3f188'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       </auth>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <source protocol='rbd' name='vms/00832e9b-36fa-4d41-be21-ca6b05cd493f_disk.config' index='1'>
Jan 20 15:33:28 compute-0 nova_compute[250018]:         <host name='192.168.122.100' port='6789'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:         <host name='192.168.122.102' port='6789'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:         <host name='192.168.122.101' port='6789'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       </source>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <target dev='sda' bus='sata'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <readonly/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <alias name='sata0-0-0'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     </disk>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <controller type='pci' index='0' model='pcie-root'>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <alias name='pcie.0'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     </controller>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <controller type='pci' index='1' model='pcie-root-port'>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <model name='pcie-root-port'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <target chassis='1' port='0x10'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <alias name='pci.1'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     </controller>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <controller type='pci' index='2' model='pcie-root-port'>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <model name='pcie-root-port'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <target chassis='2' port='0x11'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <alias name='pci.2'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     </controller>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <controller type='pci' index='3' model='pcie-root-port'>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <model name='pcie-root-port'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <target chassis='3' port='0x12'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <alias name='pci.3'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     </controller>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <controller type='pci' index='4' model='pcie-root-port'>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <model name='pcie-root-port'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <target chassis='4' port='0x13'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <alias name='pci.4'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     </controller>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <controller type='pci' index='5' model='pcie-root-port'>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <model name='pcie-root-port'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <target chassis='5' port='0x14'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <alias name='pci.5'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     </controller>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <controller type='pci' index='6' model='pcie-root-port'>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <model name='pcie-root-port'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <target chassis='6' port='0x15'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <alias name='pci.6'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     </controller>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <controller type='pci' index='7' model='pcie-root-port'>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <model name='pcie-root-port'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <target chassis='7' port='0x16'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <alias name='pci.7'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     </controller>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <controller type='pci' index='8' model='pcie-root-port'>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <model name='pcie-root-port'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <target chassis='8' port='0x17'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <alias name='pci.8'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     </controller>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <controller type='pci' index='9' model='pcie-root-port'>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <model name='pcie-root-port'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <target chassis='9' port='0x18'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <alias name='pci.9'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     </controller>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <controller type='pci' index='10' model='pcie-root-port'>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <model name='pcie-root-port'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <target chassis='10' port='0x19'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <alias name='pci.10'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     </controller>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <controller type='pci' index='11' model='pcie-root-port'>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <model name='pcie-root-port'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <target chassis='11' port='0x1a'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <alias name='pci.11'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     </controller>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <controller type='pci' index='12' model='pcie-root-port'>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <model name='pcie-root-port'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <target chassis='12' port='0x1b'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <alias name='pci.12'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     </controller>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <controller type='pci' index='13' model='pcie-root-port'>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <model name='pcie-root-port'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <target chassis='13' port='0x1c'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <alias name='pci.13'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     </controller>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <controller type='pci' index='14' model='pcie-root-port'>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <model name='pcie-root-port'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <target chassis='14' port='0x1d'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <alias name='pci.14'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     </controller>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <controller type='pci' index='15' model='pcie-root-port'>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <model name='pcie-root-port'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <target chassis='15' port='0x1e'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <alias name='pci.15'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     </controller>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <controller type='pci' index='16' model='pcie-root-port'>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <model name='pcie-root-port'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <target chassis='16' port='0x1f'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <alias name='pci.16'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     </controller>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <controller type='pci' index='17' model='pcie-root-port'>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <model name='pcie-root-port'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <target chassis='17' port='0x20'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <alias name='pci.17'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     </controller>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <controller type='pci' index='18' model='pcie-root-port'>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <model name='pcie-root-port'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <target chassis='18' port='0x21'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <alias name='pci.18'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     </controller>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <controller type='pci' index='19' model='pcie-root-port'>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <model name='pcie-root-port'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <target chassis='19' port='0x22'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <alias name='pci.19'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     </controller>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <controller type='pci' index='20' model='pcie-root-port'>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <model name='pcie-root-port'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <target chassis='20' port='0x23'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <alias name='pci.20'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     </controller>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <controller type='pci' index='21' model='pcie-root-port'>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <model name='pcie-root-port'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <target chassis='21' port='0x24'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <alias name='pci.21'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     </controller>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <controller type='pci' index='22' model='pcie-root-port'>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <model name='pcie-root-port'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <target chassis='22' port='0x25'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <alias name='pci.22'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     </controller>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <controller type='pci' index='23' model='pcie-root-port'>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <model name='pcie-root-port'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <target chassis='23' port='0x26'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <alias name='pci.23'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     </controller>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <controller type='pci' index='24' model='pcie-root-port'>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <model name='pcie-root-port'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <target chassis='24' port='0x27'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <alias name='pci.24'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     </controller>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <controller type='pci' index='25' model='pcie-root-port'>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <model name='pcie-root-port'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <target chassis='25' port='0x28'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <alias name='pci.25'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     </controller>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <model name='pcie-pci-bridge'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <alias name='pci.26'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     </controller>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <controller type='usb' index='0' model='piix3-uhci'>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <alias name='usb'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     </controller>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <controller type='sata' index='0'>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <alias name='ide'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     </controller>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <interface type='ethernet'>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <mac address='fa:16:3e:e7:ba:22'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <target dev='tap3d9d0177-03'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <model type='virtio'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <driver name='vhost' rx_queue_size='512'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <mtu size='1442'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <alias name='net0'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     </interface>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <serial type='pty'>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <source path='/dev/pts/0'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <log file='/var/lib/nova/instances/00832e9b-36fa-4d41-be21-ca6b05cd493f/console.log' append='off'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <target type='isa-serial' port='0'>
Jan 20 15:33:28 compute-0 nova_compute[250018]:         <model name='isa-serial'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       </target>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <alias name='serial0'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     </serial>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <console type='pty' tty='/dev/pts/0'>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <source path='/dev/pts/0'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <log file='/var/lib/nova/instances/00832e9b-36fa-4d41-be21-ca6b05cd493f/console.log' append='off'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <target type='serial' port='0'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <alias name='serial0'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     </console>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <input type='tablet' bus='usb'>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <alias name='input0'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <address type='usb' bus='0' port='1'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     </input>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <input type='mouse' bus='ps2'>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <alias name='input1'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     </input>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <input type='keyboard' bus='ps2'>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <alias name='input2'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     </input>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <listen type='address' address='::0'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     </graphics>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <audio id='1' type='none'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <video>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <model type='virtio' heads='1' primary='yes'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <alias name='video0'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     </video>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <watchdog model='itco' action='reset'>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <alias name='watchdog0'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     </watchdog>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <memballoon model='virtio'>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <stats period='10'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <alias name='balloon0'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     </memballoon>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <rng model='virtio'>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <backend model='random'>/dev/urandom</backend>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <alias name='rng0'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     </rng>
Jan 20 15:33:28 compute-0 nova_compute[250018]:   </devices>
Jan 20 15:33:28 compute-0 nova_compute[250018]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <label>system_u:system_r:svirt_t:s0:c492,c593</label>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c492,c593</imagelabel>
Jan 20 15:33:28 compute-0 nova_compute[250018]:   </seclabel>
Jan 20 15:33:28 compute-0 nova_compute[250018]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <label>+107:+107</label>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <imagelabel>+107:+107</imagelabel>
Jan 20 15:33:28 compute-0 nova_compute[250018]:   </seclabel>
Jan 20 15:33:28 compute-0 nova_compute[250018]: </domain>
Jan 20 15:33:28 compute-0 nova_compute[250018]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Jan 20 15:33:28 compute-0 nova_compute[250018]: 2026-01-20 15:33:28.190 250022 WARNING nova.virt.libvirt.driver [req-ee09af10-19f6-435f-83e7-bb7be89efe28 req-88dd0185-a524-4d9b-b10a-afcd7b81cb64 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 00832e9b-36fa-4d41-be21-ca6b05cd493f] Detaching interface fa:16:3e:66:b2:7c failed because the device is no longer found on the guest.: nova.exception.DeviceNotFound: Device 'tapcecb01b7-0f' not found.
Jan 20 15:33:28 compute-0 nova_compute[250018]: 2026-01-20 15:33:28.191 250022 DEBUG nova.virt.libvirt.vif [req-ee09af10-19f6-435f-83e7-bb7be89efe28 req-88dd0185-a524-4d9b-b10a-afcd7b81cb64 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T15:31:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-294660269',display_name='tempest-TestNetworkBasicOps-server-294660269',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-294660269',id=208,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBM/wcy1u/OW984noISwEOwb5LNHq9kMm6H4gDYMOeRNg80CMD0xPaXGSJjRjqJjcmlGZ4ls4SDDoXkG2XEqy3wx1zeFMwT8eQgW0fV48l9Sd8ax0F4mvMlYvHLkKrxuyhw==',key_name='tempest-TestNetworkBasicOps-525328153',keypairs=<?>,launch_index=0,launched_at=2026-01-20T15:32:08Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3168f57421fb49bfb94b85daedd1fe7d',ramdisk_id='',reservation_id='r-yd9tjeqi',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-807695970',owner_user_name='tempest-TestNetworkBasicOps-807695970-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T15:32:08Z,user_data=None,user_id='5338aa65dc0e4326a66ce79053787f14',uuid=00832e9b-36fa-4d41-be21-ca6b05cd493f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "cecb01b7-0ffd-44bf-bc66-2bdc91b95936", "address": "fa:16:3e:66:b2:7c", "network": {"id": "e1610a22-2f29-4495-85e7-ab2081f73701", "bridge": "br-int", "label": "tempest-network-smoke--313389391", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.19", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcecb01b7-0f", "ovs_interfaceid": "cecb01b7-0ffd-44bf-bc66-2bdc91b95936", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 15:33:28 compute-0 nova_compute[250018]: 2026-01-20 15:33:28.192 250022 DEBUG nova.network.os_vif_util [req-ee09af10-19f6-435f-83e7-bb7be89efe28 req-88dd0185-a524-4d9b-b10a-afcd7b81cb64 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Converting VIF {"id": "cecb01b7-0ffd-44bf-bc66-2bdc91b95936", "address": "fa:16:3e:66:b2:7c", "network": {"id": "e1610a22-2f29-4495-85e7-ab2081f73701", "bridge": "br-int", "label": "tempest-network-smoke--313389391", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.19", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcecb01b7-0f", "ovs_interfaceid": "cecb01b7-0ffd-44bf-bc66-2bdc91b95936", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:33:28 compute-0 nova_compute[250018]: 2026-01-20 15:33:28.192 250022 DEBUG nova.network.os_vif_util [req-ee09af10-19f6-435f-83e7-bb7be89efe28 req-88dd0185-a524-4d9b-b10a-afcd7b81cb64 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:66:b2:7c,bridge_name='br-int',has_traffic_filtering=True,id=cecb01b7-0ffd-44bf-bc66-2bdc91b95936,network=Network(e1610a22-2f29-4495-85e7-ab2081f73701),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcecb01b7-0f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:33:28 compute-0 nova_compute[250018]: 2026-01-20 15:33:28.193 250022 DEBUG os_vif [req-ee09af10-19f6-435f-83e7-bb7be89efe28 req-88dd0185-a524-4d9b-b10a-afcd7b81cb64 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:66:b2:7c,bridge_name='br-int',has_traffic_filtering=True,id=cecb01b7-0ffd-44bf-bc66-2bdc91b95936,network=Network(e1610a22-2f29-4495-85e7-ab2081f73701),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcecb01b7-0f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 15:33:28 compute-0 nova_compute[250018]: 2026-01-20 15:33:28.194 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:33:28 compute-0 nova_compute[250018]: 2026-01-20 15:33:28.194 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcecb01b7-0f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:33:28 compute-0 nova_compute[250018]: 2026-01-20 15:33:28.194 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:33:28 compute-0 nova_compute[250018]: 2026-01-20 15:33:28.196 250022 INFO os_vif [req-ee09af10-19f6-435f-83e7-bb7be89efe28 req-88dd0185-a524-4d9b-b10a-afcd7b81cb64 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:66:b2:7c,bridge_name='br-int',has_traffic_filtering=True,id=cecb01b7-0ffd-44bf-bc66-2bdc91b95936,network=Network(e1610a22-2f29-4495-85e7-ab2081f73701),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcecb01b7-0f')
Jan 20 15:33:28 compute-0 nova_compute[250018]: 2026-01-20 15:33:28.197 250022 DEBUG nova.virt.libvirt.guest [req-ee09af10-19f6-435f-83e7-bb7be89efe28 req-88dd0185-a524-4d9b-b10a-afcd7b81cb64 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 15:33:28 compute-0 nova_compute[250018]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:   <nova:name>tempest-TestNetworkBasicOps-server-294660269</nova:name>
Jan 20 15:33:28 compute-0 nova_compute[250018]:   <nova:creationTime>2026-01-20 15:33:28</nova:creationTime>
Jan 20 15:33:28 compute-0 nova_compute[250018]:   <nova:flavor name="m1.nano">
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <nova:memory>128</nova:memory>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <nova:disk>1</nova:disk>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <nova:swap>0</nova:swap>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <nova:ephemeral>0</nova:ephemeral>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <nova:vcpus>1</nova:vcpus>
Jan 20 15:33:28 compute-0 nova_compute[250018]:   </nova:flavor>
Jan 20 15:33:28 compute-0 nova_compute[250018]:   <nova:owner>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <nova:user uuid="5338aa65dc0e4326a66ce79053787f14">tempest-TestNetworkBasicOps-807695970-project-member</nova:user>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <nova:project uuid="3168f57421fb49bfb94b85daedd1fe7d">tempest-TestNetworkBasicOps-807695970</nova:project>
Jan 20 15:33:28 compute-0 nova_compute[250018]:   </nova:owner>
Jan 20 15:33:28 compute-0 nova_compute[250018]:   <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:   <nova:ports>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     <nova:port uuid="3d9d0177-0374-4df6-b536-6a369c25c060">
Jan 20 15:33:28 compute-0 nova_compute[250018]:       <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 20 15:33:28 compute-0 nova_compute[250018]:     </nova:port>
Jan 20 15:33:28 compute-0 nova_compute[250018]:   </nova:ports>
Jan 20 15:33:28 compute-0 nova_compute[250018]: </nova:instance>
Jan 20 15:33:28 compute-0 nova_compute[250018]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Jan 20 15:33:28 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:33:28 compute-0 ceph-mon[74360]: pgmap v3338: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 26 KiB/s wr, 29 op/s
Jan 20 15:33:28 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:33:28.751 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=367c1a2c-b16a-4828-ab5a-626bb50023b4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '79'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:33:28 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:33:28 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:33:28 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:33:28.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:33:28 compute-0 sudo[387245]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:33:28 compute-0 sudo[387245]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:33:28 compute-0 sudo[387245]: pam_unix(sudo:session): session closed for user root
Jan 20 15:33:29 compute-0 sudo[387270]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:33:29 compute-0 sudo[387270]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:33:29 compute-0 sudo[387270]: pam_unix(sudo:session): session closed for user root
Jan 20 15:33:29 compute-0 sshd-session[387166]: Connection closed by 3.134.148.59 port 51640 [preauth]
Jan 20 15:33:29 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3339: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 27 KiB/s wr, 29 op/s
Jan 20 15:33:29 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:33:29 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:33:29 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:33:29.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:33:29 compute-0 nova_compute[250018]: 2026-01-20 15:33:29.712 250022 DEBUG nova.compute.manager [req-5e0ebe91-05b2-4510-a092-7b4f7d3f224c req-e22b34f0-5f12-48df-ad2a-6685cc9a42fd 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 00832e9b-36fa-4d41-be21-ca6b05cd493f] Received event network-vif-plugged-cecb01b7-0ffd-44bf-bc66-2bdc91b95936 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:33:29 compute-0 nova_compute[250018]: 2026-01-20 15:33:29.713 250022 DEBUG oslo_concurrency.lockutils [req-5e0ebe91-05b2-4510-a092-7b4f7d3f224c req-e22b34f0-5f12-48df-ad2a-6685cc9a42fd 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "00832e9b-36fa-4d41-be21-ca6b05cd493f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:33:29 compute-0 nova_compute[250018]: 2026-01-20 15:33:29.713 250022 DEBUG oslo_concurrency.lockutils [req-5e0ebe91-05b2-4510-a092-7b4f7d3f224c req-e22b34f0-5f12-48df-ad2a-6685cc9a42fd 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "00832e9b-36fa-4d41-be21-ca6b05cd493f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:33:29 compute-0 nova_compute[250018]: 2026-01-20 15:33:29.713 250022 DEBUG oslo_concurrency.lockutils [req-5e0ebe91-05b2-4510-a092-7b4f7d3f224c req-e22b34f0-5f12-48df-ad2a-6685cc9a42fd 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "00832e9b-36fa-4d41-be21-ca6b05cd493f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:33:29 compute-0 nova_compute[250018]: 2026-01-20 15:33:29.713 250022 DEBUG nova.compute.manager [req-5e0ebe91-05b2-4510-a092-7b4f7d3f224c req-e22b34f0-5f12-48df-ad2a-6685cc9a42fd 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 00832e9b-36fa-4d41-be21-ca6b05cd493f] No waiting events found dispatching network-vif-plugged-cecb01b7-0ffd-44bf-bc66-2bdc91b95936 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:33:29 compute-0 nova_compute[250018]: 2026-01-20 15:33:29.713 250022 WARNING nova.compute.manager [req-5e0ebe91-05b2-4510-a092-7b4f7d3f224c req-e22b34f0-5f12-48df-ad2a-6685cc9a42fd 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 00832e9b-36fa-4d41-be21-ca6b05cd493f] Received unexpected event network-vif-plugged-cecb01b7-0ffd-44bf-bc66-2bdc91b95936 for instance with vm_state active and task_state None.
Jan 20 15:33:30 compute-0 ovn_controller[148666]: 2026-01-20T15:33:30Z|00771|binding|INFO|Releasing lport 3cb40d6c-abbe-41b5-94ed-cb8dcac5474e from this chassis (sb_readonly=0)
Jan 20 15:33:30 compute-0 nova_compute[250018]: 2026-01-20 15:33:30.201 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:33:30 compute-0 ceph-mon[74360]: pgmap v3339: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 27 KiB/s wr, 29 op/s
Jan 20 15:33:30 compute-0 nova_compute[250018]: 2026-01-20 15:33:30.567 250022 INFO nova.network.neutron [None req-d21d35fe-9687-4f04-aba5-39982cbb8549 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 00832e9b-36fa-4d41-be21-ca6b05cd493f] Port cecb01b7-0ffd-44bf-bc66-2bdc91b95936 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.
Jan 20 15:33:30 compute-0 nova_compute[250018]: 2026-01-20 15:33:30.567 250022 DEBUG nova.network.neutron [None req-d21d35fe-9687-4f04-aba5-39982cbb8549 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 00832e9b-36fa-4d41-be21-ca6b05cd493f] Updating instance_info_cache with network_info: [{"id": "3d9d0177-0374-4df6-b536-6a369c25c060", "address": "fa:16:3e:e7:ba:22", "network": {"id": "2ec0e47d-339a-4483-a67a-b500a294d021", "bridge": "br-int", "label": "tempest-network-smoke--347448443", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.220", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3d9d0177-03", "ovs_interfaceid": "3d9d0177-0374-4df6-b536-6a369c25c060", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:33:30 compute-0 nova_compute[250018]: 2026-01-20 15:33:30.601 250022 DEBUG oslo_concurrency.lockutils [None req-d21d35fe-9687-4f04-aba5-39982cbb8549 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Releasing lock "refresh_cache-00832e9b-36fa-4d41-be21-ca6b05cd493f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:33:30 compute-0 nova_compute[250018]: 2026-01-20 15:33:30.674 250022 DEBUG oslo_concurrency.lockutils [None req-d21d35fe-9687-4f04-aba5-39982cbb8549 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "interface-00832e9b-36fa-4d41-be21-ca6b05cd493f-cecb01b7-0ffd-44bf-bc66-2bdc91b95936" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 3.794s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:33:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:33:30.806 160071 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:33:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:33:30.807 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:33:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:33:30.807 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:33:30 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:33:30 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:33:30 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:33:30.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:33:31 compute-0 nova_compute[250018]: 2026-01-20 15:33:31.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:33:31 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3340: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 16 KiB/s wr, 29 op/s
Jan 20 15:33:31 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:33:31 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:33:31 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:33:31.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:33:31 compute-0 nova_compute[250018]: 2026-01-20 15:33:31.889 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:33:32 compute-0 nova_compute[250018]: 2026-01-20 15:33:32.014 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:33:32 compute-0 nova_compute[250018]: 2026-01-20 15:33:32.203 250022 DEBUG nova.compute.manager [req-659513f3-af44-4cea-a9e9-5e6b1c7fa95f req-7b7ee3c3-4477-4615-93e2-ed85586eb66a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 00832e9b-36fa-4d41-be21-ca6b05cd493f] Received event network-changed-3d9d0177-0374-4df6-b536-6a369c25c060 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:33:32 compute-0 nova_compute[250018]: 2026-01-20 15:33:32.203 250022 DEBUG nova.compute.manager [req-659513f3-af44-4cea-a9e9-5e6b1c7fa95f req-7b7ee3c3-4477-4615-93e2-ed85586eb66a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 00832e9b-36fa-4d41-be21-ca6b05cd493f] Refreshing instance network info cache due to event network-changed-3d9d0177-0374-4df6-b536-6a369c25c060. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 15:33:32 compute-0 nova_compute[250018]: 2026-01-20 15:33:32.204 250022 DEBUG oslo_concurrency.lockutils [req-659513f3-af44-4cea-a9e9-5e6b1c7fa95f req-7b7ee3c3-4477-4615-93e2-ed85586eb66a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-00832e9b-36fa-4d41-be21-ca6b05cd493f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:33:32 compute-0 nova_compute[250018]: 2026-01-20 15:33:32.204 250022 DEBUG oslo_concurrency.lockutils [req-659513f3-af44-4cea-a9e9-5e6b1c7fa95f req-7b7ee3c3-4477-4615-93e2-ed85586eb66a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-00832e9b-36fa-4d41-be21-ca6b05cd493f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:33:32 compute-0 nova_compute[250018]: 2026-01-20 15:33:32.204 250022 DEBUG nova.network.neutron [req-659513f3-af44-4cea-a9e9-5e6b1c7fa95f req-7b7ee3c3-4477-4615-93e2-ed85586eb66a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 00832e9b-36fa-4d41-be21-ca6b05cd493f] Refreshing network info cache for port 3d9d0177-0374-4df6-b536-6a369c25c060 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 15:33:32 compute-0 nova_compute[250018]: 2026-01-20 15:33:32.351 250022 DEBUG oslo_concurrency.lockutils [None req-fab72e0a-ccef-4d27-bcaf-61d14ca6c3b3 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Acquiring lock "00832e9b-36fa-4d41-be21-ca6b05cd493f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:33:32 compute-0 nova_compute[250018]: 2026-01-20 15:33:32.352 250022 DEBUG oslo_concurrency.lockutils [None req-fab72e0a-ccef-4d27-bcaf-61d14ca6c3b3 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "00832e9b-36fa-4d41-be21-ca6b05cd493f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:33:32 compute-0 nova_compute[250018]: 2026-01-20 15:33:32.353 250022 DEBUG oslo_concurrency.lockutils [None req-fab72e0a-ccef-4d27-bcaf-61d14ca6c3b3 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Acquiring lock "00832e9b-36fa-4d41-be21-ca6b05cd493f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:33:32 compute-0 nova_compute[250018]: 2026-01-20 15:33:32.354 250022 DEBUG oslo_concurrency.lockutils [None req-fab72e0a-ccef-4d27-bcaf-61d14ca6c3b3 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "00832e9b-36fa-4d41-be21-ca6b05cd493f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:33:32 compute-0 nova_compute[250018]: 2026-01-20 15:33:32.354 250022 DEBUG oslo_concurrency.lockutils [None req-fab72e0a-ccef-4d27-bcaf-61d14ca6c3b3 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "00832e9b-36fa-4d41-be21-ca6b05cd493f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:33:32 compute-0 nova_compute[250018]: 2026-01-20 15:33:32.357 250022 INFO nova.compute.manager [None req-fab72e0a-ccef-4d27-bcaf-61d14ca6c3b3 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 00832e9b-36fa-4d41-be21-ca6b05cd493f] Terminating instance
Jan 20 15:33:32 compute-0 nova_compute[250018]: 2026-01-20 15:33:32.358 250022 DEBUG nova.compute.manager [None req-fab72e0a-ccef-4d27-bcaf-61d14ca6c3b3 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 00832e9b-36fa-4d41-be21-ca6b05cd493f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 15:33:32 compute-0 ceph-mon[74360]: pgmap v3340: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 16 KiB/s wr, 29 op/s
Jan 20 15:33:32 compute-0 kernel: tap3d9d0177-03 (unregistering): left promiscuous mode
Jan 20 15:33:32 compute-0 NetworkManager[48960]: <info>  [1768923212.4075] device (tap3d9d0177-03): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 15:33:32 compute-0 ovn_controller[148666]: 2026-01-20T15:33:32Z|00772|binding|INFO|Releasing lport 3d9d0177-0374-4df6-b536-6a369c25c060 from this chassis (sb_readonly=0)
Jan 20 15:33:32 compute-0 ovn_controller[148666]: 2026-01-20T15:33:32Z|00773|binding|INFO|Setting lport 3d9d0177-0374-4df6-b536-6a369c25c060 down in Southbound
Jan 20 15:33:32 compute-0 ovn_controller[148666]: 2026-01-20T15:33:32Z|00774|binding|INFO|Removing iface tap3d9d0177-03 ovn-installed in OVS
Jan 20 15:33:32 compute-0 nova_compute[250018]: 2026-01-20 15:33:32.426 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:33:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:33:32.435 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e7:ba:22 10.100.0.7'], port_security=['fa:16:3e:e7:ba:22 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '00832e9b-36fa-4d41-be21-ca6b05cd493f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2ec0e47d-339a-4483-a67a-b500a294d021', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3168f57421fb49bfb94b85daedd1fe7d', 'neutron:revision_number': '4', 'neutron:security_group_ids': '41c4163e-420d-44a5-bbd3-fcb6dc547b9d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=248a3941-38d3-4ab1-8c64-6993152de5bd, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=3d9d0177-0374-4df6-b536-6a369c25c060) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:33:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:33:32.437 160071 INFO neutron.agent.ovn.metadata.agent [-] Port 3d9d0177-0374-4df6-b536-6a369c25c060 in datapath 2ec0e47d-339a-4483-a67a-b500a294d021 unbound from our chassis
Jan 20 15:33:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:33:32.438 160071 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2ec0e47d-339a-4483-a67a-b500a294d021, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 15:33:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:33:32.439 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[39513f7f-9221-4975-bef0-886b250f3b00]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:33:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:33:32.440 160071 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2ec0e47d-339a-4483-a67a-b500a294d021 namespace which is not needed anymore
Jan 20 15:33:32 compute-0 nova_compute[250018]: 2026-01-20 15:33:32.443 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:33:32 compute-0 systemd[1]: machine-qemu\x2d91\x2dinstance\x2d000000d0.scope: Deactivated successfully.
Jan 20 15:33:32 compute-0 systemd[1]: machine-qemu\x2d91\x2dinstance\x2d000000d0.scope: Consumed 17.732s CPU time.
Jan 20 15:33:32 compute-0 systemd-machined[216401]: Machine qemu-91-instance-000000d0 terminated.
Jan 20 15:33:32 compute-0 podman[387298]: 2026-01-20 15:33:32.48763005 +0000 UTC m=+0.073761102 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 20 15:33:32 compute-0 podman[387297]: 2026-01-20 15:33:32.527606269 +0000 UTC m=+0.114070490 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:33:32 compute-0 neutron-haproxy-ovnmeta-2ec0e47d-339a-4483-a67a-b500a294d021[385318]: [NOTICE]   (385322) : haproxy version is 2.8.14-c23fe91
Jan 20 15:33:32 compute-0 neutron-haproxy-ovnmeta-2ec0e47d-339a-4483-a67a-b500a294d021[385318]: [NOTICE]   (385322) : path to executable is /usr/sbin/haproxy
Jan 20 15:33:32 compute-0 neutron-haproxy-ovnmeta-2ec0e47d-339a-4483-a67a-b500a294d021[385318]: [WARNING]  (385322) : Exiting Master process...
Jan 20 15:33:32 compute-0 neutron-haproxy-ovnmeta-2ec0e47d-339a-4483-a67a-b500a294d021[385318]: [WARNING]  (385322) : Exiting Master process...
Jan 20 15:33:32 compute-0 neutron-haproxy-ovnmeta-2ec0e47d-339a-4483-a67a-b500a294d021[385318]: [ALERT]    (385322) : Current worker (385339) exited with code 143 (Terminated)
Jan 20 15:33:32 compute-0 neutron-haproxy-ovnmeta-2ec0e47d-339a-4483-a67a-b500a294d021[385318]: [WARNING]  (385322) : All workers exited. Exiting... (0)
Jan 20 15:33:32 compute-0 nova_compute[250018]: 2026-01-20 15:33:32.580 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:33:32 compute-0 systemd[1]: libpod-4f6f3770a0af28bc002c57d10931c994e230c831ef5ad0114b53a4c88a95cbf8.scope: Deactivated successfully.
Jan 20 15:33:32 compute-0 nova_compute[250018]: 2026-01-20 15:33:32.586 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:33:32 compute-0 podman[387364]: 2026-01-20 15:33:32.590286448 +0000 UTC m=+0.050374295 container died 4f6f3770a0af28bc002c57d10931c994e230c831ef5ad0114b53a4c88a95cbf8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2ec0e47d-339a-4483-a67a-b500a294d021, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 20 15:33:32 compute-0 nova_compute[250018]: 2026-01-20 15:33:32.598 250022 INFO nova.virt.libvirt.driver [-] [instance: 00832e9b-36fa-4d41-be21-ca6b05cd493f] Instance destroyed successfully.
Jan 20 15:33:32 compute-0 nova_compute[250018]: 2026-01-20 15:33:32.599 250022 DEBUG nova.objects.instance [None req-fab72e0a-ccef-4d27-bcaf-61d14ca6c3b3 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lazy-loading 'resources' on Instance uuid 00832e9b-36fa-4d41-be21-ca6b05cd493f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:33:32 compute-0 nova_compute[250018]: 2026-01-20 15:33:32.612 250022 DEBUG nova.virt.libvirt.vif [None req-fab72e0a-ccef-4d27-bcaf-61d14ca6c3b3 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T15:31:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-294660269',display_name='tempest-TestNetworkBasicOps-server-294660269',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-294660269',id=208,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBM/wcy1u/OW984noISwEOwb5LNHq9kMm6H4gDYMOeRNg80CMD0xPaXGSJjRjqJjcmlGZ4ls4SDDoXkG2XEqy3wx1zeFMwT8eQgW0fV48l9Sd8ax0F4mvMlYvHLkKrxuyhw==',key_name='tempest-TestNetworkBasicOps-525328153',keypairs=<?>,launch_index=0,launched_at=2026-01-20T15:32:08Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3168f57421fb49bfb94b85daedd1fe7d',ramdisk_id='',reservation_id='r-yd9tjeqi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-807695970',owner_user_name='tempest-TestNetworkBasicOps-807695970-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T15:32:08Z,user_data=None,user_id='5338aa65dc0e4326a66ce79053787f14',uuid=00832e9b-36fa-4d41-be21-ca6b05cd493f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3d9d0177-0374-4df6-b536-6a369c25c060", "address": "fa:16:3e:e7:ba:22", "network": {"id": "2ec0e47d-339a-4483-a67a-b500a294d021", "bridge": "br-int", "label": "tempest-network-smoke--347448443", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.220", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3d9d0177-03", "ovs_interfaceid": "3d9d0177-0374-4df6-b536-6a369c25c060", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 15:33:32 compute-0 nova_compute[250018]: 2026-01-20 15:33:32.613 250022 DEBUG nova.network.os_vif_util [None req-fab72e0a-ccef-4d27-bcaf-61d14ca6c3b3 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Converting VIF {"id": "3d9d0177-0374-4df6-b536-6a369c25c060", "address": "fa:16:3e:e7:ba:22", "network": {"id": "2ec0e47d-339a-4483-a67a-b500a294d021", "bridge": "br-int", "label": "tempest-network-smoke--347448443", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.220", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3d9d0177-03", "ovs_interfaceid": "3d9d0177-0374-4df6-b536-6a369c25c060", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:33:32 compute-0 nova_compute[250018]: 2026-01-20 15:33:32.613 250022 DEBUG nova.network.os_vif_util [None req-fab72e0a-ccef-4d27-bcaf-61d14ca6c3b3 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e7:ba:22,bridge_name='br-int',has_traffic_filtering=True,id=3d9d0177-0374-4df6-b536-6a369c25c060,network=Network(2ec0e47d-339a-4483-a67a-b500a294d021),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3d9d0177-03') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:33:32 compute-0 nova_compute[250018]: 2026-01-20 15:33:32.614 250022 DEBUG os_vif [None req-fab72e0a-ccef-4d27-bcaf-61d14ca6c3b3 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e7:ba:22,bridge_name='br-int',has_traffic_filtering=True,id=3d9d0177-0374-4df6-b536-6a369c25c060,network=Network(2ec0e47d-339a-4483-a67a-b500a294d021),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3d9d0177-03') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 15:33:32 compute-0 nova_compute[250018]: 2026-01-20 15:33:32.615 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:33:32 compute-0 nova_compute[250018]: 2026-01-20 15:33:32.615 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3d9d0177-03, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:33:32 compute-0 nova_compute[250018]: 2026-01-20 15:33:32.617 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:33:32 compute-0 nova_compute[250018]: 2026-01-20 15:33:32.618 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:33:32 compute-0 nova_compute[250018]: 2026-01-20 15:33:32.620 250022 INFO os_vif [None req-fab72e0a-ccef-4d27-bcaf-61d14ca6c3b3 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e7:ba:22,bridge_name='br-int',has_traffic_filtering=True,id=3d9d0177-0374-4df6-b536-6a369c25c060,network=Network(2ec0e47d-339a-4483-a67a-b500a294d021),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3d9d0177-03')
Jan 20 15:33:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-918fd2f60ca748264a0344ae1f79df4a2d533fa1aff64c09c6d284c00785cba8-merged.mount: Deactivated successfully.
Jan 20 15:33:32 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4f6f3770a0af28bc002c57d10931c994e230c831ef5ad0114b53a4c88a95cbf8-userdata-shm.mount: Deactivated successfully.
Jan 20 15:33:32 compute-0 podman[387364]: 2026-01-20 15:33:32.630679039 +0000 UTC m=+0.090766856 container cleanup 4f6f3770a0af28bc002c57d10931c994e230c831ef5ad0114b53a4c88a95cbf8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2ec0e47d-339a-4483-a67a-b500a294d021, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 20 15:33:32 compute-0 systemd[1]: libpod-conmon-4f6f3770a0af28bc002c57d10931c994e230c831ef5ad0114b53a4c88a95cbf8.scope: Deactivated successfully.
Jan 20 15:33:32 compute-0 podman[387418]: 2026-01-20 15:33:32.690958652 +0000 UTC m=+0.040343771 container remove 4f6f3770a0af28bc002c57d10931c994e230c831ef5ad0114b53a4c88a95cbf8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2ec0e47d-339a-4483-a67a-b500a294d021, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:33:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:33:32.696 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[ac194e5b-6a40-4e95-9f88-f01e801aa6ad]: (4, ('Tue Jan 20 03:33:32 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-2ec0e47d-339a-4483-a67a-b500a294d021 (4f6f3770a0af28bc002c57d10931c994e230c831ef5ad0114b53a4c88a95cbf8)\n4f6f3770a0af28bc002c57d10931c994e230c831ef5ad0114b53a4c88a95cbf8\nTue Jan 20 03:33:32 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-2ec0e47d-339a-4483-a67a-b500a294d021 (4f6f3770a0af28bc002c57d10931c994e230c831ef5ad0114b53a4c88a95cbf8)\n4f6f3770a0af28bc002c57d10931c994e230c831ef5ad0114b53a4c88a95cbf8\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:33:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:33:32.698 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[ff3d848c-4e12-4bd2-9a17-7ae48657e1ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:33:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:33:32.698 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2ec0e47d-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:33:32 compute-0 kernel: tap2ec0e47d-30: left promiscuous mode
Jan 20 15:33:32 compute-0 nova_compute[250018]: 2026-01-20 15:33:32.758 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:33:32 compute-0 nova_compute[250018]: 2026-01-20 15:33:32.769 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:33:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:33:32.771 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[8746d005-6a6c-463e-8bc8-749cf6c11a93]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:33:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:33:32.785 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[a1531d39-2947-4c93-8c2a-ebdc2e943cdd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:33:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:33:32.786 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[073b5ed0-f46b-49e8-a87d-41b89008eea7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:33:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:33:32.802 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[5f1472c1-471c-4362-9e9f-4ec55ad6c81d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 906786, 'reachable_time': 43800, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 387441, 'error': None, 'target': 'ovnmeta-2ec0e47d-339a-4483-a67a-b500a294d021', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:33:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:33:32.803 160517 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2ec0e47d-339a-4483-a67a-b500a294d021 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 15:33:32 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:33:32.803 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[3463099b-4438-4104-a84c-a9eb550d325b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:33:32 compute-0 systemd[1]: run-netns-ovnmeta\x2d2ec0e47d\x2d339a\x2d4483\x2da67a\x2db500a294d021.mount: Deactivated successfully.
Jan 20 15:33:32 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:33:32 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:33:32 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:33:32.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:33:32 compute-0 nova_compute[250018]: 2026-01-20 15:33:32.980 250022 DEBUG nova.compute.manager [req-7ad456fc-6a5d-4cf6-933b-6c69ab841d81 req-ade9e697-7c72-45e8-98cc-6d8b7209f9d1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 00832e9b-36fa-4d41-be21-ca6b05cd493f] Received event network-vif-unplugged-3d9d0177-0374-4df6-b536-6a369c25c060 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:33:32 compute-0 nova_compute[250018]: 2026-01-20 15:33:32.981 250022 DEBUG oslo_concurrency.lockutils [req-7ad456fc-6a5d-4cf6-933b-6c69ab841d81 req-ade9e697-7c72-45e8-98cc-6d8b7209f9d1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "00832e9b-36fa-4d41-be21-ca6b05cd493f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:33:32 compute-0 nova_compute[250018]: 2026-01-20 15:33:32.981 250022 DEBUG oslo_concurrency.lockutils [req-7ad456fc-6a5d-4cf6-933b-6c69ab841d81 req-ade9e697-7c72-45e8-98cc-6d8b7209f9d1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "00832e9b-36fa-4d41-be21-ca6b05cd493f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:33:32 compute-0 nova_compute[250018]: 2026-01-20 15:33:32.981 250022 DEBUG oslo_concurrency.lockutils [req-7ad456fc-6a5d-4cf6-933b-6c69ab841d81 req-ade9e697-7c72-45e8-98cc-6d8b7209f9d1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "00832e9b-36fa-4d41-be21-ca6b05cd493f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:33:32 compute-0 nova_compute[250018]: 2026-01-20 15:33:32.981 250022 DEBUG nova.compute.manager [req-7ad456fc-6a5d-4cf6-933b-6c69ab841d81 req-ade9e697-7c72-45e8-98cc-6d8b7209f9d1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 00832e9b-36fa-4d41-be21-ca6b05cd493f] No waiting events found dispatching network-vif-unplugged-3d9d0177-0374-4df6-b536-6a369c25c060 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:33:32 compute-0 nova_compute[250018]: 2026-01-20 15:33:32.981 250022 DEBUG nova.compute.manager [req-7ad456fc-6a5d-4cf6-933b-6c69ab841d81 req-ade9e697-7c72-45e8-98cc-6d8b7209f9d1 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 00832e9b-36fa-4d41-be21-ca6b05cd493f] Received event network-vif-unplugged-3d9d0177-0374-4df6-b536-6a369c25c060 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 20 15:33:33 compute-0 nova_compute[250018]: 2026-01-20 15:33:33.075 250022 INFO nova.virt.libvirt.driver [None req-fab72e0a-ccef-4d27-bcaf-61d14ca6c3b3 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 00832e9b-36fa-4d41-be21-ca6b05cd493f] Deleting instance files /var/lib/nova/instances/00832e9b-36fa-4d41-be21-ca6b05cd493f_del
Jan 20 15:33:33 compute-0 nova_compute[250018]: 2026-01-20 15:33:33.075 250022 INFO nova.virt.libvirt.driver [None req-fab72e0a-ccef-4d27-bcaf-61d14ca6c3b3 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 00832e9b-36fa-4d41-be21-ca6b05cd493f] Deletion of /var/lib/nova/instances/00832e9b-36fa-4d41-be21-ca6b05cd493f_del complete
Jan 20 15:33:33 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3341: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 15 KiB/s wr, 29 op/s
Jan 20 15:33:33 compute-0 nova_compute[250018]: 2026-01-20 15:33:33.278 250022 INFO nova.compute.manager [None req-fab72e0a-ccef-4d27-bcaf-61d14ca6c3b3 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 00832e9b-36fa-4d41-be21-ca6b05cd493f] Took 0.92 seconds to destroy the instance on the hypervisor.
Jan 20 15:33:33 compute-0 nova_compute[250018]: 2026-01-20 15:33:33.279 250022 DEBUG oslo.service.loopingcall [None req-fab72e0a-ccef-4d27-bcaf-61d14ca6c3b3 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 15:33:33 compute-0 nova_compute[250018]: 2026-01-20 15:33:33.279 250022 DEBUG nova.compute.manager [-] [instance: 00832e9b-36fa-4d41-be21-ca6b05cd493f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 15:33:33 compute-0 nova_compute[250018]: 2026-01-20 15:33:33.279 250022 DEBUG nova.network.neutron [-] [instance: 00832e9b-36fa-4d41-be21-ca6b05cd493f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 15:33:33 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:33:33 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:33:33 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:33:33 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:33:33.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:33:34 compute-0 ceph-mon[74360]: pgmap v3341: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 15 KiB/s wr, 29 op/s
Jan 20 15:33:34 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:33:34 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:33:34 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:33:34.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:33:34 compute-0 nova_compute[250018]: 2026-01-20 15:33:34.948 250022 DEBUG nova.network.neutron [-] [instance: 00832e9b-36fa-4d41-be21-ca6b05cd493f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:33:34 compute-0 nova_compute[250018]: 2026-01-20 15:33:34.996 250022 INFO nova.compute.manager [-] [instance: 00832e9b-36fa-4d41-be21-ca6b05cd493f] Took 1.72 seconds to deallocate network for instance.
Jan 20 15:33:35 compute-0 nova_compute[250018]: 2026-01-20 15:33:35.084 250022 DEBUG oslo_concurrency.lockutils [None req-fab72e0a-ccef-4d27-bcaf-61d14ca6c3b3 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:33:35 compute-0 nova_compute[250018]: 2026-01-20 15:33:35.085 250022 DEBUG oslo_concurrency.lockutils [None req-fab72e0a-ccef-4d27-bcaf-61d14ca6c3b3 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:33:35 compute-0 nova_compute[250018]: 2026-01-20 15:33:35.202 250022 DEBUG oslo_concurrency.processutils [None req-fab72e0a-ccef-4d27-bcaf-61d14ca6c3b3 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:33:35 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3342: 321 pgs: 321 active+clean; 138 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 29 KiB/s rd, 16 KiB/s wr, 46 op/s
Jan 20 15:33:35 compute-0 nova_compute[250018]: 2026-01-20 15:33:35.234 250022 DEBUG nova.compute.manager [req-2cb56ab2-1368-4b8e-ad95-308ad5e6b299 req-959ab20d-6036-471b-96dc-2c46c38bc614 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 00832e9b-36fa-4d41-be21-ca6b05cd493f] Received event network-vif-deleted-3d9d0177-0374-4df6-b536-6a369c25c060 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:33:35 compute-0 nova_compute[250018]: 2026-01-20 15:33:35.369 250022 DEBUG nova.compute.manager [req-002deb9c-b920-4883-a8c9-9fa9383e40d2 req-8f849570-8551-4477-93b6-c32cb4abaf1c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 00832e9b-36fa-4d41-be21-ca6b05cd493f] Received event network-vif-plugged-3d9d0177-0374-4df6-b536-6a369c25c060 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:33:35 compute-0 nova_compute[250018]: 2026-01-20 15:33:35.370 250022 DEBUG oslo_concurrency.lockutils [req-002deb9c-b920-4883-a8c9-9fa9383e40d2 req-8f849570-8551-4477-93b6-c32cb4abaf1c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "00832e9b-36fa-4d41-be21-ca6b05cd493f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:33:35 compute-0 nova_compute[250018]: 2026-01-20 15:33:35.370 250022 DEBUG oslo_concurrency.lockutils [req-002deb9c-b920-4883-a8c9-9fa9383e40d2 req-8f849570-8551-4477-93b6-c32cb4abaf1c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "00832e9b-36fa-4d41-be21-ca6b05cd493f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:33:35 compute-0 nova_compute[250018]: 2026-01-20 15:33:35.370 250022 DEBUG oslo_concurrency.lockutils [req-002deb9c-b920-4883-a8c9-9fa9383e40d2 req-8f849570-8551-4477-93b6-c32cb4abaf1c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "00832e9b-36fa-4d41-be21-ca6b05cd493f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:33:35 compute-0 nova_compute[250018]: 2026-01-20 15:33:35.371 250022 DEBUG nova.compute.manager [req-002deb9c-b920-4883-a8c9-9fa9383e40d2 req-8f849570-8551-4477-93b6-c32cb4abaf1c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 00832e9b-36fa-4d41-be21-ca6b05cd493f] No waiting events found dispatching network-vif-plugged-3d9d0177-0374-4df6-b536-6a369c25c060 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:33:35 compute-0 nova_compute[250018]: 2026-01-20 15:33:35.371 250022 WARNING nova.compute.manager [req-002deb9c-b920-4883-a8c9-9fa9383e40d2 req-8f849570-8551-4477-93b6-c32cb4abaf1c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 00832e9b-36fa-4d41-be21-ca6b05cd493f] Received unexpected event network-vif-plugged-3d9d0177-0374-4df6-b536-6a369c25c060 for instance with vm_state deleted and task_state None.
Jan 20 15:33:35 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:33:35 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:33:35 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:33:35.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:33:35 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:33:35 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/717147166' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:33:35 compute-0 nova_compute[250018]: 2026-01-20 15:33:35.625 250022 DEBUG oslo_concurrency.processutils [None req-fab72e0a-ccef-4d27-bcaf-61d14ca6c3b3 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:33:35 compute-0 nova_compute[250018]: 2026-01-20 15:33:35.630 250022 DEBUG nova.compute.provider_tree [None req-fab72e0a-ccef-4d27-bcaf-61d14ca6c3b3 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:33:35 compute-0 nova_compute[250018]: 2026-01-20 15:33:35.657 250022 DEBUG nova.scheduler.client.report [None req-fab72e0a-ccef-4d27-bcaf-61d14ca6c3b3 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:33:35 compute-0 nova_compute[250018]: 2026-01-20 15:33:35.686 250022 DEBUG oslo_concurrency.lockutils [None req-fab72e0a-ccef-4d27-bcaf-61d14ca6c3b3 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.602s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:33:35 compute-0 nova_compute[250018]: 2026-01-20 15:33:35.751 250022 INFO nova.scheduler.client.report [None req-fab72e0a-ccef-4d27-bcaf-61d14ca6c3b3 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Deleted allocations for instance 00832e9b-36fa-4d41-be21-ca6b05cd493f
Jan 20 15:33:35 compute-0 nova_compute[250018]: 2026-01-20 15:33:35.796 250022 DEBUG nova.network.neutron [req-659513f3-af44-4cea-a9e9-5e6b1c7fa95f req-7b7ee3c3-4477-4615-93e2-ed85586eb66a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 00832e9b-36fa-4d41-be21-ca6b05cd493f] Updated VIF entry in instance network info cache for port 3d9d0177-0374-4df6-b536-6a369c25c060. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 15:33:35 compute-0 nova_compute[250018]: 2026-01-20 15:33:35.797 250022 DEBUG nova.network.neutron [req-659513f3-af44-4cea-a9e9-5e6b1c7fa95f req-7b7ee3c3-4477-4615-93e2-ed85586eb66a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 00832e9b-36fa-4d41-be21-ca6b05cd493f] Updating instance_info_cache with network_info: [{"id": "3d9d0177-0374-4df6-b536-6a369c25c060", "address": "fa:16:3e:e7:ba:22", "network": {"id": "2ec0e47d-339a-4483-a67a-b500a294d021", "bridge": "br-int", "label": "tempest-network-smoke--347448443", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3d9d0177-03", "ovs_interfaceid": "3d9d0177-0374-4df6-b536-6a369c25c060", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:33:35 compute-0 nova_compute[250018]: 2026-01-20 15:33:35.848 250022 DEBUG oslo_concurrency.lockutils [None req-fab72e0a-ccef-4d27-bcaf-61d14ca6c3b3 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "00832e9b-36fa-4d41-be21-ca6b05cd493f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.496s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:33:35 compute-0 nova_compute[250018]: 2026-01-20 15:33:35.868 250022 DEBUG oslo_concurrency.lockutils [req-659513f3-af44-4cea-a9e9-5e6b1c7fa95f req-7b7ee3c3-4477-4615-93e2-ed85586eb66a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-00832e9b-36fa-4d41-be21-ca6b05cd493f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:33:36 compute-0 ceph-mon[74360]: pgmap v3342: 321 pgs: 321 active+clean; 138 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 29 KiB/s rd, 16 KiB/s wr, 46 op/s
Jan 20 15:33:36 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/717147166' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:33:36 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:33:36 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:33:36 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:33:36.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:33:36 compute-0 nova_compute[250018]: 2026-01-20 15:33:36.891 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:33:37 compute-0 nova_compute[250018]: 2026-01-20 15:33:37.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:33:37 compute-0 nova_compute[250018]: 2026-01-20 15:33:37.051 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 15:33:37 compute-0 nova_compute[250018]: 2026-01-20 15:33:37.052 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 15:33:37 compute-0 nova_compute[250018]: 2026-01-20 15:33:37.076 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 20 15:33:37 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3343: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 35 KiB/s rd, 4.0 KiB/s wr, 49 op/s
Jan 20 15:33:37 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:33:37 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:33:37 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:33:37.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:33:37 compute-0 nova_compute[250018]: 2026-01-20 15:33:37.617 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:33:38 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:33:38 compute-0 ceph-mon[74360]: pgmap v3343: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 35 KiB/s rd, 4.0 KiB/s wr, 49 op/s
Jan 20 15:33:38 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:33:38 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:33:38 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:33:38.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:33:39 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3344: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 3.2 KiB/s wr, 28 op/s
Jan 20 15:33:39 compute-0 sshd-session[387467]: Invalid user user from 134.122.57.138 port 55878
Jan 20 15:33:39 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:33:39 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:33:39 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:33:39.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:33:39 compute-0 sshd-session[387467]: Connection closed by invalid user user 134.122.57.138 port 55878 [preauth]
Jan 20 15:33:40 compute-0 ceph-mon[74360]: pgmap v3344: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 3.2 KiB/s wr, 28 op/s
Jan 20 15:33:40 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:33:40 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:33:40 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:33:40.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:33:41 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3345: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 2.2 KiB/s wr, 27 op/s
Jan 20 15:33:41 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:33:41 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:33:41 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:33:41.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:33:41 compute-0 nova_compute[250018]: 2026-01-20 15:33:41.893 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:33:42 compute-0 nova_compute[250018]: 2026-01-20 15:33:42.619 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:33:42 compute-0 ceph-mon[74360]: pgmap v3345: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 2.2 KiB/s wr, 27 op/s
Jan 20 15:33:42 compute-0 nova_compute[250018]: 2026-01-20 15:33:42.822 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:33:42 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:33:42 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:33:42 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:33:42.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:33:42 compute-0 nova_compute[250018]: 2026-01-20 15:33:42.919 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:33:43 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3346: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 20 15:33:43 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:33:43 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:33:43 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:33:43 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:33:43.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:33:44 compute-0 ceph-mon[74360]: pgmap v3346: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 20 15:33:44 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:33:44 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:33:44 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:33:44.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:33:45 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3347: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 20 15:33:45 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:33:45 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:33:45 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:33:45.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:33:46 compute-0 ceph-mon[74360]: pgmap v3347: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 20 15:33:46 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:33:46 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:33:46 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:33:46.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:33:46 compute-0 nova_compute[250018]: 2026-01-20 15:33:46.894 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:33:47 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3348: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 8.2 KiB/s rd, 511 B/s wr, 10 op/s
Jan 20 15:33:47 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:33:47 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:33:47 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:33:47.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:33:47 compute-0 nova_compute[250018]: 2026-01-20 15:33:47.596 250022 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768923212.595017, 00832e9b-36fa-4d41-be21-ca6b05cd493f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:33:47 compute-0 nova_compute[250018]: 2026-01-20 15:33:47.596 250022 INFO nova.compute.manager [-] [instance: 00832e9b-36fa-4d41-be21-ca6b05cd493f] VM Stopped (Lifecycle Event)
Jan 20 15:33:47 compute-0 nova_compute[250018]: 2026-01-20 15:33:47.618 250022 DEBUG nova.compute.manager [None req-f2bbd233-26d0-489e-a994-5343b0f9c6db - - - - - -] [instance: 00832e9b-36fa-4d41-be21-ca6b05cd493f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:33:47 compute-0 nova_compute[250018]: 2026-01-20 15:33:47.621 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:33:48 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:33:48 compute-0 ceph-mon[74360]: pgmap v3348: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 8.2 KiB/s rd, 511 B/s wr, 10 op/s
Jan 20 15:33:48 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:33:48 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:33:48 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:33:48.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:33:49 compute-0 sudo[387476]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:33:49 compute-0 sudo[387476]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:33:49 compute-0 sudo[387476]: pam_unix(sudo:session): session closed for user root
Jan 20 15:33:49 compute-0 sudo[387501]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:33:49 compute-0 sudo[387501]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:33:49 compute-0 sudo[387501]: pam_unix(sudo:session): session closed for user root
Jan 20 15:33:49 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3349: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:33:49 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:33:49 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:33:49 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:33:49.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:33:50 compute-0 ceph-mon[74360]: pgmap v3349: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:33:50 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:33:50 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:33:50 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:33:50.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:33:51 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3350: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:33:51 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:33:51 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:33:51 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:33:51.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:33:51 compute-0 nova_compute[250018]: 2026-01-20 15:33:51.896 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:33:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:33:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:33:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:33:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:33:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:33:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:33:52 compute-0 nova_compute[250018]: 2026-01-20 15:33:52.622 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:33:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Optimize plan auto_2026-01-20_15:33:52
Jan 20 15:33:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 15:33:52 compute-0 ceph-mgr[74653]: [balancer INFO root] do_upmap
Jan 20 15:33:52 compute-0 ceph-mgr[74653]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'backups', 'default.rgw.log', 'cephfs.cephfs.data', '.mgr', 'volumes', 'images', '.rgw.root', 'vms', 'default.rgw.control', 'default.rgw.meta']
Jan 20 15:33:52 compute-0 ceph-mgr[74653]: [balancer INFO root] prepared 0/10 changes
Jan 20 15:33:52 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:33:52 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:33:52 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:33:52.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:33:53 compute-0 ceph-mon[74360]: pgmap v3350: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:33:53 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3351: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:33:53 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:33:53 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:33:53 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:33:53 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:33:53.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:33:54 compute-0 ceph-mon[74360]: pgmap v3351: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:33:54 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:33:54 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:33:54 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:33:54.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:33:55 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3352: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:33:55 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:33:55 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:33:55 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:33:55.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:33:56 compute-0 sudo[387530]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:33:56 compute-0 sudo[387530]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:33:56 compute-0 sudo[387530]: pam_unix(sudo:session): session closed for user root
Jan 20 15:33:56 compute-0 sudo[387555]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:33:56 compute-0 sudo[387555]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:33:56 compute-0 sudo[387555]: pam_unix(sudo:session): session closed for user root
Jan 20 15:33:56 compute-0 sudo[387580]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:33:56 compute-0 sudo[387580]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:33:56 compute-0 sudo[387580]: pam_unix(sudo:session): session closed for user root
Jan 20 15:33:56 compute-0 sudo[387605]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 15:33:56 compute-0 sudo[387605]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:33:56 compute-0 ceph-mon[74360]: pgmap v3352: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:33:56 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:33:56 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:33:56 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:33:56.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:33:56 compute-0 nova_compute[250018]: 2026-01-20 15:33:56.898 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:33:57 compute-0 sudo[387605]: pam_unix(sudo:session): session closed for user root
Jan 20 15:33:57 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 15:33:57 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:33:57 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 20 15:33:57 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 15:33:57 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 20 15:33:57 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3353: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:33:57 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:33:57 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 42bbc69b-7518-4ae7-8f0c-85e893f5cf39 does not exist
Jan 20 15:33:57 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 0ee01684-88b2-44f5-8634-ab7455a528ad does not exist
Jan 20 15:33:57 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 68bdefc8-3902-48ad-a40b-e52b5a444753 does not exist
Jan 20 15:33:57 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 20 15:33:57 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 15:33:57 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 20 15:33:57 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 15:33:57 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 15:33:57 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:33:57 compute-0 sudo[387662]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:33:57 compute-0 sudo[387662]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:33:57 compute-0 sudo[387662]: pam_unix(sudo:session): session closed for user root
Jan 20 15:33:57 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:33:57 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:33:57 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:33:57.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:33:57 compute-0 sudo[387687]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:33:57 compute-0 sudo[387687]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:33:57 compute-0 sudo[387687]: pam_unix(sudo:session): session closed for user root
Jan 20 15:33:57 compute-0 nova_compute[250018]: 2026-01-20 15:33:57.624 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:33:57 compute-0 sudo[387712]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:33:57 compute-0 sudo[387712]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:33:57 compute-0 sudo[387712]: pam_unix(sudo:session): session closed for user root
Jan 20 15:33:57 compute-0 sudo[387737]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 15:33:57 compute-0 sudo[387737]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:33:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 15:33:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 15:33:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 15:33:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 15:33:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 15:33:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 15:33:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 15:33:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 15:33:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 15:33:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 15:33:57 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:33:57 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 15:33:57 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:33:57 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 15:33:57 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 15:33:57 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:33:58 compute-0 podman[387802]: 2026-01-20 15:33:58.089278992 +0000 UTC m=+0.041866922 container create e43dd7413637103b9826ee09a49b97f11f88933422d6328f128d7fd79cbb52ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_jepsen, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 20 15:33:58 compute-0 systemd[1]: Started libpod-conmon-e43dd7413637103b9826ee09a49b97f11f88933422d6328f128d7fd79cbb52ad.scope.
Jan 20 15:33:58 compute-0 podman[387802]: 2026-01-20 15:33:58.071107687 +0000 UTC m=+0.023695637 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:33:58 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:33:58 compute-0 podman[387802]: 2026-01-20 15:33:58.195775605 +0000 UTC m=+0.148363565 container init e43dd7413637103b9826ee09a49b97f11f88933422d6328f128d7fd79cbb52ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_jepsen, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 15:33:58 compute-0 podman[387802]: 2026-01-20 15:33:58.203552707 +0000 UTC m=+0.156140637 container start e43dd7413637103b9826ee09a49b97f11f88933422d6328f128d7fd79cbb52ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_jepsen, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 15:33:58 compute-0 podman[387802]: 2026-01-20 15:33:58.207480504 +0000 UTC m=+0.160068434 container attach e43dd7413637103b9826ee09a49b97f11f88933422d6328f128d7fd79cbb52ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_jepsen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 15:33:58 compute-0 compassionate_jepsen[387818]: 167 167
Jan 20 15:33:58 compute-0 systemd[1]: libpod-e43dd7413637103b9826ee09a49b97f11f88933422d6328f128d7fd79cbb52ad.scope: Deactivated successfully.
Jan 20 15:33:58 compute-0 conmon[387818]: conmon e43dd7413637103b9826 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e43dd7413637103b9826ee09a49b97f11f88933422d6328f128d7fd79cbb52ad.scope/container/memory.events
Jan 20 15:33:58 compute-0 podman[387802]: 2026-01-20 15:33:58.211155845 +0000 UTC m=+0.163743785 container died e43dd7413637103b9826ee09a49b97f11f88933422d6328f128d7fd79cbb52ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_jepsen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507)
Jan 20 15:33:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-adf6ffcfd61883e6f43dafe2dc24716bcecd34b0c3a0babf87dc36af9a72d1ff-merged.mount: Deactivated successfully.
Jan 20 15:33:58 compute-0 podman[387802]: 2026-01-20 15:33:58.249446468 +0000 UTC m=+0.202034398 container remove e43dd7413637103b9826ee09a49b97f11f88933422d6328f128d7fd79cbb52ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_jepsen, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 15:33:58 compute-0 systemd[1]: libpod-conmon-e43dd7413637103b9826ee09a49b97f11f88933422d6328f128d7fd79cbb52ad.scope: Deactivated successfully.
Jan 20 15:33:58 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:33:58 compute-0 podman[387843]: 2026-01-20 15:33:58.41422013 +0000 UTC m=+0.047207378 container create b8ac39dd33d0038c9eef891fb730b55eeeee4a2d3b0ddb3daf8e1df1ca11008b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_dubinsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 15:33:58 compute-0 systemd[1]: Started libpod-conmon-b8ac39dd33d0038c9eef891fb730b55eeeee4a2d3b0ddb3daf8e1df1ca11008b.scope.
Jan 20 15:33:58 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:33:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7072a0e1f6272f45133716a80ef870626003e5b13a80a600861183b91fa24dde/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 15:33:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7072a0e1f6272f45133716a80ef870626003e5b13a80a600861183b91fa24dde/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 15:33:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7072a0e1f6272f45133716a80ef870626003e5b13a80a600861183b91fa24dde/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 15:33:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7072a0e1f6272f45133716a80ef870626003e5b13a80a600861183b91fa24dde/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 15:33:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7072a0e1f6272f45133716a80ef870626003e5b13a80a600861183b91fa24dde/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 15:33:58 compute-0 podman[387843]: 2026-01-20 15:33:58.39663967 +0000 UTC m=+0.029626938 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:33:58 compute-0 podman[387843]: 2026-01-20 15:33:58.496803731 +0000 UTC m=+0.129790969 container init b8ac39dd33d0038c9eef891fb730b55eeeee4a2d3b0ddb3daf8e1df1ca11008b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_dubinsky, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 20 15:33:58 compute-0 podman[387843]: 2026-01-20 15:33:58.507434011 +0000 UTC m=+0.140421249 container start b8ac39dd33d0038c9eef891fb730b55eeeee4a2d3b0ddb3daf8e1df1ca11008b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_dubinsky, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 15:33:58 compute-0 podman[387843]: 2026-01-20 15:33:58.51401573 +0000 UTC m=+0.147002998 container attach b8ac39dd33d0038c9eef891fb730b55eeeee4a2d3b0ddb3daf8e1df1ca11008b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_dubinsky, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 15:33:58 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:33:58 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:33:58 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:33:58.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:33:58 compute-0 ceph-mon[74360]: pgmap v3353: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:33:59 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3354: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:33:59 compute-0 flamboyant_dubinsky[387859]: --> passed data devices: 0 physical, 1 LVM
Jan 20 15:33:59 compute-0 flamboyant_dubinsky[387859]: --> relative data size: 1.0
Jan 20 15:33:59 compute-0 flamboyant_dubinsky[387859]: --> All data devices are unavailable
Jan 20 15:33:59 compute-0 systemd[1]: libpod-b8ac39dd33d0038c9eef891fb730b55eeeee4a2d3b0ddb3daf8e1df1ca11008b.scope: Deactivated successfully.
Jan 20 15:33:59 compute-0 conmon[387859]: conmon b8ac39dd33d0038c9eef <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b8ac39dd33d0038c9eef891fb730b55eeeee4a2d3b0ddb3daf8e1df1ca11008b.scope/container/memory.events
Jan 20 15:33:59 compute-0 podman[387843]: 2026-01-20 15:33:59.299096681 +0000 UTC m=+0.932083979 container died b8ac39dd33d0038c9eef891fb730b55eeeee4a2d3b0ddb3daf8e1df1ca11008b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_dubinsky, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 15:33:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-7072a0e1f6272f45133716a80ef870626003e5b13a80a600861183b91fa24dde-merged.mount: Deactivated successfully.
Jan 20 15:33:59 compute-0 podman[387843]: 2026-01-20 15:33:59.360866204 +0000 UTC m=+0.993853462 container remove b8ac39dd33d0038c9eef891fb730b55eeeee4a2d3b0ddb3daf8e1df1ca11008b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_dubinsky, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 15:33:59 compute-0 systemd[1]: libpod-conmon-b8ac39dd33d0038c9eef891fb730b55eeeee4a2d3b0ddb3daf8e1df1ca11008b.scope: Deactivated successfully.
Jan 20 15:33:59 compute-0 sudo[387737]: pam_unix(sudo:session): session closed for user root
Jan 20 15:33:59 compute-0 sudo[387886]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:33:59 compute-0 sudo[387886]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:33:59 compute-0 sudo[387886]: pam_unix(sudo:session): session closed for user root
Jan 20 15:33:59 compute-0 sudo[387911]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:33:59 compute-0 sudo[387911]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:33:59 compute-0 sudo[387911]: pam_unix(sudo:session): session closed for user root
Jan 20 15:33:59 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:33:59 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:33:59 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:33:59.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:33:59 compute-0 sudo[387936]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:33:59 compute-0 sudo[387936]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:33:59 compute-0 sudo[387936]: pam_unix(sudo:session): session closed for user root
Jan 20 15:33:59 compute-0 sudo[387961]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- lvm list --format json
Jan 20 15:33:59 compute-0 sudo[387961]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:33:59 compute-0 ceph-mon[74360]: pgmap v3354: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:34:00 compute-0 podman[388026]: 2026-01-20 15:34:00.061637067 +0000 UTC m=+0.041127253 container create 70a946f171697fef31cf45b345a55770247fbdfde91ab92620bdd7d3170f662b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_dubinsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 20 15:34:00 compute-0 systemd[1]: Started libpod-conmon-70a946f171697fef31cf45b345a55770247fbdfde91ab92620bdd7d3170f662b.scope.
Jan 20 15:34:00 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:34:00 compute-0 podman[388026]: 2026-01-20 15:34:00.137603828 +0000 UTC m=+0.117094104 container init 70a946f171697fef31cf45b345a55770247fbdfde91ab92620bdd7d3170f662b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_dubinsky, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 15:34:00 compute-0 podman[388026]: 2026-01-20 15:34:00.045035035 +0000 UTC m=+0.024525241 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:34:00 compute-0 podman[388026]: 2026-01-20 15:34:00.144719542 +0000 UTC m=+0.124209728 container start 70a946f171697fef31cf45b345a55770247fbdfde91ab92620bdd7d3170f662b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_dubinsky, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 20 15:34:00 compute-0 podman[388026]: 2026-01-20 15:34:00.148326799 +0000 UTC m=+0.127817075 container attach 70a946f171697fef31cf45b345a55770247fbdfde91ab92620bdd7d3170f662b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_dubinsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 15:34:00 compute-0 strange_dubinsky[388042]: 167 167
Jan 20 15:34:00 compute-0 systemd[1]: libpod-70a946f171697fef31cf45b345a55770247fbdfde91ab92620bdd7d3170f662b.scope: Deactivated successfully.
Jan 20 15:34:00 compute-0 podman[388026]: 2026-01-20 15:34:00.151134636 +0000 UTC m=+0.130624842 container died 70a946f171697fef31cf45b345a55770247fbdfde91ab92620bdd7d3170f662b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_dubinsky, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 15:34:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-acbfef04c13b802ee146f82e63302448f60d45d724aca4692bf5f6a2672953d1-merged.mount: Deactivated successfully.
Jan 20 15:34:00 compute-0 podman[388026]: 2026-01-20 15:34:00.198470496 +0000 UTC m=+0.177960672 container remove 70a946f171697fef31cf45b345a55770247fbdfde91ab92620bdd7d3170f662b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_dubinsky, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 15:34:00 compute-0 systemd[1]: libpod-conmon-70a946f171697fef31cf45b345a55770247fbdfde91ab92620bdd7d3170f662b.scope: Deactivated successfully.
Jan 20 15:34:00 compute-0 podman[388066]: 2026-01-20 15:34:00.351361824 +0000 UTC m=+0.041359458 container create cf49340506a9d2d4f76a70d6a7eecbe4c014fe47b7b8d1bc3991f01469cb3feb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_feistel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef)
Jan 20 15:34:00 compute-0 systemd[1]: Started libpod-conmon-cf49340506a9d2d4f76a70d6a7eecbe4c014fe47b7b8d1bc3991f01469cb3feb.scope.
Jan 20 15:34:00 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:34:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f1a5fab68538afd48303fe9e92d62b34039449539ba2af76e1b53ca0bb6fa0e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 15:34:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f1a5fab68538afd48303fe9e92d62b34039449539ba2af76e1b53ca0bb6fa0e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 15:34:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f1a5fab68538afd48303fe9e92d62b34039449539ba2af76e1b53ca0bb6fa0e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 15:34:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f1a5fab68538afd48303fe9e92d62b34039449539ba2af76e1b53ca0bb6fa0e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 15:34:00 compute-0 podman[388066]: 2026-01-20 15:34:00.334445653 +0000 UTC m=+0.024443297 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:34:00 compute-0 podman[388066]: 2026-01-20 15:34:00.433148524 +0000 UTC m=+0.123146168 container init cf49340506a9d2d4f76a70d6a7eecbe4c014fe47b7b8d1bc3991f01469cb3feb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_feistel, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 15:34:00 compute-0 podman[388066]: 2026-01-20 15:34:00.438948622 +0000 UTC m=+0.128946246 container start cf49340506a9d2d4f76a70d6a7eecbe4c014fe47b7b8d1bc3991f01469cb3feb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_feistel, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef)
Jan 20 15:34:00 compute-0 podman[388066]: 2026-01-20 15:34:00.442307793 +0000 UTC m=+0.132305417 container attach cf49340506a9d2d4f76a70d6a7eecbe4c014fe47b7b8d1bc3991f01469cb3feb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_feistel, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3)
Jan 20 15:34:00 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:34:00 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:34:00 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:34:00.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:34:01 compute-0 priceless_feistel[388082]: {
Jan 20 15:34:01 compute-0 priceless_feistel[388082]:     "0": [
Jan 20 15:34:01 compute-0 priceless_feistel[388082]:         {
Jan 20 15:34:01 compute-0 priceless_feistel[388082]:             "devices": [
Jan 20 15:34:01 compute-0 priceless_feistel[388082]:                 "/dev/loop3"
Jan 20 15:34:01 compute-0 priceless_feistel[388082]:             ],
Jan 20 15:34:01 compute-0 priceless_feistel[388082]:             "lv_name": "ceph_lv0",
Jan 20 15:34:01 compute-0 priceless_feistel[388082]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 15:34:01 compute-0 priceless_feistel[388082]:             "lv_size": "7511998464",
Jan 20 15:34:01 compute-0 priceless_feistel[388082]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e399cf45-e6b6-5393-99f1-75c601d3f188,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afc3740a-ccee-46f8-b343-22648cc89376,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 20 15:34:01 compute-0 priceless_feistel[388082]:             "lv_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 15:34:01 compute-0 priceless_feistel[388082]:             "name": "ceph_lv0",
Jan 20 15:34:01 compute-0 priceless_feistel[388082]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 15:34:01 compute-0 priceless_feistel[388082]:             "tags": {
Jan 20 15:34:01 compute-0 priceless_feistel[388082]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 15:34:01 compute-0 priceless_feistel[388082]:                 "ceph.block_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 15:34:01 compute-0 priceless_feistel[388082]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 15:34:01 compute-0 priceless_feistel[388082]:                 "ceph.cluster_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 15:34:01 compute-0 priceless_feistel[388082]:                 "ceph.cluster_name": "ceph",
Jan 20 15:34:01 compute-0 priceless_feistel[388082]:                 "ceph.crush_device_class": "",
Jan 20 15:34:01 compute-0 priceless_feistel[388082]:                 "ceph.encrypted": "0",
Jan 20 15:34:01 compute-0 priceless_feistel[388082]:                 "ceph.osd_fsid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 15:34:01 compute-0 priceless_feistel[388082]:                 "ceph.osd_id": "0",
Jan 20 15:34:01 compute-0 priceless_feistel[388082]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 15:34:01 compute-0 priceless_feistel[388082]:                 "ceph.type": "block",
Jan 20 15:34:01 compute-0 priceless_feistel[388082]:                 "ceph.vdo": "0"
Jan 20 15:34:01 compute-0 priceless_feistel[388082]:             },
Jan 20 15:34:01 compute-0 priceless_feistel[388082]:             "type": "block",
Jan 20 15:34:01 compute-0 priceless_feistel[388082]:             "vg_name": "ceph_vg0"
Jan 20 15:34:01 compute-0 priceless_feistel[388082]:         }
Jan 20 15:34:01 compute-0 priceless_feistel[388082]:     ]
Jan 20 15:34:01 compute-0 priceless_feistel[388082]: }
Jan 20 15:34:01 compute-0 systemd[1]: libpod-cf49340506a9d2d4f76a70d6a7eecbe4c014fe47b7b8d1bc3991f01469cb3feb.scope: Deactivated successfully.
Jan 20 15:34:01 compute-0 podman[388066]: 2026-01-20 15:34:01.180034463 +0000 UTC m=+0.870032107 container died cf49340506a9d2d4f76a70d6a7eecbe4c014fe47b7b8d1bc3991f01469cb3feb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_feistel, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 15:34:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-2f1a5fab68538afd48303fe9e92d62b34039449539ba2af76e1b53ca0bb6fa0e-merged.mount: Deactivated successfully.
Jan 20 15:34:01 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3355: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:34:01 compute-0 podman[388066]: 2026-01-20 15:34:01.261193285 +0000 UTC m=+0.951190929 container remove cf49340506a9d2d4f76a70d6a7eecbe4c014fe47b7b8d1bc3991f01469cb3feb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_feistel, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 20 15:34:01 compute-0 systemd[1]: libpod-conmon-cf49340506a9d2d4f76a70d6a7eecbe4c014fe47b7b8d1bc3991f01469cb3feb.scope: Deactivated successfully.
Jan 20 15:34:01 compute-0 sudo[387961]: pam_unix(sudo:session): session closed for user root
Jan 20 15:34:01 compute-0 sudo[388105]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:34:01 compute-0 sudo[388105]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:34:01 compute-0 sudo[388105]: pam_unix(sudo:session): session closed for user root
Jan 20 15:34:01 compute-0 sudo[388130]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:34:01 compute-0 sudo[388130]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:34:01 compute-0 sudo[388130]: pam_unix(sudo:session): session closed for user root
Jan 20 15:34:01 compute-0 sudo[388155]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:34:01 compute-0 sudo[388155]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:34:01 compute-0 sudo[388155]: pam_unix(sudo:session): session closed for user root
Jan 20 15:34:01 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:34:01 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:34:01 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:34:01.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:34:01 compute-0 sudo[388180]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- raw list --format json
Jan 20 15:34:01 compute-0 sudo[388180]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:34:01 compute-0 nova_compute[250018]: 2026-01-20 15:34:01.900 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:34:02 compute-0 podman[388245]: 2026-01-20 15:34:02.002567125 +0000 UTC m=+0.033127645 container create ff4f067c24df4f7dcd2c41f872c9c6f03823969d3e79a95f062df88acdd1c821 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_pascal, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 20 15:34:02 compute-0 systemd[1]: Started libpod-conmon-ff4f067c24df4f7dcd2c41f872c9c6f03823969d3e79a95f062df88acdd1c821.scope.
Jan 20 15:34:02 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:34:02 compute-0 podman[388245]: 2026-01-20 15:34:01.988292756 +0000 UTC m=+0.018853296 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:34:02 compute-0 podman[388245]: 2026-01-20 15:34:02.091703044 +0000 UTC m=+0.122263584 container init ff4f067c24df4f7dcd2c41f872c9c6f03823969d3e79a95f062df88acdd1c821 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_pascal, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 20 15:34:02 compute-0 podman[388245]: 2026-01-20 15:34:02.100686859 +0000 UTC m=+0.131247379 container start ff4f067c24df4f7dcd2c41f872c9c6f03823969d3e79a95f062df88acdd1c821 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_pascal, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 15:34:02 compute-0 podman[388245]: 2026-01-20 15:34:02.103696891 +0000 UTC m=+0.134257411 container attach ff4f067c24df4f7dcd2c41f872c9c6f03823969d3e79a95f062df88acdd1c821 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_pascal, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 20 15:34:02 compute-0 charming_pascal[388261]: 167 167
Jan 20 15:34:02 compute-0 systemd[1]: libpod-ff4f067c24df4f7dcd2c41f872c9c6f03823969d3e79a95f062df88acdd1c821.scope: Deactivated successfully.
Jan 20 15:34:02 compute-0 podman[388245]: 2026-01-20 15:34:02.1065965 +0000 UTC m=+0.137157010 container died ff4f067c24df4f7dcd2c41f872c9c6f03823969d3e79a95f062df88acdd1c821 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_pascal, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 20 15:34:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-eb74908d349df0dea3aa694b5398e600b2d5cb4944611c19c57f8857cd8e5f7a-merged.mount: Deactivated successfully.
Jan 20 15:34:02 compute-0 podman[388245]: 2026-01-20 15:34:02.145286895 +0000 UTC m=+0.175847425 container remove ff4f067c24df4f7dcd2c41f872c9c6f03823969d3e79a95f062df88acdd1c821 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_pascal, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:34:02 compute-0 systemd[1]: libpod-conmon-ff4f067c24df4f7dcd2c41f872c9c6f03823969d3e79a95f062df88acdd1c821.scope: Deactivated successfully.
Jan 20 15:34:02 compute-0 podman[388283]: 2026-01-20 15:34:02.304227857 +0000 UTC m=+0.050989120 container create fc058d5bf0aca547465c147c89f68687629c9eca0f2722b3e3a19b260895722e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_buck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 15:34:02 compute-0 ceph-mon[74360]: pgmap v3355: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:34:02 compute-0 systemd[1]: Started libpod-conmon-fc058d5bf0aca547465c147c89f68687629c9eca0f2722b3e3a19b260895722e.scope.
Jan 20 15:34:02 compute-0 podman[388283]: 2026-01-20 15:34:02.27643377 +0000 UTC m=+0.023195133 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:34:02 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:34:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c115c8039c74fc356bcd4d7cbcffd38e4e2da73a717eb4ca363b011b5226a900/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 15:34:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c115c8039c74fc356bcd4d7cbcffd38e4e2da73a717eb4ca363b011b5226a900/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 15:34:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c115c8039c74fc356bcd4d7cbcffd38e4e2da73a717eb4ca363b011b5226a900/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 15:34:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c115c8039c74fc356bcd4d7cbcffd38e4e2da73a717eb4ca363b011b5226a900/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 15:34:02 compute-0 podman[388283]: 2026-01-20 15:34:02.396218335 +0000 UTC m=+0.142979618 container init fc058d5bf0aca547465c147c89f68687629c9eca0f2722b3e3a19b260895722e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_buck, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 20 15:34:02 compute-0 podman[388283]: 2026-01-20 15:34:02.401399767 +0000 UTC m=+0.148161030 container start fc058d5bf0aca547465c147c89f68687629c9eca0f2722b3e3a19b260895722e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_buck, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True)
Jan 20 15:34:02 compute-0 podman[388283]: 2026-01-20 15:34:02.405330933 +0000 UTC m=+0.152092196 container attach fc058d5bf0aca547465c147c89f68687629c9eca0f2722b3e3a19b260895722e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_buck, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 20 15:34:02 compute-0 nova_compute[250018]: 2026-01-20 15:34:02.627 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:34:02 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:34:02 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:34:02 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:34:02.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:34:03 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3356: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:34:03 compute-0 frosty_buck[388300]: {
Jan 20 15:34:03 compute-0 frosty_buck[388300]:     "afc3740a-ccee-46f8-b343-22648cc89376": {
Jan 20 15:34:03 compute-0 frosty_buck[388300]:         "ceph_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 15:34:03 compute-0 frosty_buck[388300]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 20 15:34:03 compute-0 frosty_buck[388300]:         "osd_id": 0,
Jan 20 15:34:03 compute-0 frosty_buck[388300]:         "osd_uuid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 15:34:03 compute-0 frosty_buck[388300]:         "type": "bluestore"
Jan 20 15:34:03 compute-0 frosty_buck[388300]:     }
Jan 20 15:34:03 compute-0 frosty_buck[388300]: }
Jan 20 15:34:03 compute-0 systemd[1]: libpod-fc058d5bf0aca547465c147c89f68687629c9eca0f2722b3e3a19b260895722e.scope: Deactivated successfully.
Jan 20 15:34:03 compute-0 podman[388283]: 2026-01-20 15:34:03.288958321 +0000 UTC m=+1.035719594 container died fc058d5bf0aca547465c147c89f68687629c9eca0f2722b3e3a19b260895722e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_buck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 15:34:03 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:34:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-c115c8039c74fc356bcd4d7cbcffd38e4e2da73a717eb4ca363b011b5226a900-merged.mount: Deactivated successfully.
Jan 20 15:34:03 compute-0 podman[388283]: 2026-01-20 15:34:03.366020231 +0000 UTC m=+1.112781514 container remove fc058d5bf0aca547465c147c89f68687629c9eca0f2722b3e3a19b260895722e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_buck, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 15:34:03 compute-0 systemd[1]: libpod-conmon-fc058d5bf0aca547465c147c89f68687629c9eca0f2722b3e3a19b260895722e.scope: Deactivated successfully.
Jan 20 15:34:03 compute-0 sudo[388180]: pam_unix(sudo:session): session closed for user root
Jan 20 15:34:03 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 15:34:03 compute-0 podman[388332]: 2026-01-20 15:34:03.411219233 +0000 UTC m=+0.082101449 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:34:03 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:34:03 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 15:34:03 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:34:03 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 55eda7b1-6cc0-4601-a9a2-6f9591feee00 does not exist
Jan 20 15:34:03 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev c6f6ec0d-81da-499d-8d08-d174f8ac560f does not exist
Jan 20 15:34:03 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev dea74f05-cdcc-46bb-8b31-a0165a7293ad does not exist
Jan 20 15:34:03 compute-0 podman[388324]: 2026-01-20 15:34:03.449806034 +0000 UTC m=+0.122805257 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible)
Jan 20 15:34:03 compute-0 sudo[388382]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:34:03 compute-0 sudo[388382]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:34:03 compute-0 sudo[388382]: pam_unix(sudo:session): session closed for user root
Jan 20 15:34:03 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:34:03 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:34:03 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:34:03.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:34:03 compute-0 sudo[388407]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 15:34:03 compute-0 sudo[388407]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:34:03 compute-0 sudo[388407]: pam_unix(sudo:session): session closed for user root
Jan 20 15:34:04 compute-0 ceph-mon[74360]: pgmap v3356: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:34:04 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:34:04 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:34:04 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:34:04 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:34:04 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:34:04.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:34:05 compute-0 nova_compute[250018]: 2026-01-20 15:34:05.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:34:05 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3357: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:34:05 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:34:05 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:34:05 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:34:05.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:34:06 compute-0 ceph-mon[74360]: pgmap v3357: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:34:06 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:34:06 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:34:06 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:34:06.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:34:06 compute-0 nova_compute[250018]: 2026-01-20 15:34:06.904 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:34:07 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3358: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:34:07 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:34:07 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:34:07 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:34:07.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:34:07 compute-0 nova_compute[250018]: 2026-01-20 15:34:07.631 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:34:08 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:34:08 compute-0 ceph-mon[74360]: pgmap v3358: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:34:08 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:34:08 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:34:08 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:34:08.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:34:09 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3359: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:34:09 compute-0 sudo[388435]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:34:09 compute-0 sudo[388435]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:34:09 compute-0 sudo[388435]: pam_unix(sudo:session): session closed for user root
Jan 20 15:34:09 compute-0 sudo[388460]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:34:09 compute-0 sudo[388460]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:34:09 compute-0 sudo[388460]: pam_unix(sudo:session): session closed for user root
Jan 20 15:34:09 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:34:09 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:34:09 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:34:09.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:34:09 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/1657604232' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:34:10 compute-0 ceph-mon[74360]: pgmap v3359: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:34:10 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:34:10 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:34:10 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:34:10.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:34:11 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3360: 321 pgs: 321 active+clean; 158 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.7 KiB/s rd, 1.5 MiB/s wr, 5 op/s
Jan 20 15:34:11 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:34:11 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:34:11 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:34:11.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:34:11 compute-0 nova_compute[250018]: 2026-01-20 15:34:11.905 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:34:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 15:34:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:34:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 20 15:34:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:34:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0009374854597059371 of space, bias 1.0, pg target 0.28124563791178114 quantized to 32 (current 32)
Jan 20 15:34:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:34:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021614147124511445 of space, bias 1.0, pg target 0.6484244137353433 quantized to 32 (current 32)
Jan 20 15:34:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:34:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 20 15:34:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:34:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 20 15:34:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:34:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 20 15:34:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:34:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 15:34:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:34:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 20 15:34:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:34:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 20 15:34:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:34:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 15:34:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:34:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 20 15:34:12 compute-0 nova_compute[250018]: 2026-01-20 15:34:12.634 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:34:12 compute-0 ceph-mon[74360]: pgmap v3360: 321 pgs: 321 active+clean; 158 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.7 KiB/s rd, 1.5 MiB/s wr, 5 op/s
Jan 20 15:34:12 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:34:12 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:34:12 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:34:12.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:34:13 compute-0 nova_compute[250018]: 2026-01-20 15:34:13.052 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:34:13 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3361: 321 pgs: 321 active+clean; 158 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.7 KiB/s rd, 1.5 MiB/s wr, 5 op/s
Jan 20 15:34:13 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:34:13 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:34:13 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:34:13 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:34:13.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:34:13 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 20 15:34:13 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/634534325' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:34:13 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 20 15:34:13 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/634534325' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:34:13 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/4278523668' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:34:13 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/634534325' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:34:13 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/634534325' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:34:14 compute-0 ceph-mon[74360]: pgmap v3361: 321 pgs: 321 active+clean; 158 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.7 KiB/s rd, 1.5 MiB/s wr, 5 op/s
Jan 20 15:34:14 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/2140012008' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:34:14 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/650000176' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:34:14 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:34:14 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:34:14 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:34:14.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:34:15 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3362: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:34:15 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:34:15 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:34:15 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:34:15.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:34:15 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/1244062530' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:34:16 compute-0 ceph-mon[74360]: pgmap v3362: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:34:16 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:34:16 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:34:16 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:34:16.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:34:16 compute-0 nova_compute[250018]: 2026-01-20 15:34:16.908 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:34:17 compute-0 nova_compute[250018]: 2026-01-20 15:34:17.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:34:17 compute-0 nova_compute[250018]: 2026-01-20 15:34:17.078 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:34:17 compute-0 nova_compute[250018]: 2026-01-20 15:34:17.079 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:34:17 compute-0 nova_compute[250018]: 2026-01-20 15:34:17.079 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:34:17 compute-0 nova_compute[250018]: 2026-01-20 15:34:17.079 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 15:34:17 compute-0 nova_compute[250018]: 2026-01-20 15:34:17.080 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:34:17 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3363: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:34:17 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:34:17 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4205653312' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:34:17 compute-0 nova_compute[250018]: 2026-01-20 15:34:17.524 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:34:17 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:34:17 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:34:17 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:34:17.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:34:17 compute-0 nova_compute[250018]: 2026-01-20 15:34:17.635 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:34:17 compute-0 nova_compute[250018]: 2026-01-20 15:34:17.693 250022 WARNING nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:34:17 compute-0 nova_compute[250018]: 2026-01-20 15:34:17.694 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4166MB free_disk=20.967525482177734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 15:34:17 compute-0 nova_compute[250018]: 2026-01-20 15:34:17.695 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:34:17 compute-0 nova_compute[250018]: 2026-01-20 15:34:17.695 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:34:17 compute-0 nova_compute[250018]: 2026-01-20 15:34:17.796 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 15:34:17 compute-0 nova_compute[250018]: 2026-01-20 15:34:17.797 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 15:34:17 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/2281346832' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:34:17 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/4205653312' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:34:17 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/431993206' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:34:17 compute-0 nova_compute[250018]: 2026-01-20 15:34:17.836 250022 DEBUG nova.scheduler.client.report [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Refreshing inventories for resource provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 20 15:34:17 compute-0 nova_compute[250018]: 2026-01-20 15:34:17.894 250022 DEBUG nova.scheduler.client.report [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Updating ProviderTree inventory for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 20 15:34:17 compute-0 nova_compute[250018]: 2026-01-20 15:34:17.894 250022 DEBUG nova.compute.provider_tree [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Updating inventory in ProviderTree for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 20 15:34:17 compute-0 nova_compute[250018]: 2026-01-20 15:34:17.925 250022 DEBUG nova.scheduler.client.report [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Refreshing aggregate associations for resource provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 20 15:34:17 compute-0 nova_compute[250018]: 2026-01-20 15:34:17.963 250022 DEBUG nova.scheduler.client.report [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Refreshing trait associations for resource provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed, traits: COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_TRUSTED_CERTS,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NODE,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_RESCUE_BFV,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SSE2,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VOLUME_EXTEND,COMPUTE_ACCELERATORS,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_SATA,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE41,HW_CPU_X86_MMX,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_IDE,COMPUTE_DEVICE_TAGGING _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 20 15:34:18 compute-0 nova_compute[250018]: 2026-01-20 15:34:18.009 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:34:18 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:34:18 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:34:18 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/651347568' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:34:18 compute-0 nova_compute[250018]: 2026-01-20 15:34:18.437 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:34:18 compute-0 nova_compute[250018]: 2026-01-20 15:34:18.443 250022 DEBUG nova.compute.provider_tree [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:34:18 compute-0 nova_compute[250018]: 2026-01-20 15:34:18.461 250022 DEBUG nova.scheduler.client.report [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:34:18 compute-0 nova_compute[250018]: 2026-01-20 15:34:18.486 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 15:34:18 compute-0 nova_compute[250018]: 2026-01-20 15:34:18.486 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.791s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:34:18 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:34:18 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:34:18 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:34:18.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:34:19 compute-0 ceph-mon[74360]: pgmap v3363: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:34:19 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/651347568' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:34:19 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3364: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 120 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Jan 20 15:34:19 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:34:19 compute-0 ceph-mon[74360]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #171. Immutable memtables: 0.
Jan 20 15:34:19 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:34:19.576433) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 15:34:19 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:856] [default] [JOB 105] Flushing memtable with next log file: 171
Jan 20 15:34:19 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768923259576535, "job": 105, "event": "flush_started", "num_memtables": 1, "num_entries": 1229, "num_deletes": 251, "total_data_size": 2007336, "memory_usage": 2044944, "flush_reason": "Manual Compaction"}
Jan 20 15:34:19 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:885] [default] [JOB 105] Level-0 flush table #172: started
Jan 20 15:34:19 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:34:19 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:34:19.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:34:19 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768923259599043, "cf_name": "default", "job": 105, "event": "table_file_creation", "file_number": 172, "file_size": 1962624, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 74487, "largest_seqno": 75715, "table_properties": {"data_size": 1956773, "index_size": 3181, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1605, "raw_key_size": 12789, "raw_average_key_size": 20, "raw_value_size": 1944902, "raw_average_value_size": 3062, "num_data_blocks": 140, "num_entries": 635, "num_filter_entries": 635, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768923153, "oldest_key_time": 1768923153, "file_creation_time": 1768923259, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1688fb6f-f397-4aac-a0e8-874ff91d97a7", "db_session_id": "5V3N2TVXYZBCXP55EZHK", "orig_file_number": 172, "seqno_to_time_mapping": "N/A"}}
Jan 20 15:34:19 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 105] Flush lasted 22639 microseconds, and 5402 cpu microseconds.
Jan 20 15:34:19 compute-0 ceph-mon[74360]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 15:34:19 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:34:19.599104) [db/flush_job.cc:967] [default] [JOB 105] Level-0 flush table #172: 1962624 bytes OK
Jan 20 15:34:19 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:34:19.599128) [db/memtable_list.cc:519] [default] Level-0 commit table #172 started
Jan 20 15:34:19 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:34:19.600528) [db/memtable_list.cc:722] [default] Level-0 commit table #172: memtable #1 done
Jan 20 15:34:19 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:34:19.600548) EVENT_LOG_v1 {"time_micros": 1768923259600541, "job": 105, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 15:34:19 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:34:19.600565) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 15:34:19 compute-0 ceph-mon[74360]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 105] Try to delete WAL files size 2001878, prev total WAL file size 2001878, number of live WAL files 2.
Jan 20 15:34:19 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000168.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:34:19 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:34:19.601361) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037303238' seq:72057594037927935, type:22 .. '7061786F730037323830' seq:0, type:0; will stop at (end)
Jan 20 15:34:19 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 106] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 15:34:19 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 105 Base level 0, inputs: [172(1916KB)], [170(12MB)]
Jan 20 15:34:19 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768923259601437, "job": 106, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [172], "files_L6": [170], "score": -1, "input_data_size": 14931602, "oldest_snapshot_seqno": -1}
Jan 20 15:34:19 compute-0 sshd-session[388509]: Invalid user user from 134.122.57.138 port 47044
Jan 20 15:34:19 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 106] Generated table #173: 10228 keys, 12956912 bytes, temperature: kUnknown
Jan 20 15:34:19 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768923259679498, "cf_name": "default", "job": 106, "event": "table_file_creation", "file_number": 173, "file_size": 12956912, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12891129, "index_size": 39110, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25605, "raw_key_size": 270240, "raw_average_key_size": 26, "raw_value_size": 12712272, "raw_average_value_size": 1242, "num_data_blocks": 1487, "num_entries": 10228, "num_filter_entries": 10228, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768917305, "oldest_key_time": 0, "file_creation_time": 1768923259, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1688fb6f-f397-4aac-a0e8-874ff91d97a7", "db_session_id": "5V3N2TVXYZBCXP55EZHK", "orig_file_number": 173, "seqno_to_time_mapping": "N/A"}}
Jan 20 15:34:19 compute-0 ceph-mon[74360]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 15:34:19 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:34:19.679837) [db/compaction/compaction_job.cc:1663] [default] [JOB 106] Compacted 1@0 + 1@6 files to L6 => 12956912 bytes
Jan 20 15:34:19 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:34:19.682636) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 190.9 rd, 165.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.9, 12.4 +0.0 blob) out(12.4 +0.0 blob), read-write-amplify(14.2) write-amplify(6.6) OK, records in: 10751, records dropped: 523 output_compression: NoCompression
Jan 20 15:34:19 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:34:19.682654) EVENT_LOG_v1 {"time_micros": 1768923259682646, "job": 106, "event": "compaction_finished", "compaction_time_micros": 78232, "compaction_time_cpu_micros": 32418, "output_level": 6, "num_output_files": 1, "total_output_size": 12956912, "num_input_records": 10751, "num_output_records": 10228, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 15:34:19 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000172.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:34:19 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768923259683325, "job": 106, "event": "table_file_deletion", "file_number": 172}
Jan 20 15:34:19 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000170.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:34:19 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768923259686163, "job": 106, "event": "table_file_deletion", "file_number": 170}
Jan 20 15:34:19 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:34:19.601215) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:34:19 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:34:19.686253) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:34:19 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:34:19.686258) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:34:19 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:34:19.686260) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:34:19 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:34:19.686261) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:34:19 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:34:19.686263) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:34:19 compute-0 sshd-session[388509]: Connection closed by invalid user user 134.122.57.138 port 47044 [preauth]
Jan 20 15:34:20 compute-0 nova_compute[250018]: 2026-01-20 15:34:20.487 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:34:20 compute-0 nova_compute[250018]: 2026-01-20 15:34:20.488 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:34:20 compute-0 nova_compute[250018]: 2026-01-20 15:34:20.488 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 15:34:20 compute-0 ceph-mon[74360]: pgmap v3364: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 120 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Jan 20 15:34:20 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:34:20 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:34:20 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:34:20.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:34:21 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3365: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Jan 20 15:34:21 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:34:21 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:34:21 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:34:21.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:34:21 compute-0 nova_compute[250018]: 2026-01-20 15:34:21.910 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:34:22 compute-0 nova_compute[250018]: 2026-01-20 15:34:22.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:34:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:34:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:34:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:34:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:34:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:34:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:34:22 compute-0 nova_compute[250018]: 2026-01-20 15:34:22.637 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:34:22 compute-0 ceph-mon[74360]: pgmap v3365: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Jan 20 15:34:22 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:34:22 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:34:22 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:34:22.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:34:23 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3366: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 284 KiB/s wr, 95 op/s
Jan 20 15:34:23 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:34:23 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:34:23 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:34:23 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:34:23.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:34:24 compute-0 ovn_controller[148666]: 2026-01-20T15:34:24Z|00775|memory_trim|INFO|Detected inactivity (last active 30881 ms ago): trimming memory
Jan 20 15:34:24 compute-0 ceph-mon[74360]: pgmap v3366: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 284 KiB/s wr, 95 op/s
Jan 20 15:34:24 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:34:24 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:34:24 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:34:24.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:34:25 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3367: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 284 KiB/s wr, 95 op/s
Jan 20 15:34:25 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:34:25 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:34:25 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:34:25.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:34:25 compute-0 ceph-mon[74360]: pgmap v3367: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 284 KiB/s wr, 95 op/s
Jan 20 15:34:26 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:34:26 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:34:26 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:34:26.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:34:26 compute-0 nova_compute[250018]: 2026-01-20 15:34:26.913 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:34:27 compute-0 nova_compute[250018]: 2026-01-20 15:34:27.045 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:34:27 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3368: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 20 15:34:27 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:34:27 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:34:27 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:34:27.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:34:27 compute-0 nova_compute[250018]: 2026-01-20 15:34:27.638 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:34:28 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:34:28 compute-0 ceph-mon[74360]: pgmap v3368: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 20 15:34:28 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:34:28 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:34:28 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:34:28.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:34:29 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3369: 321 pgs: 321 active+clean; 164 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 767 B/s wr, 75 op/s
Jan 20 15:34:29 compute-0 sudo[388543]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:34:29 compute-0 sudo[388543]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:34:29 compute-0 sudo[388543]: pam_unix(sudo:session): session closed for user root
Jan 20 15:34:29 compute-0 sudo[388568]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:34:29 compute-0 sudo[388568]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:34:29 compute-0 sudo[388568]: pam_unix(sudo:session): session closed for user root
Jan 20 15:34:29 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:34:29 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:34:29 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:34:29.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:34:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:34:30.215 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=80, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=79) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:34:30 compute-0 nova_compute[250018]: 2026-01-20 15:34:30.215 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:34:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:34:30.217 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 15:34:30 compute-0 ceph-mon[74360]: pgmap v3369: 321 pgs: 321 active+clean; 164 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 767 B/s wr, 75 op/s
Jan 20 15:34:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:34:30.808 160071 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:34:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:34:30.808 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:34:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:34:30.808 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:34:30 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:34:30 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:34:30 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:34:30.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:34:31 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3370: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 1.6 KiB/s wr, 95 op/s
Jan 20 15:34:31 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:34:31 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:34:31 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:34:31.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:34:31 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/22013695' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:34:31 compute-0 ceph-mon[74360]: pgmap v3370: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 1.6 KiB/s wr, 95 op/s
Jan 20 15:34:31 compute-0 nova_compute[250018]: 2026-01-20 15:34:31.915 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:34:32 compute-0 nova_compute[250018]: 2026-01-20 15:34:32.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:34:32 compute-0 nova_compute[250018]: 2026-01-20 15:34:32.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:34:32 compute-0 nova_compute[250018]: 2026-01-20 15:34:32.640 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:34:32 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:34:32 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:34:32 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:34:32.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:34:33 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3371: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Jan 20 15:34:33 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:34:33 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:34:33 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:34:33 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:34:33.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:34:33 compute-0 ceph-mon[74360]: pgmap v3371: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Jan 20 15:34:34 compute-0 podman[388597]: 2026-01-20 15:34:34.516514071 +0000 UTC m=+0.092618297 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 20 15:34:34 compute-0 podman[388596]: 2026-01-20 15:34:34.517068515 +0000 UTC m=+0.093662904 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller)
Jan 20 15:34:34 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:34:34 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:34:34 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:34:34.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:34:35 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3372: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Jan 20 15:34:35 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:34:35 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:34:35 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:34:35.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:34:36 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:34:36.219 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=367c1a2c-b16a-4828-ab5a-626bb50023b4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '80'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:34:36 compute-0 ceph-mon[74360]: pgmap v3372: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Jan 20 15:34:36 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:34:36 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:34:36 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:34:36.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:34:36 compute-0 nova_compute[250018]: 2026-01-20 15:34:36.918 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:34:37 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3373: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Jan 20 15:34:37 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:34:37 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:34:37 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:34:37.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:34:37 compute-0 nova_compute[250018]: 2026-01-20 15:34:37.642 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:34:37 compute-0 ceph-mon[74360]: pgmap v3373: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Jan 20 15:34:38 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:34:38 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:34:38 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:34:38 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:34:38.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:34:39 compute-0 nova_compute[250018]: 2026-01-20 15:34:39.070 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:34:39 compute-0 nova_compute[250018]: 2026-01-20 15:34:39.070 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 15:34:39 compute-0 nova_compute[250018]: 2026-01-20 15:34:39.071 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 15:34:39 compute-0 nova_compute[250018]: 2026-01-20 15:34:39.085 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 20 15:34:39 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3374: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Jan 20 15:34:39 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:34:39 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:34:39 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:34:39.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:34:40 compute-0 ceph-mon[74360]: pgmap v3374: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Jan 20 15:34:40 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:34:40 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:34:40 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:34:40.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:34:41 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3375: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 852 B/s wr, 23 op/s
Jan 20 15:34:41 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/3059552772' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:34:41 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:34:41 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:34:41 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:34:41.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:34:41 compute-0 nova_compute[250018]: 2026-01-20 15:34:41.967 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:34:42 compute-0 ceph-mon[74360]: pgmap v3375: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 852 B/s wr, 23 op/s
Jan 20 15:34:42 compute-0 nova_compute[250018]: 2026-01-20 15:34:42.643 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:34:42 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:34:42 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:34:42 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:34:42.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:34:43 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3376: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:34:43 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:34:43 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:34:43 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:34:43 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:34:43.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:34:44 compute-0 ceph-mon[74360]: pgmap v3376: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:34:44 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:34:44 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:34:44 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:34:44.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:34:45 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3377: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:34:45 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:34:45 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:34:45 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:34:45.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:34:45 compute-0 ceph-mon[74360]: pgmap v3377: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:34:46 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:34:46 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:34:46 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:34:46.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:34:46 compute-0 nova_compute[250018]: 2026-01-20 15:34:46.970 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:34:47 compute-0 nova_compute[250018]: 2026-01-20 15:34:47.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:34:47 compute-0 nova_compute[250018]: 2026-01-20 15:34:47.051 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 20 15:34:47 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3378: 321 pgs: 321 active+clean; 139 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 6.2 KiB/s rd, 492 KiB/s wr, 11 op/s
Jan 20 15:34:47 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:34:47 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:34:47 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:34:47.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:34:47 compute-0 nova_compute[250018]: 2026-01-20 15:34:47.644 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:34:48 compute-0 ceph-mon[74360]: pgmap v3378: 321 pgs: 321 active+clean; 139 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 6.2 KiB/s rd, 492 KiB/s wr, 11 op/s
Jan 20 15:34:48 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/3471853459' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:34:48 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/3172465850' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:34:48 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:34:48 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:34:48 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:34:48 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:34:48.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:34:49 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3379: 321 pgs: 321 active+clean; 156 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 6.6 KiB/s rd, 1.1 MiB/s wr, 13 op/s
Jan 20 15:34:49 compute-0 sudo[388648]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:34:49 compute-0 sudo[388648]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:34:49 compute-0 sudo[388648]: pam_unix(sudo:session): session closed for user root
Jan 20 15:34:49 compute-0 sudo[388673]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:34:49 compute-0 sudo[388673]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:34:49 compute-0 sudo[388673]: pam_unix(sudo:session): session closed for user root
Jan 20 15:34:49 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:34:49 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:34:49 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:34:49.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:34:50 compute-0 ceph-mon[74360]: pgmap v3379: 321 pgs: 321 active+clean; 156 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 6.6 KiB/s rd, 1.1 MiB/s wr, 13 op/s
Jan 20 15:34:50 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:34:50 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:34:50 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:34:50.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:34:51 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3380: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:34:51 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:34:51 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:34:51 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:34:51.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:34:51 compute-0 nova_compute[250018]: 2026-01-20 15:34:51.971 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:34:52 compute-0 ceph-mon[74360]: pgmap v3380: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:34:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:34:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:34:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:34:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:34:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:34:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:34:52 compute-0 nova_compute[250018]: 2026-01-20 15:34:52.646 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:34:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Optimize plan auto_2026-01-20_15:34:52
Jan 20 15:34:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 15:34:52 compute-0 ceph-mgr[74653]: [balancer INFO root] do_upmap
Jan 20 15:34:52 compute-0 ceph-mgr[74653]: [balancer INFO root] pools ['vms', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.log', 'images', 'backups', 'cephfs.cephfs.data', 'volumes', 'default.rgw.meta', '.mgr', 'default.rgw.control']
Jan 20 15:34:52 compute-0 ceph-mgr[74653]: [balancer INFO root] prepared 0/10 changes
Jan 20 15:34:52 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:34:52 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:34:52 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:34:52.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:34:53 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3381: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:34:53 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:34:53 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:34:53 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:34:53 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:34:53.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:34:54 compute-0 ceph-mon[74360]: pgmap v3381: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:34:54 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:34:54 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:34:54 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:34:54.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:34:55 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3382: 321 pgs: 321 active+clean; 162 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 947 KiB/s rd, 1.8 MiB/s wr, 86 op/s
Jan 20 15:34:55 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:34:55 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:34:55 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:34:55.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:34:56 compute-0 ceph-mon[74360]: pgmap v3382: 321 pgs: 321 active+clean; 162 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 947 KiB/s rd, 1.8 MiB/s wr, 86 op/s
Jan 20 15:34:56 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:34:56 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:34:56 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:34:56.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:34:56 compute-0 nova_compute[250018]: 2026-01-20 15:34:56.974 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:34:57 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3383: 321 pgs: 321 active+clean; 138 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 126 op/s
Jan 20 15:34:57 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:34:57 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:34:57 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:34:57.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:34:57 compute-0 nova_compute[250018]: 2026-01-20 15:34:57.648 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:34:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 15:34:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 15:34:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 15:34:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 15:34:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 15:34:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 15:34:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 15:34:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 15:34:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 15:34:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 15:34:58 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:34:58 compute-0 ceph-mon[74360]: pgmap v3383: 321 pgs: 321 active+clean; 138 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 126 op/s
Jan 20 15:34:58 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/1237257214' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:34:58 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:34:58 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:34:58 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:34:58.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:34:59 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3384: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 115 op/s
Jan 20 15:34:59 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:34:59 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:34:59 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:34:59.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:34:59 compute-0 ceph-mon[74360]: pgmap v3384: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 115 op/s
Jan 20 15:34:59 compute-0 sshd-session[388702]: Invalid user ubuntu from 134.122.57.138 port 40340
Jan 20 15:35:00 compute-0 sshd-session[388702]: Connection closed by invalid user ubuntu 134.122.57.138 port 40340 [preauth]
Jan 20 15:35:00 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:35:00 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:35:00 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:35:00.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:35:01 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3385: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 726 KiB/s wr, 113 op/s
Jan 20 15:35:01 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:35:01 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:35:01 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:35:01.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:35:01 compute-0 nova_compute[250018]: 2026-01-20 15:35:01.975 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:35:02 compute-0 ceph-mon[74360]: pgmap v3385: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 726 KiB/s wr, 113 op/s
Jan 20 15:35:02 compute-0 nova_compute[250018]: 2026-01-20 15:35:02.649 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:35:02 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:35:02 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:35:02 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:35:02.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:35:03 compute-0 nova_compute[250018]: 2026-01-20 15:35:03.079 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:35:03 compute-0 nova_compute[250018]: 2026-01-20 15:35:03.080 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 20 15:35:03 compute-0 nova_compute[250018]: 2026-01-20 15:35:03.101 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 20 15:35:03 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3386: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 99 op/s
Jan 20 15:35:03 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:35:03 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:35:03 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:35:03 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:35:03.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:35:03 compute-0 sudo[388707]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:35:03 compute-0 sudo[388707]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:35:03 compute-0 sudo[388707]: pam_unix(sudo:session): session closed for user root
Jan 20 15:35:03 compute-0 sudo[388732]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:35:03 compute-0 sudo[388732]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:35:03 compute-0 sudo[388732]: pam_unix(sudo:session): session closed for user root
Jan 20 15:35:03 compute-0 sudo[388757]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:35:03 compute-0 sudo[388757]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:35:03 compute-0 sudo[388757]: pam_unix(sudo:session): session closed for user root
Jan 20 15:35:04 compute-0 sudo[388782]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 15:35:04 compute-0 sudo[388782]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:35:04 compute-0 ceph-mon[74360]: pgmap v3386: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 99 op/s
Jan 20 15:35:04 compute-0 sudo[388782]: pam_unix(sudo:session): session closed for user root
Jan 20 15:35:04 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 15:35:04 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:35:04 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 20 15:35:04 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 15:35:04 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 20 15:35:04 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:35:04 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 12d10b6e-1d7e-48fa-84c8-e29a6bf669df does not exist
Jan 20 15:35:04 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 2a066425-7333-4bee-a9af-2b3885ac74cd does not exist
Jan 20 15:35:04 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 887f1ecc-369e-46e0-b8db-3f2832bf6962 does not exist
Jan 20 15:35:04 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 20 15:35:04 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 15:35:04 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 20 15:35:04 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 15:35:04 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 15:35:04 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:35:04 compute-0 sudo[388839]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:35:04 compute-0 sudo[388839]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:35:04 compute-0 sudo[388839]: pam_unix(sudo:session): session closed for user root
Jan 20 15:35:04 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:35:04 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:35:04 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:35:04.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:35:04 compute-0 sudo[388876]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:35:04 compute-0 sudo[388876]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:35:04 compute-0 sudo[388876]: pam_unix(sudo:session): session closed for user root
Jan 20 15:35:04 compute-0 podman[388864]: 2026-01-20 15:35:04.950768379 +0000 UTC m=+0.076829265 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:35:04 compute-0 sudo[388931]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:35:04 compute-0 sudo[388931]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:35:04 compute-0 sudo[388931]: pam_unix(sudo:session): session closed for user root
Jan 20 15:35:05 compute-0 podman[388863]: 2026-01-20 15:35:05.022623588 +0000 UTC m=+0.147906162 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible)
Jan 20 15:35:05 compute-0 sudo[388959]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 15:35:05 compute-0 sudo[388959]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:35:05 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3387: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 99 op/s
Jan 20 15:35:05 compute-0 podman[389024]: 2026-01-20 15:35:05.325660959 +0000 UTC m=+0.020043237 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:35:05 compute-0 podman[389024]: 2026-01-20 15:35:05.538303875 +0000 UTC m=+0.232686133 container create ae46ec8e5ef4223285beb92402c00ca291f6558e9d5f65cb9c027ef20442bf8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_burnell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 20 15:35:05 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:35:05 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 15:35:05 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:35:05 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 15:35:05 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 15:35:05 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:35:05 compute-0 systemd[1]: Started libpod-conmon-ae46ec8e5ef4223285beb92402c00ca291f6558e9d5f65cb9c027ef20442bf8a.scope.
Jan 20 15:35:05 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:35:05 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:35:05 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:35:05.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:35:05 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:35:05 compute-0 podman[389024]: 2026-01-20 15:35:05.67420477 +0000 UTC m=+0.368587028 container init ae46ec8e5ef4223285beb92402c00ca291f6558e9d5f65cb9c027ef20442bf8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_burnell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 15:35:05 compute-0 podman[389024]: 2026-01-20 15:35:05.682488626 +0000 UTC m=+0.376870884 container start ae46ec8e5ef4223285beb92402c00ca291f6558e9d5f65cb9c027ef20442bf8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_burnell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:35:05 compute-0 podman[389024]: 2026-01-20 15:35:05.685935759 +0000 UTC m=+0.380318027 container attach ae46ec8e5ef4223285beb92402c00ca291f6558e9d5f65cb9c027ef20442bf8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_burnell, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 15:35:05 compute-0 kind_burnell[389040]: 167 167
Jan 20 15:35:05 compute-0 systemd[1]: libpod-ae46ec8e5ef4223285beb92402c00ca291f6558e9d5f65cb9c027ef20442bf8a.scope: Deactivated successfully.
Jan 20 15:35:05 compute-0 podman[389024]: 2026-01-20 15:35:05.690825003 +0000 UTC m=+0.385207261 container died ae46ec8e5ef4223285beb92402c00ca291f6558e9d5f65cb9c027ef20442bf8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_burnell, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 20 15:35:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-bebdd65524ae267f1f45172d848013a2ff209e8cbcb243c751db3a0603891787-merged.mount: Deactivated successfully.
Jan 20 15:35:05 compute-0 podman[389024]: 2026-01-20 15:35:05.735822879 +0000 UTC m=+0.430205137 container remove ae46ec8e5ef4223285beb92402c00ca291f6558e9d5f65cb9c027ef20442bf8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_burnell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 20 15:35:05 compute-0 systemd[1]: libpod-conmon-ae46ec8e5ef4223285beb92402c00ca291f6558e9d5f65cb9c027ef20442bf8a.scope: Deactivated successfully.
Jan 20 15:35:05 compute-0 podman[389063]: 2026-01-20 15:35:05.901713151 +0000 UTC m=+0.040987668 container create cd4e4a6ab7b3138c5e55a9a84acc4b4d23937539aefb7c51b00860e3636f929b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_bartik, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Jan 20 15:35:05 compute-0 systemd[1]: Started libpod-conmon-cd4e4a6ab7b3138c5e55a9a84acc4b4d23937539aefb7c51b00860e3636f929b.scope.
Jan 20 15:35:05 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:35:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8a2b0179db4388abdd01068dcd6e7520c1934d956501920f6cceba8e6c1d835/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 15:35:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8a2b0179db4388abdd01068dcd6e7520c1934d956501920f6cceba8e6c1d835/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 15:35:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8a2b0179db4388abdd01068dcd6e7520c1934d956501920f6cceba8e6c1d835/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 15:35:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8a2b0179db4388abdd01068dcd6e7520c1934d956501920f6cceba8e6c1d835/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 15:35:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8a2b0179db4388abdd01068dcd6e7520c1934d956501920f6cceba8e6c1d835/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 15:35:05 compute-0 podman[389063]: 2026-01-20 15:35:05.886100495 +0000 UTC m=+0.025375022 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:35:06 compute-0 podman[389063]: 2026-01-20 15:35:06.018084063 +0000 UTC m=+0.157358610 container init cd4e4a6ab7b3138c5e55a9a84acc4b4d23937539aefb7c51b00860e3636f929b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_bartik, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 15:35:06 compute-0 podman[389063]: 2026-01-20 15:35:06.02494727 +0000 UTC m=+0.164221787 container start cd4e4a6ab7b3138c5e55a9a84acc4b4d23937539aefb7c51b00860e3636f929b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_bartik, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Jan 20 15:35:06 compute-0 podman[389063]: 2026-01-20 15:35:06.028014974 +0000 UTC m=+0.167289511 container attach cd4e4a6ab7b3138c5e55a9a84acc4b4d23937539aefb7c51b00860e3636f929b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_bartik, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 20 15:35:06 compute-0 clever_bartik[389079]: --> passed data devices: 0 physical, 1 LVM
Jan 20 15:35:06 compute-0 clever_bartik[389079]: --> relative data size: 1.0
Jan 20 15:35:06 compute-0 clever_bartik[389079]: --> All data devices are unavailable
Jan 20 15:35:06 compute-0 systemd[1]: libpod-cd4e4a6ab7b3138c5e55a9a84acc4b4d23937539aefb7c51b00860e3636f929b.scope: Deactivated successfully.
Jan 20 15:35:06 compute-0 podman[389063]: 2026-01-20 15:35:06.856300052 +0000 UTC m=+0.995574589 container died cd4e4a6ab7b3138c5e55a9a84acc4b4d23937539aefb7c51b00860e3636f929b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_bartik, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 15:35:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-e8a2b0179db4388abdd01068dcd6e7520c1934d956501920f6cceba8e6c1d835-merged.mount: Deactivated successfully.
Jan 20 15:35:06 compute-0 ceph-mon[74360]: pgmap v3387: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 99 op/s
Jan 20 15:35:06 compute-0 podman[389063]: 2026-01-20 15:35:06.913518532 +0000 UTC m=+1.052793049 container remove cd4e4a6ab7b3138c5e55a9a84acc4b4d23937539aefb7c51b00860e3636f929b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_bartik, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:35:06 compute-0 systemd[1]: libpod-conmon-cd4e4a6ab7b3138c5e55a9a84acc4b4d23937539aefb7c51b00860e3636f929b.scope: Deactivated successfully.
Jan 20 15:35:06 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:35:06 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:35:06 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:35:06.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:35:06 compute-0 sudo[388959]: pam_unix(sudo:session): session closed for user root
Jan 20 15:35:06 compute-0 nova_compute[250018]: 2026-01-20 15:35:06.977 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:35:07 compute-0 sudo[389106]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:35:07 compute-0 sudo[389106]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:35:07 compute-0 sudo[389106]: pam_unix(sudo:session): session closed for user root
Jan 20 15:35:07 compute-0 sudo[389131]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:35:07 compute-0 sudo[389131]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:35:07 compute-0 sudo[389131]: pam_unix(sudo:session): session closed for user root
Jan 20 15:35:07 compute-0 nova_compute[250018]: 2026-01-20 15:35:07.072 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:35:07 compute-0 sudo[389156]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:35:07 compute-0 sudo[389156]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:35:07 compute-0 sudo[389156]: pam_unix(sudo:session): session closed for user root
Jan 20 15:35:07 compute-0 sudo[389181]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- lvm list --format json
Jan 20 15:35:07 compute-0 sudo[389181]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:35:07 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3388: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 852 B/s wr, 40 op/s
Jan 20 15:35:07 compute-0 podman[389246]: 2026-01-20 15:35:07.477326851 +0000 UTC m=+0.035398376 container create 69eb422f07ad1630a4f8704526769950505b5900b057982b69ec09048b17d383 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_murdock, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 20 15:35:07 compute-0 systemd[1]: Started libpod-conmon-69eb422f07ad1630a4f8704526769950505b5900b057982b69ec09048b17d383.scope.
Jan 20 15:35:07 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:35:07 compute-0 podman[389246]: 2026-01-20 15:35:07.554172445 +0000 UTC m=+0.112244000 container init 69eb422f07ad1630a4f8704526769950505b5900b057982b69ec09048b17d383 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_murdock, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 20 15:35:07 compute-0 podman[389246]: 2026-01-20 15:35:07.463308049 +0000 UTC m=+0.021379594 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:35:07 compute-0 podman[389246]: 2026-01-20 15:35:07.565905726 +0000 UTC m=+0.123977251 container start 69eb422f07ad1630a4f8704526769950505b5900b057982b69ec09048b17d383 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_murdock, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:35:07 compute-0 podman[389246]: 2026-01-20 15:35:07.5697113 +0000 UTC m=+0.127782815 container attach 69eb422f07ad1630a4f8704526769950505b5900b057982b69ec09048b17d383 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_murdock, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 15:35:07 compute-0 infallible_murdock[389263]: 167 167
Jan 20 15:35:07 compute-0 systemd[1]: libpod-69eb422f07ad1630a4f8704526769950505b5900b057982b69ec09048b17d383.scope: Deactivated successfully.
Jan 20 15:35:07 compute-0 podman[389246]: 2026-01-20 15:35:07.572927377 +0000 UTC m=+0.130998912 container died 69eb422f07ad1630a4f8704526769950505b5900b057982b69ec09048b17d383 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_murdock, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 20 15:35:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-73fad72705d23a2a20df17446a6d7f23a1b983cf805180603540b09feb1b4ded-merged.mount: Deactivated successfully.
Jan 20 15:35:07 compute-0 podman[389246]: 2026-01-20 15:35:07.616757291 +0000 UTC m=+0.174828816 container remove 69eb422f07ad1630a4f8704526769950505b5900b057982b69ec09048b17d383 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_murdock, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 15:35:07 compute-0 systemd[1]: libpod-conmon-69eb422f07ad1630a4f8704526769950505b5900b057982b69ec09048b17d383.scope: Deactivated successfully.
Jan 20 15:35:07 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:35:07 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:35:07 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:35:07.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:35:07 compute-0 nova_compute[250018]: 2026-01-20 15:35:07.651 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:35:07 compute-0 podman[389287]: 2026-01-20 15:35:07.824597878 +0000 UTC m=+0.055414332 container create 9ca9a00368f29a480f427b0509e7c71b2f491d3a7d03a2d18efd2329b5f3b141 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_golick, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 15:35:07 compute-0 ceph-mon[74360]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 20 15:35:07 compute-0 ceph-mon[74360]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6000.0 total, 600.0 interval
                                           Cumulative writes: 17K writes, 75K keys, 17K commit groups, 1.0 writes per commit group, ingest: 0.11 GB, 0.02 MB/s
                                           Cumulative WAL: 17K writes, 17K syncs, 1.00 writes per sync, written: 0.11 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1508 writes, 6685 keys, 1508 commit groups, 1.0 writes per commit group, ingest: 10.36 MB, 0.02 MB/s
                                           Interval WAL: 1508 writes, 1508 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     56.2      1.79              0.32        53    0.034       0      0       0.0       0.0
                                             L6      1/0   12.36 MB   0.0      0.6     0.1      0.5       0.5      0.0       0.0   5.3    144.5    123.8      4.27              1.64        52    0.082    396K    28K       0.0       0.0
                                            Sum      1/0   12.36 MB   0.0      0.6     0.1      0.5       0.6      0.1       0.0   6.3    101.9    103.9      6.06              1.97       105    0.058    396K    28K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   8.0    136.3    139.0      0.59              0.27        12    0.049     63K   3065       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.6     0.1      0.5       0.5      0.0       0.0   0.0    144.5    123.8      4.27              1.64        52    0.082    396K    28K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     56.3      1.78              0.32        52    0.034       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     14.7      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.098, interval 0.010
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.61 GB write, 0.10 MB/s write, 0.60 GB read, 0.10 MB/s read, 6.1 seconds
                                           Interval compaction: 0.08 GB write, 0.14 MB/s write, 0.08 GB read, 0.13 MB/s read, 0.6 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5576af6451f0#2 capacity: 304.00 MB usage: 68.21 MB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 0 last_secs: 0.00089 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3941,65.37 MB,21.5037%) FilterBlock(106,1.06 MB,0.349722%) IndexBlock(106,1.77 MB,0.582414%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 20 15:35:07 compute-0 systemd[1]: Started libpod-conmon-9ca9a00368f29a480f427b0509e7c71b2f491d3a7d03a2d18efd2329b5f3b141.scope.
Jan 20 15:35:07 compute-0 podman[389287]: 2026-01-20 15:35:07.798879646 +0000 UTC m=+0.029696130 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:35:07 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:35:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80921a190ea116e89c4b7f1cc74208a837edc9e605d78195b4772f9d8920eceb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 15:35:07 compute-0 ceph-mon[74360]: pgmap v3388: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 852 B/s wr, 40 op/s
Jan 20 15:35:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80921a190ea116e89c4b7f1cc74208a837edc9e605d78195b4772f9d8920eceb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 15:35:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80921a190ea116e89c4b7f1cc74208a837edc9e605d78195b4772f9d8920eceb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 15:35:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80921a190ea116e89c4b7f1cc74208a837edc9e605d78195b4772f9d8920eceb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 15:35:07 compute-0 podman[389287]: 2026-01-20 15:35:07.926725271 +0000 UTC m=+0.157541745 container init 9ca9a00368f29a480f427b0509e7c71b2f491d3a7d03a2d18efd2329b5f3b141 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_golick, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 20 15:35:07 compute-0 podman[389287]: 2026-01-20 15:35:07.937022232 +0000 UTC m=+0.167838676 container start 9ca9a00368f29a480f427b0509e7c71b2f491d3a7d03a2d18efd2329b5f3b141 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_golick, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 20 15:35:07 compute-0 podman[389287]: 2026-01-20 15:35:07.941635977 +0000 UTC m=+0.172452441 container attach 9ca9a00368f29a480f427b0509e7c71b2f491d3a7d03a2d18efd2329b5f3b141 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_golick, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 20 15:35:08 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:35:08 compute-0 cranky_golick[389304]: {
Jan 20 15:35:08 compute-0 cranky_golick[389304]:     "0": [
Jan 20 15:35:08 compute-0 cranky_golick[389304]:         {
Jan 20 15:35:08 compute-0 cranky_golick[389304]:             "devices": [
Jan 20 15:35:08 compute-0 cranky_golick[389304]:                 "/dev/loop3"
Jan 20 15:35:08 compute-0 cranky_golick[389304]:             ],
Jan 20 15:35:08 compute-0 cranky_golick[389304]:             "lv_name": "ceph_lv0",
Jan 20 15:35:08 compute-0 cranky_golick[389304]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 15:35:08 compute-0 cranky_golick[389304]:             "lv_size": "7511998464",
Jan 20 15:35:08 compute-0 cranky_golick[389304]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e399cf45-e6b6-5393-99f1-75c601d3f188,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afc3740a-ccee-46f8-b343-22648cc89376,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 20 15:35:08 compute-0 cranky_golick[389304]:             "lv_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 15:35:08 compute-0 cranky_golick[389304]:             "name": "ceph_lv0",
Jan 20 15:35:08 compute-0 cranky_golick[389304]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 15:35:08 compute-0 cranky_golick[389304]:             "tags": {
Jan 20 15:35:08 compute-0 cranky_golick[389304]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 15:35:08 compute-0 cranky_golick[389304]:                 "ceph.block_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 15:35:08 compute-0 cranky_golick[389304]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 15:35:08 compute-0 cranky_golick[389304]:                 "ceph.cluster_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 15:35:08 compute-0 cranky_golick[389304]:                 "ceph.cluster_name": "ceph",
Jan 20 15:35:08 compute-0 cranky_golick[389304]:                 "ceph.crush_device_class": "",
Jan 20 15:35:08 compute-0 cranky_golick[389304]:                 "ceph.encrypted": "0",
Jan 20 15:35:08 compute-0 cranky_golick[389304]:                 "ceph.osd_fsid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 15:35:08 compute-0 cranky_golick[389304]:                 "ceph.osd_id": "0",
Jan 20 15:35:08 compute-0 cranky_golick[389304]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 15:35:08 compute-0 cranky_golick[389304]:                 "ceph.type": "block",
Jan 20 15:35:08 compute-0 cranky_golick[389304]:                 "ceph.vdo": "0"
Jan 20 15:35:08 compute-0 cranky_golick[389304]:             },
Jan 20 15:35:08 compute-0 cranky_golick[389304]:             "type": "block",
Jan 20 15:35:08 compute-0 cranky_golick[389304]:             "vg_name": "ceph_vg0"
Jan 20 15:35:08 compute-0 cranky_golick[389304]:         }
Jan 20 15:35:08 compute-0 cranky_golick[389304]:     ]
Jan 20 15:35:08 compute-0 cranky_golick[389304]: }
Jan 20 15:35:08 compute-0 systemd[1]: libpod-9ca9a00368f29a480f427b0509e7c71b2f491d3a7d03a2d18efd2329b5f3b141.scope: Deactivated successfully.
Jan 20 15:35:08 compute-0 podman[389287]: 2026-01-20 15:35:08.709100998 +0000 UTC m=+0.939917452 container died 9ca9a00368f29a480f427b0509e7c71b2f491d3a7d03a2d18efd2329b5f3b141 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_golick, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 15:35:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-80921a190ea116e89c4b7f1cc74208a837edc9e605d78195b4772f9d8920eceb-merged.mount: Deactivated successfully.
Jan 20 15:35:08 compute-0 podman[389287]: 2026-01-20 15:35:08.758519685 +0000 UTC m=+0.989336129 container remove 9ca9a00368f29a480f427b0509e7c71b2f491d3a7d03a2d18efd2329b5f3b141 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_golick, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef)
Jan 20 15:35:08 compute-0 systemd[1]: libpod-conmon-9ca9a00368f29a480f427b0509e7c71b2f491d3a7d03a2d18efd2329b5f3b141.scope: Deactivated successfully.
Jan 20 15:35:08 compute-0 sudo[389181]: pam_unix(sudo:session): session closed for user root
Jan 20 15:35:08 compute-0 sudo[389326]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:35:08 compute-0 sudo[389326]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:35:08 compute-0 sudo[389326]: pam_unix(sudo:session): session closed for user root
Jan 20 15:35:08 compute-0 sudo[389351]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:35:08 compute-0 sudo[389351]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:35:08 compute-0 sudo[389351]: pam_unix(sudo:session): session closed for user root
Jan 20 15:35:08 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:35:08 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:35:08 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:35:08.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:35:08 compute-0 sudo[389376]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:35:08 compute-0 sudo[389376]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:35:08 compute-0 sudo[389376]: pam_unix(sudo:session): session closed for user root
Jan 20 15:35:09 compute-0 sudo[389401]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- raw list --format json
Jan 20 15:35:09 compute-0 sudo[389401]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:35:09 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3389: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 0 B/s wr, 0 op/s
Jan 20 15:35:09 compute-0 podman[389469]: 2026-01-20 15:35:09.37758309 +0000 UTC m=+0.035288562 container create 459557ba172833734325ee06a2d9bc3bbeeb3be83c5f33f0e0f5c4f4a8a9e221 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_kepler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 20 15:35:09 compute-0 systemd[1]: Started libpod-conmon-459557ba172833734325ee06a2d9bc3bbeeb3be83c5f33f0e0f5c4f4a8a9e221.scope.
Jan 20 15:35:09 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:35:09 compute-0 podman[389469]: 2026-01-20 15:35:09.439080387 +0000 UTC m=+0.096785879 container init 459557ba172833734325ee06a2d9bc3bbeeb3be83c5f33f0e0f5c4f4a8a9e221 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_kepler, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 15:35:09 compute-0 podman[389469]: 2026-01-20 15:35:09.44398924 +0000 UTC m=+0.101694712 container start 459557ba172833734325ee06a2d9bc3bbeeb3be83c5f33f0e0f5c4f4a8a9e221 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_kepler, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 15:35:09 compute-0 podman[389469]: 2026-01-20 15:35:09.447098115 +0000 UTC m=+0.104803587 container attach 459557ba172833734325ee06a2d9bc3bbeeb3be83c5f33f0e0f5c4f4a8a9e221 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_kepler, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 15:35:09 compute-0 vibrant_kepler[389486]: 167 167
Jan 20 15:35:09 compute-0 systemd[1]: libpod-459557ba172833734325ee06a2d9bc3bbeeb3be83c5f33f0e0f5c4f4a8a9e221.scope: Deactivated successfully.
Jan 20 15:35:09 compute-0 conmon[389486]: conmon 459557ba172833734325 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-459557ba172833734325ee06a2d9bc3bbeeb3be83c5f33f0e0f5c4f4a8a9e221.scope/container/memory.events
Jan 20 15:35:09 compute-0 podman[389469]: 2026-01-20 15:35:09.449167561 +0000 UTC m=+0.106873033 container died 459557ba172833734325ee06a2d9bc3bbeeb3be83c5f33f0e0f5c4f4a8a9e221 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_kepler, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 20 15:35:09 compute-0 podman[389469]: 2026-01-20 15:35:09.361850052 +0000 UTC m=+0.019555544 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:35:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-eaa1951e4f7aee84b4cbea44cdcf37368f2be797abf598a3abf61a1d28841efd-merged.mount: Deactivated successfully.
Jan 20 15:35:09 compute-0 podman[389469]: 2026-01-20 15:35:09.494050335 +0000 UTC m=+0.151755807 container remove 459557ba172833734325ee06a2d9bc3bbeeb3be83c5f33f0e0f5c4f4a8a9e221 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_kepler, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 20 15:35:09 compute-0 systemd[1]: libpod-conmon-459557ba172833734325ee06a2d9bc3bbeeb3be83c5f33f0e0f5c4f4a8a9e221.scope: Deactivated successfully.
Jan 20 15:35:09 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:35:09 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:35:09 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:35:09.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:35:09 compute-0 podman[389509]: 2026-01-20 15:35:09.644897837 +0000 UTC m=+0.040613048 container create b513d5ffe529e4d56886305e7313ad62d03d008be852b599f25843c93e5cc859 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_wing, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 20 15:35:09 compute-0 systemd[1]: Started libpod-conmon-b513d5ffe529e4d56886305e7313ad62d03d008be852b599f25843c93e5cc859.scope.
Jan 20 15:35:09 compute-0 sudo[389521]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:35:09 compute-0 sudo[389521]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:35:09 compute-0 sudo[389521]: pam_unix(sudo:session): session closed for user root
Jan 20 15:35:09 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:35:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/399c1991cf7908044e9464eead492ef4bef1d4140f6a79b4bb06247a219fa50f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 15:35:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/399c1991cf7908044e9464eead492ef4bef1d4140f6a79b4bb06247a219fa50f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 15:35:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/399c1991cf7908044e9464eead492ef4bef1d4140f6a79b4bb06247a219fa50f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 15:35:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/399c1991cf7908044e9464eead492ef4bef1d4140f6a79b4bb06247a219fa50f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 15:35:09 compute-0 podman[389509]: 2026-01-20 15:35:09.628230343 +0000 UTC m=+0.023945574 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:35:09 compute-0 podman[389509]: 2026-01-20 15:35:09.729066381 +0000 UTC m=+0.124781612 container init b513d5ffe529e4d56886305e7313ad62d03d008be852b599f25843c93e5cc859 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_wing, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:35:09 compute-0 podman[389509]: 2026-01-20 15:35:09.741093399 +0000 UTC m=+0.136808610 container start b513d5ffe529e4d56886305e7313ad62d03d008be852b599f25843c93e5cc859 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_wing, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 15:35:09 compute-0 podman[389509]: 2026-01-20 15:35:09.745042847 +0000 UTC m=+0.140758078 container attach b513d5ffe529e4d56886305e7313ad62d03d008be852b599f25843c93e5cc859 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_wing, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef)
Jan 20 15:35:09 compute-0 sudo[389553]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:35:09 compute-0 sudo[389553]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:35:09 compute-0 sudo[389553]: pam_unix(sudo:session): session closed for user root
Jan 20 15:35:10 compute-0 ceph-mon[74360]: pgmap v3389: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 0 B/s wr, 0 op/s
Jan 20 15:35:10 compute-0 sharp_wing[389549]: {
Jan 20 15:35:10 compute-0 sharp_wing[389549]:     "afc3740a-ccee-46f8-b343-22648cc89376": {
Jan 20 15:35:10 compute-0 sharp_wing[389549]:         "ceph_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 15:35:10 compute-0 sharp_wing[389549]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 20 15:35:10 compute-0 sharp_wing[389549]:         "osd_id": 0,
Jan 20 15:35:10 compute-0 sharp_wing[389549]:         "osd_uuid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 15:35:10 compute-0 sharp_wing[389549]:         "type": "bluestore"
Jan 20 15:35:10 compute-0 sharp_wing[389549]:     }
Jan 20 15:35:10 compute-0 sharp_wing[389549]: }
Jan 20 15:35:10 compute-0 systemd[1]: libpod-b513d5ffe529e4d56886305e7313ad62d03d008be852b599f25843c93e5cc859.scope: Deactivated successfully.
Jan 20 15:35:10 compute-0 podman[389509]: 2026-01-20 15:35:10.639862998 +0000 UTC m=+1.035578269 container died b513d5ffe529e4d56886305e7313ad62d03d008be852b599f25843c93e5cc859 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_wing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default)
Jan 20 15:35:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-399c1991cf7908044e9464eead492ef4bef1d4140f6a79b4bb06247a219fa50f-merged.mount: Deactivated successfully.
Jan 20 15:35:10 compute-0 podman[389509]: 2026-01-20 15:35:10.695523456 +0000 UTC m=+1.091238667 container remove b513d5ffe529e4d56886305e7313ad62d03d008be852b599f25843c93e5cc859 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_wing, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 20 15:35:10 compute-0 systemd[1]: libpod-conmon-b513d5ffe529e4d56886305e7313ad62d03d008be852b599f25843c93e5cc859.scope: Deactivated successfully.
Jan 20 15:35:10 compute-0 sudo[389401]: pam_unix(sudo:session): session closed for user root
Jan 20 15:35:10 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 15:35:10 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:35:10 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 15:35:10 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:35:10 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 9ffa8eb6-a11a-4bed-980b-bd8714f5adb6 does not exist
Jan 20 15:35:10 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 3fbfd020-d1fe-47c9-bfcf-97354c4f4051 does not exist
Jan 20 15:35:10 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 0df848f4-8c32-48c7-a81d-7ae9226b6390 does not exist
Jan 20 15:35:10 compute-0 sudo[389611]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:35:10 compute-0 sudo[389611]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:35:10 compute-0 sudo[389611]: pam_unix(sudo:session): session closed for user root
Jan 20 15:35:10 compute-0 sudo[389636]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 15:35:10 compute-0 sudo[389636]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:35:10 compute-0 sudo[389636]: pam_unix(sudo:session): session closed for user root
Jan 20 15:35:10 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:35:10 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:35:10 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:35:10.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:35:11 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3390: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:35:11 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:35:11 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:35:11 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:35:11.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:35:11 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:35:11 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:35:11 compute-0 ceph-mon[74360]: pgmap v3390: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:35:11 compute-0 nova_compute[250018]: 2026-01-20 15:35:11.979 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:35:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 15:35:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:35:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 20 15:35:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:35:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 20 15:35:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:35:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021614147124511445 of space, bias 1.0, pg target 0.6484244137353433 quantized to 32 (current 32)
Jan 20 15:35:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:35:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 20 15:35:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:35:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 20 15:35:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:35:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 20 15:35:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:35:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 15:35:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:35:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 20 15:35:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:35:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 20 15:35:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:35:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 15:35:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:35:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 20 15:35:12 compute-0 nova_compute[250018]: 2026-01-20 15:35:12.655 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:35:12 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:35:12 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:35:12 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:35:12.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:35:13 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3391: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:35:13 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:35:13 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:35:13 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:35:13 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:35:13.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:35:14 compute-0 nova_compute[250018]: 2026-01-20 15:35:14.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:35:14 compute-0 ceph-mon[74360]: pgmap v3391: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:35:14 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/3805897643' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:35:14 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/3805897643' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:35:14 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:35:14 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:35:14 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:35:14.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:35:15 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3392: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:35:15 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/858350277' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:35:15 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:35:15 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:35:15 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:35:15.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:35:16 compute-0 ceph-mon[74360]: pgmap v3392: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:35:16 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/1905477531' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:35:16 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:35:16 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:35:16 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:35:16.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:35:17 compute-0 nova_compute[250018]: 2026-01-20 15:35:17.005 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:35:17 compute-0 nova_compute[250018]: 2026-01-20 15:35:17.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:35:17 compute-0 nova_compute[250018]: 2026-01-20 15:35:17.084 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:35:17 compute-0 nova_compute[250018]: 2026-01-20 15:35:17.086 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:35:17 compute-0 nova_compute[250018]: 2026-01-20 15:35:17.087 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:35:17 compute-0 nova_compute[250018]: 2026-01-20 15:35:17.087 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 15:35:17 compute-0 nova_compute[250018]: 2026-01-20 15:35:17.088 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:35:17 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3393: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:35:17 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:35:17 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/71798964' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:35:17 compute-0 nova_compute[250018]: 2026-01-20 15:35:17.564 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:35:17 compute-0 ceph-mon[74360]: pgmap v3393: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:35:17 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/71798964' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:35:17 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:35:17 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:35:17 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:35:17.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:35:17 compute-0 nova_compute[250018]: 2026-01-20 15:35:17.656 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:35:17 compute-0 nova_compute[250018]: 2026-01-20 15:35:17.760 250022 WARNING nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:35:17 compute-0 nova_compute[250018]: 2026-01-20 15:35:17.762 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4133MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 15:35:17 compute-0 nova_compute[250018]: 2026-01-20 15:35:17.763 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:35:17 compute-0 nova_compute[250018]: 2026-01-20 15:35:17.763 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:35:17 compute-0 nova_compute[250018]: 2026-01-20 15:35:17.877 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 15:35:17 compute-0 nova_compute[250018]: 2026-01-20 15:35:17.878 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 15:35:17 compute-0 nova_compute[250018]: 2026-01-20 15:35:17.915 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:35:18 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:35:18 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3346544430' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:35:18 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:35:18 compute-0 nova_compute[250018]: 2026-01-20 15:35:18.366 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:35:18 compute-0 nova_compute[250018]: 2026-01-20 15:35:18.374 250022 DEBUG nova.compute.provider_tree [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:35:18 compute-0 nova_compute[250018]: 2026-01-20 15:35:18.392 250022 DEBUG nova.scheduler.client.report [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:35:18 compute-0 nova_compute[250018]: 2026-01-20 15:35:18.394 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 15:35:18 compute-0 nova_compute[250018]: 2026-01-20 15:35:18.394 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.631s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:35:18 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/3132927951' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:35:18 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3346544430' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:35:18 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:35:18 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:35:18 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:35:18.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:35:19 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3394: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:35:19 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/962082559' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:35:19 compute-0 ceph-mon[74360]: pgmap v3394: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:35:19 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:35:19 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:35:19 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:35:19.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:35:20 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:35:20 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:35:20 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:35:20.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:35:21 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3395: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:35:21 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:35:21 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:35:21 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:35:21.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:35:22 compute-0 nova_compute[250018]: 2026-01-20 15:35:22.036 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:35:22 compute-0 ceph-mon[74360]: pgmap v3395: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:35:22 compute-0 nova_compute[250018]: 2026-01-20 15:35:22.395 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:35:22 compute-0 nova_compute[250018]: 2026-01-20 15:35:22.396 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:35:22 compute-0 nova_compute[250018]: 2026-01-20 15:35:22.396 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 15:35:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:35:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:35:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:35:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:35:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:35:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:35:22 compute-0 nova_compute[250018]: 2026-01-20 15:35:22.658 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:35:22 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:35:22 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:35:22 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:35:22.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:35:23 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3396: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:35:23 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:35:23 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:35:23 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:35:23 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:35:23.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:35:24 compute-0 nova_compute[250018]: 2026-01-20 15:35:24.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:35:24 compute-0 ceph-mon[74360]: pgmap v3396: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:35:24 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:35:24 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:35:24 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:35:24.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:35:25 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3397: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:35:25 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:35:25 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:35:25 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:35:25.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:35:26 compute-0 ceph-mon[74360]: pgmap v3397: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:35:26 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:35:26 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:35:26 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:35:26.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:35:27 compute-0 nova_compute[250018]: 2026-01-20 15:35:27.038 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:35:27 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3398: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:35:27 compute-0 nova_compute[250018]: 2026-01-20 15:35:27.661 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:35:27 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:35:27 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:35:27 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:35:27.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:35:28 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:35:28 compute-0 ceph-mon[74360]: pgmap v3398: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:35:28 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:35:28 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:35:28 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:35:28.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:35:29 compute-0 nova_compute[250018]: 2026-01-20 15:35:29.045 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:35:29 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3399: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:35:29 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:35:29 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:35:29 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:35:29.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:35:29 compute-0 sudo[389714]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:35:29 compute-0 sudo[389714]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:35:29 compute-0 sudo[389714]: pam_unix(sudo:session): session closed for user root
Jan 20 15:35:29 compute-0 sudo[389739]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:35:29 compute-0 sudo[389739]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:35:29 compute-0 sudo[389739]: pam_unix(sudo:session): session closed for user root
Jan 20 15:35:30 compute-0 ceph-mon[74360]: pgmap v3399: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:35:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:35:30.809 160071 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:35:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:35:30.811 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:35:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:35:30.811 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:35:30 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:35:30 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:35:30 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:35:30.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:35:31 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3400: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:35:31 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:35:31 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:35:31 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:35:31.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:35:32 compute-0 nova_compute[250018]: 2026-01-20 15:35:32.039 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:35:32 compute-0 nova_compute[250018]: 2026-01-20 15:35:32.044 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:35:32 compute-0 ceph-mon[74360]: pgmap v3400: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:35:32 compute-0 nova_compute[250018]: 2026-01-20 15:35:32.690 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:35:32 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:35:32 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:35:32 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:35:32.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:35:33 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3401: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:35:33 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:35:33 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:35:33 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:35:33 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:35:33.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:35:33 compute-0 ceph-mon[74360]: pgmap v3401: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:35:34 compute-0 nova_compute[250018]: 2026-01-20 15:35:34.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:35:34 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:35:34 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:35:34 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:35:34.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:35:35 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3402: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:35:35 compute-0 podman[389768]: 2026-01-20 15:35:35.468340131 +0000 UTC m=+0.059653107 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Jan 20 15:35:35 compute-0 podman[389767]: 2026-01-20 15:35:35.497917447 +0000 UTC m=+0.089275305 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3)
Jan 20 15:35:35 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:35:35 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:35:35 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:35:35.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:35:36 compute-0 ceph-mon[74360]: pgmap v3402: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:35:36 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:35:36 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:35:36 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:35:36.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:35:37 compute-0 nova_compute[250018]: 2026-01-20 15:35:37.042 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:35:37 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3403: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:35:37 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:35:37 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:35:37 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:35:37.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:35:37 compute-0 nova_compute[250018]: 2026-01-20 15:35:37.692 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:35:38 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:35:38 compute-0 ceph-mon[74360]: pgmap v3403: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:35:38 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/2682010573' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:35:38 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:35:38 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:35:38 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:35:38.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:35:39 compute-0 nova_compute[250018]: 2026-01-20 15:35:39.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:35:39 compute-0 nova_compute[250018]: 2026-01-20 15:35:39.051 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 15:35:39 compute-0 nova_compute[250018]: 2026-01-20 15:35:39.051 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 15:35:39 compute-0 nova_compute[250018]: 2026-01-20 15:35:39.069 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 20 15:35:39 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3404: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:35:39 compute-0 sshd-session[389811]: Invalid user ubuntu from 134.122.57.138 port 56380
Jan 20 15:35:39 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:35:39 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:35:39 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:35:39.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:35:39 compute-0 sshd-session[389811]: Connection closed by invalid user ubuntu 134.122.57.138 port 56380 [preauth]
Jan 20 15:35:40 compute-0 ceph-mon[74360]: pgmap v3404: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:35:40 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:35:40 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:35:40 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:35:40.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:35:41 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3405: 321 pgs: 321 active+clean; 146 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 778 KiB/s wr, 25 op/s
Jan 20 15:35:41 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:35:41 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:35:41 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:35:41.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:35:41 compute-0 ceph-mon[74360]: pgmap v3405: 321 pgs: 321 active+clean; 146 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 778 KiB/s wr, 25 op/s
Jan 20 15:35:42 compute-0 nova_compute[250018]: 2026-01-20 15:35:42.044 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:35:42 compute-0 nova_compute[250018]: 2026-01-20 15:35:42.730 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:35:42 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:35:42 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:35:42 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:35:42.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:35:43 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3406: 321 pgs: 321 active+clean; 146 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 778 KiB/s wr, 25 op/s
Jan 20 15:35:43 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:35:43 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:35:43 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:35:43 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:35:43.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:35:44 compute-0 ceph-mon[74360]: pgmap v3406: 321 pgs: 321 active+clean; 146 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 778 KiB/s wr, 25 op/s
Jan 20 15:35:44 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/3378826245' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:35:44 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:35:44 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:35:44 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:35:44.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:35:45 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3407: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:35:45 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/3772005823' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:35:45 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:35:45 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:35:45 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:35:45.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:35:46 compute-0 ceph-mon[74360]: pgmap v3407: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:35:46 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:35:46 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:35:46 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:35:46.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:35:47 compute-0 nova_compute[250018]: 2026-01-20 15:35:47.047 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:35:47 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3408: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:35:47 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:35:47 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:35:47 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:35:47.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:35:47 compute-0 nova_compute[250018]: 2026-01-20 15:35:47.733 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:35:48 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:35:48 compute-0 ceph-mon[74360]: pgmap v3408: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:35:48 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:35:48.747 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=81, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=80) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:35:48 compute-0 nova_compute[250018]: 2026-01-20 15:35:48.747 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:35:48 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:35:48.748 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 15:35:48 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:35:48.749 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=367c1a2c-b16a-4828-ab5a-626bb50023b4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '81'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:35:48 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:35:48 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:35:48 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:35:48.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:35:49 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3409: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 606 KiB/s rd, 1.8 MiB/s wr, 51 op/s
Jan 20 15:35:49 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:35:49 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:35:49 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:35:49.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:35:49 compute-0 sudo[389819]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:35:49 compute-0 sudo[389819]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:35:49 compute-0 sudo[389819]: pam_unix(sudo:session): session closed for user root
Jan 20 15:35:50 compute-0 sudo[389844]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:35:50 compute-0 sudo[389844]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:35:50 compute-0 sudo[389844]: pam_unix(sudo:session): session closed for user root
Jan 20 15:35:50 compute-0 ceph-mon[74360]: pgmap v3409: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 606 KiB/s rd, 1.8 MiB/s wr, 51 op/s
Jan 20 15:35:50 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:35:50 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:35:50 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:35:50.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:35:51 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3410: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 1.8 MiB/s wr, 82 op/s
Jan 20 15:35:51 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:35:51 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:35:51 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:35:51.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:35:52 compute-0 nova_compute[250018]: 2026-01-20 15:35:52.049 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:35:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:35:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:35:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:35:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:35:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:35:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:35:52 compute-0 ceph-mon[74360]: pgmap v3410: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 1.8 MiB/s wr, 82 op/s
Jan 20 15:35:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Optimize plan auto_2026-01-20_15:35:52
Jan 20 15:35:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 15:35:52 compute-0 ceph-mgr[74653]: [balancer INFO root] do_upmap
Jan 20 15:35:52 compute-0 ceph-mgr[74653]: [balancer INFO root] pools ['volumes', 'default.rgw.log', 'cephfs.cephfs.meta', 'default.rgw.meta', 'vms', 'default.rgw.control', '.rgw.root', 'backups', 'cephfs.cephfs.data', '.mgr', 'images']
Jan 20 15:35:52 compute-0 ceph-mgr[74653]: [balancer INFO root] prepared 0/10 changes
Jan 20 15:35:52 compute-0 nova_compute[250018]: 2026-01-20 15:35:52.785 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:35:52 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:35:52 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:35:52 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:35:52.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:35:53 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3411: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 1.0 MiB/s wr, 56 op/s
Jan 20 15:35:53 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:35:53 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:35:53 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:35:53 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:35:53.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:35:54 compute-0 ceph-mon[74360]: pgmap v3411: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 1.0 MiB/s wr, 56 op/s
Jan 20 15:35:54 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:35:54 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:35:54 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:35:54.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:35:55 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3412: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.0 MiB/s wr, 75 op/s
Jan 20 15:35:55 compute-0 ceph-mon[74360]: pgmap v3412: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.0 MiB/s wr, 75 op/s
Jan 20 15:35:55 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:35:55 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:35:55 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:35:55.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:35:56 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:35:56 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:35:56 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:35:56.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:35:57 compute-0 nova_compute[250018]: 2026-01-20 15:35:57.050 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:35:57 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3413: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 20 15:35:57 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:35:57 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:35:57 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:35:57.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:35:57 compute-0 nova_compute[250018]: 2026-01-20 15:35:57.788 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:35:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 15:35:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 15:35:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 15:35:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 15:35:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 15:35:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 15:35:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 15:35:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 15:35:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 15:35:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 15:35:58 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:35:58 compute-0 ceph-mon[74360]: pgmap v3413: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 20 15:35:58 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:35:58 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:35:58 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:35:58.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:35:59 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3414: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 511 B/s wr, 73 op/s
Jan 20 15:35:59 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:35:59 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:35:59 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:35:59.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:36:00 compute-0 ceph-mon[74360]: pgmap v3414: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 511 B/s wr, 73 op/s
Jan 20 15:36:00 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:36:00 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:36:00 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:36:00.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:36:01 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3415: 321 pgs: 321 active+clean; 184 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.4 MiB/s rd, 1.2 MiB/s wr, 74 op/s
Jan 20 15:36:01 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:36:01 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:36:01 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:36:01.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:36:02 compute-0 nova_compute[250018]: 2026-01-20 15:36:02.052 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:36:02 compute-0 ceph-mon[74360]: pgmap v3415: 321 pgs: 321 active+clean; 184 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.4 MiB/s rd, 1.2 MiB/s wr, 74 op/s
Jan 20 15:36:02 compute-0 nova_compute[250018]: 2026-01-20 15:36:02.790 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:36:02 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:36:02 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:36:02 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:36:02.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:36:03 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3416: 321 pgs: 321 active+clean; 184 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 669 KiB/s rd, 1.2 MiB/s wr, 43 op/s
Jan 20 15:36:03 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:36:03 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:36:03 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:36:03 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:36:03.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:36:04 compute-0 ceph-mon[74360]: pgmap v3416: 321 pgs: 321 active+clean; 184 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 669 KiB/s rd, 1.2 MiB/s wr, 43 op/s
Jan 20 15:36:04 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:36:04 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:36:04 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:36:04.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:36:05 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3417: 321 pgs: 321 active+clean; 196 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 903 KiB/s rd, 2.1 MiB/s wr, 77 op/s
Jan 20 15:36:05 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:36:05 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:36:05 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:36:05.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:36:06 compute-0 podman[389879]: 2026-01-20 15:36:06.475533789 +0000 UTC m=+0.055667489 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:36:06 compute-0 ceph-mon[74360]: pgmap v3417: 321 pgs: 321 active+clean; 196 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 903 KiB/s rd, 2.1 MiB/s wr, 77 op/s
Jan 20 15:36:06 compute-0 podman[389878]: 2026-01-20 15:36:06.523558957 +0000 UTC m=+0.102818103 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3)
Jan 20 15:36:06 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:36:06 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:36:06 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:36:06.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:36:07 compute-0 nova_compute[250018]: 2026-01-20 15:36:07.054 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:36:07 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3418: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Jan 20 15:36:07 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:36:07 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:36:07 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:36:07.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:36:07 compute-0 nova_compute[250018]: 2026-01-20 15:36:07.793 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:36:08 compute-0 nova_compute[250018]: 2026-01-20 15:36:08.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:36:08 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:36:08 compute-0 ceph-mon[74360]: pgmap v3418: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Jan 20 15:36:08 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:36:08 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:36:08 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:36:08.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:36:09 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3419: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Jan 20 15:36:09 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:36:09 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:36:09 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:36:09.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:36:10 compute-0 sudo[389925]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:36:10 compute-0 sudo[389925]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:36:10 compute-0 sudo[389925]: pam_unix(sudo:session): session closed for user root
Jan 20 15:36:10 compute-0 sudo[389950]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:36:10 compute-0 sudo[389950]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:36:10 compute-0 sudo[389950]: pam_unix(sudo:session): session closed for user root
Jan 20 15:36:10 compute-0 ceph-mon[74360]: pgmap v3419: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Jan 20 15:36:10 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:36:10 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:36:10 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:36:10.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:36:11 compute-0 sudo[389976]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:36:11 compute-0 sudo[389976]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:36:11 compute-0 sudo[389976]: pam_unix(sudo:session): session closed for user root
Jan 20 15:36:11 compute-0 sudo[390001]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:36:11 compute-0 sudo[390001]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:36:11 compute-0 sudo[390001]: pam_unix(sudo:session): session closed for user root
Jan 20 15:36:11 compute-0 sudo[390026]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:36:11 compute-0 sudo[390026]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:36:11 compute-0 sudo[390026]: pam_unix(sudo:session): session closed for user root
Jan 20 15:36:11 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3420: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 329 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Jan 20 15:36:11 compute-0 sudo[390051]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 15:36:11 compute-0 sudo[390051]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:36:11 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:36:11 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:36:11 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:36:11.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:36:11 compute-0 sudo[390051]: pam_unix(sudo:session): session closed for user root
Jan 20 15:36:11 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 15:36:11 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:36:11 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 20 15:36:11 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 15:36:11 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 20 15:36:11 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:36:11 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 5b41b0b0-8325-4cf3-9cfa-f32c2877463c does not exist
Jan 20 15:36:11 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev d4866b2d-8a73-4460-a8af-2466abe7968f does not exist
Jan 20 15:36:11 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev c8f65b42-a205-4449-a260-33d671cf87a0 does not exist
Jan 20 15:36:11 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 20 15:36:11 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 15:36:11 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 20 15:36:11 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 15:36:11 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 15:36:11 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:36:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 15:36:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:36:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 20 15:36:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:36:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0021692301205099573 of space, bias 1.0, pg target 0.6507690361529872 quantized to 32 (current 32)
Jan 20 15:36:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:36:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021614147124511445 of space, bias 1.0, pg target 0.6484244137353433 quantized to 32 (current 32)
Jan 20 15:36:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:36:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 20 15:36:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:36:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 20 15:36:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:36:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 20 15:36:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:36:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 15:36:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:36:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 20 15:36:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:36:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 20 15:36:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:36:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 15:36:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:36:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 20 15:36:12 compute-0 sudo[390106]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:36:12 compute-0 sudo[390106]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:36:12 compute-0 sudo[390106]: pam_unix(sudo:session): session closed for user root
Jan 20 15:36:12 compute-0 nova_compute[250018]: 2026-01-20 15:36:12.056 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:36:12 compute-0 sudo[390131]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:36:12 compute-0 sudo[390131]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:36:12 compute-0 sudo[390131]: pam_unix(sudo:session): session closed for user root
Jan 20 15:36:12 compute-0 sudo[390156]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:36:12 compute-0 sudo[390156]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:36:12 compute-0 sudo[390156]: pam_unix(sudo:session): session closed for user root
Jan 20 15:36:12 compute-0 sudo[390181]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 15:36:12 compute-0 sudo[390181]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:36:12 compute-0 ceph-mon[74360]: pgmap v3420: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 329 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Jan 20 15:36:12 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:36:12 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 15:36:12 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:36:12 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 15:36:12 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 15:36:12 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:36:12 compute-0 podman[390247]: 2026-01-20 15:36:12.559706926 +0000 UTC m=+0.041084381 container create 27a9ef6f0fba33eebc795dab32e6c8ffd1a8fb34b2a2d957aace3eca295e0d4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_wescoff, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:36:12 compute-0 systemd[1]: Started libpod-conmon-27a9ef6f0fba33eebc795dab32e6c8ffd1a8fb34b2a2d957aace3eca295e0d4a.scope.
Jan 20 15:36:12 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:36:12 compute-0 podman[390247]: 2026-01-20 15:36:12.637879217 +0000 UTC m=+0.119256672 container init 27a9ef6f0fba33eebc795dab32e6c8ffd1a8fb34b2a2d957aace3eca295e0d4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_wescoff, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 15:36:12 compute-0 podman[390247]: 2026-01-20 15:36:12.543164165 +0000 UTC m=+0.024541620 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:36:12 compute-0 podman[390247]: 2026-01-20 15:36:12.646866762 +0000 UTC m=+0.128244237 container start 27a9ef6f0fba33eebc795dab32e6c8ffd1a8fb34b2a2d957aace3eca295e0d4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_wescoff, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 15:36:12 compute-0 podman[390247]: 2026-01-20 15:36:12.650162752 +0000 UTC m=+0.131540227 container attach 27a9ef6f0fba33eebc795dab32e6c8ffd1a8fb34b2a2d957aace3eca295e0d4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_wescoff, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 15:36:12 compute-0 condescending_wescoff[390263]: 167 167
Jan 20 15:36:12 compute-0 systemd[1]: libpod-27a9ef6f0fba33eebc795dab32e6c8ffd1a8fb34b2a2d957aace3eca295e0d4a.scope: Deactivated successfully.
Jan 20 15:36:12 compute-0 conmon[390263]: conmon 27a9ef6f0fba33eebc79 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-27a9ef6f0fba33eebc795dab32e6c8ffd1a8fb34b2a2d957aace3eca295e0d4a.scope/container/memory.events
Jan 20 15:36:12 compute-0 podman[390247]: 2026-01-20 15:36:12.653074451 +0000 UTC m=+0.134451906 container died 27a9ef6f0fba33eebc795dab32e6c8ffd1a8fb34b2a2d957aace3eca295e0d4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_wescoff, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 20 15:36:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-488d4482b8d0fa806cc4fcf53345a629d4f28fbf129f8a4c4a9f54e5098e70e3-merged.mount: Deactivated successfully.
Jan 20 15:36:12 compute-0 podman[390247]: 2026-01-20 15:36:12.685212487 +0000 UTC m=+0.166589942 container remove 27a9ef6f0fba33eebc795dab32e6c8ffd1a8fb34b2a2d957aace3eca295e0d4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_wescoff, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 20 15:36:12 compute-0 systemd[1]: libpod-conmon-27a9ef6f0fba33eebc795dab32e6c8ffd1a8fb34b2a2d957aace3eca295e0d4a.scope: Deactivated successfully.
Jan 20 15:36:12 compute-0 nova_compute[250018]: 2026-01-20 15:36:12.817 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:36:12 compute-0 podman[390287]: 2026-01-20 15:36:12.82762596 +0000 UTC m=+0.035488789 container create 428f04a91a55e4c59ceafb50bebde3b900cb31fd8bbe69bb7f75dbb2d0121f84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_ptolemy, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Jan 20 15:36:12 compute-0 systemd[1]: Started libpod-conmon-428f04a91a55e4c59ceafb50bebde3b900cb31fd8bbe69bb7f75dbb2d0121f84.scope.
Jan 20 15:36:12 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:36:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5dc4f81e95559d12b6e46875dae487f5201dd1a3eba068de1d6df21e913194ac/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 15:36:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5dc4f81e95559d12b6e46875dae487f5201dd1a3eba068de1d6df21e913194ac/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 15:36:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5dc4f81e95559d12b6e46875dae487f5201dd1a3eba068de1d6df21e913194ac/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 15:36:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5dc4f81e95559d12b6e46875dae487f5201dd1a3eba068de1d6df21e913194ac/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 15:36:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5dc4f81e95559d12b6e46875dae487f5201dd1a3eba068de1d6df21e913194ac/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 15:36:12 compute-0 podman[390287]: 2026-01-20 15:36:12.90212533 +0000 UTC m=+0.109988179 container init 428f04a91a55e4c59ceafb50bebde3b900cb31fd8bbe69bb7f75dbb2d0121f84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_ptolemy, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:36:12 compute-0 podman[390287]: 2026-01-20 15:36:12.812893188 +0000 UTC m=+0.020756017 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:36:12 compute-0 podman[390287]: 2026-01-20 15:36:12.91642667 +0000 UTC m=+0.124289509 container start 428f04a91a55e4c59ceafb50bebde3b900cb31fd8bbe69bb7f75dbb2d0121f84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_ptolemy, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 20 15:36:12 compute-0 podman[390287]: 2026-01-20 15:36:12.920118761 +0000 UTC m=+0.127981590 container attach 428f04a91a55e4c59ceafb50bebde3b900cb31fd8bbe69bb7f75dbb2d0121f84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_ptolemy, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 20 15:36:13 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:36:13 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:36:13 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:36:12.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:36:13 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3421: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 247 KiB/s rd, 983 KiB/s wr, 38 op/s
Jan 20 15:36:13 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:36:13 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 20 15:36:13 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/239173477' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:36:13 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 20 15:36:13 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/239173477' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:36:13 compute-0 magical_ptolemy[390303]: --> passed data devices: 0 physical, 1 LVM
Jan 20 15:36:13 compute-0 magical_ptolemy[390303]: --> relative data size: 1.0
Jan 20 15:36:13 compute-0 magical_ptolemy[390303]: --> All data devices are unavailable
Jan 20 15:36:13 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:36:13 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:36:13 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:36:13.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:36:13 compute-0 systemd[1]: libpod-428f04a91a55e4c59ceafb50bebde3b900cb31fd8bbe69bb7f75dbb2d0121f84.scope: Deactivated successfully.
Jan 20 15:36:13 compute-0 podman[390287]: 2026-01-20 15:36:13.750156786 +0000 UTC m=+0.958019615 container died 428f04a91a55e4c59ceafb50bebde3b900cb31fd8bbe69bb7f75dbb2d0121f84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_ptolemy, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 20 15:36:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-5dc4f81e95559d12b6e46875dae487f5201dd1a3eba068de1d6df21e913194ac-merged.mount: Deactivated successfully.
Jan 20 15:36:13 compute-0 podman[390287]: 2026-01-20 15:36:13.800818788 +0000 UTC m=+1.008681617 container remove 428f04a91a55e4c59ceafb50bebde3b900cb31fd8bbe69bb7f75dbb2d0121f84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_ptolemy, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:36:13 compute-0 systemd[1]: libpod-conmon-428f04a91a55e4c59ceafb50bebde3b900cb31fd8bbe69bb7f75dbb2d0121f84.scope: Deactivated successfully.
Jan 20 15:36:13 compute-0 sudo[390181]: pam_unix(sudo:session): session closed for user root
Jan 20 15:36:13 compute-0 sudo[390332]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:36:13 compute-0 sudo[390332]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:36:13 compute-0 sudo[390332]: pam_unix(sudo:session): session closed for user root
Jan 20 15:36:13 compute-0 sudo[390357]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:36:13 compute-0 sudo[390357]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:36:13 compute-0 sudo[390357]: pam_unix(sudo:session): session closed for user root
Jan 20 15:36:14 compute-0 sudo[390382]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:36:14 compute-0 sudo[390382]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:36:14 compute-0 sudo[390382]: pam_unix(sudo:session): session closed for user root
Jan 20 15:36:14 compute-0 nova_compute[250018]: 2026-01-20 15:36:14.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:36:14 compute-0 sudo[390407]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- lvm list --format json
Jan 20 15:36:14 compute-0 sudo[390407]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:36:14 compute-0 podman[390474]: 2026-01-20 15:36:14.422211956 +0000 UTC m=+0.040959377 container create 4a2b9969f5a65cc97a8297b1d76016d5af655a0157f8a4556505a8305117d9a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_johnson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 20 15:36:14 compute-0 systemd[1]: Started libpod-conmon-4a2b9969f5a65cc97a8297b1d76016d5af655a0157f8a4556505a8305117d9a9.scope.
Jan 20 15:36:14 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:36:14 compute-0 podman[390474]: 2026-01-20 15:36:14.40365938 +0000 UTC m=+0.022406851 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:36:14 compute-0 podman[390474]: 2026-01-20 15:36:14.512103837 +0000 UTC m=+0.130851308 container init 4a2b9969f5a65cc97a8297b1d76016d5af655a0157f8a4556505a8305117d9a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_johnson, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 20 15:36:14 compute-0 podman[390474]: 2026-01-20 15:36:14.519205401 +0000 UTC m=+0.137952822 container start 4a2b9969f5a65cc97a8297b1d76016d5af655a0157f8a4556505a8305117d9a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_johnson, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 15:36:14 compute-0 podman[390474]: 2026-01-20 15:36:14.523363914 +0000 UTC m=+0.142111345 container attach 4a2b9969f5a65cc97a8297b1d76016d5af655a0157f8a4556505a8305117d9a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_johnson, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 20 15:36:14 compute-0 boring_johnson[390492]: 167 167
Jan 20 15:36:14 compute-0 systemd[1]: libpod-4a2b9969f5a65cc97a8297b1d76016d5af655a0157f8a4556505a8305117d9a9.scope: Deactivated successfully.
Jan 20 15:36:14 compute-0 podman[390474]: 2026-01-20 15:36:14.525713077 +0000 UTC m=+0.144460538 container died 4a2b9969f5a65cc97a8297b1d76016d5af655a0157f8a4556505a8305117d9a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_johnson, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 15:36:14 compute-0 ceph-mon[74360]: pgmap v3421: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 247 KiB/s rd, 983 KiB/s wr, 38 op/s
Jan 20 15:36:14 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/239173477' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:36:14 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/239173477' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:36:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-5f333d669491ed0934b6672bc594758050948e235db312000422b9f317ce596c-merged.mount: Deactivated successfully.
Jan 20 15:36:14 compute-0 podman[390474]: 2026-01-20 15:36:14.576521103 +0000 UTC m=+0.195268534 container remove 4a2b9969f5a65cc97a8297b1d76016d5af655a0157f8a4556505a8305117d9a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_johnson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 15:36:14 compute-0 systemd[1]: libpod-conmon-4a2b9969f5a65cc97a8297b1d76016d5af655a0157f8a4556505a8305117d9a9.scope: Deactivated successfully.
Jan 20 15:36:14 compute-0 podman[390516]: 2026-01-20 15:36:14.748067679 +0000 UTC m=+0.052792251 container create 4c1342a95f7a6e4bb3a568fbf932c1e15d5e7c8a24dcb37c45d37601b223b0af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_mclaren, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef)
Jan 20 15:36:14 compute-0 systemd[1]: Started libpod-conmon-4c1342a95f7a6e4bb3a568fbf932c1e15d5e7c8a24dcb37c45d37601b223b0af.scope.
Jan 20 15:36:14 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:36:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfa33aa3cc403afdf3085029a515ab68e32f05c03ca7b91bbd7051d5f094e44a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 15:36:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfa33aa3cc403afdf3085029a515ab68e32f05c03ca7b91bbd7051d5f094e44a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 15:36:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfa33aa3cc403afdf3085029a515ab68e32f05c03ca7b91bbd7051d5f094e44a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 15:36:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfa33aa3cc403afdf3085029a515ab68e32f05c03ca7b91bbd7051d5f094e44a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 15:36:14 compute-0 podman[390516]: 2026-01-20 15:36:14.727989642 +0000 UTC m=+0.032714244 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:36:14 compute-0 podman[390516]: 2026-01-20 15:36:14.83910034 +0000 UTC m=+0.143824932 container init 4c1342a95f7a6e4bb3a568fbf932c1e15d5e7c8a24dcb37c45d37601b223b0af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_mclaren, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 15:36:14 compute-0 podman[390516]: 2026-01-20 15:36:14.85304395 +0000 UTC m=+0.157768522 container start 4c1342a95f7a6e4bb3a568fbf932c1e15d5e7c8a24dcb37c45d37601b223b0af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_mclaren, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 15:36:14 compute-0 podman[390516]: 2026-01-20 15:36:14.856171555 +0000 UTC m=+0.160896127 container attach 4c1342a95f7a6e4bb3a568fbf932c1e15d5e7c8a24dcb37c45d37601b223b0af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_mclaren, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 20 15:36:15 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:36:15 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:36:15 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:36:15.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:36:15 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3422: 321 pgs: 321 active+clean; 177 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 252 KiB/s rd, 989 KiB/s wr, 46 op/s
Jan 20 15:36:15 compute-0 quizzical_mclaren[390532]: {
Jan 20 15:36:15 compute-0 quizzical_mclaren[390532]:     "0": [
Jan 20 15:36:15 compute-0 quizzical_mclaren[390532]:         {
Jan 20 15:36:15 compute-0 quizzical_mclaren[390532]:             "devices": [
Jan 20 15:36:15 compute-0 quizzical_mclaren[390532]:                 "/dev/loop3"
Jan 20 15:36:15 compute-0 quizzical_mclaren[390532]:             ],
Jan 20 15:36:15 compute-0 quizzical_mclaren[390532]:             "lv_name": "ceph_lv0",
Jan 20 15:36:15 compute-0 quizzical_mclaren[390532]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 15:36:15 compute-0 quizzical_mclaren[390532]:             "lv_size": "7511998464",
Jan 20 15:36:15 compute-0 quizzical_mclaren[390532]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e399cf45-e6b6-5393-99f1-75c601d3f188,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afc3740a-ccee-46f8-b343-22648cc89376,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 20 15:36:15 compute-0 quizzical_mclaren[390532]:             "lv_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 15:36:15 compute-0 quizzical_mclaren[390532]:             "name": "ceph_lv0",
Jan 20 15:36:15 compute-0 quizzical_mclaren[390532]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 15:36:15 compute-0 quizzical_mclaren[390532]:             "tags": {
Jan 20 15:36:15 compute-0 quizzical_mclaren[390532]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 15:36:15 compute-0 quizzical_mclaren[390532]:                 "ceph.block_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 15:36:15 compute-0 quizzical_mclaren[390532]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 15:36:15 compute-0 quizzical_mclaren[390532]:                 "ceph.cluster_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 15:36:15 compute-0 quizzical_mclaren[390532]:                 "ceph.cluster_name": "ceph",
Jan 20 15:36:15 compute-0 quizzical_mclaren[390532]:                 "ceph.crush_device_class": "",
Jan 20 15:36:15 compute-0 quizzical_mclaren[390532]:                 "ceph.encrypted": "0",
Jan 20 15:36:15 compute-0 quizzical_mclaren[390532]:                 "ceph.osd_fsid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 15:36:15 compute-0 quizzical_mclaren[390532]:                 "ceph.osd_id": "0",
Jan 20 15:36:15 compute-0 quizzical_mclaren[390532]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 15:36:15 compute-0 quizzical_mclaren[390532]:                 "ceph.type": "block",
Jan 20 15:36:15 compute-0 quizzical_mclaren[390532]:                 "ceph.vdo": "0"
Jan 20 15:36:15 compute-0 quizzical_mclaren[390532]:             },
Jan 20 15:36:15 compute-0 quizzical_mclaren[390532]:             "type": "block",
Jan 20 15:36:15 compute-0 quizzical_mclaren[390532]:             "vg_name": "ceph_vg0"
Jan 20 15:36:15 compute-0 quizzical_mclaren[390532]:         }
Jan 20 15:36:15 compute-0 quizzical_mclaren[390532]:     ]
Jan 20 15:36:15 compute-0 quizzical_mclaren[390532]: }
Jan 20 15:36:15 compute-0 systemd[1]: libpod-4c1342a95f7a6e4bb3a568fbf932c1e15d5e7c8a24dcb37c45d37601b223b0af.scope: Deactivated successfully.
Jan 20 15:36:15 compute-0 podman[390516]: 2026-01-20 15:36:15.583930213 +0000 UTC m=+0.888654825 container died 4c1342a95f7a6e4bb3a568fbf932c1e15d5e7c8a24dcb37c45d37601b223b0af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_mclaren, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 15:36:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-dfa33aa3cc403afdf3085029a515ab68e32f05c03ca7b91bbd7051d5f094e44a-merged.mount: Deactivated successfully.
Jan 20 15:36:15 compute-0 podman[390516]: 2026-01-20 15:36:15.633510205 +0000 UTC m=+0.938234767 container remove 4c1342a95f7a6e4bb3a568fbf932c1e15d5e7c8a24dcb37c45d37601b223b0af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_mclaren, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507)
Jan 20 15:36:15 compute-0 systemd[1]: libpod-conmon-4c1342a95f7a6e4bb3a568fbf932c1e15d5e7c8a24dcb37c45d37601b223b0af.scope: Deactivated successfully.
Jan 20 15:36:15 compute-0 sudo[390407]: pam_unix(sudo:session): session closed for user root
Jan 20 15:36:15 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:36:15 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:36:15 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:36:15.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:36:15 compute-0 sudo[390555]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:36:15 compute-0 sudo[390555]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:36:15 compute-0 sudo[390555]: pam_unix(sudo:session): session closed for user root
Jan 20 15:36:15 compute-0 sudo[390580]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:36:15 compute-0 sudo[390580]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:36:15 compute-0 sudo[390580]: pam_unix(sudo:session): session closed for user root
Jan 20 15:36:15 compute-0 sudo[390605]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:36:15 compute-0 sudo[390605]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:36:15 compute-0 sudo[390605]: pam_unix(sudo:session): session closed for user root
Jan 20 15:36:15 compute-0 sudo[390630]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- raw list --format json
Jan 20 15:36:15 compute-0 sudo[390630]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:36:16 compute-0 podman[390695]: 2026-01-20 15:36:16.21992976 +0000 UTC m=+0.039056836 container create 1c3554ffef8ae25aa161ecdd47718da6fe94aaaa99a85f8040aedda6d69f1b51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_visvesvaraya, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:36:16 compute-0 systemd[1]: Started libpod-conmon-1c3554ffef8ae25aa161ecdd47718da6fe94aaaa99a85f8040aedda6d69f1b51.scope.
Jan 20 15:36:16 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:36:16 compute-0 podman[390695]: 2026-01-20 15:36:16.294210025 +0000 UTC m=+0.113337111 container init 1c3554ffef8ae25aa161ecdd47718da6fe94aaaa99a85f8040aedda6d69f1b51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_visvesvaraya, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 20 15:36:16 compute-0 podman[390695]: 2026-01-20 15:36:16.20562217 +0000 UTC m=+0.024749256 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:36:16 compute-0 podman[390695]: 2026-01-20 15:36:16.301505194 +0000 UTC m=+0.120632260 container start 1c3554ffef8ae25aa161ecdd47718da6fe94aaaa99a85f8040aedda6d69f1b51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_visvesvaraya, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:36:16 compute-0 podman[390695]: 2026-01-20 15:36:16.304814434 +0000 UTC m=+0.123941500 container attach 1c3554ffef8ae25aa161ecdd47718da6fe94aaaa99a85f8040aedda6d69f1b51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_visvesvaraya, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 15:36:16 compute-0 xenodochial_visvesvaraya[390711]: 167 167
Jan 20 15:36:16 compute-0 systemd[1]: libpod-1c3554ffef8ae25aa161ecdd47718da6fe94aaaa99a85f8040aedda6d69f1b51.scope: Deactivated successfully.
Jan 20 15:36:16 compute-0 podman[390695]: 2026-01-20 15:36:16.311318402 +0000 UTC m=+0.130445478 container died 1c3554ffef8ae25aa161ecdd47718da6fe94aaaa99a85f8040aedda6d69f1b51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_visvesvaraya, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 20 15:36:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-593a6d88d9339683b178a6cbc4967005646454b7a577465e183ca97f17dfc512-merged.mount: Deactivated successfully.
Jan 20 15:36:16 compute-0 podman[390695]: 2026-01-20 15:36:16.350318925 +0000 UTC m=+0.169445991 container remove 1c3554ffef8ae25aa161ecdd47718da6fe94aaaa99a85f8040aedda6d69f1b51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_visvesvaraya, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 15:36:16 compute-0 systemd[1]: libpod-conmon-1c3554ffef8ae25aa161ecdd47718da6fe94aaaa99a85f8040aedda6d69f1b51.scope: Deactivated successfully.
Jan 20 15:36:16 compute-0 podman[390737]: 2026-01-20 15:36:16.53099883 +0000 UTC m=+0.040956358 container create d5b59395a0056b87b9e9f16172ca27d72ccaf220a3f02d5caedb6ea669b3abd1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_sammet, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 15:36:16 compute-0 ceph-mon[74360]: pgmap v3422: 321 pgs: 321 active+clean; 177 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 252 KiB/s rd, 989 KiB/s wr, 46 op/s
Jan 20 15:36:16 compute-0 systemd[1]: Started libpod-conmon-d5b59395a0056b87b9e9f16172ca27d72ccaf220a3f02d5caedb6ea669b3abd1.scope.
Jan 20 15:36:16 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:36:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0df97ce4a91fe51bed6b191ce3170e0b08f52b7f31654b283be5643b1d34428/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 15:36:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0df97ce4a91fe51bed6b191ce3170e0b08f52b7f31654b283be5643b1d34428/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 15:36:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0df97ce4a91fe51bed6b191ce3170e0b08f52b7f31654b283be5643b1d34428/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 15:36:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0df97ce4a91fe51bed6b191ce3170e0b08f52b7f31654b283be5643b1d34428/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 15:36:16 compute-0 podman[390737]: 2026-01-20 15:36:16.515341023 +0000 UTC m=+0.025298581 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:36:16 compute-0 podman[390737]: 2026-01-20 15:36:16.618619818 +0000 UTC m=+0.128577386 container init d5b59395a0056b87b9e9f16172ca27d72ccaf220a3f02d5caedb6ea669b3abd1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_sammet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 20 15:36:16 compute-0 podman[390737]: 2026-01-20 15:36:16.628456786 +0000 UTC m=+0.138414324 container start d5b59395a0056b87b9e9f16172ca27d72ccaf220a3f02d5caedb6ea669b3abd1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_sammet, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 15:36:16 compute-0 podman[390737]: 2026-01-20 15:36:16.631629182 +0000 UTC m=+0.141586720 container attach d5b59395a0056b87b9e9f16172ca27d72ccaf220a3f02d5caedb6ea669b3abd1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_sammet, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 20 15:36:17 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:36:17 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:36:17 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:36:17.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:36:17 compute-0 nova_compute[250018]: 2026-01-20 15:36:17.059 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:36:17 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3423: 321 pgs: 321 active+clean; 151 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 26 KiB/s rd, 23 KiB/s wr, 23 op/s
Jan 20 15:36:17 compute-0 priceless_sammet[390753]: {
Jan 20 15:36:17 compute-0 priceless_sammet[390753]:     "afc3740a-ccee-46f8-b343-22648cc89376": {
Jan 20 15:36:17 compute-0 priceless_sammet[390753]:         "ceph_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 15:36:17 compute-0 priceless_sammet[390753]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 20 15:36:17 compute-0 priceless_sammet[390753]:         "osd_id": 0,
Jan 20 15:36:17 compute-0 priceless_sammet[390753]:         "osd_uuid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 15:36:17 compute-0 priceless_sammet[390753]:         "type": "bluestore"
Jan 20 15:36:17 compute-0 priceless_sammet[390753]:     }
Jan 20 15:36:17 compute-0 priceless_sammet[390753]: }
Jan 20 15:36:17 compute-0 systemd[1]: libpod-d5b59395a0056b87b9e9f16172ca27d72ccaf220a3f02d5caedb6ea669b3abd1.scope: Deactivated successfully.
Jan 20 15:36:17 compute-0 podman[390774]: 2026-01-20 15:36:17.559007352 +0000 UTC m=+0.021727623 container died d5b59395a0056b87b9e9f16172ca27d72ccaf220a3f02d5caedb6ea669b3abd1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_sammet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:36:17 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/3107279431' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:36:17 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/1875694725' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:36:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-c0df97ce4a91fe51bed6b191ce3170e0b08f52b7f31654b283be5643b1d34428-merged.mount: Deactivated successfully.
Jan 20 15:36:17 compute-0 podman[390774]: 2026-01-20 15:36:17.611667877 +0000 UTC m=+0.074388138 container remove d5b59395a0056b87b9e9f16172ca27d72ccaf220a3f02d5caedb6ea669b3abd1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_sammet, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 15:36:17 compute-0 systemd[1]: libpod-conmon-d5b59395a0056b87b9e9f16172ca27d72ccaf220a3f02d5caedb6ea669b3abd1.scope: Deactivated successfully.
Jan 20 15:36:17 compute-0 sudo[390630]: pam_unix(sudo:session): session closed for user root
Jan 20 15:36:17 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 15:36:17 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:36:17 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 15:36:17 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:36:17 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev ab63b2cf-d0e4-4a7a-b00e-1e93ca5319c5 does not exist
Jan 20 15:36:17 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 4c7400ca-dffd-4fa7-8e3a-49585e3d7e15 does not exist
Jan 20 15:36:17 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev e13f7a0c-a760-491a-8e34-5fdb70e816f5 does not exist
Jan 20 15:36:17 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:36:17 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:36:17 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:36:17.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:36:17 compute-0 sudo[390787]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:36:17 compute-0 sudo[390787]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:36:17 compute-0 sudo[390787]: pam_unix(sudo:session): session closed for user root
Jan 20 15:36:17 compute-0 nova_compute[250018]: 2026-01-20 15:36:17.820 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:36:17 compute-0 sudo[390812]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 15:36:17 compute-0 sudo[390812]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:36:17 compute-0 sudo[390812]: pam_unix(sudo:session): session closed for user root
Jan 20 15:36:18 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:36:18 compute-0 ceph-mon[74360]: pgmap v3423: 321 pgs: 321 active+clean; 151 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 26 KiB/s rd, 23 KiB/s wr, 23 op/s
Jan 20 15:36:18 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:36:18 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:36:18 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/1689596933' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:36:18 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-crash-compute-0[81580]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
Jan 20 15:36:19 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:36:19 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:36:19 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:36:19.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:36:19 compute-0 nova_compute[250018]: 2026-01-20 15:36:19.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:36:19 compute-0 nova_compute[250018]: 2026-01-20 15:36:19.072 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:36:19 compute-0 nova_compute[250018]: 2026-01-20 15:36:19.072 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:36:19 compute-0 nova_compute[250018]: 2026-01-20 15:36:19.072 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:36:19 compute-0 nova_compute[250018]: 2026-01-20 15:36:19.073 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 15:36:19 compute-0 nova_compute[250018]: 2026-01-20 15:36:19.073 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:36:19 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3424: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 22 KiB/s rd, 19 KiB/s wr, 29 op/s
Jan 20 15:36:19 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:36:19 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4142365457' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:36:19 compute-0 nova_compute[250018]: 2026-01-20 15:36:19.505 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:36:19 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/4142365457' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:36:19 compute-0 nova_compute[250018]: 2026-01-20 15:36:19.721 250022 WARNING nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:36:19 compute-0 nova_compute[250018]: 2026-01-20 15:36:19.723 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4149MB free_disk=20.96615219116211GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 15:36:19 compute-0 nova_compute[250018]: 2026-01-20 15:36:19.723 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:36:19 compute-0 nova_compute[250018]: 2026-01-20 15:36:19.723 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:36:19 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:36:19 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:36:19 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:36:19.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:36:19 compute-0 nova_compute[250018]: 2026-01-20 15:36:19.863 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 15:36:19 compute-0 nova_compute[250018]: 2026-01-20 15:36:19.865 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 15:36:19 compute-0 nova_compute[250018]: 2026-01-20 15:36:19.880 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:36:20 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:36:20 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/810772915' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:36:20 compute-0 nova_compute[250018]: 2026-01-20 15:36:20.313 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:36:20 compute-0 nova_compute[250018]: 2026-01-20 15:36:20.320 250022 DEBUG nova.compute.provider_tree [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:36:20 compute-0 nova_compute[250018]: 2026-01-20 15:36:20.338 250022 DEBUG nova.scheduler.client.report [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:36:20 compute-0 nova_compute[250018]: 2026-01-20 15:36:20.339 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 15:36:20 compute-0 nova_compute[250018]: 2026-01-20 15:36:20.339 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.616s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:36:20 compute-0 ceph-mon[74360]: pgmap v3424: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 22 KiB/s rd, 19 KiB/s wr, 29 op/s
Jan 20 15:36:20 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/1836750772' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:36:20 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/810772915' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:36:20 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/2691880414' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:36:20 compute-0 sshd-session[390838]: Invalid user ubuntu from 134.122.57.138 port 41960
Jan 20 15:36:20 compute-0 sshd-session[390838]: Connection closed by invalid user ubuntu 134.122.57.138 port 41960 [preauth]
Jan 20 15:36:21 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:36:21 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:36:21 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:36:21.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:36:21 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3425: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 22 KiB/s rd, 18 KiB/s wr, 28 op/s
Jan 20 15:36:21 compute-0 ceph-mon[74360]: pgmap v3425: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 22 KiB/s rd, 18 KiB/s wr, 28 op/s
Jan 20 15:36:21 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:36:21 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:36:21 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:36:21.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:36:22 compute-0 nova_compute[250018]: 2026-01-20 15:36:22.062 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:36:22 compute-0 nova_compute[250018]: 2026-01-20 15:36:22.339 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:36:22 compute-0 nova_compute[250018]: 2026-01-20 15:36:22.340 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:36:22 compute-0 nova_compute[250018]: 2026-01-20 15:36:22.340 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 15:36:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:36:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:36:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:36:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:36:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:36:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:36:22 compute-0 nova_compute[250018]: 2026-01-20 15:36:22.870 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:36:23 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:36:23 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:36:23 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:36:23.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:36:23 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3426: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 6.5 KiB/s wr, 28 op/s
Jan 20 15:36:23 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:36:23 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:36:23 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:36:23 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:36:23.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:36:24 compute-0 nova_compute[250018]: 2026-01-20 15:36:24.052 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:36:24 compute-0 ceph-mon[74360]: pgmap v3426: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 6.5 KiB/s wr, 28 op/s
Jan 20 15:36:25 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:36:25 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:36:25 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:36:25.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:36:25 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3427: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 6.5 KiB/s wr, 28 op/s
Jan 20 15:36:25 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:36:25 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:36:25 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:36:25.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:36:26 compute-0 ceph-mon[74360]: pgmap v3427: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 6.5 KiB/s wr, 28 op/s
Jan 20 15:36:27 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:36:27 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:36:27 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:36:27.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:36:27 compute-0 nova_compute[250018]: 2026-01-20 15:36:27.064 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:36:27 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3428: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 15 KiB/s rd, 511 B/s wr, 19 op/s
Jan 20 15:36:27 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:36:27 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:36:27 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:36:27.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:36:27 compute-0 nova_compute[250018]: 2026-01-20 15:36:27.871 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:36:28 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:36:28 compute-0 ceph-mon[74360]: pgmap v3428: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 15 KiB/s rd, 511 B/s wr, 19 op/s
Jan 20 15:36:29 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:36:29 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:36:29 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:36:29.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:36:29 compute-0 nova_compute[250018]: 2026-01-20 15:36:29.045 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:36:29 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3429: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 6.4 KiB/s rd, 0 B/s wr, 8 op/s
Jan 20 15:36:29 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:36:29 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:36:29 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:36:29.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:36:30 compute-0 sudo[390889]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:36:30 compute-0 sudo[390889]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:36:30 compute-0 sudo[390889]: pam_unix(sudo:session): session closed for user root
Jan 20 15:36:30 compute-0 sudo[390914]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:36:30 compute-0 sudo[390914]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:36:30 compute-0 sudo[390914]: pam_unix(sudo:session): session closed for user root
Jan 20 15:36:30 compute-0 ceph-mon[74360]: pgmap v3429: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 6.4 KiB/s rd, 0 B/s wr, 8 op/s
Jan 20 15:36:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:36:30.810 160071 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:36:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:36:30.810 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:36:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:36:30.810 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:36:31 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:36:31 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:36:31 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:36:31.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:36:31 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3430: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:36:31 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:36:31 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:36:31 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:36:31.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:36:32 compute-0 nova_compute[250018]: 2026-01-20 15:36:32.067 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:36:32 compute-0 ceph-mon[74360]: pgmap v3430: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:36:32 compute-0 nova_compute[250018]: 2026-01-20 15:36:32.878 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:36:33 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:36:33 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:36:33 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:36:33.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:36:33 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3431: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:36:33 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:36:33 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:36:33 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:36:33 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:36:33.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:36:34 compute-0 ceph-mon[74360]: pgmap v3431: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:36:35 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:36:35 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:36:35 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:36:35.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:36:35 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3432: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:36:35 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:36:35 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:36:35 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:36:35.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:36:36 compute-0 nova_compute[250018]: 2026-01-20 15:36:36.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:36:36 compute-0 ceph-mon[74360]: pgmap v3432: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:36:37 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:36:37 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:36:37 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:36:37.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:36:37 compute-0 nova_compute[250018]: 2026-01-20 15:36:37.069 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:36:37 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3433: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:36:37 compute-0 podman[390944]: 2026-01-20 15:36:37.48439169 +0000 UTC m=+0.067769119 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Jan 20 15:36:37 compute-0 podman[390943]: 2026-01-20 15:36:37.523145026 +0000 UTC m=+0.105667891 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 20 15:36:37 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:36:37 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:36:37 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:36:37.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:36:37 compute-0 nova_compute[250018]: 2026-01-20 15:36:37.880 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:36:38 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:36:38 compute-0 ceph-mon[74360]: pgmap v3433: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:36:39 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:36:39 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:36:39 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:36:39.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:36:39 compute-0 nova_compute[250018]: 2026-01-20 15:36:39.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:36:39 compute-0 nova_compute[250018]: 2026-01-20 15:36:39.052 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 15:36:39 compute-0 nova_compute[250018]: 2026-01-20 15:36:39.052 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 15:36:39 compute-0 nova_compute[250018]: 2026-01-20 15:36:39.079 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 20 15:36:39 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3434: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:36:39 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:36:39 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:36:39 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:36:39.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:36:40 compute-0 ceph-mon[74360]: pgmap v3434: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:36:41 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:36:41 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:36:41 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:36:41.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:36:41 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3435: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:36:41 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:36:41 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:36:41 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:36:41.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:36:42 compute-0 nova_compute[250018]: 2026-01-20 15:36:42.071 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:36:42 compute-0 ceph-mon[74360]: pgmap v3435: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:36:42 compute-0 nova_compute[250018]: 2026-01-20 15:36:42.892 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:36:43 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:36:43 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:36:43 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:36:43.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:36:43 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3436: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:36:43 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:36:43 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:36:43 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:36:43 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:36:43.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:36:44 compute-0 ceph-mon[74360]: pgmap v3436: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 20 15:36:44 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/2033270823' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:36:45 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:36:45 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:36:45 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:36:45.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:36:45 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3437: 321 pgs: 321 active+clean; 145 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 255 B/s rd, 1.1 MiB/s wr, 2 op/s
Jan 20 15:36:45 compute-0 ceph-mon[74360]: pgmap v3437: 321 pgs: 321 active+clean; 145 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 255 B/s rd, 1.1 MiB/s wr, 2 op/s
Jan 20 15:36:45 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:36:45 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:36:45 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:36:45.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:36:47 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:36:47 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:36:47 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:36:47.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:36:47 compute-0 nova_compute[250018]: 2026-01-20 15:36:47.074 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:36:47 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3438: 321 pgs: 321 active+clean; 155 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 852 B/s rd, 1.4 MiB/s wr, 3 op/s
Jan 20 15:36:47 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:36:47 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:36:47 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:36:47.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:36:47 compute-0 nova_compute[250018]: 2026-01-20 15:36:47.895 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:36:48 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:36:48 compute-0 ceph-mon[74360]: pgmap v3438: 321 pgs: 321 active+clean; 155 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 852 B/s rd, 1.4 MiB/s wr, 3 op/s
Jan 20 15:36:49 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:36:49 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:36:49 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:36:49.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:36:49 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3439: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 26 op/s
Jan 20 15:36:49 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:36:49 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:36:49 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:36:49.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:36:50 compute-0 sudo[390995]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:36:50 compute-0 sudo[390995]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:36:50 compute-0 sudo[390995]: pam_unix(sudo:session): session closed for user root
Jan 20 15:36:50 compute-0 ceph-mon[74360]: pgmap v3439: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 26 op/s
Jan 20 15:36:50 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/521345695' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:36:50 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/940201291' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:36:50 compute-0 sudo[391020]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:36:50 compute-0 sudo[391020]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:36:50 compute-0 sudo[391020]: pam_unix(sudo:session): session closed for user root
Jan 20 15:36:51 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:36:51 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:36:51 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:36:51.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:36:51 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3440: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:36:51 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:36:51 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:36:51 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:36:51.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:36:52 compute-0 nova_compute[250018]: 2026-01-20 15:36:52.076 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:36:52 compute-0 ceph-mon[74360]: pgmap v3440: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:36:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:36:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:36:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:36:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:36:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:36:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:36:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Optimize plan auto_2026-01-20_15:36:52
Jan 20 15:36:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 15:36:52 compute-0 ceph-mgr[74653]: [balancer INFO root] do_upmap
Jan 20 15:36:52 compute-0 ceph-mgr[74653]: [balancer INFO root] pools ['images', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.meta', '.mgr', 'cephfs.cephfs.meta', 'vms', 'backups', 'default.rgw.control', 'volumes', 'default.rgw.log']
Jan 20 15:36:52 compute-0 ceph-mgr[74653]: [balancer INFO root] prepared 0/10 changes
Jan 20 15:36:52 compute-0 nova_compute[250018]: 2026-01-20 15:36:52.897 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:36:53 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:36:53 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:36:53 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:36:53.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:36:53 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3441: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:36:53 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:36:53 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:36:53 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:36:53 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:36:53.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:36:54 compute-0 ceph-mon[74360]: pgmap v3441: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:36:55 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:36:55 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:36:55 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:36:55.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:36:55 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3442: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 1.8 MiB/s wr, 89 op/s
Jan 20 15:36:55 compute-0 ceph-mon[74360]: pgmap v3442: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 1.8 MiB/s wr, 89 op/s
Jan 20 15:36:55 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:36:55 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:36:55 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:36:55.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:36:57 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:36:57 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:36:57 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:36:57.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:36:57 compute-0 nova_compute[250018]: 2026-01-20 15:36:57.077 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:36:57 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3443: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 748 KiB/s wr, 98 op/s
Jan 20 15:36:57 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:36:57 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:36:57 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:36:57.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:36:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 15:36:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 15:36:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 15:36:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 15:36:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 15:36:57 compute-0 nova_compute[250018]: 2026-01-20 15:36:57.914 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:36:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 15:36:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 15:36:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 15:36:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 15:36:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 15:36:58 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:36:58 compute-0 ceph-mon[74360]: pgmap v3443: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 748 KiB/s wr, 98 op/s
Jan 20 15:36:59 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:36:59 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:36:59 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:36:59.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:36:59 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3444: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 358 KiB/s wr, 97 op/s
Jan 20 15:36:59 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:36:59 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:36:59 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:36:59.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:37:00 compute-0 ceph-mon[74360]: pgmap v3444: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 358 KiB/s wr, 97 op/s
Jan 20 15:37:01 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:37:01 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:37:01 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:37:01.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:37:01 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3445: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Jan 20 15:37:01 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:37:01 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:37:01 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:37:01.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:37:01 compute-0 nova_compute[250018]: 2026-01-20 15:37:01.849 250022 DEBUG oslo_concurrency.lockutils [None req-4d2d3441-e1fa-47ee-b4b8-190a8a89f03e 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Acquiring lock "40704e43-8d1e-4a6f-addb-4e8524e3534d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:37:01 compute-0 nova_compute[250018]: 2026-01-20 15:37:01.850 250022 DEBUG oslo_concurrency.lockutils [None req-4d2d3441-e1fa-47ee-b4b8-190a8a89f03e 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "40704e43-8d1e-4a6f-addb-4e8524e3534d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:37:01 compute-0 nova_compute[250018]: 2026-01-20 15:37:01.864 250022 DEBUG nova.compute.manager [None req-4d2d3441-e1fa-47ee-b4b8-190a8a89f03e 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 40704e43-8d1e-4a6f-addb-4e8524e3534d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 15:37:01 compute-0 nova_compute[250018]: 2026-01-20 15:37:01.944 250022 DEBUG oslo_concurrency.lockutils [None req-4d2d3441-e1fa-47ee-b4b8-190a8a89f03e 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:37:01 compute-0 nova_compute[250018]: 2026-01-20 15:37:01.944 250022 DEBUG oslo_concurrency.lockutils [None req-4d2d3441-e1fa-47ee-b4b8-190a8a89f03e 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:37:01 compute-0 nova_compute[250018]: 2026-01-20 15:37:01.952 250022 DEBUG nova.virt.hardware [None req-4d2d3441-e1fa-47ee-b4b8-190a8a89f03e 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 15:37:01 compute-0 nova_compute[250018]: 2026-01-20 15:37:01.952 250022 INFO nova.compute.claims [None req-4d2d3441-e1fa-47ee-b4b8-190a8a89f03e 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 40704e43-8d1e-4a6f-addb-4e8524e3534d] Claim successful on node compute-0.ctlplane.example.com
Jan 20 15:37:02 compute-0 nova_compute[250018]: 2026-01-20 15:37:02.051 250022 DEBUG oslo_concurrency.processutils [None req-4d2d3441-e1fa-47ee-b4b8-190a8a89f03e 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:37:02 compute-0 nova_compute[250018]: 2026-01-20 15:37:02.100 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:37:02 compute-0 sshd-session[391049]: Invalid user ubuntu from 134.122.57.138 port 40440
Jan 20 15:37:02 compute-0 ceph-mon[74360]: pgmap v3445: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Jan 20 15:37:02 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:37:02 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2682962310' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:37:02 compute-0 nova_compute[250018]: 2026-01-20 15:37:02.535 250022 DEBUG oslo_concurrency.processutils [None req-4d2d3441-e1fa-47ee-b4b8-190a8a89f03e 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:37:02 compute-0 nova_compute[250018]: 2026-01-20 15:37:02.541 250022 DEBUG nova.compute.provider_tree [None req-4d2d3441-e1fa-47ee-b4b8-190a8a89f03e 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:37:02 compute-0 nova_compute[250018]: 2026-01-20 15:37:02.563 250022 DEBUG nova.scheduler.client.report [None req-4d2d3441-e1fa-47ee-b4b8-190a8a89f03e 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:37:02 compute-0 nova_compute[250018]: 2026-01-20 15:37:02.591 250022 DEBUG oslo_concurrency.lockutils [None req-4d2d3441-e1fa-47ee-b4b8-190a8a89f03e 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.647s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:37:02 compute-0 nova_compute[250018]: 2026-01-20 15:37:02.592 250022 DEBUG nova.compute.manager [None req-4d2d3441-e1fa-47ee-b4b8-190a8a89f03e 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 40704e43-8d1e-4a6f-addb-4e8524e3534d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 15:37:02 compute-0 sshd-session[391049]: Connection closed by invalid user ubuntu 134.122.57.138 port 40440 [preauth]
Jan 20 15:37:02 compute-0 nova_compute[250018]: 2026-01-20 15:37:02.661 250022 DEBUG nova.compute.manager [None req-4d2d3441-e1fa-47ee-b4b8-190a8a89f03e 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 40704e43-8d1e-4a6f-addb-4e8524e3534d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 20 15:37:02 compute-0 nova_compute[250018]: 2026-01-20 15:37:02.661 250022 DEBUG nova.network.neutron [None req-4d2d3441-e1fa-47ee-b4b8-190a8a89f03e 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 40704e43-8d1e-4a6f-addb-4e8524e3534d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 15:37:02 compute-0 nova_compute[250018]: 2026-01-20 15:37:02.680 250022 INFO nova.virt.libvirt.driver [None req-4d2d3441-e1fa-47ee-b4b8-190a8a89f03e 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 40704e43-8d1e-4a6f-addb-4e8524e3534d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 15:37:02 compute-0 nova_compute[250018]: 2026-01-20 15:37:02.698 250022 DEBUG nova.compute.manager [None req-4d2d3441-e1fa-47ee-b4b8-190a8a89f03e 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 40704e43-8d1e-4a6f-addb-4e8524e3534d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 15:37:02 compute-0 nova_compute[250018]: 2026-01-20 15:37:02.798 250022 DEBUG nova.compute.manager [None req-4d2d3441-e1fa-47ee-b4b8-190a8a89f03e 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 40704e43-8d1e-4a6f-addb-4e8524e3534d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 15:37:02 compute-0 nova_compute[250018]: 2026-01-20 15:37:02.799 250022 DEBUG nova.virt.libvirt.driver [None req-4d2d3441-e1fa-47ee-b4b8-190a8a89f03e 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 40704e43-8d1e-4a6f-addb-4e8524e3534d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 15:37:02 compute-0 nova_compute[250018]: 2026-01-20 15:37:02.799 250022 INFO nova.virt.libvirt.driver [None req-4d2d3441-e1fa-47ee-b4b8-190a8a89f03e 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 40704e43-8d1e-4a6f-addb-4e8524e3534d] Creating image(s)
Jan 20 15:37:02 compute-0 nova_compute[250018]: 2026-01-20 15:37:02.835 250022 DEBUG nova.storage.rbd_utils [None req-4d2d3441-e1fa-47ee-b4b8-190a8a89f03e 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] rbd image 40704e43-8d1e-4a6f-addb-4e8524e3534d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:37:02 compute-0 nova_compute[250018]: 2026-01-20 15:37:02.868 250022 DEBUG nova.storage.rbd_utils [None req-4d2d3441-e1fa-47ee-b4b8-190a8a89f03e 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] rbd image 40704e43-8d1e-4a6f-addb-4e8524e3534d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:37:02 compute-0 nova_compute[250018]: 2026-01-20 15:37:02.914 250022 DEBUG nova.storage.rbd_utils [None req-4d2d3441-e1fa-47ee-b4b8-190a8a89f03e 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] rbd image 40704e43-8d1e-4a6f-addb-4e8524e3534d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:37:02 compute-0 nova_compute[250018]: 2026-01-20 15:37:02.920 250022 DEBUG oslo_concurrency.processutils [None req-4d2d3441-e1fa-47ee-b4b8-190a8a89f03e 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:37:02 compute-0 nova_compute[250018]: 2026-01-20 15:37:02.951 250022 DEBUG nova.policy [None req-4d2d3441-e1fa-47ee-b4b8-190a8a89f03e 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5338aa65dc0e4326a66ce79053787f14', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3168f57421fb49bfb94b85daedd1fe7d', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 20 15:37:02 compute-0 nova_compute[250018]: 2026-01-20 15:37:02.953 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:37:02 compute-0 nova_compute[250018]: 2026-01-20 15:37:02.988 250022 DEBUG oslo_concurrency.processutils [None req-4d2d3441-e1fa-47ee-b4b8-190a8a89f03e 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:37:02 compute-0 nova_compute[250018]: 2026-01-20 15:37:02.990 250022 DEBUG oslo_concurrency.lockutils [None req-4d2d3441-e1fa-47ee-b4b8-190a8a89f03e 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Acquiring lock "82d5c1918fd7c974214c7a48c1793a7a82560462" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:37:02 compute-0 nova_compute[250018]: 2026-01-20 15:37:02.991 250022 DEBUG oslo_concurrency.lockutils [None req-4d2d3441-e1fa-47ee-b4b8-190a8a89f03e 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:37:02 compute-0 nova_compute[250018]: 2026-01-20 15:37:02.991 250022 DEBUG oslo_concurrency.lockutils [None req-4d2d3441-e1fa-47ee-b4b8-190a8a89f03e 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:37:03 compute-0 nova_compute[250018]: 2026-01-20 15:37:03.021 250022 DEBUG nova.storage.rbd_utils [None req-4d2d3441-e1fa-47ee-b4b8-190a8a89f03e 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] rbd image 40704e43-8d1e-4a6f-addb-4e8524e3534d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:37:03 compute-0 nova_compute[250018]: 2026-01-20 15:37:03.025 250022 DEBUG oslo_concurrency.processutils [None req-4d2d3441-e1fa-47ee-b4b8-190a8a89f03e 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 40704e43-8d1e-4a6f-addb-4e8524e3534d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:37:03 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:37:03 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:37:03 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:37:03.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:37:03 compute-0 ceph-osd[84815]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #53. Immutable memtables: 9.
Jan 20 15:37:03 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3446: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 20 15:37:03 compute-0 nova_compute[250018]: 2026-01-20 15:37:03.376 250022 DEBUG oslo_concurrency.processutils [None req-4d2d3441-e1fa-47ee-b4b8-190a8a89f03e 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 40704e43-8d1e-4a6f-addb-4e8524e3534d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.351s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:37:03 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:37:03 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/2682962310' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:37:03 compute-0 nova_compute[250018]: 2026-01-20 15:37:03.492 250022 DEBUG nova.storage.rbd_utils [None req-4d2d3441-e1fa-47ee-b4b8-190a8a89f03e 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] resizing rbd image 40704e43-8d1e-4a6f-addb-4e8524e3534d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 20 15:37:03 compute-0 nova_compute[250018]: 2026-01-20 15:37:03.624 250022 DEBUG nova.objects.instance [None req-4d2d3441-e1fa-47ee-b4b8-190a8a89f03e 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lazy-loading 'migration_context' on Instance uuid 40704e43-8d1e-4a6f-addb-4e8524e3534d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:37:03 compute-0 nova_compute[250018]: 2026-01-20 15:37:03.651 250022 DEBUG nova.virt.libvirt.driver [None req-4d2d3441-e1fa-47ee-b4b8-190a8a89f03e 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 40704e43-8d1e-4a6f-addb-4e8524e3534d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 15:37:03 compute-0 nova_compute[250018]: 2026-01-20 15:37:03.651 250022 DEBUG nova.virt.libvirt.driver [None req-4d2d3441-e1fa-47ee-b4b8-190a8a89f03e 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 40704e43-8d1e-4a6f-addb-4e8524e3534d] Ensure instance console log exists: /var/lib/nova/instances/40704e43-8d1e-4a6f-addb-4e8524e3534d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 15:37:03 compute-0 nova_compute[250018]: 2026-01-20 15:37:03.652 250022 DEBUG oslo_concurrency.lockutils [None req-4d2d3441-e1fa-47ee-b4b8-190a8a89f03e 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:37:03 compute-0 nova_compute[250018]: 2026-01-20 15:37:03.652 250022 DEBUG oslo_concurrency.lockutils [None req-4d2d3441-e1fa-47ee-b4b8-190a8a89f03e 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:37:03 compute-0 nova_compute[250018]: 2026-01-20 15:37:03.652 250022 DEBUG oslo_concurrency.lockutils [None req-4d2d3441-e1fa-47ee-b4b8-190a8a89f03e 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:37:03 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:37:03 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:37:03 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:37:03.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:37:03 compute-0 nova_compute[250018]: 2026-01-20 15:37:03.833 250022 DEBUG nova.network.neutron [None req-4d2d3441-e1fa-47ee-b4b8-190a8a89f03e 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 40704e43-8d1e-4a6f-addb-4e8524e3534d] Successfully created port: b07ffffb-4654-4fbb-bd0b-860c4d618de1 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 20 15:37:04 compute-0 ceph-mon[74360]: pgmap v3446: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 20 15:37:04 compute-0 nova_compute[250018]: 2026-01-20 15:37:04.518 250022 DEBUG nova.network.neutron [None req-4d2d3441-e1fa-47ee-b4b8-190a8a89f03e 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 40704e43-8d1e-4a6f-addb-4e8524e3534d] Successfully updated port: b07ffffb-4654-4fbb-bd0b-860c4d618de1 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 20 15:37:04 compute-0 nova_compute[250018]: 2026-01-20 15:37:04.542 250022 DEBUG oslo_concurrency.lockutils [None req-4d2d3441-e1fa-47ee-b4b8-190a8a89f03e 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Acquiring lock "refresh_cache-40704e43-8d1e-4a6f-addb-4e8524e3534d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:37:04 compute-0 nova_compute[250018]: 2026-01-20 15:37:04.542 250022 DEBUG oslo_concurrency.lockutils [None req-4d2d3441-e1fa-47ee-b4b8-190a8a89f03e 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Acquired lock "refresh_cache-40704e43-8d1e-4a6f-addb-4e8524e3534d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:37:04 compute-0 nova_compute[250018]: 2026-01-20 15:37:04.542 250022 DEBUG nova.network.neutron [None req-4d2d3441-e1fa-47ee-b4b8-190a8a89f03e 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 40704e43-8d1e-4a6f-addb-4e8524e3534d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 15:37:04 compute-0 nova_compute[250018]: 2026-01-20 15:37:04.599 250022 DEBUG nova.compute.manager [req-c85756a5-42fc-47aa-b699-a8831648f8fd req-f9fa37ad-d515-47af-a2f0-1a1702462f84 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 40704e43-8d1e-4a6f-addb-4e8524e3534d] Received event network-changed-b07ffffb-4654-4fbb-bd0b-860c4d618de1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:37:04 compute-0 nova_compute[250018]: 2026-01-20 15:37:04.600 250022 DEBUG nova.compute.manager [req-c85756a5-42fc-47aa-b699-a8831648f8fd req-f9fa37ad-d515-47af-a2f0-1a1702462f84 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 40704e43-8d1e-4a6f-addb-4e8524e3534d] Refreshing instance network info cache due to event network-changed-b07ffffb-4654-4fbb-bd0b-860c4d618de1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 15:37:04 compute-0 nova_compute[250018]: 2026-01-20 15:37:04.600 250022 DEBUG oslo_concurrency.lockutils [req-c85756a5-42fc-47aa-b699-a8831648f8fd req-f9fa37ad-d515-47af-a2f0-1a1702462f84 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-40704e43-8d1e-4a6f-addb-4e8524e3534d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:37:05 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:37:05 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:37:05 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:37:05.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:37:05 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3447: 321 pgs: 321 active+clean; 226 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.8 MiB/s wr, 129 op/s
Jan 20 15:37:05 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:37:05 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:37:05 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:37:05.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:37:05 compute-0 nova_compute[250018]: 2026-01-20 15:37:05.809 250022 DEBUG nova.network.neutron [None req-4d2d3441-e1fa-47ee-b4b8-190a8a89f03e 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 40704e43-8d1e-4a6f-addb-4e8524e3534d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 15:37:06 compute-0 ceph-mon[74360]: pgmap v3447: 321 pgs: 321 active+clean; 226 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.8 MiB/s wr, 129 op/s
Jan 20 15:37:07 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:37:07 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:37:07 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:37:07.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:37:07 compute-0 nova_compute[250018]: 2026-01-20 15:37:07.081 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:37:07 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3448: 321 pgs: 321 active+clean; 244 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 694 KiB/s rd, 3.9 MiB/s wr, 97 op/s
Jan 20 15:37:07 compute-0 nova_compute[250018]: 2026-01-20 15:37:07.739 250022 DEBUG nova.network.neutron [None req-4d2d3441-e1fa-47ee-b4b8-190a8a89f03e 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 40704e43-8d1e-4a6f-addb-4e8524e3534d] Updating instance_info_cache with network_info: [{"id": "b07ffffb-4654-4fbb-bd0b-860c4d618de1", "address": "fa:16:3e:bf:50:10", "network": {"id": "ce71b376-fc91-4f6b-9838-8ea300ca70de", "bridge": "br-int", "label": "tempest-network-smoke--315983280", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb07ffffb-46", "ovs_interfaceid": "b07ffffb-4654-4fbb-bd0b-860c4d618de1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:37:07 compute-0 nova_compute[250018]: 2026-01-20 15:37:07.762 250022 DEBUG oslo_concurrency.lockutils [None req-4d2d3441-e1fa-47ee-b4b8-190a8a89f03e 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Releasing lock "refresh_cache-40704e43-8d1e-4a6f-addb-4e8524e3534d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:37:07 compute-0 nova_compute[250018]: 2026-01-20 15:37:07.763 250022 DEBUG nova.compute.manager [None req-4d2d3441-e1fa-47ee-b4b8-190a8a89f03e 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 40704e43-8d1e-4a6f-addb-4e8524e3534d] Instance network_info: |[{"id": "b07ffffb-4654-4fbb-bd0b-860c4d618de1", "address": "fa:16:3e:bf:50:10", "network": {"id": "ce71b376-fc91-4f6b-9838-8ea300ca70de", "bridge": "br-int", "label": "tempest-network-smoke--315983280", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb07ffffb-46", "ovs_interfaceid": "b07ffffb-4654-4fbb-bd0b-860c4d618de1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 20 15:37:07 compute-0 nova_compute[250018]: 2026-01-20 15:37:07.763 250022 DEBUG oslo_concurrency.lockutils [req-c85756a5-42fc-47aa-b699-a8831648f8fd req-f9fa37ad-d515-47af-a2f0-1a1702462f84 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-40704e43-8d1e-4a6f-addb-4e8524e3534d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:37:07 compute-0 nova_compute[250018]: 2026-01-20 15:37:07.763 250022 DEBUG nova.network.neutron [req-c85756a5-42fc-47aa-b699-a8831648f8fd req-f9fa37ad-d515-47af-a2f0-1a1702462f84 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 40704e43-8d1e-4a6f-addb-4e8524e3534d] Refreshing network info cache for port b07ffffb-4654-4fbb-bd0b-860c4d618de1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 15:37:07 compute-0 nova_compute[250018]: 2026-01-20 15:37:07.767 250022 DEBUG nova.virt.libvirt.driver [None req-4d2d3441-e1fa-47ee-b4b8-190a8a89f03e 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 40704e43-8d1e-4a6f-addb-4e8524e3534d] Start _get_guest_xml network_info=[{"id": "b07ffffb-4654-4fbb-bd0b-860c4d618de1", "address": "fa:16:3e:bf:50:10", "network": {"id": "ce71b376-fc91-4f6b-9838-8ea300ca70de", "bridge": "br-int", "label": "tempest-network-smoke--315983280", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb07ffffb-46", "ovs_interfaceid": "b07ffffb-4654-4fbb-bd0b-860c4d618de1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encrypted': False, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'guest_format': None, 'size': 0, 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 15:37:07 compute-0 nova_compute[250018]: 2026-01-20 15:37:07.773 250022 WARNING nova.virt.libvirt.driver [None req-4d2d3441-e1fa-47ee-b4b8-190a8a89f03e 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:37:07 compute-0 nova_compute[250018]: 2026-01-20 15:37:07.779 250022 DEBUG nova.virt.libvirt.host [None req-4d2d3441-e1fa-47ee-b4b8-190a8a89f03e 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 15:37:07 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:37:07 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:37:07 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:37:07.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:37:07 compute-0 nova_compute[250018]: 2026-01-20 15:37:07.781 250022 DEBUG nova.virt.libvirt.host [None req-4d2d3441-e1fa-47ee-b4b8-190a8a89f03e 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 15:37:07 compute-0 nova_compute[250018]: 2026-01-20 15:37:07.791 250022 DEBUG nova.virt.libvirt.host [None req-4d2d3441-e1fa-47ee-b4b8-190a8a89f03e 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 15:37:07 compute-0 nova_compute[250018]: 2026-01-20 15:37:07.791 250022 DEBUG nova.virt.libvirt.host [None req-4d2d3441-e1fa-47ee-b4b8-190a8a89f03e 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 15:37:07 compute-0 nova_compute[250018]: 2026-01-20 15:37:07.793 250022 DEBUG nova.virt.libvirt.driver [None req-4d2d3441-e1fa-47ee-b4b8-190a8a89f03e 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 15:37:07 compute-0 nova_compute[250018]: 2026-01-20 15:37:07.793 250022 DEBUG nova.virt.hardware [None req-4d2d3441-e1fa-47ee-b4b8-190a8a89f03e 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 15:37:07 compute-0 nova_compute[250018]: 2026-01-20 15:37:07.794 250022 DEBUG nova.virt.hardware [None req-4d2d3441-e1fa-47ee-b4b8-190a8a89f03e 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 15:37:07 compute-0 nova_compute[250018]: 2026-01-20 15:37:07.794 250022 DEBUG nova.virt.hardware [None req-4d2d3441-e1fa-47ee-b4b8-190a8a89f03e 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 15:37:07 compute-0 nova_compute[250018]: 2026-01-20 15:37:07.795 250022 DEBUG nova.virt.hardware [None req-4d2d3441-e1fa-47ee-b4b8-190a8a89f03e 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 15:37:07 compute-0 nova_compute[250018]: 2026-01-20 15:37:07.795 250022 DEBUG nova.virt.hardware [None req-4d2d3441-e1fa-47ee-b4b8-190a8a89f03e 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 15:37:07 compute-0 nova_compute[250018]: 2026-01-20 15:37:07.795 250022 DEBUG nova.virt.hardware [None req-4d2d3441-e1fa-47ee-b4b8-190a8a89f03e 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 15:37:07 compute-0 nova_compute[250018]: 2026-01-20 15:37:07.795 250022 DEBUG nova.virt.hardware [None req-4d2d3441-e1fa-47ee-b4b8-190a8a89f03e 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 15:37:07 compute-0 nova_compute[250018]: 2026-01-20 15:37:07.796 250022 DEBUG nova.virt.hardware [None req-4d2d3441-e1fa-47ee-b4b8-190a8a89f03e 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 15:37:07 compute-0 nova_compute[250018]: 2026-01-20 15:37:07.796 250022 DEBUG nova.virt.hardware [None req-4d2d3441-e1fa-47ee-b4b8-190a8a89f03e 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 15:37:07 compute-0 nova_compute[250018]: 2026-01-20 15:37:07.796 250022 DEBUG nova.virt.hardware [None req-4d2d3441-e1fa-47ee-b4b8-190a8a89f03e 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 15:37:07 compute-0 nova_compute[250018]: 2026-01-20 15:37:07.797 250022 DEBUG nova.virt.hardware [None req-4d2d3441-e1fa-47ee-b4b8-190a8a89f03e 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 15:37:07 compute-0 nova_compute[250018]: 2026-01-20 15:37:07.800 250022 DEBUG oslo_concurrency.processutils [None req-4d2d3441-e1fa-47ee-b4b8-190a8a89f03e 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:37:07 compute-0 nova_compute[250018]: 2026-01-20 15:37:07.955 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:37:08 compute-0 nova_compute[250018]: 2026-01-20 15:37:08.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:37:08 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 15:37:08 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/547492813' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:37:08 compute-0 nova_compute[250018]: 2026-01-20 15:37:08.274 250022 DEBUG oslo_concurrency.processutils [None req-4d2d3441-e1fa-47ee-b4b8-190a8a89f03e 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:37:08 compute-0 nova_compute[250018]: 2026-01-20 15:37:08.319 250022 DEBUG nova.storage.rbd_utils [None req-4d2d3441-e1fa-47ee-b4b8-190a8a89f03e 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] rbd image 40704e43-8d1e-4a6f-addb-4e8524e3534d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:37:08 compute-0 nova_compute[250018]: 2026-01-20 15:37:08.329 250022 DEBUG oslo_concurrency.processutils [None req-4d2d3441-e1fa-47ee-b4b8-190a8a89f03e 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:37:08 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:37:08 compute-0 podman[391286]: 2026-01-20 15:37:08.505971689 +0000 UTC m=+0.083154928 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 20 15:37:08 compute-0 podman[391285]: 2026-01-20 15:37:08.509537866 +0000 UTC m=+0.097963142 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, 
org.label-schema.vendor=CentOS, config_id=ovn_controller)
Jan 20 15:37:08 compute-0 ceph-mon[74360]: pgmap v3448: 321 pgs: 321 active+clean; 244 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 694 KiB/s rd, 3.9 MiB/s wr, 97 op/s
Jan 20 15:37:08 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/547492813' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:37:08 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 15:37:08 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2914974930' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:37:08 compute-0 nova_compute[250018]: 2026-01-20 15:37:08.809 250022 DEBUG oslo_concurrency.processutils [None req-4d2d3441-e1fa-47ee-b4b8-190a8a89f03e 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:37:08 compute-0 nova_compute[250018]: 2026-01-20 15:37:08.811 250022 DEBUG nova.virt.libvirt.vif [None req-4d2d3441-e1fa-47ee-b4b8-190a8a89f03e 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T15:37:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-237209397',display_name='tempest-TestNetworkBasicOps-server-237209397',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-237209397',id=214,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPkP2vzk68ordq1Cd3HJUYxImpUT5hMDsFQU6BGUUao42NjuvP6GQgxpjcgMERTlsoqQOXlucNpOxkob6br28DNQAfRkkWqobfp1HtB+AQiSr4tjOud63vEtKxHG3y82WA==',key_name='tempest-TestNetworkBasicOps-882946882',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3168f57421fb49bfb94b85daedd1fe7d',ramdisk_id='',reservation_id='r-ursy3p3u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-807695970',owner_user_name='tempest-TestNetworkBasicOps-807695970-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T15:37:02Z,user_data=None,user_id='5338aa65dc0e4326a66ce79053787f14',uuid=40704e43-8d1e-4a6f-addb-4e8524e3534d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b07ffffb-4654-4fbb-bd0b-860c4d618de1", "address": "fa:16:3e:bf:50:10", "network": {"id": "ce71b376-fc91-4f6b-9838-8ea300ca70de", "bridge": "br-int", "label": "tempest-network-smoke--315983280", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb07ffffb-46", "ovs_interfaceid": "b07ffffb-4654-4fbb-bd0b-860c4d618de1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 15:37:08 compute-0 nova_compute[250018]: 2026-01-20 15:37:08.811 250022 DEBUG nova.network.os_vif_util [None req-4d2d3441-e1fa-47ee-b4b8-190a8a89f03e 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Converting VIF {"id": "b07ffffb-4654-4fbb-bd0b-860c4d618de1", "address": "fa:16:3e:bf:50:10", "network": {"id": "ce71b376-fc91-4f6b-9838-8ea300ca70de", "bridge": "br-int", "label": "tempest-network-smoke--315983280", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb07ffffb-46", "ovs_interfaceid": "b07ffffb-4654-4fbb-bd0b-860c4d618de1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:37:08 compute-0 nova_compute[250018]: 2026-01-20 15:37:08.812 250022 DEBUG nova.network.os_vif_util [None req-4d2d3441-e1fa-47ee-b4b8-190a8a89f03e 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bf:50:10,bridge_name='br-int',has_traffic_filtering=True,id=b07ffffb-4654-4fbb-bd0b-860c4d618de1,network=Network(ce71b376-fc91-4f6b-9838-8ea300ca70de),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb07ffffb-46') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:37:08 compute-0 nova_compute[250018]: 2026-01-20 15:37:08.813 250022 DEBUG nova.objects.instance [None req-4d2d3441-e1fa-47ee-b4b8-190a8a89f03e 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lazy-loading 'pci_devices' on Instance uuid 40704e43-8d1e-4a6f-addb-4e8524e3534d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:37:08 compute-0 nova_compute[250018]: 2026-01-20 15:37:08.841 250022 DEBUG nova.virt.libvirt.driver [None req-4d2d3441-e1fa-47ee-b4b8-190a8a89f03e 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 40704e43-8d1e-4a6f-addb-4e8524e3534d] End _get_guest_xml xml=<domain type="kvm">
Jan 20 15:37:08 compute-0 nova_compute[250018]:   <uuid>40704e43-8d1e-4a6f-addb-4e8524e3534d</uuid>
Jan 20 15:37:08 compute-0 nova_compute[250018]:   <name>instance-000000d6</name>
Jan 20 15:37:08 compute-0 nova_compute[250018]:   <memory>131072</memory>
Jan 20 15:37:08 compute-0 nova_compute[250018]:   <vcpu>1</vcpu>
Jan 20 15:37:08 compute-0 nova_compute[250018]:   <metadata>
Jan 20 15:37:08 compute-0 nova_compute[250018]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 15:37:08 compute-0 nova_compute[250018]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 15:37:08 compute-0 nova_compute[250018]:       <nova:name>tempest-TestNetworkBasicOps-server-237209397</nova:name>
Jan 20 15:37:08 compute-0 nova_compute[250018]:       <nova:creationTime>2026-01-20 15:37:07</nova:creationTime>
Jan 20 15:37:08 compute-0 nova_compute[250018]:       <nova:flavor name="m1.nano">
Jan 20 15:37:08 compute-0 nova_compute[250018]:         <nova:memory>128</nova:memory>
Jan 20 15:37:08 compute-0 nova_compute[250018]:         <nova:disk>1</nova:disk>
Jan 20 15:37:08 compute-0 nova_compute[250018]:         <nova:swap>0</nova:swap>
Jan 20 15:37:08 compute-0 nova_compute[250018]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 15:37:08 compute-0 nova_compute[250018]:         <nova:vcpus>1</nova:vcpus>
Jan 20 15:37:08 compute-0 nova_compute[250018]:       </nova:flavor>
Jan 20 15:37:08 compute-0 nova_compute[250018]:       <nova:owner>
Jan 20 15:37:08 compute-0 nova_compute[250018]:         <nova:user uuid="5338aa65dc0e4326a66ce79053787f14">tempest-TestNetworkBasicOps-807695970-project-member</nova:user>
Jan 20 15:37:08 compute-0 nova_compute[250018]:         <nova:project uuid="3168f57421fb49bfb94b85daedd1fe7d">tempest-TestNetworkBasicOps-807695970</nova:project>
Jan 20 15:37:08 compute-0 nova_compute[250018]:       </nova:owner>
Jan 20 15:37:08 compute-0 nova_compute[250018]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 15:37:08 compute-0 nova_compute[250018]:       <nova:ports>
Jan 20 15:37:08 compute-0 nova_compute[250018]:         <nova:port uuid="b07ffffb-4654-4fbb-bd0b-860c4d618de1">
Jan 20 15:37:08 compute-0 nova_compute[250018]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Jan 20 15:37:08 compute-0 nova_compute[250018]:         </nova:port>
Jan 20 15:37:08 compute-0 nova_compute[250018]:       </nova:ports>
Jan 20 15:37:08 compute-0 nova_compute[250018]:     </nova:instance>
Jan 20 15:37:08 compute-0 nova_compute[250018]:   </metadata>
Jan 20 15:37:08 compute-0 nova_compute[250018]:   <sysinfo type="smbios">
Jan 20 15:37:08 compute-0 nova_compute[250018]:     <system>
Jan 20 15:37:08 compute-0 nova_compute[250018]:       <entry name="manufacturer">RDO</entry>
Jan 20 15:37:08 compute-0 nova_compute[250018]:       <entry name="product">OpenStack Compute</entry>
Jan 20 15:37:08 compute-0 nova_compute[250018]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 15:37:08 compute-0 nova_compute[250018]:       <entry name="serial">40704e43-8d1e-4a6f-addb-4e8524e3534d</entry>
Jan 20 15:37:08 compute-0 nova_compute[250018]:       <entry name="uuid">40704e43-8d1e-4a6f-addb-4e8524e3534d</entry>
Jan 20 15:37:08 compute-0 nova_compute[250018]:       <entry name="family">Virtual Machine</entry>
Jan 20 15:37:08 compute-0 nova_compute[250018]:     </system>
Jan 20 15:37:08 compute-0 nova_compute[250018]:   </sysinfo>
Jan 20 15:37:08 compute-0 nova_compute[250018]:   <os>
Jan 20 15:37:08 compute-0 nova_compute[250018]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 15:37:08 compute-0 nova_compute[250018]:     <boot dev="hd"/>
Jan 20 15:37:08 compute-0 nova_compute[250018]:     <smbios mode="sysinfo"/>
Jan 20 15:37:08 compute-0 nova_compute[250018]:   </os>
Jan 20 15:37:08 compute-0 nova_compute[250018]:   <features>
Jan 20 15:37:08 compute-0 nova_compute[250018]:     <acpi/>
Jan 20 15:37:08 compute-0 nova_compute[250018]:     <apic/>
Jan 20 15:37:08 compute-0 nova_compute[250018]:     <vmcoreinfo/>
Jan 20 15:37:08 compute-0 nova_compute[250018]:   </features>
Jan 20 15:37:08 compute-0 nova_compute[250018]:   <clock offset="utc">
Jan 20 15:37:08 compute-0 nova_compute[250018]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 15:37:08 compute-0 nova_compute[250018]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 15:37:08 compute-0 nova_compute[250018]:     <timer name="hpet" present="no"/>
Jan 20 15:37:08 compute-0 nova_compute[250018]:   </clock>
Jan 20 15:37:08 compute-0 nova_compute[250018]:   <cpu mode="custom" match="exact">
Jan 20 15:37:08 compute-0 nova_compute[250018]:     <model>Nehalem</model>
Jan 20 15:37:08 compute-0 nova_compute[250018]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 15:37:08 compute-0 nova_compute[250018]:   </cpu>
Jan 20 15:37:08 compute-0 nova_compute[250018]:   <devices>
Jan 20 15:37:08 compute-0 nova_compute[250018]:     <disk type="network" device="disk">
Jan 20 15:37:08 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 15:37:08 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/40704e43-8d1e-4a6f-addb-4e8524e3534d_disk">
Jan 20 15:37:08 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 15:37:08 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 15:37:08 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 15:37:08 compute-0 nova_compute[250018]:       </source>
Jan 20 15:37:08 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 15:37:08 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 15:37:08 compute-0 nova_compute[250018]:       </auth>
Jan 20 15:37:08 compute-0 nova_compute[250018]:       <target dev="vda" bus="virtio"/>
Jan 20 15:37:08 compute-0 nova_compute[250018]:     </disk>
Jan 20 15:37:08 compute-0 nova_compute[250018]:     <disk type="network" device="cdrom">
Jan 20 15:37:08 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 15:37:08 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/40704e43-8d1e-4a6f-addb-4e8524e3534d_disk.config">
Jan 20 15:37:08 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 15:37:08 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 15:37:08 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 15:37:08 compute-0 nova_compute[250018]:       </source>
Jan 20 15:37:08 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 15:37:08 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 15:37:08 compute-0 nova_compute[250018]:       </auth>
Jan 20 15:37:08 compute-0 nova_compute[250018]:       <target dev="sda" bus="sata"/>
Jan 20 15:37:08 compute-0 nova_compute[250018]:     </disk>
Jan 20 15:37:08 compute-0 nova_compute[250018]:     <interface type="ethernet">
Jan 20 15:37:08 compute-0 nova_compute[250018]:       <mac address="fa:16:3e:bf:50:10"/>
Jan 20 15:37:08 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 15:37:08 compute-0 nova_compute[250018]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 15:37:08 compute-0 nova_compute[250018]:       <mtu size="1442"/>
Jan 20 15:37:08 compute-0 nova_compute[250018]:       <target dev="tapb07ffffb-46"/>
Jan 20 15:37:08 compute-0 nova_compute[250018]:     </interface>
Jan 20 15:37:08 compute-0 nova_compute[250018]:     <serial type="pty">
Jan 20 15:37:08 compute-0 nova_compute[250018]:       <log file="/var/lib/nova/instances/40704e43-8d1e-4a6f-addb-4e8524e3534d/console.log" append="off"/>
Jan 20 15:37:08 compute-0 nova_compute[250018]:     </serial>
Jan 20 15:37:08 compute-0 nova_compute[250018]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 15:37:08 compute-0 nova_compute[250018]:     <video>
Jan 20 15:37:08 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 15:37:08 compute-0 nova_compute[250018]:     </video>
Jan 20 15:37:08 compute-0 nova_compute[250018]:     <input type="tablet" bus="usb"/>
Jan 20 15:37:08 compute-0 nova_compute[250018]:     <rng model="virtio">
Jan 20 15:37:08 compute-0 nova_compute[250018]:       <backend model="random">/dev/urandom</backend>
Jan 20 15:37:08 compute-0 nova_compute[250018]:     </rng>
Jan 20 15:37:08 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root"/>
Jan 20 15:37:08 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:37:08 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:37:08 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:37:08 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:37:08 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:37:08 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:37:08 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:37:08 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:37:08 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:37:08 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:37:08 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:37:08 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:37:08 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:37:08 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:37:08 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:37:08 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:37:08 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:37:08 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:37:08 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:37:08 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:37:08 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:37:08 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:37:08 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:37:08 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:37:08 compute-0 nova_compute[250018]:     <controller type="usb" index="0"/>
Jan 20 15:37:08 compute-0 nova_compute[250018]:     <memballoon model="virtio">
Jan 20 15:37:08 compute-0 nova_compute[250018]:       <stats period="10"/>
Jan 20 15:37:08 compute-0 nova_compute[250018]:     </memballoon>
Jan 20 15:37:08 compute-0 nova_compute[250018]:   </devices>
Jan 20 15:37:08 compute-0 nova_compute[250018]: </domain>
Jan 20 15:37:08 compute-0 nova_compute[250018]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 15:37:08 compute-0 nova_compute[250018]: 2026-01-20 15:37:08.842 250022 DEBUG nova.compute.manager [None req-4d2d3441-e1fa-47ee-b4b8-190a8a89f03e 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 40704e43-8d1e-4a6f-addb-4e8524e3534d] Preparing to wait for external event network-vif-plugged-b07ffffb-4654-4fbb-bd0b-860c4d618de1 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 20 15:37:08 compute-0 nova_compute[250018]: 2026-01-20 15:37:08.842 250022 DEBUG oslo_concurrency.lockutils [None req-4d2d3441-e1fa-47ee-b4b8-190a8a89f03e 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Acquiring lock "40704e43-8d1e-4a6f-addb-4e8524e3534d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:37:08 compute-0 nova_compute[250018]: 2026-01-20 15:37:08.842 250022 DEBUG oslo_concurrency.lockutils [None req-4d2d3441-e1fa-47ee-b4b8-190a8a89f03e 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "40704e43-8d1e-4a6f-addb-4e8524e3534d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:37:08 compute-0 nova_compute[250018]: 2026-01-20 15:37:08.842 250022 DEBUG oslo_concurrency.lockutils [None req-4d2d3441-e1fa-47ee-b4b8-190a8a89f03e 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "40704e43-8d1e-4a6f-addb-4e8524e3534d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:37:08 compute-0 nova_compute[250018]: 2026-01-20 15:37:08.843 250022 DEBUG nova.virt.libvirt.vif [None req-4d2d3441-e1fa-47ee-b4b8-190a8a89f03e 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T15:37:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-237209397',display_name='tempest-TestNetworkBasicOps-server-237209397',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-237209397',id=214,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPkP2vzk68ordq1Cd3HJUYxImpUT5hMDsFQU6BGUUao42NjuvP6GQgxpjcgMERTlsoqQOXlucNpOxkob6br28DNQAfRkkWqobfp1HtB+AQiSr4tjOud63vEtKxHG3y82WA==',key_name='tempest-TestNetworkBasicOps-882946882',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3168f57421fb49bfb94b85daedd1fe7d',ramdisk_id='',reservation_id='r-ursy3p3u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-807695970',owner_user_name='tempest-TestNetworkBasicOps-807695970-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T15:37:02Z,user_data=None,user_id='5338aa65dc0e4326a66ce79053787f14',uuid=40704e43-8d1e-4a6f-addb-4e8524e3534d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b07ffffb-4654-4fbb-bd0b-860c4d618de1", "address": "fa:16:3e:bf:50:10", "network": {"id": "ce71b376-fc91-4f6b-9838-8ea300ca70de", "bridge": "br-int", "label": "tempest-network-smoke--315983280", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb07ffffb-46", "ovs_interfaceid": "b07ffffb-4654-4fbb-bd0b-860c4d618de1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 15:37:08 compute-0 nova_compute[250018]: 2026-01-20 15:37:08.843 250022 DEBUG nova.network.os_vif_util [None req-4d2d3441-e1fa-47ee-b4b8-190a8a89f03e 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Converting VIF {"id": "b07ffffb-4654-4fbb-bd0b-860c4d618de1", "address": "fa:16:3e:bf:50:10", "network": {"id": "ce71b376-fc91-4f6b-9838-8ea300ca70de", "bridge": "br-int", "label": "tempest-network-smoke--315983280", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb07ffffb-46", "ovs_interfaceid": "b07ffffb-4654-4fbb-bd0b-860c4d618de1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:37:08 compute-0 nova_compute[250018]: 2026-01-20 15:37:08.844 250022 DEBUG nova.network.os_vif_util [None req-4d2d3441-e1fa-47ee-b4b8-190a8a89f03e 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bf:50:10,bridge_name='br-int',has_traffic_filtering=True,id=b07ffffb-4654-4fbb-bd0b-860c4d618de1,network=Network(ce71b376-fc91-4f6b-9838-8ea300ca70de),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb07ffffb-46') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:37:08 compute-0 nova_compute[250018]: 2026-01-20 15:37:08.844 250022 DEBUG os_vif [None req-4d2d3441-e1fa-47ee-b4b8-190a8a89f03e 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bf:50:10,bridge_name='br-int',has_traffic_filtering=True,id=b07ffffb-4654-4fbb-bd0b-860c4d618de1,network=Network(ce71b376-fc91-4f6b-9838-8ea300ca70de),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb07ffffb-46') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 15:37:08 compute-0 nova_compute[250018]: 2026-01-20 15:37:08.845 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:37:08 compute-0 nova_compute[250018]: 2026-01-20 15:37:08.845 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:37:08 compute-0 nova_compute[250018]: 2026-01-20 15:37:08.845 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:37:08 compute-0 nova_compute[250018]: 2026-01-20 15:37:08.849 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:37:08 compute-0 nova_compute[250018]: 2026-01-20 15:37:08.849 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb07ffffb-46, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:37:08 compute-0 nova_compute[250018]: 2026-01-20 15:37:08.849 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb07ffffb-46, col_values=(('external_ids', {'iface-id': 'b07ffffb-4654-4fbb-bd0b-860c4d618de1', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:bf:50:10', 'vm-uuid': '40704e43-8d1e-4a6f-addb-4e8524e3534d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:37:08 compute-0 nova_compute[250018]: 2026-01-20 15:37:08.896 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:37:08 compute-0 NetworkManager[48960]: <info>  [1768923428.8979] manager: (tapb07ffffb-46): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/373)
Jan 20 15:37:08 compute-0 nova_compute[250018]: 2026-01-20 15:37:08.899 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 15:37:08 compute-0 nova_compute[250018]: 2026-01-20 15:37:08.903 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:37:08 compute-0 nova_compute[250018]: 2026-01-20 15:37:08.904 250022 INFO os_vif [None req-4d2d3441-e1fa-47ee-b4b8-190a8a89f03e 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bf:50:10,bridge_name='br-int',has_traffic_filtering=True,id=b07ffffb-4654-4fbb-bd0b-860c4d618de1,network=Network(ce71b376-fc91-4f6b-9838-8ea300ca70de),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb07ffffb-46')
Jan 20 15:37:08 compute-0 nova_compute[250018]: 2026-01-20 15:37:08.951 250022 DEBUG nova.virt.libvirt.driver [None req-4d2d3441-e1fa-47ee-b4b8-190a8a89f03e 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:37:08 compute-0 nova_compute[250018]: 2026-01-20 15:37:08.952 250022 DEBUG nova.virt.libvirt.driver [None req-4d2d3441-e1fa-47ee-b4b8-190a8a89f03e 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:37:08 compute-0 nova_compute[250018]: 2026-01-20 15:37:08.952 250022 DEBUG nova.virt.libvirt.driver [None req-4d2d3441-e1fa-47ee-b4b8-190a8a89f03e 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] No VIF found with MAC fa:16:3e:bf:50:10, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 15:37:08 compute-0 nova_compute[250018]: 2026-01-20 15:37:08.953 250022 INFO nova.virt.libvirt.driver [None req-4d2d3441-e1fa-47ee-b4b8-190a8a89f03e 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 40704e43-8d1e-4a6f-addb-4e8524e3534d] Using config drive
Jan 20 15:37:08 compute-0 nova_compute[250018]: 2026-01-20 15:37:08.981 250022 DEBUG nova.storage.rbd_utils [None req-4d2d3441-e1fa-47ee-b4b8-190a8a89f03e 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] rbd image 40704e43-8d1e-4a6f-addb-4e8524e3534d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:37:09 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:37:09 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:37:09 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:37:09.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:37:09 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3449: 321 pgs: 321 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 406 KiB/s rd, 3.9 MiB/s wr, 92 op/s
Jan 20 15:37:09 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/2914974930' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:37:09 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:37:09 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:37:09 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:37:09.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:37:09 compute-0 nova_compute[250018]: 2026-01-20 15:37:09.927 250022 INFO nova.virt.libvirt.driver [None req-4d2d3441-e1fa-47ee-b4b8-190a8a89f03e 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 40704e43-8d1e-4a6f-addb-4e8524e3534d] Creating config drive at /var/lib/nova/instances/40704e43-8d1e-4a6f-addb-4e8524e3534d/disk.config
Jan 20 15:37:09 compute-0 nova_compute[250018]: 2026-01-20 15:37:09.936 250022 DEBUG oslo_concurrency.processutils [None req-4d2d3441-e1fa-47ee-b4b8-190a8a89f03e 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/40704e43-8d1e-4a6f-addb-4e8524e3534d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfugop508 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:37:10 compute-0 nova_compute[250018]: 2026-01-20 15:37:10.073 250022 DEBUG oslo_concurrency.processutils [None req-4d2d3441-e1fa-47ee-b4b8-190a8a89f03e 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/40704e43-8d1e-4a6f-addb-4e8524e3534d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfugop508" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:37:10 compute-0 nova_compute[250018]: 2026-01-20 15:37:10.105 250022 DEBUG nova.storage.rbd_utils [None req-4d2d3441-e1fa-47ee-b4b8-190a8a89f03e 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] rbd image 40704e43-8d1e-4a6f-addb-4e8524e3534d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:37:10 compute-0 nova_compute[250018]: 2026-01-20 15:37:10.108 250022 DEBUG oslo_concurrency.processutils [None req-4d2d3441-e1fa-47ee-b4b8-190a8a89f03e 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/40704e43-8d1e-4a6f-addb-4e8524e3534d/disk.config 40704e43-8d1e-4a6f-addb-4e8524e3534d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:37:10 compute-0 nova_compute[250018]: 2026-01-20 15:37:10.159 250022 DEBUG nova.network.neutron [req-c85756a5-42fc-47aa-b699-a8831648f8fd req-f9fa37ad-d515-47af-a2f0-1a1702462f84 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 40704e43-8d1e-4a6f-addb-4e8524e3534d] Updated VIF entry in instance network info cache for port b07ffffb-4654-4fbb-bd0b-860c4d618de1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 15:37:10 compute-0 nova_compute[250018]: 2026-01-20 15:37:10.160 250022 DEBUG nova.network.neutron [req-c85756a5-42fc-47aa-b699-a8831648f8fd req-f9fa37ad-d515-47af-a2f0-1a1702462f84 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 40704e43-8d1e-4a6f-addb-4e8524e3534d] Updating instance_info_cache with network_info: [{"id": "b07ffffb-4654-4fbb-bd0b-860c4d618de1", "address": "fa:16:3e:bf:50:10", "network": {"id": "ce71b376-fc91-4f6b-9838-8ea300ca70de", "bridge": "br-int", "label": "tempest-network-smoke--315983280", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb07ffffb-46", "ovs_interfaceid": "b07ffffb-4654-4fbb-bd0b-860c4d618de1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:37:10 compute-0 nova_compute[250018]: 2026-01-20 15:37:10.177 250022 DEBUG oslo_concurrency.lockutils [req-c85756a5-42fc-47aa-b699-a8831648f8fd req-f9fa37ad-d515-47af-a2f0-1a1702462f84 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-40704e43-8d1e-4a6f-addb-4e8524e3534d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:37:10 compute-0 nova_compute[250018]: 2026-01-20 15:37:10.268 250022 DEBUG oslo_concurrency.processutils [None req-4d2d3441-e1fa-47ee-b4b8-190a8a89f03e 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/40704e43-8d1e-4a6f-addb-4e8524e3534d/disk.config 40704e43-8d1e-4a6f-addb-4e8524e3534d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.159s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:37:10 compute-0 nova_compute[250018]: 2026-01-20 15:37:10.269 250022 INFO nova.virt.libvirt.driver [None req-4d2d3441-e1fa-47ee-b4b8-190a8a89f03e 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 40704e43-8d1e-4a6f-addb-4e8524e3534d] Deleting local config drive /var/lib/nova/instances/40704e43-8d1e-4a6f-addb-4e8524e3534d/disk.config because it was imported into RBD.
Jan 20 15:37:10 compute-0 kernel: tapb07ffffb-46: entered promiscuous mode
Jan 20 15:37:10 compute-0 NetworkManager[48960]: <info>  [1768923430.3551] manager: (tapb07ffffb-46): new Tun device (/org/freedesktop/NetworkManager/Devices/374)
Jan 20 15:37:10 compute-0 ovn_controller[148666]: 2026-01-20T15:37:10Z|00776|binding|INFO|Claiming lport b07ffffb-4654-4fbb-bd0b-860c4d618de1 for this chassis.
Jan 20 15:37:10 compute-0 ovn_controller[148666]: 2026-01-20T15:37:10Z|00777|binding|INFO|b07ffffb-4654-4fbb-bd0b-860c4d618de1: Claiming fa:16:3e:bf:50:10 10.100.0.5
Jan 20 15:37:10 compute-0 nova_compute[250018]: 2026-01-20 15:37:10.398 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:37:10 compute-0 nova_compute[250018]: 2026-01-20 15:37:10.404 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:37:10 compute-0 systemd-udevd[391419]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 15:37:10 compute-0 NetworkManager[48960]: <info>  [1768923430.4199] manager: (patch-provnet-b62c391b-f7a3-4a38-a0df-72ac0383ca74-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/375)
Jan 20 15:37:10 compute-0 nova_compute[250018]: 2026-01-20 15:37:10.418 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:37:10 compute-0 NetworkManager[48960]: <info>  [1768923430.4214] manager: (patch-br-int-to-provnet-b62c391b-f7a3-4a38-a0df-72ac0383ca74): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/376)
Jan 20 15:37:10 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:37:10.422 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bf:50:10 10.100.0.5'], port_security=['fa:16:3e:bf:50:10 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '40704e43-8d1e-4a6f-addb-4e8524e3534d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ce71b376-fc91-4f6b-9838-8ea300ca70de', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3168f57421fb49bfb94b85daedd1fe7d', 'neutron:revision_number': '2', 'neutron:security_group_ids': '95a77afb-6fed-4025-a06d-6753ed6fc83e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=10fabbbe-46a8-4773-85b5-859f8d94e243, chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=b07ffffb-4654-4fbb-bd0b-860c4d618de1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:37:10 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:37:10.423 160071 INFO neutron.agent.ovn.metadata.agent [-] Port b07ffffb-4654-4fbb-bd0b-860c4d618de1 in datapath ce71b376-fc91-4f6b-9838-8ea300ca70de bound to our chassis
Jan 20 15:37:10 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:37:10.424 160071 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ce71b376-fc91-4f6b-9838-8ea300ca70de
Jan 20 15:37:10 compute-0 NetworkManager[48960]: <info>  [1768923430.4292] device (tapb07ffffb-46): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 15:37:10 compute-0 NetworkManager[48960]: <info>  [1768923430.4303] device (tapb07ffffb-46): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 15:37:10 compute-0 systemd-machined[216401]: New machine qemu-92-instance-000000d6.
Jan 20 15:37:10 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:37:10.436 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[3fbb7774-0992-49c5-a4fd-f5cc09176cb4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:37:10 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:37:10.437 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapce71b376-f1 in ovnmeta-ce71b376-fc91-4f6b-9838-8ea300ca70de namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 15:37:10 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:37:10.440 257604 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapce71b376-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 15:37:10 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:37:10.440 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[d523a55a-64ac-4161-88be-2516e062dd55]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:37:10 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:37:10.441 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[000c914c-e8be-4029-a971-52f4c89269cd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:37:10 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:37:10.456 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[dd4e9d9e-ab94-40e9-b9c1-b3e57f6abfdd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:37:10 compute-0 systemd[1]: Started Virtual Machine qemu-92-instance-000000d6.
Jan 20 15:37:10 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:37:10.492 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[7666d6ad-329a-4558-9bfe-3aa6c05bdfb0]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:37:10 compute-0 nova_compute[250018]: 2026-01-20 15:37:10.497 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:37:10 compute-0 nova_compute[250018]: 2026-01-20 15:37:10.508 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:37:10 compute-0 ovn_controller[148666]: 2026-01-20T15:37:10Z|00778|binding|INFO|Setting lport b07ffffb-4654-4fbb-bd0b-860c4d618de1 ovn-installed in OVS
Jan 20 15:37:10 compute-0 ovn_controller[148666]: 2026-01-20T15:37:10Z|00779|binding|INFO|Setting lport b07ffffb-4654-4fbb-bd0b-860c4d618de1 up in Southbound
Jan 20 15:37:10 compute-0 nova_compute[250018]: 2026-01-20 15:37:10.520 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:37:10 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:37:10.538 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[41e856d8-3519-4e23-b9b8-994c20587b9d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:37:10 compute-0 NetworkManager[48960]: <info>  [1768923430.5456] manager: (tapce71b376-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/377)
Jan 20 15:37:10 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:37:10.544 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[a7ae8628-edd9-4f99-a0fd-5592634e017c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:37:10 compute-0 ceph-mon[74360]: pgmap v3449: 321 pgs: 321 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 406 KiB/s rd, 3.9 MiB/s wr, 92 op/s
Jan 20 15:37:10 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:37:10.583 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[dddf5461-3de2-42d5-ac8b-efa03e6ab8e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:37:10 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:37:10.586 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[04dcf3f4-38b2-4791-933b-8c150c9e337a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:37:10 compute-0 sudo[391431]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:37:10 compute-0 sudo[391431]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:37:10 compute-0 sudo[391431]: pam_unix(sudo:session): session closed for user root
Jan 20 15:37:10 compute-0 NetworkManager[48960]: <info>  [1768923430.6115] device (tapce71b376-f0): carrier: link connected
Jan 20 15:37:10 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:37:10.617 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[be4fb9c6-d169-4d29-85a1-c9f350aa2504]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:37:10 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:37:10.635 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[d41c24de-7277-42ee-a70b-8785e417aa19]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapce71b376-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:92:63:ca'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 244], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 937133, 'reachable_time': 24360, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 391497, 'error': None, 'target': 'ovnmeta-ce71b376-fc91-4f6b-9838-8ea300ca70de', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:37:10 compute-0 sudo[391480]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:37:10 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:37:10.648 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[5059638f-4e68-41dc-811b-443e237a52cd]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe92:63ca'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 937133, 'tstamp': 937133}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 391504, 'error': None, 'target': 'ovnmeta-ce71b376-fc91-4f6b-9838-8ea300ca70de', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:37:10 compute-0 sudo[391480]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:37:10 compute-0 sudo[391480]: pam_unix(sudo:session): session closed for user root
Jan 20 15:37:10 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:37:10.669 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[ec504123-d887-46d9-a25c-5baf4abfd50e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapce71b376-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:92:63:ca'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 244], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 937133, 'reachable_time': 24360, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 391507, 'error': None, 'target': 'ovnmeta-ce71b376-fc91-4f6b-9838-8ea300ca70de', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:37:10 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:37:10.699 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[5b2b5583-e7fb-465a-a3c0-1a0f35f845c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:37:10 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:37:10.766 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[b32e305b-878d-4412-b2c2-7687c5294050]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:37:10 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:37:10.767 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapce71b376-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:37:10 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:37:10.767 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:37:10 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:37:10.768 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapce71b376-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:37:10 compute-0 nova_compute[250018]: 2026-01-20 15:37:10.769 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:37:10 compute-0 kernel: tapce71b376-f0: entered promiscuous mode
Jan 20 15:37:10 compute-0 NetworkManager[48960]: <info>  [1768923430.7705] manager: (tapce71b376-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/378)
Jan 20 15:37:10 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:37:10.774 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapce71b376-f0, col_values=(('external_ids', {'iface-id': 'e939c0cd-c70f-4392-99e6-adb0b7314e89'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:37:10 compute-0 nova_compute[250018]: 2026-01-20 15:37:10.776 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:37:10 compute-0 ovn_controller[148666]: 2026-01-20T15:37:10Z|00780|binding|INFO|Releasing lport e939c0cd-c70f-4392-99e6-adb0b7314e89 from this chassis (sb_readonly=0)
Jan 20 15:37:10 compute-0 nova_compute[250018]: 2026-01-20 15:37:10.777 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:37:10 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:37:10.779 160071 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ce71b376-fc91-4f6b-9838-8ea300ca70de.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ce71b376-fc91-4f6b-9838-8ea300ca70de.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 15:37:10 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:37:10.780 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[37af9632-3c0b-4371-8146-b20494b0883d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:37:10 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:37:10.780 160071 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 15:37:10 compute-0 ovn_metadata_agent[160049]: global
Jan 20 15:37:10 compute-0 ovn_metadata_agent[160049]:     log         /dev/log local0 debug
Jan 20 15:37:10 compute-0 ovn_metadata_agent[160049]:     log-tag     haproxy-metadata-proxy-ce71b376-fc91-4f6b-9838-8ea300ca70de
Jan 20 15:37:10 compute-0 ovn_metadata_agent[160049]:     user        root
Jan 20 15:37:10 compute-0 ovn_metadata_agent[160049]:     group       root
Jan 20 15:37:10 compute-0 ovn_metadata_agent[160049]:     maxconn     1024
Jan 20 15:37:10 compute-0 ovn_metadata_agent[160049]:     pidfile     /var/lib/neutron/external/pids/ce71b376-fc91-4f6b-9838-8ea300ca70de.pid.haproxy
Jan 20 15:37:10 compute-0 ovn_metadata_agent[160049]:     daemon
Jan 20 15:37:10 compute-0 ovn_metadata_agent[160049]: 
Jan 20 15:37:10 compute-0 ovn_metadata_agent[160049]: defaults
Jan 20 15:37:10 compute-0 ovn_metadata_agent[160049]:     log global
Jan 20 15:37:10 compute-0 ovn_metadata_agent[160049]:     mode http
Jan 20 15:37:10 compute-0 ovn_metadata_agent[160049]:     option httplog
Jan 20 15:37:10 compute-0 ovn_metadata_agent[160049]:     option dontlognull
Jan 20 15:37:10 compute-0 ovn_metadata_agent[160049]:     option http-server-close
Jan 20 15:37:10 compute-0 ovn_metadata_agent[160049]:     option forwardfor
Jan 20 15:37:10 compute-0 ovn_metadata_agent[160049]:     retries                 3
Jan 20 15:37:10 compute-0 ovn_metadata_agent[160049]:     timeout http-request    30s
Jan 20 15:37:10 compute-0 ovn_metadata_agent[160049]:     timeout connect         30s
Jan 20 15:37:10 compute-0 ovn_metadata_agent[160049]:     timeout client          32s
Jan 20 15:37:10 compute-0 ovn_metadata_agent[160049]:     timeout server          32s
Jan 20 15:37:10 compute-0 ovn_metadata_agent[160049]:     timeout http-keep-alive 30s
Jan 20 15:37:10 compute-0 ovn_metadata_agent[160049]: 
Jan 20 15:37:10 compute-0 ovn_metadata_agent[160049]: 
Jan 20 15:37:10 compute-0 ovn_metadata_agent[160049]: listen listener
Jan 20 15:37:10 compute-0 ovn_metadata_agent[160049]:     bind 169.254.169.254:80
Jan 20 15:37:10 compute-0 ovn_metadata_agent[160049]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 15:37:10 compute-0 ovn_metadata_agent[160049]:     http-request add-header X-OVN-Network-ID ce71b376-fc91-4f6b-9838-8ea300ca70de
Jan 20 15:37:10 compute-0 ovn_metadata_agent[160049]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 15:37:10 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:37:10.781 160071 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ce71b376-fc91-4f6b-9838-8ea300ca70de', 'env', 'PROCESS_TAG=haproxy-ce71b376-fc91-4f6b-9838-8ea300ca70de', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ce71b376-fc91-4f6b-9838-8ea300ca70de.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 15:37:10 compute-0 nova_compute[250018]: 2026-01-20 15:37:10.788 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:37:10 compute-0 nova_compute[250018]: 2026-01-20 15:37:10.921 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768923430.9209588, 40704e43-8d1e-4a6f-addb-4e8524e3534d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:37:10 compute-0 nova_compute[250018]: 2026-01-20 15:37:10.922 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 40704e43-8d1e-4a6f-addb-4e8524e3534d] VM Started (Lifecycle Event)
Jan 20 15:37:10 compute-0 nova_compute[250018]: 2026-01-20 15:37:10.945 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 40704e43-8d1e-4a6f-addb-4e8524e3534d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:37:10 compute-0 nova_compute[250018]: 2026-01-20 15:37:10.950 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768923430.9212255, 40704e43-8d1e-4a6f-addb-4e8524e3534d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:37:10 compute-0 nova_compute[250018]: 2026-01-20 15:37:10.950 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 40704e43-8d1e-4a6f-addb-4e8524e3534d] VM Paused (Lifecycle Event)
Jan 20 15:37:10 compute-0 nova_compute[250018]: 2026-01-20 15:37:10.977 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 40704e43-8d1e-4a6f-addb-4e8524e3534d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:37:10 compute-0 nova_compute[250018]: 2026-01-20 15:37:10.980 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 40704e43-8d1e-4a6f-addb-4e8524e3534d] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:37:11 compute-0 nova_compute[250018]: 2026-01-20 15:37:11.007 250022 DEBUG nova.compute.manager [req-0c0c679e-2f2b-4434-874d-6c7bef5f44ce req-30261bc1-90cf-435a-b68e-88f7c5f06970 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 40704e43-8d1e-4a6f-addb-4e8524e3534d] Received event network-vif-plugged-b07ffffb-4654-4fbb-bd0b-860c4d618de1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:37:11 compute-0 nova_compute[250018]: 2026-01-20 15:37:11.008 250022 DEBUG oslo_concurrency.lockutils [req-0c0c679e-2f2b-4434-874d-6c7bef5f44ce req-30261bc1-90cf-435a-b68e-88f7c5f06970 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "40704e43-8d1e-4a6f-addb-4e8524e3534d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:37:11 compute-0 nova_compute[250018]: 2026-01-20 15:37:11.008 250022 DEBUG oslo_concurrency.lockutils [req-0c0c679e-2f2b-4434-874d-6c7bef5f44ce req-30261bc1-90cf-435a-b68e-88f7c5f06970 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "40704e43-8d1e-4a6f-addb-4e8524e3534d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:37:11 compute-0 nova_compute[250018]: 2026-01-20 15:37:11.008 250022 DEBUG oslo_concurrency.lockutils [req-0c0c679e-2f2b-4434-874d-6c7bef5f44ce req-30261bc1-90cf-435a-b68e-88f7c5f06970 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "40704e43-8d1e-4a6f-addb-4e8524e3534d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:37:11 compute-0 nova_compute[250018]: 2026-01-20 15:37:11.009 250022 DEBUG nova.compute.manager [req-0c0c679e-2f2b-4434-874d-6c7bef5f44ce req-30261bc1-90cf-435a-b68e-88f7c5f06970 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 40704e43-8d1e-4a6f-addb-4e8524e3534d] Processing event network-vif-plugged-b07ffffb-4654-4fbb-bd0b-860c4d618de1 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 20 15:37:11 compute-0 nova_compute[250018]: 2026-01-20 15:37:11.010 250022 DEBUG nova.compute.manager [None req-4d2d3441-e1fa-47ee-b4b8-190a8a89f03e 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 40704e43-8d1e-4a6f-addb-4e8524e3534d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 15:37:11 compute-0 nova_compute[250018]: 2026-01-20 15:37:11.010 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 40704e43-8d1e-4a6f-addb-4e8524e3534d] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 15:37:11 compute-0 nova_compute[250018]: 2026-01-20 15:37:11.013 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768923431.0136197, 40704e43-8d1e-4a6f-addb-4e8524e3534d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:37:11 compute-0 nova_compute[250018]: 2026-01-20 15:37:11.014 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 40704e43-8d1e-4a6f-addb-4e8524e3534d] VM Resumed (Lifecycle Event)
Jan 20 15:37:11 compute-0 nova_compute[250018]: 2026-01-20 15:37:11.016 250022 DEBUG nova.virt.libvirt.driver [None req-4d2d3441-e1fa-47ee-b4b8-190a8a89f03e 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 40704e43-8d1e-4a6f-addb-4e8524e3534d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 15:37:11 compute-0 nova_compute[250018]: 2026-01-20 15:37:11.019 250022 INFO nova.virt.libvirt.driver [-] [instance: 40704e43-8d1e-4a6f-addb-4e8524e3534d] Instance spawned successfully.
Jan 20 15:37:11 compute-0 nova_compute[250018]: 2026-01-20 15:37:11.019 250022 DEBUG nova.virt.libvirt.driver [None req-4d2d3441-e1fa-47ee-b4b8-190a8a89f03e 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 40704e43-8d1e-4a6f-addb-4e8524e3534d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 15:37:11 compute-0 nova_compute[250018]: 2026-01-20 15:37:11.038 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 40704e43-8d1e-4a6f-addb-4e8524e3534d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:37:11 compute-0 nova_compute[250018]: 2026-01-20 15:37:11.043 250022 DEBUG nova.virt.libvirt.driver [None req-4d2d3441-e1fa-47ee-b4b8-190a8a89f03e 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 40704e43-8d1e-4a6f-addb-4e8524e3534d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:37:11 compute-0 nova_compute[250018]: 2026-01-20 15:37:11.043 250022 DEBUG nova.virt.libvirt.driver [None req-4d2d3441-e1fa-47ee-b4b8-190a8a89f03e 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 40704e43-8d1e-4a6f-addb-4e8524e3534d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:37:11 compute-0 nova_compute[250018]: 2026-01-20 15:37:11.044 250022 DEBUG nova.virt.libvirt.driver [None req-4d2d3441-e1fa-47ee-b4b8-190a8a89f03e 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 40704e43-8d1e-4a6f-addb-4e8524e3534d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:37:11 compute-0 nova_compute[250018]: 2026-01-20 15:37:11.044 250022 DEBUG nova.virt.libvirt.driver [None req-4d2d3441-e1fa-47ee-b4b8-190a8a89f03e 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 40704e43-8d1e-4a6f-addb-4e8524e3534d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:37:11 compute-0 nova_compute[250018]: 2026-01-20 15:37:11.045 250022 DEBUG nova.virt.libvirt.driver [None req-4d2d3441-e1fa-47ee-b4b8-190a8a89f03e 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 40704e43-8d1e-4a6f-addb-4e8524e3534d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:37:11 compute-0 nova_compute[250018]: 2026-01-20 15:37:11.045 250022 DEBUG nova.virt.libvirt.driver [None req-4d2d3441-e1fa-47ee-b4b8-190a8a89f03e 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 40704e43-8d1e-4a6f-addb-4e8524e3534d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:37:11 compute-0 nova_compute[250018]: 2026-01-20 15:37:11.050 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 40704e43-8d1e-4a6f-addb-4e8524e3534d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:37:11 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:37:11 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:37:11 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:37:11.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:37:11 compute-0 nova_compute[250018]: 2026-01-20 15:37:11.078 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 40704e43-8d1e-4a6f-addb-4e8524e3534d] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 15:37:11 compute-0 nova_compute[250018]: 2026-01-20 15:37:11.103 250022 INFO nova.compute.manager [None req-4d2d3441-e1fa-47ee-b4b8-190a8a89f03e 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 40704e43-8d1e-4a6f-addb-4e8524e3534d] Took 8.31 seconds to spawn the instance on the hypervisor.
Jan 20 15:37:11 compute-0 nova_compute[250018]: 2026-01-20 15:37:11.104 250022 DEBUG nova.compute.manager [None req-4d2d3441-e1fa-47ee-b4b8-190a8a89f03e 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 40704e43-8d1e-4a6f-addb-4e8524e3534d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:37:11 compute-0 podman[391578]: 2026-01-20 15:37:11.146134266 +0000 UTC m=+0.040964627 container create 0bb68ccb2c8b80b7179d5bd6a6b7b62aecf75741d5c4fa7801022a849fe0eea0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce71b376-fc91-4f6b-9838-8ea300ca70de, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 20 15:37:11 compute-0 systemd[1]: Started libpod-conmon-0bb68ccb2c8b80b7179d5bd6a6b7b62aecf75741d5c4fa7801022a849fe0eea0.scope.
Jan 20 15:37:11 compute-0 nova_compute[250018]: 2026-01-20 15:37:11.192 250022 INFO nova.compute.manager [None req-4d2d3441-e1fa-47ee-b4b8-190a8a89f03e 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 40704e43-8d1e-4a6f-addb-4e8524e3534d] Took 9.28 seconds to build instance.
Jan 20 15:37:11 compute-0 nova_compute[250018]: 2026-01-20 15:37:11.206 250022 DEBUG oslo_concurrency.lockutils [None req-4d2d3441-e1fa-47ee-b4b8-190a8a89f03e 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "40704e43-8d1e-4a6f-addb-4e8524e3534d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.356s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:37:11 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:37:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1e6f2305f5447705a10372f5040b0269b09e467fb1ef6759982c3c23dab35ff/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 15:37:11 compute-0 podman[391578]: 2026-01-20 15:37:11.124184659 +0000 UTC m=+0.019015040 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 15:37:11 compute-0 podman[391578]: 2026-01-20 15:37:11.224317028 +0000 UTC m=+0.119147399 container init 0bb68ccb2c8b80b7179d5bd6a6b7b62aecf75741d5c4fa7801022a849fe0eea0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce71b376-fc91-4f6b-9838-8ea300ca70de, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 15:37:11 compute-0 podman[391578]: 2026-01-20 15:37:11.231070682 +0000 UTC m=+0.125901033 container start 0bb68ccb2c8b80b7179d5bd6a6b7b62aecf75741d5c4fa7801022a849fe0eea0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce71b376-fc91-4f6b-9838-8ea300ca70de, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 20 15:37:11 compute-0 neutron-haproxy-ovnmeta-ce71b376-fc91-4f6b-9838-8ea300ca70de[391593]: [NOTICE]   (391597) : New worker (391599) forked
Jan 20 15:37:11 compute-0 neutron-haproxy-ovnmeta-ce71b376-fc91-4f6b-9838-8ea300ca70de[391593]: [NOTICE]   (391597) : Loading success.
Jan 20 15:37:11 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3450: 321 pgs: 321 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 407 KiB/s rd, 3.9 MiB/s wr, 92 op/s
Jan 20 15:37:11 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:37:11 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:37:11 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:37:11.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:37:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 15:37:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:37:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 20 15:37:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:37:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00315160873813512 of space, bias 1.0, pg target 0.945482621440536 quantized to 32 (current 32)
Jan 20 15:37:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:37:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021614147124511445 of space, bias 1.0, pg target 0.6484244137353433 quantized to 32 (current 32)
Jan 20 15:37:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:37:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 20 15:37:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:37:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 20 15:37:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:37:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 20 15:37:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:37:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 15:37:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:37:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 20 15:37:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:37:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 20 15:37:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:37:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 15:37:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:37:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 20 15:37:12 compute-0 nova_compute[250018]: 2026-01-20 15:37:12.084 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:37:12 compute-0 ceph-mon[74360]: pgmap v3450: 321 pgs: 321 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 407 KiB/s rd, 3.9 MiB/s wr, 92 op/s
Jan 20 15:37:13 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:37:13 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:37:13 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:37:13.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:37:13 compute-0 nova_compute[250018]: 2026-01-20 15:37:13.148 250022 DEBUG nova.compute.manager [req-e225bffa-9320-42fc-8e23-990a916df7f8 req-a089c341-e436-4a9a-9292-e00910cf8923 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 40704e43-8d1e-4a6f-addb-4e8524e3534d] Received event network-vif-plugged-b07ffffb-4654-4fbb-bd0b-860c4d618de1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:37:13 compute-0 nova_compute[250018]: 2026-01-20 15:37:13.149 250022 DEBUG oslo_concurrency.lockutils [req-e225bffa-9320-42fc-8e23-990a916df7f8 req-a089c341-e436-4a9a-9292-e00910cf8923 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "40704e43-8d1e-4a6f-addb-4e8524e3534d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:37:13 compute-0 nova_compute[250018]: 2026-01-20 15:37:13.150 250022 DEBUG oslo_concurrency.lockutils [req-e225bffa-9320-42fc-8e23-990a916df7f8 req-a089c341-e436-4a9a-9292-e00910cf8923 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "40704e43-8d1e-4a6f-addb-4e8524e3534d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:37:13 compute-0 nova_compute[250018]: 2026-01-20 15:37:13.150 250022 DEBUG oslo_concurrency.lockutils [req-e225bffa-9320-42fc-8e23-990a916df7f8 req-a089c341-e436-4a9a-9292-e00910cf8923 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "40704e43-8d1e-4a6f-addb-4e8524e3534d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:37:13 compute-0 nova_compute[250018]: 2026-01-20 15:37:13.151 250022 DEBUG nova.compute.manager [req-e225bffa-9320-42fc-8e23-990a916df7f8 req-a089c341-e436-4a9a-9292-e00910cf8923 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 40704e43-8d1e-4a6f-addb-4e8524e3534d] No waiting events found dispatching network-vif-plugged-b07ffffb-4654-4fbb-bd0b-860c4d618de1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:37:13 compute-0 nova_compute[250018]: 2026-01-20 15:37:13.151 250022 WARNING nova.compute.manager [req-e225bffa-9320-42fc-8e23-990a916df7f8 req-a089c341-e436-4a9a-9292-e00910cf8923 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 40704e43-8d1e-4a6f-addb-4e8524e3534d] Received unexpected event network-vif-plugged-b07ffffb-4654-4fbb-bd0b-860c4d618de1 for instance with vm_state active and task_state None.
Jan 20 15:37:13 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3451: 321 pgs: 321 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 407 KiB/s rd, 3.9 MiB/s wr, 92 op/s
Jan 20 15:37:13 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:37:13 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:37:13 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:37:13 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:37:13.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:37:13 compute-0 nova_compute[250018]: 2026-01-20 15:37:13.952 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:37:14 compute-0 ceph-mon[74360]: pgmap v3451: 321 pgs: 321 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 407 KiB/s rd, 3.9 MiB/s wr, 92 op/s
Jan 20 15:37:14 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/3673359103' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:37:14 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/3673359103' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:37:15 compute-0 nova_compute[250018]: 2026-01-20 15:37:15.054 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:37:15 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:37:15 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:37:15 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:37:15.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:37:15 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3452: 321 pgs: 321 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 3.9 MiB/s wr, 145 op/s
Jan 20 15:37:15 compute-0 nova_compute[250018]: 2026-01-20 15:37:15.377 250022 DEBUG nova.compute.manager [req-ea53e749-cbb1-4b34-a5b0-58c6cf9b4602 req-faa51a6e-6e3b-4f62-9d0c-1f7001aff6bb 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 40704e43-8d1e-4a6f-addb-4e8524e3534d] Received event network-changed-b07ffffb-4654-4fbb-bd0b-860c4d618de1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:37:15 compute-0 nova_compute[250018]: 2026-01-20 15:37:15.378 250022 DEBUG nova.compute.manager [req-ea53e749-cbb1-4b34-a5b0-58c6cf9b4602 req-faa51a6e-6e3b-4f62-9d0c-1f7001aff6bb 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 40704e43-8d1e-4a6f-addb-4e8524e3534d] Refreshing instance network info cache due to event network-changed-b07ffffb-4654-4fbb-bd0b-860c4d618de1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 15:37:15 compute-0 nova_compute[250018]: 2026-01-20 15:37:15.378 250022 DEBUG oslo_concurrency.lockutils [req-ea53e749-cbb1-4b34-a5b0-58c6cf9b4602 req-faa51a6e-6e3b-4f62-9d0c-1f7001aff6bb 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-40704e43-8d1e-4a6f-addb-4e8524e3534d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:37:15 compute-0 nova_compute[250018]: 2026-01-20 15:37:15.378 250022 DEBUG oslo_concurrency.lockutils [req-ea53e749-cbb1-4b34-a5b0-58c6cf9b4602 req-faa51a6e-6e3b-4f62-9d0c-1f7001aff6bb 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-40704e43-8d1e-4a6f-addb-4e8524e3534d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:37:15 compute-0 nova_compute[250018]: 2026-01-20 15:37:15.378 250022 DEBUG nova.network.neutron [req-ea53e749-cbb1-4b34-a5b0-58c6cf9b4602 req-faa51a6e-6e3b-4f62-9d0c-1f7001aff6bb 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 40704e43-8d1e-4a6f-addb-4e8524e3534d] Refreshing network info cache for port b07ffffb-4654-4fbb-bd0b-860c4d618de1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 15:37:15 compute-0 ceph-mon[74360]: pgmap v3452: 321 pgs: 321 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 3.9 MiB/s wr, 145 op/s
Jan 20 15:37:15 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:37:15 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:37:15 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:37:15.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:37:16 compute-0 nova_compute[250018]: 2026-01-20 15:37:16.810 250022 DEBUG nova.network.neutron [req-ea53e749-cbb1-4b34-a5b0-58c6cf9b4602 req-faa51a6e-6e3b-4f62-9d0c-1f7001aff6bb 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 40704e43-8d1e-4a6f-addb-4e8524e3534d] Updated VIF entry in instance network info cache for port b07ffffb-4654-4fbb-bd0b-860c4d618de1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 15:37:16 compute-0 nova_compute[250018]: 2026-01-20 15:37:16.811 250022 DEBUG nova.network.neutron [req-ea53e749-cbb1-4b34-a5b0-58c6cf9b4602 req-faa51a6e-6e3b-4f62-9d0c-1f7001aff6bb 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 40704e43-8d1e-4a6f-addb-4e8524e3534d] Updating instance_info_cache with network_info: [{"id": "b07ffffb-4654-4fbb-bd0b-860c4d618de1", "address": "fa:16:3e:bf:50:10", "network": {"id": "ce71b376-fc91-4f6b-9838-8ea300ca70de", "bridge": "br-int", "label": "tempest-network-smoke--315983280", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb07ffffb-46", "ovs_interfaceid": "b07ffffb-4654-4fbb-bd0b-860c4d618de1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:37:16 compute-0 nova_compute[250018]: 2026-01-20 15:37:16.828 250022 DEBUG oslo_concurrency.lockutils [req-ea53e749-cbb1-4b34-a5b0-58c6cf9b4602 req-faa51a6e-6e3b-4f62-9d0c-1f7001aff6bb 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-40704e43-8d1e-4a6f-addb-4e8524e3534d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:37:16 compute-0 ceph-osd[84815]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 20 15:37:16 compute-0 ceph-osd[84815]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6000.1 total, 600.0 interval
                                           Cumulative writes: 57K writes, 215K keys, 57K commit groups, 1.0 writes per commit group, ingest: 0.20 GB, 0.03 MB/s
                                           Cumulative WAL: 57K writes, 21K syncs, 2.65 writes per sync, written: 0.20 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2225 writes, 7775 keys, 2225 commit groups, 1.0 writes per commit group, ingest: 7.92 MB, 0.01 MB/s
                                           Interval WAL: 2225 writes, 934 syncs, 2.38 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562bb86671f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562bb86671f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562bb86671f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562bb86671f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562bb86671f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562bb86671f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562bb86671f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562bb8667610#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.0 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562bb8667610#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.0 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562bb8667610#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.0 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562bb86671f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.0 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562bb86671f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 20 15:37:17 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:37:17 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:37:17 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:37:17.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:37:17 compute-0 nova_compute[250018]: 2026-01-20 15:37:17.089 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:37:17 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/2535838283' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:37:17 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3453: 321 pgs: 321 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.1 MiB/s wr, 110 op/s
Jan 20 15:37:17 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:37:17 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:37:17 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:37:17.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:37:18 compute-0 ceph-mon[74360]: pgmap v3453: 321 pgs: 321 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.1 MiB/s wr, 110 op/s
Jan 20 15:37:18 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/1014966892' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:37:18 compute-0 sudo[391611]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:37:18 compute-0 sudo[391611]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:37:18 compute-0 sudo[391611]: pam_unix(sudo:session): session closed for user root
Jan 20 15:37:18 compute-0 sudo[391636]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:37:18 compute-0 sudo[391636]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:37:18 compute-0 sudo[391636]: pam_unix(sudo:session): session closed for user root
Jan 20 15:37:18 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:37:18 compute-0 sudo[391662]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:37:18 compute-0 sudo[391662]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:37:18 compute-0 sudo[391662]: pam_unix(sudo:session): session closed for user root
Jan 20 15:37:18 compute-0 sudo[391687]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 15:37:18 compute-0 sudo[391687]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:37:18 compute-0 sudo[391687]: pam_unix(sudo:session): session closed for user root
Jan 20 15:37:19 compute-0 nova_compute[250018]: 2026-01-20 15:37:18.998 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:37:19 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:37:19 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:37:19 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:37:19.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:37:19 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 15:37:19 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:37:19 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 20 15:37:19 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 15:37:19 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 20 15:37:19 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:37:19 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev e815cb5e-6767-4435-afcb-1a0fcf4dca34 does not exist
Jan 20 15:37:19 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev df4fa649-7e0f-466d-824a-291b01adad0a does not exist
Jan 20 15:37:19 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 559401d4-40c8-4842-bc1c-14dcdffd3f90 does not exist
Jan 20 15:37:19 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 20 15:37:19 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 15:37:19 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 20 15:37:19 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 15:37:19 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 15:37:19 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:37:19 compute-0 sudo[391742]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:37:19 compute-0 sudo[391742]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:37:19 compute-0 sudo[391742]: pam_unix(sudo:session): session closed for user root
Jan 20 15:37:19 compute-0 sudo[391767]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:37:19 compute-0 sudo[391767]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:37:19 compute-0 sudo[391767]: pam_unix(sudo:session): session closed for user root
Jan 20 15:37:19 compute-0 sudo[391792]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:37:19 compute-0 sudo[391792]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:37:19 compute-0 sudo[391792]: pam_unix(sudo:session): session closed for user root
Jan 20 15:37:19 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:37:19 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 15:37:19 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:37:19 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 15:37:19 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 15:37:19 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:37:19 compute-0 sudo[391817]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 15:37:19 compute-0 sudo[391817]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:37:19 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3454: 321 pgs: 321 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 29 KiB/s wr, 80 op/s
Jan 20 15:37:19 compute-0 podman[391883]: 2026-01-20 15:37:19.640114924 +0000 UTC m=+0.051280450 container create 4afbc874839afa430d86e79012359eee74c0c6f1456ef908d7bb711bcfa4f612 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_booth, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:37:19 compute-0 systemd[1]: Started libpod-conmon-4afbc874839afa430d86e79012359eee74c0c6f1456ef908d7bb711bcfa4f612.scope.
Jan 20 15:37:19 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:37:19 compute-0 podman[391883]: 2026-01-20 15:37:19.618446902 +0000 UTC m=+0.029612478 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:37:19 compute-0 podman[391883]: 2026-01-20 15:37:19.724820282 +0000 UTC m=+0.135985808 container init 4afbc874839afa430d86e79012359eee74c0c6f1456ef908d7bb711bcfa4f612 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_booth, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 15:37:19 compute-0 podman[391883]: 2026-01-20 15:37:19.732109272 +0000 UTC m=+0.143274788 container start 4afbc874839afa430d86e79012359eee74c0c6f1456ef908d7bb711bcfa4f612 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_booth, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 20 15:37:19 compute-0 podman[391883]: 2026-01-20 15:37:19.735300738 +0000 UTC m=+0.146466264 container attach 4afbc874839afa430d86e79012359eee74c0c6f1456ef908d7bb711bcfa4f612 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_booth, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 15:37:19 compute-0 priceless_booth[391900]: 167 167
Jan 20 15:37:19 compute-0 podman[391883]: 2026-01-20 15:37:19.737812707 +0000 UTC m=+0.148978233 container died 4afbc874839afa430d86e79012359eee74c0c6f1456ef908d7bb711bcfa4f612 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_booth, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 15:37:19 compute-0 systemd[1]: libpod-4afbc874839afa430d86e79012359eee74c0c6f1456ef908d7bb711bcfa4f612.scope: Deactivated successfully.
Jan 20 15:37:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-4141696699c126a35d639ad693eb4384cc05b427cb2ba683ec394608f308a89e-merged.mount: Deactivated successfully.
Jan 20 15:37:19 compute-0 podman[391883]: 2026-01-20 15:37:19.774538328 +0000 UTC m=+0.185703854 container remove 4afbc874839afa430d86e79012359eee74c0c6f1456ef908d7bb711bcfa4f612 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_booth, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:37:19 compute-0 systemd[1]: libpod-conmon-4afbc874839afa430d86e79012359eee74c0c6f1456ef908d7bb711bcfa4f612.scope: Deactivated successfully.
Jan 20 15:37:19 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:37:19 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:37:19 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:37:19.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:37:19 compute-0 podman[391923]: 2026-01-20 15:37:19.931980459 +0000 UTC m=+0.039393485 container create 37ace499c05dddf41003945a66f291a0e32a7cea67b80b6e2719f65f3a1c0aa6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_bhabha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 15:37:19 compute-0 systemd[1]: Started libpod-conmon-37ace499c05dddf41003945a66f291a0e32a7cea67b80b6e2719f65f3a1c0aa6.scope.
Jan 20 15:37:20 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:37:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5db77946e1c3dc1a1d0b602b4d5f552000c25c334cf077edf87e984ea5c27ad3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 15:37:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5db77946e1c3dc1a1d0b602b4d5f552000c25c334cf077edf87e984ea5c27ad3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 15:37:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5db77946e1c3dc1a1d0b602b4d5f552000c25c334cf077edf87e984ea5c27ad3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 15:37:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5db77946e1c3dc1a1d0b602b4d5f552000c25c334cf077edf87e984ea5c27ad3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 15:37:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5db77946e1c3dc1a1d0b602b4d5f552000c25c334cf077edf87e984ea5c27ad3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 15:37:20 compute-0 podman[391923]: 2026-01-20 15:37:19.916567709 +0000 UTC m=+0.023980755 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:37:20 compute-0 podman[391923]: 2026-01-20 15:37:20.022124487 +0000 UTC m=+0.129537533 container init 37ace499c05dddf41003945a66f291a0e32a7cea67b80b6e2719f65f3a1c0aa6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_bhabha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 20 15:37:20 compute-0 podman[391923]: 2026-01-20 15:37:20.031224684 +0000 UTC m=+0.138637710 container start 37ace499c05dddf41003945a66f291a0e32a7cea67b80b6e2719f65f3a1c0aa6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_bhabha, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:37:20 compute-0 podman[391923]: 2026-01-20 15:37:20.034665419 +0000 UTC m=+0.142078475 container attach 37ace499c05dddf41003945a66f291a0e32a7cea67b80b6e2719f65f3a1c0aa6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_bhabha, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 20 15:37:20 compute-0 ceph-mon[74360]: pgmap v3454: 321 pgs: 321 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 29 KiB/s wr, 80 op/s
Jan 20 15:37:20 compute-0 busy_bhabha[391939]: --> passed data devices: 0 physical, 1 LVM
Jan 20 15:37:20 compute-0 busy_bhabha[391939]: --> relative data size: 1.0
Jan 20 15:37:20 compute-0 busy_bhabha[391939]: --> All data devices are unavailable
Jan 20 15:37:20 compute-0 systemd[1]: libpod-37ace499c05dddf41003945a66f291a0e32a7cea67b80b6e2719f65f3a1c0aa6.scope: Deactivated successfully.
Jan 20 15:37:20 compute-0 podman[391923]: 2026-01-20 15:37:20.843073205 +0000 UTC m=+0.950486241 container died 37ace499c05dddf41003945a66f291a0e32a7cea67b80b6e2719f65f3a1c0aa6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_bhabha, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 15:37:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-5db77946e1c3dc1a1d0b602b4d5f552000c25c334cf077edf87e984ea5c27ad3-merged.mount: Deactivated successfully.
Jan 20 15:37:20 compute-0 podman[391923]: 2026-01-20 15:37:20.896709036 +0000 UTC m=+1.004122062 container remove 37ace499c05dddf41003945a66f291a0e32a7cea67b80b6e2719f65f3a1c0aa6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_bhabha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 15:37:20 compute-0 systemd[1]: libpod-conmon-37ace499c05dddf41003945a66f291a0e32a7cea67b80b6e2719f65f3a1c0aa6.scope: Deactivated successfully.
Jan 20 15:37:20 compute-0 sudo[391817]: pam_unix(sudo:session): session closed for user root
Jan 20 15:37:20 compute-0 sudo[391970]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:37:20 compute-0 sudo[391970]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:37:20 compute-0 sudo[391970]: pam_unix(sudo:session): session closed for user root
Jan 20 15:37:21 compute-0 sudo[391995]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:37:21 compute-0 sudo[391995]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:37:21 compute-0 sudo[391995]: pam_unix(sudo:session): session closed for user root
Jan 20 15:37:21 compute-0 nova_compute[250018]: 2026-01-20 15:37:21.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:37:21 compute-0 nova_compute[250018]: 2026-01-20 15:37:21.051 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 15:37:21 compute-0 nova_compute[250018]: 2026-01-20 15:37:21.052 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:37:21 compute-0 nova_compute[250018]: 2026-01-20 15:37:21.074 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:37:21 compute-0 nova_compute[250018]: 2026-01-20 15:37:21.074 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:37:21 compute-0 nova_compute[250018]: 2026-01-20 15:37:21.074 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:37:21 compute-0 nova_compute[250018]: 2026-01-20 15:37:21.074 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 15:37:21 compute-0 nova_compute[250018]: 2026-01-20 15:37:21.075 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:37:21 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:37:21 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:37:21 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:37:21.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:37:21 compute-0 sudo[392020]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:37:21 compute-0 sudo[392020]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:37:21 compute-0 sudo[392020]: pam_unix(sudo:session): session closed for user root
Jan 20 15:37:21 compute-0 sudo[392046]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- lvm list --format json
Jan 20 15:37:21 compute-0 sudo[392046]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:37:21 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3455: 321 pgs: 321 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 25 KiB/s wr, 74 op/s
Jan 20 15:37:21 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/1298775489' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:37:21 compute-0 podman[392130]: 2026-01-20 15:37:21.471358821 +0000 UTC m=+0.024461298 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:37:21 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:37:21 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1599222156' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:37:21 compute-0 podman[392130]: 2026-01-20 15:37:21.690514055 +0000 UTC m=+0.243616512 container create 8a63bdd16d0c3f0536a3b8f6f59faf10e2645d395a9833acb85884156c6ddb63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_varahamihira, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 15:37:21 compute-0 nova_compute[250018]: 2026-01-20 15:37:21.708 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.633s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:37:21 compute-0 systemd[1]: Started libpod-conmon-8a63bdd16d0c3f0536a3b8f6f59faf10e2645d395a9833acb85884156c6ddb63.scope.
Jan 20 15:37:21 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:37:21 compute-0 podman[392130]: 2026-01-20 15:37:21.787795217 +0000 UTC m=+0.340897694 container init 8a63bdd16d0c3f0536a3b8f6f59faf10e2645d395a9833acb85884156c6ddb63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_varahamihira, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 20 15:37:21 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:37:21 compute-0 nova_compute[250018]: 2026-01-20 15:37:21.796 250022 DEBUG nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] skipping disk for instance-000000d6 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 15:37:21 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:37:21 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:37:21.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:37:21 compute-0 nova_compute[250018]: 2026-01-20 15:37:21.798 250022 DEBUG nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] skipping disk for instance-000000d6 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 15:37:21 compute-0 podman[392130]: 2026-01-20 15:37:21.800866623 +0000 UTC m=+0.353969080 container start 8a63bdd16d0c3f0536a3b8f6f59faf10e2645d395a9833acb85884156c6ddb63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_varahamihira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 15:37:21 compute-0 youthful_varahamihira[392150]: 167 167
Jan 20 15:37:21 compute-0 systemd[1]: libpod-8a63bdd16d0c3f0536a3b8f6f59faf10e2645d395a9833acb85884156c6ddb63.scope: Deactivated successfully.
Jan 20 15:37:21 compute-0 podman[392130]: 2026-01-20 15:37:21.808606255 +0000 UTC m=+0.361708732 container attach 8a63bdd16d0c3f0536a3b8f6f59faf10e2645d395a9833acb85884156c6ddb63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_varahamihira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 15:37:21 compute-0 podman[392130]: 2026-01-20 15:37:21.808896312 +0000 UTC m=+0.361998779 container died 8a63bdd16d0c3f0536a3b8f6f59faf10e2645d395a9833acb85884156c6ddb63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_varahamihira, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 20 15:37:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-3d78d0284642ed36062c85b24d1e8b87f2a350d224b8f85ee3119a7fa0aee469-merged.mount: Deactivated successfully.
Jan 20 15:37:21 compute-0 podman[392130]: 2026-01-20 15:37:21.852775098 +0000 UTC m=+0.405877555 container remove 8a63bdd16d0c3f0536a3b8f6f59faf10e2645d395a9833acb85884156c6ddb63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_varahamihira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 15:37:21 compute-0 systemd[1]: libpod-conmon-8a63bdd16d0c3f0536a3b8f6f59faf10e2645d395a9833acb85884156c6ddb63.scope: Deactivated successfully.
Jan 20 15:37:22 compute-0 nova_compute[250018]: 2026-01-20 15:37:22.032 250022 WARNING nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:37:22 compute-0 nova_compute[250018]: 2026-01-20 15:37:22.035 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3992MB free_disk=20.921852111816406GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 15:37:22 compute-0 nova_compute[250018]: 2026-01-20 15:37:22.036 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:37:22 compute-0 nova_compute[250018]: 2026-01-20 15:37:22.036 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:37:22 compute-0 podman[392173]: 2026-01-20 15:37:22.052449531 +0000 UTC m=+0.052852581 container create 419e91d0425712dc9ab4940ef1bd961b245a8229b1310f073883a3c755101098 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_panini, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 20 15:37:22 compute-0 nova_compute[250018]: 2026-01-20 15:37:22.089 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:37:22 compute-0 systemd[1]: Started libpod-conmon-419e91d0425712dc9ab4940ef1bd961b245a8229b1310f073883a3c755101098.scope.
Jan 20 15:37:22 compute-0 podman[392173]: 2026-01-20 15:37:22.025492947 +0000 UTC m=+0.025896017 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:37:22 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:37:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2cc0c69f0d4162d1e6ccb7da0941b30c21be0a23263f14eda03444f14f821d30/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 15:37:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2cc0c69f0d4162d1e6ccb7da0941b30c21be0a23263f14eda03444f14f821d30/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 15:37:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2cc0c69f0d4162d1e6ccb7da0941b30c21be0a23263f14eda03444f14f821d30/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 15:37:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2cc0c69f0d4162d1e6ccb7da0941b30c21be0a23263f14eda03444f14f821d30/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 15:37:22 compute-0 podman[392173]: 2026-01-20 15:37:22.148494459 +0000 UTC m=+0.148897529 container init 419e91d0425712dc9ab4940ef1bd961b245a8229b1310f073883a3c755101098 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_panini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 20 15:37:22 compute-0 nova_compute[250018]: 2026-01-20 15:37:22.150 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Instance 40704e43-8d1e-4a6f-addb-4e8524e3534d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 15:37:22 compute-0 nova_compute[250018]: 2026-01-20 15:37:22.151 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 15:37:22 compute-0 nova_compute[250018]: 2026-01-20 15:37:22.151 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 15:37:22 compute-0 podman[392173]: 2026-01-20 15:37:22.163108258 +0000 UTC m=+0.163511298 container start 419e91d0425712dc9ab4940ef1bd961b245a8229b1310f073883a3c755101098 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_panini, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 20 15:37:22 compute-0 podman[392173]: 2026-01-20 15:37:22.166063718 +0000 UTC m=+0.166466768 container attach 419e91d0425712dc9ab4940ef1bd961b245a8229b1310f073883a3c755101098 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_panini, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2)
Jan 20 15:37:22 compute-0 nova_compute[250018]: 2026-01-20 15:37:22.269 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:37:22 compute-0 ceph-mon[74360]: pgmap v3455: 321 pgs: 321 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 25 KiB/s wr, 74 op/s
Jan 20 15:37:22 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/351549409' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:37:22 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/1599222156' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:37:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:37:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:37:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:37:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:37:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:37:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:37:22 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:37:22 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1011709830' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:37:22 compute-0 nova_compute[250018]: 2026-01-20 15:37:22.787 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.519s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:37:22 compute-0 nova_compute[250018]: 2026-01-20 15:37:22.796 250022 DEBUG nova.compute.provider_tree [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:37:22 compute-0 nova_compute[250018]: 2026-01-20 15:37:22.816 250022 DEBUG nova.scheduler.client.report [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:37:22 compute-0 nova_compute[250018]: 2026-01-20 15:37:22.839 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 15:37:22 compute-0 nova_compute[250018]: 2026-01-20 15:37:22.840 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.804s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:37:22 compute-0 practical_panini[392190]: {
Jan 20 15:37:22 compute-0 practical_panini[392190]:     "0": [
Jan 20 15:37:22 compute-0 practical_panini[392190]:         {
Jan 20 15:37:22 compute-0 practical_panini[392190]:             "devices": [
Jan 20 15:37:22 compute-0 practical_panini[392190]:                 "/dev/loop3"
Jan 20 15:37:22 compute-0 practical_panini[392190]:             ],
Jan 20 15:37:22 compute-0 practical_panini[392190]:             "lv_name": "ceph_lv0",
Jan 20 15:37:22 compute-0 practical_panini[392190]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 15:37:22 compute-0 practical_panini[392190]:             "lv_size": "7511998464",
Jan 20 15:37:22 compute-0 practical_panini[392190]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e399cf45-e6b6-5393-99f1-75c601d3f188,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afc3740a-ccee-46f8-b343-22648cc89376,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 20 15:37:22 compute-0 practical_panini[392190]:             "lv_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 15:37:22 compute-0 practical_panini[392190]:             "name": "ceph_lv0",
Jan 20 15:37:22 compute-0 practical_panini[392190]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 15:37:22 compute-0 practical_panini[392190]:             "tags": {
Jan 20 15:37:22 compute-0 practical_panini[392190]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 15:37:22 compute-0 practical_panini[392190]:                 "ceph.block_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 15:37:22 compute-0 practical_panini[392190]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 15:37:22 compute-0 practical_panini[392190]:                 "ceph.cluster_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 15:37:22 compute-0 practical_panini[392190]:                 "ceph.cluster_name": "ceph",
Jan 20 15:37:22 compute-0 practical_panini[392190]:                 "ceph.crush_device_class": "",
Jan 20 15:37:22 compute-0 practical_panini[392190]:                 "ceph.encrypted": "0",
Jan 20 15:37:22 compute-0 practical_panini[392190]:                 "ceph.osd_fsid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 15:37:22 compute-0 practical_panini[392190]:                 "ceph.osd_id": "0",
Jan 20 15:37:22 compute-0 practical_panini[392190]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 15:37:22 compute-0 practical_panini[392190]:                 "ceph.type": "block",
Jan 20 15:37:22 compute-0 practical_panini[392190]:                 "ceph.vdo": "0"
Jan 20 15:37:22 compute-0 practical_panini[392190]:             },
Jan 20 15:37:22 compute-0 practical_panini[392190]:             "type": "block",
Jan 20 15:37:22 compute-0 practical_panini[392190]:             "vg_name": "ceph_vg0"
Jan 20 15:37:22 compute-0 practical_panini[392190]:         }
Jan 20 15:37:22 compute-0 practical_panini[392190]:     ]
Jan 20 15:37:22 compute-0 practical_panini[392190]: }
Jan 20 15:37:22 compute-0 systemd[1]: libpod-419e91d0425712dc9ab4940ef1bd961b245a8229b1310f073883a3c755101098.scope: Deactivated successfully.
Jan 20 15:37:22 compute-0 podman[392173]: 2026-01-20 15:37:22.942553484 +0000 UTC m=+0.942956594 container died 419e91d0425712dc9ab4940ef1bd961b245a8229b1310f073883a3c755101098 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_panini, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 15:37:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-2cc0c69f0d4162d1e6ccb7da0941b30c21be0a23263f14eda03444f14f821d30-merged.mount: Deactivated successfully.
Jan 20 15:37:23 compute-0 podman[392173]: 2026-01-20 15:37:23.015191204 +0000 UTC m=+1.015594254 container remove 419e91d0425712dc9ab4940ef1bd961b245a8229b1310f073883a3c755101098 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_panini, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 20 15:37:23 compute-0 systemd[1]: libpod-conmon-419e91d0425712dc9ab4940ef1bd961b245a8229b1310f073883a3c755101098.scope: Deactivated successfully.
Jan 20 15:37:23 compute-0 sudo[392046]: pam_unix(sudo:session): session closed for user root
Jan 20 15:37:23 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:37:23 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:37:23 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:37:23.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:37:23 compute-0 sudo[392236]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:37:23 compute-0 sudo[392236]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:37:23 compute-0 sudo[392236]: pam_unix(sudo:session): session closed for user root
Jan 20 15:37:23 compute-0 sudo[392261]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:37:23 compute-0 sudo[392261]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:37:23 compute-0 sudo[392261]: pam_unix(sudo:session): session closed for user root
Jan 20 15:37:23 compute-0 sudo[392286]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:37:23 compute-0 sudo[392286]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:37:23 compute-0 sudo[392286]: pam_unix(sudo:session): session closed for user root
Jan 20 15:37:23 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3456: 321 pgs: 321 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 25 KiB/s wr, 73 op/s
Jan 20 15:37:23 compute-0 sudo[392311]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- raw list --format json
Jan 20 15:37:23 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/1011709830' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:37:23 compute-0 sudo[392311]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:37:23 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:37:23 compute-0 podman[392375]: 2026-01-20 15:37:23.714427386 +0000 UTC m=+0.042072308 container create 9004ad4206c2cf31dee8e0e044c7bac1ec44ae828b8385c0125820a8a5cb9248 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_mclean, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 20 15:37:23 compute-0 systemd[1]: Started libpod-conmon-9004ad4206c2cf31dee8e0e044c7bac1ec44ae828b8385c0125820a8a5cb9248.scope.
Jan 20 15:37:23 compute-0 ovn_controller[148666]: 2026-01-20T15:37:23Z|00108|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:bf:50:10 10.100.0.5
Jan 20 15:37:23 compute-0 ovn_controller[148666]: 2026-01-20T15:37:23Z|00109|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:bf:50:10 10.100.0.5
Jan 20 15:37:23 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:37:23 compute-0 podman[392375]: 2026-01-20 15:37:23.694178323 +0000 UTC m=+0.021823265 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:37:23 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:37:23 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:37:23 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:37:23.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:37:23 compute-0 podman[392375]: 2026-01-20 15:37:23.805716204 +0000 UTC m=+0.133361136 container init 9004ad4206c2cf31dee8e0e044c7bac1ec44ae828b8385c0125820a8a5cb9248 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_mclean, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 20 15:37:23 compute-0 podman[392375]: 2026-01-20 15:37:23.813920298 +0000 UTC m=+0.141565220 container start 9004ad4206c2cf31dee8e0e044c7bac1ec44ae828b8385c0125820a8a5cb9248 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_mclean, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 20 15:37:23 compute-0 podman[392375]: 2026-01-20 15:37:23.816816926 +0000 UTC m=+0.144461848 container attach 9004ad4206c2cf31dee8e0e044c7bac1ec44ae828b8385c0125820a8a5cb9248 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_mclean, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 15:37:23 compute-0 elastic_mclean[392391]: 167 167
Jan 20 15:37:23 compute-0 systemd[1]: libpod-9004ad4206c2cf31dee8e0e044c7bac1ec44ae828b8385c0125820a8a5cb9248.scope: Deactivated successfully.
Jan 20 15:37:23 compute-0 podman[392375]: 2026-01-20 15:37:23.822040479 +0000 UTC m=+0.149685401 container died 9004ad4206c2cf31dee8e0e044c7bac1ec44ae828b8385c0125820a8a5cb9248 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_mclean, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 15:37:23 compute-0 nova_compute[250018]: 2026-01-20 15:37:23.841 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:37:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-7dc83aa9ae18731f61f9ab5c2ef32523952715633022d136a993fdc6cd231c67-merged.mount: Deactivated successfully.
Jan 20 15:37:23 compute-0 podman[392375]: 2026-01-20 15:37:23.859796688 +0000 UTC m=+0.187441610 container remove 9004ad4206c2cf31dee8e0e044c7bac1ec44ae828b8385c0125820a8a5cb9248 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_mclean, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 20 15:37:23 compute-0 systemd[1]: libpod-conmon-9004ad4206c2cf31dee8e0e044c7bac1ec44ae828b8385c0125820a8a5cb9248.scope: Deactivated successfully.
Jan 20 15:37:24 compute-0 nova_compute[250018]: 2026-01-20 15:37:24.001 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:37:24 compute-0 podman[392414]: 2026-01-20 15:37:24.045594192 +0000 UTC m=+0.042170890 container create 6a9efdadcfe7cb3462fda2da5eae19d5d97f458e2327cf342635762f81d30021 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_chatelet, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 20 15:37:24 compute-0 podman[392414]: 2026-01-20 15:37:24.028517407 +0000 UTC m=+0.025094135 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:37:24 compute-0 systemd[1]: Started libpod-conmon-6a9efdadcfe7cb3462fda2da5eae19d5d97f458e2327cf342635762f81d30021.scope.
Jan 20 15:37:24 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:37:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/628e19b5d1aba575f2a19c70c85f2aa35c7c2c4794ea8e3d3cd7b6028e48e859/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 15:37:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/628e19b5d1aba575f2a19c70c85f2aa35c7c2c4794ea8e3d3cd7b6028e48e859/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 15:37:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/628e19b5d1aba575f2a19c70c85f2aa35c7c2c4794ea8e3d3cd7b6028e48e859/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 15:37:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/628e19b5d1aba575f2a19c70c85f2aa35c7c2c4794ea8e3d3cd7b6028e48e859/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 15:37:24 compute-0 podman[392414]: 2026-01-20 15:37:24.397497986 +0000 UTC m=+0.394074704 container init 6a9efdadcfe7cb3462fda2da5eae19d5d97f458e2327cf342635762f81d30021 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_chatelet, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 20 15:37:24 compute-0 podman[392414]: 2026-01-20 15:37:24.405874444 +0000 UTC m=+0.402451142 container start 6a9efdadcfe7cb3462fda2da5eae19d5d97f458e2327cf342635762f81d30021 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_chatelet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 15:37:24 compute-0 ceph-mon[74360]: pgmap v3456: 321 pgs: 321 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 25 KiB/s wr, 73 op/s
Jan 20 15:37:24 compute-0 podman[392414]: 2026-01-20 15:37:24.420160263 +0000 UTC m=+0.416737031 container attach 6a9efdadcfe7cb3462fda2da5eae19d5d97f458e2327cf342635762f81d30021 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_chatelet, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 15:37:25 compute-0 nova_compute[250018]: 2026-01-20 15:37:25.052 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:37:25 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:37:25 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:37:25 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:37:25.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:37:25 compute-0 youthful_chatelet[392433]: {
Jan 20 15:37:25 compute-0 youthful_chatelet[392433]:     "afc3740a-ccee-46f8-b343-22648cc89376": {
Jan 20 15:37:25 compute-0 youthful_chatelet[392433]:         "ceph_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 15:37:25 compute-0 youthful_chatelet[392433]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 20 15:37:25 compute-0 youthful_chatelet[392433]:         "osd_id": 0,
Jan 20 15:37:25 compute-0 youthful_chatelet[392433]:         "osd_uuid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 15:37:25 compute-0 youthful_chatelet[392433]:         "type": "bluestore"
Jan 20 15:37:25 compute-0 youthful_chatelet[392433]:     }
Jan 20 15:37:25 compute-0 youthful_chatelet[392433]: }
Jan 20 15:37:25 compute-0 systemd[1]: libpod-6a9efdadcfe7cb3462fda2da5eae19d5d97f458e2327cf342635762f81d30021.scope: Deactivated successfully.
Jan 20 15:37:25 compute-0 podman[392414]: 2026-01-20 15:37:25.290656082 +0000 UTC m=+1.287232830 container died 6a9efdadcfe7cb3462fda2da5eae19d5d97f458e2327cf342635762f81d30021 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_chatelet, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 20 15:37:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-628e19b5d1aba575f2a19c70c85f2aa35c7c2c4794ea8e3d3cd7b6028e48e859-merged.mount: Deactivated successfully.
Jan 20 15:37:25 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3457: 321 pgs: 321 active+clean; 264 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.5 MiB/s wr, 112 op/s
Jan 20 15:37:25 compute-0 podman[392414]: 2026-01-20 15:37:25.355825748 +0000 UTC m=+1.352402456 container remove 6a9efdadcfe7cb3462fda2da5eae19d5d97f458e2327cf342635762f81d30021 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_chatelet, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 15:37:25 compute-0 systemd[1]: libpod-conmon-6a9efdadcfe7cb3462fda2da5eae19d5d97f458e2327cf342635762f81d30021.scope: Deactivated successfully.
Jan 20 15:37:25 compute-0 sudo[392311]: pam_unix(sudo:session): session closed for user root
Jan 20 15:37:25 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 15:37:25 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:37:25 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 15:37:25 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:37:25 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 2b932df2-52bc-4f30-b706-01c8070c68a5 does not exist
Jan 20 15:37:25 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 08744cc4-a65a-4588-ab3d-a3274e0ff5cb does not exist
Jan 20 15:37:25 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 49b711f0-e325-4736-b2c5-aaae56f58fd8 does not exist
Jan 20 15:37:25 compute-0 sudo[392469]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:37:25 compute-0 sudo[392469]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:37:25 compute-0 sudo[392469]: pam_unix(sudo:session): session closed for user root
Jan 20 15:37:25 compute-0 sudo[392494]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 15:37:25 compute-0 sudo[392494]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:37:25 compute-0 sudo[392494]: pam_unix(sudo:session): session closed for user root
Jan 20 15:37:25 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:37:25 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:37:25 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:37:25.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:37:26 compute-0 ceph-mon[74360]: pgmap v3457: 321 pgs: 321 active+clean; 264 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.5 MiB/s wr, 112 op/s
Jan 20 15:37:26 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:37:26 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:37:27 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:37:27 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:37:27 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:37:27.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:37:27 compute-0 nova_compute[250018]: 2026-01-20 15:37:27.091 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:37:27 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3458: 321 pgs: 321 active+clean; 278 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 985 KiB/s rd, 2.1 MiB/s wr, 83 op/s
Jan 20 15:37:27 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:37:27 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:37:27 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:37:27.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:37:28 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:37:28 compute-0 ceph-mon[74360]: pgmap v3458: 321 pgs: 321 active+clean; 278 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 985 KiB/s rd, 2.1 MiB/s wr, 83 op/s
Jan 20 15:37:29 compute-0 nova_compute[250018]: 2026-01-20 15:37:29.004 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:37:29 compute-0 nova_compute[250018]: 2026-01-20 15:37:29.045 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:37:29 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:37:29 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:37:29 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:37:29.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:37:29 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3459: 321 pgs: 321 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 324 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Jan 20 15:37:29 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:37:29 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:37:29 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:37:29.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:37:30 compute-0 ceph-mon[74360]: pgmap v3459: 321 pgs: 321 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 324 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Jan 20 15:37:30 compute-0 sudo[392522]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:37:30 compute-0 sudo[392522]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:37:30 compute-0 sudo[392522]: pam_unix(sudo:session): session closed for user root
Jan 20 15:37:30 compute-0 sudo[392547]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:37:30 compute-0 sudo[392547]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:37:30 compute-0 sudo[392547]: pam_unix(sudo:session): session closed for user root
Jan 20 15:37:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:37:30.810 160071 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:37:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:37:30.811 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:37:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:37:30.811 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:37:31 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:37:31 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:37:31 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:37:31.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:37:31 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3460: 321 pgs: 321 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Jan 20 15:37:31 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:37:31 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:37:31 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:37:31.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:37:31 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:37:31.896 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=82, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=81) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:37:31 compute-0 nova_compute[250018]: 2026-01-20 15:37:31.897 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:37:31 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:37:31.897 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 15:37:31 compute-0 ceph-mgr[74653]: [devicehealth INFO root] Check health
Jan 20 15:37:32 compute-0 nova_compute[250018]: 2026-01-20 15:37:32.044 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:37:32 compute-0 nova_compute[250018]: 2026-01-20 15:37:32.093 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:37:32 compute-0 ceph-mon[74360]: pgmap v3460: 321 pgs: 321 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Jan 20 15:37:33 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:37:33 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:37:33 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:37:33.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:37:33 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3461: 321 pgs: 321 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Jan 20 15:37:33 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:37:33 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:37:33 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:37:33 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:37:33.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:37:34 compute-0 nova_compute[250018]: 2026-01-20 15:37:34.070 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:37:34 compute-0 ceph-mon[74360]: pgmap v3461: 321 pgs: 321 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Jan 20 15:37:35 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:37:35 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:37:35 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:37:35.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:37:35 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3462: 321 pgs: 321 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Jan 20 15:37:35 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:37:35 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:37:35 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:37:35.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:37:35 compute-0 ceph-mon[74360]: pgmap v3462: 321 pgs: 321 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Jan 20 15:37:36 compute-0 nova_compute[250018]: 2026-01-20 15:37:36.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:37:37 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:37:37 compute-0 nova_compute[250018]: 2026-01-20 15:37:37.095 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:37:37 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:37:37 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:37:37.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:37:37 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3463: 321 pgs: 321 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 106 KiB/s rd, 708 KiB/s wr, 25 op/s
Jan 20 15:37:37 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:37:37 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:37:37 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:37:37.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:37:37 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:37:37.899 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=367c1a2c-b16a-4828-ab5a-626bb50023b4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '82'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:37:38 compute-0 nova_compute[250018]: 2026-01-20 15:37:38.037 250022 DEBUG oslo_concurrency.lockutils [None req-946ccc12-fdc2-445e-b90d-95a001a31b98 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Acquiring lock "40704e43-8d1e-4a6f-addb-4e8524e3534d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:37:38 compute-0 nova_compute[250018]: 2026-01-20 15:37:38.037 250022 DEBUG oslo_concurrency.lockutils [None req-946ccc12-fdc2-445e-b90d-95a001a31b98 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "40704e43-8d1e-4a6f-addb-4e8524e3534d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:37:38 compute-0 nova_compute[250018]: 2026-01-20 15:37:38.037 250022 DEBUG oslo_concurrency.lockutils [None req-946ccc12-fdc2-445e-b90d-95a001a31b98 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Acquiring lock "40704e43-8d1e-4a6f-addb-4e8524e3534d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:37:38 compute-0 nova_compute[250018]: 2026-01-20 15:37:38.037 250022 DEBUG oslo_concurrency.lockutils [None req-946ccc12-fdc2-445e-b90d-95a001a31b98 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "40704e43-8d1e-4a6f-addb-4e8524e3534d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:37:38 compute-0 nova_compute[250018]: 2026-01-20 15:37:38.038 250022 DEBUG oslo_concurrency.lockutils [None req-946ccc12-fdc2-445e-b90d-95a001a31b98 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "40704e43-8d1e-4a6f-addb-4e8524e3534d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:37:38 compute-0 nova_compute[250018]: 2026-01-20 15:37:38.039 250022 INFO nova.compute.manager [None req-946ccc12-fdc2-445e-b90d-95a001a31b98 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 40704e43-8d1e-4a6f-addb-4e8524e3534d] Terminating instance
Jan 20 15:37:38 compute-0 nova_compute[250018]: 2026-01-20 15:37:38.039 250022 DEBUG nova.compute.manager [None req-946ccc12-fdc2-445e-b90d-95a001a31b98 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 40704e43-8d1e-4a6f-addb-4e8524e3534d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 15:37:38 compute-0 kernel: tapb07ffffb-46 (unregistering): left promiscuous mode
Jan 20 15:37:38 compute-0 NetworkManager[48960]: <info>  [1768923458.0968] device (tapb07ffffb-46): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 15:37:38 compute-0 ovn_controller[148666]: 2026-01-20T15:37:38Z|00781|binding|INFO|Releasing lport b07ffffb-4654-4fbb-bd0b-860c4d618de1 from this chassis (sb_readonly=0)
Jan 20 15:37:38 compute-0 nova_compute[250018]: 2026-01-20 15:37:38.105 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:37:38 compute-0 ovn_controller[148666]: 2026-01-20T15:37:38Z|00782|binding|INFO|Setting lport b07ffffb-4654-4fbb-bd0b-860c4d618de1 down in Southbound
Jan 20 15:37:38 compute-0 ovn_controller[148666]: 2026-01-20T15:37:38Z|00783|binding|INFO|Removing iface tapb07ffffb-46 ovn-installed in OVS
Jan 20 15:37:38 compute-0 nova_compute[250018]: 2026-01-20 15:37:38.109 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:37:38 compute-0 nova_compute[250018]: 2026-01-20 15:37:38.125 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:37:38 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:37:38.143 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bf:50:10 10.100.0.5'], port_security=['fa:16:3e:bf:50:10 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '40704e43-8d1e-4a6f-addb-4e8524e3534d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ce71b376-fc91-4f6b-9838-8ea300ca70de', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3168f57421fb49bfb94b85daedd1fe7d', 'neutron:revision_number': '4', 'neutron:security_group_ids': '95a77afb-6fed-4025-a06d-6753ed6fc83e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=10fabbbe-46a8-4773-85b5-859f8d94e243, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=b07ffffb-4654-4fbb-bd0b-860c4d618de1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:37:38 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:37:38.144 160071 INFO neutron.agent.ovn.metadata.agent [-] Port b07ffffb-4654-4fbb-bd0b-860c4d618de1 in datapath ce71b376-fc91-4f6b-9838-8ea300ca70de unbound from our chassis
Jan 20 15:37:38 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:37:38.144 160071 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ce71b376-fc91-4f6b-9838-8ea300ca70de, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 15:37:38 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:37:38.145 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[7bd34c1d-7912-49e4-a3ab-a0c5b0508744]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:37:38 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:37:38.146 160071 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ce71b376-fc91-4f6b-9838-8ea300ca70de namespace which is not needed anymore
Jan 20 15:37:38 compute-0 systemd[1]: machine-qemu\x2d92\x2dinstance\x2d000000d6.scope: Deactivated successfully.
Jan 20 15:37:38 compute-0 systemd[1]: machine-qemu\x2d92\x2dinstance\x2d000000d6.scope: Consumed 14.214s CPU time.
Jan 20 15:37:38 compute-0 systemd-machined[216401]: Machine qemu-92-instance-000000d6 terminated.
Jan 20 15:37:38 compute-0 nova_compute[250018]: 2026-01-20 15:37:38.259 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:37:38 compute-0 nova_compute[250018]: 2026-01-20 15:37:38.265 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:37:38 compute-0 nova_compute[250018]: 2026-01-20 15:37:38.274 250022 INFO nova.virt.libvirt.driver [-] [instance: 40704e43-8d1e-4a6f-addb-4e8524e3534d] Instance destroyed successfully.
Jan 20 15:37:38 compute-0 neutron-haproxy-ovnmeta-ce71b376-fc91-4f6b-9838-8ea300ca70de[391593]: [NOTICE]   (391597) : haproxy version is 2.8.14-c23fe91
Jan 20 15:37:38 compute-0 neutron-haproxy-ovnmeta-ce71b376-fc91-4f6b-9838-8ea300ca70de[391593]: [NOTICE]   (391597) : path to executable is /usr/sbin/haproxy
Jan 20 15:37:38 compute-0 neutron-haproxy-ovnmeta-ce71b376-fc91-4f6b-9838-8ea300ca70de[391593]: [WARNING]  (391597) : Exiting Master process...
Jan 20 15:37:38 compute-0 neutron-haproxy-ovnmeta-ce71b376-fc91-4f6b-9838-8ea300ca70de[391593]: [WARNING]  (391597) : Exiting Master process...
Jan 20 15:37:38 compute-0 nova_compute[250018]: 2026-01-20 15:37:38.276 250022 DEBUG nova.objects.instance [None req-946ccc12-fdc2-445e-b90d-95a001a31b98 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lazy-loading 'resources' on Instance uuid 40704e43-8d1e-4a6f-addb-4e8524e3534d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:37:38 compute-0 neutron-haproxy-ovnmeta-ce71b376-fc91-4f6b-9838-8ea300ca70de[391593]: [ALERT]    (391597) : Current worker (391599) exited with code 143 (Terminated)
Jan 20 15:37:38 compute-0 neutron-haproxy-ovnmeta-ce71b376-fc91-4f6b-9838-8ea300ca70de[391593]: [WARNING]  (391597) : All workers exited. Exiting... (0)
Jan 20 15:37:38 compute-0 systemd[1]: libpod-0bb68ccb2c8b80b7179d5bd6a6b7b62aecf75741d5c4fa7801022a849fe0eea0.scope: Deactivated successfully.
Jan 20 15:37:38 compute-0 podman[392601]: 2026-01-20 15:37:38.287213305 +0000 UTC m=+0.051408963 container died 0bb68ccb2c8b80b7179d5bd6a6b7b62aecf75741d5c4fa7801022a849fe0eea0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce71b376-fc91-4f6b-9838-8ea300ca70de, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 20 15:37:38 compute-0 nova_compute[250018]: 2026-01-20 15:37:38.290 250022 DEBUG nova.virt.libvirt.vif [None req-946ccc12-fdc2-445e-b90d-95a001a31b98 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T15:37:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-237209397',display_name='tempest-TestNetworkBasicOps-server-237209397',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-237209397',id=214,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPkP2vzk68ordq1Cd3HJUYxImpUT5hMDsFQU6BGUUao42NjuvP6GQgxpjcgMERTlsoqQOXlucNpOxkob6br28DNQAfRkkWqobfp1HtB+AQiSr4tjOud63vEtKxHG3y82WA==',key_name='tempest-TestNetworkBasicOps-882946882',keypairs=<?>,launch_index=0,launched_at=2026-01-20T15:37:11Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3168f57421fb49bfb94b85daedd1fe7d',ramdisk_id='',reservation_id='r-ursy3p3u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-807695970',owner_user_name='tempest-TestNetworkBasicOps-807695970-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T15:37:11Z,user_data=None,user_id='5338aa65dc0e4326a66ce79053787f14',uuid=40704e43-8d1e-4a6f-addb-4e8524e3534d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b07ffffb-4654-4fbb-bd0b-860c4d618de1", "address": "fa:16:3e:bf:50:10", "network": {"id": "ce71b376-fc91-4f6b-9838-8ea300ca70de", "bridge": "br-int", "label": "tempest-network-smoke--315983280", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 
4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb07ffffb-46", "ovs_interfaceid": "b07ffffb-4654-4fbb-bd0b-860c4d618de1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 15:37:38 compute-0 nova_compute[250018]: 2026-01-20 15:37:38.291 250022 DEBUG nova.network.os_vif_util [None req-946ccc12-fdc2-445e-b90d-95a001a31b98 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Converting VIF {"id": "b07ffffb-4654-4fbb-bd0b-860c4d618de1", "address": "fa:16:3e:bf:50:10", "network": {"id": "ce71b376-fc91-4f6b-9838-8ea300ca70de", "bridge": "br-int", "label": "tempest-network-smoke--315983280", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb07ffffb-46", "ovs_interfaceid": "b07ffffb-4654-4fbb-bd0b-860c4d618de1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:37:38 compute-0 nova_compute[250018]: 2026-01-20 15:37:38.291 250022 DEBUG nova.network.os_vif_util [None req-946ccc12-fdc2-445e-b90d-95a001a31b98 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:bf:50:10,bridge_name='br-int',has_traffic_filtering=True,id=b07ffffb-4654-4fbb-bd0b-860c4d618de1,network=Network(ce71b376-fc91-4f6b-9838-8ea300ca70de),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb07ffffb-46') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:37:38 compute-0 nova_compute[250018]: 2026-01-20 15:37:38.292 250022 DEBUG os_vif [None req-946ccc12-fdc2-445e-b90d-95a001a31b98 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:bf:50:10,bridge_name='br-int',has_traffic_filtering=True,id=b07ffffb-4654-4fbb-bd0b-860c4d618de1,network=Network(ce71b376-fc91-4f6b-9838-8ea300ca70de),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb07ffffb-46') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 15:37:38 compute-0 nova_compute[250018]: 2026-01-20 15:37:38.293 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:37:38 compute-0 nova_compute[250018]: 2026-01-20 15:37:38.294 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb07ffffb-46, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:37:38 compute-0 nova_compute[250018]: 2026-01-20 15:37:38.295 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:37:38 compute-0 nova_compute[250018]: 2026-01-20 15:37:38.296 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:37:38 compute-0 nova_compute[250018]: 2026-01-20 15:37:38.299 250022 INFO os_vif [None req-946ccc12-fdc2-445e-b90d-95a001a31b98 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:bf:50:10,bridge_name='br-int',has_traffic_filtering=True,id=b07ffffb-4654-4fbb-bd0b-860c4d618de1,network=Network(ce71b376-fc91-4f6b-9838-8ea300ca70de),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb07ffffb-46')
Jan 20 15:37:38 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0bb68ccb2c8b80b7179d5bd6a6b7b62aecf75741d5c4fa7801022a849fe0eea0-userdata-shm.mount: Deactivated successfully.
Jan 20 15:37:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-e1e6f2305f5447705a10372f5040b0269b09e467fb1ef6759982c3c23dab35ff-merged.mount: Deactivated successfully.
Jan 20 15:37:38 compute-0 podman[392601]: 2026-01-20 15:37:38.322452185 +0000 UTC m=+0.086647843 container cleanup 0bb68ccb2c8b80b7179d5bd6a6b7b62aecf75741d5c4fa7801022a849fe0eea0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce71b376-fc91-4f6b-9838-8ea300ca70de, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 15:37:38 compute-0 systemd[1]: libpod-conmon-0bb68ccb2c8b80b7179d5bd6a6b7b62aecf75741d5c4fa7801022a849fe0eea0.scope: Deactivated successfully.
Jan 20 15:37:38 compute-0 podman[392653]: 2026-01-20 15:37:38.381486244 +0000 UTC m=+0.039346223 container remove 0bb68ccb2c8b80b7179d5bd6a6b7b62aecf75741d5c4fa7801022a849fe0eea0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce71b376-fc91-4f6b-9838-8ea300ca70de, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_managed=true)
Jan 20 15:37:38 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:37:38.386 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[a4b5e678-240f-4cd0-97e1-15b2650e4d38]: (4, ('Tue Jan 20 03:37:38 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-ce71b376-fc91-4f6b-9838-8ea300ca70de (0bb68ccb2c8b80b7179d5bd6a6b7b62aecf75741d5c4fa7801022a849fe0eea0)\n0bb68ccb2c8b80b7179d5bd6a6b7b62aecf75741d5c4fa7801022a849fe0eea0\nTue Jan 20 03:37:38 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-ce71b376-fc91-4f6b-9838-8ea300ca70de (0bb68ccb2c8b80b7179d5bd6a6b7b62aecf75741d5c4fa7801022a849fe0eea0)\n0bb68ccb2c8b80b7179d5bd6a6b7b62aecf75741d5c4fa7801022a849fe0eea0\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:37:38 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:37:38.388 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[7609ac62-e617-4b20-89ad-604f60114ab2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:37:38 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:37:38.390 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapce71b376-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:37:38 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:37:38 compute-0 nova_compute[250018]: 2026-01-20 15:37:38.392 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:37:38 compute-0 kernel: tapce71b376-f0: left promiscuous mode
Jan 20 15:37:38 compute-0 ceph-mon[74360]: pgmap v3463: 321 pgs: 321 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 106 KiB/s rd, 708 KiB/s wr, 25 op/s
Jan 20 15:37:38 compute-0 nova_compute[250018]: 2026-01-20 15:37:38.406 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:37:38 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:37:38.408 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[68d0bfbc-16ed-4619-a742-2acd5e38ec01]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:37:38 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:37:38.423 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[58e69a22-94f3-4b04-a9bf-84d9dc3707fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:37:38 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:37:38.424 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[1888ba60-e130-4904-b623-33eec38995f2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:37:38 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:37:38.440 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[f69b4a94-b3d6-4628-a749-d45d73389379]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 937125, 'reachable_time': 16877, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 392670, 'error': None, 'target': 'ovnmeta-ce71b376-fc91-4f6b-9838-8ea300ca70de', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:37:38 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:37:38.443 160517 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ce71b376-fc91-4f6b-9838-8ea300ca70de deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 15:37:38 compute-0 systemd[1]: run-netns-ovnmeta\x2dce71b376\x2dfc91\x2d4f6b\x2d9838\x2d8ea300ca70de.mount: Deactivated successfully.
Jan 20 15:37:38 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:37:38.443 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[06f99ea6-8261-42ec-8aec-adede166263c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:37:38 compute-0 nova_compute[250018]: 2026-01-20 15:37:38.558 250022 DEBUG nova.compute.manager [req-d80608c1-7f6d-4faa-a45c-1c6912f42c28 req-6c62bbfa-774d-4056-8fcc-b43700c0cdd2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 40704e43-8d1e-4a6f-addb-4e8524e3534d] Received event network-changed-b07ffffb-4654-4fbb-bd0b-860c4d618de1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:37:38 compute-0 nova_compute[250018]: 2026-01-20 15:37:38.558 250022 DEBUG nova.compute.manager [req-d80608c1-7f6d-4faa-a45c-1c6912f42c28 req-6c62bbfa-774d-4056-8fcc-b43700c0cdd2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 40704e43-8d1e-4a6f-addb-4e8524e3534d] Refreshing instance network info cache due to event network-changed-b07ffffb-4654-4fbb-bd0b-860c4d618de1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 15:37:38 compute-0 nova_compute[250018]: 2026-01-20 15:37:38.558 250022 DEBUG oslo_concurrency.lockutils [req-d80608c1-7f6d-4faa-a45c-1c6912f42c28 req-6c62bbfa-774d-4056-8fcc-b43700c0cdd2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-40704e43-8d1e-4a6f-addb-4e8524e3534d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:37:38 compute-0 nova_compute[250018]: 2026-01-20 15:37:38.559 250022 DEBUG oslo_concurrency.lockutils [req-d80608c1-7f6d-4faa-a45c-1c6912f42c28 req-6c62bbfa-774d-4056-8fcc-b43700c0cdd2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-40704e43-8d1e-4a6f-addb-4e8524e3534d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:37:38 compute-0 nova_compute[250018]: 2026-01-20 15:37:38.559 250022 DEBUG nova.network.neutron [req-d80608c1-7f6d-4faa-a45c-1c6912f42c28 req-6c62bbfa-774d-4056-8fcc-b43700c0cdd2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 40704e43-8d1e-4a6f-addb-4e8524e3534d] Refreshing network info cache for port b07ffffb-4654-4fbb-bd0b-860c4d618de1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 15:37:38 compute-0 nova_compute[250018]: 2026-01-20 15:37:38.686 250022 INFO nova.virt.libvirt.driver [None req-946ccc12-fdc2-445e-b90d-95a001a31b98 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 40704e43-8d1e-4a6f-addb-4e8524e3534d] Deleting instance files /var/lib/nova/instances/40704e43-8d1e-4a6f-addb-4e8524e3534d_del
Jan 20 15:37:38 compute-0 nova_compute[250018]: 2026-01-20 15:37:38.687 250022 INFO nova.virt.libvirt.driver [None req-946ccc12-fdc2-445e-b90d-95a001a31b98 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 40704e43-8d1e-4a6f-addb-4e8524e3534d] Deletion of /var/lib/nova/instances/40704e43-8d1e-4a6f-addb-4e8524e3534d_del complete
Jan 20 15:37:38 compute-0 nova_compute[250018]: 2026-01-20 15:37:38.774 250022 INFO nova.compute.manager [None req-946ccc12-fdc2-445e-b90d-95a001a31b98 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: 40704e43-8d1e-4a6f-addb-4e8524e3534d] Took 0.73 seconds to destroy the instance on the hypervisor.
Jan 20 15:37:38 compute-0 nova_compute[250018]: 2026-01-20 15:37:38.774 250022 DEBUG oslo.service.loopingcall [None req-946ccc12-fdc2-445e-b90d-95a001a31b98 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 15:37:38 compute-0 nova_compute[250018]: 2026-01-20 15:37:38.774 250022 DEBUG nova.compute.manager [-] [instance: 40704e43-8d1e-4a6f-addb-4e8524e3534d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 15:37:38 compute-0 nova_compute[250018]: 2026-01-20 15:37:38.775 250022 DEBUG nova.network.neutron [-] [instance: 40704e43-8d1e-4a6f-addb-4e8524e3534d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 15:37:39 compute-0 nova_compute[250018]: 2026-01-20 15:37:39.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:37:39 compute-0 nova_compute[250018]: 2026-01-20 15:37:39.050 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 15:37:39 compute-0 nova_compute[250018]: 2026-01-20 15:37:39.051 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 15:37:39 compute-0 nova_compute[250018]: 2026-01-20 15:37:39.068 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] [instance: 40704e43-8d1e-4a6f-addb-4e8524e3534d] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Jan 20 15:37:39 compute-0 nova_compute[250018]: 2026-01-20 15:37:39.068 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 20 15:37:39 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:37:39 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:37:39 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:37:39.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:37:39 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3464: 321 pgs: 321 active+clean; 266 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 7.7 KiB/s rd, 58 KiB/s wr, 2 op/s
Jan 20 15:37:39 compute-0 podman[392673]: 2026-01-20 15:37:39.4958482 +0000 UTC m=+0.067350116 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:37:39 compute-0 podman[392672]: 2026-01-20 15:37:39.527322119 +0000 UTC m=+0.114318658 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, 
org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:37:39 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:37:39 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:37:39 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:37:39.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:37:40 compute-0 ceph-mon[74360]: pgmap v3464: 321 pgs: 321 active+clean; 266 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 7.7 KiB/s rd, 58 KiB/s wr, 2 op/s
Jan 20 15:37:40 compute-0 nova_compute[250018]: 2026-01-20 15:37:40.550 250022 DEBUG nova.network.neutron [req-d80608c1-7f6d-4faa-a45c-1c6912f42c28 req-6c62bbfa-774d-4056-8fcc-b43700c0cdd2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 40704e43-8d1e-4a6f-addb-4e8524e3534d] Updated VIF entry in instance network info cache for port b07ffffb-4654-4fbb-bd0b-860c4d618de1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 15:37:40 compute-0 nova_compute[250018]: 2026-01-20 15:37:40.550 250022 DEBUG nova.network.neutron [req-d80608c1-7f6d-4faa-a45c-1c6912f42c28 req-6c62bbfa-774d-4056-8fcc-b43700c0cdd2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 40704e43-8d1e-4a6f-addb-4e8524e3534d] Updating instance_info_cache with network_info: [{"id": "b07ffffb-4654-4fbb-bd0b-860c4d618de1", "address": "fa:16:3e:bf:50:10", "network": {"id": "ce71b376-fc91-4f6b-9838-8ea300ca70de", "bridge": "br-int", "label": "tempest-network-smoke--315983280", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb07ffffb-46", "ovs_interfaceid": "b07ffffb-4654-4fbb-bd0b-860c4d618de1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:37:40 compute-0 nova_compute[250018]: 2026-01-20 15:37:40.571 250022 DEBUG nova.network.neutron [-] [instance: 40704e43-8d1e-4a6f-addb-4e8524e3534d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:37:40 compute-0 nova_compute[250018]: 2026-01-20 15:37:40.587 250022 DEBUG oslo_concurrency.lockutils [req-d80608c1-7f6d-4faa-a45c-1c6912f42c28 req-6c62bbfa-774d-4056-8fcc-b43700c0cdd2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-40704e43-8d1e-4a6f-addb-4e8524e3534d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:37:40 compute-0 nova_compute[250018]: 2026-01-20 15:37:40.589 250022 DEBUG nova.compute.manager [req-d80608c1-7f6d-4faa-a45c-1c6912f42c28 req-6c62bbfa-774d-4056-8fcc-b43700c0cdd2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 40704e43-8d1e-4a6f-addb-4e8524e3534d] Received event network-vif-unplugged-b07ffffb-4654-4fbb-bd0b-860c4d618de1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:37:40 compute-0 nova_compute[250018]: 2026-01-20 15:37:40.589 250022 DEBUG oslo_concurrency.lockutils [req-d80608c1-7f6d-4faa-a45c-1c6912f42c28 req-6c62bbfa-774d-4056-8fcc-b43700c0cdd2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "40704e43-8d1e-4a6f-addb-4e8524e3534d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:37:40 compute-0 nova_compute[250018]: 2026-01-20 15:37:40.590 250022 DEBUG oslo_concurrency.lockutils [req-d80608c1-7f6d-4faa-a45c-1c6912f42c28 req-6c62bbfa-774d-4056-8fcc-b43700c0cdd2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "40704e43-8d1e-4a6f-addb-4e8524e3534d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:37:40 compute-0 nova_compute[250018]: 2026-01-20 15:37:40.590 250022 DEBUG oslo_concurrency.lockutils [req-d80608c1-7f6d-4faa-a45c-1c6912f42c28 req-6c62bbfa-774d-4056-8fcc-b43700c0cdd2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "40704e43-8d1e-4a6f-addb-4e8524e3534d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:37:40 compute-0 nova_compute[250018]: 2026-01-20 15:37:40.590 250022 DEBUG nova.compute.manager [req-d80608c1-7f6d-4faa-a45c-1c6912f42c28 req-6c62bbfa-774d-4056-8fcc-b43700c0cdd2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 40704e43-8d1e-4a6f-addb-4e8524e3534d] No waiting events found dispatching network-vif-unplugged-b07ffffb-4654-4fbb-bd0b-860c4d618de1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:37:40 compute-0 nova_compute[250018]: 2026-01-20 15:37:40.591 250022 DEBUG nova.compute.manager [req-d80608c1-7f6d-4faa-a45c-1c6912f42c28 req-6c62bbfa-774d-4056-8fcc-b43700c0cdd2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 40704e43-8d1e-4a6f-addb-4e8524e3534d] Received event network-vif-unplugged-b07ffffb-4654-4fbb-bd0b-860c4d618de1 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 20 15:37:40 compute-0 nova_compute[250018]: 2026-01-20 15:37:40.595 250022 INFO nova.compute.manager [-] [instance: 40704e43-8d1e-4a6f-addb-4e8524e3534d] Took 1.82 seconds to deallocate network for instance.
Jan 20 15:37:40 compute-0 nova_compute[250018]: 2026-01-20 15:37:40.644 250022 DEBUG nova.compute.manager [req-37e280cd-3020-4a36-9415-b012157301d0 req-1d826138-7762-4c2b-963b-f5ce8e60cb0a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 40704e43-8d1e-4a6f-addb-4e8524e3534d] Received event network-vif-plugged-b07ffffb-4654-4fbb-bd0b-860c4d618de1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:37:40 compute-0 nova_compute[250018]: 2026-01-20 15:37:40.645 250022 DEBUG oslo_concurrency.lockutils [req-37e280cd-3020-4a36-9415-b012157301d0 req-1d826138-7762-4c2b-963b-f5ce8e60cb0a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "40704e43-8d1e-4a6f-addb-4e8524e3534d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:37:40 compute-0 nova_compute[250018]: 2026-01-20 15:37:40.645 250022 DEBUG oslo_concurrency.lockutils [req-37e280cd-3020-4a36-9415-b012157301d0 req-1d826138-7762-4c2b-963b-f5ce8e60cb0a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "40704e43-8d1e-4a6f-addb-4e8524e3534d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:37:40 compute-0 nova_compute[250018]: 2026-01-20 15:37:40.788 250022 DEBUG oslo_concurrency.lockutils [req-37e280cd-3020-4a36-9415-b012157301d0 req-1d826138-7762-4c2b-963b-f5ce8e60cb0a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "40704e43-8d1e-4a6f-addb-4e8524e3534d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.143s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:37:40 compute-0 nova_compute[250018]: 2026-01-20 15:37:40.788 250022 DEBUG nova.compute.manager [req-37e280cd-3020-4a36-9415-b012157301d0 req-1d826138-7762-4c2b-963b-f5ce8e60cb0a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 40704e43-8d1e-4a6f-addb-4e8524e3534d] No waiting events found dispatching network-vif-plugged-b07ffffb-4654-4fbb-bd0b-860c4d618de1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:37:40 compute-0 nova_compute[250018]: 2026-01-20 15:37:40.788 250022 WARNING nova.compute.manager [req-37e280cd-3020-4a36-9415-b012157301d0 req-1d826138-7762-4c2b-963b-f5ce8e60cb0a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 40704e43-8d1e-4a6f-addb-4e8524e3534d] Received unexpected event network-vif-plugged-b07ffffb-4654-4fbb-bd0b-860c4d618de1 for instance with vm_state active and task_state deleting.
Jan 20 15:37:40 compute-0 nova_compute[250018]: 2026-01-20 15:37:40.789 250022 DEBUG nova.compute.manager [req-37e280cd-3020-4a36-9415-b012157301d0 req-1d826138-7762-4c2b-963b-f5ce8e60cb0a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 40704e43-8d1e-4a6f-addb-4e8524e3534d] Received event network-vif-deleted-b07ffffb-4654-4fbb-bd0b-860c4d618de1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:37:40 compute-0 nova_compute[250018]: 2026-01-20 15:37:40.790 250022 DEBUG oslo_concurrency.lockutils [None req-946ccc12-fdc2-445e-b90d-95a001a31b98 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:37:40 compute-0 nova_compute[250018]: 2026-01-20 15:37:40.790 250022 DEBUG oslo_concurrency.lockutils [None req-946ccc12-fdc2-445e-b90d-95a001a31b98 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:37:40 compute-0 nova_compute[250018]: 2026-01-20 15:37:40.859 250022 DEBUG oslo_concurrency.processutils [None req-946ccc12-fdc2-445e-b90d-95a001a31b98 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:37:41 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:37:41 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:37:41 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:37:41.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:37:41 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:37:41 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1559593881' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:37:41 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3465: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 26 KiB/s rd, 20 KiB/s wr, 29 op/s
Jan 20 15:37:41 compute-0 nova_compute[250018]: 2026-01-20 15:37:41.345 250022 DEBUG oslo_concurrency.processutils [None req-946ccc12-fdc2-445e-b90d-95a001a31b98 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:37:41 compute-0 nova_compute[250018]: 2026-01-20 15:37:41.351 250022 DEBUG nova.compute.provider_tree [None req-946ccc12-fdc2-445e-b90d-95a001a31b98 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:37:41 compute-0 nova_compute[250018]: 2026-01-20 15:37:41.378 250022 DEBUG nova.scheduler.client.report [None req-946ccc12-fdc2-445e-b90d-95a001a31b98 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:37:41 compute-0 nova_compute[250018]: 2026-01-20 15:37:41.397 250022 DEBUG oslo_concurrency.lockutils [None req-946ccc12-fdc2-445e-b90d-95a001a31b98 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.606s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:37:41 compute-0 nova_compute[250018]: 2026-01-20 15:37:41.423 250022 INFO nova.scheduler.client.report [None req-946ccc12-fdc2-445e-b90d-95a001a31b98 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Deleted allocations for instance 40704e43-8d1e-4a6f-addb-4e8524e3534d
Jan 20 15:37:41 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/1559593881' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:37:41 compute-0 nova_compute[250018]: 2026-01-20 15:37:41.484 250022 DEBUG oslo_concurrency.lockutils [None req-946ccc12-fdc2-445e-b90d-95a001a31b98 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "40704e43-8d1e-4a6f-addb-4e8524e3534d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.447s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:37:41 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:37:41 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:37:41 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:37:41.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:37:42 compute-0 nova_compute[250018]: 2026-01-20 15:37:42.096 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:37:42 compute-0 ceph-mon[74360]: pgmap v3465: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 26 KiB/s rd, 20 KiB/s wr, 29 op/s
Jan 20 15:37:43 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:37:43 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:37:43 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:37:43.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:37:43 compute-0 nova_compute[250018]: 2026-01-20 15:37:43.296 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:37:43 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3466: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 25 KiB/s rd, 20 KiB/s wr, 29 op/s
Jan 20 15:37:43 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:37:43 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:37:43 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:37:43 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:37:43.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:37:44 compute-0 ceph-mon[74360]: pgmap v3466: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 25 KiB/s rd, 20 KiB/s wr, 29 op/s
Jan 20 15:37:44 compute-0 sshd-session[392741]: Invalid user ubuntu from 134.122.57.138 port 44166
Jan 20 15:37:45 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:37:45 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:37:45 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:37:45.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:37:45 compute-0 sshd-session[392741]: Connection closed by invalid user ubuntu 134.122.57.138 port 44166 [preauth]
Jan 20 15:37:45 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3467: 321 pgs: 321 active+clean; 151 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 38 KiB/s rd, 20 KiB/s wr, 48 op/s
Jan 20 15:37:45 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:37:45 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:37:45 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:37:45.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:37:46 compute-0 ceph-mon[74360]: pgmap v3467: 321 pgs: 321 active+clean; 151 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 38 KiB/s rd, 20 KiB/s wr, 48 op/s
Jan 20 15:37:46 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/496827110' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:37:47 compute-0 nova_compute[250018]: 2026-01-20 15:37:47.098 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:37:47 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:37:47 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:37:47 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:37:47.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:37:47 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3468: 321 pgs: 321 active+clean; 135 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 40 KiB/s rd, 10 KiB/s wr, 51 op/s
Jan 20 15:37:47 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:37:47 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:37:47 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:37:47.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:37:48 compute-0 nova_compute[250018]: 2026-01-20 15:37:48.298 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:37:48 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:37:48 compute-0 ceph-mon[74360]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #174. Immutable memtables: 0.
Jan 20 15:37:48 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:37:48.416325) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 15:37:48 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:856] [default] [JOB 107] Flushing memtable with next log file: 174
Jan 20 15:37:48 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768923468416464, "job": 107, "event": "flush_started", "num_memtables": 1, "num_entries": 1978, "num_deletes": 252, "total_data_size": 3612945, "memory_usage": 3672664, "flush_reason": "Manual Compaction"}
Jan 20 15:37:48 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:885] [default] [JOB 107] Level-0 flush table #175: started
Jan 20 15:37:48 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768923468433593, "cf_name": "default", "job": 107, "event": "table_file_creation", "file_number": 175, "file_size": 2036286, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 75716, "largest_seqno": 77693, "table_properties": {"data_size": 2029966, "index_size": 3201, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2053, "raw_key_size": 16729, "raw_average_key_size": 20, "raw_value_size": 2015882, "raw_average_value_size": 2513, "num_data_blocks": 145, "num_entries": 802, "num_filter_entries": 802, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768923260, "oldest_key_time": 1768923260, "file_creation_time": 1768923468, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1688fb6f-f397-4aac-a0e8-874ff91d97a7", "db_session_id": "5V3N2TVXYZBCXP55EZHK", "orig_file_number": 175, "seqno_to_time_mapping": "N/A"}}
Jan 20 15:37:48 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 107] Flush lasted 17325 microseconds, and 9253 cpu microseconds.
Jan 20 15:37:48 compute-0 ceph-mon[74360]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 15:37:48 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:37:48.433673) [db/flush_job.cc:967] [default] [JOB 107] Level-0 flush table #175: 2036286 bytes OK
Jan 20 15:37:48 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:37:48.433696) [db/memtable_list.cc:519] [default] Level-0 commit table #175 started
Jan 20 15:37:48 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:37:48.435223) [db/memtable_list.cc:722] [default] Level-0 commit table #175: memtable #1 done
Jan 20 15:37:48 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:37:48.435244) EVENT_LOG_v1 {"time_micros": 1768923468435238, "job": 107, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 15:37:48 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:37:48.435265) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 15:37:48 compute-0 ceph-mon[74360]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 107] Try to delete WAL files size 3604943, prev total WAL file size 3604943, number of live WAL files 2.
Jan 20 15:37:48 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000171.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:37:48 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:37:48.436833) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032373537' seq:72057594037927935, type:22 .. '6D6772737461740033303130' seq:0, type:0; will stop at (end)
Jan 20 15:37:48 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 108] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 15:37:48 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 107 Base level 0, inputs: [175(1988KB)], [173(12MB)]
Jan 20 15:37:48 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768923468436918, "job": 108, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [175], "files_L6": [173], "score": -1, "input_data_size": 14993198, "oldest_snapshot_seqno": -1}
Jan 20 15:37:48 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 108] Generated table #176: 10612 keys, 12636561 bytes, temperature: kUnknown
Jan 20 15:37:48 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768923468516281, "cf_name": "default", "job": 108, "event": "table_file_creation", "file_number": 176, "file_size": 12636561, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12570180, "index_size": 38748, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 26565, "raw_key_size": 278441, "raw_average_key_size": 26, "raw_value_size": 12386644, "raw_average_value_size": 1167, "num_data_blocks": 1478, "num_entries": 10612, "num_filter_entries": 10612, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768917305, "oldest_key_time": 0, "file_creation_time": 1768923468, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1688fb6f-f397-4aac-a0e8-874ff91d97a7", "db_session_id": "5V3N2TVXYZBCXP55EZHK", "orig_file_number": 176, "seqno_to_time_mapping": "N/A"}}
Jan 20 15:37:48 compute-0 ceph-mon[74360]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 15:37:48 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:37:48.516659) [db/compaction/compaction_job.cc:1663] [default] [JOB 108] Compacted 1@0 + 1@6 files to L6 => 12636561 bytes
Jan 20 15:37:48 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:37:48.518405) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 188.6 rd, 158.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.9, 12.4 +0.0 blob) out(12.1 +0.0 blob), read-write-amplify(13.6) write-amplify(6.2) OK, records in: 11030, records dropped: 418 output_compression: NoCompression
Jan 20 15:37:48 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:37:48.518433) EVENT_LOG_v1 {"time_micros": 1768923468518421, "job": 108, "event": "compaction_finished", "compaction_time_micros": 79511, "compaction_time_cpu_micros": 40876, "output_level": 6, "num_output_files": 1, "total_output_size": 12636561, "num_input_records": 11030, "num_output_records": 10612, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 15:37:48 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000175.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:37:48 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768923468519486, "job": 108, "event": "table_file_deletion", "file_number": 175}
Jan 20 15:37:48 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000173.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:37:48 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768923468524430, "job": 108, "event": "table_file_deletion", "file_number": 173}
Jan 20 15:37:48 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:37:48.436691) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:37:48 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:37:48.524675) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:37:48 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:37:48.524684) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:37:48 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:37:48.524687) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:37:48 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:37:48.524690) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:37:48 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:37:48.524693) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:37:48 compute-0 ceph-mon[74360]: pgmap v3468: 321 pgs: 321 active+clean; 135 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 40 KiB/s rd, 10 KiB/s wr, 51 op/s
Jan 20 15:37:49 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:37:49 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:37:49 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:37:49.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:37:49 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3469: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 38 KiB/s rd, 9.0 KiB/s wr, 56 op/s
Jan 20 15:37:49 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:37:49 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:37:49 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:37:49.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:37:50 compute-0 ceph-mon[74360]: pgmap v3469: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 38 KiB/s rd, 9.0 KiB/s wr, 56 op/s
Jan 20 15:37:50 compute-0 sudo[392747]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:37:50 compute-0 sudo[392747]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:37:50 compute-0 sudo[392747]: pam_unix(sudo:session): session closed for user root
Jan 20 15:37:50 compute-0 sudo[392772]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:37:50 compute-0 sudo[392772]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:37:50 compute-0 sudo[392772]: pam_unix(sudo:session): session closed for user root
Jan 20 15:37:51 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:37:51 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:37:51 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:37:51.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:37:51 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3470: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 38 KiB/s rd, 6.0 KiB/s wr, 55 op/s
Jan 20 15:37:51 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:37:51 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:37:51 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:37:51.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:37:52 compute-0 nova_compute[250018]: 2026-01-20 15:37:52.099 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:37:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:37:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:37:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:37:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:37:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:37:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:37:52 compute-0 ceph-mon[74360]: pgmap v3470: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 38 KiB/s rd, 6.0 KiB/s wr, 55 op/s
Jan 20 15:37:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Optimize plan auto_2026-01-20_15:37:52
Jan 20 15:37:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 15:37:52 compute-0 ceph-mgr[74653]: [balancer INFO root] do_upmap
Jan 20 15:37:52 compute-0 ceph-mgr[74653]: [balancer INFO root] pools ['vms', 'volumes', 'cephfs.cephfs.meta', 'backups', '.rgw.root', 'default.rgw.meta', '.mgr', 'images', 'cephfs.cephfs.data', 'default.rgw.log', 'default.rgw.control']
Jan 20 15:37:52 compute-0 ceph-mgr[74653]: [balancer INFO root] prepared 0/10 changes
Jan 20 15:37:53 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:37:53 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:37:53 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:37:53.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:37:53 compute-0 nova_compute[250018]: 2026-01-20 15:37:53.273 250022 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768923458.2720258, 40704e43-8d1e-4a6f-addb-4e8524e3534d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:37:53 compute-0 nova_compute[250018]: 2026-01-20 15:37:53.273 250022 INFO nova.compute.manager [-] [instance: 40704e43-8d1e-4a6f-addb-4e8524e3534d] VM Stopped (Lifecycle Event)
Jan 20 15:37:53 compute-0 nova_compute[250018]: 2026-01-20 15:37:53.300 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:37:53 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3471: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 20 15:37:53 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:37:53 compute-0 ceph-mon[74360]: pgmap v3471: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 20 15:37:53 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:37:53 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:37:53 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:37:53.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:37:55 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:37:55 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:37:55 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:37:55.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:37:55 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3472: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 20 15:37:55 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:37:55 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:37:55 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:37:55.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:37:56 compute-0 ceph-mon[74360]: pgmap v3472: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 20 15:37:57 compute-0 nova_compute[250018]: 2026-01-20 15:37:57.102 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:37:57 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:37:57 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:37:57 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:37:57.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:37:57 compute-0 nova_compute[250018]: 2026-01-20 15:37:57.262 250022 DEBUG nova.compute.manager [None req-2b977ec4-3957-4bf7-99aa-8aace007fb9e - - - - - -] [instance: 40704e43-8d1e-4a6f-addb-4e8524e3534d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:37:57 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3473: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 5.5 KiB/s rd, 852 B/s wr, 8 op/s
Jan 20 15:37:57 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:37:57 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:37:57 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:37:57.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:37:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 15:37:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 15:37:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 15:37:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 15:37:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 15:37:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 15:37:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 15:37:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 15:37:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 15:37:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 15:37:58 compute-0 nova_compute[250018]: 2026-01-20 15:37:58.303 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:37:58 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:37:58 compute-0 ceph-mon[74360]: pgmap v3473: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 5.5 KiB/s rd, 852 B/s wr, 8 op/s
Jan 20 15:37:59 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:37:59 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:37:59 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:37:59.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:37:59 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3474: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.7 KiB/s rd, 341 B/s wr, 5 op/s
Jan 20 15:37:59 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:37:59 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:37:59 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:37:59.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:38:00 compute-0 ceph-mon[74360]: pgmap v3474: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.7 KiB/s rd, 341 B/s wr, 5 op/s
Jan 20 15:38:01 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:38:01 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:38:01 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:38:01.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:38:01 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3475: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:38:01 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:38:01 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:38:01 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:38:01.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:38:02 compute-0 nova_compute[250018]: 2026-01-20 15:38:02.105 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:38:02 compute-0 ceph-mon[74360]: pgmap v3475: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:38:03 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:38:03 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:38:03 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:38:03.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:38:03 compute-0 nova_compute[250018]: 2026-01-20 15:38:03.304 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:38:03 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3476: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:38:03 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:38:03 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:38:03 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:38:03 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:38:03.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:38:04 compute-0 ceph-mon[74360]: pgmap v3476: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:38:04 compute-0 ceph-mon[74360]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #177. Immutable memtables: 0.
Jan 20 15:38:04 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:38:04.683777) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 15:38:04 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:856] [default] [JOB 109] Flushing memtable with next log file: 177
Jan 20 15:38:04 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768923484683814, "job": 109, "event": "flush_started", "num_memtables": 1, "num_entries": 378, "num_deletes": 251, "total_data_size": 272180, "memory_usage": 279624, "flush_reason": "Manual Compaction"}
Jan 20 15:38:04 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:885] [default] [JOB 109] Level-0 flush table #178: started
Jan 20 15:38:04 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768923484687623, "cf_name": "default", "job": 109, "event": "table_file_creation", "file_number": 178, "file_size": 269559, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 77694, "largest_seqno": 78071, "table_properties": {"data_size": 267271, "index_size": 451, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 5634, "raw_average_key_size": 18, "raw_value_size": 262744, "raw_average_value_size": 867, "num_data_blocks": 20, "num_entries": 303, "num_filter_entries": 303, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768923468, "oldest_key_time": 1768923468, "file_creation_time": 1768923484, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1688fb6f-f397-4aac-a0e8-874ff91d97a7", "db_session_id": "5V3N2TVXYZBCXP55EZHK", "orig_file_number": 178, "seqno_to_time_mapping": "N/A"}}
Jan 20 15:38:04 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 109] Flush lasted 3876 microseconds, and 1440 cpu microseconds.
Jan 20 15:38:04 compute-0 ceph-mon[74360]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 15:38:04 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:38:04.687654) [db/flush_job.cc:967] [default] [JOB 109] Level-0 flush table #178: 269559 bytes OK
Jan 20 15:38:04 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:38:04.687668) [db/memtable_list.cc:519] [default] Level-0 commit table #178 started
Jan 20 15:38:04 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:38:04.689236) [db/memtable_list.cc:722] [default] Level-0 commit table #178: memtable #1 done
Jan 20 15:38:04 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:38:04.689249) EVENT_LOG_v1 {"time_micros": 1768923484689244, "job": 109, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 15:38:04 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:38:04.689262) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 15:38:04 compute-0 ceph-mon[74360]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 109] Try to delete WAL files size 269751, prev total WAL file size 269751, number of live WAL files 2.
Jan 20 15:38:04 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000174.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:38:04 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:38:04.689687) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037323739' seq:72057594037927935, type:22 .. '7061786F730037353331' seq:0, type:0; will stop at (end)
Jan 20 15:38:04 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 110] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 15:38:04 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 109 Base level 0, inputs: [178(263KB)], [176(12MB)]
Jan 20 15:38:04 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768923484689713, "job": 110, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [178], "files_L6": [176], "score": -1, "input_data_size": 12906120, "oldest_snapshot_seqno": -1}
Jan 20 15:38:04 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 110] Generated table #179: 10405 keys, 10873552 bytes, temperature: kUnknown
Jan 20 15:38:04 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768923484744151, "cf_name": "default", "job": 110, "event": "table_file_creation", "file_number": 179, "file_size": 10873552, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10810040, "index_size": 36391, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 26053, "raw_key_size": 274821, "raw_average_key_size": 26, "raw_value_size": 10631504, "raw_average_value_size": 1021, "num_data_blocks": 1371, "num_entries": 10405, "num_filter_entries": 10405, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768917305, "oldest_key_time": 0, "file_creation_time": 1768923484, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1688fb6f-f397-4aac-a0e8-874ff91d97a7", "db_session_id": "5V3N2TVXYZBCXP55EZHK", "orig_file_number": 179, "seqno_to_time_mapping": "N/A"}}
Jan 20 15:38:04 compute-0 ceph-mon[74360]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 15:38:04 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:38:04.744419) [db/compaction/compaction_job.cc:1663] [default] [JOB 110] Compacted 1@0 + 1@6 files to L6 => 10873552 bytes
Jan 20 15:38:04 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:38:04.745668) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 236.8 rd, 199.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 12.1 +0.0 blob) out(10.4 +0.0 blob), read-write-amplify(88.2) write-amplify(40.3) OK, records in: 10915, records dropped: 510 output_compression: NoCompression
Jan 20 15:38:04 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:38:04.745689) EVENT_LOG_v1 {"time_micros": 1768923484745679, "job": 110, "event": "compaction_finished", "compaction_time_micros": 54513, "compaction_time_cpu_micros": 27017, "output_level": 6, "num_output_files": 1, "total_output_size": 10873552, "num_input_records": 10915, "num_output_records": 10405, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 15:38:04 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000178.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:38:04 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768923484745852, "job": 110, "event": "table_file_deletion", "file_number": 178}
Jan 20 15:38:04 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000176.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:38:04 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768923484748649, "job": 110, "event": "table_file_deletion", "file_number": 176}
Jan 20 15:38:04 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:38:04.689593) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:38:04 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:38:04.748689) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:38:04 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:38:04.748693) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:38:04 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:38:04.748695) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:38:04 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:38:04.748697) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:38:04 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:38:04.748699) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:38:05 compute-0 nova_compute[250018]: 2026-01-20 15:38:05.041 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:38:05 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:38:05 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:38:05 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:38:05.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:38:05 compute-0 nova_compute[250018]: 2026-01-20 15:38:05.156 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:38:05 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3477: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:38:05 compute-0 ceph-mon[74360]: pgmap v3477: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:38:05 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:38:05 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:38:05 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:38:05.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:38:07 compute-0 nova_compute[250018]: 2026-01-20 15:38:07.106 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:38:07 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:38:07 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:38:07 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:38:07.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:38:07 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3478: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:38:07 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:38:07 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:38:07 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:38:07.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:38:08 compute-0 nova_compute[250018]: 2026-01-20 15:38:08.306 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:38:08 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:38:08 compute-0 ceph-mon[74360]: pgmap v3478: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:38:09 compute-0 nova_compute[250018]: 2026-01-20 15:38:09.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:38:09 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:38:09 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:38:09 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:38:09.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:38:09 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3479: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:38:09 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:38:09 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:38:09 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:38:09.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:38:10 compute-0 podman[392809]: 2026-01-20 15:38:10.459132097 +0000 UTC m=+0.048362229 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 20 15:38:10 compute-0 ceph-mon[74360]: pgmap v3479: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:38:10 compute-0 podman[392808]: 2026-01-20 15:38:10.527837 +0000 UTC m=+0.116579869 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 20 15:38:11 compute-0 sudo[392852]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:38:11 compute-0 sudo[392852]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:38:11 compute-0 sudo[392852]: pam_unix(sudo:session): session closed for user root
Jan 20 15:38:11 compute-0 sudo[392877]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:38:11 compute-0 sudo[392877]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:38:11 compute-0 sudo[392877]: pam_unix(sudo:session): session closed for user root
Jan 20 15:38:11 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:38:11 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:38:11 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:38:11.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:38:11 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3480: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:38:11 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:38:11 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:38:11 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:38:11.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:38:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 15:38:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:38:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 20 15:38:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:38:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 20 15:38:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:38:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021614147124511445 of space, bias 1.0, pg target 0.6484244137353433 quantized to 32 (current 32)
Jan 20 15:38:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:38:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 20 15:38:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:38:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 20 15:38:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:38:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 20 15:38:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:38:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 15:38:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:38:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 20 15:38:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:38:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 20 15:38:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:38:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 15:38:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:38:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 20 15:38:12 compute-0 nova_compute[250018]: 2026-01-20 15:38:12.107 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:38:12 compute-0 ceph-mon[74360]: pgmap v3480: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:38:13 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:38:13 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:38:13 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:38:13.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:38:13 compute-0 nova_compute[250018]: 2026-01-20 15:38:13.308 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:38:13 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3481: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:38:13 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:38:13 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 20 15:38:13 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/650535647' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:38:13 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 20 15:38:13 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/650535647' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:38:13 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:38:13 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:38:13 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:38:13.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:38:14 compute-0 ceph-mon[74360]: pgmap v3481: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:38:14 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/650535647' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:38:14 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/650535647' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:38:15 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:38:15 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:38:15 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:38:15.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:38:15 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3482: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:38:15 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:38:15 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:38:15 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:38:15.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:38:16 compute-0 ceph-mon[74360]: pgmap v3482: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:38:17 compute-0 nova_compute[250018]: 2026-01-20 15:38:17.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:38:17 compute-0 nova_compute[250018]: 2026-01-20 15:38:17.109 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:38:17 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:38:17 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:38:17 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:38:17.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:38:17 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3483: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:38:17 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:38:17 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:38:17 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:38:17.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:38:18 compute-0 nova_compute[250018]: 2026-01-20 15:38:18.311 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:38:18 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:38:18 compute-0 ceph-mon[74360]: pgmap v3483: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:38:19 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:38:19 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:38:19 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:38:19.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:38:19 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3484: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:38:19 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/420021600' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:38:19 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:38:19 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:38:19 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:38:19.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:38:20 compute-0 ceph-mon[74360]: pgmap v3484: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:38:20 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/2588081493' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:38:21 compute-0 nova_compute[250018]: 2026-01-20 15:38:21.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:38:21 compute-0 nova_compute[250018]: 2026-01-20 15:38:21.051 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 15:38:21 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:38:21 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:38:21 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:38:21.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:38:21 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3485: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:38:21 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:38:21 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:38:21 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:38:21.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:38:22 compute-0 nova_compute[250018]: 2026-01-20 15:38:22.052 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:38:22 compute-0 nova_compute[250018]: 2026-01-20 15:38:22.111 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:38:22 compute-0 ceph-mon[74360]: pgmap v3485: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:38:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:38:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:38:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:38:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:38:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:38:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:38:23 compute-0 nova_compute[250018]: 2026-01-20 15:38:23.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:38:23 compute-0 nova_compute[250018]: 2026-01-20 15:38:23.097 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:38:23 compute-0 nova_compute[250018]: 2026-01-20 15:38:23.098 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:38:23 compute-0 nova_compute[250018]: 2026-01-20 15:38:23.098 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:38:23 compute-0 nova_compute[250018]: 2026-01-20 15:38:23.098 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 15:38:23 compute-0 nova_compute[250018]: 2026-01-20 15:38:23.098 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:38:23 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:38:23 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:38:23 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:38:23.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:38:23 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/3824174375' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:38:23 compute-0 nova_compute[250018]: 2026-01-20 15:38:23.312 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:38:23 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3486: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:38:23 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:38:23 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:38:23 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1130345173' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:38:23 compute-0 nova_compute[250018]: 2026-01-20 15:38:23.572 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:38:23 compute-0 nova_compute[250018]: 2026-01-20 15:38:23.732 250022 WARNING nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:38:23 compute-0 nova_compute[250018]: 2026-01-20 15:38:23.733 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4202MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 15:38:23 compute-0 nova_compute[250018]: 2026-01-20 15:38:23.733 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:38:23 compute-0 nova_compute[250018]: 2026-01-20 15:38:23.734 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:38:23 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:38:23 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:38:23 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:38:23.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:38:23 compute-0 nova_compute[250018]: 2026-01-20 15:38:23.934 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 15:38:23 compute-0 nova_compute[250018]: 2026-01-20 15:38:23.935 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 15:38:23 compute-0 nova_compute[250018]: 2026-01-20 15:38:23.959 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:38:24 compute-0 ceph-mon[74360]: pgmap v3486: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:38:24 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/1130345173' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:38:24 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/374964423' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:38:24 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:38:24 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1078152141' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:38:24 compute-0 nova_compute[250018]: 2026-01-20 15:38:24.410 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:38:24 compute-0 nova_compute[250018]: 2026-01-20 15:38:24.416 250022 DEBUG nova.compute.provider_tree [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:38:24 compute-0 nova_compute[250018]: 2026-01-20 15:38:24.434 250022 DEBUG nova.scheduler.client.report [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:38:24 compute-0 nova_compute[250018]: 2026-01-20 15:38:24.460 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 15:38:24 compute-0 nova_compute[250018]: 2026-01-20 15:38:24.461 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.727s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:38:25 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:38:25 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:38:25 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:38:25.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:38:25 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/1078152141' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:38:25 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3487: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:38:25 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:38:25 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:38:25 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:38:25.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:38:25 compute-0 sudo[392957]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:38:25 compute-0 sudo[392957]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:38:25 compute-0 sudo[392957]: pam_unix(sudo:session): session closed for user root
Jan 20 15:38:25 compute-0 sudo[392982]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:38:25 compute-0 sudo[392982]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:38:25 compute-0 sudo[392982]: pam_unix(sudo:session): session closed for user root
Jan 20 15:38:26 compute-0 sudo[393007]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:38:26 compute-0 sudo[393007]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:38:26 compute-0 sudo[393007]: pam_unix(sudo:session): session closed for user root
Jan 20 15:38:26 compute-0 sudo[393032]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 15:38:26 compute-0 sudo[393032]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:38:26 compute-0 ceph-mon[74360]: pgmap v3487: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:38:26 compute-0 sudo[393032]: pam_unix(sudo:session): session closed for user root
Jan 20 15:38:26 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 15:38:26 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:38:26 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 20 15:38:26 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 15:38:26 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 20 15:38:26 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:38:26 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 0d079579-da5c-474a-a7c4-3c59813ea6a9 does not exist
Jan 20 15:38:26 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 4976648c-2d2d-42f5-9139-a047d78bb285 does not exist
Jan 20 15:38:26 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 97cf0ffe-12cd-454f-abaa-e15fdd415440 does not exist
Jan 20 15:38:26 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 20 15:38:26 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 15:38:26 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 20 15:38:26 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 15:38:26 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 15:38:26 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:38:26 compute-0 sudo[393088]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:38:26 compute-0 sudo[393088]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:38:26 compute-0 sudo[393088]: pam_unix(sudo:session): session closed for user root
Jan 20 15:38:26 compute-0 sudo[393113]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:38:26 compute-0 sudo[393113]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:38:26 compute-0 sudo[393113]: pam_unix(sudo:session): session closed for user root
Jan 20 15:38:26 compute-0 sudo[393138]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:38:26 compute-0 sudo[393138]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:38:26 compute-0 sudo[393138]: pam_unix(sudo:session): session closed for user root
Jan 20 15:38:26 compute-0 sudo[393163]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 15:38:26 compute-0 sudo[393163]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:38:26 compute-0 sshd-session[392955]: Invalid user ubuntu from 134.122.57.138 port 41744
Jan 20 15:38:27 compute-0 sshd-session[392955]: Connection closed by invalid user ubuntu 134.122.57.138 port 41744 [preauth]
Jan 20 15:38:27 compute-0 nova_compute[250018]: 2026-01-20 15:38:27.114 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:38:27 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:38:27 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:38:27 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:38:27.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:38:27 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:38:27 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 15:38:27 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:38:27 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 15:38:27 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 15:38:27 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:38:27 compute-0 podman[393230]: 2026-01-20 15:38:27.305991419 +0000 UTC m=+0.045666377 container create 65c53501ad6c1d4ebe90fa9ad3466248bd667ba4d92ee3ca40c18e9a975d60c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_moore, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 20 15:38:27 compute-0 systemd[1]: Started libpod-conmon-65c53501ad6c1d4ebe90fa9ad3466248bd667ba4d92ee3ca40c18e9a975d60c2.scope.
Jan 20 15:38:27 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3488: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:38:27 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:38:27 compute-0 podman[393230]: 2026-01-20 15:38:27.284890673 +0000 UTC m=+0.024565721 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:38:27 compute-0 podman[393230]: 2026-01-20 15:38:27.39044315 +0000 UTC m=+0.130118178 container init 65c53501ad6c1d4ebe90fa9ad3466248bd667ba4d92ee3ca40c18e9a975d60c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_moore, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 15:38:27 compute-0 podman[393230]: 2026-01-20 15:38:27.397188764 +0000 UTC m=+0.136863742 container start 65c53501ad6c1d4ebe90fa9ad3466248bd667ba4d92ee3ca40c18e9a975d60c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_moore, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True)
Jan 20 15:38:27 compute-0 podman[393230]: 2026-01-20 15:38:27.401107541 +0000 UTC m=+0.140782499 container attach 65c53501ad6c1d4ebe90fa9ad3466248bd667ba4d92ee3ca40c18e9a975d60c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_moore, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Jan 20 15:38:27 compute-0 systemd[1]: libpod-65c53501ad6c1d4ebe90fa9ad3466248bd667ba4d92ee3ca40c18e9a975d60c2.scope: Deactivated successfully.
Jan 20 15:38:27 compute-0 determined_moore[393246]: 167 167
Jan 20 15:38:27 compute-0 conmon[393246]: conmon 65c53501ad6c1d4ebe90 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-65c53501ad6c1d4ebe90fa9ad3466248bd667ba4d92ee3ca40c18e9a975d60c2.scope/container/memory.events
Jan 20 15:38:27 compute-0 podman[393230]: 2026-01-20 15:38:27.403806504 +0000 UTC m=+0.143481462 container died 65c53501ad6c1d4ebe90fa9ad3466248bd667ba4d92ee3ca40c18e9a975d60c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_moore, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 20 15:38:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-d69dd1683cad8634b8442ee281d45a4aad2dc804a4a193199f1714cb966b72bf-merged.mount: Deactivated successfully.
Jan 20 15:38:27 compute-0 podman[393230]: 2026-01-20 15:38:27.443012483 +0000 UTC m=+0.182687441 container remove 65c53501ad6c1d4ebe90fa9ad3466248bd667ba4d92ee3ca40c18e9a975d60c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_moore, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 20 15:38:27 compute-0 systemd[1]: libpod-conmon-65c53501ad6c1d4ebe90fa9ad3466248bd667ba4d92ee3ca40c18e9a975d60c2.scope: Deactivated successfully.
Jan 20 15:38:27 compute-0 podman[393269]: 2026-01-20 15:38:27.602863361 +0000 UTC m=+0.039951601 container create de739fed6f983ca51ea61538a60ced52c627045a1651b3dabea271be9ee8abfd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_poincare, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 15:38:27 compute-0 systemd[1]: Started libpod-conmon-de739fed6f983ca51ea61538a60ced52c627045a1651b3dabea271be9ee8abfd.scope.
Jan 20 15:38:27 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:38:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2fc2b802b9e7bd7c27529f337306a53cf5f05c6cdb0c1e2159c0b3a89c489330/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 15:38:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2fc2b802b9e7bd7c27529f337306a53cf5f05c6cdb0c1e2159c0b3a89c489330/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 15:38:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2fc2b802b9e7bd7c27529f337306a53cf5f05c6cdb0c1e2159c0b3a89c489330/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 15:38:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2fc2b802b9e7bd7c27529f337306a53cf5f05c6cdb0c1e2159c0b3a89c489330/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 15:38:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2fc2b802b9e7bd7c27529f337306a53cf5f05c6cdb0c1e2159c0b3a89c489330/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 15:38:27 compute-0 podman[393269]: 2026-01-20 15:38:27.583732929 +0000 UTC m=+0.020821189 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:38:27 compute-0 podman[393269]: 2026-01-20 15:38:27.684231709 +0000 UTC m=+0.121319959 container init de739fed6f983ca51ea61538a60ced52c627045a1651b3dabea271be9ee8abfd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_poincare, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 15:38:27 compute-0 podman[393269]: 2026-01-20 15:38:27.694466498 +0000 UTC m=+0.131554738 container start de739fed6f983ca51ea61538a60ced52c627045a1651b3dabea271be9ee8abfd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_poincare, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 20 15:38:27 compute-0 podman[393269]: 2026-01-20 15:38:27.69749433 +0000 UTC m=+0.134582570 container attach de739fed6f983ca51ea61538a60ced52c627045a1651b3dabea271be9ee8abfd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_poincare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507)
Jan 20 15:38:27 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:38:27 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:38:27 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:38:27.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:38:28 compute-0 ceph-mon[74360]: pgmap v3488: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:38:28 compute-0 nova_compute[250018]: 2026-01-20 15:38:28.315 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:38:28 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:38:28 compute-0 sharp_poincare[393285]: --> passed data devices: 0 physical, 1 LVM
Jan 20 15:38:28 compute-0 sharp_poincare[393285]: --> relative data size: 1.0
Jan 20 15:38:28 compute-0 sharp_poincare[393285]: --> All data devices are unavailable
Jan 20 15:38:28 compute-0 systemd[1]: libpod-de739fed6f983ca51ea61538a60ced52c627045a1651b3dabea271be9ee8abfd.scope: Deactivated successfully.
Jan 20 15:38:28 compute-0 podman[393269]: 2026-01-20 15:38:28.460054067 +0000 UTC m=+0.897142327 container died de739fed6f983ca51ea61538a60ced52c627045a1651b3dabea271be9ee8abfd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_poincare, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 20 15:38:28 compute-0 nova_compute[250018]: 2026-01-20 15:38:28.462 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:38:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-2fc2b802b9e7bd7c27529f337306a53cf5f05c6cdb0c1e2159c0b3a89c489330-merged.mount: Deactivated successfully.
Jan 20 15:38:28 compute-0 podman[393269]: 2026-01-20 15:38:28.507001536 +0000 UTC m=+0.944089776 container remove de739fed6f983ca51ea61538a60ced52c627045a1651b3dabea271be9ee8abfd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_poincare, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 20 15:38:28 compute-0 systemd[1]: libpod-conmon-de739fed6f983ca51ea61538a60ced52c627045a1651b3dabea271be9ee8abfd.scope: Deactivated successfully.
Jan 20 15:38:28 compute-0 sudo[393163]: pam_unix(sudo:session): session closed for user root
Jan 20 15:38:28 compute-0 sudo[393316]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:38:28 compute-0 sudo[393316]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:38:28 compute-0 sudo[393316]: pam_unix(sudo:session): session closed for user root
Jan 20 15:38:28 compute-0 sudo[393341]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:38:28 compute-0 sudo[393341]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:38:28 compute-0 sudo[393341]: pam_unix(sudo:session): session closed for user root
Jan 20 15:38:28 compute-0 sudo[393366]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:38:28 compute-0 sudo[393366]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:38:28 compute-0 sudo[393366]: pam_unix(sudo:session): session closed for user root
Jan 20 15:38:28 compute-0 sudo[393391]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- lvm list --format json
Jan 20 15:38:28 compute-0 sudo[393391]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:38:29 compute-0 podman[393455]: 2026-01-20 15:38:29.074684191 +0000 UTC m=+0.036722652 container create 5ce4ea3735f45056230ed7a75485257ae67078cf9d4f67cbc3ae206bbcc5a2d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_williams, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 15:38:29 compute-0 systemd[1]: Started libpod-conmon-5ce4ea3735f45056230ed7a75485257ae67078cf9d4f67cbc3ae206bbcc5a2d7.scope.
Jan 20 15:38:29 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:38:29 compute-0 podman[393455]: 2026-01-20 15:38:29.146960971 +0000 UTC m=+0.108999452 container init 5ce4ea3735f45056230ed7a75485257ae67078cf9d4f67cbc3ae206bbcc5a2d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_williams, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 15:38:29 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:38:29 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:38:29 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:38:29.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:38:29 compute-0 podman[393455]: 2026-01-20 15:38:29.152731769 +0000 UTC m=+0.114770230 container start 5ce4ea3735f45056230ed7a75485257ae67078cf9d4f67cbc3ae206bbcc5a2d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_williams, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 15:38:29 compute-0 podman[393455]: 2026-01-20 15:38:29.05998987 +0000 UTC m=+0.022028351 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:38:29 compute-0 podman[393455]: 2026-01-20 15:38:29.155957547 +0000 UTC m=+0.117996058 container attach 5ce4ea3735f45056230ed7a75485257ae67078cf9d4f67cbc3ae206bbcc5a2d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_williams, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 20 15:38:29 compute-0 festive_williams[393472]: 167 167
Jan 20 15:38:29 compute-0 systemd[1]: libpod-5ce4ea3735f45056230ed7a75485257ae67078cf9d4f67cbc3ae206bbcc5a2d7.scope: Deactivated successfully.
Jan 20 15:38:29 compute-0 podman[393455]: 2026-01-20 15:38:29.157530369 +0000 UTC m=+0.119568830 container died 5ce4ea3735f45056230ed7a75485257ae67078cf9d4f67cbc3ae206bbcc5a2d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_williams, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 15:38:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-454c1799a5561ceb259fef4b91567eaeb718294efcb53d76fe96441a1e1119ae-merged.mount: Deactivated successfully.
Jan 20 15:38:29 compute-0 podman[393455]: 2026-01-20 15:38:29.191062383 +0000 UTC m=+0.153100844 container remove 5ce4ea3735f45056230ed7a75485257ae67078cf9d4f67cbc3ae206bbcc5a2d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_williams, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 15:38:29 compute-0 systemd[1]: libpod-conmon-5ce4ea3735f45056230ed7a75485257ae67078cf9d4f67cbc3ae206bbcc5a2d7.scope: Deactivated successfully.
Jan 20 15:38:29 compute-0 podman[393494]: 2026-01-20 15:38:29.340585929 +0000 UTC m=+0.037713699 container create 460463af260034a14ae4c917e269e03fc7ec33009fffb8b49b09515310921e4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_chatterjee, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 15:38:29 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3489: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:38:29 compute-0 systemd[1]: Started libpod-conmon-460463af260034a14ae4c917e269e03fc7ec33009fffb8b49b09515310921e4c.scope.
Jan 20 15:38:29 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:38:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07a217aa638beb18249ee9f423cd3899e5b36c61543b6a39d3c5d1a818c05c22/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 15:38:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07a217aa638beb18249ee9f423cd3899e5b36c61543b6a39d3c5d1a818c05c22/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 15:38:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07a217aa638beb18249ee9f423cd3899e5b36c61543b6a39d3c5d1a818c05c22/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 15:38:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07a217aa638beb18249ee9f423cd3899e5b36c61543b6a39d3c5d1a818c05c22/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 15:38:29 compute-0 podman[393494]: 2026-01-20 15:38:29.323694969 +0000 UTC m=+0.020822759 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:38:29 compute-0 podman[393494]: 2026-01-20 15:38:29.420861727 +0000 UTC m=+0.117989527 container init 460463af260034a14ae4c917e269e03fc7ec33009fffb8b49b09515310921e4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_chatterjee, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 15:38:29 compute-0 podman[393494]: 2026-01-20 15:38:29.434620333 +0000 UTC m=+0.131748103 container start 460463af260034a14ae4c917e269e03fc7ec33009fffb8b49b09515310921e4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_chatterjee, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 20 15:38:29 compute-0 podman[393494]: 2026-01-20 15:38:29.438267332 +0000 UTC m=+0.135395092 container attach 460463af260034a14ae4c917e269e03fc7ec33009fffb8b49b09515310921e4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_chatterjee, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 20 15:38:29 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:38:29 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:38:29 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:38:29.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:38:30 compute-0 nova_compute[250018]: 2026-01-20 15:38:30.045 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:38:30 compute-0 eloquent_chatterjee[393511]: {
Jan 20 15:38:30 compute-0 eloquent_chatterjee[393511]:     "0": [
Jan 20 15:38:30 compute-0 eloquent_chatterjee[393511]:         {
Jan 20 15:38:30 compute-0 eloquent_chatterjee[393511]:             "devices": [
Jan 20 15:38:30 compute-0 eloquent_chatterjee[393511]:                 "/dev/loop3"
Jan 20 15:38:30 compute-0 eloquent_chatterjee[393511]:             ],
Jan 20 15:38:30 compute-0 eloquent_chatterjee[393511]:             "lv_name": "ceph_lv0",
Jan 20 15:38:30 compute-0 eloquent_chatterjee[393511]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 15:38:30 compute-0 eloquent_chatterjee[393511]:             "lv_size": "7511998464",
Jan 20 15:38:30 compute-0 eloquent_chatterjee[393511]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e399cf45-e6b6-5393-99f1-75c601d3f188,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afc3740a-ccee-46f8-b343-22648cc89376,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 20 15:38:30 compute-0 eloquent_chatterjee[393511]:             "lv_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 15:38:30 compute-0 eloquent_chatterjee[393511]:             "name": "ceph_lv0",
Jan 20 15:38:30 compute-0 eloquent_chatterjee[393511]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 15:38:30 compute-0 eloquent_chatterjee[393511]:             "tags": {
Jan 20 15:38:30 compute-0 eloquent_chatterjee[393511]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 15:38:30 compute-0 eloquent_chatterjee[393511]:                 "ceph.block_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 15:38:30 compute-0 eloquent_chatterjee[393511]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 15:38:30 compute-0 eloquent_chatterjee[393511]:                 "ceph.cluster_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 15:38:30 compute-0 eloquent_chatterjee[393511]:                 "ceph.cluster_name": "ceph",
Jan 20 15:38:30 compute-0 eloquent_chatterjee[393511]:                 "ceph.crush_device_class": "",
Jan 20 15:38:30 compute-0 eloquent_chatterjee[393511]:                 "ceph.encrypted": "0",
Jan 20 15:38:30 compute-0 eloquent_chatterjee[393511]:                 "ceph.osd_fsid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 15:38:30 compute-0 eloquent_chatterjee[393511]:                 "ceph.osd_id": "0",
Jan 20 15:38:30 compute-0 eloquent_chatterjee[393511]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 15:38:30 compute-0 eloquent_chatterjee[393511]:                 "ceph.type": "block",
Jan 20 15:38:30 compute-0 eloquent_chatterjee[393511]:                 "ceph.vdo": "0"
Jan 20 15:38:30 compute-0 eloquent_chatterjee[393511]:             },
Jan 20 15:38:30 compute-0 eloquent_chatterjee[393511]:             "type": "block",
Jan 20 15:38:30 compute-0 eloquent_chatterjee[393511]:             "vg_name": "ceph_vg0"
Jan 20 15:38:30 compute-0 eloquent_chatterjee[393511]:         }
Jan 20 15:38:30 compute-0 eloquent_chatterjee[393511]:     ]
Jan 20 15:38:30 compute-0 eloquent_chatterjee[393511]: }
Jan 20 15:38:30 compute-0 systemd[1]: libpod-460463af260034a14ae4c917e269e03fc7ec33009fffb8b49b09515310921e4c.scope: Deactivated successfully.
Jan 20 15:38:30 compute-0 podman[393494]: 2026-01-20 15:38:30.182659533 +0000 UTC m=+0.879787343 container died 460463af260034a14ae4c917e269e03fc7ec33009fffb8b49b09515310921e4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_chatterjee, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 20 15:38:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-07a217aa638beb18249ee9f423cd3899e5b36c61543b6a39d3c5d1a818c05c22-merged.mount: Deactivated successfully.
Jan 20 15:38:30 compute-0 podman[393494]: 2026-01-20 15:38:30.25003871 +0000 UTC m=+0.947166480 container remove 460463af260034a14ae4c917e269e03fc7ec33009fffb8b49b09515310921e4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_chatterjee, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 20 15:38:30 compute-0 systemd[1]: libpod-conmon-460463af260034a14ae4c917e269e03fc7ec33009fffb8b49b09515310921e4c.scope: Deactivated successfully.
Jan 20 15:38:30 compute-0 sudo[393391]: pam_unix(sudo:session): session closed for user root
Jan 20 15:38:30 compute-0 sudo[393532]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:38:30 compute-0 sudo[393532]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:38:30 compute-0 sudo[393532]: pam_unix(sudo:session): session closed for user root
Jan 20 15:38:30 compute-0 sudo[393558]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:38:30 compute-0 sudo[393558]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:38:30 compute-0 sudo[393558]: pam_unix(sudo:session): session closed for user root
Jan 20 15:38:30 compute-0 sudo[393583]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:38:30 compute-0 sudo[393583]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:38:30 compute-0 sudo[393583]: pam_unix(sudo:session): session closed for user root
Jan 20 15:38:30 compute-0 ceph-mon[74360]: pgmap v3489: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:38:30 compute-0 sudo[393608]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- raw list --format json
Jan 20 15:38:30 compute-0 sudo[393608]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:38:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:38:30.810 160071 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:38:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:38:30.811 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:38:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:38:30.811 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:38:30 compute-0 podman[393674]: 2026-01-20 15:38:30.874171173 +0000 UTC m=+0.043469306 container create 752ac4a8e26633f90da798609bba39ec2ee2822e964a78e2cfdcf9e2e57ee39c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_shockley, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 15:38:30 compute-0 systemd[1]: Started libpod-conmon-752ac4a8e26633f90da798609bba39ec2ee2822e964a78e2cfdcf9e2e57ee39c.scope.
Jan 20 15:38:30 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:38:30 compute-0 podman[393674]: 2026-01-20 15:38:30.857637242 +0000 UTC m=+0.026935395 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:38:30 compute-0 podman[393674]: 2026-01-20 15:38:30.955955152 +0000 UTC m=+0.125253315 container init 752ac4a8e26633f90da798609bba39ec2ee2822e964a78e2cfdcf9e2e57ee39c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_shockley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:38:30 compute-0 podman[393674]: 2026-01-20 15:38:30.961934386 +0000 UTC m=+0.131232549 container start 752ac4a8e26633f90da798609bba39ec2ee2822e964a78e2cfdcf9e2e57ee39c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_shockley, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 15:38:30 compute-0 podman[393674]: 2026-01-20 15:38:30.965755429 +0000 UTC m=+0.135053582 container attach 752ac4a8e26633f90da798609bba39ec2ee2822e964a78e2cfdcf9e2e57ee39c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_shockley, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 20 15:38:30 compute-0 busy_shockley[393690]: 167 167
Jan 20 15:38:30 compute-0 systemd[1]: libpod-752ac4a8e26633f90da798609bba39ec2ee2822e964a78e2cfdcf9e2e57ee39c.scope: Deactivated successfully.
Jan 20 15:38:30 compute-0 podman[393674]: 2026-01-20 15:38:30.967506868 +0000 UTC m=+0.136805001 container died 752ac4a8e26633f90da798609bba39ec2ee2822e964a78e2cfdcf9e2e57ee39c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_shockley, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 20 15:38:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-c1a8433f179ed21f94afe8495718bf4533eceb6f1debe1f416d14762bedf2f9f-merged.mount: Deactivated successfully.
Jan 20 15:38:31 compute-0 podman[393674]: 2026-01-20 15:38:31.008954497 +0000 UTC m=+0.178252660 container remove 752ac4a8e26633f90da798609bba39ec2ee2822e964a78e2cfdcf9e2e57ee39c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_shockley, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 20 15:38:31 compute-0 systemd[1]: libpod-conmon-752ac4a8e26633f90da798609bba39ec2ee2822e964a78e2cfdcf9e2e57ee39c.scope: Deactivated successfully.
Jan 20 15:38:31 compute-0 sudo[393709]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:38:31 compute-0 sudo[393709]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:38:31 compute-0 sudo[393709]: pam_unix(sudo:session): session closed for user root
Jan 20 15:38:31 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:38:31 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:38:31 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:38:31.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:38:31 compute-0 podman[393737]: 2026-01-20 15:38:31.20646441 +0000 UTC m=+0.056938162 container create 6e50f35f063eb7095a1bf06efa487416bc4e0e1f5b71a3a2fbe4460ddbc726ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_wescoff, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 15:38:31 compute-0 sudo[393746]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:38:31 compute-0 sudo[393746]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:38:31 compute-0 sudo[393746]: pam_unix(sudo:session): session closed for user root
Jan 20 15:38:31 compute-0 systemd[1]: Started libpod-conmon-6e50f35f063eb7095a1bf06efa487416bc4e0e1f5b71a3a2fbe4460ddbc726ef.scope.
Jan 20 15:38:31 compute-0 podman[393737]: 2026-01-20 15:38:31.186160377 +0000 UTC m=+0.036634059 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:38:31 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:38:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d01c7677549a42873f918cae37061ea6526798e2d0c9c8d39e56b5967a8bb36b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 15:38:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d01c7677549a42873f918cae37061ea6526798e2d0c9c8d39e56b5967a8bb36b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 15:38:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d01c7677549a42873f918cae37061ea6526798e2d0c9c8d39e56b5967a8bb36b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 15:38:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d01c7677549a42873f918cae37061ea6526798e2d0c9c8d39e56b5967a8bb36b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 15:38:31 compute-0 podman[393737]: 2026-01-20 15:38:31.305534731 +0000 UTC m=+0.156008443 container init 6e50f35f063eb7095a1bf06efa487416bc4e0e1f5b71a3a2fbe4460ddbc726ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_wescoff, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 20 15:38:31 compute-0 podman[393737]: 2026-01-20 15:38:31.316369057 +0000 UTC m=+0.166842719 container start 6e50f35f063eb7095a1bf06efa487416bc4e0e1f5b71a3a2fbe4460ddbc726ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_wescoff, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 20 15:38:31 compute-0 podman[393737]: 2026-01-20 15:38:31.319945453 +0000 UTC m=+0.170419165 container attach 6e50f35f063eb7095a1bf06efa487416bc4e0e1f5b71a3a2fbe4460ddbc726ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_wescoff, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 15:38:31 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3490: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:38:31 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:38:31 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:38:31 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:38:31.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:38:32 compute-0 nova_compute[250018]: 2026-01-20 15:38:32.165 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:38:32 compute-0 strange_wescoff[393781]: {
Jan 20 15:38:32 compute-0 strange_wescoff[393781]:     "afc3740a-ccee-46f8-b343-22648cc89376": {
Jan 20 15:38:32 compute-0 strange_wescoff[393781]:         "ceph_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 15:38:32 compute-0 strange_wescoff[393781]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 20 15:38:32 compute-0 strange_wescoff[393781]:         "osd_id": 0,
Jan 20 15:38:32 compute-0 strange_wescoff[393781]:         "osd_uuid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 15:38:32 compute-0 strange_wescoff[393781]:         "type": "bluestore"
Jan 20 15:38:32 compute-0 strange_wescoff[393781]:     }
Jan 20 15:38:32 compute-0 strange_wescoff[393781]: }
Jan 20 15:38:32 compute-0 systemd[1]: libpod-6e50f35f063eb7095a1bf06efa487416bc4e0e1f5b71a3a2fbe4460ddbc726ef.scope: Deactivated successfully.
Jan 20 15:38:32 compute-0 podman[393737]: 2026-01-20 15:38:32.333556723 +0000 UTC m=+1.184030455 container died 6e50f35f063eb7095a1bf06efa487416bc4e0e1f5b71a3a2fbe4460ddbc726ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_wescoff, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 20 15:38:32 compute-0 systemd[1]: libpod-6e50f35f063eb7095a1bf06efa487416bc4e0e1f5b71a3a2fbe4460ddbc726ef.scope: Consumed 1.005s CPU time.
Jan 20 15:38:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-d01c7677549a42873f918cae37061ea6526798e2d0c9c8d39e56b5967a8bb36b-merged.mount: Deactivated successfully.
Jan 20 15:38:32 compute-0 podman[393737]: 2026-01-20 15:38:32.404100357 +0000 UTC m=+1.254574019 container remove 6e50f35f063eb7095a1bf06efa487416bc4e0e1f5b71a3a2fbe4460ddbc726ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_wescoff, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 15:38:32 compute-0 systemd[1]: libpod-conmon-6e50f35f063eb7095a1bf06efa487416bc4e0e1f5b71a3a2fbe4460ddbc726ef.scope: Deactivated successfully.
Jan 20 15:38:32 compute-0 sudo[393608]: pam_unix(sudo:session): session closed for user root
Jan 20 15:38:32 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 15:38:32 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:38:32 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 15:38:32 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:38:32 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 01e28af0-e8fa-40f4-83ef-5b92070dd24b does not exist
Jan 20 15:38:32 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 0b32e993-a88d-42cf-bca9-ff059ab2c720 does not exist
Jan 20 15:38:32 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 84b5149d-4cc7-4884-8844-82c2c7fc9b80 does not exist
Jan 20 15:38:32 compute-0 sudo[393817]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:38:32 compute-0 sudo[393817]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:38:32 compute-0 sudo[393817]: pam_unix(sudo:session): session closed for user root
Jan 20 15:38:32 compute-0 ceph-mon[74360]: pgmap v3490: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:38:32 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:38:32 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:38:32 compute-0 sudo[393842]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 15:38:32 compute-0 sudo[393842]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:38:32 compute-0 sudo[393842]: pam_unix(sudo:session): session closed for user root
Jan 20 15:38:33 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:38:33 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:38:33 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:38:33.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:38:33 compute-0 nova_compute[250018]: 2026-01-20 15:38:33.363 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:38:33 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3491: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:38:33 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:38:33 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:38:33 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:38:33 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:38:33.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:38:34 compute-0 ceph-mon[74360]: pgmap v3491: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:38:35 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:38:35 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:38:35 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:38:35.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:38:35 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3492: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:38:35 compute-0 ceph-mon[74360]: pgmap v3492: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:38:35 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:38:35 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:38:35 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:38:35.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:38:36 compute-0 nova_compute[250018]: 2026-01-20 15:38:36.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:38:36 compute-0 nova_compute[250018]: 2026-01-20 15:38:36.519 250022 DEBUG oslo_concurrency.lockutils [None req-e7d1e352-42f7-45b0-8ce5-86aef89b3d66 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Acquiring lock "b70d8470-0a60-4adc-89b5-a4e8763fd81e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:38:36 compute-0 nova_compute[250018]: 2026-01-20 15:38:36.520 250022 DEBUG oslo_concurrency.lockutils [None req-e7d1e352-42f7-45b0-8ce5-86aef89b3d66 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "b70d8470-0a60-4adc-89b5-a4e8763fd81e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:38:36 compute-0 nova_compute[250018]: 2026-01-20 15:38:36.539 250022 DEBUG nova.compute.manager [None req-e7d1e352-42f7-45b0-8ce5-86aef89b3d66 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: b70d8470-0a60-4adc-89b5-a4e8763fd81e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 15:38:36 compute-0 nova_compute[250018]: 2026-01-20 15:38:36.617 250022 DEBUG oslo_concurrency.lockutils [None req-e7d1e352-42f7-45b0-8ce5-86aef89b3d66 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:38:36 compute-0 nova_compute[250018]: 2026-01-20 15:38:36.617 250022 DEBUG oslo_concurrency.lockutils [None req-e7d1e352-42f7-45b0-8ce5-86aef89b3d66 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:38:36 compute-0 nova_compute[250018]: 2026-01-20 15:38:36.627 250022 DEBUG nova.virt.hardware [None req-e7d1e352-42f7-45b0-8ce5-86aef89b3d66 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 15:38:36 compute-0 nova_compute[250018]: 2026-01-20 15:38:36.627 250022 INFO nova.compute.claims [None req-e7d1e352-42f7-45b0-8ce5-86aef89b3d66 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: b70d8470-0a60-4adc-89b5-a4e8763fd81e] Claim successful on node compute-0.ctlplane.example.com
Jan 20 15:38:36 compute-0 nova_compute[250018]: 2026-01-20 15:38:36.807 250022 DEBUG oslo_concurrency.processutils [None req-e7d1e352-42f7-45b0-8ce5-86aef89b3d66 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:38:37 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:38:37 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:38:37 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:38:37.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:38:37 compute-0 nova_compute[250018]: 2026-01-20 15:38:37.167 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:38:37 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:38:37 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1147255199' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:38:37 compute-0 nova_compute[250018]: 2026-01-20 15:38:37.239 250022 DEBUG oslo_concurrency.processutils [None req-e7d1e352-42f7-45b0-8ce5-86aef89b3d66 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:38:37 compute-0 nova_compute[250018]: 2026-01-20 15:38:37.244 250022 DEBUG nova.compute.provider_tree [None req-e7d1e352-42f7-45b0-8ce5-86aef89b3d66 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:38:37 compute-0 nova_compute[250018]: 2026-01-20 15:38:37.265 250022 DEBUG nova.scheduler.client.report [None req-e7d1e352-42f7-45b0-8ce5-86aef89b3d66 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:38:37 compute-0 nova_compute[250018]: 2026-01-20 15:38:37.283 250022 DEBUG oslo_concurrency.lockutils [None req-e7d1e352-42f7-45b0-8ce5-86aef89b3d66 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.665s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:38:37 compute-0 nova_compute[250018]: 2026-01-20 15:38:37.283 250022 DEBUG nova.compute.manager [None req-e7d1e352-42f7-45b0-8ce5-86aef89b3d66 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: b70d8470-0a60-4adc-89b5-a4e8763fd81e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 15:38:37 compute-0 nova_compute[250018]: 2026-01-20 15:38:37.352 250022 DEBUG nova.compute.manager [None req-e7d1e352-42f7-45b0-8ce5-86aef89b3d66 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: b70d8470-0a60-4adc-89b5-a4e8763fd81e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 20 15:38:37 compute-0 nova_compute[250018]: 2026-01-20 15:38:37.353 250022 DEBUG nova.network.neutron [None req-e7d1e352-42f7-45b0-8ce5-86aef89b3d66 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: b70d8470-0a60-4adc-89b5-a4e8763fd81e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 15:38:37 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3493: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:38:37 compute-0 nova_compute[250018]: 2026-01-20 15:38:37.373 250022 INFO nova.virt.libvirt.driver [None req-e7d1e352-42f7-45b0-8ce5-86aef89b3d66 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: b70d8470-0a60-4adc-89b5-a4e8763fd81e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 15:38:37 compute-0 nova_compute[250018]: 2026-01-20 15:38:37.409 250022 DEBUG nova.compute.manager [None req-e7d1e352-42f7-45b0-8ce5-86aef89b3d66 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: b70d8470-0a60-4adc-89b5-a4e8763fd81e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 15:38:37 compute-0 nova_compute[250018]: 2026-01-20 15:38:37.543 250022 DEBUG nova.compute.manager [None req-e7d1e352-42f7-45b0-8ce5-86aef89b3d66 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: b70d8470-0a60-4adc-89b5-a4e8763fd81e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 15:38:37 compute-0 nova_compute[250018]: 2026-01-20 15:38:37.544 250022 DEBUG nova.virt.libvirt.driver [None req-e7d1e352-42f7-45b0-8ce5-86aef89b3d66 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: b70d8470-0a60-4adc-89b5-a4e8763fd81e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 15:38:37 compute-0 nova_compute[250018]: 2026-01-20 15:38:37.544 250022 INFO nova.virt.libvirt.driver [None req-e7d1e352-42f7-45b0-8ce5-86aef89b3d66 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: b70d8470-0a60-4adc-89b5-a4e8763fd81e] Creating image(s)
Jan 20 15:38:37 compute-0 nova_compute[250018]: 2026-01-20 15:38:37.568 250022 DEBUG nova.storage.rbd_utils [None req-e7d1e352-42f7-45b0-8ce5-86aef89b3d66 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] rbd image b70d8470-0a60-4adc-89b5-a4e8763fd81e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:38:37 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/1147255199' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:38:37 compute-0 nova_compute[250018]: 2026-01-20 15:38:37.599 250022 DEBUG nova.storage.rbd_utils [None req-e7d1e352-42f7-45b0-8ce5-86aef89b3d66 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] rbd image b70d8470-0a60-4adc-89b5-a4e8763fd81e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:38:37 compute-0 nova_compute[250018]: 2026-01-20 15:38:37.624 250022 DEBUG nova.storage.rbd_utils [None req-e7d1e352-42f7-45b0-8ce5-86aef89b3d66 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] rbd image b70d8470-0a60-4adc-89b5-a4e8763fd81e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:38:37 compute-0 nova_compute[250018]: 2026-01-20 15:38:37.628 250022 DEBUG oslo_concurrency.processutils [None req-e7d1e352-42f7-45b0-8ce5-86aef89b3d66 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:38:37 compute-0 nova_compute[250018]: 2026-01-20 15:38:37.657 250022 DEBUG nova.policy [None req-e7d1e352-42f7-45b0-8ce5-86aef89b3d66 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5338aa65dc0e4326a66ce79053787f14', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3168f57421fb49bfb94b85daedd1fe7d', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 20 15:38:37 compute-0 nova_compute[250018]: 2026-01-20 15:38:37.690 250022 DEBUG oslo_concurrency.processutils [None req-e7d1e352-42f7-45b0-8ce5-86aef89b3d66 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:38:37 compute-0 nova_compute[250018]: 2026-01-20 15:38:37.691 250022 DEBUG oslo_concurrency.lockutils [None req-e7d1e352-42f7-45b0-8ce5-86aef89b3d66 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Acquiring lock "82d5c1918fd7c974214c7a48c1793a7a82560462" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:38:37 compute-0 nova_compute[250018]: 2026-01-20 15:38:37.692 250022 DEBUG oslo_concurrency.lockutils [None req-e7d1e352-42f7-45b0-8ce5-86aef89b3d66 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:38:37 compute-0 nova_compute[250018]: 2026-01-20 15:38:37.692 250022 DEBUG oslo_concurrency.lockutils [None req-e7d1e352-42f7-45b0-8ce5-86aef89b3d66 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:38:37 compute-0 nova_compute[250018]: 2026-01-20 15:38:37.722 250022 DEBUG nova.storage.rbd_utils [None req-e7d1e352-42f7-45b0-8ce5-86aef89b3d66 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] rbd image b70d8470-0a60-4adc-89b5-a4e8763fd81e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:38:37 compute-0 nova_compute[250018]: 2026-01-20 15:38:37.726 250022 DEBUG oslo_concurrency.processutils [None req-e7d1e352-42f7-45b0-8ce5-86aef89b3d66 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 b70d8470-0a60-4adc-89b5-a4e8763fd81e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:38:37 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:38:37 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:38:37 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:38:37.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:38:38 compute-0 nova_compute[250018]: 2026-01-20 15:38:38.365 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:38:38 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:38:38 compute-0 nova_compute[250018]: 2026-01-20 15:38:38.579 250022 DEBUG oslo_concurrency.processutils [None req-e7d1e352-42f7-45b0-8ce5-86aef89b3d66 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 b70d8470-0a60-4adc-89b5-a4e8763fd81e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.853s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:38:38 compute-0 ceph-mon[74360]: pgmap v3493: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:38:38 compute-0 nova_compute[250018]: 2026-01-20 15:38:38.640 250022 DEBUG nova.storage.rbd_utils [None req-e7d1e352-42f7-45b0-8ce5-86aef89b3d66 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] resizing rbd image b70d8470-0a60-4adc-89b5-a4e8763fd81e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 20 15:38:38 compute-0 nova_compute[250018]: 2026-01-20 15:38:38.729 250022 DEBUG nova.objects.instance [None req-e7d1e352-42f7-45b0-8ce5-86aef89b3d66 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lazy-loading 'migration_context' on Instance uuid b70d8470-0a60-4adc-89b5-a4e8763fd81e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:38:38 compute-0 nova_compute[250018]: 2026-01-20 15:38:38.763 250022 DEBUG nova.virt.libvirt.driver [None req-e7d1e352-42f7-45b0-8ce5-86aef89b3d66 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: b70d8470-0a60-4adc-89b5-a4e8763fd81e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 15:38:38 compute-0 nova_compute[250018]: 2026-01-20 15:38:38.763 250022 DEBUG nova.virt.libvirt.driver [None req-e7d1e352-42f7-45b0-8ce5-86aef89b3d66 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: b70d8470-0a60-4adc-89b5-a4e8763fd81e] Ensure instance console log exists: /var/lib/nova/instances/b70d8470-0a60-4adc-89b5-a4e8763fd81e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 15:38:38 compute-0 nova_compute[250018]: 2026-01-20 15:38:38.763 250022 DEBUG oslo_concurrency.lockutils [None req-e7d1e352-42f7-45b0-8ce5-86aef89b3d66 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:38:38 compute-0 nova_compute[250018]: 2026-01-20 15:38:38.764 250022 DEBUG oslo_concurrency.lockutils [None req-e7d1e352-42f7-45b0-8ce5-86aef89b3d66 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:38:38 compute-0 nova_compute[250018]: 2026-01-20 15:38:38.764 250022 DEBUG oslo_concurrency.lockutils [None req-e7d1e352-42f7-45b0-8ce5-86aef89b3d66 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:38:39 compute-0 nova_compute[250018]: 2026-01-20 15:38:39.067 250022 DEBUG nova.network.neutron [None req-e7d1e352-42f7-45b0-8ce5-86aef89b3d66 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: b70d8470-0a60-4adc-89b5-a4e8763fd81e] Successfully created port: 74f670dc-f485-4a98-8c72-7f592f8939ab _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 20 15:38:39 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:38:39 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:38:39 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:38:39.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:38:39 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3494: 321 pgs: 321 active+clean; 130 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 9.6 KiB/s rd, 280 KiB/s wr, 11 op/s
Jan 20 15:38:39 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:38:39 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:38:39 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:38:39.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:38:40 compute-0 nova_compute[250018]: 2026-01-20 15:38:40.403 250022 DEBUG nova.network.neutron [None req-e7d1e352-42f7-45b0-8ce5-86aef89b3d66 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: b70d8470-0a60-4adc-89b5-a4e8763fd81e] Successfully updated port: 74f670dc-f485-4a98-8c72-7f592f8939ab _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 20 15:38:40 compute-0 nova_compute[250018]: 2026-01-20 15:38:40.424 250022 DEBUG oslo_concurrency.lockutils [None req-e7d1e352-42f7-45b0-8ce5-86aef89b3d66 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Acquiring lock "refresh_cache-b70d8470-0a60-4adc-89b5-a4e8763fd81e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:38:40 compute-0 nova_compute[250018]: 2026-01-20 15:38:40.424 250022 DEBUG oslo_concurrency.lockutils [None req-e7d1e352-42f7-45b0-8ce5-86aef89b3d66 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Acquired lock "refresh_cache-b70d8470-0a60-4adc-89b5-a4e8763fd81e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:38:40 compute-0 nova_compute[250018]: 2026-01-20 15:38:40.425 250022 DEBUG nova.network.neutron [None req-e7d1e352-42f7-45b0-8ce5-86aef89b3d66 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: b70d8470-0a60-4adc-89b5-a4e8763fd81e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 15:38:40 compute-0 nova_compute[250018]: 2026-01-20 15:38:40.563 250022 DEBUG nova.compute.manager [req-1e3ced1e-7614-4a0e-9bca-0ac1f2945f88 req-2915d84e-bc0d-4fb3-aa9b-c47ada8661e0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b70d8470-0a60-4adc-89b5-a4e8763fd81e] Received event network-changed-74f670dc-f485-4a98-8c72-7f592f8939ab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:38:40 compute-0 nova_compute[250018]: 2026-01-20 15:38:40.564 250022 DEBUG nova.compute.manager [req-1e3ced1e-7614-4a0e-9bca-0ac1f2945f88 req-2915d84e-bc0d-4fb3-aa9b-c47ada8661e0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b70d8470-0a60-4adc-89b5-a4e8763fd81e] Refreshing instance network info cache due to event network-changed-74f670dc-f485-4a98-8c72-7f592f8939ab. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 15:38:40 compute-0 nova_compute[250018]: 2026-01-20 15:38:40.564 250022 DEBUG oslo_concurrency.lockutils [req-1e3ced1e-7614-4a0e-9bca-0ac1f2945f88 req-2915d84e-bc0d-4fb3-aa9b-c47ada8661e0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-b70d8470-0a60-4adc-89b5-a4e8763fd81e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:38:40 compute-0 ceph-mon[74360]: pgmap v3494: 321 pgs: 321 active+clean; 130 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 9.6 KiB/s rd, 280 KiB/s wr, 11 op/s
Jan 20 15:38:40 compute-0 nova_compute[250018]: 2026-01-20 15:38:40.886 250022 DEBUG nova.network.neutron [None req-e7d1e352-42f7-45b0-8ce5-86aef89b3d66 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: b70d8470-0a60-4adc-89b5-a4e8763fd81e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 15:38:41 compute-0 nova_compute[250018]: 2026-01-20 15:38:41.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:38:41 compute-0 nova_compute[250018]: 2026-01-20 15:38:41.051 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 15:38:41 compute-0 nova_compute[250018]: 2026-01-20 15:38:41.051 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 15:38:41 compute-0 nova_compute[250018]: 2026-01-20 15:38:41.076 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] [instance: b70d8470-0a60-4adc-89b5-a4e8763fd81e] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 20 15:38:41 compute-0 nova_compute[250018]: 2026-01-20 15:38:41.077 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 20 15:38:41 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:38:41 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:38:41 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:38:41.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:38:41 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3495: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:38:41 compute-0 podman[394060]: 2026-01-20 15:38:41.45815666 +0000 UTC m=+0.053252132 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Jan 20 15:38:41 compute-0 podman[394059]: 2026-01-20 15:38:41.482936636 +0000 UTC m=+0.078033198 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 15:38:41 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:38:41 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:38:41 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:38:41.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:38:42 compute-0 nova_compute[250018]: 2026-01-20 15:38:42.097 250022 DEBUG nova.network.neutron [None req-e7d1e352-42f7-45b0-8ce5-86aef89b3d66 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: b70d8470-0a60-4adc-89b5-a4e8763fd81e] Updating instance_info_cache with network_info: [{"id": "74f670dc-f485-4a98-8c72-7f592f8939ab", "address": "fa:16:3e:1f:2b:07", "network": {"id": "03770aaa-187c-4ee9-a705-bc9495cdf334", "bridge": "br-int", "label": "tempest-network-smoke--1124670091", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap74f670dc-f4", "ovs_interfaceid": "74f670dc-f485-4a98-8c72-7f592f8939ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:38:42 compute-0 nova_compute[250018]: 2026-01-20 15:38:42.139 250022 DEBUG oslo_concurrency.lockutils [None req-e7d1e352-42f7-45b0-8ce5-86aef89b3d66 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Releasing lock "refresh_cache-b70d8470-0a60-4adc-89b5-a4e8763fd81e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:38:42 compute-0 nova_compute[250018]: 2026-01-20 15:38:42.139 250022 DEBUG nova.compute.manager [None req-e7d1e352-42f7-45b0-8ce5-86aef89b3d66 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: b70d8470-0a60-4adc-89b5-a4e8763fd81e] Instance network_info: |[{"id": "74f670dc-f485-4a98-8c72-7f592f8939ab", "address": "fa:16:3e:1f:2b:07", "network": {"id": "03770aaa-187c-4ee9-a705-bc9495cdf334", "bridge": "br-int", "label": "tempest-network-smoke--1124670091", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap74f670dc-f4", "ovs_interfaceid": "74f670dc-f485-4a98-8c72-7f592f8939ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 20 15:38:42 compute-0 nova_compute[250018]: 2026-01-20 15:38:42.140 250022 DEBUG oslo_concurrency.lockutils [req-1e3ced1e-7614-4a0e-9bca-0ac1f2945f88 req-2915d84e-bc0d-4fb3-aa9b-c47ada8661e0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-b70d8470-0a60-4adc-89b5-a4e8763fd81e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:38:42 compute-0 nova_compute[250018]: 2026-01-20 15:38:42.140 250022 DEBUG nova.network.neutron [req-1e3ced1e-7614-4a0e-9bca-0ac1f2945f88 req-2915d84e-bc0d-4fb3-aa9b-c47ada8661e0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b70d8470-0a60-4adc-89b5-a4e8763fd81e] Refreshing network info cache for port 74f670dc-f485-4a98-8c72-7f592f8939ab _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 15:38:42 compute-0 nova_compute[250018]: 2026-01-20 15:38:42.143 250022 DEBUG nova.virt.libvirt.driver [None req-e7d1e352-42f7-45b0-8ce5-86aef89b3d66 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: b70d8470-0a60-4adc-89b5-a4e8763fd81e] Start _get_guest_xml network_info=[{"id": "74f670dc-f485-4a98-8c72-7f592f8939ab", "address": "fa:16:3e:1f:2b:07", "network": {"id": "03770aaa-187c-4ee9-a705-bc9495cdf334", "bridge": "br-int", "label": "tempest-network-smoke--1124670091", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap74f670dc-f4", "ovs_interfaceid": "74f670dc-f485-4a98-8c72-7f592f8939ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encrypted': False, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'guest_format': None, 'size': 0, 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 15:38:42 compute-0 nova_compute[250018]: 2026-01-20 15:38:42.147 250022 WARNING nova.virt.libvirt.driver [None req-e7d1e352-42f7-45b0-8ce5-86aef89b3d66 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:38:42 compute-0 nova_compute[250018]: 2026-01-20 15:38:42.152 250022 DEBUG nova.virt.libvirt.host [None req-e7d1e352-42f7-45b0-8ce5-86aef89b3d66 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 15:38:42 compute-0 nova_compute[250018]: 2026-01-20 15:38:42.153 250022 DEBUG nova.virt.libvirt.host [None req-e7d1e352-42f7-45b0-8ce5-86aef89b3d66 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 15:38:42 compute-0 nova_compute[250018]: 2026-01-20 15:38:42.155 250022 DEBUG nova.virt.libvirt.host [None req-e7d1e352-42f7-45b0-8ce5-86aef89b3d66 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 15:38:42 compute-0 nova_compute[250018]: 2026-01-20 15:38:42.156 250022 DEBUG nova.virt.libvirt.host [None req-e7d1e352-42f7-45b0-8ce5-86aef89b3d66 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 15:38:42 compute-0 nova_compute[250018]: 2026-01-20 15:38:42.157 250022 DEBUG nova.virt.libvirt.driver [None req-e7d1e352-42f7-45b0-8ce5-86aef89b3d66 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 15:38:42 compute-0 nova_compute[250018]: 2026-01-20 15:38:42.157 250022 DEBUG nova.virt.hardware [None req-e7d1e352-42f7-45b0-8ce5-86aef89b3d66 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 15:38:42 compute-0 nova_compute[250018]: 2026-01-20 15:38:42.157 250022 DEBUG nova.virt.hardware [None req-e7d1e352-42f7-45b0-8ce5-86aef89b3d66 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 15:38:42 compute-0 nova_compute[250018]: 2026-01-20 15:38:42.158 250022 DEBUG nova.virt.hardware [None req-e7d1e352-42f7-45b0-8ce5-86aef89b3d66 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 15:38:42 compute-0 nova_compute[250018]: 2026-01-20 15:38:42.158 250022 DEBUG nova.virt.hardware [None req-e7d1e352-42f7-45b0-8ce5-86aef89b3d66 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 15:38:42 compute-0 nova_compute[250018]: 2026-01-20 15:38:42.158 250022 DEBUG nova.virt.hardware [None req-e7d1e352-42f7-45b0-8ce5-86aef89b3d66 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 15:38:42 compute-0 nova_compute[250018]: 2026-01-20 15:38:42.158 250022 DEBUG nova.virt.hardware [None req-e7d1e352-42f7-45b0-8ce5-86aef89b3d66 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 15:38:42 compute-0 nova_compute[250018]: 2026-01-20 15:38:42.158 250022 DEBUG nova.virt.hardware [None req-e7d1e352-42f7-45b0-8ce5-86aef89b3d66 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 15:38:42 compute-0 nova_compute[250018]: 2026-01-20 15:38:42.159 250022 DEBUG nova.virt.hardware [None req-e7d1e352-42f7-45b0-8ce5-86aef89b3d66 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 15:38:42 compute-0 nova_compute[250018]: 2026-01-20 15:38:42.159 250022 DEBUG nova.virt.hardware [None req-e7d1e352-42f7-45b0-8ce5-86aef89b3d66 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 15:38:42 compute-0 nova_compute[250018]: 2026-01-20 15:38:42.159 250022 DEBUG nova.virt.hardware [None req-e7d1e352-42f7-45b0-8ce5-86aef89b3d66 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 15:38:42 compute-0 nova_compute[250018]: 2026-01-20 15:38:42.159 250022 DEBUG nova.virt.hardware [None req-e7d1e352-42f7-45b0-8ce5-86aef89b3d66 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 15:38:42 compute-0 nova_compute[250018]: 2026-01-20 15:38:42.162 250022 DEBUG oslo_concurrency.processutils [None req-e7d1e352-42f7-45b0-8ce5-86aef89b3d66 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:38:42 compute-0 nova_compute[250018]: 2026-01-20 15:38:42.189 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:38:42 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 15:38:42 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2392674960' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:38:42 compute-0 nova_compute[250018]: 2026-01-20 15:38:42.636 250022 DEBUG oslo_concurrency.processutils [None req-e7d1e352-42f7-45b0-8ce5-86aef89b3d66 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:38:42 compute-0 nova_compute[250018]: 2026-01-20 15:38:42.665 250022 DEBUG nova.storage.rbd_utils [None req-e7d1e352-42f7-45b0-8ce5-86aef89b3d66 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] rbd image b70d8470-0a60-4adc-89b5-a4e8763fd81e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:38:42 compute-0 nova_compute[250018]: 2026-01-20 15:38:42.669 250022 DEBUG oslo_concurrency.processutils [None req-e7d1e352-42f7-45b0-8ce5-86aef89b3d66 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:38:42 compute-0 ceph-mon[74360]: pgmap v3495: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:38:43 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 15:38:43 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1024218012' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:38:43 compute-0 nova_compute[250018]: 2026-01-20 15:38:43.095 250022 DEBUG oslo_concurrency.processutils [None req-e7d1e352-42f7-45b0-8ce5-86aef89b3d66 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:38:43 compute-0 nova_compute[250018]: 2026-01-20 15:38:43.096 250022 DEBUG nova.virt.libvirt.vif [None req-e7d1e352-42f7-45b0-8ce5-86aef89b3d66 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T15:38:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1905072111',display_name='tempest-TestNetworkBasicOps-server-1905072111',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1905072111',id=215,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDK4n8EFtuL5/XqWY5d/FasGM76mbqH31m+kP4HpBk72Alanw2Gc2AL882g4qQJ4cI2EqJTeY7zEV2m5DJociXdBuQPc6ENpKh0PFPKtX8CK3OFGpGFOPhuqLgfVsiPCmA==',key_name='tempest-TestNetworkBasicOps-1736252842',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3168f57421fb49bfb94b85daedd1fe7d',ramdisk_id='',reservation_id='r-46xtwcxe',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-807695970',owner_user_name='tempest-TestNetworkBasicOps-807695970-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T15:38:37Z,user_data=None,user_id='5338aa65dc0e4326a66ce79053787f14',uuid=b70d8470-0a60-4adc-89b5-a4e8763fd81e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "74f670dc-f485-4a98-8c72-7f592f8939ab", "address": "fa:16:3e:1f:2b:07", "network": {"id": "03770aaa-187c-4ee9-a705-bc9495cdf334", "bridge": "br-int", "label": "tempest-network-smoke--1124670091", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap74f670dc-f4", "ovs_interfaceid": "74f670dc-f485-4a98-8c72-7f592f8939ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 15:38:43 compute-0 nova_compute[250018]: 2026-01-20 15:38:43.097 250022 DEBUG nova.network.os_vif_util [None req-e7d1e352-42f7-45b0-8ce5-86aef89b3d66 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Converting VIF {"id": "74f670dc-f485-4a98-8c72-7f592f8939ab", "address": "fa:16:3e:1f:2b:07", "network": {"id": "03770aaa-187c-4ee9-a705-bc9495cdf334", "bridge": "br-int", "label": "tempest-network-smoke--1124670091", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap74f670dc-f4", "ovs_interfaceid": "74f670dc-f485-4a98-8c72-7f592f8939ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:38:43 compute-0 nova_compute[250018]: 2026-01-20 15:38:43.098 250022 DEBUG nova.network.os_vif_util [None req-e7d1e352-42f7-45b0-8ce5-86aef89b3d66 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1f:2b:07,bridge_name='br-int',has_traffic_filtering=True,id=74f670dc-f485-4a98-8c72-7f592f8939ab,network=Network(03770aaa-187c-4ee9-a705-bc9495cdf334),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap74f670dc-f4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:38:43 compute-0 nova_compute[250018]: 2026-01-20 15:38:43.099 250022 DEBUG nova.objects.instance [None req-e7d1e352-42f7-45b0-8ce5-86aef89b3d66 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lazy-loading 'pci_devices' on Instance uuid b70d8470-0a60-4adc-89b5-a4e8763fd81e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:38:43 compute-0 nova_compute[250018]: 2026-01-20 15:38:43.115 250022 DEBUG nova.virt.libvirt.driver [None req-e7d1e352-42f7-45b0-8ce5-86aef89b3d66 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: b70d8470-0a60-4adc-89b5-a4e8763fd81e] End _get_guest_xml xml=<domain type="kvm">
Jan 20 15:38:43 compute-0 nova_compute[250018]:   <uuid>b70d8470-0a60-4adc-89b5-a4e8763fd81e</uuid>
Jan 20 15:38:43 compute-0 nova_compute[250018]:   <name>instance-000000d7</name>
Jan 20 15:38:43 compute-0 nova_compute[250018]:   <memory>131072</memory>
Jan 20 15:38:43 compute-0 nova_compute[250018]:   <vcpu>1</vcpu>
Jan 20 15:38:43 compute-0 nova_compute[250018]:   <metadata>
Jan 20 15:38:43 compute-0 nova_compute[250018]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 15:38:43 compute-0 nova_compute[250018]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 15:38:43 compute-0 nova_compute[250018]:       <nova:name>tempest-TestNetworkBasicOps-server-1905072111</nova:name>
Jan 20 15:38:43 compute-0 nova_compute[250018]:       <nova:creationTime>2026-01-20 15:38:42</nova:creationTime>
Jan 20 15:38:43 compute-0 nova_compute[250018]:       <nova:flavor name="m1.nano">
Jan 20 15:38:43 compute-0 nova_compute[250018]:         <nova:memory>128</nova:memory>
Jan 20 15:38:43 compute-0 nova_compute[250018]:         <nova:disk>1</nova:disk>
Jan 20 15:38:43 compute-0 nova_compute[250018]:         <nova:swap>0</nova:swap>
Jan 20 15:38:43 compute-0 nova_compute[250018]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 15:38:43 compute-0 nova_compute[250018]:         <nova:vcpus>1</nova:vcpus>
Jan 20 15:38:43 compute-0 nova_compute[250018]:       </nova:flavor>
Jan 20 15:38:43 compute-0 nova_compute[250018]:       <nova:owner>
Jan 20 15:38:43 compute-0 nova_compute[250018]:         <nova:user uuid="5338aa65dc0e4326a66ce79053787f14">tempest-TestNetworkBasicOps-807695970-project-member</nova:user>
Jan 20 15:38:43 compute-0 nova_compute[250018]:         <nova:project uuid="3168f57421fb49bfb94b85daedd1fe7d">tempest-TestNetworkBasicOps-807695970</nova:project>
Jan 20 15:38:43 compute-0 nova_compute[250018]:       </nova:owner>
Jan 20 15:38:43 compute-0 nova_compute[250018]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 15:38:43 compute-0 nova_compute[250018]:       <nova:ports>
Jan 20 15:38:43 compute-0 nova_compute[250018]:         <nova:port uuid="74f670dc-f485-4a98-8c72-7f592f8939ab">
Jan 20 15:38:43 compute-0 nova_compute[250018]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Jan 20 15:38:43 compute-0 nova_compute[250018]:         </nova:port>
Jan 20 15:38:43 compute-0 nova_compute[250018]:       </nova:ports>
Jan 20 15:38:43 compute-0 nova_compute[250018]:     </nova:instance>
Jan 20 15:38:43 compute-0 nova_compute[250018]:   </metadata>
Jan 20 15:38:43 compute-0 nova_compute[250018]:   <sysinfo type="smbios">
Jan 20 15:38:43 compute-0 nova_compute[250018]:     <system>
Jan 20 15:38:43 compute-0 nova_compute[250018]:       <entry name="manufacturer">RDO</entry>
Jan 20 15:38:43 compute-0 nova_compute[250018]:       <entry name="product">OpenStack Compute</entry>
Jan 20 15:38:43 compute-0 nova_compute[250018]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 15:38:43 compute-0 nova_compute[250018]:       <entry name="serial">b70d8470-0a60-4adc-89b5-a4e8763fd81e</entry>
Jan 20 15:38:43 compute-0 nova_compute[250018]:       <entry name="uuid">b70d8470-0a60-4adc-89b5-a4e8763fd81e</entry>
Jan 20 15:38:43 compute-0 nova_compute[250018]:       <entry name="family">Virtual Machine</entry>
Jan 20 15:38:43 compute-0 nova_compute[250018]:     </system>
Jan 20 15:38:43 compute-0 nova_compute[250018]:   </sysinfo>
Jan 20 15:38:43 compute-0 nova_compute[250018]:   <os>
Jan 20 15:38:43 compute-0 nova_compute[250018]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 15:38:43 compute-0 nova_compute[250018]:     <boot dev="hd"/>
Jan 20 15:38:43 compute-0 nova_compute[250018]:     <smbios mode="sysinfo"/>
Jan 20 15:38:43 compute-0 nova_compute[250018]:   </os>
Jan 20 15:38:43 compute-0 nova_compute[250018]:   <features>
Jan 20 15:38:43 compute-0 nova_compute[250018]:     <acpi/>
Jan 20 15:38:43 compute-0 nova_compute[250018]:     <apic/>
Jan 20 15:38:43 compute-0 nova_compute[250018]:     <vmcoreinfo/>
Jan 20 15:38:43 compute-0 nova_compute[250018]:   </features>
Jan 20 15:38:43 compute-0 nova_compute[250018]:   <clock offset="utc">
Jan 20 15:38:43 compute-0 nova_compute[250018]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 15:38:43 compute-0 nova_compute[250018]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 15:38:43 compute-0 nova_compute[250018]:     <timer name="hpet" present="no"/>
Jan 20 15:38:43 compute-0 nova_compute[250018]:   </clock>
Jan 20 15:38:43 compute-0 nova_compute[250018]:   <cpu mode="custom" match="exact">
Jan 20 15:38:43 compute-0 nova_compute[250018]:     <model>Nehalem</model>
Jan 20 15:38:43 compute-0 nova_compute[250018]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 15:38:43 compute-0 nova_compute[250018]:   </cpu>
Jan 20 15:38:43 compute-0 nova_compute[250018]:   <devices>
Jan 20 15:38:43 compute-0 nova_compute[250018]:     <disk type="network" device="disk">
Jan 20 15:38:43 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 15:38:43 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/b70d8470-0a60-4adc-89b5-a4e8763fd81e_disk">
Jan 20 15:38:43 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 15:38:43 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 15:38:43 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 15:38:43 compute-0 nova_compute[250018]:       </source>
Jan 20 15:38:43 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 15:38:43 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 15:38:43 compute-0 nova_compute[250018]:       </auth>
Jan 20 15:38:43 compute-0 nova_compute[250018]:       <target dev="vda" bus="virtio"/>
Jan 20 15:38:43 compute-0 nova_compute[250018]:     </disk>
Jan 20 15:38:43 compute-0 nova_compute[250018]:     <disk type="network" device="cdrom">
Jan 20 15:38:43 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 15:38:43 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/b70d8470-0a60-4adc-89b5-a4e8763fd81e_disk.config">
Jan 20 15:38:43 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 15:38:43 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 15:38:43 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 15:38:43 compute-0 nova_compute[250018]:       </source>
Jan 20 15:38:43 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 15:38:43 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 15:38:43 compute-0 nova_compute[250018]:       </auth>
Jan 20 15:38:43 compute-0 nova_compute[250018]:       <target dev="sda" bus="sata"/>
Jan 20 15:38:43 compute-0 nova_compute[250018]:     </disk>
Jan 20 15:38:43 compute-0 nova_compute[250018]:     <interface type="ethernet">
Jan 20 15:38:43 compute-0 nova_compute[250018]:       <mac address="fa:16:3e:1f:2b:07"/>
Jan 20 15:38:43 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 15:38:43 compute-0 nova_compute[250018]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 15:38:43 compute-0 nova_compute[250018]:       <mtu size="1442"/>
Jan 20 15:38:43 compute-0 nova_compute[250018]:       <target dev="tap74f670dc-f4"/>
Jan 20 15:38:43 compute-0 nova_compute[250018]:     </interface>
Jan 20 15:38:43 compute-0 nova_compute[250018]:     <serial type="pty">
Jan 20 15:38:43 compute-0 nova_compute[250018]:       <log file="/var/lib/nova/instances/b70d8470-0a60-4adc-89b5-a4e8763fd81e/console.log" append="off"/>
Jan 20 15:38:43 compute-0 nova_compute[250018]:     </serial>
Jan 20 15:38:43 compute-0 nova_compute[250018]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 15:38:43 compute-0 nova_compute[250018]:     <video>
Jan 20 15:38:43 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 15:38:43 compute-0 nova_compute[250018]:     </video>
Jan 20 15:38:43 compute-0 nova_compute[250018]:     <input type="tablet" bus="usb"/>
Jan 20 15:38:43 compute-0 nova_compute[250018]:     <rng model="virtio">
Jan 20 15:38:43 compute-0 nova_compute[250018]:       <backend model="random">/dev/urandom</backend>
Jan 20 15:38:43 compute-0 nova_compute[250018]:     </rng>
Jan 20 15:38:43 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root"/>
Jan 20 15:38:43 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:38:43 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:38:43 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:38:43 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:38:43 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:38:43 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:38:43 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:38:43 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:38:43 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:38:43 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:38:43 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:38:43 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:38:43 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:38:43 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:38:43 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:38:43 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:38:43 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:38:43 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:38:43 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:38:43 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:38:43 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:38:43 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:38:43 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:38:43 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:38:43 compute-0 nova_compute[250018]:     <controller type="usb" index="0"/>
Jan 20 15:38:43 compute-0 nova_compute[250018]:     <memballoon model="virtio">
Jan 20 15:38:43 compute-0 nova_compute[250018]:       <stats period="10"/>
Jan 20 15:38:43 compute-0 nova_compute[250018]:     </memballoon>
Jan 20 15:38:43 compute-0 nova_compute[250018]:   </devices>
Jan 20 15:38:43 compute-0 nova_compute[250018]: </domain>
Jan 20 15:38:43 compute-0 nova_compute[250018]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 15:38:43 compute-0 nova_compute[250018]: 2026-01-20 15:38:43.116 250022 DEBUG nova.compute.manager [None req-e7d1e352-42f7-45b0-8ce5-86aef89b3d66 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: b70d8470-0a60-4adc-89b5-a4e8763fd81e] Preparing to wait for external event network-vif-plugged-74f670dc-f485-4a98-8c72-7f592f8939ab prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 20 15:38:43 compute-0 nova_compute[250018]: 2026-01-20 15:38:43.117 250022 DEBUG oslo_concurrency.lockutils [None req-e7d1e352-42f7-45b0-8ce5-86aef89b3d66 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Acquiring lock "b70d8470-0a60-4adc-89b5-a4e8763fd81e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:38:43 compute-0 nova_compute[250018]: 2026-01-20 15:38:43.117 250022 DEBUG oslo_concurrency.lockutils [None req-e7d1e352-42f7-45b0-8ce5-86aef89b3d66 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "b70d8470-0a60-4adc-89b5-a4e8763fd81e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:38:43 compute-0 nova_compute[250018]: 2026-01-20 15:38:43.118 250022 DEBUG oslo_concurrency.lockutils [None req-e7d1e352-42f7-45b0-8ce5-86aef89b3d66 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "b70d8470-0a60-4adc-89b5-a4e8763fd81e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:38:43 compute-0 nova_compute[250018]: 2026-01-20 15:38:43.118 250022 DEBUG nova.virt.libvirt.vif [None req-e7d1e352-42f7-45b0-8ce5-86aef89b3d66 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T15:38:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1905072111',display_name='tempest-TestNetworkBasicOps-server-1905072111',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1905072111',id=215,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDK4n8EFtuL5/XqWY5d/FasGM76mbqH31m+kP4HpBk72Alanw2Gc2AL882g4qQJ4cI2EqJTeY7zEV2m5DJociXdBuQPc6ENpKh0PFPKtX8CK3OFGpGFOPhuqLgfVsiPCmA==',key_name='tempest-TestNetworkBasicOps-1736252842',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3168f57421fb49bfb94b85daedd1fe7d',ramdisk_id='',reservation_id='r-46xtwcxe',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-807695970',owner_user_name='tempest-TestNetworkBasicOps-807695970-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T15:38:37Z,user_data=None,user_id='5338aa65dc0e4326a66ce79053787f14',uuid=b70d8470-0a60-4adc-89b5-a4e8763fd81e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "74f670dc-f485-4a98-8c72-7f592f8939ab", "address": "fa:16:3e:1f:2b:07", "network": {"id": "03770aaa-187c-4ee9-a705-bc9495cdf334", "bridge": "br-int", "label": "tempest-network-smoke--1124670091", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap74f670dc-f4", "ovs_interfaceid": "74f670dc-f485-4a98-8c72-7f592f8939ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 15:38:43 compute-0 nova_compute[250018]: 2026-01-20 15:38:43.119 250022 DEBUG nova.network.os_vif_util [None req-e7d1e352-42f7-45b0-8ce5-86aef89b3d66 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Converting VIF {"id": "74f670dc-f485-4a98-8c72-7f592f8939ab", "address": "fa:16:3e:1f:2b:07", "network": {"id": "03770aaa-187c-4ee9-a705-bc9495cdf334", "bridge": "br-int", "label": "tempest-network-smoke--1124670091", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap74f670dc-f4", "ovs_interfaceid": "74f670dc-f485-4a98-8c72-7f592f8939ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:38:43 compute-0 nova_compute[250018]: 2026-01-20 15:38:43.119 250022 DEBUG nova.network.os_vif_util [None req-e7d1e352-42f7-45b0-8ce5-86aef89b3d66 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1f:2b:07,bridge_name='br-int',has_traffic_filtering=True,id=74f670dc-f485-4a98-8c72-7f592f8939ab,network=Network(03770aaa-187c-4ee9-a705-bc9495cdf334),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap74f670dc-f4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:38:43 compute-0 nova_compute[250018]: 2026-01-20 15:38:43.120 250022 DEBUG os_vif [None req-e7d1e352-42f7-45b0-8ce5-86aef89b3d66 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1f:2b:07,bridge_name='br-int',has_traffic_filtering=True,id=74f670dc-f485-4a98-8c72-7f592f8939ab,network=Network(03770aaa-187c-4ee9-a705-bc9495cdf334),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap74f670dc-f4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 15:38:43 compute-0 nova_compute[250018]: 2026-01-20 15:38:43.122 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:38:43 compute-0 nova_compute[250018]: 2026-01-20 15:38:43.122 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:38:43 compute-0 nova_compute[250018]: 2026-01-20 15:38:43.123 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:38:43 compute-0 nova_compute[250018]: 2026-01-20 15:38:43.127 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:38:43 compute-0 nova_compute[250018]: 2026-01-20 15:38:43.127 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap74f670dc-f4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:38:43 compute-0 nova_compute[250018]: 2026-01-20 15:38:43.128 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap74f670dc-f4, col_values=(('external_ids', {'iface-id': '74f670dc-f485-4a98-8c72-7f592f8939ab', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1f:2b:07', 'vm-uuid': 'b70d8470-0a60-4adc-89b5-a4e8763fd81e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:38:43 compute-0 nova_compute[250018]: 2026-01-20 15:38:43.129 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:38:43 compute-0 NetworkManager[48960]: <info>  [1768923523.1304] manager: (tap74f670dc-f4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/379)
Jan 20 15:38:43 compute-0 nova_compute[250018]: 2026-01-20 15:38:43.132 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 15:38:43 compute-0 nova_compute[250018]: 2026-01-20 15:38:43.135 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:38:43 compute-0 nova_compute[250018]: 2026-01-20 15:38:43.136 250022 INFO os_vif [None req-e7d1e352-42f7-45b0-8ce5-86aef89b3d66 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1f:2b:07,bridge_name='br-int',has_traffic_filtering=True,id=74f670dc-f485-4a98-8c72-7f592f8939ab,network=Network(03770aaa-187c-4ee9-a705-bc9495cdf334),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap74f670dc-f4')
Jan 20 15:38:43 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:38:43 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:38:43 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:38:43.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:38:43 compute-0 nova_compute[250018]: 2026-01-20 15:38:43.210 250022 DEBUG nova.virt.libvirt.driver [None req-e7d1e352-42f7-45b0-8ce5-86aef89b3d66 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:38:43 compute-0 nova_compute[250018]: 2026-01-20 15:38:43.210 250022 DEBUG nova.virt.libvirt.driver [None req-e7d1e352-42f7-45b0-8ce5-86aef89b3d66 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:38:43 compute-0 nova_compute[250018]: 2026-01-20 15:38:43.211 250022 DEBUG nova.virt.libvirt.driver [None req-e7d1e352-42f7-45b0-8ce5-86aef89b3d66 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] No VIF found with MAC fa:16:3e:1f:2b:07, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 15:38:43 compute-0 nova_compute[250018]: 2026-01-20 15:38:43.211 250022 INFO nova.virt.libvirt.driver [None req-e7d1e352-42f7-45b0-8ce5-86aef89b3d66 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: b70d8470-0a60-4adc-89b5-a4e8763fd81e] Using config drive
Jan 20 15:38:43 compute-0 nova_compute[250018]: 2026-01-20 15:38:43.238 250022 DEBUG nova.storage.rbd_utils [None req-e7d1e352-42f7-45b0-8ce5-86aef89b3d66 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] rbd image b70d8470-0a60-4adc-89b5-a4e8763fd81e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:38:43 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3496: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:38:43 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:38:43 compute-0 nova_compute[250018]: 2026-01-20 15:38:43.725 250022 INFO nova.virt.libvirt.driver [None req-e7d1e352-42f7-45b0-8ce5-86aef89b3d66 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: b70d8470-0a60-4adc-89b5-a4e8763fd81e] Creating config drive at /var/lib/nova/instances/b70d8470-0a60-4adc-89b5-a4e8763fd81e/disk.config
Jan 20 15:38:43 compute-0 nova_compute[250018]: 2026-01-20 15:38:43.729 250022 DEBUG oslo_concurrency.processutils [None req-e7d1e352-42f7-45b0-8ce5-86aef89b3d66 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b70d8470-0a60-4adc-89b5-a4e8763fd81e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpubq6fhxp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:38:43 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/2392674960' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:38:43 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/1024218012' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:38:43 compute-0 ceph-mon[74360]: pgmap v3496: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:38:43 compute-0 nova_compute[250018]: 2026-01-20 15:38:43.863 250022 DEBUG oslo_concurrency.processutils [None req-e7d1e352-42f7-45b0-8ce5-86aef89b3d66 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b70d8470-0a60-4adc-89b5-a4e8763fd81e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpubq6fhxp" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:38:43 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:38:43 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:38:43 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:38:43.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:38:43 compute-0 nova_compute[250018]: 2026-01-20 15:38:43.896 250022 DEBUG nova.storage.rbd_utils [None req-e7d1e352-42f7-45b0-8ce5-86aef89b3d66 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] rbd image b70d8470-0a60-4adc-89b5-a4e8763fd81e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:38:43 compute-0 nova_compute[250018]: 2026-01-20 15:38:43.900 250022 DEBUG oslo_concurrency.processutils [None req-e7d1e352-42f7-45b0-8ce5-86aef89b3d66 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b70d8470-0a60-4adc-89b5-a4e8763fd81e/disk.config b70d8470-0a60-4adc-89b5-a4e8763fd81e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:38:43 compute-0 nova_compute[250018]: 2026-01-20 15:38:43.948 250022 DEBUG nova.network.neutron [req-1e3ced1e-7614-4a0e-9bca-0ac1f2945f88 req-2915d84e-bc0d-4fb3-aa9b-c47ada8661e0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b70d8470-0a60-4adc-89b5-a4e8763fd81e] Updated VIF entry in instance network info cache for port 74f670dc-f485-4a98-8c72-7f592f8939ab. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 15:38:43 compute-0 nova_compute[250018]: 2026-01-20 15:38:43.949 250022 DEBUG nova.network.neutron [req-1e3ced1e-7614-4a0e-9bca-0ac1f2945f88 req-2915d84e-bc0d-4fb3-aa9b-c47ada8661e0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b70d8470-0a60-4adc-89b5-a4e8763fd81e] Updating instance_info_cache with network_info: [{"id": "74f670dc-f485-4a98-8c72-7f592f8939ab", "address": "fa:16:3e:1f:2b:07", "network": {"id": "03770aaa-187c-4ee9-a705-bc9495cdf334", "bridge": "br-int", "label": "tempest-network-smoke--1124670091", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap74f670dc-f4", "ovs_interfaceid": "74f670dc-f485-4a98-8c72-7f592f8939ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:38:43 compute-0 nova_compute[250018]: 2026-01-20 15:38:43.969 250022 DEBUG oslo_concurrency.lockutils [req-1e3ced1e-7614-4a0e-9bca-0ac1f2945f88 req-2915d84e-bc0d-4fb3-aa9b-c47ada8661e0 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-b70d8470-0a60-4adc-89b5-a4e8763fd81e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:38:44 compute-0 nova_compute[250018]: 2026-01-20 15:38:44.247 250022 DEBUG oslo_concurrency.processutils [None req-e7d1e352-42f7-45b0-8ce5-86aef89b3d66 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b70d8470-0a60-4adc-89b5-a4e8763fd81e/disk.config b70d8470-0a60-4adc-89b5-a4e8763fd81e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.346s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:38:44 compute-0 nova_compute[250018]: 2026-01-20 15:38:44.248 250022 INFO nova.virt.libvirt.driver [None req-e7d1e352-42f7-45b0-8ce5-86aef89b3d66 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: b70d8470-0a60-4adc-89b5-a4e8763fd81e] Deleting local config drive /var/lib/nova/instances/b70d8470-0a60-4adc-89b5-a4e8763fd81e/disk.config because it was imported into RBD.
Jan 20 15:38:44 compute-0 kernel: tap74f670dc-f4: entered promiscuous mode
Jan 20 15:38:44 compute-0 NetworkManager[48960]: <info>  [1768923524.3163] manager: (tap74f670dc-f4): new Tun device (/org/freedesktop/NetworkManager/Devices/380)
Jan 20 15:38:44 compute-0 ovn_controller[148666]: 2026-01-20T15:38:44Z|00784|binding|INFO|Claiming lport 74f670dc-f485-4a98-8c72-7f592f8939ab for this chassis.
Jan 20 15:38:44 compute-0 ovn_controller[148666]: 2026-01-20T15:38:44Z|00785|binding|INFO|74f670dc-f485-4a98-8c72-7f592f8939ab: Claiming fa:16:3e:1f:2b:07 10.100.0.5
Jan 20 15:38:44 compute-0 nova_compute[250018]: 2026-01-20 15:38:44.362 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:38:44 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:38:44.381 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1f:2b:07 10.100.0.5'], port_security=['fa:16:3e:1f:2b:07 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'b70d8470-0a60-4adc-89b5-a4e8763fd81e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-03770aaa-187c-4ee9-a705-bc9495cdf334', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3168f57421fb49bfb94b85daedd1fe7d', 'neutron:revision_number': '2', 'neutron:security_group_ids': '26fc97f4-6600-43f3-8149-cb555a406125', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2bf75492-0d72-4e62-8af5-30365215ac61, chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=74f670dc-f485-4a98-8c72-7f592f8939ab) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:38:44 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:38:44.382 160071 INFO neutron.agent.ovn.metadata.agent [-] Port 74f670dc-f485-4a98-8c72-7f592f8939ab in datapath 03770aaa-187c-4ee9-a705-bc9495cdf334 bound to our chassis
Jan 20 15:38:44 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:38:44.383 160071 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 03770aaa-187c-4ee9-a705-bc9495cdf334
Jan 20 15:38:44 compute-0 systemd-udevd[394242]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 15:38:44 compute-0 systemd-machined[216401]: New machine qemu-93-instance-000000d7.
Jan 20 15:38:44 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:38:44.397 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[b636891e-2a74-4955-bc2a-3dbfad6b863b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:38:44 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:38:44.399 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap03770aaa-11 in ovnmeta-03770aaa-187c-4ee9-a705-bc9495cdf334 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 15:38:44 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:38:44.400 257604 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap03770aaa-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 15:38:44 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:38:44.400 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[76e636eb-99e0-407d-ad38-7901fa21436c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:38:44 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:38:44.401 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[ae4e8072-0dbb-4715-8b76-0d813d7b84e5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:38:44 compute-0 NetworkManager[48960]: <info>  [1768923524.4069] device (tap74f670dc-f4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 15:38:44 compute-0 NetworkManager[48960]: <info>  [1768923524.4078] device (tap74f670dc-f4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 15:38:44 compute-0 systemd[1]: Started Virtual Machine qemu-93-instance-000000d7.
Jan 20 15:38:44 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:38:44.415 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[4c28052d-5da5-4849-b1f7-43cf161c2dd5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:38:44 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:38:44.439 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[953bc350-9347-4eee-83f0-df8f966454a0]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:38:44 compute-0 nova_compute[250018]: 2026-01-20 15:38:44.443 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:38:44 compute-0 ovn_controller[148666]: 2026-01-20T15:38:44Z|00786|binding|INFO|Setting lport 74f670dc-f485-4a98-8c72-7f592f8939ab ovn-installed in OVS
Jan 20 15:38:44 compute-0 ovn_controller[148666]: 2026-01-20T15:38:44Z|00787|binding|INFO|Setting lport 74f670dc-f485-4a98-8c72-7f592f8939ab up in Southbound
Jan 20 15:38:44 compute-0 nova_compute[250018]: 2026-01-20 15:38:44.449 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:38:44 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:38:44.469 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[84584598-9b1c-44be-8fb3-80ba484ee0e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:38:44 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:38:44.475 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[3b01b943-8f97-4fb0-b7c8-1a86ae830a79]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:38:44 compute-0 NetworkManager[48960]: <info>  [1768923524.4765] manager: (tap03770aaa-10): new Veth device (/org/freedesktop/NetworkManager/Devices/381)
Jan 20 15:38:44 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:38:44.510 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[7602eaa8-4c4d-479f-9a36-c96dfa30922b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:38:44 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:38:44.513 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[e8d7dc35-a3be-4ef9-bbff-992cf9cb1edd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:38:44 compute-0 NetworkManager[48960]: <info>  [1768923524.5407] device (tap03770aaa-10): carrier: link connected
Jan 20 15:38:44 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:38:44.548 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[9c633e23-081b-4796-8461-f015073e9d5e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:38:44 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:38:44.563 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[6ad9b236-425e-44c4-bdcf-b3e177723c6f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap03770aaa-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:17:7d:46'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 247], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 946526, 'reachable_time': 32798, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 394275, 'error': None, 'target': 'ovnmeta-03770aaa-187c-4ee9-a705-bc9495cdf334', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:38:44 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:38:44.580 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[d9056b94-1286-4457-a82a-b40da9033d91]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe17:7d46'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 946526, 'tstamp': 946526}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 394276, 'error': None, 'target': 'ovnmeta-03770aaa-187c-4ee9-a705-bc9495cdf334', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:38:44 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:38:44.599 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[ac3a7b47-a41e-456a-9d86-72c992df9e8c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap03770aaa-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:17:7d:46'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 247], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 946526, 'reachable_time': 32798, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 394277, 'error': None, 'target': 'ovnmeta-03770aaa-187c-4ee9-a705-bc9495cdf334', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:38:44 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:38:44.635 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[de0f892d-daec-4f57-b29c-f51859ac019e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:38:44 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:38:44.717 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[3d0d2f50-a366-4ad1-bf9f-9b824ff95c65]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:38:44 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:38:44.719 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap03770aaa-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:38:44 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:38:44.719 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:38:44 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:38:44.720 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap03770aaa-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:38:44 compute-0 nova_compute[250018]: 2026-01-20 15:38:44.721 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:38:44 compute-0 NetworkManager[48960]: <info>  [1768923524.7242] manager: (tap03770aaa-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/382)
Jan 20 15:38:44 compute-0 kernel: tap03770aaa-10: entered promiscuous mode
Jan 20 15:38:44 compute-0 nova_compute[250018]: 2026-01-20 15:38:44.726 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:38:44 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:38:44.728 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap03770aaa-10, col_values=(('external_ids', {'iface-id': 'e145ba7e-306f-4de5-8e9b-034c2a7cc25e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:38:44 compute-0 ovn_controller[148666]: 2026-01-20T15:38:44Z|00788|binding|INFO|Releasing lport e145ba7e-306f-4de5-8e9b-034c2a7cc25e from this chassis (sb_readonly=0)
Jan 20 15:38:44 compute-0 nova_compute[250018]: 2026-01-20 15:38:44.729 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:38:44 compute-0 nova_compute[250018]: 2026-01-20 15:38:44.747 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:38:44 compute-0 nova_compute[250018]: 2026-01-20 15:38:44.750 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:38:44 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:38:44.751 160071 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/03770aaa-187c-4ee9-a705-bc9495cdf334.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/03770aaa-187c-4ee9-a705-bc9495cdf334.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 15:38:44 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:38:44.752 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[1290cfdd-de22-4320-b146-933966203a0c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:38:44 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:38:44.752 160071 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 15:38:44 compute-0 ovn_metadata_agent[160049]: global
Jan 20 15:38:44 compute-0 ovn_metadata_agent[160049]:     log         /dev/log local0 debug
Jan 20 15:38:44 compute-0 ovn_metadata_agent[160049]:     log-tag     haproxy-metadata-proxy-03770aaa-187c-4ee9-a705-bc9495cdf334
Jan 20 15:38:44 compute-0 ovn_metadata_agent[160049]:     user        root
Jan 20 15:38:44 compute-0 ovn_metadata_agent[160049]:     group       root
Jan 20 15:38:44 compute-0 ovn_metadata_agent[160049]:     maxconn     1024
Jan 20 15:38:44 compute-0 ovn_metadata_agent[160049]:     pidfile     /var/lib/neutron/external/pids/03770aaa-187c-4ee9-a705-bc9495cdf334.pid.haproxy
Jan 20 15:38:44 compute-0 ovn_metadata_agent[160049]:     daemon
Jan 20 15:38:44 compute-0 ovn_metadata_agent[160049]: 
Jan 20 15:38:44 compute-0 ovn_metadata_agent[160049]: defaults
Jan 20 15:38:44 compute-0 ovn_metadata_agent[160049]:     log global
Jan 20 15:38:44 compute-0 ovn_metadata_agent[160049]:     mode http
Jan 20 15:38:44 compute-0 ovn_metadata_agent[160049]:     option httplog
Jan 20 15:38:44 compute-0 ovn_metadata_agent[160049]:     option dontlognull
Jan 20 15:38:44 compute-0 ovn_metadata_agent[160049]:     option http-server-close
Jan 20 15:38:44 compute-0 ovn_metadata_agent[160049]:     option forwardfor
Jan 20 15:38:44 compute-0 ovn_metadata_agent[160049]:     retries                 3
Jan 20 15:38:44 compute-0 ovn_metadata_agent[160049]:     timeout http-request    30s
Jan 20 15:38:44 compute-0 ovn_metadata_agent[160049]:     timeout connect         30s
Jan 20 15:38:44 compute-0 ovn_metadata_agent[160049]:     timeout client          32s
Jan 20 15:38:44 compute-0 ovn_metadata_agent[160049]:     timeout server          32s
Jan 20 15:38:44 compute-0 ovn_metadata_agent[160049]:     timeout http-keep-alive 30s
Jan 20 15:38:44 compute-0 ovn_metadata_agent[160049]: 
Jan 20 15:38:44 compute-0 ovn_metadata_agent[160049]: 
Jan 20 15:38:44 compute-0 ovn_metadata_agent[160049]: listen listener
Jan 20 15:38:44 compute-0 ovn_metadata_agent[160049]:     bind 169.254.169.254:80
Jan 20 15:38:44 compute-0 ovn_metadata_agent[160049]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 15:38:44 compute-0 ovn_metadata_agent[160049]:     http-request add-header X-OVN-Network-ID 03770aaa-187c-4ee9-a705-bc9495cdf334
Jan 20 15:38:44 compute-0 ovn_metadata_agent[160049]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 15:38:44 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:38:44.753 160071 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-03770aaa-187c-4ee9-a705-bc9495cdf334', 'env', 'PROCESS_TAG=haproxy-03770aaa-187c-4ee9-a705-bc9495cdf334', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/03770aaa-187c-4ee9-a705-bc9495cdf334.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 15:38:44 compute-0 nova_compute[250018]: 2026-01-20 15:38:44.781 250022 DEBUG nova.compute.manager [req-2e26c490-6643-4ab8-85aa-80ccc998d5a8 req-6f8e056f-d284-4d0d-b62a-d15be6766ae4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b70d8470-0a60-4adc-89b5-a4e8763fd81e] Received event network-vif-plugged-74f670dc-f485-4a98-8c72-7f592f8939ab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:38:44 compute-0 nova_compute[250018]: 2026-01-20 15:38:44.781 250022 DEBUG oslo_concurrency.lockutils [req-2e26c490-6643-4ab8-85aa-80ccc998d5a8 req-6f8e056f-d284-4d0d-b62a-d15be6766ae4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "b70d8470-0a60-4adc-89b5-a4e8763fd81e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:38:44 compute-0 nova_compute[250018]: 2026-01-20 15:38:44.781 250022 DEBUG oslo_concurrency.lockutils [req-2e26c490-6643-4ab8-85aa-80ccc998d5a8 req-6f8e056f-d284-4d0d-b62a-d15be6766ae4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "b70d8470-0a60-4adc-89b5-a4e8763fd81e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:38:44 compute-0 nova_compute[250018]: 2026-01-20 15:38:44.782 250022 DEBUG oslo_concurrency.lockutils [req-2e26c490-6643-4ab8-85aa-80ccc998d5a8 req-6f8e056f-d284-4d0d-b62a-d15be6766ae4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "b70d8470-0a60-4adc-89b5-a4e8763fd81e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:38:44 compute-0 nova_compute[250018]: 2026-01-20 15:38:44.782 250022 DEBUG nova.compute.manager [req-2e26c490-6643-4ab8-85aa-80ccc998d5a8 req-6f8e056f-d284-4d0d-b62a-d15be6766ae4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b70d8470-0a60-4adc-89b5-a4e8763fd81e] Processing event network-vif-plugged-74f670dc-f485-4a98-8c72-7f592f8939ab _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 20 15:38:44 compute-0 nova_compute[250018]: 2026-01-20 15:38:44.961 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768923524.9615731, b70d8470-0a60-4adc-89b5-a4e8763fd81e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:38:44 compute-0 nova_compute[250018]: 2026-01-20 15:38:44.962 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: b70d8470-0a60-4adc-89b5-a4e8763fd81e] VM Started (Lifecycle Event)
Jan 20 15:38:44 compute-0 nova_compute[250018]: 2026-01-20 15:38:44.964 250022 DEBUG nova.compute.manager [None req-e7d1e352-42f7-45b0-8ce5-86aef89b3d66 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: b70d8470-0a60-4adc-89b5-a4e8763fd81e] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 15:38:44 compute-0 nova_compute[250018]: 2026-01-20 15:38:44.967 250022 DEBUG nova.virt.libvirt.driver [None req-e7d1e352-42f7-45b0-8ce5-86aef89b3d66 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: b70d8470-0a60-4adc-89b5-a4e8763fd81e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 15:38:44 compute-0 nova_compute[250018]: 2026-01-20 15:38:44.971 250022 INFO nova.virt.libvirt.driver [-] [instance: b70d8470-0a60-4adc-89b5-a4e8763fd81e] Instance spawned successfully.
Jan 20 15:38:44 compute-0 nova_compute[250018]: 2026-01-20 15:38:44.971 250022 DEBUG nova.virt.libvirt.driver [None req-e7d1e352-42f7-45b0-8ce5-86aef89b3d66 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: b70d8470-0a60-4adc-89b5-a4e8763fd81e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 15:38:44 compute-0 nova_compute[250018]: 2026-01-20 15:38:44.989 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: b70d8470-0a60-4adc-89b5-a4e8763fd81e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:38:44 compute-0 nova_compute[250018]: 2026-01-20 15:38:44.995 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: b70d8470-0a60-4adc-89b5-a4e8763fd81e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:38:45 compute-0 nova_compute[250018]: 2026-01-20 15:38:45.000 250022 DEBUG nova.virt.libvirt.driver [None req-e7d1e352-42f7-45b0-8ce5-86aef89b3d66 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: b70d8470-0a60-4adc-89b5-a4e8763fd81e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:38:45 compute-0 nova_compute[250018]: 2026-01-20 15:38:45.001 250022 DEBUG nova.virt.libvirt.driver [None req-e7d1e352-42f7-45b0-8ce5-86aef89b3d66 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: b70d8470-0a60-4adc-89b5-a4e8763fd81e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:38:45 compute-0 nova_compute[250018]: 2026-01-20 15:38:45.001 250022 DEBUG nova.virt.libvirt.driver [None req-e7d1e352-42f7-45b0-8ce5-86aef89b3d66 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: b70d8470-0a60-4adc-89b5-a4e8763fd81e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:38:45 compute-0 nova_compute[250018]: 2026-01-20 15:38:45.001 250022 DEBUG nova.virt.libvirt.driver [None req-e7d1e352-42f7-45b0-8ce5-86aef89b3d66 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: b70d8470-0a60-4adc-89b5-a4e8763fd81e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:38:45 compute-0 nova_compute[250018]: 2026-01-20 15:38:45.002 250022 DEBUG nova.virt.libvirt.driver [None req-e7d1e352-42f7-45b0-8ce5-86aef89b3d66 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: b70d8470-0a60-4adc-89b5-a4e8763fd81e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:38:45 compute-0 nova_compute[250018]: 2026-01-20 15:38:45.002 250022 DEBUG nova.virt.libvirt.driver [None req-e7d1e352-42f7-45b0-8ce5-86aef89b3d66 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: b70d8470-0a60-4adc-89b5-a4e8763fd81e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:38:45 compute-0 nova_compute[250018]: 2026-01-20 15:38:45.028 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: b70d8470-0a60-4adc-89b5-a4e8763fd81e] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 15:38:45 compute-0 nova_compute[250018]: 2026-01-20 15:38:45.029 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768923524.9623501, b70d8470-0a60-4adc-89b5-a4e8763fd81e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:38:45 compute-0 nova_compute[250018]: 2026-01-20 15:38:45.029 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: b70d8470-0a60-4adc-89b5-a4e8763fd81e] VM Paused (Lifecycle Event)
Jan 20 15:38:45 compute-0 nova_compute[250018]: 2026-01-20 15:38:45.041 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:38:45 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:38:45.041 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=83, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=82) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:38:45 compute-0 nova_compute[250018]: 2026-01-20 15:38:45.048 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: b70d8470-0a60-4adc-89b5-a4e8763fd81e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:38:45 compute-0 nova_compute[250018]: 2026-01-20 15:38:45.052 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768923524.9668913, b70d8470-0a60-4adc-89b5-a4e8763fd81e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:38:45 compute-0 nova_compute[250018]: 2026-01-20 15:38:45.052 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: b70d8470-0a60-4adc-89b5-a4e8763fd81e] VM Resumed (Lifecycle Event)
Jan 20 15:38:45 compute-0 nova_compute[250018]: 2026-01-20 15:38:45.100 250022 INFO nova.compute.manager [None req-e7d1e352-42f7-45b0-8ce5-86aef89b3d66 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: b70d8470-0a60-4adc-89b5-a4e8763fd81e] Took 7.56 seconds to spawn the instance on the hypervisor.
Jan 20 15:38:45 compute-0 nova_compute[250018]: 2026-01-20 15:38:45.102 250022 DEBUG nova.compute.manager [None req-e7d1e352-42f7-45b0-8ce5-86aef89b3d66 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: b70d8470-0a60-4adc-89b5-a4e8763fd81e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:38:45 compute-0 nova_compute[250018]: 2026-01-20 15:38:45.114 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: b70d8470-0a60-4adc-89b5-a4e8763fd81e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:38:45 compute-0 nova_compute[250018]: 2026-01-20 15:38:45.118 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: b70d8470-0a60-4adc-89b5-a4e8763fd81e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:38:45 compute-0 nova_compute[250018]: 2026-01-20 15:38:45.147 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: b70d8470-0a60-4adc-89b5-a4e8763fd81e] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 15:38:45 compute-0 podman[394348]: 2026-01-20 15:38:45.150113809 +0000 UTC m=+0.054754724 container create 5b3f95e33711089422235b057a2f25450a969635d0d9c3a501c548dc26ff00ac (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-03770aaa-187c-4ee9-a705-bc9495cdf334, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 20 15:38:45 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:38:45 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:38:45 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:38:45.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:38:45 compute-0 systemd[1]: Started libpod-conmon-5b3f95e33711089422235b057a2f25450a969635d0d9c3a501c548dc26ff00ac.scope.
Jan 20 15:38:45 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:38:45 compute-0 nova_compute[250018]: 2026-01-20 15:38:45.212 250022 INFO nova.compute.manager [None req-e7d1e352-42f7-45b0-8ce5-86aef89b3d66 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: b70d8470-0a60-4adc-89b5-a4e8763fd81e] Took 8.62 seconds to build instance.
Jan 20 15:38:45 compute-0 podman[394348]: 2026-01-20 15:38:45.127260156 +0000 UTC m=+0.031901091 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 15:38:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14ca696aa67f03008b4e3143c5a2dd843c8fc55082c5cc9c63ed3f9bdc84942b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 15:38:45 compute-0 podman[394348]: 2026-01-20 15:38:45.229916614 +0000 UTC m=+0.134557549 container init 5b3f95e33711089422235b057a2f25450a969635d0d9c3a501c548dc26ff00ac (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-03770aaa-187c-4ee9-a705-bc9495cdf334, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 20 15:38:45 compute-0 podman[394348]: 2026-01-20 15:38:45.235560698 +0000 UTC m=+0.140201613 container start 5b3f95e33711089422235b057a2f25450a969635d0d9c3a501c548dc26ff00ac (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-03770aaa-187c-4ee9-a705-bc9495cdf334, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 20 15:38:45 compute-0 nova_compute[250018]: 2026-01-20 15:38:45.235 250022 DEBUG oslo_concurrency.lockutils [None req-e7d1e352-42f7-45b0-8ce5-86aef89b3d66 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "b70d8470-0a60-4adc-89b5-a4e8763fd81e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.715s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:38:45 compute-0 neutron-haproxy-ovnmeta-03770aaa-187c-4ee9-a705-bc9495cdf334[394363]: [NOTICE]   (394367) : New worker (394369) forked
Jan 20 15:38:45 compute-0 neutron-haproxy-ovnmeta-03770aaa-187c-4ee9-a705-bc9495cdf334[394363]: [NOTICE]   (394367) : Loading success.
Jan 20 15:38:45 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:38:45.303 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 15:38:45 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3497: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Jan 20 15:38:45 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:38:45 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:38:45 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:38:45.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:38:46 compute-0 ceph-mon[74360]: pgmap v3497: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Jan 20 15:38:47 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:38:47 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:38:47 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:38:47.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:38:47 compute-0 nova_compute[250018]: 2026-01-20 15:38:47.173 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:38:47 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3498: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 275 KiB/s rd, 1.8 MiB/s wr, 40 op/s
Jan 20 15:38:47 compute-0 nova_compute[250018]: 2026-01-20 15:38:47.501 250022 DEBUG nova.compute.manager [req-0b9860bf-78e5-4e0f-bbcb-1d64a1c75819 req-5e6a2c94-8445-4bd6-a91b-f756a8d6f806 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b70d8470-0a60-4adc-89b5-a4e8763fd81e] Received event network-vif-plugged-74f670dc-f485-4a98-8c72-7f592f8939ab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:38:47 compute-0 nova_compute[250018]: 2026-01-20 15:38:47.502 250022 DEBUG oslo_concurrency.lockutils [req-0b9860bf-78e5-4e0f-bbcb-1d64a1c75819 req-5e6a2c94-8445-4bd6-a91b-f756a8d6f806 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "b70d8470-0a60-4adc-89b5-a4e8763fd81e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:38:47 compute-0 nova_compute[250018]: 2026-01-20 15:38:47.503 250022 DEBUG oslo_concurrency.lockutils [req-0b9860bf-78e5-4e0f-bbcb-1d64a1c75819 req-5e6a2c94-8445-4bd6-a91b-f756a8d6f806 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "b70d8470-0a60-4adc-89b5-a4e8763fd81e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:38:47 compute-0 nova_compute[250018]: 2026-01-20 15:38:47.503 250022 DEBUG oslo_concurrency.lockutils [req-0b9860bf-78e5-4e0f-bbcb-1d64a1c75819 req-5e6a2c94-8445-4bd6-a91b-f756a8d6f806 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "b70d8470-0a60-4adc-89b5-a4e8763fd81e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:38:47 compute-0 nova_compute[250018]: 2026-01-20 15:38:47.503 250022 DEBUG nova.compute.manager [req-0b9860bf-78e5-4e0f-bbcb-1d64a1c75819 req-5e6a2c94-8445-4bd6-a91b-f756a8d6f806 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b70d8470-0a60-4adc-89b5-a4e8763fd81e] No waiting events found dispatching network-vif-plugged-74f670dc-f485-4a98-8c72-7f592f8939ab pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:38:47 compute-0 nova_compute[250018]: 2026-01-20 15:38:47.504 250022 WARNING nova.compute.manager [req-0b9860bf-78e5-4e0f-bbcb-1d64a1c75819 req-5e6a2c94-8445-4bd6-a91b-f756a8d6f806 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b70d8470-0a60-4adc-89b5-a4e8763fd81e] Received unexpected event network-vif-plugged-74f670dc-f485-4a98-8c72-7f592f8939ab for instance with vm_state active and task_state None.
Jan 20 15:38:47 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:38:47 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:38:47 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:38:47.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:38:48 compute-0 nova_compute[250018]: 2026-01-20 15:38:48.130 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:38:48 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:38:48 compute-0 ceph-mon[74360]: pgmap v3498: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 275 KiB/s rd, 1.8 MiB/s wr, 40 op/s
Jan 20 15:38:49 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:38:49 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:38:49 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:38:49.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:38:49 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3499: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 582 KiB/s rd, 1.8 MiB/s wr, 55 op/s
Jan 20 15:38:49 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:38:49 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:38:49 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:38:49.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:38:50 compute-0 radosgw[93153]: INFO: RGWReshardLock::lock found lock on reshard.0000000000 to be held by another RGW process; skipping for now
Jan 20 15:38:50 compute-0 radosgw[93153]: INFO: RGWReshardLock::lock found lock on reshard.0000000002 to be held by another RGW process; skipping for now
Jan 20 15:38:50 compute-0 radosgw[93153]: INFO: RGWReshardLock::lock found lock on reshard.0000000003 to be held by another RGW process; skipping for now
Jan 20 15:38:50 compute-0 radosgw[93153]: INFO: RGWReshardLock::lock found lock on reshard.0000000005 to be held by another RGW process; skipping for now
Jan 20 15:38:50 compute-0 radosgw[93153]: INFO: RGWReshardLock::lock found lock on reshard.0000000006 to be held by another RGW process; skipping for now
Jan 20 15:38:50 compute-0 radosgw[93153]: INFO: RGWReshardLock::lock found lock on reshard.0000000007 to be held by another RGW process; skipping for now
Jan 20 15:38:50 compute-0 radosgw[93153]: INFO: RGWReshardLock::lock found lock on reshard.0000000009 to be held by another RGW process; skipping for now
Jan 20 15:38:50 compute-0 radosgw[93153]: INFO: RGWReshardLock::lock found lock on reshard.0000000011 to be held by another RGW process; skipping for now
Jan 20 15:38:50 compute-0 radosgw[93153]: INFO: RGWReshardLock::lock found lock on reshard.0000000013 to be held by another RGW process; skipping for now
Jan 20 15:38:50 compute-0 NetworkManager[48960]: <info>  [1768923530.5197] manager: (patch-br-int-to-provnet-b62c391b-f7a3-4a38-a0df-72ac0383ca74): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/383)
Jan 20 15:38:50 compute-0 ovn_controller[148666]: 2026-01-20T15:38:50Z|00789|binding|INFO|Releasing lport e145ba7e-306f-4de5-8e9b-034c2a7cc25e from this chassis (sb_readonly=0)
Jan 20 15:38:50 compute-0 NetworkManager[48960]: <info>  [1768923530.5204] manager: (patch-provnet-b62c391b-f7a3-4a38-a0df-72ac0383ca74-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/384)
Jan 20 15:38:50 compute-0 nova_compute[250018]: 2026-01-20 15:38:50.522 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:38:50 compute-0 ceph-mon[74360]: pgmap v3499: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 582 KiB/s rd, 1.8 MiB/s wr, 55 op/s
Jan 20 15:38:50 compute-0 ovn_controller[148666]: 2026-01-20T15:38:50Z|00790|binding|INFO|Releasing lport e145ba7e-306f-4de5-8e9b-034c2a7cc25e from this chassis (sb_readonly=0)
Jan 20 15:38:50 compute-0 nova_compute[250018]: 2026-01-20 15:38:50.535 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:38:51 compute-0 nova_compute[250018]: 2026-01-20 15:38:51.011 250022 DEBUG nova.compute.manager [req-9d564cc6-5d31-4cf7-940f-450ee720d10b req-130f9c83-eb50-4d4b-b506-fd493f3f5bd7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b70d8470-0a60-4adc-89b5-a4e8763fd81e] Received event network-changed-74f670dc-f485-4a98-8c72-7f592f8939ab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:38:51 compute-0 nova_compute[250018]: 2026-01-20 15:38:51.012 250022 DEBUG nova.compute.manager [req-9d564cc6-5d31-4cf7-940f-450ee720d10b req-130f9c83-eb50-4d4b-b506-fd493f3f5bd7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b70d8470-0a60-4adc-89b5-a4e8763fd81e] Refreshing instance network info cache due to event network-changed-74f670dc-f485-4a98-8c72-7f592f8939ab. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 15:38:51 compute-0 nova_compute[250018]: 2026-01-20 15:38:51.012 250022 DEBUG oslo_concurrency.lockutils [req-9d564cc6-5d31-4cf7-940f-450ee720d10b req-130f9c83-eb50-4d4b-b506-fd493f3f5bd7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-b70d8470-0a60-4adc-89b5-a4e8763fd81e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:38:51 compute-0 nova_compute[250018]: 2026-01-20 15:38:51.013 250022 DEBUG oslo_concurrency.lockutils [req-9d564cc6-5d31-4cf7-940f-450ee720d10b req-130f9c83-eb50-4d4b-b506-fd493f3f5bd7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-b70d8470-0a60-4adc-89b5-a4e8763fd81e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:38:51 compute-0 nova_compute[250018]: 2026-01-20 15:38:51.013 250022 DEBUG nova.network.neutron [req-9d564cc6-5d31-4cf7-940f-450ee720d10b req-130f9c83-eb50-4d4b-b506-fd493f3f5bd7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b70d8470-0a60-4adc-89b5-a4e8763fd81e] Refreshing network info cache for port 74f670dc-f485-4a98-8c72-7f592f8939ab _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 15:38:51 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:38:51 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:38:51 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:38:51.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:38:51 compute-0 sudo[394382]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:38:51 compute-0 sudo[394382]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:38:51 compute-0 sudo[394382]: pam_unix(sudo:session): session closed for user root
Jan 20 15:38:51 compute-0 sudo[394407]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:38:51 compute-0 sudo[394407]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:38:51 compute-0 sudo[394407]: pam_unix(sudo:session): session closed for user root
Jan 20 15:38:51 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3500: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.5 MiB/s wr, 90 op/s
Jan 20 15:38:51 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:38:51 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:38:51 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:38:51.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:38:52 compute-0 nova_compute[250018]: 2026-01-20 15:38:52.175 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:38:52 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:38:52.304 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=367c1a2c-b16a-4828-ab5a-626bb50023b4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '83'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:38:52 compute-0 ceph-mon[74360]: pgmap v3500: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.5 MiB/s wr, 90 op/s
Jan 20 15:38:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:38:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:38:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:38:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:38:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:38:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:38:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Optimize plan auto_2026-01-20_15:38:52
Jan 20 15:38:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 15:38:52 compute-0 ceph-mgr[74653]: [balancer INFO root] do_upmap
Jan 20 15:38:52 compute-0 ceph-mgr[74653]: [balancer INFO root] pools ['images', 'default.rgw.control', 'cephfs.cephfs.data', 'default.rgw.meta', '.rgw.root', 'default.rgw.log', 'volumes', '.mgr', 'cephfs.cephfs.meta', 'vms', 'backups']
Jan 20 15:38:52 compute-0 ceph-mgr[74653]: [balancer INFO root] prepared 0/10 changes
Jan 20 15:38:53 compute-0 nova_compute[250018]: 2026-01-20 15:38:53.132 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:38:53 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:38:53 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:38:53 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:38:53.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:38:53 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3501: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 75 op/s
Jan 20 15:38:53 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:38:53 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:38:53 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:38:53 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:38:53.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:38:54 compute-0 ceph-mon[74360]: pgmap v3501: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 75 op/s
Jan 20 15:38:54 compute-0 nova_compute[250018]: 2026-01-20 15:38:54.879 250022 DEBUG nova.network.neutron [req-9d564cc6-5d31-4cf7-940f-450ee720d10b req-130f9c83-eb50-4d4b-b506-fd493f3f5bd7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b70d8470-0a60-4adc-89b5-a4e8763fd81e] Updated VIF entry in instance network info cache for port 74f670dc-f485-4a98-8c72-7f592f8939ab. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 15:38:54 compute-0 nova_compute[250018]: 2026-01-20 15:38:54.881 250022 DEBUG nova.network.neutron [req-9d564cc6-5d31-4cf7-940f-450ee720d10b req-130f9c83-eb50-4d4b-b506-fd493f3f5bd7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b70d8470-0a60-4adc-89b5-a4e8763fd81e] Updating instance_info_cache with network_info: [{"id": "74f670dc-f485-4a98-8c72-7f592f8939ab", "address": "fa:16:3e:1f:2b:07", "network": {"id": "03770aaa-187c-4ee9-a705-bc9495cdf334", "bridge": "br-int", "label": "tempest-network-smoke--1124670091", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap74f670dc-f4", "ovs_interfaceid": "74f670dc-f485-4a98-8c72-7f592f8939ab", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:38:54 compute-0 nova_compute[250018]: 2026-01-20 15:38:54.898 250022 DEBUG oslo_concurrency.lockutils [req-9d564cc6-5d31-4cf7-940f-450ee720d10b req-130f9c83-eb50-4d4b-b506-fd493f3f5bd7 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-b70d8470-0a60-4adc-89b5-a4e8763fd81e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:38:55 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:38:55 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:38:55 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:38:55.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:38:55 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3502: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 12 KiB/s wr, 167 op/s
Jan 20 15:38:55 compute-0 ceph-mon[74360]: pgmap v3502: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 12 KiB/s wr, 167 op/s
Jan 20 15:38:55 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:38:55 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:38:55 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:38:55.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:38:57 compute-0 nova_compute[250018]: 2026-01-20 15:38:57.177 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:38:57 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:38:57 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:38:57 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:38:57.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:38:57 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3503: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 12 KiB/s wr, 202 op/s
Jan 20 15:38:57 compute-0 ovn_controller[148666]: 2026-01-20T15:38:57Z|00110|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:1f:2b:07 10.100.0.5
Jan 20 15:38:57 compute-0 ovn_controller[148666]: 2026-01-20T15:38:57Z|00111|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:1f:2b:07 10.100.0.5
Jan 20 15:38:57 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:38:57 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:38:57 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:38:57.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:38:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 15:38:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 15:38:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 15:38:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 15:38:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 15:38:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 15:38:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 15:38:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 15:38:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 15:38:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 15:38:58 compute-0 nova_compute[250018]: 2026-01-20 15:38:58.134 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:38:58 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:38:58 compute-0 ceph-mon[74360]: pgmap v3503: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 12 KiB/s wr, 202 op/s
Jan 20 15:38:59 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:38:59 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:38:59 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:38:59.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:38:59 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3504: 321 pgs: 321 active+clean; 177 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 545 KiB/s wr, 213 op/s
Jan 20 15:38:59 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:38:59 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:38:59 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:38:59.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:39:00 compute-0 ceph-mon[74360]: pgmap v3504: 321 pgs: 321 active+clean; 177 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 545 KiB/s wr, 213 op/s
Jan 20 15:39:01 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:39:01 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:39:01 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:39:01.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:39:01 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3505: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 2.1 MiB/s wr, 240 op/s
Jan 20 15:39:01 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:39:01 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:39:01 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:39:01.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:39:02 compute-0 nova_compute[250018]: 2026-01-20 15:39:02.179 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:39:02 compute-0 ceph-mon[74360]: pgmap v3505: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 2.1 MiB/s wr, 240 op/s
Jan 20 15:39:03 compute-0 nova_compute[250018]: 2026-01-20 15:39:03.137 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:39:03 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:39:03 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:39:03 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:39:03.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:39:03 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3506: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 402 KiB/s rd, 2.1 MiB/s wr, 193 op/s
Jan 20 15:39:03 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:39:03 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:39:03 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:39:03 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:39:03.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:39:03 compute-0 ceph-mon[74360]: pgmap v3506: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 402 KiB/s rd, 2.1 MiB/s wr, 193 op/s
Jan 20 15:39:04 compute-0 nova_compute[250018]: 2026-01-20 15:39:04.484 250022 INFO nova.compute.manager [None req-a4126536-666a-4e8c-891e-ea3b230573a1 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: b70d8470-0a60-4adc-89b5-a4e8763fd81e] Get console output
Jan 20 15:39:04 compute-0 nova_compute[250018]: 2026-01-20 15:39:04.492 331017 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Jan 20 15:39:05 compute-0 ovn_controller[148666]: 2026-01-20T15:39:05Z|00791|binding|INFO|Releasing lport e145ba7e-306f-4de5-8e9b-034c2a7cc25e from this chassis (sb_readonly=0)
Jan 20 15:39:05 compute-0 nova_compute[250018]: 2026-01-20 15:39:05.029 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:39:05 compute-0 ovn_controller[148666]: 2026-01-20T15:39:05Z|00792|binding|INFO|Releasing lport e145ba7e-306f-4de5-8e9b-034c2a7cc25e from this chassis (sb_readonly=0)
Jan 20 15:39:05 compute-0 nova_compute[250018]: 2026-01-20 15:39:05.103 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:39:05 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:39:05 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:39:05 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:39:05.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:39:05 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3507: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 402 KiB/s rd, 2.1 MiB/s wr, 194 op/s
Jan 20 15:39:05 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:39:05 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:39:05 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:39:05.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:39:06 compute-0 nova_compute[250018]: 2026-01-20 15:39:06.358 250022 INFO nova.compute.manager [None req-556d4d6e-4f63-4b09-93c7-27f8aa88646e 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: b70d8470-0a60-4adc-89b5-a4e8763fd81e] Get console output
Jan 20 15:39:06 compute-0 nova_compute[250018]: 2026-01-20 15:39:06.363 331017 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Jan 20 15:39:06 compute-0 ceph-mon[74360]: pgmap v3507: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 402 KiB/s rd, 2.1 MiB/s wr, 194 op/s
Jan 20 15:39:07 compute-0 nova_compute[250018]: 2026-01-20 15:39:07.182 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:39:07 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:39:07 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:39:07 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:39:07.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:39:07 compute-0 NetworkManager[48960]: <info>  [1768923547.2566] manager: (patch-br-int-to-provnet-b62c391b-f7a3-4a38-a0df-72ac0383ca74): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/385)
Jan 20 15:39:07 compute-0 nova_compute[250018]: 2026-01-20 15:39:07.255 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:39:07 compute-0 NetworkManager[48960]: <info>  [1768923547.2575] manager: (patch-provnet-b62c391b-f7a3-4a38-a0df-72ac0383ca74-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/386)
Jan 20 15:39:07 compute-0 nova_compute[250018]: 2026-01-20 15:39:07.307 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:39:07 compute-0 ovn_controller[148666]: 2026-01-20T15:39:07Z|00793|binding|INFO|Releasing lport e145ba7e-306f-4de5-8e9b-034c2a7cc25e from this chassis (sb_readonly=0)
Jan 20 15:39:07 compute-0 nova_compute[250018]: 2026-01-20 15:39:07.311 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:39:07 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3508: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 346 KiB/s rd, 2.1 MiB/s wr, 101 op/s
Jan 20 15:39:07 compute-0 nova_compute[250018]: 2026-01-20 15:39:07.841 250022 INFO nova.compute.manager [None req-e1ed7f10-b5b1-4eaa-aa2f-98810a1eb7c4 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: b70d8470-0a60-4adc-89b5-a4e8763fd81e] Get console output
Jan 20 15:39:07 compute-0 nova_compute[250018]: 2026-01-20 15:39:07.847 331017 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Jan 20 15:39:07 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:39:07 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:39:07 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:39:07.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:39:08 compute-0 nova_compute[250018]: 2026-01-20 15:39:08.139 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:39:08 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:39:08 compute-0 ceph-mon[74360]: pgmap v3508: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 346 KiB/s rd, 2.1 MiB/s wr, 101 op/s
Jan 20 15:39:08 compute-0 nova_compute[250018]: 2026-01-20 15:39:08.622 250022 DEBUG nova.compute.manager [req-b4d86430-d7df-450a-8179-11ba1cb9337a req-2de898ec-d7e9-48d4-896c-6375edbba518 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b70d8470-0a60-4adc-89b5-a4e8763fd81e] Received event network-changed-74f670dc-f485-4a98-8c72-7f592f8939ab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:39:08 compute-0 nova_compute[250018]: 2026-01-20 15:39:08.623 250022 DEBUG nova.compute.manager [req-b4d86430-d7df-450a-8179-11ba1cb9337a req-2de898ec-d7e9-48d4-896c-6375edbba518 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b70d8470-0a60-4adc-89b5-a4e8763fd81e] Refreshing instance network info cache due to event network-changed-74f670dc-f485-4a98-8c72-7f592f8939ab. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 15:39:08 compute-0 nova_compute[250018]: 2026-01-20 15:39:08.623 250022 DEBUG oslo_concurrency.lockutils [req-b4d86430-d7df-450a-8179-11ba1cb9337a req-2de898ec-d7e9-48d4-896c-6375edbba518 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-b70d8470-0a60-4adc-89b5-a4e8763fd81e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:39:08 compute-0 nova_compute[250018]: 2026-01-20 15:39:08.623 250022 DEBUG oslo_concurrency.lockutils [req-b4d86430-d7df-450a-8179-11ba1cb9337a req-2de898ec-d7e9-48d4-896c-6375edbba518 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-b70d8470-0a60-4adc-89b5-a4e8763fd81e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:39:08 compute-0 nova_compute[250018]: 2026-01-20 15:39:08.623 250022 DEBUG nova.network.neutron [req-b4d86430-d7df-450a-8179-11ba1cb9337a req-2de898ec-d7e9-48d4-896c-6375edbba518 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b70d8470-0a60-4adc-89b5-a4e8763fd81e] Refreshing network info cache for port 74f670dc-f485-4a98-8c72-7f592f8939ab _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 15:39:08 compute-0 nova_compute[250018]: 2026-01-20 15:39:08.680 250022 DEBUG oslo_concurrency.lockutils [None req-516fbbae-6c8b-4428-9f2d-a88db022199e 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Acquiring lock "b70d8470-0a60-4adc-89b5-a4e8763fd81e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:39:08 compute-0 nova_compute[250018]: 2026-01-20 15:39:08.680 250022 DEBUG oslo_concurrency.lockutils [None req-516fbbae-6c8b-4428-9f2d-a88db022199e 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "b70d8470-0a60-4adc-89b5-a4e8763fd81e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:39:08 compute-0 nova_compute[250018]: 2026-01-20 15:39:08.681 250022 DEBUG oslo_concurrency.lockutils [None req-516fbbae-6c8b-4428-9f2d-a88db022199e 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Acquiring lock "b70d8470-0a60-4adc-89b5-a4e8763fd81e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:39:08 compute-0 nova_compute[250018]: 2026-01-20 15:39:08.681 250022 DEBUG oslo_concurrency.lockutils [None req-516fbbae-6c8b-4428-9f2d-a88db022199e 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "b70d8470-0a60-4adc-89b5-a4e8763fd81e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:39:08 compute-0 nova_compute[250018]: 2026-01-20 15:39:08.681 250022 DEBUG oslo_concurrency.lockutils [None req-516fbbae-6c8b-4428-9f2d-a88db022199e 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "b70d8470-0a60-4adc-89b5-a4e8763fd81e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:39:08 compute-0 nova_compute[250018]: 2026-01-20 15:39:08.682 250022 INFO nova.compute.manager [None req-516fbbae-6c8b-4428-9f2d-a88db022199e 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: b70d8470-0a60-4adc-89b5-a4e8763fd81e] Terminating instance
Jan 20 15:39:08 compute-0 nova_compute[250018]: 2026-01-20 15:39:08.683 250022 DEBUG nova.compute.manager [None req-516fbbae-6c8b-4428-9f2d-a88db022199e 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: b70d8470-0a60-4adc-89b5-a4e8763fd81e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 15:39:08 compute-0 kernel: tap74f670dc-f4 (unregistering): left promiscuous mode
Jan 20 15:39:08 compute-0 NetworkManager[48960]: <info>  [1768923548.7404] device (tap74f670dc-f4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 15:39:08 compute-0 nova_compute[250018]: 2026-01-20 15:39:08.750 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:39:08 compute-0 ovn_controller[148666]: 2026-01-20T15:39:08Z|00794|binding|INFO|Releasing lport 74f670dc-f485-4a98-8c72-7f592f8939ab from this chassis (sb_readonly=0)
Jan 20 15:39:08 compute-0 ovn_controller[148666]: 2026-01-20T15:39:08Z|00795|binding|INFO|Setting lport 74f670dc-f485-4a98-8c72-7f592f8939ab down in Southbound
Jan 20 15:39:08 compute-0 ovn_controller[148666]: 2026-01-20T15:39:08Z|00796|binding|INFO|Removing iface tap74f670dc-f4 ovn-installed in OVS
Jan 20 15:39:08 compute-0 nova_compute[250018]: 2026-01-20 15:39:08.752 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:39:08 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:39:08.757 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1f:2b:07 10.100.0.5'], port_security=['fa:16:3e:1f:2b:07 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'b70d8470-0a60-4adc-89b5-a4e8763fd81e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-03770aaa-187c-4ee9-a705-bc9495cdf334', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3168f57421fb49bfb94b85daedd1fe7d', 'neutron:revision_number': '4', 'neutron:security_group_ids': '26fc97f4-6600-43f3-8149-cb555a406125', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2bf75492-0d72-4e62-8af5-30365215ac61, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=74f670dc-f485-4a98-8c72-7f592f8939ab) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:39:08 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:39:08.758 160071 INFO neutron.agent.ovn.metadata.agent [-] Port 74f670dc-f485-4a98-8c72-7f592f8939ab in datapath 03770aaa-187c-4ee9-a705-bc9495cdf334 unbound from our chassis
Jan 20 15:39:08 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:39:08.759 160071 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 03770aaa-187c-4ee9-a705-bc9495cdf334, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 15:39:08 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:39:08.760 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[5e08387e-03b1-4a1e-bd98-86f89e92d373]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:39:08 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:39:08.761 160071 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-03770aaa-187c-4ee9-a705-bc9495cdf334 namespace which is not needed anymore
Jan 20 15:39:08 compute-0 nova_compute[250018]: 2026-01-20 15:39:08.766 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:39:08 compute-0 systemd[1]: machine-qemu\x2d93\x2dinstance\x2d000000d7.scope: Deactivated successfully.
Jan 20 15:39:08 compute-0 systemd[1]: machine-qemu\x2d93\x2dinstance\x2d000000d7.scope: Consumed 14.712s CPU time.
Jan 20 15:39:08 compute-0 systemd-machined[216401]: Machine qemu-93-instance-000000d7 terminated.
Jan 20 15:39:08 compute-0 neutron-haproxy-ovnmeta-03770aaa-187c-4ee9-a705-bc9495cdf334[394363]: [NOTICE]   (394367) : haproxy version is 2.8.14-c23fe91
Jan 20 15:39:08 compute-0 neutron-haproxy-ovnmeta-03770aaa-187c-4ee9-a705-bc9495cdf334[394363]: [NOTICE]   (394367) : path to executable is /usr/sbin/haproxy
Jan 20 15:39:08 compute-0 neutron-haproxy-ovnmeta-03770aaa-187c-4ee9-a705-bc9495cdf334[394363]: [WARNING]  (394367) : Exiting Master process...
Jan 20 15:39:08 compute-0 neutron-haproxy-ovnmeta-03770aaa-187c-4ee9-a705-bc9495cdf334[394363]: [WARNING]  (394367) : Exiting Master process...
Jan 20 15:39:08 compute-0 neutron-haproxy-ovnmeta-03770aaa-187c-4ee9-a705-bc9495cdf334[394363]: [ALERT]    (394367) : Current worker (394369) exited with code 143 (Terminated)
Jan 20 15:39:08 compute-0 neutron-haproxy-ovnmeta-03770aaa-187c-4ee9-a705-bc9495cdf334[394363]: [WARNING]  (394367) : All workers exited. Exiting... (0)
Jan 20 15:39:08 compute-0 systemd[1]: libpod-5b3f95e33711089422235b057a2f25450a969635d0d9c3a501c548dc26ff00ac.scope: Deactivated successfully.
Jan 20 15:39:08 compute-0 podman[394470]: 2026-01-20 15:39:08.904914922 +0000 UTC m=+0.049028527 container died 5b3f95e33711089422235b057a2f25450a969635d0d9c3a501c548dc26ff00ac (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-03770aaa-187c-4ee9-a705-bc9495cdf334, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 20 15:39:08 compute-0 nova_compute[250018]: 2026-01-20 15:39:08.950 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:39:08 compute-0 nova_compute[250018]: 2026-01-20 15:39:08.958 250022 INFO nova.virt.libvirt.driver [-] [instance: b70d8470-0a60-4adc-89b5-a4e8763fd81e] Instance destroyed successfully.
Jan 20 15:39:08 compute-0 nova_compute[250018]: 2026-01-20 15:39:08.959 250022 DEBUG nova.objects.instance [None req-516fbbae-6c8b-4428-9f2d-a88db022199e 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lazy-loading 'resources' on Instance uuid b70d8470-0a60-4adc-89b5-a4e8763fd81e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:39:08 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5b3f95e33711089422235b057a2f25450a969635d0d9c3a501c548dc26ff00ac-userdata-shm.mount: Deactivated successfully.
Jan 20 15:39:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-14ca696aa67f03008b4e3143c5a2dd843c8fc55082c5cc9c63ed3f9bdc84942b-merged.mount: Deactivated successfully.
Jan 20 15:39:08 compute-0 nova_compute[250018]: 2026-01-20 15:39:08.974 250022 DEBUG nova.virt.libvirt.vif [None req-516fbbae-6c8b-4428-9f2d-a88db022199e 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T15:38:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1905072111',display_name='tempest-TestNetworkBasicOps-server-1905072111',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1905072111',id=215,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDK4n8EFtuL5/XqWY5d/FasGM76mbqH31m+kP4HpBk72Alanw2Gc2AL882g4qQJ4cI2EqJTeY7zEV2m5DJociXdBuQPc6ENpKh0PFPKtX8CK3OFGpGFOPhuqLgfVsiPCmA==',key_name='tempest-TestNetworkBasicOps-1736252842',keypairs=<?>,launch_index=0,launched_at=2026-01-20T15:38:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3168f57421fb49bfb94b85daedd1fe7d',ramdisk_id='',reservation_id='r-46xtwcxe',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-807695970',owner_user_name='tempest-TestNetworkBasicOps-807695970-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T15:38:45Z,user_data=None,user_id='5338aa65dc0e4326a66ce79053787f14',uuid=b70d8470-0a60-4adc-89b5-a4e8763fd81e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "74f670dc-f485-4a98-8c72-7f592f8939ab", "address": "fa:16:3e:1f:2b:07", "network": {"id": "03770aaa-187c-4ee9-a705-bc9495cdf334", "bridge": "br-int", "label": "tempest-network-smoke--1124670091", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap74f670dc-f4", "ovs_interfaceid": "74f670dc-f485-4a98-8c72-7f592f8939ab", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 15:39:08 compute-0 nova_compute[250018]: 2026-01-20 15:39:08.975 250022 DEBUG nova.network.os_vif_util [None req-516fbbae-6c8b-4428-9f2d-a88db022199e 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Converting VIF {"id": "74f670dc-f485-4a98-8c72-7f592f8939ab", "address": "fa:16:3e:1f:2b:07", "network": {"id": "03770aaa-187c-4ee9-a705-bc9495cdf334", "bridge": "br-int", "label": "tempest-network-smoke--1124670091", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap74f670dc-f4", "ovs_interfaceid": "74f670dc-f485-4a98-8c72-7f592f8939ab", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:39:08 compute-0 nova_compute[250018]: 2026-01-20 15:39:08.976 250022 DEBUG nova.network.os_vif_util [None req-516fbbae-6c8b-4428-9f2d-a88db022199e 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:1f:2b:07,bridge_name='br-int',has_traffic_filtering=True,id=74f670dc-f485-4a98-8c72-7f592f8939ab,network=Network(03770aaa-187c-4ee9-a705-bc9495cdf334),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap74f670dc-f4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:39:08 compute-0 nova_compute[250018]: 2026-01-20 15:39:08.977 250022 DEBUG os_vif [None req-516fbbae-6c8b-4428-9f2d-a88db022199e 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:1f:2b:07,bridge_name='br-int',has_traffic_filtering=True,id=74f670dc-f485-4a98-8c72-7f592f8939ab,network=Network(03770aaa-187c-4ee9-a705-bc9495cdf334),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap74f670dc-f4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 15:39:08 compute-0 nova_compute[250018]: 2026-01-20 15:39:08.979 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:39:08 compute-0 nova_compute[250018]: 2026-01-20 15:39:08.980 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap74f670dc-f4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:39:08 compute-0 podman[394470]: 2026-01-20 15:39:08.982919378 +0000 UTC m=+0.127032983 container cleanup 5b3f95e33711089422235b057a2f25450a969635d0d9c3a501c548dc26ff00ac (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-03770aaa-187c-4ee9-a705-bc9495cdf334, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 20 15:39:08 compute-0 nova_compute[250018]: 2026-01-20 15:39:08.982 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:39:08 compute-0 nova_compute[250018]: 2026-01-20 15:39:08.984 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 15:39:08 compute-0 nova_compute[250018]: 2026-01-20 15:39:08.988 250022 INFO os_vif [None req-516fbbae-6c8b-4428-9f2d-a88db022199e 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:1f:2b:07,bridge_name='br-int',has_traffic_filtering=True,id=74f670dc-f485-4a98-8c72-7f592f8939ab,network=Network(03770aaa-187c-4ee9-a705-bc9495cdf334),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap74f670dc-f4')
Jan 20 15:39:09 compute-0 systemd[1]: libpod-conmon-5b3f95e33711089422235b057a2f25450a969635d0d9c3a501c548dc26ff00ac.scope: Deactivated successfully.
Jan 20 15:39:09 compute-0 podman[394508]: 2026-01-20 15:39:09.048711932 +0000 UTC m=+0.043520797 container remove 5b3f95e33711089422235b057a2f25450a969635d0d9c3a501c548dc26ff00ac (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-03770aaa-187c-4ee9-a705-bc9495cdf334, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 20 15:39:09 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:39:09.056 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[3e2a4f71-73a7-4ca6-9ea9-0489011adfb0]: (4, ('Tue Jan 20 03:39:08 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-03770aaa-187c-4ee9-a705-bc9495cdf334 (5b3f95e33711089422235b057a2f25450a969635d0d9c3a501c548dc26ff00ac)\n5b3f95e33711089422235b057a2f25450a969635d0d9c3a501c548dc26ff00ac\nTue Jan 20 03:39:08 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-03770aaa-187c-4ee9-a705-bc9495cdf334 (5b3f95e33711089422235b057a2f25450a969635d0d9c3a501c548dc26ff00ac)\n5b3f95e33711089422235b057a2f25450a969635d0d9c3a501c548dc26ff00ac\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:39:09 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:39:09.059 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[e68a719f-9cce-4f5d-af76-c25a84599237]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:39:09 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:39:09.060 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap03770aaa-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:39:09 compute-0 nova_compute[250018]: 2026-01-20 15:39:09.063 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:39:09 compute-0 kernel: tap03770aaa-10: left promiscuous mode
Jan 20 15:39:09 compute-0 nova_compute[250018]: 2026-01-20 15:39:09.081 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:39:09 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:39:09.084 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[7b4039d9-06e4-4fd4-b94a-ffe9fe6a82ed]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:39:09 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:39:09.097 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[5ad82e90-2ad2-467f-ad2e-db715178956b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:39:09 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:39:09.098 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[ffb7e750-0697-49a9-99e6-38a4fa0855e1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:39:09 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:39:09.123 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[a4427473-4bb1-4246-bfbf-137eb0bebe59]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 946518, 'reachable_time': 25652, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 394544, 'error': None, 'target': 'ovnmeta-03770aaa-187c-4ee9-a705-bc9495cdf334', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:39:09 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:39:09.126 160517 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-03770aaa-187c-4ee9-a705-bc9495cdf334 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 15:39:09 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:39:09.127 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[2ab2e079-bde3-427f-a776-2ad1840e11b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:39:09 compute-0 systemd[1]: run-netns-ovnmeta\x2d03770aaa\x2d187c\x2d4ee9\x2da705\x2dbc9495cdf334.mount: Deactivated successfully.
Jan 20 15:39:09 compute-0 nova_compute[250018]: 2026-01-20 15:39:09.133 250022 DEBUG nova.compute.manager [req-3cb3e48d-cf13-4027-95a5-51f867684374 req-d3d64b2b-60dd-4719-9aae-e64b135fc49c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b70d8470-0a60-4adc-89b5-a4e8763fd81e] Received event network-vif-unplugged-74f670dc-f485-4a98-8c72-7f592f8939ab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:39:09 compute-0 nova_compute[250018]: 2026-01-20 15:39:09.133 250022 DEBUG oslo_concurrency.lockutils [req-3cb3e48d-cf13-4027-95a5-51f867684374 req-d3d64b2b-60dd-4719-9aae-e64b135fc49c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "b70d8470-0a60-4adc-89b5-a4e8763fd81e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:39:09 compute-0 nova_compute[250018]: 2026-01-20 15:39:09.133 250022 DEBUG oslo_concurrency.lockutils [req-3cb3e48d-cf13-4027-95a5-51f867684374 req-d3d64b2b-60dd-4719-9aae-e64b135fc49c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "b70d8470-0a60-4adc-89b5-a4e8763fd81e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:39:09 compute-0 nova_compute[250018]: 2026-01-20 15:39:09.134 250022 DEBUG oslo_concurrency.lockutils [req-3cb3e48d-cf13-4027-95a5-51f867684374 req-d3d64b2b-60dd-4719-9aae-e64b135fc49c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "b70d8470-0a60-4adc-89b5-a4e8763fd81e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:39:09 compute-0 nova_compute[250018]: 2026-01-20 15:39:09.134 250022 DEBUG nova.compute.manager [req-3cb3e48d-cf13-4027-95a5-51f867684374 req-d3d64b2b-60dd-4719-9aae-e64b135fc49c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b70d8470-0a60-4adc-89b5-a4e8763fd81e] No waiting events found dispatching network-vif-unplugged-74f670dc-f485-4a98-8c72-7f592f8939ab pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:39:09 compute-0 nova_compute[250018]: 2026-01-20 15:39:09.134 250022 DEBUG nova.compute.manager [req-3cb3e48d-cf13-4027-95a5-51f867684374 req-d3d64b2b-60dd-4719-9aae-e64b135fc49c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b70d8470-0a60-4adc-89b5-a4e8763fd81e] Received event network-vif-unplugged-74f670dc-f485-4a98-8c72-7f592f8939ab for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 20 15:39:09 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:39:09 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:39:09 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:39:09.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:39:09 compute-0 sshd-session[394441]: Invalid user ubuntu from 134.122.57.138 port 41946
Jan 20 15:39:09 compute-0 nova_compute[250018]: 2026-01-20 15:39:09.374 250022 INFO nova.virt.libvirt.driver [None req-516fbbae-6c8b-4428-9f2d-a88db022199e 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: b70d8470-0a60-4adc-89b5-a4e8763fd81e] Deleting instance files /var/lib/nova/instances/b70d8470-0a60-4adc-89b5-a4e8763fd81e_del
Jan 20 15:39:09 compute-0 nova_compute[250018]: 2026-01-20 15:39:09.375 250022 INFO nova.virt.libvirt.driver [None req-516fbbae-6c8b-4428-9f2d-a88db022199e 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: b70d8470-0a60-4adc-89b5-a4e8763fd81e] Deletion of /var/lib/nova/instances/b70d8470-0a60-4adc-89b5-a4e8763fd81e_del complete
Jan 20 15:39:09 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3509: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 322 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Jan 20 15:39:09 compute-0 nova_compute[250018]: 2026-01-20 15:39:09.451 250022 INFO nova.compute.manager [None req-516fbbae-6c8b-4428-9f2d-a88db022199e 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] [instance: b70d8470-0a60-4adc-89b5-a4e8763fd81e] Took 0.77 seconds to destroy the instance on the hypervisor.
Jan 20 15:39:09 compute-0 nova_compute[250018]: 2026-01-20 15:39:09.452 250022 DEBUG oslo.service.loopingcall [None req-516fbbae-6c8b-4428-9f2d-a88db022199e 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 15:39:09 compute-0 nova_compute[250018]: 2026-01-20 15:39:09.452 250022 DEBUG nova.compute.manager [-] [instance: b70d8470-0a60-4adc-89b5-a4e8763fd81e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 15:39:09 compute-0 nova_compute[250018]: 2026-01-20 15:39:09.452 250022 DEBUG nova.network.neutron [-] [instance: b70d8470-0a60-4adc-89b5-a4e8763fd81e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 15:39:09 compute-0 sshd-session[394441]: Connection closed by invalid user ubuntu 134.122.57.138 port 41946 [preauth]
Jan 20 15:39:09 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:39:09 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:39:09 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:39:09.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:39:10 compute-0 nova_compute[250018]: 2026-01-20 15:39:10.360 250022 DEBUG nova.network.neutron [req-b4d86430-d7df-450a-8179-11ba1cb9337a req-2de898ec-d7e9-48d4-896c-6375edbba518 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b70d8470-0a60-4adc-89b5-a4e8763fd81e] Updated VIF entry in instance network info cache for port 74f670dc-f485-4a98-8c72-7f592f8939ab. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 15:39:10 compute-0 nova_compute[250018]: 2026-01-20 15:39:10.361 250022 DEBUG nova.network.neutron [req-b4d86430-d7df-450a-8179-11ba1cb9337a req-2de898ec-d7e9-48d4-896c-6375edbba518 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b70d8470-0a60-4adc-89b5-a4e8763fd81e] Updating instance_info_cache with network_info: [{"id": "74f670dc-f485-4a98-8c72-7f592f8939ab", "address": "fa:16:3e:1f:2b:07", "network": {"id": "03770aaa-187c-4ee9-a705-bc9495cdf334", "bridge": "br-int", "label": "tempest-network-smoke--1124670091", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3168f57421fb49bfb94b85daedd1fe7d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap74f670dc-f4", "ovs_interfaceid": "74f670dc-f485-4a98-8c72-7f592f8939ab", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:39:10 compute-0 nova_compute[250018]: 2026-01-20 15:39:10.384 250022 DEBUG oslo_concurrency.lockutils [req-b4d86430-d7df-450a-8179-11ba1cb9337a req-2de898ec-d7e9-48d4-896c-6375edbba518 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-b70d8470-0a60-4adc-89b5-a4e8763fd81e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:39:10 compute-0 ceph-mon[74360]: pgmap v3509: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 322 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Jan 20 15:39:10 compute-0 nova_compute[250018]: 2026-01-20 15:39:10.527 250022 DEBUG nova.network.neutron [-] [instance: b70d8470-0a60-4adc-89b5-a4e8763fd81e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:39:10 compute-0 nova_compute[250018]: 2026-01-20 15:39:10.544 250022 INFO nova.compute.manager [-] [instance: b70d8470-0a60-4adc-89b5-a4e8763fd81e] Took 1.09 seconds to deallocate network for instance.
Jan 20 15:39:10 compute-0 nova_compute[250018]: 2026-01-20 15:39:10.599 250022 DEBUG oslo_concurrency.lockutils [None req-516fbbae-6c8b-4428-9f2d-a88db022199e 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:39:10 compute-0 nova_compute[250018]: 2026-01-20 15:39:10.600 250022 DEBUG oslo_concurrency.lockutils [None req-516fbbae-6c8b-4428-9f2d-a88db022199e 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:39:10 compute-0 nova_compute[250018]: 2026-01-20 15:39:10.652 250022 DEBUG oslo_concurrency.processutils [None req-516fbbae-6c8b-4428-9f2d-a88db022199e 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:39:10 compute-0 nova_compute[250018]: 2026-01-20 15:39:10.742 250022 DEBUG nova.compute.manager [req-43416268-5903-41b2-9975-9073ee9efa1b req-392a2347-ecb4-44dd-8277-f0652c365fcb 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b70d8470-0a60-4adc-89b5-a4e8763fd81e] Received event network-vif-deleted-74f670dc-f485-4a98-8c72-7f592f8939ab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:39:11 compute-0 nova_compute[250018]: 2026-01-20 15:39:11.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:39:11 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:39:11 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/39756524' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:39:11 compute-0 nova_compute[250018]: 2026-01-20 15:39:11.103 250022 DEBUG oslo_concurrency.processutils [None req-516fbbae-6c8b-4428-9f2d-a88db022199e 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:39:11 compute-0 nova_compute[250018]: 2026-01-20 15:39:11.112 250022 DEBUG nova.compute.provider_tree [None req-516fbbae-6c8b-4428-9f2d-a88db022199e 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:39:11 compute-0 nova_compute[250018]: 2026-01-20 15:39:11.143 250022 DEBUG nova.scheduler.client.report [None req-516fbbae-6c8b-4428-9f2d-a88db022199e 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:39:11 compute-0 nova_compute[250018]: 2026-01-20 15:39:11.181 250022 DEBUG oslo_concurrency.lockutils [None req-516fbbae-6c8b-4428-9f2d-a88db022199e 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.581s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:39:11 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:39:11 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:39:11 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:39:11.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:39:11 compute-0 nova_compute[250018]: 2026-01-20 15:39:11.222 250022 INFO nova.scheduler.client.report [None req-516fbbae-6c8b-4428-9f2d-a88db022199e 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Deleted allocations for instance b70d8470-0a60-4adc-89b5-a4e8763fd81e
Jan 20 15:39:11 compute-0 nova_compute[250018]: 2026-01-20 15:39:11.254 250022 DEBUG nova.compute.manager [req-2ed7c0e5-cded-4918-aabd-755c8c6b7e39 req-bfef49ba-1989-4db3-ba55-e8c6b0f69605 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b70d8470-0a60-4adc-89b5-a4e8763fd81e] Received event network-vif-plugged-74f670dc-f485-4a98-8c72-7f592f8939ab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:39:11 compute-0 nova_compute[250018]: 2026-01-20 15:39:11.255 250022 DEBUG oslo_concurrency.lockutils [req-2ed7c0e5-cded-4918-aabd-755c8c6b7e39 req-bfef49ba-1989-4db3-ba55-e8c6b0f69605 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "b70d8470-0a60-4adc-89b5-a4e8763fd81e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:39:11 compute-0 nova_compute[250018]: 2026-01-20 15:39:11.255 250022 DEBUG oslo_concurrency.lockutils [req-2ed7c0e5-cded-4918-aabd-755c8c6b7e39 req-bfef49ba-1989-4db3-ba55-e8c6b0f69605 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "b70d8470-0a60-4adc-89b5-a4e8763fd81e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:39:11 compute-0 nova_compute[250018]: 2026-01-20 15:39:11.255 250022 DEBUG oslo_concurrency.lockutils [req-2ed7c0e5-cded-4918-aabd-755c8c6b7e39 req-bfef49ba-1989-4db3-ba55-e8c6b0f69605 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "b70d8470-0a60-4adc-89b5-a4e8763fd81e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:39:11 compute-0 nova_compute[250018]: 2026-01-20 15:39:11.255 250022 DEBUG nova.compute.manager [req-2ed7c0e5-cded-4918-aabd-755c8c6b7e39 req-bfef49ba-1989-4db3-ba55-e8c6b0f69605 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b70d8470-0a60-4adc-89b5-a4e8763fd81e] No waiting events found dispatching network-vif-plugged-74f670dc-f485-4a98-8c72-7f592f8939ab pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:39:11 compute-0 nova_compute[250018]: 2026-01-20 15:39:11.255 250022 WARNING nova.compute.manager [req-2ed7c0e5-cded-4918-aabd-755c8c6b7e39 req-bfef49ba-1989-4db3-ba55-e8c6b0f69605 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b70d8470-0a60-4adc-89b5-a4e8763fd81e] Received unexpected event network-vif-plugged-74f670dc-f485-4a98-8c72-7f592f8939ab for instance with vm_state deleted and task_state None.
Jan 20 15:39:11 compute-0 nova_compute[250018]: 2026-01-20 15:39:11.322 250022 DEBUG oslo_concurrency.lockutils [None req-516fbbae-6c8b-4428-9f2d-a88db022199e 5338aa65dc0e4326a66ce79053787f14 3168f57421fb49bfb94b85daedd1fe7d - - default default] Lock "b70d8470-0a60-4adc-89b5-a4e8763fd81e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.641s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:39:11 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3510: 321 pgs: 321 active+clean; 140 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 235 KiB/s rd, 1.6 MiB/s wr, 58 op/s
Jan 20 15:39:11 compute-0 sudo[394569]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:39:11 compute-0 sudo[394569]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:39:11 compute-0 sudo[394569]: pam_unix(sudo:session): session closed for user root
Jan 20 15:39:11 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/39756524' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:39:11 compute-0 sudo[394594]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:39:11 compute-0 sudo[394594]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:39:11 compute-0 sudo[394594]: pam_unix(sudo:session): session closed for user root
Jan 20 15:39:11 compute-0 podman[394619]: 2026-01-20 15:39:11.58399089 +0000 UTC m=+0.052227454 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Jan 20 15:39:11 compute-0 podman[394618]: 2026-01-20 15:39:11.620421703 +0000 UTC m=+0.094567698 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 15:39:11 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:39:11 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:39:11 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:39:11.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:39:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 15:39:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:39:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 20 15:39:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:39:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00043093796528941003 of space, bias 1.0, pg target 0.129281389586823 quantized to 32 (current 32)
Jan 20 15:39:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:39:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021614147124511445 of space, bias 1.0, pg target 0.6484244137353433 quantized to 32 (current 32)
Jan 20 15:39:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:39:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 20 15:39:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:39:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 20 15:39:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:39:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 20 15:39:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:39:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 15:39:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:39:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 20 15:39:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:39:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 20 15:39:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:39:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 15:39:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:39:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 20 15:39:12 compute-0 nova_compute[250018]: 2026-01-20 15:39:12.183 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:39:12 compute-0 ceph-mon[74360]: pgmap v3510: 321 pgs: 321 active+clean; 140 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 235 KiB/s rd, 1.6 MiB/s wr, 58 op/s
Jan 20 15:39:13 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:39:13 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:39:13 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:39:13.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:39:13 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3511: 321 pgs: 321 active+clean; 140 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 11 KiB/s rd, 13 KiB/s wr, 16 op/s
Jan 20 15:39:13 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:39:13 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 20 15:39:13 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/474579767' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:39:13 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 20 15:39:13 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/474579767' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:39:13 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:39:13 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:39:13 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:39:13.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:39:13 compute-0 nova_compute[250018]: 2026-01-20 15:39:13.983 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:39:14 compute-0 ceph-mon[74360]: pgmap v3511: 321 pgs: 321 active+clean; 140 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 11 KiB/s rd, 13 KiB/s wr, 16 op/s
Jan 20 15:39:14 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/474579767' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:39:14 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/474579767' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:39:15 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:39:15 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:39:15 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:39:15.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:39:15 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3512: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 13 KiB/s wr, 28 op/s
Jan 20 15:39:15 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:39:15 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:39:15 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:39:15.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:39:16 compute-0 nova_compute[250018]: 2026-01-20 15:39:16.299 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:39:16 compute-0 nova_compute[250018]: 2026-01-20 15:39:16.373 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:39:16 compute-0 ceph-mon[74360]: pgmap v3512: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 13 KiB/s wr, 28 op/s
Jan 20 15:39:17 compute-0 nova_compute[250018]: 2026-01-20 15:39:17.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:39:17 compute-0 nova_compute[250018]: 2026-01-20 15:39:17.185 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:39:17 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:39:17 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:39:17 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:39:17.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:39:17 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3513: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.5 KiB/s wr, 27 op/s
Jan 20 15:39:17 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:39:17 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:39:17 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:39:17.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:39:18 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:39:18 compute-0 ceph-mon[74360]: pgmap v3513: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.5 KiB/s wr, 27 op/s
Jan 20 15:39:18 compute-0 nova_compute[250018]: 2026-01-20 15:39:18.985 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:39:19 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:39:19 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:39:19 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:39:19.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:39:19 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3514: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.5 KiB/s wr, 27 op/s
Jan 20 15:39:19 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:39:19 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:39:19 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:39:19.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:39:20 compute-0 ceph-mon[74360]: pgmap v3514: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.5 KiB/s wr, 27 op/s
Jan 20 15:39:20 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/366126376' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:39:21 compute-0 nova_compute[250018]: 2026-01-20 15:39:21.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:39:21 compute-0 nova_compute[250018]: 2026-01-20 15:39:21.051 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 15:39:21 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:39:21 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:39:21 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:39:21.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:39:21 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3515: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 20 15:39:21 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/3640117576' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:39:21 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:39:21 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:39:21 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:39:21.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:39:22 compute-0 nova_compute[250018]: 2026-01-20 15:39:22.187 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:39:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:39:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:39:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:39:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:39:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:39:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:39:22 compute-0 ceph-mon[74360]: pgmap v3515: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 20 15:39:23 compute-0 nova_compute[250018]: 2026-01-20 15:39:23.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:39:23 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:39:23 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:39:23 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:39:23.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:39:23 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3516: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 8.0 KiB/s rd, 341 B/s wr, 11 op/s
Jan 20 15:39:23 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:39:23 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:39:23 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:39:23 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:39:23.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:39:23 compute-0 nova_compute[250018]: 2026-01-20 15:39:23.957 250022 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768923548.9560773, b70d8470-0a60-4adc-89b5-a4e8763fd81e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:39:23 compute-0 nova_compute[250018]: 2026-01-20 15:39:23.957 250022 INFO nova.compute.manager [-] [instance: b70d8470-0a60-4adc-89b5-a4e8763fd81e] VM Stopped (Lifecycle Event)
Jan 20 15:39:23 compute-0 nova_compute[250018]: 2026-01-20 15:39:23.978 250022 DEBUG nova.compute.manager [None req-b22e90db-341e-486e-b3ea-e008bf6bb020 - - - - - -] [instance: b70d8470-0a60-4adc-89b5-a4e8763fd81e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:39:23 compute-0 nova_compute[250018]: 2026-01-20 15:39:23.988 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:39:24 compute-0 nova_compute[250018]: 2026-01-20 15:39:24.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:39:24 compute-0 nova_compute[250018]: 2026-01-20 15:39:24.079 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:39:24 compute-0 nova_compute[250018]: 2026-01-20 15:39:24.080 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:39:24 compute-0 nova_compute[250018]: 2026-01-20 15:39:24.080 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:39:24 compute-0 nova_compute[250018]: 2026-01-20 15:39:24.080 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 15:39:24 compute-0 nova_compute[250018]: 2026-01-20 15:39:24.081 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:39:24 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:39:24 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2197890005' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:39:24 compute-0 nova_compute[250018]: 2026-01-20 15:39:24.559 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:39:24 compute-0 ceph-mon[74360]: pgmap v3516: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 8.0 KiB/s rd, 341 B/s wr, 11 op/s
Jan 20 15:39:24 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/2197890005' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:39:24 compute-0 nova_compute[250018]: 2026-01-20 15:39:24.743 250022 WARNING nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:39:24 compute-0 nova_compute[250018]: 2026-01-20 15:39:24.744 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4185MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 15:39:24 compute-0 nova_compute[250018]: 2026-01-20 15:39:24.744 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:39:24 compute-0 nova_compute[250018]: 2026-01-20 15:39:24.745 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:39:24 compute-0 nova_compute[250018]: 2026-01-20 15:39:24.814 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 15:39:24 compute-0 nova_compute[250018]: 2026-01-20 15:39:24.814 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 15:39:24 compute-0 nova_compute[250018]: 2026-01-20 15:39:24.829 250022 DEBUG nova.scheduler.client.report [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Refreshing inventories for resource provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 20 15:39:24 compute-0 nova_compute[250018]: 2026-01-20 15:39:24.851 250022 DEBUG nova.scheduler.client.report [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Updating ProviderTree inventory for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 20 15:39:24 compute-0 nova_compute[250018]: 2026-01-20 15:39:24.852 250022 DEBUG nova.compute.provider_tree [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Updating inventory in ProviderTree for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 20 15:39:24 compute-0 nova_compute[250018]: 2026-01-20 15:39:24.864 250022 DEBUG nova.scheduler.client.report [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Refreshing aggregate associations for resource provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 20 15:39:24 compute-0 nova_compute[250018]: 2026-01-20 15:39:24.892 250022 DEBUG nova.scheduler.client.report [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Refreshing trait associations for resource provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed, traits: COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_TRUSTED_CERTS,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NODE,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_RESCUE_BFV,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SSE2,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VOLUME_EXTEND,COMPUTE_ACCELERATORS,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_SATA,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE41,HW_CPU_X86_MMX,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_IDE,COMPUTE_DEVICE_TAGGING _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 20 15:39:24 compute-0 nova_compute[250018]: 2026-01-20 15:39:24.920 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:39:25 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:39:25 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:39:25 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:39:25.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:39:25 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:39:25 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3411632472' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:39:25 compute-0 nova_compute[250018]: 2026-01-20 15:39:25.364 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:39:25 compute-0 nova_compute[250018]: 2026-01-20 15:39:25.372 250022 DEBUG nova.compute.provider_tree [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:39:25 compute-0 nova_compute[250018]: 2026-01-20 15:39:25.389 250022 DEBUG nova.scheduler.client.report [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:39:25 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3517: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 8.0 KiB/s rd, 341 B/s wr, 11 op/s
Jan 20 15:39:25 compute-0 nova_compute[250018]: 2026-01-20 15:39:25.419 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 15:39:25 compute-0 nova_compute[250018]: 2026-01-20 15:39:25.419 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.675s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:39:25 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/1540194721' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:39:25 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3411632472' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:39:25 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/312578530' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:39:25 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:39:25 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:39:25 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:39:25.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:39:26 compute-0 ceph-mon[74360]: pgmap v3517: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 8.0 KiB/s rd, 341 B/s wr, 11 op/s
Jan 20 15:39:27 compute-0 nova_compute[250018]: 2026-01-20 15:39:27.190 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:39:27 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:39:27 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:39:27 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:39:27.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:39:27 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3518: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:39:27 compute-0 ceph-mon[74360]: pgmap v3518: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:39:27 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:39:27 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:39:27 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:39:27.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:39:28 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:39:29 compute-0 nova_compute[250018]: 2026-01-20 15:39:29.022 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:39:29 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:39:29 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:39:29 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:39:29.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:39:29 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3519: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:39:29 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:39:29 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:39:29 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:39:29.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:39:30 compute-0 nova_compute[250018]: 2026-01-20 15:39:30.421 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:39:30 compute-0 nova_compute[250018]: 2026-01-20 15:39:30.422 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:39:30 compute-0 ceph-mon[74360]: pgmap v3519: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:39:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:39:30.812 160071 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:39:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:39:30.812 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:39:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:39:30.813 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:39:31 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:39:31 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:39:31 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:39:31.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:39:31 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3520: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:39:31 compute-0 sudo[394719]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:39:31 compute-0 sudo[394719]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:39:31 compute-0 sudo[394719]: pam_unix(sudo:session): session closed for user root
Jan 20 15:39:31 compute-0 sudo[394744]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:39:31 compute-0 sudo[394744]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:39:31 compute-0 sudo[394744]: pam_unix(sudo:session): session closed for user root
Jan 20 15:39:31 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:39:31 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:39:31 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:39:31.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:39:32 compute-0 nova_compute[250018]: 2026-01-20 15:39:32.192 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:39:32 compute-0 ceph-mon[74360]: pgmap v3520: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:39:33 compute-0 sudo[394770]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:39:33 compute-0 sudo[394770]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:39:33 compute-0 sudo[394770]: pam_unix(sudo:session): session closed for user root
Jan 20 15:39:33 compute-0 sudo[394795]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:39:33 compute-0 sudo[394795]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:39:33 compute-0 sudo[394795]: pam_unix(sudo:session): session closed for user root
Jan 20 15:39:33 compute-0 sudo[394820]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:39:33 compute-0 sudo[394820]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:39:33 compute-0 sudo[394820]: pam_unix(sudo:session): session closed for user root
Jan 20 15:39:33 compute-0 sudo[394845]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Jan 20 15:39:33 compute-0 sudo[394845]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:39:33 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:39:33 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:39:33 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:39:33.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:39:33 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3521: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:39:33 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:39:33 compute-0 sudo[394845]: pam_unix(sudo:session): session closed for user root
Jan 20 15:39:33 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 15:39:33 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:39:33 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 15:39:33 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:39:33 compute-0 sudo[394890]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:39:33 compute-0 sudo[394890]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:39:33 compute-0 sudo[394890]: pam_unix(sudo:session): session closed for user root
Jan 20 15:39:33 compute-0 sudo[394915]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:39:33 compute-0 sudo[394915]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:39:33 compute-0 sudo[394915]: pam_unix(sudo:session): session closed for user root
Jan 20 15:39:33 compute-0 sudo[394940]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:39:33 compute-0 sudo[394940]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:39:33 compute-0 sudo[394940]: pam_unix(sudo:session): session closed for user root
Jan 20 15:39:33 compute-0 sudo[394965]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 15:39:33 compute-0 sudo[394965]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:39:33 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:39:33 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:39:33 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:39:33.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:39:34 compute-0 nova_compute[250018]: 2026-01-20 15:39:34.025 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:39:34 compute-0 sudo[394965]: pam_unix(sudo:session): session closed for user root
Jan 20 15:39:34 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 15:39:34 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:39:34 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 20 15:39:34 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 15:39:34 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 20 15:39:34 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:39:34 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev d4e5b4df-9274-469f-8eeb-0435202721ab does not exist
Jan 20 15:39:34 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 5fd7885f-fd06-48ba-a79a-4880e577c346 does not exist
Jan 20 15:39:34 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 328ac45a-3c45-4c80-a79c-cde6080acbd2 does not exist
Jan 20 15:39:34 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 20 15:39:34 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 15:39:34 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 20 15:39:34 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 15:39:34 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 15:39:34 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:39:34 compute-0 ceph-mon[74360]: pgmap v3521: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:39:34 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:39:34 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:39:34 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:39:34 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 15:39:34 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:39:34 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 15:39:34 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 15:39:34 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:39:34 compute-0 sudo[395023]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:39:34 compute-0 sudo[395023]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:39:34 compute-0 sudo[395023]: pam_unix(sudo:session): session closed for user root
Jan 20 15:39:34 compute-0 sudo[395048]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:39:34 compute-0 sudo[395048]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:39:34 compute-0 sudo[395048]: pam_unix(sudo:session): session closed for user root
Jan 20 15:39:34 compute-0 sudo[395073]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:39:34 compute-0 sudo[395073]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:39:34 compute-0 sudo[395073]: pam_unix(sudo:session): session closed for user root
Jan 20 15:39:34 compute-0 sudo[395098]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 15:39:34 compute-0 sudo[395098]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:39:35 compute-0 podman[395164]: 2026-01-20 15:39:35.043280567 +0000 UTC m=+0.041517972 container create 70a7f7bd40aee47b9240590f2379042922a35144db2702f0ac81bfccc5d85ee0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_haibt, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 20 15:39:35 compute-0 systemd[1]: Started libpod-conmon-70a7f7bd40aee47b9240590f2379042922a35144db2702f0ac81bfccc5d85ee0.scope.
Jan 20 15:39:35 compute-0 podman[395164]: 2026-01-20 15:39:35.023510829 +0000 UTC m=+0.021748254 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:39:35 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:39:35 compute-0 podman[395164]: 2026-01-20 15:39:35.143230903 +0000 UTC m=+0.141468298 container init 70a7f7bd40aee47b9240590f2379042922a35144db2702f0ac81bfccc5d85ee0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_haibt, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 20 15:39:35 compute-0 podman[395164]: 2026-01-20 15:39:35.149941446 +0000 UTC m=+0.148178841 container start 70a7f7bd40aee47b9240590f2379042922a35144db2702f0ac81bfccc5d85ee0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_haibt, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 15:39:35 compute-0 podman[395164]: 2026-01-20 15:39:35.154070308 +0000 UTC m=+0.152307703 container attach 70a7f7bd40aee47b9240590f2379042922a35144db2702f0ac81bfccc5d85ee0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_haibt, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 20 15:39:35 compute-0 gifted_haibt[395181]: 167 167
Jan 20 15:39:35 compute-0 systemd[1]: libpod-70a7f7bd40aee47b9240590f2379042922a35144db2702f0ac81bfccc5d85ee0.scope: Deactivated successfully.
Jan 20 15:39:35 compute-0 conmon[395181]: conmon 70a7f7bd40aee47b9240 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-70a7f7bd40aee47b9240590f2379042922a35144db2702f0ac81bfccc5d85ee0.scope/container/memory.events
Jan 20 15:39:35 compute-0 podman[395164]: 2026-01-20 15:39:35.15637443 +0000 UTC m=+0.154611845 container died 70a7f7bd40aee47b9240590f2379042922a35144db2702f0ac81bfccc5d85ee0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_haibt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 15:39:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-ee50d0af37fe064b5d038b437c9ae478ebfe452c1d0dec33e67ed6a89f905fe9-merged.mount: Deactivated successfully.
Jan 20 15:39:35 compute-0 podman[395164]: 2026-01-20 15:39:35.194060928 +0000 UTC m=+0.192298323 container remove 70a7f7bd40aee47b9240590f2379042922a35144db2702f0ac81bfccc5d85ee0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_haibt, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 20 15:39:35 compute-0 systemd[1]: libpod-conmon-70a7f7bd40aee47b9240590f2379042922a35144db2702f0ac81bfccc5d85ee0.scope: Deactivated successfully.
Jan 20 15:39:35 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:39:35 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:39:35 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:39:35.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:39:35 compute-0 podman[395204]: 2026-01-20 15:39:35.356101615 +0000 UTC m=+0.051889615 container create e912f082b2b3de6d1d2eb34009330047d949fc83a93a75a481507b85e4432ec7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_bohr, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 15:39:35 compute-0 systemd[1]: Started libpod-conmon-e912f082b2b3de6d1d2eb34009330047d949fc83a93a75a481507b85e4432ec7.scope.
Jan 20 15:39:35 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3522: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:39:35 compute-0 podman[395204]: 2026-01-20 15:39:35.32841745 +0000 UTC m=+0.024205480 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:39:35 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:39:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e78524734a5a87f0afa58a0328ffe9e37cd56d24519faf7d0e2c399709861949/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 15:39:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e78524734a5a87f0afa58a0328ffe9e37cd56d24519faf7d0e2c399709861949/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 15:39:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e78524734a5a87f0afa58a0328ffe9e37cd56d24519faf7d0e2c399709861949/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 15:39:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e78524734a5a87f0afa58a0328ffe9e37cd56d24519faf7d0e2c399709861949/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 15:39:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e78524734a5a87f0afa58a0328ffe9e37cd56d24519faf7d0e2c399709861949/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 15:39:35 compute-0 podman[395204]: 2026-01-20 15:39:35.449454579 +0000 UTC m=+0.145242559 container init e912f082b2b3de6d1d2eb34009330047d949fc83a93a75a481507b85e4432ec7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_bohr, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:39:35 compute-0 podman[395204]: 2026-01-20 15:39:35.458488266 +0000 UTC m=+0.154276246 container start e912f082b2b3de6d1d2eb34009330047d949fc83a93a75a481507b85e4432ec7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_bohr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 20 15:39:35 compute-0 podman[395204]: 2026-01-20 15:39:35.461737404 +0000 UTC m=+0.157525404 container attach e912f082b2b3de6d1d2eb34009330047d949fc83a93a75a481507b85e4432ec7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_bohr, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 20 15:39:35 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:39:35 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:39:35 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:39:35.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:39:36 compute-0 nova_compute[250018]: 2026-01-20 15:39:36.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:39:36 compute-0 festive_bohr[395221]: --> passed data devices: 0 physical, 1 LVM
Jan 20 15:39:36 compute-0 festive_bohr[395221]: --> relative data size: 1.0
Jan 20 15:39:36 compute-0 festive_bohr[395221]: --> All data devices are unavailable
Jan 20 15:39:36 compute-0 systemd[1]: libpod-e912f082b2b3de6d1d2eb34009330047d949fc83a93a75a481507b85e4432ec7.scope: Deactivated successfully.
Jan 20 15:39:36 compute-0 podman[395204]: 2026-01-20 15:39:36.249759515 +0000 UTC m=+0.945547485 container died e912f082b2b3de6d1d2eb34009330047d949fc83a93a75a481507b85e4432ec7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_bohr, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 15:39:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-e78524734a5a87f0afa58a0328ffe9e37cd56d24519faf7d0e2c399709861949-merged.mount: Deactivated successfully.
Jan 20 15:39:36 compute-0 podman[395204]: 2026-01-20 15:39:36.299499001 +0000 UTC m=+0.995286981 container remove e912f082b2b3de6d1d2eb34009330047d949fc83a93a75a481507b85e4432ec7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_bohr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 20 15:39:36 compute-0 systemd[1]: libpod-conmon-e912f082b2b3de6d1d2eb34009330047d949fc83a93a75a481507b85e4432ec7.scope: Deactivated successfully.
Jan 20 15:39:36 compute-0 sudo[395098]: pam_unix(sudo:session): session closed for user root
Jan 20 15:39:36 compute-0 sudo[395251]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:39:36 compute-0 sudo[395251]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:39:36 compute-0 sudo[395251]: pam_unix(sudo:session): session closed for user root
Jan 20 15:39:36 compute-0 sudo[395276]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:39:36 compute-0 sudo[395276]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:39:36 compute-0 sudo[395276]: pam_unix(sudo:session): session closed for user root
Jan 20 15:39:36 compute-0 ceph-mon[74360]: pgmap v3522: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:39:36 compute-0 sudo[395301]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:39:36 compute-0 sudo[395301]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:39:36 compute-0 sudo[395301]: pam_unix(sudo:session): session closed for user root
Jan 20 15:39:36 compute-0 sudo[395326]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- lvm list --format json
Jan 20 15:39:36 compute-0 sudo[395326]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:39:36 compute-0 podman[395391]: 2026-01-20 15:39:36.885210077 +0000 UTC m=+0.043203209 container create 4d9993a86f1a5923f128215dbc76e647cee27fcb2e56a3af58619e783012e9c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_allen, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef)
Jan 20 15:39:36 compute-0 systemd[1]: Started libpod-conmon-4d9993a86f1a5923f128215dbc76e647cee27fcb2e56a3af58619e783012e9c3.scope.
Jan 20 15:39:36 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:39:36 compute-0 podman[395391]: 2026-01-20 15:39:36.950716383 +0000 UTC m=+0.108709535 container init 4d9993a86f1a5923f128215dbc76e647cee27fcb2e56a3af58619e783012e9c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_allen, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 15:39:36 compute-0 podman[395391]: 2026-01-20 15:39:36.956568552 +0000 UTC m=+0.114561684 container start 4d9993a86f1a5923f128215dbc76e647cee27fcb2e56a3af58619e783012e9c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_allen, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 20 15:39:36 compute-0 podman[395391]: 2026-01-20 15:39:36.960785137 +0000 UTC m=+0.118778259 container attach 4d9993a86f1a5923f128215dbc76e647cee27fcb2e56a3af58619e783012e9c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_allen, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 15:39:36 compute-0 inspiring_allen[395407]: 167 167
Jan 20 15:39:36 compute-0 systemd[1]: libpod-4d9993a86f1a5923f128215dbc76e647cee27fcb2e56a3af58619e783012e9c3.scope: Deactivated successfully.
Jan 20 15:39:36 compute-0 podman[395391]: 2026-01-20 15:39:36.866753364 +0000 UTC m=+0.024746536 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:39:36 compute-0 podman[395391]: 2026-01-20 15:39:36.963003667 +0000 UTC m=+0.120996789 container died 4d9993a86f1a5923f128215dbc76e647cee27fcb2e56a3af58619e783012e9c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_allen, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 20 15:39:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-34cc9fafd4f217ca6dd58e27efcf65d13a79cc874bb74e0ea856823482f5c329-merged.mount: Deactivated successfully.
Jan 20 15:39:37 compute-0 podman[395391]: 2026-01-20 15:39:37.000941632 +0000 UTC m=+0.158934774 container remove 4d9993a86f1a5923f128215dbc76e647cee27fcb2e56a3af58619e783012e9c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_allen, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 20 15:39:37 compute-0 systemd[1]: libpod-conmon-4d9993a86f1a5923f128215dbc76e647cee27fcb2e56a3af58619e783012e9c3.scope: Deactivated successfully.
Jan 20 15:39:37 compute-0 nova_compute[250018]: 2026-01-20 15:39:37.047 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:39:37 compute-0 podman[395429]: 2026-01-20 15:39:37.146834569 +0000 UTC m=+0.041728349 container create f3777eeba46e687d5feaf00b81347e44dc73cfd6aa4f3b137dd2d4340067a13f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_greider, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 20 15:39:37 compute-0 systemd[1]: Started libpod-conmon-f3777eeba46e687d5feaf00b81347e44dc73cfd6aa4f3b137dd2d4340067a13f.scope.
Jan 20 15:39:37 compute-0 nova_compute[250018]: 2026-01-20 15:39:37.193 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:39:37 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:39:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/997ea6d73e8a47db83c9e3c2122ac9d082e8feb37c49dc340d82fba5ef86e42a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 15:39:37 compute-0 podman[395429]: 2026-01-20 15:39:37.127593144 +0000 UTC m=+0.022486944 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:39:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/997ea6d73e8a47db83c9e3c2122ac9d082e8feb37c49dc340d82fba5ef86e42a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 15:39:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/997ea6d73e8a47db83c9e3c2122ac9d082e8feb37c49dc340d82fba5ef86e42a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 15:39:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/997ea6d73e8a47db83c9e3c2122ac9d082e8feb37c49dc340d82fba5ef86e42a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 15:39:37 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:39:37 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:39:37 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:39:37.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:39:37 compute-0 podman[395429]: 2026-01-20 15:39:37.236082002 +0000 UTC m=+0.130975802 container init f3777eeba46e687d5feaf00b81347e44dc73cfd6aa4f3b137dd2d4340067a13f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_greider, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 15:39:37 compute-0 podman[395429]: 2026-01-20 15:39:37.242109496 +0000 UTC m=+0.137003276 container start f3777eeba46e687d5feaf00b81347e44dc73cfd6aa4f3b137dd2d4340067a13f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_greider, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 20 15:39:37 compute-0 podman[395429]: 2026-01-20 15:39:37.245454677 +0000 UTC m=+0.140348477 container attach f3777eeba46e687d5feaf00b81347e44dc73cfd6aa4f3b137dd2d4340067a13f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_greider, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 15:39:37 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3523: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:39:37 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:39:37 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:39:37 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:39:37.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:39:37 compute-0 nostalgic_greider[395445]: {
Jan 20 15:39:37 compute-0 nostalgic_greider[395445]:     "0": [
Jan 20 15:39:37 compute-0 nostalgic_greider[395445]:         {
Jan 20 15:39:37 compute-0 nostalgic_greider[395445]:             "devices": [
Jan 20 15:39:37 compute-0 nostalgic_greider[395445]:                 "/dev/loop3"
Jan 20 15:39:37 compute-0 nostalgic_greider[395445]:             ],
Jan 20 15:39:37 compute-0 nostalgic_greider[395445]:             "lv_name": "ceph_lv0",
Jan 20 15:39:37 compute-0 nostalgic_greider[395445]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 15:39:37 compute-0 nostalgic_greider[395445]:             "lv_size": "7511998464",
Jan 20 15:39:37 compute-0 nostalgic_greider[395445]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e399cf45-e6b6-5393-99f1-75c601d3f188,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afc3740a-ccee-46f8-b343-22648cc89376,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 20 15:39:37 compute-0 nostalgic_greider[395445]:             "lv_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 15:39:37 compute-0 nostalgic_greider[395445]:             "name": "ceph_lv0",
Jan 20 15:39:37 compute-0 nostalgic_greider[395445]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 15:39:37 compute-0 nostalgic_greider[395445]:             "tags": {
Jan 20 15:39:37 compute-0 nostalgic_greider[395445]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 15:39:37 compute-0 nostalgic_greider[395445]:                 "ceph.block_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 15:39:37 compute-0 nostalgic_greider[395445]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 15:39:37 compute-0 nostalgic_greider[395445]:                 "ceph.cluster_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 15:39:37 compute-0 nostalgic_greider[395445]:                 "ceph.cluster_name": "ceph",
Jan 20 15:39:37 compute-0 nostalgic_greider[395445]:                 "ceph.crush_device_class": "",
Jan 20 15:39:37 compute-0 nostalgic_greider[395445]:                 "ceph.encrypted": "0",
Jan 20 15:39:37 compute-0 nostalgic_greider[395445]:                 "ceph.osd_fsid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 15:39:37 compute-0 nostalgic_greider[395445]:                 "ceph.osd_id": "0",
Jan 20 15:39:37 compute-0 nostalgic_greider[395445]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 15:39:37 compute-0 nostalgic_greider[395445]:                 "ceph.type": "block",
Jan 20 15:39:37 compute-0 nostalgic_greider[395445]:                 "ceph.vdo": "0"
Jan 20 15:39:37 compute-0 nostalgic_greider[395445]:             },
Jan 20 15:39:37 compute-0 nostalgic_greider[395445]:             "type": "block",
Jan 20 15:39:37 compute-0 nostalgic_greider[395445]:             "vg_name": "ceph_vg0"
Jan 20 15:39:37 compute-0 nostalgic_greider[395445]:         }
Jan 20 15:39:37 compute-0 nostalgic_greider[395445]:     ]
Jan 20 15:39:37 compute-0 nostalgic_greider[395445]: }
Jan 20 15:39:38 compute-0 systemd[1]: libpod-f3777eeba46e687d5feaf00b81347e44dc73cfd6aa4f3b137dd2d4340067a13f.scope: Deactivated successfully.
Jan 20 15:39:38 compute-0 podman[395429]: 2026-01-20 15:39:38.004265421 +0000 UTC m=+0.899159201 container died f3777eeba46e687d5feaf00b81347e44dc73cfd6aa4f3b137dd2d4340067a13f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_greider, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 20 15:39:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-997ea6d73e8a47db83c9e3c2122ac9d082e8feb37c49dc340d82fba5ef86e42a-merged.mount: Deactivated successfully.
Jan 20 15:39:38 compute-0 podman[395429]: 2026-01-20 15:39:38.054709736 +0000 UTC m=+0.949603516 container remove f3777eeba46e687d5feaf00b81347e44dc73cfd6aa4f3b137dd2d4340067a13f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_greider, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 20 15:39:38 compute-0 systemd[1]: libpod-conmon-f3777eeba46e687d5feaf00b81347e44dc73cfd6aa4f3b137dd2d4340067a13f.scope: Deactivated successfully.
Jan 20 15:39:38 compute-0 sudo[395326]: pam_unix(sudo:session): session closed for user root
Jan 20 15:39:38 compute-0 sudo[395467]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:39:38 compute-0 sudo[395467]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:39:38 compute-0 sudo[395467]: pam_unix(sudo:session): session closed for user root
Jan 20 15:39:38 compute-0 sudo[395492]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:39:38 compute-0 sudo[395492]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:39:38 compute-0 sudo[395492]: pam_unix(sudo:session): session closed for user root
Jan 20 15:39:38 compute-0 sudo[395517]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:39:38 compute-0 sudo[395517]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:39:38 compute-0 sudo[395517]: pam_unix(sudo:session): session closed for user root
Jan 20 15:39:38 compute-0 sudo[395542]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- raw list --format json
Jan 20 15:39:38 compute-0 sudo[395542]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:39:38 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:39:38 compute-0 ceph-mon[74360]: pgmap v3523: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:39:38 compute-0 podman[395608]: 2026-01-20 15:39:38.650397665 +0000 UTC m=+0.037589346 container create c8cb4cdcd24afc4a82e730b47f0c00bc2991d470ed6d453dd749a1316bf4b2b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_liskov, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 15:39:38 compute-0 systemd[1]: Started libpod-conmon-c8cb4cdcd24afc4a82e730b47f0c00bc2991d470ed6d453dd749a1316bf4b2b6.scope.
Jan 20 15:39:38 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:39:38 compute-0 podman[395608]: 2026-01-20 15:39:38.634657895 +0000 UTC m=+0.021849596 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:39:38 compute-0 podman[395608]: 2026-01-20 15:39:38.735122514 +0000 UTC m=+0.122314275 container init c8cb4cdcd24afc4a82e730b47f0c00bc2991d470ed6d453dd749a1316bf4b2b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_liskov, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 20 15:39:38 compute-0 podman[395608]: 2026-01-20 15:39:38.74157195 +0000 UTC m=+0.128763641 container start c8cb4cdcd24afc4a82e730b47f0c00bc2991d470ed6d453dd749a1316bf4b2b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_liskov, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 15:39:38 compute-0 exciting_liskov[395624]: 167 167
Jan 20 15:39:38 compute-0 podman[395608]: 2026-01-20 15:39:38.746109523 +0000 UTC m=+0.133301284 container attach c8cb4cdcd24afc4a82e730b47f0c00bc2991d470ed6d453dd749a1316bf4b2b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_liskov, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 15:39:38 compute-0 systemd[1]: libpod-c8cb4cdcd24afc4a82e730b47f0c00bc2991d470ed6d453dd749a1316bf4b2b6.scope: Deactivated successfully.
Jan 20 15:39:38 compute-0 podman[395608]: 2026-01-20 15:39:38.748016545 +0000 UTC m=+0.135208266 container died c8cb4cdcd24afc4a82e730b47f0c00bc2991d470ed6d453dd749a1316bf4b2b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_liskov, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 15:39:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-a7d415dda088276d89b65182530dd32cb56263dc2784c546803c0eaac1ef46b7-merged.mount: Deactivated successfully.
Jan 20 15:39:38 compute-0 podman[395608]: 2026-01-20 15:39:38.791933963 +0000 UTC m=+0.179125664 container remove c8cb4cdcd24afc4a82e730b47f0c00bc2991d470ed6d453dd749a1316bf4b2b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_liskov, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 20 15:39:38 compute-0 systemd[1]: libpod-conmon-c8cb4cdcd24afc4a82e730b47f0c00bc2991d470ed6d453dd749a1316bf4b2b6.scope: Deactivated successfully.
Jan 20 15:39:38 compute-0 podman[395648]: 2026-01-20 15:39:38.965521095 +0000 UTC m=+0.047361283 container create 71d1b62fd5bd105ee2e771b9cc33499f19a8e37e5f9d8e39baa04d1d21cf718b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_lederberg, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 15:39:39 compute-0 systemd[1]: Started libpod-conmon-71d1b62fd5bd105ee2e771b9cc33499f19a8e37e5f9d8e39baa04d1d21cf718b.scope.
Jan 20 15:39:39 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:39:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1cbb5af939c929a4eadefe4f59962de9d4b17be90ad5ffc8d5aab66e2309ad99/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 15:39:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1cbb5af939c929a4eadefe4f59962de9d4b17be90ad5ffc8d5aab66e2309ad99/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 15:39:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1cbb5af939c929a4eadefe4f59962de9d4b17be90ad5ffc8d5aab66e2309ad99/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 15:39:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1cbb5af939c929a4eadefe4f59962de9d4b17be90ad5ffc8d5aab66e2309ad99/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 15:39:39 compute-0 podman[395648]: 2026-01-20 15:39:38.949986591 +0000 UTC m=+0.031826809 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:39:39 compute-0 nova_compute[250018]: 2026-01-20 15:39:39.075 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:39:39 compute-0 podman[395648]: 2026-01-20 15:39:39.080526719 +0000 UTC m=+0.162366967 container init 71d1b62fd5bd105ee2e771b9cc33499f19a8e37e5f9d8e39baa04d1d21cf718b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_lederberg, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 20 15:39:39 compute-0 podman[395648]: 2026-01-20 15:39:39.086686767 +0000 UTC m=+0.168526975 container start 71d1b62fd5bd105ee2e771b9cc33499f19a8e37e5f9d8e39baa04d1d21cf718b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_lederberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 20 15:39:39 compute-0 podman[395648]: 2026-01-20 15:39:39.089817462 +0000 UTC m=+0.171657700 container attach 71d1b62fd5bd105ee2e771b9cc33499f19a8e37e5f9d8e39baa04d1d21cf718b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_lederberg, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 15:39:39 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:39:39 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:39:39 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:39:39.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:39:39 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3524: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:39:39 compute-0 elastic_lederberg[395664]: {
Jan 20 15:39:39 compute-0 elastic_lederberg[395664]:     "afc3740a-ccee-46f8-b343-22648cc89376": {
Jan 20 15:39:39 compute-0 elastic_lederberg[395664]:         "ceph_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 15:39:39 compute-0 elastic_lederberg[395664]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 20 15:39:39 compute-0 elastic_lederberg[395664]:         "osd_id": 0,
Jan 20 15:39:39 compute-0 elastic_lederberg[395664]:         "osd_uuid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 15:39:39 compute-0 elastic_lederberg[395664]:         "type": "bluestore"
Jan 20 15:39:39 compute-0 elastic_lederberg[395664]:     }
Jan 20 15:39:39 compute-0 elastic_lederberg[395664]: }
Jan 20 15:39:39 compute-0 systemd[1]: libpod-71d1b62fd5bd105ee2e771b9cc33499f19a8e37e5f9d8e39baa04d1d21cf718b.scope: Deactivated successfully.
Jan 20 15:39:39 compute-0 podman[395648]: 2026-01-20 15:39:39.878044278 +0000 UTC m=+0.959884516 container died 71d1b62fd5bd105ee2e771b9cc33499f19a8e37e5f9d8e39baa04d1d21cf718b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_lederberg, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 15:39:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-1cbb5af939c929a4eadefe4f59962de9d4b17be90ad5ffc8d5aab66e2309ad99-merged.mount: Deactivated successfully.
Jan 20 15:39:39 compute-0 podman[395648]: 2026-01-20 15:39:39.941835848 +0000 UTC m=+1.023676046 container remove 71d1b62fd5bd105ee2e771b9cc33499f19a8e37e5f9d8e39baa04d1d21cf718b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_lederberg, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 20 15:39:39 compute-0 systemd[1]: libpod-conmon-71d1b62fd5bd105ee2e771b9cc33499f19a8e37e5f9d8e39baa04d1d21cf718b.scope: Deactivated successfully.
Jan 20 15:39:39 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:39:39 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:39:39 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:39:39.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:39:39 compute-0 sudo[395542]: pam_unix(sudo:session): session closed for user root
Jan 20 15:39:39 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 15:39:40 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:39:40 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 15:39:40 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:39:40 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 75340b40-7468-4048-82a7-00dbd762e39c does not exist
Jan 20 15:39:40 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 53c8761e-2326-4ec9-b27a-0095d1ce07a2 does not exist
Jan 20 15:39:40 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev b8f8e9d0-49a0-4550-b70a-a2d2a4000333 does not exist
Jan 20 15:39:40 compute-0 sudo[395698]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:39:40 compute-0 sudo[395698]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:39:40 compute-0 sudo[395698]: pam_unix(sudo:session): session closed for user root
Jan 20 15:39:40 compute-0 sudo[395723]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 15:39:40 compute-0 sudo[395723]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:39:40 compute-0 sudo[395723]: pam_unix(sudo:session): session closed for user root
Jan 20 15:39:40 compute-0 ceph-mon[74360]: pgmap v3524: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:39:40 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:39:40 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:39:41 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:39:41 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:39:41 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:39:41.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:39:41 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3525: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:39:41 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:39:41 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:39:41 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:39:41.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:39:42 compute-0 nova_compute[250018]: 2026-01-20 15:39:42.195 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:39:42 compute-0 podman[395751]: 2026-01-20 15:39:42.494050949 +0000 UTC m=+0.081061071 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent)
Jan 20 15:39:42 compute-0 podman[395750]: 2026-01-20 15:39:42.528978501 +0000 UTC m=+0.116244339 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, 
org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 20 15:39:42 compute-0 ceph-mon[74360]: pgmap v3525: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:39:43 compute-0 nova_compute[250018]: 2026-01-20 15:39:43.002 250022 DEBUG oslo_concurrency.lockutils [None req-07c30026-d565-442e-ad3a-7cb02137afd1 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Acquiring lock "b7713a52-d016-43a7-824f-d4b6098efd0d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:39:43 compute-0 nova_compute[250018]: 2026-01-20 15:39:43.003 250022 DEBUG oslo_concurrency.lockutils [None req-07c30026-d565-442e-ad3a-7cb02137afd1 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Lock "b7713a52-d016-43a7-824f-d4b6098efd0d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:39:43 compute-0 nova_compute[250018]: 2026-01-20 15:39:43.024 250022 DEBUG nova.compute.manager [None req-07c30026-d565-442e-ad3a-7cb02137afd1 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: b7713a52-d016-43a7-824f-d4b6098efd0d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 15:39:43 compute-0 nova_compute[250018]: 2026-01-20 15:39:43.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:39:43 compute-0 nova_compute[250018]: 2026-01-20 15:39:43.051 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 15:39:43 compute-0 nova_compute[250018]: 2026-01-20 15:39:43.051 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 15:39:43 compute-0 nova_compute[250018]: 2026-01-20 15:39:43.077 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 20 15:39:43 compute-0 nova_compute[250018]: 2026-01-20 15:39:43.149 250022 DEBUG oslo_concurrency.lockutils [None req-07c30026-d565-442e-ad3a-7cb02137afd1 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:39:43 compute-0 nova_compute[250018]: 2026-01-20 15:39:43.149 250022 DEBUG oslo_concurrency.lockutils [None req-07c30026-d565-442e-ad3a-7cb02137afd1 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:39:43 compute-0 nova_compute[250018]: 2026-01-20 15:39:43.164 250022 DEBUG nova.virt.hardware [None req-07c30026-d565-442e-ad3a-7cb02137afd1 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 15:39:43 compute-0 nova_compute[250018]: 2026-01-20 15:39:43.164 250022 INFO nova.compute.claims [None req-07c30026-d565-442e-ad3a-7cb02137afd1 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: b7713a52-d016-43a7-824f-d4b6098efd0d] Claim successful on node compute-0.ctlplane.example.com
Jan 20 15:39:43 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:39:43 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:39:43 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:39:43.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:39:43 compute-0 nova_compute[250018]: 2026-01-20 15:39:43.270 250022 DEBUG oslo_concurrency.processutils [None req-07c30026-d565-442e-ad3a-7cb02137afd1 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:39:43 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3526: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:39:43 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:39:43 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:39:43 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3766277954' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:39:43 compute-0 nova_compute[250018]: 2026-01-20 15:39:43.749 250022 DEBUG oslo_concurrency.processutils [None req-07c30026-d565-442e-ad3a-7cb02137afd1 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:39:43 compute-0 nova_compute[250018]: 2026-01-20 15:39:43.755 250022 DEBUG nova.compute.provider_tree [None req-07c30026-d565-442e-ad3a-7cb02137afd1 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:39:43 compute-0 nova_compute[250018]: 2026-01-20 15:39:43.769 250022 DEBUG nova.scheduler.client.report [None req-07c30026-d565-442e-ad3a-7cb02137afd1 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:39:43 compute-0 nova_compute[250018]: 2026-01-20 15:39:43.796 250022 DEBUG oslo_concurrency.lockutils [None req-07c30026-d565-442e-ad3a-7cb02137afd1 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.647s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:39:43 compute-0 nova_compute[250018]: 2026-01-20 15:39:43.797 250022 DEBUG nova.compute.manager [None req-07c30026-d565-442e-ad3a-7cb02137afd1 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: b7713a52-d016-43a7-824f-d4b6098efd0d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 15:39:43 compute-0 nova_compute[250018]: 2026-01-20 15:39:43.872 250022 DEBUG nova.compute.manager [None req-07c30026-d565-442e-ad3a-7cb02137afd1 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: b7713a52-d016-43a7-824f-d4b6098efd0d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 20 15:39:43 compute-0 nova_compute[250018]: 2026-01-20 15:39:43.873 250022 DEBUG nova.network.neutron [None req-07c30026-d565-442e-ad3a-7cb02137afd1 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: b7713a52-d016-43a7-824f-d4b6098efd0d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 15:39:43 compute-0 nova_compute[250018]: 2026-01-20 15:39:43.899 250022 INFO nova.virt.libvirt.driver [None req-07c30026-d565-442e-ad3a-7cb02137afd1 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: b7713a52-d016-43a7-824f-d4b6098efd0d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 15:39:43 compute-0 nova_compute[250018]: 2026-01-20 15:39:43.926 250022 DEBUG nova.compute.manager [None req-07c30026-d565-442e-ad3a-7cb02137afd1 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: b7713a52-d016-43a7-824f-d4b6098efd0d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 15:39:43 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:39:43 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:39:43 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:39:43.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:39:44 compute-0 nova_compute[250018]: 2026-01-20 15:39:44.032 250022 DEBUG nova.compute.manager [None req-07c30026-d565-442e-ad3a-7cb02137afd1 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: b7713a52-d016-43a7-824f-d4b6098efd0d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 15:39:44 compute-0 nova_compute[250018]: 2026-01-20 15:39:44.033 250022 DEBUG nova.virt.libvirt.driver [None req-07c30026-d565-442e-ad3a-7cb02137afd1 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: b7713a52-d016-43a7-824f-d4b6098efd0d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 15:39:44 compute-0 nova_compute[250018]: 2026-01-20 15:39:44.033 250022 INFO nova.virt.libvirt.driver [None req-07c30026-d565-442e-ad3a-7cb02137afd1 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: b7713a52-d016-43a7-824f-d4b6098efd0d] Creating image(s)
Jan 20 15:39:44 compute-0 nova_compute[250018]: 2026-01-20 15:39:44.057 250022 DEBUG nova.storage.rbd_utils [None req-07c30026-d565-442e-ad3a-7cb02137afd1 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] rbd image b7713a52-d016-43a7-824f-d4b6098efd0d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:39:44 compute-0 nova_compute[250018]: 2026-01-20 15:39:44.081 250022 DEBUG nova.storage.rbd_utils [None req-07c30026-d565-442e-ad3a-7cb02137afd1 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] rbd image b7713a52-d016-43a7-824f-d4b6098efd0d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:39:44 compute-0 nova_compute[250018]: 2026-01-20 15:39:44.107 250022 DEBUG nova.storage.rbd_utils [None req-07c30026-d565-442e-ad3a-7cb02137afd1 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] rbd image b7713a52-d016-43a7-824f-d4b6098efd0d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:39:44 compute-0 nova_compute[250018]: 2026-01-20 15:39:44.111 250022 DEBUG oslo_concurrency.processutils [None req-07c30026-d565-442e-ad3a-7cb02137afd1 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:39:44 compute-0 nova_compute[250018]: 2026-01-20 15:39:44.136 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:39:44 compute-0 nova_compute[250018]: 2026-01-20 15:39:44.175 250022 DEBUG oslo_concurrency.processutils [None req-07c30026-d565-442e-ad3a-7cb02137afd1 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:39:44 compute-0 nova_compute[250018]: 2026-01-20 15:39:44.176 250022 DEBUG oslo_concurrency.lockutils [None req-07c30026-d565-442e-ad3a-7cb02137afd1 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Acquiring lock "82d5c1918fd7c974214c7a48c1793a7a82560462" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:39:44 compute-0 nova_compute[250018]: 2026-01-20 15:39:44.177 250022 DEBUG oslo_concurrency.lockutils [None req-07c30026-d565-442e-ad3a-7cb02137afd1 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:39:44 compute-0 nova_compute[250018]: 2026-01-20 15:39:44.177 250022 DEBUG oslo_concurrency.lockutils [None req-07c30026-d565-442e-ad3a-7cb02137afd1 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:39:44 compute-0 nova_compute[250018]: 2026-01-20 15:39:44.202 250022 DEBUG nova.storage.rbd_utils [None req-07c30026-d565-442e-ad3a-7cb02137afd1 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] rbd image b7713a52-d016-43a7-824f-d4b6098efd0d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:39:44 compute-0 nova_compute[250018]: 2026-01-20 15:39:44.205 250022 DEBUG oslo_concurrency.processutils [None req-07c30026-d565-442e-ad3a-7cb02137afd1 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 b7713a52-d016-43a7-824f-d4b6098efd0d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:39:44 compute-0 nova_compute[250018]: 2026-01-20 15:39:44.504 250022 DEBUG oslo_concurrency.processutils [None req-07c30026-d565-442e-ad3a-7cb02137afd1 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 b7713a52-d016-43a7-824f-d4b6098efd0d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.299s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:39:44 compute-0 nova_compute[250018]: 2026-01-20 15:39:44.539 250022 DEBUG nova.policy [None req-07c30026-d565-442e-ad3a-7cb02137afd1 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5985ef736503499a9f1d734cabc33ce5', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '728662ec7f654a3fb2e53a90b8707d7e', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 20 15:39:44 compute-0 ceph-mon[74360]: pgmap v3526: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:39:44 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3766277954' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:39:44 compute-0 nova_compute[250018]: 2026-01-20 15:39:44.584 250022 DEBUG nova.storage.rbd_utils [None req-07c30026-d565-442e-ad3a-7cb02137afd1 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] resizing rbd image b7713a52-d016-43a7-824f-d4b6098efd0d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 20 15:39:44 compute-0 nova_compute[250018]: 2026-01-20 15:39:44.692 250022 DEBUG nova.objects.instance [None req-07c30026-d565-442e-ad3a-7cb02137afd1 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Lazy-loading 'migration_context' on Instance uuid b7713a52-d016-43a7-824f-d4b6098efd0d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:39:44 compute-0 nova_compute[250018]: 2026-01-20 15:39:44.736 250022 DEBUG nova.virt.libvirt.driver [None req-07c30026-d565-442e-ad3a-7cb02137afd1 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: b7713a52-d016-43a7-824f-d4b6098efd0d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 15:39:44 compute-0 nova_compute[250018]: 2026-01-20 15:39:44.737 250022 DEBUG nova.virt.libvirt.driver [None req-07c30026-d565-442e-ad3a-7cb02137afd1 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: b7713a52-d016-43a7-824f-d4b6098efd0d] Ensure instance console log exists: /var/lib/nova/instances/b7713a52-d016-43a7-824f-d4b6098efd0d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 15:39:44 compute-0 nova_compute[250018]: 2026-01-20 15:39:44.737 250022 DEBUG oslo_concurrency.lockutils [None req-07c30026-d565-442e-ad3a-7cb02137afd1 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:39:44 compute-0 nova_compute[250018]: 2026-01-20 15:39:44.737 250022 DEBUG oslo_concurrency.lockutils [None req-07c30026-d565-442e-ad3a-7cb02137afd1 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:39:44 compute-0 nova_compute[250018]: 2026-01-20 15:39:44.738 250022 DEBUG oslo_concurrency.lockutils [None req-07c30026-d565-442e-ad3a-7cb02137afd1 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:39:45 compute-0 nova_compute[250018]: 2026-01-20 15:39:45.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:39:45 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:39:45 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:39:45 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:39:45.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:39:45 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3527: 321 pgs: 321 active+clean; 138 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 597 B/s rd, 242 KiB/s wr, 3 op/s
Jan 20 15:39:45 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:39:45 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:39:45 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:39:45.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:39:46 compute-0 nova_compute[250018]: 2026-01-20 15:39:46.067 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._run_image_cache_manager_pass run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:39:46 compute-0 nova_compute[250018]: 2026-01-20 15:39:46.068 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:39:46 compute-0 nova_compute[250018]: 2026-01-20 15:39:46.068 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:39:46 compute-0 nova_compute[250018]: 2026-01-20 15:39:46.069 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:39:46 compute-0 nova_compute[250018]: 2026-01-20 15:39:46.069 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:39:46 compute-0 nova_compute[250018]: 2026-01-20 15:39:46.070 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:39:46 compute-0 nova_compute[250018]: 2026-01-20 15:39:46.070 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:39:46 compute-0 nova_compute[250018]: 2026-01-20 15:39:46.110 250022 DEBUG nova.virt.libvirt.imagecache [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Verify base images _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:314
Jan 20 15:39:46 compute-0 nova_compute[250018]: 2026-01-20 15:39:46.110 250022 DEBUG nova.virt.libvirt.imagecache [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Image id a32b3e07-16d8-46fd-9a7b-c242c432fcf9 yields fingerprint 82d5c1918fd7c974214c7a48c1793a7a82560462 _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319
Jan 20 15:39:46 compute-0 nova_compute[250018]: 2026-01-20 15:39:46.111 250022 INFO nova.virt.libvirt.imagecache [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] image a32b3e07-16d8-46fd-9a7b-c242c432fcf9 at (/var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462): checking
Jan 20 15:39:46 compute-0 nova_compute[250018]: 2026-01-20 15:39:46.111 250022 DEBUG nova.virt.libvirt.imagecache [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] image a32b3e07-16d8-46fd-9a7b-c242c432fcf9 at (/var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462): image is in use _mark_in_use /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:279
Jan 20 15:39:46 compute-0 nova_compute[250018]: 2026-01-20 15:39:46.114 250022 DEBUG nova.virt.libvirt.imagecache [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Image id  yields fingerprint da39a3ee5e6b4b0d3255bfef95601890afd80709 _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319
Jan 20 15:39:46 compute-0 nova_compute[250018]: 2026-01-20 15:39:46.114 250022 DEBUG nova.virt.libvirt.imagecache [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] b7713a52-d016-43a7-824f-d4b6098efd0d is a valid instance name _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:126
Jan 20 15:39:46 compute-0 nova_compute[250018]: 2026-01-20 15:39:46.114 250022 WARNING nova.virt.libvirt.imagecache [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/5c59bb50cd8e2f04a0e24e31c5eec4210425eca7
Jan 20 15:39:46 compute-0 nova_compute[250018]: 2026-01-20 15:39:46.114 250022 WARNING nova.virt.libvirt.imagecache [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/a4ed0d2b98aa460c005e878d78a49ccb6f511f7c
Jan 20 15:39:46 compute-0 nova_compute[250018]: 2026-01-20 15:39:46.115 250022 WARNING nova.virt.libvirt.imagecache [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/d098465a3e855b629d7a6148197a821861fe0d4b
Jan 20 15:39:46 compute-0 nova_compute[250018]: 2026-01-20 15:39:46.115 250022 INFO nova.virt.libvirt.imagecache [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Active base files: /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462
Jan 20 15:39:46 compute-0 nova_compute[250018]: 2026-01-20 15:39:46.115 250022 INFO nova.virt.libvirt.imagecache [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Removable base files: /var/lib/nova/instances/_base/5c59bb50cd8e2f04a0e24e31c5eec4210425eca7 /var/lib/nova/instances/_base/a4ed0d2b98aa460c005e878d78a49ccb6f511f7c /var/lib/nova/instances/_base/d098465a3e855b629d7a6148197a821861fe0d4b
Jan 20 15:39:46 compute-0 nova_compute[250018]: 2026-01-20 15:39:46.115 250022 INFO nova.virt.libvirt.imagecache [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/5c59bb50cd8e2f04a0e24e31c5eec4210425eca7
Jan 20 15:39:46 compute-0 nova_compute[250018]: 2026-01-20 15:39:46.116 250022 INFO nova.virt.libvirt.imagecache [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/a4ed0d2b98aa460c005e878d78a49ccb6f511f7c
Jan 20 15:39:46 compute-0 nova_compute[250018]: 2026-01-20 15:39:46.116 250022 INFO nova.virt.libvirt.imagecache [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/d098465a3e855b629d7a6148197a821861fe0d4b
Jan 20 15:39:46 compute-0 nova_compute[250018]: 2026-01-20 15:39:46.116 250022 DEBUG nova.virt.libvirt.imagecache [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Verification complete _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:350
Jan 20 15:39:46 compute-0 nova_compute[250018]: 2026-01-20 15:39:46.116 250022 DEBUG nova.virt.libvirt.imagecache [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Verify swap images _age_and_verify_swap_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:299
Jan 20 15:39:46 compute-0 nova_compute[250018]: 2026-01-20 15:39:46.116 250022 DEBUG nova.virt.libvirt.imagecache [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Verify ephemeral images _age_and_verify_ephemeral_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:284
Jan 20 15:39:46 compute-0 ceph-mon[74360]: pgmap v3527: 321 pgs: 321 active+clean; 138 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 597 B/s rd, 242 KiB/s wr, 3 op/s
Jan 20 15:39:46 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:39:46.957 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=84, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=83) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:39:46 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:39:46.958 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 15:39:47 compute-0 nova_compute[250018]: 2026-01-20 15:39:46.999 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:39:47 compute-0 nova_compute[250018]: 2026-01-20 15:39:47.197 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:39:47 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:39:47 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:39:47 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:39:47.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:39:47 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3528: 321 pgs: 321 active+clean; 159 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.5 MiB/s wr, 26 op/s
Jan 20 15:39:47 compute-0 nova_compute[250018]: 2026-01-20 15:39:47.459 250022 DEBUG nova.network.neutron [None req-07c30026-d565-442e-ad3a-7cb02137afd1 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: b7713a52-d016-43a7-824f-d4b6098efd0d] Successfully created port: 8daf5d78-feec-462d-a8f7-f15e4b8e2a16 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 20 15:39:47 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:39:47 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:39:47 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:39:47.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:39:48 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:39:48 compute-0 ceph-mon[74360]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #180. Immutable memtables: 0.
Jan 20 15:39:48 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:39:48.467202) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 15:39:48 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:856] [default] [JOB 111] Flushing memtable with next log file: 180
Jan 20 15:39:48 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768923588467324, "job": 111, "event": "flush_started", "num_memtables": 1, "num_entries": 1147, "num_deletes": 255, "total_data_size": 1868805, "memory_usage": 1894104, "flush_reason": "Manual Compaction"}
Jan 20 15:39:48 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:885] [default] [JOB 111] Level-0 flush table #181: started
Jan 20 15:39:48 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768923588478538, "cf_name": "default", "job": 111, "event": "table_file_creation", "file_number": 181, "file_size": 1825659, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 78073, "largest_seqno": 79218, "table_properties": {"data_size": 1820203, "index_size": 2851, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 11654, "raw_average_key_size": 19, "raw_value_size": 1809208, "raw_average_value_size": 3025, "num_data_blocks": 127, "num_entries": 598, "num_filter_entries": 598, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768923484, "oldest_key_time": 1768923484, "file_creation_time": 1768923588, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1688fb6f-f397-4aac-a0e8-874ff91d97a7", "db_session_id": "5V3N2TVXYZBCXP55EZHK", "orig_file_number": 181, "seqno_to_time_mapping": "N/A"}}
Jan 20 15:39:48 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 111] Flush lasted 11344 microseconds, and 5718 cpu microseconds.
Jan 20 15:39:48 compute-0 ceph-mon[74360]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 15:39:48 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:39:48.478595) [db/flush_job.cc:967] [default] [JOB 111] Level-0 flush table #181: 1825659 bytes OK
Jan 20 15:39:48 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:39:48.478617) [db/memtable_list.cc:519] [default] Level-0 commit table #181 started
Jan 20 15:39:48 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:39:48.480202) [db/memtable_list.cc:722] [default] Level-0 commit table #181: memtable #1 done
Jan 20 15:39:48 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:39:48.480214) EVENT_LOG_v1 {"time_micros": 1768923588480210, "job": 111, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 15:39:48 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:39:48.480229) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 15:39:48 compute-0 ceph-mon[74360]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 111] Try to delete WAL files size 1863653, prev total WAL file size 1863653, number of live WAL files 2.
Jan 20 15:39:48 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000177.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:39:48 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:39:48.480872) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033323733' seq:72057594037927935, type:22 .. '6C6F676D0033353234' seq:0, type:0; will stop at (end)
Jan 20 15:39:48 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 112] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 15:39:48 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 111 Base level 0, inputs: [181(1782KB)], [179(10MB)]
Jan 20 15:39:48 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768923588480933, "job": 112, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [181], "files_L6": [179], "score": -1, "input_data_size": 12699211, "oldest_snapshot_seqno": -1}
Jan 20 15:39:48 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 112] Generated table #182: 10480 keys, 12578189 bytes, temperature: kUnknown
Jan 20 15:39:48 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768923588547854, "cf_name": "default", "job": 112, "event": "table_file_creation", "file_number": 182, "file_size": 12578189, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12512142, "index_size": 38734, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 26245, "raw_key_size": 277330, "raw_average_key_size": 26, "raw_value_size": 12330278, "raw_average_value_size": 1176, "num_data_blocks": 1471, "num_entries": 10480, "num_filter_entries": 10480, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768917305, "oldest_key_time": 0, "file_creation_time": 1768923588, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1688fb6f-f397-4aac-a0e8-874ff91d97a7", "db_session_id": "5V3N2TVXYZBCXP55EZHK", "orig_file_number": 182, "seqno_to_time_mapping": "N/A"}}
Jan 20 15:39:48 compute-0 ceph-mon[74360]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 15:39:48 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:39:48.548189) [db/compaction/compaction_job.cc:1663] [default] [JOB 112] Compacted 1@0 + 1@6 files to L6 => 12578189 bytes
Jan 20 15:39:48 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:39:48.549717) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 189.4 rd, 187.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 10.4 +0.0 blob) out(12.0 +0.0 blob), read-write-amplify(13.8) write-amplify(6.9) OK, records in: 11003, records dropped: 523 output_compression: NoCompression
Jan 20 15:39:48 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:39:48.549738) EVENT_LOG_v1 {"time_micros": 1768923588549728, "job": 112, "event": "compaction_finished", "compaction_time_micros": 67047, "compaction_time_cpu_micros": 30471, "output_level": 6, "num_output_files": 1, "total_output_size": 12578189, "num_input_records": 11003, "num_output_records": 10480, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 15:39:48 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000181.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:39:48 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768923588550258, "job": 112, "event": "table_file_deletion", "file_number": 181}
Jan 20 15:39:48 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000179.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:39:48 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768923588553032, "job": 112, "event": "table_file_deletion", "file_number": 179}
Jan 20 15:39:48 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:39:48.480782) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:39:48 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:39:48.553150) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:39:48 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:39:48.553156) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:39:48 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:39:48.553158) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:39:48 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:39:48.553159) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:39:48 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:39:48.553160) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:39:48 compute-0 ceph-mon[74360]: pgmap v3528: 321 pgs: 321 active+clean; 159 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.5 MiB/s wr, 26 op/s
Jan 20 15:39:49 compute-0 nova_compute[250018]: 2026-01-20 15:39:49.081 250022 DEBUG nova.network.neutron [None req-07c30026-d565-442e-ad3a-7cb02137afd1 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: b7713a52-d016-43a7-824f-d4b6098efd0d] Successfully updated port: 8daf5d78-feec-462d-a8f7-f15e4b8e2a16 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 20 15:39:49 compute-0 nova_compute[250018]: 2026-01-20 15:39:49.105 250022 DEBUG oslo_concurrency.lockutils [None req-07c30026-d565-442e-ad3a-7cb02137afd1 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Acquiring lock "refresh_cache-b7713a52-d016-43a7-824f-d4b6098efd0d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:39:49 compute-0 nova_compute[250018]: 2026-01-20 15:39:49.105 250022 DEBUG oslo_concurrency.lockutils [None req-07c30026-d565-442e-ad3a-7cb02137afd1 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Acquired lock "refresh_cache-b7713a52-d016-43a7-824f-d4b6098efd0d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:39:49 compute-0 nova_compute[250018]: 2026-01-20 15:39:49.105 250022 DEBUG nova.network.neutron [None req-07c30026-d565-442e-ad3a-7cb02137afd1 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: b7713a52-d016-43a7-824f-d4b6098efd0d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 15:39:49 compute-0 nova_compute[250018]: 2026-01-20 15:39:49.138 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:39:49 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:39:49 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:39:49 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:39:49.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:39:49 compute-0 nova_compute[250018]: 2026-01-20 15:39:49.251 250022 DEBUG nova.compute.manager [req-b295181f-25c7-486d-96c0-f042b08093f6 req-a64d3bcf-8569-4b14-adbb-83b18eba725c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b7713a52-d016-43a7-824f-d4b6098efd0d] Received event network-changed-8daf5d78-feec-462d-a8f7-f15e4b8e2a16 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:39:49 compute-0 nova_compute[250018]: 2026-01-20 15:39:49.252 250022 DEBUG nova.compute.manager [req-b295181f-25c7-486d-96c0-f042b08093f6 req-a64d3bcf-8569-4b14-adbb-83b18eba725c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b7713a52-d016-43a7-824f-d4b6098efd0d] Refreshing instance network info cache due to event network-changed-8daf5d78-feec-462d-a8f7-f15e4b8e2a16. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 15:39:49 compute-0 nova_compute[250018]: 2026-01-20 15:39:49.252 250022 DEBUG oslo_concurrency.lockutils [req-b295181f-25c7-486d-96c0-f042b08093f6 req-a64d3bcf-8569-4b14-adbb-83b18eba725c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-b7713a52-d016-43a7-824f-d4b6098efd0d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:39:49 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3529: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:39:49 compute-0 sshd-session[395985]: Invalid user ubuntu from 134.122.57.138 port 59478
Jan 20 15:39:49 compute-0 sshd-session[395985]: Connection closed by invalid user ubuntu 134.122.57.138 port 59478 [preauth]
Jan 20 15:39:49 compute-0 ceph-mon[74360]: pgmap v3529: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:39:49 compute-0 nova_compute[250018]: 2026-01-20 15:39:49.921 250022 DEBUG nova.network.neutron [None req-07c30026-d565-442e-ad3a-7cb02137afd1 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: b7713a52-d016-43a7-824f-d4b6098efd0d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 15:39:49 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:39:49 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:39:49 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:39:49.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:39:51 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:39:51 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:39:51 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:39:51.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:39:51 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3530: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:39:51 compute-0 sudo[395989]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:39:51 compute-0 sudo[395989]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:39:51 compute-0 sudo[395989]: pam_unix(sudo:session): session closed for user root
Jan 20 15:39:51 compute-0 ceph-mon[74360]: pgmap v3530: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:39:51 compute-0 sudo[396014]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:39:51 compute-0 sudo[396014]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:39:51 compute-0 sudo[396014]: pam_unix(sudo:session): session closed for user root
Jan 20 15:39:51 compute-0 nova_compute[250018]: 2026-01-20 15:39:51.923 250022 DEBUG nova.network.neutron [None req-07c30026-d565-442e-ad3a-7cb02137afd1 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: b7713a52-d016-43a7-824f-d4b6098efd0d] Updating instance_info_cache with network_info: [{"id": "8daf5d78-feec-462d-a8f7-f15e4b8e2a16", "address": "fa:16:3e:22:38:1d", "network": {"id": "e8c67e63-02c8-44ef-812a-e2391a4e1afe", "bridge": "br-int", "label": "tempest-network-smoke--210660804", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "728662ec7f654a3fb2e53a90b8707d7e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8daf5d78-fe", "ovs_interfaceid": "8daf5d78-feec-462d-a8f7-f15e4b8e2a16", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:39:51 compute-0 nova_compute[250018]: 2026-01-20 15:39:51.948 250022 DEBUG oslo_concurrency.lockutils [None req-07c30026-d565-442e-ad3a-7cb02137afd1 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Releasing lock "refresh_cache-b7713a52-d016-43a7-824f-d4b6098efd0d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:39:51 compute-0 nova_compute[250018]: 2026-01-20 15:39:51.948 250022 DEBUG nova.compute.manager [None req-07c30026-d565-442e-ad3a-7cb02137afd1 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: b7713a52-d016-43a7-824f-d4b6098efd0d] Instance network_info: |[{"id": "8daf5d78-feec-462d-a8f7-f15e4b8e2a16", "address": "fa:16:3e:22:38:1d", "network": {"id": "e8c67e63-02c8-44ef-812a-e2391a4e1afe", "bridge": "br-int", "label": "tempest-network-smoke--210660804", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "728662ec7f654a3fb2e53a90b8707d7e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8daf5d78-fe", "ovs_interfaceid": "8daf5d78-feec-462d-a8f7-f15e4b8e2a16", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 20 15:39:51 compute-0 nova_compute[250018]: 2026-01-20 15:39:51.949 250022 DEBUG oslo_concurrency.lockutils [req-b295181f-25c7-486d-96c0-f042b08093f6 req-a64d3bcf-8569-4b14-adbb-83b18eba725c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-b7713a52-d016-43a7-824f-d4b6098efd0d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:39:51 compute-0 nova_compute[250018]: 2026-01-20 15:39:51.949 250022 DEBUG nova.network.neutron [req-b295181f-25c7-486d-96c0-f042b08093f6 req-a64d3bcf-8569-4b14-adbb-83b18eba725c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b7713a52-d016-43a7-824f-d4b6098efd0d] Refreshing network info cache for port 8daf5d78-feec-462d-a8f7-f15e4b8e2a16 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 15:39:51 compute-0 nova_compute[250018]: 2026-01-20 15:39:51.953 250022 DEBUG nova.virt.libvirt.driver [None req-07c30026-d565-442e-ad3a-7cb02137afd1 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: b7713a52-d016-43a7-824f-d4b6098efd0d] Start _get_guest_xml network_info=[{"id": "8daf5d78-feec-462d-a8f7-f15e4b8e2a16", "address": "fa:16:3e:22:38:1d", "network": {"id": "e8c67e63-02c8-44ef-812a-e2391a4e1afe", "bridge": "br-int", "label": "tempest-network-smoke--210660804", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "728662ec7f654a3fb2e53a90b8707d7e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8daf5d78-fe", "ovs_interfaceid": "8daf5d78-feec-462d-a8f7-f15e4b8e2a16", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encrypted': False, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'guest_format': None, 'size': 0, 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 15:39:51 compute-0 nova_compute[250018]: 2026-01-20 15:39:51.958 250022 WARNING nova.virt.libvirt.driver [None req-07c30026-d565-442e-ad3a-7cb02137afd1 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:39:51 compute-0 nova_compute[250018]: 2026-01-20 15:39:51.964 250022 DEBUG nova.virt.libvirt.host [None req-07c30026-d565-442e-ad3a-7cb02137afd1 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 15:39:51 compute-0 nova_compute[250018]: 2026-01-20 15:39:51.965 250022 DEBUG nova.virt.libvirt.host [None req-07c30026-d565-442e-ad3a-7cb02137afd1 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 15:39:51 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:39:51 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:39:51 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:39:51.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:39:51 compute-0 nova_compute[250018]: 2026-01-20 15:39:51.971 250022 DEBUG nova.virt.libvirt.host [None req-07c30026-d565-442e-ad3a-7cb02137afd1 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 15:39:51 compute-0 nova_compute[250018]: 2026-01-20 15:39:51.972 250022 DEBUG nova.virt.libvirt.host [None req-07c30026-d565-442e-ad3a-7cb02137afd1 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 15:39:51 compute-0 nova_compute[250018]: 2026-01-20 15:39:51.973 250022 DEBUG nova.virt.libvirt.driver [None req-07c30026-d565-442e-ad3a-7cb02137afd1 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 15:39:51 compute-0 nova_compute[250018]: 2026-01-20 15:39:51.973 250022 DEBUG nova.virt.hardware [None req-07c30026-d565-442e-ad3a-7cb02137afd1 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 15:39:51 compute-0 nova_compute[250018]: 2026-01-20 15:39:51.974 250022 DEBUG nova.virt.hardware [None req-07c30026-d565-442e-ad3a-7cb02137afd1 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 15:39:51 compute-0 nova_compute[250018]: 2026-01-20 15:39:51.974 250022 DEBUG nova.virt.hardware [None req-07c30026-d565-442e-ad3a-7cb02137afd1 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 15:39:51 compute-0 nova_compute[250018]: 2026-01-20 15:39:51.974 250022 DEBUG nova.virt.hardware [None req-07c30026-d565-442e-ad3a-7cb02137afd1 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 15:39:51 compute-0 nova_compute[250018]: 2026-01-20 15:39:51.975 250022 DEBUG nova.virt.hardware [None req-07c30026-d565-442e-ad3a-7cb02137afd1 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 15:39:51 compute-0 nova_compute[250018]: 2026-01-20 15:39:51.975 250022 DEBUG nova.virt.hardware [None req-07c30026-d565-442e-ad3a-7cb02137afd1 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 15:39:51 compute-0 nova_compute[250018]: 2026-01-20 15:39:51.975 250022 DEBUG nova.virt.hardware [None req-07c30026-d565-442e-ad3a-7cb02137afd1 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 15:39:51 compute-0 nova_compute[250018]: 2026-01-20 15:39:51.975 250022 DEBUG nova.virt.hardware [None req-07c30026-d565-442e-ad3a-7cb02137afd1 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 15:39:51 compute-0 nova_compute[250018]: 2026-01-20 15:39:51.976 250022 DEBUG nova.virt.hardware [None req-07c30026-d565-442e-ad3a-7cb02137afd1 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 15:39:51 compute-0 nova_compute[250018]: 2026-01-20 15:39:51.976 250022 DEBUG nova.virt.hardware [None req-07c30026-d565-442e-ad3a-7cb02137afd1 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 15:39:51 compute-0 nova_compute[250018]: 2026-01-20 15:39:51.976 250022 DEBUG nova.virt.hardware [None req-07c30026-d565-442e-ad3a-7cb02137afd1 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 15:39:51 compute-0 nova_compute[250018]: 2026-01-20 15:39:51.979 250022 DEBUG oslo_concurrency.processutils [None req-07c30026-d565-442e-ad3a-7cb02137afd1 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:39:52 compute-0 nova_compute[250018]: 2026-01-20 15:39:52.198 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:39:52 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 15:39:52 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2582616422' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:39:52 compute-0 nova_compute[250018]: 2026-01-20 15:39:52.435 250022 DEBUG oslo_concurrency.processutils [None req-07c30026-d565-442e-ad3a-7cb02137afd1 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:39:52 compute-0 nova_compute[250018]: 2026-01-20 15:39:52.463 250022 DEBUG nova.storage.rbd_utils [None req-07c30026-d565-442e-ad3a-7cb02137afd1 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] rbd image b7713a52-d016-43a7-824f-d4b6098efd0d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:39:52 compute-0 nova_compute[250018]: 2026-01-20 15:39:52.467 250022 DEBUG oslo_concurrency.processutils [None req-07c30026-d565-442e-ad3a-7cb02137afd1 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:39:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:39:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:39:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:39:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:39:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:39:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:39:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Optimize plan auto_2026-01-20_15:39:52
Jan 20 15:39:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 15:39:52 compute-0 ceph-mgr[74653]: [balancer INFO root] do_upmap
Jan 20 15:39:52 compute-0 ceph-mgr[74653]: [balancer INFO root] pools ['images', 'cephfs.cephfs.meta', 'default.rgw.control', 'cephfs.cephfs.data', 'default.rgw.meta', '.rgw.root', 'volumes', 'vms', '.mgr', 'default.rgw.log', 'backups']
Jan 20 15:39:52 compute-0 ceph-mgr[74653]: [balancer INFO root] prepared 0/10 changes
Jan 20 15:39:52 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/2582616422' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:39:52 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 15:39:52 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2245512469' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:39:52 compute-0 nova_compute[250018]: 2026-01-20 15:39:52.894 250022 DEBUG oslo_concurrency.processutils [None req-07c30026-d565-442e-ad3a-7cb02137afd1 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:39:52 compute-0 nova_compute[250018]: 2026-01-20 15:39:52.896 250022 DEBUG nova.virt.libvirt.vif [None req-07c30026-d565-442e-ad3a-7cb02137afd1 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T15:39:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-342561427-access_point-2025775659',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-342561427-access_point-2025775659',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-342561427-acc',id=216,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHTC3JVSL9L2cjPYrp5GPk4BJMNU1U1jZv3sq/jJjTIgWHt20GA+MFjDYv7hWBtJxAW0nzGAa/cchqNq+8We8aBg4dy8xIHQ+Z9Mh/+Vq9yeWt6/Cd2vkDS34jF534uifQ==',key_name='tempest-TestSecurityGroupsBasicOps-1945297385',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='728662ec7f654a3fb2e53a90b8707d7e',ramdisk_id='',reservation_id='r-l4ae3vnu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-342561427',owner_user_name='tempest-TestSecurityGroupsBasicOps-342561427-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T15:39:43Z,user_data=None,user_id='5985ef736503499a9f1d734cabc33ce5',uuid=b7713a52-d016-43a7-824f-d4b6098efd0d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8daf5d78-feec-462d-a8f7-f15e4b8e2a16", "address": "fa:16:3e:22:38:1d", "network": {"id": "e8c67e63-02c8-44ef-812a-e2391a4e1afe", "bridge": "br-int", "label": "tempest-network-smoke--210660804", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "728662ec7f654a3fb2e53a90b8707d7e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8daf5d78-fe", "ovs_interfaceid": "8daf5d78-feec-462d-a8f7-f15e4b8e2a16", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 15:39:52 compute-0 nova_compute[250018]: 2026-01-20 15:39:52.896 250022 DEBUG nova.network.os_vif_util [None req-07c30026-d565-442e-ad3a-7cb02137afd1 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Converting VIF {"id": "8daf5d78-feec-462d-a8f7-f15e4b8e2a16", "address": "fa:16:3e:22:38:1d", "network": {"id": "e8c67e63-02c8-44ef-812a-e2391a4e1afe", "bridge": "br-int", "label": "tempest-network-smoke--210660804", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "728662ec7f654a3fb2e53a90b8707d7e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8daf5d78-fe", "ovs_interfaceid": "8daf5d78-feec-462d-a8f7-f15e4b8e2a16", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:39:52 compute-0 nova_compute[250018]: 2026-01-20 15:39:52.897 250022 DEBUG nova.network.os_vif_util [None req-07c30026-d565-442e-ad3a-7cb02137afd1 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:22:38:1d,bridge_name='br-int',has_traffic_filtering=True,id=8daf5d78-feec-462d-a8f7-f15e4b8e2a16,network=Network(e8c67e63-02c8-44ef-812a-e2391a4e1afe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8daf5d78-fe') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:39:52 compute-0 nova_compute[250018]: 2026-01-20 15:39:52.898 250022 DEBUG nova.objects.instance [None req-07c30026-d565-442e-ad3a-7cb02137afd1 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Lazy-loading 'pci_devices' on Instance uuid b7713a52-d016-43a7-824f-d4b6098efd0d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:39:52 compute-0 nova_compute[250018]: 2026-01-20 15:39:52.916 250022 DEBUG nova.virt.libvirt.driver [None req-07c30026-d565-442e-ad3a-7cb02137afd1 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: b7713a52-d016-43a7-824f-d4b6098efd0d] End _get_guest_xml xml=<domain type="kvm">
Jan 20 15:39:52 compute-0 nova_compute[250018]:   <uuid>b7713a52-d016-43a7-824f-d4b6098efd0d</uuid>
Jan 20 15:39:52 compute-0 nova_compute[250018]:   <name>instance-000000d8</name>
Jan 20 15:39:52 compute-0 nova_compute[250018]:   <memory>131072</memory>
Jan 20 15:39:52 compute-0 nova_compute[250018]:   <vcpu>1</vcpu>
Jan 20 15:39:52 compute-0 nova_compute[250018]:   <metadata>
Jan 20 15:39:52 compute-0 nova_compute[250018]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 15:39:52 compute-0 nova_compute[250018]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 15:39:52 compute-0 nova_compute[250018]:       <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-342561427-access_point-2025775659</nova:name>
Jan 20 15:39:52 compute-0 nova_compute[250018]:       <nova:creationTime>2026-01-20 15:39:51</nova:creationTime>
Jan 20 15:39:52 compute-0 nova_compute[250018]:       <nova:flavor name="m1.nano">
Jan 20 15:39:52 compute-0 nova_compute[250018]:         <nova:memory>128</nova:memory>
Jan 20 15:39:52 compute-0 nova_compute[250018]:         <nova:disk>1</nova:disk>
Jan 20 15:39:52 compute-0 nova_compute[250018]:         <nova:swap>0</nova:swap>
Jan 20 15:39:52 compute-0 nova_compute[250018]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 15:39:52 compute-0 nova_compute[250018]:         <nova:vcpus>1</nova:vcpus>
Jan 20 15:39:52 compute-0 nova_compute[250018]:       </nova:flavor>
Jan 20 15:39:52 compute-0 nova_compute[250018]:       <nova:owner>
Jan 20 15:39:52 compute-0 nova_compute[250018]:         <nova:user uuid="5985ef736503499a9f1d734cabc33ce5">tempest-TestSecurityGroupsBasicOps-342561427-project-member</nova:user>
Jan 20 15:39:52 compute-0 nova_compute[250018]:         <nova:project uuid="728662ec7f654a3fb2e53a90b8707d7e">tempest-TestSecurityGroupsBasicOps-342561427</nova:project>
Jan 20 15:39:52 compute-0 nova_compute[250018]:       </nova:owner>
Jan 20 15:39:52 compute-0 nova_compute[250018]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 15:39:52 compute-0 nova_compute[250018]:       <nova:ports>
Jan 20 15:39:52 compute-0 nova_compute[250018]:         <nova:port uuid="8daf5d78-feec-462d-a8f7-f15e4b8e2a16">
Jan 20 15:39:52 compute-0 nova_compute[250018]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Jan 20 15:39:52 compute-0 nova_compute[250018]:         </nova:port>
Jan 20 15:39:52 compute-0 nova_compute[250018]:       </nova:ports>
Jan 20 15:39:52 compute-0 nova_compute[250018]:     </nova:instance>
Jan 20 15:39:52 compute-0 nova_compute[250018]:   </metadata>
Jan 20 15:39:52 compute-0 nova_compute[250018]:   <sysinfo type="smbios">
Jan 20 15:39:52 compute-0 nova_compute[250018]:     <system>
Jan 20 15:39:52 compute-0 nova_compute[250018]:       <entry name="manufacturer">RDO</entry>
Jan 20 15:39:52 compute-0 nova_compute[250018]:       <entry name="product">OpenStack Compute</entry>
Jan 20 15:39:52 compute-0 nova_compute[250018]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 15:39:52 compute-0 nova_compute[250018]:       <entry name="serial">b7713a52-d016-43a7-824f-d4b6098efd0d</entry>
Jan 20 15:39:52 compute-0 nova_compute[250018]:       <entry name="uuid">b7713a52-d016-43a7-824f-d4b6098efd0d</entry>
Jan 20 15:39:52 compute-0 nova_compute[250018]:       <entry name="family">Virtual Machine</entry>
Jan 20 15:39:52 compute-0 nova_compute[250018]:     </system>
Jan 20 15:39:52 compute-0 nova_compute[250018]:   </sysinfo>
Jan 20 15:39:52 compute-0 nova_compute[250018]:   <os>
Jan 20 15:39:52 compute-0 nova_compute[250018]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 15:39:52 compute-0 nova_compute[250018]:     <boot dev="hd"/>
Jan 20 15:39:52 compute-0 nova_compute[250018]:     <smbios mode="sysinfo"/>
Jan 20 15:39:52 compute-0 nova_compute[250018]:   </os>
Jan 20 15:39:52 compute-0 nova_compute[250018]:   <features>
Jan 20 15:39:52 compute-0 nova_compute[250018]:     <acpi/>
Jan 20 15:39:52 compute-0 nova_compute[250018]:     <apic/>
Jan 20 15:39:52 compute-0 nova_compute[250018]:     <vmcoreinfo/>
Jan 20 15:39:52 compute-0 nova_compute[250018]:   </features>
Jan 20 15:39:52 compute-0 nova_compute[250018]:   <clock offset="utc">
Jan 20 15:39:52 compute-0 nova_compute[250018]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 15:39:52 compute-0 nova_compute[250018]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 15:39:52 compute-0 nova_compute[250018]:     <timer name="hpet" present="no"/>
Jan 20 15:39:52 compute-0 nova_compute[250018]:   </clock>
Jan 20 15:39:52 compute-0 nova_compute[250018]:   <cpu mode="custom" match="exact">
Jan 20 15:39:52 compute-0 nova_compute[250018]:     <model>Nehalem</model>
Jan 20 15:39:52 compute-0 nova_compute[250018]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 15:39:52 compute-0 nova_compute[250018]:   </cpu>
Jan 20 15:39:52 compute-0 nova_compute[250018]:   <devices>
Jan 20 15:39:52 compute-0 nova_compute[250018]:     <disk type="network" device="disk">
Jan 20 15:39:52 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 15:39:52 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/b7713a52-d016-43a7-824f-d4b6098efd0d_disk">
Jan 20 15:39:52 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 15:39:52 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 15:39:52 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 15:39:52 compute-0 nova_compute[250018]:       </source>
Jan 20 15:39:52 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 15:39:52 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 15:39:52 compute-0 nova_compute[250018]:       </auth>
Jan 20 15:39:52 compute-0 nova_compute[250018]:       <target dev="vda" bus="virtio"/>
Jan 20 15:39:52 compute-0 nova_compute[250018]:     </disk>
Jan 20 15:39:52 compute-0 nova_compute[250018]:     <disk type="network" device="cdrom">
Jan 20 15:39:52 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 15:39:52 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/b7713a52-d016-43a7-824f-d4b6098efd0d_disk.config">
Jan 20 15:39:52 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 15:39:52 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 15:39:52 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 15:39:52 compute-0 nova_compute[250018]:       </source>
Jan 20 15:39:52 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 15:39:52 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 15:39:52 compute-0 nova_compute[250018]:       </auth>
Jan 20 15:39:52 compute-0 nova_compute[250018]:       <target dev="sda" bus="sata"/>
Jan 20 15:39:52 compute-0 nova_compute[250018]:     </disk>
Jan 20 15:39:52 compute-0 nova_compute[250018]:     <interface type="ethernet">
Jan 20 15:39:52 compute-0 nova_compute[250018]:       <mac address="fa:16:3e:22:38:1d"/>
Jan 20 15:39:52 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 15:39:52 compute-0 nova_compute[250018]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 15:39:52 compute-0 nova_compute[250018]:       <mtu size="1442"/>
Jan 20 15:39:52 compute-0 nova_compute[250018]:       <target dev="tap8daf5d78-fe"/>
Jan 20 15:39:52 compute-0 nova_compute[250018]:     </interface>
Jan 20 15:39:52 compute-0 nova_compute[250018]:     <serial type="pty">
Jan 20 15:39:52 compute-0 nova_compute[250018]:       <log file="/var/lib/nova/instances/b7713a52-d016-43a7-824f-d4b6098efd0d/console.log" append="off"/>
Jan 20 15:39:52 compute-0 nova_compute[250018]:     </serial>
Jan 20 15:39:52 compute-0 nova_compute[250018]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 15:39:52 compute-0 nova_compute[250018]:     <video>
Jan 20 15:39:52 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 15:39:52 compute-0 nova_compute[250018]:     </video>
Jan 20 15:39:52 compute-0 nova_compute[250018]:     <input type="tablet" bus="usb"/>
Jan 20 15:39:52 compute-0 nova_compute[250018]:     <rng model="virtio">
Jan 20 15:39:52 compute-0 nova_compute[250018]:       <backend model="random">/dev/urandom</backend>
Jan 20 15:39:52 compute-0 nova_compute[250018]:     </rng>
Jan 20 15:39:52 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root"/>
Jan 20 15:39:52 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:39:52 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:39:52 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:39:52 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:39:52 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:39:52 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:39:52 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:39:52 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:39:52 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:39:52 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:39:52 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:39:52 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:39:52 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:39:52 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:39:52 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:39:52 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:39:52 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:39:52 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:39:52 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:39:52 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:39:52 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:39:52 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:39:52 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:39:52 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:39:52 compute-0 nova_compute[250018]:     <controller type="usb" index="0"/>
Jan 20 15:39:52 compute-0 nova_compute[250018]:     <memballoon model="virtio">
Jan 20 15:39:52 compute-0 nova_compute[250018]:       <stats period="10"/>
Jan 20 15:39:52 compute-0 nova_compute[250018]:     </memballoon>
Jan 20 15:39:52 compute-0 nova_compute[250018]:   </devices>
Jan 20 15:39:52 compute-0 nova_compute[250018]: </domain>
Jan 20 15:39:52 compute-0 nova_compute[250018]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 15:39:52 compute-0 nova_compute[250018]: 2026-01-20 15:39:52.918 250022 DEBUG nova.compute.manager [None req-07c30026-d565-442e-ad3a-7cb02137afd1 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: b7713a52-d016-43a7-824f-d4b6098efd0d] Preparing to wait for external event network-vif-plugged-8daf5d78-feec-462d-a8f7-f15e4b8e2a16 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 20 15:39:52 compute-0 nova_compute[250018]: 2026-01-20 15:39:52.918 250022 DEBUG oslo_concurrency.lockutils [None req-07c30026-d565-442e-ad3a-7cb02137afd1 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Acquiring lock "b7713a52-d016-43a7-824f-d4b6098efd0d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:39:52 compute-0 nova_compute[250018]: 2026-01-20 15:39:52.918 250022 DEBUG oslo_concurrency.lockutils [None req-07c30026-d565-442e-ad3a-7cb02137afd1 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Lock "b7713a52-d016-43a7-824f-d4b6098efd0d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:39:52 compute-0 nova_compute[250018]: 2026-01-20 15:39:52.918 250022 DEBUG oslo_concurrency.lockutils [None req-07c30026-d565-442e-ad3a-7cb02137afd1 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Lock "b7713a52-d016-43a7-824f-d4b6098efd0d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:39:52 compute-0 nova_compute[250018]: 2026-01-20 15:39:52.919 250022 DEBUG nova.virt.libvirt.vif [None req-07c30026-d565-442e-ad3a-7cb02137afd1 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T15:39:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-342561427-access_point-2025775659',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-342561427-access_point-2025775659',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-342561427-acc',id=216,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHTC3JVSL9L2cjPYrp5GPk4BJMNU1U1jZv3sq/jJjTIgWHt20GA+MFjDYv7hWBtJxAW0nzGAa/cchqNq+8We8aBg4dy8xIHQ+Z9Mh/+Vq9yeWt6/Cd2vkDS34jF534uifQ==',key_name='tempest-TestSecurityGroupsBasicOps-1945297385',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='728662ec7f654a3fb2e53a90b8707d7e',ramdisk_id='',reservation_id='r-l4ae3vnu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-342561427',owner_user_name='tempest-TestSecurityGroupsBasicOps-342561427-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T15:39:43Z,user_data=None,user_id='5985ef736503499a9f1d734cabc33ce5',uuid=b7713a52-d016-43a7-824f-d4b6098efd0d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8daf5d78-feec-462d-a8f7-f15e4b8e2a16", "address": "fa:16:3e:22:38:1d", "network": {"id": "e8c67e63-02c8-44ef-812a-e2391a4e1afe", "bridge": "br-int", "label": "tempest-network-smoke--210660804", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "728662ec7f654a3fb2e53a90b8707d7e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8daf5d78-fe", "ovs_interfaceid": "8daf5d78-feec-462d-a8f7-f15e4b8e2a16", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 15:39:52 compute-0 nova_compute[250018]: 2026-01-20 15:39:52.919 250022 DEBUG nova.network.os_vif_util [None req-07c30026-d565-442e-ad3a-7cb02137afd1 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Converting VIF {"id": "8daf5d78-feec-462d-a8f7-f15e4b8e2a16", "address": "fa:16:3e:22:38:1d", "network": {"id": "e8c67e63-02c8-44ef-812a-e2391a4e1afe", "bridge": "br-int", "label": "tempest-network-smoke--210660804", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "728662ec7f654a3fb2e53a90b8707d7e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8daf5d78-fe", "ovs_interfaceid": "8daf5d78-feec-462d-a8f7-f15e4b8e2a16", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:39:52 compute-0 nova_compute[250018]: 2026-01-20 15:39:52.920 250022 DEBUG nova.network.os_vif_util [None req-07c30026-d565-442e-ad3a-7cb02137afd1 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:22:38:1d,bridge_name='br-int',has_traffic_filtering=True,id=8daf5d78-feec-462d-a8f7-f15e4b8e2a16,network=Network(e8c67e63-02c8-44ef-812a-e2391a4e1afe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8daf5d78-fe') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:39:52 compute-0 nova_compute[250018]: 2026-01-20 15:39:52.920 250022 DEBUG os_vif [None req-07c30026-d565-442e-ad3a-7cb02137afd1 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:22:38:1d,bridge_name='br-int',has_traffic_filtering=True,id=8daf5d78-feec-462d-a8f7-f15e4b8e2a16,network=Network(e8c67e63-02c8-44ef-812a-e2391a4e1afe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8daf5d78-fe') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 15:39:52 compute-0 nova_compute[250018]: 2026-01-20 15:39:52.921 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:39:52 compute-0 nova_compute[250018]: 2026-01-20 15:39:52.921 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:39:52 compute-0 nova_compute[250018]: 2026-01-20 15:39:52.921 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:39:52 compute-0 nova_compute[250018]: 2026-01-20 15:39:52.925 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:39:52 compute-0 nova_compute[250018]: 2026-01-20 15:39:52.925 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8daf5d78-fe, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:39:52 compute-0 nova_compute[250018]: 2026-01-20 15:39:52.925 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8daf5d78-fe, col_values=(('external_ids', {'iface-id': '8daf5d78-feec-462d-a8f7-f15e4b8e2a16', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:22:38:1d', 'vm-uuid': 'b7713a52-d016-43a7-824f-d4b6098efd0d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:39:52 compute-0 nova_compute[250018]: 2026-01-20 15:39:52.927 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:39:52 compute-0 NetworkManager[48960]: <info>  [1768923592.9277] manager: (tap8daf5d78-fe): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/387)
Jan 20 15:39:52 compute-0 nova_compute[250018]: 2026-01-20 15:39:52.929 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 15:39:52 compute-0 nova_compute[250018]: 2026-01-20 15:39:52.937 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:39:52 compute-0 nova_compute[250018]: 2026-01-20 15:39:52.938 250022 INFO os_vif [None req-07c30026-d565-442e-ad3a-7cb02137afd1 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:22:38:1d,bridge_name='br-int',has_traffic_filtering=True,id=8daf5d78-feec-462d-a8f7-f15e4b8e2a16,network=Network(e8c67e63-02c8-44ef-812a-e2391a4e1afe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8daf5d78-fe')
Jan 20 15:39:53 compute-0 nova_compute[250018]: 2026-01-20 15:39:53.011 250022 DEBUG nova.virt.libvirt.driver [None req-07c30026-d565-442e-ad3a-7cb02137afd1 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:39:53 compute-0 nova_compute[250018]: 2026-01-20 15:39:53.011 250022 DEBUG nova.virt.libvirt.driver [None req-07c30026-d565-442e-ad3a-7cb02137afd1 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:39:53 compute-0 nova_compute[250018]: 2026-01-20 15:39:53.012 250022 DEBUG nova.virt.libvirt.driver [None req-07c30026-d565-442e-ad3a-7cb02137afd1 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] No VIF found with MAC fa:16:3e:22:38:1d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 15:39:53 compute-0 nova_compute[250018]: 2026-01-20 15:39:53.012 250022 INFO nova.virt.libvirt.driver [None req-07c30026-d565-442e-ad3a-7cb02137afd1 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: b7713a52-d016-43a7-824f-d4b6098efd0d] Using config drive
Jan 20 15:39:53 compute-0 nova_compute[250018]: 2026-01-20 15:39:53.040 250022 DEBUG nova.storage.rbd_utils [None req-07c30026-d565-442e-ad3a-7cb02137afd1 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] rbd image b7713a52-d016-43a7-824f-d4b6098efd0d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:39:53 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:39:53 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:39:53 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:39:53.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:39:53 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3531: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:39:53 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:39:53 compute-0 nova_compute[250018]: 2026-01-20 15:39:53.451 250022 DEBUG nova.network.neutron [req-b295181f-25c7-486d-96c0-f042b08093f6 req-a64d3bcf-8569-4b14-adbb-83b18eba725c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b7713a52-d016-43a7-824f-d4b6098efd0d] Updated VIF entry in instance network info cache for port 8daf5d78-feec-462d-a8f7-f15e4b8e2a16. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 15:39:53 compute-0 nova_compute[250018]: 2026-01-20 15:39:53.452 250022 DEBUG nova.network.neutron [req-b295181f-25c7-486d-96c0-f042b08093f6 req-a64d3bcf-8569-4b14-adbb-83b18eba725c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b7713a52-d016-43a7-824f-d4b6098efd0d] Updating instance_info_cache with network_info: [{"id": "8daf5d78-feec-462d-a8f7-f15e4b8e2a16", "address": "fa:16:3e:22:38:1d", "network": {"id": "e8c67e63-02c8-44ef-812a-e2391a4e1afe", "bridge": "br-int", "label": "tempest-network-smoke--210660804", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "728662ec7f654a3fb2e53a90b8707d7e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8daf5d78-fe", "ovs_interfaceid": "8daf5d78-feec-462d-a8f7-f15e4b8e2a16", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:39:53 compute-0 nova_compute[250018]: 2026-01-20 15:39:53.454 250022 INFO nova.virt.libvirt.driver [None req-07c30026-d565-442e-ad3a-7cb02137afd1 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: b7713a52-d016-43a7-824f-d4b6098efd0d] Creating config drive at /var/lib/nova/instances/b7713a52-d016-43a7-824f-d4b6098efd0d/disk.config
Jan 20 15:39:53 compute-0 nova_compute[250018]: 2026-01-20 15:39:53.459 250022 DEBUG oslo_concurrency.processutils [None req-07c30026-d565-442e-ad3a-7cb02137afd1 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b7713a52-d016-43a7-824f-d4b6098efd0d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7gyiairv execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:39:53 compute-0 nova_compute[250018]: 2026-01-20 15:39:53.594 250022 DEBUG oslo_concurrency.processutils [None req-07c30026-d565-442e-ad3a-7cb02137afd1 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b7713a52-d016-43a7-824f-d4b6098efd0d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7gyiairv" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:39:53 compute-0 nova_compute[250018]: 2026-01-20 15:39:53.620 250022 DEBUG nova.storage.rbd_utils [None req-07c30026-d565-442e-ad3a-7cb02137afd1 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] rbd image b7713a52-d016-43a7-824f-d4b6098efd0d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:39:53 compute-0 nova_compute[250018]: 2026-01-20 15:39:53.623 250022 DEBUG oslo_concurrency.processutils [None req-07c30026-d565-442e-ad3a-7cb02137afd1 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b7713a52-d016-43a7-824f-d4b6098efd0d/disk.config b7713a52-d016-43a7-824f-d4b6098efd0d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:39:53 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:39:53 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:39:53 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:39:53.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:39:54 compute-0 nova_compute[250018]: 2026-01-20 15:39:54.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:39:54 compute-0 nova_compute[250018]: 2026-01-20 15:39:54.052 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 20 15:39:54 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/2245512469' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:39:54 compute-0 ceph-mon[74360]: pgmap v3531: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:39:54 compute-0 nova_compute[250018]: 2026-01-20 15:39:54.141 250022 DEBUG oslo_concurrency.processutils [None req-07c30026-d565-442e-ad3a-7cb02137afd1 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b7713a52-d016-43a7-824f-d4b6098efd0d/disk.config b7713a52-d016-43a7-824f-d4b6098efd0d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.519s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:39:54 compute-0 nova_compute[250018]: 2026-01-20 15:39:54.143 250022 INFO nova.virt.libvirt.driver [None req-07c30026-d565-442e-ad3a-7cb02137afd1 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: b7713a52-d016-43a7-824f-d4b6098efd0d] Deleting local config drive /var/lib/nova/instances/b7713a52-d016-43a7-824f-d4b6098efd0d/disk.config because it was imported into RBD.
Jan 20 15:39:54 compute-0 kernel: tap8daf5d78-fe: entered promiscuous mode
Jan 20 15:39:54 compute-0 ovn_controller[148666]: 2026-01-20T15:39:54Z|00797|binding|INFO|Claiming lport 8daf5d78-feec-462d-a8f7-f15e4b8e2a16 for this chassis.
Jan 20 15:39:54 compute-0 ovn_controller[148666]: 2026-01-20T15:39:54Z|00798|binding|INFO|8daf5d78-feec-462d-a8f7-f15e4b8e2a16: Claiming fa:16:3e:22:38:1d 10.100.0.3
Jan 20 15:39:54 compute-0 nova_compute[250018]: 2026-01-20 15:39:54.199 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:39:54 compute-0 NetworkManager[48960]: <info>  [1768923594.2002] manager: (tap8daf5d78-fe): new Tun device (/org/freedesktop/NetworkManager/Devices/388)
Jan 20 15:39:54 compute-0 nova_compute[250018]: 2026-01-20 15:39:54.203 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:39:54 compute-0 systemd-machined[216401]: New machine qemu-94-instance-000000d8.
Jan 20 15:39:54 compute-0 nova_compute[250018]: 2026-01-20 15:39:54.265 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:39:54 compute-0 ovn_controller[148666]: 2026-01-20T15:39:54Z|00799|binding|INFO|Setting lport 8daf5d78-feec-462d-a8f7-f15e4b8e2a16 ovn-installed in OVS
Jan 20 15:39:54 compute-0 systemd[1]: Started Virtual Machine qemu-94-instance-000000d8.
Jan 20 15:39:54 compute-0 nova_compute[250018]: 2026-01-20 15:39:54.271 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:39:54 compute-0 systemd-udevd[396175]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 15:39:54 compute-0 NetworkManager[48960]: <info>  [1768923594.2885] device (tap8daf5d78-fe): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 15:39:54 compute-0 NetworkManager[48960]: <info>  [1768923594.2895] device (tap8daf5d78-fe): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 15:39:54 compute-0 nova_compute[250018]: 2026-01-20 15:39:54.363 250022 DEBUG oslo_concurrency.lockutils [req-b295181f-25c7-486d-96c0-f042b08093f6 req-a64d3bcf-8569-4b14-adbb-83b18eba725c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-b7713a52-d016-43a7-824f-d4b6098efd0d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:39:54 compute-0 ovn_controller[148666]: 2026-01-20T15:39:54Z|00800|binding|INFO|Setting lport 8daf5d78-feec-462d-a8f7-f15e4b8e2a16 up in Southbound
Jan 20 15:39:54 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:39:54.588 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:22:38:1d 10.100.0.3'], port_security=['fa:16:3e:22:38:1d 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'b7713a52-d016-43a7-824f-d4b6098efd0d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e8c67e63-02c8-44ef-812a-e2391a4e1afe', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '728662ec7f654a3fb2e53a90b8707d7e', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'cd491bdf-9f4c-41c9-8ee6-5870d56307ff d9d3413f-e719-4750-9395-b1f3019eb5f4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=133c5430-85b8-4463-94c0-652a31dd592a, chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=8daf5d78-feec-462d-a8f7-f15e4b8e2a16) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:39:54 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:39:54.589 160071 INFO neutron.agent.ovn.metadata.agent [-] Port 8daf5d78-feec-462d-a8f7-f15e4b8e2a16 in datapath e8c67e63-02c8-44ef-812a-e2391a4e1afe bound to our chassis
Jan 20 15:39:54 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:39:54.590 160071 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e8c67e63-02c8-44ef-812a-e2391a4e1afe
Jan 20 15:39:54 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:39:54.601 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[d1a6555d-f236-4523-9ba9-c8f34026d42f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:39:54 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:39:54.602 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape8c67e63-01 in ovnmeta-e8c67e63-02c8-44ef-812a-e2391a4e1afe namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 15:39:54 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:39:54.603 257604 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape8c67e63-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 15:39:54 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:39:54.603 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[72fbb9b9-af49-4124-a6a2-4c64df9c5fe2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:39:54 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:39:54.604 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[2fed42f3-17a6-41ec-a25e-b9e743662a6c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:39:54 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:39:54.615 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[d728b10c-6e1d-4e48-91cb-6fe31f3618c7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:39:54 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:39:54.639 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[1e5a99c6-eb71-491a-8cb5-b95aa780033b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:39:54 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:39:54.668 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[11875c7d-caa9-4629-8074-cdc82d368c3b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:39:54 compute-0 NetworkManager[48960]: <info>  [1768923594.6746] manager: (tape8c67e63-00): new Veth device (/org/freedesktop/NetworkManager/Devices/389)
Jan 20 15:39:54 compute-0 systemd-udevd[396178]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 15:39:54 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:39:54.673 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[dd172ada-b325-4e7c-888f-4ae5f6e14120]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:39:54 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:39:54.703 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[67cbb753-5042-4bc8-b85b-b590254d4f2f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:39:54 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:39:54.706 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[7593b0fa-fd37-4a26-83c9-40bd906db7e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:39:54 compute-0 NetworkManager[48960]: <info>  [1768923594.7261] device (tape8c67e63-00): carrier: link connected
Jan 20 15:39:54 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:39:54.733 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[af721700-fe6f-4843-a9d8-f7e4055ba713]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:39:54 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:39:54.750 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[cd7f63a7-955d-4ec4-9e3e-23a25ce8ff70]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape8c67e63-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d3:a4:d7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 250], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 953544, 'reachable_time': 27087, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 396224, 'error': None, 'target': 'ovnmeta-e8c67e63-02c8-44ef-812a-e2391a4e1afe', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:39:54 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:39:54.765 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[68981990-0e72-4dd2-80b8-df26563cf920]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed3:a4d7'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 953544, 'tstamp': 953544}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 396228, 'error': None, 'target': 'ovnmeta-e8c67e63-02c8-44ef-812a-e2391a4e1afe', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:39:54 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:39:54.780 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[f4f6d24f-70b9-4c1f-80d6-f4c7d2393d3a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape8c67e63-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d3:a4:d7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 250], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 953544, 'reachable_time': 27087, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 396229, 'error': None, 'target': 'ovnmeta-e8c67e63-02c8-44ef-812a-e2391a4e1afe', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:39:54 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:39:54.812 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[336402b0-e70a-4522-9ba6-04ded5b24d71]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:39:54 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:39:54.875 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[ee7054e6-05b1-4902-bfd7-27ffd7c3f8b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:39:54 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:39:54.876 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape8c67e63-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:39:54 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:39:54.876 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:39:54 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:39:54.877 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape8c67e63-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:39:54 compute-0 nova_compute[250018]: 2026-01-20 15:39:54.878 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:39:54 compute-0 NetworkManager[48960]: <info>  [1768923594.8789] manager: (tape8c67e63-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/390)
Jan 20 15:39:54 compute-0 kernel: tape8c67e63-00: entered promiscuous mode
Jan 20 15:39:54 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:39:54.881 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape8c67e63-00, col_values=(('external_ids', {'iface-id': 'd3d1794b-9dd0-4852-8274-ac50dfe93bc9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:39:54 compute-0 ovn_controller[148666]: 2026-01-20T15:39:54Z|00801|binding|INFO|Releasing lport d3d1794b-9dd0-4852-8274-ac50dfe93bc9 from this chassis (sb_readonly=0)
Jan 20 15:39:54 compute-0 nova_compute[250018]: 2026-01-20 15:39:54.882 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:39:54 compute-0 nova_compute[250018]: 2026-01-20 15:39:54.894 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:39:54 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:39:54.895 160071 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e8c67e63-02c8-44ef-812a-e2391a4e1afe.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e8c67e63-02c8-44ef-812a-e2391a4e1afe.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 15:39:54 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:39:54.896 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[8b816ecc-674c-4ef9-9328-45084a1a2d41]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:39:54 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:39:54.897 160071 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 15:39:54 compute-0 ovn_metadata_agent[160049]: global
Jan 20 15:39:54 compute-0 ovn_metadata_agent[160049]:     log         /dev/log local0 debug
Jan 20 15:39:54 compute-0 ovn_metadata_agent[160049]:     log-tag     haproxy-metadata-proxy-e8c67e63-02c8-44ef-812a-e2391a4e1afe
Jan 20 15:39:54 compute-0 ovn_metadata_agent[160049]:     user        root
Jan 20 15:39:54 compute-0 ovn_metadata_agent[160049]:     group       root
Jan 20 15:39:54 compute-0 ovn_metadata_agent[160049]:     maxconn     1024
Jan 20 15:39:54 compute-0 ovn_metadata_agent[160049]:     pidfile     /var/lib/neutron/external/pids/e8c67e63-02c8-44ef-812a-e2391a4e1afe.pid.haproxy
Jan 20 15:39:54 compute-0 ovn_metadata_agent[160049]:     daemon
Jan 20 15:39:54 compute-0 ovn_metadata_agent[160049]: 
Jan 20 15:39:54 compute-0 ovn_metadata_agent[160049]: defaults
Jan 20 15:39:54 compute-0 ovn_metadata_agent[160049]:     log global
Jan 20 15:39:54 compute-0 ovn_metadata_agent[160049]:     mode http
Jan 20 15:39:54 compute-0 ovn_metadata_agent[160049]:     option httplog
Jan 20 15:39:54 compute-0 ovn_metadata_agent[160049]:     option dontlognull
Jan 20 15:39:54 compute-0 ovn_metadata_agent[160049]:     option http-server-close
Jan 20 15:39:54 compute-0 ovn_metadata_agent[160049]:     option forwardfor
Jan 20 15:39:54 compute-0 ovn_metadata_agent[160049]:     retries                 3
Jan 20 15:39:54 compute-0 ovn_metadata_agent[160049]:     timeout http-request    30s
Jan 20 15:39:54 compute-0 ovn_metadata_agent[160049]:     timeout connect         30s
Jan 20 15:39:54 compute-0 ovn_metadata_agent[160049]:     timeout client          32s
Jan 20 15:39:54 compute-0 ovn_metadata_agent[160049]:     timeout server          32s
Jan 20 15:39:54 compute-0 ovn_metadata_agent[160049]:     timeout http-keep-alive 30s
Jan 20 15:39:54 compute-0 ovn_metadata_agent[160049]: 
Jan 20 15:39:54 compute-0 ovn_metadata_agent[160049]: 
Jan 20 15:39:54 compute-0 ovn_metadata_agent[160049]: listen listener
Jan 20 15:39:54 compute-0 ovn_metadata_agent[160049]:     bind 169.254.169.254:80
Jan 20 15:39:54 compute-0 ovn_metadata_agent[160049]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 15:39:54 compute-0 ovn_metadata_agent[160049]:     http-request add-header X-OVN-Network-ID e8c67e63-02c8-44ef-812a-e2391a4e1afe
Jan 20 15:39:54 compute-0 ovn_metadata_agent[160049]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 15:39:54 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:39:54.897 160071 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e8c67e63-02c8-44ef-812a-e2391a4e1afe', 'env', 'PROCESS_TAG=haproxy-e8c67e63-02c8-44ef-812a-e2391a4e1afe', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e8c67e63-02c8-44ef-812a-e2391a4e1afe.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 15:39:54 compute-0 nova_compute[250018]: 2026-01-20 15:39:54.921 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768923594.920597, b7713a52-d016-43a7-824f-d4b6098efd0d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:39:54 compute-0 nova_compute[250018]: 2026-01-20 15:39:54.921 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: b7713a52-d016-43a7-824f-d4b6098efd0d] VM Started (Lifecycle Event)
Jan 20 15:39:54 compute-0 nova_compute[250018]: 2026-01-20 15:39:54.941 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: b7713a52-d016-43a7-824f-d4b6098efd0d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:39:54 compute-0 nova_compute[250018]: 2026-01-20 15:39:54.945 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768923594.9217148, b7713a52-d016-43a7-824f-d4b6098efd0d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:39:54 compute-0 nova_compute[250018]: 2026-01-20 15:39:54.945 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: b7713a52-d016-43a7-824f-d4b6098efd0d] VM Paused (Lifecycle Event)
Jan 20 15:39:54 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:39:54.961 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=367c1a2c-b16a-4828-ab5a-626bb50023b4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '84'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:39:54 compute-0 nova_compute[250018]: 2026-01-20 15:39:54.966 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: b7713a52-d016-43a7-824f-d4b6098efd0d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:39:54 compute-0 nova_compute[250018]: 2026-01-20 15:39:54.968 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: b7713a52-d016-43a7-824f-d4b6098efd0d] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:39:54 compute-0 nova_compute[250018]: 2026-01-20 15:39:54.997 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: b7713a52-d016-43a7-824f-d4b6098efd0d] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 15:39:55 compute-0 podman[396285]: 2026-01-20 15:39:55.224626761 +0000 UTC m=+0.043296521 container create bf645079fab7d07b4d4447e1276e48a7778223a00bdcacf0c9220aa866b71591 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e8c67e63-02c8-44ef-812a-e2391a4e1afe, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Jan 20 15:39:55 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:39:55 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:39:55 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:39:55.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:39:55 compute-0 systemd[1]: Started libpod-conmon-bf645079fab7d07b4d4447e1276e48a7778223a00bdcacf0c9220aa866b71591.scope.
Jan 20 15:39:55 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:39:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0893dc1f354408dc94a0a16d6fbb73a8bfd7bb1e1dd6d59f1f08324eafad1548/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 15:39:55 compute-0 podman[396285]: 2026-01-20 15:39:55.201496451 +0000 UTC m=+0.020166231 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 15:39:55 compute-0 podman[396285]: 2026-01-20 15:39:55.308318133 +0000 UTC m=+0.126987903 container init bf645079fab7d07b4d4447e1276e48a7778223a00bdcacf0c9220aa866b71591 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e8c67e63-02c8-44ef-812a-e2391a4e1afe, tcib_managed=true, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 20 15:39:55 compute-0 podman[396285]: 2026-01-20 15:39:55.313142624 +0000 UTC m=+0.131812384 container start bf645079fab7d07b4d4447e1276e48a7778223a00bdcacf0c9220aa866b71591 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e8c67e63-02c8-44ef-812a-e2391a4e1afe, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team)
Jan 20 15:39:55 compute-0 neutron-haproxy-ovnmeta-e8c67e63-02c8-44ef-812a-e2391a4e1afe[396299]: [NOTICE]   (396303) : New worker (396305) forked
Jan 20 15:39:55 compute-0 neutron-haproxy-ovnmeta-e8c67e63-02c8-44ef-812a-e2391a4e1afe[396299]: [NOTICE]   (396303) : Loading success.
Jan 20 15:39:55 compute-0 nova_compute[250018]: 2026-01-20 15:39:55.390 250022 DEBUG nova.compute.manager [req-d33f8f3a-dcb3-45d4-bcc9-70255329ce22 req-5af6b1c1-2ac1-4edc-9eaf-b3bfc03e1bce 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b7713a52-d016-43a7-824f-d4b6098efd0d] Received event network-vif-plugged-8daf5d78-feec-462d-a8f7-f15e4b8e2a16 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:39:55 compute-0 nova_compute[250018]: 2026-01-20 15:39:55.390 250022 DEBUG oslo_concurrency.lockutils [req-d33f8f3a-dcb3-45d4-bcc9-70255329ce22 req-5af6b1c1-2ac1-4edc-9eaf-b3bfc03e1bce 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "b7713a52-d016-43a7-824f-d4b6098efd0d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:39:55 compute-0 nova_compute[250018]: 2026-01-20 15:39:55.390 250022 DEBUG oslo_concurrency.lockutils [req-d33f8f3a-dcb3-45d4-bcc9-70255329ce22 req-5af6b1c1-2ac1-4edc-9eaf-b3bfc03e1bce 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "b7713a52-d016-43a7-824f-d4b6098efd0d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:39:55 compute-0 nova_compute[250018]: 2026-01-20 15:39:55.391 250022 DEBUG oslo_concurrency.lockutils [req-d33f8f3a-dcb3-45d4-bcc9-70255329ce22 req-5af6b1c1-2ac1-4edc-9eaf-b3bfc03e1bce 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "b7713a52-d016-43a7-824f-d4b6098efd0d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:39:55 compute-0 nova_compute[250018]: 2026-01-20 15:39:55.391 250022 DEBUG nova.compute.manager [req-d33f8f3a-dcb3-45d4-bcc9-70255329ce22 req-5af6b1c1-2ac1-4edc-9eaf-b3bfc03e1bce 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b7713a52-d016-43a7-824f-d4b6098efd0d] Processing event network-vif-plugged-8daf5d78-feec-462d-a8f7-f15e4b8e2a16 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 20 15:39:55 compute-0 nova_compute[250018]: 2026-01-20 15:39:55.391 250022 DEBUG nova.compute.manager [None req-07c30026-d565-442e-ad3a-7cb02137afd1 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: b7713a52-d016-43a7-824f-d4b6098efd0d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 15:39:55 compute-0 nova_compute[250018]: 2026-01-20 15:39:55.395 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768923595.3958302, b7713a52-d016-43a7-824f-d4b6098efd0d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:39:55 compute-0 nova_compute[250018]: 2026-01-20 15:39:55.396 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: b7713a52-d016-43a7-824f-d4b6098efd0d] VM Resumed (Lifecycle Event)
Jan 20 15:39:55 compute-0 nova_compute[250018]: 2026-01-20 15:39:55.397 250022 DEBUG nova.virt.libvirt.driver [None req-07c30026-d565-442e-ad3a-7cb02137afd1 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: b7713a52-d016-43a7-824f-d4b6098efd0d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 15:39:55 compute-0 nova_compute[250018]: 2026-01-20 15:39:55.400 250022 INFO nova.virt.libvirt.driver [-] [instance: b7713a52-d016-43a7-824f-d4b6098efd0d] Instance spawned successfully.
Jan 20 15:39:55 compute-0 nova_compute[250018]: 2026-01-20 15:39:55.400 250022 DEBUG nova.virt.libvirt.driver [None req-07c30026-d565-442e-ad3a-7cb02137afd1 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: b7713a52-d016-43a7-824f-d4b6098efd0d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 15:39:55 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3532: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Jan 20 15:39:55 compute-0 nova_compute[250018]: 2026-01-20 15:39:55.415 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: b7713a52-d016-43a7-824f-d4b6098efd0d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:39:55 compute-0 nova_compute[250018]: 2026-01-20 15:39:55.421 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: b7713a52-d016-43a7-824f-d4b6098efd0d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:39:55 compute-0 nova_compute[250018]: 2026-01-20 15:39:55.426 250022 DEBUG nova.virt.libvirt.driver [None req-07c30026-d565-442e-ad3a-7cb02137afd1 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: b7713a52-d016-43a7-824f-d4b6098efd0d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:39:55 compute-0 nova_compute[250018]: 2026-01-20 15:39:55.426 250022 DEBUG nova.virt.libvirt.driver [None req-07c30026-d565-442e-ad3a-7cb02137afd1 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: b7713a52-d016-43a7-824f-d4b6098efd0d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:39:55 compute-0 nova_compute[250018]: 2026-01-20 15:39:55.427 250022 DEBUG nova.virt.libvirt.driver [None req-07c30026-d565-442e-ad3a-7cb02137afd1 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: b7713a52-d016-43a7-824f-d4b6098efd0d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:39:55 compute-0 nova_compute[250018]: 2026-01-20 15:39:55.427 250022 DEBUG nova.virt.libvirt.driver [None req-07c30026-d565-442e-ad3a-7cb02137afd1 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: b7713a52-d016-43a7-824f-d4b6098efd0d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:39:55 compute-0 nova_compute[250018]: 2026-01-20 15:39:55.428 250022 DEBUG nova.virt.libvirt.driver [None req-07c30026-d565-442e-ad3a-7cb02137afd1 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: b7713a52-d016-43a7-824f-d4b6098efd0d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:39:55 compute-0 nova_compute[250018]: 2026-01-20 15:39:55.428 250022 DEBUG nova.virt.libvirt.driver [None req-07c30026-d565-442e-ad3a-7cb02137afd1 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: b7713a52-d016-43a7-824f-d4b6098efd0d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:39:55 compute-0 nova_compute[250018]: 2026-01-20 15:39:55.456 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: b7713a52-d016-43a7-824f-d4b6098efd0d] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 15:39:55 compute-0 nova_compute[250018]: 2026-01-20 15:39:55.487 250022 INFO nova.compute.manager [None req-07c30026-d565-442e-ad3a-7cb02137afd1 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: b7713a52-d016-43a7-824f-d4b6098efd0d] Took 11.45 seconds to spawn the instance on the hypervisor.
Jan 20 15:39:55 compute-0 nova_compute[250018]: 2026-01-20 15:39:55.487 250022 DEBUG nova.compute.manager [None req-07c30026-d565-442e-ad3a-7cb02137afd1 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: b7713a52-d016-43a7-824f-d4b6098efd0d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:39:55 compute-0 nova_compute[250018]: 2026-01-20 15:39:55.575 250022 INFO nova.compute.manager [None req-07c30026-d565-442e-ad3a-7cb02137afd1 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: b7713a52-d016-43a7-824f-d4b6098efd0d] Took 12.49 seconds to build instance.
Jan 20 15:39:55 compute-0 nova_compute[250018]: 2026-01-20 15:39:55.601 250022 DEBUG oslo_concurrency.lockutils [None req-07c30026-d565-442e-ad3a-7cb02137afd1 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Lock "b7713a52-d016-43a7-824f-d4b6098efd0d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.598s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:39:55 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:39:55 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:39:55 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:39:55.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:39:56 compute-0 ceph-mon[74360]: pgmap v3532: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Jan 20 15:39:57 compute-0 nova_compute[250018]: 2026-01-20 15:39:57.201 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:39:57 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:39:57 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:39:57 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:39:57.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:39:57 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3533: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 24 KiB/s rd, 1.5 MiB/s wr, 33 op/s
Jan 20 15:39:57 compute-0 nova_compute[250018]: 2026-01-20 15:39:57.476 250022 DEBUG nova.compute.manager [req-7007994b-04bc-4fdf-9d5b-7cef09a69c0d req-65657d90-c0e0-4c6a-9e67-3b31f9634a5c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b7713a52-d016-43a7-824f-d4b6098efd0d] Received event network-vif-plugged-8daf5d78-feec-462d-a8f7-f15e4b8e2a16 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:39:57 compute-0 nova_compute[250018]: 2026-01-20 15:39:57.477 250022 DEBUG oslo_concurrency.lockutils [req-7007994b-04bc-4fdf-9d5b-7cef09a69c0d req-65657d90-c0e0-4c6a-9e67-3b31f9634a5c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "b7713a52-d016-43a7-824f-d4b6098efd0d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:39:57 compute-0 nova_compute[250018]: 2026-01-20 15:39:57.478 250022 DEBUG oslo_concurrency.lockutils [req-7007994b-04bc-4fdf-9d5b-7cef09a69c0d req-65657d90-c0e0-4c6a-9e67-3b31f9634a5c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "b7713a52-d016-43a7-824f-d4b6098efd0d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:39:57 compute-0 nova_compute[250018]: 2026-01-20 15:39:57.478 250022 DEBUG oslo_concurrency.lockutils [req-7007994b-04bc-4fdf-9d5b-7cef09a69c0d req-65657d90-c0e0-4c6a-9e67-3b31f9634a5c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "b7713a52-d016-43a7-824f-d4b6098efd0d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:39:57 compute-0 nova_compute[250018]: 2026-01-20 15:39:57.479 250022 DEBUG nova.compute.manager [req-7007994b-04bc-4fdf-9d5b-7cef09a69c0d req-65657d90-c0e0-4c6a-9e67-3b31f9634a5c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b7713a52-d016-43a7-824f-d4b6098efd0d] No waiting events found dispatching network-vif-plugged-8daf5d78-feec-462d-a8f7-f15e4b8e2a16 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:39:57 compute-0 nova_compute[250018]: 2026-01-20 15:39:57.479 250022 WARNING nova.compute.manager [req-7007994b-04bc-4fdf-9d5b-7cef09a69c0d req-65657d90-c0e0-4c6a-9e67-3b31f9634a5c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b7713a52-d016-43a7-824f-d4b6098efd0d] Received unexpected event network-vif-plugged-8daf5d78-feec-462d-a8f7-f15e4b8e2a16 for instance with vm_state active and task_state None.
Jan 20 15:39:57 compute-0 nova_compute[250018]: 2026-01-20 15:39:57.928 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:39:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 15:39:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 15:39:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 15:39:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 15:39:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 15:39:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 15:39:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 15:39:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 15:39:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 15:39:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 15:39:57 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:39:57 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:39:57 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:39:57.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:39:58 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:39:58 compute-0 NetworkManager[48960]: <info>  [1768923598.6264] manager: (patch-provnet-b62c391b-f7a3-4a38-a0df-72ac0383ca74-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/391)
Jan 20 15:39:58 compute-0 nova_compute[250018]: 2026-01-20 15:39:58.625 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:39:58 compute-0 NetworkManager[48960]: <info>  [1768923598.6280] manager: (patch-br-int-to-provnet-b62c391b-f7a3-4a38-a0df-72ac0383ca74): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/392)
Jan 20 15:39:58 compute-0 ovn_controller[148666]: 2026-01-20T15:39:58Z|00802|binding|INFO|Releasing lport d3d1794b-9dd0-4852-8274-ac50dfe93bc9 from this chassis (sb_readonly=0)
Jan 20 15:39:58 compute-0 nova_compute[250018]: 2026-01-20 15:39:58.660 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:39:58 compute-0 ovn_controller[148666]: 2026-01-20T15:39:58Z|00803|binding|INFO|Releasing lport d3d1794b-9dd0-4852-8274-ac50dfe93bc9 from this chassis (sb_readonly=0)
Jan 20 15:39:58 compute-0 nova_compute[250018]: 2026-01-20 15:39:58.664 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:39:58 compute-0 ceph-mon[74360]: pgmap v3533: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 24 KiB/s rd, 1.5 MiB/s wr, 33 op/s
Jan 20 15:39:58 compute-0 nova_compute[250018]: 2026-01-20 15:39:58.921 250022 DEBUG nova.compute.manager [req-cdffd6e8-6d2d-4ffd-8247-d8d3d8418e07 req-45d7dc50-557c-4e6c-977e-6d3100be11e2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b7713a52-d016-43a7-824f-d4b6098efd0d] Received event network-changed-8daf5d78-feec-462d-a8f7-f15e4b8e2a16 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:39:58 compute-0 nova_compute[250018]: 2026-01-20 15:39:58.921 250022 DEBUG nova.compute.manager [req-cdffd6e8-6d2d-4ffd-8247-d8d3d8418e07 req-45d7dc50-557c-4e6c-977e-6d3100be11e2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b7713a52-d016-43a7-824f-d4b6098efd0d] Refreshing instance network info cache due to event network-changed-8daf5d78-feec-462d-a8f7-f15e4b8e2a16. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 15:39:58 compute-0 nova_compute[250018]: 2026-01-20 15:39:58.922 250022 DEBUG oslo_concurrency.lockutils [req-cdffd6e8-6d2d-4ffd-8247-d8d3d8418e07 req-45d7dc50-557c-4e6c-977e-6d3100be11e2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-b7713a52-d016-43a7-824f-d4b6098efd0d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:39:58 compute-0 nova_compute[250018]: 2026-01-20 15:39:58.922 250022 DEBUG oslo_concurrency.lockutils [req-cdffd6e8-6d2d-4ffd-8247-d8d3d8418e07 req-45d7dc50-557c-4e6c-977e-6d3100be11e2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-b7713a52-d016-43a7-824f-d4b6098efd0d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:39:58 compute-0 nova_compute[250018]: 2026-01-20 15:39:58.923 250022 DEBUG nova.network.neutron [req-cdffd6e8-6d2d-4ffd-8247-d8d3d8418e07 req-45d7dc50-557c-4e6c-977e-6d3100be11e2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b7713a52-d016-43a7-824f-d4b6098efd0d] Refreshing network info cache for port 8daf5d78-feec-462d-a8f7-f15e4b8e2a16 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 15:39:59 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:39:59 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:39:59 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:39:59.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:39:59 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3534: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 317 KiB/s rd, 312 KiB/s wr, 20 op/s
Jan 20 15:39:59 compute-0 ceph-mon[74360]: pgmap v3534: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 317 KiB/s rd, 312 KiB/s wr, 20 op/s
Jan 20 15:39:59 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:39:59 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:39:59 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:39:59.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:40:00 compute-0 ceph-mon[74360]: log_channel(cluster) log [INF] : overall HEALTH_OK
Jan 20 15:40:00 compute-0 nova_compute[250018]: 2026-01-20 15:40:00.069 250022 DEBUG nova.network.neutron [req-cdffd6e8-6d2d-4ffd-8247-d8d3d8418e07 req-45d7dc50-557c-4e6c-977e-6d3100be11e2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b7713a52-d016-43a7-824f-d4b6098efd0d] Updated VIF entry in instance network info cache for port 8daf5d78-feec-462d-a8f7-f15e4b8e2a16. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 15:40:00 compute-0 nova_compute[250018]: 2026-01-20 15:40:00.070 250022 DEBUG nova.network.neutron [req-cdffd6e8-6d2d-4ffd-8247-d8d3d8418e07 req-45d7dc50-557c-4e6c-977e-6d3100be11e2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b7713a52-d016-43a7-824f-d4b6098efd0d] Updating instance_info_cache with network_info: [{"id": "8daf5d78-feec-462d-a8f7-f15e4b8e2a16", "address": "fa:16:3e:22:38:1d", "network": {"id": "e8c67e63-02c8-44ef-812a-e2391a4e1afe", "bridge": "br-int", "label": "tempest-network-smoke--210660804", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "728662ec7f654a3fb2e53a90b8707d7e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8daf5d78-fe", "ovs_interfaceid": "8daf5d78-feec-462d-a8f7-f15e4b8e2a16", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:40:00 compute-0 nova_compute[250018]: 2026-01-20 15:40:00.102 250022 DEBUG oslo_concurrency.lockutils [req-cdffd6e8-6d2d-4ffd-8247-d8d3d8418e07 req-45d7dc50-557c-4e6c-977e-6d3100be11e2 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-b7713a52-d016-43a7-824f-d4b6098efd0d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:40:00 compute-0 ceph-mon[74360]: overall HEALTH_OK
Jan 20 15:40:01 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:40:01 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:40:01 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:40:01.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:40:01 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3535: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 20 15:40:01 compute-0 ceph-mon[74360]: pgmap v3535: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 20 15:40:01 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:40:01 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:40:01 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:40:01.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:40:02 compute-0 nova_compute[250018]: 2026-01-20 15:40:02.204 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:40:02 compute-0 nova_compute[250018]: 2026-01-20 15:40:02.931 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:40:03 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:40:03 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:40:03 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:40:03.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:40:03 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3536: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 20 15:40:03 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:40:03 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:40:03 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:40:03 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:40:03.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:40:04 compute-0 ceph-mon[74360]: pgmap v3536: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 20 15:40:05 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:40:05 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:40:05 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:40:05.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:40:05 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3537: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 20 15:40:05 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:40:05 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:40:05 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:40:05.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:40:06 compute-0 ceph-mon[74360]: pgmap v3537: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 20 15:40:07 compute-0 nova_compute[250018]: 2026-01-20 15:40:07.206 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:40:07 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:40:07 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:40:07 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:40:07.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:40:07 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3538: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 67 op/s
Jan 20 15:40:07 compute-0 nova_compute[250018]: 2026-01-20 15:40:07.934 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:40:07 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:40:07 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:40:07 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:40:07.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:40:08 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:40:08 compute-0 ceph-mon[74360]: pgmap v3538: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 67 op/s
Jan 20 15:40:09 compute-0 ovn_controller[148666]: 2026-01-20T15:40:09Z|00112|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:22:38:1d 10.100.0.3
Jan 20 15:40:09 compute-0 ovn_controller[148666]: 2026-01-20T15:40:09Z|00113|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:22:38:1d 10.100.0.3
Jan 20 15:40:09 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:40:09 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:40:09 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:40:09.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:40:09 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3539: 321 pgs: 321 active+clean; 174 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 444 KiB/s wr, 67 op/s
Jan 20 15:40:09 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:40:09 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:40:09 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:40:09.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:40:10 compute-0 ceph-mon[74360]: pgmap v3539: 321 pgs: 321 active+clean; 174 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 444 KiB/s wr, 67 op/s
Jan 20 15:40:11 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:40:11 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:40:11 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:40:11.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:40:11 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3540: 321 pgs: 321 active+clean; 193 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.1 MiB/s wr, 115 op/s
Jan 20 15:40:11 compute-0 sudo[396323]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:40:11 compute-0 sudo[396323]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:40:11 compute-0 sudo[396323]: pam_unix(sudo:session): session closed for user root
Jan 20 15:40:11 compute-0 sudo[396348]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:40:11 compute-0 sudo[396348]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:40:11 compute-0 sudo[396348]: pam_unix(sudo:session): session closed for user root
Jan 20 15:40:11 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:40:11 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:40:11 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:40:11.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:40:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 15:40:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:40:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 20 15:40:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:40:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.002155053333798623 of space, bias 1.0, pg target 0.6465160001395869 quantized to 32 (current 32)
Jan 20 15:40:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:40:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021614147124511445 of space, bias 1.0, pg target 0.6484244137353433 quantized to 32 (current 32)
Jan 20 15:40:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:40:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 20 15:40:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:40:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 20 15:40:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:40:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 20 15:40:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:40:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 15:40:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:40:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 20 15:40:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:40:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 20 15:40:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:40:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 15:40:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:40:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 20 15:40:12 compute-0 nova_compute[250018]: 2026-01-20 15:40:12.207 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:40:12 compute-0 nova_compute[250018]: 2026-01-20 15:40:12.601 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:40:12 compute-0 ceph-mon[74360]: pgmap v3540: 321 pgs: 321 active+clean; 193 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.1 MiB/s wr, 115 op/s
Jan 20 15:40:12 compute-0 nova_compute[250018]: 2026-01-20 15:40:12.937 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:40:13 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:40:13 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:40:13 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:40:13.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:40:13 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3541: 321 pgs: 321 active+clean; 193 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 324 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Jan 20 15:40:13 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:40:13 compute-0 podman[396375]: 2026-01-20 15:40:13.468241224 +0000 UTC m=+0.062833993 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Jan 20 15:40:13 compute-0 podman[396374]: 2026-01-20 15:40:13.504416171 +0000 UTC m=+0.099977317 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3)
Jan 20 15:40:13 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:40:13 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:40:13 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:40:13.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:40:14 compute-0 ceph-mon[74360]: pgmap v3541: 321 pgs: 321 active+clean; 193 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 324 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Jan 20 15:40:14 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/4116331290' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:40:14 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/4116331290' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:40:15 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:40:15 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:40:15 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:40:15.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:40:15 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3542: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Jan 20 15:40:15 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:40:15 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:40:15 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:40:15.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:40:16 compute-0 ceph-mon[74360]: pgmap v3542: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Jan 20 15:40:17 compute-0 nova_compute[250018]: 2026-01-20 15:40:17.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:40:17 compute-0 nova_compute[250018]: 2026-01-20 15:40:17.051 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 20 15:40:17 compute-0 nova_compute[250018]: 2026-01-20 15:40:17.071 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 20 15:40:17 compute-0 nova_compute[250018]: 2026-01-20 15:40:17.210 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:40:17 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:40:17 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:40:17 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:40:17.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:40:17 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3543: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 328 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Jan 20 15:40:17 compute-0 ceph-mon[74360]: pgmap v3543: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 328 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Jan 20 15:40:17 compute-0 nova_compute[250018]: 2026-01-20 15:40:17.940 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:40:17 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:40:17 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:40:17 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:40:17.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:40:18 compute-0 nova_compute[250018]: 2026-01-20 15:40:18.071 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:40:18 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:40:19 compute-0 nova_compute[250018]: 2026-01-20 15:40:19.067 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:40:19 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:40:19 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:40:19 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:40:19.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:40:19 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3544: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 328 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Jan 20 15:40:19 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:40:19 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:40:19 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:40:19.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:40:20 compute-0 ceph-mon[74360]: pgmap v3544: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 328 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Jan 20 15:40:21 compute-0 nova_compute[250018]: 2026-01-20 15:40:21.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:40:21 compute-0 nova_compute[250018]: 2026-01-20 15:40:21.051 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 15:40:21 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:40:21 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:40:21 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:40:21.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:40:21 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3545: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 323 KiB/s rd, 1.7 MiB/s wr, 60 op/s
Jan 20 15:40:21 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/3019188951' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:40:21 compute-0 ceph-mon[74360]: pgmap v3545: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 323 KiB/s rd, 1.7 MiB/s wr, 60 op/s
Jan 20 15:40:21 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:40:21 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:40:21 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:40:21.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:40:22 compute-0 nova_compute[250018]: 2026-01-20 15:40:22.212 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:40:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:40:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:40:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:40:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:40:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:40:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:40:22 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/4221299174' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:40:22 compute-0 nova_compute[250018]: 2026-01-20 15:40:22.941 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:40:23 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:40:23 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:40:23 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:40:23.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:40:23 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3546: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.0 KiB/s rd, 27 KiB/s wr, 2 op/s
Jan 20 15:40:23 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:40:23 compute-0 nova_compute[250018]: 2026-01-20 15:40:23.779 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:40:23 compute-0 ceph-mon[74360]: pgmap v3546: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.0 KiB/s rd, 27 KiB/s wr, 2 op/s
Jan 20 15:40:23 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:40:23 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:40:23 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:40:23.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:40:24 compute-0 nova_compute[250018]: 2026-01-20 15:40:24.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:40:24 compute-0 nova_compute[250018]: 2026-01-20 15:40:24.206 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:40:24 compute-0 nova_compute[250018]: 2026-01-20 15:40:24.207 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:40:24 compute-0 nova_compute[250018]: 2026-01-20 15:40:24.207 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:40:24 compute-0 nova_compute[250018]: 2026-01-20 15:40:24.207 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 15:40:24 compute-0 nova_compute[250018]: 2026-01-20 15:40:24.208 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:40:24 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:40:24 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3621488171' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:40:24 compute-0 nova_compute[250018]: 2026-01-20 15:40:24.661 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:40:24 compute-0 nova_compute[250018]: 2026-01-20 15:40:24.740 250022 DEBUG nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] skipping disk for instance-000000d8 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 15:40:24 compute-0 nova_compute[250018]: 2026-01-20 15:40:24.742 250022 DEBUG nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] skipping disk for instance-000000d8 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 15:40:24 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3621488171' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:40:24 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/415358669' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:40:24 compute-0 nova_compute[250018]: 2026-01-20 15:40:24.928 250022 WARNING nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:40:24 compute-0 nova_compute[250018]: 2026-01-20 15:40:24.930 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3972MB free_disk=20.942752838134766GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 15:40:24 compute-0 nova_compute[250018]: 2026-01-20 15:40:24.930 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:40:24 compute-0 nova_compute[250018]: 2026-01-20 15:40:24.930 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:40:25 compute-0 nova_compute[250018]: 2026-01-20 15:40:25.029 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Instance b7713a52-d016-43a7-824f-d4b6098efd0d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 15:40:25 compute-0 nova_compute[250018]: 2026-01-20 15:40:25.029 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 15:40:25 compute-0 nova_compute[250018]: 2026-01-20 15:40:25.030 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 15:40:25 compute-0 nova_compute[250018]: 2026-01-20 15:40:25.072 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:40:25 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:40:25 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:40:25 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:40:25.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:40:25 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3547: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.0 KiB/s rd, 29 KiB/s wr, 2 op/s
Jan 20 15:40:25 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:40:25 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3854088840' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:40:25 compute-0 nova_compute[250018]: 2026-01-20 15:40:25.527 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:40:25 compute-0 nova_compute[250018]: 2026-01-20 15:40:25.533 250022 DEBUG nova.compute.provider_tree [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:40:25 compute-0 nova_compute[250018]: 2026-01-20 15:40:25.759 250022 DEBUG nova.scheduler.client.report [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:40:25 compute-0 nova_compute[250018]: 2026-01-20 15:40:25.779 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 15:40:25 compute-0 nova_compute[250018]: 2026-01-20 15:40:25.779 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.849s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:40:25 compute-0 ceph-mon[74360]: pgmap v3547: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.0 KiB/s rd, 29 KiB/s wr, 2 op/s
Jan 20 15:40:25 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3854088840' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:40:25 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/2103467019' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:40:26 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:40:26 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:40:26 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:40:25.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:40:26 compute-0 nova_compute[250018]: 2026-01-20 15:40:26.202 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:40:27 compute-0 nova_compute[250018]: 2026-01-20 15:40:27.214 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:40:27 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:40:27 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:40:27 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:40:27.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:40:27 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3548: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 682 B/s rd, 14 KiB/s wr, 0 op/s
Jan 20 15:40:27 compute-0 nova_compute[250018]: 2026-01-20 15:40:27.944 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:40:28 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:40:28 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:40:28 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:40:28.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:40:28 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:40:28 compute-0 ceph-mon[74360]: pgmap v3548: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 682 B/s rd, 14 KiB/s wr, 0 op/s
Jan 20 15:40:29 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:40:29 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:40:29 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:40:29.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:40:29 compute-0 sshd-session[396473]: Invalid user ubuntu from 134.122.57.138 port 56936
Jan 20 15:40:29 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3549: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 14 KiB/s wr, 0 op/s
Jan 20 15:40:29 compute-0 sshd-session[396473]: Connection closed by invalid user ubuntu 134.122.57.138 port 56936 [preauth]
Jan 20 15:40:30 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:40:30 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:40:30 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:40:30.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:40:30 compute-0 ceph-mon[74360]: pgmap v3549: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 14 KiB/s wr, 0 op/s
Jan 20 15:40:30 compute-0 nova_compute[250018]: 2026-01-20 15:40:30.779 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:40:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:40:30.813 160071 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:40:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:40:30.814 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:40:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:40:30.814 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:40:31 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:40:31 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:40:31 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:40:31.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:40:31 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3550: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 14 KiB/s wr, 0 op/s
Jan 20 15:40:31 compute-0 ceph-mon[74360]: pgmap v3550: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 14 KiB/s wr, 0 op/s
Jan 20 15:40:32 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:40:32 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:40:32 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:40:32.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:40:32 compute-0 nova_compute[250018]: 2026-01-20 15:40:32.045 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:40:32 compute-0 sudo[396477]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:40:32 compute-0 sudo[396477]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:40:32 compute-0 sudo[396477]: pam_unix(sudo:session): session closed for user root
Jan 20 15:40:32 compute-0 sudo[396502]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:40:32 compute-0 sudo[396502]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:40:32 compute-0 sudo[396502]: pam_unix(sudo:session): session closed for user root
Jan 20 15:40:32 compute-0 nova_compute[250018]: 2026-01-20 15:40:32.215 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:40:32 compute-0 nova_compute[250018]: 2026-01-20 15:40:32.946 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:40:33 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/522043476' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:40:33 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:40:33 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:40:33 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:40:33.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:40:33 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3551: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.0 KiB/s wr, 0 op/s
Jan 20 15:40:33 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:40:34 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:40:34 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:40:34 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:40:34.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:40:35 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:40:35 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:40:35 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:40:35.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:40:35 compute-0 ceph-mon[74360]: pgmap v3551: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.0 KiB/s wr, 0 op/s
Jan 20 15:40:35 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3552: 321 pgs: 321 active+clean; 230 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 7.0 KiB/s rd, 1.2 MiB/s wr, 14 op/s
Jan 20 15:40:36 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:40:36 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:40:36 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:40:36.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:40:36 compute-0 ceph-mon[74360]: pgmap v3552: 321 pgs: 321 active+clean; 230 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 7.0 KiB/s rd, 1.2 MiB/s wr, 14 op/s
Jan 20 15:40:37 compute-0 nova_compute[250018]: 2026-01-20 15:40:37.219 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:40:37 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:40:37 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:40:37 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:40:37.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:40:37 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3553: 321 pgs: 321 active+clean; 235 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.7 MiB/s wr, 26 op/s
Jan 20 15:40:37 compute-0 nova_compute[250018]: 2026-01-20 15:40:37.991 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:40:38 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:40:38 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:40:38 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:40:38.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:40:38 compute-0 nova_compute[250018]: 2026-01-20 15:40:38.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:40:38 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:40:38 compute-0 ceph-mon[74360]: pgmap v3553: 321 pgs: 321 active+clean; 235 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.7 MiB/s wr, 26 op/s
Jan 20 15:40:39 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:40:39 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:40:39 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:40:39.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:40:39 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3554: 321 pgs: 321 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:40:40 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:40:40 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:40:40 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:40:40.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:40:40 compute-0 sudo[396532]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:40:40 compute-0 sudo[396532]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:40:40 compute-0 sudo[396532]: pam_unix(sudo:session): session closed for user root
Jan 20 15:40:40 compute-0 sudo[396557]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:40:40 compute-0 sudo[396557]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:40:40 compute-0 sudo[396557]: pam_unix(sudo:session): session closed for user root
Jan 20 15:40:40 compute-0 ceph-mon[74360]: pgmap v3554: 321 pgs: 321 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:40:40 compute-0 sudo[396582]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:40:40 compute-0 sudo[396582]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:40:40 compute-0 sudo[396582]: pam_unix(sudo:session): session closed for user root
Jan 20 15:40:40 compute-0 sudo[396607]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 15:40:40 compute-0 sudo[396607]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:40:40 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 20 15:40:40 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:40:40 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 20 15:40:40 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:40:41 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 20 15:40:41 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:40:41 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 20 15:40:41 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:40:41 compute-0 sudo[396607]: pam_unix(sudo:session): session closed for user root
Jan 20 15:40:41 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Jan 20 15:40:41 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 20 15:40:41 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:40:41 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:40:41 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:40:41.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:40:41 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3555: 321 pgs: 321 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:40:41 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Jan 20 15:40:41 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 20 15:40:41 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 15:40:41 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:40:41 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 20 15:40:41 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 15:40:41 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 20 15:40:41 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:40:41 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 1404fe77-39a4-48f7-84b1-435363bf281b does not exist
Jan 20 15:40:41 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 3158c450-23f7-4b12-b32d-7a5c7bab99fa does not exist
Jan 20 15:40:41 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 339a2861-a743-4948-9c0b-450e42deec15 does not exist
Jan 20 15:40:41 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 20 15:40:41 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 15:40:41 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 20 15:40:41 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 15:40:41 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 15:40:41 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:40:41 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:40:41 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:40:41 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:40:41 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:40:41 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 20 15:40:41 compute-0 ceph-mon[74360]: pgmap v3555: 321 pgs: 321 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:40:41 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 20 15:40:41 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:40:41 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 15:40:41 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:40:41 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 15:40:41 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 15:40:41 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:40:42 compute-0 sudo[396665]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:40:42 compute-0 sudo[396665]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:40:42 compute-0 sudo[396665]: pam_unix(sudo:session): session closed for user root
Jan 20 15:40:42 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:40:42 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:40:42 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:40:42.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:40:42 compute-0 sudo[396690]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:40:42 compute-0 sudo[396690]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:40:42 compute-0 sudo[396690]: pam_unix(sudo:session): session closed for user root
Jan 20 15:40:42 compute-0 sudo[396715]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:40:42 compute-0 sudo[396715]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:40:42 compute-0 sudo[396715]: pam_unix(sudo:session): session closed for user root
Jan 20 15:40:42 compute-0 sudo[396740]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 15:40:42 compute-0 sudo[396740]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:40:42 compute-0 nova_compute[250018]: 2026-01-20 15:40:42.234 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:40:42 compute-0 podman[396808]: 2026-01-20 15:40:42.570583407 +0000 UTC m=+0.079575767 container create bce92f4805aae56de18c725b87e4d1744ff44d2d591aaeb3a486c697e625e095 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_herschel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef)
Jan 20 15:40:42 compute-0 systemd[1]: Started libpod-conmon-bce92f4805aae56de18c725b87e4d1744ff44d2d591aaeb3a486c697e625e095.scope.
Jan 20 15:40:42 compute-0 podman[396808]: 2026-01-20 15:40:42.551414009 +0000 UTC m=+0.060406389 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:40:42 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:40:42 compute-0 podman[396808]: 2026-01-20 15:40:42.66194448 +0000 UTC m=+0.170936890 container init bce92f4805aae56de18c725b87e4d1744ff44d2d591aaeb3a486c697e625e095 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_herschel, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 20 15:40:42 compute-0 podman[396808]: 2026-01-20 15:40:42.670345877 +0000 UTC m=+0.179338247 container start bce92f4805aae56de18c725b87e4d1744ff44d2d591aaeb3a486c697e625e095 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_herschel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 15:40:42 compute-0 podman[396808]: 2026-01-20 15:40:42.67339479 +0000 UTC m=+0.182387150 container attach bce92f4805aae56de18c725b87e4d1744ff44d2d591aaeb3a486c697e625e095 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_herschel, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 15:40:42 compute-0 silly_herschel[396825]: 167 167
Jan 20 15:40:42 compute-0 systemd[1]: libpod-bce92f4805aae56de18c725b87e4d1744ff44d2d591aaeb3a486c697e625e095.scope: Deactivated successfully.
Jan 20 15:40:42 compute-0 podman[396808]: 2026-01-20 15:40:42.677275454 +0000 UTC m=+0.186267814 container died bce92f4805aae56de18c725b87e4d1744ff44d2d591aaeb3a486c697e625e095 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_herschel, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 15:40:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-7ba6d0ed7ba770a2e5685950286ae3f44b241195562ea1042eff590cae956880-merged.mount: Deactivated successfully.
Jan 20 15:40:42 compute-0 podman[396808]: 2026-01-20 15:40:42.718436504 +0000 UTC m=+0.227428864 container remove bce92f4805aae56de18c725b87e4d1744ff44d2d591aaeb3a486c697e625e095 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_herschel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 20 15:40:42 compute-0 systemd[1]: libpod-conmon-bce92f4805aae56de18c725b87e4d1744ff44d2d591aaeb3a486c697e625e095.scope: Deactivated successfully.
Jan 20 15:40:42 compute-0 podman[396849]: 2026-01-20 15:40:42.903987428 +0000 UTC m=+0.044500131 container create d0f957d81824aad59d7d6ac5c3909e14995694f6de49efffc8e070dca226d322 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_stonebraker, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 20 15:40:42 compute-0 systemd[1]: Started libpod-conmon-d0f957d81824aad59d7d6ac5c3909e14995694f6de49efffc8e070dca226d322.scope.
Jan 20 15:40:42 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:40:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f440a29fe11298fd3252ec6b258e9b97cf39ac1bfde06b1b3f56ee51b9aecf91/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 15:40:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f440a29fe11298fd3252ec6b258e9b97cf39ac1bfde06b1b3f56ee51b9aecf91/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 15:40:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f440a29fe11298fd3252ec6b258e9b97cf39ac1bfde06b1b3f56ee51b9aecf91/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 15:40:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f440a29fe11298fd3252ec6b258e9b97cf39ac1bfde06b1b3f56ee51b9aecf91/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 15:40:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f440a29fe11298fd3252ec6b258e9b97cf39ac1bfde06b1b3f56ee51b9aecf91/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 15:40:42 compute-0 podman[396849]: 2026-01-20 15:40:42.96820666 +0000 UTC m=+0.108719373 container init d0f957d81824aad59d7d6ac5c3909e14995694f6de49efffc8e070dca226d322 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_stonebraker, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:40:42 compute-0 podman[396849]: 2026-01-20 15:40:42.974758176 +0000 UTC m=+0.115270879 container start d0f957d81824aad59d7d6ac5c3909e14995694f6de49efffc8e070dca226d322 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_stonebraker, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 15:40:42 compute-0 podman[396849]: 2026-01-20 15:40:42.977950762 +0000 UTC m=+0.118463465 container attach d0f957d81824aad59d7d6ac5c3909e14995694f6de49efffc8e070dca226d322 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_stonebraker, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 15:40:42 compute-0 podman[396849]: 2026-01-20 15:40:42.883400873 +0000 UTC m=+0.023913626 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:40:42 compute-0 nova_compute[250018]: 2026-01-20 15:40:42.992 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:40:43 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/4232303621' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:40:43 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:40:43 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:40:43 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:40:43.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:40:43 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3556: 321 pgs: 321 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:40:43 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:40:43 compute-0 competent_stonebraker[396866]: --> passed data devices: 0 physical, 1 LVM
Jan 20 15:40:43 compute-0 competent_stonebraker[396866]: --> relative data size: 1.0
Jan 20 15:40:43 compute-0 competent_stonebraker[396866]: --> All data devices are unavailable
Jan 20 15:40:43 compute-0 systemd[1]: libpod-d0f957d81824aad59d7d6ac5c3909e14995694f6de49efffc8e070dca226d322.scope: Deactivated successfully.
Jan 20 15:40:43 compute-0 podman[396849]: 2026-01-20 15:40:43.766369943 +0000 UTC m=+0.906882646 container died d0f957d81824aad59d7d6ac5c3909e14995694f6de49efffc8e070dca226d322 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_stonebraker, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 15:40:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-f440a29fe11298fd3252ec6b258e9b97cf39ac1bfde06b1b3f56ee51b9aecf91-merged.mount: Deactivated successfully.
Jan 20 15:40:43 compute-0 podman[396849]: 2026-01-20 15:40:43.818361795 +0000 UTC m=+0.958874498 container remove d0f957d81824aad59d7d6ac5c3909e14995694f6de49efffc8e070dca226d322 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_stonebraker, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 15:40:43 compute-0 systemd[1]: libpod-conmon-d0f957d81824aad59d7d6ac5c3909e14995694f6de49efffc8e070dca226d322.scope: Deactivated successfully.
Jan 20 15:40:43 compute-0 sudo[396740]: pam_unix(sudo:session): session closed for user root
Jan 20 15:40:43 compute-0 podman[396890]: 2026-01-20 15:40:43.879243657 +0000 UTC m=+0.075183348 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 20 15:40:43 compute-0 podman[396881]: 2026-01-20 15:40:43.887216093 +0000 UTC m=+0.094041668 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 20 15:40:43 compute-0 sudo[396931]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:40:43 compute-0 sudo[396931]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:40:43 compute-0 sudo[396931]: pam_unix(sudo:session): session closed for user root
Jan 20 15:40:43 compute-0 sudo[396958]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:40:43 compute-0 sudo[396958]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:40:43 compute-0 sudo[396958]: pam_unix(sudo:session): session closed for user root
Jan 20 15:40:44 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:40:44 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:40:44 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:40:44.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:40:44 compute-0 nova_compute[250018]: 2026-01-20 15:40:44.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:40:44 compute-0 nova_compute[250018]: 2026-01-20 15:40:44.051 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 15:40:44 compute-0 nova_compute[250018]: 2026-01-20 15:40:44.051 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 15:40:44 compute-0 sudo[396983]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:40:44 compute-0 sudo[396983]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:40:44 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/3309998384' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:40:44 compute-0 ceph-mon[74360]: pgmap v3556: 321 pgs: 321 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:40:44 compute-0 sudo[396983]: pam_unix(sudo:session): session closed for user root
Jan 20 15:40:44 compute-0 sudo[397008]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- lvm list --format json
Jan 20 15:40:44 compute-0 sudo[397008]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:40:44 compute-0 nova_compute[250018]: 2026-01-20 15:40:44.333 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "refresh_cache-b7713a52-d016-43a7-824f-d4b6098efd0d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:40:44 compute-0 nova_compute[250018]: 2026-01-20 15:40:44.333 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquired lock "refresh_cache-b7713a52-d016-43a7-824f-d4b6098efd0d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:40:44 compute-0 nova_compute[250018]: 2026-01-20 15:40:44.333 250022 DEBUG nova.network.neutron [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] [instance: b7713a52-d016-43a7-824f-d4b6098efd0d] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 20 15:40:44 compute-0 nova_compute[250018]: 2026-01-20 15:40:44.334 250022 DEBUG nova.objects.instance [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b7713a52-d016-43a7-824f-d4b6098efd0d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:40:44 compute-0 podman[397075]: 2026-01-20 15:40:44.425847697 +0000 UTC m=+0.038220661 container create 78885b07748526b0b663725a6849d9cc9dab90381e9aaf43670729de0854b1ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_babbage, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 15:40:44 compute-0 systemd[1]: Started libpod-conmon-78885b07748526b0b663725a6849d9cc9dab90381e9aaf43670729de0854b1ec.scope.
Jan 20 15:40:44 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:40:44 compute-0 podman[397075]: 2026-01-20 15:40:44.407981545 +0000 UTC m=+0.020354539 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:40:44 compute-0 podman[397075]: 2026-01-20 15:40:44.50936678 +0000 UTC m=+0.121739764 container init 78885b07748526b0b663725a6849d9cc9dab90381e9aaf43670729de0854b1ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_babbage, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 20 15:40:44 compute-0 podman[397075]: 2026-01-20 15:40:44.515739931 +0000 UTC m=+0.128112895 container start 78885b07748526b0b663725a6849d9cc9dab90381e9aaf43670729de0854b1ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_babbage, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 15:40:44 compute-0 podman[397075]: 2026-01-20 15:40:44.518660101 +0000 UTC m=+0.131033085 container attach 78885b07748526b0b663725a6849d9cc9dab90381e9aaf43670729de0854b1ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_babbage, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 20 15:40:44 compute-0 eager_babbage[397091]: 167 167
Jan 20 15:40:44 compute-0 systemd[1]: libpod-78885b07748526b0b663725a6849d9cc9dab90381e9aaf43670729de0854b1ec.scope: Deactivated successfully.
Jan 20 15:40:44 compute-0 podman[397075]: 2026-01-20 15:40:44.521813805 +0000 UTC m=+0.134186809 container died 78885b07748526b0b663725a6849d9cc9dab90381e9aaf43670729de0854b1ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_babbage, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 15:40:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-5bb1a8e6737447fe0b49190e75d183c6fcc6603c44d15ab217cabc345eddb964-merged.mount: Deactivated successfully.
Jan 20 15:40:44 compute-0 podman[397075]: 2026-01-20 15:40:44.564599389 +0000 UTC m=+0.176972353 container remove 78885b07748526b0b663725a6849d9cc9dab90381e9aaf43670729de0854b1ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_babbage, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 20 15:40:44 compute-0 systemd[1]: libpod-conmon-78885b07748526b0b663725a6849d9cc9dab90381e9aaf43670729de0854b1ec.scope: Deactivated successfully.
Jan 20 15:40:44 compute-0 podman[397115]: 2026-01-20 15:40:44.734647965 +0000 UTC m=+0.045928450 container create 0261921d7f95e9744156f57567cfed23fa140865680d733dc58bed63b117441b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_heisenberg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 20 15:40:44 compute-0 systemd[1]: Started libpod-conmon-0261921d7f95e9744156f57567cfed23fa140865680d733dc58bed63b117441b.scope.
Jan 20 15:40:44 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:40:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e907f7b9f1cd47c656433cd8c52305d1ca87c6b31c5643cd48bffc900102312/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 15:40:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e907f7b9f1cd47c656433cd8c52305d1ca87c6b31c5643cd48bffc900102312/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 15:40:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e907f7b9f1cd47c656433cd8c52305d1ca87c6b31c5643cd48bffc900102312/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 15:40:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e907f7b9f1cd47c656433cd8c52305d1ca87c6b31c5643cd48bffc900102312/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 15:40:44 compute-0 podman[397115]: 2026-01-20 15:40:44.716930657 +0000 UTC m=+0.028211142 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:40:44 compute-0 podman[397115]: 2026-01-20 15:40:44.813318656 +0000 UTC m=+0.124599181 container init 0261921d7f95e9744156f57567cfed23fa140865680d733dc58bed63b117441b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_heisenberg, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 15:40:44 compute-0 podman[397115]: 2026-01-20 15:40:44.827321684 +0000 UTC m=+0.138602149 container start 0261921d7f95e9744156f57567cfed23fa140865680d733dc58bed63b117441b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_heisenberg, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 15:40:44 compute-0 podman[397115]: 2026-01-20 15:40:44.830751076 +0000 UTC m=+0.142031601 container attach 0261921d7f95e9744156f57567cfed23fa140865680d733dc58bed63b117441b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_heisenberg, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Jan 20 15:40:45 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:40:45 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:40:45 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:40:45.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:40:45 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3557: 321 pgs: 321 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Jan 20 15:40:45 compute-0 sleepy_heisenberg[397131]: {
Jan 20 15:40:45 compute-0 sleepy_heisenberg[397131]:     "0": [
Jan 20 15:40:45 compute-0 sleepy_heisenberg[397131]:         {
Jan 20 15:40:45 compute-0 sleepy_heisenberg[397131]:             "devices": [
Jan 20 15:40:45 compute-0 sleepy_heisenberg[397131]:                 "/dev/loop3"
Jan 20 15:40:45 compute-0 sleepy_heisenberg[397131]:             ],
Jan 20 15:40:45 compute-0 sleepy_heisenberg[397131]:             "lv_name": "ceph_lv0",
Jan 20 15:40:45 compute-0 sleepy_heisenberg[397131]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 15:40:45 compute-0 sleepy_heisenberg[397131]:             "lv_size": "7511998464",
Jan 20 15:40:45 compute-0 sleepy_heisenberg[397131]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e399cf45-e6b6-5393-99f1-75c601d3f188,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afc3740a-ccee-46f8-b343-22648cc89376,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 20 15:40:45 compute-0 sleepy_heisenberg[397131]:             "lv_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 15:40:45 compute-0 sleepy_heisenberg[397131]:             "name": "ceph_lv0",
Jan 20 15:40:45 compute-0 sleepy_heisenberg[397131]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 15:40:45 compute-0 sleepy_heisenberg[397131]:             "tags": {
Jan 20 15:40:45 compute-0 sleepy_heisenberg[397131]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 15:40:45 compute-0 sleepy_heisenberg[397131]:                 "ceph.block_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 15:40:45 compute-0 sleepy_heisenberg[397131]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 15:40:45 compute-0 sleepy_heisenberg[397131]:                 "ceph.cluster_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 15:40:45 compute-0 sleepy_heisenberg[397131]:                 "ceph.cluster_name": "ceph",
Jan 20 15:40:45 compute-0 sleepy_heisenberg[397131]:                 "ceph.crush_device_class": "",
Jan 20 15:40:45 compute-0 sleepy_heisenberg[397131]:                 "ceph.encrypted": "0",
Jan 20 15:40:45 compute-0 sleepy_heisenberg[397131]:                 "ceph.osd_fsid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 15:40:45 compute-0 sleepy_heisenberg[397131]:                 "ceph.osd_id": "0",
Jan 20 15:40:45 compute-0 sleepy_heisenberg[397131]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 15:40:45 compute-0 sleepy_heisenberg[397131]:                 "ceph.type": "block",
Jan 20 15:40:45 compute-0 sleepy_heisenberg[397131]:                 "ceph.vdo": "0"
Jan 20 15:40:45 compute-0 sleepy_heisenberg[397131]:             },
Jan 20 15:40:45 compute-0 sleepy_heisenberg[397131]:             "type": "block",
Jan 20 15:40:45 compute-0 sleepy_heisenberg[397131]:             "vg_name": "ceph_vg0"
Jan 20 15:40:45 compute-0 sleepy_heisenberg[397131]:         }
Jan 20 15:40:45 compute-0 sleepy_heisenberg[397131]:     ]
Jan 20 15:40:45 compute-0 sleepy_heisenberg[397131]: }
Jan 20 15:40:45 compute-0 podman[397115]: 2026-01-20 15:40:45.542774087 +0000 UTC m=+0.854054562 container died 0261921d7f95e9744156f57567cfed23fa140865680d733dc58bed63b117441b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_heisenberg, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 20 15:40:45 compute-0 systemd[1]: libpod-0261921d7f95e9744156f57567cfed23fa140865680d733dc58bed63b117441b.scope: Deactivated successfully.
Jan 20 15:40:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-1e907f7b9f1cd47c656433cd8c52305d1ca87c6b31c5643cd48bffc900102312-merged.mount: Deactivated successfully.
Jan 20 15:40:45 compute-0 podman[397115]: 2026-01-20 15:40:45.593791512 +0000 UTC m=+0.905071987 container remove 0261921d7f95e9744156f57567cfed23fa140865680d733dc58bed63b117441b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_heisenberg, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 20 15:40:45 compute-0 systemd[1]: libpod-conmon-0261921d7f95e9744156f57567cfed23fa140865680d733dc58bed63b117441b.scope: Deactivated successfully.
Jan 20 15:40:45 compute-0 sudo[397008]: pam_unix(sudo:session): session closed for user root
Jan 20 15:40:45 compute-0 sudo[397152]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:40:45 compute-0 sudo[397152]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:40:45 compute-0 sudo[397152]: pam_unix(sudo:session): session closed for user root
Jan 20 15:40:45 compute-0 sudo[397177]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:40:45 compute-0 sudo[397177]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:40:45 compute-0 sudo[397177]: pam_unix(sudo:session): session closed for user root
Jan 20 15:40:45 compute-0 sudo[397202]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:40:45 compute-0 sudo[397202]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:40:45 compute-0 sudo[397202]: pam_unix(sudo:session): session closed for user root
Jan 20 15:40:45 compute-0 sudo[397227]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- raw list --format json
Jan 20 15:40:45 compute-0 sudo[397227]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:40:46 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:40:46 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:40:46 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:40:46.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:40:46 compute-0 podman[397293]: 2026-01-20 15:40:46.200823683 +0000 UTC m=+0.042925249 container create a31c2d4e49403686fcd1417f25bae80918723ba02d597d7a4cd036c2e22fe797 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_leakey, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 20 15:40:46 compute-0 systemd[1]: Started libpod-conmon-a31c2d4e49403686fcd1417f25bae80918723ba02d597d7a4cd036c2e22fe797.scope.
Jan 20 15:40:46 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:40:46 compute-0 podman[397293]: 2026-01-20 15:40:46.18217803 +0000 UTC m=+0.024279626 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:40:46 compute-0 podman[397293]: 2026-01-20 15:40:46.281955371 +0000 UTC m=+0.124056957 container init a31c2d4e49403686fcd1417f25bae80918723ba02d597d7a4cd036c2e22fe797 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_leakey, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 15:40:46 compute-0 podman[397293]: 2026-01-20 15:40:46.288830296 +0000 UTC m=+0.130931852 container start a31c2d4e49403686fcd1417f25bae80918723ba02d597d7a4cd036c2e22fe797 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_leakey, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 20 15:40:46 compute-0 podman[397293]: 2026-01-20 15:40:46.292051543 +0000 UTC m=+0.134153109 container attach a31c2d4e49403686fcd1417f25bae80918723ba02d597d7a4cd036c2e22fe797 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_leakey, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 15:40:46 compute-0 competent_leakey[397310]: 167 167
Jan 20 15:40:46 compute-0 systemd[1]: libpod-a31c2d4e49403686fcd1417f25bae80918723ba02d597d7a4cd036c2e22fe797.scope: Deactivated successfully.
Jan 20 15:40:46 compute-0 podman[397293]: 2026-01-20 15:40:46.294077588 +0000 UTC m=+0.136179154 container died a31c2d4e49403686fcd1417f25bae80918723ba02d597d7a4cd036c2e22fe797 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_leakey, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 15:40:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-5c70295e0b63152edb67f00629e524abb55c5674bcfaf48b034eed0666f1578c-merged.mount: Deactivated successfully.
Jan 20 15:40:46 compute-0 podman[397293]: 2026-01-20 15:40:46.328715601 +0000 UTC m=+0.170817157 container remove a31c2d4e49403686fcd1417f25bae80918723ba02d597d7a4cd036c2e22fe797 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_leakey, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 20 15:40:46 compute-0 systemd[1]: libpod-conmon-a31c2d4e49403686fcd1417f25bae80918723ba02d597d7a4cd036c2e22fe797.scope: Deactivated successfully.
Jan 20 15:40:46 compute-0 podman[397334]: 2026-01-20 15:40:46.483063924 +0000 UTC m=+0.039377814 container create c44c5ff64c0cae4f63b0863cfe64051aac7a6b94b7edd0d680ce8ffb2efc495b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_hugle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 15:40:46 compute-0 systemd[1]: Started libpod-conmon-c44c5ff64c0cae4f63b0863cfe64051aac7a6b94b7edd0d680ce8ffb2efc495b.scope.
Jan 20 15:40:46 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:40:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f725a622079b228699c61fb37ecff53edbd79b1e77ddab7e6b544bdaefb0ba1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 15:40:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f725a622079b228699c61fb37ecff53edbd79b1e77ddab7e6b544bdaefb0ba1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 15:40:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f725a622079b228699c61fb37ecff53edbd79b1e77ddab7e6b544bdaefb0ba1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 15:40:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f725a622079b228699c61fb37ecff53edbd79b1e77ddab7e6b544bdaefb0ba1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 15:40:46 compute-0 podman[397334]: 2026-01-20 15:40:46.465162521 +0000 UTC m=+0.021476451 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:40:46 compute-0 podman[397334]: 2026-01-20 15:40:46.564531591 +0000 UTC m=+0.120845521 container init c44c5ff64c0cae4f63b0863cfe64051aac7a6b94b7edd0d680ce8ffb2efc495b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_hugle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 20 15:40:46 compute-0 podman[397334]: 2026-01-20 15:40:46.570783289 +0000 UTC m=+0.127097189 container start c44c5ff64c0cae4f63b0863cfe64051aac7a6b94b7edd0d680ce8ffb2efc495b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_hugle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 20 15:40:46 compute-0 podman[397334]: 2026-01-20 15:40:46.574761096 +0000 UTC m=+0.131075046 container attach c44c5ff64c0cae4f63b0863cfe64051aac7a6b94b7edd0d680ce8ffb2efc495b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_hugle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Jan 20 15:40:46 compute-0 ceph-mon[74360]: pgmap v3557: 321 pgs: 321 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Jan 20 15:40:47 compute-0 nova_compute[250018]: 2026-01-20 15:40:47.265 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:40:47 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:40:47 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:40:47 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:40:47.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:40:47 compute-0 upbeat_hugle[397350]: {
Jan 20 15:40:47 compute-0 upbeat_hugle[397350]:     "afc3740a-ccee-46f8-b343-22648cc89376": {
Jan 20 15:40:47 compute-0 upbeat_hugle[397350]:         "ceph_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 15:40:47 compute-0 upbeat_hugle[397350]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 20 15:40:47 compute-0 upbeat_hugle[397350]:         "osd_id": 0,
Jan 20 15:40:47 compute-0 upbeat_hugle[397350]:         "osd_uuid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 15:40:47 compute-0 upbeat_hugle[397350]:         "type": "bluestore"
Jan 20 15:40:47 compute-0 upbeat_hugle[397350]:     }
Jan 20 15:40:47 compute-0 upbeat_hugle[397350]: }
Jan 20 15:40:47 compute-0 systemd[1]: libpod-c44c5ff64c0cae4f63b0863cfe64051aac7a6b94b7edd0d680ce8ffb2efc495b.scope: Deactivated successfully.
Jan 20 15:40:47 compute-0 podman[397334]: 2026-01-20 15:40:47.41184247 +0000 UTC m=+0.968156360 container died c44c5ff64c0cae4f63b0863cfe64051aac7a6b94b7edd0d680ce8ffb2efc495b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_hugle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 15:40:47 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3558: 321 pgs: 321 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 13 KiB/s rd, 574 KiB/s wr, 17 op/s
Jan 20 15:40:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-8f725a622079b228699c61fb37ecff53edbd79b1e77ddab7e6b544bdaefb0ba1-merged.mount: Deactivated successfully.
Jan 20 15:40:47 compute-0 podman[397334]: 2026-01-20 15:40:47.456630648 +0000 UTC m=+1.012944548 container remove c44c5ff64c0cae4f63b0863cfe64051aac7a6b94b7edd0d680ce8ffb2efc495b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_hugle, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 20 15:40:47 compute-0 systemd[1]: libpod-conmon-c44c5ff64c0cae4f63b0863cfe64051aac7a6b94b7edd0d680ce8ffb2efc495b.scope: Deactivated successfully.
Jan 20 15:40:47 compute-0 sudo[397227]: pam_unix(sudo:session): session closed for user root
Jan 20 15:40:47 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 15:40:47 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:40:47 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 15:40:47 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:40:47 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 3f82f0f3-1180-48b1-a2b1-20fd88a68af9 does not exist
Jan 20 15:40:47 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev b1f90825-27af-4f3f-a3c5-1c1d95a6ead8 does not exist
Jan 20 15:40:47 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 36b627a7-3f5c-47e8-9867-14e833ecb3c2 does not exist
Jan 20 15:40:47 compute-0 sudo[397382]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:40:47 compute-0 sudo[397382]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:40:47 compute-0 sudo[397382]: pam_unix(sudo:session): session closed for user root
Jan 20 15:40:47 compute-0 sudo[397407]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 15:40:47 compute-0 sudo[397407]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:40:47 compute-0 sudo[397407]: pam_unix(sudo:session): session closed for user root
Jan 20 15:40:47 compute-0 nova_compute[250018]: 2026-01-20 15:40:47.871 250022 DEBUG nova.network.neutron [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] [instance: b7713a52-d016-43a7-824f-d4b6098efd0d] Updating instance_info_cache with network_info: [{"id": "8daf5d78-feec-462d-a8f7-f15e4b8e2a16", "address": "fa:16:3e:22:38:1d", "network": {"id": "e8c67e63-02c8-44ef-812a-e2391a4e1afe", "bridge": "br-int", "label": "tempest-network-smoke--210660804", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "728662ec7f654a3fb2e53a90b8707d7e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8daf5d78-fe", "ovs_interfaceid": "8daf5d78-feec-462d-a8f7-f15e4b8e2a16", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:40:47 compute-0 nova_compute[250018]: 2026-01-20 15:40:47.891 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Releasing lock "refresh_cache-b7713a52-d016-43a7-824f-d4b6098efd0d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:40:47 compute-0 nova_compute[250018]: 2026-01-20 15:40:47.891 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] [instance: b7713a52-d016-43a7-824f-d4b6098efd0d] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 20 15:40:47 compute-0 nova_compute[250018]: 2026-01-20 15:40:47.995 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:40:48 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:40:48 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:40:48 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:40:48.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:40:48 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:40:48 compute-0 ceph-mon[74360]: pgmap v3558: 321 pgs: 321 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 13 KiB/s rd, 574 KiB/s wr, 17 op/s
Jan 20 15:40:48 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:40:48 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:40:49 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:40:49 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:40:49 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:40:49.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:40:49 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3559: 321 pgs: 321 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 175 KiB/s rd, 120 KiB/s wr, 17 op/s
Jan 20 15:40:50 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:40:50 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:40:50 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:40:50.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:40:50 compute-0 ceph-mon[74360]: pgmap v3559: 321 pgs: 321 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 175 KiB/s rd, 120 KiB/s wr, 17 op/s
Jan 20 15:40:51 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:40:51 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:40:51 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:40:51.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:40:51 compute-0 nova_compute[250018]: 2026-01-20 15:40:51.348 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:40:51 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:40:51.348 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=85, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=84) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:40:51 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:40:51.350 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 15:40:51 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3560: 321 pgs: 321 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Jan 20 15:40:52 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:40:52 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:40:52 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:40:52.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:40:52 compute-0 sudo[397434]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:40:52 compute-0 sudo[397434]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:40:52 compute-0 sudo[397434]: pam_unix(sudo:session): session closed for user root
Jan 20 15:40:52 compute-0 nova_compute[250018]: 2026-01-20 15:40:52.305 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:40:52 compute-0 sudo[397459]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:40:52 compute-0 sudo[397459]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:40:52 compute-0 sudo[397459]: pam_unix(sudo:session): session closed for user root
Jan 20 15:40:52 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:40:52.352 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=367c1a2c-b16a-4828-ab5a-626bb50023b4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '85'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:40:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:40:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:40:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:40:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:40:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:40:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:40:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Optimize plan auto_2026-01-20_15:40:52
Jan 20 15:40:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 15:40:52 compute-0 ceph-mgr[74653]: [balancer INFO root] do_upmap
Jan 20 15:40:52 compute-0 ceph-mgr[74653]: [balancer INFO root] pools ['volumes', 'vms', 'default.rgw.control', '.mgr', 'default.rgw.log', 'backups', 'images', 'cephfs.cephfs.data', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.meta']
Jan 20 15:40:52 compute-0 ceph-mgr[74653]: [balancer INFO root] prepared 0/10 changes
Jan 20 15:40:52 compute-0 ceph-mon[74360]: pgmap v3560: 321 pgs: 321 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Jan 20 15:40:52 compute-0 nova_compute[250018]: 2026-01-20 15:40:52.998 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:40:53 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:40:53 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:40:53 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:40:53.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:40:53 compute-0 ceph-mgr[74653]: client.0 ms_handle_reset on v2:192.168.122.100:6800/2542147622
Jan 20 15:40:53 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3561: 321 pgs: 321 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Jan 20 15:40:53 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:40:53 compute-0 ceph-mon[74360]: pgmap v3561: 321 pgs: 321 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Jan 20 15:40:54 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:40:54 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:40:54 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:40:54.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:40:55 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:40:55 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:40:55 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:40:55.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:40:55 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3562: 321 pgs: 321 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Jan 20 15:40:56 compute-0 ceph-mon[74360]: pgmap v3562: 321 pgs: 321 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Jan 20 15:40:56 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:40:56 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:40:56 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:40:56.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:40:57 compute-0 nova_compute[250018]: 2026-01-20 15:40:57.307 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:40:57 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:40:57 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:40:57 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:40:57.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:40:57 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3563: 321 pgs: 321 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 72 op/s
Jan 20 15:40:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 15:40:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 15:40:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 15:40:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 15:40:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 15:40:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 15:40:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 15:40:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 15:40:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 15:40:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 15:40:58 compute-0 nova_compute[250018]: 2026-01-20 15:40:58.001 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:40:58 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:40:58 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:40:58 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:40:58.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:40:58 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:40:58 compute-0 ceph-mon[74360]: pgmap v3563: 321 pgs: 321 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 72 op/s
Jan 20 15:40:59 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:40:59 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:40:59 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:40:59.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:40:59 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3564: 321 pgs: 321 active+clean; 255 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 908 KiB/s wr, 81 op/s
Jan 20 15:41:00 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:41:00 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:41:00 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:41:00.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:41:00 compute-0 ceph-mon[74360]: pgmap v3564: 321 pgs: 321 active+clean; 255 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 908 KiB/s wr, 81 op/s
Jan 20 15:41:01 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:41:01 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:41:01 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:41:01.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:41:01 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3565: 321 pgs: 321 active+clean; 278 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 120 op/s
Jan 20 15:41:01 compute-0 ceph-mon[74360]: pgmap v3565: 321 pgs: 321 active+clean; 278 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 120 op/s
Jan 20 15:41:02 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:41:02 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:41:02 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:41:02.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:41:02 compute-0 nova_compute[250018]: 2026-01-20 15:41:02.311 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:41:03 compute-0 nova_compute[250018]: 2026-01-20 15:41:03.004 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:41:03 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:41:03 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:41:03 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:41:03.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:41:03 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3566: 321 pgs: 321 active+clean; 278 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 318 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Jan 20 15:41:03 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:41:04 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:41:04 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:41:04 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:41:04.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:41:04 compute-0 ceph-mon[74360]: pgmap v3566: 321 pgs: 321 active+clean; 278 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 318 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Jan 20 15:41:05 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:41:05 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:41:05 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:41:05.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:41:05 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3567: 321 pgs: 321 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 20 15:41:05 compute-0 nova_compute[250018]: 2026-01-20 15:41:05.892 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:41:05 compute-0 nova_compute[250018]: 2026-01-20 15:41:05.921 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Triggering sync for uuid b7713a52-d016-43a7-824f-d4b6098efd0d _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Jan 20 15:41:05 compute-0 nova_compute[250018]: 2026-01-20 15:41:05.922 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "b7713a52-d016-43a7-824f-d4b6098efd0d" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:41:05 compute-0 nova_compute[250018]: 2026-01-20 15:41:05.923 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "b7713a52-d016-43a7-824f-d4b6098efd0d" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:41:05 compute-0 nova_compute[250018]: 2026-01-20 15:41:05.949 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "b7713a52-d016-43a7-824f-d4b6098efd0d" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.027s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:41:06 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:41:06 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:41:06 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:41:06.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:41:06 compute-0 ceph-mon[74360]: pgmap v3567: 321 pgs: 321 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 20 15:41:07 compute-0 nova_compute[250018]: 2026-01-20 15:41:07.313 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:41:07 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:41:07 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:41:07 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:41:07.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:41:07 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3568: 321 pgs: 321 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 20 15:41:08 compute-0 nova_compute[250018]: 2026-01-20 15:41:08.007 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:41:08 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:41:08 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:41:08 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:41:08.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:41:08 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:41:08 compute-0 ceph-mon[74360]: pgmap v3568: 321 pgs: 321 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 20 15:41:09 compute-0 sshd-session[397492]: Invalid user guest from 134.122.57.138 port 44192
Jan 20 15:41:09 compute-0 sshd-session[397492]: Connection closed by invalid user guest 134.122.57.138 port 44192 [preauth]
Jan 20 15:41:09 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:41:09 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:41:09 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:41:09.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:41:09 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3569: 321 pgs: 321 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 20 15:41:10 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:41:10 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:41:10 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:41:10.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:41:10 compute-0 ceph-mon[74360]: pgmap v3569: 321 pgs: 321 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 20 15:41:11 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:41:11 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:41:11 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:41:11.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:41:11 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3570: 321 pgs: 321 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 228 KiB/s rd, 1.3 MiB/s wr, 54 op/s
Jan 20 15:41:12 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:41:12 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:41:12 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:41:12.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:41:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 15:41:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:41:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 20 15:41:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:41:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.004340096024101991 of space, bias 1.0, pg target 1.3020288072305974 quantized to 32 (current 32)
Jan 20 15:41:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:41:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021614147124511445 of space, bias 1.0, pg target 0.6462629990228922 quantized to 32 (current 32)
Jan 20 15:41:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:41:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4344349060115393e-05 quantized to 32 (current 32)
Jan 20 15:41:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:41:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Jan 20 15:41:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:41:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 32)
Jan 20 15:41:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:41:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 15:41:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:41:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021737739624046157 quantized to 32 (current 32)
Jan 20 15:41:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:41:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Jan 20 15:41:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:41:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 15:41:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:41:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Jan 20 15:41:12 compute-0 nova_compute[250018]: 2026-01-20 15:41:12.351 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:41:12 compute-0 sudo[397497]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:41:12 compute-0 sudo[397497]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:41:12 compute-0 sudo[397497]: pam_unix(sudo:session): session closed for user root
Jan 20 15:41:12 compute-0 sudo[397522]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:41:12 compute-0 sudo[397522]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:41:12 compute-0 sudo[397522]: pam_unix(sudo:session): session closed for user root
Jan 20 15:41:12 compute-0 ceph-mon[74360]: pgmap v3570: 321 pgs: 321 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 228 KiB/s rd, 1.3 MiB/s wr, 54 op/s
Jan 20 15:41:13 compute-0 nova_compute[250018]: 2026-01-20 15:41:13.009 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:41:13 compute-0 nova_compute[250018]: 2026-01-20 15:41:13.081 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:41:13 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:41:13 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:41:13 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:41:13.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:41:13 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3571: 321 pgs: 321 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 13 KiB/s rd, 59 KiB/s wr, 3 op/s
Jan 20 15:41:13 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:41:14 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:41:14 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:41:14 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:41:14.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:41:14 compute-0 podman[397549]: 2026-01-20 15:41:14.495541081 +0000 UTC m=+0.074902419 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 15:41:14 compute-0 podman[397548]: 2026-01-20 15:41:14.527769454 +0000 UTC m=+0.107412540 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251202, tcib_managed=true, managed_by=edpm_ansible, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS 
Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 20 15:41:14 compute-0 ceph-mon[74360]: pgmap v3571: 321 pgs: 321 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 13 KiB/s rd, 59 KiB/s wr, 3 op/s
Jan 20 15:41:14 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/3873927125' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:41:14 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/3873927125' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:41:15 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:41:15 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:41:15 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:41:15.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:41:15 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3572: 321 pgs: 321 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 13 KiB/s rd, 60 KiB/s wr, 3 op/s
Jan 20 15:41:16 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:41:16 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:41:16 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:41:16.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:41:16 compute-0 ceph-mon[74360]: pgmap v3572: 321 pgs: 321 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 13 KiB/s rd, 60 KiB/s wr, 3 op/s
Jan 20 15:41:17 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:41:17 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:41:17 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:41:17.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:41:17 compute-0 nova_compute[250018]: 2026-01-20 15:41:17.390 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:41:17 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3573: 321 pgs: 321 active+clean; 263 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 18 KiB/s rd, 12 KiB/s wr, 10 op/s
Jan 20 15:41:18 compute-0 nova_compute[250018]: 2026-01-20 15:41:18.011 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:41:18 compute-0 nova_compute[250018]: 2026-01-20 15:41:18.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:41:18 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:41:18 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:41:18 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:41:18.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:41:18 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:41:18 compute-0 ceph-mon[74360]: pgmap v3573: 321 pgs: 321 active+clean; 263 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 18 KiB/s rd, 12 KiB/s wr, 10 op/s
Jan 20 15:41:18 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/1600213521' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:41:19 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:41:19 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:41:19 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:41:19.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:41:19 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3574: 321 pgs: 321 active+clean; 242 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 25 KiB/s rd, 15 KiB/s wr, 19 op/s
Jan 20 15:41:20 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:41:20 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:41:20 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:41:20.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:41:20 compute-0 ceph-mon[74360]: pgmap v3574: 321 pgs: 321 active+clean; 242 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 25 KiB/s rd, 15 KiB/s wr, 19 op/s
Jan 20 15:41:21 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:41:21 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:41:21 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:41:21.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:41:21 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3575: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 30 KiB/s rd, 6.9 KiB/s wr, 29 op/s
Jan 20 15:41:21 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/552745251' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:41:22 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:41:22 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:41:22 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:41:22.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:41:22 compute-0 nova_compute[250018]: 2026-01-20 15:41:22.437 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:41:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:41:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:41:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:41:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:41:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:41:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:41:22 compute-0 ceph-mon[74360]: pgmap v3575: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 30 KiB/s rd, 6.9 KiB/s wr, 29 op/s
Jan 20 15:41:22 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/1289347087' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:41:23 compute-0 nova_compute[250018]: 2026-01-20 15:41:23.013 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:41:23 compute-0 nova_compute[250018]: 2026-01-20 15:41:23.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:41:23 compute-0 nova_compute[250018]: 2026-01-20 15:41:23.050 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 15:41:23 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:41:23 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:41:23 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:41:23.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:41:23 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3576: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 24 KiB/s rd, 6.8 KiB/s wr, 28 op/s
Jan 20 15:41:23 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:41:24 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:41:24 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:41:24 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:41:24.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:41:24 compute-0 ovn_controller[148666]: 2026-01-20T15:41:24Z|00804|binding|INFO|Releasing lport d3d1794b-9dd0-4852-8274-ac50dfe93bc9 from this chassis (sb_readonly=0)
Jan 20 15:41:24 compute-0 nova_compute[250018]: 2026-01-20 15:41:24.622 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:41:24 compute-0 ceph-mon[74360]: pgmap v3576: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 24 KiB/s rd, 6.8 KiB/s wr, 28 op/s
Jan 20 15:41:25 compute-0 nova_compute[250018]: 2026-01-20 15:41:25.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:41:25 compute-0 nova_compute[250018]: 2026-01-20 15:41:25.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:41:25 compute-0 nova_compute[250018]: 2026-01-20 15:41:25.235 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:41:25 compute-0 nova_compute[250018]: 2026-01-20 15:41:25.235 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:41:25 compute-0 nova_compute[250018]: 2026-01-20 15:41:25.236 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:41:25 compute-0 nova_compute[250018]: 2026-01-20 15:41:25.237 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 15:41:25 compute-0 nova_compute[250018]: 2026-01-20 15:41:25.238 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:41:25 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:41:25 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:41:25 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:41:25.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:41:25 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3577: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 24 KiB/s rd, 6.8 KiB/s wr, 28 op/s
Jan 20 15:41:25 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:41:25 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1427972615' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:41:25 compute-0 ceph-mon[74360]: pgmap v3577: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 24 KiB/s rd, 6.8 KiB/s wr, 28 op/s
Jan 20 15:41:25 compute-0 nova_compute[250018]: 2026-01-20 15:41:25.696 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:41:25 compute-0 nova_compute[250018]: 2026-01-20 15:41:25.773 250022 DEBUG nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] skipping disk for instance-000000d8 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 15:41:25 compute-0 nova_compute[250018]: 2026-01-20 15:41:25.774 250022 DEBUG nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] skipping disk for instance-000000d8 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 15:41:25 compute-0 nova_compute[250018]: 2026-01-20 15:41:25.945 250022 WARNING nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:41:25 compute-0 nova_compute[250018]: 2026-01-20 15:41:25.946 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3993MB free_disk=20.94268798828125GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 15:41:25 compute-0 nova_compute[250018]: 2026-01-20 15:41:25.946 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:41:25 compute-0 nova_compute[250018]: 2026-01-20 15:41:25.947 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:41:26 compute-0 nova_compute[250018]: 2026-01-20 15:41:26.049 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Instance b7713a52-d016-43a7-824f-d4b6098efd0d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 15:41:26 compute-0 nova_compute[250018]: 2026-01-20 15:41:26.049 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 15:41:26 compute-0 nova_compute[250018]: 2026-01-20 15:41:26.050 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 15:41:26 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:41:26 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:41:26 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:41:26.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:41:26 compute-0 nova_compute[250018]: 2026-01-20 15:41:26.228 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:41:26 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:41:26 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1981697531' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:41:26 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/1427972615' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:41:26 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/3545405961' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:41:26 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/3284217090' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:41:26 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/1981697531' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:41:26 compute-0 nova_compute[250018]: 2026-01-20 15:41:26.688 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:41:26 compute-0 nova_compute[250018]: 2026-01-20 15:41:26.697 250022 DEBUG nova.compute.provider_tree [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:41:26 compute-0 nova_compute[250018]: 2026-01-20 15:41:26.715 250022 DEBUG nova.scheduler.client.report [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:41:26 compute-0 nova_compute[250018]: 2026-01-20 15:41:26.717 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 15:41:26 compute-0 nova_compute[250018]: 2026-01-20 15:41:26.718 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.771s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:41:27 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:41:27 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:41:27 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:41:27.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:41:27 compute-0 nova_compute[250018]: 2026-01-20 15:41:27.359 250022 DEBUG nova.compute.manager [req-2daae7a3-2207-47c0-9b10-f055df4589fc req-939905ff-c49e-4901-a178-d600aed57d9f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b7713a52-d016-43a7-824f-d4b6098efd0d] Received event network-changed-8daf5d78-feec-462d-a8f7-f15e4b8e2a16 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:41:27 compute-0 nova_compute[250018]: 2026-01-20 15:41:27.359 250022 DEBUG nova.compute.manager [req-2daae7a3-2207-47c0-9b10-f055df4589fc req-939905ff-c49e-4901-a178-d600aed57d9f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b7713a52-d016-43a7-824f-d4b6098efd0d] Refreshing instance network info cache due to event network-changed-8daf5d78-feec-462d-a8f7-f15e4b8e2a16. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 15:41:27 compute-0 nova_compute[250018]: 2026-01-20 15:41:27.360 250022 DEBUG oslo_concurrency.lockutils [req-2daae7a3-2207-47c0-9b10-f055df4589fc req-939905ff-c49e-4901-a178-d600aed57d9f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-b7713a52-d016-43a7-824f-d4b6098efd0d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:41:27 compute-0 nova_compute[250018]: 2026-01-20 15:41:27.360 250022 DEBUG oslo_concurrency.lockutils [req-2daae7a3-2207-47c0-9b10-f055df4589fc req-939905ff-c49e-4901-a178-d600aed57d9f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-b7713a52-d016-43a7-824f-d4b6098efd0d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:41:27 compute-0 nova_compute[250018]: 2026-01-20 15:41:27.360 250022 DEBUG nova.network.neutron [req-2daae7a3-2207-47c0-9b10-f055df4589fc req-939905ff-c49e-4901-a178-d600aed57d9f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b7713a52-d016-43a7-824f-d4b6098efd0d] Refreshing network info cache for port 8daf5d78-feec-462d-a8f7-f15e4b8e2a16 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 15:41:27 compute-0 nova_compute[250018]: 2026-01-20 15:41:27.440 250022 DEBUG oslo_concurrency.lockutils [None req-4d6a51c3-aa67-485e-8494-a25c7f87c7d1 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Acquiring lock "b7713a52-d016-43a7-824f-d4b6098efd0d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:41:27 compute-0 nova_compute[250018]: 2026-01-20 15:41:27.440 250022 DEBUG oslo_concurrency.lockutils [None req-4d6a51c3-aa67-485e-8494-a25c7f87c7d1 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Lock "b7713a52-d016-43a7-824f-d4b6098efd0d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:41:27 compute-0 nova_compute[250018]: 2026-01-20 15:41:27.441 250022 DEBUG oslo_concurrency.lockutils [None req-4d6a51c3-aa67-485e-8494-a25c7f87c7d1 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Acquiring lock "b7713a52-d016-43a7-824f-d4b6098efd0d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:41:27 compute-0 nova_compute[250018]: 2026-01-20 15:41:27.441 250022 DEBUG oslo_concurrency.lockutils [None req-4d6a51c3-aa67-485e-8494-a25c7f87c7d1 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Lock "b7713a52-d016-43a7-824f-d4b6098efd0d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:41:27 compute-0 nova_compute[250018]: 2026-01-20 15:41:27.441 250022 DEBUG oslo_concurrency.lockutils [None req-4d6a51c3-aa67-485e-8494-a25c7f87c7d1 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Lock "b7713a52-d016-43a7-824f-d4b6098efd0d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:41:27 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3578: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 24 KiB/s rd, 6.5 KiB/s wr, 28 op/s
Jan 20 15:41:27 compute-0 nova_compute[250018]: 2026-01-20 15:41:27.443 250022 INFO nova.compute.manager [None req-4d6a51c3-aa67-485e-8494-a25c7f87c7d1 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: b7713a52-d016-43a7-824f-d4b6098efd0d] Terminating instance
Jan 20 15:41:27 compute-0 nova_compute[250018]: 2026-01-20 15:41:27.444 250022 DEBUG nova.compute.manager [None req-4d6a51c3-aa67-485e-8494-a25c7f87c7d1 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: b7713a52-d016-43a7-824f-d4b6098efd0d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 15:41:27 compute-0 nova_compute[250018]: 2026-01-20 15:41:27.499 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:41:27 compute-0 kernel: tap8daf5d78-fe (unregistering): left promiscuous mode
Jan 20 15:41:27 compute-0 NetworkManager[48960]: <info>  [1768923687.5475] device (tap8daf5d78-fe): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 15:41:27 compute-0 ovn_controller[148666]: 2026-01-20T15:41:27Z|00805|binding|INFO|Releasing lport 8daf5d78-feec-462d-a8f7-f15e4b8e2a16 from this chassis (sb_readonly=0)
Jan 20 15:41:27 compute-0 ovn_controller[148666]: 2026-01-20T15:41:27Z|00806|binding|INFO|Setting lport 8daf5d78-feec-462d-a8f7-f15e4b8e2a16 down in Southbound
Jan 20 15:41:27 compute-0 nova_compute[250018]: 2026-01-20 15:41:27.555 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:41:27 compute-0 ovn_controller[148666]: 2026-01-20T15:41:27Z|00807|binding|INFO|Removing iface tap8daf5d78-fe ovn-installed in OVS
Jan 20 15:41:27 compute-0 nova_compute[250018]: 2026-01-20 15:41:27.556 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:41:27 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:41:27.562 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:22:38:1d 10.100.0.3'], port_security=['fa:16:3e:22:38:1d 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'b7713a52-d016-43a7-824f-d4b6098efd0d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e8c67e63-02c8-44ef-812a-e2391a4e1afe', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '728662ec7f654a3fb2e53a90b8707d7e', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'cd491bdf-9f4c-41c9-8ee6-5870d56307ff d9d3413f-e719-4750-9395-b1f3019eb5f4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=133c5430-85b8-4463-94c0-652a31dd592a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=8daf5d78-feec-462d-a8f7-f15e4b8e2a16) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:41:27 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:41:27.563 160071 INFO neutron.agent.ovn.metadata.agent [-] Port 8daf5d78-feec-462d-a8f7-f15e4b8e2a16 in datapath e8c67e63-02c8-44ef-812a-e2391a4e1afe unbound from our chassis
Jan 20 15:41:27 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:41:27.564 160071 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e8c67e63-02c8-44ef-812a-e2391a4e1afe, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 15:41:27 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:41:27.565 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[065ba9ed-e642-4738-8576-5c28d269565a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:41:27 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:41:27.566 160071 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e8c67e63-02c8-44ef-812a-e2391a4e1afe namespace which is not needed anymore
Jan 20 15:41:27 compute-0 nova_compute[250018]: 2026-01-20 15:41:27.572 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:41:27 compute-0 systemd[1]: machine-qemu\x2d94\x2dinstance\x2d000000d8.scope: Deactivated successfully.
Jan 20 15:41:27 compute-0 systemd[1]: machine-qemu\x2d94\x2dinstance\x2d000000d8.scope: Consumed 17.868s CPU time.
Jan 20 15:41:27 compute-0 systemd-machined[216401]: Machine qemu-94-instance-000000d8 terminated.
Jan 20 15:41:27 compute-0 nova_compute[250018]: 2026-01-20 15:41:27.667 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:41:27 compute-0 nova_compute[250018]: 2026-01-20 15:41:27.672 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:41:27 compute-0 nova_compute[250018]: 2026-01-20 15:41:27.681 250022 INFO nova.virt.libvirt.driver [-] [instance: b7713a52-d016-43a7-824f-d4b6098efd0d] Instance destroyed successfully.
Jan 20 15:41:27 compute-0 nova_compute[250018]: 2026-01-20 15:41:27.682 250022 DEBUG nova.objects.instance [None req-4d6a51c3-aa67-485e-8494-a25c7f87c7d1 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Lazy-loading 'resources' on Instance uuid b7713a52-d016-43a7-824f-d4b6098efd0d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:41:27 compute-0 neutron-haproxy-ovnmeta-e8c67e63-02c8-44ef-812a-e2391a4e1afe[396299]: [NOTICE]   (396303) : haproxy version is 2.8.14-c23fe91
Jan 20 15:41:27 compute-0 neutron-haproxy-ovnmeta-e8c67e63-02c8-44ef-812a-e2391a4e1afe[396299]: [NOTICE]   (396303) : path to executable is /usr/sbin/haproxy
Jan 20 15:41:27 compute-0 neutron-haproxy-ovnmeta-e8c67e63-02c8-44ef-812a-e2391a4e1afe[396299]: [ALERT]    (396303) : Current worker (396305) exited with code 143 (Terminated)
Jan 20 15:41:27 compute-0 neutron-haproxy-ovnmeta-e8c67e63-02c8-44ef-812a-e2391a4e1afe[396299]: [WARNING]  (396303) : All workers exited. Exiting... (0)
Jan 20 15:41:27 compute-0 systemd[1]: libpod-bf645079fab7d07b4d4447e1276e48a7778223a00bdcacf0c9220aa866b71591.scope: Deactivated successfully.
Jan 20 15:41:27 compute-0 podman[397668]: 2026-01-20 15:41:27.701454827 +0000 UTC m=+0.046272225 container died bf645079fab7d07b4d4447e1276e48a7778223a00bdcacf0c9220aa866b71591 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e8c67e63-02c8-44ef-812a-e2391a4e1afe, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:41:27 compute-0 ceph-mon[74360]: pgmap v3578: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 24 KiB/s rd, 6.5 KiB/s wr, 28 op/s
Jan 20 15:41:27 compute-0 nova_compute[250018]: 2026-01-20 15:41:27.705 250022 DEBUG nova.virt.libvirt.vif [None req-4d6a51c3-aa67-485e-8494-a25c7f87c7d1 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T15:39:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-342561427-access_point-2025775659',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-342561427-access_point-2025775659',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-342561427-acc',id=216,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHTC3JVSL9L2cjPYrp5GPk4BJMNU1U1jZv3sq/jJjTIgWHt20GA+MFjDYv7hWBtJxAW0nzGAa/cchqNq+8We8aBg4dy8xIHQ+Z9Mh/+Vq9yeWt6/Cd2vkDS34jF534uifQ==',key_name='tempest-TestSecurityGroupsBasicOps-1945297385',keypairs=<?>,launch_index=0,launched_at=2026-01-20T15:39:55Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='728662ec7f654a3fb2e53a90b8707d7e',ramdisk_id='',reservation_id='r-l4ae3vnu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-342561427',owner_user_name='tempest-TestSecurityGroupsBasicOps-342561427-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T15:39:55Z,user_data=None,user_id='5985ef736503499a9f1d734cabc33ce5',uuid=b7713a52-d016-43a7-824f-d4b6098efd0d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8daf5d78-feec-462d-a8f7-f15e4b8e2a16", "address": "fa:16:3e:22:38:1d", "network": {"id": "e8c67e63-02c8-44ef-812a-e2391a4e1afe", "bridge": "br-int", "label": "tempest-network-smoke--210660804", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "728662ec7f654a3fb2e53a90b8707d7e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8daf5d78-fe", "ovs_interfaceid": "8daf5d78-feec-462d-a8f7-f15e4b8e2a16", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 15:41:27 compute-0 nova_compute[250018]: 2026-01-20 15:41:27.706 250022 DEBUG nova.network.os_vif_util [None req-4d6a51c3-aa67-485e-8494-a25c7f87c7d1 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Converting VIF {"id": "8daf5d78-feec-462d-a8f7-f15e4b8e2a16", "address": "fa:16:3e:22:38:1d", "network": {"id": "e8c67e63-02c8-44ef-812a-e2391a4e1afe", "bridge": "br-int", "label": "tempest-network-smoke--210660804", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "728662ec7f654a3fb2e53a90b8707d7e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8daf5d78-fe", "ovs_interfaceid": "8daf5d78-feec-462d-a8f7-f15e4b8e2a16", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:41:27 compute-0 nova_compute[250018]: 2026-01-20 15:41:27.706 250022 DEBUG nova.network.os_vif_util [None req-4d6a51c3-aa67-485e-8494-a25c7f87c7d1 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:22:38:1d,bridge_name='br-int',has_traffic_filtering=True,id=8daf5d78-feec-462d-a8f7-f15e4b8e2a16,network=Network(e8c67e63-02c8-44ef-812a-e2391a4e1afe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8daf5d78-fe') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:41:27 compute-0 nova_compute[250018]: 2026-01-20 15:41:27.707 250022 DEBUG os_vif [None req-4d6a51c3-aa67-485e-8494-a25c7f87c7d1 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:22:38:1d,bridge_name='br-int',has_traffic_filtering=True,id=8daf5d78-feec-462d-a8f7-f15e4b8e2a16,network=Network(e8c67e63-02c8-44ef-812a-e2391a4e1afe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8daf5d78-fe') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 15:41:27 compute-0 nova_compute[250018]: 2026-01-20 15:41:27.708 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:41:27 compute-0 nova_compute[250018]: 2026-01-20 15:41:27.708 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8daf5d78-fe, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:41:27 compute-0 nova_compute[250018]: 2026-01-20 15:41:27.710 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:41:27 compute-0 nova_compute[250018]: 2026-01-20 15:41:27.711 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:41:27 compute-0 nova_compute[250018]: 2026-01-20 15:41:27.714 250022 INFO os_vif [None req-4d6a51c3-aa67-485e-8494-a25c7f87c7d1 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:22:38:1d,bridge_name='br-int',has_traffic_filtering=True,id=8daf5d78-feec-462d-a8f7-f15e4b8e2a16,network=Network(e8c67e63-02c8-44ef-812a-e2391a4e1afe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8daf5d78-fe')
Jan 20 15:41:27 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-bf645079fab7d07b4d4447e1276e48a7778223a00bdcacf0c9220aa866b71591-userdata-shm.mount: Deactivated successfully.
Jan 20 15:41:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-0893dc1f354408dc94a0a16d6fbb73a8bfd7bb1e1dd6d59f1f08324eafad1548-merged.mount: Deactivated successfully.
Jan 20 15:41:27 compute-0 podman[397668]: 2026-01-20 15:41:27.741690937 +0000 UTC m=+0.086508335 container cleanup bf645079fab7d07b4d4447e1276e48a7778223a00bdcacf0c9220aa866b71591 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e8c67e63-02c8-44ef-812a-e2391a4e1afe, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 15:41:27 compute-0 systemd[1]: libpod-conmon-bf645079fab7d07b4d4447e1276e48a7778223a00bdcacf0c9220aa866b71591.scope: Deactivated successfully.
Jan 20 15:41:27 compute-0 podman[397722]: 2026-01-20 15:41:27.812642049 +0000 UTC m=+0.044179428 container remove bf645079fab7d07b4d4447e1276e48a7778223a00bdcacf0c9220aa866b71591 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e8c67e63-02c8-44ef-812a-e2391a4e1afe, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team)
Jan 20 15:41:27 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:41:27.818 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[926a4544-42b2-4727-b9da-448e3fed60f8]: (4, ('Tue Jan 20 03:41:27 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-e8c67e63-02c8-44ef-812a-e2391a4e1afe (bf645079fab7d07b4d4447e1276e48a7778223a00bdcacf0c9220aa866b71591)\nbf645079fab7d07b4d4447e1276e48a7778223a00bdcacf0c9220aa866b71591\nTue Jan 20 03:41:27 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-e8c67e63-02c8-44ef-812a-e2391a4e1afe (bf645079fab7d07b4d4447e1276e48a7778223a00bdcacf0c9220aa866b71591)\nbf645079fab7d07b4d4447e1276e48a7778223a00bdcacf0c9220aa866b71591\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:41:27 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:41:27.819 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[d2724820-75a8-4ef6-89ce-d83d9380d5cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:41:27 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:41:27.820 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape8c67e63-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:41:27 compute-0 kernel: tape8c67e63-00: left promiscuous mode
Jan 20 15:41:27 compute-0 nova_compute[250018]: 2026-01-20 15:41:27.824 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:41:27 compute-0 nova_compute[250018]: 2026-01-20 15:41:27.835 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:41:27 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:41:27.837 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[2ff4e59f-94b4-4d74-b9dc-d8648739894a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:41:27 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:41:27.852 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[c3ccc0ec-ed80-40e0-a05a-0e52a8dfc905]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:41:27 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:41:27.853 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[b988f295-bdd1-4596-a492-4ef1a18523d1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:41:27 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:41:27.868 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[bd47e81a-7e62-4486-84de-cc928d203f76]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 953538, 'reachable_time': 33097, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 397740, 'error': None, 'target': 'ovnmeta-e8c67e63-02c8-44ef-812a-e2391a4e1afe', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:41:27 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:41:27.871 160517 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e8c67e63-02c8-44ef-812a-e2391a4e1afe deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 15:41:27 compute-0 systemd[1]: run-netns-ovnmeta\x2de8c67e63\x2d02c8\x2d44ef\x2d812a\x2de2391a4e1afe.mount: Deactivated successfully.
Jan 20 15:41:27 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:41:27.872 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[49b23844-2ea8-4e62-990e-6e145374715a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:41:28 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:41:28 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:41:28 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:41:28.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:41:28 compute-0 nova_compute[250018]: 2026-01-20 15:41:28.080 250022 INFO nova.virt.libvirt.driver [None req-4d6a51c3-aa67-485e-8494-a25c7f87c7d1 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: b7713a52-d016-43a7-824f-d4b6098efd0d] Deleting instance files /var/lib/nova/instances/b7713a52-d016-43a7-824f-d4b6098efd0d_del
Jan 20 15:41:28 compute-0 nova_compute[250018]: 2026-01-20 15:41:28.080 250022 INFO nova.virt.libvirt.driver [None req-4d6a51c3-aa67-485e-8494-a25c7f87c7d1 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: b7713a52-d016-43a7-824f-d4b6098efd0d] Deletion of /var/lib/nova/instances/b7713a52-d016-43a7-824f-d4b6098efd0d_del complete
Jan 20 15:41:28 compute-0 nova_compute[250018]: 2026-01-20 15:41:28.197 250022 INFO nova.compute.manager [None req-4d6a51c3-aa67-485e-8494-a25c7f87c7d1 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: b7713a52-d016-43a7-824f-d4b6098efd0d] Took 0.75 seconds to destroy the instance on the hypervisor.
Jan 20 15:41:28 compute-0 nova_compute[250018]: 2026-01-20 15:41:28.198 250022 DEBUG oslo.service.loopingcall [None req-4d6a51c3-aa67-485e-8494-a25c7f87c7d1 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 15:41:28 compute-0 nova_compute[250018]: 2026-01-20 15:41:28.198 250022 DEBUG nova.compute.manager [-] [instance: b7713a52-d016-43a7-824f-d4b6098efd0d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 15:41:28 compute-0 nova_compute[250018]: 2026-01-20 15:41:28.198 250022 DEBUG nova.network.neutron [-] [instance: b7713a52-d016-43a7-824f-d4b6098efd0d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 15:41:28 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:41:28 compute-0 nova_compute[250018]: 2026-01-20 15:41:28.819 250022 DEBUG nova.network.neutron [-] [instance: b7713a52-d016-43a7-824f-d4b6098efd0d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:41:28 compute-0 nova_compute[250018]: 2026-01-20 15:41:28.838 250022 INFO nova.compute.manager [-] [instance: b7713a52-d016-43a7-824f-d4b6098efd0d] Took 0.64 seconds to deallocate network for instance.
Jan 20 15:41:28 compute-0 nova_compute[250018]: 2026-01-20 15:41:28.917 250022 DEBUG nova.compute.manager [req-6b793485-498a-4f4f-9418-4180417cdee1 req-fdf1ffc0-7128-43ca-b417-4a0591b526b8 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b7713a52-d016-43a7-824f-d4b6098efd0d] Received event network-vif-deleted-8daf5d78-feec-462d-a8f7-f15e4b8e2a16 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:41:28 compute-0 nova_compute[250018]: 2026-01-20 15:41:28.919 250022 DEBUG oslo_concurrency.lockutils [None req-4d6a51c3-aa67-485e-8494-a25c7f87c7d1 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:41:28 compute-0 nova_compute[250018]: 2026-01-20 15:41:28.919 250022 DEBUG oslo_concurrency.lockutils [None req-4d6a51c3-aa67-485e-8494-a25c7f87c7d1 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:41:28 compute-0 nova_compute[250018]: 2026-01-20 15:41:28.955 250022 DEBUG oslo_concurrency.processutils [None req-4d6a51c3-aa67-485e-8494-a25c7f87c7d1 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:41:28 compute-0 nova_compute[250018]: 2026-01-20 15:41:28.984 250022 DEBUG nova.network.neutron [req-2daae7a3-2207-47c0-9b10-f055df4589fc req-939905ff-c49e-4901-a178-d600aed57d9f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b7713a52-d016-43a7-824f-d4b6098efd0d] Updated VIF entry in instance network info cache for port 8daf5d78-feec-462d-a8f7-f15e4b8e2a16. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 15:41:28 compute-0 nova_compute[250018]: 2026-01-20 15:41:28.985 250022 DEBUG nova.network.neutron [req-2daae7a3-2207-47c0-9b10-f055df4589fc req-939905ff-c49e-4901-a178-d600aed57d9f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b7713a52-d016-43a7-824f-d4b6098efd0d] Updating instance_info_cache with network_info: [{"id": "8daf5d78-feec-462d-a8f7-f15e4b8e2a16", "address": "fa:16:3e:22:38:1d", "network": {"id": "e8c67e63-02c8-44ef-812a-e2391a4e1afe", "bridge": "br-int", "label": "tempest-network-smoke--210660804", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "728662ec7f654a3fb2e53a90b8707d7e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8daf5d78-fe", "ovs_interfaceid": "8daf5d78-feec-462d-a8f7-f15e4b8e2a16", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:41:29 compute-0 nova_compute[250018]: 2026-01-20 15:41:29.007 250022 DEBUG oslo_concurrency.lockutils [req-2daae7a3-2207-47c0-9b10-f055df4589fc req-939905ff-c49e-4901-a178-d600aed57d9f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-b7713a52-d016-43a7-824f-d4b6098efd0d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:41:29 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:41:29 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:41:29 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3928735627' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:41:29 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:41:29 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:41:29.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:41:29 compute-0 nova_compute[250018]: 2026-01-20 15:41:29.359 250022 DEBUG oslo_concurrency.processutils [None req-4d6a51c3-aa67-485e-8494-a25c7f87c7d1 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.404s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:41:29 compute-0 nova_compute[250018]: 2026-01-20 15:41:29.364 250022 DEBUG nova.compute.provider_tree [None req-4d6a51c3-aa67-485e-8494-a25c7f87c7d1 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:41:29 compute-0 nova_compute[250018]: 2026-01-20 15:41:29.383 250022 DEBUG nova.scheduler.client.report [None req-4d6a51c3-aa67-485e-8494-a25c7f87c7d1 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:41:29 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3928735627' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:41:29 compute-0 nova_compute[250018]: 2026-01-20 15:41:29.404 250022 DEBUG oslo_concurrency.lockutils [None req-4d6a51c3-aa67-485e-8494-a25c7f87c7d1 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.484s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:41:29 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3579: 321 pgs: 321 active+clean; 178 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 11 KiB/s rd, 6.2 KiB/s wr, 19 op/s
Jan 20 15:41:29 compute-0 nova_compute[250018]: 2026-01-20 15:41:29.444 250022 INFO nova.scheduler.client.report [None req-4d6a51c3-aa67-485e-8494-a25c7f87c7d1 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Deleted allocations for instance b7713a52-d016-43a7-824f-d4b6098efd0d
Jan 20 15:41:29 compute-0 nova_compute[250018]: 2026-01-20 15:41:29.450 250022 DEBUG nova.compute.manager [req-0f2d7b02-a443-4fb6-a8d7-f9aea268be63 req-3c6a882a-f53c-45c0-8e3d-3b06649678a4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b7713a52-d016-43a7-824f-d4b6098efd0d] Received event network-vif-unplugged-8daf5d78-feec-462d-a8f7-f15e4b8e2a16 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:41:29 compute-0 nova_compute[250018]: 2026-01-20 15:41:29.450 250022 DEBUG oslo_concurrency.lockutils [req-0f2d7b02-a443-4fb6-a8d7-f9aea268be63 req-3c6a882a-f53c-45c0-8e3d-3b06649678a4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "b7713a52-d016-43a7-824f-d4b6098efd0d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:41:29 compute-0 nova_compute[250018]: 2026-01-20 15:41:29.451 250022 DEBUG oslo_concurrency.lockutils [req-0f2d7b02-a443-4fb6-a8d7-f9aea268be63 req-3c6a882a-f53c-45c0-8e3d-3b06649678a4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "b7713a52-d016-43a7-824f-d4b6098efd0d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:41:29 compute-0 nova_compute[250018]: 2026-01-20 15:41:29.451 250022 DEBUG oslo_concurrency.lockutils [req-0f2d7b02-a443-4fb6-a8d7-f9aea268be63 req-3c6a882a-f53c-45c0-8e3d-3b06649678a4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "b7713a52-d016-43a7-824f-d4b6098efd0d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:41:29 compute-0 nova_compute[250018]: 2026-01-20 15:41:29.451 250022 DEBUG nova.compute.manager [req-0f2d7b02-a443-4fb6-a8d7-f9aea268be63 req-3c6a882a-f53c-45c0-8e3d-3b06649678a4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b7713a52-d016-43a7-824f-d4b6098efd0d] No waiting events found dispatching network-vif-unplugged-8daf5d78-feec-462d-a8f7-f15e4b8e2a16 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:41:29 compute-0 nova_compute[250018]: 2026-01-20 15:41:29.451 250022 WARNING nova.compute.manager [req-0f2d7b02-a443-4fb6-a8d7-f9aea268be63 req-3c6a882a-f53c-45c0-8e3d-3b06649678a4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b7713a52-d016-43a7-824f-d4b6098efd0d] Received unexpected event network-vif-unplugged-8daf5d78-feec-462d-a8f7-f15e4b8e2a16 for instance with vm_state deleted and task_state None.
Jan 20 15:41:29 compute-0 nova_compute[250018]: 2026-01-20 15:41:29.451 250022 DEBUG nova.compute.manager [req-0f2d7b02-a443-4fb6-a8d7-f9aea268be63 req-3c6a882a-f53c-45c0-8e3d-3b06649678a4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b7713a52-d016-43a7-824f-d4b6098efd0d] Received event network-vif-plugged-8daf5d78-feec-462d-a8f7-f15e4b8e2a16 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:41:29 compute-0 nova_compute[250018]: 2026-01-20 15:41:29.451 250022 DEBUG oslo_concurrency.lockutils [req-0f2d7b02-a443-4fb6-a8d7-f9aea268be63 req-3c6a882a-f53c-45c0-8e3d-3b06649678a4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "b7713a52-d016-43a7-824f-d4b6098efd0d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:41:29 compute-0 nova_compute[250018]: 2026-01-20 15:41:29.452 250022 DEBUG oslo_concurrency.lockutils [req-0f2d7b02-a443-4fb6-a8d7-f9aea268be63 req-3c6a882a-f53c-45c0-8e3d-3b06649678a4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "b7713a52-d016-43a7-824f-d4b6098efd0d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:41:29 compute-0 nova_compute[250018]: 2026-01-20 15:41:29.452 250022 DEBUG oslo_concurrency.lockutils [req-0f2d7b02-a443-4fb6-a8d7-f9aea268be63 req-3c6a882a-f53c-45c0-8e3d-3b06649678a4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "b7713a52-d016-43a7-824f-d4b6098efd0d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:41:29 compute-0 nova_compute[250018]: 2026-01-20 15:41:29.452 250022 DEBUG nova.compute.manager [req-0f2d7b02-a443-4fb6-a8d7-f9aea268be63 req-3c6a882a-f53c-45c0-8e3d-3b06649678a4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b7713a52-d016-43a7-824f-d4b6098efd0d] No waiting events found dispatching network-vif-plugged-8daf5d78-feec-462d-a8f7-f15e4b8e2a16 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:41:29 compute-0 nova_compute[250018]: 2026-01-20 15:41:29.452 250022 WARNING nova.compute.manager [req-0f2d7b02-a443-4fb6-a8d7-f9aea268be63 req-3c6a882a-f53c-45c0-8e3d-3b06649678a4 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: b7713a52-d016-43a7-824f-d4b6098efd0d] Received unexpected event network-vif-plugged-8daf5d78-feec-462d-a8f7-f15e4b8e2a16 for instance with vm_state deleted and task_state None.
Jan 20 15:41:29 compute-0 nova_compute[250018]: 2026-01-20 15:41:29.504 250022 DEBUG oslo_concurrency.lockutils [None req-4d6a51c3-aa67-485e-8494-a25c7f87c7d1 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Lock "b7713a52-d016-43a7-824f-d4b6098efd0d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.064s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:41:30 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:41:30 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:41:30 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:41:30.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:41:30 compute-0 ceph-mon[74360]: pgmap v3579: 321 pgs: 321 active+clean; 178 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 11 KiB/s rd, 6.2 KiB/s wr, 19 op/s
Jan 20 15:41:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:41:30.814 160071 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:41:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:41:30.814 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:41:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:41:30.815 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:41:31 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:41:31 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:41:31 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:41:31.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:41:31 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3580: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 24 KiB/s rd, 4.1 KiB/s wr, 37 op/s
Jan 20 15:41:32 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:41:32 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:41:32 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:41:32.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:41:32 compute-0 nova_compute[250018]: 2026-01-20 15:41:32.551 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:41:32 compute-0 ceph-mon[74360]: pgmap v3580: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 24 KiB/s rd, 4.1 KiB/s wr, 37 op/s
Jan 20 15:41:32 compute-0 sudo[397767]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:41:32 compute-0 sudo[397767]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:41:32 compute-0 sudo[397767]: pam_unix(sudo:session): session closed for user root
Jan 20 15:41:32 compute-0 sudo[397792]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:41:32 compute-0 sudo[397792]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:41:32 compute-0 sudo[397792]: pam_unix(sudo:session): session closed for user root
Jan 20 15:41:32 compute-0 nova_compute[250018]: 2026-01-20 15:41:32.710 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:41:32 compute-0 nova_compute[250018]: 2026-01-20 15:41:32.719 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:41:32 compute-0 nova_compute[250018]: 2026-01-20 15:41:32.719 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:41:33 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:41:33 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:41:33 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:41:33.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:41:33 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3581: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 20 15:41:33 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:41:33 compute-0 nova_compute[250018]: 2026-01-20 15:41:33.517 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:41:33 compute-0 nova_compute[250018]: 2026-01-20 15:41:33.592 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:41:34 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:41:34 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:41:34 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:41:34.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:41:34 compute-0 ceph-mon[74360]: pgmap v3581: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 20 15:41:35 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:41:35 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:41:35 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:41:35.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:41:35 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3582: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 20 15:41:36 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:41:36 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:41:36 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:41:36.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:41:36 compute-0 ceph-mon[74360]: pgmap v3582: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 20 15:41:37 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:41:37 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:41:37 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:41:37.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:41:37 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3583: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 20 15:41:37 compute-0 nova_compute[250018]: 2026-01-20 15:41:37.552 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:41:37 compute-0 nova_compute[250018]: 2026-01-20 15:41:37.711 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:41:38 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:41:38 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:41:38 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:41:38.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:41:38 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:41:39 compute-0 nova_compute[250018]: 2026-01-20 15:41:39.044 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:41:39 compute-0 ceph-mon[74360]: pgmap v3583: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 20 15:41:39 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:41:39 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:41:39 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:41:39.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:41:39 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3584: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 20 15:41:40 compute-0 nova_compute[250018]: 2026-01-20 15:41:40.049 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:41:40 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:41:40 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:41:40 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:41:40.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:41:40 compute-0 ceph-mon[74360]: pgmap v3584: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 20 15:41:41 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:41:41 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:41:41 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:41:41.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:41:41 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3585: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 20 15:41:41 compute-0 ceph-mon[74360]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #183. Immutable memtables: 0.
Jan 20 15:41:41 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:41:41.517627) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 15:41:41 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:856] [default] [JOB 113] Flushing memtable with next log file: 183
Jan 20 15:41:41 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768923701517744, "job": 113, "event": "flush_started", "num_memtables": 1, "num_entries": 1222, "num_deletes": 251, "total_data_size": 2029525, "memory_usage": 2055952, "flush_reason": "Manual Compaction"}
Jan 20 15:41:41 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:885] [default] [JOB 113] Level-0 flush table #184: started
Jan 20 15:41:41 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768923701531877, "cf_name": "default", "job": 113, "event": "table_file_creation", "file_number": 184, "file_size": 2006340, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 79219, "largest_seqno": 80440, "table_properties": {"data_size": 2000478, "index_size": 3192, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1605, "raw_key_size": 12603, "raw_average_key_size": 20, "raw_value_size": 1988788, "raw_average_value_size": 3176, "num_data_blocks": 139, "num_entries": 626, "num_filter_entries": 626, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768923588, "oldest_key_time": 1768923588, "file_creation_time": 1768923701, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1688fb6f-f397-4aac-a0e8-874ff91d97a7", "db_session_id": "5V3N2TVXYZBCXP55EZHK", "orig_file_number": 184, "seqno_to_time_mapping": "N/A"}}
Jan 20 15:41:41 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 113] Flush lasted 14273 microseconds, and 6631 cpu microseconds.
Jan 20 15:41:41 compute-0 ceph-mon[74360]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 15:41:41 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:41:41.531933) [db/flush_job.cc:967] [default] [JOB 113] Level-0 flush table #184: 2006340 bytes OK
Jan 20 15:41:41 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:41:41.531953) [db/memtable_list.cc:519] [default] Level-0 commit table #184 started
Jan 20 15:41:41 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:41:41.533904) [db/memtable_list.cc:722] [default] Level-0 commit table #184: memtable #1 done
Jan 20 15:41:41 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:41:41.533920) EVENT_LOG_v1 {"time_micros": 1768923701533915, "job": 113, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 15:41:41 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:41:41.533938) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 15:41:41 compute-0 ceph-mon[74360]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 113] Try to delete WAL files size 2024140, prev total WAL file size 2024140, number of live WAL files 2.
Jan 20 15:41:41 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000180.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:41:41 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:41:41.534691) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037353330' seq:72057594037927935, type:22 .. '7061786F730037373832' seq:0, type:0; will stop at (end)
Jan 20 15:41:41 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 114] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 15:41:41 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 113 Base level 0, inputs: [184(1959KB)], [182(11MB)]
Jan 20 15:41:41 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768923701534778, "job": 114, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [184], "files_L6": [182], "score": -1, "input_data_size": 14584529, "oldest_snapshot_seqno": -1}
Jan 20 15:41:41 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 114] Generated table #185: 10587 keys, 12624543 bytes, temperature: kUnknown
Jan 20 15:41:41 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768923701621969, "cf_name": "default", "job": 114, "event": "table_file_creation", "file_number": 185, "file_size": 12624543, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12557760, "index_size": 39214, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 26501, "raw_key_size": 280255, "raw_average_key_size": 26, "raw_value_size": 12374018, "raw_average_value_size": 1168, "num_data_blocks": 1485, "num_entries": 10587, "num_filter_entries": 10587, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768917305, "oldest_key_time": 0, "file_creation_time": 1768923701, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1688fb6f-f397-4aac-a0e8-874ff91d97a7", "db_session_id": "5V3N2TVXYZBCXP55EZHK", "orig_file_number": 185, "seqno_to_time_mapping": "N/A"}}
Jan 20 15:41:41 compute-0 ceph-mon[74360]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 15:41:41 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:41:41.622209) [db/compaction/compaction_job.cc:1663] [default] [JOB 114] Compacted 1@0 + 1@6 files to L6 => 12624543 bytes
Jan 20 15:41:41 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:41:41.623817) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 167.1 rd, 144.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.9, 12.0 +0.0 blob) out(12.0 +0.0 blob), read-write-amplify(13.6) write-amplify(6.3) OK, records in: 11106, records dropped: 519 output_compression: NoCompression
Jan 20 15:41:41 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:41:41.623832) EVENT_LOG_v1 {"time_micros": 1768923701623825, "job": 114, "event": "compaction_finished", "compaction_time_micros": 87257, "compaction_time_cpu_micros": 52137, "output_level": 6, "num_output_files": 1, "total_output_size": 12624543, "num_input_records": 11106, "num_output_records": 10587, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 15:41:41 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000184.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:41:41 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768923701624289, "job": 114, "event": "table_file_deletion", "file_number": 184}
Jan 20 15:41:41 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000182.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:41:41 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768923701626402, "job": 114, "event": "table_file_deletion", "file_number": 182}
Jan 20 15:41:41 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:41:41.534507) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:41:41 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:41:41.626491) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:41:41 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:41:41.626499) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:41:41 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:41:41.626502) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:41:41 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:41:41.626505) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:41:41 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:41:41.626507) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:41:42 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:41:42 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:41:42 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:41:42.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:41:42 compute-0 ceph-mon[74360]: pgmap v3585: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 20 15:41:42 compute-0 nova_compute[250018]: 2026-01-20 15:41:42.586 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:41:42 compute-0 nova_compute[250018]: 2026-01-20 15:41:42.680 250022 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768923687.6787376, b7713a52-d016-43a7-824f-d4b6098efd0d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:41:42 compute-0 nova_compute[250018]: 2026-01-20 15:41:42.680 250022 INFO nova.compute.manager [-] [instance: b7713a52-d016-43a7-824f-d4b6098efd0d] VM Stopped (Lifecycle Event)
Jan 20 15:41:42 compute-0 nova_compute[250018]: 2026-01-20 15:41:42.710 250022 DEBUG nova.compute.manager [None req-0d047db6-84d2-472c-84a0-1004c5e42289 - - - - - -] [instance: b7713a52-d016-43a7-824f-d4b6098efd0d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:41:42 compute-0 nova_compute[250018]: 2026-01-20 15:41:42.713 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:41:43 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:41:43 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:41:43 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:41:43.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:41:43 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3586: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:41:43 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:41:44 compute-0 nova_compute[250018]: 2026-01-20 15:41:44.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:41:44 compute-0 nova_compute[250018]: 2026-01-20 15:41:44.052 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 15:41:44 compute-0 nova_compute[250018]: 2026-01-20 15:41:44.052 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 15:41:44 compute-0 nova_compute[250018]: 2026-01-20 15:41:44.071 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 20 15:41:44 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:41:44 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:41:44 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:41:44.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:41:44 compute-0 ceph-mon[74360]: pgmap v3586: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:41:45 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:41:45 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:41:45 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:41:45.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:41:45 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3587: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:41:45 compute-0 podman[397825]: 2026-01-20 15:41:45.467427801 +0000 UTC m=+0.056719588 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 20 15:41:45 compute-0 podman[397824]: 2026-01-20 15:41:45.526283775 +0000 UTC m=+0.120705880 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 20 15:41:46 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:41:46 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:41:46 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:41:46.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:41:46 compute-0 ceph-mon[74360]: pgmap v3587: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:41:47 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:41:47 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:41:47 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:41:47.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:41:47 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3588: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:41:47 compute-0 nova_compute[250018]: 2026-01-20 15:41:47.589 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:41:47 compute-0 nova_compute[250018]: 2026-01-20 15:41:47.714 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:41:47 compute-0 sudo[397869]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:41:47 compute-0 sudo[397869]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:41:47 compute-0 sudo[397869]: pam_unix(sudo:session): session closed for user root
Jan 20 15:41:48 compute-0 sudo[397894]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:41:48 compute-0 sudo[397894]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:41:48 compute-0 sudo[397894]: pam_unix(sudo:session): session closed for user root
Jan 20 15:41:48 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:41:48 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:41:48 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:41:48.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:41:48 compute-0 sudo[397919]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:41:48 compute-0 sudo[397919]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:41:48 compute-0 sudo[397919]: pam_unix(sudo:session): session closed for user root
Jan 20 15:41:48 compute-0 sudo[397944]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 15:41:48 compute-0 sudo[397944]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:41:48 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:41:48 compute-0 ceph-mon[74360]: pgmap v3588: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:41:48 compute-0 sudo[397944]: pam_unix(sudo:session): session closed for user root
Jan 20 15:41:48 compute-0 sshd-session[397867]: Invalid user guest from 134.122.57.138 port 49242
Jan 20 15:41:48 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Jan 20 15:41:48 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 20 15:41:48 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 15:41:48 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:41:48 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 20 15:41:48 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 15:41:48 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 20 15:41:48 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:41:48 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 4c362bd2-3ca8-4c8c-9054-8a88e0e28f1c does not exist
Jan 20 15:41:48 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 8989f093-823d-47b3-aa2a-739c07aa6ba4 does not exist
Jan 20 15:41:48 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 60fe0772-454b-4840-a99f-fddc2717bce0 does not exist
Jan 20 15:41:48 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 20 15:41:48 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 15:41:48 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 20 15:41:48 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 15:41:48 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 15:41:48 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:41:49 compute-0 sudo[398002]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:41:49 compute-0 sudo[398002]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:41:49 compute-0 sudo[398002]: pam_unix(sudo:session): session closed for user root
Jan 20 15:41:49 compute-0 sshd-session[397867]: Connection closed by invalid user guest 134.122.57.138 port 49242 [preauth]
Jan 20 15:41:49 compute-0 sudo[398027]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:41:49 compute-0 sudo[398027]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:41:49 compute-0 sudo[398027]: pam_unix(sudo:session): session closed for user root
Jan 20 15:41:49 compute-0 sudo[398052]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:41:49 compute-0 sudo[398052]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:41:49 compute-0 sudo[398052]: pam_unix(sudo:session): session closed for user root
Jan 20 15:41:49 compute-0 sudo[398077]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 15:41:49 compute-0 sudo[398077]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:41:49 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:41:49 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:41:49 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:41:49.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:41:49 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3589: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:41:49 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 20 15:41:49 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:41:49 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 15:41:49 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:41:49 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 15:41:49 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 15:41:49 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:41:49 compute-0 podman[398143]: 2026-01-20 15:41:49.645756445 +0000 UTC m=+0.054170938 container create aa6bce6bd4801cd801486fa37fd6a5c55afae88aefdf38d464360ee0c2bccc60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_cannon, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 15:41:49 compute-0 systemd[1]: Started libpod-conmon-aa6bce6bd4801cd801486fa37fd6a5c55afae88aefdf38d464360ee0c2bccc60.scope.
Jan 20 15:41:49 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:41:49 compute-0 podman[398143]: 2026-01-20 15:41:49.630444071 +0000 UTC m=+0.038858584 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:41:49 compute-0 podman[398143]: 2026-01-20 15:41:49.742459255 +0000 UTC m=+0.150873768 container init aa6bce6bd4801cd801486fa37fd6a5c55afae88aefdf38d464360ee0c2bccc60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_cannon, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 20 15:41:49 compute-0 podman[398143]: 2026-01-20 15:41:49.75115115 +0000 UTC m=+0.159565643 container start aa6bce6bd4801cd801486fa37fd6a5c55afae88aefdf38d464360ee0c2bccc60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_cannon, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 20 15:41:49 compute-0 podman[398143]: 2026-01-20 15:41:49.754565453 +0000 UTC m=+0.162979976 container attach aa6bce6bd4801cd801486fa37fd6a5c55afae88aefdf38d464360ee0c2bccc60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_cannon, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 20 15:41:49 compute-0 agitated_cannon[398159]: 167 167
Jan 20 15:41:49 compute-0 systemd[1]: libpod-aa6bce6bd4801cd801486fa37fd6a5c55afae88aefdf38d464360ee0c2bccc60.scope: Deactivated successfully.
Jan 20 15:41:49 compute-0 podman[398143]: 2026-01-20 15:41:49.75778704 +0000 UTC m=+0.166201543 container died aa6bce6bd4801cd801486fa37fd6a5c55afae88aefdf38d464360ee0c2bccc60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_cannon, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 20 15:41:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-8c4c86eeb5138f1f6c2a5f98672d034f8b3dda0818557caa370f6fc8fc39160f-merged.mount: Deactivated successfully.
Jan 20 15:41:49 compute-0 podman[398143]: 2026-01-20 15:41:49.806747156 +0000 UTC m=+0.215161649 container remove aa6bce6bd4801cd801486fa37fd6a5c55afae88aefdf38d464360ee0c2bccc60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_cannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:41:49 compute-0 systemd[1]: libpod-conmon-aa6bce6bd4801cd801486fa37fd6a5c55afae88aefdf38d464360ee0c2bccc60.scope: Deactivated successfully.
Jan 20 15:41:49 compute-0 podman[398182]: 2026-01-20 15:41:49.96787531 +0000 UTC m=+0.052916264 container create a61d29abcfafbfad77a11ec4205d4231051d6d8f5c89d1840c1af06237b5e0a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_wiles, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 15:41:50 compute-0 systemd[1]: Started libpod-conmon-a61d29abcfafbfad77a11ec4205d4231051d6d8f5c89d1840c1af06237b5e0a6.scope.
Jan 20 15:41:50 compute-0 podman[398182]: 2026-01-20 15:41:49.944356133 +0000 UTC m=+0.029397177 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:41:50 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:41:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b19f47458131e1bf743cf9fd2964b293bb27346fce93e8ad1ffa947e78685d35/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 15:41:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b19f47458131e1bf743cf9fd2964b293bb27346fce93e8ad1ffa947e78685d35/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 15:41:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b19f47458131e1bf743cf9fd2964b293bb27346fce93e8ad1ffa947e78685d35/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 15:41:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b19f47458131e1bf743cf9fd2964b293bb27346fce93e8ad1ffa947e78685d35/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 15:41:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b19f47458131e1bf743cf9fd2964b293bb27346fce93e8ad1ffa947e78685d35/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 15:41:50 compute-0 podman[398182]: 2026-01-20 15:41:50.060862059 +0000 UTC m=+0.145903053 container init a61d29abcfafbfad77a11ec4205d4231051d6d8f5c89d1840c1af06237b5e0a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_wiles, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 15:41:50 compute-0 podman[398182]: 2026-01-20 15:41:50.072022612 +0000 UTC m=+0.157063576 container start a61d29abcfafbfad77a11ec4205d4231051d6d8f5c89d1840c1af06237b5e0a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_wiles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 20 15:41:50 compute-0 podman[398182]: 2026-01-20 15:41:50.075719011 +0000 UTC m=+0.160760015 container attach a61d29abcfafbfad77a11ec4205d4231051d6d8f5c89d1840c1af06237b5e0a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_wiles, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 15:41:50 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:41:50 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:41:50 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:41:50.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:41:50 compute-0 ceph-mon[74360]: pgmap v3589: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:41:50 compute-0 admiring_wiles[398197]: --> passed data devices: 0 physical, 1 LVM
Jan 20 15:41:50 compute-0 admiring_wiles[398197]: --> relative data size: 1.0
Jan 20 15:41:50 compute-0 admiring_wiles[398197]: --> All data devices are unavailable
Jan 20 15:41:50 compute-0 systemd[1]: libpod-a61d29abcfafbfad77a11ec4205d4231051d6d8f5c89d1840c1af06237b5e0a6.scope: Deactivated successfully.
Jan 20 15:41:50 compute-0 podman[398182]: 2026-01-20 15:41:50.863659793 +0000 UTC m=+0.948700757 container died a61d29abcfafbfad77a11ec4205d4231051d6d8f5c89d1840c1af06237b5e0a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_wiles, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 15:41:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-b19f47458131e1bf743cf9fd2964b293bb27346fce93e8ad1ffa947e78685d35-merged.mount: Deactivated successfully.
Jan 20 15:41:50 compute-0 podman[398182]: 2026-01-20 15:41:50.912066784 +0000 UTC m=+0.997107738 container remove a61d29abcfafbfad77a11ec4205d4231051d6d8f5c89d1840c1af06237b5e0a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_wiles, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 20 15:41:50 compute-0 systemd[1]: libpod-conmon-a61d29abcfafbfad77a11ec4205d4231051d6d8f5c89d1840c1af06237b5e0a6.scope: Deactivated successfully.
Jan 20 15:41:50 compute-0 sudo[398077]: pam_unix(sudo:session): session closed for user root
Jan 20 15:41:51 compute-0 sudo[398226]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:41:51 compute-0 sudo[398226]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:41:51 compute-0 sudo[398226]: pam_unix(sudo:session): session closed for user root
Jan 20 15:41:51 compute-0 sudo[398251]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:41:51 compute-0 sudo[398251]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:41:51 compute-0 sudo[398251]: pam_unix(sudo:session): session closed for user root
Jan 20 15:41:51 compute-0 sudo[398276]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:41:51 compute-0 sudo[398276]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:41:51 compute-0 sudo[398276]: pam_unix(sudo:session): session closed for user root
Jan 20 15:41:51 compute-0 sudo[398301]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- lvm list --format json
Jan 20 15:41:51 compute-0 sudo[398301]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:41:51 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:41:51 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:41:51 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:41:51.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:41:51 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3590: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:41:51 compute-0 podman[398368]: 2026-01-20 15:41:51.58902331 +0000 UTC m=+0.051189027 container create 280dbf43f4ff430baf0ec0410a03c92d88c894144e52a5bcbe767a441ad3d0b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_grothendieck, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True)
Jan 20 15:41:51 compute-0 systemd[1]: Started libpod-conmon-280dbf43f4ff430baf0ec0410a03c92d88c894144e52a5bcbe767a441ad3d0b7.scope.
Jan 20 15:41:51 compute-0 podman[398368]: 2026-01-20 15:41:51.564924138 +0000 UTC m=+0.027089895 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:41:51 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:41:51 compute-0 podman[398368]: 2026-01-20 15:41:51.681932927 +0000 UTC m=+0.144098714 container init 280dbf43f4ff430baf0ec0410a03c92d88c894144e52a5bcbe767a441ad3d0b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_grothendieck, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 20 15:41:51 compute-0 podman[398368]: 2026-01-20 15:41:51.693956293 +0000 UTC m=+0.156122010 container start 280dbf43f4ff430baf0ec0410a03c92d88c894144e52a5bcbe767a441ad3d0b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_grothendieck, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 15:41:51 compute-0 podman[398368]: 2026-01-20 15:41:51.697889399 +0000 UTC m=+0.160055156 container attach 280dbf43f4ff430baf0ec0410a03c92d88c894144e52a5bcbe767a441ad3d0b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_grothendieck, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 15:41:51 compute-0 silly_grothendieck[398385]: 167 167
Jan 20 15:41:51 compute-0 systemd[1]: libpod-280dbf43f4ff430baf0ec0410a03c92d88c894144e52a5bcbe767a441ad3d0b7.scope: Deactivated successfully.
Jan 20 15:41:51 compute-0 podman[398368]: 2026-01-20 15:41:51.699854832 +0000 UTC m=+0.162020549 container died 280dbf43f4ff430baf0ec0410a03c92d88c894144e52a5bcbe767a441ad3d0b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_grothendieck, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 15:41:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-f0d532895953ec72d1daf06d434686444f6924b0a3c6dcfa56418e723cd46ea7-merged.mount: Deactivated successfully.
Jan 20 15:41:51 compute-0 podman[398368]: 2026-01-20 15:41:51.738320425 +0000 UTC m=+0.200486182 container remove 280dbf43f4ff430baf0ec0410a03c92d88c894144e52a5bcbe767a441ad3d0b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_grothendieck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3)
Jan 20 15:41:51 compute-0 systemd[1]: libpod-conmon-280dbf43f4ff430baf0ec0410a03c92d88c894144e52a5bcbe767a441ad3d0b7.scope: Deactivated successfully.
Jan 20 15:41:51 compute-0 podman[398408]: 2026-01-20 15:41:51.924575279 +0000 UTC m=+0.049857901 container create 11463e4156a7b06ca8187f49553bd437644255ea9456f61cea0d240cdac64aca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_bose, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True)
Jan 20 15:41:51 compute-0 systemd[1]: Started libpod-conmon-11463e4156a7b06ca8187f49553bd437644255ea9456f61cea0d240cdac64aca.scope.
Jan 20 15:41:51 compute-0 podman[398408]: 2026-01-20 15:41:51.900224789 +0000 UTC m=+0.025507491 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:41:52 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:41:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2a393223a084f09ebdbbd961e7fc9f8c3f6919dcaedad0d76b9b538335ac88a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 15:41:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2a393223a084f09ebdbbd961e7fc9f8c3f6919dcaedad0d76b9b538335ac88a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 15:41:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2a393223a084f09ebdbbd961e7fc9f8c3f6919dcaedad0d76b9b538335ac88a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 15:41:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2a393223a084f09ebdbbd961e7fc9f8c3f6919dcaedad0d76b9b538335ac88a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 15:41:52 compute-0 podman[398408]: 2026-01-20 15:41:52.018471213 +0000 UTC m=+0.143753875 container init 11463e4156a7b06ca8187f49553bd437644255ea9456f61cea0d240cdac64aca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_bose, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 20 15:41:52 compute-0 podman[398408]: 2026-01-20 15:41:52.02576242 +0000 UTC m=+0.151045042 container start 11463e4156a7b06ca8187f49553bd437644255ea9456f61cea0d240cdac64aca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_bose, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 20 15:41:52 compute-0 podman[398408]: 2026-01-20 15:41:52.028771911 +0000 UTC m=+0.154054543 container attach 11463e4156a7b06ca8187f49553bd437644255ea9456f61cea0d240cdac64aca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_bose, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True)
Jan 20 15:41:52 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:41:52 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:41:52 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:41:52.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:41:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:41:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:41:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:41:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:41:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:41:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:41:52 compute-0 nova_compute[250018]: 2026-01-20 15:41:52.628 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:41:52 compute-0 ceph-mon[74360]: pgmap v3590: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:41:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Optimize plan auto_2026-01-20_15:41:52
Jan 20 15:41:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 15:41:52 compute-0 ceph-mgr[74653]: [balancer INFO root] do_upmap
Jan 20 15:41:52 compute-0 ceph-mgr[74653]: [balancer INFO root] pools ['vms', 'default.rgw.control', 'backups', 'default.rgw.log', 'cephfs.cephfs.meta', '.mgr', 'cephfs.cephfs.data', 'volumes', 'images', '.rgw.root', 'default.rgw.meta']
Jan 20 15:41:52 compute-0 ceph-mgr[74653]: [balancer INFO root] prepared 0/10 changes
Jan 20 15:41:52 compute-0 nova_compute[250018]: 2026-01-20 15:41:52.716 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:41:52 compute-0 sudo[398433]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:41:52 compute-0 sudo[398433]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:41:52 compute-0 sudo[398433]: pam_unix(sudo:session): session closed for user root
Jan 20 15:41:52 compute-0 happy_bose[398425]: {
Jan 20 15:41:52 compute-0 happy_bose[398425]:     "0": [
Jan 20 15:41:52 compute-0 happy_bose[398425]:         {
Jan 20 15:41:52 compute-0 happy_bose[398425]:             "devices": [
Jan 20 15:41:52 compute-0 happy_bose[398425]:                 "/dev/loop3"
Jan 20 15:41:52 compute-0 happy_bose[398425]:             ],
Jan 20 15:41:52 compute-0 happy_bose[398425]:             "lv_name": "ceph_lv0",
Jan 20 15:41:52 compute-0 happy_bose[398425]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 15:41:52 compute-0 happy_bose[398425]:             "lv_size": "7511998464",
Jan 20 15:41:52 compute-0 happy_bose[398425]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e399cf45-e6b6-5393-99f1-75c601d3f188,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afc3740a-ccee-46f8-b343-22648cc89376,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 20 15:41:52 compute-0 happy_bose[398425]:             "lv_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 15:41:52 compute-0 happy_bose[398425]:             "name": "ceph_lv0",
Jan 20 15:41:52 compute-0 happy_bose[398425]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 15:41:52 compute-0 happy_bose[398425]:             "tags": {
Jan 20 15:41:52 compute-0 happy_bose[398425]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 15:41:52 compute-0 happy_bose[398425]:                 "ceph.block_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 15:41:52 compute-0 happy_bose[398425]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 15:41:52 compute-0 happy_bose[398425]:                 "ceph.cluster_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 15:41:52 compute-0 happy_bose[398425]:                 "ceph.cluster_name": "ceph",
Jan 20 15:41:52 compute-0 happy_bose[398425]:                 "ceph.crush_device_class": "",
Jan 20 15:41:52 compute-0 happy_bose[398425]:                 "ceph.encrypted": "0",
Jan 20 15:41:52 compute-0 happy_bose[398425]:                 "ceph.osd_fsid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 15:41:52 compute-0 happy_bose[398425]:                 "ceph.osd_id": "0",
Jan 20 15:41:52 compute-0 happy_bose[398425]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 15:41:52 compute-0 happy_bose[398425]:                 "ceph.type": "block",
Jan 20 15:41:52 compute-0 happy_bose[398425]:                 "ceph.vdo": "0"
Jan 20 15:41:52 compute-0 happy_bose[398425]:             },
Jan 20 15:41:52 compute-0 happy_bose[398425]:             "type": "block",
Jan 20 15:41:52 compute-0 happy_bose[398425]:             "vg_name": "ceph_vg0"
Jan 20 15:41:52 compute-0 happy_bose[398425]:         }
Jan 20 15:41:52 compute-0 happy_bose[398425]:     ]
Jan 20 15:41:52 compute-0 happy_bose[398425]: }
Jan 20 15:41:52 compute-0 sudo[398460]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:41:52 compute-0 sudo[398460]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:41:52 compute-0 sudo[398460]: pam_unix(sudo:session): session closed for user root
Jan 20 15:41:52 compute-0 systemd[1]: libpod-11463e4156a7b06ca8187f49553bd437644255ea9456f61cea0d240cdac64aca.scope: Deactivated successfully.
Jan 20 15:41:52 compute-0 podman[398408]: 2026-01-20 15:41:52.818628115 +0000 UTC m=+0.943910747 container died 11463e4156a7b06ca8187f49553bd437644255ea9456f61cea0d240cdac64aca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_bose, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 20 15:41:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-c2a393223a084f09ebdbbd961e7fc9f8c3f6919dcaedad0d76b9b538335ac88a-merged.mount: Deactivated successfully.
Jan 20 15:41:52 compute-0 podman[398408]: 2026-01-20 15:41:52.871703953 +0000 UTC m=+0.996986575 container remove 11463e4156a7b06ca8187f49553bd437644255ea9456f61cea0d240cdac64aca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_bose, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 15:41:52 compute-0 systemd[1]: libpod-conmon-11463e4156a7b06ca8187f49553bd437644255ea9456f61cea0d240cdac64aca.scope: Deactivated successfully.
Jan 20 15:41:52 compute-0 sudo[398301]: pam_unix(sudo:session): session closed for user root
Jan 20 15:41:52 compute-0 nova_compute[250018]: 2026-01-20 15:41:52.962 250022 DEBUG oslo_concurrency.lockutils [None req-d5080f1f-9328-4a2d-bc36-0cfdf312e8f3 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Acquiring lock "858544da-d6c9-46d0-ac10-f36f6813e593" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:41:52 compute-0 nova_compute[250018]: 2026-01-20 15:41:52.964 250022 DEBUG oslo_concurrency.lockutils [None req-d5080f1f-9328-4a2d-bc36-0cfdf312e8f3 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Lock "858544da-d6c9-46d0-ac10-f36f6813e593" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:41:52 compute-0 sudo[398496]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:41:52 compute-0 sudo[398496]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:41:52 compute-0 sudo[398496]: pam_unix(sudo:session): session closed for user root
Jan 20 15:41:52 compute-0 nova_compute[250018]: 2026-01-20 15:41:52.987 250022 DEBUG nova.compute.manager [None req-d5080f1f-9328-4a2d-bc36-0cfdf312e8f3 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: 858544da-d6c9-46d0-ac10-f36f6813e593] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 15:41:53 compute-0 sudo[398521]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:41:53 compute-0 sudo[398521]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:41:53 compute-0 sudo[398521]: pam_unix(sudo:session): session closed for user root
Jan 20 15:41:53 compute-0 nova_compute[250018]: 2026-01-20 15:41:53.067 250022 DEBUG oslo_concurrency.lockutils [None req-d5080f1f-9328-4a2d-bc36-0cfdf312e8f3 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:41:53 compute-0 nova_compute[250018]: 2026-01-20 15:41:53.068 250022 DEBUG oslo_concurrency.lockutils [None req-d5080f1f-9328-4a2d-bc36-0cfdf312e8f3 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:41:53 compute-0 nova_compute[250018]: 2026-01-20 15:41:53.080 250022 DEBUG nova.virt.hardware [None req-d5080f1f-9328-4a2d-bc36-0cfdf312e8f3 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 15:41:53 compute-0 nova_compute[250018]: 2026-01-20 15:41:53.081 250022 INFO nova.compute.claims [None req-d5080f1f-9328-4a2d-bc36-0cfdf312e8f3 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: 858544da-d6c9-46d0-ac10-f36f6813e593] Claim successful on node compute-0.ctlplane.example.com
Jan 20 15:41:53 compute-0 sudo[398546]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:41:53 compute-0 sudo[398546]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:41:53 compute-0 sudo[398546]: pam_unix(sudo:session): session closed for user root
Jan 20 15:41:53 compute-0 sudo[398571]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- raw list --format json
Jan 20 15:41:53 compute-0 sudo[398571]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:41:53 compute-0 nova_compute[250018]: 2026-01-20 15:41:53.218 250022 DEBUG oslo_concurrency.processutils [None req-d5080f1f-9328-4a2d-bc36-0cfdf312e8f3 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:41:53 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:41:53 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:41:53 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:41:53.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:41:53 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3591: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:41:53 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:41:53 compute-0 podman[398654]: 2026-01-20 15:41:53.518574164 +0000 UTC m=+0.053888350 container create 22ff6284e13f9fc21eb540ae01317d7e45551117424832484de280b197ebd066 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_gagarin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 15:41:53 compute-0 systemd[1]: Started libpod-conmon-22ff6284e13f9fc21eb540ae01317d7e45551117424832484de280b197ebd066.scope.
Jan 20 15:41:53 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:41:53 compute-0 podman[398654]: 2026-01-20 15:41:53.497332349 +0000 UTC m=+0.032646565 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:41:53 compute-0 podman[398654]: 2026-01-20 15:41:53.598237372 +0000 UTC m=+0.133551578 container init 22ff6284e13f9fc21eb540ae01317d7e45551117424832484de280b197ebd066 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_gagarin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 20 15:41:53 compute-0 podman[398654]: 2026-01-20 15:41:53.604311006 +0000 UTC m=+0.139625192 container start 22ff6284e13f9fc21eb540ae01317d7e45551117424832484de280b197ebd066 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_gagarin, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 20 15:41:53 compute-0 podman[398654]: 2026-01-20 15:41:53.607487293 +0000 UTC m=+0.142801479 container attach 22ff6284e13f9fc21eb540ae01317d7e45551117424832484de280b197ebd066 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_gagarin, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 20 15:41:53 compute-0 xenodochial_gagarin[398670]: 167 167
Jan 20 15:41:53 compute-0 podman[398654]: 2026-01-20 15:41:53.610502804 +0000 UTC m=+0.145816980 container died 22ff6284e13f9fc21eb540ae01317d7e45551117424832484de280b197ebd066 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_gagarin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 20 15:41:53 compute-0 systemd[1]: libpod-22ff6284e13f9fc21eb540ae01317d7e45551117424832484de280b197ebd066.scope: Deactivated successfully.
Jan 20 15:41:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-196fe05e222c83a3052681821362188b4663a6b950fc96f1413ed66dc2cf8c0c-merged.mount: Deactivated successfully.
Jan 20 15:41:53 compute-0 podman[398654]: 2026-01-20 15:41:53.647810054 +0000 UTC m=+0.183124250 container remove 22ff6284e13f9fc21eb540ae01317d7e45551117424832484de280b197ebd066 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_gagarin, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Jan 20 15:41:53 compute-0 systemd[1]: libpod-conmon-22ff6284e13f9fc21eb540ae01317d7e45551117424832484de280b197ebd066.scope: Deactivated successfully.
Jan 20 15:41:53 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:41:53 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2109221212' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:41:53 compute-0 nova_compute[250018]: 2026-01-20 15:41:53.689 250022 DEBUG oslo_concurrency.processutils [None req-d5080f1f-9328-4a2d-bc36-0cfdf312e8f3 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:41:53 compute-0 nova_compute[250018]: 2026-01-20 15:41:53.698 250022 DEBUG nova.compute.provider_tree [None req-d5080f1f-9328-4a2d-bc36-0cfdf312e8f3 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:41:53 compute-0 nova_compute[250018]: 2026-01-20 15:41:53.724 250022 DEBUG nova.scheduler.client.report [None req-d5080f1f-9328-4a2d-bc36-0cfdf312e8f3 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:41:53 compute-0 nova_compute[250018]: 2026-01-20 15:41:53.755 250022 DEBUG oslo_concurrency.lockutils [None req-d5080f1f-9328-4a2d-bc36-0cfdf312e8f3 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.688s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:41:53 compute-0 nova_compute[250018]: 2026-01-20 15:41:53.756 250022 DEBUG nova.compute.manager [None req-d5080f1f-9328-4a2d-bc36-0cfdf312e8f3 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: 858544da-d6c9-46d0-ac10-f36f6813e593] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 15:41:53 compute-0 nova_compute[250018]: 2026-01-20 15:41:53.801 250022 DEBUG nova.compute.manager [None req-d5080f1f-9328-4a2d-bc36-0cfdf312e8f3 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: 858544da-d6c9-46d0-ac10-f36f6813e593] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 20 15:41:53 compute-0 nova_compute[250018]: 2026-01-20 15:41:53.801 250022 DEBUG nova.network.neutron [None req-d5080f1f-9328-4a2d-bc36-0cfdf312e8f3 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: 858544da-d6c9-46d0-ac10-f36f6813e593] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 15:41:53 compute-0 nova_compute[250018]: 2026-01-20 15:41:53.820 250022 INFO nova.virt.libvirt.driver [None req-d5080f1f-9328-4a2d-bc36-0cfdf312e8f3 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: 858544da-d6c9-46d0-ac10-f36f6813e593] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 15:41:53 compute-0 nova_compute[250018]: 2026-01-20 15:41:53.835 250022 DEBUG nova.compute.manager [None req-d5080f1f-9328-4a2d-bc36-0cfdf312e8f3 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: 858544da-d6c9-46d0-ac10-f36f6813e593] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 15:41:53 compute-0 podman[398694]: 2026-01-20 15:41:53.836244168 +0000 UTC m=+0.041714711 container create d7a6a4e40c169c47e1d930e99694ec721dd0459d0aa159ed6bf71309bb8fc519 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_albattani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 20 15:41:53 compute-0 systemd[1]: Started libpod-conmon-d7a6a4e40c169c47e1d930e99694ec721dd0459d0aa159ed6bf71309bb8fc519.scope.
Jan 20 15:41:53 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:41:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/957068de025d90de332de8843af5f5033a2393a0eeb3dd6738edea56516665f5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 15:41:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/957068de025d90de332de8843af5f5033a2393a0eeb3dd6738edea56516665f5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 15:41:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/957068de025d90de332de8843af5f5033a2393a0eeb3dd6738edea56516665f5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 15:41:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/957068de025d90de332de8843af5f5033a2393a0eeb3dd6738edea56516665f5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 15:41:53 compute-0 podman[398694]: 2026-01-20 15:41:53.910256013 +0000 UTC m=+0.115726576 container init d7a6a4e40c169c47e1d930e99694ec721dd0459d0aa159ed6bf71309bb8fc519 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_albattani, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 20 15:41:53 compute-0 podman[398694]: 2026-01-20 15:41:53.816258047 +0000 UTC m=+0.021728620 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:41:53 compute-0 podman[398694]: 2026-01-20 15:41:53.921545859 +0000 UTC m=+0.127016392 container start d7a6a4e40c169c47e1d930e99694ec721dd0459d0aa159ed6bf71309bb8fc519 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_albattani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 20 15:41:53 compute-0 podman[398694]: 2026-01-20 15:41:53.925404114 +0000 UTC m=+0.130874687 container attach d7a6a4e40c169c47e1d930e99694ec721dd0459d0aa159ed6bf71309bb8fc519 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_albattani, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 15:41:53 compute-0 nova_compute[250018]: 2026-01-20 15:41:53.927 250022 DEBUG nova.compute.manager [None req-d5080f1f-9328-4a2d-bc36-0cfdf312e8f3 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: 858544da-d6c9-46d0-ac10-f36f6813e593] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 15:41:53 compute-0 nova_compute[250018]: 2026-01-20 15:41:53.930 250022 DEBUG nova.virt.libvirt.driver [None req-d5080f1f-9328-4a2d-bc36-0cfdf312e8f3 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: 858544da-d6c9-46d0-ac10-f36f6813e593] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 15:41:53 compute-0 nova_compute[250018]: 2026-01-20 15:41:53.931 250022 INFO nova.virt.libvirt.driver [None req-d5080f1f-9328-4a2d-bc36-0cfdf312e8f3 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: 858544da-d6c9-46d0-ac10-f36f6813e593] Creating image(s)
Jan 20 15:41:53 compute-0 nova_compute[250018]: 2026-01-20 15:41:53.972 250022 DEBUG nova.storage.rbd_utils [None req-d5080f1f-9328-4a2d-bc36-0cfdf312e8f3 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] rbd image 858544da-d6c9-46d0-ac10-f36f6813e593_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:41:54 compute-0 nova_compute[250018]: 2026-01-20 15:41:54.012 250022 DEBUG nova.storage.rbd_utils [None req-d5080f1f-9328-4a2d-bc36-0cfdf312e8f3 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] rbd image 858544da-d6c9-46d0-ac10-f36f6813e593_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:41:54 compute-0 nova_compute[250018]: 2026-01-20 15:41:54.048 250022 DEBUG nova.storage.rbd_utils [None req-d5080f1f-9328-4a2d-bc36-0cfdf312e8f3 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] rbd image 858544da-d6c9-46d0-ac10-f36f6813e593_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:41:54 compute-0 nova_compute[250018]: 2026-01-20 15:41:54.052 250022 DEBUG oslo_concurrency.processutils [None req-d5080f1f-9328-4a2d-bc36-0cfdf312e8f3 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:41:54 compute-0 nova_compute[250018]: 2026-01-20 15:41:54.091 250022 DEBUG nova.policy [None req-d5080f1f-9328-4a2d-bc36-0cfdf312e8f3 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5985ef736503499a9f1d734cabc33ce5', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '728662ec7f654a3fb2e53a90b8707d7e', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 20 15:41:54 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:41:54 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:41:54 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:41:54.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:41:54 compute-0 nova_compute[250018]: 2026-01-20 15:41:54.141 250022 DEBUG oslo_concurrency.processutils [None req-d5080f1f-9328-4a2d-bc36-0cfdf312e8f3 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:41:54 compute-0 nova_compute[250018]: 2026-01-20 15:41:54.142 250022 DEBUG oslo_concurrency.lockutils [None req-d5080f1f-9328-4a2d-bc36-0cfdf312e8f3 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Acquiring lock "82d5c1918fd7c974214c7a48c1793a7a82560462" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:41:54 compute-0 nova_compute[250018]: 2026-01-20 15:41:54.143 250022 DEBUG oslo_concurrency.lockutils [None req-d5080f1f-9328-4a2d-bc36-0cfdf312e8f3 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:41:54 compute-0 nova_compute[250018]: 2026-01-20 15:41:54.144 250022 DEBUG oslo_concurrency.lockutils [None req-d5080f1f-9328-4a2d-bc36-0cfdf312e8f3 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Lock "82d5c1918fd7c974214c7a48c1793a7a82560462" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:41:54 compute-0 nova_compute[250018]: 2026-01-20 15:41:54.180 250022 DEBUG nova.storage.rbd_utils [None req-d5080f1f-9328-4a2d-bc36-0cfdf312e8f3 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] rbd image 858544da-d6c9-46d0-ac10-f36f6813e593_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:41:54 compute-0 nova_compute[250018]: 2026-01-20 15:41:54.187 250022 DEBUG oslo_concurrency.processutils [None req-d5080f1f-9328-4a2d-bc36-0cfdf312e8f3 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 858544da-d6c9-46d0-ac10-f36f6813e593_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:41:54 compute-0 nova_compute[250018]: 2026-01-20 15:41:54.511 250022 DEBUG oslo_concurrency.processutils [None req-d5080f1f-9328-4a2d-bc36-0cfdf312e8f3 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/82d5c1918fd7c974214c7a48c1793a7a82560462 858544da-d6c9-46d0-ac10-f36f6813e593_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.324s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:41:54 compute-0 nova_compute[250018]: 2026-01-20 15:41:54.609 250022 DEBUG nova.storage.rbd_utils [None req-d5080f1f-9328-4a2d-bc36-0cfdf312e8f3 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] resizing rbd image 858544da-d6c9-46d0-ac10-f36f6813e593_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 20 15:41:54 compute-0 ceph-mon[74360]: pgmap v3591: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:41:54 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/2109221212' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:41:54 compute-0 nova_compute[250018]: 2026-01-20 15:41:54.727 250022 DEBUG nova.objects.instance [None req-d5080f1f-9328-4a2d-bc36-0cfdf312e8f3 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Lazy-loading 'migration_context' on Instance uuid 858544da-d6c9-46d0-ac10-f36f6813e593 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:41:54 compute-0 silly_albattani[398711]: {
Jan 20 15:41:54 compute-0 silly_albattani[398711]:     "afc3740a-ccee-46f8-b343-22648cc89376": {
Jan 20 15:41:54 compute-0 silly_albattani[398711]:         "ceph_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 15:41:54 compute-0 silly_albattani[398711]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 20 15:41:54 compute-0 silly_albattani[398711]:         "osd_id": 0,
Jan 20 15:41:54 compute-0 silly_albattani[398711]:         "osd_uuid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 15:41:54 compute-0 silly_albattani[398711]:         "type": "bluestore"
Jan 20 15:41:54 compute-0 silly_albattani[398711]:     }
Jan 20 15:41:54 compute-0 silly_albattani[398711]: }
Jan 20 15:41:54 compute-0 systemd[1]: libpod-d7a6a4e40c169c47e1d930e99694ec721dd0459d0aa159ed6bf71309bb8fc519.scope: Deactivated successfully.
Jan 20 15:41:54 compute-0 podman[398899]: 2026-01-20 15:41:54.795596463 +0000 UTC m=+0.027923787 container died d7a6a4e40c169c47e1d930e99694ec721dd0459d0aa159ed6bf71309bb8fc519 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_albattani, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 20 15:41:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-957068de025d90de332de8843af5f5033a2393a0eeb3dd6738edea56516665f5-merged.mount: Deactivated successfully.
Jan 20 15:41:54 compute-0 nova_compute[250018]: 2026-01-20 15:41:54.823 250022 DEBUG nova.virt.libvirt.driver [None req-d5080f1f-9328-4a2d-bc36-0cfdf312e8f3 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: 858544da-d6c9-46d0-ac10-f36f6813e593] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 15:41:54 compute-0 nova_compute[250018]: 2026-01-20 15:41:54.824 250022 DEBUG nova.virt.libvirt.driver [None req-d5080f1f-9328-4a2d-bc36-0cfdf312e8f3 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: 858544da-d6c9-46d0-ac10-f36f6813e593] Ensure instance console log exists: /var/lib/nova/instances/858544da-d6c9-46d0-ac10-f36f6813e593/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 15:41:54 compute-0 nova_compute[250018]: 2026-01-20 15:41:54.824 250022 DEBUG oslo_concurrency.lockutils [None req-d5080f1f-9328-4a2d-bc36-0cfdf312e8f3 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:41:54 compute-0 nova_compute[250018]: 2026-01-20 15:41:54.824 250022 DEBUG oslo_concurrency.lockutils [None req-d5080f1f-9328-4a2d-bc36-0cfdf312e8f3 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:41:54 compute-0 nova_compute[250018]: 2026-01-20 15:41:54.825 250022 DEBUG oslo_concurrency.lockutils [None req-d5080f1f-9328-4a2d-bc36-0cfdf312e8f3 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:41:54 compute-0 podman[398899]: 2026-01-20 15:41:54.856717599 +0000 UTC m=+0.089044913 container remove d7a6a4e40c169c47e1d930e99694ec721dd0459d0aa159ed6bf71309bb8fc519 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_albattani, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 20 15:41:54 compute-0 systemd[1]: libpod-conmon-d7a6a4e40c169c47e1d930e99694ec721dd0459d0aa159ed6bf71309bb8fc519.scope: Deactivated successfully.
Jan 20 15:41:54 compute-0 sudo[398571]: pam_unix(sudo:session): session closed for user root
Jan 20 15:41:54 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 15:41:54 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:41:54 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 15:41:54 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:41:54 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 1e67c75e-3e1b-4497-9e7e-d4a52f418286 does not exist
Jan 20 15:41:54 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 04d6cb07-1032-46c7-a34d-c0a8c9e59e6d does not exist
Jan 20 15:41:54 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 7f58ccad-bf49-4ef1-98d9-6e54b65451cb does not exist
Jan 20 15:41:54 compute-0 sudo[398914]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:41:55 compute-0 sudo[398914]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:41:55 compute-0 sudo[398914]: pam_unix(sudo:session): session closed for user root
Jan 20 15:41:55 compute-0 sudo[398939]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 15:41:55 compute-0 sudo[398939]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:41:55 compute-0 sudo[398939]: pam_unix(sudo:session): session closed for user root
Jan 20 15:41:55 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:41:55 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:41:55 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:41:55.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:41:55 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3592: 321 pgs: 321 active+clean; 137 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 597 B/s rd, 481 KiB/s wr, 2 op/s
Jan 20 15:41:55 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:41:55 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:41:55 compute-0 ceph-mon[74360]: pgmap v3592: 321 pgs: 321 active+clean; 137 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 597 B/s rd, 481 KiB/s wr, 2 op/s
Jan 20 15:41:56 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:41:56 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:41:56 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:41:56.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:41:57 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:41:57 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:41:57 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:41:57.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:41:57 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3593: 321 pgs: 321 active+clean; 158 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 7.6 KiB/s rd, 1.3 MiB/s wr, 15 op/s
Jan 20 15:41:57 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:41:57.542 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=86, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=85) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:41:57 compute-0 nova_compute[250018]: 2026-01-20 15:41:57.543 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:41:57 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:41:57.543 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 15:41:57 compute-0 nova_compute[250018]: 2026-01-20 15:41:57.631 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:41:57 compute-0 nova_compute[250018]: 2026-01-20 15:41:57.685 250022 DEBUG nova.network.neutron [None req-d5080f1f-9328-4a2d-bc36-0cfdf312e8f3 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: 858544da-d6c9-46d0-ac10-f36f6813e593] Successfully created port: ce2e7728-b7f2-40ae-991c-c492681e800b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 20 15:41:57 compute-0 nova_compute[250018]: 2026-01-20 15:41:57.718 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:41:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 15:41:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 15:41:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 15:41:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 15:41:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 15:41:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 15:41:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 15:41:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 15:41:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 15:41:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 15:41:58 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:41:58 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:41:58 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:41:58.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:41:58 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:41:58 compute-0 ceph-mon[74360]: pgmap v3593: 321 pgs: 321 active+clean; 158 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 7.6 KiB/s rd, 1.3 MiB/s wr, 15 op/s
Jan 20 15:41:59 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:41:59 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:41:59 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:41:59.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:41:59 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3594: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:42:00 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:42:00 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:42:00 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:42:00.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:42:00 compute-0 ceph-mon[74360]: pgmap v3594: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:42:01 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:42:01 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:42:01 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:42:01.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:42:01 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3595: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:42:01 compute-0 nova_compute[250018]: 2026-01-20 15:42:01.494 250022 DEBUG nova.network.neutron [None req-d5080f1f-9328-4a2d-bc36-0cfdf312e8f3 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: 858544da-d6c9-46d0-ac10-f36f6813e593] Successfully updated port: ce2e7728-b7f2-40ae-991c-c492681e800b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 20 15:42:01 compute-0 nova_compute[250018]: 2026-01-20 15:42:01.515 250022 DEBUG oslo_concurrency.lockutils [None req-d5080f1f-9328-4a2d-bc36-0cfdf312e8f3 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Acquiring lock "refresh_cache-858544da-d6c9-46d0-ac10-f36f6813e593" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:42:01 compute-0 nova_compute[250018]: 2026-01-20 15:42:01.515 250022 DEBUG oslo_concurrency.lockutils [None req-d5080f1f-9328-4a2d-bc36-0cfdf312e8f3 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Acquired lock "refresh_cache-858544da-d6c9-46d0-ac10-f36f6813e593" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:42:01 compute-0 nova_compute[250018]: 2026-01-20 15:42:01.516 250022 DEBUG nova.network.neutron [None req-d5080f1f-9328-4a2d-bc36-0cfdf312e8f3 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: 858544da-d6c9-46d0-ac10-f36f6813e593] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 15:42:01 compute-0 nova_compute[250018]: 2026-01-20 15:42:01.685 250022 DEBUG nova.compute.manager [req-e246b672-0204-4ce2-986b-2cc58ac439cb req-ef162859-fb20-42b4-8461-f217393ef06c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 858544da-d6c9-46d0-ac10-f36f6813e593] Received event network-changed-ce2e7728-b7f2-40ae-991c-c492681e800b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:42:01 compute-0 nova_compute[250018]: 2026-01-20 15:42:01.686 250022 DEBUG nova.compute.manager [req-e246b672-0204-4ce2-986b-2cc58ac439cb req-ef162859-fb20-42b4-8461-f217393ef06c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 858544da-d6c9-46d0-ac10-f36f6813e593] Refreshing instance network info cache due to event network-changed-ce2e7728-b7f2-40ae-991c-c492681e800b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 15:42:01 compute-0 nova_compute[250018]: 2026-01-20 15:42:01.686 250022 DEBUG oslo_concurrency.lockutils [req-e246b672-0204-4ce2-986b-2cc58ac439cb req-ef162859-fb20-42b4-8461-f217393ef06c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-858544da-d6c9-46d0-ac10-f36f6813e593" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:42:01 compute-0 nova_compute[250018]: 2026-01-20 15:42:01.699 250022 DEBUG nova.network.neutron [None req-d5080f1f-9328-4a2d-bc36-0cfdf312e8f3 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: 858544da-d6c9-46d0-ac10-f36f6813e593] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 15:42:02 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:42:02 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:42:02 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:42:02.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:42:02 compute-0 ceph-mon[74360]: pgmap v3595: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:42:02 compute-0 nova_compute[250018]: 2026-01-20 15:42:02.653 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:42:02 compute-0 nova_compute[250018]: 2026-01-20 15:42:02.719 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:42:03 compute-0 nova_compute[250018]: 2026-01-20 15:42:03.049 250022 DEBUG nova.network.neutron [None req-d5080f1f-9328-4a2d-bc36-0cfdf312e8f3 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: 858544da-d6c9-46d0-ac10-f36f6813e593] Updating instance_info_cache with network_info: [{"id": "ce2e7728-b7f2-40ae-991c-c492681e800b", "address": "fa:16:3e:36:61:a3", "network": {"id": "b5fb4ee9-fa45-4797-871a-53247ebaf43e", "bridge": "br-int", "label": "tempest-network-smoke--639703376", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "728662ec7f654a3fb2e53a90b8707d7e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce2e7728-b7", "ovs_interfaceid": "ce2e7728-b7f2-40ae-991c-c492681e800b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:42:03 compute-0 nova_compute[250018]: 2026-01-20 15:42:03.069 250022 DEBUG oslo_concurrency.lockutils [None req-d5080f1f-9328-4a2d-bc36-0cfdf312e8f3 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Releasing lock "refresh_cache-858544da-d6c9-46d0-ac10-f36f6813e593" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:42:03 compute-0 nova_compute[250018]: 2026-01-20 15:42:03.069 250022 DEBUG nova.compute.manager [None req-d5080f1f-9328-4a2d-bc36-0cfdf312e8f3 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: 858544da-d6c9-46d0-ac10-f36f6813e593] Instance network_info: |[{"id": "ce2e7728-b7f2-40ae-991c-c492681e800b", "address": "fa:16:3e:36:61:a3", "network": {"id": "b5fb4ee9-fa45-4797-871a-53247ebaf43e", "bridge": "br-int", "label": "tempest-network-smoke--639703376", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "728662ec7f654a3fb2e53a90b8707d7e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce2e7728-b7", "ovs_interfaceid": "ce2e7728-b7f2-40ae-991c-c492681e800b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 20 15:42:03 compute-0 nova_compute[250018]: 2026-01-20 15:42:03.070 250022 DEBUG oslo_concurrency.lockutils [req-e246b672-0204-4ce2-986b-2cc58ac439cb req-ef162859-fb20-42b4-8461-f217393ef06c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-858544da-d6c9-46d0-ac10-f36f6813e593" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:42:03 compute-0 nova_compute[250018]: 2026-01-20 15:42:03.070 250022 DEBUG nova.network.neutron [req-e246b672-0204-4ce2-986b-2cc58ac439cb req-ef162859-fb20-42b4-8461-f217393ef06c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 858544da-d6c9-46d0-ac10-f36f6813e593] Refreshing network info cache for port ce2e7728-b7f2-40ae-991c-c492681e800b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 15:42:03 compute-0 nova_compute[250018]: 2026-01-20 15:42:03.076 250022 DEBUG nova.virt.libvirt.driver [None req-d5080f1f-9328-4a2d-bc36-0cfdf312e8f3 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: 858544da-d6c9-46d0-ac10-f36f6813e593] Start _get_guest_xml network_info=[{"id": "ce2e7728-b7f2-40ae-991c-c492681e800b", "address": "fa:16:3e:36:61:a3", "network": {"id": "b5fb4ee9-fa45-4797-871a-53247ebaf43e", "bridge": "br-int", "label": "tempest-network-smoke--639703376", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "728662ec7f654a3fb2e53a90b8707d7e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce2e7728-b7", "ovs_interfaceid": "ce2e7728-b7f2-40ae-991c-c492681e800b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encryption_format': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'encrypted': False, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'guest_format': None, 'size': 0, 'image_id': 'a32b3e07-16d8-46fd-9a7b-c242c432fcf9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 15:42:03 compute-0 nova_compute[250018]: 2026-01-20 15:42:03.083 250022 WARNING nova.virt.libvirt.driver [None req-d5080f1f-9328-4a2d-bc36-0cfdf312e8f3 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:42:03 compute-0 nova_compute[250018]: 2026-01-20 15:42:03.089 250022 DEBUG nova.virt.libvirt.host [None req-d5080f1f-9328-4a2d-bc36-0cfdf312e8f3 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 15:42:03 compute-0 nova_compute[250018]: 2026-01-20 15:42:03.090 250022 DEBUG nova.virt.libvirt.host [None req-d5080f1f-9328-4a2d-bc36-0cfdf312e8f3 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 15:42:03 compute-0 nova_compute[250018]: 2026-01-20 15:42:03.095 250022 DEBUG nova.virt.libvirt.host [None req-d5080f1f-9328-4a2d-bc36-0cfdf312e8f3 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 15:42:03 compute-0 nova_compute[250018]: 2026-01-20 15:42:03.095 250022 DEBUG nova.virt.libvirt.host [None req-d5080f1f-9328-4a2d-bc36-0cfdf312e8f3 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 15:42:03 compute-0 nova_compute[250018]: 2026-01-20 15:42:03.097 250022 DEBUG nova.virt.libvirt.driver [None req-d5080f1f-9328-4a2d-bc36-0cfdf312e8f3 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 15:42:03 compute-0 nova_compute[250018]: 2026-01-20 15:42:03.098 250022 DEBUG nova.virt.hardware [None req-d5080f1f-9328-4a2d-bc36-0cfdf312e8f3 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T14:21:55Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='522deaab-a741-4dbb-932d-d8b13a211c33',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T14:21:57Z,direct_url=<?>,disk_format='qcow2',id=a32b3e07-16d8-46fd-9a7b-c242c432fcf9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4e7b863e1a5b4a8bb85e8466fecb8db2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T14:22:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 15:42:03 compute-0 nova_compute[250018]: 2026-01-20 15:42:03.099 250022 DEBUG nova.virt.hardware [None req-d5080f1f-9328-4a2d-bc36-0cfdf312e8f3 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 15:42:03 compute-0 nova_compute[250018]: 2026-01-20 15:42:03.099 250022 DEBUG nova.virt.hardware [None req-d5080f1f-9328-4a2d-bc36-0cfdf312e8f3 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 15:42:03 compute-0 nova_compute[250018]: 2026-01-20 15:42:03.100 250022 DEBUG nova.virt.hardware [None req-d5080f1f-9328-4a2d-bc36-0cfdf312e8f3 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 15:42:03 compute-0 nova_compute[250018]: 2026-01-20 15:42:03.100 250022 DEBUG nova.virt.hardware [None req-d5080f1f-9328-4a2d-bc36-0cfdf312e8f3 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 15:42:03 compute-0 nova_compute[250018]: 2026-01-20 15:42:03.101 250022 DEBUG nova.virt.hardware [None req-d5080f1f-9328-4a2d-bc36-0cfdf312e8f3 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 15:42:03 compute-0 nova_compute[250018]: 2026-01-20 15:42:03.101 250022 DEBUG nova.virt.hardware [None req-d5080f1f-9328-4a2d-bc36-0cfdf312e8f3 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 15:42:03 compute-0 nova_compute[250018]: 2026-01-20 15:42:03.102 250022 DEBUG nova.virt.hardware [None req-d5080f1f-9328-4a2d-bc36-0cfdf312e8f3 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 15:42:03 compute-0 nova_compute[250018]: 2026-01-20 15:42:03.102 250022 DEBUG nova.virt.hardware [None req-d5080f1f-9328-4a2d-bc36-0cfdf312e8f3 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 15:42:03 compute-0 nova_compute[250018]: 2026-01-20 15:42:03.103 250022 DEBUG nova.virt.hardware [None req-d5080f1f-9328-4a2d-bc36-0cfdf312e8f3 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 15:42:03 compute-0 nova_compute[250018]: 2026-01-20 15:42:03.103 250022 DEBUG nova.virt.hardware [None req-d5080f1f-9328-4a2d-bc36-0cfdf312e8f3 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 15:42:03 compute-0 nova_compute[250018]: 2026-01-20 15:42:03.108 250022 DEBUG oslo_concurrency.processutils [None req-d5080f1f-9328-4a2d-bc36-0cfdf312e8f3 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:42:03 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:42:03 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:42:03 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:42:03.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:42:03 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3596: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:42:03 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:42:03 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 15:42:03 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3473706580' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:42:03 compute-0 nova_compute[250018]: 2026-01-20 15:42:03.575 250022 DEBUG oslo_concurrency.processutils [None req-d5080f1f-9328-4a2d-bc36-0cfdf312e8f3 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:42:03 compute-0 nova_compute[250018]: 2026-01-20 15:42:03.610 250022 DEBUG nova.storage.rbd_utils [None req-d5080f1f-9328-4a2d-bc36-0cfdf312e8f3 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] rbd image 858544da-d6c9-46d0-ac10-f36f6813e593_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:42:03 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3473706580' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:42:03 compute-0 nova_compute[250018]: 2026-01-20 15:42:03.618 250022 DEBUG oslo_concurrency.processutils [None req-d5080f1f-9328-4a2d-bc36-0cfdf312e8f3 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:42:04 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 20 15:42:04 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2723949721' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:42:04 compute-0 nova_compute[250018]: 2026-01-20 15:42:04.049 250022 DEBUG oslo_concurrency.processutils [None req-d5080f1f-9328-4a2d-bc36-0cfdf312e8f3 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:42:04 compute-0 nova_compute[250018]: 2026-01-20 15:42:04.051 250022 DEBUG nova.virt.libvirt.vif [None req-d5080f1f-9328-4a2d-bc36-0cfdf312e8f3 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T15:41:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-342561427-access_point-943937531',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-342561427-access_point-943937531',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-342561427-acc',id=218,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNPrC26E5zjpds8PmYXeLNQKBwLdgsc+VcubrdKnriEXDiMjUXGvx1Qk1D9X7eLck7XYpiSHt4U9t1SsZB3lsAeahV1YqeLst2/p8UQkxJjHaCXNOlF5uwsraAqiSop7uA==',key_name='tempest-TestSecurityGroupsBasicOps-681797586',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='728662ec7f654a3fb2e53a90b8707d7e',ramdisk_id='',reservation_id='r-sbfck5cd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-342561427',owner_user_name='tempest-TestSecurityGroupsBasicOps-342561427-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T15:41:53Z,user_data=None,user_id='5985ef736503499a9f1d734cabc33ce5',uuid=858544da-d6c9-46d0-ac10-f36f6813e593,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ce2e7728-b7f2-40ae-991c-c492681e800b", "address": "fa:16:3e:36:61:a3", "network": {"id": "b5fb4ee9-fa45-4797-871a-53247ebaf43e", "bridge": "br-int", "label": "tempest-network-smoke--639703376", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "728662ec7f654a3fb2e53a90b8707d7e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce2e7728-b7", "ovs_interfaceid": "ce2e7728-b7f2-40ae-991c-c492681e800b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 15:42:04 compute-0 nova_compute[250018]: 2026-01-20 15:42:04.051 250022 DEBUG nova.network.os_vif_util [None req-d5080f1f-9328-4a2d-bc36-0cfdf312e8f3 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Converting VIF {"id": "ce2e7728-b7f2-40ae-991c-c492681e800b", "address": "fa:16:3e:36:61:a3", "network": {"id": "b5fb4ee9-fa45-4797-871a-53247ebaf43e", "bridge": "br-int", "label": "tempest-network-smoke--639703376", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "728662ec7f654a3fb2e53a90b8707d7e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce2e7728-b7", "ovs_interfaceid": "ce2e7728-b7f2-40ae-991c-c492681e800b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:42:04 compute-0 nova_compute[250018]: 2026-01-20 15:42:04.052 250022 DEBUG nova.network.os_vif_util [None req-d5080f1f-9328-4a2d-bc36-0cfdf312e8f3 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:36:61:a3,bridge_name='br-int',has_traffic_filtering=True,id=ce2e7728-b7f2-40ae-991c-c492681e800b,network=Network(b5fb4ee9-fa45-4797-871a-53247ebaf43e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapce2e7728-b7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:42:04 compute-0 nova_compute[250018]: 2026-01-20 15:42:04.053 250022 DEBUG nova.objects.instance [None req-d5080f1f-9328-4a2d-bc36-0cfdf312e8f3 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Lazy-loading 'pci_devices' on Instance uuid 858544da-d6c9-46d0-ac10-f36f6813e593 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:42:04 compute-0 nova_compute[250018]: 2026-01-20 15:42:04.077 250022 DEBUG nova.virt.libvirt.driver [None req-d5080f1f-9328-4a2d-bc36-0cfdf312e8f3 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: 858544da-d6c9-46d0-ac10-f36f6813e593] End _get_guest_xml xml=<domain type="kvm">
Jan 20 15:42:04 compute-0 nova_compute[250018]:   <uuid>858544da-d6c9-46d0-ac10-f36f6813e593</uuid>
Jan 20 15:42:04 compute-0 nova_compute[250018]:   <name>instance-000000da</name>
Jan 20 15:42:04 compute-0 nova_compute[250018]:   <memory>131072</memory>
Jan 20 15:42:04 compute-0 nova_compute[250018]:   <vcpu>1</vcpu>
Jan 20 15:42:04 compute-0 nova_compute[250018]:   <metadata>
Jan 20 15:42:04 compute-0 nova_compute[250018]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 15:42:04 compute-0 nova_compute[250018]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 15:42:04 compute-0 nova_compute[250018]:       <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-342561427-access_point-943937531</nova:name>
Jan 20 15:42:04 compute-0 nova_compute[250018]:       <nova:creationTime>2026-01-20 15:42:03</nova:creationTime>
Jan 20 15:42:04 compute-0 nova_compute[250018]:       <nova:flavor name="m1.nano">
Jan 20 15:42:04 compute-0 nova_compute[250018]:         <nova:memory>128</nova:memory>
Jan 20 15:42:04 compute-0 nova_compute[250018]:         <nova:disk>1</nova:disk>
Jan 20 15:42:04 compute-0 nova_compute[250018]:         <nova:swap>0</nova:swap>
Jan 20 15:42:04 compute-0 nova_compute[250018]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 15:42:04 compute-0 nova_compute[250018]:         <nova:vcpus>1</nova:vcpus>
Jan 20 15:42:04 compute-0 nova_compute[250018]:       </nova:flavor>
Jan 20 15:42:04 compute-0 nova_compute[250018]:       <nova:owner>
Jan 20 15:42:04 compute-0 nova_compute[250018]:         <nova:user uuid="5985ef736503499a9f1d734cabc33ce5">tempest-TestSecurityGroupsBasicOps-342561427-project-member</nova:user>
Jan 20 15:42:04 compute-0 nova_compute[250018]:         <nova:project uuid="728662ec7f654a3fb2e53a90b8707d7e">tempest-TestSecurityGroupsBasicOps-342561427</nova:project>
Jan 20 15:42:04 compute-0 nova_compute[250018]:       </nova:owner>
Jan 20 15:42:04 compute-0 nova_compute[250018]:       <nova:root type="image" uuid="a32b3e07-16d8-46fd-9a7b-c242c432fcf9"/>
Jan 20 15:42:04 compute-0 nova_compute[250018]:       <nova:ports>
Jan 20 15:42:04 compute-0 nova_compute[250018]:         <nova:port uuid="ce2e7728-b7f2-40ae-991c-c492681e800b">
Jan 20 15:42:04 compute-0 nova_compute[250018]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Jan 20 15:42:04 compute-0 nova_compute[250018]:         </nova:port>
Jan 20 15:42:04 compute-0 nova_compute[250018]:       </nova:ports>
Jan 20 15:42:04 compute-0 nova_compute[250018]:     </nova:instance>
Jan 20 15:42:04 compute-0 nova_compute[250018]:   </metadata>
Jan 20 15:42:04 compute-0 nova_compute[250018]:   <sysinfo type="smbios">
Jan 20 15:42:04 compute-0 nova_compute[250018]:     <system>
Jan 20 15:42:04 compute-0 nova_compute[250018]:       <entry name="manufacturer">RDO</entry>
Jan 20 15:42:04 compute-0 nova_compute[250018]:       <entry name="product">OpenStack Compute</entry>
Jan 20 15:42:04 compute-0 nova_compute[250018]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 15:42:04 compute-0 nova_compute[250018]:       <entry name="serial">858544da-d6c9-46d0-ac10-f36f6813e593</entry>
Jan 20 15:42:04 compute-0 nova_compute[250018]:       <entry name="uuid">858544da-d6c9-46d0-ac10-f36f6813e593</entry>
Jan 20 15:42:04 compute-0 nova_compute[250018]:       <entry name="family">Virtual Machine</entry>
Jan 20 15:42:04 compute-0 nova_compute[250018]:     </system>
Jan 20 15:42:04 compute-0 nova_compute[250018]:   </sysinfo>
Jan 20 15:42:04 compute-0 nova_compute[250018]:   <os>
Jan 20 15:42:04 compute-0 nova_compute[250018]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 15:42:04 compute-0 nova_compute[250018]:     <boot dev="hd"/>
Jan 20 15:42:04 compute-0 nova_compute[250018]:     <smbios mode="sysinfo"/>
Jan 20 15:42:04 compute-0 nova_compute[250018]:   </os>
Jan 20 15:42:04 compute-0 nova_compute[250018]:   <features>
Jan 20 15:42:04 compute-0 nova_compute[250018]:     <acpi/>
Jan 20 15:42:04 compute-0 nova_compute[250018]:     <apic/>
Jan 20 15:42:04 compute-0 nova_compute[250018]:     <vmcoreinfo/>
Jan 20 15:42:04 compute-0 nova_compute[250018]:   </features>
Jan 20 15:42:04 compute-0 nova_compute[250018]:   <clock offset="utc">
Jan 20 15:42:04 compute-0 nova_compute[250018]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 15:42:04 compute-0 nova_compute[250018]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 15:42:04 compute-0 nova_compute[250018]:     <timer name="hpet" present="no"/>
Jan 20 15:42:04 compute-0 nova_compute[250018]:   </clock>
Jan 20 15:42:04 compute-0 nova_compute[250018]:   <cpu mode="custom" match="exact">
Jan 20 15:42:04 compute-0 nova_compute[250018]:     <model>Nehalem</model>
Jan 20 15:42:04 compute-0 nova_compute[250018]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 15:42:04 compute-0 nova_compute[250018]:   </cpu>
Jan 20 15:42:04 compute-0 nova_compute[250018]:   <devices>
Jan 20 15:42:04 compute-0 nova_compute[250018]:     <disk type="network" device="disk">
Jan 20 15:42:04 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 15:42:04 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/858544da-d6c9-46d0-ac10-f36f6813e593_disk">
Jan 20 15:42:04 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 15:42:04 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 15:42:04 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 15:42:04 compute-0 nova_compute[250018]:       </source>
Jan 20 15:42:04 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 15:42:04 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 15:42:04 compute-0 nova_compute[250018]:       </auth>
Jan 20 15:42:04 compute-0 nova_compute[250018]:       <target dev="vda" bus="virtio"/>
Jan 20 15:42:04 compute-0 nova_compute[250018]:     </disk>
Jan 20 15:42:04 compute-0 nova_compute[250018]:     <disk type="network" device="cdrom">
Jan 20 15:42:04 compute-0 nova_compute[250018]:       <driver type="raw" cache="none"/>
Jan 20 15:42:04 compute-0 nova_compute[250018]:       <source protocol="rbd" name="vms/858544da-d6c9-46d0-ac10-f36f6813e593_disk.config">
Jan 20 15:42:04 compute-0 nova_compute[250018]:         <host name="192.168.122.100" port="6789"/>
Jan 20 15:42:04 compute-0 nova_compute[250018]:         <host name="192.168.122.102" port="6789"/>
Jan 20 15:42:04 compute-0 nova_compute[250018]:         <host name="192.168.122.101" port="6789"/>
Jan 20 15:42:04 compute-0 nova_compute[250018]:       </source>
Jan 20 15:42:04 compute-0 nova_compute[250018]:       <auth username="openstack">
Jan 20 15:42:04 compute-0 nova_compute[250018]:         <secret type="ceph" uuid="e399cf45-e6b6-5393-99f1-75c601d3f188"/>
Jan 20 15:42:04 compute-0 nova_compute[250018]:       </auth>
Jan 20 15:42:04 compute-0 nova_compute[250018]:       <target dev="sda" bus="sata"/>
Jan 20 15:42:04 compute-0 nova_compute[250018]:     </disk>
Jan 20 15:42:04 compute-0 nova_compute[250018]:     <interface type="ethernet">
Jan 20 15:42:04 compute-0 nova_compute[250018]:       <mac address="fa:16:3e:36:61:a3"/>
Jan 20 15:42:04 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 15:42:04 compute-0 nova_compute[250018]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 15:42:04 compute-0 nova_compute[250018]:       <mtu size="1442"/>
Jan 20 15:42:04 compute-0 nova_compute[250018]:       <target dev="tapce2e7728-b7"/>
Jan 20 15:42:04 compute-0 nova_compute[250018]:     </interface>
Jan 20 15:42:04 compute-0 nova_compute[250018]:     <serial type="pty">
Jan 20 15:42:04 compute-0 nova_compute[250018]:       <log file="/var/lib/nova/instances/858544da-d6c9-46d0-ac10-f36f6813e593/console.log" append="off"/>
Jan 20 15:42:04 compute-0 nova_compute[250018]:     </serial>
Jan 20 15:42:04 compute-0 nova_compute[250018]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 15:42:04 compute-0 nova_compute[250018]:     <video>
Jan 20 15:42:04 compute-0 nova_compute[250018]:       <model type="virtio"/>
Jan 20 15:42:04 compute-0 nova_compute[250018]:     </video>
Jan 20 15:42:04 compute-0 nova_compute[250018]:     <input type="tablet" bus="usb"/>
Jan 20 15:42:04 compute-0 nova_compute[250018]:     <rng model="virtio">
Jan 20 15:42:04 compute-0 nova_compute[250018]:       <backend model="random">/dev/urandom</backend>
Jan 20 15:42:04 compute-0 nova_compute[250018]:     </rng>
Jan 20 15:42:04 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root"/>
Jan 20 15:42:04 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:42:04 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:42:04 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:42:04 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:42:04 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:42:04 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:42:04 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:42:04 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:42:04 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:42:04 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:42:04 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:42:04 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:42:04 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:42:04 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:42:04 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:42:04 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:42:04 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:42:04 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:42:04 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:42:04 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:42:04 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:42:04 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:42:04 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:42:04 compute-0 nova_compute[250018]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 15:42:04 compute-0 nova_compute[250018]:     <controller type="usb" index="0"/>
Jan 20 15:42:04 compute-0 nova_compute[250018]:     <memballoon model="virtio">
Jan 20 15:42:04 compute-0 nova_compute[250018]:       <stats period="10"/>
Jan 20 15:42:04 compute-0 nova_compute[250018]:     </memballoon>
Jan 20 15:42:04 compute-0 nova_compute[250018]:   </devices>
Jan 20 15:42:04 compute-0 nova_compute[250018]: </domain>
Jan 20 15:42:04 compute-0 nova_compute[250018]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 15:42:04 compute-0 nova_compute[250018]: 2026-01-20 15:42:04.079 250022 DEBUG nova.compute.manager [None req-d5080f1f-9328-4a2d-bc36-0cfdf312e8f3 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: 858544da-d6c9-46d0-ac10-f36f6813e593] Preparing to wait for external event network-vif-plugged-ce2e7728-b7f2-40ae-991c-c492681e800b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 20 15:42:04 compute-0 nova_compute[250018]: 2026-01-20 15:42:04.080 250022 DEBUG oslo_concurrency.lockutils [None req-d5080f1f-9328-4a2d-bc36-0cfdf312e8f3 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Acquiring lock "858544da-d6c9-46d0-ac10-f36f6813e593-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:42:04 compute-0 nova_compute[250018]: 2026-01-20 15:42:04.080 250022 DEBUG oslo_concurrency.lockutils [None req-d5080f1f-9328-4a2d-bc36-0cfdf312e8f3 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Lock "858544da-d6c9-46d0-ac10-f36f6813e593-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:42:04 compute-0 nova_compute[250018]: 2026-01-20 15:42:04.080 250022 DEBUG oslo_concurrency.lockutils [None req-d5080f1f-9328-4a2d-bc36-0cfdf312e8f3 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Lock "858544da-d6c9-46d0-ac10-f36f6813e593-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:42:04 compute-0 nova_compute[250018]: 2026-01-20 15:42:04.081 250022 DEBUG nova.virt.libvirt.vif [None req-d5080f1f-9328-4a2d-bc36-0cfdf312e8f3 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T15:41:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-342561427-access_point-943937531',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-342561427-access_point-943937531',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-342561427-acc',id=218,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNPrC26E5zjpds8PmYXeLNQKBwLdgsc+VcubrdKnriEXDiMjUXGvx1Qk1D9X7eLck7XYpiSHt4U9t1SsZB3lsAeahV1YqeLst2/p8UQkxJjHaCXNOlF5uwsraAqiSop7uA==',key_name='tempest-TestSecurityGroupsBasicOps-681797586',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='728662ec7f654a3fb2e53a90b8707d7e',ramdisk_id='',reservation_id='r-sbfck5cd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-342561427',owner_user_name='tempest-TestSecurityGroupsBasicOps-342561427-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T15:41:53Z,user_data=None,user_id='5985ef736503499a9f1d734cabc33ce5',uuid=858544da-d6c9-46d0-ac10-f36f6813e593,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ce2e7728-b7f2-40ae-991c-c492681e800b", "address": "fa:16:3e:36:61:a3", "network": {"id": "b5fb4ee9-fa45-4797-871a-53247ebaf43e", "bridge": "br-int", "label": "tempest-network-smoke--639703376", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "728662ec7f654a3fb2e53a90b8707d7e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce2e7728-b7", "ovs_interfaceid": "ce2e7728-b7f2-40ae-991c-c492681e800b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 15:42:04 compute-0 nova_compute[250018]: 2026-01-20 15:42:04.082 250022 DEBUG nova.network.os_vif_util [None req-d5080f1f-9328-4a2d-bc36-0cfdf312e8f3 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Converting VIF {"id": "ce2e7728-b7f2-40ae-991c-c492681e800b", "address": "fa:16:3e:36:61:a3", "network": {"id": "b5fb4ee9-fa45-4797-871a-53247ebaf43e", "bridge": "br-int", "label": "tempest-network-smoke--639703376", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "728662ec7f654a3fb2e53a90b8707d7e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce2e7728-b7", "ovs_interfaceid": "ce2e7728-b7f2-40ae-991c-c492681e800b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:42:04 compute-0 nova_compute[250018]: 2026-01-20 15:42:04.083 250022 DEBUG nova.network.os_vif_util [None req-d5080f1f-9328-4a2d-bc36-0cfdf312e8f3 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:36:61:a3,bridge_name='br-int',has_traffic_filtering=True,id=ce2e7728-b7f2-40ae-991c-c492681e800b,network=Network(b5fb4ee9-fa45-4797-871a-53247ebaf43e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapce2e7728-b7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:42:04 compute-0 nova_compute[250018]: 2026-01-20 15:42:04.083 250022 DEBUG os_vif [None req-d5080f1f-9328-4a2d-bc36-0cfdf312e8f3 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:36:61:a3,bridge_name='br-int',has_traffic_filtering=True,id=ce2e7728-b7f2-40ae-991c-c492681e800b,network=Network(b5fb4ee9-fa45-4797-871a-53247ebaf43e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapce2e7728-b7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 15:42:04 compute-0 nova_compute[250018]: 2026-01-20 15:42:04.085 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:42:04 compute-0 nova_compute[250018]: 2026-01-20 15:42:04.085 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:42:04 compute-0 nova_compute[250018]: 2026-01-20 15:42:04.086 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:42:04 compute-0 nova_compute[250018]: 2026-01-20 15:42:04.089 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:42:04 compute-0 nova_compute[250018]: 2026-01-20 15:42:04.089 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapce2e7728-b7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:42:04 compute-0 nova_compute[250018]: 2026-01-20 15:42:04.090 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapce2e7728-b7, col_values=(('external_ids', {'iface-id': 'ce2e7728-b7f2-40ae-991c-c492681e800b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:36:61:a3', 'vm-uuid': '858544da-d6c9-46d0-ac10-f36f6813e593'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:42:04 compute-0 nova_compute[250018]: 2026-01-20 15:42:04.091 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:42:04 compute-0 NetworkManager[48960]: <info>  [1768923724.0924] manager: (tapce2e7728-b7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/393)
Jan 20 15:42:04 compute-0 nova_compute[250018]: 2026-01-20 15:42:04.094 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 15:42:04 compute-0 nova_compute[250018]: 2026-01-20 15:42:04.102 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:42:04 compute-0 nova_compute[250018]: 2026-01-20 15:42:04.103 250022 INFO os_vif [None req-d5080f1f-9328-4a2d-bc36-0cfdf312e8f3 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:36:61:a3,bridge_name='br-int',has_traffic_filtering=True,id=ce2e7728-b7f2-40ae-991c-c492681e800b,network=Network(b5fb4ee9-fa45-4797-871a-53247ebaf43e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapce2e7728-b7')
Jan 20 15:42:04 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:42:04 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:42:04 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:42:04.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:42:04 compute-0 nova_compute[250018]: 2026-01-20 15:42:04.161 250022 DEBUG nova.virt.libvirt.driver [None req-d5080f1f-9328-4a2d-bc36-0cfdf312e8f3 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:42:04 compute-0 nova_compute[250018]: 2026-01-20 15:42:04.162 250022 DEBUG nova.virt.libvirt.driver [None req-d5080f1f-9328-4a2d-bc36-0cfdf312e8f3 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 15:42:04 compute-0 nova_compute[250018]: 2026-01-20 15:42:04.162 250022 DEBUG nova.virt.libvirt.driver [None req-d5080f1f-9328-4a2d-bc36-0cfdf312e8f3 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] No VIF found with MAC fa:16:3e:36:61:a3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 15:42:04 compute-0 nova_compute[250018]: 2026-01-20 15:42:04.162 250022 INFO nova.virt.libvirt.driver [None req-d5080f1f-9328-4a2d-bc36-0cfdf312e8f3 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: 858544da-d6c9-46d0-ac10-f36f6813e593] Using config drive
Jan 20 15:42:04 compute-0 nova_compute[250018]: 2026-01-20 15:42:04.190 250022 DEBUG nova.storage.rbd_utils [None req-d5080f1f-9328-4a2d-bc36-0cfdf312e8f3 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] rbd image 858544da-d6c9-46d0-ac10-f36f6813e593_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:42:04 compute-0 ceph-mon[74360]: pgmap v3596: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:42:04 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/2723949721' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:42:04 compute-0 nova_compute[250018]: 2026-01-20 15:42:04.654 250022 INFO nova.virt.libvirt.driver [None req-d5080f1f-9328-4a2d-bc36-0cfdf312e8f3 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: 858544da-d6c9-46d0-ac10-f36f6813e593] Creating config drive at /var/lib/nova/instances/858544da-d6c9-46d0-ac10-f36f6813e593/disk.config
Jan 20 15:42:04 compute-0 nova_compute[250018]: 2026-01-20 15:42:04.663 250022 DEBUG oslo_concurrency.processutils [None req-d5080f1f-9328-4a2d-bc36-0cfdf312e8f3 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/858544da-d6c9-46d0-ac10-f36f6813e593/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8jd_jij5 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:42:04 compute-0 nova_compute[250018]: 2026-01-20 15:42:04.802 250022 DEBUG oslo_concurrency.processutils [None req-d5080f1f-9328-4a2d-bc36-0cfdf312e8f3 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/858544da-d6c9-46d0-ac10-f36f6813e593/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8jd_jij5" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:42:04 compute-0 nova_compute[250018]: 2026-01-20 15:42:04.853 250022 DEBUG nova.storage.rbd_utils [None req-d5080f1f-9328-4a2d-bc36-0cfdf312e8f3 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] rbd image 858544da-d6c9-46d0-ac10-f36f6813e593_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 15:42:04 compute-0 nova_compute[250018]: 2026-01-20 15:42:04.859 250022 DEBUG oslo_concurrency.processutils [None req-d5080f1f-9328-4a2d-bc36-0cfdf312e8f3 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/858544da-d6c9-46d0-ac10-f36f6813e593/disk.config 858544da-d6c9-46d0-ac10-f36f6813e593_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:42:04 compute-0 nova_compute[250018]: 2026-01-20 15:42:04.901 250022 DEBUG nova.network.neutron [req-e246b672-0204-4ce2-986b-2cc58ac439cb req-ef162859-fb20-42b4-8461-f217393ef06c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 858544da-d6c9-46d0-ac10-f36f6813e593] Updated VIF entry in instance network info cache for port ce2e7728-b7f2-40ae-991c-c492681e800b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 15:42:04 compute-0 nova_compute[250018]: 2026-01-20 15:42:04.902 250022 DEBUG nova.network.neutron [req-e246b672-0204-4ce2-986b-2cc58ac439cb req-ef162859-fb20-42b4-8461-f217393ef06c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 858544da-d6c9-46d0-ac10-f36f6813e593] Updating instance_info_cache with network_info: [{"id": "ce2e7728-b7f2-40ae-991c-c492681e800b", "address": "fa:16:3e:36:61:a3", "network": {"id": "b5fb4ee9-fa45-4797-871a-53247ebaf43e", "bridge": "br-int", "label": "tempest-network-smoke--639703376", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "728662ec7f654a3fb2e53a90b8707d7e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce2e7728-b7", "ovs_interfaceid": "ce2e7728-b7f2-40ae-991c-c492681e800b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:42:04 compute-0 nova_compute[250018]: 2026-01-20 15:42:04.925 250022 DEBUG oslo_concurrency.lockutils [req-e246b672-0204-4ce2-986b-2cc58ac439cb req-ef162859-fb20-42b4-8461-f217393ef06c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-858544da-d6c9-46d0-ac10-f36f6813e593" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:42:05 compute-0 nova_compute[250018]: 2026-01-20 15:42:05.039 250022 DEBUG oslo_concurrency.processutils [None req-d5080f1f-9328-4a2d-bc36-0cfdf312e8f3 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/858544da-d6c9-46d0-ac10-f36f6813e593/disk.config 858544da-d6c9-46d0-ac10-f36f6813e593_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.181s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:42:05 compute-0 nova_compute[250018]: 2026-01-20 15:42:05.040 250022 INFO nova.virt.libvirt.driver [None req-d5080f1f-9328-4a2d-bc36-0cfdf312e8f3 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: 858544da-d6c9-46d0-ac10-f36f6813e593] Deleting local config drive /var/lib/nova/instances/858544da-d6c9-46d0-ac10-f36f6813e593/disk.config because it was imported into RBD.
Jan 20 15:42:05 compute-0 kernel: tapce2e7728-b7: entered promiscuous mode
Jan 20 15:42:05 compute-0 NetworkManager[48960]: <info>  [1768923725.0902] manager: (tapce2e7728-b7): new Tun device (/org/freedesktop/NetworkManager/Devices/394)
Jan 20 15:42:05 compute-0 ovn_controller[148666]: 2026-01-20T15:42:05Z|00808|binding|INFO|Claiming lport ce2e7728-b7f2-40ae-991c-c492681e800b for this chassis.
Jan 20 15:42:05 compute-0 ovn_controller[148666]: 2026-01-20T15:42:05Z|00809|binding|INFO|ce2e7728-b7f2-40ae-991c-c492681e800b: Claiming fa:16:3e:36:61:a3 10.100.0.9
Jan 20 15:42:05 compute-0 nova_compute[250018]: 2026-01-20 15:42:05.146 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:42:05 compute-0 nova_compute[250018]: 2026-01-20 15:42:05.151 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:42:05 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:42:05.158 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:36:61:a3 10.100.0.9'], port_security=['fa:16:3e:36:61:a3 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '858544da-d6c9-46d0-ac10-f36f6813e593', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b5fb4ee9-fa45-4797-871a-53247ebaf43e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '728662ec7f654a3fb2e53a90b8707d7e', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b6de228c-5aa6-462d-950b-3f1ea3d45f2d b7424c3a-5aee-4d68-a5d7-51752094553b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=909829b9-c0dd-4f89-9095-7f817ccefae3, chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=ce2e7728-b7f2-40ae-991c-c492681e800b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:42:05 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:42:05.159 160071 INFO neutron.agent.ovn.metadata.agent [-] Port ce2e7728-b7f2-40ae-991c-c492681e800b in datapath b5fb4ee9-fa45-4797-871a-53247ebaf43e bound to our chassis
Jan 20 15:42:05 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:42:05.160 160071 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b5fb4ee9-fa45-4797-871a-53247ebaf43e
Jan 20 15:42:05 compute-0 systemd-udevd[399105]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 15:42:05 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:42:05.171 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[ee9bc460-2972-40ab-ba98-7cb20fb40a1f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:42:05 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:42:05.172 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb5fb4ee9-f1 in ovnmeta-b5fb4ee9-fa45-4797-871a-53247ebaf43e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 15:42:05 compute-0 systemd-machined[216401]: New machine qemu-95-instance-000000da.
Jan 20 15:42:05 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:42:05.174 257604 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb5fb4ee9-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 15:42:05 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:42:05.174 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[ddd3798c-419b-480f-ac87-743c4b80d378]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:42:05 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:42:05.175 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[3b7423d3-44f1-4a9c-8585-603108fb5da1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:42:05 compute-0 NetworkManager[48960]: <info>  [1768923725.1829] device (tapce2e7728-b7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 15:42:05 compute-0 NetworkManager[48960]: <info>  [1768923725.1838] device (tapce2e7728-b7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 15:42:05 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:42:05.184 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[daaac631-6db5-4fa8-a272-63dc9203290e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:42:05 compute-0 systemd[1]: Started Virtual Machine qemu-95-instance-000000da.
Jan 20 15:42:05 compute-0 ovn_controller[148666]: 2026-01-20T15:42:05Z|00810|binding|INFO|Setting lport ce2e7728-b7f2-40ae-991c-c492681e800b ovn-installed in OVS
Jan 20 15:42:05 compute-0 ovn_controller[148666]: 2026-01-20T15:42:05Z|00811|binding|INFO|Setting lport ce2e7728-b7f2-40ae-991c-c492681e800b up in Southbound
Jan 20 15:42:05 compute-0 nova_compute[250018]: 2026-01-20 15:42:05.210 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:42:05 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:42:05.209 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[c05b35eb-4a42-408e-b6b2-22dcc5044922]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:42:05 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:42:05.236 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[3c6449ed-0d0a-45e1-aa4d-f81d877fb21c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:42:05 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:42:05.241 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[f011dfe5-d5dc-44fd-88d0-74faee40c06f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:42:05 compute-0 systemd-udevd[399108]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 15:42:05 compute-0 NetworkManager[48960]: <info>  [1768923725.2429] manager: (tapb5fb4ee9-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/395)
Jan 20 15:42:05 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:42:05.275 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[02997343-8f5d-48a4-9402-ad40aaf11876]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:42:05 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:42:05.278 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[a62d28f4-9d24-487c-b1bd-00155807fe0f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:42:05 compute-0 NetworkManager[48960]: <info>  [1768923725.3012] device (tapb5fb4ee9-f0): carrier: link connected
Jan 20 15:42:05 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:42:05.307 257661 DEBUG oslo.privsep.daemon [-] privsep: reply[cd1cbe07-7d89-4378-99e4-08650a98e5fe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:42:05 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:42:05.322 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[39399384-d2f7-4780-83bd-6210a2ac75e1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb5fb4ee9-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:42:58:5a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 253], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 966602, 'reachable_time': 25934, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 399137, 'error': None, 'target': 'ovnmeta-b5fb4ee9-fa45-4797-871a-53247ebaf43e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:42:05 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:42:05.336 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[67f006aa-2fce-469c-a19f-617c4a9cb6da]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe42:585a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 966602, 'tstamp': 966602}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 399138, 'error': None, 'target': 'ovnmeta-b5fb4ee9-fa45-4797-871a-53247ebaf43e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:42:05 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:42:05.355 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[290972ac-2fd9-441b-a45f-a8e8afa0027a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb5fb4ee9-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:42:58:5a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 253], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 966602, 'reachable_time': 25934, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 399139, 'error': None, 'target': 'ovnmeta-b5fb4ee9-fa45-4797-871a-53247ebaf43e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:42:05 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:42:05 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:42:05 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:42:05.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:42:05 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:42:05.385 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[e90c9d3b-93e0-4d5e-bad0-a88028c1fdca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:42:05 compute-0 nova_compute[250018]: 2026-01-20 15:42:05.426 250022 DEBUG nova.compute.manager [req-a0d71d03-74c0-4d03-b0ef-a9462418fb7a req-7f281d38-e7a7-46a0-85cc-6a6b5178fbfd 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 858544da-d6c9-46d0-ac10-f36f6813e593] Received event network-vif-plugged-ce2e7728-b7f2-40ae-991c-c492681e800b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:42:05 compute-0 nova_compute[250018]: 2026-01-20 15:42:05.426 250022 DEBUG oslo_concurrency.lockutils [req-a0d71d03-74c0-4d03-b0ef-a9462418fb7a req-7f281d38-e7a7-46a0-85cc-6a6b5178fbfd 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "858544da-d6c9-46d0-ac10-f36f6813e593-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:42:05 compute-0 nova_compute[250018]: 2026-01-20 15:42:05.426 250022 DEBUG oslo_concurrency.lockutils [req-a0d71d03-74c0-4d03-b0ef-a9462418fb7a req-7f281d38-e7a7-46a0-85cc-6a6b5178fbfd 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "858544da-d6c9-46d0-ac10-f36f6813e593-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:42:05 compute-0 nova_compute[250018]: 2026-01-20 15:42:05.427 250022 DEBUG oslo_concurrency.lockutils [req-a0d71d03-74c0-4d03-b0ef-a9462418fb7a req-7f281d38-e7a7-46a0-85cc-6a6b5178fbfd 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "858544da-d6c9-46d0-ac10-f36f6813e593-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:42:05 compute-0 nova_compute[250018]: 2026-01-20 15:42:05.427 250022 DEBUG nova.compute.manager [req-a0d71d03-74c0-4d03-b0ef-a9462418fb7a req-7f281d38-e7a7-46a0-85cc-6a6b5178fbfd 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 858544da-d6c9-46d0-ac10-f36f6813e593] Processing event network-vif-plugged-ce2e7728-b7f2-40ae-991c-c492681e800b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 20 15:42:05 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:42:05.447 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[95e3da25-76c4-4d62-8a56-0101773dcde2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:42:05 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:42:05.449 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb5fb4ee9-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:42:05 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:42:05.449 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 15:42:05 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:42:05.450 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb5fb4ee9-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:42:05 compute-0 nova_compute[250018]: 2026-01-20 15:42:05.451 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:42:05 compute-0 NetworkManager[48960]: <info>  [1768923725.4524] manager: (tapb5fb4ee9-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/396)
Jan 20 15:42:05 compute-0 kernel: tapb5fb4ee9-f0: entered promiscuous mode
Jan 20 15:42:05 compute-0 nova_compute[250018]: 2026-01-20 15:42:05.453 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:42:05 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:42:05.454 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb5fb4ee9-f0, col_values=(('external_ids', {'iface-id': '92b999b0-5595-47ff-ac54-cb52d2ba58ba'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:42:05 compute-0 nova_compute[250018]: 2026-01-20 15:42:05.455 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:42:05 compute-0 ovn_controller[148666]: 2026-01-20T15:42:05Z|00812|binding|INFO|Releasing lport 92b999b0-5595-47ff-ac54-cb52d2ba58ba from this chassis (sb_readonly=0)
Jan 20 15:42:05 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3597: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Jan 20 15:42:05 compute-0 nova_compute[250018]: 2026-01-20 15:42:05.468 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:42:05 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:42:05.469 160071 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b5fb4ee9-fa45-4797-871a-53247ebaf43e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b5fb4ee9-fa45-4797-871a-53247ebaf43e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 15:42:05 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:42:05.470 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[b5a729f5-b489-4b66-beb7-cfcc8aa9a69f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:42:05 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:42:05.471 160071 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 15:42:05 compute-0 ovn_metadata_agent[160049]: global
Jan 20 15:42:05 compute-0 ovn_metadata_agent[160049]:     log         /dev/log local0 debug
Jan 20 15:42:05 compute-0 ovn_metadata_agent[160049]:     log-tag     haproxy-metadata-proxy-b5fb4ee9-fa45-4797-871a-53247ebaf43e
Jan 20 15:42:05 compute-0 ovn_metadata_agent[160049]:     user        root
Jan 20 15:42:05 compute-0 ovn_metadata_agent[160049]:     group       root
Jan 20 15:42:05 compute-0 ovn_metadata_agent[160049]:     maxconn     1024
Jan 20 15:42:05 compute-0 ovn_metadata_agent[160049]:     pidfile     /var/lib/neutron/external/pids/b5fb4ee9-fa45-4797-871a-53247ebaf43e.pid.haproxy
Jan 20 15:42:05 compute-0 ovn_metadata_agent[160049]:     daemon
Jan 20 15:42:05 compute-0 ovn_metadata_agent[160049]: 
Jan 20 15:42:05 compute-0 ovn_metadata_agent[160049]: defaults
Jan 20 15:42:05 compute-0 ovn_metadata_agent[160049]:     log global
Jan 20 15:42:05 compute-0 ovn_metadata_agent[160049]:     mode http
Jan 20 15:42:05 compute-0 ovn_metadata_agent[160049]:     option httplog
Jan 20 15:42:05 compute-0 ovn_metadata_agent[160049]:     option dontlognull
Jan 20 15:42:05 compute-0 ovn_metadata_agent[160049]:     option http-server-close
Jan 20 15:42:05 compute-0 ovn_metadata_agent[160049]:     option forwardfor
Jan 20 15:42:05 compute-0 ovn_metadata_agent[160049]:     retries                 3
Jan 20 15:42:05 compute-0 ovn_metadata_agent[160049]:     timeout http-request    30s
Jan 20 15:42:05 compute-0 ovn_metadata_agent[160049]:     timeout connect         30s
Jan 20 15:42:05 compute-0 ovn_metadata_agent[160049]:     timeout client          32s
Jan 20 15:42:05 compute-0 ovn_metadata_agent[160049]:     timeout server          32s
Jan 20 15:42:05 compute-0 ovn_metadata_agent[160049]:     timeout http-keep-alive 30s
Jan 20 15:42:05 compute-0 ovn_metadata_agent[160049]: 
Jan 20 15:42:05 compute-0 ovn_metadata_agent[160049]: 
Jan 20 15:42:05 compute-0 ovn_metadata_agent[160049]: listen listener
Jan 20 15:42:05 compute-0 ovn_metadata_agent[160049]:     bind 169.254.169.254:80
Jan 20 15:42:05 compute-0 ovn_metadata_agent[160049]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 15:42:05 compute-0 ovn_metadata_agent[160049]:     http-request add-header X-OVN-Network-ID b5fb4ee9-fa45-4797-871a-53247ebaf43e
Jan 20 15:42:05 compute-0 ovn_metadata_agent[160049]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 15:42:05 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:42:05.472 160071 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b5fb4ee9-fa45-4797-871a-53247ebaf43e', 'env', 'PROCESS_TAG=haproxy-b5fb4ee9-fa45-4797-871a-53247ebaf43e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b5fb4ee9-fa45-4797-871a-53247ebaf43e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 15:42:05 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:42:05.545 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=367c1a2c-b16a-4828-ab5a-626bb50023b4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '86'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:42:05 compute-0 podman[399171]: 2026-01-20 15:42:05.862804178 +0000 UTC m=+0.051315881 container create 4b8bb9a20577cb778584fe1354a35a96ba98a813c5d76d900bee9ce96acf274c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b5fb4ee9-fa45-4797-871a-53247ebaf43e, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 20 15:42:05 compute-0 systemd[1]: Started libpod-conmon-4b8bb9a20577cb778584fe1354a35a96ba98a813c5d76d900bee9ce96acf274c.scope.
Jan 20 15:42:05 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:42:05 compute-0 podman[399171]: 2026-01-20 15:42:05.835261792 +0000 UTC m=+0.023773515 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 15:42:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91677ca609e7123c34361184b4d067e312fee07dfa6acf718713befd95832db9/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 15:42:05 compute-0 podman[399171]: 2026-01-20 15:42:05.943834673 +0000 UTC m=+0.132346406 container init 4b8bb9a20577cb778584fe1354a35a96ba98a813c5d76d900bee9ce96acf274c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b5fb4ee9-fa45-4797-871a-53247ebaf43e, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 20 15:42:05 compute-0 podman[399171]: 2026-01-20 15:42:05.949180408 +0000 UTC m=+0.137692111 container start 4b8bb9a20577cb778584fe1354a35a96ba98a813c5d76d900bee9ce96acf274c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b5fb4ee9-fa45-4797-871a-53247ebaf43e, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 20 15:42:05 compute-0 neutron-haproxy-ovnmeta-b5fb4ee9-fa45-4797-871a-53247ebaf43e[399187]: [NOTICE]   (399191) : New worker (399193) forked
Jan 20 15:42:05 compute-0 neutron-haproxy-ovnmeta-b5fb4ee9-fa45-4797-871a-53247ebaf43e[399187]: [NOTICE]   (399191) : Loading success.
Jan 20 15:42:06 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:42:06 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:42:06 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:42:06.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:42:06 compute-0 nova_compute[250018]: 2026-01-20 15:42:06.377 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768923726.3772635, 858544da-d6c9-46d0-ac10-f36f6813e593 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:42:06 compute-0 nova_compute[250018]: 2026-01-20 15:42:06.378 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 858544da-d6c9-46d0-ac10-f36f6813e593] VM Started (Lifecycle Event)
Jan 20 15:42:06 compute-0 nova_compute[250018]: 2026-01-20 15:42:06.380 250022 DEBUG nova.compute.manager [None req-d5080f1f-9328-4a2d-bc36-0cfdf312e8f3 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: 858544da-d6c9-46d0-ac10-f36f6813e593] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 15:42:06 compute-0 nova_compute[250018]: 2026-01-20 15:42:06.384 250022 DEBUG nova.virt.libvirt.driver [None req-d5080f1f-9328-4a2d-bc36-0cfdf312e8f3 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: 858544da-d6c9-46d0-ac10-f36f6813e593] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 15:42:06 compute-0 nova_compute[250018]: 2026-01-20 15:42:06.391 250022 INFO nova.virt.libvirt.driver [-] [instance: 858544da-d6c9-46d0-ac10-f36f6813e593] Instance spawned successfully.
Jan 20 15:42:06 compute-0 nova_compute[250018]: 2026-01-20 15:42:06.391 250022 DEBUG nova.virt.libvirt.driver [None req-d5080f1f-9328-4a2d-bc36-0cfdf312e8f3 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: 858544da-d6c9-46d0-ac10-f36f6813e593] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 15:42:06 compute-0 nova_compute[250018]: 2026-01-20 15:42:06.400 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 858544da-d6c9-46d0-ac10-f36f6813e593] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:42:06 compute-0 nova_compute[250018]: 2026-01-20 15:42:06.403 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 858544da-d6c9-46d0-ac10-f36f6813e593] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:42:06 compute-0 nova_compute[250018]: 2026-01-20 15:42:06.416 250022 DEBUG nova.virt.libvirt.driver [None req-d5080f1f-9328-4a2d-bc36-0cfdf312e8f3 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: 858544da-d6c9-46d0-ac10-f36f6813e593] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:42:06 compute-0 nova_compute[250018]: 2026-01-20 15:42:06.417 250022 DEBUG nova.virt.libvirt.driver [None req-d5080f1f-9328-4a2d-bc36-0cfdf312e8f3 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: 858544da-d6c9-46d0-ac10-f36f6813e593] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:42:06 compute-0 nova_compute[250018]: 2026-01-20 15:42:06.418 250022 DEBUG nova.virt.libvirt.driver [None req-d5080f1f-9328-4a2d-bc36-0cfdf312e8f3 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: 858544da-d6c9-46d0-ac10-f36f6813e593] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:42:06 compute-0 nova_compute[250018]: 2026-01-20 15:42:06.418 250022 DEBUG nova.virt.libvirt.driver [None req-d5080f1f-9328-4a2d-bc36-0cfdf312e8f3 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: 858544da-d6c9-46d0-ac10-f36f6813e593] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:42:06 compute-0 nova_compute[250018]: 2026-01-20 15:42:06.419 250022 DEBUG nova.virt.libvirt.driver [None req-d5080f1f-9328-4a2d-bc36-0cfdf312e8f3 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: 858544da-d6c9-46d0-ac10-f36f6813e593] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:42:06 compute-0 nova_compute[250018]: 2026-01-20 15:42:06.419 250022 DEBUG nova.virt.libvirt.driver [None req-d5080f1f-9328-4a2d-bc36-0cfdf312e8f3 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: 858544da-d6c9-46d0-ac10-f36f6813e593] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 15:42:06 compute-0 nova_compute[250018]: 2026-01-20 15:42:06.425 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 858544da-d6c9-46d0-ac10-f36f6813e593] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 15:42:06 compute-0 nova_compute[250018]: 2026-01-20 15:42:06.425 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768923726.3797512, 858544da-d6c9-46d0-ac10-f36f6813e593 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:42:06 compute-0 nova_compute[250018]: 2026-01-20 15:42:06.425 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 858544da-d6c9-46d0-ac10-f36f6813e593] VM Paused (Lifecycle Event)
Jan 20 15:42:06 compute-0 nova_compute[250018]: 2026-01-20 15:42:06.455 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 858544da-d6c9-46d0-ac10-f36f6813e593] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:42:06 compute-0 nova_compute[250018]: 2026-01-20 15:42:06.459 250022 DEBUG nova.virt.driver [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] Emitting event <LifecycleEvent: 1768923726.3827255, 858544da-d6c9-46d0-ac10-f36f6813e593 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:42:06 compute-0 nova_compute[250018]: 2026-01-20 15:42:06.459 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 858544da-d6c9-46d0-ac10-f36f6813e593] VM Resumed (Lifecycle Event)
Jan 20 15:42:06 compute-0 nova_compute[250018]: 2026-01-20 15:42:06.477 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 858544da-d6c9-46d0-ac10-f36f6813e593] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:42:06 compute-0 nova_compute[250018]: 2026-01-20 15:42:06.479 250022 DEBUG nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 858544da-d6c9-46d0-ac10-f36f6813e593] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 15:42:06 compute-0 nova_compute[250018]: 2026-01-20 15:42:06.487 250022 INFO nova.compute.manager [None req-d5080f1f-9328-4a2d-bc36-0cfdf312e8f3 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: 858544da-d6c9-46d0-ac10-f36f6813e593] Took 12.56 seconds to spawn the instance on the hypervisor.
Jan 20 15:42:06 compute-0 nova_compute[250018]: 2026-01-20 15:42:06.488 250022 DEBUG nova.compute.manager [None req-d5080f1f-9328-4a2d-bc36-0cfdf312e8f3 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: 858544da-d6c9-46d0-ac10-f36f6813e593] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:42:06 compute-0 nova_compute[250018]: 2026-01-20 15:42:06.495 250022 INFO nova.compute.manager [None req-990fe3b0-e4c6-4f44-86fa-576a7594b353 - - - - - -] [instance: 858544da-d6c9-46d0-ac10-f36f6813e593] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 15:42:06 compute-0 nova_compute[250018]: 2026-01-20 15:42:06.633 250022 INFO nova.compute.manager [None req-d5080f1f-9328-4a2d-bc36-0cfdf312e8f3 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: 858544da-d6c9-46d0-ac10-f36f6813e593] Took 13.60 seconds to build instance.
Jan 20 15:42:06 compute-0 ceph-mon[74360]: pgmap v3597: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Jan 20 15:42:06 compute-0 nova_compute[250018]: 2026-01-20 15:42:06.649 250022 DEBUG oslo_concurrency.lockutils [None req-d5080f1f-9328-4a2d-bc36-0cfdf312e8f3 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Lock "858544da-d6c9-46d0-ac10-f36f6813e593" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.685s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:42:07 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:42:07 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:42:07 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:42:07.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:42:07 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3598: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.3 MiB/s wr, 27 op/s
Jan 20 15:42:07 compute-0 nova_compute[250018]: 2026-01-20 15:42:07.554 250022 DEBUG nova.compute.manager [req-74ffddab-0f0e-4015-ae34-9486526b7b61 req-a18ddd53-f6af-4c84-899e-fe6943f0508f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 858544da-d6c9-46d0-ac10-f36f6813e593] Received event network-vif-plugged-ce2e7728-b7f2-40ae-991c-c492681e800b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:42:07 compute-0 nova_compute[250018]: 2026-01-20 15:42:07.555 250022 DEBUG oslo_concurrency.lockutils [req-74ffddab-0f0e-4015-ae34-9486526b7b61 req-a18ddd53-f6af-4c84-899e-fe6943f0508f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "858544da-d6c9-46d0-ac10-f36f6813e593-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:42:07 compute-0 nova_compute[250018]: 2026-01-20 15:42:07.556 250022 DEBUG oslo_concurrency.lockutils [req-74ffddab-0f0e-4015-ae34-9486526b7b61 req-a18ddd53-f6af-4c84-899e-fe6943f0508f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "858544da-d6c9-46d0-ac10-f36f6813e593-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:42:07 compute-0 nova_compute[250018]: 2026-01-20 15:42:07.556 250022 DEBUG oslo_concurrency.lockutils [req-74ffddab-0f0e-4015-ae34-9486526b7b61 req-a18ddd53-f6af-4c84-899e-fe6943f0508f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "858544da-d6c9-46d0-ac10-f36f6813e593-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:42:07 compute-0 nova_compute[250018]: 2026-01-20 15:42:07.557 250022 DEBUG nova.compute.manager [req-74ffddab-0f0e-4015-ae34-9486526b7b61 req-a18ddd53-f6af-4c84-899e-fe6943f0508f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 858544da-d6c9-46d0-ac10-f36f6813e593] No waiting events found dispatching network-vif-plugged-ce2e7728-b7f2-40ae-991c-c492681e800b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:42:07 compute-0 nova_compute[250018]: 2026-01-20 15:42:07.557 250022 WARNING nova.compute.manager [req-74ffddab-0f0e-4015-ae34-9486526b7b61 req-a18ddd53-f6af-4c84-899e-fe6943f0508f 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 858544da-d6c9-46d0-ac10-f36f6813e593] Received unexpected event network-vif-plugged-ce2e7728-b7f2-40ae-991c-c492681e800b for instance with vm_state active and task_state None.
Jan 20 15:42:07 compute-0 nova_compute[250018]: 2026-01-20 15:42:07.655 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:42:08 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:42:08 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:42:08 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:42:08.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:42:08 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:42:08 compute-0 ceph-mon[74360]: pgmap v3598: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.3 MiB/s wr, 27 op/s
Jan 20 15:42:09 compute-0 nova_compute[250018]: 2026-01-20 15:42:09.092 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:42:09 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:42:09 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:42:09 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:42:09.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:42:09 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3599: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 508 KiB/s rd, 477 KiB/s wr, 33 op/s
Jan 20 15:42:10 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:42:10 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:42:10 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:42:10.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:42:10 compute-0 ceph-mon[74360]: pgmap v3599: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 508 KiB/s rd, 477 KiB/s wr, 33 op/s
Jan 20 15:42:11 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:42:11 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:42:11 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:42:11.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:42:11 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3600: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 20 15:42:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 15:42:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:42:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 20 15:42:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:42:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0009958283896333519 of space, bias 1.0, pg target 0.2987485168900056 quantized to 32 (current 32)
Jan 20 15:42:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:42:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021614147124511445 of space, bias 1.0, pg target 0.6484244137353433 quantized to 32 (current 32)
Jan 20 15:42:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:42:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 20 15:42:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:42:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 20 15:42:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:42:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 20 15:42:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:42:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 15:42:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:42:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 20 15:42:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:42:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 20 15:42:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:42:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 15:42:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:42:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 20 15:42:12 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:42:12 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:42:12 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:42:12.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:42:12 compute-0 nova_compute[250018]: 2026-01-20 15:42:12.706 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:42:12 compute-0 ceph-mon[74360]: pgmap v3600: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 20 15:42:12 compute-0 sudo[399248]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:42:12 compute-0 sudo[399248]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:42:12 compute-0 sudo[399248]: pam_unix(sudo:session): session closed for user root
Jan 20 15:42:12 compute-0 sudo[399273]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:42:12 compute-0 sudo[399273]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:42:12 compute-0 sudo[399273]: pam_unix(sudo:session): session closed for user root
Jan 20 15:42:13 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:42:13 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:42:13 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:42:13.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:42:13 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3601: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 20 15:42:13 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:42:13 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 20 15:42:13 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2189209461' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:42:13 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 20 15:42:13 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2189209461' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:42:13 compute-0 ceph-mon[74360]: pgmap v3601: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 20 15:42:13 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/2189209461' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:42:13 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/2189209461' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:42:13 compute-0 NetworkManager[48960]: <info>  [1768923733.7206] manager: (patch-br-int-to-provnet-b62c391b-f7a3-4a38-a0df-72ac0383ca74): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/397)
Jan 20 15:42:13 compute-0 NetworkManager[48960]: <info>  [1768923733.7213] manager: (patch-provnet-b62c391b-f7a3-4a38-a0df-72ac0383ca74-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/398)
Jan 20 15:42:13 compute-0 ovn_controller[148666]: 2026-01-20T15:42:13Z|00813|binding|INFO|Releasing lport 92b999b0-5595-47ff-ac54-cb52d2ba58ba from this chassis (sb_readonly=0)
Jan 20 15:42:13 compute-0 nova_compute[250018]: 2026-01-20 15:42:13.723 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:42:13 compute-0 ovn_controller[148666]: 2026-01-20T15:42:13Z|00814|binding|INFO|Releasing lport 92b999b0-5595-47ff-ac54-cb52d2ba58ba from this chassis (sb_readonly=0)
Jan 20 15:42:13 compute-0 nova_compute[250018]: 2026-01-20 15:42:13.805 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:42:14 compute-0 nova_compute[250018]: 2026-01-20 15:42:14.093 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:42:14 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:42:14 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:42:14 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:42:14.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:42:14 compute-0 nova_compute[250018]: 2026-01-20 15:42:14.602 250022 DEBUG nova.compute.manager [req-29b5bc6c-7b83-4fb9-8ee8-aae5d2b6c4fb req-427bcba5-5ec8-4387-861d-1d4fd18b19a8 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 858544da-d6c9-46d0-ac10-f36f6813e593] Received event network-changed-ce2e7728-b7f2-40ae-991c-c492681e800b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:42:14 compute-0 nova_compute[250018]: 2026-01-20 15:42:14.602 250022 DEBUG nova.compute.manager [req-29b5bc6c-7b83-4fb9-8ee8-aae5d2b6c4fb req-427bcba5-5ec8-4387-861d-1d4fd18b19a8 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 858544da-d6c9-46d0-ac10-f36f6813e593] Refreshing instance network info cache due to event network-changed-ce2e7728-b7f2-40ae-991c-c492681e800b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 15:42:14 compute-0 nova_compute[250018]: 2026-01-20 15:42:14.602 250022 DEBUG oslo_concurrency.lockutils [req-29b5bc6c-7b83-4fb9-8ee8-aae5d2b6c4fb req-427bcba5-5ec8-4387-861d-1d4fd18b19a8 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-858544da-d6c9-46d0-ac10-f36f6813e593" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:42:14 compute-0 nova_compute[250018]: 2026-01-20 15:42:14.603 250022 DEBUG oslo_concurrency.lockutils [req-29b5bc6c-7b83-4fb9-8ee8-aae5d2b6c4fb req-427bcba5-5ec8-4387-861d-1d4fd18b19a8 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-858544da-d6c9-46d0-ac10-f36f6813e593" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:42:14 compute-0 nova_compute[250018]: 2026-01-20 15:42:14.603 250022 DEBUG nova.network.neutron [req-29b5bc6c-7b83-4fb9-8ee8-aae5d2b6c4fb req-427bcba5-5ec8-4387-861d-1d4fd18b19a8 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 858544da-d6c9-46d0-ac10-f36f6813e593] Refreshing network info cache for port ce2e7728-b7f2-40ae-991c-c492681e800b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 15:42:15 compute-0 nova_compute[250018]: 2026-01-20 15:42:15.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:42:15 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:42:15 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:42:15 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:42:15.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:42:15 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3602: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 20 15:42:16 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:42:16 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:42:16 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:42:16.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:42:16 compute-0 nova_compute[250018]: 2026-01-20 15:42:16.313 250022 DEBUG nova.network.neutron [req-29b5bc6c-7b83-4fb9-8ee8-aae5d2b6c4fb req-427bcba5-5ec8-4387-861d-1d4fd18b19a8 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 858544da-d6c9-46d0-ac10-f36f6813e593] Updated VIF entry in instance network info cache for port ce2e7728-b7f2-40ae-991c-c492681e800b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 15:42:16 compute-0 nova_compute[250018]: 2026-01-20 15:42:16.313 250022 DEBUG nova.network.neutron [req-29b5bc6c-7b83-4fb9-8ee8-aae5d2b6c4fb req-427bcba5-5ec8-4387-861d-1d4fd18b19a8 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 858544da-d6c9-46d0-ac10-f36f6813e593] Updating instance_info_cache with network_info: [{"id": "ce2e7728-b7f2-40ae-991c-c492681e800b", "address": "fa:16:3e:36:61:a3", "network": {"id": "b5fb4ee9-fa45-4797-871a-53247ebaf43e", "bridge": "br-int", "label": "tempest-network-smoke--639703376", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.224", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "728662ec7f654a3fb2e53a90b8707d7e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce2e7728-b7", "ovs_interfaceid": "ce2e7728-b7f2-40ae-991c-c492681e800b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:42:16 compute-0 nova_compute[250018]: 2026-01-20 15:42:16.436 250022 DEBUG oslo_concurrency.lockutils [req-29b5bc6c-7b83-4fb9-8ee8-aae5d2b6c4fb req-427bcba5-5ec8-4387-861d-1d4fd18b19a8 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-858544da-d6c9-46d0-ac10-f36f6813e593" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:42:16 compute-0 podman[399302]: 2026-01-20 15:42:16.487124556 +0000 UTC m=+0.060362315 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 20 15:42:16 compute-0 podman[399301]: 2026-01-20 15:42:16.519329339 +0000 UTC m=+0.092567378 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 20 15:42:16 compute-0 ceph-mon[74360]: pgmap v3602: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 20 15:42:17 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:42:17 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:42:17 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:42:17.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:42:17 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3603: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 71 op/s
Jan 20 15:42:17 compute-0 nova_compute[250018]: 2026-01-20 15:42:17.708 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:42:18 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:42:18 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:42:18 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:42:18.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:42:18 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:42:18 compute-0 ceph-mon[74360]: pgmap v3603: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 71 op/s
Jan 20 15:42:19 compute-0 nova_compute[250018]: 2026-01-20 15:42:19.097 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:42:19 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:42:19 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:42:19 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:42:19.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:42:19 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3604: 321 pgs: 321 active+clean; 170 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 206 KiB/s wr, 75 op/s
Jan 20 15:42:20 compute-0 nova_compute[250018]: 2026-01-20 15:42:20.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:42:20 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:42:20 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:42:20 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:42:20.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:42:20 compute-0 ceph-mon[74360]: pgmap v3604: 321 pgs: 321 active+clean; 170 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 206 KiB/s wr, 75 op/s
Jan 20 15:42:20 compute-0 ovn_controller[148666]: 2026-01-20T15:42:20Z|00114|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:36:61:a3 10.100.0.9
Jan 20 15:42:20 compute-0 ovn_controller[148666]: 2026-01-20T15:42:20Z|00115|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:36:61:a3 10.100.0.9
Jan 20 15:42:21 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:42:21 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:42:21 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:42:21.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:42:21 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3605: 321 pgs: 321 active+clean; 198 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 2.1 MiB/s wr, 111 op/s
Jan 20 15:42:22 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:42:22 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:42:22 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:42:22.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:42:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:42:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:42:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:42:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:42:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:42:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:42:22 compute-0 ceph-mon[74360]: pgmap v3605: 321 pgs: 321 active+clean; 198 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 2.1 MiB/s wr, 111 op/s
Jan 20 15:42:22 compute-0 nova_compute[250018]: 2026-01-20 15:42:22.750 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:42:23 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:42:23 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:42:23 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:42:23.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:42:23 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3606: 321 pgs: 321 active+clean; 198 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 347 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Jan 20 15:42:23 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:42:23 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/1871014464' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:42:24 compute-0 nova_compute[250018]: 2026-01-20 15:42:24.100 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:42:24 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:42:24 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:42:24 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:42:24.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:42:24 compute-0 ceph-mon[74360]: pgmap v3606: 321 pgs: 321 active+clean; 198 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 347 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Jan 20 15:42:24 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/1603594978' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:42:25 compute-0 nova_compute[250018]: 2026-01-20 15:42:25.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:42:25 compute-0 nova_compute[250018]: 2026-01-20 15:42:25.052 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 15:42:25 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:42:25 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:42:25 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:42:25.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:42:25 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3607: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 390 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 20 15:42:26 compute-0 nova_compute[250018]: 2026-01-20 15:42:26.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:42:26 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:42:26 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:42:26 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:42:26.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:42:26 compute-0 nova_compute[250018]: 2026-01-20 15:42:26.236 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:42:26 compute-0 nova_compute[250018]: 2026-01-20 15:42:26.237 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:42:26 compute-0 nova_compute[250018]: 2026-01-20 15:42:26.237 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:42:26 compute-0 nova_compute[250018]: 2026-01-20 15:42:26.237 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 15:42:26 compute-0 nova_compute[250018]: 2026-01-20 15:42:26.237 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:42:26 compute-0 ceph-mon[74360]: pgmap v3607: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 390 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 20 15:42:26 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:42:26 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2416527839' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:42:26 compute-0 nova_compute[250018]: 2026-01-20 15:42:26.649 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.412s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:42:27 compute-0 nova_compute[250018]: 2026-01-20 15:42:27.004 250022 DEBUG nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] skipping disk for instance-000000da as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 15:42:27 compute-0 nova_compute[250018]: 2026-01-20 15:42:27.004 250022 DEBUG nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] skipping disk for instance-000000da as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 15:42:27 compute-0 nova_compute[250018]: 2026-01-20 15:42:27.166 250022 WARNING nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:42:27 compute-0 nova_compute[250018]: 2026-01-20 15:42:27.167 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3974MB free_disk=20.942886352539062GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 15:42:27 compute-0 nova_compute[250018]: 2026-01-20 15:42:27.168 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:42:27 compute-0 nova_compute[250018]: 2026-01-20 15:42:27.168 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:42:27 compute-0 nova_compute[250018]: 2026-01-20 15:42:27.290 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Instance 858544da-d6c9-46d0-ac10-f36f6813e593 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 15:42:27 compute-0 nova_compute[250018]: 2026-01-20 15:42:27.290 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 15:42:27 compute-0 nova_compute[250018]: 2026-01-20 15:42:27.290 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 15:42:27 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:42:27 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:42:27 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:42:27.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:42:27 compute-0 nova_compute[250018]: 2026-01-20 15:42:27.421 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:42:27 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3608: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 390 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 20 15:42:27 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/2416527839' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:42:27 compute-0 nova_compute[250018]: 2026-01-20 15:42:27.752 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:42:27 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:42:27 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/607743711' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:42:27 compute-0 nova_compute[250018]: 2026-01-20 15:42:27.889 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:42:27 compute-0 nova_compute[250018]: 2026-01-20 15:42:27.896 250022 DEBUG nova.compute.provider_tree [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:42:27 compute-0 nova_compute[250018]: 2026-01-20 15:42:27.918 250022 DEBUG nova.scheduler.client.report [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:42:27 compute-0 nova_compute[250018]: 2026-01-20 15:42:27.938 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 15:42:27 compute-0 nova_compute[250018]: 2026-01-20 15:42:27.938 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.770s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:42:28 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:42:28 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:42:28 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:42:28.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:42:28 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:42:28 compute-0 ceph-mon[74360]: pgmap v3608: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 390 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 20 15:42:28 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/1965861902' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:42:28 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/607743711' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:42:28 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/3529525207' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:42:28 compute-0 nova_compute[250018]: 2026-01-20 15:42:28.940 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:42:29 compute-0 nova_compute[250018]: 2026-01-20 15:42:29.101 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:42:29 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:42:29 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:42:29 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:42:29.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:42:29 compute-0 sshd-session[399376]: Invalid user guest from 134.122.57.138 port 46224
Jan 20 15:42:29 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3609: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 390 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 20 15:42:29 compute-0 sshd-session[399376]: Connection closed by invalid user guest 134.122.57.138 port 46224 [preauth]
Jan 20 15:42:30 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:42:30 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:42:30 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:42:30.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:42:30 compute-0 ceph-mon[74360]: pgmap v3609: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 390 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 20 15:42:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:42:30.816 160071 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:42:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:42:30.817 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:42:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:42:30.817 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:42:31 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:42:31 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:42:31 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:42:31.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:42:31 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3610: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 356 KiB/s rd, 1.9 MiB/s wr, 60 op/s
Jan 20 15:42:32 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:42:32 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:42:32 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:42:32.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:42:32 compute-0 ceph-mon[74360]: pgmap v3610: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 356 KiB/s rd, 1.9 MiB/s wr, 60 op/s
Jan 20 15:42:32 compute-0 nova_compute[250018]: 2026-01-20 15:42:32.756 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:42:33 compute-0 nova_compute[250018]: 2026-01-20 15:42:33.045 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:42:33 compute-0 nova_compute[250018]: 2026-01-20 15:42:33.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:42:33 compute-0 sudo[399403]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:42:33 compute-0 sudo[399403]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:42:33 compute-0 sudo[399403]: pam_unix(sudo:session): session closed for user root
Jan 20 15:42:33 compute-0 sudo[399428]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:42:33 compute-0 sudo[399428]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:42:33 compute-0 sudo[399428]: pam_unix(sudo:session): session closed for user root
Jan 20 15:42:33 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:42:33 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:42:33 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:42:33.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:42:33 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3611: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 43 KiB/s rd, 16 KiB/s wr, 5 op/s
Jan 20 15:42:33 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:42:34 compute-0 nova_compute[250018]: 2026-01-20 15:42:34.103 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:42:34 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:42:34 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:42:34 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:42:34.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:42:34 compute-0 ceph-mon[74360]: pgmap v3611: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 43 KiB/s rd, 16 KiB/s wr, 5 op/s
Jan 20 15:42:35 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:42:35 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:42:35 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:42:35.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:42:35 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3612: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 44 KiB/s rd, 19 KiB/s wr, 5 op/s
Jan 20 15:42:35 compute-0 ceph-mon[74360]: pgmap v3612: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 44 KiB/s rd, 19 KiB/s wr, 5 op/s
Jan 20 15:42:36 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:42:36 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:42:36 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:42:36.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:42:37 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:42:37 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:42:37 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:42:37.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:42:37 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3613: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 682 B/s rd, 15 KiB/s wr, 0 op/s
Jan 20 15:42:37 compute-0 nova_compute[250018]: 2026-01-20 15:42:37.805 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:42:38 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:42:38 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:42:38 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:42:38.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:42:38 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:42:38 compute-0 ceph-mon[74360]: pgmap v3613: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 682 B/s rd, 15 KiB/s wr, 0 op/s
Jan 20 15:42:38 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/2809331689' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:42:39 compute-0 nova_compute[250018]: 2026-01-20 15:42:39.105 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:42:39 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:42:39 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:42:39 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:42:39.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:42:39 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3614: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 682 B/s rd, 15 KiB/s wr, 0 op/s
Jan 20 15:42:39 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:42:39.792 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=87, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=86) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:42:39 compute-0 nova_compute[250018]: 2026-01-20 15:42:39.793 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:42:39 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:42:39.793 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 15:42:40 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:42:40 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:42:40 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:42:40.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:42:40 compute-0 ceph-mon[74360]: pgmap v3614: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 682 B/s rd, 15 KiB/s wr, 0 op/s
Jan 20 15:42:41 compute-0 nova_compute[250018]: 2026-01-20 15:42:41.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:42:41 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:42:41 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:42:41 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:42:41.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:42:41 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3615: 321 pgs: 321 active+clean; 229 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 11 KiB/s rd, 1004 KiB/s wr, 14 op/s
Jan 20 15:42:42 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:42:42 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:42:42 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:42:42.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:42:42 compute-0 ceph-mon[74360]: pgmap v3615: 321 pgs: 321 active+clean; 229 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 11 KiB/s rd, 1004 KiB/s wr, 14 op/s
Jan 20 15:42:42 compute-0 nova_compute[250018]: 2026-01-20 15:42:42.851 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:42:43 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:42:43 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:42:43 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:42:43.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:42:43 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3616: 321 pgs: 321 active+clean; 229 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 11 KiB/s rd, 993 KiB/s wr, 14 op/s
Jan 20 15:42:43 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:42:43 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/2863498301' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:42:43 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/2419632345' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:42:44 compute-0 nova_compute[250018]: 2026-01-20 15:42:44.106 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:42:44 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:42:44 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:42:44 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:42:44.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:42:44 compute-0 ceph-mon[74360]: pgmap v3616: 321 pgs: 321 active+clean; 229 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 11 KiB/s rd, 993 KiB/s wr, 14 op/s
Jan 20 15:42:45 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:42:45 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:42:45 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:42:45.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:42:45 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3617: 321 pgs: 321 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Jan 20 15:42:46 compute-0 nova_compute[250018]: 2026-01-20 15:42:46.052 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:42:46 compute-0 nova_compute[250018]: 2026-01-20 15:42:46.052 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 15:42:46 compute-0 nova_compute[250018]: 2026-01-20 15:42:46.052 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 15:42:46 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:42:46 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:42:46 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:42:46.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:42:46 compute-0 ceph-mon[74360]: pgmap v3617: 321 pgs: 321 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Jan 20 15:42:47 compute-0 nova_compute[250018]: 2026-01-20 15:42:47.099 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "refresh_cache-858544da-d6c9-46d0-ac10-f36f6813e593" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:42:47 compute-0 nova_compute[250018]: 2026-01-20 15:42:47.099 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquired lock "refresh_cache-858544da-d6c9-46d0-ac10-f36f6813e593" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:42:47 compute-0 nova_compute[250018]: 2026-01-20 15:42:47.100 250022 DEBUG nova.network.neutron [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] [instance: 858544da-d6c9-46d0-ac10-f36f6813e593] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 20 15:42:47 compute-0 nova_compute[250018]: 2026-01-20 15:42:47.100 250022 DEBUG nova.objects.instance [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 858544da-d6c9-46d0-ac10-f36f6813e593 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:42:47 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:42:47 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:42:47 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:42:47.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:42:47 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3618: 321 pgs: 321 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 231 KiB/s rd, 1.8 MiB/s wr, 43 op/s
Jan 20 15:42:47 compute-0 podman[399461]: 2026-01-20 15:42:47.483877112 +0000 UTC m=+0.065626129 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible)
Jan 20 15:42:47 compute-0 podman[399460]: 2026-01-20 15:42:47.510960985 +0000 UTC m=+0.097845101 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 20 15:42:47 compute-0 nova_compute[250018]: 2026-01-20 15:42:47.861 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:42:48 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:42:48 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:42:48 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:42:48.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:42:48 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:42:48 compute-0 ceph-mon[74360]: pgmap v3618: 321 pgs: 321 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 231 KiB/s rd, 1.8 MiB/s wr, 43 op/s
Jan 20 15:42:49 compute-0 nova_compute[250018]: 2026-01-20 15:42:49.108 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:42:49 compute-0 nova_compute[250018]: 2026-01-20 15:42:49.335 250022 DEBUG nova.network.neutron [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] [instance: 858544da-d6c9-46d0-ac10-f36f6813e593] Updating instance_info_cache with network_info: [{"id": "ce2e7728-b7f2-40ae-991c-c492681e800b", "address": "fa:16:3e:36:61:a3", "network": {"id": "b5fb4ee9-fa45-4797-871a-53247ebaf43e", "bridge": "br-int", "label": "tempest-network-smoke--639703376", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.224", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "728662ec7f654a3fb2e53a90b8707d7e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce2e7728-b7", "ovs_interfaceid": "ce2e7728-b7f2-40ae-991c-c492681e800b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:42:49 compute-0 nova_compute[250018]: 2026-01-20 15:42:49.347 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Releasing lock "refresh_cache-858544da-d6c9-46d0-ac10-f36f6813e593" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:42:49 compute-0 nova_compute[250018]: 2026-01-20 15:42:49.348 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] [instance: 858544da-d6c9-46d0-ac10-f36f6813e593] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 20 15:42:49 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:42:49 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:42:49 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:42:49.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:42:49 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3619: 321 pgs: 321 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 1.8 MiB/s wr, 71 op/s
Jan 20 15:42:49 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:42:49.795 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=367c1a2c-b16a-4828-ab5a-626bb50023b4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '87'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:42:50 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:42:50 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:42:50 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:42:50.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:42:50 compute-0 ceph-mon[74360]: pgmap v3619: 321 pgs: 321 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 1.8 MiB/s wr, 71 op/s
Jan 20 15:42:51 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:42:51 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:42:51 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:42:51.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:42:51 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3620: 321 pgs: 321 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Jan 20 15:42:52 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:42:52 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:42:52 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:42:52.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:42:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:42:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:42:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:42:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:42:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:42:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:42:52 compute-0 ceph-mon[74360]: pgmap v3620: 321 pgs: 321 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Jan 20 15:42:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Optimize plan auto_2026-01-20_15:42:52
Jan 20 15:42:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 15:42:52 compute-0 ceph-mgr[74653]: [balancer INFO root] do_upmap
Jan 20 15:42:52 compute-0 ceph-mgr[74653]: [balancer INFO root] pools ['volumes', 'vms', 'backups', 'default.rgw.log', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'images', 'default.rgw.control', 'default.rgw.meta', '.rgw.root', '.mgr']
Jan 20 15:42:52 compute-0 ceph-mgr[74653]: [balancer INFO root] prepared 0/10 changes
Jan 20 15:42:52 compute-0 nova_compute[250018]: 2026-01-20 15:42:52.864 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:42:53 compute-0 sudo[399510]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:42:53 compute-0 sudo[399510]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:42:53 compute-0 sudo[399510]: pam_unix(sudo:session): session closed for user root
Jan 20 15:42:53 compute-0 sudo[399535]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:42:53 compute-0 sudo[399535]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:42:53 compute-0 sudo[399535]: pam_unix(sudo:session): session closed for user root
Jan 20 15:42:53 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:42:53 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:42:53 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:42:53.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:42:53 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3621: 321 pgs: 321 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 840 KiB/s wr, 87 op/s
Jan 20 15:42:53 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:42:54 compute-0 nova_compute[250018]: 2026-01-20 15:42:54.150 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:42:54 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:42:54 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:42:54 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:42:54.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:42:54 compute-0 ceph-mon[74360]: pgmap v3621: 321 pgs: 321 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 840 KiB/s wr, 87 op/s
Jan 20 15:42:55 compute-0 sudo[399561]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:42:55 compute-0 sudo[399561]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:42:55 compute-0 sudo[399561]: pam_unix(sudo:session): session closed for user root
Jan 20 15:42:55 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:42:55 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:42:55 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:42:55.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:42:55 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3622: 321 pgs: 321 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 844 KiB/s wr, 88 op/s
Jan 20 15:42:55 compute-0 sudo[399586]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:42:55 compute-0 sudo[399586]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:42:55 compute-0 sudo[399586]: pam_unix(sudo:session): session closed for user root
Jan 20 15:42:55 compute-0 sudo[399611]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:42:55 compute-0 sudo[399611]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:42:55 compute-0 sudo[399611]: pam_unix(sudo:session): session closed for user root
Jan 20 15:42:55 compute-0 sudo[399636]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Jan 20 15:42:55 compute-0 sudo[399636]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:42:56 compute-0 podman[399734]: 2026-01-20 15:42:56.104309626 +0000 UTC m=+0.070565993 container exec a602f19ce9ef2d4922e558036fcbd51fac25abd19d28d7df857e5fbe8f959ba3 (image=quay.io/ceph/ceph:v18, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mon-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 15:42:56 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:42:56 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:42:56 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:42:56.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:42:56 compute-0 podman[399734]: 2026-01-20 15:42:56.209235227 +0000 UTC m=+0.175491594 container exec_died a602f19ce9ef2d4922e558036fcbd51fac25abd19d28d7df857e5fbe8f959ba3 (image=quay.io/ceph/ceph:v18, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 20 15:42:56 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 20 15:42:56 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:42:56 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 20 15:42:56 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:42:56 compute-0 ceph-mon[74360]: pgmap v3622: 321 pgs: 321 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 844 KiB/s wr, 88 op/s
Jan 20 15:42:56 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:42:56 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:42:56 compute-0 podman[399888]: 2026-01-20 15:42:56.847348501 +0000 UTC m=+0.058115325 container exec a2973a514c852ff316e6ad2ff84585210b4ad01d86cdf2de06f634d9390a88e8 (image=quay.io/ceph/haproxy:2.3, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-haproxy-rgw-default-compute-0-nqkboe)
Jan 20 15:42:56 compute-0 podman[399888]: 2026-01-20 15:42:56.858673307 +0000 UTC m=+0.069440131 container exec_died a2973a514c852ff316e6ad2ff84585210b4ad01d86cdf2de06f634d9390a88e8 (image=quay.io/ceph/haproxy:2.3, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-haproxy-rgw-default-compute-0-nqkboe)
Jan 20 15:42:57 compute-0 podman[399952]: 2026-01-20 15:42:57.055130649 +0000 UTC m=+0.047003844 container exec 0c2042652fe8d88ae47b6333c235a533de4d966f44da3b69a5fc82baeacb10bf (image=quay.io/ceph/keepalived:2.2.4, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-keepalived-rgw-default-compute-0-gcjsxe, vendor=Red Hat, Inc., architecture=x86_64, distribution-scope=public, io.openshift.tags=Ceph keepalived, version=2.2.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2023-02-22T09:23:20, io.openshift.expose-services=, vcs-type=git, name=keepalived, description=keepalived for Ceph, com.redhat.component=keepalived-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.buildah.version=1.28.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, release=1793, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.display-name=Keepalived on RHEL 9)
Jan 20 15:42:57 compute-0 podman[399952]: 2026-01-20 15:42:57.070417533 +0000 UTC m=+0.062290698 container exec_died 0c2042652fe8d88ae47b6333c235a533de4d966f44da3b69a5fc82baeacb10bf (image=quay.io/ceph/keepalived:2.2.4, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-keepalived-rgw-default-compute-0-gcjsxe, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.buildah.version=1.28.2, version=2.2.4, description=keepalived for Ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, com.redhat.component=keepalived-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1793, architecture=x86_64, io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, io.openshift.expose-services=, vendor=Red Hat, Inc., distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, build-date=2023-02-22T09:23:20, io.k8s.display-name=Keepalived on RHEL 9, summary=Provides keepalived on RHEL 9 for Ceph.)
Jan 20 15:42:57 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 20 15:42:57 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:42:57 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 20 15:42:57 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:42:57 compute-0 sudo[399636]: pam_unix(sudo:session): session closed for user root
Jan 20 15:42:57 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 15:42:57 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:42:57 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 15:42:57 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:42:57 compute-0 sudo[399987]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:42:57 compute-0 sudo[399987]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:42:57 compute-0 sudo[399987]: pam_unix(sudo:session): session closed for user root
Jan 20 15:42:57 compute-0 sudo[400012]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:42:57 compute-0 sudo[400012]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:42:57 compute-0 sudo[400012]: pam_unix(sudo:session): session closed for user root
Jan 20 15:42:57 compute-0 sudo[400037]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:42:57 compute-0 sudo[400037]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:42:57 compute-0 sudo[400037]: pam_unix(sudo:session): session closed for user root
Jan 20 15:42:57 compute-0 sudo[400062]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 15:42:57 compute-0 sudo[400062]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:42:57 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:42:57 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:42:57 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:42:57.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:42:57 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3623: 321 pgs: 321 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 5.1 KiB/s wr, 72 op/s
Jan 20 15:42:57 compute-0 nova_compute[250018]: 2026-01-20 15:42:57.896 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:42:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 15:42:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 15:42:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 15:42:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 15:42:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 15:42:57 compute-0 sudo[400062]: pam_unix(sudo:session): session closed for user root
Jan 20 15:42:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 15:42:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 15:42:57 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 15:42:58 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 15:42:58 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 15:42:58 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 15:42:58 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:42:58 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 20 15:42:58 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 15:42:58 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 20 15:42:58 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:42:58 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 2d633aae-6619-4b48-95bc-eb0d0f8298dd does not exist
Jan 20 15:42:58 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 5b9a1627-29c0-46de-9461-7efa050e4a15 does not exist
Jan 20 15:42:58 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 5f0462a9-ffe8-4b69-b7b2-3a9f16c1f563 does not exist
Jan 20 15:42:58 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 20 15:42:58 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 15:42:58 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 20 15:42:58 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 15:42:58 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 15:42:58 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:42:58 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:42:58 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:42:58 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:42:58 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:42:58 compute-0 ceph-mon[74360]: pgmap v3623: 321 pgs: 321 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 5.1 KiB/s wr, 72 op/s
Jan 20 15:42:58 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:42:58 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 15:42:58 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:42:58 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 15:42:58 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 15:42:58 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:42:58 compute-0 sudo[400119]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:42:58 compute-0 sudo[400119]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:42:58 compute-0 sudo[400119]: pam_unix(sudo:session): session closed for user root
Jan 20 15:42:58 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:42:58 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:42:58 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:42:58.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:42:58 compute-0 sudo[400144]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:42:58 compute-0 sudo[400144]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:42:58 compute-0 sudo[400144]: pam_unix(sudo:session): session closed for user root
Jan 20 15:42:58 compute-0 sudo[400169]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:42:58 compute-0 sudo[400169]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:42:58 compute-0 sudo[400169]: pam_unix(sudo:session): session closed for user root
Jan 20 15:42:58 compute-0 sudo[400194]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 15:42:58 compute-0 sudo[400194]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:42:58 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:42:58 compute-0 podman[400260]: 2026-01-20 15:42:58.618310029 +0000 UTC m=+0.036985662 container create 3f8ddda326d191a10ccd6e0b2b6748e2467ef958765b0f756e0aed05eb9bfe08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_rosalind, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 20 15:42:58 compute-0 systemd[1]: Started libpod-conmon-3f8ddda326d191a10ccd6e0b2b6748e2467ef958765b0f756e0aed05eb9bfe08.scope.
Jan 20 15:42:58 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:42:58 compute-0 podman[400260]: 2026-01-20 15:42:58.602294086 +0000 UTC m=+0.020969729 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:42:58 compute-0 podman[400260]: 2026-01-20 15:42:58.707341201 +0000 UTC m=+0.126016834 container init 3f8ddda326d191a10ccd6e0b2b6748e2467ef958765b0f756e0aed05eb9bfe08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_rosalind, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507)
Jan 20 15:42:58 compute-0 podman[400260]: 2026-01-20 15:42:58.715101321 +0000 UTC m=+0.133776954 container start 3f8ddda326d191a10ccd6e0b2b6748e2467ef958765b0f756e0aed05eb9bfe08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_rosalind, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 15:42:58 compute-0 podman[400260]: 2026-01-20 15:42:58.717946458 +0000 UTC m=+0.136622091 container attach 3f8ddda326d191a10ccd6e0b2b6748e2467ef958765b0f756e0aed05eb9bfe08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_rosalind, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:42:58 compute-0 interesting_rosalind[400276]: 167 167
Jan 20 15:42:58 compute-0 systemd[1]: libpod-3f8ddda326d191a10ccd6e0b2b6748e2467ef958765b0f756e0aed05eb9bfe08.scope: Deactivated successfully.
Jan 20 15:42:58 compute-0 conmon[400276]: conmon 3f8ddda326d191a10ccd <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3f8ddda326d191a10ccd6e0b2b6748e2467ef958765b0f756e0aed05eb9bfe08.scope/container/memory.events
Jan 20 15:42:58 compute-0 podman[400260]: 2026-01-20 15:42:58.722782959 +0000 UTC m=+0.141458592 container died 3f8ddda326d191a10ccd6e0b2b6748e2467ef958765b0f756e0aed05eb9bfe08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_rosalind, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True)
Jan 20 15:42:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-eb6c88bb0ff7c60a9f53a8999eb8b914a9c0b0f579a81394707f5d1c6ff9f3d6-merged.mount: Deactivated successfully.
Jan 20 15:42:58 compute-0 podman[400260]: 2026-01-20 15:42:58.761182629 +0000 UTC m=+0.179858262 container remove 3f8ddda326d191a10ccd6e0b2b6748e2467ef958765b0f756e0aed05eb9bfe08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_rosalind, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 20 15:42:58 compute-0 systemd[1]: libpod-conmon-3f8ddda326d191a10ccd6e0b2b6748e2467ef958765b0f756e0aed05eb9bfe08.scope: Deactivated successfully.
Jan 20 15:42:58 compute-0 podman[400300]: 2026-01-20 15:42:58.952307356 +0000 UTC m=+0.056803780 container create 4b2ce2a9e7908cffecc9c56006fbe24eefca5ca1862dc00aa3b2e286fda54412 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_zhukovsky, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 15:42:58 compute-0 systemd[1]: Started libpod-conmon-4b2ce2a9e7908cffecc9c56006fbe24eefca5ca1862dc00aa3b2e286fda54412.scope.
Jan 20 15:42:59 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:42:59 compute-0 podman[400300]: 2026-01-20 15:42:58.927697729 +0000 UTC m=+0.032194183 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:42:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4edb1d2e9745991022ce2af5f11ce3e56331a855abce35035b10939aa8b445b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 15:42:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4edb1d2e9745991022ce2af5f11ce3e56331a855abce35035b10939aa8b445b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 15:42:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4edb1d2e9745991022ce2af5f11ce3e56331a855abce35035b10939aa8b445b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 15:42:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4edb1d2e9745991022ce2af5f11ce3e56331a855abce35035b10939aa8b445b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 15:42:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4edb1d2e9745991022ce2af5f11ce3e56331a855abce35035b10939aa8b445b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 15:42:59 compute-0 podman[400300]: 2026-01-20 15:42:59.039367524 +0000 UTC m=+0.143863968 container init 4b2ce2a9e7908cffecc9c56006fbe24eefca5ca1862dc00aa3b2e286fda54412 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_zhukovsky, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 20 15:42:59 compute-0 podman[400300]: 2026-01-20 15:42:59.046271961 +0000 UTC m=+0.150768365 container start 4b2ce2a9e7908cffecc9c56006fbe24eefca5ca1862dc00aa3b2e286fda54412 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_zhukovsky, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 20 15:42:59 compute-0 podman[400300]: 2026-01-20 15:42:59.049710064 +0000 UTC m=+0.154206558 container attach 4b2ce2a9e7908cffecc9c56006fbe24eefca5ca1862dc00aa3b2e286fda54412 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_zhukovsky, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 15:42:59 compute-0 nova_compute[250018]: 2026-01-20 15:42:59.191 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:42:59 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:42:59 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:42:59 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:42:59.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:42:59 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3624: 321 pgs: 321 active+clean; 260 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.1 MiB/s wr, 85 op/s
Jan 20 15:42:59 compute-0 elastic_zhukovsky[400316]: --> passed data devices: 0 physical, 1 LVM
Jan 20 15:42:59 compute-0 elastic_zhukovsky[400316]: --> relative data size: 1.0
Jan 20 15:42:59 compute-0 elastic_zhukovsky[400316]: --> All data devices are unavailable
Jan 20 15:42:59 compute-0 systemd[1]: libpod-4b2ce2a9e7908cffecc9c56006fbe24eefca5ca1862dc00aa3b2e286fda54412.scope: Deactivated successfully.
Jan 20 15:42:59 compute-0 podman[400300]: 2026-01-20 15:42:59.90498355 +0000 UTC m=+1.009479994 container died 4b2ce2a9e7908cffecc9c56006fbe24eefca5ca1862dc00aa3b2e286fda54412 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_zhukovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 20 15:42:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-a4edb1d2e9745991022ce2af5f11ce3e56331a855abce35035b10939aa8b445b-merged.mount: Deactivated successfully.
Jan 20 15:42:59 compute-0 podman[400300]: 2026-01-20 15:42:59.980058254 +0000 UTC m=+1.084554658 container remove 4b2ce2a9e7908cffecc9c56006fbe24eefca5ca1862dc00aa3b2e286fda54412 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_zhukovsky, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 20 15:42:59 compute-0 systemd[1]: libpod-conmon-4b2ce2a9e7908cffecc9c56006fbe24eefca5ca1862dc00aa3b2e286fda54412.scope: Deactivated successfully.
Jan 20 15:43:00 compute-0 sudo[400194]: pam_unix(sudo:session): session closed for user root
Jan 20 15:43:00 compute-0 sudo[400346]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:43:00 compute-0 sudo[400346]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:43:00 compute-0 sudo[400346]: pam_unix(sudo:session): session closed for user root
Jan 20 15:43:00 compute-0 sudo[400371]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:43:00 compute-0 sudo[400371]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:43:00 compute-0 sudo[400371]: pam_unix(sudo:session): session closed for user root
Jan 20 15:43:00 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:43:00 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:43:00 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:43:00.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:43:00 compute-0 sudo[400396]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:43:00 compute-0 sudo[400396]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:43:00 compute-0 sudo[400396]: pam_unix(sudo:session): session closed for user root
Jan 20 15:43:00 compute-0 sudo[400421]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- lvm list --format json
Jan 20 15:43:00 compute-0 sudo[400421]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:43:00 compute-0 ceph-mon[74360]: pgmap v3624: 321 pgs: 321 active+clean; 260 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.1 MiB/s wr, 85 op/s
Jan 20 15:43:00 compute-0 podman[400486]: 2026-01-20 15:43:00.596137561 +0000 UTC m=+0.041572828 container create 6bc9ba80d1bde2a85bc2473860106b796b8e8ea9af52017e857e68ab0977c3b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_proskuriakova, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 15:43:00 compute-0 systemd[1]: Started libpod-conmon-6bc9ba80d1bde2a85bc2473860106b796b8e8ea9af52017e857e68ab0977c3b4.scope.
Jan 20 15:43:00 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:43:00 compute-0 podman[400486]: 2026-01-20 15:43:00.575783309 +0000 UTC m=+0.021218616 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:43:00 compute-0 podman[400486]: 2026-01-20 15:43:00.671615865 +0000 UTC m=+0.117051182 container init 6bc9ba80d1bde2a85bc2473860106b796b8e8ea9af52017e857e68ab0977c3b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_proskuriakova, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:43:00 compute-0 podman[400486]: 2026-01-20 15:43:00.679775405 +0000 UTC m=+0.125210672 container start 6bc9ba80d1bde2a85bc2473860106b796b8e8ea9af52017e857e68ab0977c3b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_proskuriakova, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:43:00 compute-0 podman[400486]: 2026-01-20 15:43:00.68326526 +0000 UTC m=+0.128700567 container attach 6bc9ba80d1bde2a85bc2473860106b796b8e8ea9af52017e857e68ab0977c3b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_proskuriakova, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 20 15:43:00 compute-0 reverent_proskuriakova[400504]: 167 167
Jan 20 15:43:00 compute-0 systemd[1]: libpod-6bc9ba80d1bde2a85bc2473860106b796b8e8ea9af52017e857e68ab0977c3b4.scope: Deactivated successfully.
Jan 20 15:43:00 compute-0 podman[400486]: 2026-01-20 15:43:00.68729526 +0000 UTC m=+0.132730577 container died 6bc9ba80d1bde2a85bc2473860106b796b8e8ea9af52017e857e68ab0977c3b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_proskuriakova, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 15:43:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-c1a9a6f2ad7a29d9332d6c77a089d3f8c2867f03a1f82def159ffe06db5b8da5-merged.mount: Deactivated successfully.
Jan 20 15:43:00 compute-0 podman[400486]: 2026-01-20 15:43:00.735236998 +0000 UTC m=+0.180672265 container remove 6bc9ba80d1bde2a85bc2473860106b796b8e8ea9af52017e857e68ab0977c3b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_proskuriakova, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 20 15:43:00 compute-0 systemd[1]: libpod-conmon-6bc9ba80d1bde2a85bc2473860106b796b8e8ea9af52017e857e68ab0977c3b4.scope: Deactivated successfully.
Jan 20 15:43:00 compute-0 podman[400529]: 2026-01-20 15:43:00.942507812 +0000 UTC m=+0.049683196 container create db7b7d49b04db1a88199689feef989a47f99d6b1dfacb095dda379023393a8e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_aryabhata, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True)
Jan 20 15:43:00 compute-0 systemd[1]: Started libpod-conmon-db7b7d49b04db1a88199689feef989a47f99d6b1dfacb095dda379023393a8e6.scope.
Jan 20 15:43:01 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:43:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6c17cf8c3e1783f1d15e8b147d20af3c41d867fded9429fa2604fd936ef1a4b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 15:43:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6c17cf8c3e1783f1d15e8b147d20af3c41d867fded9429fa2604fd936ef1a4b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 15:43:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6c17cf8c3e1783f1d15e8b147d20af3c41d867fded9429fa2604fd936ef1a4b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 15:43:01 compute-0 podman[400529]: 2026-01-20 15:43:00.923121017 +0000 UTC m=+0.030296411 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:43:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6c17cf8c3e1783f1d15e8b147d20af3c41d867fded9429fa2604fd936ef1a4b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 15:43:01 compute-0 podman[400529]: 2026-01-20 15:43:01.032069348 +0000 UTC m=+0.139244772 container init db7b7d49b04db1a88199689feef989a47f99d6b1dfacb095dda379023393a8e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_aryabhata, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 20 15:43:01 compute-0 podman[400529]: 2026-01-20 15:43:01.039831779 +0000 UTC m=+0.147007173 container start db7b7d49b04db1a88199689feef989a47f99d6b1dfacb095dda379023393a8e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_aryabhata, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 20 15:43:01 compute-0 podman[400529]: 2026-01-20 15:43:01.04324237 +0000 UTC m=+0.150417754 container attach db7b7d49b04db1a88199689feef989a47f99d6b1dfacb095dda379023393a8e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_aryabhata, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:43:01 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:43:01 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:43:01 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:43:01.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:43:01 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3625: 321 pgs: 321 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.2 MiB/s rd, 2.1 MiB/s wr, 92 op/s
Jan 20 15:43:01 compute-0 zealous_aryabhata[400546]: {
Jan 20 15:43:01 compute-0 zealous_aryabhata[400546]:     "0": [
Jan 20 15:43:01 compute-0 zealous_aryabhata[400546]:         {
Jan 20 15:43:01 compute-0 zealous_aryabhata[400546]:             "devices": [
Jan 20 15:43:01 compute-0 zealous_aryabhata[400546]:                 "/dev/loop3"
Jan 20 15:43:01 compute-0 zealous_aryabhata[400546]:             ],
Jan 20 15:43:01 compute-0 zealous_aryabhata[400546]:             "lv_name": "ceph_lv0",
Jan 20 15:43:01 compute-0 zealous_aryabhata[400546]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 15:43:01 compute-0 zealous_aryabhata[400546]:             "lv_size": "7511998464",
Jan 20 15:43:01 compute-0 zealous_aryabhata[400546]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e399cf45-e6b6-5393-99f1-75c601d3f188,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afc3740a-ccee-46f8-b343-22648cc89376,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 20 15:43:01 compute-0 zealous_aryabhata[400546]:             "lv_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 15:43:01 compute-0 zealous_aryabhata[400546]:             "name": "ceph_lv0",
Jan 20 15:43:01 compute-0 zealous_aryabhata[400546]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 15:43:01 compute-0 zealous_aryabhata[400546]:             "tags": {
Jan 20 15:43:01 compute-0 zealous_aryabhata[400546]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 15:43:01 compute-0 zealous_aryabhata[400546]:                 "ceph.block_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 15:43:01 compute-0 zealous_aryabhata[400546]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 15:43:01 compute-0 zealous_aryabhata[400546]:                 "ceph.cluster_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 15:43:01 compute-0 zealous_aryabhata[400546]:                 "ceph.cluster_name": "ceph",
Jan 20 15:43:01 compute-0 zealous_aryabhata[400546]:                 "ceph.crush_device_class": "",
Jan 20 15:43:01 compute-0 zealous_aryabhata[400546]:                 "ceph.encrypted": "0",
Jan 20 15:43:01 compute-0 zealous_aryabhata[400546]:                 "ceph.osd_fsid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 15:43:01 compute-0 zealous_aryabhata[400546]:                 "ceph.osd_id": "0",
Jan 20 15:43:01 compute-0 zealous_aryabhata[400546]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 15:43:01 compute-0 zealous_aryabhata[400546]:                 "ceph.type": "block",
Jan 20 15:43:01 compute-0 zealous_aryabhata[400546]:                 "ceph.vdo": "0"
Jan 20 15:43:01 compute-0 zealous_aryabhata[400546]:             },
Jan 20 15:43:01 compute-0 zealous_aryabhata[400546]:             "type": "block",
Jan 20 15:43:01 compute-0 zealous_aryabhata[400546]:             "vg_name": "ceph_vg0"
Jan 20 15:43:01 compute-0 zealous_aryabhata[400546]:         }
Jan 20 15:43:01 compute-0 zealous_aryabhata[400546]:     ]
Jan 20 15:43:01 compute-0 zealous_aryabhata[400546]: }
Jan 20 15:43:01 compute-0 systemd[1]: libpod-db7b7d49b04db1a88199689feef989a47f99d6b1dfacb095dda379023393a8e6.scope: Deactivated successfully.
Jan 20 15:43:01 compute-0 podman[400529]: 2026-01-20 15:43:01.767911369 +0000 UTC m=+0.875086753 container died db7b7d49b04db1a88199689feef989a47f99d6b1dfacb095dda379023393a8e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_aryabhata, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 20 15:43:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-a6c17cf8c3e1783f1d15e8b147d20af3c41d867fded9429fa2604fd936ef1a4b-merged.mount: Deactivated successfully.
Jan 20 15:43:01 compute-0 podman[400529]: 2026-01-20 15:43:01.821142411 +0000 UTC m=+0.928317785 container remove db7b7d49b04db1a88199689feef989a47f99d6b1dfacb095dda379023393a8e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_aryabhata, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 15:43:01 compute-0 systemd[1]: libpod-conmon-db7b7d49b04db1a88199689feef989a47f99d6b1dfacb095dda379023393a8e6.scope: Deactivated successfully.
Jan 20 15:43:01 compute-0 sudo[400421]: pam_unix(sudo:session): session closed for user root
Jan 20 15:43:01 compute-0 sudo[400569]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:43:01 compute-0 sudo[400569]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:43:01 compute-0 sudo[400569]: pam_unix(sudo:session): session closed for user root
Jan 20 15:43:01 compute-0 sudo[400594]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:43:01 compute-0 sudo[400594]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:43:01 compute-0 sudo[400594]: pam_unix(sudo:session): session closed for user root
Jan 20 15:43:02 compute-0 sudo[400619]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:43:02 compute-0 sudo[400619]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:43:02 compute-0 sudo[400619]: pam_unix(sudo:session): session closed for user root
Jan 20 15:43:02 compute-0 sudo[400644]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- raw list --format json
Jan 20 15:43:02 compute-0 sudo[400644]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:43:02 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:43:02 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:43:02 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:43:02.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:43:02 compute-0 podman[400711]: 2026-01-20 15:43:02.433087546 +0000 UTC m=+0.055571176 container create eb3e37234650cfd327307b61acebd98f7c9c37f4fa463d4646dac95d09a9d8eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_lehmann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 20 15:43:02 compute-0 systemd[1]: Started libpod-conmon-eb3e37234650cfd327307b61acebd98f7c9c37f4fa463d4646dac95d09a9d8eb.scope.
Jan 20 15:43:02 compute-0 podman[400711]: 2026-01-20 15:43:02.409860837 +0000 UTC m=+0.032344467 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:43:02 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:43:02 compute-0 podman[400711]: 2026-01-20 15:43:02.529415744 +0000 UTC m=+0.151899374 container init eb3e37234650cfd327307b61acebd98f7c9c37f4fa463d4646dac95d09a9d8eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_lehmann, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 15:43:02 compute-0 podman[400711]: 2026-01-20 15:43:02.545807609 +0000 UTC m=+0.168291209 container start eb3e37234650cfd327307b61acebd98f7c9c37f4fa463d4646dac95d09a9d8eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_lehmann, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 15:43:02 compute-0 podman[400711]: 2026-01-20 15:43:02.549766706 +0000 UTC m=+0.172250476 container attach eb3e37234650cfd327307b61acebd98f7c9c37f4fa463d4646dac95d09a9d8eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_lehmann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 20 15:43:02 compute-0 great_lehmann[400728]: 167 167
Jan 20 15:43:02 compute-0 systemd[1]: libpod-eb3e37234650cfd327307b61acebd98f7c9c37f4fa463d4646dac95d09a9d8eb.scope: Deactivated successfully.
Jan 20 15:43:02 compute-0 podman[400711]: 2026-01-20 15:43:02.552179512 +0000 UTC m=+0.174663132 container died eb3e37234650cfd327307b61acebd98f7c9c37f4fa463d4646dac95d09a9d8eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_lehmann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 15:43:02 compute-0 ceph-mon[74360]: pgmap v3625: 321 pgs: 321 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.2 MiB/s rd, 2.1 MiB/s wr, 92 op/s
Jan 20 15:43:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-20772af6ecad63d5179008da2d282e8065c33874e6dba602830889dbae67f6e8-merged.mount: Deactivated successfully.
Jan 20 15:43:02 compute-0 podman[400711]: 2026-01-20 15:43:02.599969606 +0000 UTC m=+0.222453196 container remove eb3e37234650cfd327307b61acebd98f7c9c37f4fa463d4646dac95d09a9d8eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_lehmann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 15:43:02 compute-0 systemd[1]: libpod-conmon-eb3e37234650cfd327307b61acebd98f7c9c37f4fa463d4646dac95d09a9d8eb.scope: Deactivated successfully.
Jan 20 15:43:02 compute-0 podman[400752]: 2026-01-20 15:43:02.825672429 +0000 UTC m=+0.053149811 container create 6850aa5e271bd7ba374d782109eedc021b1bee58802cd5da75bd08351849dae4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_shockley, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 20 15:43:02 compute-0 systemd[1]: Started libpod-conmon-6850aa5e271bd7ba374d782109eedc021b1bee58802cd5da75bd08351849dae4.scope.
Jan 20 15:43:02 compute-0 podman[400752]: 2026-01-20 15:43:02.804330381 +0000 UTC m=+0.031807793 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:43:02 compute-0 nova_compute[250018]: 2026-01-20 15:43:02.898 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:43:02 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:43:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d7c1b9e592761d7234030be3f18f0cafa0de3e518cfcddca93a7a0ad16d11e8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 15:43:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d7c1b9e592761d7234030be3f18f0cafa0de3e518cfcddca93a7a0ad16d11e8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 15:43:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d7c1b9e592761d7234030be3f18f0cafa0de3e518cfcddca93a7a0ad16d11e8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 15:43:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d7c1b9e592761d7234030be3f18f0cafa0de3e518cfcddca93a7a0ad16d11e8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 15:43:02 compute-0 podman[400752]: 2026-01-20 15:43:02.918793742 +0000 UTC m=+0.146271154 container init 6850aa5e271bd7ba374d782109eedc021b1bee58802cd5da75bd08351849dae4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_shockley, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 15:43:02 compute-0 podman[400752]: 2026-01-20 15:43:02.928841814 +0000 UTC m=+0.156319226 container start 6850aa5e271bd7ba374d782109eedc021b1bee58802cd5da75bd08351849dae4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_shockley, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 20 15:43:02 compute-0 podman[400752]: 2026-01-20 15:43:02.931977629 +0000 UTC m=+0.159455041 container attach 6850aa5e271bd7ba374d782109eedc021b1bee58802cd5da75bd08351849dae4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_shockley, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 15:43:03 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:43:03 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:43:03 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:43:03.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:43:03 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3626: 321 pgs: 321 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 308 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Jan 20 15:43:03 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:43:03 compute-0 sleepy_shockley[400768]: {
Jan 20 15:43:03 compute-0 sleepy_shockley[400768]:     "afc3740a-ccee-46f8-b343-22648cc89376": {
Jan 20 15:43:03 compute-0 sleepy_shockley[400768]:         "ceph_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 15:43:03 compute-0 sleepy_shockley[400768]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 20 15:43:03 compute-0 sleepy_shockley[400768]:         "osd_id": 0,
Jan 20 15:43:03 compute-0 sleepy_shockley[400768]:         "osd_uuid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 15:43:03 compute-0 sleepy_shockley[400768]:         "type": "bluestore"
Jan 20 15:43:03 compute-0 sleepy_shockley[400768]:     }
Jan 20 15:43:03 compute-0 sleepy_shockley[400768]: }
Jan 20 15:43:03 compute-0 systemd[1]: libpod-6850aa5e271bd7ba374d782109eedc021b1bee58802cd5da75bd08351849dae4.scope: Deactivated successfully.
Jan 20 15:43:03 compute-0 podman[400752]: 2026-01-20 15:43:03.765167646 +0000 UTC m=+0.992645058 container died 6850aa5e271bd7ba374d782109eedc021b1bee58802cd5da75bd08351849dae4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_shockley, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 20 15:43:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-5d7c1b9e592761d7234030be3f18f0cafa0de3e518cfcddca93a7a0ad16d11e8-merged.mount: Deactivated successfully.
Jan 20 15:43:03 compute-0 podman[400752]: 2026-01-20 15:43:03.830882816 +0000 UTC m=+1.058360218 container remove 6850aa5e271bd7ba374d782109eedc021b1bee58802cd5da75bd08351849dae4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_shockley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 20 15:43:03 compute-0 systemd[1]: libpod-conmon-6850aa5e271bd7ba374d782109eedc021b1bee58802cd5da75bd08351849dae4.scope: Deactivated successfully.
Jan 20 15:43:03 compute-0 sudo[400644]: pam_unix(sudo:session): session closed for user root
Jan 20 15:43:03 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 15:43:03 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:43:03 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 15:43:03 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:43:03 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev c5f27a86-ac15-44e6-8107-780bd0ea2489 does not exist
Jan 20 15:43:03 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 85c713ee-24cb-48b1-b391-3d46e52c5a50 does not exist
Jan 20 15:43:03 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev a0977b5b-3934-4f16-adaf-16bbcd80b52a does not exist
Jan 20 15:43:03 compute-0 sudo[400801]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:43:03 compute-0 sudo[400801]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:43:03 compute-0 sudo[400801]: pam_unix(sudo:session): session closed for user root
Jan 20 15:43:04 compute-0 sudo[400826]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 15:43:04 compute-0 sudo[400826]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:43:04 compute-0 sudo[400826]: pam_unix(sudo:session): session closed for user root
Jan 20 15:43:04 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:43:04 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:43:04 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:43:04.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:43:04 compute-0 nova_compute[250018]: 2026-01-20 15:43:04.194 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:43:04 compute-0 ceph-mon[74360]: pgmap v3626: 321 pgs: 321 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 308 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Jan 20 15:43:04 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:43:04 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:43:05 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:43:05 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:43:05 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:43:05.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:43:05 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3627: 321 pgs: 321 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 20 15:43:06 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:43:06 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:43:06 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:43:06.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:43:06 compute-0 ceph-mon[74360]: pgmap v3627: 321 pgs: 321 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 20 15:43:07 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:43:07 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:43:07 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:43:07.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:43:07 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3628: 321 pgs: 321 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 20 15:43:07 compute-0 nova_compute[250018]: 2026-01-20 15:43:07.931 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:43:08 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:43:08 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:43:08 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:43:08.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:43:08 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:43:08 compute-0 ceph-mon[74360]: pgmap v3628: 321 pgs: 321 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 20 15:43:08 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/4178163900' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:43:09 compute-0 nova_compute[250018]: 2026-01-20 15:43:09.226 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:43:09 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:43:09 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:43:09 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:43:09.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:43:09 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3629: 321 pgs: 321 active+clean; 248 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 329 KiB/s rd, 2.1 MiB/s wr, 69 op/s
Jan 20 15:43:10 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:43:10 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:43:10 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:43:10.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:43:10 compute-0 ceph-mon[74360]: pgmap v3629: 321 pgs: 321 active+clean; 248 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 329 KiB/s rd, 2.1 MiB/s wr, 69 op/s
Jan 20 15:43:11 compute-0 sshd-session[400854]: Invalid user guest from 134.122.57.138 port 58296
Jan 20 15:43:11 compute-0 sshd-session[400854]: Connection closed by invalid user guest 134.122.57.138 port 58296 [preauth]
Jan 20 15:43:11 compute-0 nova_compute[250018]: 2026-01-20 15:43:11.397 250022 DEBUG nova.compute.manager [req-8986e5f6-9af4-4dcf-bcc5-670b4e161fb5 req-baea5f29-5749-4742-bf0d-a79339a2a22a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 858544da-d6c9-46d0-ac10-f36f6813e593] Received event network-changed-ce2e7728-b7f2-40ae-991c-c492681e800b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:43:11 compute-0 nova_compute[250018]: 2026-01-20 15:43:11.397 250022 DEBUG nova.compute.manager [req-8986e5f6-9af4-4dcf-bcc5-670b4e161fb5 req-baea5f29-5749-4742-bf0d-a79339a2a22a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 858544da-d6c9-46d0-ac10-f36f6813e593] Refreshing instance network info cache due to event network-changed-ce2e7728-b7f2-40ae-991c-c492681e800b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 15:43:11 compute-0 nova_compute[250018]: 2026-01-20 15:43:11.398 250022 DEBUG oslo_concurrency.lockutils [req-8986e5f6-9af4-4dcf-bcc5-670b4e161fb5 req-baea5f29-5749-4742-bf0d-a79339a2a22a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "refresh_cache-858544da-d6c9-46d0-ac10-f36f6813e593" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 15:43:11 compute-0 nova_compute[250018]: 2026-01-20 15:43:11.398 250022 DEBUG oslo_concurrency.lockutils [req-8986e5f6-9af4-4dcf-bcc5-670b4e161fb5 req-baea5f29-5749-4742-bf0d-a79339a2a22a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquired lock "refresh_cache-858544da-d6c9-46d0-ac10-f36f6813e593" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 15:43:11 compute-0 nova_compute[250018]: 2026-01-20 15:43:11.398 250022 DEBUG nova.network.neutron [req-8986e5f6-9af4-4dcf-bcc5-670b4e161fb5 req-baea5f29-5749-4742-bf0d-a79339a2a22a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 858544da-d6c9-46d0-ac10-f36f6813e593] Refreshing network info cache for port ce2e7728-b7f2-40ae-991c-c492681e800b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 15:43:11 compute-0 nova_compute[250018]: 2026-01-20 15:43:11.427 250022 DEBUG oslo_concurrency.lockutils [None req-77ce7269-5522-4f3f-a642-708b74a45626 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Acquiring lock "858544da-d6c9-46d0-ac10-f36f6813e593" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:43:11 compute-0 nova_compute[250018]: 2026-01-20 15:43:11.427 250022 DEBUG oslo_concurrency.lockutils [None req-77ce7269-5522-4f3f-a642-708b74a45626 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Lock "858544da-d6c9-46d0-ac10-f36f6813e593" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:43:11 compute-0 nova_compute[250018]: 2026-01-20 15:43:11.428 250022 DEBUG oslo_concurrency.lockutils [None req-77ce7269-5522-4f3f-a642-708b74a45626 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Acquiring lock "858544da-d6c9-46d0-ac10-f36f6813e593-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:43:11 compute-0 nova_compute[250018]: 2026-01-20 15:43:11.428 250022 DEBUG oslo_concurrency.lockutils [None req-77ce7269-5522-4f3f-a642-708b74a45626 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Lock "858544da-d6c9-46d0-ac10-f36f6813e593-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:43:11 compute-0 nova_compute[250018]: 2026-01-20 15:43:11.428 250022 DEBUG oslo_concurrency.lockutils [None req-77ce7269-5522-4f3f-a642-708b74a45626 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Lock "858544da-d6c9-46d0-ac10-f36f6813e593-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:43:11 compute-0 nova_compute[250018]: 2026-01-20 15:43:11.430 250022 INFO nova.compute.manager [None req-77ce7269-5522-4f3f-a642-708b74a45626 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: 858544da-d6c9-46d0-ac10-f36f6813e593] Terminating instance
Jan 20 15:43:11 compute-0 nova_compute[250018]: 2026-01-20 15:43:11.432 250022 DEBUG nova.compute.manager [None req-77ce7269-5522-4f3f-a642-708b74a45626 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: 858544da-d6c9-46d0-ac10-f36f6813e593] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 15:43:11 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:43:11 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:43:11 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:43:11.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:43:11 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3630: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 186 KiB/s rd, 1.1 MiB/s wr, 65 op/s
Jan 20 15:43:11 compute-0 kernel: tapce2e7728-b7 (unregistering): left promiscuous mode
Jan 20 15:43:11 compute-0 NetworkManager[48960]: <info>  [1768923791.4984] device (tapce2e7728-b7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 15:43:11 compute-0 ovn_controller[148666]: 2026-01-20T15:43:11Z|00815|binding|INFO|Releasing lport ce2e7728-b7f2-40ae-991c-c492681e800b from this chassis (sb_readonly=0)
Jan 20 15:43:11 compute-0 ovn_controller[148666]: 2026-01-20T15:43:11Z|00816|binding|INFO|Setting lport ce2e7728-b7f2-40ae-991c-c492681e800b down in Southbound
Jan 20 15:43:11 compute-0 nova_compute[250018]: 2026-01-20 15:43:11.505 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:43:11 compute-0 ovn_controller[148666]: 2026-01-20T15:43:11Z|00817|binding|INFO|Removing iface tapce2e7728-b7 ovn-installed in OVS
Jan 20 15:43:11 compute-0 nova_compute[250018]: 2026-01-20 15:43:11.507 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:43:11 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:43:11.514 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:36:61:a3 10.100.0.9'], port_security=['fa:16:3e:36:61:a3 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '858544da-d6c9-46d0-ac10-f36f6813e593', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b5fb4ee9-fa45-4797-871a-53247ebaf43e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '728662ec7f654a3fb2e53a90b8707d7e', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b6de228c-5aa6-462d-950b-3f1ea3d45f2d b7424c3a-5aee-4d68-a5d7-51752094553b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=909829b9-c0dd-4f89-9095-7f817ccefae3, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>], logical_port=ce2e7728-b7f2-40ae-991c-c492681e800b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f4c2750ba60>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:43:11 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:43:11.515 160071 INFO neutron.agent.ovn.metadata.agent [-] Port ce2e7728-b7f2-40ae-991c-c492681e800b in datapath b5fb4ee9-fa45-4797-871a-53247ebaf43e unbound from our chassis
Jan 20 15:43:11 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:43:11.516 160071 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b5fb4ee9-fa45-4797-871a-53247ebaf43e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 15:43:11 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:43:11.518 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[cbed0883-31df-493e-90a7-1d2ae99a079e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:43:11 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:43:11.519 160071 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b5fb4ee9-fa45-4797-871a-53247ebaf43e namespace which is not needed anymore
Jan 20 15:43:11 compute-0 nova_compute[250018]: 2026-01-20 15:43:11.586 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:43:11 compute-0 systemd[1]: machine-qemu\x2d95\x2dinstance\x2d000000da.scope: Deactivated successfully.
Jan 20 15:43:11 compute-0 systemd[1]: machine-qemu\x2d95\x2dinstance\x2d000000da.scope: Consumed 17.121s CPU time.
Jan 20 15:43:11 compute-0 systemd-machined[216401]: Machine qemu-95-instance-000000da terminated.
Jan 20 15:43:11 compute-0 kernel: tapce2e7728-b7: entered promiscuous mode
Jan 20 15:43:11 compute-0 kernel: tapce2e7728-b7 (unregistering): left promiscuous mode
Jan 20 15:43:11 compute-0 nova_compute[250018]: 2026-01-20 15:43:11.666 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:43:11 compute-0 neutron-haproxy-ovnmeta-b5fb4ee9-fa45-4797-871a-53247ebaf43e[399187]: [NOTICE]   (399191) : haproxy version is 2.8.14-c23fe91
Jan 20 15:43:11 compute-0 neutron-haproxy-ovnmeta-b5fb4ee9-fa45-4797-871a-53247ebaf43e[399187]: [NOTICE]   (399191) : path to executable is /usr/sbin/haproxy
Jan 20 15:43:11 compute-0 neutron-haproxy-ovnmeta-b5fb4ee9-fa45-4797-871a-53247ebaf43e[399187]: [WARNING]  (399191) : Exiting Master process...
Jan 20 15:43:11 compute-0 neutron-haproxy-ovnmeta-b5fb4ee9-fa45-4797-871a-53247ebaf43e[399187]: [WARNING]  (399191) : Exiting Master process...
Jan 20 15:43:11 compute-0 neutron-haproxy-ovnmeta-b5fb4ee9-fa45-4797-871a-53247ebaf43e[399187]: [ALERT]    (399191) : Current worker (399193) exited with code 143 (Terminated)
Jan 20 15:43:11 compute-0 neutron-haproxy-ovnmeta-b5fb4ee9-fa45-4797-871a-53247ebaf43e[399187]: [WARNING]  (399191) : All workers exited. Exiting... (0)
Jan 20 15:43:11 compute-0 systemd[1]: libpod-4b8bb9a20577cb778584fe1354a35a96ba98a813c5d76d900bee9ce96acf274c.scope: Deactivated successfully.
Jan 20 15:43:11 compute-0 podman[400880]: 2026-01-20 15:43:11.683365077 +0000 UTC m=+0.044188137 container died 4b8bb9a20577cb778584fe1354a35a96ba98a813c5d76d900bee9ce96acf274c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b5fb4ee9-fa45-4797-871a-53247ebaf43e, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 20 15:43:11 compute-0 nova_compute[250018]: 2026-01-20 15:43:11.684 250022 INFO nova.virt.libvirt.driver [-] [instance: 858544da-d6c9-46d0-ac10-f36f6813e593] Instance destroyed successfully.
Jan 20 15:43:11 compute-0 nova_compute[250018]: 2026-01-20 15:43:11.684 250022 DEBUG nova.objects.instance [None req-77ce7269-5522-4f3f-a642-708b74a45626 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Lazy-loading 'resources' on Instance uuid 858544da-d6c9-46d0-ac10-f36f6813e593 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 15:43:11 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4b8bb9a20577cb778584fe1354a35a96ba98a813c5d76d900bee9ce96acf274c-userdata-shm.mount: Deactivated successfully.
Jan 20 15:43:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-91677ca609e7123c34361184b4d067e312fee07dfa6acf718713befd95832db9-merged.mount: Deactivated successfully.
Jan 20 15:43:11 compute-0 nova_compute[250018]: 2026-01-20 15:43:11.709 250022 DEBUG nova.virt.libvirt.vif [None req-77ce7269-5522-4f3f-a642-708b74a45626 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T15:41:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-342561427-access_point-943937531',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-342561427-access_point-943937531',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-342561427-acc',id=218,image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNPrC26E5zjpds8PmYXeLNQKBwLdgsc+VcubrdKnriEXDiMjUXGvx1Qk1D9X7eLck7XYpiSHt4U9t1SsZB3lsAeahV1YqeLst2/p8UQkxJjHaCXNOlF5uwsraAqiSop7uA==',key_name='tempest-TestSecurityGroupsBasicOps-681797586',keypairs=<?>,launch_index=0,launched_at=2026-01-20T15:42:06Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='728662ec7f654a3fb2e53a90b8707d7e',ramdisk_id='',reservation_id='r-sbfck5cd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a32b3e07-16d8-46fd-9a7b-c242c432fcf9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-342561427',owner_user_name='tempest-TestSecurityGroupsBasicOps-342561427-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T15:42:06Z,user_data=None,user_id='5985ef736503499a9f1d734cabc33ce5',uuid=858544da-d6c9-46d0-ac10-f36f6813e593,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ce2e7728-b7f2-40ae-991c-c492681e800b", "address": "fa:16:3e:36:61:a3", "network": {"id": "b5fb4ee9-fa45-4797-871a-53247ebaf43e", "bridge": "br-int", "label": "tempest-network-smoke--639703376", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.224", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "728662ec7f654a3fb2e53a90b8707d7e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce2e7728-b7", "ovs_interfaceid": "ce2e7728-b7f2-40ae-991c-c492681e800b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 15:43:11 compute-0 nova_compute[250018]: 2026-01-20 15:43:11.709 250022 DEBUG nova.network.os_vif_util [None req-77ce7269-5522-4f3f-a642-708b74a45626 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Converting VIF {"id": "ce2e7728-b7f2-40ae-991c-c492681e800b", "address": "fa:16:3e:36:61:a3", "network": {"id": "b5fb4ee9-fa45-4797-871a-53247ebaf43e", "bridge": "br-int", "label": "tempest-network-smoke--639703376", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.224", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "728662ec7f654a3fb2e53a90b8707d7e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce2e7728-b7", "ovs_interfaceid": "ce2e7728-b7f2-40ae-991c-c492681e800b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 15:43:11 compute-0 nova_compute[250018]: 2026-01-20 15:43:11.710 250022 DEBUG nova.network.os_vif_util [None req-77ce7269-5522-4f3f-a642-708b74a45626 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:36:61:a3,bridge_name='br-int',has_traffic_filtering=True,id=ce2e7728-b7f2-40ae-991c-c492681e800b,network=Network(b5fb4ee9-fa45-4797-871a-53247ebaf43e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapce2e7728-b7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 15:43:11 compute-0 nova_compute[250018]: 2026-01-20 15:43:11.711 250022 DEBUG os_vif [None req-77ce7269-5522-4f3f-a642-708b74a45626 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:36:61:a3,bridge_name='br-int',has_traffic_filtering=True,id=ce2e7728-b7f2-40ae-991c-c492681e800b,network=Network(b5fb4ee9-fa45-4797-871a-53247ebaf43e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapce2e7728-b7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 15:43:11 compute-0 nova_compute[250018]: 2026-01-20 15:43:11.712 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:43:11 compute-0 nova_compute[250018]: 2026-01-20 15:43:11.713 250022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapce2e7728-b7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:43:11 compute-0 podman[400880]: 2026-01-20 15:43:11.713888914 +0000 UTC m=+0.074711964 container cleanup 4b8bb9a20577cb778584fe1354a35a96ba98a813c5d76d900bee9ce96acf274c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b5fb4ee9-fa45-4797-871a-53247ebaf43e, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 20 15:43:11 compute-0 nova_compute[250018]: 2026-01-20 15:43:11.714 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:43:11 compute-0 nova_compute[250018]: 2026-01-20 15:43:11.715 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:43:11 compute-0 nova_compute[250018]: 2026-01-20 15:43:11.718 250022 INFO os_vif [None req-77ce7269-5522-4f3f-a642-708b74a45626 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:36:61:a3,bridge_name='br-int',has_traffic_filtering=True,id=ce2e7728-b7f2-40ae-991c-c492681e800b,network=Network(b5fb4ee9-fa45-4797-871a-53247ebaf43e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapce2e7728-b7')
Jan 20 15:43:11 compute-0 systemd[1]: libpod-conmon-4b8bb9a20577cb778584fe1354a35a96ba98a813c5d76d900bee9ce96acf274c.scope: Deactivated successfully.
Jan 20 15:43:11 compute-0 podman[400912]: 2026-01-20 15:43:11.775910093 +0000 UTC m=+0.041548155 container remove 4b8bb9a20577cb778584fe1354a35a96ba98a813c5d76d900bee9ce96acf274c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b5fb4ee9-fa45-4797-871a-53247ebaf43e, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 20 15:43:11 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:43:11.781 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[1c56a1ce-772d-41d4-ba56-9963f4ba842c]: (4, ('Tue Jan 20 03:43:11 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-b5fb4ee9-fa45-4797-871a-53247ebaf43e (4b8bb9a20577cb778584fe1354a35a96ba98a813c5d76d900bee9ce96acf274c)\n4b8bb9a20577cb778584fe1354a35a96ba98a813c5d76d900bee9ce96acf274c\nTue Jan 20 03:43:11 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-b5fb4ee9-fa45-4797-871a-53247ebaf43e (4b8bb9a20577cb778584fe1354a35a96ba98a813c5d76d900bee9ce96acf274c)\n4b8bb9a20577cb778584fe1354a35a96ba98a813c5d76d900bee9ce96acf274c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:43:11 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:43:11.782 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[db1157d3-5a3c-420d-992b-bbb16dc3b751]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:43:11 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:43:11.783 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb5fb4ee9-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:43:11 compute-0 nova_compute[250018]: 2026-01-20 15:43:11.784 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:43:11 compute-0 kernel: tapb5fb4ee9-f0: left promiscuous mode
Jan 20 15:43:11 compute-0 nova_compute[250018]: 2026-01-20 15:43:11.786 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:43:11 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:43:11.788 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[dddc15c4-6a73-4749-87dd-5bdbf201fbec]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:43:11 compute-0 nova_compute[250018]: 2026-01-20 15:43:11.798 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:43:11 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:43:11.806 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[ab90b00c-112d-43ed-aff7-4c3761d2c817]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:43:11 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:43:11.807 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[df0f4cd5-31e6-40da-ad94-17d075cbf11c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:43:11 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:43:11.823 257604 DEBUG oslo.privsep.daemon [-] privsep: reply[b2990f09-0c53-46d1-b920-9464b6f4d2c0]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 966595, 'reachable_time': 27028, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 400944, 'error': None, 'target': 'ovnmeta-b5fb4ee9-fa45-4797-871a-53247ebaf43e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:43:11 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:43:11.825 160517 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b5fb4ee9-fa45-4797-871a-53247ebaf43e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 15:43:11 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:43:11.826 160517 DEBUG oslo.privsep.daemon [-] privsep: reply[3f1b6b27-7aa4-4ef2-a8da-0d6d255292cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 15:43:11 compute-0 systemd[1]: run-netns-ovnmeta\x2db5fb4ee9\x2dfa45\x2d4797\x2d871a\x2d53247ebaf43e.mount: Deactivated successfully.
Jan 20 15:43:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 15:43:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:43:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 20 15:43:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:43:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0021745009771077612 of space, bias 1.0, pg target 0.6523502931323284 quantized to 32 (current 32)
Jan 20 15:43:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:43:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021614147124511445 of space, bias 1.0, pg target 0.6484244137353433 quantized to 32 (current 32)
Jan 20 15:43:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:43:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 20 15:43:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:43:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 20 15:43:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:43:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 20 15:43:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:43:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 15:43:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:43:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 20 15:43:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:43:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 20 15:43:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:43:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 15:43:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:43:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 20 15:43:12 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:43:12 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:43:12 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:43:12.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:43:12 compute-0 nova_compute[250018]: 2026-01-20 15:43:12.188 250022 INFO nova.virt.libvirt.driver [None req-77ce7269-5522-4f3f-a642-708b74a45626 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: 858544da-d6c9-46d0-ac10-f36f6813e593] Deleting instance files /var/lib/nova/instances/858544da-d6c9-46d0-ac10-f36f6813e593_del
Jan 20 15:43:12 compute-0 nova_compute[250018]: 2026-01-20 15:43:12.189 250022 INFO nova.virt.libvirt.driver [None req-77ce7269-5522-4f3f-a642-708b74a45626 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: 858544da-d6c9-46d0-ac10-f36f6813e593] Deletion of /var/lib/nova/instances/858544da-d6c9-46d0-ac10-f36f6813e593_del complete
Jan 20 15:43:12 compute-0 nova_compute[250018]: 2026-01-20 15:43:12.291 250022 INFO nova.compute.manager [None req-77ce7269-5522-4f3f-a642-708b74a45626 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] [instance: 858544da-d6c9-46d0-ac10-f36f6813e593] Took 0.86 seconds to destroy the instance on the hypervisor.
Jan 20 15:43:12 compute-0 nova_compute[250018]: 2026-01-20 15:43:12.292 250022 DEBUG oslo.service.loopingcall [None req-77ce7269-5522-4f3f-a642-708b74a45626 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 15:43:12 compute-0 nova_compute[250018]: 2026-01-20 15:43:12.292 250022 DEBUG nova.compute.manager [-] [instance: 858544da-d6c9-46d0-ac10-f36f6813e593] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 15:43:12 compute-0 nova_compute[250018]: 2026-01-20 15:43:12.292 250022 DEBUG nova.network.neutron [-] [instance: 858544da-d6c9-46d0-ac10-f36f6813e593] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 15:43:12 compute-0 nova_compute[250018]: 2026-01-20 15:43:12.396 250022 DEBUG nova.compute.manager [req-fede368e-98e4-433a-829e-ace33723564f req-f4a0390a-fc97-428e-8d1c-3bf5f2ca087c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 858544da-d6c9-46d0-ac10-f36f6813e593] Received event network-vif-unplugged-ce2e7728-b7f2-40ae-991c-c492681e800b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:43:12 compute-0 nova_compute[250018]: 2026-01-20 15:43:12.396 250022 DEBUG oslo_concurrency.lockutils [req-fede368e-98e4-433a-829e-ace33723564f req-f4a0390a-fc97-428e-8d1c-3bf5f2ca087c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "858544da-d6c9-46d0-ac10-f36f6813e593-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:43:12 compute-0 nova_compute[250018]: 2026-01-20 15:43:12.397 250022 DEBUG oslo_concurrency.lockutils [req-fede368e-98e4-433a-829e-ace33723564f req-f4a0390a-fc97-428e-8d1c-3bf5f2ca087c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "858544da-d6c9-46d0-ac10-f36f6813e593-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:43:12 compute-0 nova_compute[250018]: 2026-01-20 15:43:12.397 250022 DEBUG oslo_concurrency.lockutils [req-fede368e-98e4-433a-829e-ace33723564f req-f4a0390a-fc97-428e-8d1c-3bf5f2ca087c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "858544da-d6c9-46d0-ac10-f36f6813e593-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:43:12 compute-0 nova_compute[250018]: 2026-01-20 15:43:12.398 250022 DEBUG nova.compute.manager [req-fede368e-98e4-433a-829e-ace33723564f req-f4a0390a-fc97-428e-8d1c-3bf5f2ca087c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 858544da-d6c9-46d0-ac10-f36f6813e593] No waiting events found dispatching network-vif-unplugged-ce2e7728-b7f2-40ae-991c-c492681e800b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:43:12 compute-0 nova_compute[250018]: 2026-01-20 15:43:12.399 250022 DEBUG nova.compute.manager [req-fede368e-98e4-433a-829e-ace33723564f req-f4a0390a-fc97-428e-8d1c-3bf5f2ca087c 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 858544da-d6c9-46d0-ac10-f36f6813e593] Received event network-vif-unplugged-ce2e7728-b7f2-40ae-991c-c492681e800b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 20 15:43:12 compute-0 ceph-mon[74360]: pgmap v3630: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 186 KiB/s rd, 1.1 MiB/s wr, 65 op/s
Jan 20 15:43:12 compute-0 nova_compute[250018]: 2026-01-20 15:43:12.935 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:43:13 compute-0 sudo[400947]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:43:13 compute-0 sudo[400947]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:43:13 compute-0 sudo[400947]: pam_unix(sudo:session): session closed for user root
Jan 20 15:43:13 compute-0 sudo[400972]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:43:13 compute-0 sudo[400972]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:43:13 compute-0 sudo[400972]: pam_unix(sudo:session): session closed for user root
Jan 20 15:43:13 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:43:13 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:43:13 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:43:13.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:43:13 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3631: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 37 KiB/s rd, 14 KiB/s wr, 29 op/s
Jan 20 15:43:13 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:43:13 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/454088694' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:43:13 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/454088694' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:43:13 compute-0 nova_compute[250018]: 2026-01-20 15:43:13.848 250022 DEBUG nova.network.neutron [req-8986e5f6-9af4-4dcf-bcc5-670b4e161fb5 req-baea5f29-5749-4742-bf0d-a79339a2a22a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 858544da-d6c9-46d0-ac10-f36f6813e593] Updated VIF entry in instance network info cache for port ce2e7728-b7f2-40ae-991c-c492681e800b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 15:43:13 compute-0 nova_compute[250018]: 2026-01-20 15:43:13.849 250022 DEBUG nova.network.neutron [req-8986e5f6-9af4-4dcf-bcc5-670b4e161fb5 req-baea5f29-5749-4742-bf0d-a79339a2a22a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 858544da-d6c9-46d0-ac10-f36f6813e593] Updating instance_info_cache with network_info: [{"id": "ce2e7728-b7f2-40ae-991c-c492681e800b", "address": "fa:16:3e:36:61:a3", "network": {"id": "b5fb4ee9-fa45-4797-871a-53247ebaf43e", "bridge": "br-int", "label": "tempest-network-smoke--639703376", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "728662ec7f654a3fb2e53a90b8707d7e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce2e7728-b7", "ovs_interfaceid": "ce2e7728-b7f2-40ae-991c-c492681e800b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:43:13 compute-0 nova_compute[250018]: 2026-01-20 15:43:13.866 250022 DEBUG nova.network.neutron [-] [instance: 858544da-d6c9-46d0-ac10-f36f6813e593] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 15:43:13 compute-0 nova_compute[250018]: 2026-01-20 15:43:13.873 250022 DEBUG oslo_concurrency.lockutils [req-8986e5f6-9af4-4dcf-bcc5-670b4e161fb5 req-baea5f29-5749-4742-bf0d-a79339a2a22a 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Releasing lock "refresh_cache-858544da-d6c9-46d0-ac10-f36f6813e593" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 15:43:13 compute-0 nova_compute[250018]: 2026-01-20 15:43:13.883 250022 INFO nova.compute.manager [-] [instance: 858544da-d6c9-46d0-ac10-f36f6813e593] Took 1.59 seconds to deallocate network for instance.
Jan 20 15:43:13 compute-0 nova_compute[250018]: 2026-01-20 15:43:13.932 250022 DEBUG oslo_concurrency.lockutils [None req-77ce7269-5522-4f3f-a642-708b74a45626 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:43:13 compute-0 nova_compute[250018]: 2026-01-20 15:43:13.933 250022 DEBUG oslo_concurrency.lockutils [None req-77ce7269-5522-4f3f-a642-708b74a45626 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:43:13 compute-0 nova_compute[250018]: 2026-01-20 15:43:13.937 250022 DEBUG nova.compute.manager [req-ffa99a0d-1c62-4b0b-b34d-ddbabbd4c8a9 req-ff5a8426-6ce3-4ca4-baa2-137ce3252810 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 858544da-d6c9-46d0-ac10-f36f6813e593] Received event network-vif-deleted-ce2e7728-b7f2-40ae-991c-c492681e800b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:43:13 compute-0 nova_compute[250018]: 2026-01-20 15:43:13.976 250022 DEBUG oslo_concurrency.processutils [None req-77ce7269-5522-4f3f-a642-708b74a45626 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:43:14 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:43:14 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:43:14 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:43:14.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:43:14 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:43:14 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1694918340' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:43:14 compute-0 nova_compute[250018]: 2026-01-20 15:43:14.422 250022 DEBUG oslo_concurrency.processutils [None req-77ce7269-5522-4f3f-a642-708b74a45626 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:43:14 compute-0 nova_compute[250018]: 2026-01-20 15:43:14.427 250022 DEBUG nova.compute.provider_tree [None req-77ce7269-5522-4f3f-a642-708b74a45626 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:43:14 compute-0 nova_compute[250018]: 2026-01-20 15:43:14.446 250022 DEBUG nova.scheduler.client.report [None req-77ce7269-5522-4f3f-a642-708b74a45626 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:43:14 compute-0 nova_compute[250018]: 2026-01-20 15:43:14.471 250022 DEBUG oslo_concurrency.lockutils [None req-77ce7269-5522-4f3f-a642-708b74a45626 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.538s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:43:14 compute-0 nova_compute[250018]: 2026-01-20 15:43:14.475 250022 DEBUG nova.compute.manager [req-a4897b66-13f2-43a7-9ef8-85dcaf5e340a req-44cba2b4-4eb2-4850-a81e-1c46cd3c1637 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 858544da-d6c9-46d0-ac10-f36f6813e593] Received event network-vif-plugged-ce2e7728-b7f2-40ae-991c-c492681e800b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 15:43:14 compute-0 nova_compute[250018]: 2026-01-20 15:43:14.476 250022 DEBUG oslo_concurrency.lockutils [req-a4897b66-13f2-43a7-9ef8-85dcaf5e340a req-44cba2b4-4eb2-4850-a81e-1c46cd3c1637 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Acquiring lock "858544da-d6c9-46d0-ac10-f36f6813e593-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:43:14 compute-0 nova_compute[250018]: 2026-01-20 15:43:14.476 250022 DEBUG oslo_concurrency.lockutils [req-a4897b66-13f2-43a7-9ef8-85dcaf5e340a req-44cba2b4-4eb2-4850-a81e-1c46cd3c1637 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "858544da-d6c9-46d0-ac10-f36f6813e593-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:43:14 compute-0 nova_compute[250018]: 2026-01-20 15:43:14.476 250022 DEBUG oslo_concurrency.lockutils [req-a4897b66-13f2-43a7-9ef8-85dcaf5e340a req-44cba2b4-4eb2-4850-a81e-1c46cd3c1637 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] Lock "858544da-d6c9-46d0-ac10-f36f6813e593-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:43:14 compute-0 nova_compute[250018]: 2026-01-20 15:43:14.476 250022 DEBUG nova.compute.manager [req-a4897b66-13f2-43a7-9ef8-85dcaf5e340a req-44cba2b4-4eb2-4850-a81e-1c46cd3c1637 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 858544da-d6c9-46d0-ac10-f36f6813e593] No waiting events found dispatching network-vif-plugged-ce2e7728-b7f2-40ae-991c-c492681e800b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 15:43:14 compute-0 nova_compute[250018]: 2026-01-20 15:43:14.476 250022 WARNING nova.compute.manager [req-a4897b66-13f2-43a7-9ef8-85dcaf5e340a req-44cba2b4-4eb2-4850-a81e-1c46cd3c1637 15e2d293aecb44f4b8fadb4968d7c65b d5b132113da54ff6b616e719b9c45446 - - default default] [instance: 858544da-d6c9-46d0-ac10-f36f6813e593] Received unexpected event network-vif-plugged-ce2e7728-b7f2-40ae-991c-c492681e800b for instance with vm_state deleted and task_state None.
Jan 20 15:43:14 compute-0 nova_compute[250018]: 2026-01-20 15:43:14.494 250022 INFO nova.scheduler.client.report [None req-77ce7269-5522-4f3f-a642-708b74a45626 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Deleted allocations for instance 858544da-d6c9-46d0-ac10-f36f6813e593
Jan 20 15:43:14 compute-0 nova_compute[250018]: 2026-01-20 15:43:14.581 250022 DEBUG oslo_concurrency.lockutils [None req-77ce7269-5522-4f3f-a642-708b74a45626 5985ef736503499a9f1d734cabc33ce5 728662ec7f654a3fb2e53a90b8707d7e - - default default] Lock "858544da-d6c9-46d0-ac10-f36f6813e593" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.153s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:43:14 compute-0 ceph-mon[74360]: pgmap v3631: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 37 KiB/s rd, 14 KiB/s wr, 29 op/s
Jan 20 15:43:14 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/1694918340' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:43:15 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:43:15 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.002000053s ======
Jan 20 15:43:15 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:43:15.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Jan 20 15:43:15 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3632: 321 pgs: 321 active+clean; 159 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 53 KiB/s rd, 15 KiB/s wr, 51 op/s
Jan 20 15:43:16 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:43:16 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:43:16 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:43:16.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:43:16 compute-0 ceph-mon[74360]: pgmap v3632: 321 pgs: 321 active+clean; 159 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 53 KiB/s rd, 15 KiB/s wr, 51 op/s
Jan 20 15:43:16 compute-0 nova_compute[250018]: 2026-01-20 15:43:16.747 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:43:17 compute-0 nova_compute[250018]: 2026-01-20 15:43:17.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:43:17 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:43:17 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:43:17 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:43:17.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:43:17 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3633: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 38 KiB/s rd, 5.3 KiB/s wr, 56 op/s
Jan 20 15:43:17 compute-0 nova_compute[250018]: 2026-01-20 15:43:17.937 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:43:18 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:43:18 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:43:18 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:43:18.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:43:18 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:43:18 compute-0 podman[401023]: 2026-01-20 15:43:18.50325854 +0000 UTC m=+0.075153266 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team)
Jan 20 15:43:18 compute-0 podman[401022]: 2026-01-20 15:43:18.541753644 +0000 UTC m=+0.128702147 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Jan 20 15:43:18 compute-0 ceph-mon[74360]: pgmap v3633: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 38 KiB/s rd, 5.3 KiB/s wr, 56 op/s
Jan 20 15:43:19 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:43:19 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:43:19 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:43:19.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:43:19 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3634: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 37 KiB/s rd, 4.7 KiB/s wr, 54 op/s
Jan 20 15:43:19 compute-0 nova_compute[250018]: 2026-01-20 15:43:19.693 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:43:19 compute-0 ceph-mon[74360]: pgmap v3634: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 37 KiB/s rd, 4.7 KiB/s wr, 54 op/s
Jan 20 15:43:19 compute-0 nova_compute[250018]: 2026-01-20 15:43:19.770 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:43:20 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:43:20 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:43:20 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:43:20.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:43:20 compute-0 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 20 15:43:21 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:43:21 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:43:21 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:43:21.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:43:21 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3635: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 35 KiB/s rd, 3.7 KiB/s wr, 50 op/s
Jan 20 15:43:21 compute-0 nova_compute[250018]: 2026-01-20 15:43:21.783 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:43:22 compute-0 nova_compute[250018]: 2026-01-20 15:43:22.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:43:22 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:43:22 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:43:22 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:43:22.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:43:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:43:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:43:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:43:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:43:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:43:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:43:22 compute-0 ceph-mon[74360]: pgmap v3635: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 35 KiB/s rd, 3.7 KiB/s wr, 50 op/s
Jan 20 15:43:22 compute-0 nova_compute[250018]: 2026-01-20 15:43:22.939 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:43:23 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:43:23 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:43:23 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:43:23.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:43:23 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3636: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 2.2 KiB/s wr, 27 op/s
Jan 20 15:43:23 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:43:24 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:43:24 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:43:24 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:43:24.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:43:24 compute-0 ceph-mon[74360]: pgmap v3636: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 2.2 KiB/s wr, 27 op/s
Jan 20 15:43:24 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/1142949718' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:43:25 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:43:25 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:43:25 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:43:25.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:43:25 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3637: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 2.2 KiB/s wr, 27 op/s
Jan 20 15:43:25 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/578300199' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:43:26 compute-0 nova_compute[250018]: 2026-01-20 15:43:26.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:43:26 compute-0 nova_compute[250018]: 2026-01-20 15:43:26.052 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 15:43:26 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:43:26 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:43:26 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:43:26.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:43:26 compute-0 ceph-mon[74360]: pgmap v3637: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 2.2 KiB/s wr, 27 op/s
Jan 20 15:43:26 compute-0 nova_compute[250018]: 2026-01-20 15:43:26.682 250022 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768923791.6810715, 858544da-d6c9-46d0-ac10-f36f6813e593 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 15:43:26 compute-0 nova_compute[250018]: 2026-01-20 15:43:26.682 250022 INFO nova.compute.manager [-] [instance: 858544da-d6c9-46d0-ac10-f36f6813e593] VM Stopped (Lifecycle Event)
Jan 20 15:43:26 compute-0 nova_compute[250018]: 2026-01-20 15:43:26.701 250022 DEBUG nova.compute.manager [None req-16331cf5-fc82-4c5f-9b91-8a20bfde2ee1 - - - - - -] [instance: 858544da-d6c9-46d0-ac10-f36f6813e593] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 15:43:26 compute-0 nova_compute[250018]: 2026-01-20 15:43:26.788 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:43:27 compute-0 nova_compute[250018]: 2026-01-20 15:43:27.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:43:27 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:43:27 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:43:27 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:43:27.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:43:27 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3638: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.4 KiB/s rd, 852 B/s wr, 6 op/s
Jan 20 15:43:27 compute-0 nova_compute[250018]: 2026-01-20 15:43:27.942 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:43:28 compute-0 nova_compute[250018]: 2026-01-20 15:43:28.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:43:28 compute-0 nova_compute[250018]: 2026-01-20 15:43:28.081 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:43:28 compute-0 nova_compute[250018]: 2026-01-20 15:43:28.081 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:43:28 compute-0 nova_compute[250018]: 2026-01-20 15:43:28.081 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:43:28 compute-0 nova_compute[250018]: 2026-01-20 15:43:28.082 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 15:43:28 compute-0 nova_compute[250018]: 2026-01-20 15:43:28.082 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:43:28 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:43:28 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:43:28 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:43:28.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:43:28 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:43:28 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3618259457' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:43:28 compute-0 nova_compute[250018]: 2026-01-20 15:43:28.502 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:43:28 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:43:28 compute-0 nova_compute[250018]: 2026-01-20 15:43:28.680 250022 WARNING nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:43:28 compute-0 nova_compute[250018]: 2026-01-20 15:43:28.682 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4164MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 15:43:28 compute-0 nova_compute[250018]: 2026-01-20 15:43:28.682 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:43:28 compute-0 nova_compute[250018]: 2026-01-20 15:43:28.683 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:43:28 compute-0 ceph-mon[74360]: pgmap v3638: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.4 KiB/s rd, 852 B/s wr, 6 op/s
Jan 20 15:43:28 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3618259457' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:43:28 compute-0 nova_compute[250018]: 2026-01-20 15:43:28.750 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 15:43:28 compute-0 nova_compute[250018]: 2026-01-20 15:43:28.750 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 15:43:28 compute-0 nova_compute[250018]: 2026-01-20 15:43:28.769 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:43:29 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:43:29 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2347421671' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:43:29 compute-0 nova_compute[250018]: 2026-01-20 15:43:29.212 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:43:29 compute-0 nova_compute[250018]: 2026-01-20 15:43:29.219 250022 DEBUG nova.compute.provider_tree [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:43:29 compute-0 nova_compute[250018]: 2026-01-20 15:43:29.239 250022 DEBUG nova.scheduler.client.report [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:43:29 compute-0 nova_compute[250018]: 2026-01-20 15:43:29.273 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 15:43:29 compute-0 nova_compute[250018]: 2026-01-20 15:43:29.273 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.590s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:43:29 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:43:29 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:43:29 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:43:29.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:43:29 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3639: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:43:29 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/3199149310' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:43:29 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/2347421671' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:43:29 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/204372452' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:43:30 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:43:30 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:43:30 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:43:30.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:43:30 compute-0 ceph-mon[74360]: pgmap v3639: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:43:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:43:30.816 160071 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:43:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:43:30.817 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:43:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:43:30.818 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:43:31 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:43:31 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:43:31 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:43:31.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:43:31 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3640: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:43:31 compute-0 ceph-mon[74360]: pgmap v3640: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:43:31 compute-0 nova_compute[250018]: 2026-01-20 15:43:31.792 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:43:32 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:43:32 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:43:32 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:43:32.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:43:32 compute-0 nova_compute[250018]: 2026-01-20 15:43:32.945 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:43:33 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:43:33.222 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=88, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=87) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:43:33 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:43:33.223 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 15:43:33 compute-0 nova_compute[250018]: 2026-01-20 15:43:33.223 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:43:33 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:43:33 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:43:33 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:43:33.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:43:33 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3641: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:43:33 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:43:33 compute-0 sudo[401121]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:43:33 compute-0 sudo[401121]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:43:33 compute-0 sudo[401121]: pam_unix(sudo:session): session closed for user root
Jan 20 15:43:33 compute-0 sudo[401146]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:43:33 compute-0 sudo[401146]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:43:33 compute-0 sudo[401146]: pam_unix(sudo:session): session closed for user root
Jan 20 15:43:34 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:43:34 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:43:34 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:43:34.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:43:34 compute-0 nova_compute[250018]: 2026-01-20 15:43:34.273 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:43:34 compute-0 ceph-mon[74360]: pgmap v3641: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:43:35 compute-0 nova_compute[250018]: 2026-01-20 15:43:35.045 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:43:35 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:43:35 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:43:35 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:43:35.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:43:35 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3642: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:43:36 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:43:36 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:43:36 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:43:36.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:43:36 compute-0 ceph-mon[74360]: pgmap v3642: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:43:36 compute-0 nova_compute[250018]: 2026-01-20 15:43:36.796 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:43:37 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:43:37 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:43:37 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:43:37.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:43:37 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3643: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:43:37 compute-0 ceph-mon[74360]: pgmap v3643: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:43:37 compute-0 nova_compute[250018]: 2026-01-20 15:43:37.946 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:43:38 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:43:38 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:43:38 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:43:38.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:43:38 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:43:38.225 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=367c1a2c-b16a-4828-ab5a-626bb50023b4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '88'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:43:38 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:43:38 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/1438658562' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:43:39 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:43:39 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:43:39 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:43:39.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:43:39 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3644: 321 pgs: 321 active+clean; 127 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.8 KiB/s rd, 219 KiB/s wr, 2 op/s
Jan 20 15:43:39 compute-0 ceph-mon[74360]: pgmap v3644: 321 pgs: 321 active+clean; 127 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.8 KiB/s rd, 219 KiB/s wr, 2 op/s
Jan 20 15:43:40 compute-0 nova_compute[250018]: 2026-01-20 15:43:40.046 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:43:40 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:43:40 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:43:40 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:43:40.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:43:41 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:43:41 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:43:41 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:43:41.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:43:41 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3645: 321 pgs: 321 active+clean; 163 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 9.4 KiB/s rd, 1.7 MiB/s wr, 18 op/s
Jan 20 15:43:41 compute-0 nova_compute[250018]: 2026-01-20 15:43:41.800 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:43:42 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:43:42 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:43:42 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:43:42.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:43:42 compute-0 ceph-mon[74360]: pgmap v3645: 321 pgs: 321 active+clean; 163 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 9.4 KiB/s rd, 1.7 MiB/s wr, 18 op/s
Jan 20 15:43:42 compute-0 nova_compute[250018]: 2026-01-20 15:43:42.948 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:43:43 compute-0 nova_compute[250018]: 2026-01-20 15:43:43.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:43:43 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:43:43 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:43:43 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:43:43.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:43:43 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3646: 321 pgs: 321 active+clean; 163 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 9.4 KiB/s rd, 1.7 MiB/s wr, 18 op/s
Jan 20 15:43:43 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:43:44 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:43:44 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:43:44 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:43:44.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:43:44 compute-0 ceph-mon[74360]: pgmap v3646: 321 pgs: 321 active+clean; 163 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 9.4 KiB/s rd, 1.7 MiB/s wr, 18 op/s
Jan 20 15:43:45 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:43:45 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:43:45 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:43:45.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:43:45 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3647: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:43:46 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:43:46 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:43:46 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:43:46.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:43:46 compute-0 ceph-mon[74360]: pgmap v3647: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:43:46 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/697914505' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:43:46 compute-0 nova_compute[250018]: 2026-01-20 15:43:46.804 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:43:47 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:43:47 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:43:47 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:43:47.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:43:47 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3648: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:43:47 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/3464045524' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:43:47 compute-0 nova_compute[250018]: 2026-01-20 15:43:47.950 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:43:48 compute-0 nova_compute[250018]: 2026-01-20 15:43:48.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:43:48 compute-0 nova_compute[250018]: 2026-01-20 15:43:48.051 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 15:43:48 compute-0 nova_compute[250018]: 2026-01-20 15:43:48.051 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 15:43:48 compute-0 nova_compute[250018]: 2026-01-20 15:43:48.063 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 20 15:43:48 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:43:48 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:43:48 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:43:48.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:43:48 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:43:48 compute-0 ceph-mon[74360]: pgmap v3648: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:43:49 compute-0 podman[401180]: 2026-01-20 15:43:49.452189965 +0000 UTC m=+0.049192053 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 20 15:43:49 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:43:49 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:43:49 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:43:49.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:43:49 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3649: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Jan 20 15:43:49 compute-0 podman[401179]: 2026-01-20 15:43:49.513072315 +0000 UTC m=+0.110551426 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, 
tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 15:43:49 compute-0 ceph-mon[74360]: pgmap v3649: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Jan 20 15:43:50 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:43:50 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:43:50 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:43:50.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:43:51 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:43:51 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:43:51 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:43:51.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:43:51 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3650: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 285 KiB/s rd, 1.6 MiB/s wr, 44 op/s
Jan 20 15:43:51 compute-0 nova_compute[250018]: 2026-01-20 15:43:51.842 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:43:52 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:43:52 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:43:52 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:43:52.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:43:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:43:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:43:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:43:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:43:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:43:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:43:52 compute-0 ceph-mon[74360]: pgmap v3650: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 285 KiB/s rd, 1.6 MiB/s wr, 44 op/s
Jan 20 15:43:52 compute-0 sshd-session[401227]: Invalid user guest from 134.122.57.138 port 38474
Jan 20 15:43:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Optimize plan auto_2026-01-20_15:43:52
Jan 20 15:43:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 15:43:52 compute-0 ceph-mgr[74653]: [balancer INFO root] do_upmap
Jan 20 15:43:52 compute-0 ceph-mgr[74653]: [balancer INFO root] pools ['default.rgw.control', '.mgr', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.log', '.rgw.root', 'default.rgw.meta', 'volumes', 'images', 'vms', 'backups']
Jan 20 15:43:52 compute-0 ceph-mgr[74653]: [balancer INFO root] prepared 0/10 changes
Jan 20 15:43:52 compute-0 sshd-session[401227]: Connection closed by invalid user guest 134.122.57.138 port 38474 [preauth]
Jan 20 15:43:53 compute-0 nova_compute[250018]: 2026-01-20 15:43:52.998 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:43:53 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:43:53 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:43:53 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:43:53.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:43:53 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3651: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 277 KiB/s rd, 56 KiB/s wr, 29 op/s
Jan 20 15:43:53 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:43:53 compute-0 sudo[401230]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:43:53 compute-0 sudo[401230]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:43:53 compute-0 sudo[401230]: pam_unix(sudo:session): session closed for user root
Jan 20 15:43:53 compute-0 sudo[401255]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:43:53 compute-0 sudo[401255]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:43:53 compute-0 sudo[401255]: pam_unix(sudo:session): session closed for user root
Jan 20 15:43:54 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:43:54 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:43:54 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:43:54.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:43:54 compute-0 ceph-mon[74360]: pgmap v3651: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 277 KiB/s rd, 56 KiB/s wr, 29 op/s
Jan 20 15:43:55 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:43:55 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:43:55 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:43:55.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:43:55 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3652: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 56 KiB/s wr, 82 op/s
Jan 20 15:43:56 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:43:56 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:43:56 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:43:56.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:43:56 compute-0 ceph-mon[74360]: pgmap v3652: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 56 KiB/s wr, 82 op/s
Jan 20 15:43:56 compute-0 nova_compute[250018]: 2026-01-20 15:43:56.845 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:43:57 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:43:57 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:43:57 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:43:57.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:43:57 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3653: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 20 15:43:58 compute-0 nova_compute[250018]: 2026-01-20 15:43:58.000 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:43:58 compute-0 ceph-mgr[74653]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 15:43:58 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 15:43:58 compute-0 ceph-mgr[74653]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 15:43:58 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 15:43:58 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 15:43:58 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 15:43:58 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 15:43:58 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 15:43:58 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 15:43:58 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 15:43:58 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:43:58 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:43:58 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:43:58.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:43:58 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:43:58 compute-0 ceph-mon[74360]: pgmap v3653: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 20 15:43:59 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:43:59 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.003000079s ======
Jan 20 15:43:59 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:43:59.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000079s
Jan 20 15:43:59 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3654: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 20 15:43:59 compute-0 ceph-mon[74360]: pgmap v3654: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 20 15:44:00 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:44:00 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:44:00 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:44:00.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:44:01 compute-0 ovn_controller[148666]: 2026-01-20T15:44:01Z|00818|memory_trim|INFO|Detected inactivity (last active 30004 ms ago): trimming memory
Jan 20 15:44:01 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:44:01 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:44:01 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:44:01.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:44:01 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3655: 321 pgs: 321 active+clean; 179 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 973 KiB/s wr, 83 op/s
Jan 20 15:44:01 compute-0 nova_compute[250018]: 2026-01-20 15:44:01.849 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:44:02 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:44:02 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:44:02 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:44:02.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:44:02 compute-0 ceph-mon[74360]: pgmap v3655: 321 pgs: 321 active+clean; 179 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 973 KiB/s wr, 83 op/s
Jan 20 15:44:03 compute-0 nova_compute[250018]: 2026-01-20 15:44:03.001 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:44:03 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:44:03 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:44:03 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:44:03.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:44:03 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3656: 321 pgs: 321 active+clean; 179 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 960 KiB/s wr, 65 op/s
Jan 20 15:44:03 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:44:04 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:44:04 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:44:04 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:44:04.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:44:04 compute-0 sudo[401286]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:44:04 compute-0 sudo[401286]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:44:04 compute-0 sudo[401286]: pam_unix(sudo:session): session closed for user root
Jan 20 15:44:04 compute-0 sudo[401311]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:44:04 compute-0 sudo[401311]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:44:04 compute-0 sudo[401311]: pam_unix(sudo:session): session closed for user root
Jan 20 15:44:04 compute-0 sudo[401336]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:44:04 compute-0 sudo[401336]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:44:04 compute-0 sudo[401336]: pam_unix(sudo:session): session closed for user root
Jan 20 15:44:04 compute-0 ceph-mon[74360]: pgmap v3656: 321 pgs: 321 active+clean; 179 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 960 KiB/s wr, 65 op/s
Jan 20 15:44:04 compute-0 sudo[401361]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 15:44:04 compute-0 sudo[401361]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:44:05 compute-0 sudo[401361]: pam_unix(sudo:session): session closed for user root
Jan 20 15:44:05 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 15:44:05 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:44:05 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 20 15:44:05 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 15:44:05 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 20 15:44:05 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:44:05 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev a81ddfe7-15e8-4e86-b765-eabafebd2523 does not exist
Jan 20 15:44:05 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev e61024d7-9f23-4f74-bb78-8d9938a769be does not exist
Jan 20 15:44:05 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 67ded290-e158-4ee0-81a4-e361ff7d7173 does not exist
Jan 20 15:44:05 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 20 15:44:05 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 15:44:05 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 20 15:44:05 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 15:44:05 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 15:44:05 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:44:05 compute-0 sudo[401418]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:44:05 compute-0 sudo[401418]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:44:05 compute-0 sudo[401418]: pam_unix(sudo:session): session closed for user root
Jan 20 15:44:05 compute-0 sudo[401443]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:44:05 compute-0 sudo[401443]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:44:05 compute-0 sudo[401443]: pam_unix(sudo:session): session closed for user root
Jan 20 15:44:05 compute-0 sudo[401468]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:44:05 compute-0 sudo[401468]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:44:05 compute-0 sudo[401468]: pam_unix(sudo:session): session closed for user root
Jan 20 15:44:05 compute-0 sudo[401493]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 15:44:05 compute-0 sudo[401493]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:44:05 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3657: 321 pgs: 321 active+clean; 196 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.0 MiB/s wr, 99 op/s
Jan 20 15:44:05 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:44:05 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:44:05 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:44:05.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:44:05 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:44:05 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 15:44:05 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:44:05 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 15:44:05 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 15:44:05 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:44:05 compute-0 podman[401559]: 2026-01-20 15:44:05.753019548 +0000 UTC m=+0.040740526 container create 768ad3af074692c4e43cec32383d0a88466785644fc35ec993de7fe8d2cf1030 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_cerf, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 20 15:44:05 compute-0 systemd[1]: Started libpod-conmon-768ad3af074692c4e43cec32383d0a88466785644fc35ec993de7fe8d2cf1030.scope.
Jan 20 15:44:05 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:44:05 compute-0 podman[401559]: 2026-01-20 15:44:05.829066127 +0000 UTC m=+0.116787125 container init 768ad3af074692c4e43cec32383d0a88466785644fc35ec993de7fe8d2cf1030 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_cerf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef)
Jan 20 15:44:05 compute-0 podman[401559]: 2026-01-20 15:44:05.738402811 +0000 UTC m=+0.026123809 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:44:05 compute-0 podman[401559]: 2026-01-20 15:44:05.838095471 +0000 UTC m=+0.125816479 container start 768ad3af074692c4e43cec32383d0a88466785644fc35ec993de7fe8d2cf1030 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_cerf, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Jan 20 15:44:05 compute-0 podman[401559]: 2026-01-20 15:44:05.841552256 +0000 UTC m=+0.129273254 container attach 768ad3af074692c4e43cec32383d0a88466785644fc35ec993de7fe8d2cf1030 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_cerf, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 15:44:05 compute-0 admiring_cerf[401575]: 167 167
Jan 20 15:44:05 compute-0 systemd[1]: libpod-768ad3af074692c4e43cec32383d0a88466785644fc35ec993de7fe8d2cf1030.scope: Deactivated successfully.
Jan 20 15:44:05 compute-0 podman[401559]: 2026-01-20 15:44:05.845305657 +0000 UTC m=+0.133026635 container died 768ad3af074692c4e43cec32383d0a88466785644fc35ec993de7fe8d2cf1030 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_cerf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 20 15:44:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-a87d3dc180d1676b788105643a32526ceb364c3d70bbf4b23a35333114a300ba-merged.mount: Deactivated successfully.
Jan 20 15:44:05 compute-0 podman[401559]: 2026-01-20 15:44:05.881704242 +0000 UTC m=+0.169425220 container remove 768ad3af074692c4e43cec32383d0a88466785644fc35ec993de7fe8d2cf1030 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_cerf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 15:44:05 compute-0 systemd[1]: libpod-conmon-768ad3af074692c4e43cec32383d0a88466785644fc35ec993de7fe8d2cf1030.scope: Deactivated successfully.
Jan 20 15:44:06 compute-0 podman[401599]: 2026-01-20 15:44:06.063928899 +0000 UTC m=+0.041637219 container create 0417b9c2b2d17b95f9fa4bcfa60db1656d6af19ef2982380a1977d60c0f2c66f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_spence, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 20 15:44:06 compute-0 systemd[1]: Started libpod-conmon-0417b9c2b2d17b95f9fa4bcfa60db1656d6af19ef2982380a1977d60c0f2c66f.scope.
Jan 20 15:44:06 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:44:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef62da96e30a60fe2dbe56d20857026882fea9028caa5e48d1a79090ced8ba1b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 15:44:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef62da96e30a60fe2dbe56d20857026882fea9028caa5e48d1a79090ced8ba1b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 15:44:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef62da96e30a60fe2dbe56d20857026882fea9028caa5e48d1a79090ced8ba1b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 15:44:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef62da96e30a60fe2dbe56d20857026882fea9028caa5e48d1a79090ced8ba1b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 15:44:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef62da96e30a60fe2dbe56d20857026882fea9028caa5e48d1a79090ced8ba1b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 15:44:06 compute-0 podman[401599]: 2026-01-20 15:44:06.046876856 +0000 UTC m=+0.024585186 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:44:06 compute-0 podman[401599]: 2026-01-20 15:44:06.147549293 +0000 UTC m=+0.125257623 container init 0417b9c2b2d17b95f9fa4bcfa60db1656d6af19ef2982380a1977d60c0f2c66f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_spence, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 20 15:44:06 compute-0 podman[401599]: 2026-01-20 15:44:06.153884635 +0000 UTC m=+0.131592945 container start 0417b9c2b2d17b95f9fa4bcfa60db1656d6af19ef2982380a1977d60c0f2c66f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_spence, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 20 15:44:06 compute-0 podman[401599]: 2026-01-20 15:44:06.157837033 +0000 UTC m=+0.135545383 container attach 0417b9c2b2d17b95f9fa4bcfa60db1656d6af19ef2982380a1977d60c0f2c66f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_spence, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True)
Jan 20 15:44:06 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:44:06 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:44:06 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:44:06.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:44:06 compute-0 ceph-mon[74360]: pgmap v3657: 321 pgs: 321 active+clean; 196 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.0 MiB/s wr, 99 op/s
Jan 20 15:44:06 compute-0 nova_compute[250018]: 2026-01-20 15:44:06.854 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:44:06 compute-0 admiring_spence[401616]: --> passed data devices: 0 physical, 1 LVM
Jan 20 15:44:06 compute-0 admiring_spence[401616]: --> relative data size: 1.0
Jan 20 15:44:06 compute-0 admiring_spence[401616]: --> All data devices are unavailable
Jan 20 15:44:06 compute-0 systemd[1]: libpod-0417b9c2b2d17b95f9fa4bcfa60db1656d6af19ef2982380a1977d60c0f2c66f.scope: Deactivated successfully.
Jan 20 15:44:06 compute-0 podman[401599]: 2026-01-20 15:44:06.988293706 +0000 UTC m=+0.966002016 container died 0417b9c2b2d17b95f9fa4bcfa60db1656d6af19ef2982380a1977d60c0f2c66f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_spence, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Jan 20 15:44:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-ef62da96e30a60fe2dbe56d20857026882fea9028caa5e48d1a79090ced8ba1b-merged.mount: Deactivated successfully.
Jan 20 15:44:07 compute-0 podman[401599]: 2026-01-20 15:44:07.041463896 +0000 UTC m=+1.019172206 container remove 0417b9c2b2d17b95f9fa4bcfa60db1656d6af19ef2982380a1977d60c0f2c66f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_spence, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Jan 20 15:44:07 compute-0 systemd[1]: libpod-conmon-0417b9c2b2d17b95f9fa4bcfa60db1656d6af19ef2982380a1977d60c0f2c66f.scope: Deactivated successfully.
Jan 20 15:44:07 compute-0 sudo[401493]: pam_unix(sudo:session): session closed for user root
Jan 20 15:44:07 compute-0 sudo[401644]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:44:07 compute-0 sudo[401644]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:44:07 compute-0 sudo[401644]: pam_unix(sudo:session): session closed for user root
Jan 20 15:44:07 compute-0 sudo[401669]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:44:07 compute-0 sudo[401669]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:44:07 compute-0 sudo[401669]: pam_unix(sudo:session): session closed for user root
Jan 20 15:44:07 compute-0 sudo[401694]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:44:07 compute-0 sudo[401694]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:44:07 compute-0 sudo[401694]: pam_unix(sudo:session): session closed for user root
Jan 20 15:44:07 compute-0 sudo[401719]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- lvm list --format json
Jan 20 15:44:07 compute-0 sudo[401719]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:44:07 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3658: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 305 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Jan 20 15:44:07 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:44:07 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:44:07 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:44:07.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:44:07 compute-0 podman[401785]: 2026-01-20 15:44:07.587522837 +0000 UTC m=+0.040044846 container create 38ae8be219002a40c5057f53d83dd1e9a7e6b3e94581116ccc218838c4db221f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_dhawan, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 20 15:44:07 compute-0 systemd[1]: Started libpod-conmon-38ae8be219002a40c5057f53d83dd1e9a7e6b3e94581116ccc218838c4db221f.scope.
Jan 20 15:44:07 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:44:07 compute-0 podman[401785]: 2026-01-20 15:44:07.660045581 +0000 UTC m=+0.112567650 container init 38ae8be219002a40c5057f53d83dd1e9a7e6b3e94581116ccc218838c4db221f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_dhawan, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef)
Jan 20 15:44:07 compute-0 podman[401785]: 2026-01-20 15:44:07.570206657 +0000 UTC m=+0.022728666 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:44:07 compute-0 podman[401785]: 2026-01-20 15:44:07.666006452 +0000 UTC m=+0.118528441 container start 38ae8be219002a40c5057f53d83dd1e9a7e6b3e94581116ccc218838c4db221f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_dhawan, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 15:44:07 compute-0 sharp_dhawan[401801]: 167 167
Jan 20 15:44:07 compute-0 systemd[1]: libpod-38ae8be219002a40c5057f53d83dd1e9a7e6b3e94581116ccc218838c4db221f.scope: Deactivated successfully.
Jan 20 15:44:07 compute-0 podman[401785]: 2026-01-20 15:44:07.669779184 +0000 UTC m=+0.122301273 container attach 38ae8be219002a40c5057f53d83dd1e9a7e6b3e94581116ccc218838c4db221f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_dhawan, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 15:44:07 compute-0 podman[401785]: 2026-01-20 15:44:07.670178695 +0000 UTC m=+0.122700715 container died 38ae8be219002a40c5057f53d83dd1e9a7e6b3e94581116ccc218838c4db221f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_dhawan, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 15:44:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-17bfb339569b96eac496a19e2d06a48dbc8edb1448bf56eefabd868f5c0251c7-merged.mount: Deactivated successfully.
Jan 20 15:44:07 compute-0 podman[401785]: 2026-01-20 15:44:07.704166726 +0000 UTC m=+0.156688715 container remove 38ae8be219002a40c5057f53d83dd1e9a7e6b3e94581116ccc218838c4db221f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_dhawan, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 15:44:07 compute-0 systemd[1]: libpod-conmon-38ae8be219002a40c5057f53d83dd1e9a7e6b3e94581116ccc218838c4db221f.scope: Deactivated successfully.
Jan 20 15:44:07 compute-0 podman[401824]: 2026-01-20 15:44:07.882170837 +0000 UTC m=+0.050015016 container create 88fa4e9f3e2f24bce2fb7fcdea797be2a8598e88c4b70f03514a4987924bcb35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_blackwell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 15:44:07 compute-0 systemd[1]: Started libpod-conmon-88fa4e9f3e2f24bce2fb7fcdea797be2a8598e88c4b70f03514a4987924bcb35.scope.
Jan 20 15:44:07 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:44:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d97bf0d01a437fcdddd45d0b21bf5d03375e564ba2b894fdba507d798126179e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 15:44:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d97bf0d01a437fcdddd45d0b21bf5d03375e564ba2b894fdba507d798126179e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 15:44:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d97bf0d01a437fcdddd45d0b21bf5d03375e564ba2b894fdba507d798126179e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 15:44:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d97bf0d01a437fcdddd45d0b21bf5d03375e564ba2b894fdba507d798126179e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 15:44:07 compute-0 podman[401824]: 2026-01-20 15:44:07.85975839 +0000 UTC m=+0.027602589 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:44:07 compute-0 podman[401824]: 2026-01-20 15:44:07.956345256 +0000 UTC m=+0.124189475 container init 88fa4e9f3e2f24bce2fb7fcdea797be2a8598e88c4b70f03514a4987924bcb35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_blackwell, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 15:44:07 compute-0 podman[401824]: 2026-01-20 15:44:07.964105987 +0000 UTC m=+0.131950166 container start 88fa4e9f3e2f24bce2fb7fcdea797be2a8598e88c4b70f03514a4987924bcb35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_blackwell, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 20 15:44:07 compute-0 podman[401824]: 2026-01-20 15:44:07.966885832 +0000 UTC m=+0.134730031 container attach 88fa4e9f3e2f24bce2fb7fcdea797be2a8598e88c4b70f03514a4987924bcb35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_blackwell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 15:44:08 compute-0 nova_compute[250018]: 2026-01-20 15:44:08.003 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:44:08 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:44:08 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:44:08 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:44:08.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:44:08 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:44:08 compute-0 ceph-mon[74360]: pgmap v3658: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 305 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Jan 20 15:44:08 compute-0 optimistic_blackwell[401840]: {
Jan 20 15:44:08 compute-0 optimistic_blackwell[401840]:     "0": [
Jan 20 15:44:08 compute-0 optimistic_blackwell[401840]:         {
Jan 20 15:44:08 compute-0 optimistic_blackwell[401840]:             "devices": [
Jan 20 15:44:08 compute-0 optimistic_blackwell[401840]:                 "/dev/loop3"
Jan 20 15:44:08 compute-0 optimistic_blackwell[401840]:             ],
Jan 20 15:44:08 compute-0 optimistic_blackwell[401840]:             "lv_name": "ceph_lv0",
Jan 20 15:44:08 compute-0 optimistic_blackwell[401840]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 15:44:08 compute-0 optimistic_blackwell[401840]:             "lv_size": "7511998464",
Jan 20 15:44:08 compute-0 optimistic_blackwell[401840]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e399cf45-e6b6-5393-99f1-75c601d3f188,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afc3740a-ccee-46f8-b343-22648cc89376,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 20 15:44:08 compute-0 optimistic_blackwell[401840]:             "lv_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 15:44:08 compute-0 optimistic_blackwell[401840]:             "name": "ceph_lv0",
Jan 20 15:44:08 compute-0 optimistic_blackwell[401840]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 15:44:08 compute-0 optimistic_blackwell[401840]:             "tags": {
Jan 20 15:44:08 compute-0 optimistic_blackwell[401840]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 15:44:08 compute-0 optimistic_blackwell[401840]:                 "ceph.block_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 15:44:08 compute-0 optimistic_blackwell[401840]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 15:44:08 compute-0 optimistic_blackwell[401840]:                 "ceph.cluster_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 15:44:08 compute-0 optimistic_blackwell[401840]:                 "ceph.cluster_name": "ceph",
Jan 20 15:44:08 compute-0 optimistic_blackwell[401840]:                 "ceph.crush_device_class": "",
Jan 20 15:44:08 compute-0 optimistic_blackwell[401840]:                 "ceph.encrypted": "0",
Jan 20 15:44:08 compute-0 optimistic_blackwell[401840]:                 "ceph.osd_fsid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 15:44:08 compute-0 optimistic_blackwell[401840]:                 "ceph.osd_id": "0",
Jan 20 15:44:08 compute-0 optimistic_blackwell[401840]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 15:44:08 compute-0 optimistic_blackwell[401840]:                 "ceph.type": "block",
Jan 20 15:44:08 compute-0 optimistic_blackwell[401840]:                 "ceph.vdo": "0"
Jan 20 15:44:08 compute-0 optimistic_blackwell[401840]:             },
Jan 20 15:44:08 compute-0 optimistic_blackwell[401840]:             "type": "block",
Jan 20 15:44:08 compute-0 optimistic_blackwell[401840]:             "vg_name": "ceph_vg0"
Jan 20 15:44:08 compute-0 optimistic_blackwell[401840]:         }
Jan 20 15:44:08 compute-0 optimistic_blackwell[401840]:     ]
Jan 20 15:44:08 compute-0 optimistic_blackwell[401840]: }
Jan 20 15:44:08 compute-0 systemd[1]: libpod-88fa4e9f3e2f24bce2fb7fcdea797be2a8598e88c4b70f03514a4987924bcb35.scope: Deactivated successfully.
Jan 20 15:44:08 compute-0 podman[401850]: 2026-01-20 15:44:08.766705846 +0000 UTC m=+0.024861195 container died 88fa4e9f3e2f24bce2fb7fcdea797be2a8598e88c4b70f03514a4987924bcb35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_blackwell, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 20 15:44:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-d97bf0d01a437fcdddd45d0b21bf5d03375e564ba2b894fdba507d798126179e-merged.mount: Deactivated successfully.
Jan 20 15:44:08 compute-0 podman[401850]: 2026-01-20 15:44:08.824548062 +0000 UTC m=+0.082703391 container remove 88fa4e9f3e2f24bce2fb7fcdea797be2a8598e88c4b70f03514a4987924bcb35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_blackwell, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 15:44:08 compute-0 systemd[1]: libpod-conmon-88fa4e9f3e2f24bce2fb7fcdea797be2a8598e88c4b70f03514a4987924bcb35.scope: Deactivated successfully.
Jan 20 15:44:08 compute-0 sudo[401719]: pam_unix(sudo:session): session closed for user root
Jan 20 15:44:08 compute-0 sudo[401865]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:44:08 compute-0 sudo[401865]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:44:08 compute-0 sudo[401865]: pam_unix(sudo:session): session closed for user root
Jan 20 15:44:09 compute-0 sudo[401890]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:44:09 compute-0 sudo[401890]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:44:09 compute-0 sudo[401890]: pam_unix(sudo:session): session closed for user root
Jan 20 15:44:09 compute-0 sudo[401915]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:44:09 compute-0 sudo[401915]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:44:09 compute-0 sudo[401915]: pam_unix(sudo:session): session closed for user root
Jan 20 15:44:09 compute-0 sudo[401940]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- raw list --format json
Jan 20 15:44:09 compute-0 sudo[401940]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:44:09 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3659: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 305 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Jan 20 15:44:09 compute-0 podman[402004]: 2026-01-20 15:44:09.511298594 +0000 UTC m=+0.038129585 container create e9915e03a78e9a5ef1f44f8f6c74e041101e2e81147f81dbfba4e91d48c95c85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_edison, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 15:44:09 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:44:09 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:44:09 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:44:09.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:44:09 compute-0 systemd[1]: Started libpod-conmon-e9915e03a78e9a5ef1f44f8f6c74e041101e2e81147f81dbfba4e91d48c95c85.scope.
Jan 20 15:44:09 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:44:09 compute-0 podman[402004]: 2026-01-20 15:44:09.570736463 +0000 UTC m=+0.097567474 container init e9915e03a78e9a5ef1f44f8f6c74e041101e2e81147f81dbfba4e91d48c95c85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_edison, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 15:44:09 compute-0 podman[402004]: 2026-01-20 15:44:09.576180291 +0000 UTC m=+0.103011292 container start e9915e03a78e9a5ef1f44f8f6c74e041101e2e81147f81dbfba4e91d48c95c85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_edison, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 15:44:09 compute-0 serene_edison[402021]: 167 167
Jan 20 15:44:09 compute-0 podman[402004]: 2026-01-20 15:44:09.579223893 +0000 UTC m=+0.106054884 container attach e9915e03a78e9a5ef1f44f8f6c74e041101e2e81147f81dbfba4e91d48c95c85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_edison, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 15:44:09 compute-0 systemd[1]: libpod-e9915e03a78e9a5ef1f44f8f6c74e041101e2e81147f81dbfba4e91d48c95c85.scope: Deactivated successfully.
Jan 20 15:44:09 compute-0 podman[402004]: 2026-01-20 15:44:09.579855531 +0000 UTC m=+0.106686522 container died e9915e03a78e9a5ef1f44f8f6c74e041101e2e81147f81dbfba4e91d48c95c85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_edison, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 15:44:09 compute-0 podman[402004]: 2026-01-20 15:44:09.497101449 +0000 UTC m=+0.023932450 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:44:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-9b77a73efdbc860b30e26f27c6a5a1d3481a340872756de100f2e9ef23de5ccf-merged.mount: Deactivated successfully.
Jan 20 15:44:09 compute-0 podman[402004]: 2026-01-20 15:44:09.610672305 +0000 UTC m=+0.137503296 container remove e9915e03a78e9a5ef1f44f8f6c74e041101e2e81147f81dbfba4e91d48c95c85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_edison, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 20 15:44:09 compute-0 systemd[1]: libpod-conmon-e9915e03a78e9a5ef1f44f8f6c74e041101e2e81147f81dbfba4e91d48c95c85.scope: Deactivated successfully.
Jan 20 15:44:09 compute-0 podman[402046]: 2026-01-20 15:44:09.794981617 +0000 UTC m=+0.049642105 container create f5f257854819ee07b21131c60d080176d5ffe5e60b3952cc6078deb0edd52fdf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_tesla, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 20 15:44:09 compute-0 systemd[1]: Started libpod-conmon-f5f257854819ee07b21131c60d080176d5ffe5e60b3952cc6078deb0edd52fdf.scope.
Jan 20 15:44:09 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:44:09 compute-0 podman[402046]: 2026-01-20 15:44:09.767430431 +0000 UTC m=+0.022090979 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:44:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98daf1ee2edda688799c6e8bcdb72d18c28120e94a5247851eaf9e27b514b1f1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 15:44:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98daf1ee2edda688799c6e8bcdb72d18c28120e94a5247851eaf9e27b514b1f1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 15:44:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98daf1ee2edda688799c6e8bcdb72d18c28120e94a5247851eaf9e27b514b1f1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 15:44:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98daf1ee2edda688799c6e8bcdb72d18c28120e94a5247851eaf9e27b514b1f1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 15:44:09 compute-0 podman[402046]: 2026-01-20 15:44:09.88663825 +0000 UTC m=+0.141298738 container init f5f257854819ee07b21131c60d080176d5ffe5e60b3952cc6078deb0edd52fdf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_tesla, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 15:44:09 compute-0 podman[402046]: 2026-01-20 15:44:09.893168836 +0000 UTC m=+0.147829294 container start f5f257854819ee07b21131c60d080176d5ffe5e60b3952cc6078deb0edd52fdf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_tesla, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 15:44:09 compute-0 podman[402046]: 2026-01-20 15:44:09.896347183 +0000 UTC m=+0.151007641 container attach f5f257854819ee07b21131c60d080176d5ffe5e60b3952cc6078deb0edd52fdf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_tesla, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 15:44:10 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:44:10 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:44:10 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:44:10.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:44:10 compute-0 ceph-mon[74360]: pgmap v3659: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 305 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Jan 20 15:44:10 compute-0 funny_tesla[402062]: {
Jan 20 15:44:10 compute-0 funny_tesla[402062]:     "afc3740a-ccee-46f8-b343-22648cc89376": {
Jan 20 15:44:10 compute-0 funny_tesla[402062]:         "ceph_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 15:44:10 compute-0 funny_tesla[402062]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 20 15:44:10 compute-0 funny_tesla[402062]:         "osd_id": 0,
Jan 20 15:44:10 compute-0 funny_tesla[402062]:         "osd_uuid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 15:44:10 compute-0 funny_tesla[402062]:         "type": "bluestore"
Jan 20 15:44:10 compute-0 funny_tesla[402062]:     }
Jan 20 15:44:10 compute-0 funny_tesla[402062]: }
Jan 20 15:44:10 compute-0 systemd[1]: libpod-f5f257854819ee07b21131c60d080176d5ffe5e60b3952cc6078deb0edd52fdf.scope: Deactivated successfully.
Jan 20 15:44:10 compute-0 podman[402046]: 2026-01-20 15:44:10.70664333 +0000 UTC m=+0.961303778 container died f5f257854819ee07b21131c60d080176d5ffe5e60b3952cc6078deb0edd52fdf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_tesla, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Jan 20 15:44:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-98daf1ee2edda688799c6e8bcdb72d18c28120e94a5247851eaf9e27b514b1f1-merged.mount: Deactivated successfully.
Jan 20 15:44:10 compute-0 podman[402046]: 2026-01-20 15:44:10.751508175 +0000 UTC m=+1.006168623 container remove f5f257854819ee07b21131c60d080176d5ffe5e60b3952cc6078deb0edd52fdf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_tesla, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 15:44:10 compute-0 systemd[1]: libpod-conmon-f5f257854819ee07b21131c60d080176d5ffe5e60b3952cc6078deb0edd52fdf.scope: Deactivated successfully.
Jan 20 15:44:10 compute-0 sudo[401940]: pam_unix(sudo:session): session closed for user root
Jan 20 15:44:10 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 15:44:10 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:44:10 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 15:44:10 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:44:10 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev fef42bbe-d8bc-491e-b90f-f114aa50d21c does not exist
Jan 20 15:44:10 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 518d9a4c-8e91-4619-8794-0c98e0f63ce2 does not exist
Jan 20 15:44:10 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 36425e8d-4a1b-4798-abba-cc047d04095d does not exist
Jan 20 15:44:10 compute-0 sudo[402095]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:44:10 compute-0 sudo[402095]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:44:10 compute-0 sudo[402095]: pam_unix(sudo:session): session closed for user root
Jan 20 15:44:10 compute-0 sudo[402120]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 15:44:10 compute-0 sudo[402120]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:44:10 compute-0 sudo[402120]: pam_unix(sudo:session): session closed for user root
Jan 20 15:44:11 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3660: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 305 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Jan 20 15:44:11 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:44:11 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:44:11 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:44:11.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:44:11 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:44:11 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:44:11 compute-0 ceph-mon[74360]: pgmap v3660: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 305 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Jan 20 15:44:11 compute-0 nova_compute[250018]: 2026-01-20 15:44:11.858 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:44:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 15:44:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:44:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 20 15:44:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:44:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.002168684859482598 of space, bias 1.0, pg target 0.6506054578447794 quantized to 32 (current 32)
Jan 20 15:44:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:44:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021614147124511445 of space, bias 1.0, pg target 0.6484244137353433 quantized to 32 (current 32)
Jan 20 15:44:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:44:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 20 15:44:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:44:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 20 15:44:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:44:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 20 15:44:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:44:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 15:44:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:44:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 20 15:44:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:44:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 20 15:44:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:44:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 15:44:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:44:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 20 15:44:12 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:44:12 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:44:12 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:44:12.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:44:13 compute-0 nova_compute[250018]: 2026-01-20 15:44:13.005 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:44:13 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3661: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 258 KiB/s rd, 1.2 MiB/s wr, 50 op/s
Jan 20 15:44:13 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:44:13 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:44:13 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:44:13 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:44:13.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:44:13 compute-0 sudo[402146]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:44:13 compute-0 sudo[402146]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:44:13 compute-0 sudo[402146]: pam_unix(sudo:session): session closed for user root
Jan 20 15:44:13 compute-0 sudo[402171]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:44:13 compute-0 sudo[402171]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:44:13 compute-0 sudo[402171]: pam_unix(sudo:session): session closed for user root
Jan 20 15:44:14 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:44:14 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:44:14 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:44:14.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:44:14 compute-0 ceph-mon[74360]: pgmap v3661: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 258 KiB/s rd, 1.2 MiB/s wr, 50 op/s
Jan 20 15:44:14 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/889081625' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:44:14 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/889081625' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:44:15 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3662: 321 pgs: 321 active+clean; 158 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 260 KiB/s rd, 1.2 MiB/s wr, 55 op/s
Jan 20 15:44:15 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:44:15 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:44:15 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:44:15.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:44:16 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:44:16 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:44:16 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:44:16.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:44:16 compute-0 ceph-mon[74360]: pgmap v3662: 321 pgs: 321 active+clean; 158 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 260 KiB/s rd, 1.2 MiB/s wr, 55 op/s
Jan 20 15:44:16 compute-0 nova_compute[250018]: 2026-01-20 15:44:16.860 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:44:17 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:44:17.178 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=89, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=88) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:44:17 compute-0 nova_compute[250018]: 2026-01-20 15:44:17.178 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:44:17 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:44:17.179 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 15:44:17 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3663: 321 pgs: 321 active+clean; 135 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 68 KiB/s rd, 100 KiB/s wr, 28 op/s
Jan 20 15:44:17 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:44:17 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:44:17 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:44:17.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:44:18 compute-0 nova_compute[250018]: 2026-01-20 15:44:18.006 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:44:18 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:44:18 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:44:18 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:44:18.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:44:18 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:44:18 compute-0 ceph-mon[74360]: pgmap v3663: 321 pgs: 321 active+clean; 135 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 68 KiB/s rd, 100 KiB/s wr, 28 op/s
Jan 20 15:44:18 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/3037305259' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:44:19 compute-0 nova_compute[250018]: 2026-01-20 15:44:19.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:44:19 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3664: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 12 KiB/s rd, 17 KiB/s wr, 20 op/s
Jan 20 15:44:19 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:44:19 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:44:19 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:44:19.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:44:20 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:44:20.181 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=367c1a2c-b16a-4828-ab5a-626bb50023b4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '89'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:44:20 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:44:20 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:44:20 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:44:20.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:44:20 compute-0 podman[402201]: 2026-01-20 15:44:20.484076012 +0000 UTC m=+0.066606205 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 20 15:44:20 compute-0 podman[402200]: 2026-01-20 15:44:20.539159594 +0000 UTC m=+0.131241306 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 15:44:20 compute-0 ceph-mon[74360]: pgmap v3664: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 12 KiB/s rd, 17 KiB/s wr, 20 op/s
Jan 20 15:44:21 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3665: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 17 KiB/s wr, 28 op/s
Jan 20 15:44:21 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:44:21 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:44:21 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:44:21.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:44:21 compute-0 nova_compute[250018]: 2026-01-20 15:44:21.862 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:44:22 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:44:22 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:44:22 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:44:22.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:44:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:44:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:44:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:44:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:44:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:44:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:44:22 compute-0 ceph-mon[74360]: pgmap v3665: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 17 KiB/s wr, 28 op/s
Jan 20 15:44:23 compute-0 nova_compute[250018]: 2026-01-20 15:44:23.008 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:44:23 compute-0 nova_compute[250018]: 2026-01-20 15:44:23.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:44:23 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3666: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 5.8 KiB/s wr, 27 op/s
Jan 20 15:44:23 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:44:23 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:44:23 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:44:23 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:44:23.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:44:24 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:44:24 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:44:24 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:44:24.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:44:24 compute-0 ceph-mon[74360]: pgmap v3666: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 5.8 KiB/s wr, 27 op/s
Jan 20 15:44:24 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/2793886615' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:44:25 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3667: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 5.8 KiB/s wr, 27 op/s
Jan 20 15:44:25 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:44:25 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:44:25 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:44:25.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:44:25 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/2855771777' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:44:26 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:44:26 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:44:26 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:44:26.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:44:26 compute-0 ceph-mon[74360]: pgmap v3667: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 5.8 KiB/s wr, 27 op/s
Jan 20 15:44:26 compute-0 nova_compute[250018]: 2026-01-20 15:44:26.865 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:44:27 compute-0 nova_compute[250018]: 2026-01-20 15:44:27.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:44:27 compute-0 nova_compute[250018]: 2026-01-20 15:44:27.050 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 15:44:27 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3668: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 5.5 KiB/s wr, 22 op/s
Jan 20 15:44:27 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:44:27 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:44:27 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:44:27.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:44:27 compute-0 ceph-mon[74360]: pgmap v3668: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 5.5 KiB/s wr, 22 op/s
Jan 20 15:44:28 compute-0 nova_compute[250018]: 2026-01-20 15:44:28.010 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:44:28 compute-0 nova_compute[250018]: 2026-01-20 15:44:28.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:44:28 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:44:28 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:44:28 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:44:28.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:44:28 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:44:29 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3669: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 13 KiB/s rd, 0 B/s wr, 16 op/s
Jan 20 15:44:29 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:44:29 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:44:29 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:44:29.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:44:30 compute-0 nova_compute[250018]: 2026-01-20 15:44:30.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:44:30 compute-0 nova_compute[250018]: 2026-01-20 15:44:30.078 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:44:30 compute-0 nova_compute[250018]: 2026-01-20 15:44:30.078 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:44:30 compute-0 nova_compute[250018]: 2026-01-20 15:44:30.079 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:44:30 compute-0 nova_compute[250018]: 2026-01-20 15:44:30.079 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 15:44:30 compute-0 nova_compute[250018]: 2026-01-20 15:44:30.079 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:44:30 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:44:30 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:44:30 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:44:30.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:44:30 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:44:30 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1016780540' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:44:30 compute-0 nova_compute[250018]: 2026-01-20 15:44:30.547 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:44:30 compute-0 ceph-mon[74360]: pgmap v3669: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 13 KiB/s rd, 0 B/s wr, 16 op/s
Jan 20 15:44:30 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/1016780540' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:44:30 compute-0 nova_compute[250018]: 2026-01-20 15:44:30.692 250022 WARNING nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:44:30 compute-0 nova_compute[250018]: 2026-01-20 15:44:30.693 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4166MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 15:44:30 compute-0 nova_compute[250018]: 2026-01-20 15:44:30.693 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:44:30 compute-0 nova_compute[250018]: 2026-01-20 15:44:30.694 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:44:30 compute-0 nova_compute[250018]: 2026-01-20 15:44:30.814 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 15:44:30 compute-0 nova_compute[250018]: 2026-01-20 15:44:30.814 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 15:44:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:44:30.818 160071 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:44:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:44:30.818 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:44:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:44:30.819 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:44:30 compute-0 nova_compute[250018]: 2026-01-20 15:44:30.830 250022 DEBUG nova.scheduler.client.report [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Refreshing inventories for resource provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 20 15:44:30 compute-0 nova_compute[250018]: 2026-01-20 15:44:30.844 250022 DEBUG nova.scheduler.client.report [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Updating ProviderTree inventory for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 20 15:44:30 compute-0 nova_compute[250018]: 2026-01-20 15:44:30.845 250022 DEBUG nova.compute.provider_tree [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Updating inventory in ProviderTree for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 20 15:44:30 compute-0 nova_compute[250018]: 2026-01-20 15:44:30.880 250022 DEBUG nova.scheduler.client.report [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Refreshing aggregate associations for resource provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 20 15:44:30 compute-0 nova_compute[250018]: 2026-01-20 15:44:30.902 250022 DEBUG nova.scheduler.client.report [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Refreshing trait associations for resource provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed, traits: COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_TRUSTED_CERTS,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NODE,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_RESCUE_BFV,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SSE2,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VOLUME_EXTEND,COMPUTE_ACCELERATORS,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_SATA,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE41,HW_CPU_X86_MMX,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_IDE,COMPUTE_DEVICE_TAGGING _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 20 15:44:30 compute-0 nova_compute[250018]: 2026-01-20 15:44:30.930 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:44:31 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:44:31 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3067520434' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:44:31 compute-0 nova_compute[250018]: 2026-01-20 15:44:31.417 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:44:31 compute-0 nova_compute[250018]: 2026-01-20 15:44:31.422 250022 DEBUG nova.compute.provider_tree [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:44:31 compute-0 nova_compute[250018]: 2026-01-20 15:44:31.438 250022 DEBUG nova.scheduler.client.report [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:44:31 compute-0 nova_compute[250018]: 2026-01-20 15:44:31.440 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 15:44:31 compute-0 nova_compute[250018]: 2026-01-20 15:44:31.441 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.747s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:44:31 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3670: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 6.8 KiB/s rd, 0 B/s wr, 8 op/s
Jan 20 15:44:31 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:44:31 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:44:31 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:44:31.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:44:31 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/2706026692' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:44:31 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3067520434' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:44:31 compute-0 nova_compute[250018]: 2026-01-20 15:44:31.870 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:44:32 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:44:32 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:44:32 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:44:32.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:44:32 compute-0 ceph-mon[74360]: pgmap v3670: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 6.8 KiB/s rd, 0 B/s wr, 8 op/s
Jan 20 15:44:32 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/1480609939' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:44:33 compute-0 nova_compute[250018]: 2026-01-20 15:44:33.010 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:44:33 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:44:33 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3671: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:44:33 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:44:33 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:44:33 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:44:33.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:44:33 compute-0 sudo[402295]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:44:33 compute-0 sudo[402295]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:44:33 compute-0 sudo[402295]: pam_unix(sudo:session): session closed for user root
Jan 20 15:44:33 compute-0 sudo[402320]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:44:33 compute-0 sudo[402320]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:44:34 compute-0 sudo[402320]: pam_unix(sudo:session): session closed for user root
Jan 20 15:44:34 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:44:34 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:44:34 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:44:34.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:44:34 compute-0 nova_compute[250018]: 2026-01-20 15:44:34.441 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:44:34 compute-0 ceph-mon[74360]: pgmap v3671: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:44:35 compute-0 sshd-session[402293]: Invalid user guest from 134.122.57.138 port 52134
Jan 20 15:44:35 compute-0 sshd-session[402293]: Connection closed by invalid user guest 134.122.57.138 port 52134 [preauth]
Jan 20 15:44:35 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3672: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:44:35 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:44:35 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:44:35 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:44:35.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:44:35 compute-0 ceph-mon[74360]: pgmap v3672: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:44:36 compute-0 nova_compute[250018]: 2026-01-20 15:44:36.044 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:44:36 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:44:36 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:44:36 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:44:36.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:44:36 compute-0 nova_compute[250018]: 2026-01-20 15:44:36.873 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:44:37 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/3380038031' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:44:37 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3673: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:44:37 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:44:37 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:44:37 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:44:37.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:44:38 compute-0 nova_compute[250018]: 2026-01-20 15:44:38.013 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:44:38 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:44:38 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:44:38 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:44:38.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:44:38 compute-0 ceph-mon[74360]: pgmap v3673: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:44:38 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:44:39 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3674: 321 pgs: 321 active+clean; 126 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 6.4 KiB/s rd, 270 KiB/s wr, 11 op/s
Jan 20 15:44:39 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:44:39 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:44:39 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:44:39.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:44:40 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:44:40 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:44:40 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:44:40.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:44:40 compute-0 ceph-mon[74360]: pgmap v3674: 321 pgs: 321 active+clean; 126 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 6.4 KiB/s rd, 270 KiB/s wr, 11 op/s
Jan 20 15:44:41 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3675: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:44:41 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:44:41 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:44:41 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:44:41.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:44:41 compute-0 nova_compute[250018]: 2026-01-20 15:44:41.916 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:44:42 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:44:42 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:44:42 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:44:42.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:44:42 compute-0 ceph-mon[74360]: pgmap v3675: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:44:43 compute-0 nova_compute[250018]: 2026-01-20 15:44:43.067 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:44:43 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:44:43 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3676: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:44:43 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:44:43 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:44:43 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:44:43.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:44:43 compute-0 ceph-mon[74360]: pgmap v3676: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:44:44 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:44:44 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:44:44 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:44:44.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:44:44 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/2712563561' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:44:45 compute-0 nova_compute[250018]: 2026-01-20 15:44:45.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:44:45 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3677: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:44:45 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:44:45 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:44:45 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:44:45.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:44:45 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/3294861214' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:44:45 compute-0 ceph-mon[74360]: pgmap v3677: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:44:46 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:44:46 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:44:46 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:44:46.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:44:46 compute-0 nova_compute[250018]: 2026-01-20 15:44:46.980 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:44:47 compute-0 nova_compute[250018]: 2026-01-20 15:44:47.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:44:47 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3678: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:44:47 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:44:47 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:44:47 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:44:47.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:44:48 compute-0 nova_compute[250018]: 2026-01-20 15:44:48.107 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:44:48 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:44:48 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:44:48 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:44:48.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:44:48 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:44:48 compute-0 ceph-mon[74360]: pgmap v3678: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:44:49 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3679: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Jan 20 15:44:49 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:44:49 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:44:49 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:44:49.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:44:50 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:44:50 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:44:50 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:44:50.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:44:50 compute-0 ceph-mon[74360]: pgmap v3679: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Jan 20 15:44:51 compute-0 nova_compute[250018]: 2026-01-20 15:44:51.225 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:44:51 compute-0 nova_compute[250018]: 2026-01-20 15:44:51.225 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 15:44:51 compute-0 nova_compute[250018]: 2026-01-20 15:44:51.225 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 15:44:51 compute-0 podman[402355]: 2026-01-20 15:44:51.528000604 +0000 UTC m=+0.103787413 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent)
Jan 20 15:44:51 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3680: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 590 KiB/s rd, 1.5 MiB/s wr, 45 op/s
Jan 20 15:44:51 compute-0 podman[402354]: 2026-01-20 15:44:51.553847263 +0000 UTC m=+0.132199761 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Jan 20 15:44:51 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:44:51 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:44:51 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:44:51.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:44:51 compute-0 nova_compute[250018]: 2026-01-20 15:44:51.675 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 20 15:44:51 compute-0 nova_compute[250018]: 2026-01-20 15:44:51.993 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:44:52 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:44:52 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:44:52 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:44:52.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:44:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:44:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:44:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:44:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:44:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:44:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:44:52 compute-0 ceph-mon[74360]: pgmap v3680: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 590 KiB/s rd, 1.5 MiB/s wr, 45 op/s
Jan 20 15:44:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Optimize plan auto_2026-01-20_15:44:52
Jan 20 15:44:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 15:44:52 compute-0 ceph-mgr[74653]: [balancer INFO root] do_upmap
Jan 20 15:44:52 compute-0 ceph-mgr[74653]: [balancer INFO root] pools ['default.rgw.meta', 'default.rgw.log', 'cephfs.cephfs.meta', 'backups', 'images', 'vms', 'volumes', '.rgw.root', 'default.rgw.control', 'cephfs.cephfs.data', '.mgr']
Jan 20 15:44:52 compute-0 ceph-mgr[74653]: [balancer INFO root] prepared 0/10 changes
Jan 20 15:44:53 compute-0 nova_compute[250018]: 2026-01-20 15:44:53.153 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:44:53 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:44:53 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3681: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 579 KiB/s rd, 12 KiB/s wr, 29 op/s
Jan 20 15:44:53 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:44:53 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:44:53 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:44:53.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:44:54 compute-0 nova_compute[250018]: 2026-01-20 15:44:54.052 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:44:54 compute-0 nova_compute[250018]: 2026-01-20 15:44:54.052 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 20 15:44:54 compute-0 sudo[402399]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:44:54 compute-0 sudo[402399]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:44:54 compute-0 sudo[402399]: pam_unix(sudo:session): session closed for user root
Jan 20 15:44:54 compute-0 sudo[402424]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:44:54 compute-0 sudo[402424]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:44:54 compute-0 sudo[402424]: pam_unix(sudo:session): session closed for user root
Jan 20 15:44:54 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:44:54 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:44:54 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:44:54.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:44:54 compute-0 ceph-mon[74360]: pgmap v3681: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 579 KiB/s rd, 12 KiB/s wr, 29 op/s
Jan 20 15:44:55 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3682: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.3 MiB/s rd, 12 KiB/s wr, 52 op/s
Jan 20 15:44:55 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:44:55 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:44:55 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:44:55.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:44:56 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:44:56 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:44:56 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:44:56.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:44:56 compute-0 ceph-mon[74360]: pgmap v3682: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.3 MiB/s rd, 12 KiB/s wr, 52 op/s
Jan 20 15:44:56 compute-0 nova_compute[250018]: 2026-01-20 15:44:56.996 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:44:57 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3683: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 20 15:44:57 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:44:57 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:44:57 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:44:57.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:44:58 compute-0 ceph-mgr[74653]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 15:44:58 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 15:44:58 compute-0 ceph-mgr[74653]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 15:44:58 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 15:44:58 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 15:44:58 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 15:44:58 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 15:44:58 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 15:44:58 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 15:44:58 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 15:44:58 compute-0 nova_compute[250018]: 2026-01-20 15:44:58.198 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:44:58 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:44:58 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:44:58 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:44:58.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:44:58 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:44:58 compute-0 ceph-mon[74360]: pgmap v3683: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 20 15:44:59 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3684: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 20 15:44:59 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:44:59 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:44:59 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:44:59.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:45:00 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:45:00 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:45:00 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:45:00.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:45:00 compute-0 ceph-mon[74360]: pgmap v3684: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 20 15:45:01 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3685: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 68 op/s
Jan 20 15:45:01 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:45:01 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:45:01 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:45:01.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:45:02 compute-0 nova_compute[250018]: 2026-01-20 15:45:02.000 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:45:02 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:45:02 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:45:02 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:45:02.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:45:02 compute-0 ceph-mon[74360]: pgmap v3685: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 68 op/s
Jan 20 15:45:03 compute-0 nova_compute[250018]: 2026-01-20 15:45:03.198 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:45:03 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:45:03 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3686: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.3 MiB/s rd, 44 op/s
Jan 20 15:45:03 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:45:03 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:45:03 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:45:03.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:45:04 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:45:04 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:45:04 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:45:04.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:45:04 compute-0 ceph-mon[74360]: pgmap v3686: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.3 MiB/s rd, 44 op/s
Jan 20 15:45:05 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3687: 321 pgs: 321 active+clean; 195 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.5 MiB/s rd, 1.2 MiB/s wr, 88 op/s
Jan 20 15:45:05 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:45:05 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:45:05 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:45:05.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:45:06 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:45:06 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:45:06 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:45:06.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:45:06 compute-0 ceph-mon[74360]: pgmap v3687: 321 pgs: 321 active+clean; 195 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.5 MiB/s rd, 1.2 MiB/s wr, 88 op/s
Jan 20 15:45:07 compute-0 nova_compute[250018]: 2026-01-20 15:45:07.005 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:45:07 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3688: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1004 KiB/s rd, 2.1 MiB/s wr, 85 op/s
Jan 20 15:45:07 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:45:07 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:45:07 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:45:07.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:45:07 compute-0 ceph-mon[74360]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 20 15:45:07 compute-0 ceph-mon[74360]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6600.0 total, 600.0 interval
                                           Cumulative writes: 18K writes, 82K keys, 18K commit groups, 1.0 writes per commit group, ingest: 0.12 GB, 0.02 MB/s
                                           Cumulative WAL: 18K writes, 18K syncs, 1.00 writes per sync, written: 0.12 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1494 writes, 6101 keys, 1494 commit groups, 1.0 writes per commit group, ingest: 10.17 MB, 0.02 MB/s
                                           Interval WAL: 1494 writes, 1494 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     58.0      1.83              0.35        57    0.032       0      0       0.0       0.0
                                             L6      1/0   12.04 MB   0.0      0.7     0.1      0.6       0.6      0.0       0.0   5.4    146.9    126.2      4.56              1.79        56    0.081    440K    30K       0.0       0.0
                                            Sum      1/0   12.04 MB   0.0      0.7     0.1      0.6       0.7      0.1       0.0   6.4    104.8    106.7      6.40              2.14       113    0.057    440K    30K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   8.9    157.0    156.1      0.34              0.17         8    0.042     44K   1970       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.7     0.1      0.6       0.6      0.0       0.0   0.0    146.9    126.2      4.56              1.79        56    0.081    440K    30K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     58.1      1.83              0.35        56    0.033       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     14.7      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.104, interval 0.006
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.67 GB write, 0.10 MB/s write, 0.65 GB read, 0.10 MB/s read, 6.4 seconds
                                           Interval compaction: 0.05 GB write, 0.09 MB/s write, 0.05 GB read, 0.09 MB/s read, 0.3 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5576af6451f0#2 capacity: 304.00 MB usage: 73.73 MB table_size: 0 occupancy: 18446744073709551615 collections: 12 last_copies: 0 last_secs: 0.000462 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(4253,70.62 MB,23.2294%) FilterBlock(114,1.17 MB,0.38534%) IndexBlock(114,1.94 MB,0.637215%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 20 15:45:08 compute-0 nova_compute[250018]: 2026-01-20 15:45:08.199 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:45:08 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:45:08 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:45:08 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:45:08.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:45:08 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:45:08 compute-0 ceph-mon[74360]: pgmap v3688: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1004 KiB/s rd, 2.1 MiB/s wr, 85 op/s
Jan 20 15:45:09 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3689: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Jan 20 15:45:09 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:45:09 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:45:09 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:45:09.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:45:09 compute-0 ceph-mon[74360]: pgmap v3689: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Jan 20 15:45:10 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:45:10 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:45:10 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:45:10.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:45:11 compute-0 sudo[402458]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:45:11 compute-0 sudo[402458]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:45:11 compute-0 sudo[402458]: pam_unix(sudo:session): session closed for user root
Jan 20 15:45:11 compute-0 sudo[402483]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:45:11 compute-0 sudo[402483]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:45:11 compute-0 sudo[402483]: pam_unix(sudo:session): session closed for user root
Jan 20 15:45:11 compute-0 sudo[402508]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:45:11 compute-0 sudo[402508]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:45:11 compute-0 sudo[402508]: pam_unix(sudo:session): session closed for user root
Jan 20 15:45:11 compute-0 sudo[402533]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 15:45:11 compute-0 sudo[402533]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:45:11 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3690: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 20 15:45:11 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:45:11 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:45:11 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:45:11.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:45:12 compute-0 nova_compute[250018]: 2026-01-20 15:45:12.006 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:45:12 compute-0 sudo[402533]: pam_unix(sudo:session): session closed for user root
Jan 20 15:45:12 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 15:45:12 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:45:12 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 20 15:45:12 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 15:45:12 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 20 15:45:12 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:45:12 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev febbaba1-481f-47a6-a19f-9ca5a856d925 does not exist
Jan 20 15:45:12 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 55411359-05be-42b2-8698-ab2d0155287b does not exist
Jan 20 15:45:12 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 4bf31b4a-5f16-4fc3-ae4c-2db6cb12008e does not exist
Jan 20 15:45:12 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 20 15:45:12 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 15:45:12 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 20 15:45:12 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 15:45:12 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 15:45:12 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:45:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 15:45:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:45:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 20 15:45:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:45:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0021692301205099573 of space, bias 1.0, pg target 0.6507690361529872 quantized to 32 (current 32)
Jan 20 15:45:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:45:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021614147124511445 of space, bias 1.0, pg target 0.6484244137353433 quantized to 32 (current 32)
Jan 20 15:45:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:45:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 20 15:45:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:45:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 20 15:45:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:45:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 20 15:45:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:45:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 15:45:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:45:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 20 15:45:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:45:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 20 15:45:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:45:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 15:45:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:45:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 20 15:45:12 compute-0 sudo[402589]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:45:12 compute-0 sudo[402589]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:45:12 compute-0 sudo[402589]: pam_unix(sudo:session): session closed for user root
Jan 20 15:45:12 compute-0 sudo[402614]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:45:12 compute-0 sudo[402614]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:45:12 compute-0 sudo[402614]: pam_unix(sudo:session): session closed for user root
Jan 20 15:45:12 compute-0 sudo[402639]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:45:12 compute-0 sudo[402639]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:45:12 compute-0 sudo[402639]: pam_unix(sudo:session): session closed for user root
Jan 20 15:45:12 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:45:12 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:45:12 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:45:12.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:45:12 compute-0 sudo[402664]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 15:45:12 compute-0 sudo[402664]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:45:12 compute-0 ceph-mon[74360]: pgmap v3690: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 20 15:45:12 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:45:12 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 15:45:12 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:45:12 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 15:45:12 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 15:45:12 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:45:12 compute-0 podman[402730]: 2026-01-20 15:45:12.67955711 +0000 UTC m=+0.040564950 container create 1d573c1e9fb0f35fed34c7d13dc54a8bad75a888389e0f67210a8d8705ee7259 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_blackburn, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 15:45:12 compute-0 systemd[1]: Started libpod-conmon-1d573c1e9fb0f35fed34c7d13dc54a8bad75a888389e0f67210a8d8705ee7259.scope.
Jan 20 15:45:12 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:45:12 compute-0 podman[402730]: 2026-01-20 15:45:12.756474144 +0000 UTC m=+0.117482004 container init 1d573c1e9fb0f35fed34c7d13dc54a8bad75a888389e0f67210a8d8705ee7259 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_blackburn, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 20 15:45:12 compute-0 podman[402730]: 2026-01-20 15:45:12.661145042 +0000 UTC m=+0.022152902 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:45:12 compute-0 podman[402730]: 2026-01-20 15:45:12.76554347 +0000 UTC m=+0.126551310 container start 1d573c1e9fb0f35fed34c7d13dc54a8bad75a888389e0f67210a8d8705ee7259 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_blackburn, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default)
Jan 20 15:45:12 compute-0 podman[402730]: 2026-01-20 15:45:12.769146107 +0000 UTC m=+0.130153977 container attach 1d573c1e9fb0f35fed34c7d13dc54a8bad75a888389e0f67210a8d8705ee7259 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_blackburn, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 15:45:12 compute-0 nervous_blackburn[402746]: 167 167
Jan 20 15:45:12 compute-0 systemd[1]: libpod-1d573c1e9fb0f35fed34c7d13dc54a8bad75a888389e0f67210a8d8705ee7259.scope: Deactivated successfully.
Jan 20 15:45:12 compute-0 conmon[402746]: conmon 1d573c1e9fb0f35fed34 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1d573c1e9fb0f35fed34c7d13dc54a8bad75a888389e0f67210a8d8705ee7259.scope/container/memory.events
Jan 20 15:45:12 compute-0 podman[402730]: 2026-01-20 15:45:12.773846695 +0000 UTC m=+0.134854535 container died 1d573c1e9fb0f35fed34c7d13dc54a8bad75a888389e0f67210a8d8705ee7259 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_blackburn, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 20 15:45:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-66a9c237d2431380f449a55f76bcdad754ac555454b34ad4736287032bbae4f5-merged.mount: Deactivated successfully.
Jan 20 15:45:12 compute-0 podman[402730]: 2026-01-20 15:45:12.808837272 +0000 UTC m=+0.169845132 container remove 1d573c1e9fb0f35fed34c7d13dc54a8bad75a888389e0f67210a8d8705ee7259 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_blackburn, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 15:45:12 compute-0 systemd[1]: libpod-conmon-1d573c1e9fb0f35fed34c7d13dc54a8bad75a888389e0f67210a8d8705ee7259.scope: Deactivated successfully.
Jan 20 15:45:12 compute-0 podman[402768]: 2026-01-20 15:45:12.988118838 +0000 UTC m=+0.056601814 container create ef15987673148214024b4595e847df0a12f5171c9bb1e9fbc672d94120c70f35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_hodgkin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 20 15:45:13 compute-0 systemd[1]: Started libpod-conmon-ef15987673148214024b4595e847df0a12f5171c9bb1e9fbc672d94120c70f35.scope.
Jan 20 15:45:13 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:45:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d38ac781395e8a9d5e880546d4b3a0bdc6c1cfcfb034b3fe9ce59904a4c00153/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 15:45:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d38ac781395e8a9d5e880546d4b3a0bdc6c1cfcfb034b3fe9ce59904a4c00153/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 15:45:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d38ac781395e8a9d5e880546d4b3a0bdc6c1cfcfb034b3fe9ce59904a4c00153/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 15:45:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d38ac781395e8a9d5e880546d4b3a0bdc6c1cfcfb034b3fe9ce59904a4c00153/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 15:45:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d38ac781395e8a9d5e880546d4b3a0bdc6c1cfcfb034b3fe9ce59904a4c00153/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 15:45:13 compute-0 podman[402768]: 2026-01-20 15:45:12.969146624 +0000 UTC m=+0.037629660 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:45:13 compute-0 podman[402768]: 2026-01-20 15:45:13.072276308 +0000 UTC m=+0.140759304 container init ef15987673148214024b4595e847df0a12f5171c9bb1e9fbc672d94120c70f35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_hodgkin, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 15:45:13 compute-0 podman[402768]: 2026-01-20 15:45:13.07789116 +0000 UTC m=+0.146374136 container start ef15987673148214024b4595e847df0a12f5171c9bb1e9fbc672d94120c70f35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_hodgkin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 20 15:45:13 compute-0 podman[402768]: 2026-01-20 15:45:13.081188589 +0000 UTC m=+0.149671555 container attach ef15987673148214024b4595e847df0a12f5171c9bb1e9fbc672d94120c70f35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_hodgkin, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:45:13 compute-0 nova_compute[250018]: 2026-01-20 15:45:13.201 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:45:13 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:45:13 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3691: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 20 15:45:13 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:45:13 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:45:13 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:45:13.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:45:13 compute-0 vibrant_hodgkin[402784]: --> passed data devices: 0 physical, 1 LVM
Jan 20 15:45:13 compute-0 vibrant_hodgkin[402784]: --> relative data size: 1.0
Jan 20 15:45:13 compute-0 vibrant_hodgkin[402784]: --> All data devices are unavailable
Jan 20 15:45:13 compute-0 systemd[1]: libpod-ef15987673148214024b4595e847df0a12f5171c9bb1e9fbc672d94120c70f35.scope: Deactivated successfully.
Jan 20 15:45:13 compute-0 podman[402768]: 2026-01-20 15:45:13.836866457 +0000 UTC m=+0.905349433 container died ef15987673148214024b4595e847df0a12f5171c9bb1e9fbc672d94120c70f35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_hodgkin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 20 15:45:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-d38ac781395e8a9d5e880546d4b3a0bdc6c1cfcfb034b3fe9ce59904a4c00153-merged.mount: Deactivated successfully.
Jan 20 15:45:13 compute-0 podman[402768]: 2026-01-20 15:45:13.88313014 +0000 UTC m=+0.951613116 container remove ef15987673148214024b4595e847df0a12f5171c9bb1e9fbc672d94120c70f35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_hodgkin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 20 15:45:13 compute-0 systemd[1]: libpod-conmon-ef15987673148214024b4595e847df0a12f5171c9bb1e9fbc672d94120c70f35.scope: Deactivated successfully.
Jan 20 15:45:13 compute-0 sudo[402664]: pam_unix(sudo:session): session closed for user root
Jan 20 15:45:13 compute-0 sudo[402810]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:45:13 compute-0 sudo[402810]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:45:13 compute-0 sudo[402810]: pam_unix(sudo:session): session closed for user root
Jan 20 15:45:14 compute-0 sudo[402835]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:45:14 compute-0 sudo[402835]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:45:14 compute-0 sudo[402835]: pam_unix(sudo:session): session closed for user root
Jan 20 15:45:14 compute-0 sudo[402860]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:45:14 compute-0 sudo[402860]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:45:14 compute-0 sudo[402860]: pam_unix(sudo:session): session closed for user root
Jan 20 15:45:14 compute-0 sudo[402885]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- lvm list --format json
Jan 20 15:45:14 compute-0 sudo[402885]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:45:14 compute-0 sudo[402910]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:45:14 compute-0 sudo[402910]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:45:14 compute-0 sudo[402910]: pam_unix(sudo:session): session closed for user root
Jan 20 15:45:14 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:45:14 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:45:14 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:45:14.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:45:14 compute-0 sudo[402942]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:45:14 compute-0 sudo[402942]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:45:14 compute-0 sudo[402942]: pam_unix(sudo:session): session closed for user root
Jan 20 15:45:14 compute-0 podman[403002]: 2026-01-20 15:45:14.489219937 +0000 UTC m=+0.041608728 container create 07dc600e798c9c06a2365d8acb03209157fa96542a665a35c99e6249e4fb8ccf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_kare, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 15:45:14 compute-0 systemd[1]: Started libpod-conmon-07dc600e798c9c06a2365d8acb03209157fa96542a665a35c99e6249e4fb8ccf.scope.
Jan 20 15:45:14 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:45:14 compute-0 podman[403002]: 2026-01-20 15:45:14.562029369 +0000 UTC m=+0.114418180 container init 07dc600e798c9c06a2365d8acb03209157fa96542a665a35c99e6249e4fb8ccf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_kare, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 15:45:14 compute-0 podman[403002]: 2026-01-20 15:45:14.470236472 +0000 UTC m=+0.022625303 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:45:14 compute-0 podman[403002]: 2026-01-20 15:45:14.568925516 +0000 UTC m=+0.121314307 container start 07dc600e798c9c06a2365d8acb03209157fa96542a665a35c99e6249e4fb8ccf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_kare, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 20 15:45:14 compute-0 podman[403002]: 2026-01-20 15:45:14.572875892 +0000 UTC m=+0.125264713 container attach 07dc600e798c9c06a2365d8acb03209157fa96542a665a35c99e6249e4fb8ccf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_kare, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 15:45:14 compute-0 sweet_kare[403019]: 167 167
Jan 20 15:45:14 compute-0 systemd[1]: libpod-07dc600e798c9c06a2365d8acb03209157fa96542a665a35c99e6249e4fb8ccf.scope: Deactivated successfully.
Jan 20 15:45:14 compute-0 podman[403002]: 2026-01-20 15:45:14.574087185 +0000 UTC m=+0.126475976 container died 07dc600e798c9c06a2365d8acb03209157fa96542a665a35c99e6249e4fb8ccf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_kare, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 15:45:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-b4c60faa1ec090f2706c4b79acf7341ec18e40a76c7560dcc395c38c5f178bc4-merged.mount: Deactivated successfully.
Jan 20 15:45:14 compute-0 podman[403002]: 2026-01-20 15:45:14.606331278 +0000 UTC m=+0.158720069 container remove 07dc600e798c9c06a2365d8acb03209157fa96542a665a35c99e6249e4fb8ccf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_kare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 20 15:45:14 compute-0 ceph-mon[74360]: pgmap v3691: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 20 15:45:14 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/1310749465' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:45:14 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/1310749465' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:45:14 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/4231837537' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:45:14 compute-0 systemd[1]: libpod-conmon-07dc600e798c9c06a2365d8acb03209157fa96542a665a35c99e6249e4fb8ccf.scope: Deactivated successfully.
Jan 20 15:45:14 compute-0 podman[403044]: 2026-01-20 15:45:14.77176462 +0000 UTC m=+0.047924060 container create 022ad3719e4c22c1b93f79fbc7dd7e332f56a7af9c92516e40ff9d11dc56cca2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_khayyam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 20 15:45:14 compute-0 systemd[1]: Started libpod-conmon-022ad3719e4c22c1b93f79fbc7dd7e332f56a7af9c92516e40ff9d11dc56cca2.scope.
Jan 20 15:45:14 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:45:14 compute-0 podman[403044]: 2026-01-20 15:45:14.749201438 +0000 UTC m=+0.025360878 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:45:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc26c3d7e0eaa34c8ba4577c35e0ea6843a43d8e41f36320dc40cc3383bc7d73/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 15:45:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc26c3d7e0eaa34c8ba4577c35e0ea6843a43d8e41f36320dc40cc3383bc7d73/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 15:45:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc26c3d7e0eaa34c8ba4577c35e0ea6843a43d8e41f36320dc40cc3383bc7d73/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 15:45:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc26c3d7e0eaa34c8ba4577c35e0ea6843a43d8e41f36320dc40cc3383bc7d73/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 15:45:14 compute-0 podman[403044]: 2026-01-20 15:45:14.868194191 +0000 UTC m=+0.144353681 container init 022ad3719e4c22c1b93f79fbc7dd7e332f56a7af9c92516e40ff9d11dc56cca2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_khayyam, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:45:14 compute-0 podman[403044]: 2026-01-20 15:45:14.874855172 +0000 UTC m=+0.151014612 container start 022ad3719e4c22c1b93f79fbc7dd7e332f56a7af9c92516e40ff9d11dc56cca2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_khayyam, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 15:45:14 compute-0 podman[403044]: 2026-01-20 15:45:14.878108229 +0000 UTC m=+0.154267679 container attach 022ad3719e4c22c1b93f79fbc7dd7e332f56a7af9c92516e40ff9d11dc56cca2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_khayyam, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:45:15 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3692: 321 pgs: 321 active+clean; 215 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 342 KiB/s rd, 2.8 MiB/s wr, 87 op/s
Jan 20 15:45:15 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:45:15 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:45:15 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:45:15.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:45:15 compute-0 confident_khayyam[403061]: {
Jan 20 15:45:15 compute-0 confident_khayyam[403061]:     "0": [
Jan 20 15:45:15 compute-0 confident_khayyam[403061]:         {
Jan 20 15:45:15 compute-0 confident_khayyam[403061]:             "devices": [
Jan 20 15:45:15 compute-0 confident_khayyam[403061]:                 "/dev/loop3"
Jan 20 15:45:15 compute-0 confident_khayyam[403061]:             ],
Jan 20 15:45:15 compute-0 confident_khayyam[403061]:             "lv_name": "ceph_lv0",
Jan 20 15:45:15 compute-0 confident_khayyam[403061]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 15:45:15 compute-0 confident_khayyam[403061]:             "lv_size": "7511998464",
Jan 20 15:45:15 compute-0 confident_khayyam[403061]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e399cf45-e6b6-5393-99f1-75c601d3f188,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afc3740a-ccee-46f8-b343-22648cc89376,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 20 15:45:15 compute-0 confident_khayyam[403061]:             "lv_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 15:45:15 compute-0 confident_khayyam[403061]:             "name": "ceph_lv0",
Jan 20 15:45:15 compute-0 confident_khayyam[403061]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 15:45:15 compute-0 confident_khayyam[403061]:             "tags": {
Jan 20 15:45:15 compute-0 confident_khayyam[403061]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 15:45:15 compute-0 confident_khayyam[403061]:                 "ceph.block_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 15:45:15 compute-0 confident_khayyam[403061]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 15:45:15 compute-0 confident_khayyam[403061]:                 "ceph.cluster_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 15:45:15 compute-0 confident_khayyam[403061]:                 "ceph.cluster_name": "ceph",
Jan 20 15:45:15 compute-0 confident_khayyam[403061]:                 "ceph.crush_device_class": "",
Jan 20 15:45:15 compute-0 confident_khayyam[403061]:                 "ceph.encrypted": "0",
Jan 20 15:45:15 compute-0 confident_khayyam[403061]:                 "ceph.osd_fsid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 15:45:15 compute-0 confident_khayyam[403061]:                 "ceph.osd_id": "0",
Jan 20 15:45:15 compute-0 confident_khayyam[403061]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 15:45:15 compute-0 confident_khayyam[403061]:                 "ceph.type": "block",
Jan 20 15:45:15 compute-0 confident_khayyam[403061]:                 "ceph.vdo": "0"
Jan 20 15:45:15 compute-0 confident_khayyam[403061]:             },
Jan 20 15:45:15 compute-0 confident_khayyam[403061]:             "type": "block",
Jan 20 15:45:15 compute-0 confident_khayyam[403061]:             "vg_name": "ceph_vg0"
Jan 20 15:45:15 compute-0 confident_khayyam[403061]:         }
Jan 20 15:45:15 compute-0 confident_khayyam[403061]:     ]
Jan 20 15:45:15 compute-0 confident_khayyam[403061]: }
Jan 20 15:45:15 compute-0 systemd[1]: libpod-022ad3719e4c22c1b93f79fbc7dd7e332f56a7af9c92516e40ff9d11dc56cca2.scope: Deactivated successfully.
Jan 20 15:45:15 compute-0 podman[403044]: 2026-01-20 15:45:15.709084108 +0000 UTC m=+0.985243538 container died 022ad3719e4c22c1b93f79fbc7dd7e332f56a7af9c92516e40ff9d11dc56cca2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_khayyam, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 20 15:45:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-fc26c3d7e0eaa34c8ba4577c35e0ea6843a43d8e41f36320dc40cc3383bc7d73-merged.mount: Deactivated successfully.
Jan 20 15:45:15 compute-0 podman[403044]: 2026-01-20 15:45:15.758233329 +0000 UTC m=+1.034392749 container remove 022ad3719e4c22c1b93f79fbc7dd7e332f56a7af9c92516e40ff9d11dc56cca2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_khayyam, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 20 15:45:15 compute-0 systemd[1]: libpod-conmon-022ad3719e4c22c1b93f79fbc7dd7e332f56a7af9c92516e40ff9d11dc56cca2.scope: Deactivated successfully.
Jan 20 15:45:15 compute-0 sudo[402885]: pam_unix(sudo:session): session closed for user root
Jan 20 15:45:15 compute-0 sudo[403085]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:45:15 compute-0 sudo[403085]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:45:15 compute-0 sudo[403085]: pam_unix(sudo:session): session closed for user root
Jan 20 15:45:15 compute-0 sudo[403110]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:45:15 compute-0 sudo[403110]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:45:15 compute-0 sudo[403110]: pam_unix(sudo:session): session closed for user root
Jan 20 15:45:15 compute-0 sudo[403135]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:45:15 compute-0 sudo[403135]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:45:15 compute-0 sudo[403135]: pam_unix(sudo:session): session closed for user root
Jan 20 15:45:16 compute-0 sudo[403160]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- raw list --format json
Jan 20 15:45:16 compute-0 sudo[403160]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:45:16 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:45:16 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:45:16 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:45:16.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:45:16 compute-0 podman[403225]: 2026-01-20 15:45:16.45677841 +0000 UTC m=+0.056693198 container create 980be976b1e76abfeb979fc7f279fab79c8935116461e46f697f12d13171dd02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_leavitt, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS)
Jan 20 15:45:16 compute-0 systemd[1]: Started libpod-conmon-980be976b1e76abfeb979fc7f279fab79c8935116461e46f697f12d13171dd02.scope.
Jan 20 15:45:16 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:45:16 compute-0 podman[403225]: 2026-01-20 15:45:16.438966697 +0000 UTC m=+0.038881515 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:45:16 compute-0 podman[403225]: 2026-01-20 15:45:16.558046002 +0000 UTC m=+0.157960790 container init 980be976b1e76abfeb979fc7f279fab79c8935116461e46f697f12d13171dd02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_leavitt, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 15:45:16 compute-0 podman[403225]: 2026-01-20 15:45:16.563696035 +0000 UTC m=+0.163610823 container start 980be976b1e76abfeb979fc7f279fab79c8935116461e46f697f12d13171dd02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_leavitt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 15:45:16 compute-0 podman[403225]: 2026-01-20 15:45:16.566925573 +0000 UTC m=+0.166840411 container attach 980be976b1e76abfeb979fc7f279fab79c8935116461e46f697f12d13171dd02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_leavitt, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 15:45:16 compute-0 thirsty_leavitt[403241]: 167 167
Jan 20 15:45:16 compute-0 systemd[1]: libpod-980be976b1e76abfeb979fc7f279fab79c8935116461e46f697f12d13171dd02.scope: Deactivated successfully.
Jan 20 15:45:16 compute-0 podman[403225]: 2026-01-20 15:45:16.570277854 +0000 UTC m=+0.170192642 container died 980be976b1e76abfeb979fc7f279fab79c8935116461e46f697f12d13171dd02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_leavitt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Jan 20 15:45:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-4a2a13fe8264e96264e3fa6db359a1ce89bccf308de1280a5b6fc83fb3418971-merged.mount: Deactivated successfully.
Jan 20 15:45:16 compute-0 podman[403225]: 2026-01-20 15:45:16.604690806 +0000 UTC m=+0.204605604 container remove 980be976b1e76abfeb979fc7f279fab79c8935116461e46f697f12d13171dd02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_leavitt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 15:45:16 compute-0 systemd[1]: libpod-conmon-980be976b1e76abfeb979fc7f279fab79c8935116461e46f697f12d13171dd02.scope: Deactivated successfully.
Jan 20 15:45:16 compute-0 ceph-mon[74360]: pgmap v3692: 321 pgs: 321 active+clean; 215 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 342 KiB/s rd, 2.8 MiB/s wr, 87 op/s
Jan 20 15:45:16 compute-0 sshd-session[403041]: Invalid user guest from 134.122.57.138 port 39060
Jan 20 15:45:16 compute-0 podman[403266]: 2026-01-20 15:45:16.785667168 +0000 UTC m=+0.050775016 container create 801a385233897f14e88637e724d7667ca44a441efe45eaa3d7ebf696d84ab285 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_ishizaka, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 15:45:16 compute-0 systemd[1]: Started libpod-conmon-801a385233897f14e88637e724d7667ca44a441efe45eaa3d7ebf696d84ab285.scope.
Jan 20 15:45:16 compute-0 sshd-session[403041]: Connection closed by invalid user guest 134.122.57.138 port 39060 [preauth]
Jan 20 15:45:16 compute-0 podman[403266]: 2026-01-20 15:45:16.758855092 +0000 UTC m=+0.023963020 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:45:16 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:45:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86ea221669e5c7b859a5e76a8c5953347cb6c9b28e5bc3a35161b8a147f2c56e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 15:45:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86ea221669e5c7b859a5e76a8c5953347cb6c9b28e5bc3a35161b8a147f2c56e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 15:45:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86ea221669e5c7b859a5e76a8c5953347cb6c9b28e5bc3a35161b8a147f2c56e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 15:45:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86ea221669e5c7b859a5e76a8c5953347cb6c9b28e5bc3a35161b8a147f2c56e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 15:45:16 compute-0 podman[403266]: 2026-01-20 15:45:16.877724632 +0000 UTC m=+0.142832550 container init 801a385233897f14e88637e724d7667ca44a441efe45eaa3d7ebf696d84ab285 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_ishizaka, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 15:45:16 compute-0 podman[403266]: 2026-01-20 15:45:16.890089496 +0000 UTC m=+0.155197334 container start 801a385233897f14e88637e724d7667ca44a441efe45eaa3d7ebf696d84ab285 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_ishizaka, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 15:45:16 compute-0 podman[403266]: 2026-01-20 15:45:16.893734985 +0000 UTC m=+0.158842923 container attach 801a385233897f14e88637e724d7667ca44a441efe45eaa3d7ebf696d84ab285 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_ishizaka, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 20 15:45:17 compute-0 nova_compute[250018]: 2026-01-20 15:45:17.010 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:45:17 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3693: 321 pgs: 321 active+clean; 240 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 193 KiB/s rd, 2.5 MiB/s wr, 45 op/s
Jan 20 15:45:17 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:45:17 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:45:17 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:45:17.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:45:17 compute-0 charming_ishizaka[403282]: {
Jan 20 15:45:17 compute-0 charming_ishizaka[403282]:     "afc3740a-ccee-46f8-b343-22648cc89376": {
Jan 20 15:45:17 compute-0 charming_ishizaka[403282]:         "ceph_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 15:45:17 compute-0 charming_ishizaka[403282]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 20 15:45:17 compute-0 charming_ishizaka[403282]:         "osd_id": 0,
Jan 20 15:45:17 compute-0 charming_ishizaka[403282]:         "osd_uuid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 15:45:17 compute-0 charming_ishizaka[403282]:         "type": "bluestore"
Jan 20 15:45:17 compute-0 charming_ishizaka[403282]:     }
Jan 20 15:45:17 compute-0 charming_ishizaka[403282]: }
Jan 20 15:45:17 compute-0 systemd[1]: libpod-801a385233897f14e88637e724d7667ca44a441efe45eaa3d7ebf696d84ab285.scope: Deactivated successfully.
Jan 20 15:45:17 compute-0 conmon[403282]: conmon 801a385233897f14e886 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-801a385233897f14e88637e724d7667ca44a441efe45eaa3d7ebf696d84ab285.scope/container/memory.events
Jan 20 15:45:17 compute-0 podman[403266]: 2026-01-20 15:45:17.723530961 +0000 UTC m=+0.988638809 container died 801a385233897f14e88637e724d7667ca44a441efe45eaa3d7ebf696d84ab285 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_ishizaka, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 20 15:45:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-86ea221669e5c7b859a5e76a8c5953347cb6c9b28e5bc3a35161b8a147f2c56e-merged.mount: Deactivated successfully.
Jan 20 15:45:17 compute-0 podman[403266]: 2026-01-20 15:45:17.782877748 +0000 UTC m=+1.047985586 container remove 801a385233897f14e88637e724d7667ca44a441efe45eaa3d7ebf696d84ab285 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_ishizaka, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 20 15:45:17 compute-0 systemd[1]: libpod-conmon-801a385233897f14e88637e724d7667ca44a441efe45eaa3d7ebf696d84ab285.scope: Deactivated successfully.
Jan 20 15:45:17 compute-0 sudo[403160]: pam_unix(sudo:session): session closed for user root
Jan 20 15:45:17 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 15:45:17 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:45:17 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 15:45:17 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:45:17 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev c26c6549-e7bb-444e-bf97-2ee4e683d3cc does not exist
Jan 20 15:45:17 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev b4779229-19d2-4df7-9c75-b0fb737442fd does not exist
Jan 20 15:45:17 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev f14e3ac9-41eb-4012-8f73-b2089fdb3304 does not exist
Jan 20 15:45:17 compute-0 sudo[403315]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:45:17 compute-0 sudo[403315]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:45:17 compute-0 sudo[403315]: pam_unix(sudo:session): session closed for user root
Jan 20 15:45:17 compute-0 sudo[403340]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 15:45:17 compute-0 sudo[403340]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:45:17 compute-0 sudo[403340]: pam_unix(sudo:session): session closed for user root
Jan 20 15:45:18 compute-0 nova_compute[250018]: 2026-01-20 15:45:18.203 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:45:18 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:45:18 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:45:18 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:45:18.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:45:18 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:45:18 compute-0 ceph-mon[74360]: pgmap v3693: 321 pgs: 321 active+clean; 240 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 193 KiB/s rd, 2.5 MiB/s wr, 45 op/s
Jan 20 15:45:18 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:45:18 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:45:18 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/1748637888' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:45:18 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/739713049' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 15:45:18 compute-0 ceph-mon[74360]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #186. Immutable memtables: 0.
Jan 20 15:45:18 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:45:18.674610) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 15:45:18 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:856] [default] [JOB 115] Flushing memtable with next log file: 186
Jan 20 15:45:18 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768923918674664, "job": 115, "event": "flush_started", "num_memtables": 1, "num_entries": 2121, "num_deletes": 251, "total_data_size": 3924521, "memory_usage": 3982816, "flush_reason": "Manual Compaction"}
Jan 20 15:45:18 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:885] [default] [JOB 115] Level-0 flush table #187: started
Jan 20 15:45:18 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768923918694056, "cf_name": "default", "job": 115, "event": "table_file_creation", "file_number": 187, "file_size": 3835390, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 80441, "largest_seqno": 82561, "table_properties": {"data_size": 3825792, "index_size": 6091, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19644, "raw_average_key_size": 20, "raw_value_size": 3806783, "raw_average_value_size": 3953, "num_data_blocks": 265, "num_entries": 963, "num_filter_entries": 963, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768923702, "oldest_key_time": 1768923702, "file_creation_time": 1768923918, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1688fb6f-f397-4aac-a0e8-874ff91d97a7", "db_session_id": "5V3N2TVXYZBCXP55EZHK", "orig_file_number": 187, "seqno_to_time_mapping": "N/A"}}
Jan 20 15:45:18 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 115] Flush lasted 19478 microseconds, and 7559 cpu microseconds.
Jan 20 15:45:18 compute-0 ceph-mon[74360]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 15:45:18 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:45:18.694096) [db/flush_job.cc:967] [default] [JOB 115] Level-0 flush table #187: 3835390 bytes OK
Jan 20 15:45:18 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:45:18.694111) [db/memtable_list.cc:519] [default] Level-0 commit table #187 started
Jan 20 15:45:18 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:45:18.695543) [db/memtable_list.cc:722] [default] Level-0 commit table #187: memtable #1 done
Jan 20 15:45:18 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:45:18.695555) EVENT_LOG_v1 {"time_micros": 1768923918695551, "job": 115, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 15:45:18 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:45:18.695572) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 15:45:18 compute-0 ceph-mon[74360]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 115] Try to delete WAL files size 3915963, prev total WAL file size 3915963, number of live WAL files 2.
Jan 20 15:45:18 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000183.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:45:18 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:45:18.696550) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037373831' seq:72057594037927935, type:22 .. '7061786F730038303333' seq:0, type:0; will stop at (end)
Jan 20 15:45:18 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 116] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 15:45:18 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 115 Base level 0, inputs: [187(3745KB)], [185(12MB)]
Jan 20 15:45:18 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768923918696626, "job": 116, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [187], "files_L6": [185], "score": -1, "input_data_size": 16459933, "oldest_snapshot_seqno": -1}
Jan 20 15:45:18 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 116] Generated table #188: 11031 keys, 14481874 bytes, temperature: kUnknown
Jan 20 15:45:18 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768923918769599, "cf_name": "default", "job": 116, "event": "table_file_creation", "file_number": 188, "file_size": 14481874, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14410612, "index_size": 42605, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 27589, "raw_key_size": 290221, "raw_average_key_size": 26, "raw_value_size": 14217668, "raw_average_value_size": 1288, "num_data_blocks": 1625, "num_entries": 11031, "num_filter_entries": 11031, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768917305, "oldest_key_time": 0, "file_creation_time": 1768923918, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1688fb6f-f397-4aac-a0e8-874ff91d97a7", "db_session_id": "5V3N2TVXYZBCXP55EZHK", "orig_file_number": 188, "seqno_to_time_mapping": "N/A"}}
Jan 20 15:45:18 compute-0 ceph-mon[74360]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 15:45:18 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:45:18.769852) [db/compaction/compaction_job.cc:1663] [default] [JOB 116] Compacted 1@0 + 1@6 files to L6 => 14481874 bytes
Jan 20 15:45:18 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:45:18.771289) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 225.3 rd, 198.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.7, 12.0 +0.0 blob) out(13.8 +0.0 blob), read-write-amplify(8.1) write-amplify(3.8) OK, records in: 11550, records dropped: 519 output_compression: NoCompression
Jan 20 15:45:18 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:45:18.771304) EVENT_LOG_v1 {"time_micros": 1768923918771297, "job": 116, "event": "compaction_finished", "compaction_time_micros": 73055, "compaction_time_cpu_micros": 34069, "output_level": 6, "num_output_files": 1, "total_output_size": 14481874, "num_input_records": 11550, "num_output_records": 11031, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 15:45:18 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000187.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:45:18 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768923918772162, "job": 116, "event": "table_file_deletion", "file_number": 187}
Jan 20 15:45:18 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000185.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:45:18 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768923918774553, "job": 116, "event": "table_file_deletion", "file_number": 185}
Jan 20 15:45:18 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:45:18.696402) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:45:18 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:45:18.774597) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:45:18 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:45:18.774601) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:45:18 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:45:18.774602) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:45:18 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:45:18.774603) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:45:18 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:45:18.774605) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:45:19 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3694: 321 pgs: 321 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:45:19 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:45:19 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:45:19 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:45:19.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:45:20 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:45:20 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:45:20 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:45:20.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:45:20 compute-0 ceph-mon[74360]: pgmap v3694: 321 pgs: 321 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:45:21 compute-0 nova_compute[250018]: 2026-01-20 15:45:21.074 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:45:21 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3695: 321 pgs: 321 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:45:21 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:45:21 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:45:21 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:45:21.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:45:22 compute-0 nova_compute[250018]: 2026-01-20 15:45:22.015 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:45:22 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:45:22 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:45:22 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:45:22.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:45:22 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:45:22.507 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=90, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=89) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:45:22 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:45:22.508 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 15:45:22 compute-0 nova_compute[250018]: 2026-01-20 15:45:22.556 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:45:22 compute-0 podman[403369]: 2026-01-20 15:45:22.556561907 +0000 UTC m=+0.131365559 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 20 15:45:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:45:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:45:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:45:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:45:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:45:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:45:22 compute-0 podman[403368]: 2026-01-20 15:45:22.57771595 +0000 UTC m=+0.163765807 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Jan 20 15:45:23 compute-0 nova_compute[250018]: 2026-01-20 15:45:23.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:45:23 compute-0 nova_compute[250018]: 2026-01-20 15:45:23.206 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:45:23 compute-0 ceph-mon[74360]: pgmap v3695: 321 pgs: 321 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:45:23 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:45:23.511 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=367c1a2c-b16a-4828-ab5a-626bb50023b4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '90'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:45:23 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:45:23 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3696: 321 pgs: 321 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:45:23 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:45:23 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:45:23 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:45:23.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:45:24 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:45:24 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:45:24 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:45:24.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:45:24 compute-0 ceph-mon[74360]: pgmap v3696: 321 pgs: 321 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 15:45:24 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/3839987835' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:45:25 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/1092852698' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:45:25 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3697: 321 pgs: 321 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 958 KiB/s rd, 1.8 MiB/s wr, 67 op/s
Jan 20 15:45:25 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:45:25 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:45:25 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:45:25.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:45:26 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:45:26 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:45:26 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:45:26.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:45:26 compute-0 ceph-mon[74360]: pgmap v3697: 321 pgs: 321 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 958 KiB/s rd, 1.8 MiB/s wr, 67 op/s
Jan 20 15:45:27 compute-0 nova_compute[250018]: 2026-01-20 15:45:27.016 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:45:27 compute-0 nova_compute[250018]: 2026-01-20 15:45:27.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:45:27 compute-0 nova_compute[250018]: 2026-01-20 15:45:27.050 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 15:45:27 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3698: 321 pgs: 321 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.1 MiB/s wr, 77 op/s
Jan 20 15:45:27 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:45:27 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:45:27 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:45:27.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:45:28 compute-0 nova_compute[250018]: 2026-01-20 15:45:28.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:45:28 compute-0 nova_compute[250018]: 2026-01-20 15:45:28.218 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:45:28 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:45:28 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:45:28 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:45:28.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:45:28 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:45:28 compute-0 ceph-mon[74360]: pgmap v3698: 321 pgs: 321 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.1 MiB/s wr, 77 op/s
Jan 20 15:45:29 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3699: 321 pgs: 321 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 188 KiB/s wr, 75 op/s
Jan 20 15:45:29 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:45:29 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:45:29 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:45:29.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:45:30 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:45:30 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:45:30 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:45:30.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:45:30 compute-0 ceph-mon[74360]: pgmap v3699: 321 pgs: 321 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 188 KiB/s wr, 75 op/s
Jan 20 15:45:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:45:30.819 160071 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:45:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:45:30.820 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:45:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:45:30.820 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:45:31 compute-0 nova_compute[250018]: 2026-01-20 15:45:31.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:45:31 compute-0 nova_compute[250018]: 2026-01-20 15:45:31.082 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:45:31 compute-0 nova_compute[250018]: 2026-01-20 15:45:31.083 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:45:31 compute-0 nova_compute[250018]: 2026-01-20 15:45:31.083 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:45:31 compute-0 nova_compute[250018]: 2026-01-20 15:45:31.083 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 15:45:31 compute-0 nova_compute[250018]: 2026-01-20 15:45:31.084 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:45:31 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:45:31 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1867368074' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:45:31 compute-0 nova_compute[250018]: 2026-01-20 15:45:31.545 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:45:31 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3700: 321 pgs: 321 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Jan 20 15:45:31 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:45:31 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:45:31 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:45:31.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:45:31 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/1867368074' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:45:31 compute-0 nova_compute[250018]: 2026-01-20 15:45:31.753 250022 WARNING nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:45:31 compute-0 nova_compute[250018]: 2026-01-20 15:45:31.754 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4173MB free_disk=20.921817779541016GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 15:45:31 compute-0 nova_compute[250018]: 2026-01-20 15:45:31.754 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:45:31 compute-0 nova_compute[250018]: 2026-01-20 15:45:31.755 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:45:31 compute-0 nova_compute[250018]: 2026-01-20 15:45:31.847 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 15:45:31 compute-0 nova_compute[250018]: 2026-01-20 15:45:31.848 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 15:45:31 compute-0 nova_compute[250018]: 2026-01-20 15:45:31.870 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:45:32 compute-0 nova_compute[250018]: 2026-01-20 15:45:32.019 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:45:32 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:45:32 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/254292647' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:45:32 compute-0 nova_compute[250018]: 2026-01-20 15:45:32.313 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:45:32 compute-0 nova_compute[250018]: 2026-01-20 15:45:32.324 250022 DEBUG nova.compute.provider_tree [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:45:32 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:45:32 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:45:32 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:45:32.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:45:32 compute-0 nova_compute[250018]: 2026-01-20 15:45:32.346 250022 DEBUG nova.scheduler.client.report [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:45:32 compute-0 nova_compute[250018]: 2026-01-20 15:45:32.348 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 15:45:32 compute-0 nova_compute[250018]: 2026-01-20 15:45:32.348 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.594s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:45:32 compute-0 nova_compute[250018]: 2026-01-20 15:45:32.349 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:45:32 compute-0 nova_compute[250018]: 2026-01-20 15:45:32.349 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 20 15:45:32 compute-0 nova_compute[250018]: 2026-01-20 15:45:32.381 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 20 15:45:32 compute-0 ceph-mon[74360]: pgmap v3700: 321 pgs: 321 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Jan 20 15:45:32 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/254292647' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:45:33 compute-0 nova_compute[250018]: 2026-01-20 15:45:33.221 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:45:33 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:45:33 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3701: 321 pgs: 321 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Jan 20 15:45:33 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:45:33 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:45:33 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:45:33.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:45:33 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/2225520846' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:45:34 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:45:34 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:45:34 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:45:34.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:45:34 compute-0 sudo[403458]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:45:34 compute-0 sudo[403458]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:45:34 compute-0 sudo[403458]: pam_unix(sudo:session): session closed for user root
Jan 20 15:45:34 compute-0 sudo[403483]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:45:34 compute-0 sudo[403483]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:45:34 compute-0 sudo[403483]: pam_unix(sudo:session): session closed for user root
Jan 20 15:45:34 compute-0 ceph-mon[74360]: pgmap v3701: 321 pgs: 321 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Jan 20 15:45:34 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/740262342' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:45:35 compute-0 nova_compute[250018]: 2026-01-20 15:45:35.380 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:45:35 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3702: 321 pgs: 321 active+clean; 269 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.4 MiB/s wr, 111 op/s
Jan 20 15:45:35 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:45:35 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:45:35 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:45:35.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:45:36 compute-0 nova_compute[250018]: 2026-01-20 15:45:36.045 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:45:36 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:45:36 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:45:36 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:45:36.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:45:36 compute-0 ceph-mon[74360]: pgmap v3702: 321 pgs: 321 active+clean; 269 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.4 MiB/s wr, 111 op/s
Jan 20 15:45:37 compute-0 nova_compute[250018]: 2026-01-20 15:45:37.022 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:45:37 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3703: 321 pgs: 321 active+clean; 276 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.3 MiB/s rd, 2.1 MiB/s wr, 87 op/s
Jan 20 15:45:37 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:45:37 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:45:37 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:45:37.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:45:38 compute-0 nova_compute[250018]: 2026-01-20 15:45:38.223 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:45:38 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:45:38 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:45:38 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:45:38.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:45:38 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:45:38 compute-0 ceph-mon[74360]: pgmap v3703: 321 pgs: 321 active+clean; 276 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.3 MiB/s rd, 2.1 MiB/s wr, 87 op/s
Jan 20 15:45:39 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3704: 321 pgs: 321 active+clean; 276 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 319 KiB/s rd, 2.1 MiB/s wr, 55 op/s
Jan 20 15:45:39 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:45:39 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:45:39 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:45:39.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:45:40 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:45:40 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:45:40 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:45:40.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:45:40 compute-0 ceph-mon[74360]: pgmap v3704: 321 pgs: 321 active+clean; 276 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 319 KiB/s rd, 2.1 MiB/s wr, 55 op/s
Jan 20 15:45:41 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3705: 321 pgs: 321 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 330 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 20 15:45:41 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:45:41 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:45:41 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:45:41.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:45:41 compute-0 ceph-mon[74360]: pgmap v3705: 321 pgs: 321 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 330 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 20 15:45:42 compute-0 nova_compute[250018]: 2026-01-20 15:45:42.025 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:45:42 compute-0 nova_compute[250018]: 2026-01-20 15:45:42.045 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:45:42 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:45:42 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:45:42 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:45:42.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:45:43 compute-0 nova_compute[250018]: 2026-01-20 15:45:43.275 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:45:43 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:45:43 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3706: 321 pgs: 321 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 20 15:45:43 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:45:43 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:45:43 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:45:43.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:45:44 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:45:44 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:45:44 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:45:44.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:45:44 compute-0 ceph-mon[74360]: pgmap v3706: 321 pgs: 321 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 20 15:45:44 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/1327197156' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:45:45 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3707: 321 pgs: 321 active+clean; 232 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 342 KiB/s rd, 2.1 MiB/s wr, 89 op/s
Jan 20 15:45:45 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:45:45 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:45:45 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:45:45.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:45:46 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:45:46 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:45:46 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:45:46.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:45:46 compute-0 ceph-mon[74360]: pgmap v3707: 321 pgs: 321 active+clean; 232 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 342 KiB/s rd, 2.1 MiB/s wr, 89 op/s
Jan 20 15:45:47 compute-0 nova_compute[250018]: 2026-01-20 15:45:47.029 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:45:47 compute-0 nova_compute[250018]: 2026-01-20 15:45:47.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:45:47 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3708: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 183 KiB/s rd, 762 KiB/s wr, 54 op/s
Jan 20 15:45:47 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:45:47 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:45:47 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:45:47.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:45:48 compute-0 nova_compute[250018]: 2026-01-20 15:45:48.277 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:45:48 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:45:48 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:45:48 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:45:48.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:45:48 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:45:48 compute-0 ceph-mon[74360]: pgmap v3708: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 183 KiB/s rd, 762 KiB/s wr, 54 op/s
Jan 20 15:45:49 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3709: 321 pgs: 321 active+clean; 170 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 60 KiB/s rd, 47 KiB/s wr, 51 op/s
Jan 20 15:45:49 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:45:49 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:45:49 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:45:49.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:45:50 compute-0 nova_compute[250018]: 2026-01-20 15:45:50.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:45:50 compute-0 nova_compute[250018]: 2026-01-20 15:45:50.051 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 15:45:50 compute-0 nova_compute[250018]: 2026-01-20 15:45:50.051 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 15:45:50 compute-0 nova_compute[250018]: 2026-01-20 15:45:50.077 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 20 15:45:50 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:45:50 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:45:50 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:45:50.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:45:50 compute-0 ceph-mon[74360]: pgmap v3709: 321 pgs: 321 active+clean; 170 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 60 KiB/s rd, 47 KiB/s wr, 51 op/s
Jan 20 15:45:50 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/3862295260' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:45:51 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3710: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 49 KiB/s rd, 48 KiB/s wr, 65 op/s
Jan 20 15:45:51 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:45:51 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:45:51 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:45:51.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:45:52 compute-0 nova_compute[250018]: 2026-01-20 15:45:52.062 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:45:52 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:45:52 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:45:52 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:45:52.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:45:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:45:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:45:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:45:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:45:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:45:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:45:52 compute-0 ceph-mon[74360]: pgmap v3710: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 49 KiB/s rd, 48 KiB/s wr, 65 op/s
Jan 20 15:45:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Optimize plan auto_2026-01-20_15:45:52
Jan 20 15:45:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 15:45:52 compute-0 ceph-mgr[74653]: [balancer INFO root] do_upmap
Jan 20 15:45:52 compute-0 ceph-mgr[74653]: [balancer INFO root] pools ['cephfs.cephfs.data', 'cephfs.cephfs.meta', '.mgr', 'vms', 'default.rgw.control', 'images', 'default.rgw.meta', 'backups', 'default.rgw.log', '.rgw.root', 'volumes']
Jan 20 15:45:52 compute-0 ceph-mgr[74653]: [balancer INFO root] prepared 0/10 changes
Jan 20 15:45:53 compute-0 nova_compute[250018]: 2026-01-20 15:45:53.320 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:45:53 compute-0 podman[403518]: 2026-01-20 15:45:53.453968844 +0000 UTC m=+0.044493296 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:45:53 compute-0 podman[403517]: 2026-01-20 15:45:53.482234099 +0000 UTC m=+0.075498736 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, 
config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Jan 20 15:45:53 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:45:53 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3711: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 38 KiB/s rd, 4.4 KiB/s wr, 55 op/s
Jan 20 15:45:53 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:45:53 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:45:53 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:45:53.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:45:54 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:45:54 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:45:54 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:45:54.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:45:54 compute-0 sudo[403559]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:45:54 compute-0 sudo[403559]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:45:54 compute-0 sudo[403559]: pam_unix(sudo:session): session closed for user root
Jan 20 15:45:54 compute-0 sudo[403584]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:45:54 compute-0 sudo[403584]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:45:54 compute-0 sudo[403584]: pam_unix(sudo:session): session closed for user root
Jan 20 15:45:54 compute-0 ceph-mon[74360]: pgmap v3711: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 38 KiB/s rd, 4.4 KiB/s wr, 55 op/s
Jan 20 15:45:55 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3712: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 38 KiB/s rd, 4.4 KiB/s wr, 55 op/s
Jan 20 15:45:55 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:45:55 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:45:55 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:45:55.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:45:56 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:45:56 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:45:56 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:45:56.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:45:56 compute-0 ceph-mon[74360]: pgmap v3712: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 38 KiB/s rd, 4.4 KiB/s wr, 55 op/s
Jan 20 15:45:57 compute-0 nova_compute[250018]: 2026-01-20 15:45:57.064 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:45:57 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3713: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 21 KiB/s rd, 2.7 KiB/s wr, 31 op/s
Jan 20 15:45:57 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:45:57 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:45:57 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:45:57.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:45:57 compute-0 sshd-session[403609]: Invalid user guest from 134.122.57.138 port 42022
Jan 20 15:45:57 compute-0 sshd-session[403609]: Connection closed by invalid user guest 134.122.57.138 port 42022 [preauth]
Jan 20 15:45:58 compute-0 ceph-mgr[74653]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 15:45:58 compute-0 ceph-mgr[74653]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 15:45:58 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 15:45:58 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 15:45:58 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 15:45:58 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 15:45:58 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 15:45:58 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 15:45:58 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 15:45:58 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 15:45:58 compute-0 nova_compute[250018]: 2026-01-20 15:45:58.322 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:45:58 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:45:58 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:45:58 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:45:58.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:45:58 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:45:58 compute-0 ceph-mon[74360]: pgmap v3713: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 21 KiB/s rd, 2.7 KiB/s wr, 31 op/s
Jan 20 15:45:59 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3714: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 20 15:45:59 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:45:59 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:45:59 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:45:59.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:46:00 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:46:00 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:46:00 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:46:00.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:46:00 compute-0 ceph-mon[74360]: pgmap v3714: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 20 15:46:01 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3715: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 11 KiB/s rd, 852 B/s wr, 15 op/s
Jan 20 15:46:01 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:46:01 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:46:01 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:46:01.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:46:01 compute-0 ceph-mon[74360]: pgmap v3715: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 11 KiB/s rd, 852 B/s wr, 15 op/s
Jan 20 15:46:02 compute-0 nova_compute[250018]: 2026-01-20 15:46:02.068 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:46:02 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:46:02 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:46:02 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:46:02.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:46:03 compute-0 nova_compute[250018]: 2026-01-20 15:46:03.324 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:46:03 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:46:03 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3716: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:46:03 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:46:03 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:46:03 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:46:03.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:46:04 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:46:04 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:46:04 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:46:04.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:46:04 compute-0 ceph-mon[74360]: pgmap v3716: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:46:05 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3717: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:46:05 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:46:05 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:46:05 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:46:05.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:46:06 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:46:06 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:46:06 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:46:06.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:46:06 compute-0 ceph-mon[74360]: pgmap v3717: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:46:07 compute-0 nova_compute[250018]: 2026-01-20 15:46:07.074 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:46:07 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3718: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:46:07 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:46:07 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:46:07 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:46:07.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:46:08 compute-0 nova_compute[250018]: 2026-01-20 15:46:08.325 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:46:08 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:46:08 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:46:08 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:46:08.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:46:08 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:46:08 compute-0 ceph-mon[74360]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #189. Immutable memtables: 0.
Jan 20 15:46:08 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:46:08.552813) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 15:46:08 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:856] [default] [JOB 117] Flushing memtable with next log file: 189
Jan 20 15:46:08 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768923968552845, "job": 117, "event": "flush_started", "num_memtables": 1, "num_entries": 636, "num_deletes": 250, "total_data_size": 831225, "memory_usage": 843536, "flush_reason": "Manual Compaction"}
Jan 20 15:46:08 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:885] [default] [JOB 117] Level-0 flush table #190: started
Jan 20 15:46:08 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768923968558264, "cf_name": "default", "job": 117, "event": "table_file_creation", "file_number": 190, "file_size": 531405, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 82562, "largest_seqno": 83197, "table_properties": {"data_size": 528539, "index_size": 837, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 7711, "raw_average_key_size": 20, "raw_value_size": 522605, "raw_average_value_size": 1382, "num_data_blocks": 38, "num_entries": 378, "num_filter_entries": 378, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768923919, "oldest_key_time": 1768923919, "file_creation_time": 1768923968, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1688fb6f-f397-4aac-a0e8-874ff91d97a7", "db_session_id": "5V3N2TVXYZBCXP55EZHK", "orig_file_number": 190, "seqno_to_time_mapping": "N/A"}}
Jan 20 15:46:08 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 117] Flush lasted 5498 microseconds, and 2659 cpu microseconds.
Jan 20 15:46:08 compute-0 ceph-mon[74360]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 15:46:08 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:46:08.558310) [db/flush_job.cc:967] [default] [JOB 117] Level-0 flush table #190: 531405 bytes OK
Jan 20 15:46:08 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:46:08.558327) [db/memtable_list.cc:519] [default] Level-0 commit table #190 started
Jan 20 15:46:08 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:46:08.559776) [db/memtable_list.cc:722] [default] Level-0 commit table #190: memtable #1 done
Jan 20 15:46:08 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:46:08.559810) EVENT_LOG_v1 {"time_micros": 1768923968559799, "job": 117, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 15:46:08 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:46:08.559835) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 15:46:08 compute-0 ceph-mon[74360]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 117] Try to delete WAL files size 827915, prev total WAL file size 827915, number of live WAL files 2.
Jan 20 15:46:08 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000186.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:46:08 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:46:08.560638) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740033303039' seq:72057594037927935, type:22 .. '6D6772737461740033323630' seq:0, type:0; will stop at (end)
Jan 20 15:46:08 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 118] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 15:46:08 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 117 Base level 0, inputs: [190(518KB)], [188(13MB)]
Jan 20 15:46:08 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768923968560738, "job": 118, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [190], "files_L6": [188], "score": -1, "input_data_size": 15013279, "oldest_snapshot_seqno": -1}
Jan 20 15:46:08 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 118] Generated table #191: 10919 keys, 11470432 bytes, temperature: kUnknown
Jan 20 15:46:08 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768923968635802, "cf_name": "default", "job": 118, "event": "table_file_creation", "file_number": 191, "file_size": 11470432, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11404124, "index_size": 37907, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 27333, "raw_key_size": 288068, "raw_average_key_size": 26, "raw_value_size": 11217278, "raw_average_value_size": 1027, "num_data_blocks": 1430, "num_entries": 10919, "num_filter_entries": 10919, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768917305, "oldest_key_time": 0, "file_creation_time": 1768923968, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1688fb6f-f397-4aac-a0e8-874ff91d97a7", "db_session_id": "5V3N2TVXYZBCXP55EZHK", "orig_file_number": 191, "seqno_to_time_mapping": "N/A"}}
Jan 20 15:46:08 compute-0 ceph-mon[74360]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 15:46:08 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:46:08.636701) [db/compaction/compaction_job.cc:1663] [default] [JOB 118] Compacted 1@0 + 1@6 files to L6 => 11470432 bytes
Jan 20 15:46:08 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:46:08.638324) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 199.8 rd, 152.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 13.8 +0.0 blob) out(10.9 +0.0 blob), read-write-amplify(49.8) write-amplify(21.6) OK, records in: 11409, records dropped: 490 output_compression: NoCompression
Jan 20 15:46:08 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:46:08.638344) EVENT_LOG_v1 {"time_micros": 1768923968638335, "job": 118, "event": "compaction_finished", "compaction_time_micros": 75156, "compaction_time_cpu_micros": 40182, "output_level": 6, "num_output_files": 1, "total_output_size": 11470432, "num_input_records": 11409, "num_output_records": 10919, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 15:46:08 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000190.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:46:08 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768923968638626, "job": 118, "event": "table_file_deletion", "file_number": 190}
Jan 20 15:46:08 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000188.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:46:08 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768923968642200, "job": 118, "event": "table_file_deletion", "file_number": 188}
Jan 20 15:46:08 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:46:08.560583) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:46:08 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:46:08.642309) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:46:08 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:46:08.642314) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:46:08 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:46:08.642317) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:46:08 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:46:08.642320) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:46:08 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:46:08.642322) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:46:08 compute-0 ceph-mon[74360]: pgmap v3718: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:46:09 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3719: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:46:09 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:46:09 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:46:09 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:46:09.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:46:10 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:46:10 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:46:10 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:46:10.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:46:10 compute-0 ceph-mon[74360]: pgmap v3719: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:46:11 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3720: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:46:11 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:46:11 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:46:11 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:46:11.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:46:12 compute-0 nova_compute[250018]: 2026-01-20 15:46:12.077 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:46:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 15:46:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:46:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 20 15:46:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:46:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 20 15:46:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:46:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021614147124511445 of space, bias 1.0, pg target 0.6484244137353433 quantized to 32 (current 32)
Jan 20 15:46:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:46:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 20 15:46:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:46:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 20 15:46:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:46:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 20 15:46:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:46:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 15:46:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:46:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 20 15:46:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:46:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 20 15:46:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:46:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 15:46:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:46:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 20 15:46:12 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:46:12 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:46:12 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:46:12.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:46:12 compute-0 ceph-mon[74360]: pgmap v3720: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:46:13 compute-0 nova_compute[250018]: 2026-01-20 15:46:13.365 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:46:13 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:46:13 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3721: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:46:13 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:46:13 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:46:13 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:46:13.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:46:13 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/2810297471' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:46:13 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/2810297471' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:46:14 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:46:14 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:46:14 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:46:14.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:46:14 compute-0 ceph-mon[74360]: pgmap v3721: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:46:14 compute-0 sudo[403621]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:46:14 compute-0 sudo[403621]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:46:14 compute-0 sudo[403621]: pam_unix(sudo:session): session closed for user root
Jan 20 15:46:14 compute-0 sudo[403646]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:46:14 compute-0 sudo[403646]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:46:14 compute-0 sudo[403646]: pam_unix(sudo:session): session closed for user root
Jan 20 15:46:15 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3722: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:46:15 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:46:15 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:46:15 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:46:15.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:46:15 compute-0 ceph-mon[74360]: pgmap v3722: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:46:16 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:46:16 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:46:16 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:46:16.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:46:17 compute-0 nova_compute[250018]: 2026-01-20 15:46:17.081 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:46:17 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3723: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:46:17 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:46:17 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:46:17 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:46:17.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:46:18 compute-0 sudo[403672]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:46:18 compute-0 sudo[403672]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:46:18 compute-0 sudo[403672]: pam_unix(sudo:session): session closed for user root
Jan 20 15:46:18 compute-0 nova_compute[250018]: 2026-01-20 15:46:18.367 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:46:18 compute-0 sudo[403697]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:46:18 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:46:18 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:46:18 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:46:18.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:46:18 compute-0 sudo[403697]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:46:18 compute-0 sudo[403697]: pam_unix(sudo:session): session closed for user root
Jan 20 15:46:18 compute-0 sudo[403723]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:46:18 compute-0 sudo[403723]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:46:18 compute-0 sudo[403723]: pam_unix(sudo:session): session closed for user root
Jan 20 15:46:18 compute-0 sudo[403748]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 15:46:18 compute-0 sudo[403748]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:46:18 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:46:18 compute-0 ceph-mon[74360]: pgmap v3723: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:46:18 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-crash-compute-0[81580]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
Jan 20 15:46:19 compute-0 sudo[403748]: pam_unix(sudo:session): session closed for user root
Jan 20 15:46:19 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 15:46:19 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:46:19 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 20 15:46:19 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 15:46:19 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 20 15:46:19 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:46:19 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 764e3029-1611-4baa-b93f-0ab639dd2c42 does not exist
Jan 20 15:46:19 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 1a17fbb3-19e6-44d2-bc81-98da1b6664cc does not exist
Jan 20 15:46:19 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 60de8c98-9834-45ec-9ec3-c76b1b10baaf does not exist
Jan 20 15:46:19 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 20 15:46:19 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 15:46:19 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 20 15:46:19 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 15:46:19 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 15:46:19 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:46:19 compute-0 sudo[403802]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:46:19 compute-0 sudo[403802]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:46:19 compute-0 sudo[403802]: pam_unix(sudo:session): session closed for user root
Jan 20 15:46:19 compute-0 sudo[403827]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:46:19 compute-0 sudo[403827]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:46:19 compute-0 sudo[403827]: pam_unix(sudo:session): session closed for user root
Jan 20 15:46:19 compute-0 sudo[403852]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:46:19 compute-0 sudo[403852]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:46:19 compute-0 sudo[403852]: pam_unix(sudo:session): session closed for user root
Jan 20 15:46:19 compute-0 sudo[403877]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 15:46:19 compute-0 sudo[403877]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:46:19 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3724: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:46:19 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:46:19 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 15:46:19 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:46:19 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 15:46:19 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 15:46:19 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:46:19 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:46:19 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:46:19 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:46:19.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:46:19 compute-0 podman[403943]: 2026-01-20 15:46:19.872331679 +0000 UTC m=+0.067186371 container create 1795be5a74211eb80b73617cc6f023e6ce27230fa119ceeae91361513559c0e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_kepler, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2)
Jan 20 15:46:19 compute-0 systemd[1]: Started libpod-conmon-1795be5a74211eb80b73617cc6f023e6ce27230fa119ceeae91361513559c0e6.scope.
Jan 20 15:46:19 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:46:19 compute-0 podman[403943]: 2026-01-20 15:46:19.846856039 +0000 UTC m=+0.041710721 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:46:19 compute-0 podman[403943]: 2026-01-20 15:46:19.956026376 +0000 UTC m=+0.150881088 container init 1795be5a74211eb80b73617cc6f023e6ce27230fa119ceeae91361513559c0e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_kepler, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 15:46:19 compute-0 podman[403943]: 2026-01-20 15:46:19.968187276 +0000 UTC m=+0.163041938 container start 1795be5a74211eb80b73617cc6f023e6ce27230fa119ceeae91361513559c0e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_kepler, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True)
Jan 20 15:46:19 compute-0 podman[403943]: 2026-01-20 15:46:19.971841254 +0000 UTC m=+0.166696006 container attach 1795be5a74211eb80b73617cc6f023e6ce27230fa119ceeae91361513559c0e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_kepler, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 15:46:19 compute-0 clever_kepler[403959]: 167 167
Jan 20 15:46:19 compute-0 systemd[1]: libpod-1795be5a74211eb80b73617cc6f023e6ce27230fa119ceeae91361513559c0e6.scope: Deactivated successfully.
Jan 20 15:46:19 compute-0 conmon[403959]: conmon 1795be5a74211eb80b73 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1795be5a74211eb80b73617cc6f023e6ce27230fa119ceeae91361513559c0e6.scope/container/memory.events
Jan 20 15:46:19 compute-0 podman[403943]: 2026-01-20 15:46:19.974868206 +0000 UTC m=+0.169722868 container died 1795be5a74211eb80b73617cc6f023e6ce27230fa119ceeae91361513559c0e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_kepler, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 20 15:46:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-6f0e916563e27b00821d9a9c399ac30ad822677c8f63c8638eaea73d47055cdd-merged.mount: Deactivated successfully.
Jan 20 15:46:20 compute-0 podman[403943]: 2026-01-20 15:46:20.012842475 +0000 UTC m=+0.207697127 container remove 1795be5a74211eb80b73617cc6f023e6ce27230fa119ceeae91361513559c0e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_kepler, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:46:20 compute-0 systemd[1]: libpod-conmon-1795be5a74211eb80b73617cc6f023e6ce27230fa119ceeae91361513559c0e6.scope: Deactivated successfully.
Jan 20 15:46:20 compute-0 podman[403983]: 2026-01-20 15:46:20.231666962 +0000 UTC m=+0.066909813 container create 98c68776fea62d563087f22558f1c44b35010aa7cb1ae0050bc3a2e57274f1c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_ritchie, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:46:20 compute-0 systemd[1]: Started libpod-conmon-98c68776fea62d563087f22558f1c44b35010aa7cb1ae0050bc3a2e57274f1c2.scope.
Jan 20 15:46:20 compute-0 podman[403983]: 2026-01-20 15:46:20.207265161 +0000 UTC m=+0.042508022 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:46:20 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:46:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2bbb96f2eecf6a7bbe8221c542aeb93c40927e43c0119f06ae6966f5cfb1da1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 15:46:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2bbb96f2eecf6a7bbe8221c542aeb93c40927e43c0119f06ae6966f5cfb1da1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 15:46:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2bbb96f2eecf6a7bbe8221c542aeb93c40927e43c0119f06ae6966f5cfb1da1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 15:46:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2bbb96f2eecf6a7bbe8221c542aeb93c40927e43c0119f06ae6966f5cfb1da1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 15:46:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2bbb96f2eecf6a7bbe8221c542aeb93c40927e43c0119f06ae6966f5cfb1da1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 15:46:20 compute-0 podman[403983]: 2026-01-20 15:46:20.326149961 +0000 UTC m=+0.161392782 container init 98c68776fea62d563087f22558f1c44b35010aa7cb1ae0050bc3a2e57274f1c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_ritchie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 20 15:46:20 compute-0 podman[403983]: 2026-01-20 15:46:20.342683049 +0000 UTC m=+0.177925870 container start 98c68776fea62d563087f22558f1c44b35010aa7cb1ae0050bc3a2e57274f1c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_ritchie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 20 15:46:20 compute-0 podman[403983]: 2026-01-20 15:46:20.346844282 +0000 UTC m=+0.182087113 container attach 98c68776fea62d563087f22558f1c44b35010aa7cb1ae0050bc3a2e57274f1c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_ritchie, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 20 15:46:20 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:46:20 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:46:20 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:46:20.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:46:20 compute-0 ceph-mon[74360]: pgmap v3724: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:46:21 compute-0 hopeful_ritchie[403999]: --> passed data devices: 0 physical, 1 LVM
Jan 20 15:46:21 compute-0 hopeful_ritchie[403999]: --> relative data size: 1.0
Jan 20 15:46:21 compute-0 hopeful_ritchie[403999]: --> All data devices are unavailable
Jan 20 15:46:21 compute-0 systemd[1]: libpod-98c68776fea62d563087f22558f1c44b35010aa7cb1ae0050bc3a2e57274f1c2.scope: Deactivated successfully.
Jan 20 15:46:21 compute-0 conmon[403999]: conmon 98c68776fea62d563087 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-98c68776fea62d563087f22558f1c44b35010aa7cb1ae0050bc3a2e57274f1c2.scope/container/memory.events
Jan 20 15:46:21 compute-0 podman[403983]: 2026-01-20 15:46:21.191536721 +0000 UTC m=+1.026779532 container died 98c68776fea62d563087f22558f1c44b35010aa7cb1ae0050bc3a2e57274f1c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_ritchie, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 20 15:46:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-f2bbb96f2eecf6a7bbe8221c542aeb93c40927e43c0119f06ae6966f5cfb1da1-merged.mount: Deactivated successfully.
Jan 20 15:46:21 compute-0 podman[403983]: 2026-01-20 15:46:21.255704879 +0000 UTC m=+1.090947690 container remove 98c68776fea62d563087f22558f1c44b35010aa7cb1ae0050bc3a2e57274f1c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_ritchie, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 15:46:21 compute-0 systemd[1]: libpod-conmon-98c68776fea62d563087f22558f1c44b35010aa7cb1ae0050bc3a2e57274f1c2.scope: Deactivated successfully.
Jan 20 15:46:21 compute-0 sudo[403877]: pam_unix(sudo:session): session closed for user root
Jan 20 15:46:21 compute-0 sudo[404026]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:46:21 compute-0 sudo[404026]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:46:21 compute-0 sudo[404026]: pam_unix(sudo:session): session closed for user root
Jan 20 15:46:21 compute-0 sudo[404051]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:46:21 compute-0 sudo[404051]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:46:21 compute-0 sudo[404051]: pam_unix(sudo:session): session closed for user root
Jan 20 15:46:21 compute-0 sudo[404076]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:46:21 compute-0 sudo[404076]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:46:21 compute-0 sudo[404076]: pam_unix(sudo:session): session closed for user root
Jan 20 15:46:21 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3725: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:46:21 compute-0 sudo[404101]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- lvm list --format json
Jan 20 15:46:21 compute-0 sudo[404101]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:46:21 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:46:21 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:46:21 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:46:21.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:46:21 compute-0 podman[404167]: 2026-01-20 15:46:21.973262205 +0000 UTC m=+0.036654154 container create 33083119a705330d530b8a008166eccf1314732de8cdea01feb9095a24f696dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_lamarr, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 20 15:46:22 compute-0 systemd[1]: Started libpod-conmon-33083119a705330d530b8a008166eccf1314732de8cdea01feb9095a24f696dd.scope.
Jan 20 15:46:22 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:46:22 compute-0 podman[404167]: 2026-01-20 15:46:22.042084929 +0000 UTC m=+0.105476898 container init 33083119a705330d530b8a008166eccf1314732de8cdea01feb9095a24f696dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_lamarr, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 20 15:46:22 compute-0 podman[404167]: 2026-01-20 15:46:22.047192067 +0000 UTC m=+0.110584016 container start 33083119a705330d530b8a008166eccf1314732de8cdea01feb9095a24f696dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_lamarr, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 20 15:46:22 compute-0 podman[404167]: 2026-01-20 15:46:22.050440405 +0000 UTC m=+0.113832364 container attach 33083119a705330d530b8a008166eccf1314732de8cdea01feb9095a24f696dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_lamarr, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 20 15:46:22 compute-0 frosty_lamarr[404183]: 167 167
Jan 20 15:46:22 compute-0 systemd[1]: libpod-33083119a705330d530b8a008166eccf1314732de8cdea01feb9095a24f696dd.scope: Deactivated successfully.
Jan 20 15:46:22 compute-0 podman[404167]: 2026-01-20 15:46:22.052792509 +0000 UTC m=+0.116184458 container died 33083119a705330d530b8a008166eccf1314732de8cdea01feb9095a24f696dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_lamarr, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 15:46:22 compute-0 podman[404167]: 2026-01-20 15:46:21.958060433 +0000 UTC m=+0.021452382 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:46:22 compute-0 nova_compute[250018]: 2026-01-20 15:46:22.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:46:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-9f8da4159637740918d0d098dc04f5fa12a5b312d0b150f63456ecb5ca14f33f-merged.mount: Deactivated successfully.
Jan 20 15:46:22 compute-0 podman[404167]: 2026-01-20 15:46:22.093551103 +0000 UTC m=+0.156943062 container remove 33083119a705330d530b8a008166eccf1314732de8cdea01feb9095a24f696dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_lamarr, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 20 15:46:22 compute-0 systemd[1]: libpod-conmon-33083119a705330d530b8a008166eccf1314732de8cdea01feb9095a24f696dd.scope: Deactivated successfully.
Jan 20 15:46:22 compute-0 nova_compute[250018]: 2026-01-20 15:46:22.135 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:46:22 compute-0 podman[404208]: 2026-01-20 15:46:22.308135975 +0000 UTC m=+0.067156070 container create 98f013744acc4f63bfa508e28b149ae5e468a3f96832098a4b174f4c358fe03f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_wu, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True)
Jan 20 15:46:22 compute-0 systemd[1]: Started libpod-conmon-98f013744acc4f63bfa508e28b149ae5e468a3f96832098a4b174f4c358fe03f.scope.
Jan 20 15:46:22 compute-0 podman[404208]: 2026-01-20 15:46:22.286421297 +0000 UTC m=+0.045441432 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:46:22 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:46:22 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:46:22 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:46:22.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:46:22 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:46:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c7722fe091bc8e804d56ba86bbd4b72ddee7fbbbb9e3af84e802537a1aa578e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 15:46:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c7722fe091bc8e804d56ba86bbd4b72ddee7fbbbb9e3af84e802537a1aa578e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 15:46:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c7722fe091bc8e804d56ba86bbd4b72ddee7fbbbb9e3af84e802537a1aa578e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 15:46:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c7722fe091bc8e804d56ba86bbd4b72ddee7fbbbb9e3af84e802537a1aa578e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 15:46:22 compute-0 podman[404208]: 2026-01-20 15:46:22.407888887 +0000 UTC m=+0.166909012 container init 98f013744acc4f63bfa508e28b149ae5e468a3f96832098a4b174f4c358fe03f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_wu, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef)
Jan 20 15:46:22 compute-0 podman[404208]: 2026-01-20 15:46:22.422127103 +0000 UTC m=+0.181147198 container start 98f013744acc4f63bfa508e28b149ae5e468a3f96832098a4b174f4c358fe03f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_wu, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 20 15:46:22 compute-0 podman[404208]: 2026-01-20 15:46:22.425900185 +0000 UTC m=+0.184920310 container attach 98f013744acc4f63bfa508e28b149ae5e468a3f96832098a4b174f4c358fe03f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_wu, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Jan 20 15:46:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:46:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:46:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:46:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:46:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:46:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:46:22 compute-0 ceph-mon[74360]: pgmap v3725: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:46:23 compute-0 beautiful_wu[404226]: {
Jan 20 15:46:23 compute-0 beautiful_wu[404226]:     "0": [
Jan 20 15:46:23 compute-0 beautiful_wu[404226]:         {
Jan 20 15:46:23 compute-0 beautiful_wu[404226]:             "devices": [
Jan 20 15:46:23 compute-0 beautiful_wu[404226]:                 "/dev/loop3"
Jan 20 15:46:23 compute-0 beautiful_wu[404226]:             ],
Jan 20 15:46:23 compute-0 beautiful_wu[404226]:             "lv_name": "ceph_lv0",
Jan 20 15:46:23 compute-0 beautiful_wu[404226]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 15:46:23 compute-0 beautiful_wu[404226]:             "lv_size": "7511998464",
Jan 20 15:46:23 compute-0 beautiful_wu[404226]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e399cf45-e6b6-5393-99f1-75c601d3f188,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afc3740a-ccee-46f8-b343-22648cc89376,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 20 15:46:23 compute-0 beautiful_wu[404226]:             "lv_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 15:46:23 compute-0 beautiful_wu[404226]:             "name": "ceph_lv0",
Jan 20 15:46:23 compute-0 beautiful_wu[404226]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 15:46:23 compute-0 beautiful_wu[404226]:             "tags": {
Jan 20 15:46:23 compute-0 beautiful_wu[404226]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 15:46:23 compute-0 beautiful_wu[404226]:                 "ceph.block_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 15:46:23 compute-0 beautiful_wu[404226]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 15:46:23 compute-0 beautiful_wu[404226]:                 "ceph.cluster_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 15:46:23 compute-0 beautiful_wu[404226]:                 "ceph.cluster_name": "ceph",
Jan 20 15:46:23 compute-0 beautiful_wu[404226]:                 "ceph.crush_device_class": "",
Jan 20 15:46:23 compute-0 beautiful_wu[404226]:                 "ceph.encrypted": "0",
Jan 20 15:46:23 compute-0 beautiful_wu[404226]:                 "ceph.osd_fsid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 15:46:23 compute-0 beautiful_wu[404226]:                 "ceph.osd_id": "0",
Jan 20 15:46:23 compute-0 beautiful_wu[404226]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 15:46:23 compute-0 beautiful_wu[404226]:                 "ceph.type": "block",
Jan 20 15:46:23 compute-0 beautiful_wu[404226]:                 "ceph.vdo": "0"
Jan 20 15:46:23 compute-0 beautiful_wu[404226]:             },
Jan 20 15:46:23 compute-0 beautiful_wu[404226]:             "type": "block",
Jan 20 15:46:23 compute-0 beautiful_wu[404226]:             "vg_name": "ceph_vg0"
Jan 20 15:46:23 compute-0 beautiful_wu[404226]:         }
Jan 20 15:46:23 compute-0 beautiful_wu[404226]:     ]
Jan 20 15:46:23 compute-0 beautiful_wu[404226]: }
Jan 20 15:46:23 compute-0 systemd[1]: libpod-98f013744acc4f63bfa508e28b149ae5e468a3f96832098a4b174f4c358fe03f.scope: Deactivated successfully.
Jan 20 15:46:23 compute-0 podman[404208]: 2026-01-20 15:46:23.40295502 +0000 UTC m=+1.161975145 container died 98f013744acc4f63bfa508e28b149ae5e468a3f96832098a4b174f4c358fe03f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_wu, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 20 15:46:23 compute-0 nova_compute[250018]: 2026-01-20 15:46:23.403 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:46:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-7c7722fe091bc8e804d56ba86bbd4b72ddee7fbbbb9e3af84e802537a1aa578e-merged.mount: Deactivated successfully.
Jan 20 15:46:23 compute-0 podman[404208]: 2026-01-20 15:46:23.471113265 +0000 UTC m=+1.230133350 container remove 98f013744acc4f63bfa508e28b149ae5e468a3f96832098a4b174f4c358fe03f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_wu, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 15:46:23 compute-0 systemd[1]: libpod-conmon-98f013744acc4f63bfa508e28b149ae5e468a3f96832098a4b174f4c358fe03f.scope: Deactivated successfully.
Jan 20 15:46:23 compute-0 sudo[404101]: pam_unix(sudo:session): session closed for user root
Jan 20 15:46:23 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:46:23 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3726: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:46:23 compute-0 sudo[404252]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:46:23 compute-0 podman[404249]: 2026-01-20 15:46:23.581726241 +0000 UTC m=+0.062455582 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent)
Jan 20 15:46:23 compute-0 sudo[404252]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:46:23 compute-0 sudo[404252]: pam_unix(sudo:session): session closed for user root
Jan 20 15:46:23 compute-0 podman[404251]: 2026-01-20 15:46:23.613567864 +0000 UTC m=+0.092914698 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 20 15:46:23 compute-0 sudo[404314]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:46:23 compute-0 sudo[404314]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:46:23 compute-0 sudo[404314]: pam_unix(sudo:session): session closed for user root
Jan 20 15:46:23 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:46:23 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:46:23 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:46:23.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:46:23 compute-0 sudo[404341]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:46:23 compute-0 sudo[404341]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:46:23 compute-0 sudo[404341]: pam_unix(sudo:session): session closed for user root
Jan 20 15:46:23 compute-0 sudo[404366]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- raw list --format json
Jan 20 15:46:23 compute-0 sudo[404366]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:46:24 compute-0 nova_compute[250018]: 2026-01-20 15:46:24.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:46:24 compute-0 podman[404432]: 2026-01-20 15:46:24.11368382 +0000 UTC m=+0.041998199 container create 9d99ea070e7b6f6c66712dc0115cc8100b77d6c9041956b568a0e96f256a4a88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_lumiere, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 15:46:24 compute-0 systemd[1]: Started libpod-conmon-9d99ea070e7b6f6c66712dc0115cc8100b77d6c9041956b568a0e96f256a4a88.scope.
Jan 20 15:46:24 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:46:24 compute-0 podman[404432]: 2026-01-20 15:46:24.183161552 +0000 UTC m=+0.111475931 container init 9d99ea070e7b6f6c66712dc0115cc8100b77d6c9041956b568a0e96f256a4a88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_lumiere, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 20 15:46:24 compute-0 podman[404432]: 2026-01-20 15:46:24.094933422 +0000 UTC m=+0.023247811 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:46:24 compute-0 podman[404432]: 2026-01-20 15:46:24.19121707 +0000 UTC m=+0.119531449 container start 9d99ea070e7b6f6c66712dc0115cc8100b77d6c9041956b568a0e96f256a4a88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_lumiere, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef)
Jan 20 15:46:24 compute-0 podman[404432]: 2026-01-20 15:46:24.194756596 +0000 UTC m=+0.123070975 container attach 9d99ea070e7b6f6c66712dc0115cc8100b77d6c9041956b568a0e96f256a4a88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_lumiere, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 15:46:24 compute-0 happy_lumiere[404449]: 167 167
Jan 20 15:46:24 compute-0 systemd[1]: libpod-9d99ea070e7b6f6c66712dc0115cc8100b77d6c9041956b568a0e96f256a4a88.scope: Deactivated successfully.
Jan 20 15:46:24 compute-0 podman[404432]: 2026-01-20 15:46:24.196990897 +0000 UTC m=+0.125305266 container died 9d99ea070e7b6f6c66712dc0115cc8100b77d6c9041956b568a0e96f256a4a88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_lumiere, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:46:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-2c22d72cf8213474df7f483e0a385c6eae7020228a4b9471455be861ce6b1879-merged.mount: Deactivated successfully.
Jan 20 15:46:24 compute-0 podman[404432]: 2026-01-20 15:46:24.234274406 +0000 UTC m=+0.162588775 container remove 9d99ea070e7b6f6c66712dc0115cc8100b77d6c9041956b568a0e96f256a4a88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_lumiere, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:46:24 compute-0 systemd[1]: libpod-conmon-9d99ea070e7b6f6c66712dc0115cc8100b77d6c9041956b568a0e96f256a4a88.scope: Deactivated successfully.
Jan 20 15:46:24 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:46:24 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:46:24 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:46:24.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:46:24 compute-0 podman[404471]: 2026-01-20 15:46:24.391257259 +0000 UTC m=+0.051252650 container create 88fbe63e09d6127151f986885f5bb5f71dda86057c8502b6c230192bc2a562c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_sammet, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 15:46:24 compute-0 systemd[1]: Started libpod-conmon-88fbe63e09d6127151f986885f5bb5f71dda86057c8502b6c230192bc2a562c9.scope.
Jan 20 15:46:24 compute-0 podman[404471]: 2026-01-20 15:46:24.365137521 +0000 UTC m=+0.025132942 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:46:24 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:46:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e95e70d9867a7f992c5718869a78c330629bd08a4ee7ac9d906901083651e42/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 15:46:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e95e70d9867a7f992c5718869a78c330629bd08a4ee7ac9d906901083651e42/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 15:46:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e95e70d9867a7f992c5718869a78c330629bd08a4ee7ac9d906901083651e42/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 15:46:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e95e70d9867a7f992c5718869a78c330629bd08a4ee7ac9d906901083651e42/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 15:46:24 compute-0 podman[404471]: 2026-01-20 15:46:24.487724411 +0000 UTC m=+0.147719842 container init 88fbe63e09d6127151f986885f5bb5f71dda86057c8502b6c230192bc2a562c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_sammet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 20 15:46:24 compute-0 podman[404471]: 2026-01-20 15:46:24.495852161 +0000 UTC m=+0.155847542 container start 88fbe63e09d6127151f986885f5bb5f71dda86057c8502b6c230192bc2a562c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_sammet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:46:24 compute-0 podman[404471]: 2026-01-20 15:46:24.499617583 +0000 UTC m=+0.159612994 container attach 88fbe63e09d6127151f986885f5bb5f71dda86057c8502b6c230192bc2a562c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_sammet, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 15:46:24 compute-0 ceph-mon[74360]: pgmap v3726: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:46:25 compute-0 lucid_sammet[404488]: {
Jan 20 15:46:25 compute-0 lucid_sammet[404488]:     "afc3740a-ccee-46f8-b343-22648cc89376": {
Jan 20 15:46:25 compute-0 lucid_sammet[404488]:         "ceph_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 15:46:25 compute-0 lucid_sammet[404488]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 20 15:46:25 compute-0 lucid_sammet[404488]:         "osd_id": 0,
Jan 20 15:46:25 compute-0 lucid_sammet[404488]:         "osd_uuid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 15:46:25 compute-0 lucid_sammet[404488]:         "type": "bluestore"
Jan 20 15:46:25 compute-0 lucid_sammet[404488]:     }
Jan 20 15:46:25 compute-0 lucid_sammet[404488]: }
Jan 20 15:46:25 compute-0 systemd[1]: libpod-88fbe63e09d6127151f986885f5bb5f71dda86057c8502b6c230192bc2a562c9.scope: Deactivated successfully.
Jan 20 15:46:25 compute-0 podman[404471]: 2026-01-20 15:46:25.364835619 +0000 UTC m=+1.024831030 container died 88fbe63e09d6127151f986885f5bb5f71dda86057c8502b6c230192bc2a562c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_sammet, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 20 15:46:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-5e95e70d9867a7f992c5718869a78c330629bd08a4ee7ac9d906901083651e42-merged.mount: Deactivated successfully.
Jan 20 15:46:25 compute-0 podman[404471]: 2026-01-20 15:46:25.419623012 +0000 UTC m=+1.079618393 container remove 88fbe63e09d6127151f986885f5bb5f71dda86057c8502b6c230192bc2a562c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_sammet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 15:46:25 compute-0 systemd[1]: libpod-conmon-88fbe63e09d6127151f986885f5bb5f71dda86057c8502b6c230192bc2a562c9.scope: Deactivated successfully.
Jan 20 15:46:25 compute-0 sudo[404366]: pam_unix(sudo:session): session closed for user root
Jan 20 15:46:25 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 15:46:25 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:46:25 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 15:46:25 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:46:25 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev f50861fb-65d6-4231-87d8-1aafce22ed60 does not exist
Jan 20 15:46:25 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 7713e92a-78b8-4f5b-ae7b-018d05faad9f does not exist
Jan 20 15:46:25 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev c742400a-e6ae-4a64-bc25-c6b490192c39 does not exist
Jan 20 15:46:25 compute-0 sudo[404520]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:46:25 compute-0 sudo[404520]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:46:25 compute-0 sudo[404520]: pam_unix(sudo:session): session closed for user root
Jan 20 15:46:25 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3727: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:46:25 compute-0 sudo[404545]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 15:46:25 compute-0 sudo[404545]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:46:25 compute-0 sudo[404545]: pam_unix(sudo:session): session closed for user root
Jan 20 15:46:25 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:46:25 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:46:25 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:46:25.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:46:26 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:46:26 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:46:26 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:46:26.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:46:26 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:46:26 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:46:26 compute-0 ceph-mon[74360]: pgmap v3727: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:46:27 compute-0 nova_compute[250018]: 2026-01-20 15:46:27.137 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:46:27 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3728: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:46:27 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:46:27 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:46:27 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:46:27.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:46:28 compute-0 nova_compute[250018]: 2026-01-20 15:46:28.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:46:28 compute-0 nova_compute[250018]: 2026-01-20 15:46:28.052 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:46:28 compute-0 nova_compute[250018]: 2026-01-20 15:46:28.052 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 15:46:28 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:46:28 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:46:28 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:46:28.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:46:28 compute-0 nova_compute[250018]: 2026-01-20 15:46:28.449 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:46:28 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:46:28 compute-0 ceph-mon[74360]: pgmap v3728: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:46:29 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3729: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:46:29 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:46:29 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:46:29 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:46:29.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:46:30 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:46:30 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:46:30 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:46:30.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:46:30 compute-0 ceph-mon[74360]: pgmap v3729: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:46:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:46:30.821 160071 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:46:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:46:30.821 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:46:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:46:30.821 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:46:31 compute-0 nova_compute[250018]: 2026-01-20 15:46:31.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:46:31 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3730: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:46:31 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:46:31 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:46:31 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:46:31.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:46:32 compute-0 nova_compute[250018]: 2026-01-20 15:46:32.187 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:46:32 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:46:32 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:46:32 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:46:32.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:46:32 compute-0 ceph-mon[74360]: pgmap v3730: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:46:33 compute-0 nova_compute[250018]: 2026-01-20 15:46:33.452 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:46:33 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:46:33 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3731: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:46:33 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:46:33 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:46:33 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:46:33.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:46:34 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:46:34 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:46:34 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:46:34.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:46:34 compute-0 ceph-mon[74360]: pgmap v3731: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:46:34 compute-0 sudo[404575]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:46:34 compute-0 sudo[404575]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:46:34 compute-0 sudo[404575]: pam_unix(sudo:session): session closed for user root
Jan 20 15:46:35 compute-0 sudo[404600]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:46:35 compute-0 sudo[404600]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:46:35 compute-0 sudo[404600]: pam_unix(sudo:session): session closed for user root
Jan 20 15:46:35 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3732: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:46:35 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:46:35 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:46:35 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:46:35.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:46:35 compute-0 ceph-mon[74360]: pgmap v3732: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:46:36 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:46:36 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:46:36 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:46:36.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:46:37 compute-0 sshd-session[404625]: Invalid user guest from 134.122.57.138 port 49510
Jan 20 15:46:37 compute-0 nova_compute[250018]: 2026-01-20 15:46:37.189 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:46:37 compute-0 sshd-session[404625]: Connection closed by invalid user guest 134.122.57.138 port 49510 [preauth]
Jan 20 15:46:37 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3733: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:46:37 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:46:37 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:46:37 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:46:37.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:46:38 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:46:38 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:46:38 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:46:38.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:46:38 compute-0 nova_compute[250018]: 2026-01-20 15:46:38.455 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:46:38 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:46:38 compute-0 ceph-mon[74360]: pgmap v3733: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:46:39 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3734: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:46:39 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:46:39 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:46:39 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:46:39.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:46:40 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:46:40 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:46:40 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:46:40.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:46:40 compute-0 ceph-mon[74360]: pgmap v3734: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:46:41 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3735: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:46:41 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:46:41 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:46:41 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:46:41.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:46:42 compute-0 nova_compute[250018]: 2026-01-20 15:46:42.242 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:46:42 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:46:42 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:46:42 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:46:42.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:46:42 compute-0 ceph-mon[74360]: pgmap v3735: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:46:43 compute-0 nova_compute[250018]: 2026-01-20 15:46:43.485 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:46:43 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:46:43 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3736: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:46:43 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:46:43 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:46:43 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:46:43.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:46:44 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:46:44 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:46:44 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:46:44.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:46:44 compute-0 ceph-mon[74360]: pgmap v3736: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:46:45 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3737: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:46:45 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:46:45 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:46:45 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:46:45.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:46:46 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:46:46 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:46:46 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:46:46.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:46:46 compute-0 ceph-mon[74360]: pgmap v3737: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:46:47 compute-0 nova_compute[250018]: 2026-01-20 15:46:47.247 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:46:47 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3738: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:46:47 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:46:47 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:46:47 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:46:47.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:46:48 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:46:48 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:46:48 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:46:48.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:46:48 compute-0 nova_compute[250018]: 2026-01-20 15:46:48.548 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:46:48 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:46:48 compute-0 ceph-mon[74360]: pgmap v3738: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:46:49 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3739: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:46:49 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:46:49 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:46:49 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:46:49.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:46:50 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:46:50 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:46:50 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:46:50.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:46:50 compute-0 ceph-mon[74360]: pgmap v3739: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:46:51 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3740: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:46:51 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:46:51 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:46:51 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:46:51.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:46:51 compute-0 ceph-mon[74360]: pgmap v3740: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:46:52 compute-0 nova_compute[250018]: 2026-01-20 15:46:52.301 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:46:52 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:46:52 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:46:52 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:46:52.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:46:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:46:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:46:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:46:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:46:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:46:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:46:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Optimize plan auto_2026-01-20_15:46:52
Jan 20 15:46:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 15:46:52 compute-0 ceph-mgr[74653]: [balancer INFO root] do_upmap
Jan 20 15:46:52 compute-0 ceph-mgr[74653]: [balancer INFO root] pools ['.mgr', 'default.rgw.log', 'cephfs.cephfs.data', 'default.rgw.control', 'default.rgw.meta', 'cephfs.cephfs.meta', 'vms', 'volumes', '.rgw.root', 'images', 'backups']
Jan 20 15:46:52 compute-0 ceph-mgr[74653]: [balancer INFO root] prepared 0/10 changes
Jan 20 15:46:53 compute-0 nova_compute[250018]: 2026-01-20 15:46:53.550 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:46:53 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:46:53 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3741: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:46:53 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:46:53 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:46:53 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:46:53.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:46:54 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:46:54 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:46:54 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:46:54.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:46:54 compute-0 podman[404639]: 2026-01-20 15:46:54.498553562 +0000 UTC m=+0.078757035 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 20 15:46:54 compute-0 podman[404638]: 2026-01-20 15:46:54.528613505 +0000 UTC m=+0.108829399 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 15:46:54 compute-0 ceph-mon[74360]: pgmap v3741: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:46:55 compute-0 sudo[404685]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:46:55 compute-0 sudo[404685]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:46:55 compute-0 sudo[404685]: pam_unix(sudo:session): session closed for user root
Jan 20 15:46:55 compute-0 sudo[404710]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:46:55 compute-0 sudo[404710]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:46:55 compute-0 sudo[404710]: pam_unix(sudo:session): session closed for user root
Jan 20 15:46:55 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3742: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:46:55 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:46:55 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:46:55 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:46:55.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:46:56 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:46:56 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:46:56 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:46:56.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:46:56 compute-0 ceph-mon[74360]: pgmap v3742: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:46:57 compute-0 nova_compute[250018]: 2026-01-20 15:46:57.304 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:46:57 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3743: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:46:57 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:46:57 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:46:57 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:46:57.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:46:58 compute-0 ceph-mgr[74653]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 15:46:58 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 15:46:58 compute-0 ceph-mgr[74653]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 15:46:58 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 15:46:58 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 15:46:58 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 15:46:58 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 15:46:58 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 15:46:58 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 15:46:58 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 15:46:58 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:46:58 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:46:58 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:46:58.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:46:58 compute-0 nova_compute[250018]: 2026-01-20 15:46:58.552 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:46:58 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:46:58 compute-0 ceph-mon[74360]: pgmap v3743: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:46:59 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3744: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:46:59 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:46:59 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:46:59 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:46:59.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:47:00 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:47:00 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:47:00 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:47:00.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:47:00 compute-0 ceph-mon[74360]: pgmap v3744: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:47:01 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3745: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:47:01 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:47:01 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:47:01 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:47:01.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:47:02 compute-0 nova_compute[250018]: 2026-01-20 15:47:02.309 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:47:02 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:47:02 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:47:02 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:47:02.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:47:02 compute-0 ceph-mon[74360]: pgmap v3745: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:47:03 compute-0 nova_compute[250018]: 2026-01-20 15:47:03.554 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:47:03 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:47:03 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3746: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:47:03 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:47:03 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:47:03 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:47:03.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:47:04 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:47:04 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:47:04 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:47:04.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:47:04 compute-0 ceph-mon[74360]: pgmap v3746: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:47:05 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3747: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:47:05 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:47:05 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:47:05 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:47:05.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:47:06 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:47:06 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:47:06 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:47:06.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:47:06 compute-0 ceph-mon[74360]: pgmap v3747: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:47:07 compute-0 nova_compute[250018]: 2026-01-20 15:47:07.313 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:47:07 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3748: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:47:07 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:47:07 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:47:07 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:47:07.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:47:07 compute-0 ceph-mon[74360]: pgmap v3748: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:47:08 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:47:08 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:47:08 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:47:08.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:47:08 compute-0 nova_compute[250018]: 2026-01-20 15:47:08.556 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:47:08 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:47:08 compute-0 ceph-mon[74360]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #192. Immutable memtables: 0.
Jan 20 15:47:08 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:47:08.575526) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 15:47:08 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:856] [default] [JOB 119] Flushing memtable with next log file: 192
Jan 20 15:47:08 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768924028575575, "job": 119, "event": "flush_started", "num_memtables": 1, "num_entries": 741, "num_deletes": 254, "total_data_size": 1033991, "memory_usage": 1047904, "flush_reason": "Manual Compaction"}
Jan 20 15:47:08 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:885] [default] [JOB 119] Level-0 flush table #193: started
Jan 20 15:47:08 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768924028584016, "cf_name": "default", "job": 119, "event": "table_file_creation", "file_number": 193, "file_size": 1022693, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 83198, "largest_seqno": 83938, "table_properties": {"data_size": 1018873, "index_size": 1599, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 8317, "raw_average_key_size": 18, "raw_value_size": 1011251, "raw_average_value_size": 2298, "num_data_blocks": 71, "num_entries": 440, "num_filter_entries": 440, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768923968, "oldest_key_time": 1768923968, "file_creation_time": 1768924028, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1688fb6f-f397-4aac-a0e8-874ff91d97a7", "db_session_id": "5V3N2TVXYZBCXP55EZHK", "orig_file_number": 193, "seqno_to_time_mapping": "N/A"}}
Jan 20 15:47:08 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 119] Flush lasted 8557 microseconds, and 4466 cpu microseconds.
Jan 20 15:47:08 compute-0 ceph-mon[74360]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 15:47:08 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:47:08.584068) [db/flush_job.cc:967] [default] [JOB 119] Level-0 flush table #193: 1022693 bytes OK
Jan 20 15:47:08 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:47:08.584105) [db/memtable_list.cc:519] [default] Level-0 commit table #193 started
Jan 20 15:47:08 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:47:08.585974) [db/memtable_list.cc:722] [default] Level-0 commit table #193: memtable #1 done
Jan 20 15:47:08 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:47:08.585994) EVENT_LOG_v1 {"time_micros": 1768924028585986, "job": 119, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 15:47:08 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:47:08.586014) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 15:47:08 compute-0 ceph-mon[74360]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 119] Try to delete WAL files size 1030302, prev total WAL file size 1030302, number of live WAL files 2.
Jan 20 15:47:08 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000189.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:47:08 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:47:08.586668) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033353233' seq:72057594037927935, type:22 .. '6C6F676D0033373734' seq:0, type:0; will stop at (end)
Jan 20 15:47:08 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 120] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 15:47:08 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 119 Base level 0, inputs: [193(998KB)], [191(10MB)]
Jan 20 15:47:08 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768924028586735, "job": 120, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [193], "files_L6": [191], "score": -1, "input_data_size": 12493125, "oldest_snapshot_seqno": -1}
Jan 20 15:47:08 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 120] Generated table #194: 10839 keys, 12373283 bytes, temperature: kUnknown
Jan 20 15:47:08 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768924028663439, "cf_name": "default", "job": 120, "event": "table_file_creation", "file_number": 194, "file_size": 12373283, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12306064, "index_size": 39010, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 27141, "raw_key_size": 287308, "raw_average_key_size": 26, "raw_value_size": 12119264, "raw_average_value_size": 1118, "num_data_blocks": 1475, "num_entries": 10839, "num_filter_entries": 10839, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768917305, "oldest_key_time": 0, "file_creation_time": 1768924028, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1688fb6f-f397-4aac-a0e8-874ff91d97a7", "db_session_id": "5V3N2TVXYZBCXP55EZHK", "orig_file_number": 194, "seqno_to_time_mapping": "N/A"}}
Jan 20 15:47:08 compute-0 ceph-mon[74360]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 15:47:08 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:47:08.663747) [db/compaction/compaction_job.cc:1663] [default] [JOB 120] Compacted 1@0 + 1@6 files to L6 => 12373283 bytes
Jan 20 15:47:08 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:47:08.665410) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 162.7 rd, 161.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 10.9 +0.0 blob) out(11.8 +0.0 blob), read-write-amplify(24.3) write-amplify(12.1) OK, records in: 11359, records dropped: 520 output_compression: NoCompression
Jan 20 15:47:08 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:47:08.665430) EVENT_LOG_v1 {"time_micros": 1768924028665421, "job": 120, "event": "compaction_finished", "compaction_time_micros": 76783, "compaction_time_cpu_micros": 29201, "output_level": 6, "num_output_files": 1, "total_output_size": 12373283, "num_input_records": 11359, "num_output_records": 10839, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 15:47:08 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000193.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:47:08 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768924028665754, "job": 120, "event": "table_file_deletion", "file_number": 193}
Jan 20 15:47:08 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000191.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:47:08 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768924028668280, "job": 120, "event": "table_file_deletion", "file_number": 191}
Jan 20 15:47:08 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:47:08.586511) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:47:08 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:47:08.668360) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:47:08 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:47:08.668365) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:47:08 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:47:08.668367) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:47:08 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:47:08.668368) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:47:08 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:47:08.668370) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:47:09 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3749: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:47:09 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:47:09 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:47:09 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:47:09.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:47:10 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:47:10 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:47:10 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:47:10.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:47:10 compute-0 ceph-mon[74360]: pgmap v3749: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:47:11 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3750: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:47:11 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:47:11 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:47:11 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:47:11.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:47:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 15:47:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:47:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 20 15:47:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:47:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 20 15:47:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:47:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021614147124511445 of space, bias 1.0, pg target 0.6484244137353433 quantized to 32 (current 32)
Jan 20 15:47:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:47:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 20 15:47:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:47:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 20 15:47:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:47:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 20 15:47:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:47:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 15:47:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:47:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 20 15:47:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:47:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 20 15:47:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:47:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 15:47:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:47:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 20 15:47:12 compute-0 nova_compute[250018]: 2026-01-20 15:47:12.317 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:47:12 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:47:12 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:47:12 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:47:12.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:47:12 compute-0 ceph-mon[74360]: pgmap v3750: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:47:13 compute-0 nova_compute[250018]: 2026-01-20 15:47:13.557 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:47:13 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:47:13 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3751: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:47:13 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/559030029' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:47:13 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/559030029' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:47:13 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:47:13 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:47:13 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:47:13.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:47:14 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:47:14 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:47:14 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:47:14.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:47:14 compute-0 ceph-mon[74360]: pgmap v3751: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:47:14 compute-0 sshd-session[404744]: Invalid user oracle from 134.122.57.138 port 47528
Jan 20 15:47:15 compute-0 sudo[404747]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:47:15 compute-0 sudo[404747]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:47:15 compute-0 sudo[404747]: pam_unix(sudo:session): session closed for user root
Jan 20 15:47:15 compute-0 sshd-session[404744]: Connection closed by invalid user oracle 134.122.57.138 port 47528 [preauth]
Jan 20 15:47:15 compute-0 sudo[404772]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:47:15 compute-0 sudo[404772]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:47:15 compute-0 sudo[404772]: pam_unix(sudo:session): session closed for user root
Jan 20 15:47:15 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3752: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:47:15 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:47:15 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:47:15 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:47:15.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:47:16 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:47:16 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:47:16 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:47:16.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:47:16 compute-0 ceph-mon[74360]: pgmap v3752: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:47:16 compute-0 ceph-osd[84815]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 20 15:47:16 compute-0 ceph-osd[84815]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6600.1 total, 600.0 interval
                                           Cumulative writes: 60K writes, 223K keys, 60K commit groups, 1.0 writes per commit group, ingest: 0.21 GB, 0.03 MB/s
                                           Cumulative WAL: 60K writes, 22K syncs, 2.64 writes per sync, written: 0.21 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2300 writes, 7887 keys, 2300 commit groups, 1.0 writes per commit group, ingest: 8.47 MB, 0.01 MB/s
                                           Interval WAL: 2300 writes, 981 syncs, 2.34 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 20 15:47:17 compute-0 nova_compute[250018]: 2026-01-20 15:47:17.320 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:47:17 compute-0 nova_compute[250018]: 2026-01-20 15:47:17.485 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:47:17 compute-0 nova_compute[250018]: 2026-01-20 15:47:17.486 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:47:17 compute-0 nova_compute[250018]: 2026-01-20 15:47:17.486 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:47:17 compute-0 nova_compute[250018]: 2026-01-20 15:47:17.486 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 15:47:17 compute-0 nova_compute[250018]: 2026-01-20 15:47:17.487 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:47:17 compute-0 nova_compute[250018]: 2026-01-20 15:47:17.515 250022 WARNING oslo.service.loopingcall [-] Function 'nova.servicegroup.drivers.db.DbDriver._report_state' run outlasted interval by 61.03 sec
Jan 20 15:47:17 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3753: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:47:17 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:47:17 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:47:17 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:47:17.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:47:17 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/536885463' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:47:17 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:47:17 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1886461505' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:47:17 compute-0 nova_compute[250018]: 2026-01-20 15:47:17.929 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:47:18 compute-0 nova_compute[250018]: 2026-01-20 15:47:18.078 250022 WARNING nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:47:18 compute-0 nova_compute[250018]: 2026-01-20 15:47:18.079 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4198MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 15:47:18 compute-0 nova_compute[250018]: 2026-01-20 15:47:18.079 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:47:18 compute-0 nova_compute[250018]: 2026-01-20 15:47:18.080 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:47:18 compute-0 nova_compute[250018]: 2026-01-20 15:47:18.373 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 15:47:18 compute-0 nova_compute[250018]: 2026-01-20 15:47:18.374 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 15:47:18 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:47:18.420 160071 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=91, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '12:bb:42', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '06:92:24:f7:15:56'}, ipsec=False) old=SB_Global(nb_cfg=90) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 15:47:18 compute-0 nova_compute[250018]: 2026-01-20 15:47:18.421 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:47:18 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:47:18.421 160071 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 15:47:18 compute-0 nova_compute[250018]: 2026-01-20 15:47:18.438 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:47:18 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:47:18 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:47:18 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:47:18.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:47:18 compute-0 nova_compute[250018]: 2026-01-20 15:47:18.558 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:47:18 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:47:18 compute-0 ceph-mon[74360]: pgmap v3753: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:47:18 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/1886461505' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:47:18 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/2951300976' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:47:18 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/1097814839' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:47:18 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:47:18 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3316246032' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:47:18 compute-0 nova_compute[250018]: 2026-01-20 15:47:18.920 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:47:18 compute-0 nova_compute[250018]: 2026-01-20 15:47:18.926 250022 DEBUG nova.compute.provider_tree [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:47:18 compute-0 nova_compute[250018]: 2026-01-20 15:47:18.951 250022 DEBUG nova.scheduler.client.report [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:47:18 compute-0 nova_compute[250018]: 2026-01-20 15:47:18.954 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 15:47:18 compute-0 nova_compute[250018]: 2026-01-20 15:47:18.954 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.874s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:47:19 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3754: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:47:19 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:47:19 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:47:19 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:47:19.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:47:19 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3316246032' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:47:19 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/2122862983' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:47:20 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:47:20 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:47:20 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:47:20.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:47:20 compute-0 ceph-mon[74360]: pgmap v3754: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:47:21 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3755: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:47:21 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:47:21 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:47:21 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:47:21.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:47:21 compute-0 ceph-mon[74360]: pgmap v3755: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:47:22 compute-0 nova_compute[250018]: 2026-01-20 15:47:22.323 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:47:22 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:47:22 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:47:22 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:47:22.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:47:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:47:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:47:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:47:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:47:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:47:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:47:22 compute-0 nova_compute[250018]: 2026-01-20 15:47:22.955 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:47:22 compute-0 nova_compute[250018]: 2026-01-20 15:47:22.955 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:47:22 compute-0 nova_compute[250018]: 2026-01-20 15:47:22.955 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 15:47:22 compute-0 nova_compute[250018]: 2026-01-20 15:47:22.956 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 15:47:22 compute-0 nova_compute[250018]: 2026-01-20 15:47:22.981 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 20 15:47:22 compute-0 nova_compute[250018]: 2026-01-20 15:47:22.981 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:47:22 compute-0 nova_compute[250018]: 2026-01-20 15:47:22.981 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:47:23 compute-0 nova_compute[250018]: 2026-01-20 15:47:23.561 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:47:23 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:47:23 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3756: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:47:23 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:47:23 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:47:23 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:47:23.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:47:24 compute-0 nova_compute[250018]: 2026-01-20 15:47:24.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:47:24 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:47:24.423 160071 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=367c1a2c-b16a-4828-ab5a-626bb50023b4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '91'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 15:47:24 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:47:24 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:47:24 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:47:24.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:47:24 compute-0 ceph-mon[74360]: pgmap v3756: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:47:25 compute-0 podman[404847]: 2026-01-20 15:47:25.485775536 +0000 UTC m=+0.065971008 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 20 15:47:25 compute-0 podman[404846]: 2026-01-20 15:47:25.515347087 +0000 UTC m=+0.095592730 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 20 15:47:25 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3757: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:47:25 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:47:25 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:47:25 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:47:25.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:47:25 compute-0 sudo[404891]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:47:25 compute-0 sudo[404891]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:47:25 compute-0 sudo[404891]: pam_unix(sudo:session): session closed for user root
Jan 20 15:47:26 compute-0 sudo[404916]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:47:26 compute-0 sudo[404916]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:47:26 compute-0 sudo[404916]: pam_unix(sudo:session): session closed for user root
Jan 20 15:47:26 compute-0 nova_compute[250018]: 2026-01-20 15:47:26.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:47:26 compute-0 sudo[404941]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:47:26 compute-0 sudo[404941]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:47:26 compute-0 sudo[404941]: pam_unix(sudo:session): session closed for user root
Jan 20 15:47:26 compute-0 sudo[404966]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 15:47:26 compute-0 sudo[404966]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:47:26 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:47:26 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:47:26 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:47:26.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:47:26 compute-0 sudo[404966]: pam_unix(sudo:session): session closed for user root
Jan 20 15:47:26 compute-0 ceph-mon[74360]: pgmap v3757: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:47:26 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/2959745194' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:47:26 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 15:47:26 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:47:26 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 20 15:47:26 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 15:47:26 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 20 15:47:26 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:47:26 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 74d79d2f-2b97-4045-8347-03a19fb0811a does not exist
Jan 20 15:47:26 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 64775bf5-a81c-4826-9f3e-0aa1477fa631 does not exist
Jan 20 15:47:26 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 7035c7b4-7163-4f1c-93c8-5c5b38bf5003 does not exist
Jan 20 15:47:26 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 20 15:47:26 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 15:47:26 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 20 15:47:26 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 15:47:26 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 15:47:26 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:47:26 compute-0 sudo[405022]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:47:26 compute-0 sudo[405022]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:47:26 compute-0 sudo[405022]: pam_unix(sudo:session): session closed for user root
Jan 20 15:47:26 compute-0 sudo[405047]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:47:26 compute-0 sudo[405047]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:47:26 compute-0 sudo[405047]: pam_unix(sudo:session): session closed for user root
Jan 20 15:47:26 compute-0 sudo[405072]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:47:26 compute-0 sudo[405072]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:47:26 compute-0 sudo[405072]: pam_unix(sudo:session): session closed for user root
Jan 20 15:47:27 compute-0 sudo[405097]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 15:47:27 compute-0 sudo[405097]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:47:27 compute-0 nova_compute[250018]: 2026-01-20 15:47:27.355 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:47:27 compute-0 podman[405162]: 2026-01-20 15:47:27.39724247 +0000 UTC m=+0.037773944 container create 3245bc6922c787d9bed00505efac05b48677922d08cbf5559ad729a07062127a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_northcutt, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 20 15:47:27 compute-0 systemd[1]: Started libpod-conmon-3245bc6922c787d9bed00505efac05b48677922d08cbf5559ad729a07062127a.scope.
Jan 20 15:47:27 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:47:27 compute-0 podman[405162]: 2026-01-20 15:47:27.37948843 +0000 UTC m=+0.020019924 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:47:27 compute-0 podman[405162]: 2026-01-20 15:47:27.482074578 +0000 UTC m=+0.122606072 container init 3245bc6922c787d9bed00505efac05b48677922d08cbf5559ad729a07062127a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_northcutt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 15:47:27 compute-0 podman[405162]: 2026-01-20 15:47:27.48876452 +0000 UTC m=+0.129295994 container start 3245bc6922c787d9bed00505efac05b48677922d08cbf5559ad729a07062127a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_northcutt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 20 15:47:27 compute-0 podman[405162]: 2026-01-20 15:47:27.49248745 +0000 UTC m=+0.133018924 container attach 3245bc6922c787d9bed00505efac05b48677922d08cbf5559ad729a07062127a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_northcutt, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 20 15:47:27 compute-0 optimistic_northcutt[405178]: 167 167
Jan 20 15:47:27 compute-0 systemd[1]: libpod-3245bc6922c787d9bed00505efac05b48677922d08cbf5559ad729a07062127a.scope: Deactivated successfully.
Jan 20 15:47:27 compute-0 podman[405162]: 2026-01-20 15:47:27.496313484 +0000 UTC m=+0.136844958 container died 3245bc6922c787d9bed00505efac05b48677922d08cbf5559ad729a07062127a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_northcutt, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 20 15:47:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-3e42803acbe57484d317e95a95ace9d8fd73108d6a0b3db86b842a7f01f15be0-merged.mount: Deactivated successfully.
Jan 20 15:47:27 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3758: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:47:27 compute-0 podman[405162]: 2026-01-20 15:47:27.636028188 +0000 UTC m=+0.276559762 container remove 3245bc6922c787d9bed00505efac05b48677922d08cbf5559ad729a07062127a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_northcutt, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 20 15:47:27 compute-0 systemd[1]: libpod-conmon-3245bc6922c787d9bed00505efac05b48677922d08cbf5559ad729a07062127a.scope: Deactivated successfully.
Jan 20 15:47:27 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:47:27 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 15:47:27 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:47:27 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 15:47:27 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 15:47:27 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:47:27 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:47:27 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:47:27 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:47:27.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:47:27 compute-0 podman[405200]: 2026-01-20 15:47:27.823062504 +0000 UTC m=+0.040130078 container create 0c0e1a771319678a2812c6809b4e126ef3b8f986153e9df0d6e1ffbe77c3e3a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_bouman, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 20 15:47:27 compute-0 systemd[1]: Started libpod-conmon-0c0e1a771319678a2812c6809b4e126ef3b8f986153e9df0d6e1ffbe77c3e3a6.scope.
Jan 20 15:47:27 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:47:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64dac67fcc2ca384b1db50b6647758930997d11c3f431f4984c07eec7867038c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 15:47:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64dac67fcc2ca384b1db50b6647758930997d11c3f431f4984c07eec7867038c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 15:47:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64dac67fcc2ca384b1db50b6647758930997d11c3f431f4984c07eec7867038c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 15:47:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64dac67fcc2ca384b1db50b6647758930997d11c3f431f4984c07eec7867038c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 15:47:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64dac67fcc2ca384b1db50b6647758930997d11c3f431f4984c07eec7867038c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 15:47:27 compute-0 podman[405200]: 2026-01-20 15:47:27.901471678 +0000 UTC m=+0.118539272 container init 0c0e1a771319678a2812c6809b4e126ef3b8f986153e9df0d6e1ffbe77c3e3a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_bouman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 15:47:27 compute-0 podman[405200]: 2026-01-20 15:47:27.805606161 +0000 UTC m=+0.022673755 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:47:27 compute-0 podman[405200]: 2026-01-20 15:47:27.908867889 +0000 UTC m=+0.125935463 container start 0c0e1a771319678a2812c6809b4e126ef3b8f986153e9df0d6e1ffbe77c3e3a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_bouman, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 20 15:47:27 compute-0 podman[405200]: 2026-01-20 15:47:27.911760607 +0000 UTC m=+0.128828211 container attach 0c0e1a771319678a2812c6809b4e126ef3b8f986153e9df0d6e1ffbe77c3e3a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_bouman, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 20 15:47:28 compute-0 nova_compute[250018]: 2026-01-20 15:47:28.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:47:28 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:47:28 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:47:28 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:47:28.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:47:28 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:47:28 compute-0 nova_compute[250018]: 2026-01-20 15:47:28.604 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:47:28 compute-0 modest_bouman[405215]: --> passed data devices: 0 physical, 1 LVM
Jan 20 15:47:28 compute-0 modest_bouman[405215]: --> relative data size: 1.0
Jan 20 15:47:28 compute-0 modest_bouman[405215]: --> All data devices are unavailable
Jan 20 15:47:28 compute-0 ceph-mon[74360]: pgmap v3758: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:47:28 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/4258930174' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:47:28 compute-0 systemd[1]: libpod-0c0e1a771319678a2812c6809b4e126ef3b8f986153e9df0d6e1ffbe77c3e3a6.scope: Deactivated successfully.
Jan 20 15:47:28 compute-0 podman[405231]: 2026-01-20 15:47:28.788200566 +0000 UTC m=+0.026062477 container died 0c0e1a771319678a2812c6809b4e126ef3b8f986153e9df0d6e1ffbe77c3e3a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_bouman, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 15:47:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-64dac67fcc2ca384b1db50b6647758930997d11c3f431f4984c07eec7867038c-merged.mount: Deactivated successfully.
Jan 20 15:47:28 compute-0 podman[405231]: 2026-01-20 15:47:28.851027178 +0000 UTC m=+0.088889069 container remove 0c0e1a771319678a2812c6809b4e126ef3b8f986153e9df0d6e1ffbe77c3e3a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_bouman, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0)
Jan 20 15:47:28 compute-0 systemd[1]: libpod-conmon-0c0e1a771319678a2812c6809b4e126ef3b8f986153e9df0d6e1ffbe77c3e3a6.scope: Deactivated successfully.
Jan 20 15:47:28 compute-0 sudo[405097]: pam_unix(sudo:session): session closed for user root
Jan 20 15:47:28 compute-0 sudo[405246]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:47:28 compute-0 sudo[405246]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:47:28 compute-0 sudo[405246]: pam_unix(sudo:session): session closed for user root
Jan 20 15:47:29 compute-0 sudo[405271]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:47:29 compute-0 sudo[405271]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:47:29 compute-0 sudo[405271]: pam_unix(sudo:session): session closed for user root
Jan 20 15:47:29 compute-0 sudo[405296]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:47:29 compute-0 sudo[405296]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:47:29 compute-0 sudo[405296]: pam_unix(sudo:session): session closed for user root
Jan 20 15:47:29 compute-0 sudo[405321]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- lvm list --format json
Jan 20 15:47:29 compute-0 sudo[405321]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:47:29 compute-0 podman[405387]: 2026-01-20 15:47:29.445701114 +0000 UTC m=+0.037723932 container create dccc779876ea4adfe13a482fbc2ce44c21112131b90988c5974c65c26a03789e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_ganguly, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 20 15:47:29 compute-0 systemd[1]: Started libpod-conmon-dccc779876ea4adfe13a482fbc2ce44c21112131b90988c5974c65c26a03789e.scope.
Jan 20 15:47:29 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:47:29 compute-0 podman[405387]: 2026-01-20 15:47:29.522440893 +0000 UTC m=+0.114463731 container init dccc779876ea4adfe13a482fbc2ce44c21112131b90988c5974c65c26a03789e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_ganguly, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:47:29 compute-0 podman[405387]: 2026-01-20 15:47:29.428243602 +0000 UTC m=+0.020266460 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:47:29 compute-0 podman[405387]: 2026-01-20 15:47:29.53044103 +0000 UTC m=+0.122463848 container start dccc779876ea4adfe13a482fbc2ce44c21112131b90988c5974c65c26a03789e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_ganguly, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 15:47:29 compute-0 dreamy_ganguly[405403]: 167 167
Jan 20 15:47:29 compute-0 podman[405387]: 2026-01-20 15:47:29.534022957 +0000 UTC m=+0.126045775 container attach dccc779876ea4adfe13a482fbc2ce44c21112131b90988c5974c65c26a03789e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_ganguly, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 20 15:47:29 compute-0 systemd[1]: libpod-dccc779876ea4adfe13a482fbc2ce44c21112131b90988c5974c65c26a03789e.scope: Deactivated successfully.
Jan 20 15:47:29 compute-0 podman[405387]: 2026-01-20 15:47:29.534790878 +0000 UTC m=+0.126813686 container died dccc779876ea4adfe13a482fbc2ce44c21112131b90988c5974c65c26a03789e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_ganguly, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 15:47:29 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3759: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:47:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-631a720693da68dd5fc3f2a51be84e1d10f9e68f06323e713badec9a51643500-merged.mount: Deactivated successfully.
Jan 20 15:47:29 compute-0 podman[405387]: 2026-01-20 15:47:29.657697577 +0000 UTC m=+0.249720395 container remove dccc779876ea4adfe13a482fbc2ce44c21112131b90988c5974c65c26a03789e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_ganguly, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 20 15:47:29 compute-0 systemd[1]: libpod-conmon-dccc779876ea4adfe13a482fbc2ce44c21112131b90988c5974c65c26a03789e.scope: Deactivated successfully.
Jan 20 15:47:29 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:47:29 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:47:29 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:47:29.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:47:29 compute-0 podman[405427]: 2026-01-20 15:47:29.850287553 +0000 UTC m=+0.067466558 container create e11a929b451865b187b58973d865aa6c1e4fcbff89662e5d65276d1678ef391f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_bose, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 15:47:29 compute-0 systemd[1]: Started libpod-conmon-e11a929b451865b187b58973d865aa6c1e4fcbff89662e5d65276d1678ef391f.scope.
Jan 20 15:47:29 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:47:29 compute-0 podman[405427]: 2026-01-20 15:47:29.80586377 +0000 UTC m=+0.023042785 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:47:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e23a0d539393fe26a1b5f5d1d1078c166bba680873612b8cce68c3f65a1cd9ab/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 15:47:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e23a0d539393fe26a1b5f5d1d1078c166bba680873612b8cce68c3f65a1cd9ab/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 15:47:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e23a0d539393fe26a1b5f5d1d1078c166bba680873612b8cce68c3f65a1cd9ab/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 15:47:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e23a0d539393fe26a1b5f5d1d1078c166bba680873612b8cce68c3f65a1cd9ab/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 15:47:29 compute-0 ceph-mon[74360]: pgmap v3759: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:47:29 compute-0 podman[405427]: 2026-01-20 15:47:29.920342031 +0000 UTC m=+0.137521026 container init e11a929b451865b187b58973d865aa6c1e4fcbff89662e5d65276d1678ef391f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_bose, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 20 15:47:29 compute-0 podman[405427]: 2026-01-20 15:47:29.926501248 +0000 UTC m=+0.143680243 container start e11a929b451865b187b58973d865aa6c1e4fcbff89662e5d65276d1678ef391f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_bose, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 20 15:47:29 compute-0 podman[405427]: 2026-01-20 15:47:29.929194841 +0000 UTC m=+0.146373856 container attach e11a929b451865b187b58973d865aa6c1e4fcbff89662e5d65276d1678ef391f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_bose, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 20 15:47:30 compute-0 nova_compute[250018]: 2026-01-20 15:47:30.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:47:30 compute-0 nova_compute[250018]: 2026-01-20 15:47:30.051 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 15:47:30 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:47:30 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:47:30 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:47:30.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:47:30 compute-0 sad_bose[405443]: {
Jan 20 15:47:30 compute-0 sad_bose[405443]:     "0": [
Jan 20 15:47:30 compute-0 sad_bose[405443]:         {
Jan 20 15:47:30 compute-0 sad_bose[405443]:             "devices": [
Jan 20 15:47:30 compute-0 sad_bose[405443]:                 "/dev/loop3"
Jan 20 15:47:30 compute-0 sad_bose[405443]:             ],
Jan 20 15:47:30 compute-0 sad_bose[405443]:             "lv_name": "ceph_lv0",
Jan 20 15:47:30 compute-0 sad_bose[405443]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 15:47:30 compute-0 sad_bose[405443]:             "lv_size": "7511998464",
Jan 20 15:47:30 compute-0 sad_bose[405443]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e399cf45-e6b6-5393-99f1-75c601d3f188,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afc3740a-ccee-46f8-b343-22648cc89376,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 20 15:47:30 compute-0 sad_bose[405443]:             "lv_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 15:47:30 compute-0 sad_bose[405443]:             "name": "ceph_lv0",
Jan 20 15:47:30 compute-0 sad_bose[405443]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 15:47:30 compute-0 sad_bose[405443]:             "tags": {
Jan 20 15:47:30 compute-0 sad_bose[405443]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 15:47:30 compute-0 sad_bose[405443]:                 "ceph.block_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 15:47:30 compute-0 sad_bose[405443]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 15:47:30 compute-0 sad_bose[405443]:                 "ceph.cluster_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 15:47:30 compute-0 sad_bose[405443]:                 "ceph.cluster_name": "ceph",
Jan 20 15:47:30 compute-0 sad_bose[405443]:                 "ceph.crush_device_class": "",
Jan 20 15:47:30 compute-0 sad_bose[405443]:                 "ceph.encrypted": "0",
Jan 20 15:47:30 compute-0 sad_bose[405443]:                 "ceph.osd_fsid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 15:47:30 compute-0 sad_bose[405443]:                 "ceph.osd_id": "0",
Jan 20 15:47:30 compute-0 sad_bose[405443]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 15:47:30 compute-0 sad_bose[405443]:                 "ceph.type": "block",
Jan 20 15:47:30 compute-0 sad_bose[405443]:                 "ceph.vdo": "0"
Jan 20 15:47:30 compute-0 sad_bose[405443]:             },
Jan 20 15:47:30 compute-0 sad_bose[405443]:             "type": "block",
Jan 20 15:47:30 compute-0 sad_bose[405443]:             "vg_name": "ceph_vg0"
Jan 20 15:47:30 compute-0 sad_bose[405443]:         }
Jan 20 15:47:30 compute-0 sad_bose[405443]:     ]
Jan 20 15:47:30 compute-0 sad_bose[405443]: }
Jan 20 15:47:30 compute-0 systemd[1]: libpod-e11a929b451865b187b58973d865aa6c1e4fcbff89662e5d65276d1678ef391f.scope: Deactivated successfully.
Jan 20 15:47:30 compute-0 podman[405453]: 2026-01-20 15:47:30.746452076 +0000 UTC m=+0.028698437 container died e11a929b451865b187b58973d865aa6c1e4fcbff89662e5d65276d1678ef391f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_bose, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 15:47:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-e23a0d539393fe26a1b5f5d1d1078c166bba680873612b8cce68c3f65a1cd9ab-merged.mount: Deactivated successfully.
Jan 20 15:47:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:47:30.822 160071 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:47:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:47:30.823 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:47:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:47:30.823 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:47:30 compute-0 podman[405453]: 2026-01-20 15:47:30.885517164 +0000 UTC m=+0.167763455 container remove e11a929b451865b187b58973d865aa6c1e4fcbff89662e5d65276d1678ef391f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_bose, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 15:47:30 compute-0 systemd[1]: libpod-conmon-e11a929b451865b187b58973d865aa6c1e4fcbff89662e5d65276d1678ef391f.scope: Deactivated successfully.
Jan 20 15:47:30 compute-0 sudo[405321]: pam_unix(sudo:session): session closed for user root
Jan 20 15:47:30 compute-0 sudo[405468]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:47:30 compute-0 sudo[405468]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:47:30 compute-0 sudo[405468]: pam_unix(sudo:session): session closed for user root
Jan 20 15:47:31 compute-0 sudo[405493]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:47:31 compute-0 sudo[405493]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:47:31 compute-0 sudo[405493]: pam_unix(sudo:session): session closed for user root
Jan 20 15:47:31 compute-0 sudo[405518]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:47:31 compute-0 sudo[405518]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:47:31 compute-0 sudo[405518]: pam_unix(sudo:session): session closed for user root
Jan 20 15:47:31 compute-0 sudo[405543]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- raw list --format json
Jan 20 15:47:31 compute-0 sudo[405543]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:47:31 compute-0 podman[405605]: 2026-01-20 15:47:31.526835785 +0000 UTC m=+0.101401539 container create bd960392023ec9b65800b2298892c76fdc39856a6f479e2e1c1154e0592fca69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_kalam, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 20 15:47:31 compute-0 podman[405605]: 2026-01-20 15:47:31.447584597 +0000 UTC m=+0.022150381 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:47:31 compute-0 systemd[1]: Started libpod-conmon-bd960392023ec9b65800b2298892c76fdc39856a6f479e2e1c1154e0592fca69.scope.
Jan 20 15:47:31 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:47:31 compute-0 podman[405605]: 2026-01-20 15:47:31.588108463 +0000 UTC m=+0.162674227 container init bd960392023ec9b65800b2298892c76fdc39856a6f479e2e1c1154e0592fca69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_kalam, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 20 15:47:31 compute-0 podman[405605]: 2026-01-20 15:47:31.595044841 +0000 UTC m=+0.169610595 container start bd960392023ec9b65800b2298892c76fdc39856a6f479e2e1c1154e0592fca69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_kalam, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0)
Jan 20 15:47:31 compute-0 podman[405605]: 2026-01-20 15:47:31.599302537 +0000 UTC m=+0.173868291 container attach bd960392023ec9b65800b2298892c76fdc39856a6f479e2e1c1154e0592fca69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_kalam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 15:47:31 compute-0 objective_kalam[405621]: 167 167
Jan 20 15:47:31 compute-0 systemd[1]: libpod-bd960392023ec9b65800b2298892c76fdc39856a6f479e2e1c1154e0592fca69.scope: Deactivated successfully.
Jan 20 15:47:31 compute-0 podman[405605]: 2026-01-20 15:47:31.601119867 +0000 UTC m=+0.175685621 container died bd960392023ec9b65800b2298892c76fdc39856a6f479e2e1c1154e0592fca69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_kalam, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 20 15:47:31 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3760: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:47:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-39d818f9be9de393ceed7fd09007efcd11242637d274563e24cb2dbb10391742-merged.mount: Deactivated successfully.
Jan 20 15:47:31 compute-0 podman[405605]: 2026-01-20 15:47:31.636936786 +0000 UTC m=+0.211502540 container remove bd960392023ec9b65800b2298892c76fdc39856a6f479e2e1c1154e0592fca69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_kalam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 15:47:31 compute-0 systemd[1]: libpod-conmon-bd960392023ec9b65800b2298892c76fdc39856a6f479e2e1c1154e0592fca69.scope: Deactivated successfully.
Jan 20 15:47:31 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:47:31 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:47:31 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:47:31.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:47:31 compute-0 podman[405645]: 2026-01-20 15:47:31.779232931 +0000 UTC m=+0.036538811 container create 283d5ab8378ec2b143622f94cf08bed0dbd0e52242bf93278034c27e44c1fe26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_chaum, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 20 15:47:31 compute-0 systemd[1]: Started libpod-conmon-283d5ab8378ec2b143622f94cf08bed0dbd0e52242bf93278034c27e44c1fe26.scope.
Jan 20 15:47:31 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:47:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65434caaafc06d826576396ea410a0172c7e7591b99811b40eaddfb7cc668932/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 15:47:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65434caaafc06d826576396ea410a0172c7e7591b99811b40eaddfb7cc668932/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 15:47:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65434caaafc06d826576396ea410a0172c7e7591b99811b40eaddfb7cc668932/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 15:47:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65434caaafc06d826576396ea410a0172c7e7591b99811b40eaddfb7cc668932/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 15:47:31 compute-0 podman[405645]: 2026-01-20 15:47:31.8552781 +0000 UTC m=+0.112584040 container init 283d5ab8378ec2b143622f94cf08bed0dbd0e52242bf93278034c27e44c1fe26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_chaum, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 15:47:31 compute-0 podman[405645]: 2026-01-20 15:47:31.764180533 +0000 UTC m=+0.021486433 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:47:31 compute-0 podman[405645]: 2026-01-20 15:47:31.862149416 +0000 UTC m=+0.119455296 container start 283d5ab8378ec2b143622f94cf08bed0dbd0e52242bf93278034c27e44c1fe26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_chaum, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True)
Jan 20 15:47:31 compute-0 podman[405645]: 2026-01-20 15:47:31.865625801 +0000 UTC m=+0.122931741 container attach 283d5ab8378ec2b143622f94cf08bed0dbd0e52242bf93278034c27e44c1fe26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_chaum, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 15:47:31 compute-0 ceph-mgr[74653]: [devicehealth INFO root] Check health
Jan 20 15:47:32 compute-0 nova_compute[250018]: 2026-01-20 15:47:32.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:47:32 compute-0 nova_compute[250018]: 2026-01-20 15:47:32.394 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:47:32 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:47:32 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:47:32 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:47:32.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:47:32 compute-0 eager_chaum[405661]: {
Jan 20 15:47:32 compute-0 eager_chaum[405661]:     "afc3740a-ccee-46f8-b343-22648cc89376": {
Jan 20 15:47:32 compute-0 eager_chaum[405661]:         "ceph_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 15:47:32 compute-0 eager_chaum[405661]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 20 15:47:32 compute-0 eager_chaum[405661]:         "osd_id": 0,
Jan 20 15:47:32 compute-0 eager_chaum[405661]:         "osd_uuid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 15:47:32 compute-0 eager_chaum[405661]:         "type": "bluestore"
Jan 20 15:47:32 compute-0 eager_chaum[405661]:     }
Jan 20 15:47:32 compute-0 eager_chaum[405661]: }
Jan 20 15:47:32 compute-0 ceph-mon[74360]: pgmap v3760: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:47:32 compute-0 systemd[1]: libpod-283d5ab8378ec2b143622f94cf08bed0dbd0e52242bf93278034c27e44c1fe26.scope: Deactivated successfully.
Jan 20 15:47:32 compute-0 podman[405645]: 2026-01-20 15:47:32.69114319 +0000 UTC m=+0.948449070 container died 283d5ab8378ec2b143622f94cf08bed0dbd0e52242bf93278034c27e44c1fe26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_chaum, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 15:47:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-65434caaafc06d826576396ea410a0172c7e7591b99811b40eaddfb7cc668932-merged.mount: Deactivated successfully.
Jan 20 15:47:32 compute-0 podman[405645]: 2026-01-20 15:47:32.742485051 +0000 UTC m=+0.999790931 container remove 283d5ab8378ec2b143622f94cf08bed0dbd0e52242bf93278034c27e44c1fe26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_chaum, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:47:32 compute-0 systemd[1]: libpod-conmon-283d5ab8378ec2b143622f94cf08bed0dbd0e52242bf93278034c27e44c1fe26.scope: Deactivated successfully.
Jan 20 15:47:32 compute-0 sudo[405543]: pam_unix(sudo:session): session closed for user root
Jan 20 15:47:32 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 15:47:32 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:47:32 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 15:47:32 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:47:32 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev fff84426-d57e-472f-a0e1-39cb4649aa58 does not exist
Jan 20 15:47:32 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 3c8dad9c-fff9-44bd-a21f-6421b4a4490b does not exist
Jan 20 15:47:32 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 0ca9819a-34a8-4718-a3ba-bae77e3275ab does not exist
Jan 20 15:47:32 compute-0 sudo[405697]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:47:32 compute-0 sudo[405697]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:47:32 compute-0 sudo[405697]: pam_unix(sudo:session): session closed for user root
Jan 20 15:47:32 compute-0 sudo[405722]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 15:47:32 compute-0 sudo[405722]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:47:32 compute-0 sudo[405722]: pam_unix(sudo:session): session closed for user root
Jan 20 15:47:33 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:47:33 compute-0 nova_compute[250018]: 2026-01-20 15:47:33.606 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:47:33 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3761: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:47:33 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:47:33 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:47:33 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:47:33.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:47:33 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:47:33 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:47:33 compute-0 ceph-mon[74360]: pgmap v3761: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:47:34 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:47:34 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:47:34 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:47:34.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:47:34 compute-0 nova_compute[250018]: 2026-01-20 15:47:34.552 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:47:34 compute-0 nova_compute[250018]: 2026-01-20 15:47:34.553 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:47:34 compute-0 nova_compute[250018]: 2026-01-20 15:47:34.553 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:47:34 compute-0 nova_compute[250018]: 2026-01-20 15:47:34.553 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 15:47:34 compute-0 nova_compute[250018]: 2026-01-20 15:47:34.554 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:47:34 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:47:34 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/847583616' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:47:35 compute-0 nova_compute[250018]: 2026-01-20 15:47:35.015 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:47:35 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/847583616' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:47:35 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/3096368661' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:47:35 compute-0 nova_compute[250018]: 2026-01-20 15:47:35.187 250022 WARNING nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:47:35 compute-0 nova_compute[250018]: 2026-01-20 15:47:35.188 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4165MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 15:47:35 compute-0 nova_compute[250018]: 2026-01-20 15:47:35.189 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:47:35 compute-0 nova_compute[250018]: 2026-01-20 15:47:35.189 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:47:35 compute-0 sudo[405770]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:47:35 compute-0 sudo[405770]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:47:35 compute-0 sudo[405770]: pam_unix(sudo:session): session closed for user root
Jan 20 15:47:35 compute-0 sudo[405795]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:47:35 compute-0 sudo[405795]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:47:35 compute-0 sudo[405795]: pam_unix(sudo:session): session closed for user root
Jan 20 15:47:35 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3762: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:47:35 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:47:35 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:47:35 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:47:35.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:47:36 compute-0 ceph-mon[74360]: pgmap v3762: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:47:36 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:47:36 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:47:36 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:47:36.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:47:37 compute-0 nova_compute[250018]: 2026-01-20 15:47:37.400 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:47:37 compute-0 nova_compute[250018]: 2026-01-20 15:47:37.476 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 15:47:37 compute-0 nova_compute[250018]: 2026-01-20 15:47:37.477 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 15:47:37 compute-0 nova_compute[250018]: 2026-01-20 15:47:37.614 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:47:37 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3763: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:47:37 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:47:37 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:47:37 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:47:37.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:47:38 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:47:38 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1871560347' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:47:38 compute-0 nova_compute[250018]: 2026-01-20 15:47:38.093 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:47:38 compute-0 nova_compute[250018]: 2026-01-20 15:47:38.100 250022 DEBUG nova.compute.provider_tree [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:47:38 compute-0 nova_compute[250018]: 2026-01-20 15:47:38.157 250022 DEBUG nova.scheduler.client.report [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:47:38 compute-0 nova_compute[250018]: 2026-01-20 15:47:38.158 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 15:47:38 compute-0 nova_compute[250018]: 2026-01-20 15:47:38.158 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.969s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:47:38 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:47:38 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:47:38 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:47:38.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:47:38 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:47:38 compute-0 nova_compute[250018]: 2026-01-20 15:47:38.609 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:47:38 compute-0 ceph-mon[74360]: pgmap v3763: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:47:38 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/2258595412' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:47:38 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/1871560347' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:47:39 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3764: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:47:39 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:47:39 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:47:39 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:47:39.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:47:40 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:47:40 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:47:40 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:47:40.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:47:40 compute-0 ceph-mon[74360]: pgmap v3764: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:47:41 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3765: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:47:41 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:47:41 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:47:41 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:47:41.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:47:42 compute-0 nova_compute[250018]: 2026-01-20 15:47:42.161 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:47:42 compute-0 nova_compute[250018]: 2026-01-20 15:47:42.161 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:47:42 compute-0 nova_compute[250018]: 2026-01-20 15:47:42.403 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:47:42 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:47:42 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:47:42 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:47:42.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:47:42 compute-0 ceph-mon[74360]: pgmap v3765: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:47:43 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:47:43 compute-0 nova_compute[250018]: 2026-01-20 15:47:43.612 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:47:43 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3766: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:47:43 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:47:43 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:47:43 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:47:43.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:47:44 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:47:44 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:47:44 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:47:44.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:47:44 compute-0 ceph-mon[74360]: pgmap v3766: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:47:45 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3767: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:47:45 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:47:45 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:47:45 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:47:45.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:47:46 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:47:46 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:47:46 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:47:46.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:47:46 compute-0 ceph-mon[74360]: pgmap v3767: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:47:47 compute-0 nova_compute[250018]: 2026-01-20 15:47:47.045 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:47:47 compute-0 nova_compute[250018]: 2026-01-20 15:47:47.406 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:47:47 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3768: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:47:47 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:47:47 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:47:47 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:47:47.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:47:48 compute-0 nova_compute[250018]: 2026-01-20 15:47:48.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:47:48 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:47:48 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:47:48 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:47:48.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:47:48 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:47:48 compute-0 nova_compute[250018]: 2026-01-20 15:47:48.614 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:47:48 compute-0 ceph-mon[74360]: pgmap v3768: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:47:49 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3769: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:47:49 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:47:49 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:47:49 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:47:49.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:47:49 compute-0 ceph-mon[74360]: pgmap v3769: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:47:50 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:47:50 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:47:50 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:47:50.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:47:51 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3770: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:47:51 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:47:51 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:47:51 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:47:51.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:47:52 compute-0 nova_compute[250018]: 2026-01-20 15:47:52.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:47:52 compute-0 nova_compute[250018]: 2026-01-20 15:47:52.051 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 15:47:52 compute-0 nova_compute[250018]: 2026-01-20 15:47:52.051 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 15:47:52 compute-0 nova_compute[250018]: 2026-01-20 15:47:52.148 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 20 15:47:52 compute-0 nova_compute[250018]: 2026-01-20 15:47:52.449 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:47:52 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:47:52 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:47:52 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:47:52.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:47:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:47:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:47:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:47:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:47:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:47:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:47:52 compute-0 ceph-mon[74360]: pgmap v3770: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:47:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Optimize plan auto_2026-01-20_15:47:52
Jan 20 15:47:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 15:47:52 compute-0 ceph-mgr[74653]: [balancer INFO root] do_upmap
Jan 20 15:47:52 compute-0 ceph-mgr[74653]: [balancer INFO root] pools ['volumes', 'vms', 'default.rgw.control', 'default.rgw.log', 'cephfs.cephfs.data', '.rgw.root', 'backups', '.mgr', 'default.rgw.meta', 'images', 'cephfs.cephfs.meta']
Jan 20 15:47:52 compute-0 ceph-mgr[74653]: [balancer INFO root] prepared 0/10 changes
Jan 20 15:47:53 compute-0 sshd-session[405850]: Invalid user oracle from 134.122.57.138 port 51748
Jan 20 15:47:53 compute-0 sshd-session[405850]: Connection closed by invalid user oracle 134.122.57.138 port 51748 [preauth]
Jan 20 15:47:53 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:47:53 compute-0 nova_compute[250018]: 2026-01-20 15:47:53.616 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:47:53 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3771: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:47:53 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:47:53 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:47:53 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:47:53.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:47:54 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:47:54 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:47:54 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:47:54.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:47:54 compute-0 ceph-mon[74360]: pgmap v3771: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:47:55 compute-0 sudo[405854]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:47:55 compute-0 sudo[405854]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:47:55 compute-0 sudo[405854]: pam_unix(sudo:session): session closed for user root
Jan 20 15:47:55 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3772: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:47:55 compute-0 sudo[405886]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:47:55 compute-0 sudo[405886]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:47:55 compute-0 sudo[405886]: pam_unix(sudo:session): session closed for user root
Jan 20 15:47:55 compute-0 podman[405879]: 2026-01-20 15:47:55.680136509 +0000 UTC m=+0.067261103 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 20 15:47:55 compute-0 podman[405878]: 2026-01-20 15:47:55.751017309 +0000 UTC m=+0.146716595 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS)
Jan 20 15:47:55 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:47:55 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:47:55 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:47:55.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:47:56 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:47:56 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:47:56 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:47:56.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:47:56 compute-0 ceph-mon[74360]: pgmap v3772: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:47:57 compute-0 nova_compute[250018]: 2026-01-20 15:47:57.452 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:47:57 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3773: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:47:57 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:47:57 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:47:57 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:47:57.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:47:58 compute-0 ceph-mgr[74653]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 15:47:58 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 15:47:58 compute-0 ceph-mgr[74653]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 15:47:58 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 15:47:58 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 15:47:58 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 15:47:58 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 15:47:58 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 15:47:58 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 15:47:58 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 15:47:58 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:47:58 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:47:58 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:47:58.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:47:58 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:47:58 compute-0 nova_compute[250018]: 2026-01-20 15:47:58.618 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:47:58 compute-0 ceph-mon[74360]: pgmap v3773: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:47:59 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3774: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:47:59 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:47:59 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:47:59 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:47:59.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:47:59 compute-0 ceph-mon[74360]: pgmap v3774: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:48:00 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:48:00 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:48:00 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:48:00.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:48:01 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3775: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:48:01 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:48:01 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:48:01 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:48:01.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:48:02 compute-0 nova_compute[250018]: 2026-01-20 15:48:02.455 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:48:02 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:48:02 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:48:02 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:48:02.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:48:02 compute-0 ceph-mon[74360]: pgmap v3775: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:48:03 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:48:03 compute-0 nova_compute[250018]: 2026-01-20 15:48:03.620 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:48:03 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3776: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:48:03 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:48:03 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:48:03 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:48:03.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:48:04 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:48:04 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:48:04 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:48:04.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:48:04 compute-0 ceph-mon[74360]: pgmap v3776: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:48:05 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3777: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:48:05 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:48:05 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:48:05 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:48:05.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:48:06 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:48:06 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:48:06 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:48:06.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:48:06 compute-0 ceph-mon[74360]: pgmap v3777: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:48:07 compute-0 nova_compute[250018]: 2026-01-20 15:48:07.458 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:48:07 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3778: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:48:07 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:48:07 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:48:07 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:48:07.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:48:08 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:48:08 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:48:08 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:48:08.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:48:08 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:48:08 compute-0 nova_compute[250018]: 2026-01-20 15:48:08.623 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:48:08 compute-0 ceph-mon[74360]: pgmap v3778: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:48:09 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3779: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:48:09 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:48:09 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:48:09 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:48:09.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:48:09 compute-0 ceph-mon[74360]: pgmap v3779: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:48:10 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:48:10 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:48:10 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:48:10.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:48:11 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3780: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:48:11 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:48:11 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:48:11 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:48:11.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:48:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 15:48:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:48:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 20 15:48:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:48:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 20 15:48:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:48:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021614147124511445 of space, bias 1.0, pg target 0.6484244137353433 quantized to 32 (current 32)
Jan 20 15:48:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:48:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 20 15:48:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:48:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 20 15:48:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:48:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 20 15:48:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:48:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 15:48:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:48:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 20 15:48:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:48:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 20 15:48:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:48:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 15:48:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:48:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 20 15:48:12 compute-0 nova_compute[250018]: 2026-01-20 15:48:12.463 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:48:12 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:48:12 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:48:12 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:48:12.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:48:12 compute-0 ceph-mon[74360]: pgmap v3780: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:48:13 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:48:13 compute-0 nova_compute[250018]: 2026-01-20 15:48:13.625 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:48:13 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3781: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:48:13 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:48:13 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:48:13 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:48:13.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:48:14 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:48:14 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:48:14 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:48:14.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:48:14 compute-0 ceph-mon[74360]: pgmap v3781: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:48:14 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/576372207' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:48:14 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/576372207' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:48:15 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3782: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:48:15 compute-0 sudo[405961]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:48:15 compute-0 sudo[405961]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:48:15 compute-0 sudo[405961]: pam_unix(sudo:session): session closed for user root
Jan 20 15:48:15 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:48:15 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:48:15 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:48:15.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:48:15 compute-0 sudo[405986]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:48:15 compute-0 sudo[405986]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:48:15 compute-0 sudo[405986]: pam_unix(sudo:session): session closed for user root
Jan 20 15:48:16 compute-0 ceph-mon[74360]: pgmap v3782: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:48:16 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:48:16 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:48:16 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:48:16.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:48:17 compute-0 nova_compute[250018]: 2026-01-20 15:48:17.503 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:48:17 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3783: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:48:17 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:48:17 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:48:17 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:48:17.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:48:18 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:48:18 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:48:18 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:48:18.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:48:18 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:48:18 compute-0 nova_compute[250018]: 2026-01-20 15:48:18.632 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:48:18 compute-0 ceph-mon[74360]: pgmap v3783: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:48:19 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3784: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:48:19 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:48:19 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:48:19 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:48:19.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:48:20 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:48:20 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:48:20 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:48:20.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:48:20 compute-0 ceph-mon[74360]: pgmap v3784: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:48:21 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3785: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:48:21 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:48:21 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:48:21 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:48:21.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:48:22 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:48:22 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:48:22 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:48:22.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:48:22 compute-0 nova_compute[250018]: 2026-01-20 15:48:22.534 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:48:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:48:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:48:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:48:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:48:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:48:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:48:22 compute-0 ceph-mon[74360]: pgmap v3785: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:48:23 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:48:23 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3786: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:48:23 compute-0 nova_compute[250018]: 2026-01-20 15:48:23.683 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:48:23 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:48:23 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:48:23 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:48:23.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:48:24 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:48:24 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:48:24 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:48:24.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:48:24 compute-0 ceph-mon[74360]: pgmap v3786: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:48:25 compute-0 nova_compute[250018]: 2026-01-20 15:48:25.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:48:25 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3787: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:48:25 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:48:25 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:48:25 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:48:25.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:48:26 compute-0 podman[406018]: 2026-01-20 15:48:26.461134776 +0000 UTC m=+0.049730758 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202)
Jan 20 15:48:26 compute-0 podman[406017]: 2026-01-20 15:48:26.502372783 +0000 UTC m=+0.092204718 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:48:26 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:48:26 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:48:26 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:48:26.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:48:26 compute-0 ceph-mon[74360]: pgmap v3787: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:48:27 compute-0 nova_compute[250018]: 2026-01-20 15:48:27.536 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:48:27 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3788: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:48:27 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:48:27 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:48:27 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:48:27.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:48:27 compute-0 ceph-mon[74360]: pgmap v3788: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:48:28 compute-0 nova_compute[250018]: 2026-01-20 15:48:28.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:48:28 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:48:28 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:48:28 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:48:28.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:48:28 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:48:28 compute-0 nova_compute[250018]: 2026-01-20 15:48:28.685 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:48:28 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/4229310605' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:48:29 compute-0 nova_compute[250018]: 2026-01-20 15:48:29.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:48:29 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3789: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:48:29 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:48:29 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:48:29 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:48:29.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:48:29 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/281312359' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:48:29 compute-0 ceph-mon[74360]: pgmap v3789: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:48:30 compute-0 nova_compute[250018]: 2026-01-20 15:48:30.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:48:30 compute-0 nova_compute[250018]: 2026-01-20 15:48:30.051 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 15:48:30 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:48:30 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:48:30 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:48:30.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:48:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:48:30.823 160071 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:48:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:48:30.823 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:48:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:48:30.824 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:48:31 compute-0 sshd-session[406060]: Invalid user oracle from 134.122.57.138 port 58224
Jan 20 15:48:31 compute-0 sshd-session[406060]: Connection closed by invalid user oracle 134.122.57.138 port 58224 [preauth]
Jan 20 15:48:31 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3790: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:48:31 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:48:31 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:48:31 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:48:31.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:48:32 compute-0 nova_compute[250018]: 2026-01-20 15:48:32.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:48:32 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:48:32 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:48:32 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:48:32.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:48:32 compute-0 nova_compute[250018]: 2026-01-20 15:48:32.576 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:48:32 compute-0 ceph-mon[74360]: pgmap v3790: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:48:33 compute-0 sudo[406064]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:48:33 compute-0 sudo[406064]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:48:33 compute-0 sudo[406064]: pam_unix(sudo:session): session closed for user root
Jan 20 15:48:33 compute-0 sudo[406089]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:48:33 compute-0 sudo[406089]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:48:33 compute-0 sudo[406089]: pam_unix(sudo:session): session closed for user root
Jan 20 15:48:33 compute-0 sudo[406114]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:48:33 compute-0 sudo[406114]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:48:33 compute-0 sudo[406114]: pam_unix(sudo:session): session closed for user root
Jan 20 15:48:33 compute-0 sudo[406139]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 15:48:33 compute-0 sudo[406139]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:48:33 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:48:33 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3791: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:48:33 compute-0 nova_compute[250018]: 2026-01-20 15:48:33.728 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:48:33 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:48:33 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:48:33 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:48:33.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:48:34 compute-0 sudo[406139]: pam_unix(sudo:session): session closed for user root
Jan 20 15:48:34 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 15:48:34 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:48:34 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 20 15:48:34 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 15:48:34 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 20 15:48:34 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:48:34 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 88902d50-8d0b-4e6b-b1f4-57c10dede4c2 does not exist
Jan 20 15:48:34 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 873fda0d-c54e-409e-917b-32d28649172e does not exist
Jan 20 15:48:34 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 91320dc8-88a8-4f29-8044-b8e4ada23913 does not exist
Jan 20 15:48:34 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 20 15:48:34 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 15:48:34 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 20 15:48:34 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 15:48:34 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 15:48:34 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:48:34 compute-0 sudo[406195]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:48:34 compute-0 sudo[406195]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:48:34 compute-0 sudo[406195]: pam_unix(sudo:session): session closed for user root
Jan 20 15:48:34 compute-0 sudo[406220]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:48:34 compute-0 sudo[406220]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:48:34 compute-0 sudo[406220]: pam_unix(sudo:session): session closed for user root
Jan 20 15:48:34 compute-0 sudo[406245]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:48:34 compute-0 sudo[406245]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:48:34 compute-0 sudo[406245]: pam_unix(sudo:session): session closed for user root
Jan 20 15:48:34 compute-0 sudo[406270]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 15:48:34 compute-0 sudo[406270]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:48:34 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:48:34 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:48:34 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:48:34.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:48:34 compute-0 podman[406336]: 2026-01-20 15:48:34.697514446 +0000 UTC m=+0.038367531 container create b8a85f4ea355ff3d7a54eb613ad6372bb2d4d33606fe6932ad307f2da1a254ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_goldstine, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 15:48:34 compute-0 ceph-mon[74360]: pgmap v3791: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:48:34 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:48:34 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 15:48:34 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:48:34 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 15:48:34 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 15:48:34 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:48:34 compute-0 systemd[1]: Started libpod-conmon-b8a85f4ea355ff3d7a54eb613ad6372bb2d4d33606fe6932ad307f2da1a254ff.scope.
Jan 20 15:48:34 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:48:34 compute-0 podman[406336]: 2026-01-20 15:48:34.679278442 +0000 UTC m=+0.020131517 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:48:34 compute-0 podman[406336]: 2026-01-20 15:48:34.784256136 +0000 UTC m=+0.125109261 container init b8a85f4ea355ff3d7a54eb613ad6372bb2d4d33606fe6932ad307f2da1a254ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_goldstine, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 20 15:48:34 compute-0 podman[406336]: 2026-01-20 15:48:34.797402151 +0000 UTC m=+0.138255196 container start b8a85f4ea355ff3d7a54eb613ad6372bb2d4d33606fe6932ad307f2da1a254ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_goldstine, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 15:48:34 compute-0 podman[406336]: 2026-01-20 15:48:34.80065204 +0000 UTC m=+0.141505085 container attach b8a85f4ea355ff3d7a54eb613ad6372bb2d4d33606fe6932ad307f2da1a254ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_goldstine, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 20 15:48:34 compute-0 keen_goldstine[406352]: 167 167
Jan 20 15:48:34 compute-0 systemd[1]: libpod-b8a85f4ea355ff3d7a54eb613ad6372bb2d4d33606fe6932ad307f2da1a254ff.scope: Deactivated successfully.
Jan 20 15:48:34 compute-0 podman[406336]: 2026-01-20 15:48:34.803244219 +0000 UTC m=+0.144097304 container died b8a85f4ea355ff3d7a54eb613ad6372bb2d4d33606fe6932ad307f2da1a254ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_goldstine, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 20 15:48:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-b9fd035d75ec4727f69006b0b196afafecf52c7a2af5eb932e4c855be2bcacc8-merged.mount: Deactivated successfully.
Jan 20 15:48:34 compute-0 podman[406336]: 2026-01-20 15:48:34.846018328 +0000 UTC m=+0.186871393 container remove b8a85f4ea355ff3d7a54eb613ad6372bb2d4d33606fe6932ad307f2da1a254ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_goldstine, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 15:48:34 compute-0 systemd[1]: libpod-conmon-b8a85f4ea355ff3d7a54eb613ad6372bb2d4d33606fe6932ad307f2da1a254ff.scope: Deactivated successfully.
Jan 20 15:48:35 compute-0 podman[406375]: 2026-01-20 15:48:35.035664385 +0000 UTC m=+0.052315348 container create 69f060da27526cfb5de2f9149e44baa7f56f1c567009fe1d7bcc3e96ddc692dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_rhodes, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 20 15:48:35 compute-0 podman[406375]: 2026-01-20 15:48:35.014536282 +0000 UTC m=+0.031187285 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:48:35 compute-0 systemd[1]: Started libpod-conmon-69f060da27526cfb5de2f9149e44baa7f56f1c567009fe1d7bcc3e96ddc692dd.scope.
Jan 20 15:48:35 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:48:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3ffffa8c6b05e39d410642bdda46df842c960415021b4cc85a2402c2862c3a6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 15:48:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3ffffa8c6b05e39d410642bdda46df842c960415021b4cc85a2402c2862c3a6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 15:48:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3ffffa8c6b05e39d410642bdda46df842c960415021b4cc85a2402c2862c3a6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 15:48:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3ffffa8c6b05e39d410642bdda46df842c960415021b4cc85a2402c2862c3a6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 15:48:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3ffffa8c6b05e39d410642bdda46df842c960415021b4cc85a2402c2862c3a6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 15:48:35 compute-0 podman[406375]: 2026-01-20 15:48:35.175510193 +0000 UTC m=+0.192161166 container init 69f060da27526cfb5de2f9149e44baa7f56f1c567009fe1d7bcc3e96ddc692dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_rhodes, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 20 15:48:35 compute-0 podman[406375]: 2026-01-20 15:48:35.186697486 +0000 UTC m=+0.203348459 container start 69f060da27526cfb5de2f9149e44baa7f56f1c567009fe1d7bcc3e96ddc692dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_rhodes, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 20 15:48:35 compute-0 podman[406375]: 2026-01-20 15:48:35.189971115 +0000 UTC m=+0.206622098 container attach 69f060da27526cfb5de2f9149e44baa7f56f1c567009fe1d7bcc3e96ddc692dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_rhodes, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 20 15:48:35 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3792: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:48:35 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:48:35 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:48:35 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:48:35.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:48:35 compute-0 sudo[406396]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:48:35 compute-0 sudo[406396]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:48:35 compute-0 sudo[406396]: pam_unix(sudo:session): session closed for user root
Jan 20 15:48:35 compute-0 sudo[406427]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:48:36 compute-0 sudo[406427]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:48:36 compute-0 sudo[406427]: pam_unix(sudo:session): session closed for user root
Jan 20 15:48:36 compute-0 hungry_rhodes[406391]: --> passed data devices: 0 physical, 1 LVM
Jan 20 15:48:36 compute-0 hungry_rhodes[406391]: --> relative data size: 1.0
Jan 20 15:48:36 compute-0 hungry_rhodes[406391]: --> All data devices are unavailable
Jan 20 15:48:36 compute-0 podman[406375]: 2026-01-20 15:48:36.057855832 +0000 UTC m=+1.074506795 container died 69f060da27526cfb5de2f9149e44baa7f56f1c567009fe1d7bcc3e96ddc692dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_rhodes, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 15:48:36 compute-0 systemd[1]: libpod-69f060da27526cfb5de2f9149e44baa7f56f1c567009fe1d7bcc3e96ddc692dd.scope: Deactivated successfully.
Jan 20 15:48:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-e3ffffa8c6b05e39d410642bdda46df842c960415021b4cc85a2402c2862c3a6-merged.mount: Deactivated successfully.
Jan 20 15:48:36 compute-0 podman[406375]: 2026-01-20 15:48:36.120351235 +0000 UTC m=+1.137002208 container remove 69f060da27526cfb5de2f9149e44baa7f56f1c567009fe1d7bcc3e96ddc692dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_rhodes, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 15:48:36 compute-0 systemd[1]: libpod-conmon-69f060da27526cfb5de2f9149e44baa7f56f1c567009fe1d7bcc3e96ddc692dd.scope: Deactivated successfully.
Jan 20 15:48:36 compute-0 sudo[406270]: pam_unix(sudo:session): session closed for user root
Jan 20 15:48:36 compute-0 sudo[406471]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:48:36 compute-0 sudo[406471]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:48:36 compute-0 sudo[406471]: pam_unix(sudo:session): session closed for user root
Jan 20 15:48:36 compute-0 sudo[406496]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:48:36 compute-0 sudo[406496]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:48:36 compute-0 sudo[406496]: pam_unix(sudo:session): session closed for user root
Jan 20 15:48:36 compute-0 sudo[406521]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:48:36 compute-0 sudo[406521]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:48:36 compute-0 sudo[406521]: pam_unix(sudo:session): session closed for user root
Jan 20 15:48:36 compute-0 sudo[406546]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- lvm list --format json
Jan 20 15:48:36 compute-0 sudo[406546]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:48:36 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:48:36 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:48:36 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:48:36.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:48:36 compute-0 nova_compute[250018]: 2026-01-20 15:48:36.588 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:48:36 compute-0 nova_compute[250018]: 2026-01-20 15:48:36.589 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:48:36 compute-0 nova_compute[250018]: 2026-01-20 15:48:36.589 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:48:36 compute-0 nova_compute[250018]: 2026-01-20 15:48:36.590 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 15:48:36 compute-0 nova_compute[250018]: 2026-01-20 15:48:36.590 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:48:36 compute-0 podman[406613]: 2026-01-20 15:48:36.731007486 +0000 UTC m=+0.035317658 container create 08a302df47cfd3713cd0dafcb2c21e9034c653c8fe75848eb82e2fae218d1594 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_jennings, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 20 15:48:36 compute-0 systemd[1]: Started libpod-conmon-08a302df47cfd3713cd0dafcb2c21e9034c653c8fe75848eb82e2fae218d1594.scope.
Jan 20 15:48:36 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:48:36 compute-0 ceph-mon[74360]: pgmap v3792: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:48:36 compute-0 podman[406613]: 2026-01-20 15:48:36.811213809 +0000 UTC m=+0.115523981 container init 08a302df47cfd3713cd0dafcb2c21e9034c653c8fe75848eb82e2fae218d1594 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_jennings, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 15:48:36 compute-0 podman[406613]: 2026-01-20 15:48:36.716585905 +0000 UTC m=+0.020896097 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:48:36 compute-0 podman[406613]: 2026-01-20 15:48:36.817606821 +0000 UTC m=+0.121916993 container start 08a302df47cfd3713cd0dafcb2c21e9034c653c8fe75848eb82e2fae218d1594 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_jennings, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Jan 20 15:48:36 compute-0 podman[406613]: 2026-01-20 15:48:36.820973662 +0000 UTC m=+0.125283864 container attach 08a302df47cfd3713cd0dafcb2c21e9034c653c8fe75848eb82e2fae218d1594 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_jennings, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 20 15:48:36 compute-0 relaxed_jennings[406648]: 167 167
Jan 20 15:48:36 compute-0 systemd[1]: libpod-08a302df47cfd3713cd0dafcb2c21e9034c653c8fe75848eb82e2fae218d1594.scope: Deactivated successfully.
Jan 20 15:48:36 compute-0 podman[406613]: 2026-01-20 15:48:36.823346147 +0000 UTC m=+0.127656319 container died 08a302df47cfd3713cd0dafcb2c21e9034c653c8fe75848eb82e2fae218d1594 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_jennings, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 15:48:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-39881abe49d2bb3b9064950092bdf9e799ab7fb54178ba52457fbdb8df0742b2-merged.mount: Deactivated successfully.
Jan 20 15:48:36 compute-0 podman[406613]: 2026-01-20 15:48:36.856215068 +0000 UTC m=+0.160525240 container remove 08a302df47cfd3713cd0dafcb2c21e9034c653c8fe75848eb82e2fae218d1594 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_jennings, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 20 15:48:36 compute-0 systemd[1]: libpod-conmon-08a302df47cfd3713cd0dafcb2c21e9034c653c8fe75848eb82e2fae218d1594.scope: Deactivated successfully.
Jan 20 15:48:37 compute-0 podman[406672]: 2026-01-20 15:48:37.01398271 +0000 UTC m=+0.042365998 container create 580856915c77b288c71e6482606eb3e2e61d65e77ae8c4b80283ba1cfd0a6702 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_galois, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 15:48:37 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:48:37 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3518712833' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:48:37 compute-0 systemd[1]: Started libpod-conmon-580856915c77b288c71e6482606eb3e2e61d65e77ae8c4b80283ba1cfd0a6702.scope.
Jan 20 15:48:37 compute-0 nova_compute[250018]: 2026-01-20 15:48:37.056 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:48:37 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:48:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/085fde871590ad27c20aed0061add973b1d284488e06663c7f1e3e4f36510bd9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 15:48:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/085fde871590ad27c20aed0061add973b1d284488e06663c7f1e3e4f36510bd9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 15:48:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/085fde871590ad27c20aed0061add973b1d284488e06663c7f1e3e4f36510bd9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 15:48:37 compute-0 podman[406672]: 2026-01-20 15:48:36.995674594 +0000 UTC m=+0.024057882 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:48:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/085fde871590ad27c20aed0061add973b1d284488e06663c7f1e3e4f36510bd9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 15:48:37 compute-0 podman[406672]: 2026-01-20 15:48:37.10588427 +0000 UTC m=+0.134267608 container init 580856915c77b288c71e6482606eb3e2e61d65e77ae8c4b80283ba1cfd0a6702 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_galois, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 20 15:48:37 compute-0 podman[406672]: 2026-01-20 15:48:37.113262329 +0000 UTC m=+0.141645617 container start 580856915c77b288c71e6482606eb3e2e61d65e77ae8c4b80283ba1cfd0a6702 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_galois, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 15:48:37 compute-0 podman[406672]: 2026-01-20 15:48:37.117780621 +0000 UTC m=+0.146164039 container attach 580856915c77b288c71e6482606eb3e2e61d65e77ae8c4b80283ba1cfd0a6702 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_galois, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 20 15:48:37 compute-0 nova_compute[250018]: 2026-01-20 15:48:37.221 250022 WARNING nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:48:37 compute-0 nova_compute[250018]: 2026-01-20 15:48:37.222 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4135MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 15:48:37 compute-0 nova_compute[250018]: 2026-01-20 15:48:37.222 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:48:37 compute-0 nova_compute[250018]: 2026-01-20 15:48:37.223 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:48:37 compute-0 nova_compute[250018]: 2026-01-20 15:48:37.578 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:48:37 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3793: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:48:37 compute-0 nova_compute[250018]: 2026-01-20 15:48:37.744 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 15:48:37 compute-0 nova_compute[250018]: 2026-01-20 15:48:37.745 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 15:48:37 compute-0 nova_compute[250018]: 2026-01-20 15:48:37.781 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:48:37 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3518712833' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:48:37 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/334353735' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:48:37 compute-0 ceph-mon[74360]: pgmap v3793: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:48:37 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:48:37 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:48:37 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:48:37.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:48:37 compute-0 elegant_galois[406691]: {
Jan 20 15:48:37 compute-0 elegant_galois[406691]:     "0": [
Jan 20 15:48:37 compute-0 elegant_galois[406691]:         {
Jan 20 15:48:37 compute-0 elegant_galois[406691]:             "devices": [
Jan 20 15:48:37 compute-0 elegant_galois[406691]:                 "/dev/loop3"
Jan 20 15:48:37 compute-0 elegant_galois[406691]:             ],
Jan 20 15:48:37 compute-0 elegant_galois[406691]:             "lv_name": "ceph_lv0",
Jan 20 15:48:37 compute-0 elegant_galois[406691]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 15:48:37 compute-0 elegant_galois[406691]:             "lv_size": "7511998464",
Jan 20 15:48:37 compute-0 elegant_galois[406691]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e399cf45-e6b6-5393-99f1-75c601d3f188,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afc3740a-ccee-46f8-b343-22648cc89376,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 20 15:48:37 compute-0 elegant_galois[406691]:             "lv_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 15:48:37 compute-0 elegant_galois[406691]:             "name": "ceph_lv0",
Jan 20 15:48:37 compute-0 elegant_galois[406691]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 15:48:37 compute-0 elegant_galois[406691]:             "tags": {
Jan 20 15:48:37 compute-0 elegant_galois[406691]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 15:48:37 compute-0 elegant_galois[406691]:                 "ceph.block_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 15:48:37 compute-0 elegant_galois[406691]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 15:48:37 compute-0 elegant_galois[406691]:                 "ceph.cluster_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 15:48:37 compute-0 elegant_galois[406691]:                 "ceph.cluster_name": "ceph",
Jan 20 15:48:37 compute-0 elegant_galois[406691]:                 "ceph.crush_device_class": "",
Jan 20 15:48:37 compute-0 elegant_galois[406691]:                 "ceph.encrypted": "0",
Jan 20 15:48:37 compute-0 elegant_galois[406691]:                 "ceph.osd_fsid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 15:48:37 compute-0 elegant_galois[406691]:                 "ceph.osd_id": "0",
Jan 20 15:48:37 compute-0 elegant_galois[406691]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 15:48:37 compute-0 elegant_galois[406691]:                 "ceph.type": "block",
Jan 20 15:48:37 compute-0 elegant_galois[406691]:                 "ceph.vdo": "0"
Jan 20 15:48:37 compute-0 elegant_galois[406691]:             },
Jan 20 15:48:37 compute-0 elegant_galois[406691]:             "type": "block",
Jan 20 15:48:37 compute-0 elegant_galois[406691]:             "vg_name": "ceph_vg0"
Jan 20 15:48:37 compute-0 elegant_galois[406691]:         }
Jan 20 15:48:37 compute-0 elegant_galois[406691]:     ]
Jan 20 15:48:37 compute-0 elegant_galois[406691]: }
Jan 20 15:48:37 compute-0 systemd[1]: libpod-580856915c77b288c71e6482606eb3e2e61d65e77ae8c4b80283ba1cfd0a6702.scope: Deactivated successfully.
Jan 20 15:48:37 compute-0 podman[406701]: 2026-01-20 15:48:37.937502465 +0000 UTC m=+0.022443409 container died 580856915c77b288c71e6482606eb3e2e61d65e77ae8c4b80283ba1cfd0a6702 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_galois, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 15:48:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-085fde871590ad27c20aed0061add973b1d284488e06663c7f1e3e4f36510bd9-merged.mount: Deactivated successfully.
Jan 20 15:48:37 compute-0 podman[406701]: 2026-01-20 15:48:37.985527255 +0000 UTC m=+0.070468189 container remove 580856915c77b288c71e6482606eb3e2e61d65e77ae8c4b80283ba1cfd0a6702 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_galois, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 20 15:48:37 compute-0 systemd[1]: libpod-conmon-580856915c77b288c71e6482606eb3e2e61d65e77ae8c4b80283ba1cfd0a6702.scope: Deactivated successfully.
Jan 20 15:48:38 compute-0 sudo[406546]: pam_unix(sudo:session): session closed for user root
Jan 20 15:48:38 compute-0 sudo[406735]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:48:38 compute-0 sudo[406735]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:48:38 compute-0 sudo[406735]: pam_unix(sudo:session): session closed for user root
Jan 20 15:48:38 compute-0 sudo[406760]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:48:38 compute-0 sudo[406760]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:48:38 compute-0 sudo[406760]: pam_unix(sudo:session): session closed for user root
Jan 20 15:48:38 compute-0 sudo[406785]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:48:38 compute-0 sudo[406785]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:48:38 compute-0 sudo[406785]: pam_unix(sudo:session): session closed for user root
Jan 20 15:48:38 compute-0 sudo[406810]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- raw list --format json
Jan 20 15:48:38 compute-0 sudo[406810]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:48:38 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:48:38 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/467179759' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:48:38 compute-0 nova_compute[250018]: 2026-01-20 15:48:38.258 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:48:38 compute-0 nova_compute[250018]: 2026-01-20 15:48:38.265 250022 DEBUG nova.compute.provider_tree [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:48:38 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:48:38 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:48:38 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:48:38.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:48:38 compute-0 podman[406880]: 2026-01-20 15:48:38.548080063 +0000 UTC m=+0.038191075 container create e38b9adee7322290f87d04b91ccf443ef85cf3af557b50bdd92a900372accb94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_varahamihira, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 20 15:48:38 compute-0 systemd[1]: Started libpod-conmon-e38b9adee7322290f87d04b91ccf443ef85cf3af557b50bdd92a900372accb94.scope.
Jan 20 15:48:38 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:48:38 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:48:38 compute-0 podman[406880]: 2026-01-20 15:48:38.6199685 +0000 UTC m=+0.110079542 container init e38b9adee7322290f87d04b91ccf443ef85cf3af557b50bdd92a900372accb94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_varahamihira, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 20 15:48:38 compute-0 podman[406880]: 2026-01-20 15:48:38.626580459 +0000 UTC m=+0.116691471 container start e38b9adee7322290f87d04b91ccf443ef85cf3af557b50bdd92a900372accb94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_varahamihira, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 20 15:48:38 compute-0 podman[406880]: 2026-01-20 15:48:38.531038102 +0000 UTC m=+0.021149134 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:48:38 compute-0 podman[406880]: 2026-01-20 15:48:38.630068854 +0000 UTC m=+0.120179876 container attach e38b9adee7322290f87d04b91ccf443ef85cf3af557b50bdd92a900372accb94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_varahamihira, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 15:48:38 compute-0 magical_varahamihira[406896]: 167 167
Jan 20 15:48:38 compute-0 systemd[1]: libpod-e38b9adee7322290f87d04b91ccf443ef85cf3af557b50bdd92a900372accb94.scope: Deactivated successfully.
Jan 20 15:48:38 compute-0 podman[406880]: 2026-01-20 15:48:38.632717575 +0000 UTC m=+0.122828597 container died e38b9adee7322290f87d04b91ccf443ef85cf3af557b50bdd92a900372accb94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_varahamihira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef)
Jan 20 15:48:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-e6705dc2bf149ffc0d42aca35f35e66a5e8b94a868107517badbc9108fe7094a-merged.mount: Deactivated successfully.
Jan 20 15:48:38 compute-0 podman[406880]: 2026-01-20 15:48:38.670492658 +0000 UTC m=+0.160603660 container remove e38b9adee7322290f87d04b91ccf443ef85cf3af557b50bdd92a900372accb94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_varahamihira, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 20 15:48:38 compute-0 systemd[1]: libpod-conmon-e38b9adee7322290f87d04b91ccf443ef85cf3af557b50bdd92a900372accb94.scope: Deactivated successfully.
Jan 20 15:48:38 compute-0 nova_compute[250018]: 2026-01-20 15:48:38.779 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:48:38 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/467179759' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:48:38 compute-0 podman[406919]: 2026-01-20 15:48:38.867076514 +0000 UTC m=+0.038203386 container create 2a88821b4802a1a2b3126234e86d695fd549f09d7a8b3f436867007d85e40ce1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_grothendieck, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 20 15:48:38 compute-0 systemd[1]: Started libpod-conmon-2a88821b4802a1a2b3126234e86d695fd549f09d7a8b3f436867007d85e40ce1.scope.
Jan 20 15:48:38 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:48:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98d12362fab143d63223eed6e1109cf200a5b132def8d4893ef69ade57d7633d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 15:48:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98d12362fab143d63223eed6e1109cf200a5b132def8d4893ef69ade57d7633d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 15:48:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98d12362fab143d63223eed6e1109cf200a5b132def8d4893ef69ade57d7633d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 15:48:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98d12362fab143d63223eed6e1109cf200a5b132def8d4893ef69ade57d7633d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 15:48:38 compute-0 podman[406919]: 2026-01-20 15:48:38.850804603 +0000 UTC m=+0.021931505 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:48:38 compute-0 podman[406919]: 2026-01-20 15:48:38.986287883 +0000 UTC m=+0.157414785 container init 2a88821b4802a1a2b3126234e86d695fd549f09d7a8b3f436867007d85e40ce1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_grothendieck, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 20 15:48:39 compute-0 podman[406919]: 2026-01-20 15:48:39.000793405 +0000 UTC m=+0.171920347 container start 2a88821b4802a1a2b3126234e86d695fd549f09d7a8b3f436867007d85e40ce1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_grothendieck, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 20 15:48:39 compute-0 podman[406919]: 2026-01-20 15:48:39.004693441 +0000 UTC m=+0.175820353 container attach 2a88821b4802a1a2b3126234e86d695fd549f09d7a8b3f436867007d85e40ce1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_grothendieck, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 15:48:39 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3794: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:48:39 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:48:39 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:48:39 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:48:39.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:48:39 compute-0 suspicious_grothendieck[406936]: {
Jan 20 15:48:39 compute-0 suspicious_grothendieck[406936]:     "afc3740a-ccee-46f8-b343-22648cc89376": {
Jan 20 15:48:39 compute-0 suspicious_grothendieck[406936]:         "ceph_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 15:48:39 compute-0 suspicious_grothendieck[406936]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 20 15:48:39 compute-0 suspicious_grothendieck[406936]:         "osd_id": 0,
Jan 20 15:48:39 compute-0 suspicious_grothendieck[406936]:         "osd_uuid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 15:48:39 compute-0 suspicious_grothendieck[406936]:         "type": "bluestore"
Jan 20 15:48:39 compute-0 suspicious_grothendieck[406936]:     }
Jan 20 15:48:39 compute-0 suspicious_grothendieck[406936]: }
Jan 20 15:48:39 compute-0 ceph-mon[74360]: pgmap v3794: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:48:39 compute-0 systemd[1]: libpod-2a88821b4802a1a2b3126234e86d695fd549f09d7a8b3f436867007d85e40ce1.scope: Deactivated successfully.
Jan 20 15:48:39 compute-0 conmon[406936]: conmon 2a88821b4802a1a2b312 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2a88821b4802a1a2b3126234e86d695fd549f09d7a8b3f436867007d85e40ce1.scope/container/memory.events
Jan 20 15:48:39 compute-0 podman[406919]: 2026-01-20 15:48:39.853230334 +0000 UTC m=+1.024357216 container died 2a88821b4802a1a2b3126234e86d695fd549f09d7a8b3f436867007d85e40ce1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_grothendieck, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 20 15:48:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-98d12362fab143d63223eed6e1109cf200a5b132def8d4893ef69ade57d7633d-merged.mount: Deactivated successfully.
Jan 20 15:48:39 compute-0 podman[406919]: 2026-01-20 15:48:39.901832561 +0000 UTC m=+1.072959443 container remove 2a88821b4802a1a2b3126234e86d695fd549f09d7a8b3f436867007d85e40ce1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_grothendieck, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 20 15:48:39 compute-0 systemd[1]: libpod-conmon-2a88821b4802a1a2b3126234e86d695fd549f09d7a8b3f436867007d85e40ce1.scope: Deactivated successfully.
Jan 20 15:48:39 compute-0 sudo[406810]: pam_unix(sudo:session): session closed for user root
Jan 20 15:48:39 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 15:48:39 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:48:39 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 15:48:39 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:48:39 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 9c67f0c8-9a86-405c-9c04-b7d3f7de8c11 does not exist
Jan 20 15:48:39 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 76ec2149-3810-47d3-90ac-8a166cd9fb10 does not exist
Jan 20 15:48:39 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 6208f994-8558-4f8e-920e-d3d1442646bb does not exist
Jan 20 15:48:39 compute-0 nova_compute[250018]: 2026-01-20 15:48:39.985 250022 DEBUG nova.scheduler.client.report [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:48:39 compute-0 nova_compute[250018]: 2026-01-20 15:48:39.987 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 15:48:39 compute-0 nova_compute[250018]: 2026-01-20 15:48:39.988 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.765s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:48:40 compute-0 sudo[406967]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:48:40 compute-0 sudo[406967]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:48:40 compute-0 sudo[406967]: pam_unix(sudo:session): session closed for user root
Jan 20 15:48:40 compute-0 sudo[406992]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 15:48:40 compute-0 sudo[406992]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:48:40 compute-0 sudo[406992]: pam_unix(sudo:session): session closed for user root
Jan 20 15:48:40 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:48:40 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:48:40 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:48:40.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:48:40 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:48:40 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:48:40 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/3934550702' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:48:41 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3795: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:48:41 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:48:41 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:48:41 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:48:41.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:48:41 compute-0 ceph-mon[74360]: pgmap v3795: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:48:42 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:48:42 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:48:42 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:48:42.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:48:42 compute-0 nova_compute[250018]: 2026-01-20 15:48:42.582 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:48:43 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:48:43 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3796: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:48:43 compute-0 nova_compute[250018]: 2026-01-20 15:48:43.780 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:48:43 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:48:43 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:48:43 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:48:43.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:48:44 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:48:44 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:48:44 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:48:44.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:48:44 compute-0 ceph-mon[74360]: pgmap v3796: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:48:45 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3797: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:48:45 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:48:45 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:48:45 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:48:45.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:48:45 compute-0 nova_compute[250018]: 2026-01-20 15:48:45.988 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:48:45 compute-0 nova_compute[250018]: 2026-01-20 15:48:45.989 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:48:46 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:48:46 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:48:46 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:48:46.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:48:46 compute-0 ceph-mon[74360]: pgmap v3797: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:48:47 compute-0 nova_compute[250018]: 2026-01-20 15:48:47.585 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:48:47 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3798: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:48:47 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:48:47 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:48:47 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:48:47.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:48:48 compute-0 nova_compute[250018]: 2026-01-20 15:48:48.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:48:48 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:48:48 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:48:48 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:48:48.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:48:48 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:48:48 compute-0 ceph-mon[74360]: pgmap v3798: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:48:48 compute-0 nova_compute[250018]: 2026-01-20 15:48:48.781 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:48:49 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3799: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:48:49 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:48:49 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:48:49 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:48:49.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:48:50 compute-0 radosgw[93153]: INFO: RGWReshardLock::lock found lock on reshard.0000000000 to be held by another RGW process; skipping for now
Jan 20 15:48:50 compute-0 radosgw[93153]: INFO: RGWReshardLock::lock found lock on reshard.0000000002 to be held by another RGW process; skipping for now
Jan 20 15:48:50 compute-0 radosgw[93153]: INFO: RGWReshardLock::lock found lock on reshard.0000000004 to be held by another RGW process; skipping for now
Jan 20 15:48:50 compute-0 radosgw[93153]: INFO: RGWReshardLock::lock found lock on reshard.0000000006 to be held by another RGW process; skipping for now
Jan 20 15:48:50 compute-0 radosgw[93153]: INFO: RGWReshardLock::lock found lock on reshard.0000000008 to be held by another RGW process; skipping for now
Jan 20 15:48:50 compute-0 radosgw[93153]: INFO: RGWReshardLock::lock found lock on reshard.0000000010 to be held by another RGW process; skipping for now
Jan 20 15:48:50 compute-0 radosgw[93153]: INFO: RGWReshardLock::lock found lock on reshard.0000000012 to be held by another RGW process; skipping for now
Jan 20 15:48:50 compute-0 radosgw[93153]: INFO: RGWReshardLock::lock found lock on reshard.0000000014 to be held by another RGW process; skipping for now
Jan 20 15:48:50 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:48:50 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:48:50 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:48:50.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:48:50 compute-0 ceph-mon[74360]: pgmap v3799: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:48:51 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3800: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.5 KiB/s rd, 0 B/s wr, 2 op/s
Jan 20 15:48:51 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:48:51 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:48:51 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:48:51.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:48:52 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:48:52 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:48:52 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:48:52.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:48:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:48:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:48:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:48:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:48:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:48:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:48:52 compute-0 nova_compute[250018]: 2026-01-20 15:48:52.625 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:48:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Optimize plan auto_2026-01-20_15:48:52
Jan 20 15:48:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 15:48:52 compute-0 ceph-mgr[74653]: [balancer INFO root] do_upmap
Jan 20 15:48:52 compute-0 ceph-mgr[74653]: [balancer INFO root] pools ['default.rgw.log', 'images', 'backups', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.control', 'default.rgw.meta', '.mgr', 'volumes', '.rgw.root', 'vms']
Jan 20 15:48:52 compute-0 ceph-mgr[74653]: [balancer INFO root] prepared 0/10 changes
Jan 20 15:48:52 compute-0 ceph-mon[74360]: pgmap v3800: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.5 KiB/s rd, 0 B/s wr, 2 op/s
Jan 20 15:48:53 compute-0 nova_compute[250018]: 2026-01-20 15:48:53.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:48:53 compute-0 nova_compute[250018]: 2026-01-20 15:48:53.051 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 15:48:53 compute-0 nova_compute[250018]: 2026-01-20 15:48:53.052 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 15:48:53 compute-0 nova_compute[250018]: 2026-01-20 15:48:53.076 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 20 15:48:53 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:48:53 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3801: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.5 KiB/s rd, 0 B/s wr, 2 op/s
Jan 20 15:48:53 compute-0 nova_compute[250018]: 2026-01-20 15:48:53.785 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:48:53 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:48:53 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:48:53 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:48:53.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:48:54 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:48:54 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:48:54 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:48:54.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:48:54 compute-0 ceph-mon[74360]: pgmap v3801: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.5 KiB/s rd, 0 B/s wr, 2 op/s
Jan 20 15:48:55 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3802: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 73 KiB/s rd, 0 B/s wr, 121 op/s
Jan 20 15:48:55 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:48:55 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:48:55 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:48:55.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:48:56 compute-0 sudo[407025]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:48:56 compute-0 sudo[407025]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:48:56 compute-0 sudo[407025]: pam_unix(sudo:session): session closed for user root
Jan 20 15:48:56 compute-0 sudo[407050]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:48:56 compute-0 sudo[407050]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:48:56 compute-0 sudo[407050]: pam_unix(sudo:session): session closed for user root
Jan 20 15:48:56 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:48:56 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:48:56 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:48:56.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:48:56 compute-0 ceph-mon[74360]: pgmap v3802: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 73 KiB/s rd, 0 B/s wr, 121 op/s
Jan 20 15:48:57 compute-0 podman[407077]: 2026-01-20 15:48:57.506452459 +0000 UTC m=+0.089575827 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 20 15:48:57 compute-0 podman[407076]: 2026-01-20 15:48:57.569317592 +0000 UTC m=+0.153125748 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, 
tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 20 15:48:57 compute-0 nova_compute[250018]: 2026-01-20 15:48:57.627 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:48:57 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3803: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 103 KiB/s rd, 0 B/s wr, 171 op/s
Jan 20 15:48:57 compute-0 ceph-mon[74360]: pgmap v3803: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 103 KiB/s rd, 0 B/s wr, 171 op/s
Jan 20 15:48:57 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:48:57 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:48:57 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:48:57.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:48:58 compute-0 ceph-mgr[74653]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 15:48:58 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 15:48:58 compute-0 ceph-mgr[74653]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 15:48:58 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 15:48:58 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 15:48:58 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 15:48:58 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 15:48:58 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 15:48:58 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 15:48:58 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 15:48:58 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:48:58 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:48:58 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:48:58.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:48:58 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:48:58 compute-0 nova_compute[250018]: 2026-01-20 15:48:58.787 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:48:59 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3804: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 103 KiB/s rd, 0 B/s wr, 171 op/s
Jan 20 15:48:59 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:48:59 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:48:59 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:48:59.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:49:00 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:49:00 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:49:00 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:49:00.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:49:00 compute-0 ceph-mon[74360]: pgmap v3804: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 103 KiB/s rd, 0 B/s wr, 171 op/s
Jan 20 15:49:01 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3805: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 103 KiB/s rd, 0 B/s wr, 171 op/s
Jan 20 15:49:01 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:49:01 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:49:01 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:49:01.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:49:02 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:49:02 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:49:02 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:49:02.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:49:02 compute-0 nova_compute[250018]: 2026-01-20 15:49:02.674 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:49:02 compute-0 ceph-mon[74360]: pgmap v3805: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 103 KiB/s rd, 0 B/s wr, 171 op/s
Jan 20 15:49:03 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:49:03 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3806: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 101 KiB/s rd, 0 B/s wr, 169 op/s
Jan 20 15:49:03 compute-0 nova_compute[250018]: 2026-01-20 15:49:03.789 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:49:03 compute-0 ceph-mon[74360]: pgmap v3806: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 101 KiB/s rd, 0 B/s wr, 169 op/s
Jan 20 15:49:03 compute-0 ceph-mon[74360]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #195. Immutable memtables: 0.
Jan 20 15:49:03 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:49:03.835929) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 15:49:03 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:856] [default] [JOB 121] Flushing memtable with next log file: 195
Jan 20 15:49:03 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768924143835996, "job": 121, "event": "flush_started", "num_memtables": 1, "num_entries": 1223, "num_deletes": 251, "total_data_size": 2016394, "memory_usage": 2042064, "flush_reason": "Manual Compaction"}
Jan 20 15:49:03 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:885] [default] [JOB 121] Level-0 flush table #196: started
Jan 20 15:49:03 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:49:03 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:49:03 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:49:03.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:49:03 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768924143854287, "cf_name": "default", "job": 121, "event": "table_file_creation", "file_number": 196, "file_size": 1982070, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 83939, "largest_seqno": 85161, "table_properties": {"data_size": 1976282, "index_size": 3118, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1605, "raw_key_size": 12420, "raw_average_key_size": 19, "raw_value_size": 1964668, "raw_average_value_size": 3153, "num_data_blocks": 139, "num_entries": 623, "num_filter_entries": 623, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768924029, "oldest_key_time": 1768924029, "file_creation_time": 1768924143, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1688fb6f-f397-4aac-a0e8-874ff91d97a7", "db_session_id": "5V3N2TVXYZBCXP55EZHK", "orig_file_number": 196, "seqno_to_time_mapping": "N/A"}}
Jan 20 15:49:03 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 121] Flush lasted 18419 microseconds, and 5768 cpu microseconds.
Jan 20 15:49:03 compute-0 ceph-mon[74360]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 15:49:03 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:49:03.854347) [db/flush_job.cc:967] [default] [JOB 121] Level-0 flush table #196: 1982070 bytes OK
Jan 20 15:49:03 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:49:03.854373) [db/memtable_list.cc:519] [default] Level-0 commit table #196 started
Jan 20 15:49:03 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:49:03.856212) [db/memtable_list.cc:722] [default] Level-0 commit table #196: memtable #1 done
Jan 20 15:49:03 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:49:03.856233) EVENT_LOG_v1 {"time_micros": 1768924143856226, "job": 121, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 15:49:03 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:49:03.856258) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 15:49:03 compute-0 ceph-mon[74360]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 121] Try to delete WAL files size 2011002, prev total WAL file size 2011002, number of live WAL files 2.
Jan 20 15:49:03 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000192.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:49:03 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:49:03.857265) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730038303332' seq:72057594037927935, type:22 .. '7061786F730038323834' seq:0, type:0; will stop at (end)
Jan 20 15:49:03 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 122] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 15:49:03 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 121 Base level 0, inputs: [196(1935KB)], [194(11MB)]
Jan 20 15:49:03 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768924143857337, "job": 122, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [196], "files_L6": [194], "score": -1, "input_data_size": 14355353, "oldest_snapshot_seqno": -1}
Jan 20 15:49:03 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 122] Generated table #197: 10947 keys, 12391177 bytes, temperature: kUnknown
Jan 20 15:49:03 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768924143960625, "cf_name": "default", "job": 122, "event": "table_file_creation", "file_number": 197, "file_size": 12391177, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12323252, "index_size": 39460, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 27397, "raw_key_size": 290266, "raw_average_key_size": 26, "raw_value_size": 12134342, "raw_average_value_size": 1108, "num_data_blocks": 1490, "num_entries": 10947, "num_filter_entries": 10947, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768917305, "oldest_key_time": 0, "file_creation_time": 1768924143, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1688fb6f-f397-4aac-a0e8-874ff91d97a7", "db_session_id": "5V3N2TVXYZBCXP55EZHK", "orig_file_number": 197, "seqno_to_time_mapping": "N/A"}}
Jan 20 15:49:03 compute-0 ceph-mon[74360]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 15:49:03 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:49:03.960933) [db/compaction/compaction_job.cc:1663] [default] [JOB 122] Compacted 1@0 + 1@6 files to L6 => 12391177 bytes
Jan 20 15:49:03 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:49:03.962280) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 138.9 rd, 119.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.9, 11.8 +0.0 blob) out(11.8 +0.0 blob), read-write-amplify(13.5) write-amplify(6.3) OK, records in: 11462, records dropped: 515 output_compression: NoCompression
Jan 20 15:49:03 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:49:03.962300) EVENT_LOG_v1 {"time_micros": 1768924143962290, "job": 122, "event": "compaction_finished", "compaction_time_micros": 103365, "compaction_time_cpu_micros": 55461, "output_level": 6, "num_output_files": 1, "total_output_size": 12391177, "num_input_records": 11462, "num_output_records": 10947, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 15:49:03 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000196.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:49:03 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768924143962896, "job": 122, "event": "table_file_deletion", "file_number": 196}
Jan 20 15:49:03 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000194.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:49:03 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768924143965981, "job": 122, "event": "table_file_deletion", "file_number": 194}
Jan 20 15:49:03 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:49:03.857155) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:49:03 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:49:03.966028) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:49:03 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:49:03.966033) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:49:03 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:49:03.966035) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:49:03 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:49:03.966036) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:49:03 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:49:03.966038) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:49:04 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:49:04 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:49:04 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:49:04.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:49:05 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3807: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 101 KiB/s rd, 0 B/s wr, 169 op/s
Jan 20 15:49:05 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:49:05 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:49:05 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:49:05.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:49:06 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:49:06 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:49:06 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:49:06.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:49:06 compute-0 ceph-mon[74360]: pgmap v3807: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 101 KiB/s rd, 0 B/s wr, 169 op/s
Jan 20 15:49:07 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3808: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 30 KiB/s rd, 0 B/s wr, 49 op/s
Jan 20 15:49:07 compute-0 nova_compute[250018]: 2026-01-20 15:49:07.678 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:49:07 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:49:07 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:49:07 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:49:07.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:49:08 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:49:08 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:49:08 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:49:08.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:49:08 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:49:08 compute-0 ceph-mon[74360]: pgmap v3808: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 30 KiB/s rd, 0 B/s wr, 49 op/s
Jan 20 15:49:08 compute-0 nova_compute[250018]: 2026-01-20 15:49:08.833 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:49:09 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3809: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 767 B/s rd, 0 op/s
Jan 20 15:49:09 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:49:09 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:49:09 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:49:09.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:49:10 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:49:10 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:49:10 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:49:10.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:49:10 compute-0 ceph-mon[74360]: pgmap v3809: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 767 B/s rd, 0 op/s
Jan 20 15:49:10 compute-0 sshd-session[407126]: Invalid user oracle from 134.122.57.138 port 42564
Jan 20 15:49:11 compute-0 sshd-session[407126]: Connection closed by invalid user oracle 134.122.57.138 port 42564 [preauth]
Jan 20 15:49:11 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3810: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 767 B/s rd, 0 op/s
Jan 20 15:49:11 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:49:11 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:49:11 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:49:11.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:49:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 15:49:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:49:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 20 15:49:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:49:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 20 15:49:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:49:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021614147124511445 of space, bias 1.0, pg target 0.6484244137353433 quantized to 32 (current 32)
Jan 20 15:49:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:49:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 20 15:49:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:49:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 20 15:49:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:49:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 20 15:49:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:49:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 15:49:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:49:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 20 15:49:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:49:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 20 15:49:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:49:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 15:49:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:49:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 20 15:49:12 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:49:12 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:49:12 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:49:12.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:49:12 compute-0 nova_compute[250018]: 2026-01-20 15:49:12.730 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:49:12 compute-0 ceph-mon[74360]: pgmap v3810: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 767 B/s rd, 0 op/s
Jan 20 15:49:13 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:49:13 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3811: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 767 B/s rd, 0 op/s
Jan 20 15:49:13 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:49:13 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:49:13 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:49:13.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:49:13 compute-0 nova_compute[250018]: 2026-01-20 15:49:13.882 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:49:14 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 20 15:49:14 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1297599588' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:49:14 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 20 15:49:14 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1297599588' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:49:14 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:49:14 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:49:14 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:49:14.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:49:14 compute-0 ceph-mon[74360]: pgmap v3811: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 767 B/s rd, 0 op/s
Jan 20 15:49:14 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/1297599588' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:49:14 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/1297599588' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:49:15 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3812: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 767 B/s rd, 0 op/s
Jan 20 15:49:15 compute-0 ceph-mon[74360]: pgmap v3812: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 767 B/s rd, 0 op/s
Jan 20 15:49:15 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:49:15 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:49:15 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:49:15.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:49:16 compute-0 sudo[407131]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:49:16 compute-0 sudo[407131]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:49:16 compute-0 sudo[407131]: pam_unix(sudo:session): session closed for user root
Jan 20 15:49:16 compute-0 sudo[407156]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:49:16 compute-0 sudo[407156]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:49:16 compute-0 sudo[407156]: pam_unix(sudo:session): session closed for user root
Jan 20 15:49:16 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:49:16 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:49:16 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:49:16.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:49:17 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3813: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 767 B/s rd, 0 op/s
Jan 20 15:49:17 compute-0 nova_compute[250018]: 2026-01-20 15:49:17.766 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:49:17 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:49:17 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:49:17 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:49:17.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:49:18 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:49:18 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:49:18 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:49:18.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:49:18 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:49:18 compute-0 ceph-mon[74360]: pgmap v3813: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 767 B/s rd, 0 op/s
Jan 20 15:49:18 compute-0 nova_compute[250018]: 2026-01-20 15:49:18.912 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:49:19 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3814: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 767 B/s rd, 0 op/s
Jan 20 15:49:19 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:49:19 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:49:19 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:49:19.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:49:20 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:49:20 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:49:20 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:49:20.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:49:20 compute-0 ceph-mon[74360]: pgmap v3814: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 767 B/s rd, 0 op/s
Jan 20 15:49:21 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3815: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:49:21 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:49:21 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:49:21 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:49:21.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:49:22 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:49:22 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:49:22 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:49:22.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:49:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:49:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:49:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:49:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:49:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:49:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:49:22 compute-0 nova_compute[250018]: 2026-01-20 15:49:22.807 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:49:22 compute-0 nova_compute[250018]: 2026-01-20 15:49:22.941 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:49:23 compute-0 ceph-mon[74360]: pgmap v3815: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:49:23 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:49:23 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3816: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:49:23 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:49:23 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:49:23 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:49:23.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:49:23 compute-0 nova_compute[250018]: 2026-01-20 15:49:23.951 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:49:24 compute-0 ceph-mon[74360]: pgmap v3816: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:49:24 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:49:24 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:49:24 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:49:24.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:49:25 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3817: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:49:25 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:49:25 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:49:25 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:49:25.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:49:26 compute-0 nova_compute[250018]: 2026-01-20 15:49:26.191 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:49:26 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:49:26 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:49:26 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:49:26.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:49:26 compute-0 ceph-mon[74360]: pgmap v3817: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:49:27 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3818: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:49:27 compute-0 nova_compute[250018]: 2026-01-20 15:49:27.811 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:49:27 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:49:27 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:49:27 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:49:27.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:49:28 compute-0 nova_compute[250018]: 2026-01-20 15:49:28.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:49:28 compute-0 podman[407190]: 2026-01-20 15:49:28.469267246 +0000 UTC m=+0.053081999 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent)
Jan 20 15:49:28 compute-0 podman[407189]: 2026-01-20 15:49:28.523478154 +0000 UTC m=+0.109543637 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, 
tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller)
Jan 20 15:49:28 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:49:28 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:49:28 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:49:28.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:49:28 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:49:28 compute-0 ceph-mon[74360]: pgmap v3818: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:49:28 compute-0 nova_compute[250018]: 2026-01-20 15:49:28.952 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:49:29 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3819: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:49:29 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:49:29 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:49:29 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:49:29.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:49:30 compute-0 nova_compute[250018]: 2026-01-20 15:49:30.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:49:30 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:49:30 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:49:30 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:49:30.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:49:30 compute-0 ceph-mon[74360]: pgmap v3819: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:49:30 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/2906927452' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:49:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:49:30.823 160071 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:49:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:49:30.824 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:49:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:49:30.824 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:49:31 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3820: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:49:31 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/3255063098' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:49:31 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:49:31 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:49:31 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:49:31.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:49:32 compute-0 nova_compute[250018]: 2026-01-20 15:49:32.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:49:32 compute-0 nova_compute[250018]: 2026-01-20 15:49:32.050 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 15:49:32 compute-0 nova_compute[250018]: 2026-01-20 15:49:32.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:49:32 compute-0 nova_compute[250018]: 2026-01-20 15:49:32.083 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:49:32 compute-0 nova_compute[250018]: 2026-01-20 15:49:32.084 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:49:32 compute-0 nova_compute[250018]: 2026-01-20 15:49:32.084 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:49:32 compute-0 nova_compute[250018]: 2026-01-20 15:49:32.084 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 15:49:32 compute-0 nova_compute[250018]: 2026-01-20 15:49:32.085 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:49:32 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:49:32 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1041901981' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:49:32 compute-0 nova_compute[250018]: 2026-01-20 15:49:32.554 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:49:32 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:49:32 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:49:32 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:49:32.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:49:32 compute-0 nova_compute[250018]: 2026-01-20 15:49:32.788 250022 WARNING nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:49:32 compute-0 nova_compute[250018]: 2026-01-20 15:49:32.790 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4207MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 15:49:32 compute-0 nova_compute[250018]: 2026-01-20 15:49:32.790 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:49:32 compute-0 nova_compute[250018]: 2026-01-20 15:49:32.791 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:49:32 compute-0 ceph-mon[74360]: pgmap v3820: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:49:32 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/1041901981' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:49:32 compute-0 nova_compute[250018]: 2026-01-20 15:49:32.849 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:49:33 compute-0 nova_compute[250018]: 2026-01-20 15:49:33.085 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 15:49:33 compute-0 nova_compute[250018]: 2026-01-20 15:49:33.085 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 15:49:33 compute-0 nova_compute[250018]: 2026-01-20 15:49:33.114 250022 DEBUG nova.scheduler.client.report [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Refreshing inventories for resource provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 20 15:49:33 compute-0 nova_compute[250018]: 2026-01-20 15:49:33.163 250022 DEBUG nova.scheduler.client.report [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Updating ProviderTree inventory for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 20 15:49:33 compute-0 nova_compute[250018]: 2026-01-20 15:49:33.163 250022 DEBUG nova.compute.provider_tree [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Updating inventory in ProviderTree for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 20 15:49:33 compute-0 nova_compute[250018]: 2026-01-20 15:49:33.203 250022 DEBUG nova.scheduler.client.report [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Refreshing aggregate associations for resource provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 20 15:49:33 compute-0 nova_compute[250018]: 2026-01-20 15:49:33.225 250022 DEBUG nova.scheduler.client.report [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Refreshing trait associations for resource provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed, traits: COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_TRUSTED_CERTS,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NODE,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_RESCUE_BFV,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SSE2,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VOLUME_EXTEND,COMPUTE_ACCELERATORS,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_SATA,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE41,HW_CPU_X86_MMX,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_IDE,COMPUTE_DEVICE_TAGGING _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 20 15:49:33 compute-0 nova_compute[250018]: 2026-01-20 15:49:33.249 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:49:33 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:49:33 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3821: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:49:33 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:49:33 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2183781135' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:49:33 compute-0 nova_compute[250018]: 2026-01-20 15:49:33.717 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:49:33 compute-0 nova_compute[250018]: 2026-01-20 15:49:33.726 250022 DEBUG nova.compute.provider_tree [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:49:33 compute-0 nova_compute[250018]: 2026-01-20 15:49:33.750 250022 DEBUG nova.scheduler.client.report [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:49:33 compute-0 nova_compute[250018]: 2026-01-20 15:49:33.753 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 15:49:33 compute-0 nova_compute[250018]: 2026-01-20 15:49:33.753 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.963s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:49:33 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/2183781135' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:49:33 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:49:33 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:49:33 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:49:33.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:49:33 compute-0 nova_compute[250018]: 2026-01-20 15:49:33.991 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:49:34 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:49:34 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:49:34 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:49:34.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:49:34 compute-0 ceph-mon[74360]: pgmap v3821: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:49:35 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3822: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:49:35 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:49:35 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:49:35 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:49:35.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:49:36 compute-0 sudo[407284]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:49:36 compute-0 sudo[407284]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:49:36 compute-0 sudo[407284]: pam_unix(sudo:session): session closed for user root
Jan 20 15:49:36 compute-0 sudo[407310]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:49:36 compute-0 sudo[407310]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:49:36 compute-0 sudo[407310]: pam_unix(sudo:session): session closed for user root
Jan 20 15:49:36 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:49:36 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:49:36 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:49:36.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:49:37 compute-0 ceph-mon[74360]: pgmap v3822: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:49:37 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3823: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:49:37 compute-0 nova_compute[250018]: 2026-01-20 15:49:37.880 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:49:37 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:49:37 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:49:37 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:49:37.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:49:38 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/4206473289' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:49:38 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:49:38 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:49:38 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:49:38.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:49:38 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:49:39 compute-0 nova_compute[250018]: 2026-01-20 15:49:39.051 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:49:39 compute-0 ceph-mon[74360]: pgmap v3823: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:49:39 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/2377546314' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:49:39 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3824: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:49:39 compute-0 nova_compute[250018]: 2026-01-20 15:49:39.755 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:49:39 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:49:39 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:49:39 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:49:39.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:49:40 compute-0 sudo[407336]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:49:40 compute-0 sudo[407336]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:49:40 compute-0 sudo[407336]: pam_unix(sudo:session): session closed for user root
Jan 20 15:49:40 compute-0 sudo[407362]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:49:40 compute-0 sudo[407362]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:49:40 compute-0 sudo[407362]: pam_unix(sudo:session): session closed for user root
Jan 20 15:49:40 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:49:40 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:49:40 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:49:40.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:49:40 compute-0 sudo[407387]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:49:40 compute-0 sudo[407387]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:49:40 compute-0 sudo[407387]: pam_unix(sudo:session): session closed for user root
Jan 20 15:49:40 compute-0 sudo[407412]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Jan 20 15:49:40 compute-0 sudo[407412]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:49:41 compute-0 nova_compute[250018]: 2026-01-20 15:49:41.046 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:49:41 compute-0 ceph-mon[74360]: pgmap v3824: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:49:41 compute-0 sudo[407412]: pam_unix(sudo:session): session closed for user root
Jan 20 15:49:41 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 15:49:41 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:49:41 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 15:49:41 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:49:41 compute-0 sudo[407458]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:49:41 compute-0 sudo[407458]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:49:41 compute-0 sudo[407458]: pam_unix(sudo:session): session closed for user root
Jan 20 15:49:41 compute-0 sudo[407483]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:49:41 compute-0 sudo[407483]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:49:41 compute-0 sudo[407483]: pam_unix(sudo:session): session closed for user root
Jan 20 15:49:41 compute-0 sudo[407508]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:49:41 compute-0 sudo[407508]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:49:41 compute-0 sudo[407508]: pam_unix(sudo:session): session closed for user root
Jan 20 15:49:41 compute-0 sudo[407533]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 15:49:41 compute-0 sudo[407533]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:49:41 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3825: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:49:41 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:49:41 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:49:41 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:49:41.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:49:41 compute-0 sudo[407533]: pam_unix(sudo:session): session closed for user root
Jan 20 15:49:41 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 15:49:41 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:49:41 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 20 15:49:41 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 15:49:41 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 20 15:49:41 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:49:41 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 2f785df9-04cc-4681-8ae8-d3f3bf7c3942 does not exist
Jan 20 15:49:41 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 8be10e5a-5f69-4ef7-9a25-83ec24da6628 does not exist
Jan 20 15:49:41 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 99f92b5d-a282-4723-8d2b-ef34b309bbe7 does not exist
Jan 20 15:49:41 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 20 15:49:41 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 15:49:41 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 20 15:49:41 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 15:49:41 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 15:49:41 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:49:42 compute-0 sudo[407589]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:49:42 compute-0 sudo[407589]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:49:42 compute-0 sudo[407589]: pam_unix(sudo:session): session closed for user root
Jan 20 15:49:42 compute-0 sudo[407614]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:49:42 compute-0 sudo[407614]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:49:42 compute-0 sudo[407614]: pam_unix(sudo:session): session closed for user root
Jan 20 15:49:42 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:49:42 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:49:42 compute-0 ceph-mon[74360]: pgmap v3825: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:49:42 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:49:42 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 15:49:42 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:49:42 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 15:49:42 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 15:49:42 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:49:42 compute-0 sudo[407639]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:49:42 compute-0 sudo[407639]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:49:42 compute-0 sudo[407639]: pam_unix(sudo:session): session closed for user root
Jan 20 15:49:42 compute-0 sudo[407664]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 15:49:42 compute-0 sudo[407664]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:49:42 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:49:42 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:49:42 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:49:42.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:49:42 compute-0 podman[407730]: 2026-01-20 15:49:42.623484195 +0000 UTC m=+0.049610164 container create f6c2a611a751cbc423f83174a816876ff4e325f096790ecfb682b888c250eb31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_nobel, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 15:49:42 compute-0 systemd[1]: Started libpod-conmon-f6c2a611a751cbc423f83174a816876ff4e325f096790ecfb682b888c250eb31.scope.
Jan 20 15:49:42 compute-0 podman[407730]: 2026-01-20 15:49:42.601826309 +0000 UTC m=+0.027952328 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:49:42 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:49:42 compute-0 podman[407730]: 2026-01-20 15:49:42.71556544 +0000 UTC m=+0.141691429 container init f6c2a611a751cbc423f83174a816876ff4e325f096790ecfb682b888c250eb31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_nobel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 20 15:49:42 compute-0 podman[407730]: 2026-01-20 15:49:42.723535736 +0000 UTC m=+0.149661705 container start f6c2a611a751cbc423f83174a816876ff4e325f096790ecfb682b888c250eb31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_nobel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 20 15:49:42 compute-0 podman[407730]: 2026-01-20 15:49:42.726243749 +0000 UTC m=+0.152369738 container attach f6c2a611a751cbc423f83174a816876ff4e325f096790ecfb682b888c250eb31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_nobel, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 15:49:42 compute-0 elegant_nobel[407746]: 167 167
Jan 20 15:49:42 compute-0 systemd[1]: libpod-f6c2a611a751cbc423f83174a816876ff4e325f096790ecfb682b888c250eb31.scope: Deactivated successfully.
Jan 20 15:49:42 compute-0 podman[407730]: 2026-01-20 15:49:42.732850998 +0000 UTC m=+0.158976987 container died f6c2a611a751cbc423f83174a816876ff4e325f096790ecfb682b888c250eb31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_nobel, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 20 15:49:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-457274a642d0a04cebc6cc073b1e7d437fb074215f293669c5b96de381f5a8e2-merged.mount: Deactivated successfully.
Jan 20 15:49:42 compute-0 podman[407730]: 2026-01-20 15:49:42.781440284 +0000 UTC m=+0.207566263 container remove f6c2a611a751cbc423f83174a816876ff4e325f096790ecfb682b888c250eb31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_nobel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 15:49:42 compute-0 systemd[1]: libpod-conmon-f6c2a611a751cbc423f83174a816876ff4e325f096790ecfb682b888c250eb31.scope: Deactivated successfully.
Jan 20 15:49:42 compute-0 nova_compute[250018]: 2026-01-20 15:49:42.897 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:49:42 compute-0 podman[407768]: 2026-01-20 15:49:42.954695707 +0000 UTC m=+0.046074599 container create ee5f18fdca6597e6c9afafa4e3b8482049a1cf9558b3127c4eacdb1a50705875 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_mahavira, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 20 15:49:43 compute-0 systemd[1]: Started libpod-conmon-ee5f18fdca6597e6c9afafa4e3b8482049a1cf9558b3127c4eacdb1a50705875.scope.
Jan 20 15:49:43 compute-0 podman[407768]: 2026-01-20 15:49:42.937229253 +0000 UTC m=+0.028608175 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:49:43 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:49:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86bcaf0847534452685f6813794c66746606e65e593700930f80610c7e9a1b78/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 15:49:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86bcaf0847534452685f6813794c66746606e65e593700930f80610c7e9a1b78/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 15:49:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86bcaf0847534452685f6813794c66746606e65e593700930f80610c7e9a1b78/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 15:49:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86bcaf0847534452685f6813794c66746606e65e593700930f80610c7e9a1b78/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 15:49:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86bcaf0847534452685f6813794c66746606e65e593700930f80610c7e9a1b78/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 15:49:43 compute-0 podman[407768]: 2026-01-20 15:49:43.053269976 +0000 UTC m=+0.144648918 container init ee5f18fdca6597e6c9afafa4e3b8482049a1cf9558b3127c4eacdb1a50705875 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_mahavira, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 15:49:43 compute-0 podman[407768]: 2026-01-20 15:49:43.067366898 +0000 UTC m=+0.158745810 container start ee5f18fdca6597e6c9afafa4e3b8482049a1cf9558b3127c4eacdb1a50705875 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_mahavira, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef)
Jan 20 15:49:43 compute-0 podman[407768]: 2026-01-20 15:49:43.070958256 +0000 UTC m=+0.162337168 container attach ee5f18fdca6597e6c9afafa4e3b8482049a1cf9558b3127c4eacdb1a50705875 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_mahavira, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 20 15:49:43 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:49:43 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3826: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:49:43 compute-0 peaceful_mahavira[407784]: --> passed data devices: 0 physical, 1 LVM
Jan 20 15:49:43 compute-0 peaceful_mahavira[407784]: --> relative data size: 1.0
Jan 20 15:49:43 compute-0 peaceful_mahavira[407784]: --> All data devices are unavailable
Jan 20 15:49:43 compute-0 systemd[1]: libpod-ee5f18fdca6597e6c9afafa4e3b8482049a1cf9558b3127c4eacdb1a50705875.scope: Deactivated successfully.
Jan 20 15:49:43 compute-0 podman[407768]: 2026-01-20 15:49:43.868092207 +0000 UTC m=+0.959471119 container died ee5f18fdca6597e6c9afafa4e3b8482049a1cf9558b3127c4eacdb1a50705875 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_mahavira, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 20 15:49:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-86bcaf0847534452685f6813794c66746606e65e593700930f80610c7e9a1b78-merged.mount: Deactivated successfully.
Jan 20 15:49:43 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:49:43 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:49:43 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:49:43.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:49:43 compute-0 podman[407768]: 2026-01-20 15:49:43.921543725 +0000 UTC m=+1.012922617 container remove ee5f18fdca6597e6c9afafa4e3b8482049a1cf9558b3127c4eacdb1a50705875 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_mahavira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 15:49:43 compute-0 systemd[1]: libpod-conmon-ee5f18fdca6597e6c9afafa4e3b8482049a1cf9558b3127c4eacdb1a50705875.scope: Deactivated successfully.
Jan 20 15:49:43 compute-0 sudo[407664]: pam_unix(sudo:session): session closed for user root
Jan 20 15:49:44 compute-0 sudo[407810]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:49:44 compute-0 sudo[407810]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:49:44 compute-0 sudo[407810]: pam_unix(sudo:session): session closed for user root
Jan 20 15:49:44 compute-0 sudo[407835]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:49:44 compute-0 sudo[407835]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:49:44 compute-0 nova_compute[250018]: 2026-01-20 15:49:44.090 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:49:44 compute-0 sudo[407835]: pam_unix(sudo:session): session closed for user root
Jan 20 15:49:44 compute-0 sudo[407860]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:49:44 compute-0 sudo[407860]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:49:44 compute-0 sudo[407860]: pam_unix(sudo:session): session closed for user root
Jan 20 15:49:44 compute-0 sudo[407885]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- lvm list --format json
Jan 20 15:49:44 compute-0 sudo[407885]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:49:44 compute-0 podman[407951]: 2026-01-20 15:49:44.586095924 +0000 UTC m=+0.044538077 container create 3460b0bd17d6f9a2aace8212fc635ebb00148de45cf72bc8370f5236630c7af6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_bouman, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 15:49:44 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:49:44 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:49:44 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:49:44.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:49:44 compute-0 systemd[1]: Started libpod-conmon-3460b0bd17d6f9a2aace8212fc635ebb00148de45cf72bc8370f5236630c7af6.scope.
Jan 20 15:49:44 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:49:44 compute-0 podman[407951]: 2026-01-20 15:49:44.56708759 +0000 UTC m=+0.025529833 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:49:44 compute-0 podman[407951]: 2026-01-20 15:49:44.68190945 +0000 UTC m=+0.140351623 container init 3460b0bd17d6f9a2aace8212fc635ebb00148de45cf72bc8370f5236630c7af6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_bouman, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 15:49:44 compute-0 podman[407951]: 2026-01-20 15:49:44.688494388 +0000 UTC m=+0.146936541 container start 3460b0bd17d6f9a2aace8212fc635ebb00148de45cf72bc8370f5236630c7af6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_bouman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 15:49:44 compute-0 podman[407951]: 2026-01-20 15:49:44.691830719 +0000 UTC m=+0.150272872 container attach 3460b0bd17d6f9a2aace8212fc635ebb00148de45cf72bc8370f5236630c7af6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_bouman, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 20 15:49:44 compute-0 sad_bouman[407967]: 167 167
Jan 20 15:49:44 compute-0 podman[407951]: 2026-01-20 15:49:44.697446731 +0000 UTC m=+0.155888884 container died 3460b0bd17d6f9a2aace8212fc635ebb00148de45cf72bc8370f5236630c7af6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_bouman, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 15:49:44 compute-0 systemd[1]: libpod-3460b0bd17d6f9a2aace8212fc635ebb00148de45cf72bc8370f5236630c7af6.scope: Deactivated successfully.
Jan 20 15:49:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-58cbdf19b200c86b9ff21d15254d71e74a6f93af0e4455ab86864d4aada09b6b-merged.mount: Deactivated successfully.
Jan 20 15:49:44 compute-0 ceph-mon[74360]: pgmap v3826: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:49:44 compute-0 podman[407951]: 2026-01-20 15:49:44.778704732 +0000 UTC m=+0.237146885 container remove 3460b0bd17d6f9a2aace8212fc635ebb00148de45cf72bc8370f5236630c7af6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_bouman, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 20 15:49:44 compute-0 systemd[1]: libpod-conmon-3460b0bd17d6f9a2aace8212fc635ebb00148de45cf72bc8370f5236630c7af6.scope: Deactivated successfully.
Jan 20 15:49:44 compute-0 podman[407990]: 2026-01-20 15:49:44.967248238 +0000 UTC m=+0.061958599 container create 8a907f3999276993417500bd1619e928ba834388a6a8142d4efd631801d5f706 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_ellis, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True)
Jan 20 15:49:45 compute-0 systemd[1]: Started libpod-conmon-8a907f3999276993417500bd1619e928ba834388a6a8142d4efd631801d5f706.scope.
Jan 20 15:49:45 compute-0 podman[407990]: 2026-01-20 15:49:44.945436908 +0000 UTC m=+0.040147309 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:49:45 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:49:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/724d4aac0d458bf2d7daa939118a9fcc59724b4afb51999a70fcd7e8ce41a458/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 15:49:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/724d4aac0d458bf2d7daa939118a9fcc59724b4afb51999a70fcd7e8ce41a458/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 15:49:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/724d4aac0d458bf2d7daa939118a9fcc59724b4afb51999a70fcd7e8ce41a458/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 15:49:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/724d4aac0d458bf2d7daa939118a9fcc59724b4afb51999a70fcd7e8ce41a458/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 15:49:45 compute-0 podman[407990]: 2026-01-20 15:49:45.073485486 +0000 UTC m=+0.168195877 container init 8a907f3999276993417500bd1619e928ba834388a6a8142d4efd631801d5f706 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_ellis, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 15:49:45 compute-0 podman[407990]: 2026-01-20 15:49:45.081635417 +0000 UTC m=+0.176345778 container start 8a907f3999276993417500bd1619e928ba834388a6a8142d4efd631801d5f706 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_ellis, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 15:49:45 compute-0 podman[407990]: 2026-01-20 15:49:45.085170973 +0000 UTC m=+0.179881344 container attach 8a907f3999276993417500bd1619e928ba834388a6a8142d4efd631801d5f706 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_ellis, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 20 15:49:45 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3827: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:49:45 compute-0 exciting_ellis[408006]: {
Jan 20 15:49:45 compute-0 exciting_ellis[408006]:     "0": [
Jan 20 15:49:45 compute-0 exciting_ellis[408006]:         {
Jan 20 15:49:45 compute-0 exciting_ellis[408006]:             "devices": [
Jan 20 15:49:45 compute-0 exciting_ellis[408006]:                 "/dev/loop3"
Jan 20 15:49:45 compute-0 exciting_ellis[408006]:             ],
Jan 20 15:49:45 compute-0 exciting_ellis[408006]:             "lv_name": "ceph_lv0",
Jan 20 15:49:45 compute-0 exciting_ellis[408006]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 15:49:45 compute-0 exciting_ellis[408006]:             "lv_size": "7511998464",
Jan 20 15:49:45 compute-0 exciting_ellis[408006]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e399cf45-e6b6-5393-99f1-75c601d3f188,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afc3740a-ccee-46f8-b343-22648cc89376,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 20 15:49:45 compute-0 exciting_ellis[408006]:             "lv_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 15:49:45 compute-0 exciting_ellis[408006]:             "name": "ceph_lv0",
Jan 20 15:49:45 compute-0 exciting_ellis[408006]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 15:49:45 compute-0 exciting_ellis[408006]:             "tags": {
Jan 20 15:49:45 compute-0 exciting_ellis[408006]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 15:49:45 compute-0 exciting_ellis[408006]:                 "ceph.block_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 15:49:45 compute-0 exciting_ellis[408006]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 15:49:45 compute-0 exciting_ellis[408006]:                 "ceph.cluster_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 15:49:45 compute-0 exciting_ellis[408006]:                 "ceph.cluster_name": "ceph",
Jan 20 15:49:45 compute-0 exciting_ellis[408006]:                 "ceph.crush_device_class": "",
Jan 20 15:49:45 compute-0 exciting_ellis[408006]:                 "ceph.encrypted": "0",
Jan 20 15:49:45 compute-0 exciting_ellis[408006]:                 "ceph.osd_fsid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 15:49:45 compute-0 exciting_ellis[408006]:                 "ceph.osd_id": "0",
Jan 20 15:49:45 compute-0 exciting_ellis[408006]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 15:49:45 compute-0 exciting_ellis[408006]:                 "ceph.type": "block",
Jan 20 15:49:45 compute-0 exciting_ellis[408006]:                 "ceph.vdo": "0"
Jan 20 15:49:45 compute-0 exciting_ellis[408006]:             },
Jan 20 15:49:45 compute-0 exciting_ellis[408006]:             "type": "block",
Jan 20 15:49:45 compute-0 exciting_ellis[408006]:             "vg_name": "ceph_vg0"
Jan 20 15:49:45 compute-0 exciting_ellis[408006]:         }
Jan 20 15:49:45 compute-0 exciting_ellis[408006]:     ]
Jan 20 15:49:45 compute-0 exciting_ellis[408006]: }
Jan 20 15:49:45 compute-0 systemd[1]: libpod-8a907f3999276993417500bd1619e928ba834388a6a8142d4efd631801d5f706.scope: Deactivated successfully.
Jan 20 15:49:45 compute-0 podman[407990]: 2026-01-20 15:49:45.868496649 +0000 UTC m=+0.963207051 container died 8a907f3999276993417500bd1619e928ba834388a6a8142d4efd631801d5f706 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_ellis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 20 15:49:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-724d4aac0d458bf2d7daa939118a9fcc59724b4afb51999a70fcd7e8ce41a458-merged.mount: Deactivated successfully.
Jan 20 15:49:45 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:49:45 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:49:45 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:49:45.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:49:45 compute-0 podman[407990]: 2026-01-20 15:49:45.939791071 +0000 UTC m=+1.034501432 container remove 8a907f3999276993417500bd1619e928ba834388a6a8142d4efd631801d5f706 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_ellis, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 20 15:49:45 compute-0 systemd[1]: libpod-conmon-8a907f3999276993417500bd1619e928ba834388a6a8142d4efd631801d5f706.scope: Deactivated successfully.
Jan 20 15:49:45 compute-0 sudo[407885]: pam_unix(sudo:session): session closed for user root
Jan 20 15:49:46 compute-0 sudo[408031]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:49:46 compute-0 sudo[408031]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:49:46 compute-0 sudo[408031]: pam_unix(sudo:session): session closed for user root
Jan 20 15:49:46 compute-0 sudo[408056]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:49:46 compute-0 sudo[408056]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:49:46 compute-0 sudo[408056]: pam_unix(sudo:session): session closed for user root
Jan 20 15:49:46 compute-0 sudo[408081]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:49:46 compute-0 sudo[408081]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:49:46 compute-0 sudo[408081]: pam_unix(sudo:session): session closed for user root
Jan 20 15:49:46 compute-0 sudo[408106]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- raw list --format json
Jan 20 15:49:46 compute-0 sudo[408106]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:49:46 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:49:46 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:49:46 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:49:46.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:49:46 compute-0 podman[408173]: 2026-01-20 15:49:46.665531718 +0000 UTC m=+0.038406921 container create e189d7f827d0a5778082cd50ef74093c72b057697d061451d02998369e0d11dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_williamson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 20 15:49:46 compute-0 systemd[1]: Started libpod-conmon-e189d7f827d0a5778082cd50ef74093c72b057697d061451d02998369e0d11dc.scope.
Jan 20 15:49:46 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:49:46 compute-0 podman[408173]: 2026-01-20 15:49:46.648358383 +0000 UTC m=+0.021233606 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:49:46 compute-0 podman[408173]: 2026-01-20 15:49:46.755970788 +0000 UTC m=+0.128846011 container init e189d7f827d0a5778082cd50ef74093c72b057697d061451d02998369e0d11dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_williamson, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 15:49:46 compute-0 podman[408173]: 2026-01-20 15:49:46.763283035 +0000 UTC m=+0.136158238 container start e189d7f827d0a5778082cd50ef74093c72b057697d061451d02998369e0d11dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_williamson, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:49:46 compute-0 podman[408173]: 2026-01-20 15:49:46.766074281 +0000 UTC m=+0.138949534 container attach e189d7f827d0a5778082cd50ef74093c72b057697d061451d02998369e0d11dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_williamson, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 20 15:49:46 compute-0 elastic_williamson[408189]: 167 167
Jan 20 15:49:46 compute-0 systemd[1]: libpod-e189d7f827d0a5778082cd50ef74093c72b057697d061451d02998369e0d11dc.scope: Deactivated successfully.
Jan 20 15:49:46 compute-0 podman[408173]: 2026-01-20 15:49:46.767754746 +0000 UTC m=+0.140629989 container died e189d7f827d0a5778082cd50ef74093c72b057697d061451d02998369e0d11dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_williamson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 20 15:49:46 compute-0 ceph-mon[74360]: pgmap v3827: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:49:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-f63f27994774a140f358a6bdc931b6905a85289964efcb0f9b14611334c7d484-merged.mount: Deactivated successfully.
Jan 20 15:49:46 compute-0 podman[408173]: 2026-01-20 15:49:46.80886439 +0000 UTC m=+0.181739633 container remove e189d7f827d0a5778082cd50ef74093c72b057697d061451d02998369e0d11dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_williamson, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 20 15:49:46 compute-0 systemd[1]: libpod-conmon-e189d7f827d0a5778082cd50ef74093c72b057697d061451d02998369e0d11dc.scope: Deactivated successfully.
Jan 20 15:49:47 compute-0 podman[408214]: 2026-01-20 15:49:47.012641579 +0000 UTC m=+0.043546990 container create 71331337e0f1afacbda6930774d0f9f0181215f8ad84145cb8696e2b7c2bf3eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_easley, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 20 15:49:47 compute-0 nova_compute[250018]: 2026-01-20 15:49:47.045 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:49:47 compute-0 systemd[1]: Started libpod-conmon-71331337e0f1afacbda6930774d0f9f0181215f8ad84145cb8696e2b7c2bf3eb.scope.
Jan 20 15:49:47 compute-0 podman[408214]: 2026-01-20 15:49:46.995009902 +0000 UTC m=+0.025915333 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:49:47 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:49:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/882ceb4f93397096c8250343f161df8d38adfc8ecc8112a1a97f412f99a23caa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 15:49:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/882ceb4f93397096c8250343f161df8d38adfc8ecc8112a1a97f412f99a23caa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 15:49:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/882ceb4f93397096c8250343f161df8d38adfc8ecc8112a1a97f412f99a23caa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 15:49:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/882ceb4f93397096c8250343f161df8d38adfc8ecc8112a1a97f412f99a23caa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 15:49:47 compute-0 podman[408214]: 2026-01-20 15:49:47.114449887 +0000 UTC m=+0.145355308 container init 71331337e0f1afacbda6930774d0f9f0181215f8ad84145cb8696e2b7c2bf3eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_easley, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 15:49:47 compute-0 podman[408214]: 2026-01-20 15:49:47.120289685 +0000 UTC m=+0.151195096 container start 71331337e0f1afacbda6930774d0f9f0181215f8ad84145cb8696e2b7c2bf3eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_easley, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 20 15:49:47 compute-0 podman[408214]: 2026-01-20 15:49:47.123962655 +0000 UTC m=+0.154868066 container attach 71331337e0f1afacbda6930774d0f9f0181215f8ad84145cb8696e2b7c2bf3eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_easley, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 15:49:47 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3828: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:49:47 compute-0 nova_compute[250018]: 2026-01-20 15:49:47.902 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:49:47 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:49:47 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:49:47 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:49:47.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:49:47 compute-0 trusting_easley[408230]: {
Jan 20 15:49:47 compute-0 trusting_easley[408230]:     "afc3740a-ccee-46f8-b343-22648cc89376": {
Jan 20 15:49:47 compute-0 trusting_easley[408230]:         "ceph_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 15:49:47 compute-0 trusting_easley[408230]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 20 15:49:47 compute-0 trusting_easley[408230]:         "osd_id": 0,
Jan 20 15:49:47 compute-0 trusting_easley[408230]:         "osd_uuid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 15:49:47 compute-0 trusting_easley[408230]:         "type": "bluestore"
Jan 20 15:49:47 compute-0 trusting_easley[408230]:     }
Jan 20 15:49:47 compute-0 trusting_easley[408230]: }
Jan 20 15:49:47 compute-0 systemd[1]: libpod-71331337e0f1afacbda6930774d0f9f0181215f8ad84145cb8696e2b7c2bf3eb.scope: Deactivated successfully.
Jan 20 15:49:47 compute-0 podman[408214]: 2026-01-20 15:49:47.952096325 +0000 UTC m=+0.983001726 container died 71331337e0f1afacbda6930774d0f9f0181215f8ad84145cb8696e2b7c2bf3eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_easley, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Jan 20 15:49:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-882ceb4f93397096c8250343f161df8d38adfc8ecc8112a1a97f412f99a23caa-merged.mount: Deactivated successfully.
Jan 20 15:49:48 compute-0 podman[408214]: 2026-01-20 15:49:48.010567179 +0000 UTC m=+1.041472590 container remove 71331337e0f1afacbda6930774d0f9f0181215f8ad84145cb8696e2b7c2bf3eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_easley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 15:49:48 compute-0 systemd[1]: libpod-conmon-71331337e0f1afacbda6930774d0f9f0181215f8ad84145cb8696e2b7c2bf3eb.scope: Deactivated successfully.
Jan 20 15:49:48 compute-0 sudo[408106]: pam_unix(sudo:session): session closed for user root
Jan 20 15:49:48 compute-0 nova_compute[250018]: 2026-01-20 15:49:48.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:49:48 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 15:49:48 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:49:48 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 15:49:48 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:49:48 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 9ff4243b-d6ab-4d59-8ae0-54c7e38653b0 does not exist
Jan 20 15:49:48 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev dd80f192-9cbc-4c0f-bbad-e9e96e703762 does not exist
Jan 20 15:49:48 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 4b96b476-930b-4243-a34f-61a1150c0fc7 does not exist
Jan 20 15:49:48 compute-0 sudo[408261]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:49:48 compute-0 sudo[408261]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:49:48 compute-0 sudo[408261]: pam_unix(sudo:session): session closed for user root
Jan 20 15:49:48 compute-0 sudo[408286]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 15:49:48 compute-0 sudo[408286]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:49:48 compute-0 sudo[408286]: pam_unix(sudo:session): session closed for user root
Jan 20 15:49:48 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:49:48 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:49:48 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:49:48.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:49:48 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:49:49 compute-0 ceph-mon[74360]: pgmap v3828: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:49:49 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:49:49 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:49:49 compute-0 nova_compute[250018]: 2026-01-20 15:49:49.100 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:49:49 compute-0 nova_compute[250018]: 2026-01-20 15:49:49.166 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:49:49 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3829: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:49:49 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:49:49 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:49:49 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:49:49.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:49:50 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:49:50 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:49:50 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:49:50.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:49:51 compute-0 ceph-mon[74360]: pgmap v3829: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:49:51 compute-0 sshd-session[408312]: Invalid user oracle from 134.122.57.138 port 46972
Jan 20 15:49:51 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3830: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:49:51 compute-0 sshd-session[408312]: Connection closed by invalid user oracle 134.122.57.138 port 46972 [preauth]
Jan 20 15:49:51 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:49:51 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:49:51 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:49:51.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:49:52 compute-0 ceph-mon[74360]: pgmap v3830: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:49:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:49:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:49:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:49:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:49:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:49:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:49:52 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:49:52 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:49:52 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:49:52.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:49:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Optimize plan auto_2026-01-20_15:49:52
Jan 20 15:49:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 15:49:52 compute-0 ceph-mgr[74653]: [balancer INFO root] do_upmap
Jan 20 15:49:52 compute-0 ceph-mgr[74653]: [balancer INFO root] pools ['images', 'cephfs.cephfs.meta', 'vms', '.rgw.root', 'backups', 'default.rgw.meta', 'default.rgw.log', 'volumes', 'cephfs.cephfs.data', '.mgr', 'default.rgw.control']
Jan 20 15:49:52 compute-0 ceph-mgr[74653]: [balancer INFO root] prepared 0/10 changes
Jan 20 15:49:52 compute-0 nova_compute[250018]: 2026-01-20 15:49:52.906 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:49:53 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:49:53 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3831: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:49:53 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:49:53 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:49:53 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:49:53.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:49:54 compute-0 nova_compute[250018]: 2026-01-20 15:49:54.206 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:49:54 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:49:54 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:49:54 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:49:54.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:49:54 compute-0 ceph-mon[74360]: pgmap v3831: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:49:55 compute-0 nova_compute[250018]: 2026-01-20 15:49:55.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:49:55 compute-0 nova_compute[250018]: 2026-01-20 15:49:55.052 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 15:49:55 compute-0 nova_compute[250018]: 2026-01-20 15:49:55.052 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 15:49:55 compute-0 nova_compute[250018]: 2026-01-20 15:49:55.098 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 20 15:49:55 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3832: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:49:55 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:49:55 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:49:55 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:49:55.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:49:56 compute-0 nova_compute[250018]: 2026-01-20 15:49:56.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:49:56 compute-0 nova_compute[250018]: 2026-01-20 15:49:56.051 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 20 15:49:56 compute-0 sudo[408318]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:49:56 compute-0 sudo[408318]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:49:56 compute-0 sudo[408318]: pam_unix(sudo:session): session closed for user root
Jan 20 15:49:56 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:49:56 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:49:56 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:49:56.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:49:56 compute-0 sudo[408343]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:49:56 compute-0 sudo[408343]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:49:56 compute-0 sudo[408343]: pam_unix(sudo:session): session closed for user root
Jan 20 15:49:56 compute-0 ceph-mon[74360]: pgmap v3832: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:49:57 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3833: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:49:57 compute-0 nova_compute[250018]: 2026-01-20 15:49:57.911 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:49:57 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:49:57 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:49:57 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:49:57.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:49:58 compute-0 ceph-mgr[74653]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 15:49:58 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 15:49:58 compute-0 ceph-mgr[74653]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 15:49:58 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 15:49:58 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 15:49:58 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 15:49:58 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 15:49:58 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 15:49:58 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 15:49:58 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 15:49:58 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:49:58 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:49:58 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:49:58.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:49:58 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:49:58 compute-0 ceph-mon[74360]: pgmap v3833: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:49:59 compute-0 nova_compute[250018]: 2026-01-20 15:49:59.242 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:49:59 compute-0 podman[408370]: 2026-01-20 15:49:59.490946357 +0000 UTC m=+0.070239333 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:49:59 compute-0 podman[408369]: 2026-01-20 15:49:59.517923428 +0000 UTC m=+0.105098627 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20251202)
Jan 20 15:49:59 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3834: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:49:59 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:49:59 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:49:59 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:49:59.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:50:00 compute-0 ceph-mon[74360]: log_channel(cluster) log [INF] : overall HEALTH_OK
Jan 20 15:50:00 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:50:00 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:50:00 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:50:00.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:50:00 compute-0 ceph-mon[74360]: pgmap v3834: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:50:00 compute-0 ceph-mon[74360]: overall HEALTH_OK
Jan 20 15:50:01 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3835: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:50:01 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:50:01 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:50:01 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:50:01.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:50:02 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:50:02 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:50:02 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:50:02.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:50:02 compute-0 ceph-mon[74360]: pgmap v3835: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:50:02 compute-0 nova_compute[250018]: 2026-01-20 15:50:02.916 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:50:03 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:50:03 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3836: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:50:03 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:50:03 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:50:03 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:50:03.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:50:04 compute-0 nova_compute[250018]: 2026-01-20 15:50:04.278 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:50:04 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:50:04 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:50:04 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:50:04.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:50:04 compute-0 ceph-mon[74360]: pgmap v3836: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:50:05 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3837: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:50:05 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:50:05 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:50:05 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:50:05.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:50:06 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:50:06 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:50:06 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:50:06.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:50:06 compute-0 ceph-mon[74360]: pgmap v3837: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:50:07 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3838: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:50:07 compute-0 nova_compute[250018]: 2026-01-20 15:50:07.920 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:50:07 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:50:07 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:50:07 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:50:07.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:50:08 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:50:08 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:50:08 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:50:08.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:50:08 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:50:08 compute-0 ceph-mon[74360]: pgmap v3838: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:50:09 compute-0 nova_compute[250018]: 2026-01-20 15:50:09.297 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:50:09 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3839: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:50:09 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:50:09 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:50:09 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:50:09.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:50:10 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:50:10 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:50:10 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:50:10.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:50:10 compute-0 ceph-mon[74360]: pgmap v3839: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:50:11 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3840: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:50:11 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:50:11 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:50:11 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:50:11.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:50:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 15:50:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:50:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 20 15:50:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:50:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 20 15:50:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:50:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021614147124511445 of space, bias 1.0, pg target 0.6484244137353433 quantized to 32 (current 32)
Jan 20 15:50:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:50:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 20 15:50:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:50:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 20 15:50:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:50:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 20 15:50:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:50:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 15:50:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:50:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 20 15:50:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:50:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 20 15:50:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:50:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 15:50:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:50:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 20 15:50:12 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:50:12 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:50:12 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:50:12.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:50:12 compute-0 nova_compute[250018]: 2026-01-20 15:50:12.925 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:50:13 compute-0 ceph-mon[74360]: pgmap v3840: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:50:13 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:50:13 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3841: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:50:13 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 20 15:50:13 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/660523494' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:50:13 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 20 15:50:13 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/660523494' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:50:13 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:50:13 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:50:13 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:50:13.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:50:14 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/660523494' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:50:14 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/660523494' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:50:14 compute-0 nova_compute[250018]: 2026-01-20 15:50:14.299 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:50:14 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:50:14 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:50:14 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:50:14.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:50:15 compute-0 ceph-mon[74360]: pgmap v3841: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:50:15 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3842: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:50:15 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:50:15 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:50:15 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:50:15.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:50:16 compute-0 ceph-mon[74360]: pgmap v3842: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:50:16 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:50:16 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:50:16 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:50:16.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:50:16 compute-0 sudo[408422]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:50:16 compute-0 sudo[408422]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:50:16 compute-0 sudo[408422]: pam_unix(sudo:session): session closed for user root
Jan 20 15:50:16 compute-0 sudo[408447]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:50:16 compute-0 sudo[408447]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:50:16 compute-0 sudo[408447]: pam_unix(sudo:session): session closed for user root
Jan 20 15:50:17 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3843: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:50:17 compute-0 nova_compute[250018]: 2026-01-20 15:50:17.928 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:50:17 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:50:17 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:50:17 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:50:17.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:50:18 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:50:18 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:50:18 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:50:18.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:50:18 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:50:18 compute-0 ceph-mon[74360]: pgmap v3843: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:50:19 compute-0 nova_compute[250018]: 2026-01-20 15:50:19.332 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:50:19 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3844: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:50:19 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:50:19 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:50:19 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:50:19.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:50:20 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:50:20 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:50:20 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:50:20.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:50:20 compute-0 ceph-mon[74360]: pgmap v3844: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:50:21 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3845: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:50:21 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:50:21 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:50:21 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:50:21.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:50:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:50:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:50:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:50:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:50:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:50:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:50:22 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:50:22 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:50:22 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:50:22.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:50:22 compute-0 ceph-mon[74360]: pgmap v3845: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:50:22 compute-0 nova_compute[250018]: 2026-01-20 15:50:22.965 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:50:23 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:50:23 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3846: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:50:23 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:50:23 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:50:23 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:50:23.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:50:24 compute-0 nova_compute[250018]: 2026-01-20 15:50:24.334 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:50:24 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:50:24 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:50:24 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:50:24.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:50:24 compute-0 ceph-mon[74360]: pgmap v3846: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:50:25 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3847: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:50:25 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:50:25 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:50:25 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:50:25.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:50:26 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:50:26 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:50:26 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:50:26.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:50:26 compute-0 ceph-mon[74360]: pgmap v3847: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:50:27 compute-0 nova_compute[250018]: 2026-01-20 15:50:27.081 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:50:27 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3848: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:50:27 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:50:27 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:50:27 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:50:27.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:50:27 compute-0 nova_compute[250018]: 2026-01-20 15:50:27.969 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:50:28 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:50:28 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:50:28 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:50:28.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:50:28 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:50:28 compute-0 ceph-mon[74360]: pgmap v3848: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:50:29 compute-0 nova_compute[250018]: 2026-01-20 15:50:29.336 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:50:29 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3849: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:50:29 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:50:29 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:50:29 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:50:29.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:50:30 compute-0 nova_compute[250018]: 2026-01-20 15:50:30.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:50:30 compute-0 podman[408480]: 2026-01-20 15:50:30.496244931 +0000 UTC m=+0.084405558 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Jan 20 15:50:30 compute-0 podman[408479]: 2026-01-20 15:50:30.49657515 +0000 UTC m=+0.084763708 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 20 15:50:30 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:50:30 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:50:30 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:50:30.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:50:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:50:30.825 160071 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:50:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:50:30.826 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:50:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:50:30.826 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:50:30 compute-0 ceph-mon[74360]: pgmap v3849: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:50:30 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/3445212458' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:50:31 compute-0 nova_compute[250018]: 2026-01-20 15:50:31.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:50:31 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3850: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:50:31 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:50:31 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:50:31 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:50:31.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:50:31 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/2651699864' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:50:32 compute-0 nova_compute[250018]: 2026-01-20 15:50:32.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:50:32 compute-0 nova_compute[250018]: 2026-01-20 15:50:32.052 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 15:50:32 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:50:32 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:50:32 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:50:32.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:50:32 compute-0 nova_compute[250018]: 2026-01-20 15:50:32.974 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:50:32 compute-0 ceph-mon[74360]: pgmap v3850: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:50:33 compute-0 nova_compute[250018]: 2026-01-20 15:50:33.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:50:33 compute-0 nova_compute[250018]: 2026-01-20 15:50:33.090 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:50:33 compute-0 nova_compute[250018]: 2026-01-20 15:50:33.090 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:50:33 compute-0 nova_compute[250018]: 2026-01-20 15:50:33.091 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:50:33 compute-0 nova_compute[250018]: 2026-01-20 15:50:33.091 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 15:50:33 compute-0 nova_compute[250018]: 2026-01-20 15:50:33.092 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:50:33 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:50:33 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3990908444' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:50:33 compute-0 nova_compute[250018]: 2026-01-20 15:50:33.576 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:50:33 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:50:33 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3851: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:50:33 compute-0 sshd-session[408522]: Invalid user oracle from 134.122.57.138 port 49336
Jan 20 15:50:33 compute-0 nova_compute[250018]: 2026-01-20 15:50:33.742 250022 WARNING nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:50:33 compute-0 nova_compute[250018]: 2026-01-20 15:50:33.744 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4183MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 15:50:33 compute-0 nova_compute[250018]: 2026-01-20 15:50:33.744 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:50:33 compute-0 nova_compute[250018]: 2026-01-20 15:50:33.745 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:50:33 compute-0 nova_compute[250018]: 2026-01-20 15:50:33.824 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 15:50:33 compute-0 nova_compute[250018]: 2026-01-20 15:50:33.824 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 15:50:33 compute-0 sshd-session[408522]: Connection closed by invalid user oracle 134.122.57.138 port 49336 [preauth]
Jan 20 15:50:33 compute-0 nova_compute[250018]: 2026-01-20 15:50:33.851 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:50:33 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:50:33 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:50:33 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:50:33.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:50:34 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3990908444' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:50:34 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:50:34 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/425106910' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:50:34 compute-0 nova_compute[250018]: 2026-01-20 15:50:34.339 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:50:34 compute-0 nova_compute[250018]: 2026-01-20 15:50:34.346 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:50:34 compute-0 nova_compute[250018]: 2026-01-20 15:50:34.353 250022 DEBUG nova.compute.provider_tree [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:50:34 compute-0 nova_compute[250018]: 2026-01-20 15:50:34.402 250022 DEBUG nova.scheduler.client.report [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:50:34 compute-0 nova_compute[250018]: 2026-01-20 15:50:34.404 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 15:50:34 compute-0 nova_compute[250018]: 2026-01-20 15:50:34.405 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.660s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:50:34 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:50:34 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:50:34 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:50:34.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:50:35 compute-0 ceph-mon[74360]: pgmap v3851: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:50:35 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/425106910' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:50:35 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3852: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:50:35 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:50:35 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:50:35 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:50:35.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:50:36 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:50:36 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:50:36 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:50:36.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:50:36 compute-0 sudo[408571]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:50:36 compute-0 sudo[408571]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:50:36 compute-0 sudo[408571]: pam_unix(sudo:session): session closed for user root
Jan 20 15:50:36 compute-0 sudo[408596]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:50:36 compute-0 sudo[408596]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:50:36 compute-0 sudo[408596]: pam_unix(sudo:session): session closed for user root
Jan 20 15:50:37 compute-0 ceph-mon[74360]: pgmap v3852: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:50:37 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3853: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:50:37 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:50:37 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:50:37 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:50:37.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:50:37 compute-0 nova_compute[250018]: 2026-01-20 15:50:37.979 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:50:38 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:50:38 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:50:38 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:50:38.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:50:38 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:50:39 compute-0 ceph-mon[74360]: pgmap v3853: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:50:39 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/1285300892' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:50:39 compute-0 nova_compute[250018]: 2026-01-20 15:50:39.340 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:50:39 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3854: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:50:39 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:50:39 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:50:39 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:50:39.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:50:40 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/1405845245' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:50:40 compute-0 nova_compute[250018]: 2026-01-20 15:50:40.406 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:50:40 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:50:40 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:50:40 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:50:40.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:50:41 compute-0 nova_compute[250018]: 2026-01-20 15:50:41.046 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:50:41 compute-0 ceph-mon[74360]: pgmap v3854: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:50:41 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3855: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:50:41 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:50:41 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:50:41 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:50:41.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:50:42 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:50:42 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:50:42 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:50:42.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:50:42 compute-0 nova_compute[250018]: 2026-01-20 15:50:42.983 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:50:43 compute-0 nova_compute[250018]: 2026-01-20 15:50:43.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:50:43 compute-0 nova_compute[250018]: 2026-01-20 15:50:43.050 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 20 15:50:43 compute-0 nova_compute[250018]: 2026-01-20 15:50:43.073 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 20 15:50:43 compute-0 ceph-mon[74360]: pgmap v3855: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:50:43 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:50:43 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3856: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:50:43 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:50:43 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:50:43 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:50:43.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:50:44 compute-0 ceph-mon[74360]: pgmap v3856: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:50:44 compute-0 nova_compute[250018]: 2026-01-20 15:50:44.382 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:50:44 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:50:44 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:50:44 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:50:44.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:50:45 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3857: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:50:45 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:50:45 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:50:45 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:50:45.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:50:46 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:50:46 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:50:46 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:50:46.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:50:46 compute-0 ceph-mon[74360]: pgmap v3857: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:50:47 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3858: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:50:47 compute-0 nova_compute[250018]: 2026-01-20 15:50:47.987 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:50:47 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:50:47 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:50:47 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:50:47.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:50:48 compute-0 sudo[408627]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:50:48 compute-0 sudo[408627]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:50:48 compute-0 sudo[408627]: pam_unix(sudo:session): session closed for user root
Jan 20 15:50:48 compute-0 sudo[408652]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:50:48 compute-0 sudo[408652]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:50:48 compute-0 sudo[408652]: pam_unix(sudo:session): session closed for user root
Jan 20 15:50:48 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:50:48 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:50:48 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:50:48 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:50:48.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:50:48 compute-0 sudo[408677]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:50:48 compute-0 sudo[408677]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:50:48 compute-0 sudo[408677]: pam_unix(sudo:session): session closed for user root
Jan 20 15:50:48 compute-0 sudo[408702]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 15:50:48 compute-0 sudo[408702]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:50:48 compute-0 ceph-mon[74360]: pgmap v3858: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:50:49 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 20 15:50:49 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:50:49 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 20 15:50:49 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:50:49 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 20 15:50:49 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:50:49 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 20 15:50:49 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:50:49 compute-0 sudo[408702]: pam_unix(sudo:session): session closed for user root
Jan 20 15:50:49 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Jan 20 15:50:49 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 20 15:50:49 compute-0 nova_compute[250018]: 2026-01-20 15:50:49.385 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:50:49 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3859: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:50:49 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:50:49 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:50:49 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:50:49.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:50:50 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:50:50 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:50:50 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:50:50 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:50:50 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 20 15:50:50 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Jan 20 15:50:50 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 20 15:50:50 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 15:50:50 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:50:50 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 20 15:50:50 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 15:50:50 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 20 15:50:50 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:50:50 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev f5901ba0-0a43-485f-8e03-b128f8c3ef06 does not exist
Jan 20 15:50:50 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev c3fc9484-8934-4c1d-b32a-5b955d396931 does not exist
Jan 20 15:50:50 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev bf19bb12-8696-4dc6-bb8d-cfbcba089f9c does not exist
Jan 20 15:50:50 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 20 15:50:50 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 15:50:50 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 20 15:50:50 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 15:50:50 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 15:50:50 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:50:50 compute-0 sudo[408759]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:50:50 compute-0 sudo[408759]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:50:50 compute-0 sudo[408759]: pam_unix(sudo:session): session closed for user root
Jan 20 15:50:50 compute-0 sudo[408784]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:50:50 compute-0 sudo[408784]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:50:50 compute-0 sudo[408784]: pam_unix(sudo:session): session closed for user root
Jan 20 15:50:50 compute-0 sudo[408809]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:50:50 compute-0 sudo[408809]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:50:50 compute-0 sudo[408809]: pam_unix(sudo:session): session closed for user root
Jan 20 15:50:50 compute-0 sudo[408834]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 15:50:50 compute-0 sudo[408834]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:50:50 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:50:50 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:50:50 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:50:50.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:50:50 compute-0 podman[408900]: 2026-01-20 15:50:50.837719069 +0000 UTC m=+0.046671776 container create 3c4660074bbd387c0a17d13f627bbcb53c590ad3bfa2deffb13664ca947ccf6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_morse, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 15:50:50 compute-0 systemd[1]: Started libpod-conmon-3c4660074bbd387c0a17d13f627bbcb53c590ad3bfa2deffb13664ca947ccf6f.scope.
Jan 20 15:50:50 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:50:50 compute-0 podman[408900]: 2026-01-20 15:50:50.817296255 +0000 UTC m=+0.026248982 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:50:50 compute-0 podman[408900]: 2026-01-20 15:50:50.92085377 +0000 UTC m=+0.129806497 container init 3c4660074bbd387c0a17d13f627bbcb53c590ad3bfa2deffb13664ca947ccf6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_morse, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 15:50:50 compute-0 podman[408900]: 2026-01-20 15:50:50.930482571 +0000 UTC m=+0.139435278 container start 3c4660074bbd387c0a17d13f627bbcb53c590ad3bfa2deffb13664ca947ccf6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_morse, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 15:50:50 compute-0 podman[408900]: 2026-01-20 15:50:50.933610306 +0000 UTC m=+0.142563033 container attach 3c4660074bbd387c0a17d13f627bbcb53c590ad3bfa2deffb13664ca947ccf6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_morse, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 20 15:50:50 compute-0 gifted_morse[408916]: 167 167
Jan 20 15:50:50 compute-0 systemd[1]: libpod-3c4660074bbd387c0a17d13f627bbcb53c590ad3bfa2deffb13664ca947ccf6f.scope: Deactivated successfully.
Jan 20 15:50:50 compute-0 podman[408900]: 2026-01-20 15:50:50.938003975 +0000 UTC m=+0.146956692 container died 3c4660074bbd387c0a17d13f627bbcb53c590ad3bfa2deffb13664ca947ccf6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_morse, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 15:50:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-03b46634e31de4077b58fd7a9fafc86fb01936249e1c5f0e28d5b452391b1d76-merged.mount: Deactivated successfully.
Jan 20 15:50:50 compute-0 podman[408900]: 2026-01-20 15:50:50.984351431 +0000 UTC m=+0.193304158 container remove 3c4660074bbd387c0a17d13f627bbcb53c590ad3bfa2deffb13664ca947ccf6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_morse, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 20 15:50:50 compute-0 systemd[1]: libpod-conmon-3c4660074bbd387c0a17d13f627bbcb53c590ad3bfa2deffb13664ca947ccf6f.scope: Deactivated successfully.
Jan 20 15:50:51 compute-0 nova_compute[250018]: 2026-01-20 15:50:51.074 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:50:51 compute-0 ceph-mon[74360]: pgmap v3859: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:50:51 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 20 15:50:51 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:50:51 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 15:50:51 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:50:51 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 15:50:51 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 15:50:51 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:50:51 compute-0 podman[408941]: 2026-01-20 15:50:51.162627659 +0000 UTC m=+0.049778710 container create ec9e0a2ba2899b2bdfd57357c4bd5910410e71d9169bb694a3f3aa303e8d0177 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_williamson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 15:50:51 compute-0 systemd[1]: Started libpod-conmon-ec9e0a2ba2899b2bdfd57357c4bd5910410e71d9169bb694a3f3aa303e8d0177.scope.
Jan 20 15:50:51 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:50:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e818258bbf46ceff7778d21e624ce8c006c1cf84d6eedc7feda396cf17eb380b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 15:50:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e818258bbf46ceff7778d21e624ce8c006c1cf84d6eedc7feda396cf17eb380b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 15:50:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e818258bbf46ceff7778d21e624ce8c006c1cf84d6eedc7feda396cf17eb380b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 15:50:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e818258bbf46ceff7778d21e624ce8c006c1cf84d6eedc7feda396cf17eb380b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 15:50:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e818258bbf46ceff7778d21e624ce8c006c1cf84d6eedc7feda396cf17eb380b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 15:50:51 compute-0 podman[408941]: 2026-01-20 15:50:51.14123664 +0000 UTC m=+0.028387661 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:50:51 compute-0 podman[408941]: 2026-01-20 15:50:51.471051463 +0000 UTC m=+0.358202504 container init ec9e0a2ba2899b2bdfd57357c4bd5910410e71d9169bb694a3f3aa303e8d0177 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_williamson, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 20 15:50:51 compute-0 podman[408941]: 2026-01-20 15:50:51.477658542 +0000 UTC m=+0.364809553 container start ec9e0a2ba2899b2bdfd57357c4bd5910410e71d9169bb694a3f3aa303e8d0177 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_williamson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 15:50:51 compute-0 podman[408941]: 2026-01-20 15:50:51.551236424 +0000 UTC m=+0.438387435 container attach ec9e0a2ba2899b2bdfd57357c4bd5910410e71d9169bb694a3f3aa303e8d0177 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_williamson, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 15:50:51 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3860: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:50:51 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:50:51 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:50:51 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:50:51.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:50:52 compute-0 epic_williamson[408957]: --> passed data devices: 0 physical, 1 LVM
Jan 20 15:50:52 compute-0 epic_williamson[408957]: --> relative data size: 1.0
Jan 20 15:50:52 compute-0 epic_williamson[408957]: --> All data devices are unavailable
Jan 20 15:50:52 compute-0 systemd[1]: libpod-ec9e0a2ba2899b2bdfd57357c4bd5910410e71d9169bb694a3f3aa303e8d0177.scope: Deactivated successfully.
Jan 20 15:50:52 compute-0 podman[408972]: 2026-01-20 15:50:52.312100184 +0000 UTC m=+0.027323172 container died ec9e0a2ba2899b2bdfd57357c4bd5910410e71d9169bb694a3f3aa303e8d0177 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_williamson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 20 15:50:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-e818258bbf46ceff7778d21e624ce8c006c1cf84d6eedc7feda396cf17eb380b-merged.mount: Deactivated successfully.
Jan 20 15:50:52 compute-0 podman[408972]: 2026-01-20 15:50:52.434296483 +0000 UTC m=+0.149519461 container remove ec9e0a2ba2899b2bdfd57357c4bd5910410e71d9169bb694a3f3aa303e8d0177 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_williamson, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 15:50:52 compute-0 systemd[1]: libpod-conmon-ec9e0a2ba2899b2bdfd57357c4bd5910410e71d9169bb694a3f3aa303e8d0177.scope: Deactivated successfully.
Jan 20 15:50:52 compute-0 sudo[408834]: pam_unix(sudo:session): session closed for user root
Jan 20 15:50:52 compute-0 sudo[408988]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:50:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:50:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:50:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:50:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:50:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:50:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:50:52 compute-0 sudo[408988]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:50:52 compute-0 sudo[408988]: pam_unix(sudo:session): session closed for user root
Jan 20 15:50:52 compute-0 sudo[409013]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:50:52 compute-0 sudo[409013]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:50:52 compute-0 sudo[409013]: pam_unix(sudo:session): session closed for user root
Jan 20 15:50:52 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:50:52 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:50:52 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:50:52.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:50:52 compute-0 sudo[409038]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:50:52 compute-0 sudo[409038]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:50:52 compute-0 sudo[409038]: pam_unix(sudo:session): session closed for user root
Jan 20 15:50:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Optimize plan auto_2026-01-20_15:50:52
Jan 20 15:50:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 15:50:52 compute-0 ceph-mgr[74653]: [balancer INFO root] do_upmap
Jan 20 15:50:52 compute-0 ceph-mgr[74653]: [balancer INFO root] pools ['vms', 'default.rgw.log', 'cephfs.cephfs.meta', 'default.rgw.control', 'volumes', 'default.rgw.meta', 'cephfs.cephfs.data', '.mgr', '.rgw.root', 'images', 'backups']
Jan 20 15:50:52 compute-0 ceph-mgr[74653]: [balancer INFO root] prepared 0/10 changes
Jan 20 15:50:52 compute-0 sudo[409063]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- lvm list --format json
Jan 20 15:50:52 compute-0 sudo[409063]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:50:53 compute-0 nova_compute[250018]: 2026-01-20 15:50:53.017 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:50:53 compute-0 podman[409130]: 2026-01-20 15:50:53.07290642 +0000 UTC m=+0.041686650 container create a22f38ad385dd3c16d131c673a807acb644b58f38ae2fd603ee160bba58f4e20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_chatelet, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 15:50:53 compute-0 systemd[1]: Started libpod-conmon-a22f38ad385dd3c16d131c673a807acb644b58f38ae2fd603ee160bba58f4e20.scope.
Jan 20 15:50:53 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:50:53 compute-0 ceph-mon[74360]: pgmap v3860: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:50:53 compute-0 podman[409130]: 2026-01-20 15:50:53.051752417 +0000 UTC m=+0.020532647 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:50:53 compute-0 podman[409130]: 2026-01-20 15:50:53.150274226 +0000 UTC m=+0.119054466 container init a22f38ad385dd3c16d131c673a807acb644b58f38ae2fd603ee160bba58f4e20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_chatelet, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 20 15:50:53 compute-0 podman[409130]: 2026-01-20 15:50:53.158414296 +0000 UTC m=+0.127194496 container start a22f38ad385dd3c16d131c673a807acb644b58f38ae2fd603ee160bba58f4e20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_chatelet, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 20 15:50:53 compute-0 sad_chatelet[409146]: 167 167
Jan 20 15:50:53 compute-0 systemd[1]: libpod-a22f38ad385dd3c16d131c673a807acb644b58f38ae2fd603ee160bba58f4e20.scope: Deactivated successfully.
Jan 20 15:50:53 compute-0 podman[409130]: 2026-01-20 15:50:53.183420404 +0000 UTC m=+0.152200614 container attach a22f38ad385dd3c16d131c673a807acb644b58f38ae2fd603ee160bba58f4e20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_chatelet, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 20 15:50:53 compute-0 podman[409130]: 2026-01-20 15:50:53.184030841 +0000 UTC m=+0.152811071 container died a22f38ad385dd3c16d131c673a807acb644b58f38ae2fd603ee160bba58f4e20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_chatelet, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 20 15:50:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-32bb3bf3ad8c5ddee16334b29f9bb58df9cbbce30f21bb5146e1ddbf5575cc25-merged.mount: Deactivated successfully.
Jan 20 15:50:53 compute-0 podman[409130]: 2026-01-20 15:50:53.218243197 +0000 UTC m=+0.187023407 container remove a22f38ad385dd3c16d131c673a807acb644b58f38ae2fd603ee160bba58f4e20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_chatelet, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef)
Jan 20 15:50:53 compute-0 systemd[1]: libpod-conmon-a22f38ad385dd3c16d131c673a807acb644b58f38ae2fd603ee160bba58f4e20.scope: Deactivated successfully.
Jan 20 15:50:53 compute-0 podman[409168]: 2026-01-20 15:50:53.3741742 +0000 UTC m=+0.041886345 container create 805c88b3f934adaebdfab243496afcec718641c68d6f18c70c89f12ed3c805eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_colden, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 15:50:53 compute-0 systemd[1]: Started libpod-conmon-805c88b3f934adaebdfab243496afcec718641c68d6f18c70c89f12ed3c805eb.scope.
Jan 20 15:50:53 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:50:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eed8d994e497bbc4b50d758e41e31e5276298934d0f32899d477cb10aec39a2a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 15:50:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eed8d994e497bbc4b50d758e41e31e5276298934d0f32899d477cb10aec39a2a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 15:50:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eed8d994e497bbc4b50d758e41e31e5276298934d0f32899d477cb10aec39a2a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 15:50:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eed8d994e497bbc4b50d758e41e31e5276298934d0f32899d477cb10aec39a2a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 15:50:53 compute-0 podman[409168]: 2026-01-20 15:50:53.442976245 +0000 UTC m=+0.110688390 container init 805c88b3f934adaebdfab243496afcec718641c68d6f18c70c89f12ed3c805eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_colden, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 15:50:53 compute-0 podman[409168]: 2026-01-20 15:50:53.450600761 +0000 UTC m=+0.118312876 container start 805c88b3f934adaebdfab243496afcec718641c68d6f18c70c89f12ed3c805eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_colden, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 20 15:50:53 compute-0 podman[409168]: 2026-01-20 15:50:53.357599872 +0000 UTC m=+0.025312027 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:50:53 compute-0 podman[409168]: 2026-01-20 15:50:53.453613652 +0000 UTC m=+0.121325777 container attach 805c88b3f934adaebdfab243496afcec718641c68d6f18c70c89f12ed3c805eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_colden, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 20 15:50:53 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:50:53 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3861: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:50:53 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:50:54 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:50:54 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:50:53.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:50:54 compute-0 keen_colden[409184]: {
Jan 20 15:50:54 compute-0 keen_colden[409184]:     "0": [
Jan 20 15:50:54 compute-0 keen_colden[409184]:         {
Jan 20 15:50:54 compute-0 keen_colden[409184]:             "devices": [
Jan 20 15:50:54 compute-0 keen_colden[409184]:                 "/dev/loop3"
Jan 20 15:50:54 compute-0 keen_colden[409184]:             ],
Jan 20 15:50:54 compute-0 keen_colden[409184]:             "lv_name": "ceph_lv0",
Jan 20 15:50:54 compute-0 keen_colden[409184]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 15:50:54 compute-0 keen_colden[409184]:             "lv_size": "7511998464",
Jan 20 15:50:54 compute-0 keen_colden[409184]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e399cf45-e6b6-5393-99f1-75c601d3f188,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afc3740a-ccee-46f8-b343-22648cc89376,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 20 15:50:54 compute-0 keen_colden[409184]:             "lv_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 15:50:54 compute-0 keen_colden[409184]:             "name": "ceph_lv0",
Jan 20 15:50:54 compute-0 keen_colden[409184]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 15:50:54 compute-0 keen_colden[409184]:             "tags": {
Jan 20 15:50:54 compute-0 keen_colden[409184]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 15:50:54 compute-0 keen_colden[409184]:                 "ceph.block_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 15:50:54 compute-0 keen_colden[409184]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 15:50:54 compute-0 keen_colden[409184]:                 "ceph.cluster_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 15:50:54 compute-0 keen_colden[409184]:                 "ceph.cluster_name": "ceph",
Jan 20 15:50:54 compute-0 keen_colden[409184]:                 "ceph.crush_device_class": "",
Jan 20 15:50:54 compute-0 keen_colden[409184]:                 "ceph.encrypted": "0",
Jan 20 15:50:54 compute-0 keen_colden[409184]:                 "ceph.osd_fsid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 15:50:54 compute-0 keen_colden[409184]:                 "ceph.osd_id": "0",
Jan 20 15:50:54 compute-0 keen_colden[409184]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 15:50:54 compute-0 keen_colden[409184]:                 "ceph.type": "block",
Jan 20 15:50:54 compute-0 keen_colden[409184]:                 "ceph.vdo": "0"
Jan 20 15:50:54 compute-0 keen_colden[409184]:             },
Jan 20 15:50:54 compute-0 keen_colden[409184]:             "type": "block",
Jan 20 15:50:54 compute-0 keen_colden[409184]:             "vg_name": "ceph_vg0"
Jan 20 15:50:54 compute-0 keen_colden[409184]:         }
Jan 20 15:50:54 compute-0 keen_colden[409184]:     ]
Jan 20 15:50:54 compute-0 keen_colden[409184]: }
Jan 20 15:50:54 compute-0 systemd[1]: libpod-805c88b3f934adaebdfab243496afcec718641c68d6f18c70c89f12ed3c805eb.scope: Deactivated successfully.
Jan 20 15:50:54 compute-0 podman[409168]: 2026-01-20 15:50:54.191064077 +0000 UTC m=+0.858776212 container died 805c88b3f934adaebdfab243496afcec718641c68d6f18c70c89f12ed3c805eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_colden, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 15:50:54 compute-0 ceph-mon[74360]: pgmap v3861: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:50:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-eed8d994e497bbc4b50d758e41e31e5276298934d0f32899d477cb10aec39a2a-merged.mount: Deactivated successfully.
Jan 20 15:50:54 compute-0 podman[409168]: 2026-01-20 15:50:54.356672892 +0000 UTC m=+1.024385057 container remove 805c88b3f934adaebdfab243496afcec718641c68d6f18c70c89f12ed3c805eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_colden, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 15:50:54 compute-0 systemd[1]: libpod-conmon-805c88b3f934adaebdfab243496afcec718641c68d6f18c70c89f12ed3c805eb.scope: Deactivated successfully.
Jan 20 15:50:54 compute-0 sudo[409063]: pam_unix(sudo:session): session closed for user root
Jan 20 15:50:54 compute-0 nova_compute[250018]: 2026-01-20 15:50:54.427 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:50:54 compute-0 sudo[409208]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:50:54 compute-0 sudo[409208]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:50:54 compute-0 sudo[409208]: pam_unix(sudo:session): session closed for user root
Jan 20 15:50:54 compute-0 sudo[409233]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:50:54 compute-0 sudo[409233]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:50:54 compute-0 sudo[409233]: pam_unix(sudo:session): session closed for user root
Jan 20 15:50:54 compute-0 sudo[409258]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:50:54 compute-0 sudo[409258]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:50:54 compute-0 sudo[409258]: pam_unix(sudo:session): session closed for user root
Jan 20 15:50:54 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:50:54 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:50:54 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:50:54.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:50:54 compute-0 sudo[409283]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- raw list --format json
Jan 20 15:50:54 compute-0 sudo[409283]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:50:55 compute-0 podman[409349]: 2026-01-20 15:50:55.027502432 +0000 UTC m=+0.046657604 container create 078a9a82549bd340effa44d80b3d78c9377bc5c1b3559cd8298875bd972a20d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_raman, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 20 15:50:55 compute-0 systemd[1]: Started libpod-conmon-078a9a82549bd340effa44d80b3d78c9377bc5c1b3559cd8298875bd972a20d4.scope.
Jan 20 15:50:55 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:50:55 compute-0 podman[409349]: 2026-01-20 15:50:55.008630782 +0000 UTC m=+0.027786014 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:50:55 compute-0 podman[409349]: 2026-01-20 15:50:55.11602943 +0000 UTC m=+0.135184612 container init 078a9a82549bd340effa44d80b3d78c9377bc5c1b3559cd8298875bd972a20d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_raman, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 15:50:55 compute-0 podman[409349]: 2026-01-20 15:50:55.122755192 +0000 UTC m=+0.141910374 container start 078a9a82549bd340effa44d80b3d78c9377bc5c1b3559cd8298875bd972a20d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_raman, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:50:55 compute-0 podman[409349]: 2026-01-20 15:50:55.125986331 +0000 UTC m=+0.145141513 container attach 078a9a82549bd340effa44d80b3d78c9377bc5c1b3559cd8298875bd972a20d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_raman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 15:50:55 compute-0 hungry_raman[409365]: 167 167
Jan 20 15:50:55 compute-0 systemd[1]: libpod-078a9a82549bd340effa44d80b3d78c9377bc5c1b3559cd8298875bd972a20d4.scope: Deactivated successfully.
Jan 20 15:50:55 compute-0 podman[409349]: 2026-01-20 15:50:55.129133636 +0000 UTC m=+0.148288818 container died 078a9a82549bd340effa44d80b3d78c9377bc5c1b3559cd8298875bd972a20d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_raman, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:50:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-c2bfe97d0425f21ae630bd5b24b92ff1dec738d1817fa6f36b09b071d9103b1b-merged.mount: Deactivated successfully.
Jan 20 15:50:55 compute-0 podman[409349]: 2026-01-20 15:50:55.17030885 +0000 UTC m=+0.189464032 container remove 078a9a82549bd340effa44d80b3d78c9377bc5c1b3559cd8298875bd972a20d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_raman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 20 15:50:55 compute-0 systemd[1]: libpod-conmon-078a9a82549bd340effa44d80b3d78c9377bc5c1b3559cd8298875bd972a20d4.scope: Deactivated successfully.
Jan 20 15:50:55 compute-0 sshd-session[409367]: error: kex_exchange_identification: read: Connection reset by peer
Jan 20 15:50:55 compute-0 sshd-session[409367]: Connection reset by 176.120.22.52 port 26577
Jan 20 15:50:55 compute-0 podman[409388]: 2026-01-20 15:50:55.343791309 +0000 UTC m=+0.044626670 container create 986a06e270899a1dc23795a1820efac7f445c9354527862a7173439002d22120 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_zhukovsky, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 15:50:55 compute-0 systemd[1]: Started libpod-conmon-986a06e270899a1dc23795a1820efac7f445c9354527862a7173439002d22120.scope.
Jan 20 15:50:55 compute-0 podman[409388]: 2026-01-20 15:50:55.325217447 +0000 UTC m=+0.026052808 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:50:55 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:50:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c9a6b3dbd58c201cb96d3a9fb81423134499cacd20cc4536bb42ab38e610273/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 15:50:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c9a6b3dbd58c201cb96d3a9fb81423134499cacd20cc4536bb42ab38e610273/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 15:50:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c9a6b3dbd58c201cb96d3a9fb81423134499cacd20cc4536bb42ab38e610273/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 15:50:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c9a6b3dbd58c201cb96d3a9fb81423134499cacd20cc4536bb42ab38e610273/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 15:50:55 compute-0 podman[409388]: 2026-01-20 15:50:55.44646182 +0000 UTC m=+0.147297191 container init 986a06e270899a1dc23795a1820efac7f445c9354527862a7173439002d22120 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_zhukovsky, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 15:50:55 compute-0 podman[409388]: 2026-01-20 15:50:55.455478325 +0000 UTC m=+0.156313666 container start 986a06e270899a1dc23795a1820efac7f445c9354527862a7173439002d22120 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_zhukovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 15:50:55 compute-0 podman[409388]: 2026-01-20 15:50:55.458511857 +0000 UTC m=+0.159347228 container attach 986a06e270899a1dc23795a1820efac7f445c9354527862a7173439002d22120 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_zhukovsky, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 15:50:55 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3862: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:50:56 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:50:56 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:50:56 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:50:56.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:50:56 compute-0 distracted_zhukovsky[409405]: {
Jan 20 15:50:56 compute-0 distracted_zhukovsky[409405]:     "afc3740a-ccee-46f8-b343-22648cc89376": {
Jan 20 15:50:56 compute-0 distracted_zhukovsky[409405]:         "ceph_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 15:50:56 compute-0 distracted_zhukovsky[409405]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 20 15:50:56 compute-0 distracted_zhukovsky[409405]:         "osd_id": 0,
Jan 20 15:50:56 compute-0 distracted_zhukovsky[409405]:         "osd_uuid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 15:50:56 compute-0 distracted_zhukovsky[409405]:         "type": "bluestore"
Jan 20 15:50:56 compute-0 distracted_zhukovsky[409405]:     }
Jan 20 15:50:56 compute-0 distracted_zhukovsky[409405]: }
Jan 20 15:50:56 compute-0 systemd[1]: libpod-986a06e270899a1dc23795a1820efac7f445c9354527862a7173439002d22120.scope: Deactivated successfully.
Jan 20 15:50:56 compute-0 podman[409388]: 2026-01-20 15:50:56.300499422 +0000 UTC m=+1.001334763 container died 986a06e270899a1dc23795a1820efac7f445c9354527862a7173439002d22120 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_zhukovsky, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 15:50:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-6c9a6b3dbd58c201cb96d3a9fb81423134499cacd20cc4536bb42ab38e610273-merged.mount: Deactivated successfully.
Jan 20 15:50:56 compute-0 podman[409388]: 2026-01-20 15:50:56.355148342 +0000 UTC m=+1.055983683 container remove 986a06e270899a1dc23795a1820efac7f445c9354527862a7173439002d22120 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_zhukovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 15:50:56 compute-0 systemd[1]: libpod-conmon-986a06e270899a1dc23795a1820efac7f445c9354527862a7173439002d22120.scope: Deactivated successfully.
Jan 20 15:50:56 compute-0 sudo[409283]: pam_unix(sudo:session): session closed for user root
Jan 20 15:50:56 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 15:50:56 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:50:56 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 15:50:56 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:50:56 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 74f99c3e-4194-42b6-b4ad-750baca8c64a does not exist
Jan 20 15:50:56 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev bfac1168-0c5c-4948-a61b-49c1d65a5fb5 does not exist
Jan 20 15:50:56 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 5b00658f-c9a9-46f9-bb58-28d77929ddf2 does not exist
Jan 20 15:50:56 compute-0 sudo[409439]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:50:56 compute-0 sudo[409439]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:50:56 compute-0 sudo[409439]: pam_unix(sudo:session): session closed for user root
Jan 20 15:50:56 compute-0 sudo[409464]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 15:50:56 compute-0 sudo[409464]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:50:56 compute-0 sudo[409464]: pam_unix(sudo:session): session closed for user root
Jan 20 15:50:56 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:50:56 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:50:56 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:50:56.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:50:56 compute-0 ceph-mon[74360]: pgmap v3862: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:50:56 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:50:56 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:50:57 compute-0 nova_compute[250018]: 2026-01-20 15:50:57.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:50:57 compute-0 nova_compute[250018]: 2026-01-20 15:50:57.052 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 15:50:57 compute-0 nova_compute[250018]: 2026-01-20 15:50:57.052 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 15:50:57 compute-0 nova_compute[250018]: 2026-01-20 15:50:57.074 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 20 15:50:57 compute-0 sudo[409489]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:50:57 compute-0 sudo[409489]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:50:57 compute-0 sudo[409489]: pam_unix(sudo:session): session closed for user root
Jan 20 15:50:57 compute-0 sudo[409514]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:50:57 compute-0 sudo[409514]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:50:57 compute-0 sudo[409514]: pam_unix(sudo:session): session closed for user root
Jan 20 15:50:57 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3863: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:50:58 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:50:58 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:50:58 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:50:58.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:50:58 compute-0 nova_compute[250018]: 2026-01-20 15:50:58.021 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:50:58 compute-0 ceph-mgr[74653]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 15:50:58 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 15:50:58 compute-0 ceph-mgr[74653]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 15:50:58 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 15:50:58 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 15:50:58 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 15:50:58 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 15:50:58 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 15:50:58 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 15:50:58 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 15:50:58 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:50:58 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:50:58 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:50:58 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:50:58.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:50:58 compute-0 ceph-mon[74360]: pgmap v3863: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:50:59 compute-0 nova_compute[250018]: 2026-01-20 15:50:59.428 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:50:59 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3864: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:51:00 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:51:00 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:51:00 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:51:00.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:51:00 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:51:00 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:51:00 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:51:00.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:51:00 compute-0 ceph-mon[74360]: pgmap v3864: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:51:01 compute-0 podman[409542]: 2026-01-20 15:51:01.467431622 +0000 UTC m=+0.061655461 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:51:01 compute-0 podman[409541]: 2026-01-20 15:51:01.522046402 +0000 UTC m=+0.116421225 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, 
managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 20 15:51:01 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3865: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:51:02 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:51:02 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:51:02 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:51:02.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:51:02 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:51:02 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:51:02 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:51:02.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:51:02 compute-0 ceph-mon[74360]: pgmap v3865: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:51:03 compute-0 nova_compute[250018]: 2026-01-20 15:51:03.024 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:51:03 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:51:03 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3866: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:51:04 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:51:04 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:51:04 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:51:04.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:51:04 compute-0 nova_compute[250018]: 2026-01-20 15:51:04.429 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:51:04 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:51:04 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:51:04 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:51:04.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:51:04 compute-0 ceph-mon[74360]: pgmap v3866: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:51:05 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3867: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:51:06 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:51:06 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:51:06 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:51:06.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:51:06 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:51:06 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:51:06 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:51:06.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:51:06 compute-0 ceph-mon[74360]: pgmap v3867: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:51:07 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3868: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:51:08 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:51:08 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:51:08 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:51:08.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:51:08 compute-0 nova_compute[250018]: 2026-01-20 15:51:08.030 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:51:08 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:51:08 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:51:08 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:51:08 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:51:08.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:51:09 compute-0 ceph-mon[74360]: pgmap v3868: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:51:09 compute-0 nova_compute[250018]: 2026-01-20 15:51:09.431 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:51:09 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3869: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:51:10 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:51:10 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:51:10 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:51:10.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:51:10 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:51:10 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:51:10 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:51:10.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:51:11 compute-0 ceph-mon[74360]: pgmap v3869: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:51:11 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3870: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:51:12 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:51:12 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:51:12 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:51:12.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:51:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 15:51:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:51:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 20 15:51:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:51:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 20 15:51:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:51:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021614147124511445 of space, bias 1.0, pg target 0.6484244137353433 quantized to 32 (current 32)
Jan 20 15:51:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:51:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 20 15:51:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:51:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 20 15:51:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:51:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 20 15:51:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:51:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 15:51:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:51:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 20 15:51:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:51:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 20 15:51:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:51:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 15:51:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:51:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 20 15:51:12 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:51:12 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:51:12 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:51:12.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:51:13 compute-0 nova_compute[250018]: 2026-01-20 15:51:13.033 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:51:13 compute-0 ceph-mon[74360]: pgmap v3870: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:51:13 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:51:13 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3871: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:51:14 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:51:14 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:51:14 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:51:14.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:51:14 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/3537550131' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:51:14 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/3537550131' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:51:14 compute-0 nova_compute[250018]: 2026-01-20 15:51:14.483 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:51:14 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:51:14 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:51:14 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:51:14.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:51:15 compute-0 ceph-mon[74360]: pgmap v3871: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:51:15 compute-0 sshd-session[409592]: Invalid user oracle from 134.122.57.138 port 54596
Jan 20 15:51:15 compute-0 sshd-session[409592]: Connection closed by invalid user oracle 134.122.57.138 port 54596 [preauth]
Jan 20 15:51:15 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3872: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:51:16 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:51:16 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:51:16 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:51:16.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:51:16 compute-0 ceph-mon[74360]: pgmap v3872: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:51:16 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:51:16 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:51:16 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:51:16.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:51:17 compute-0 sudo[409596]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:51:17 compute-0 sudo[409596]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:51:17 compute-0 sudo[409596]: pam_unix(sudo:session): session closed for user root
Jan 20 15:51:17 compute-0 sudo[409621]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:51:17 compute-0 sudo[409621]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:51:17 compute-0 sudo[409621]: pam_unix(sudo:session): session closed for user root
Jan 20 15:51:17 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3873: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:51:18 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:51:18 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:51:18 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:51:18.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:51:18 compute-0 nova_compute[250018]: 2026-01-20 15:51:18.037 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:51:18 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:51:18 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:51:18 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:51:18 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:51:18.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:51:18 compute-0 ceph-mon[74360]: pgmap v3873: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:51:19 compute-0 nova_compute[250018]: 2026-01-20 15:51:19.485 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:51:19 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3874: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:51:20 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:51:20 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:51:20 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:51:20.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:51:20 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:51:20 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:51:20 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:51:20.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:51:20 compute-0 ceph-mon[74360]: pgmap v3874: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:51:21 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3875: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:51:21 compute-0 nova_compute[250018]: 2026-01-20 15:51:21.891 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:51:22 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:51:22 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:51:22 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:51:22.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:51:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:51:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:51:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:51:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:51:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:51:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:51:22 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:51:22 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:51:22 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:51:22.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:51:22 compute-0 ceph-mon[74360]: pgmap v3875: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:51:23 compute-0 nova_compute[250018]: 2026-01-20 15:51:23.042 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:51:23 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:51:23 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3876: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:51:24 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:51:24 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:51:24 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:51:24.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:51:24 compute-0 nova_compute[250018]: 2026-01-20 15:51:24.539 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:51:24 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:51:24 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:51:24 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:51:24.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:51:24 compute-0 ceph-mon[74360]: pgmap v3876: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:51:25 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3877: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:51:26 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:51:26 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:51:26 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:51:26.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:51:26 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:51:26 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:51:26 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:51:26.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:51:26 compute-0 ceph-mon[74360]: pgmap v3877: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:51:27 compute-0 nova_compute[250018]: 2026-01-20 15:51:27.074 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:51:27 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3878: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:51:28 compute-0 nova_compute[250018]: 2026-01-20 15:51:28.046 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:51:28 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:51:28 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:51:28 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:51:28.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:51:28 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:51:28 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:51:28 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:51:28 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:51:28.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:51:28 compute-0 ceph-mon[74360]: pgmap v3878: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:51:29 compute-0 nova_compute[250018]: 2026-01-20 15:51:29.541 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:51:29 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3879: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:51:30 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:51:30 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:51:30 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:51:30.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:51:30 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:51:30 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:51:30 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:51:30.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:51:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:51:30.826 160071 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:51:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:51:30.827 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:51:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:51:30.827 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:51:30 compute-0 ceph-mon[74360]: pgmap v3879: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:51:31 compute-0 nova_compute[250018]: 2026-01-20 15:51:31.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:51:31 compute-0 nova_compute[250018]: 2026-01-20 15:51:31.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:51:31 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3880: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:51:32 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:51:32 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:51:32 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:51:32.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:51:32 compute-0 podman[409655]: 2026-01-20 15:51:32.47425465 +0000 UTC m=+0.055519205 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 20 15:51:32 compute-0 podman[409653]: 2026-01-20 15:51:32.511048367 +0000 UTC m=+0.091922911 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 20 15:51:32 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:51:32 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:51:32 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:51:32.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:51:32 compute-0 ceph-mon[74360]: pgmap v3880: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:51:32 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/3335256768' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:51:33 compute-0 nova_compute[250018]: 2026-01-20 15:51:33.049 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:51:33 compute-0 nova_compute[250018]: 2026-01-20 15:51:33.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:51:33 compute-0 nova_compute[250018]: 2026-01-20 15:51:33.051 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 15:51:33 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:51:33 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3881: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:51:33 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/814707366' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:51:34 compute-0 nova_compute[250018]: 2026-01-20 15:51:34.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:51:34 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:51:34 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:51:34 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:51:34.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:51:34 compute-0 nova_compute[250018]: 2026-01-20 15:51:34.077 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:51:34 compute-0 nova_compute[250018]: 2026-01-20 15:51:34.077 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:51:34 compute-0 nova_compute[250018]: 2026-01-20 15:51:34.078 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:51:34 compute-0 nova_compute[250018]: 2026-01-20 15:51:34.078 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 15:51:34 compute-0 nova_compute[250018]: 2026-01-20 15:51:34.078 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:51:34 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:51:34 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3208351070' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:51:34 compute-0 nova_compute[250018]: 2026-01-20 15:51:34.500 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:51:34 compute-0 nova_compute[250018]: 2026-01-20 15:51:34.542 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:51:34 compute-0 nova_compute[250018]: 2026-01-20 15:51:34.666 250022 WARNING nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:51:34 compute-0 nova_compute[250018]: 2026-01-20 15:51:34.667 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4193MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 15:51:34 compute-0 nova_compute[250018]: 2026-01-20 15:51:34.667 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:51:34 compute-0 nova_compute[250018]: 2026-01-20 15:51:34.667 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:51:34 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:51:34 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:51:34 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:51:34.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:51:34 compute-0 nova_compute[250018]: 2026-01-20 15:51:34.741 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 15:51:34 compute-0 nova_compute[250018]: 2026-01-20 15:51:34.741 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 15:51:34 compute-0 nova_compute[250018]: 2026-01-20 15:51:34.767 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:51:34 compute-0 ceph-mon[74360]: pgmap v3881: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:51:34 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3208351070' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:51:35 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:51:35 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1180192974' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:51:35 compute-0 nova_compute[250018]: 2026-01-20 15:51:35.218 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:51:35 compute-0 nova_compute[250018]: 2026-01-20 15:51:35.224 250022 DEBUG nova.compute.provider_tree [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:51:35 compute-0 nova_compute[250018]: 2026-01-20 15:51:35.244 250022 DEBUG nova.scheduler.client.report [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:51:35 compute-0 nova_compute[250018]: 2026-01-20 15:51:35.247 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 15:51:35 compute-0 nova_compute[250018]: 2026-01-20 15:51:35.247 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.580s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:51:35 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3882: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:51:36 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/1180192974' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:51:36 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:51:36 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:51:36 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:51:36.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:51:36 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:51:36 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:51:36 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:51:36.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:51:37 compute-0 ceph-mon[74360]: pgmap v3882: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:51:37 compute-0 sudo[409744]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:51:37 compute-0 sudo[409744]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:51:37 compute-0 sudo[409744]: pam_unix(sudo:session): session closed for user root
Jan 20 15:51:37 compute-0 sudo[409769]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:51:37 compute-0 sudo[409769]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:51:37 compute-0 sudo[409769]: pam_unix(sudo:session): session closed for user root
Jan 20 15:51:37 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3883: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:51:38 compute-0 nova_compute[250018]: 2026-01-20 15:51:38.053 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:51:38 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:51:38 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:51:38 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:51:38.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:51:38 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:51:38 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:51:38 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:51:38 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:51:38.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:51:39 compute-0 ceph-mon[74360]: pgmap v3883: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:51:39 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/701871106' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:51:39 compute-0 nova_compute[250018]: 2026-01-20 15:51:39.544 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:51:39 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3884: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:51:40 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/1389155706' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:51:40 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:51:40 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:51:40 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:51:40.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:51:40 compute-0 nova_compute[250018]: 2026-01-20 15:51:40.248 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:51:40 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:51:40 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:51:40 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:51:40.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:51:41 compute-0 ceph-mon[74360]: pgmap v3884: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:51:41 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3885: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:51:42 compute-0 nova_compute[250018]: 2026-01-20 15:51:42.045 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:51:42 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:51:42 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:51:42 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:51:42.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:51:42 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:51:42 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:51:42 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:51:42.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:51:43 compute-0 ceph-mon[74360]: pgmap v3885: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:51:43 compute-0 nova_compute[250018]: 2026-01-20 15:51:43.091 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:51:43 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:51:43 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3886: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:51:44 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:51:44 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:51:44 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:51:44.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:51:44 compute-0 nova_compute[250018]: 2026-01-20 15:51:44.546 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:51:44 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:51:44 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:51:44 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:51:44.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:51:45 compute-0 ceph-mon[74360]: pgmap v3886: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:51:45 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3887: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:51:46 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:51:46 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:51:46 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:51:46.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:51:46 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:51:46 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:51:46 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:51:46.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:51:47 compute-0 ceph-mon[74360]: pgmap v3887: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:51:47 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3888: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:51:48 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:51:48 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:51:48 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:51:48.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:51:48 compute-0 nova_compute[250018]: 2026-01-20 15:51:48.096 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:51:48 compute-0 ceph-mon[74360]: pgmap v3888: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:51:48 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:51:48 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:51:48 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:51:48 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:51:48.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:51:49 compute-0 nova_compute[250018]: 2026-01-20 15:51:49.549 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:51:49 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3889: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:51:50 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:51:50 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:51:50 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:51:50.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:51:50 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:51:50 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:51:50 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:51:50.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:51:50 compute-0 ceph-mon[74360]: pgmap v3889: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:51:51 compute-0 nova_compute[250018]: 2026-01-20 15:51:51.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:51:51 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3890: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:51:52 compute-0 nova_compute[250018]: 2026-01-20 15:51:52.045 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:51:52 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:51:52 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:51:52 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:51:52.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:51:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:51:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:51:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:51:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:51:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:51:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:51:52 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:51:52 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:51:52 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:51:52.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:51:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Optimize plan auto_2026-01-20_15:51:52
Jan 20 15:51:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 15:51:52 compute-0 ceph-mgr[74653]: [balancer INFO root] do_upmap
Jan 20 15:51:52 compute-0 ceph-mgr[74653]: [balancer INFO root] pools ['vms', 'default.rgw.meta', 'default.rgw.log', 'images', 'cephfs.cephfs.meta', 'default.rgw.control', '.rgw.root', 'cephfs.cephfs.data', '.mgr', 'volumes', 'backups']
Jan 20 15:51:52 compute-0 ceph-mgr[74653]: [balancer INFO root] prepared 0/10 changes
Jan 20 15:51:52 compute-0 ceph-mon[74360]: pgmap v3890: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:51:53 compute-0 nova_compute[250018]: 2026-01-20 15:51:53.100 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:51:53 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:51:53 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3891: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:51:54 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:51:54 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:51:54 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:51:54.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:51:54 compute-0 nova_compute[250018]: 2026-01-20 15:51:54.550 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:51:54 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:51:54 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:51:54 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:51:54.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:51:54 compute-0 ceph-mon[74360]: pgmap v3891: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:51:55 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3892: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:51:56 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:51:56 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:51:56 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:51:56.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:51:56 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:51:56 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:51:56 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:51:56.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:51:56 compute-0 ceph-mon[74360]: pgmap v3892: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:51:57 compute-0 sudo[409807]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:51:57 compute-0 sudo[409807]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:51:57 compute-0 sudo[409807]: pam_unix(sudo:session): session closed for user root
Jan 20 15:51:57 compute-0 sudo[409832]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:51:57 compute-0 sshd-session[409804]: Invalid user oracle from 134.122.57.138 port 45560
Jan 20 15:51:57 compute-0 sudo[409832]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:51:57 compute-0 sudo[409832]: pam_unix(sudo:session): session closed for user root
Jan 20 15:51:57 compute-0 sudo[409857]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:51:57 compute-0 sudo[409857]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:51:57 compute-0 sudo[409857]: pam_unix(sudo:session): session closed for user root
Jan 20 15:51:57 compute-0 sudo[409882]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 15:51:57 compute-0 sudo[409882]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:51:57 compute-0 sshd-session[409804]: Connection closed by invalid user oracle 134.122.57.138 port 45560 [preauth]
Jan 20 15:51:57 compute-0 sudo[409926]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:51:57 compute-0 sudo[409926]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:51:57 compute-0 sudo[409926]: pam_unix(sudo:session): session closed for user root
Jan 20 15:51:57 compute-0 sudo[409882]: pam_unix(sudo:session): session closed for user root
Jan 20 15:51:57 compute-0 sudo[409960]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:51:57 compute-0 sudo[409960]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:51:57 compute-0 sudo[409960]: pam_unix(sudo:session): session closed for user root
Jan 20 15:51:57 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3893: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:51:57 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Jan 20 15:51:57 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 20 15:51:57 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 15:51:57 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:51:57 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 20 15:51:57 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 15:51:57 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 20 15:51:57 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:51:57 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 9a1f3768-4852-44a0-9219-22cbee617d41 does not exist
Jan 20 15:51:57 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev a52012ee-08fe-4172-b457-631638ae0aac does not exist
Jan 20 15:51:57 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev bb3de0c4-471e-49ba-b295-bc3e202e5981 does not exist
Jan 20 15:51:57 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 20 15:51:57 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 15:51:57 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 20 15:51:57 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 15:51:57 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 15:51:57 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:51:57 compute-0 sudo[409988]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:51:57 compute-0 sudo[409988]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:51:57 compute-0 sudo[409988]: pam_unix(sudo:session): session closed for user root
Jan 20 15:51:58 compute-0 sudo[410013]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:51:58 compute-0 sudo[410013]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:51:58 compute-0 sudo[410013]: pam_unix(sudo:session): session closed for user root
Jan 20 15:51:58 compute-0 nova_compute[250018]: 2026-01-20 15:51:58.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:51:58 compute-0 nova_compute[250018]: 2026-01-20 15:51:58.051 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 15:51:58 compute-0 nova_compute[250018]: 2026-01-20 15:51:58.051 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 15:51:58 compute-0 ceph-mgr[74653]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 15:51:58 compute-0 nova_compute[250018]: 2026-01-20 15:51:58.065 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 20 15:51:58 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 15:51:58 compute-0 ceph-mgr[74653]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 15:51:58 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 15:51:58 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 15:51:58 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 15:51:58 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 15:51:58 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 15:51:58 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 15:51:58 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 15:51:58 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:51:58 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:51:58 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:51:58.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:51:58 compute-0 nova_compute[250018]: 2026-01-20 15:51:58.102 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:51:58 compute-0 sudo[410038]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:51:58 compute-0 sudo[410038]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:51:58 compute-0 sudo[410038]: pam_unix(sudo:session): session closed for user root
Jan 20 15:51:58 compute-0 sudo[410063]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 15:51:58 compute-0 sudo[410063]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:51:58 compute-0 podman[410130]: 2026-01-20 15:51:58.556779692 +0000 UTC m=+0.057255202 container create 92e15744a5936dfde08188662fce192e6a5933a0070cb813eea81268e8149ba1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_easley, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 15:51:58 compute-0 systemd[1]: Started libpod-conmon-92e15744a5936dfde08188662fce192e6a5933a0070cb813eea81268e8149ba1.scope.
Jan 20 15:51:58 compute-0 podman[410130]: 2026-01-20 15:51:58.534837907 +0000 UTC m=+0.035313527 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:51:58 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:51:58 compute-0 podman[410130]: 2026-01-20 15:51:58.658053585 +0000 UTC m=+0.158529115 container init 92e15744a5936dfde08188662fce192e6a5933a0070cb813eea81268e8149ba1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_easley, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 15:51:58 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:51:58 compute-0 podman[410130]: 2026-01-20 15:51:58.672065714 +0000 UTC m=+0.172541224 container start 92e15744a5936dfde08188662fce192e6a5933a0070cb813eea81268e8149ba1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_easley, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 15:51:58 compute-0 podman[410130]: 2026-01-20 15:51:58.676278848 +0000 UTC m=+0.176754358 container attach 92e15744a5936dfde08188662fce192e6a5933a0070cb813eea81268e8149ba1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_easley, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 20 15:51:58 compute-0 compassionate_easley[410146]: 167 167
Jan 20 15:51:58 compute-0 systemd[1]: libpod-92e15744a5936dfde08188662fce192e6a5933a0070cb813eea81268e8149ba1.scope: Deactivated successfully.
Jan 20 15:51:58 compute-0 conmon[410146]: conmon 92e15744a5936dfde081 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-92e15744a5936dfde08188662fce192e6a5933a0070cb813eea81268e8149ba1.scope/container/memory.events
Jan 20 15:51:58 compute-0 podman[410130]: 2026-01-20 15:51:58.679955298 +0000 UTC m=+0.180430808 container died 92e15744a5936dfde08188662fce192e6a5933a0070cb813eea81268e8149ba1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_easley, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 20 15:51:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-21c5291b24414f429829a4fc68184e13fd0d294f86b5646033c5c833971497a0-merged.mount: Deactivated successfully.
Jan 20 15:51:58 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:51:58 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:51:58 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:51:58.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:51:58 compute-0 podman[410130]: 2026-01-20 15:51:58.726137729 +0000 UTC m=+0.226613249 container remove 92e15744a5936dfde08188662fce192e6a5933a0070cb813eea81268e8149ba1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_easley, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 15:51:58 compute-0 systemd[1]: libpod-conmon-92e15744a5936dfde08188662fce192e6a5933a0070cb813eea81268e8149ba1.scope: Deactivated successfully.
Jan 20 15:51:58 compute-0 ceph-mon[74360]: pgmap v3893: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:51:58 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 20 15:51:58 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:51:58 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 15:51:58 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:51:58 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 15:51:58 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 15:51:58 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:51:58 compute-0 podman[410169]: 2026-01-20 15:51:58.914246994 +0000 UTC m=+0.055290689 container create 4f89d15fbc5106f55270e5f76f20bcc5bc1ab87766e8158e7a8c322efa609bac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_lichterman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 20 15:51:58 compute-0 systemd[1]: Started libpod-conmon-4f89d15fbc5106f55270e5f76f20bcc5bc1ab87766e8158e7a8c322efa609bac.scope.
Jan 20 15:51:58 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:51:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0abc6732e6046ab7db676f5a9fda4e4a48b8a24750ace22a975998bc6cc39f73/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 15:51:58 compute-0 podman[410169]: 2026-01-20 15:51:58.893327327 +0000 UTC m=+0.034371062 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:51:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0abc6732e6046ab7db676f5a9fda4e4a48b8a24750ace22a975998bc6cc39f73/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 15:51:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0abc6732e6046ab7db676f5a9fda4e4a48b8a24750ace22a975998bc6cc39f73/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 15:51:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0abc6732e6046ab7db676f5a9fda4e4a48b8a24750ace22a975998bc6cc39f73/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 15:51:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0abc6732e6046ab7db676f5a9fda4e4a48b8a24750ace22a975998bc6cc39f73/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 15:51:59 compute-0 podman[410169]: 2026-01-20 15:51:59.004058037 +0000 UTC m=+0.145101752 container init 4f89d15fbc5106f55270e5f76f20bcc5bc1ab87766e8158e7a8c322efa609bac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_lichterman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 20 15:51:59 compute-0 podman[410169]: 2026-01-20 15:51:59.015190138 +0000 UTC m=+0.156233833 container start 4f89d15fbc5106f55270e5f76f20bcc5bc1ab87766e8158e7a8c322efa609bac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_lichterman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True)
Jan 20 15:51:59 compute-0 podman[410169]: 2026-01-20 15:51:59.019042193 +0000 UTC m=+0.160085928 container attach 4f89d15fbc5106f55270e5f76f20bcc5bc1ab87766e8158e7a8c322efa609bac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_lichterman, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:51:59 compute-0 nova_compute[250018]: 2026-01-20 15:51:59.551 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:51:59 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3894: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:51:59 compute-0 hungry_lichterman[410186]: --> passed data devices: 0 physical, 1 LVM
Jan 20 15:51:59 compute-0 hungry_lichterman[410186]: --> relative data size: 1.0
Jan 20 15:51:59 compute-0 hungry_lichterman[410186]: --> All data devices are unavailable
Jan 20 15:51:59 compute-0 systemd[1]: libpod-4f89d15fbc5106f55270e5f76f20bcc5bc1ab87766e8158e7a8c322efa609bac.scope: Deactivated successfully.
Jan 20 15:51:59 compute-0 podman[410169]: 2026-01-20 15:51:59.821163419 +0000 UTC m=+0.962207104 container died 4f89d15fbc5106f55270e5f76f20bcc5bc1ab87766e8158e7a8c322efa609bac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_lichterman, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 15:51:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-0abc6732e6046ab7db676f5a9fda4e4a48b8a24750ace22a975998bc6cc39f73-merged.mount: Deactivated successfully.
Jan 20 15:51:59 compute-0 podman[410169]: 2026-01-20 15:51:59.870256759 +0000 UTC m=+1.011300434 container remove 4f89d15fbc5106f55270e5f76f20bcc5bc1ab87766e8158e7a8c322efa609bac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_lichterman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 20 15:51:59 compute-0 systemd[1]: libpod-conmon-4f89d15fbc5106f55270e5f76f20bcc5bc1ab87766e8158e7a8c322efa609bac.scope: Deactivated successfully.
Jan 20 15:51:59 compute-0 sudo[410063]: pam_unix(sudo:session): session closed for user root
Jan 20 15:51:59 compute-0 sudo[410214]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:51:59 compute-0 sudo[410214]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:51:59 compute-0 sudo[410214]: pam_unix(sudo:session): session closed for user root
Jan 20 15:52:00 compute-0 sudo[410239]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:52:00 compute-0 sudo[410239]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:52:00 compute-0 sudo[410239]: pam_unix(sudo:session): session closed for user root
Jan 20 15:52:00 compute-0 sudo[410264]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:52:00 compute-0 sudo[410264]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:52:00 compute-0 sudo[410264]: pam_unix(sudo:session): session closed for user root
Jan 20 15:52:00 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:52:00 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:52:00 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:52:00.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:52:00 compute-0 sudo[410289]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- lvm list --format json
Jan 20 15:52:00 compute-0 sudo[410289]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:52:00 compute-0 podman[410357]: 2026-01-20 15:52:00.494104045 +0000 UTC m=+0.040009694 container create eb27d9d8626e88a488a7569d958155fb19b29cd89259520e002f20b793735f50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_murdock, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 15:52:00 compute-0 systemd[1]: Started libpod-conmon-eb27d9d8626e88a488a7569d958155fb19b29cd89259520e002f20b793735f50.scope.
Jan 20 15:52:00 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:52:00 compute-0 podman[410357]: 2026-01-20 15:52:00.474196666 +0000 UTC m=+0.020102335 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:52:00 compute-0 podman[410357]: 2026-01-20 15:52:00.581824512 +0000 UTC m=+0.127730181 container init eb27d9d8626e88a488a7569d958155fb19b29cd89259520e002f20b793735f50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_murdock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 20 15:52:00 compute-0 podman[410357]: 2026-01-20 15:52:00.588199514 +0000 UTC m=+0.134105163 container start eb27d9d8626e88a488a7569d958155fb19b29cd89259520e002f20b793735f50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_murdock, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 20 15:52:00 compute-0 podman[410357]: 2026-01-20 15:52:00.591555405 +0000 UTC m=+0.137461054 container attach eb27d9d8626e88a488a7569d958155fb19b29cd89259520e002f20b793735f50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_murdock, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 20 15:52:00 compute-0 clever_murdock[410374]: 167 167
Jan 20 15:52:00 compute-0 systemd[1]: libpod-eb27d9d8626e88a488a7569d958155fb19b29cd89259520e002f20b793735f50.scope: Deactivated successfully.
Jan 20 15:52:00 compute-0 podman[410357]: 2026-01-20 15:52:00.593728834 +0000 UTC m=+0.139634473 container died eb27d9d8626e88a488a7569d958155fb19b29cd89259520e002f20b793735f50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_murdock, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 15:52:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-87c5bea412b3b07dded5d6c58205ccf6268f79c10b009a29155b21e0e09dbd77-merged.mount: Deactivated successfully.
Jan 20 15:52:00 compute-0 podman[410357]: 2026-01-20 15:52:00.626259506 +0000 UTC m=+0.172165155 container remove eb27d9d8626e88a488a7569d958155fb19b29cd89259520e002f20b793735f50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_murdock, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 20 15:52:00 compute-0 systemd[1]: libpod-conmon-eb27d9d8626e88a488a7569d958155fb19b29cd89259520e002f20b793735f50.scope: Deactivated successfully.
Jan 20 15:52:00 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:52:00 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:52:00 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:52:00.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:52:00 compute-0 podman[410398]: 2026-01-20 15:52:00.795698295 +0000 UTC m=+0.038261028 container create 3c363617996b348aa7fbc712b264a3bbf81c057f45a75080a85ca61c87b06784 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_raman, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 15:52:00 compute-0 systemd[1]: Started libpod-conmon-3c363617996b348aa7fbc712b264a3bbf81c057f45a75080a85ca61c87b06784.scope.
Jan 20 15:52:00 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:52:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31f1dec33f1de15c39b953dd354413cb998aeeb70f432d3c3cd34f546fe3de2e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 15:52:00 compute-0 podman[410398]: 2026-01-20 15:52:00.77926516 +0000 UTC m=+0.021827913 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:52:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31f1dec33f1de15c39b953dd354413cb998aeeb70f432d3c3cd34f546fe3de2e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 15:52:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31f1dec33f1de15c39b953dd354413cb998aeeb70f432d3c3cd34f546fe3de2e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 15:52:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31f1dec33f1de15c39b953dd354413cb998aeeb70f432d3c3cd34f546fe3de2e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 15:52:00 compute-0 podman[410398]: 2026-01-20 15:52:00.891225352 +0000 UTC m=+0.133788115 container init 3c363617996b348aa7fbc712b264a3bbf81c057f45a75080a85ca61c87b06784 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_raman, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 15:52:00 compute-0 ceph-mon[74360]: pgmap v3894: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:52:00 compute-0 podman[410398]: 2026-01-20 15:52:00.902238491 +0000 UTC m=+0.144801234 container start 3c363617996b348aa7fbc712b264a3bbf81c057f45a75080a85ca61c87b06784 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_raman, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 15:52:00 compute-0 podman[410398]: 2026-01-20 15:52:00.90553807 +0000 UTC m=+0.148100813 container attach 3c363617996b348aa7fbc712b264a3bbf81c057f45a75080a85ca61c87b06784 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_raman, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 20 15:52:01 compute-0 infallible_raman[410415]: {
Jan 20 15:52:01 compute-0 infallible_raman[410415]:     "0": [
Jan 20 15:52:01 compute-0 infallible_raman[410415]:         {
Jan 20 15:52:01 compute-0 infallible_raman[410415]:             "devices": [
Jan 20 15:52:01 compute-0 infallible_raman[410415]:                 "/dev/loop3"
Jan 20 15:52:01 compute-0 infallible_raman[410415]:             ],
Jan 20 15:52:01 compute-0 infallible_raman[410415]:             "lv_name": "ceph_lv0",
Jan 20 15:52:01 compute-0 infallible_raman[410415]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 15:52:01 compute-0 infallible_raman[410415]:             "lv_size": "7511998464",
Jan 20 15:52:01 compute-0 infallible_raman[410415]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e399cf45-e6b6-5393-99f1-75c601d3f188,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afc3740a-ccee-46f8-b343-22648cc89376,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 20 15:52:01 compute-0 infallible_raman[410415]:             "lv_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 15:52:01 compute-0 infallible_raman[410415]:             "name": "ceph_lv0",
Jan 20 15:52:01 compute-0 infallible_raman[410415]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 15:52:01 compute-0 infallible_raman[410415]:             "tags": {
Jan 20 15:52:01 compute-0 infallible_raman[410415]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 15:52:01 compute-0 infallible_raman[410415]:                 "ceph.block_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 15:52:01 compute-0 infallible_raman[410415]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 15:52:01 compute-0 infallible_raman[410415]:                 "ceph.cluster_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 15:52:01 compute-0 infallible_raman[410415]:                 "ceph.cluster_name": "ceph",
Jan 20 15:52:01 compute-0 infallible_raman[410415]:                 "ceph.crush_device_class": "",
Jan 20 15:52:01 compute-0 infallible_raman[410415]:                 "ceph.encrypted": "0",
Jan 20 15:52:01 compute-0 infallible_raman[410415]:                 "ceph.osd_fsid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 15:52:01 compute-0 infallible_raman[410415]:                 "ceph.osd_id": "0",
Jan 20 15:52:01 compute-0 infallible_raman[410415]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 15:52:01 compute-0 infallible_raman[410415]:                 "ceph.type": "block",
Jan 20 15:52:01 compute-0 infallible_raman[410415]:                 "ceph.vdo": "0"
Jan 20 15:52:01 compute-0 infallible_raman[410415]:             },
Jan 20 15:52:01 compute-0 infallible_raman[410415]:             "type": "block",
Jan 20 15:52:01 compute-0 infallible_raman[410415]:             "vg_name": "ceph_vg0"
Jan 20 15:52:01 compute-0 infallible_raman[410415]:         }
Jan 20 15:52:01 compute-0 infallible_raman[410415]:     ]
Jan 20 15:52:01 compute-0 infallible_raman[410415]: }
Jan 20 15:52:01 compute-0 systemd[1]: libpod-3c363617996b348aa7fbc712b264a3bbf81c057f45a75080a85ca61c87b06784.scope: Deactivated successfully.
Jan 20 15:52:01 compute-0 podman[410398]: 2026-01-20 15:52:01.695864537 +0000 UTC m=+0.938427310 container died 3c363617996b348aa7fbc712b264a3bbf81c057f45a75080a85ca61c87b06784 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_raman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:52:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-31f1dec33f1de15c39b953dd354413cb998aeeb70f432d3c3cd34f546fe3de2e-merged.mount: Deactivated successfully.
Jan 20 15:52:01 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3895: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:52:01 compute-0 podman[410398]: 2026-01-20 15:52:01.745794839 +0000 UTC m=+0.988357572 container remove 3c363617996b348aa7fbc712b264a3bbf81c057f45a75080a85ca61c87b06784 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_raman, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 20 15:52:01 compute-0 systemd[1]: libpod-conmon-3c363617996b348aa7fbc712b264a3bbf81c057f45a75080a85ca61c87b06784.scope: Deactivated successfully.
Jan 20 15:52:01 compute-0 sudo[410289]: pam_unix(sudo:session): session closed for user root
Jan 20 15:52:01 compute-0 sudo[410436]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:52:01 compute-0 sudo[410436]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:52:01 compute-0 sudo[410436]: pam_unix(sudo:session): session closed for user root
Jan 20 15:52:01 compute-0 sudo[410461]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:52:01 compute-0 sudo[410461]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:52:01 compute-0 sudo[410461]: pam_unix(sudo:session): session closed for user root
Jan 20 15:52:02 compute-0 sudo[410486]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:52:02 compute-0 sudo[410486]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:52:02 compute-0 sudo[410486]: pam_unix(sudo:session): session closed for user root
Jan 20 15:52:02 compute-0 sudo[410511]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- raw list --format json
Jan 20 15:52:02 compute-0 sudo[410511]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:52:02 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:52:02 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:52:02 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:52:02.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:52:02 compute-0 podman[410577]: 2026-01-20 15:52:02.402552107 +0000 UTC m=+0.042493161 container create 2edff645d104f1f36dc821c3d504fb220f87a733c81f138cdb2ca487f70b4fae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_bassi, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 15:52:02 compute-0 systemd[1]: Started libpod-conmon-2edff645d104f1f36dc821c3d504fb220f87a733c81f138cdb2ca487f70b4fae.scope.
Jan 20 15:52:02 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:52:02 compute-0 podman[410577]: 2026-01-20 15:52:02.387507741 +0000 UTC m=+0.027448815 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:52:02 compute-0 podman[410577]: 2026-01-20 15:52:02.491458336 +0000 UTC m=+0.131399420 container init 2edff645d104f1f36dc821c3d504fb220f87a733c81f138cdb2ca487f70b4fae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_bassi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:52:02 compute-0 podman[410577]: 2026-01-20 15:52:02.496847962 +0000 UTC m=+0.136789016 container start 2edff645d104f1f36dc821c3d504fb220f87a733c81f138cdb2ca487f70b4fae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_bassi, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 15:52:02 compute-0 podman[410577]: 2026-01-20 15:52:02.500081379 +0000 UTC m=+0.140022433 container attach 2edff645d104f1f36dc821c3d504fb220f87a733c81f138cdb2ca487f70b4fae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_bassi, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 20 15:52:02 compute-0 youthful_bassi[410595]: 167 167
Jan 20 15:52:02 compute-0 systemd[1]: libpod-2edff645d104f1f36dc821c3d504fb220f87a733c81f138cdb2ca487f70b4fae.scope: Deactivated successfully.
Jan 20 15:52:02 compute-0 podman[410577]: 2026-01-20 15:52:02.501366444 +0000 UTC m=+0.141307498 container died 2edff645d104f1f36dc821c3d504fb220f87a733c81f138cdb2ca487f70b4fae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_bassi, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 20 15:52:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-10539f06b0b62b7818e8f4b2dde3a970ae31d2ce68748c1eb34b5be38f72a533-merged.mount: Deactivated successfully.
Jan 20 15:52:02 compute-0 podman[410577]: 2026-01-20 15:52:02.541948103 +0000 UTC m=+0.181889157 container remove 2edff645d104f1f36dc821c3d504fb220f87a733c81f138cdb2ca487f70b4fae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_bassi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 20 15:52:02 compute-0 systemd[1]: libpod-conmon-2edff645d104f1f36dc821c3d504fb220f87a733c81f138cdb2ca487f70b4fae.scope: Deactivated successfully.
Jan 20 15:52:02 compute-0 podman[410601]: 2026-01-20 15:52:02.597295672 +0000 UTC m=+0.062186375 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Jan 20 15:52:02 compute-0 podman[410608]: 2026-01-20 15:52:02.629259948 +0000 UTC m=+0.091455868 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 20 15:52:02 compute-0 podman[410660]: 2026-01-20 15:52:02.700645062 +0000 UTC m=+0.046745587 container create 94522bdf900d2d34e09497c1a48f9d788ad82fcebcaaa0a010c9c0451fd84945 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_shirley, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 15:52:02 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:52:02 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:52:02 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:52:02.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:52:02 compute-0 systemd[1]: Started libpod-conmon-94522bdf900d2d34e09497c1a48f9d788ad82fcebcaaa0a010c9c0451fd84945.scope.
Jan 20 15:52:02 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:52:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b69e4765f845a8fb10d755ec461a7a5d7671906b39f38317e3318eb0d1a8cb4a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 15:52:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b69e4765f845a8fb10d755ec461a7a5d7671906b39f38317e3318eb0d1a8cb4a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 15:52:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b69e4765f845a8fb10d755ec461a7a5d7671906b39f38317e3318eb0d1a8cb4a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 15:52:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b69e4765f845a8fb10d755ec461a7a5d7671906b39f38317e3318eb0d1a8cb4a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 15:52:02 compute-0 podman[410660]: 2026-01-20 15:52:02.679051737 +0000 UTC m=+0.025152302 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:52:02 compute-0 podman[410660]: 2026-01-20 15:52:02.776724993 +0000 UTC m=+0.122825528 container init 94522bdf900d2d34e09497c1a48f9d788ad82fcebcaaa0a010c9c0451fd84945 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_shirley, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 15:52:02 compute-0 podman[410660]: 2026-01-20 15:52:02.782598092 +0000 UTC m=+0.128698607 container start 94522bdf900d2d34e09497c1a48f9d788ad82fcebcaaa0a010c9c0451fd84945 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_shirley, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 20 15:52:02 compute-0 podman[410660]: 2026-01-20 15:52:02.785572062 +0000 UTC m=+0.131672577 container attach 94522bdf900d2d34e09497c1a48f9d788ad82fcebcaaa0a010c9c0451fd84945 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_shirley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2)
Jan 20 15:52:03 compute-0 ceph-mon[74360]: pgmap v3895: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:52:03 compute-0 nova_compute[250018]: 2026-01-20 15:52:03.105 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:52:03 compute-0 wizardly_shirley[410677]: {
Jan 20 15:52:03 compute-0 wizardly_shirley[410677]:     "afc3740a-ccee-46f8-b343-22648cc89376": {
Jan 20 15:52:03 compute-0 wizardly_shirley[410677]:         "ceph_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 15:52:03 compute-0 wizardly_shirley[410677]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 20 15:52:03 compute-0 wizardly_shirley[410677]:         "osd_id": 0,
Jan 20 15:52:03 compute-0 wizardly_shirley[410677]:         "osd_uuid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 15:52:03 compute-0 wizardly_shirley[410677]:         "type": "bluestore"
Jan 20 15:52:03 compute-0 wizardly_shirley[410677]:     }
Jan 20 15:52:03 compute-0 wizardly_shirley[410677]: }
Jan 20 15:52:03 compute-0 systemd[1]: libpod-94522bdf900d2d34e09497c1a48f9d788ad82fcebcaaa0a010c9c0451fd84945.scope: Deactivated successfully.
Jan 20 15:52:03 compute-0 podman[410660]: 2026-01-20 15:52:03.614972947 +0000 UTC m=+0.961073472 container died 94522bdf900d2d34e09497c1a48f9d788ad82fcebcaaa0a010c9c0451fd84945 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_shirley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 15:52:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-b69e4765f845a8fb10d755ec461a7a5d7671906b39f38317e3318eb0d1a8cb4a-merged.mount: Deactivated successfully.
Jan 20 15:52:03 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:52:03 compute-0 podman[410660]: 2026-01-20 15:52:03.673168274 +0000 UTC m=+1.019268799 container remove 94522bdf900d2d34e09497c1a48f9d788ad82fcebcaaa0a010c9c0451fd84945 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_shirley, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 20 15:52:03 compute-0 systemd[1]: libpod-conmon-94522bdf900d2d34e09497c1a48f9d788ad82fcebcaaa0a010c9c0451fd84945.scope: Deactivated successfully.
Jan 20 15:52:03 compute-0 sudo[410511]: pam_unix(sudo:session): session closed for user root
Jan 20 15:52:03 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 15:52:03 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:52:03 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 15:52:03 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:52:03 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3896: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:52:03 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev d6014338-9590-4127-9d24-7aea3a6a6c4c does not exist
Jan 20 15:52:03 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev f711bca0-5d8f-4afb-a2a7-2548864d7ca9 does not exist
Jan 20 15:52:03 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 4f56f011-66c0-46bc-9d4c-9075fddbf736 does not exist
Jan 20 15:52:03 compute-0 sudo[410713]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:52:03 compute-0 sudo[410713]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:52:03 compute-0 sudo[410713]: pam_unix(sudo:session): session closed for user root
Jan 20 15:52:03 compute-0 sudo[410738]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 15:52:03 compute-0 sudo[410738]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:52:03 compute-0 sudo[410738]: pam_unix(sudo:session): session closed for user root
Jan 20 15:52:04 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:52:04 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:52:04 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:52:04.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:52:04 compute-0 nova_compute[250018]: 2026-01-20 15:52:04.554 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:52:04 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:52:04 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:52:04 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:52:04.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:52:04 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:52:04 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:52:04 compute-0 ceph-mon[74360]: pgmap v3896: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:52:05 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3897: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:52:06 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:52:06 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:52:06 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:52:06.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:52:06 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:52:06 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:52:06 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:52:06.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:52:06 compute-0 ceph-mon[74360]: pgmap v3897: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:52:07 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3898: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:52:08 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:52:08 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:52:08 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:52:08.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:52:08 compute-0 nova_compute[250018]: 2026-01-20 15:52:08.109 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:52:08 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:52:08 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:52:08 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:52:08 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:52:08.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:52:08 compute-0 ceph-mon[74360]: pgmap v3898: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:52:09 compute-0 nova_compute[250018]: 2026-01-20 15:52:09.556 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:52:09 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3899: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:52:10 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:52:10 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:52:10 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:52:10.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:52:10 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:52:10 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:52:10 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:52:10.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:52:10 compute-0 ceph-mon[74360]: pgmap v3899: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:52:11 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3900: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:52:12 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:52:12 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:52:12 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:52:12.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:52:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 15:52:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:52:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 20 15:52:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:52:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 20 15:52:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:52:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021614147124511445 of space, bias 1.0, pg target 0.6484244137353433 quantized to 32 (current 32)
Jan 20 15:52:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:52:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 20 15:52:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:52:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 20 15:52:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:52:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 20 15:52:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:52:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 15:52:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:52:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 20 15:52:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:52:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 20 15:52:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:52:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 15:52:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:52:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 20 15:52:12 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:52:12 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:52:12 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:52:12.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:52:12 compute-0 ceph-mon[74360]: pgmap v3900: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:52:13 compute-0 nova_compute[250018]: 2026-01-20 15:52:13.114 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:52:13 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:52:13 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 20 15:52:13 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1897007487' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:52:13 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 20 15:52:13 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1897007487' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:52:13 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3901: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:52:14 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/1897007487' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:52:14 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/1897007487' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:52:14 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:52:14 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:52:14 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:52:14.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:52:14 compute-0 nova_compute[250018]: 2026-01-20 15:52:14.558 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:52:14 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:52:14 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:52:14 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:52:14.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:52:15 compute-0 ceph-mon[74360]: pgmap v3901: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:52:15 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3902: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:52:16 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:52:16 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:52:16 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:52:16.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:52:16 compute-0 ceph-mon[74360]: pgmap v3902: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:52:16 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:52:16 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:52:16 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:52:16.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:52:17 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3903: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:52:17 compute-0 sudo[410770]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:52:17 compute-0 sudo[410770]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:52:17 compute-0 sudo[410770]: pam_unix(sudo:session): session closed for user root
Jan 20 15:52:17 compute-0 sudo[410795]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:52:17 compute-0 sudo[410795]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:52:17 compute-0 sudo[410795]: pam_unix(sudo:session): session closed for user root
Jan 20 15:52:18 compute-0 nova_compute[250018]: 2026-01-20 15:52:18.117 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:52:18 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:52:18 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:52:18 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:52:18.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:52:18 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:52:18 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:52:18 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:52:18 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:52:18.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:52:18 compute-0 ceph-mon[74360]: pgmap v3903: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:52:19 compute-0 nova_compute[250018]: 2026-01-20 15:52:19.560 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:52:19 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3904: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:52:20 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:52:20 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:52:20 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:52:20.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:52:20 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:52:20 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:52:20 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:52:20.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:52:20 compute-0 ceph-mon[74360]: pgmap v3904: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:52:21 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3905: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:52:22 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:52:22 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:52:22 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:52:22.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:52:22 compute-0 ceph-mon[74360]: pgmap v3905: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:52:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:52:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:52:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:52:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:52:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:52:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:52:22 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:52:22 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:52:22 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:52:22.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:52:23 compute-0 nova_compute[250018]: 2026-01-20 15:52:23.120 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:52:23 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:52:23 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3906: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:52:24 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:52:24 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:52:24 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:52:24.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:52:24 compute-0 nova_compute[250018]: 2026-01-20 15:52:24.604 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:52:24 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:52:24 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:52:24 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:52:24.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:52:25 compute-0 ceph-mon[74360]: pgmap v3906: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:52:25 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3907: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:52:26 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:52:26 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:52:26 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:52:26.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:52:26 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:52:26 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:52:26 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:52:26.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:52:27 compute-0 ceph-mon[74360]: pgmap v3907: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:52:27 compute-0 nova_compute[250018]: 2026-01-20 15:52:27.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:52:27 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3908: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:52:28 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:52:28 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:52:28 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:52:28.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:52:28 compute-0 nova_compute[250018]: 2026-01-20 15:52:28.158 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:52:28 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:52:28 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:52:28 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:52:28 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:52:28.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:52:29 compute-0 ceph-mon[74360]: pgmap v3908: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:52:29 compute-0 nova_compute[250018]: 2026-01-20 15:52:29.607 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:52:29 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3909: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:52:30 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:52:30 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:52:30 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:52:30.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:52:30 compute-0 ceph-mon[74360]: pgmap v3909: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:52:30 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:52:30 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:52:30 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:52:30.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:52:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:52:30.828 160071 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:52:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:52:30.829 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:52:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:52:30.829 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:52:31 compute-0 nova_compute[250018]: 2026-01-20 15:52:31.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:52:31 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3910: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:52:32 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:52:32 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:52:32 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:52:32.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:52:32 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:52:32 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:52:32 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:52:32.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:52:32 compute-0 ceph-mon[74360]: pgmap v3910: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:52:32 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/2680723541' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:52:33 compute-0 nova_compute[250018]: 2026-01-20 15:52:33.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:52:33 compute-0 nova_compute[250018]: 2026-01-20 15:52:33.160 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:52:33 compute-0 podman[410829]: 2026-01-20 15:52:33.447153288 +0000 UTC m=+0.043891201 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Jan 20 15:52:33 compute-0 podman[410828]: 2026-01-20 15:52:33.479240447 +0000 UTC m=+0.075971810 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_controller, managed_by=edpm_ansible)
Jan 20 15:52:33 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:52:33 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3911: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:52:33 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/3402365577' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:52:34 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:52:34 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:52:34 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:52:34.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:52:34 compute-0 nova_compute[250018]: 2026-01-20 15:52:34.607 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:52:34 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:52:34 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:52:34 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:52:34.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:52:34 compute-0 ceph-mon[74360]: pgmap v3911: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:52:35 compute-0 nova_compute[250018]: 2026-01-20 15:52:35.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:52:35 compute-0 nova_compute[250018]: 2026-01-20 15:52:35.050 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 15:52:35 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3912: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:52:36 compute-0 nova_compute[250018]: 2026-01-20 15:52:36.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:52:36 compute-0 nova_compute[250018]: 2026-01-20 15:52:36.075 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:52:36 compute-0 nova_compute[250018]: 2026-01-20 15:52:36.075 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:52:36 compute-0 nova_compute[250018]: 2026-01-20 15:52:36.076 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:52:36 compute-0 nova_compute[250018]: 2026-01-20 15:52:36.076 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 15:52:36 compute-0 nova_compute[250018]: 2026-01-20 15:52:36.077 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:52:36 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:52:36 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:52:36 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:52:36.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:52:36 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:52:36 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4065801336' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:52:36 compute-0 nova_compute[250018]: 2026-01-20 15:52:36.535 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:52:36 compute-0 nova_compute[250018]: 2026-01-20 15:52:36.685 250022 WARNING nova.virt.libvirt.driver [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 15:52:36 compute-0 nova_compute[250018]: 2026-01-20 15:52:36.686 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4161MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 15:52:36 compute-0 nova_compute[250018]: 2026-01-20 15:52:36.687 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:52:36 compute-0 nova_compute[250018]: 2026-01-20 15:52:36.687 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:52:36 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:52:36 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:52:36 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:52:36.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:52:36 compute-0 nova_compute[250018]: 2026-01-20 15:52:36.775 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 15:52:36 compute-0 nova_compute[250018]: 2026-01-20 15:52:36.775 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 15:52:36 compute-0 nova_compute[250018]: 2026-01-20 15:52:36.794 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 15:52:36 compute-0 ceph-mon[74360]: pgmap v3912: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:52:36 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/4065801336' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:52:37 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 20 15:52:37 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/579735090' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:52:37 compute-0 nova_compute[250018]: 2026-01-20 15:52:37.275 250022 DEBUG oslo_concurrency.processutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 15:52:37 compute-0 nova_compute[250018]: 2026-01-20 15:52:37.281 250022 DEBUG nova.compute.provider_tree [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed in ProviderTree for provider: 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 15:52:37 compute-0 nova_compute[250018]: 2026-01-20 15:52:37.308 250022 DEBUG nova.scheduler.client.report [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Inventory has not changed for provider 068db7fd-4bd6-45a9-8bd6-a22cfe7596ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 15:52:37 compute-0 nova_compute[250018]: 2026-01-20 15:52:37.310 250022 DEBUG nova.compute.resource_tracker [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 15:52:37 compute-0 nova_compute[250018]: 2026-01-20 15:52:37.310 250022 DEBUG oslo_concurrency.lockutils [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.623s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:52:37 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3913: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:52:37 compute-0 sshd-session[410875]: Invalid user oracle from 134.122.57.138 port 33066
Jan 20 15:52:37 compute-0 sudo[410922]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:52:37 compute-0 sudo[410922]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:52:37 compute-0 sudo[410922]: pam_unix(sudo:session): session closed for user root
Jan 20 15:52:37 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/579735090' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:52:37 compute-0 sudo[410947]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:52:37 compute-0 sudo[410947]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:52:38 compute-0 sudo[410947]: pam_unix(sudo:session): session closed for user root
Jan 20 15:52:38 compute-0 sshd-session[410875]: Connection closed by invalid user oracle 134.122.57.138 port 33066 [preauth]
Jan 20 15:52:38 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:52:38 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:52:38 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:52:38.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:52:38 compute-0 nova_compute[250018]: 2026-01-20 15:52:38.163 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:52:38 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:52:38 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:52:38 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:52:38 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:52:38.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:52:38 compute-0 ceph-mon[74360]: pgmap v3913: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:52:39 compute-0 nova_compute[250018]: 2026-01-20 15:52:39.610 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:52:39 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3914: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:52:40 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:52:40 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:52:40 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:52:40.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:52:40 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:52:40 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:52:40 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:52:40.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:52:41 compute-0 ceph-mon[74360]: pgmap v3914: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:52:41 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/1369305377' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:52:41 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3915: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:52:42 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:52:42 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:52:42 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:52:42.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:52:42 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/405040981' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 15:52:42 compute-0 nova_compute[250018]: 2026-01-20 15:52:42.312 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:52:42 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:52:42 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:52:42 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:52:42.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:52:43 compute-0 nova_compute[250018]: 2026-01-20 15:52:43.044 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:52:43 compute-0 nova_compute[250018]: 2026-01-20 15:52:43.167 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:52:43 compute-0 ceph-mon[74360]: pgmap v3915: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:52:43 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:52:43 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3916: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:52:44 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:52:44 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:52:44 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:52:44.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:52:44 compute-0 ceph-mon[74360]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #198. Immutable memtables: 0.
Jan 20 15:52:44 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:52:44.455319) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 15:52:44 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:856] [default] [JOB 123] Flushing memtable with next log file: 198
Jan 20 15:52:44 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768924364455485, "job": 123, "event": "flush_started", "num_memtables": 1, "num_entries": 2107, "num_deletes": 251, "total_data_size": 3891132, "memory_usage": 3951088, "flush_reason": "Manual Compaction"}
Jan 20 15:52:44 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:885] [default] [JOB 123] Level-0 flush table #199: started
Jan 20 15:52:44 compute-0 ceph-mon[74360]: pgmap v3916: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:52:44 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768924364486438, "cf_name": "default", "job": 123, "event": "table_file_creation", "file_number": 199, "file_size": 3812952, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 85163, "largest_seqno": 87268, "table_properties": {"data_size": 3803442, "index_size": 6003, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19248, "raw_average_key_size": 20, "raw_value_size": 3784588, "raw_average_value_size": 3992, "num_data_blocks": 262, "num_entries": 948, "num_filter_entries": 948, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768924145, "oldest_key_time": 1768924145, "file_creation_time": 1768924364, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1688fb6f-f397-4aac-a0e8-874ff91d97a7", "db_session_id": "5V3N2TVXYZBCXP55EZHK", "orig_file_number": 199, "seqno_to_time_mapping": "N/A"}}
Jan 20 15:52:44 compute-0 ceph-mon[74360]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 123] Flush lasted 31202 microseconds, and 15528 cpu microseconds.
Jan 20 15:52:44 compute-0 ceph-mon[74360]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 15:52:44 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:52:44.486531) [db/flush_job.cc:967] [default] [JOB 123] Level-0 flush table #199: 3812952 bytes OK
Jan 20 15:52:44 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:52:44.486559) [db/memtable_list.cc:519] [default] Level-0 commit table #199 started
Jan 20 15:52:44 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:52:44.489274) [db/memtable_list.cc:722] [default] Level-0 commit table #199: memtable #1 done
Jan 20 15:52:44 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:52:44.489303) EVENT_LOG_v1 {"time_micros": 1768924364489294, "job": 123, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 15:52:44 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:52:44.489329) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 15:52:44 compute-0 ceph-mon[74360]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 123] Try to delete WAL files size 3882684, prev total WAL file size 3882684, number of live WAL files 2.
Jan 20 15:52:44 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000195.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:52:44 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:52:44.491071) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730038323833' seq:72057594037927935, type:22 .. '7061786F730038353335' seq:0, type:0; will stop at (end)
Jan 20 15:52:44 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 124] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 15:52:44 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 123 Base level 0, inputs: [199(3723KB)], [197(11MB)]
Jan 20 15:52:44 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768924364491184, "job": 124, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [199], "files_L6": [197], "score": -1, "input_data_size": 16204129, "oldest_snapshot_seqno": -1}
Jan 20 15:52:44 compute-0 ceph-mon[74360]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 124] Generated table #200: 11376 keys, 14207518 bytes, temperature: kUnknown
Jan 20 15:52:44 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768924364560304, "cf_name": "default", "job": 124, "event": "table_file_creation", "file_number": 200, "file_size": 14207518, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14135177, "index_size": 42788, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 28485, "raw_key_size": 299836, "raw_average_key_size": 26, "raw_value_size": 13937276, "raw_average_value_size": 1225, "num_data_blocks": 1628, "num_entries": 11376, "num_filter_entries": 11376, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768917305, "oldest_key_time": 0, "file_creation_time": 1768924364, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1688fb6f-f397-4aac-a0e8-874ff91d97a7", "db_session_id": "5V3N2TVXYZBCXP55EZHK", "orig_file_number": 200, "seqno_to_time_mapping": "N/A"}}
Jan 20 15:52:44 compute-0 ceph-mon[74360]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 15:52:44 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:52:44.560587) [db/compaction/compaction_job.cc:1663] [default] [JOB 124] Compacted 1@0 + 1@6 files to L6 => 14207518 bytes
Jan 20 15:52:44 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:52:44.561691) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 234.1 rd, 205.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.6, 11.8 +0.0 blob) out(13.5 +0.0 blob), read-write-amplify(8.0) write-amplify(3.7) OK, records in: 11895, records dropped: 519 output_compression: NoCompression
Jan 20 15:52:44 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:52:44.561706) EVENT_LOG_v1 {"time_micros": 1768924364561698, "job": 124, "event": "compaction_finished", "compaction_time_micros": 69217, "compaction_time_cpu_micros": 33546, "output_level": 6, "num_output_files": 1, "total_output_size": 14207518, "num_input_records": 11895, "num_output_records": 11376, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 15:52:44 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000199.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:52:44 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768924364562434, "job": 124, "event": "table_file_deletion", "file_number": 199}
Jan 20 15:52:44 compute-0 ceph-mon[74360]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000197.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 15:52:44 compute-0 ceph-mon[74360]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768924364564645, "job": 124, "event": "table_file_deletion", "file_number": 197}
Jan 20 15:52:44 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:52:44.490874) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:52:44 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:52:44.564774) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:52:44 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:52:44.564783) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:52:44 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:52:44.564786) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:52:44 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:52:44.564789) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:52:44 compute-0 ceph-mon[74360]: rocksdb: (Original Log Time 2026/01/20-15:52:44.564791) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 15:52:44 compute-0 nova_compute[250018]: 2026-01-20 15:52:44.643 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:52:44 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:52:44 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:52:44 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:52:44.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:52:45 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3917: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:52:46 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:52:46 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:52:46 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:52:46.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:52:46 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:52:46 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:52:46 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:52:46.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:52:47 compute-0 ceph-mon[74360]: pgmap v3917: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:52:47 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3918: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:52:48 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:52:48 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:52:48 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:52:48.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:52:48 compute-0 nova_compute[250018]: 2026-01-20 15:52:48.171 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:52:48 compute-0 ceph-mon[74360]: pgmap v3918: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:52:48 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:52:48 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:52:48 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:52:48 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:52:48.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:52:49 compute-0 nova_compute[250018]: 2026-01-20 15:52:49.689 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:52:49 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3919: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:52:50 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:52:50 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:52:50 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:52:50.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:52:50 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:52:50 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:52:50 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:52:50.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:52:51 compute-0 ceph-mon[74360]: pgmap v3919: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:52:51 compute-0 nova_compute[250018]: 2026-01-20 15:52:51.050 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:52:51 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3920: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:52:52 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:52:52 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:52:52 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:52:52.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:52:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:52:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:52:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:52:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:52:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:52:52 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:52:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Optimize plan auto_2026-01-20_15:52:52
Jan 20 15:52:52 compute-0 ceph-mgr[74653]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 15:52:52 compute-0 ceph-mgr[74653]: [balancer INFO root] do_upmap
Jan 20 15:52:52 compute-0 ceph-mgr[74653]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.meta', '.mgr', 'cephfs.cephfs.data', 'volumes', 'default.rgw.log', 'default.rgw.control', 'vms', 'images', 'backups', '.rgw.root']
Jan 20 15:52:52 compute-0 ceph-mgr[74653]: [balancer INFO root] prepared 0/10 changes
Jan 20 15:52:52 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:52:52 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:52:52 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:52:52.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:52:53 compute-0 nova_compute[250018]: 2026-01-20 15:52:53.174 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:52:53 compute-0 ceph-mon[74360]: pgmap v3920: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:52:53 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:52:53 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3921: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:52:54 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:52:54 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:52:54 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:52:54.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:52:54 compute-0 nova_compute[250018]: 2026-01-20 15:52:54.692 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:52:54 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:52:54 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:52:54 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:52:54.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:52:54 compute-0 ceph-mon[74360]: pgmap v3921: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:52:55 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3922: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:52:56 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:52:56 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.008000211s ======
Jan 20 15:52:56 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:52:56.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.008000211s
Jan 20 15:52:56 compute-0 ceph-mon[74360]: pgmap v3922: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:52:56 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:52:56 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:52:56 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:52:56.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:52:57 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3923: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:52:58 compute-0 nova_compute[250018]: 2026-01-20 15:52:58.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:52:58 compute-0 nova_compute[250018]: 2026-01-20 15:52:58.052 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 15:52:58 compute-0 nova_compute[250018]: 2026-01-20 15:52:58.053 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 15:52:58 compute-0 ceph-mgr[74653]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 15:52:58 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 15:52:58 compute-0 ceph-mgr[74653]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 15:52:58 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 15:52:58 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 15:52:58 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 15:52:58 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 15:52:58 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 15:52:58 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 15:52:58 compute-0 ceph-mgr[74653]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 15:52:58 compute-0 sudo[410982]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:52:58 compute-0 sudo[410982]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:52:58 compute-0 sudo[410982]: pam_unix(sudo:session): session closed for user root
Jan 20 15:52:58 compute-0 sudo[411007]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:52:58 compute-0 sudo[411007]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:52:58 compute-0 sudo[411007]: pam_unix(sudo:session): session closed for user root
Jan 20 15:52:58 compute-0 nova_compute[250018]: 2026-01-20 15:52:58.176 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:52:58 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:52:58 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:52:58 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:52:58.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:52:58 compute-0 nova_compute[250018]: 2026-01-20 15:52:58.211 250022 DEBUG nova.compute.manager [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 20 15:52:58 compute-0 ceph-mon[74360]: pgmap v3923: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:52:58 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:52:58 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:52:58 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:52:58 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:52:58.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:52:59 compute-0 nova_compute[250018]: 2026-01-20 15:52:59.737 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:52:59 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3924: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:53:00 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:53:00 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:53:00 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:53:00.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:53:00 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:53:00 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:53:00 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:53:00.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:53:00 compute-0 ceph-mon[74360]: pgmap v3924: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:53:01 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3925: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:53:01 compute-0 sshd-session[411034]: Accepted publickey for zuul from 192.168.122.10 port 60012 ssh2: ECDSA SHA256:Yw0kyD5N4lqNgr1J3b5cYIIxKFrTRY8zW6kk+n6imz4
Jan 20 15:53:01 compute-0 systemd-logind[796]: New session 62 of user zuul.
Jan 20 15:53:01 compute-0 systemd[1]: Started Session 62 of User zuul.
Jan 20 15:53:01 compute-0 sshd-session[411034]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 20 15:53:01 compute-0 sudo[411038]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp  -p container,openstack_edpm,system,storage,virt'
Jan 20 15:53:01 compute-0 sudo[411038]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 15:53:02 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:53:02 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:53:02 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:53:02.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:53:02 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:53:02 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:53:02 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:53:02.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:53:02 compute-0 ceph-mon[74360]: pgmap v3925: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:53:03 compute-0 nova_compute[250018]: 2026-01-20 15:53:03.181 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:53:03 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:53:03 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3926: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:53:04 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:53:04 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:53:04 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:53:04.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:53:04 compute-0 sudo[411171]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:53:04 compute-0 sudo[411171]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:53:04 compute-0 sudo[411171]: pam_unix(sudo:session): session closed for user root
Jan 20 15:53:04 compute-0 sudo[411229]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:53:04 compute-0 sudo[411229]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:53:04 compute-0 sudo[411229]: pam_unix(sudo:session): session closed for user root
Jan 20 15:53:04 compute-0 podman[411214]: 2026-01-20 15:53:04.458730803 +0000 UTC m=+0.080169293 container health_status ef789c5a1674a400723be7fe753d17a9e1f36a6112db434ea9912d1129762ab2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Jan 20 15:53:04 compute-0 podman[411210]: 2026-01-20 15:53:04.507470412 +0000 UTC m=+0.130769522 container health_status 43f0d6b2d79591e9d3dc6cea0175af03c00350480f2e2ecd5b7145f6f8495533 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '4a32417983ff32267599655c6e45254baefd9d4970135e23c41405384e1081af-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9-049e6a34997ef410123733d44859de0b529d7f88b4c9352715bb83318a85ecb9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:53:04 compute-0 sudo[411283]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:53:04 compute-0 sudo[411283]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:53:04 compute-0 sudo[411283]: pam_unix(sudo:session): session closed for user root
Jan 20 15:53:04 compute-0 sudo[411329]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Jan 20 15:53:04 compute-0 sudo[411329]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:53:04 compute-0 ceph-mgr[74653]: log_channel(audit) log [DBG] : from='client.37089 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:04 compute-0 nova_compute[250018]: 2026-01-20 15:53:04.739 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:53:04 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:53:04 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:53:04 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:53:04.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:53:04 compute-0 ceph-mgr[74653]: log_channel(audit) log [DBG] : from='client.50426 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:04 compute-0 ceph-mon[74360]: pgmap v3926: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:53:05 compute-0 podman[411429]: 2026-01-20 15:53:05.089608151 +0000 UTC m=+0.066931854 container exec a602f19ce9ef2d4922e558036fcbd51fac25abd19d28d7df857e5fbe8f959ba3 (image=quay.io/ceph/ceph:v18, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mon-compute-0, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 20 15:53:05 compute-0 podman[411429]: 2026-01-20 15:53:05.181140791 +0000 UTC m=+0.158464524 container exec_died a602f19ce9ef2d4922e558036fcbd51fac25abd19d28d7df857e5fbe8f959ba3 (image=quay.io/ceph/ceph:v18, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 15:53:05 compute-0 ceph-mgr[74653]: log_channel(audit) log [DBG] : from='client.37095 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:05 compute-0 ceph-mgr[74653]: log_channel(audit) log [DBG] : from='client.50435 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:05 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3927: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:53:05 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status"} v 0) v1
Jan 20 15:53:05 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2586220715' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Jan 20 15:53:05 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 20 15:53:05 compute-0 podman[411628]: 2026-01-20 15:53:05.863579455 +0000 UTC m=+0.059687978 container exec a2973a514c852ff316e6ad2ff84585210b4ad01d86cdf2de06f634d9390a88e8 (image=quay.io/ceph/haproxy:2.3, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-haproxy-rgw-default-compute-0-nqkboe)
Jan 20 15:53:05 compute-0 podman[411628]: 2026-01-20 15:53:05.87412848 +0000 UTC m=+0.070237013 container exec_died a2973a514c852ff316e6ad2ff84585210b4ad01d86cdf2de06f634d9390a88e8 (image=quay.io/ceph/haproxy:2.3, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-haproxy-rgw-default-compute-0-nqkboe)
Jan 20 15:53:05 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:53:05 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 20 15:53:05 compute-0 ceph-mgr[74653]: log_channel(audit) log [DBG] : from='client.49987 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:06 compute-0 podman[411710]: 2026-01-20 15:53:06.119815025 +0000 UTC m=+0.062747671 container exec 0c2042652fe8d88ae47b6333c235a533de4d966f44da3b69a5fc82baeacb10bf (image=quay.io/ceph/keepalived:2.2.4, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-keepalived-rgw-default-compute-0-gcjsxe, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides keepalived on RHEL 9 for Ceph., release=1793, io.openshift.tags=Ceph keepalived, vcs-type=git, distribution-scope=public, com.redhat.component=keepalived-container, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, build-date=2023-02-22T09:23:20, architecture=x86_64, description=keepalived for Ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, vendor=Red Hat, Inc., io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, version=2.2.4)
Jan 20 15:53:06 compute-0 podman[411710]: 2026-01-20 15:53:06.14178198 +0000 UTC m=+0.084714586 container exec_died 0c2042652fe8d88ae47b6333c235a533de4d966f44da3b69a5fc82baeacb10bf (image=quay.io/ceph/keepalived:2.2.4, name=ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-keepalived-rgw-default-compute-0-gcjsxe, com.redhat.component=keepalived-container, io.openshift.tags=Ceph keepalived, architecture=x86_64, description=keepalived for Ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.display-name=Keepalived on RHEL 9, vendor=Red Hat, Inc., release=1793, name=keepalived, io.openshift.expose-services=, summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, distribution-scope=public, version=2.2.4, build-date=2023-02-22T09:23:20)
Jan 20 15:53:06 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:53:06 compute-0 sudo[411329]: pam_unix(sudo:session): session closed for user root
Jan 20 15:53:06 compute-0 ceph-mon[74360]: from='client.37089 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:06 compute-0 ceph-mon[74360]: from='client.50426 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:06 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/2586220715' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Jan 20 15:53:06 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:53:06 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 15:53:06 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:53:06 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:53:06 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:53:06.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:53:06 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:53:06 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 15:53:06 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:53:06 compute-0 sudo[411751]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:53:06 compute-0 sudo[411751]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:53:06 compute-0 sudo[411751]: pam_unix(sudo:session): session closed for user root
Jan 20 15:53:06 compute-0 sudo[411776]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:53:06 compute-0 sudo[411776]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:53:06 compute-0 sudo[411776]: pam_unix(sudo:session): session closed for user root
Jan 20 15:53:06 compute-0 sudo[411801]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:53:06 compute-0 sudo[411801]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:53:06 compute-0 sudo[411801]: pam_unix(sudo:session): session closed for user root
Jan 20 15:53:06 compute-0 sudo[411828]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 20 15:53:06 compute-0 sudo[411828]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:53:06 compute-0 ceph-mgr[74653]: log_channel(audit) log [DBG] : from='client.50450 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:06 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 20 15:53:06 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:53:06 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 20 15:53:06 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:53:06 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:53:06 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:53:06 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:53:06.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:53:06 compute-0 sudo[411828]: pam_unix(sudo:session): session closed for user root
Jan 20 15:53:07 compute-0 ceph-mon[74360]: from='client.37095 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:07 compute-0 ceph-mon[74360]: from='client.50435 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:07 compute-0 ceph-mon[74360]: pgmap v3927: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:53:07 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/1223251623' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Jan 20 15:53:07 compute-0 ceph-mon[74360]: from='client.49987 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:07 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:53:07 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:53:07 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:53:07 compute-0 ceph-mon[74360]: from='client.50450 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:07 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:53:07 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:53:07 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/3700717351' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Jan 20 15:53:07 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 15:53:07 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:53:07 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 20 15:53:07 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 15:53:07 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 20 15:53:07 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:53:07 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 113608d5-8b0f-4a09-bd7f-1b507d83e2da does not exist
Jan 20 15:53:07 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 8787a982-0a32-4e84-82a9-c8f3c435b180 does not exist
Jan 20 15:53:07 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 62870985-aafd-481f-af60-92b614450c5c does not exist
Jan 20 15:53:07 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 20 15:53:07 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 15:53:07 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 20 15:53:07 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 15:53:07 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 15:53:07 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:53:07 compute-0 sudo[411889]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:53:07 compute-0 sudo[411889]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:53:07 compute-0 sudo[411889]: pam_unix(sudo:session): session closed for user root
Jan 20 15:53:07 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3928: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:53:07 compute-0 sudo[411917]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:53:07 compute-0 sudo[411917]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:53:07 compute-0 sudo[411917]: pam_unix(sudo:session): session closed for user root
Jan 20 15:53:07 compute-0 sudo[411942]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:53:07 compute-0 sudo[411942]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:53:07 compute-0 sudo[411942]: pam_unix(sudo:session): session closed for user root
Jan 20 15:53:07 compute-0 sudo[411967]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 15:53:07 compute-0 sudo[411967]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:53:08 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:53:08 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:53:08 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:53:08.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:53:08 compute-0 nova_compute[250018]: 2026-01-20 15:53:08.213 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:53:08 compute-0 podman[412031]: 2026-01-20 15:53:08.260441106 +0000 UTC m=+0.052535055 container create d729985667d79854d51762a8359fcab6556790e5105612eeecfdf4f216234caf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_jemison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:53:08 compute-0 systemd[1]: Started libpod-conmon-d729985667d79854d51762a8359fcab6556790e5105612eeecfdf4f216234caf.scope.
Jan 20 15:53:08 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:53:08 compute-0 podman[412031]: 2026-01-20 15:53:08.240192857 +0000 UTC m=+0.032286836 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:53:08 compute-0 podman[412031]: 2026-01-20 15:53:08.353018053 +0000 UTC m=+0.145112042 container init d729985667d79854d51762a8359fcab6556790e5105612eeecfdf4f216234caf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_jemison, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 20 15:53:08 compute-0 podman[412031]: 2026-01-20 15:53:08.370176028 +0000 UTC m=+0.162269977 container start d729985667d79854d51762a8359fcab6556790e5105612eeecfdf4f216234caf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_jemison, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 20 15:53:08 compute-0 podman[412031]: 2026-01-20 15:53:08.374845534 +0000 UTC m=+0.166939513 container attach d729985667d79854d51762a8359fcab6556790e5105612eeecfdf4f216234caf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_jemison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 15:53:08 compute-0 elastic_jemison[412048]: 167 167
Jan 20 15:53:08 compute-0 systemd[1]: libpod-d729985667d79854d51762a8359fcab6556790e5105612eeecfdf4f216234caf.scope: Deactivated successfully.
Jan 20 15:53:08 compute-0 podman[412031]: 2026-01-20 15:53:08.377867597 +0000 UTC m=+0.169961546 container died d729985667d79854d51762a8359fcab6556790e5105612eeecfdf4f216234caf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_jemison, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 20 15:53:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-53d1575cc0cef8fff1614dcb48667ff51a82176b79b7d80b142df79728c62de3-merged.mount: Deactivated successfully.
Jan 20 15:53:08 compute-0 podman[412031]: 2026-01-20 15:53:08.443272947 +0000 UTC m=+0.235366916 container remove d729985667d79854d51762a8359fcab6556790e5105612eeecfdf4f216234caf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_jemison, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:53:08 compute-0 systemd[1]: libpod-conmon-d729985667d79854d51762a8359fcab6556790e5105612eeecfdf4f216234caf.scope: Deactivated successfully.
Jan 20 15:53:08 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:53:08 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 15:53:08 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:53:08 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 15:53:08 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 15:53:08 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:53:08 compute-0 ceph-mon[74360]: pgmap v3928: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:53:08 compute-0 podman[412082]: 2026-01-20 15:53:08.640424698 +0000 UTC m=+0.051066444 container create 14009ae7a6e41a244a8839c07266ec71c92a4c1d97473d58e5d4fac0b094053c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_ride, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 20 15:53:08 compute-0 systemd[1]: Started libpod-conmon-14009ae7a6e41a244a8839c07266ec71c92a4c1d97473d58e5d4fac0b094053c.scope.
Jan 20 15:53:08 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:53:08 compute-0 podman[412082]: 2026-01-20 15:53:08.621221387 +0000 UTC m=+0.031863163 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:53:08 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:53:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5aecac1048a19cb6915189758a5d8c1f232a4351f2403523d7bf54835ec5777/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 15:53:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5aecac1048a19cb6915189758a5d8c1f232a4351f2403523d7bf54835ec5777/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 15:53:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5aecac1048a19cb6915189758a5d8c1f232a4351f2403523d7bf54835ec5777/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 15:53:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5aecac1048a19cb6915189758a5d8c1f232a4351f2403523d7bf54835ec5777/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 15:53:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5aecac1048a19cb6915189758a5d8c1f232a4351f2403523d7bf54835ec5777/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 15:53:08 compute-0 podman[412082]: 2026-01-20 15:53:08.739183463 +0000 UTC m=+0.149825239 container init 14009ae7a6e41a244a8839c07266ec71c92a4c1d97473d58e5d4fac0b094053c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_ride, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 20 15:53:08 compute-0 podman[412082]: 2026-01-20 15:53:08.750463419 +0000 UTC m=+0.161105165 container start 14009ae7a6e41a244a8839c07266ec71c92a4c1d97473d58e5d4fac0b094053c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_ride, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 15:53:08 compute-0 podman[412082]: 2026-01-20 15:53:08.754121187 +0000 UTC m=+0.164762963 container attach 14009ae7a6e41a244a8839c07266ec71c92a4c1d97473d58e5d4fac0b094053c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_ride, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 15:53:08 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:53:08 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:53:08 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:53:08.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:53:08 compute-0 ovs-vsctl[412120]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Jan 20 15:53:09 compute-0 serene_ride[412107]: --> passed data devices: 0 physical, 1 LVM
Jan 20 15:53:09 compute-0 serene_ride[412107]: --> relative data size: 1.0
Jan 20 15:53:09 compute-0 serene_ride[412107]: --> All data devices are unavailable
Jan 20 15:53:09 compute-0 systemd[1]: libpod-14009ae7a6e41a244a8839c07266ec71c92a4c1d97473d58e5d4fac0b094053c.scope: Deactivated successfully.
Jan 20 15:53:09 compute-0 podman[412082]: 2026-01-20 15:53:09.566337736 +0000 UTC m=+0.976979492 container died 14009ae7a6e41a244a8839c07266ec71c92a4c1d97473d58e5d4fac0b094053c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_ride, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 15:53:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-e5aecac1048a19cb6915189758a5d8c1f232a4351f2403523d7bf54835ec5777-merged.mount: Deactivated successfully.
Jan 20 15:53:09 compute-0 podman[412082]: 2026-01-20 15:53:09.64879612 +0000 UTC m=+1.059437876 container remove 14009ae7a6e41a244a8839c07266ec71c92a4c1d97473d58e5d4fac0b094053c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_ride, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 15:53:09 compute-0 systemd[1]: libpod-conmon-14009ae7a6e41a244a8839c07266ec71c92a4c1d97473d58e5d4fac0b094053c.scope: Deactivated successfully.
Jan 20 15:53:09 compute-0 sudo[411967]: pam_unix(sudo:session): session closed for user root
Jan 20 15:53:09 compute-0 nova_compute[250018]: 2026-01-20 15:53:09.741 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:53:09 compute-0 sudo[412217]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:53:09 compute-0 sudo[412217]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:53:09 compute-0 sudo[412217]: pam_unix(sudo:session): session closed for user root
Jan 20 15:53:09 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3929: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:53:09 compute-0 sudo[412287]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:53:09 compute-0 sudo[412287]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:53:09 compute-0 sudo[412287]: pam_unix(sudo:session): session closed for user root
Jan 20 15:53:09 compute-0 virtqemud[249565]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Jan 20 15:53:09 compute-0 virtqemud[249565]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Jan 20 15:53:09 compute-0 sudo[412322]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:53:09 compute-0 sudo[412322]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:53:09 compute-0 sudo[412322]: pam_unix(sudo:session): session closed for user root
Jan 20 15:53:09 compute-0 virtqemud[249565]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Jan 20 15:53:09 compute-0 sudo[412355]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- lvm list --format json
Jan 20 15:53:09 compute-0 sudo[412355]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:53:10 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:53:10 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:53:10 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:53:10.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:53:10 compute-0 podman[412464]: 2026-01-20 15:53:10.291850058 +0000 UTC m=+0.042444291 container create 36574ffbae5c8b974d8b2d42072d8a667b55ea258bf01fd5595ba0de31def4c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_galois, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 15:53:10 compute-0 systemd[1]: Started libpod-conmon-36574ffbae5c8b974d8b2d42072d8a667b55ea258bf01fd5595ba0de31def4c1.scope.
Jan 20 15:53:10 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:53:10 compute-0 podman[412464]: 2026-01-20 15:53:10.366758787 +0000 UTC m=+0.117353050 container init 36574ffbae5c8b974d8b2d42072d8a667b55ea258bf01fd5595ba0de31def4c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_galois, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 20 15:53:10 compute-0 podman[412464]: 2026-01-20 15:53:10.271122027 +0000 UTC m=+0.021716270 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:53:10 compute-0 podman[412464]: 2026-01-20 15:53:10.377778066 +0000 UTC m=+0.128372309 container start 36574ffbae5c8b974d8b2d42072d8a667b55ea258bf01fd5595ba0de31def4c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_galois, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 15:53:10 compute-0 podman[412464]: 2026-01-20 15:53:10.381702902 +0000 UTC m=+0.132297145 container attach 36574ffbae5c8b974d8b2d42072d8a667b55ea258bf01fd5595ba0de31def4c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_galois, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 15:53:10 compute-0 sweet_galois[412510]: 167 167
Jan 20 15:53:10 compute-0 systemd[1]: libpod-36574ffbae5c8b974d8b2d42072d8a667b55ea258bf01fd5595ba0de31def4c1.scope: Deactivated successfully.
Jan 20 15:53:10 compute-0 conmon[412510]: conmon 36574ffbae5c8b974d8b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-36574ffbae5c8b974d8b2d42072d8a667b55ea258bf01fd5595ba0de31def4c1.scope/container/memory.events
Jan 20 15:53:10 compute-0 podman[412464]: 2026-01-20 15:53:10.387220602 +0000 UTC m=+0.137814845 container died 36574ffbae5c8b974d8b2d42072d8a667b55ea258bf01fd5595ba0de31def4c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_galois, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 20 15:53:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-d755e05f5d6aab6df46f9c33d1c694e28b8a336789891a2719c12477e8b8d159-merged.mount: Deactivated successfully.
Jan 20 15:53:10 compute-0 podman[412464]: 2026-01-20 15:53:10.42669221 +0000 UTC m=+0.177286453 container remove 36574ffbae5c8b974d8b2d42072d8a667b55ea258bf01fd5595ba0de31def4c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_galois, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 20 15:53:10 compute-0 systemd[1]: libpod-conmon-36574ffbae5c8b974d8b2d42072d8a667b55ea258bf01fd5595ba0de31def4c1.scope: Deactivated successfully.
Jan 20 15:53:10 compute-0 ceph-mds[93566]: mds.cephfs.compute-0.znrafi asok_command: cache status {prefix=cache status} (starting...)
Jan 20 15:53:10 compute-0 ceph-mds[93566]: mds.cephfs.compute-0.znrafi Can't run that command on an inactive MDS!
Jan 20 15:53:10 compute-0 podman[412602]: 2026-01-20 15:53:10.611210409 +0000 UTC m=+0.055135876 container create 11e6e7f462584e5abce90adbd628a27de65da97d2f0c2b6784bbebf200d14023 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_brahmagupta, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 15:53:10 compute-0 systemd[1]: Started libpod-conmon-11e6e7f462584e5abce90adbd628a27de65da97d2f0c2b6784bbebf200d14023.scope.
Jan 20 15:53:10 compute-0 podman[412602]: 2026-01-20 15:53:10.589668264 +0000 UTC m=+0.033593751 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:53:10 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:53:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/431fa5ef9fdd8649589edaed2cd8b0942a37fcf86c38de8de82a98d1c3582185/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 15:53:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/431fa5ef9fdd8649589edaed2cd8b0942a37fcf86c38de8de82a98d1c3582185/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 15:53:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/431fa5ef9fdd8649589edaed2cd8b0942a37fcf86c38de8de82a98d1c3582185/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 15:53:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/431fa5ef9fdd8649589edaed2cd8b0942a37fcf86c38de8de82a98d1c3582185/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 15:53:10 compute-0 ceph-mds[93566]: mds.cephfs.compute-0.znrafi asok_command: client ls {prefix=client ls} (starting...)
Jan 20 15:53:10 compute-0 ceph-mds[93566]: mds.cephfs.compute-0.znrafi Can't run that command on an inactive MDS!
Jan 20 15:53:10 compute-0 lvm[412650]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 20 15:53:10 compute-0 lvm[412650]: VG ceph_vg0 finished
Jan 20 15:53:10 compute-0 podman[412602]: 2026-01-20 15:53:10.743907303 +0000 UTC m=+0.187832790 container init 11e6e7f462584e5abce90adbd628a27de65da97d2f0c2b6784bbebf200d14023 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_brahmagupta, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 15:53:10 compute-0 podman[412602]: 2026-01-20 15:53:10.755865596 +0000 UTC m=+0.199791063 container start 11e6e7f462584e5abce90adbd628a27de65da97d2f0c2b6784bbebf200d14023 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_brahmagupta, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 15:53:10 compute-0 podman[412602]: 2026-01-20 15:53:10.759570547 +0000 UTC m=+0.203496014 container attach 11e6e7f462584e5abce90adbd628a27de65da97d2f0c2b6784bbebf200d14023 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_brahmagupta, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 20 15:53:10 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:53:10 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:53:10 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:53:10.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:53:10 compute-0 ceph-mon[74360]: pgmap v3929: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:53:11 compute-0 ceph-mgr[74653]: log_channel(audit) log [DBG] : from='client.37122 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:11 compute-0 ceph-mds[93566]: mds.cephfs.compute-0.znrafi asok_command: damage ls {prefix=damage ls} (starting...)
Jan 20 15:53:11 compute-0 ceph-mds[93566]: mds.cephfs.compute-0.znrafi Can't run that command on an inactive MDS!
Jan 20 15:53:11 compute-0 amazing_brahmagupta[412640]: {
Jan 20 15:53:11 compute-0 amazing_brahmagupta[412640]:     "0": [
Jan 20 15:53:11 compute-0 amazing_brahmagupta[412640]:         {
Jan 20 15:53:11 compute-0 amazing_brahmagupta[412640]:             "devices": [
Jan 20 15:53:11 compute-0 amazing_brahmagupta[412640]:                 "/dev/loop3"
Jan 20 15:53:11 compute-0 amazing_brahmagupta[412640]:             ],
Jan 20 15:53:11 compute-0 amazing_brahmagupta[412640]:             "lv_name": "ceph_lv0",
Jan 20 15:53:11 compute-0 amazing_brahmagupta[412640]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 15:53:11 compute-0 amazing_brahmagupta[412640]:             "lv_size": "7511998464",
Jan 20 15:53:11 compute-0 amazing_brahmagupta[412640]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e399cf45-e6b6-5393-99f1-75c601d3f188,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=afc3740a-ccee-46f8-b343-22648cc89376,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 20 15:53:11 compute-0 amazing_brahmagupta[412640]:             "lv_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 15:53:11 compute-0 amazing_brahmagupta[412640]:             "name": "ceph_lv0",
Jan 20 15:53:11 compute-0 amazing_brahmagupta[412640]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 15:53:11 compute-0 amazing_brahmagupta[412640]:             "tags": {
Jan 20 15:53:11 compute-0 amazing_brahmagupta[412640]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 15:53:11 compute-0 amazing_brahmagupta[412640]:                 "ceph.block_uuid": "JDsmKJ-cW3f-kcvS-e5h4-yEcD-vh7x-WsuSGv",
Jan 20 15:53:11 compute-0 amazing_brahmagupta[412640]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 15:53:11 compute-0 amazing_brahmagupta[412640]:                 "ceph.cluster_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 15:53:11 compute-0 amazing_brahmagupta[412640]:                 "ceph.cluster_name": "ceph",
Jan 20 15:53:11 compute-0 amazing_brahmagupta[412640]:                 "ceph.crush_device_class": "",
Jan 20 15:53:11 compute-0 amazing_brahmagupta[412640]:                 "ceph.encrypted": "0",
Jan 20 15:53:11 compute-0 amazing_brahmagupta[412640]:                 "ceph.osd_fsid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 15:53:11 compute-0 amazing_brahmagupta[412640]:                 "ceph.osd_id": "0",
Jan 20 15:53:11 compute-0 amazing_brahmagupta[412640]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 15:53:11 compute-0 amazing_brahmagupta[412640]:                 "ceph.type": "block",
Jan 20 15:53:11 compute-0 amazing_brahmagupta[412640]:                 "ceph.vdo": "0"
Jan 20 15:53:11 compute-0 amazing_brahmagupta[412640]:             },
Jan 20 15:53:11 compute-0 amazing_brahmagupta[412640]:             "type": "block",
Jan 20 15:53:11 compute-0 amazing_brahmagupta[412640]:             "vg_name": "ceph_vg0"
Jan 20 15:53:11 compute-0 amazing_brahmagupta[412640]:         }
Jan 20 15:53:11 compute-0 amazing_brahmagupta[412640]:     ]
Jan 20 15:53:11 compute-0 amazing_brahmagupta[412640]: }
Jan 20 15:53:11 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0) v1
Jan 20 15:53:11 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1569318041' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 20 15:53:11 compute-0 systemd[1]: libpod-11e6e7f462584e5abce90adbd628a27de65da97d2f0c2b6784bbebf200d14023.scope: Deactivated successfully.
Jan 20 15:53:11 compute-0 podman[412602]: 2026-01-20 15:53:11.533827068 +0000 UTC m=+0.977752535 container died 11e6e7f462584e5abce90adbd628a27de65da97d2f0c2b6784bbebf200d14023 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_brahmagupta, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 15:53:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-431fa5ef9fdd8649589edaed2cd8b0942a37fcf86c38de8de82a98d1c3582185-merged.mount: Deactivated successfully.
Jan 20 15:53:11 compute-0 ceph-mds[93566]: mds.cephfs.compute-0.znrafi asok_command: dump loads {prefix=dump loads} (starting...)
Jan 20 15:53:11 compute-0 ceph-mds[93566]: mds.cephfs.compute-0.znrafi Can't run that command on an inactive MDS!
Jan 20 15:53:11 compute-0 podman[412602]: 2026-01-20 15:53:11.605283134 +0000 UTC m=+1.049208601 container remove 11e6e7f462584e5abce90adbd628a27de65da97d2f0c2b6784bbebf200d14023 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_brahmagupta, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 15:53:11 compute-0 ceph-mgr[74653]: log_channel(audit) log [DBG] : from='client.37134 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:11 compute-0 systemd[1]: libpod-conmon-11e6e7f462584e5abce90adbd628a27de65da97d2f0c2b6784bbebf200d14023.scope: Deactivated successfully.
Jan 20 15:53:11 compute-0 sudo[412355]: pam_unix(sudo:session): session closed for user root
Jan 20 15:53:11 compute-0 sudo[412807]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:53:11 compute-0 ceph-mds[93566]: mds.cephfs.compute-0.znrafi asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Jan 20 15:53:11 compute-0 ceph-mds[93566]: mds.cephfs.compute-0.znrafi Can't run that command on an inactive MDS!
Jan 20 15:53:11 compute-0 sudo[412807]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:53:11 compute-0 sudo[412807]: pam_unix(sudo:session): session closed for user root
Jan 20 15:53:11 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3930: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:53:11 compute-0 sudo[412851]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 15:53:11 compute-0 sudo[412851]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:53:11 compute-0 sudo[412851]: pam_unix(sudo:session): session closed for user root
Jan 20 15:53:11 compute-0 sudo[412897]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:53:11 compute-0 sudo[412897]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:53:11 compute-0 sudo[412897]: pam_unix(sudo:session): session closed for user root
Jan 20 15:53:11 compute-0 ceph-mds[93566]: mds.cephfs.compute-0.znrafi asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Jan 20 15:53:11 compute-0 ceph-mds[93566]: mds.cephfs.compute-0.znrafi Can't run that command on an inactive MDS!
Jan 20 15:53:11 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/1569318041' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 20 15:53:11 compute-0 sudo[412922]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e399cf45-e6b6-5393-99f1-75c601d3f188/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid e399cf45-e6b6-5393-99f1-75c601d3f188 -- raw list --format json
Jan 20 15:53:11 compute-0 sudo[412922]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:53:11 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 20 15:53:11 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3906255464' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:53:11 compute-0 ceph-mds[93566]: mds.cephfs.compute-0.znrafi asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Jan 20 15:53:11 compute-0 ceph-mds[93566]: mds.cephfs.compute-0.znrafi Can't run that command on an inactive MDS!
Jan 20 15:53:12 compute-0 ceph-mgr[74653]: log_channel(audit) log [DBG] : from='client.50008 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:12 compute-0 ceph-mds[93566]: mds.cephfs.compute-0.znrafi asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Jan 20 15:53:12 compute-0 ceph-mds[93566]: mds.cephfs.compute-0.znrafi Can't run that command on an inactive MDS!
Jan 20 15:53:12 compute-0 ceph-mgr[74653]: log_channel(audit) log [DBG] : from='client.50468 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:12 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:53:12 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:53:12 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:53:12.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:53:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 15:53:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:53:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 20 15:53:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:53:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 20 15:53:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:53:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021614147124511445 of space, bias 1.0, pg target 0.6484244137353433 quantized to 32 (current 32)
Jan 20 15:53:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:53:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 20 15:53:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:53:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 20 15:53:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:53:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 20 15:53:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:53:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 15:53:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:53:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 20 15:53:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:53:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 20 15:53:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:53:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 15:53:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 20 15:53:12 compute-0 ceph-mgr[74653]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 20 15:53:12 compute-0 podman[413032]: 2026-01-20 15:53:12.252761781 +0000 UTC m=+0.041708241 container create 72c9acafa2a6ec7af222d4166344785657b4727ca028c59af61260aa69267892 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_zhukovsky, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 15:53:12 compute-0 systemd[1]: Started libpod-conmon-72c9acafa2a6ec7af222d4166344785657b4727ca028c59af61260aa69267892.scope.
Jan 20 15:53:12 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:53:12 compute-0 podman[413032]: 2026-01-20 15:53:12.325938763 +0000 UTC m=+0.114885263 container init 72c9acafa2a6ec7af222d4166344785657b4727ca028c59af61260aa69267892 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_zhukovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 15:53:12 compute-0 podman[413032]: 2026-01-20 15:53:12.234111466 +0000 UTC m=+0.023057946 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:53:12 compute-0 podman[413032]: 2026-01-20 15:53:12.332425489 +0000 UTC m=+0.121371949 container start 72c9acafa2a6ec7af222d4166344785657b4727ca028c59af61260aa69267892 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_zhukovsky, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 15:53:12 compute-0 podman[413032]: 2026-01-20 15:53:12.336392786 +0000 UTC m=+0.125339256 container attach 72c9acafa2a6ec7af222d4166344785657b4727ca028c59af61260aa69267892 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_zhukovsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 20 15:53:12 compute-0 quirky_zhukovsky[413048]: 167 167
Jan 20 15:53:12 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config log"} v 0) v1
Jan 20 15:53:12 compute-0 systemd[1]: libpod-72c9acafa2a6ec7af222d4166344785657b4727ca028c59af61260aa69267892.scope: Deactivated successfully.
Jan 20 15:53:12 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4126405177' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Jan 20 15:53:12 compute-0 podman[413032]: 2026-01-20 15:53:12.339372217 +0000 UTC m=+0.128318697 container died 72c9acafa2a6ec7af222d4166344785657b4727ca028c59af61260aa69267892 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_zhukovsky, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 20 15:53:12 compute-0 ceph-mgr[74653]: log_channel(audit) log [DBG] : from='client.37155 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:12 compute-0 ceph-mgr[74653]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 20 15:53:12 compute-0 ceph-mds[93566]: mds.cephfs.compute-0.znrafi asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Jan 20 15:53:12 compute-0 ceph-mds[93566]: mds.cephfs.compute-0.znrafi Can't run that command on an inactive MDS!
Jan 20 15:53:12 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv[74649]: 2026-01-20T15:53:12.358+0000 7f4562bc3640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 20 15:53:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-4ea9b84bbb5e61ac40f1029e8cb7f2d064aca58b4011dc3c96e39f603f3885ba-merged.mount: Deactivated successfully.
Jan 20 15:53:12 compute-0 podman[413032]: 2026-01-20 15:53:12.377606262 +0000 UTC m=+0.166552742 container remove 72c9acafa2a6ec7af222d4166344785657b4727ca028c59af61260aa69267892 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_zhukovsky, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2)
Jan 20 15:53:12 compute-0 systemd[1]: libpod-conmon-72c9acafa2a6ec7af222d4166344785657b4727ca028c59af61260aa69267892.scope: Deactivated successfully.
Jan 20 15:53:12 compute-0 ceph-mgr[74653]: log_channel(audit) log [DBG] : from='client.50020 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:12 compute-0 ceph-mds[93566]: mds.cephfs.compute-0.znrafi asok_command: get subtrees {prefix=get subtrees} (starting...)
Jan 20 15:53:12 compute-0 ceph-mds[93566]: mds.cephfs.compute-0.znrafi Can't run that command on an inactive MDS!
Jan 20 15:53:12 compute-0 ceph-mgr[74653]: log_channel(audit) log [DBG] : from='client.50032 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:12 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0) v1
Jan 20 15:53:12 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 20 15:53:12 compute-0 podman[413093]: 2026-01-20 15:53:12.542650623 +0000 UTC m=+0.043859819 container create d9bec46e8a72829d7f22e5266beca7834bf39579ec6c69e90ad966b6394b38ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_newton, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 20 15:53:12 compute-0 systemd[1]: Started libpod-conmon-d9bec46e8a72829d7f22e5266beca7834bf39579ec6c69e90ad966b6394b38ab.scope.
Jan 20 15:53:12 compute-0 systemd[1]: Started libcrun container.
Jan 20 15:53:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c44d095de5b3c96e9d942f2c27583cadfc651754a0c7c722ce5147b64562e409/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 15:53:12 compute-0 podman[413093]: 2026-01-20 15:53:12.52444449 +0000 UTC m=+0.025653686 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 20 15:53:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c44d095de5b3c96e9d942f2c27583cadfc651754a0c7c722ce5147b64562e409/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 15:53:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c44d095de5b3c96e9d942f2c27583cadfc651754a0c7c722ce5147b64562e409/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 15:53:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c44d095de5b3c96e9d942f2c27583cadfc651754a0c7c722ce5147b64562e409/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 15:53:12 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0) v1
Jan 20 15:53:12 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 20 15:53:12 compute-0 podman[413093]: 2026-01-20 15:53:12.640316208 +0000 UTC m=+0.141525414 container init d9bec46e8a72829d7f22e5266beca7834bf39579ec6c69e90ad966b6394b38ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_newton, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 20 15:53:12 compute-0 podman[413093]: 2026-01-20 15:53:12.648770877 +0000 UTC m=+0.149980073 container start d9bec46e8a72829d7f22e5266beca7834bf39579ec6c69e90ad966b6394b38ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_newton, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 15:53:12 compute-0 podman[413093]: 2026-01-20 15:53:12.654058281 +0000 UTC m=+0.155267507 container attach d9bec46e8a72829d7f22e5266beca7834bf39579ec6c69e90ad966b6394b38ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_newton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 15:53:12 compute-0 ceph-mds[93566]: mds.cephfs.compute-0.znrafi asok_command: ops {prefix=ops} (starting...)
Jan 20 15:53:12 compute-0 ceph-mds[93566]: mds.cephfs.compute-0.znrafi Can't run that command on an inactive MDS!
Jan 20 15:53:12 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:53:12 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:53:12 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:53:12.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:53:12 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0) v1
Jan 20 15:53:12 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1881459806' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Jan 20 15:53:12 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config-key dump"} v 0) v1
Jan 20 15:53:12 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3773467630' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Jan 20 15:53:12 compute-0 ceph-mon[74360]: from='client.37122 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:12 compute-0 ceph-mon[74360]: from='client.37134 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:12 compute-0 ceph-mon[74360]: pgmap v3930: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:53:12 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3906255464' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:53:12 compute-0 ceph-mon[74360]: from='client.50008 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:12 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/4126405177' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Jan 20 15:53:12 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/4190772805' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 20 15:53:12 compute-0 ceph-mon[74360]: from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 20 15:53:12 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/3384103310' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 20 15:53:12 compute-0 ceph-mon[74360]: from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 20 15:53:12 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/1881459806' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Jan 20 15:53:12 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3773467630' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Jan 20 15:53:13 compute-0 ceph-mgr[74653]: log_channel(audit) log [DBG] : from='client.37188 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:13 compute-0 nova_compute[250018]: 2026-01-20 15:53:13.216 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:53:13 compute-0 ceph-mgr[74653]: log_channel(audit) log [DBG] : from='client.50059 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:13 compute-0 ceph-mgr[74653]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 20 15:53:13 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv[74649]: 2026-01-20T15:53:13.227+0000 7f4562bc3640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 20 15:53:13 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Jan 20 15:53:13 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1129415843' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 20 15:53:13 compute-0 ceph-mgr[74653]: log_channel(audit) log [DBG] : from='client.50504 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:13 compute-0 ceph-mgr[74653]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 20 15:53:13 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv[74649]: 2026-01-20T15:53:13.277+0000 7f4562bc3640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 20 15:53:13 compute-0 ceph-mds[93566]: mds.cephfs.compute-0.znrafi asok_command: session ls {prefix=session ls} (starting...)
Jan 20 15:53:13 compute-0 ceph-mds[93566]: mds.cephfs.compute-0.znrafi Can't run that command on an inactive MDS!
Jan 20 15:53:13 compute-0 dreamy_newton[413138]: {
Jan 20 15:53:13 compute-0 dreamy_newton[413138]:     "afc3740a-ccee-46f8-b343-22648cc89376": {
Jan 20 15:53:13 compute-0 dreamy_newton[413138]:         "ceph_fsid": "e399cf45-e6b6-5393-99f1-75c601d3f188",
Jan 20 15:53:13 compute-0 dreamy_newton[413138]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 20 15:53:13 compute-0 dreamy_newton[413138]:         "osd_id": 0,
Jan 20 15:53:13 compute-0 dreamy_newton[413138]:         "osd_uuid": "afc3740a-ccee-46f8-b343-22648cc89376",
Jan 20 15:53:13 compute-0 dreamy_newton[413138]:         "type": "bluestore"
Jan 20 15:53:13 compute-0 dreamy_newton[413138]:     }
Jan 20 15:53:13 compute-0 dreamy_newton[413138]: }
Jan 20 15:53:13 compute-0 systemd[1]: libpod-d9bec46e8a72829d7f22e5266beca7834bf39579ec6c69e90ad966b6394b38ab.scope: Deactivated successfully.
Jan 20 15:53:13 compute-0 podman[413093]: 2026-01-20 15:53:13.516786038 +0000 UTC m=+1.017995234 container died d9bec46e8a72829d7f22e5266beca7834bf39579ec6c69e90ad966b6394b38ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_newton, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 20 15:53:13 compute-0 ceph-mgr[74653]: log_channel(audit) log [DBG] : from='client.37197 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-c44d095de5b3c96e9d942f2c27583cadfc651754a0c7c722ce5147b64562e409-merged.mount: Deactivated successfully.
Jan 20 15:53:13 compute-0 podman[413093]: 2026-01-20 15:53:13.574400679 +0000 UTC m=+1.075609885 container remove d9bec46e8a72829d7f22e5266beca7834bf39579ec6c69e90ad966b6394b38ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_newton, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 20 15:53:13 compute-0 ceph-mds[93566]: mds.cephfs.compute-0.znrafi asok_command: status {prefix=status} (starting...)
Jan 20 15:53:13 compute-0 systemd[1]: libpod-conmon-d9bec46e8a72829d7f22e5266beca7834bf39579ec6c69e90ad966b6394b38ab.scope: Deactivated successfully.
Jan 20 15:53:13 compute-0 sudo[412922]: pam_unix(sudo:session): session closed for user root
Jan 20 15:53:13 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 20 15:53:13 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:53:13 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 20 15:53:13 compute-0 ceph-mon[74360]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:53:13 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 021ffb32-6b42-4bfa-ba9b-2363d741059c does not exist
Jan 20 15:53:13 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev 18618aad-9335-4fff-bebf-f8899fa7431d does not exist
Jan 20 15:53:13 compute-0 ceph-mgr[74653]: [progress WARNING root] complete: ev d7956a28-4503-4b78-9b49-4f6523f1a80f does not exist
Jan 20 15:53:13 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:53:13 compute-0 sudo[413286]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:53:13 compute-0 sudo[413286]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:53:13 compute-0 sudo[413286]: pam_unix(sudo:session): session closed for user root
Jan 20 15:53:13 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Jan 20 15:53:13 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3546405552' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 20 15:53:13 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3931: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:53:13 compute-0 sudo[413330]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 15:53:13 compute-0 sudo[413330]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:53:13 compute-0 sudo[413330]: pam_unix(sudo:session): session closed for user root
Jan 20 15:53:13 compute-0 ceph-mon[74360]: from='client.50468 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:13 compute-0 ceph-mon[74360]: from='client.37155 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:13 compute-0 ceph-mon[74360]: from='client.50020 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:13 compute-0 ceph-mon[74360]: from='client.50032 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:13 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/1906101247' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:53:13 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/620047485' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 15:53:13 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/1129415843' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 20 15:53:13 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/713696760' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Jan 20 15:53:13 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/3593184449' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Jan 20 15:53:13 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:53:13 compute-0 ceph-mon[74360]: from='mgr.14132 192.168.122.100:0/3912305496' entity='mgr.compute-0.wookjv' 
Jan 20 15:53:13 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/4057764480' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Jan 20 15:53:13 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/1050385293' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 15:53:13 compute-0 ceph-mon[74360]: from='client.? 192.168.122.10:0/1050385293' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 15:53:13 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/1706995726' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Jan 20 15:53:13 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3546405552' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 20 15:53:13 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/1882969061' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Jan 20 15:53:13 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0) v1
Jan 20 15:53:13 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3054439434' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 20 15:53:14 compute-0 ceph-mgr[74653]: log_channel(audit) log [DBG] : from='client.50104 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:14 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Jan 20 15:53:14 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2148797103' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 20 15:53:14 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:53:14 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:53:14 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:53:14.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:53:14 compute-0 ceph-mgr[74653]: log_channel(audit) log [DBG] : from='client.50552 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:14 compute-0 ceph-mgr[74653]: log_channel(audit) log [DBG] : from='client.50116 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:14 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
Jan 20 15:53:14 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2388841994' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Jan 20 15:53:14 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Jan 20 15:53:14 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1121232392' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 20 15:53:14 compute-0 ceph-mgr[74653]: log_channel(audit) log [DBG] : from='client.50564 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:14 compute-0 nova_compute[250018]: 2026-01-20 15:53:14.743 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:53:14 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:53:14 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:53:14 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:53:14.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:53:14 compute-0 ceph-mgr[74653]: log_channel(audit) log [DBG] : from='client.37254 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:14 compute-0 ceph-mgr[74653]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 20 15:53:14 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv[74649]: 2026-01-20T15:53:14.815+0000 7f4562bc3640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 20 15:53:14 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0) v1
Jan 20 15:53:14 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/829000758' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 20 15:53:14 compute-0 ceph-mon[74360]: from='client.37188 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:14 compute-0 ceph-mon[74360]: from='client.50059 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:14 compute-0 ceph-mon[74360]: from='client.50504 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:14 compute-0 ceph-mon[74360]: from='client.37197 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:14 compute-0 ceph-mon[74360]: pgmap v3931: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:53:14 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/4139919592' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Jan 20 15:53:14 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3054439434' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 20 15:53:14 compute-0 ceph-mon[74360]: from='client.50104 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:14 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/2698181100' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 20 15:53:14 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/817991170' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 20 15:53:14 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/2148797103' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 20 15:53:14 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/2388841994' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Jan 20 15:53:14 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/1121232392' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 20 15:53:14 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/2058637716' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 20 15:53:14 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/2827410809' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 20 15:53:14 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/829000758' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 20 15:53:15 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Jan 20 15:53:15 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/124324877' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Jan 20 15:53:15 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0) v1
Jan 20 15:53:15 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 20 15:53:15 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0) v1
Jan 20 15:53:15 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3030325856' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Jan 20 15:53:15 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Jan 20 15:53:15 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3163341667' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 20 15:53:15 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0) v1
Jan 20 15:53:15 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2870537491' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Jan 20 15:53:15 compute-0 ceph-mgr[74653]: log_channel(audit) log [DBG] : from='client.50182 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:15 compute-0 ceph-mgr[74653]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 20 15:53:15 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv[74649]: 2026-01-20T15:53:15.729+0000 7f4562bc3640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 20 15:53:15 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3932: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:53:15 compute-0 ceph-mgr[74653]: log_channel(audit) log [DBG] : from='client.37296 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:16 compute-0 ceph-mon[74360]: from='client.50552 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:16 compute-0 ceph-mon[74360]: from='client.50116 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:16 compute-0 ceph-mon[74360]: from='client.50564 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:16 compute-0 ceph-mon[74360]: from='client.37254 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:16 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/124324877' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Jan 20 15:53:16 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/1991284178' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 20 15:53:16 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/893498978' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 20 15:53:16 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/2579756407' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 20 15:53:16 compute-0 ceph-mon[74360]: from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 20 15:53:16 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3030325856' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Jan 20 15:53:16 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/3585713236' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Jan 20 15:53:16 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3163341667' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 20 15:53:16 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/2989774423' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 20 15:53:16 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/22520359' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 20 15:53:16 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/2870537491' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Jan 20 15:53:16 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/3071146643' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Jan 20 15:53:16 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/1406338621' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Jan 20 15:53:16 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Jan 20 15:53:16 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/598650118' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 20 15:53:16 compute-0 ceph-mgr[74653]: log_channel(audit) log [DBG] : from='client.50612 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:16 compute-0 ceph-mgr[74653]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 20 15:53:16 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv[74649]: 2026-01-20T15:53:16.144+0000 7f4562bc3640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 20 15:53:16 compute-0 ceph-mgr[74653]: log_channel(audit) log [DBG] : from='client.37311 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:16 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:53:16 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:53:16 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:53:16.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:53:16 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Jan 20 15:53:16 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2810450873' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 20 15:53:16 compute-0 ceph-mgr[74653]: log_channel(audit) log [DBG] : from='client.37329 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:16 compute-0 ceph-mgr[74653]: log_channel(audit) log [DBG] : from='client.50227 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:16 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:53:16 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:53:16 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:53:16.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:53:16 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Jan 20 15:53:16 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/947449568' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 20 15:53:16 compute-0 ceph-mgr[74653]: log_channel(audit) log [DBG] : from='client.37350 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:16 compute-0 ceph-mgr[74653]: log_channel(audit) log [DBG] : from='client.50639 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:21:04.725133+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 365641728 unmapped: 59080704 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:21:05.725240+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 365641728 unmapped: 59080704 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:21:06.725344+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 415 heartbeat osd_stat(store_statfs(0x1a8b8c000/0x0/0x1bfc00000, data 0x16b831a/0x18e2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 19.348615646s of 19.486160278s, submitted: 30
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4159972 data_alloc: 234881024 data_used: 16629760
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 365985792 unmapped: 58736640 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:21:07.725461+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 369467392 unmapped: 55255040 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:21:08.725566+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 415 heartbeat osd_stat(store_statfs(0x1a802a000/0x0/0x1bfc00000, data 0x221a31a/0x2444000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368459776 unmapped: 56262656 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:21:09.725689+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368459776 unmapped: 56262656 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:21:10.725796+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368459776 unmapped: 56262656 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:21:11.725940+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4264758 data_alloc: 234881024 data_used: 16855040
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368459776 unmapped: 56262656 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:21:12.726064+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 415 heartbeat osd_stat(store_statfs(0x1a7fa2000/0x0/0x1bfc00000, data 0x22a131a/0x24cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368459776 unmapped: 56262656 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:21:13.726221+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368459776 unmapped: 56262656 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:21:14.726356+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368459776 unmapped: 56262656 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:21:15.726478+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 415 heartbeat osd_stat(store_statfs(0x1a7f82000/0x0/0x1bfc00000, data 0x22c231a/0x24ec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368467968 unmapped: 56254464 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:21:16.726626+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4260826 data_alloc: 234881024 data_used: 16855040
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368467968 unmapped: 56254464 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:21:17.726773+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368467968 unmapped: 56254464 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:21:18.726912+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 415 heartbeat osd_stat(store_statfs(0x1a7f82000/0x0/0x1bfc00000, data 0x22c231a/0x24ec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368476160 unmapped: 56246272 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:21:19.727030+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368476160 unmapped: 56246272 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:21:20.727609+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.560415268s of 14.110114098s, submitted: 76
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368476160 unmapped: 56246272 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:21:21.727863+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4262870 data_alloc: 234881024 data_used: 16855040
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368476160 unmapped: 56246272 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:21:22.728735+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 415 heartbeat osd_stat(store_statfs(0x1a7f6e000/0x0/0x1bfc00000, data 0x22d531a/0x24ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 415 ms_handle_reset con 0x562bbbec1000 session 0x562bbccd9c20
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 415 ms_handle_reset con 0x562bbbec3400 session 0x562bbd156780
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368476160 unmapped: 56246272 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:21:23.728906+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368476160 unmapped: 56246272 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:21:24.729340+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:21:25.729573+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368476160 unmapped: 56246272 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:21:26.729725+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368476160 unmapped: 56246272 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 415 heartbeat osd_stat(store_statfs(0x1a7f6f000/0x0/0x1bfc00000, data 0x22d531a/0x24ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4261062 data_alloc: 234881024 data_used: 16855040
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:21:27.729917+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368476160 unmapped: 56246272 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:21:28.730058+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368476160 unmapped: 56246272 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:21:29.730259+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368476160 unmapped: 56246272 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:21:30.730590+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368476160 unmapped: 56246272 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 415 heartbeat osd_stat(store_statfs(0x1a7f6f000/0x0/0x1bfc00000, data 0x22d531a/0x24ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bba10c800
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 415 ms_handle_reset con 0x562bba10c800 session 0x562bbca143c0
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bba219c00
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.339342117s of 10.381141663s, submitted: 11
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:21:31.730727+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368484352 unmapped: 56238080 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _renew_subs
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 415 handle_osd_map epochs [416,416], i have 415, src has [1,416]
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 416 ms_handle_reset con 0x562bba219c00 session 0x562bbafa4b40
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bbbea4000
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 416 ms_handle_reset con 0x562bbbea4000 session 0x562bbcd625a0
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bbbec1000
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 416 ms_handle_reset con 0x562bbbec1000 session 0x562bb9c134a0
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bbcb39c00
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 416 ms_handle_reset con 0x562bbcb39c00 session 0x562bbd133860
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4267212 data_alloc: 234881024 data_used: 16867328
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:21:32.731186+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368500736 unmapped: 56221696 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bba10c800
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 416 ms_handle_reset con 0x562bba10c800 session 0x562bbcd5d680
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:21:33.731421+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368500736 unmapped: 56221696 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bba219c00
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bbbea4000
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:21:34.731677+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368500736 unmapped: 56221696 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:21:35.731866+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368500736 unmapped: 56221696 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a7f6a000/0x0/0x1bfc00000, data 0x22d6fd5/0x2503000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:21:36.732015+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368500736 unmapped: 56221696 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:21:37.732188+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4267824 data_alloc: 234881024 data_used: 16887808
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 416 ms_handle_reset con 0x562bbb1c6400 session 0x562bbc817860
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368500736 unmapped: 56221696 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 416 ms_handle_reset con 0x562bbf1b7400 session 0x562bbba341e0
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bbbec1000
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 416 ms_handle_reset con 0x562bbbec1000 session 0x562bbd1570e0
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:21:38.732321+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 364265472 unmapped: 60456960 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:21:39.732445+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 364265472 unmapped: 60456960 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bbcb7c000
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:21:40.732561+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 416 ms_handle_reset con 0x562bbcb7c000 session 0x562bbc817860
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bbcb7c000
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 364273664 unmapped: 60448768 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 416 handle_osd_map epochs [416,417], i have 416, src has [1,417]
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _renew_subs
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 416 handle_osd_map epochs [417,417], i have 417, src has [1,417]
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 417 ms_handle_reset con 0x562bbcb7c000 session 0x562bb9c134a0
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 417 heartbeat osd_stat(store_statfs(0x1a968f000/0x0/0x1bfc00000, data 0xbb2f73/0xdde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:21:41.732699+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 364281856 unmapped: 60440576 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:21:42.732827+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4012318 data_alloc: 218103808 data_used: 4845568
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 364281856 unmapped: 60440576 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bba10c800
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.201254845s of 11.307619095s, submitted: 38
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 417 ms_handle_reset con 0x562bba10c800 session 0x562bbc5cd860
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:21:43.733073+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 364281856 unmapped: 60440576 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bbb1c6400
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 417 ms_handle_reset con 0x562bbb1c6400 session 0x562bbad94960
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bbbec1000
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 417 ms_handle_reset con 0x562bbbec1000 session 0x562bbaf9f860
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:21:44.733278+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 364298240 unmapped: 60424192 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:21:45.733529+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 364298240 unmapped: 60424192 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 417 heartbeat osd_stat(store_statfs(0x1a968f000/0x0/0x1bfc00000, data 0xbb4bae/0xddf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bbf1b7400
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 417 ms_handle_reset con 0x562bbf1b7400 session 0x562bbba20b40
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:21:46.733750+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 364306432 unmapped: 60416000 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:21:47.734167+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4014041 data_alloc: 218103808 data_used: 5079040
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 364306432 unmapped: 60416000 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 417 heartbeat osd_stat(store_statfs(0x1a968e000/0x0/0x1bfc00000, data 0xbb4b4c/0xdde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:21:48.734268+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 364306432 unmapped: 60416000 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:21:49.734449+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 417 heartbeat osd_stat(store_statfs(0x1a968e000/0x0/0x1bfc00000, data 0xbb4b4c/0xdde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 364306432 unmapped: 60416000 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:21:50.734606+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 364306432 unmapped: 60416000 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:21:51.734755+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 364314624 unmapped: 60407808 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:21:52.734894+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4014041 data_alloc: 218103808 data_used: 5079040
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 364314624 unmapped: 60407808 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 417 handle_osd_map epochs [418,418], i have 417, src has [1,418]
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.923768044s of 10.001539230s, submitted: 23
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:21:53.735053+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 364249088 unmapped: 60473344 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a968c000/0x0/0x1bfc00000, data 0xbb668b/0xde1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:21:54.735190+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 364249088 unmapped: 60473344 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bba219c00 session 0x562bbd1323c0
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bbbea4000 session 0x562bbc7dd2c0
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:21:55.735308+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 364249088 unmapped: 60473344 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bbf1b7400
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bbf1b7400 session 0x562bb9c150e0
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:21:56.735443+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 364265472 unmapped: 60456960 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a985a000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:21:57.735567+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3994088 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 364265472 unmapped: 60456960 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:21:58.735695+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 364265472 unmapped: 60456960 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a985a000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a985a000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:21:59.735781+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 364265472 unmapped: 60456960 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a985a000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:22:00.735911+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 364265472 unmapped: 60456960 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:22:01.736036+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 364265472 unmapped: 60456960 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:22:02.736163+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3994088 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 364265472 unmapped: 60456960 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:22:03.736330+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 364273664 unmapped: 60448768 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a985a000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:22:04.736448+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 364273664 unmapped: 60448768 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:22:05.736539+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 364273664 unmapped: 60448768 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:22:06.736693+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 364273664 unmapped: 60448768 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a985a000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:22:07.736853+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3994088 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 364273664 unmapped: 60448768 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:22:08.736980+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 364273664 unmapped: 60448768 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a985a000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:22:09.737116+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 364281856 unmapped: 60440576 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:22:10.737244+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 364281856 unmapped: 60440576 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:22:11.737398+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 364281856 unmapped: 60440576 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:22:12.737515+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3994088 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 364281856 unmapped: 60440576 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:22:13.737668+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 364281856 unmapped: 60440576 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a985a000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:22:14.737779+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 364281856 unmapped: 60440576 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a985a000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:22:15.737908+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 364281856 unmapped: 60440576 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:22:16.738028+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 364290048 unmapped: 60432384 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:22:17.738161+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3994088 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 364290048 unmapped: 60432384 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a985a000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:22:18.738298+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 364298240 unmapped: 60424192 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:22:19.738433+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a985a000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 364306432 unmapped: 60416000 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a985a000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:22:20.738550+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 364306432 unmapped: 60416000 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:22:21.738696+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 364306432 unmapped: 60416000 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:22:22.738831+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3994088 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 364306432 unmapped: 60416000 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:22:23.738998+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 364306432 unmapped: 60416000 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bba10c800
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bba10c800 session 0x562bbbaab2c0
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bbb1c6400
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bbb1c6400 session 0x562bb9bf4b40
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bba10c800
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bba10c800 session 0x562bbae69a40
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bba219c00
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bba219c00 session 0x562bbc816960
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bbbea4000
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 31.296077728s of 31.380214691s, submitted: 33
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bbbea4000 session 0x562bbafa5e00
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bbf1b7400
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bbf1b7400 session 0x562bbafa2000
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:22:24.739122+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 364609536 unmapped: 60112896 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a9294000/0x0/0x1bfc00000, data 0xfaf6a4/0x11da000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:22:25.739534+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 364609536 unmapped: 60112896 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:22:26.739669+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 364609536 unmapped: 60112896 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:22:27.739921+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4046395 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 364617728 unmapped: 60104704 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:22:28.740077+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 364617728 unmapped: 60104704 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a9294000/0x0/0x1bfc00000, data 0xfaf6dd/0x11da000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:22:29.740206+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 364617728 unmapped: 60104704 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:22:30.740341+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a9294000/0x0/0x1bfc00000, data 0xfaf6dd/0x11da000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 364617728 unmapped: 60104704 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:22:31.740479+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 364617728 unmapped: 60104704 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bbbec1000
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bbbec1000 session 0x562bbba3f680
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a9294000/0x0/0x1bfc00000, data 0xfaf6dd/0x11da000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:22:32.740631+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bba10c800
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bba10c800 session 0x562bb9bf4960
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4046395 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 364617728 unmapped: 60104704 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bba219c00
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bba219c00 session 0x562bb9c06b40
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bbbea4000
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bbbea4000 session 0x562bbbaf6d20
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:22:33.740777+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 364625920 unmapped: 60096512 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bbf1b7400
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:22:34.740913+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 364625920 unmapped: 60096512 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:22:35.741045+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 364634112 unmapped: 60088320 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:22:36.741178+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bbcb7c000
Jan 20 15:53:17 compute-0 ceph-mon[74360]: from='client.50182 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:17 compute-0 ceph-mon[74360]: pgmap v3932: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:53:17 compute-0 ceph-mon[74360]: from='client.37296 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:17 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/1908376492' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Jan 20 15:53:17 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/598650118' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 20 15:53:17 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/1611874348' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Jan 20 15:53:17 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/878803266' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 20 15:53:17 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/693312015' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 20 15:53:17 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/2810450873' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 20 15:53:17 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/2432965860' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Jan 20 15:53:17 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/692267894' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Jan 20 15:53:17 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/947449568' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 364634112 unmapped: 60088320 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:22:37.741337+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4088233 data_alloc: 218103808 data_used: 10280960
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 364634112 unmapped: 60088320 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a9293000/0x0/0x1bfc00000, data 0xfaf6ed/0x11db000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:22:38.741518+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 364634112 unmapped: 60088320 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:22:39.741651+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 364634112 unmapped: 60088320 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:22:40.741773+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 364634112 unmapped: 60088320 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:22:41.741899+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 364634112 unmapped: 60088320 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:22:42.742086+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4088233 data_alloc: 218103808 data_used: 10280960
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 364634112 unmapped: 60088320 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a9293000/0x0/0x1bfc00000, data 0xfaf6ed/0x11db000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:22:43.742291+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 364634112 unmapped: 60088320 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:22:44.742426+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a9293000/0x0/0x1bfc00000, data 0xfaf6ed/0x11db000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 364634112 unmapped: 60088320 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:22:45.742719+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 364634112 unmapped: 60088320 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:22:46.742861+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 364642304 unmapped: 60080128 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:22:47.742995+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 23.288240433s of 23.380861282s, submitted: 35
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4127389 data_alloc: 218103808 data_used: 10276864
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368328704 unmapped: 56393728 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:22:48.743127+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368640000 unmapped: 56082432 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:22:49.743278+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368640000 unmapped: 56082432 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a8c24000/0x0/0x1bfc00000, data 0x161e6ed/0x184a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:22:50.743425+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368640000 unmapped: 56082432 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:22:51.743547+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368640000 unmapped: 56082432 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:22:52.743669+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4155631 data_alloc: 218103808 data_used: 12382208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368640000 unmapped: 56082432 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:22:53.743841+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368640000 unmapped: 56082432 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:22:54.743958+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a8c24000/0x0/0x1bfc00000, data 0x161e6ed/0x184a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368640000 unmapped: 56082432 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:22:55.744103+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368721920 unmapped: 56000512 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:22:56.744252+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368721920 unmapped: 56000512 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:22:57.744413+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.778119087s of 10.070379257s, submitted: 115
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bbcb7c000 session 0x562bbaf9ef00
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bbf1b7400 session 0x562bb9c152c0
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4158007 data_alloc: 218103808 data_used: 12402688
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368721920 unmapped: 56000512 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a8bf4000/0x0/0x1bfc00000, data 0x164e6ed/0x187a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:22:58.744584+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368721920 unmapped: 56000512 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:22:59.744723+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368721920 unmapped: 56000512 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:23:00.744844+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368721920 unmapped: 56000512 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:23:01.744972+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368721920 unmapped: 56000512 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:23:02.745111+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4158007 data_alloc: 218103808 data_used: 12402688
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368721920 unmapped: 56000512 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a8bf4000/0x0/0x1bfc00000, data 0x164e6ed/0x187a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:23:03.745280+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368721920 unmapped: 56000512 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bba10c800
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bba10c800 session 0x562bbc7dc1e0
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:23:04.745483+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368721920 unmapped: 56000512 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bba219c00
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:23:05.745601+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bbbea4000
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368721920 unmapped: 56000512 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:23:06.745748+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368721920 unmapped: 56000512 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:23:07.745874+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4159927 data_alloc: 218103808 data_used: 12570624
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368721920 unmapped: 56000512 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a8bf4000/0x0/0x1bfc00000, data 0x164e6ed/0x187a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:23:08.746043+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368721920 unmapped: 56000512 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:23:09.746203+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368721920 unmapped: 56000512 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:23:10.746356+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368721920 unmapped: 56000512 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:23:11.746481+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368721920 unmapped: 56000512 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a8bf4000/0x0/0x1bfc00000, data 0x164e6ed/0x187a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:23:12.746651+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4159927 data_alloc: 218103808 data_used: 12570624
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368721920 unmapped: 56000512 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:23:13.746807+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a8bf4000/0x0/0x1bfc00000, data 0x164e6ed/0x187a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368721920 unmapped: 56000512 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:23:14.746939+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368721920 unmapped: 56000512 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:23:15.747064+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368721920 unmapped: 56000512 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:23:16.747195+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368721920 unmapped: 56000512 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 19.400562286s of 19.403242111s, submitted: 1
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a8bf4000/0x0/0x1bfc00000, data 0x164e6ed/0x187a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:23:17.747350+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4167471 data_alloc: 218103808 data_used: 13279232
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368721920 unmapped: 56000512 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a8bf4000/0x0/0x1bfc00000, data 0x164e6ed/0x187a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:23:18.747472+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368721920 unmapped: 56000512 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a8bf4000/0x0/0x1bfc00000, data 0x164e6ed/0x187a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:23:19.747625+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a8bf4000/0x0/0x1bfc00000, data 0x164e6ed/0x187a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368721920 unmapped: 56000512 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:23:20.747797+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368721920 unmapped: 56000512 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:23:21.747916+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368721920 unmapped: 56000512 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:23:22.748088+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4168127 data_alloc: 218103808 data_used: 13328384
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368721920 unmapped: 56000512 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:23:23.748263+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368721920 unmapped: 56000512 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a8bf4000/0x0/0x1bfc00000, data 0x164e6ed/0x187a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,1])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:23:24.748516+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368721920 unmapped: 56000512 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:23:25.748747+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368721920 unmapped: 56000512 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:23:26.748913+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bbbea4000 session 0x562bbccd2780
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bba219c00 session 0x562bbc7dde00
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368721920 unmapped: 56000512 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:23:27.749081+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bbcb7c000
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 6.589791775s of 10.509191513s, submitted: 12
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4168415 data_alloc: 218103808 data_used: 13336576
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368730112 unmapped: 55992320 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:23:28.749283+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368730112 unmapped: 55992320 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:23:29.749532+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bbcb7c000 session 0x562bbba34960
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368746496 unmapped: 55975936 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a954d000/0x0/0x1bfc00000, data 0x9ea68b/0xc15000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:23:30.749697+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368746496 unmapped: 55975936 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a954e000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:23:31.749905+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368746496 unmapped: 55975936 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:23:32.750059+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a954e000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4004774 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368746496 unmapped: 55975936 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:23:33.750232+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368754688 unmapped: 55967744 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a954e000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:23:34.750340+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368762880 unmapped: 55959552 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:23:35.750466+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368762880 unmapped: 55959552 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:23:36.750635+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368762880 unmapped: 55959552 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:23:37.750791+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4004774 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368762880 unmapped: 55959552 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a954e000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:23:38.750992+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368771072 unmapped: 55951360 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:23:39.751195+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368771072 unmapped: 55951360 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:23:40.751367+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368771072 unmapped: 55951360 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:23:41.751549+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368771072 unmapped: 55951360 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:23:42.751736+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4004774 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368771072 unmapped: 55951360 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a954e000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:23:43.751992+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368771072 unmapped: 55951360 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:23:44.752138+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368779264 unmapped: 55943168 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:23:45.752280+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368779264 unmapped: 55943168 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:23:46.752443+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a954e000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368779264 unmapped: 55943168 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:23:47.752610+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4004774 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368779264 unmapped: 55943168 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:23:48.752758+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368779264 unmapped: 55943168 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a954e000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:23:49.752881+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368779264 unmapped: 55943168 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:23:50.753057+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a954e000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368779264 unmapped: 55943168 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:23:51.753200+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368779264 unmapped: 55943168 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:23:52.753358+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4004774 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368787456 unmapped: 55934976 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:23:53.753534+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368787456 unmapped: 55934976 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bbebeb000
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 25.396490097s of 26.856384277s, submitted: 21
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a954e000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [0,0,0,0,0,1])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:23:54.753698+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bbebeb000 session 0x562bbae68b40
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bba10c800
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bba10c800 session 0x562bbd133680
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bba219c00
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bba219c00 session 0x562bbae68960
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bbbea4000
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a954e000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [1])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bbbea4000 session 0x562bbba2ed20
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bbcb7c000
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bbcb7c000 session 0x562bbba2e780
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368803840 unmapped: 55918592 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:23:55.753902+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368803840 unmapped: 55918592 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:23:56.754147+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368803840 unmapped: 55918592 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:23:57.754299+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4045063 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368803840 unmapped: 55918592 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a9306000/0x0/0x1bfc00000, data 0xf3e67b/0x1168000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:23:58.754449+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368803840 unmapped: 55918592 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:23:59.754639+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368803840 unmapped: 55918592 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:24:00.754755+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368812032 unmapped: 55910400 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:24:01.754891+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368812032 unmapped: 55910400 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bbc9df400
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bbc9df400 session 0x562bbc5cdc20
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a92e2000/0x0/0x1bfc00000, data 0xf6267b/0x118c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:24:02.754996+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bba10c800
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bba219c00
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4047935 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368959488 unmapped: 55762944 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:24:03.755479+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a92e2000/0x0/0x1bfc00000, data 0xf6267b/0x118c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368959488 unmapped: 55762944 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a92e2000/0x0/0x1bfc00000, data 0xf6267b/0x118c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:24:04.755549+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368967680 unmapped: 55754752 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:24:05.760522+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368967680 unmapped: 55754752 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:24:06.760672+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368967680 unmapped: 55754752 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:24:07.760830+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4087775 data_alloc: 218103808 data_used: 10293248
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368967680 unmapped: 55754752 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:24:08.760983+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368967680 unmapped: 55754752 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a92e2000/0x0/0x1bfc00000, data 0xf6267b/0x118c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:24:09.761140+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368967680 unmapped: 55754752 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:24:10.761417+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368967680 unmapped: 55754752 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:24:11.761563+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a92e2000/0x0/0x1bfc00000, data 0xf6267b/0x118c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368967680 unmapped: 55754752 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:24:12.761740+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4087775 data_alloc: 218103808 data_used: 10293248
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368967680 unmapped: 55754752 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:24:13.761936+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a92e2000/0x0/0x1bfc00000, data 0xf6267b/0x118c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368967680 unmapped: 55754752 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a92e2000/0x0/0x1bfc00000, data 0xf6267b/0x118c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:24:14.762075+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 20.390119553s of 20.715553284s, submitted: 16
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 369459200 unmapped: 55263232 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:24:15.762189+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 369909760 unmapped: 54812672 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:24:16.762308+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 369909760 unmapped: 54812672 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:24:17.762447+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4144783 data_alloc: 218103808 data_used: 11100160
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 369909760 unmapped: 54812672 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:24:18.762640+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a8d81000/0x0/0x1bfc00000, data 0x14c367b/0x16ed000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 369909760 unmapped: 54812672 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:24:19.762782+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 369909760 unmapped: 54812672 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a8d81000/0x0/0x1bfc00000, data 0x14c367b/0x16ed000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:24:20.762903+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a8d81000/0x0/0x1bfc00000, data 0x14c367b/0x16ed000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 369909760 unmapped: 54812672 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:24:21.763074+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 369909760 unmapped: 54812672 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:24:22.763230+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4144783 data_alloc: 218103808 data_used: 11100160
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 369909760 unmapped: 54812672 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:24:23.763407+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 369909760 unmapped: 54812672 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:24:24.763539+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bba10c800 session 0x562bb9c14000
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bba219c00 session 0x562bbba2e780
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 369844224 unmapped: 54878208 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:24:25.763651+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a8d81000/0x0/0x1bfc00000, data 0x14c367b/0x16ed000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 369844224 unmapped: 54878208 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:24:26.763796+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a8d81000/0x0/0x1bfc00000, data 0x14c367b/0x16ed000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 369852416 unmapped: 54870016 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:24:27.763991+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4142415 data_alloc: 218103808 data_used: 11108352
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 369852416 unmapped: 54870016 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:24:28.764194+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 369860608 unmapped: 54861824 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:24:29.764324+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 369860608 unmapped: 54861824 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bbbea4000
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.277636528s of 15.398482323s, submitted: 32
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:24:30.764456+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bbcb7c000
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 369860608 unmapped: 54861824 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:24:31.764619+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 369860608 unmapped: 54861824 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:24:32.764782+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a8d81000/0x0/0x1bfc00000, data 0x14c367b/0x16ed000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4142547 data_alloc: 218103808 data_used: 11108352
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 369860608 unmapped: 54861824 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bbbea4000 session 0x562bb9c15a40
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bbcb7c000 session 0x562bbcd5d2c0
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:24:33.765177+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bbc9dfc00
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bbc9dfc00 session 0x562bbbaf63c0
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 367190016 unmapped: 57532416 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:24:34.765303+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 367190016 unmapped: 57532416 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:24:35.765424+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 367190016 unmapped: 57532416 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:24:36.765560+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a985a000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 367190016 unmapped: 57532416 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:24:37.765698+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4010519 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 367190016 unmapped: 57532416 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:24:38.765830+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 367190016 unmapped: 57532416 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:24:39.765903+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 367190016 unmapped: 57532416 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:24:40.766046+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 367190016 unmapped: 57532416 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a985a000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:24:41.766218+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 367190016 unmapped: 57532416 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a985a000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:24:42.766368+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4010519 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 367198208 unmapped: 57524224 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:24:43.766558+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 367198208 unmapped: 57524224 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:24:44.766756+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 367198208 unmapped: 57524224 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:24:45.766910+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a985a000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 367198208 unmapped: 57524224 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a985a000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:24:46.767030+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 367198208 unmapped: 57524224 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:24:47.767175+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4010519 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 367198208 unmapped: 57524224 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:24:48.767252+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 367198208 unmapped: 57524224 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:24:49.767426+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 367198208 unmapped: 57524224 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:24:50.767574+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a985a000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 367206400 unmapped: 57516032 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:24:51.767724+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 367206400 unmapped: 57516032 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:24:52.767890+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4010519 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 367206400 unmapped: 57516032 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:24:53.768022+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 367206400 unmapped: 57516032 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:24:54.768159+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 367206400 unmapped: 57516032 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:24:55.768329+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a985a000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 367206400 unmapped: 57516032 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:24:56.768464+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 367214592 unmapped: 57507840 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:24:57.768610+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4010519 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 367214592 unmapped: 57507840 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:24:58.768767+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 367214592 unmapped: 57507840 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:24:59.768906+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 367214592 unmapped: 57507840 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:25:00.769048+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a985a000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 367214592 unmapped: 57507840 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:25:01.769176+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a985a000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 367214592 unmapped: 57507840 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:25:02.769329+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4010519 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 367214592 unmapped: 57507840 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:25:03.769535+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 367214592 unmapped: 57507840 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:25:04.769731+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 367222784 unmapped: 57499648 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:25:05.769982+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 367222784 unmapped: 57499648 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:25:06.770116+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 367222784 unmapped: 57499648 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a985a000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:25:07.770240+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4010519 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 367222784 unmapped: 57499648 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:25:08.770472+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 367222784 unmapped: 57499648 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:25:09.770655+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a985a000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 367222784 unmapped: 57499648 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:25:10.770893+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 367222784 unmapped: 57499648 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:25:11.771104+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a985a000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 367222784 unmapped: 57499648 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:25:12.771247+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a985a000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4010519 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 367230976 unmapped: 57491456 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:25:13.771450+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a985a000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 367230976 unmapped: 57491456 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:25:14.771618+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a985a000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 367247360 unmapped: 57475072 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:25:15.771754+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 367247360 unmapped: 57475072 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a985a000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:25:16.771907+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bba10c800
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bba10c800 session 0x562bbacc1a40
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bba219c00
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bba219c00 session 0x562bbc817680
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bbbea4000
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bbbea4000 session 0x562bb9bf52c0
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bbcb7c000
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bbcb7c000 session 0x562bbcd62b40
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bbebea800
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 46.297416687s of 46.359588623s, submitted: 23
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bbebea800 session 0x562bbbadfe00
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bba10c800
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bba10c800 session 0x562bbd1572c0
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bba219c00
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bba219c00 session 0x562bbd133860
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bbbea4000
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bbbea4000 session 0x562bbcd62960
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bbcb7c000
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bbcb7c000 session 0x562bbccd9c20
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 367247360 unmapped: 57475072 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:25:17.772035+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4097577 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 367247360 unmapped: 57475072 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:25:18.772161+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 367247360 unmapped: 57475072 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:25:19.772332+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 367247360 unmapped: 57475072 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:25:20.772506+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a8cf1000/0x0/0x1bfc00000, data 0x155268b/0x177d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 367247360 unmapped: 57475072 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:25:21.772679+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 367247360 unmapped: 57475072 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:25:22.772821+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4097577 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 367255552 unmapped: 57466880 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:25:23.772974+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bbc9f7400
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bbc9f7400 session 0x562bbcd77c20
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bba10c800
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bba219c00
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 367411200 unmapped: 57311232 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:25:24.773114+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a8ccd000/0x0/0x1bfc00000, data 0x157668b/0x17a1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 367411200 unmapped: 57311232 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:25:25.773293+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a8ccd000/0x0/0x1bfc00000, data 0x157668b/0x17a1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 369254400 unmapped: 55468032 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:25:26.773451+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 369254400 unmapped: 55468032 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:25:27.773576+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a8ccd000/0x0/0x1bfc00000, data 0x157668b/0x17a1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4184941 data_alloc: 234881024 data_used: 16580608
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 369254400 unmapped: 55468032 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:25:28.773715+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a8ccd000/0x0/0x1bfc00000, data 0x157668b/0x17a1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 369254400 unmapped: 55468032 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:25:29.773882+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 369254400 unmapped: 55468032 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:25:30.773997+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a8ccd000/0x0/0x1bfc00000, data 0x157668b/0x17a1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a8ccd000/0x0/0x1bfc00000, data 0x157668b/0x17a1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 369254400 unmapped: 55468032 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:25:31.774127+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 369254400 unmapped: 55468032 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:25:32.774328+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4184941 data_alloc: 234881024 data_used: 16580608
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 369254400 unmapped: 55468032 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:25:33.774589+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 369254400 unmapped: 55468032 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:25:34.774787+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a8ccd000/0x0/0x1bfc00000, data 0x157668b/0x17a1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 369254400 unmapped: 55468032 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:25:35.774908+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 369254400 unmapped: 55468032 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:25:36.775060+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a8ccd000/0x0/0x1bfc00000, data 0x157668b/0x17a1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 19.792362213s of 19.867391586s, submitted: 11
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 372465664 unmapped: 52256768 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:25:37.775233+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4224259 data_alloc: 234881024 data_used: 17395712
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 371793920 unmapped: 52928512 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:25:38.775360+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 371793920 unmapped: 52928512 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:25:39.775471+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 371793920 unmapped: 52928512 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:25:40.775596+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a8961000/0x0/0x1bfc00000, data 0x18e268b/0x1b0d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 371793920 unmapped: 52928512 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:25:41.775713+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 371793920 unmapped: 52928512 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:25:42.775837+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4224259 data_alloc: 234881024 data_used: 17395712
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 371793920 unmapped: 52928512 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:25:43.776019+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 371793920 unmapped: 52928512 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:25:44.776153+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a8961000/0x0/0x1bfc00000, data 0x18e268b/0x1b0d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 371793920 unmapped: 52928512 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:25:45.776290+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 371793920 unmapped: 52928512 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:25:46.776499+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 371793920 unmapped: 52928512 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:25:47.776647+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4224275 data_alloc: 234881024 data_used: 17395712
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 371793920 unmapped: 52928512 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:25:48.776780+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 371793920 unmapped: 52928512 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:25:49.776897+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 371793920 unmapped: 52928512 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:25:50.777014+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a8961000/0x0/0x1bfc00000, data 0x18e268b/0x1b0d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 371793920 unmapped: 52928512 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:25:51.777142+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 371793920 unmapped: 52928512 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:25:52.777258+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4224275 data_alloc: 234881024 data_used: 17395712
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 371793920 unmapped: 52928512 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:25:53.777412+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 371793920 unmapped: 52928512 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:25:54.777520+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 371793920 unmapped: 52928512 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:25:55.777637+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a8961000/0x0/0x1bfc00000, data 0x18e268b/0x1b0d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 371793920 unmapped: 52928512 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:25:56.777756+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:25:57.778047+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 371793920 unmapped: 52928512 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a8961000/0x0/0x1bfc00000, data 0x18e268b/0x1b0d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4224275 data_alloc: 234881024 data_used: 17395712
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:25:58.778203+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 371793920 unmapped: 52928512 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:25:59.778338+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 371793920 unmapped: 52928512 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:26:00.778496+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 371793920 unmapped: 52928512 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:26:01.778624+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 371810304 unmapped: 52912128 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:26:02.778749+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 371810304 unmapped: 52912128 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a8961000/0x0/0x1bfc00000, data 0x18e268b/0x1b0d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4224275 data_alloc: 234881024 data_used: 17395712
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:26:03.778921+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 371810304 unmapped: 52912128 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:26:04.779085+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 371810304 unmapped: 52912128 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:26:05.779223+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 371810304 unmapped: 52912128 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a8961000/0x0/0x1bfc00000, data 0x18e268b/0x1b0d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:26:06.779364+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 371810304 unmapped: 52912128 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:26:07.779563+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 371810304 unmapped: 52912128 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4224275 data_alloc: 234881024 data_used: 17395712
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:26:08.779706+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 371810304 unmapped: 52912128 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:26:09.779869+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 371818496 unmapped: 52903936 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a8961000/0x0/0x1bfc00000, data 0x18e268b/0x1b0d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:26:10.780057+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 371818496 unmapped: 52903936 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:26:11.780192+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 371818496 unmapped: 52903936 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:26:12.780371+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 371826688 unmapped: 52895744 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bbbea4000
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bbbea4000 session 0x562bbd133c20
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bbcb7c000
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bbcb7c000 session 0x562bbbcd2000
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bbcb37800
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bbcb37800 session 0x562bbcc0fc20
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bbbea5800
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bbbea5800 session 0x562bb9fda000
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bba218400
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 36.371215820s of 36.478992462s, submitted: 32
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bba218400 session 0x562bb9c79c20
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bba218400
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:26:13.780554+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4298643 data_alloc: 234881024 data_used: 17395712
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bba218400 session 0x562bbba2fe00
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bbbea4000
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bbbea4000 session 0x562bbc7dcf00
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 371875840 unmapped: 52846592 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bbbea5800
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bbbea5800 session 0x562bbd132d20
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bbcb37800
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bbcb37800 session 0x562bbcd621e0
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:26:14.780705+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 371875840 unmapped: 52846592 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:26:15.780825+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 371875840 unmapped: 52846592 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a7fd3000/0x0/0x1bfc00000, data 0x226f69b/0x249b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:26:16.781030+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 371875840 unmapped: 52846592 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:26:17.781203+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 371875840 unmapped: 52846592 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:26:18.781353+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4298643 data_alloc: 234881024 data_used: 17395712
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 371875840 unmapped: 52846592 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:26:19.781522+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 371884032 unmapped: 52838400 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:26:20.781689+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 371884032 unmapped: 52838400 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:26:21.781800+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 371884032 unmapped: 52838400 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a7fd3000/0x0/0x1bfc00000, data 0x226f69b/0x249b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:26:22.781929+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 371884032 unmapped: 52838400 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:26:23.782092+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4298643 data_alloc: 234881024 data_used: 17395712
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 371884032 unmapped: 52838400 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:26:24.782240+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 371884032 unmapped: 52838400 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:26:25.782439+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 371892224 unmapped: 52830208 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a7fd3000/0x0/0x1bfc00000, data 0x226f69b/0x249b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:26:26.782586+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 371892224 unmapped: 52830208 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bbcb7c000
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a7fd3000/0x0/0x1bfc00000, data 0x226f69b/0x249b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:26:27.782753+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 371892224 unmapped: 52830208 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:26:28.782949+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4298643 data_alloc: 234881024 data_used: 17395712
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 371892224 unmapped: 52830208 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:26:29.783085+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 372719616 unmapped: 52002816 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a7fd3000/0x0/0x1bfc00000, data 0x226f69b/0x249b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:26:30.783191+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376848384 unmapped: 47874048 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:26:31.783329+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376848384 unmapped: 47874048 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a7fd3000/0x0/0x1bfc00000, data 0x226f69b/0x249b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:26:32.783490+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376856576 unmapped: 47865856 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:26:33.783727+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4369203 data_alloc: 234881024 data_used: 27398144
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376856576 unmapped: 47865856 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:26:34.783876+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376856576 unmapped: 47865856 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:26:35.784119+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376856576 unmapped: 47865856 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:26:36.784276+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376856576 unmapped: 47865856 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:26:37.784420+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376856576 unmapped: 47865856 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a7fd3000/0x0/0x1bfc00000, data 0x226f69b/0x249b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:26:38.784550+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4369203 data_alloc: 234881024 data_used: 27398144
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376856576 unmapped: 47865856 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:26:39.784678+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376856576 unmapped: 47865856 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 26.989894867s of 27.156105042s, submitted: 10
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:26:40.784793+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 379150336 unmapped: 45572096 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a7fd3000/0x0/0x1bfc00000, data 0x226f69b/0x249b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [1])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:26:41.784911+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 379248640 unmapped: 45473792 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:26:42.785063+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 379289600 unmapped: 45432832 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:26:43.785257+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4437531 data_alloc: 234881024 data_used: 28110848
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 379289600 unmapped: 45432832 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a78dc000/0x0/0x1bfc00000, data 0x296669b/0x2b92000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:26:44.785511+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 379297792 unmapped: 45424640 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:26:45.785631+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 379297792 unmapped: 45424640 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:26:46.785751+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 5400.1 total, 600.0 interval
                                           Cumulative writes: 55K writes, 207K keys, 55K commit groups, 1.0 writes per commit group, ingest: 0.19 GB, 0.04 MB/s
                                           Cumulative WAL: 55K writes, 20K syncs, 2.66 writes per sync, written: 0.19 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 3703 writes, 11K keys, 3703 commit groups, 1.0 writes per commit group, ingest: 11.83 MB, 0.02 MB/s
                                           Interval WAL: 3703 writes, 1503 syncs, 2.46 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 379297792 unmapped: 45424640 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets getting new tickets!
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:26:47.786137+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _finish_auth 0
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:26:47.786826+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 379305984 unmapped: 45416448 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:26:48.786517+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a78d9000/0x0/0x1bfc00000, data 0x296969b/0x2b95000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4436507 data_alloc: 234881024 data_used: 28110848
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 379305984 unmapped: 45416448 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:26:49.786759+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 379305984 unmapped: 45416448 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:26:50.786893+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 379305984 unmapped: 45416448 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:26:51.787138+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 379314176 unmapped: 45408256 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:26:52.788587+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bbcb7c000 session 0x562bbcd77860
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a78d9000/0x0/0x1bfc00000, data 0x296969b/0x2b95000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 379314176 unmapped: 45408256 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bbcb7c000
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.305365562s of 12.589902878s, submitted: 59
Jan 20 15:53:17 compute-0 ceph-osd[84815]: mgrc ms_handle_reset ms_handle_reset con 0x562bbcb6ec00
Jan 20 15:53:17 compute-0 ceph-osd[84815]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/2542147622
Jan 20 15:53:17 compute-0 ceph-osd[84815]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/2542147622,v1:192.168.122.100:6801/2542147622]
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: get_auth_request con 0x562bbc9df400 auth_method 0
Jan 20 15:53:17 compute-0 ceph-osd[84815]: mgrc handle_mgr_configure stats_period=5
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bbcb7c000 session 0x562bba0fd2c0
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:26:53.788831+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4232909 data_alloc: 234881024 data_used: 17395712
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377438208 unmapped: 47284224 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:26:54.789056+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377438208 unmapped: 47284224 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a8961000/0x0/0x1bfc00000, data 0x18e268b/0x1b0d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:26:55.789248+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377438208 unmapped: 47284224 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:26:56.790018+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a8961000/0x0/0x1bfc00000, data 0x18e268b/0x1b0d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377438208 unmapped: 47284224 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:26:57.790223+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a8961000/0x0/0x1bfc00000, data 0x18e268b/0x1b0d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377438208 unmapped: 47284224 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:26:58.790797+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4232909 data_alloc: 234881024 data_used: 17395712
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377438208 unmapped: 47284224 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:26:59.790968+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377438208 unmapped: 47284224 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:27:00.791355+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a8961000/0x0/0x1bfc00000, data 0x18e268b/0x1b0d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377438208 unmapped: 47284224 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bbbec4c00 session 0x562bb9bf4d20
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bba218400
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:27:01.791561+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377438208 unmapped: 47284224 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:27:02.792001+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377438208 unmapped: 47284224 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:27:03.792221+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4232909 data_alloc: 234881024 data_used: 17395712
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377438208 unmapped: 47284224 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:27:04.792563+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bba10c800 session 0x562bbc817e00
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377438208 unmapped: 47284224 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bba219c00 session 0x562bb9bf4780
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bbbea4000
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.499879837s of 12.544260979s, submitted: 12
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:27:05.792697+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bbbea4000 session 0x562bbd133a40
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368254976 unmapped: 56467456 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a8961000/0x0/0x1bfc00000, data 0x18e268b/0x1b0d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:27:06.792921+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368254976 unmapped: 56467456 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:27:07.793106+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a985a000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368254976 unmapped: 56467456 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a985a000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:27:08.793274+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4030027 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368254976 unmapped: 56467456 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:27:09.793467+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368254976 unmapped: 56467456 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:27:10.793582+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368254976 unmapped: 56467456 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:27:11.793734+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368254976 unmapped: 56467456 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:27:12.793942+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368254976 unmapped: 56467456 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a985a000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:27:13.794125+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4030027 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368254976 unmapped: 56467456 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:27:14.794452+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a985a000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368254976 unmapped: 56467456 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:27:15.794578+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368254976 unmapped: 56467456 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:27:16.794766+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368254976 unmapped: 56467456 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:27:17.794950+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368254976 unmapped: 56467456 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a985a000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:27:18.795119+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4030027 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368254976 unmapped: 56467456 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:27:19.795253+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368254976 unmapped: 56467456 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:27:20.795413+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368254976 unmapped: 56467456 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:27:21.795560+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368254976 unmapped: 56467456 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:27:22.795745+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368254976 unmapped: 56467456 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a985a000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:27:23.795969+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4030027 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368254976 unmapped: 56467456 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:27:24.796125+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368254976 unmapped: 56467456 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:27:25.796271+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368254976 unmapped: 56467456 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:27:26.796466+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368254976 unmapped: 56467456 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:27:27.796618+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368254976 unmapped: 56467456 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:27:28.796785+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4030027 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368254976 unmapped: 56467456 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a985a000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:27:29.796943+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368254976 unmapped: 56467456 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:27:30.797109+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368254976 unmapped: 56467456 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:27:31.797310+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368254976 unmapped: 56467456 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a985a000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:27:32.797482+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368254976 unmapped: 56467456 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:27:33.797644+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4030027 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368254976 unmapped: 56467456 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:27:34.797810+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368254976 unmapped: 56467456 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:27:35.797947+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368254976 unmapped: 56467456 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:27:36.798127+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368254976 unmapped: 56467456 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:27:37.798273+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368254976 unmapped: 56467456 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bbbea5800
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 32.571235657s of 32.601169586s, submitted: 12
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a985a000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bbbea5800 session 0x562bbaf9ef00
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bba10c800
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bba10c800 session 0x562bbce543c0
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bba219c00
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bba219c00 session 0x562bb9bf4780
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bbbea4000
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bbbea4000 session 0x562bb9bf4d20
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bbbea5800
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bbbea5800 session 0x562bbcd621e0
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:27:38.798469+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4108756 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368582656 unmapped: 56139776 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a8eb9000/0x0/0x1bfc00000, data 0x138b67b/0x15b5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:27:39.798587+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368582656 unmapped: 56139776 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:27:40.798725+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368582656 unmapped: 56139776 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:27:41.798872+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368582656 unmapped: 56139776 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a8eb9000/0x0/0x1bfc00000, data 0x138b67b/0x15b5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:27:42.799018+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bbcb7c000
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bbcb7c000 session 0x562bb9c79c20
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368582656 unmapped: 56139776 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bba10c800
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bba219c00
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:27:43.799188+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a8e95000/0x0/0x1bfc00000, data 0x13af67b/0x15d9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4112572 data_alloc: 218103808 data_used: 4800512
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368582656 unmapped: 56139776 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:27:44.799313+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368590848 unmapped: 56131584 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:27:45.799435+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368590848 unmapped: 56131584 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:27:46.799635+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368590848 unmapped: 56131584 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:27:47.799757+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a8e95000/0x0/0x1bfc00000, data 0x13af67b/0x15d9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368590848 unmapped: 56131584 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:27:48.799893+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4183772 data_alloc: 234881024 data_used: 14716928
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368590848 unmapped: 56131584 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:27:49.800049+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368590848 unmapped: 56131584 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:27:50.800235+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368590848 unmapped: 56131584 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:27:51.800474+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368590848 unmapped: 56131584 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:27:52.800680+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a8e95000/0x0/0x1bfc00000, data 0x13af67b/0x15d9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368590848 unmapped: 56131584 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:27:53.800866+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4183772 data_alloc: 234881024 data_used: 14716928
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 368599040 unmapped: 56123392 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:27:54.801045+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.667245865s of 16.764028549s, submitted: 24
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 372416512 unmapped: 52305920 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:27:55.801214+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 372613120 unmapped: 52109312 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:27:56.801434+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 372621312 unmapped: 52101120 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a882b000/0x0/0x1bfc00000, data 0x1a1967b/0x1c43000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:27:57.801640+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 372621312 unmapped: 52101120 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:27:58.801841+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4236828 data_alloc: 234881024 data_used: 16044032
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 372621312 unmapped: 52101120 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:27:59.802018+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 372621312 unmapped: 52101120 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a882b000/0x0/0x1bfc00000, data 0x1a1967b/0x1c43000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:28:00.802151+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a882b000/0x0/0x1bfc00000, data 0x1a1967b/0x1c43000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 372621312 unmapped: 52101120 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:28:01.802292+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 372621312 unmapped: 52101120 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:28:02.802534+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 372621312 unmapped: 52101120 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:28:03.802761+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4236828 data_alloc: 234881024 data_used: 16044032
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 372621312 unmapped: 52101120 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:28:04.803039+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 372621312 unmapped: 52101120 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:28:05.803244+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a882b000/0x0/0x1bfc00000, data 0x1a1967b/0x1c43000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 372621312 unmapped: 52101120 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:28:06.803461+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 372621312 unmapped: 52101120 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:28:07.803614+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 372621312 unmapped: 52101120 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:28:08.803860+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4236828 data_alloc: 234881024 data_used: 16044032
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 372621312 unmapped: 52101120 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:28:09.804055+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 372621312 unmapped: 52101120 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:28:10.804295+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 372621312 unmapped: 52101120 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a882b000/0x0/0x1bfc00000, data 0x1a1967b/0x1c43000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:28:11.804471+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 372621312 unmapped: 52101120 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:28:12.804691+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 372621312 unmapped: 52101120 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:28:13.804932+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4236988 data_alloc: 234881024 data_used: 16048128
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 372621312 unmapped: 52101120 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:28:14.805192+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 372621312 unmapped: 52101120 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:28:15.805405+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a882b000/0x0/0x1bfc00000, data 0x1a1967b/0x1c43000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 372629504 unmapped: 52092928 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bbce07800 session 0x562bba0fcb40
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bbbea4000
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:28:16.805647+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 372629504 unmapped: 52092928 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:28:17.805833+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 372629504 unmapped: 52092928 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:28:18.806036+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 23.699522018s of 23.843563080s, submitted: 61
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bba10c800 session 0x562bb9fda000
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bba219c00 session 0x562bbd133c20
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a882b000/0x0/0x1bfc00000, data 0x1a1967b/0x1c43000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4235548 data_alloc: 234881024 data_used: 16048128
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bbbea5800
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 370212864 unmapped: 54509568 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bbbea5800 session 0x562bbcc0fc20
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:28:19.806241+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 370229248 unmapped: 54493184 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:28:20.806457+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 370262016 unmapped: 54460416 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:28:21.806568+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a944a000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [1,2] op hist [0,0,1])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 371449856 unmapped: 53272576 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:28:22.806712+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 371449856 unmapped: 53272576 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:28:23.806919+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4038028 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 371449856 unmapped: 53272576 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:28:24.807143+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 371449856 unmapped: 53272576 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a944a000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:28:25.807324+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 371449856 unmapped: 53272576 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:28:26.807518+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a944a000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 371449856 unmapped: 53272576 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:28:27.807711+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 371449856 unmapped: 53272576 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:28:28.807921+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4038028 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 371449856 unmapped: 53272576 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:28:29.808036+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 371449856 unmapped: 53272576 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:28:30.808250+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 371449856 unmapped: 53272576 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:28:31.808433+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 371449856 unmapped: 53272576 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:28:32.808653+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a944a000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 371449856 unmapped: 53272576 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:28:33.808863+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4038028 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 371449856 unmapped: 53272576 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:28:34.809026+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a944a000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 371449856 unmapped: 53272576 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:28:35.809153+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a944a000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 371449856 unmapped: 53272576 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:28:36.809305+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 371449856 unmapped: 53272576 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:28:37.809479+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 371449856 unmapped: 53272576 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:28:38.809692+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a944a000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4038028 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 371449856 unmapped: 53272576 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:28:39.809906+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 371449856 unmapped: 53272576 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a944a000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:28:40.810066+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 371458048 unmapped: 53264384 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:28:41.810213+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 371458048 unmapped: 53264384 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:28:42.810371+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 371458048 unmapped: 53264384 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:28:43.810590+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4038028 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 371458048 unmapped: 53264384 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:28:44.810749+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 371458048 unmapped: 53264384 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:28:45.810913+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a944a000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 371458048 unmapped: 53264384 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:28:46.811100+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 371458048 unmapped: 53264384 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:28:47.811250+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 371458048 unmapped: 53264384 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:28:48.811396+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4038028 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 371466240 unmapped: 53256192 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:28:49.811514+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 371466240 unmapped: 53256192 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:28:50.811660+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bbcb7c000
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 31.435323715s of 32.180305481s, submitted: 343
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bbcb7c000 session 0x562bb9c06f00
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 371466240 unmapped: 53256192 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a91f2000/0x0/0x1bfc00000, data 0xc4267b/0xe6c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:28:51.811832+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a91f2000/0x0/0x1bfc00000, data 0xc4267b/0xe6c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 371466240 unmapped: 53256192 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:28:52.811968+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 371466240 unmapped: 53256192 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:28:53.812141+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4057954 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 371466240 unmapped: 53256192 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:28:54.812273+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 371466240 unmapped: 53256192 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:28:55.812406+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 371466240 unmapped: 53256192 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:28:56.812600+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 371466240 unmapped: 53256192 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a91f2000/0x0/0x1bfc00000, data 0xc4267b/0xe6c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:28:57.812714+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 371466240 unmapped: 53256192 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bbcb37800
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a91f2000/0x0/0x1bfc00000, data 0xc4267b/0xe6c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:28:58.812893+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4073314 data_alloc: 218103808 data_used: 6864896
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 371466240 unmapped: 53256192 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a91f2000/0x0/0x1bfc00000, data 0xc4267b/0xe6c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:28:59.813070+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 371466240 unmapped: 53256192 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:29:00.813291+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a91f2000/0x0/0x1bfc00000, data 0xc4267b/0xe6c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 371466240 unmapped: 53256192 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:29:01.813438+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 371466240 unmapped: 53256192 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:29:02.813657+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 371466240 unmapped: 53256192 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:29:03.813813+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4073314 data_alloc: 218103808 data_used: 6864896
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 371466240 unmapped: 53256192 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:29:04.814026+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 371466240 unmapped: 53256192 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:29:05.814187+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a91f2000/0x0/0x1bfc00000, data 0xc4267b/0xe6c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 371466240 unmapped: 53256192 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:29:06.814407+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 371466240 unmapped: 53256192 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:29:07.814535+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 371466240 unmapped: 53256192 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:29:08.814722+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a91f2000/0x0/0x1bfc00000, data 0xc4267b/0xe6c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4073314 data_alloc: 218103808 data_used: 6864896
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 371466240 unmapped: 53256192 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:29:09.814863+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.952983856s of 18.974100113s, submitted: 5
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 375021568 unmapped: 49700864 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:29:10.815090+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376455168 unmapped: 48267264 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:29:11.815232+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a8966000/0x0/0x1bfc00000, data 0x14ce67b/0x16f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376455168 unmapped: 48267264 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:29:12.815436+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376520704 unmapped: 48201728 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:29:13.815629+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4147414 data_alloc: 218103808 data_used: 7028736
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376520704 unmapped: 48201728 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:29:14.815760+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376520704 unmapped: 48201728 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:29:15.815992+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376643584 unmapped: 48078848 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:29:16.816162+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376643584 unmapped: 48078848 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:29:17.816322+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a8945000/0x0/0x1bfc00000, data 0x14ef67b/0x1719000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376643584 unmapped: 48078848 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:29:18.816488+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4144574 data_alloc: 218103808 data_used: 7028736
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376643584 unmapped: 48078848 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:29:19.816627+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376643584 unmapped: 48078848 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:29:20.816839+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bbcb37800 session 0x562bbcd5cf00
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bba10c800
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376651776 unmapped: 48070656 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:29:21.816977+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.706958771s of 11.915753365s, submitted: 73
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bba10c800 session 0x562bbcf0bc20
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376651776 unmapped: 48070656 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:29:22.817152+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376651776 unmapped: 48070656 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a944a000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:29:23.817310+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4045852 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376651776 unmapped: 48070656 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:29:24.817507+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a944a000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376651776 unmapped: 48070656 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:29:25.817661+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376651776 unmapped: 48070656 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:29:26.817916+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376651776 unmapped: 48070656 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a944a000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:29:27.818149+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376651776 unmapped: 48070656 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:29:28.818301+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4045852 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376659968 unmapped: 48062464 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:29:29.818449+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376659968 unmapped: 48062464 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:29:30.818574+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376659968 unmapped: 48062464 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:29:31.818819+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a944a000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376668160 unmapped: 48054272 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:29:32.819003+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376668160 unmapped: 48054272 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:29:33.819225+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4045852 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376668160 unmapped: 48054272 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:29:34.819450+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376676352 unmapped: 48046080 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:29:35.819637+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376676352 unmapped: 48046080 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:29:36.819816+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376676352 unmapped: 48046080 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:29:37.820042+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a944a000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376676352 unmapped: 48046080 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:29:38.820187+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4045852 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376684544 unmapped: 48037888 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:29:39.820320+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a944a000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376684544 unmapped: 48037888 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:29:40.820581+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376692736 unmapped: 48029696 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:29:41.820773+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a944a000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376692736 unmapped: 48029696 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:29:42.821024+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376692736 unmapped: 48029696 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:29:43.821309+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4045852 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376692736 unmapped: 48029696 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:29:44.821474+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a944a000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376692736 unmapped: 48029696 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:29:45.821659+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a944a000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376700928 unmapped: 48021504 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:29:46.821919+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376700928 unmapped: 48021504 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:29:47.822096+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a944a000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376700928 unmapped: 48021504 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:29:48.822280+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4045852 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376700928 unmapped: 48021504 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:29:49.822467+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376700928 unmapped: 48021504 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:29:50.822700+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376700928 unmapped: 48021504 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:29:51.822909+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bba219c00
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bba219c00 session 0x562bbba205a0
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bbbea5800
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bbbea5800 session 0x562bbbaf10e0
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bbcb7c000
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bbcb7c000 session 0x562bbcf0a000
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bbcb77400
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bbcb77400 session 0x562bbc5cd2c0
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bba10c800
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 30.600294113s of 30.625379562s, submitted: 10
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377741312 unmapped: 46981120 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:29:52.823036+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bba10c800 session 0x562bbccd2f00
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bba219c00
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bba219c00 session 0x562bbcd5c960
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bbbea5800
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bbbea5800 session 0x562bbbaaa1e0
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bbcb7c000
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bbcb7c000 session 0x562bb9bee1e0
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bbebed800
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bbebed800 session 0x562bbccd2780
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376774656 unmapped: 47947776 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:29:53.823262+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a8a5a000/0x0/0x1bfc00000, data 0x13d86ed/0x1604000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4125142 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376774656 unmapped: 47947776 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:29:54.823482+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376782848 unmapped: 47939584 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:29:55.823692+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bba10c800
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bba10c800 session 0x562bbcd76960
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a8a5a000/0x0/0x1bfc00000, data 0x13d86ed/0x1604000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376799232 unmapped: 47923200 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:29:56.823823+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bba219c00
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bba219c00 session 0x562bba994f00
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376799232 unmapped: 47923200 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:29:57.824019+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a8a5a000/0x0/0x1bfc00000, data 0x13d86ed/0x1604000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bbbea5800
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bbbea5800 session 0x562bbafa5e00
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bbcb7c000
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bbcb7c000 session 0x562bbbadef00
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376799232 unmapped: 47923200 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:29:58.824274+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bbc5a5000
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bbb1c8800
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4125752 data_alloc: 218103808 data_used: 4706304
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376815616 unmapped: 47906816 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:29:59.824449+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:30:00.824590+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377348096 unmapped: 47374336 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a8a59000/0x0/0x1bfc00000, data 0x13d86fd/0x1605000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:30:01.824738+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377348096 unmapped: 47374336 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:30:02.824942+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377348096 unmapped: 47374336 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:30:03.825108+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377348096 unmapped: 47374336 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4191512 data_alloc: 218103808 data_used: 13987840
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:30:04.825251+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377348096 unmapped: 47374336 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:30:05.825432+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377348096 unmapped: 47374336 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a8a59000/0x0/0x1bfc00000, data 0x13d86fd/0x1605000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a8a59000/0x0/0x1bfc00000, data 0x13d86fd/0x1605000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:30:06.825555+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377348096 unmapped: 47374336 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:30:07.825754+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377348096 unmapped: 47374336 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:30:08.825942+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377348096 unmapped: 47374336 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4191512 data_alloc: 218103808 data_used: 13987840
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:30:09.826177+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377348096 unmapped: 47374336 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:30:10.826526+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377348096 unmapped: 47374336 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.209590912s of 18.436899185s, submitted: 43
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:30:11.826703+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377569280 unmapped: 47153152 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a85f7000/0x0/0x1bfc00000, data 0x183a6fd/0x1a67000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:30:12.826839+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377577472 unmapped: 47144960 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:30:13.826987+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377577472 unmapped: 47144960 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a85f0000/0x0/0x1bfc00000, data 0x18406fd/0x1a6d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4228676 data_alloc: 218103808 data_used: 14151680
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:30:14.827329+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377577472 unmapped: 47144960 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-mgr[74653]: log_channel(audit) log [DBG] : from='client.50254 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:30:15.827476+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377577472 unmapped: 47144960 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a85f0000/0x0/0x1bfc00000, data 0x18406fd/0x1a6d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:30:16.827664+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377577472 unmapped: 47144960 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:30:17.827801+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377577472 unmapped: 47144960 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:30:18.828010+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377577472 unmapped: 47144960 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a85f0000/0x0/0x1bfc00000, data 0x18406fd/0x1a6d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4228676 data_alloc: 218103808 data_used: 14151680
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:30:19.828227+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377577472 unmapped: 47144960 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:30:20.828407+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377577472 unmapped: 47144960 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a85f0000/0x0/0x1bfc00000, data 0x18406fd/0x1a6d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:30:21.828559+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377577472 unmapped: 47144960 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:30:22.828772+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377577472 unmapped: 47144960 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:30:23.828980+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377577472 unmapped: 47144960 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4228676 data_alloc: 218103808 data_used: 14151680
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:30:24.829174+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377577472 unmapped: 47144960 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:30:25.829469+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377585664 unmapped: 47136768 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a85f0000/0x0/0x1bfc00000, data 0x18406fd/0x1a6d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:30:26.829599+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377585664 unmapped: 47136768 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:30:27.829728+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377585664 unmapped: 47136768 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:30:28.829877+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377585664 unmapped: 47136768 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4228676 data_alloc: 218103808 data_used: 14151680
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:30:29.830039+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377585664 unmapped: 47136768 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a85f0000/0x0/0x1bfc00000, data 0x18406fd/0x1a6d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:30:30.830211+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377585664 unmapped: 47136768 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bbbebf800
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 20.150606155s of 20.309419632s, submitted: 39
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:30:31.830463+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bbbebf800 session 0x562bbba20b40
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377602048 unmapped: 47120384 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bbbebf800
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bbbebf800 session 0x562bbccd21e0
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bba10c800
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bba10c800 session 0x562bbc5cc5a0
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bba219c00
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bba219c00 session 0x562bba0fd2c0
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bbbea5800
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bbbea5800 session 0x562bb9fdad20
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a816f000/0x0/0x1bfc00000, data 0x1cc26fd/0x1eef000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:30:32.830654+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377602048 unmapped: 47120384 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:30:33.830921+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377602048 unmapped: 47120384 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a816f000/0x0/0x1bfc00000, data 0x1cc26fd/0x1eef000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4264187 data_alloc: 218103808 data_used: 14155776
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:30:34.831194+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377610240 unmapped: 47112192 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:30:35.831361+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377610240 unmapped: 47112192 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:30:36.831659+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377610240 unmapped: 47112192 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:30:37.831817+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bbcb7c000
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377610240 unmapped: 47112192 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bbcb7c000 session 0x562bbccd8960
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a816f000/0x0/0x1bfc00000, data 0x1cc26fd/0x1eef000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bbcb7c000
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bba10c800
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:30:38.831959+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377765888 unmapped: 46956544 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4297077 data_alloc: 234881024 data_used: 17866752
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:30:39.832081+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377839616 unmapped: 46882816 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:30:40.832249+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377872384 unmapped: 46850048 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:30:41.832449+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377872384 unmapped: 46850048 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:30:42.832643+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377872384 unmapped: 46850048 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a814a000/0x0/0x1bfc00000, data 0x1ce6720/0x1f14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:30:43.832850+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377872384 unmapped: 46850048 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a814a000/0x0/0x1bfc00000, data 0x1ce6720/0x1f14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4304597 data_alloc: 234881024 data_used: 18886656
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:30:44.832964+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377872384 unmapped: 46850048 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:30:45.833148+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377872384 unmapped: 46850048 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:30:46.833353+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377880576 unmapped: 46841856 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:30:47.833507+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377880576 unmapped: 46841856 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a814a000/0x0/0x1bfc00000, data 0x1ce6720/0x1f14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:30:48.833742+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377880576 unmapped: 46841856 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:30:49.833875+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4304597 data_alloc: 234881024 data_used: 18886656
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377880576 unmapped: 46841856 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 19.214096069s of 19.388515472s, submitted: 25
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:30:50.834083+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 379166720 unmapped: 45555712 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:30:51.834279+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 378904576 unmapped: 45817856 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:30:52.834436+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 378912768 unmapped: 45809664 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a7bfe000/0x0/0x1bfc00000, data 0x2232720/0x2460000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:30:53.834685+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 378912768 unmapped: 45809664 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:30:54.834906+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4356143 data_alloc: 234881024 data_used: 19267584
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 378912768 unmapped: 45809664 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a7bfe000/0x0/0x1bfc00000, data 0x2232720/0x2460000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:30:55.835135+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 378912768 unmapped: 45809664 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a7bfe000/0x0/0x1bfc00000, data 0x2232720/0x2460000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a7bfe000/0x0/0x1bfc00000, data 0x2232720/0x2460000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:30:56.835333+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 378912768 unmapped: 45809664 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:30:57.835476+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 378912768 unmapped: 45809664 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bbcb7c000 session 0x562bbad94960
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bba10c800 session 0x562bbcf0a1e0
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:30:58.835613+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bba219c00
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 378912768 unmapped: 45809664 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bba219c00 session 0x562bb9c79c20
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a7bfe000/0x0/0x1bfc00000, data 0x2232720/0x2460000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:30:59.835833+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4237579 data_alloc: 218103808 data_used: 14155776
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 374431744 unmapped: 50290688 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:31:00.836011+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 374431744 unmapped: 50290688 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:31:01.836169+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 374431744 unmapped: 50290688 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:31:02.836533+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 374431744 unmapped: 50290688 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a85f0000/0x0/0x1bfc00000, data 0x18406fd/0x1a6d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:31:03.836847+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.318365097s of 13.075301170s, submitted: 90
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bbc5a5000 session 0x562bbca15860
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bbb1c8800 session 0x562bbccd8000
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 374431744 unmapped: 50290688 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bba10c800
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bba10c800 session 0x562bbcf0ad20
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:31:04.837048+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4065428 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 372015104 unmapped: 52707328 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:31:05.837207+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 372015104 unmapped: 52707328 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:31:06.837461+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 372015104 unmapped: 52707328 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:31:07.837695+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 372015104 unmapped: 52707328 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a907f000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:31:08.837903+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 372015104 unmapped: 52707328 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:31:09.838060+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4065428 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a907f000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 372015104 unmapped: 52707328 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:31:10.838199+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 372015104 unmapped: 52707328 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:31:11.838404+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 372015104 unmapped: 52707328 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:31:12.838555+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 372015104 unmapped: 52707328 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:31:13.838716+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 372023296 unmapped: 52699136 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:31:14.838942+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4065428 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 372023296 unmapped: 52699136 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:31:15.839083+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a907f000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 372023296 unmapped: 52699136 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:31:16.839287+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 372023296 unmapped: 52699136 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a907f000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:31:17.839519+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 372023296 unmapped: 52699136 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:31:18.839689+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 372023296 unmapped: 52699136 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a907f000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a907f000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:31:19.839903+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4065428 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 372023296 unmapped: 52699136 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:31:20.840054+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 372023296 unmapped: 52699136 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a907f000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:31:21.840240+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 372023296 unmapped: 52699136 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:31:22.840477+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 372023296 unmapped: 52699136 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:31:23.840747+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 372023296 unmapped: 52699136 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:31:24.840947+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4065428 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 372023296 unmapped: 52699136 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:31:25.841128+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a907f000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 372023296 unmapped: 52699136 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:31:26.841298+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 372031488 unmapped: 52690944 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:31:27.841446+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 372031488 unmapped: 52690944 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:31:28.841580+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bba219c00
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 24.950906754s of 25.060792923s, submitted: 36
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bba219c00 session 0x562bbcd5c000
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a907f000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 373243904 unmapped: 51478528 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:31:29.841801+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4086376 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 373243904 unmapped: 51478528 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:31:30.842038+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 373243904 unmapped: 51478528 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:31:31.842268+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 373243904 unmapped: 51478528 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a92aa000/0x0/0x1bfc00000, data 0xb8a67b/0xdb4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:31:32.842580+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 373243904 unmapped: 51478528 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:31:33.843142+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 373309440 unmapped: 51412992 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:31:34.843474+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4086376 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 373309440 unmapped: 51412992 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:31:35.843616+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 373309440 unmapped: 51412992 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a92aa000/0x0/0x1bfc00000, data 0xb8a67b/0xdb4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:31:36.843851+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 373309440 unmapped: 51412992 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a92aa000/0x0/0x1bfc00000, data 0xb8a67b/0xdb4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:31:37.843996+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 373309440 unmapped: 51412992 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bbb1c8800
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:31:38.844244+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 373309440 unmapped: 51412992 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:31:39.844564+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4095816 data_alloc: 218103808 data_used: 6012928
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 373309440 unmapped: 51412992 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:31:40.844867+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 373309440 unmapped: 51412992 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:31:41.845047+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a92aa000/0x0/0x1bfc00000, data 0xb8a67b/0xdb4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 373309440 unmapped: 51412992 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:31:42.845341+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 373309440 unmapped: 51412992 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:31:43.845647+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 373309440 unmapped: 51412992 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:31:44.845994+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4095816 data_alloc: 218103808 data_used: 6012928
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 373309440 unmapped: 51412992 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a92aa000/0x0/0x1bfc00000, data 0xb8a67b/0xdb4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:31:45.846237+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 373309440 unmapped: 51412992 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:31:46.846442+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 373309440 unmapped: 51412992 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:31:47.846663+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 373309440 unmapped: 51412992 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:31:48.846867+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 373309440 unmapped: 51412992 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a92aa000/0x0/0x1bfc00000, data 0xb8a67b/0xdb4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:31:49.847154+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a92aa000/0x0/0x1bfc00000, data 0xb8a67b/0xdb4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4095816 data_alloc: 218103808 data_used: 6012928
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 21.214937210s of 21.242958069s, submitted: 12
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 374644736 unmapped: 50077696 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:31:50.847318+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377077760 unmapped: 47644672 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a8c8f000/0x0/0x1bfc00000, data 0x11a567b/0x13cf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:31:51.847506+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376184832 unmapped: 48537600 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:31:52.847747+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376184832 unmapped: 48537600 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:31:53.847996+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376250368 unmapped: 48472064 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:31:54.848219+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4150414 data_alloc: 218103808 data_used: 7094272
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376258560 unmapped: 48463872 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:31:55.848420+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376258560 unmapped: 48463872 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:31:56.848567+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a8c8f000/0x0/0x1bfc00000, data 0x11a567b/0x13cf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376217600 unmapped: 48504832 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:31:57.848765+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376217600 unmapped: 48504832 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:31:58.848946+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376217600 unmapped: 48504832 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:31:59.849095+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4148450 data_alloc: 218103808 data_used: 7094272
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376217600 unmapped: 48504832 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:32:00.849233+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376225792 unmapped: 48496640 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:32:01.849369+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376225792 unmapped: 48496640 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:32:02.849592+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.515503883s of 12.726822853s, submitted: 67
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a8c6e000/0x0/0x1bfc00000, data 0x11c667b/0x13f0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376225792 unmapped: 48496640 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:32:03.849866+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376225792 unmapped: 48496640 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:32:04.850085+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4148682 data_alloc: 218103808 data_used: 7094272
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a8c68000/0x0/0x1bfc00000, data 0x11cc67b/0x13f6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376225792 unmapped: 48496640 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:32:05.850247+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a8c68000/0x0/0x1bfc00000, data 0x11cc67b/0x13f6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376225792 unmapped: 48496640 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a8c68000/0x0/0x1bfc00000, data 0x11cc67b/0x13f6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:32:06.850439+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376225792 unmapped: 48496640 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:32:07.850784+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376225792 unmapped: 48496640 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:32:08.851089+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376233984 unmapped: 48488448 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:32:09.851269+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4148758 data_alloc: 218103808 data_used: 7094272
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a8c65000/0x0/0x1bfc00000, data 0x11cf67b/0x13f9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376233984 unmapped: 48488448 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:32:10.851484+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376233984 unmapped: 48488448 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a8c65000/0x0/0x1bfc00000, data 0x11cf67b/0x13f9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:32:11.851650+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376233984 unmapped: 48488448 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:32:12.851787+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376233984 unmapped: 48488448 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:32:13.852087+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a8c65000/0x0/0x1bfc00000, data 0x11cf67b/0x13f9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376233984 unmapped: 48488448 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:32:14.852348+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4149238 data_alloc: 218103808 data_used: 7106560
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376242176 unmapped: 48480256 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:32:15.852597+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.042912483s of 13.056877136s, submitted: 4
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376250368 unmapped: 48472064 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:32:16.852719+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a8c57000/0x0/0x1bfc00000, data 0x11dd67b/0x1407000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376250368 unmapped: 48472064 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:32:17.852862+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376250368 unmapped: 48472064 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bbc5a5000
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:32:18.853033+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bbc5a5000 session 0x562bbcc0f860
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377446400 unmapped: 47276032 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:32:19.853171+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4166426 data_alloc: 218103808 data_used: 7106560
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377446400 unmapped: 47276032 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:32:20.853421+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377446400 unmapped: 47276032 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:32:21.853560+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bbcb7c000
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bbcb7c000 session 0x562bbd1330e0
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377380864 unmapped: 47341568 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bbbea5800
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bbbea5800 session 0x562bba8074a0
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:32:22.853698+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a8af5000/0x0/0x1bfc00000, data 0x133f67b/0x1569000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bba10c800
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bba10c800 session 0x562bbcd62960
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377380864 unmapped: 47341568 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bba219c00
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bba219c00 session 0x562bbbaf6b40
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:32:23.853898+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bbc5a5000
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bbcb7c000
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377561088 unmapped: 47161344 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:32:24.854069+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4170796 data_alloc: 218103808 data_used: 7114752
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377683968 unmapped: 47038464 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:32:25.854206+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377724928 unmapped: 46997504 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a8ad0000/0x0/0x1bfc00000, data 0x136368b/0x158e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:32:26.854328+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377724928 unmapped: 46997504 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:32:27.854445+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377724928 unmapped: 46997504 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:32:28.854807+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377724928 unmapped: 46997504 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a8ad0000/0x0/0x1bfc00000, data 0x136368b/0x158e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.777861595s of 13.823503494s, submitted: 19
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:32:29.854991+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4171756 data_alloc: 218103808 data_used: 7237632
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377724928 unmapped: 46997504 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:32:30.855133+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377724928 unmapped: 46997504 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:32:31.855272+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377724928 unmapped: 46997504 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:32:32.855548+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377733120 unmapped: 46989312 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a8acc000/0x0/0x1bfc00000, data 0x136768b/0x1592000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:32:33.855770+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377733120 unmapped: 46989312 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:32:34.855967+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4171756 data_alloc: 218103808 data_used: 7237632
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377733120 unmapped: 46989312 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:32:35.856192+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377749504 unmapped: 46972928 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:32:36.856336+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 378396672 unmapped: 46325760 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:32:37.856535+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 378740736 unmapped: 45981696 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:32:38.856734+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a752f000/0x0/0x1bfc00000, data 0x175568b/0x1980000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 378814464 unmapped: 45907968 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:32:39.856942+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4218610 data_alloc: 218103808 data_used: 7430144
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 378814464 unmapped: 45907968 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:32:40.857142+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 378814464 unmapped: 45907968 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.730768204s of 11.911810875s, submitted: 62
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:32:41.857320+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a752f000/0x0/0x1bfc00000, data 0x175568b/0x1980000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 378609664 unmapped: 46112768 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:32:42.857559+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a753b000/0x0/0x1bfc00000, data 0x175868b/0x1983000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 378609664 unmapped: 46112768 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:32:43.857860+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 378609664 unmapped: 46112768 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:32:44.858031+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4212426 data_alloc: 218103808 data_used: 7430144
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 378609664 unmapped: 46112768 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:32:45.858240+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 378609664 unmapped: 46112768 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:32:46.858406+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 378609664 unmapped: 46112768 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a751c000/0x0/0x1bfc00000, data 0x177768b/0x19a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:32:47.858536+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a751c000/0x0/0x1bfc00000, data 0x177768b/0x19a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 378609664 unmapped: 46112768 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:32:48.858676+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 378609664 unmapped: 46112768 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:32:49.858845+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4212686 data_alloc: 218103808 data_used: 7430144
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 378609664 unmapped: 46112768 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:32:50.859053+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a750d000/0x0/0x1bfc00000, data 0x178668b/0x19b1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 378617856 unmapped: 46104576 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:32:51.859184+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 378617856 unmapped: 46104576 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bbc5a5000 session 0x562bbbaf1a40
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bbcb7c000 session 0x562bbae692c0
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a750d000/0x0/0x1bfc00000, data 0x178668b/0x19b1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:32:52.859350+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bbbebf800
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.192040443s of 11.268042564s, submitted: 8
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bbbebf800 session 0x562bb9c134a0
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 378601472 unmapped: 46120960 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:32:53.859617+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a7aa6000/0x0/0x1bfc00000, data 0x11ee67b/0x1418000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 378601472 unmapped: 46120960 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:32:54.859794+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4158855 data_alloc: 218103808 data_used: 7110656
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a7aa3000/0x0/0x1bfc00000, data 0x11f167b/0x141b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 378601472 unmapped: 46120960 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:32:55.859902+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 378601472 unmapped: 46120960 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a7aa3000/0x0/0x1bfc00000, data 0x11f167b/0x141b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:32:56.861258+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a7aa3000/0x0/0x1bfc00000, data 0x11f167b/0x141b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 378609664 unmapped: 46112768 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:32:57.861420+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a7aa3000/0x0/0x1bfc00000, data 0x11f167b/0x141b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 378617856 unmapped: 46104576 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:32:58.861575+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 378617856 unmapped: 46104576 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:32:59.861704+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4158855 data_alloc: 218103808 data_used: 7110656
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 378617856 unmapped: 46104576 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:33:00.861829+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 378617856 unmapped: 46104576 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:33:01.861995+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bbb1c8800 session 0x562bbba25c20
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 378617856 unmapped: 46104576 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bba10c800
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:33:02.862106+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a7aa3000/0x0/0x1bfc00000, data 0x11f167b/0x141b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.045230865s of 10.114977837s, submitted: 21
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bba10c800 session 0x562bbca145a0
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377053184 unmapped: 47669248 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:33:03.862250+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377053184 unmapped: 47669248 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:33:04.862455+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4080453 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377053184 unmapped: 47669248 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:33:05.863179+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a82aa000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377053184 unmapped: 47669248 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:33:06.863331+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377053184 unmapped: 47669248 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:33:07.863692+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377053184 unmapped: 47669248 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:33:08.863846+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377053184 unmapped: 47669248 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:33:09.863986+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4080453 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377053184 unmapped: 47669248 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:33:10.864121+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377053184 unmapped: 47669248 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a82aa000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:33:11.864320+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377053184 unmapped: 47669248 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:33:12.864460+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377061376 unmapped: 47661056 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:33:13.864637+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377061376 unmapped: 47661056 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:33:14.864816+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a82aa000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4080453 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377069568 unmapped: 47652864 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a82aa000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:33:15.864989+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377069568 unmapped: 47652864 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:33:16.865125+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377069568 unmapped: 47652864 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:33:17.865273+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377069568 unmapped: 47652864 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:33:18.865481+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a82aa000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377069568 unmapped: 47652864 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:33:19.865645+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4080453 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377069568 unmapped: 47652864 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:33:20.865770+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377069568 unmapped: 47652864 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:33:21.865927+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a82aa000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377077760 unmapped: 47644672 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:33:22.866051+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377077760 unmapped: 47644672 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:33:23.866238+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377077760 unmapped: 47644672 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:33:24.866481+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4080453 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377077760 unmapped: 47644672 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:33:25.866603+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377077760 unmapped: 47644672 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:33:26.866712+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377077760 unmapped: 47644672 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:33:27.868046+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a82aa000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377077760 unmapped: 47644672 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:33:28.868226+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a82aa000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377085952 unmapped: 47636480 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:33:29.868442+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4080453 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377085952 unmapped: 47636480 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:33:30.868652+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377085952 unmapped: 47636480 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:33:31.868830+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a82aa000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377085952 unmapped: 47636480 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:33:32.868953+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377085952 unmapped: 47636480 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:33:33.869121+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377085952 unmapped: 47636480 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:33:34.869316+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4080453 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377094144 unmapped: 47628288 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:33:35.869460+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377094144 unmapped: 47628288 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:33:36.869638+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377094144 unmapped: 47628288 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:33:37.869824+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a82aa000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377094144 unmapped: 47628288 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:33:38.869941+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bba219c00
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 36.398391724s of 36.401378632s, submitted: 1
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377102336 unmapped: 47620096 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:33:39.870037+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bba219c00 session 0x562bbcd5de00
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bbc5a5000
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bbc5a5000 session 0x562bbd156960
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4095040 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377102336 unmapped: 47620096 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:33:40.870166+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377102336 unmapped: 47620096 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:33:41.872084+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377102336 unmapped: 47620096 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:33:42.872536+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bbcb7c000
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bbcb7c000 session 0x562bbccd30e0
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377102336 unmapped: 47620096 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a818f000/0x0/0x1bfc00000, data 0xb046dd/0xd2f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:33:43.873958+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bba10c800
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bba10c800 session 0x562bb9c13860
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377102336 unmapped: 47620096 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:33:44.874435+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bba219c00
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bba219c00 session 0x562bbbadef00
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bbb1c8800
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bbb1c8800 session 0x562bba8070e0
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4095040 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377110528 unmapped: 47611904 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bbc5a5000
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:33:45.874781+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377110528 unmapped: 47611904 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:33:46.875165+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377110528 unmapped: 47611904 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:33:47.875499+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a818f000/0x0/0x1bfc00000, data 0xb046dd/0xd2f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377348096 unmapped: 47374336 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:33:48.875744+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377348096 unmapped: 47374336 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:33:49.876195+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4095040 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377348096 unmapped: 47374336 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:33:50.876539+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377348096 unmapped: 47374336 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:33:51.876992+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377348096 unmapped: 47374336 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:33:52.877145+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:33:53.877706+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a818f000/0x0/0x1bfc00000, data 0xb046dd/0xd2f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377356288 unmapped: 47366144 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:33:54.877885+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4095040 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377364480 unmapped: 47357952 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:33:55.878231+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bbc5a5000 session 0x562bbccd3e00
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bbcfd2800
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.766756058s of 16.877328873s, submitted: 27
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377282560 unmapped: 47439872 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:33:56.878825+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377290752 unmapped: 47431680 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:33:57.879148+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bbcfd2800 session 0x562bbcb7e780
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377290752 unmapped: 47431680 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:33:58.879495+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377290752 unmapped: 47431680 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a82a9000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:33:59.879773+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4083442 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377290752 unmapped: 47431680 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:34:00.879881+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377290752 unmapped: 47431680 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:34:01.880035+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a82a9000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377290752 unmapped: 47431680 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:34:02.880267+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377298944 unmapped: 47423488 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:34:03.880455+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377298944 unmapped: 47423488 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:34:04.880575+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4083442 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377298944 unmapped: 47423488 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:34:05.880696+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a82a9000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377298944 unmapped: 47423488 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:34:06.880875+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a82a9000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377298944 unmapped: 47423488 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:34:07.881013+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377298944 unmapped: 47423488 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:34:08.881248+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377298944 unmapped: 47423488 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:34:09.881436+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a82a9000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4083442 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377298944 unmapped: 47423488 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:34:10.881651+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377307136 unmapped: 47415296 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:34:11.881799+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377307136 unmapped: 47415296 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:34:12.881940+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377307136 unmapped: 47415296 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:34:13.882150+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a82a9000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377315328 unmapped: 47407104 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bba10c800
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bba10c800 session 0x562bba0fcb40
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bba219c00
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bba219c00 session 0x562bbba2f860
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bbb1c8800
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bbb1c8800 session 0x562bb9c063c0
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bbc5a5000
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bbc5a5000 session 0x562bbcd5d680
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:34:14.882341+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bbc85a400
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.585412979s of 18.642955780s, submitted: 13
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4131494 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377356288 unmapped: 47366144 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:34:15.882544+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a7c85000/0x0/0x1bfc00000, data 0x100e68b/0x1239000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bbc85a400 session 0x562bb9c790e0
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bba10c800
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bba10c800 session 0x562bbbcd2b40
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bba219c00
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bba219c00 session 0x562bb9c06960
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bbb1c8800
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bbb1c8800 session 0x562bbcf0a960
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bbc5a5000
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bbc5a5000 session 0x562bbbaab2c0
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377356288 unmapped: 47366144 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:34:16.883041+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bbc85b400
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bbc85b400 session 0x562bbccd2780
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a7b82000/0x0/0x1bfc00000, data 0x111168b/0x133c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 377356288 unmapped: 47366144 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:34:17.883217+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bba10c800
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bba10c800 session 0x562bbbaf63c0
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a7b82000/0x0/0x1bfc00000, data 0x111168b/0x133c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 375635968 unmapped: 49086464 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:34:18.883603+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 375644160 unmapped: 49078272 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:34:19.883778+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bba219c00
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bba219c00 session 0x562bbca154a0
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bbb1c8800
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bbb1c8800 session 0x562bb9bf45a0
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bbc5a5000
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4144134 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 375644160 unmapped: 49078272 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:34:20.884076+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a7b82000/0x0/0x1bfc00000, data 0x111168b/0x133c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 375521280 unmapped: 49201152 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:34:21.884325+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 375521280 unmapped: 49201152 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a7b82000/0x0/0x1bfc00000, data 0x111168b/0x133c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:34:22.884611+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a7b82000/0x0/0x1bfc00000, data 0x111168b/0x133c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bbc5a5000 session 0x562bbba2f0e0
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bbc85b400
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 375521280 unmapped: 49201152 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:34:23.884874+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bbc85b400 session 0x562bbba25c20
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 373440512 unmapped: 51281920 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:34:24.885088+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4087138 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 373440512 unmapped: 51281920 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:34:25.885343+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 373440512 unmapped: 51281920 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:34:26.885545+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 373440512 unmapped: 51281920 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:34:27.885704+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a82aa000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 373440512 unmapped: 51281920 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:34:28.885873+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 373440512 unmapped: 51281920 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:34:29.886042+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4087138 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 373440512 unmapped: 51281920 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:34:30.886177+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 373440512 unmapped: 51281920 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:34:31.886322+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 373440512 unmapped: 51281920 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:34:32.886498+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a82aa000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 373440512 unmapped: 51281920 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:34:33.886676+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 373440512 unmapped: 51281920 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:34:34.886805+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4087138 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 373440512 unmapped: 51281920 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:34:35.886931+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 373440512 unmapped: 51281920 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:34:36.887044+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 373440512 unmapped: 51281920 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:34:37.887187+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a82aa000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 373440512 unmapped: 51281920 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:34:38.887348+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 373440512 unmapped: 51281920 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:34:39.887524+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a82aa000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a82aa000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4087138 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 373440512 unmapped: 51281920 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:34:40.887706+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 373440512 unmapped: 51281920 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:34:41.887907+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 373440512 unmapped: 51281920 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:34:42.888128+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a82aa000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 373456896 unmapped: 51265536 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:34:43.888337+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a82aa000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 373456896 unmapped: 51265536 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:34:44.888476+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4087138 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 373456896 unmapped: 51265536 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:34:45.888636+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 373456896 unmapped: 51265536 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:34:46.888797+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 373456896 unmapped: 51265536 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:34:47.888978+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 373456896 unmapped: 51265536 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:34:48.889187+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 373465088 unmapped: 51257344 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:34:49.889356+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a82aa000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4087138 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 373465088 unmapped: 51257344 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:34:50.889701+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 373465088 unmapped: 51257344 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:34:51.889868+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 373465088 unmapped: 51257344 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:34:52.890024+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 373465088 unmapped: 51257344 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:34:53.890218+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 373465088 unmapped: 51257344 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:34:54.890426+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4087138 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 373465088 unmapped: 51257344 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:34:55.890554+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a82aa000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 373465088 unmapped: 51257344 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:34:56.890690+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 373473280 unmapped: 51249152 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:34:57.890863+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 373473280 unmapped: 51249152 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:34:58.891204+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 373481472 unmapped: 51240960 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:34:59.891344+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a82aa000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4087138 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 373481472 unmapped: 51240960 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:35:00.891536+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 373481472 unmapped: 51240960 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:35:01.891689+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 373481472 unmapped: 51240960 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:35:02.891821+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 373481472 unmapped: 51240960 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:35:03.891966+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 373481472 unmapped: 51240960 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:35:04.892126+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a82aa000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4087138 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 373489664 unmapped: 51232768 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:35:05.892244+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a82aa000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 373489664 unmapped: 51232768 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:35:06.892431+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a82aa000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 373497856 unmapped: 51224576 heap: 424722432 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:35:07.892560+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bba10c800
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bba10c800 session 0x562bbcd5c000
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bba219c00
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bba219c00 session 0x562bbcf0ad20
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bbb1c8800
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bbb1c8800 session 0x562bbccd8000
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bbc5a5000
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bbc5a5000 session 0x562bbca15860
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bbcc19000
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 53.686439514s of 53.759906769s, submitted: 22
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 373825536 unmapped: 58777600 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:35:08.892673+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bbcc19000 session 0x562bb9c79c20
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a7683000/0x0/0x1bfc00000, data 0x161167b/0x183b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 373825536 unmapped: 58777600 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:35:09.892837+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a7683000/0x0/0x1bfc00000, data 0x161167b/0x183b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4182302 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:35:10.892965+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 373825536 unmapped: 58777600 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:35:11.893123+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 373825536 unmapped: 58777600 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:35:12.893295+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 373825536 unmapped: 58777600 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a7683000/0x0/0x1bfc00000, data 0x161167b/0x183b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:35:13.893502+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 373825536 unmapped: 58777600 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:35:14.893636+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 373825536 unmapped: 58777600 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bba10c800
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bba10c800 session 0x562bb9bf50e0
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4183095 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:35:15.893869+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 373825536 unmapped: 58777600 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bba219c00
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:35:16.894057+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 373825536 unmapped: 58777600 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:35:17.894196+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 373825536 unmapped: 58777600 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bbb1c8800
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:35:18.894477+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 375308288 unmapped: 57294848 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a7683000/0x0/0x1bfc00000, data 0x161167b/0x183b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:35:19.895147+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 375537664 unmapped: 57065472 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:35:20.895745+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4274107 data_alloc: 234881024 data_used: 17358848
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 375537664 unmapped: 57065472 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:35:21.896805+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 375545856 unmapped: 57057280 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:35:22.897325+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 375545856 unmapped: 57057280 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:35:23.897904+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 375545856 unmapped: 57057280 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:35:24.898351+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 375545856 unmapped: 57057280 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a7683000/0x0/0x1bfc00000, data 0x161167b/0x183b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:35:25.898900+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4274107 data_alloc: 234881024 data_used: 17358848
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 375545856 unmapped: 57057280 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:35:26.900539+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 375545856 unmapped: 57057280 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:35:27.900825+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 375545856 unmapped: 57057280 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:35:28.901450+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 375545856 unmapped: 57057280 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 20.425910950s of 20.513919830s, submitted: 22
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:35:29.901595+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 381689856 unmapped: 50913280 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a6c36000/0x0/0x1bfc00000, data 0x205867b/0x2282000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:35:30.901793+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4358539 data_alloc: 234881024 data_used: 18870272
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 381755392 unmapped: 50847744 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:35:31.901928+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 382205952 unmapped: 50397184 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:35:32.902093+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 382205952 unmapped: 50397184 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:35:33.902514+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 382353408 unmapped: 50249728 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:35:34.902690+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 382353408 unmapped: 50249728 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:35:35.902878+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4368335 data_alloc: 234881024 data_used: 18776064
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 382353408 unmapped: 50249728 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a6bf9000/0x0/0x1bfc00000, data 0x208d67b/0x22b7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:35:36.903015+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 382353408 unmapped: 50249728 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a6c04000/0x0/0x1bfc00000, data 0x209067b/0x22ba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:35:37.903253+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 382353408 unmapped: 50249728 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:35:38.903409+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 382353408 unmapped: 50249728 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:35:39.903562+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 382353408 unmapped: 50249728 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a6c04000/0x0/0x1bfc00000, data 0x209067b/0x22ba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:35:40.903814+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4359615 data_alloc: 234881024 data_used: 18780160
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 382353408 unmapped: 50249728 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:35:41.904012+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 382353408 unmapped: 50249728 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.744534492s of 12.963235855s, submitted: 115
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:35:42.904134+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 382353408 unmapped: 50249728 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a6c02000/0x0/0x1bfc00000, data 0x209267b/0x22bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:35:43.904296+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 382353408 unmapped: 50249728 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bbb1c8800 session 0x562bbbaded20
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bba219c00 session 0x562bbcd770e0
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bbc5a5000
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:35:44.904512+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bbc5a5000 session 0x562bb9c14000
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 373153792 unmapped: 59449344 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:35:45.904657+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4098187 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 373153792 unmapped: 59449344 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:35:46.904818+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 373153792 unmapped: 59449344 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:35:47.905016+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 373153792 unmapped: 59449344 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a7cb3000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:35:48.905191+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 373153792 unmapped: 59449344 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:35:49.905336+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 373153792 unmapped: 59449344 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:35:50.905521+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4098187 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 373153792 unmapped: 59449344 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:35:51.905736+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 373153792 unmapped: 59449344 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:35:52.905890+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 373153792 unmapped: 59449344 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:35:53.906159+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 373153792 unmapped: 59449344 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a7cb3000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:35:54.906328+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 373153792 unmapped: 59449344 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:35:55.906554+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4098187 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 373153792 unmapped: 59449344 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:35:56.906753+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 373153792 unmapped: 59449344 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a7cb3000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:35:57.906958+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 373153792 unmapped: 59449344 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:35:58.907107+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 373153792 unmapped: 59449344 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a7cb3000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:35:59.907333+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a7cb3000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 373153792 unmapped: 59449344 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:36:00.907510+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4098187 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 373153792 unmapped: 59449344 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:36:01.907699+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 373153792 unmapped: 59449344 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:36:02.907858+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a7cb3000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 373153792 unmapped: 59449344 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:36:03.908031+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 373153792 unmapped: 59449344 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:36:04.908183+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 373153792 unmapped: 59449344 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:36:05.908369+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4098187 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 373153792 unmapped: 59449344 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:36:06.908569+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 373153792 unmapped: 59449344 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:36:07.908736+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 373153792 unmapped: 59449344 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a7cb3000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:36:08.908916+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 373161984 unmapped: 59441152 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:36:09.909264+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 373161984 unmapped: 59441152 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:36:10.909487+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4098187 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 373161984 unmapped: 59441152 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a7cb3000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:36:11.909697+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 373161984 unmapped: 59441152 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:36:12.909866+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 373161984 unmapped: 59441152 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:36:13.910080+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 373161984 unmapped: 59441152 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bbc9de400
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 32.510192871s of 32.574234009s, submitted: 28
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:36:14.910216+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bbc9de400 session 0x562bbd133a40
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bba10c800
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bba10c800 session 0x562bb9bf0d20
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bba219c00
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bba219c00 session 0x562bbcd76960
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bbb1c8800
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bbb1c8800 session 0x562bbacc0b40
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bbc5a5000
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bbc5a5000 session 0x562bbbadf860
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 373211136 unmapped: 59392000 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a7ea4000/0x0/0x1bfc00000, data 0xdef6a4/0x101a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:36:15.910483+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4136801 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 373211136 unmapped: 59392000 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:36:16.910637+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 373211136 unmapped: 59392000 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:36:17.910809+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a7ea4000/0x0/0x1bfc00000, data 0xdef6dd/0x101a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 373211136 unmapped: 59392000 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:36:18.910954+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 373211136 unmapped: 59392000 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bbcb38800
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bbcb38800 session 0x562bbcb7e780
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:36:19.911119+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 373211136 unmapped: 59392000 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bba10c800
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bba10c800 session 0x562bba807860
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:36:20.911354+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bba219c00
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bba219c00 session 0x562bbcd5d680
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4136801 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 373211136 unmapped: 59392000 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bbb1c8800
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bbb1c8800 session 0x562bb9c790e0
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a7ea4000/0x0/0x1bfc00000, data 0xdef6dd/0x101a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bbc5a5000
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bbcb38800
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:36:21.911553+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 373211136 unmapped: 59392000 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a7ea3000/0x0/0x1bfc00000, data 0xdef6ed/0x101b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:36:22.911690+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 373350400 unmapped: 59252736 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:36:23.911858+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 373350400 unmapped: 59252736 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:36:24.912068+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 373350400 unmapped: 59252736 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:36:25.912192+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4167363 data_alloc: 218103808 data_used: 8904704
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 373350400 unmapped: 59252736 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:36:26.912442+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 373350400 unmapped: 59252736 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a7ea3000/0x0/0x1bfc00000, data 0xdef6ed/0x101b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:36:27.912613+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 373350400 unmapped: 59252736 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a7ea3000/0x0/0x1bfc00000, data 0xdef6ed/0x101b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:36:28.912764+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 373350400 unmapped: 59252736 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:36:29.912909+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a7ea3000/0x0/0x1bfc00000, data 0xdef6ed/0x101b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 373350400 unmapped: 59252736 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:36:30.913049+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4167363 data_alloc: 218103808 data_used: 8904704
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 373350400 unmapped: 59252736 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a7ea3000/0x0/0x1bfc00000, data 0xdef6ed/0x101b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:36:31.913187+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 373350400 unmapped: 59252736 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bbc6cdc00
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bbc6cdc00 session 0x562bbca143c0
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bb9f72000
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bb9f72000 session 0x562bb9befa40
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bb9f72000
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:36:32.913341+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bb9f72000 session 0x562bbace2000
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bba10c800
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bba10c800 session 0x562bbca15e00
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bba219c00
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.232671738s of 18.331647873s, submitted: 40
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 374251520 unmapped: 58351616 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #53. Immutable memtables: 9.
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bba219c00 session 0x562bba0ade00
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bbb1c8800
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bbb1c8800 session 0x562bbcd5c5a0
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bbc6cdc00
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bbc6cdc00 session 0x562bb9c78f00
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bbc6cdc00
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bbc6cdc00 session 0x562bbcd634a0
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bb9f72000
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bb9f72000 session 0x562bba0ad0e0
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:36:33.913570+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 373522432 unmapped: 59080704 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:36:34.913690+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 373530624 unmapped: 59072512 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:36:35.913824+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4224645 data_alloc: 218103808 data_used: 9490432
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 373530624 unmapped: 59072512 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a66d8000/0x0/0x1bfc00000, data 0x14196fd/0x1646000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:36:36.914055+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 373530624 unmapped: 59072512 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:36:37.914238+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 373530624 unmapped: 59072512 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:36:38.914525+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 373530624 unmapped: 59072512 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:36:39.914718+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a66d8000/0x0/0x1bfc00000, data 0x14196fd/0x1646000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 373530624 unmapped: 59072512 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bba10c800
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:36:40.914855+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4224645 data_alloc: 218103808 data_used: 9490432
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 373530624 unmapped: 59072512 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:36:41.914985+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 375545856 unmapped: 57057280 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:36:42.915144+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 375545856 unmapped: 57057280 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:36:43.915314+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 375545856 unmapped: 57057280 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a66d8000/0x0/0x1bfc00000, data 0x14196fd/0x1646000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:36:44.915463+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 375545856 unmapped: 57057280 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:36:45.915587+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4247685 data_alloc: 218103808 data_used: 12730368
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 375545856 unmapped: 57057280 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6000.1 total, 600.0 interval
                                           Cumulative writes: 57K writes, 215K keys, 57K commit groups, 1.0 writes per commit group, ingest: 0.20 GB, 0.03 MB/s
                                           Cumulative WAL: 57K writes, 21K syncs, 2.65 writes per sync, written: 0.20 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2225 writes, 7775 keys, 2225 commit groups, 1.0 writes per commit group, ingest: 7.92 MB, 0.01 MB/s
                                           Interval WAL: 2225 writes, 934 syncs, 2.38 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562bb86671f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562bb86671f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562bb86671f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562bb86671f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562bb86671f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562bb86671f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562bb86671f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562bb8667610#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.0 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562bb8667610#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.0 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562bb8667610#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.0 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562bb86671f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.0 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562bb86671f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:36:46.915715+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 375578624 unmapped: 57024512 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:36:47.915913+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 375578624 unmapped: 57024512 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:36:48.916080+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 375578624 unmapped: 57024512 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a66d8000/0x0/0x1bfc00000, data 0x14196fd/0x1646000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:36:49.916229+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 375578624 unmapped: 57024512 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:36:50.916349+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4247685 data_alloc: 218103808 data_used: 12730368
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 375578624 unmapped: 57024512 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:36:51.916440+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 375578624 unmapped: 57024512 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:36:52.916562+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 19.862173080s of 19.998111725s, submitted: 55
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 375799808 unmapped: 56803328 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a65f0000/0x0/0x1bfc00000, data 0x15016fd/0x172e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:36:53.916750+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a65f0000/0x0/0x1bfc00000, data 0x15016fd/0x172e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 375799808 unmapped: 56803328 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:36:54.916961+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376012800 unmapped: 56590336 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:36:55.917119+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4280433 data_alloc: 218103808 data_used: 12763136
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376012800 unmapped: 56590336 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:36:56.917340+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376012800 unmapped: 56590336 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:36:57.917821+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376012800 unmapped: 56590336 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:36:58.918013+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376012800 unmapped: 56590336 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:36:59.918487+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a628e000/0x0/0x1bfc00000, data 0x18636fd/0x1a90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376012800 unmapped: 56590336 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:37:00.918673+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4279861 data_alloc: 218103808 data_used: 12763136
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376012800 unmapped: 56590336 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:37:01.918963+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376012800 unmapped: 56590336 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:37:02.919127+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376012800 unmapped: 56590336 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:37:03.919486+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376012800 unmapped: 56590336 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:37:04.919718+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.704158783s of 11.834191322s, submitted: 36
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a626b000/0x0/0x1bfc00000, data 0x18856fd/0x1ab2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376020992 unmapped: 56582144 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:37:05.919853+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4280405 data_alloc: 218103808 data_used: 12763136
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376029184 unmapped: 56573952 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:37:06.919977+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a6264000/0x0/0x1bfc00000, data 0x188c6fd/0x1ab9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376029184 unmapped: 56573952 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:37:07.920290+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bba10c800 session 0x562bbbaf0d20
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bba219c00
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376029184 unmapped: 56573952 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bba219c00 session 0x562bb9bf4b40
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a6264000/0x0/0x1bfc00000, data 0x188c6fd/0x1ab9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:37:08.920431+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376029184 unmapped: 56573952 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:37:09.920573+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376029184 unmapped: 56573952 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:37:10.920764+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4200181 data_alloc: 218103808 data_used: 9490432
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376029184 unmapped: 56573952 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:37:11.920935+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376029184 unmapped: 56573952 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:37:12.921081+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bbc5a5000 session 0x562bbbcd2b40
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376045568 unmapped: 56557568 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bbcb38800 session 0x562bb9c143c0
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bb9f72000
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:37:13.921245+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bb9f72000 session 0x562bba0ac960
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376061952 unmapped: 56541184 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a7108000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:37:14.921446+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376061952 unmapped: 56541184 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:37:15.921649+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4110756 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376061952 unmapped: 56541184 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:37:16.921863+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376061952 unmapped: 56541184 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:37:17.922063+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376061952 unmapped: 56541184 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:37:18.922227+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a7108000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376061952 unmapped: 56541184 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:37:19.922454+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376061952 unmapped: 56541184 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:37:20.922585+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4110756 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376061952 unmapped: 56541184 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a7108000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:37:21.922775+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376070144 unmapped: 56532992 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:37:22.922927+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376070144 unmapped: 56532992 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:37:23.923134+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376070144 unmapped: 56532992 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a7108000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:37:24.923304+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376070144 unmapped: 56532992 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:37:25.923464+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4110756 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376070144 unmapped: 56532992 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:37:26.923613+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a7108000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376070144 unmapped: 56532992 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:37:27.923755+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a7108000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376070144 unmapped: 56532992 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:37:28.924133+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376070144 unmapped: 56532992 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:37:29.924439+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376070144 unmapped: 56532992 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:37:30.924662+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4110756 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a7108000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376070144 unmapped: 56532992 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:37:31.925176+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376070144 unmapped: 56532992 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:37:32.925396+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a7108000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376078336 unmapped: 56524800 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:37:33.925620+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376078336 unmapped: 56524800 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:37:34.925860+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a7108000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376078336 unmapped: 56524800 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:37:35.926156+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4110756 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376078336 unmapped: 56524800 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:37:36.926423+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376078336 unmapped: 56524800 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a7108000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:37:37.926736+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376078336 unmapped: 56524800 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a7108000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:37:38.926965+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a7108000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376078336 unmapped: 56524800 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:37:39.927207+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376078336 unmapped: 56524800 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:37:40.927446+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4110756 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376078336 unmapped: 56524800 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:37:41.927864+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376078336 unmapped: 56524800 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:37:42.928068+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376078336 unmapped: 56524800 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:37:43.928506+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376078336 unmapped: 56524800 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:37:44.928734+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a7108000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376086528 unmapped: 56516608 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:37:45.929071+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4110756 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376086528 unmapped: 56516608 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:37:46.929270+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376086528 unmapped: 56516608 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:37:47.929509+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a7108000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376086528 unmapped: 56516608 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:37:48.929737+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376086528 unmapped: 56516608 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:37:49.930018+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376086528 unmapped: 56516608 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:37:50.930231+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4110756 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376086528 unmapped: 56516608 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a7108000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:37:51.930450+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376086528 unmapped: 56516608 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:37:52.930680+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376086528 unmapped: 56516608 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:37:53.930970+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376086528 unmapped: 56516608 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:37:54.931193+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376094720 unmapped: 56508416 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:37:55.931344+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4110756 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376094720 unmapped: 56508416 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:37:56.931470+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a7108000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376094720 unmapped: 56508416 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:37:57.931708+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376094720 unmapped: 56508416 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:37:58.931854+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376094720 unmapped: 56508416 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a7108000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:37:59.932008+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376094720 unmapped: 56508416 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:38:00.932145+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a7108000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4110756 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376111104 unmapped: 56492032 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:38:01.932288+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376111104 unmapped: 56492032 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:38:02.932485+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376119296 unmapped: 56483840 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:38:03.932743+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376127488 unmapped: 56475648 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:38:04.932960+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376127488 unmapped: 56475648 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:38:05.933264+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a7108000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4110756 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376127488 unmapped: 56475648 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:38:06.933390+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376127488 unmapped: 56475648 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:38:07.933622+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bba10c800
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 63.202774048s of 63.326625824s, submitted: 44
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bba10c800 session 0x562bb9bee780
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bba219c00
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bba219c00 session 0x562bbafa3e00
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bbc6cdc00
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bbc6cdc00 session 0x562bbcd76f00
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bb9f72000
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376274944 unmapped: 56328192 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bb9f72000 session 0x562bbd1321e0
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bba10c800
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bba10c800 session 0x562bbcd77680
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:38:08.933803+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376283136 unmapped: 56320000 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a6dc2000/0x0/0x1bfc00000, data 0xd3267b/0xf5c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:38:09.934046+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376283136 unmapped: 56320000 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:38:10.934263+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4142391 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376283136 unmapped: 56320000 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:38:11.934509+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bba219c00
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bba219c00 session 0x562bbba243c0
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376283136 unmapped: 56320000 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:38:12.934640+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bbc6cdc00
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bbc6cdc00 session 0x562bbcd77860
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376283136 unmapped: 56320000 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bbcb38800
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bbcb38800 session 0x562bbd1563c0
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:38:13.934858+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bb9f72000
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bb9f72000 session 0x562bbc5cc3c0
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a6dc0000/0x0/0x1bfc00000, data 0xd326ad/0xf5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376283136 unmapped: 56320000 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bba10c800
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bba219c00
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:38:14.934991+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376356864 unmapped: 56246272 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:38:15.935136+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4161468 data_alloc: 218103808 data_used: 6926336
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376455168 unmapped: 56147968 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:38:16.935348+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376463360 unmapped: 56139776 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:38:17.935489+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376463360 unmapped: 56139776 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:38:18.935775+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a6dc0000/0x0/0x1bfc00000, data 0xd326ad/0xf5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376463360 unmapped: 56139776 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:38:19.936113+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.842704773s of 11.919524193s, submitted: 25
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376463360 unmapped: 56139776 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:38:20.936342+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4161244 data_alloc: 218103808 data_used: 6930432
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376463360 unmapped: 56139776 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:38:21.936551+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376553472 unmapped: 56049664 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:38:22.936679+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376610816 unmapped: 55992320 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a69b0000/0x0/0x1bfc00000, data 0xd326ad/0xf5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:38:23.936831+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376610816 unmapped: 55992320 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:38:24.936957+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376610816 unmapped: 55992320 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:38:25.937099+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4161244 data_alloc: 218103808 data_used: 6930432
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 376610816 unmapped: 55992320 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:38:26.937354+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a6826000/0x0/0x1bfc00000, data 0xebc6ad/0x10e8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [0,0,0,0,0,2,7])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 378036224 unmapped: 54566912 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:38:27.937567+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 378626048 unmapped: 53977088 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:38:28.937723+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a63cf000/0x0/0x1bfc00000, data 0x13126ad/0x153e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 378626048 unmapped: 53977088 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:38:29.937908+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a63af000/0x0/0x1bfc00000, data 0x132a6ad/0x1556000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 378626048 unmapped: 53977088 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:38:30.938041+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4218840 data_alloc: 218103808 data_used: 6979584
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 378626048 unmapped: 53977088 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:38:31.938159+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 378626048 unmapped: 53977088 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:38:32.938314+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a63af000/0x0/0x1bfc00000, data 0x132a6ad/0x1556000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 378626048 unmapped: 53977088 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:38:33.938502+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.253366470s of 14.312877655s, submitted: 372
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 378626048 unmapped: 53977088 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:38:34.938639+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 378626048 unmapped: 53977088 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:38:35.938790+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4218856 data_alloc: 218103808 data_used: 6979584
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 378626048 unmapped: 53977088 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:38:36.938999+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a63af000/0x0/0x1bfc00000, data 0x132a6ad/0x1556000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 378626048 unmapped: 53977088 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:38:37.939169+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bba10c800 session 0x562bbcf0be00
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bba219c00 session 0x562bba8061e0
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 378626048 unmapped: 53977088 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:38:38.939329+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bbc6cdc00
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bbc6cdc00 session 0x562bb9c06f00
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a6cf9000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 378650624 unmapped: 53952512 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:38:39.939485+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 378650624 unmapped: 53952512 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:38:40.939621+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4122138 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 378650624 unmapped: 53952512 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:38:41.939792+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 378650624 unmapped: 53952512 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:38:42.939942+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 378650624 unmapped: 53952512 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:38:43.940106+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a6cf9000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 378650624 unmapped: 53952512 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:38:44.940282+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 378650624 unmapped: 53952512 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:38:45.940430+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4122138 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 378650624 unmapped: 53952512 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:38:46.940561+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 378650624 unmapped: 53952512 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:38:47.940696+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 378650624 unmapped: 53952512 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:38:48.940906+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 378650624 unmapped: 53952512 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:38:49.941045+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a6cf9000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 378650624 unmapped: 53952512 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:38:50.941233+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4122138 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 378650624 unmapped: 53952512 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:38:51.941397+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 378658816 unmapped: 53944320 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:38:52.941567+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a6cf9000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 378658816 unmapped: 53944320 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:38:53.941735+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 378658816 unmapped: 53944320 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:38:54.941897+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 378658816 unmapped: 53944320 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:38:55.942091+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4122138 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 378658816 unmapped: 53944320 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:38:56.942481+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a6cf9000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 378658816 unmapped: 53944320 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:38:57.942687+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 378658816 unmapped: 53944320 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:38:58.942835+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 378667008 unmapped: 53936128 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:38:59.943027+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 378667008 unmapped: 53936128 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:39:00.943260+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4122138 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 378667008 unmapped: 53936128 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:39:01.943469+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 378667008 unmapped: 53936128 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:39:02.953654+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a6cf9000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 378667008 unmapped: 53936128 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:39:03.953920+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 378667008 unmapped: 53936128 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:39:04.954079+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 378667008 unmapped: 53936128 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:39:05.954201+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4122138 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 378675200 unmapped: 53927936 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:39:06.954303+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 378675200 unmapped: 53927936 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:39:07.954415+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a6cf9000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 378675200 unmapped: 53927936 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:39:08.954577+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a6cf9000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 378675200 unmapped: 53927936 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:39:09.954742+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:39:10.954998+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 378675200 unmapped: 53927936 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4122138 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:39:11.955203+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 378675200 unmapped: 53927936 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:39:12.955412+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 378675200 unmapped: 53927936 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:39:13.955662+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 378683392 unmapped: 53919744 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bbb1c8800
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bbb1c8800 session 0x562bbccd21e0
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bb9f72000
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bb9f72000 session 0x562bbcc0e000
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bba10c800
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bba10c800 session 0x562bb9c23a40
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bba219c00
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bba219c00 session 0x562bbbc7d4a0
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bbc6cdc00
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 39.750427246s of 39.857383728s, submitted: 38
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bbc6cdc00 session 0x562bbd132b40
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:39:14.955845+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 378699776 unmapped: 53903360 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a6977000/0x0/0x1bfc00000, data 0xd6d67b/0xf97000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:39:15.956070+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 378699776 unmapped: 53903360 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a6977000/0x0/0x1bfc00000, data 0xd6d67b/0xf97000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4150180 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:39:16.956315+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 378699776 unmapped: 53903360 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:39:17.956520+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 378699776 unmapped: 53903360 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:39:18.956841+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 378699776 unmapped: 53903360 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:39:19.957091+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 378699776 unmapped: 53903360 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:39:20.957310+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 378707968 unmapped: 53895168 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4150180 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:39:21.957474+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 378716160 unmapped: 53886976 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a6977000/0x0/0x1bfc00000, data 0xd6d67b/0xf97000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:39:22.957720+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 378716160 unmapped: 53886976 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:39:23.957993+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 378724352 unmapped: 53878784 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:39:24.958186+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 378724352 unmapped: 53878784 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:39:25.970086+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 378724352 unmapped: 53878784 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bbbea5c00
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4175780 data_alloc: 218103808 data_used: 8372224
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:39:26.970291+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 378929152 unmapped: 53673984 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:39:27.970454+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 378929152 unmapped: 53673984 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a6977000/0x0/0x1bfc00000, data 0xd6d67b/0xf97000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:39:28.970636+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 378929152 unmapped: 53673984 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:39:29.970845+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 378929152 unmapped: 53673984 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:39:30.971014+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 378929152 unmapped: 53673984 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4175780 data_alloc: 218103808 data_used: 8372224
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:39:31.971229+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 378929152 unmapped: 53673984 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a6977000/0x0/0x1bfc00000, data 0xd6d67b/0xf97000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:39:32.971450+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 378929152 unmapped: 53673984 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:39:33.971664+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 378929152 unmapped: 53673984 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a6977000/0x0/0x1bfc00000, data 0xd6d67b/0xf97000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:39:34.971821+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 378929152 unmapped: 53673984 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a6977000/0x0/0x1bfc00000, data 0xd6d67b/0xf97000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:39:35.972022+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 378929152 unmapped: 53673984 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a6977000/0x0/0x1bfc00000, data 0xd6d67b/0xf97000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4175780 data_alloc: 218103808 data_used: 8372224
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:39:36.972169+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 378937344 unmapped: 53665792 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 23.219261169s of 23.245233536s, submitted: 6
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:39:37.972319+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 379494400 unmapped: 53108736 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:39:38.972531+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 379494400 unmapped: 53108736 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a643c000/0x0/0x1bfc00000, data 0x12a867b/0x14d2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:39:39.972691+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 379502592 unmapped: 53100544 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:39:40.972948+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 379502592 unmapped: 53100544 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4226064 data_alloc: 218103808 data_used: 8384512
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:39:41.973079+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 379502592 unmapped: 53100544 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:39:42.973226+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 379502592 unmapped: 53100544 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a6419000/0x0/0x1bfc00000, data 0x12cb67b/0x14f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:39:43.973445+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 379502592 unmapped: 53100544 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:39:44.973625+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 379502592 unmapped: 53100544 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:39:45.973819+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 379510784 unmapped: 53092352 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a6419000/0x0/0x1bfc00000, data 0x12cb67b/0x14f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:39:46.973993+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4223984 data_alloc: 218103808 data_used: 8384512
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 379510784 unmapped: 53092352 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:39:47.974127+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 379510784 unmapped: 53092352 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a6416000/0x0/0x1bfc00000, data 0x12ce67b/0x14f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:39:48.974278+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 379518976 unmapped: 53084160 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:39:49.974422+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 379518976 unmapped: 53084160 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a6416000/0x0/0x1bfc00000, data 0x12ce67b/0x14f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:39:50.974594+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 379527168 unmapped: 53075968 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a6416000/0x0/0x1bfc00000, data 0x12ce67b/0x14f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:39:51.974778+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4223984 data_alloc: 218103808 data_used: 8384512
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 379527168 unmapped: 53075968 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:39:52.974941+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 379527168 unmapped: 53075968 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:39:53.975129+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a6416000/0x0/0x1bfc00000, data 0x12ce67b/0x14f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 379527168 unmapped: 53075968 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:39:54.975337+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 379527168 unmapped: 53075968 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:39:55.975532+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 379527168 unmapped: 53075968 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:39:56.975679+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4223984 data_alloc: 218103808 data_used: 8384512
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 379527168 unmapped: 53075968 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a6416000/0x0/0x1bfc00000, data 0x12ce67b/0x14f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:39:57.975865+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 379527168 unmapped: 53075968 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:39:58.976045+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 379535360 unmapped: 53067776 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:39:59.976243+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 379535360 unmapped: 53067776 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a6416000/0x0/0x1bfc00000, data 0x12ce67b/0x14f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:40:00.976469+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 379543552 unmapped: 53059584 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:40:01.976677+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4223984 data_alloc: 218103808 data_used: 8384512
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 379543552 unmapped: 53059584 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a6416000/0x0/0x1bfc00000, data 0x12ce67b/0x14f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:40:02.976869+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 379551744 unmapped: 53051392 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bbbea0400
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bbbea0400 session 0x562bba806b40
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bb9f72000
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bb9f72000 session 0x562bbcd63e00
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bba10c800
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bba10c800 session 0x562bbccd3860
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bba219c00
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bba219c00 session 0x562bbba3fa40
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bbbea0400
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 26.135486603s of 26.255990982s, submitted: 44
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:40:03.977056+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bbbea0400 session 0x562bbc5cd0e0
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 379994112 unmapped: 52609024 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:40:04.977292+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 379994112 unmapped: 52609024 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:40:05.977554+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 379994112 unmapped: 52609024 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a62db000/0x0/0x1bfc00000, data 0x140967b/0x1633000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:40:06.977771+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4240696 data_alloc: 218103808 data_used: 8384512
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 379994112 unmapped: 52609024 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:40:07.978435+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 379994112 unmapped: 52609024 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:40:08.978579+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 379994112 unmapped: 52609024 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:40:09.978752+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 379994112 unmapped: 52609024 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:40:10.978905+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 380002304 unmapped: 52600832 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:40:11.979029+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4240652 data_alloc: 218103808 data_used: 8384512
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 380002304 unmapped: 52600832 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a62d9000/0x0/0x1bfc00000, data 0x140a67b/0x1634000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bbc6cdc00
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bbc6cdc00 session 0x562bbbaf7a40
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:40:12.979180+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 380002304 unmapped: 52600832 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bb9f72000
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bb9f72000 session 0x562bbbadf2c0
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bba10c800
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bba10c800 session 0x562bbca15a40
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:40:13.979338+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bba219c00
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.092492104s of 10.122621536s, submitted: 12
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bba219c00 session 0x562bba0fd2c0
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 380010496 unmapped: 52592640 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:40:14.979532+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 380010496 unmapped: 52592640 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bbbea0400
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:40:15.979754+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 380018688 unmapped: 52584448 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bb9f72800
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:40:16.979946+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4251127 data_alloc: 218103808 data_used: 9297920
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 380035072 unmapped: 52568064 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a62d9000/0x0/0x1bfc00000, data 0x140a68b/0x1635000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:40:17.980151+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a62d9000/0x0/0x1bfc00000, data 0x140a68b/0x1635000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 380043264 unmapped: 52559872 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:40:18.980338+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 380043264 unmapped: 52559872 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:40:19.980490+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 380043264 unmapped: 52559872 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:40:20.980623+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 380043264 unmapped: 52559872 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:40:21.980754+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4251127 data_alloc: 218103808 data_used: 9297920
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 380043264 unmapped: 52559872 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:40:22.980969+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a62d9000/0x0/0x1bfc00000, data 0x140a68b/0x1635000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 380043264 unmapped: 52559872 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:40:23.981272+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 380043264 unmapped: 52559872 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:40:24.981458+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 380059648 unmapped: 52543488 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:40:25.981720+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a62d9000/0x0/0x1bfc00000, data 0x140a68b/0x1635000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 380059648 unmapped: 52543488 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:40:26.981864+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4251127 data_alloc: 218103808 data_used: 9297920
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 380067840 unmapped: 52535296 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.634784698s of 13.658702850s, submitted: 7
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:40:27.982069+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 384688128 unmapped: 47915008 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a6256000/0x0/0x1bfc00000, data 0x148d68b/0x16b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:40:28.982264+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 384237568 unmapped: 48365568 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:40:29.982440+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a57d1000/0x0/0x1bfc00000, data 0x1f1268b/0x213d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 384237568 unmapped: 48365568 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:40:30.982729+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 384237568 unmapped: 48365568 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:40:31.982877+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4337573 data_alloc: 218103808 data_used: 10416128
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 384237568 unmapped: 48365568 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a57d1000/0x0/0x1bfc00000, data 0x1f1268b/0x213d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:40:32.983112+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 384237568 unmapped: 48365568 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:40:33.983325+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a57d1000/0x0/0x1bfc00000, data 0x1f1268b/0x213d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 384237568 unmapped: 48365568 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:40:34.983536+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 384237568 unmapped: 48365568 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:40:35.983818+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 384245760 unmapped: 48357376 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:40:36.984021+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a57b0000/0x0/0x1bfc00000, data 0x1f3368b/0x215e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4335913 data_alloc: 218103808 data_used: 10416128
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 384245760 unmapped: 48357376 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:40:37.984221+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 384245760 unmapped: 48357376 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a57b0000/0x0/0x1bfc00000, data 0x1f3368b/0x215e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:40:38.984487+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 384245760 unmapped: 48357376 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:40:39.984721+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 384245760 unmapped: 48357376 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:40:40.984915+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 384245760 unmapped: 48357376 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a57b0000/0x0/0x1bfc00000, data 0x1f3368b/0x215e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:40:41.985169+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4335913 data_alloc: 218103808 data_used: 10416128
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 384245760 unmapped: 48357376 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:40:42.985456+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 384253952 unmapped: 48349184 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:40:43.985744+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.258222580s of 16.516244888s, submitted: 98
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 384253952 unmapped: 48349184 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:40:44.985940+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bb9f72800 session 0x562bbccd2b40
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bbbea0400 session 0x562bbc7dda40
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 384253952 unmapped: 48349184 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bb9f72000
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:40:45.986121+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bb9f72000 session 0x562bbd132b40
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 384253952 unmapped: 48349184 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a61a1000/0x0/0x1bfc00000, data 0x12d068b/0x14fb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:40:46.986360+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4231892 data_alloc: 218103808 data_used: 8384512
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 384253952 unmapped: 48349184 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:40:47.986650+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 384253952 unmapped: 48349184 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:40:48.986810+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 384253952 unmapped: 48349184 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:40:49.986950+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 384253952 unmapped: 48349184 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:40:50.987159+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 384253952 unmapped: 48349184 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a61a2000/0x0/0x1bfc00000, data 0x12d067b/0x14fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:40:51.987477+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4231892 data_alloc: 218103808 data_used: 8384512
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 384262144 unmapped: 48340992 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:40:52.987706+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 384262144 unmapped: 48340992 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:40:53.987991+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 384262144 unmapped: 48340992 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:40:54.988171+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 384262144 unmapped: 48340992 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a61a2000/0x0/0x1bfc00000, data 0x12d067b/0x14fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:40:55.988367+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 384270336 unmapped: 48332800 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a61a2000/0x0/0x1bfc00000, data 0x12d067b/0x14fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:40:56.988583+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4231892 data_alloc: 218103808 data_used: 8384512
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a61a2000/0x0/0x1bfc00000, data 0x12d067b/0x14fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 384270336 unmapped: 48332800 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bbbea5c00 session 0x562bbcf0a1e0
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bb9f72800
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.627133369s of 13.695060730s, submitted: 27
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:40:57.988715+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bb9f72800 session 0x562bbafa3e00
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 382050304 unmapped: 50552832 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:40:58.988926+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 382050304 unmapped: 50552832 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:40:59.989128+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 382050304 unmapped: 50552832 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:41:00.989268+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 382050304 unmapped: 50552832 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:41:01.989536+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a6cfa000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4136398 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 382050304 unmapped: 50552832 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a6cfa000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:41:02.989685+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 382050304 unmapped: 50552832 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:41:03.989979+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 382050304 unmapped: 50552832 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:41:04.990160+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 382050304 unmapped: 50552832 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:41:05.990442+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 382050304 unmapped: 50552832 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:41:06.990633+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4136398 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 382050304 unmapped: 50552832 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:41:07.990852+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 382050304 unmapped: 50552832 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a6cfa000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:41:08.991026+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 382050304 unmapped: 50552832 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:41:09.991323+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 382050304 unmapped: 50552832 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:41:10.991533+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 382050304 unmapped: 50552832 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:41:11.991683+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4136398 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 382050304 unmapped: 50552832 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:41:12.991830+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a6cfa000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 382058496 unmapped: 50544640 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:41:13.991963+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 382058496 unmapped: 50544640 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a6cfa000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:41:14.992163+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 382058496 unmapped: 50544640 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:41:15.992314+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 382058496 unmapped: 50544640 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:41:16.992484+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4136398 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 382058496 unmapped: 50544640 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:41:17.992626+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a6cfa000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 382058496 unmapped: 50544640 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:41:18.992758+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 382058496 unmapped: 50544640 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:41:19.992847+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 382058496 unmapped: 50544640 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a6cfa000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:41:20.992974+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 382066688 unmapped: 50536448 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:41:21.993128+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4136398 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 382066688 unmapped: 50536448 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:41:22.993288+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 382066688 unmapped: 50536448 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:41:23.993452+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a6cfa000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bba10c800
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 26.413507462s of 26.433465958s, submitted: 9
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bba10c800 session 0x562bba0ad0e0
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 382083072 unmapped: 50520064 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bba219c00
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bba219c00 session 0x562bbd1325a0
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bb9f72000
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bb9f72000 session 0x562bbba34960
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bb9f72800
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bb9f72800 session 0x562bb9c23a40
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bba10c800
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bba10c800 session 0x562bbba2f860
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:41:24.993578+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 382091264 unmapped: 50511872 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:41:25.993731+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 382099456 unmapped: 50503680 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:41:26.993914+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4179747 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 382099456 unmapped: 50503680 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a6788000/0x0/0x1bfc00000, data 0xf5c67b/0x1186000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:41:27.994116+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 382099456 unmapped: 50503680 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:41:28.994276+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 382107648 unmapped: 50495488 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:41:29.994472+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 382107648 unmapped: 50495488 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:41:30.994642+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a6788000/0x0/0x1bfc00000, data 0xf5c67b/0x1186000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 382107648 unmapped: 50495488 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a6788000/0x0/0x1bfc00000, data 0xf5c67b/0x1186000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:41:31.994861+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4179747 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 382107648 unmapped: 50495488 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:41:32.995049+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 382107648 unmapped: 50495488 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:41:33.995244+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 382107648 unmapped: 50495488 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bbbea5c00
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.582936287s of 10.676069260s, submitted: 15
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:41:34.995366+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bbbea5c00 session 0x562bb9c15680
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 382107648 unmapped: 50495488 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:41:35.995601+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bbcb78c00
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 382107648 unmapped: 50495488 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:41:36.995755+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4181852 data_alloc: 218103808 data_used: 4734976
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 382124032 unmapped: 50479104 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a6787000/0x0/0x1bfc00000, data 0xf5c69e/0x1187000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:41:37.995919+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 382418944 unmapped: 50184192 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a6787000/0x0/0x1bfc00000, data 0xf5c69e/0x1187000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:41:38.996051+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 382418944 unmapped: 50184192 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:41:39.996209+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 382418944 unmapped: 50184192 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:41:40.996337+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 382418944 unmapped: 50184192 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:41:41.996493+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4221532 data_alloc: 218103808 data_used: 10321920
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 382418944 unmapped: 50184192 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:41:42.996641+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 382418944 unmapped: 50184192 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:41:43.996808+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a6787000/0x0/0x1bfc00000, data 0xf5c69e/0x1187000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 382418944 unmapped: 50184192 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:41:44.996939+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 382418944 unmapped: 50184192 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:41:45.997082+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 382418944 unmapped: 50184192 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a6787000/0x0/0x1bfc00000, data 0xf5c69e/0x1187000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:41:46.997197+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4221532 data_alloc: 218103808 data_used: 10321920
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 382427136 unmapped: 50176000 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:41:47.997338+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.294997215s of 13.310887337s, submitted: 5
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 384352256 unmapped: 48250880 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:41:48.997486+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 384352256 unmapped: 48250880 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:41:49.997607+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 383680512 unmapped: 48922624 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:41:50.997762+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a6532000/0x0/0x1bfc00000, data 0x11b169e/0x13dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 383680512 unmapped: 48922624 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:41:51.997856+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4254338 data_alloc: 218103808 data_used: 11018240
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 383680512 unmapped: 48922624 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a6532000/0x0/0x1bfc00000, data 0x11b169e/0x13dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:41:52.998023+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 383680512 unmapped: 48922624 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:41:53.998200+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 383680512 unmapped: 48922624 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:41:54.998320+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 383680512 unmapped: 48922624 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:41:55.998454+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 383680512 unmapped: 48922624 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a6532000/0x0/0x1bfc00000, data 0x11b169e/0x13dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:41:56.998585+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a6532000/0x0/0x1bfc00000, data 0x11b169e/0x13dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4254354 data_alloc: 218103808 data_used: 11018240
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 383680512 unmapped: 48922624 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:41:57.998728+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 383680512 unmapped: 48922624 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:41:58.998930+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 383680512 unmapped: 48922624 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:41:59.999067+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 383680512 unmapped: 48922624 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:42:00.999226+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 383688704 unmapped: 48914432 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bba218400 session 0x562bbacc1e00
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bb9f72000
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:42:01.999449+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a6532000/0x0/0x1bfc00000, data 0x11b169e/0x13dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4254674 data_alloc: 218103808 data_used: 11026432
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 383696896 unmapped: 48906240 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:42:02.999622+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 383696896 unmapped: 48906240 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a6532000/0x0/0x1bfc00000, data 0x11b169e/0x13dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:42:03.999791+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a6532000/0x0/0x1bfc00000, data 0x11b169e/0x13dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 383696896 unmapped: 48906240 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:42:04.999940+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 383696896 unmapped: 48906240 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:42:06.000110+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 383696896 unmapped: 48906240 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:42:07.000223+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4254674 data_alloc: 218103808 data_used: 11026432
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 383705088 unmapped: 48898048 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:42:08.000349+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 383705088 unmapped: 48898048 heap: 432603136 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bb9f72800
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bb9f72800 session 0x562bb9c06000
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bba10c800
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bba10c800 session 0x562bbcd625a0
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bba218400
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bba218400 session 0x562bbc816960
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bbbea5c00
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bbbea5c00 session 0x562bbcf0b4a0
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:42:09.000455+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bbcb39400
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 20.636184692s of 20.767435074s, submitted: 33
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bbcb39400 session 0x562bbc5cd680
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bb9f72800
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bb9f72800 session 0x562bb9fdb680
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bba10c800
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bba10c800 session 0x562bb9c22000
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bba218400
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bba218400 session 0x562bbc5cd2c0
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bbbea5c00
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bbbea5c00 session 0x562bbba20960
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a5b85000/0x0/0x1bfc00000, data 0x1b5c6d6/0x1d89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 384942080 unmapped: 56623104 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:42:10.000607+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a5b85000/0x0/0x1bfc00000, data 0x1b5c6d6/0x1d89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 384942080 unmapped: 56623104 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:42:11.000725+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 384942080 unmapped: 56623104 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:42:12.000884+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4334069 data_alloc: 218103808 data_used: 11026432
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 384942080 unmapped: 56623104 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bba13fc00
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bba13fc00 session 0x562bbba3c780
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a5b85000/0x0/0x1bfc00000, data 0x1b5c70f/0x1d89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:42:13.001022+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bb9f72800
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bb9f72800 session 0x562bbbcd2960
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 384942080 unmapped: 56623104 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bba10c800
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:42:14.001191+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bba10c800 session 0x562bbafa2000
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bba218400
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bba218400 session 0x562bb9bef4a0
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bbbea5c00
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 384942080 unmapped: 56623104 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bbef03000
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:42:15.001329+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 384974848 unmapped: 56590336 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:42:16.001460+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 387710976 unmapped: 53854208 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:42:17.001619+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4407587 data_alloc: 234881024 data_used: 19943424
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a5b84000/0x0/0x1bfc00000, data 0x1b5c71f/0x1d8a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 387719168 unmapped: 53846016 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:42:18.001763+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 387719168 unmapped: 53846016 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:42:19.001993+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 387719168 unmapped: 53846016 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:42:20.002145+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a5b84000/0x0/0x1bfc00000, data 0x1b5c71f/0x1d8a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.381933212s of 11.494090080s, submitted: 36
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 387719168 unmapped: 53846016 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:42:21.002263+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 387719168 unmapped: 53846016 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:42:22.002436+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4407367 data_alloc: 234881024 data_used: 20004864
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 387719168 unmapped: 53846016 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:42:23.002604+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 387719168 unmapped: 53846016 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:42:24.002833+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 387727360 unmapped: 53837824 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:42:25.002967+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 387727360 unmapped: 53837824 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:42:26.003109+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a5b82000/0x0/0x1bfc00000, data 0x1b5d71f/0x1d8b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 394346496 unmapped: 47218688 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:42:27.003261+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4512247 data_alloc: 234881024 data_used: 21610496
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 390070272 unmapped: 51494912 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:42:28.003592+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 390078464 unmapped: 51486720 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:42:29.003733+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391135232 unmapped: 50429952 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:42:30.003873+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a4ec6000/0x0/0x1bfc00000, data 0x281a71f/0x2a48000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391135232 unmapped: 50429952 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:42:31.004021+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391135232 unmapped: 50429952 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:42:32.004149+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4525351 data_alloc: 234881024 data_used: 22044672
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391135232 unmapped: 50429952 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:42:33.004271+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a4ec6000/0x0/0x1bfc00000, data 0x281a71f/0x2a48000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391135232 unmapped: 50429952 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:42:34.004427+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.557025909s of 13.791321754s, submitted: 125
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391135232 unmapped: 50429952 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:42:35.004558+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bbbea5c00 session 0x562bbccd2d20
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bbef03000 session 0x562bba0fc960
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bbef03000
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391135232 unmapped: 50429952 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:42:36.004679+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bbef03000 session 0x562bbd1332c0
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391176192 unmapped: 50388992 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:42:37.004790+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a5b9b000/0x0/0x1bfc00000, data 0x11b269e/0x13dd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4267427 data_alloc: 218103808 data_used: 9994240
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391176192 unmapped: 50388992 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:42:38.004927+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a5b9b000/0x0/0x1bfc00000, data 0x11b269e/0x13dd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391176192 unmapped: 50388992 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:42:39.005096+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391176192 unmapped: 50388992 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:42:40.005277+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391176192 unmapped: 50388992 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a5b9b000/0x0/0x1bfc00000, data 0x11b269e/0x13dd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:42:41.005463+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bbcb78c00 session 0x562bbd132f00
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bb9f72800
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391176192 unmapped: 50388992 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:42:42.005607+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bb9f72800 session 0x562bbafa4960
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a6cf9000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4156073 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 387760128 unmapped: 53805056 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:42:43.005801+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 387760128 unmapped: 53805056 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:42:44.006034+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 387760128 unmapped: 53805056 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:42:45.006183+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 387760128 unmapped: 53805056 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:42:46.006367+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 387760128 unmapped: 53805056 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:42:47.006567+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4156073 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 387760128 unmapped: 53805056 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a6cf9000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:42:48.006816+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 387760128 unmapped: 53805056 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:42:49.006990+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a6cf9000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 387760128 unmapped: 53805056 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:42:50.007474+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a6cf9000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a6cf9000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 387760128 unmapped: 53805056 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:42:51.007725+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 387760128 unmapped: 53805056 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:42:52.009173+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4156073 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 387760128 unmapped: 53805056 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:42:53.009973+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 387760128 unmapped: 53805056 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:42:54.010517+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 387760128 unmapped: 53805056 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:42:55.010718+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 387760128 unmapped: 53805056 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:42:56.011542+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a6cf9000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 387760128 unmapped: 53805056 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:42:57.011719+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a6cf9000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4156073 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 387760128 unmapped: 53805056 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:42:58.012068+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 387760128 unmapped: 53805056 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:42:59.012322+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a6cf9000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 387768320 unmapped: 53796864 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:43:00.012587+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:43:01.012855+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 387768320 unmapped: 53796864 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a6cf9000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:43:02.013011+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 387768320 unmapped: 53796864 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4156073 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:43:03.013229+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 387776512 unmapped: 53788672 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:43:04.013548+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 387776512 unmapped: 53788672 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:43:05.013709+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 387776512 unmapped: 53788672 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a6cf9000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:43:06.013874+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 387776512 unmapped: 53788672 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:43:07.014100+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 387776512 unmapped: 53788672 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4156073 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:43:08.014260+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 387776512 unmapped: 53788672 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bba10c800
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 34.158493042s of 34.314132690s, submitted: 58
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bba10c800 session 0x562bbc7dd680
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bba218400
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bba218400 session 0x562bbbaf63c0
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bb9f72800
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bb9f72800 session 0x562bbc7dc5a0
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bba10c800
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:43:09.014428+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bba10c800 session 0x562bbccd8000
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 387776512 unmapped: 53788672 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bbcb78c00
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bbcb78c00 session 0x562bbca143c0
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a69e5000/0x0/0x1bfc00000, data 0xcff67b/0xf29000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:43:10.014568+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 387776512 unmapped: 53788672 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:43:11.014714+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 387776512 unmapped: 53788672 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:43:12.014870+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 387776512 unmapped: 53788672 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a69e5000/0x0/0x1bfc00000, data 0xcff67b/0xf29000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4186154 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:43:13.014990+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 387776512 unmapped: 53788672 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:43:14.015232+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 387776512 unmapped: 53788672 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:43:15.015517+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 387776512 unmapped: 53788672 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:43:16.015755+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 387776512 unmapped: 53788672 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a69e5000/0x0/0x1bfc00000, data 0xcff67b/0xf29000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bbef03000
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bbef03000 session 0x562bba9941e0
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bbbea4000 session 0x562bbcd62b40
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bb9f72800
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:43:17.015951+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bba10c800
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bba10c800 session 0x562bbba3f680
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 387776512 unmapped: 53788672 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4186154 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bbcb78c00
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bbcb78c00 session 0x562bbae681e0
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bbef03000
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:43:18.016079+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 387784704 unmapped: 53780480 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bbef03000 session 0x562bbafa4b40
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bbbea5c00
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.946137428s of 10.012309074s, submitted: 17
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bbc01f400
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:43:19.016425+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 387784704 unmapped: 53780480 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:43:20.016594+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 387792896 unmapped: 53772288 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:43:21.016843+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 387809280 unmapped: 53755904 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a69e4000/0x0/0x1bfc00000, data 0xcff68b/0xf2a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:43:22.017206+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 387809280 unmapped: 53755904 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4210476 data_alloc: 218103808 data_used: 7921664
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:43:23.017587+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 387809280 unmapped: 53755904 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:43:24.017890+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 387809280 unmapped: 53755904 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:43:25.018061+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 387809280 unmapped: 53755904 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:43:26.018281+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 387809280 unmapped: 53755904 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a69e4000/0x0/0x1bfc00000, data 0xcff68b/0xf2a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:43:27.018555+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 387817472 unmapped: 53747712 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4210476 data_alloc: 218103808 data_used: 7921664
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a69e4000/0x0/0x1bfc00000, data 0xcff68b/0xf2a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:43:28.018728+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 387817472 unmapped: 53747712 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:43:29.018914+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 387817472 unmapped: 53747712 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:43:30.019167+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 387817472 unmapped: 53747712 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.796102524s of 11.799226761s, submitted: 1
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a69e4000/0x0/0x1bfc00000, data 0xcff68b/0xf2a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:43:31.019354+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 390168576 unmapped: 51396608 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:43:32.019767+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 390168576 unmapped: 51396608 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4246584 data_alloc: 218103808 data_used: 8065024
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:43:33.020047+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 390168576 unmapped: 51396608 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:43:34.020299+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 390168576 unmapped: 51396608 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:43:35.020516+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 390168576 unmapped: 51396608 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:43:36.020629+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 390168576 unmapped: 51396608 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a65d8000/0x0/0x1bfc00000, data 0x110b68b/0x1336000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:43:37.020737+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 390168576 unmapped: 51396608 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4246584 data_alloc: 218103808 data_used: 8065024
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:43:38.020943+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 390168576 unmapped: 51396608 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a65d8000/0x0/0x1bfc00000, data 0x110b68b/0x1336000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:43:39.021090+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 390168576 unmapped: 51396608 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:43:40.021443+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 390168576 unmapped: 51396608 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:43:41.021658+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 390168576 unmapped: 51396608 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:43:42.021870+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 390168576 unmapped: 51396608 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4246584 data_alloc: 218103808 data_used: 8065024
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:43:43.022035+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 390168576 unmapped: 51396608 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:43:44.022203+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 390168576 unmapped: 51396608 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a65d8000/0x0/0x1bfc00000, data 0x110b68b/0x1336000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.992965698s of 14.095608711s, submitted: 37
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bbbea5c00 session 0x562bbc4e03c0
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bbc01f400 session 0x562bb9fda000
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a65d8000/0x0/0x1bfc00000, data 0x110b68b/0x1336000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bba10c800
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:43:45.022354+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bba10c800 session 0x562bbae69860
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 390045696 unmapped: 51519488 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a6cfa000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:43:46.022683+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 390045696 unmapped: 51519488 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:43:47.022916+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 390045696 unmapped: 51519488 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4162611 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:43:48.023095+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 390045696 unmapped: 51519488 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:43:49.023222+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 390045696 unmapped: 51519488 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:43:50.023501+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a6cfa000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 390045696 unmapped: 51519488 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:43:51.023636+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 390045696 unmapped: 51519488 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:43:52.023789+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 390045696 unmapped: 51519488 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a6cfa000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:43:53.023963+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4162611 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 390045696 unmapped: 51519488 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:43:54.024180+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 390045696 unmapped: 51519488 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a6cfa000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:43:55.024298+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 390045696 unmapped: 51519488 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:43:56.024456+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 390045696 unmapped: 51519488 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a6cfa000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:43:57.024596+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 390045696 unmapped: 51519488 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:43:58.024748+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4162611 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 390045696 unmapped: 51519488 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:43:59.024888+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 390045696 unmapped: 51519488 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:44:00.025001+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 390045696 unmapped: 51519488 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:44:01.025169+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 390045696 unmapped: 51519488 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:44:02.025338+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 390045696 unmapped: 51519488 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a6cfa000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:44:03.025499+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4162611 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 390045696 unmapped: 51519488 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:44:04.025658+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 390045696 unmapped: 51519488 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:44:05.025807+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 390045696 unmapped: 51519488 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:44:06.025983+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 390045696 unmapped: 51519488 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:44:07.026162+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 390045696 unmapped: 51519488 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bbbea5c00
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 23.104436874s of 23.175792694s, submitted: 24
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bbbea5c00 session 0x562bbbadf0e0
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bbcb78c00
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bbcb78c00 session 0x562bbc5cc960
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:44:08.026277+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4189948 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a69d5000/0x0/0x1bfc00000, data 0xd0e6dd/0xf39000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 390053888 unmapped: 51511296 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:44:09.026409+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 390062080 unmapped: 51503104 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:44:10.026630+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 390062080 unmapped: 51503104 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:44:11.026785+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 390062080 unmapped: 51503104 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:44:12.026988+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a69d5000/0x0/0x1bfc00000, data 0xd0e6dd/0xf39000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 390062080 unmapped: 51503104 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:44:13.027159+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4189948 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 390062080 unmapped: 51503104 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:44:14.027325+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 390070272 unmapped: 51494912 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a69d5000/0x0/0x1bfc00000, data 0xd0e6dd/0xf39000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bbef03000
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bbef03000 session 0x562bbcc0e000
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:44:15.027458+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 390070272 unmapped: 51494912 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bbc9f7c00
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bbc9f7c00 session 0x562bbbadef00
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:44:16.027637+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 390070272 unmapped: 51494912 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bba10c800
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bba10c800 session 0x562bbba3f860
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bbbea5c00
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bbbea5c00 session 0x562bba7f65a0
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:44:17.027748+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bbc9f7c00
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bbcb78c00
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 390078464 unmapped: 51486720 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:44:18.027862+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4192407 data_alloc: 218103808 data_used: 4706304
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 390078464 unmapped: 51486720 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:44:19.028038+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a69d4000/0x0/0x1bfc00000, data 0xd0e6ed/0xf3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 390078464 unmapped: 51486720 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:44:20.028209+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 390078464 unmapped: 51486720 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:44:21.028355+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 390078464 unmapped: 51486720 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:44:22.028482+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 390078464 unmapped: 51486720 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a69d4000/0x0/0x1bfc00000, data 0xd0e6ed/0xf3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:44:23.028647+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4207767 data_alloc: 218103808 data_used: 6868992
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 390078464 unmapped: 51486720 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a69d4000/0x0/0x1bfc00000, data 0xd0e6ed/0xf3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:44:24.028864+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 390078464 unmapped: 51486720 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:44:25.029047+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 390078464 unmapped: 51486720 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:44:26.029194+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 390078464 unmapped: 51486720 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:44:27.029332+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 390094848 unmapped: 51470336 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:44:28.029514+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4207767 data_alloc: 218103808 data_used: 6868992
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 390094848 unmapped: 51470336 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:44:29.029712+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a69d4000/0x0/0x1bfc00000, data 0xd0e6ed/0xf3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 390094848 unmapped: 51470336 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:44:30.029925+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 390094848 unmapped: 51470336 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:44:31.030107+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 23.209741592s of 23.290512085s, submitted: 33
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a5cc6000/0x0/0x1bfc00000, data 0x1a1c6ed/0x1c48000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391774208 unmapped: 49790976 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:44:32.030238+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392634368 unmapped: 48930816 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:44:33.030370+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4320113 data_alloc: 218103808 data_used: 7106560
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392642560 unmapped: 48922624 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:44:34.030607+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392642560 unmapped: 48922624 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a5c95000/0x0/0x1bfc00000, data 0x1a4c6ed/0x1c78000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:44:35.030747+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392642560 unmapped: 48922624 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:44:36.030906+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392642560 unmapped: 48922624 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:44:37.031055+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a5c95000/0x0/0x1bfc00000, data 0x1a4c6ed/0x1c78000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392642560 unmapped: 48922624 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:44:38.031190+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4318113 data_alloc: 218103808 data_used: 7110656
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a5c93000/0x0/0x1bfc00000, data 0x1a4f6ed/0x1c7b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392642560 unmapped: 48922624 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:44:39.031353+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392642560 unmapped: 48922624 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:44:40.031540+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a5c93000/0x0/0x1bfc00000, data 0x1a4f6ed/0x1c7b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392642560 unmapped: 48922624 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:44:41.031688+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392650752 unmapped: 48914432 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:44:42.031840+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392650752 unmapped: 48914432 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:44:43.031974+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4318113 data_alloc: 218103808 data_used: 7110656
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392650752 unmapped: 48914432 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a5c93000/0x0/0x1bfc00000, data 0x1a4f6ed/0x1c7b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:44:44.032115+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bbef03000
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bbef03000 session 0x562bbc5cd680
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bbbec0000
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bbbec0000 session 0x562bbcf0b4a0
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bbce06c00
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bbce06c00 session 0x562bbc816960
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bbce06c00
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392650752 unmapped: 48914432 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bbce06c00 session 0x562bbcd625a0
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bba10c800
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.348495483s of 13.547456741s, submitted: 97
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bba10c800 session 0x562bb9c06000
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bbbea5c00
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bbbea5c00 session 0x562bb9c15680
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bbbec0000
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bbbec0000 session 0x562bbba2f860
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bbef03000
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bbef03000 session 0x562bb9c23a40
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bbef03000
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bbef03000 session 0x562bbd1325a0
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:44:45.032297+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392650752 unmapped: 48914432 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:44:46.032456+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392650752 unmapped: 48914432 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:44:47.032666+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392658944 unmapped: 48906240 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:44:48.032859+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bba10c800
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bba10c800 session 0x562bba0ad0e0
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4336983 data_alloc: 218103808 data_used: 7110656
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a5a88000/0x0/0x1bfc00000, data 0x1c596fd/0x1e86000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392658944 unmapped: 48906240 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bbbea5c00
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bbbea5c00 session 0x562bbafa3e00
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:44:49.032983+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a5a88000/0x0/0x1bfc00000, data 0x1c596fd/0x1e86000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392667136 unmapped: 48898048 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:44:50.033129+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bbbec0000
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bbbec0000 session 0x562bbd132b40
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bbce06c00
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392667136 unmapped: 48898048 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bbce06c00 session 0x562bbc7dda40
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bbce06c00
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:44:51.033310+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bba10c800
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392822784 unmapped: 48742400 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:44:52.033492+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392822784 unmapped: 48742400 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a5a63000/0x0/0x1bfc00000, data 0x1c7d720/0x1eab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:44:53.033658+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4357072 data_alloc: 218103808 data_used: 9256960
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a5a63000/0x0/0x1bfc00000, data 0x1c7d720/0x1eab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392830976 unmapped: 48734208 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a5a63000/0x0/0x1bfc00000, data 0x1c7d720/0x1eab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:44:54.033893+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392830976 unmapped: 48734208 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:44:55.034037+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392830976 unmapped: 48734208 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:44:56.034160+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392830976 unmapped: 48734208 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:44:57.034313+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a5a63000/0x0/0x1bfc00000, data 0x1c7d720/0x1eab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392830976 unmapped: 48734208 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:44:58.034473+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4357072 data_alloc: 218103808 data_used: 9256960
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392830976 unmapped: 48734208 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:44:59.034636+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392839168 unmapped: 48726016 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:45:00.034771+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392839168 unmapped: 48726016 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:45:01.034936+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392839168 unmapped: 48726016 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:45:02.035099+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a5a63000/0x0/0x1bfc00000, data 0x1c7d720/0x1eab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392839168 unmapped: 48726016 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.405544281s of 18.446109772s, submitted: 8
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:45:03.035201+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441230 data_alloc: 218103808 data_used: 9281536
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 393895936 unmapped: 47669248 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:45:04.035357+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 393904128 unmapped: 47661056 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:45:05.035504+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 393904128 unmapped: 47661056 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:45:06.035676+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a4f27000/0x0/0x1bfc00000, data 0x27b8720/0x29e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 393904128 unmapped: 47661056 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:45:07.035841+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 393904128 unmapped: 47661056 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a4f26000/0x0/0x1bfc00000, data 0x27b8720/0x29e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:45:08.036063+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4457126 data_alloc: 218103808 data_used: 10117120
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 393904128 unmapped: 47661056 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:45:09.036177+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a4f26000/0x0/0x1bfc00000, data 0x27b8720/0x29e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 393904128 unmapped: 47661056 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:45:10.036356+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 393904128 unmapped: 47661056 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:45:11.036557+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 393904128 unmapped: 47661056 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:45:12.036691+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bbce06c00 session 0x562bba0fd2c0
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bba10c800 session 0x562bbba3fa40
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 393904128 unmapped: 47661056 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bbbea5c00
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bbbea5c00 session 0x562bba0ad0e0
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:45:13.036836+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4325944 data_alloc: 218103808 data_used: 7110656
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 393912320 unmapped: 47652864 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a5c90000/0x0/0x1bfc00000, data 0x1a516ed/0x1c7d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:45:14.037053+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 393912320 unmapped: 47652864 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:45:15.037197+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a5c90000/0x0/0x1bfc00000, data 0x1a516ed/0x1c7d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 393912320 unmapped: 47652864 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a5c90000/0x0/0x1bfc00000, data 0x1a516ed/0x1c7d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:45:16.037324+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 393912320 unmapped: 47652864 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a5c90000/0x0/0x1bfc00000, data 0x1a516ed/0x1c7d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:45:17.037448+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.203443527s of 14.426984787s, submitted: 73
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bbc9f7c00 session 0x562bbba3eb40
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bbcb78c00 session 0x562bb9fdab40
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 393912320 unmapped: 47652864 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: handle_auth_request added challenge on 0x562bba10c800
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:45:18.037569+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 ms_handle_reset con 0x562bba10c800 session 0x562bbba3f0e0
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4184073 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391561216 unmapped: 50003968 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:45:19.037705+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a6cf7000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391561216 unmapped: 50003968 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:45:20.037834+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391561216 unmapped: 50003968 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a6cf7000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:45:21.037967+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391561216 unmapped: 50003968 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:45:22.061809+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391561216 unmapped: 50003968 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:45:23.061925+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4184073 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391561216 unmapped: 50003968 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:45:24.062088+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391561216 unmapped: 50003968 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a6cf7000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:45:25.062264+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391561216 unmapped: 50003968 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:45:26.062458+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391561216 unmapped: 50003968 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:45:27.062622+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391561216 unmapped: 50003968 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:45:28.062798+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4184073 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391561216 unmapped: 50003968 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:45:29.062976+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391561216 unmapped: 50003968 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a6cf7000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:45:30.063177+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391561216 unmapped: 50003968 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:45:31.063366+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a6cf7000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391569408 unmapped: 49995776 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:45:32.063659+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391569408 unmapped: 49995776 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:45:33.063871+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4184073 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391569408 unmapped: 49995776 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:45:34.064152+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391569408 unmapped: 49995776 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:45:35.064336+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391569408 unmapped: 49995776 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:45:36.064483+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391569408 unmapped: 49995776 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a6cf7000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:45:37.064689+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a6cf7000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391577600 unmapped: 49987584 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:45:38.064845+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4184073 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391577600 unmapped: 49987584 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:45:39.065041+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391585792 unmapped: 49979392 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:45:40.065347+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391585792 unmapped: 49979392 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a6cf7000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:45:41.065505+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a6cf7000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391585792 unmapped: 49979392 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:45:42.065636+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391585792 unmapped: 49979392 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:45:43.065815+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4184073 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391585792 unmapped: 49979392 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:45:44.066043+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391585792 unmapped: 49979392 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:45:45.066166+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391585792 unmapped: 49979392 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:45:46.066334+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391593984 unmapped: 49971200 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:45:47.066508+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a6cf7000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391593984 unmapped: 49971200 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:45:48.066676+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4184073 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391593984 unmapped: 49971200 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:45:49.066835+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391593984 unmapped: 49971200 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:45:50.067003+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391593984 unmapped: 49971200 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:45:51.067165+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391593984 unmapped: 49971200 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a6cf7000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:45:52.067301+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391593984 unmapped: 49971200 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:45:53.067427+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4184073 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391602176 unmapped: 49963008 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:45:54.067584+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391602176 unmapped: 49963008 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:45:55.067700+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391610368 unmapped: 49954816 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a6cf7000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:45:56.067838+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391610368 unmapped: 49954816 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:45:57.067997+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391610368 unmapped: 49954816 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:45:58.068168+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4184073 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391610368 unmapped: 49954816 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:45:59.068366+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a6cf7000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391610368 unmapped: 49954816 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:46:00.068588+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391610368 unmapped: 49954816 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:46:01.068765+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391610368 unmapped: 49954816 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:46:02.068923+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391610368 unmapped: 49954816 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:46:03.069396+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4184073 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391618560 unmapped: 49946624 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:46:04.069790+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a6cf7000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391626752 unmapped: 49938432 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:46:05.070011+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391626752 unmapped: 49938432 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:46:06.070306+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391626752 unmapped: 49938432 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:46:07.070610+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391626752 unmapped: 49938432 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:46:08.070853+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a6cf7000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4184073 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391626752 unmapped: 49938432 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:46:09.071071+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391634944 unmapped: 49930240 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:46:10.071265+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391634944 unmapped: 49930240 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:46:11.071478+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a6cf7000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391643136 unmapped: 49922048 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:46:12.071688+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391643136 unmapped: 49922048 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:46:13.071868+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4184073 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391643136 unmapped: 49922048 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:46:14.072074+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a6cf7000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391643136 unmapped: 49922048 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:46:15.072175+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391643136 unmapped: 49922048 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:46:16.072346+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391643136 unmapped: 49922048 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:46:17.072509+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a6cf7000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391651328 unmapped: 49913856 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:46:18.072733+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4184073 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391651328 unmapped: 49913856 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:46:19.072932+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a6cf7000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391675904 unmapped: 49889280 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:46:20.073087+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391675904 unmapped: 49889280 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:46:21.073247+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a6cf7000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391675904 unmapped: 49889280 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a6cf7000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:46:22.073478+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391675904 unmapped: 49889280 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:46:23.073673+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a6cf7000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4184073 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391675904 unmapped: 49889280 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:46:24.073856+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391675904 unmapped: 49889280 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:46:25.073990+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391675904 unmapped: 49889280 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:46:26.074124+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391675904 unmapped: 49889280 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:46:27.074273+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a6cf7000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391684096 unmapped: 49881088 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:46:28.074462+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4184073 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391684096 unmapped: 49881088 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:46:29.074601+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391684096 unmapped: 49881088 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:46:30.074752+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391684096 unmapped: 49881088 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:46:31.074944+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a6cf7000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391684096 unmapped: 49881088 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:46:32.075127+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391684096 unmapped: 49881088 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a6cf7000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:46:33.075313+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4184073 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391692288 unmapped: 49872896 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:46:34.075553+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a6cf7000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391708672 unmapped: 49856512 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:46:35.075719+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391708672 unmapped: 49856512 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:46:36.075872+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391708672 unmapped: 49856512 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:46:37.076010+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391708672 unmapped: 49856512 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:46:38.076140+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4184073 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a6cf7000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391708672 unmapped: 49856512 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:46:39.076352+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a6cf7000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391708672 unmapped: 49856512 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:46:40.076499+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391716864 unmapped: 49848320 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:46:41.076919+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391716864 unmapped: 49848320 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:46:42.077064+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391725056 unmapped: 49840128 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:46:43.077224+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a6cf7000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4184073 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391725056 unmapped: 49840128 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:46:44.077459+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391725056 unmapped: 49840128 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:46:45.077596+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391725056 unmapped: 49840128 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:46:46.078018+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a6cf7000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6600.1 total, 600.0 interval
                                           Cumulative writes: 60K writes, 223K keys, 60K commit groups, 1.0 writes per commit group, ingest: 0.21 GB, 0.03 MB/s
                                           Cumulative WAL: 60K writes, 22K syncs, 2.64 writes per sync, written: 0.21 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2300 writes, 7887 keys, 2300 commit groups, 1.0 writes per commit group, ingest: 8.47 MB, 0.01 MB/s
                                           Interval WAL: 2300 writes, 981 syncs, 2.34 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391733248 unmapped: 49831936 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:46:47.078151+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a6cf7000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391733248 unmapped: 49831936 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:46:48.078279+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4184073 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391733248 unmapped: 49831936 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:46:49.078465+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a6cf7000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391733248 unmapped: 49831936 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:46:50.078631+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391741440 unmapped: 49823744 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:46:51.078761+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a6cf7000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391741440 unmapped: 49823744 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:46:52.078878+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391741440 unmapped: 49823744 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:46:53.079019+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4184073 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391741440 unmapped: 49823744 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:46:54.079191+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:46:55.079306+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391741440 unmapped: 49823744 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:46:56.079444+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391741440 unmapped: 49823744 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a6cf7000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:46:57.079570+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391757824 unmapped: 49807360 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:46:58.079708+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391757824 unmapped: 49807360 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4184073 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:46:59.079832+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391757824 unmapped: 49807360 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a6cf7000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:47:00.079954+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391766016 unmapped: 49799168 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:47:01.080079+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391766016 unmapped: 49799168 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:47:02.080200+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391766016 unmapped: 49799168 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a6cf7000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:47:03.080325+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391774208 unmapped: 49790976 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4184073 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:47:04.080479+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391774208 unmapped: 49790976 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:47:05.080620+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391774208 unmapped: 49790976 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:47:06.080736+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391774208 unmapped: 49790976 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:47:07.080850+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391782400 unmapped: 49782784 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a6cf7000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:47:08.080967+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391782400 unmapped: 49782784 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a6cf7000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4184073 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:47:09.081085+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391782400 unmapped: 49782784 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:47:10.081243+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391782400 unmapped: 49782784 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:47:11.081366+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391790592 unmapped: 49774592 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a6cf7000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:47:12.081576+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391790592 unmapped: 49774592 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:47:13.081748+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391790592 unmapped: 49774592 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4184073 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:47:14.082025+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391790592 unmapped: 49774592 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:47:15.082161+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391798784 unmapped: 49766400 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a6cf7000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:47:16.082348+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391798784 unmapped: 49766400 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:47:17.082518+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391798784 unmapped: 49766400 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:47:18.082708+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391798784 unmapped: 49766400 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a6cf7000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a6cf7000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4184073 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:47:19.082848+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391806976 unmapped: 49758208 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:47:20.082993+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391806976 unmapped: 49758208 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:47:21.083141+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391806976 unmapped: 49758208 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:47:22.083423+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391806976 unmapped: 49758208 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:47:23.083780+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391815168 unmapped: 49750016 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4184073 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:47:24.083948+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391815168 unmapped: 49750016 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a6cf7000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:47:25.084133+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391815168 unmapped: 49750016 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a6cf7000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:47:26.084344+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391815168 unmapped: 49750016 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a6cf7000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:47:27.084480+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391815168 unmapped: 49750016 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a6cf7000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:47:28.084622+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391815168 unmapped: 49750016 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4184073 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:47:29.084765+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391823360 unmapped: 49741824 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a6cf7000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:47:30.084935+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391823360 unmapped: 49741824 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:47:31.085077+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391823360 unmapped: 49741824 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:47:32.085203+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391823360 unmapped: 49741824 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:47:33.085365+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391823360 unmapped: 49741824 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4184073 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:47:34.085587+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391823360 unmapped: 49741824 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a6cf7000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:47:35.085748+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391831552 unmapped: 49733632 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:47:36.085954+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391839744 unmapped: 49725440 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:47:37.086127+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391847936 unmapped: 49717248 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a6cf7000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:47:38.086318+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391847936 unmapped: 49717248 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4184073 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:47:39.086440+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391847936 unmapped: 49717248 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:47:40.086624+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391847936 unmapped: 49717248 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:47:41.086799+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391847936 unmapped: 49717248 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a6cf7000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:47:42.086984+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391856128 unmapped: 49709056 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:47:43.087129+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391856128 unmapped: 49709056 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a6cf7000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4184073 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 sshd-session[413579]: Invalid user postgres from 134.122.57.138 port 47610
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:47:44.087305+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391856128 unmapped: 49709056 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:47:45.087434+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391864320 unmapped: 49700864 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:47:46.087564+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391864320 unmapped: 49700864 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:47:47.087785+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391864320 unmapped: 49700864 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:47:48.087973+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391872512 unmapped: 49692672 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4184073 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:47:49.088135+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391872512 unmapped: 49692672 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a6cf7000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:47:50.088326+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391872512 unmapped: 49692672 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a6cf7000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:47:51.088467+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391872512 unmapped: 49692672 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:47:52.088711+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391872512 unmapped: 49692672 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a6cf7000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:47:53.088931+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391880704 unmapped: 49684480 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4184073 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:47:54.089184+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391880704 unmapped: 49684480 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a6cf7000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:47:55.089418+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391888896 unmapped: 49676288 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:47:56.089581+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391897088 unmapped: 49668096 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a6cf7000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:47:57.089719+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391897088 unmapped: 49668096 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:47:58.089924+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391897088 unmapped: 49668096 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4184073 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:47:59.090108+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391897088 unmapped: 49668096 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:48:00.090286+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391897088 unmapped: 49668096 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:48:01.090479+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391897088 unmapped: 49668096 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a6cf7000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:48:02.090680+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391897088 unmapped: 49668096 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:48:03.090883+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391905280 unmapped: 49659904 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a6cf7000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4184073 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:48:04.091146+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391905280 unmapped: 49659904 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:48:05.091343+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391905280 unmapped: 49659904 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a6cf7000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:48:06.091552+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391913472 unmapped: 49651712 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a6cf7000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:48:07.092546+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391913472 unmapped: 49651712 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:48:08.092680+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391913472 unmapped: 49651712 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4184073 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a6cf7000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:48:09.092833+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391921664 unmapped: 49643520 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:48:10.092978+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a6cf7000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391921664 unmapped: 49643520 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:48:11.093127+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391938048 unmapped: 49627136 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:48:12.093284+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391938048 unmapped: 49627136 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:48:13.093551+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391938048 unmapped: 49627136 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4184073 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:48:14.093720+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391938048 unmapped: 49627136 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:48:15.093886+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391938048 unmapped: 49627136 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a6cf7000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:48:16.094107+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391938048 unmapped: 49627136 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:48:17.094243+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391938048 unmapped: 49627136 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:48:18.094537+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391938048 unmapped: 49627136 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a6cf7000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4184073 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:48:19.094715+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391954432 unmapped: 49610752 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:48:20.094971+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 182.814880371s of 182.904525757s, submitted: 39
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391954432 unmapped: 49610752 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a6cfa000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:48:21.095157+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391970816 unmapped: 49594368 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:48:22.095300+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391987200 unmapped: 49577984 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:48:23.095484+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392142848 unmapped: 49422336 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a6cfa000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4183897 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a68ea000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x186ff9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:48:24.095737+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392159232 unmapped: 49405952 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:48:25.095968+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392159232 unmapped: 49405952 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:48:26.096158+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392159232 unmapped: 49405952 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:48:27.096355+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392167424 unmapped: 49397760 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:48:28.096649+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392167424 unmapped: 49397760 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a68ea000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x186ff9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4183897 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:48:29.096808+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392167424 unmapped: 49397760 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:48:30.096995+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392167424 unmapped: 49397760 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a68ea000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x186ff9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:48:31.097122+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392167424 unmapped: 49397760 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:48:32.097294+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392167424 unmapped: 49397760 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:48:33.097445+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392167424 unmapped: 49397760 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4183897 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:48:34.097699+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392167424 unmapped: 49397760 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:48:35.097867+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a68ea000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x186ff9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392167424 unmapped: 49397760 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:48:36.098002+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392167424 unmapped: 49397760 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:48:37.098184+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392167424 unmapped: 49397760 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:48:38.098337+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392167424 unmapped: 49397760 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4183897 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:48:39.098475+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a68ea000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x186ff9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392167424 unmapped: 49397760 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:48:40.098641+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392167424 unmapped: 49397760 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:48:41.098828+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392167424 unmapped: 49397760 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:48:42.098982+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392167424 unmapped: 49397760 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:48:43.099164+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a68ea000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x186ff9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392167424 unmapped: 49397760 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a68ea000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x186ff9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:48:44.099366+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4183897 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392167424 unmapped: 49397760 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:48:45.099564+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392167424 unmapped: 49397760 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:48:46.099706+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392167424 unmapped: 49397760 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:48:47.099846+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392167424 unmapped: 49397760 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:48:48.100012+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392167424 unmapped: 49397760 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:48:49.100187+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4183897 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392175616 unmapped: 49389568 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a68ea000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x186ff9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:48:50.100465+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392175616 unmapped: 49389568 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:48:51.100621+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392175616 unmapped: 49389568 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:48:52.100796+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392175616 unmapped: 49389568 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:48:53.100960+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392175616 unmapped: 49389568 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a68ea000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x186ff9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:48:54.101151+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4183897 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392175616 unmapped: 49389568 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:48:55.101298+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392175616 unmapped: 49389568 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a68ea000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x186ff9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:48:56.101479+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392183808 unmapped: 49381376 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:48:57.101605+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392183808 unmapped: 49381376 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:48:58.101732+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392183808 unmapped: 49381376 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:48:59.101899+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4183897 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392192000 unmapped: 49373184 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:49:00.102027+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392192000 unmapped: 49373184 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a68ea000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x186ff9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:49:01.102141+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392192000 unmapped: 49373184 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:49:02.102347+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a68ea000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x186ff9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392192000 unmapped: 49373184 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:49:03.102560+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392192000 unmapped: 49373184 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:49:04.102789+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4183897 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a68ea000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x186ff9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392192000 unmapped: 49373184 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:49:05.102960+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392208384 unmapped: 49356800 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:49:06.103119+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392216576 unmapped: 49348608 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:49:07.103254+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392216576 unmapped: 49348608 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:49:08.103433+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392216576 unmapped: 49348608 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:49:09.103565+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4183897 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a68ea000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x186ff9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392216576 unmapped: 49348608 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:49:10.103805+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392216576 unmapped: 49348608 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:49:11.103963+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392216576 unmapped: 49348608 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a68ea000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x186ff9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:49:12.104093+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392216576 unmapped: 49348608 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:49:13.104281+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392224768 unmapped: 49340416 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:49:14.104548+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4183897 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392224768 unmapped: 49340416 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a68ea000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x186ff9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:49:15.104741+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392232960 unmapped: 49332224 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:49:16.104966+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392232960 unmapped: 49332224 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:49:17.105456+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392232960 unmapped: 49332224 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:49:18.105588+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a68ea000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x186ff9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392232960 unmapped: 49332224 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:49:19.105747+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4183897 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392232960 unmapped: 49332224 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a68ea000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x186ff9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:49:20.105900+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392232960 unmapped: 49332224 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:49:21.106069+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a68ea000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x186ff9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392232960 unmapped: 49332224 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:49:22.106238+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392232960 unmapped: 49332224 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:49:23.106480+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392241152 unmapped: 49324032 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:49:24.106710+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4183897 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392241152 unmapped: 49324032 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:49:25.106851+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392241152 unmapped: 49324032 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:49:26.107063+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392241152 unmapped: 49324032 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:49:27.107255+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a68ea000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x186ff9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392241152 unmapped: 49324032 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:49:28.107453+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392241152 unmapped: 49324032 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:49:29.107619+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4183897 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392241152 unmapped: 49324032 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:49:30.107828+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a68ea000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x186ff9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392249344 unmapped: 49315840 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:49:31.107978+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392249344 unmapped: 49315840 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:49:32.108154+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392249344 unmapped: 49315840 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:49:33.108287+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392249344 unmapped: 49315840 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a68ea000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x186ff9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:49:34.108494+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4183897 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392249344 unmapped: 49315840 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:49:35.108653+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392249344 unmapped: 49315840 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:49:36.108788+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392249344 unmapped: 49315840 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:49:37.108927+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392257536 unmapped: 49307648 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:49:38.109077+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392257536 unmapped: 49307648 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:49:39.109203+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a68ea000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x186ff9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4183897 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392257536 unmapped: 49307648 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:49:40.109420+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392257536 unmapped: 49307648 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a68ea000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x186ff9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:49:41.109607+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392257536 unmapped: 49307648 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:49:42.109874+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392257536 unmapped: 49307648 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:49:43.110046+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392265728 unmapped: 49299456 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:49:44.110269+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4183897 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392265728 unmapped: 49299456 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:49:45.110474+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392265728 unmapped: 49299456 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:49:46.110629+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a68ea000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x186ff9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392265728 unmapped: 49299456 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:49:47.110836+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a68ea000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x186ff9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392273920 unmapped: 49291264 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:49:48.111071+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392273920 unmapped: 49291264 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:49:49.113226+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4183897 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392273920 unmapped: 49291264 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:49:50.113975+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a68ea000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x186ff9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392282112 unmapped: 49283072 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:49:51.114270+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392282112 unmapped: 49283072 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:49:52.115205+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392282112 unmapped: 49283072 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:49:53.116058+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a68ea000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x186ff9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392290304 unmapped: 49274880 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:49:54.116797+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4183897 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392290304 unmapped: 49274880 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:49:55.119124+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392290304 unmapped: 49274880 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:49:56.119964+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392290304 unmapped: 49274880 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:49:57.120536+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392290304 unmapped: 49274880 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:49:58.120857+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a68ea000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x186ff9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392290304 unmapped: 49274880 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:49:59.121490+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4183897 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a68ea000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x186ff9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392290304 unmapped: 49274880 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:50:00.121765+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392290304 unmapped: 49274880 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:50:01.122095+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392298496 unmapped: 49266688 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:50:02.122351+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392298496 unmapped: 49266688 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:50:03.122599+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392306688 unmapped: 49258496 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:50:04.122899+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4183897 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392306688 unmapped: 49258496 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a68ea000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x186ff9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:50:05.123179+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392306688 unmapped: 49258496 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:50:06.123703+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392306688 unmapped: 49258496 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:50:07.124094+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392306688 unmapped: 49258496 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:50:08.124535+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392306688 unmapped: 49258496 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a68ea000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x186ff9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:50:09.124863+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4183897 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392306688 unmapped: 49258496 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:50:10.125117+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a68ea000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x186ff9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392314880 unmapped: 49250304 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:50:11.125453+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392323072 unmapped: 49242112 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:50:12.125810+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392323072 unmapped: 49242112 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:50:13.125973+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392323072 unmapped: 49242112 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:50:14.126159+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4183897 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392323072 unmapped: 49242112 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:50:15.126326+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392323072 unmapped: 49242112 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a68ea000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x186ff9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:50:16.126507+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392323072 unmapped: 49242112 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:50:17.126741+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392323072 unmapped: 49242112 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:50:18.126935+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392323072 unmapped: 49242112 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:50:19.127072+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4183897 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392331264 unmapped: 49233920 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:50:20.127453+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a68ea000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x186ff9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392331264 unmapped: 49233920 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:50:21.127720+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392331264 unmapped: 49233920 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:50:22.127982+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392331264 unmapped: 49233920 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:50:23.128153+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392331264 unmapped: 49233920 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:50:24.128354+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4183897 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392331264 unmapped: 49233920 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:50:25.128557+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a68ea000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x186ff9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392339456 unmapped: 49225728 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:50:26.128695+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392339456 unmapped: 49225728 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:50:27.128822+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392339456 unmapped: 49225728 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:50:28.129024+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392339456 unmapped: 49225728 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:50:29.129513+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4183897 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392339456 unmapped: 49225728 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:50:30.129654+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a68ea000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x186ff9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392339456 unmapped: 49225728 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:50:31.129868+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392347648 unmapped: 49217536 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:50:32.130160+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a68ea000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x186ff9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392347648 unmapped: 49217536 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:50:33.130310+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a68ea000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x186ff9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392347648 unmapped: 49217536 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:50:34.130574+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4183897 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392347648 unmapped: 49217536 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:50:35.130860+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a68ea000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x186ff9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392347648 unmapped: 49217536 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:50:36.131336+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392347648 unmapped: 49217536 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:50:37.131640+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392347648 unmapped: 49217536 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:50:38.131871+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392347648 unmapped: 49217536 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:50:39.132088+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4183897 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392355840 unmapped: 49209344 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:50:40.132267+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392364032 unmapped: 49201152 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a68ea000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x186ff9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:50:41.132529+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392364032 unmapped: 49201152 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:50:42.132759+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392372224 unmapped: 49192960 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:50:43.132971+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392372224 unmapped: 49192960 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:50:44.133146+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4183897 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392372224 unmapped: 49192960 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:50:45.133357+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392372224 unmapped: 49192960 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:50:46.133592+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a68ea000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x186ff9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392380416 unmapped: 49184768 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:50:47.133763+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392380416 unmapped: 49184768 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:50:48.133997+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392380416 unmapped: 49184768 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:50:49.134210+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4183897 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392388608 unmapped: 49176576 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:50:50.134537+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a68ea000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x186ff9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392396800 unmapped: 49168384 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:50:51.134779+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392396800 unmapped: 49168384 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:50:52.135018+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392396800 unmapped: 49168384 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:50:53.135276+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392396800 unmapped: 49168384 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:50:54.135536+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4183897 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392396800 unmapped: 49168384 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:50:55.135683+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a68ea000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x186ff9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392396800 unmapped: 49168384 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:50:56.135935+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392396800 unmapped: 49168384 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:50:57.136209+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a68ea000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x186ff9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392396800 unmapped: 49168384 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:50:58.136512+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a68ea000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x186ff9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392404992 unmapped: 49160192 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:50:59.136778+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4183897 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392404992 unmapped: 49160192 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:51:00.136969+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392404992 unmapped: 49160192 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:51:01.137201+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392404992 unmapped: 49160192 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:51:02.137410+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392404992 unmapped: 49160192 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:51:03.137584+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392404992 unmapped: 49160192 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:51:04.137832+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a68ea000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x186ff9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4183897 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392413184 unmapped: 49152000 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:51:05.137964+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392421376 unmapped: 49143808 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:51:06.138085+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392421376 unmapped: 49143808 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:51:07.138263+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392421376 unmapped: 49143808 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:51:08.138514+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392421376 unmapped: 49143808 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:51:09.138645+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a68ea000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x186ff9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a68ea000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x186ff9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4183897 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392429568 unmapped: 49135616 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:51:10.138779+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392429568 unmapped: 49135616 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:51:11.138905+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392429568 unmapped: 49135616 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:51:12.139022+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392429568 unmapped: 49135616 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:51:13.139237+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:51:14.139432+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392429568 unmapped: 49135616 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a68ea000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x186ff9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4183897 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:51:15.139599+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392445952 unmapped: 49119232 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:51:16.139835+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392454144 unmapped: 49111040 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:51:17.139970+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392454144 unmapped: 49111040 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:51:18.140101+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392454144 unmapped: 49111040 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a68ea000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x186ff9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:51:19.140244+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392454144 unmapped: 49111040 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4183897 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:51:20.140372+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392454144 unmapped: 49111040 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a68ea000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x186ff9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:51:21.140586+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392462336 unmapped: 49102848 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:51:22.140746+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392462336 unmapped: 49102848 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a68ea000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x186ff9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:51:23.140880+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392462336 unmapped: 49102848 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:51:24.141056+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392470528 unmapped: 49094656 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4183897 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a68ea000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x186ff9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:51:25.141547+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392478720 unmapped: 49086464 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a68ea000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x186ff9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:51:26.141670+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392478720 unmapped: 49086464 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:51:27.141797+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392478720 unmapped: 49086464 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:51:28.142017+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392478720 unmapped: 49086464 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a68ea000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x186ff9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:51:29.142166+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392478720 unmapped: 49086464 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4183897 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:51:30.142288+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392478720 unmapped: 49086464 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:51:31.142434+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392478720 unmapped: 49086464 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a68ea000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x186ff9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:51:32.142610+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392486912 unmapped: 49078272 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a68ea000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x186ff9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:51:33.142812+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392495104 unmapped: 49070080 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:51:34.143087+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392495104 unmapped: 49070080 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4183897 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:51:35.143470+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392495104 unmapped: 49070080 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:51:36.143735+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392495104 unmapped: 49070080 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a68ea000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x186ff9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:51:37.144007+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392503296 unmapped: 49061888 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:51:38.144328+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392503296 unmapped: 49061888 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a68ea000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x186ff9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:51:39.144634+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392511488 unmapped: 49053696 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4183897 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:51:40.144833+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392511488 unmapped: 49053696 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:51:41.145236+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392511488 unmapped: 49053696 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a68ea000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x186ff9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:51:42.145448+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392511488 unmapped: 49053696 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:51:43.145593+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392511488 unmapped: 49053696 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:51:44.145815+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392511488 unmapped: 49053696 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a68ea000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x186ff9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4183897 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:51:45.145969+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392519680 unmapped: 49045504 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:51:46.146136+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392519680 unmapped: 49045504 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:51:47.146275+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392519680 unmapped: 49045504 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a68ea000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x186ff9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:51:48.146451+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392527872 unmapped: 49037312 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:51:49.146593+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392527872 unmapped: 49037312 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4183897 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:51:50.146758+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392527872 unmapped: 49037312 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:51:51.146959+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392536064 unmapped: 49029120 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a68ea000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x186ff9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:51:52.147117+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392544256 unmapped: 49020928 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:51:53.147296+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392544256 unmapped: 49020928 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:51:54.147499+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392544256 unmapped: 49020928 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4183897 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:51:55.147663+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392552448 unmapped: 49012736 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:51:56.147803+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392552448 unmapped: 49012736 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:51:57.148024+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392552448 unmapped: 49012736 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a68ea000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x186ff9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:51:58.148171+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392552448 unmapped: 49012736 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:51:59.148336+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392552448 unmapped: 49012736 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4183897 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:52:00.148466+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392552448 unmapped: 49012736 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:52:01.148718+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392560640 unmapped: 49004544 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:52:02.148858+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392560640 unmapped: 49004544 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:52:03.148992+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a68ea000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x186ff9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392560640 unmapped: 49004544 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:52:04.149229+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392568832 unmapped: 48996352 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4183897 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:52:05.149369+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392568832 unmapped: 48996352 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a68ea000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x186ff9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:52:06.149609+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392568832 unmapped: 48996352 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:52:07.149778+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392577024 unmapped: 48988160 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a68ea000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x186ff9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:52:08.150030+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392577024 unmapped: 48988160 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:52:09.150257+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392577024 unmapped: 48988160 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4183897 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:52:10.150452+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392577024 unmapped: 48988160 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:52:11.150731+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392585216 unmapped: 48979968 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:52:12.150954+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392585216 unmapped: 48979968 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a68ea000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x186ff9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:52:13.151241+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392585216 unmapped: 48979968 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:52:14.151686+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392585216 unmapped: 48979968 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4183897 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:52:15.151877+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392585216 unmapped: 48979968 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:52:16.152150+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392585216 unmapped: 48979968 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:52:17.152425+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392601600 unmapped: 48963584 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:52:18.152757+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392601600 unmapped: 48963584 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a68ea000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x186ff9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:52:19.152947+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392609792 unmapped: 48955392 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4183897 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a68ea000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x186ff9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:52:20.153151+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392609792 unmapped: 48955392 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:52:21.153312+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a68ea000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x186ff9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392609792 unmapped: 48955392 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:52:22.153471+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392609792 unmapped: 48955392 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a68ea000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x186ff9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:52:23.153587+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392609792 unmapped: 48955392 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:52:24.153772+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392609792 unmapped: 48955392 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4183897 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:52:25.153911+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392609792 unmapped: 48955392 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:52:26.154059+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392609792 unmapped: 48955392 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a68ea000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x186ff9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:52:27.154237+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392617984 unmapped: 48947200 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:52:28.154422+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392617984 unmapped: 48947200 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:52:29.154671+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392617984 unmapped: 48947200 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4183897 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:52:30.154842+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392617984 unmapped: 48947200 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a68ea000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x186ff9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a68ea000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x186ff9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:52:31.155221+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392617984 unmapped: 48947200 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:52:32.155412+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392617984 unmapped: 48947200 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a68ea000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x186ff9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:52:33.155621+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392634368 unmapped: 48930816 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:52:34.155864+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392634368 unmapped: 48930816 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4183897 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:52:35.156500+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392642560 unmapped: 48922624 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a68ea000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x186ff9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:52:36.156656+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392642560 unmapped: 48922624 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:52:37.156826+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392642560 unmapped: 48922624 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:52:38.156963+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392642560 unmapped: 48922624 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:52:39.157150+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392642560 unmapped: 48922624 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4183897 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:52:40.157298+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392642560 unmapped: 48922624 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:52:41.157556+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392650752 unmapped: 48914432 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a68ea000/0x0/0x1bfc00000, data 0x9ea67b/0xc14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x186ff9c6), peers [1,2] op hist [])
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:52:42.157770+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392650752 unmapped: 48914432 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:52:43.157904+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392658944 unmapped: 48906240 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:52:44.158058+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 392314880 unmapped: 49250304 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: do_command 'config diff' '{prefix=config diff}'
Jan 20 15:53:17 compute-0 ceph-osd[84815]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Jan 20 15:53:17 compute-0 ceph-osd[84815]: do_command 'config show' '{prefix=config show}'
Jan 20 15:53:17 compute-0 ceph-osd[84815]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Jan 20 15:53:17 compute-0 ceph-osd[84815]: do_command 'counter dump' '{prefix=counter dump}'
Jan 20 15:53:17 compute-0 ceph-osd[84815]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Jan 20 15:53:17 compute-0 ceph-osd[84815]: do_command 'counter schema' '{prefix=counter schema}'
Jan 20 15:53:17 compute-0 ceph-osd[84815]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 15:53:17 compute-0 ceph-osd[84815]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 15:53:17 compute-0 ceph-osd[84815]: bluestore.MempoolThread(0x562bb8745b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4183897 data_alloc: 218103808 data_used: 4702208
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:52:45.158199+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391610368 unmapped: 49954816 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: tick
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_tickets
Jan 20 15:53:17 compute-0 ceph-osd[84815]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T15:52:46.158351+0000)
Jan 20 15:53:17 compute-0 ceph-osd[84815]: prioritycache tune_memory target: 4294967296 mapped: 391593984 unmapped: 49971200 heap: 441565184 old mem: 2845415832 new mem: 2845415832
Jan 20 15:53:17 compute-0 ceph-osd[84815]: do_command 'log dump' '{prefix=log dump}'
Jan 20 15:53:17 compute-0 ceph-mgr[74653]: log_channel(audit) log [DBG] : from='client.37362 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:17 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Jan 20 15:53:17 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1551083616' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 20 15:53:17 compute-0 ceph-mgr[74653]: log_channel(audit) log [DBG] : from='client.50660 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:17 compute-0 ceph-mgr[74653]: log_channel(audit) log [DBG] : from='client.50269 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:17 compute-0 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 20 15:53:17 compute-0 sshd-session[413579]: Connection closed by invalid user postgres 134.122.57.138 port 47610 [preauth]
Jan 20 15:53:17 compute-0 ceph-mgr[74653]: log_channel(audit) log [DBG] : from='client.37371 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:17 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Jan 20 15:53:17 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2812618961' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 20 15:53:17 compute-0 ceph-mgr[74653]: log_channel(audit) log [DBG] : from='client.50681 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:17 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3933: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:53:17 compute-0 ceph-mgr[74653]: log_channel(audit) log [DBG] : from='client.50293 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:18 compute-0 ceph-mgr[74653]: log_channel(audit) log [DBG] : from='client.37386 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:18 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon stat"} v 0) v1
Jan 20 15:53:18 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1364837486' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Jan 20 15:53:18 compute-0 ceph-mgr[74653]: log_channel(audit) log [DBG] : from='client.50693 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:18 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:53:18 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:53:18 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:53:18.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:53:18 compute-0 nova_compute[250018]: 2026-01-20 15:53:18.220 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:53:18 compute-0 ceph-mgr[74653]: log_channel(audit) log [DBG] : from='client.50314 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:18 compute-0 sudo[414173]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:53:18 compute-0 sudo[414173]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:53:18 compute-0 sudo[414173]: pam_unix(sudo:session): session closed for user root
Jan 20 15:53:18 compute-0 sudo[414224]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 15:53:18 compute-0 sudo[414224]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 15:53:18 compute-0 sudo[414224]: pam_unix(sudo:session): session closed for user root
Jan 20 15:53:18 compute-0 ceph-mgr[74653]: log_channel(audit) log [DBG] : from='client.37401 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 15:53:18 compute-0 ceph-mon[74360]: from='client.50612 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:18 compute-0 ceph-mon[74360]: from='client.37311 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:18 compute-0 ceph-mon[74360]: from='client.37329 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:18 compute-0 ceph-mon[74360]: from='client.50227 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:18 compute-0 ceph-mon[74360]: from='client.37350 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:18 compute-0 ceph-mon[74360]: from='client.50639 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:18 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/808794295' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Jan 20 15:53:18 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/2103643461' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 20 15:53:18 compute-0 ceph-mon[74360]: from='client.50254 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:18 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/1551083616' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 20 15:53:18 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/2779943835' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 20 15:53:18 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/2247268192' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 20 15:53:18 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/2812618961' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 20 15:53:18 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/2298780932' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 20 15:53:18 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Jan 20 15:53:18 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1550185194' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 20 15:53:18 compute-0 crontab[414319]: (root) LIST (root)
Jan 20 15:53:18 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:53:18 compute-0 ceph-mgr[74653]: log_channel(audit) log [DBG] : from='client.37428 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 15:53:18 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:53:18 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:53:18 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:53:18.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:53:18 compute-0 ceph-mgr[74653]: log_channel(audit) log [DBG] : from='client.50705 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:18 compute-0 ceph-mgr[74653]: log_channel(audit) log [DBG] : from='client.50332 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:19 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "node ls"} v 0) v1
Jan 20 15:53:19 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/668688603' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Jan 20 15:53:19 compute-0 ceph-mgr[74653]: log_channel(audit) log [DBG] : from='client.50714 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:19 compute-0 ceph-mgr[74653]: log_channel(audit) log [DBG] : from='client.50350 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:19 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush class ls"} v 0) v1
Jan 20 15:53:19 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1122230926' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Jan 20 15:53:19 compute-0 ceph-mgr[74653]: log_channel(audit) log [DBG] : from='client.37458 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 15:53:19 compute-0 ceph-mgr[74653]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 20 15:53:19 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv[74649]: 2026-01-20T15:53:19.490+0000 7f4562bc3640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 20 15:53:19 compute-0 ceph-mgr[74653]: log_channel(audit) log [DBG] : from='client.50726 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:19 compute-0 ceph-mon[74360]: from='client.37362 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:19 compute-0 ceph-mon[74360]: from='client.50660 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:19 compute-0 ceph-mon[74360]: from='client.50269 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:19 compute-0 ceph-mon[74360]: from='client.37371 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:19 compute-0 ceph-mon[74360]: from='client.50681 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:19 compute-0 ceph-mon[74360]: pgmap v3933: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:53:19 compute-0 ceph-mon[74360]: from='client.50293 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:19 compute-0 ceph-mon[74360]: from='client.37386 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:19 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/3140472892' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 20 15:53:19 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/1364837486' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Jan 20 15:53:19 compute-0 ceph-mon[74360]: from='client.50693 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:19 compute-0 ceph-mon[74360]: from='client.50314 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:19 compute-0 ceph-mon[74360]: from='client.37401 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 15:53:19 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/1550185194' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 20 15:53:19 compute-0 ceph-mon[74360]: from='client.37428 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 15:53:19 compute-0 ceph-mon[74360]: from='client.50705 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:19 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/149186993' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 20 15:53:19 compute-0 ceph-mon[74360]: from='client.50332 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:19 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/668688603' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Jan 20 15:53:19 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/440145304' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 20 15:53:19 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/3089804160' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 20 15:53:19 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/1122230926' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Jan 20 15:53:19 compute-0 ceph-mgr[74653]: log_channel(audit) log [DBG] : from='client.50368 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 15:53:19 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3934: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:53:19 compute-0 nova_compute[250018]: 2026-01-20 15:53:19.786 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:53:19 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0) v1
Jan 20 15:53:19 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4259400615' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Jan 20 15:53:19 compute-0 ceph-mgr[74653]: log_channel(audit) log [DBG] : from='client.50741 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 15:53:20 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush rule ls"} v 0) v1
Jan 20 15:53:20 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3743176531' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Jan 20 15:53:20 compute-0 ceph-mgr[74653]: log_channel(audit) log [DBG] : from='client.50395 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 15:53:20 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:53:20 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:53:20 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:53:20.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:53:20 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0) v1
Jan 20 15:53:20 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3465038305' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Jan 20 15:53:20 compute-0 ceph-mgr[74653]: log_channel(audit) log [DBG] : from='client.50762 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 15:53:20 compute-0 ceph-mon[74360]: from='client.50714 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:20 compute-0 ceph-mon[74360]: from='client.50350 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:20 compute-0 ceph-mon[74360]: from='client.37458 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 15:53:20 compute-0 ceph-mon[74360]: from='client.50726 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:20 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/3403921456' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Jan 20 15:53:20 compute-0 ceph-mon[74360]: from='client.50368 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 15:53:20 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/3608108182' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 20 15:53:20 compute-0 ceph-mon[74360]: pgmap v3934: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:53:20 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/947707734' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Jan 20 15:53:20 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/4259400615' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Jan 20 15:53:20 compute-0 ceph-mon[74360]: from='client.50741 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 15:53:20 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3743176531' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Jan 20 15:53:20 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/858049734' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Jan 20 15:53:20 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3465038305' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Jan 20 15:53:20 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/4011182796' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Jan 20 15:53:20 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0) v1
Jan 20 15:53:20 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4106595786' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Jan 20 15:53:20 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0) v1
Jan 20 15:53:20 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/515019539' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Jan 20 15:53:20 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:53:20 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:53:20 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:53:20.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:53:20 compute-0 ceph-mgr[74653]: log_channel(audit) log [DBG] : from='client.50434 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 15:53:20 compute-0 ceph-mgr[74653]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 20 15:53:20 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv[74649]: 2026-01-20T15:53:20.911+0000 7f4562bc3640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 20 15:53:21 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0) v1
Jan 20 15:53:21 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1113246010' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Jan 20 15:53:21 compute-0 ceph-mgr[74653]: log_channel(audit) log [DBG] : from='client.50789 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 15:53:21 compute-0 ceph-mgr[74653]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 20 15:53:21 compute-0 ceph-e399cf45-e6b6-5393-99f1-75c601d3f188-mgr-compute-0-wookjv[74649]: 2026-01-20T15:53:21.173+0000 7f4562bc3640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 20 15:53:21 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0) v1
Jan 20 15:53:21 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4293837088' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Jan 20 15:53:21 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0) v1
Jan 20 15:53:21 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3716471466' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Jan 20 15:53:21 compute-0 ceph-mon[74360]: from='client.50395 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 15:53:21 compute-0 ceph-mon[74360]: from='client.50762 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 15:53:21 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/4106595786' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Jan 20 15:53:21 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/515019539' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Jan 20 15:53:21 compute-0 ceph-mon[74360]: from='client.50434 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 15:53:21 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/685726112' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Jan 20 15:53:21 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/1113246010' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Jan 20 15:53:21 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/4097280376' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Jan 20 15:53:21 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/4293837088' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Jan 20 15:53:21 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/952246884' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Jan 20 15:53:21 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/4162090687' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Jan 20 15:53:21 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3716471466' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Jan 20 15:53:21 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/2052119312' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Jan 20 15:53:21 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0) v1
Jan 20 15:53:21 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/906646828' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Jan 20 15:53:21 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Jan 20 15:53:21 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2302797618' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Jan 20 15:53:21 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3935: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:53:22 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0) v1
Jan 20 15:53:22 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2636043298' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Jan 20 15:53:22 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd utilization"} v 0) v1
Jan 20 15:53:22 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1021293115' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Jan 20 15:53:22 compute-0 systemd[1]: Starting Hostname Service...
Jan 20 15:53:22 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:53:22 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:53:22 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:53:22.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:53:22 compute-0 systemd[1]: Started Hostname Service.
Jan 20 15:53:22 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0) v1
Jan 20 15:53:22 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3893597713' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Jan 20 15:53:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:53:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:53:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:53:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:53:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 15:53:22 compute-0 ceph-mgr[74653]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 15:53:22 compute-0 ceph-mgr[74653]: log_channel(audit) log [DBG] : from='client.37587 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:22 compute-0 ceph-mon[74360]: from='client.50789 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 15:53:22 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/1793993722' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Jan 20 15:53:22 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/906646828' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Jan 20 15:53:22 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/502472852' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Jan 20 15:53:22 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/2302797618' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Jan 20 15:53:22 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/728917619' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Jan 20 15:53:22 compute-0 ceph-mon[74360]: pgmap v3935: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:53:22 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/1585738903' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Jan 20 15:53:22 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/3519584347' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Jan 20 15:53:22 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/2636043298' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Jan 20 15:53:22 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/1021293115' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Jan 20 15:53:22 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/2141415677' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Jan 20 15:53:22 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/2813367287' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Jan 20 15:53:22 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/454954369' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Jan 20 15:53:22 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/1825010125' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Jan 20 15:53:22 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3893597713' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Jan 20 15:53:22 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/2492913358' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Jan 20 15:53:22 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/3075812643' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Jan 20 15:53:22 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:53:22 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:53:22 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:53:22.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:53:22 compute-0 ceph-mgr[74653]: log_channel(audit) log [DBG] : from='client.37599 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 15:53:23 compute-0 ceph-mgr[74653]: log_channel(audit) log [DBG] : from='client.37608 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:23 compute-0 nova_compute[250018]: 2026-01-20 15:53:23.221 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:53:23 compute-0 ceph-mgr[74653]: log_channel(audit) log [DBG] : from='client.37617 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 15:53:23 compute-0 ceph-mgr[74653]: log_channel(audit) log [DBG] : from='client.37638 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 15:53:23 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:53:23 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3936: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:53:23 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "quorum_status"} v 0) v1
Jan 20 15:53:23 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2078948093' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Jan 20 15:53:23 compute-0 ceph-mon[74360]: from='client.37587 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:23 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/722851636' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Jan 20 15:53:23 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/1414603985' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Jan 20 15:53:23 compute-0 ceph-mon[74360]: from='client.37599 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 15:53:23 compute-0 ceph-mon[74360]: from='client.37608 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:23 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/3368371534' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Jan 20 15:53:23 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/1188902209' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Jan 20 15:53:23 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/708267248' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Jan 20 15:53:23 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/1107210386' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Jan 20 15:53:23 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/2239306666' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Jan 20 15:53:23 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/1482769606' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Jan 20 15:53:23 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/4113140527' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Jan 20 15:53:23 compute-0 ceph-mgr[74653]: log_channel(audit) log [DBG] : from='client.37656 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 15:53:24 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:53:24 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:53:24 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:53:24.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:53:24 compute-0 ceph-mgr[74653]: log_channel(audit) log [DBG] : from='client.50614 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:24 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "versions"} v 0) v1
Jan 20 15:53:24 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1947446086' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Jan 20 15:53:24 compute-0 ceph-mgr[74653]: log_channel(audit) log [DBG] : from='client.37671 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 15:53:24 compute-0 ceph-mgr[74653]: log_channel(audit) log [DBG] : from='client.50629 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 15:53:24 compute-0 ceph-mgr[74653]: log_channel(audit) log [DBG] : from='client.37689 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 15:53:24 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0) v1
Jan 20 15:53:24 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1094165142' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Jan 20 15:53:24 compute-0 ceph-mgr[74653]: log_channel(audit) log [DBG] : from='client.50638 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:24 compute-0 ceph-mgr[74653]: log_channel(audit) log [DBG] : from='client.50644 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 15:53:24 compute-0 nova_compute[250018]: 2026-01-20 15:53:24.787 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:53:24 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:53:24 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:53:24 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:53:24.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:53:24 compute-0 ceph-mon[74360]: from='client.37617 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 15:53:24 compute-0 ceph-mon[74360]: from='client.37638 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 15:53:24 compute-0 ceph-mon[74360]: pgmap v3936: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:53:24 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/2078948093' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Jan 20 15:53:24 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/3271178705' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Jan 20 15:53:24 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/3218920685' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Jan 20 15:53:24 compute-0 ceph-mon[74360]: from='client.37656 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 15:53:24 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/134214396' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Jan 20 15:53:24 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/1947446086' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Jan 20 15:53:24 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/2496834440' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Jan 20 15:53:24 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/3119005333' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Jan 20 15:53:24 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/1094165142' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Jan 20 15:53:24 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/1406159115' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Jan 20 15:53:25 compute-0 ceph-mgr[74653]: log_channel(audit) log [DBG] : from='client.37695 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 15:53:25 compute-0 ceph-mgr[74653]: log_channel(audit) log [DBG] : from='client.50930 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 15:53:25 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0) v1
Jan 20 15:53:25 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2745875369' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Jan 20 15:53:25 compute-0 ceph-mgr[74653]: log_channel(audit) log [DBG] : from='client.50656 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 15:53:25 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 20 15:53:25 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 20 15:53:25 compute-0 ceph-mgr[74653]: log_channel(audit) log [DBG] : from='client.50945 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:25 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 20 15:53:25 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 20 15:53:25 compute-0 ceph-mgr[74653]: log_channel(audit) log [DBG] : from='client.50665 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 15:53:25 compute-0 ceph-mgr[74653]: log_channel(audit) log [DBG] : from='client.50957 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 15:53:25 compute-0 ceph-mgr[74653]: log_channel(audit) log [DBG] : from='client.50966 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:25 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3937: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:53:25 compute-0 ceph-mgr[74653]: log_channel(audit) log [DBG] : from='client.50698 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 15:53:25 compute-0 ceph-mgr[74653]: log_channel(audit) log [DBG] : from='client.50975 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 15:53:25 compute-0 ceph-mon[74360]: from='client.50614 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:25 compute-0 ceph-mon[74360]: from='client.37671 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 15:53:25 compute-0 ceph-mon[74360]: from='client.50629 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 15:53:25 compute-0 ceph-mon[74360]: from='client.37689 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 15:53:25 compute-0 ceph-mon[74360]: from='client.50638 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:25 compute-0 ceph-mon[74360]: from='client.50644 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 15:53:25 compute-0 ceph-mon[74360]: from='client.37695 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 15:53:25 compute-0 ceph-mon[74360]: from='client.50930 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 15:53:25 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/2745875369' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Jan 20 15:53:25 compute-0 ceph-mon[74360]: from='client.50656 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 15:53:25 compute-0 ceph-mon[74360]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 20 15:53:25 compute-0 ceph-mon[74360]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 20 15:53:25 compute-0 ceph-mon[74360]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 20 15:53:25 compute-0 ceph-mon[74360]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 20 15:53:25 compute-0 ceph-mon[74360]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 20 15:53:25 compute-0 ceph-mon[74360]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 20 15:53:25 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/1694053069' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Jan 20 15:53:25 compute-0 ceph-mon[74360]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 20 15:53:25 compute-0 ceph-mon[74360]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 20 15:53:25 compute-0 ceph-mon[74360]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 20 15:53:25 compute-0 ceph-mon[74360]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 20 15:53:26 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:53:26 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 15:53:26 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:53:26.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 15:53:26 compute-0 ceph-mgr[74653]: log_channel(audit) log [DBG] : from='client.50725 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 15:53:26 compute-0 ceph-mgr[74653]: log_channel(audit) log [DBG] : from='client.51005 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 15:53:26 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config dump"} v 0) v1
Jan 20 15:53:26 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2824071788' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Jan 20 15:53:26 compute-0 ceph-mgr[74653]: log_channel(audit) log [DBG] : from='client.50746 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 15:53:26 compute-0 ceph-mgr[74653]: log_channel(audit) log [DBG] : from='client.51023 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 15:53:26 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:53:26 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:53:26 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:53:26.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:53:26 compute-0 ceph-mgr[74653]: log_channel(audit) log [DBG] : from='client.37779 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:26 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 20 15:53:26 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 20 15:53:26 compute-0 ceph-mon[74360]: from='client.50945 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:26 compute-0 ceph-mon[74360]: from='client.50665 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 15:53:26 compute-0 ceph-mon[74360]: from='client.50957 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 15:53:26 compute-0 ceph-mon[74360]: from='client.50966 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:26 compute-0 ceph-mon[74360]: pgmap v3937: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:53:26 compute-0 ceph-mon[74360]: from='client.50698 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 15:53:26 compute-0 ceph-mon[74360]: from='client.50975 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 15:53:26 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/539377997' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Jan 20 15:53:26 compute-0 ceph-mon[74360]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 20 15:53:26 compute-0 ceph-mon[74360]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 20 15:53:26 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/2824071788' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Jan 20 15:53:26 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/1811606035' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Jan 20 15:53:26 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/921537409' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Jan 20 15:53:26 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/331790073' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Jan 20 15:53:26 compute-0 ceph-mon[74360]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 20 15:53:26 compute-0 ceph-mon[74360]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 20 15:53:27 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 20 15:53:27 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 20 15:53:27 compute-0 ceph-mgr[74653]: log_channel(audit) log [DBG] : from='client.51041 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 15:53:27 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0) v1
Jan 20 15:53:27 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/287765366' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Jan 20 15:53:27 compute-0 ceph-mgr[74653]: log_channel(audit) log [DBG] : from='client.50812 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 15:53:27 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df"} v 0) v1
Jan 20 15:53:27 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4139354330' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Jan 20 15:53:27 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3938: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:53:27 compute-0 ceph-mon[74360]: from='client.50725 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 15:53:27 compute-0 ceph-mon[74360]: from='client.51005 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 15:53:27 compute-0 ceph-mon[74360]: from='client.50746 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 15:53:27 compute-0 ceph-mon[74360]: from='client.51023 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 15:53:27 compute-0 ceph-mon[74360]: from='client.37779 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:27 compute-0 ceph-mon[74360]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 20 15:53:27 compute-0 ceph-mon[74360]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 20 15:53:27 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/1876167769' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Jan 20 15:53:27 compute-0 ceph-mon[74360]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 20 15:53:27 compute-0 ceph-mon[74360]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 20 15:53:27 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/287765366' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Jan 20 15:53:27 compute-0 ceph-mon[74360]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 20 15:53:27 compute-0 ceph-mon[74360]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 20 15:53:27 compute-0 ceph-mon[74360]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 20 15:53:27 compute-0 ceph-mon[74360]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 20 15:53:27 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/1931971985' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Jan 20 15:53:27 compute-0 ceph-mon[74360]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 20 15:53:27 compute-0 ceph-mon[74360]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 20 15:53:27 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/4139354330' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Jan 20 15:53:28 compute-0 nova_compute[250018]: 2026-01-20 15:53:28.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:53:28 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs dump"} v 0) v1
Jan 20 15:53:28 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/883441442' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Jan 20 15:53:28 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:53:28 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 15:53:28 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:53:28.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 15:53:28 compute-0 nova_compute[250018]: 2026-01-20 15:53:28.225 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:53:28 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 20 15:53:28 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 20 15:53:28 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 20 15:53:28 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 20 15:53:28 compute-0 ceph-mgr[74653]: log_channel(audit) log [DBG] : from='client.50845 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:28 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs ls"} v 0) v1
Jan 20 15:53:28 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1440796443' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Jan 20 15:53:28 compute-0 ceph-mon[74360]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 15:53:28 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:53:28 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:53:28 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:53:28.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:53:29 compute-0 ceph-mon[74360]: from='client.51041 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 15:53:29 compute-0 ceph-mon[74360]: from='client.50812 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 15:53:29 compute-0 ceph-mon[74360]: pgmap v3938: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:53:29 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/1807707528' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Jan 20 15:53:29 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/1290319706' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Jan 20 15:53:29 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/883441442' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Jan 20 15:53:29 compute-0 ceph-mon[74360]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 20 15:53:29 compute-0 ceph-mon[74360]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 20 15:53:29 compute-0 ceph-mon[74360]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 20 15:53:29 compute-0 ceph-mon[74360]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 20 15:53:29 compute-0 ceph-mon[74360]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 20 15:53:29 compute-0 ceph-mon[74360]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 20 15:53:29 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/1440796443' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Jan 20 15:53:29 compute-0 ceph-mon[74360]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 20 15:53:29 compute-0 ceph-mon[74360]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 20 15:53:29 compute-0 ceph-mon[74360]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 20 15:53:29 compute-0 ceph-mon[74360]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 20 15:53:29 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/1389273415' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Jan 20 15:53:29 compute-0 ceph-mgr[74653]: log_channel(audit) log [DBG] : from='client.37845 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:29 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds stat"} v 0) v1
Jan 20 15:53:29 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3370524222' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Jan 20 15:53:29 compute-0 ceph-mgr[74653]: log_channel(cluster) log [DBG] : pgmap v3939: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:53:29 compute-0 nova_compute[250018]: 2026-01-20 15:53:29.824 250022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 15:53:29 compute-0 ceph-mgr[74653]: log_channel(audit) log [DBG] : from='client.51179 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:29 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump"} v 0) v1
Jan 20 15:53:29 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2699355290' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Jan 20 15:53:30 compute-0 ceph-mon[74360]: from='client.50845 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:30 compute-0 ceph-mon[74360]: from='client.37845 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:30 compute-0 ceph-mon[74360]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 20 15:53:30 compute-0 ceph-mon[74360]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 20 15:53:30 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/281162553' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Jan 20 15:53:30 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/3370524222' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Jan 20 15:53:30 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/2933957023' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Jan 20 15:53:30 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/2148008205' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Jan 20 15:53:30 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/2699355290' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Jan 20 15:53:30 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:53:30 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:53:30 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.101 - anonymous [20/Jan/2026:15:53:30.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:53:30 compute-0 ceph-mgr[74653]: log_channel(audit) log [DBG] : from='client.37872 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:30 compute-0 ceph-mgr[74653]: log_channel(audit) log [DBG] : from='client.50932 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:30 compute-0 radosgw[93153]: ====== starting new request req=0x7fb5a77fe6f0 =====
Jan 20 15:53:30 compute-0 radosgw[93153]: ====== req done req=0x7fb5a77fe6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 15:53:30 compute-0 radosgw[93153]: beast: 0x7fb5a77fe6f0: 192.168.122.100 - anonymous [20/Jan/2026:15:53:30.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 15:53:30 compute-0 ceph-mon[74360]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls"} v 0) v1
Jan 20 15:53:30 compute-0 ceph-mon[74360]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/672864250' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Jan 20 15:53:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:53:30.829 160071 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 15:53:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:53:30.830 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 15:53:30 compute-0 ovn_metadata_agent[160049]: 2026-01-20 15:53:30.830 160071 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 15:53:31 compute-0 nova_compute[250018]: 2026-01-20 15:53:31.051 250022 DEBUG oslo_service.periodic_task [None req-b3017b4c-c168-4f1d-8edb-45dc89d4f6f7 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 15:53:31 compute-0 ceph-mon[74360]: pgmap v3939: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 20 15:53:31 compute-0 ceph-mon[74360]: from='client.51179 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:31 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/608709473' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Jan 20 15:53:31 compute-0 ceph-mon[74360]: from='client.37872 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:31 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/2207446485' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Jan 20 15:53:31 compute-0 ceph-mon[74360]: from='client.50932 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:31 compute-0 ceph-mon[74360]: from='client.? 192.168.122.100:0/672864250' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Jan 20 15:53:31 compute-0 ceph-mon[74360]: from='client.? 192.168.122.101:0/684520784' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Jan 20 15:53:31 compute-0 ceph-mon[74360]: from='client.? 192.168.122.102:0/3125534778' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Jan 20 15:53:31 compute-0 ceph-mgr[74653]: log_channel(audit) log [DBG] : from='client.37890 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 15:53:31 compute-0 ceph-mgr[74653]: log_channel(audit) log [DBG] : from='client.37902 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
